Condensed Matter Seminar: Efficient neural computations in the face of noise
Jonathan Kadmon, Stanford University
The living brain faces a formidable task: performing reliable computations in the face of intrinsic stochasticity in individual neurons, external nuisance inputs, imprecisely specified connectivity, and nonnegligible delays in synaptic transmission. Overcoming such difficulties in distributed sensorimotor circuits can be essential for animal survival. I will present a theory of efficient coding that enables high-fidelity encoding by noisy neural circuits. The underlying mechanism is strong recurrent feedback that suppresses noise along the direction of a linear readout, while the overall activity remains heterogeneous, similar to recorded activity in behaving animals. Using finite-size corrections to mean-field theory, I study the readout error as a function of the feedback, noise, and synaptic delays in the network. As the feedback efficacy increases, noise is reduced, but oscillations appear due to the delays. The trade-off between random fluctuations and coherent oscillations results in an optimal feedback strength, with a minimal decoding error, which depends on the delays. This result offers a potential explanation for the prominent oscillations observed in brain activity.
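The trade-off can be illustrated with a toy linear rate network: delayed rank-1 feedback along the readout direction suppresses noise in the decoded signal, but too-strong feedback excites delay-induced oscillations. This is a minimal sketch under invented parameters (network size, delay, feedback strengths), not the model analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# All parameters below are invented for illustration.
N, dt, steps = 200, 0.1, 800
delay = 2                  # synaptic delay, in time steps (assumption)
s = 1.0                    # scalar signal carried by the readout
w = rng.normal(size=N)
w /= np.linalg.norm(w)     # unit-norm linear readout (hypothetical)

def decoding_error(g, sigma=0.5):
    """Mean squared readout error for recurrent feedback strength g."""
    x = np.zeros((steps, N))
    for t in range(1, steps):
        # delayed error signal, fed back along the readout direction only
        err = (w @ x[t - 1 - delay] - s) if t > delay else -s
        drift = -x[t - 1] - g * err * w
        x[t] = x[t - 1] + dt * drift + np.sqrt(dt) * sigma * rng.normal(size=N)
    r = x[steps // 2:] @ w              # readout after the transient
    return float(np.mean((r - s) ** 2))

# Moderate feedback suppresses readout noise; very strong feedback drives
# delay-induced oscillations unstable, and the error grows again.
errs = {g: decoding_error(g) for g in (0.0, 4.0, 12.0)}
```

Sweeping the feedback strength g in this sketch reproduces the qualitative picture: the error first decreases with g, then blows up once the delayed loop becomes oscillatory, so an intermediate g is optimal.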
In the second part of the talk, I will address the difficulty of extracting insights from noisy data. Tensor Component Analysis (TCA) is an unsupervised dimensionality reduction method that can extract meaningful patterns from multi-modal data and holds great potential for neural data analysis. However, current TCA algorithms are inefficient when the signal-to-noise ratio (SNR) in the data is low. I develop and analyze a new TCA algorithm based on Approximate Message Passing that successfully performs Bayesian inference on arbitrary tensors at low SNR. The algorithm outperforms non-Bayesian methods and requires far fewer data points. Using dynamic mean-field theory, I derive the state evolution equations and show two phase transitions in the inference problem, separating the easy, hard, and impossible regimes. The transitions depend on the noise level, the amount of data, the tensor's shape, and the Bayesian priors.
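The low-SNR difficulty can be seen already in the simplest non-Bayesian baseline: fitting a planted rank-1 tensor by alternating least squares. This sketch is not the AMP algorithm of the talk; the tensor sizes and noise levels are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Planted rank-1 tensor: signal_ijk = u_i * v_j * w_k (sizes are arbitrary).
I, J, K = 30, 40, 50
u, v, w = rng.normal(size=I), rng.normal(size=J), rng.normal(size=K)
signal = np.einsum('i,j,k->ijk', u, v, w)

def rank1_als(T, iters=50):
    """Fit T ~ a (x) b (x) c by alternating least squares from random init."""
    a, b, c = rng.normal(size=I), rng.normal(size=J), rng.normal(size=K)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

def overlap(a, u):
    """|cosine similarity| between recovered and planted factor."""
    return abs(a @ u) / (np.linalg.norm(a) * np.linalg.norm(u))

noise = rng.normal(size=(I, J, K))
a_hi, _, _ = rank1_als(signal + 0.25 * noise)   # high SNR
a_lo, _, _ = rank1_als(signal + 25.0 * noise)   # very low SNR
overlap_hi, overlap_lo = overlap(a_hi, u), overlap(a_lo, u)
```

At high SNR the recovered factor aligns closely with the planted one, while at very low SNR the same procedure stays near its random initialization, a toy analogue of the hard and impossible regimes mapped out by the state evolution analysis.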
Seminar organizer: Prof. Eran Sela