Paper List
-
The Effective Reproduction Number in the Kermack-McKendrick model with age of infection and reinfection
This paper addresses the challenge of accurately estimating the time-varying effective reproduction number ℛ(t) in epidemics by incorporating two crit...
-
Covering Relations in the Poset of Combinatorial Neural Codes
This work addresses the core challenge of navigating the complex poset structure of neural codes to systematically test the conjecture linking convex ...
-
Collective adsorption of pheromones at the water-air interface
This paper addresses the core challenge of understanding how amphiphilic pheromones, previously assumed to be transported in the gas phase, can be sta...
-
pHapCompass: Probabilistic Assembly and Uncertainty Quantification of Polyploid Haplotype Phase
This paper addresses the core challenge of accurately assembling polyploid haplotypes from sequencing data, where read assignment ambiguity and an exp...
-
Setting up for failure: automatic discovery of the neural mechanisms of cognitive errors
This paper addresses the core challenge of automating the discovery of biologically plausible recurrent neural network (RNN) dynamics that can replica...
-
Influence of Object Affordance on Action Language Understanding: Evidence from Dynamic Causal Modeling Analysis
This study addresses the core challenge of moving beyond correlational evidence to establish the *causal direction* and *temporal dynamics* of how obj...
-
Revealing stimulus-dependent dynamics through statistical complexity
This paper addresses the core challenge of detecting stimulus-specific patterns in neural population dynamics that remain hidden to traditional variab...
-
Exactly Solvable Population Model with Square-Root Growth Noise and Cell-Size Regulation
This paper addresses the fundamental gap in understanding how microscopic growth fluctuations, specifically those with size-dependent (square-root) no...
-
Linear Readout of Neural Manifolds with Continuous Variables
Department of Physics and Kempner Institute, Harvard University | Center for Computational Neuroscience, Flatiron Institute
30-Second Read
IN SHORT: This paper addresses the core challenge of quantifying how the geometric structure of high-dimensional neural population activity (neural manifolds) determines the efficiency of linearly decoding continuous variables, amidst complex neural variability.
Key Innovations
- Theory: Develops the first statistical-mechanical theory of 'regression capacity,' extending manifold capacity theory from discrete classification to continuous regression problems.
- Methodology: Derives closed-form analytical formulas for regression capacity in synthetic models (e.g., spherical manifolds) and provides an instance-based estimator applicable to finite, real-world datasets.
- Biology: Applies the framework to primate visual cortex data, quantitatively demonstrating a monotonic increase in the linear decodability of object pose parameters (size, position) along the ventral stream (pixels → V4 → IT).
Main Findings
- For synthetic spherical manifold models, regression capacity α decreases with increasing manifold dimensionality D and equivalent radius R_equiv (e.g., capacity drops as D increases for fixed R_equiv).
- In the mean-field model for point-like manifolds, capacity depends solely on the asymptotically equivalent tolerance ε_equiv = ε/(σ√(1-ρ)), where σ scales labels and ρ controls label correlations.
- Application to macaque ventral stream data shows regression capacity for object size and position increases (critical dimension N_crit decreases) from early (pixels) to late (IT) processing stages, indicating more efficient geometric organization for linear readout.
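The linear-readout idea behind these findings can be illustrated with a toy simulation (a minimal sketch, not the paper's regression-capacity estimator): a scalar label is embedded in N-dimensional "neural" space via a random linear direction plus isotropic noise, and we measure the fraction of held-out points a least-squares readout decodes within a tolerance ε. All parameters here (noise level, tolerance, neuron counts) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def decodability(n_neurons, n_samples, noise, eps, rng):
    """Fraction of held-out points whose linear readout error is within eps.

    Toy stand-in for a neural manifold: a random linear embedding of a
    continuous label plus isotropic Gaussian noise.
    """
    y = rng.uniform(-1.0, 1.0, size=n_samples)        # continuous label
    w_true = rng.normal(size=n_neurons)               # random embedding axis
    X = np.outer(y, w_true) + noise * rng.normal(size=(n_samples, n_neurons))
    half = n_samples // 2                             # train/test split
    w, *_ = np.linalg.lstsq(X[:half], y[:half], rcond=None)
    err = np.abs(X[half:] @ w - y[half:])
    return np.mean(err <= eps)

# With more neurons carrying the same label, the linear readout gets easier,
# loosely mirroring the increase in decodability along the ventral stream.
lo = decodability(10, 400, noise=0.5, eps=0.1, rng=rng)
hi = decodability(100, 400, noise=0.5, eps=0.1, rng=rng)
print(f"decodable fraction: {lo:.2f} -> {hi:.2f}")
```

Here higher decodable fraction plays the role of higher regression capacity: the same tolerance ε is met by a linear readout more often when the label is represented more redundantly across the population.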
Abstract: Brains and artificial neural networks compute with continuous variables such as object position or stimulus orientation. However, the complex variability in neural responses makes it difficult to link internal representational structure to task performance. We develop a statistical-mechanical theory of regression capacity that relates linear decoding efficiency of continuous variables to geometric properties of neural manifolds. Our theory handles complex neural variability and applies to real data, revealing increasing capacity for decoding object position and size along the monkey visual stream.