Paper List
- Macroscopic Dominance from Microscopic Extremes: Symmetry Breaking in Spatial Competition
  This paper addresses the fundamental question of how microscopic stochastic advantages in spatial exploration translate into macroscopic resource domi...
- Linear Readout of Neural Manifolds with Continuous Variables
  This paper addresses the core challenge of quantifying how the geometric structure of high-dimensional neural population activity (neural manifolds) determines the efficiency of linearly decoding continuous variables, amidst complex neural variability.
- Theory of Cell Body Lensing and Phototaxis Sign Reversal in “Eyeless” Mutants of Chlamydomonas
  This paper solves the core puzzle of how eyeless mutants of Chlamydomonas exhibit reversed phototaxis by quantitatively modeling the competition betwe...
- Cross-Species Transfer Learning for Electrophysiology-to-Transcriptomics Mapping in Cortical GABAergic Interneurons
  This paper addresses the challenge of predicting transcriptomic identity from electrophysiological recordings in human cortical interneurons, where li...
- Uncovering statistical structure in large-scale neural activity with Restricted Boltzmann Machines
  This paper addresses the core challenge of modeling large-scale neural population activity (1500-2000 neurons) with interpretable higher-order interac...
- Realizing Common Random Numbers: Event-Keyed Hashing for Causally Valid Stochastic Models
  This paper addresses the critical problem that standard stateful PRNG implementations in agent-based models violate causal validity by making random d...
- A Standardized Framework for Evaluating Gene Expression Generative Models
  This paper addresses the critical lack of standardized evaluation protocols for single-cell gene expression generative models, where inconsistent metr...
- Single Molecule Localization Microscopy Challenge: A Biologically Inspired Benchmark for Long-Sequence Modeling
  This paper addresses the core challenge of evaluating state-space models on biologically realistic, sparse, and stochastic temporal processes, which a...
Linear Readout of Neural Manifolds with Continuous Variables
Department of Physics and Kempner Institute, Harvard University | Center for Computational Neuroscience, Flatiron Institute
30-Second Read
IN SHORT: This paper addresses the core challenge of quantifying how the geometric structure of high-dimensional neural population activity (neural manifolds) determines the efficiency of linearly decoding continuous variables, amidst complex neural variability.
Core Innovations
- Theory: Develops the first statistical-mechanical theory of "regression capacity," extending manifold capacity theory from discrete classification to continuous regression problems.
- Methodology: Derives closed-form analytical formulas for regression capacity in synthetic models (e.g., spherical manifolds) and provides an instance-based estimator applicable to finite, real-world datasets.
- Biology: Applies the framework to primate visual cortex data, quantitatively demonstrating a monotonic increase in linear decodability for object pose parameters (size, position) along the ventral stream (pixels → V4 → IT).
Main Conclusions
- For synthetic spherical manifold models, regression capacity α decreases with increasing manifold dimensionality D and equivalent radius R_equiv (e.g., capacity drops as D increases for fixed R_equiv).
- In the mean-field model for point-like manifolds, capacity depends solely on the asymptotically equivalent tolerance ε_equiv = ε/(σ√(1-ρ)), where σ scales labels and ρ controls label correlations.
- Application to macaque ventral stream data shows regression capacity for object size and position increases (critical dimension N_crit decreases) from early (pixels) to late (IT) processing stages, indicating more efficient geometric organization for linear readout.
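The equivalent-tolerance relation above, ε_equiv = ε/(σ√(1-ρ)), can be sketched as a small helper. This is a minimal illustration of the formula only; the function name and example values are hypothetical, and the paper's full capacity calculation is not reproduced here.

```python
import math

def equivalent_tolerance(eps, sigma, rho):
    """Equivalent tolerance eps_equiv = eps / (sigma * sqrt(1 - rho))
    for point-like manifolds in the mean-field model.

    eps   : decoding error tolerance on the continuous label
    sigma : label scale parameter
    rho   : label correlation parameter, must satisfy rho < 1
    """
    if rho >= 1:
        raise ValueError("rho must be < 1")
    return eps / (sigma * math.sqrt(1.0 - rho))

# Uncorrelated, unit-scale labels: eps_equiv reduces to eps itself.
print(equivalent_tolerance(0.1, 1.0, 0.0))   # 0.1
# Stronger label correlations shrink sqrt(1 - rho) and inflate eps_equiv,
# loosening the effective precision requirement on the linear readout.
print(equivalent_tolerance(0.1, 1.0, 0.75))  # 0.2
```

Per the result above, capacity in this regime depends only on this single combined quantity, so any (ε, σ, ρ) triples with the same ε_equiv yield the same capacity.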
Abstract: Brains and artificial neural networks compute with continuous variables such as object position or stimulus orientation. However, the complex variability in neural responses makes it difficult to link internal representational structure to task performance. We develop a statistical-mechanical theory of regression capacity that relates linear decoding efficiency of continuous variables to geometric properties of neural manifolds. Our theory handles complex neural variability and applies to real data, revealing increasing capacity for decoding object position and size along the monkey visual stream.