Paper List
-
SpikGPT: A High-Accuracy and Interpretable Spiking Attention Framework for Single-Cell Annotation
This paper addresses the core challenge of robust single-cell annotation across heterogeneous datasets with batch effects and the critical need to ide...
-
Unlocking hidden biomolecular conformational landscapes in diffusion models at inference time
This paper addresses the core challenge of efficiently and accurately sampling the conformational landscape of biomolecules from diffusion-based struc...
-
Personalized optimization of pediatric HD-tDCS for dose consistency and target engagement
This paper addresses the critical limitation of one-size-fits-all HD-tDCS protocols in pediatric populations by developing a personalized optimization...
-
Realistic Transition Paths for Large Biomolecular Systems: A Langevin Bridge Approach
This paper addresses the core challenge of generating physically realistic and computationally efficient transition paths between distinct protein con...
-
Consistent Synthetic Sequences Unlock Structural Diversity in Fully Atomistic De Novo Protein Design
This paper addresses the core pain point of low sequence-structure alignment in existing synthetic datasets (e.g., AFDB), which severely limits the pe...
-
MoRSAIK: Sequence Motif Reactor Simulation, Analysis and Inference Kit in Python
This work addresses the computational bottleneck in simulating prebiotic RNA reactor dynamics by developing a Python package that tracks sequence moti...
-
On the Approximation of Phylogenetic Distance Functions by Artificial Neural Networks
This paper addresses the core challenge of developing computationally efficient and scalable neural network architectures that can learn accurate phyl...
-
EcoCast: A Spatio-Temporal Model for Continual Biodiversity and Climate Risk Forecasting
This paper addresses the critical bottleneck in conservation: the lack of timely, high-resolution, near-term forecasts of species distribution shifts ...
Linear Readout of Neural Manifolds with Continuous Variables
Department of Physics and Kempner Institute, Harvard University | Center for Computational Neuroscience, Flatiron Institute
30-Second Read
IN SHORT: This paper addresses the core challenge of quantifying how the geometric structure of high-dimensional neural population activity (neural manifolds) determines the efficiency of linearly decoding continuous variables, amidst complex neural variability.
Key Innovations
- Theory: Develops the first statistical-mechanical theory of "regression capacity," extending manifold capacity theory from discrete classification to continuous regression problems.
- Methodology: Derives closed-form analytical formulas for regression capacity in synthetic models (e.g., spherical manifolds) and provides an instance-based estimator applicable to finite, real-world datasets.
- Biology: Applies the framework to primate visual cortex data, quantitatively demonstrating a monotonic increase in linear decodability of object pose parameters (size, position) along the ventral stream (pixels → V4 → IT).
Main Conclusions
- For synthetic spherical manifold models, regression capacity α decreases with increasing manifold dimensionality D and equivalent radius R_equiv (e.g., capacity drops as D increases for fixed R_equiv).
- In the mean-field model for point-like manifolds, capacity depends solely on the asymptotically equivalent tolerance ε_equiv = ε/(σ√(1-ρ)), where σ scales labels and ρ controls label correlations.
- Application to macaque ventral stream data shows regression capacity for object size and position increases (critical dimension N_crit decreases) from early (pixels) to late (IT) processing stages, indicating more efficient geometric organization for linear readout.
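The mean-field result for point-like manifolds above reduces the relevant parameters to a single quantity, the equivalent tolerance ε_equiv = ε/(σ√(1−ρ)). A minimal sketch of that relation (the function name is illustrative, not from the paper):

```python
import math

def equivalent_tolerance(eps, sigma, rho):
    """Equivalent tolerance eps_equiv = eps / (sigma * sqrt(1 - rho)).

    Per the paper's point-like mean-field result, sigma rescales the
    labels and rho controls label correlations; capacity depends only
    on this combined quantity.
    """
    if not (0.0 <= rho < 1.0):
        raise ValueError("rho must lie in [0, 1)")
    return eps / (sigma * math.sqrt(1.0 - rho))

# Doubling the label scale while raising correlation to 0.75 leaves
# eps_equiv unchanged here: 0.1 / (2 * sqrt(0.25)) = 0.1.
print(equivalent_tolerance(0.1, 1.0, 0.0))   # -> 0.1
print(equivalent_tolerance(0.1, 2.0, 0.75))  # -> 0.1
```

The example illustrates the paper's point that distinct (ε, σ, ρ) settings with the same ε_equiv are predicted to yield the same capacity.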
Abstract: Brains and artificial neural networks compute with continuous variables such as object position or stimulus orientation. However, the complex variability in neural responses makes it difficult to link internal representational structure to task performance. We develop a statistical-mechanical theory of regression capacity that relates the linear decoding efficiency of continuous variables to geometric properties of neural manifolds. Our theory handles complex neural variability and applies to real data, revealing increasing capacity for decoding object position and size along the monkey visual stream.