Paper List
-
STAR-GO: Improving Protein Function Prediction by Learning to Hierarchically Integrate Ontology-Informed Semantic Embeddings
This paper addresses the core challenge of generalizing protein function prediction to unseen or newly introduced Gene Ontology (GO) terms by overcomi...
-
Incorporating indel channels into average-case analysis of seed-chain-extend
This paper addresses the core pain point of bridging the theoretical gap for the widely used seed-chain-extend heuristic by providing the first rigoro...
-
Competition, stability, and functionality in excitatory-inhibitory neural circuits
This paper addresses the core challenge of extending interpretable energy-based frameworks to biologically realistic asymmetric neural networks, where...
-
Enhancing Clinical Note Generation with ICD-10, Clinical Ontology Knowledge Graphs, and Chain-of-Thought Prompting Using GPT-4
This paper addresses the core challenge of generating accurate and clinically relevant patient notes from sparse inputs (ICD codes and basic demograph...
-
Learning From Limited Data and Feedback for Cell Culture Process Monitoring: A Comparative Study
This paper addresses the core challenge of developing accurate real-time bioprocess monitoring soft sensors under severe data constraints: limited his...
-
Cell-cell communication inference and analysis: biological mechanisms, computational approaches, and future opportunities
This review addresses the critical need for a systematic framework to navigate the rapidly expanding landscape of computational methods for inferring ...
-
Generating a Contact Matrix for Aged Care Settings in Australia: an agent-based model study
This study addresses the critical gap in understanding heterogeneous contact patterns within aged care facilities, where existing population-level con...
-
Emergent Spatiotemporal Dynamics in Large-Scale Brain Networks with Next Generation Neural Mass Models
This work addresses the core challenge of understanding how complex, brain-wide spatiotemporal patterns emerge from the interaction of biophysically d...
-
Linear Readout of Neural Manifolds with Continuous Variables
Department of Physics and Kempner Institute, Harvard University | Center for Computational Neuroscience, Flatiron Institute
30-Second Read
IN SHORT: This paper addresses the core challenge of quantifying how the geometric structure of high-dimensional neural population activity (neural manifolds) determines the efficiency of linearly decoding continuous variables, amidst complex neural variability.
Key Innovations
- Theory: Develops the first statistical-mechanical theory of 'regression capacity,' extending manifold capacity theory from discrete classification to continuous regression problems.
- Methodology: Derives closed-form analytical formulas for regression capacity in synthetic models (e.g., spherical manifolds) and provides an instance-based estimator applicable to finite, real-world datasets.
- Biology: Applies the framework to primate visual cortex data, quantitatively demonstrating a monotonic increase in linear decodability for object pose parameters (size, position) along the ventral stream (pixels → V4 → IT).
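To make the linear-readout setting concrete, here is a minimal sketch of decoding a continuous variable from synthetic population activity with a ridge readout. This is an illustration of the problem the theory addresses, not the paper's capacity estimator; all names, the cosine tuning model, and the noise level are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N neurons, P stimuli, and a continuous latent
# variable (e.g., object position) drawn from [-1, 1].
N, P = 200, 500
latent = rng.uniform(-1.0, 1.0, size=P)

# Synthetic "neural manifold": random smooth tuning to the latent
# plus trial-to-trial noise, giving a noisy 1-D manifold in R^N.
phases = rng.uniform(0.0, 2.0 * np.pi, size=N)
freqs = rng.uniform(0.5, 2.0, size=N)
responses = np.cos(np.outer(latent, freqs) + phases)        # shape (P, N)
responses += 0.1 * rng.standard_normal(responses.shape)     # noise

# Linear readout with a ridge penalty (closed form):
#   w = (X^T X + lam * I)^{-1} X^T y
lam = 1e-2
X, y = responses, latent
w = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
y_hat = X @ w

# Decoding quality: fraction of latent variance explained.
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"linear readout R^2 = {r2:.3f}")
```

How efficiently such a readout works, as a function of the manifold's geometry (dimensionality, radius, curvature), is precisely what the regression-capacity theory quantifies.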
Main Conclusions
- For synthetic spherical manifold models, regression capacity α decreases with increasing manifold dimensionality D and equivalent radius R_equiv (e.g., capacity drops as D increases for fixed R_equiv).
- In the mean-field model for point-like manifolds, capacity depends solely on the asymptotically equivalent tolerance ε_equiv = ε/(σ√(1-ρ)), where σ scales labels and ρ controls label correlations.
- Application to macaque ventral stream data shows regression capacity for object size and position increases (critical dimension N_crit decreases) from early (pixels) to late (IT) processing stages, indicating more efficient geometric organization for linear readout.
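As a quick worked check of the rescaling in the point-like mean-field result above, the equivalent tolerance can be computed directly from its definition; the numerical values below are made up for illustration.

```python
import math

# Equivalent tolerance for the point-like mean-field model:
#   eps_equiv = eps / (sigma * sqrt(1 - rho))
# where sigma scales the labels and rho controls label correlations.
# Values are illustrative only.
eps, sigma, rho = 0.1, 2.0, 0.75
eps_equiv = eps / (sigma * math.sqrt(1.0 - rho))
print(f"eps_equiv = {eps_equiv:.3f}")  # 0.1 / (2.0 * 0.5) = 0.100
```

Note that as label correlations rho approach 1, eps_equiv grows, so capacity in this regime depends only on this single rescaled tolerance rather than on eps, sigma, and rho separately.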
Abstract: Brains and artificial neural networks compute with continuous variables such as object position or stimulus orientation. However, the complex variability in neural responses makes it difficult to link internal representational structure to task performance. We develop a statistical-mechanical theory of regression capacity that relates linear decoding efficiency of continuous variables to geometric properties of neural manifolds. Our theory handles complex neural variability and applies to real data, revealing increasing capacity for decoding object position and size along the monkey visual stream.