Paper List
-
Discovery of a Hematopoietic Manifold in scGPT Yields a Method for Extracting Performant Algorithms from Biological Foundation Model Internals
This work addresses the core challenge of extracting reusable, interpretable, and high-performance biological algorithms from the opaque internal repr...
-
MS2MetGAN: Latent-space adversarial training for metabolite–spectrum matching in MS/MS database search
This paper addresses the critical bottleneck in metabolite identification: the generation of high-quality negative training samples that are structura...
-
Toward Robust, Reproducible, and Widely Accessible Intracranial Language Brain-Computer Interfaces: A Comprehensive Review of Neural Mechanisms, Hardware, Algorithms, Evaluation, Clinical Pathways and Future Directions
This review addresses the core challenge of fragmented and heterogeneous evidence that hinders the clinical translation of intracranial language BCIs,...
-
Less Is More in Chemotherapy of Breast Cancer
By incorporating cell-cycle time delays and competition terms, this work addresses the oversimplification of existing tumor–immune models, enabling quantitative comparison of chemotherapy regimens.
-
Fold-CP: A Context Parallelism Framework for Biomolecular Modeling
This paper addresses the critical bottleneck of GPU memory limitations that restrict AlphaFold 3-like models to processing only a few thousand residue...
-
Open Biomedical Knowledge Graphs at Scale: Construction, Federation, and AI Agent Access with Samyama Graph Database
This paper addresses the core pain point of fragmented biomedical data by constructing and federating large-scale, open knowledge graphs to enable sea...
-
Predictive Analytics for Foot Ulcers Using Time-Series Temperature and Pressure Data
This paper addresses the critical need for continuous, real-time monitoring of diabetic foot health by developing an unsupervised anomaly detection fr...
-
Hypothesis-Based Particle Detection for Accurate Nanoparticle Counting and Digital Diagnostics
This paper addresses the core challenge of achieving accurate, interpretable, and training-free nanoparticle counting in digital diagnostic assays, wh...
Linear Readout of Neural Manifolds with Continuous Variables
Department of Physics and Kempner Institute, Harvard University | Center for Computational Neuroscience, Flatiron Institute
30-Second Read
IN SHORT: This paper addresses the core challenge of quantifying how the geometric structure of high-dimensional neural population activity (neural manifolds) determines the efficiency of linearly decoding continuous variables, amidst complex neural variability.
Key Innovations
- Theory: Develops the first statistical-mechanical theory of "regression capacity," extending manifold capacity theory from discrete classification to continuous regression problems.
- Methodology: Derives closed-form analytical formulas for regression capacity in synthetic models (e.g., spherical manifolds) and provides an instance-based estimator applicable to finite, real-world datasets.
- Biology: Applies the framework to primate visual cortex data, quantitatively demonstrating a monotonic increase in linear decodability of object pose parameters (size, position) along the ventral stream (pixels → V4 → IT).
Main Conclusions
- For synthetic spherical manifold models, regression capacity α decreases with increasing manifold dimensionality D and equivalent radius R_equiv (e.g., capacity drops as D increases for fixed R_equiv).
- In the mean-field model for point-like manifolds, capacity depends solely on the asymptotically equivalent tolerance ε_equiv = ε/(σ√(1-ρ)), where σ scales labels and ρ controls label correlations.
- Application to macaque ventral stream data shows regression capacity for object size and position increases (critical dimension N_crit decreases) from early (pixels) to late (IT) processing stages, indicating more efficient geometric organization for linear readout.
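The equivalent-tolerance relation quoted above, ε_equiv = ε/(σ√(1−ρ)), can be illustrated with a minimal numeric sketch (an illustration of the stated formula only, not the paper's code; function and variable names are ours):

```python
import math

def equivalent_tolerance(eps: float, sigma: float, rho: float) -> float:
    """Asymptotically equivalent tolerance for point-like manifolds,
    per the stated relation eps_equiv = eps / (sigma * sqrt(1 - rho)),
    where sigma scales the labels and rho controls label correlations."""
    return eps / (sigma * math.sqrt(1.0 - rho))

# Uncorrelated, unit-scale labels leave the tolerance unchanged:
print(equivalent_tolerance(0.1, 1.0, 0.0))   # 0.1
# Stronger label correlations (rho -> 1) inflate the equivalent tolerance:
print(equivalent_tolerance(0.1, 1.0, 0.75))  # 0.2
```

In the paper's mean-field model for point-like manifolds, capacity depends on the problem only through this single combined quantity, so any two (ε, σ, ρ) settings with the same ε_equiv yield the same regression capacity.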
Abstract: Brains and artificial neural networks compute with continuous variables such as object position or stimulus orientation. However, the complex variability in neural responses makes it difficult to link internal representational structure to task performance. We develop a statistical-mechanical theory of regression capacity that relates the linear decoding efficiency of continuous variables to geometric properties of neural manifolds. Our theory handles complex neural variability and applies to real data, revealing increasing capacity for decoding object position and size along the monkey visual stream.