Paper List
-
Developing the PsyCogMetrics™ AI Lab to Evaluate Large Language Models and Advance Cognitive Science
This paper addresses the critical gap between sophisticated LLM evaluation needs and the lack of accessible, scientifically rigorous platforms that in...
-
Equivalence of approximation by networks of single- and multi-spike neurons
This paper resolves the fundamental question of whether single-spike spiking neural networks (SNNs) are inherently less expressive than multi-spike SN...
-
The neuroscience of transformers
Proposes a novel computational mapping between the Transformer architecture and cortical-column microcircuits, bridging modern AI and neuroscience.
-
Framing local structural identifiability and observability in terms of parameter-state symmetries
This paper addresses the core challenge of systematically determining which parameters and states in a mechanistic ODE model can be uniquely inferred ...
-
Leveraging Phytolith Research using Artificial Intelligence
This paper addresses the critical bottleneck in phytolith research by automating the labor-intensive manual microscopy process through a multimodal AI...
-
Neural network-based encoding in free-viewing fMRI with gaze-aware models
This paper addresses the core challenge of building computationally efficient and ecologically valid brain encoding models for naturalistic vision by ...
-
Scalable DNA Ternary Full Adder Enabled by a Competitive Blocking Circuit
This paper addresses the core bottleneck of carry information attenuation and limited computational scale in DNA binary adders by introducing a scalab...
-
ELISA: An Interpretable Hybrid Generative AI Agent for Expression-Grounded Discovery in Single-Cell Genomics
This paper addresses the critical bottleneck of translating high-dimensional single-cell transcriptomic data into interpretable biological hypotheses ...
Translating Measures onto Mechanisms: The Cognitive Relevance of Higher-Order Information
University of Amsterdam | University of Cambridge | Queen Mary University of London | Imperial College London | University of Vermont | Indiana University | University of Glasgow | Universidad Catolica del Maule | University of Helsinki
30-Second Summary
IN SHORT: This review addresses the core challenge of translating abstract higher-order information theory metrics (e.g., synergy, redundancy) into defensible, mechanistic explanations for cognitive function in neuroscience.
Key Innovations
- Methodology: Systematizes Shannon-based multivariate metrics (e.g., Total Correlation, Dual Total Correlation, O-information) into a unified framework defined by two independent axes: interaction strength and redundancy-synergy balance.
- Theory: Proposes that a balanced layering of synergistic integration and redundant broadcasting optimizes multiscale complexity, formalizing a fundamental computation-communication tradeoff in neural systems.
- Methodology: Provides a pragmatic guide for applying Partial Information Decomposition (PID) to neural data, emphasizing the critical conceptual and practical consequences of choosing a specific redundancy function.
Main Conclusions
- Higher-order dependence in multivariate systems can be parsimoniously characterized by two largely independent axes: interaction strength (e.g., quantified by S-information) and redundancy-synergy balance (e.g., quantified by O-information).
- Prototypical systems demonstrate this duality: a purely redundant COPY distribution yields O-information = +1 bit, while a purely synergistic XOR distribution yields O-information = -1 bit, despite both having an S-information of 3 bits.
- The balanced integration of synergistic (head-to-head) and redundant (tail-to-tail) information motifs is proposed as a mechanism optimizing multiscale complexity, formalizing a tradeoff critical for cognitive function.
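The COPY/XOR figures above can be verified directly from the definitions O-information = TC − DTC and S-information = TC + DTC, where TC = Σ H(Xᵢ) − H(X) and DTC = H(X) − Σ H(Xᵢ | X₋ᵢ). A minimal sketch, not from the paper (function names and the dict-based distribution encoding are my own):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def marginal(dist, idx):
    """Marginal distribution over the variables at positions idx."""
    m = {}
    for state, p in dist.items():
        key = tuple(state[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return list(m.values())

def o_information(dist, n):
    """Return (O-information, S-information) of an n-variable joint dist."""
    h_joint = entropy(list(dist.values()))
    h_marg = sum(entropy(marginal(dist, (i,))) for i in range(n))
    h_rest = sum(entropy(marginal(dist, tuple(j for j in range(n) if j != i)))
                 for i in range(n))
    tc = h_marg - h_joint              # total correlation
    dtc = h_rest - (n - 1) * h_joint   # dual total correlation
    return tc - dtc, tc + dtc

# COPY: X1 = X2 = X3 (one shared uniform bit) -- purely redundant
copy = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
# XOR: X3 = X1 ^ X2 with X1, X2 uniform -- purely synergistic
xor = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}

print(o_information(copy, 3))  # (1.0, 3.0): O = +1 bit, S = 3 bits
print(o_information(xor, 3))   # (-1.0, 3.0): O = -1 bit, S = 3 bits
```

Both systems sit at the same interaction strength (S-information = 3 bits) while occupying opposite ends of the redundancy-synergy axis, illustrating why the two axes are treated as independent.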
Abstract: Higher-order information theory has become a rapidly growing toolkit in computational neuroscience, motivated by the idea that multivariate dependencies can reveal aspects of neural computation and communication invisible to pairwise analyses. Yet functional interpretations of synergy and redundancy often outpace principled arguments for how statistical quantities map onto mechanistic cognitive processes. Here we review the main families of higher-order measures with the explicit goal of translating mathematical properties into defensible mechanistic inferences. First, we systematize Shannon-based multivariate metrics and demonstrate that higher-order dependence is parsimoniously characterized by two largely independent axes: interaction strength and redundancy-synergy balance. We argue that a balanced layering of synergistic integration and redundant broadcasting optimizes multiscale complexity, formalizing a computation-communication tradeoff. We then examine the partial information decomposition and outline pragmatic considerations for its deployment in neural data. Equipped with the relevant mathematical essentials, we connect redundancy-synergy balance to cognitive function by progressively embedding these mathematical properties in real-world constraints, starting with small synthetic systems before gradually building up to neuroimaging. We close by identifying key future directions for mechanistic insight: cross-scale bridging, intervention-based validation, and thermodynamically grounded unification of information dynamics.