Paper List
-
Developing the PsyCogMetrics™ AI Lab to Evaluate Large Language Models and Advance Cognitive Science
This paper addresses the critical gap between sophisticated LLM evaluation needs and the lack of accessible, scientifically rigorous platforms that in...
-
Equivalence of approximation by networks of single- and multi-spike neurons
This paper resolves the fundamental question of whether single-spike spiking neural networks (SNNs) are inherently less expressive than multi-spike SN...
-
The neuroscience of transformers
Proposes a novel computational mapping between the Transformer architecture and cortical-column microcircuits, bridging modern AI and neuroscience.
-
Framing local structural identifiability and observability in terms of parameter-state symmetries
This paper addresses the core challenge of systematically determining which parameters and states in a mechanistic ODE model can be uniquely inferred ...
-
Leveraging Phytolith Research using Artificial Intelligence
This paper addresses the critical bottleneck in phytolith research by automating the labor-intensive manual microscopy process through a multimodal AI...
-
Neural network-based encoding in free-viewing fMRI with gaze-aware models
This paper addresses the core challenge of building computationally efficient and ecologically valid brain encoding models for naturalistic vision by ...
-
Scalable DNA Ternary Full Adder Enabled by a Competitive Blocking Circuit
This paper addresses the core bottleneck of carry information attenuation and limited computational scale in DNA binary adders by introducing a scalab...
-
ELISA: An Interpretable Hybrid Generative AI Agent for Expression-Grounded Discovery in Single-Cell Genomics
This paper addresses the critical bottleneck of translating high-dimensional single-cell transcriptomic data into interpretable biological hypotheses ...
The Neuroscience of Transformers
Institute of Cognitive Science, University of Osnabrück | Department of Neuroscience, Erasmus MC
30-Second Summary
IN SHORT: Proposes a novel computational mapping between the Transformer architecture and cortical-column microcircuits, bridging modern AI and neuroscience.
Core Innovations
- Methodology Introduces a granular mapping where cortical columns correspond to transformer tokens, rather than whole cortical areas mapping to ANN layers.
- Biology Hypothesizes that laminar pathways (L4→L2/3 as Values, L1 feedback as Queries, tangential streams as Keys) implement self-attention-like multiplicative gating.
- Theory Frames the cortical column as a reusable module performing context-dependent routing and local recoding, analogous to a transformer block.
Key Conclusions
- Cortical columns can be mapped to transformer blocks with specific laminar correspondences: the L4→L2/3 pathway as Values (feedforward drive), L1 feedback as Queries (top-down context), and tangential streams in L2/3 and L5 as Keys.
- Multiplicative interactions in cortex (gain modulation, dendritic nonlinearities) may implement the computational motif of self-attention, enabling context-dependent routing without literal QKV matrices.
- The mapping generates testable hypotheses: for example, thalamic drive accounts for only ~10–15% of synapses, so computation is dominated by intracortical context (analogous to attention weights overriding input embeddings).
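To make the proposed correspondence concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, with comments marking where each quantity sits in the paper's hypothesized laminar mapping. The variable names and the mapping comments are illustrative annotations of the analogy, not an implementation drawn from the paper itself.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of tokens.

    Under the paper's analogy (illustrative only):
      tokens    ~ activity vectors of individual cortical columns
      Q = X @ Wq ~ top-down contextual feedback arriving in L1
      K = X @ Wk ~ tangential streams in L2/3 and L5
      V = X @ Wv ~ feedforward drive along the L4 -> L2/3 pathway
    """
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    d = Q.shape[-1]
    # Attention weights: multiplicative, context-dependent gating,
    # the proposed analogue of cortical gain modulation.
    A = softmax(Q @ K.T / np.sqrt(d))
    # Each column's output is a context-weighted mixture of Values.
    return A @ V, A

rng = np.random.default_rng(0)
n_columns, d_model = 4, 8            # 4 "columns", 8-dim activity vectors
X = rng.normal(size=(n_columns, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

Note that each row of `A` sums to 1: every "column" allocates a normalized budget of influence over its peers, which is the routing behavior the paper maps onto contextual modulation.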
Abstract: Neuroscience has long informed the development of artificial neural networks, but the success of modern architectures raises the converse question: can modern networks teach us anything about brain function? Here we examine the structure of the cortical column and propose that the transformer offers a natural computational analogy for multiple elements of cortical microcircuit organization. Rather than claiming a literal implementation of the transformer equations in cortex, we develop a hypothesized mapping between transformer operations and laminar cortical features, using the analogy as a guiding framework for analysis and discussion. This mapping lets us examine more deeply how context selection, content routing, recurrent integration, and inter-layer transformations may be distributed across cortical circuits. In doing so, we generate a broad set of predictions and experimentally testable hypotheses concerning laminar specialization, contextual modulation, dendritic integration, oscillatory coordination, and the effective connectivity of the cortical column. This proposal is intended as a structured hypothesis rather than a definitive account of cortical computation. Placing transformer operations and cortical architecture within a common descriptive framework can sharpen questions, reveal new functional correspondences, and open a productive avenue for reciprocal exchange between systems neuroscience and modern AI. More broadly, this perspective suggests that comparing brains and architectures at the level of computational organization can yield genuine insight for both.