Paper List
-
Evolutionarily Stable Stackelberg Equilibrium
Bridges the gap between Stackelberg leadership models and evolutionary stability by requiring follower strategies to be robust against invasion by mutants.
-
Recovering Sparse Neural Connectivity from Partial Measurements: A Covariance-Based Approach with Granger-Causality Refinement
Enables reconstruction of full neural connectivity from partial recordings by accumulating covariance statistics across multiple experimental sessions.
-
Atomic Trajectory Modeling with State Space Models for Biomolecular Dynamics
ATMOS bridges the gap between computationally expensive MD simulations and time-horizon-limited deep generative models by providing an efficient SSM-based framework for atomic-level trajectory generation of biomolecules.
-
Slow evolution towards generalism in a model of variable dietary range
Resolves the paradox of speciation under indirect competition by showing that demographic noise, rather than deterministic dynamics, drives pattern formation and the evolution of dietary generalism.
-
Grounded Multimodal Retrieval-Augmented Drafting of Radiology Impressions Using Case-Based Similarity Search
Addresses hallucination in radiology report generation by grounding impression drafts in retrieved historical cases, with explicit citations and a confidence-based abstention mechanism.
-
Unified Policy–Value Decomposition for Rapid Adaptation
Enables zero-shot adaptation to novel tasks by sharing a low-dimensional goal embedding between the policy and value function via a bilinear decomposition.
-
Mathematical Modeling of Cancer–Bacterial Therapy: Analysis and Numerical Simulation via Physics-Informed Neural Networks
Provides a rigorous, mesh-free PINN framework for simulating and analyzing the complex, spatially heterogeneous interactions in bacterial cancer therapy.
-
Sample-Efficient Adaptation of Drug-Response Models to Patient Tumors under Strong Biological Domain Shift
Enables effective prediction of patient drug response from minimal clinical data by learning transferable representations from unlabeled molecular profiles.
Towards unified brain-to-text decoding across speech production and perception
Zhejiang University | Chinese Academy of Sciences | Huashan Hospital, Fudan University
30-Second Read
IN SHORT: This paper addresses the core challenge of developing a unified brain-to-text decoding framework that works across both speech production and perception modalities for Mandarin Chinese, overcoming limitations of single-modality approaches and alphabetic language systems.
Core Innovations
- Methodology First unified brain-to-sentence decoding framework for both speech production and perception in Mandarin Chinese, enabling direct comparison of neural dynamics across modalities.
- Methodology Three-stage post-training and two-stage inference framework for 7B-parameter LLM that outperforms larger commercial LLMs (hundreds of billions of parameters) in mapping toneless Pinyin syllables to Chinese sentences.
- Biology Revealed neural characteristics of Mandarin speech: production engages broader cortical regions than perception; shared channels show similar patterns with perception delayed by ~106.5ms; comparable decoding performance across hemispheres.
Key Conclusions
- Achieved best-case Chinese character error rates of 14.71% for spoken sentences and 21.80% for heard sentences across 12 participants with depth electrodes (mean speaking CER = 31.52%, mean listening CER = 37.28%).
- NeuroSketch (2D-CNN) achieved mean initial/final accuracies of 59.54%/50.17% for speaking and 58.92%/48.05% for listening, representing 394.9%/412.0% and 389.7%/406.6% improvements over chance respectively.
- Speech production involved neural responses across broader cortical regions than auditory perception (p<0.05), with perception showing consistent temporal delay relative to production (mean = -106.5ms, 90% CI [-249.4, 23.05]).
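The Chinese character error rates (CER) quoted above are a standard edit-distance metric: the Levenshtein distance (substitutions, insertions, deletions) between the decoded and reference character sequences, divided by the reference length. A minimal generic sketch (not the authors' evaluation code):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance divided by
    the length of the reference character sequence."""
    m, n = len(reference), len(hypothesis)
    # prev[j] = edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / m if m else 0.0

print(f"{cer('今天天气很好', '今天天气真好'):.4f}")  # → 0.1667 (one substitution over six characters)
```

A CER of 14.71% therefore means roughly one character edit per seven reference characters.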
Abstract: Speech production and perception constitute two fundamental and distinct modes of human communication. Prior brain-to-text decoding studies have largely focused on a single modality and alphabetic languages. Here, we present a unified brain-to-sentence decoding framework for both speech production and perception in Mandarin Chinese. The framework exhibits strong generalization ability, enabling sentence-level decoding when trained only on single-character data and supporting characters and syllables unseen during training. In addition, it allows direct and controlled comparison of neural dynamics across modalities. We collected neural data from 12 participants implanted with depth electrodes and achieved full-sentence decoding across multiple participants, with best-case Chinese character error rates of 14.71% for spoken sentences and 21.80% for heard sentences. Mandarin speech is decoded by first classifying syllable components in Hanyu Pinyin, namely initials and finals, from neural signals, followed by a post-trained large language model (LLM) that maps sequences of toneless Pinyin syllables to Chinese sentences. To enhance LLM decoding, we designed a three-stage post-training and two-stage inference framework based on a 7-billion-parameter LLM, achieving overall performance that exceeds larger commercial LLMs with hundreds of billions of parameters or more. In addition, several characteristics were observed in Mandarin speech production and perception: speech production involved neural responses across broader cortical regions than auditory perception; channels responsive to both modalities exhibited similar activity patterns, with speech perception showing a temporal delay relative to production; and decoding performance was broadly comparable across hemispheres. Our work not only establishes the feasibility of a unified decoding framework but also provides insights into the neural characteristics of Mandarin speech production and perception.
These advances contribute to brain-to-text decoding in logosyllabic languages and pave the way toward neural language decoding systems supporting multiple modalities.
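The decoding pipeline described in the abstract first classifies Pinyin initials and finals from neural signals, then assembles toneless syllables for the LLM to map to a Chinese sentence. A toy sketch of just the assembly step, with a deliberately tiny, illustrative class inventory (the paper's actual classifiers, inventories, and LLM stage are not shown):

```python
# Hypothetical syllable-assembly step: per-character initial/final class
# predictions are concatenated into toneless Hanyu Pinyin syllables,
# which a post-trained LLM would then map to Chinese characters.
INITIALS = ["b", "zh", "n", "h", ""]   # "" = zero initial (e.g. "ai")
FINALS = ["a", "ong", "i", "ao", "ei"]

def assemble_syllables(initial_ids, final_ids):
    """Join predicted initial and final classes into toneless syllables."""
    return [INITIALS[i] + FINALS[f] for i, f in zip(initial_ids, final_ids)]

# e.g. predictions for 中 (zhong) and 好 (hao):
print(assemble_syllables([1, 3], [1, 3]))  # → ['zhong', 'hao']
```

Because tones are discarded at this stage, many characters share a syllable; resolving that ambiguity from sentence context is precisely the job the paper assigns to the post-trained LLM.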