Paper List
- Evolutionarily Stable Stackelberg Equilibrium
  Bridges the gap between Stackelberg leadership models and evolutionary stability by requiring follower strategies to be robust against mutant invasion.
- Recovering Sparse Neural Connectivity from Partial Measurements: A Covariance-Based Approach with Granger-Causality Refinement
  Reconstructs full neural connectivity from partial recordings by accumulating covariance statistics across multiple experimental sessions.
- Atomic Trajectory Modeling with State Space Models for Biomolecular Dynamics
  ATMOS bridges the gap between computationally expensive MD simulations and temporally limited deep generative models by providing an efficient SSM-based framework for atomistic trajectory generation of biomolecules.
- Slow evolution towards generalism in a model of variable dietary range
  Resolves the paradox of speciation under indirect competition by showing that demographic noise, rather than deterministic dynamics, drives pattern formation and the evolution of dietary generalism.
- Grounded Multimodal Retrieval-Augmented Drafting of Radiology Impressions Using Case-Based Similarity Search
  Addresses hallucination in radiology report generation by grounding impression drafts in retrieved historical cases, with explicit citations and a confidence-based rejection mechanism.
- Unified Policy–Value Decomposition for Rapid Adaptation
  Achieves zero-shot adaptation to novel tasks by sharing a low-dimensional goal embedding between the policy and value functions via a bilinear decomposition.
- Mathematical Modeling of Cancer–Bacterial Therapy: Analysis and Numerical Simulation via Physics-Informed Neural Networks
  Provides a rigorous, mesh-free PINN framework for simulating and analyzing the complex, spatially heterogeneous interactions in bacterial cancer therapy.
- Sample-Efficient Adaptation of Drug-Response Models to Patient Tumors under Strong Biological Domain Shift
  Enables effective prediction of patient drug response from minimal clinical data by learning transferable representations from unlabeled molecular profiles.
Cross-Species Transfer Learning for Electrophysiology-to-Transcriptomics Mapping in Cortical GABAergic Interneurons
Unknown
30-Second Summary
IN SHORT: This paper addresses the challenge of predicting transcriptomic identity from electrophysiological recordings in human cortical interneurons, where limited labeled data and class imbalance hinder model performance.
Core Innovations
- Methodology: Developed an attention-based BiLSTM that operates directly on the structured IPFX feature-family representation (12 families, 498 features), eliminating the need for sparse PCA preprocessing and providing interpretable attention weights over feature families.
- Methodology: Implemented a cross-species transfer learning framework with joint supervised training (shared encoder + two heads) followed by human-only fine-tuning, improving human macro-F1 by leveraging abundant mouse data (3,699 cells) to augment limited human data (506 cells).
- Biology: Demonstrated a conserved electrophysiological-to-transcriptomic mapping across species for GABAergic interneuron subclasses (Lamp5, Pvalb, Sst, Vip), enabling meaningful cross-species transfer despite biological and experimental distribution shifts.
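The interpretable attention over feature families can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the authors' implementation: the per-family embedding size, the learned query vector, and the random stand-ins for BiLSTM outputs are all illustrative; only the count of 12 IPFX feature families comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_families = 12   # IPFX feature families (from the paper)
family_dim = 16   # per-family embedding size (assumed)

def attention_pool(family_embeddings, query):
    """Soft attention over family embeddings: softmax scores give one
    interpretable weight per feature family, then a weighted sum pools them."""
    scores = family_embeddings @ query            # (n_families,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax -> weights sum to 1
    pooled = weights @ family_embeddings          # (family_dim,)
    return pooled, weights

# Stand-ins for per-family BiLSTM outputs and a learned attention query.
family_embeddings = rng.normal(size=(n_families, family_dim))
query = rng.normal(size=family_dim)

pooled, weights = attention_pool(family_embeddings, query)
# `weights` is the quantity read off as family-level importance in the notes above.
```

Because the weights are a proper distribution over the 12 families, ranking them directly answers "which electrophysiological feature families drove this prediction," which is the interpretability claim made for the model.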
Key Findings
- Successfully replicated the Gouwens et al. (2020) baseline with random forest achieving 90.72% accuracy and 0.8728 macro-F1 on mouse data, confirming reproducibility of the electrophysiology-to-transcriptomics pipeline.
- The attention-based BiLSTM with SMOTE and ArcFace achieved 0.8923 macro-F1 on mouse data, matching feature-engineered baselines while providing interpretable attention weights over 12 electrophysiological feature families.
- Cross-species transfer learning (mouse pretraining + human fine-tuning) improved human macro-F1 compared to human-only training, demonstrating measurable gains despite distribution shifts and limited human sample size.
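The two-stage transfer scheme described above (joint training of a shared encoder with species-specific heads, then human-only fine-tuning) can be sketched as follows. This is a hedged toy under stated assumptions: a linear tanh encoder and softmax heads stand in for the paper's BiLSTM, the hidden size and learning rate are invented, and the data are synthetic stand-ins for the mouse (abundant) and human (scarce) cohorts.

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, n_hid, n_cls = 498, 32, 4   # 498 IPFX features, 4 subclasses; hidden size assumed

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_step(X, y, W_enc, W_head, lr=0.01):
    """One cross-entropy gradient step through shared encoder + one head."""
    H = np.tanh(X @ W_enc)                   # shared representation
    P = softmax(H @ W_head)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0           # dLoss/dlogits
    G /= len(y)
    dW_head = H.T @ G
    dH = G @ W_head.T * (1 - H**2)           # backprop through tanh
    return W_enc - lr * (X.T @ dH), W_head - lr * dW_head

# Synthetic stand-ins for the mouse and human Patch-seq datasets.
Xm, ym = rng.normal(size=(300, n_feat)), rng.integers(0, n_cls, 300)
Xh, yh = rng.normal(size=(50, n_feat)), rng.integers(0, n_cls, 50)

W_enc = rng.normal(scale=0.01, size=(n_feat, n_hid))
W_mouse = rng.normal(scale=0.01, size=(n_hid, n_cls))
W_human = rng.normal(scale=0.01, size=(n_hid, n_cls))

# Stage 1: joint supervised training -- both heads update the shared encoder.
for _ in range(20):
    W_enc, W_mouse = train_step(Xm, ym, W_enc, W_mouse)
    W_enc, W_human = train_step(Xh, yh, W_enc, W_human)

# Stage 2: human-only fine-tuning of the encoder and human head.
for _ in range(20):
    W_enc, W_human = train_step(Xh, yh, W_enc, W_human)

preds = softmax(np.tanh(Xh @ W_enc) @ W_human).argmax(axis=1)
```

The design point the sketch captures is that the mouse head is discarded after stage 1: the abundant mouse data shapes only the shared encoder, which the human head then adapts with the limited human cells.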
Abstract: Single-cell electrophysiological recordings provide a powerful window into neuronal functional diversity and offer an interpretable route for linking intrinsic physiology to transcriptomic identity. Here, we replicate and extend the electrophysiology-to-transcriptomics framework introduced by Gouwens et al. (2020) using publicly available Allen Institute Patch-seq datasets from both mouse and human cortex. We focus on GABAergic inhibitory interneurons to target a subclass structure (Lamp5, Pvalb, Sst, Vip) that is comparable and conserved across species. After quality control, we analyzed 3,699 mouse visual cortex neurons and 506 human neocortical neurons from neurosurgical resections. Using standardized electrophysiological features and sparse PCA, we reproduced the major class-level separations reported in the original mouse study. For supervised prediction, a class-balanced random forest provided a strong feature-engineered baseline in mouse data and a reduced but still informative baseline in human data. We then developed an attention-based BiLSTM that operates directly on the structured IPFX feature-family representation, avoiding sPCA and providing feature-family-level interpretability via learned attention weights. Finally, we evaluated a cross-species transfer setting in which the sequence model is pretrained on mouse data and fine-tuned on human data for an aligned 4-class task, improving human macro-F1 relative to a human-only training baseline. Together, these results confirm reproducibility of the Gouwens pipeline in mouse data, demonstrate that sequence models can match feature-engineered baselines, and show that mouse-to-human transfer learning can provide measurable gains for human subclass prediction.