Paper List
-
Evolutionarily Stable Stackelberg Equilibrium
Bridges the gap between Stackelberg leadership models and evolutionary stability by requiring follower strategies to be robust against mutant invasions.
-
Recovering Sparse Neural Connectivity from Partial Measurements: A Covariance-Based Approach with Granger-Causality Refinement
Reconstructs full neural connectivity from partial recordings by accumulating covariance statistics across multiple experimental sessions.
-
Atomic Trajectory Modeling with State Space Models for Biomolecular Dynamics
ATMOS bridges the gap between computationally expensive MD simulations and time-limited deep generative models by providing an efficient SSM-based framework for atomic-level trajectory generation of biomolecules.
-
Slow evolution towards generalism in a model of variable dietary range
Resolves the paradox of speciation under indirect competition by showing that demographic noise, rather than deterministic dynamics, drives pattern formation and the evolution of generalism.
-
Grounded Multimodal Retrieval-Augmented Drafting of Radiology Impressions Using Case-Based Similarity Search
Addresses hallucination in radiology report generation by grounding impression drafts in retrieved historical cases, with explicit citations and a confidence-based abstention mechanism.
-
Unified Policy–Value Decomposition for Rapid Adaptation
Enables zero-shot adaptation to novel tasks by sharing a low-dimensional goal embedding between the policy and value function via a bilinear decomposition.
-
Mathematical Modeling of Cancer–Bacterial Therapy: Analysis and Numerical Simulation via Physics-Informed Neural Networks
Provides a rigorous, mesh-free PINN framework for simulating and analyzing the complex, spatially heterogeneous interactions in bacterial cancer therapy.
-
Sample-Efficient Adaptation of Drug-Response Models to Patient Tumors under Strong Biological Domain Shift
Enables effective prediction of patient drug response from minimal clinical data by learning transferable representations from unlabeled molecular profiles.
-
How to make the most of your masked language model for protein engineering
BigHat Biosciences
30-Second Read
IN SHORT: This paper addresses the critical bottleneck of efficiently sampling high-quality, diverse protein sequences from Masked Language Models (MLMs) for practical antibody engineering, where traditional mutation-centric methods are computationally expensive and often produce dysfunctional variants.
Core Innovations
- Methodology: Proposes a novel sequence-centric stochastic beam search (SBS) method that reframes generation as a search problem, leveraging MLMs' efficiency in evaluating the pseudo-log-likelihood (PLL) of all 1-edit neighbors of a sequence, achieving a 20× speedup over mutation-centric methods.
- Methodology: Introduces a flexible, gradient-free multi-objective optimization (MOO) framework compatible with the SBS sampler, enabling guidance by arbitrary black-box scoring functions (e.g., binding affinity, humanness, stability) without requiring differentiability or partially-masked sequence inputs.
- Biology: Provides the first extensive head-to-head in vitro evaluation of MLM sampling algorithms and models in real antibody therapeutic campaigns, revealing that the choice of sampling algorithm is at least as impactful as the choice of model itself.
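The sequence-centric SBS idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: `toy_pll` stands in for a real MLM pseudo-log-likelihood, and the beam width, step count, and temperature values are arbitrary. The key structural points are that every 1-edit neighbor of each beam member is scored as a whole sequence, and the next beam is drawn stochastically via the Gumbel-top-k trick rather than greedily.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_pll(seq):
    # Stand-in for an MLM pseudo-log-likelihood: rewards 'A' and 'K'
    # residues. A real implementation would score the sequence with a
    # masked language model such as ESM2 or AbLang2.
    return sum(1.0 if c in "AK" else 0.0 for c in seq)

def one_edit_neighbors(seq):
    # All single-substitution variants of seq.
    for i in range(len(seq)):
        for aa in AMINO_ACIDS:
            if aa != seq[i]:
                yield seq[:i] + aa + seq[i + 1:]

def stochastic_beam_search(seed, score, beam_width=4, steps=3, temperature=1.0):
    beam = [seed]
    for _ in range(steps):
        candidates = list({n for s in beam for n in one_edit_neighbors(s)})
        # Gumbel-top-k: perturb each score with Gumbel noise and take the
        # top-k, which samples k candidates without replacement roughly
        # proportionally to softmax(score / temperature).
        perturbed = [
            score(c) / temperature - math.log(-math.log(random.random()))
            for c in candidates
        ]
        ranked = sorted(zip(perturbed, candidates), reverse=True)
        beam = [c for _, c in ranked[:beam_width]]
    return max(beam, key=score)

random.seed(0)
best = stochastic_beam_search("MSTLV", toy_pll)
print(best, toy_pll(best))
```

Because every candidate is a complete sequence, any black-box scorer can replace or augment `toy_pll`, which is what makes the gradient-free MOO guidance in the second bullet possible.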
Key Findings
- The proposed stochastic beam search sampler significantly outperformed traditional Gibbs sampling in vitro, with AbLang2+SBS achieving higher success rates (e.g., perfect 100% success rate when combined with Smooth Tchebycheff Scalarization guidance).
- Model choice matters: ESM2-650M (trained on generic proteins) and AbLang2 (antibody-specific) performed best in silico and in vitro, while the sampling algorithm choice (SBS vs. Gibbs) had an equal or greater impact on outcome quality.
- Supervision is highly effective: Using a trained classifier for post-MLM ranking improved the success rate of AbLang2 outputs considerably, and MOO guidance (NDS/STS) during generation further enhanced performance and eliminated generation of very weak binders.
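The Smooth Tchebycheff Scalarization (STS) mentioned above can be illustrated generically; this is not the paper's code, and the weights, ideal point, and smoothing parameter `mu` below are invented for the example. STS folds several black-box objectives (here framed as values to minimize, e.g. negated binding or humanness scores) into a single guidance value via a log-sum-exp smoothing of the weighted Tchebycheff max.

```python
import math

def smooth_tchebycheff(objectives, weights, ideal, mu=0.1):
    """Log-sum-exp smoothing of the weighted Tchebycheff max.

    objectives: per-objective values to MINIMIZE; ideal: the
    per-objective ideal point z*; as mu -> 0 this recovers the
    hard max_i w_i * (f_i - z_i).
    """
    terms = [w * (f - z) / mu for f, w, z in zip(objectives, weights, ideal)]
    m = max(terms)  # shift for numerical stability
    return mu * (m + math.log(sum(math.exp(t - m) for t in terms)))

# Two toy objectives with ideal point 0 and equal weights.
hard_max = max(0.5 * 0.4, 0.5 * 0.7)  # hard Tchebycheff value: 0.35
smooth = smooth_tchebycheff([0.4, 0.7], [0.5, 0.5], [0.0, 0.0], mu=0.1)
print(hard_max, round(smooth, 4))  # smooth value sits slightly above 0.35
```

The smooth value always upper-bounds the hard max and remains differentiable in the objective values, but in this gradient-free setting its role is simply to give the sampler a single well-behaved scalar to rank candidates by.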
Abstract: A plethora of protein language models have been released in recent years. Yet comparatively little work has addressed how to best sample from them to optimize desired biological properties. We fill this gap by proposing a flexible, effective sampling method for masked language models (MLMs), and by systematically evaluating models and methods both in silico and in vitro on actual antibody therapeutics campaigns. Firstly, we propose sampling with stochastic beam search, exploiting the fact that MLMs are remarkably efficient at evaluating the pseudo-perplexity of the entire 1-edit neighborhood of a sequence. Reframing generation in terms of entire-sequence evaluation enables flexible guidance with multiple optimization objectives. Secondly, we report results from our extensive in vitro head-to-head evaluation for the antibody engineering setting. This reveals that choice of sampling method is at least as impactful as the model used, motivating future research into this under-explored area.