Paper List
-
Translating Measures onto Mechanisms: The Cognitive Relevance of Higher-Order Information
This review addresses the core challenge of translating abstract higher-order information theory metrics (e.g., synergy, redundancy) into defensible, ...
-
Emergent Bayesian Behaviour and Optimal Cue Combination in LLMs
This paper addresses the critical gap in understanding whether LLMs spontaneously develop human-like Bayesian strategies for processing uncertain info...
-
Vessel Network Topology in Molecular Communication: Insights from Experiments and Theory
This work addresses the critical lack of experimentally validated channel models for molecular communication within complex vessel networks, which is ...
-
Modulation of DNA rheology by a transcription factor that forms aging microgels
This work addresses the fundamental question of how the transcription factor NANOG, essential for embryonic stem cell pluripotency, physically regulat...
-
Imperfect molecular detection renormalizes apparent kinetic rates in stochastic gene regulatory networks
This paper addresses the core challenge of distinguishing genuine stochastic dynamics of gene regulatory networks from artifacts introduced by imperfe...
-
PanFoMa: A Lightweight Foundation Model and Benchmark for Pan-Cancer
This paper addresses the dual challenge of achieving computational efficiency without sacrificing accuracy in whole-transcriptome single-cell represen...
-
Beyond Bayesian Inference: The Correlation Integral Likelihood Framework and Gradient Flow Methods for Deterministic Sampling
This paper addresses the core challenge of calibrating complex biological models (e.g., PDEs, agent-based models) with incomplete, noisy, or heterogen...
-
Contrastive Deep Learning for Variant Detection in Wastewater Genomic Sequencing
This paper addresses the core challenge of detecting viral variants in wastewater sequencing data without reference genomes or labeled annotations, ov...
-
How to make the most of your masked language model for protein engineering
BigHat Biosciences
30-Second Read
IN SHORT: This paper addresses the critical bottleneck of efficiently sampling high-quality, diverse protein sequences from Masked Language Models (MLMs) for practical antibody engineering, where traditional mutation-centric methods are computationally expensive and often produce dysfunctional variants.
Key Innovations
- Methodology: Proposes a novel sequence-centric stochastic beam search (SBS) method that reframes generation as a search problem, exploiting the fact that MLMs can efficiently evaluate the pseudo-log-likelihood (PLL) of all 1-edit neighbors of a sequence, achieving a ~20× speedup over mutation-centric methods.
- Methodology: Introduces a flexible, gradient-free multi-objective optimization (MOO) framework compatible with the SBS sampler, enabling guidance by arbitrary black-box scoring functions (e.g., binding affinity, humanness, stability) without requiring differentiability or partially-masked sequence inputs.
- Biology: Provides the first extensive head-to-head in vitro evaluation of MLM sampling algorithms and models in real antibody therapeutic campaigns, revealing that the choice of sampling algorithm is at least as impactful as the choice of model itself.
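The sequence-centric search described above can be sketched in a few lines. This is a minimal illustration only: a toy scoring function stands in for the real MLM pseudo-log-likelihood plus black-box objectives, and the beam width, sequence, and `toy_score` are illustrative assumptions, not values from the paper.

```python
import math
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def toy_score(seq):
    # Stand-in for an MLM pseudo-log-likelihood combined with black-box
    # objectives (binding, humanness, ...). Purely illustrative: rewards
    # alanine content with a small length penalty.
    return sum(1.0 for aa in seq if aa == "A") - 0.01 * len(seq)

def one_edit_neighbors(seq):
    # Enumerate every single-substitution variant of `seq`.
    for i, aa in enumerate(seq):
        for sub in ALPHABET:
            if sub != aa:
                yield seq[:i] + sub + seq[i + 1:]

def stochastic_beam_step(beam, k, rng):
    # Score the entire 1-edit neighborhood of the beam, perturb each score
    # with Gumbel(0,1) noise, and keep the top-k perturbed candidates
    # (Gumbel top-k is equivalent to sampling without replacement).
    candidates = {}
    for seq in beam:
        for nbr in one_edit_neighbors(seq):
            if nbr not in candidates:
                g = -math.log(-math.log(rng.random()))  # Gumbel(0,1) draw
                candidates[nbr] = toy_score(nbr) + g
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

rng = random.Random(0)
beam = ["MKTWY"]  # toy starting sequence
for _ in range(3):
    beam = stochastic_beam_step(beam, k=4, rng=rng)
```

Because each step only needs scores for 1-edit neighbors, an MLM can batch-evaluate the whole neighborhood in a handful of forward passes, which is where the efficiency over mutation-centric proposal methods comes from.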
Main Conclusions
- The proposed stochastic beam search sampler significantly outperformed traditional Gibbs sampling in vitro, with AbLang2+SBS achieving higher success rates (e.g., a 100% success rate when combined with Smooth Tchebycheff Scalarization guidance).
- Model choice matters: ESM2-650M (trained on generic proteins) and AbLang2 (antibody-specific) performed best in silico and in vitro, while the sampling algorithm choice (SBS vs. Gibbs) had an equal or greater impact on outcome quality.
- Supervision is highly effective: Using a trained classifier for post-MLM ranking improved the success rate of AbLang2 outputs considerably, and MOO guidance (NDS/STS) during generation further enhanced performance and eliminated generation of very weak binders.
Abstract: A plethora of protein language models have been released in recent years. Yet comparatively little work has addressed how to best sample from them to optimize desired biological properties. We fill this gap by proposing a flexible, effective sampling method for masked language models (MLMs), and by systematically evaluating models and methods both in silico and in vitro on actual antibody therapeutics campaigns. Firstly, we propose sampling with stochastic beam search, exploiting the fact that MLMs are remarkably efficient at evaluating the pseudo-perplexity of the entire 1-edit neighborhood of a sequence. Reframing generation in terms of entire-sequence evaluation enables flexible guidance with multiple optimization objectives. Secondly, we report results from our extensive in vitro head-to-head evaluation for the antibody engineering setting. This reveals that choice of sampling method is at least as impactful as the model used, motivating future research into this under-explored area.