Paper List
-
EnzyCLIP: A Cross-Attention Dual Encoder Framework with Contrastive Learning for Predicting Enzyme Kinetic Constants
This paper addresses the core challenge of jointly predicting enzyme kinetic parameters (Kcat and Km) by modeling dynamic enzyme-substrate interaction...
-
Tissue stress measurements with Bayesian Inversion Stress Microscopy
This paper addresses the core challenge of measuring absolute, tissue-scale mechanical stress without making assumptions about tissue rheology, which ...
-
DeepFRI Demystified: Interpretability vs. Accuracy in AI Protein Function Prediction
This study addresses the critical gap between high predictive accuracy and biological interpretability in DeepFRI, revealing that the model often prio...
-
Hierarchical Molecular Language Models (HMLMs)
This paper addresses the core challenge of accurately modeling context-dependent signaling, pathway cross-talk, and temporal dynamics across multiple ...
-
Stability analysis of action potential generation using Markov models of voltage‑gated sodium channel isoforms
This work addresses the challenge of systematically characterizing how the high-dimensional parameter space of Markov models for different sodium chan...
-
Personalized optimization of pediatric HD-tDCS for dose consistency and target engagement
This paper addresses the critical limitation of one-size-fits-all HD-tDCS protocols in pediatric populations by developing a personalized optimization...
-
Consistent Synthetic Sequences Unlock Structural Diversity in Fully Atomistic De Novo Protein Design
This paper addresses the core pain point of low sequence-structure alignment in existing synthetic datasets (e.g., AFDB), which severely limits the pe...
-
Generative design and validation of therapeutic peptides for glioblastoma based on a potential target ATP5A
This paper addresses the critical bottleneck in therapeutic peptide design: how to efficiently optimize lead peptides with geometric constraints while...
-
Mechanistic Interpretability of Antibody Language Models Using SAEs
Department of Statistics, University of Oxford, UK | Reticular, San Francisco, USA | Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
The 30-Second View
IN SHORT: This work addresses the core challenge of achieving both interpretability and controllable generation in domain-specific protein language models, specifically for antibody design.
Innovation (TL;DR)
- Methodology First application of Sparse Autoencoders (SAEs) to interrogate autoregressive antibody-specific language models (p-IgGen), moving beyond general protein language models.
- Methodology Systematic comparison reveals a key trade-off: TopK SAEs yield highly interpretable, monosemantic features (e.g., for CDR identity with validation accuracy 0.99) but lack causal steerability, while Ordered SAEs provide reliable generative control at the cost of interpretability.
- Biology Identifies and validates antibody-specific, biologically meaningful latent features, such as CDR identity and germline gene identity (e.g., IGHJ4 prediction with F1 macro score of 0.93), demonstrating the model's learning of immunologically relevant concepts.
Key conclusions
- TopK SAEs effectively compress and preserve biological information (CDR identity prediction accuracy 0.99 vs. 0.98 for raw neurons) and yield sparse, interpretable activation patterns localized to specific regions (e.g., CDRH3), overcoming neuron polysemanticity.
- High feature-concept correlation (e.g., F1 > 0.5 for IGHJ4 latents) does not guarantee causal steerability; steering on TopK-identified IGHJ4 features failed to consistently increase IGHJ4 proportions in generated sequences.
- Ordered SAEs, with their enforced hierarchical latent structure (via per-index nested grouping and decreasing truncation weights), successfully identify features that enable predictable generative steering, albeit with more complex activation patterns.
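The TopK mechanism referenced above can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: dimensions, weights, the latent index, and the steering coefficient are all arbitrary. It shows the two operations the conclusions contrast, sparse encoding (keep only the k largest post-ReLU latents) and steering (adding a latent's decoder direction back into the model activation).

```python
import numpy as np

def topk_sae_encode(x, W_enc, b_enc, k):
    """TopK SAE encoder: ReLU pre-activations, then zero all but the k largest."""
    z = np.maximum(W_enc @ x + b_enc, 0.0)
    if k < z.size:
        drop = np.argpartition(z, -k)[:-k]  # indices of everything outside the top-k
        z[drop] = 0.0
    return z

def sae_decode(z, W_dec, b_dec):
    """Linear decoder: reconstruct the activation from sparse latents."""
    return W_dec @ z + b_dec

# Illustrative dimensions and random weights (hypothetical, not from the paper).
rng = np.random.default_rng(0)
d_model, d_latent, k = 16, 64, 8
W_enc = rng.standard_normal((d_latent, d_model)) * 0.1
W_dec = rng.standard_normal((d_model, d_latent)) * 0.1
b_enc, b_dec = np.zeros(d_latent), np.zeros(d_model)

x = rng.standard_normal(d_model)            # stand-in for a model activation
z = topk_sae_encode(x, W_enc, b_enc, k)     # at most k nonzero latents
x_hat = sae_decode(z, W_dec, b_dec)         # reconstruction

# Steering sketch: push the activation along one latent's decoder direction.
# Feature index 5 and scale 3.0 are placeholders for a concept-linked latent.
x_steered = x + 3.0 * W_dec[:, 5]
```

The Ordered SAE variant discussed above differs in training, not in this basic encode/decode shape: nested latent groups with decreasing truncation weights impose the hierarchy that makes steering directions more reliable.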
Abstract: Sparse autoencoders (SAEs) are a mechanistic interpretability technique that has been used to provide insight into learned concepts within large protein language models. Here, we employ TopK and Ordered SAEs to investigate an autoregressive antibody language model, p-IgGen, and steer its generation. We show that TopK SAEs can reveal biologically meaningful latent features, but high feature–concept correlation does not guarantee causal control over generation. In contrast, Ordered SAEs impose a hierarchical structure that reliably identifies steerable features, but at the expense of more complex and less interpretable activation patterns. These findings advance the mechanistic interpretability of domain-specific protein language models and suggest that, while TopK SAEs suffice for mapping latent features to concepts, Ordered SAEs are preferable when precise generative steering is required.