Paper List
- Pharmacophore-based design by learning on voxel grids
  This paper addresses the computational bottleneck and limited novelty in conventional pharmacophore-based virtual screening by introducing a voxel cap...
- CONFIDE: Hallucination Assessment for Reliable Biomolecular Structure Prediction and Design
  This paper addresses the critical limitation of current protein structure prediction models (like AlphaFold3) where high-confidence scores (pLDDT) can...
- On the Approximation of Phylogenetic Distance Functions by Artificial Neural Networks
  This paper addresses the core challenge of developing computationally efficient and scalable neural network architectures that can learn accurate phyl...
- EcoCast: A Spatio-Temporal Model for Continual Biodiversity and Climate Risk Forecasting
  This paper addresses the critical bottleneck in conservation: the lack of timely, high-resolution, near-term forecasts of species distribution shifts ...
- Training Dynamics of Learning 3D-Rotational Equivariance
  This work addresses the core dilemma of whether to use computationally expensive equivariant architectures or faster symmetry-agnostic models with dat...
- Fast and Accurate Node-Age Estimation Under Fossil Calibration Uncertainty Using the Adjusted Pairwise Likelihood
  This paper addresses the dual challenge of computational inefficiency and sensitivity to fossil calibration errors in Bayesian divergence time estimat...
- Few-shot Protein Fitness Prediction via In-context Learning and Test-time Training
  This paper addresses the core challenge of accurately predicting protein fitness with only a handful of experimental observations, where data collecti...
- scCluBench: Comprehensive Benchmarking of Clustering Algorithms for Single-Cell RNA Sequencing
  This paper addresses the critical gap of fragmented and non-standardized benchmarking in single-cell RNA-seq clustering, which hinders objective compa...
Mechanistic Interpretability of Antibody Language Models Using SAEs
Department of Statistics, University of Oxford, UK | Reticular, San Francisco, USA | Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
The 30-Second View
IN SHORT: This work addresses the core challenge of achieving both interpretability and controllable generation in domain-specific protein language models, specifically for antibody design.
Innovation (TL;DR)
- Methodology First application of Sparse Autoencoders (SAEs) to interrogate an autoregressive antibody-specific language model (p-IgGen), moving beyond general protein language models.
- Methodology Systematic comparison reveals a key trade-off: TopK SAEs yield highly interpretable, monosemantic features (e.g., for CDR identity, with validation accuracy 0.99) but lack causal steerability, while Ordered SAEs provide reliable generative control at the cost of interpretability (a minimal TopK SAE sketch follows this list).
- Biology Identifies and validates antibody-specific, biologically meaningful latent features, such as CDR identity and germline gene identity (e.g., IGHJ4 prediction with F1 macro score of 0.93), demonstrating the model's learning of immunologically relevant concepts.
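To make the TopK mechanism concrete, here is a minimal PyTorch sketch of a TopK sparse autoencoder applied to transformer activations. This is an illustrative assumption, not the authors' code: the class name, dimensions, and value of k are placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Minimal TopK sparse autoencoder (illustrative sketch, not the paper's implementation)."""

    def __init__(self, d_model: int, d_latent: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, x: torch.Tensor):
        # Project activations into an overcomplete latent space.
        z = torch.relu(self.encoder(x))
        # Keep only the k largest latent activations per token; zero out the rest.
        topk = torch.topk(z, self.k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        # Reconstruct the original activation from the sparse code.
        return self.decoder(z_sparse), z_sparse

# Hypothetical usage; d_model, d_latent, and k are illustrative values.
sae = TopKSAE(d_model=512, d_latent=8192, k=32)
acts = torch.randn(4, 512)              # e.g. hidden states taken from p-IgGen
recon, latents = sae(acts)
loss = torch.nn.functional.mse_loss(recon, acts)
```

The hard top-k selection is what yields the sparse, monosemantic activation patterns described above: each token's representation is explained by only a handful of latent features.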
Key conclusions
- TopK SAEs effectively compress and preserve biological information (CDR identity prediction accuracy 0.99 vs. 0.98 for raw neurons) and yield sparse, interpretable activation patterns localized to specific regions (e.g., CDRH3), overcoming neuron polysemanticity.
- High feature-concept correlation (e.g., F1 > 0.5 for IGHJ4 latents) does not guarantee causal steerability; steering on TopK-identified IGHJ4 features failed to consistently increase IGHJ4 proportions in generated sequences.
- Ordered SAEs, with their enforced hierarchical latent structure (via per-index nested grouping and decreasing truncation weights), successfully identify features that enable predictable generative steering, albeit with more complex activation patterns (see the sketch after this list).
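The nested-grouping and decreasing-weight idea can be illustrated with a simple training objective, plus a hypothetical steering helper. This is a sketch under stated assumptions, not the authors' implementation; the function names, prefix sizes, and weights are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ordered_sae_loss(x, encoder, decoder, prefix_sizes, weights):
    """Nested-truncation objective sketched from the description above (illustrative only)."""
    z = torch.relu(encoder(x))
    loss = x.new_zeros(())
    for m, w in zip(prefix_sizes, weights):
        z_trunc = z.clone()
        z_trunc[..., m:] = 0.0               # keep only the first m latent indices
        # Earlier (smaller) prefixes get larger weights, pushing the most
        # important features toward the front of the latent ordering.
        loss = loss + w * F.mse_loss(decoder(z_trunc), x)
    return loss

def steer(z, latent_index, scale):
    """Hypothetical steering: rescale one latent believed to encode a concept
    (e.g. IGHJ4 usage) before decoding back into the model's residual stream."""
    z = z.clone()
    z[..., latent_index] *= scale
    return z

# Illustrative sizes; truncation weights decrease as the prefix grows.
d_model, d_latent = 512, 4096
enc, dec = nn.Linear(d_model, d_latent), nn.Linear(d_latent, d_model)
x = torch.randn(8, d_model)
loss = ordered_sae_loss(x, enc, dec,
                        prefix_sizes=[64, 256, 1024, 4096],
                        weights=[8.0, 4.0, 2.0, 1.0])
```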
Abstract: Sparse autoencoders (SAEs) are a mechanistic interpretability technique that has been used to provide insight into learned concepts within large protein language models. Here, we employ TopK and Ordered SAEs to investigate an autoregressive antibody language model, p-IgGen, and steer its generation. We show that TopK SAEs can reveal biologically meaningful latent features, but high feature–concept correlation does not guarantee causal control over generation. In contrast, Ordered SAEs impose a hierarchical structure that reliably identifies steerable features, but at the expense of more complex and less interpretable activation patterns. These findings advance the mechanistic interpretability of domain-specific protein language models and suggest that, while TopK SAEs suffice for mapping latent features to concepts, Ordered SAEs are preferable when precise generative steering is required.