Paper List
- SSDLabeler: Realistic semi-synthetic data generation for multi-label artifact classification in EEG
  This paper addresses the core challenge of training robust multi-label EEG artifact classifiers by overcoming the scarcity and limited diversity of ma...
- Decoding Selective Auditory Attention to Musical Elements in Ecologically Valid Music Listening
  This paper addresses the core challenge of objectively quantifying listeners' selective attention to specific musical components (e.g., vocals, drums,...
- Physics-Guided Surrogate Modeling for Machine Learning–Driven DLD Design Optimization
  This paper addresses the core bottleneck of translating microfluidic DLD devices from research prototypes to clinical applications by replacing weeks-...
- Mechanistic Interpretability of Antibody Language Models Using SAEs
  This work addresses the core challenge of achieving both interpretability and controllable generation in domain-specific protein language models, spec...
- The Effective Reproduction Number in the Kermack-McKendrick model with age of infection and reinfection
  This paper addresses the challenge of accurately estimating the time-varying effective reproduction number ℛ(t) in epidemics by incorporating two crit...
- Fluctuating Environments Favor Extreme Dormancy Strategies and Penalize Intermediate Ones
  This paper addresses the core challenge of determining how organisms should tune dormancy duration to match the temporal autocorrelation of their envi...
- Covering Relations in the Poset of Combinatorial Neural Codes
  This work addresses the core challenge of navigating the complex poset structure of neural codes to systematically test the conjecture linking convex ...
- Revealing stimulus-dependent dynamics through statistical complexity
  This paper addresses the core challenge of detecting stimulus-specific patterns in neural population dynamics that remain hidden to traditional variab...
Mechanistic Interpretability of Antibody Language Models Using SAEs
Department of Statistics, University of Oxford, UK | Reticular, San Francisco, USA | Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
The 30-Second View
IN SHORT: This work addresses the core challenge of achieving both interpretability and controllable generation in domain-specific protein language models, specifically for antibody design.
Innovation (TL;DR)
- Methodology: First application of Sparse Autoencoders (SAEs) to interrogate autoregressive antibody-specific language models (p-IgGen), moving beyond general protein language models.
- Methodology: Systematic comparison reveals a key trade-off: TopK SAEs yield highly interpretable, monosemantic features (e.g., for CDR identity with validation accuracy 0.99) but lack causal steerability, while Ordered SAEs provide reliable generative control at the cost of interpretability.
- Biology: Identifies and validates antibody-specific, biologically meaningful latent features, such as CDR identity and germline gene identity (e.g., IGHJ4 prediction with F1 macro score of 0.93), demonstrating the model's learning of immunologically relevant concepts.
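For readers unfamiliar with the mechanism, the TopK encoding step behind these sparse, monosemantic features can be sketched in a few lines of NumPy. All names and dimensions below are illustrative, not taken from the paper: each activation vector is projected into a wide latent space and only the k largest pre-activations survive, which is what enforces hard sparsity.

```python
import numpy as np

def topk_sae_encode(x, W_enc, b_enc, k):
    """Encode activations with a TopK SAE: keep only the k largest
    pre-activations per sample, zero the rest, then apply ReLU."""
    pre = x @ W_enc + b_enc                      # (batch, n_latents)
    idx = np.argpartition(pre, -k, axis=-1)[..., -k:]  # top-k indices per row
    z = np.zeros_like(pre)
    np.put_along_axis(z, idx, np.take_along_axis(pre, idx, axis=-1), axis=-1)
    return np.maximum(z, 0.0)                    # surviving latents, rectified

# Illustrative sizes (p-IgGen's real dimensions will differ).
rng = np.random.default_rng(0)
d_model, n_latents, k = 16, 64, 8
x = rng.normal(size=(4, d_model))                # a batch of hidden states
W_enc = rng.normal(size=(d_model, n_latents))
z = topk_sae_encode(x, W_enc, np.zeros(n_latents), k)
assert (z != 0).sum(axis=-1).max() <= k          # at most k active latents per row
```

The per-row cap of k active latents is what makes individual features easy to inspect: each latent tends to fire only in a narrow context (e.g., a specific CDR region), unlike dense, polysemantic raw neurons.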
Key conclusions
- TopK SAEs effectively compress and preserve biological information (CDR identity prediction accuracy 0.99 vs. 0.98 for raw neurons) and yield sparse, interpretable activation patterns localized to specific regions (e.g., CDRH3), overcoming neuron polysemanticity.
- High feature-concept correlation (e.g., F1 > 0.5 for IGHJ4 latents) does not guarantee causal steerability; steering on TopK-identified IGHJ4 features failed to consistently increase IGHJ4 proportions in generated sequences.
- Ordered SAEs, with their enforced hierarchical latent structure (via per-index nested grouping and decreasing truncation weights), successfully identify features that enable predictable generative steering, albeit with more complex activation patterns.
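The steering intervention in the last bullet can be sketched as follows. This is a common recipe (adding a scaled, normalized SAE decoder direction for a chosen latent to the model's hidden states during generation); the paper's exact protocol may differ, and all names here are hypothetical.

```python
import numpy as np

def steer(activations, decoder_dir, alpha):
    """Steer generation by adding a scaled, unit-norm SAE decoder
    direction for one target latent to the model's hidden states."""
    direction = decoder_dir / np.linalg.norm(decoder_dir)
    return activations + alpha * direction

rng = np.random.default_rng(1)
h = rng.normal(size=(10, 16))    # hypothetical hidden states (tokens x d_model)
d = rng.normal(size=16)          # decoder column of a target latent (e.g., IGHJ4)
h_steered = steer(h, d, alpha=4.0)

# Projections onto the steering direction shift by exactly alpha.
u = d / np.linalg.norm(d)
assert np.allclose(h_steered @ u - h @ u, 4.0)
```

The paper's finding is that this kind of intervention only works reliably when the latent comes from an Ordered SAE: a TopK latent can correlate strongly with a concept (F1 > 0.5) yet fail to shift generation when amplified.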
Abstract: Sparse autoencoders (SAEs) are a mechanistic interpretability technique that has been used to provide insight into learned concepts within large protein language models. Here, we employ TopK and Ordered SAEs to investigate an autoregressive antibody language model, p-IgGen, and steer its generation. We show that TopK SAEs can reveal biologically meaningful latent features, but high feature–concept correlation does not guarantee causal control over generation. In contrast, Ordered SAEs impose a hierarchical structure that reliably identifies steerable features, but at the expense of more complex and less interpretable activation patterns. These findings advance the mechanistic interpretability of domain-specific protein language models and suggest that, while TopK SAEs suffice for mapping latent features to concepts, Ordered SAEs are preferable when precise generative steering is required.