Paper List
- STAR-GO: Improving Protein Function Prediction by Learning to Hierarchically Integrate Ontology-Informed Semantic Embeddings
  This paper addresses the core challenge of generalizing protein function prediction to unseen or newly introduced Gene Ontology (GO) terms by overcomi...
- Incorporating indel channels into average-case analysis of seed-chain-extend
  This paper addresses the core pain point of bridging the theoretical gap for the widely used seed-chain-extend heuristic by providing the first rigoro...
- Competition, stability, and functionality in excitatory-inhibitory neural circuits
  This paper addresses the core challenge of extending interpretable energy-based frameworks to biologically realistic asymmetric neural networks, where...
- Enhancing Clinical Note Generation with ICD-10, Clinical Ontology Knowledge Graphs, and Chain-of-Thought Prompting Using GPT-4
  This paper addresses the core challenge of generating accurate and clinically relevant patient notes from sparse inputs (ICD codes and basic demograph...
- Hypothesis-Based Particle Detection for Accurate Nanoparticle Counting and Digital Diagnostics
  This paper addresses the core challenge of achieving accurate, interpretable, and training-free nanoparticle counting in digital diagnostic assays, wh...
- MCP-AI: Protocol-Driven Intelligence Framework for Autonomous Reasoning in Healthcare
  This paper addresses the critical gap in healthcare AI systems that lack contextual reasoning, long-term state management, and verifiable workflows by...
- Model Gateway: Model Management Platform for Model-Driven Drug Discovery
  This paper addresses the critical bottleneck of fragmented, ad-hoc model management in pharmaceutical research by providing a centralized, scalable ML...
- Tree Thinking in the Genomic Era: Unifying Models Across Cells, Populations, and Species
  This paper addresses the fragmentation of tree-based inference methods across biological scales by identifying shared algorithmic principles and stati...
Mechanistic Interpretability of Antibody Language Models Using SAEs
Department of Statistics, University of Oxford, UK | Reticular, San Francisco, USA | Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
The 30-Second View
IN SHORT: This work addresses the core challenge of achieving both interpretability and controllable generation in domain-specific protein language models, specifically for antibody design.
Innovation (TL;DR)
- Methodology: First application of Sparse Autoencoders (SAEs) to interrogate autoregressive antibody-specific language models (p-IgGen), moving beyond general protein language models.
- Methodology: Systematic comparison reveals a key trade-off: TopK SAEs yield highly interpretable, monosemantic features (e.g., for CDR identity, with validation accuracy 0.99) but lack causal steerability, while Ordered SAEs provide reliable generative control at the cost of interpretability (see the TopK sketch after this list).
- Biology: Identifies and validates antibody-specific, biologically meaningful latent features, such as CDR identity and germline gene identity (e.g., IGHJ4 prediction with an F1 macro score of 0.93), demonstrating that the model learns immunologically relevant concepts.
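The TopK mechanism behind those interpretable features is compact enough to sketch. The following is a minimal illustration, assuming PyTorch; the model dimension, latent width, and k are invented values, and this is not the authors' implementation.

```python
# Minimal TopK sparse autoencoder sketch: encode language-model activations
# into an overcomplete latent space and keep only the k largest latents per
# token. Dimensions and k below are assumptions for illustration.
import torch
import torch.nn as nn


class TopKSAE(nn.Module):
    def __init__(self, d_model: int, d_latent: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, x: torch.Tensor):
        # x: (batch, d_model) activations taken from the language model.
        z = torch.relu(self.encoder(x))
        # Keep the k largest latent activations per example; zero the rest.
        topk = torch.topk(z, self.k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        return self.decoder(z_sparse), z_sparse


# Reconstruction loss only; the TopK constraint itself enforces sparsity.
sae = TopKSAE(d_model=512, d_latent=8192, k=32)
acts = torch.randn(16, 512)  # stand-in for p-IgGen hidden states
recon, latents = sae(acts)
loss = torch.nn.functional.mse_loss(recon, acts)
```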
Key conclusions
- TopK SAEs effectively compress and preserve biological information (CDR identity prediction accuracy 0.99 vs. 0.98 for raw neurons) and yield sparse, interpretable activation patterns localized to specific regions (e.g., CDRH3), overcoming neuron polysemanticity.
- High feature-concept correlation (e.g., F1 > 0.5 for IGHJ4 latents) does not guarantee causal steerability; steering on TopK-identified IGHJ4 features failed to consistently increase IGHJ4 proportions in generated sequences.
- Ordered SAEs, with their enforced hierarchical latent structure (via per-index nested grouping and decreasing truncation weights), successfully identify features that enable predictable generative steering, albeit with more complex activation patterns (a sketch of the nested-truncation idea follows this list).
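The "per-index nested grouping and decreasing truncation weights" of Ordered SAEs can be read as a training objective that reconstructs the input from progressively longer prefixes of the latent vector, down-weighting the longer ones. The sketch below illustrates that idea under assumed cutoffs and weights; it is not the paper's implementation.

```python
# Illustrative Ordered SAE objective: nested latent prefixes with decreasing
# reconstruction weights, so early latents carry the coarsest structure.
# Cutoffs, weights, and dimensions are assumptions for illustration.
import torch
import torch.nn as nn


class OrderedSAE(nn.Module):
    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.encoder(x))

    def decode_prefix(self, z: torch.Tensor, m: int) -> torch.Tensor:
        # Zero out every latent beyond index m, then decode.
        z_trunc = z.clone()
        z_trunc[..., m:] = 0.0
        return self.decoder(z_trunc)


def ordered_loss(sae, x, cutoffs=(64, 256, 1024, 4096),
                 weights=(1.0, 0.5, 0.25, 0.125)):
    """Weighted sum of reconstruction errors over nested latent prefixes."""
    z = sae.encode(x)
    loss = torch.tensor(0.0)
    for m, w in zip(cutoffs, weights):
        loss = loss + w * nn.functional.mse_loss(sae.decode_prefix(z, m), x)
    return loss


sae = OrderedSAE(d_model=512, d_latent=4096)
loss = ordered_loss(sae, torch.randn(16, 512))
```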
Abstract: Sparse autoencoders (SAEs) are a mechanistic interpretability technique that has been used to provide insight into learned concepts within large protein language models. Here, we employ TopK and Ordered SAEs to investigate an autoregressive antibody language model, p-IgGen, and steer its generation. We show that TopK SAEs can reveal biologically meaningful latent features, but high feature–concept correlation does not guarantee causal control over generation. In contrast, Ordered SAEs impose a hierarchical structure that reliably identifies steerable features, but at the expense of more complex and less interpretable activation patterns. These findings advance the mechanistic interpretability of domain-specific protein language models and suggest that, while TopK SAEs suffice for mapping latent features to concepts, Ordered SAEs are preferable when precise generative steering is required.
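Generative steering with an SAE feature is typically implemented by adding a scaled copy of that feature's decoder direction to the model's hidden states while sampling. The snippet below sketches that pattern; the hook placement, layer index, latent index, and scale are hypothetical and not taken from the paper.

```python
# Sketch of steering an autoregressive LM with an SAE latent: add the
# latent's decoder direction (scaled) to hidden states at a chosen layer.
# The layer path, latent index, and scale below are assumptions.
import torch


def make_steering_hook(sae, latent_idx: int, scale: float):
    # Column of the decoder weight matrix = this latent's direction in
    # activation space (shape: d_model).
    direction = sae.decoder.weight[:, latent_idx].detach()

    def hook(module, inputs, output):
        # output: (batch, seq_len, d_model) hidden states at this layer.
        return output + scale * direction

    return hook


# Hypothetical usage with a HuggingFace-style causal LM wrapping p-IgGen:
# handle = model.transformer.h[6].register_forward_hook(
#     make_steering_hook(sae, latent_idx=1234, scale=4.0))
# generated = model.generate(input_ids, max_new_tokens=120)
# handle.remove()
```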