Paper List
-
An AI Implementation Science Study to Improve Trustworthy Data in a Large Healthcare System
This paper addresses the critical gap between theoretical AI research and real-world clinical implementation by providing a practical framework for as...
-
The BEAT-CF Causal Model: A model for guiding the design of trials and observational analyses of cystic fibrosis exacerbations
This paper addresses the critical gap in cystic fibrosis exacerbation management by providing a formal causal framework that integrates expert knowled...
-
Hierarchical Molecular Language Models (HMLMs)
This paper addresses the core challenge of accurately modeling context-dependent signaling, pathway cross-talk, and temporal dynamics across multiple ...
-
Stability analysis of action potential generation using Markov models of voltage‑gated sodium channel isoforms
This work addresses the challenge of systematically characterizing how the high-dimensional parameter space of Markov models for different sodium chan...
-
Approximate Bayesian Inference on Mechanisms of Network Growth and Evolution
This paper addresses the core challenge of inferring the relative contributions of multiple, simultaneous generative mechanisms in network formation w...
-
EnzyCLIP: A Cross-Attention Dual Encoder Framework with Contrastive Learning for Predicting Enzyme Kinetic Constants
This paper addresses the core challenge of jointly predicting enzyme kinetic parameters (Kcat and Km) by modeling dynamic enzyme-substrate interaction...
-
Tissue stress measurements with Bayesian Inversion Stress Microscopy
This paper addresses the core challenge of measuring absolute, tissue-scale mechanical stress without making assumptions about tissue rheology, which ...
-
DeepFRI Demystified: Interpretability vs. Accuracy in AI Protein Function Prediction
This study addresses the critical gap between high predictive accuracy and biological interpretability in DeepFRI, revealing that the model often prio...
Mechanistic Interpretability of Antibody Language Models Using SAEs
Department of Statistics, University of Oxford, UK | Reticular, San Francisco, USA | Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
30-Second Read
IN SHORT: This work addresses the core challenge of achieving both interpretability and controllable generation in domain-specific protein language models, specifically for antibody design.
Key Innovations
- Methodology First application of Sparse Autoencoders (SAEs) to interrogate autoregressive antibody-specific language models (p-IgGen), moving beyond general protein language models.
- Methodology Systematic comparison reveals a key trade-off: TopK SAEs yield highly interpretable, monosemantic features (e.g., for CDR identity with validation accuracy 0.99) but lack causal steerability, while Ordered SAEs provide reliable generative control at the cost of interpretability.
- Biology Identifies and validates antibody-specific, biologically meaningful latent features, such as CDR identity and germline gene identity (e.g., IGHJ4 prediction with F1 macro score of 0.93), demonstrating the model's learning of immunologically relevant concepts.
Main Conclusions
- TopK SAEs effectively compress and preserve biological information (CDR identity prediction accuracy 0.99 vs. 0.98 for raw neurons) and yield sparse, interpretable activation patterns localized to specific regions (e.g., CDRH3), overcoming neuron polysemanticity.
- High feature-concept correlation (e.g., F1 > 0.5 for IGHJ4 latents) does not guarantee causal steerability; steering on TopK-identified IGHJ4 features failed to consistently increase IGHJ4 proportions in generated sequences.
- Ordered SAEs, with their enforced hierarchical latent structure (via per-index nested grouping and decreasing truncation weights), successfully identify features that enable predictable generative steering, albeit with more complex activation patterns.
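The TopK sparsity constraint referenced above can be sketched in a few lines. This is an illustrative reconstruction with hypothetical names and toy dimensions, not the authors' implementation: a linear encoder produces latent activations, only the k largest (post-ReLU) latents are kept, and a linear decoder reconstructs the input.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """Illustrative TopK SAE forward pass (hypothetical parameter names)."""
    z = np.maximum(W_enc @ (x - b_dec) + b_enc, 0.0)  # ReLU pre-activations
    if k < z.size:
        # Zero out all but the k largest latents (the TopK constraint)
        thresh = np.partition(z, -k)[-k]
        z = np.where(z >= thresh, z, 0.0)
    x_hat = W_dec @ z + b_dec  # linear decoder reconstruction
    return z, x_hat

# Toy sizes; real SAEs use an overcomplete latent space (d_latent >> d_model)
d_model, d_latent, k = 16, 64, 8
W_enc = rng.normal(0, 0.1, (d_latent, d_model))
b_enc = np.zeros(d_latent)
W_dec = rng.normal(0, 0.1, (d_model, d_latent))
b_dec = np.zeros(d_model)

x = rng.normal(size=d_model)          # stand-in for a model activation
z, x_hat = topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k)
print(int((z > 0).sum()))             # at most k latents are active
```

The enforced sparsity is what makes individual latents candidates for monosemantic, interpretable features.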
Abstract: Sparse autoencoders (SAEs) are a mechanistic interpretability technique that has been used to provide insight into learned concepts within large protein language models. Here, we employ TopK and Ordered SAEs to investigate an autoregressive antibody language model, p-IgGen, and steer its generation. We show that TopK SAEs can reveal biologically meaningful latent features, but high feature–concept correlation does not guarantee causal control over generation. In contrast, Ordered SAEs impose a hierarchical structure that reliably identifies steerable features, but at the expense of more complex and less interpretable activation patterns. These findings advance the mechanistic interpretability of domain-specific protein language models and suggest that, while TopK SAEs suffice for mapping latent features to concepts, Ordered SAEs are preferable when precise generative steering is required.
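The steering experiments described above (which succeed for Ordered-SAE features but not reliably for TopK-identified ones) amount to adding a scaled SAE decoder direction back into a model activation. A minimal sketch, with hypothetical names and random stand-in vectors rather than actual p-IgGen states:

```python
import numpy as np

rng = np.random.default_rng(1)

def steer_activation(x, decoder_dir, alpha):
    """Illustrative activation steering: shift a hidden state along a
    (normalized) SAE decoder direction to bias generation toward the
    concept that latent encodes. Names are hypothetical."""
    d = decoder_dir / np.linalg.norm(decoder_dir)
    return x + alpha * d

d_model = 16
x = rng.normal(size=d_model)             # stand-in for a p-IgGen hidden state
feature_dir = rng.normal(size=d_model)   # stand-in for an SAE decoder column
x_steered = steer_activation(x, feature_dir, alpha=4.0)

# The intervention moves the activation by exactly alpha along the feature
proj = (x_steered - x) @ (feature_dir / np.linalg.norm(feature_dir))
print(round(proj, 2))  # → 4.0
```

The paper's finding is that a geometrically valid intervention like this only changes generated sequences predictably (e.g. raising IGHJ4 proportions) when the feature comes from an Ordered SAE.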