Paper List
- SSDLabeler: Realistic semi-synthetic data generation for multi-label artifact classification in EEG
  This paper addresses the core challenge of training robust multi-label EEG artifact classifiers by overcoming the scarcity and limited diversity of ma...
- Decoding Selective Auditory Attention to Musical Elements in Ecologically Valid Music Listening
  This paper addresses the core challenge of objectively quantifying listeners' selective attention to specific musical components (e.g., vocals, drums,...
- Physics-Guided Surrogate Modeling for Machine Learning–Driven DLD Design Optimization
  This paper addresses the core bottleneck of translating microfluidic DLD devices from research prototypes to clinical applications by replacing weeks-...
- Mechanistic Interpretability of Antibody Language Models Using SAEs
  This work addresses the core challenge of achieving both interpretability and controllable generation in domain-specific protein language models, spec...
- The Effective Reproduction Number in the Kermack-McKendrick model with age of infection and reinfection
  This paper addresses the challenge of accurately estimating the time-varying effective reproduction number ℛ(t) in epidemics by incorporating two crit...
- Fluctuating Environments Favor Extreme Dormancy Strategies and Penalize Intermediate Ones
  This paper addresses the core challenge of determining how organisms should tune dormancy duration to match the temporal autocorrelation of their envi...
- Covering Relations in the Poset of Combinatorial Neural Codes
  This work addresses the core challenge of navigating the complex poset structure of neural codes to systematically test the conjecture linking convex ...
- Revealing stimulus-dependent dynamics through statistical complexity
  This paper addresses the core challenge of detecting stimulus-specific patterns in neural population dynamics that remain hidden to traditional variab...
STAR-GO: Improving Protein Function Prediction by Learning to Hierarchically Integrate Ontology-Informed Semantic Embeddings
Department of Computer Engineering, Bogazici University, Istanbul, Turkiye
The 30-Second View
IN SHORT: This paper addresses the core challenge of generalizing protein function prediction to unseen or newly introduced Gene Ontology (GO) terms, overcoming the limitation of existing models that prioritize either graph structure or semantic meaning at the expense of the other.
Innovation (TL;DR)
- Methodology: Introduces a novel GO embedding module that integrates textual definitions (via SBERT-BioBERT) with ontology graph structure through a multi-task autoencoder, learning unified representations that preserve both semantic similarity and hierarchical dependencies (see the sketch after this list).
- Methodology: Proposes a hierarchical Transformer decoder that processes GO terms in topological order (ancestors to descendants) using causal self-attention, enabling information propagation across ontology levels and capturing functional dependencies (a decoder sketch follows the abstract below).
- Biology: Demonstrates superior zero-shot generalization to unseen GO terms, particularly for Molecular Function and Biological Process terms, by effectively leveraging semantic information from textual definitions, which transfers better to novel ontology concepts than purely structural embeddings.
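The GO embedding module above can be pictured as a small multi-task autoencoder. The sketch below is illustrative only: the layer sizes, the MSE reconstruction loss, and a parent-prediction head as the structural objective are assumptions rather than the published design; only the idea of jointly fitting a semantic (text-reconstruction) objective and a structural (ontology-graph) objective on SBERT-BioBERT definition embeddings comes from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GOEmbeddingAutoencoder(nn.Module):
    """Illustrative multi-task autoencoder: encodes an SBERT-BioBERT
    definition embedding into a unified GO representation, trained to
    (a) reconstruct the text embedding and (b) respect graph structure
    via a parent-prediction head (assumed structural objective)."""

    def __init__(self, text_dim=768, latent_dim=256, n_terms=10000):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(text_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, text_dim))
        # Structural head: predicts which GO terms are parents of this term.
        self.parent_head = nn.Linear(latent_dim, n_terms)

    def forward(self, text_emb):
        z = self.encoder(text_emb)            # unified GO representation
        recon = self.decoder(z)               # semantic reconstruction
        parent_logits = self.parent_head(z)   # structural prediction
        return z, recon, parent_logits

def multitask_loss(text_emb, parent_targets, model, alpha=1.0, beta=1.0):
    """text_emb: (batch, text_dim) SBERT-BioBERT definition embeddings.
    parent_targets: (batch, n_terms) multi-hot float rows of the GO DAG."""
    z, recon, parent_logits = model(text_emb)
    semantic = F.mse_loss(recon, text_emb)
    structural = F.binary_cross_entropy_with_logits(parent_logits, parent_targets)
    return alpha * semantic + beta * structural
```

Because the latent code must satisfy both objectives at once, it cannot discard either the textual semantics or the term's position in the ontology, which is the stated goal of the unified GO representation.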
Key conclusions
- STAR-GO achieves state-of-the-art or competitive performance across all three GO subontologies (BP, CC, MF), with the highest AUC scores (e.g., 0.989 for BP, 0.988 for CC, 0.995 for MF), indicating strong term-level discriminability.
- In zero-shot evaluation on 16 held-out GO terms, STAR-GO variants achieve the highest AUCs in 13 cases, significantly outperforming baselines such as DeepGOZero and DeepGO-SE and demonstrating superior generalization to unseen functions (a sketch of the term-level AUC computation follows this list).
- Ablation studies reveal that semantic embeddings (STAR_T) achieve the best zero-shot results for most MF and BP terms (e.g., AUC of 0.949 for GO:0001228), while structural embeddings (STAR_S) perform best for a few terms but poorly for MF, highlighting the critical role of semantic information for generalization.
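For context, term-level AUCs like those quoted above are typically computed per held-out GO term. A minimal sketch using scikit-learn follows; the annotation matrix, score matrix, and term_index mapping are hypothetical placeholders, not artifacts released with the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def per_term_auc(y_true, y_score, held_out_terms, term_index):
    """Compute an AUC for each held-out GO term.

    y_true:  (n_proteins, n_terms) binary annotation matrix
    y_score: (n_proteins, n_terms) predicted probabilities
    held_out_terms: GO IDs excluded from training, e.g. ['GO:0001228', ...]
    term_index: dict mapping GO ID -> column index in the matrices
    """
    aucs = {}
    for go_id in held_out_terms:
        col = term_index[go_id]
        labels, scores = y_true[:, col], y_score[:, col]
        if labels.min() == labels.max():
            continue  # AUC is undefined without both positive and negative proteins
        aucs[go_id] = roc_auc_score(labels, scores)
    return aucs
```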
Abstract
Motivation: Accurate prediction of protein function is essential for elucidating molecular mechanisms and advancing biological and therapeutic discovery. Yet experimental annotation lags far behind the rapid growth of protein sequence data. Computational approaches address this gap by associating proteins with Gene Ontology (GO) terms, which encode functional knowledge through hierarchical relations and textual definitions. However, existing models often emphasize one modality over the other, limiting their ability to generalize, particularly to unseen or newly introduced GO terms that frequently arise as the ontology evolves, rendering previously trained models outdated.
Results: We present STAR-GO, a Transformer-based framework that jointly models the semantic and structural characteristics of GO terms to enhance zero-shot protein function prediction. STAR-GO integrates textual definitions with ontology graph structure to learn unified GO representations, which are processed in hierarchical order to propagate information from general to specific terms. These representations are then aligned with protein sequence embeddings to capture sequence–function relationships. STAR-GO achieves state-of-the-art performance and superior zero-shot generalization, demonstrating the utility of integrating semantics and structure for robust and adaptable protein function prediction.
Availability: Code and pre-trained models are available at https://github.com/boun-tabi-lifelu/stargo.
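To make the hierarchical decoding and sequence–function alignment concrete, here is a minimal sketch assuming a standard PyTorch TransformerDecoder: GO-term representations are fed in topological order under a causal mask, the protein sequence embedding serves as cross-attention memory, and per-term logits come from a dot product between the refined term representations and a pooled protein embedding. The memory choice, the pooling, and the dot-product scoring head are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalGODecoder(nn.Module):
    """Illustrative decoder: causal self-attention over GO-term embeddings
    arranged in topological order (ancestors before descendants), followed
    by dot-product alignment with a protein embedding. Cross-attention
    memory and the scoring head are assumptions, not the published design."""

    def __init__(self, dim=256, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)

    def forward(self, go_emb, protein_emb):
        # go_emb:      (batch, n_terms, dim), GO representations in topological order
        # protein_emb: (batch, seq_len, dim), protein sequence embedding
        n_terms = go_emb.size(1)
        # Causal mask: each term may attend only to itself and the terms that
        # precede it in topological order (its ancestors and earlier siblings).
        causal_mask = torch.triu(
            torch.full((n_terms, n_terms), float("-inf")), diagonal=1)
        refined = self.decoder(tgt=go_emb, memory=protein_emb,
                               tgt_mask=causal_mask)
        # Per-term logits via alignment of protein and GO representations.
        pooled = protein_emb.mean(dim=1, keepdim=True)   # (batch, 1, dim)
        logits = (refined * pooled).sum(dim=-1)          # (batch, n_terms)
        return logits
```

The causal mask is what turns an ordinary decoder into a hierarchy-aware one: because ancestors are always placed before descendants, information can only flow from general terms toward more specific ones, matching the propagation described in the abstract.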