Paper List
- SpikGPT: A High-Accuracy and Interpretable Spiking Attention Framework for Single-Cell Annotation
  This paper addresses the core challenge of robust single-cell annotation across heterogeneous datasets with batch effects and the critical need to ide...
- Unlocking hidden biomolecular conformational landscapes in diffusion models at inference time
  This paper addresses the core challenge of efficiently and accurately sampling the conformational landscape of biomolecules from diffusion-based struc...
- Personalized optimization of pediatric HD-tDCS for dose consistency and target engagement
  This paper addresses the critical limitation of one-size-fits-all HD-tDCS protocols in pediatric populations by developing a personalized optimization...
- Realistic Transition Paths for Large Biomolecular Systems: A Langevin Bridge Approach
  This paper addresses the core challenge of generating physically realistic and computationally efficient transition paths between distinct protein con...
- Consistent Synthetic Sequences Unlock Structural Diversity in Fully Atomistic De Novo Protein Design
  This paper addresses the core pain point of low sequence-structure alignment in existing synthetic datasets (e.g., AFDB), which severely limits the pe...
- MoRSAIK: Sequence Motif Reactor Simulation, Analysis and Inference Kit in Python
  This work addresses the computational bottleneck in simulating prebiotic RNA reactor dynamics by developing a Python package that tracks sequence moti...
- On the Approximation of Phylogenetic Distance Functions by Artificial Neural Networks
  This paper addresses the core challenge of developing computationally efficient and scalable neural network architectures that can learn accurate phyl...
- EcoCast: A Spatio-Temporal Model for Continual Biodiversity and Climate Risk Forecasting
  This paper addresses the critical bottleneck in conservation: the lack of timely, high-resolution, near-term forecasts of species distribution shifts ...
A Standardized Framework for Evaluating Gene Expression Generative Models
University of Cambridge | Wellcome Sanger Institute | Sapienza University of Rome | ISTI-CNR
30-Second Quick Read
IN SHORT: This paper addresses the critical lack of standardized evaluation protocols for single-cell gene expression generative models, where inconsistent metric implementations and computation spaces make cross-study comparisons impossible.
Core Innovations
- Methodology: Introduces GGE, the first unified Python framework with explicit computation-space parameterization (raw, PCA, DEG-restricted) for standardized evaluation of generative models.
- Methodology: Proposes a perturbation-effect correlation metric that measures the direction and magnitude of perturbation responses rather than raw expression correlation, focusing evaluation on biologically relevant signals.
- Methodology: Demonstrates that Wasserstein distance values vary by nearly an order of magnitude (17.2 to 104.3) depending solely on the computation space, quantifying the standardization problem (see the sketch after this list).
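To make the computation-space dependence concrete, here is a minimal, self-contained sketch. It is not the GGE API: `real` and `generated` are hypothetical cell-by-gene matrices of synthetic counts, and an averaged per-gene 1D Wasserstein distance stands in for the paper's W₂ metric. The point is only that the same metric, on the same data, yields different values in raw space versus a 25-component PCA embedding:

```python
# Illustrative sketch (not the GGE API): how a Wasserstein-style metric changes
# with the computation space (raw expression vs. a 25-component PCA embedding).
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.decomposition import PCA

def mean_1d_wasserstein(a: np.ndarray, b: np.ndarray) -> float:
    """Average the 1D Wasserstein distance over features -- a cheap proxy
    for a distributional distance between two cell-by-feature matrices."""
    return float(np.mean([wasserstein_distance(a[:, j], b[:, j])
                          for j in range(a.shape[1])]))

rng = np.random.default_rng(0)
real = rng.poisson(2.0, size=(500, 2000)).astype(float)       # stand-in "real" counts
generated = rng.poisson(2.2, size=(500, 2000)).astype(float)  # stand-in "generated" counts

# Raw space: distance computed directly on expression values.
d_raw = mean_1d_wasserstein(real, generated)

# PCA-25 space: fit PCA on the real cells, project both sets, then compare.
pca = PCA(n_components=25).fit(real)
d_pca = mean_1d_wasserstein(pca.transform(real), pca.transform(generated))

print(f"raw space: {d_raw:.2f}  |  PCA-25 space: {d_pca:.2f}")
```

Because the two numbers are not comparable across spaces, any reported metric needs its computation space stated explicitly, which is what GGE's parameterization enforces.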
Key Findings
- Metric values vary substantially with implementation choices: the W₂ distance ranges from 17.2 (PCA-25) to 104.3 (raw space) on identical data, highlighting the critical need for standardization.
- DEG selection strategy affects correlation metrics: top-20 DEG selection yields a Pearson correlation of 0.614±0.066, versus 0.506±0.217 with strict threshold selection (lfc>1, p<0.01), on the Norman dataset.
- Perturbation-effect correlation in DEG space provides biologically meaningful evaluation, focusing on genes that actually respond to perturbations rather than steady-state background expression (a toy computation is sketched below).
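The following sketch illustrates the idea under stated assumptions; it is not the GGE implementation. Per-gene effects are taken as mean perturbed minus mean control expression, the "DEG set" is approximated by the top-k genes ranked by absolute real effect (standing in for a proper differential-expression test), and the metric is the Pearson correlation between real and generated effect vectors restricted to that set:

```python
# Illustrative sketch (not the GGE implementation) of a perturbation-effect
# correlation computed on a DEG subset. `control`, `real_pert`, `gen_pert`
# are hypothetical (cells x genes) arrays of synthetic data.
import numpy as np
from scipy.stats import pearsonr

def perturbation_effect_corr(control, real_pert, gen_pert, top_k=20):
    # Per-gene effect = mean perturbed expression minus mean control expression.
    real_effect = real_pert.mean(axis=0) - control.mean(axis=0)
    gen_effect = gen_pert.mean(axis=0) - control.mean(axis=0)
    # Restrict to the genes with the largest real responses (a stand-in DEG set).
    deg_idx = np.argsort(np.abs(real_effect))[-top_k:]
    r, _ = pearsonr(real_effect[deg_idx], gen_effect[deg_idx])
    return r

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, size=(300, 1000))
true_shift = np.zeros(1000)
true_shift[:50] = rng.normal(2.0, 0.5, size=50)          # 50 genes truly respond
real_pert = control + true_shift + rng.normal(0, 0.3, size=control.shape)
gen_pert = control + 0.8 * true_shift + rng.normal(0, 0.5, size=control.shape)

print(f"perturbation-effect correlation (top-20 DEGs): "
      f"{perturbation_effect_corr(control, real_pert, gen_pert):.3f}")
```

Restricting the correlation to responding genes keeps unperturbed background expression from inflating the score, which is the biological motivation given for the DEG-focused metric.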
Abstract: The rapid development of generative models for single-cell gene expression data has created an urgent need for standardised evaluation frameworks. Current evaluation practices suffer from inconsistent metric implementations, incomparable hyperparameter choices, and a lack of biologically-grounded metrics. We present Generated Genetic Expression Evaluator (GGE), an open-source Python framework that addresses these challenges by providing a comprehensive suite of distributional metrics with explicit computation space options and biologically-motivated evaluation through differentially expressed gene (DEG)-focused analysis and perturbation-effect correlation, enabling standardized reporting and reproducible benchmarking. Through extensive analysis of the single-cell generative modeling literature, we identify that no standardized evaluation protocol exists. Methods report incomparable metrics computed in different spaces with different hyperparameters. We demonstrate that metric values vary substantially depending on implementation choices, highlighting the critical need for standardization. GGE enables fair comparison across generative approaches and accelerates progress in perturbation response prediction, cellular identity modeling, and counterfactual inference.