Paper List
-
An AI Implementation Science Study to Improve Trustworthy Data in a Large Healthcare System
This paper addresses the critical gap between theoretical AI research and real-world clinical implementation by providing a practical framework for as...
-
The BEAT-CF Causal Model: A model for guiding the design of trials and observational analyses of cystic fibrosis exacerbations
This paper addresses the critical gap in cystic fibrosis exacerbation management by providing a formal causal framework that integrates expert knowledge...
-
Hierarchical Molecular Language Models (HMLMs)
This paper addresses the core challenge of accurately modeling context-dependent signaling, pathway cross-talk, and temporal dynamics across multiple ...
-
Stability analysis of action potential generation using Markov models of voltage‑gated sodium channel isoforms
This work addresses the challenge of systematically characterizing how the high-dimensional parameter space of Markov models for different sodium channel...
-
Approximate Bayesian Inference on Mechanisms of Network Growth and Evolution
This paper addresses the core challenge of inferring the relative contributions of multiple, simultaneous generative mechanisms in network formation w...
-
EnzyCLIP: A Cross-Attention Dual Encoder Framework with Contrastive Learning for Predicting Enzyme Kinetic Constants
This paper addresses the core challenge of jointly predicting enzyme kinetic parameters (Kcat and Km) by modeling dynamic enzyme-substrate interaction...
-
Tissue stress measurements with Bayesian Inversion Stress Microscopy
This paper addresses the core challenge of measuring absolute, tissue-scale mechanical stress without making assumptions about tissue rheology, which ...
-
DeepFRI Demystified: Interpretability vs. Accuracy in AI Protein Function Prediction
This study addresses the critical gap between high predictive accuracy and biological interpretability in DeepFRI, revealing that the model often prioritizes...
-
A Standardized Framework for Evaluating Gene Expression Generative Models
University of Cambridge | Wellcome Sanger Institute | Sapienza University of Rome | ISTI-CNR
30-Second Read
IN SHORT: This paper addresses the critical lack of standardized evaluation protocols for single-cell gene expression generative models, where inconsistent metric implementations and computation spaces make cross-study comparisons impossible.
Core Innovations
- Methodology: Introduces GGE, the first unified Python framework with explicit computation space parameterization (raw, PCA, DEG-restricted) for standardized evaluation of generative models (a minimal sketch follows this list).
- Methodology: Proposes a perturbation-effect correlation metric that measures the direction and magnitude of perturbation responses rather than raw expression correlation, focusing evaluation on biologically relevant signals.
- Methodology: Demonstrates that Wasserstein distance values vary roughly sixfold (17.2 to 104.3) depending solely on computation space, quantifying the standardization problem.
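To make the computation-space idea concrete, the sketch below shows one way a space-parameterized W₂ comparison could be wired up. GGE's actual API is not shown in this summary, so the names `to_space` and `w2_sliced`, and the use of a sliced approximation to the 2-Wasserstein distance, are illustrative assumptions rather than the framework's real implementation.

```python
# Illustrative sketch only: GGE's actual API is not given in this summary.
import numpy as np
from sklearn.decomposition import PCA

def to_space(real, gen, space="raw", n_pcs=25, deg_idx=None):
    """Project real and generated cell-by-gene matrices into one computation space."""
    if space == "raw":
        return real, gen
    if space == "pca":
        pca = PCA(n_components=n_pcs).fit(real)   # fit on real cells only
        return pca.transform(real), pca.transform(gen)
    if space == "deg":
        return real[:, deg_idx], gen[:, deg_idx]  # restrict to DEG columns
    raise ValueError(f"unknown computation space: {space}")

def w2_sliced(x, y, n_proj=200, seed=0):
    """Sliced 2-Wasserstein: average 1-D W2 over random unit projections."""
    rng = np.random.default_rng(seed)
    n = min(len(x), len(y))                        # equal sample sizes for 1-D W2
    x = x[rng.choice(len(x), n, replace=False)]
    y = y[rng.choice(len(y), n, replace=False)]
    dirs = rng.normal(size=(n_proj, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    sq = [np.mean((np.sort(x @ v) - np.sort(y @ v)) ** 2) for v in dirs]
    return float(np.sqrt(np.mean(sq)))

# The same data yields very different numbers per space, which is the
# standardization problem the paper quantifies (e.g. PCA-25 vs raw).
real = np.random.default_rng(1).lognormal(size=(500, 2000))
gen = np.random.default_rng(2).lognormal(size=(500, 2000))
for space in ("raw", "pca"):
    xr, xg = to_space(real, gen, space=space)
    print(space, round(w2_sliced(xr, xg), 2))
```

Fitting the PCA on the real cells only and then projecting both sets is one common convention; which convention a framework adopts is exactly the kind of choice GGE is meant to make explicit.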
Key Findings
- Metric values vary substantially with implementation choices: the W₂ distance ranges from 17.2 (PCA-25) to 104.3 (raw space) on identical data, highlighting the critical need for standardization.
- DEG selection strategy affects correlation metrics: on the Norman dataset, top-20 DEG selection yields a Pearson correlation of 0.614±0.066, versus 0.506±0.217 for strict threshold selection (lfc>1, p<0.01).
- Perturbation-effect correlation in DEG space provides biologically meaningful evaluation, focusing on genes that actually respond to perturbations rather than on steady-state background expression (a minimal sketch follows this list).
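Below is a minimal sketch of the perturbation-effect correlation with threshold-based DEG restriction, assuming effects are summarized as mean expression shifts and DEGs are chosen by log-fold-change and p-value cutoffs. The names `effect`, `deg_mask`, and `effect_correlation` are hypothetical; GGE's exact definitions may differ.

```python
# Hypothetical reconstruction of the perturbation-effect correlation metric;
# GGE's exact definitions are not given in this summary.
import numpy as np
from scipy.stats import pearsonr, ttest_ind

def effect(pert, ctrl):
    """Per-gene perturbation effect: mean shift of perturbed vs control cells."""
    return pert.mean(axis=0) - ctrl.mean(axis=0)

def deg_mask(pert, ctrl, lfc_min=1.0, p_max=0.01):
    """Strict-threshold DEG selection (cf. lfc>1, p<0.01 above)."""
    lfc = np.log2(pert.mean(axis=0) + 1) - np.log2(ctrl.mean(axis=0) + 1)
    _, p = ttest_ind(pert, ctrl, axis=0)          # per-gene two-sample t-test
    return (np.abs(lfc) > lfc_min) & (p < p_max)

def effect_correlation(real_pert, real_ctrl, gen_pert, gen_ctrl, degs=None):
    """Pearson r between real and generated effects, optionally DEG-restricted."""
    e_real = effect(real_pert, real_ctrl)
    e_gen = effect(gen_pert, gen_ctrl)
    if degs is not None:
        e_real, e_gen = e_real[degs], e_gen[degs]
    return pearsonr(e_real, e_gen)[0]
```

Swapping `deg_mask` for a top-20 selection by absolute effect size would reproduce the alternative DEG strategy whose numbers are compared above.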
Abstract: The rapid development of generative models for single-cell gene expression data has created an urgent need for standardized evaluation frameworks. Current evaluation practices suffer from inconsistent metric implementations, incomparable hyperparameter choices, and a lack of biologically grounded metrics. We present the Generated Genetic Expression Evaluator (GGE), an open-source Python framework that addresses these challenges by providing a comprehensive suite of distributional metrics with explicit computation space options, together with biologically motivated evaluation through differentially expressed gene (DEG)-focused analysis and perturbation-effect correlation, enabling standardized reporting and reproducible benchmarking. Through extensive analysis of the single-cell generative modeling literature, we find that no standardized evaluation protocol exists: methods report incomparable metrics computed in different spaces with different hyperparameters. We demonstrate that metric values vary substantially with implementation choices, highlighting the critical need for standardization. GGE enables fair comparison across generative approaches and accelerates progress in perturbation response prediction, cellular identity modeling, and counterfactual inference.