Paper List
- GOPHER: Optimization-based Phenotype Randomization for Genome-Wide Association Studies with Differential Privacy
  This paper addresses the core challenge of balancing rigorous privacy protection with data utility when releasing full GWAS summary statistics, overco...
- Real-time Cricket Sorting by Sex: A Low-Cost Embedded Solution Using YOLOv8 and Raspberry Pi
  This paper addresses the critical bottleneck in industrial insect farming: the lack of automated, real-time sex sorting systems for Acheta domesticus ...
- Training Dynamics of Learning 3D-Rotational Equivariance
  This work addresses the core dilemma of whether to use computationally expensive equivariant architectures or faster symmetry-agnostic models with dat...
- Fast and Accurate Node-Age Estimation Under Fossil Calibration Uncertainty Using the Adjusted Pairwise Likelihood
  This paper addresses the dual challenge of computational inefficiency and sensitivity to fossil calibration errors in Bayesian divergence time estimat...
- Few-shot Protein Fitness Prediction via In-context Learning and Test-time Training
  This paper addresses the core challenge of accurately predicting protein fitness with only a handful of experimental observations, where data collecti...
- scCluBench: Comprehensive Benchmarking of Clustering Algorithms for Single-Cell RNA Sequencing
  This paper addresses the critical gap of fragmented and non-standardized benchmarking in single-cell RNA-seq clustering, which hinders objective compa...
- Simulation and inference methods for non-Markovian stochastic biochemical reaction networks
  This paper addresses the computational bottleneck of simulating and performing Bayesian inference for non-Markovian biochemical systems with history-d...
- Assessment of Simulation-based Inference Methods for Stochastic Compartmental Models
  This paper addresses the core challenge of performing accurate Bayesian parameter inference for stochastic epidemic models when the likelihood functio...
A Standardized Framework for Evaluating Gene Expression Generative Models
University of Cambridge | Wellcome Sanger Institute | Sapienza University of Rome | ISTI-CNR
30-Second Quick Read
IN SHORT: This paper addresses the critical lack of standardized evaluation protocols for single-cell gene expression generative models, where inconsistent metric implementations and computation spaces make cross-study comparisons impossible.
Core Innovations
- Methodology: Introduces GGE, the first unified Python framework with explicit computation-space parameterization (raw, PCA, DEG-restricted) for standardized evaluation of generative models (see the sketch after this list).
- Methodology: Proposes a perturbation-effect correlation metric that measures the direction and magnitude of perturbation responses rather than raw expression correlation, focusing evaluation on biologically relevant signals.
- Methodology: Demonstrates that Wasserstein distance values vary by nearly an order of magnitude (17.2 to 104.3) depending solely on the computation space, quantifying the standardization problem.
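Since this summary does not describe GGE's actual API, the following is only a minimal sketch of what explicit computation-space parameterization can look like: the same distributional metric (here a simplified per-gene 1D Wasserstein distance standing in for W₂) computed in raw, PCA, or DEG-restricted space. All function names and defaults are hypothetical.

```python
# Minimal sketch (hypothetical names, not the GGE API): the same metric computed in
# different computation spaces, which is the source of the 17.2 vs 104.3 gap above.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.decomposition import PCA

def mean_per_gene_wasserstein(real: np.ndarray, gen: np.ndarray) -> float:
    """Average 1D Wasserstein distance over genes (a simplification of W2)."""
    return float(np.mean([wasserstein_distance(real[:, j], gen[:, j])
                          for j in range(real.shape[1])]))

def evaluate(real: np.ndarray, gen: np.ndarray, space: str = "raw",
             n_pcs: int = 25, deg_idx=None) -> float:
    """Compute the metric in a chosen computation space: 'raw', 'pca', or 'deg'."""
    if space == "pca":
        pca = PCA(n_components=n_pcs).fit(real)        # project both sets, fitted on real cells
        real, gen = pca.transform(real), pca.transform(gen)
    elif space == "deg":
        real, gen = real[:, deg_idx], gen[:, deg_idx]  # restrict to differentially expressed genes
    return mean_per_gene_wasserstein(real, gen)

# evaluate(X_real, X_gen, space="raw") and evaluate(X_real, X_gen, space="pca")
# generally give very different numbers for identical data.
```

Making the computation space an explicit, reported parameter is what allows numbers from different studies to be compared at all.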
Main Conclusions
- Metric values vary substantially with implementation choices: the W₂ distance ranges from 17.2 (PCA-25) to 104.3 (raw space) on identical data, highlighting the critical need for standardization.
- The DEG selection strategy affects correlation metrics: top-20 DEG selection yields a Pearson correlation of 0.614±0.066, versus 0.506±0.217 for strict threshold selection (lfc > 1, p < 0.01) on the Norman dataset (see the sketch after this list).
- Perturbation-effect correlation in DEG space provides a biologically meaningful evaluation, focusing on genes that actually respond to perturbations rather than steady-state background expression.
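As a rough illustration of the perturbation-effect correlation idea (not the paper's exact procedure), the sketch below computes effects as mean expression shifts relative to control cells, restricts them to a simple top-k DEG proxy, and correlates real against generated effects; all names and the ranking rule are assumptions.

```python
# Hypothetical sketch of perturbation-effect correlation in DEG space; the effect is a
# simple mean shift versus control, and the DEG set is a top-k proxy ranked by the
# magnitude of the real effect. Not the paper's exact definition.
import numpy as np
from scipy.stats import pearsonr

def perturbation_effect(perturbed: np.ndarray, control: np.ndarray) -> np.ndarray:
    """Per-gene effect: mean expression shift of perturbed cells relative to control."""
    return perturbed.mean(axis=0) - control.mean(axis=0)

def effect_correlation(real_pert, real_ctrl, gen_pert, gen_ctrl, top_k=20) -> float:
    real_effect = perturbation_effect(real_pert, real_ctrl)
    gen_effect = perturbation_effect(gen_pert, gen_ctrl)
    deg_idx = np.argsort(-np.abs(real_effect))[:top_k]   # strongest-responding genes
    r, _ = pearsonr(real_effect[deg_idx], gen_effect[deg_idx])
    return float(r)
```

Swapping `top_k` selection for a threshold-based rule (e.g. lfc > 1, p < 0.01) is exactly the kind of choice that shifts the reported correlations in the findings above.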
Abstract: The rapid development of generative models for single-cell gene expression data has created an urgent need for standardised evaluation frameworks. Current evaluation practices suffer from inconsistent metric implementations, incomparable hyperparameter choices, and a lack of biologically-grounded metrics. We present Generated Genetic Expression Evaluator (GGE), an open-source Python framework that addresses these challenges by providing a comprehensive suite of distributional metrics with explicit computation space options and biologically-motivated evaluation through differentially expressed gene (DEG)-focused analysis and perturbation-effect correlation, enabling standardized reporting and reproducible benchmarking. Through extensive analysis of the single-cell generative modeling literature, we identify that no standardized evaluation protocol exists. Methods report incomparable metrics computed in different spaces with different hyperparameters. We demonstrate that metric values vary substantially depending on implementation choices, highlighting the critical need for standardization. GGE enables fair comparison across generative approaches and accelerates progress in perturbation response prediction, cellular identity modeling, and counterfactual inference.