Paper List
-
GOPHER: Optimization-based Phenotype Randomization for Genome-Wide Association Studies with Differential Privacy
This paper addresses the core challenge of balancing rigorous privacy protection with data utility when releasing full GWAS summary statistics, overco...
-
Real-time Cricket Sorting by Sex: A Low-Cost Embedded Solution Using YOLOv8 and Raspberry Pi
This paper addresses the critical bottleneck in industrial insect farming: the lack of automated, real-time sex sorting systems for Acheta domesticus ...
-
Training Dynamics of Learning 3D-Rotational Equivariance
This work addresses the core dilemma of whether to use computationally expensive equivariant architectures or faster symmetry-agnostic models with dat...
-
Fast and Accurate Node-Age Estimation Under Fossil Calibration Uncertainty Using the Adjusted Pairwise Likelihood
This paper addresses the dual challenge of computational inefficiency and sensitivity to fossil calibration errors in Bayesian divergence time estimat...
-
Few-shot Protein Fitness Prediction via In-context Learning and Test-time Training
This paper addresses the core challenge of accurately predicting protein fitness with only a handful of experimental observations, where data collecti...
-
scCluBench: Comprehensive Benchmarking of Clustering Algorithms for Single-Cell RNA Sequencing
This paper addresses the critical gap of fragmented and non-standardized benchmarking in single-cell RNA-seq clustering, which hinders objective compa...
-
Simulation and inference methods for non-Markovian stochastic biochemical reaction networks
This paper addresses the computational bottleneck of simulating and performing Bayesian inference for non-Markovian biochemical systems with history-d...
-
Assessment of Simulation-based Inference Methods for Stochastic Compartmental Models
This paper addresses the core challenge of performing accurate Bayesian parameter inference for stochastic epidemic models when the likelihood functio...
-
Budget-Sensitive Discovery Scoring: A Formally Verified Framework for Evaluating AI-Guided Scientific Selection
Indian Institute of Information Technology Allahabad (IIITA) | National Institute of Electronics & Information Technology (NIELIT)
30-Second Read
IN SHORT: This paper addresses the critical gap in evaluating AI-guided scientific selection strategies under realistic budget constraints, where existing metrics fail to jointly model budget limitations, asymmetric error costs, and the option to abstain.
Key Innovations
- Methodology: Introduces the Budget-Sensitive Discovery Score (BSDS), a formally verified metric with 20 machine-checked theorems that jointly penalizes false discoveries (λ-weighted FDR) and excessive abstention (γ-weighted coverage gap) at each budget level.
- Methodology: Proposes the Discovery Quality Score (DQS) as a budget-averaged summary statistic that prevents proposers from inflating scores by performing well at cherry-picked budgets.
- Biology: Provides the first comprehensive evaluation showing that LLMs add no marginal value to existing ML pipelines for drug discovery candidate selection, with the simple RF-based Greedy-ML proposer achieving the best DQS (-0.046).
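The BSDS/DQS idea above can be sketched in a few lines of Python. This is a hypothetical illustration only: the summary does not give the paper's exact functional form, so the sketch assumes BSDS is the negated sum of a λ-weighted false discovery rate and a γ-weighted coverage gap, and that DQS is the mean BSDS over a grid of budgets. The function names `bsds` and `dqs` and the `proposer` callable are illustrative, not the authors' API.

```python
import numpy as np

def bsds(y_true, selected, budget, lam=1.0, gamma=1.0):
    """Hypothetical sketch of a Budget-Sensitive Discovery Score.

    Assumed form: BSDS = -(lam * FDR + gamma * coverage_gap), so a
    perfect selection at full budget scores 0 and penalties push the
    score negative. The paper's verified definition may differ.
    """
    n_sel = len(selected)
    tp = sum(y_true[i] for i in selected)          # true discoveries
    fdr = (n_sel - tp) / n_sel if n_sel > 0 else 0.0
    # Coverage gap: fraction of the budget left unused via abstention.
    coverage_gap = max(0.0, (budget - n_sel) / budget)
    return -(lam * fdr + gamma * coverage_gap)

def dqs(y_true, proposer, budgets, lam=1.0, gamma=1.0):
    """Budget-averaged summary: mean BSDS over a grid of budgets,
    so a proposer cannot inflate its score at one cherry-picked budget."""
    return float(np.mean([bsds(y_true, proposer(b), b, lam, gamma)
                          for b in budgets]))
```

Averaging over budgets is what blocks cherry-picking: a proposer that only performs well at one budget still pays the FDR and abstention penalties at every other budget in the grid.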
Main Conclusions
- The RF-based Greedy-ML proposer achieves the best DQS (-0.046), outperforming all 39 proposers including 28 LLM configurations, demonstrating that simple ML baselines remain superior for drug discovery candidate selection.
- No LLM configuration (zero-shot or few-shot) surpasses the Greedy-ML baseline on HIV or Tox21 datasets, establishing that LLMs provide no marginal value over existing trained classifiers in realistic deployment scenarios.
- The proposer hierarchy generalizes robustly across five MoleculeNet benchmarks spanning extreme prevalence ranges (0.18%–46.2%) and a non-drug AV safety domain, with parameter robustness demonstrated across a 9×7 grid (τ≥0.636, mean τ=0.863).
Abstract: Scientific discovery increasingly relies on AI systems to select candidates for expensive experimental validation, yet no principled, budget-aware evaluation framework exists for comparing selection strategies—a gap intensified by large language models (LLMs), which generate plausible scientific proposals without reliable downstream evaluation. We introduce the Budget-Sensitive Discovery Score (BSDS), a formally verified metric—20 theorems machine-checked by the Lean 4 proof assistant—that jointly penalizes false discoveries (λ-weighted FDR) and excessive abstention (γ-weighted coverage gap) at each budget level. Its budget-averaged form, the Discovery Quality Score (DQS), provides a single summary statistic that no proposer can inflate by performing well at a cherry-picked budget. As a case study, we apply BSDS/DQS to a question of broad interest: do LLMs add marginal value to an existing ML pipeline for drug discovery candidate selection? We evaluate 39 proposers—11 mechanistic variants, 14 zero-shot LLM configurations, and 14 few-shot LLM configurations—using SMILES (Simplified Molecular Input Line Entry System) representations on MoleculeNet HIV (41,127 compounds, 3.5% active, 1,000 bootstrap replicates) under both random and scaffold splits. Three findings emerge. First, the simple RF-based Greedy-ML proposer achieves the best DQS (−0.046), outperforming all MLP variants and LLM configurations; additional MLP reranking layers degrade rather than improve the RF's discriminative ranking. Second, no LLM surpasses the Greedy-ML baseline under either zero-shot or few-shot evaluation on HIV or Tox21—establishing that LLMs provide no marginal value over an existing trained classifier, the realistic deployment scenario. Third, the proposer hierarchy generalizes across five MoleculeNet benchmarks spanning 0.18%–46.2% prevalence, a non-drug AV safety domain, and a 9×7 grid of penalty parameters (τ≥0.636, mean τ=0.863). The framework applies in principle to any setting where candidates are selected under budget constraints and asymmetric error costs, as demonstrated here across pharmaceutical screening and autonomous vehicle safety triage.
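The abstract's claim of 20 machine-checked theorems refers to Lean 4 proofs. As a toy illustration of the kind of property such verification can establish (not one of the paper's actual theorems), the following Lean 4 sketch, assuming Mathlib is available, proves that a score defined as the negation of nonnegatively weighted, nonnegative penalties can never be positive:

```lean
import Mathlib

-- Toy example: a penalty-based score of the assumed form
-- -(λ·FDR + γ·gap) is never positive when all terms are nonnegative.
theorem score_nonpos (fdr gap lam gam : ℝ)
    (hf : 0 ≤ fdr) (hg : 0 ≤ gap)
    (hl : 0 ≤ lam) (hgam : 0 ≤ gam) :
    -(lam * fdr + gam * gap) ≤ 0 := by
  have h1 : 0 ≤ lam * fdr := mul_nonneg hl hf
  have h2 : 0 ≤ gam * gap := mul_nonneg hgam hg
  linarith
```

Machine-checking properties like boundedness or monotonicity in a budget parameter is what distinguishes a formally verified metric from one whose guarantees are argued only on paper.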