Paper List
-
SpikGPT: A High-Accuracy and Interpretable Spiking Attention Framework for Single-Cell Annotation
This paper addresses the core challenge of robust single-cell annotation across heterogeneous datasets with batch effects and the critical need to ide...
-
Unlocking hidden biomolecular conformational landscapes in diffusion models at inference time
This paper addresses the core challenge of efficiently and accurately sampling the conformational landscape of biomolecules from diffusion-based struc...
-
Personalized optimization of pediatric HD-tDCS for dose consistency and target engagement
This paper addresses the critical limitation of one-size-fits-all HD-tDCS protocols in pediatric populations by developing a personalized optimization...
-
Realistic Transition Paths for Large Biomolecular Systems: A Langevin Bridge Approach
This paper addresses the core challenge of generating physically realistic and computationally efficient transition paths between distinct protein con...
-
Consistent Synthetic Sequences Unlock Structural Diversity in Fully Atomistic De Novo Protein Design
This paper addresses the core pain point of low sequence-structure alignment in existing synthetic datasets (e.g., AFDB), which severely limits the pe...
-
MoRSAIK: Sequence Motif Reactor Simulation, Analysis and Inference Kit in Python
This work addresses the computational bottleneck in simulating prebiotic RNA reactor dynamics by developing a Python package that tracks sequence moti...
-
On the Approximation of Phylogenetic Distance Functions by Artificial Neural Networks
This paper addresses the core challenge of developing computationally efficient and scalable neural network architectures that can learn accurate phyl...
-
EcoCast: A Spatio-Temporal Model for Continual Biodiversity and Climate Risk Forecasting
This paper addresses the critical bottleneck in conservation: the lack of timely, high-resolution, near-term forecasts of species distribution shifts ...
Budget-Sensitive Discovery Scoring: A Formally Verified Framework for Evaluating AI-Guided Scientific Selection
Indian Institute of Information Technology Allahabad (IIITA) | National Institute of Electronics & Information Technology (NIELIT)
30-Second Read
IN SHORT: This paper addresses the critical gap in evaluating AI-guided scientific selection strategies under realistic budget constraints, where existing metrics fail to jointly model budget limitations, asymmetric error costs, and the option to abstain.
Core Innovations
- Methodology: Introduces the Budget-Sensitive Discovery Score (BSDS), a formally verified metric with 20 machine-checked theorems that jointly penalizes false discoveries (λ-weighted FDR) and excessive abstention (γ-weighted coverage gap) at each budget level.
- Methodology: Proposes the Discovery Quality Score (DQS) as a budget-averaged summary statistic that prevents proposers from inflating scores by performing well at cherry-picked budgets.
- Biology: Provides the first comprehensive evaluation showing that LLMs add no marginal value over existing ML pipelines for drug discovery candidate selection, with the simple RF-based Greedy-ML proposer achieving the best DQS (−0.046).
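The BSDS/DQS construction above can be sketched in a few lines of Python. The linear λ/γ penalty form, the sign convention (scores at or below 0, higher is better), and the function names are illustrative assumptions; the paper's exact definitions are formalized as Lean 4 theorems, not shown here.

```python
def bsds(selected, truth, budget, lam=1.0, gamma=1.0):
    """Budget-Sensitive Discovery Score at one budget level (sketch).

    selected: candidate indices the proposer chose; choosing fewer than
              `budget` candidates counts as partial abstention.
    truth:    set of indices that are true positives.
    """
    n_sel = len(selected)
    false_disc = sum(1 for i in selected if i not in truth)
    fdr = false_disc / n_sel if n_sel else 0.0          # false discovery rate
    coverage_gap = max(0.0, (budget - n_sel) / budget)  # abstention penalty
    return -(lam * fdr + gamma * coverage_gap)          # 0 is a perfect score

def dqs(proposals_by_budget, truth, lam=1.0, gamma=1.0):
    """Discovery Quality Score: BSDS averaged over all budget levels,
    so a proposer cannot inflate its score at one cherry-picked budget."""
    scores = [bsds(sel, truth, b, lam, gamma)
              for b, sel in proposals_by_budget.items()]
    return sum(scores) / len(scores)
```

For example, a proposer that selects two true actives at budget 2 but one inactive among four picks at budget 4 scores 0.0 and −0.25 at those budgets, for a DQS of −0.125 under this sketch.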
Main Conclusions
- The RF-based Greedy-ML proposer achieves the best DQS (−0.046) among all 39 proposers, including the 28 LLM configurations, demonstrating that simple ML baselines remain superior for drug discovery candidate selection.
- No LLM configuration (zero-shot or few-shot) surpasses the Greedy-ML baseline on HIV or Tox21 datasets, establishing that LLMs provide no marginal value over existing trained classifiers in realistic deployment scenarios.
- The proposer hierarchy generalizes robustly across five MoleculeNet benchmarks spanning extreme prevalence ranges (0.18%–46.2%) and a non-drug autonomous vehicle (AV) safety domain, with parameter robustness demonstrated across a 9×7 penalty-parameter grid (τ≥0.636, mean τ=0.863).
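The grid-robustness check above can be sketched as follows, assuming τ denotes Kendall's rank correlation between proposer rankings at different (λ, γ) settings; the pairwise implementation below is a generic textbook version, not the paper's code, and ignores ties for simplicity.

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation between two equal-length score lists
    (tie-free sketch): (concordant pairs - discordant pairs) / all pairs."""
    assert len(a) == len(b) and len(a) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) // 2
    return (concordant - discordant) / n_pairs
```

To test robustness of a proposer hierarchy, one would score every proposer at each grid point of penalty parameters, then compute τ between each grid point's ranking and a reference ranking; the minimum and mean τ summarize how stable the hierarchy is (τ = 1 means identical order, τ = −1 fully reversed).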
Abstract: Scientific discovery increasingly relies on AI systems to select candidates for expensive experimental validation, yet no principled, budget-aware evaluation framework exists for comparing selection strategies—a gap intensified by large language models (LLMs), which generate plausible scientific proposals without reliable downstream evaluation. We introduce the Budget-Sensitive Discovery Score (BSDS), a formally verified metric—20 theorems machine-checked by the Lean 4 proof assistant—that jointly penalizes false discoveries (λ-weighted FDR) and excessive abstention (γ-weighted coverage gap) at each budget level. Its budget-averaged form, the Discovery Quality Score (DQS), provides a single summary statistic that no proposer can inflate by performing well at a cherry-picked budget. As a case study, we apply BSDS/DQS to a question of broad interest: do LLMs add marginal value to an existing ML pipeline for drug discovery candidate selection? We evaluate 39 proposers—11 mechanistic variants, 14 zero-shot LLM configurations, and 14 few-shot LLM configurations—using SMILES (Simplified Molecular Input Line Entry System) representations on MoleculeNet HIV (41,127 compounds, 3.5% active, 1,000 bootstrap replicates) under both random and scaffold splits. Three findings emerge. First, the simple RF-based Greedy-ML proposer achieves the best DQS (−0.046), outperforming all MLP variants and LLM configurations; additional MLP reranking layers degrade rather than improve the RF's discriminative ranking. Second, no LLM surpasses the Greedy-ML baseline under either zero-shot or few-shot evaluation on HIV or Tox21—establishing that LLMs provide no marginal value over an existing trained classifier, the realistic deployment scenario. Third, the proposer hierarchy generalizes across five MoleculeNet benchmarks spanning 0.18%–46.2% prevalence, a non-drug AV safety domain, and a 9×7 grid of penalty parameters (τ≥0.636, mean τ=0.863).
The framework applies in principle to any setting where candidates are selected under budget constraints and asymmetric error costs, as demonstrated here across pharmaceutical screening and autonomous vehicle safety triage.