Paper List
-
Developing the PsyCogMetrics™ AI Lab to Evaluate Large Language Models and Advance Cognitive Science
This paper addresses the critical gap between sophisticated LLM evaluation needs and the lack of accessible, scientifically rigorous platforms that in...
-
Equivalence of approximation by networks of single- and multi-spike neurons
This paper resolves the fundamental question of whether single-spike spiking neural networks (SNNs) are inherently less expressive than multi-spike SN...
-
The neuroscience of transformers
Proposes a novel computational mapping between the Transformer architecture and cortical-column microcircuits, connecting modern AI with neuroscience.
-
Framing local structural identifiability and observability in terms of parameter-state symmetries
This paper addresses the core challenge of systematically determining which parameters and states in a mechanistic ODE model can be uniquely inferred ...
-
Leveraging Phytolith Research using Artificial Intelligence
This paper addresses the critical bottleneck in phytolith research by automating the labor-intensive manual microscopy process through a multimodal AI...
-
Neural network-based encoding in free-viewing fMRI with gaze-aware models
This paper addresses the core challenge of building computationally efficient and ecologically valid brain encoding models for naturalistic vision by ...
-
Scalable DNA Ternary Full Adder Enabled by a Competitive Blocking Circuit
This paper addresses the core bottleneck of carry information attenuation and limited computational scale in DNA binary adders by introducing a scalab...
-
ELISA: An Interpretable Hybrid Generative AI Agent for Expression-Grounded Discovery in Single-Cell Genomics
This paper addresses the critical bottleneck of translating high-dimensional single-cell transcriptomic data into interpretable biological hypotheses ...
-
Budget-Sensitive Discovery Scoring: A Formally Verified Framework for Evaluating AI-Guided Scientific Selection
Indian Institute of Information Technology Allahabad (IIITA) | National Institute of Electronics & Information Technology (NIELIT)
30-Second Read
IN SHORT: This paper addresses the critical gap in evaluating AI-guided scientific selection strategies under realistic budget constraints, where existing metrics fail to jointly model budget limitations, asymmetric error costs, and the option to abstain.
Core Innovations
- Methodology Introduces the Budget-Sensitive Discovery Score (BSDS), a formally verified metric with 20 machine-checked theorems that jointly penalizes false discoveries (λ-weighted FDR) and excessive abstention (γ-weighted coverage gap) at each budget level.
- Methodology Proposes the Discovery Quality Score (DQS) as a budget-averaged summary statistic that prevents proposers from inflating scores by performing well at cherry-picked budgets.
- Biology Provides the first comprehensive evaluation showing that LLMs add no marginal value to existing ML pipelines for drug discovery candidate selection, with the simple RF-based Greedy-ML proposer achieving the best DQS (-0.046).
Key Conclusions
- The RF-based Greedy-ML proposer achieves the best DQS (-0.046), outperforming all 39 proposers including 28 LLM configurations, demonstrating that simple ML baselines remain superior for drug discovery candidate selection.
- No LLM configuration (zero-shot or few-shot) surpasses the Greedy-ML baseline on the HIV or Tox21 datasets, establishing that LLMs provide no marginal value over existing trained classifiers in realistic deployment scenarios.
- The proposer hierarchy generalizes robustly across five MoleculeNet benchmarks spanning extreme prevalence ranges (0.18%–46.2%) and a non-drug AV safety domain, with parameter robustness demonstrated across a 9×7 grid (τ≥0.636, mean τ=0.863).
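The parameter-robustness claim above (τ≥0.636, mean τ=0.863 over a 9×7 grid of penalty settings) measures how stable the proposer ranking stays as the λ and γ weights vary. A minimal sketch of such a check, using a plain Kendall τ implementation and hypothetical per-setting DQS arrays (the function names and interface here are assumptions, not the paper's code):

```python
def kendall_tau(a, b):
    """Plain O(n^2) Kendall rank correlation (assumes no ties)."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            concordant += s > 0
            discordant += s < 0
    return (concordant - discordant) / (n * (n - 1) / 2)

def ranking_robustness(dqs_by_setting, baseline_key):
    """Min and mean Kendall tau between the baseline (lam, gamma)
    setting's proposer ranking and every other setting's ranking.

    dqs_by_setting: dict mapping (lam, gamma) -> per-proposer DQS list.
    """
    base = dqs_by_setting[baseline_key]
    taus = [kendall_tau(base, scores)
            for key, scores in dqs_by_setting.items()
            if key != baseline_key]
    return min(taus), sum(taus) / len(taus)
```

In the paper's setup this would be run over all 63 grid cells; a high minimum τ indicates the proposer hierarchy is not an artifact of one penalty choice.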
Abstract: Scientific discovery increasingly relies on AI systems to select candidates for expensive experimental validation, yet no principled, budget-aware evaluation framework exists for comparing selection strategies—a gap intensified by large language models (LLMs), which generate plausible scientific proposals without reliable downstream evaluation. We introduce the Budget-Sensitive Discovery Score (BSDS), a formally verified metric—20 theorems machine-checked by the Lean 4 proof assistant—that jointly penalizes false discoveries (λ-weighted FDR) and excessive abstention (γ-weighted coverage gap) at each budget level. Its budget-averaged form, the Discovery Quality Score (DQS), provides a single summary statistic that no proposer can inflate by performing well at a cherry-picked budget. As a case study, we apply BSDS/DQS to a question of broad interest: do LLMs add marginal value to an existing ML pipeline for drug discovery candidate selection? We evaluate 39 proposers—11 mechanistic variants, 14 zero-shot LLM configurations, and 14 few-shot LLM configurations—using SMILES (Simplified Molecular Input Line Entry System) representations on MoleculeNet HIV (41,127 compounds, 3.5% active, 1,000 bootstrap replicates) under both random and scaffold splits. Three findings emerge. First, the simple RF-based Greedy-ML proposer achieves the best DQS (−0.046), outperforming all MLP variants and LLM configurations; additional MLP reranking layers degrade rather than improve the RF's discriminative ranking. Second, no LLM surpasses the Greedy-ML baseline under either zero-shot or few-shot evaluation on HIV or Tox21—establishing that LLMs provide no marginal value over an existing trained classifier, the realistic deployment scenario. Third, the proposer hierarchy generalizes across five MoleculeNet benchmarks spanning 0.18%–46.2% prevalence, a non-drug AV safety domain, and a 9×7 grid of penalty parameters (τ≥0.636, mean τ=0.863).
The framework applies in principle to any setting where candidates are selected under budget constraints and asymmetric error costs, as demonstrated here across pharmaceutical screening and autonomous vehicle safety triage.
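From the abstract's definitions, BSDS penalizes the λ-weighted FDR and the γ-weighted coverage gap at a given budget, and DQS averages BSDS over a budget grid. A minimal sketch under assumed penalty forms (the linear combination and the coverage-gap definition below are guesses for illustration; the paper's Lean 4 development fixes the exact formulas):

```python
import numpy as np

def bsds(selected, labels, budget, lam=1.0, gamma=1.0):
    """Sketch of a Budget-Sensitive Discovery Score.

    selected: boolean array, True where the proposer picks a candidate
    labels:   boolean array, True for genuine discoveries
    budget:   maximum number of selections allowed at this level
    """
    n_sel = int(selected.sum())
    # False discovery rate among the selections (0 if the proposer abstains)
    fdr = 0.0 if n_sel == 0 else float((selected & ~labels).sum()) / n_sel
    # Penalize selecting fewer candidates than the budget allows
    coverage_gap = max(0.0, (budget - n_sel) / budget)
    return -(lam * fdr + gamma * coverage_gap)

def dqs(select_fn, labels, budgets, lam=1.0, gamma=1.0):
    """Budget-averaged DQS: mean BSDS over a budget grid, so a proposer
    cannot inflate its score at one cherry-picked budget.

    select_fn: callable mapping a budget to a boolean selection array.
    """
    return float(np.mean([bsds(select_fn(b), labels, b, lam, gamma)
                          for b in budgets]))
```

Under this sign convention a perfect proposer scores 0 and worse proposers go negative, which is consistent with the best reported DQS of −0.046.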