Paper List
-
STAR-GO: Improving Protein Function Prediction by Learning to Hierarchically Integrate Ontology-Informed Semantic Embeddings
This paper addresses the core challenge of generalizing protein function prediction to unseen or newly introduced Gene Ontology (GO) terms by overcomi...
-
Incorporating indel channels into average-case analysis of seed-chain-extend
This paper addresses the core pain point of bridging the theoretical gap for the widely used seed-chain-extend heuristic by providing the first rigoro...
-
Competition, stability, and functionality in excitatory-inhibitory neural circuits
This paper addresses the core challenge of extending interpretable energy-based frameworks to biologically realistic asymmetric neural networks, where...
-
Enhancing Clinical Note Generation with ICD-10, Clinical Ontology Knowledge Graphs, and Chain-of-Thought Prompting Using GPT-4
This paper addresses the core challenge of generating accurate and clinically relevant patient notes from sparse inputs (ICD codes and basic demograph...
-
Hypothesis-Based Particle Detection for Accurate Nanoparticle Counting and Digital Diagnostics
This paper addresses the core challenge of achieving accurate, interpretable, and training-free nanoparticle counting in digital diagnostic assays, wh...
-
MCP-AI: Protocol-Driven Intelligence Framework for Autonomous Reasoning in Healthcare
This paper addresses the critical gap in healthcare AI systems that lack contextual reasoning, long-term state management, and verifiable workflows by...
-
Model Gateway: Model Management Platform for Model-Driven Drug Discovery
This paper addresses the critical bottleneck of fragmented, ad-hoc model management in pharmaceutical research by providing a centralized, scalable ML...
-
Tree Thinking in the Genomic Era: Unifying Models Across Cells, Populations, and Species
This paper addresses the fragmentation of tree-based inference methods across biological scales by identifying shared algorithmic principles and stati...
PanFoMa: A Lightweight Foundation Model and Benchmark for Pan-Cancer
The 30-Second View
IN SHORT: This paper addresses the dual challenge of achieving computational efficiency without sacrificing accuracy in whole-transcriptome single-cell representation learning for pan-cancer analysis, moving beyond the limitations of pure Transformer or Mamba architectures.
Innovation (TL;DR)
- Methodology: Proposes a novel hybrid architecture (PanFoMa) that decouples local gene interaction modeling (via a lightweight, chunked Transformer encoder) from global context integration (via a bidirectional Mamba decoder), achieving O(C·M² + N log N) complexity (see the first sketch after this list).
- Methodology: Introduces a Global-informed Dynamic Sorting (GDS) mechanism that adaptively orders genes for the Mamba decoder based on a learned global cell state vector, moving beyond static, heuristic gene ordering (e.g., by mean expression); see the second sketch after this list.
- Biology: Constructs and releases PanFoMaBench, a large-scale, rigorously curated pan-cancer single-cell benchmark comprising over 3.5 million high-quality cells across 33 cancer subtypes from 23 tissues, addressing the lack of comprehensive evaluation resources.
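To make the local-global split concrete, here is a minimal PyTorch sketch of the hybrid design described above. It is an illustration under stated assumptions, not the paper's implementation: the class names (ChunkedLocalEncoder, HybridCellModel), the chunk size, and the use of a bidirectional GRU as a stand-in for the Mamba decoder are all placeholders chosen to keep the example self-contained. What it shows is the key idea: attention is confined to C chunks of M genes each (cost on the order of C·M²), while the global pass over all N genes is a single linear-time sequential sweep.

```python
import torch
import torch.nn as nn

class ChunkedLocalEncoder(nn.Module):
    """Shared self-attention applied within fixed-size gene chunks,
    so cost scales with C * M^2 instead of N^2 for N = C * M genes."""
    def __init__(self, dim: int, chunk_size: int, n_heads: int = 4):
        super().__init__()
        self.chunk_size = chunk_size
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_genes, dim); n_genes assumed divisible by chunk_size in this sketch
        b, n, d = x.shape
        c = n // self.chunk_size
        x = x.view(b * c, self.chunk_size, d)   # fold chunks into the batch dimension
        x = self.encoder(x)                     # attention only within each chunk
        return x.view(b, n, d)

class HybridCellModel(nn.Module):
    """Chunked local attention followed by a global sequential decoder.
    A bidirectional GRU stands in for the paper's bidirectional Mamba block."""
    def __init__(self, dim: int = 64, chunk_size: int = 128):
        super().__init__()
        self.local = ChunkedLocalEncoder(dim, chunk_size)
        self.global_decoder = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * dim, dim)     # fuse the two directions

    def forward(self, gene_tokens: torch.Tensor) -> torch.Tensor:
        h = self.local(gene_tokens)             # fine-grained, order-independent interactions
        h, _ = self.global_decoder(h)           # linear-time pass over the full gene sequence
        return self.head(h).mean(dim=1)         # pooled per-cell embedding

# usage sketch: 2 cells, 512 gene tokens, embedding dim 64
emb = HybridCellModel(dim=64, chunk_size=128)(torch.randn(2, 512, 64))
```

For 512 genes split into 4 chunks of 128, the local stage attends over 4·128² gene pairs instead of 512², which is where the quadratic savings claimed in the complexity bound come from; the global stage then costs only one linear sweep.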
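The dynamic sorting idea can be sketched in a similar spirit. The scoring rule below (a dot product between each gene embedding and a projected, mean-pooled cell-state vector) is an assumption made for illustration; the paper's exact GDS formulation may differ, and GlobalInformedDynamicSorting and both linear projections are hypothetical names introduced here.

```python
import torch
import torch.nn as nn

class GlobalInformedDynamicSorting(nn.Module):
    """Scores each gene token against a learned global cell-state vector and
    reorders the gene sequence by that score before the sequential decoder."""
    def __init__(self, dim: int):
        super().__init__()
        self.state_proj = nn.Linear(dim, dim)   # builds the global cell-state vector
        self.score_proj = nn.Linear(dim, dim)   # projects gene tokens for scoring

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n_genes, dim) gene embeddings from the local encoder
        cell_state = self.state_proj(h.mean(dim=1))                      # (batch, dim)
        scores = torch.einsum("bnd,bd->bn", self.score_proj(h), cell_state)
        order = torch.argsort(scores, dim=1, descending=True)           # per-cell gene order
        order = order.unsqueeze(-1).expand(-1, -1, h.size(-1))
        return torch.gather(h, dim=1, index=order)                       # reordered genes
```

The reordered sequence would then be fed to the sequential decoder, so each cell receives a gene ordering conditioned on its own global state rather than a fixed heuristic such as mean expression.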
Key conclusions
- PanFoMa achieves state-of-the-art pan-cancer classification accuracy of 94.74% (ACC) and 92.5% (Macro-F1) on PanFoMaBench, outperforming GeneFormer by +3.5% ACC and +4.0% F1.
- The model demonstrates superior generalizability across foundational tasks, showing improvements of +7.4% in cell type annotation, +4.0% in batch integration, and +3.1% in multi-omics integration over baselines.
- The hybrid local-global design and dynamic sorting are validated as effective, enabling efficient processing of full transcriptome-scale data (~3000 genes) while capturing both fine-grained local interactions and broad global regulatory patterns.
Abstract: Single-cell RNA sequencing (scRNA-seq) is essential for decoding tumor heterogeneity. However, pan-cancer research still faces two key challenges: learning discriminative and efficient single-cell representations, and establishing a comprehensive evaluation benchmark. In this paper, we introduce PanFoMa, a lightweight hybrid neural network that combines the strengths of Transformers and state-space models to achieve a balance between performance and efficiency. PanFoMa consists of a front-end local-context encoder with shared self-attention layers to capture complex, order-independent gene interactions; and a back-end global sequential feature decoder that efficiently integrates global context using a linear-time state-space model. This modular design preserves the expressive power of Transformers while leveraging the scalability of Mamba to enable transcriptome modeling, effectively capturing both local and global regulatory signals. To enable robust evaluation, we also construct a large-scale pan-cancer single-cell benchmark, PanFoMaBench, containing over 3.5 million high-quality cells across 33 cancer subtypes, curated through a rigorous preprocessing pipeline. Experimental results show that PanFoMa outperforms state-of-the-art models on our pan-cancer benchmark (+4.0%) and across multiple public tasks, including cell type annotation (+7.4%), batch integration (+4.0%) and multi-omics integration (+3.1%). The code is available at https://github.com/Xiaoshui-Huang/PanFoMa.