Paper List
- Mapping of Lesion Images to Somatic Mutations
This paper addresses the critical bottleneck of delayed genetic analysis in cancer diagnosis by predicting a patient's full somatic mutation profile d...
- Reinventing Clinical Dialogue: Agentic Paradigms for LLM‑Enabled Healthcare Communication
This paper addresses the core challenge of transforming reactive, stateless LLMs into autonomous, reliable clinical dialogue agents capable of longitudinal patient management and verifiable decision-making within the stringent safety constraints of healthcare.
Reinventing Clinical Dialogue: Agentic Paradigms for LLM‑Enabled Healthcare Communication
College of Management and Economics, Laboratory of Computation and Analytics of Complex Management Systems (CACMS), Tianjin University | Computer Network Information Center, Chinese Academy of Sciences
The 30-Second View
IN SHORT: This paper addresses the core challenge of transforming reactive, stateless LLMs into autonomous, reliable clinical dialogue agents capable of longitudinal patient management and verifiable decision-making within the stringent safety constraints of healthcare.
Innovation (TL;DR)
- Methodology: Introduces a novel taxonomy for clinical dialogue agents based on two orthogonal axes, Knowledge Source and Agency Objective, categorizing systems into four distinct paradigms (Latent Space Clinicians, Emergent Planners, Grounded Synthesizers, Verifiable Workflow Automators); a toy sketch of this two-axis grid follows this list.
- Methodology: Provides a first-principles, in-depth analysis of the core cognitive components (planning, memory, action, collaboration, evolution) across the proposed paradigms, moving beyond surface-level application cataloging.
- Theory: Formalizes clinical dialogue as a Partially Observable Markov Decision Process (POMDP), mathematically framing the agent's task as state estimation and long-term utility optimization, bridging abstract AI theory with clinical rigor.
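For concreteness, the two-axis taxonomy can be pictured as a small lookup grid. The sketch below is illustrative only: the axis values (parametric vs. external knowledge; open-ended vs. verifiable objectives) and the placement of each archetype in the grid are assumptions made for exposition, not the survey's exact definitions.

```python
from enum import Enum

class KnowledgeSource(Enum):
    PARAMETRIC = "parametric"  # clinical knowledge held in the model's latent weights
    EXTERNAL = "external"      # knowledge retrieved from curated external sources

class AgencyObjective(Enum):
    OPEN_ENDED = "open_ended"  # exploratory reasoning and planning
    VERIFIABLE = "verifiable"  # constrained, auditable task execution

# Hypothetical placement of the four archetypes on the two axes (illustrative only).
PARADIGMS = {
    (KnowledgeSource.PARAMETRIC, AgencyObjective.OPEN_ENDED): "Emergent Planner",
    (KnowledgeSource.PARAMETRIC, AgencyObjective.VERIFIABLE): "Latent Space Clinician",
    (KnowledgeSource.EXTERNAL,   AgencyObjective.OPEN_ENDED): "Grounded Synthesizer",
    (KnowledgeSource.EXTERNAL,   AgencyObjective.VERIFIABLE): "Verifiable Workflow Automator",
}

def classify(source: KnowledgeSource, objective: AgencyObjective) -> str:
    """Look up the archetype for a given position on the two axes."""
    return PARADIGMS[(source, objective)]

if __name__ == "__main__":
    print(classify(KnowledgeSource.EXTERNAL, AgencyObjective.VERIFIABLE))
    # -> Verifiable Workflow Automator
```

The point of the sketch is only that each archetype is a coordinate in the space spanned by the two axes, which is what makes the trade-off analysis systematic.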
Key Conclusions
- The agentic paradigm, operationalized through five core components (planning, memory, action, collaboration, evolution), is essential to overcome the reactive and stateless limitations of foundational LLMs for complex clinical tasks.
- A taxonomy based on Knowledge Source and Agency Objective reveals fundamental trade-offs between creativity/reliability and autonomy/safety, providing a systematic framework for designing clinical agents (analysis based on a review of over 300 papers).
- Effective clinical dialogue requires modeling the interaction as a POMDP to handle information asymmetry, where the agent must maintain a belief state (b_t) and optimize a policy (π*) for long-term clinical utility, as formalized in Eq. 1 and Eq. 2 of the paper; a standard form of these equations is sketched after this list.
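The survey's Eq. 1 and Eq. 2 are not reproduced here; for reference, a POMDP formalization of this kind typically rests on the standard belief update over the hidden patient state and the expected-utility objective for the dialogue policy, sketched below (notation may differ from the paper's).

```latex
% Sketch of the standard POMDP equations (notation may differ from the survey's Eq. 1 and Eq. 2).
% Belief update: after taking action a_t and observing o_{t+1}, the belief over the hidden
% patient state s is updated via the transition model T and the observation model O.
\[
  b_{t+1}(s') \;\propto\; O(o_{t+1} \mid s', a_t) \sum_{s \in \mathcal{S}} T(s' \mid s, a_t)\, b_t(s)
\]
% Optimal policy: choose actions to maximize expected discounted long-term clinical utility R.
\[
  \pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, R(s_t, a_t) \;\middle|\; b_0, \pi \,\right]
\]
```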
Abstract: Clinical dialogue represents a complex duality requiring both the empathetic fluency of natural conversation and the rigorous precision of evidence-based medicine. While Large Language Models possess unprecedented linguistic capabilities, their architectural reliance on reactive and stateless processing often favors probabilistic plausibility over factual veracity. This structural limitation has catalyzed a paradigm shift in medical AI from generative text prediction to agentic autonomy, where the model functions as a central reasoning engine capable of deliberate planning and persistent memory. Moving beyond existing reviews that primarily catalog downstream applications, this survey provides a first-principles analysis of the cognitive architecture underpinning this shift. We introduce a novel taxonomy structured along the orthogonal axes of knowledge source and agency objective to delineate the provenance of clinical knowledge against the system’s operational scope. This framework facilitates a systematic analysis of the intrinsic trade-offs between creativity and reliability by categorizing methods into four archetypes: Latent Space Clinicians, Emergent Planners, Grounded Synthesizers, and Verifiable Workflow Automators. For each paradigm, we deconstruct the technical realization across the entire cognitive pipeline, encompassing strategic planning, memory management, action execution, collaboration, and evolution to reveal how distinct architectural choices balance the tension between autonomy and safety. Furthermore, we bridge abstract design philosophies with the pragmatic implementation ecosystem. By mapping real-world applications to our taxonomy and systematically reviewing benchmarks and evaluation metrics specific to clinical agents, we provide a comprehensive reference for future development. Finally, we identify critical frontiers regarding trustworthiness, outlining a roadmap for future research to foster reliable and ethically aligned healthcare AI. The latest papers and related resources are maintained on our website.
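To make the five-component cognitive pipeline described in the abstract concrete, here is a minimal sketch of how planning, memory, action, collaboration, and evolution might compose into a single dialogue turn. Every class, method, and string below is a hypothetical illustration, not an implementation from the survey.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClinicalDialogueAgent:
    """Toy skeleton of the five cognitive components: planning, memory, action,
    collaboration, and evolution. Names and behaviors are illustrative stubs."""
    memory: List[str] = field(default_factory=list)  # persistent longitudinal patient context

    def plan(self, utterance: str) -> List[str]:
        # Planning: decompose the clinical goal into intermediate steps.
        return [f"clarify symptoms in: {utterance}", "consult guidelines", "draft recommendation"]

    def act(self, step: str) -> str:
        # Action: execute one step, e.g. a retrieval or EHR query (stubbed here).
        return f"result({step})"

    def collaborate(self, draft: str) -> str:
        # Collaboration: hand the draft to another agent or a human clinician for review (stubbed).
        return draft + " [clinician-reviewed]"

    def evolve(self, feedback: str) -> None:
        # Evolution: fold feedback back into memory so later turns improve.
        self.memory.append(feedback)

    def turn(self, utterance: str) -> str:
        self.memory.append(utterance)  # Memory: retain dialogue state across turns.
        results = [self.act(step) for step in self.plan(utterance)]
        response = self.collaborate("; ".join(results))
        self.evolve(f"feedback on: {response}")
        return response

if __name__ == "__main__":
    agent = ClinicalDialogueAgent()
    print(agent.turn("Persistent cough for three weeks, worse at night."))
```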