Paper List
-
Nyxus: A Next Generation Image Feature Extraction Library for the Big Data and AI Era
This paper addresses the core pain point of efficiently extracting standardized, comparable features from massive (terabyte to petabyte-scale) biomedi...
-
Topological Enhancement of Protein Kinetic Stability
This work addresses the long-standing puzzle of why knotted proteins exist by demonstrating that deep knots provide a functional advantage through enh...
-
A Multi-Label Temporal Convolutional Framework for Transcription Factor Binding Characterization
This paper addresses the critical limitation of existing TF binding prediction methods that treat transcription factors as independent entities, faili...
-
Social Distancing Equilibria in Games under Conventional SI Dynamics
This paper solves the core problem of proving the existence and uniqueness of Nash equilibria in finite-duration SI epidemic games, showing they are a...
-
Binding Free Energies without Alchemy
This paper addresses the core bottleneck of computational expense in Absolute Binding Free Energy calculations by eliminating the need for numerous al...
-
SHREC: A Spectral Embedding-Based Approach for Ab-Initio Reconstruction of Helical Molecules
This paper addresses the core bottleneck in cryo-EM helical reconstruction: eliminating the dependency on accurate initial symmetry parameter estimati...
-
Budget-Sensitive Discovery Scoring: A Formally Verified Framework for Evaluating AI-Guided Scientific Selection
This paper addresses the critical gap in evaluating AI-guided scientific selection strategies under realistic budget constraints, where existing metri...
-
Probabilistic Joint and Individual Variation Explained (ProJIVE) for Data Integration
This paper addresses the core challenge of accurately decomposing shared (joint) and dataset-specific (individual) sources of variation in multi-modal...
-
D-MEM: Dopamine-Gated Agentic Memory via Reward Prediction Error Routing
UC San Diego | Carnegie Mellon University
30-Second Read
IN SHORT: This paper addresses the fundamental scalability bottleneck in LLM agentic memory systems: the O(N²) computational complexity and unbounded API token costs caused by processing every user utterance through expensive memory evolution pipelines, regardless of information value.
Core Innovations
- Methodology: Introduces D-MEM, a bio-inspired architecture implementing dopamine-gated fast/slow routing based on Agentic Reward Prediction Error (RPE), decoupling short-term interaction from long-term cognitive restructuring.
- Methodology: Develops the LoCoMo-Noise benchmark with systematic 75% noise injection (Filler: 40%, Status: 30%, Tangent: 30%) to simulate real-world conversational dynamics and evaluate memory robustness.
- Methodology: Implements zero-cost retrieval augmentation through hybrid BM25 search with Reciprocal Rank Fusion and an O(1) Shadow Buffer fallback mechanism to prevent adversarial hallucinations.
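The Reciprocal Rank Fusion step mentioned above can be sketched in a few lines. This is a generic RRF implementation, not the paper's code: the ranker names, memory IDs, and the conventional constant k=60 are illustrative assumptions.

```python
# Hypothetical sketch of hybrid retrieval fusion via Reciprocal Rank Fusion
# (RRF). Each ranker (e.g., lexical BM25 and a dense-embedding search) returns
# an ordered list of memory IDs; RRF merges them without score calibration.

def rrf_fuse(rankings, k=60):
    """Fuse ranked lists: score(d) = sum over lists of 1 / (k + rank_of_d)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["m3", "m1", "m7"]   # ranked by lexical BM25 score
dense_hits = ["m1", "m9", "m3"]  # ranked by embedding similarity
print(rrf_fuse([bm25_hits, dense_hits]))  # → ['m1', 'm3', 'm9', 'm7']
```

Because RRF only consumes ranks, it needs no normalization across heterogeneous scorers, which is what makes this kind of hybrid augmentation essentially free at query time.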
Key Findings
- D-MEM reduces API token consumption by 80% (from 1,648K to 319K tokens) while maintaining or improving accuracy on complex reasoning tasks under extreme noise conditions (ρ=0.75).
- The architecture achieves superior multi-hop reasoning performance (42.7% F1 vs. A-MEM's 27.0%, a +15.7 point gap) by preserving cleaner relational memory structures through selective cognitive restructuring.
- The Critic Router gates off roughly 80% of the computational workload while maintaining an overall F1 score of 37.4% on standard benchmarks, demonstrating the effectiveness of the bio-inspired RPE mechanism.
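The noise regime used in these evaluations (75% noise, split Filler 40% / Status 30% / Tangent 30%) can be reproduced with a simple per-turn sampler. The category names and overall rate come from the paper; the sampling code itself is a hypothetical sketch, not the benchmark's actual generator.

```python
# Illustrative per-turn label sampler for a LoCoMo-Noise-style mixture:
# 75% of turns are noise, split 40/30/30 across three noise categories.
import random

NOISE_RATE = 0.75
NOISE_MIX = [("filler", 0.40), ("status", 0.30), ("tangent", 0.30)]

def label_turn(rng):
    """Return 'signal' or a noise category for one conversational turn."""
    if rng.random() >= NOISE_RATE:
        return "signal"
    r, acc = rng.random(), 0.0
    for name, prob in NOISE_MIX:
        acc += prob
        if r < acc:
            return name
    return NOISE_MIX[-1][0]  # guard against floating-point round-off

rng = random.Random(0)
labels = [label_turn(rng) for _ in range(10_000)]
print(labels.count("signal") / len(labels))  # ≈ 0.25
```

Sampling category labels per turn (rather than fixing exact counts) mirrors how conversational noise arrives in practice, while the seeded RNG keeps a given benchmark split reproducible.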
Abstract: The integration of structured, long-term memory is critical for the development of autonomous Large Language Model (LLM) agents. Recent advancements, such as the Agentic Memory (A-MEM) framework, have achieved significant progress by dynamically constructing and evolving knowledge graphs. However, existing architectures inherently operate as synchronous, "append-and-evolve-all" systems. Processing every user utterance through a computationally expensive O(N²) memory evolution pipeline introduces severe write-latency, unbounded API token costs, and catastrophic context window pollution caused by conversational noise. To address this scalability bottleneck, we introduce D-MEM (Dopamine-Gated Agentic Memory), a biologically inspired architecture that decouples short-term interaction from long-term cognitive restructuring. Drawing inspiration from the Dopamine-driven Reward Prediction Error (RPE) gating mechanism in the mammalian Ventral Tegmental Area (VTA), D-MEM implements a highly efficient Fast/Slow routing system. We introduce a lightweight Critic Router that continuously evaluates the Information Entropy (Surprise) and Long-term Utility of incoming stimuli. Routine inputs with low RPE are either bypassed entirely or cached in an O(1) fast-access buffer, preserving computational resources. Conversely, inputs generating a high RPE—such as factual contradictions or paradigm-shifting preference changes—trigger a "dopamine release" that activates the slow, O(N) deep memory evolution pipeline, actively reshaping the agent's global knowledge graph. To enable rigorous evaluation under realistic conditions, we further introduce the LoCoMo-Noise benchmark, which systematically injects controlled conversational noise into long-term dialogue sessions to simulate real-world interaction dynamics.
Extensive evaluations demonstrate that D-MEM reduces API token consumption by over 80% and eliminates O(N²) write-latency bottlenecks, all while strictly outperforming synchronous baselines in complex multi-hop reasoning and adversarial resilience. By selectively gating cognitive restructuring and leveraging zero-cost retrieval augmentations, D-MEM provides a highly scalable and cost-efficient foundation for lifelong agentic memory. To support reproducibility, we open-source our implementation at https://github.com/london-and-tequila/dmem.
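The fast/slow routing described in the abstract can be sketched as a small gating class. Note the caveats: the RPE scoring function, threshold, and buffer size below are hypothetical stand-ins for illustration only, not the paper's actual Critic Router or its learned surprise/utility estimators.

```python
# Minimal sketch of dopamine-style fast/slow routing: a critic assigns each
# utterance an RPE score, and only high-RPE inputs trigger the expensive
# memory-evolution path; the rest go to a cheap O(1) fallback buffer.
from collections import deque

class CriticRouter:
    def __init__(self, threshold=0.5, buffer_size=32):
        self.threshold = threshold
        self.shadow_buffer = deque(maxlen=buffer_size)  # O(1) fast-access cache
        self.evolved = []  # stand-in for the agent's global knowledge graph

    def rpe(self, utterance):
        # Placeholder "reward prediction error": a real system would combine
        # information entropy (surprise) with estimated long-term utility.
        markers = ("actually", "changed", "no longer")
        return 1.0 if any(m in utterance for m in markers) else 0.1

    def route(self, utterance):
        if self.rpe(utterance) >= self.threshold:
            self.evolved.append(utterance)    # slow path: deep memory evolution
            return "slow"
        self.shadow_buffer.append(utterance)  # fast path: cheap buffering
        return "fast"

router = CriticRouter()
print(router.route("nice weather today"))                 # fast
print(router.route("I actually changed jobs last week"))  # slow
```

The design point this illustrates is that the expensive evolution pipeline is invoked only on the minority of high-RPE turns, which is where the claimed 80% token savings come from.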