Zep: A Temporal Knowledge Graph Architecture for Agent Memory
Pith reviewed 2026-05-11 10:57 UTC · model grok-4.3
The pith
A temporal knowledge graph lets AI agents dynamically integrate conversational and business data for better memory performance.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Zep employs Graphiti, a temporally-aware knowledge graph engine, to dynamically synthesize unstructured conversational data and structured business data while preserving their temporal relationships, resulting in improved accuracy on temporal reasoning benchmarks compared to previous approaches.
What carries the argument
Graphiti is a temporally-aware knowledge graph engine that builds and queries a structure containing time-stamped facts and relationships from both text conversations and database records.
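The paper does not spell out Graphiti's data model here, but the description above (time-stamped facts and relationships queried as of a moment in time) can be sketched as a minimal temporal fact store. Every name below is a hypothetical illustration, not Graphiti's actual API:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Fact:
    """One time-stamped relationship extracted from a conversation or record."""
    source: str
    relation: str
    target: str
    valid_at: datetime                     # when the fact became true
    invalid_at: Optional[datetime] = None  # None while still believed true

class TemporalGraph:
    """Minimal temporal fact store supporting point-in-time queries."""
    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def add(self, fact: Fact) -> None:
        self.facts.append(fact)

    def query(self, source: str, relation: str, as_of: datetime) -> list[Fact]:
        """Return facts for (source, relation) that held at time `as_of`."""
        return [
            f for f in self.facts
            if f.source == source and f.relation == relation
            and f.valid_at <= as_of
            and (f.invalid_at is None or as_of < f.invalid_at)
        ]
```

A question like "where did Alice work in March 2023?" then reduces to a point-in-time filter over edges rather than a scan over raw transcripts.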
If this is right
- Agents can synthesize information across multiple sessions with higher accuracy.
- Latency for responses involving long-term context drops significantly.
- Dynamic updates to knowledge from new conversations and business data are handled without full recomputation.
- Enterprise tasks requiring cross-source temporal reasoning become more reliable.
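The "no full recomputation" point can be made concrete with a hedged sketch: in a temporal graph, a new contradicting fact only closes the validity interval of the old edge, so the rest of the graph is untouched. The field names are illustrative assumptions, not Zep's actual schema:

```python
from datetime import datetime

def ingest(facts: list[dict], new: dict) -> None:
    """Incremental update: close the validity interval of any still-open fact
    for the same (source, relation) pair, then append the new fact.
    Only the affected edges are touched; nothing else is recomputed."""
    for f in facts:
        if (f["source"] == new["source"] and f["relation"] == new["relation"]
                and f["invalid_at"] is None):
            f["invalid_at"] = new["valid_at"]  # the old fact stops being current
    facts.append(new)

# Example: Alice changes employers; the old edge is closed, not deleted,
# so historical queries about 2023 still resolve correctly.
facts = [dict(source="alice", relation="works_at", target="AcmeCorp",
              valid_at=datetime(2023, 1, 1), invalid_at=None)]
ingest(facts, dict(source="alice", relation="works_at", target="Initech",
                   valid_at=datetime(2024, 6, 1), invalid_at=None))
print(facts[0]["invalid_at"])  # prints 2024-06-01 00:00:00
```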
Where Pith is reading between the lines
- Similar temporal graph techniques might apply to other AI systems needing to track evolving knowledge, such as in scientific data analysis.
- Combining this with other memory architectures could lead to hybrid systems for even more complex agent behaviors.
- Real-world testing in production environments would be needed to confirm if the benchmark gains persist under variable data conditions.
Load-bearing premise
Benchmarks focused on memory retrieval and temporal reasoning accurately predict performance in actual enterprise agent applications with diverse and changing data sources.
What would settle it
A direct comparison on a held-out enterprise dataset with frequent data updates and long conversation histories where Zep fails to show accuracy or latency improvements over baselines.
read the original abstract
We introduce Zep, a novel memory layer service for AI agents that outperforms the current state-of-the-art system, MemGPT, in the Deep Memory Retrieval (DMR) benchmark. Additionally, Zep excels in more comprehensive and challenging evaluations than DMR that better reflect real-world enterprise use cases. While existing retrieval-augmented generation (RAG) frameworks for large language model (LLM)-based agents are limited to static document retrieval, enterprise applications demand dynamic knowledge integration from diverse sources including ongoing conversations and business data. Zep addresses this fundamental limitation through its core component Graphiti -- a temporally-aware knowledge graph engine that dynamically synthesizes both unstructured conversational data and structured business data while maintaining historical relationships. In the DMR benchmark, which the MemGPT team established as their primary evaluation metric, Zep demonstrates superior performance (94.8% vs 93.4%). Beyond DMR, Zep's capabilities are further validated through the more challenging LongMemEval benchmark, which better reflects enterprise use cases through complex temporal reasoning tasks. In this evaluation, Zep achieves substantial results with accuracy improvements of up to 18.5% while simultaneously reducing response latency by 90% compared to baseline implementations. These results are particularly pronounced in enterprise-critical tasks such as cross-session information synthesis and long-term context maintenance, demonstrating Zep's effectiveness for deployment in real-world applications.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces Zep, a memory layer service for LLM-based agents featuring Graphiti, a temporally-aware knowledge graph engine that dynamically integrates unstructured conversational data with structured business data while preserving historical relationships. It claims to outperform the prior state-of-the-art MemGPT on the Deep Memory Retrieval (DMR) benchmark (94.8% vs. 93.4% accuracy) and to deliver up to 18.5% higher accuracy together with 90% lower response latency on the more challenging LongMemEval benchmark, with particular gains in cross-session synthesis and long-term context maintenance.
Significance. If the reported gains are robust, the work would constitute a practical advance in agent memory architectures for enterprise settings that require ongoing temporal reasoning over mixed conversational and structured sources, moving beyond static RAG limitations.
major comments (2)
- Abstract: The headline performance claims consist solely of point estimates (94.8% vs 93.4% on DMR; up to 18.5% accuracy and 90% latency on LongMemEval) with no accompanying information on the number of runs, standard deviations, confidence intervals, baseline hyperparameter settings, or statistical significance tests. Without these controls, it is impossible to determine whether the observed margins exceed experimental noise, which is known to be high on retrieval benchmarks sensitive to prompt phrasing and retrieval parameters.
- Abstract / Evaluation section: The manuscript asserts that LongMemEval 'better reflects enterprise use cases' and that the reported gains are 'particularly pronounced in enterprise-critical tasks,' yet provides no explicit justification, task breakdown, or ablation showing that the temporal synthesis performed by Graphiti is the causal factor rather than other implementation differences.
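The first objection can be quantified with a back-of-envelope two-proportion z-test on the DMR margin. The sample size n = 500 per system used here is an illustrative assumption, since the benchmark size is not stated in this text:

```python
import math

def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> float:
    """z statistic for the difference between two independent proportions,
    using the pooled standard error."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 94.8% (Zep) vs 93.4% (MemGPT); n = 500 per system is assumed, not reported.
z = two_proportion_z(0.948, 0.934, 500, 500)
print(round(z, 2))  # prints 0.94, well under the 1.96 needed for p < 0.05
```

Under this assumption the 1.4-point margin is within noise, which is exactly why the referee asks for run counts and variance.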
minor comments (1)
- The abstract introduces Graphiti without a concise one-sentence definition of its core data model or update mechanism before stating its performance advantages.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on the evaluation presentation and benchmark justification. We address each major comment below and will revise the manuscript to strengthen these aspects.
read point-by-point responses
- Referee: Abstract: The headline performance claims consist solely of point estimates (94.8% vs 93.4% on DMR; up to 18.5% accuracy and 90% latency on LongMemEval) with no accompanying information on the number of runs, standard deviations, confidence intervals, baseline hyperparameter settings, or statistical significance tests. Without these controls, it is impossible to determine whether the observed margins exceed experimental noise, which is known to be high on retrieval benchmarks sensitive to prompt phrasing and retrieval parameters.
  Authors: We agree that additional statistical context would improve interpretability of the results. In the revised manuscript, we will report the number of evaluation runs, standard deviations for accuracy and latency metrics, baseline hyperparameter settings, and a brief discussion of result stability across runs. Formal statistical significance testing was not performed in the original experiments due to the focus on practical deployment metrics, but the observed margins remained consistent; we will note this limitation explicitly.
  Revision: yes
- Referee: Abstract / Evaluation section: The manuscript asserts that LongMemEval 'better reflects enterprise use cases' and that the reported gains are 'particularly pronounced in enterprise-critical tasks,' yet provides no explicit justification, task breakdown, or ablation showing that the temporal synthesis performed by Graphiti is the causal factor rather than other implementation differences.
  Authors: We acknowledge the need for clearer justification. The revised manuscript will include a task breakdown of LongMemEval, grouping tasks by temporal reasoning requirements such as cross-session synthesis and long-term context maintenance. We will also add an ablation comparing Graphiti with its temporal components disabled, which isolates the contribution of temporal knowledge graph synthesis to the accuracy gains on these tasks and supports the claim that LongMemEval better captures enterprise temporal reasoning needs.
  Revision: yes
Circularity Check
No derivation chain present; claims are purely empirical benchmark comparisons.
full rationale
The paper describes an architecture (Zep/Graphiti) for temporal knowledge graph memory in agents and reports direct performance numbers on DMR (94.8% vs 93.4%) and LongMemEval (up to 18.5% accuracy gain, 90% latency reduction). No equations, first-principles derivations, fitted parameters, uniqueness theorems, or self-citation load-bearing steps appear in the provided text or abstract. All central claims reduce to external benchmark runs against independent baselines (MemGPT), with no internal reduction to the paper's own inputs or prior self-work. This is the expected non-circular outcome for a systems/engineering paper.
Axiom & Free-Parameter Ledger
invented entities (1)
- Graphiti: no independent evidence
Forward citations
Cited by 45 Pith papers
- MedMemoryBench: Benchmarking Agent Memory in Personalized Healthcare
  MedMemoryBench supplies a 2,000-session synthetic medical trajectory dataset and an evaluate-while-constructing streaming protocol to expose memory saturation and reasoning failures in current agent architectures for ...
- Trojan Hippo: Weaponizing Agent Memory for Data Exfiltration
  Trojan Hippo attacks on LLM agent memory achieve 85-100% success rates in data exfiltration across four memory backends even after 100 benign sessions, while evaluated defenses reduce success rates but impose varying ...
- MEME: Multi-entity & Evolving Memory Evaluation
  All tested LLM memory systems fail at dependency reasoning in multi-entity evolving scenarios, with only an expensive file-based setup showing partial recovery.
- Remember the Decision, Not the Description: A Rate-Distortion Framework for Agent Memory
  Memory for long-horizon agents should preserve distinctions that affect decisions under a fixed budget, not descriptive features, yielding an exact forgetting boundary and a new online learner DeMem with regret guarantees.
- DeepRefine: Agent-Compiled Knowledge Refinement via Reinforcement Learning
  DeepRefine refines agent-compiled knowledge bases via multi-turn abductive diagnosis and RL training with a GBD reward, yielding consistent downstream task gains.
- Nautilus Compass: Black-box Persona Drift Detection for Production LLM Agents
  Nautilus Compass is a black-box drift detector for production LLM agents that uses weighted cosine similarity on BGE-m3 embeddings of raw text against anchors, achieving 0.83 ROC AUC on real session traces while shipp...
- EquiMem: Calibrating Shared Memory in Multi-Agent Debate via Game-Theoretic Equilibrium
  EquiMem calibrates shared memory in multi-agent debate by computing a game-theoretic equilibrium from agent queries and paths, outperforming heuristics and LLM validators across benchmarks while remaining robust to ad...
- MEMOREPAIR: Barrier-First Cascade Repair in Agentic Memory
  MemoRepair formalizes the cascade update problem in agentic memory and solves it via a min-cut reduction that eliminates invalidated memory exposure to 0% while recovering 91-94% of valid successors at 57-76% of basel...
- Belief Memory: Agent Memory Under Partial Observability
  BeliefMem is a probabilistic memory architecture for LLM agents that retains multiple candidate conclusions with probabilities updated by Noisy-OR, achieving superior average performance over deterministic baselines o...
- Belief Memory: Agent Memory Under Partial Observability
  BeliefMem stores multiple candidate conclusions with probabilities in agent memory and updates them via Noisy-OR rules to preserve uncertainty under partial observability.
- MemFlow: Intent-Driven Memory Orchestration for Small Language Model Agents
  MemFlow routes queries by intent to tiered memory operations, nearly doubling accuracy of a 1.7B SLM on long-horizon benchmarks compared to full-context baselines.
- Learning How and What to Memorize: Cognition-Inspired Two-Stage Optimization for Evolving Memory
  MemCoE learns memory organization guidelines via contrastive feedback and then trains a guideline-aligned RL policy for memory updates, yielding consistent gains on personalization benchmarks.
- vstash: Local-First Hybrid Retrieval with Adaptive Fusion for LLM Agents
  vstash shows that hybrid retrieval disagreements provide a free training signal to fine-tune 33M-parameter embeddings, yielding NDCG@10 gains up to 19.5% on NFCorpus and matching some larger models on three of five BE...
- SAGER: Self-Evolving User Policy Skills for Recommendation Agent
  SAGER equips LLM recommendation agents with per-user evolving policy skills via two-representation architecture, contrastive CoT diagnosis, and skill-augmented listwise reasoning, yielding SOTA gains orthogonal to mem...
- PersonalAI 2.0: Enhancing knowledge graph traversal/retrieval with planning mechanism for Personalized LLM Agents
  PAI-2 improves factual correctness in LLM answers by 4% on average across benchmarks using adaptive graph traversal and planning, with 6% gains from traversal algorithms and 18% from enabled planning.
- Cognifold: Always-On Proactive Memory via Cognitive Folding
  Cognifold is a new proactive memory architecture that folds event streams into emergent cognitive structures by extending complementary learning systems theory with a prefrontal intent layer and graph topology self-or...
- PRISM: Pareto-Efficient Retrieval over Intent-Aware Structured Memory for Long-Horizon Agents
  PRISM achieves higher accuracy than baselines on long-horizon agent tasks at an order-of-magnitude smaller context budget by combining hierarchical bundle search, query-sensitive costing, evidence compression, and ada...
- SAGE: A Self-Evolving Agentic Graph-Memory Engine for Structure-Aware Associative Memory
  SAGE is a self-evolving agentic graph-memory engine that dynamically constructs and refines structured memory graphs via writer-reader feedback, yielding performance gains on multi-hop QA, open-domain retrieval, and l...
- ClinicalBench: Stress-Testing Assertion-Aware Retrieval for Cross-Admission Clinical QA on MIMIC-IV
  Intent-aware retrieval over assertion-labeled knowledge graphs improves clinical QA accuracy by 22 percentage points on a new MIMIC-IV benchmark that stresses negation, temporality, and attribution.
- HAGE: Harnessing Agentic Memory via RL-Driven Weighted Graph Evolution
  HAGE proposes a trainable weighted graph memory framework with LLM intent classification, dynamic edge modulation, and RL optimization that improves long-horizon reasoning accuracy in agentic LLMs over static baselines.
- GASim: A Graph-Accelerated Hybrid Framework for Social Simulation
  GASim accelerates hybrid LLM-ABM social simulations via graph-optimized memory, graph message passing, and entropy-driven agent grouping, delivering 9.94x speedup and under 20% token use while aligning with real-world trends.
- Memanto: Typed Semantic Memory with Information-Theoretic Retrieval for Long-Horizon Agents
  Memanto delivers 89.8% and 87.1% accuracy on LongMemEval and LoCoMo benchmarks using typed semantic memory and information-theoretic retrieval, outperforming hybrid graph and vector systems with a single query and zer...
- MemSearch-o1: Empowering Large Language Models with Reasoning-Aligned Memory Growth in Agentic Search
  MemSearch-o1 mitigates memory dilution in agentic LLM search through reasoning-aligned token-level memory growth, retracing with a contribution function, and path reorganization, improving reasoning activation on benchmarks.
- MemSearch-o1: Empowering Large Language Models with Reasoning-Aligned Memory Growth in Agentic Search
  MemSearch-o1 uses reasoning-aligned memory growth from seed tokens, retracing via contribution functions, and path reorganization to mitigate memory dilution in LLM agentic search.
- GAM: Hierarchical Graph-based Agentic Memory for LLM Agents
  GAM decouples event-level memory encoding from topic-level consolidation in LLM agents using hierarchical graphs to reduce interference and improve long-term coherence and retrieval.
- TSUBASA: Improving Long-Horizon Personalization via Evolving Memory and Self-Learning with Context Distillation
  TSUBASA improves long-horizon personalization in LLMs via dynamic memory evolution for writing and context-distillation self-learning for reading, outperforming Mem0 and Memory-R1 on Qwen-3 benchmarks while reducing t...
- MemReader: From Passive to Active Extraction for Long-Term Agent Memory
  MemReader uses distilled passive and GRPO-trained active extractors to selectively write low-noise long-term memories, outperforming passive baselines on knowledge updating, temporal reasoning, and hallucination tasks.
- Task-Adaptive Retrieval over Agentic Multi-Modal Web Histories via Learned Graph Memory
  ACGM learns task-adaptive sparse graphs over multi-modal agent histories via policy-gradient optimization, reaching 82.7 nDCG@10 and 89.2% Precision@10 on WebShop, VisualWebArena, and Mind2Web while outperforming 19 b...
- HingeMem: Boundary Guided Long-Term Memory with Query Adaptive Retrieval for Scalable Dialogues
  HingeMem segments dialogue memory via boundary-triggered hyperedges over four elements and applies query-adaptive retrieval, yielding ~20% relative gains and 68% lower QA token cost versus baselines on LOCOMO.
- FileGram: Grounding Agent Personalization in File-System Behavioral Traces
  FileGram grounds AI agent personalization in file-system behavioral traces via a data simulation engine, a diagnostic benchmark, and a bottom-up memory architecture.
- Opal: Private Memory for Personal AI
  Opal enables private long-term memory for personal AI by decoupling reasoning to a trusted enclave with a lightweight knowledge graph and piggybacking reindexing on ORAM accesses.
- Memory in the LLM Era: Modular Architectures and Strategies in a Unified Framework
  A unified framework for LLM agent memory is benchmarked, with a new hybrid method outperforming state-of-the-art on standard tasks.
- Memory in the Age of AI Agents
  The paper maps agent memory research via three forms (token-level, parametric, latent), three functions (factual, experiential, working), and dynamics of formation/evolution/retrieval, plus benchmarks and future directions.
- GRAVITY: Architecture-Agnostic Structured Anchoring for Long-Horizon Conversational Memory
  GRAVITY adds structured relational, temporal, and thematic memory anchors to conversational LLMs at generation time, delivering 7.5-10.1% average gains in LLM-judge accuracy across five host systems on LongMemEval and LoCoMo.
- EgoSelf: From Memory to Personalized Egocentric Assistant
  EgoSelf uses graph-based memory of user interactions to derive personalized profiles and predict future behaviors for egocentric assistants.
- Scaling Human-AI Coding Collaboration Requires a Governable Consensus Layer
  Agentic Consensus replaces code as the main artifact with a typed property graph world model that maintains commitments and evidence through synchronization operators, shifting evaluation to alignment fidelity and con...
- The Continuity Layer: Why Intelligence Needs an Architecture for What It Carries Forward
  AI intelligence is limited by the lack of an architecture that carries forward understanding across sessions, and the proposed continuity layer with Decomposed Trace Convergence Memory addresses this by enabling persi...
- Evo-MedAgent: Beyond One-Shot Diagnosis with Agents That Remember, Reflect, and Improve
  Evo-MedAgent adds three evolving memory stores to LLM agents for chest X-ray diagnosis, raising MCQ accuracy from 0.68 to 0.79 on GPT-5-mini and 0.76 to 0.87 on Gemini-3 Flash without any training.
- Retrieval Is Not Enough: Why Organizational AI Needs Epistemic Infrastructure
  OIDA adds typed knowledge objects, decay-based importance scores, contradiction edges, and an inverse-decay QUESTION primitive for ignorance to raise epistemic fidelity beyond retrieval.
- MemCoT: Test-Time Scaling through Memory-Driven Chain-of-Thought
  MemCoT redefines long-context reasoning as iterative stateful search with zoom-in/zoom-out memory perception and dual short-term memories, claiming SOTA results on LoCoMo and LongMemEval-S benchmarks.
- MemMachine: A Ground-Truth-Preserving Memory System for Personalized AI Agents
  MemMachine stores entire conversational episodes and applies contextualized retrieval plus adaptive query routing to achieve 0.9169 accuracy on LoCoMo and 93 percent on LongMemEvalS while using 80 percent fewer tokens...
- Memory as Metabolism: A Design for Companion Knowledge Systems
  This paper designs a companion knowledge system with TRIAGE, DECAY, CONTEXTUALIZE, CONSOLIDATE, and AUDIT operations plus memory gravity and minority-hypothesis retention to give contradictory evidence a path to updat...
- Back to Basics: Let Conversational Agents Remember with Just Retrieval and Generation
  A minimalist retrieval-and-generation framework using turn isolation and query-driven pruning outperforms complex memory systems by directly addressing signal sparsity and dual-level redundancy in dialogues.
- ATANT v1.1: Positioning Continuity Evaluation Against Memory, Long-Context, and Agentic-Memory Benchmarks
  Existing memory benchmarks cover at most two of the seven continuity properties from ATANT v1.0, with a median of one and none covering more than two.
- PASK: Toward Intent-Aware Proactive Agents with Long-Term Memory
  PASK introduces the DD-MM-PAS paradigm for streaming proactive agents with intent-aware detection, hybrid memory modeling, and a new real-world benchmark where the IntentFlow model matches top LLMs on latency while fi...
Reference graph
Works this paper leans on
- [1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023.
- [2] K. Sparck Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11–21, 1972.
- [3] Charles Packer, Sarah Wooders, Kevin Lin, Vivian Fang, Shishir G. Patil, Ion Stoica, and Joseph E. Gonzalez. MemGPT: Towards LLMs as operating systems, 2024.
- [4] Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. From local to global: A graph RAG approach to query-focused summarization, 2024.
- [5] Zep. Zep: Long-term memory for AI agents. https://www.getzep.com, 2024. Commercial memory layer for AI applications.
- [6] Zep. Graphiti: Temporal knowledge graphs for agentic applications. https://github.com/getzep/graphiti, 2024. Graphiti builds dynamic, temporally aware knowledge graphs that represent complex, evolving relationships between entities over time.
- [7] Di Wu, Hongwei Wang, Wenhao Yu, Yuwei Zhang, Kai-Wei Chang, and Dong Yu. LongMemEval: Benchmarking chat assistants on long-term interactive memory, 2024.
- [8] Daniela Wong Gonzalez. The relationship between semantic and episodic memory: Exploring the effect of semantic neighbourhood density on episodic memory. PhD thesis, University of Windsor, 2018.
- [9] Petr Anokhin, Nikita Semenov, Artyom Sorokin, Dmitry Evseev, Mikhail Burtsev, and Evgeny Burnaev. AriGraph: Learning knowledge graph world models with episodic memory for LLM agents, 2024.
- [10] Xinyue Chen, Pengyu Gao, Jiangjiang Song, and Xiaoyang Tan. HiQA: A hierarchical contextual augmentation RAG for multi-documents QA, 2024.
- [11] Krish Goel and Mahek Chandak. HIRO: Hierarchical information retrieval optimization, 2024.
- [12] Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023.
- [13] Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation, 2002.
- [14] V. A. Traag, L. Waltman, and N. J. van Eck. From Louvain to Leiden: guaranteeing well-connected communities. Sci Rep 9, 5233, 2019.
- [15] Neo4j. Neo4j - the world's leading graph database, 2012.
- [16] Apache Software Foundation. Apache Lucene - scoring, 2011. Last accessed: October 20, 2011.
- [17] Zirui Guo, Lianghao Xia, Yanhua Yu, Tu Ao, and Chao Huang. LightRAG: Simple and fast retrieval-augmented generation, 2024.
- [18] Jimmy Lin, Ronak Pradeep, Tommaso Teofili, and Jasper Xian. Vector search with OpenAI embeddings: Lucene is all you need, 2023.
- [19] Prafulla Kumar Choubey, Xin Su, Man Luo, Xiangyu Peng, Caiming Xiong, Tiep Le, Shachar Rosenman, Vasudev Lal, Phil Mui, Ricky Ho, Phillip Howard, and Chien-Sheng Wu. Distill-SynthKG: Distilling knowledge graph synthesis workflow for improved coverage and efficiency, 2024.
- [20] Gordon V. Cormack, Charles L. A. Clarke, and Stefan Buettcher. Reciprocal rank fusion outperforms Condorcet and individual rank learning methods. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '09, pages 758–759. ACM, 2009.
- [21] Jaime Carbonell and Jade Goldstein. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '98, pages 335–336, New York, NY, USA, 1998. Association for Computing Machinery.
- [22] Jing Xu, Arthur Szlam, and Jason Weston. Beyond goldfish memory: Long-term open-domain conversation, 2021.
- [23] Chaofan Li, Zheng Liu, Shitao Xiao, and Yingxia Shao. Making large language models a better foundation for dense retrieval, 2023.
- [24] Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. BGE M3-Embedding: Multi-lingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation, 2024.
- [25] Shreyas Pimpalgaonkar, Nolan Tremelling, and Owen Colegrove. Triplex: a SOTA LLM for knowledge graph construction, 2024.
- [26] Shilong Li, Yancheng He, Hangyu Guo, Xingyuan Bu, Ge Bai, Jie Liu, Jiaheng Liu, Xingwei Qu, Yangguang Li, Wanli Ouyang, Wenbo Su, and Bo Zheng. GraphReader: Building graph-based agent to enhance long-context abilities of large language models, 2024.
- [27] Pranab Islam, Anand Kannappan, Douwe Kiela, Rebecca Qian, Nino Scherrer, and Bertie Vidgen. FinanceBench: A new benchmark for financial question answering, 2023.
- [28] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models, 2021.