Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory
Mem0 improves long-term conversational performance of LLMs by up to 26% on LLM-as-Judge evaluation, while cutting p95 latency by 91% and token costs by over 90% versus full-context baselines.