Example dialogue (1:56 pm, 8 May 2023):
Caroline: Hey Mel! Good to see you! How have you been?
Melanie: Hey Caroline! Good to see you! I'm swamped with the kids & work
Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory
Mem0 improves long-term conversational performance of LLMs by up to 26% on LLM-as-Judge while cutting p95 latency by 91% and token costs by over 90% versus full-context baselines.
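The token savings come from not replaying the entire conversation history on every turn. A minimal sketch of that idea, with a toy word-overlap relevance score standing in for the actual retrieval used by Mem0 (all function names and data here are illustrative, not the Mem0 API):

```python
# Hypothetical sketch: a memory layer retrieves only the few stored
# memories relevant to the new query, instead of sending the full
# conversation history (the full-context baseline) to the model.

def score(memory: str, query: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(memory.lower().split()) & set(query.lower().split()))

def build_prompt(memories: list[str], query: str, top_k: int = 2) -> str:
    """Keep only the top-k most relevant memories, then append the query."""
    relevant = sorted(memories, key=lambda m: score(m, query), reverse=True)[:top_k]
    return "\n".join(relevant) + "\nUser: " + query

# Illustrative memories extracted from an earlier conversation.
memories = [
    "Melanie is swamped with the kids and work.",
    "Caroline met Melanie on 8 May 2023.",
    "Melanie enjoys painting landscapes.",
]
query = "Are the kids keeping Melanie busy?"

full_context = "\n".join(memories)       # what a full-context baseline would send
prompt = build_prompt(memories, query)   # only the relevant subset
```

As the conversation grows, the retrieved prompt stays roughly constant in size while the full-context baseline grows linearly, which is the source of the large token-cost and latency reductions the paper reports.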