Recognition: no theorem link
Portable Agent Memory: A Protocol for Cryptographically-Verified Memory Transfer Across Heterogeneous AI Agents
Pith reviewed 2026-05-13 01:00 UTC · model grok-4.3
The pith
An open protocol transfers persistent memory between AI agents on different models with cryptographic verification and injection protection.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Portable Agent Memory is an open protocol and reference implementation that uses a five-component structured memory model with content-addressable entries linked by a Merkle-DAG provenance graph, capability-based access control, an injection-resistant rehydration protocol, and JSON-first serialization to enable cryptographically verified transfer of episodic, semantic, procedural, working, and identity memory across heterogeneous AI agents.
What carries the argument
The Merkle-DAG provenance graph that links content-addressable memory entries to supply tamper evidence, combined with the injection-resistant rehydration protocol that adapts recalled content to a target model's requirements.
If this is right
- Memory state accumulated by one agent becomes usable by agents built on different underlying models or platforms.
- Selective disclosure of memory segments becomes possible without exposing unrelated private context.
- Tampering or alteration of transferred memory becomes detectable through breaks in the Merkle-DAG links.
- Cross-model demonstrations confirm the protocol works between current major architectures including GPT-4, Claude, Gemini, and Llama.
Where Pith is reading between the lines
- Standardized memory formats could reduce the engineering cost of moving context between independently developed agent systems.
- Multi-vendor agent teams might coordinate more effectively if each can import verified memory from the others without custom adapters.
- Tooling that imports and exports memory according to this protocol could become a common layer for persistent agent state.
Load-bearing premise
The rehydration protocol can adapt memory content to heterogeneous target models while mitigating indirect prompt injection risks.
What would settle it
Transfer a memory segment to a new model and observe whether the target agent either ignores the imported content or produces outputs showing successful indirect prompt injection through the rehydrated material.
read the original abstract
We present Portable Agent Memory, an open protocol and reference implementation for transferring persistent memory state across heterogeneous AI agents. Modern AI agents accumulate rich context -- episodic events,semantic knowledge, procedural skills, working state, and identity preferences -- but this context remains locked within vendor-specific runtimes. Portable Agent Memory addresses this through: (1) a five-component structured memory model with content-addressable entries linked by a Merkle-DAG provenance graph providing tamper-evidence; (2) capability-based access control enabling selective, scoped disclosure of memory segments; (3) an injection-resistant rehydration protocol that adapts recalled content to heterogeneous target models while mitigating indirect prompt injection; and (4) a JSON-first serialization format with optional CBOR compaction for efficient transport. We provide a Python SDK with 54 passing tests, agent skills for multiple platforms, and demonstrate cross-model memory transfer between GPT-4, Claude, Gemini, and Llama architectures. The protocol is open-source under Apache 2.0.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents Portable Agent Memory, an open protocol and reference implementation for transferring persistent memory state across heterogeneous AI agents. It defines a five-component structured memory model with content-addressable entries linked by a Merkle-DAG provenance graph for tamper-evidence, capability-based access control for selective disclosure, an injection-resistant rehydration protocol that adapts recalled content to target models while mitigating indirect prompt injection, and a JSON-first serialization format with optional CBOR compaction. The work includes a Python SDK with 54 passing tests, agent skills for multiple platforms, and a demonstration of cross-model memory transfer between GPT-4, Claude, Gemini, and Llama.
Significance. If the security properties of the rehydration protocol can be substantiated, the protocol would represent a meaningful contribution to AI agent interoperability by enabling cryptographically verified memory transfer without vendor lock-in. The use of standard cryptographic primitives (Merkle-DAGs, capability-based access) and the open-source Python SDK with 54 tests plus cross-model demonstrations are clear strengths that support reproducibility and practical adoption.
major comments (1)
- [Abstract (component 3)] Abstract (component 3): The claim that the rehydration protocol is 'injection-resistant' and 'mitigates indirect prompt injection' is load-bearing for the central value proposition of safe cross-agent transfer on heterogeneous models, yet the manuscript provides no formal threat model, no reduction to a known secure primitive, and no adversarial evaluation. The reported 54 unit tests and functional cross-model demo (GPT-4/Claude/Gemini/Llama) only establish that transfer works under benign conditions, not that crafted memory payloads attempting to escape the adaptation wrapper are blocked.
minor comments (1)
- The abstract contains a typographical error: 'episodic events,semantic knowledge' is missing a space after the comma.
Simulated Author's Rebuttal
We thank the referee for their thorough review and for recognizing the potential significance of Portable Agent Memory for AI agent interoperability. We address the major comment below and commit to revisions that strengthen the security analysis of the rehydration protocol.
read point-by-point responses
-
Referee: The claim that the rehydration protocol is 'injection-resistant' and 'mitigates indirect prompt injection' is load-bearing for the central value proposition of safe cross-agent transfer on heterogeneous models, yet the manuscript provides no formal threat model, no reduction to a known secure primitive, and no adversarial evaluation. The reported 54 unit tests and functional cross-model demo (GPT-4/Claude/Gemini/Llama) only establish that transfer works under benign conditions, not that crafted memory payloads attempting to escape the adaptation wrapper are blocked.
Authors: We agree with the referee that the security properties of the rehydration protocol require more rigorous substantiation to support the claims of mitigating indirect prompt injection. The manuscript describes the protocol's use of model-specific adaptation wrappers and structured rehydration to reduce the risk of prompt injection from recalled memory content, but does not include a formal threat model or adversarial experiments. In the revised manuscript, we will add a new section detailing: (1) a threat model for indirect prompt injection attacks via portable memory, specifying adversary goals and capabilities; (2) how the rehydration protocol's design choices (e.g., content sanitization, format constraints, and target-model adaptation) provide mitigation, with references to related secure primitives where applicable; and (3) results from adversarial evaluations, including tests with crafted injection payloads on the GPT-4, Claude, Gemini, and Llama models. These additions will clarify that the protocol aims to mitigate rather than provide absolute resistance, and will include quantitative results from the evaluations. We believe this revision will address the concern and enhance the paper's contribution. revision: yes
Circularity Check
No circularity: protocol defined from first principles using standard primitives
full rationale
The paper defines a five-component memory model, Merkle-DAG provenance, capability-based access control, and an injection-resistant rehydration protocol directly from standard cryptographic and access-control building blocks. No equations, predictions, or central claims reduce by construction to fitted parameters, self-definitions, or self-citation chains. The work is self-contained: it specifies the protocol, provides a reference implementation with 54 tests, and demonstrates cross-model transfer without any load-bearing step that equates output to input by definition.
Axiom & Free-Parameter Ledger
axioms (1)
- standard math Merkle-DAG structures provide content-addressable, tamper-evident provenance for memory entries.
invented entities (2)
-
Five-component structured memory model
no independent evidence
-
Injection-resistant rehydration protocol
no independent evidence
Reference graph
Works this paper leans on
-
[1]
Anderson, Daniel Bothell, Michael D
John R. Anderson, Daniel Bothell, Michael D. Byrne, Scott Douglass, Christian Lebiere, and Yulin Qin. An integrated theory of the mind.Psychological Review, 111(4):1036–1060, 2004
work page 2004
-
[2]
Model context protocol (MCP), 2024
Anthropic. Model context protocol (MCP), 2024. Accessed: 2025-01-15
work page 2024
-
[3]
Jack B. Dennis and Earl C. Van Horn. Programming semantics for multiprogrammed computations.Communications of the ACM, 9(3):143–155, 1966
work page 1966
-
[4]
Regulation (EU) 2016/679 (general data protection regulation), art
European Parliament and Council. Regulation (EU) 2016/679 (general data protection regulation), art. 20: Right to data portability. Official Journal of the EU, L 119, 2016
work page 2016
-
[5]
Agent2agent (A2A) protocol, 2025
Google and Linux Foundation. Agent2agent (A2A) protocol, 2025. Accessed: 2025-01-15
work page 2025
-
[6]
Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. Not what you’ve signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection. InProceedings of the ACM Workshop on AI and Security (AISec), CCS 2023, 2023
work page 2023
-
[7]
Laird, Allen Newell, and Paul S
John E. Laird, Allen Newell, and Paul S. Rosenbloom. Soar: An architecture for general intelligence.Artificial Intelligence, 33(1):1–64, 1987
work page 1987
-
[8]
Retrieval-augmented generation for knowledge-intensive NLP tasks
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Na- man Goyal, Heinrich K¨ uttler, Mike Lewis, Wen tau Yih, Tim Rockt¨ aschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems (NeurIPS) 33, 2020
work page 2020
-
[9]
The memory layer for AI applications, 2024
Mem0. The memory layer for AI applications, 2024. Accessed: 2025-01-15
work page 2024
-
[10]
Ralph C. Merkle. A digital signature based on a conventional encryption function. In Advances in Cryptology — CRYPTO ’87, pages 369–378, 1987
work page 1987
-
[11]
Agent memory communication protocol (AMCP), 2025
nunchi-ai. Agent memory communication protocol (AMCP), 2025. Accessed: 2025-01-15
work page 2025
- [12]
-
[13]
Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technol- ogy (UIST), 2023
work page 2023
-
[14]
Ignore previous prompt: Attack techniques for language models
F´ abio Perez and Ian Ribeiro. Ignore previous prompt: Attack techniques for language models. InNeurIPS 2022 ML Safety Workshop, 2022
work page 2022
-
[15]
Zep: A temporal knowledge graph architecture for agent memory
Preston Rasmussen, Pavel Paliychuk, Travis Beauvais, Justin Ryan, and Daniel Chalef. Zep: A temporal knowledge graph architecture for agent memory. 2025. 14
work page 2025
-
[16]
Toolformer: Language models can teach themselves to use tools
Timo Schick, Jane Dwivedi-Yu, Roberto Dess` ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. 2023
work page 2023
-
[17]
Reflexion: Language agents with verbal reinforcement learning
Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. In Advances in Neural Information Processing Systems (NeurIPS) 36, 2023
work page 2023
-
[18]
An open standard for portable AI agent identity, 2024
Soul Protocol. An open standard for portable AI agent identity, 2024. Accessed: 2025-01- 15
work page 2024
-
[19]
Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L. Griffiths. Cog- nitive architectures for language agents.Transactions on Machine Learning Research (TMLR), 2024
work page 2024
-
[20]
How many memory systems are there?American Psychologist, 40(4):385– 398, 1985
Endel Tulving. How many memory systems are there?American Psychologist, 40(4):385– 398, 1985
work page 1985
-
[21]
Voyager: An open-ended embodied agent with large lan- guage models
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large lan- guage models. 2023
work page 2023
-
[22]
White, Doug Burger, and Chi Wang
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W. White, Doug Burger, and Chi Wang. AutoGen: Enabling next-gen LLM applications via multi- agent conversation. 2023
work page 2023
-
[23]
Yingxuan Yang, Huacan Chai, Yuanyi Song, Shuai Qi, Meng Wen, Ning Li, Jing Liao, Huan Hu, Jie Lin, Guoqing Chang, Wei Liu, Yonghong Wen, Yinghui Yu, and Wenhao Zhang. A survey on agent protocols. 2025
work page 2025
-
[24]
ReAct: Synergizing reasoning and acting in language models
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. InProceedings of the International Conference on Learning Representations (ICLR), 2023
work page 2023
-
[25]
Zeyu Zhang, Bo Bo, Chao Ma, Rui Li, Ziyue Chen, Dongyu Dai, Zhiyu Zhu, Shuo Liu, and Chuan Cheng. A survey on the memory mechanism of large language model based agents. 2024. A Artifact Schema (Abbreviated) { " p a m _ v e r s i o n " : " 1.0 " , " s c h e m a _ v e r s i o n " : " 1.0 " , " c r e a t e d _ a t " : " 2025 -01 -15 T10 :00:00 Z " , " s o u ...
work page 2024
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.