pith. machine review for the scientific record. sign in

arxiv: 2605.11032 · v1 · submitted 2026-05-10 · 💻 cs.CR · cs.AI

Recognition: no theorem link

Portable Agent Memory: A Protocol for Cryptographically-Verified Memory Transfer Across Heterogeneous AI Agents

Santhosh Kumar Ravindran

Authors on Pith no claims yet

Pith reviewed 2026-05-13 01:00 UTC · model grok-4.3

classification 💻 cs.CR cs.AI
keywords AI agentsmemory transferMerkle-DAGprompt injectioncryptographic verificationagent interoperabilityportable memorycapability-based access
0
0 comments X

The pith

An open protocol transfers persistent memory between AI agents on different models with cryptographic verification and injection protection.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces Portable Agent Memory as a protocol that frees accumulated context from vendor-specific AI runtimes so it can move to other agents. Memory is organized into five components whose entries are content-addressable and linked by a Merkle-DAG graph that records provenance and detects tampering. Capability-based rules control which parts of the memory are shared, and a rehydration step reformats recalled content for the target model while blocking indirect prompt injection. A reference implementation with tests and skills demonstrates successful transfers among GPT-4, Claude, Gemini, and Llama agents.

Core claim

Portable Agent Memory is an open protocol and reference implementation that uses a five-component structured memory model with content-addressable entries linked by a Merkle-DAG provenance graph, capability-based access control, an injection-resistant rehydration protocol, and JSON-first serialization to enable cryptographically verified transfer of episodic, semantic, procedural, working, and identity memory across heterogeneous AI agents.

What carries the argument

The Merkle-DAG provenance graph that links content-addressable memory entries to supply tamper evidence, combined with the injection-resistant rehydration protocol that adapts recalled content to a target model's requirements.

If this is right

  • Memory state accumulated by one agent becomes usable by agents built on different underlying models or platforms.
  • Selective disclosure of memory segments becomes possible without exposing unrelated private context.
  • Tampering or alteration of transferred memory becomes detectable through breaks in the Merkle-DAG links.
  • Cross-model demonstrations confirm the protocol works between current major architectures including GPT-4, Claude, Gemini, and Llama.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Standardized memory formats could reduce the engineering cost of moving context between independently developed agent systems.
  • Multi-vendor agent teams might coordinate more effectively if each can import verified memory from the others without custom adapters.
  • Tooling that imports and exports memory according to this protocol could become a common layer for persistent agent state.

Load-bearing premise

The rehydration protocol can adapt memory content to heterogeneous target models while mitigating indirect prompt injection risks.

What would settle it

Transfer a memory segment to a new model and observe whether the target agent either ignores the imported content or produces outputs showing successful indirect prompt injection through the rehydrated material.

read the original abstract

We present Portable Agent Memory, an open protocol and reference implementation for transferring persistent memory state across heterogeneous AI agents. Modern AI agents accumulate rich context -- episodic events,semantic knowledge, procedural skills, working state, and identity preferences -- but this context remains locked within vendor-specific runtimes. Portable Agent Memory addresses this through: (1) a five-component structured memory model with content-addressable entries linked by a Merkle-DAG provenance graph providing tamper-evidence; (2) capability-based access control enabling selective, scoped disclosure of memory segments; (3) an injection-resistant rehydration protocol that adapts recalled content to heterogeneous target models while mitigating indirect prompt injection; and (4) a JSON-first serialization format with optional CBOR compaction for efficient transport. We provide a Python SDK with 54 passing tests, agent skills for multiple platforms, and demonstrate cross-model memory transfer between GPT-4, Claude, Gemini, and Llama architectures. The protocol is open-source under Apache 2.0.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The manuscript presents Portable Agent Memory, an open protocol and reference implementation for transferring persistent memory state across heterogeneous AI agents. It defines a five-component structured memory model with content-addressable entries linked by a Merkle-DAG provenance graph for tamper-evidence, capability-based access control for selective disclosure, an injection-resistant rehydration protocol that adapts recalled content to target models while mitigating indirect prompt injection, and a JSON-first serialization format with optional CBOR compaction. The work includes a Python SDK with 54 passing tests, agent skills for multiple platforms, and a demonstration of cross-model memory transfer between GPT-4, Claude, Gemini, and Llama.

Significance. If the security properties of the rehydration protocol can be substantiated, the protocol would represent a meaningful contribution to AI agent interoperability by enabling cryptographically verified memory transfer without vendor lock-in. The use of standard cryptographic primitives (Merkle-DAGs, capability-based access) and the open-source Python SDK with 54 tests plus cross-model demonstrations are clear strengths that support reproducibility and practical adoption.

major comments (1)
  1. [Abstract (component 3)] Abstract (component 3): The claim that the rehydration protocol is 'injection-resistant' and 'mitigates indirect prompt injection' is load-bearing for the central value proposition of safe cross-agent transfer on heterogeneous models, yet the manuscript provides no formal threat model, no reduction to a known secure primitive, and no adversarial evaluation. The reported 54 unit tests and functional cross-model demo (GPT-4/Claude/Gemini/Llama) only establish that transfer works under benign conditions, not that crafted memory payloads attempting to escape the adaptation wrapper are blocked.
minor comments (1)
  1. The abstract contains a typographical error: 'episodic events,semantic knowledge' is missing a space after the comma.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for their thorough review and for recognizing the potential significance of Portable Agent Memory for AI agent interoperability. We address the major comment below and commit to revisions that strengthen the security analysis of the rehydration protocol.

read point-by-point responses
  1. Referee: The claim that the rehydration protocol is 'injection-resistant' and 'mitigates indirect prompt injection' is load-bearing for the central value proposition of safe cross-agent transfer on heterogeneous models, yet the manuscript provides no formal threat model, no reduction to a known secure primitive, and no adversarial evaluation. The reported 54 unit tests and functional cross-model demo (GPT-4/Claude/Gemini/Llama) only establish that transfer works under benign conditions, not that crafted memory payloads attempting to escape the adaptation wrapper are blocked.

    Authors: We agree with the referee that the security properties of the rehydration protocol require more rigorous substantiation to support the claims of mitigating indirect prompt injection. The manuscript describes the protocol's use of model-specific adaptation wrappers and structured rehydration to reduce the risk of prompt injection from recalled memory content, but does not include a formal threat model or adversarial experiments. In the revised manuscript, we will add a new section detailing: (1) a threat model for indirect prompt injection attacks via portable memory, specifying adversary goals and capabilities; (2) how the rehydration protocol's design choices (e.g., content sanitization, format constraints, and target-model adaptation) provide mitigation, with references to related secure primitives where applicable; and (3) results from adversarial evaluations, including tests with crafted injection payloads on the GPT-4, Claude, Gemini, and Llama models. These additions will clarify that the protocol aims to mitigate rather than provide absolute resistance, and will include quantitative results from the evaluations. We believe this revision will address the concern and enhance the paper's contribution. revision: yes

Circularity Check

0 steps flagged

No circularity: protocol defined from first principles using standard primitives

full rationale

The paper defines a five-component memory model, Merkle-DAG provenance, capability-based access control, and an injection-resistant rehydration protocol directly from standard cryptographic and access-control building blocks. No equations, predictions, or central claims reduce by construction to fitted parameters, self-definitions, or self-citation chains. The work is self-contained: it specifies the protocol, provides a reference implementation with 54 tests, and demonstrates cross-model transfer without any load-bearing step that equates output to input by definition.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 2 invented entities

The protocol rests on standard cryptographic assumptions for Merkle-DAG tamper-evidence and introduces new structuring for AI memory without additional fitted parameters.

axioms (1)
  • standard math Merkle-DAG structures provide content-addressable, tamper-evident provenance for memory entries.
    Invoked in the description of the structured memory model and provenance graph.
invented entities (2)
  • Five-component structured memory model no independent evidence
    purpose: Organizes memory into episodic events, semantic knowledge, procedural skills, working state, and identity preferences.
    Newly proposed categorization for agent memory.
  • Injection-resistant rehydration protocol no independent evidence
    purpose: Adapts recalled memory content to target models while mitigating indirect prompt injection.
    Proposed adaptation mechanism without detailed security proof in abstract.

pith-pipeline@v0.9.0 · 5469 in / 1163 out tokens · 43559 ms · 2026-05-13T01:00:20.571005+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

25 extracted references · 25 canonical work pages

  1. [1]

    Anderson, Daniel Bothell, Michael D

    John R. Anderson, Daniel Bothell, Michael D. Byrne, Scott Douglass, Christian Lebiere, and Yulin Qin. An integrated theory of the mind.Psychological Review, 111(4):1036–1060, 2004

  2. [2]

    Model context protocol (MCP), 2024

    Anthropic. Model context protocol (MCP), 2024. Accessed: 2025-01-15

  3. [3]

    Dennis and Earl C

    Jack B. Dennis and Earl C. Van Horn. Programming semantics for multiprogrammed computations.Communications of the ACM, 9(3):143–155, 1966

  4. [4]

    Regulation (EU) 2016/679 (general data protection regulation), art

    European Parliament and Council. Regulation (EU) 2016/679 (general data protection regulation), art. 20: Right to data portability. Official Journal of the EU, L 119, 2016

  5. [5]

    Agent2agent (A2A) protocol, 2025

    Google and Linux Foundation. Agent2agent (A2A) protocol, 2025. Accessed: 2025-01-15

  6. [6]

    Not what you’ve signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection

    Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. Not what you’ve signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection. InProceedings of the ACM Workshop on AI and Security (AISec), CCS 2023, 2023

  7. [7]

    Laird, Allen Newell, and Paul S

    John E. Laird, Allen Newell, and Paul S. Rosenbloom. Soar: An architecture for general intelligence.Artificial Intelligence, 33(1):1–64, 1987

  8. [8]

    Retrieval-augmented generation for knowledge-intensive NLP tasks

    Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Na- man Goyal, Heinrich K¨ uttler, Mike Lewis, Wen tau Yih, Tim Rockt¨ aschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems (NeurIPS) 33, 2020

  9. [9]

    The memory layer for AI applications, 2024

    Mem0. The memory layer for AI applications, 2024. Accessed: 2025-01-15

  10. [10]

    Ralph C. Merkle. A digital signature based on a conventional encryption function. In Advances in Cryptology — CRYPTO ’87, pages 369–378, 1987

  11. [11]

    Agent memory communication protocol (AMCP), 2025

    nunchi-ai. Agent memory communication protocol (AMCP), 2025. Accessed: 2025-01-15

  12. [12]

    Gonzalez

    Charles Packer, Sarah Fang, Vivian Patil, Kevin Lin, Sid Wooders, and Joseph E. Gonzalez. MemGPT: Towards LLMs as operating systems. 2023

  13. [13]

    O’Brien, Carrie J

    Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technol- ogy (UIST), 2023

  14. [14]

    Ignore previous prompt: Attack techniques for language models

    F´ abio Perez and Ian Ribeiro. Ignore previous prompt: Attack techniques for language models. InNeurIPS 2022 ML Safety Workshop, 2022

  15. [15]

    Zep: A temporal knowledge graph architecture for agent memory

    Preston Rasmussen, Pavel Paliychuk, Travis Beauvais, Justin Ryan, and Daniel Chalef. Zep: A temporal knowledge graph architecture for agent memory. 2025. 14

  16. [16]

    Toolformer: Language models can teach themselves to use tools

    Timo Schick, Jane Dwivedi-Yu, Roberto Dess` ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. 2023

  17. [17]

    Reflexion: Language agents with verbal reinforcement learning

    Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. In Advances in Neural Information Processing Systems (NeurIPS) 36, 2023

  18. [18]

    An open standard for portable AI agent identity, 2024

    Soul Protocol. An open standard for portable AI agent identity, 2024. Accessed: 2025-01- 15

  19. [19]

    Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L

    Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L. Griffiths. Cog- nitive architectures for language agents.Transactions on Machine Learning Research (TMLR), 2024

  20. [20]

    How many memory systems are there?American Psychologist, 40(4):385– 398, 1985

    Endel Tulving. How many memory systems are there?American Psychologist, 40(4):385– 398, 1985

  21. [21]

    Voyager: An open-ended embodied agent with large lan- guage models

    Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large lan- guage models. 2023

  22. [22]

    White, Doug Burger, and Chi Wang

    Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W. White, Doug Burger, and Chi Wang. AutoGen: Enabling next-gen LLM applications via multi- agent conversation. 2023

  23. [23]

    A survey on agent protocols

    Yingxuan Yang, Huacan Chai, Yuanyi Song, Shuai Qi, Meng Wen, Ning Li, Jing Liao, Huan Hu, Jie Lin, Guoqing Chang, Wei Liu, Yonghong Wen, Yinghui Yu, and Wenhao Zhang. A survey on agent protocols. 2025

  24. [24]

    ReAct: Synergizing reasoning and acting in language models

    Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. InProceedings of the International Conference on Learning Representations (ICLR), 2023

  25. [25]

    p a m _ v e r s i o n

    Zeyu Zhang, Bo Bo, Chao Ma, Rui Li, Ziyue Chen, Dongyu Dai, Zhiyu Zhu, Shuo Liu, and Chuan Cheng. A survey on the memory mechanism of large language model based agents. 2024. A Artifact Schema (Abbreviated) { " p a m _ v e r s i o n " : " 1.0 " , " s c h e m a _ v e r s i o n " : " 1.0 " , " c r e a t e d _ a t " : " 2025 -01 -15 T10 :00:00 Z " , " s o u ...