Pith · machine review for the scientific record

arxiv: 2604.16524 · v1 · submitted 2026-04-16 · 💻 cs.CR

Recognition: unknown

Anumati: Proof of Adherence as a Formal Consent Model for Autonomous Agent Protocols

Ravi Kiran Kadaboina

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 11:13 UTC · model grok-4.3

classification 💻 cs.CR
keywords consent model · proof of adherence · autonomous agents · agent-to-agent protocols · accountability gap · PolicyDocument · AdherenceEvent · append-only trail

The pith

Autonomous agents can prove they evaluated and followed specific policy clauses for each action taken.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Autonomous AI agents that call other agents create an accountability gap because standard authentication only confirms permission to call, not that the caller understood or honored the callee's current terms. The paper separates proof of acceptance, a basic timestamped acknowledgement, from proof of adherence, a detailed record of the reasoning applied to a particular clause for that specific action. It defines three primitives—PolicyDocument to hold the versioned terms, ConsentRecord to capture acceptance, and AdherenceEvent to log each evaluation—that together build an append-only, versioned trail. This trail is offered as a non-breaking addition to protocols such as A2A and MCP, backed by a TLA+ specification of the lifecycle and sample validators.
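
Read as data, the three primitives suggest record shapes like the following minimal Python sketch. The field names, types, and hash links are illustrative assumptions for exposition, not the paper's actual schema, which lives in its accompanying repository.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PolicyDocument:
        """Versioned container for the callee's terms; each clause is addressable."""
        policy_id: str
        version: int
        clauses: dict[str, str]        # clause_id -> clause text

    @dataclass(frozen=True)
    class ConsentRecord:
        """Proof of acceptance: a timestamped acknowledgement of one policy version."""
        caller_id: str
        policy_id: str
        policy_version: int
        accepted_at: str               # ISO-8601 timestamp
        prev_record_hash: str | None   # link to the prior record in the append-only chain

    @dataclass(frozen=True)
    class AdherenceEvent:
        """Proof of adherence: a per-action record citing the clause actually evaluated."""
        action_id: str
        policy_id: str
        policy_version: int
        clause_id: str                 # the exact clause this action was checked against
        reasoning: str                 # why the action complies with that clause
        prev_event_hash: str | None    # link to the prior event in the adherence trail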

Core claim

The paper establishes that the accountability gap in agent-to-agent calls can be closed by requiring each permitted action to produce an AdherenceEvent that cites the exact clause from the current PolicyDocument version, with the entire history preserved through linked ConsentRecords in an append-only structure.

What carries the argument

The three primitives PolicyDocument, ConsentRecord, and AdherenceEvent that together form a versioned, append-only consent model for tracking per-action policy evaluations.

If this is right

  • Callee agents gain the ability to audit whether each incoming call was made under a valid and correctly interpreted version of their policy.
  • Policy changes can be introduced without invalidating prior consents because records always reference a specific document version and clause.
  • Existing authentication protocols such as OAuth or mutual TLS continue to handle identity while the new primitives add the missing condition-checking layer.
  • Formal TLA+ models and reference validators can be used to check that the consent trail remains consistent across multiple agent interactions.
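
On the last point, the core of a chain-integrity check over an append-only, hash-linked trail fits in a few lines. A minimal sketch, assuming each record carries a prev_hash field linking it to its predecessor; the paper ships its own reference validators, so treat this as exposition rather than the published interface.

    import hashlib
    import json

    def record_hash(record: dict) -> str:
        # Canonical hash of a record, excluding its own link field.
        body = {k: v for k, v in record.items() if k != "prev_hash"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def validate_chain(trail: list[dict]) -> bool:
        # True iff every entry links to the hash of its predecessor
        # (the first entry links to None), i.e. the trail is intact and append-only.
        prev = None
        for entry in trail:
            if entry.get("prev_hash") != prev:
                return False
            prev = record_hash(entry)
        return True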

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Human principals could later query the adherence trail to determine which policy clause an agent relied on when delegating a task.
  • The model could support automated dispute resolution between agents by providing machine-readable evidence of clause evaluation.
  • Overhead measurements from the reference Python implementation would indicate whether the approach scales to high-frequency agent calls.

Load-bearing premise

That calling agents can generate and store a per-action adherence record citing the relevant policy clause without prohibitive cost or delay, and that callee policies remain stable enough to be referenced precisely during live interactions.

What would settle it

A working counter-example in which an agent correctly accepts a policy yet produces an AdherenceEvent that either cites a non-existent clause or omits the actual reasoning used for the action, while still passing the chain-integrity validator.
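
Such a counter-example would live in the gap between structural and semantic validation: a chain-integrity check of the kind sketched above confirms hash linkage but says nothing about whether the cited clause exists. A hedged sketch of the complementary citation check, reusing the illustrative field names from earlier (the lookup structure is assumed, not the paper's interface):

    def validate_citation(event: dict, policies: dict) -> bool:
        # Reject events whose cited clause is absent from the cited policy version.
        # `policies` maps (policy_id, version) -> {clause_id: clause_text}.
        clauses = policies.get((event["policy_id"], event["policy_version"]))
        return clauses is not None and event["clause_id"] in clauses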

Figures

Figures reproduced from arXiv: 2604.16524 by Ravi Kiran Kadaboina.

Figure 1. ACAP consent lifecycle state machine. The seven states govern the …
Figure 2. ACAP consent and adherence sequence. The POST /acap/adherence endpoint operates in one of two modes, declared in the AgentCard’s usage_policy object. In local mode the adherence event is fire-and-forget such that the caller records its own decision and the callee appends it to the audit trail. In delegated mode the callee evaluates the event and returns an enforcement decision; the caller MUST NOT invoke t…
Figure 3. Caller agent trace: consent handshake (top), blocked skill call on …
Figure 4. Callee server log for the same end-to-end session. Every interaction is …
Figure 5. Audit endpoint output. The linked-list structure of the consent chain …
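
The two endpoint modes described in Figure 2 can be made concrete with a small caller-side sketch. The /acap/adherence path and the local/delegated split come from the caption; the mode strings, the decision field, and the "allow" value are assumptions, since the caption is truncated at source.

    import requests

    def report_adherence(base_url: str, event: dict, mode: str) -> bool:
        # POST an AdherenceEvent to the callee's /acap/adherence endpoint.
        # Returns True if the caller may proceed with the skill call.
        resp = requests.post(f"{base_url}/acap/adherence", json=event, timeout=5)
        if mode == "local":
            # Fire-and-forget: the caller has already made its own decision;
            # the callee merely appends the event to its audit trail.
            return True
        # Delegated: the callee evaluates the event and returns an enforcement
        # decision that the caller must honour before invoking the skill.
        resp.raise_for_status()
        return resp.json().get("decision") == "allow"

In delegated mode a denial means the caller must not proceed, matching the blocked skill call shown in Figure 3.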
read the original abstract

As autonomous AI agents increasingly call other agents to complete tasks on behalf of a human principal, a structural accountability gap has emerged: the calling agent accepts the terms of service of the callee without any protocol-level mechanism to prove that it understood those terms or that it subsequently honoured them. Authentication protocols such as OAuth and mutual TLS establish who may call which capability. They do not address under what conditions a permitted call may be made, and those conditions change as the callee's policies evolve. In this paper we formalise the distinction between proof of acceptance (a timestamped acknowledgement) and proof of adherence (a per-action reasoning record citing the specific clause evaluated). We propose three primitives (PolicyDocument, ConsentRecord, and AdherenceEvent) that together constitute a versioned, append-only consent model for agent-to-agent communication. The model is instantiated as a non-breaking extension to two widely used agent protocols: the Agent2Agent (A2A) protocol and the Model Context Protocol (MCP). A TLA+ specification of the consent lifecycle, together with a reference Python implementation of the chain integrity and adherence trail validators, is available in the accompanying repository.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated author's rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

0 major / 2 minor

Summary. The paper proposes Anumati, a formal consent model for autonomous AI agent protocols. It distinguishes proof of acceptance (a timestamped acknowledgement) from proof of adherence (a per-action reasoning record citing the specific policy clause evaluated). The model is defined via three primitives—PolicyDocument, ConsentRecord, and AdherenceEvent—that together form a versioned, append-only consent structure. This is instantiated as a non-breaking extension to the Agent2Agent (A2A) protocol and the Model Context Protocol (MCP), supported by a TLA+ specification of the consent lifecycle and a reference Python implementation providing validators for chain integrity and adherence trails.

Significance. If the formalization is sound, the work addresses a genuine accountability gap in agent-to-agent interactions where authentication protocols alone do not establish policy understanding or subsequent adherence. The explicit separation of acceptance from per-action adherence records is a clear conceptual contribution. The provision of a TLA+ specification together with reproducible Python validators for chain integrity is a notable strength, as it enables machine-checked verification and supports independent validation of the append-only property.

minor comments (2)
  1. A concrete worked example showing the generation of an AdherenceEvent that cites a specific clause from a PolicyDocument would improve readability and help readers assess how the per-action reasoning record is constructed in practice.
  2. The repository containing the TLA+ specification and Python validators should be referenced with an explicit, permanent URL or commit hash in the main text (rather than only in the abstract) to ensure long-term reproducibility.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive review and recommendation of minor revision. The referee summary accurately reflects the paper's focus on distinguishing proof of acceptance from proof of adherence, the three primitives, and the TLA+ specification with Python validators as a non-breaking extension to A2A and MCP.

Circularity Check

0 steps flagged

No significant circularity: purely definitional formalization

full rationale

The paper introduces a consent model by defining three new primitives (PolicyDocument, ConsentRecord, AdherenceEvent) that distinguish timestamped acceptance from per-action adherence records and form a versioned append-only structure. This is presented as an original formalization instantiated in A2A/MCP protocols, backed by a TLA+ specification and Python validators for chain integrity. No equations, fitted parameters, self-citations, or reductions appear in the provided text that would make any claim equivalent to its inputs by construction; the central contribution is definitional and self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 3 invented entities

The contribution rests on three newly introduced entities and one domain assumption about agent capability to produce clause-specific records; no free parameters are described.

axioms (1)
  • domain assumption Calling agents can generate per-action reasoning records that cite specific policy clauses
    Required for AdherenceEvent to function as described; appears in the definition of the adherence primitive.
invented entities (3)
  • PolicyDocument no independent evidence
    purpose: Versioned container for the callee's policies
    New primitive introduced to support the consent model.
  • ConsentRecord no independent evidence
    purpose: Timestamped proof of policy acceptance
    New primitive for proof of acceptance.
  • AdherenceEvent no independent evidence
    purpose: Per-action record citing the evaluated clause
    New primitive for proof of adherence.

pith-pipeline@v0.9.0 · 5500 in / 1306 out tokens · 40691 ms · 2026-05-10T11:13:13.067115+00:00

discussion (0)


Reference graph

Works this paper leans on

26 extracted references · 5 canonical work pages · 2 internal anchors

  1. The 2025 AI Agent Index: Documenting Technical and Safety Features of Deployed Agentic AI Systems. arXiv:2602.17753, February 2026. (The report covers the 2025 landscape; the preprint was published in February 2026.)
  2. EU AI Act. Regulation (EU) 2024/1689. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  3. EU AI Office. AI Act Service Desk: Frequently Asked Questions. https://ai-act-service-desk.ec.europa.eu/en/faq (As of March 2026, no FAQ or guidance document addresses consent mechanisms for autonomous agent-to-agent transactions.)
  4. A2A Protocol Specification. https://a2a-protocol.org/latest/specification/
  5. Model Context Protocol Specification. https://modelcontextprotocol.io/specification/
  6. MCP November 2025 Changelog. https://modelcontextprotocol.io/specification/2025-11-25/changelog
  7. Uniform Electronic Transactions Act (UETA), §14. https://www.uniformlaws.org/
  8. Proskauer: Contract Law in the Age of Agentic AI, 2025. https://www.proskauer.com/blog/contract-law-in-the-age-of-agentic-ai-whos-really-clicking-accept
  9. W3C ODRL Information Model 2.2. https://www.w3.org/TR/odrl-model/
  10. RFC 7515: JSON Web Signature (JWS). https://datatracker.ietf.org/doc/html/rfc7515
  11. AP2: Agent Payments Protocol. https://github.com/google-agentic-commerce/ap2
  12. Ngozo, J.F. Open Agent Governance Specification (OAGS). Sekuire, 2026. https://sekuire.ai/blog/introducing-open-agent-governance-specification
  13. FINOS AI Governance Framework v2.0. https://air-governance-framework.finos.org/
  14. Aylward, J. et al. AIGA: AI Governance and Accountability Protocol. IETF Internet-Draft, draft-aylward-aiga-1. https://datatracker.ietf.org/doc/draft-aylward-aiga-1/
  15. McDonough, R. OpenMandate: Governing AI Agents by Authority, Not Instruction. Law://WhatsNext, 2026. https://lawwhatsnext.substack.com/p/openmandate-governing-ai-agents-by
  16. Mavračić, J. Policy Cards: Machine-Readable Runtime Governance for Autonomous AI Agents. arXiv:2510.24383, 2025.
  17. Palumbo, N. et al. PCAS: Policy Compiler for Secure Agentic Systems. arXiv:2602.16708, 2026.
  18. Gaurav, S. et al. Governance-as-a-Service: A Multi-Agent Framework for AI System Compliance and Policy Enforcement. arXiv:2508.18765, 2025.
  19. Wang, C.L. et al. MI9: An Integrated Runtime Governance Framework for Agentic AI. arXiv:2508.03858, 2025.
  20. IEEE P7012: Standard for Machine-Readable Personal Privacy Terms. https://standards.ieee.org/ieee/7012/
  21. W3C Data Privacy Vocabulary (DPV) v2.2. https://w3c.github.io/dpv/dpv/
  22. Kantara Initiative: Consent Receipt Specification v1.1. https://kantarainitiative.org/
  23. Li, D., Yu, G., Wang, X. and Liang, B. AuditableLLM: A Hash-Chain-Backed, Compliance-Aware Auditable Framework for Large Language Models. Electronics, 15(1), 56. MDPI, 2025.
  24. Rida, C. When an AI Agent Says ‘I Agree,’ Who’s Consenting? TechPolicy.Press, December 2025. https://www.techpolicy.press/when-an-ai-agent-says-i-agree-whos-consenting/
  25. RFC 9396: OAuth 2.0 Rich Authorization Requests. https://datatracker.ietf.org/doc/html/rfc9396
  26. Kantara Initiative. User-Managed Access (UMA) 2.0 Grant for OAuth 2.0 Authorization. January 2017. https://kantarainitiative.org/uma-specifications/