pith. machine review for the scientific record.

arxiv: 2604.04103 · v1 · submitted 2026-04-05 · 💻 cs.AI

Recognition: 1 theorem link · Lean Theorem

Compliance-by-Construction Argument Graphs: Using Generative AI to Produce Evidence-Linked Formal Arguments for Certification-Grade Accountability

Mahyar T. Moghaddam

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 17:11 UTC · model grok-4.3

classification 💻 cs.AI
keywords compliance-by-construction · formal arguments · generative AI · argument graphs · assurance cases · certification · RAG · provenance

The pith

A compliance-by-construction architecture uses GenAI to draft formal arguments that must pass deterministic evidence and reasoning checks before acceptance.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a system that integrates generative AI into building structured formal arguments for high-stakes certification and compliance decisions. Each AI-generated fragment is treated as a claim that must link to verifiable evidence and satisfy explicit validation rules before it enters the official record. The architecture combines argument graphs, retrieval-augmented generation, a reasoning kernel, and provenance tracking to enforce this discipline. A sympathetic reader would care because it offers a route to faster argument construction without accepting hallucinated reasoning or losing auditability. The work presents the design and an evaluation approach based on enforceable invariants and examples.

Core claim

The central claim is that deterministic validation rules, applied through a reasoning and validation kernel within a typed Argument Graph representation, can prevent unsupported or hallucinated claims from GenAI from entering the decision record while still allowing the technology to accelerate the construction of evidence-linked formal arguments for certification-grade accountability.

What carries the argument

The compliance-by-construction architecture, which structures each AI-assisted step as a claim that must be supported by verifiable evidence and validated against explicit completeness and admissibility constraints before acceptance.
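The paper does not publish the kernel's data model, so the following is only a minimal sketch of what a typed claim node with a mandatory evidence link and a deterministic admissibility gate might look like; all names, types, and the example identifiers are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Evidence:
    """A verifiable artifact retrieved from an authoritative source."""
    source_uri: str
    excerpt: str

@dataclass
class Claim:
    """A typed node in the argument graph (hypothetical schema)."""
    text: str
    claim_type: str                      # e.g. "safety-goal", "sub-claim"
    evidence: list[Evidence] = field(default_factory=list)

def admissible(claim: Claim) -> bool:
    """Deterministic gate: a claim enters the record only if its text
    is non-empty and at least one evidence link is attached."""
    return bool(claim.text.strip()) and bool(claim.evidence)

draft = Claim(text="Sensor redundancy meets the availability target.",
              claim_type="sub-claim")
assert not admissible(draft)             # blocked: no evidence link yet
draft.evidence.append(Evidence("doc://test-report-12", "MTBF analysis"))
assert admissible(draft)                 # accepted once evidence is attached
```

The point of the sketch is that the gate is a pure predicate over the fragment, so the same draft always receives the same verdict regardless of which model produced it.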

If this is right

  • GenAI can accelerate formal argument construction for safety-critical systems while maintaining traceability.
  • Unsupported claims are blocked from the decision record by the validation kernel.
  • Full auditability is supported through a W3C PROV-aligned provenance ledger.
  • Argument graphs become the enforceable structure that ties claims, reasoning, and evidence together.
  • The approach enables compliance by construction rather than post-hoc review.
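The paper aligns its ledger with the W3C PROV standard but does not show a record format; a minimal sketch of how a validated fragment might be logged using PROV-DM's entity/activity/agent core follows. The field names mirror PROV-DM concepts, while the identifiers and record shape are hypothetical.

```python
from datetime import datetime, timezone

def prov_record(claim_id: str, model: str, validator: str) -> dict:
    """Minimal PROV-DM-shaped ledger entry: the claim (entity) was
    generated by a drafting activity, attributed to a GenAI agent,
    and then checked by the validation kernel (also an agent)."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "entity": {claim_id: {"prov:type": "argument-fragment"}},
        "activity": {"act:draft": {"prov:endTime": now},
                     "act:validate": {"prov:endTime": now}},
        "agent": {model: {"prov:type": "prov:SoftwareAgent"},
                  validator: {"prov:type": "prov:SoftwareAgent"}},
        "wasGeneratedBy": {"_:g1": {"prov:entity": claim_id,
                                    "prov:activity": "act:draft"}},
        "wasAttributedTo": {"_:a1": {"prov:entity": claim_id,
                                     "prov:agent": model}},
    }

# Hypothetical usage: one ledger entry per accepted fragment.
ledger_entry = prov_record("claim:42", "agent:genai-model", "agent:kernel")
```

An append-only log of such records is what would let an auditor walk back from any claim in the decision record to the agent and activity that produced it.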

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This pattern could reduce manual certification effort in domains that already use assurance cases.
  • If the kernel proves reliable, the same structure might apply to other AI-assisted high-stakes workflows such as regulatory submissions.
  • The design implicitly connects GenAI assistance to existing formal methods for safety argumentation.
  • A practical next step would be to measure the kernel's detection rate on real certification documents containing known subtle errors.

Load-bearing premise

The reasoning and validation kernel can reliably detect and block hallucinated or incomplete reasoning in GenAI outputs without missing subtle gaps or requiring human intervention.

What would settle it

A concrete test case in which the kernel accepts a GenAI-generated argument fragment that contains a subtle unsupported claim or missing evidence link, allowing it into the official decision record.

Figures

Figures reproduced from arXiv: 2604.04103 by Mahyar T. Moghaddam.

Figure 1. Compliance-by-construction pipeline for AI-assisted argument generation.
Figure 2. Layered architecture for compliance-by-construction argument generation.
Figure 3. Example argument graph fragment showing hierarchical claims, …
read the original abstract

High-stakes decision systems increasingly require structured justification, traceability, and auditability to ensure accountability and regulatory compliance. Formal arguments commonly used in the certification of safety-critical systems provide a mechanism for structuring claims, reasoning, and evidence in a verifiable manner. At the same time, generative artificial intelligence systems are increasingly integrated into decision-support workflows, assisting with drafting explanations, summarizing evidence, and generating recommendations. However, current deployments often rely on language models as loosely constrained assistants, which introduces risks such as hallucinated reasoning, unsupported claims, and weak traceability. This paper proposes a compliance-by-construction architecture that integrates Generative AI (GenAI) with structured formal argument representations. The approach treats each AI-assisted step as a claim that must be supported by verifiable evidence and validated against explicit reasoning constraints before it becomes part of an official decision record. The architecture combines four components: i) a typed Argument Graph representation inspired by assurance-case methods, ii) retrieval-augmented generation (RAG) to draft argument fragments grounded in authoritative evidence, iii) a reasoning and validation kernel enforcing completeness and admissibility constraints, and iv) a provenance ledger aligned with the W3C PROV standard to support auditability. We present a system design and an evaluation strategy based on enforceable invariants and worked examples. The analysis suggests that deterministic validation rules can prevent unsupported claims from entering the decision record while allowing GenAI to accelerate argument construction.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes a compliance-by-construction architecture integrating generative AI with typed Argument Graphs (inspired by assurance cases), retrieval-augmented generation for evidence grounding, a reasoning and validation kernel enforcing completeness and admissibility constraints, and a W3C PROV-aligned provenance ledger. The central claim is that deterministic validation rules in the kernel can prevent hallucinated or unsupported claims from entering the decision record while allowing GenAI to accelerate argument construction in high-stakes certification contexts.

Significance. If the validation invariants prove sound and complete for open-ended GenAI fragments, the architecture could meaningfully advance accountable AI use in safety-critical domains by combining formal argument structures with generative assistance and audit trails. The proposal usefully synthesizes assurance-case methods, RAG, and provenance standards into a single workflow, and the emphasis on enforceable invariants and worked examples provides a clear starting point for further development.

major comments (2)
  1. [reasoning and validation kernel component] The description of the reasoning and validation kernel states that it enforces completeness and admissibility constraints on typed Argument Graphs, but supplies neither the concrete predicates/invariants nor any argument for their coverage of cases such as implicit assumptions, missing evidence links, or context-dependent logical gaps typical in GenAI outputs. This leaves the 'compliance-by-construction' guarantee resting on an unverified assumption that the rules are both sound and complete.
  2. [evaluation strategy] The evaluation strategy is described as relying on enforceable invariants and worked examples, yet the manuscript presents only a high-level system design with no implemented rules, machine-checked proofs, empirical results, or data demonstrating that the invariants actually block unsupported claims in practice.
minor comments (2)
  1. [system design] A diagram showing the data flow between the four components (Argument Graph, RAG, kernel, and provenance ledger) would clarify how fragments are validated before ledger entry.
  2. [introduction] The abstract and introduction use 'deterministic validation rules' without defining the term or contrasting it with probabilistic checks; a brief formalization or pseudocode example would help.
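The contrast the referee asks for can be made concrete: a deterministic validation rule is a pure predicate over the fragment, while a probabilistic check scores it with a model against a tunable threshold. Both functions below are illustrative, not from the paper.

```python
from typing import Callable

def deterministic_rule(claim: dict) -> bool:
    """Pure predicate: same input, same verdict, no model in the loop."""
    return bool(claim.get("text")) and len(claim.get("evidence", [])) >= 1

def probabilistic_check(claim: dict,
                        score_fn: Callable[[dict], float],
                        threshold: float = 0.9) -> bool:
    """Model-scored check: the verdict depends on a learned scorer and
    a tunable threshold, so it is not reproducible by construction."""
    return score_fn(claim) >= threshold

claim = {"text": "Brakes meet the stopping-distance requirement.",
         "evidence": ["doc://brake-test-7"]}
assert deterministic_rule(claim)                 # always the same verdict
assert not deterministic_rule({"text": "Brakes are fine."})  # no evidence
```

Only the first kind of check can underwrite a compliance-by-construction guarantee, since its verdicts are auditable without re-running a model.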

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We address each major comment point by point below, indicating planned revisions where appropriate. The manuscript is a high-level system design proposal, which shapes our responses.

read point-by-point responses
  1. Referee: [reasoning and validation kernel component] The description of the reasoning and validation kernel states that it enforces completeness and admissibility constraints on typed Argument Graphs, but supplies neither the concrete predicates/invariants nor any argument for their coverage of cases such as implicit assumptions, missing evidence links, or context-dependent logical gaps typical in GenAI outputs. This leaves the 'compliance-by-construction' guarantee resting on an unverified assumption that the rules are both sound and complete.

    Authors: We agree that the manuscript describes the kernel at an architectural level and does not enumerate concrete predicates or provide a formal argument for their soundness and completeness. The compliance-by-construction claim rests on the principle that deterministic rules can be defined to enforce constraints before any fragment enters the record, but we acknowledge the current text leaves coverage of GenAI-specific issues (implicit assumptions, missing links, logical gaps) implicit. In the revised manuscript we will add a dedicated subsection with example invariants, such as: (i) every claim node must have at least one evidence link to a retrieved source or explicit justification; (ii) no inference edge may be added without a typed reasoning rule that matches the claim and evidence types; and (iii) context-dependent gaps are flagged by requiring explicit assumption nodes that themselves must be supported. We will also sketch how these rules address the listed cases and note that full machine-checked proofs remain future work. revision: yes

  2. Referee: [evaluation strategy] The evaluation strategy is described as relying on enforceable invariants and worked examples, yet the manuscript presents only a high-level system design with no implemented rules, machine-checked proofs, empirical results, or data demonstrating that the invariants actually block unsupported claims in practice.

    Authors: The manuscript is explicitly positioned as a system design and architecture proposal rather than an implemented or empirically evaluated system. Consequently it contains no running code, machine-checked proofs, or quantitative results, and the evaluation strategy is described only at the level of how invariants and worked examples would be applied once the kernel is realized. We accept that this leaves the practical blocking power of the invariants undemonstrated. In revision we will (a) explicitly label the contribution as a design framework, (b) expand the evaluation-strategy section with a concrete plan for constructing worked examples (including GenAI-generated fragments that contain the problematic cases listed in the first comment) and (c) outline a staged validation roadmap that includes both manual review of examples and planned formal verification. This clarifies the paper's scope without overstating current evidence. revision: partial
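The three example invariants promised in the rebuttal can be sketched as deterministic graph checks. The node and edge shapes below are hypothetical, since the paper defines none concretely; the function returns the list of violations rather than a bare verdict so that rejected fragments are explainable.

```python
def check_invariants(graph: dict) -> list[str]:
    """Violations of the rebuttal's three example invariants:
    (i)   every claim has an evidence link or explicit justification,
    (ii)  every inference edge cites a typed reasoning rule matching
          the (source kind, target kind) of its endpoints, and
    (iii) assumption nodes must themselves be supported."""
    violations = []
    nodes, edges, rules = graph["nodes"], graph["edges"], graph["rules"]
    for nid, n in nodes.items():
        if n["kind"] == "claim" and not (n.get("evidence") or n.get("justification")):
            violations.append(f"{nid}: claim without evidence/justification")
        if n["kind"] == "assumption" and not n.get("evidence"):
            violations.append(f"{nid}: unsupported assumption")
    for e in edges:
        rule = rules.get(e.get("rule"))
        if rule is None or (nodes[e["src"]]["kind"], nodes[e["dst"]]["kind"]) != rule:
            violations.append(f"{e['src']}->{e['dst']}: no matching typed rule")
    return violations

g = {"nodes": {"c1": {"kind": "claim", "evidence": ["doc://e1"]},
               "c2": {"kind": "claim"}},          # violates invariant (i)
     "edges": [{"src": "c1", "dst": "c2", "rule": "r1"}],
     "rules": {"r1": ("claim", "claim")}}
assert check_invariants(g) == ["c2: claim without evidence/justification"]
```

Whether rules of this shape cover the subtler failure modes the referee lists (implicit assumptions, context-dependent gaps) is exactly the open question the revision would need to address.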

Circularity Check

0 steps flagged

No circularity: architecture proposal with no derivations or self-referential reductions

full rationale

The paper is a forward-looking system design proposal that outlines four components (typed Argument Graph, RAG drafting, reasoning/validation kernel, and PROV-aligned ledger) and an evaluation strategy based on invariants and worked examples. No equations, fitted parameters, predictions of new quantities from inputs, or load-bearing self-citations appear in the provided text or abstract. The central claim—that deterministic validation rules can enforce compliance-by-construction—rests on the (unspecified) design of those rules rather than any derivation that reduces to its own inputs by construction. The skeptic concern about rule completeness is a specification gap, not a circularity in any derivation chain. The work is therefore self-contained as an architectural sketch without circular elements.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The proposal rests on standard domain assumptions from assurance cases and AI safety literature without introducing new free parameters or invented entities.

axioms (2)
  • domain assumption Formal argument graphs can structure claims, reasoning, and evidence in a verifiable manner for certification
    Invoked when describing the typed Argument Graph representation inspired by assurance-case methods
  • domain assumption Retrieval-augmented generation can produce fragments grounded in authoritative evidence
    Assumed when stating that RAG drafts argument fragments

pith-pipeline@v0.9.0 · 5558 in / 1247 out tokens · 25262 ms · 2026-05-13T17:11:12.245353+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

18 extracted references · 18 canonical work pages

  1. [1] Regulation (EU) 2024/2847 of the European Parliament and of the Council (Cyber Resilience Act)
     European Union, "Regulation (EU) 2024/2847 of the European Parliament and of the Council," Regulation (EU), 2024

  2. [2] Human–AI interactions in public sector decision making: "automation bias"
     S. Alon-Barkat and M. Busuioc, "Human–AI interactions in public sector decision making: 'automation bias' and 'selective adherence' to algorithmic advice," Journal of Public Administration Research and Theory, vol. 33, no. 1, pp. 153–169, 2023

  3. [3] Algorithmic accountability in public administration: The GDPR paradox
     S. S. Kang, "Algorithmic accountability in public administration: The GDPR paradox," in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 32–32

  4. [4] A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions
     L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, Q. Chen, W. Peng, X. Feng, B. Qin et al., "A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions," ACM Transactions on Information Systems, vol. 43, no. 2, pp. 1–55, 2025

  5. [5] GSN Community Standard Version 1
     Goal Structuring Notation Working Group, "GSN Community Standard Version 1," Origin Consulting (York) Ltd., Tech. Rep., Nov. 2011, accessed 2026-03-08. [Online]. Available: https://www.faa.gov/about/office org/headquarters offices/ang/redac/redac-sas-201503-gsn-community-standard-v1.pdf

  6. [6] The uses of argument
     S. E. Toulmin, The Uses of Argument. Cambridge University Press, 2003

  7. [7] Retrieval-augmented generation for knowledge-intensive NLP tasks
     P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel et al., "Retrieval-augmented generation for knowledge-intensive NLP tasks," Advances in Neural Information Processing Systems, vol. 33, pp. 9459–9474, 2020

  8. [8] Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing
     I. D. Raji, A. Smart, R. N. White, M. Mitchell, T. Gebru, B. Hutchinson, J. Smith-Loud, D. Theron, and P. Barnes, "Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing," in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 33–44

  9. [9] PROV-DM: The PROV data model
     K. Belhajjame, R. B'Far, J. Cheney, S. Coppens, S. Cresswell, Y. Gil, P. Groth, G. Klyne, T. Lebo, J. McCusker et al., "PROV-DM: The PROV data model," W3C Recommendation, vol. 14, pp. 15–16, 2013

  10. [10] Reviewable automated decision-making: A framework for accountable algorithmic systems
     J. Cobbe, M. S. A. Lee, and J. Singh, "Reviewable automated decision-making: A framework for accountable algorithmic systems," in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 598–609

  11. [11] On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games
     P. M. Dung, "On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games," Artificial Intelligence, vol. 77, no. 2, pp. 321–357, 1995

  12. [12] Prompting GPT-4 to support automatic safety case generation
     M. Sivakumar, A. B. Belle, J. Shan, and K. K. Shahandashti, "Prompting GPT-4 to support automatic safety case generation," Expert Systems with Applications, vol. 255, p. 124653, 2024

  13. [13] Large language models in argument mining: A survey
     H. Li, V. Schlegel, Y. Sun, R. Batista-Navarro, and G. Nenadic, "Large language models in argument mining: A survey," arXiv preprint arXiv:2506.16383, 2025

  14. [14] Decision provenance: Harnessing data flow for accountable systems
     J. Singh, J. Cobbe, and C. Norval, "Decision provenance: Harnessing data flow for accountable systems," IEEE Access, vol. 7, pp. 6562–6574, 2018

  15. [15] Regulation (EU) 2024/1689 of the European Parliament and of the Council (AI Act)
     European Union, "Regulation (EU) 2024/1689 of the European Parliament and of the Council," Regulation (EU), 2024

  16. [16] Algorithmic transparency recording standard hub
     GOV.UK, "Algorithmic transparency recording standard hub," 2023

  17. [17] Communication-efficient learning of deep networks from decentralized data
     B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Artificial Intelligence and Statistics. PMLR, 2017, pp. 1273–1282

  18. [18] Practical secure aggregation for federated learning on user-held data
     K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth, "Practical secure aggregation for federated learning on user-held data," arXiv preprint arXiv:1611.04482, 2016