Recognition: 2 theorem links
· Lean Theorem · AI Agents Under EU Law
Pith reviewed 2026-05-10 19:36 UTC · model grok-4.3
The pith
High-risk AI agents with untraceable behavioral drift cannot currently satisfy the EU AI Act's essential requirements.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors argue that high-risk agentic systems with untraceable behavioral drift cannot currently satisfy the AI Act's essential requirements. The provider's foundational compliance task is an exhaustive inventory of the agent's external actions, data flows, connected systems, and affected persons. This conclusion rests on integration of draft harmonised standards under M/613, the GPAI Code of Practice, CRA standards under M/606, and Digital Omnibus proposals, supported by a taxonomy of nine agent deployment categories and a regulatory trigger mapping that links concrete actions to applicable legislation.
What carries the argument
The twelve-step compliance architecture and the taxonomy of nine agent deployment categories that map concrete actions to regulatory triggers across the AI Act and overlapping EU laws.
If this is right
- Providers must inventory every external action, data flow, and affected person before deploying high-risk agents.
- Untraceable drift creates a structural barrier that current standards cannot overcome for high-risk systems.
- Compliance architecture must simultaneously address the AI Act, GDPR, Cyber Resilience Act, and sector rules.
- The regulatory trigger mapping allows specific agent behaviors to be tied directly to required obligations.
- Failure to complete the inventory leaves high-risk agents ineligible for EU market placement.
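The inventory the paper treats as foundational can be pictured as a small data model. The sketch below is illustrative only: the record fields and the trigger flags are hypothetical names, not taken from the paper's taxonomy or from the regulations themselves.

```python
from dataclasses import dataclass

# Hypothetical inventory record for one external action of an agent.
# Field names mirror the paper's four inventory dimensions but are
# otherwise invented for illustration.
@dataclass(frozen=True)
class ExternalAction:
    name: str                  # e.g. "screen_cv"
    data_flows: tuple          # categories of data touched
    connected_systems: tuple   # external systems invoked
    affected_persons: tuple    # groups of people affected

# Illustrative trigger mapping: which EU instruments a flagged
# behaviour might implicate. The flags are assumptions, not legal tests.
TRIGGER_MAP = {
    "processes_personal_data": "GDPR",
    "high_risk_use_case": "AI Act (Annex III)",
    "product_with_digital_elements": "Cyber Resilience Act",
}

def triggered_legislation(action: ExternalAction, flags: set) -> list:
    """Return the instruments plausibly triggered by an action's flags."""
    return sorted(TRIGGER_MAP[f] for f in flags if f in TRIGGER_MAP)

action = ExternalAction(
    name="screen_cv",
    data_flows=("applicant personal data",),
    connected_systems=("HR database",),
    affected_persons=("job applicants",),
)
print(triggered_legislation(action, {"processes_personal_data", "high_risk_use_case"}))
# → ['AI Act (Annex III)', 'GDPR']
```

The point of the sketch is structural: every deployed action gets an inventory record, and regulatory triggers are computed from declared properties rather than assessed ad hoc.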
Where Pith is reading between the lines
- Agent designs will need built-in audit logs and observability features to enable the required inventory.
- The inventory approach could inform compliance strategies in other jurisdictions adopting similar risk-based AI rules.
- Developers may need to restrict autonomy levels in high-risk domains until traceability improves.
- Standardised logging tools for multi-party action chains could emerge as a practical next step for industry.
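One plausible shape for such standardised logging is a hash-chained, append-only action log, which makes multi-party chains tamper-evident. This is a minimal sketch of that idea under our own assumptions; the paper does not prescribe any particular mechanism.

```python
import hashlib
import json

# Minimal tamper-evident log for multi-party agent action chains.
# Each entry embeds the hash of its predecessor, so any retroactive
# edit breaks verification from that point onward.
class ActionLog:
    def __init__(self):
        self.entries = []

    def append(self, actor: str, tool: str, payload: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = json.dumps(
            {"actor": actor, "tool": tool, "payload": payload, "prev": prev},
            sort_keys=True,
        )
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"body": body, "hash": digest})
        return digest

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            if json.loads(e["body"])["prev"] != prev:
                return False
            if hashlib.sha256(e["body"].encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = ActionLog()
log.append("agent-A", "search_db", {"query": "applicant 42"})
log.append("agent-B", "send_email", {"to": "hr@example.com"})
print(log.verify())  # → True
```

A chain like this gives each party in the action chain a verifiable record of what the agent did and in what order, which is the raw material any oversight or conformity assessment would need.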
Load-bearing premise
The assumption that the draft harmonised standards under M/613, the GPAI Code of Practice, CRA standards under M/606, and Digital Omnibus proposals will accurately represent the final enforceable obligations.
What would settle it
A documented high-risk AI agent that exhibits untraceable behavioral drift yet passes an official AI Act conformity assessment after the twelve-step compliance process is applied.
read the original abstract
AI agents - i.e. AI systems that autonomously plan, invoke external tools, and execute multi-step action chains with reduced human involvement - are being deployed at scale across enterprise functions ranging from customer service and recruitment to clinical decision support and critical infrastructure management. The EU AI Act (Regulation 2024/1689) regulates these systems through a risk-based framework, but it does not operate in isolation: providers face simultaneous obligations under the GDPR, the Cyber Resilience Act, the Digital Services Act, the Data Act, the Data Governance Act, sector-specific legislation, the NIS2 Directive, and the revised Product Liability Directive. This paper provides the first systematic regulatory mapping for AI agent providers integrating (a) draft harmonised standards under Standardisation Request M/613 to CEN/CENELEC JTC 21 as of January 2026, (b) the GPAI Code of Practice published in July 2025, (c) the CRA harmonised standards programme under Mandate M/606 accepted in April 2025, and (d) the Digital Omnibus proposals of November 2025. We present a practical taxonomy of nine agent deployment categories mapping concrete actions to regulatory triggers, identify agent-specific compliance challenges in cybersecurity, human oversight, transparency across multi-party action chains, and runtime behavioral drift. We propose a twelve-step compliance architecture and a regulatory trigger mapping connecting agent actions to applicable legislation. We conclude that high-risk agentic systems with untraceable behavioral drift cannot currently satisfy the AI Act's essential requirements, and that the provider's foundational compliance task is an exhaustive inventory of the agent's external actions, data flows, connected systems, and affected persons.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper provides the first systematic regulatory mapping for providers of AI agents (autonomous systems that plan, invoke tools, and execute multi-step actions) under the EU AI Act (Regulation 2024/1689) and intersecting legislation including GDPR, CRA, DSA, and NIS2. It introduces a nine-category taxonomy of agent deployment scenarios that maps concrete actions to regulatory triggers, identifies agent-specific challenges in cybersecurity, human oversight, transparency in multi-party chains, and runtime behavioral drift, and proposes a twelve-step compliance architecture. The central conclusion is that high-risk agentic systems with untraceable behavioral drift cannot satisfy the AI Act's essential requirements, with the provider's foundational task being an exhaustive inventory of external actions, data flows, connected systems, and affected persons; the analysis draws on draft harmonised standards under M/613 (Jan 2026), GPAI Code of Practice (July 2025), CRA M/606 (April 2025), and Digital Omnibus proposals (Nov 2025).
Significance. If the interpretive mappings to the cited drafts prove accurate to the final enforceable texts, the paper would offer significant practical value by translating overlapping EU obligations into a concrete taxonomy and compliance roadmap tailored to agentic systems. The explicit linkage of agent actions to regulatory triggers and the twelve-step architecture provide actionable structure for providers facing multi-party chains and drift risks, filling a gap in current guidance. The work also correctly flags that inventory of actions and data flows is a prerequisite for risk management and transparency obligations.
major comments (2)
- [Abstract and Conclusion] Abstract and concluding section: The claim that 'high-risk agentic systems with untraceable behavioral drift cannot currently satisfy the AI Act's essential requirements' is load-bearing for the paper's contribution, yet it is derived entirely from the specific versions of the draft harmonised standards (M/613 as of January 2026, GPAI Code July 2025, CRA M/606 April 2025, Digital Omnibus November 2025). The manuscript contains no sensitivity analysis or discussion of how alterations to final texts on human oversight, multi-party transparency, or acceptable risk-management measures would affect this conclusion, undermining the durability of the central claim.
- [Twelve-step compliance architecture] Section on the twelve-step compliance architecture: The architecture treats the exhaustive inventory of external actions, data flows, connected systems, and affected persons as the 'foundational compliance task,' but does not demonstrate how this inventory would be maintained dynamically in the presence of runtime behavioral drift or tool invocation chains; without addressing update mechanisms or auditability, the architecture risks being incomplete for the very drift scenarios identified as disqualifying.
minor comments (2)
- [Abstract] The abstract lists specific dates for the draft documents (e.g., 'as of January 2026'); these should be cross-referenced in a dedicated table or footnote in the main text to allow readers to locate the exact versions used.
- [Taxonomy of Agent Deployment Categories] The nine-category taxonomy is presented without concrete real-world deployment examples or case studies; adding one or two illustrative scenarios per category would improve clarity of how actions map to triggers.
Simulated Author's Rebuttal
We are grateful to the referee for their insightful review and for acknowledging the practical value of our regulatory mapping and compliance architecture for AI agent providers. We have addressed each major comment below, proposing targeted revisions to strengthen the manuscript's durability and completeness.
read point-by-point responses
-
Referee: [Abstract and Conclusion] Abstract and concluding section: The claim that 'high-risk agentic systems with untraceable behavioral drift cannot currently satisfy the AI Act's essential requirements' is load-bearing for the paper's contribution, yet it is derived entirely from the specific versions of the draft harmonised standards (M/613 as of January 2026, GPAI Code July 2025, CRA M/606 April 2025, Digital Omnibus November 2025). The manuscript contains no sensitivity analysis or discussion of how alterations to final texts on human oversight, multi-party transparency, or acceptable risk-management measures would affect this conclusion, undermining the durability of the central claim.
Authors: We concur that the absence of a sensitivity analysis limits the long-term applicability of our central claim. In the revised manuscript, we will insert a new paragraph in the conclusion that discusses the potential effects of changes to the final versions of the referenced standards and codes. We will emphasize that the AI Act's core obligations for risk management (Article 9) and human oversight (Article 14) require traceability and verifiability, which untraceable drift fundamentally precludes. While specific implementation details in harmonised standards may shift, the structural incompatibility is likely to persist. This addition will qualify our conclusion appropriately without altering its substance. revision: partial
-
Referee: [Twelve-step compliance architecture] Section on the twelve-step compliance architecture: The architecture treats the exhaustive inventory of external actions, data flows, connected systems, and affected persons as the 'foundational compliance task,' but does not demonstrate how this inventory would be maintained dynamically in the presence of runtime behavioral drift or tool invocation chains; without addressing update mechanisms or auditability, the architecture risks being incomplete for the very drift scenarios identified as disqualifying.
Authors: This is a valid critique of our architecture's presentation. To rectify this, we will revise the relevant section to explicitly outline dynamic maintenance procedures. These will include continuous logging of tool invocations, automated detection of behavioral deviations using monitoring agents, periodic re-inventory protocols triggered by detected changes, and requirements for auditable records of all updates. We will also note that these measures aim to mitigate drift but may not fully resolve untraceable cases, thereby aligning with and reinforcing our conclusion that certain high-risk systems cannot meet the essential requirements. This will render the architecture more actionable for providers. revision: yes
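The re-inventory trigger the authors propose could be approximated as follows. This is a sketch under stated assumptions: the drift signal (total variation distance between tool-usage distributions) and the threshold are our illustrative choices, not anything specified in the manuscript.

```python
from collections import Counter

def tool_distribution(calls):
    """Relative frequency of each tool in a sequence of invocations."""
    total = len(calls)
    return {tool: n / total for tool, n in Counter(calls).items()}

def drift_score(baseline, window):
    """Total variation distance between two tool-usage distributions."""
    tools = set(baseline) | set(window)
    return 0.5 * sum(abs(baseline.get(t, 0) - window.get(t, 0)) for t in tools)

def needs_reinventory(baseline_calls, recent_calls, threshold=0.2):
    """Flag a re-inventory when recent behaviour drifts past the threshold."""
    return drift_score(tool_distribution(baseline_calls),
                       tool_distribution(recent_calls)) > threshold

baseline = ["search", "search", "summarise", "email"]
recent = ["email", "email", "email", "payment_api"]  # new, uninventoried tool
print(needs_reinventory(baseline, recent))  # → True
```

Any real deployment would need a richer signal than tool frequencies, but the control flow is the point: monitoring feeds a detector, and detection triggers the re-inventory protocol rather than relying on periodic manual review.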
Circularity Check
No significant circularity; analysis draws directly from external regulatory texts.
full rationale
The paper performs a regulatory mapping and compliance analysis for AI agents by referencing external sources including the EU AI Act (Regulation 2024/1689), draft harmonised standards under M/613, the GPAI Code of Practice, CRA standards under M/606, and Digital Omnibus proposals. It introduces a nine-category taxonomy and twelve-step architecture as interpretive tools derived from these documents rather than from any internal equations, fitted parameters, or self-referential definitions. No self-citation chains, uniqueness theorems imported from prior author work, or renamings of known results appear as load-bearing steps. The central conclusion that high-risk agentic systems cannot satisfy essential requirements follows from the cited obligations on human oversight, transparency, and risk management, remaining self-contained against external benchmarks.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption The EU AI Act risk-based framework and listed overlapping regulations apply directly to autonomous multi-step AI agents as described.
Lean theorems connected to this paper
-
IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean · reality_from_one_distinction · unclear
unclear: Relation between the paper passage and the cited Recognition theorem.
We present a practical taxonomy of nine agent deployment categories mapping concrete actions to regulatory triggers... We propose a twelve-step compliance architecture...
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear
unclear: Relation between the paper passage and the cited Recognition theorem.
high-risk agentic systems with untraceable behavioral drift cannot currently satisfy the AI Act's essential requirements
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Forward citations
Cited by 4 Pith papers
-
Governing What the EU AI Act Excludes: Accountability for Autonomous AI Agents in Smart City Critical Infrastructure
The EU AI Act narrows accountability for multi-agent AI in critical infrastructure by excluding safety components from key explanation and impact assessment rights, and the paper proposes AgentGov-SC, a three-layer ar...
-
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
Claude Code centers on a model-tool while-loop surrounded by permission systems, context compaction, extensibility hooks, subagent delegation, and session storage; the same design questions yield different answers in ...
-
Decision Evidence Maturity Model for Agentic AI: A Property-Level Method Specification
DEMM defines four executable evidence-sufficiency categories plus a conflicting category for agentic AI decisions and rolls per-property verdicts into a five-level maturity rubric.
-
Making AI Compliance Evidence Machine-Readable
OSCAL is extended with 16 AI-specific properties and a three-layer Compliance-as-Code architecture to generate validated assurance evidence automatically as a byproduct of training high-risk AI systems.
Reference graph
Works this paper leans on
-
[1]
Formal Opinion 512: Generative Artificial Intelligence Tools
American Bar Association, Standing Committee on Ethics and Professional Responsibility. “Formal Opinion 512: Generative Artificial Intelligence Tools.” 29 July 2024. https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf
2024
-
[2]
Agentic Artificial Intelligence from the perspective of Data Protection
Agencia Española de Protección de Datos (AEPD). “Agentic Artificial Intelligence from the perspective of Data Protection.” 18 February 2026. https://www.aepd.es/en/guides/agentic-artificial-intelligence.pdf
2026
-
[3]
Warning on highly autonomous AI agents and GDPR accountability
Autoriteit Persoonsgegevens (Dutch Data Protection Authority). Warning on highly autonomous AI agents and GDPR accountability. February 2026. https://www.autoriteitpersoonsgegevens.nl/en/current/ap-warns-of-major-security-risks-with-ai-agents-like-openclaw
2026
-
[4]
Systemic Risks Associated with Agentic AI: A Policy Brief
Bellogin, A., Giudici, P., Larsson, S., Pang, J., Schimpf, G., Sengupta, B., & Solmaz, G. (2025). “Systemic Risks Associated with Agentic AI: A Policy Brief.” ACM Europe TPC-Autonomous Systems Subcommittee. 15 October 2025. https://www.acm.org/binaries/content/assets/public-policy/europe-tpc/systemic_risks_agentic_ai_policy-brief_final.pdf
2025
-
[5]
Artificial Intelligence and Civil Liability – A European Perspective
Bertolini, A. et al. “Artificial Intelligence and Civil Liability – A European Perspective.” European Parliament commissioned study (PE 776.426), 24 July 2025. https://www.europarl.europa.eu/RegData/etudes/STUD/2025/776426/IUST_STU(2025)776426_EN.pdf
2025
-
[6]
On the Opportunities and Risks of Foundation Models
Bommasani, R. et al. “On the Opportunities and Risks of Foundation Models.” arXiv:2108.07258 [cs.LG], August 2021. https://arxiv.org/abs/2108.07258
2021
-
[7]
Certifying Legal AI Assistants for Unrepresented Litigants: A Global Survey of Access to Civil Justice, Unauthorized Practice of Law, and AI
Bonardi, M. & Branting, L. “Certifying Legal AI Assistants for Unrepresented Litigants: A Global Survey of Access to Civil Justice, Unauthorized Practice of Law, and AI.” 26 Science and Technology Law Review 1 (2025). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4901658
2025
-
[8]
SB 53, Transparency in Frontier Artificial Intelligence Act
California Legislature. SB 53, Transparency in Frontier Artificial Intelligence Act. Chapter 138, Statutes of 2025. Effective 1 January 2026. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53
2026
-
[10]
Civil Rights Council Secures Approval for Regulations to Protect Against Employment Discrimination Related to Artificial Intelligence
California Civil Rights Department. “Civil Rights Council Secures Approval for Regulations to Protect Against Employment Discrimination Related to Artificial Intelligence.” 30 June 2025. https://calcivilrights.ca.gov/2025/06/30/civil-rights-council-secures-approval-for-regulations-to-protect-against-employment-discrimination-related-to-artificial-intelligence/
2025
-
[11]
The 2025 AI Agent Index: Documenting Technical and Safety Features of Deployed Agentic AI Systems
Casper, S., Kolt, N. et al. “The 2025 AI Agent Index: Documenting Technical and Safety Features of Deployed Agentic AI Systems.” 2025. https://arxiv.org/abs/2602.17753
2025
-
[12]
CDT Europe’s AI Bulletin: March 2026
Center for Democracy and Technology Europe. “CDT Europe’s AI Bulletin: March 2026.” 26 March 2026. https://cdt.org/insights/cdt-europes-ai-bulletin-march-2026/
2026
-
[13]
Artificial Intelligence
CEN-CENELEC JTC 21. “Artificial Intelligence.” https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/
-
[14]
Update on CEN and CENELEC’s Decision to Accelerate the Development of Standards for Artificial Intelligence
CEN-CENELEC. “Update on CEN and CENELEC’s Decision to Accelerate the Development of Standards for Artificial Intelligence.” 23 October 2025. https://www.cencenelec.eu/news-events/news/2025/brief-news/2025-10-23-ai-standardization/
2025
-
[15]
Cyber Resilience Act: Standardization Request Officially Accepted by CEN, CENELEC, and ETSI
CEN-CENELEC. “Cyber Resilience Act: Standardization Request Officially Accepted by CEN, CENELEC, and ETSI.” 3 April 2025. https://www.cencenelec.eu/news-events/news/2025/newsletter/ots-62-cra/
2025
-
[16]
Standardisation Request M/606 (Cyber Resilience Act)
CEN/CENELEC/ETSI. Standardisation Request M/606 (Cyber Resilience Act). Accepted 3 April 2025. https://www.cencenelec.eu/news-events/news/2025/newsletter/ots-62-cra/
2025
-
[17]
European Commission. Standardisation Request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence. C(2023) 3215 final (M/593, as amended by M/613). https://ec.europa.eu/growth/tools-databases/enorm/mandate/593_en
2023
-
[18]
prEN 18228: Artificial Intelligence – Risk Management
CEN/CENELEC JTC 21. prEN 18228: Artificial Intelligence – Risk Management. Working draft, January 2026. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:79438,25&cs=126AEABA70EBCF2433A6A5472A8FD6F84
2026
-
[19]
prEN 18229-1: Artificial Intelligence – AI trustworthiness framework – Part 1: Logging, transparency and human oversight
CEN/CENELEC JTC 21. prEN 18229-1: Artificial Intelligence – AI trustworthiness framework – Part 1: Logging, transparency and human oversight. Working draft, January 2026. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:76986,25&cs=17E4F9ABCBAC14D2D4C9D36D274FAA1FB
2026
-
[20]
prEN 18229-2: Artificial Intelligence – Trustworthiness Framework – Part 2: Accuracy and Robustness
CEN/CENELEC JTC 21. prEN 18229-2: Artificial Intelligence – Trustworthiness Framework – Part 2: Accuracy and Robustness. Working draft, January 2026. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:82493,25&cs=1400FEB0B6AA9D5AB34BF0233CC4E75B7
2026
-
[21]
prEN 18281: Artificial Intelligence — Evaluation methods for accurate computer vision systems
CEN/CENELEC JTC 21. prEN 18281: Artificial Intelligence — Evaluation methods for accurate computer vision systems. Working draft, January 2026. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:79657,25&cs=14D236550058D93DCA7973A19C8D13B24
2026
-
[22]
prEN 18282: Artificial Intelligence – Cybersecurity specifications for AI Systems
CEN/CENELEC JTC 21. prEN 18282: Artificial Intelligence – Cybersecurity specifications for AI Systems. Working draft, January 2026. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:79708,25&cs=1F22C53E33572EA17236E5EF8F9DE9DCD
2026
-
[23]
prEN 18283: Artificial Intelligence — Concepts, measures and requirements for managing bias in AI systems
CEN/CENELEC JTC 21. prEN 18283: Artificial Intelligence — Concepts, measures and requirements for managing bias in AI systems. Working draft, January 2026. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:80353,25&cs=1A0A0C0DBC7B012D69EA9AAAF8D5DFDA3
2026
-
[24]
prEN 18284: Artificial Intelligence — Quality and governance of datasets in AI
CEN/CENELEC JTC 21. prEN 18284: Artificial Intelligence — Quality and governance of datasets in AI, January 2026. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:80364,25&cs=16B97AF755CF36534FF119CB9782A0D1C
2026
-
[25]
prEN 18286: Artificial intelligence - Quality management system for EU AI Act regulatory purposes
CEN/CENELEC JTC 21. prEN 18286: Artificial intelligence - Quality management system for EU AI Act regulatory purposes. Working draft, January 2026. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:80556,25&cs=175392A97352F2C5B1211EEC4FA215C15
2026
-
[26]
SB 24-205, Concerning Consumer Protections for Artificial Intelligence
Colorado General Assembly. SB 24-205, Concerning Consumer Protections for Artificial Intelligence. Signed 17 May 2024. Effective date delayed to 30 June 2026 by SB 25B-004. https://leg.colorado.gov/bill_files/47770/download
2024
-
[27]
AI Visions in 2026: A Transatlantic Strategic Divide
Control Risks. “AI Visions in 2026: A Transatlantic Strategic Divide.” Early 2026. https://www.controlrisks.com/our-thinking/insights/ai-visions-in-2026-a-transatlantic-strategic-divide
2026
-
[28]
HUDERIA: Human Rights, Democracy and Rule of Law Impact Assessment for AI Systems
Council of Europe. “HUDERIA: Human Rights, Democracy and Rule of Law Impact Assessment for AI Systems.” Developed under the Framework Convention on AI (CETS No. 225), 2025. https://www.coe.int/en/web/artificial-intelligence/huderia
2025
-
[29]
General approach on the proposal for a Regulation amending Regulation (EU) 2024/1689 (Digital Omnibus on AI)
Council of the European Union. General approach on the proposal for a Regulation amending Regulation (EU) 2024/1689 (Digital Omnibus on AI). 13 March 2026. https://www.consilium.europa.eu/en/policies/digital-agenda/eu-ai-act/
2026
-
[30]
Systemic Risks of Interacting AI
Darius, P., Hoppe, T., and Aleksandrov, A. “Systemic Risks of Interacting AI.” arXiv:2512.17793 [cs.AI], December 2025. https://arxiv.org/abs/2512.17793
2025
-
[31]
Algorithmic discrimination under the AI Act and the GDPR
De Luca, S. “Algorithmic discrimination under the AI Act and the GDPR.” European Parliamentary Research Service, 26 February 2025. https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)769509
2025
-
[32]
AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways
Deng, Z., Guo, Y., Han, C., Ma, W., Xiong, J., Wen, S., & Xiang, Y. “AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways.” ACM Computing Surveys, 57(7), 1-36. 2025. https://doi.org/10.1145/3716628
2025
-
[33]
Regulatory Innovation at the Crossroads: Five Years of Data on Entity-Regulation Reform in Arizona and Utah
Engstrom, D.F., Knowlton, L. & Ricca, D. “Regulatory Innovation at the Crossroads: Five Years of Data on Entity-Regulation Reform in Arizona and Utah.” Stanford Law School, 2 June 2025. https://law.stanford.edu/2025/06/02/regulatory-innovation-at-the-crossroads-five-years-of-data-on-entity-regulation-reform-in-arizona-and-utah/
2025
-
[34]
Security and Privacy Considerations in Autonomous Agents
ENISA. “Security and Privacy Considerations in Autonomous Agents.” https://www.enisa.europa.eu/publications/considerations-in-autonomous-agents
-
[35]
Cyber Resilience Act Requirements Standards Mapping
ENISA and JRC. “Cyber Resilience Act Requirements Standards Mapping.” 2024/2025. https://www.enisa.europa.eu/publications/cyber-resilience-act-requirements-standards-mapping
2024
-
[36]
ETSI EN 304 223 V2.1.1: Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems
ETSI TC CYBER SAI. ETSI EN 304 223 V2.1.1: Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems. European Telecommunications Standards Institute, December 2025. https://www.etsi.org/deliver/etsi_en/304200_304299/304223/02.01.01_60/en_304223v020101p.pdf
2025
-
[37]
ETSI EN 304 617 (draft): Cybersecurity requirements for browsers with digital elements
ETSI TC CYBER EUSR. ETSI EN 304 617 (draft): Cybersecurity requirements for browsers with digital elements. Working draft under CRA Standardisation Request M/606, available under Open Consultation. https://docbox.etsi.org/CYBER/EUSR/Open
2026
-
[38]
Frequently Asked Questions
European AI Office. “Frequently Asked Questions.” AI Act Service Desk, European Commission. https://ai-act-service-desk.ec.europa.eu/en/faq (accessed March 2026)
2026
-
[39]
Code of Practice on Marking and Labelling of AI-generated Content — Second Draft
European Commission / AI Office. “Code of Practice on Marking and Labelling of AI-generated Content — Second Draft.” 5 March 2026. https://digital-strategy.ec.europa.eu/en/library/commission-publishes-second-draft-code-practice-marking-and-labelling-ai-generated-content
2026
-
[40]
Blue Guide on the implementation of EU product rules 2022
European Commission. Blue Guide on the implementation of EU product rules 2022. OJ C 247/01, 29.6.2022. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:C:2022:247:TOC
2022
-
[41]
Commission calls on 10 Member States to comply with the Data Governance Act
European Commission. “Commission calls on 10 Member States to comply with the Data Governance Act.” 16 December 2024. https://digital-strategy.ec.europa.eu/en/news/commission-calls-10-member-states-comply-data-governance-act
2024
-
[42]
COM(2020) 64 final
European Commission. COM(2020) 64 final. Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. 19 February 2020. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0064
2020
-
[43]
Agentic AI: Leveraging European AI Talent and Regulatory Assets to Scale Adoption
European Commission. “Agentic AI: Leveraging European AI Talent and Regulatory Assets to Scale Adoption.” StepUp StartUps Initiative, 23 January 2026. https://digital-strategy.ec.europa.eu/en/library/agentic-ai-leveraging-european-ai-talent-and-regulatory-assets-scale-adoption
2026
-
[44]
Guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act
European Commission. “Guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act.” 2025. https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act
2025
-
[45]
COM(2025) 836 final
European Commission. COM(2025) 836 final. Proposal for a Regulation amending Regulation (EU) 2024/1689 (Digital Omnibus on AI). 19 November 2025. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52025PC0836
2025
-
[46]
COM(2025) 837 final
European Commission. COM(2025) 837 final. Proposal for a Regulation amending Regulation (EU) 2023/2854 and other digital legislation (Digital Legislation Omnibus). 19 November 2025. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52025PC0837
2025
-
[47]
Harmonised Standards for the European AI Act
European Commission JRC / AI Watch. “Harmonised Standards for the European AI Act.” Science for Policy Brief, 25 October 2024. https://ai-watch.ec.europa.eu/news/harmonised-standards-european-ai-act-2024-10-25_en
2024
-
[48]
Agentic AI
European Data Protection Supervisor. “Agentic AI.” TechSonar 2025–2026, 24 November 2025. https://www.edps.europa.eu/data-protection/technology-monitoring/techsonar/agentic-ai_en
2025
-
[49]
Report A10-0073/2026 on the proposal for a Regulation amending Regulation (EU) 2024/1689 (Digital Omnibus on AI)
European Parliament, Committees IMCO and LIBE. Report A10-0073/2026 on the proposal for a Regulation amending Regulation (EU) 2024/1689 (Digital Omnibus on AI). Adopted in committee 18 March 2026. https://www.europarl.europa.eu/doceo/document/A-10-2026-0073_EN.html
2026
-
[50]
Plenary vote validating negotiating mandate on Digital Omnibus on AI (A10-0073/2026)
European Parliament. Plenary vote validating negotiating mandate on Digital Omnibus on AI (A10-0073/2026). 26 March 2026. https://www.europarl.europa.eu/news/en/press-room/20260323IPR38829
2026
-
[51]
Digital Omnibus on AI
European Parliamentary Research Service. “Digital Omnibus on AI.” Briefing, 2026. https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2026)782651
2026
-
[52]
Published 10 July 2025
EU Code of Practice for General-Purpose AI Models. Published 10 July 2025. https://digital-strategy. ec.europa.eu/en/policies/contents-code-gpai
2025
-
[53]
Declaratory Ruling,Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, CG Docket No
Federal Communications Commission. Declaratory Ruling,Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, CG Docket No. 23-362, FCC 24-17, 39 FCC Rcd 1783 (2024). Adopted 2 February 2024. https://docs.fcc.gov/public/attachments/FCC-24-17A1.pdf
2024
-
[54]
FTC Launches Inquiry into AI Chatbots Acting as Companions
Federal Trade Commission. “FTC Launches Inquiry into AI Chatbots Acting as Companions.” Press Release, 11 September 2025. https://www.ftc.gov/news-events/news/press-releases/2025/09/ ftc-launches-inquiry-ai-chatbots-acting-companions
2025
-
[55] Federal Trade Commission. Children’s Online Privacy Protection Rule: Final Rule. 90 Fed. Reg. 16918 (22 April 2025) (codified at 16 C.F.R. Part 312). Compliance deadline: 22 April 2026. https://unblock.federalregister.gov/
[56] Federal Trade Commission. In the Matter of Rytr LLC, File No. 232-3052. Final Order set aside 22 December 2025 (2–0 vote, Chairman Ferguson and Commissioner Meador). https://www.ftc.gov/legal-library/browse/cases-proceedings/232-3052-rytr-llc-matter
[57] Federal Trade Commission. Policy Statement on AI and Section 5 of the FTC Act. 11 March 2026. https://www.jdsupra.com/legalnews/emerging-federal-ai-policy-what-to-know-8882048/#:~:text=January%2010%2C%202026:%20DOJ%20AI,the%20date%20of%20this%20alert
[58] Fink, M. “Human Oversight under Article 14 of the EU AI Act.” 2025. https://www.aigl.blog/content/files/2025/04/Human-Oversight-under-Article-14-of-the-EU-AI-Act.pdf
[59] Finnish Ministry of Economic Affairs and Employment. “National supervision of EU Artificial Intelligence Act to begin – laws on powers of authorities to take effect at start of the year.” 7 January 2026. https://tem.fi/en/-/1410877/national-supervision-of-eu-artificial-intelligence-act-to-begin-laws-on-powers-of-authorities-to-take-effect-at-start-of-the-year
[60] Serban, A., Rovilos, V., Demetzou, K. “Conformity Assessments under the EU AI Act: A Step-by-Step Guide.” Future of Privacy Forum / OneTrust, updated April 2025. https://fpf.org/wp-content/uploads/2025/04/OT-comformity-assessment-under-the-eu-ai-act-WP-1.pdf
[61] Future of Privacy Forum. “Understanding the New Wave of Chatbot Legislation: California SB 243 and Beyond.” 2025. https://fpf.org/blog/understanding-the-new-wave-of-chatbot-legislation-california-sb-243-and-beyond/
[62] Gardhouse, K., Oueslati, A., & Kolt, N. “Regulating AI Agents.” Working paper, 24 March 2026. https://arxiv.org/abs/2603.23471
[63] Graux, H., Garstka, K., Murali, N., Cave, J., Botterman, M. “Interplay between the AI Act and the EU digital legislative framework.” European Parliament, ITRE Committee, Policy Department for Transformation, Innovation and Health, 29–30 October 2025. https://www.europarl.europa.eu/thinktank/en/document/ECTI_STU(2025)778575
[64] Hammond, L., Chan, A., Clifton, J. et al. “Multi-Agent Risks from Advanced AI.” Cooperative AI Foundation Technical Report, 19 February 2025. https://arxiv.org/abs/2502.14143
[65] Hanssen, H. and Brockmeyer, J. “EU Cyber Resilience Act: Key 2026 Milestones toward CRA Compliance.” Hogan Lovells, 20 January 2026. https://www.hoganlovells.com/en/publications/eu-cyber-resilience-act-getting-ready-for-cra-compliance-in-2026
[66] IAPP. “EU AI Act Regulatory Directory.” Updated January 2026. https://iapp.org/resources/article/eu-ai-act-regulatory-directory
[67] IAPP. “European Commission misses deadline for AI Act guidance on high-risk systems.” 3 February 2026. https://iapp.org/news/a/european-commission-misses-deadline-for-ai-act-guidance-on-high-risk-systems
[68] Infocomm Media Development Authority (IMDA), Singapore. “Model AI Governance Framework for Agentic AI.” 22 January 2026. https://www.imda.gov.sg/-/media/imda/files/about/emerging-tech-and-research/artificial-intelligence/mgf-for-agentic-ai.pdf
[69] Inside Global Tech. “Colorado Officials Push to Repeal and Replace the Colorado AI Act.” 27 March 2026. https://www.insideglobaltech.com/2026/03/27/colorado-officials-push-to-repeal-and-replace-the-colorado-ai-act/
[70] ISO/IEC 4213 ed.2: Artificial intelligence — Performance measurement for AI classification, regression, clustering and recommendation tasks. 2025. https://www.iso.org/standard/89455.html
[71] ISO/IEC 12792:2025: Information technology — Artificial intelligence — Transparency taxonomy of AI systems. Published 2025. https://www.iso.org/standard/84111.html
[72] ISO/IEC FDIS 27090: Cybersecurity — Artificial Intelligence — Guidance for addressing security threats and compromises to artificial intelligence systems. FDIS registered 12 March 2026. https://www.iso.org/standard/56581.html
[73] ISO. ISO 31073:2022: Risk management — Vocabulary. International Organization for Standardization, 2022. https://www.iso.org/standard/79637.html
[74] ISO/IEC 42005:2025: Information technology — Artificial intelligence — AI system impact assessment. Published May 2025. https://www.iso.org/standard/42005
[75] ISO/IEC DIS 42105: Information technology — Artificial intelligence — Guidance for human oversight of AI systems. DIS ballot initiated 24 November 2025. https://www.iso.org/standard/86902.html
[76]
prEN ISO/IEC 23282: Artificial Intelligence - Evaluation methods for accurate natural language processing systems Draft
ISO/IEC. prEN ISO/IEC 23282: Artificial Intelligence - Evaluation methods for accurate natural language processing systems Draft. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT, FSP_LANG_ID:77582,25&cs=12CADCFE745035835A4EE315C44A5045C 47 AI AGENTSUNDEREU LAW- WORKINGPAPER, APRIL7, 2026
2026
-
[77] ISO/IEC. prEN ISO/IEC 24970: Artificial intelligence — AI system logging (ISO/IEC DIS 24970:2025). Draft. https://standards.cencenelec.eu/ords/f?p=205:110:::::FSP_PROJECT,FSP_LANG_ID:78565,25&cs=11492185C74B08825BC0990EEEF862587
[78] Jariwala, M. “A Comparative Analysis of the EU AI Act and the Colorado AI Act: Regulatory Approaches to Artificial Intelligence Governance.” Int’l J. Computer Applications, 186(38): 23–29, September 2024. https://doi.org/10.5120/ijca2024923954
[79] Ji, Y. et al. “Taming Various Privilege Escalation in LLM-Based Agent Systems: A Mandatory Access Control Framework.” arXiv:2601.11893 [cs.CR], January 2026. https://arxiv.org/abs/2601.11893
[80] Jones, L. “Agentic Tool Sovereignty.” European Law Blog, 2025. https://www.europeanlawblog.eu/pub/dq249o3c