From Disclosure to Self-Referential Opacity: Six Dimensions of Strain in Current AI Governance
Pith reviewed 2026-05-10 11:52 UTC · model grok-4.3
The pith
As capability asymmetry between AI systems and their overseers grows, disclosure-based governance remedies fail, and political strain concentrates in legitimacy and non-domination.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Governance opacity over AI systems shifts in kind as capability asymmetry grows, and the strongest forms defeat the disclosure-based remedies governance ordinarily relies on. Applying the six-dimension framework to six arrangements ordered by increasing asymmetry shows that proprietary secrecy yields to disclosure at the low end, but at the high end the governed system either games its own evaluation or sits inside the governance process. Legitimacy and non-domination strain more consistently across the sample than corrigibility and resilience, which respond more readily to institutional design quality. The sample cannot separate institutional design maturity from capability asymmetry, and the patterns are offered as hypotheses for multi-rater validation.
What carries the argument
The six-dimension political theory framework of legitimacy, accountability, corrigibility, non-domination, subsidiarity, and institutional resilience, applied to AI governance arrangements ordered by increasing capability asymmetry between system and overseer.
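To make the comparative machinery concrete, here is a minimal sketch of the framework as a strain matrix. The arrangement labels, ratings, and the reading of "strains more consistently" as high mean strain with low spread across cases are all illustrative assumptions, not the paper's data.

```python
# Minimal sketch of the paper's comparative setup. Strain ratings
# (0 = no strain, 1 = moderate, 2 = severe) and arrangement labels
# are invented for illustration; the paper reports qualitative
# assessments, not these numbers.
from statistics import mean, pstdev

DIMENSIONS = [
    "legitimacy", "accountability", "corrigibility",
    "non-domination", "subsidiarity", "resilience",
]

# Hypothetical arrangements ordered by increasing capability asymmetry.
strain = {
    "A1": [1, 0, 0, 1, 0, 0],
    "A2": [1, 1, 0, 1, 0, 0],
    "A3": [2, 1, 1, 1, 1, 0],
    "A4": [2, 1, 0, 2, 1, 1],
    "A5": [2, 2, 1, 2, 1, 1],
    "A6": [2, 2, 1, 2, 2, 1],
}

# "Strains more consistently" is read here as high mean strain with
# low spread across the six cases.
for j, dim in enumerate(DIMENSIONS):
    ratings = [row[j] for row in strain.values()]
    print(f"{dim:15s} mean={mean(ratings):.2f} spread={pstdev(ratings):.2f}")
```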
If this is right
- Disclosure remedies lose traction when the AI can influence its own oversight process.
- Legitimacy and non-domination face persistent challenges across varying governance setups as asymmetry increases.
- Corrigibility and institutional resilience can be strengthened through targeted improvements in institutional design.
- The observed patterns cannot yet distinguish the effects of institutional design quality from those of capability asymmetry, so they require further validation.
Where Pith is reading between the lines
- If the ordering holds, governance for highly capable AI may need to shift toward mechanisms that prevent self-referential control without relying on external disclosure.
- Testing the same framework on additional AI systems could reveal whether the strain patterns generalize beyond the initial six cases.
- Neighbouring problems, such as ensuring fair treatment in AI-assisted decisions, might benefit from treating non-domination as a core design requirement.
Load-bearing premise
The six chosen AI governance arrangements can be ordered by increasing capability asymmetry between the system and overseer, and the political theory dimensions can be applied directly without significant distortion.
What would settle it
A multi-rater assessment of the six arrangements that finds corrigibility and resilience straining as consistently as legitimacy and non-domination, or that shows institutional design fully explains the differences independent of asymmetry levels.
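For a multi-rater assessment of this kind, an inter-rater agreement statistic such as Fleiss' kappa is the standard check. The sketch below computes it over invented ratings of arrangement-dimension pairs; the rater count, category labels, and numbers are assumptions, not the paper's protocol.

```python
# Sketch of the multi-rater validation the paper calls for: several
# raters independently score each arrangement-dimension pair, and
# Fleiss' kappa measures agreement beyond chance.

def fleiss_kappa(counts: list[list[int]]) -> float:
    """counts[i][j] = number of raters assigning item i to category j."""
    n_items = len(counts)
    n_raters = sum(counts[0])  # assumes the same rater count per item
    # Mean per-item agreement
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items
    # Chance agreement from marginal category proportions
    total = n_items * n_raters
    p_e = sum(
        (sum(row[j] for row in counts) / total) ** 2
        for j in range(len(counts[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

# A full study would have 36 items (6 arrangements x 6 dimensions);
# two illustrative items with 4 hypothetical raters and categories
# no / moderate / severe strain:
ratings = [
    [0, 1, 3],  # raters lean "severe"
    [2, 2, 0],  # raters split between "no strain" and "moderate"
]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```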
Original abstract
Governance opacity over AI systems shifts in kind as capability asymmetry grows, and the strongest forms defeat the disclosure-based remedies governance ordinarily relies on. This paper applies a six-dimension framework from political theory (legitimacy, accountability, corrigibility, non-domination, subsidiarity, institutional resilience) to six AI governance arrangements already in operation, ordered by increasing capability asymmetry between system and overseer. Proprietary secrecy yields to disclosure at the low end, but at the high end the governed system either games its own evaluation or sits inside the governance process, and transparency remedies lose traction. Legitimacy and non-domination strain more consistently across the sample than corrigibility and resilience, which respond more readily to institutional design quality. The sample cannot separate institutional design maturity from capability asymmetry, and the patterns are offered as hypotheses for multi-rater validation.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript applies a six-dimension framework from political theory (legitimacy, accountability, corrigibility, non-domination, subsidiarity, institutional resilience) to six existing AI governance arrangements ordered by increasing capability asymmetry between system and overseer. It claims that opacity shifts from disclosure-based to self-referential forms at higher asymmetry, defeating standard transparency remedies; legitimacy and non-domination exhibit more consistent strain across the sample while corrigibility and resilience track institutional design quality more closely. Patterns are presented as hypotheses for multi-rater validation, with explicit acknowledgment that the sample cannot separate design maturity from asymmetry.
Significance. If the differential strain patterns hold under clearer case ordering and validation, the work provides a structured bridge between political theory and AI governance analysis, identifying dimensions where disclosure-based approaches systematically lose traction. It generates testable hypotheses rather than overclaiming empirical results, which positions it as a useful foundation for subsequent empirical or design-oriented research on oversight mechanisms.
major comments (2)
- [Abstract and case-selection discussion] The central comparative finding (legitimacy and non-domination strain more consistently than corrigibility and resilience) is predicated on ordering the six arrangements by increasing capability asymmetry. No pre-specified, independent, or reproducible criteria for measuring or sequencing this asymmetry are supplied, raising the possibility that observed patterns are artifacts of case selection or post-hoc sequencing rather than attributable to asymmetry itself.
- [Findings and limitations section] The manuscript states that the sample cannot separate institutional design maturity from capability asymmetry, yet attributes the differential consistency of strains to asymmetry levels. This acknowledged confound directly undermines the load-bearing claim that certain dimensions respond more readily to design quality while others are asymmetry-driven.
minor comments (2)
- [Case descriptions] The six arrangements are introduced without a consolidated table listing their asymmetry ordering rationale, key features, and observed strains per dimension; adding one would improve traceability of the comparative claims.
- [Introduction] Terminology such as 'self-referential opacity' is used in the title and abstract but receives its operational definition only later; an earlier explicit definition would aid readers.
Simulated Author's Rebuttal
Thank you for the constructive comments on our manuscript. We address each major comment below and propose targeted revisions to clarify the exploratory framing, strengthen the justification for case ordering, and ensure the language in the findings section avoids any over-attribution given the acknowledged limitations.
Point-by-point responses
- Referee: [Abstract and case-selection discussion] The central comparative finding (legitimacy and non-domination strain more consistently than corrigibility and resilience) is predicated on ordering the six arrangements by increasing capability asymmetry. No pre-specified, independent, or reproducible criteria for measuring or sequencing this asymmetry are supplied, raising the possibility that observed patterns are artifacts of case selection or post-hoc sequencing rather than attributable to asymmetry itself.
  Authors: We acknowledge that the ordering is based on qualitative assessment rather than a pre-specified quantitative metric, which limits reproducibility at this stage. The six cases were chosen to represent a spectrum of asymmetry based on documented characteristics including level of model access, evaluator independence, and embedding of the system within oversight processes. As the work is positioned as hypothesis-generating rather than definitive causal analysis, we will expand the case-selection discussion with an explicit table of asymmetry indicators for each arrangement and a clearer statement that the sequencing is illustrative. This revision will make the rationale more transparent without altering the exploratory nature of the study (a sketch of such an indicator index follows these responses). Revision: partial.
- Referee: [Findings and limitations section] The manuscript states that the sample cannot separate institutional design maturity from capability asymmetry, yet attributes the differential consistency of strains to asymmetry levels. This acknowledged confound directly undermines the load-bearing claim that certain dimensions respond more readily to design quality while others are asymmetry-driven.
  Authors: We agree that the current phrasing risks implying stronger attribution than the data support. The manuscript already states the confound and frames the patterns as observed associations within the sample, with legitimacy and non-domination showing more consistent strain across cases while corrigibility and resilience vary more with institutional features. We will revise the findings and limitations section to explicitly describe these as descriptive patterns in the selected cases, reiterate that the confound precludes causal claims, and emphasize that the results are offered as hypotheses for future multi-rater or empirical validation rather than as evidence of differential responsiveness independent of design maturity. Revision: yes.
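One way to make the proposed indicator table reproducible is a pre-specified composite index over the indicators the authors name. The sketch below is hypothetical in its indicator names, weights, and scores, and is not the authors' method.

```python
# Sketch of a pre-specified asymmetry index: each arrangement gets
# indicator scores under a fixed rubric, and the case ordering is
# derived from the composite rather than asserted post hoc.
# Indicators and weights are hypothetical and would need pre-registration.
ASYMMETRY_INDICATORS = {
    "overseer_model_access": 0.4,          # higher = less overseer access
    "evaluator_independence": 0.3,         # higher = less independent
    "system_embedded_in_oversight": 0.3,   # higher = more self-referential
}

def asymmetry_score(values: dict[str, float]) -> float:
    """Weighted composite; each indicator scored 0-1 by a pre-set rubric."""
    return sum(w * values[name] for name, w in ASYMMETRY_INDICATORS.items())

# Two hypothetical arrangements: the derived scores, not the analyst's
# intuition, determine the ordering used in the comparison.
low = {"overseer_model_access": 0.1, "evaluator_independence": 0.2,
       "system_embedded_in_oversight": 0.0}
high = {"overseer_model_access": 0.9, "evaluator_independence": 0.7,
        "system_embedded_in_oversight": 0.8}
cases = [("low-end case", asymmetry_score(low)),
         ("high-end case", asymmetry_score(high))]
print(sorted(cases, key=lambda t: t[1]))
```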
Circularity Check
No circularity: external political-theory framework applied to independent cases without reduction to self-defined quantities
Full rationale
The paper imports a six-dimension framework (legitimacy, accountability, corrigibility, non-domination, subsidiarity, institutional resilience) from political theory and applies it to six external AI governance arrangements. No equations, fitted parameters, or predictions appear. The ordering by capability asymmetry is presented as a hypothesis-generating device rather than a derived quantity, and the text explicitly notes the inability to separate design maturity from asymmetry. No self-citations are load-bearing for the central claims, and no step reduces a result to its own inputs by construction. This is a standard non-circular application of an imported analytic lens.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: The six dimensions (legitimacy, accountability, corrigibility, non-domination, subsidiarity, institutional resilience) from political theory are appropriate and sufficient for evaluating AI governance arrangements.
- Domain assumption: The six governance arrangements can be ordered by increasing capability asymmetry between system and overseer.