pith. machine review for the scientific record.

arxiv: 2605.06806 · v1 · submitted 2026-05-07 · 💻 cs.CY

Recognition: no theorem link

Big AI's Regulatory Capture: Mapping Industry Interference and Government Complicity


Pith reviewed 2026-05-11 01:03 UTC · model grok-4.3

classification 💻 cs.CY
keywords regulatory capture · AI industry · taxonomy · discourse influence · legal evasion · innovation narratives · government complicity · antitrust

The pith

Big AI and governments capture AI regulation through 27 mechanisms documented in an analysis of news coverage.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper constructs a taxonomy of 27 mechanisms that enable regulatory capture by the AI industry, organized into five categories, drawing from design science methods and a broad review of sources. By annotating 100 news articles, it identifies 249 occurrences of these mechanisms, highlighting discourse and epistemic influence along with evasion of laws as the most common, often paired with narratives that portray regulation as harmful to innovation or national interests. This mapping reveals the extensive complicity between Big AI companies and governments in shaping or avoiding oversight, which the authors present as a situation demanding immediate attention from policymakers and the public to prevent further entrenchment.

Core claim

The authors develop a taxonomy consisting of 27 mechanisms across five categories to map how the AI industry interferes with regulation. Through manual annotation of 100 news articles, they document 249 instances of these mechanisms, with the most frequent being those related to shaping public discourse and eluding legal requirements. They argue that this capture, involving both industry and government actors, should be treated as an emergency, and provide lessons and tactics from other sectors to counter it.

What carries the argument

A taxonomy of 27 regulatory capture mechanisms across five categories, derived from design science research and scoping review, which is applied to annotate and quantify instances in 100 news articles.
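The quantification step amounts to tallying labelled mechanism occurrences per mechanism and per category across the annotated articles. A minimal sketch of that tally, using hypothetical mechanism labels and a toy annotation set (the paper's actual template and label names are not reproduced here):

```python
from collections import Counter

# Hypothetical annotations: each article maps to the list of capture-mechanism
# labels its annotators assigned (articles and labels are illustrative, not
# drawn from the paper's annotation template).
annotations = {
    "article_01": ["discourse_framing", "lobbying"],
    "article_02": ["discourse_framing", "law_elusion", "law_elusion"],
    "article_03": ["revolving_door"],
}

# Hypothetical mapping from mechanisms to taxonomy categories
# (only three of the five categories are shown).
category_of = {
    "discourse_framing": "Discourse & Epistemic Influence",
    "law_elusion": "Elusion of Law",
    "lobbying": "Political Influence",
    "revolving_door": "Political Influence",
}

all_labels = [m for labels in annotations.values() for m in labels]
mechanism_counts = Counter(all_labels)
category_counts = Counter(category_of[m] for m in all_labels)

total = len(all_labels)  # analogous to the paper's 249 instances
print(total, dict(category_counts))  # prints the total and per-category tallies
```

The dominant-category claim then reduces to comparing `category_counts` entries, which is why the referee's concern about article sampling bears directly on the result.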

Load-bearing premise

The scoping review of existing literature and media reports provides a comprehensive and unbiased basis for identifying all relevant regulatory capture mechanisms.

What would settle it

An independent scoping review using alternate sources or search criteria that identifies substantially fewer than 27 mechanisms or different dominant categories would undermine the taxonomy.

Figures

Figures reproduced from arXiv: 2605.06806 by Abeba Birhane, Bhaskar Mitra, Harshvardhan J. Pandit, Riccardo Angius, Roel Dobbe, William Agnew, Zeerak Talat.

Figure 1. PRISMA diagram for the construction of the dataset.
Figure 2. Quantitative results for capture mechanisms found in the annotated datasets.
Figure 3. Frequency of capture narratives across DS1 and DS2. Regulation stifles innovation: the most frequently occurring narrative (16% overall; 24% and 13% in DS1 and DS2, respectively), decrying regulation as ontologically at odds with progress. Red tape: the second most frequent narrative, labelled in 15% of all articles (28% of DS1 and 11% of DS…
Original abstract

Over the past decade, the AI industry has come to exert an unprecedented economic, political and societal power and influence. It is therefore critical that we comprehend the extent and depth of pervasive and multifaceted capture of AI regulation by corporate actors in order to contend and challenge it. In this paper, we first develop a taxonomy of mechanisms enabling capture to provide a comprehensive understanding of the problem. Grounded in design science research (DSR) methodologies and extensive scoping review of existing literature and media reports, our taxonomy of capture consists of 27 mechanisms across five categories. We then develop an annotation template incorporating our taxonomy, and manually annotate and analyse 100 news articles. The purpose behind this analysis is twofold: validate our taxonomy and provide a novel quantification of capture mechanisms and dominant narratives. Our analysis identifies 249 instances of capture mechanisms, often co-occurring with narratives that rationalise such capture. We find that the most recurring categories of mechanisms are Discourse & Epistemic Influence, concerning narrative framing, and Elusion of law, related to violations and contentious interpretations of antitrust, privacy, copyright and labour laws. We further find that Regulation stifles innovation, Red tape and National Interest are the most frequently invoked narratives used to rationalise capture. We emphasize the extent and breadth of regulatory capture by coalescing forces -- Big AI and governments -- as something policy makers and the public ought to treat as an emergency. Finally, we put forward key lessons learned from other industries along with transferable tactics for uncovering, resisting and challenging Big AI capture as well as in envisioning counter narratives.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript develops a taxonomy of 27 mechanisms of regulatory capture by Big AI, organized into five categories, using design science research methodologies and a scoping review of literature and media reports. It then applies this taxonomy via an annotation template to manually analyze 100 news articles, identifying 249 instances of capture mechanisms. The analysis finds that Discourse & Epistemic Influence and Elusion of law are the most recurring categories, with narratives such as 'Regulation stifles innovation', 'Red tape', and 'National Interest' most frequently invoked to rationalize capture. The paper frames the coalescing of Big AI and government forces as an emergency and offers lessons and tactics from other industries for resistance and counter-narratives.

Significance. If the methodological transparency issues are resolved, this work would provide a structured taxonomy that could aid researchers and policymakers in mapping regulatory capture in the AI sector, a timely contribution given the industry's influence. The attempt to quantify mechanisms through annotation and to identify dominant narratives adds empirical grounding, while the transferable tactics section draws useful parallels to other industries. The paper's grounding in DSR and scoping review represents a systematic effort to catalog interference mechanisms.

major comments (3)
  1. [empirical analysis / annotation section] The section describing the selection and annotation of the 100 news articles does not report the sampling method, including search strings, databases, date range, or inclusion/exclusion criteria. This is load-bearing for the central claim that Discourse & Epistemic Influence and Elusion of law are the 'most recurring' categories, as the distribution of the 249 instances could be shaped by unstated selection choices rather than the underlying corpus.
  2. [annotation methodology] The annotation procedure lacks any report of inter-rater reliability, double-coding, or agreement metrics, despite the authors performing the manual annotation themselves. This directly affects the reliability of the quantified findings (249 instances and category frequencies) that support the 'emergency' framing and policy recommendations.
  3. [taxonomy development / scoping review] The scoping review used to derive the 27-mechanism taxonomy does not specify the search strategy, number of sources screened, or inclusion criteria. This raises a risk of selection or interpretation bias in the taxonomy itself, which is foundational to both the validation step and the frequency claims.
minor comments (2)
  1. [abstract] The abstract could more explicitly separate the taxonomy construction phase from the empirical annotation phase and clarify how the annotation serves as validation.
  2. [introduction] Ensure early definition of 'Big AI' and consistent terminology across sections to aid readability for readers outside the immediate field.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their thoughtful and constructive review. The comments correctly identify gaps in methodological transparency that we agree require addressing to strengthen the paper's claims. We will revise the manuscript to incorporate detailed descriptions of the sampling, annotation, and scoping review procedures. Our point-by-point responses follow.

Point-by-point responses
  1. Referee: [empirical analysis / annotation section] The section describing the selection and annotation of the 100 news articles does not report the sampling method, including search strings, databases, date range, or inclusion/exclusion criteria. This is load-bearing for the central claim that Discourse & Epistemic Influence and Elusion of law are the 'most recurring' categories, as the distribution of the 249 instances could be shaped by unstated selection choices rather than the underlying corpus.

    Authors: We acknowledge that the manuscript did not provide sufficient detail on the article selection process. In the revised version, we will add a new subsection under the empirical analysis that explicitly reports the search strings (e.g., combinations of terms like 'AI regulation', 'Big Tech lobbying', 'antitrust AI'), the databases and sources used (Google News, major outlets via LexisNexis), the date range (2018–2024), and the inclusion/exclusion criteria (e.g., articles must discuss AI policy or industry influence and be from reputable sources). This addition will allow readers to evaluate potential selection effects on the observed frequencies and support the 'most recurring' claims with greater transparency. revision: yes

  2. Referee: [annotation methodology] The annotation procedure lacks any report of inter-rater reliability, double-coding, or agreement metrics, despite the authors performing the manual annotation themselves. This directly affects the reliability of the quantified findings (249 instances and category frequencies) that support the 'emergency' framing and policy recommendations.

    Authors: The annotation was conducted by the author team through iterative discussion and consensus to apply the taxonomy consistently. We agree that the absence of formal reliability metrics is a limitation for the quantified results. In revision, we will expand the methodology to describe the annotation template, coding rules, and resolution process in detail. We will also note the single-team approach as a limitation and, where possible, report a post-hoc agreement check on a subsample. This will improve credibility without altering the exploratory nature of the quantification. revision: partial

  3. Referee: [taxonomy development / scoping review] The scoping review used to derive the 27-mechanism taxonomy does not specify the search strategy, number of sources screened, or inclusion criteria. This raises a risk of selection or interpretation bias in the taxonomy itself, which is foundational to both the validation step and the frequency claims.

    Authors: The taxonomy was iteratively developed following design science research principles, synthesizing academic literature, policy reports, and media sources on regulatory capture. We accept that the scoping review protocol details were omitted. The revised manuscript will include a methods subsection specifying the search strategy (keywords such as 'AI regulatory capture', 'tech industry influence'), databases (Google Scholar, Web of Science, arXiv), approximate number of sources screened, and inclusion criteria (focus on mechanisms of industry-government interaction in AI or analogous sectors). A flow diagram will be added to document the process and reduce concerns of bias. revision: yes
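The post-hoc agreement check proposed in response 2 could use a standard chance-corrected statistic such as Cohen's kappa. A minimal pure-Python sketch, with a hypothetical double-coded subsample (the paper reports no such data, and the labels below are invented for illustration):

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items.

    Assumes the coders do not agree perfectly by chance (expected < 1);
    a production version would guard that division.
    """
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    labels = set(coder_a) | set(coder_b)
    # Observed agreement: fraction of items both coders labelled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independent coding, from marginal label rates.
    expected = sum(
        (coder_a.count(lab) / n) * (coder_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical double-coded subsample: per-article presence/absence
# of a single capture mechanism.
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.5
```

A kappa near 0.5 would signal only moderate agreement, which is why reporting the statistic (rather than raw percent agreement) matters for the 249-instance count.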

Circularity Check

0 steps flagged

No significant circularity detected

Full rationale

The paper constructs its taxonomy of 27 mechanisms via design science research and scoping review of external literature and media reports, then applies an annotation template derived from that taxonomy to a separate set of 100 news articles to produce frequency counts and validation. This chain relies on external data sources and interpretive coding rather than any self-definitional loop, fitted parameter renamed as prediction, self-citation that bears the central load, or mathematical derivation that reduces outputs to inputs by construction. No equations, uniqueness theorems, or ansatzes are present; the quantified claims (249 instances, dominant categories) emerge directly from applying the taxonomy to the annotated corpus without tautological reduction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim depends on the completeness of the scoping review and the representativeness of the 100 selected news articles for validating the taxonomy.

axioms (1)
  • domain assumption: The scoping review of existing literature and media reports is comprehensive and unbiased in identifying capture mechanisms.
    Invoked to ground the development of the 27-mechanism taxonomy.

pith-pipeline@v0.9.0 · 5607 in / 1334 out tokens · 53287 ms · 2026-05-11T01:03:12.082116+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

129 extracted references · 12 canonical work pages
