pith. machine review for the scientific record. sign in

arxiv: 2605.09793 · v1 · submitted 2026-05-10 · 💻 cs.HC

Recognition: 2 theorem links

· Lean Theorem

Push and Pushback in Contesting AI: Demands for and Resistance to Accountability

Jae Woo Lee, Jatinder Singh, Lucas Lichner, Renwen Zhang, Sijia Xiao, Yulu Pi

Pith reviewed 2026-05-12 02:38 UTC · model grok-4.3

classification 💻 cs.HC
keywords AI contestationaccountabilityinstitutional resistancethematic analysiscase studiesAI governancedemands for redressresponse tactics
0
0 comments X

The pith

AI contestation unfolds as an iterative push for accountability from affected groups that institutions often resist through targeted tactics.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper analyzes 43 real-world cases where communities, workers, and advocates direct demands at AI developers and deployers for redress or change. It frames these contests as accountability-seeking processes in which challengers from below press actors from above, who then accept, resist, or sidestep the demands. Thematic analysis yields categories for the strategies challengers use, the response tactics institutions employ, the resulting outcome types, and the contextual factors that influence both sides. The account shows that accountability is frequently limited in practice rather than fully granted. These categories supply concrete guidance for anticipating institutional evasion and strengthening future contestation efforts.

Core claim

Our thematic analysis of 43 cases produces empirically grounded categories of contestation strategies, institutional response tactics, outcome types, and the contextual factors that shape them. Situating the work in a relational model of accountability, we treat contestation as a dynamic process in which actors from below direct explicit demands at actors from above, who respond by accepting, resisting, or circumventing accountability. We show that those being contested routinely deploy a range of strategies to limit their accountability.

What carries the argument

Thematic analysis of 43 real-world AI contestation cases, organized through a relational accountability model that distinguishes demands from below and responses from above.

Load-bearing premise

The 43 chosen cases capture representative patterns of AI contestation without major selection bias or interpretive subjectivity in the thematic coding.

What would settle it

A larger, independently sampled collection of AI contestation cases that produces substantially different strategy categories, response tactics, or outcome patterns would undermine the claimed generality of the framework.

Figures

Figures reproduced from arXiv: 2605.09793 by Jae Woo Lee, Jatinder Singh, Lucas Lichner, Renwen Zhang, Sijia Xiao, Yulu Pi.

Figure 1
Figure 1. Figure 1: A characterization of AI contestation dynamics, capturing the two-sided accountability relationship between actors [PITH_FULL_IMAGE:figures/full_fig_p001_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: We identify four types of contesters, five dimensions of AI they contest, and six institutional response strategies. [PITH_FULL_IMAGE:figures/full_fig_p006_2.png] view at source ↗
read the original abstract

As AI becomes increasingly embedded in daily life, it has been shown to fail critically, cause harm, and spark public controversy, prompting affected communities, workers, and public-interest groups to contest it. Yet how these contestations unfold in practice remains underexplored. We address this gap by developing an empirically grounded account of AI contestation dynamics. We do so through a thematic analysis of 43 real-world cases in which affected actors direct demands toward those responsible for AI development and deployment, seeking redress, influence, or changes to AI practices. Situating our work within Bovens's relational model of accountability, we conceptualize contestation as accountability-seeking: a dynamic, iterative process in which actors "from below" direct explicit demands at actors "from above," who respond by accepting, resisting, or circumventing accountability. Our analysis produces empirically grounded categories of contestation strategies, institutional response tactics, outcome types, and the contextual factors that shape them, illuminating how accountability is pursued and evaded in practice. We show that those being contested often deploy a range of strategies to limit their accountability. Based on these insights, we offer guidance for researchers, policymakers, advocates, and other stakeholders seeking to support effective AI contestation, with particular attention to anticipating and countering institutional strategies used to evade accountability.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper conducts a thematic analysis of 43 real-world cases of AI contestation to develop an empirically grounded framework of contestation dynamics, including strategies used by affected actors, response tactics by institutions, outcome types, and contextual factors. Drawing on Bovens's relational accountability model, it conceptualizes contestation as accountability-seeking and provides guidance for supporting effective contestation while countering evasion strategies.

Significance. If the methodological details support the robustness of the categories, this study offers important contributions to understanding practical AI accountability processes in HCI and related fields. It provides concrete, empirically derived insights into how accountability is pursued and resisted, which can inform advocacy, policy, and research on AI governance. The use of an established theoretical model with independent cases is a strength.

major comments (2)
  1. [Methods] Methods section: The description of how the 43 cases were selected (including any sampling frame, inclusion/exclusion criteria, or efforts to mitigate bias such as toward high-profile Western incidents) is not provided in sufficient detail. This directly undermines verification of the central claim that the analysis produces representative, empirically grounded categories of contestation strategies and institutional tactics.
  2. [Methods] Methods section: Details on the thematic analysis process, such as the inductive coding procedure, number of coders, and any inter-rater reliability metrics, are absent. Without these, the link between the raw cases and the derived categories cannot be fully assessed for subjectivity or replicability, which is load-bearing for the paper's assertion of illuminating accountability dynamics in practice.
minor comments (1)
  1. [Abstract] Abstract: A short statement acknowledging the qualitative, non-probability sample and its implications for generalizability would help readers interpret the scope of the categories and guidance offered.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their thoughtful review and constructive feedback on our manuscript. The comments on the methods section are well-taken and point to areas where greater transparency will strengthen the paper. We have revised the methods section in response to both major comments, adding the requested details on case selection and the thematic analysis process. These changes improve the replicability and credibility of our empirically grounded framework without altering the core findings or contributions.

read point-by-point responses
  1. Referee: [Methods] Methods section: The description of how the 43 cases were selected (including any sampling frame, inclusion/exclusion criteria, or efforts to mitigate bias such as toward high-profile Western incidents) is not provided in sufficient detail. This directly undermines verification of the central claim that the analysis produces representative, empirically grounded categories of contestation strategies and institutional tactics.

    Authors: We agree that the original methods section lacked sufficient detail on case selection. The 43 cases were drawn from publicly available sources including news reports, advocacy organization publications, and documented incidents of AI-related harms and contestation. To address this, we have expanded the methods section to explicitly describe the sampling approach: a purposive sampling frame focused on cases involving explicit demands directed at AI developers or deployers by affected actors (communities, workers, or public-interest groups). Inclusion criteria required evidence of contestation (e.g., public campaigns, legal actions, or organized protests seeking accountability), while exclusion criteria omitted purely technical failures without contestation or cases lacking sufficient public documentation. We also describe efforts to mitigate Western/high-profile bias by including searches across multiple languages and regions, incorporating cases from the Global South where available. This revision directly supports the claim of empirically grounded categories by making the selection process verifiable. revision: yes

  2. Referee: [Methods] Methods section: Details on the thematic analysis process, such as the inductive coding procedure, number of coders, and any inter-rater reliability metrics, are absent. Without these, the link between the raw cases and the derived categories cannot be fully assessed for subjectivity or replicability, which is load-bearing for the paper's assertion of illuminating accountability dynamics in practice.

    Authors: We acknowledge that the original manuscript did not provide adequate detail on the thematic analysis procedure. The analysis followed an inductive approach: initial open coding of all 43 cases to identify recurring patterns in demands, institutional responses, outcomes, and contextual factors, followed by axial coding to group these into higher-level categories aligned with Bovens's relational accountability model, with iterative refinement through team discussions. In the revised manuscript, we now specify that the coding was conducted by two authors independently on a subset of cases, with all authors reviewing and discussing discrepancies to reach consensus; we also note the absence of formal inter-rater reliability statistics (common in qualitative thematic analysis) and describe steps taken to enhance rigor, such as maintaining an audit trail of coding decisions and member-checking categories against the data. This addition addresses concerns about subjectivity and replicability while preserving the qualitative, interpretive nature of the study. revision: yes

Circularity Check

0 steps flagged

No circularity: categories derived inductively from independent cases using external model

full rationale

The paper performs a thematic analysis of 43 real-world cases to generate categories of contestation strategies, institutional responses, outcomes, and contextual factors. These are situated in Bovens's relational accountability model, which is an external reference rather than a self-citation. No equations, fitted parameters, self-definitional constructs, or load-bearing self-citations appear in the derivation chain. The central claims rest on empirical coding of independent cases rather than reducing to the inputs by construction, making the analysis self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The central claim rests on the applicability of Bovens's relational model to AI and the sufficiency of the 43 cases for generating generalizable categories; no free parameters or invented entities are introduced.

axioms (1)
  • domain assumption Bovens's relational model of accountability applies directly to AI contestation as accountability-seeking from below
    Invoked to conceptualize the dynamic process of demands and responses.

pith-pipeline@v0.9.0 · 5548 in / 1143 out tokens · 61954 ms · 2026-05-12T02:38:35.975336+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

128 extracted references · 128 canonical work pages

  1. [1]

    Harold Abelson, Ross Anderson, Steven M Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G Neumann, Ronald L Rivest, Jeffrey I Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso. 2024. Bugs in our pockets: the risks of client-side scanning.Journal of Cybersecurity10, 1 (01 2024), tyad020. arXiv:https://acad...

  2. [2]

    2018.Litigating Algorithms

    AI Now Institute. 2018.Litigating Algorithms. AI Now Institute. https://ainowinstitute.org/news/litigating-algorithms-3

  3. [3]

    2022.ACLU v

    American Civil Liberties Union. 2022.ACLU v. Clearview AI. American Civil Liberties Union. https://www.aclu.org/cases/aclu-v- clearview-ai

  4. [4]

    Anonymous. 2012. Incident Number 75: Google Instant’s Allegedly ’Anti-Semitic’ Results Lead To Lawsuit In France.AI Incident Database(2012). https://incidentdatabase.ai/cite/75 Retrieved December 2025 from https://incidentdatabase.ai/cite/75

  5. [5]

    Anonymous. 2018. Incident Number 184: Facial Recognition Program in São Paulo Metro Stations Suspended for Illegal and Dispropor- tionate Violation of Citizens’ Right to Privacy.AI Incident Database(2018). https://incidentdatabase.ai/cite/184 Retrieved December 2025 from https://incidentdatabase.ai/cite/184

  6. [6]

    Anonymous. 2021. Incident Number 360: McDonald’s AI Drive-Thru Allegedly Collected Biometric Customer Data without Consent, Violating BIPA.AI Incident Database(2021). https://incidentdatabase.ai/cite/360 Retrieved December 2025 from https://incidentdatabase. ai/cite/360

  7. [7]

    Anonymous. 2021. Incident Number 534: Facebook Alleged in Lawsuit Misleading Public about Effects of Algorithms on Children.AI Incident Database(2021). https://incidentdatabase.ai/cite/534 Retrieved January 2026 from https://incidentdatabase.ai/cite/534

  8. [8]

    Daniel Atherton. 2020. Incident Number 926: Giorgia Meloni Reportedly Targeted by Deepfake Pornography.AI Incident Database (2020). https://incidentdatabase.ai/cite/926 Retrieved December 2025 from https://incidentdatabase.ai/cite/926

  9. [9]

    Daniel Atherton. 2022. Incident Number 639: Air Canada Chatbot Reportedly Provides Inaccurate Bereavement Fare Information, Leading to Customer Overpayment.AI Incident Database(2022). https://incidentdatabase.ai/cite/639 Retrieved December 2025 from https://incidentdatabase.ai/cite/639

  10. [10]

    Daniel Atherton. 2023. Incident Number 597: Female Students at Westfield High School in New Jersey Reportedly Targeted with Deepfake Nudes.AI Incident Database(2023). https://incidentdatabase.ai/cite/597 Retrieved December 2025 from https://incidentdatabase.ai/ cite/597

  11. [11]

    Daniel Atherton. 2023. Incident Number 608: UnitedHealth Accused of Deploying Allegedly Flawed AI to Deny Medical Coverage.AI Incident Database(2023). https://incidentdatabase.ai/cite/608 Retrieved December 2025 from https://incidentdatabase.ai/cite/608

  12. [12]

    Daniel Atherton. 2023. Incident Number 726: A Self-Driving Cruise Robot Taxi Reportedly Struck and Dragged a Pedestrian 20 Feet.AI Incident Database(2023). https://incidentdatabase.ai/cite/726 Retrieved December 2025 from https://incidentdatabase.ai/cite/726

  13. [13]

    Agathe Balayn, Yulu Pi, David Gray Widder, Kars Alfrink, Mireia Yurrita, Sohini Upadhyay, Naveena Karusala, Henrietta Lyons, Cagatay Turkay, Christelle Tessono, Blair Attard-Frost, and Ujwal Gadiraju. 2024. From Stem to Stern: Contestability Along AI Value Chains. InCompanion Publication of the 2024 Conference on Computer-Supported Cooperative Work and So...

  14. [14]

    Bruce Barcott. 2025. In early ruling, federal judge defines Character.AI chatbot as product, not speech. https://www. transparencycoalition.ai/news/important-early-ruling-in-characterai-case-this-chatbot-is-a-product-not-speech. Accessed: 2025-12- 03

  15. [15]

    Ames, Jed Brubaker, and Paul Dourish

    Eric Baumer, Jenna Burrell, Morgan G. Ames, Jed Brubaker, and Paul Dourish. 2015. On the Importance and Implications of Studying Technology Non-Use.interactions22 (02 2015), 52–56. doi:10.1145/2723667

  16. [16]

    BBC News. 2020. A-levels and GCSEs: Government U-turn on exam grades to use teacher assessments. https://www.bbc.co.uk/news/ education-53923279

  17. [17]

    2024.Regulating Facial Recognition in Brazil: Legal and Policy Perspectives

    Luca Belli, Walter Britto Gaspar, and Nicolo Zingales. 2024.Regulating Facial Recognition in Brazil: Legal and Policy Perspectives. Cambridge University Press, 228–241. Push and Pushback in Contesting AI: Demands for and Resistance to Accountability FAccT ’26, June 25–28, 2026, Montreal, QC, Canada

  18. [18]

    Benefits Tech Advocacy Hub. 2022. Arkansas Medicaid Home and Community Based Services Hours Cuts. https://www.btah.org/case- study/arkansas-medicaid-home-and-community-based-services-hours-cuts.html Accessed 2025-12-16

  19. [19]

    Andrea Bingham, J. 2023. From Data Management to Actionable Findings: A Five-Phase Process of Qualitative Data Analysis. 22 (2023). doi:10.1177/16094069231183620

  20. [20]

    Hannah Bloch-Wehba. 2022. Algorithmic Governance from the Bottom Up.BYU Law Review(01 2022). doi:10.2139/ssrn.4054640

  21. [21]

    Daniel James Bogiatzis-Gibbons. 2024. Beyond Individual Accountability: (Re-)Asserting Democratic Control of AI. InProceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency(Rio de Janeiro, Brazil)(FAccT ’24). Association for Computing Machinery, New York, NY, USA, 74–84. doi:10.1145/3630106.3658541

  22. [22]

    2024.Algorithms of Resistance: The Everyday Fight Against Platform Power

    Tiziano Bonini and Emiliano Treré. 2024.Algorithms of Resistance: The Everyday Fight Against Platform Power. The MIT Press. doi:10.7551/mitpress/14329.001.0001

  23. [23]

    Mark Bovens. 2007. Analysing and Assessing Accountability: A Conceptual Framework.European Law Journal13 (07 2007), 447 – 468. doi:10.1111/j.1468-0386.2007.00378.x

  24. [24]

    Mark Bovens. 2010. Two Concepts of Accountability: Accountability as a Virtue and as a Mechanism.West European Politics33, 5 (2010), 946–967. arXiv:https://doi.org/10.1080/01402382.2010.486119 doi:10.1080/01402382.2010.486119

  25. [25]

    2025.US judge preliminarily approves $1.5 billion Anthropic copyright settlement

    Blake Brittain. 2025.US judge preliminarily approves $1.5 billion Anthropic copyright settlement. Reuters. https://www.reuters.com/ sustainability/boards-policy-regulation/us-judge-approves-15-billion-anthropic-copyright-settlement-with-authors-2025-09-25/ Ac- cessed: 2025-12-19

  26. [26]

    2015.Obfuscation: A User’s Guide for Privacy and Protest

    Finn Brunton and Helen Nissenbaum. 2015.Obfuscation: A User’s Guide for Privacy and Protest. The MIT Press. doi:10.7551/mitpress/ 9780262029735.001.0001

  27. [27]

    Bundesverfassungsgericht, German Federal Constitutional Court. 2023. Judgment of the First Senate of 16 February 2023 - 1 BvR 1547/19 -, paras. 1-178. https://www.bverfg.de/e/rs20230216_1bvr154719en.html. English translation. Accessed 2025-12-08

  28. [28]

    Jenna Burrell. 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms.Big Data & Society3, 1 (2016), 2053951715622512. doi:10.1177/2053951715622512

  29. [29]

    Artificial Intelligence principles

    Business and Human Rights Centre. 2018.Google introduces “Artificial Intelligence principles” that prohibit its use in weapons & human rights abuses. Business and Human Rights Centre. https://www.business-humanrights.org/en/latest-news/google-introduces-artificial- intelligence-principles-that-prohibit-its-use-in-weapons-human-rights-abuses/ Accessed: 2025-12-19

  30. [30]

    Hannah L. Buxbaum. 2025. Adhesive Forum Selection Agreements and Access to Justice: The Function and Limits of Anti-Waiver Protections.German Law Journal26, 5 (2025), 876–888. doi:10.1017/glj.2025.10140

  31. [31]

    Claudia Camargo and Eelco Jacobs. 2013. Working Paper 16: Social accountability and its conceptual challenges: An analytical framework.Basel Institute on Governance Working Papers(02 2013), 1–24. doi:10.12685/bigwp.2013.16.1-24

  32. [32]

    Chandra and Liang

    Yanto. Chandra and Liang. Shang. 2019. Inductive Coding. InQualitative Research Using R: A Systematic Approach. Springer, Singapore. doi:10.1007/978-981-13-3170-1_8

  33. [33]

    Caroline Chen. 2020. Only Seven of Stanford’s First 5,000 Vaccines Were Designated for Medical Residents. https://www.propublica. org/article/only-seven-of-stanfords-first-5-000-vaccines-were-designated-for-medical-residents. Accessed 2025-12-08

  34. [34]

    Ben Chester Cheong. 2024. Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision- making.Frontiers in Human Dynamics6 (2024). doi:10.3389/fhumd.2024.1421273

  35. [35]

    Cowan, J

    Marika Cifor, Patricia Garcia, T.L. Cowan, J. Rault, T. Sutherland, A. Chan, J. Rode, A.L. Hoffmann, N. Salehi, and L. Nakamura. 2019. Feminist Data Manifest-No. https://www.manifestno.com/

  36. [36]

    Clarkson Law Firm, P.C. 2023. Class Action Challenges OpenAI on Privacy. https://clarksonlawfirm.com/class-action-challenges- openai-on-privacy/ Accessed: 2025-12-12

  37. [37]

    Laura Claus and Paul Tracey. 2019. Making Change from Behind a Mask: How Organizations Challenge Guarded Institutions by Sparking Grassroots Activism.Academy of Management Journal63 (08 2019). doi:10.5465/amj.2017.0507

  38. [38]

    Coalition Against Predictive Policing in Pittsburgh. 2025. Coalition Against Predictive Policing in Pittsburgh – Predictive Policing Advocacy and Resources. https://capp-pgh.com/ Advocacy site opposing predictive policing in Pittsburgh; accessed 18 December 2025

  39. [39]

    Jennifer Cobbe, Michelle Seng Ah Lee, and Jatinder Singh. 2021. Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems. InProceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency(Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 598–609. doi:10.1145/344218...

  40. [40]

    Jennifer Cobbe and Jatinder Singh. 2021. Artificial intelligence as a service: Legal responsibilities, liabilities, and policy challenges. Computer Law & Security Review42 (2021), 105573. doi:10.1016/j.clsr.2021.105573

  41. [41]

    Jennifer Cobbe, Michael Veale, and Jatinder Singh. 2023. Understanding accountability in algorithmic supply chains. InProceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency(Chicago, IL, USA)(FAccT ’23). Association for Computing Machinery, New York, NY, USA, 1186–1197. doi:10.1145/3593013.3594073

  42. [42]

    Cary Coglianese and Lavi M. Ben Dor. 2021. AI in Adjudication and Administration.Brooklyn Law Review(2021). https://scholarship. law.upenn.edu/faculty_scholarship/2118 FAccT ’26, June 25–28, 2026, Montreal, QC, Canada Yulu Pi, Lucas Lichner, Jae Woo Lee, Sijia Xiao, Renwen Zhang, and Jatinder Singh

  43. [43]

    Commission nationale de l’informatique et des libertés (CNIL). 2024. Employee Monitoring: CNIL Fined Amazon France Logistique€32 Million.CNIL. https://www.cnil.fr/en/employee-monitoring-cnil-fined-amazon-france-logistique-eu32-million Accessed: 2025-12-13

  44. [44]

    Penny Crofts and Honni van Rijswijk. 2020. Negotiating ’Evil’: Google, Project Maven and the Corporate Form.Law, Technology and Humans2 (02 2020). doi:10.5204/lthj.v2i1.1313

  45. [45]

    2023.Challenges to the extraterritorial enforcement of data privacy law - EU case study

    Michal Czerniawski and Dan Svantesson. 2023.Challenges to the extraterritorial enforcement of data privacy law - EU case study. 127–153

  46. [46]

    massive amounts of personal data,

    Grace Dean. 2023.A lawsuit claims OpenAI stole “massive amounts of personal data, ” including medical records and information about children, to train ChatGPT. Business Insider. https://www.businessinsider.com/openai-chatgpt-generative-ai-stole-personal-data- lawsuit-children-medical-2023-6 Accessed: 2025-12-19

  47. [47]

    Alicia DeVrio, Motahhare Eslami, and Kenneth Holstein. 2024. Building, Shifting, & Employing Power: A Taxonomy of Responses From Below to Algorithmic Harm. InProceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency(Rio de Janeiro, Brazil)(FAccT ’24). Association for Computing Machinery, New York, NY, USA, 1093–1106. doi:10.1145...

  48. [48]

    Anders Eidesvik. 2025. Hunger strike against AGI in San Francisco passes one-month mark. https://peninsulapress.com/2025/10/02/ hunger-strike-against-agi-passes-one-month-mark/ Accessed 2025-12-16

  49. [49]

    Madeleine Elish. 2019. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction.Engaging Science, Technology, and Society 5 (03 2019), 40–60. doi:10.17351/ests2019.260

  50. [50]

    Jakob Emerson. 2025. Judge denies UnitedHealth’s bid to limit discovery in AI coverage denial case. https://www.beckerspayer.com/ payer/medicare-advantage/judge-denies-unitedhealths-bid-to-limit-discovery-in-ai-coverage-denial-case/

  51. [51]

    2025.How a Shady US AI Company Dodged Fines and Defied Regulators Across Europe

    Lydia Emmanouilidou. 2025.How a Shady US AI Company Dodged Fines and Defied Regulators Across Europe. We Are Solomon. https://wearesolomon.com/mag/format/investigation/clearview-how-a-shady-us-ai-company-dodged-fines-and-defied- regulators-across-europe/ Accessed: 2025-12-13

  52. [52]

    2024.Bartz v

    Ethical Tech Initiative, George Washington University. 2024.Bartz v. Anthropic PBC. https://blogs.gwu.edu/law-eti/ai-litigation- database/case-detail-page/?pid=251 DAIL – Database of AI Litigation. Accessed: 2025-12-19

  53. [53]

    2025.Criminal complaint against facial recognition company Clearview AI

    European Center for Digital Rights (noyb). 2025.Criminal complaint against facial recognition company Clearview AI. noyb.eu. https://noyb.eu/en/criminal-complaint-against-facial-recognition-company-clearview-ai Accessed: 2025-12-19

  54. [54]

    Menno Fenger and Robin Simonse. 2024. The implosion of the Dutch surveillance welfare state.Social Policy & Administration58, 2 (2024), 264–276. doi:10.1111/spol.12998

  55. [55]

    Future of Life Institute. 2023. Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai- experiments/. Accessed 2025-12-16

  56. [56]

    Maya Ganesh and Emanuel Moss. 2022. Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’.Media International Australia183 (02 2022), 1329878X2210762. doi:10.1177/1329878X221076288

  57. [57]

    Google PAIR. 2019. Feedback + Control: Design feedback and control mechanisms to improve your AI and the user experience. https://pair.withgoogle.com/guidebook/chapters/feedback-and-controls/design-ai-feedback-loops

  58. [58]

    2024.Voters approve Prop

    Gabe Greschler. 2024.Voters approve Prop. E, giving more powers to San Francisco police. The San Francisco Standard. https: //sfstandard.com/2024/03/05/san-francisco-voters-election-prop-e-results-police/ Accessed: 2025-12-19

  59. [59]

    Lara Groves, Aidan Peppin, Andrew Strait, and Jenny Brennan. 2023. Going public: the role of public participation approaches in commercial AI labs. arXiv:2306.09871 [cs.HC] https://arxiv.org/abs/2306.09871

  60. [60]

    AI is Soulless

    Brett A. Halperin and Daniela K. Rosner. 2025. “AI is Soulless”: Hollywood Film Workers’ Strike and Emerging Perceptions of Generative Cinema.ACM Trans. Comput.-Hum. Interact.32, 2, Article 19 (April 2025), 27 pages. doi:10.1145/3716135

  61. [61]

    2008.Sciences from Below: Feminisms, Postcolonialities, and Modernities

    Sandra Harding. 2008.Sciences from Below: Feminisms, Postcolonialities, and Modernities. Duke University Press. doi:10.2307/j.ctv11smmtn Accessed 4 March 2026

  62. [62]

    Benjamin Hardy. 2018. ARChoices rule blocked. https://arknews.org/index.php/2018/05/30/archoices-rule-blocked/. Accessed: 2025-12-03

  63. [63]

    Aspen Hopkins, Isabella Struckman, Kevin Klyman, and Susan S. Silbey. 2025. Recourse, Repair, Reparation, & Prevention: A Stakeholder Analysis of AI Supply Chains. InProceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25). Association for Computing Machinery, New York, NY, USA, 209–227. doi:10.1145/3715275.3732017

  64. [64]

    Benefits Tech Advocacy Hub. [n. d.]. Arkansas Medicaid Home and Community Based Services Hours Cuts. https://www.btah.org/case- study/arkansas-medicaid-home-and-community-based-services-hours-cuts.html. Accessed: 2025-12-03

  65. [65]

    Institute for Healthcare Policy & Innovation, University of Michigan. 2018. What happens when an algorithm cuts your health care. https://ihpi.umich.edu/news/what-happens-when-algorithm-cuts-your-health-care. Accessed 2025-12-08

  66. [66]

    Jersey Evening Post. 2020. Johnson and Williamson forced into U-turn over A-level grades. https://jerseyeveningpost.com/morenews/ uknews/2020/08/17/johnson-and-williamson-forced-into-u-turn-over-a-level-grades/

  67. [67]

    Nari Johnson, Sanika Moharana, Christina Harrington, Nazanin Andalibi, Hoda Heidari, and Motahhare Eslami. 2024. The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment. InProceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency(Rio de Janeiro, Brazil)(FAccT ’24). Association for Computing Machinery, New York, ...

  68. [68]

    Justia U.S Law. 2017. Arkansas Department of Human Services v. Ledgerwood (Majority). https://law.justia.com/cases/arkansas/supreme- court/2017/cv-17-183.html. Accessed 2025-12-08

  69. [69]

    Emma Kallina, Thomas Bohné, and Jatinder Singh. 2025. Stakeholder Participation for Responsible AI Development: Disconnects Between Guidance and Current Practice. InProceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25). Association for Computing Machinery, New York, NY, USA, 1060–1079. doi:10.1145/3715275.3732069

  70. [70]

    Emma Kallina and Jatinder Singh. 2024. Stakeholder Involvement for Responsible AI Development: A Process Framework. InProceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization(San Luis Potosi, Mexico)(EAAMO ’24). Association for Computing Machinery, New York, NY, USA, Article 1, 14 pages. doi:10.1145/3689904.3694698

  71. [71]

    Pratyusha Ria Kalluri, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, and Abeba Birhane. 2023. The Surveillance AI Pipeline. arXiv:2309.15084 [cs.CV] https://arxiv.org/abs/2309.15084

  72. [72]

    Kaminski and Jennifer M

    Margot E. Kaminski and Jennifer M. Urban. 2021. The Right to Contest AI.Columbia Law Review121, 7 (2021). https://ssrn.com/ abstract=3965041 U of Colorado Law Legal Studies Research Paper No. 21-30

  73. [73]

    Joshua A. Kroll. 2021. Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems. InProceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency(Virtual Event, Canada)(FAccT ’21). Association for Computing Machinery, New York, NY, USA, 758–771. doi:10.1145/3442188.3445937

  74. [74]

    Data Justice Lab. [n. d.].Data Harm Record. https://datajusticelab.org/project/data-harm-record/ Accessed: 2026-03-13

  75. [75]

    Khoa Lam. 2023. Incident Number 592: Facial Recognition Misidentifies Pregnant Woman Leading to False Arrest in Detroit.AI Incident Database(2023). https://incidentdatabase.ai/cite/592 Retrieved December 2025 from https://incidentdatabase.ai/cite/592

  76. [76]

    Algorithmic Justice League. [n. d.].Share Your Story. Spark Change.https://www.ajl.org/harms Accessed: 2026-03-13

  77. [77]

    Roman Lutz. 2017. Incident Number 96: Houston Schools Must Face Teacher Evaluation Lawsuit.AI Incident Database(2017). https://incidentdatabase.ai/cite/96 Retrieved December 2025 from https://incidentdatabase.ai/cite/96

  78. [78]

    Henrietta Lyons, Eduardo Velloso, and Tim Miller. 2021. Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions.Proc. ACM Hum.-Comput. Interact.5, CSCW1, Article 106 (apr 2021), 25 pages. doi:10.1145/3449180

  79. [79]

    Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach

    Michael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. 2020. Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI. InProceedings of the 2020 CHI Conference on Human Factors in Computing Systems(Honolulu, HI, USA)(CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. ...

  80. [80]

    Samantha Maldonado. 2025. NYPD bypassed facial recognition ban to ID Pro-Palestinian student protester.THE CITY - NYC News(July 2025). https://www.thecity.nyc/2025/07/18/nypd-fdny-clearview-ai-ban-columbia-palestinian-protest/ Accessed 2025-12-16

Showing first 80 references.