Push and Pushback in Contesting AI: Demands for and Resistance to Accountability
Yulu Pi, Lucas Lichner, Jae Woo Lee, Sijia Xiao, Renwen Zhang, and Jatinder Singh. FAccT ’26, June 25–28, 2026, Montreal, QC, Canada.
Pith reviewed 2026-05-12 02:38 UTC · model grok-4.3
The pith
AI contestation unfolds as an iterative push for accountability from affected groups that institutions often resist through targeted tactics.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Our thematic analysis of 43 cases produces empirically grounded categories of contestation strategies, institutional response tactics, outcome types, and the contextual factors that shape them. Situating the work in a relational model of accountability, we treat contestation as a dynamic process in which actors from below direct explicit demands at actors from above, who respond by accepting, resisting, or circumventing accountability. We show that those being contested routinely deploy a range of strategies to limit their accountability.
What carries the argument
Thematic analysis of 43 real-world AI contestation cases, organized through a relational accountability model that distinguishes demands from below and responses from above.
Load-bearing premise
The 43 chosen cases capture representative patterns of AI contestation without major selection bias or interpretive subjectivity in the thematic coding.
What would settle it
A larger, independently sampled collection of AI contestation cases that produces substantially different strategy categories, response tactics, or outcome patterns would undermine the claimed generality of the framework.
Original abstract
As AI becomes increasingly embedded in daily life, it has been shown to fail critically, cause harm, and spark public controversy, prompting affected communities, workers, and public-interest groups to contest it. Yet how these contestations unfold in practice remains underexplored. We address this gap by developing an empirically grounded account of AI contestation dynamics. We do so through a thematic analysis of 43 real-world cases in which affected actors direct demands toward those responsible for AI development and deployment, seeking redress, influence, or changes to AI practices. Situating our work within Bovens's relational model of accountability, we conceptualize contestation as accountability-seeking: a dynamic, iterative process in which actors "from below" direct explicit demands at actors "from above," who respond by accepting, resisting, or circumventing accountability. Our analysis produces empirically grounded categories of contestation strategies, institutional response tactics, outcome types, and the contextual factors that shape them, illuminating how accountability is pursued and evaded in practice. We show that those being contested often deploy a range of strategies to limit their accountability. Based on these insights, we offer guidance for researchers, policymakers, advocates, and other stakeholders seeking to support effective AI contestation, with particular attention to anticipating and countering institutional strategies used to evade accountability.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper conducts a thematic analysis of 43 real-world cases of AI contestation to develop an empirically grounded framework of contestation dynamics, including strategies used by affected actors, response tactics by institutions, outcome types, and contextual factors. Drawing on Bovens's relational accountability model, it conceptualizes contestation as accountability-seeking and provides guidance for supporting effective contestation while countering evasion strategies.
Significance. If the methodological details support the robustness of the categories, this study offers important contributions to understanding practical AI accountability processes in HCI and related fields. It provides concrete, empirically derived insights into how accountability is pursued and resisted, which can inform advocacy, policy, and research on AI governance. The use of an established theoretical model with independent cases is a strength.
major comments (2)
- [Methods] The description of how the 43 cases were selected (the sampling frame, inclusion/exclusion criteria, and any efforts to mitigate bias toward high-profile Western incidents) is not provided in sufficient detail. This directly undermines verification of the central claim that the analysis produces representative, empirically grounded categories of contestation strategies and institutional tactics.
- [Methods] Details of the thematic analysis process, such as the inductive coding procedure, the number of coders, and any inter-rater reliability metrics, are absent. Without these, the link between the raw cases and the derived categories cannot be fully assessed for subjectivity or replicability, which is load-bearing for the paper's assertion that it illuminates accountability dynamics in practice.
minor comments (1)
- [Abstract] A short statement acknowledging the qualitative, non-probability sample and its implications for generalizability would help readers interpret the scope of the categories and the guidance offered.
Simulated Author's Rebuttal
We thank the referee for their thoughtful review and constructive feedback on our manuscript. The comments on the methods section are well-taken and point to areas where greater transparency will strengthen the paper. We have revised the methods section in response to both major comments, adding the requested details on case selection and the thematic analysis process. These changes improve the replicability and credibility of our empirically grounded framework without altering the core findings or contributions.
Point-by-point responses
Referee: [Methods] The description of how the 43 cases were selected (the sampling frame, inclusion/exclusion criteria, and any efforts to mitigate bias toward high-profile Western incidents) is not provided in sufficient detail. This directly undermines verification of the central claim that the analysis produces representative, empirically grounded categories of contestation strategies and institutional tactics.
Authors: We agree that the original methods section lacked sufficient detail on case selection. The 43 cases were drawn from publicly available sources including news reports, advocacy organization publications, and documented incidents of AI-related harms and contestation. To address this, we have expanded the methods section to explicitly describe the sampling approach: a purposive sampling frame focused on cases involving explicit demands directed at AI developers or deployers by affected actors (communities, workers, or public-interest groups). Inclusion criteria required evidence of contestation (e.g., public campaigns, legal actions, or organized protests seeking accountability), while exclusion criteria omitted purely technical failures without contestation and cases lacking sufficient public documentation. We also describe efforts to mitigate bias toward high-profile Western incidents, including searches across multiple languages and regions and incorporating cases from the Global South where available. This revision directly supports the claim of empirically grounded categories by making the selection process verifiable.
Revision: yes
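The screening rule described in this response amounts to a conjunction of checkable conditions over documented cases. The sketch below is illustrative only: the field names and example cases are hypothetical, not taken from the paper's corpus.

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    has_explicit_demand: bool       # demand directed at a developer/deployer
    has_contestation_evidence: bool # e.g., campaign, lawsuit, or protest
    publicly_documented: bool       # sufficient public documentation exists

def include(case: Case) -> bool:
    """Inclusion requires an explicit demand plus documented contestation;
    purely technical failures and undocumented cases are excluded."""
    return (case.has_explicit_demand
            and case.has_contestation_evidence
            and case.publicly_documented)

# Hypothetical candidates for illustration
candidates = [
    Case("chatbot refund dispute", True, True, True),
    Case("model outage (no contestation)", False, False, True),
]
print([c.name for c in candidates if include(c)])  # prints ['chatbot refund dispute']
```

Encoding the criteria this way makes the inclusion decision auditable: each excluded case fails a named, checkable condition.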
Referee: [Methods] Details of the thematic analysis process, such as the inductive coding procedure, the number of coders, and any inter-rater reliability metrics, are absent. Without these, the link between the raw cases and the derived categories cannot be fully assessed for subjectivity or replicability, which is load-bearing for the paper's assertion that it illuminates accountability dynamics in practice.
Authors: We acknowledge that the original manuscript did not provide adequate detail on the thematic analysis procedure. The analysis followed an inductive approach: initial open coding of all 43 cases to identify recurring patterns in demands, institutional responses, outcomes, and contextual factors, followed by axial coding to group these into higher-level categories aligned with Bovens's relational accountability model, with iterative refinement through team discussions. In the revised manuscript, we specify that two authors independently coded a subset of cases, with all authors reviewing and discussing discrepancies to reach consensus. We also note that formal inter-rater reliability statistics were not computed (their omission is common in qualitative thematic analysis) and describe the steps taken to enhance rigor, such as maintaining an audit trail of coding decisions and member-checking categories against the data. This addition addresses concerns about subjectivity and replicability while preserving the qualitative, interpretive nature of the study.
Revision: yes
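Where this response mentions inter-rater reliability statistics, the standard measure for two coders is Cohen's kappa, which corrects raw agreement for chance. A minimal sketch follows, using hypothetical code labels rather than the paper's actual categories:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from each coder's
    marginal label frequencies.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to ten cases
coder_1 = ["legal", "protest", "legal", "media", "legal",
           "protest", "media", "legal", "protest", "legal"]
coder_2 = ["legal", "protest", "media", "media", "legal",
           "protest", "media", "legal", "legal", "legal"]
print(round(cohens_kappa(coder_1, coder_2), 3))  # prints 0.683
```

A kappa near 1 indicates agreement well beyond chance; in qualitative work such statistics are usually reported alongside, not instead of, consensus discussion.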
Circularity Check
No circularity: categories derived inductively from independent cases using external model
full rationale
The paper performs a thematic analysis of 43 real-world cases to generate categories of contestation strategies, institutional responses, outcomes, and contextual factors. These are situated in Bovens's relational accountability model, which is an external reference rather than a self-citation. No equations, fitted parameters, self-definitional constructs, or load-bearing self-citations appear in the derivation chain. The central claims rest on empirical coding of independent cases rather than reducing to the inputs by construction, making the analysis self-contained.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Bovens's relational model of accountability applies directly to AI contestation as accountability-seeking from below.
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/RealityFromDistinction.lean, theorem reality_from_one_distinction. Match: unclear. Linked passage: "thematic analysis of 43 real-world cases... categories of contestation strategies, institutional response tactics, outcome types, and the contextual factors that shape them"
- IndisputableMonolith/Cost/FunctionalEquation.lean, theorem washburn_uniqueness_aczel. Match: unclear. Linked passage: "conceptualize contestation as accountability-seeking... Bovens's relational model"
Reference graph
Works this paper leans on
- [1] Harold Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso. 2024. Bugs in our pockets: the risks of client-side scanning. Journal of Cybersecurity 10, 1 (01 2024), tyad020. arXiv:https://acad...
- [2] AI Now Institute. 2018. Litigating Algorithms. AI Now Institute. https://ainowinstitute.org/news/litigating-algorithms-3
- [3] American Civil Liberties Union. 2022. ACLU v. Clearview AI. https://www.aclu.org/cases/aclu-v-clearview-ai
- [4] Anonymous. 2012. Incident Number 75: Google Instant's Allegedly 'Anti-Semitic' Results Lead To Lawsuit In France. AI Incident Database (2012). https://incidentdatabase.ai/cite/75 Retrieved December 2025.
- [5] Anonymous. 2018. Incident Number 184: Facial Recognition Program in São Paulo Metro Stations Suspended for Illegal and Disproportionate Violation of Citizens' Right to Privacy. AI Incident Database (2018). https://incidentdatabase.ai/cite/184 Retrieved December 2025.
- [6] Anonymous. 2021. Incident Number 360: McDonald's AI Drive-Thru Allegedly Collected Biometric Customer Data without Consent, Violating BIPA. AI Incident Database (2021). https://incidentdatabase.ai/cite/360 Retrieved December 2025.
- [7] Anonymous. 2021. Incident Number 534: Facebook Alleged in Lawsuit Misleading Public about Effects of Algorithms on Children. AI Incident Database (2021). https://incidentdatabase.ai/cite/534 Retrieved January 2026.
- [8] Daniel Atherton. 2020. Incident Number 926: Giorgia Meloni Reportedly Targeted by Deepfake Pornography. AI Incident Database (2020). https://incidentdatabase.ai/cite/926 Retrieved December 2025.
- [9] Daniel Atherton. 2022. Incident Number 639: Air Canada Chatbot Reportedly Provides Inaccurate Bereavement Fare Information, Leading to Customer Overpayment. AI Incident Database (2022). https://incidentdatabase.ai/cite/639 Retrieved December 2025.
- [10] Daniel Atherton. 2023. Incident Number 597: Female Students at Westfield High School in New Jersey Reportedly Targeted with Deepfake Nudes. AI Incident Database (2023). https://incidentdatabase.ai/cite/597 Retrieved December 2025.
- [11] Daniel Atherton. 2023. Incident Number 608: UnitedHealth Accused of Deploying Allegedly Flawed AI to Deny Medical Coverage. AI Incident Database (2023). https://incidentdatabase.ai/cite/608 Retrieved December 2025.
- [12] Daniel Atherton. 2023. Incident Number 726: A Self-Driving Cruise Robot Taxi Reportedly Struck and Dragged a Pedestrian 20 Feet. AI Incident Database (2023). https://incidentdatabase.ai/cite/726 Retrieved December 2025.
- [13] Agathe Balayn, Yulu Pi, David Gray Widder, Kars Alfrink, Mireia Yurrita, Sohini Upadhyay, Naveena Karusala, Henrietta Lyons, Cagatay Turkay, Christelle Tessono, Blair Attard-Frost, and Ujwal Gadiraju. 2024. From Stem to Stern: Contestability Along AI Value Chains. In Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and So...
- [14] Bruce Barcott. 2025. In early ruling, federal judge defines Character.AI chatbot as product, not speech. https://www.transparencycoalition.ai/news/important-early-ruling-in-characterai-case-this-chatbot-is-a-product-not-speech Accessed: 2025-12-03.
- [15] Eric Baumer, Jenna Burrell, Morgan G. Ames, Jed Brubaker, and Paul Dourish. 2015. On the Importance and Implications of Studying Technology Non-Use. interactions 22 (02 2015), 52–56. doi:10.1145/2723667
- [16] BBC News. 2020. A-levels and GCSEs: Government U-turn on exam grades to use teacher assessments. https://www.bbc.co.uk/news/education-53923279
- [17] Luca Belli, Walter Britto Gaspar, and Nicolo Zingales. 2024. Regulating Facial Recognition in Brazil: Legal and Policy Perspectives. Cambridge University Press, 228–241.
- [18] Benefits Tech Advocacy Hub. 2022. Arkansas Medicaid Home and Community Based Services Hours Cuts. https://www.btah.org/case-study/arkansas-medicaid-home-and-community-based-services-hours-cuts.html Accessed 2025-12-16.
- [19] Andrea J. Bingham. 2023. From Data Management to Actionable Findings: A Five-Phase Process of Qualitative Data Analysis. 22 (2023). doi:10.1177/16094069231183620
- [20] Hannah Bloch-Wehba. 2022. Algorithmic Governance from the Bottom Up. BYU Law Review (01 2022). doi:10.2139/ssrn.4054640
- [21] Daniel James Bogiatzis-Gibbons. 2024. Beyond Individual Accountability: (Re-)Asserting Democratic Control of AI. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (Rio de Janeiro, Brazil) (FAccT '24). Association for Computing Machinery, New York, NY, USA, 74–84. doi:10.1145/3630106.3658541
- [22] Tiziano Bonini and Emiliano Treré. 2024. Algorithms of Resistance: The Everyday Fight Against Platform Power. The MIT Press. doi:10.7551/mitpress/14329.001.0001
- [23] Mark Bovens. 2007. Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal 13 (07 2007), 447–468. doi:10.1111/j.1468-0386.2007.00378.x
- [24] Mark Bovens. 2010. Two Concepts of Accountability: Accountability as a Virtue and as a Mechanism. West European Politics 33, 5 (2010), 946–967. doi:10.1080/01402382.2010.486119
- [25] Blake Brittain. 2025. US judge preliminarily approves $1.5 billion Anthropic copyright settlement. Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/us-judge-approves-15-billion-anthropic-copyright-settlement-with-authors-2025-09-25/ Accessed: 2025-12-19.
- [26] Finn Brunton and Helen Nissenbaum. 2015. Obfuscation: A User's Guide for Privacy and Protest. The MIT Press. doi:10.7551/mitpress/9780262029735.001.0001
- [27] Bundesverfassungsgericht, German Federal Constitutional Court. 2023. Judgment of the First Senate of 16 February 2023 - 1 BvR 1547/19 -, paras. 1-178. https://www.bverfg.de/e/rs20230216_1bvr154719en.html English translation. Accessed 2025-12-08.
- [28] Jenna Burrell. 2016. How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (2016), 2053951715622512. doi:10.1177/2053951715622512
- [29] Business and Human Rights Centre. 2018. Google introduces "Artificial Intelligence principles" that prohibit its use in weapons & human rights abuses. https://www.business-humanrights.org/en/latest-news/google-introduces-artificial-intelligence-principles-that-prohibit-its-use-in-weapons-human-rights-abuses/ Accessed: 2025-12-19.
- [30] Hannah L. Buxbaum. 2025. Adhesive Forum Selection Agreements and Access to Justice: The Function and Limits of Anti-Waiver Protections. German Law Journal 26, 5 (2025), 876–888. doi:10.1017/glj.2025.10140
- [31] Claudia Camargo and Eelco Jacobs. 2013. Working Paper 16: Social accountability and its conceptual challenges: An analytical framework. Basel Institute on Governance Working Papers (02 2013), 1–24. doi:10.12685/bigwp.2013.16.1-24
- [32] Yanto Chandra and Liang Shang. 2019. Inductive Coding. In Qualitative Research Using R: A Systematic Approach. Springer, Singapore. doi:10.1007/978-981-13-3170-1_8
- [33] Caroline Chen. 2020. Only Seven of Stanford's First 5,000 Vaccines Were Designated for Medical Residents. https://www.propublica.org/article/only-seven-of-stanfords-first-5-000-vaccines-were-designated-for-medical-residents Accessed 2025-12-08.
- [34] Ben Chester Cheong. 2024. Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics 6 (2024). doi:10.3389/fhumd.2024.1421273
- [35]
- [36] Clarkson Law Firm, P.C. 2023. Class Action Challenges OpenAI on Privacy. https://clarksonlawfirm.com/class-action-challenges-openai-on-privacy/ Accessed: 2025-12-12.
- [37] Laura Claus and Paul Tracey. 2019. Making Change from Behind a Mask: How Organizations Challenge Guarded Institutions by Sparking Grassroots Activism. Academy of Management Journal 63 (08 2019). doi:10.5465/amj.2017.0507
- [38] Coalition Against Predictive Policing in Pittsburgh. 2025. Predictive Policing Advocacy and Resources. https://capp-pgh.com/ Advocacy site opposing predictive policing in Pittsburgh; accessed 18 December 2025.
- [39] Jennifer Cobbe, Michelle Seng Ah Lee, and Jatinder Singh. 2021. Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT '21). Association for Computing Machinery, New York, NY, USA, 598–609. doi:10.1145/344218...
- [40] Jennifer Cobbe and Jatinder Singh. 2021. Artificial intelligence as a service: Legal responsibilities, liabilities, and policy challenges. Computer Law & Security Review 42 (2021), 105573. doi:10.1016/j.clsr.2021.105573
- [41] Jennifer Cobbe, Michael Veale, and Jatinder Singh. 2023. Understanding accountability in algorithmic supply chains. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (Chicago, IL, USA) (FAccT '23). Association for Computing Machinery, New York, NY, USA, 1186–1197. doi:10.1145/3593013.3594073
- [42] Cary Coglianese and Lavi M. Ben Dor. 2021. AI in Adjudication and Administration. Brooklyn Law Review (2021). https://scholarship.law.upenn.edu/faculty_scholarship/2118
- [43] Commission nationale de l'informatique et des libertés (CNIL). 2024. Employee Monitoring: CNIL Fined Amazon France Logistique €32 Million. https://www.cnil.fr/en/employee-monitoring-cnil-fined-amazon-france-logistique-eu32-million Accessed: 2025-12-13.
- [44] Penny Crofts and Honni van Rijswijk. 2020. Negotiating 'Evil': Google, Project Maven and the Corporate Form. Law, Technology and Humans 2 (02 2020). doi:10.5204/lthj.v2i1.1313
- [45] Michal Czerniawski and Dan Svantesson. 2023. Challenges to the extraterritorial enforcement of data privacy law - EU case study. 127–153.
- [46] Grace Dean. 2023. A lawsuit claims OpenAI stole "massive amounts of personal data," including medical records and information about children, to train ChatGPT. Business Insider. https://www.businessinsider.com/openai-chatgpt-generative-ai-stole-personal-data-lawsuit-children-medical-2023-6 Accessed: 2025-12-19.
- [47] Alicia DeVrio, Motahhare Eslami, and Kenneth Holstein. 2024. Building, Shifting, & Employing Power: A Taxonomy of Responses From Below to Algorithmic Harm. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (Rio de Janeiro, Brazil) (FAccT '24). Association for Computing Machinery, New York, NY, USA, 1093–1106. doi:10.1145...
- [48] Anders Eidesvik. 2025. Hunger strike against AGI in San Francisco passes one-month mark. https://peninsulapress.com/2025/10/02/hunger-strike-against-agi-passes-one-month-mark/ Accessed 2025-12-16.
- [49] Madeleine Elish. 2019. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society 5 (03 2019), 40–60. doi:10.17351/ests2019.260
- [50] Jakob Emerson. 2025. Judge denies UnitedHealth's bid to limit discovery in AI coverage denial case. https://www.beckerspayer.com/payer/medicare-advantage/judge-denies-unitedhealths-bid-to-limit-discovery-in-ai-coverage-denial-case/
- [51] Lydia Emmanouilidou. 2025. How a Shady US AI Company Dodged Fines and Defied Regulators Across Europe. We Are Solomon. https://wearesolomon.com/mag/format/investigation/clearview-how-a-shady-us-ai-company-dodged-fines-and-defied-regulators-across-europe/ Accessed: 2025-12-13.
- [52] Ethical Tech Initiative, George Washington University. 2024. Bartz v. Anthropic PBC. https://blogs.gwu.edu/law-eti/ai-litigation-database/case-detail-page/?pid=251 DAIL – Database of AI Litigation. Accessed: 2025-12-19.
- [53] European Center for Digital Rights (noyb). 2025. Criminal complaint against facial recognition company Clearview AI. noyb.eu. https://noyb.eu/en/criminal-complaint-against-facial-recognition-company-clearview-ai Accessed: 2025-12-19.
- [54] Menno Fenger and Robin Simonse. 2024. The implosion of the Dutch surveillance welfare state. Social Policy & Administration 58, 2 (2024), 264–276. doi:10.1111/spol.12998
- [55] Future of Life Institute. 2023. Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ Accessed 2025-12-16.
- [56] Maya Ganesh and Emanuel Moss. 2022. Resistance and refusal to algorithmic harms: Varieties of 'knowledge projects'. Media International Australia 183 (02 2022), 1329878X2210762. doi:10.1177/1329878X221076288
- [57] Google PAIR. 2019. Feedback + Control: Design feedback and control mechanisms to improve your AI and the user experience. https://pair.withgoogle.com/guidebook/chapters/feedback-and-controls/design-ai-feedback-loops
- [58] Gabe Greschler. 2024. Voters approve Prop. E, giving more powers to San Francisco police. The San Francisco Standard. https://sfstandard.com/2024/03/05/san-francisco-voters-election-prop-e-results-police/ Accessed: 2025-12-19.
- [59]
- [60] Brett A. Halperin and Daniela K. Rosner. 2025. "AI is Soulless": Hollywood Film Workers' Strike and Emerging Perceptions of Generative Cinema. ACM Trans. Comput.-Hum. Interact. 32, 2, Article 19 (April 2025), 27 pages. doi:10.1145/3716135
- [61] Sandra Harding. 2008. Sciences from Below: Feminisms, Postcolonialities, and Modernities. Duke University Press. doi:10.2307/j.ctv11smmtn Accessed 4 March 2026.
- [62] Benjamin Hardy. 2018. ARChoices rule blocked. https://arknews.org/index.php/2018/05/30/archoices-rule-blocked/ Accessed: 2025-12-03.
- [63] Aspen Hopkins, Isabella Struckman, Kevin Klyman, and Susan S. Silbey. 2025. Recourse, Repair, Reparation, & Prevention: A Stakeholder Analysis of AI Supply Chains. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25). Association for Computing Machinery, New York, NY, USA, 209–227. doi:10.1145/3715275.3732017
- [64] Benefits Tech Advocacy Hub. [n. d.]. Arkansas Medicaid Home and Community Based Services Hours Cuts. https://www.btah.org/case-study/arkansas-medicaid-home-and-community-based-services-hours-cuts.html Accessed: 2025-12-03.
- [65] Institute for Healthcare Policy & Innovation, University of Michigan. 2018. What happens when an algorithm cuts your health care. https://ihpi.umich.edu/news/what-happens-when-algorithm-cuts-your-health-care Accessed 2025-12-08.
- [66] Jersey Evening Post. 2020. Johnson and Williamson forced into U-turn over A-level grades. https://jerseyeveningpost.com/morenews/uknews/2020/08/17/johnson-and-williamson-forced-into-u-turn-over-a-level-grades/
- [67] Nari Johnson, Sanika Moharana, Christina Harrington, Nazanin Andalibi, Hoda Heidari, and Motahhare Eslami. 2024. The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (Rio de Janeiro, Brazil) (FAccT '24). Association for Computing Machinery, New York, ...
- [68] Justia U.S. Law. 2017. Arkansas Department of Human Services v. Ledgerwood (Majority). https://law.justia.com/cases/arkansas/supreme-court/2017/cv-17-183.html Accessed 2025-12-08.
- [69] Emma Kallina, Thomas Bohné, and Jatinder Singh. 2025. Stakeholder Participation for Responsible AI Development: Disconnects Between Guidance and Current Practice. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25). Association for Computing Machinery, New York, NY, USA, 1060–1079. doi:10.1145/3715275.3732069
- [70] Emma Kallina and Jatinder Singh. 2024. Stakeholder Involvement for Responsible AI Development: A Process Framework. In Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (San Luis Potosi, Mexico) (EAAMO '24). Association for Computing Machinery, New York, NY, USA, Article 1, 14 pages. doi:10.1145/3689904.3694698
- [71]
- [72] Margot E. Kaminski and Jennifer M. Urban. 2021. The Right to Contest AI. Columbia Law Review 121, 7 (2021). https://ssrn.com/abstract=3965041 U of Colorado Law Legal Studies Research Paper No. 21-30.
- [73] Joshua A. Kroll. 2021. Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT '21). Association for Computing Machinery, New York, NY, USA, 758–771. doi:10.1145/3442188.3445937
- [74] Data Justice Lab. [n. d.]. Data Harm Record. https://datajusticelab.org/project/data-harm-record/ Accessed: 2026-03-13.
- [75] Khoa Lam. 2023. Incident Number 592: Facial Recognition Misidentifies Pregnant Woman Leading to False Arrest in Detroit. AI Incident Database (2023). https://incidentdatabase.ai/cite/592 Retrieved December 2025.
- [76] Algorithmic Justice League. [n. d.]. Share Your Story. Spark Change. https://www.ajl.org/harms Accessed: 2026-03-13.
- [77] Roman Lutz. 2017. Incident Number 96: Houston Schools Must Face Teacher Evaluation Lawsuit. AI Incident Database (2017). https://incidentdatabase.ai/cite/96 Retrieved December 2025.
- [78] Henrietta Lyons, Eduardo Velloso, and Tim Miller. 2021. Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 106 (April 2021), 25 pages. doi:10.1145/3449180
- [79] Michael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. 2020. Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI '20). Association for Computing Machinery, New York, NY, USA, 1–14. ...
- [80] Samantha Maldonado. 2025. NYPD bypassed facial recognition ban to ID Pro-Palestinian student protester. THE CITY - NYC News (July 2025). https://www.thecity.nyc/2025/07/18/nypd-fdny-clearview-ai-ban-columbia-palestinian-protest/ Accessed 2025-12-16.