Humanwashing -- It Should Leave You Feeling Dirty
Pith reviewed 2026-05-14 17:41 UTC · model grok-4.3
The pith
The 'human in the loop' metaphor in AI discussions often obscures real oversight needs and enables misleading safety claims.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper argues that indiscriminate application of the loop metaphor to AI decision systems fails to illuminate either the human roles required or the outcomes achieved, and instead supports humanwashing by directing attention toward reassuring language rather than the practical limitations of oversight.
What carries the argument
The 'human in the loop' metaphor itself, which the authors argue functions to imply safety and human control in AI systems while deflecting detailed scrutiny of the actual human involvement.
If this is right
- Oversight proposals for AI must specify exact human roles rather than rely on the loop metaphor to convey safety.
- Claims of accountability in current AI systems require re-examination when they rest primarily on the presence of human involvement.
- Language around bias, transparency, and manipulation in AI should shift toward concrete process descriptions instead of metaphorical shorthand.
- Writers and regulators should test whether the metaphor improves or hinders public understanding of decision outcomes.
Where Pith is reading between the lines
- The same pattern of reassuring metaphors could apply to other AI terms such as 'augmented' or 'assisted,' warranting parallel checks for accuracy.
- Empirical studies could compare public perceptions of AI safety before and after replacing the loop phrase with specific oversight protocols.
- Industry adoption of such language may follow predictable cycles seen in other technologies, suggesting a need for ongoing metaphor audits.
Load-bearing premise
The metaphor is mainly deployed in settings where it does not clarify oversight and instead serves to present AI systems favorably.
What would settle it
Documentation of multiple AI decision contexts where the phrase 'human in the loop' is paired with precise, verifiable descriptions of human responsibilities that directly reduce documented errors or biases.
Original abstract
The phrase 'human in the loop' is increasingly used to imply a sense of safety in relation to AI decision systems. It shouldn't. There are contexts where it can be applied appropriately, but these are not in the deployed decision systems we see dominating today. Human oversight of AI decision processes is one of the most popular proposals for addressing concerns, especially about bias, discrimination, misinformation, manipulation, accountability, and transparency. But there is insufficient examination of what human oversight actually means. The question raised in this paper is whether using the metaphor of a loop does anything to assist understanding of what is required and what is achieved in a particular decision context. Indiscriminate use of the loop metaphor obscures both processes and outcomes. It enables 'humanwashing', an activity analogous to 'greenwashing', where writers and commentators use language primarily aimed at putting systems in the best possible light.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that the phrase 'human in the loop' is increasingly used to imply safety in AI decision systems but often fails to clarify actual human oversight processes or outcomes. Indiscriminate application of the loop metaphor enables 'humanwashing' (analogous to greenwashing), where language primarily serves to present systems favorably rather than accurately describing accountability mechanisms for issues like bias and transparency.
Significance. If substantiated with concrete evidence, the conceptual argument could encourage more precise terminology in AI governance and HCI research, reducing the risk of misleading claims about human involvement in automated decisions.
major comments (1)
- [Abstract] The central assertion that the loop metaphor 'obscures both processes and outcomes' is unsupported by any specific examples, case studies, or comparisons from deployed AI systems. No concrete instances are provided showing misuse in contexts like bias mitigation or accountability, nor is there a definition of 'obscuring' versus 'clarifying' use, leaving the greenwashing analogy without empirical grounding.
minor comments (1)
- [Abstract] The invented term 'humanwashing' is introduced without an explicit definition or differentiation from related concepts such as ethics-washing.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback. We address the major comment below and outline planned revisions to strengthen the manuscript.
Point-by-point responses
Referee: [Abstract] The central assertion that the loop metaphor 'obscures both processes and outcomes' is unsupported by any specific examples, case studies, or comparisons from deployed AI systems. No concrete instances are provided showing misuse in contexts like bias mitigation or accountability, nor is there a definition of 'obscuring' versus 'clarifying' use, leaving the greenwashing analogy without empirical grounding.
Authors: We acknowledge that the abstract presents the argument at a conceptual level, without concrete examples or an explicit definition of obscuring versus clarifying uses. The manuscript's core contribution is a critique of metaphorical language in AI governance, drawing an analogy to greenwashing as a form of misleading framing rather than an empirical study of specific systems. The claim rests on the observation that 'human in the loop' is routinely invoked in policy documents and industry statements without detailing oversight mechanisms or outcomes. In revision we will add a clear definition distinguishing obscuring from clarifying applications of the metaphor and include illustrative examples from recent AI ethics guidelines and public deployment announcements to better ground the analogy, while preserving the paper's conceptual focus.
Revision planned: yes
Circularity Check
No circularity: the central claim is a definitional argument by analogy, with no derivations or self-referential reductions.
Full rationale
The paper advances a conceptual critique that the 'human in the loop' metaphor obscures oversight processes and enables 'humanwashing' analogous to greenwashing. No equations, fitted parameters, predictions, or self-citations appear in the provided text. The argument rests on definitional premises and an unelaborated analogy rather than any derivation chain that reduces to its own inputs by construction. This matches the expected non-finding for papers whose claims are not formally derived.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: the 'human in the loop' metaphor is frequently used indiscriminately in AI literature and deployments.
invented entities (1)
- humanwashing (no independent evidence)
Reference graph
Works this paper leans on
- [1] Green B. The Flaws of Policies Requiring Human Oversight of Government Algorithms. Computer Law & Security Review. 2022 Jul;45:105681.
- [2] Eggert L. Rethinking ‘Meaningful Human Control’. In: Responsible Use of AI in Military Systems. Chapman and Hall/CRC; 2024. p. 213-31.
- [3] Holzinger A, Zatloukal K, Müller H. Is Human Oversight to AI Systems Still Possible? New Biotechnology. 2025 Mar;85:59-62.
- [4] DSIT. A Pro-Innovation Approach to AI Regulation. Department for Science, Innovation & Technology; 2023.
- [5] Information Commissioner’s Office. How Do We Ensure Individual Rights in Our AI Systems? ICO; 2025. Available from: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-individual-rights-in-our-ai-systems/
- [6] EU. Article 3: Definitions | EU Artificial Intelligence Act. European Union; 2024.
- [7] EU. Article 14: Human Oversight | EU Artificial Intelligence Act. European Union; 2024.
- [8] Rankin J. EU Withdraws Proposed Rules on AI Liability. The Guardian. 2025 Feb.
- [9] Madiega T. Artificial Intelligence Liability Directive. European Parliament; 2023. PE 739.342.
- [10] Wachter S. Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond. SSRN Electronic Journal. 2024.
- [11] EU. General Data Protection Regulation. European Parliament; 2016.
- [12] Wachter S, Mittelstadt B, Floridi L. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation [SSRN Scholarly Paper 2903469]. International Data Privacy Law. 2017.
- [13] Blythe M, Lindley S, Murray-Rust D. Artificial Intelligence and other Speculative Metaphors. In: Proceedings of the 2025 ACM Designing Interactive Systems Conference. DIS ’25. New York, NY, USA: Association for Computing Machinery; 2025. p. 347-56. Available from: https://dl.acm.org/doi/10.1145/3715336.3735714
- [14] Elish MC. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society. 2019 Mar;5:40-60. Available from: https://estsjournal.org/index.php/ests/article/view/260
- [15] Valtonen L, Mäkinen SJ. Human-in-the-Loop: Explainable or Accurate Artificial Intelligence by Exploiting Human Bias? In: 2022 IEEE 28th International Conference on Engineering, Technology and Innovation (ICE/ITMC) & 31st International Association For Management of Technology (IAMOT) Joint Conference; 2022. p. 1-8.
- [16] Vaccaro M, Almaatouq A, Malone T. When Combinations of Humans and AI Are Useful: A Systematic Review and Meta-Analysis. Nature Human Behaviour. 2024 Dec;8(12):2293-303.
- [17] Turchi T, Wilson B, Roach M, Dix A, Malizia A. Addressing the Synergy Gap: The Six Elements of the Design Space. Forthcoming; 2026.
- [18] Van Zoelen E, Mioch T, Tajaddini M, Fleiner C, Tsaneva S, Camin P, et al. Developing Team Design Patterns for Hybrid Intelligence Systems. In: Proceedings of the Second International Conference on Hybrid Human-Artificial Intelligence; 2023. p. 3-16. Available from: https://ebooks.iospress.nl/doi/10.3233/FAIA230071
- [19] Sheridan TB, Verplank WL, Brooks TL. Human/Computer Control of Undersea Teleoperators. In: NASA. Ames Res. Center; 1978. p. 343-57.
- [20] Wiener EL, Curry RE. Flight-Deck Automation: Promises and Problems. Ergonomics. 1980 Nov;23(10):995-1011.
- [21] Endsley MR. The Application of Human Factors to the Development of Expert Systems for Advanced Cockpits. Proceedings of the Human Factors Society Annual Meeting. 1987 Sep;31(12):1388-92.
- [22] Billings CE. Human-Centered Aircraft Automation: A Concept and Guidelines. Ames Research Center, Moffett Field, California: NASA; 1991. NAS 1.15:103885.
- [23] Endsley MR, Kiris EO. The Out-of-the-Loop Performance Problem and Level of Control in Automation. Human Factors. 1995 Jun;37(2):381-94.
- [24] Parasuraman R, Sheridan TB, Wickens CD. A Model for Types and Levels of Human Interaction with Automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans. 2000 May;30(3):286-97.
- [25] Vierhauser M, Islam MNA, Agrawal A, Cleland-Huang J, Mason J. Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems. In: Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. Athens, Greece: ACM; 2021. p. 8-19.
- [26] Taddeo M, Blanchard A. A Comparative Analysis of the Definitions of Autonomous Weapons. In: Mazzi F, editor. Cham: Springer Nature Switzerland; 2023. p. 57-79 (chapter 6). Available from: https://doi.org/10.1007/978-3-031-28678-0_6
- [27] Natarajan S, Mathur S, Sidheekh S, Stammer W, Kersting K. Human-in-the-Loop or AI-in-the-Loop? Automate or Collaborate? Proceedings of the AAAI Conference on Artificial Intelligence. 2025 Apr;39(27):28594-600.
- [28] Evans KD, Robbins SA, Bryson JJ. Do We Collaborate With What We Design? Topics in Cognitive Science. 2023. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1111/tops.12682
- [29] Wilson B, Natali C, Roach M, Scott D, Rahat A, Rawlinson D, et al. Dimensions of Human-Machine Combination: Prompting the Development of Deployable Intelligent Decision Systems for Situated Clinical Contexts. Computer Supported Cooperative Work (CSCW). 2025 Apr.
- [30] Geissler D, Krupp L, Banwari V, Habusch D, Zhou B, Lukowicz P, et al. Human in the Latent Loop (HILL): Interactively Guiding Model Training Through Human Intuition. In: HHAI 2025. Pisa: IOS Press; 2025. p. 16-30. Available from: https://ebooks.iospress.nl/volumearticle/74963
- [31] Stumpf S, Rajaram V, Li L, Wong WK, Burnett M, Dietterich T, et al. Interacting Meaningfully with Machine Learning Systems: Three Experiments. International Journal of Human-Computer Studies. 2009 Aug;67(8):639-62.
- [32] Teso S, Kersting K. Explanatory Interactive Machine Learning. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’19. New York, NY, USA: Association for Computing Machinery; 2019. p. 239-45.
- [33] Wiethof C, Bittner E. Hybrid Intelligence – Combining the Human in the Loop with the Computer in the Loop: A Systematic Literature Review. ICIS 2021 Proceedings. 2021 Dec;2021(11). Available from: https://aisel.aisnet.org/icis2021/ai_business/ai_business/11
- [34] Shneiderman B. Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS Transactions on Human-Computer Interaction. 2020 Sep;12(3):109-24.
- [35] Solove DJ, Matsumi H. AI, Algorithms, and Awful Humans. Fordham Law Review. 2024 Apr.
- [36] Crisan A, Correll M. User Ex Machina: Simulation as a Design Probe in Human-in-the-Loop Text Analytics. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. CHI ’21. New York, NY, USA: Association for Computing Machinery; 2021.
- [37] van Voorst R. Challenges and Limitations of Human Oversight in Ethical Artificial Intelligence Implementation in Health Care: Balancing Digital Literacy and Professional Strain. Mayo Clinic Proceedings: Digital Health. 2024 Dec;2(4):559-63.
- [38] Gomez C, Cho SM, Ke S, Huang CM, Unberath M. Human-AI Collaboration Is Not Very Collaborative yet: A Taxonomy of Interaction Patterns in AI-assisted Decision Making from a Systematic Review. Frontiers in Computer Science. 2025;6:1521066. Available from: https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2024.1521066
- [39] Gajos KZ, Mamykina L. Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning. In: Proceedings of the 27th International Conference on Intelligent User Interfaces. IUI ’22. New York, NY, USA: Association for Computing Machinery.