pith. machine review for the scientific record.

arxiv: 2605.13723 · v1 · submitted 2026-05-13 · 💻 cs.HC · cs.AI · cs.LG · cs.SI

Recognition: no theorem link

Humanwashing -- It Should Leave You Feeling Dirty


Pith reviewed 2026-05-14 17:41 UTC · model grok-4.3

classification 💻 cs.HC · cs.AI · cs.LG · cs.SI
keywords human in the loop · humanwashing · AI oversight · metaphor · greenwashing · decision systems · transparency · accountability

The pith

The 'human in the loop' metaphor in AI discussions often obscures real oversight needs and enables misleading safety claims.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper questions whether the phrase 'human in the loop' clarifies what human oversight actually delivers in AI decision systems. It finds that the metaphor frequently serves to suggest safety and accountability around issues like bias and transparency without examining the concrete processes involved. This usage creates 'humanwashing,' where language presents systems favorably in ways that parallel greenwashing in environmental claims. A reader would care because reliance on this shorthand could weaken demands for genuine accountability in deployed AI tools. The argument centers on contexts where the metaphor adds little understanding and primarily polishes perceptions.

Core claim

The paper argues that indiscriminate application of the loop metaphor to AI decision systems fails to illuminate either the human roles required or the outcomes achieved, and instead supports humanwashing by directing attention toward reassuring language rather than toward the practical limits of oversight.

What carries the argument

The 'human in the loop' metaphor, which the authors show functions to imply safety and human control in AI systems while often avoiding detailed scrutiny of actual involvement.

If this is right

  • Oversight proposals for AI must specify exact human roles rather than rely on the loop metaphor to convey safety.
  • Claims of accountability in current AI systems require re-examination when they rest primarily on the presence of human involvement.
  • Language around bias, transparency, and manipulation in AI should shift toward concrete process descriptions instead of metaphorical shorthand.
  • Writers and regulators should test whether the metaphor improves or hinders public understanding of decision outcomes.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same pattern of reassuring metaphors could apply to other AI terms such as 'augmented' or 'assisted,' warranting parallel checks for accuracy.
  • Empirical studies could compare public perceptions of AI safety before and after replacing the loop phrase with specific oversight protocols.
  • Industry adoption of such language may follow predictable cycles seen in other technologies, suggesting a need for ongoing metaphor audits.

Load-bearing premise

The metaphor is mainly deployed in settings where it does not clarify oversight and instead serves to present AI systems favorably.

What would settle it

Documentation of multiple AI decision contexts where the phrase 'human in the loop' is paired with precise, verifiable descriptions of human responsibilities that directly reduce documented errors or biases.

Figures

Figures reproduced from arXiv: 2605.13723 by Ben Wilson, Matimba Swana, Matt Roach, Peter Winter.

Figure 1
Figure 1. A real ‘human in the loop’ system (author facsimile of a detail from the paper).
Original abstract

The phrase 'human in the loop' is increasingly used to imply a sense of safety in relation to AI decision systems. It shouldn't. There are contexts where it can be applied appropriately, but these are not in the deployed decision systems we see dominating today. Human oversight of AI decision processes is one of the most popular proposals for addressing concerns, especially about bias, discrimination, misinformation, manipulation, accountability, and transparency. But there is insufficient examination of what human oversight actually means. The question raised in this paper is whether using the metaphor of a loop does anything to assist understanding of what is required and what is achieved in a particular decision context. Indiscriminate use of the loop metaphor obscures both processes and outcomes. It enables 'humanwashing', an activity analogous to 'greenwashing', where writers and commentators use language primarily aimed at putting systems in the best possible light.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper claims that the phrase 'human in the loop' is increasingly used to imply safety in AI decision systems but often fails to clarify actual human oversight processes or outcomes. Indiscriminate application of the loop metaphor enables 'humanwashing' (analogous to greenwashing), where language primarily serves to present systems favorably rather than accurately describing accountability mechanisms for issues like bias and transparency.

Significance. If substantiated with concrete evidence, the conceptual argument could encourage more precise terminology in AI governance and HCI research, reducing the risk of misleading claims about human involvement in automated decisions.

major comments (1)
  1. [Abstract] The central assertion that the loop metaphor 'obscures both processes and outcomes' is unsupported by any specific examples, case studies, or comparisons from deployed AI systems. No concrete instances are provided showing misuse in contexts like bias mitigation or accountability, nor is there a definition of 'obscuring' versus 'clarifying' use, leaving the greenwashing analogy without empirical grounding.
minor comments (1)
  1. [Abstract] The invented term 'humanwashing' is introduced without an explicit definition or differentiation from related concepts such as ethics-washing.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive feedback. We address the major comment below and outline planned revisions to strengthen the manuscript.

Point-by-point responses
  1. Referee: [Abstract] The central assertion that the loop metaphor 'obscures both processes and outcomes' is unsupported by any specific examples, case studies, or comparisons from deployed AI systems. No concrete instances are provided showing misuse in contexts like bias mitigation or accountability, nor is there a definition of 'obscuring' versus 'clarifying' use, leaving the greenwashing analogy without empirical grounding.

    Authors: We acknowledge that the abstract presents the argument at a conceptual level without concrete examples or an explicit definition of obscuring versus clarifying uses. The manuscript's core contribution is a critique of metaphorical language in AI governance, drawing an analogy to greenwashing as a form of misleading framing rather than an empirical study of specific systems. The claim rests on the observation that 'human in the loop' is routinely invoked in policy documents and industry statements without detailing oversight mechanisms or outcomes. In revision we will add a clear definition distinguishing obscuring from clarifying applications of the metaphor and include illustrative examples from recent AI ethics guidelines and public deployment announcements to better ground the analogy, while preserving the paper's conceptual focus. Revision: yes.

Circularity Check

0 steps flagged

No circularity: the central claim is a definitional argument by analogy, with no derivations or self-referential reductions.

Full rationale

The paper advances a conceptual critique that the 'human in the loop' metaphor obscures oversight processes and enables 'humanwashing' analogous to greenwashing. No equations, fitted parameters, predictions, or self-citations appear in the provided text. The argument rests on definitional premises and an unelaborated analogy rather than any derivation chain that reduces to its own inputs by construction. This matches the expected non-finding for papers whose claims are not formally derived.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The paper relies on the assumption that language use in AI is often performative rather than descriptive, and introduces a new term without empirical validation in the abstract.

axioms (1)
  • domain assumption The 'human in the loop' metaphor is frequently used indiscriminately in AI literature and deployments.
    This is the premise for claiming it obscures understanding.
invented entities (1)
  • humanwashing (no independent evidence)
    purpose: To describe the misleading use of human oversight language in AI systems.
    New term introduced by analogy to greenwashing.

pith-pipeline@v0.9.0 · 5453 in / 1029 out tokens · 39602 ms · 2026-05-14T17:41:13.235087+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

39 extracted references · 39 canonical work pages

  1. [1]

    The Flaws of Policies Requiring Human Oversight of Government Algorithms

    Green B. The Flaws of Policies Requiring Human Oversight of Government Algorithms. Computer Law & Security Review. 2022 Jul;45:105681

  2. [2]

    Rethinking ‘Meaningful Human Control’

    Eggert L. Rethinking ‘Meaningful Human Control’. In: Responsible Use of AI in Military Systems. Chapman and Hall/CRC; 2024. p. 213-31

  3. [3]

    Is Human Oversight to AI Systems Still Possible?

    Holzinger A, Zatloukal K, Müller H. Is Human Oversight to AI Systems Still Possible? New Biotechnology. 2025 Mar;85:59-62

  4. [4]

    A Pro-Innovation Approach to AI Regulation

    DSIT. A Pro-Innovation Approach to AI Regulation. Department for Science, Innovation & Technology; 2023

  5. [5]

    How Do We Ensure Individual Rights in Our AI Systems?

    Information Commissioner's Office. How Do We Ensure Individual Rights in Our AI Systems? ICO; 2025. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-individual-rights-in-our-ai-systems/

  6. [6]

    Article 3: Definitions|EU Artificial Intelligence Act

    EU. Article 3: Definitions|EU Artificial Intelligence Act. European Union; 2024

  7. [7]

    Article 14: Human Oversight|EU Artificial Intelligence Act

    EU. Article 14: Human Oversight|EU Artificial Intelligence Act. European Union; 2024

  8. [8]

    EU Withdraws Proposed Rules on AI Liability

    Rankin J. EU Withdraws Proposed Rules on AI Liability. The Guardian. 2025 Feb

  9. [9]

    Artificial Intelligence Liability Directive

    Madiega T. Artificial Intelligence Liability Directive. European Parliament; 2023. PE 739.342

  10. [10]

    Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond

    Wachter S. Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond. SSRN Electronic Journal. 2024

  11. [11]

    General Data Protection Regulation

    EU. General Data Protection Regulation. European Parliament; 2016

  12. [12]

    Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation

    Wachter S, Mittelstadt B, Floridi L. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law. 2017. SSRN 2903469

  13. [13]

    Artificial Intelligence and other Speculative Metaphors

    Blythe M, Lindley S, Murray-Rust D. Artificial Intelligence and other Speculative Metaphors. In: Proceedings of the 2025 ACM Designing Interactive Systems Conference. DIS ’25. New York, NY, USA: Association for Computing Machinery; 2025. p. 347-56. Available from: https://dl.acm.org/doi/10.1145/3715336.3735714

  14. [14]

    Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction

    Elish MC. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society. 2019 Mar;5:40-60. Available from: https://estsjournal.org/index.php/ests/article/view/260

  15. [15]

    Human-in-the-Loop: Explainable or Accurate Artificial Intelligence by Exploiting Human Bias?

    Valtonen L, Mäkinen SJ. Human-in-the-Loop: Explainable or Accurate Artificial Intelligence by Exploiting Human Bias? In: 2022 IEEE 28th International Conference on Engineering, Technology and Innovation (ICE/ITMC) & 31st International Association For Management of Technology (IAMOT) Joint Conference; 2022. p. 1-8

  16. [16]

    When Combinations of Humans and AI Are Useful: A Systematic Review and Meta-Analysis

    Vaccaro M, Almaatouq A, Malone T. When Combinations of Humans and AI Are Useful: A Systematic Review and Meta-Analysis. Nature Human Behaviour. 2024 Dec;8(12):2293-303

  17. [17]

    Addressing the Synergy Gap: The Six Elements of the Design Space

    Tommaso Turchi, Ben Wilson, Matt Roach, Alan Dix, Alessio Malizia. Addressing the Synergy Gap: The Six Elements of the Design Space. forthcoming. 2026

  18. [18]

    Developing Team Design Patterns for Hybrid Intelligence Systems

    Van Zoelen E, Mioch T, Tajaddini M, Fleiner C, Tsaneva S, Camin P, et al. Developing Team Design Patterns for Hybrid Intelligence Systems. In: Proceedings of the Second International Conference on Hybrid Human-Artificial Intelligence; 2023. p. 3-16. Available from: https://ebooks.iospress.nl/doi/10.3233/FAIA230071

  19. [19]

    Human/Computer Control of Undersea Teleoperators

    Sheridan TB, Verplank WL, Brooks TL. Human/Computer Control of Undersea Teleoperators. In: NASA. Ames Res. Center; 1978. p. 343-57

  20. [20]

    Flight-Deck Automation: Promises and Problems

    Wiener EL, Curry RE. Flight-Deck Automation: Promises and Problems. Ergonomics. 1980 Nov;23(10):995-1011

  21. [21]

    The Application of Human Factors to the Development of Expert Systems for Advanced Cockpits

    Endsley MR. The Application of Human Factors to the Development of Expert Systems for Advanced Cockpits. Proceedings of the Human Factors Society Annual Meeting. 1987 Sep;31(12):1388-92

  22. [22]

    Human-Centered Aircraft Automation: A Concept and Guidelines

    Billings CE. Human-Centered Aircraft Automation: A Concept and Guidelines. Ames Research Center, Moffett Field, California: NASA; 1991. NAS 1.15:103885

  23. [23]

    The Out-of-the-Loop Performance Problem and Level of Control in Automation

    Endsley MR, Kiris EO. The Out-of-the-Loop Performance Problem and Level of Control in Automation. Human Factors. 1995 Jun;37(2):381-94

  24. [24]

    A Model for Types and Levels of Human Interaction with Automation

    Parasuraman R, Sheridan TB, Wickens CD. A Model for Types and Levels of Human Interaction with Automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans. 2000 May;30(3):286-97

  25. [25]

    Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems

    Vierhauser M, Islam MNA, Agrawal A, Cleland-Huang J, Mason J. Hazard Analysis for Human-on-the-Loop Interactions in sUAS Systems. In: Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. Athens, Greece: ACM; 2021. p. 8-19

  26. [26]

    A Comparative Analysis of the Definitions of Autonomous Weapons

    Taddeo M, Blanchard A. A Comparative Analysis of the Definitions of Autonomous Weapons. In: Mazzi F, editor. Cham: Springer Nature Switzerland; 2023. p. 57-79. Available from: https://doi.org/10.1007/978-3-031-28678-0_6

  27. [27]

    Human-in-the-Loop or AI-in-the-Loop? Automate or Collaborate?

    Natarajan S, Mathur S, Sidheekh S, Stammer W, Kersting K. Human-in-the-Loop or AI-in-the-Loop? Automate or Collaborate? Proceedings of the AAAI Conference on Artificial Intelligence. 2025 Apr;39(27):28594-600

  28. [28]

    Do We Collaborate With What We Design?

    Evans KD, Robbins SA, Bryson JJ. Do We Collaborate With What We Design? Topics in Cognitive Science. 2023. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1111/tops.12682

  29. [29]

    Dimensions of Human-Machine Combination: Prompting the Development of Deployable Intelligent Decision Systems for Situated Clinical Contexts

    Wilson B, Natali C, Roach M, Scott D, Rahat A, Rawlinson D, et al. Dimensions of Human-Machine Combination: Prompting the Development of Deployable Intelligent Decision Systems for Situated Clinical Contexts. Computer Supported Cooperative Work (CSCW). 2025 Apr

  30. [30]

    Human in the Latent Loop (HILL): Interactively Guiding Model Training Through Human Intuition

    Geissler D, Krupp L, Banwari V, Habusch D, Zhou B, Lukowicz P, et al. Human in the Latent Loop (HILL): Interactively Guiding Model Training Through Human Intuition. In: HHAI 2025. Pisa: IOS Press; 2025. p. 16-30. Available from: https://ebooks.iospress.nl/volumearticle/74963

  31. [31]

    Interacting Meaningfully with Machine Learning Systems: Three Experiments

    Stumpf S, Rajaram V, Li L, Wong WK, Burnett M, Dietterich T, et al. Interacting Meaningfully with Machine Learning Systems: Three Experiments. International Journal of Human-Computer Studies. 2009 Aug;67(8):639-62

  32. [32]

    Explanatory Interactive Machine Learning

    Teso S, Kersting K. Explanatory Interactive Machine Learning. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. AIES ’19. New York, NY, USA: Association for Computing Machinery; 2019. p. 239-45

  33. [33]

    Hybrid Intelligence – Combining the Human in the Loop with the Computer in the Loop: A Systematic Literature Review

    Wiethof C, Bittner E. Hybrid Intelligence – Combining the Human in the Loop with the Computer in the Loop: A Systematic Literature Review. ICIS 2021 Proceedings. 2021 Dec;2021(11). Available from: https://aisel.aisnet.org/icis2021/ai_business/ai_business/11

  34. [34]

    Human-Centered Artificial Intelligence: Three Fresh Ideas

    Shneiderman B. Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS Transactions on Human-Computer Interaction. 2020 Sep;12(3):109-24

  35. [35]

    AI, Algorithms, and Awful Humans

    Solove DJ, Matsumi H. AI, Algorithms, and Awful Humans. Fordham Law Review. 2024 Apr

  36. [36]

    User Ex Machina: Simulation as a Design Probe in Human-in-the-Loop Text Analytics

    Crisan A, Correll M. User Ex Machina: Simulation as a Design Probe in Human-in-the-Loop Text Analytics. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. CHI ’21. New York, NY, USA: Association for Computing Machinery

  37. [37]

    Challenges and Limitations of Human Oversight in Ethical Artificial Intelligence Implementation in Health Care: Balancing Digital Literacy and Professional Strain

    van Voorst R. Challenges and Limitations of Human Oversight in Ethical Artificial Intelligence Implementation in Health Care: Balancing Digital Literacy and Professional Strain. Mayo Clinic Proceedings: Digital Health. 2024 Dec;2(4):559-63

  38. [38]

    Human-AI Collaboration Is Not Very Collaborative yet: A Taxonomy of Interaction Patterns in AI-assisted Decision Making from a Systematic Review

    Gomez C, Cho SM, Ke S, Huang CM, Unberath M. Human-AI Collaboration Is Not Very Collaborative yet: A Taxonomy of Interaction Patterns in AI-assisted Decision Making from a Systematic Review. Frontiers in Computer Science. 2025;6:1521066. Available from: https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2024.1521066

  39. [39]

    Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning

    Gajos KZ, Mamykina L. Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning. In: Proceedings of the 27th International Conference on Intelligent User Interfaces. IUI ’22. New York, NY, USA: Association for Computing Machinery