pith. machine review for the scientific record

arxiv: 2604.12793 · v1 · submitted 2026-04-14 · 💻 cs.HC


Human Agency, Causality, and the Human Computer Interface in High-Stakes Artificial Intelligence

Georges Hattab


Pith reviewed 2026-05-10 14:29 UTC · model grok-4.3

classification 💻 cs.HC
keywords human-computer interaction · human agency · causality · explainable AI · high-stakes AI · uncertainty quantification · interface design · media theory

The pith

High-stakes AI erodes human agency by severing the user's direct perception of causality at the interface.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper contends that ethical AI discourse centered on trust and responsibility misses the core human-computer interaction problem in critical domains. It maintains that AI systems function like media that extend human reach while cutting off immediate causal awareness, so that flawed interfaces produce dangerous misrepresentations of system state. Current explainable AI techniques are faulted for emphasizing correlations over causal structure and for neglecting uncertainty, leaving users without the information needed to retain control. In response, the work advances a Causal-Agency Framework that nests causal modeling, uncertainty measures, and human evaluation to keep agency intact.

Core claim

The paper claims that the decisive difficulty for high-stakes AI is not insufficient trust but the loss of human causal control. Framing AI through McLuhan's media theory, it treats the technology as one that augments capability while amputating the operator's direct sense of cause and effect. The interface therefore becomes the essential site where a double uncertainty (the user's and the model's) must be reconciled. Existing XAI methods are criticized for their correlational emphasis and inability to convey uncertainty in ways that support agency. The proposed remedy is a nested Causal-Agency Framework that unites causal models, uncertainty quantification, and human-centered metrics to reinstate agency at the interface.

What carries the argument

The human-computer interface as the mediator of double uncertainty between the human user and the probabilistic model, analyzed through McLuhan's augmentation-amputation lens.

If this is right

  • Interfaces that misrepresent uncertainty or omit causal links will continue to produce human errors even when the underlying model is accurate.
  • Explainable AI must move beyond correlation-based explanations to include explicit causal structure and uncertainty representation.
  • High-stakes deployments require evaluation criteria that directly assess preservation of user agency rather than model transparency alone.
  • The interface layer, not the model layer, is the primary location for restoring causal control.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Design standards for critical AI systems could shift from model auditing to mandatory interface tests for causal clarity and uncertainty communication.
  • The same amputation-of-causality concern may apply to other automated decision systems outside AI, such as algorithmic trading platforms or smart infrastructure controls.
  • Empirical studies could test whether training users on causal diagrams before AI-assisted tasks measurably improves retention of agency compared with post-hoc explanations.

Load-bearing premise

The assumption that AI necessarily acts as a McLuhan medium that amputates the user's direct causal perception is the premise that, if removed, collapses the argument that current interfaces undermine agency.

What would settle it

A controlled experiment in a high-stakes domain such as medical diagnosis or process control that measures user decision accuracy, error rates, and reported sense of causal control when using standard XAI interfaces versus interfaces built on the proposed Causal-Agency Framework; failure of the new interfaces to improve those measures would falsify the central claim.
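The decisive comparison described above could be analyzed, in the simplest case, as a two-proportion test on error rates between the two interface conditions. The numbers and test choice below are illustrative only, not taken from the paper:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(err_a, n_a, err_b, n_b):
    """Two-sided z-test comparing error rates from two interface
    conditions (e.g., standard XAI vs. a CAF-based interface).
    Inputs are error counts and sample sizes; returns (z, p_value)."""
    p_a, p_b = err_a / n_a, err_b / n_b
    # Pooled error rate under the null hypothesis of equal rates.
    p_pool = (err_a + err_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: 30/100 errors with standard XAI vs. 15/100 with CAF.
z, p = two_proportion_z(30, 100, 15, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A null result here (p above the chosen threshold, or an effect in the wrong direction) is exactly the falsification the passage describes.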

Figures

Figures reproduced from arXiv: 2604.12793 by Georges Hattab.

Figure 1. The Causal-Agency Framework (CAF). From the Data to the User: the flow moves from the core technical engine …
Original abstract

Current discourse on Artificial Intelligence (AI) ethics, dominated by "trustworthy" and "responsible" AI, overlooks a more fundamental human-computer interaction (HCI) crisis: the erosion of human agency. This paper argues that the primary challenge of high-stakes AI systems is not trust, but the preservation of human causal control. We posit that "bad AI" will function as "bad UI," a metaphor for catastrophic interface failures that misrepresent system state and lead to human error. Applying Marshall McLuhan's media theory, AI can be framed as a technology of "augmentation" that simultaneously "amputates" the user's direct perception of causality. This places the interface as the critical locus where a "double uncertainty"--that of the human user and that of the probabilistic model--must be mediated. We critique current Explainable AI (XAI) for its correlational focus and failure to represent uncertainty. We conclude by proposing a rigorous, nested Causal-Agency Framework (CAF) that integrates causal models, uncertainty quantification, and human-centered evaluation to restore agency at the interface.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that high-stakes AI ethics discourse overemphasizes trust and overlooks a deeper HCI crisis of eroded human agency. It argues that the core problem is preserving human causal control, frames AI via McLuhan's media theory as a technology that augments while amputating direct causal perception, posits that bad AI functions as bad UI by misrepresenting system states, critiques XAI for its correlational focus and failure to represent uncertainty, and proposes a nested Causal-Agency Framework (CAF) to mediate the resulting double uncertainty at the interface through causal models, uncertainty quantification, and human-centered evaluation.

Significance. If operationalized, the reframing could shift HCI and AI design priorities toward explicit preservation of causal agency and uncertainty mediation, potentially informing safer interfaces in domains like healthcare or autonomous systems. The paper's strength is its interdisciplinary synthesis of media theory with AI ethics concepts, providing a coherent philosophical lens. However, as a purely conceptual proposal without empirical validation, formal derivations, reproducible implementations, or falsifiable predictions, its significance is prospective and hinges on whether future work can translate the CAF into testable designs.

major comments (2)
  1. [Abstract] Abstract and CAF proposal section: The central claim that preserving causal control (rather than trust) is the primary challenge rests on the unelaborated metaphor of AI as a McLuhan-style medium that 'amputates' causality; this framing is load-bearing but lacks concrete examples of how specific high-stakes AI interfaces currently erode causal perception or how the CAF would restore it differently from existing causal HCI approaches.
  2. [CAF proposal] XAI critique and CAF definition: The argument that current XAI fails to represent uncertainty in a way that preserves agency is presented as a key motivation for the CAF, yet the CAF itself is defined in terms of 'integrating causal models, uncertainty quantification, and human-centered evaluation' without specifying mechanisms for nesting or independent benchmarks, creating a risk of circularity where the solution presupposes the concepts it aims to solve.
minor comments (2)
  1. The term 'double uncertainty' is introduced in the abstract without immediate definition or reference to prior sections, which may reduce accessibility until the reader reaches the interface mediation discussion.
  2. The manuscript would benefit from an explicit section or figure outlining the nested structure of the CAF to make the integration of its three components more concrete and evaluable.
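The nested structure the referee asks to see made explicit could be sketched as three wrapped layers, each consuming the one below it. Every name here is hypothetical, chosen for illustration; the paper does not specify this decomposition:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class CausalLayer:
    # Directed edges of an assumed causal graph: cause -> effect.
    edges: Dict[str, str]

@dataclass
class UncertaintyLayer:
    # Wraps the causal skeleton and attaches an interval estimator
    # whose output would be rendered at the interface.
    causal: CausalLayer
    interval: Callable[[float], tuple]

@dataclass
class EvaluationLayer:
    # Outermost layer: agency-preservation metrics from user studies.
    uncertainty: UncertaintyLayer
    metrics: Dict[str, float] = field(default_factory=dict)

caf = EvaluationLayer(
    uncertainty=UncertaintyLayer(
        causal=CausalLayer(edges={"dose": "response"}),
        interval=lambda p: (p - 0.1, p + 0.1),
    ),
    metrics={"intervention_success_rate": 0.0},
)
print(caf.uncertainty.causal.edges)
```

The point of the nesting is that the evaluation layer can only be computed once the uncertainty layer exists, and the uncertainty layer only once the causal skeleton does, which is one way to make the framework's integration concrete and evaluable.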

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their thoughtful and constructive review. Their comments identify key areas where the conceptual framing can be made more concrete and less vulnerable to charges of circularity. We address each major comment below and commit to revisions that strengthen the manuscript while preserving its interdisciplinary, proposal-oriented character.

Point-by-point responses
  1. Referee: [Abstract] Abstract and CAF proposal section: The central claim that preserving causal control (rather than trust) is the primary challenge rests on the unelaborated metaphor of AI as a McLuhan-style medium that 'amputates' causality; this framing is load-bearing but lacks concrete examples of how specific high-stakes AI interfaces currently erode causal perception or how the CAF would restore it differently from existing causal HCI approaches.

    Authors: We agree that the McLuhan-inspired metaphor requires additional grounding to carry the central claim. In the revised manuscript we will insert two concrete illustrations: (1) an AI diagnostic system in which saliency maps highlight correlations rather than causal pathways from patient data to outcome, thereby amputating the clinician’s ability to trace intervention effects; and (2) an autonomous-vehicle interface that presents aggregated confidence scores without exposing the underlying causal structure of perception-planning loops, limiting driver intervention. We will also differentiate the CAF from prior causal HCI literature by stressing its explicit nesting of uncertainty quantification at the interface layer—an element not foregrounded in existing causal-modeling approaches. These additions will be placed in a new subsection of the CAF proposal without altering the paper’s conceptual scope. revision: yes

  2. Referee: [CAF proposal] XAI critique and CAF definition: The argument that current XAI fails to represent uncertainty in a way that preserves agency is presented as a key motivation for the CAF, yet the CAF itself is defined in terms of 'integrating causal models, uncertainty quantification, and human-centered evaluation' without specifying mechanisms for nesting or independent benchmarks, creating a risk of circularity where the solution presupposes the concepts it aims to solve.

    Authors: The referee correctly flags a presentational weakness that could suggest circularity. We will expand the CAF definition to articulate the nesting explicitly: causal models supply the structural skeleton, uncertainty quantification (via conformal prediction intervals or Bayesian credible sets) is rendered at the interface to communicate model limitations, and human-centered evaluation supplies agency-preservation metrics (e.g., intervention success rate and perceived locus of control) that serve as independent benchmarks. We will also outline a minimal comparative protocol—testing CAF-augmented interfaces against standard XAI baselines on the same high-stakes task—to allow falsifiable assessment. These clarifications will be added to the CAF proposal section. revision: yes
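The conformal prediction intervals the rebuttal invokes can be sketched with a minimal split-conformal construction; this is an illustrative instance of the general technique, not the paper's implementation:

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_targets, test_pred, alpha=0.1):
    """Split conformal prediction: turn a point prediction into an
    interval with roughly (1 - alpha) marginal coverage, using
    absolute residuals on a held-out calibration set."""
    residuals = np.abs(cal_targets - cal_preds)
    n = len(residuals)
    # Finite-sample-corrected quantile of the calibration residuals.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, q_level)
    return test_pred - q, test_pred + q

# Toy calibration data: predictions off by at most ~1 unit.
rng = np.random.default_rng(0)
cal_preds = rng.normal(size=200)
cal_targets = cal_preds + rng.uniform(-1, 1, size=200)
lo, hi = split_conformal_interval(cal_preds, cal_targets, test_pred=0.5)
print(f"~90% interval: [{lo:.2f}, {hi:.2f}]")
```

An interface following the rebuttal's design would render `[lo, hi]` alongside the point prediction, so that the width of the interval, rather than a bare confidence score, is what the user reasons over.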

Circularity Check

0 steps flagged

No circularity: conceptual proposal with no derivational reduction

Full rationale

The manuscript is a philosophical reframing of HCI challenges in high-stakes AI, drawing on McLuhan's media theory to argue that preserving causal control (rather than trust) is primary. It critiques XAI and proposes a Causal-Agency Framework (CAF) as an integrative approach. No equations, parameter fits, predictions, or formal derivations exist that could reduce to self-defined inputs or self-citations. The central claims function as interpretive stance and design proposal without any load-bearing step that loops back by construction to its own premises. This is a standard non-circular conceptual paper.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claim rests on interpretive application of media theory and several domain assumptions about how interfaces affect human perception of causality; no free parameters or new physical entities are introduced.

axioms (2)
  • domain assumption AI systems function as media that simultaneously augment and amputate human perception of causality (McLuhan framing)
    Invoked to position the interface as the critical locus of double uncertainty.
  • domain assumption Current XAI methods are limited to correlational explanations and fail to represent model uncertainty
    Used to motivate the need for the new framework.
invented entities (1)
  • Causal-Agency Framework (CAF) no independent evidence
    purpose: Integrates causal models, uncertainty quantification, and human-centered evaluation to restore agency at the interface
    Newly proposed construct without external validation or falsifiable predictions supplied.

pith-pipeline@v0.9.0 · 5485 in / 1447 out tokens · 25172 ms · 2026-05-10T14:29:47.493356+00:00 · methodology

