Pith · machine review for the scientific record

arxiv: 2605.15064 · v1 · submitted 2026-05-14 · 💻 cs.HC


After the Interface: Relocating Human Agency in the Age of Conversational AI


Pith reviewed 2026-05-15 03:10 UTC · model grok-4.3

classification 💻 cs.HC
keywords: human agency · conversational AI · HCI · process control · outcome control · interface affordances · agency relocation

The pith

Human agency has not diminished in conversational AI but has relocated from interface controls to goal articulation and outcome evaluation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that worries about eroding human agency in AI systems stem from a framing problem rather than an actual loss. As interfaces give way to conversational and generative AI, agency shifts from direct manipulation of features to the processes of stating goals, judging outputs, and negotiating results. The authors use a distinction between process control and outcome control to map this shift across different systems, showing redistribution rather than disappearance. This matters for how designers assign responsibility and support user judgment when AI outputs cannot be verified. The work urges the community to redefine agency around interaction rather than traditional affordances.

Core claim

Agency has not diminished but has relocated. As interaction has shifted from command- and feature-based paradigms toward conversational, generative, and agentic AI, human agency migrates from interface affordances to interaction itself: articulating goals, evaluating outputs, and negotiating outcomes. The paper distinguishes process control from outcome control to demonstrate that apparent loss of agency is redistribution, while addressing the objection that outcome agency may be illusory when AI produces plausible but unverifiable results.

What carries the argument

The process-control versus outcome-control distinction, applied as a diagnostic lens to show agency moving from interface affordances into the interaction activities of goal-setting and output negotiation.
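One way to picture the lens is as a placement of systems in a two-dimensional control space. This is purely illustrative: the system names and 0–1 coordinates below are hypothetical placements assumed for the sketch, not values from the paper.

```python
# Illustrative sketch of the paper's two-dimensional control space.
# Coordinates are (process_control, outcome_control) on a 0-1 scale;
# all placements are assumptions for illustration, not from the paper.
SYSTEMS = {
    "command-line interface": (0.9, 0.3),
    "direct-manipulation GUI": (0.8, 0.5),
    "recommender system": (0.3, 0.4),
    "conversational AI": (0.2, 0.8),
    "agentic AI": (0.1, 0.9),
}


def agency_profile(system: str) -> str:
    """Label where a system's agency concentrates under the diagnostic lens."""
    process, outcome = SYSTEMS[system]
    return "process-dominant" if process >= outcome else "outcome-dominant"
```

Under these assumed placements, `agency_profile("conversational AI")` comes out `"outcome-dominant"` while `agency_profile("command-line interface")` comes out `"process-dominant"`, mirroring the claimed relocation rather than loss of control.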

If this is right

  • Design focus must move to tools that help users clearly state goals and assess AI outputs rather than building more interface controls.
  • Responsibility for AI outcomes falls more on users' capacity to negotiate and evaluate during interaction.
  • Systems producing unverifiable outputs require new mechanisms to make outcome agency real rather than illusory.
  • Agency in human-AI interaction becomes defined by interaction quality instead of interface features.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Users skilled at prompting and evaluation may gain disproportionate agency, creating new skill-based divides.
  • New metrics for AI success could center on how effectively a system supports goal negotiation and outcome verification.
  • This view connects to alignment challenges where conversational interfaces must accurately capture and act on user intent.
  • Comparative experiments could test whether outcome control feels substantive in high-stakes domains like medical or legal advice.

Load-bearing premise

The process versus outcome control split is sufficient to establish that agency has genuinely relocated, and that outcome control remains meaningful even when AI results cannot be fully verified.

What would settle it

A study finding that users report equivalent or lower perceived control and satisfaction when using conversational AI for goal articulation and outcome negotiation compared with traditional command interfaces on matched tasks.

Figures

Figures reproduced from arXiv: 2605.15064 by Mengke Wu, Mike Yao.

Figure 1. A two-dimensional control space situating contemporary systems, with examples, by Process Control (vertical) and Outcome Control (horizontal).
Original abstract

As AI systems take on greater autonomy, a quiet anxiety has settled over the HCI community: human agency is eroding. Users no longer control execution, interfaces recede, and machines decide. We argue that this anxiety, while understandable, reflects a framing problem rather than an empirical finding. Agency has not diminished but has relocated. As interaction has shifted from command- and feature-based paradigms toward conversational, generative, and agentic AI, human agency migrates from interface affordances to interaction itself: articulating goals, evaluating outputs, and negotiating outcomes. To make this relocation visible, we revisit control as a diagnostic lens, distinguish process control and outcome control, and map different systems across this space to show that what looks like agency's disappearance is actually its redistribution. We take seriously the objection that outcome-based agency may be illusory in systems that produce plausible but unverifiable outputs, and argue that this concern reveals what agency in human-AI interaction truly requires. This paper invites the CUI community to reconsider what agency means, where it lives, and what it demands, including who gets to have it and who holds responsibility when it fails, before the consequences become impossible to overlook.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper argues that human agency has not diminished with the rise of conversational, generative, and agentic AI but has relocated from interface affordances to the interaction process itself: articulating goals, evaluating outputs, and negotiating outcomes. It revisits control as a diagnostic lens by distinguishing process control from outcome control, maps various systems across this space to illustrate redistribution, and engages the objection that outcome-based agency may be illusory when AI outputs are plausible but unverifiable.

Significance. If the process/outcome distinction can be maintained without collapsing, the paper supplies a useful conceptual diagnostic for the CUI and HCI communities to analyze agency shifts, design implications, and responsibility allocation in human-AI systems. Its strength lies in directly addressing the relocation framing rather than assuming loss, though as a purely interpretive piece its broader impact would increase with clearer operational criteria or empirical tests.

major comments (1)
  1. [§3] Revisiting control as a diagnostic lens and the subsequent mapping: The process-control versus outcome-control distinction is load-bearing for the central relocation claim, yet the paper supplies no explicit criteria for when negotiation constitutes genuine outcome control versus post-hoc ratification of unverifiable outputs. This leaves the diagnostic vulnerable to collapse in generative systems where the AI's internal generation is the opaque process, as the skeptic's stress-test concern highlights.
minor comments (2)
  1. [Abstract] The final sentence on consequences becoming 'impossible to overlook' introduces a normative tone that could be softened to maintain the paper's measured conceptual focus.
  2. [Mapping section] A table or diagram summarizing how example systems (traditional interfaces, conversational agents, generative tools) occupy the process/outcome space would improve clarity and make the diagnostic lens more immediately usable.

Simulated Authors' Rebuttal

1 responses · 0 unresolved

We thank the referee for their constructive and incisive review. The single major comment identifies a genuine point of vulnerability in the central distinction, which we address directly below.

Point-by-point responses
  1. Referee: §3 (Revisiting control as a diagnostic lens and the subsequent mapping): The process-control versus outcome-control distinction is load-bearing for the central relocation claim, yet the paper supplies no explicit criteria for when negotiation constitutes genuine outcome control versus post-hoc ratification of unverifiable outputs. This leaves the diagnostic vulnerable to collapse in generative systems where the AI's internal generation is the opaque process, as the skeptic's stress-test concern highlights.

    Authors: We agree that the manuscript does not supply a formal set of explicit criteria for demarcating genuine outcome control from ratification of unverifiable outputs. The distinction is developed conceptually through the user's retained capacities for goal articulation, output evaluation, and negotiation, with the system mappings intended to show how these capacities redistribute agency even when internal processes remain opaque. The later engagement with the skeptic's objection argues that this concern itself clarifies what agency requires (mechanisms for informed assessment rather than full transparency). However, we acknowledge the risk of collapse without sharper boundaries. In revision we will add a short subsection to §3 that specifies operational indicators: (1) user-initiated iteration that alters outputs relative to stated goals, (2) explicit acceptance or rejection when external verification is possible, and (3) documented acknowledgment of uncertainty when outputs are unverifiable. These indicators will be illustrated using the existing mappings rather than new empirical data. This change is partial and preserves the paper's interpretive scope.

    Revision status: partial
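The rebuttal's three proposed indicators can be sketched as a small decision rule. The field names and the combination logic below are assumptions made for illustration, not criteria stated in the paper.

```python
from dataclasses import dataclass


@dataclass
class Interaction:
    """Hypothetical record of one user-AI exchange, keyed to the
    rebuttal's three proposed indicators (names are assumed here)."""
    iteration_altered_output: bool   # indicator 1: user iteration changed the output
    externally_verifiable: bool      # can the output be checked against the world?
    user_accepted_or_rejected: bool  # indicator 2: explicit verdict when verifiable
    uncertainty_acknowledged: bool   # indicator 3: uncertainty noted when unverifiable


def genuine_outcome_control(ix: Interaction) -> bool:
    """One assumed reading: outcome control is genuine only if iteration
    shaped the result AND the verification-appropriate indicator holds."""
    if not ix.iteration_altered_output:
        return False  # mere ratification of whatever the AI produced
    if ix.externally_verifiable:
        return ix.user_accepted_or_rejected
    return ix.uncertainty_acknowledged
```

Under this sketch, an unverifiable output still counts as outcome control only when the user both shaped it through iteration and its uncertainty was documented, which is the boundary the referee asked the authors to draw.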

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper advances a conceptual reframing of human agency as relocated rather than diminished, using the introduced distinction between process control and outcome control as an analytical diagnostic to map HCI systems. This lens is presented as a revisiting of established control concepts to make the relocation visible, without any equations, fitted parameters, or derivations that reduce the central claim to a self-definition or input by construction. No load-bearing self-citations, uniqueness theorems, or ansatzes are invoked in the provided text; the argument draws on prior HCI literature as independent support and explicitly engages the illusion objection without resolving it tautologically. The derivation chain is therefore self-contained as an interpretive mapping rather than a forced equivalence.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The paper rests on two domain assumptions about the nature of agency and control; no free parameters or invented entities are introduced.

axioms (2)
  • domain assumption Human agency in interaction can be usefully diagnosed by distinguishing process control from outcome control.
    This distinction is invoked to map systems and demonstrate that agency relocates rather than disappears.
  • domain assumption Observable shifts from command-based to conversational AI paradigms indicate relocation of agency rather than its net loss.
    This premise underpins the claim that the anxiety reflects a framing problem.

pith-pipeline@v0.9.0 · 5501 in / 1335 out tokens · 70773 ms · 2026-05-15T03:10:47.898094+00:00 · methodology

