pith. machine review for the scientific record. sign in

arxiv: 2605.07012 · v1 · submitted 2026-05-07 · 💻 cs.HC · cs.CY

Recognition: no theorem link

Exploring the "Banality" of Deception in Generative AI

Ishitaa Narwane, Johanna Gunawan, Konrad Kollnig

Pith reviewed 2026-05-11 01:06 UTC · model grok-4.3

classification 💻 cs.HC cs.CY
keywords banal deceptiongenerative AIdeceptive designchatbotsdark patternsuser awarenessdesign friction
0
0 comments X

The pith

The banality of deception in generative AI suggests introducing friction to protect users.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper applies the concept of banal deception to generative AI, focusing on chatbots where influence is subtle and embedded in normal use. It explores how users participate in their own deception by treating these interactions as helpful. This leads to proposals for introducing friction to protect users, such as increasing awareness, offering intervention tools, and enhancing regulations. The authors present these as topics for discussion among scholars of deceptive design.

Core claim

The paper establishes that banal deception offers a lens for understanding deception in generative AI experiences, particularly chatbots, where it is quietly embedded in default settings and conversations. Recognizing users' involvement in this deception opens paths to future work on safeguards like awareness raising, intervention tools, and regulatory improvements.

What carries the argument

Banal deception, the subtle and normalized influence in digital interactions that involves users' participation, used as a lens for generative AI chatbots.

If this is right

  • Introducing friction can safeguard users from deception in generative AI interactions.
  • Empowering users through raising awareness can address their involvement in deception.
  • Providing intervention tools offers practical ways to counter subtle influence.
  • Regulatory or enforcement improvements can tackle embedded deceptions beyond visible patterns.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Extending this lens to other AI systems like image generators or recommendation engines could reveal similar patterns of normalized deception.
  • Connecting banal deception to user trust in AI might help address broader issues of misinformation spread through automated responses.
  • Designing and testing specific awareness interventions could provide evidence on their effectiveness in reducing acceptance of deceptive AI outputs.

Load-bearing premise

That the idea of users' involvement in their deception applies to generative AI chatbots and that friction through awareness and tools will safeguard users without tested evidence.

What would settle it

A controlled experiment demonstrating that users detect and resist deceptive elements in generative AI chatbots equally well with or without added awareness or tools would falsify the need for these specific interventions.

read the original abstract

Current approaches to addressing deceptive design largely focus on visible interface manipulations, commonly referred to as "dark patterns". With the rise of generative AI, deception is becoming more difficult to spot and easier to live with, as it is quietly embedded in default settings, automated suggestions, and conversational interactions rather than discrete interface elements. These subtle, normalised forms of influence, which Simone Natale frames as "banal deception", shape everyday digital use and blur the line between AI-enabled assistance and manipulation. This position paper explores banality as a lens through which to reason through deception in generative AI experiences, especially with chatbots. We explore what Natale describes as users' own involvement in their deception, and argue that this perspective could lead to future work for introducing friction to safeguard users from deception in generative AI interactions, such as empowering users through raising awareness, providing them with intervention tools, and regulatory or enforcement improvements. We present these concepts as points for discussion for the deceptive design scholarly community.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. This position paper proposes 'banal deception' (drawing on Simone Natale) as a conceptual lens for understanding normalized, user-involved deception in generative AI systems, especially chatbots. It contrasts this with traditional dark patterns, asserts that deception is embedded in default settings and conversational interactions, and argues that the lens can inform future safeguards via user awareness-raising, intervention tools, and regulatory improvements. These ideas are framed explicitly as 'points for discussion' rather than validated mechanisms.

Significance. If the interpretive frame holds traction in the community, the paper could usefully expand deceptive-design scholarship in HCI beyond overt interface manipulations to subtle, everyday AI interactions. Its value is in opening a discussion rather than delivering tested predictions or derivations; the absence of empirical support is consistent with its position-paper genre and does not undermine the invitation to consider new research directions.

major comments (1)
  1. [Abstract] Abstract and opening paragraphs: the central move—applying Natale’s account of users’ own involvement in their deception to generative-AI chatbots—is asserted without a concrete mapping or illustrative example of how banal deception manifests differently in LLM-based interactions versus prior interface designs. This mapping is load-bearing for the subsequent claim that the lens will productively guide friction-based interventions.
minor comments (2)
  1. The paper would benefit from a short related-work subsection that situates Natale’s concept against existing HCI literature on dark patterns in conversational agents and AI recommendation systems.
  2. [Discussion of future work] The three proposed future-work directions (awareness, tools, regulation) are listed at a high level; adding one or two sentence-level challenges or pilot ideas for each would make the discussion points more actionable for readers.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for their constructive and positive assessment of our position paper. We appreciate the suggestion to strengthen the abstract and opening paragraphs with a clearer mapping and will incorporate revisions to address this point directly.

read point-by-point responses
  1. Referee: [Abstract] Abstract and opening paragraphs: the central move—applying Natale’s account of users’ own involvement in their deception to generative-AI chatbots—is asserted without a concrete mapping or illustrative example of how banal deception manifests differently in LLM-based interactions versus prior interface designs. This mapping is load-bearing for the subsequent claim that the lens will productively guide friction-based interventions.

    Authors: We agree that an explicit illustrative example would help ground the application of Natale’s framework and clarify its distinctiveness for LLM-based systems. As a position paper focused on opening discussion rather than empirical validation, the manuscript intentionally frames these ideas at a conceptual level. In the revised version, we will expand the abstract and introduction with a concrete mapping: for instance, we will contrast how users in generative AI chatbots actively sustain deception through iterative prompting and continued engagement (e.g., accepting simulated empathy or knowledge despite awareness of its limits), thereby participating in the normalization of the interaction, versus traditional dark patterns that rely on static, one-directional interface manipulations such as hidden fees or disguised buttons. This addition will better illustrate the user-involved banality and directly support the proposed directions for friction, awareness tools, and regulatory approaches. revision: yes

Circularity Check

0 steps flagged

No significant circularity

full rationale

This is a conceptual position paper that applies an external framework (Simone Natale's 'banal deception') to generative AI chatbots and lists discussion points for future work on friction, awareness, and regulation. It contains no equations, derivations, fitted parameters, predictions, or self-referential definitions. The central move is interpretive extension of a cited external concept rather than any reduction of claims to the paper's own inputs or self-citations. All load-bearing elements remain independent of the present text.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a conceptual position paper with no quantitative models, empirical data, or formal derivations, so it introduces no free parameters, axioms, or invented entities.

pith-pipeline@v0.9.0 · 5470 in / 1139 out tokens · 23915 ms · 2026-05-11T01:06:00.793720+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

21 extracted references · 12 canonical work pages · 1 internal anchor

  1. [1]

    Bobby Allyn. 2025. OpenAI and CEO Sam Altman sued by parents who blame ChatGPT for teen’s death. CNN Business. https://edition.cnn.com/2025/08/26/tech/openai-chatgpt-teen- suicide-lawsuit Accessed: 2026-02-07

  2. [2]

    Harry Brignull. 2011. Dark patterns: Deception vs. honesty in UI design.Interaction Design, Usability338 (2011), 2–4

  3. [3]

    dark pattern

    Sean Goedecke. 2025. Sycophancy is the first LLM “dark pattern”. https://www.seangoedecke. com/ai-sycophancy/

  4. [4]

    and Redmiles, Elissa M

    Colin M. Gray, Cristiana Teixeira Santos, Nataliia Bielova, and Thomas Mildner. 2024. An Ontology of Dark Patterns Knowledge: Foundations, Definitions, and a Pathway for Shared Knowledge-Building. InProceedings of the 2024 CHI Conference on Human Factors in Computing Systems(Honolulu, HI, USA)(CHI ’24). Association for Computing Machinery, New York, NY, U...

  5. [5]

    Andrea L. Guzman. 2019. Voices in and of the machine: Source orientation toward mobile virtual assistants.Computers in Human Behavior90 (2019), 343–350. doi:10.1016/j.chb.2018.08.009

  6. [6]

    2026.The App Economy: Making Sense of Platform Power in the Age of AI

    Konrad Kollnig. 2026.The App Economy: Making Sense of Platform Power in the Age of AI. Bristol University Press, Bristol, UK. doi:10.51952/9781529247725

  7. [7]

    Mathur, G

    Arunesh Mathur, Gunes Acar, Michael J. Friedman, Eli Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan. 2019. Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites.Proc. ACM Hum.-Comput. Interact.3, CSCW, Article 81 (Nov. 2019), 32 pages. doi:10.1145/3359183

  8. [8]

    Arunesh Mathur, Mihir Kshirsagar, and Jonathan Mayer. 2021. What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods. InProceedings of the 2021 CHI Conference on Human Factors in Computing Systems(Yokohama, Japan)(CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 360, 18 pages. doi:10....

  9. [9]

    Andreas Matthias. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata.Ethics and information technology6, 3 (2004), 175–183

  10. [10]

    2021.Deceitful media: Artificial intelligence and social life after the Turing test

    Simone Natale. 2021.Deceitful media: Artificial intelligence and social life after the Turing test. Oxford University Press

  11. [11]

    Simone Natale. 2025. Digital media and the banalization of deception.Convergence31, 1 (2025), 402–419. doi:10.1177/13548565241311780

  12. [12]

    arXiv preprint arXiv:2212.09251 , year =

    Ethan Perez, Sam Ringer, Kamil˙e Lukoši¯ut˙e, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, Andy Jones, Anna Chen, Ben Mann, Brian Israel, Bryan Seethor, Cameron McKinnon, Christopher Olah, Da Yan, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Guro Khundadze, Jackson Kern...

  13. [13]

    van der Goot, and Caroline L

    Jochen Peter, Theo Araujo, Carolin Ischen, Sonia Jawaid Shaikh, Margot J. van der Goot, and Caroline L. van Straten. 2024.Human–Machine Communication. Amsterdam University Press, 205–220. http://www.jstor.org/stable/jj.11895525.15

  14. [14]

    Majchrzak, Jens Fromm, and Isabell Wohlgenannt

    Jaziar Radianti, Tim A. Majchrzak, Jens Fromm, and Isabell Wohlgenannt. 2020. A Systematic Review of Immersive Virtual Reality Applications for Higher Education: Design Elements, Lessons Learned, and Research Agenda.Computers & Education147 (2020), 103778. doi:10. 1016/j.compedu.2019.103778

  15. [15]

    Matthew Raine and Maria Raine. 2025. Complaint and Demand for Jury Trial, Raine v. OpenAI, Inc. et al. Superior Court of California, County of San Francisco. https://www.law.berkeley. edu/wp-content/uploads/2025/09/Raine-v-OpenAI.pdf Case No. CGC-25-628528

  16. [16]

    Everett M. Rogers. 2003.Diffusion of Innovations(5 ed.). Free Press, New York

  17. [17]

    Marc Schmitt and Ivan Flechais. 2024. Digital deception: Generative artificial intelligence in social engineering and phishing.Artificial Intelligence Review57, 12 (2024), 324

  18. [18]

    Mrinank Sharma, Meg Tong, Tomasz Korbak, David Duvenaud, Amanda Askell, Samuel R Bowman, Newton Cheng, Esin Durmus, Zac Hatfield-Dodds, Scott R Johnston, et al . 2023. Towards understanding sycophancy in language models.arXiv preprint arXiv:2310.13548 (2023)

  19. [19]

    Jerry Wei, Da Huang, Yifeng Lu, Denny Zhou, and Quoc V. Le. 2024. Simple synthetic data reduces sycophancy in large language models. arXiv:2308.03958 [cs.CL] https://arxiv.org/abs/ 2308.03958

  20. [20]

    Cornelia Wrzus, Marie Ottilie Frenkel, and Benjamin Schöne. 2024. Current opportunities and challenges of immersive virtual reality for psychological research and application.Acta Psychologica249 (2024), 104485. doi:10.1016/j.actpsy.2024.104485

  21. [21]

    Xiao Zhan, Yifan Xu, Noura Abdi, Joe Collenette, and Stefan Sarkadi. 2025. Banal Deception and Human-AI Ecosystems: A Study of People’s Perceptions of LLM-generated Deceptive Behaviour.Journal of Artificial Intelligence Research84 (Oct. 2025). doi:10.1613/jair.1.18724