pith. machine review for the scientific record.

arxiv: 2605.10898 · v1 · submitted 2026-05-11 · 💻 cs.HC

Recognition: no theorem link

How Creatives Approach GenAI Image Generation: Tensions Between Structured Guidance, Self-Experimentation, and Creative Autonomy

Authors on Pith · no claims yet

Pith reviewed 2026-05-12 03:32 UTC · model grok-4.3

classification 💻 cs.HC
keywords generative AI · image generation · creatives · self-experimentation · creative autonomy · AI literacy · user studies · HCI

The pith

Creatives using GenAI image tools often prefer self-experimentation over structured guidance because they believe the latter can limit creative autonomy even when it aids understanding.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper investigates how artists and hobbyists approach learning generative AI tools for image generation. It finds that self-experimentation and tutorials are the main strategies, yet many creatives resist structured guidance on the grounds that it may constrain their personal creative process. The authors reached these conclusions through interviews with 8 creatives, a survey of 159 respondents, and a probe study with 17 participants who tried sample guidance materials. A sympathetic reader cares because the work surfaces a practical tension in designing support for new creative technologies without eroding user agency.

Core claim

Creatives commonly turn to self-experimentation or tutorials to explore GenAI image tools, despite frequent confusion over AI terminology, and in the probe study even those who found structured guidance helpful for understanding AI still preferred self-experimentation because they felt guidance could limit their creativity. The paper therefore frames the core problem as a tension between providing AI literacy support and preserving creative freedom.

What carries the argument

The research probe that presented structured guidance materials to participants and elicited their reactions, which directly revealed the widespread preference for self-experimentation and the perception that guidance restricts creative autonomy.

If this is right

  • GenAI image tool interfaces should default to optional rather than required guidance to accommodate users who favor self-experimentation.
  • AI literacy efforts aimed at creatives must include pathways for unstructured exploration if they are to retain user engagement.
  • Over-provision of prescriptive guidance risks lowering users' sense of ownership over the resulting images.
  • Designers need mechanisms that let creatives toggle guidance on and off based on their current preference for autonomy.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same tension between guidance and autonomy may appear in other generative tools such as text or audio generators used by creatives.
  • Longer-term deployment studies could test whether the preference for self-experimentation actually produces measurably different creative outputs or satisfaction levels.
  • The findings suggest that hybrid interfaces offering lightweight, on-demand hints rather than full tutorials might better serve this population.
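The hybrid-interface idea above can be made concrete as a small guidance controller with three modes: guidance off (pure self-experimentation), on-demand hints, and full proactive tutorials. This is a minimal hypothetical sketch, not anything from the paper; all names (`GuidanceMode`, `GuidanceController`, the hint texts) are invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum


class GuidanceMode(Enum):
    OFF = "off"              # pure self-experimentation, no hints at all
    ON_DEMAND = "on_demand"  # lightweight hints, shown only when the user asks
    FULL = "full"            # structured guidance surfaced proactively


@dataclass
class GuidanceController:
    """Hypothetical toggle for structured guidance in a GenAI image UI."""
    mode: GuidanceMode = GuidanceMode.ON_DEMAND
    # Illustrative hint texts keyed by interface control name.
    hints: dict = field(default_factory=lambda: {
        "negative_prompt": "Terms listed here are excluded from the image.",
        "guidance_scale": "Higher values follow the prompt more literally.",
    })

    def hint_for(self, control: str, user_asked: bool = False):
        """Return a hint string, or None if guidance is suppressed in this mode."""
        if self.mode is GuidanceMode.OFF:
            return None
        if self.mode is GuidanceMode.ON_DEMAND and not user_asked:
            return None
        return self.hints.get(control)
```

The design choice this encodes is the one the findings point toward: the default is `ON_DEMAND`, so hints never interrupt exploration unless requested, and users who feel guidance limits their creativity can switch it fully off.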

Load-bearing premise

That the small samples from interviews, the probe study, and self-reported survey responses sufficiently represent the broader population of artists and hobbyists using GenAI image tools.

What would settle it

A larger study in which most creatives report preferring structured guidance, report higher creative satisfaction when using it, and show no perceived loss of autonomy would falsify the central claim.

Figures

Figures reproduced from arXiv: 2605.10898 by Haidan Liu, Isabelle Kwan, Jeffrey Loverock, Nicholas Vincent, Parmit K Chilana, Taiga Okuma.

Figure 1: Overview of our staged inquiry across three studies. The formative interviews and survey studies (left) identified …
Figure 2: Leonardo AI interface used in the interview study. Participants were given a sketch image provided by the researchers …
Figure 3: (a) Responses to the survey question (single-select): What is your most preferred way to learn about GenAI image …
Figure 4: Helpfulness ratings of four GenAI tutorial types among respondents who preferred tutorials, shown all tutorial …
Figure 5: Early design exploration of our research probe, where users could click on red dots to reveal object attributes in a …
Figure 6: An example of the Text-to-Image tutorial. The interactive design allows users to click through the explanation …
Figure 7: Openart AI interface: (a) Interface where participants interacted with the text-to-image tasks; (b) Interface for sketch …
Figure 8: User study session procedure. Participants worked through two text-to-image tasks and two image-to-image tasks, …
Figure 9: Images used in the study tasks. (a) and (b) for text-to-image, (c) and (d) for style transfer, and (e) for sketch-to-image …
Original abstract

As generative AI tools increasingly influence creative practice, they raise longstanding HCI questions about how creatives learn complex software and how they can be better supported. We conducted an interview study with artists and hobbyists (n=8) and a follow-up survey (n=159) to understand how this population approaches and seeks guidance for GenAI image tools. We found that creatives commonly use either self-experimentation or tutorials to explore GenAI tools, yet many struggle with confusing AI terminology. To gain further insight into creatives' learning experiences, we developed a research probe to elicit creatives' perceptions of structured guidance. Our user study with 17 creatives revealed that, even when creatives described the guidance as helpful for understanding AI, many still preferred self-experimentation, feeling that guidance could limit their creativity. Our findings highlight a central tension in supporting AI literacy for creatives: balancing guidance and promoting literacy while preserving creative freedom.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 2 minor

Summary. The manuscript reports on a mixed-methods study exploring how artists and hobbyists approach generative AI image generation tools. Through interviews (n=8), a survey (n=159), and a probe study (n=17), the authors identify that creatives often rely on self-experimentation or tutorials, face challenges with AI terminology, and experience a tension where structured guidance aids understanding but may constrain creative autonomy, leading many to prefer self-experimentation.

Significance. This study contributes timely insights to HCI on supporting AI literacy in creative practices without undermining autonomy. The mixed-methods design, including a custom research probe, is a strength that allows for both broad patterns from the survey and in-depth perceptions from the probe. As an exploratory qualitative study, the modest sample sizes are appropriate, and the findings are presented descriptively rather than as generalizable claims.

minor comments (2)
  1. [Abstract] The abstract states that 'many still preferred self-experimentation' from the probe study; specifying the number or proportion of participants who expressed this preference would enhance the precision of the claim.
  2. [Discussion or Limitations] A more explicit discussion of potential selection bias in recruiting participants who are already using GenAI tools would strengthen the presentation of the findings.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive and constructive review of our manuscript. We appreciate the recognition of the study's timeliness, the value of the mixed-methods design, and the appropriateness of the sample sizes for an exploratory study. No specific major comments were provided in the report.

Circularity Check

0 steps flagged

No significant circularity; purely empirical qualitative study

full rationale

The paper is an exploratory HCI study based on original data collection: interviews (n=8), survey (n=159), and a probe study (n=17). Central claims describe participant-reported patterns and tensions (e.g., preference for self-experimentation even when guidance is viewed as helpful). No mathematical derivations, equations, fitted parameters, predictions, or self-citation chains appear in the derivation chain. All findings are grounded in the collected empirical data rather than reducing to prior results or inputs by construction. This is the expected outcome for a self-contained qualitative paper.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

This is an empirical qualitative HCI study; it relies on standard assumptions about the validity of interview and survey data for capturing user perceptions rather than introducing new parameters or entities.

axioms (2)
  • domain assumption Self-reported interview and survey responses from creatives accurately reflect their actual learning behaviors and preferences with GenAI tools.
    Core premise of the interview and survey components; typical in HCI but subject to recall and social-desirability biases.
  • domain assumption A custom research probe can reliably elicit perceptions of structured guidance without introducing its own bias.
    Invoked in the probe-based user study section of the abstract.

pith-pipeline@v0.9.0 · 5484 in / 1357 out tokens · 45512 ms · 2026-05-12T03:32:08.454902+00:00 · methodology

