How Creatives Approach GenAI Image Generation: Tensions Between Structured Guidance, Self-Experimentation, and Creative Autonomy
Pith reviewed 2026-05-12 03:32 UTC · model grok-4.3
The pith
Creatives using GenAI image tools often prefer self-experimentation over structured guidance, believing that guidance can limit their creative autonomy even when it aids understanding.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Creatives commonly turn to self-experimentation or tutorials to explore GenAI image tools, despite frequent confusion over AI terminology. In the probe study, even participants who found structured guidance helpful for understanding AI still preferred self-experimentation, feeling that guidance could limit their creativity. The paper therefore frames the core problem as a tension between providing AI literacy support and preserving creative freedom.
What carries the argument
A research probe that presented structured guidance materials to 17 participants and elicited their reactions; these reactions directly revealed the widespread preference for self-experimentation and the perception that guidance restricts creative autonomy.
If this is right
- GenAI image tool interfaces should default to optional rather than required guidance to accommodate users who favor self-experimentation.
- AI literacy efforts aimed at creatives must include pathways for unstructured exploration if they are to retain user engagement.
- Over-provision of prescriptive guidance risks lowering users' sense of ownership over the resulting images.
- Designers need mechanisms that let creatives toggle guidance on and off based on their current preference for autonomy.
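The toggle mechanism in the last bullet could be sketched as a minimal, hypothetical preference model. The mode names and the `GuidancePreferences` class are illustrative assumptions, not an interface from the paper; the sketch simply shows how a tool might cycle between self-experimentation, on-demand hints, and full structured guidance:

```python
from enum import Enum


class GuidanceMode(Enum):
    OFF = "off"                # pure self-experimentation, no guidance shown
    ON_DEMAND = "on_demand"    # lightweight hints only when the user asks
    STRUCTURED = "structured"  # full tutorial-style walkthrough


class GuidancePreferences:
    """Per-user guidance state for a GenAI image tool (hypothetical sketch)."""

    def __init__(self, mode: GuidanceMode = GuidanceMode.OFF):
        # Default to OFF, matching the reported preference for self-experimentation.
        self.mode = mode

    def toggle(self) -> GuidanceMode:
        """Cycle OFF -> ON_DEMAND -> STRUCTURED -> OFF."""
        order = list(GuidanceMode)  # Enum iterates in definition order
        self.mode = order[(order.index(self.mode) + 1) % len(order)]
        return self.mode

    def should_show_hint(self, user_requested: bool) -> bool:
        """Decide whether to surface guidance right now."""
        if self.mode is GuidanceMode.OFF:
            return False
        if self.mode is GuidanceMode.ON_DEMAND:
            return user_requested
        return True  # STRUCTURED: always surface guidance
```

Defaulting to `OFF` and making every other mode an explicit opt-in is one way to operationalize "optional rather than required guidance" without removing the literacy support entirely.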
Where Pith is reading between the lines
- The same tension between guidance and autonomy may appear in other generative tools such as text or audio generators used by creatives.
- Longer-term deployment studies could test whether the preference for self-experimentation actually produces measurably different creative outputs or satisfaction levels.
- The findings suggest that hybrid interfaces offering lightweight, on-demand hints rather than full tutorials might better serve this population.
Load-bearing premise
That the small samples from interviews, the probe study, and self-reported survey responses sufficiently represent the broader population of artists and hobbyists using GenAI image tools.
What would settle it
A larger study in which most creatives report preferring structured guidance, report higher creative satisfaction when using it, and show no perceived loss of autonomy would falsify the central claim.
Original abstract
As generative AI tools increasingly influence creative practice, they raise longstanding HCI questions about how creatives learn complex software and how they can be better supported. We conducted an interview study with artists and hobbyists (n=8) and a follow-up survey (n=159) to understand how this population approaches and seeks guidance for GenAI image tools. We found that creatives commonly use either self-experimentation or tutorials to explore GenAI tools, yet many struggle with confusing AI terminology. To gain further insight into creatives' learning experiences, we developed a research probe to elicit creatives' perceptions of structured guidance. Our user study with 17 creatives revealed that, even when creatives described the guidance as helpful for understanding AI, many still preferred self-experimentation, feeling that guidance could limit their creativity. Our findings highlight a central tension in supporting AI literacy for creatives: balancing guidance and promoting literacy while preserving creative freedom.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript reports on a mixed-methods study exploring how artists and hobbyists approach generative AI image generation tools. Through interviews (n=8), a survey (n=159), and a probe study (n=17), the authors identify that creatives often rely on self-experimentation or tutorials, face challenges with AI terminology, and experience a tension where structured guidance aids understanding but may constrain creative autonomy, leading many to prefer self-experimentation.
Significance. This study contributes timely insights to HCI on supporting AI literacy in creative practices without undermining autonomy. The mixed-methods design, including a custom research probe, is a strength that allows for both broad patterns from the survey and in-depth perceptions from the probe. For an exploratory qualitative study, the modest sample sizes are appropriate, and the findings are presented descriptively, which addresses concerns about generalizability.
minor comments (2)
- [Abstract] The abstract states that 'many still preferred self-experimentation' from the probe study; specifying the number or proportion of participants who expressed this preference would enhance the precision of the claim.
- [Discussion or Limitations] A more explicit discussion of potential selection bias in recruiting participants who are already using GenAI tools would strengthen the presentation of the findings.
Simulated Author's Rebuttal
We thank the referee for their positive and constructive review of our manuscript. We appreciate the recognition of the study's timeliness, the value of the mixed-methods design, and the appropriateness of the sample sizes for an exploratory study. No specific major comments were provided in the report.
Circularity Check
No significant circularity; purely empirical qualitative study
full rationale
The paper is an exploratory HCI study based on original data collection: interviews (n=8), survey (n=159), and a probe study (n=17). Central claims describe participant-reported patterns and tensions (e.g., preference for self-experimentation even when guidance is viewed as helpful). No mathematical derivations, equations, fitted parameters, predictions, or self-citation chains appear in the derivation chain. All findings are grounded in the collected empirical data rather than reducing to prior results or inputs by construction. This is the expected outcome for a self-contained qualitative paper.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption Self-reported interview and survey responses from creatives accurately reflect their actual learning behaviors and preferences with GenAI tools.
- domain assumption A custom research probe can reliably elicit perceptions of structured guidance without introducing its own bias.