How Designers Envision Value-Oriented AI Design Concepts with Generative AI
Pith reviewed 2026-05-09 19:33 UTC · model grok-4.3
The pith
Designers using generative AI for concept creation engage in reciprocal reflection that surfaces value tensions and prioritizes harm detection.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Designers engage in reciprocal reflection-in-action with AI during concept envisioning; this process surfaces multi-level value tensions across tool, designer, and concept; designers demonstrate greater attunement to harm recognition as a primary design signal than to articulating positive value fulfillment; and designers exercise anticipatory judgment through meta-design reasoning about how tool assumptions risk propagating into designed concepts and future use contexts.
What carries the argument
Reciprocal reflection-in-action with AI, the iterative dialogue between designer and tool output that surfaces value tensions and supports meta-design reasoning about assumption propagation.
If this is right
- AI-mediated design tools should be redesigned to make value tensions visible during use.
- Design education and practice should prioritize harm-centered reasoning alongside positive value articulation.
- Design work should be positioned as foundational input for AI system development.
- Tool assumptions must be examined for how they may embed into concepts and future contexts.
Where Pith is reading between the lines
- Current generative AI tools may embed assumptions that go unexamined unless designers actively surface them through reflection.
- The same reciprocal process could appear in other creative professions that adopt generative AI for idea generation.
- Training programs might add explicit practice in meta-design reasoning to reduce unintended harms from AI-supported designs.
Load-bearing premise
The specific concept-envisioning activity and interviews with 18 designers capture how designers generally navigate values when using generative AI in real-world settings.
What would settle it
A field observation of practicing designers using generative AI in their normal projects, without any structured envisioning task, that shows no evidence of reciprocal reflection, multi-level value tension awareness, or meta-design reasoning about harm propagation.
Original abstract
As AI integrates into design practice, designers increasingly use generative AI tools to envision AI-enabled solutions, positioning AI as both design tool and design material. This dual role creates recursive value tensions distinct from traditional design work. We engaged 18 designers in a concept envisioning activity and interviews to understand how they navigate values and recognize potential harms in this context. Our analysis reveals that (i) designers engage in reciprocal reflection-in-action with AI; (ii) this process surfaces multi-level value tensions across tool, designer, and concept; (iii) designers demonstrate greater attunement to harm recognition as a primary design signal than to articulating positive value fulfillment; and (iv) designers exercise anticipatory judgment through meta-design reasoning about how tool assumptions risk propagating into designed concepts and future use contexts. We extend Schön's reflection-in-action framework and discuss implications for redesigning AI-mediated design tools, supporting harm-centered reasoning, and positioning design as foundational to AI development.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript reports a qualitative study in which 18 designers completed a generative-AI-supported concept-envisioning activity followed by semi-structured interviews. The authors claim that designers engage in reciprocal reflection-in-action with the AI tool, that this process surfaces value tensions at the levels of tool, designer, and concept, that participants attend more readily to potential harms than to positive value fulfillment, and that they perform anticipatory meta-design reasoning about how tool assumptions may propagate into future use contexts. The work extends Schön’s reflection-in-action framework and offers implications for the redesign of AI-mediated design tools.
Significance. If the findings are robust, the paper contributes to HCI by adapting an established design-theory lens to the novel setting of generative AI as both tool and material. The hands-on activity provides a methodological strength by eliciting reflection-in-action in real time rather than relying solely on retrospective accounts. The emphasis on harm recognition and multi-level tensions supplies concrete guidance for tool builders and for value-sensitive design curricula. These elements position the work as a useful bridge between design research and AI ethics.
major comments (3)
- [Methods] Methods section (analysis subsection): the thematic-analysis procedure is described only at a high level; no details are given on the coding scheme, inter-rater reliability, resolution of disagreements, or member-checking. Because the four numbered findings rest directly on these interpretations, greater transparency is required to evaluate the reliability of the claims.
- [Participants and Study Design] Participants and Study Design section: the sample of 18 designers recruited via professional networks and the use of a single contrived concept-envisioning task leave open the possibility that observed patterns are artifacts of the experimental setup rather than representative of everyday value navigation with generative AI. This directly affects the load-bearing claim that the results extend Schön’s framework and justify redesign recommendations for real-world tools.
- [Findings] Findings section (claim iii): the assertion that designers exhibit “greater attunement to harm recognition … than to articulating positive value fulfillment” is supported only by selected quotes. Without systematic counts, a comparison table, or explicit coding criteria, the comparative strength of this claim cannot be assessed and remains vulnerable to selection bias.
minor comments (3)
- [Abstract] Abstract: a one-sentence description of the analysis approach would improve standalone readability and pre-empt concerns about analytic rigor.
- [Discussion] Discussion: each implication for tool redesign should be explicitly cross-referenced to the specific finding and participant excerpt that supports it.
- [Related Work] Related Work: ensure that recent papers on value-sensitive design for AI (e.g., work on participatory AI and harm-centered design) are cited so the positioning of the Schön extension is clear.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback and positive assessment of the manuscript's potential contributions to HCI. We address each major comment point by point below, with clear indications of planned revisions.
Point-by-point responses
Referee: [Methods] Methods section (analysis subsection): the thematic-analysis procedure is described only at a high level; no details are given on the coding scheme, inter-rater reliability, resolution of disagreements, or member-checking. Because the four numbered findings rest directly on these interpretations, greater transparency is required to evaluate the reliability of the claims.
Authors: We agree that greater transparency is needed in the analysis subsection. The current description is high-level, which limits evaluation of interpretive reliability. In the revised manuscript we will expand this section to specify the inductive thematic analysis approach, the iterative development of the coding scheme through repeated reading of transcripts by the lead author, collaborative review of codes with the second author, resolution of disagreements via discussion to consensus, and the member-checking process (sharing theme summaries with participants for feedback). We will also note that formal inter-rater reliability was not computed, as the analysis follows interpretive qualitative conventions common in HCI rather than positivist standards. revision: yes
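For readers weighing this response: the formal inter-rater reliability the authors decline to compute is usually reported as Cohen's kappa for a two-coder scheme. A minimal sketch of what such a check involves, using hypothetical codes rather than any data from the study:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    who labeled the same excerpts."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed proportion of excerpts on which the coders agree
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected by chance, from each coder's marginal label frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[code] * freq_b[code] for code in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to ten transcript excerpts
coder1 = ["harm", "harm", "value", "harm", "value", "harm", "harm", "value", "harm", "harm"]
coder2 = ["harm", "value", "value", "harm", "value", "harm", "harm", "value", "harm", "harm"]
print(round(cohens_kappa(coder1, coder2), 3))
```

Interpretive qualitative work often forgoes this statistic deliberately, as the authors note; the sketch only shows that reporting it would be cheap if the coding data were retained.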
Referee: [Participants and Study Design] Participants and Study Design section: the sample of 18 designers recruited via professional networks and the use of a single contrived concept-envisioning task leave open the possibility that observed patterns are artifacts of the experimental setup rather than representative of everyday value navigation with generative AI. This directly affects the load-bearing claim that the results extend Schön’s framework and justify redesign recommendations for real-world tools.
Authors: We acknowledge that the modest sample size and single-task design introduce potential limitations on generalizability. As an exploratory qualitative study, our goal was depth of insight into reflection-in-action rather than statistical representativeness. In revision we will augment the Participants and Study Design section with additional recruitment details, justification of the task (grounded in pilot testing with designers), and a dedicated limitations discussion addressing possible setup artifacts. We will also moderate language around the extension of Schön’s framework and tool redesign implications to reflect the provisional, context-specific nature of the findings while retaining the core contribution of the observed patterns. revision: partial
Referee: [Findings] Findings section (claim iii): the assertion that designers exhibit “greater attunement to harm recognition … than to articulating positive value fulfillment” is supported only by selected quotes. Without systematic counts, a comparison table, or explicit coding criteria, the comparative strength of this claim cannot be assessed and remains vulnerable to selection bias.
Authors: We agree that the comparative claim would be more robust with systematic evidence beyond illustrative quotes. In the revised Findings section we will add explicit coding criteria distinguishing harm recognition from positive value articulation, together with a summary (table or textual) of code prevalence across participants and transcripts. This addition will allow readers to evaluate the strength of the attunement claim and mitigate selection-bias concerns while preserving the qualitative richness of the supporting excerpts. revision: yes
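The prevalence summary promised here can be derived mechanically once excerpts are coded. A minimal sketch, assuming a hypothetical list of (participant, code) pairs rather than the study's actual data:

```python
from collections import Counter

# Hypothetical coded excerpts: (participant_id, code) pairs, illustrative only
coded_excerpts = [
    ("P01", "harm_recognition"), ("P01", "harm_recognition"), ("P01", "positive_value"),
    ("P02", "harm_recognition"), ("P02", "positive_value"),
    ("P03", "harm_recognition"), ("P03", "harm_recognition"),
]

# Overall prevalence of each code across all excerpts
overall = Counter(code for _, code in coded_excerpts)

# Number of distinct participants exhibiting each code at least once
participants_per_code = {
    code: len({pid for pid, c in coded_excerpts if c == code})
    for code in overall
}

print(overall)               # Counter({'harm_recognition': 5, 'positive_value': 2})
print(participants_per_code) # {'harm_recognition': 3, 'positive_value': 2}
```

Reporting both tallies (excerpt counts and participant spread) would let readers see whether the harm-attunement claim reflects a few prolific participants or the sample as a whole.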
Circularity Check
No significant circularity in qualitative empirical study
Full rationale
This paper is a qualitative HCI study that derives its four main findings directly from thematic analysis of interviews and a concept-envisioning activity with 18 designers. No mathematical derivations, fitted parameters, equations, or self-referential definitions appear in the provided text or abstract. Claims rest on observed participant data rather than reducing to inputs by construction, self-citation chains, or renamed known results. The extension of Schön's framework is presented as an interpretive contribution grounded in the data, with no load-bearing self-citations or ansatz smuggling. Generalizability concerns exist but are orthogonal to circularity; the derivation chain is self-contained as empirical observation.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: Participant reflections during interviews accurately reflect their internal value navigation processes during the AI design activity.
- Domain assumption: The sample of 18 designers provides sufficient insight into broader designer practices with generative AI.