Developing an AI Concept Envisioning Toolkit to Support Reflective Juxtaposition of Values and Harms
Pith reviewed 2026-05-09 19:30 UTC · model grok-4.3
The pith
A toolkit of capability libraries, value-harm cards, and tension maps helps designers juxtapose values against potential harms during early AI concept envisioning.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The AI Concept Envisioning Toolkit—built from an AI Capability Library, 24 Value–Harm Cards, and a Value–Tension Map—supports reasoning by directly juxtaposing values and harms within chosen AI technical capabilities, and this juxtaposition encourages value reflection, helps anticipate potential harms, and renders ethical considerations more transparent during early-stage design.
What carries the argument
The AI Concept Envisioning Toolkit, which supplies an AI Capability Library, 24 Value–Harm Cards, and a Value–Tension Map so designers can juxtapose values against harms inside specific technical capabilities.
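The juxtaposition logic the toolkit embodies can be sketched as a small data structure. A minimal illustration in Python (the capability and card contents below are invented for illustration; the paper's actual library is not reproduced here):

```python
from dataclasses import dataclass, field

@dataclass
class ValueHarmCard:
    """One card pairing a value with a potential harm (the toolkit has 24)."""
    value: str   # e.g. "privacy"
    harm: str    # e.g. "surveillance creep"

@dataclass
class Capability:
    """An entry in the AI Capability Library, with its associated cards."""
    name: str
    cards: list[ValueHarmCard] = field(default_factory=list)

def tension_map(cap: Capability) -> list[tuple[str, str]]:
    """Juxtapose each value against its paired harm within one capability."""
    return [(card.value, card.harm) for card in cap.cards]

# Hypothetical example: a capability with two value-harm pairings.
cap = Capability("face recognition", [
    ValueHarmCard("security", "misidentification"),
    ValueHarmCard("privacy", "surveillance creep"),
])
print(tension_map(cap))
```

The point of the sketch is only that values and harms live inside a chosen capability, rather than in a separate ethics checklist.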
If this is right
- Designers can surface value tensions earlier and navigate them explicitly rather than discovering them after implementation.
- Ethical considerations become part of the same workflow as technical capability selection instead of a separate review step.
- Productive friction is introduced at the concept stage, slowing down premature convergence on a single framing.
- The same materials can be reused across multiple projects because they are organized around reusable capabilities and card pairs.
Where Pith is reading between the lines
- If the toolkit is used at the very first sketch stage, downstream decisions about data, models, and interfaces may shift away from high-risk value conflicts.
- The approach of pairing value cards directly with harm cards could be extended to other domains such as robotics or data platforms where capability libraries already exist.
- Longer-term studies could track whether repeated use of the toolkit changes the kinds of projects designers choose to pursue.
- Teams without access to the physical cards might still gain similar benefits by adapting the juxtaposition logic into digital whiteboards or prompts.
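The last point translates directly into text: the card-pairing logic can be rendered as a reflective prompt for a digital whiteboard or an LLM. A hypothetical sketch (the template wording is invented, not taken from the toolkit):

```python
def juxtaposition_prompt(capability: str, value: str, harm: str) -> str:
    """Render one value-harm pairing as a reflective prompt for a digital board."""
    return (f"For an AI concept using {capability}: how does it serve the value "
            f"of {value}, and where could it cause {harm}? Note one tension.")

# Hypothetical card contents for illustration.
print(juxtaposition_prompt("speech recognition", "accessibility", "eavesdropping"))
```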
Load-bearing premise
That positive reactions from a small set of surveyed and interviewed designers will translate into actual changes in how designers frame and decide on AI concepts.
What would settle it
A controlled comparison in which one group of designers uses the toolkit and another does not, then measuring whether the toolkit group identifies and mitigates a higher number of value-related harms in their final concepts.
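The proposed comparison could be scored with a simple count-based analysis. A sketch with hypothetical data (a real study would need independently coded harm counts and an appropriate significance test):

```python
from statistics import mean

# Hypothetical counts of value-related harms identified and mitigated per
# final concept, as coded by independent raters blind to condition.
toolkit_group = [5, 7, 6, 8, 6]
control_group = [2, 3, 4, 2, 3]

effect = mean(toolkit_group) - mean(control_group)
print(f"mean harms addressed: toolkit={mean(toolkit_group):.1f}, "
      f"control={mean(control_group):.1f}, difference={effect:.1f}")
```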
Original abstract
Early-stage concept envisioning is a critical juncture in AI design, shaping how designers frame problems and the decisions that follow. Yet values and potential harms are often too abstract or addressed too late to meaningfully shape design. Using a Research-through-Design (RtD) approach, we developed the AI Concept Envisioning Toolkit, comprising an AI Capability Library, 24 Value–Harm Cards, and a Value–Tension Map, to support reasoning by juxtaposing values and harms within AI technical capabilities. Through a survey with 30 designers and in-depth interviews with 12 designers, we find that the toolkit is clear and perceived as valuable, and that it encourages value reflection, helps anticipate potential harms, and makes ethical considerations more transparent in early-stage design. We reflect on our design process and discuss design approaches for tools that promote reflection on values and potential harms, surface and navigate value tensions, and introduce productive friction throughout design workflows.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript uses a Research-through-Design approach to develop the AI Concept Envisioning Toolkit (AI Capability Library, 24 Value-Harm Cards, and Value-Tension Map) for supporting juxtaposition of values and harms during early-stage AI concept envisioning. Evaluation consists of a survey with 30 designers and interviews with 12 designers, from which the authors conclude that the toolkit is clear and perceived as valuable, encourages value reflection, helps anticipate potential harms, and increases transparency of ethical considerations.
Significance. If the results hold, the work provides a concrete, practitioner-oriented contribution to ethical AI design by demonstrating how structured cards and maps can introduce productive friction and value reflection into early workflows. The RtD process and direct feedback from designers supply actionable design insights for tools that surface value tensions, a recognized strength in the HCI literature on reflective design methods.
major comments (2)
- [Evaluation] Evaluation section: The central empirical claims rest on self-reported perceptions from the N=30 survey and N=12 interviews. No pre/post design artifacts, control condition, independent coding of value/harm incorporation, or behavioral outcome measures are described, so the statements that the toolkit 'encourages value reflection' and 'helps anticipate potential harms' remain untested beyond subjective experience.
- [Methodology] Methodology description: Participant recruitment, selection criteria, and potential selection biases are not detailed, nor are the exact survey items or interview protocol. This makes it difficult to evaluate how representative the positive perceptions are of broader design practice.
minor comments (2)
- [Abstract] Abstract: The number of Value-Harm Cards (24) and the three toolkit components are mentioned but could be listed more explicitly to give readers an immediate overview.
- Figure captions and toolkit illustrations should include step-by-step usage examples to clarify how the Value-Tension Map is populated from the cards.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback, which identifies key areas for strengthening the presentation of our evaluation and methodology. We address each major comment below, explaining our position and the revisions we will undertake.
Point-by-point responses
Referee: [Evaluation] Evaluation section: The central empirical claims rest on self-reported perceptions from the N=30 survey and N=12 interviews. No pre/post design artifacts, control condition, independent coding of value/harm incorporation, or behavioral outcome measures are described, so the statements that the toolkit 'encourages value reflection' and 'helps anticipate potential harms' remain untested beyond subjective experience.
Authors: We agree that our evaluation is based on self-reported perceptions collected via survey and interviews, which is standard for Research-through-Design studies in HCI that seek to explore how practitioners experience a novel tool. The data do not include objective measures such as pre/post artifacts, control conditions, or behavioral coding, and we do not claim to have demonstrated causal effects. In the revised manuscript we will qualify all claims to reflect this (e.g., replacing 'encourages value reflection' with 'participants reported that the toolkit encouraged value reflection') and add an explicit limitations paragraph discussing the reliance on subjective data and the absence of controlled or behavioral measures. These changes will be made without altering the core contribution of the RtD process and designer feedback. revision: partial
Referee: [Methodology] Methodology description: Participant recruitment, selection criteria, and potential selection biases are not detailed, nor are the exact survey items or interview protocol. This makes it difficult to evaluate how representative the positive perceptions are of broader design practice.
Authors: We accept this critique and will substantially expand the Methods section in the revision. The updated text will describe recruitment channels (professional design networks, online communities, and academic mailing lists), inclusion criteria (designers with prior experience in AI or HCI projects), and a brief discussion of self-selection bias. We will also include the complete survey instrument and the semi-structured interview protocol, either in the main text or as supplementary material, to improve transparency and allow readers to assess representativeness. revision: yes
Circularity Check
Empirical evaluation independent of toolkit construction
Full rationale
The paper's central claims derive from a separate empirical stage (survey N=30 + interviews N=12) that collects participant perceptions of the already-built toolkit; these data are not obtained by fitting parameters to the toolkit's own definition, by self-referential equations, or by load-bearing self-citations that close the loop. The RtD development process and the subsequent evaluation are distinct steps, with the reported outcomes (clarity, perceived value, encouragement of reflection) resting on external participant responses rather than being entailed by the toolkit's internal structure or prior author work. No step reduces the findings to the inputs by construction.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Research-through-Design is a valid approach for creating and evaluating design tools in HCI.
invented entities (2)
- AI Concept Envisioning Toolkit: no independent evidence
- Value-Harm Cards: no independent evidence