AI Disclosure with DAISY
Pith reviewed 2026-05-13 19:01 UTC · model grok-4.3
The pith
A form-based tool helps authors create more complete disclosures of AI use in research.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper introduces DAISY as a form-based tool for generating AI disclosure statements, developed from literature-derived requirements and co-design sessions. In a study with 31 authors, DAISY-supported disclosures satisfied more completeness criteria and offered clearer breakdowns of AI use across research and writing tasks than disclosures created without support. Authors using the tool maintained similar comfort levels with their statements despite potential concerns about perceptions of AI use.
What carries the argument
DAISY, a structured form that prompts for specific details on AI tool use in different phases of research and writing.
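The paper's actual form fields are not enumerated here, so the following is only a hypothetical sketch of what a structured disclosure form of this kind might capture. The field names (`phase`, `tool`, `purpose`, `human_oversight`) are illustrative assumptions, not DAISY's real schema:

```python
from dataclasses import dataclass, field

# Hypothetical field set: the paper's actual form items are not
# reproduced here; this only illustrates the structured-form idea.
@dataclass
class AIUseEntry:
    phase: str             # e.g. "literature review", "data analysis", "writing"
    tool: str              # e.g. "ChatGPT", "Copilot"
    purpose: str           # what the tool was used for
    human_oversight: str   # how outputs were checked or edited

@dataclass
class DisclosureForm:
    entries: list[AIUseEntry] = field(default_factory=list)

    def render(self) -> str:
        """Assemble a disclosure statement, one sentence per entry."""
        return " ".join(
            f"During {e.phase}, {e.tool} was used for {e.purpose}; "
            f"{e.human_oversight}."
            for e in self.entries
        )

form = DisclosureForm()
form.entries.append(AIUseEntry(
    "writing", "ChatGPT", "copy-editing",
    "all suggested edits were reviewed and approved by the authors"))
statement = form.render()
```

The point of the sketch is the mechanism the paper argues for: prompting per-phase, per-tool detail yields a more complete statement than free-text recall.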
If this is right
- Disclosures become more standardized and informative for readers.
- Authors face fewer cognitive barriers when reporting AI contributions.
- Similar tools could be adopted in academic publishing platforms.
- Transparency in AI use becomes easier to achieve without added discomfort.
Where Pith is reading between the lines
- Structured disclosure forms might improve reporting in other areas like data sharing or methodology details.
- Integration of DAISY-like prompts into writing assistants could automate part of the process.
- Future work could examine long-term effects on how readers interpret AI-assisted research.
Load-bearing premise
The study assumes that the literature-derived completeness criteria truly measure meaningful transparency and that findings from a small group of 31 authors extend to other researchers and real-world submissions.
What would settle it
Observing no significant difference in disclosure completeness between DAISY users and non-users in a larger, more diverse sample of research submissions.
Original abstract
The use of AI tools in research is becoming routine, alongside growing consensus that such use should be transparently disclosed. However, AI disclosure statements remain rare and inconsistent, with policies offering limited guidance and authors facing social, cognitive, and emotional barriers when reporting AI use. To explore how structured disclosure shapes what authors report and how they experience disclosure, we present DAISY (Disclosure of AI-uSe in Your Research), a form-based tool for generating AI disclosure statements. DAISY was developed from literature-derived requirements and co-design (N=11), and deployed in a user study with authors (N=31). DAISY-supported disclosures met more completeness criteria, offering clearer breakdowns of AI use across research and writing than unsupported disclosures. Surprisingly, despite concerns about how transparently disclosed AI use might be perceived, the use of DAISY did not reduce author comfort with the disclosure statements. We discuss design implications and a research agenda for AI disclosure as a sociotechnical practice.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces DAISY, a form-based tool for generating structured AI disclosure statements in research. Developed from literature-derived requirements and a co-design phase (N=11), it was evaluated in a user study (N=31) comparing DAISY-supported disclosures against unsupported ones. The central finding is that DAISY disclosures met more completeness criteria, offered clearer breakdowns of AI use across research and writing phases, and did not reduce author comfort with disclosure.
Significance. If the empirical results hold under fuller reporting, the work provides a concrete design contribution to HCI on supporting transparent AI use in academic workflows. It addresses documented barriers to disclosure and supplies initial evidence that structured tooling can improve completeness without the expected comfort penalty, while outlining a sociotechnical research agenda. The co-design and comparative evaluation approach is a strength for grounding the tool in user needs.
Major comments (2)
- [User Study] User Study section: The statistical reporting for the completeness comparison (e.g., exact tests, effect sizes, confidence intervals, and handling of the N=31 sample) is incomplete. This is load-bearing because the central claim of improved completeness rests on this difference; without these details it is impossible to rule out small-sample artifacts or post-hoc selection of criteria.
- [Methods / Completeness Criteria] Completeness Criteria derivation (likely §2 or Methods): The criteria are presented as literature-derived but receive no validation showing they predict downstream outcomes such as reviewer comprehension or perceived legitimacy. This assumption underpins the interpretation that higher scores equal meaningfully better transparency.
Minor comments (2)
- [Abstract] Abstract: The word 'surprisingly', used when describing unchanged comfort levels, introduces an interpretive tone; a neutral statement of the result would be preferable.
- [Discussion] Discussion: The proposed research agenda would benefit from one or two concrete, falsifiable follow-up metrics (e.g., reviewer comprehension scores or submission-time disclosure rates) rather than remaining at a high level.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback on our manuscript. We address each major comment below and describe the revisions we will make to strengthen the paper.
Point-by-point responses
Referee: [User Study] User Study section: The statistical reporting for the completeness comparison (e.g., exact tests, effect sizes, confidence intervals, and handling of the N=31 sample) is incomplete. This is load-bearing because the central claim of improved completeness rests on this difference; without these details it is impossible to rule out small-sample artifacts or post-hoc selection of criteria.
Authors: We agree that the statistical reporting requires additional detail to support the central claim. In the revised manuscript we will expand the User Study section to report the exact statistical tests (including whether chi-square, Fisher's exact, or another test was used for the categorical completeness comparison), effect sizes, confidence intervals, and a clear account of how the N=31 sample was handled, including any adjustments for multiple comparisons or power considerations. These additions will allow readers to assess the robustness of the findings directly. (Revision: yes)
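The test choice the authors mention can be made concrete. For a per-criterion completeness comparison with small cell counts, Fisher's exact test on a 2x2 table is the natural candidate, with the odds ratio as an effect size. The counts below are hypothetical (the paper's actual per-criterion numbers are not given here); a minimal stdlib-only sketch:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    e.g. criterion met / not met for DAISY vs. unsupported disclosures.
    Returns (odds_ratio, p_value)."""
    r1, r2 = a + b, c + d           # row totals
    c1, n = a + c, a + b + c + d    # first-column total, grand total

    def prob(x):
        # Hypergeometric probability that the top-left cell equals x,
        # holding all table margins fixed.
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # Two-sided p: sum probabilities of all tables at least as
    # extreme (i.e. no more probable) than the observed one.
    p_value = sum(prob(x) for x in range(lo, hi + 1)
                  if prob(x) <= p_obs * (1 + 1e-9))
    odds_ratio = (a * d) / (b * c) if b * c else float("inf")
    return odds_ratio, p_value

# Hypothetical counts: 8 of 10 DAISY disclosures vs. 1 of 6
# unsupported disclosures meeting a given completeness criterion.
odds, p = fisher_exact_2x2(8, 2, 1, 5)
```

With these made-up counts the odds ratio is 20 and p = 400/11440 (about 0.035). In practice a library routine such as `scipy.stats.fisher_exact` would be used, plus a multiple-comparison correction when many criteria are tested.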
Referee: [Methods / Completeness Criteria] Completeness Criteria derivation (likely §2 or Methods): The criteria are presented as literature-derived but receive no validation showing they predict downstream outcomes such as reviewer comprehension or perceived legitimacy. This assumption underpins the interpretation that higher scores equal meaningfully better transparency.
Authors: The completeness criteria were derived from a synthesis of existing literature on AI disclosure guidelines and academic publishing standards. We acknowledge that empirical validation against downstream outcomes such as reviewer comprehension would provide stronger evidence of their practical value. However, conducting such validation would require a separate study involving reviewers and lies outside the scope of the current work, which focuses on the tool's impact on authors' disclosure completeness and comfort. In the revision we will clarify the literature basis for the criteria and explicitly identify downstream validation as an important direction for future research. (Revision: partial)
Circularity Check
No circularity: empirical tool evaluation relies on external literature criteria and participant data
Full rationale
The paper presents an empirical study of the DAISY disclosure tool. It derives requirements from prior literature and a separate co-design session (N=11), then evaluates via a user study (N=31) comparing DAISY-supported vs. unsupported disclosures against those literature-derived completeness criteria. No equations, fitted parameters, predictions, or derivations appear. Central claims rest on direct participant outputs and external benchmarks rather than any self-referential reduction or self-citation chain. This is a standard self-contained empirical evaluation with no load-bearing circular steps.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: AI use in research should be transparently disclosed, and structured guidance improves reporting completeness.
Forward citations
Cited by 1 Pith paper
- A Faceted Proposal for Transparent Attribution of AI-Assisted Text Production: proposes a faceted model based on Form, Generation, and Evaluation, with extensions for Intent, Control, and Traceability, as a minimal baseline for representing AI-assisted text production at multiple document scales.