Co-Writing with AI: An Empirical Study of Diverse Academic Writing Workflows
Pith reviewed 2026-05-07 15:31 UTC · model grok-4.3
The pith
Students integrate AI into academic writing through three selective, value-driven workflow configurations rather than uniform adoption.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Together the studies show that AI integration is selective and heterogeneous, forming three recurring and value-oriented configurations: early-stage (learning-oriented), where tools support exploration and understanding; late-stage (quality-oriented), where tools support drafting and refinement; and peripheral (productivity-oriented), where tools are used to reduce friction and sustain momentum across the process. Students evaluate and take responsibility for AI-generated outputs while balancing competing priorities of learning, quality, productivity, and authorship.
What carries the argument
Three value-oriented configurations of AI use in academic writing workflows (early-stage learning-oriented, late-stage quality-oriented, peripheral productivity-oriented) that organize task-specific patterns across ideation, sourcing, planning, drafting, and reviewing stages.
If this is right
- Individual factors such as AI literacy, writing confidence, trust, authorship concerns, and motivation are associated with which configuration a student adopts.
- AI use is assembled differently depending on the writing stage and the student's current priority among learning, quality, and productivity.
- Students retain responsibility by evaluating AI outputs before incorporating them into assessed work.
- Workflow-level patterns, rather than isolated task-level uses, better describe how AI fits into established academic writing practices.
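To make the three-configuration idea concrete, here is a purely hypothetical sketch of how a respondent's self-reported per-stage AI use might be mapped onto the configurations. The scoring rules, thresholds, and 0–4 frequency scale are invented for illustration; the paper does not specify any such classification procedure.

```python
# Hypothetical sketch: mapping a student's self-reported AI use per writing
# stage onto the three configurations described in the paper. Thresholds and
# scoring rules are invented for illustration only.

STAGES = ["ideation", "sourcing", "planning", "drafting", "reviewing"]
EARLY = {"ideation", "sourcing", "planning"}  # exploration/understanding stages
LATE = {"drafting", "reviewing"}              # drafting/refinement stages

def assign_configuration(use):
    """use: dict mapping stage name -> self-reported frequency (0-4)."""
    early = sum(use[s] for s in STAGES if s in EARLY)
    late = sum(use[s] for s in STAGES if s in LATE)
    if early + late == 0:
        return "none"
    # Peripheral: low but evenly spread use across the whole process.
    if max(use.values()) <= 1:
        return "peripheral (productivity-oriented)"
    if early >= late:
        return "early-stage (learning-oriented)"
    return "late-stage (quality-oriented)"

print(assign_configuration(
    {"ideation": 3, "sourcing": 2, "planning": 2, "drafting": 0, "reviewing": 1}))
# → early-stage (learning-oriented)
```

The point of the sketch is only that the configurations are workflow-level: the label depends on the whole stage profile, not on any single task-level use.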
Where Pith is reading between the lines
- Tool designers could build stage-aware features that default to exploration aids early and refinement aids later instead of offering the same interface throughout.
- Writing instruction might shift from blanket rules about AI to teaching students how to choose among the three configurations based on their goals for a given assignment.
- Long-term skill development could be tracked by whether students stay in peripheral mode or move toward early- and late-stage uses that preserve learning and quality ownership.
Load-bearing premise
Self-reported survey answers and interview reflections from UK students capture typical, stable, and unbiased patterns of AI use in writing.
What would settle it
A direct observational study or larger cross-cultural sample that finds either uniform AI use across all stages or entirely different groupings would undermine the three-configuration account.
Original abstract
Despite AI tools becoming increasingly embedded in academic practice, little is known about how university students integrate them into their writing processes. We examine how students engage with AI across different writing tasks, and how this engagement is shaped by individual factors including AI literacy, writing confidence, trust, authorship concerns, and motivation. Study 1 surveys 107 UK university students to map task-specific and co-occurring patterns of AI use across five writing stages (ideation, sourcing, planning, drafting, and reviewing) and their associations with individual factors. Study 2 complements this by exploring how these patterns can be assembled in practice, through interviews with 12 postgraduates reflecting on their established use of AI in assessed writing. Together, the studies suggest that AI integration is selective and heterogeneous, forming three recurring and value-oriented configurations: (1) early-stage (learning-oriented), where tools support exploration and understanding; (2) late-stage (quality-oriented), where tools support drafting and refinement; and (3) peripheral (productivity-oriented), where tools are used to reduce friction and sustain momentum across the process. We offer a workflow-level account of AI-supported academic writing, showing how students navigate competing priorities of learning, quality, productivity, and authorship, and how they evaluate and take responsibility for AI-generated outputs.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports two complementary empirical studies on AI integration in academic writing by UK university students. Study 1 surveys 107 students to map task-specific AI use across five stages (ideation, sourcing, planning, drafting, reviewing) and its associations with factors such as AI literacy, writing confidence, trust, authorship concerns, and motivation. Study 2 uses reflections from 12 postgraduates to illustrate how these patterns assemble into practice. The central claim is that AI use is selective and heterogeneous, forming three recurring value-oriented configurations: early-stage (learning-oriented), late-stage (quality-oriented), and peripheral (productivity-oriented).
Significance. If the configurations and their value orientations hold under scrutiny, the work supplies a workflow-level empirical account of how students navigate competing priorities of learning, quality, productivity, and authorship when using AI tools. This contributes descriptive grounding to HCI and writing studies on co-writing practices and could inform tool design and educational guidance.
major comments (2)
- [Results and Discussion (synthesis of Studies 1 and 2)] The derivation of the three recurring configurations from self-reported survey responses and interview reflections is load-bearing for the central claim, yet the manuscript provides no details on the analytical procedure (e.g., coding scheme, clustering method, or inter-rater checks) used to group patterns into the early-stage, late-stage, and peripheral categories. Without this, it is unclear whether the groupings are robustly data-driven or primarily interpretive.
- [Methods (Study 1 survey and Study 2 interviews)] The claim that the configurations reflect stable, typical workflows rests on self-reported data without described controls for social desirability bias (particularly around authorship) or external validation (e.g., writing artifacts or logs). This weakens the assertion that the patterns are recurring beyond the UK student sample.
minor comments (1)
- [Abstract] The abstract and introduction could more explicitly state the sample limitations (UK-only, students) and the absence of behavioral measures to set appropriate expectations for generalizability.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback, which identifies key areas where greater transparency and qualification of claims will strengthen the manuscript. We address each major comment below, indicating planned revisions.
Point-by-point responses
- Referee: [Results and Discussion (synthesis of Studies 1 and 2)] The derivation of the three recurring configurations from self-reported survey responses and interview reflections is load-bearing for the central claim, yet the manuscript provides no details on the analytical procedure (e.g., coding scheme, clustering method, or inter-rater checks) used to group patterns into the early-stage, late-stage, and peripheral categories. Without this, it is unclear whether the groupings are robustly data-driven or primarily interpretive.
Authors: We agree that the manuscript lacks sufficient detail on how the three configurations were derived, which is essential for evaluating the synthesis. In the revised version, we will insert a dedicated subsection (e.g., 'Deriving the Configurations') in the Results and Discussion. This will outline: the quantitative mapping from Study 1 survey data (frequency counts and co-occurrence matrices across the five stages to identify selective patterns); the qualitative thematic analysis in Study 2 (initial open coding of reflections for value orientations such as learning/exploration, quality/refinement, and productivity/momentum, followed by axial coding to link these to stage-specific uses); and the integrative synthesis process that combined both datasets to surface the recurring early-stage, late-stage, and peripheral configurations. We will clarify that the groupings combine data-driven elements (e.g., stage-specific usage frequencies) with interpretive synthesis of value orientations, and report any coding reliability procedures employed (or note their absence if single-coder). revision: yes
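The co-occurrence mapping the rebuttal refers to can be illustrated with a minimal sketch: counting, across respondents, how often pairs of writing stages are both reported as AI-assisted. The respondent data below is invented for illustration; the paper's actual survey data and analysis pipeline are not public in this review.

```python
# Hypothetical sketch of a stage co-occurrence count: for each pair of writing
# stages, how many respondents report using AI in both. Respondent data is
# invented for illustration.

STAGES = ["ideation", "sourcing", "planning", "drafting", "reviewing"]

# Each respondent: the set of stages where they report using AI.
respondents = [
    {"ideation", "sourcing"},
    {"drafting", "reviewing"},
    {"ideation", "sourcing", "planning"},
    {"reviewing"},
]

def cooccurrence(responses):
    """Return counts[(a, b)] = number of respondents using AI in both a and b."""
    counts = {(a, b): 0 for a in STAGES for b in STAGES}
    for used in responses:
        for a in used:
            for b in used:
                if a != b:
                    counts[(a, b)] += 1
    return counts

co = cooccurrence(respondents)
print(co[("ideation", "sourcing")])  # → 2 (respondents 1 and 3)
```

High counts among early stages (or among late stages) are the kind of pattern that would support grouping respondents into the early-stage and late-stage configurations; the interpretive step of naming value orientations sits on top of such counts.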
- Referee: [Methods (Study 1 survey and Study 2 interviews)] The claim that the configurations reflect stable, typical workflows rests on self-reported data without described controls for social desirability bias (particularly around authorship) or external validation (e.g., writing artifacts or logs). This weakens the assertion that the patterns are recurring beyond the UK student sample.
Authors: We concur that self-reported data carries risks of social desirability bias, especially on sensitive topics like authorship and responsibility for AI outputs. In revision, we will substantially expand the Limitations section to detail mitigation steps (anonymous survey format in Study 1; open, non-judgmental interview prompts in Study 2) and to explicitly qualify the claims: the configurations are presented as recurring patterns observed within this UK university sample rather than asserted as stable or typical workflows in general. We will also note the absence of external validation data such as logs or artifacts as a design limitation and suggest this as an avenue for future work. These additions will temper generalizability statements while preserving the descriptive contribution of the complementary studies. revision: partial
- We cannot add external validation data (writing artifacts or logs) without conducting an entirely new study, as the current design was limited to self-reports and reflections.
Circularity Check
No circularity: purely empirical synthesis from survey and interview data
full rationale
This is an empirical descriptive study with no mathematical derivations, fitted models, or predictive equations. Study 1 uses survey responses from 107 students to map task-specific AI use across five stages and correlate with individual factors; Study 2 uses reflections from 12 postgraduates to illustrate how patterns assemble in practice. The three configurations (early-stage learning-oriented, late-stage quality-oriented, peripheral productivity-oriented) are synthesized inductively from the observed patterns in the data. No self-definitional loops, fitted inputs renamed as predictions, load-bearing self-citations, uniqueness theorems, or smuggled ansatzes appear in the derivation chain. The central claim rests on the reported behaviors and their interpretive grouping, which is independent of any prior fitted parameters or self-referential definitions within the paper.
Axiom & Free-Parameter Ledger
Not applicable: the paper is descriptive, with no fitted models or free parameters (see the circularity rationale above).
Reference graph
Works this paper leans on
- [1] Abayomi Arowosegbe, Jaber S. Alqahtani, and Tope Oyelade. 2024. Perception of generative AI use in UK higher education. Frontiers in Education 9 (2024). doi:10.3389/feduc.2024.1463208
- [2] Silvia Bodei. 2025. Workflow Differences in University Students’ GenAI Use Across Academic Writing Stages: A Mixed-Methods Survey and Interview Study. MSc dissertation. University College London.
- [3] Mari Mar Boillos and Nahia Idoiaga. 2025. Student perspectives on the use of AI-based language tools in academic writing. Journal of Writing Research 17, 1 (June 2025), 155–170. doi:10.17239/jowr-2025.17.01.06
- [4] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. doi:10.1191/1478088706qp063oa
- [5] Cecilia Ka Yuk Chan and Wenxin Zhou. 2023. An expectancy value theory (EVT) based instrument for measuring student perceptions of generative AI. Smart Learning Environments 10, 1 (Dec. 2023), 64. doi:10.1186/s40561-023-00284-4
- [6] K. Y. F. Cheung, E. J. N. Stupple, and J. Elander. 2017. Development and validation of the Student Attitudes and Beliefs about Authorship Scale: A psychometrically robust measure of authorial identity. Studies in Higher Education 42, 1 (2017), 97–114. doi:10.1080/03075079.2015.1034673
- [7] Jacob Cohen. 1992. A Power Primer. Psychological Bulletin 112, 1 (1992), 155–159. doi:10.1037/0033-2909.112.1.155
- [8] Paramveer S. Dhillon, Somayeh Molaei, Jiaqi Li, Maximilian Golub, Shaochun Zheng, and Lionel Peter Robert. 2024. Shaping Human-AI Collaboration: Varied Scaffolding Levels in Co-writing with Language Models. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’24). Association for Computing Machinery, New ...
- [9] Don A. Dillman. 2007. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method (2nd ed.). John Wiley & Sons.
- [10] James Elander, Katherine Harrington, Lin Norton, Hannah Robinson, and Pete Reddy. 2006. Complex skills and academic writing: A review of evidence about the types of learning required to meet core assessment criteria. Assessment & Evaluation in Higher Education 31, 1 (2006), 71–90. doi:10.1080/02602930500262379
- [11] Daniela Fernandes, Steeven Villa, Salla Nicholls, Otso Haavisto, Daniel Buschek, Albrecht Schmidt, Thomas Kosch, Chenxinran Shen, and Robin Welsch. 2026. AI makes you smarter but none the wiser: The disconnect between performance and metacognition. Computers in Human Behavior 175 (2026), 108779. doi:10.1016/j.chb.2025.108779
- [12] John C. Flanagan. 1954. The Critical Incident Technique. Psychological Bulletin 51, 4 (1954), 327–358. doi:10.1037/h0061470
- [13] Josh Freeman. 2025. Student Generative AI Survey 2025. HEPI Policy Note 61. Higher Education Policy Institute (HEPI). https://www.hepi.ac.uk/wp-content/uploads/2025/02/HEPI-Kortext-Student-Generative-AI-Survey-2025.pdf
- [14] Sandy J. J. Gould, Duncan P. Brumby, and Anna L. Cox. 2024. ChatTL;DR – You Really Ought to Check What the LLM Said on Your Behalf. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA ’24). Association for Computing Machinery, New York, NY, USA, Article 552, 7 pages. doi:10.1145/3613905.3644062
- [15] Alicia Guo, Shreya Sathyanarayanan, Leijie Wang, Jeffrey Heer, and Amy X. Zhang. 2025. From Pen to Prompt: How Creative Writers Integrate AI into their Writing Practice. In Proceedings of the 2025 Conference on Creativity and Cognition (C&C ’25). Association for Computing Machinery, New York, NY, USA, 527–545. doi:10.1145/3698061.3726910
- [16] Angel Hsing-Chi Hwang, Q. Vera Liao, Su Lin Blodgett, Alexandra Olteanu, and Adam Trischler. 2025. ‘It was 80% me, 20% AI’: Seeking Authenticity in Co-Writing with Large Language Models. 41 pages. doi:10.1145/3711020
- [17] Jiahui Luo (Jess). 2024. A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education 49, 5 (2024), 651–664. doi:10.1080/02602938.2024.2309963
- [18] Heather Johnston, Rebecca F. Wells, Elizabeth M. Shanks, Timothy Boey, and Bryony N. Parsons. 2024. Student perspectives on the use of generative artificial intelligence technologies in higher education. International Journal for Educational Integrity 20, 1 (2024), 2. doi:10.1007/s40979-024-00149-4
- [19] Sanna Järvelä, Andy Nguyen, Eija Vuorenmaa, Jonna Malmberg, and Hanna Järvenoja. 2023. Predicting regulatory activities for socially shared regulation to optimize collaborative learning. Computers in Human Behavior 144 (2023), 107737. doi:10.1016/j.chb.2023.107737
- [20] Jinhee Kim, Sang-Soog Lee, Rita Detrick, Jialin Wang, and Na Li. 2026. Students-Generative AI interaction patterns and its impact on academic writing. Journal of Computing in Higher Education 38, 1 (2026), 504–525. doi:10.1007/s12528-025-09444-6
- [21] Jinhee Kim, Seongryeong Yu, Rita Detrick, and Na Li. 2025. Exploring students’ perspectives on Generative AI-assisted academic writing. Education and Information Technologies 30, 1 (2025), 1265–1300. doi:10.1007/s10639-024-12878-7
- [22] Hao-Ping (Hank) Lee, Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, and Nicholas Wilson. 2025. The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CH...
- [23] Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A. Alghamdi, Tal August, Avinash Bhat, Madiha Zahrah Choksi, Senjuti Dutta, Jin L.C. Guo, Md Naimul Hoque, Yewon Kim, Simon Knight, Seyed Parsa Neshaei, Antonette Shibani, Disha Shrivastava, Lila Shr...
- [24] Florian Lehmann, Krystsina Shauchenka, and Daniel Buschek. 2026. Collaborative Document Editing with Multiple Users and AI Agents. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26). Association for Computing Machinery, New York, NY, USA, Article 58, 27 pages. doi:10.1145/3772318.3790648
- [25] Susan Lin, Jeremy Warner, J.D. Zamfirescu-Pereira, Matthew G Lee, Sauhard Jain, Shanqing Cai, Piyawat Lertvittayakumjorn, Michael Xuelin Huang, Shumin Zhai, Bjoern Hartmann, and Can Liu. 2024. Rambler: Supporting Writing With Speech via LLM-Assisted Gist Manipulation. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu,...
- [26] Yier Ling, Alex Kale, and Alex Imas. 2026. Underreporting of AI Use: The Role of Social Desirability Bias. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26). Association for Computing Machinery, New York, NY, USA, Article 62, 22 pages. doi:10.1145/3772318.3791073
- [27] Tomáš Lintner. 2024. A systematic review of AI literacy scales. npj Science of Learning 9, 1 (Aug. 2024), 50. doi:10.1038/s41539-024-00264-4
- [28] Roderick J. A. Little and Donald B. Rubin. 2019. Statistical Analysis with Missing Data (3rd ed.). John Wiley & Sons. doi:10.1002/9781119482260
- [29] Agung Rinaldy Malik, Yuni Pratiwi, Kusubakti Andajani, I Wayan Numertayasa, Sri Suharti, Arisa Darwis, and Marzuki. 2023. Exploring Artificial Intelligence in Academic Essay: Higher Education Student’s Perspective. International Journal of Educational Research Open 5 (2023), 100296. doi:10.1016/j.ijedro.2023.100296
- [30] Microsoft Education Team. 2025. AI in Education Report: Insights to Support Teaching and Learning. Technical Report. Microsoft. https://www.microsoft.com/en-us/education/blog/2025/08/ai-in-education-report-insights-to-support-teaching-and-learning/ Accessed: 2026-04-22
- [31] Kim M. Mitchell, Diana E. McMillan, Michelle M. Lobchuk, Nathan C. Nickel, Rasheda Rabbani, and Johnson Li. 2021. Development and validation of the Situated Academic Writing Self-Efficacy Scale (SAWSES). Assessing Writing 48 (2021), 100524. doi:10.1016/j.asw.2021.100524
- [32] Andy Nguyen, Yvonne Hong, Belle Dang, and Xiaoshan Huang. 2024. Human-AI collaboration patterns in AI-assisted academic writing. Studies in Higher Education 49, 5 (2024), 847–864. doi:10.1080/03075079.2024.2323593
- [33] Andy Nguyen, Faith Ilesanmi, Belle Dang, Eija Vuorenmaa, and Sanna Järvelä. 2024. Hybrid Intelligence in Academic Writing: Examining Self-Regulated Learning Patterns in an AI-Assisted Writing Task. In HHAI 2024: Hybrid Human AI Systems for the Social Good. Frontiers in Artificial Intelligence and Applications, Vol. 386. IOS Press, 241–254. doi:10.3233/FAIA240198
- [34] Francisco M. Olmos-Vega, Renée E. Stalmeijer, Lara Varpio, and Renate Kahlke. 2023. A practical guide to reflexivity in qualitative research: AMEE Guide No. 149. Medical Teacher 45, 3 (2023), 241–251. doi:10.1080/0142159X.2022.2057287
- [35] Amy Wanyu Ou, Christian Stöhr, and Hans Malmström. 2024. Academic communication with AI-powered language tools in higher education: From a post-humanist perspective. System 121 (2024), 103225. doi:10.1016/j.system.2024.103225
- [36] Jessica L. Parker, Veronica M. Richard, Alexandra Acabá, Sierra Escoffier, Stephen Flaherty, Shannon Jablonka, and Kimberly P. Becker. 2025. Negotiating Meaning with Machines: AI’s Role in Doctoral Writing Pedagogy. International Journal of Artificial Intelligence in Education 35, 3 (2025), 1218–1238. doi:10.1007/s40593-024-00425-x
- [37] Mike Perkins, Leon Furze, Jasper Roe, and Jason MacVaugh. 2024. The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice 21, 6 (April 2024). doi:10.53761/q3azde36
- [38] Anders Persson, Mikael Laaksoharju, and Hiroshi Koga. 2021. We Mostly Think Alike: Individual Differences in Attitude Towards AI in Sweden and Japan. The Review of Socionetwork Strategies 15, 1 (2021), 123–142. doi:10.1007/s12626-021-00071-y
- [39] Mohi Reza, Nathan M Laundry, Ilya Musabirov, Peter Dushniku, Zhi Yuan “Michael” Yu, Kashish Mittal, Tovi Grossman, Michael Liut, Anastasia Kuzminykh, and Joseph Jay Williams. 2024. ABScribe: Rapid Exploration & Organization of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models. In Proceedings of the 2024 CHI Conference on ...
- [40] Jeba Rezwana and Mary Lou Maher. 2023. Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems. ACM Transactions on Computer-Human Interaction 30, 5, Article 67 (Sept. 2023), 28 pages. doi:10.1145/3519026
- [41] Yvon Ruitenburg, Hayoun Noh, Jing Li, Sima Amirkhani, Hyuna Jo, Max Van Kleek, Younah Kang, Sarah Foley, and Minha Lee. 2026. What we chose to (Not) share: Unpacking how HCI researchers self-disclose in interactions with participants with stigmatised identities. International Journal of Human-Computer Studi...
- [42] Advait Sarkar. 2025. AI Could Have Written This: Birth of a Classist Slur in Knowledge Work. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’25). Association for Computing Machinery, New York, NY, USA, Article 621, 12 pages. doi:10.1145/3706599.3716239
- [43] Antonette Shibani, Simon Knight, Kirsty Kitto, Ajanie Karunanayake, and Simon Buckingham Shum. 2024. Untangling Critical Interaction with AI in Students’ Written Assessment. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA ’24). Association for Computing Machinery, New York, NY, USA, Article 357, 6...
- [44] Cuiping Song and Yanping Song. 2023. Enhancing academic writing skills and motivation: assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Frontiers in Psychology 14 (2023). doi:10.3389/fpsyg.2023.1260843
- [45] Christian Stöhr, Amy Wanyu Ou, and Hans Malmström. 2024. Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Computers and Education: Artificial Intelligence 7 (2024), 100259. doi:10.1016/j.caeai.2024.100259
- [46] Artur Strzelecki. 2024. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interactive Learning Environments 32, 9 (2024), 5142–5155. doi:10.1080/10494820.2023.2209881
- [47] Sangho Suh, Meng Chen, Bryan Min, Toby Jia-Jun Li, and Haijun Xia. 2024. Luminate: Structured Generation and Exploration of Design Space with Large Language Models for Human-AI Co-Creation. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’24). Association for Computing Machinery, New York, NY, USA, Art...
- [48] John Sweller. 1988. Cognitive Load During Problem Solving: Effects on Learning. Cognitive Science 12, 2 (1988), 257–285. doi:10.1207/s15516709cog1202_4
- [49] Lev Tankelevitch, Viktor Kewenig, Auste Simkute, Ava Elizabeth Scott, Advait Sarkar, Abigail Sellen, and Sean Rintel. 2024. The Metacognitive Demands and Opportunities of Generative AI. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’24). Association for Computing Machinery, New York, NY, USA, Article...
- [50] Rama Adithya Varanasi, Batia Mishan Wiesenfeld, and Oded Nov. 2025. AI Rivalry as a Craft: How Resisting and Embracing Generative AI Are Reshaping the Writing Profession. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). Association for Computing Machinery, New York, NY, USA, Article 1198, 19 pages. doi:10.1145/3706...
- [51] Lev S. Vygotsky and Michael Cole. 1978. Mind in Society: Development of Higher Psychological Processes. Harvard University Press.
- [52] Qing (Nancy) Xia, Marios Constantinides, Advait Sarkar, Duncan P. Brumby, and Anna L. Cox. 2026. "If You’re Very Clever, No One Knows You’ve Used It": The Social Dynamics of Developing Generative AI Literacy in the Workplace. In Proceedings of the 5th Annual Symposium on Human-Computer Interaction for Work (CHIWORK ’26) (Linz, Austria). Association for Comp...
- [53] Yuan Yao, Yiwen Sun, Siyu Zhu, and Xinhua Zhu. 2025. A Qualitative Inquiry Into Metacognitive Strategies of Postgraduate Students in Employing ChatGPT for English Academic Writing. European Journal of Education 60, 1 (2025), e12824. doi:10.1111/ejed.12824
- [54] Tim Zindulka, Sven Goller, Daniela Fernandes, Robin Welsch, and Daniel Buschek. 2026. The AI Memory Gap: Users Misremember What They Created With AI or Without. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26). Association for Computing Machinery, New York, NY, USA, Article 61, 22 pages. doi:10.1145/3772318.3791494