Pith · machine review for the scientific record

arxiv: 2604.07813 · v1 · submitted 2026-04-09 · 💻 cs.AI · cs.HC

Recognition: 2 theorem links · Lean Theorem

Agentivism: a learning theory for the age of artificial intelligence

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 17:56 UTC · model grok-4.3

classification 💻 cs.AI cs.HC
keywords Agentivism · learning theory · human-AI interaction · AI-assisted learning · durable capability · selective delegation · epistemic monitoring · internalization

The pith

Agentivism defines learning in the AI era as durable human capability built through selective delegation, monitoring, internalization, and unsupported transfer.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Existing theories of learning do not explain how performance achieved with AI support becomes independent human skill once the AI is removed. Learners can finish tasks effectively while AI handles the cognitive work, yet show little lasting understanding or judgment. The paper therefore proposes Agentivism as a dedicated theory for human-AI interaction. It states that genuine learning occurs only when people choose what to delegate, check the AI's output, rebuild it in their own terms, and later demonstrate the same capability without help. This distinction matters because AI tools are now a routine part of study and work, so instruction must deliberately cultivate the four processes rather than assume performance equals learning.

Core claim

Agentivism defines learning as durable growth in human capability through selective delegation to AI, epistemic monitoring and verification of AI contributions, reconstructive internalization of AI-assisted outputs, and transfer under reduced support. The theory addresses the fact that successful performance with AI no longer reliably signals learning, since learners may complete work effectively while developing weaker understanding and limited transferable skill.

What carries the argument

The four-process definition of learning within Agentivism: selective delegation to AI, epistemic monitoring and verification, reconstructive internalization, and transfer under reduced support.
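The paper gives no formal model, but the four-process definition reads naturally as a conjunctive checklist: drop any one process and the outcome is performance, not learning. A minimal sketch in Python (the class, field, and function names are ours, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class LearningEpisode:
    """One AI-assisted task, scored on Agentivism's four processes.

    Field names are illustrative, not taken from the paper.
    """
    delegated_selectively: bool   # learner chose what to hand to the AI
    monitored_output: bool        # learner verified the AI's contribution
    internalized: bool            # learner rebuilt the result in own terms
    transferred_unaided: bool     # same capability shown with support removed

def counts_as_learning(ep: LearningEpisode) -> bool:
    """Under Agentivism all four processes are jointly required;
    completing the task with any process missing is performance only."""
    return all([ep.delegated_selectively, ep.monitored_output,
                ep.internalized, ep.transferred_unaided])

# A learner who finishes the task but never checks or rebuilds the AI's work:
performance_only = LearningEpisode(True, False, False, False)
print(counts_as_learning(performance_only))  # False
```

The conjunction is the point of the theory's core claim: output quality alone (the first field) cannot make the predicate true.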

If this is right

  • Instructional designs must include explicit practice in verifying and reconstructing AI contributions rather than relying on final output alone.
  • Assessment of learning requires phases of reduced or removed AI support to confirm that capability has transferred.
  • Curricula should teach learners criteria for deciding which tasks to delegate and which to keep under human control.
  • Educational tools need features that prompt epistemic monitoring and internalization instead of simply accelerating task completion.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same four processes could be examined in non-educational settings such as professional workflows where AI assists with reports or code.
  • Interfaces could be redesigned to make monitoring and reconstruction easier, potentially increasing the rate at which AI assistance converts into independent skill.
  • Long-term studies might track whether repeated use of the four processes changes how learners approach new problems even when no AI is present.

Load-bearing premise

Existing learning theories cannot directly explain the conditions under which AI-assisted performance produces lasting human capability once support is withdrawn.

What would settle it

A controlled study that measures retention of problem-solving skill after AI removal in two groups, one trained to monitor and internalize AI outputs and one allowed to delegate without monitoring.
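Such a study reduces to a two-sample comparison of retention scores measured after AI removal. A stdlib-only sketch of that analysis, on invented scores (the group labels, sample values, and pooled-SD effect size are our illustration, not the paper's):

```python
import statistics

def cohens_d(a, b):
    """Standardized mean difference between two independent groups."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Hypothetical post-removal retention scores (0-100):
monitor_group = [72, 68, 75, 80, 66, 74, 71, 78]   # trained to monitor/internalize
delegate_group = [55, 61, 49, 58, 63, 52, 57, 60]  # free delegation, no monitoring

diff = statistics.mean(monitor_group) - statistics.mean(delegate_group)
print(f"mean difference: {diff:.1f} points, "
      f"Cohen's d = {cohens_d(monitor_group, delegate_group):.2f}")
```

A positive difference favoring the monitoring group would support the theory's prediction; a null result with matched in-task performance would count against it.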

read the original abstract

Learning theories have historically changed when the conditions of learning evolved. Generative and agentic AI create a new condition by allowing learners to delegate explanation, writing, problem solving, and other cognitive work to systems that can generate, recommend, and sometimes act on the learner's behalf. This creates a fundamental challenge for learning theory: successful performance can no longer be assumed to indicate learning. Learners may complete tasks effectively with AI support while developing less understanding, weaker judgment, and limited transferable capability. We argue that this problem is not fully captured by existing learning theories. Behaviourism, cognitivism, constructivism, and connectivism remain important, but they do not directly explain when AI-assisted performance becomes durable human capability. We propose Agentivism, a learning theory for human-AI interaction. Agentivism defines learning as durable growth in human capability through selective delegation to AI, epistemic monitoring and verification of AI contributions, reconstructive internalization of AI-assisted outputs, and transfer under reduced support. The importance of Agentivism lies in explaining how learning remains possible when intelligent delegation is easy and human-AI interaction is becoming a persistent and expanding part of human learning.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes Agentivism as a new learning theory tailored to human-AI interaction. It argues that established theories (behaviourism, cognitivism, constructivism, connectivism) do not directly explain the conditions under which AI-assisted task performance produces durable growth in human capability. Agentivism defines learning as such growth achieved via four processes: selective delegation to AI, epistemic monitoring and verification of AI contributions, reconstructive internalization of AI-assisted outputs, and transfer under reduced support.

Significance. If the proposed framework can be shown to identify mechanisms not already covered by extensions of prior theories, it could usefully guide the design of AI tools and educational interventions that prioritize capability retention over short-term performance. As a purely conceptual contribution without empirical tests, formal derivations, or falsifiable predictions, its significance remains prospective and depends on subsequent validation.

major comments (2)
  1. Abstract: The central motivation—that behaviourism, cognitivism, constructivism, and connectivism 'do not directly explain when AI-assisted performance becomes durable human capability'—is asserted without a comparative gap analysis or counterexamples. The manuscript does not demonstrate why, for instance, constructivist scaffolding and internalization (Vygotsky) cannot already subsume epistemic monitoring and reconstructive internalization when AI functions as an external tool, leaving the necessity of new primitives unsupported.
  2. Abstract and proposed definition: The four processes are presented as jointly sufficient for durable growth, yet no mechanism, interaction rules, or boundary conditions are supplied showing how selective delegation plus monitoring reliably produces internalization and transfer rather than performance-only outcomes. This renders the definitional claim load-bearing but ungrounded.
minor comments (2)
  1. The manuscript introduces 'Agentivism' without situating it against related concepts already in the AI-education literature (e.g., agentic learning or tool-mediated cognition), which would clarify novelty.
  2. No section outlines testable predictions or operational measures for the four processes, which would strengthen the proposal even as a conceptual framework.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive review of our manuscript on Agentivism. We address each major comment below, providing clarification on the conceptual scope of the work while indicating revisions where they strengthen the presentation without altering its foundational claims.

read point-by-point responses
  1. Referee: Abstract: The central motivation—that behaviourism, cognitivism, constructivism, and connectivism 'do not directly explain when AI-assisted performance becomes durable human capability'—is asserted without a comparative gap analysis or counterexamples. The manuscript does not demonstrate why, for instance, constructivist scaffolding and internalization (Vygotsky) cannot already subsume epistemic monitoring and reconstructive internalization when AI functions as an external tool, leaving the necessity of new primitives unsupported.

    Authors: We acknowledge the value of making the comparative gap more explicit. The manuscript motivates the proposal by noting that generative AI enables complete cognitive delegation in ways that differ from prior tools, such that performance can occur without the learner engaging in the reconstructive or monitoring processes that existing theories presuppose. To address the referee's point directly, we will expand the introduction with a short comparative subsection that contrasts Vygotsky-style scaffolding (where the more knowledgeable other remains human and the learner retains active construction) with AI delegation scenarios where the system can generate complete solutions, thereby illustrating why the four Agentivism processes are not automatically subsumed. This revision will supply the requested counterexamples while preserving the paper's conceptual focus. revision: yes

  2. Referee: Abstract and proposed definition: The four processes are presented as jointly sufficient for durable growth, yet no mechanism, interaction rules, or boundary conditions are supplied showing how selective delegation plus monitoring reliably produces internalization and transfer rather than performance-only outcomes. This renders the definitional claim load-bearing but ungrounded.

    Authors: The manuscript offers Agentivism as a definitional framework that identifies the conditions under which AI-assisted performance yields durable capability, rather than as a mechanistic model with formal interaction rules or boundary conditions. The four processes are proposed as jointly necessary on logical grounds: without selective delegation, monitoring, internalization, and transfer, capability growth is not assured when AI performs substantial cognitive work. Detailed mechanisms and falsifiable predictions are beyond the scope of this initial conceptual paper and are explicitly flagged as directions for subsequent empirical research. We will revise the discussion section to state this scope limitation more clearly and to outline example research questions that could test the proposed processes, thereby grounding the definition without overclaiming sufficiency. revision: partial

Circularity Check

0 steps flagged

No circularity: Agentivism is an independent definitional proposal

full rationale

The paper advances a new learning theory by explicitly defining learning as durable capability growth via four mechanisms (selective delegation, epistemic monitoring, reconstructive internalization, transfer under reduced support). No equations, fitted parameters, or predictions appear; the claim that prior theories (behaviourism, cognitivism, constructivism, connectivism) do not directly explain AI-assisted durable capability is stated as a premise without reduction to self-citation chains or imported uniqueness theorems. The contribution stands as a conceptual reframing rather than a derivation that collapses to its inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The proposal rests on two domain assumptions about the limits of prior theories and introduces one new conceptual entity without independent evidence.

axioms (2)
  • domain assumption Successful performance can no longer be assumed to indicate learning when AI systems can generate, recommend, and act on the learner's behalf.
    Stated as the fundamental challenge created by generative and agentic AI.
  • domain assumption Behaviourism, cognitivism, constructivism, and connectivism do not directly explain when AI-assisted performance becomes durable human capability.
    Explicitly argued as the motivation for proposing a new theory.
invented entities (1)
  • Agentivism no independent evidence
    purpose: A learning theory framework for human-AI interaction
    Newly proposed theory whose four processes are defined without prior empirical support or derivation from data.

pith-pipeline@v0.9.0 · 5499 in / 1443 out tokens · 31627 ms · 2026-05-10T17:56:33.215897+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: the paper's claim is directly supported by a theorem in the formal canon.
  • supports: the theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: the paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: the paper appears to rely on the theorem as machinery.
  • contradicts: the paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

85 extracted references · 60 canonical work pages

  1. [1]

    Psychology as the Behaviorist Views It

    Watson JB. Psychology as the Behaviorist Views It. Psychological Review. 1913;20(2):158–177. https://doi.org/10.1037/h0074428

  2. [2]

    The Behavior of Organisms: An Experimental Analysis

    Skinner BF. The Behavior of Organisms: An Experimental Analysis. New York: Appleton-Century; 1938

  3. [3]

    The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information

    Miller GA. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review. 1956;63(2):81–97. https://doi.org/10.1037/h0043158

  4. [4]

    Human Memory: A Proposed System and Its Control Processes

    Atkinson RC, Shiffrin RM. Human Memory: A Proposed System and Its Control Processes. In: Spence KW, Spence JT, editors. The Psychology of Learning and Motivation. vol. 2. Academic Press; 1968. p. 89–195

  5. [5]

    Experience and Education

    Dewey J. Experience and Education. New York: Macmillan; 1938

  6. [6]

    The Origins of Intelligence in Children

    Piaget J. The Origins of Intelligence in Children. New York: International Universities Press; 1952

  7. [7]

    Mind in Society: The Development of Higher Psychological Processes

    Vygotsky LS. Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press; 1978

  8. [8]

    Connectivism: A Learning Theory for the Digital Age

    Siemens G. Connectivism: A Learning Theory for the Digital Age. International Journal of Instructional Technology and Distance Learning. 2005;2(1):3–10

  9. [9]

    Large Language Models Challenge the Future of Higher Education

    Milano S, McGrane JA, Leonelli S. Large Language Models Challenge the Future of Higher Education. Nature Machine Intelligence. 2023;5(4):333–334. https://doi.org/10.1038/s42256-023-00644-2

  10. [10]

    Large AI Models Are Cultural and Social Technologies

    Farrell H, Gopnik A, Shalizi C, Evans J. Large AI Models Are Cultural and Social Technologies. Science. 2025;387(6739):1153–1156. https://doi.org/10.1126/science.adt9819

  11. [11]

    Building Machines That Learn and Think with People

    Collins KM, Sucholutsky I, Bhatt U, Chandra K, Wong L, Lee M, et al. Building Machines That Learn and Think with People. Nature Human Behaviour. 2024;8(10):1851–1863. https://doi.org/10.1038/s41562-024-01991-9

  12. [12]

    ChatGPT Has Entered the Classroom: How LLMs Could Transform Education

    Extance A. ChatGPT Has Entered the Classroom: How LLMs Could Transform Education. Nature. 2023;623(7987):474–477. https://doi.org/10.1038/d41586-023-03507-3

  13. [13]

    Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence

    Noy S, Zhang W. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. Science. 2023;381(6654):187–192. https://doi.org/10.1126/science.adh2586

  14. [14]

    Empowering student self-regulated learning and science education through ChatGPT: A pioneering pilot study

    Ng DTK, Tan CW, Leung JKL. Empowering student self-regulated learning and science education through ChatGPT: A pioneering pilot study. British Journal of Educational Technology. 2024;55(4):1328–1353. https://doi.org/10.1111/bjet.13454

  15. [15]

    Evaluation of ChatGPT’s Real-Life Implementation in Undergraduate Dental Education: Mixed Methods Study

    Kavadella A, Dias da Silva MA, Kaklamanos EG, Stamatopoulos V, Giannakopoulos K. Evaluation of ChatGPT’s Real-Life Implementation in Undergraduate Dental Education: Mixed Methods Study. JMIR Medical Education. 2024;10:e51344. https://doi.org/10.2196/51344

  16. [16]

    Integrating ChatGPT for vocabulary learning and retention: A classroom-based study of Saudi EFL learners

    Abdelhalim SM, Alsehibany R. Integrating ChatGPT for vocabulary learning and retention: A classroom-based study of Saudi EFL learners. Language Learning & Technology. 2025;p. 1–24. https://doi.org/10.64152/10125/73635

  17. [17]

    From Chalkboards to Chatbots: Evaluating the Impact of Generative AI on Learning Outcomes in Nigeria

    De Simone M, Tiberti F, Barron Rodriguez M, Manolio F, Mosuro W, Dikoru EJ. From Chalkboards to Chatbots: Evaluating the Impact of Generative AI on Learning Outcomes in Nigeria. Washington, DC: World Bank; 2025

  18. [18]

    Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance

    Fan Y, Tang L, Le H, Shen K, Tan S, Zhao Y, et al. Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology. 2025;56(2):489–530. https://doi.org/10.1111/bjet.13544

  19. [20]

    Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry

    Stadler M, Bannert M, Sailer M. Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior. 2024;160:108386. https://doi.org/10.1016/j.chb.2024.108386

  20. [21]

    AI makes you smarter but none the wiser: The disconnect between performance and metacognition

    Fernandes D, Villa S, Nicholls S, Haavisto O, Buschek D, Schmidt A, et al. AI makes you smarter but none the wiser: The disconnect between performance and metacognition. Computers in Human Behavior. 2025;p. 108779. https://doi.org/10.1016/j.chb.2025.108779

  21. [22]

    Distinguishing performance gains from learning when using generative AI

    Yan L, Greiff S, Lodge JM, Gašević D. Distinguishing performance gains from learning when using generative AI. Nature Reviews Psychology. 2025;4(7):435–. https://doi.org/10.1038/s44159-025-00467-5

  23. [24]

    Generative artificial intelligence-supported programming education: Effects on learning performance, self-efficacy and processes

    Li S, Liu J, Dong Q. Generative artificial intelligence-supported programming education: Effects on learning performance, self-efficacy and processes. Australasian Journal of Educational Technology. 2025;https://doi.org/10.14742/ajet.9932

  24. [25]

    Machine Culture

    Brinkmann L, Baumann F, Bonnefon JF, Derex M, Müller TF, Nussberger AM, et al. Machine Culture. Nature Human Behaviour. 2023;7(11):1855–1868. https://doi.org/10.1038/s41562-023-01742-2

  25. [26]

    Cultural Tendencies in Generative AI

    Lu JG, Song LL, Zhang LD. Cultural Tendencies in Generative AI. Nature Human Behaviour. 2025;9(11):2360–2369. https://doi.org/10.1038/s41562-025-02242-1

  26. [27]

    High heels, compass, spider-man, or drug? Metaphor analysis of generative artificial intelligence in academic writing

    Jin F, Sun L, Pan Y, Lin CH. High heels, compass, spider-man, or drug? Metaphor analysis of generative artificial intelligence in academic writing. Computers & Education. 2025;228:105248. https://doi.org/10.1016/j.compedu.2025.105248

  27. [28]

    Generative AI enhances individual creativity but reduces the collective diversity of novel content

    Doshi AR, Hauser OP. Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances. 2024;10(28):eadn5290. https://doi.org/10.1126/sciadv.adn5290

  28. [29]

    Interaction between students and artificial intelligence in the context of creative potential development

    Xu M. Interaction between students and artificial intelligence in the context of creative potential development. Interactive Learning Environments. 2025;33(7):4460–4475. https://doi.org/10.1080/10494820.2025.2465439

  29. [30]

    Evaluating AI-assisted creative ideation: A crossover study in higher education

    Balta-Salvador R, Braso-Vives E, Pena M. Evaluating AI-assisted creative ideation: A crossover study in higher education. Thinking Skills and Creativity. 2026;59:101958. https://doi.org/10.1016/j.tsc.2025.101958

  30. [31]

    The effects of generative AI on collaborative problem-solving and team creativity performance in digital story creation: an experimental study

    Wei X, Wang L, Lee LK, Liu R. The effects of generative AI on collaborative problem-solving and team creativity performance in digital story creation: an experimental study. International Journal of Educational Technology in Higher Education. 2025;22(1). https://doi.org/10.1186/s41239-025-00526-0

  31. [32]

    When machines join the moral circle: The persona effect of generative AI agents in collaborative reasoning

    Jin Y, Martinez-Maldonado R, Shi W, Huang S, Zheng M, Han X, et al. When machines join the moral circle: The persona effect of generative AI agents in collaborative reasoning. British Journal of Educational Technology. 2026;00:1–24. https://doi.org/10.1111/bjet.70067

  32. [33]

    The homogenizing effect of large language models on human expression and thought

    Sourati Z, Ziabari AS, Dehghani M. The homogenizing effect of large language models on human expression and thought. Trends in Cognitive Sciences. 2026;00:1–12. https://doi.org/10.1016/j.tics.2026.01.003

  33. [34]

    Artificial Intelligence and Illusions of Understanding in Scientific Research

    Messeri L, Crockett MJ. Artificial Intelligence and Illusions of Understanding in Scientific Research. Nature. 2024;627(8002):49–58. https://doi.org/10.1038/s41586-024-07146-0

  34. [35]

    Extending Minds with Generative AI

    Clark A. Extending Minds with Generative AI. Nature Communications. 2025;16:4627. https://doi.org/10.1038/s41467-025-59906-9

  35. [36]

    Sycophantic AI decreases prosocial intentions and promotes dependence

    Cheng M, Lee C, Khadpe P, Yu S, Han D, Jurafsky D. Sycophantic AI decreases prosocial intentions and promotes dependence. Science. 2026;391(6792):eaec8352

  36. [37]

    Social theory and social structure

    Merton RK. Social theory and social structure. Simon and Schuster; 1968

  37. [39]

    AI chatbots as reading companions in self-directed out-of-class reading: A self-determination theory perspective

    Pan M, Lai C, Guo K. AI chatbots as reading companions in self-directed out-of-class reading: A self-determination theory perspective. British Journal of Educational Technology. 2025;https://doi.org/10.1111/bjet.70002

  38. [40]

    LLM-based collaborative programming: impact on students’ computational thinking and self-efficacy

    Yan YM, Chen CQ, Hu YB, Ye XD. LLM-based collaborative programming: impact on students’ computational thinking and self-efficacy. Humanities and Social Sciences Communications. 2025;12(1). https://doi.org/10.1057/s41599-025-04471-1

  39. [41]

    Interactions with generative AI chatbots: unveiling dialogic dynamics, students’ perceptions, and practical competencies in creative problem-solving

    Song Y, Huang L, Zheng L, Fan M, Liu Z. Interactions with generative AI chatbots: unveiling dialogic dynamics, students’ perceptions, and practical competencies in creative problem-solving. International Journal of Educational Technology in Higher Education. 2025;22(1). https://doi.org/10.1186/s41239-025-00508-2

  40. [42]

    The impact of AI-assisted pair programming on student motivation, programming anxiety, collaborative learning, and programming performance: a comparative study with traditional pair programming and individual approaches

    Fan G, Liu D, Zhang R, Pan L. The impact of AI-assisted pair programming on student motivation, programming anxiety, collaborative learning, and programming performance: a comparative study with traditional pair programming and individual approaches. International Journal of STEM Education. 2025;12(1). https://doi.org/10.1186/s40594-025-00537-3

  41. [43]

    To trust or to think: Cognitive forcing functions can reduce over-reliance on AI in AI-assisted decision-making

    Bučinca Z, Malaya M, Gajos KZ. To trust or to think: Cognitive forcing functions can reduce over-reliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction. 2021;5(CSCW1):1–21. https://doi.org/10.1145/3449287

  42. [44]

    Measuring Actual Learning Versus Feeling of Learning in Response to Being Actively Engaged in the Classroom

    Deslauriers L, McCarty LS, Miller K, Callaghan K, Kestin G. Measuring Actual Learning Versus Feeling of Learning in Response to Being Actively Engaged in the Classroom. Proceedings of the National Academy of Sciences of the United States of America. 2019;116(39):19251–19257. https://doi.org/10.1073/pnas.1821936116

  43. [45]

    Generative AI Without Guardrails Can Harm Learning: Evidence from High School Mathematics

    Bastani H, Bastani O, Sungu A, Ge H, Kabakci O, Mariman R. Generative AI Without Guardrails Can Harm Learning: Evidence from High School Mathematics. Proceedings of the National Academy of Sciences of the United States of America. 2025;122(26):e2422633122. https://doi.org/10.1073/pnas.2422633122

  44. [47]

    A conversational agent based on contingent teaching model to support collaborative learning activities: impacts on students’ learning performance, self-efficacy and perceptions

    Hu W, Gong R, Wu S, Li Y. A conversational agent based on contingent teaching model to support collaborative learning activities: impacts on students’ learning performance, self-efficacy and perceptions. Educational Technology Research and Development. 2025;73(5):3341–3372. https://doi.org/10.1007/s11423-025-10526-6

  45. [48]

    Enhancing student engagement in online collaborative writing through a generative AI-based conversational agent

    Hu W, Tian J, Li Y. Enhancing student engagement in online collaborative writing through a generative AI-based conversational agent. The Internet and Higher Education. 2025;65:100979. https://doi.org/10.1016/j.iheduc.2024.100979

  46. [49]

    Investigating the tripartite interaction among teachers, students, and generative AI in EFL education: A mixed-methods study

    Guan L, Lee JCK, Zhang Y, Gu MM. Investigating the tripartite interaction among teachers, students, and generative AI in EFL education: A mixed-methods study. Computers and Education: Artificial Intelligence. 2025;8:100384. https://doi.org/10.1016/j.caeai.2025.100384

  47. [50]

    Parent-led vs. AI-guided dialogic reading: Evidence from a randomized controlled trial in children’s e-book context

    Xiao F, Zou EW, Lin J, Li Z, Yang D. Parent-led vs. AI-guided dialogic reading: Evidence from a randomized controlled trial in children’s e-book context. British Journal of Educational Technology. 2025;56(5):1784–1813. https://doi.org/10.1111/bjet.13615

  48. [51]

    Supporting learner agency in collaborative writing with generative AI

    Kim S, So HJ, Park K. Supporting learner agency in collaborative writing with generative AI. British Journal of Educational Technology. 2026;https://doi.org/10.1111/bjet.70015

  49. [52]

    Bridging Learnersourcing and AI: Exploring the Dynamics of Student-AI Collaborative Feedback Generation

    Singh A, Brooks C, Wang X, Li W, Kim J, Wilson D. Bridging Learnersourcing and AI: Exploring the Dynamics of Student-AI Collaborative Feedback Generation. In: Proceedings of the 14th Learning Analytics and Knowledge Conference. New York, NY, USA: ACM; 2024. p. 742–748

  50. [53]

    Unveiling interaction patterns between students and generative AI teachable agents: Focusing on students’ agency and AI agents’ authority

    Xing W, Kim T, Song Y, Li H, Li C, Kim J. Unveiling interaction patterns between students and generative AI teachable agents: Focusing on students’ agency and AI agents’ authority. British Journal of Educational Technology. 2026;https://doi.org/10.1111/bjet.70038

  51. [54]

    On the conversational persuasiveness of GPT-4

    Salvi F, Horta Ribeiro M, Gallotti R, West R. On the conversational persuasiveness of GPT-4. Nature Human Behaviour. 2025;9(8):1645–1653. https://doi.org/10.1038/s41562-025-02194-6

  52. [55]

    Scientific Discovery in the Age of Artificial Intelligence

    Wang H, Fu T, Du Y, Gao W, Huang K, Liu Z, et al. Scientific Discovery in the Age of Artificial Intelligence. Nature. 2023;620(7972):47–60. https://doi.org/10.1038/s41586-023-06221-2

  53. [56]

    Teachers’ professional agency in learning with AI: A case study of a generative AI-based knowledge-building learning companion for teachers

    Tan SC, Tan YY, Teo CL, Yuan G. Teachers’ professional agency in learning with AI: A case study of a generative AI-based knowledge-building learning companion for teachers. British Journal of Educational Technology. 2026;https://doi.org/10.1111/bjet.70013

  54. [57]

    Large Language Models Encode Clinical Knowledge

    Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, et al. Large Language Models Encode Clinical Knowledge. Nature. 2023;620(7972):172–180. https://doi.org/10.1038/s41586-023-06291-2

  55. [58]

    Artificial Intelligence for Modelling Infectious Disease Epidemics

    Kraemer MUG, et al. Artificial Intelligence for Modelling Infectious Disease Epidemics. Nature. 2025;638(8051):623–635. https://doi.org/10.1038/s41586-024-08564-w

  56. [59]

    Multimodal Generative AI for Medical Image Interpretation

    Rao V, et al. Multimodal Generative AI for Medical Image Interpretation. Nature. 2025;639(8056):888–896. https://doi.org/10.1038/s41586-025-08675-y

  57. [61]

    Social Cognitive Theory: An Agentic Perspective

    Bandura A. Social Cognitive Theory: An Agentic Perspective. Annual Review of Psychology. 2001;52(1):1–26. https://doi.org/10.1146/annurev.psych.52.1.1

  58. [62]

    Impact of AI Assistance on Student Agency

    Darvishi A, Khosravi H, Sadiq S, Gašević D, Siemens G. Impact of AI Assistance on Student Agency. Computers & Education. 2024;210:104967. https://doi.org/10.1016/j.compedu.2023.104967

  59. [64]

    GPTs Are GPTs: Labor Market Impact Potential of LLMs

    Eloundou T, Manning S, Mishkin P, Rock D. GPTs Are GPTs: Labor Market Impact Potential of LLMs. Science. 2024;384(6702):1306–1308. https://doi.org/10.1126/science.adj0998

  60. [65]

    Biased AI writing assistants shift users’ attitudes on societal issues

    Williams-Ceci S, Jakesch M, Bhat A, Kadoma K, Zalmanson L, Naaman M. Biased AI writing assistants shift users’ attitudes on societal issues. Science Advances. 2026;12(11):eadw5578. https://doi.org/10.1126/sciadv.adw5578

  61. [66]

    AI Is Turning Research into a Scientific Monoculture

    Traberg CS, Roozenbeek J, van der Linden S. AI Is Turning Research into a Scientific Monoculture. Communications Psychology. 2026;4:37. https://doi.org/10.1038/s44271-026-00428-5

  62. [67]

    Artificial intelligence tools expand scientists’ impact but contract science’s focus

    Hao Q, Xu F, Li Y, Evans J. Artificial intelligence tools expand scientists’ impact but contract science’s focus. Nature. 2026;649:1237–1243. https://doi.org/10.1038/s41586-025-09922-y

  63. [68]

    The effects of three different approaches to human-AI collaboration on online collaborative learning

    Gyasi JF, Zheng L, Love SF, Boateng FO. The effects of three different approaches to human-AI collaboration on online collaborative learning. Educational Technology & Society. 2025;28(2):373–392. https://doi.org/10.30191/ETS.202504_28(2).TP07

  64. [69]

    Combining human and artificial intelligence for enhanced AI literacy in higher education

    Tzirides AO, Zapata G, Kastania NP, Saini AK, Castro V, Ismael SA, et al. Combining human and artificial intelligence for enhanced AI literacy in higher education. Computers and Education Open. 2024;6:100184. https://doi.org/10.1016/j.caeo.2024.100184

  65. [70]

    Generative AI: A double-edged sword for creative thinking learning — Evidence from facial expressions and fNIRS

    Song X, Zhang Y, Lu Z, Xu L, Shen H. Generative AI: A double-edged sword for creative thinking learning — Evidence from facial expressions and fNIRS. Computers & Education. 2026;247:105578. https://doi.org/10.1016/j.compedu.2026.105578

  66. [71]

    Generative AI as a reflective scaffold in a UAV-based STEM project: A mixed-methods study on students’ higher-order thinking and cognitive transformation

    Chen SY, Chen WC, Lai CF. Generative AI as a reflective scaffold in a UAV-based STEM project: A mixed-methods study on students’ higher-order thinking and cognitive transformation. Education and Information Technologies. 2025;30(17):24787–24814. https://doi.org/10.1007/s10639-025-13758-4

  67. [72]

    Beyond the “Wow” Factor: Using Generative AI for Increasing Generative Sense-Making

    Makransky G, Shiwalia BM, Herlau T, Blurton S. Beyond the “Wow” Factor: Using Generative AI for Increasing Generative Sense-Making. Educational Psychology Review. 2025;37(3). https://doi.org/10.1007/s10648-025-10039-x

  68. [73]

    Towards reliable generative AI-driven scaffolding: Reducing hallucinations and enhancing quality in self-regulated learning support

    Qian K, Liu S, Li T, Raković M, Li X, Guan R, et al. Towards reliable generative AI-driven scaffolding: Reducing hallucinations and enhancing quality in self-regulated learning support. Computers & Education. 2026;240:105448. https://doi.org/10.1016/j.compedu.2025.105448

  69. [74]

    Development and implementation of a generative AI-enhanced simulation to enhance problem-solving skills for pre-service teachers

    Lim J, Lee U, Koh J, Jeong Y, Lee Y, Byun G, et al. Development and implementation of a generative AI-enhanced simulation to enhance problem-solving skills for pre-service teachers. Computers & Education. 2025;232:105306. https://doi.org/10.1016/j.compedu.2025.105306

  70. [75]

    AI Tutoring Outperforms In-Class Active Learning: An RCT Introducing a Novel Research-Based Design in an Authentic Educational Setting

    Kestin G, Miller K, Klales A, Milbourne T, Ponti G. AI Tutoring Outperforms In-Class Active Learning: An RCT Introducing a Novel Research-Based Design in an Authentic Educational Setting. Scientific Reports. 2025;15(1). https://doi.org/10.1038/s41598-025-97652-6

  71. [76]

    Human and Artificial Intelligence Collaboration for Socially Shared Regulation in Learning

    Järvelä S, Nguyen A, Hadwin A. Human and Artificial Intelligence Collaboration for Socially Shared Regulation in Learning. British Journal of Educational Technology. 2023;54(5):1057–1076. https://doi.org/10.1111/bjet.13325

  72. [77]

    A Qualitative Systematic Review on AI Empowered Self-Regulated Learning in Higher Education

    Lan M, Zhou X. A Qualitative Systematic Review on AI Empowered Self-Regulated Learning in Higher Education. npj Science of Learning. 2025;10(1):21. https://doi.org/10.1038/s41539-025-00319-0

  73. [78]

    Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review

    Yan L, Sha L, Zhao L, Li Y, Martinez-Maldonado R, Chen G, et al. Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review. British Journal of Educational Technology. 2024;55(1):90–112. https://doi.org/10.1111/bjet.13370

  74. [79]

    Agency as a system property in human—AI interaction in education

    Cukurova M. Agency as a system property in human—AI interaction in education. British Journal of Educational Technology. 2026; in press. https://doi.org/10.1111/bjet.70060

  75. [80]

    Towards hybrid human-AI learning technologies

    Molenaar I. Towards hybrid human-AI learning technologies. European Journal of Education. 2022;57(4):632–645. https://doi.org/10.1111/ejed.12527

  76. [81]

    Enhancing self-directed learning and Python mastery through integration of a large language model and learning analytics dashboard

    Liu M, Wu Z, Dai H, Su Y, Malik L, Liao J, et al. Enhancing self-directed learning and Python mastery through integration of a large language model and learning analytics dashboard. British Journal of Educational Technology. 2025. https://doi.org/10.1111/bjet.70005

  77. [82]

    AI assistance in peer feedback provision: Pedagogically sound, but minimally adopted

    Pozdniakov S, Brazil J, Banihashem SK, Noroozi O, Sadiq S, Khosravi H, et al. AI assistance in peer feedback provision: Pedagogically sound, but minimally adopted. Computers & Education. 2026;248:105591. https://doi.org/10.1016/j.compedu.2026.105591

  78. [83]

    The Brain Side of Human-AI Interactions in the Long-Term: The “3R Principle”

    Rossi S, Fraccaro V, Manzotti R. The Brain Side of Human-AI Interactions in the Long-Term: The “3R Principle”. npj Artificial Intelligence. 2026;2(1):15. https://doi.org/10.1038/s44387-025-00063-1

  79. [84]

    ChatGPT in education: An effect in search of a cause

    Weidlich J, Gašević D, Drachsler H, Kirschner P. ChatGPT in education: An effect in search of a cause. Journal of Computer Assisted Learning. 2025;41(5):e70105

  80. [85]

    The effects of generative AI agents and scaffolding on enhancing students’ comprehension of visual learning analytics

    Yan L, Martinez-Maldonado R, Jin Y, Echeverria V, Milesi M, Fan J, et al. The effects of generative AI agents and scaffolding on enhancing students’ comprehension of visual learning analytics. Computers & Education. 2025;234:105322. https://doi.org/10.1016/j.compedu.2025.105322
