Developers using AI exhibited the same core problem-solving behaviors as those working without it, but differed in how they became stuck and how they recovered, with AI helping in some situations and hindering in others.
citation dossier
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
why this work matters in Pith
Pith has found this work cited in 16 reviewed papers. Its strongest current cluster is cs.HC (5 papers), and the largest review-status bucket among citing papers is UNVERDICTED (13 papers). For highly cited works, this page shows a dossier first and a bounded explorer second; it never tries to render every citing paper at once.
years: 2026 (16)

representative citing papers
NIRVANA supplies keystroke-level logs, complete ChatGPT dialogues, and copied content from 77 students to reconstruct AI-assisted essay writing and classify students into four behavioral profiles: Lead Authors, Collaborators, Drafters, and Vibe Writers.
AI alignment must move beyond assuming users have fully formed goals and instead provide active cognitive support that helps users form and refine their intent over time.
Critical Inker scaffolds critical reflection during AI-assisted writing via Socratic questioning and visual logical-error feedback, reporting 91.2% argument overlap with ground truth and 87% validity accuracy in a pilot evaluation.
A minimal three-variable dynamical model of human-AI feedback predicts that increasing reliance on AI induces a transition to a low-diversity suboptimal equilibrium, interpreted as an emergent information bottleneck.
Claude Code centers on a model-tool while-loop surrounded by permission systems, context compaction, extensibility hooks, subagent delegation, and session storage; the same design questions yield different answers in OpenClaw's gateway context.
Interviews with 22 developers produced a preliminary reliance-control framework that uses levels of control over AI to identify appropriate reliance in software engineering.
This position paper advocates shifting AI education in materials discovery from basic tool access to a workflow-aligned literacy model that builds scientific judgment and equitable outcomes.
Prober.ai constrains LLMs via personas and JSON schemas to deliver gated, inquiry-based questions on argumentative writing weaknesses, aiming to reduce cognitive debt from AI overuse.
Model collapse threatens AI democratization by disproportionately degrading data and efficiency for low-resource communities.
Advanced LLMs improve EFL writing scores and diversity for lower-proficiency students but correlate with lower expert ratings on deep coherence, acting more as crutches than scaffolds.
AI functions as a determinant of health with ambient and personal exposure types, requiring new epidemiological study designs beyond current experiments.
Student-facilitated workshops in one design class produced AI policies highlighting double standards in disclosure requirements between students and faculty, demonstrating value in participatory governance.
Chatbot AI systems often fail complex needs while projecting authority, contributing to deskilling, labor displacement, economic concentration, and high environmental costs, so alternative pluralistic and task-specific designs are needed.
Student-written counterarguments to AI-generated thesis statements demonstrate logical reasoning as a component of critical thinking, and LLMs can assess such writing at scale with moderate agreement with human raters (Gwet's AC2 ≈ 0.33).
AI safety literature overlooks cognitive deskilling and addiction risks from generative AI despite public concern about them.
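The three-variable dynamical model of human-AI feedback summarized above can be illustrated with a toy simulation. The equations below are a hypothetical stand-in invented for this sketch, not the cited paper's actual model: reliance r, output diversity d, and AI output quality q are coupled so that a capable AI attracts reliance, reliance erodes diversity, and low diversity in turn degrades the AI.

```python
# Toy illustration of a three-variable human-AI feedback loop. The equations
# are hypothetical stand-ins chosen for illustration; the cited paper's
# actual model is not reproduced here.

def simulate(alpha, beta=0.5, gamma=0.1, steps=2000, dt=0.01):
    """Integrate reliance r, diversity d, and AI quality q with Euler steps.

    alpha : how strongly AI quality attracts additional reliance.
    beta  : how strongly reliance erodes output diversity.
    gamma : baseline recovery rate of diversity.
    """
    r, d, q = 0.1, 1.0, 0.5
    for _ in range(steps):
        dr = alpha * q * (1 - r)                    # capable AI draws reliance
        dd = -beta * r * d + gamma * (1 - d)        # reliance erodes diversity
        dq = 0.3 * d * (1 - q) - 0.2 * (1 - d) * q  # AI degrades on low-diversity data
        r = min(max(r + dt * dr, 0.0), 1.0)
        d = min(max(d + dt * dd, 0.0), 1.0)
        q = min(max(q + dt * dq, 0.0), 1.0)
    return r, d, q

r_lo, d_lo, _ = simulate(alpha=0.02)  # weak coupling: diversity largely survives
r_hi, d_hi, _ = simulate(alpha=1.0)   # strong coupling: low-diversity equilibrium
```

Under strong coupling the system settles near the low-diversity equilibrium d ≈ γ/(βr + γ), matching the qualitative transition the summary describes: more reliance, less diversity, a suboptimal fixed point.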
citing papers explorer
-
ChatGPT: Friend or Foe When Comprehending and Changing Unfamiliar Code
Developers using AI exhibited the same core problem-solving behaviors as those working without it, but differed in how they became stuck and how they recovered, with AI helping in some situations and hindering in others.
-
NIRVANA: A Comprehensive Dataset for Reproducing How Students Use Generative AI for Essay Writing
NIRVANA supplies keystroke-level logs, complete ChatGPT dialogues, and copied content from 77 students to reconstruct AI-assisted essay writing and classify students into four behavioral profiles: Lead Authors, Collaborators, Drafters, and Vibe Writers.
-
Alignment has a Fantasia Problem
AI alignment must move beyond assuming users have fully formed goals and instead provide active cognitive support that helps users form and refine their intent over time.
-
Critical Inker: Scaffolding Critical Thinking in AI-Assisted Writing Through Socratic Questioning
Critical Inker scaffolds critical reflection during AI-assisted writing via Socratic questioning and visual logical-error feedback, reporting 91.2% argument overlap with ground truth and 87% validity accuracy in a pilot evaluation.
-
Human-AI Co-Evolution and Epistemic Collapse: A Dynamical Systems Perspective
A minimal three-variable dynamical model of human-AI feedback predicts that increasing reliance on AI induces a transition to a low-diversity suboptimal equilibrium, interpreted as an emergent information bottleneck.
-
Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
Claude Code centers on a model-tool while-loop surrounded by permission systems, context compaction, extensibility hooks, subagent delegation, and session storage; the same design questions yield different answers in OpenClaw's gateway context.
-
Towards an Appropriate Level of Reliance on AI: A Preliminary Reliance-Control Framework for AI in Software Engineering
Interviews with 22 developers produced a preliminary reliance-control framework that uses levels of control over AI to identify appropriate reliance in software engineering.
-
Preparing Students for AI-Powered Materials Discovery: A Workflow-Aligned Framework for AI Literacy, Equity, and Scientific Judgment
This position paper advocates shifting AI education in materials discovery from basic tool access to a workflow-aligned literacy model that builds scientific judgment and equitable outcomes.
-
Prober.ai: Gated Inquiry-Based Feedback via LLM-Constrained Personas for Argumentative Writing Development
Prober.ai constrains LLMs via personas and JSON schemas to deliver gated, inquiry-based questions on argumentative writing weaknesses, aiming to reduce cognitive debt from AI overuse.
-
Position: the Stochastic Parrot in the Coal Mine. Model Collapse is a Threat to Low-Resource Communities
Model collapse threatens AI democratization by disproportionately degrading data and efficiency for low-resource communities.
-
The Crutch or the Ceiling? How Different Generations of LLMs Shape EFL Student Writings
Advanced LLMs improve EFL writing scores and diversity for lower-proficiency students but correlate with lower expert ratings on deep coherence, acting more as crutches than scaffolds.
-
The Epidemiology of Artificial Intelligence
AI functions as a determinant of health with ambient and personal exposure types, requiring new epidemiological study designs beyond current experiments.
-
Participatory, not Punitive: Student-Driven AI Policy Recommendations in a Design Classroom
Student-facilitated workshops in one design class produced AI policies highlighting double standards in disclosure requirements between students and faculty, demonstrating value in participatory governance.
-
What if AI systems weren't chatbots?
Chatbot AI systems often fail complex needs while projecting authority, contributing to deskilling, labor displacement, economic concentration, and high environmental costs, so alternative pluralistic and task-specific designs are needed.
-
Counterargument for Critical Thinking as Judged by AI and Humans
Student-written counterarguments to AI-generated thesis statements demonstrate logical reasoning as a component of critical thinking, and LLMs can assess such writing at scale with moderate agreement with human raters (Gwet's AC2 ≈ 0.33).
-
Brainrot: Deskilling and Addiction are Overlooked AI Risks
AI safety literature overlooks cognitive deskilling and addiction risks from generative AI despite public concern about them.
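The model-tool while-loop at the center of the Claude Code entry above can be sketched in a few lines. Everything here is illustrative: the function names, message format, and stub model are invented for this sketch and are not Claude Code's implementation; a real system would wrap the tool call in permission checks and compact the history as it grows.

```python
# Minimal sketch of the model-tool while-loop agent pattern discussed above.
# All names and the message format are illustrative, not Claude Code's API.

def agent_loop(model, tools, task, max_turns=10):
    """Alternate model calls and tool calls until the model stops requesting tools."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = model(history)                 # model returns text or a tool request
        history.append({"role": "assistant", "content": reply.get("text", "")})
        if reply.get("tool") is None:          # no tool requested: final answer
            return reply["text"]
        result = tools[reply["tool"]](**reply["args"])  # a real agent gates this call
        history.append({"role": "tool", "content": str(result)})
    return None                                # turn budget exhausted

def fake_model(history):
    """Stub standing in for an LLM: call the calculator once, then answer."""
    if any(msg["role"] == "tool" for msg in history):
        return {"tool": None, "text": history[-1]["content"]}
    return {"tool": "add", "args": {"a": 2, "b": 3}, "text": ""}

answer = agent_loop(fake_model, {"add": lambda a, b: a + b}, "add 2 and 3")
```

The loop itself is trivial; as the entry notes, the interesting design space is everything wrapped around it: permission systems, context compaction, hooks, and subagent delegation.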