Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task

16 Pith papers cite this work. Polarity classification is still indexing.

2026: 16 representative citing papers

Developers using AI showed the same core problem-solving behaviors as those without, but differed in how they became stuck and recovered, with AI helping or hindering in specific cases.
NIRVANA supplies keystroke-level logs, complete ChatGPT dialogues, and copied content from 77 students to reconstruct AI-assisted essay writing and classify students into four behavioral profiles: Lead Authors, Collaborators, Drafters, and Vibe Writers.
AI alignment must move beyond assuming users have fully formed goals and instead provide active cognitive support to help form and refine intent over time.
Critical Inker scaffolds critical reflection during AI-assisted writing via Socratic questioning and visual logical-error feedback, reporting 91.2% argument overlap with ground truth and 87% validity accuracy in a pilot evaluation.
A minimal three-variable dynamical model of human-AI feedback predicts that increasing reliance on AI induces a transition to a low-diversity suboptimal equilibrium, interpreted as an emergent information bottleneck.
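The cited paper's actual equations are not reproduced here, but the qualitative claim can be illustrated with a hypothetical three-variable sketch (skill, reliance, diversity are stand-in state variables; the coupling term and parameter values are assumptions, not the paper's):

```python
# Hypothetical sketch of three-variable human-AI feedback dynamics:
# s = human skill, r = reliance on AI, d = output diversity.
# Stronger coupling drives the system toward a low-diversity equilibrium.

def simulate(coupling, steps=20000, dt=0.01):
    s, r, d = 1.0, 0.1, 1.0
    for _ in range(steps):
        ds = -coupling * r * s + 0.05 * (1.0 - s)  # reliance erodes skill
        dr = coupling * (1.0 - s) * (1.0 - r)      # lower skill -> more reliance
        dd = s * (1.0 - d) - r * d                 # skill sustains diversity, reliance suppresses it
        s += dt * ds
        r += dt * dr
        d += dt * dd
        s = min(max(s, 0.0), 1.0)
        r = min(max(r, 0.0), 1.0)
        d = min(max(d, 0.0), 1.0)
    return s, r, d
```

Running `simulate` at low versus high coupling shows the transition the summary describes: diversity settles near a markedly lower equilibrium when the reliance coupling is strong.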
Claude Code centers on a model-tool while-loop surrounded by permission systems, context compaction, extensibility hooks, subagent delegation, and session storage; the same design questions yield different answers in OpenClaw's gateway context.
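The "model-tool while-loop" at the core of that design is a generic agentic pattern and can be sketched minimally. This is an illustration of the pattern, not Claude Code's implementation; the model stub, tool registry, and message shapes below are hypothetical:

```python
# Minimal model-tool while-loop: the model either returns a final answer
# or requests a tool call; tool results are appended to the transcript
# and the loop continues. All names here are hypothetical stand-ins.

TOOLS = {"add": lambda a, b: a + b}

def fake_model(transcript):
    # Stand-in for an LLM call: request one tool use, then finish.
    if not any(m["role"] == "tool" for m in transcript):
        return {"type": "tool_call", "name": "add", "args": (2, 3)}
    result = next(m["content"] for m in transcript if m["role"] == "tool")
    return {"type": "final", "content": f"The sum is {result}"}

def agent_loop(model, user_msg, max_turns=10):
    transcript = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):  # bounded while-loop
        reply = model(transcript)
        if reply["type"] == "final":
            return reply["content"]
        # permission checks and context compaction would hook in here
        out = TOOLS[reply["name"]](*reply["args"])
        transcript.append({"role": "tool", "content": out})
    raise RuntimeError("turn limit reached")
```

The permission system, compaction, hooks, and subagents the summary lists are all interventions on this loop: before a tool runs, after a result lands, or around the transcript itself.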
Interviews with 22 developers produced a preliminary reliance-control framework that uses levels of control over AI to identify appropriate reliance in software engineering.
This position paper advocates shifting AI education in materials discovery from basic tool access to a workflow-aligned literacy model that builds scientific judgment and equitable outcomes.
Prober.ai constrains LLMs via personas and JSON schemas to deliver gated, inquiry-based questions on argumentative writing weaknesses, aiming to reduce cognitive debt from AI overuse.
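Constraining model output with a JSON schema typically means validating each response against a fixed shape before it reaches the student. A minimal stdlib-only sketch in that spirit (the field names and types are illustrative assumptions, not Prober.ai's actual schema):

```python
import json

# Hypothetical schema for a single inquiry question about a writing
# weakness; responses that do not match the shape are rejected.
QUESTION_SCHEMA = {"weakness": str, "question": str, "severity": int}

def validate(raw):
    obj = json.loads(raw)
    if set(obj) != set(QUESTION_SCHEMA):
        raise ValueError("unexpected or missing fields")
    for key, typ in QUESTION_SCHEMA.items():
        if not isinstance(obj[key], typ):
            raise ValueError(f"{key} must be {typ.__name__}")
    return obj
```

Gating then becomes a control-flow decision: only validated questions are released, and malformed model output triggers a retry instead of reaching the user.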
Model collapse threatens AI democratization by disproportionately degrading data and efficiency for low-resource communities.
Advanced LLMs improve EFL writing scores and diversity for lower-proficiency students but correlate with lower expert ratings on deep coherence, acting more as crutches than scaffolds.
AI functions as a determinant of health with ambient and personal exposure types, requiring new epidemiological study designs beyond current experiments.
Student-facilitated workshops in one design class produced AI policies highlighting double standards in disclosure requirements between students and faculty, demonstrating value in participatory governance.
Chatbot AI systems often fail complex needs while projecting authority, contributing to deskilling, labor displacement, economic concentration, and high environmental costs, so alternative pluralistic and task-specific designs are needed.
Student-written counterarguments to AI-generated thesis statements demonstrate logical reasoning as a component of critical thinking, and LLMs can assess such writing at scale with moderate agreement to human raters (Gwet's AC2 ~0.33).
AI safety literature overlooks cognitive deskilling and addiction risks from generative AI despite public concern about them.
citing papers explorer

- What if AI systems weren't chatbots?
  Chatbot AI systems often fail complex needs while projecting authority, contributing to deskilling, labor displacement, economic concentration, and high environmental costs, so alternative pluralistic and task-specific designs are needed.
- Brainrot: Deskilling and Addiction are Overlooked AI Risks
  AI safety literature overlooks cognitive deskilling and addiction risks from generative AI despite public concern about them.