pith. machine review for the scientific record.

arxiv: 2605.12016 · v1 · submitted 2026-05-12 · 💻 cs.AI

Recognition: no theorem link

LLMs and the ZPD

Peter Wallis

Authors on Pith no claims yet

Pith reviewed 2026-05-13 04:52 UTC · model grok-4.3

classification 💻 cs.AI
keywords LLMs · Vygotsky · ZPD · practices · hallucination · interaction · primitive thinking

The pith

LLMs perform primitive thinking through practices rather than distributed representations.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes that large language models achieve their outputs by completing sequences of practices, drawing on Vygotsky's Zones of Proximal Development to explain how such thinking develops. This reframes common errors not as hallucinations but as dreams, and treats interaction as the foundation of communication instead of an optional extra. A reader would care because the view shifts attention from adding external controls to identifying the cognitive tools that produce everyday reasoning. The argument starts from historical psychology and applies it directly to current completion models.

Core claim

Contrary to claims that LLMs use distributed representations for thinking, the completion model performs primitive thinking in terms of practices. Viewed from this perspective, large language models do not hallucinate but dream, and what is needed is not guard rails but an investigation of the set of cognitive tools that enable common-sense behavior. Interaction is core to human communication rather than an add-on to real understanding.

What carries the argument

Vygotsky's Zone of Proximal Development applied to the LLM completion mechanism, where primitive thinking occurs through the enactment of practices acquired via interaction.

If this is right

  • LLM outputs are better understood as dreams than as hallucinations.
  • Development efforts should focus on identifying and supplying cognitive tools for common sense rather than adding guard rails.
  • Interaction becomes the primary mechanism for building understanding in both human and machine communication.
  • LLMs can be expected to show developmental progress in scientific thinking when placed in interactive zones analogous to those described by Vygotsky.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • AI training regimens could incorporate structured interactive loops to accelerate the acquisition of practice-based reasoning.
  • Evaluation benchmarks might shift from static factual accuracy to measures of how well models sustain coherent practices across extended dialogues.
  • The same lens could be applied to other generative systems to test whether their errors also resemble dreams shaped by incomplete practices.

Load-bearing premise

That characterizing the LLM completion process as primitive thinking in practices is a valid alternative to the distributed-representation description.

What would settle it

Clear internal evidence that LLMs solve novel reasoning tasks through mechanisms matching distributed representations rather than sequential practice completion would undermine the central claim.

Figures

Figures reproduced from arXiv: 2605.12016 by Peter Wallis.

Figure 1
Figure 1. A swanee whistle, bellows, and a microphone for an ethnographic study of Levinson’s ‘interaction engine’ and pre-intellectual speech. This switch in a child’s thinking happens over time, only once the child has the “cognitive tools” to progress, and only in interaction with adults. The child learns to think as an adult in Zones of Proximal Development or ZPDs. The proposal is that the completion model of … view at source ↗
Figure 2
Figure 2. “just glorified autocomplete” as a model of Vygotsky’s “natural” intelligence. … hypothesize about what is happening in a rat brain based on its behaviour, we can actually look inside the robot and see how the behaviour is generated. My (simulated) robot has an executive function that reasons over behaviours rather than actions. A behaviour is a reflexive unit that, from a teleological perspective, fulfil… view at source ↗
read the original abstract

One hundred years ago Vygotsky and his circle were exploring the nature of consciousness and defining what would become psychology in the Soviet Union. They concluded that children develop "scientific thinking" through interacting with enculturated adults in Zones of Proximal Development or ZPDs. The proposal is that, contrary to the claims of some, the LLM mechanism is not doing thinking with "distributed representations," but rather the completion model is doing "primitive thinking" in terms of *practices*. Viewed from this perspective, it would seem our large language models don't hallucinate, but rather dream, and that what is needed is not "guard rails" but an investigation of the set of cognitive tools that enable us to do things that look like common-sense. The proposal here is that *interaction* is core to human communication rather than just an add-on to "real" understanding.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 0 minor

Summary. The paper proposes that, contrary to standard accounts, large language models (LLMs) perform 'primitive thinking' in terms of practices (drawing on Vygotsky's Zone of Proximal Development) rather than distributed representations. It argues that LLMs 'dream' instead of hallucinate and that interaction is core to communication rather than an add-on, suggesting a shift from guardrails to cognitive tools.

Significance. If the reinterpretation could be grounded in a concrete mapping from transformer mechanisms to practices or supported by falsifiable predictions, it might contribute to discussions on LLM alignment and cognitive modeling. As an unadorned analogy without derivations, data, or comparisons, its significance remains limited to stimulating philosophical debate in AI.

major comments (2)
  1. [Abstract] Abstract: the claim that 'the completion model is doing primitive thinking in terms of practices' rather than distributed representations is presented as a direct alternative without any technical mapping from attention, embeddings, or next-token prediction to specific practices, nor any argument establishing why the practices ontology is mechanistically superior or more accurate.
  2. [Abstract] Abstract: the assertion that LLMs 'don't hallucinate, but rather dream' and that interaction is 'core' rather than an add-on is introduced as a consequence of the ZPD perspective, but no independent benchmark, error analysis, or falsifiable test is supplied to distinguish this from standard accounts or to show it better matches observed LLM behavior.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive comments. Our manuscript offers a conceptual reinterpretation of LLM behavior through Vygotsky's Zone of Proximal Development, framing it as practice-based primitive thinking rather than a technical or empirical analysis. We address the major comments point by point below.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the claim that 'the completion model is doing primitive thinking in terms of practices' rather than distributed representations is presented as a direct alternative without any technical mapping from attention, embeddings, or next-token prediction to specific practices, nor any argument establishing why the practices ontology is mechanistically superior or more accurate.

    Authors: We acknowledge that the paper provides no technical mapping from transformer components such as attention or embeddings to specific practices, nor does it argue for mechanistic superiority. The proposal is explicitly an alternative ontological framing inspired by Vygotsky, suggesting that LLMs engage in primitive thinking via practices within ZPD-like interactions. This is not presented as replacing distributed representations but as a complementary perspective that may inform alignment discussions. We do not revise the manuscript to include such mappings, as they lie outside the scope of this short conceptual piece. revision: no

  2. Referee: [Abstract] Abstract: the assertion that LLMs 'don't hallucinate, but rather dream' and that interaction is 'core' rather than an add-on is introduced as a consequence of the ZPD perspective, but no independent benchmark, error analysis, or falsifiable test is supplied to distinguish this from standard accounts or to show it better matches observed LLM behavior.

    Authors: The referee is correct that no benchmarks, error analyses, or falsifiable tests are supplied. The 'dream' versus 'hallucinate' distinction and the centrality of interaction follow as interpretive consequences of the ZPD analogy, where outputs arise from practice-based generation rather than grounded reference. The manuscript does not claim empirical superiority over standard accounts; it aims to stimulate philosophical and cognitive modeling debate, proposing a shift toward cognitive tools. We make no revisions, as adding empirical tests would alter the paper's exploratory nature. revision: no

Circularity Check

0 steps flagged

No derivation chain or load-bearing prediction present; proposal is interpretive analogy without technical reduction

full rationale

The manuscript offers a conceptual reinterpretation of LLM next-token completion as 'primitive thinking' via practices (in the Vygotsky ZPD sense) rather than distributed representations, with related claims that models 'dream' instead of hallucinate and that interaction is core. No equations, parameter fits, uniqueness theorems, or first-principles derivations are supplied or referenced in the provided text. The central move is an assertion of an alternative ontology framed explicitly as 'the proposal,' not a result obtained from prior steps that could reduce to its own inputs by construction. Absent any claimed technical mapping or predictive step, none of the enumerated circularity patterns apply. The work is therefore self-contained as an analogy and receives the default non-circularity finding.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The proposal rests on Vygotsky's conclusions about consciousness and ZPDs as background, plus the assumption that LLM behavior can be re-described as practice-based primitive thinking without new evidence. No free parameters are introduced, and the single invented entity is conceptual rather than physical: the core distinction between practices and distributed representations functions as an ad-hoc interpretive lens.

axioms (2)
  • domain assumption Vygotsky and colleagues concluded that children develop scientific thinking through interaction with enculturated adults in Zones of Proximal Development.
    Invoked in the opening of the abstract as the foundational historical claim.
  • ad hoc to paper LLM completion models operate via primitive thinking in terms of practices rather than distributed representations.
    Central re-description of the mechanism with no derivation or comparison provided.
invented entities (1)
  • primitive thinking no independent evidence
    purpose: To characterize LLM behavior as practice-based rather than representation-based.
    New descriptive category introduced to reframe model outputs.

pith-pipeline@v0.9.0 · 5430 in / 1583 out tokens · 36388 ms · 2026-05-13T04:52:47.887593+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

38 extracted references · 38 canonical work pages · 1 internal anchor

  1. [1]

    It’s just glorified autocomplete,

Introduction In an article in The New Yorker [1] Geoffrey Hinton is worried that Artificial Neural Networks, with the benefit of massive training data, have unleashed Artificial General Intelligence (AGI) on the world. His counter to the doubters is quoted at length as follows: People say, “It’s just glorified autocomplete,”... Now, let’s analyze that...

  2. [2]

    LLMs and the ZPD

Natural Intelligence Hinton’s position is a common one based on a long history of thinking about thinking with symbols and representation. In classic AI this was explicit with symbols such as BRICK 3 representing an actual brick in the world, and when the bricks were on a table top, this type of reasoning worked ev...

  3. [3]

    prelinguistic thought

Speech and Language The above provides a model of “prelinguistic thought”, what about preintellectual speech? Reasoning with symbolic representations has a very long tradition that is often seen as “peaking” in the west with Wittgenstein’s Tractatus [16]. Wittgenstein is however most famous for moving on, and in the Blue and Brown Books [17] he has an exam...

  4. [4]

    scientific

Prelinguistic thought meets preintellectual language use: the ZPD We see a child, an ape, a cat, or a lizard, performing some task and think that it thinks about the task much as we do [15]. Vygotsky and his coworkers found that prelinguistic thought and preintellectual speech gradually merge to become “scientific”. The proposal is that the completion m...

  5. [5]

    scientifically

Pre Intellectual Speech (Recognition) Word Error Rate has been a very useful tool for developing the tools that we have developed, but there is perhaps an assumption built into the measure that there are word-like thoughts in heads, and there is a mapping between sound and thoughts. The proposed alternative is that thoughts are constructed from sounds t...

  6. [6]

    Vygotsky’s claim was that Type 2 thinking is a cultural product and learnt by the infant in a ZPD

Conclusion LLMs don’t think like us, and neither do marmosets or toddlers. Vygotsky’s claim was that Type 2 thinking is a cultural product and learnt by the infant in a ZPD. For ZPDs to work requires special features from the human interaction engine, but studies of interaction in that space largely assume type 2 thinking. So how would we study non symb...

  7. [7]

    Why the godfather of a.i. fears what he built,

J. Rothman, “Why the godfather of a.i. fears what he built,” The New Yorker, November 13th 2023. Geoffrey Hinton has spent a lifetime teaching computers to learn. Now he worries that artificial brains are better than ours

  8. [8]

Kozulin, Vygotsky’s Psychology: A biography of ideas

A. Kozulin, Vygotsky’s Psychology: A biography of ideas. Harvester Wheatsheaf, 1990

  9. [9]

    The symbol grounding problem,

S. Harnad, “The symbol grounding problem,” Physica D, vol. 42, pp. 335–346, 1990

  10. [10]

    Intelligence without representation,

R. A. Brooks, “Intelligence without representation,” Artificial Intelligence, vol. 47, pp. 139–160, 1991

  11. [11]

Ruthrof, Semantics and the body

H. Ruthrof, Semantics and the body. Toronto University Press, 1997

  12. [12]

    The dynamic structure of everyday life,

P. E. Agre, “The dynamic structure of everyday life,” MIT Artificial Intelligence Laboratory, USA, Tech. Rep. AI-TR-1085, 1988

  13. [13]

    Biosemantics,

R. Millikan, “Biosemantics,” Journal of Philosophy, vol. 86, no. 6, pp. 281–297, 1989

  14. [14]

Hendriks-Jansen, Catching Ourselves in the Act

H. Hendriks-Jansen, Catching Ourselves in the Act. Cambridge, MA: MIT Press, 1996

  15. [15]

    Something weird is happening with llms and chess,

    “Something weird is happening with llms and chess,” November

  16. [16]

    Available: https://dynomight.net/chess/

    [Online]. Available: https://dynomight.net/chess/

  17. [17]

    Ok, i can partly explain the llm chess weirdness now,

    “Ok, i can partly explain the llm chess weirdness now,” November

  18. [18]

    Available: https://dynomight.net/more-chess/

    [Online]. Available: https://dynomight.net/more-chess/

  19. [19]

R. C. Arkin, Ed., Behavior-Based Robotics. Cambridge, MA: MIT Press, 1998

  20. [20]

    Introduction: The varieties of enactivism,

D. Ward, D. Silverman, and M. Villalobos, “Introduction: The varieties of enactivism,” Topoi, pp. 1–11, Apr. 2017

  21. [21]

    Plans and resource-bound practical reasoning,

M. E. Bratman, D. J. Israel, and M. E. Pollack, “Plans and resource-bound practical reasoning,” Computational Intelligence, vol. 4, pp. 349–355, 1988

  22. [22]

J. L. Kolodner, Case-Based Reasoning. San Mateo, CA: Morgan-Kaufmann Publishers, Inc., 1993

  23. [23]

Tomasello, The Evolution of Agency: Behavioural Organization from Lizards to Humans

M. Tomasello, The Evolution of Agency: Behavioural Organization from Lizards to Humans. Cambridge, MA: MIT Press, 2022

  24. [24]

Wittgenstein, Tractatus Logico-Philosophicus

L. Wittgenstein, Tractatus Logico-Philosophicus. Weimar Republic: Ostwald, 1921

  25. [25]

    Oxford: Blackwell, 1958

——, The Blue and Brown Books: Preliminary Studies for the ‘Philosophical Investigations’. Oxford: Blackwell, 1958

  26. [26]

    J. L. Austin,How to do Things with Words. Oxford, UK: Claren- don Press, 1955

  27. [27]

LLMs and the human condition,

P. Wallis, “LLMs and the human condition,” 2024. [Online]. Available: https://arxiv.org/abs/2402.08403

  28. [28]

    Vervet monkey alarm calls: Semantic communication in a free-ranging primate,

R. M. Seyfarth, D. L. Cheney, and P. Marler, “Vervet monkey alarm calls: Semantic communication in a free-ranging primate,” Animal Behaviour, vol. 28, no. 4, pp. 1070–1094, 1980. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0003347280800972

  29. [29]

    Glue pizza and eat rocks: Google ai search errors go viral,

L. McMahon, “Glue pizza and eat rocks: Google ai search errors go viral,” May 24th 2024. [Online]. Available: https://www.bbc.co.uk/news/articles/cd11gzejgz4o

  30. [30]

    On the human ‘interaction engine’,

    S. C. Levinson, “On the human ‘interaction engine’,” in Roots of Human Sociality, Enfield and Levinson, Eds. Taylor and Francis Group, 2006, pp. 35–66. [Online]. Available: https://api.semanticscholar.org/CorpusID:143530156

  31. [31]

    Revisiting the human ’interaction engine’: comparative approaches to social action coordination,

R. Heesen and M. Fröhlich, “Revisiting the human ’interaction engine’: comparative approaches to social action coordination,” Philosophical Transactions of the Royal Society, London B: Biological Sciences, vol. 377, no. 1859, p. 20210092, September 2022

  32. [32]

    Robust normative systems: What happens when a normative system fails?

P. Wallis, “Robust normative systems: What happens when a normative system fails?” in Abuse: the darker side of Human-Computer Interaction (INTERACT ’05), A. de Angeli, S. Brahnam, and P. Wallis, Eds., Rome, September 2005. [Online]. Available: http://www.agentabuse.org/

  33. [33]

    An enactivist account of mind reading in natural language understanding,

——, “An enactivist account of mind reading in natural language understanding,” Multimodal Technologies and Interaction, vol. 6, no. 5, 2022. [Online]. Available: https://arxiv.org/abs/2111.06179

  34. [34]

    Gallagher,Action and Interaction

    S. Gallagher,Action and Interaction. Oxford University Press, 2020

  35. [35]

    The need for a common ground: Ontological guidelines for a mutual human-ai theory of mind,

    M. Pafla, M. Hancock, K. Larson, and J. Hoey, “The need for a common ground: Ontological guidelines for a mutual human-ai theory of mind,” Hawaii, USA, 2024. [Online]. Available: https://cs.uwaterloo.ca/∼jhoey/papers/Paf;a-2024.pdf

  36. [36]

    A convergent interaction engine: vocal communication among marmoset monkeys,

J. M. Burkart, J. E. C. Adriaense, R. K. Brügger, F. M. Miss, K. Wierucka, and C. P. van Schaik, “A convergent interaction engine: vocal communication among marmoset monkeys,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 377, no. 1859, p. 20210098, 07 2022. [Online]. Available: https://doi.org/10.1098/rstb.2021.0098

  37. [37]

    Garfinkel,Studies in Ethnomethodology

H. Garfinkel, Studies in Ethnomethodology. Prentice-Hall, 1967

  38. [38]

ten Have, Doing Conversation Analysis: A Practical Guide (Introducing Qualitative Methods)

P. ten Have, Doing Conversation Analysis: A Practical Guide (Introducing Qualitative Methods). California, USA: SAGE Publications, 1999