Recognition: no theorem link
LLMs and the ZPD
Pith reviewed 2026-05-13 04:52 UTC · model grok-4.3
The pith
LLMs perform primitive thinking through practices rather than distributed representations.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Contrary to claims that LLMs use distributed representations for thinking, the completion model performs primitive thinking in terms of practices. Viewed from this perspective, large language models do not hallucinate but dream, and what is needed is not guard rails but an investigation of the set of cognitive tools that enable common-sense behavior. Interaction is core to human communication rather than an add-on to real understanding.
What carries the argument
Vygotsky's Zone of Proximal Development applied to the LLM completion mechanism, where primitive thinking occurs through the enactment of practices acquired via interaction.
If this is right
- LLM outputs are better understood as dreams than as hallucinations.
- Development efforts should focus on identifying and supplying cognitive tools for common sense rather than adding guard rails.
- Interaction becomes the primary mechanism for building understanding in both human and machine communication.
- LLMs can be expected to show developmental progress in scientific thinking when placed in interactive zones analogous to those described by Vygotsky.
Where Pith is reading between the lines
- AI training regimens could incorporate structured interactive loops to accelerate the acquisition of practice-based reasoning.
- Evaluation benchmarks might shift from static factual accuracy to measures of how well models sustain coherent practices across extended dialogues (a toy sketch of one such measure follows this list).
- The same lens could be applied to other generative systems to test whether their errors also resemble dreams shaped by incomplete practices.
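To make the benchmark idea in the second bullet concrete, here is a minimal sketch of what a practice-coherence score could look like. Everything in it is an illustrative assumption rather than anything proposed in the paper: the keyword cues standing in for a real practice classifier, the names PRACTICE_CUES, classify_practice, and practice_coherence, and the scoring rule itself (the fraction of consecutive model turns that stay within the same practice).

```python
import re
from collections import Counter

# Hypothetical keyword cues standing in for a real classifier of
# conversational "practices"; the labels and cue words are illustrative
# assumptions, not taken from the paper.
PRACTICE_CUES = {
    "apologizing": {"sorry", "apologize", "apologise"},
    "instructing": {"first", "then", "next", "step"},
    "diagnosing": {"symptom", "cause", "check", "error"},
}

def classify_practice(turn: str) -> str:
    """Tag a turn with the practice whose cue words it mentions most."""
    words = set(re.findall(r"[a-z]+", turn.lower()))
    counts = Counter({p: len(words & cues) for p, cues in PRACTICE_CUES.items()})
    practice, hits = counts.most_common(1)[0]
    return practice if hits else "other"

def practice_coherence(model_turns: list[str]) -> float:
    """Fraction of consecutive turns that sustain the same practice:
    a dialogue-level score, unlike per-turn factual accuracy."""
    if len(model_turns) < 2:
        return 1.0
    labels = [classify_practice(t) for t in model_turns]
    sustained = sum(a == b for a, b in zip(labels, labels[1:]))
    return sustained / (len(labels) - 1)

turns = [
    "First, open the config file.",
    "Then edit the port value and save.",
    "Next, restart the service.",
    "Sorry about that, I apologize for the confusion.",
]
print(practice_coherence(turns))  # ~0.67: the practice holds for three turns, then breaks
```

The design choice doing the work is that the unit of evaluation is the transition between turns rather than any single answer, which is what would distinguish a practice-sustaining measure from static factual accuracy.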
Load-bearing premise
That characterizing the LLM completion process as primitive thinking in practices is a valid alternative to the distributed-representation description.
What would settle it
Clear internal evidence that LLMs solve novel reasoning tasks through mechanisms matching distributed representations rather than sequential practice completion would undermine the central claim.
read the original abstract
One hundred years ago Vygotsky and his circle were exploring the nature of consciousness and defining what would become psychology in the Soviet Union. They concluded that children develop "scientific thinking" through interacting with enculturated adults in Zones of Proximal Development or ZPDs. The proposal is that, contrary to the claims of some, the LLM mechanism is not doing thinking with "distributed representations," but rather the completion model is doing "primitive thinking" in terms of *practices*. Viewed from this perspective, it would seem our large language models don't hallucinate, but rather dream, and that what is needed is not "guard rails" but an investigation of the set of cognitive tools that enable us to do things that look like common-sense. The proposal here is that *interaction* is core to human communication rather than just an add-on to "real" understanding.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes that, contrary to standard accounts, large language models (LLMs) perform 'primitive thinking' in terms of practices (drawing on Vygotsky's Zone of Proximal Development) rather than distributed representations. It argues that LLMs 'dream' instead of hallucinate and that interaction is core to communication rather than an add-on, suggesting a shift from guardrails to cognitive tools.
Significance. If the reinterpretation could be grounded in a concrete mapping from transformer mechanisms to practices or supported by falsifiable predictions, it might contribute to discussions on LLM alignment and cognitive modeling. As an unadorned analogy without derivations, data, or comparisons, its significance remains limited to stimulating philosophical debate in AI.
major comments (2)
- [Abstract] The claim that 'the completion model is doing primitive thinking in terms of practices' rather than distributed representations is presented as a direct alternative without any technical mapping from attention, embeddings, or next-token prediction to specific practices, and without any argument establishing why the practices ontology is mechanistically superior or more accurate.
- [Abstract] The assertion that LLMs 'don't hallucinate, but rather dream' and that interaction is 'core' rather than an add-on is introduced as a consequence of the ZPD perspective, but no independent benchmark, error analysis, or falsifiable test is supplied to distinguish this view from standard accounts or to show it better matches observed LLM behavior.
Simulated Author's Rebuttal
We thank the referee for their constructive comments. Our manuscript offers a conceptual reinterpretation of LLM behavior through Vygotsky's Zone of Proximal Development, framing that behavior as practice-based primitive thinking; it is a conceptual proposal rather than a technical or empirical analysis. We address the major comments point by point below.
read point-by-point responses
- Referee: [Abstract] The claim that 'the completion model is doing primitive thinking in terms of practices' rather than distributed representations is presented as a direct alternative without any technical mapping from attention, embeddings, or next-token prediction to specific practices, and without any argument establishing why the practices ontology is mechanistically superior or more accurate.
Authors: We acknowledge that the paper provides no technical mapping from transformer components such as attention or embeddings to specific practices, nor does it argue for mechanistic superiority. The proposal is explicitly an alternative ontological framing inspired by Vygotsky, suggesting that LLMs engage in primitive thinking via practices within ZPD-like interactions. This is not presented as replacing distributed representations but as a complementary perspective that may inform alignment discussions. We do not revise the manuscript to include such mappings, as they lie outside the scope of this short conceptual piece. revision: no
- Referee: [Abstract] The assertion that LLMs 'don't hallucinate, but rather dream' and that interaction is 'core' rather than an add-on is introduced as a consequence of the ZPD perspective, but no independent benchmark, error analysis, or falsifiable test is supplied to distinguish this view from standard accounts or to show it better matches observed LLM behavior.
Authors: The referee is correct that no benchmarks, error analyses, or falsifiable tests are supplied. The 'dream' versus 'hallucinate' distinction and the centrality of interaction follow as interpretive consequences of the ZPD analogy, where outputs arise from practice-based generation rather than grounded reference. The manuscript does not claim empirical superiority over standard accounts; it aims to stimulate philosophical and cognitive modeling debate, proposing a shift toward cognitive tools. We make no revisions, as adding empirical tests would alter the paper's exploratory nature. revision: no
Circularity Check
No derivation chain or load-bearing prediction is present; the proposal is an interpretive analogy without technical reduction.
full rationale
The manuscript offers a conceptual reinterpretation of LLM next-token completion as 'primitive thinking' via practices (in the Vygotsky ZPD sense) rather than distributed representations, with related claims that models 'dream' instead of hallucinate and that interaction is core. No equations, parameter fits, uniqueness theorems, or first-principles derivations are supplied or referenced in the provided text. The central move is an assertion of an alternative ontology framed explicitly as 'the proposal,' not a result obtained from prior steps that could reduce to its own inputs by construction. Absent any claimed technical mapping or predictive step, none of the enumerated circularity patterns apply. The work is therefore self-contained as an analogy and receives the default non-circularity finding.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption Vygotsky and colleagues concluded that children develop scientific thinking through interaction with enculturated adults in Zones of Proximal Development.
- ad hoc to paper LLM completion models operate via primitive thinking in terms of practices rather than distributed representations.
invented entities (1)
- primitive thinking: no independent evidence
Reference graph
Works this paper leans on
- [1] It's just glorified autocomplete · work page · 2023
  Introduction: In an article in The New Yorker [1] Geoffrey Hinton is worried that Artificial Neural Networks, with the benefit of massive training data, have unleashed Artificial General Intelligence (AGI) on the world. His counter to the doubters is quoted at length as follows: People say, "It's just glorified autocomplete,"... Now, let's analyze that...
- [2] Natural Intelligence · work page (internal anchor) · arXiv 2026
  Hinton's position is a common one based on a long history of thinking about thinking with symbols and representation. In classic AI this was explicit, with symbols such as BRICK 3 representing an actual brick in the world, and when the bricks were on a table top, this type of reasoning worked ev...
- [3] Speech and Language · work page · 2026
  The above provides a model of "prelinguistic thought"; what about preintellectual speech? Reasoning with symbolic representations has a very long tradition that is often seen as "peaking" in the west with Wittgenstein's Tractatus [16]. Wittgenstein is, however, most famous for moving on, and in the Blue and Brown Books [17] he has an exam...
- [4] Prelinguistic thought meets preintellectual language use: the ZPD
  We see a child, an ape, a cat, or a lizard, performing some task and think that it thinks about the task much as we do [15]. Vygotsky and his coworkers found that prelinguistic thought and preintellectual speech gradually merge to become "scientific". The proposal is that the completion m...
- [5] Pre Intellectual Speech (Recognition) · work page · 2020
  Word Error Rate has been a very useful tool for developing the tools that we have developed, but there is perhaps an assumption built into the measure that there are word-like thoughts in heads, and there is a mapping between sound and thoughts. The proposed alternative is that thoughts are constructed from sounds t...
- [6] Vygotsky's claim was that Type 2 thinking is a cultural product and learnt by the infant in a ZPD
  Conclusion: LLMs don't think like us, and neither do marmosets or toddlers. Vygotsky's claim was that Type 2 thinking is a cultural product and learnt by the infant in a ZPD. For ZPDs to work requires special features from the human interaction engine, but studies of interaction in that space largely assume Type 2 thinking. So how would we study non-symb...
- [7] J. Rothman, "Why the godfather of A.I. fears what he built," The New Yorker, November 13th 2023. Geoffrey Hinton has spent a lifetime teaching computers to learn; now he worries that artificial brains are better than ours. · work page · 2023
- [8] A. Kozulin, Vygotsky's Psychology: A Biography of Ideas. Harvester Wheatsheaf, 1990. · work page · 1990
- [9] S. Harnad, "The symbol grounding problem," Physica D, vol. 42, pp. 335–346, 1990. · work page · 1990
- [10] R. A. Brooks, "Intelligence without representation," Artificial Intelligence, vol. 47, pp. 139–160, 1991. · work page · 1991
- [11] H. Ruthrof, Semantics and the Body. Toronto University Press, 1997. · work page · 1997
- [12] P. E. Agre, "The dynamic structure of everyday life," MIT Artificial Intelligence Laboratory, USA, Tech. Rep. AI-TR-1085, 1988. · work page · 1988
- [13] R. Millikan, "Biosemantics," Journal of Philosophy, vol. 86, no. 6, pp. 281–297, 1989. · work page · 1989
- [14] H. Hendriks-Jansen, Catching Ourselves in the Act. Cambridge, MA: MIT Press, 1996. · work page · 1996
- [15] "Something weird is happening with LLMs and chess," November
- [16]
- [17] "OK, I can partly explain the LLM chess weirdness now," November
- [18] [Online]. Available: https://dynomight.net/more-chess/
- [19] R. C. Arkin, Ed., Behavior-Based Robotics. Cambridge, MA: MIT Press, 1998. · work page · 1998
- [20] D. Ward, D. Silverman, and M. Villalobos, "Introduction: The varieties of enactivism," Topoi, pp. 1–11, Apr. 2017. · work page · 2017
- [21] M. E. Bratman, D. J. Israel, and M. E. Pollack, "Plans and resource-bound practical reasoning," Computational Intelligence, vol. 4, pp. 349–355, 1988. · work page · 1988
- [22] J. L. Kolodner, Case-Based Reasoning. San Mateo, CA: Morgan-Kaufmann Publishers, Inc., 1993. · work page · 1993
- [23] M. Tomasello, The Evolution of Agency: Behavioural Organization from Lizards to Humans. Cambridge, MA: MIT Press, 2022. · work page · 2022
- [24] L. Wittgenstein, Tractatus Logico-Philosophicus. Weimar Republic: Ostwald, 1921. · work page · 1921
- [25] L. Wittgenstein, The Blue and Brown Books: Preliminary Studies for the 'Philosophical Investigations'. Oxford: Blackwell, 1958. · work page · 1958
- [26] J. L. Austin, How to Do Things with Words. Oxford, UK: Clarendon Press, 1955. · work page · 1955
- [27] P. Wallis, "LLMs and the human condition," 2024. [Online]. Available: https://arxiv.org/abs/2402.08403
- [28] R. M. Seyfarth, D. L. Cheney, and P. Marler, "Vervet monkey alarm calls: Semantic communication in a free-ranging primate," Animal Behaviour, vol. 28, no. 4, pp. 1070–1094, 1980. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0003347280800972 · work page · 1980
- [29] L. McMahon, "Glue pizza and eat rocks: Google AI search errors go viral," May 24th 2024. [Online]. Available: https://www.bbc.co.uk/news/articles/cd11gzejgz4o · work page · 2024
- [30] S. C. Levinson, "On the human 'interaction engine'," in Roots of Human Sociality, Enfield and Levinson, Eds. Taylor and Francis Group, 2006, pp. 35–66. [Online]. Available: https://api.semanticscholar.org/CorpusID:143530156 · work page · 2006
- [31] R. Heesen and M. Fröhlich, "Revisiting the human 'interaction engine': comparative approaches to social action coordination," Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 377, no. 1859, p. 20210092, September 2022. · work page · 2022
- [32] P. Wallis, "Robust normative systems: What happens when a normative system fails?" in Abuse: The Darker Side of Human-Computer Interaction (INTERACT '05), A. de Angeli, S. Brahnam, and P. Wallis, Eds., Rome, September 2005. [Online]. Available: http://www.agentabuse.org/ · work page · 2005
- [33] P. Wallis, "An enactivist account of mind reading in natural language understanding," Multimodal Technologies and Interaction, vol. 6, no. 5, 2022. [Online]. Available: https://arxiv.org/abs/2111.06179
- [34] S. Gallagher, Action and Interaction. Oxford University Press, 2020. · work page · 2020
- [35] M. Pafla, M. Hancock, K. Larson, and J. Hoey, "The need for a common ground: Ontological guidelines for a mutual human-AI theory of mind," Hawaii, USA, 2024. [Online]. Available: https://cs.uwaterloo.ca/~jhoey/papers/Paf;a-2024.pdf · work page · 2024
- [36] J. M. Burkart, J. E. C. Adriaense, R. K. Brügger, F. M. Miss, K. Wierucka, and C. P. van Schaik, "A convergent interaction engine: vocal communication among marmoset monkeys," Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 377, no. 1859, p. 20210098, July 2022. [Online]. Available: https://doi.org/10.1098/rstb.2021.0098
- [37] H. Garfinkel, Studies in Ethnomethodology. Prentice-Hall, 1967. · work page · 1967
- [38] P. ten Have, Doing Conversation Analysis: A Practical Guide (Introducing Qualitative Methods). California, USA: SAGE Publications, 1999. · work page · 1999