Recognition: unknown
Perturbation is All You Need for Extrapolating Language Models
Pith reviewed 2026-05-08 17:01 UTC · model grok-4.3
The pith
Perturbing input prefixes into semantic neighbors enables language models to extrapolate beyond their training data support.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
A perturbation-based procedure that converts the prefix into a semantic neighbor and conditions next-token prediction on the perturbed version creates a hierarchical model whose extrapolability properties can be characterized theoretically, leading to consistent gains in out-of-support prediction on both synthetic and real language data while preserving in-support performance.
What carries the argument
The perturbation procedure that maps an exact prefix to a semantic neighbor, inducing a pre-post-additive noise structure within the hierarchical model for next-token prediction.
If this is right
- Consistent gains in accuracy on token sequences outside the empirical support of the training corpus.
- Competitive performance retained on sequences inside the training support.
- A rigorous theoretical characterization of extrapolability for models trained with this noise structure.
- A practical alternative to standard autoregressive training that requires only prefix perturbation.
Where Pith is reading between the lines
- The same perturbation idea could be tested on other sequential prediction domains such as time series or protein sequences.
- If the noise structure proves robust, training corpora could be smaller while still achieving reliable extrapolation.
- The method invites direct comparisons with other regularization techniques that aim to improve generalization outside observed data.
Load-bearing premise
Transforming the prefix into a semantic neighbor produces a useful pre-post-additive noise structure whose extrapolability properties transfer to real language data.
What would settle it
An experiment on held-out sequences where the perturbed-prefix model shows no gain, or a loss, in prediction accuracy compared with the exact-prefix baseline on out-of-support examples.
Figures
read the original abstract
We introduce a simple yet powerful framework for training large language models. In contrast to the standard autoregressive next-token prediction based on an exact prefix, we propose a perturbation-based procedure that first transforms the prefix into a semantic neighbor and then conditions on this perturbed variant for next-token prediction. This yields a hierarchical model with a pre-post-additive noise structure. Within this framework, we develop a rigorous theory of extrapolability, namely, the capacity of a model class to make reliable predictions for token sequences that lie outside the empirical support of the training corpus. We evaluate the finite-sample performance of the proposed procedure using both synthetic and real-world language data. Results show that the proposed method consistently improves out-of-support prediction while maintaining competitive in-support performance, demonstrating that perturbation offers a practical route to language modeling.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces a perturbation-based training framework for large language models. Rather than standard autoregressive next-token prediction on exact prefixes, the method first maps each prefix to a semantic neighbor and then conditions next-token prediction on the perturbed prefix. This construction is claimed to produce a hierarchical model with a pre-post-additive noise structure, for which the authors develop a rigorous theory of extrapolability (reliable prediction outside the empirical support of the training data). Finite-sample performance is evaluated on both synthetic and real-world language data, with the claim that the procedure improves out-of-support prediction while remaining competitive on in-support tasks.
Significance. If the claimed rigorous characterization of extrapolability holds and the empirical gains prove robust and reproducible, the work would offer a conceptually simple yet theoretically grounded route to improving generalization in sequence models. The idea of inducing a controllable noise structure via semantic perturbation could influence how practitioners think about training objectives beyond standard cross-entropy, particularly for tasks requiring reliable extrapolation.
major comments (2)
- [Abstract and theoretical development] The central theoretical contribution rests on the assertion that the semantic-neighbor perturbation induces a pre-post-additive noise structure whose extrapolability properties admit a rigorous characterization. However, the manuscript provides no explicit definition of the semantic-neighbor map, no statement of the noise model, and no proof sketch or derivation showing how extrapolability follows. This renders the theory unevaluable and makes the transfer from synthetic constructions to real, long-range-dependent language distributions impossible to assess.
- [Empirical evaluation] The empirical claim that the method 'consistently improves out-of-support prediction' is load-bearing for the practical contribution. Yet the abstract (and, by extension, the reported evaluation) supplies no experimental details, metrics, error bars, or description of how out-of-support sequences are constructed or identified. Without these, it is impossible to determine whether the reported gains are statistically meaningful or whether they could be explained by the perturbation acting as a form of data augmentation rather than the claimed noise structure.
minor comments (2)
- [Abstract] The abstract would benefit from a concise statement of the precise perturbation operator and the resulting hierarchical model before asserting the existence of a rigorous theory.
- [Theoretical framework] Notation for the pre- and post-perturbation quantities should be introduced explicitly and used consistently when describing the noise structure.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed review. We address each major comment below and outline the revisions we will make to improve clarity and evaluability of both the theoretical and empirical contributions.
read point-by-point responses
-
Referee: [Abstract and theoretical development] The central theoretical contribution rests on the assertion that the semantic-neighbor perturbation induces a pre-post-additive noise structure whose extrapolability properties admit a rigorous characterization. However, the manuscript provides no explicit definition of the semantic-neighbor map, no statement of the noise model, and no proof sketch or derivation showing how extrapolability follows. This renders the theory unevaluable and makes the transfer from synthetic constructions to real, long-range-dependent language distributions impossible to assess.
Authors: We agree that the abstract does not contain the explicit definitions or derivations, which limits immediate evaluability. In the revised manuscript we will add a concise definition of the semantic-neighbor map (a token-wise replacement based on embedding-space proximity) and state the pre-post-additive noise model explicitly. We will also insert a short proof sketch in the main text (or appendix) deriving the extrapolability bound from the hierarchical structure. These additions will clarify how the framework extends to real language data under the maintained assumption that semantic embeddings induce a suitable noise hierarchy. revision: yes
-
Referee: [Empirical evaluation] The empirical claim that the method 'consistently improves out-of-support prediction' is load-bearing for the practical contribution. Yet the abstract (and, by extension, the reported evaluation) supplies no experimental details, metrics, error bars, or description of how out-of-support sequences are constructed or identified. Without these, it is impossible to determine whether the reported gains are statistically meaningful or whether they could be explained by the perturbation acting as a form of data augmentation rather than the claimed noise structure.
Authors: We acknowledge that the abstract omits the necessary experimental specifics. In the revision we will expand the abstract to include the evaluation metrics (perplexity and next-token accuracy), a brief description of out-of-support construction (held-out prefix combinations), and mention of error bars from repeated trials. We will also highlight the ablation studies already present in the full manuscript that compare against standard augmentation baselines, thereby supporting that gains arise from the induced noise structure rather than augmentation alone. revision: yes
Circularity Check
No significant circularity; derivation is self-contained
full rationale
The paper introduces a perturbation procedure that defines a new hierarchical model class with pre-post-additive noise, then derives extrapolability properties inside that class and validates on both synthetic and real data. No equations or claims reduce by construction to fitted inputs, self-citations, or renamed known results; the theory is developed from the stated model assumptions rather than presupposing the target extrapolability result. The empirical improvements are reported separately from the theoretical development, leaving the central claims independent of the inputs they are tested against.
Axiom & Free-Parameter Ledger
Forward citations
Cited by 1 Pith paper
-
Learning Perturbations to Extrapolate Your LLM
A learnable continuous perturbation framework for LLM token prefixes via latent vector transformations, optimized through unbiased estimating equations, yields gains in out-of-domain performance.
Reference graph
Works this paper leans on
-
[1]
Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation
Eikema, Bryan and Aziz, Wilker. Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation. Proceedings of the 28th International Conference on Computational Linguistics. 2020. doi:10.18653/v1/2020.coling-main.398
-
[2]
Journal of the American Statistical Association , volume=
Debiasing watermarks for large language models via maximal coupling , author=. Journal of the American Statistical Association , volume=. 2025 , publisher=
2025
-
[3]
Journal of the Royal Statistical Society Series B: Statistical Methodology , volume=
Robust detection of watermarks for large language models under human edits , author=. Journal of the Royal Statistical Society Series B: Statistical Methodology , volume=. 2026 , publisher=
2026
-
[4]
The Implicit Curriculum: Learning Dynamics in RL with Verifiable Rewards
On the Learning Dynamics of RLVR at the Edge of Competence , author=. arXiv preprint arXiv:2602.14872 , year=
work page internal anchor Pith review Pith/arXiv arXiv
-
[5]
arXiv preprint arXiv:2410.02504 , year=
Dual active learning for reinforcement learning from human feedback , author=. arXiv preprint arXiv:2410.02504 , year=
-
[6]
The Annals of Statistics , volume=
A statistical framework of watermarks for large language models: Pivot, detection efficiency and optimal rules , author=. The Annals of Statistics , volume=. 2025 , publisher=
2025
-
[7]
arXiv preprint arXiv:2601.06586 , year=
Detecting LLM-Generated Text with Performance Guarantees , author=. arXiv preprint arXiv:2601.06586 , year=
-
[8]
Journal of the American Statistical Association , volume=
Ranking inferences based on the top choice of multiway comparisons , author=. Journal of the American Statistical Association , volume=. 2025 , publisher=
2025
-
[9]
arXiv preprint arXiv:2509.01847 , year=
Uncertainty Quantification for Ranking with Heterogeneous Preferences , author=. arXiv preprint arXiv:2509.01847 , year=
-
[10]
Kernelized Advantage Estimation: From Nonparametric Statistics to LLM Reasoning
Kernelized Advantage Estimation: From Nonparametric Statistics to LLM Reasoning , author=. arXiv preprint arXiv:2604.28005 , year=
work page internal anchor Pith review Pith/arXiv arXiv
-
[11]
arXiv preprint arXiv:2603.01162v3 , year=
Demystifying group relative policy optimization: Its policy gradient is a u-statistic , author=. arXiv preprint arXiv:2603.01162 , year=
-
[12]
Robust reinforcement learning from human feedback for large language models fine-tuning , author=. arXiv preprint arXiv:2504.03784 , year=
-
[13]
Contextual Online Uncertainty-Aware Preference Learning for Human Feedback
Contextual online uncertainty-aware preference learning for human feedback , author=. arXiv preprint arXiv:2504.19342 , year=
work page internal anchor Pith review Pith/arXiv arXiv
-
[14]
arXiv preprint arXiv:2602.08259 , year=
A Statistical Framework for Alignment with Biased AI Feedback , author=. arXiv preprint arXiv:2602.08259 , year=
-
[16]
Privacy-Preserving Reinforcement Learning from Human Feedback via Decoupled Reward Modeling , author=. arXiv preprint arXiv:2603.22563 , year=
-
[17]
On the Algorithmic Bias of Aligning Large Language Models with
Xiao, Jiancong and Li, Ziniu and Xie, Xingyu and Getzen, Emily and Fang, Cong and Long, Qi and Su, Weijie , journal =. On the Algorithmic Bias of Aligning Large Language Models with. 2025 , publisher =
2025
-
[18]
Statistical impossibility and possibility of aligning llms with human preferences: From condorcet paradox to nash equilibrium , author=. arXiv preprint arXiv:2503.10990 , year=
work page internal anchor Pith review Pith/arXiv arXiv
-
[19]
arXiv preprint arXiv:2512.03208 , year=
Uncertainty Quantification for Large Language Model Reward Learning under Heterogeneous Human Feedback , author=. arXiv preprint arXiv:2512.03208 , year=
-
[20]
Low-rank contextual reinforcement learning from heterogeneous human feedback , author=. arXiv preprint arXiv:2412.19436 , year=
-
[21]
Kingma and Jimmy Ba , title =
Diederik P. Kingma and Jimmy Ba , title =. 3rd International Conference on Learning Representations,
-
[22]
Openai gpt-5 system card , author=. arXiv preprint arXiv:2601.03267 , year=
work page internal anchor Pith review Pith/arXiv arXiv
-
[23]
Scaling Laws for Neural Language Models
Scaling laws for neural language models , author=. arXiv preprint arXiv:2001.08361 , year=
work page internal anchor Pith review arXiv 2001
-
[24]
Forty-first International Conference on Machine Learning , year=
Position: Will we run out of data? Limits of LLM scaling based on human-generated data , author=. Forty-first International Conference on Machine Learning , year=
-
[25]
Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities , author=. arXiv preprint arXiv:2507.06261 , year=
work page internal anchor Pith review arXiv
-
[26]
Advances in neural information processing systems , volume=
Language models are few-shot learners , author=. Advances in neural information processing systems , volume=
-
[27]
2025 , journal=
Qwen3 Technical Report , author=. 2025 , journal=
2025
-
[28]
BERT : Pre-training of Deep Bidirectional Transformers for Language Understanding
Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina. BERT : Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North A merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 2019. doi:10.18653/v...
-
[29]
Attention is all you need , year =
Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser,. Attention is all you need , year =. Proceedings of the 31st International Conference on Neural Information Processing Systems , pages =
-
[30]
2019 , editor =
Song, Kaitao and Tan, Xu and Qin, Tao and Lu, Jianfeng and Liu, Tie-Yan , booktitle =. 2019 , editor =
2019
-
[31]
2018 , institution=
Improving language understanding by generative pre-training , author=. 2018 , institution=
2018
- [32]
-
[33]
Text summarization branches out , pages=
Rouge: A package for automatic evaluation of summaries , author=. Text summarization branches out , pages=
-
[34]
International Conference on Learning Representations , year=
Decoupled Weight Decay Regularization , author=. International Conference on Learning Representations , year=
-
[35]
Hierarchical neural story generation
Fan, Angela and Lewis, Mike and Dauphin, Yann. Hierarchical Neural Story Generation. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018. doi:10.18653/v1/P18-1082
-
[36]
International Conference on Learning Representations , year=
Pointer Sentinel Mixture Models , author=. International Conference on Learning Representations , year=
-
[37]
Pillutla, Krishna and Liu, Lang and Thickstun, John and Welleck, Sean and Swayamdipta, Swabha and Zellers, Rowan and Oh, Sewoong and Choi, Yejin and Harchaoui, Zaid , title =. J. Mach. Learn. Res. , month = jan, articleno =. 2023 , issue_date =
2023
-
[38]
2019 , institution=
Language models are unsupervised multitask learners , author=. 2019 , institution=
2019
-
[39]
2022 , journal=
OPT: Open Pre-trained Transformer Language Models , author=. 2022 , journal=
2022
-
[40]
Zenodo , year=
Gpt-neo: Large scale autoregressive language modeling with mesh-tensorflow , author=. Zenodo , year=
-
[41]
Chenze Shao and Zhengrui Ma and Min Zhang and Yang Feng , booktitle=. Beyond
-
[42]
R e L earn: Unlearning via Learning for Large Language Models
Xu, Haoming and Zhao, Ningyuan and Yang, Liming and Zhao, Sendong and Deng, Shumin and Wang, Mengru and Hooi, Bryan and Oo, Nay and Chen, Huajun and Zhang, Ningyu. R e L earn: Unlearning via Learning for Large Language Models. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025. doi:10.1865...
-
[43]
The Eleventh International Conference on Learning Representations , year=
ReAct: Synergizing Reasoning and Acting in Language Models , author=. The Eleventh International Conference on Learning Representations , year=
-
[44]
Retrieval-Augmented Generative Question Answering for Event Argument Extraction
Du, Xinya and Ji, Heng. Retrieval-Augmented Generative Question Answering for Event Argument Extraction. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022. doi:10.18653/v1/2022.emnlp-main.307
-
[45]
GPT 3 M ix: Leveraging Large-scale Language Models for Text Augmentation
Yoo, Kang Min and Park, Dongju and Kang, Jaewook and Lee, Sang-Woo and Park, Woomyoung. GPT 3 M ix: Leveraging Large-scale Language Models for Text Augmentation. Findings of the Association for Computational Linguistics: EMNLP 2021. 2021. doi:10.18653/v1/2021.findings-emnlp.192
-
[46]
Text data augmentation for large language models: a comprehensive survey of methods, challenges, and opportunities
Chai, Yaping and Xie, Haoran and Qin S., Joe. Text data augmentation for large language models: a comprehensive survey of methods, challenges, and opportunities. Artificial Intelligence Review. 2026
2026
-
[47]
Generative Data Augmentation for Commonsense Reasoning
Yang, Yiben and Malaviya, Chaitanya and Fernandez, Jared and Swayamdipta, Swabha and Le Bras, Ronan and Wang, Ji-Ping and Bhagavatula, Chandra and Choi, Yejin and Downey, Doug. Generative Data Augmentation for Commonsense Reasoning. Findings of the Association for Computational Linguistics: EMNLP 2020. 2020. doi:10.18653/v1/2020.findings-emnlp.90
-
[48]
and Gangal, Varun and Kang, Dongyeop and Mitamura, Teruko and Hovy, Eduard
Feng, Steven Y. and Gangal, Varun and Kang, Dongyeop and Mitamura, Teruko and Hovy, Eduard. G en A ug: Data Augmentation for Finetuning Text Generators. Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. 2020. doi:10.18653/v1/2020.deelio-1.4
-
[49]
Feng, Steven Y. and Li, Aaron W. and Hoey, Jesse. Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019. doi:10.18653/v1/D19-1272
-
[50]
The Eleventh International Conference on Learning Representations , year=
Tailoring Language Generation Models under Total Variation Distance , author=. The Eleventh International Conference on Learning Representations , year=
-
[51]
Zhu , booktitle=
Siyu Ren and Zhiyong Wu and Kenny Q. Zhu , booktitle=
-
[52]
M ix CE : Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies
Zhang, Shiyue and Wu, Shijie and Irsoy, Ozan and Lu, Steven and Bansal, Mohit and Dredze, Mark and Rosenberg, David. M ix CE : Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023. doi:10.18653/v1/2023.acl...
-
[53]
EDA : Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks
Wei, Jason and Zou, Kai. EDA : Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019. doi:10.18653/v1/D19-1670
-
[54]
Journal of the Royal Statistical Society Series B: Statistical Methodology , volume =
Shen, Xinwei and Meinshausen, Nicolai , title =. Journal of the Royal Statistical Society Series B: Statistical Methodology , volume =. 2025 , month =. doi:10.1093/jrsssb/qkae108 , eprint =
-
[55]
and Zhang, Kun , title =
Kong, Lingjing and Chen, Guangyi and Stojanov, Petar and Li, Haoxuan and Xing, Eric P. and Zhang, Kun , title =. Proceedings of the 38th International Conference on Neural Information Processing Systems , articleno =. 2024 , isbn =
2024
-
[56]
Large Language Models for Anomaly and Out-of-Distribution Detection: A Survey
Xu, Ruiyao and Ding, Kaize. Large Language Models for Anomaly and Out-of-Distribution Detection: A Survey. Findings of the Association for Computational Linguistics: NAACL 2025. 2025. doi:10.18653/v1/2025.findings-naacl.333
-
[57]
A Neural Probabilistic Language Model , volume =
Bengio, Yoshua and Ducharme, R\'. A Neural Probabilistic Language Model , volume =. Advances in Neural Information Processing Systems , editor =
-
[58]
Proceedings of the 41st International Conference on Machine Learning , articleno =
Shao, Chenze and Meng, Fandong and Liu, Yijin and Zhou, Jie , title =. Proceedings of the 41st International Conference on Machine Learning , articleno =. 2024 , publisher =
2024
-
[59]
Tilmann Gneiting and Adrian E Raftery , title =. Journal of the American Statistical Association , volume =. 2007 , publisher =. doi:10.1198/016214506000001437 , eprint =
-
[60]
The geometry of proper scoring rules , volume =
Dawid, Alexander , year =. The geometry of proper scoring rules , volume =. Annals of the Institute of Statistical Mathematics , doi =
-
[61]
2025 , eprint=
Distributional Diffusion Models with Scoring Rules , author=. 2025 , eprint=
2025
-
[62]
Foundations of statistical inference , pages=
Measuring information and uncertainty , author=. Foundations of statistical inference , pages=
-
[63]
Measuring information and uncertainty
Comment on “Measuring information and uncertainty” by Robert J. Buehler , author=. Foundations of Statistical Inference , pages=
-
[64]
Axiomatic Characterization of the Quadratic Scoring Rule , url =
Selten, Reinhard , year=. Axiomatic Characterization of the Quadratic Scoring Rule , volume=. doi:10.1023/A:1009957816843 , journal=
-
[65]
1964 , institution=
Belief states: A preliminary empirical study , author=. 1964 , institution=
1964
-
[66]
Monthly weather review , volume=
Verification of forecasts expressed in terms of probability , author=. Monthly weather review , volume=. 1950 , publisher=
1950
-
[67]
Information , VOLUME =
Wang, Jiapeng and Dong, Yihong , TITLE =. Information , VOLUME =. 2020 , NUMBER =
2020
-
[68]
2007 , booktitle=
A Tutorial on Energy-Based Learning , author=. 2007 , booktitle=
2007
-
[69]
2025 , eprint=
Autoregressive Language Models are Secretly Energy-Based Models: Insights into the Lookahead Capabilities of Next-Token Prediction , author=. 2025 , eprint=
2025
-
[70]
The Thirteenth International Conference on Learning Representations , year=
Energy-Based Diffusion Language Models for Text Generation , author=. The Thirteenth International Conference on Learning Representations , year=
-
[71]
A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation
Forristal, Jarad and Mireshghallah, Fatemehsadat and Durrett, Greg and Berg-Kirkpatrick, Taylor. A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation. Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL). 2023. doi:10.18653/v1/2023.conll-1.26
-
[72]
and Johnson, Charles R
Horn, Roger A. and Johnson, Charles R. , edition =. Matrix Analysis , year =
-
[73]
Foundations and Trends
A Primer on Reproducing Kernel Hilbert Spaces , doi =. Foundations and Trends. 2015 , volume =
2015
-
[74]
Modern Discrete Probability: An Essential Toolkit , DOI=
Roch, Sébastien , year=. Modern Discrete Probability: An Essential Toolkit , DOI=
-
[75]
A computational framework for conceptual blending , journal =. 2018 , issn =. doi:https://doi.org/10.1016/j.artint.2017.11.005 , author =
-
[76]
2002 , address =
Gilles Fauconnier and Mark Turner , title =. 2002 , address =
2002
-
[77]
Schafer and Benjamin K
Manami Sato and Amy J. Schafer and Benjamin K. Bergen , keywords =. Metaphor priming in sentence production: Concrete pictures affect abstract language production , journal =. 2015 , issn =
2015
-
[78]
Journal of Cognitive Neuroscience , year=
Neural Correlates of Analogical Reasoning on Syntactic Patterns , author=. Journal of Cognitive Neuroscience , year=
-
[79]
From words to meaning: A semantic illusion , journal =. 1981 , issn =. doi:https://doi.org/10.1016/S0022-5371(81)90165-1 , author =
-
[80]
Bailey and Vittoria Ferraro , title =
Fernanda Ferreira and Karl G.D. Bailey and Vittoria Ferraro , title =. Current Directions in Psychological Science , volume =. 2002 , doi =. https://doi.org/10.1111/1467-8721.00158 , abstract =
-
[81]
Edward Gibson and Leon Bergen and Steven T. Piantadosi , title =. Proceedings of the National Academy of Sciences , volume =. 2013 , doi =. https://www.pnas.org/doi/pdf/10.1073/pnas.1216438110 , abstract =
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.