pith. machine review for the scientific record.

arxiv: 2604.27188 · v1 · submitted 2026-04-29 · ⚛️ physics.soc-ph · cs.MA · cs.SI

Recognition: unknown

Nothing Deceives Like Success: Social Learning and the Illusion of Understanding in Science

Authors on Pith · no claims yet

Pith reviewed 2026-05-07 10:15 UTC · model grok-4.3

classification ⚛️ physics.soc-ph · cs.MA · cs.SI
keywords social learning · success bias · illusion of understanding · agent-based simulation · scientific communities · collective problem-solving · inequality in science

The pith

In simulations of scientific communities, success-driven social learning creates an illusion of understanding that reduces actual performance and generates inequality at levels seen in real science.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper uses agent-based simulations to test whether success bias helps or hinders collective theory-building when quality is hard to judge directly. It shows that agents who adopt ideas based on apparent success systematically overestimate their theories' quality, explore fewer alternatives, and perform worse as problems grow complex. When the agents tune their social behavior specifically to maximize perceived success, their true performance declines while inequality in outcomes rises to levels seen in actual science.

Core claim

In agent-based models of collective theory-building, success bias amplifies a persistent gap between perceived and actual performance. Communities that preferentially copy apparently successful theories filter out poor explanations but fail to discover better ones. This effect strengthens with problem complexity. When agents optimize their social learning rules to maximize perceived success, they paradoxically lower their real performance and produce inequality distributions that match those observed in real scientific communities.

What carries the argument

Agent-based simulation of scientists who adopt theories according to observed apparent success, with limited ability to evaluate true explanatory quality.
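
A minimal sketch of the adoption rule this implies, assuming selection over peers' theories is a softmax of their observed apparent success with a temperature that sets success-bias strength (consistent with Figure 2's description); the function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def adoption_probabilities(apparent_success, temperature):
    """Softmax over peers' observed apparent success.

    Low temperature concentrates probability on apparently successful
    theories (strong success bias); high temperature approaches uniform
    copying. The exact functional form is an assumption for illustration.
    """
    scores = np.asarray(apparent_success, dtype=float)
    logits = (scores - scores.max()) / temperature   # stabilized softmax
    weights = np.exp(logits)
    return weights / weights.sum()

rng = np.random.default_rng(0)
apparent = rng.normal(size=10)                 # observed success of 10 peers' theories
p_strong = adoption_probabilities(apparent, temperature=0.1)   # success-biased
p_weak = adoption_probabilities(apparent, temperature=10.0)    # near-uniform
adoptee = rng.choice(10, p=p_strong)           # whom the agent copies a theory from
```

Under the paper's account, the same rule that makes the success-biased choice efficient at discarding weak theories also narrows which theories get explored at all.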

If this is right

  • Success bias narrows the range of explored theories while efficiently discarding weak ones.
  • The gap between perceived and actual performance grows as the underlying problem becomes more complex.
  • Optimizing social learning for perceived success lowers true collective performance.
  • The resulting distribution of success across agents reproduces inequality levels documented in real scientific communities.
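
On the last point, inequality of outcomes across agents is conventionally summarized with a Gini coefficient (Figure 6 compares simulated Gini distributions to empirical citation inequality); the snippet below is a generic computation of that statistic, not the paper's code.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative outcomes: 0 = perfect equality,
    values near 1 = one agent holds almost all the success."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

rng = np.random.default_rng(1)
print(gini(np.ones(100)))            # ~0.0, everyone equally successful
print(gini(rng.pareto(1.5, 100)))    # heavy-tailed success, high inequality
```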

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The model implies that metrics emphasizing visible success may systematically undervalue exploratory work in complex domains.
  • Similar dynamics could appear in other collective search settings where quality is difficult to measure directly, such as technological innovation.
  • Mechanisms that reward evaluation of underlying quality rather than surface success might counteract the performance drop.

Load-bearing premise

Agents have no reliable way to assess true theory quality except by observing apparent success.
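
A minimal sketch of what that premise means operationally, assuming perceived success is a theory's loss on the agent's own experimental sample while actual success is its loss on the hidden ground truth (Figure 3 notes that success is measured as loss, so lower is better); the toy "theory" and distributions below are placeholders.

```python
import numpy as np

def perceived_vs_actual(theory, own_data, sample_ground_truth, n_eval=1000):
    """Perceived loss on the agent's own data vs. actual loss on fresh draws
    from the hidden ground truth the agent cannot query at will."""
    def loss(batch):
        return float(np.mean((theory(batch) - batch) ** 2))  # reconstruction error
    perceived = loss(own_data)                   # the only signal the agent sees
    actual = loss(sample_ground_truth(n_eval))   # inaccessible to the agent
    return perceived, actual

rng = np.random.default_rng(2)
own = rng.normal(0.0, 1.0, size=(50, 4))             # small, unrepresentative sample
truth = lambda n: rng.normal(0.5, 2.0, size=(n, 4))  # broader hidden distribution
shrink = lambda x: 0.5 * x + 0.5 * x.mean(axis=0)    # toy "theory" that compresses data
perceived, actual = perceived_vs_actual(shrink, own, truth)
gap = actual - perceived   # positive gap = the agent overestimates its theory
```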

What would settle it

Empirical evidence that scientists who optimize their behavior for visible success metrics achieve higher actual performance than those who do not would contradict the simulation outcomes.

Figures

Figures reproduced from arXiv: 2604.27188 by Avery W. Louis, Marina Dubova.

Figure 1. Simulation overview. (A) The ground truth consists of Gaussian distributions embedded in uniform noise. Scientists conduct experiments by querying specific configurations of variables and observing the returned values, then build theories—shallow autoencoders that learn compressed representations of the patterns in their data. Scientists evaluate and exchange theories via social learning, sometimes adoptin… view at source ↗

Figure 2. Aspects of social learning. (A) Theory adoption between two autoencoders. The adopter copies one or more entire features from the adoptee’s neural network over to their own network. (B) The effect of success bias and community bias on theory adoption. Success bias (left) controls how strongly selection favors high-scoring theories: low temperature concentrates probability on top performers, while high temp… view at source ↗

Figure 3. Illusions of Understanding. (A) Perceived theory success (teal) remains stable across levels of problem complexity, while actual theory success (green) declines, creating a growing gap between what agents believe about their theories and how those theories actually perform. Note: theory success is reversed, as it is measured as theory loss. (B) This gap between perceived and actual success is larger under … view at source ↗

Figure 4. Community structure and information flow under low and high success bias. (A) view at source ↗

Figure 5. Mechanisms underlying the effects of success bias. (A) view at source ↗

Figure 6. Results of Optimization. (A) Parameter values explored during random (Sobol) and optimization (GP-guided) phases, averaged across six environments. The optimizer converges strongly on high success bias and maximum explanation capacity, while remaining agnostic to community bias and sociality. (B) Distribution of Gini coefficients during the optimization phase. Vertical colored lines show empirical citation… view at source ↗

Figure 7. Comparisons of high- versus low-success bias using a median split. view at source ↗
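
Figure 6 describes a two-phase search over the community's social-learning parameters: a random Sobol phase followed by a GP-guided optimization phase scored by perceived success. Below is a minimal sketch of that pattern with off-the-shelf tools; the objective is a toy stand-in for a full community simulation, and the acquisition rule is a crude upper-confidence-bound choice rather than whatever the paper uses.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def perceived_success(params):
    """Placeholder objective: would normally run a full community simulation
    and return population-average *perceived* theory success."""
    success_bias, community_bias, sociality, capacity = params
    return -((success_bias - 0.9) ** 2 + (capacity - 1.0) ** 2)  # toy surface

# Phase 1: Sobol exploration of a 4-D unit hypercube of parameters.
sobol = qmc.Sobol(d=4, scramble=True, seed=0)
X = sobol.random(32)
y = np.array([perceived_success(x) for x in X])

# Phase 2: a GP surrogate proposes new candidates from a random pool.
rng = np.random.default_rng(1)
for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    pool = rng.random((256, 4))
    mean, std = gp.predict(pool, return_std=True)
    nxt = pool[np.argmax(mean + 0.5 * std)]        # upper-confidence-bound pick
    X = np.vstack([X, nxt])
    y = np.append(y, perceived_success(nxt))

best = X[np.argmax(y)]   # in this toy surface the search drifts toward high success bias
```

The paper's striking result is that when the outer loop rewards perceived success in this way, the converged parameters lower actual success, which the toy objective above deliberately does not capture.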
read the original abstract

Success-driven social learning, in which individuals preferentially adopt the ideas and methods that appear most successful, is a foundational principle of collective behavior across systems ranging from ant colonies to scientific communities. But science is a particular kind of collective search -- one in which the quality of an explanation is itself difficult to assess. Is success bias adaptive in this setting? In agent-based simulations of collective theory building, we find that it is not. Scientists in our model systematically overestimate the quality of their own theories, creating an illusion of understanding: a persistent gap between perceived and actual performance. Success bias amplifies this illusion; communities that favor apparently successful theories explore a narrower range of possibilities, efficiently filtering out poor explanations but failing to discover better ones. This effect intensifies with problem complexity, as scientists in more complex environments become increasingly unable to assess how well their theories actually perform. Most strikingly, when agents optimize their social behavior to maximize the perceived success of their theories, they paradoxically undermine their actual performance, and produce levels of inequality that mirror those found in real scientific communities.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance; this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper uses agent-based simulations of collective theory-building to argue that success-biased social learning is maladaptive in science. Agents preferentially adopt apparently successful theories, which creates a persistent gap between perceived and actual performance (an 'illusion of understanding'), narrows exploration, and worsens with increasing problem complexity. When agents are allowed to optimize their social learning rules to maximize perceived success, actual performance declines while inequality in outcomes rises to levels observed in real scientific communities.

Significance. If the simulation results prove robust under variation of parameters and modeling assumptions, the work would offer a mechanistic account of how success-driven learning can produce deceptive collective outcomes and reproduce empirical patterns of inequality in science. The agent-based approach allows exploration of emergent effects that are difficult to derive analytically, and the finding that optimization over perceived success is self-undermining is a potentially falsifiable prediction with implications for understanding cumulative advantage in research communities.

major comments (3)
  1. [Model description and simulation protocol] The central claims rest entirely on forward simulation outputs, yet the manuscript provides no information on parameter sensitivity, robustness to alternative implementations of 'actual performance' versus 'perceived success', or validation against any empirical benchmarks. Without these checks, it is impossible to determine whether the reported illusion, narrowed exploration, and inequality effects are load-bearing results or artifacts of specific modeling choices.
  2. [Agent decision rules and environment definition] The weakest assumption—that agents have no reliable way to assess true theory quality beyond observing apparent success—is stated as foundational, but the text does not specify how actual performance is computed in the environment or whether agents receive any noisy but informative signal of quality. If even modest direct feedback on quality is introduced, the illusion and performance drop may disappear, undermining the claim that success bias is inherently maladaptive.
  3. [Optimization experiments] The optimization result (agents choosing social rules to maximize perceived success thereby lowering actual performance) is presented as the most striking finding, but no details are given on the optimization procedure, the objective function, or the search space over social behaviors. It is therefore unclear whether this paradox is a general consequence of the setup or specific to the particular optimization method employed.
minor comments (2)
  1. [Abstract and introduction] The abstract and introduction use the phrase 'levels of inequality that mirror those found in real scientific communities' without citing the specific empirical distributions or metrics being matched; a reference or explicit comparison would strengthen the claim.
  2. [Model section] Notation for perceived versus actual performance is introduced without a clear table or equation summarizing the two quantities and their relationship; adding such a summary would improve readability.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive comments, which identify key areas where the manuscript can be strengthened through additional detail and checks. We address each major comment below and will incorporate the necessary revisions.

read point-by-point responses
  1. Referee: [Model description and simulation protocol] The central claims rest entirely on forward simulation outputs, yet the manuscript provides no information on parameter sensitivity, robustness to alternative implementations of 'actual performance' versus 'perceived success', or validation against any empirical benchmarks. Without these checks, it is impossible to determine whether the reported illusion, narrowed exploration, and inequality effects are load-bearing results or artifacts of specific modeling choices.

    Authors: We agree that explicit robustness checks are required. In the revised manuscript we will add a new section reporting parameter sensitivity analyses over ranges of population size, theory count, and complexity. We will also test alternative operationalizations of perceived success (different adoption-weighting schemes) and actual performance (alternative objective metrics) and confirm that the core illusion and inequality patterns remain. For empirical grounding we will expand the discussion to include direct quantitative comparisons between simulated inequality levels and observed distributions of citations and productivity in real scientific communities, while noting the limits of abstract models for full benchmarking. revision: yes

  2. Referee: [Agent decision rules and environment definition] The weakest assumption—that agents have no reliable way to assess true theory quality beyond observing apparent success—is stated as foundational, but the text does not specify how actual performance is computed in the environment or whether agents receive any noisy but informative signal of quality. If even modest direct feedback on quality is introduced, the illusion and performance drop may disappear, undermining the claim that success bias is inherently maladaptive.

    Authors: The model premise is that scientific quality is intrinsically difficult to evaluate directly, which is why agents rely on social signals. We will clarify in the methods that actual performance is defined by a hidden objective function (e.g., predictive accuracy against an underlying ground truth) that agents cannot access. Agents observe only apparent success via adoption or reported outcomes. To address the concern we will add supplementary simulations that introduce controlled levels of noisy direct feedback and show the boundary conditions under which the illusion and performance penalty persist, thereby supporting the claim that success bias is maladaptive precisely when reliable quality signals are absent. revision: yes

  3. Referee: [Optimization experiments] The optimization result (agents choosing social rules to maximize perceived success thereby lowering actual performance) is presented as the most striking finding, but no details are given on the optimization procedure, the objective function, or the search space over social behaviors. It is therefore unclear whether this paradox is a general consequence of the setup or specific to the particular optimization method employed.

    Authors: We will expand the methods section with a complete description of the optimization procedure. Agents' social-learning parameters (success-bias strength and exploration rate) are evolved via a genetic algorithm whose objective is the population-average perceived success. The search space consists of both discrete and continuous parameter combinations; we will report convergence criteria, results from multiple independent runs, and comparisons against random search baselines to establish that the self-undermining effect is robust to the specific optimizer. revision: yes
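
The response describes evolving success-bias strength and exploration rate with a genetic algorithm whose objective is population-average perceived success (note that Figure 6 describes a Sobol-plus-GP search, so treat the GA framing as the rebuttal's own). A minimal sketch of such a loop follows, assuming truncation selection, Gaussian mutation, and a placeholder fitness function; nothing here reproduces the paper's actual optimizer.

```python
import numpy as np

rng = np.random.default_rng(4)

def perceived_fitness(success_bias, exploration_rate):
    """Placeholder for a community run scored by population-average
    *perceived* success at these social-learning parameters."""
    return success_bias - 0.2 * exploration_rate + rng.normal(0, 0.01)

# Population of candidate (success_bias, exploration_rate) pairs in [0, 1]^2.
pop = rng.random((30, 2))
for generation in range(50):
    fitness = np.array([perceived_fitness(sb, er) for sb, er in pop])
    parents = pop[np.argsort(fitness)[-10:]]                  # truncation selection
    children = parents[rng.integers(0, 10, size=30)]          # clone parents
    pop = np.clip(children + rng.normal(0, 0.05, (30, 2)), 0.0, 1.0)  # Gaussian mutation

best_sb, best_er = pop[np.argmax([perceived_fitness(sb, er) for sb, er in pop])]
# The self-undermining result would only show up by re-scoring (best_sb, best_er)
# against *actual* success on the hidden ground truth, which this toy omits.
```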

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper presents results from forward agent-based simulations of scientists using success-biased social learning in a collective theory-building environment. The reported outcomes—an illusion of understanding (perceived vs. actual performance gap), narrower exploration, and higher inequality when agents optimize for perceived success—emerge directly from executing the model under the stated assumptions about agents' inability to evaluate true theory quality. No equations embed the target quantities by definition, no parameters are fitted to subsets of data and then relabeled as predictions, and no load-bearing self-citations or uniqueness theorems are invoked to force the conclusions. The derivation chain consists of explicit simulation rules and their observable outputs rather than any self-referential reduction.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract does not enumerate model parameters or assumptions, but the claims necessarily rest on unstated simulation parameters (e.g., number of agents, problem complexity measure, success observation noise) and the core modeling choice that agents cannot directly observe theory quality.

pith-pipeline@v0.9.0 · 5488 in / 1156 out tokens · 63627 ms · 2026-05-07T10:15:04.194307+00:00 · methodology

discussion (0)

