Can We Volunteer Out of the Peer Review Crisis?
Pith reviewed 2026-05-07 07:33 UTC · model grok-4.3
The pith
A voluntary lottery for random pre-review rejection reaches a Nash equilibrium in which authors opt in, improving review quality for all who value the literature they read.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central discovery is that in a symmetric game where each author decides whether to enter the voluntary lottery, a Nash equilibrium with strictly positive participation probability exists whenever authors' payoffs incorporate both their own publication probability and the average quality of the published literature. At this equilibrium the lottery reduces the volume of papers sent for full review, allowing reviewers to allocate more effort per manuscript and thereby raising the expected quality of accepted papers for every participant who values the literature they consume.
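The equilibrium logic of this claim can be sketched numerically. The snippet below is a minimal reconstruction under assumed functional forms, not the paper's actual model: the rejection probability `R`, the concave square-root quality function, and every parameter value are illustrative choices, picked only so that a symmetric interior equilibrium exists in a small community.

```python
# Symmetric participation game, illustrative reconstruction (all forms
# and parameters assumed, not taken from the paper under review).

N = 5          # authors in the community (hypothetical)
R = 0.5        # lottery rejection probability (hypothetical)
ALPHA = 0.6    # weight on literature quality (hypothetical)
GAIN = 2.0     # review-quality gain parameter (hypothetical)

def utility(p_i, p_others):
    """(1-alpha)*own survival + alpha*quality of the reviewed literature."""
    survival = 1.0 - R * p_i
    mean_participation = (p_i + (N - 1) * p_others) / N
    quality = GAIN * (R * mean_participation) ** 0.5  # assumed concave form
    return (1 - ALPHA) * survival + ALPHA * quality

def best_response(p_others, grid=401):
    """Best reply to the others' common participation rate, by grid search."""
    candidates = [k / (grid - 1) for k in range(grid)]
    return max(candidates, key=lambda p: utility(p, p_others))

# Damped best-response iteration to a symmetric fixed point; undamped
# replies would oscillate because each author's best reply falls steeply
# as the others' participation rises.
p = 0.5
for _ in range(300):
    p = 0.8 * p + 0.2 * best_response(p)
print(f"approximate symmetric equilibrium participation: p* = {p:.2f}")
```

Under these assumptions the iteration settles near p* ≈ 0.18, a strictly positive symmetric equilibrium of the kind the core claim describes; different parameter choices can push the equilibrium to zero or to full participation.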
What carries the argument
The voluntary lottery participation game, in which each author's payoff is a function of their individual chance of surviving the random draw and the resulting average quality of the reviewed papers that reach publication.
If this is right
- A positive fraction of authors will participate, directly shrinking the reviewer workload without external coercion.
- Reviewers can devote more time to each surviving manuscript, raising the accuracy and usefulness of the evaluations that do occur.
- Published science improves in average quality because effort is concentrated on fewer papers.
- The equilibrium is self-reinforcing: as review quality rises, the incentive to participate grows for authors who read the literature.
- No central authority is needed to enforce the scheme once the equilibrium participation rate is reached.
Where Pith is reading between the lines
- The mechanism could be piloted in a high-volume field such as computer science to observe whether actual participation rates match the model's predictions.
- Fields in which authors read widely across many papers may reach higher equilibrium participation than fields where researchers focus narrowly on their own sub-area.
- The lottery could be combined with existing preprint servers so that randomly rejected papers still receive community feedback outside formal review.
- Heterogeneous author types, such as early-career versus established researchers, could produce different participation thresholds that the basic model leaves unexplored.
Load-bearing premise
Authors must place enough weight on the quality of the papers they read that the gain from better reviews outweighs the personal cost of facing a random pre-review rejection.
What would settle it
A large-scale survey or field trial in which zero authors elect to join the lottery even after being informed that participation would raise average review quality would falsify the existence of a positive-participation equilibrium.
Original abstract
The volume of scientific manuscripts is growing faster than the capacity to evaluate them, yet the institutions that govern peer review have remained largely unchanged. The result is a widening mismatch: reviewer scarcity, noisier assessments, and declining confidence in editorial decisions. Every scientist wants better reviews, but review quality depends on the total burden, which no single author can shift. To isolate this tension, we provide a game-theoretic thought experiment: a voluntary lottery in which authors accept a chance of random pre-review rejection, reducing reviewer burden and improving the quality of surviving evaluations. We show that a Nash equilibrium emerges in which authors voluntarily enter the lottery. Scientists who care about the literature they read, not just the papers they publish, will opt in, raising the quality of published science for all.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes a voluntary pre-review rejection lottery as a mechanism to address the peer review crisis. Authors simultaneously choose a probability of entering a lottery that randomly rejects a fraction of papers before review, thereby reducing aggregate reviewer burden and improving the expected quality of evaluations for surviving papers. The authors claim to show that a Nash equilibrium exists in which a positive fraction of authors voluntarily participate when their utility places sufficient weight on the quality of the literature they read (as opposed to their own publication probability).
Significance. If the equilibrium result holds, the paper offers a creative game-theoretic thought experiment that frames peer review as a public-goods problem and identifies a self-enforcing voluntary mechanism. The modeling approach is novel in its use of a simultaneous-move game with an endogenous quality externality. Credit is due for the clean conceptual separation of individual publication risk from collective review-quality benefits. However, the result is sensitive to an uncalibrated preference parameter, which limits immediate policy relevance.
major comments (2)
- [Model section (equilibrium derivation)] The abstract asserts the existence of a Nash equilibrium with interior participation, but the manuscript supplies no explicit payoff matrix, strategy space, or derivation of the best-response function. The equilibrium condition is stated to hold only when the marginal utility from literature quality is large enough to offset the personal risk at the symmetric point; without the functional form of the convex combination or the threshold value of the weight parameter, the claim cannot be verified.
- [Utility specification] The payoff is described as a convex combination of own-paper survival probability and expected literature quality, yet no sensitivity analysis or calibration is provided for the relative weights. If career incentives dominate (standard in the field), the unique equilibrium collapses to zero participation, which directly undermines the central claim that voluntary entry raises published quality for all.
minor comments (2)
- [Abstract] The abstract could state the precise condition on the preference weight required for positive equilibrium participation rather than asserting the result unconditionally.
- [Model description] Clarify the functional mapping from aggregate participation rate to review-quality improvement; the current description leaves the functional form implicit.
Simulated Author's Rebuttal
We thank the referee for the constructive comments on the equilibrium derivation and the sensitivity of the utility weights. We have revised the manuscript to supply the missing explicit derivations, best-response functions, and sensitivity analysis over the preference parameter α. These additions clarify the conditions for an interior equilibrium without altering the paper's framing as a theoretical thought experiment.
Point-by-point responses
-
Referee: Model section (equilibrium derivation): The abstract asserts the existence of a Nash equilibrium with interior participation, but the manuscript supplies no explicit payoff matrix, strategy space, or derivation of the best-response function. The equilibrium condition is stated to hold only when the marginal utility from literature quality is large enough to offset the personal risk at the symmetric point; without the functional form of the convex combination or the threshold value of the weight parameter, the claim cannot be verified.
Authors: We agree that greater formality is needed. In the revised Model section we now define the strategy space explicitly as each author i choosing p_i ∈ [0,1], the probability of entering the voluntary pre-review rejection lottery. The payoff is written as the convex combination u_i = (1-α)·s(p_i, p_{-i}) + α·q(∑p_j), where s is individual survival probability and q is expected review quality (increasing in aggregate participation). We derive the best-response correspondence BR_i(p_{-i}) in closed form and characterize the symmetric Nash equilibrium p* > 0, which exists precisely when α exceeds an explicit threshold α* that depends on the lottery rejection rate and the marginal quality gain; the revised text reports both the functional form and the numerical threshold under the baseline parameterization. revision: yes
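The structure of the promised derivation can be made concrete under one illustrative specification (ours, not necessarily the manuscript's): lottery rejection rate r, n authors, weight α on literature quality, and a quality function q of the mean participation rate P.

```latex
% Illustrative specification (assumed forms, not the manuscript's):
u_i \;=\; (1-\alpha)\bigl(1 - r\,p_i\bigr)
      \;+\; \alpha\, q\!\Bigl(\tfrac{1}{n}\sum_{j} p_j\Bigr),
\qquad
\frac{\partial u_i}{\partial p_i}
  \;=\; -(1-\alpha)\,r \;+\; \frac{\alpha}{n}\, q'(P).

% With linear quality q(P) = \gamma P, entering the lottery is a best
% response iff \alpha\gamma/n \ge (1-\alpha)\,r, giving the threshold
\alpha^{*} \;=\; \frac{n r}{\gamma + n r}.
```

With linear q the best response is all-or-nothing, so positive participation appears exactly when α crosses α*; a concave q would instead yield an interior p* pinned down by the first-order condition above.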
-
Referee: Utility specification: The payoff is described as a convex combination of own-paper survival probability and expected literature quality, yet no sensitivity analysis or calibration is provided for the relative weights. If career incentives dominate (standard in the field), the unique equilibrium collapses to zero participation, which directly undermines the central claim that voluntary entry raises published quality for all.
Authors: We accept that the result is conditional on α. The central claim of the paper is not that voluntary participation always occurs, but that a positive-participation equilibrium exists whenever authors place sufficient weight on literature quality. The revision adds a dedicated sensitivity subsection that plots equilibrium participation p*(α) for a range of parameter values, showing that interior equilibria appear once α exceeds a moderate threshold (approximately 0.3 under baseline assumptions). We also include a brief discussion of plausible α values drawn from existing surveys on scientists’ motivations, while acknowledging that precise empirical calibration lies beyond the scope of this theoretical exercise. The model therefore demonstrates a self-enforcing mechanism that can operate when the quality externality is valued, rather than asserting universal applicability. revision: yes
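The threshold behavior the rebuttal describes can be illustrated in closed form. This sweep uses our own assumed saturating quality function q(P) = γ·ln(1+P) and hypothetical parameters, so the resulting threshold (1/3 here) is a property of these choices and need not match the manuscript's reported value of roughly 0.3.

```python
# Equilibrium participation p*(alpha) under assumed forms (not the
# paper's): N authors, lottery rejection rate R, and saturating review
# quality q(P) = GAMMA*ln(1+P) in the mean participation rate P.
# Interior first-order condition: (1-alpha)*R = (alpha/N)*GAMMA/(1+p*).

N, R, GAMMA = 5, 0.5, 5.0   # hypothetical parameter choices

def equilibrium_participation(alpha):
    """Symmetric equilibrium rate from the FOC, clipped to [0, 1]."""
    p_star = alpha * GAMMA / (N * R * (1 - alpha)) - 1.0
    return min(1.0, max(0.0, p_star))

for alpha in (0.2, 0.3, 0.4, 0.5, 0.6):
    print(f"alpha={alpha:.1f}  p*={equilibrium_participation(alpha):.2f}")
```

Under these numbers participation is zero for all α up to α* = NR/(γ+NR) = 1/3, then rises with α and reaches full participation by α = 0.5, reproducing the qualitative shape of the claimed sensitivity analysis.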
Circularity Check
No significant circularity in the game-theoretic derivation
full rationale
The paper sets up an explicit simultaneous-move game in which each author selects a probability of entering a voluntary pre-review lottery. Payoffs are defined directly as a convex combination of (i) the probability that the author's own paper survives review and (ii) the expected quality of the literature the author later reads. The existence of a symmetric Nash equilibrium with interior participation rate is shown to hold when the marginal utility weight on literature quality is sufficiently large relative to the personal publication risk. This is a standard deductive step from stated model primitives to equilibrium outcome; the result is not equivalent to the inputs by construction, nor is any parameter fitted to data and then relabeled as a prediction. No self-citations, uniqueness theorems imported from prior author work, or ansatzes smuggled via citation appear in the load-bearing steps. The model is self-contained as a thought experiment and does not reduce to renaming known empirical patterns or self-definitional loops.
Axiom & Free-Parameter Ledger
axioms (1)
- [Domain assumption] Authors are rational expected-utility maximizers who can compare the value of personal publication probability against the value of higher average review quality in the literature they consume.
invented entities (1)
- Voluntary pre-review rejection lottery (no independent evidence)
Reference graph
Works this paper leans on
- [1] Michèle Kovanis, Raphaël Porcher, Philippe Ravaud, and Ludovic Trinquart. The global burden of journal peer review in the biomedical literature: Strong imbalance in the collective enterprise. PLoS ONE, 11(11): e0166387, 2016. doi:10.1371/journal.pone.0166387
- [2] Michael E. Hochberg, Jonathan M. Chase, Nicholas J. Gotelli, Alan Hastings, and Shahid Naeem. The tragedy of the reviewer commons. Ecology Letters, 12(1): 2–4, 2009. doi:10.1111/j.1461-0248.2008.01276.x
- [3] Chakkrit Tantithamthavorn, Nicole Novielli, Ayushi Rastogi, Olga Baysal, and Bram Adams. Blended PC peer review model: Process and reflection. In ACM SIGSOFT Software Engineering Notes, 2025. doi:10.1145/3735931.3735937
- [4] Weixin Liang, Yaohui Zhang, Hancheng Cao, Binglu Wang, Daisy Yi Ding, Xinyu Yang, Kailas Vodrahalli, Siqi He, Daniel Scott Smith, Yian Yin, Daniel McFarland, and James Zou. Monitoring AI-modified content at scale: a case study on the impact of ChatGPT on AI conference peer reviews. Nature, 640: 461–469, 2025. doi:10.1038/s41586-024-08520-w
- [5] Eric Price. The NIPS experiment. Blog post, http://blog.mrtz.org/2014/12/15/the-nips-experiment.html, 2014. Accessed 2026-03-30
- [6]
- [7] Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan. Has the machine learning review process become more arbitrary as the field has grown? The NeurIPS 2021 consistency experiment. arXiv preprint arXiv:2306.03262, 2023
- [8] John Jerrim and Robert de Vries. Are peer-reviews of grant proposals reliable? An analysis of Economic and Social Research Council (ESRC) funding applications. The Social Science Journal, 60(1): 91–109, 2020. doi:10.1080/03623319.2020.1728506
- [9] Bryan D. Neff and Julian D. Olden. Is peer review a game of chance? BioScience, 56(4): 333–340, 2006. doi:10.1641/0006-3568(2006)56[333:IPRAGO]2.0.CO;2
- [10] Jérôme Adda and Marco Ottaviani. Grantmaking, grading on a curve, and the paradox of relative evaluation in nonmarkets. Quarterly Journal of Economics, 139(2): 1255–1319, 2024. doi:10.1093/qje/qjad056
- [11] Carl T. Bergstrom and Kevin Gross. Screening, sorting, and the feedback cycles that imperil peer review. PLOS Biology, 24(2): e3003650, 2026. doi:10.1371/journal.pbio.3003650
- [12] Kevin J. S. Zollman, Julian Garcia, and Toby Handfield. Academic journals, incentives, and the quality of peer review: A model. Philosophy of Science, 91(1): 186–203, 2023. doi:10.1017/psa.2023.132
- [13] Leonid Tiokhin, Karthik Panchanathan, Daniël Lakens, Simine Vazire, Thomas Morgan, and Kevin Zollman. Honest signaling in academic publishing. PLoS ONE, 16(2): e0246675, 2021. doi:10.1371/journal.pone.0246675
- [14] Stefan Thurner and Rudolf Hanel. Peer-review in a world with rational scientists: Toward selection of the average. European Physical Journal B, 84(4): 707–711, 2011. doi:10.1140/epjb/e2011-20545-7
- [15] Yichi Zhang, Fang-Yi Yu, Grant Schoenebeck, and David Kempe. A system-level analysis of conference peer review. Proceedings of the 23rd ACM Conference on Economics and Computation, pages 1041–1080, 2022. doi:10.1145/3490486.3538306
- [16] Thomas Feliciani, Junwen Luo, Lai Ma, Pablo Lucas, Flaminio Squazzoni, Ana Marusic, et al. A scoping review of simulation models of peer review. Scientometrics, 121(1): 555–594, 2019. doi:10.1007/s11192-019-03205-w
- [17] Ferric C. Fang and Arturo Casadevall. Research funding: The case for a modified lottery. mBio, 7(2): e00422-16, 2016. doi:10.1128/mBio.00422-16
- [18] Kevin Gross and Carl T. Bergstrom. Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology, 17(1): e3000065, 2019. doi:10.1371/journal.pbio.3000065
- [19] David Card, Stefano DellaVigna, Patricia Funk, and Nagore Iriberri. What do editors maximize? Evidence from four economics journals. Review of Economics and Statistics, 102(1): 195–217, 2020. doi:10.1162/rest_a_00839
- [20] Emery D. Berger. CS conference acceptance rates. https://github.com/emeryberger/csconferences, 2025. Accessed 2026-04-27
- [21] Shakked Noy and Whitney Zhang. Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654): 187–192, 2023. doi:10.1126/science.adh2586
- [22] Dmitry Kobak, Rita González-Márquez, Emőke-Ágnes Horvát, and Jan Lause. Delving into ChatGPT usage in academic writing through excess vocabulary. PLoS ONE, 19(5): e0297826, 2024. doi:10.1371/journal.pone.0297826
- [23] Qianyue Hao, Fengli Xu, Yong Li, and James Evans. Artificial intelligence tools expand scientists' impact but contract science's focus. Nature, 649(8099): 1237–1243, 2026. doi:10.1038/s41586-025-09922-y
- [24] Ernst Fehr and Urs Fischbacher. Why social preferences matter – the impact of non-selfish motives on competition, cooperation and incentives. Economic Journal, 112(478): C1–C33, 2002. doi:10.1111/1468-0297.00027
- [25] Remco Heesen and Liam Kofi Bright. Is peer review a good idea? British Journal for the Philosophy of Science, 72(3): 635–663, 2021. doi:10.1093/bjps/axz029
- [26] Journals that close submissions part of the year – The Philosophers' Cocoon, April 2026. URL https://web.archive.org/web/20260407075957/https://philosopherscocoon.com/2024/04/05/journals-that-close-submissions-part-of-the-year/
- [27] IJCAI-ECAI 2026. Primary paper initiative. https://2026.ijcai.org/primary-paper-initiative/, 2025. Announced November 2025. Accessed April 2026
- [28] Rafael D'Andrea and James P. O'Dwyer. Can editors save peer review from peer reviewers? PLoS ONE, 12(10): e0186111, 2017. doi:10.1371/journal.pone.0186111
- [29] Glenn Ellison. Evolving standards for academic publishing: A q-r theory. Journal of Political Economy, 110(5): 994–1034, 2002. doi:10.1086/341871
- [30] Michihiro Kandori. Social norms and community enforcement. Review of Economic Studies, 59(1): 63–80, 1992. doi:10.2307/2297925
- [31] Elinor Ostrom. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 1990. doi:10.1017/CBO9780511807763