pith. machine review for the scientific record.

arxiv: 2605.02566 · v1 · submitted 2026-05-04 · 💻 cs.CY

Recognition: unknown

AI-Augmented Science and the New Institutional Scarcities

Lauri Lovén

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 17:39 UTC · model grok-4.3

classification 💻 cs.CY
keywords AI-augmented science · institutional scarcities · judgment production · certifying infrastructure · integration capacity · legitimate judgment · verified signal

The pith

AI produces competent-looking judgment at near-zero marginal cost, forcing scientific institutions to compete directly with it for the role of manufacturing legitimate judgment.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper claims that AI now generates competent-looking judgment (selecting, ranking, attributing, and certifying) at scale with marginal cost approaching zero. This inverts the standard economics of AI, in which judgment was the scarce complement to cheap prediction. Scientific institutions, whose core function is to produce legitimate judgment, therefore compete with AI rather than simply adapting to it. Four complements become scarce and load-bearing: verified signal, legitimacy, authentic provenance, and integration capacity, defined as the community's tolerance for delegated cognition. Integration capacity is the most binding scarcity because no improvement in AI tooling can create it; the frontier therefore shifts from acceleration to redesign of certifying infrastructure.

Core claim

Competent-looking judgment is now produced at scale at marginal cost approaching zero by AI, inverting the dominant economics-of-AI reading that treats judgment as the scarce complement to cheap prediction. Scientific institutions, distinctively, manufacture legitimate judgment, so they do not merely adapt to AI; they compete with it for the same functional role. The four new scarcities—verified signal, legitimacy, authentic provenance, and integration capacity—become load-bearing, with integration capacity the least developed and most binding for AI-augmented science.

What carries the argument

The four new scarcities: verified signal, legitimacy, authentic provenance, and integration capacity (the community's tolerance for delegated cognition). These arise once AI supplies competent-looking judgment at marginal cost near zero.

If this is right

  • The frontier for AI-augmented science shifts from tool acceleration to redesign of certifying infrastructure.
  • Integration capacity cannot be purchased through better AI tooling and must be developed institutionally.
  • Scientific institutions enter direct competition with AI for the functional role of producing legitimate judgment.
  • Verified signal, legitimacy, and authentic provenance become new load-bearing complements to AI use.
  • No improvement in prediction or generation alone resolves the scarcity of delegated cognition tolerance.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Institutions outside science, such as regulatory bodies or news organizations, may face analogous competition once AI judgment scales.
  • Measurement of integration capacity could involve tracking delegation rates in peer review and their correlation with output legitimacy.
  • Hybrid models might emerge where AI handles initial selection while institutions supply provenance and final certification.
  • The argument implies that fields with high delegation tolerance will adopt AI faster than those with low tolerance.

Load-bearing premise

That AI can now generate competent-looking judgment, including selecting, ranking, attributing, and certifying, at scale with marginal cost approaching zero.

What would settle it

A controlled comparison showing that scientific institutions retain their existing capacity to manufacture legitimate judgment without redesign even after widespread deployment of AI systems capable of selection, ranking, attribution, and certification.

read the original abstract

Competent-looking judgment, including selecting, ranking, attributing, and certifying, is now produced at scale at marginal cost approaching zero, inverting the dominant economics-of-AI reading that treats judgment as the scarce complement to cheap prediction. Scientific institutions, distinctively, manufacture legitimate judgment, so they do not merely adapt to AI; they compete with it for the same functional role. Four complements then become scarce and load-bearing for AI-augmented science: verified signal, legitimacy, authentic provenance, and integration capacity (the community's tolerance for delegated cognition). Of these four, integration capacity is the least developed for scientific institutions and the most binding: no improvement in AI tooling can buy it. The frontier for AI-augmented science is not acceleration; it is the redesign of the certifying infrastructure around these new scarcities.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated author's rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that AI now generates competent-looking judgment (selecting, ranking, attributing, and certifying) at scale with marginal cost approaching zero. This inverts the standard economics-of-AI view in which judgment is the scarce complement to cheap prediction. Because scientific institutions distinctively manufacture legitimate judgment, they compete directly with AI for this functional role, generating four new scarcities—verified signal, legitimacy, authentic provenance, and integration capacity—with integration capacity (community tolerance for delegated cognition) being the most binding. The frontier for AI-augmented science is therefore redesign of certifying infrastructure rather than further acceleration of AI tooling.

Significance. If the inversion holds, the manuscript usefully reframes AI-augmented science as an institutional rather than purely technical problem and identifies integration capacity as a non-technical bottleneck that cannot be solved by better models alone. The conceptual clarity of the four-scarcity taxonomy and the explicit contrast with dominant AI-economics narratives are strengths that could stimulate discussion in science-policy and STS circles, even though the piece supplies no empirical tests, formal model, or machine-checked derivations.

major comments (2)
  1. [Abstract] Abstract and opening paragraphs: the central inversion—that AI-produced 'competent-looking judgment' now competes with scientific institutions for the role of manufacturing legitimate judgment—rests on an unexamined assumption of functional substitutability. The manuscript does not analyze the procedural mechanisms (peer review, replication norms, institutional affiliation, community uptake) through which legitimacy is actually conferred; if these layers remain necessary, AI outputs function as subordinate inputs rather than rivals, and the claimed binding status of integration capacity does not follow.
  2. [Abstract] The argument that integration capacity is 'the least developed for scientific institutions and the most binding' is asserted without comparative evidence or counterexamples from existing AI-augmented workflows (e.g., automated literature review or data annotation pipelines). This leaves the claim that 'no improvement in AI tooling can buy it' unsupported by any concrete illustration of where integration capacity has already constrained adoption.
minor comments (2)
  1. The four scarcities are introduced in the abstract but receive uneven elaboration; a short definitional subsection or table would improve clarity.
  2. The manuscript would benefit from citing relevant prior work on the economics of science (e.g., on certification costs) and on AI governance to situate the conceptual claims.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for this constructive report and the recommendation of major revision. The comments correctly identify places where the argument would be strengthened by greater attention to legitimacy mechanisms and concrete illustrations. We respond point by point below and will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: [Abstract] Abstract and opening paragraphs: the central inversion—that AI-produced 'competent-looking judgment' now competes with scientific institutions for the role of manufacturing legitimate judgment—rests on an unexamined assumption of functional substitutability. The manuscript does not analyze the procedural mechanisms (peer review, replication norms, institutional affiliation, community uptake) through which legitimacy is actually conferred; if these layers remain necessary, AI outputs function as subordinate inputs rather than rivals, and the claimed binding status of integration capacity does not follow.

    Authors: We accept that the manuscript presents functional substitutability at a high level without dissecting the procedural mechanisms that confer legitimacy. Our claim is that AI's capacity to produce competent-looking judgment at scale creates competitive pressure on the functional output itself, regardless of whether final integration occurs through existing channels. Nevertheless, the referee is right that this requires explicit treatment to show why AI outputs are not merely subordinate. In revision we will expand the abstract and opening paragraphs to address peer review, replication norms, and community uptake, arguing that scalable AI alternatives can strain or reconfigure these mechanisms and thereby make integration capacity binding. revision: yes

  2. Referee: [Abstract] The argument that integration capacity is 'the least developed for scientific institutions and the most binding' is asserted without comparative evidence or counterexamples from existing AI-augmented workflows (e.g., automated literature review or data annotation pipelines). This leaves the claim that 'no improvement in AI tooling can buy it' unsupported by any concrete illustration of where integration capacity has already constrained adoption.

    Authors: The manuscript is conceptual and indeed asserts the primacy of integration capacity without empirical illustrations or counterexamples from current workflows. We agree this weakens the claim that tooling improvements cannot resolve it. In revision we will add brief, concrete illustrations drawn from automated literature review and data annotation pipelines, showing documented cases where community tolerance for delegated cognition has limited uptake even as model performance has improved. These examples will support the assertion that integration capacity is the non-technical bottleneck. revision: yes

Circularity Check

0 steps flagged

No circularity in conceptual analysis

full rationale

The paper advances a conceptual argument about AI producing competent-looking judgment at near-zero marginal cost and the resulting institutional scarcities, without equations, fitted parameters, self-referential derivations, or load-bearing self-citations. All claims rest on asserted observations about AI capabilities and economics rather than reducing to the paper's own inputs by construction. No self-definitional steps, fitted-input predictions, or ansatz smuggling appear in the provided text or abstract; the derivation chain is self-contained as economic and institutional analysis.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The paper relies on conceptual assumptions about AI capabilities and institutional roles without introducing new parameters, entities, or formal axioms beyond domain knowledge in AI economics and science studies.

axioms (2)
  • domain assumption Competent-looking judgment including selecting, ranking, attributing, and certifying is now produced at scale at marginal cost approaching zero by AI
    Core premise stated in the abstract that inverts prior economics-of-AI views.
  • domain assumption Scientific institutions distinctively manufacture legitimate judgment
    Basis for claiming direct competition with AI for the same functional role.

pith-pipeline@v0.9.0 · 5426 in / 1399 out tokens · 48808 ms · 2026-05-08T17:39:53.985891+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

15 extracted references · 1 canonical work page · 1 internal anchor

  1. [1] Perez, C. Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages (Edward Elgar, 2002)

  2. [2] Brynjolfsson, E. & McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W. W. Norton, 2014)

  3. [3] Agrawal, A., Gans, J. & Goldfarb, A. Power and Prediction: The Disruptive Economics of Artificial Intelligence (Harvard Business Review Press, 2022)

  4. [4] Legg, S. & Hutter, M. Universal intelligence: a definition of machine intelligence. Minds and Machines 17, 391–444 (2007)

  5. [5] Lovén, L. Institutions for the post-scarcity of judgment. arXiv:2604.22966; submitted to Communications of the ACM (Opinion) (2026). Companion broader argument.

  6. [6] Simon, H. A. A behavioral model of rational choice. Quarterly Journal of Economics 69, 99–118 (1955)

  7. [7] Susskind, R. & Susskind, D. The Future of the Professions: How Technology Will Transform the Work of Human Experts, updated edn (Oxford University Press, 2022)

  8. [8] Pineau, J. et al. Improving reproducibility in machine learning research (a report from the NeurIPS 2019 reproducibility program). Journal of Machine Learning Research 22, 1–20 (2021)

  9. [9] Birhane, A., Kasirzadeh, A., Leslie, D. & Wachter, S. Science in the age of large language models. Nature Reviews Physics 5, 277–280 (2023)

  10. [10] Ostrom, E. Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 1990)

  11. [11] Hess, C. & Ostrom, E. (eds) Understanding Knowledge as a Commons: From Theory to Practice (MIT Press, 2007)

  12. [12] Frischmann, B. M., Madison, M. J. & Strandburg, K. J. (eds) Governing Knowledge Commons (Oxford University Press, 2014)

  13. [13] Coalition for Content Provenance and Authenticity. C2PA technical specification, v2.3. https://c2pa.org (2025). Accessed April 2026

  14. [14] Acemoglu, D. & Johnson, S. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023)

  15. [15] European Union. Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, OJ L, 2024/1689 (2024). Published 12 July 2024; entered into force 1 August 2024