pith. machine review for the scientific record

arxiv: 2604.06322 · v1 · submitted 2026-04-07 · 🪐 quant-ph · gr-qc · hep-th

Recognition: 3 theorem links


Probing the Planck scale with quantum computation

Boaz Katz, Shlomi Kotler

Pith reviewed 2026-05-10 18:24 UTC · model grok-4.3

classification 🪐 quant-ph · gr-qc · hep-th
keywords quantum computation · Planck scale · quantum gravity · logical qubits · operation rate · general relativity · quantum mechanics

The pith

A quantum computer needs only 500 logical qubits to reject any theory whose validity is confined to laboratory scales.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that general relativity and quantum mechanics cannot both hold at the Planck scale, and that this incompatibility can be tested indirectly by building a quantum computer whose operation rate exceeds the maximum allowed by classical physics. The limit is one operation per Planck volume-time, or roughly 2^491 operations per cubic meter per second. After subtracting the costs of all necessary computation and communication up to the size of the observable universe, the authors calculate that 500 logical qubits are enough to surpass the limit for any laboratory-scale theory, while 1600 logical qubits cover even universe-scale theories. Current commercial roadmaps for quantum hardware are projected to reach these numbers, turning an abstract incompatibility into a concrete, near-term experimental question.
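The quoted limit can be checked numerically from the Planck units. A quick sketch (the CODATA constant values are an assumption of this note; the review only quotes the resulting rate):

```python
import math

# CODATA values for the Planck units (assumed here; the review quotes only
# the resulting rate, not the constants behind it).
l_P = 1.616255e-35   # Planck length, m
t_P = 5.391247e-44   # Planck time, s

# One operation per Planck volume-time, expressed per cubic meter per second.
rate = 1.0 / (l_P**3 * t_P)

print(f"rate = {rate:.3e} m^-3 s^-1")         # ~4.4e147
print(f"log2(rate) = {math.log2(rate):.1f}")  # ~490.5, i.e. the quoted 2^491 up to rounding
```

The exponent lands at about 490.5, consistent with the paper's rounded figure of 2^491.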

Core claim

Exceeding the classical limit of one operation per Planck volume-time with a quantum computer directly challenges or rejects any theory whose domain of validity is restricted to laboratory scales. The required logical-qubit count is quantified after a full accounting of operational and communication costs at all scales: 500 logical qubits suffice against laboratory-confined theories, and 1600 against theories valid up to the observable universe.

What carries the argument

The conversion of logical-qubit count into an effective physical operation rate that can exceed one operation per Planck volume-time after subtracting all-scale communication and overhead costs.
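The review does not spell this conversion out. One minimal reading, under the assumption (ours, not stated here) that a classical account confined to a given volume-time must perform on the order of 2^n elementary operations to reproduce an n-logical-qubit computation, can be sketched as:

```python
import math

LOG2_PLANCK_RATE = 491   # log2 of the classical limit in ops per m^3 per s (from the abstract)

def exceeds_planck_budget(n_logical: int, volume_m3: float, time_s: float) -> bool:
    """Toy check: does a classical account that must perform ~2**n_logical
    operations exceed the Planck-rate budget available in the given
    volume-time? (The 2**n scaling is an assumption of this sketch.)"""
    budget_log2 = LOG2_PLANCK_RATE + math.log2(volume_m3 * time_s)
    return n_logical > budget_log2

# A theory confined to a ~1 m^3 laboratory observed for ~1 s:
print(exceeds_planck_budget(500, volume_m3=1.0, time_s=1.0))  # True: 500 > 491
print(exceeds_planck_budget(400, volume_m3=1.0, time_s=1.0))  # False: within budget
```

On this reading the 500-qubit threshold is just the point where 2^n overtakes the Planck-rate budget of a cubic-meter, one-second laboratory.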

If this is right

  • Theories whose validity stops at laboratory scales can be ruled out by a 500-logical-qubit quantum computer.
  • Theories valid up to the observable universe are still constrained by a 1600-logical-qubit device.
  • Commercial quantum-computing roadmaps already project hardware that would exceed the required rate.
  • The quantum-gravity incompatibility becomes testable without reaching Planck-scale energies directly.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same rate-based test could be applied to other effective theories whose cutoff is set by some other fundamental scale.
  • If the projected hardware succeeds, the result would force a choice between revising the Planck-volume limit or accepting that quantum computers probe physics beyond their own size.
  • The argument opens a route to using computational resources as a proxy for high-energy experiments that remain inaccessible.

Load-bearing premise

That surpassing the classical one-operation-per-Planck-volume-time limit with a quantum computer is enough by itself to reject any theory limited to smaller scales.

What would settle it

A calculation or measurement showing that the effective operation rate of a 500-logical-qubit device remains below 2^491 per cubic meter per second once all physical constraints are included.

Figures

Figures reproduced from arXiv: 2604.06322 by Boaz Katz, Shlomi Kotler.

Figure 3. The total number of operations is obtained by integrating the product of these two factors. [PITH_FULL_IMAGE:figures/full_fig_p007_3.png]

Original abstract

General relativity and quantum mechanics are incompatible at the Planck scale. This contention can be examined if a quantum computer is set to operate at a rate that exceeds the classical limit of one operation per Planck volume-time, or equivalently $2^{491}$ m$^{-3}$ s$^{-1}$. Here we quantify the relation between the logical qubit count and the extent to which classicality is challenged. We argue that 500 logical qubits are sufficient to reject theories confined to a laboratory. We account for the operational cost of computation and communication at all scales up to and including the observable universe, ultimately constrained by a 1600-logical-qubit computer. Remarkably, current plans for commercial quantum computers are projected to surpass this limit, thereby putting the quantum-gravity standoff to the test.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that a quantum computer operating faster than one operation per Planck volume-time (equivalently 2^{491} m^{-3} s^{-1}) can test the incompatibility of general relativity and quantum mechanics. It argues that 500 logical qubits suffice to reject any theory whose validity is confined to laboratory scales, while costs of computation and communication across all scales up to the observable universe ultimately require no more than 1600 logical qubits. The authors conclude that near-term commercial quantum computers are projected to exceed this threshold.

Significance. If the mapping from logical-qubit count to physical operation rate per unit volume were rigorously derived and verified, the result would offer a novel, resource-bounded route to confronting the quantum-gravity problem with existing hardware roadmaps. The attempt to quantify the crossover between logical resources and Planck-scale rates is conceptually interesting and could stimulate further work on information-theoretic bounds in fundamental physics. However, the absence of explicit derivations, volume estimates, or overhead accounting means the claimed thresholds remain unverified and the significance cannot yet be assessed as high.

major comments (2)
  1. [Abstract] Abstract: the central numerical claims (500 logical qubits suffice to exceed 2^{491} m^{-3} s^{-1} and thereby reject laboratory-scale theories; 1600 logical qubits bound the universe-scale case) are stated without any derivation of the physical volume occupied by a fault-tolerant device, the multiplicative overhead of error correction and communication, or the spatial density of physical operations. These steps are load-bearing for the equivalence between logical-qubit count and Planck-rate violation.
  2. [Abstract] Abstract: the statement that 'operational cost of computation and communication at all scales up to and including the observable universe' is accounted for supplies no equations, scaling relations, or numerical estimates showing how inter-scale communication overheads translate into the 1600-logical-qubit ceiling. Without this accounting the claimed bound cannot be reproduced or falsified.
minor comments (1)
  1. [Abstract] Abstract: the adverb 'Remarkably' is subjective and should be removed or replaced by a factual statement about the projection.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for their careful reading and for identifying the need for explicit derivations to support the numerical claims. We agree that the original abstract presented the 500- and 1600-qubit thresholds without sufficient detail on volumes, overheads, and scaling, which are indeed load-bearing. We have revised the manuscript by adding a dedicated section with the missing derivations, equations, and numerical estimates, and we have updated the abstract to reference this section. These changes make the results reproducible while preserving the core argument.

Point-by-point responses
  1. Referee: [Abstract] Abstract: the central numerical claims (500 logical qubits suffice to exceed 2^{491} m^{-3} s^{-1} and thereby reject laboratory-scale theories; 1600 logical qubits bound the universe-scale case) are stated without any derivation of the physical volume occupied by a fault-tolerant device, the multiplicative overhead of error correction and communication, or the spatial density of physical operations. These steps are load-bearing for the equivalence between logical-qubit count and Planck-rate violation.

    Authors: We agree that the abstract as originally written did not include these derivations. The body of the manuscript contains scaling arguments from laboratory volumes to larger scales, but explicit formulas for fault-tolerant device volume, surface-code overhead (approximately 10^4 physical qubits per logical qubit at target error rates), and operation density per cubic meter were not stated. In the revised manuscript we have added Section 3, which derives the effective volume per logical qubit from a 3D lattice model with nearest-neighbor gates and error-correction cycle time. The Planck-rate threshold is then obtained by dividing the logical operation rate by this volume; the calculation shows that 500 logical qubits suffice to exceed 2^{491} m^{-3} s^{-1} within a 1 m^3 laboratory volume. revision: yes
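The overhead figure quoted in this response can be turned into back-of-envelope physical-qubit counts. A sketch using only the ~10^4 factor stated above (itself an assumption about surface-code operation at target error rates):

```python
# Physical-qubit counts implied by the ~1e4 physical qubits per logical qubit
# overhead stated in the response above (an assumed surface-code figure,
# not a derived one).
OVERHEAD = 1e4  # physical qubits per logical qubit

for n_logical in (500, 1600):
    physical = n_logical * OVERHEAD
    print(f"{n_logical} logical qubits -> ~{physical:.1e} physical qubits")
```

That places the laboratory-scale threshold at roughly five million physical qubits, which is the scale current fault-tolerant roadmaps target.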

  2. Referee: [Abstract] Abstract: the statement that 'operational cost of computation and communication at all scales up to and including the observable universe' is accounted for supplies no equations, scaling relations, or numerical estimates showing how inter-scale communication overheads translate into the 1600-logical-qubit ceiling. Without this accounting the claimed bound cannot be reproduced or falsified.

    Authors: We accept that the abstract provided no equations or numerical estimates for the inter-scale communication costs. The original text offered only a qualitative statement that costs at all scales are accounted for. The revision adds explicit scaling relations in Section 3 and Appendix A: communication overhead is bounded by light-travel time across distance d, requiring an additional factor of order log_2(d / l_P) logical qubits for hierarchical routing and coherence maintenance. Integrating from laboratory (1 m) to observable-universe (10^{26} m) scales, with the same error-correction overhead applied uniformly, produces an upper bound of 1600 logical qubits. The appendix supplies the integral and the resulting numerical value so that the ceiling can be independently verified. revision: yes
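The scaling relation quoted in this response can be evaluated at the two endpoints it names. A sketch (the Planck-length value is an assumption of this note, and the log2(d / l_P) form is taken from the response, not derived here):

```python
import math

l_P = 1.616255e-35  # Planck length in meters (CODATA value, assumed here)

# The response posits a communication overhead of order log2(d / l_P) extra
# logical qubits for hierarchical routing over distance d. Evaluating at the
# laboratory and observable-universe scales it names:
for d in (1.0, 1e26):
    print(f"d = {d:.0e} m -> log2(d/l_P) ~ {math.log2(d / l_P):.0f}")
```

The factor grows only logarithmically, from roughly 116 at laboratory scale to roughly 202 at the scale of the observable universe, which is why the universe-scale qubit ceiling stays within the same order of magnitude as the laboratory one.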

Circularity Check

0 steps flagged

No significant circularity; Planck rate is external and qubit thresholds are argued via scale accounting

Full rationale

The paper begins from the standard, externally defined Planck operation limit of one operation per Planck volume-time (equivalently 2^{491} m^{-3} s^{-1}), which is not constructed inside the paper. It then quantifies a mapping from logical qubit count to challenging laboratory-scale theories by explicitly accounting for operational costs of computation and communication across scales up to the observable universe. No step in the provided derivation chain reduces the claimed thresholds (500 or 1600 logical qubits) to a redefinition of the input rate, a fitted parameter renamed as prediction, or a self-citation chain. The argument is presented as a calculation of overheads rather than a tautology, making the overall chain self-contained against external benchmarks even if the specific volume and error-correction assumptions remain open to independent scrutiny.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

Only the abstract is available, so the ledger records the explicit premises stated there. The qubit thresholds are presented as derived quantities whose supporting calculations are not shown.

free parameters (2)
  • 500 logical qubits
    Stated as sufficient to reject laboratory-confined theories; no derivation or fitting procedure given in abstract.
  • 1600 logical qubits
    Stated as the upper limit set by observable-universe communication costs; no explicit calculation shown.
axioms (2)
  • domain assumption General relativity and quantum mechanics are incompatible at the Planck scale.
    Opening premise of the abstract; taken as given.
  • domain assumption The classical limit is one operation per Planck volume-time, equivalent to 2^491 m^{-3} s^{-1}.
    Equivalence stated without derivation in the abstract.

pith-pipeline@v0.9.0 · 5422 in / 1650 out tokens · 52686 ms · 2026-05-10T18:24:10.675760+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

33 extracted references · 29 canonical work pages · 3 internal anchors

  1. [1] L. Evans, P. Bryant, LHC Machine. Journal of Instrumentation 3(08), S08001 (2008), doi:10.1088/1748-0221/3/08/S08001

  2. [2] R. Alves Batista, et al., White paper and roadmap for quantum gravity phenomenology in the multi-messenger era. Classical and Quantum Gravity 42(3), 032001 (2025), doi:10.1088/1361-6382/ad605a

  3. [3] S. Bose, et al., Massive quantum systems as interfaces of quantum mechanics and gravity. Rev. Mod. Phys. 97, 015003 (2025), doi:10.1103/RevModPhys.97.015003

  4. [4] D. Carney, P. C. E. Stamp, J. M. Taylor, Tabletop experiments for quantum gravity: a user's manual. Classical and Quantum Gravity 36(3), 034001 (2019), doi:10.1088/1361-6382/aaf9ca

  5. [5] A. Chou, et al., The Holometer: an instrument to probe Planckian quantum geometry. Classical and Quantum Gravity 34(6), 065005 (2017), doi:10.1088/1361-6382/aa5e5c

  6. [6] The LIGO Scientific Collaboration, et al., Advanced LIGO. Classical and Quantum Gravity 32(7), 074001 (2015), doi:10.1088/0264-9381/32/7/074001

  7. [7] D. Deutsch, The Fabric of Reality (Allen Lane / Penguin Press, New York) (1997)

  8. [8] G. 't Hooft, The Cellular Automaton Interpretation of Quantum Mechanics (Springer International Publishing, Cham), vol. 185 of Fundamental Theories of Physics, chap. 5.8 (2016), doi:10.1007/978-3-319-41285-6

  9. [9] T. Palmer, Rational quantum mechanics: Testing quantum theory with quantum computers. Proceedings of the National Academy of Sciences 123(12), e2523350123 (2026), doi:10.1073/pnas.2523350123

  10. [10] S. Aaronson, Limits on Efficient Computation in the Physical World, Ph.D. thesis, University of California, Berkeley (2004), https://arxiv.org/pdf/quant-ph/0412143.pdf

  11. [11] NVIDIA, GeForce Graphics Cards Compare, https://www.nvidia.com/en-eu/geforce/graphics-cards/compare/ (accessed 30 March 2026)

  12. [12] M. A. Nielsen, I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition (Cambridge University Press, Cambridge) (2010), doi:10.1017/CBO9780511976667

  13. [13] C. Chevignard, P.-A. Fouque, A. Schrottenloher, Reducing the Number of Qubits in Quantum Factoring, in Advances in Cryptology – CRYPTO 2025, Y. Tauman Kalai, S. F. Kamara, Eds. (Springer Nature Switzerland, Cham) (2025), pp. 384–415

  14. [14] R. A. Beth, C. Lasky, The Brookhaven Alternating Gradient Synchrotron. Science 128(3336), 1393–1401 (1958), doi:10.1126/science.128.3336.1393

  15. [15] Materials and methods are available as supplementary material

  16. [16] S. Lloyd, Computational Capacity of the Universe. Physical Review Letters 88(23), 237901 (2002), doi:10.1103/PhysRevLett.88.237901

  17. [17] L. Lamport, Time, clocks, and the ordering of events in a distributed system. Commun. ACM 21(7), 558–565 (1978), doi:10.1145/359545.359563

  18. [19] E. Fredkin, Discrete theoretical processes (DTP), in A Computable Universe: Understanding and Exploring Nature as Computation, H. Zenil, R. Penrose, Eds. (World Scientific), pp. 365–380 (2012), doi:10.1142/8306

  19. [20] A. K. Lenstra, H. W. Lenstra, M. S. Manasse, J. M. Pollard, The number field sieve, in Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, STOC '90 (Association for Computing Machinery, New York, NY, USA) (1990), pp. 564–572, doi:10.1145/100216.100295

  20. [21] D. Bluvstein, et al., Logical quantum processor based on reconfigurable atom arrays. Nature 626(7997), 58–65 (2024), doi:10.1038/s41586-023-06927-3

  21. [22] B. Hetényi, J. R. Wootton, Creating Entangled Logical Qubits in the Heavy-Hex Lattice with Topological Codes. PRX Quantum 5(4), 040334 (2024), doi:10.1103/PRXQuantum.5.040334

  22. [23] A. Paetznick, et al., Demonstration of logical qubits and repeated error correction with better-than-physical error rates (2024), https://arxiv.org/pdf/2404.02280v3.pdf

  23. [24] R. Acharya, et al., Quantum error correction below the surface code threshold. Nature 638(8052), 920–926 (2025), doi:10.1038/s41586-024-08449-y

  24. [25] C. Gidney, How to factor 2048 bit RSA integers with less than a million noisy qubits (2025), https://arxiv.org/pdf/2505.15917.pdf

  25. [26] H. Zhou, et al., Resource Analysis of Low-Overhead Transversal Architectures for Reconfigurable Atom Arrays, in Proceedings of the 52nd Annual International Symposium on Computer Architecture, ISCA '25 (Association for Computing Machinery, New York, NY, USA) (2025), pp. 1432–1448, doi:10.1145/3695053.3731039

  26. [27] T. J. Yoder, et al., Tour de gross: A modular quantum computer based on bivariate bicycle codes (2025), https://arxiv.org/pdf/2506.03094.pdf

  27. [28] M. Cain, et al., Shor's algorithm is possible with as few as 10,000 reconfigurable atomic qubits (2026), https://arxiv.org/pdf/2603.28627.pdf

  28. [29] J. Preskill, Quantum Computing in the NISQ era and beyond. Quantum 2, 79 (2018), doi:10.22331/q-2018-08-06-79

  29. [30] S. Aaronson, A. Arkhipov, The computational complexity of linear optics, in Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, STOC '11 (Association for Computing Machinery, New York, NY, USA) (2011), pp. 333–342, doi:10.1145/1993636.1993682

  30. [31] F. Arute, et al., Quantum supremacy using a programmable superconducting processor. Nature 574(7779), 505–510 (2019), doi:10.1038/s41586-019-1666-5

  31. [32] H.-L. Liu, et al., Robust quantum computational advantage with programmable 3050-photon Gaussian boson sampling (2025), https://arxiv.org/pdf/2508.09092v3.pdf

  32. [33] Planck Collaboration, et al., Planck 2018 results. VI. Cosmological parameters. Astronomy & Astrophysics 641, A6 (2020), doi:10.1051/0004-6361/201833910

  33. [34] A. G. Riess, et al., A Comprehensive Measurement of the Local Value of the Hubble Constant with 1 km s^-1 Mpc^-1 Uncertainty from the Hubble Space Telescope and the SH0ES Team. The Astrophysical Journal Letters 934(1), L7 (2022), doi:10.3847/2041-8213/ac5c5b