ADaPT: Adaptive-window Decoding for Practical fault-Tolerance
Pith reviewed 2026-05-09 18:42 UTC · model grok-4.3
The pith
Adaptive window decoding based on decoder confidence reduces time overhead while preserving target logical error rates in quantum error correction.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
ADaPT sizes the decoding window dynamically according to the confidence reported by the decoder: high-confidence steps use smaller windows to save time, while low-confidence steps retain the full window to preserve correction accuracy. Benchmarks across codes and noise models show that the technique reaches the target logical error rate with lower decoding-time overhead than fixed-size windows.
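The gating rule at the heart of the scheme can be sketched in a few lines. The window sizes, the threshold, and the idea of a single scalar confidence score are illustrative assumptions for this sketch, not values taken from the paper:

```python
def choose_window(confidence: float, small: int = 3, full: int = 9,
                  threshold: float = 0.9) -> int:
    """Use a short decoding window when the decoder reports high
    confidence; otherwise fall back to the full fixed-size window.
    (All parameter values here are hypothetical.)"""
    return small if confidence >= threshold else full
```

Because average-case error patterns are sparse, most steps would take the high-confidence branch, which is where the claimed overhead reduction comes from.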
What carries the argument
Adaptive-window decoding based on decoder confidence, which shortens the window for high-confidence steps to cut decoding time overhead.
If this is right
- Reduces reaction time for real-time fault-tolerant quantum computation.
- Applies effectively across different quantum codes and hardware noise models.
- Maintains logical error rates at target levels despite shorter windows.
- Exploits the sparsity of average-case errors to avoid fixed overhead costs.
Where Pith is reading between the lines
- This approach could be combined with parallelization techniques to further improve throughput.
- Potential to enable decoding for larger code distances under real-time constraints.
- Decoder confidence might serve as a signal for other optimizations in quantum error correction pipelines.
Load-bearing premise
Decoder confidence reliably indicates when a smaller window suffices without missing errors that would lead to undetected logical errors.
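One way to probe this premise empirically is a Monte-Carlo check of whether logical errors stay rare among high-confidence decodes. The toy noise model below, in which errors become rarer as confidence grows, is an assumption chosen for illustration, not the paper's model:

```python
import random

random.seed(0)

def decode_trial():
    """Toy stand-in for one decoding step: returns (confidence, logical_error).
    The error probability shrinks with confidence by construction here;
    the premise is that real decoders behave similarly."""
    conf = random.random()
    err = random.random() < 0.1 * (1.0 - conf)
    return conf, err

trials = [decode_trial() for _ in range(10_000)]
# Conditional logical-error rate among high-confidence decodes: the premise
# holds only if this stays at or below the target rate.
high = [err for conf, err in trials if conf >= 0.9]
error_rate_high_conf = sum(high) / len(high)
```

Replacing `decode_trial` with calls into a real decoder over sampled syndromes would turn this sketch into the correlation analysis the premise requires.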
What would settle it
Simulations in which the adaptive decoder shows a higher logical error rate than the fixed-window decoder for the same code distance and noise model, or no reduction in average decoding time, would refute the claim.
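The time-overhead half of this test can be sketched as a toy harness that runs a fixed and a confidence-gated window over the same stream of simulated steps. The uniform confidence draws and the window sizes are assumptions for illustration only:

```python
import random

random.seed(2)

def simulate(steps: int = 5000, small: int = 3, full: int = 9,
             threshold: float = 0.9):
    """Compare total decoding-time units for a fixed window vs. a
    confidence-gated adaptive window on the same simulated stream."""
    fixed_time = 0
    adaptive_time = 0
    for _ in range(steps):
        conf = random.random()  # stand-in for a reported confidence score
        fixed_time += full
        adaptive_time += small if conf >= threshold else full
    return fixed_time, adaptive_time

fixed_time, adaptive_time = simulate()
# The claim fails if adaptive_time shows no reduction over fixed_time,
# or (in a full experiment) if the logical error rate rises.
```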
Original abstract
Window decoding, first proposed to reduce decoding complexity for real-time decoding, is an essential component to realize scalable, universal-fault tolerant computation. Prior work has focused on improving throughput through parallelization and reducing reaction time via speculation on window boundaries. However, these methods use a fixed window size d, paying a fixed decoding time overhead for each window. In practice, we find this overhead of a fixed window size unnecessary in many cases due to the sparsity of average-case errors in QEC. Leveraging this insight, in this paper we propose an adaptive window decoding technique based on decoder confidence. This technique reduces the overhead in decoding time thus reducing reaction time without compromising on logical error rates. We benchmark adaptive window decoding across different codes and hardware inspired noise models. Our results show that this adaptive technique reaches the target error rate while maintaining a low decoding time overhead across different codes, and under different noise models.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes ADaPT, an adaptive-window decoding technique for quantum error correction that dynamically sizes decoding windows according to decoder confidence scores. The central claim is that this reduces average decoding-time overhead relative to fixed-window methods while still meeting target logical error rates, with supporting benchmarks across multiple codes and hardware-inspired noise models.
Significance. If the empirical results hold under rigorous validation, the method could meaningfully improve reaction times for real-time decoding in scalable fault-tolerant architectures by exploiting error sparsity, without requiring changes to the underlying code or decoder. The cross-code, cross-noise-model evaluation is a positive feature for practical relevance.
major comments (2)
- [Method (adaptive-window algorithm description)] The soundness of the adaptive scheme rests on the unproven assumption that high-confidence decoder outputs reliably indicate that shortening the window will not permit undetected logical errors to accumulate across successive windows. No formal bound, correlation analysis, or worst-case counter-example search is supplied to quantify the risk that sparse but critical errors produce misleadingly high confidence scores.
- [Results and benchmarking sections] The abstract claims that the technique 'reaches the target error rate while maintaining a low decoding time overhead,' yet the manuscript supplies no quantitative tables, error bars, threshold-selection procedure, or statistical validation of the chosen confidence cutoffs. Without these data the central empirical claim cannot be assessed.
minor comments (2)
- [Method] Notation for the confidence metric and the precise rule for window-size selection should be defined with an equation or pseudocode block rather than prose alone.
- [Figures] Figure captions and axis labels in the benchmarking plots should explicitly state the fixed-window baseline size, the noise-model parameters, and the number of Monte-Carlo shots used.
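As a concrete instance of what the requested definition could look like, one option is to derive confidence from the weight gap between the best correction and the best correction in the complementary logical class. Both the gap-based metric and the logistic squashing below are hypothetical choices for illustration, not the paper's definition:

```python
import math

def confidence_from_gap(w_best: float, w_alt: float) -> float:
    """Map the weight gap between the chosen correction (w_best) and the
    lightest correction in the opposing logical class (w_alt) into (0, 1).
    A large positive gap means the alternative is much less likely, so
    confidence approaches 1. (Illustrative metric, not the paper's.)"""
    return 1.0 / (1.0 + math.exp(-(w_alt - w_best)))
```

The window-selection rule would then compare this score against a calibrated threshold.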
Simulated Author's Rebuttal
We thank the referee for their constructive comments on our manuscript. We address each major point below and have revised the manuscript to strengthen the presentation and empirical support where appropriate.
Point-by-point responses
-
Referee: The soundness of the adaptive scheme rests on the unproven assumption that high-confidence decoder outputs reliably indicate that shortening the window will not permit undetected logical errors to accumulate across successive windows. No formal bound, correlation analysis, or worst-case counter-example search is supplied to quantify the risk that sparse but critical errors produce misleadingly high confidence scores.
Authors: We agree that the adaptive scheme is grounded in an empirical assumption rather than a formal proof. In the revised manuscript we have added a dedicated subsection under Methods that reports correlation analysis between decoder confidence scores and logical-error occurrences, together with additional Monte-Carlo simulations that deliberately inject sparse critical-error patterns. These results quantify the observed risk under the tested noise models. A general worst-case theoretical bound remains outside the scope of the present work, as it would require noise-model assumptions that do not hold universally; we now explicitly state this limitation. revision: partial
-
Referee: The abstract claims that the technique 'reaches the target error rate while maintaining a low decoding time overhead,' yet the manuscript supplies no quantitative tables, error bars, threshold-selection procedure, or statistical validation of the chosen confidence cutoffs. Without these data the central empirical claim cannot be assessed.
Authors: We appreciate the referee highlighting this presentational gap. The revised manuscript now includes explicit tables that report logical error rates, average decoding-time overhead, and overhead reductions for every code and noise model examined, each accompanied by error bars obtained from repeated independent trials. We have also expanded the Methods section to describe the threshold-selection procedure for the confidence cutoffs and the bootstrap-based statistical validation used to confirm that target error rates are preserved. revision: yes
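A percentile-bootstrap validation of a measured logical error rate can be sketched as follows; the shot and failure counts are hypothetical, and the paper's actual procedure may differ:

```python
import random

random.seed(1)

def bootstrap_ci(failures: int, shots: int, n_boot: int = 1000,
                 alpha: float = 0.05):
    """Percentile-bootstrap confidence interval for a logical error rate
    estimated from Monte-Carlo shots: resample the shot outcomes with
    replacement and take the alpha/2 and 1-alpha/2 quantiles."""
    outcomes = [1] * failures + [0] * (shots - failures)
    estimates = sorted(
        sum(random.choices(outcomes, k=shots)) / shots
        for _ in range(n_boot)
    )
    lo = estimates[int(alpha / 2 * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# e.g. 20 logical failures observed in 2000 shots (hypothetical numbers)
lo, hi = bootstrap_ci(failures=20, shots=2000)
```

Checking that the upper interval endpoint sits below the target error rate is the natural acceptance criterion here.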
Circularity Check
No circularity: adaptive decoding is an independent algorithmic change with empirical validation
Full rationale
The paper introduces an adaptive-window decoding technique that sizes windows according to decoder confidence, claiming reduced average decoding time without raising logical error rates. No equations, fitted parameters, or self-citations appear in the derivation chain; the method is presented as a direct algorithmic modification motivated by the sparsity of errors, and its performance is assessed via benchmarks across codes and noise models rather than by construction or tautology. The central claim therefore remains self-contained and externally testable.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
- [1] Y. Akahoshi, R. Toshio, J. Fujisaki, H. Oshima, S. Sato, and K. Fujii, "Runtime reduction in lattice surgery utilizing time-like soft information," 2025. arXiv:2510.21149.
- [2] H. Bombín, M. Pant, S. Roberts, and K. I. Seetharam, "Fault-tolerant postselection for low-overhead magic state preparation," PRX Quantum, vol. 5, p. 010302, 2024.
- [3] S. Bravyi, A. W. Cross, J. M. Gambetta, D. Maslov, P. Rall, and T. J. Yoder, "High-threshold and low-overhead fault-tolerant quantum memory," Nature, vol. 627, no. 8005, pp. 778–782, 2024.
- [4] N. Delfosse and N. H. Nickerson, "Almost-linear time decoding algorithm for topological codes," Quantum, vol. 5, p. 595, 2021.
- [5] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, "Topological quantum memory," Journal of Mathematical Physics, vol. 43, no. 9, pp. 4452–4505, 2002.
- [6] A. G. Fowler, "Optimal complexity correction of correlated errors in the surface code," 2013. arXiv:1310.0863.
- [7] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, "Surface codes: Towards practical large-scale quantum computation," Physical Review A, vol. 86, no. 3, p. 032324, 2012.
- [8] S. Fuhui Lin, E. C. Peterson, K. Sankar, and P. Sivarajah, "Spatially parallel decoding for multi-qubit lattice surgery," Quantum Science and Technology, vol. 10, no. 3, p. 035007, 2025.
- [9] C. Gidney, "Stim: a fast stabilizer circuit simulator," Quantum, vol. 5, p. 497, 2021.
- [10] C. Gidney, "Stability experiments: The overlooked dual of memory experiments," Quantum, vol. 6, p. 786, 2022.
- [11] C. Gidney, "How to factor 2048 bit RSA integers with less than a million noisy qubits," 2025. arXiv:2505.15917.
- [12] C. Gidney, M. Newman, P. Brooks, and C. Jones, "Yoked surface codes," Nature Communications, vol. 16, p. 4498, 2025.
- [13] C. Gidney, N. Shutty, and C. Jones, "Magic state cultivation: growing T states as cheap as CNOT gates," 2024. arXiv:2409.17595.
- [14] O. Higgott, "PyMatching: A Python package for decoding quantum codes with minimum-weight perfect matching," ACM Transactions on Quantum Computing, vol. 3, no. 3, pp. 1–16, 2022.
- [15] O. Higgott, T. C. Bohdanowicz, A. Kubica, S. T. Flammia, and E. T. Campbell, "Improved decoding of circuit noise and fragile boundaries of tailored surface codes," Physical Review X, vol. 13, no. 3, p. 031007, 2023.
- [16] T. Hillmann, L. Berent, A. O. Quintavalle, J. Eisert, R. Wille, and J. Roffe, "Localized statistics decoding for quantum low-density parity-check codes," Nature Communications, vol. 16, no. 8214, 2025.
- [17] S. Huang and K. R. Brown, "Between Shor and Steane: A unifying construction for measuring error syndromes," Physical Review Letters, vol. 127, p. 090505, 2021.
- [18] A. R. Iyengar, M. Papaleo, P. H. Siegel, J. K. Wolf, A. Vardy, M. Lentmaier, and G. P. Fettweis, "Windowed decoding of protograph-based LDPC convolutional codes over erasure channels," IEEE Transactions on Information Theory, vol. 58, no. 4, pp. 2303–2320, 2012.
- [19] S.-H. Lee, L. H. English, and S. D. Bartlett, "Efficient post-selection for general quantum LDPC codes," 2026. arXiv:2510.05795.
- [20] P. Panteleev and G. Kalachev, "Degenerate quantum LDPC codes with good finite length performance," Quantum, vol. 5, p. 585, 2021.
- [21] M. A. Perlin, "qLDPC," https://github.com/qLDPCOrg/qLDPC, 2023.
- [22] J. Roffe, "LDPC: Python tools for low density parity check codes." [Online]. Available: https://pypi.org/project/ldpc/
- [24] K. Sahay, P.-K. Tsai, K. Chang, Q. Su, T. B. Smith, S. Singh, and S. Puri, "Fold-transversal surface code cultivation," 2026. arXiv:2509.05212.
- [25] L. Skoric, D. E. Browne, K. M. Barnes, N. I. Gillespie, and E. T. Campbell, "Parallel window decoding enables scalable fault tolerant quantum computation," Nature Communications, vol. 14, p. 7040, 2023.
- [26] S. Stein, S. Xu, A. W. Cross, T. J. Yoder, A. Javadi-Abhari, C. Liu, K. Liu, Z. Zhou, C. Guinn, Y. Ding, Y. Ding, and A. Li, "Architectures for heterogeneous quantum error correction codes," arXiv preprint arXiv:2411.03202, 2024.
- [27] X. Tan, F. Zhang, R. Chao, Y. Shi, and J. Chen, "Scalable surface-code decoders with parallelization in time," PRX Quantum, vol. 4, no. 4, p. 040344, 2023.
- [28] B. M. Terhal, "Quantum error correction for quantum memories," Reviews of Modern Physics, vol. 87, no. 2, pp. 307–346, 2015.
- [29] R. Toshio, K. Kishi, J. Fujisaki, H. Oshima, S. Sato, and K. Fujii, "Decoder switching: Breaking the speed-accuracy tradeoff in real-time quantum error correction," arXiv preprint arXiv:2510.25222, 2025.
- [30] J. Viszlai, J. D. Chadwick, S. Joshi, G. S. Ravi, Y. Li, and F. T. Chong, "SWIPER: Minimizing fault-tolerant quantum program latency via speculative window decoding," in Proceedings of the 52nd Annual International Symposium on Computer Architecture (ISCA '25), 2025, pp. 1386–1401.
- [31] W. Yang, J. Chadwick, M. H. Teo, J. Viszlai, and F. Chong, "Spacetime-efficient and hardware-compatible complex quantum logic units in QLDPC codes," arXiv preprint arXiv:2602.14273, 2026.
- [32] H. Zhou, C. Duckering, C. Zhao, D. Bluvstein, M. Cain, A. Kubica, S.-T. Wang, and M. D. Lukin, "Resource analysis of low-overhead transversal architectures for reconfigurable atom arrays," in Proceedings of the 52nd Annual International Symposium on Computer Architecture, 2025, pp. 1432–1448.