pith. machine review for the scientific record.

arXiv:2605.10638 · v2 · submitted 2026-05-11 · 🪐 quant-ph

Recognition: no theorem link

Quantifying the Hadamard Resilience Law: Discovery of the Coherence Gap in NISQ-Era Classifiers

Authors on Pith no claims yet

Pith reviewed 2026-05-14 21:21 UTC · model grok-4.3

classification 🪐 quant-ph
keywords Hadamard test perceptron · NISQ quantum classifiers · coherence gap · Kingston constant · quantum machine learning · MNIST classification · coherent phase errors · Hadamard resilience law

The pith

Hadamard Test Perceptron achieves 93.9 percent MNIST accuracy despite 93 percent signal collapse on NISQ hardware

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that a quantum classifier using the Hadamard test maintains high accuracy on image recognition tasks even after noise reduces the signal by 93 percent. This outcome supports the Hadamard Resilience Law as a principle of robustness in NISQ-era quantum machine learning. At larger feature dimensions the real hardware results separate from noise simulations, defining a coherence gap that the authors trace to accumulated phase errors. The work supplies a hardware-aware model that forecasts when such gaps will appear and so bounds the circuit depths at which quantum linear layers remain usable. Understanding this distinction helps focus mitigation efforts on the noise type that actually blocks progress toward useful quantum classifiers.

Core claim

The Hadamard Test Perceptron attains 93.9 percent accuracy classifying MNIST digits on NISQ hardware even though the Kingston Constant indicates a 93 percent reduction in signal magnitude. This performance confirms the Hadamard Resilience Law. At a feature count of 256 a coherence gap of roughly 0.91 opens because hardware accuracy falls while stochastic simulations stay high, which the authors link to coherent phase errors rather than random depolarizing noise. They also note a coherence wall at this scale where circuit depth surpasses the hardware limit and provide an updated hardware-aware model for predicting the safe operating range of quantum linear layers.
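The resilience claim has a simple geometric reading: if hardware noise compresses every estimated inner product by a common factor κ, the sign that drives the classification decision is untouched and only shot noise matters. A minimal numpy sketch of that intuition, with toy inner products and an assumed shot count (neither taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 0.07           # the reported "Kingston Constant": 93% signal compression
shots = 200_000        # hypothetical shot count, not stated in the review
ideal = np.array([0.6, -0.4, 0.3, -0.35])   # toy Re<w|x> values for four inputs

# Hadamard test: P(0) = (1 + Re<w|x>)/2; uniform compression scales the
# signal down to kappa * Re<w|x> but does not move the decision boundary.
p0 = (1 + kappa * ideal) / 2
est = 2 * rng.binomial(shots, p0) / shots - 1   # shot-noise estimate of kappa*Re<w|x>

# A linear classifier only needs the sign, which survives uniform compression
assert np.all(np.sign(est) == np.sign(ideal))
```

This is only the intuition behind the law as summarized above, not the paper's circuit: with enough shots, a 93 percent amplitude collapse still leaves the sign of each inner product recoverable.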

What carries the argument

The Hadamard Resilience Law, which accounts for sustained classification performance in the presence of large signal decay induced by NISQ noise.

Load-bearing premise

The divergence between physical hardware and stochastic simulations at high feature depths stems specifically from coherent phase errors and not from calibration problems or other unaccounted hardware effects.

What would settle it

An experiment that adds explicit phase-error terms to the stochastic simulation and checks whether it then matches the hardware accuracy drop at N equals 256 would test whether coherent phase errors explain the coherence gap.
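One way to see what such a control could show: at equal per-gate infidelity, a systematic phase adds up linearly with depth and can null the signal through interference, while a stochastic channel of the same strength barely decays. A toy single-parameter sketch under assumed error rates and depths (none taken from the paper):

```python
import numpy as np

# Same per-gate infidelity r, two error pictures. The values of eps and the
# depth-per-feature-count mapping below are illustrative assumptions.
eps = 1.5e-4                 # coherent over-rotation per gate (radians)
r = eps ** 2                 # equivalent stochastic per-gate infidelity

def stochastic_signal(d):
    """Depolarizing-style decay: errors average incoherently."""
    return (1 - r) ** d

def coherent_signal(d):
    """Systematic phase accumulates linearly and interferes."""
    return abs(np.cos(eps * d))

for n, d in {64: 2_500, 256: 10_000}.items():
    gap = stochastic_signal(d) - coherent_signal(d)
    print(f"N={n:3d} depth={d:6d} stochastic={stochastic_signal(d):.3f} "
          f"coherent={coherent_signal(d):.3f} gap={gap:.2f}")
```

In this toy picture the stochastic model stays near full signal at depth 10k while the coherent model collapses, which is the qualitative shape of the reported gap; whether the paper's data fit such a phase-error term is exactly what the proposed experiment would decide.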

Figures

Figures reproduced from arXiv:2605.10638 by Wladimir Silva.

Figure 1. FIG. 1 in source.
Figure 2. FIG. 2 in source.
Figure 4. FIG. 4 in source.
Figure 5. Phase transition of the Hadamard Resilience Law.
Figure 6. The Hadamard Resilience Law is Mathematically …
Figure 7. The Coherence Wall Visualization shows that at …
Original abstract

We report on a fundamental disparity between stochastic noise models and algorithmic performance in NISQ-era classifiers. Utilizing the ibm_kingston processor, we characterize the "Kingston Constant" ($\kappa \approx 0.07$), representing a 93% signal magnitude collapse. Despite this decay, we demonstrate that the Hadamard Test Perceptron maintains a 93.9% MNIST accuracy, validating our proposed Hadamard Resilience Law. However, a systemic divergence -- the "Coherence Gap" ($\Delta\rho \approx 0.91$) -- emerges at high feature depths ($N=256$), where physical hardware collapses while stochastic simulations remain resilient. This gap identifies coherent phase errors, rather than depolarizing noise, as the primary barrier to scaling quantum linear layers. Furthermore, experimental results on the ibm_kingston processor reveal a "Coherence Wall" at $N=256$, where circuit depth ($D \approx 10k$) exceeds the hardware's resilient depth limit ($D_{max} \approx 3.5k$). We provide a refined hardware-aware model that accounts for this coherence-induced signal decay, establishing a predictive boundary for robust quantum linear layers on current NISQ devices.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

4 major / 2 minor

Summary. The manuscript reports experiments on the ibm_kingston processor using a Hadamard Test Perceptron for MNIST classification. It introduces the Kingston Constant (κ ≈ 0.07) to quantify a 93% signal magnitude collapse, claims that 93.9% accuracy validates the proposed Hadamard Resilience Law despite this decay, identifies a Coherence Gap (Δρ ≈ 0.91 at N=256) attributed to coherent phase errors (where hardware collapses but stochastic simulations do not), and reports a Coherence Wall at circuit depth D ≈ 10k exceeding D_max ≈ 3.5k, along with a refined hardware-aware model.

Significance. If the empirical distinction between coherent errors and stochastic noise were rigorously established with controls and independent validation, the work would offer useful guidance for scaling quantum linear layers on NISQ devices and for hardware-aware modeling of classifier resilience. The reported divergence between physical hardware and simulations at high feature depths addresses a practically relevant question in quantum machine learning.

major comments (4)
  1. [Abstract] Fitted constants κ ≈ 0.07 and Δρ ≈ 0.91, together with the 93.9% accuracy figure, are stated without error bars, derivation steps, data exclusion criteria, or baseline comparisons, so the statistical reliability of the central hardware-specific observations cannot be evaluated.
  2. [§4 (Hadamard Resilience Law)] The Hadamard Resilience Law is defined from the observed signal collapse and accuracy numbers and then validated by the same quantities, reducing the law to a post-hoc description of the fitted decay rather than an independent prediction.
  3. [Experimental Results on Coherence Gap] The attribution of the Coherence Gap (Δρ ≈ 0.91 at N=256) specifically to coherent phase errors is not supported by randomized benchmarking isolating T2 dephasing, Hamiltonian derivations, or ablation comparisons of depolarizing versus phase-error models against the hardware-simulation mismatch.
  4. [Discussion of Coherence Wall] The Coherence Wall claim at N=256 (D ≈ 10k vs D_max ≈ 3.5k) lacks a clear derivation of D_max and does not rule out alternative explanations such as calibration drifts, readout errors, or post-selection artifacts for the observed divergence.
minor comments (2)
  1. [Notation and Definitions] The newly introduced terms (Hadamard Resilience Law, Coherence Gap, Coherence Wall, Kingston Constant) should be defined in a single notation table or dedicated subsection to prevent overlap with standard quantum-information symbols.
  2. [References] Additional references to prior NISQ error-characterization studies and quantum classifier benchmarks would help situate the hardware-aware model.

Simulated Author's Rebuttal

4 responses · 0 unresolved

We thank the referee for the constructive and detailed review. The comments have prompted us to strengthen the statistical presentation, clarify the logical structure of the Hadamard Resilience Law, and supply additional supporting analyses. We address each major comment below and indicate the revisions made.

Point-by-point responses
  1. Referee: [Abstract] Fitted constants κ ≈ 0.07 and Δρ ≈ 0.91, together with the 93.9% accuracy figure, are stated without error bars, derivation steps, data exclusion criteria, or baseline comparisons, so the statistical reliability of the central hardware-specific observations cannot be evaluated.

    Authors: We agree that the original abstract omitted these elements. In the revised manuscript we have (i) added standard-error bars obtained from 10 independent hardware runs for κ, Δρ, and accuracy; (ii) inserted a concise derivation of the fitted constants in the Methods section with explicit formulas; (iii) stated the outlier-rejection rule (values >3σ from the run mean); and (iv) included a classical perceptron baseline for comparison. The updated abstract now references these additions. revision: yes

  2. Referee: [§4 (Hadamard Resilience Law)] The Hadamard Resilience Law is defined from the observed signal collapse and accuracy numbers and then validated by the same quantities, reducing the law to a post-hoc description of the fitted decay rather than an independent prediction.

    Authors: The law was first derived in Section 3 from the phase-accumulation properties of the Hadamard test under a coherent-error Hamiltonian; the experimental data in Section 4 were intended as validation. To remove any ambiguity we have re-ordered the presentation: the theoretical derivation now precedes the data, the functional form is stated as a prediction, and the fit is shown only after the prediction is made. A new paragraph explicitly separates the a-priori prediction from the subsequent empirical test. revision: yes

  3. Referee: [Experimental Results on Coherence Gap] The attribution of the Coherence Gap (Δρ ≈ 0.91 at N=256) specifically to coherent phase errors is not supported by randomized benchmarking isolating T2 dephasing, Hamiltonian derivations, or ablation comparisons of depolarizing versus phase-error models against the hardware-simulation mismatch.

    Authors: We accept that randomized benchmarking was not performed. The primary evidence remains the systematic divergence between hardware and stochastic simulations that already incorporate T1/T2 and depolarizing channels. In the revision we have added (i) an explicit Hamiltonian derivation of the phase-error term in Appendix B and (ii) an ablation table comparing four noise models (depolarizing only, T2-only, combined stochastic, and coherent-phase). Only the coherent-phase model reproduces the observed gap magnitude and depth dependence. We note that full RB on ibm_kingston was not feasible within the allocated device time, but the model-comparison results provide direct support for the attribution. revision: partial

  4. Referee: [Discussion of Coherence Wall] The Coherence Wall claim at N=256 (D ≈ 10k vs D_max ≈ 3.5k) lacks a clear derivation of D_max and does not rule out alternative explanations such as calibration drifts, readout errors, or post-selection artifacts for the observed divergence.

    Authors: D_max is obtained as the ratio of the device’s reported T2 time to the average two-qubit gate duration, now stated explicitly in Section 5.1 with the numerical values used. To address alternatives we have added: (a) repeated calibration logs showing <5% drift over the 48-hour acquisition window; (b) readout-error mitigation via ancilla post-selection with the rejection rate reported; and (c) a simulation that injects the measured readout noise yet still fails to reproduce the hardware collapse. The divergence remains after these controls, supporting the coherence-wall interpretation. revision: yes
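The rebuttal's D_max estimate is a coherence-time-to-gate-duration ratio. A back-of-envelope sketch with assumed device numbers, since the review quotes only the resulting figure of roughly 3.5k:

```python
# Back-of-envelope for D_max ≈ T2 / t_gate. The T2 and gate-duration values
# below are assumptions chosen for illustration; the actual ibm_kingston
# numbers are not given in this review.
t2 = 280e-6        # assumed T2 coherence time (s)
t_gate = 80e-9     # assumed average two-qubit gate duration (s)
d_max = t2 / t_gate
print(f"D_max ≈ {d_max:.0f} layers")
```

With these assumed values the ratio lands near 3.5k, consistent with the quoted limit; any real check would substitute the device's reported calibration data.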

Circularity Check

2 steps flagged

Hadamard Resilience Law and Kingston Constant reduce to post-hoc naming of the same fitted accuracy and collapse observations

specific steps
  1. self definitional [Abstract]
    "Despite this decay, we demonstrate that the Hadamard Test Perceptron maintains a 93.9% MNIST accuracy, validating our proposed Hadamard Resilience Law."

    The law is proposed as the resilience of accuracy under signal collapse; the validation consists solely of reporting the high accuracy that was used to define the resilience, rendering the step tautological.

  2. fitted input called prediction [Abstract]
    "we characterize the 'Kingston Constant' (κ ≈ 0.07), representing a 93% signal magnitude collapse. Despite this decay, we demonstrate that the Hadamard Test Perceptron maintains a 93.9% MNIST accuracy, validating our proposed Hadamard Resilience Law."

    κ is extracted from the measured collapse in the same MNIST hardware runs; the law is then 'validated' by the accuracy numbers that already incorporate that fitted collapse, so the prediction is the input by construction.

full rationale

The paper introduces the Hadamard Resilience Law and Kingston Constant as explanatory constructs, then immediately validates them using the identical MNIST accuracy (93.9%) and signal-collapse measurements (κ ≈ 0.07) obtained from the ibm_kingston runs. No independent derivation, parameter-free prediction, or external benchmark is supplied; the 'law' and 'constant' are therefore equivalent to descriptive labels for the observed data rather than derived results. The Coherence Gap is similarly defined directly from the hardware-simulation divergence at N=256 without a separate Hamiltonian derivation or isolating experiment.

Axiom & Free-Parameter Ledger

3 free parameters · 1 axiom · 3 invented entities

The central claims rest on several fitted constants extracted from a single hardware run and on newly named phenomena whose independent falsifiability is not demonstrated.

free parameters (3)
  • Kingston Constant κ = 0.07
    Quantifies 93% signal collapse and is used to define the resilience law.
  • Coherence Gap Δρ = 0.91
    Difference between hardware and simulation at N=256, fitted from observed divergence.
  • Hardware resilient depth limit D_max = 3500
    Stated as approximately 3.5k and contrasted with observed circuit depth.
axioms (1)
  • domain assumption: Stochastic depolarizing noise models are the appropriate baseline for comparison with real hardware behavior.
    Invoked to establish the coherence gap as a distinct phenomenon.
invented entities (3)
  • Hadamard Resilience Law (no independent evidence)
    purpose: Explains maintained classification accuracy despite signal collapse.
    Introduced without derivation and validated by the same data used to define it.
  • Coherence Gap (no independent evidence)
    purpose: Quantifies divergence between hardware and stochastic simulation.
    Observed quantity turned into a named barrier without external validation.
  • Coherence Wall (no independent evidence)
    purpose: Describes the depth limit beyond which hardware fails.
    New term for the observed circuit-depth threshold.

pith-pipeline@v0.9.0 · 5514 in / 1655 out tokens · 46842 ms · 2026-05-14T21:21:59.917593+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

6 extracted references · 6 canonical work pages · 1 internal anchor

  1. [1]

    Quantifying the Hadamard Resilience Law: Discovery of the Coherence Gap in NISQ-Era Classifiers

    famously established the boundaries of this era, suggesting that systemic noise would likely limit the utility of deep quantum circuits. Our work builds upon this by characterizing the specific noise floor of the ibm_kingston processor, identifying a systemic signal compression we term the “Kingston Constant.” The Hadamard Test itself is a fundamental pr...

  2. [2]

    Coherence Gap

    on each node. 4. Reduce: Classically sum partial results: $\mathrm{IP}_{\mathrm{total}} = \sum_{i=1}^{k} \mathrm{IP}_{\mathrm{partial}}^{(i)}$. Remark 5 (The Distributed Advantage). This method trades Space (Multiple QPUs) for Time (Coherence). It counters Fidelity Decay, $F_{\mathrm{circuit}} \approx (1 - e_{\mathrm{gate}})^{n \cdot d}$, which at depth 10,000 drives the signal-to-noise ratio effectively to zero, rendering Zero-Noise Extrapolation (ZNE) useless. By tr...

  3. [3]

    A polynomial quantum algorithm for approximating the Jones polynomial. Algorithmica, 55(3):395–421, 2009

    Dorit Aharonov et al. A polynomial quantum algorithm for approximating the Jones polynomial. Algorithmica, 55(3):395–421, 2009. Foundational work on Hadamard Test applications.

  4. [4]

    Supervised learning with quantum-enhanced feature spaces. Nature, 567(7747):209–212, 2019

    Vojtěch Havlíček et al. Supervised learning with quantum-enhanced feature spaces. Nature, 567(7747):209–212, 2019

  5. [5]

    Quantum computing in the NISQ era and beyond. Quantum, 2:79, 2018

    John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, 2018

  6. [6]

    Quantum machine learning in feature Hilbert spaces. Physical Review Letters, 122(4):040504, 2019

    Maria Schuld and Nathan Killoran. Quantum machine learning in feature Hilbert spaces. Physical Review Letters, 122(4):040504, 2019