pith. machine review for the scientific record.

arXiv:2605.02066 · v1 · submitted 2026-05-03 · 🪐 quant-ph · cond-mat.dis-nn · physics.comp-ph

Recognition: 3 theorem links · Lean Theorem

Accelerating Noisy Variational Quantum Algorithms with Physics-Informed Denoising Networks

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 19:13 UTC · model grok-4.3

classification 🪐 quant-ph · cond-mat.dis-nn · physics.comp-ph
keywords Physics-Informed Denoising Network · Zero-Noise Extrapolation · Variational Quantum Algorithms · Quantum Error Mitigation · QAOA · VQE · Noise Resilience

The pith

A physics-informed denoising network learns to mimic zero-noise extrapolation, matching its accuracy on variational quantum tasks while using four to six times fewer circuit executions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a Physics-Informed Denoising Network (PIDN) that learns a surrogate model of zero-noise extrapolation dynamics to accelerate noisy variational quantum algorithms. Instead of repeatedly evaluating circuits at multiple noise levels for each optimization step, PIDN takes the current noisy observation together with the historical parameter trajectory and directly outputs denoised expectation values and gradient estimates. Training incorporates a physics-informed loss that enforces consistency with gradient-descent trajectories, allowing the network to preserve directional accuracy without explicit multi-noise sampling. Benchmarks on QAOA for graph and spin-model instances, plus VQE for small molecules, show that PIDN reaches solution quality comparable to full ZNE while cutting circuit executions roughly four- to six-fold. This reduction in overhead matters because it makes error-mitigated variational optimization feasible on hardware where shot budgets are strictly limited.

Core claim

The central claim is that a neural network trained to reproduce ZNE-mitigated expectation values and gradients from single-noise observations plus trajectory history, regularized by a physics-informed loss that preserves descent dynamics, can replace repeated multi-noise ZNE evaluations and deliver equivalent optimization performance on QAOA and VQE instances while cutting circuit executions by a factor of approximately four to six.

What carries the argument

The Physics-Informed Denoising Network (PIDN), a surrogate model that maps noisy measurements and historical optimization trajectories to denoised expectation values and gradients, trained to match ZNE outputs while enforcing consistency with the underlying gradient descent dynamics through an auxiliary loss term.
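The mechanics of that auxiliary loss can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the function name, the weighting `lam`, and the synthetic data are assumptions, and the paper's PIDN is a trained neural network with trajectory-history inputs rather than a closed-form loss on a single step.

```python
import numpy as np

def physics_informed_loss(pred_cost, pred_grad, zne_cost, zne_grad,
                          theta_t, theta_next, eta, lam=1.0):
    """Toy sketch: supervised term matches ZNE-mitigated targets;
    physics term demands the denoised gradient reproduce the observed
    descent step theta_{t+1} = theta_t - eta * g_t."""
    supervised = (pred_cost - zne_cost) ** 2 + np.sum((pred_grad - zne_grad) ** 2)
    physics = np.sum((theta_next - (theta_t - eta * pred_grad)) ** 2)
    return supervised + lam * physics

# Minimal usage on synthetic data (all values assumed for illustration)
rng = np.random.default_rng(0)
theta_t = rng.normal(size=4)
g_true = rng.normal(size=4)
eta = 0.1
theta_next = theta_t - eta * g_true   # one clean descent step
loss_good = physics_informed_loss(1.0, g_true, 1.0, g_true,
                                  theta_t, theta_next, eta)
loss_bad = physics_informed_loss(1.0, -g_true, 1.0, g_true,
                                 theta_t, theta_next, eta)
assert loss_good < loss_bad           # a wrong direction is penalized twice over
```

The point of the sketch is the structure: the physics term penalizes directional errors even when the predicted cost is accurate, which is why the ablation (below) finds it necessary for gradient consistency.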

If this is right

  • PIDN can be inserted directly into existing VQA training loops to replace the multi-noise sampling stage of ZNE.
  • The approach maintains gradient cosine similarity above 0.95 with ZNE across the tested QAOA and VQE problems.
  • Performance remains comparable to ZNE on 3-regular graphs, Sherrington-Kirkpatrick, transverse-field Ising, and small molecular Hamiltonians.
  • The method degrades only in regimes where ZNE itself becomes unreliable.
  • Ablation confirms that the physics-informed loss is required to keep denoised gradients directionally consistent.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the low-frequency landscape assumption continues to hold for larger qubit counts, PIDN could extend error-mitigated variational optimization to system sizes where repeated ZNE sampling becomes prohibitive.
  • The reliance on historical trajectory input opens the possibility of online retraining or adaptation as the optimization landscape evolves.
  • The surrogate idea may transfer to other noise-mitigation families that share the property of producing multiple noisy estimates of the same observable.
  • Combining PIDN with classical post-processing or other lightweight mitigators might yield further reductions in total shot count.

Load-bearing premise

The optimization trajectories contain enough low-frequency structure that a network trained on past ZNE-mitigated data can reliably predict clean gradients from current noisy observations without needing fresh multi-noise evaluations at every step.

What would settle it

Running PIDN and full ZNE side-by-side on a new variational instance and observing either a final solution energy that deviates beyond statistical noise from the ZNE result or a sustained drop in gradient cosine similarity below 0.9 while ZNE itself remains stable.
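That falsification test reduces to monitoring one statistic. A minimal sketch, using the 0.9 threshold from the text and an assumed five-step window to operationalize "sustained":

```python
import numpy as np

def gradient_cosine_similarity(g_pidn, g_zne, eps=1e-12):
    """Directional agreement between surrogate and ZNE gradient estimates."""
    num = float(np.dot(g_pidn, g_zne))
    den = float(np.linalg.norm(g_pidn) * np.linalg.norm(g_zne)) + eps
    return num / den

def directional_failure(similarities, threshold=0.9, window=5):
    """Flag a sustained drop below `threshold` lasting `window` steps.
    The window length is an assumption, not specified by the paper."""
    run = 0
    for s in similarities:
        run = run + 1 if s < threshold else 0
        if run >= window:
            return True
    return False

sims = [0.97, 0.96, 0.85, 0.84, 0.83, 0.82, 0.81]
assert directional_failure(sims)      # five consecutive steps below 0.9
```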

Figures

Figures reproduced from arXiv: 2605.02066 by Jie Liu, Xin Wang.

Figure 1. Unified three-stage workflow for physics-informed reconstruction of ZNE optimization. Stage I (ZNE Data Acquisition) performs …
Figure 2. Workflow and architecture of the physics-informed denoising network (PIDN). Schematic illustration of the PIDN-assisted optimiza…
Figure 3. Noise-induced distortion of the optimization landscape. (a-c) Loss landscapes and optimization trajectories for (a) noiseless VQA, (b) …
Figure 4. Cosine similarity between PIDN-predicted gradients and …
Figure 5. AR as a function of total circuit executions for QAOA on …
Figure 6. Ground-state energy convergence for BeH…
Figure 7. Final ground-state energy deviation for VQE on BeH…
Figure 8. Average cosine similarity between PIDN-predicted gradients …
Figure 9. Dependence of PIDN trajectory-tracking accuracy on the …
Original abstract

Variational quantum algorithms are promising for near-term quantum computing, but are severely limited by hardware noise and the substantial circuit overhead required for error mitigation methods such as Zero-Noise Extrapolation (ZNE). We propose a Physics-Informed Denoising Network (PIDN) that reduces the cost of ZNE by learning a surrogate model of its optimization dynamics. By viewing the variational update as a trajectory in the parameter space, PIDN is trained to reproduce ZNE-mitigated expectation values and gradient directions while incorporating a physics-informed loss that preserves the gradient descent dynamics. Once trained, PIDN replaces repeated multi-noise evaluations with denoised expectation and gradient estimation directly from the current noisy observation and the historical trajectory, significantly reducing circuit executions. We benchmark the approach on the quantum approximate optimization algorithm for 3-regular graphs, Sherrington-Kirkpatrick, and transverse-field Ising models, as well as the variational quantum eigensolver for LiH, BeH$_2$ and H$_2$O. Across all tasks, PIDN attains performance comparable to ZNE, while reducing the number of circuit executions by a factor of approximately 4 to 6. Gradient cosine similarity with ZNE remains above 0.95 throughout training. Robustness analysis shows that PIDN fails only when ZNE itself becomes unreliable, and ablation studies confirm the necessity of the physics-informed loss for maintaining directional consistency. We further find that PIDN tracks optimization dynamics most accurately when the effective loss landscape retains strong low-frequency structure. These results establish PIDN as a scalable, resource-efficient strategy for noise-resilient variational optimization in the noisy intermediate-scale quantum regime.
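For orientation, the ZNE step that PIDN replaces can be sketched as a polynomial fit over noise-amplified measurements, evaluated at zero noise. The noise levels, the linear bias model, and the function name below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def zne_extrapolate(lambdas, costs, degree=1):
    """Fit C(lambda) to measured costs and return the extrapolated C(0)."""
    coeffs = np.polyfit(lambdas, costs, degree)
    return float(np.polyval(coeffs, 0.0))

# Synthetic example: three gate-folding amplification factors and a
# linear noise bias (both assumed for illustration).
lambdas = np.array([1.0, 1.5, 2.0])
true_cost = -1.0
noisy = true_cost + 0.3 * lambdas
estimate = zne_extrapolate(lambdas, noisy, degree=1)
assert abs(estimate - true_cost) < 1e-8
```

Every such estimate costs one circuit execution per noise level, and parameter-shift gradients multiply that cost again per parameter, which is the overhead the surrogate amortizes.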

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces a Physics-Informed Denoising Network (PIDN) as a learned surrogate for Zero-Noise Extrapolation (ZNE) in variational quantum algorithms. PIDN maps a single noisy observation plus historical trajectory to ZNE-mitigated expectation values and gradients, trained with a physics-informed loss that preserves gradient-descent dynamics. Benchmarks on QAOA instances (3-regular graphs, Sherrington-Kirkpatrick, transverse-field Ising) and VQE for LiH, BeH2, H2O show PIDN matching ZNE performance while reducing circuit executions by a factor of approximately 4-6, with gradient cosine similarity above 0.95; the method fails only when ZNE itself is unreliable.

Significance. If the reported speedup is net of training costs and generalizes, PIDN could meaningfully lower the circuit overhead of error mitigation for NISQ variational optimization. The physics-informed loss and high directional fidelity are constructive elements, and the robustness result (failure only when ZNE fails) is a positive control. However, the absence of training-cost accounting, statistical error bars, and simpler baselines limits the immediate impact on the field.

major comments (3)
  1. [Abstract] Abstract and benchmark results: the central claim of a 4-6x reduction in circuit executions does not state whether the cost of generating the ZNE training targets (multiple noise-scaled circuits per trajectory) is included. Without the number of training trajectories, their similarity to test cases, and the amortization threshold, the net savings for a single optimization run cannot be assessed.
  2. [Benchmark results] Benchmark results: no quantitative error bars, confidence intervals, or statistical tests are reported for either the speedup factor or the performance comparability to ZNE. This weakens the claim that PIDN 'attains performance comparable to ZNE' across all tasks.
  3. [Methods] Methods and results: the manuscript provides no comparison against simpler baselines (e.g., linear extrapolation of noisy values or basic ML denoisers without the physics-informed term). Such controls are needed to establish that the network architecture and physics loss are necessary for the observed gains rather than artifacts of the training regime.
minor comments (2)
  1. [Abstract] The abstract states that 'PIDN tracks optimization dynamics most accurately when the effective loss landscape retains strong low-frequency structure,' but this observation would benefit from a quantitative metric or figure reference in the main text.
  2. Notation for the physics-informed loss term and the historical-trajectory input could be defined more explicitly with an equation reference to aid reproducibility.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed review. We have revised the manuscript to address the concerns on training-cost accounting, statistical reporting, and baseline comparisons. Our point-by-point responses follow.

Point-by-point responses
  1. Referee: [Abstract] Abstract and benchmark results: the central claim of a 4-6x reduction in circuit executions does not state whether the cost of generating the ZNE training targets (multiple noise-scaled circuits per trajectory) is included. Without the number of training trajectories, their similarity to test cases, and the amortization threshold, the net savings for a single optimization run cannot be assessed.

    Authors: We agree that the original presentation did not make the distinction between training and inference costs explicit. The reported 4-6x factor describes the reduction during the variational optimization phase once the network is trained. In the revised manuscript we have added a dedicated paragraph in the Methods section that describes the training data generation (trajectories drawn from problem instances of the same class), the one-time nature of the training cost, and a qualitative amortization discussion indicating that net savings appear after a modest number of optimization runs. The abstract has been updated to qualify the speedup as applying after training. revision: yes

  2. Referee: [Benchmark results] Benchmark results: no quantitative error bars, confidence intervals, or statistical tests are reported for either the speedup factor or the performance comparability to ZNE. This weakens the claim that PIDN 'attains performance comparable to ZNE' across all tasks.

    Authors: We concur that the absence of variability measures weakens the quantitative claims. The revised manuscript now includes error bars (standard deviation across independent runs) on all benchmark plots and tables for final energies, iteration counts, and gradient cosine similarities. A brief statement on run-to-run consistency has also been added to the results section. revision: yes

  3. Referee: [Methods] Methods and results: the manuscript provides no comparison against simpler baselines (e.g., linear extrapolation of noisy values or basic ML denoisers without the physics-informed term). Such controls are needed to establish that the network architecture and physics loss are necessary for the observed gains rather than artifacts of the training regime.

    Authors: The manuscript already contains ablation studies isolating the physics-informed loss term. To directly address the request for simpler controls, we have added new experiments comparing PIDN against (i) linear extrapolation of the noisy expectation values and (ii) a standard feed-forward network trained without the physics-informed component. These results, now included in the revised figures and text, show that the full PIDN architecture with the physics loss yields higher gradient cosine similarity and more reliable optimization trajectories than either baseline. revision: yes

Circularity Check

0 steps flagged

No significant circularity; empirical ML surrogate with independent benchmarking

full rationale

The paper proposes an empirical machine-learning method (PIDN) trained on ZNE-generated targets to approximate mitigated expectation values and gradients from single noisy observations plus trajectories. No first-principles derivation chain is claimed that reduces by construction to the inputs; the central results are benchmarked performance equivalence and measured circuit-count reduction on QAOA/VQE tasks. Training on ZNE data is standard supervised learning and does not equate the post-training inference outputs to the training targets by mathematical identity. The physics-informed loss and ablation studies provide independent content. No self-citation load-bearing steps, uniqueness theorems, or ansatz smuggling appear in the abstract or described method.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The approach rests on the assumption that variational trajectories contain sufficient low-frequency structure for a neural network to learn a reliable surrogate; no new physical entities are postulated.

axioms (2)
  • domain assumption The variational loss landscape retains strong low-frequency structure that a network can exploit.
    Stated in the final sentence of the abstract as the regime where PIDN performs best.
  • ad hoc to paper A physics-informed loss term can enforce consistency with gradient-descent dynamics without explicit multi-noise data at inference time.
    Central modeling choice introduced to replace ZNE evaluations.

pith-pipeline@v0.9.0 · 5600 in / 1411 out tokens · 52837 ms · 2026-05-08T19:13:12.894352+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

70 extracted references · 9 canonical work pages · 1 internal anchor

  1. [1] The circuit is executed at multiple noise amplification factors {λ_1, λ_2, ..., λ_L}, often implemented via gate folding or pulse stretching [37, 41].

  2. [2] Expectation values C_{λ_l}(θ) are measured at each noise level.

  3. [3] trajectory tube — A model (e.g., linear or polynomial) is fit to the data and extrapolated to λ → 0, C_ZNE(θ) = lim_{λ→0} C_λ(θ). (23) While ZNE can significantly reduce systematic bias, it increases the total shot cost by a factor proportional to the number of noise levels L and must be repeated at each optimization step. When combined with gradient-based training, the overal…

  4. [4] Stage I: ZNE Data Collection — Starting from an initial parameter θ_0, we perform ZNE-corrected evaluations along the optimization trajectory. At iteration t:

  5. [5] Evaluate C_{λ_k}(θ_t) for all noise levels {λ_k}.

  6. [6] Compute the extrapolated cost C_ZNE(θ_t).

  7. [7] Estimate the gradient g_ZNE(θ_t) using the parameter-shift rule applied to each noise-scaled circuit.

  8. [8] Update parameters using gradient descent: θ_{t+1} = θ_t − η g_ZNE(θ_t). (37) The resulting trajectory provides training samples for the surrogate model.

  9. [9] Stage II: Surrogate Training — Using dataset D, we minimize L(ϕ) to obtain trained parameters ϕ*.

  10. [10] Stage III: Surrogate-Driven Optimization — After training, PIDN is embedded into the VQA training workflow; subsequent optimization steps replace ZNE evaluations by surrogate predictions: θ_{t+1} = θ_t − η ∇_θ F_{ϕ*}(θ_{0:t}, C_noisy(θ_t)), (38) and then θ_{t+1} is fed into the ansatz to evaluate the noisy cost function C(θ_{t+1}). The ansatz and PIDN are alternately evalu…

  11. [11] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, et al., Nat. Rev. Phys. 3, 625 (2021).

  12. [12] B. Bauer, S. Bravyi, M. Motta, and G. K.-L. Chan, Chem. Rev. 120, 12685 (2020).

  13. [13] K. Bharti, A. Cervera-Lierta, T. H. Kyaw, T. Haug, S. Alperin-Lea, A. Anand, M. Degroote, H. Heimonen, J. S. Kottmann, T. Menke, et al., Rev. Mod. Phys. 94, 015004 (2022).

  14. [14] J. R. McClean, J. Romero, R. Babbush, and A. Aspuru-Guzik, New J. Phys. 18, 023023 (2016).

  15. [15] A. Melnikov, M. Kordzanganeh, A. Alodjants, and R.-K. Lee, Adv. Phys. X 8, 2165452 (2023).

  16. [16] A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien, Nat. Commun. 5, 4213 (2014).

  17. [17] E. Farhi, J. Goldstone, and S. Gutmann, "A Quantum Approximate Optimization Algorithm," arXiv:1411.4028 (2014).

  18. [18] J. Tilly, H. Chen, S. Cao, D. Picozzi, K. Setia, Y. Li, E. Grant, L. Wossnig, I. Rungger, G. H. Booth, et al., Phys. Rep. 986, 1 (2022).

  19. [19] A. Abbas, A. Ambainis, B. Augustino, A. Bärtschi, H. Buhrman, C. Coffrin, G. Cortiana, V. Dunjko, D. J. Egger, B. G. Elmegreen, et al., arXiv:2312.02279 (2023).

  20. [20] J. Preskill, Quantum 2, 79 (2018).

  21. [21] H. Benamer, Preprints (2025), 10.20944/preprints202508.1482.v1.

  22. [22] M. Larocca, S. Thanasilp, S. Wang, K. Sharma, J. Biamonte, P. J. Coles, L. Cincio, J. R. McClean, Z. Holmes, and M. Cerezo, Nat. Rev. Phys., 1 (2025).

  23. [23] E. Fontana, N. Fitzpatrick, D. M. Ramo, R. Duncan, and I. Rungger, Phys. Rev. A 104, 022403 (2021).

  24. [24] S. Wang, E. Fontana, M. Cerezo, K. Sharma, A. Sone, L. Cincio, and P. J. Coles, Nat. Commun. 12, 6961 (2021).

  25. [25] E. Knill, Nature 434, 39 (2005).

  26. [26] M. Schumann, F. K. Wilhelm, and A. Ciani, Quantum Sci. Technol. 9, 045019 (2024).

  27. [27] T. Giurgica-Tiron, Y. Hindy, R. LaRose, A. Mari, and W. J. Zeng, in 2020 IEEE Int. Conf. Quantum Comput. Eng. (QCE) (2020), pp. 306–316.

  28. [28] A. He, B. Nachman, W. A. de Jong, and C. W. Bauer, Phys. Rev. A 102, 012426 (2020).

  29. [29] R. Sweke, F. Wilde, J. Meyer, M. Schuld, P. K. Fährmann, B. Meynard-Piganeau, and J. Eisert, Quantum 4, 314 (2020).

  30. [30] K. Hirashima, K. Moriwaki, M. S. Fujii, Y. Hirai, T. R. Saitoh, J. Makino, and S. Ho, "Surrogate modeling for computationally expensive simulations of supernovae in high-resolution galaxy simulations," arXiv:2311.08460 (2023).

  31. [31] L. Magaña Zertuche, L. C. Stein, K. Mitman, S. E. Field, V. Varma, M. Boyle, N. Deppe, L. E. Kidder, J. Moxon, H. P. Pfeiffer, M. A. Scheel, K. C. Nelli, W. Throwe, and N. L. Vu, Phys. Rev. D 112, 024077 (2025).

  32. [32] M. Benedetti, E. Lloyd, S. Sack, and M. Fiorentini, Quantum Sci. Technol. 4, 043001 (2019).

  33. [33] B. Reggio, N. Butt, A. Lytle, and P. Draper, Phys. Rev. A 110, 022606 (2024).

  34. [34] G. G. Guerreschi and A. Y. Matsuura, Sci. Rep. 9, 6903 (2019).

  35. [35] M. Aizenman, R. Sims, and S. L. Starr, Phys. Rev. B 68, 214403 (2003).

  36. [36] V. R. Arezzo, R. Wang, K. Thengil, G. Pecci, and G. E. Santoro, Phys. Rev. A 113, 012610 (2026).

  37. [37] I. Loaiza, A. M. Khah, N. Wiebe, and A. F. Izmaylov, Quantum Sci. Technol. 8, 035019 (2023).

  38. [38] A. Tranter, P. J. Love, F. Mintert, and P. V. Coveney, J. Chem. Theory Comput. 14, 5617 (2018).

  39. [39] Y. Shee, P.-K. Tsai, C.-L. Hong, H.-C. Cheng, and H.-S. Goan, Phys. Rev. Res. 4, 023154 (2022).

  40. [40] P. J. Knowles and N. C. Handy, Chem. Phys. Lett. 111, 315 (1984).

  41. [41] K. Dalton, C. K. Long, Y. S. Yordanov, C. G. Smith, C. H. Barnes, N. Mertig, and D. R. Arvidsson-Shukur, npj Quantum Inf. 10, 18 (2024).

  42. [42] H. R. Grimsley, D. Claudino, S. E. Economou, E. Barnes, and N. J. Mayhall, J. Chem. Theory Comput. 16, 1 (2019).

  43. [43] Y. Fan, C. Cao, X. Xu, Z. Li, D. Lv, and M.-H. Yung, J. Phys. Chem. Lett. 14, 9596 (2023).

  44. [44] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Nature 549, 242 (2017).

  45. [45] D. Wecker, M. B. Hastings, and M. Troyer, Phys. Rev. A 92, 042303 (2015).

  46. [46] J. F. Gonthier, M. D. Radin, C. Buda, E. J. Doskocil, C. M. Abuan, and J. Romero, Phys. Rev. Res. 4, 033154 (2022).

  47. [47] K. Temme, S. Bravyi, and J. M. Gambetta, Phys. Rev. Lett. 119, 180509 (2017).

  48. [48] R. Majumdar, P. Rivero, F. Metz, A. Hasan, and D. S. Wang, in 2023 IEEE Int. Conf. Quantum Comput. Eng. (QCE), Vol. 1 (IEEE, 2023), pp. 881–887.

  49. [49] S. Anand, K. Temme, A. Kandala, and M. Zaletel, "Classical benchmarking of zero noise extrapolation beyond the exactly-verifiable regime," arXiv:2306.17839 (2023).

  50. [50] V. R. Pascuzzi, A. He, C. W. Bauer, W. A. De Jong, and B. Nachman, Phys. Rev. A 105, 042406 (2022).

  51. [51] Y. Li and S. C. Benjamin, Phys. Rev. X 7, 021050 (2017).

  52. [52] A. Zlokapa and A. Gheorghiu, arXiv:2005.10811 (2020).

  53. [53] H. Liao, D. S. Wang, I. Sitdikov, C. Salcedo, A. Seif, and Z. K. Minev, Nat. Mach. Intell. 6, 1478 (2024).

  54. [54] W.-Y. Liao, G. Yan, Y. Song, T.-C. Tian, W.-M. Zhu, D.-T. Jiang, Y. Du, and H.-L. Huang, arXiv:2511.07092 (2025).

  55. [55] P. Czarnik, A. Arrasmith, P. J. Coles, and L. Cincio, Quantum 5, 592 (2021).

  56. [56] A. Lowe, M. H. Gordon, P. Czarnik, A. Arrasmith, P. J. Coles, and L. Cincio, Phys. Rev. Res. 3, 033098 (2021).

  57. [57] R. Shaffer, L. Kocia, and M. Sarovar, Phys. Rev. A 107, 032415 (2023).

  58. [58] Y. Du, M.-H. Hsieh, and D. Tao, Nat. Commun. 16, 3790 (2025).

  59. [59] W.-Y. Liao, Y. Du, X. Wang, T.-C. Tian, Y. Luo, B. Du, D. Tao, and H.-L. Huang, arXiv:2507.17470 (2025).

  60. [60] Y. Huang, Y. Hao, J. Zhou, X. Yuan, X. Wang, and Y. Du, "PALQO: Physics-informed Model for Accelerating Large-scale Quantum Optimization," arXiv:2509.20733 (2025).

  61. [61] M. Raissi, P. Perdikaris, and G. E. Karniadakis, J. Comput. Phys. 378, 686 (2019).

  62. [62] L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis, SIAM Rev. 63, 208 (2021).

  63. [63] T. De Ryck and S. Mishra, Adv. Comput. Math. 48, 79 (2022).

  64. [64] G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, and L. Yang, Nat. Rev. Phys. 3, 422 (2021).

  65. [65] S. Cai, Z. Mao, Z. Wang, M. Yin, and G. E. Karniadakis, Acta Mech. Sinica 37, 1727 (2021).

  66. [66] K. Nath, X. Meng, D. J. Smith, and G. E. Karniadakis, Sci. Rep. 13, 13683 (2023).

  67. [67] Z. Chen, Y. Liu, and H. Sun, Nat. Commun. 12, 6136 (2021).

  68. [68] J. Liu and X. Wang, Phys. Rev. Res. 7, 043137 (2025).

  69. [69] K. Cho, B. Van Merriënboer, Ç. Gulçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, in Proc. 2014 Conf. Empirical Methods Nat. Lang. Process. (2014), pp. 1724–1734.

  70. [70] J. Liu and X. Wang, "Data for the manuscript 'Accelerating noisy variational quantum algorithms with physics-informed denoising networks'," https://github.com/jliu-boat/PIDNVQA (2026).
    J. Liu and X. Wang, “Data for the manuscript “acceler- ating noisy variational quantum algorithms with physics- informed denoising networks”,”https://github.com/ jliu-boat/PIDNVQA(2026). 19 Noise level QAOA VQE 3-regular SK model TFIM LiH BeH 2 H2O 1×10 −8 0.968 0.971 0.970 0.971 0.966 0.969 5×10 −8 0.965 0.964 0.966 0.966 0.963 0.960 1×10 −7 0.957 0.956 ...