pith. machine review for the scientific record.

arxiv: 2604.12476 · v1 · submitted 2026-04-14 · 🪐 quant-ph


Noise-enhanced quantum kernels on analog quantum computers

Chuan-Chi Huang, Hong-Bin Chen, Hsiang-Wei Huang, Shen-Liang Yang, Yueh-Nan Chen


Pith reviewed 2026-05-10 15:26 UTC · model grok-4.3

classification 🪐 quant-ph
keywords quantum kernels · analog quantum computing · noise-enhanced performance · quantum machine learning · non-Markovianity estimation · expressivity · hybrid quantum kernels

The pith

Operational noise improves the performance of analog and hybrid quantum kernels by increasing their expressivity and model complexity.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper develops quantum kernels based on analog quantum computing ideas instead of standard gate circuits. It tests these kernels on classification benchmarks and on the task of estimating non-Markovianity from limited data, where they match or exceed other kernel methods. The central result is that adding realistic operational noise to the kernels raises rather than lowers accuracy. The authors link this gain to greater expressivity and model complexity that noise introduces. If correct, the work indicates that quantum kernel methods can run effectively on current noisy hardware without needing perfect error correction.

Core claim

Analog quantum kernels and hybrid quantum kernels perform competitively with classical methods on benchmarking tasks and on estimating non-Markovianity from sparse data; when operational noise is included in the kernel construction, performance improves because the noise raises the expressivity and model complexity of the resulting feature map.

What carries the argument

The analog quantum kernel (and its hybrid variant) constructed from analog quantum computing principles, with operational noise deliberately incorporated to enlarge the effective feature space.
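The construction can be illustrated with a minimal numerical sketch: encode each data point into a Hamiltonian, evolve a fixed initial state under it (the analog feature map), and take state-overlap fidelities as kernel entries. The specific two-qubit Hamiltonian H(x) = x(Z1 + Z2) + X1X2 and the evolution time below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def analog_state(x, t=1.0):
    """Analog feature map: evolve |00> under a data-dependent Hamiltonian.

    H(x) = x (Z1 + Z2) + X1 X2 is a hypothetical choice, not the paper's model.
    """
    H = x * (kron(Z, I2) + kron(I2, Z)) + kron(X, X)
    w, V = np.linalg.eigh(H)  # H is Hermitian, so exact exponentiation is easy
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0
    return U @ psi0

def kernel(xs):
    """Fidelity kernel K[i, j] = |<psi(x_i)|psi(x_j)>|^2 (positive semidefinite)."""
    states = [analog_state(x) for x in xs]
    n = len(states)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = abs(np.vdot(states[i], states[j])) ** 2
    return K

K = kernel([0.1, 0.5, 1.2, 2.0])  # diagonal is 1: unitary evolution preserves norm
```

The resulting matrix can be handed to any kernel machine (SVM, kernel ridge regression) exactly as a classical kernel would be; operational noise would enter by perturbing H or the evolution before averaging.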

If this is right

  • The kernels achieve competitive accuracy on standard machine-learning benchmarks without requiring gate-based circuits.
  • Non-Markovianity can be estimated from fewer experimental runs than traditional methods demand.
  • Practical quantum kernel algorithms become feasible on near-term analog hardware that contains noise.
  • Reduced experimental overhead opens quantum kernel use for other sparse-data problems in physics.
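The sparse-data claim concerns the BLP measure, which sums the revivals of the trace distance between two evolving states. A toy version of that computation, with an assumed non-monotonic dephasing factor standing in for the paper's actual dynamics:

```python
import numpy as np

def trace_distance(r1, r2):
    """D(r1, r2) = (1/2) * trace norm of (r1 - r2), via singular values."""
    return 0.5 * np.sum(np.linalg.svd(r1 - r2, compute_uv=False))

def dephase(rho, G):
    """Pure-dephasing channel: scale the off-diagonal coherences by G."""
    out = rho.copy()
    out[0, 1] *= G
    out[1, 0] *= G
    return out

# Optimal state pair for pure dephasing: |+> and |->
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)

ts = np.linspace(0.0, 6.0, 400)
G = np.exp(-0.3 * ts) * np.cos(ts)  # assumed non-monotonic coherence factor

D = [trace_distance(dephase(plus, g), dephase(minus, g)) for g in G]

# BLP measure: total revival, i.e. the sum of positive increments of D(t)
blp = sum(max(d2 - d1, 0.0) for d1, d2 in zip(D, D[1:]))
# blp > 0 flags non-Markovian dynamics (information back-flow)
```

Resolving the local extrema of D(t) on a fine grid is exactly the experimental cost the kernel-regression approach is meant to avoid.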

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Noise-enhanced expressivity may extend to other quantum machine-learning models run on the same hardware.
  • Hardware designers could explore controlled noise injection as a deliberate resource rather than only a defect to suppress.
  • Repeating the benchmarks on different analog platforms would test whether the benefit is platform-independent.

Load-bearing premise

The specific noise models and the chosen benchmarking and non-Markovianity tasks faithfully represent real analog quantum hardware behavior, and the observed gains are not produced by limited data or task selection alone.
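The noise model quoted in the paper's methods applies Gaussian jitter to the analog-hardware parameters: detuning shifted by N(0, 0.1 MHz), Rabi frequency scaled by N(1, 0.01), and atomic positions displaced by N(0, 0.1 µm). A sketch of drawing such shot-by-shot realizations; the baseline values below are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_params(delta, omega, positions, n_shots):
    """Draw shot-by-shot operational-noise realizations of analog parameters.

    Jitter magnitudes follow the paper's stated model (detuning + N(0, 0.1) MHz,
    Rabi frequency x N(1, 0.01), positions + N(0, 0.1) um); the baseline values
    passed in are arbitrary placeholders.
    """
    d = delta + rng.normal(0.0, 0.1, size=n_shots)
    o = omega * rng.normal(1.0, 0.01, size=n_shots)
    r = positions[None, :] + rng.normal(0.0, 0.1, size=(n_shots, positions.size))
    return d, o, r

d, o, r = noisy_params(delta=2.0, omega=5.0,
                       positions=np.array([0.0, 6.0, 12.0]), n_shots=10_000)
# Each row of r is one noisy realization of the atom positions; averaging
# kernel estimates over such draws is what a "noisy kernel" means operationally.
```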

What would settle it

Executing the same kernels on physical analog quantum devices and finding that added noise reduces accuracy in the non-Markovianity estimation task would disprove the noise-enhancement result.

Figures

Figures reproduced from arXiv: 2604.12476 by Chuan-Chi Huang, Hong-Bin Chen, Hsiang-Wei Huang, Shen-Liang Yang, Yueh-Nan Chen.

Figure 1. Schematic illustrations of the quantum kernel function and quantum feature maps encoding a feature $\vec{x}_j$. (a) The quantum circuit can be used to estimate the quantum kernel function $k_Q(\vec{x}_k, \vec{x}_j)$ in Eq. (5) by measuring the probability of the measurement outcome $|0\rangle^{\otimes n}$. (b) The HEA used here consists of two encoding layers sandwiching an entangling layer. The feature $\vec{x}_j$ is encoded into the single-qubit gates, and …
Figure 2. Prediction of the benchmarking dataset by quantum and RBF models. We demonstrate the performance of the three quantum models, along with the RBF model, for the (a) ideal and (b) noisy cases in predicting a benchmarking dataset. Generally speaking, the predictions show a satisfactory overall accuracy. Additionally, this comparative study also reveals a counterintuitive phenomenon, where the presence of noise im…
Figure 3. Quantification of the non-Markovianity and prediction from sparse raw data. (a) The underlying idea of the BLP non-Markovianity measure (13) is to pursue the maximal revival of the trace distance $D(\rho_1(t), \rho_2(t))$ of a certain optimal initial state pair. (b) In an experimental implementation of the BLP measure, to accurately identify the local extrema, a substantial amount of experimental raw data with suffic…
Figure 4. Prediction of non-Markovianity by quantum and RBF models. We demonstrate the performance of the three quantum models, along with the RBF model, for the (a) ideal and (b) noisy cases in estimating non-Markovianity from sparse raw data. Most of the predictions align tightly with the ground truth, demonstrating outstanding overall accuracy. Additionally, the noise-enhanced performance can also be observed in this…
Figure 5. Enhancement of performance in the presence of noise at various interatomic distances. We demonstrate the performance of the (a) analog and (b) hybrid models in both cases with and without noise, along with the noisy digital and RBF models for comparison. Generally speaking, both analog and hybrid models can be enhanced in the presence of noise when the interatomic distances are set to $a \ge R_b$ and $a \le R_b$, respec…
Figure 6. Enhancement of the norm of weight $|\vec{\omega}|^2$ in the presence of noise at various interatomic distances. We show the model complexity in terms of the norm of weight $|\vec{\omega}|^2$ of the decision function $f(\vec{x})$ for the (a) analog and (b) hybrid quantum models. We can observe increased $|\vec{\omega}|^2$ for both models when $a \ge R_b$ and $a \le R_b$, respectively. Crucially, this tendency is fully consistent with the noise-enhanced perform…
Original abstract

The quantum kernel method, a promising quantum machine learning algorithm, possesses substantial potential for demonstrating quantum advantage. Although the majority of the quantum kernel is constructed in the context of gate-based quantum circuits, inspired by the idea of analog quantum computing, here we construct an analog quantum kernel and a hybrid quantum kernel, and show their competitiveness against other kernel methods in a benchmarking task and the practical problem of estimating non-Markovianity from sparse data. Additionally, we also incorporate operational noise into the quantum kernels. Our results reveal that the presence of operational noise can be beneficial to the performance of the developed quantum kernels. We attribute this counterintuitive noise-enhanced performance to the improved expressivity and higher model complexity induced by noise. These results pave the way for practical implementations of quantum kernel methods and provide an efficient approach for estimating non-Markovianity with reduced experimental demands.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript constructs analog and hybrid quantum kernels for machine learning, benchmarks them competitively against classical kernels on a synthetic task, and applies them to non-Markovianity estimation from sparse data. It incorporates operational noise into the kernels and reports performance improvements, attributing the gains to noise-induced increases in expressivity and model complexity.

Significance. If substantiated, the finding that operational noise can enhance quantum kernel performance would be notable for analog quantum hardware implementations, potentially relaxing error-correction requirements and offering a practical route to non-Markovianity estimation with reduced experimental overhead. The work aligns with growing interest in noise-resilient quantum algorithms, though stronger quantitative backing for the expressivity claim would increase its impact.

major comments (2)
  1. [Results section] (performance comparison and noise analysis): The central attribution that noise improves performance via 'improved expressivity and higher model complexity' is not supported by a direct quantitative metric such as kernel-matrix rank, effective dimension from the eigenvalue spectrum, or a Rademacher complexity bound. Without such a metric or a control experiment (e.g., a classical kernel with matched regularization strength), the causal link cannot be isolated from task-specific regularization or optimization effects on the chosen benchmarks.
  2. [Methods and benchmarking subsections] The synthetic benchmarking task and the non-Markovianity estimation lack reported details on dataset sizes, number of trials, specific noise models (e.g., amplitude-damping rates or non-Markovian parameters), and statistical error bars or confidence intervals on the performance lifts. This makes it difficult to rule out post-hoc task selection or limited-data artifacts as the source of the observed gains.
minor comments (2)
  1. [Abstract and Introduction] The abstract and introduction use 'operational noise' without an early, explicit definition or parameterization of the noise channels employed in the analog kernel construction.
  2. [Figures and Tables] Figure captions and tables should include the exact hyperparameter settings and kernel function definitions used for the hybrid quantum kernel to allow reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and detailed review of our manuscript. We have carefully addressed each major comment and outline below how we will strengthen the paper through additional quantitative analysis and expanded methodological details. These revisions will improve the clarity and robustness of our claims regarding noise-enhanced quantum kernels.

Point-by-point responses
  1. Referee: Results section: The central attribution that noise improves performance via 'improved expressivity and higher model complexity' is not supported by a direct quantitative metric such as kernel matrix rank, effective dimension from eigenvalue spectrum, or a Rademacher complexity bound. Without this or a control experiment (e.g., classical kernel with matched regularization strength), the causal link remains unisolated from task-specific regularization or optimization effects on the chosen benchmarks.

    Authors: We agree that direct quantitative metrics would provide stronger support for attributing the observed performance gains to increased expressivity and model complexity. In the revised manuscript, we will compute and report the rank of the kernel matrices as well as the effective dimension obtained from the eigenvalue spectrum of the kernel matrices, comparing the noisy and noiseless cases explicitly. This will offer concrete evidence of how operational noise expands the feature space. While a perfectly matched classical control experiment is methodologically challenging given the distinct nature of quantum noise, we will include additional comparisons to classical kernels with varied regularization strengths to help isolate the contribution of noise-induced complexity. These changes will better substantiate the causal link. revision: yes

  2. Referee: Methods and benchmarking subsections: The synthetic benchmarking task and non-Markovianity estimation lack reported details on dataset sizes, number of trials, specific noise models (e.g., amplitude damping rates or non-Markovian parameters), and statistical error bars or confidence intervals on the performance lifts. This makes it difficult to rule out post-hoc task selection or limited-data artifacts as the source of the observed gains.

    Authors: We appreciate the referee's emphasis on reproducibility and statistical rigor. While some of these parameters appear in the supplementary material, we acknowledge that they should be more explicitly detailed in the main text. In the revision, we will expand the Methods and benchmarking subsections to include: the exact dataset sizes for both the synthetic task and non-Markovianity estimation, the number of independent trials, the specific noise model parameters (including amplitude damping rates and non-Markovianity parameters), and statistical error bars with confidence intervals for all performance metrics. These additions will allow readers to fully assess the robustness of the results and rule out potential artifacts. revision: yes
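The diagnostics promised in the first response, kernel-matrix rank and an effective dimension from the eigenvalue spectrum, are inexpensive to compute once the kernel matrix is in hand. A minimal sketch on a toy rank-deficient kernel versus a noise-broadened one; the participation-ratio definition of effective dimension is an assumption here, not necessarily the one the authors will adopt:

```python
import numpy as np

def kernel_diagnostics(K, tol=1e-10):
    """Numerical rank and effective dimension of a PSD kernel matrix.

    Effective dimension is taken as the eigenvalue participation ratio
    (sum w)^2 / sum(w^2): one common choice, assumed here rather than
    taken from the paper.
    """
    w = np.clip(np.linalg.eigvalsh(K), 0.0, None)
    rank = int(np.sum(w > tol * w.max()))
    d_eff = w.sum() ** 2 / np.sum(w ** 2)
    return rank, d_eff

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 2))           # 8 samples mapped to a 2-dim feature space
K_ideal = F @ F.T                     # noiseless kernel: rank at most 2
K_noisy = K_ideal + 0.05 * np.eye(8)  # toy noise floor lifting the spectrum

rank_i, d_i = kernel_diagnostics(K_ideal)
rank_n, d_n = kernel_diagnostics(K_noisy)
# The noise-broadened spectrum has full rank and a larger effective dimension,
# the qualitative signature the rebuttal proposes to report.
```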

Circularity Check

0 steps flagged

No significant circularity in derivation chain

Full rationale

The paper constructs analog and hybrid quantum kernels, benchmarks them on synthetic tasks and non-Markovianity estimation, then reports empirical performance gains under added noise. No equations or definitions are provided that make the reported performance metrics or expressivity claims reduce to fitted parameters by construction. The attribution of noise benefit to 'improved expressivity and higher model complexity' is presented as an interpretation of external benchmarking results rather than a self-referential definition or a prediction forced by prior self-citations. The derivation chain remains self-contained against the stated benchmarks and does not rely on load-bearing self-citations or ansatzes that collapse back to the inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract provides insufficient technical detail to enumerate specific free parameters, axioms, or invented entities; no equations or methods sections are available for audit.

pith-pipeline@v0.9.0 · 5453 in / 999 out tokens · 26497 ms · 2026-05-10T15:26:25.479076+00:00 · methodology

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Generative Quantum-inspired Kolmogorov-Arnold Eigensolver

    quant-ph · 2026-05 · unverdicted · novelty 7.0

    GQKAE uses quantum-inspired Kolmogorov-Arnold networks to reduce parameters by 66% in generative quantum eigensolvers while achieving chemical accuracy on H4, N2, LiH, and other molecules.

Reference graph

Works this paper leans on

106 extracted references · 9 canonical work pages · cited by 1 Pith paper
