pith. machine review for the scientific record. sign in

arxiv: 2605.01367 · v1 · submitted 2026-05-02 · 🪐 quant-ph · cs.LG· cs.SY· eess.SY

Recognition: unknown

From Characterization To Construction: Generative Quantum Circuit Synthesis from Gate Set Tomography Data

Authors on Pith no claims yet

Pith reviewed 2026-05-09 14:50 UTC · model grok-4.3

classification 🪐 quant-ph cs.LGcs.SYeess.SY
keywords quantum circuit synthesisgate set tomographygenerative modelsdiffusion modelsquantum machine learningNISQ devicesnoise-aware compilationcontext-aware quantum control
0
0 comments X

The pith

A generative framework learns quantum circuits directly from gate-set tomography data to match target output distributions on noisy hardware.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper proposes a quantum machine learning control framework that trains a generative model on tokenized GST germ circuits to synthesize full circuits for a user-specified target measurement distribution. The process embeds the circuits into a latent concept space with a set-vision transformer after progressive curriculum training on short then longer sequences, then samples via an unconditional diffusion model whose outputs are refined against the target conditional covariance. A sympathetic reader would care because the method integrates characterization and compilation into one step that automatically incorporates shared device noise such as crosstalk and drift, which standard two-stage pipelines that assume ideal gates routinely ignore.

Core claim

The central claim is that tokenizing GST germ circuits, embedding them via curriculum learning into a permutation-invariant latent space with a set-vision transformer, and sampling from the resulting concept space with a diffusion model conditioned on a target distribution produces circuits whose execution on the device yields the desired statistics, thereby replacing the conventional GST-plus-unitary-decomposition pipeline with a single context-aware generative process.

What carries the argument

The generative concept space formed by embedding tokenized GST germ circuits with a set-vision transformer and permutation-invariant pooling, from which an unconditional diffusion model samples circuits conditioned on a target output distribution.

If this is right

  • Circuit synthesis can directly incorporate device-specific correlated noise without an intermediate ideal-gate model.
  • The same latent space supports multiple target distributions by conditioning the diffusion sampler at inference time.
  • Aggregating GST data across germ circuits makes the representation inherently aware of shared environmental effects such as drift.
  • Denoising the sampled circuit against the target conditional covariance improves robustness of the generated output.
  • The end-to-end pipeline removes the need for separate gate characterization followed by decomposition algorithms.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the approach succeeds, it could support on-device retraining whenever calibration data updates, allowing the synthesis model to track slow drifts without human intervention.
  • The framework suggests a path toward compiling entire algorithms by specifying only the desired final measurement statistics rather than gate sequences.
  • A natural test would compare the achieved fidelity of these generated circuits against circuits produced by conventional compilers on the same hardware and noise profile.

Load-bearing premise

The generative model trained on tokenized GST data can output circuits that, when run on the actual device, produce output statistics close to the user-specified target even when the noise is complex, correlated, and time-varying.

What would settle it

Generate a circuit for a chosen target distribution, execute it on the hardware, and measure whether the obtained output statistics match the target within the expected shot-noise limits; a large mismatch would falsify the claim that the learned concept space captures the relevant noise.

Figures

Figures reproduced from arXiv: 2605.01367 by Aritra Sarkar, Erbing Hua, King Yiu Yu, Maximilian Rimbach-Russ, Ryoichi Ishihara, Sebastian Feld.

Figure 1
Figure 1. Figure 1: Workflow depicting the two traditional pipelines of characterizing native quantum gates via view at source ↗
Figure 2
Figure 2. Figure 2: Overview of the proposed architecture for tomography-driven quantum circuit generation. view at source ↗
Figure 3
Figure 3. Figure 3: Schematic illustration of the data grouping strategy. Raw data instances are partitioned into view at source ↗
read the original abstract

High-fidelity circuit execution on noisy intermediate-scale quantum devices is bottlenecked by compilation pipelines that disregard complex, correlated noise. To address this, this methodology article proposes a quantum machine learning control (QMLC) framework for generative quantum circuit synthesis from gate-set tomography (GST) data that bypasses the traditional two-step pipeline of characterizing native quantum gates via GST followed by unitary decomposition algorithms. Instead, a generative concept space is directly learnt from GST data, enabling conditional synthesis of quantum circuits on a desired output distribution. Our approach tokenizes GST germ circuits and embeds them into a structured latent space using a curriculum-learning-motivated strategy, starting with short circuits and progressively incorporating longer ones with diverse output statistics. The embedded sequences are processed by a set-vision transformer with permutation-invariant pooling, producing k-seed vectors that represent the learned concept space of the quantum device. Aggregating data across multiple circuits makes this latent representation inherently context-aware, capturing the shared physical noise environment (e.g., crosstalk, drift) that isolated gate metrics miss. We propose an unconditional diffusion model to sample from the concept space. During inference, a user provides a target measurement distribution, and the model generates a corresponding circuit. To ensure fidelity and robustness, the output is denoised using a diffusion model that operates on the target conditional covariance matrix. This end-to-end framework is a step towards context-aware, hardware-native circuit synthesis directly from raw GST data, which offers a new paradigm for integrating quantum control and compilation. The QMLC framework is particularly suited for near-term quantum devices with complex calibration procedures.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes a quantum machine learning control (QMLC) framework for generative quantum circuit synthesis directly from gate-set tomography (GST) data. It tokenizes GST germ circuits, embeds them into a latent concept space via a set-vision transformer with curriculum learning and permutation-invariant pooling to produce k-seed vectors, and uses an unconditional diffusion model that conditions at inference on a target measurement distribution plus covariance denoising to generate circuits intended to match desired output statistics while capturing shared device noise (e.g., crosstalk, drift). The approach aims to bypass the traditional characterize-then-compile pipeline.

Significance. If empirically validated, the framework could offer a novel integration of characterization and compilation that accounts for correlated, context-dependent noise directly in circuit generation, potentially improving fidelity on NISQ hardware beyond standard unitary decomposition plus noise-aware compilation. The proposal applies established ML components (transformers, diffusion models, curriculum learning) to quantum control in an interesting way, but the manuscript contains no implementation details, synthesized circuits, fidelity metrics, ablation studies, or comparisons against baselines, leaving the practical significance speculative rather than demonstrated.

major comments (2)
  1. [Abstract] Abstract: The central claim that the learned latent concept space plus diffusion sampling 'generates a corresponding circuit' whose 'hardware execution matches the target conditional distribution with high fidelity' is unsupported, as the manuscript reports no experimental results, simulations on real or simulated devices, or quantitative metrics (e.g., total variation distance or process fidelity between generated and target distributions). This is load-bearing for the proposed paradigm shift.
  2. [Abstract] Abstract (description of inference): The covariance-denoising step is presented as ensuring 'fidelity and robustness,' but no algorithm, loss function, or conditioning mechanism is specified, nor is there any argument or test showing that sampling from the concept space reproduces the target statistics under realistic time-varying noise.
minor comments (2)
  1. The manuscript would benefit from explicit pseudocode or a high-level algorithm box outlining the training of the set-vision transformer, the diffusion model, and the inference conditioning procedure.
  2. Notation for 'k-seed vectors' and the 'generative concept space' should be defined more formally (e.g., dimensionality, training objective) to aid reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive review and for recognizing the potential of integrating GST data with generative models to bypass traditional characterization-compilation pipelines. We agree that the original abstract and method descriptions overstated the framework's demonstrated capabilities and lacked sufficient technical detail. We have revised the manuscript to clarify its scope as a methodological proposal, tone down unsupported performance claims, expand the description of the diffusion model and inference steps, and add a discussion of planned validations. Below we respond point by point to the major comments.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The central claim that the learned latent concept space plus diffusion sampling 'generates a corresponding circuit' whose 'hardware execution matches the target conditional distribution with high fidelity' is unsupported, as the manuscript reports no experimental results, simulations on real or simulated devices, or quantitative metrics (e.g., total variation distance or process fidelity between generated and target distributions). This is load-bearing for the proposed paradigm shift.

    Authors: We agree that the original wording in the abstract implied empirical validation that is not present in the manuscript. This paper is a methodology contribution focused on the proposed QMLC framework architecture rather than a completed empirical study. In the revised version we have rewritten the abstract to state that the framework 'aims to generate circuits whose hardware execution is intended to match the target conditional distribution,' removing the claim of 'high fidelity' and instead describing the design choices (curriculum-trained set-vision transformer, context-aware latent space, covariance-guided diffusion) that are expected to support this goal. We have added a new 'Limitations and Future Validation' section that explicitly notes the absence of current benchmarks and outlines the simulation and hardware experiments we intend to perform next. These changes make the load-bearing claims prospective rather than asserted. revision: yes

  2. Referee: [Abstract] Abstract (description of inference): The covariance-denoising step is presented as ensuring 'fidelity and robustness,' but no algorithm, loss function, or conditioning mechanism is specified, nor is there any argument or test showing that sampling from the concept space reproduces the target statistics under realistic time-varying noise.

    Authors: We accept that the original description of the covariance-denoising step was insufficiently precise. In the revised manuscript we have inserted a dedicated subsection (Section 3.4) that specifies: (i) the diffusion model is trained unconditionally on the k-seed vectors produced by the set-vision transformer; (ii) at inference, conditioning is performed by concatenating the target measurement distribution (as a soft prompt) to the noisy latent vector and using classifier-free guidance with a guidance scale derived from the GST covariance matrix; (iii) the training loss is the standard denoising score-matching objective plus an auxiliary term that matches the empirical covariance of the generated circuits to the GST-derived noise covariance; and (iv) a pseudocode listing of the full inference procedure, including the covariance-denoising update rule. We have also added a short theoretical paragraph arguing that the permutation-invariant pooling and curriculum training allow the latent space to encode shared device-level noise (crosstalk, drift) that isolated gate metrics miss, thereby providing a mechanism for robustness under time-varying conditions. No empirical tests of this mechanism are included in the current revision, as the work remains at the proposal stage. revision: yes

Circularity Check

0 steps flagged

No significant circularity; methodological proposal is self-contained

full rationale

The paper describes a proposed QMLC framework that tokenizes GST germ circuits, embeds them via a set-vision transformer into k-seed vectors forming a learned concept space, and uses a diffusion model to generate circuits conditioned on a target distribution at inference time. This constitutes a standard generative ML pipeline trained on data to produce new samples; no equations, derivations, or claims reduce a prediction or result by construction to the inputs themselves. No self-definitional steps, fitted inputs renamed as predictions, load-bearing self-citations for uniqueness theorems, or smuggled ansatzes are present in the provided text. The central claim is a description of an end-to-end methodology without mathematical reductions that equate outputs to inputs tautologically. The framework remains independent of its training data in formulation, qualifying as self-contained.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axioms · 1 invented entities

Only the abstract is available, so the complete set of free parameters, axioms, and invented entities cannot be extracted. The framework implicitly introduces a learned 'generative concept space' whose structure depends on model architecture choices and data preprocessing decisions.

free parameters (2)
  • latent dimension of k-seed vectors
    The size of the representation produced by the set-vision transformer is a hyperparameter chosen to capture the concept space.
  • curriculum schedule parameters
    The progression from short to longer circuits and the weighting of diverse output statistics are training hyperparameters.
axioms (1)
  • domain assumption GST data sufficiently samples the device's noise environment to allow generalization to new circuits
    The method assumes that aggregating multiple germ circuits yields a context-aware representation that transfers to unseen target distributions.
invented entities (1)
  • generative concept space no independent evidence
    purpose: A latent representation that encodes the shared physical noise of the quantum device
    Postulated as the output of the transformer pooling step; no independent physical interpretation or falsifiable prediction is given in the abstract.

pith-pipeline@v0.9.0 · 5616 in / 1535 out tokens · 43388 ms · 2026-05-09T14:50:37.424477+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

67 extracted references · 20 canonical work pages · 3 internal anchors

  1. [1]

    Superstaq: Deep optimiza- tion of quantum programs

    Colin Campbell, Frederic T Chong, Denny Dahl, Paige Frederick, Palash Goiporia, Pranav Gokhale, Benjamin Hall, Salahedeen Issa, Eric Jones, Stephanie Lee, et al. Superstaq: Deep optimiza- tion of quantum programs. In 2023 IEEE International Conference on Quantum Computing and Engineering (QCE), volume 1, pages 1020–1032. IEEE, 2023

  2. [2]

    Yaqq: Yet another quantum quantizer–design space exploration of quantum gate sets using novelty search

    Aritra Sarkar, Akash Kundu, Matthew Steinberg, Sibasish Mishra, Sebastiaan Fauquenot, Tamal Acharya, Jaroslaw Miszczak, and Sebastian Feld. Yaqq: Yet another quantum quantizer–design space exploration of quantum gate sets using novelty search. New Journal of Physics, 2026

  3. [3]

    Review of performance metrics of spin qubits in gated semiconducting nanostructures

    Peter Stano and Daniel Loss. Review of performance metrics of spin qubits in gated semiconducting nanostructures. Nature Reviews Physics, 4(10):672–688, 2022

  4. [4]

    Compiling quantum circuits for dynamically field-programmable neutral atoms array processors

    Daniel Bochen Tan, Dolev Bluvstein, Mikhail D Lukin, and Jason Cong. Compiling quantum circuits for dynamically field-programmable neutral atoms array processors. Quantum, 8:1281, 2024

  5. [5]

    Expressing and Analyzing Quantum Algorithms with Qualtran

    Matthew P Harrigan, Tanuj Khattar, Charles Yuan, Anurudh Peduri, Noureldin Yosri, Fionn D Malone, Ryan Babbush, and Nicholas C Rubin. Expressing and analyzing quantum algorithms with qualtran. arXiv preprint arXiv:2409.04643, 2024

  6. [6]

    Colloquium: Advances in automation of quantum dot devices control

    Justyna P Zwolak and Jacob M Taylor. Colloquium: Advances in automation of quantum dot devices control. Reviews of modern physics, 95(1):011006, 2023

  7. [7]

    Unified evolutionary optimization for high-fidelity spin qubit operations

    Sam R Katiraee-Far, Yuta Matsumoto, Brennan Undseth, Maxim De Smet, Valentina Gualtieri, Christian Ventura Meinersen, Irene Fernandez de Fuentes, Kenji Capannelli, Maximilian Rimbach- Russ, Giordano Scappucci, et al. Unified evolutionary optimization for high-fidelity spin qubit operations. arXiv preprint arXiv:2503.12256, 2025

  8. [8]

    Gate set tomography

    Erik Nielsen, John King Gamble, Kenneth Rudinger, Travis Scholten, Kevin Young, and Robin Blume-Kohout. Gate set tomography. Quantum, 5:557, 2021

  9. [9]

    Randomized benchmarking of quantum gates

    Emanuel Knill, Dietrich Leibfried, Rolf Reichle, Joe Britton, R Brad Blakestad, John D Jost, Chris Langer, Roee Ozeri, Signe Seidelin, and David J Wineland. Randomized benchmarking of quantum gates. Physical Review A—Atomic, Molecular, and Optical Physics, 77(1):012307, 2008

  10. [10]

    General framework for randomized benchmarking

    Jonas Helsen, Ingo Roth, Emilio Onorati, Albert H Werner, and Jens Eisert. General framework for randomized benchmarking. PRX quantum, 3(2):020357, 2022

  11. [11]

    Predicting many properties of a quantum system from very few measurements

    Hsin-Yuan Huang, Richard Kueng, and John Preskill. Predicting many properties of a quantum system from very few measurements. Nature Physics, 16(10):1050–1057, 2020

  12. [12]

    Shadow estimation of gate-set properties from random sequences

    Jonas Helsen, Marios Ioannou, Jonas Kitzinger, Emilio Onorati, AH Werner, Jens Eisert, and Ingo Roth. Shadow estimation of gate-set properties from random sequences. Nature Communications, 14(1):5039, 2023. 16

  13. [13]

    Dawson and Michael A

    Christopher M Dawson and Michael A Nielsen. The solovay-kitaev algorithm. arXiv preprint quant-ph/0505030, 2005

  14. [14]

    Optimal ancilla-free Clifford+T approximation of z-rotations

    Neil J Ross and Peter Selinger. Optimal ancilla-free clifford+ t approximation of z-rotations. arXiv preprint arXiv:1403.2975, 2014

  15. [15]

    Quantum computation with machine-learning-controlled quan- tum stuff

    Lucien Hardy and Adam GM Lewis. Quantum computation with machine-learning-controlled quan- tum stuff. Machine Learning: Science and Technology, 2(1):015008, 2020

  16. [16]

    Introduction to quantum gate set tomography

    Daniel Greenbaum. Introduction to quantum gate set tomography. arXiv preprint arXiv:1509.02921, 2015

  17. [17]

    Probing quantum processor performance with pygsti

    Erik Nielsen, Kenneth Rudinger, Timothy Proctor, Antonio Russo, Kevin Young, and Robin Blume- Kohout. Probing quantum processor performance with pygsti. Quantum Science & Technology, 5(4):044002, 2020

  18. [18]

    Set trans- former: A framework for attention-based permutation-invariant neural networks

    Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set trans- former: A framework for attention-based permutation-invariant neural networks. In International Conference on Machine Learning, pages 3744–3753. PMLR, 2019

  19. [19]

    Ccdm: Continuous conditional diffusion models for image generation

    Xin Ding, Yongwei Wang, Kao Zhang, and Z Jane Wang. Ccdm: Continuous conditional diffusion models for image generation. arXiv preprint arXiv:2405.03546, 2024

  20. [20]

    Elementary gates for quantum computation

    Adriano Barenco, Charles H Bennett, Richard Cleve, David P DiVincenzo, Norman Margolus, Peter Shor, Tycho Sleator, John A Smolin, and Harald Weinfurter. Elementary gates for quantum computation. Physical review A, 52(5):3457, 1995

  21. [21]

    Efficient decomposition of unitary matrices in quantum circuit compilers

    Anna M Krol, Aritra Sarkar, Imran Ashraf, Zaid Al-Ars, and Koen Bertels. Efficient decomposition of unitary matrices in quantum circuit compilers. Applied Sciences, 12(2):759, 2022

  22. [22]

    Beyond quantum shannon decomposition: Circuit construction for n-qubit gates based on block-zxz decomposition

    Anna M Krol and Zaid Al-Ars. Beyond quantum shannon decomposition: Circuit construction for n-qubit gates based on block-zxz decomposition. Physical Review Applied, 22(3):034019, 2024

  23. [23]

    Optimal ancilla-free clifford+ t approximation of z-rotations

    Neil J Ross and Peter Selinger. Optimal ancilla-free clifford+ t approximation of z-rotations. Quantum Inf. Comput., 16(11&12):901–953, 2016

  24. [24]

    High-Precision Multi-Qubit Clifford+T Synthesis by Unitary Diagonalization

    Mathias Weiden, Justin Kalloor, John Kubiatowicz, Ed Younis, and Costin Iancu. High-precision multi-qubit clifford+ t synthesis by unitary diagonalization. arXiv preprint arXiv:2409.00433, 2024

  25. [25]

    Mind the gaps: The fraught road to quantum advantage

    Jens Eisert and John Preskill. Mind the gaps: The fraught road to quantum advantage. arXiv preprint arXiv:2510.19928, 2025

  26. [26]

    The quantum house of cards

    Xavier Waintal. The quantum house of cards. Proceedings of the National Academy of Sciences, 121(1):e2313269120, 2024

  27. [27]

    Op- timal control of coupled spin dynamics: design of nmr pulse sequences by gradient ascent algorithms

    Navin Khaneja, Timo Reiss, Cindie Kehlet, Thomas Schulte-Herbr¨ uggen, and Steffen J Glaser. Op- timal control of coupled spin dynamics: design of nmr pulse sequences by gradient ascent algorithms. Journal of magnetic resonance, 172(2):296–305, 2005

  28. [28]

    Open and closed loop approaches for energy efficient quantum optimal control

    Sebastiaan Fauquenot, Aritra Sarkar, and Sebastian Feld. Open and closed loop approaches for energy efficient quantum optimal control. Advanced Quantum Technologies, page 2400690, 2025

  29. [29]

    Quantum optimal control with geodesic pulse engi- neering.arXiv preprint arXiv:2508.16029, 2025

    Dylan Lewis, Roeland Wiersema, and Sougato Bose. Quantum optimal control with geodesic pulse engineering. arXiv preprint arXiv:2508.16029, 2025

  30. [30]

    Software tools for quantum control: Improving quantum computer performance through noise and error suppression

    Harrison Ball, Michael J Biercuk, Andre RR Carvalho, Jiayin Chen, Michael Hush, Leonardo A De Castro, Li Li, Per J Liebermann, Harry J Slatyer, Claire Edmunds, et al. Software tools for quantum control: Improving quantum computer performance through noise and error suppression. Quantum Science & Technology, 6(4):044011, 2021

  31. [31]

    Programming physical quantum systems with pulse-level control

    Kaitlin N Smith, Gokul Subramanian Ravi, Thomas Alexander, Nicholas T Bronn, Andr´ e RR Car- valho, Alba Cervera-Lierta, Frederic T Chong, Jerry M Chow, Michael Cubeddu, Akel Hashim, et al. Programming physical quantum systems with pulse-level control. Frontiers in physics, 10:900099, 2022. 17

  32. [32]

    Improving quan- tum circuit synthesis with machine learning

    Mathias Weiden, Ed Younis, Justin Kalloor, John Kubiatowicz, and Costin Iancu. Improving quan- tum circuit synthesis with machine learning. In 2023 IEEE International Conference on Quantum Computing and Engineering (QCE), volume 1, pages 1–11. IEEE, 2023

  33. [33]

    Qfast: Conflating search and numerical optimization for scalable quantum circuit synthesis

    Ed Younis, Koushik Sen, Katherine Yelick, and Costin Iancu. Qfast: Conflating search and numerical optimization for scalable quantum circuit synthesis. In 2021 IEEE International Conference on Quantum Computing and Engineering (QCE), pages 232–243. IEEE, 2021

  34. [34]

    Unitary synthesis of clifford+ t circuits with rein- forcement learning

    Sebastian Rietsch, Abhishek Y Dubey, Christian Ufrecht, Maniraman Periyasamy, Axel Plinge, Christopher Mutschler, and Daniel D Scherer. Unitary synthesis of clifford+ t circuits with rein- forcement learning. In 2024 IEEE International Conference on Quantum Computing and Engineering (QCE), volume 1, pages 824–835. IEEE, 2024

  35. [35]

    Synthetiq: Fast and versatile quantum circuit synthesis

    Anouk Paradis, Jasper Dekoninck, Benjamin Bichsel, and Martin Vechev. Synthetiq: Fast and versatile quantum circuit synthesis. Proceedings of the ACM on Programming Languages, 8(OOPSLA1):55–82, 2024

  36. [36]

    A scalable quantum neural network for approximate srbb-based unitary synthesis

    Giacomo Belli, Marco Mordacci, and Michele Amoretti. A scalable quantum neural network for approximate srbb-based unitary synthesis. arXiv preprint arXiv:2412.03083, 2024

  37. [37]

    Optimal quantum circuit design via unitary neural networks

    M Zomorodi, H Amini, M Abbaszadeh, J Sohrabi, V Salari, and P Plawiak. Optimal quantum circuit design via unitary neural networks. arXiv preprint arXiv:2408.13211, 2024

  38. [38]

    No need to calibrate: characterization and compilation for high-fidelity circuit execution using imperfect gates

    Ashish Kakkar, Samuel Marsh, Yulun Wang, Pranav Mundada, Paul Coote, Gavin Hartnett, Michael J Biercuk, and Yuval Baum. No need to calibrate: characterization and compilation for high-fidelity circuit execution using imperfect gates. arXiv preprint arXiv:2511.21831, 2025

  39. [39]

    Ai-powered noisy quan- tum emulation: Generalized gate-based protocols for hardware-agnostic simulation

    Matthew Ho, Jun Yong Khoo, Adrian M Mak, and Stefano Carrazza. Ai-powered noisy quan- tum emulation: Generalized gate-based protocols for hardware-agnostic simulation. arXiv preprint arXiv:2502.19872, 2025

  40. [40]

    Trans- former models for quantum gate set tomography

    King Yiu Yu, Aritra Sarkar, Maximilian Rimbach-Russ, Ryoichi Ishihara, and Sebastian Feld. Trans- former models for quantum gate set tomography. Quantum Machine Intelligence, 7(1):10, 2025

  41. [41]

    Quantum circuit synthesis with diffusion models

    Florian F¨ urrutter, Gorka Mu˜ noz-Gil, and Hans J Briegel. Quantum circuit synthesis with diffusion models. Nature Machine Intelligence, 6(5):515–524, 2024

  42. [42]

    Uditqc: U-net-style diffusion transformer for quantum circuit synthesis

    Zhiwei Chen and Hao Tang. Uditqc: U-net-style diffusion transformer for quantum circuit synthesis. arXiv preprint arXiv:2501.16380, 2025

  43. [43]

    Leveraging diffusion models for parameterized quantum circuit generation

    Daniel Barta, Darya Martyniuk, Johannes Jung, and Adrian Paschke. Leveraging diffusion models for parameterized quantum circuit generation. arXiv preprint arXiv:2505.20863, 2025

  44. [44]

    Synthesis of discrete-continuous quantum circuits with multimodal diffusion models

    Florian F¨ urrutter, Zohim Chandani, Ikko Hamamura, Hans J Briegel, and Gorka Mu˜ noz-Gil. Syn- thesis of discrete-continuous quantum circuits with multimodal diffusion models. arXiv preprint arXiv:2506.01666, 2025

  45. [45]

    Learning transferable visual models from natural language supervision

    Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PmLR, 2021

  46. [46]

    Attention is all you need

    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017

  47. [47]

    An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

    Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, and Others. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020

  48. [48]

    Hamprecht, Yoshua Bengio, and Aaron C

    Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Dr¨ axler, Min Lin, Fred A. Hamprecht, Yoshua Bengio, and Aaron C. Courville. On the spectral bias of neural networks. In International Conference on Machine Learning, 2018. 18

  49. [49]

    Analysis of boolean functions

    Ryan O’Donnell. Analysis of boolean functions. ArXiv, abs/2105.10386, 2014

  50. [50]

    Joseph L. Walsh. A closed set of normal orthogonal functions. American Journal of Mathematics, 45:5, 1923

  51. [51]

    Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Ragha- van, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T

    Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. ArXiv, abs/2006.10739, 2020

  52. [52]

    Curriculum learning

    Yoshua Bengio, J´ erˆ ome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48, 2009

  53. [53]

    Curriculum learning: A survey

    Petru Soviany, Radu Tudor Ionescu, Paolo Rota, and Nicu Sebe. Curriculum learning: A survey. International Journal of Computer Vision, 130(6):1526–1565, 2022

  54. [54]

    Score-based generative modeling in latent space

    Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. Advances in neural information processing systems, 34:11287–11302, 2021

  55. [55]

    Variational diffusion models

    Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. Advances in neural information processing systems, 34:21696–21707, 2021

  56. [56]

    Blueprint for a scalable photonic fault-tolerant quantum computer

    J Eli Bourassa, Rafael N Alexander, Michael Vasmer, Ashlesha Patil, Ilan Tzitrin, Takaya Matsuura, Daiqin Su, Ben Q Baragiola, Saikat Guha, Guillaume Dauphinais, et al. Blueprint for a scalable photonic fault-tolerant quantum computer. Quantum, 5:392, 2021

  57. [57]

    Quantum computational advantage with a programmable photonic processor.Nature, 606(7912):75– 81, 2022

    Lars S Madsen, Fabian Laudenbach, Mohsen Falamarzi Askarani, Fabien Rortais, Trevor Vincent, Jacob FF Bulmer, Filippo M Miatto, Leonhard Neuhaus, Lukas G Helt, Matthew J Collins, et al. Quantum computational advantage with a programmable photonic processor.Nature, 606(7912):75– 81, 2022

  58. [58]

    Deqompile: quantum circuit decompilation using genetic programming for explainable quantum architecture search

    Shubing Xie, Aritra Sarkar, and Sebastian Feld. Deqompile: quantum circuit decompilation using genetic programming for explainable quantum architecture search. arXiv preprint arXiv:2504.08310, 2025

  59. [59]

    Qksa: Quantum knowledge seeking agent

    Aritra Sarkar, Zaid Al-Ars, and Koen Bertels. Qksa: Quantum knowledge seeking agent. In International Conference on Artificial General Intelligence, pages 384–393. Springer, 2022

  60. [60]

    Exploiting symmetry in variational quantum machine learning

    Johannes Jakob Meyer, Marian Mularski, Elies Gil-Fuster, Antonio Anna Mele, Francesco Arzani, Alissa Wilms, and Jens Eisert. Exploiting symmetry in variational quantum machine learning. PRX quantum, 4(1):010328, 2023

  61. [61]

    Automated quantum software engineering

    Aritra Sarkar. Automated quantum software engineering. Automated Software Engineering, 31(1):36, 2024

  62. [62]

    A million spiking-neuron integrated circuit with a scalable communication network and interface

    Paul A Merolla, John V Arthur, Rodrigo Alvarez-Icaza, Andrew S Cassidy, Jun Sawada, Fil- ipp Akopyan, Bryan L Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197):668–673, 2014

  63. [63]

    Loihi: A neuromorphic manycore processor with on-chip learning

    Mike Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, et al. Loihi: A neuromorphic manycore processor with on-chip learning. Ieee micro, 38(1):82–99, 2018

  64. [64]

    Synaptic electronics: materials, devices and applications

    Duygu Kuzum, Shimeng Yu, and HS Philip Wong. Synaptic electronics: materials, devices and applications. Nanotechnology, 24(38):382001, 2013

  65. [65]

    Memory and information processing in neuromorphic systems

    Giacomo Indiveri and Shih-Chii Liu. Memory and information processing in neuromorphic systems. Proceedings of the IEEE, 103(8):1379–1397, 2015

  66. [66]

    Neuromorphic electronic systems for reservoir computing

    Fatemeh Hadaeghi. Neuromorphic electronic systems for reservoir computing. In Reservoir Computing: Theory, Physical Implementations, and Applications, pages 221–237. Springer, 2021

  67. [67]

    Deep learning in neural networks: An overview

    J¨ urgen Schmidhuber. Deep learning in neural networks: An overview. Neural networks, 61:85–117, 2015. 19