Recognition: no theorem link
Eliminating Vendor Lock-In in Quantum Machine Learning via Framework-Agnostic Neural Networks
Pith reviewed 2026-05-10 19:55 UTC · model grok-4.3
The pith
A framework-agnostic quantum neural network architecture removes the need to rewrite models when switching between different software ecosystems and hardware providers.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central contribution is a quantum neural network architecture that uses one computational graph for the model structure, a hardware abstraction layer that connects to various quantum systems through a common interface, and an export pipeline that translates the model losslessly into the formats used by different quantum software tools. It includes pluggable methods for encoding classical data into quantum states that work on all supported systems. Benchmarks on standard datasets for classifying flowers, wines, and handwritten digits show training speeds that nearly match direct use of a single framework, with the same accuracy.
What carries the argument
The framework-agnostic quantum neural network built on a unified computational graph, hardware abstraction layer for backend access, and export pipeline for cross-representation compatibility.
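The load-bearing pattern here is a single backend interface behind which provider adapters hide. A minimal sketch of that idea, assuming nothing about the paper's actual API (names like `QuantumBackend` and `run_circuit` are hypothetical):

```python
from abc import ABC, abstractmethod


class QuantumBackend(ABC):
    """Hypothetical common interface each provider adapter would implement."""

    @abstractmethod
    def run_circuit(self, circuit: list, shots: int) -> dict:
        """Execute a gate list and return measurement counts."""


class SimulatorBackend(QuantumBackend):
    """Stand-in backend: returns a fixed 50/50 outcome distribution."""

    def run_circuit(self, circuit: list, shots: int = 1024) -> dict:
        half = shots // 2
        return {"0": half, "1": shots - half}


def z_expectation(backend: QuantumBackend, circuit: list) -> float:
    """Model code written once against the interface runs on any backend."""
    counts = backend.run_circuit(circuit, shots=1024)
    total = sum(counts.values())
    # <Z> estimated from counts: P(0) - P(1)
    return (counts.get("0", 0) - counts.get("1", 0)) / total
```

Swapping in an adapter for a real provider would then change no model code, which is the whole of the lock-in-elimination claim.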
If this is right
- A model developed once can execute on multiple quantum hardware providers through the same code.
- Accuracy on classification problems matches that of implementations built directly for one framework.
- The overhead from the abstraction stays small, under eight percent in training time.
- Different data encoding approaches can be swapped in without affecting compatibility with backends.
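The swappable-encoding claim can be illustrated with the two simplest strategies named in the abstract, using their textbook definitions (angle encoding as per-qubit RY rotations, amplitude encoding as vector normalization); the function names are ours, not the paper's:

```python
import numpy as np


def angle_encode(features):
    """Angle encoding: feature x_i becomes an RY rotation angle, giving the
    product state over qubits of cos(x/2)|0> + sin(x/2)|1>."""
    state = np.array([1.0])
    for x in features:
        qubit = np.array([np.cos(x / 2), np.sin(x / 2)])
        state = np.kron(state, qubit)  # tensor product over qubits
    return state


def amplitude_encode(features):
    """Amplitude encoding: normalize the feature vector into state amplitudes
    (length must be a power of two to fill a qubit register exactly)."""
    v = np.asarray(features, dtype=float)
    return v / np.linalg.norm(v)
```

Because both return a valid (unit-norm) state, a model consuming the encoded state need not know which strategy produced it, which is what backend-independent swappability requires.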
Where Pith is reading between the lines
- The design may reduce duplication of effort as groups no longer need to reimplement models for each new hardware.
- It could support runtime decisions on which backend to use based on current availability or cost.
- Extending the abstraction to include error mitigation techniques would be a natural next test of its generality.
Load-bearing premise
The unified computational graph and hardware abstraction layer can be made to handle every classical framework and every quantum backend at the same time with no loss of function or hidden extra costs.
What would settle it
Implementing a model with the architecture, exporting it to two different quantum software representations, running both on their native hardware, and checking if the classification results differ beyond normal variation or if one fails to run.
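The comparison this experiment calls for reduces to asking whether an accuracy gap between the two exported runs exceeds sampling variation. A rough sketch using a binomial standard error as the "normal variation" yardstick (the two-sigma threshold is our illustrative choice, not the paper's):

```python
def accuracy(preds, labels):
    """Fraction of predictions matching labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)


def gap_within_variation(acc_a, acc_b, n_samples, tol_sigmas=2.0):
    """True if the accuracy gap is within tol_sigmas of the sampling noise
    expected for two independent accuracy estimates on n_samples points."""
    p = (acc_a + acc_b) / 2
    se = (p * (1 - p) / n_samples) ** 0.5  # binomial standard error
    # Standard error of a difference of two independent estimates: se * sqrt(2)
    return abs(acc_a - acc_b) <= tol_sigmas * se * 2 ** 0.5
```

On a 150-sample test set like Iris, a one-point accuracy gap would pass this check while a large gap (or a failed run) would falsify the lossless-translation claim.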
read the original abstract
Quantum machine learning (QML) stands at the intersection of quantum computing and artificial intelligence, offering the potential to solve problems that remain intractable for classical methods. However, the current landscape of QML software frameworks suffers from severe fragmentation: models developed in TensorFlow Quantum cannot execute on PennyLane backends, circuits authored in Qiskit Machine Learning cannot be deployed to Amazon Braket hardware, and researchers who invest in one ecosystem face prohibitive switching costs when migrating to another. This vendor lock-in impedes reproducibility, limits hardware access, and slows the pace of scientific discovery. In this paper, we present a framework-agnostic quantum neural network (QNN) architecture that abstracts away vendor-specific interfaces through a unified computational graph, a hardware abstraction layer (HAL), and a multi-framework export pipeline. The core architecture supports simultaneous integration with TensorFlow, PyTorch, and JAX as classical co-processors, while the HAL provides transparent access to IBM Quantum, Amazon Braket, Azure Quantum, IonQ, and Rigetti backends through a single application programming interface (API). We introduce three pluggable data encoding strategies (amplitude, angle, and instantaneous quantum polynomial encoding) that are compatible with all supported backends. An export module leveraging Open Neural Network Exchange (ONNX) metadata enables lossless circuit translation across Qiskit, Cirq, PennyLane, and Braket representations. We benchmark our framework on the Iris, Wine, and MNIST-4 classification tasks, demonstrating training time parity (within 8% overhead) compared to native framework implementations, while achieving identical classification accuracy.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims to present a framework-agnostic quantum neural network (QNN) architecture that eliminates vendor lock-in in quantum machine learning through a unified computational graph, a hardware abstraction layer (HAL), and a multi-framework export pipeline based on ONNX metadata. The architecture supports TensorFlow, PyTorch, and JAX as classical co-processors and provides transparent access to IBM Quantum, Amazon Braket, Azure Quantum, IonQ, and Rigetti backends via a single API. It introduces three pluggable data encoding strategies (amplitude, angle, and instantaneous quantum polynomial) and benchmarks on the Iris, Wine, and MNIST-4 classification tasks, reporting training time parity within 8% overhead and identical classification accuracy relative to native framework implementations.
Significance. If the central claims of lossless translation, full functionality preservation, and performance parity hold, the work would be significant for improving reproducibility and reducing switching costs across fragmented QML ecosystems. The pluggable encoding strategies represent a constructive step toward generality. However, the absence of detailed verification for the HAL and ONNX pipeline limits the demonstrated impact, leaving the contribution more prospective than empirically established.
major comments (3)
- [Abstract] The claims of 'identical classification accuracy' and 'training time parity (within 8% overhead)' on Iris, Wine, and MNIST-4 are presented as summary assertions without implementation details, error bars, statistical tests, data splits, circuit diagrams, or references to specific tables/figures. These performance results are load-bearing for the central claim of practical viability and cannot be assessed from the given information.
- [Multi-framework export pipeline] The assertion that ONNX metadata enables 'lossless circuit translation' across Qiskit, Cirq, PennyLane, and Braket does not address how differences in gradient computation (parameter-shift vs. backprop-through-simulator), measurement semantics, or native gate sets are reconciled. Any internal transpilation to map circuits (e.g., PennyLane to Rigetti) would necessarily alter depth or introduce approximations, directly contradicting the lossless and identical-accuracy conditions.
- [Hardware Abstraction Layer (HAL)] The HAL is described as providing transparent access to five distinct quantum backends through a single API, yet no mechanism is specified for abstracting backend-specific constraints such as connectivity graphs, calibration data, or noise models without per-backend overrides or hidden overheads. This omission undermines the framework-agnostic guarantee.
minor comments (1)
- The manuscript would benefit from explicit pseudocode or diagrams for the three pluggable encoding strategies to support reproducibility.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed comments. We address each major point below, agreeing where additional clarification or documentation is warranted and outlining specific revisions to strengthen the manuscript.
read point-by-point responses
-
Referee: [Abstract] The claims of 'identical classification accuracy' and 'training time parity (within 8% overhead)' on Iris, Wine, and MNIST-4 are presented as summary assertions without implementation details, error bars, statistical tests, data splits, circuit diagrams, or references to specific tables/figures. These performance results are load-bearing for the central claim of practical viability and cannot be assessed from the given information.
Authors: We acknowledge that the abstract, as a concise summary, does not embed the full experimental details. The manuscript reports the benchmark results in the Experimental Evaluation section, but we agree that explicit documentation of data splits, error bars from repeated runs, statistical tests, circuit diagrams, and direct table/figure references would improve verifiability. We will revise the abstract to include a brief pointer to the experimental section and key quantitative results with error bars, and expand the experimental section to add the missing implementation specifics and cross-references. revision: yes
-
Referee: [Multi-framework export pipeline] The assertion that ONNX metadata enables 'lossless circuit translation' across Qiskit, Cirq, PennyLane, and Braket does not address how differences in gradient computation (parameter-shift vs. backprop-through-simulator), measurement semantics, or native gate sets are reconciled. Any internal transpilation to map circuits (e.g., PennyLane to Rigetti) would necessarily alter depth or introduce approximations, directly contradicting the lossless and identical-accuracy conditions.
Authors: The architecture relies on a unified computational graph that normalizes representations before ONNX export, with gradient handling standardized at the graph level using the parameter-shift rule and measurement semantics normalized via the HAL. Gate-set mappings are performed by a semantics-preserving transpiler that maintains circuit depth for the supported encodings. We recognize that the manuscript does not provide sufficient detail on these reconciliation steps. We will add a new subsection in the architecture description that explicitly documents the gradient unification, measurement normalization, and transpilation rules, including examples demonstrating no depth increase or approximation for the benchmarked models. revision: yes
-
Referee: [Hardware Abstraction Layer (HAL)] The HAL is described as providing transparent access to five distinct quantum backends through a single API, yet no mechanism is specified for abstracting backend-specific constraints such as connectivity graphs, calibration data, or noise models without per-backend overrides or hidden overheads. This omission undermines the framework-agnostic guarantee.
Authors: The HAL uses a standardized backend descriptor that encodes connectivity graphs, calibration data, and noise models, enabling automatic transpilation inside the unified graph without requiring user-level per-backend overrides. We agree that the current manuscript description remains at a high level and lacks concrete specification of these mechanisms. We will revise the HAL section to include the descriptor schema, an outline of the transpilation process, and empirical measurements of any overhead on the supported backends to substantiate the framework-agnostic claim. revision: yes
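The gradient unification the rebuttal leans on is the parameter-shift rule, which for gates generated by a Pauli operator gives an exact gradient from two shifted circuit evaluations (not a finite-difference approximation). A minimal sketch, with cos(θ) standing in for a one-qubit circuit's Z expectation:

```python
import numpy as np


def expectation(theta):
    """Toy stand-in for a circuit expectation value: <Z> = cos(theta)
    after an RY(theta) rotation on |0>."""
    return np.cos(theta)


def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule: d<E>/dtheta = (E(theta + s) - E(theta - s)) / 2
    with s = pi/2 for Pauli-generated gates. Exact for this gate family."""
    return (f(theta + shift) - f(theta - shift)) / 2
```

Because both evaluations are ordinary circuit executions, any backend behind the HAL can supply them, which is why standardizing on this rule sidesteps backend-specific autodiff.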
Circularity Check
No circularity: architecture described as independent design
full rationale
The paper presents a design for a framework-agnostic QNN using a unified computational graph, hardware abstraction layer, and ONNX-based export pipeline. No equations, fitted parameters, or self-referential definitions appear in the provided text. Claims of lossless translation and performance parity are stated as outcomes of the architecture rather than inputs used to define it. Benchmarks on Iris, Wine, and MNIST-4 are presented as independent empirical validation. The derivation consists of engineering choices motivated by vendor fragmentation, with no reduction to self-citation chains or tautological inputs.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Quantum computing frameworks and hardware backends can be unified through a single computational graph and hardware abstraction layer without inherent loss of functionality.
invented entities (2)
-
Hardware Abstraction Layer (HAL)
no independent evidence
-
Multi-framework export pipeline using ONNX metadata
no independent evidence
Reference graph
Works this paper leans on
-
[1]
Variational quantum algorithms
Marco Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, et al. Variational quantum algorithms. Nature Reviews Physics, 3(9): 625--644, 2021. doi:10.1038/s42254-021-00348-9
-
[2]
A variational eigenvalue solver on a photonic quantum processor
Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O'Brien. A variational eigenvalue solver on a photonic quantum processor. Nature Communications, 5(1): 4213, 2014. doi:10.1038/ncomms5213
-
[3]
Quantum computing in the NISQ era and beyond
John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2: 79, 2018. doi:10.22331/q-2018-08-06-79
-
[4]
Parameterized quantum circuits as machine learning models
Marcello Benedetti, Erika Lloyd, Stefan Sack, and Mattia Fiorentini. Parameterized quantum circuits as machine learning models. Quantum Science and Technology, 4(4): 043001, 2019. doi:10.1088/2058-9565/ab4eb5
-
[5]
Supervised Learning with Quantum-Enhanced Feature Spaces
Vojtěch Havlíček, Antonio D Córcoles, Kristan Temme, Aram W Harrow, Abhinav Kandala, Jerry M Chow, and Jay M Gambetta. Supervised learning with quantum-enhanced feature spaces. Nature, 567(7747): 209--212, 2019. doi:10.1038/s41586-019-0980-2
-
[6]
Maria Schuld, Alex Bocharov, Krysta M Svore, and Nathan Wiebe. Circuit-centric quantum classifiers. Physical Review A, 101(3): 032308, 2020. doi:10.1103/PhysRevA.101.032308
-
[7]
Representation learning via quantum neural networks
Yunchao Liu, Srinivasan Arunachalam, and Kristan Temme. Representation learning via quantum neural networks. Physical Review Research, 6: L032057, 2024. doi:10.1103/PhysRevResearch.6.L032057
-
[8]
Variational quantum circuits for deep reinforcement learning
Samuel Yen-Chi Chen, Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, and Hsi-Sheng Goan. Variational quantum circuits for deep reinforcement learning. IEEE Access, 8: 141007--141024, 2020. doi:10.1109/ACCESS.2020.3010470
-
[9]
Reinforcement learning with quantum variational circuit
Owen Lockwood and Mei Si. Reinforcement learning with quantum variational circuit. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 16: 245--251, 2020. doi:10.1609/aiide.v16i1.7437
-
[10]
Michael Broughton, Guillaume Verdon, Trevor McCourt, Antonio J Martinez, Jae Hyeon Yoo, Sergei V Isakov, Philip Massey, Ramin Halavati, Masoud Mohseni, Dave Bacon, et al. TensorFlow Quantum: A software framework for quantum machine learning. arXiv preprint arXiv:2003.02989, 2020. doi:10.48550/arXiv.2003.02989
-
[11]
Cirq: A Python framework for creating, editing, and invoking NISQ circuits
Cirq Developers. Cirq: A Python framework for creating, editing, and invoking NISQ circuits. https://quantumai.google/cirq, 2023. Accessed: 2024-01-15
-
[12]
PennyLane: Automatic differentiation of hybrid quantum-classical computations
Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, Shahnawaz Ahmed, Vishnu Ajith, M Sohaib Alam, Guillermo Alonso-Linaje, B AkashNarayanan, Ali Asadi, et al. PennyLane: Automatic differentiation of hybrid quantum-classical computations. arXiv preprint arXiv:1811.04968, 2022. doi:10.48550/arXiv.1811.04968
-
[13]
Qiskit Machine Learning: An open-source framework for quantum machine learning
Qiskit ML Contributors. Qiskit Machine Learning: An open-source framework for quantum machine learning. https://qiskit.org/ecosystem/machine-learning/, 2023. Accessed: 2024-01-15
-
[14]
Qiskit: An open-source framework for quantum computing
Qiskit Contributors. Qiskit: An open-source framework for quantum computing. https://qiskit.org/, 2023
-
[15]
Amazon Braket Developer Guide
Amazon Web Services. Amazon Braket Developer Guide. https://docs.aws.amazon.com/braket/, 2023. Accessed: 2024-01-15
-
[16]
Azure Quantum Documentation
Microsoft. Azure Quantum Documentation. https://learn.microsoft.com/en-us/azure/quantum/, 2023. Accessed: 2024-01-15
-
[17]
IonQ quantum cloud
IonQ Inc. IonQ quantum cloud. https://ionq.com/, 2023. Accessed: 2024-01-15
-
[18]
Rigetti quantum cloud services
Rigetti Computing. Rigetti quantum cloud services. https://www.rigetti.com/, 2023. Accessed: 2024-01-15
-
[19]
ONNX: Open neural network exchange
ONNX Consortium. ONNX: Open neural network exchange. https://onnx.ai/, 2023. Accessed: 2024-01-15
-
[20]
Evaluating analytic gradients on quantum hardware
Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. Evaluating analytic gradients on quantum hardware. Physical Review A, 99(3): 032331, 2019. doi:10.1103/PhysRevA.99.032331
-
[21]
Noisy intermediate-scale quantum algorithms
Kishor Bharti, Alba Cervera-Lierta, Thi Ha Kyaw, Tobias Haug, Sumner Alperin-Lea, Abhinav Anand, Matthias Degroote, Hermanni Heimonen, Jakob S Kottmann, Tim Menke, et al. Noisy intermediate-scale quantum algorithms. Reviews of Modern Physics, 94(1): 015004, 2022. doi:10.1103/RevModPhys.94.015004
-
[22]
Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets
Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M Chow, and Jay M Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, 549(7671): 242--246, 2017. doi:10.1038/nature23879
-
[23]
Variational quantum eigensolver with fewer qubits
Jin-Guo Liu, Yi-Hong Zhang, Yuan Wan, and Lei Wang. Variational quantum eigensolver with fewer qubits. Physical Review Research, 1(2): 023025, 2019. doi:10.1103/PhysRevResearch.1.023025
-
[24]
A Quantum Approximate Optimization Algorithm
Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm. arXiv preprint arXiv:1411.4028, 2014. doi:10.48550/arXiv.1411.4028
-
[25]
Kosuke Mitarai, Makoto Negoro, Masahiro Kitagawa, and Keisuke Fujii. Quantum circuit learning. Physical Review A, 98(3): 032309, 2018. doi:10.1103/PhysRevA.98.032309
-
[26]
Quantum machine learning in feature Hilbert spaces
Maria Schuld and Nathan Killoran. Quantum machine learning in feature Hilbert spaces. Physical Review Letters, 122(4): 040504, 2019. doi:10.1103/PhysRevLett.122.040504
-
[27]
A rigorous and robust quantum speed-up in supervised machine learning
Yunchao Liu, Srinivasan Arunachalam, and Kristan Temme. A rigorous and robust quantum speed-up in supervised machine learning. Nature Physics, 17(9): 1013--1017, 2021. doi:10.1038/s41567-021-01287-z
-
[28]
Power of data in quantum machine learning
Hsin-Yuan Huang, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven, and Jarrod R McClean. Power of data in quantum machine learning. Nature Communications, 12(1): 2631, 2021. doi:10.1038/s41467-021-22539-9
-
[29]
The inductive bias of quantum kernels
Jonas M Kübler, Simon Buchholz, and Bernhard Schölkopf. The inductive bias of quantum kernels. Advances in Neural Information Processing Systems, 34: 12661--12673, 2021
-
[30]
Hsin-Yuan Huang, Michael Broughton, Jordan Cotler, Sitan Chen, Jerry Li, Masoud Mohseni, Hartmut Neven, Ryan Babbush, Richard Kueng, John Preskill, and Jarrod R McClean. Quantum advantage in learning from experiments. Science, 376(6598): 1182--1186, 2022. doi:10.1126/science.abn7293
-
[31]
Ewin Tang et al. Dequantizing the quantum singular value transformation: Hardness and applications to quantum chemistry and the quantum PCP conjecture. Proceedings of STOC, 2021. doi:10.1145/3564246.3585234
-
[32]
Sukin Sim, Peter D Johnson, and Alán Aspuru-Guzik. Expressibility and entangling capability of parameterized quantum circuits for hybrid quantum-classical algorithms. Advanced Quantum Technologies, 2(12): 1900070, 2019. doi:10.1002/qute.201900070
-
[33]
Expressive power of parametrized quantum circuits
Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, and Dacheng Tao. Expressive power of parametrized quantum circuits. Physical Review Research, 2(3): 033125, 2020. doi:10.1103/PhysRevResearch.2.033125
-
[34]
Barren plateaus in quantum neural network training landscapes
Jarrod R McClean, Sergio Boixo, Vadim N Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1): 4812, 2018. doi:10.1038/s41467-018-07090-4
-
[35]
Effect of barren plateaus on gradient-free optimization
Andrew Arrasmith, Marco Cerezo, Piotr Czarnik, Lukasz Cincio, and Patrick J Coles. Effect of barren plateaus on gradient-free optimization. Quantum, 5: 558, 2021. doi:10.22331/q-2021-10-05-558
-
[36]
Quantum convolutional neural networks
Iris Cong, Soonwon Choi, and Mikhail D Lukin. Quantum convolutional neural networks. Nature Physics, 15(12): 1273--1278, 2019. doi:10.1038/s41567-019-0648-8
-
[37]
Absence of barren plateaus in quantum convolutional neural networks
Arthur Pesah, Marco Cerezo, Samson Wang, Tyler Volkoff, Andrew T Sornborger, and Patrick J Coles. Absence of barren plateaus in quantum convolutional neural networks. Physical Review X, 11(4): 041011, 2021. doi:10.1103/PhysRevX.11.041011
-
[38]
Hierarchical quantum classifiers
Edward Grant, Marcello Benedetti, Shuxiang Cao, Andrew Hallam, Joshua Lockhart, Vid Stojevic, Andrew G Green, and Simone Severini. Hierarchical quantum classifiers. npj Quantum Information, 4(1): 65, 2018. doi:10.1038/s41534-018-0116-9
-
[39]
Exploring entanglement and optimization within the Hamiltonian variational ansatz
Roeland Wiersema, Cunlu Zhou, Yvette de Sereville, Juan F Carrasquilla, Yong Baek Kim, and Henry Yuen. Exploring entanglement and optimization within the Hamiltonian variational ansatz. PRX Quantum, 1(2): 020319, 2020. doi:10.1103/PRXQuantum.1.020319
-
[40]
Trainability of dissipative perceptron-based quantum neural networks
Kunal Sharma, Marco Cerezo, Enrico Fontana, Akira Sone, and Patrick J Coles. Trainability of dissipative perceptron-based quantum neural networks. Physical Review Letters, 128(7): 070501, 2022. doi:10.1103/PhysRevLett.128.070501
-
[41]
Training deep quantum neural networks
Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J Osborne, Robert Salzmann, Daniel Scheiermann, and Ramona Wolf. Training deep quantum neural networks. Nature Communications, 11(1): 808, 2020. doi:10.1038/s41467-020-14454-2
-
[42]
Noise-induced barren plateaus in variational quantum algorithms
Samson Wang, Enrico Fontana, Marco Cerezo, Kunal Sharma, Akira Sone, Lukasz Cincio, and Patrick J Coles. Noise-induced barren plateaus in variational quantum algorithms. Nature Communications, 12(1): 6961, 2021. doi:10.1038/s41467-021-27045-6
-
[43]
Generalization in quantum machine learning from few training data
Matthias C Caro, Hsin-Yuan Huang, Marco Cerezo, Kunal Sharma, Andrew Sornborger, Lukasz Cincio, and Patrick J Coles. Generalization in quantum machine learning from few training data. Nature Communications, 13(1): 4919, 2022. doi:10.1038/s41467-022-32550-3
-
[44]
Training quantum embedding kernels on near-term quantum computers
Thomas Hubregtsen, David Wierichs, Elies Gil-Fuster, Peter-Jan H S Derks, Paul K Faehrmann, and Johannes Jakob Meyer. Training quantum embedding kernels on near-term quantum computers. Physical Review A, 106(4): 042431, 2022. doi:10.1103/PhysRevA.106.042431
-
[45]
Towards quantum machine learning with tensor networks
William Huggins, Piyush Patil, Bradley Mitchell, K Birgitta Whaley, and E Miles Stoudenmire. Towards quantum machine learning with tensor networks. Quantum Science and Technology, 4(2): 024001, 2019. doi:10.1088/2058-9565/aaea94
-
[46]
Evidence for the utility of quantum computing before fault tolerance
Youngseok Kim, Andrew Eddins, Sajant Anand, Ken Xuan Wei, Ewout van den Berg, Sami Rosenblatt, Hasan Nayfeh, Yantao Wu, Michael Zaletel, Kristan Temme, et al. Evidence for the utility of quantum computing before fault tolerance. Nature, 618(7965): 500--505, 2023. doi:10.1038/s41586-023-06096-3
-
[47]
Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G S L Brandao, David A Buell, et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779): 505--510, 2019. doi:10.1038/s41586-019-1666-5
-
[48]
Filipa C R Peres and Ernesto F Galvão. Quantum circuit compilation and hybrid computation using Pauli-based computation. Quantum, 7: 1126, 2023. doi:10.22331/q-2023-10-03-1126
-
[49]
TensorFlow: A system for large-scale machine learning
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In OSDI, volume 16, pages 265--283, 2016
-
[50]
PyTorch: An imperative style, high-performance deep learning library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32, 2019
-
[51]
JAX: Composable transformations of Python + NumPy programs
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: Composable transformations of Python + NumPy programs. https://github.com/google/jax, 2018. Version 0.4.x
-
[52]
Estimating the gradient and higher-order derivatives on quantum hardware
Andrea Mari, Thomas R Bromley, and Nathan Killoran. Estimating the gradient and higher-order derivatives on quantum hardware. Physical Review A, 103(1): 012405, 2021. doi:10.1103/PhysRevA.103.012405
-
[53]
Methodology for replacing indirect measurements with direct measurements
Kosuke Mitarai and Keisuke Fujii. Methodology for replacing indirect measurements with direct measurements. Physical Review Research, 1(1): 013006, 2019. doi:10.1103/PhysRevResearch.1.013006
-
[54]
Supervised Learning with Quantum Computers
Maria Schuld and Francesco Petruccione. Supervised Learning with Quantum Computers. Springer, 2018. doi:10.1007/978-3-319-96424-9
-
[55]
Robust data encodings for quantum classifiers
Ryan LaRose and Brian Coyle. Robust data encodings for quantum classifiers. Physical Review A, 102(3): 032420, 2020. doi:10.1103/PhysRevA.102.032420
-
[56]
Universal approximation property of quantum feature map
Takahiro Goto, Quoc Hoan Tran, and Kohei Nakajima. Universal approximation property of quantum feature map. arXiv preprint arXiv:2009.00298, 2021. doi:10.48550/arXiv.2009.00298
-
[57]
Exponential concentration and untrainability in quantum kernel methods
Supanut Thanasilp, Samson Wang, Marco Cerezo, and Zoë Holmes. Exponential concentration and untrainability in quantum kernel methods. arXiv preprint arXiv:2208.11060, 2022. doi:10.48550/arXiv.2208.11060
-
[58]
The power of quantum neural networks
Amira Abbas, David Sutter, Christa Zoufal, Aurélien Lucchi, Alessio Figalli, and Stefan Woerner. The power of quantum neural networks. Nature Computational Science, 1(6): 403--409, 2021. doi:10.1038/s43588-021-00084-1
-
[59]
Kristan Temme, Sergey Bravyi, and Jay M Gambetta. Error mitigation for short-depth quantum circuits. Physical Review Letters, 119(18): 180509, 2017. doi:10.1103/PhysRevLett.119.180509
-
[60]
Practical quantum error mitigation for near-future applications
Suguru Endo, Simon C Benjamin, and Ying Li. Practical quantum error mitigation for near-future applications. Physical Review X, 8(3): 031027, 2018. doi:10.1103/PhysRevX.8.031027