Option Pricing on Noisy Intermediate-Scale Quantum Computers: A Quantum Neural Network Approach
Pith reviewed 2026-05-10 03:55 UTC · model grok-4.3
The pith
Quantum neural networks approximate option prices accurately on today's noisy quantum computers.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper demonstrates that a fully quantum approach based on quantum neural networks can achieve accurate option pricing approximations on NISQ hardware. By implementing a compact 2-qubit QNN architecture on multiple platforms including IBM Fez, IQM Garnet, IonQ Forte, and Rigetti Ankaa-3, the authors obtain pricing results that remain consistent across devices. This provides empirical evidence that QNNs, by exploiting the geometric structure of Hilbert space, can effectively approximate the option pricing function in the Black-Scholes-Merton framework and constitute a viable method for derivative pricing.
What carries the argument
A compact 2-qubit quantum neural network whose quantum circuit layers map market parameters to option prices through parameterized unitary operations.
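The paper's exact circuit is not reproduced in this summary; as a minimal sketch of the mechanism, assuming angle encoding of a single market parameter on both qubits and one RY-plus-CNOT variational layer (the actual ansatz, depth, and encoding are assumptions here), a 2-qubit QNN maps an input to a bounded expectation value that classical post-processing would rescale to a price:

```python
import numpy as np

def ry(t):
    # Standard single-qubit Y-rotation gate.
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

# CNOT with qubit 0 as control, qubit 1 as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def qnn_output(x, theta):
    """Hypothetical 2-qubit QNN: angle-encode input x on both qubits,
    apply one variational layer, return <Z> on qubit 0 (a value in [-1, 1])."""
    state = np.zeros(4)
    state[0] = 1.0                                               # |00>
    state = np.kron(ry(x), ry(x)) @ state                        # data encoding
    state = CNOT @ np.kron(ry(theta[0]), ry(theta[1])) @ state   # variational layer
    z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))                # Z observable on qubit 0
    return float(state @ z0 @ state)
```

Training would tune `theta` so a classical rescaling of this expectation tracks benchmark prices; on hardware the expectation is estimated from repeated measurement shots rather than computed exactly.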
If this is right
- QNN methods can deliver usable pricing results on current quantum processors without waiting for error-corrected hardware.
- The same circuit approach can be extended to local volatility and stochastic volatility models once hardware improves.
- Consistent cross-device performance supports using QNNs for real-time risk calculations in derivatives trading.
- Success on the benchmark model indicates that quantum machine learning can address pricing problems whose classical cost grows rapidly with model complexity.
Where Pith is reading between the lines
- The method could be tested next on multi-asset or path-dependent options to see whether qubit count limits block scaling.
- Integration with classical post-processing might reduce the impact of hardware noise while keeping the quantum advantage in function approximation.
- If the approach generalizes, trading desks could run pricing updates on cloud quantum access rather than large classical clusters for certain models.
Load-bearing premise
That the structure of quantum states lets the network learn the option pricing function well enough to stay accurate on noisy hardware.
What would settle it
If the same 2-qubit QNN produces pricing errors that exceed classical accuracy benchmarks on all four tested devices for the Black-Scholes-Merton model, the claim of consistent viable approximations would not hold.
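The classical accuracy benchmark in question is closed-form; a standard Black-Scholes-Merton European call price, using only the math stdlib:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, T, r, sigma):
    """Closed-form BSM European call price: the analytical reference
    that any QNN approximation would be scored against."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For the textbook case S=K=100, T=1, r=5%, sigma=20%, this returns about 10.45, giving an exact target for per-device error measurement.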
Original abstract
In a global derivatives market with notional values in the hundreds of trillions of dollars, the accuracy and efficiency of pricing models are of fundamental importance, with direct implications for risk management, capital allocation, and regulatory compliance. In this work, we employ the Black-Scholes-Merton (BSM) framework not as an end in itself, but as a controlled benchmark environment in which to rigorously assess the capabilities of quantum machine learning methods. We propose a fully quantum approach to option pricing based on Quantum Neural Networks (QNNs), and, to the best of our knowledge, present one of the first implementations of such a methodology on currently available quantum hardware. Specifically, we investigate whether QNNs, by exploiting the geometric structure of Hilbert space, can effectively approximate option pricing functions. Our implementation utilizes a compact 2-qubit QNN architecture evaluated across multiple state-of-the-art quantum processors, including IBM Fez, IQM Garnet, IonQ Forte, and Rigetti Ankaa-3. This cross-platform study reveals distinct hardware-dependent performance characteristics while demonstrating that accurate pricing approximations can be achieved consistently across different devices despite the constraints of Noisy Intermediate-Scale Quantum (NISQ) hardware. The results provide empirical evidence that QNN-based approaches constitute a viable framework for derivative pricing. While the analysis is conducted within the BSM setting, the broader significance lies in the potential extension of these methods to more realistic and computationally demanding models, including local volatility, stochastic volatility, and interest rate frameworks commonly used in practice.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes a fully quantum approach to option pricing via compact 2-qubit Quantum Neural Networks (QNNs) within the Black-Scholes-Merton (BSM) framework, treated explicitly as a controlled benchmark rather than a production tool. It reports implementation and evaluation of this architecture on four NISQ processors (IBM Fez, IQM Garnet, IonQ Forte, Rigetti Ankaa-3), claiming that accurate pricing approximations are achieved consistently across devices despite hardware noise, with broader implications for extension to local-volatility or stochastic-volatility models.
Significance. If the reported cross-platform results hold under quantitative scrutiny, the work supplies one of the first empirical demonstrations of QNN-based derivative pricing executed directly on physical NISQ hardware. The multi-vendor evaluation is a concrete strength that addresses hardware dependence, and the explicit framing of BSM as a benchmark avoids overclaiming. This could serve as a reproducible starting point for assessing quantum machine learning feasibility in finance, provided the accuracy claims are backed by error metrics and baselines.
major comments (1)
- [Abstract] Abstract and results description: the central claim that 'accurate pricing approximations can be achieved consistently across different devices' is asserted without any reported quantitative error metrics (e.g., mean absolute percentage error against BSM values), training loss curves, hyper-parameter details, data exclusion criteria, or classical baseline comparisons. These omissions make it impossible to assess whether the observed performance substantiates the feasibility conclusion.
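The requested headline metric is simple to pin down; a minimal mean-absolute-percentage-error computation, shown here with hypothetical device outputs scored against analytical BSM values:

```python
def mape(predicted, reference):
    """Mean absolute percentage error of model prices vs. reference prices.
    Reference prices must be nonzero (deep out-of-the-money options would
    need an absolute-error metric instead)."""
    terms = [abs(p - t) / abs(t) for p, t in zip(predicted, reference)]
    return 100.0 * sum(terms) / len(terms)

# Hypothetical QNN outputs vs. closed-form BSM values (illustrative numbers).
error_pct = mape([10.2, 5.1, 1.9], [10.45, 5.0, 2.0])
```

Reporting this per device, alongside a classical baseline, would directly substantiate or undercut the cross-platform consistency claim.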
minor comments (1)
- The manuscript would benefit from explicit statements of the QNN circuit depth, ansatz parameterization, and optimization method used for the 2-qubit architecture to allow reproduction.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on our manuscript. We address the major comment below and commit to revisions that improve the transparency and substantiation of our results.
Point-by-point responses
- Referee: [Abstract] the central claim that 'accurate pricing approximations can be achieved consistently across different devices' is asserted without any reported quantitative error metrics (e.g., mean absolute percentage error against BSM values), training loss curves, hyper-parameter details, data exclusion criteria, or classical baseline comparisons. These omissions make it impossible to assess whether the observed performance substantiates the feasibility conclusion.
Authors: We agree that the abstract and results description would benefit from explicit quantitative support for the performance claims. In the revised manuscript we will add mean absolute percentage error (MAPE) values comparing QNN outputs to BSM benchmarks, training loss curves, hyper-parameter specifications, data exclusion criteria, and classical baseline comparisons. These additions will allow readers to evaluate the accuracy and feasibility of the approach directly. Revision: yes.
Circularity Check
No significant circularity; empirical hardware benchmark is self-contained
full rationale
The paper treats BSM explicitly as an external benchmark for testing a 2-qubit QNN on physical NISQ devices (IBM, IQM, IonQ, Rigetti). Reported accuracies arise from direct circuit execution and comparison to known closed-form BSM values, with no equations, fitted parameters, or self-citations that reduce the central claim to its own inputs by construction. The derivation chain consists of standard QNN ansatz + hardware runs + classical post-processing against an independent analytical reference; this is a normal experimental demonstration and receives the default non-circularity finding.
Axiom & Free-Parameter Ledger
free parameters (1)
- QNN variational parameters
axioms (1)
- domain assumption: Quantum circuits can approximate continuous functions via Hilbert-space geometry
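This assumption has a concrete known form: expectation values of angle-encoded circuits are truncated Fourier series in the inputs (Schuld, Sweke, and Meyer). A one-qubit check, not taken from the paper, where an RX-only circuit collapses to a single cosine:

```python
import numpy as np

def rx(t):
    # Standard single-qubit X-rotation gate.
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def model(x, a, b):
    """<Z> of RX(a) RX(x) RX(b) |0>: a degree-1 Fourier series in x."""
    psi = rx(a) @ rx(x) @ rx(b) @ np.array([1.0, 0.0])
    z = np.diag([1.0, -1.0])
    return float(np.real(np.conj(psi) @ (z @ psi)))

# RX rotations commute, so this circuit's output is exactly cos(a + x + b);
# richer ansaetze and data re-uploading add higher frequencies, which is
# what gives QNNs their function-approximation capacity.
xs = np.linspace(0.0, 2 * np.pi, 7)
assert np.allclose([model(x, 0.4, 0.7) for x in xs], np.cos(xs + 1.1))
```

Whether this expressivity survives hardware noise at useful accuracy is exactly what the paper's cross-device experiments probe.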