pith. machine review for the scientific record.

arxiv: 2604.03346 · v1 · submitted 2026-04-03 · 🪐 quant-ph

Recognition: 2 theorem links


Learning PDEs for Portfolio Optimization with Quantum Physics-Informed Neural Networks

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 19:49 UTC · model grok-4.3

classification 🪐 quant-ph
keywords PDEs · portfolio optimization · quantum circuits · physics-informed neural networks · tensor rank decomposition · Merton problem · quantum-inspired models

The pith

Parameterized quantum circuits using tensor rank decomposition approximate PDE solutions for portfolio optimization with higher accuracy and fewer parameters than classical networks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that a parameterized quantum circuit can implement polynomials via tensor rank decomposition, enabling quantum physics-informed neural networks to solve partial differential equations in financial mathematics. This reduces resource complexity to polynomial when the tensor rank is moderate and provides approximation guarantees for the PDE solution. Experiments on the Merton portfolio optimization problem, which finds the optimal fraction to invest in risky versus risk-free assets, show the quantum models outperforming classical fully connected PINNs in accuracy and convergence speed while using 80 times fewer parameters. The approach also beats classical PINNs designed with the same inductive bias, indicating a quantum-induced benefit.

Core claim

We develop a parameterized quantum circuit for tensor rank decomposition of polynomials and use it to construct Quantum Physics-Informed Neural Networks and Quantum-inspired PINNs that guarantee polynomial approximations to PDE solutions, demonstrating superior performance on the Merton problem with significantly reduced parameters.

What carries the argument

Parameterized quantum circuit implementing a polynomial via tensor rank decomposition within a physics-informed neural network framework to enforce the PDE residual.
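The tensor-rank ansatz that carries the argument can be sketched classically. The toy below (our illustration, not the paper's circuit) evaluates a D-variate polynomial in CP form, f(x) = Σ_r Π_d q_{r,d}(x_d), with each q_{r,d} a univariate polynomial; the array shapes and names are assumptions for illustration.

```python
import numpy as np

def cp_polynomial(x, cores):
    """Evaluate a rank-R, D-variate CP polynomial.

    cores has shape (R, D, K): for rank-1 term r and variable d, the
    K coefficients of a degree-(K-1) univariate polynomial q_{r,d}.
    """
    R, D, K = cores.shape
    powers = np.vander(x, K, increasing=True)         # (D, K): x_d^0 .. x_d^{K-1}
    factors = np.einsum('rdk,dk->rd', cores, powers)  # q_{r,d}(x_d)
    return factors.prod(axis=1).sum()                 # sum of rank-1 products

rng = np.random.default_rng(0)
R, D, K = 3, 4, 3                        # rank 3, four variables, degree 2
cores = rng.normal(size=(R, D, K))
val = cp_polynomial(rng.normal(size=D), cores)

# Parameter count scales as R*D*K, versus K**D coefficients for a dense
# coefficient tensor -- the source of the claimed resource reduction.
assert cores.size == R * D * K < K ** D
```

The point of the exercise is the last line: when the rank R stays moderate, the parameter count grows polynomially in D rather than exponentially.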

Load-bearing premise

The solution to the target PDE admits a polynomial representation with sufficiently low tensor rank for the quantum circuit to remain efficient.

What would settle it

Running the same experiment on a PDE whose solution requires a high tensor rank, and finding either no accuracy gain or a loss of efficiency, would falsify the claimed practical advantage.

Original abstract

Partial differential equations (PDEs) play a crucial role in financial mathematics, particularly in portfolio optimization, and solving them using classical numerical or neural network methods has always posed significant challenges. Here, we investigate the potential role of quantum circuits for solving PDEs. We design a parameterized quantum circuit (PQC) for implementing a polynomial based on tensor rank decomposition, reducing the quantum resource complexity from exponential to polynomial when the corresponding tensor rank is moderate. Building on this circuit, we develop a Quantum Physics-Informed Neural Network (QPINN) and a Quantum-inspired PINN, both of which guarantee the existence of an approximation of the PDE solution, and this approximation is represented as a polynomial that incorporates tensor rank decomposition. Despite using 80 times fewer parameters in experiments, our quantum models achieve higher accuracy and faster convergence than a classical fully connected PINN when solving the PDE for the Merton portfolio optimization problem, which determines the optimal investment fraction between a risky and a risk-free asset. Our quantum models further outperform a classical PINN constructed to share the same inductive bias, providing experimental evidence of quantum-induced improvement and highlighting a resource-efficient pathway toward classical and near-term quantum PDE solvers.
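For context on the benchmark: the Merton problem has a classical closed-form answer. Under CRRA utility with relative risk aversion γ, the optimal constant risky-asset fraction is π* = (μ − r)/(γσ²). A minimal sketch, with illustrative parameter values not taken from the paper:

```python
def merton_fraction(mu, r, sigma, gamma):
    """Closed-form Merton optimal risky-asset fraction for CRRA
    utility with relative risk aversion gamma (log utility: gamma=1)."""
    return (mu - r) / (gamma * sigma ** 2)

# Illustrative numbers, not from the paper: 8% drift, 2% risk-free rate,
# 20% volatility, log utility.
pi_star = merton_fraction(mu=0.08, r=0.02, sigma=0.20, gamma=1.0)
print(pi_star)  # approximately 1.5: lever the risky asset by borrowing
```

The constant-control, logarithmic value function referenced later in the referee report is exactly this closed form for log utility.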

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript proposes a parameterized quantum circuit (PQC) realizing multivariate polynomials via tensor-rank decomposition to build Quantum Physics-Informed Neural Networks (QPINNs) and quantum-inspired PINNs for solving PDEs. Applied to the Merton portfolio optimization HJB equation, the quantum models are claimed to achieve higher accuracy and faster convergence than classical fully connected PINNs while using 80 times fewer parameters, with an approximation guarantee based on the polynomial tensor decomposition; this is presented as experimental evidence of quantum-induced improvement.

Significance. If the tensor-rank assumption holds and the experimental gains are robust, the work could demonstrate a resource-efficient route to PDE solvers for financial mathematics on near-term quantum hardware or via quantum-inspired classical methods, particularly for problems where high-dimensional grids challenge standard numerical techniques.

major comments (3)
  1. [PQC construction and theoretical guarantee section] The resource-reduction claim (exponential to polynomial) and the 80x parameter advantage rest on the Merton PDE solution admitting a moderate tensor rank under the polynomial ansatz. The manuscript does not report the computed tensor rank of the known closed-form solution (logarithmic value function with constant control) on the discretization grid nor supply an a-priori bound, which is load-bearing for transferring the approximation guarantee to the concrete experiments.
  2. [Experimental results on Merton problem] The experimental comparison to the classical fully connected PINN reports higher accuracy with far fewer parameters, yet lacks details on error bars, exact data splits, hyperparameter optimization protocol, and noise models; without these, it is unclear whether the observed improvement is attributable to the quantum structure or to differences in model capacity and training.
  3. [Approximation guarantee discussion] The approximation guarantee is explicitly tied to the choice of tensor rank r and polynomial degree; the paper should specify how r is selected for the Merton instance and demonstrate that the guarantee remains valid under the discretization and any stochastic elements used in the numerical experiments.
minor comments (2)
  1. [Methods] The description of the classical PINN sharing the same inductive bias as the QPINN would benefit from an explicit side-by-side architecture diagram or parameter count table.
  2. [PQC construction] Notation for the tensor decomposition in the PQC (e.g., the precise mapping from tensor cores to circuit gates) could be illustrated with a small-scale example equation.
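The physics-informed training the report asks to see documented amounts to minimizing the PDE residual at collocation points. A toy stand-in, assuming a linear polynomial ansatz so least squares can replace gradient-based optimization: fit u(x) = Σ_k a_k x^k to the ODE u'(x) = u(x) with u(0) = 1 (exact solution e^x).

```python
import numpy as np

K = 6                                        # degree-5 polynomial ansatz
xs = np.linspace(0.0, 1.0, 50)               # collocation points
V = np.vander(xs, K, increasing=True)        # row i: x_i^0 .. x_i^{K-1}
dV = np.zeros_like(V)
for k in range(1, K):
    dV[:, k] = k * xs ** (k - 1)             # d/dx x^k = k x^{k-1}

# Stack the residual rows (u' - u = 0) and the boundary row u(0) = 1,
# then solve in the least-squares sense; an actual PINN would minimize
# the same residual loss by gradient descent over network parameters.
A = np.vstack([dV - V, V[:1]])
b = np.concatenate([np.zeros(len(xs)), [1.0]])
a, *_ = np.linalg.lstsq(A, b, rcond=None)

u1 = V[-1] @ a                               # approximation at x = 1
print(abs(u1 - np.e))                        # small error versus exact e
```

The error-bar and hyperparameter concerns in major comment 2 apply to exactly this loop once the deterministic solve is replaced by stochastic training.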

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments, which help strengthen the manuscript. We address each major point below and will revise the paper to incorporate the suggested clarifications and additional details.

Point-by-point responses
  1. Referee: The resource-reduction claim (exponential to polynomial) and the 80x parameter advantage rest on the Merton PDE solution admitting a moderate tensor rank under the polynomial ansatz. The manuscript does not report the computed tensor rank of the known closed-form solution (logarithmic value function with constant control) on the discretization grid nor supply an a-priori bound, which is load-bearing for transferring the approximation guarantee to the concrete experiments.

    Authors: We agree that explicitly reporting the tensor rank of the closed-form solution on the experimental discretization grid would strengthen the connection between the general guarantee and our results. In the revised manuscript we will compute this rank numerically for the logarithmic value function and constant control on the grid used in experiments, and we will supply a simple a-priori bound derived from the analytic form of the solution that confirms the rank remains moderate (independent of dimension for this particular problem). revision: yes

  2. Referee: The experimental comparison to the classical fully connected PINN reports higher accuracy with far fewer parameters, yet lacks details on error bars, exact data splits, hyperparameter optimization protocol, and noise models; without these, it is unclear whether the observed improvement is attributable to the quantum structure or to differences in model capacity and training.

    Authors: We acknowledge the need for greater experimental transparency. The revised version will report error bars obtained from 10 independent runs with different random seeds, clarify that collocation points are sampled once from fixed distributions (no train/test split in the classical sense), document the hyperparameter search protocol (grid search over learning rate, network width, and polynomial degree), and state that all circuits are simulated noiselessly. We will also add a parameter-matched classical PINN baseline to isolate the effect of the tensor-rank inductive bias. revision: yes

  3. Referee: The approximation guarantee is explicitly tied to the choice of tensor rank r and polynomial degree; the paper should specify how r is selected for the Merton instance and demonstrate that the guarantee remains valid under the discretization and any stochastic elements used in the numerical experiments.

    Authors: We will add an explicit subsection describing the practical selection of r: r is chosen as the smallest integer such that the tensor-rank polynomial approximates the known closed-form solution to within a prescribed tolerance on a dense validation grid. We will also bound the additional discretization error separately and show that the overall guarantee continues to hold. Training uses deterministic full-batch gradient descent, so no stochasticity affects the loss; this will be stated clearly. revision: yes
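The r-selection rule the authors propose can be sketched in the bivariate case, where tensor rank reduces to matrix rank: sample the known solution on a grid and take the smallest r whose rank-r SVD truncation meets a tolerance. The tolerance and grid below are hypothetical; the log value function is used because the report notes the Merton closed form is logarithmic.

```python
import numpy as np

xs = np.linspace(0.1, 1.0, 64)               # positive grid, avoids log 0
F = np.log(np.outer(xs, xs))                 # log(x*y) = log x + log y
U, s, Vt = np.linalg.svd(F)

tol = 1e-8                                   # hypothetical tolerance
for r in range(1, len(s) + 1):
    F_r = (U[:, :r] * s[:r]) @ Vt[:r]        # rank-r truncation
    if np.linalg.norm(F - F_r) <= tol * np.linalg.norm(F):
        break
print(r)  # 2: log x + log y is a two-term separable (rank-2) surface
```

This is consistent with the rebuttal's claim that the rank stays moderate for this problem: the additive structure of the logarithmic solution caps the rank independently of grid resolution.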

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper's core claims rest on experimental comparisons showing quantum models outperforming classical PINNs (including one with matched inductive bias) on the Merton PDE, using 80x fewer parameters. The PQC construction for tensor-rank polynomial representation and the stated approximation guarantee follow directly from the explicit architectural choice and standard approximation properties of polynomials; these are not derived by reducing to the target result itself. No self-citations, fitted inputs renamed as predictions, or uniqueness theorems are invoked in a load-bearing way that collapses the central result to its inputs. The moderate-rank assumption is an empirical precondition verified by the reported success of the experiments rather than a hidden definitional loop.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on the assumption that the target PDE solution can be represented as a moderate-rank tensor polynomial and that this representation can be realized by a parameterized quantum circuit whose parameters are learned via physics-informed training.

free parameters (1)
  • tensor rank r
    Chosen moderate to keep quantum resource scaling polynomial; value not specified in abstract but directly controls circuit depth and parameter count.
axioms (1)
  • domain assumption The PDE solution admits a polynomial approximation whose tensor rank remains moderate.
    Invoked to guarantee existence of the approximation and to reduce quantum complexity from exponential to polynomial.

pith-pipeline@v0.9.0 · 5515 in / 1391 out tokens · 35892 ms · 2026-05-13T19:49:57.048326+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

63 extracted references · 63 canonical work pages · 1 internal anchor
