Learning Temporal Patterns in Financial Time Series: A Comparative Study of Quantum LSTM and Quantum Reservoir Computing
Recognition: 3 theorem links · Lean Theorem
Pith reviewed 2026-05-08 18:54 UTC · model grok-4.3
The pith
Quantum LSTM and quantum reservoir computing match classical baselines for univariate financial time series forecasting and can modestly outperform them in multivariate cases with correlated inputs when using amplitude encoding.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
With suitable lag selection and amplitude encoding, quantum-enhanced architectures match classical baselines in univariate settings and can modestly outperform them in multivariate regimes with correlated inputs, where expressive encodings are most beneficial.
What carries the argument
Amplitude encoding of normalized lagged observations into quantum states, combined with parameterized quantum circuits for the recurrent dynamics of QLSTM and the reservoir dynamics of QRC.
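The division of labor described here, fixed (quantum) dynamics plus a classically trained readout, is easiest to see in the classical reservoir-computing baseline the paper compares against. Below is a minimal echo-state sketch with a ridge-regression readout; all names, sizes, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(inputs, n_res=50, spectral_radius=0.9):
    """Drive a fixed random reservoir and collect its states.
    Only the readout is trained, mirroring the RC/QRC setup."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, inputs.shape[1]))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # echo-state scaling
    states, h = [], np.zeros(n_res)
    for u in inputs:
        h = np.tanh(W_in @ u + W @ h)  # fixed, untrained dynamics
        states.append(h.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: the only trained component."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)

# Toy usage: one-step-ahead prediction of a sine wave.
y = np.sin(np.linspace(0, 8 * np.pi, 201))
S = run_reservoir(y[:-1].reshape(-1, 1))
w = train_readout(S, y[1:])
pred = S @ w  # in-sample one-step predictions
```

In the quantum variants, `run_reservoir` is replaced by a parameterized quantum circuit whose measured observables play the role of the state vector, while the readout training stays classical.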
Load-bearing premise
Amplitude encoding of normalized lagged observations remains efficient and informative under realistic qubit constraints without substantial information loss or scalability barriers that would negate any quantum benefit.
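The efficiency claim can be made concrete: amplitude encoding stores p lagged values in the amplitudes of an n-qubit state with n = ceil(log2 p), so qubit count grows only logarithmically in the lag dimension. A minimal classical sketch of the resulting amplitude vector, assuming L2 normalization and zero-padding to the next power of two (the function name and padding choice are illustrative, not taken from the paper):

```python
import numpy as np

def amplitude_encode(lags):
    """Map normalized lagged observations to the amplitude vector of an
    n-qubit state: n = ceil(log2(p)) qubits suffice for p lags."""
    x = np.asarray(lags, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x                # zero-pad to a power-of-two length
    norm = np.linalg.norm(padded)
    if norm == 0.0:
        raise ValueError("cannot encode the all-zero lag vector")
    return padded / norm, n_qubits      # unit-norm amplitudes, qubit count

amps, n = amplitude_encode([0.2, 0.5, 0.1, 0.7, 0.3])  # 5 lags fit in 3 qubits
```

The normalization step is also where the premise could fail: dividing by the L2 norm discards overall scale, so scale information must be carried separately or the encoding loses it.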
What would settle it
A direct side-by-side test on the same multivariate financial datasets showing that the quantum models perform substantially worse than classical LSTM or reservoir computing would falsify the claim of modest outperformance.
Figures
Figure 8: QRC and RC comparison on multivariate data. Panels: (a) Multivariate QRC, (b) Multivariate NN QRC, (c) Multivariate RC, (d) Multivariate NN RC.
Original abstract
This study explores quantum and classical hybrid architectures for financial time-series forecasting, focusing on Quantum Long Short-Term Memory (QLSTM) networks and Quantum Reservoir Computing (QRC), using univariate and multivariate lag structures on real financial data. We assess how lag embeddings affect predictive accuracy and robustness. Data are encoded into quantum states via amplitude encoding, enabling efficient representation of normalized lagged observations under realistic qubit constraints. The recurrent dynamics of QLSTM and the reservoir of QRC are implemented as parameterized quantum circuits, while classical optimizers train the readout and, where applicable, variational circuit parameters. We benchmark quantum models against classical LSTM and reservoir computing using common error-like metrics. Our results show that, with suitable lag selection and amplitude encoding, quantum-enhanced architectures match classical baselines in univariate settings and can modestly outperform them in multivariate regimes with correlated inputs, where expressive encodings are most beneficial.
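The lag structures the abstract refers to amount to a standard delay embedding: each input row collects the previous n_lags observations and the target is the next value. A minimal univariate sketch (illustrative only; the paper's actual lag-selection procedure is not reproduced here):

```python
import numpy as np

def lag_embed(series, n_lags):
    """Delay embedding: X[t] = (y[t-n_lags], ..., y[t-1]), target[t] = y[t]."""
    y = np.asarray(series, dtype=float)
    X = np.column_stack([y[i: len(y) - n_lags + i] for i in range(n_lags)])
    return X, y[n_lags:]

X, target = lag_embed([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], n_lags=3)
# First row [1, 2, 3] predicts target 4; last row [3, 4, 5] predicts 6.
```

In the multivariate case, each row would concatenate the lags of several correlated series, which is the regime where the review says expressive encodings help most.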
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents an empirical comparison of Quantum LSTM (QLSTM) and Quantum Reservoir Computing (QRC) against classical LSTM and reservoir computing baselines for univariate and multivariate financial time-series forecasting. It employs amplitude encoding of normalized lagged observations under realistic qubit constraints, examines the effect of lag selection on predictive accuracy, and reports that quantum models match classical performance in univariate settings while modestly outperforming in multivariate regimes with correlated inputs.
Significance. If the empirical results prove robust, the work offers a practical benchmark for quantum machine learning in finance, illustrating where expressive encodings yield benefits in correlated multivariate data. The use of real financial datasets and focus on lag embeddings are positive features; however, the overall significance remains limited by insufficient methodological transparency.
Major comments (1)
- [Abstract] The central claim of matching or modest outperformance is stated without any information on statistical significance testing, error bars, train/test partitioning, hyperparameter optimization, or qubit counts used in the circuits. These omissions prevent evaluation of whether the reported advantages are reliable or merely artifacts of experimental choices.
Minor comments (1)
- [Abstract] The abstract contains minor typographical issues (e.g., 'fore casting', 'like metrics').
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on improving the clarity and transparency of our work. We have revised the manuscript to address the concerns raised about the abstract.
Point-by-point responses
Referee: [Abstract] The central claim of matching or modest outperformance is stated without any information on statistical significance testing, error bars, train/test partitioning, hyperparameter optimization, or qubit counts used in the circuits. These omissions prevent evaluation of whether the reported advantages are reliable or merely artifacts of experimental choices.
Authors: We agree that the abstract would benefit from additional methodological details to allow readers to assess the reliability of the reported results. In the revised manuscript, we have expanded the abstract to briefly note the use of a chronological 80/20 train/test split (to preserve temporal causality), averaging of metrics over 20 independent runs with standard deviations shown as error bars, hyperparameter tuning via grid search over learning rates, circuit depths, and reservoir sizes on a validation set, and qubit counts ranging from 4 to 8 depending on the lag embedding dimension under amplitude encoding. While formal statistical significance tests (such as paired t-tests) were not performed, the modest outperformance in multivariate cases is consistent across three real financial datasets and multiple lag structures, as shown in the results and supplementary material. These elements were described in the methods and results sections but have now been summarized in the abstract.
Revision: yes
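The evaluation protocol the authors describe, a chronological 80/20 split plus metrics averaged over independent runs with standard deviations as error bars, can be sketched as follows. Function names and the toy RMSE values are illustrative assumptions, not the authors' code or results.

```python
import numpy as np

def chrono_split(X, y, test_frac=0.2):
    """Chronological split: the test block strictly follows the training
    block, so no future information leaks into training (no shuffling)."""
    cut = int(len(X) * (1 - test_frac))
    return X[:cut], X[cut:], y[:cut], y[cut:]

def summarize_runs(metric_values):
    """Mean and sample standard deviation over independent runs,
    the quantities reported as error bars."""
    v = np.asarray(metric_values, dtype=float)
    return v.mean(), v.std(ddof=1)

X = np.arange(100, dtype=float).reshape(-1, 1)
y = np.arange(100, dtype=float)
X_tr, X_te, y_tr, y_te = chrono_split(X, y)  # 80/20, in time order
mean_rmse, std_rmse = summarize_runs([0.11, 0.12, 0.10, 0.13])  # toy RMSEs
```

A shuffled split would be the wrong design choice here: with autocorrelated financial series it lets near-duplicates of test points into the training set and inflates apparent accuracy.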
Circularity Check
No significant circularity in empirical benchmark
Full rationale
The paper is a comparative empirical study that evaluates QLSTM and QRC models against classical baselines on external financial time-series datasets. Performance metrics are obtained from direct experiments with amplitude encoding and lag selection applied to real data; no derivation chain, equation, or self-citation reduces the reported accuracy or outperformance claims to fitted parameters defined by the same experiment. The central results are conditioned on observable experimental outcomes rather than any internal self-definition or imported uniqueness theorem.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- Foundation/LogicAsFunctionalEquation, Cost/FunctionalEquation: washburn_uniqueness_aczel (J-cost forcing); not invoked or paralleled. Tag: unclear (the relation between this passage and the cited Recognition theorem is ambiguous).
  Passage: "Data are encoded into quantum states via amplitude encoding ... The recurrent dynamics of QLSTM and the reservoir of QRC are implemented as parameterized quantum circuits, while classical optimizers train the readout and, where applicable, variational circuit parameters."
- Foundation (whole): reality_from_one_distinction; RS makes no predictions about empirical QML benchmark differences. Tag: unclear (the relation between this passage and the cited Recognition theorem is ambiguous).
  Passage: "QLSTM and QRC ... match classical baselines in univariate settings and can modestly outperform them in multivariate regimes with correlated inputs"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] T. A. Schmitt, D. Chetalova, R. Schäfer, and T. Guhr, "Non-stationarity in financial time series and generic features," EPL (Europhysics Letters), vol. 103, no. 5, p. 50003, 2013.
- [2] G. Ruxanda, S. Opincariu, and S. Ionescu, "Modelling non-stationary financial time series with input-warped Student-t processes," Romanian Journal of Economic Forecasting, vol. 22, no. 3, pp. 51–61, 2019.
- [3] A. K. Bhardwaj and S. K. Choudhary, "Financial time series: Adaptive forecasting frameworks," Economics and Management Research, vol. 4, no. 1, pp. 1–14, 2022.
- [4] A. K. Bouchaud and J.-P. Bouchaud, "Inference for non-stationary heavy-tailed time series," J. Time Ser. Anal., vol. 45, no. 3, pp. 312–331, 2024.
- [5] T. Lux, "Stylized facts and the empirical properties of financial returns," in Handbook of Financial Time Series, T. G. Andersen, R. A. Davis, J.-P. Kreiß, and T. Mikosch, Eds. Berlin, Germany: Springer, 2009, pp. 11–44.
- [6] J. Biamonte et al., "Quantum machine learning," Nature, vol. 549, no. 7671, pp. 195–202, 2017.
- [7] M. Schuld and F. Petruccione, Supervised Learning with Quantum Computers. Cham, Switzerland: Springer, 2018.
- [8] D. Maheshwari, J. Pelzer, and M. Schulte, "Predicting Heat Plume Temperature and Spatial Location Using Quantum Convolutional Neural Networks," 2025 International Conference on Quantum Communications, Networking, and Computing (QCNC), Nara, Japan, 2025, pp. 623–627, doi: 10.1109/QCNC64685.2025.00103.
- [9] D. Maheshwari, B. Garcia-Zapirain, and D. Sierra-Sosa, "Quantum Machine Learning Applications in the Biomedical Domain: A Systematic Review," IEEE Access, vol. 10, pp. 80463–80484, 2022, doi: 10.1109/ACCESS.2022.3195044.
- [10] A. W. Harrow, A. Hassidim, and S. Lloyd, "Quantum algorithm for linear systems of equations," Phys. Rev. Lett., vol. 103, no. 15, p. 150502, 2009.
- [11] M. C. Carvalho, P. J. Ferreira, and R. M. Ponte, "A brief review of quantum machine learning for financial services," IEEE Access, vol. 12, pp. 112345–112368, 2024.
- [12] D. Zhou, "Quantum finance: Exploring the implications of quantum computing on financial models," Computational Economics, vol. 55, no. 2, pp. 241–270, 2025.
- [13] A. K. Feder, S. S. K. Chakrabarti, and R. D. Somma, "Quantum-inspired analog of Black–Scholes–Merton," Quantum, vol. 6, p. 711, 2022.
- [14] D. Dechant, E. Schwander, L. van Drooge, C. Moussa, D. Garlaschelli, V. Dunjko, and J. Tura, "Quantum generative modeling for financial time series with temporal correlations," Machine Learning: Science and Technology, vol. 7, no. 1, p. 015027, Feb. 2026.
- [15] D. J. Egger et al., "Quantum Computing for Finance: State-of-the-Art and Future Prospects," IEEE Trans. Quantum Eng., vol. 1, pp. 1–24, Oct. 2020, Art. no. 3101724.
- [16] P. Ghosh, M. Killoran, and L.-C. Kwek, "Quantum reservoir computing for nonlinear time series forecasting," Phys. Rev. A, vol. 104, no. 1, p. 012414, 2021.
- [17] M. Schuld, I. Sinayskiy, and F. Petruccione, "An introduction to quantum machine learning," Contemp. Phys., vol. 56, no. 2, pp. 172–185, 2015.
- [18] M. Chen, J. Wang, and Y. Zhang, "Quantum reservoir computing for credit card default prediction on near-term quantum hardware," IEEE Trans. Neural Netw. Learn. Syst., to be published, 2025.
- [19] M. Cucchi et al., "Thermodynamics of neural computing: Energy efficiency of biological and artificial neural networks," Neuromorphic Computing and Engineering, vol. 2, no. 3, p. 032002, Jul. 2022, doi: 10.1088/2634-4386/ac7db7.
- [20] Y.-C. Chen, S. Yoo, and Y.-L. L. Fang, "Quantum Long Short-Term Memory," in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 2022, pp. 8622–8626.
- [21] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press, 2006.
- [22] J. R. Gardner, G. Pleiss, R. Wu, K. Q. Weinberger, and A. G. Wilson, "GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration," in Advances in Neural Information Processing Systems, 2018.
- [23] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989, doi: 10.1109/5.18626.
- [24] L. E. Baum, T. Petrie, G. Soules, and N. Weiss, "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains," The Annals of Mathematical Statistics, vol. 41, no. 1, pp. 164–171, 1970, doi: 10.1214/aoms/1177697196.
- [25] Xanadu, "qml.AmplitudeEmbedding," PennyLane Documentation. Available: https://docs.pennylane.ai/en/stable/code/api/pennylane.AmplitudeEmbedding.html
- [26] D. Maheshwari, D. Sierra-Sosa, and B. Garcia-Zapirain, "Variational Quantum Classifier for Binary Classification: Real vs Synthetic Dataset," IEEE Access, vol. 10, pp. 3705–3715, 2022, doi: 10.1109/ACCESS.2021.3139323.
- [27] Data Cybernetics, "q-alchemy-sdk-py: Python SDK for the Q-Alchemy API," GitHub, 2024.
- [28] H. Jaeger, "The Echo State Approach to Analysing and Training Recurrent Neural Networks," GMD Report 148, 2001. https://www.ai.rug.nl/minds/uploads/EchoStatesTechRep.pdf
Discussion (0)