Generalization in quantum machine learning from few training data
3 Pith papers cite this work, alongside 392 external citations. Polarity classification is still indexing.
Years: 2026 · Verdicts: unverdicted · 3 representative citing papers
Citing papers explorer

- Local tensor-train surrogates for quantum learning models: Local tensor-train surrogates approximate quantum machine learning models via Taylor polynomials and tensor networks, delivering polynomial parameter scaling and explicit generalization bounds controlled by the patch radius.
- Eliminating Vendor Lock-In in Quantum Machine Learning via Framework-Agnostic Neural Networks: A new QNN architecture with a unified graph, HAL, and ONNX pipeline enables cross-framework and cross-hardware QML, with training time within 8% of native implementations and identical accuracy on Iris, Wine, and MNIST-4 tasks.
- A Comprehensive Analysis of Accuracy and Robustness in Quantum Neural Networks: QCNN, QRNN, and QViT perform well on low-feature data but degrade on high-feature datasets; QViT is most robust to quantum noise, while classical-style models hold up better against adversarial noise.
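To make the tensor-train idea in the first citing paper concrete: a sketch of the standard TT-SVD construction, which factors a high-order coefficient tensor (such as one holding Taylor-expansion coefficients) into a chain of small three-way cores via sequential truncated SVDs. This is a generic NumPy illustration of the technique, not that paper's exact surrogate construction; the function names and the `max_rank` truncation parameter are this sketch's own.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Factor a d-way tensor into tensor-train (TT) cores via sequential SVDs.

    Each core has shape (r_prev, n_k, r_next); ranks are truncated to max_rank,
    which is what yields polynomial rather than exponential parameter counts.
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    r_prev = 1
    # Unfold: rows group (previous rank x current mode), columns the rest.
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                       # truncate the TT-rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # Carry the remainder forward and refold for the next mode.
        mat = (np.diag(s[:r]) @ Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))     # last core closes the chain
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor (for checking accuracy)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

With `max_rank` at least the true TT-rank the reconstruction is exact; a low-rank tensor (e.g. a rank-1 outer product) is captured exactly even at `max_rank=1`, which is the compression effect the surrogate summary's "polynomial parameter scaling" refers to.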