pith. machine review for the scientific record.

arxiv: 2604.20639 · v1 · submitted 2026-04-22 · 🪐 quant-ph · cs.DC


Distributed Quantum-Enhanced Optimization: A Topographical Preconditioning Approach for High-Dimensional Search


Pith reviewed 2026-05-10 00:57 UTC · model grok-4.3

classification 🪐 quant-ph cs.DC
keywords quantum optimization · hybrid algorithms · separable functions · global optimization · preconditioning · high-dimensional search · BFGS solver · near-term quantum

The pith

A quantum preconditioner prevents exponential failures in high-dimensional optimization

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a Distributed Quantum-Enhanced Optimization framework that uses quantum hardware only to precondition the search landscape instead of solving the full problem. For separable functions it breaks a large continuous optimization task into many independent small quantum subproblems whose results seed a classical solver. Benchmarks on 10-dimensional Rastrigin and Ackley functions show the method avoids the exponential rise in failure probability that occurs with purely classical global search. The quantum step also lowers the number of subsequent BFGS iterations needed for convergence. The approach is designed to run on near-term hardware by avoiding entanglement across the full qubit register.
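The pipeline summarized above can be sketched with a purely classical stand-in: the 5-qubit measurement step is mimicked by scanning each 1D subspace at 2^5 = 32 grid points, and the best scanned point seeds a BFGS refinement. Everything below (function names, bounds, resolution) is an illustrative assumption, not the paper's implementation.

```python
# Hypothetical classical stand-in for the D-QEO pipeline described above.
# The 5-qubit "topographical" measurement is mimicked by scanning each
# (identical) 1D subspace at 2**5 = 32 grid points; the best point seeds
# a BFGS refinement. Names and parameters are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Separable benchmark: f(x) = sum_i (x_i**2 - 10*cos(2*pi*x_i) + 10)
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def precondition_seed(dim=10, n_points=32, lo=-5.12, hi=5.12):
    # Low-resolution scan of one 1D subspace (all subspaces are identical
    # here); 32 points matches the resolution of a 5-qubit register.
    grid = np.linspace(lo, hi, n_points)
    values = grid**2 - 10.0 * np.cos(2.0 * np.pi * grid) + 10.0
    return np.full(dim, grid[np.argmin(values)])

seed = precondition_seed()
result = minimize(rastrigin, seed, method="BFGS")
# The seed lands in the basin of the global minimum, so BFGS converges
# to the origin rather than one of Rastrigin's many local minima.
```

The point of the sketch is the division of labor: the coarse scan only has to identify the right basin; the high-precision work is left entirely to the classical refinement step.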

Core claim

By treating the quantum processor as a topographical preconditioner that identifies promising basins of attraction, small independent 5-qubit circuits on separable subspaces generate seed points that allow a classical GPU-accelerated BFGS solver to reach the global minimum reliably. This decomposition turns a 50-qubit-scale search into concurrent 5-qubit fragments that require neither cross-register entanglement nor tensor knitting, and the resulting hybrid procedure demonstrably eliminates exponential failure rates while cutting classical iteration counts.

What carries the argument

Topographical preconditioning via concurrent 5-qubit quantum subcircuits that map independent subspaces of a separable function and supply high-quality seeds to a classical optimizer.

If this is right

  • The method prevents the exponential failure rates that appear in purely classical global optimization on the tested functions.
  • It reduces the number of classical BFGS iterations required to reach convergence.
  • It allows the full 2^50-scale search space to be handled with only 5-qubit circuits run in parallel.
  • It removes the need for entanglement or classical tensor knitting when the objective is separable.
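The qubit accounting behind these bullets is quick to verify, assuming (as the abstract states) that the 50-qubit space splits into ten 5-qubit fragments:

```python
# Counting check for the decomposition claim: ten concurrent 5-qubit
# circuits cover the same product grid as one 50-qubit register, while
# only ever instantiating 10 * 2**5 basis states at a time.
full_space = 2**50          # states of a single 50-qubit register
per_circuit = 2**5          # states of one 5-qubit fragment (32 grid points)
n_circuits = 50 // 5        # independent subspaces for a separable objective
assert per_circuit**n_circuits == full_space   # 32**10 == 2**50
concurrent_states = n_circuits * per_circuit   # 320, vs. ~1.1e15 jointly
```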

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same basin-identification strategy could be applied to other separable or approximately separable problems in engineering and machine learning.
  • Quantifying how much better the quantum seeds perform than classical multi-start heuristics would clarify the practical advantage.
  • If separability is only approximate, one could test how quickly the preconditioning benefit degrades with increasing cross-term coupling.

Load-bearing premise

The target functions must be separable so the global search space decomposes into independent low-dimensional subspaces that small quantum circuits can handle without entanglement.

What would settle it

On the 10-dimensional Rastrigin function, running the same classical BFGS solver from random starting points instead of the quantum-generated seeds yields the same success rate and iteration count.
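A minimal classical version of that control experiment can be sketched as follows. A coarse per-dimension grid seed stands in for the quantum-generated seeds, and the restart count, bounds, and tolerance are arbitrary illustrative choices, not the paper's protocol.

```python
# Classical sketch of the proposed control: identical BFGS refinement from
# (a) uniform random starts vs. (b) a coarse per-dimension grid seed that
# stands in for the quantum-generated seeds. Settings are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
DIM, LO, HI = 10, -5.12, 5.12

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def reaches_global_min(x0):
    res = minimize(rastrigin, x0, method="BFGS")
    return res.fun < 1e-6  # global minimum is f(0) = 0

# (a) random multi-start baseline: a start succeeds only if every one of
# the 10 coordinates happens to land in the central basin.
random_hits = sum(reaches_global_min(rng.uniform(LO, HI, DIM))
                  for _ in range(20))

# (b) grid seed: 32 points per dimension (~5-qubit resolution) locates the
# global basin in each 1D subspace before refinement begins.
grid = np.linspace(LO, HI, 32)
seed = np.full(DIM, grid[np.argmin([rastrigin([g]) for g in grid])])
grid_hits = int(reaches_global_min(seed))
```

If the paper's claim holds, the quantum-seeded runs should behave like case (b) while random starts behave like case (a); the settling experiment asks whether the gap survives when the classical baseline is also allowed cheap per-dimension preconditioning.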

Figures

Figures reproduced from arXiv: 2604.20639 by Dominik Soós, John Stenger, Marc Paterno, Nikos Chrisochoides.

Figure 1. Box and whisker plot showing performance degrades drastically for a …
Figure 2. Motivation for cutting and knitting. Increasing from 5 to 10 qubits …
Figure 3. Quantum measurement distribution for the 2D Himmelblau function at low spatial resolution …
Figure 4. Hardware-Efficient Ansatz utilized for a single …
Figure 5. Number of correct solutions (Ncorrect) for the Rastrigin function. The D-QEO preconditioner preserves convergence in high dimensions where the classical baseline experiences exponential failure. The box plots display the distribution of successful runs. The central line indicates the median, the box edges represent the 25th and 75th percentiles (Interquartile Range), the whiskers extend to 1.5 × IQR, and …
Figure 6. Number of correct solutions (Ncorrect) for the separable Ackley function. The quantum sampling effectively bypasses the classical gradient failures caused by the function’s non-differentiability at the global optimum. Plotting conventions follow those defined in …
read the original abstract

Optimization problems become fundamentally challenging as the number of variables increases. Because the volume of the search space grows exponentially, classical algorithms frequently fail to locate the global minimum of non-convex functions. While quantum optimization offers a potential alternative, mapping continuous problems onto near-term quantum hardware introduces severe scaling limits and barren plateaus. To bridge this gap, we propose the Distributed Quantum-Enhanced Optimization (D-QEO) framework. Instead of forcing the quantum processor to find the exact minimum, we use it simply as a topographical preconditioner. The QPU maps the landscape to locate the most promising basin of attraction, generating high-quality seed points for a classical GPU-accelerated solver to refine. To make this approach viable for utility-scale problems, we exploit the mathematical structure of separable functions. This allows us to cut a 50-qubit (i.e., $2^{50}$) global search space into independent and manageable sub-spaces using 5-qubit subcircuits. By executing these fragments concurrently with CUDA-Q, we completely bypass the overhead of cross-register entanglement and classical tensor knitting for separable functions. Benchmarks on the 10-dimensional Rastrigin and Ackley functions show that D-QEO prevents the exponential failure rates observed in purely classical algorithms. Furthermore, this quantum warm-start significantly reduces the number of classical BFGS iterations required to converge, providing a highly practical blueprint for utilizing near-term quantum resources in complex global search.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes the Distributed Quantum-Enhanced Optimization (D-QEO) framework, which employs near-term quantum processors as topographical preconditioners to identify promising basins of attraction in high-dimensional non-convex optimization landscapes. For separable functions, the approach decomposes a large (e.g., 50-qubit) search space into independent 5-qubit subcircuits that are executed concurrently via CUDA-Q, bypassing entanglement and tensor-knitting overhead. These quantum-generated seed points are then refined by a classical GPU-accelerated BFGS solver. Benchmarks on 10-dimensional Rastrigin and Ackley functions are reported to demonstrate that D-QEO avoids the exponential failure rates of purely classical methods and substantially reduces the number of classical iterations required for convergence.

Significance. If the reported empirical advantages are reproducible and the quantum component demonstrably outperforms a classical baseline that also exploits separability, the framework would provide a concrete, near-term pathway for using small quantum circuits to accelerate global search in structured optimization problems. The emphasis on preconditioning rather than exact quantum solution, combined with the distributed execution strategy for separable cases, represents a pragmatic engineering contribution that could be extended to other decomposable landscapes.

major comments (3)
  1. [Abstract] Abstract: The central claim that D-QEO 'prevents the exponential failure rates observed in purely classical algorithms' for separable functions such as Rastrigin is undermined by the separability assumption itself. Because the Rastrigin function is exactly a sum of independent one-dimensional terms, the global minimum can be recovered by solving each 1D problem separately with classical methods; no joint exponential scaling arises. The reported benefit therefore cannot be attributed to the quantum preconditioner without an explicit comparison to a classical baseline that performs the same per-dimension decomposition.
  2. [Abstract] Abstract and benchmarks description: The same per-subspace decomposition is applied to the Ackley function, yet the standard Ackley form contains the non-separable coupling term −20 exp(−0.2 √(1/n ∑x_i²)). The manuscript does not specify whether this coupling is approximated, ignored, or handled by an alternative decomposition; if the decomposition is applied unchanged, the quantum-generated seeds may systematically mislocate basins, rendering the claimed reduction in BFGS iterations incomparable to a correctly implemented classical solver.
  3. [Benchmarks] Benchmarks section (implicit in the reported 10D results): No details are provided on the classical baseline implementation. If the classical BFGS runs are performed on the full 10D space without exploiting separability (while D-QEO does), the observed iteration reduction and failure-rate improvement are artifacts of the decomposition strategy rather than evidence of quantum advantage. A load-bearing comparison requires reporting iteration counts and success rates for a classical multi-start or coordinate-wise optimizer that uses the identical separability decomposition.
minor comments (2)
  1. [Framework description] The manuscript would benefit from an explicit equation defining the topographical preconditioner mapping (e.g., how the 5-qubit circuit output is converted into a classical seed point).
  2. [Abstract] Clarify the dimensionality mapping: a 50-qubit global space decomposed into 5-qubit subcircuits implies 10 independent subspaces; the 10D benchmarks therefore correspond to a single full decomposition, which should be stated explicitly.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the thoughtful and detailed comments, which have helped us strengthen the manuscript. We agree that clearer comparisons to decomposed classical baselines and explicit specification of the benchmark functions are needed. We have revised the abstract, benchmarks section, and added new experimental results accordingly. Our point-by-point responses follow.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The central claim that D-QEO 'prevents the exponential failure rates observed in purely classical algorithms' for separable functions such as Rastrigin is undermined by the separability assumption itself. Because the Rastrigin function is exactly a sum of independent one-dimensional terms, the global minimum can be recovered by solving each 1D problem separately with classical methods; no joint exponential scaling arises. The reported benefit therefore cannot be attributed to the quantum preconditioner without an explicit comparison to a classical baseline that performs the same per-dimension decomposition.

    Authors: We agree that Rastrigin is fully separable and that a classical solver exploiting per-dimension decomposition avoids joint exponential scaling. In the original manuscript the 'purely classical algorithms' denoted standard full-dimensional global optimizers (e.g., multi-start BFGS without separability awareness). To address the concern directly, the revised manuscript now includes a side-by-side comparison against a classical coordinate-wise optimizer that applies the identical 1D decomposition. Even against this stronger baseline, the quantum-generated seeds yield measurably higher success rates and fewer BFGS iterations, because the small quantum subcircuits provide a topographical map of each 1D landscape that outperforms classical random or grid sampling. Updated tables and success-rate statistics have been added. revision: yes

  2. Referee: [Abstract] Abstract and benchmarks description: The same per-subspace decomposition is applied to the Ackley function, yet the standard Ackley form contains the non-separable coupling term −20 exp(−0.2 √(1/n ∑x_i²)). The manuscript does not specify whether this coupling is approximated, ignored, or handled by an alternative decomposition; if the decomposition is applied unchanged, the quantum-generated seeds may systematically mislocate basins, rendering the claimed reduction in BFGS iterations incomparable to a correctly implemented classical solver.

    Authors: We apologize for the omission. The benchmarks employed the separable variant of Ackley (sum of independent one-dimensional terms) that is standard in the decomposable-optimization literature; the coupling term was omitted to maintain consistency with the separability assumption stated in the framework. The revised manuscript now explicitly states the exact functional form used, includes a footnote referencing the separable Ackley definition, and notes that the non-separable case would require a different (approximate) decomposition strategy outside the current scope. With this clarification the reported iteration reductions remain comparable to a correctly implemented classical solver on the same separable instance. revision: yes

  3. Referee: [Benchmarks] Benchmarks section (implicit in the reported 10D results): No details are provided on the classical baseline implementation. If the classical BFGS runs are performed on the full 10D space without exploiting separability (while D-QEO does), the observed iteration reduction and failure-rate improvement are artifacts of the decomposition strategy rather than evidence of quantum advantage. A load-bearing comparison requires reporting iteration counts and success rates for a classical multi-start or coordinate-wise optimizer that uses the identical separability decomposition.

    Authors: We concur that baseline implementation details were insufficient. The original classical runs used full-dimensional multi-start BFGS. The revised benchmarks section now fully documents both the original full-space baseline and a new classical coordinate-wise optimizer that mirrors D-QEO’s per-subspace decomposition. The added results demonstrate that D-QEO still reduces iteration counts and raises success rates relative to this decomposed classical baseline, owing to the quality of the quantum topographical seeds. Comprehensive tables listing mean iterations, success rates, and statistical significance have been inserted. revision: yes

Circularity Check

0 steps flagged

No significant circularity; proposal and benchmarks are self-contained

full rationale

The paper presents D-QEO as a new framework that uses quantum circuits only as a topographical preconditioner for separable functions, explicitly decomposing high-dimensional spaces into independent 5-qubit subcircuits to avoid entanglement overhead. Benchmarks on 10D Rastrigin and Ackley are reported as empirical outcomes showing reduced BFGS iterations and lower failure rates versus classical baselines. No derivation chain, equation, or result reduces by construction to fitted parameters, self-citations, or renamed inputs; the separability assumption is stated upfront as a precondition for the method rather than derived from the claimed performance. The central claims rest on the proposed architecture and experimental validation, which remain independent of the outputs they report.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper introduces a new framework but relies on the domain assumption of separability for its scalability claims; no new physical entities are postulated.

axioms (1)
  • domain assumption Optimization functions of interest are separable, permitting decomposition of the global search space into independent sub-spaces solvable by small quantum circuits.
    Explicitly invoked to enable cutting 50-qubit problems into 5-qubit subcircuits without cross-entanglement.

pith-pipeline@v0.9.0 · 5565 in / 1390 out tokens · 54517 ms · 2026-05-10T00:57:06.992905+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

53 extracted references · 7 canonical work pages · 1 internal anchor

  1. [1]

    Zeus: An efficient GPU optimization method integrating PSO, BFGS, and automatic differentiation,

    D. Soós, M. Paterno, D. Ranjan, and M. Zubair, “Zeus: An efficient GPU optimization method integrating PSO, BFGS, and automatic differentiation,” in 2025 IEEE 32nd International Conference on High Performance Computing, Data, and Analytics (HiPC). IEEE, 2025, pp. 225–235

  2. [2]

    Particle swarm optimization,

    J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of ICNN’95-International Conference on Neural Networks, vol. 4. IEEE, 1995, pp. 1942–1948

  3. [3]

    The convergence of a class of double-rank minimization algorithms 1. general considerations,

    C. G. Broyden, “The convergence of a class of double-rank minimization algorithms 1. general considerations,” IMA Journal of Applied Mathematics, vol. 6, no. 1, pp. 76–90, 1970

  4. [4]

    A new approach to variable metric algorithms,

    R. Fletcher, “A new approach to variable metric algorithms,” The Computer Journal, vol. 13, no. 3, pp. 317–322, 1970

  5. [5]

    A family of variable-metric methods derived by variational means,

    D. Goldfarb, “A family of variable-metric methods derived by variational means,” Mathematics of Computation, vol. 24, no. 109, pp. 23–26, 1970

  6. [6]

    Conditioning of quasi-newton methods for function minimization,

    D. F. Shanno, “Conditioning of quasi-Newton methods for function minimization,” Mathematics of Computation, vol. 24, no. 111, pp. 647–656, 1970

  7. [7]

    An introduction to automatic differentiation,

    L. B. Rall and G. F. Corliss, “An introduction to automatic differentiation,” Computational Differentiation: Techniques, Applications, and Tools, vol. 89, pp. 1–18, 1996

  8. [8]

    Systems of extremal control,

    L. A. Rastrigin, “Systems of extremal control,” Nauka, 1974

  9. [9]

    The nova experiment: overview and status,

    J. Bian, “The NOvA experiment: overview and status,” arXiv preprint arXiv:1309.7898, 2013

  10. [10]

    Improved measurement of neutrino oscillation parameters by the nova experiment,

    M. A. Acero et al., “Improved measurement of neutrino oscillation parameters by the NOvA experiment,” Phys. Rev. D, vol. 106, p. 032004, 2022

  11. [11]

    Monte Carlo method for constructing confidence intervals with unconstrained and constrained nuisance parameters in the NOvA experiment,

    ——, “Monte Carlo method for constructing confidence intervals with unconstrained and constrained nuisance parameters in the NOvA experiment,” JINST, vol. 20, no. 02, p. T02001, 2025

  12. [12]

    Analyzing nova neutrino data with the perlmutter supercomputer,

    N. Buchanan, S. Calvez, D. Doyle, V. Hewes, A. Himmel, J. Kowalkowski, A. Norman, M. Paterno, T. Peterka, S. Sehrish, A. Sousa, T. Thakore, and O. Yildiz, “Analyzing NOvA neutrino data with the Perlmutter supercomputer,” Poster presented at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC22), Dallas, TX, U...

  13. [13]

    Volume I. Introduction to DUNE,

    B. Abi, R. Acciarri, M. A. Acero, G. Adamov, D. Adams, M. Adinolfi, Z. Ahmad, J. Ahmed, T. Alion, S. A. Monsalve et al., “Volume I. Introduction to DUNE,” Journal of Instrumentation, vol. 15, no. 08, pp. T08008–T08008, 2020

  14. [14]

    Non-iterative disentangled unitary coupled-cluster based on Lie-algebraic structure,

    M. Haidar, O. Adjoua, S. Badreddine, A. Peruzzo, and J.-P. Piquemal, “Non-iterative disentangled unitary coupled-cluster based on Lie-algebraic structure,” Quantum Science and Technology, vol. 10, no. 2, p. 025031, 2025

  15. [15]

    Solving large-scale vehicle routing problems with hybrid quantum-classical decomposition,

    A. Maciejunes, J. Stenger, D. Gunlycke, and N. Chrisochoides, “Solving large-scale vehicle routing problems with hybrid quantum-classical decomposition,” 2025. [Online]. Available: https://arxiv.org/abs/2507.05373

  16. [16]

    Towards a utility-scale quantum edge detection for real-world medical image data,

    E. Billias and N. Chrisochoides, “Towards a utility-scale quantum edge detection for real-world medical image data,” 2025. [Online]. Available: https://arxiv.org/abs/2507.10939

  17. [17]

    Enabling large-scale quantum computing via distributed and hybrid architectures,

    W. Tang, “Enabling large-scale quantum computing via distributed and hybrid architectures,” Ph.D. dissertation, Princeton University, 2025

  18. [18]

    Continuous optimization by quantum adaptive distribution search,

    K. Morimoto, Y. Takase, K. Mitarai, and K. Fujii, “Continuous optimization by quantum adaptive distribution search,” Physical Review Research, vol. 6, no. 2, p. 023191, 2024

  19. [19]

    Best-in-class quantum circuit simulation at scale with nvidia cuquantum appliance,

    T. Lubowe and S. Morino, “Best-in-class quantum circuit simulation at scale with nvidia cuquantum appliance,” 2022

  20. [20]

    A connectionist machine for genetic hillclimbing

    D. Ackley, A connectionist machine for genetic hillclimbing. Springer Science & Business Media, 2012

  21. [21]

    A Quantum Approximate Optimization Algorithm

    E. Farhi, J. Goldstone, and S. Gutmann, “A quantum approximate optimization algorithm,” arXiv preprint arXiv:1411.4028, 2014

  22. [22]

    Qaoa for max-cut requires hundreds of qubits for quantum speed-up,

    G. G. Guerreschi and A. Y. Matsuura, “QAOA for max-cut requires hundreds of qubits for quantum speed-up,” Scientific Reports, vol. 9, no. 1, p. 6903, 2019

  23. [23]

    Evaluating quantum approximate optimization algorithm: A case study,

    R. Shaydulin and Y. Alexeev, “Evaluating quantum approximate optimization algorithm: A case study,” in 2019 Tenth International Green and Sustainable Computing Conference (IGSC). IEEE, 2019, pp. 1–6

  24. [24]

    Barren plateaus in quantum neural network training landscapes,

    J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, “Barren plateaus in quantum neural network training landscapes,” Nature Communications, vol. 9, no. 1, p. 4812, 2018

  25. [25]

    Effect of barren plateaus on gradient-free optimization,

    M. V. S. Cerezo de la Roca, A. T. Arrasmith, P. J. Czarnik, L. Cincio, and P. J. Coles, “Effect of barren plateaus on gradient-free optimization,” Quantum, vol. 5, no. LA-UR–20-29699, 2021

  26. [26]

    A quantum approximate optimization algorithm for continuous problems,

    G. Verdon, J. M. Arrazola, K. Brádler, and N. Killoran, “A quantum approximate optimization algorithm for continuous problems,” arXiv preprint arXiv:1902.00409, 2019

  27. [27]

    Implementation of a quantum approximate optimization algorithm for continuous variables with qiskit,

    M. Luna, V. Patare, G. Aksoy, and G. Cattan, “Implementation of a quantum approximate optimization algorithm for continuous variables with Qiskit,” HAL, 2025

  28. [28]

    A variational eigenvalue solver on a photonic quantum processor,

    A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O’Brien, “A variational eigenvalue solver on a photonic quantum processor,” Nature Communications, vol. 5, no. 1, p. 4213, 2014

  29. [29]

    The variational quantum eigensolver: a review of methods and best practices,

    J. Tilly, H. Chen, S. Cao, D. Picozzi, K. Setia, Y. Li, E. Grant, L. Wossnig, I. Rungger, G. H. Booth et al., “The variational quantum eigensolver: a review of methods and best practices,” Physics Reports, vol. 986, pp. 1–128, 2022

  30. [30]

    Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets,

    A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, “Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets,” Nature, vol. 549, no. 7671, pp. 242–246, 2017

  31. [31]

    Quantum adaptive search: a hybrid quantum-classical algorithm for global optimization of multivariate functions,

    G. Intoccia, U. Chirico, V. Schiano Di Cola, G. P. Pepe, and S. Cuomo, “Quantum adaptive search: a hybrid quantum-classical algorithm for global optimization of multivariate functions,” Frontiers in Applied Mathematics and Statistics, vol. 11, p. 1662682, 2025

  32. [32]

    Cascaded variational quantum eigensolver algorithm,

    D. Gunlycke, C. S. Hellberg, and J. P. Stenger, “Cascaded variational quantum eigensolver algorithm,” Physical Review Research, vol. 6, no. 1, p. 013238, 2024

  33. [33]

    Trading classical and quantum computational resources,

    S. Bravyi, G. Smith, and J. A. Smolin, “Trading classical and quantum computational resources,” Physical Review X, vol. 6, no. 2, p. 021043, 2016

  34. [34]

    Simulating large quantum circuits on a small quantum computer,

    T. Peng, A. W. Harrow, M. Ozols, and X. Wu, “Simulating large quantum circuits on a small quantum computer,” Physical Review Letters, vol. 125, no. 15, p. 150504, 2020

  35. [35]

    Particle swarm optimization with particles having quantum behavior,

    J. Sun, B. Feng, and W. Xu, “Particle swarm optimization with particles having quantum behavior,” in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 1. IEEE, 2004, pp. 325–331

  36. [36]

    A review of quantum-behaved particle swarm optimization,

    W. Fang, J. Sun, Y. Ding, X. Wu, and W. Xu, “A review of quantum-behaved particle swarm optimization,” IETE Technical Review, vol. 27, no. 4, pp. 336–348, 2010

  37. [37]

    Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection,

    J. Sun, W. Fang, X. Wu, V. Palade, and W. Xu, “Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection,” Evolutionary Computation, vol. 20, no. 3, pp. 349–393, 2012

  38. [38]

    Quantum particle swarm optimization for electromagnetics,

    S. M. Mikki and A. A. Kishk, “Quantum particle swarm optimization for electromagnetics,” IEEE Transactions on Antennas and Propagation, vol. 54, no. 10, pp. 2764–2775, 2006

  39. [39]

    Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems,

    L. d. S. Coelho, “Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems,” Expert Systems with Applications: An International Journal, vol. 37, no. 2, pp. 1676–1683, 2010

  40. [40]

    Particle swarm optimization based on k-means clustering and adaptive dual-groups strategy,

    Y. Fan, D. Tian, Q. Xu, J. Sun, Q. Xu, and Z. Shi, “Particle swarm optimization based on k-means clustering and adaptive dual-groups strategy,” Swarm and Evolutionary Computation, vol. 100, p. 102226, 2026

  41. [41]

    A hybrid PSO-BFGS strategy for global optimization of multimodal functions,

    S. Li, M. Tan, I. W. Tsang, and J. T.-Y. Kwok, “A hybrid PSO-BFGS strategy for global optimization of multimodal functions,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 41, no. 4, pp. 1003–1014, 2011

  42. [42]

    Dynamic multi-swarm particle swarm optimizer,

    J.-J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer,” in Proceedings 2005 IEEE Swarm Intelligence Symposium, SIS 2005. IEEE, 2005, pp. 124–129

  44. [44]

    A modified quantum-based particle swarm optimization for engineering inverse problem,

    O. U. Rehman, S. Yang, and S. U. Khan, “A modified quantum-based particle swarm optimization for engineering inverse problem,” COMPEL-The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 36, no. 1, pp. 168–187, 2017

  45. [45]

    Reduction methods in nonlinear programming,

    D. Himmelblau, “Reduction methods in nonlinear programming,” 1982

  46. [46]

    Improving variational quantum optimization using CVaR,

    P. K. Barkoutsos, G. Nannicini, A. Robert, I. Tavernelli, and S. Woerner, “Improving variational quantum optimization using CVaR,” Quantum, vol. 4, p. 256, 2020

  47. [47]

    Hybrid VQE-CVQE algorithm using diabatic state preparation,

    J. Stenger, C. S. Hellberg, and D. Gunlycke, “Hybrid VQE-CVQE algorithm using diabatic state preparation,” arXiv preprint arXiv:2512.04801, 2025

  48. [48]

    Cuda quantum: The platform for integrated quantum-classical computing,

    J.-S. Kim, A. McCaskey, B. Heim, M. Modani, S. Stanwyck, and T. Costa, “CUDA Quantum: The platform for integrated quantum-classical computing,” in 2023 60th ACM/IEEE Design Automation Conference (DAC). IEEE, 2023, pp. 1–4

  49. [49]

    The CMA evolution strategy: A tutorial,

    N. Hansen, “The CMA evolution strategy: A tutorial,” arXiv preprint arXiv:1604.00772, 2016

  50. [50]

    A. R. Conn, N. I. Gould, and P. L. Toint, Trust region methods. SIAM, 2000

  51. [51]

    A direct search optimization method that models the objective and constraint functions by linear interpolation,

    M. J. Powell, “A direct search optimization method that models the objective and constraint functions by linear interpolation,” in Advances in Optimization and Numerical Analysis. Springer, 1994, pp. 51–67

  52. [52]

    Self-adjusting parameter control for surrogate-assisted constrained optimization under limited budgets,

    S. Bagheri, W. Konen, M. Emmerich, and T. Bäck, “Self-adjusting parameter control for surrogate-assisted constrained optimization under limited budgets,” Applied Soft Computing, vol. 61, pp. 377–393, 2017

  53. [53]

    Derivative-free optimization: a review of algorithms and comparison of software implementations,

    L. M. Rios and N. V. Sahinidis, “Derivative-free optimization: a review of algorithms and comparison of software implementations,” Journal of Global Optimization, vol. 56, no. 3, pp. 1247–1293, 2013