pith. machine review for the scientific record. sign in

arxiv: 2605.09035 · v1 · submitted 2026-05-09 · 🧮 math.OC

Recognition: 2 theorem links

· Lean Theorem

Riemannian optimal reduction for linear systems with quadratic outputs

Chenglong Liu, Xiaolong Wang

Pith reviewed 2026-05-12 02:03 UTC · model grok-4.3

classification 🧮 math.OC
keywords model order reductionRiemannian optimizationH2-optimal reductionquadratic outputslinear systemsmanifold geometryBFGS method
0
0 comments X

The pith

Directly optimizing the coefficient matrices of a reduced model on a Riemannian manifold produces H2-optimal approximations for linear systems with quadratic outputs.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops an H2-optimal model order reduction method for linear systems whose outputs depend quadratically on the state. It formulates the task as an optimization problem whose variables are the entries of the reduced system's matrices rather than projection bases. A product manifold is defined so that every point corresponds to a stable reduced model. The geometry of the manifold supplies an explicit Riemannian gradient of the H2-error objective, which is then minimized by a limited-memory Riemannian BFGS algorithm. The approach sharply reduces the number of decision variables and yields reduced models with smaller H2 errors than projection-based techniques in the reported examples. A reader cares because accurate low-dimensional models speed up simulation and control design when quadratic terms capture energy or power quantities.

Core claim

The H2-optimal model order reduction problem for linear systems with quadratic outputs is recast as an optimization over the coefficient matrices of the reduced system. These matrices are varied on a product manifold that enforces asymptotic stability. An explicit formula for the Riemannian gradient of the H2 error is derived from the manifold geometry, and a limited-memory Riemannian BFGS solver is applied iteratively to produce reduced models that closely match the original system's quadratic-output behavior.

What carries the argument

The product manifold of asymptotically stable reduced linear systems, on which the coefficient matrices are optimized; the explicit Riemannian gradient of the H2-error functional, derived from manifold geometry and supplied to a limited-memory BFGS iteration.

If this is right

  • The number of optimization variables is reduced dramatically because only the reduced matrices, not full projection matrices, are varied.
  • Stability of the reduced model is guaranteed by construction through the manifold definition.
  • Numerical examples show that the resulting reduced models achieve lower H2 errors than conventional projection-based methods.
  • The reduced models accurately reproduce the quadratic output response of the original system.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same manifold construction could be adapted to enforce additional properties such as passivity by changing the manifold definition.
  • The method's direct-matrix formulation may extend to other error measures or system classes if analogous manifold geometries can be identified.
  • In control or simulation applications the smaller variable count and guaranteed stability could enable real-time use of quadratic-output models.

Load-bearing premise

The product manifold is defined so that it correctly enforces stability of every candidate reduced model, and the Riemannian BFGS iteration reliably reaches a high-quality local minimum of the H2 error.

What would settle it

A numerical test in which the H2 error of the reduced model produced by the method exceeds the error obtained by a standard projection-based H2-optimal technique, or in which the reduced model is unstable despite lying on the manifold.

Figures

Figures reproduced from arXiv: 2605.09035 by Chenglong Liu, Xiaolong Wang.

Figure 1
Figure 1. Figure 1: Convergence history of the LRBFGS algorithm: (a) o [PITH_FULL_IMAGE:figures/full_fig_p013_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Performance comparison of the reduced-order mode [PITH_FULL_IMAGE:figures/full_fig_p014_2.png] view at source ↗
read the original abstract

This paper presents an H2-optimal model order reduction (MOR) method for linear systems with quadratic outputs based on Riemannian optimization. The H2-optimal MOR is formulated as an optimization problem in which the optimization variables are selected directly as the coefficient matrices of reduced models. The product manifold is defined properly to impose the stability condition for reduced models. By exploiting the geometric properties of the product manifold, we derive an explicit formula for Riemannian gradient of the objective function, and then a limited-memory Riemannian BFGS method is adopted to solve the resulting optimization problem iteratively. In contrast to selecting projection matrices, optimizing coefficient matrices of reduced models reduces the amount of variables dramatically. Numerical simulation results demonstrate that reduced models accurately approximate the original system and exhibit superior performance in terms of H2 error, which confirms the effectiveness of the proposed algorithm.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper formulates H2-optimal model order reduction for linear systems with quadratic outputs as a direct optimization over the coefficient matrices (A_r, B_r, C_r, D_r) of the reduced model. It defines a product manifold to enforce stability of the reduced system, derives an explicit Riemannian gradient of the H2-error objective on this manifold, and applies a limited-memory Riemannian BFGS method to minimize the error. Numerical examples are reported to show that the resulting reduced models achieve lower H2 error than alternative approaches while using fewer optimization variables than projection-based methods.

Significance. If the manifold construction and gradient formula are valid and the iterates remain stable, the approach would reduce the number of decision variables relative to projection-matrix optimization and could yield improved H2 approximations for quadratic-output systems. The explicit Riemannian gradient derivation on a stability-constrained manifold would constitute a concrete technical contribution to geometric optimization in control theory.

major comments (2)
  1. [manifold definition paragraph] The section defining the product manifold (abstract and the paragraph beginning 'The product manifold is defined properly...'): the claim that this manifold 'imposes the stability condition' is load-bearing for the entire method, yet the set of Hurwitz matrices is an open subset of R^{n×n} rather than a closed embedded submanifold. No explicit construction (e.g., via factorization, retraction, or barrier) is given that guarantees every point on the manifold yields Re(eig(A_r)) < 0 while still covering a sufficiently large portion of the stable coefficient space. If the limited-memory BFGS update can leave this set, the H2 norm becomes undefined and subsequent claims about convergence and error reduction do not hold.
  2. [Riemannian gradient derivation] The paragraph deriving the Riemannian gradient (the sentence 'By exploiting the geometric properties of the product manifold, we derive an explicit formula...'): the formula is stated to be explicit, but no intermediate steps, tangent-space projection, or verification that the gradient is orthogonal to the normal space of the stability constraint are supplied. Without these details it is impossible to confirm that the gradient is correctly computed on the manifold rather than in the ambient Euclidean space.
minor comments (2)
  1. [numerical simulation results] Numerical results paragraph: no error bars, standard deviations across random initializations, or explicit statement of the baseline methods used for comparison are provided, making it difficult to judge whether the reported H2-error improvement is statistically significant.
  2. [abstract and introduction] Notation: the symbols for the original and reduced quadratic output matrices are not introduced before they appear in the objective function, which reduces readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and constructive comments on our manuscript. The points raised highlight important aspects of the manifold construction and gradient derivation that require clarification. We address each major comment below and will revise the manuscript accordingly to strengthen the presentation.

read point-by-point responses
  1. Referee: [manifold definition paragraph] The section defining the product manifold (abstract and the paragraph beginning 'The product manifold is defined properly...'): the claim that this manifold 'imposes the stability condition' is load-bearing for the entire method, yet the set of Hurwitz matrices is an open subset of R^{n×n} rather than a closed embedded submanifold. No explicit construction (e.g., via factorization, retraction, or barrier) is given that guarantees every point on the manifold yields Re(eig(A_r)) < 0 while still covering a sufficiently large portion of the stable coefficient space. If the limited-memory BFGS update can leave this set, the H2 norm becomes undefined and subsequent claims about convergence and error reduction do not hold.

    Authors: The product manifold is the Cartesian product of the open set of Hurwitz matrices (an open submanifold of R^{n×n}) with the Euclidean spaces for B_r, C_r, and D_r. By definition, every point on this manifold corresponds to a stable reduced-order model. We acknowledge that the manuscript does not provide an explicit retraction or barrier function to keep iterates strictly inside the open set during the limited-memory Riemannian BFGS iterations. In the revision we will add a detailed description of the retraction (a standard one for the stable-matrix component that projects back into the Hurwitz region) together with the initialization and step-size safeguards used in the numerical examples. This construction covers the entire open set of stable matrices. revision: yes

  2. Referee: [Riemannian gradient derivation] The paragraph deriving the Riemannian gradient (the sentence 'By exploiting the geometric properties of the product manifold, we derive an explicit formula...'): the formula is stated to be explicit, but no intermediate steps, tangent-space projection, or verification that the gradient is orthogonal to the normal space of the stability constraint are supplied. Without these details it is impossible to confirm that the gradient is correctly computed on the manifold rather than in the ambient Euclidean space.

    Authors: We agree that the current presentation of the Riemannian gradient lacks the intermediate algebraic steps. Because the stability component is an open set, its tangent space coincides with the ambient space at every point; the projection therefore acts only on the Euclidean factors. In the revised manuscript we will insert the full derivation: the Euclidean gradient of the H2-error functional, its decomposition according to the product structure, the (trivial) projection onto the tangent space of the open Hurwitz component, and the explicit verification that the resulting vector lies in the tangent space. These additions will confirm that the formula is the true Riemannian gradient on the manifold. revision: yes

Circularity Check

0 steps flagged

No circularity: gradient derived from manifold geometry, not tautological fit

full rationale

The paper formulates H2-optimal MOR directly as minimization of the H2 error over coefficient matrices of the reduced model, defines a product manifold to enforce stability, derives an explicit Riemannian gradient formula by exploiting the manifold's geometry, and applies a standard limited-memory Riemannian BFGS solver. No step reduces the objective, gradient, or claimed superiority to a fitted parameter or self-citation that is itself defined by the same quantities; the derivation chain is self-contained and relies on independent differential-geometric calculations applied to the given objective.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Based on abstract only; the approach rests on standard MOR assumptions that stable reduced models exist and that the H2 norm is an appropriate error measure. No free parameters, new axioms, or invented entities are described.

pith-pipeline@v0.9.0 · 5426 in / 1211 out tokens · 63439 ms · 2026-05-12T02:03:52.740164+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

29 extracted references · 29 canonical work pages

  1. [1]

    B. C. Moore, Principal component analysis in linear syste ms: Controllability, observability, and model reduction, IEEE Transactions on Automatic Contro l 26 (1981) 17–32

  2. [2]

    Astolfi, Model reduction by moment matching for linear and nonlinear systems, IEEE Transactions on Automatic Control 55 (2010) 2321–2336

    A. Astolfi, Model reduction by moment matching for linear and nonlinear systems, IEEE Transactions on Automatic Control 55 (2010) 2321–2336

  3. [3]

    Benner, M

    P. Benner, M. Ohlberger, A. Cohen, K. Willcox, Model Reduc tion and Approximation: Theory and Algorithms, SIAM, 2017

  4. [4]

    A. C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, PA, 2005

  5. [5]

    Van Beeumen, K

    R. Van Beeumen, K. Meerbergen, Model reduction by balance d truncation of linear systems with a quadratic output, in: AIP Conference Proceedings, vo lume 1281, American Institute of Physics, 2010, pp. 2033–2036

  6. [6]

    Benner, T

    P. Benner, T. Breiten, Two-sided projection methods for no nlinear model order reduction, SIAM Journal on Scientific Computing 37 (2015) B239–B260

  7. [7]

    Benner, P

    P. Benner, P. Goyal, I. P. Duff, Gramians, energy functiona ls, and balanced truncation for linear dynamical systems with quadratic outputs, IEEE Tran sactions on Automatic Control 67 (2021) 886–893

  8. [8]

    I. V. Gosea, A. C. Antoulas, A two-sided iterative framew ork for model reduction of linear systems with quadratic output, in: 2019 IEEE 58th Conferenc e on Decision and Control (CDC), IEEE, 2019, pp. 7812–7817

  9. [9]

    I. V. Gosea, S. Gugercin, Data-driven modeling of linear dynamical systems with quadratic output in the AAA framework, Journal of Scientific Computing 91 (2022) 16

  10. [10]

    Y. Xu, T. Zeng, Fast optimal H2 model reduction algorithms based on grassmann manifold optimization, International Journal of Numerical Analysi s & Modeling 10 (2013) 972–991

  11. [11]

    W.-Y. Yan, J. Lam, An approximate approach to H2 optimal model reduction, IEEE Trans- actions on Automatic Control 44 (1999) 1341–1358

  12. [12]

    Absil, R

    P.-A. Absil, R. Mahony, R. Sepulchre, Optimization Alg orithms on Matrix Manifolds, Prince- ton University Press, 2008

  13. [13]

    H. Sato, K. Sato, A new H 2 optimal model reduction method based on riemannian conjuga te gradient method, in: 2016 IEEE 55th Conference on Decision a nd Control (CDC), IEEE, 2016, pp. 5762–5768

  14. [14]

    T. Zeng, C. Lu, Two-sided grassmann manifold algorithm for optimal model reduction, Inter- national Journal for Numerical Methods in Engineering 104 ( 2015) 928–943. 15

  15. [15]

    Xu, Y.-L

    K.-L. Xu, Y.-L. Jiang, An unconstrained H2 model order reduction optimisation algorithm based on the Stiefel manifold for bilinear systems, Interna tional Journal of Control 92 (2019) 950–959

  16. [16]

    Xu, Y.-L

    K.-L. Xu, Y.-L. Jiang, Z.-X. Yang, H2 order-reduction for bilinear systems based on Grassmann manifold, Journal of the Franklin Institute 352 (2015) 4467 –4479

  17. [17]

    Yang, Y.-L

    P. Yang, Y.-L. Jiang, H2 optimal model reduction of coupled systems on the Grassmann manifold, Mathematical Modelling and Analysis 22 (2017) 78 5–808

  18. [18]

    Jiang, W.-G

    Y.-L. Jiang, W.-G. Wang, H2 optimal model order reduction of the discrete system on the product manifold, Applied Mathematical Modelling 69 (2019 ) 593–603

  19. [19]

    X. Wang, T. Tian, Riemannian optimization for model ord er reduction of linear systems with quadratic outputs, Journal of Computational and Applied Ma thematics 485 (2026) 117496. doi:10.1016/j.cam.2026.117496

  20. [20]

    Reiter, I

    S. Reiter, I. Pontes Duff, I. V. Gosea, S. Gugercin, H2 optimal model reduction of linear systems with multiple quadratic outputs, IEEE Transaction s on Automatic Control (2025). To appear

  21. [21]

    Sato, Riemannian optimal model reduction of stable l inear systems, IEEE Access 7 (2019) 9150–9159

    K. Sato, Riemannian optimal model reduction of stable l inear systems, IEEE Access 7 (2019) 9150–9159

  22. [22]

    G. H. Golub, C. F. Van Loan, Matrix Computations, 4th ed. , Johns Hopkins University Press, Baltimore, MD, USA, 2012

  23. [23]

    Boumal, An Introduction to Optimization on Smooth Man ifolds, Cambridge University Press, 2023

    N. Boumal, An Introduction to Optimization on Smooth Man ifolds, Cambridge University Press, 2023

  24. [24]

    Moakher, A differential geometric approach to the geo metric mean of symmetric positive- definite matrices, SIAM Journal on Matrix Analysis and Appli cations 26 (2005) 735–747

    M. Moakher, A differential geometric approach to the geo metric mean of symmetric positive- definite matrices, SIAM Journal on Matrix Analysis and Appli cations 26 (2005) 735–747

  25. [25]

    Vandereycken, P.-A

    B. Vandereycken, P.-A. Absil, S. Vandewalle, Embedded g eometry of the set of symmetric positive semidefinite matrices of fixed rank, in: Proceeding s of the 15th IEEE Workshop on Statistical Signal Processing, IEEE, 2009, pp. 389–392

  26. [26]

    Jeuris, R

    B. Jeuris, R. Vandebril, B. Vandereycken, A survey and com parison of contemporary al- gorithms for computing the matrix geometric mean, Electron ic Transactions on Numerical Analysis 39 (2012) 379–402

  27. [27]

    Obara, K

    M. Obara, K. Sato, H. Sakamoto, T. Okuno, A. Takeda, Stab le linear system identification with prior knowledge by riemannian sequential quadratic op timization, IEEE Transactions on Automatic Control 69 (2024) 2060–2066

  28. [28]

    Absil, and K

    W. Huang, P.-A. Absil, K. A. Gallivan, A Riemannian BFGS m ethod without differentiated retraction for nonconvex optimization problems, SIAM Jour nal on Optimization 28 (2018) 470–495. doi:10.1137/17M1127582

  29. [29]

    A. N. Diaz, M. Heinkenschloss, I. V. Gosea, A. C. Antoula s, Interpolatory model reduction of quadratic-bilinear dynamical systems with quadratic-bil inear outputs, Advances in Computa- tional Mathematics 49 (2023) 1–28. 16