Recognition: 2 Lean theorem links
Subspace Pruning via Principal Vectors for Accurate Koopman-Based Approximations
Pith reviewed 2026-05-14 18:52 UTC · model grok-4.3
The pith
A hybrid principal-vector pruning framework refines Koopman subspace invariance with error bounds and rank-one update efficiency for lifted linear prediction.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We establish the geometric equivalence between consistency-based methods and principal-vector pruning, and build on this insight to introduce a hybrid strategy that balances between multiple and single principal vector pruning for improved numerical stability and scalability.
Load-bearing premise
That the principal angles between a candidate subspace and its image under the Koopman operator provide a sufficient and refinable measure of invariance error that can be systematically reduced by pruning without losing essential dynamical information.
Original abstract
The accuracy of Koopman operator approximations over finite-dimensional spaces relies critically on their invariance properties. These can be rigorously quantified via the principal angles between a candidate subspace and its image under the Koopman operator. This paper proposes a unified algebraic framework for subspace pruning designed to systematically refine the invariance error. We establish the geometric equivalence between consistency-based methods and principal-vector pruning, and build on this insight to introduce a hybrid strategy that balances between multiple and single principal vector pruning for improved numerical stability and scalability. We derive error bounds for the retention of approximate and external eigenfunctions, demonstrating that the multi-vector approach mitigates the numerical drift inherent to sequential pruning. To ensure scalability, we develop an efficient numerical update scheme based on rank-one modifications that reduces the computational complexity of tracking principal angles by an order of magnitude. Finally, we exploit the subspace obtained from the pruning algorithms to build a lifted linear model for state prediction that accounts for the trade-offs between improving invariance and minimizing state reconstruction error. Simulations demonstrate the effectiveness of our approach.
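The invariance error the abstract quantifies via principal angles can be made concrete with the Björck–Golub SVD computation the paper cites [48]. The sketch below is illustrative, not the authors' code: it assumes `V` and `KV` are sampled bases for a candidate subspace S and its image K(S), and estimates the invariance proximity δ(S) = sin θ_max(S, KS).

```python
import numpy as np

def invariance_proximity(V, KV):
    """Estimate delta(S) = sin(theta_max) between span(V) and span(KV).

    V, KV : (n, s) matrices whose columns span the candidate subspace S
    and its image under the Koopman approximation. After orthonormalising
    both bases, the singular values of Q_V^T Q_K are the cosines of the
    principal angles between the two subspaces (Bjorck-Golub).
    """
    Qv, _ = np.linalg.qr(V)
    Qk, _ = np.linalg.qr(KV)
    cosines = np.clip(np.linalg.svd(Qv.T @ Qk, compute_uv=False), 0.0, 1.0)
    theta_max = np.arccos(cosines.min())  # smallest cosine = largest angle
    return np.sin(theta_max)

# A subspace compared with itself is exactly invariant: delta = 0.
rng = np.random.default_rng(0)
K = rng.standard_normal((5, 5))   # stand-in for a finite-dimensional Koopman matrix
V = rng.standard_normal((5, 2))   # stand-in for a sampled dictionary basis
print(invariance_proximity(V, V))       # 0: identical subspaces
print(invariance_proximity(V, K @ V))   # generally positive: span(V) is not invariant
```

Pruning, in this picture, discards directions of S that contribute the largest principal angles, driving δ(S) toward zero.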
Editorial analysis
A structured set of objections, weighed in public.
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
We establish the geometric equivalence between consistency-based methods and principal-vector pruning... hybrid MPV-SPV strategy... rank-one modifications... invariance proximity δ(S) = sin θ_max(S,KS)
-
IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
principal angles... SVD computation... external eigenfunction retention bounds
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Construct the Re-alignment Matrix: Let E ∈ ℝ^((s−k)×(s−k)) be the matrix of eigenvectors. We construct a transformation matrix T ∈ ℝ^(s×(s−k)) by padding E with zeros to align with the original s-dimensional space: T = [E; 0_(k×(s−k))]. (25)
-
[2]
Update the Triangular Factor: We apply the transformation T to the existing upper triangular factor R to form the intermediate matrix C = RT ∈ ℝ^(s×(s−k)). We then perform a QR decomposition of C as C = Q_C R_C, (26) where Q_C ∈ ℝ^(s×(s−k)) has orthonormal columns and R_new = R_C ∈ ℝ^((s−k)×(s−k)) is the new upper triangular factor.
-
[3]
Update the Orthogonal Basis: Finally, we update the orthogonal image basis W by applying the rotations derived above, W_new = W Q_C. (27) As we show next, the resulting matrices form the QR decomposition of the new image space K U_new. Lemma 6.4 (Correctness of Incremental QR): Consider the notation and construction of Section VI-A. The matrices W_new and R_new are a valid QR decomposition of K U_new, i.e., K U_new = W_new R_new.
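The identity behind the snippets above, K U_new = K U T = W R T = W Q_C R_C = W_new R_new, can be checked numerically. The sketch below uses random stand-ins for the Koopman matrix K, the basis U, and the eigenvector block E (illustrative assumptions, not the paper's data); only the update steps themselves follow equations (25)–(27).

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, k = 8, 5, 2          # ambient dimension, dictionary size, vectors pruned

# Existing factorisation of the image space: K U = W R (thin QR).
K = rng.standard_normal((n, n))      # stand-in Koopman matrix
U = rng.standard_normal((n, s))      # stand-in dictionary basis
W, R = np.linalg.qr(K @ U)

# Eq. (25): re-alignment matrix T = [E; 0] padding the retained eigenvectors.
E = rng.standard_normal((s - k, s - k))   # stand-in for the eigenvector block
T = np.vstack([E, np.zeros((k, s - k))])

# Eq. (26): update the triangular factor via a small QR of C = R T,
# instead of refactorising the full n x (s-k) image from scratch.
C = R @ T
Q_C, R_new = np.linalg.qr(C)

# Eq. (27): rotate the orthogonal basis.
W_new = W @ Q_C

# Lemma 6.4 check: K U_new = W_new R_new with U_new = U T,
# and W_new still has orthonormal columns.
U_new = U @ T
assert np.allclose(K @ U_new, W_new @ R_new)
assert np.allclose(W_new.T @ W_new, np.eye(s - k))
```

The savings come from the QR in the update acting on the small s×(s−k) matrix C rather than the n×(s−k) image, which is what reduces the cost of tracking principal angles across pruning steps.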
-
[4]
The corresponding eigenvalues are λ₁ = 1, λ₂ = 0.8, λ₃ = 0.64, and λ₄ = 0.9. We employ these eigenfunctions as the ground truth for evaluating the accuracy of the pruning algorithms. We initialize the search space with a large dictionary of basis functions with 1615 elements, comprising both polynomials (up to degree 4) and radial basis functions. We genera...
-
[5]
...using the same Wendland kernel and centers. Note that kernel EDMD performs the orthogonal projection using the kernel inner product, which is different from the standard L²(μ_X) inner product used in our other examples. As another baseline, we perform standard EDMD using the full initial dictionary without pruning. Finally, we apply the proposed MPV-SPV prun...
-
[6]
A unified algebraic framework for subspace pruning in Koopman operator approximation via principal vectors,
D. Shah and J. Cortés, “A unified algebraic framework for subspace pruning in Koopman operator approximation via principal vectors,” in IEEE Conf. on Decision and Control, (Honolulu, Hawaii), Dec. 2026. Submitted
2026
-
[7]
Dynamical systems of continuous spectra,
B. O. Koopman and J. v. Neumann, “Dynamical systems of continuous spectra,” Proceedings of the National Academy of Sciences, vol. 18, no. 3, pp. 255–263, 1932
1932
-
[8]
Data-driven approximation of the Koopman generator: Model reduction, system identification, and control,
S. Klus, F. Nüske, S. Peitz, J. H. Niemann, C. Clementi, and C. Schütte, “Data-driven approximation of the Koopman generator: Model reduction, system identification, and control,” Physica D: Nonlinear Phenomena, vol. 406, p. 132416, 2020
2020
-
[9]
Dynamic mode decomposition and Koopman spectral analysis of boundary layer separation-induced transition,
A. Dotto, D. Lengani, D. Simoni, and A. Tacchella, “Dynamic mode decomposition and Koopman spectral analysis of boundary layer separation-induced transition,” Physics of Fluids, vol. 33, no. 10, 2021
2021
-
[10]
Graph neural network and Koopman models for learning networked dynamics: A comparative study on power grid transients prediction,
S. P. Nandanoori, S. Guan, S. Kundu, S. Pal, K. Agarwal, Y. Wu, and S. Choudhury, “Graph neural network and Koopman models for learning networked dynamics: A comparative study on power grid transients prediction,” IEEE Access, vol. 10, pp. 32337–32349, 2022
2022
-
[11]
Koopman operators in robot learning,
L. Shi, M. Haseli, G. Mamakoukas, D. Bruder, I. Abraham, T. Murphey, J. Cortés, and K. Karydis, “Koopman operators in robot learning,” IEEE Transactions on Robotics, vol. 42, pp. 1088–1107, 2026
2026
-
[12]
Global stability analysis using the eigenfunctions of the Koopman operator,
A. Mauroy and I. Mezić, “Global stability analysis using the eigenfunctions of the Koopman operator,” IEEE Transactions on Automatic Control, vol. 61, no. 11, pp. 3356–3369, 2016
2016
-
[13]
On the equivalence of contraction and Koopman approaches for nonlinear stability and control,
B. Yi and I. R. Manchester, “On the equivalence of contraction and Koopman approaches for nonlinear stability and control,” IEEE Transactions on Automatic Control, vol. 69, no. 7, pp. 4336–4351, 2024
2024
-
[14]
Supervised learning of Lyapunov functions using Laplace averages of approximate Koopman eigenfunctions,
S. A. Deka and D. V. Dimarogonas, “Supervised learning of Lyapunov functions using Laplace averages of approximate Koopman eigenfunctions,” IEEE Control Systems Letters, vol. 7, pp. 3072–3077, 2023
2023
-
[15]
Uniform global stability of switched nonlinear systems in the Koopman operator framework,
C. M. Zagabe and A. Mauroy, “Uniform global stability of switched nonlinear systems in the Koopman operator framework,” SIAM Journal on Control and Optimization, vol. 63, no. 1, pp. 472–501, 2025
2025
-
[16]
Learning regions of attraction in unknown dynamical systems via Zubov-Koopman lifting: Regularities and convergence,
Y. Meng, R. Zhou, and J. Liu, “Learning regions of attraction in unknown dynamical systems via Zubov-Koopman lifting: Regularities and convergence,” IEEE Transactions on Automatic Control, 2025
2025
-
[17]
Optimal construction of Koopman eigenfunctions for prediction and control,
M. Korda and I. Mezić, “Optimal construction of Koopman eigenfunctions for prediction and control,” IEEE Transactions on Automatic Control, vol. 65, no. 12, pp. 5114–5129, 2020
2020
-
[18]
Modeling nonlinear control systems via Koopman control family: universal forms and subspace invariance proximity,
M. Haseli and J. Cortés, “Modeling nonlinear control systems via Koopman control family: universal forms and subspace invariance proximity,” Automatica, vol. 185, p. 112722, 2026
2026
-
[19]
Two roads to Koopman operator theory for control: infinite input sequences and operator families,
M. Haseli, I. Mezić, and J. Cortés, “Two roads to Koopman operator theory for control: infinite input sequences and operator families,” IEEE Transactions on Automatic Control, 2025. Submitted
2025
-
[20]
Data-driven feedback linearization using the Koopman generator,
D. Gadginmath, V. Krishnan, and F. Pasqualetti, “Data-driven feedback linearization using the Koopman generator,” IEEE Transactions on Automatic Control, vol. 69, no. 12, pp. 8844–8851, 2024
2024
-
[21]
Neural Koopman control barrier functions for safety-critical control of unknown nonlinear systems,
V. Zinage and E. Bakolas, “Neural Koopman control barrier functions for safety-critical control of unknown nonlinear systems,” in American Control Conference, pp. 3442–3447, IEEE, 2023
2023
-
[22]
Towards global optimal control via Koopman lifts,
M. E. Villanueva, C. N. Jones, and B. Houska, “Towards global optimal control via Koopman lifts,” Automatica, vol. 132, p. 109610, 2021
2021
-
[23]
Markov chain Monte Carlo for Koopman-based optimal control,
J. Hespanha and K. Çamsari, “Markov chain Monte Carlo for Koopman-based optimal control,” IEEE Control Systems Letters, vol. 8, pp. 1901–1906, 2024
2024
-
[24]
SafEDMD: A Koopman-based data-driven controller design framework for nonlinear dynamical systems,
R. Strässer, M. Schaller, K. Worthmann, J. Berberich, and F. Allgöwer, “SafEDMD: A Koopman-based data-driven controller design framework for nonlinear dynamical systems,” Automatica, vol. 185, p. 112732, 2026
2026
-
[25]
Koopman-based control using sum-of-squares optimization: Improved stability guarantees and data efficiency,
R. Strässer, J. Berberich, and F. Allgöwer, “Koopman-based control using sum-of-squares optimization: Improved stability guarantees and data efficiency,” European Journal of Control, vol. 86, p. 101367, 2025. Special Issue on the European Control Conference 2025
2025
-
[26]
Controller design for bilinear neural feedback loops,
D. Shah and J. Cortés, “Controller design for bilinear neural feedback loops,” IEEE Control Systems Letters, vol. 9, pp. 1712–1717, 2025
2025
-
[27]
An overview of Koopman-based control: From error bounds to closed-loop guarantees,
R. Strässer, K. Worthmann, I. Mezić, J. Berberich, M. Schaller, and F. Allgöwer, “An overview of Koopman-based control: From error bounds to closed-loop guarantees,” Annual Reviews in Control, vol. 61, p. 101035, 2026
2026
-
[28]
Properties of immersions for systems with multiple limit sets with implications to learning Koopman embeddings,
Z. Liu, N. Ozay, and E. D. Sontag, “Properties of immersions for systems with multiple limit sets with implications to learning Koopman embeddings,” Automatica, vol. 176, p. 112226, 2025
2025
-
[29]
Dynamic mode decomposition of numerical and experimental data,
P. J. Schmid, “Dynamic mode decomposition of numerical and experimental data,” Journal of Fluid Mechanics, vol. 656, pp. 5–28, 2010
2010
-
[30]
A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition,
M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, “A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition,” Journal of Nonlinear Science, vol. 25, no. 6, pp. 1307–1346, 2015
2015
-
[31]
On convergence of extended dynamic mode decomposition to the Koopman operator,
M. Korda and I. Mezić, “On convergence of extended dynamic mode decomposition to the Koopman operator,” Journal of Nonlinear Science, vol. 28, no. 2, pp. 687–710, 2018
2018
-
[32]
Finite-data error bounds for Koopman-based prediction and control,
F. Nüske, S. Peitz, F. Philipp, M. Schaller, and K. Worthmann, “Finite-data error bounds for Koopman-based prediction and control,” Journal of Nonlinear Science, vol. 33, no. 1, p. 14, 2023
2023
-
[33]
L∞-error bounds for approximations of the Koopman operator by kernel extended dynamic mode decomposition,
F. Köhne, F. M. Philipp, M. Schaller, A. Schiela, and K. Worthmann, “L∞-error bounds for approximations of the Koopman operator by kernel extended dynamic mode decomposition,” SIAM Journal on Applied Dynamical Systems, vol. 24, no. 1, pp. 501–529, 2025
2025
-
[34]
Koopman for stochastic dynamics: error bounds for kernel extended dynamic mode decomposition
M. Hertel, F. M. Philipp, M. Schaller, and K. Worthmann, “Koopman for stochastic dynamics: error bounds for kernel extended dynamic mode decomposition,” arXiv preprint arXiv:2512.20247, 2025
2025
-
[35]
Generalizing dynamic mode decomposition: balancing accuracy and expressiveness in Koopman approximations,
M. Haseli and J. Cortés, “Generalizing dynamic mode decomposition: balancing accuracy and expressiveness in Koopman approximations,” Automatica, vol. 153, p. 111001, 2023
2023
-
[36]
Rigged dynamic mode decomposition: Data-driven generalized eigenfunction decompositions for Koopman operators,
M. J. Colbrook, C. Drysdale, and A. Horning, “Rigged dynamic mode decomposition: Data-driven generalized eigenfunction decompositions for Koopman operators,” SIAM Journal on Applied Dynamical Systems, vol. 24, no. 2, pp. 1150–1190, 2025
2025
-
[37]
ResKoopNet: Learning Koopman representations for complex dynamics with spectral residuals,
Y. Xu, K. Shao, N. K. Logothetis, and Z. Shen, “ResKoopNet: Learning Koopman representations for complex dynamics with spectral residuals,” in Forty-second International Conference on Machine Learning, (Vancouver, Canada), 2025
2025
-
[38]
Conformal online learning of deep Koopman linear embeddings,
B. Gao, J. Patracone, S. Chrétien, and O. Alata, “Conformal online learning of deep Koopman linear embeddings,” arXiv preprint arXiv:2511.12760, 2025
-
[39]
Evaluating the accuracy of the dynamic mode decomposition,
H. Zhang, S. T. M. Dawson, C. W. Rowley, E. A. Deem, and L. N. Cattafesta, “Evaluating the accuracy of the dynamic mode decomposition,” Journal of Computational Dynamics, vol. 7, no. 1, pp. 35–56, 2020
2020
-
[40]
Sparse-mode dynamic mode decomposition for disambiguating local and global structures,
S. M. Ichinaga, S. L. Brunton, A. Y. Aravkin, and J. N. Kutz, “Sparse-mode dynamic mode decomposition for disambiguating local and global structures,” arXiv preprint arXiv:2507.19787, 2025
-
[41]
Residual dynamic mode decomposition: robust and verified Koopmanism,
M. J. Colbrook, L. J. Ayton, and M. Szőke, “Residual dynamic mode decomposition: robust and verified Koopmanism,” Journal of Fluid Mechanics, vol. 955, p. A21, 2023
2023
-
[42]
Learning Koopman eigenfunctions and invariant subspaces from data: Symmetric Subspace Decomposition,
M. Haseli and J. Cortés, “Learning Koopman eigenfunctions and invariant subspaces from data: Symmetric Subspace Decomposition,” IEEE Transactions on Automatic Control, vol. 67, no. 7, pp. 3442–3457, 2022
2022
-
[43]
Temporal forward-backward consistency, not residual error, measures the prediction accuracy of Extended Dynamic Mode Decomposition,
M. Haseli and J. Cortés, “Temporal forward-backward consistency, not residual error, measures the prediction accuracy of Extended Dynamic Mode Decomposition,” IEEE Control Systems Letters, vol. 7, pp. 649–654, 2023
2023
-
[44]
Recursive forward-backward EDMD: Guaranteed algebraic search for Koopman invariant subspaces,
M. Haseli and J. Cortés, “Recursive forward-backward EDMD: Guaranteed algebraic search for Koopman invariant subspaces,” IEEE Access, vol. 13, pp. 61006–61025, 2025
2025
-
[45]
The geometry behind invariance proximity: tight error bounds for Koopman-based approximations,
M. Haseli and J. Cortés, “The geometry behind invariance proximity: tight error bounds for Koopman-based approximations,” 2024. Submitted. Available at https://arxiv.org/abs/2311.13033
-
[46]
Koopman Operator in Systems and Control,
A. Mauroy, Y. Susuki, and I. Mezić, Koopman Operator in Systems and Control. New York: Springer, 2020
2020
-
[47]
Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator,
Q. Li, F. Dietrich, E. M. Bollt, and I. G. Kevrekidis, “Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator,” Chaos, vol. 27, no. 10, p. 103111, 2017
2017
-
[48]
Numerical methods for computing angles between linear subspaces,
A. Björck and G. H. Golub, “Numerical methods for computing angles between linear subspaces,” Mathematics of Computation, vol. 27, no. 123, pp. 579–594, 1973
1973
-
[49]
Matrix Computations,
G. H. Golub and C. F. Van Loan, Matrix Computations. The Johns Hopkins University Press, 2013
2013
-
[50]
LAPACK Users' Guide,
E. Anderson, Z. Bai, C. Bischof, L. S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, et al., LAPACK Users' Guide. SIAM, 1999
1999