pith · machine review for the scientific record

arxiv: 2604.07685 · v1 · submitted 2026-04-09 · 💻 cs.LG

Recognition: no theorem link

Tensor-based computation of the Koopman generator via operator logarithm

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 18:17 UTC · model grok-4.3

classification 💻 cs.LG
keywords: Koopman generator · tensor train format · operator logarithm · system identification · nonlinear dynamical systems · data-driven modeling · high-dimensional systems

The pith

The Koopman generator is recovered in low-rank tensor train format by taking the logarithm of the Koopman operator's eigenvalues while preserving the low-rank structure.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper develops a data-driven method to identify governing equations of nonlinear dynamical systems by computing the infinitesimal Koopman generator directly in a compressed tensor train representation. Operator-logarithm approaches already avoid explicit time differentiation and permit larger sampling intervals, but they still face the curse of dimensionality in high-dimensional state spaces. The new technique maintains the low-rank tensor train format throughout the logarithm step, so the resulting generator remains tractable even when the underlying system has ten or more variables. Experiments recover accurate vector-field coefficients on both a four-dimensional Lotka-Volterra model and a ten-dimensional Lorenz-96 model, demonstrating that the method scales where earlier approaches do not.
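The operator-logarithm step at the heart of the method can be illustrated in a dense, non-TT setting. A minimal sketch, assuming a linear system and a linear dictionary so the EDMD Koopman matrix equals the flow map; the matrix `A`, time step, and data sizes are illustrative, not the paper's implementation:

```python
import numpy as np

# Dense sketch of the operator-logarithm idea (no tensor trains): for
# dx/dt = A x sampled at interval dt, the EDMD Koopman matrix is
# K = exp(dt * A), so A is recovered from the log of Koopman eigenvalues.
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0],
              [-2.0, -0.3]])               # hypothetical linear vector field
dt = 0.1

# Flow map exp(dt * A) built from the eigendecomposition of A
lamA, VA = np.linalg.eig(A)
flow = (VA @ np.diag(np.exp(dt * lamA)) @ np.linalg.inv(VA)).real

X = rng.standard_normal((2, 200))          # snapshots at time t
Y = flow @ X                               # paired snapshots at time t + dt

# EDMD with a linear dictionary: K minimizes ||Y - K X||_F
K = Y @ np.linalg.pinv(X)

# Generator from the logarithm of Koopman eigenvalues -- no time
# differentiation of the trajectory data is required
lam, V = np.linalg.eig(K)
L = (V @ np.diag(np.log(lam)) @ np.linalg.inv(V)).real / dt

print(np.max(np.abs(L - A)))               # close to machine precision
```

The TT contribution of the paper is to carry out the eigendecomposition-and-logarithm step without ever forming `K` densely, which is what the sketch above cannot do in high dimension.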

Core claim

We propose a data-driven method to compute the Koopman generator in a low-rank tensor train (TT) format by taking logarithms of Koopman eigenvalues while preserving the TT format, which enables accurate recovery of vector field coefficients and scalability to higher-dimensional systems.

What carries the argument

Preservation of the low-rank tensor train (TT) format when the logarithm is applied to the data-driven approximation of the Koopman operator.

If this is right

  • Accurate recovery of vector field coefficients is possible on four-dimensional Lotka-Volterra systems.
  • The approach scales to ten-dimensional Lorenz-96 systems without prohibitive cost.
  • Larger sampling intervals become usable because time differentiation is avoided.
  • Low-rank compression keeps memory and computation feasible as dimension grows.
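The sampling-interval point can be checked numerically on a linear toy system: a first-order finite-difference surrogate (K − I)/Δt for the generator degrades as Δt grows, while the eigenvalue-logarithm route stays exact (the system and intervals below are illustrative, not from the paper):

```python
import numpy as np

# Toy comparison for dx/dt = A x: finite differences vs operator logarithm
# as the sampling interval dt grows (illustrative setup).
A = np.array([[-0.5, 2.0],
              [-2.0, -0.5]])
lamA, VA = np.linalg.eig(A)

def koopman_matrix(dt):
    """Exact flow map exp(dt * A) from the eigendecomposition of A."""
    return (VA @ np.diag(np.exp(dt * lamA)) @ np.linalg.inv(VA)).real

for dt in (0.01, 0.5):
    K = koopman_matrix(dt)
    L_fd = (K - np.eye(2)) / dt            # first-order derivative surrogate
    lam, V = np.linalg.eig(K)              # logarithm of Koopman eigenvalues
    L_log = (V @ np.diag(np.log(lam)) @ np.linalg.inv(V)).real / dt
    print(dt, np.linalg.norm(L_fd - A), np.linalg.norm(L_log - A))
```

The logarithm stays exact as long as dt·Im(λ) remains inside the principal branch (|Im| < π); beyond that, eigenvalue aliasing limits any operator-based method.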

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same TT-logarithm step could be inserted into other operator-based identification pipelines that already use eigenvalue decompositions.
  • Pairing the recovered generator with sparse regression post-processing might yield more interpretable governing equations.
  • The method invites direct tests on experimental time-series data from fluid flows or networked systems where dimensionality is high but the dynamics remain low-rank.

Load-bearing premise

The logarithm of the Koopman operator can be taken while preserving the low-rank tensor train format without introducing significant approximation errors.
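This premise is about rank truncation inside the TT representation. A generic TT-SVD sketch (standard Oseledets-style successive SVDs, not the paper's specific algorithm) shows exactly where the approximation enters: the rank cut `r` after each SVD is the step whose error must stay controlled.

```python
import numpy as np

def tt_svd(tensor, eps=1e-12):
    """Decompose an order-d tensor into TT cores by successive truncated
    SVDs (standard TT-SVD); the rank cut is the approximation step."""
    shape = tensor.shape
    cores, r_prev = [], 1
    M = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # truncation happens here
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# A TT-rank-2 tensor: the sum of two rank-1 (outer-product) terms
rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal(n) for n in (4, 5, 6))
d, e, f = (rng.standard_normal(n) for n in (4, 5, 6))
T = np.einsum("i,j,k->ijk", a, b, c) + np.einsum("i,j,k->ijk", d, e, f)

cores = tt_svd(T)
ranks = [core.shape[2] for core in cores[:-1]]
print(ranks, np.max(np.abs(tt_to_full(cores) - T)))
```

Here the input is exactly low-rank, so truncation is lossless; the premise under review is that the logarithm step keeps the effective ranks small enough for the same cut to remain nearly lossless.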

What would settle it

If the vector-field coefficients recovered from the TT-format generator deviate substantially from the known coefficients in the Lotka-Volterra or Lorenz-96 test cases, the preservation claim does not hold.

Figures

Figures reproduced from arXiv: 2604.07685 by Jun Ohkubo, Tatsuya Kishimoto.

Figure 1. Segment of the vdP oscillator trajectory data used for EDMD.
Figure 2. Illustration of the proposed method for computing the Koopman generator.
Figure 3. Tensor contraction process for computing the Koopman generator.
Figure 4. Vector field coefficients.
Figure 5. Graphical representation of the reduced-matrix computation.
Original abstract

Identifying governing equations of nonlinear dynamical systems from data is challenging. While sparse identification of nonlinear dynamics (SINDy) and its extensions are widely used for system identification, operator-logarithm approaches use the logarithm to avoid time differentiation, enabling larger sampling intervals. However, they still suffer from the curse of dimensionality. Then, we propose a data-driven method to compute the Koopman generator in a low-rank tensor train (TT) format by taking logarithms of Koopman eigenvalues while preserving the TT format. Experiments on 4-dimensional Lotka-Volterra and 10-dimensional Lorenz-96 systems show accurate recovery of vector field coefficients and scalability to higher-dimensional systems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 0 minor

Summary. The manuscript presents a data-driven approach to compute the Koopman generator in tensor-train (TT) format by applying the logarithm to the eigenvalues of the Koopman operator while maintaining the low-rank TT structure. This is used to identify the vector field coefficients of nonlinear dynamical systems without requiring time derivatives. Numerical experiments on the 4-dimensional Lotka-Volterra system and the 10-dimensional Lorenz-96 system demonstrate accurate recovery of the governing equations and suggest scalability to higher dimensions.

Significance. If the central construction can be shown to control the approximation errors introduced by the TT-format-preserving logarithm, the method would offer a promising way to mitigate the curse of dimensionality in operator-based system identification. The empirical results on moderately high-dimensional systems (up to 10D) indicate practical utility, though the absence of theoretical error bounds limits the strength of the scalability claim.

major comments (3)
  1. The central construction computes the generator as the logarithm of the (finite-data) Koopman operator while claiming to stay inside the TT format. Matrix logarithm does not preserve TT rank in general; any method that 'preserves the TT format' must therefore introduce a truncation or approximation step whose error is not controlled by the paper's stated assumptions. Because the downstream task is recovery of the exact vector-field coefficients (not merely an approximate operator), even moderate rank-truncation error after the log can invalidate the SINDy-style coefficient extraction.
  2. Experiments on 4-dimensional Lotka-Volterra and 10-dimensional Lorenz-96 systems show accurate recovery of vector field coefficients, but the abstract and results section provide no details on error metrics, data selection, how TT rank is chosen, or sensitivity to sampling interval. This makes it impossible to verify whether the central claim holds under the stated conditions or to assess robustness.
  3. The scalability claim to higher-dimensional systems rests on the unverified assumption that the logarithm step introduces only negligible errors that do not corrupt the recovered coefficients. No a-priori bound or sensitivity analysis on this truncation is supplied, leaving the practical utility dependent on an empirical observation rather than a controlled property.
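The first comment's rank-growth concern can be made concrete in the simplest two-site setting, where TT rank reduces to Kronecker rank (the number of Kronecker-product terms, measured via the rank of the Van Loan-Pitsianis rearrangement). Even in the benign case below, the logarithm changes the rank: K = exp(ΔtA) ⊗ exp(ΔtB) has Kronecker rank 1, while log K = Δt(A ⊕ B) has rank 2. This is a toy illustration of rank non-preservation, not an analysis of the paper's algorithm:

```python
import numpy as np

def kron_rank(M, m, n, tol=1e-9):
    """Kronecker rank of an (m*n x m*n) matrix: rank of its Van
    Loan-Pitsianis rearrangement, i.e. the number of Kronecker-product
    terms needed to represent M (the two-site analogue of TT rank)."""
    R = np.array([M[i*n:(i+1)*n, j*n:(j+1)*n].ravel()
                  for i in range(m) for j in range(m)])
    s = np.linalg.svd(R, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(3)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
dt = 0.05                        # small enough to stay on the principal branch

def expm_eig(M):
    lam, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)).real

def logm_eig(M):
    lam, V = np.linalg.eig(M)
    return (V @ np.diag(np.log(lam)) @ np.linalg.inv(V)).real

K = np.kron(expm_eig(dt * A), expm_eig(dt * B))   # Kronecker rank 1
L = logm_eig(K)                                   # equals dt * (A kron-sum B)

print(kron_rank(K, 3, 3), kron_rank(L, 3, 3))     # rank changes under log
```

Since even this two-factor case changes rank under the logarithm, a TT-format-preserving logarithm in general must truncate, which is the error source the referee asks the authors to control.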

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments, which identify key areas for improving the rigor and reproducibility of our work. We address each major comment below and outline the revisions planned for the next version of the manuscript.

Point-by-point responses
  1. Referee: The central construction computes the generator as the logarithm of the (finite-data) Koopman operator while claiming to stay inside the TT format. Matrix logarithm does not preserve TT rank in general; any method that 'preserves the TT format' must therefore introduce a truncation or approximation step whose error is not controlled by the paper's stated assumptions. Because the downstream task is recovery of the exact vector-field coefficients (not merely an approximate operator), even moderate rank-truncation error after the log can invalidate the SINDy-style coefficient extraction.

    Authors: We agree that the matrix logarithm does not preserve TT rank in general and that our procedure necessarily includes a truncation step to maintain the low-rank TT representation. The method applies the logarithm to the eigenvalues of the finite-data Koopman operator (approximated in TT format) and reconstructs the generator with subsequent TT truncation (via successive SVDs). In the revised manuscript we will expand the method section to explicitly describe this truncation procedure, including the tolerance criterion used. We will also add numerical results quantifying the truncation error's propagation to the recovered coefficients on the Lotka-Volterra and Lorenz-96 examples, demonstrating that the error remains small enough to preserve accurate coefficient extraction for the ranks and tolerances employed. While a general a-priori bound is not supplied, the added empirical control addresses the practical validity of the coefficient recovery step. revision: yes

  2. Referee: Experiments on 4-dimensional Lotka-Volterra and 10-dimensional Lorenz-96 systems show accurate recovery of vector field coefficients, but the abstract and results section provide no details on error metrics, data selection, how TT rank is chosen, or sensitivity to sampling interval. This makes it impossible to verify whether the central claim holds under the stated conditions or to assess robustness.

    Authors: We acknowledge that the current manuscript omits important experimental details. The revised version will substantially expand the numerical experiments section to report: the precise error metrics (relative coefficient error and residual norm), full data-generation protocol (number of trajectories, total samples, time-step size, and sampling intervals), the TT-rank selection procedure (singular-value decay threshold together with cross-validation), and a sensitivity study with respect to sampling interval. These additions will allow readers to reproduce the experiments and evaluate robustness under the stated conditions. revision: yes

  3. Referee: The scalability claim to higher-dimensional systems rests on the unverified assumption that the logarithm step introduces only negligible errors that do not corrupt the recovered coefficients. No a-priori bound or sensitivity analysis on this truncation is supplied, leaving the practical utility dependent on an empirical observation rather than a controlled property.

    Authors: The scalability statement is currently supported by empirical success up to 10 dimensions. We recognize the absence of a-priori bounds on the truncation error introduced by the logarithm step. In the revision we will add a sensitivity-analysis subsection that systematically varies TT rank and truncation tolerance, reporting the resulting coefficient-recovery accuracy for both benchmark systems. This provides controlled empirical evidence beyond single-point observations. A complete theoretical error bound lies outside the present scope, but the added analysis will make the practical utility claim more substantiated. revision: partial
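The promised sensitivity study has a simple shape. A toy stand-in (plain SVD truncation of a noisy low-rank "coefficient" matrix, rather than the paper's TT pipeline; all names and tolerances hypothetical) illustrates the metric the rebuttal proposes, relative coefficient error as a function of truncation tolerance:

```python
import numpy as np

# Toy stand-in for the promised sensitivity study: truncate a noisy
# low-rank coefficient matrix at several tolerances and report the
# relative coefficient error (hypothetical setup, not the paper's TT code).
rng = np.random.default_rng(1)
U, _, Vt = np.linalg.svd(rng.standard_normal((6, 6)))
C_true = U[:, :3] @ np.diag([3.0, 2.0, 1.0]) @ Vt[:3]   # exact rank 3
noisy = C_true + 1e-8 * rng.standard_normal((6, 6))

def truncate(M, eps):
    """Keep singular values above eps * s_max (relative tolerance)."""
    U, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > eps * s[0]))
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

for eps in (0.5, 1e-6):
    C_hat = truncate(noisy, eps)
    rel = np.linalg.norm(C_hat - C_true) / np.linalg.norm(C_true)
    print(f"eps={eps:g}  relative coefficient error={rel:.2e}")
```

An over-aggressive tolerance discards true structure and the coefficient error jumps; a tolerance matched to the noise floor recovers the coefficients almost exactly, which is the regime the authors would need to demonstrate for their TT truncation.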

Circularity Check

0 steps flagged

No significant circularity in derivation chain

Full rationale

The paper presents a direct computational procedure to obtain the Koopman generator in TT format via an operator logarithm on eigenvalues, extending prior operator-logarithm and SINDy ideas to mitigate dimensionality issues. No quoted equations or steps reduce the central construction to a self-definition, a fitted parameter renamed as a prediction, or a load-bearing self-citation chain; the method is described as preserving TT format during the logarithm step and is validated empirically on Lotka-Volterra and Lorenz-96 systems. The derivation chain is checked against external benchmarks and exhibits none of the specific reductions required to flag circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review yields no explicit free parameters, axioms, or invented entities; the method implicitly relies on standard Koopman operator theory and tensor train approximation properties whose details are not provided.

pith-pipeline@v0.9.0 · 5400 in / 1034 out tokens · 42957 ms · 2026-05-10T18:17:20.590064+00:00 · methodology

discussion (0)

