pith · machine review for the scientific record

arxiv: 2605.06000 · v1 · submitted 2026-05-07 · 🧮 math.DS

Recognition: unknown

Deep-Koopman-KANDy: Dictionary Discovery for Deep-Koopman Operators with Kolmogorov-Arnold Networks for Dynamics

Erik Bollt, Jeremie Fish, Kevin Slote

Pith reviewed 2026-05-08 04:30 UTC · model grok-4.3

classification 🧮 math.DS
keywords: deep koopman operators · kolmogorov-arnold networks · dictionary discovery · dynamical systems · symbolic regression · data-driven modeling · lorenz system · koopman theory

The pith

Deep-Koopman operators with two-layer KAN encoders and decoders allow post-training recovery of symbolic dictionaries via level-set and chain-rule identities.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops Deep-Koopman-KANDy to remove the requirement, shared by methods such as EDMD and SINDy, of precommitting to a function library when learning dynamical systems from data. It replaces the encoder and decoder networks in a standard Deep-Koopman model with two-layer Kolmogorov-Arnold Networks, then applies a level-set construction together with a chain-rule gradient identity to reveal the compositional form of the learned observables in any chosen basis after training is complete. The method is demonstrated on the Lorenz system, where it recovers the target polynomial terms; the Chirikov standard map, where it finds a matching Fourier basis; the Ikeda map, where it identifies the correct foliation coordinate; and the Arnold cat map, where it correctly detects the lack of a sparse finite-dimensional representation. A reader would care because this removes the upfront library choice while still producing interpretable symbolic output from flexible deep models.

Core claim

By training a Deep-Koopman Operator whose encoder and decoder are two-layer KANs, a level-set construction combined with a chain-rule gradient identity exposes the compositional structure of the learned latent observables as a sparse symbolic dictionary in a post-hoc basis. On the Lorenz system this recovers the dictionary {x, y, z, xy, xz} with perfect recall and Jaccard score 0.79 ± 0.06; on the standard map it recovers a low-order Fourier basis; on the Ikeda map a misspecified polynomial readout still recovers the foliation coordinate g ≈ x² + y² together with a nontrivial outer function; and on the Arnold cat map the method fails to produce a spurious sparse closure, as expected.
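The two recovery metrics quoted above can be made concrete. A minimal sketch of recall and Jaccard score over symbolic dictionaries, using stand-in term names rather than the paper's actual scoring code:

```python
# Hypothetical illustration of the metrics quoted for the Lorenz experiment.
# Term strings are stand-ins for symbolic dictionary entries.
target = {"x", "y", "z", "x*y", "x*z"}

def recall(recovered, target):
    """Fraction of target terms present in the recovered dictionary."""
    return len(recovered & target) / len(target)

def jaccard(recovered, target):
    """Intersection over union of the two term sets."""
    return len(recovered & target) / len(recovered | target)

# A run that finds every target term plus two spurious ones:
recovered = {"x", "y", "z", "x*y", "x*z", "y*z", "x**2"}
print(recall(recovered, target))   # 1.0 (perfect recall)
print(jaccard(recovered, target))  # 0.714... (5 shared terms / 7 total)
```

Under these definitions, perfect recall with Jaccard below 1 means the readout found every target term but also admitted extra terms, which is consistent with the paper's reported 0.79 ± 0.06.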

What carries the argument

Two-layer Kolmogorov-Arnold Networks as the encoder and decoder of a Deep-Koopman Operator, together with a level-set construction and chain-rule gradient identity that extracts the compositional structure of the learned observables.

If this is right

  • On the Lorenz system the method recovers the target dictionary {x, y, z, xy, xz} with perfect recall.
  • On the Chirikov standard map the method recovers a low-order Fourier basis that matches the known analytical structure.
  • On the Ikeda map a misspecified polynomial readout still recovers the correct foliation coordinate g ≈ x² + y² along with a nontrivial outer function.
  • On the Arnold cat map the method fails to find a sparse closure, consistent with the theoretical impossibility of finite-dimensional Koopman closure.
  • The approach permits dictionary discovery without requiring the practitioner to select a function library before training.
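The Lorenz claims presuppose trajectory data. A minimal sketch of generating such data with a classical RK4 integrator under the standard chaotic parameters (the review does not reproduce the paper's integration settings, so the step size and horizon here are illustrative choices):

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz vector field at the standard chaotic parameters."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.01, 2000
traj = np.empty((n_steps, 3))
traj[0] = (1.0, 1.0, 1.0)
for i in range(1, n_steps):
    traj[i] = rk4_step(lorenz, traj[i - 1], dt)

# (state, next-state) pairs are the supervision a Koopman model trains on
X, Y = traj[:-1], traj[1:]
```

The recovered dictionary {x, y, z, xy, xz} is exactly the set of monomials appearing on the right-hand side of this vector field, which is what makes Lorenz a clean positive control.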

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The post-training readout could be applied to already-trained Deep-Koopman models provided their encoder and decoder admit similar compositional exposure.
  • The technique offers a route to validate whether a deep Koopman model has captured essential dynamics in an interpretable symbolic form rather than an opaque latent space.
  • Success on the Ikeda map despite a misspecified readout basis suggests the method may work for systems whose Koopman representations are structured but not polynomial.

Load-bearing premise

The observables learned by the two-layer KAN encoder and decoder possess a compositional structure that the level-set construction and chain-rule gradient identity can reliably extract.
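This premise can be checked in miniature: for any composition f = h ∘ g, the chain rule gives grad f(x) = h'(g(x)) · grad g(x), so grad f is parallel to grad g and the ratio of their norms recovers |h'| along the level sets. A toy numerical check, assuming nothing from the paper beyond the identity itself (g is the Ikeda-style foliation coordinate x² + y²; h = sin is an arbitrary outer function):

```python
import numpy as np

def g(p):
    """Toy inner function (foliation coordinate)."""
    return p[0] ** 2 + p[1] ** 2

def h(t):
    """Toy outer function."""
    return np.sin(t)

def grad(fn, p, eps=1e-6):
    """Central finite-difference gradient."""
    out = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = eps
        out[i] = (fn(p + e) - fn(p - e)) / (2 * eps)
    return out

p = np.array([0.8, -0.3])
gf = grad(lambda q: h(g(q)), p)   # gradient of the composition f = h(g(.))
gg = grad(g, p)                   # gradient of the inner function
ratio = np.linalg.norm(gf) / np.linalg.norm(gg)
print(ratio, abs(np.cos(g(p))))   # the norm ratio estimates |h'(g(p))|
```

The readout's bet is that the KAN's learned observables admit the same factorization, so that sampling this ratio along level sets of g yields pointwise estimates of the outer derivative h'.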

What would settle it

Applying the full pipeline to the Lorenz system and finding that the recovered dictionary misses the terms xy and xz or yields a Jaccard score well below 0.79 would falsify the recovery claim.

Figures

Figures reproduced from arXiv: 2605.06000 by Erik Bollt, Jeremie Fish, Kevin Slote.

Figure 1. Deep-Koopman-KANDy architecture. A two-layer KAN encoder (…)
Figure 2. The hidden activation layer of the KAN defines a manifold in latent space. The level (…)
Figure 3. Lorenz attractor and learned Koopman observable. (left) Attractor colored by the learned observable g(x, y, z). (center) Ground-truth and predicted trajectory over ≈ 2τΛ. (right) Level-set bands g(x, y, z) ≈ ci induce a structured foliation of the attractor. Headline result: Deep-Koopman-KANDy with architecture [3, 5, 5] recovers T exactly at the union level: every term in T appears in at least one latent (…)
Figure 4. Deep-Koopman-KANDy on the standard map at (…)
Figure 5. Arnold cat map: ground truth, full model, and two ablations.
Figure 6. Ikeda map: level-set decomposition with a misspecified polynomial readout dictionary. The Ikeda map admits no sparse polynomial representation, so the degree-3 polynomial Lasso used to recover the inner function g is misspecified by construction. We use it anyway, treating the polynomial as a flexible interpolant rather than a structural prior. (left) Attractor colored by the reconstruction h(g(x)) for ea(…)
Original abstract

Symbolic library -- or Koopman dictionary -- selection is a fundamental challenge in data-driven dynamical systems. Extended Dynamic Mode Decomposition (EDMD), Sparse Identification of Nonlinear Dynamics (SINDy), and Kolmogorov--Arnold Networks for Dynamics (KANDy) all require the practitioner to commit to a function library at training time; Deep-Koopman Operators avoid this commitment but produce uninterpretable latent observables. We propose Deep-Koopman-KANDy, a structured approach to post-hoc symbolic dictionary readout that combines Deep-Koopman modeling with Kolmogorov-Arnold Networks for Dynamics (KANDy). The encoder and decoder of a Deep-Koopman Operator are replaced with two-layer Kolmogorov--Arnold Networks (KANs), and a level-set construction together with a chain-rule gradient identity exposes the compositional structure of the learned observables in a basis chosen \emph{after} training. We evaluate the method on the Lorenz system, the Chirikov standard map, the Ikeda map, and the Arnold cat map. On Lorenz it recovers the target dictionary $\{x,y,z,xy,xz\}$ with perfect recall and Jaccard score $0.79\pm0.06$; on the standard map it recovers a low-order Fourier basis matching the analytical structure; on Ikeda -- which has no sparse polynomial representation -- a misspecified polynomial readout still recovers the correct foliation coordinate $g\approx x^2+y^2$ together with a nontrivial outer function; and on the Arnold cat map -- used as a negative control because finite-dimensional Koopman closure is provably impossible -- the method fails to find a sparse closure, as expected.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper proposes Deep-Koopman-KANDy, which augments Deep-Koopman operators by replacing the encoder/decoder with two-layer Kolmogorov-Arnold Networks (KANs). A level-set construction and chain-rule gradient identity are then applied to expose the compositional structure of the learned observables, enabling post-hoc recovery of a sparse symbolic dictionary in a user-chosen basis. Experiments on the Lorenz system recover the target dictionary {x,y,z,xy,xz} with perfect recall and Jaccard score 0.79±0.06; the standard map yields a low-order Fourier basis; the Ikeda map recovers the foliation coordinate g≈x²+y² even under polynomial misspecification; and the Arnold cat map (negative control) fails to produce a sparse closure, as expected.

Significance. If the recoveries hold under fuller scrutiny, the work offers a practical bridge between the flexibility of latent Deep-Koopman models and the interpretability of dictionary-based methods such as EDMD, SINDy, and KANDy. The benchmark results, including exact recall on Lorenz, appropriate Fourier recovery on the standard map, recovery of the correct foliation coordinate on Ikeda despite basis misspecification, and expected failure on the Arnold cat map, provide concrete evidence that the KAN-based compositional readout can succeed where purely data-driven latent spaces remain opaque. The explicit negative control and the post-training nature of the dictionary selection are particular strengths.

major comments (1)
  1. [Abstract and Experiments] The abstract and experimental sections report quantitative recovery metrics (perfect recall, Jaccard 0.79±0.06 on Lorenz; correct foliation coordinate on Ikeda) but omit full implementation details, hyperparameter sensitivity studies, and error analysis. These omissions are load-bearing for the central claim that the level-set/chain-rule construction reliably exposes the target dictionary across systems.
minor comments (2)
  1. [Methods] The precise mathematical form of the level-set construction and the chain-rule gradient identity should be stated explicitly (with equation numbers) in the methods section to allow independent verification.
  2. [Experiments] Clarify the exact user-chosen basis and the stopping criterion for the post-hoc readout in each experiment; the current description leaves the selection procedure somewhat implicit.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their positive summary, recognition of the method's strengths, and recommendation for minor revision. We address the major comment below and will make the requested additions.

Point-by-point responses
  1. Referee: [Abstract and Experiments] The abstract and experimental sections report quantitative recovery metrics (perfect recall, Jaccard 0.79±0.06 on Lorenz; correct foliation coordinate on Ikeda) but omit full implementation details, hyperparameter sensitivity studies, and error analysis. These omissions are load-bearing for the central claim that the level-set/chain-rule construction reliably exposes the target dictionary across systems.

    Authors: We agree that the current presentation would benefit from expanded details to support the reliability claim. In the revised version we will add: complete implementation specifications (network widths, activation choices, training schedules, and basis libraries); hyperparameter sensitivity results across key parameters such as regularization strength, number of KAN grid points, and dictionary size; and error analysis including run-to-run variance, failure-mode identification, and quantitative metrics on all four systems. These will appear in an expanded Experiments section together with a new supplementary note on reproducibility. revision: yes

Circularity Check

0 steps flagged

No significant circularity in the derivation chain

full rationale

The paper introduces a post-training readout procedure that replaces the encoder/decoder of a Deep-Koopman model with two-layer KANs and then applies a level-set construction plus chain-rule identity to recover a symbolic dictionary in a user-chosen basis. All reported results consist of empirical recovery of externally known analytical dictionaries (Lorenz polynomial set, low-order Fourier basis on the standard map, foliation coordinate on Ikeda, and expected failure on the Arnold cat map). These recoveries are measured against independent benchmark systems and known closed-form structures rather than against quantities fitted inside the same optimization; no step reduces a claimed prediction to a fitted input by construction, and no load-bearing uniqueness theorem or ansatz is imported solely via self-citation. The derivation therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The claim rests on standard neural network differentiability and dynamical systems theory with no new postulated entities. Architecture choices act as free parameters but are not central to the recovery claim.

free parameters (1)
  • KAN layer widths and activation choices
    Two-layer KAN architecture hyperparameters selected by the user that determine the expressivity of the readout.
axioms (1)
  • standard math: The chain rule applies to the composed functions realized by the KAN encoder/decoder.
    Invoked to obtain the gradient identity that exposes the compositional structure of the observables.

pith-pipeline@v0.9.0 · 5605 in / 1273 out tokens · 49475 ms · 2026-05-08T04:30:05.131497+00:00 · methodology

