Deep-Koopman-KANDy: Dictionary Discovery for Deep-Koopman Operators with Kolmogorov-Arnold Networks for Dynamics
Pith reviewed 2026-05-08 04:30 UTC · model grok-4.3
The pith
Deep-Koopman operators with two-layer KAN encoders and decoders allow post-training recovery of symbolic dictionaries via level-set and chain-rule identities.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By training a Deep-Koopman Operator whose encoder and decoder are two-layer KANs, a level-set construction combined with a chain-rule gradient identity exposes the compositional structure of the learned latent observables as a sparse symbolic dictionary in a basis chosen after training. On the Lorenz system this recovers the dictionary {x, y, z, xy, xz} with perfect recall and Jaccard score 0.79 ± 0.06; on the standard map it recovers a low-order Fourier basis; on the Ikeda map a misspecified polynomial readout still recovers the foliation coordinate g ≈ x² + y² together with a nontrivial outer function; and on the Arnold cat map, used as a negative control, the method fails to find a sparse closure, as expected.
What carries the argument
Two-layer Kolmogorov-Arnold Networks as the encoder and decoder of a Deep-Koopman Operator, together with a level-set construction and chain-rule gradient identity that extracts the compositional structure of the learned observables.
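The chain-rule readout can be made concrete. If a learned observable has the composed form φ(x) = h(g(x)), then ∇φ = h′(g)∇g, so the projected ratio q = ∇φ·∇g / ∥∇g∥² equals h′(g(x)) pointwise and can be sampled to expose the outer function. A minimal numerical sketch of that identity, with illustrative choices of g and h standing in for the trained KAN components (the paper's own filtering of small-gradient points is mimicked by a 5th-percentile cutoff):

```python
import numpy as np

# Illustrative composed observable phi(x) = h(g(x)); in the method, g and h
# arise from a trained two-layer KAN encoder rather than closed forms.
def g(x):                      # inner (foliation) coordinate, Ikeda-like
    return x[..., 0]**2 + x[..., 1]**2

def h(z):                      # nontrivial outer function (illustrative)
    return np.tanh(z)

def phi(x):
    return h(g(x))

def num_grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar field at points x of shape (N, d)."""
    grads = np.zeros_like(x)
    for i in range(x.shape[1]):
        dx = np.zeros(x.shape[1]); dx[i] = eps
        grads[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return grads

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))

Gphi, Gg = num_grad(phi, X), num_grad(g, X)
denom = np.sum(Gg * Gg, axis=1)

# Chain rule: grad phi = h'(g) grad g, so the projected ratio recovers h'(g).
q = np.sum(Gphi * Gg, axis=1) / denom

# Discard ill-conditioned denominators (small ||grad g||), as the paper does.
keep = denom > np.percentile(denom, 5)
gj, qj = g(X)[keep], q[keep]

# qj should match h'(gj) = 1 - tanh(gj)^2 up to finite-difference error.
err = np.max(np.abs(qj - (1 - np.tanh(gj)**2)))
```

Integrating the fitted h′ samples then reconstructs the outer function up to a constant, which is how the one-dimensional dataset {(g_j, q_j)} becomes a symbolic readout.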
If this is right
- On the Lorenz system the method recovers the target dictionary {x, y, z, xy, xz} with perfect recall.
- On the Chirikov standard map the method recovers a low-order Fourier basis that matches the known analytical structure.
- On the Ikeda map a misspecified polynomial readout still recovers the correct foliation coordinate g ≈ x² + y² along with a nontrivial outer function.
- On the Arnold cat map the method fails to find a sparse closure, consistent with the theoretical impossibility of finite-dimensional Koopman closure.
- The approach permits dictionary discovery without requiring the practitioner to select a function library before training.
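The Jaccard score quoted for Lorenz compares the recovered term set against the target {x, y, z, xy, xz} as plain sets. A minimal sketch of that metric; the particular recovered set here is invented for illustration:

```python
def jaccard(recovered, target):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two term sets."""
    recovered, target = set(recovered), set(target)
    return len(recovered & target) / len(recovered | target)

target = {"x", "y", "z", "xy", "xz"}
# Hypothetical readout: all five target terms found (perfect recall),
# plus two spurious terms, which lowers precision and hence the Jaccard score.
recovered = {"x", "y", "z", "xy", "xz", "yz", "x^2"}

score = jaccard(recovered, target)   # 5 / 7 ≈ 0.71
```

Perfect recall with a Jaccard score below 1, as reported, therefore indicates extra terms surviving sparsification rather than missing target terms.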
Where Pith is reading between the lines
- The post-training readout could be applied to already-trained Deep-Koopman models provided their encoder and decoder admit similar compositional exposure.
- The technique offers a route to validate whether a deep Koopman model has captured essential dynamics in an interpretable symbolic form rather than an opaque latent space.
- Success on the Ikeda map despite a misspecified readout basis suggests the method may work for systems whose Koopman representations are structured but not polynomial.
Load-bearing premise
The observables learned by the two-layer KAN encoder and decoder possess a compositional structure that the level-set construction and chain-rule gradient identity can reliably extract.
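This premise has a concrete shape: a two-layer KAN realizes each observable as a sum of outer univariate functions applied to sums of inner univariate functions, φ(x) = Σ_q h_q(Σ_p ψ_{q,p}(x_p)), which is exactly the compositional form the level-set readout targets. A minimal forward pass under that assumption, with univariate functions as plain Python callables rather than the learned splines a real KAN uses:

```python
def kan_observable(x, inner, outer):
    """Two-layer KAN-style observable:
    phi(x) = sum_q outer[q]( sum_p inner[q][p](x[p]) )."""
    latent = [sum(psi(xp) for psi, xp in zip(row, x)) for row in inner]
    return sum(h(z) for h, z in zip(outer, latent))

# Illustrative target: phi(x, y) = x*y, written in additive form via the
# identity x*y = ((x + y)^2 - (x - y)^2) / 4, so every inner map is univariate.
inner = [
    [lambda t: t, lambda t: t],    # g_1 = x + y
    [lambda t: t, lambda t: -t],   # g_2 = x - y
]
outer = [lambda z: z * z / 4, lambda z: -z * z / 4]

val = kan_observable([0.3, 0.5], inner, outer)   # 0.3 * 0.5 = 0.15
```

The readout's job is the inverse direction: starting from a trained network of this shape, recover the inner coordinates g_q and outer functions h_q symbolically.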
What would settle it
Applying the full pipeline to the Lorenz system and finding that the recovered dictionary misses the terms xy and xz or yields a Jaccard score well below 0.79 would falsify the recovery claim.
Original abstract
Symbolic library -- or Koopman dictionary -- selection is a fundamental challenge in data-driven dynamical systems. Extended Dynamic Mode Decomposition (EDMD), Sparse Identification of Nonlinear Dynamics (SINDy), and Kolmogorov--Arnold Networks for Dynamics (KANDy) all require the practitioner to commit to a function library at training time; Deep-Koopman Operators avoid this commitment but produce uninterpretable latent observables. We propose Deep-Koopman-KANDy, a structured approach to post-hoc symbolic dictionary readout that combines Deep-Koopman modeling with Kolmogorov-Arnold Networks for Dynamics (KANDy). The encoder and decoder of a Deep-Koopman Operator are replaced with two-layer Kolmogorov--Arnold Networks (KANs), and a level-set construction together with a chain-rule gradient identity exposes the compositional structure of the learned observables in a basis chosen \emph{after} training. We evaluate the method on the Lorenz system, the Chirikov standard map, the Ikeda map, and the Arnold cat map. On Lorenz it recovers the target dictionary $\{x,y,z,xy,xz\}$ with perfect recall and Jaccard score $0.79\pm0.06$; on the standard map it recovers a low-order Fourier basis matching the analytical structure; on Ikeda -- which has no sparse polynomial representation -- a misspecified polynomial readout still recovers the correct foliation coordinate $g\approx x^2+y^2$ together with a nontrivial outer function; and on the Arnold cat map -- used as a negative control because finite-dimensional Koopman closure is provably impossible -- the method fails to find a sparse closure, as expected.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes Deep-Koopman-KANDy, which augments Deep-Koopman operators by replacing the encoder/decoder with two-layer Kolmogorov-Arnold Networks (KANs). A level-set construction and chain-rule gradient identity are then applied to expose the compositional structure of the learned observables, enabling post-hoc recovery of a sparse symbolic dictionary in a user-chosen basis. Experiments on the Lorenz system recover the target dictionary {x,y,z,xy,xz} with perfect recall and Jaccard score 0.79±0.06; the standard map yields a low-order Fourier basis; the Ikeda map recovers the foliation coordinate g≈x²+y² even under polynomial misspecification; and the Arnold cat map (negative control) fails to produce a sparse closure, as expected.
Significance. If the recoveries hold under fuller scrutiny, the work offers a practical bridge between the flexibility of latent Deep-Koopman models and the interpretability of dictionary-based methods such as EDMD, SINDy, and KANDy. The benchmark results, including exact recall on Lorenz, appropriate Fourier recovery on the standard map, recovery of the correct foliation coordinate on Ikeda despite basis misspecification, and expected failure on the Arnold cat map, provide concrete evidence that the KAN-based compositional readout can succeed where purely data-driven latent spaces remain opaque. The explicit negative control and the post-training nature of the dictionary selection are particular strengths.
major comments (1)
- [Abstract and Experiments] The abstract and experimental sections report quantitative recovery metrics (perfect recall, Jaccard 0.79±0.06 on Lorenz; correct foliation coordinate on Ikeda) but omit full implementation details, hyperparameter sensitivity studies, and error analysis. These omissions are load-bearing for the central claim that the level-set/chain-rule construction reliably exposes the target dictionary across systems.
minor comments (2)
- [Methods] The precise mathematical form of the level-set construction and the chain-rule gradient identity should be stated explicitly (with equation numbers) in the methods section to allow independent verification.
- [Experiments] Clarify the exact user-chosen basis and the stopping criterion for the post-hoc readout in each experiment; the current description leaves the selection procedure somewhat implicit.
Simulated Author's Rebuttal
We thank the referee for their positive summary, recognition of the method's strengths, and recommendation for minor revision. We address the major comment below and will make the requested additions.
Point-by-point responses
Referee: [Abstract and Experiments] The abstract and experimental sections report quantitative recovery metrics (perfect recall, Jaccard 0.79±0.06 on Lorenz; correct foliation coordinate on Ikeda) but omit full implementation details, hyperparameter sensitivity studies, and error analysis. These omissions are load-bearing for the central claim that the level-set/chain-rule construction reliably exposes the target dictionary across systems.
Authors: We agree that the current presentation would benefit from expanded details to support the reliability claim. In the revised version we will add: complete implementation specifications (network widths, activation choices, training schedules, and basis libraries); hyperparameter sensitivity results across key parameters such as regularization strength, number of KAN grid points, and dictionary size; and error analysis including run-to-run variance, failure-mode identification, and quantitative metrics on all four systems. These will appear in an expanded Experiments section together with a new supplementary note on reproducibility.
Revision: yes
Circularity Check
No significant circularity in the derivation chain
Full rationale
The paper introduces a post-training readout procedure that replaces the encoder/decoder of a Deep-Koopman model with two-layer KANs and then applies a level-set construction plus chain-rule identity to recover a symbolic dictionary in a user-chosen basis. All reported results consist of empirical recovery of externally known analytical dictionaries (Lorenz polynomial set, low-order Fourier basis on the standard map, foliation coordinate on Ikeda, and expected failure on the Arnold cat map). These recoveries are measured against independent benchmark systems and known closed-form structures rather than against quantities fitted inside the same optimization; no step reduces a claimed prediction to a fitted input by construction, and no load-bearing uniqueness theorem or ansatz is imported solely via self-citation. The derivation therefore remains self-contained against external benchmarks.
Axiom & Free-Parameter Ledger
free parameters (1)
- KAN layer widths and activation choices
axioms (1)
- (standard math) The chain rule applies to the composed functions realized by the KAN encoder/decoder.