pith. machine review for the scientific record.

arxiv: 2602.15603 · v2 · submitted 2026-02-17 · 💻 cs.LG · cs.SC · math.OC

Recognition: 2 Lean theorem links

Symbolic recovery of PDEs from measurement data

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 21:42 UTC · model grok-4.3

classification 💻 cs.LG · cs.SC · math.OC
keywords symbolic regression · PDE identification · rational neural networks · function-space recovery · physical law learning · sparse regularization · interpretable models

The pith

If a physical law fits inside a rational-function network, noiseless complete data recovers it exactly as the sparsest parameterization.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes a reconstruction theorem for PDE identification using neural networks built from rational functions and arithmetic operations. Under noiseless and complete measurements, any admissible law expressible in this architecture is recovered in the limit, and the recovered expression minimizes a regularization term that favors sparsity. The proof is carried out at the continuous function-space level before any discretization. This guarantees that the method returns an interpretable symbolic PDE rather than a black-box approximation when the modeling assumptions hold.

Core claim

If there exists an admissible physical law that is expressible within the symbolic network architecture, then in the limit of noiseless and complete measurements, symbolic networks recover a physical law within the PDE model that is representable by the architecture. Moreover, the recovered law corresponds to a regularization-minimizing parameterization. Under an additional identifiability condition, the unique true physical law is recovered.

What carries the argument

Symbolic networks composed of rational functions combined with arithmetic operations, which serve as the hypothesis class for representing candidate physical laws inside the PDE model.
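The hypothesis class can be pictured concretely. A minimal one-dimensional sketch of a unit computing a ratio of polynomials (our illustration, not the authors' implementation; the `RationalUnit` name and sizes are hypothetical, and the clamp is a crude stand-in for the paper's denominator restriction q(b, x) ≥ ε):

```python
import numpy as np

class RationalUnit:
    """One symbolic-network unit computing p(x)/q(x), with trainable
    coefficient vectors a (numerator) and b (denominator) of degree d.
    Illustrative 1D sketch only."""

    def __init__(self, d, rng):
        self.a = rng.normal(size=d + 1)
        self.b = rng.normal(size=d + 1)

    def __call__(self, x):
        powers = np.vander(np.atleast_1d(x), len(self.a), increasing=True)
        num = powers @ self.a
        den = powers @ self.b
        # The paper constrains denominators to satisfy q(b, x) >= eps > 0;
        # this clamp is a crude surrogate for that restriction.
        return num / np.maximum(np.abs(den), 1e-3)

rng = np.random.default_rng(0)
unit = RationalUnit(d=2, rng=rng)
y = unit(np.linspace(-1.0, 1.0, 5))
print(y.shape)  # (5,)
```

Composing such units with sums and products yields the arithmetic-plus-rational expressions that serve as candidate laws.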

If this is right

  • L1 regularization on the network parameters yields sparse, interpretable symbolic expressions for the recovered PDE.
  • The reconstruction holds at the continuous level, so discretization errors can be analyzed separately after the function-space result.
  • An identifiability condition on the architecture guarantees that the recovered law is the unique true physical law.
  • The same network class generalizes earlier ParFam and EQL architectures while preserving the recovery property.
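The first bullet rests on a standard mechanism: among parameterizations consistent with the data, an L1 penalty selects a sparse one. A finite-dimensional toy sketch (ISTA on a linear model, our stand-in for the paper's function-space regularized fit; the dimensions and penalty weight are arbitrary choices):

```python
import numpy as np

def ista(A, y, lam, steps=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2               # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L           # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                    # sparse "law": two active terms
y = A @ x_true                                  # noiseless, complete data
x_hat = ista(A, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))      # active set of the recovered model
```

In the noiseless setting the recovered support matches the true one and the coefficients incur only the small L1 bias, mirroring (in miniature) the regularization-minimizing recovery the paper proves in function space.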

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The continuous-level guarantee suggests the method could be combined with existing numerical PDE solvers to refine recovered laws on discrete grids.
  • Similar recovery arguments might apply to ordinary differential equations or algebraic relations if the network architecture is adjusted accordingly.
  • In practice, the regularization-minimizing property could be used to rank multiple candidate laws when several are consistent with the data.

Load-bearing premise

The true physical law must be exactly expressible using rational functions and arithmetic operations inside the chosen network architecture.
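For any concrete candidate law this premise is checkable symbolically. A quick SymPy illustration (the listed right-hand sides are our examples, not the paper's benchmarks; sine-Gordon's sin(u) falls outside the rational class):

```python
import sympy as sp

u, u_x, u_xx = sp.symbols("u u_x u_xx")

# Candidate right-hand sides, viewed as functions of (u, u_x, u_xx).
candidates = {
    "heat: u_xx": u_xx,
    "Fisher-KPP: u*(1 - u)": u * (1 - u),
    "rational kinetics: u/(1 + u**2)": u / (1 + u**2),
    "sine-Gordon: sin(u)": sp.sin(u),
}
expressible = {name: bool(expr.is_rational_function(u, u_x, u_xx))
               for name, expr in candidates.items()}
print(expressible)
```

Laws outside the rational class can at best be approximated by the architecture, and the exact-recovery guarantee then no longer applies.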

What would settle it

A counterexample in which complete noiseless measurements of a simple known PDE, such as the heat equation, are fed to the network yet it returns a different or non-symbolic expression would falsify the reconstruction claim.
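This settling experiment can be mocked up at toy scale. A hedged sketch (finite-difference dictionary regression with sequential thresholding, a SINDy-style surrogate for the paper's symbolic-network fit; grid sizes and candidate terms are our choices). Two Fourier modes are used so that u and u_xx are not collinear, which is exactly the kind of degeneracy the theorem's extra identifiability condition rules out:

```python
import numpy as np

# Exact solution of the heat equation u_t = u_xx: two Fourier modes,
# so the columns for u and u_xx are linearly independent.
x = np.linspace(0.0, 2 * np.pi, 200)
t = np.linspace(0.0, 1.0, 100)
X, T = np.meshgrid(x, t, indexing="ij")
U = np.exp(-T) * np.sin(X) + np.exp(-4 * T) * np.sin(2 * X)

dx, dt = x[1] - x[0], t[1] - t[0]
Ut = np.gradient(U, dt, axis=1)
Ux = np.gradient(U, dx, axis=0)
Uxx = np.gradient(Ux, dx, axis=0)

# Trim boundary points, where one-sided differences are less accurate.
sl = (slice(3, -3), slice(3, -3))
library = {"u": U[sl], "u_x": Ux[sl], "u_xx": Uxx[sl],
           "u^2": (U ** 2)[sl], "u*u_x": (U * Ux)[sl]}
D = np.column_stack([v.ravel() for v in library.values()])
rhs = Ut[sl].ravel()

# Sequentially thresholded least squares over the candidate dictionary.
c = np.linalg.lstsq(D, rhs, rcond=None)[0]
for _ in range(5):
    small = np.abs(c) < 0.1
    c[small] = 0.0
    c[~small] = np.linalg.lstsq(D[:, ~small], rhs, rcond=None)[0]

recovered = {n: round(v, 3) for n, v in zip(library, c) if v != 0.0}
print(recovered)  # expect only u_xx to survive, with coefficient near 1
```

Here the surrogate does return the heat equation; a failure of the paper's actual method on such data, under its stated assumptions, is what would falsify the theorem.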

Figures

Figures reproduced from arXiv: 2602.15603 by Erion Morina, Martin Holler, Philipp Scholl.

Figure 1. Scheme of the symbolic network Sσ. view at source ↗
Figure 2. Numerical performance: median performance over 10 random seeds in (a)–(c). view at source ↗
Figure 3. Numerical results for Strogatz ODE systems (a)–(g). view at source ↗
Figure 4. Numerical results for the Brusselator. view at source ↗
Figure 5. Architecture comparison on the Brusselator regime. view at source ↗
Figure 6. Numerical results for the Brusselator with constrained additive structure. view at source ↗
Figure 7. Membrane transport results (Du = 0.1, Dv = 0.2): (a) median performance with IQR bands; (b) best seed per m. view at source ↗
read the original abstract

Models based on partial differential equations (PDEs) are powerful for describing a wide range of complex phenomena in the natural sciences. Accurately identifying the PDE model, which represents the underlying physical law, is essential for a proper understanding of the problem. This reconstruction typically relies on indirect and noisy measurements of the system's state and, without specifically tailored methods, rarely yields symbolic expressions, thereby limiting interpretability. In this work, we address this limitation by considering neural network architectures based on rational functions for the symbolic representation of physical laws. These networks combine the approximation power of rational functions with the flexibility to represent arithmetic operations, and generalize ParFam and EQL-type architectures used in symbolic regression for physical law learning. We further establish regularity results for these symbolic networks. Our main contribution is a reconstruction result showing that, if there exists an admissible physical law that is expressible within the symbolic network architecture, then in the limit of noiseless and complete measurements, symbolic networks recover a physical law within the PDE model that is representable by the architecture. Moreover, the recovered law corresponds to a regularization-minimizing parameterization, promoting interpretability and sparsity in case of $L^1$-regularization. Under an additional identifiability condition, the unique true physical law is recovered. These reconstruction and regularity results are derived at the continuous level prior to discretization due to a formulation in function space. Empirical results using the ParFam architecture are consistent with the theoretical findings and suggest the feasibility of reconstructing interpretable physical laws in practice.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes neural network architectures based on rational functions (generalizing ParFam and EQL) for symbolic recovery of PDEs from indirect noisy measurements. It establishes regularity results for these networks and proves a reconstruction theorem in function space: if an admissible physical law is expressible in the architecture, then noiseless complete measurements recover a representable law that is regularization-minimizing (e.g., via L1 for sparsity); uniqueness holds under an additional identifiability condition. Results precede discretization, with empirical consistency checks using ParFam.

Significance. If the conditional reconstruction theorem holds, the work supplies a rigorous function-space foundation for interpretable symbolic PDE discovery, linking neural approximation power to sparsity-promoting regularization. This strengthens scientific machine learning by providing guarantees absent in purely empirical symbolic regression, with potential impact on physics-informed modeling where symbolic forms aid understanding.

major comments (2)
  1. [Main reconstruction result] Reconstruction theorem (function-space formulation): the claim that the recovered law is regularization-minimizing follows from the architecture and L1 penalty, but the proof must explicitly verify that the minimizer coincides with the true law under only the expressibility assumption; any hidden dependence on discretization or measurement completeness should be stated with a precise error bound.
  2. [Regularity results] Regularity results: the function-space regularity (prior to discretization) is central to avoiding discretization artifacts, yet the specific spaces (e.g., Sobolev or Besov) and the approximation rates for rational-function networks are not compared to classical results on rational approximation; this weakens the claim that the continuous-level derivation is fully rigorous.
minor comments (2)
  1. Notation for the symbolic network layers (rational blocks and arithmetic operations) should be defined once and used consistently to prevent confusion with standard feed-forward layers.
  2. [Empirical results] The empirical section would benefit from explicit quantitative metrics (recovery error, sparsity level) and direct comparison against SINDy-style PDE methods on the same benchmark problems.
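On the second major comment: the classical rational-approximation baseline the referee invokes is easy to probe numerically. A toy Padé check (SciPy's `pade`; the target exp and the [2/2] degrees are our illustrative choices, unrelated to the paper's benchmarks):

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to order 4.
taylor = [1.0, 1.0, 1 / 2, 1 / 6, 1 / 24]
p, q = pade(taylor, 2)          # [2/2] Pade approximant: p(x)/q(x)

xs = np.linspace(-1.0, 1.0, 5)
err = np.max(np.abs(p(xs) / q(xs) - np.exp(xs)))
print(err)  # small: a [2/2] rational already tracks exp well on [-1, 1]
```

Even low-degree rationals approximate smooth targets tightly, which is the classical backdrop against which the paper's network-specific rates would be compared.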

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which help clarify the presentation of our function-space results. We respond to each major comment below.

read point-by-point responses
  1. Referee: [Main reconstruction result] Reconstruction theorem (function-space formulation): the claim that the recovered law is regularization-minimizing follows from the architecture and L1 penalty, but the proof must explicitly verify that the minimizer coincides with the true law under only the expressibility assumption; any hidden dependence on discretization or measurement completeness should be stated with a precise error bound.

    Authors: The theorem is stated and proved entirely in function space under the assumption of noiseless and complete measurements; consequently there is no discretization dependence and the reconstruction error is identically zero in this limit. The proof already shows that, whenever an admissible law lies in the architecture, the L1-regularized minimizer within the architecture recovers a representable law, and uniqueness follows from the additional identifiability condition. To make the argument fully explicit we will insert a short clarifying paragraph immediately after the theorem statement that isolates the expressibility assumption and confirms the zero-error bound in the continuous limit. revision: partial

  2. Referee: [Regularity results] Regularity results: the function-space regularity (prior to discretization) is central to avoiding discretization artifacts, yet the specific spaces (e.g., Sobolev or Besov) and the approximation rates for rational-function networks are not compared to classical results on rational approximation; this weakens the claim that the continuous-level derivation is fully rigorous.

    Authors: We agree that an explicit comparison would strengthen the claim. The regularity results are obtained in Sobolev spaces; we will revise the relevant section to state the precise Sobolev regularity assumed for the target PDE and to include a brief comparison of the approximation rates achieved by the rational-function networks with classical results on rational approximation in Sobolev and Besov spaces. revision: yes

Circularity Check

0 steps flagged

Reconstruction theorem self-contained in function space; no circular reduction

full rationale

The central claim is a conditional reconstruction theorem derived at the continuous function-space level prior to any discretization. It states that if an admissible physical law exists and is expressible inside the symbolic network architecture (rational functions plus arithmetic operations), then noiseless complete measurements yield recovery of a representable law that is regularization-minimizing. This mathematical result does not reduce any quantity to a fitted parameter defined by the same data, nor does it rely on self-citation chains for its load-bearing steps; uniqueness is invoked only under an explicit additional identifiability condition. Regularity results for the networks are established separately. Empirical consistency checks are presented as supporting evidence but are not part of the derivation chain. The theorem is therefore self-contained and does not exhibit any of the enumerated circularity patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The reconstruction theorem rests on the assumption that an admissible physical law exists inside the symbolic network architecture and on standard function-space regularity conditions; no new physical entities are introduced and no parameters are fitted to data in the central claim itself.

axioms (2)
  • domain assumption: existence of an admissible physical law expressible within the rational-function network architecture.
    Invoked in the statement of the main reconstruction result; if false, the recovery guarantee does not apply.
  • standard math: standard Sobolev or similar function-space regularity for the PDE solutions and measurements.
    Used to derive the continuous-level results prior to discretization.

pith-pipeline@v0.9.0 · 5573 in / 1349 out tokens · 30684 ms · 2026-05-15T21:42:26.645576+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

105 extracted references · 105 canonical work pages · 4 internal anchors

  1. [1]

    Learning-informed parameter identifi- cation in nonlinear time-dependent PDEs.Applied Mathematics & Optimization, 88(3), August 2023

    Christian Aarset, Martin Holler, and Tram Thi Ngoc Nguyen. Learning-informed parameter identifi- cation in nonlinear time-dependent PDEs.Applied Mathematics & Optimization, 88(3), August 2023. doi:10.1007/s00245-023-10044-y

  2. [2]

    Identification of the coefficient in elliptic equations.SIAM Journal on Control and Optimization, 31(5):1221–1244, September 1993.doi:10.1137/0331058

    Robert Acar. Identification of the coefficient in elliptic equations.SIAM Journal on Control and Optimization, 31(5):1221–1244, September 1993.doi:10.1137/0331058

  3. [3]

    Adams and John J

    Robert A. Adams and John J. F. Fournier.Sobolev Spaces. Elsevier, Amsterdam, 2003

  4. [4]

    An identification problem for an elliptic equation in two variables.Annali di Matematica Pura ed Applicata, 145(1):265–295, December 1986.doi:10.1007/bf01790543

    Giovanni Alessandrini. An identification problem for an elliptic equation in two variables.Annali di Matematica Pura ed Applicata, 145(1):265–295, December 1986.doi:10.1007/bf01790543

  5. [5]

    Luigi Ambrosio. Well posedness of ODE’s and continuity equations with nonsmooth vector fields, and applications.Revista Matem´ atica Complutense, 30(3):427–450, August 2017.doi:10.1007/ s13163-017-0244-3

  6. [6]

    Karakasidis

    Dimitrios Angelis, Filippos Sofos, and Theodoros E. Karakasidis. Artificial intelligence in physi- cal sciences: Symbolic regression trends and perspectives.Archives of Computational Methods in Engineering, 30(6):3845–3865, Jul 2023.doi:10.1007/s11831-023-09922-z

  7. [7]

    Augusto and H.J.C

    D.A. Augusto and H.J.C. Barbosa. Symbolic regression via genetic programming. InProceedings. Vol.1. Sixth Brazilian Symposium on Neural Networks, SBRN-00. IEEE Comput. Soc, 2000.doi: 10.1109/sbrn.2000.889734. 40

  8. [8]

    Neural operators for accelerating scientific simulations and design.Nature Reviews Physics, 6(5):320–328, April 2024.doi:10.1038/s42254-024-00712-5

    Kamyar Azizzadenesheli, Nikola Kovachki, Zongyi Li, Miguel Liu-Schiaffini, Jean Kossaifi, and Anima Anandkumar. Neural operators for accelerating scientific simulations and design.Nature Reviews Physics, 6(5):320–328, April 2024.doi:10.1038/s42254-024-00712-5

  9. [9]

    H. T. Banks and K. Kunisch.Estimation Techniques for Distributed Parameter Systems. Birkh¨ auser Boston, 1989.doi:10.1007/978-1-4612-3700-6

  10. [10]

    G. K. Batchelor.An Introduction to Fluid Dynamics. Cambridge Mathematical Library. Cambridge University Press, 2000.doi:10.1017/CBO9780511800955

  11. [11]

    Bellman and K.J

    R. Bellman and K.J. ˚Astr¨ om. On structural identifiability.Mathematical Biosciences, 7(3–4):329–339, April 1970.doi:10.1016/0025-5564(70)90132-x

  12. [12]

    A survey of projection-based model reduction methods for parametric dynamical systems.SIAM Review, 57(4):483–531, January 2015.doi:10

    Peter Benner, Serkan Gugercin, and Karen Willcox. A survey of projection-based model reduction methods for parametric dynamical systems.SIAM Review, 57(4):483–531, January 2015.doi:10. 1137/130932715

  13. [13]

    Kovachki, and Andrew M

    Kaushik Bhattacharya, Bamdad Hosseini, Nikola B. Kovachki, and Andrew M. Stuart. Model reduc- tion and neural networks for parametric PDEs.The SMAI Journal of computational mathematics, 7:121–157, July 2021.doi:10.5802/smai-jcm.74

  14. [14]

    Biggio, T

    L. Biggio, T. Bendinelli, A. Neitz, A. Lucchi, and G. Parascandolo. Neural symbolic regression that scales. InProceedings of 38th International Conference on Machine Learning (ICML 2021), volume 139 ofProceedings of Machine Learning Research, pages 936–945. PMLR, July 2021. URL: https://proceedings.mlr.press/v139/biggio21a.html

  15. [15]

    Rational neural networks

    Nicolas Boull´ e, Yuji Nakatsukasa, and Alex Townsend. Rational neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors,Advances in Neural Information Processing Systems, volume 33, pages 14243–14253. Curran Associates, Inc., 2020. URL:https://proceedings. neurips.cc/paper_files/paper/2020/file/a3f390d88e4c41f2747bfa2f1...

  16. [16]

    Chapter 3 - A mathematical guide to operator learning

    Nicolas Boull´ e and Alex Townsend. Chapter 3 - A mathematical guide to operator learning. In Sid- dhartha Mishra and Alex Townsend, editors,Numerical Analysis Meets Machine Learning, volume 25 ofHandbook of Numerical Analysis, pages 83–125. Elsevier, 2024.doi:10.1016/bs.hna.2024.05.003

  17. [17]

    Brunton and J

    Steven L. Brunton and J. Nathan Kutz. Promising directions of machine learning for partial differential equations.Nature Computational Science, 4(7):483–494, June 2024.doi:10.1038/ s43588-024-00643-2

  18. [18]

    Brunton, Joshua L

    Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems.Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016.doi:10.1073/pnas.1517384113

  19. [19]

    Sobolev estimates for solutions of the transport equation and ODE flows associated to non-Lipschitz drifts.Mathematische Annalen, 380(1–2):855–883, April 2020

    Elia Bru´ e and Quoc-Hung Nguyen. Sobolev estimates for solutions of the transport equation and ODE flows associated to non-Lipschitz drifts.Mathematische Annalen, 380(1–2):855–883, April 2020. doi:10.1007/s00208-020-01988-5

  20. [20]

    Bukhgeim and Gunther Uhlmann

    Alexander L. Bukhgeim and Gunther Uhlmann. Recovering a potential from partial Cauchy data. Communications in Partial Differential Equations, 27(3–4):653–668, January 2002.doi:10.1081/ pde-120002868

  21. [21]

    J. R. Cannon and Paul DuChateau. An inverse problem for a nonlinear diffusion equation.SIAM Journal on Applied Mathematics, 39(2):272–289, October 1980.doi:10.1137/0139024

  22. [22]

    Identifiability Challenges in Sparse Linear Ordinary Differential Equations

    Cecilia Casolo, S¨ oren Becker, and Niki Kilbertus. Identifiability challenges in sparse linear ordinary differential equations.ArXiv preprint arXiv:2506.09816, 2025.doi:10.48550/arXiv.2506.09816

  23. [23]

    Constantin Christof and Julia Kowalczyk. On the identification and optimization of nonsmooth superposition operators in semilinear elliptic PDEs.ESAIM: Control, Optimisation and Calculus of Variations, 30:16, 2024.doi:10.1051/cocv/2023091

  24. [24]

    Cobelli and J

    C. Cobelli and J. J. DiStefano. Parameter and structural identifiability concepts and ambiguities: a critical review and analysis.American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, 239(1):R7–R24, July 1980.doi:10.1152/ajpregu.1980.239.1.r7. 41

  25. [25]

    Josephson, Joao Goncalves, Kenneth L

    Cristina Cornelio, Sanjeeb Dash, Vernon Austel, Tyler R. Josephson, Joao Goncalves, Kenneth L. Clarkson, Nimrod Megiddo, Bachir El Khadir, and Lior Horesh. Combining data and theory for derivable scientific discovery with AI-Descartes.Nature Communications, 14(1), April 2023.doi: 10.1038/s41467-023-37236-y

  26. [26]

    Design of the monodomain model by artificial neural networks

    S´ ebastien Court and Karl Kunisch. Design of the monodomain model by artificial neural networks. Discrete and Continuous Dynamical Systems, 42(12):6031–6061, 2022.doi:10.3934/dcds.2022137

  27. [27]

    Interpretable Machine Learning for Science with PySR and SymbolicRegression.jl

    Miles Cranmer. Interpretable machine learning for science with PySR and SymbolicRegression.jl. ArXiv preprint arXiv:2305.01582, 2023.doi:10.48550/arXiv.2305.01582

  28. [28]

    Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning.Acta Numerica, 33:633–713, July 2024.doi: 10.1017/s0962492923000089

    Tim De Ryck and Siddhartha Mishra. Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning.Acta Numerica, 33:633–713, July 2024.doi: 10.1017/s0962492923000089

  29. [29]

    R. J. DiPerna and P. L. Lions. Ordinary differential equations, transport theory and Sobolev spaces. Inventiones Mathematicae, 98(3):511–547, October 1989.doi:10.1007/bf01393835

  30. [30]

    DiStefano and C

    J. DiStefano and C. Cobelli. On parameter and structural identifiability: Nonunique observability/re- constructibility for identifiable systems, other ambiguities, and new definitions.IEEE Transactions on Automatic Control, 25(4):830–833, August 1980.doi:10.1109/tac.1980.1102439

  31. [31]

    Guozhi Dong, Michael Hinterm¨ uller, and Kostas Papafitsoros. Optimization with learning-informed differential equation constraints and its applications.ESAIM: Control, Optimisation and Calculus of Variations, 28:3, 2022.doi:10.1051/cocv/2021100

  32. [32]

    Guozhi Dong, Michael Hinterm¨ uller, and Kostas Papafitsoros. A descent algorithm for the optimal control of ReLU neural network informed PDEs based on approximate directional derivatives.SIAM Journal on Optimization, 34(3):2314–2349, July 2024.doi:10.1137/22m1534420

  33. [33]

    Guozhi Dong, Michael Hinterm¨ uller, Kostas Papafitsoros, and Kathrin V¨ olkner. First-order conditions for the optimal control of learning-informed nonsmooth PDEs.Numerical Functional Analysis and Optimization, 46(7):505–539, April 2025.doi:10.1080/01630563.2025.2488796

  34. [34]

    Unicity in an inverse problem for an unknown reaction term in a reaction-diffusion equation.Journal of Differential Equations, 59(2):155–164, September 1985

    Paul DuChateau and William Rundell. Unicity in an inverse problem for an unknown reaction term in a reaction-diffusion equation.Journal of Differential Equations, 59(2):155–164, September 1985. doi:10.1016/0022-0396(85)90152-4

  35. [35]

    Identification of nonlinear heat conduction laws.Journal of Inverse and Ill-posed Problems, 23(5):429–437, December 2014

    Herbert Egger, Jan-Frederik Pietschmann, and Matthias Schlottbom. Identification of nonlinear heat conduction laws.Journal of Inverse and Ill-posed Problems, 23(5):429–437, December 2014. doi:10.1515/jiip-2014-0030

  36. [36]

    Mathe- matics and Its Applications

    Heinz W Engl, Martin Hanke, and Gunther Neubauer.Regularization of Inverse Problems. Mathe- matics and Its Applications. Springer, Dordrecht, Netherlands, 1996

  37. [37]

    Evans.Partial Differential Equations

    Lawrence C. Evans.Partial Differential Equations. American Mathematical Society, Heidelberg, 2010

  38. [38]

    Partial data inverse problems for reaction- diffusion and heat equations.ArXiv preprint arXiv:2406.01387, 2024.doi:10.48550/ARXIV.2406

    Ali Feizmohammadi, Yavar Kian, and Gunther Uhlmann. Partial data inverse problems for reaction- diffusion and heat equations.ArXiv preprint arXiv:2406.01387, 2024.doi:10.48550/ARXIV.2406. 01387

  39. [39]

    Gin, Daniel E

    Craig R. Gin, Daniel E. Shea, Steven L. Brunton, and J. Nathan Kutz. DeepGreen: Deep learning of Green’s functions for nonlinear boundary value problems.Scientific Reports, 11(1), November 2021. doi:10.1038/s41598-021-00773-x

  40. [40]

    Griffiths and Darrell F

    David J. Griffiths and Darrell F. Schroeter.Introduction to Quantum Mechanics. Cambridge Univer- sity Press, Cambridge, 2018.doi:10.1017/9781316995433

  41. [41]

    Robust identifiability for symbolic recovery of differential equations

    Hillary Hauger, Philipp Scholl, and Gitta Kutyniok. Robust identifiability for symbolic recovery of differential equations. InICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5, 2025.doi:10.1109/ICASSP49660.2025.10887720. 42

  42. [42]

    Neural power units

    Niklas Heim, Tom´ aˇ s Pevn´ y, and V´ aclavˇSm´ ıdl. Neural power units. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors,Advances in Neural Information Processing Systems, volume 33, pages 6573–6583. Curran Associates, Inc., 2020. URL:https://proceedings.neurips. cc/paper_files/paper/2020/file/48e59000d7dfcf6c1d96ce4a603ed738-Paper.pdf

  43. [43]

    Springer Dordrecht, 2009.doi:10.1007/978-1-4020-8839-1

    Michael Hinze, Rene Pinnau, Michael Ulbrich, and Stefan Ulbrich.Optimization with PDE Con- straints. Springer Dordrecht, 2009.doi:10.1007/978-1-4020-8839-1

  44. [44]

    On the growth of the parameters of a class of approximating ReLU neural networks.ArXiv preprint arXiv:2406.14936, 2024.doi:10.48550/arXiv.2406.14936

    Martin Holler and Erion Morina. On the growth of the parameters of a class of approximating ReLU neural networks.ArXiv preprint arXiv:2406.14936, 2024.doi:10.48550/arXiv.2406.14936

  45. [45]

    On uniqueness in structured model learning.ArXiv preprint arXiv:2410.22009, 2024.doi:10.48550/arXiv.2410.22009

    Martin Holler and Erion Morina. On uniqueness in structured model learning.ArXiv preprint arXiv:2410.22009, 2024.doi:10.48550/arXiv.2410.22009

  46. [46]

    Martin Holler and Erion Morina.C 1-approximation with rational functions and rational neural net- works.ArXiv preprint arXiv:2508.19672, 2025.doi:10.48550/arXiv.2508.19672

  47. [47]

    Physically consistent model learning for reaction-diffusion systems

    Martin Holler and Erion Morina. Physically consistent model learning for reaction-diffusion systems. ArXiv preprint arXiv:2512.14240, 2025.doi:10.48550/arXiv.2512.14240

  48. [48]

    Deep generative symbolic regression

    Samuel Holt, Zhaozhi Qian, and Mihaela van der Schaar. Deep generative symbolic regression. In The Eleventh International Conference on Learning Representations, 2023. URL:https://arxiv. org/abs/2401.00282

  49. [49]

    V. Isakov. On uniqueness in inverse problems for semilinear parabolic equations.Archive for Rational Mechanics and Analysis, 124(1):1–12, 1993.doi:10.1007/bf00392201

  50. [50]

    Courier Corporation, New York, 2009

    Nathan Jacobson.Basic Algebra I - Second Edition. Courier Corporation, New York, 2009

  51. [51]

    Daijun Jiang, Yikan Liu, and Masahiro Yamamoto. Inverse source problem for the hyperbolic equation with a time-dependent principal part.Journal of Differential Equations, 262(1):653–681, January 2017.doi:10.1016/j.jde.2016.09.036

  52. [52]

    Kaipio and Erkki Somersalo.Statistical and Computational Inverse Problems

    Jari P. Kaipio and Erkki Somersalo.Statistical and Computational Inverse Problems. Springer New York, 2005.doi:10.1007/b138659

  53. [53]

    Regularization based on all-at-once formulations for inverse problems.SIAM Journal on Numerical Analysis, 54(4):2594–2618, 2016.doi:10.1137/16M1060984

    Barbara Kaltenbacher. Regularization based on all-at-once formulations for inverse problems.SIAM Journal on Numerical Analysis, 54(4):2594–2618, 2016.doi:10.1137/16M1060984

  54. [54]

    Barbara Kaltenbacher and Tram T. N. Nguyen. Discretization of parameter identification in PDEs using neural networks.Inverse Problems, 38(12):124007, 2022.doi:10.1088/1361-6420/ac9c25

  55. [55]

    The inverse problem of reconstructing reaction–diffusion systems.Inverse Problems, 36(6):065011, May 2020.doi:10.1088/1361-6420/ab8483

    Barbara Kaltenbacher and William Rundell. The inverse problem of reconstructing reaction–diffusion systems.Inverse Problems, 36(6):065011, May 2020.doi:10.1088/1361-6420/ab8483

  56. [56]

    On the simultaneous recovery of the conductivity and the nonlinear reaction term in a parabolic equation.Inverse Problems and Imaging, 14(5):939–966, 2020.doi:10.3934/ipi.2020043

    Barbara Kaltenbacher and William Rundell. On the simultaneous recovery of the conductivity and the nonlinear reaction term in a parabolic equation.Inverse Problems and Imaging, 14(5):939–966, 2020.doi:10.3934/ipi.2020043

  57. [57]

    Barbara Kaltenbacher and William Rundell. On uniqueness and reconstruction of a nonlinear diffusion term in a parabolic equation.Journal of Mathematical Analysis and Applications, 500(2):125145, August 2021.doi:10.1016/j.jmaa.2021.125145

  58. [58]

    Reconstruction of space-dependence and nonlinearity of a reaction term in a subdiffusion equation.Inverse Problems, 41(5):055008, April 2025.doi: 10.1088/1361-6420/adcb67

    Barbara Kaltenbacher and William Rundell. Reconstruction of space-dependence and nonlinearity of a reaction term in a subdiffusion equation.Inverse Problems, 41(5):055008, April 2025.doi: 10.1088/1361-6420/adcb67

  59. [59]

    End-to- end symbolic regression with transformers

    Pierre-alexandre Kamienny, St´ ephane d'Ascoli, Guillaume Lample, and Francois Charton. End-to- end symbolic regression with transformers. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors,Advances in Neural Information Processing Systems, volume 35, pages 10269–10281. Curran Associates, Inc., 2022. URL:https://proceedings.neur...

  60. [60]

    Yavar Kian. Lipschitz and Hölder stable determination of nonlinear terms for elliptic equations. Nonlinearity, 36(2):1302–1322, January 2023. doi:10.1088/1361-6544/acafcd

  61. [61]

    Yavar Kian and Gunther Uhlmann. Recovery of nonlinear terms for reaction diffusion equations from boundary measurements. Archive for Rational Mechanics and Analysis, 247(1), January 2023. doi:10.1007/s00205-022-01831-y

  62. [62]

    Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2015. URL: https://arxiv.org/abs/1412.6980

  63. [63]

    Michael V. Klibanov. Carleman estimates for global uniqueness, stability and numerical methods for coefficient inverse problems. Journal of Inverse and Ill-Posed Problems, 21(4):477–560, August 2013. doi:10.1515/jip-2012-0072

  64. [64]

    Ian Knowles. Parameter identification for elliptic problems. Journal of Computational and Applied Mathematics, 131(1–2):175–194, June 2001. doi:10.1016/s0377-0427(00)00275-2

  65. [65]

    Nikola B. Kovachki, Samuel Lanthaler, and Andrew M. Stuart. Chapter 9 - Operator learning: Algorithms and analysis. In Siddhartha Mishra and Alex Townsend, editors, Numerical Analysis Meets Machine Learning, volume 25 of Handbook of Numerical Analysis, pages 419–467. Elsevier, 2024. doi:10.1016/bs.hna.2024.05.009

  66. [66]

    William La Cava, Bogdan Burlacu, Marco Virgolin, Michael Kommenda, Patryk Orzechowski, Fabrício Olivetti de França, Ying Jin, and Jason H Moore. Contemporary symbolic regression methods and their relative performance. Advances in Neural Information Processing Systems, 2021(DB1):1–16, 2021. URL: https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/c0c7c76d30bd3dcaefc96f40275bdc0a-Paper-round1.pdf

  68. [68]

    Christoph Lampert and Georg Martius. Extrapolation and learning equations. In 5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings. International Conference on Learning Representations, 24–26 Apr 2017. URL: https://arxiv.org/pdf/1610.02995

  69. [69]

    Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. ArXiv preprint arXiv:2010.08895, 2020. doi:10.48550/arXiv.2010.08895

  70. [70]

    Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218–229, March 2021. doi:10.1038/s42256-021-00302-5

  71. [71]

    Hongyu Miao, Xiaohua Xia, Alan S. Perelson, and Hulin Wu. On identifiability of nonlinear ODE models and applications in viral dynamics. SIAM Review, 53(1):3–39, January 2011. doi:10.1137/090757009

  72. [72]

    Terrell Mundhenk, Mikel Landajuela, Ruben Glatt, Claudio P Santiago, Daniel Faissol, and Brenden K Petersen. Symbolic regression via deep reinforcement learning enhanced genetic programming seeding. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages...

  73. [73]

    J.D. Murray. Mathematical Biology: I. An Introduction. Springer New York, 2002. doi:10.1007/b98868

  74. [74]

    J.D. Murray. Mathematical Biology: II: Spatial Models and Biomedical Applications. Springer New York, 2003. doi:10.1007/b98869

  75. [75]

    Adrian I. Nachman. Global uniqueness for a two-dimensional inverse boundary value problem. The Annals of Mathematics, 143(1):71, January 1996. doi:10.2307/2118653

  76. [76]

    Derick Nganyu Tanyu, Jianfeng Ning, Tom Freudenberg, Nick Heilenkötter, Andreas Rademacher, Uwe Iben, and Peter Maass. Deep learning methods for partial differential equations and related parameter identification problems. Inverse Problems, 39(10):103001, August 2023. doi:10.1088/1361-6420/ace9d4

  77. [77]

    Tram Thi Ngoc Nguyen. Sequential bi-level regularized inversion with application to hidden reaction law discovery. Inverse Problems, 41(6):065015, June 2025. doi:10.1088/1361-6420/addf73

  78. [78]

    Benoît Perthame. Parabolic Equations in Biology - Growth, reaction, movement and diffusion. Springer, Berlin, Heidelberg, 2015. doi:10.1007/978-3-319-19500-1

  79. [79]

    Brenden K. Petersen, Mikel Landajuela, T. Nathan Mundhenk, Claudio P. Santiago, Soo K. Kim, and Joanne T. Kim. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In The Ninth International Conference on Learning Representations, 2021. URL: https://arxiv.org/pdf/1912.04871

Showing first 80 references.