pith. machine review for the scientific record.

arxiv: 2604.07671 · v1 · submitted 2026-04-09 · 📊 stat.ML · cs.LG · cs.NA · math.DS · math.NA

Recognition: 1 Lean theorem link

On the Unique Recovery of Transport Maps and Vector Fields from Finite Measure-Valued Data


Pith reviewed 2026-05-10 18:32 UTC · model grok-4.3

classification 📊 stat.ML · cs.LG · cs.NA · math.DS · math.NA
keywords transport maps · diffeomorphisms · pushforward measures · vector fields · unique recovery · inverse problems · embedding theorems · PDE

The pith

A diffeomorphism is uniquely determined by its pushforward action on finitely many densities.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that a transport map or diffeomorphism can be pinned down exactly by observing how it moves each of a small number of input densities to their output measures. The required number of such pairs depends only on the dimension of the underlying space, not on the complexity of the map. This matters for inverse problems because it replaces the need for continuous or infinite data with a finite set of measurements that still separate distinct maps. If the conditions hold, recovery becomes possible in principle from limited observations in generative modeling, dynamical systems, and PDEs.

Core claim

We establish that under general conditions a diffeomorphism f is the unique map satisfying the observed data pairs (ρ_j, f_#ρ_j) for j=1 to m, where m is controlled by the intrinsic dimension via the Whitney and Takens embedding theorems. The same style of argument shows that a vector field v is uniquely recovered from the pairs (ρ_j, div(ρ_j v)). These results supply a new metric on the space of diffeomorphisms and well-posedness statements for inverse problems involving the continuity equation, advection, Fokker-Planck, and advection-diffusion-reaction equations.
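
To make the data pairs (ρ_j, f_#ρ_j) concrete, here is a minimal 1D sketch of what a single pair encodes. The map f and density ρ below are illustrative choices of ours, not the paper's setup: for an increasing diffeomorphism, the pushforward density is given by the change-of-variables formula (f_#ρ)(y) = ρ(f⁻¹(y)) · (f⁻¹)′(y).

```python
import numpy as np

# Hedged sketch: one data pair (rho, f_# rho) in 1D.
# Illustrative choices (not the paper's): rho = N(0, 1), f(x) = x^3 + x.

def rho(x):                      # standard normal density
    return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

def f(x):                        # strictly increasing, hence a diffeomorphism of R
    return x ** 3 + x

def finv(y):
    # invert the monotone map f by vectorized bisection
    lo = np.full_like(y, -12.0)
    hi = np.full_like(y, 12.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        mask = f(mid) < y
        lo = np.where(mask, mid, lo)
        hi = np.where(mask, hi, mid)
    return 0.5 * (lo + hi)

def pushforward(y, h=1e-5):
    # (f_# rho)(y) = rho(f^{-1}(y)) * (f^{-1})'(y), derivative by central difference
    dfinv = (finv(y + h) - finv(y - h)) / (2 * h)
    return rho(finv(y)) * dfinv

# Sanity check: f_# rho is again a probability density (integrates to 1).
y = np.linspace(-140.0, 140.0, 20001)
mass = float(np.sum(pushforward(y)) * (y[1] - y[0]))
print(abs(mass - 1.0) < 1e-3)
```

The paper's claim is that finitely many such pairs, for well-chosen ρ_j, pin down f among all diffeomorphisms.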

What carries the argument

The pushforward action of a diffeomorphism on a finite collection of densities, which separates maps once the collection is large enough for an embedding of the diffeomorphism space.

If this is right

  • A new metric on diffeomorphisms is obtained by comparing the finite pushforward densities they produce.
  • Unique recovery extends to vector fields observed through weighted divergence terms.
  • Well-posedness follows for inverse problems in continuity, advection, and Fokker-Planck equations.
  • Numerical experiments confirm that transport maps can be identified from finitely many pushforward densities.
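
The "new metric" bullet can be sketched numerically: compare two diffeomorphisms by the discrepancy between their pushforwards of a few fixed densities. Everything concrete below (the maps, the densities, the sorted-sample Wasserstein-1 estimator) is our illustrative assumption, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def w1_from_samples(a, b):
    # 1D Wasserstein-1 between equal-size empirical measures:
    # mean absolute difference of sorted samples (quantile coupling).
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def pushforward_metric(f, g, density_samplers, n=20_000):
    # d(f, g) = sum_j W1(f_# rho_j, g_# rho_j), estimated from samples
    total = 0.0
    for sample_rho in density_samplers:
        x = sample_rho(n)
        total += w1_from_samples(f(x), g(x))
    return total

# Two diffeomorphisms of R and a small family of input densities (our choices).
f = lambda x: x + 0.5 * np.tanh(x)
g = lambda x: x                      # the identity map
densities = [
    lambda n: rng.normal(0.0, 1.0, n),
    lambda n: rng.normal(2.0, 0.5, n),
    lambda n: rng.uniform(-1.0, 1.0, n),
]

print(pushforward_metric(f, f, densities))       # identical pushforwards: 0.0
print(pushforward_metric(f, g, densities) > 0.1) # distinct maps separate
```

Whether finitely many densities separate *all* diffeomorphisms, rather than just these two, is exactly the embedding question the paper addresses.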

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Minimal-measurement protocols become feasible for recovering flows in applications where only a few density snapshots can be collected.
  • The same separation principle may apply to other operators that act on measures, such as those arising in stochastic dynamics.
  • Choice of densities could be optimized to reduce the number m needed in practice while preserving the separation property.

Load-bearing premise

The chosen densities must separate distinct diffeomorphisms through their pushforwards, which the embedding theorems guarantee for sufficiently many generic choices.
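
The separation mechanism behind this premise can be written out in one short chain (a sketch reconstructed from the change-of-variables formula; notation assumed, not quoted from the paper):

```latex
% If f_\# \rho_j = g_\# \rho_j for all j, change of variables gives
\rho_j\bigl(f^{-1}(y)\bigr)\,\bigl|\det Df^{-1}(y)\bigr|
  = \rho_j\bigl(g^{-1}(y)\bigr)\,\bigl|\det Dg^{-1}(y)\bigr|,
  \qquad j = 1, \dots, m.
% Dividing the j-th identity by the m-th cancels the Jacobian factors:
\frac{\rho_j(f^{-1}(y))}{\rho_m(f^{-1}(y))}
  = \frac{\rho_j(g^{-1}(y))}{\rho_m(g^{-1}(y))},
  \qquad j = 1, \dots, m-1.
% If x \mapsto \bigl(\rho_1/\rho_m, \dots, \rho_{m-1}/\rho_m\bigr)(x) is
% injective (an embedding), this forces f^{-1}(y) = g^{-1}(y) for all y,
% hence f = g.
```

The embedding hypothesis on the ratio map is what the Whitney/Takens machinery is invoked to secure.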

What would settle it

Two different diffeomorphisms that produce identical pushforward measures for every density in the finite collection would disprove uniqueness.

Figures

Figures reproduced from arXiv: 2604.07671 by Jonah Botvinick-Greenhouse, Yunan Yang.

Figure 1. Flowchart of our main results. Note that (2) depends on the choice of Riemannian metric r on M, as ∥dY_x∥ := sup_{v ∈ T_xM, ∥v∥_r = 1} |dY_x(v)|, where ∥v∥_r = √⟨v, v⟩_{r_x}. Since M is compact, any choice of Riemannian metric induces an equivalent topology on C^1(M, R^n). More generally, C^ℓ(M, R^n) is the space of ℓ-times continuously differentiable R^n-valued functions. Throughout, we write Y_j(x) ∈ R as shor…
Figure 2. Recovering a one-dimensional function from its pushforward action on five densities.
Figure 3. Recovering the Lorenz-63 dynamics on the support of finite snapshot data.
Figure 4. Recovering the Lorenz-63 dynamics on the support of finite snapshot data.
Figure 5. Reconstructing the vector field (47) by comparing divergence operators via the objective function (48). The reference vector field v and relative error as a function of m are shown in Figure 5a, while Figure 5b visualizes individual training results for m ∈ {1, 2, 3, 4}. Theorem 3.1 predicts that unique recovery occurs when m > 2d + 1, which in this case is m = 6. This heuristic relies on the Whitney and T…
original abstract

We establish guarantees for the unique recovery of vector fields and transport maps from finite measure-valued data, yielding new insights into generative models, data-driven dynamical systems, and PDE inverse problems. In particular, we provide general conditions under which a diffeomorphism can be uniquely identified from its pushforward action on finitely many densities, i.e., when the data $\{(\rho_j,f_\#\rho_j)\}_{j=1}^m$ uniquely determines $f$. As a corollary, we introduce a new metric which compares diffeomorphisms by measuring the discrepancy between finitely many pushforward densities in the space of probability measures. We also prove analogous results in an infinitesimal setting, where derivatives of the densities along a smooth vector field are observed, i.e., when $\{(\rho_j,\text{div} (\rho_j v))\}_{j=1}^m$ uniquely determines $v$. Our analysis makes use of the Whitney and Takens embedding theorems, which provide estimates on the required number of densities $m$, depending only on the intrinsic dimension of the problem. We additionally interpret our results through the lens of Perron--Frobenius and Koopman operators and demonstrate how our techniques lead to new guarantees for the well-posedness of certain PDE inverse problems related to continuity, advection, Fokker--Planck, and advection-diffusion-reaction equations. Finally, we present illustrative numerical experiments demonstrating the unique identification of transport maps from finitely many pushforward densities, and of vector fields from finitely many weighted divergence observations.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript claims to provide general conditions for the unique recovery of diffeomorphisms (transport maps) from their pushforward actions on finitely many densities, i.e., when data of the form {(ρ_j, f_#ρ_j)}_{j=1}^m determines f uniquely, with the minimal m bounded solely by the manifold dimension d via the Whitney and Takens embedding theorems. Analogous results are stated for vector fields from divergence observations. Corollaries include a new metric on diffeomorphisms, operator interpretations via Perron-Frobenius and Koopman operators, well-posedness guarantees for inverse problems involving continuity, advection, Fokker-Planck, and advection-diffusion-reaction PDEs, and supporting numerical experiments.

Significance. If the identifiability results hold under the stated conditions, the work would contribute to theoretical foundations for data-driven recovery in generative models and dynamical systems, particularly by showing that finite measure-valued observations can suffice for unique reconstruction when $m$ scales with intrinsic dimension. The PDE inverse-problem corollaries and the proposed discrepancy metric on diffeomorphisms could have practical value if the finite-data separation property is rigorously established.

major comments (1)
  1. [Abstract and main theorem on unique recovery of transport maps] The central claim (abstract and main theorem) that the Whitney and Takens embedding theorems yield a finite m (depending only on dim(M) = d, e.g., m ≥ 2d+1) sufficient for injectivity of the map Φ(f) = (f_#ρ_1, …, f_#ρ_m) over the space of diffeomorphisms is not supported by the cited theorems. These results guarantee embeddings for maps from finite-dimensional compact manifolds into Euclidean space, but Diff(M) is an infinite-dimensional Fréchet manifold; no mechanism is provided to ensure that the infinite degrees of freedom in f are separated by finitely many pushforward densities without additional restrictions (e.g., to finite-parametric families of maps). This directly undermines the dimension-dependent bound on m and the uniqueness guarantee.
minor comments (2)
  1. [Numerical experiments] The numerical experiments section would benefit from explicit statements of the manifold dimension, the specific densities chosen, and quantitative metrics (e.g., error norms) used to verify unique identification, rather than relying solely on visual inspection of plots.
  2. [Preliminaries] Notation for the space of densities and the precise regularity assumptions (e.g., Sobolev or C^k class) on the diffeomorphisms and vector fields should be introduced earlier and used consistently when invoking the embedding theorems.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their careful reading of our manuscript and for highlighting this important point about the applicability of the embedding theorems. We address the concern in detail below and will make revisions to strengthen the presentation of our results.

point-by-point responses
  1. Referee: [Abstract and main theorem on unique recovery of transport maps] The central claim (abstract and main theorem) that Whitney and Takens embedding theorems yield a finite m (depending only on dim(M)=d, e.g., m ≥ 2d+1) sufficient for injectivity of the map Φ(f) = (f_#ρ_1, …, f_#ρ_m) over the space of diffeomorphisms is not supported by the cited theorems. These results guarantee embeddings for maps from finite-dimensional compact manifolds into Euclidean space, but Diff(M) is an infinite-dimensional Fréchet manifold; no mechanism is provided to ensure that the infinite degrees of freedom in f are separated by finitely many pushforward densities without additional restrictions (e.g., to finite-parametric families of maps). This directly undermines the dimension-dependent bound on m and the uniqueness guarantee.

    Authors: We appreciate the referee bringing this potential gap to our attention. In the manuscript, the Whitney and Takens theorems are used to select a finite number of densities ρ_j on the base manifold M such that the map from M to R^m given by the joint evaluation of the densities is an embedding. The pushforward measures f_#ρ_j then correspond to the images under this embedding, allowing the transported points, and hence the diffeomorphism f, to be recovered in principle. However, we acknowledge that the current proof sketch does not fully detail how this construction ensures injectivity of the map Φ over the entire infinite-dimensional space Diff(M), particularly regarding the topology and the separation of all possible diffeomorphisms. We will revise the main theorem statement, the proof, and the abstract to either provide a complete rigorous argument or to state the result under additional assumptions that make the finite m sufficient, such as when the diffeomorphisms belong to a finite-parametric family or satisfy certain regularity conditions that allow the embedding to separate them. This revision will be made in the next version of the manuscript. revision: yes
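
The rebuttal's construction can be exercised in one dimension: because density ratios are Jacobian-free, the ratio of two observed pushforward densities at y reveals f⁻¹(y) directly once the ratio map is injective. The specific f and ρ_j below are hypothetical choices of ours, mirroring the style of the paper's Figure 2.

```python
import numpy as np

# Hypothetical 1D demo: recover f^{-1} from the ratio of pushforward densities.

def f(x):                        # "unknown" increasing diffeomorphism of [0, 1]
    return (np.exp(2 * x) - 1) / (np.exp(2) - 1)

def finv(y):                     # its analytic inverse, used to simulate the data
    return 0.5 * np.log(1 + y * (np.exp(2) - 1))

# Two densities on [0, 1] whose ratio x -> e^x / (e - 1) is injective.
rho1 = lambda x: np.exp(x) / (np.e - 1)
rho2 = lambda x: np.ones_like(x)

def push(rho, y, h=1e-6):
    # "observed" pushforward density: (f_# rho)(y) = rho(f^{-1}(y)) * (f^{-1})'(y)
    dfinv = (finv(y + h) - finv(y - h)) / (2 * h)
    return rho(finv(y)) * np.abs(dfinv)

y = np.linspace(0.05, 0.95, 50)
ratio = push(rho1, y) / push(rho2, y)      # the Jacobian factors cancel
x_recovered = np.log((np.e - 1) * ratio)   # invert the injective ratio map

print(np.max(np.abs(f(x_recovered) - y)) < 1e-5)   # recovered map matches data
```

The referee's objection is precisely that this pointwise mechanism lives on the finite-dimensional base M; lifting it to injectivity over all of Diff(M) is the step the revision must make rigorous.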

Circularity Check

0 steps flagged

No circularity; central uniqueness claim rests on external Whitney/Takens theorems

full rationale

The paper derives unique recovery of diffeomorphisms f from data {(ρ_j, f_#ρ_j)} by applying the classical Whitney and Takens embedding theorems to bound the number m of densities needed, with m depending only on intrinsic dimension. These are external, independently established results on finite-dimensional manifolds and generic embeddings, not constructed or fitted within the paper. No self-definitional steps appear (e.g., no quantity defined in terms of itself), no parameters are fitted to data and then relabeled as predictions, and no load-bearing self-citations or uniqueness theorems imported from the authors' prior work are invoked. The Perron-Frobenius/Koopman operator interpretations and PDE well-posedness corollaries are presented as consequences rather than reductions to the input data by construction. The derivation chain is therefore self-contained against external benchmarks and does not reduce the claimed injectivity of the pushforward map Φ to any internal tautology or fitted input.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The claims rely on the applicability of the Whitney and Takens embedding theorems to the manifolds and function spaces under consideration, plus standard assumptions on smoothness of the maps and densities.

axioms (1)
  • standard math Whitney and Takens embedding theorems apply to the relevant manifolds and yield the minimal number of densities m needed to distinguish the maps or fields.
    Invoked to bound the number of observations required depending only on intrinsic dimension.

pith-pipeline@v0.9.0 · 5585 in / 1222 out tokens · 36078 ms · 2026-05-10T18:32:25.588409+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · echoes

    ECHOES: this paper passage has the same mathematical shape or conceptual pattern as the Recognition theorem, but is not a direct formal dependency.

    After dividing (21) where 1 ≤ j ≤ m−1 by (21) where j = m, we obtain ρ_j(f^{−1}(x))/ρ_m(f^{−1}(x)) = ρ_j(g^{−1}(x))/ρ_m(g^{−1}(x)) … Since (ρ_1/ρ_m, …, ρ_{m−1}/ρ_m) is an embedding … f^{−1} = g^{−1}

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
