pith. machine review for the scientific record.

arxiv: 2604.17922 · v1 · submitted 2026-04-20 · 🧮 math.NA · cs.NA

Recognition: unknown

Optimal Linear Interpolation under Differential Information: application to the prediction of perfect flows

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 04:29 UTC · model grok-4.3

classification 🧮 math.NA cs.NA
keywords Kriging · Gaussian processes · PDE constraints · Lagrange multipliers · collocation points · co-Kriging · perfect flows · interpolation

The pith

Constrained Kriging uses Lagrange multipliers to strongly enforce linear PDE constraints at prediction points.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops extensions of Kriging to interpolate functions obeying linear partial differential equations when observations are sparse. It presents two concrete methods: treating PDE residuals as auxiliary data in a co-Kriging scheme and reformulating the Kriging problem as a constrained optimization solved with Lagrange multipliers so that the PDEs hold exactly at chosen collocation points. The work matters for physical simulations such as fluid flows because it adds known differential information without requiring the large datasets typical of physics-informed neural networks. Tests on ordinary differential equations, 2D harmonic problems, and potential flow around a cylinder illustrate that the methods produce predictions consistent with both data and the governing equations.
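To make the first method concrete, here is a minimal 1D sketch (ours, not the authors' code) of collocated co-Kriging with a squared-exponential kernel, where one auxiliary observation f'(xc) = 0 stands in for a linear differential constraint at a collocation point. The toy data and lengthscale are illustrative assumptions.

```python
import numpy as np

ls = 1.0  # lengthscale of the squared-exponential kernel (assumed)

def k(x, y):
    # cov(f(x), f(y)) for the squared-exponential kernel
    return np.exp(-0.5 * (x - y) ** 2 / ls ** 2)

def k_fd(x, y):
    # cov(f(x), f'(y)) = dk/dy
    return k(x, y) * (x - y) / ls ** 2

def k_dd(x, y):
    # cov(f'(x), f'(y)) = d^2 k / dx dy
    return k(x, y) * (1.0 - (x - y) ** 2 / ls ** 2) / ls ** 2

# Primary field observations plus one auxiliary observation f'(xc) = 0,
# i.e. a differential datum treated exactly like extra data.
xf = np.array([0.0, 2.0]); yf = np.array([0.0, 0.5])
xc = np.array([1.0]);      yc = np.array([0.0])

# Covariance of the augmented observation vector (f(xf), f'(xc)).
K = np.block([
    [k(xf[:, None], xf[None, :]),    k_fd(xf[:, None], xc[None, :])],
    [k_fd(xf[None, :], xc[:, None]), k_dd(xc[:, None], xc[None, :])],
])

alpha = np.linalg.solve(K, np.concatenate([yf, yc]))

# Co-Kriging mean at new points: cross-covariances to both data types.
xs = np.linspace(-1.0, 3.0, 9)
kx = np.hstack([k(xs[:, None], xf[None, :]), k_fd(xs[:, None], xc[None, :])])
print(kx @ alpha)
```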

Core claim

The constrained Kriging optimization problem, solved via a Lagrangian formulation, strongly satisfies linear PDE constraints at the points of prediction while remaining optimal among linear interpolators given the primary observations and differential information.
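Read schematically (our reconstruction from the abstract; the paper's exact notation may differ), this is an equality-constrained minimum-variance problem whose stationarity conditions form a bordered linear system:

```latex
% Schematic KKT system (our notation): K is the covariance of the
% observations Y, k_* the cross-covariance to the prediction point x_*,
% and C collects the linear PDE operator applied to the kernel at the
% collocation points.
\[
\min_{w}\ \operatorname{Var}\!\left[f(x_*) - w^\top Y\right]
\quad\text{s.t.}\quad C w = c_*
\;\Longrightarrow\;
\begin{pmatrix} K & C^\top \\ C & 0 \end{pmatrix}
\begin{pmatrix} w \\ \mu \end{pmatrix}
=
\begin{pmatrix} k_* \\ c_* \end{pmatrix},
\]
% where \mu are the Lagrange multipliers and the constrained predictor
% is \hat f(x_*) = w^\top Y.
```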

What carries the argument

Lagrangian formulation of the constrained Kriging optimization that enforces linear PDE constraints exactly at prediction collocation points.
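A minimal numerical sketch of that machinery, under assumptions: the function below solves the bordered system for any linear conditions A w = b on the weights. The paper's constraint rows would apply the PDE operator to the kernel at collocation points; here, for a runnable stand-in, we use ordinary Kriging's sum-to-one condition.

```python
import numpy as np

def solve_constrained_weights(K, k_star, A, b):
    """Minimize 1/2 w^T K w - k_star^T w subject to A w = b
    by solving the bordered (KKT) linear system."""
    n, m = K.shape[0], A.shape[0]
    kkt = np.block([[K, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([k_star, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]  # weights w, Lagrange multipliers mu

# Toy stand-in: 3 observations, squared-exponential covariance, and the
# sum-to-one constraint of ordinary Kriging in place of a PDE condition.
X = np.array([0.0, 1.0, 2.0])
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
k_star = np.exp(-0.5 * (0.5 - X) ** 2)
A = np.ones((1, 3))
b = np.array([1.0])

w, mu = solve_constrained_weights(K, k_star, A, b)
print(w, w.sum())  # the constraint A w = b holds exactly: sum = 1
```

Whatever the paper's exact constraint rows look like, the enforcement mechanism is this exact-to-round-off satisfaction of the linear conditions at the solve.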

Load-bearing premise

Linear PDE information supplied at a finite set of collocation points can be added without creating inconsistencies with the primary field observations or prohibitive scaling costs in higher dimensions.
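A back-of-envelope count of where the scaling pressure in this premise comes from (the tensor-grid collocation count is our illustrative assumption, not the paper's):

```latex
% System size and solve cost for n field observations plus m collocation
% conditions; m = p^d is an illustrative tensor-grid assumption.
\[
\underbrace{(n+m)\times(n+m)}_{\text{augmented system}}
\;\Rightarrow\;
\mathcal{O}\big((n+m)^3\big)\ \text{dense solve},
\qquad
m = p^{d}\ \text{for } p \text{ points per axis in } d \text{ dimensions}.
\]
```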

What would settle it

A numerical example in three or more dimensions where the constrained predictor violates the PDE at a chosen prediction point or produces values inconsistent with the given observations.

Figures

Figures reproduced from arXiv:2604.17922 by David Gaudrie, Didier Rullière, Laurent Genest, Rodolphe Le Riche, Soumyodeep Mukhopadhyay, and Xavier Bay (Mines Saint-Étienne MSE, ENSM Saint-Étienne, FAYOL-ENSMSE, CNRS, LIMOS, UCA [2017-2020]).

Figure 1: Demonstrating the difference between simple Kriging (in orange) and the prediction (in green and …)
Figure 2: Comparing the simple Kriging (in orange) and Lagrangian Kriging prediction (in green and the blue …)
Figure 3: All figures show the exact function in black, four observations as black crosses, collocation points / …
Figure 4: Lagrangian Kriging with the same observations but predicting at …
Figure 5: The effect of increasing observations on Lagrangian Kriging prediction (in blue) without uncertainty …
Figure 6: Comparing the simple Kriging vs. derivatives-based collocated co-Kriging with …
Figure 7: Lagrangian Kriging predictions for higher-order derivatives involved in the gradient-based and Lapla…
Figure 8: Prediction based on random additional observations in the domain: (a) Simple Kriging without cross …
Figure 9: Simple Kriging (using ϕ) prediction based on random additional observations in the domain for varying lengthscale parameter (flow-reversal phenomena). Once again, the color and length of the arrows are proportional to the magnitude of velocity. A possible contributor to the flow-reversal phenomena may be the negative correlations that appear when computing covariances using the differentiated kernel. …
Figure 10: The variation of covariance with respect to the squared distance for the squared exponential kernel …
Figure 11: Comparing the predicted flows. Observed velocity vectors in red. The LOOCV-optimal co-Kriging …
Figure 12: Uncertainty in the squared magnitude of the predicted velocity vectors: (a) Simple Kriging with …
Figure 13: The exact flow and co-Kriging and Lagrangian Kriging predictions for the airfoils NACA 0012 (top) …
Figure 14: Comparing the uncertainty quantification (…)
original abstract

Approximation of functions satisfying partial differential equations (PDEs) is paramount for simulation of physical fluid flows and other problems in physics. Recently, physics-informed machine learning approaches have proven useful as a data-driven complement to numerical models for partial differential equations, bringing faster responses and allowing us to capitalize on past observations. However, their efficiency and convergence depend on the availability of vast training datasets. For sparse observations, Gaussian process regression or Kriging has emerged as a powerful interpolation model, offering principled estimates and uncertainty quantification. Several attempts have been made to condition Gaussian processes on linear PDEs via artificial or collocation observations and kernel design. These methods suffer from scalability issues in higher dimensions and limited generalizability. The aim of this study is to explore the extension of the Kriging predictor in the presence of linear PDE information at a finite number of collocation points. Two approaches are proposed: 1) A collocated co-Kriging with primary observations of the physical field and auxiliary differential observations; 2) A constrained Kriging optimization problem strongly satisfying linear PDE constraints at the points of prediction through a Lagrangian formulation. Numerical experiments are given for ordinary differential equations, 2D harmonic PDEs and an application to perfect flows around a cylinder. This work highlights a trade-off between the computational efficiency of the Lagrange multipliers approach and the strict interpolation of observations.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript proposes two extensions of Kriging interpolation to incorporate linear PDE information at collocation points: (1) a collocated co-Kriging formulation treating differential quantities as auxiliary observations, and (2) a constrained optimization that enforces the linear PDE constraints exactly at prediction points via Lagrange multipliers. These are applied to ODEs, 2D harmonic equations, and perfect flows around a cylinder, with the abstract noting a computational-efficiency versus strict-interpolation trade-off for the Lagrangian approach.

Significance. If the central constructions are shown to be consistent with Kriging exactness and supported by error analysis, the work could offer a principled route to physics-constrained linear predictors for sparse fluid data, complementing existing co-Kriging and kernel-design methods. The explicit Lagrangian formulation for strong PDE satisfaction at prediction sites and the cylinder-flow example are potentially useful, but the absence of convergence results or quantitative validation in the current description limits immediate impact.

major comments (2)
  1. [Abstract] The claim of 'optimal linear interpolation under differential information' is qualified by an acknowledged trade-off with 'the strict interpolation of observations' in the Lagrange-multipliers approach. Standard Kriging optimality rests on exact reproduction of primary observations (unbiasedness constraint). It is unclear whether the Lagrangian system augments or replaces this constraint; if the latter, the resulting predictor is no longer guaranteed to interpolate the data, undermining the optimality claim. A derivation showing the modified normal equations and the conditions under which exact interpolation is retained is required.
  2. [Numerical experiments] The abstract states that experiments are performed for ODEs, 2D harmonic PDEs, and perfect flows, yet reports no error norms, convergence rates, or comparisons against standard Kriging or physics-informed baselines. Without quantitative metrics or an assessment of how the PDE constraints affect prediction accuracy versus the trade-off, it is impossible to evaluate whether the methods deliver the promised improvement for the cylinder-flow application.
minor comments (1)
  1. [Abstract] The abstract would be strengthened by a single sentence summarizing the key mathematical distinction between the two proposed approaches and one quantitative highlight from the experiments.
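On major comment 1, the two regimes the referee distinguishes can be written side by side (schematic notation, assumed rather than quoted from the paper), for a predictor \hat f(x) = w(x)^\top Y:

```latex
% Schematic contrast of the two constraint regimes in major comment 1.
\[
\begin{aligned}
\textbf{augment:}\quad & \hat f(x_i) = y_i,\ i=1,\dots,n,
  \quad\text{and}\quad (\mathcal{L}\hat f)\big(x_c^{(j)}\big) = g\big(x_c^{(j)}\big),\ j=1,\dots,m;\\
\textbf{replace:}\quad & (\mathcal{L}\hat f)\big(x_c^{(j)}\big) = g\big(x_c^{(j)}\big),\ j=1,\dots,m
  \quad\text{only, so } \hat f(x_i)=y_i \text{ is no longer guaranteed.}
\end{aligned}
\]
```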

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for the referee's constructive comments. We will revise the manuscript to address the concerns on the optimality claim and to include quantitative validation. Point-by-point responses are below.

point-by-point responses
  1. Referee: [Abstract] The claim of 'optimal linear interpolation under differential information' is qualified by an acknowledged trade-off with 'the strict interpolation of observations' in the Lagrange-multipliers approach. Standard Kriging optimality rests on exact reproduction of primary observations (unbiasedness constraint). It is unclear whether the Lagrangian system augments or replaces this constraint; if the latter, the resulting predictor is no longer guaranteed to interpolate the data, undermining the optimality claim. A derivation showing the modified normal equations and the conditions under which exact interpolation is retained is required.

    Authors: The Lagrangian formulation replaces the standard unbiasedness constraint with exact enforcement of the linear PDE constraints at the prediction points. This yields a minimum-variance predictor under the PDE constraints, but does not guarantee exact interpolation of the primary observations, which is the acknowledged trade-off. The collocated co-Kriging approach retains the standard exact-interpolation property. We will add a derivation of the modified normal equations in the revision, together with the conditions (e.g., consistency of observations with the PDE) under which exact data interpolation is recovered. revision: yes

  2. Referee: [Numerical experiments] The abstract states that experiments are performed for ODEs, 2D harmonic PDEs, and perfect flows, yet reports no error norms, convergence rates, or comparisons against standard Kriging or physics-informed baselines. Without quantitative metrics or an assessment of how the PDE constraints affect prediction accuracy versus the trade-off, it is impossible to evaluate whether the methods deliver the promised improvement for the cylinder-flow application.

    Authors: We agree that quantitative metrics are essential. The revised manuscript will include relative L2 error norms, convergence rates with respect to the number of collocation points, and comparisons against standard Kriging and other physics-informed baselines for the ODE, harmonic PDE, and cylinder-flow cases. These additions will quantify accuracy improvements and clarify the efficiency-versus-interpolation trade-off. revision: yes
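A minimal sketch of the promised diagnostics (illustrative numbers only, not results from the paper): relative L2 error on a common grid, and an empirical convergence rate fitted from errors at increasing collocation counts.

```python
import numpy as np

def rel_l2(pred, exact):
    # Relative L2 error on a common evaluation grid.
    return np.linalg.norm(pred - exact) / np.linalg.norm(exact)

def empirical_rate(ns, errs):
    # Slope of log(err) vs log(n): err ~ C * n**rate.
    return np.polyfit(np.log(ns), np.log(errs), 1)[0]

exact = np.sin(np.linspace(0.0, 1.0, 50))
pred = exact + 0.01                      # placeholder predictor output
print(rel_l2(pred, exact))

ns = np.array([4, 8, 16, 32])            # hypothetical collocation counts
errs = 0.2 * (4.0 / ns) ** 2             # illustrative second-order decay
print(empirical_rate(ns, errs))          # -> -2.0 for this made-up series
```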

Circularity Check

0 steps flagged

No circularity; derivation self-contained from standard Kriging and constrained optimization

full rationale

The paper proposes two extensions of Kriging: collocated co-Kriging incorporating auxiliary differential observations and a Lagrangian-constrained optimization enforcing linear PDE conditions at prediction points. These follow directly from established Gaussian process regression, co-Kriging formulations, and standard constrained optimization without any reduction of predictions to fitted inputs by construction, self-definitional loops, or load-bearing self-citations. The abstract explicitly notes the trade-off with strict interpolation of observations, confirming the construction does not claim exactness where it is relaxed. No ansatzes are smuggled via prior work, and no uniqueness theorems or renamings of known results are invoked as derivations. The approach remains externally falsifiable against PDE solutions and standard Kriging benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claims rest on the assumption that the target function satisfies a linear PDE and that Gaussian process kernels can be conditioned on both field values and differential information without contradiction.

axioms (2)
  • domain assumption The unknown function satisfies a given linear partial differential equation at collocation points.
    Invoked to justify both the co-Kriging auxiliary observations and the Lagrangian constraints.
  • standard math Standard Gaussian process regression assumptions hold (stationarity, positive-definiteness of kernel).
    Required for the Kriging predictor to remain well-defined after adding differential information.
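The second axiom can be spot-checked numerically: for a smooth kernel, the joint covariance of field values and derivative values is positive semi-definite by construction, so conditioning on both is well-posed. A minimal check with a 1D squared-exponential kernel (our toy setup, not the paper's):

```python
import numpy as np

x = np.linspace(0.0, 3.0, 5)
ls = 1.0
D = x[:, None] - x[None, :]
K = np.exp(-0.5 * D ** 2 / ls ** 2)                 # cov(f, f)
K_fd = K * D / ls ** 2                              # cov(f, f')
K_dd = K * (1.0 - D ** 2 / ls ** 2) / ls ** 2       # cov(f', f')

# Joint covariance of (f(x), f'(x)); PSD up to round-off for a valid GP.
J = np.block([[K, K_fd], [K_fd.T, K_dd]])
print(np.linalg.eigvalsh(J).min() >= -1e-10)        # expect True
```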

pith-pipeline@v0.9.0 · 5636 in / 1215 out tokens · 36857 ms · 2026-05-10T04:29:00.770353+00:00 · methodology

discussion (0)

