Recognition: 2 theorem links · Lean Theorem
Neural Operator: Graph Kernel Network for Partial Differential Equations
Pith reviewed 2026-05-14 22:13 UTC · model grok-4.3
The pith
Using graph kernel networks, a single set of network parameters can describe mappings between infinite-dimensional spaces and their finite-dimensional approximations.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central discovery is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces. Approximation of the infinite-dimensional mapping is formulated by composing nonlinear activation functions and a class of integral operators, with kernel integration computed by message passing on graph networks.
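To make the composition concrete, here is a minimal NumPy sketch of one such layer, assuming an update of the form v_{t+1}(x) = σ(W v_t(x) + mean over neighbors y of κ_θ(x, y, a(x), a(y)) v_t(y)); the class and layer sizes are illustrative, not taken from the authors' code.

```python
# Minimal sketch of one graph kernel layer (illustrative, not the authors' code).
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class GraphKernelLayer:
    def __init__(self, width, feat_dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.width = width
        self.W = rng.normal(0.0, 0.1, (width, width))  # local linear term
        # Two-layer MLP mapping edge features (x, a(x), y, a(y)) to a
        # width-by-width kernel matrix kappa_theta(x, y).
        self.A1 = rng.normal(0.0, 0.1, (hidden, 2 * feat_dim))
        self.A2 = rng.normal(0.0, 0.1, (width * width, hidden))

    def kernel(self, edge_feat):
        # edge_feat: concatenated node features of the two endpoints.
        return (self.A2 @ relu(self.A1 @ edge_feat)).reshape(self.width, self.width)

    def forward(self, v, feats, neighbors):
        # v: (n, width) node states; feats: (n, feat_dim) coordinates plus
        # input-function values; neighbors: list of index arrays per node.
        out = np.empty_like(v)
        for i, nbrs in enumerate(neighbors):
            msgs = [self.kernel(np.concatenate([feats[i], feats[j]])) @ v[j]
                    for j in nbrs]
            agg = np.mean(msgs, axis=0) if msgs else np.zeros(self.width)
            out[i] = relu(self.W @ v[i] + agg)  # neighbor average stands in for the integral
        return out
```

The neighbor average plays the role of a Monte Carlo estimate of the kernel integral, which is why the parameters (W, A1, A2) carry no reference to any particular mesh.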
What carries the argument
A graph kernel network that approximates integral operators via message passing on graphs, enabling discretization-independent operator learning.
Load-bearing premise
Message passing on graphs faithfully approximates the integral operators underlying PDE solution maps, without introducing discretization-specific artifacts that would prevent generalization.
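A toy numerical check of this premise, using assumed smooth stand-ins for the kernel and the state: the neighbor average is a Monte Carlo estimate of the kernel integral, so its error should shrink as the point set is refined.

```python
# Toy check: the message-passing average converges to the kernel integral
# as the discretization is refined. kernel and v are arbitrary smooth stand-ins.
import numpy as np

def mp_estimate(kernel, v, n, rng):
    y = rng.uniform(0.0, 1.0, n)       # random point set on [0, 1]
    return np.mean(kernel(y) * v(y))   # message-passing style neighbor average

kernel = lambda y: np.exp(-y)          # stand-in for kappa(x, .) at a fixed x
v = lambda y: np.sin(np.pi * y)
exact = np.pi * (1.0 + np.exp(-1.0)) / (1.0 + np.pi**2)  # closed form of the integral

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    print(n, abs(mp_estimate(kernel, v, n, rng) - exact))  # error shrinks with n
```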
What would settle it
If a network trained on one grid resolution or approximation method shows large accuracy drops when tested on a different resolution or method, relative to a model trained directly on that target discretization, the generalization claim would fail.
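A hypothetical harness for exactly this test; make_dataset, train, and evaluate are placeholders, not functions from the paper's code release.

```python
# Hypothetical falsification harness: compare cross-resolution transfer error
# against a model trained directly on the target resolution.
def generalization_gap(make_dataset, train, evaluate, src_res=16, tgt_res=64):
    model_src = train(make_dataset(resolution=src_res))   # coarse-grid training
    model_tgt = train(make_dataset(resolution=tgt_res))   # target-grid baseline
    test_set = make_dataset(resolution=tgt_res, split="test")
    err_transfer = evaluate(model_src, test_set)   # train coarse, test fine
    err_direct = evaluate(model_tgt, test_set)     # train fine, test fine
    # A consistently large positive gap would falsify the invariance claim.
    return err_transfer - err_direct
```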
Original abstract
The classical development of neural networks has been primarily for mappings between a finite-dimensional Euclidean space and a set of classes, or between two finite-dimensional Euclidean spaces. The purpose of this work is to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators). The key innovation in our work is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces. We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators. The kernel integration is computed by message passing on graph networks. This approach has substantial practical consequences which we will illustrate in the context of mappings between input data to partial differential equations (PDEs) and their solutions. In this context, such learned networks can generalize among different approximation methods for the PDE (such as finite difference or finite element methods) and among approximations corresponding to different underlying levels of resolution and discretization. Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to the state of the art solvers.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces the Neural Operator, realized as a Graph Kernel Network, to approximate mappings between infinite-dimensional function spaces. The central claim is that a single set of learned parameters suffices to represent the operator both in the continuous setting and across different finite-dimensional discretizations (varying resolutions, finite-difference vs. finite-element meshes) by composing nonlinear activations with integral operators that are evaluated via graph message passing.
Significance. If the discretization-invariance property holds, the result would be a meaningful advance for operator learning in scientific computing: models trained on one mesh type or resolution could be deployed on others without retraining, addressing a practical bottleneck in neural PDE solvers.
Major comments (2)
- [§3] Graph Kernel Network formulation: the argument that message passing realizes a discretization-invariant integral operator is not established. The edge features and aggregation depend on the concrete graph constructed from the mesh; no analysis shows that the learned kernel function commutes with changes in point set or adjacency, so the observed cross-discretization performance may be an artifact of the training distribution rather than evidence of an infinite-dimensional operator.
- [Experiments] Cross-discretization tests: the quantitative evidence for generalization (training on one discretization and testing on another) lacks error bars, the number of independent runs, and an explicit comparison of mesh types (FD vs. FE). Without these, the central claim that a single parameter set works across approximations cannot be assessed.
Minor comments (2)
- [Abstract] The claim of 'competitive performance' should name the baselines and metrics used.
- [§3] Notation: the precise definition of the kernel function k(x,y) and its parameterization inside the message-passing layers should be stated explicitly for reproducibility; a sketch of one plausible statement follows below.
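One plausible explicit statement of the requested definition, consistent with the abstract's description (the notation here is assumed, not quoted from the paper): the kernel κ_θ is a neural network on edge features, and message passing replaces the integral with a neighbor average.

```latex
% Assumed notation: v_t is the hidden state, a the PDE input, B(x,r) a ball of radius r.
v_{t+1}(x) = \sigma\!\left( W\, v_t(x)
  + \int_{B(x,r)} \kappa_\theta\big(x, y, a(x), a(y)\big)\, v_t(y)\, \mathrm{d}y \right),
\qquad
\int_{B(x,r)} \cdots \,\mathrm{d}y \;\approx\; \frac{1}{|N(x)|}
  \sum_{y \in N(x)} \kappa_\theta\big(x, y, a(x), a(y)\big)\, v_t(y).
```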
Simulated Author's Rebuttal
We thank the referee for the constructive comments on our manuscript introducing the Neural Operator realized as a Graph Kernel Network. We address each major comment point by point below, indicating planned revisions where appropriate.
Point-by-point responses
- Referee: [§3] Graph Kernel Network formulation: the argument that message passing realizes a discretization-invariant integral operator is not established. The edge features and aggregation depend on the concrete graph constructed from the mesh; no analysis shows that the learned kernel function commutes with changes in point set or adjacency, so the observed cross-discretization performance may be an artifact of the training distribution rather than evidence of an infinite-dimensional operator.
  Authors: We thank the referee for this observation. The Graph Kernel Network formulates the operator by composing nonlinear activations with integral operators whose kernels are functions of spatial coordinates. Message passing on the graph approximates the integral using the available point set, but the learned kernel parameters remain independent of the specific mesh or adjacency structure. This design allows the same parameter set to be applied across different discretizations; a minimal illustration appears below. While the manuscript does not include a formal proof that the kernel commutes with arbitrary changes in point sets, the architecture is constructed precisely to realize a discretization-invariant operator in the continuous limit. We will revise §3 to explain more clearly how the coordinate-based kernel and message-passing aggregation support this invariance property.
  Revision: partial
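A minimal illustration of the claimed parameter sharing, reusing the GraphKernelLayer sketch from the core-claim section above (point counts and features are arbitrary):

```python
# One parameter set applied unchanged to two discretizations of different size.
import numpy as np

rng = np.random.default_rng(1)
layer = GraphKernelLayer(width=8, feat_dim=3)      # sketch defined earlier
for n in (50, 200):                                # coarse and fine point clouds
    feats = rng.normal(size=(n, 3))                # (x, a(x)) features per node
    v = rng.normal(size=(n, 8))
    nbrs = [np.delete(np.arange(n), i)[:10] for i in range(n)]  # 10 neighbors each
    out = layer.forward(v, feats, nbrs)            # same weights at either resolution
    print(n, out.shape)
```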
- Referee: [Experiments] Cross-discretization tests: the quantitative evidence for generalization (training on one discretization and testing on another) lacks error bars, the number of independent runs, and an explicit comparison of mesh types (FD vs. FE). Without these, the central claim that a single parameter set works across approximations cannot be assessed.
  Authors: We agree that reporting error bars, the number of independent runs, and explicit FD vs. FE comparisons would strengthen the experimental evidence. In the revised manuscript we will add error bars computed over multiple independent training runs (as sketched below), state the number of runs performed, and include direct side-by-side results for finite-difference and finite-element meshes in the cross-discretization experiments.
  Revision: yes
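A sketch of the promised error-bar computation, with run_experiment a placeholder that trains one model with a given seed and returns its scalar test error:

```python
# Mean and sample standard deviation of test error over independent runs.
import numpy as np

def error_bars(run_experiment, n_runs=5):
    errs = np.array([run_experiment(seed=s) for s in range(n_runs)])
    return errs.mean(), errs.std(ddof=1)  # report as mean +/- std in the tables
```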
Circularity Check
No circularity: architecture and generalization claims are independent constructions supported by experiments
Full rationale
The paper presents a new network architecture that composes nonlinear activations with integral operators approximated via graph message passing. This formulation is introduced as an explicit design choice for learning operators between function spaces, without any equation or parameter being defined in terms of the target performance metric or the same data used for evaluation. The claim of cross-discretization generalization rests on experimental results across different meshes and methods (FD vs. FE), not on reducing the output to a fitted quantity or a self-citation chain. No load-bearing step equates a prediction to its input by construction, and the claims are validated against external benchmarks rather than being self-referential.
Forward citations
Cited by 22 Pith papers
- A meshfree exterior calculus for generalizable and data-efficient learning of physics from point clouds
  MEEC equips point clouds with a discrete exterior calculus that satisfies exact conservation and is differentiable in point positions, allowing a single trained kernel to produce compatible physics on unseen geometrie...
- Fourier Neural Operator for Parametric Partial Differential Equations
  Fourier Neural Operator parameterizes integral kernels in Fourier space to learn parametric PDE solution operators, delivering up to 1000x speedups and zero-shot super-resolution on turbulent Navier-Stokes flows.
- Discovering Physical Directions in Weight Space: Composing Neural PDE Experts
  Fine-tuning neural PDE operators to regime endpoints reveals a physical direction in weight space that CCM uses to compose accurate merged models for new or extrapolated regimes from metadata or short prefixes.
- Neural-Schwarz Tiling for Geometry-Universal PDE Solving at Scale
  Local neural operators on 3x3x3 patches, composed via Schwarz iteration, solve large-scale nonlinear elasticity on arbitrary geometries without domain-specific retraining.
- Bayesian Optimization with Structured Measurements: A Vector-Valued RKHS Framework
  Proposes a vector-valued RKHS framework for Bayesian optimization with structured measurements, deriving concentration bounds and UCB-based regret guarantees that recover sublinear rates.
- QuadNorm: Resolution-Robust Normalization for Neural Operators
  QuadNorm uses quadrature-based moments instead of uniform averaging in normalization layers, achieving O(h²) consistency across resolutions and better cross-resolution transfer in neural operators.
- Hybrid Iterative Neural Low-Regularity Integrator for Nonlinear Dispersive Equations
  A hybrid solver-neural framework achieves global error O(τ^γ ln(1/τ)) for nonlinear dispersive equations by training a lightweight network on the residual defect inside the solver loop while preserving uniform stability.
- Enabling Real-Time Training of a Wildfire-to-Smoke Map with Multilinear Operators
  A multilinear operator learned on PCA coefficients maps time-since-ignition inputs to smoke outputs, matching Monte Carlo accuracy with half the model calls and outperforming prior classifiers on holdout data.
- Hybrid Fourier Neural Operator-Lattice Boltzmann Method
  Hybrid FNO-LBM accelerates porous media flow convergence by up to 70% via neural initialization and stabilizes unsteady simulations through embedded FNO rollouts, allowing small models to match larger ones in accuracy.
- Learning Neural Operator Surrogates for the Black Hole Accretion Code
  Physics-informed Fourier neural operators recover plasmoid formation in sparse SRRMHD vortex data where data-only models fail, and transformer operators approximate AMR jet evolution, marking first reported uses in th...
- Learning on the Temporal Tangent Bundle for Physics-Informed Neural Networks
  Parameterizing the temporal derivative in PINNs and reconstructing via Volterra integral yields 100-200x lower errors on advection, Burgers, and Klein-Gordon equations while proving equivalence to the original PDE.
- Flow Field Reconstruction with Sensor Placement Policy Learning
  A directional GNN combined with constrained PPO jointly improves flow-field reconstruction accuracy and sensor layout selection in realistic fluid dynamics settings.
- U-HNO: A U-shaped Hybrid Neural Operator with Sparse-Point Adaptive Routing for Non-stationary PDE Dynamics
  U-HNO uses adaptive per-point routing in a U-shaped hybrid architecture to achieve state-of-the-art accuracy on PDE benchmarks with sharp localized features.
- Continuity Laws for Sequential Models
  S4 models exhibit stable time-continuity unlike sensitive S6 models, with task continuity predicting performance and enabling temporal subsampling for better efficiency.
- Recovering Physical Dynamics from Discrete Observations via Intrinsic Differential Consistency
  Enforcing semi-group consistency on a time-conditioned secant velocity field via Symmetry Rupture improves rollout accuracy and efficiency when learning physical dynamics from discrete observations.
- Excluding the Target Domain Improves Extrapolation: Deconfounded Hierarchical Physics Constraints
  Deconfounded Hierarchical Gate with counterfactual estimation and hierarchical constraints achieves 46% better RMSE on out-of-distribution battery temperature extrapolation, with excluding target data from pretraining...
- Universal Neural Propagator: Learning Time Evolution in Many-Body Quantum Systems
  The Universal Neural Propagator is a single neural model trained self-supervised to predict time evolution in driven quantum many-body systems across arbitrary protocols and initial states.
- Shape: A Self-Supervised 3D Geometry Foundation Model for Industrial CAD Analysis
  A 10.9M-parameter self-supervised model pretrained on 61k CAD meshes achieves R²=0.729 reconstruction and 98.1% top-1 retrieval on held-out data via masked normalized geometry reconstruction and multi-resolution contr...
- A Multimodal Vision Transformer-based Modeling Framework for Prediction of Fluid Flows in Energy Systems
  A multimodal SwinV2-UNet vision transformer conditioned on data modality and time predicts spatiotemporal fluid flows and reconstructs unobserved fields from limited views using CFD data of argon jet injection.
- Di-BiLPS: Denoising induced Bidirectional Latent-PDE-Solver under Sparse Observations
  Di-BiLPS combines a variational autoencoder, latent diffusion, and contrastive learning to achieve state-of-the-art accuracy on PDE problems with as little as 3% observations while supporting zero-shot super-resolutio...
- Multi-scale Dynamic Wake Modeling of Floating Offshore Wind Turbines via Fourier Neural Operators and Physics-Informed Neural Networks
  FNO captures large- and small-scale wake structures, higher harmonics, and temporal variations more accurately and trains eight times faster than PINN for FOWT wake prediction.
- Multiscale Physics-Informed Neural Network for Complex Fluid Flows with Long-Range Dependencies
  DDS-PINN uses localized neural networks plus a unified global loss to model multiscale fluid flows with long-range dependencies, achieving CFD-comparable accuracy on laminar backward-facing step flow with zero data an...