pith. machine review for the scientific record.

arxiv: 2003.03485 · v1 · submitted 2020-03-07 · 💻 cs.LG · cs.NA · math.NA · stat.ML

Recognition: 2 theorem links · Lean Theorem

Neural Operator: Graph Kernel Network for Partial Differential Equations

Authors on Pith · no claims yet

Pith reviewed 2026-05-14 22:13 UTC · model grok-4.3

classification 💻 cs.LG · cs.NA · math.NA · stat.ML
keywords neural operators · graph neural networks · partial differential equations · operator learning · kernel methods · discretization generalization · infinite-dimensional mappings

The pith

A single set of network parameters can describe mappings between infinite-dimensional spaces and their finite approximations using graph kernel networks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper generalizes neural networks to learn operators between infinite-dimensional function spaces. It does so by composing nonlinear activations with integral operators whose kernels are evaluated by message passing on graphs. A single set of network parameters can then represent the PDE input-to-solution map across different discretizations and resolutions. This matters because it offers a way to build solvers that are not tied to a specific numerical grid or method. Experiments show the network generalizes as claimed and performs competitively with traditional solvers.

Core claim

The central discovery is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces. Approximation of the infinite-dimensional mapping is formulated by composing nonlinear activation functions and a class of integral operators, with kernel integration computed by message passing on graph networks.
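The layer structure described above can be sketched concretely. The following is a minimal toy implementation of one graph-kernel update, not the authors' exact architecture: the hidden state at each node is updated by a local linear map plus a neighbour average that plays the role of the kernel integral. The linear kernel parameterization and the 8-nearest-neighbour graph are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(edge_feat, theta):
    """Toy stand-in for the learned kernel k(x, y, a(x), a(y)):
    a linear map from edge features to a (width x width) matrix.
    In the paper the kernel is a small neural network; the linear
    form here is only for illustration."""
    width_sq = theta.shape[-1]
    width = int(np.sqrt(width_sq))
    return (edge_feat @ theta).reshape(-1, width, width)

def gkn_layer(v, x, a, nbrs, W, theta):
    """One graph-kernel layer:
    v_{t+1}(x_i) = relu( W v_t(x_i)
                         + mean_j k(x_i, x_j, a_i, a_j) v_t(x_j) ),
    where the mean over neighbours j is a Monte Carlo estimate of
    the integral  int k(x_i, y, a(x_i), a(y)) v_t(y) dy."""
    n, width = v.shape
    out = v @ W.T
    for i in range(n):
        j = nbrs[i]  # neighbour indices of node i
        edge = np.concatenate(
            [np.broadcast_to(x[i], (len(j), x.shape[1])),
             x[j],
             np.broadcast_to(a[i], (len(j), 1)),
             a[j][:, None]], axis=1)
        K = kernel(edge, theta)                       # (|j|, width, width)
        out[i] += np.einsum('jkl,jl->k', K, v[j]) / len(j)
    return np.maximum(out, 0.0)

# 40 random points in [0,1]^2 carrying a scalar input field a
n, d, width = 40, 2, 8
x = rng.random((n, d))
a = rng.random(n)
v = rng.standard_normal((n, width))
nbrs = [np.argsort(((x - x[i]) ** 2).sum(1))[1:9] for i in range(n)]
W = rng.standard_normal((width, width)) * 0.1
theta = rng.standard_normal((2 * d + 2, width * width)) * 0.1

v_next = gkn_layer(v, x, a, nbrs, W, theta)
print(v_next.shape)  # (40, 8)
```

Because the kernel takes only coordinates and input values, the same `W` and `theta` can be reused on a point cloud of any size: only `nbrs` changes with the discretization.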

What carries the argument

A graph kernel network that approximates integral operators by message passing on graphs, enabling discretization-independent operator learning.

Load-bearing premise

Message passing on graphs faithfully approximates the integral operators for PDE mappings without discretization-specific artifacts that prevent generalization.

What would settle it

If a network trained on one grid resolution or method shows large accuracy drops when tested on a different resolution or method, compared to training directly on that target, the generalization claim would fail.

read the original abstract

The classical development of neural networks has been primarily for mappings between a finite-dimensional Euclidean space and a set of classes, or between two finite-dimensional Euclidean spaces. The purpose of this work is to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators). The key innovation in our work is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces. We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators. The kernel integration is computed by message passing on graph networks. This approach has substantial practical consequences which we will illustrate in the context of mappings between input data to partial differential equations (PDEs) and their solutions. In this context, such learned networks can generalize among different approximation methods for the PDE (such as finite difference or finite element methods) and among approximations corresponding to different underlying levels of resolution and discretization. Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to the state of the art solvers.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces the Neural Operator, realized as a Graph Kernel Network, to approximate mappings between infinite-dimensional function spaces. The central claim is that a single set of learned parameters suffices to represent the operator both in the continuous setting and across different finite-dimensional discretizations (varying resolutions, finite-difference vs. finite-element meshes) by composing nonlinear activations with integral operators that are evaluated via graph message passing.

Significance. If the discretization-invariance property holds, the result would be a meaningful advance for operator learning in scientific computing: models trained on one mesh type or resolution could be deployed on others without retraining, addressing a practical bottleneck in neural PDE solvers.

major comments (2)
  1. [§3] §3 (Graph Kernel Network formulation): the argument that message passing realizes a discretization-invariant integral operator is not shown. The edge features and aggregation depend on the concrete graph constructed from the mesh; no analysis establishes that the learned kernel function commutes with changes in point set or adjacency, so the observed cross-discretization performance may be an artifact of the training distribution rather than evidence of an infinite-dimensional operator.
  2. [Experiments] Experimental section (cross-discretization tests): the quantitative evidence for generalization (training on one discretization and testing on another) lacks error bars, number of independent runs, and explicit comparison of mesh types (FD vs. FE). Without these, the central claim that a single parameter set works across approximations cannot be assessed.
minor comments (2)
  1. [Abstract] Abstract: the statement of 'competitive performance' should name the baselines and metrics used.
  2. [§3] Notation: the precise definition of the kernel function k(x,y) and how it is parameterized inside the message-passing layers should be stated explicitly for reproducibility.
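One way to make the kernel definition requested in the second minor comment explicit is the following sketch (an assumption about a plausible parameterization, not the authors' exact one): k(x, y, a(x), a(y)) as a small two-layer MLP on the concatenated edge features, whose output is reshaped into a matrix acting on the hidden state carried along the edge.

```python
import numpy as np

rng = np.random.default_rng(1)

d, width, hidden = 2, 8, 16
W1 = rng.standard_normal((2 * d + 2, hidden)) * 0.3   # edge features -> hidden
W2 = rng.standard_normal((hidden, width * width)) * 0.3

def kappa(x_i, x_j, a_i, a_j):
    """Matrix-valued kernel k(x_i, x_j, a_i, a_j) as a tiny MLP
    on the edge feature (x_i, x_j, a(x_i), a(x_j)) in R^{2d+2}."""
    e = np.concatenate([x_i, x_j, [a_i], [a_j]])
    h = np.tanh(e @ W1)
    return (h @ W2).reshape(width, width)

K = kappa(np.array([0.2, 0.7]), np.array([0.5, 0.1]), 0.3, 0.9)
print(K.shape)  # (8, 8)
```

Stating the input features, hidden sizes, and output reshaping at this level of detail would address the reproducibility concern.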

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments on our manuscript introducing the Neural Operator realized as a Graph Kernel Network. We address each major comment point by point below, indicating planned revisions where appropriate.

read point-by-point responses
  1. Referee: [§3] §3 (Graph Kernel Network formulation): the argument that message passing realizes a discretization-invariant integral operator is not shown. The edge features and aggregation depend on the concrete graph constructed from the mesh; no analysis establishes that the learned kernel function commutes with changes in point set or adjacency, so the observed cross-discretization performance may be an artifact of the training distribution rather than evidence of an infinite-dimensional operator.

    Authors: We thank the referee for this observation. The Graph Kernel Network formulates the operator by composing nonlinear activations with integral operators whose kernels are functions of spatial coordinates. Message passing on the graph approximates the integral using the available point set, but the learned kernel parameters remain independent of the specific mesh or adjacency structure. This design ensures the same parameter set can be applied across different discretizations. While the manuscript does not include a formal proof that the kernel commutes with arbitrary changes in point sets, the architecture is constructed precisely to realize a discretization-invariant operator in the continuous limit. We will revise §3 to provide a clearer explanation of how the coordinate-based kernel and message-passing aggregation support this invariance property. revision: partial

  2. Referee: [Experiments] Experimental section (cross-discretization tests): the quantitative evidence for generalization (training on one discretization and testing on another) lacks error bars, number of independent runs, and explicit comparison of mesh types (FD vs. FE). Without these, the central claim that a single parameter set works across approximations cannot be assessed.

    Authors: We agree that reporting error bars, the number of independent runs, and explicit FD vs. FE comparisons would strengthen the experimental evidence. In the revised manuscript we will add error bars computed over multiple independent training runs, state the number of runs performed, and include direct side-by-side results for finite-difference and finite-element meshes in the cross-discretization experiments. revision: yes

Circularity Check

0 steps flagged

No circularity: architecture and generalization claims are independent constructions supported by experiments

full rationale

The paper presents a new network architecture that composes nonlinear activations with integral operators approximated via graph message passing. This formulation is introduced as an explicit design choice for learning operators between function spaces, without any equation or parameter being defined in terms of the target performance metric or the same data used for evaluation. The claim of cross-discretization generalization is supported by experimental results on different meshes and methods (FD vs. FE), not by reducing the output to a fitted quantity or a self-citation chain. No load-bearing step equates a prediction to its input by construction, and the derivation chain is checked against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The central claim depends on the unstated assumption that graph message passing can serve as a faithful, discretization-invariant surrogate for the integral operators that define the target mapping; no free parameters or invented entities are explicitly introduced in the abstract.

pith-pipeline@v0.9.0 · 5527 in / 1030 out tokens · 30155 ms · 2026-05-14T22:13:20.539096+00:00 · methodology


Forward citations

Cited by 22 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. A meshfree exterior calculus for generalizable and data-efficient learning of physics from point clouds

    cs.LG 2026-05 unverdicted novelty 8.0

    MEEC equips point clouds with a discrete exterior calculus that satisfies exact conservation and is differentiable in point positions, allowing a single trained kernel to produce compatible physics on unseen geometrie...

  2. Fourier Neural Operator for Parametric Partial Differential Equations

    cs.LG 2020-10 unverdicted novelty 8.0

    Fourier Neural Operator parameterizes integral kernels in Fourier space to learn parametric PDE solution operators, delivering up to 1000x speedups and zero-shot super-resolution on turbulent Navier-Stokes flows.

  3. Discovering Physical Directions in Weight Space: Composing Neural PDE Experts

    cs.LG 2026-05 unverdicted novelty 7.0

    Fine-tuning neural PDE operators to regime endpoints reveals a physical direction in weight space that CCM uses to compose accurate merged models for new or extrapolated regimes from metadata or short prefixes.

  4. Neural-Schwarz Tiling for Geometry-Universal PDE Solving at Scale

    cs.LG 2026-05 unverdicted novelty 7.0

    Local neural operators on 3x3x3 patches, composed via Schwarz iteration, solve large-scale nonlinear elasticity on arbitrary geometries without domain-specific retraining.

  5. Bayesian Optimization with Structured Measurements: A Vector-Valued RKHS Framework

    cs.LG 2026-05 unverdicted novelty 7.0

    Proposes a vector-valued RKHS framework for Bayesian optimization with structured measurements, deriving concentration bounds and UCB-based regret guarantees that recover sublinear rates.

  6. QuadNorm: Resolution-Robust Normalization for Neural Operators

    cs.LG 2026-05 unverdicted novelty 7.0

    QuadNorm uses quadrature-based moments instead of uniform averaging in normalization layers, achieving O(h²) consistency across resolutions and better cross-resolution transfer in neural operators.

  7. Hybrid Iterative Neural Low-Regularity Integrator for Nonlinear Dispersive Equations

    cs.LG 2026-05 unverdicted novelty 7.0

    A hybrid solver-neural framework achieves global error O(τ^γ ln(1/τ)) for nonlinear dispersive equations by training a lightweight network on the residual defect inside the solver loop while preserving uniform stability.

  8. Enabling Real-Time Training of a Wildfire-to-Smoke Map with Multilinear Operators

    cs.LG 2026-05 unverdicted novelty 7.0

    A multilinear operator learned on PCA coefficients maps time-since-ignition inputs to smoke outputs, matching Monte Carlo accuracy with half the model calls and outperforming prior classifiers on holdout data.

  9. Hybrid Fourier Neural Operator-Lattice Boltzmann Method

    physics.flu-dyn 2026-04 unverdicted novelty 7.0

    Hybrid FNO-LBM accelerates porous media flow convergence by up to 70% via neural initialization and stabilizes unsteady simulations through embedded FNO rollouts, allowing small models to match larger ones in accuracy.

  10. Learning Neural Operator Surrogates for the Black Hole Accretion Code

    astro-ph.HE 2026-04 unverdicted novelty 7.0

    Physics-informed Fourier neural operators recover plasmoid formation in sparse SRRMHD vortex data where data-only models fail, and transformer operators approximate AMR jet evolution, marking first reported uses in th...

  11. Learning on the Temporal Tangent Bundle for Physics-Informed Neural Networks

    math.NA 2026-04 unverdicted novelty 7.0

    Parameterizing the temporal derivative in PINNs and reconstructing via Volterra integral yields 100-200x lower errors on advection, Burgers, and Klein-Gordon equations while proving equivalence to the original PDE.

  12. Flow Field Reconstruction with Sensor Placement Policy Learning

    cs.CE 2026-05 unverdicted novelty 6.0

    A directional GNN combined with constrained PPO jointly improves flow-field reconstruction accuracy and sensor layout selection in realistic fluid dynamics settings.

  13. U-HNO: A U-shaped Hybrid Neural Operator with Sparse-Point Adaptive Routing for Non-stationary PDE Dynamics

    cs.LG 2026-05 unverdicted novelty 6.0

    U-HNO uses adaptive per-point routing in a U-shaped hybrid architecture to achieve state-of-the-art accuracy on PDE benchmarks with sharp localized features.

  14. Continuity Laws for Sequential Models

    cs.LG 2026-05 unverdicted novelty 6.0

    S4 models exhibit stable time-continuity unlike sensitive S6 models, with task continuity predicting performance and enabling temporal subsampling for better efficiency.

  15. Recovering Physical Dynamics from Discrete Observations via Intrinsic Differential Consistency

    cs.LG 2026-05 unverdicted novelty 6.0

    Enforcing semi-group consistency on a time-conditioned secant velocity field via Symmetry Rupture improves rollout accuracy and efficiency when learning physical dynamics from discrete observations.

  16. Excluding the Target Domain Improves Extrapolation: Deconfounded Hierarchical Physics Constraints

    cs.LG 2026-05 unverdicted novelty 6.0

    Deconfounded Hierarchical Gate with counterfactual estimation and hierarchical constraints achieves 46% better RMSE on out-of-distribution battery temperature extrapolation, with excluding target data from pretraining...

  17. Universal Neural Propagator: Learning Time Evolution in Many-Body Quantum Systems

    quant-ph 2026-05 unverdicted novelty 6.0

    The Universal Neural Propagator is a single neural model trained self-supervised to predict time evolution in driven quantum many-body systems across arbitrary protocols and initial states.

  18. Shape: A Self-Supervised 3D Geometry Foundation Model for Industrial CAD Analysis

    cs.CV 2026-04 unverdicted novelty 6.0

    A 10.9M-parameter self-supervised model pretrained on 61k CAD meshes achieves R²=0.729 reconstruction and 98.1% top-1 retrieval on held-out data via masked normalized geometry reconstruction and multi-resolution contr...

  19. A Multimodal Vision Transformer-based Modeling Framework for Prediction of Fluid Flows in Energy Systems

    physics.flu-dyn 2026-04 unverdicted novelty 6.0

    A multimodal SwinV2-UNet vision transformer conditioned on data modality and time predicts spatiotemporal fluid flows and reconstructs unobserved fields from limited views using CFD data of argon jet injection.

  20. Di-BiLPS: Denoising induced Bidirectional Latent-PDE-Solver under Sparse Observations

    cs.LG 2026-05 unverdicted novelty 5.0

    Di-BiLPS combines a variational autoencoder, latent diffusion, and contrastive learning to achieve state-of-the-art accuracy on PDE problems with as little as 3% observations while supporting zero-shot super-resolutio...

  21. Multi-scale Dynamic Wake Modeling of Floating Offshore Wind Turbines via Fourier Neural Operators and Physics-Informed Neural Networks

    physics.flu-dyn 2026-04 unverdicted novelty 5.0

    FNO captures large- and small-scale wake structures, higher harmonics, and temporal variations more accurately and trains eight times faster than PINN for FOWT wake prediction.

  22. Multiscale Physics-Informed Neural Network for Complex Fluid Flows with Long-Range Dependencies

    physics.flu-dyn 2026-04 unverdicted novelty 5.0

    DDS-PINN uses localized neural networks plus a unified global loss to model multiscale fluid flows with long-range dependencies, achieving CFD-comparable accuracy on laminar backward-facing step flow with zero data an...

Reference graph

Works this paper leans on

141 extracted references · 141 canonical work pages · cited by 22 Pith papers · 16 internal anchors
