citation dossier
Neural operator: Graph kernel network for partial differential equations
why this work matters in Pith
Pith has found this work cited in 18 reviewed papers. Its strongest current cluster is cs.LG (10 papers). The largest review-status bucket among the citing papers is UNVERDICTED (18 papers). For highly cited works, this page shows a dossier first and a bounded explorer second; it never tries to render every citing paper at once.
verdicts
UNVERDICTED: 18
representative citing papers
Fourier Neural Operator parameterizes integral kernels in Fourier space to learn parametric PDE solution operators, delivering up to 1000x speedups and zero-shot super-resolution on turbulent Navier-Stokes flows.
Local neural operators on 3x3x3 patches, composed via Schwarz iteration, solve large-scale nonlinear elasticity on arbitrary geometries without domain-specific retraining.
Proposes a vector-valued RKHS framework for Bayesian optimization with structured measurements, deriving concentration bounds and UCB-based regret guarantees that recover sublinear rates.
QuadNorm uses quadrature-based moments instead of uniform averaging in normalization layers, achieving O(h²) consistency across resolutions and better cross-resolution transfer in neural operators.
A hybrid solver-neural framework achieves global error O(τ^γ ln(1/τ)) for nonlinear dispersive equations by training a lightweight network on the residual defect inside the solver loop while preserving uniform stability.
A multilinear operator learned on PCA coefficients maps time-since-ignition inputs to smoke outputs, matching Monte Carlo accuracy with half the model calls and outperforming prior classifiers on holdout data.
Hybrid FNO-LBM accelerates porous media flow convergence by up to 70% via neural initialization and stabilizes unsteady simulations through embedded FNO rollouts, allowing small models to match larger ones in accuracy.
Physics-informed Fourier neural operators recover plasmoid formation in sparse SRRMHD vortex data where data-only models fail, and transformer operators approximate AMR jet evolution, marking first reported uses in these relativistic MHD settings.
Parameterizing the temporal derivative in PINNs and reconstructing the solution via a Volterra integral yields 100-200x lower errors on advection, Burgers, and Klein-Gordon equations, with the reformulation proven equivalent to the original PDE.
S4 models exhibit stable time-continuity, unlike the more sensitive S6 models; task continuity predicts performance and enables temporal subsampling for better efficiency.
Enforcing semi-group consistency on a time-conditioned secant velocity field via Symmetry Rupture improves rollout accuracy and efficiency when learning physical dynamics from discrete observations.
A Deconfounded Hierarchical Gate with counterfactual estimation and hierarchical constraints achieves 46% better RMSE on out-of-distribution battery temperature extrapolation, and excluding target-domain data from pretraining outperforms including it.
The Universal Neural Propagator is a single neural model trained self-supervised to predict time evolution in driven quantum many-body systems across arbitrary protocols and initial states.
A 10.9M-parameter self-supervised model pretrained on 61k CAD meshes achieves R²=0.729 reconstruction and 98.1% top-1 retrieval on held-out data via masked normalized geometry reconstruction and multi-resolution contrastive learning.
A multimodal SwinV2-UNet vision transformer conditioned on data modality and time predicts spatiotemporal fluid flows and reconstructs unobserved fields from limited views using CFD data of argon jet injection.
FNO captures large- and small-scale wake structures, higher harmonics, and temporal variations more accurately and trains eight times faster than PINN for FOWT wake prediction.
DDS-PINN uses localized neural networks plus a unified global loss to model multiscale fluid flows with long-range dependencies, achieving CFD-comparable accuracy on laminar backward-facing step flow with zero data and O(10^-4) error on turbulent flow with only 500 supervision points.
citing papers explorer
- A meshfree exterior calculus for generalizable and data-efficient learning of physics from point clouds
MEEC equips point clouds with a discrete exterior calculus that satisfies exact conservation and is differentiable in point positions, allowing a single trained kernel to produce compatible physics on unseen geometries and parameters.
- Fourier Neural Operator for Parametric Partial Differential Equations
Fourier Neural Operator parameterizes integral kernels in Fourier space to learn parametric PDE solution operators, delivering up to 1000x speedups and zero-shot super-resolution on turbulent Navier-Stokes flows.
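For orientation, a minimal sketch of the Fourier-space kernel idea follows: transform to Fourier space, multiply a truncated set of modes by learned complex weights, and transform back. It is a generic 1D toy in PyTorch with placeholder channel counts and mode truncation, not the FNO authors' implementation.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Toy 1D Fourier layer: FFT -> truncate -> learned complex weights -> iFFT."""
    def __init__(self, in_channels, out_channels, n_modes):
        super().__init__()
        self.n_modes = n_modes  # number of low-frequency Fourier modes kept
        scale = 1.0 / (in_channels * out_channels)
        self.weight = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, n_modes, dtype=torch.cfloat)
        )

    def forward(self, x):            # x: (batch, in_channels, n_points)
        x_ft = torch.fft.rfft(x)     # (batch, in_channels, n_points // 2 + 1)
        out_ft = torch.zeros(
            x.size(0), self.weight.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device
        )
        # multiply the retained modes by the learned kernel in Fourier space
        out_ft[..., :self.n_modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.n_modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))

# usage: one layer applied to a batch of 1D fields sampled on 128 points
layer = SpectralConv1d(in_channels=3, out_channels=3, n_modes=16)
y = layer(torch.randn(8, 3, 128))   # same resolution in, same resolution out
```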
- Neural-Schwarz Tiling for Geometry-Universal PDE Solving at Scale
Local neural operators on 3x3x3 patches, composed via Schwarz iteration, solve large-scale nonlinear elasticity on arbitrary geometries without domain-specific retraining.
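To make the composition mechanism concrete, here is a toy 1D overlapping-Schwarz sweep in which a direct tridiagonal solve stands in for the learned patch operator; the patch layout, overlap, and Poisson test problem are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_solve(f_local, h, left_bc, right_bc):
    """Direct solve of -u'' = f on one patch (stand-in for a learned local
    operator mapping (forcing, boundary data) -> patch solution)."""
    n = len(f_local)  # interior points of the patch
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f_local.copy()
    rhs[0] += left_bc / h**2
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

# global problem: -u'' = f on (0, 1) with u(0) = u(1) = 0
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)   # exact solution is sin(pi x)
u = np.zeros(n)

# overlapping patches swept repeatedly (a multiplicative Schwarz iteration)
patches = [(0, 120), (80, 200)]    # index ranges overlapping on [80, 120]
for _ in range(20):
    for a, b in patches:
        interior = slice(a + 1, b)
        u[interior] = local_solve(f[interior], h, u[a], u[b])

print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```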
- Bayesian Optimization with Structured Measurements: A Vector-Valued RKHS Framework
Proposes a vector-valued RKHS framework for Bayesian optimization with structured measurements, deriving concentration bounds and UCB-based regret guarantees that recover sublinear rates.
- QuadNorm: Resolution-Robust Normalization for Neural Operators
QuadNorm uses quadrature-based moments instead of uniform averaging in normalization layers, achieving O(h²) consistency across resolutions and better cross-resolution transfer in neural operators.
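A rough sketch of the underlying idea, replacing per-point averaging with quadrature-weighted moments, is below; the trapezoidal weights and the simple layer-norm-style formula are assumptions for illustration, not the paper's layer.

```python
import torch

def trapezoid_weights(x):
    """Composite trapezoidal quadrature weights for a sorted 1D grid x."""
    dx = x[1:] - x[:-1]
    w = torch.zeros_like(x)
    w[:-1] += 0.5 * dx
    w[1:] += 0.5 * dx
    return w / w.sum()                     # weights sum to 1

def quad_layer_norm(u, x, eps=1e-6):
    """Normalize a field u sampled on grid x using quadrature moments
    instead of the plain per-point mean and variance."""
    w = trapezoid_weights(x)               # (n_points,)
    mean = (w * u).sum(dim=-1, keepdim=True)
    var = (w * (u - mean) ** 2).sum(dim=-1, keepdim=True)
    return (u - mean) / torch.sqrt(var + eps)

# the quadrature moments stay consistent when the same field is resampled
for n in (64, 512):
    x = torch.linspace(0.0, 1.0, n)
    u = torch.sin(6 * x)
    w = trapezoid_weights(x)
    print(n, float((w * u).sum()), float((w * u**2).sum()))
```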
- Hybrid Iterative Neural Low-Regularity Integrator for Nonlinear Dispersive Equations
A hybrid solver-neural framework achieves global error O(τ^γ ln(1/τ)) for nonlinear dispersive equations by training a lightweight network on the residual defect inside the solver loop while preserving uniform stability.
- Enabling Real-Time Training of a Wildfire-to-Smoke Map with Multilinear Operators
A multilinear operator learned on PCA coefficients maps time-since-ignition inputs to smoke outputs, matching Monte Carlo accuracy with half the model calls and outperforming prior classifiers on holdout data.
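As a schematic of this reduced-order recipe, the sketch below projects synthetic snapshots onto PCA modes and fits a polynomial-feature least-squares map from a scalar input to the PCA coefficients; the data, feature set, and dimensions are stand-ins, not the paper's wildfire model.

```python
import numpy as np

# synthetic stand-in snapshots: field(x; t) built from three spatial modes whose
# amplitudes depend on t (playing the role of the time-since-ignition input)
x = np.linspace(0.0, 1.0, 200)
t_train = np.linspace(0.0, 1.0, 40)

def truth(t):
    t = np.atleast_1d(t)[:, None]
    return (t * np.sin(np.pi * x)
            + t**2 * np.sin(2 * np.pi * x)
            + 0.5 * t**3 * np.sin(3 * np.pi * x))

fields = truth(t_train)                                  # (40, 200) snapshot matrix

# PCA of the centered snapshots via SVD
mean = fields.mean(axis=0)
_, _, Vt = np.linalg.svd(fields - mean, full_matrices=False)
k = 5
coeffs = (fields - mean) @ Vt[:k].T                      # (40, k) reduced coordinates

# fit a small polynomial-feature linear map t -> PCA coefficients by least squares
def features(t):
    t = np.atleast_1d(t)
    return np.stack([np.ones_like(t), t, t**2, t**3], axis=-1)

W, *_ = np.linalg.lstsq(features(t_train), coeffs, rcond=None)

def surrogate(t):
    """Predict full fields for new inputs by mapping back from PCA space."""
    return features(t) @ W @ Vt[:k] + mean

t_test = np.array([0.33, 0.77])
pred = surrogate(t_test)
err = np.linalg.norm(pred - truth(t_test)) / np.linalg.norm(truth(t_test))
print("relative test error:", err)                       # tiny for this low-rank toy
```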
- Hybrid Fourier Neural Operator-Lattice Boltzmann Method
Hybrid FNO-LBM accelerates porous media flow convergence by up to 70% via neural initialization and stabilizes unsteady simulations through embedded FNO rollouts, allowing small models to match larger ones in accuracy.
- Learning Neural Operator Surrogates for the Black Hole Accretion Code
Physics-informed Fourier neural operators recover plasmoid formation in sparse SRRMHD vortex data where data-only models fail, and transformer operators approximate AMR jet evolution, marking first reported uses in these relativistic MHD settings.
- Learning on the Temporal Tangent Bundle for Physics-Informed Neural Networks
Parameterizing the temporal derivative in PINNs and reconstructing the solution via a Volterra integral yields 100-200x lower errors on advection, Burgers, and Klein-Gordon equations, with the reformulation proven equivalent to the original PDE.
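The reconstruction step can be illustrated on a toy ODE: a network parameterizes v ≈ du/dt, the state is rebuilt as the Volterra integral u(t) = u(0) + ∫₀ᵗ v(s) ds by cumulative trapezoidal quadrature, and the residual is imposed on the reconstructed state. The problem, network size, and training budget below are placeholder choices, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy target: du/dt = -u with u(0) = 1 on t in [0, 1] (exact solution exp(-t))
t = torch.linspace(0.0, 1.0, 101).unsqueeze(-1)          # fixed collocation grid
u0 = 1.0

v_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # v ~ du/dt

def reconstruct_u(v):
    """u(t) = u(0) + int_0^t v(s) ds via cumulative trapezoidal quadrature."""
    dt = t[1:] - t[:-1]
    increments = 0.5 * (v[1:] + v[:-1]) * dt
    return u0 + torch.cat([torch.zeros(1, 1), torch.cumsum(increments, dim=0)])

opt = torch.optim.Adam(v_net.parameters(), lr=1e-2)
for _ in range(2000):
    v = v_net(t)                      # the network parameterizes the derivative
    u = reconstruct_u(v)              # the state is always the Volterra integral
    loss = ((v + u) ** 2).mean()      # residual of du/dt + u = 0 on the reconstruction
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    u = reconstruct_u(v_net(t))
print("max |u - exp(-t)|:", float((u - torch.exp(-t)).abs().max()))
```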
- Continuity Laws for Sequential Models
S4 models exhibit stable time-continuity, unlike the more sensitive S6 models; task continuity predicts performance and enables temporal subsampling for better efficiency.
- Recovering Physical Dynamics from Discrete Observations via Intrinsic Differential Consistency
Enforcing semi-group consistency on a time-conditioned secant velocity field via Symmetry Rupture improves rollout accuracy and efficiency when learning physical dynamics from discrete observations.
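The semi-group property itself is easy to state in code: a time-conditioned step operator F(u, Δt) should compose so that F(u, Δt₁ + Δt₂) ≈ F(F(u, Δt₁), Δt₂). The sketch below shows such a consistency penalty on a toy step model; the Symmetry Rupture mechanism and the paper's architecture are not reproduced here.

```python
import torch
import torch.nn as nn

class StepOperator(nn.Module):
    """Toy time-conditioned propagator: maps (state u, step size dt) -> next state."""
    def __init__(self, dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, u, dt):                    # u: (batch, dim), dt: (batch, 1)
        return u + dt * self.net(torch.cat([u, dt], dim=-1))

def semigroup_penalty(model, u, dt1, dt2):
    """Penalize violations of F(u, dt1 + dt2) ~= F(F(u, dt1), dt2)."""
    one_step = model(u, dt1 + dt2)
    two_step = model(model(u, dt1), dt2)
    return ((one_step - two_step) ** 2).mean()

model = StepOperator()
u = torch.randn(32, 16)
dt1 = torch.rand(32, 1) * 0.1
dt2 = torch.rand(32, 1) * 0.1
# would be added to the usual data-fitting loss with some weight during training
print("consistency penalty:", float(semigroup_penalty(model, u, dt1, dt2)))
```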
- Excluding the Target Domain Improves Extrapolation: Deconfounded Hierarchical Physics Constraints
A Deconfounded Hierarchical Gate with counterfactual estimation and hierarchical constraints achieves 46% better RMSE on out-of-distribution battery temperature extrapolation, and excluding target-domain data from pretraining outperforms including it.
- Universal Neural Propagator: Learning Time Evolution in Many-Body Quantum Systems
The Universal Neural Propagator is a single neural model trained self-supervised to predict time evolution in driven quantum many-body systems across arbitrary protocols and initial states.
- Shape: A Self-Supervised 3D Geometry Foundation Model for Industrial CAD Analysis
A 10.9M-parameter self-supervised model pretrained on 61k CAD meshes achieves R²=0.729 reconstruction and 98.1% top-1 retrieval on held-out data via masked normalized geometry reconstruction and multi-resolution contrastive learning.
- A Multimodal Vision Transformer-based Modeling Framework for Prediction of Fluid Flows in Energy Systems
A multimodal SwinV2-UNet vision transformer conditioned on data modality and time predicts spatiotemporal fluid flows and reconstructs unobserved fields from limited views using CFD data of argon jet injection.
- Multi-scale Dynamic Wake Modeling of Floating Offshore Wind Turbines via Fourier Neural Operators and Physics-Informed Neural Networks
FNO captures large- and small-scale wake structures, higher harmonics, and temporal variations more accurately and trains eight times faster than PINN for FOWT wake prediction.
- Multiscale Physics-Informed Neural Network for Complex Fluid Flows with Long-Range Dependencies
DDS-PINN uses localized neural networks plus a unified global loss to model multiscale fluid flows with long-range dependencies, achieving CFD-comparable accuracy on laminar backward-facing step flow with zero data and O(10^-4) error on turbulent flow with only 500 supervision points.