Fourier Neural Operator for Parametric Partial Differential Equations
87 papers on Pith cite this work. Polarity classification is still being indexed.
abstract
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation. The Fourier neural operator is the first ML-based method to successfully model turbulent flows with zero-shot super-resolution. It is up to three orders of magnitude faster compared to traditional PDE solvers. Additionally, it achieves superior accuracy compared to previous learning-based solvers under fixed resolution.
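The abstract's key architectural idea, parameterizing the integral kernel directly in Fourier space, can be sketched in a few lines. Below is a minimal, hypothetical NumPy illustration (not the authors' implementation): a single 1-D Fourier layer whose complex weights act on the lowest `N_MODES` frequencies while higher modes are truncated. Because the weights live on frequencies rather than grid points, the same layer evaluates consistently on any discretization, which is what underlies the zero-shot super-resolution claim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectral weights on the lowest N_MODES frequencies;
# a trained FNO would learn these values.
N_MODES = 8
weights = rng.standard_normal(N_MODES) + 1j * rng.standard_normal(N_MODES)

def fourier_layer(u, weights):
    """One spectral convolution: FFT, weight the retained low modes,
    truncate the rest, and transform back to physical space."""
    u_hat = np.fft.rfft(u)                 # half-spectrum of the real input
    out_hat = np.zeros_like(u_hat)
    k = min(len(weights), len(u_hat))
    out_hat[:k] = u_hat[:k] * weights[:k]  # kernel applied in Fourier space
    return np.fft.irfft(out_hat, n=len(u))

# The same weights act on two different discretizations of [0, 1).
x_coarse = np.linspace(0.0, 1.0, 64, endpoint=False)
x_fine = np.linspace(0.0, 1.0, 256, endpoint=False)
v_coarse = fourier_layer(np.sin(2 * np.pi * x_coarse), weights)
v_fine = fourier_layer(np.sin(2 * np.pi * x_fine), weights)

# Outputs agree at shared grid points: the layer is resolution-invariant.
print(np.max(np.abs(v_fine[::4] - v_coarse)))  # ≈ 0, up to floating-point error
```

A full FNO stacks several such layers, each paired with a pointwise linear path and a nonlinearity, between a lifting and a projection map; this sketch shows only the spectral kernel multiplication at the core of each layer.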
fields
cs.LG 43, math.NA 9, physics.flu-dyn 6, cs.CE 5, cs.AI 2, cs.CL 2, cs.CV 2, cs.RO 2, math.OC 2, physics.comp-ph 2
roles
other 1
polarities
unclear 1
citing papers explorer
-
KAN: Kolmogorov-Arnold Networks
KANs with learnable univariate spline activations on edges achieve better accuracy than MLPs with fewer parameters, faster scaling, and direct visualization for scientific discovery.
-
Topology-Preserving Neural Operator Learning via Hodge Decomposition
Hodge Spectral Duality provides a topology-preserving neural operator by isolating unlearnable topological components via Hodge orthogonality and operator splitting.
-
Neural-Schwarz Tiling for Geometry-Universal PDE Solving at Scale
Local neural operators on 3x3x3 patches, composed via Schwarz iteration, solve large-scale nonlinear elasticity on arbitrary geometries without domain-specific retraining.
-
Approximation Theory of Laplacian-Based Neural Operators for Reaction-Diffusion System
Laplacian eigenfunction-based neural operators approximate the solution operator of the generalized Gierer-Meinhardt reaction-diffusion system with error bounds that imply only polynomial growth in parameters as accuracy improves.
-
Fixed-Point Neural Optimal Transport without Implicit Differentiation
A single-network fixed-point formulation for neural optimal transport eliminates adversarial min-max optimization and implicit differentiation while enforcing dual feasibility exactly.
-
Stable Long-Horizon PDE Forecasting via Latent Structured Spectral Propagators
A latent Structured Spectral Propagator enables stable autoregressive PDE forecasting by decoupling spatial details from recurrent modal dynamics.
-
Your Simulation Runs but Solves the Wrong Physics: PDE-Grounded Intent Verification for LLM-Generated Multiphysics Simulation Code
A new Intent Fidelity Score and refinement loop verify that LLM-generated simulation code matches the intended PDEs, improving performance on a 220-case benchmark where execution alone fails to ensure correctness.
-
CATO: Charted Attention for Neural PDE Operators
CATO learns a continuous latent chart for efficient axial attention on PDE meshes and adds derivative-aware supervision to improve accuracy and reduce oversmoothing on general geometries.
-
Physics-Informed Neural PDE Solvers via Spatio-Temporal MeanFlow
Spatio-Temporal MeanFlow adapts MeanFlow to PDEs by replacing the generative velocity field with the physical operator and extending the integral constraint to the spatio-temporal domain, yielding a unified solver for time-dependent and stationary equations with improved accuracy and generalization.
-
Controlling Transient Amplification Improves Long-horizon Rollouts
Commutativity regularization on Jacobians reduces transient error amplification in neural simulators, enabling stable rollouts over thousands of steps on physical and climate data.
-
PerFlow: Physics-Embedded Rectified Flow for Efficient Reconstruction and Uncertainty Quantification of Spatiotemporal Dynamics
PerFlow embeds physics constraints into rectified flow sampling through guidance-free conditioning and constraint-preserving projections, achieving efficient sparse reconstruction and uncertainty quantification for spatiotemporal dynamics.
-
PODiff: Latent Diffusion in Proper Orthogonal Decomposition Space for Scientific Super-Resolution
PODiff performs conditional diffusion in a fixed, variance-ordered POD latent space to enable efficient probabilistic super-resolution of high-dimensional scientific fields with lower memory and better-calibrated uncertainty than pixel-space or dropout baselines.
-
Quantitative Sobolev Approximation Bounds for Neural Operators with Empirical Validation on Burgers Equation
Neural operators approximate continuous operators from H^s to H^t with O(N^{-s/d}) error in H^t norm; FNOs on Burgers achieve H^1 errors to 10^{-7} and follow a power-law scaling with exponent ~1.4.
-
Isotropic Fourier Neural Operators
Isotropic Fourier Neural Operators enforce spatial symmetries in Fourier layers, improving PDE-solving performance while reducing parameters by up to 16x in 2D and 96x in 3D.
-
Online Safety Filter for Deformable Object Manipulation with Horizon Agnostic Neural Operators
A horizon-agnostic neural operator paired with a boundary control barrier function creates a real-time safety filter that raises safe trajectory rates by up to 22% on fluid manipulation tasks in simulation.
-
An approach to encode divergence-free stress fields in neural approximations based on stress potentials
A physics-encoded Fourier neural operator (PeFNO) uses stress potentials to enforce divergence-free stress fields by architecture design, yielding better equilibrium satisfaction than physics-informed or physics-guided FNOs for polycrystalline materials under uniaxial tension.
-
Hybrid Fourier Neural Operator-Lattice Boltzmann Method
Hybrid FNO-LBM accelerates porous media flow convergence by up to 70% via neural initialization and stabilizes unsteady simulations through embedded FNO rollouts, allowing small models to match larger ones in accuracy.
-
Robust Model-Based Iteration for Passive Gamma Emission Tomography
A safeguarded hybrid of Levenberg-Marquardt and learned operators achieves equivalent reconstruction quality for PGET in roughly one-third the iterations, with architecture-dependent robustness.
-
Learning Neural Operator Surrogates for the Black Hole Accretion Code
Physics-informed Fourier neural operators recover plasmoid formation in sparse SRRMHD vortex data where data-only models fail, and transformer operators approximate AMR jet evolution, marking first reported uses in these relativistic MHD settings.
-
Droplet-LNO: Physics-Informed Laplace Neural Operators for Accurate Prediction of Droplet Spreading Dynamics on Complex Surfaces
PI-LNO is a physics-informed neural operator that uses Laplace transforms and fluid physics constraints to accurately and rapidly predict droplet spreading dynamics on complex surfaces.
-
AI models of unstable flow exhibit hallucination
AI models of viscous fingering exhibit hallucinations from spectral bias; DeepFingers combines FNO and DeepONet with time-contrast conditioning to predict accurate finger dynamics while preserving mixing metrics.
-
Faster by Design: Interactive Aerodynamics via Neural Surrogates Trained on Expert-Validated CFD
A graph-based neural operator trained on expert-validated race-car CFD data reaches accuracy levels usable for early-stage interactive aerodynamic design exploration.
-
DeepRitzSplit Neural Operator for Phase-Field Models via Energy Splitting
A DeepRitzSplit neural operator trained on energy-split variational forms enforces dissipation in phase-field models and outperforms data-driven training in generalization while running faster than Fourier spectral methods on Allen-Cahn and dendritic growth cases.
-
Neuroscience Inspired Graph Operators Towards Edge-Deployable Virtual Sensing for Irregular Geometries
VS-GNO delivers 0.71-1.04% reconstruction error at 15-24.5% spiking rates versus 0.4% for a non-spiking baseline in sparse-to-dense virtual sensing.
-
G-PARC: Graph-Physics Aware Recurrent Convolutional Neural Networks for Spatiotemporal Dynamics on Unstructured Meshes
G-PARC embeds analytically computed differential operators via moving least squares on graphs into recurrent networks, achieving higher accuracy with 2-3x fewer parameters than prior graph PADL methods on nonlinear benchmarks.
-
DiLO: Decoupling Generative Priors and Neural Operators via Diffusion Latent Optimization for Inverse Problems
DiLO turns diffusion sampling into deterministic latent optimization to satisfy the manifold consistency requirement for neural operators in inverse problem solving.
-
Learning on the Temporal Tangent Bundle for Physics-Informed Neural Networks
Parameterizing the temporal derivative in PINNs and reconstructing via Volterra integral yields 100-200x lower errors on advection, Burgers, and Klein-Gordon equations while proving equivalence to the original PDE.
-
SPAMoE: Spectrum-Aware Hybrid Operator Framework for Full-Waveform Inversion
SPAMoE reduces average MAE by 44.4% on OpenFWI datasets for full-waveform inversion via a spectral-preserving DINO encoder and dynamic frequency-band routing to specialized neural operators.
-
Generative modeling of granular flow on inclined planes using conditional flow matching
A conditional flow matching model trained on DEM simulations reconstructs granular flow velocity fields from as little as 11-16% sparse boundary data, outperforming deterministic CNN baselines while providing uncertainty estimates via ensemble generation.
-
A Unified Multiscale Auxiliary PINN Framework for Generalized Phonon Transport
MTNet is a new auxiliary PINN framework that solves the generalized equation of phonon radiative transfer by converting it to a differential system, capturing multiscale phonon transport in nanostructures beyond standard approximations.
-
Toward AI-Driven Digital Twins for Metropolitan Floods: A Conditional Latent Dynamics Network Surrogate of the Shallow Water Equations
CLDNet is a conditional latent dynamics network surrogate for the shallow water equations that delivers 115x faster 96-hour flood forecasts on irregular metropolitan basins while maintaining usable accuracy against gauge data.
-
U-HNO: A U-shaped Hybrid Neural Operator with Sparse-Point Adaptive Routing for Non-stationary PDE Dynamics
U-HNO uses adaptive per-point routing in a U-shaped hybrid architecture to achieve state-of-the-art accuracy on PDE benchmarks with sharp localized features.
-
Compositional Neural Operators for Multi-Dimensional Fluid Dynamics
Compositional Neural Operators decompose multi-dimensional fluid PDEs into a library of pretrained elementary physics blocks assembled via an aggregator that minimizes data and physics residuals.
-
ShardTensor: Domain Parallelism for Scientific Machine Learning
ShardTensor is a domain-parallelism system for SciML that enables flexible scaling of extreme-resolution spatial datasets by removing the constraint of batch size one per device.
-
GenMed: A Pairwise Generative Reformulation of Medical Diagnostic Tasks
GenMed uses diffusion models to capture P(X,Y) for medical tasks and performs inference via gradient-based test-time optimization, supporting arbitrary observation combinations without retraining.
-
Don't Fix the Basis -- Learn It: Spectral Representation with Adaptive Basis Learning for PDEs
ABLE learns a spatially adaptive Parseval frame from data via an ancillary density to replace fixed bases in spectral neural operators for PDEs.
-
Intervention-Based Time Series Causal Discovery via Simulator-Generated Interventional Distributions
SVAR-FM uses simulator clamping to produce interventional distributions and flow matching to identify time series causal structures, with an error bound that predicts sign reversal of causal effects below a simulator accuracy threshold.
-
DiffATS: Diffusion in Aligned Tensor Space
DiffATS trains diffusion models directly on aligned Tucker tensor primitives that are proven to be homeomorphisms, delivering efficient unconditional and conditional generation across images, videos, and PDE data with high compression.
-
Continuity Laws for Sequential Models
S4 models exhibit stable time-continuity unlike sensitive S6 models, with task continuity predicting performance and enabling temporal subsampling for better efficiency.
-
Physics-Informed Reduced-Order Operator Learning for Hyperelasticity in Continuum Micromechanics
EquiNO with Q-DEIM creates reduced-order physics-informed surrogates for 3D hyperelastic RVEs that enforce equilibrium and periodicity by construction, achieve 10^3 speedups, and accurately interpolate and extrapolate stresses from few snapshots.
-
WeatherSyn: An Instruction Tuning MLLM For Weather Forecasting Report Generation
WeatherSyn is the first instruction-tuned MLLM for weather forecasting report generation, outperforming closed-source models on a new dataset of 31 US cities across 8 weather aspects.
-
Excluding the Target Domain Improves Extrapolation: Deconfounded Hierarchical Physics Constraints
Deconfounded Hierarchical Gate with counterfactual estimation and hierarchical constraints achieves 46% better RMSE on out-of-distribution battery temperature extrapolation, with excluding target data from pretraining outperforming inclusion.
-
Do Neural Operators Forget Geometry? The Forgetting Hypothesis in Deep Operator Learning
Neural operators progressively forget domain geometry with depth due to Markovian layers and global mixing; a geometry memory injection mechanism mitigates this forgetting.
-
Universal Neural Propagator: Learning Time Evolution in Many-Body Quantum Systems
The Universal Neural Propagator is a single neural model trained self-supervised to predict time evolution in driven quantum many-body systems across arbitrary protocols and initial states.
-
Deep Wave Network for Modeling Multi-Scale Physical Dynamics
DW-Net improves the accuracy versus computational cost Pareto front over standard U-Nets for 2D and 3D multi-scale flow benchmarks by stacking multiple waves while keeping training settings identical.
-
Chebyshev-Augmented One-Shot Transfer Learning for PINNs on Nonlinear Differential Equations
Chebyshev polynomial surrogates enable one-shot closed-form adaptation of PINNs for a broader class of nonlinear ODEs and PDEs by decomposing them into linear subproblems.
-
A Neural Latent Dynamics Approach for Solving Inverse Problems in Cardiac Electrophysiology
LDNet surrogates with neural ODE latent dynamics enable fast, accurate recovery of cardiac parameters from ECG data by replacing expensive PDE solves during inversion.
-
M-CaStLe: Uncovering Local Causal Structures in Multivariate Space-Time Gridded Data
M-CaStLe generalizes local stencil-based causal discovery to the multivariate case and decomposes resulting graphs into reaction and spatial components for interpretation in space-time gridded data.
-
Adaptive anisotropic composite quadratures for residual minimisation in neural PDE approximations
An adaptive anisotropic composite quadrature strategy combined with refresh-based training narrows the gap between training and reference losses in neural residual minimization for PDEs while using quadrature points more efficiently.
-
Large-eddy simulation nets (LESnets) based on physics-informed neural operator for wall-bounded turbulence
LESnets integrates LES equations and the law of the wall into F-FNO to enable data-free, stable long-term predictions of wall-bounded turbulence at Re_tau up to 1000 on coarse grids, matching traditional LES accuracy at higher efficiency.