Fourier Neural Operator for Parametric Partial Differential Equations
88 Pith papers cite this work. Polarity classification is still in progress.
abstract
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation. The Fourier neural operator is the first ML-based method to successfully model turbulent flows with zero-shot super-resolution. It is up to three orders of magnitude faster compared to traditional PDE solvers. Additionally, it achieves superior accuracy compared to previous learning-based solvers under fixed resolution.
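The abstract's key construction, parameterizing the integral kernel directly in Fourier space, reduces per layer to: FFT the input, keep a truncated set of low-frequency modes, multiply each retained mode by a learned complex weight, and inverse-FFT back. A minimal 1D NumPy sketch (grid size, mode count, and the random stand-in weights are illustrative, not from the paper):

```python
import numpy as np

def spectral_conv_1d(x, weights, modes):
    """One Fourier layer: FFT -> truncate to low modes -> per-mode
    complex multiply by learned weights -> inverse FFT."""
    n = x.shape[-1]
    x_ft = np.fft.rfft(x)                    # n//2 + 1 complex coefficients
    out_ft = np.zeros_like(x_ft)
    out_ft[:modes] = x_ft[:modes] * weights  # keep only the lowest `modes` modes
    return np.fft.irfft(out_ft, n=n)         # back to physical space

rng = np.random.default_rng(0)
n, modes = 64, 12
x = np.sin(2 * np.pi * np.arange(n) / n)     # a toy input function on a grid
w = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
y = spectral_conv_1d(x, w, modes)
print(y.shape)                               # output lives on the same grid
```

Because the weights act on Fourier modes rather than grid points, the same trained weights can be evaluated on a finer grid, which is what makes the zero-shot super-resolution claimed in the abstract possible.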
co-cited works
fields
cs.LG 44 · math.NA 9 · physics.flu-dyn 6 · cs.CE 5 · cs.AI 2 · cs.CL 2 · cs.CV 2 · cs.RO 2 · math.OC 2 · physics.comp-ph 2
roles
other 1
polarities
unclear 1
representative citing papers
Hodge Spectral Duality provides a topology-preserving neural operator by isolating unlearnable topological components via Hodge orthogonality and operator splitting.
Local neural operators on 3x3x3 patches, composed via Schwarz iteration, solve large-scale nonlinear elasticity on arbitrary geometries without domain-specific retraining.
Laplacian eigenfunction-based neural operators approximate the solution operator of the generalized Gierer-Meinhardt reaction-diffusion system with error bounds that imply only polynomial growth in parameters as accuracy improves.
A single-network fixed-point formulation for neural optimal transport eliminates adversarial min-max optimization and implicit differentiation while enforcing dual feasibility exactly.
A latent Structured Spectral Propagator enables stable autoregressive PDE forecasting by decoupling spatial details from recurrent modal dynamics.
A new Intent Fidelity Score and refinement loop verify that LLM-generated simulation code matches the intended PDEs, improving performance on a 220-case benchmark where execution alone fails to ensure correctness.
CATO learns a continuous latent chart for efficient axial attention on PDE meshes and adds derivative-aware supervision to improve accuracy and reduce oversmoothing on general geometries.
Spatio-Temporal MeanFlow adapts MeanFlow to PDEs by replacing the generative velocity field with the physical operator and extending the integral constraint to the spatio-temporal domain, yielding a unified solver for time-dependent and stationary equations with improved accuracy and generalization.
Commutativity regularization on Jacobians reduces transient error amplification in neural simulators, enabling stable rollouts over thousands of steps on physical and climate data.
PerFlow embeds physics constraints into rectified flow sampling through guidance-free conditioning and constraint-preserving projections, achieving efficient sparse reconstruction and uncertainty quantification for spatiotemporal dynamics.
PODiff performs conditional diffusion in a fixed, variance-ordered POD latent space to enable efficient probabilistic super-resolution of high-dimensional scientific fields with lower memory and better-calibrated uncertainty than pixel-space or dropout baselines.
Neural operators approximate continuous operators from H^s to H^t with O(N^{-s/d}) error in H^t norm; FNOs on Burgers achieve H^1 errors to 10^{-7} and follow a power-law scaling with exponent ~1.4.
Isotropic Fourier Neural Operators enforce spatial symmetries in Fourier layers, improving PDE-solving performance while reducing parameters by up to 16x in 2D and 96x in 3D.
A horizon-agnostic neural operator paired with a boundary control barrier function creates a real-time safety filter that raises safe trajectory rates by up to 22% on fluid manipulation tasks in simulation.
A physics-encoded Fourier neural operator (PeFNO) uses stress potentials to enforce divergence-free stress fields by architecture design, yielding better equilibrium satisfaction than physics-informed or physics-guided FNOs for polycrystalline materials under uniaxial tension.
Hybrid FNO-LBM accelerates porous media flow convergence by up to 70% via neural initialization and stabilizes unsteady simulations through embedded FNO rollouts, allowing small models to match larger ones in accuracy.
A safeguarded hybrid of Levenberg-Marquardt and learned operators achieves equivalent reconstruction quality for PGET in roughly one-third the iterations, with architecture-dependent robustness.
Physics-informed Fourier neural operators recover plasmoid formation in sparse SRRMHD vortex data where data-only models fail, and transformer operators approximate AMR jet evolution, marking first reported uses in these relativistic MHD settings.
PI-LNO is a physics-informed neural operator that uses Laplace transforms and fluid physics constraints to accurately and rapidly predict droplet spreading dynamics on complex surfaces.
AI models of viscous fingering exhibit hallucinations from spectral bias; DeepFingers combines FNO and DeepONet with time-contrast conditioning to predict accurate finger dynamics while preserving mixing metrics.
A graph-based neural operator trained on expert-validated race-car CFD data reaches accuracy levels usable for early-stage interactive aerodynamic design exploration.
A DeepRitzSplit neural operator trained on energy-split variational forms enforces dissipation in phase-field models and outperforms data-driven training in generalization while running faster than Fourier spectral methods on Allen-Cahn and dendritic growth cases.
VS-GNO delivers 0.71-1.04% reconstruction error at 15-24.5% spiking rates versus 0.4% for a non-spiking baseline in sparse-to-dense virtual sensing.
citing papers explorer
-
Adaptive anisotropic composite quadratures for residual minimisation in neural PDE approximations
An adaptive anisotropic composite quadrature strategy combined with refresh-based training narrows the gap between training and reference losses in neural residual minimization for PDEs while using quadrature points more efficiently.
-
Large-eddy simulation nets (LESnets) based on physics-informed neural operator for wall-bounded turbulence
LESnets integrates LES equations and the law of the wall into F-FNO to enable data-free, stable long-term predictions of wall-bounded turbulence at Re_tau up to 1000 on coarse grids, matching traditional LES accuracy at higher efficiency.
-
A neural operator framework for data-driven discovery of stability and receptivity in physical systems
A neural network dynamics emulator trained on data yields stability eigenmodes and resolvent modes via automatic differentiation of its Jacobian, enabling equation-free analysis of nonlinear systems.
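The recipe in the entry above — linearize a trained dynamics emulator and read stability off its Jacobian — can be illustrated without any training: for any differentiable map x_{t+1} = f(x_t), the eigenvalues of its Jacobian at a fixed point determine linear stability. A toy sketch with a hand-written map standing in for the emulator and finite differences standing in for automatic differentiation (the map and its coefficients are invented for illustration):

```python
import numpy as np

def emulator(x):
    # Stand-in for a trained dynamics emulator x_{t+1} = f(x_t);
    # a simple nonlinear map with a fixed point at the origin.
    return np.array([0.5 * x[0] + 0.1 * x[1] ** 2,
                     0.25 * x[1] + 0.1 * x[0] * x[1]])

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian (a trained model would use autodiff)."""
    n = x.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x_star = np.zeros(2)                  # fixed point: f(0) = 0
J = jacobian(emulator, x_star)
eigvals = np.linalg.eigvals(J)
print(np.max(np.abs(eigvals)) < 1.0)  # spectral radius < 1 => stable
```

The same Jacobian, evaluated along a base flow rather than at a fixed point, is what resolvent and receptivity analyses operate on.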
-
Neural Adjoint Method for Meta-optics: Accelerating Volumetric Inverse Design via Fourier Neural Operators
A stage-wise Fourier Neural Operator surrogate predicts per-voxel adjoint gradients to accelerate 3D meta-optics inverse design, replacing expensive FDTD solves with fast inference.
-
FLARE: A Data-Efficient Surrogate for Predicting Displacement Fields in Directed Energy Deposition
FLARE predicts post-cooling displacement fields in directed energy deposition by encoding simulations as implicit neural fields whose weights are regularized to follow an affine structure in parameter space, enabling data-efficient prediction via weight mixing.
-
Adaptive Randomized Neural Networks with Locally Activation Function: Theory and Algorithm for Solving PDEs
Randomized neural networks require a sampling domain sized to target smoothness for optimal approximation, and an adaptive PIRaNN method with partition-of-unity refinement solves PDEs with limited local regularity.
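"Randomized neural network" in the entry above refers to the random-feature setup: hidden weights are drawn once and frozen, so training reduces to a linear least-squares solve for the output layer. A minimal function-fitting sketch of that idea (width, scale, and the sine target are illustrative; the paper's adaptive sampling and partition-of-unity refinement are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Random-feature network: the hidden layer is frozen at random values,
# so only the linear output weights are fit.
width, scale = 200, 4.0
W = rng.uniform(-scale, scale, width)   # frozen hidden weights
b = rng.uniform(-scale, scale, width)   # frozen hidden biases

def features(x):
    return np.tanh(np.outer(x, W) + b)  # (n_points, width)

x = np.linspace(0, 1, 100)
target = np.sin(2 * np.pi * x)          # stand-in for a PDE solution
coef, *_ = np.linalg.lstsq(features(x), target, rcond=None)

err = np.max(np.abs(features(x) @ coef - target))
print(f"max fit error at training points: {err:.1e}")
```

For a PDE, the same linear solve is applied to collocated residuals of the equation rather than to function values, but the structure — frozen random features, linear system for the outer weights — is identical.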
-
$\phi-$DeepONet: A Discontinuity Capturing Neural Operator
φ-DeepONet learns mappings with discontinuities in inputs and outputs by combining multiple branch networks with a nonlinear interface embedding in the trunk, trained via physics- and interface-informed loss, and shows accurate results on 1D/2D benchmarks.
-
MENO: MeanFlow-Enhanced Neural Operators for Dynamical Systems
MENO enhances neural operators with MeanFlow to restore multi-scale accuracy in dynamical system predictions while keeping inference costs low, achieving up to 2x better power spectrum accuracy and 12x faster inference than diffusion-enhanced baselines on phase-field, Kolmogorov flow, and active-m…
-
PD-SOVNet: A Physics-Driven Second-Order Vibration Operator Network for Estimating Wheel Polygonal Roughness from Axle-Box Vibrations
PD-SOVNet combines shared second-order vibration kernels, MIMO coupling, adaptive physical correction, and Mamba temporal modeling to regress 1st-40th order wheel roughness spectra from axle-box vibrations with competitive accuracy on real datasets.
-
Optimal-Transport-Guided Functional Flow Matching for Turbulent Field Generation in Hilbert Space
FOT-CFM generates turbulent fields in function space with superior high-order statistics and energy spectra on Navier-Stokes, Kolmogorov flow, and Hasegawa-Wakatani equations compared to baselines.
-
FNO$^{\angle \theta}$: Extended Fourier neural operator for learning state and optimal control of distributed parameter systems
FNO extended to complex frequencies via Ehrenpreis-Palamodov principle improves state and optimal control learning for PDE systems, with order-of-magnitude lower training errors and better non-periodic boundary predictions on nonlinear Burgers' equation.
-
Neural Operators for Multi-Task Control and Adaptation
Neural operators approximate the solution operator for multi-task optimal control, generalizing to new tasks and enabling efficient adaptation via branch-trunk structure and meta-training.
-
Resolution-Independent Machine Learning Heat Flux Closure for ICF Plasmas
A Fourier Neural Operator trained on PIC simulations yields a resolution-independent machine-learning closure for electron heat flux that reproduces temperature evolution when inserted into the energy equation and generalizes from coarse to fine grids.
-
Learning Contractive Integral Operators with Fredholm Integral Neural Operators
Fredholm Integral Neural Operators are universal approximators of integral operators and are guaranteed to be contractive, enabling reliable solution operator learning for FIEs and PDEs.
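Contractivity is what makes solution-operator learning for Fredholm integral equations (FIEs) of the second kind well behaved: if the scaled integral operator has norm below one, u = f + λKu is solvable by plain fixed-point iteration. A NumPy sketch of that classical fact on a toy kernel (kernel, λ, and grid are illustrative, not from the paper):

```python
import numpy as np

# Solve u(x) = f(x) + lam * \int_0^1 K(x, y) u(y) dy on a uniform grid.
n = 200
x = np.linspace(0, 1, n)
h = 1.0 / n
K = np.exp(-np.abs(x[:, None] - x[None, :]))  # toy smooth kernel
f = np.sin(np.pi * x)
lam = 0.3                                     # small enough for contraction

def apply_T(u):
    return f + lam * (K @ u) * h              # discretized integral operator

u = np.zeros(n)
for _ in range(100):                          # Picard iteration
    u = apply_T(u)

# At the fixed point, u satisfies the discretized equation.
residual = np.max(np.abs(u - apply_T(u)))
print(residual < 1e-10)
```

Here the contraction factor is bounded by λ·max_x ∫|K(x,y)|dy < 1, so the iterates converge geometrically from any starting guess — the guarantee the entry says the architecture preserves.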
-
Generalized Transferable Neural Networks for Steady-State Partial Differential Equations
GTransNet extends single-hidden-layer TransNet by adding hidden layers with symmetry-constrained biases and variance-controlled weights to improve accuracy and stability for oscillatory steady-state PDE solutions.
-
A Multimodal Vision Transformer-based Modeling Framework for Prediction of Fluid Flows in Energy Systems
A multimodal SwinV2-UNet vision transformer conditioned on data modality and time predicts spatiotemporal fluid flows and reconstructs unobserved fields from limited views using CFD data of argon jet injection.
-
Flow Learners for PDEs: Toward a Physics-to-Physics Paradigm for Scientific Computing
Flow learners parameterize transport vector fields to generate PDE trajectories through integration, offering a physics-to-physics organizing principle for learned solvers.
-
FluidFlow: a flow-matching generative model for fluid dynamics surrogates on unstructured meshes
FluidFlow uses conditional flow-matching with U-Net and DiT architectures to predict pressure and friction coefficients on airfoils and 3D aircraft meshes, outperforming MLP baselines with better generalization.
-
Discovery of Interpretable Surrogates via Agentic AI: Application to Gravitational Waves
GWAgent agentic workflow produces analytic surrogates for eccentric BBH waveforms with 6.9e-4 median mismatch and 8.4x speedup, outperforming baselines, and infers eccentricity for GW200129.
-
Accelerated and data-efficient flow prediction in stirred tanks via physics-informed learning
Physics-informed constraints on implicit neural representations yield more accurate and stable predictions of stirred-tank flows than purely data-driven models when training data is scarce, with diminishing returns at larger dataset sizes.
-
Physics-Based Flow Matching for Full-Field Prediction of Silicon Photonic Devices
PIC-Flow applies conditional flow matching with a real-valued U-Net and interface-masked Helmholtz residual loss to predict electromagnetic fields in photonic devices, generalizing to held-out device classes beyond its training set.
-
Communication Dynamics Neural Networks: FFT-Diagonalized Layers for Improved Hessian Conditioning at Reduced Parameter Count
CDLinear layers achieve population Hessian condition number exactly 1 under pre-whitening, deliver 3.8x parameter reduction versus dense layers at 0.65% accuracy cost, and show 310x better empirical conditioning on an MLP.
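The "FFT-diagonalized layer" idea in the entry above is, at bottom, the circulant-matrix identity: a circulant matrix is diagonalized by the DFT, so an n×n dense multiply collapses to n complex weights applied in Fourier space. A NumPy sketch verifying the equivalence (this shows only the generic identity, not the paper's CDLinear layer):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
c = rng.standard_normal(n)            # defining vector: n parameters total

# Dense form: the full n x n circulant matrix (n^2 stored entries).
C = np.stack([np.roll(c, j) for j in range(n)], axis=1)

x = rng.standard_normal(n)
y_dense = C @ x                       # O(n^2) dense multiply

# Diagonalized form: the same map as an elementwise multiply in Fourier space.
d = np.fft.fft(c)                     # n complex "layer weights"
y_fft = np.fft.ifft(d * np.fft.fft(x)).real

print(np.allclose(y_dense, y_fft))    # True: identical linear map, n params
```

The parameter reduction (n versus n²) and the O(n log n) multiply are generic; the conditioning claims in the entry are specific to the cited paper.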
-
DeepPropNet: an operator learning-based predictor for thermal plasma properties
DeepPropNet predicts thermal plasma properties with relative L2 errors of 10^{-3} to 10^{-2} for SF6-N2 and C4F7N-CO2-O2 mixtures using single-property and mixture-of-experts architectures trained on high-fidelity data.
-
Man, Machine, and Mathematics
A high-level outline is given for a unified theory that reduces learning to a small set of ideas from dynamical systems, geometry, and physics via definitions of solvable problems and parametrized methods.
-
Multi-scale Dynamic Wake Modeling of Floating Offshore Wind Turbines via Fourier Neural Operators and Physics-Informed Neural Networks
FNO captures large- and small-scale wake structures, higher harmonics, and temporal variations more accurately and trains eight times faster than PINN for FOWT wake prediction.
-
LASER: Learning Active Sensing for Continuum Field Reconstruction
LASER trains a reinforcement learning policy inside a latent dynamics model to choose sensor placements that improve reconstruction of continuum fields under sparsity.
-
Neural Operator Representation of Granular Micromechanics-based Failure Envelope
A differentiable neural operator learns the mapping from granular microstructure configurations to failure envelopes, with physics-informed convexity enforcement and active learning for efficient training.
-
Singularity Formation: Synergy in Theoretical, Numerical and Machine Learning Approaches
The work introduces a modulation-based analytical method for singularity proofs in singular PDEs and refines ML techniques like PINNs and KANs to identify blowup solutions, with application to the open 3D Keller-Segel problem.
-
A Systematic Survey and Benchmark of Deep Learning for Molecular Property Prediction in the Foundation Model Era
A systematic survey and benchmark of four deep learning paradigms for molecular property prediction that organizes the field, critiques current data practices, and outlines three future directions.
-
Euler-inspired Decoupling Neural Operator for Efficient Pansharpening
EDNO redefines pansharpening as a frequency-domain functional mapping that decouples fusion via Euler-inspired polar coordinates into explicit phase-rotation simulation and implicit spectral modeling for improved efficiency.
-
From Perception to Autonomous Computational Modeling: A Multi-Agent Approach
A multi-agent LLM framework autonomously completes the full computational mechanics pipeline from a photograph to a code-compliant engineering report on a steel L-bracket example.
-
A Multi-Scale ResNet-augmented Fourier Neural Operator Framework for High-Frequency Sequence-to-Sequence Prediction of Magnetic Hysteresis
Res-FNO predicts high-frequency magnetic hysteresis loops including ringing effects and minor loops with claimed strong generalization across materials from 79 to 3C90.
-
Elastomeric Strain Limitation for Design of Soft Pneumatic Actuators
Elastomeric strain limiters paired with electroadhesive clutches enable variable inflation trajectories in soft pneumatic actuators, supported by neural-network inverse design and experimental validation for safe human interaction.
-
General Explicit Network (GEN): A novel deep learning architecture for solving partial differential equations
GEN is a neural network that solves PDEs by constructing explicit function approximations from basis functions based on prior PDE knowledge, yielding more robust and extensible solutions than standard PINNs.
-
jNO: A JAX Library for Neural Operator and Foundation Model Training
jNO introduces a unified JAX tracing system for data-driven and physics-informed neural operator training that compiles domains, residuals, losses, and diagnostics into one pipeline.
-
Replay-Based Continual Learning for Physics-Informed Neural Operators
A replay-based continual learning strategy for physics-informed neural operators mitigates catastrophic forgetting on prior physical problems while enabling efficient adaptation to new data using only physical constraints.
-
RETO: A Rotary-Enhanced Transformer Operator for High-Fidelity Prediction of Automotive Aerodynamics
RETO achieves relative L2 errors of 0.063 on ShapeNet and 0.089/0.097 on DrivAerML surface pressure/velocity, outperforming Transolver and other baselines.
-
AI-Powered Surrogate Modelling for Multiscale Combustion: A Critical Review and Opportunities
A critical review of AI surrogate models for multiscale combustion that compares supervised, unsupervised, and physics-guided methods, identifies transferability and consistency challenges, and outlines future opportunities.