Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
57 Pith papers cite this work, alongside 14,692 external citations. Polarity classification is still indexing.
fields: cs.LG (24) · cs.CE (7) · physics.flu-dyn (4) · quant-ph (4) · math.NA (3) · physics.comp-ph (3) · cs.CV (2) · astro-ph.IM (1) · cs.AI (1) · cs.GR (1)
years: 2026 (57)
roles: background (2)
citing papers explorer
-
HS-FNO: History-Space Fourier Neural Operator for Non-Markovian Partial Differential Equations
HS-FNO lifts the state to include history and decomposes updates into a learned future-slice predictor plus an exact shift-append transport, yielding lower rollout errors than standard or lag-stack FNO baselines on five non-Markovian PDE families.
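The decomposition described — a learned prediction of only the newest slice plus an exact shift-append of the history buffer — can be sketched as follows (a minimal sketch; `predict_future_slice` is a hypothetical stand-in for the paper's FNO predictor):

```python
import numpy as np

def hs_step(history, predict_future_slice):
    """One rollout step on a lifted history state.

    history: (k, n) array holding the last k solution slices, oldest first.
    predict_future_slice: learned map history -> next slice (an FNO in the
    paper; any callable here).
    """
    new_slice = predict_future_slice(history)          # learned component
    # exact, non-learned transport: shift the window and append the prediction
    return np.concatenate([history[1:], new_slice[None, :]], axis=0)

# toy usage with a trivial "predictor" that copies the newest slice
history = np.array([[0.0], [1.0], [2.0]])
rolled = hs_step(history, lambda h: h[-1])
```

Only the new slice is learned; the rest of the update is an exact shift, which is where the claimed error reduction over lag-stack baselines comes from.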
-
FactoryNet: A Large-Scale Dataset toward Industrial Time-Series Foundation Models
FactoryNet is the first universal pretraining corpus for industrial time-series data with a shared S-E-F-C schema that supports cross-embodiment transfer and competitive anomaly detection.
-
Identifying the nonlinear string dynamics with port-Hamiltonian neural networks
Port-Hamiltonian neural networks extended to PDEs recover the Hamiltonian and dissipation of nonlinear string dynamics from data and outperform non-physics-informed baselines.
-
Shock-Centered Low-Rank Structure and Neural-Operator Representation of Rarefied Micro-Nozzle Flows
Shock-centered scaling of DSMC fields in micro-nozzles reveals low-rank density structure, enabling DeepONet surrogates with mean errors reduced to 4.51% on hardest test cases.
-
A geometry-aligned multi-fidelity framework for uncertainty quantification of wildfire spread
A geometry-aligned bi-fidelity surrogate maps low- and high-fidelity wildfire solutions to a common domain for improved reduced-basis reconstruction, lower error near fronts, and practical uncertainty quantification.
-
Posterior Concentration of Bayesian Physics-Informed Neural Networks for Elliptic PDEs
Bayesian PINNs for elliptic PDEs have posteriors that contract around the true solution at near-optimal rates, with the prior adapting automatically to unknown smoothness.
-
A Deep Risk Estimator for Known Operator Learning
A per-layer risk estimator for hybrid deep networks shows that replacing learned layers with known operators shrinks the bound and scales sample needs with the number of replaced parameters, validated on CT reconstruction.
-
DualTCN: A Physics-Constrained Temporal Convolutional Network for Time-Domain Marine CSEM Inversion
DualTCN is the first deep-learning model for time-domain marine CSEM inversion that regresses four earth parameters, achieves high accuracy on simulated data, and runs up to 21,000 times faster than classical optimizers.
-
From Video-to-PDE: Data-Driven Discovery of Nonlinear Dye Plume Dynamics
A video-to-PDE pipeline extracts the model u_t + v(t)·∇u = 9.005|∇u|^2 + 0.666Δu from grayscale ink-plume footage, outperforming advection-diffusion baselines on held-out frames and reducing to linear form via Cole-Hopf transformation.
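The linearization claim can be checked directly. Writing the discovered model as $u_t + v(t)\cdot\nabla u = a|\nabla u|^2 + b\,\Delta u$ with $a = 9.005$, $b = 0.666$, the Cole-Hopf substitution $w = e^{(a/b)u}$ gives

```latex
w_t = \tfrac{a}{b}\,w\,u_t, \qquad
\nabla w = \tfrac{a}{b}\,w\,\nabla u, \qquad
\Delta w = \tfrac{a}{b}\,w\,\Delta u + \tfrac{a^2}{b^2}\,w\,|\nabla u|^2,
```

so that

```latex
w_t + v(t)\cdot\nabla w
 = \tfrac{a}{b}\,w\,\bigl(u_t + v(t)\cdot\nabla u\bigr)
 = \tfrac{a}{b}\,w\,\bigl(a|\nabla u|^2 + b\,\Delta u\bigr)
 = b\,\Delta w,
```

i.e. $w$ satisfies the linear advection-diffusion equation $w_t + v(t)\cdot\nabla w = b\,\Delta w$.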
-
TCD-Arena: Assessing Robustness of Time Series Causal Discovery Methods Against Assumption Violations
TCD-Arena is a new customizable testing framework that runs millions of experiments to map how 33 different assumption violations affect time series causal discovery methods and shows ensembles can boost overall robustness.
-
Hybrid Fourier Neural Operator-Lattice Boltzmann Method
Hybrid FNO-LBM accelerates porous media flow convergence by up to 70% via neural initialization and stabilizes unsteady simulations through embedded FNO rollouts, allowing small models to match larger ones in accuracy.
-
Quasi-Equivariant Metanetworks
Quasi-equivariant metanetworks relax strict equivariance to preserve functional identity in weight-space learning while improving expressivity for feedforward, convolutional, and transformer networks.
-
Robust Deep FOSLS for Transmission Problems
A weighted FOSLS formulation for deep neural networks solves transmission problems robustly, with proofs that the loss aligns with the energy norm independently of material contrast and shows passive variance reduction.
-
Physics-informed, Generative Adversarial Design of Funicular Shells
A modified DCGAN with an auxiliary discriminator using the membrane factor generates stable, previously unseen funicular shells optimized for pure compression in three dimensions.
-
A multiphysics deep energy method for fourth-order phase-field fracture with piezoresistive self-sensing
A deep energy method simulates fourth-order phase-field fracture in piezoresistive materials via one-way coupled electrical sensing after solving the mechanics-fracture problem.
-
Learning PDEs for Portfolio Optimization with Quantum Physics-Informed Neural Networks
Quantum PINNs using tensor-rank polynomials solve the Merton portfolio optimization PDE more accurately and with far fewer parameters than classical neural networks.
-
Robust Matrix-Free Newton-Krylov Solvers via Automatic Differentiation
Forward-mode automatic differentiation replaces finite-difference approximations for Jacobian-vector products in JFNK solvers, delivering 2-3 orders of magnitude speedup and lifting minimum solver completion from 42% to 95% across Burgers, radiation diffusion, reaction-diffusion, and nonlinear time-
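The core swap — finite-difference Jacobian-vector products replaced by forward-mode AD — can be illustrated with a minimal dual-number sketch (the residual below is a toy stand-in, not the paper's solvers; production codes use an AD framework rather than a hand-rolled Dual class):

```python
import numpy as np

class Dual:
    """Minimal forward-mode AD: carries a value and a directional derivative."""
    def __init__(self, val, dot):
        self.val, self.dot = val, dot
    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    def __sub__(self, other):
        return Dual(self.val - other.val, self.dot - other.dot)

def residual(u):
    # toy nonlinear residual standing in for a discretized PDE system
    return u * u - u

def jvp_forward(f, u, v):
    # exact Jacobian-vector product J(u) @ v in a single forward pass
    return f(Dual(u, v)).dot

u = np.linspace(0.1, 1.0, 8)
v = np.ones_like(u)

# classical matrix-free JFNK approximation of the same product
eps = 1e-7
jvp_fd = (residual(u + eps * v) - residual(u)) / eps

jvp_ad = jvp_forward(residual, u, v)   # analytically (2u - 1) * v
```

The forward-mode product is exact to machine precision and has no step-size `eps` to tune, which is what removes the robustness failures of the finite-difference variant.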
-
MetaColloc: Optimization-Free PDE Solving via Meta-Learned Basis Functions
MetaColloc meta-learns a universal set of neural basis functions offline so that new PDEs can be solved at test time with a single linear solve instead of per-equation neural-network optimization.
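The test-time recipe — fixed basis, one linear solve, no network optimization — reduces to classic least-squares collocation. A minimal sketch with hand-picked Gaussian bases (MetaColloc meta-learns its bases; the centers, width, and boundary weight below are illustrative choices):

```python
import numpy as np

centers = np.linspace(0.0, 1.0, 20)   # illustrative fixed basis centers
s = 0.12                              # basis width (hand-picked here)

def phi(x):        # basis values, shape (len(x), len(centers))
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * s**2))

def phi_xx(x):     # analytic second derivatives of the same bases
    d = x[:, None] - centers[None, :]
    return phi(x) * (d**2 / s**4 - 1.0 / s**2)

# target problem: u'' = f on [0, 1], u(0) = u(1) = 0, exact u = sin(pi x)
xc = np.linspace(0.0, 1.0, 50)        # collocation points
f = -np.pi**2 * np.sin(np.pi * xc)

beta = 100.0       # weight so boundary rows are not drowned out by PDE rows
A = np.vstack([phi_xx(xc), beta * phi(np.array([0.0, 1.0]))])
b = np.concatenate([f, [0.0, 0.0]])
w = np.linalg.lstsq(A, b, rcond=None)[0]   # the single linear solve

u_hat = phi(xc) @ w
err = np.max(np.abs(u_hat - np.sin(np.pi * xc)))
```

A new right-hand side or boundary condition only changes `b`, so solving a new problem really is one `lstsq` call once the basis is fixed.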
-
PG-3DGS: Optimizing 3D Gaussian Splatting to Satisfy Physics Objectives
PG-3DGS couples 3D Gaussian Splatting with differentiable physics so that optimized shapes satisfy both visual fidelity and physical objectives such as pouring and aerodynamic lift, with real-world 3D-printed validation.
-
Intervention-Based Time Series Causal Discovery via Simulator-Generated Interventional Distributions
SVAR-FM uses simulator clamping to produce interventional distributions and flow matching to identify time series causal structures, with an error bound that predicts sign reversal of causal effects below a simulator accuracy threshold.
-
LagrangianSplats: Divergence-Free Transport of Gaussian Primitives for Fluid Reconstruction
A framework that structurally enforces divergence-free velocity and long-range transport coherence in 3D fluid reconstruction from 2D videos via divergence-free kernels advecting Lagrangian Gaussian splats.
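The divergence-free-by-construction idea is the 2D analogue of deriving velocity from a stream function, which the paper's kernels generalize. A quick numerical check (the stream function ψ = sin x · sin y is chosen arbitrarily):

```python
import numpy as np

# Velocity from a stream function psi: (u, v) = (d psi/dy, -d psi/dx)
# is divergence-free identically, whatever psi is.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)     #  d/dy [sin(x) sin(y)]
v = -np.cos(X) * np.sin(Y)    # -d/dx [sin(x) sin(y)]

# numerical divergence du/dx + dv/dy should vanish up to truncation error
dx = x[1] - x[0]
div = np.gradient(u, dx, axis=0) + np.gradient(v, dx, axis=1)
max_div = np.max(np.abs(div))
```

Because incompressibility holds by construction rather than by penalty, no mass-conservation loss term needs to be balanced during reconstruction.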
-
Recovering Physical Dynamics from Discrete Observations via Intrinsic Differential Consistency
Enforcing semi-group consistency on a time-conditioned secant velocity field via Symmetry Rupture improves rollout accuracy and efficiency when learning physical dynamics from discrete observations.
-
Generalized Global Self-Optimizing Control for Chemical Processes: Part II Objective-Guided Controlled Variable Learning Approach
OGCVL integrates symbolic and numerical techniques to learn effective nonlinear controlled variables for scalable self-optimizing control in chemical processes.
-
NSPOD: Accelerating Krylov solvers via DeepONet-learned POD subspaces
NSPOD is a multigrid-like preconditioner using DeepONet-learned POD subspaces that dramatically cuts Krylov solver iterations for solid mechanics PDEs on unstructured CAD geometries, outperforming algebraic multigrid.
-
Physics-Informed Reduced-Order Operator Learning for Hyperelasticity in Continuum Micromechanics
EquiNO with Q-DEIM creates reduced-order physics-informed surrogates for 3D hyperelastic RVEs that enforce equilibrium and periodicity by construction, achieve 10^3 speedups, and accurately interpolate and extrapolate stresses from few snapshots.
-
Hierarchical Multi-Fidelity Learning for Predicting Three-Dimensional Flame Wrinkling and Turbulent Burning Velocity
MuFiNNs integrates sparse experimental measurements with structured low-fidelity models via hierarchical construction and nonlinear correction to predict 3D flame wrinkling dynamics and turbulent mass burning velocity across fuels, pressures, and turbulence levels.
-
A physics-informed neural network approach to solve the spatially inhomogeneous electron Boltzmann equation
A specialized PINN architecture solves the spatially inhomogeneous electron Boltzmann equation with high accuracy across gases and electric field strengths without case-specific tuning.
-
A Variational Kolosov--Muskhelishvili Network for Elasticity and Fracture
A variational neural network using Kolosov-Muskhelishvili potentials solves 2D linear elasticity and fracture problems by minimizing total potential energy and embedding crack discontinuities into the ansatz, yielding higher accuracy and faster convergence than standard physics-informed networks.
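For reference, the Kolosov-Muskhelishvili representation the ansatz builds on expresses the plane-elastic stress field through two holomorphic potentials $\varphi(z)$, $\psi(z)$ of $z = x + iy$:

```latex
\sigma_{xx} + \sigma_{yy} = 4\,\operatorname{Re}\varphi'(z), \qquad
\sigma_{yy} - \sigma_{xx} + 2i\,\sigma_{xy}
  = 2\bigl(\bar{z}\,\varphi''(z) + \psi'(z)\bigr),
```

with displacements $2\mu(u + iv) = \kappa\,\varphi(z) - z\,\overline{\varphi'(z)} - \overline{\psi(z)}$, where $\kappa$ is the Kolosov constant. Parameterizing $\varphi$ and $\psi$ by networks makes equilibrium and compatibility automatic, leaving only the energy minimization and the crack-induced branch structure to learn.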
-
Scale-Aware Adversarial Analysis: A Diagnostic for Generative AI in Multiscale Complex Systems
A new scale-aware diagnostic framework shows that unconstrained diffusion generative models exhibit structural freezing and instability instead of smooth physical responses under multiscale perturbations.
-
An adaptive wavelet-based PINN for problems with localized high-magnitude source
AW-PINN adapts its wavelet basis dynamically within a PINN to solve PDEs with localized high-magnitude sources, outperforming prior methods on loss imbalances up to 10^10:1, and derives a Gaussian-process limit and NTK structure under stated assumptions.
-
Large-eddy simulation nets (LESnets) based on physics-informed neural operator for wall-bounded turbulence
LESnets integrates LES equations and the law of the wall into F-FNO to enable data-free, stable long-term predictions of wall-bounded turbulence at Re_tau up to 1000 on coarse grids, matching traditional LES accuracy at higher efficiency.
-
PI-TTA: Physics-Informed Source-Free Test-Time Adaptation for Robust Human Activity Recognition on Mobile Devices
PI-TTA stabilizes source-free test-time adaptation for sensor-based human activity recognition by adding physics-consistent constraints, yielding up to 9.13% accuracy gains and lower physical violation rates on three benchmarks under streaming shifts.
-
A Continuous-Time Ensemble Kalman-Bucy Smoother for Causal Inference and Model Discovery
A derivative-free ensemble Kalman-Bucy smoother is developed for continuous-time data assimilation that supports Bayesian causal inference and iterative model structure identification with small ensemble sizes under partial observations.
-
Wildfires Quasi-Implicit Alternative-Direction Simulations using Isogeometric Finite Element Method
Quasi-implicit alternating-direction splitting combined with isogeometric analysis produces wildfire temperature simulations with 10 times higher accuracy and linear computational cost.
-
FlowForge: A Staged Local Rollout Engine for Flow-Field Prediction
FlowForge predicts flow fields via staged local updates with a shared lightweight predictor, matching or exceeding baselines in accuracy while improving robustness to noise and reducing latency.
-
Physics-Informed Tracking (PIT)
PIT uses a neural autoencoder with a differentiable physics module and a new Physics-Informed Landmark Loss to track single particles in video, achieving sub-pixel accuracy in supervised and unsupervised modes.
-
Physics-Informed Neural Networks for Methane Sorption: Cross-Gas Transfer Learning, Ensemble Collapse Under Physics Constraints, and Monte Carlo Dropout Uncertainty Quantification
A PINN transfer learning framework for coal methane sorption reaches R²=0.932 on held-out data with 227% improvement over classical isotherms and identifies Monte Carlo Dropout as the best uncertainty method while ensembles degrade under shared physics constraints.
-
Learning Parameterized Nonlinear Elasticity on Curved Surfaces
A single physics-informed neural network learns a continuous family of nonlinear elastic equilibria on curved surfaces and generalizes to unseen geometry and material parameters.
-
Thermodynamic Liquid Manifold Networks: Physics-Bounded Deep Learning for Solar Forecasting in Autonomous Off-Grid Microgrids
A new neural network architecture enforces celestial and thermodynamic constraints to deliver zero nocturnal error and high-accuracy solar forecasts for autonomous microgrids.
-
A Statistical-AI Framework for Detecting Transient Flares in SDSS Stripe 82 Quasar Light Curves
A modular framework combining physics-informed neural networks, Ornstein-Uhlenbeck fitting, extreme value theory, and vision-language models detects 51 transient flares in 9,258 SDSS Stripe 82 quasar light curves.
-
Cell-induced densification and tether formation in fibrous extracellular matrices with biomimetic physics-informed neural networks
Bio-PINNs with a near-to-far curriculum and deformation-uncertainty proxy recover cell-induced densified phases and tether morphologies more reliably than standard adaptive PINN baselines in single-cell and multicellular settings.
-
Structured force reformulation of many-body dispersion: towards effective atom--atom decomposition and surrogate modeling
Reformulation of many-body dispersion via a correlation matrix yields pairwise force decomposition and unified energy-force-Hessian expressions.
-
Physics Guided Generative Optimization for Trotter Suzuki Decomposition
A generative optimization loop using diffusion models, PINNs, and GNNs achieves 85.6% of fourth-order Qiskit fidelity at 21.8% circuit depth for transverse-field Ising model Trotter-Suzuki decomposition.
-
Exchange-Only Silicon Based Spin Qubits: Charge Noise, PINN Optimised Pulse Sequences, and Gate-Level Fidelity
A two-stage PINN optimizes pulse sequences for silicon exchange-only spin qubits to achieve over 99% noise-averaged fidelity while shortening pulse durations by 20-40%.
-
Mesh Based Simulations with Spatial and Temporal awareness
A unified training framework for mesh-based ML surrogates in CFD improves accuracy and long-horizon stability by enforcing spatial derivative consistency via multi-node prediction, using temporal cross-attention correction, and adding 3D rotary positional embeddings.
-
Physics-Informed Neural Networks for Maximizing Quantum Fisher Information in Time-Dependent Many-Body Systems
PINNs combined with Magnus expansion learn scheduling functions and adiabatic gauge potentials that yield higher normalized QFI than Euler-Lagrange baselines in nearest-neighbor, dipolar, and trapped-ion spin models up to six qubits.
-
Balance-Guided Sparse Identification of Multiscale Nonlinear PDEs with Small-coefficient Terms
BG-SINDy reformulates l0-constrained regression as term-level l2,0 regularization and uses progressive pruning guided by balance contributions to recover small-coefficient terms in multiscale PDEs.
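For context, the plain sequential-thresholding baseline that BG-SINDy refines looks as follows; hard thresholding is exactly what discards small-coefficient terms, the failure mode the balance-guided pruning targets (library and coefficients below are illustrative):

```python
import numpy as np

def stlsq(theta, dxdt, threshold, iters=10):
    """Sequential thresholded least squares (standard SINDy baseline)."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():   # refit only the surviving library terms
            xi[~small] = np.linalg.lstsq(theta[:, ~small], dxdt, rcond=None)[0]
    return xi

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
theta = np.column_stack([x, x**2, x**3, np.ones_like(x)])  # candidate library
dxdt = 1.5 * x - 0.8 * x**3          # true dynamics: sparse in the library
xi = stlsq(theta, dxdt, threshold=0.1)
# a genuinely small coefficient (say 0.05, below the threshold) would be
# zeroed out here -- the multiscale regime BG-SINDy is built to handle
```

The magnitude threshold conflates "small coefficient" with "irrelevant term", which is why term-level regularization guided by balance contributions is needed for multiscale PDEs.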
-
Uncertainty Quantification in PINNs for Turbulent Flows: Bayesian Inference and Repulsive Ensembles
Bayesian PINNs with Hamiltonian Monte Carlo sampling deliver the most consistent uncertainty estimates for turbulent flow inverse problems, while repulsive deep ensembles provide a faster but slightly less calibrated alternative.
-
Component-Based Reduced-Order Modeling Framework for Rocket Combustion Dynamics in Multi-Injector Configurations
A component-based reduced-order modeling framework decomposes multi-injector rocket combustors into trainable sub-models that couple to predict combustion dynamics across flow and geometry changes.
-
Non-intrusive Learning of Physics-Informed Spatio-temporal Surrogate for Accelerating Design
A non-intrusive framework combines Koopman autoencoders with a spatio-temporal surrogate to learn and predict physics-constrained dynamics of systems like 2D flow around a cylinder for unseen conditions.