DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators
19 Pith papers cite this work.
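The branch/trunk factorization DeepONet introduces can be sketched in a few lines: a branch net embeds the input function sampled at fixed sensor locations, a trunk net embeds the query point, and the operator output is their inner product. A minimal NumPy sketch with untrained random weights (layer sizes and sensor count are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    # Random-weight MLP parameters; a real DeepONet would train these.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 50, 32                     # sensor count, embedding width
branch = mlp_params([m, 64, p])   # encodes u(x_1), ..., u(x_m)
trunk = mlp_params([1, 64, p])    # encodes the query location y

def deeponet(u_sensors, y):
    # G(u)(y) ≈ <branch(u), trunk(y)>
    b = mlp(branch, u_sensors)          # (p,)
    t = mlp(trunk, np.atleast_2d(y))    # (n_query, p)
    return t @ b                        # (n_query,)

xs = np.linspace(0, 1, m)
u = np.sin(2 * np.pi * xs)            # input function at the sensors
ys = np.linspace(0, 1, 5)[:, None]    # query locations
print(deeponet(u, ys).shape)          # (5,)
```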
citing papers explorer
-
Constraint-Aware Flow Matching: Decision Aligned End-to-End Training for Constrained Sampling
Constraint-Aware Flow Matching integrates constraint projections into the flow matching training objective to align model dynamics with constrained sampling and reduce distributional shift.
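As a rough illustration of folding a projection into flow-matching targets: the sketch below assumes a box constraint with Euclidean projection (`np.clip`) and linear-interpolant targets; the paper's actual constraint set, projection operator, and objective may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def project(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto a box constraint C = [lo, hi]^d
    # (an illustrative stand-in for the paper's projection).
    return np.clip(x, lo, hi)

def ca_fm_targets(x0, x1, t):
    # Standard flow matching: x_t = (1-t) x0 + t x1, v* = x1 - x0.
    # Constraint-aware variant sketched here: project the endpoint and
    # the interpolant onto C, so the training targets match what
    # constrained sampling will actually see.
    x1_c = project(x1)
    xt = project((1 - t) * x0 + t * x1_c)
    v_target = x1_c - x0
    return xt, v_target

x0 = rng.standard_normal((4, 2))        # noise samples
x1 = 2.0 * rng.standard_normal((4, 2))  # data samples (may violate C)
t = rng.uniform(size=(4, 1))
xt, v = ca_fm_targets(x0, x1, t)
assert np.all(np.abs(xt) <= 1.0)        # training states stay feasible
```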
-
Approximation of Maximally Monotone Operators: A Graph Convergence Perspective
Any maximally monotone operator can be approximated in local graph convergence by continuous encoder-decoder networks, with structure-preserving versions that retain maximal monotonicity via resolvent parameterizations.
-
Fixed-Point Neural Optimal Transport without Implicit Differentiation
A single-network fixed-point formulation for neural optimal transport eliminates adversarial min-max optimization and implicit differentiation while enforcing dual feasibility exactly.
-
Stable Long-Horizon PDE Forecasting via Latent Structured Spectral Propagators
A latent Structured Spectral Propagator enables stable autoregressive PDE forecasting by decoupling spatial details from recurrent modal dynamics.
-
CATO: Charted Attention for Neural PDE Operators
CATO learns a continuous latent chart for efficient axial attention on PDE meshes and adds derivative-aware supervision to improve accuracy and reduce oversmoothing on general geometries.
-
Physics-Informed Neural PDE Solvers via Spatio-Temporal MeanFlow
Spatio-Temporal MeanFlow adapts MeanFlow to PDEs by replacing the generative velocity field with the physical operator and extending the integral constraint to the spatio-temporal domain, yielding a unified solver for time-dependent and stationary equations with improved accuracy and generalization.
-
Geometry-Aware Neural Optimizer for Shape Optimization and Inversion
GANO unifies shape encoding with auto-decoders, denoising-stabilized latent optimization, and geometry-injected surrogates into an end-to-end differentiable pipeline for PDE-governed shape optimization and inversion.
-
AI models of unstable flow exhibit hallucination
AI models of viscous fingering exhibit hallucinations from spectral bias; DeepFingers combines FNO and DeepONet with time-contrast conditioning to predict accurate finger dynamics while preserving mixing metrics.
-
DeepRitzSplit Neural Operator for Phase-Field Models via Energy Splitting
A DeepRitzSplit neural operator trained on energy-split variational forms enforces dissipation in phase-field models and outperforms data-driven training in generalization while running faster than Fourier spectral methods on Allen-Cahn and dendritic growth cases.
-
DiLO: Decoupling Generative Priors and Neural Operators via Diffusion Latent Optimization for Inverse Problems
DiLO turns diffusion sampling into deterministic latent optimization to satisfy the manifold consistency requirement for neural operators in inverse problem solving.
-
Compositional Neural Operators for Multi-Dimensional Fluid Dynamics
Compositional Neural Operators decompose multi-dimensional fluid PDEs into a library of pretrained elementary physics blocks assembled via an aggregator that minimizes data and physics residuals.
-
Don't Fix the Basis -- Learn It: Spectral Representation with Adaptive Basis Learning for PDEs
ABLE learns a spatially adaptive Parseval frame from data via an ancillary density to replace fixed bases in spectral neural operators for PDEs.
-
PnP-Corrector: A Universal Correction Framework for Coupled Spatiotemporal Forecasting
PnP-Corrector decouples physics simulation from error correction via a plug-and-play agent, cutting error by 29% in 300-day global ocean-atmosphere forecasts.
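The decoupled simulate-then-correct pattern this summary describes can be sketched generically; everything below (the drift model, the analytic corrector) is an illustrative stand-in, not the paper's method:

```python
import numpy as np

def simulate(x, steps):
    # Toy "physics" rollout with a systematic drift toward 1.0,
    # standing in for a real coupled ocean-atmosphere simulator.
    out = []
    for _ in range(steps):
        x = 0.95 * x + 0.05
        out.append(x)
    return np.array(out)

def true_dynamics(x, steps):
    # Reference dynamics the simulator should have followed.
    out = []
    for _ in range(steps):
        x = 0.95 * x
        out.append(x)
    return np.array(out)

def corrector(traj):
    # Plug-and-play corrector applied to the finished rollout,
    # independent of the simulator's internals; a learned agent
    # would replace this analytic drift removal.
    n = np.arange(1, len(traj) + 1)
    return traj - (1.0 - 0.95 ** n)

sim = simulate(2.0, 10)
fixed = corrector(sim)
print(np.max(np.abs(fixed - true_dynamics(2.0, 10))))  # ~0
```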
-
Continuity Laws for Sequential Models
S4 models exhibit stable time-continuity, unlike the more sensitive S6 models; task continuity predicts performance and enables temporal subsampling for better efficiency.
-
Hierarchical Multi-Fidelity Learning for Predicting Three-Dimensional Flame Wrinkling and Turbulent Burning Velocity
MuFiNNs integrates sparse experimental measurements with structured low-fidelity models via hierarchical construction and nonlinear correction to predict 3D flame wrinkling dynamics and turbulent mass burning velocity across fuels, pressures, and turbulence levels.
-
Late Fusion Neural Operators for Extrapolation Across Parameter Space in Partial Differential Equations
Late Fusion Neural Operators disentangle state and parameter learning, outperforming FNO and CAPE-FNO on advection, Burgers, and reaction-diffusion PDEs with a 72% average RMSE reduction both in and out of domain.
-
Hyperfastrl: Hypernetwork-based reinforcement learning for unified control of parametric chaotic PDEs
Hypernetworks map a forcing parameter directly to policy weights in an RL framework, enabling unified stabilization of the Kuramoto-Sivashinsky equation across regimes, with KAN architectures showing the strongest extrapolation.
-
Accelerated and data-efficient flow prediction in stirred tanks via physics-informed learning
Physics-informed constraints on implicit neural representations yield more accurate and stable predictions of stirred-tank flows than purely data-driven models when training data is scarce, with diminishing returns at larger dataset sizes.
-
RETO: A Rotary-Enhanced Transformer Operator for High-Fidelity Prediction of Automotive Aerodynamics
RETO achieves relative L2 errors of 0.063 on ShapeNet and 0.089/0.097 on DrivAerML surface pressure/velocity, outperforming Transolver and other baselines.