Any maximally monotone operator can be approximated in local graph convergence by continuous encoder-decoder networks, with structure-preserving versions that retain maximal monotonicity via resolvent parameterizations.
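A sketch of why resolvent parameterizations preserve structure, assuming the standard Minty correspondence (the averaged parameterization F_θ below is one illustrative construction, not necessarily the paper's). A set-valued operator A on a Hilbert space is maximally monotone exactly when its resolvent is firmly nonexpansive with full domain:

\[
J_{\lambda A} := (I + \lambda A)^{-1}, \qquad
\|J_{\lambda A} x - J_{\lambda A} y\|^2 \le \langle J_{\lambda A} x - J_{\lambda A} y,\; x - y \rangle,
\]

so training a network for the resolvent rather than for A itself keeps the learned operator inside the class:

\[
F_\theta := \tfrac{1}{2}(I + G_\theta),\quad \mathrm{Lip}(G_\theta) \le 1
\;\Longrightarrow\;
A_\theta := \lambda^{-1}\bigl(F_\theta^{-1} - I\bigr) \text{ is maximally monotone, with } J_{\lambda A_\theta} = F_\theta.
\]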
citation dossier · hub
DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators
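For orientation, the paper's branch-trunk factorization is G(u)(y) ≈ Σ_k b_k(u(x_1), …, u(x_m)) t_k(y): a branch net encodes the input function at fixed sensors, a trunk net encodes the query point, and their inner product evaluates the output function. A minimal untrained NumPy sketch follows; the layer widths, sensor grid, and mlp helper are illustrative choices, not the paper's configuration.

import numpy as np

rng = np.random.default_rng(0)

def mlp(widths):
    # Random-weight tanh MLP returning a forward function (illustrative only).
    params = [(rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_out)), np.zeros(n_out))
              for n_in, n_out in zip(widths[:-1], widths[1:])]
    def forward(x):
        h = x
        for i, (W, b) in enumerate(params):
            h = h @ W + b
            if i < len(params) - 1:
                h = np.tanh(h)
        return h
    return forward

m, p = 50, 32              # number of sensors, size of the shared basis
branch = mlp([m, 64, p])   # encodes u(x_1), ..., u(x_m) into coefficients b_k
trunk  = mlp([1, 64, p])   # encodes a query location y into basis values t_k(y)

def deeponet(u_sensors, ys):
    # G(u)(y) ~ sum_k b_k(u(x_1..x_m)) * t_k(y)
    b = branch(u_sensors)   # shape (p,)
    t = trunk(ys)           # shape (n_query, p)
    return t @ b            # shape (n_query,)

xs = np.linspace(0, 1, m)
u = np.sin(2 * np.pi * xs)           # input function sampled at the sensors
ys = np.linspace(0, 1, 5)[:, None]   # query locations
print(deeponet(u, ys))               # untrained surrogate evaluated at ys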
why this work matters in Pith
Pith has found this work cited in 18 reviewed papers. Its strongest current cluster is cs.LG (9 papers), and the largest review-status bucket among citing papers is UNVERDICTED (16 papers). For highly cited works, this page shows a dossier first and a bounded explorer second; it never tries to render every citing paper at once.
hub tools
citation-role summary: background (1)
citation-polarity summary: background (1)
years: 2026 (18 papers)

representative citing papers
A single-network fixed-point formulation for neural optimal transport eliminates adversarial min-max optimization and implicit differentiation while enforcing dual feasibility exactly.
A latent Structured Spectral Propagator enables stable autoregressive PDE forecasting by decoupling spatial details from recurrent modal dynamics.
CATO learns a continuous latent chart for efficient axial attention on PDE meshes and adds derivative-aware supervision to improve accuracy and reduce oversmoothing on general geometries.
Spatio-Temporal MeanFlow adapts MeanFlow to PDEs by replacing the generative velocity field with the physical operator and extending the integral constraint to the spatio-temporal domain, yielding a unified solver for time-dependent and stationary equations with improved accuracy and generalization (the underlying identity is sketched after this list).
GANO unifies shape encoding with auto-decoders, denoising-stabilized latent optimization, and geometry-injected surrogates into an end-to-end differentiable pipeline for PDE-governed shape optimization and inversion.
AI models of viscous fingering exhibit hallucinations from spectral bias; DeepFingers combines FNO and DeepONet with time-contrast conditioning to predict accurate finger dynamics while preserving mixing metrics.
A DeepRitzSplit neural operator trained on energy-split variational forms enforces dissipation in phase-field models and outperforms data-driven training in generalization while running faster than Fourier spectral methods on Allen-Cahn and dendritic growth cases.
DiLO turns diffusion sampling into deterministic latent optimization to satisfy the manifold consistency requirement for neural operators in inverse problem solving.
Compositional Neural Operators decompose multi-dimensional fluid PDEs into a library of pretrained elementary physics blocks assembled via an aggregator that minimizes data and physics residuals.
ABLE learns a spatially adaptive Parseval frame from data via an ancillary density to replace fixed bases in spectral neural operators for PDEs.
S4 models exhibit stable time-continuity unlike sensitive S6 models, with task continuity predicting performance and enabling temporal subsampling for better efficiency.
MuFiNNs integrates sparse experimental measurements with structured low-fidelity models via hierarchical construction and nonlinear correction to predict 3D flame wrinkling dynamics and turbulent mass burning velocity across fuels, pressures, and turbulence levels.
Late Fusion Neural Operators disentangle state and parameter learning to outperform FNO and CAPE-FNO on advection, Burgers, and reaction-diffusion PDEs, with a 72% average RMSE reduction both in- and out-of-domain.
Hypernetworks map a forcing parameter directly to policy weights in an RL framework, enabling unified stabilization of the Kuramoto-Sivashinsky equation across regimes with KAN architectures showing strongest extrapolation.
Physics-informed constraints on implicit neural representations yield more accurate and stable predictions of stirred-tank flows than purely data-driven models when training data is scarce, with diminishing returns at larger dataset sizes.
RETO achieves relative L2 errors of 0.063 on ShapeNet and 0.089/0.097 on DrivAerML surface pressure/velocity, outperforming Transolver and other baselines.
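For the Spatio-Temporal MeanFlow entry flagged above: the underlying MeanFlow identity, stated here in its original generative form (the citing paper's substitution of the physical PDE operator for the velocity is only summarized in the entry), defines an average velocity and differentiates it along the flow:

\[
u(z_t, r, t) := \frac{1}{t - r} \int_r^t v(z_\tau, \tau)\, d\tau
\;\Longrightarrow\;
u(z_t, r, t) = v(z_t, t) - (t - r)\, \frac{d}{dt} u(z_t, r, t),
\]

where \(\frac{d}{dt} u = v\, \partial_z u + \partial_t u\) is the total derivative along \(z_t\); the right-hand side serves as the regression target for the learned average velocity.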
citing papers explorer
- DiLO: Decoupling Generative Priors and Neural Operators via Diffusion Latent Optimization for Inverse Problems
DiLO turns diffusion sampling into deterministic latent optimization to satisfy the manifold consistency requirement for neural operators in inverse problem solving.