Transolver++: An Accurate Neural Solver for PDEs on Million-Scale Geometries
7 representative citing papers (2026)
- Neural-Schwarz Tiling for Geometry-Universal PDE Solving at Scale
Local neural operators on 3x3x3 patches, composed via Schwarz iteration, solve large-scale nonlinear elasticity on arbitrary geometries without domain-specific retraining.
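The composition mechanism is classical overlapping Schwarz iteration, with the local subdomain solver replaced by a learned operator. The sketch below shows the iteration structure on a 1D Laplace problem; `local_solve` is a stand-in for the trained 3x3x3-patch operator (here it is an exact local solve, an assumption made so the example is self-contained and runnable).

```python
import numpy as np

def local_solve(bc_left, bc_right, m):
    # Stand-in for the learned local operator: exact solution of
    # u'' = 0 on a patch given Dirichlet data at the patch ends.
    return np.linspace(bc_left, bc_right, m)

def schwarz(n=41, overlap=8, iters=50):
    u = np.zeros(n)
    u[-1] = 1.0  # global BCs: u(0) = 0, u(1) = 1
    mid = n // 2
    # two overlapping patches covering the domain
    patches = [(0, mid + overlap), (mid - overlap, n)]
    for _ in range(iters):
        for lo, hi in patches:
            # each patch is re-solved with boundary data taken
            # from the current global iterate (Schwarz sweep)
            u[lo:hi] = local_solve(u[lo], u[hi - 1], hi - lo)
    return u

u = schwarz()
# the sweeps converge to the global solution u(x) = x
```

The overlap between patches is what drives convergence of the sweeps; the same composition applies unchanged when `local_solve` is a neural operator, which is why no domain-specific retraining is needed.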
- Learning Neural Operator Surrogates for the Black Hole Accretion Code
Physics-informed Fourier neural operators recover plasmoid formation in sparse SRRMHD vortex data where data-only models fail, and transformer operators approximate AMR jet evolution, marking first reported uses in these relativistic MHD settings.
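The reason the physics-informed variant succeeds on sparse data is the loss structure: a data-misfit term is augmented with a PDE residual evaluated spectrally, so the physics constrains the solution where observations are missing. This is a minimal numpy sketch of that loss shape on a toy advection residual, not the paper's SRRMHD setup; the function names are illustrative.

```python
import numpy as np

def spectral_dx(u, L=2 * np.pi):
    # Fourier differentiation -- the same spectral machinery an
    # FNO layer operates with
    k = 2j * np.pi * np.fft.fftfreq(u.size, d=L / u.size)
    return np.real(np.fft.ifft(k * np.fft.fft(u)))

def physics_informed_loss(u_pred, u_data, residual, lam=1.0):
    # sparse data misfit + PDE residual penalty; the residual term
    # supplies the physics where data-only training has nothing
    return np.mean((u_pred - u_data) ** 2) + lam * np.mean(residual ** 2)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x)                       # stand-in "prediction"
res = spectral_dx(u) - np.cos(x)    # residual of u_x = cos(x), ~0 here
loss = physics_informed_loss(u, np.sin(x), res)
```

An exact solution drives both terms to machine precision; a data-only model fitted to sparse points has no residual term pulling it toward the governing equations.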
- One Scale at a Time: Scale-Autoregressive Modeling for Fluid Flow Distributions
Scale-autoregressive modeling (SAR) samples fluid flow distributions hierarchically from coarse to fine resolutions on meshes, achieving lower distributional error and 2-7x faster runtime than diffusion or flow-matching baselines.
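The hierarchical sampling loop is easy to state: sample the coarsest scale unconditionally, then repeatedly upsample and sample a refinement conditioned on the coarser field. In this toy sketch the learned conditional model is replaced by a Gaussian refinement with decaying amplitude (an assumption for self-containment), and meshes are replaced by regular grids.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample(field):
    # nearest-neighbour upsampling between consecutive scales
    return np.repeat(np.repeat(field, 2, axis=0), 2, axis=1)

def sample_sar(scales=4, base=4):
    # coarsest scale: sampled unconditionally
    field = rng.normal(size=(base, base))
    for s in range(1, scales):
        # finer scales: sampled conditioned on the upsampled coarser
        # field (toy stand-in for the learned conditional model)
        field = upsample(field) + rng.normal(scale=0.5 ** s,
                                             size=(base * 2 ** s,) * 2)
    return field

u = sample_sar()
# final resolution: base * 2**(scales - 1) = 32 per axis
```

Because each scale is sampled in one pass rather than through many denoising steps, this factorization is where the reported speedup over diffusion and flow-matching baselines comes from.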
- ShardTensor: Domain Parallelism for Scientific Machine Learning
ShardTensor is a domain-parallelism system for SciML that enables flexible scaling of extreme-resolution spatial datasets by removing the constraint of batch size one per device.
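Domain parallelism splits a single sample's spatial axes across devices instead of splitting the batch. The sketch below shows the core invariant on one axis with plain numpy lists standing in for per-device shards: each shard carries halo cells from its neighbours, so a local stencil applied per shard reproduces the single-device result. Function names are illustrative, not the ShardTensor API.

```python
import numpy as np

def shard_with_halos(u, n_shards, halo=1):
    # split the spatial axis across "devices"; each shard is padded
    # with neighbour values so local stencils stay exact at shard edges
    padded = np.pad(u, halo, mode="edge")
    size = u.size // n_shards
    return [padded[i * size : i * size + size + 2 * halo]
            for i in range(n_shards)]

def local_smooth(shard):
    # 3-point averaging stencil, applied independently on each device
    return (shard[:-2] + shard[1:-1] + shard[2:]) / 3.0

u = np.arange(16.0)
shards = shard_with_halos(u, n_shards=4)
result = np.concatenate([local_smooth(s) for s in shards])
# identical to applying the stencil on the unsharded array
```

With this layout the per-device memory footprint scales with the shard size rather than the full field, which is what lifts the effective "batch size one per device" ceiling for extreme-resolution inputs.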
- Neural Shape Operator Surrogates -- Expression Rate Bounds
Neural and spectral operators can approximate shape-to-solution maps for families of elliptic and parabolic PDEs and boundary integral equations (BIEs), with provable uniform error bounds derived from parametric holomorphy on a reference domain.
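Such expression-rate results typically take the following generic shape (an illustrative sketch only; the rate $r$, constant $C$, and the precise holomorphy hypotheses are assumptions here, not the paper's exact statement):

```latex
% S maps a shape parameter y in U to the PDE/BIE solution pulled back
% to the reference domain; S_N is an operator surrogate of size N.
\sup_{y \in U}
  \bigl\| \mathcal{S}(y) - \mathcal{S}_N(y) \bigr\|_{V}
  \;\le\; C \, N^{-r}
```

The key point is uniformity: holomorphy of $y \mapsto \mathcal{S}(y)$ yields a single rate over the whole shape family, not a per-shape bound.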
- Multiscale Physics-Informed Neural Network for Complex Fluid Flows with Long-Range Dependencies
DDS-PINN uses localized neural networks plus a unified global loss to model multiscale fluid flows with long-range dependencies, achieving CFD-comparable accuracy on laminar backward-facing step flow with zero data and O(10^-4) error on turbulent flow with only 500 supervision points.
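The ingredient combination is per-subdomain networks trained under one joint loss: local PDE residuals plus boundary and interface-continuity terms. The sketch below shows that loss structure on a toy ODE $u' = 1$, $u(0) = 0$, with each "network" reduced to a linear model per subdomain (an assumption so the example is checkable by hand; none of the names are from the paper).

```python
def local_model(params, x):
    # stand-in for one small subdomain network: u(x) = a*x + b
    a, b = params
    return a * x + b

def unified_global_loss(all_params):
    # one joint loss over all subdomain networks:
    #   local PDE residual of u' = 1 (u' = a for the linear stand-in),
    #   the boundary condition u(0) = 0,
    #   and continuity at the shared interface x = 0.5
    loss = sum((a - 1.0) ** 2 for a, _ in all_params)
    loss += local_model(all_params[0], 0.0) ** 2
    loss += (local_model(all_params[0], 0.5)
             - local_model(all_params[1], 0.5)) ** 2
    return loss

exact = [(1.0, 0.0), (1.0, 0.0)]   # u(x) = x on both subdomains
# every term of the unified loss vanishes at the exact solution
```

Because the continuity terms couple the otherwise-local networks, gradients of the single global loss propagate long-range information across subdomains, which is the mechanism the summary refers to.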
- RETO: A Rotary-Enhanced Transformer Operator for High-Fidelity Prediction of Automotive Aerodynamics
RETO achieves relative L2 errors of 0.063 on ShapeNet and 0.089/0.097 on DrivAerML surface pressure/velocity, outperforming Transolver and other baselines.
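The "rotary" ingredient builds on standard rotary position embeddings (RoPE): consecutive feature pairs of queries and keys are rotated by position-dependent angles, making the attention score a function of relative position only. RETO's exact geometric parameterization isn't specified in the summary; this is the base mechanism it enhances.

```python
import numpy as np

def rope(x, pos, theta=10000.0):
    # rotate consecutive feature pairs of x by angles pos * freqs;
    # freqs decay geometrically across pairs, as in standard RoPE
    d = x.size
    freqs = theta ** (-np.arange(0, d, 2) / d)
    ang = pos * freqs
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(1)
q, k = rng.normal(size=8), rng.normal(size=8)
# the score depends only on the relative offset (here 3 in both cases)
s1 = rope(q, 5) @ rope(k, 2)
s2 = rope(q, 8) @ rope(k, 5)
```

The relative-position property (s1 == s2 above) follows because composing the two rotations leaves only their angle difference in the inner product.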