Preconditioning for physics-informed neural networks
2 Pith papers cite this work. Polarity classification is still indexing.
Fields: math.NA. Years: 2026. Verdicts: 2 unverdicted.
Representative citing papers
-
Solving Convolution-type Integral Equations using Preconditioned Neural Operators
A preconditioned neural operator is trained to handle high-frequency error components and hybridized with weighted Jacobi iteration to solve large convolution-type integral equations faster than multigrid or preconditioned conjugate gradient methods.
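The weighted (damped) Jacobi iteration mentioned above can be sketched on a small model system; this is a generic illustration on a 1D Laplacian, not the paper's integral-equation setup or its neural-operator hybrid, and the weight, size, and iteration count are illustrative.

```python
import numpy as np

def weighted_jacobi(A, b, x0, omega=2/3, iters=100):
    """Damped Jacobi iteration: x <- x + omega * D^{-1} (b - A x)."""
    D = np.diag(A)          # diagonal of A acts as the smoother
    x = x0.copy()
    for _ in range(iters):
        # The damping weight omega = 2/3 targets high-frequency error modes,
        # which is why such sweeps pair well with a coarse/learned corrector.
        x = x + omega * (b - A @ x) / D
    return x

# Model problem: 1D Laplacian with Dirichlet boundaries (illustrative only).
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_true = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))
b = A @ x_true
x = weighted_jacobi(A, b, np.zeros(n), iters=2000)
err = np.linalg.norm(x - x_true)   # shrinks toward zero as iters grows
```

On its own, Jacobi damps smooth error modes slowly; the citing paper's point is to hand those components to a trained operator instead of a multigrid coarse-grid correction.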
-
Sparse Random-Feature Neural Networks with Krylov-Based SVD for Singularly Perturbed ODE
Sparse RFNNs with sSVD via Lanczos-Golub-Kahan bidiagonalization maintain accuracy while improving efficiency and robustness for 1D steady convection-diffusion equations with strong advection.
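The Lanczos-Golub-Kahan bidiagonalization named above can be sketched in a few lines; this is the textbook recurrence on a random dense matrix, not the paper's sparse random-feature pipeline, and the matrix, starting vector, and step count are illustrative.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization, so that A @ V = U @ B."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for j in range(k):
        # Orthogonalize A^T u_j against the previous right Lanczos vector
        w = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(w)
        V[:, j] = w / alpha[j]
        # Orthogonalize A v_j against the current left Lanczos vector
        p = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(p)
        U[:, j + 1] = p / beta[j + 1]
    # Assemble the (k+1) x k lower-bidiagonal B: alphas on the diagonal,
    # betas on the subdiagonal.
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 12))
b = rng.standard_normal(30)
U, B, V = golub_kahan(A, b, k=6)
# The SVD of the small bidiagonal B approximates the leading SVD of A,
# which is the Krylov route to a truncated SVD for large matrices.
s_approx = np.linalg.svd(B, compute_uv=False)
```

Because only matrix-vector products with A and A^T are needed, this route to an approximate SVD stays cheap for the large, sparse feature matrices the citing paper works with.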