Understanding and mitigating gradient flow pathologies in physics-informed neural networks
Wang, S., Teng, Y., and Perdikaris, P. SIAM Journal on Scientific Computing, 43(5):A3055–A3081, 2021.
Two Pith-indexed papers cite this work; polarity classification is still indexing.
Citing papers
- Sparse Random-Feature Neural Networks with Krylov-Based SVD for Singularly Perturbed ODE
  Sparse RFNNs with sSVD via Lanczos-Golub-Kahan bidiagonalization maintain accuracy while improving efficiency and robustness for 1D steady convection-diffusion equations with strong advection (see the bidiagonalization sketch after this list).
- AdamFLIP: Adaptive Momentum Feedback Linearization Optimization for Hard Constrained PINN Training
  AdamFLIP treats PDE constraint residuals in PINNs as a controlled dynamical system, computes Lagrange multipliers via feedback linearization to drive the residuals to zero, and applies Adam-style adaptation to the resulting gradient for scalable hard-constrained training (see the update-rule sketch after this list).
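The first entry's numerical kernel is a Krylov-based truncated SVD of the random-feature design matrix. Below is a minimal NumPy sketch of Lanczos-Golub-Kahan bidiagonalization under that reading; the tanh feature map, the matrix sizes, and the rank `k` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def golub_kahan_bidiag(A, k, seed=0):
    """k steps of Lanczos-Golub-Kahan bidiagonalization of A (m x n).

    Returns U ((m, k+1)), lower-bidiagonal B ((k+1, k)), V ((n, k)) with
    A @ V ~= U @ B; the SVD of the small B approximates the leading
    singular triplets of A.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)

    u = rng.standard_normal(m)
    U[:, 0] = u / np.linalg.norm(u)
    b_prev = 0.0
    for i in range(k):
        # v_i = A^T u_i - beta_{i-1} v_{i-1}, then reorthogonalize against V
        v = A.T @ U[:, i] - (b_prev * V[:, i - 1] if i > 0 else 0.0)
        v -= V[:, :i] @ (V[:, :i].T @ v)
        alpha[i] = np.linalg.norm(v)
        V[:, i] = v / alpha[i]

        # u_{i+1} = A v_i - alpha_i u_i, then reorthogonalize against U
        u = A @ V[:, i] - alpha[i] * U[:, i]
        u -= U[:, :i + 1] @ (U[:, :i + 1].T @ u)
        beta[i] = np.linalg.norm(u)
        U[:, i + 1] = u / beta[i]
        b_prev = beta[i]

    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alpha       # diagonal
    B[np.arange(1, k + 1), np.arange(k)] = beta  # subdiagonal
    return U, B, V

# Toy random-feature matrix (hypothetical sizes and tanh features).
rng = np.random.default_rng(1)
Phi = np.tanh(rng.standard_normal((500, 200)))
U, B, V = golub_kahan_bidiag(Phi, k=25)
print(np.linalg.svd(B, compute_uv=False)[:5])    # approximation from small B
print(np.linalg.svd(Phi, compute_uv=False)[:5])  # dense SVD reference
```

Only matrix-vector products with `Phi` and `Phi.T` are needed, which is what makes a Krylov route attractive when the random-feature matrix is large or sparse; the full reorthogonalization inside the loop trades some extra work for numerical stability.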
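The AdamFLIP summary describes the update closely enough to sketch. A standard feedback-linearization construction, which may or may not match the paper's, picks the multipliers lam so the residual dynamics obey c' = -gamma * c, giving lam = (J J^T)^(-1) (gamma * c - J grad_L), and then runs Adam on the combined gradient grad_L + J^T lam. All names, the damping gamma, and the ridge term below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def adamflip_step(theta, grad_L, c, J, state,
                  lr=1e-3, gamma=1.0, beta1=0.9, beta2=0.999, eps=1e-8):
    """Hypothetical feedback-linearized, Adam-adapted update.

    theta  : parameters, shape (p,)
    grad_L : gradient of the unconstrained loss, shape (p,)
    c      : PDE constraint residuals, shape (m,)
    J      : constraint Jacobian dc/dtheta, shape (m, p)
    """
    # Feedback linearization: under gradient flow theta' = -(grad_L + J^T lam),
    # requiring c' = J theta' = -gamma * c gives the normal equations below
    # (a small ridge keeps J J^T safely invertible).
    lam = np.linalg.solve(J @ J.T + eps * np.eye(len(c)),
                          gamma * c - J @ grad_L)
    g = grad_L + J.T @ lam

    # Adam-style adaptation of the combined gradient.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g * g
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

# Toy quadratic loss with a linear constraint c(theta) = A_c theta - b_c.
rng = np.random.default_rng(0)
p, m_c = 10, 3
A_c, b_c = rng.standard_normal((m_c, p)), rng.standard_normal(m_c)
theta = rng.standard_normal(p)
state = {"t": 0, "m": np.zeros(p), "v": np.zeros(p)}
for _ in range(2000):
    grad_L = theta                 # gradient of 0.5 * ||theta||^2
    c = A_c @ theta - b_c          # residuals driven toward zero
    theta = adamflip_step(theta, grad_L, c, A_c, state, lr=1e-2)
print(np.linalg.norm(A_c @ theta - b_c))  # residual norm, should shrink
```

One caveat worth noting: the Adam rescaling perturbs the feedback-linearized direction coordinate-wise, so the exact exponential residual decay promised by the continuous-time analysis holds only approximately in the discrete updates.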