Higham, Functions of Matrices: Theory and Computation
4 Pith papers cite this work. Polarity classification is still indexing.
Years: 2026 · Verdicts: 4 (unverdicted) · 4 representative citing papers
citing papers explorer
- Fast and Stable Gradient Approximation for Bilinear Forms of Hermitian Matrix Functions
  A forward-only Lanczos gradient approximation for Hermitian matrix function bilinear forms whose error scales with the same residual norm as the forward approximation and appears stable without reorthogonalization.
- Covariance Square Root Second-Order Mapping
  A square-root form of the second-order covariance update is presented for the first time, improving numerical accuracy and efficiency in recursive estimation algorithms.
- Muon Does Not Converge on Convex Lipschitz Functions
  Muon does not converge on convex Lipschitz functions regardless of learning rate, while error feedback restores theoretical convergence but degrades performance on CIFAR-10 and nanoGPT tasks.
- Kernel-based linear system identification using augmented Krylov subspaces
  Augmented Krylov subspaces jointly approximate quadratic forms and log-dets for faster MLE-based hyperparameter tuning in kernel-based linear system identification.
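The Lanczos machinery behind the first entry can be illustrated with a generic sketch of the forward approximation of a bilinear form u^T f(A) u for a real symmetric A (the gradient approximation that is the paper's contribution is not shown; the function name and interface here are illustrative, not the paper's):

```python
import numpy as np

def lanczos_bilinear_form(A, u, f, k=20):
    """Estimate u^T f(A) u for real symmetric A via k Lanczos steps.

    Runs plain Lanczos (no reorthogonalization), builds the tridiagonal
    matrix T, and returns ||u||^2 * e1^T f(T) e1 -- the classical
    Gauss-quadrature view of the bilinear form.
    """
    n = len(u)
    beta0 = np.linalg.norm(u)
    q = u / beta0                      # first Lanczos vector
    q_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev      # three-term recurrence
        alpha = q @ w
        w = w - alpha * q
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        if beta < 1e-12:               # invariant subspace found
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    m = len(alphas)
    T = (np.diag(alphas)
         + np.diag(betas[:m - 1], 1)
         + np.diag(betas[:m - 1], -1))
    evals, V = np.linalg.eigh(T)
    # e1^T f(T) e1 = sum_i f(theta_i) * V[0, i]^2
    return beta0**2 * np.sum(f(evals) * V[0, :]**2)
```

Even this plain variant illustrates the "stable without reorthogonalization" observation: the quadrature value typically stays accurate in floating point even as the Lanczos vectors lose orthogonality.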
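The "square root" idea in the second entry is a general one: propagate a Cholesky factor of the covariance rather than the covariance itself, which keeps the matrix symmetric positive definite by construction and roughly doubles the effective precision. A minimal sketch of the textbook QR-based array form for the linear prediction step P ← F P Fᵀ + Q follows (this is standard square-root filtering, not the paper's second-order mapping; all names are illustrative):

```python
import numpy as np

def sqrt_predict(S, F, Q_chol):
    """Propagate a covariance Cholesky factor through P+ = F P F^T + Q.

    S and Q_chol are lower-triangular factors (P = S S^T, Q = Q_chol Q_chol^T).
    Stacking [ (F S)^T ; Q_chol^T ] and taking its QR factor R gives
    R^T R = F P F^T + Q, so R^T is a valid new factor -- P is never formed.
    """
    pre = np.vstack([(F @ S).T, Q_chol.T])   # (2n, n) pre-array
    R = np.linalg.qr(pre, mode='r')          # upper-triangular (n, n)
    return R.T                               # lower-triangular, up to signs
```

Because the update is a single orthogonal transformation of well-conditioned factors, it avoids the subtraction-induced loss of positive definiteness that full-covariance recursions can suffer.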