Riemannian Adaptive Optimization Methods
Abstract
Several first-order stochastic optimization methods commonly used in the Euclidean domain, such as stochastic gradient descent (SGD), accelerated gradient descent, or variance-reduced methods, have already been adapted to certain Riemannian settings. However, some of the most popular of these optimization tools, namely Adam, Adagrad, and the more recent AMSGrad, remain to be generalized to Riemannian manifolds. We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across manifolds in the Cartesian product. Our generalization is tight in the sense that choosing the Euclidean space as the Riemannian manifold yields the same algorithms and regret bounds as those that were already known for the standard algorithms. Experimentally, we show faster convergence to a lower training loss for Riemannian adaptive methods over their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincaré ball.
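To make the product-manifold adaptivity concrete, here is a minimal NumPy sketch of a Riemannian Adam-style update on a product of Poincaré balls, in the spirit of the paper's algorithms but not the authors' reference implementation. It assumes one ball per row of `x`, keeps a single adaptive scalar per manifold factor (the adaptivity "across manifolds" described above), and simplifies two things for brevity: momentum is carried over without parallel transport, and the AMSGrad-style maximum over second-moment estimates used in the paper's convergence analysis is omitted. All function and variable names are illustrative.

```python
import numpy as np

def mobius_add(x, y):
    """Mobius addition on the Poincare ball, applied row-wise."""
    xy = np.sum(x * y, axis=-1, keepdims=True)
    x2 = np.sum(x * x, axis=-1, keepdims=True)
    y2 = np.sum(y * y, axis=-1, keepdims=True)
    num = (1.0 + 2.0 * xy + y2) * x + (1.0 - x2) * y
    den = 1.0 + 2.0 * xy + x2 * y2
    return num / den

def expmap(x, v, eps=1e-9):
    """Exponential map exp_x(v) on the Poincare ball, applied row-wise."""
    vnorm = np.linalg.norm(v, axis=-1, keepdims=True)
    lam = 2.0 / (1.0 - np.sum(x * x, axis=-1, keepdims=True))  # conformal factor
    return mobius_add(x, np.tanh(lam * vnorm / 2.0) * v / (vnorm + eps))

def radam_like_step(x, egrad, m, v, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One adaptive step on a product of Poincare balls (hypothetical sketch).

    x, egrad, m have shape (n_factors, dim); v has shape (n_factors,),
    one adaptive scalar per manifold factor in the product.
    """
    lam2 = (2.0 / (1.0 - np.sum(x * x, axis=-1, keepdims=True))) ** 2
    rgrad = egrad / lam2              # Riemannian gradient: rescale by the inverse metric
    m = b1 * m + (1.0 - b1) * rgrad   # momentum, kept without parallel transport (simplification)
    # Adaptivity across factors: squared Riemannian gradient norm per manifold.
    g2 = np.squeeze(lam2, -1) * np.sum(rgrad * rgrad, axis=-1)
    v = b2 * v + (1.0 - b2) * g2      # second-moment estimate, one scalar per factor
    step = -lr * m / (np.sqrt(v)[:, None] + eps)
    return expmap(x, step), m, v      # move along a geodesic via the exponential map
```

For a product of n two-dimensional balls, `x` would be an (n, 2) array whose rows have norm below 1; calling `radam_like_step` repeatedly with stochastic Euclidean gradients performs the adaptive Riemannian updates. Setting the curvature to zero (Euclidean space) would recover the standard Adam-style update, which is the tightness property the abstract describes.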
Forward citations
Cited by 5 Pith papers
- New non-Euclidean neural quantum states from additional types of hyperbolic recurrent neural networks
  Hyperbolic RNN and GRU neural quantum states outperform Euclidean versions on Heisenberg J1-J2 and J1-J2-J3 models with 100 spins.
- Inversion-Free Natural Gradient Descent on Riemannian Manifolds
  The paper develops an inversion-free natural gradient descent algorithm on Riemannian manifolds that maintains an online approximation of the inverse Fisher information matrix using transported score vectors and prove...
- LA-Sign: Looped Transformers with Geometry-aware Alignment for Skeleton-based Sign Language Recognition
  LA-Sign achieves state-of-the-art skeleton-based sign language recognition on WLASL and MSASL by using recurrent looped transformers with adaptive hyperbolic geometry alignment.
- Deflation-Free Optimal Scoring
  DFSOS computes all sparse discriminant vectors at once with global orthogonality via Bregman iteration and augmented Lagrangian, achieving classification accuracy comparable to or better than deflation-based sparse op...
- Pion: A Spectrum-Preserving Optimizer via Orthogonal Equivalence Transformation
  Pion is an optimizer that preserves the singular values of weight matrices in LLM training by applying orthogonal equivalence transformations.