5 Pith papers cite this work.
Abstract
Several first-order stochastic optimization methods commonly used in the Euclidean domain, such as stochastic gradient descent (SGD), accelerated gradient descent, or variance-reduced methods, have already been adapted to certain Riemannian settings. However, some of the most popular of these optimization tools, namely Adam, Adagrad, and the more recent Amsgrad, remain to be generalized to Riemannian manifolds. We discuss the difficulty of generalizing such adaptive schemes to the most agnostic Riemannian setting, and then provide algorithms and convergence proofs for geodesically convex objectives in the particular case of a product of Riemannian manifolds, in which adaptivity is implemented across the manifolds in the Cartesian product. Our generalization is tight in the sense that choosing Euclidean space as the Riemannian manifold yields the same algorithms and regret bounds as those already known for the standard algorithms. Experimentally, we show faster convergence to a lower training loss for the Riemannian adaptive methods than for their corresponding baselines on the realistic task of embedding the WordNet taxonomy in the Poincaré ball.
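The abstract's key idea, implementing adaptivity across the factors of a product of manifolds rather than per coordinate, can be sketched concretely. The following is a minimal illustrative sketch for a product of unit spheres, using Adagrad-style adaptivity (one accumulated squared gradient norm per factor) with no momentum or parallel transport; it is not the paper's exact RADAM/RAMSGRAD algorithms, and all function names here are mine.

```python
import numpy as np

def sphere_proj_tangent(x, g):
    # Project a Euclidean gradient onto the tangent space of the unit sphere at x.
    return g - np.dot(x, g) * x

def sphere_exp(x, v):
    # Exponential map on the unit sphere: follow the geodesic from x along tangent v.
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

def riemannian_adagrad_step(xs, egrads, accum, lr=0.1, eps=1e-8):
    """One Adagrad-style step on a product of unit spheres.

    xs     : list of points, one per sphere factor
    egrads : list of Euclidean gradients at those points
    accum  : list of per-factor accumulated squared gradient norms (mutated)

    Adaptivity is *across* the factors of the product: each factor gets one
    scalar accumulator, as in the per-manifold scheme described above.
    """
    new_xs = []
    for i, (x, eg) in enumerate(zip(xs, egrads)):
        rg = sphere_proj_tangent(x, eg)        # Riemannian gradient on this factor
        accum[i] += np.dot(rg, rg)             # per-manifold accumulator
        step = -lr / (np.sqrt(accum[i]) + eps) * rg
        new_xs.append(sphere_exp(x, step))     # update via the exponential map
    return new_xs
```

For instance, minimizing f(x) = -x·t for a target t on the sphere moves each factor along a great circle toward t while the per-factor accumulator shrinks the effective step size, and every iterate stays exactly on the manifold.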
Citing papers explorer
- LA-Sign: Looped Transformers with Geometry-aware Alignment for Skeleton-based Sign Language Recognition
  LA-Sign achieves state-of-the-art skeleton-based sign language recognition on WLASL and MSASL by using recurrent looped transformers with adaptive hyperbolic geometry alignment.
- New non-Euclidean neural quantum states from additional types of hyperbolic recurrent neural networks
  Hyperbolic RNN and GRU neural quantum states outperform Euclidean versions on Heisenberg J1-J2 and J1-J2-J3 models with 100 spins.
- Inversion-Free Natural Gradient Descent on Riemannian Manifolds
  The paper develops an inversion-free natural gradient descent algorithm on Riemannian manifolds that maintains an online approximation of the inverse Fisher information matrix using transported score vectors, and proves almost-sure convergence rates of O(log s / s^α) for α > 2/3.
- Deflation-Free Optimal Scoring
  DFSOS computes all sparse discriminant vectors at once with global orthogonality via Bregman iteration and an augmented Lagrangian, achieving classification accuracy comparable to or better than deflation-based sparse optimal scoring on synthetic and real time-series data.
- Pion: A Spectrum-Preserving Optimizer via Orthogonal Equivalence Transformation
  Pion is an optimizer that preserves the singular values of weight matrices during LLM training by applying orthogonal equivalence transformations.
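The linear-algebra fact underlying the last entry, that orthogonal equivalence transformations preserve singular values, can be checked numerically: for orthogonal P and Q, the singular values of P W Qᵀ equal those of W. This is only an illustration of that property, not of Pion's actual update rule; `random_orthogonal` is a helper of mine, not part of the paper.

```python
import numpy as np

def random_orthogonal(n, rng):
    # QR decomposition of a Gaussian matrix yields an orthogonal factor;
    # fixing the signs via diag(R) makes the factorization unambiguous.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
P = random_orthogonal(4, rng)
Q = random_orthogonal(3, rng)

sv_before = np.linalg.svd(W, compute_uv=False)
sv_after = np.linalg.svd(P @ W @ Q.T, compute_uv=False)
# The two spectra agree up to floating-point error.
```

Any optimizer whose update can be written as W ← P W Qᵀ with orthogonal P, Q therefore leaves the spectrum of W unchanged by construction.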