pith. machine review for the scientific record.

arxiv: 1808.02941 · v2 · submitted 2018-08-08 · 💻 cs.LG · math.OC · stat.ML

Recognition: unknown

On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization

Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong

Authors on Pith: no claims yet
classification 💻 cs.LG · math.OC · stat.ML
keywords: algorithms · class · convergence · adam-type · adaptive · conditions · gradient · methods
read the original abstract

This paper studies a class of adaptive gradient-based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the "Adam-type", includes popular algorithms such as Adam, AMSGrad, and AdaGrad. Despite their popularity in training deep neural networks, the convergence of these algorithms for solving nonconvex problems remains an open question. This paper provides a set of mild sufficient conditions that guarantee convergence of the Adam-type methods. We prove that under our derived conditions, these methods achieve a convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization. We show the conditions are essential in the sense that violating them may make the algorithm diverge. Moreover, we propose and analyze a class of (deterministic) incremental adaptive gradient algorithms with the same $O(\log{T}/\sqrt{T})$ convergence rate. Our study could also be extended to a broader class of adaptive gradient methods in machine learning and optimization.
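For a concrete picture of the update family the abstract describes, here is a minimal NumPy sketch of one generic Adam-type step, where a momentum estimate and a second-moment estimate are both built from past gradients and combined into an adaptive step. The function name, defaults, and variant handling are illustrative assumptions, not the paper's notation or the authors' implementation.

```python
import numpy as np

def adam_type_step(x, g, m, v, v_hat, t, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, variant="adam"):
    """One generic Adam-type update: x <- x - lr * m / sqrt(v).

    Hypothetical sketch of the update family (Adam, AMSGrad, AdaGrad)
    discussed in the abstract; names and defaults are illustrative only.
    """
    # Momentum: exponential moving average of past gradients.
    m = beta1 * m + (1 - beta1) * g

    if variant == "adam":
        # Adam: exponential moving average of squared gradients.
        v = beta2 * v + (1 - beta2) * g**2
        denom = v
    elif variant == "amsgrad":
        # AMSGrad: running maximum of the Adam second-moment estimate.
        v = beta2 * v + (1 - beta2) * g**2
        v_hat = np.maximum(v_hat, v)
        denom = v_hat
    elif variant == "adagrad":
        # AdaGrad (average form): running mean of all past squared gradients.
        v = v + (g**2 - v) / t
        denom = v
    else:
        raise ValueError(f"unknown variant: {variant}")

    x = x - lr * m / (np.sqrt(denom) + eps)
    return x, m, v, v_hat
```

In a typical use, `m`, `v`, and `v_hat` start as zero arrays with the same shape as `x`, and the function is called once per stochastic gradient with `t = 1, 2, ...`; the only difference between the variants is how the second-moment denominator is accumulated.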

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Adam-HNAG: A Convergent Reformulation of Adam with Accelerated Rate

    math.OC · 2026-04 · unverdicted · novelty 8.0

    Adam-HNAG is a splitting-based reformulation of Adam that yields the first convergence proof for Adam-type methods, including accelerated rates, in convex smooth optimization.

  2. Anon: Extrapolating Adaptivity Beyond SGD and Adam

    cs.AI · 2026-05 · unverdicted · novelty 6.0

    The Anon optimizer uses tunable adaptivity and an incremental delay update to achieve convergence guarantees and to outperform existing methods on image classification, diffusion, and language modeling tasks.