pith. machine review for the scientific record.

arXiv: 1206.1106 · v2 · submitted 2012-06-06 · 📊 stat.ML · cs.LG

Recognition: unknown

No More Pesky Learning Rates

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.LG
keywords: learning rates · gradient method · performance · time · across · adaptive
Original abstract

The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of SGD or other adaptive approaches with their best settings obtained through systematic search, and effectively removes the need for learning rate tuning.
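The abstract describes per-parameter learning rates driven by local gradient variations across samples: when per-sample gradients agree, the rate can grow; when they disagree (high variance), it shrinks. A minimal NumPy sketch of that idea follows. It is not the paper's exact vSGD algorithm: the fixed EMA decay, the state layout, and the externally supplied diagonal curvature estimate are simplifying assumptions (the paper adapts the memory time constants and estimates curvature itself).

```python
import numpy as np

def adaptive_rate_step(theta, grad, state, curvature, eps=1e-8):
    """One per-parameter adaptive SGD step in the spirit of the paper.

    A sketch under stated assumptions, not the authors' vSGD:
    `curvature` is a given diagonal Hessian estimate, and the EMA
    decay is fixed rather than adapted as in the paper.
    """
    decay = state.get("decay", 0.99)  # assumption: fixed memory constant
    g_bar = decay * state["g_bar"] + (1 - decay) * grad        # EMA of gradient
    v_bar = decay * state["v_bar"] + (1 - decay) * grad ** 2   # EMA of squared gradient

    # Per-parameter rate: eta_i = g_bar_i^2 / (h_i * v_bar_i).
    # The ratio g_bar^2 / v_bar is near 1 when per-sample gradients agree
    # and near 0 when they are noisy, so rates can increase as well as
    # decrease over time.
    eta = g_bar ** 2 / (curvature * v_bar + eps)

    state["g_bar"], state["v_bar"] = g_bar, v_bar
    return theta - eta * grad, state
```

In the noise-free limit the ratio approaches 1 and the step approaches a Newton-like 1/h, while high sample-to-sample variance drives the rate toward zero, which is the mechanism that removes the need for manual learning rate tuning.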

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work, sorted by Pith novelty score.

  1. Adam: A Method for Stochastic Optimization

    cs.LG 2014-12 accept novelty 7.5

    A first-order stochastic optimizer that maintains bias-corrected exponential moving averages of the gradient and its square, dividing the former by the square root of the latter to set per-parameter step sizes (a minimal sketch of this update appears after the list).

  2. MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework

    cs.AI 2023-08 unverdicted novelty 6.0

    MetaGPT embeds human SOPs into LLM prompts to create role-specialized agent teams that produce more coherent solutions on collaborative software engineering tasks than prior chat-based multi-agent systems.
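
The summary in item 1 maps directly onto the standard Adam update (Kingma & Ba, 2014). For reference, a minimal NumPy sketch; the default hyperparameters are the ones suggested in the Adam paper, while the function name and state-passing style are illustrative:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for step t >= 1, matching the summary above."""
    m = b1 * m + (1 - b1) * grad           # EMA of the gradient
    v = b2 * v + (1 - b2) * grad ** 2      # EMA of the squared gradient
    m_hat = m / (1 - b1 ** t)              # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step
    return theta, m, v
```

Like the method reviewed on this page, Adam scales each parameter's step by a statistic of recent gradients, but it keeps a single global rate `lr` rather than deriving the rate from a variance-to-curvature ratio.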