pith. machine review for the scientific record.

arxiv: 1904.12043 · v2 · submitted 2019-04-26 · 💻 cs.LG · cs.CV · cs.DC · stat.ML

Recognition: unknown

Dynamic Mini-batch SGD for Elastic Distributed Training: Learning in the Limbo of Resources

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CV · cs.DC · stat.ML
keywords learning · distributed · training · different · elastic · momentum · rate · time
original abstract

With an increasing demand for training power for deep learning algorithms and the rapid growth of computation resources in data centers, it is desirable to dynamically schedule different distributed deep learning tasks to maximize resource utilization and reduce cost. In this process, different tasks may receive varying numbers of machines at different times, a setting we call elastic distributed training. Despite recent successes in large mini-batch distributed training, these methods are rarely tested in elastic distributed training environments, and in our experiments they suffer degraded performance when the learning rate is adjusted linearly and immediately with respect to the batch size. One difficulty we observe is that the noise in the stochastic momentum estimation accumulates over time and has delayed effects when the batch size changes. We therefore propose to smoothly adjust the learning rate over time to alleviate the influence of the noisy momentum estimation. Our experiments on image classification, object detection and semantic segmentation demonstrate that our proposed Dynamic SGD method achieves stabilized performance when varying the number of GPUs from 8 to 128. We also provide a theoretical understanding of the optimality of linear learning rate scheduling and the effects of stochastic momentum.
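
The scheduling idea described in the abstract, ramping the learning rate toward the linearly scaled target instead of jumping to it the moment the batch size changes, can be sketched as follows. This is an illustrative reading of the abstract, not the authors' implementation; the linear ramp, the function name `smooth_lr`, and the `transition_iters` parameter are assumptions for the sake of the example.

```python
# Illustrative sketch: smoothly move the learning rate toward the linearly
# scaled target after an elastic resize. Not the paper's code; the ramp shape
# and names (`smooth_lr`, `transition_iters`) are assumptions.

def smooth_lr(base_lr, base_batch, new_batch, lr_before_resize,
              steps_since_resize, transition_iters=200):
    """Interpolate from the pre-resize LR to the linearly scaled target LR."""
    target_lr = base_lr * new_batch / base_batch   # linear scaling rule
    if steps_since_resize >= transition_iters:
        return target_lr
    frac = steps_since_resize / transition_iters   # goes 0 -> 1 over the window
    return lr_before_resize + frac * (target_lr - lr_before_resize)


# Example: the cluster grows from 8 to 32 GPUs (per-GPU batch 32, so 256 -> 1024);
# the learning rate eases from 0.1 to 0.4 over 200 iterations instead of jumping.
if __name__ == "__main__":
    for step in (0, 50, 100, 200):
        print(step, smooth_lr(base_lr=0.1, base_batch=256, new_batch=1024,
                              lr_before_resize=0.1, steps_since_resize=step))
```

The gradual ramp gives the accumulated momentum estimate time to adapt to the new batch size, which is the failure mode the abstract attributes to immediate linear rescaling.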

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

    cs.LG · 2024-03 · conditional novelty 7.0

    GaLore performs full-parameter LLM training with up to 65.5% less optimizer memory by projecting gradients onto a low-rank subspace at each step, matching full-rank performance on LLaMA pre-training and RoBERTa fine-tuning.
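
The GaLore summary above mentions projecting gradients onto a low-rank subspace before the optimizer step. Below is a rough sketch of that idea in NumPy; it is not GaLore's implementation (GaLore keeps Adam states in the low-rank space and refreshes the projection only every few hundred steps), and the names `galore_style_step`, `state`, and `rank` are illustrative assumptions.

```python
# Rough sketch of gradient low-rank projection with a momentum-SGD update.
# The memory saving comes from the optimizer state `m` having shape (rank, n)
# instead of the full (m, n) gradient shape. Names and defaults are assumptions.
import numpy as np

def galore_style_step(W, grad, state, lr=1e-3, rank=4, beta=0.9):
    """One illustrative optimizer step with the gradient projected to rank r."""
    U, _, _ = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]                       # (m, r) projection basis
    g_low = P.T @ grad                    # project gradient to (r, n)
    m = state.get("m")
    if m is None or m.shape != g_low.shape:
        m = np.zeros_like(g_low)          # optimizer state lives in low rank
    m = beta * m + (1.0 - beta) * g_low
    state["m"] = m
    return W - lr * (P @ m)               # project the update back to full shape


# Usage: a few steps on a random 64x32 weight matrix.
rng = np.random.default_rng(0)
W, state = rng.standard_normal((64, 32)), {}
for _ in range(3):
    W = galore_style_step(W, rng.standard_normal((64, 32)), state)
```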