pith. machine review for the scientific record.

arxiv: 1802.04434 · v3 · submitted 2018-02-13 · 💻 cs.LG · cs.DC · math.OC

Recognition: unknown

signSGD: Compressed Optimisation for Non-Convex Problems

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.DC · math.OC
keywords signsgd · convergence · gradients · achieve · communication · compressed · distributed · fast
original abstract

Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. signSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. The relative $\ell_1/\ell_2$ geometry of gradients, noise and curvature informs whether signSGD or SGD is theoretically better suited to a particular problem. On the practical side, we find that the momentum counterpart of signSGD is able to match the accuracy and convergence speed of Adam on deep Imagenet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker, enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss we prove that majority vote can achieve the same reduction in variance as full precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence. Code to reproduce experiments is to be found at https://github.com/jxbz/signSGD.
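The abstract pins down the update rules precisely, so a minimal NumPy sketch of the three updates it describes may help: signSGD, its momentum counterpart (called Signum in the paper), and majority-vote aggregation. This is an illustration under my own naming, not the authors' implementation (that lives in the linked repository); the momentum constant beta=0.9 is an assumed placeholder.

    import numpy as np

    # signSGD: step each coordinate by the learning rate in the
    # direction of the gradient's sign (1 bit of gradient
    # information per coordinate).
    def signsgd_step(w, grad, lr):
        return w - lr * np.sign(grad)

    # Signum, the momentum counterpart: take the sign of an
    # exponential moving average of gradients. beta=0.9 is assumed.
    def signum_step(w, m, grad, lr, beta=0.9):
        m = beta * m + (1 - beta) * grad
        return w - lr * np.sign(m), m

    # Distributed signSGD with majority vote: each worker sends
    # sign(grad) to the parameter server (1 bit per coordinate up),
    # and the server broadcasts the sign of the vote total
    # (1 bit per coordinate down).
    def majority_vote_step(w, worker_grads, lr):
        votes = sum(np.sign(g) for g in worker_grads)
        return w - lr * np.sign(votes)

Note that the server returns the sign of a sum of signs, so uplink and downlink each carry one bit per coordinate; this is the 1-bit compression in both directions the abstract refers to.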

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Unlearning with Asymmetric Sources: Improved Unlearning-Utility Trade-off with Public Data

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Asymmetric Langevin Unlearning uses public data to reduce unlearning noise costs by O(1/n_pub²), enabling practical mass unlearning with preserved utility under distribution mismatch.

  2. Enhancing SignSGD: Small-Batch Convergence Analysis and a Hybrid Switching Strategy

    cs.LG · 2026-04 · unverdicted · novelty 5.0

    SignSGD with pre-sign dithering and a calibrated hybrid switch to SGD achieves 92.18% accuracy on CIFAR-10 with ResNet-18, outperforming both pure SGD and pure SignSGD, and also beats Adam on CIFAR-100 (the two mechanisms are sketched below).
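The summary above names two mechanisms, pre-sign dithering and a calibrated switch to SGD, without spelling them out. What follows is a speculative sketch, not the cited paper's verified algorithm: it assumes dithering means adding zero-mean Gaussian noise before taking the sign (which can unbias the sign estimate at small batch sizes) and that the switch is a fixed step threshold; sigma, switch_step, and all names here are hypothetical.

    import numpy as np

    # Hypothetical pre-sign dithering: perturb the gradient with
    # zero-mean noise before taking the sign (assumed reading of
    # the cited paper; sigma is an illustrative parameter).
    def dithered_sign_step(w, grad, lr, sigma, rng):
        noise = rng.normal(0.0, sigma, size=grad.shape)
        return w - lr * np.sign(grad + noise)

    # Hypothetical hybrid strategy: dithered signSGD early in
    # training, plain SGD after a calibrated switch point.
    def hybrid_step(w, grad, lr, step, switch_step, sigma, rng):
        if step < switch_step:
            return dithered_sign_step(w, grad, lr, sigma, rng)
        return w - lr * grad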