pith. machine review for the scientific record

arxiv: 1810.02032 · v2 · submitted 2018-10-04 · 💻 cs.LG · math.OC · stat.ML

Recognition: unknown

Gradient descent aligns the layers of deep linear networks

Ziwei Ji, Matus Telgarsky

Authors on Pith: no claims yet
classification 💻 cs.LG · math.OC · stat.ML
keywords gradient descent · linear · weight · alignment · applied · converges · decreasing
0 comments
read the original abstract

This paper establishes risk convergence and asymptotic weight matrix alignment --- a form of implicit regularization --- of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation $u_iv_i^{\top}$; (iii) these rank-1 matrices are aligned across layers, meaning $|v_{i+1}^{\top}u_i|\to1$. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network --- the product of its weight matrices --- converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.
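The alignment claims above are straightforward to probe empirically. Below is a minimal sketch (not the authors' code): it trains a depth-3 linear network $W_3 W_2 W_1$ with plain gradient descent on the logistic loss over synthetic linearly separable data, then reports (a) how close each layer is to its rank-1 approximation via $\sigma_1 / \|W_i\|_F$ and (b) the adjacent-layer alignment $|v_{i+1}^{\top} u_i|$ computed from the layers' top singular vectors. The layer sizes, constant step size, and iteration count are illustrative assumptions; the paper's gradient descent results use particular decreasing step sizes.

```python
# Minimal sketch of the alignment phenomenon for a depth-3 linear network
# trained with gradient descent on the logistic loss over separable data.
# All hyperparameters here are illustrative, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data: labels from a fixed ground-truth direction.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = np.sign(X @ w_star)

# Depth-3 linear network f(x) = W3 @ W2 @ W1 @ x (scalar output).
sizes = [d, 8, 8, 1]
Ws = [0.1 * rng.normal(size=(sizes[i + 1], sizes[i])) for i in range(3)]

def loss_and_grads(Ws, X, y):
    """Logistic loss of the induced linear predictor and per-layer gradients."""
    W1, W2, W3 = Ws
    w = (W3 @ W2 @ W1).ravel()                     # induced linear predictor, shape (d,)
    margins = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -margins))    # log(1 + exp(-margin)), numerically stable
    # d(loss)/dw, shape (1, d); sigmoid(-margin) clipped for stability.
    s = 1.0 / (1.0 + np.exp(np.clip(margins, -50.0, 50.0)))
    dw = ((-y * s / len(y))[:, None] * X).sum(axis=0, keepdims=True)
    return loss, [
        W2.T @ W3.T @ dw,        # dL/dW1
        W3.T @ dw @ W1.T,        # dL/dW2
        dw @ (W2 @ W1).T,        # dL/dW3
    ]

eta, steps = 0.2, 20000          # constant step size (illustrative only)
for _ in range(steps):
    _, grads = loss_and_grads(Ws, X, y)
    Ws = [W - eta * g for W, g in zip(Ws, grads)]
print("final risk:", loss_and_grads(Ws, X, y)[0])

# Diagnostics: rank-1 closeness of each layer and adjacent-layer alignment.
svds = [np.linalg.svd(W) for W in Ws]
for i, (U, S, Vt) in enumerate(svds):
    print(f"layer {i+1}: sigma_1 / ||W||_F =", S[0] / np.linalg.norm(Ws[i]))
for i in range(2):
    u_i = svds[i][0][:, 0]       # top left singular vector of layer i+1
    v_next = svds[i + 1][2][0]   # top right singular vector of layer i+2
    print(f"|v_{i+2}^T u_{i+1}| =", abs(v_next @ u_i))
```

If the sketch behaves as the theorem predicts, both diagnostics should approach 1 as training drives the risk toward 0.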

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. A Theory of Saddle Escape in Deep Nonlinear Networks

    cs.LG 2026-05 unverdicted novelty 7.0

    Derives an exact norm-imbalance identity for deep nonlinear nets, classifying activations into four classes and yielding the escape-time law τ★ = Θ(ε^{-(r-2)}) governed by bottleneck depth r.

  2. A Theory of Saddle Escape in Deep Nonlinear Networks

    cs.LG 2026-05 conditional novelty 7.0

    An exact norm-imbalance identity classifies activations into four classes and reduces deep nonlinear training flow to a scalar ODE that predicts saddle escape time scaling as ε^{-(r-2)} for r bottle...

  3. On the global convergence of gradient descent for wide shallow models with bounded nonlinearities

    math.OC 2026-05 unverdicted novelty 6.0

    Gradient descent on wide shallow models with bounded nonlinearities converges globally in the mean-field limit because non-global critical points are unstable under the dynamics.

  4. Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

    cs.LG 2024-01 unverdicted novelty 6.0

    SPIN lets weak LLMs become strong by self-generating training data from previous model versions and training to prefer human-annotated responses over its own outputs, outperforming DPO even with extra GPT-4 data on be...