pith. machine review for the scientific record.

Understanding deep learning requires rethinking generalization

23 Pith papers cite this work. Polarity classification is still indexing.

abstract

Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
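
As a concrete illustration of the randomization test described in the abstract, the sketch below trains an over-parameterized two-layer network on unstructured Gaussian inputs with uniformly random labels and prints the training accuracy. The architecture, optimizer settings, and synthetic data are assumptions chosen to keep the example self-contained; this is not the paper's CIFAR-10 or ImageNet setup.

    # Minimal sketch of the randomization test: fit completely random labels on
    # unstructured random inputs with an over-parameterized two-layer network.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n, d, k = 1024, 256, 10                      # samples, input dim, classes
    x = torch.randn(n, d)                        # unstructured random "images"
    y = torch.randint(0, k, (n,))                # completely random labels

    # Hidden width chosen so the parameter count (~1.1M) far exceeds the number
    # of data points, echoing the abstract's remark that parameters usually
    # exceed data points in practice.
    h = 4096
    model = nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, k))
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(2000):
        opt.zero_grad()
        logits = model(x)
        loss = loss_fn(logits, y)
        loss.backward()
        opt.step()
        if step % 200 == 0:
            acc = (logits.argmax(dim=1) == y).float().mean().item()
            print(f"step {step:4d}  loss {loss.item():.3f}  train acc {acc:.3f}")

    # Training accuracy typically reaches 1.0: the network memorizes pure noise,
    # so a small training error by itself cannot explain generalization.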

representative citing papers

Stochastic Trust-Region Methods for Over-parameterized Models

math.OC · 2026-04-15 · unverdicted · novelty 7.0

Stochastic trust-region methods achieve O(ε^{-2} log(1/ε)) complexity for unconstrained problems and O(ε^{-4} log(1/ε)) for equality-constrained problems under the strong growth condition, with experiments showing stable performance comparable to tuned baselines without learning-rate scheduling.
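
For readers unfamiliar with the method family named above, the following is a generic, textbook-style sketch of a stochastic trust-region loop on a toy least-squares problem. The objective, the multiplicative gradient noise standing in for the strong growth condition, and the Cauchy-point subproblem solve are assumptions of this illustration, not the cited paper's algorithm or its complexity analysis.

    # Generic stochastic trust-region iteration: build a quadratic model around x
    # from a noisy gradient, step to the Cauchy point inside the radius, and
    # accept/shrink/expand based on the ratio of actual to predicted decrease.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    b = rng.standard_normal(50)
    B = A.T @ A                                  # model Hessian (here: exact)

    def f(x):                                    # least-squares objective
        r = A @ x - b
        return 0.5 * r @ r

    def stochastic_grad(x):
        g = A.T @ (A @ x - b)
        # Multiplicative noise so E[g_hat] = grad f and E||g_hat||^2 <= c ||grad f||^2,
        # a simple stand-in for the strong growth condition.
        return g * (1.0 + 0.5 * rng.standard_normal())

    x, delta = np.zeros(10), 1.0
    for it in range(200):
        g = stochastic_grad(x)
        gBg = g @ B @ g
        # Cauchy point: minimizer of the quadratic model along -g within the radius.
        t = delta / np.linalg.norm(g)
        if gBg > 0:
            t = min(t, (g @ g) / gBg)
        p = -t * g
        pred = -(g @ p + 0.5 * p @ B @ p)        # predicted decrease of the model
        rho = (f(x) - f(x + p)) / pred if pred > 0 else -1.0
        if rho > 0.1:                            # accept the step
            x = x + p
        if rho > 0.75:
            delta = min(2.0 * delta, 10.0)       # expand the trust region
        elif rho < 0.1:
            delta = 0.5 * delta                  # shrink after a poor step
    print("final objective:", f(x))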

Deep Learning Scaling is Predictable, Empirically

cs.LG · 2017-12-01 · unverdicted · novelty 7.0

Deep learning generalization error follows power-law scaling with training set size across multiple domains, with model size scaling sublinearly with data size.
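
The power-law claim summarized above corresponds to a straight-line fit in log-log coordinates. The sketch below performs such a fit on made-up (training-set size, validation error) pairs; the numbers are illustrative placeholders, not measurements from the paper.

    # Fit err ≈ alpha * m**slope (slope < 0) by linear regression in log-log space.
    import numpy as np

    m = np.array([1e4, 3e4, 1e5, 3e5, 1e6])          # training-set sizes (hypothetical)
    err = np.array([0.30, 0.22, 0.15, 0.11, 0.08])   # validation errors (hypothetical)

    slope, intercept = np.polyfit(np.log(m), np.log(err), 1)
    alpha = np.exp(intercept)
    print(f"err ≈ {alpha:.2f} * m^({slope:.3f})")
    print("extrapolated error at m = 3e6:", alpha * 3e6 ** slope)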

The Propagation Field: A Geometric Substrate Theory of Deep Learning

cs.LG · 2026-05-08 · unverdicted · novelty 6.0

Neural networks possess a propagation field of trajectories and Jacobians whose quality can be measured and optimized independently of endpoint loss, yielding better unseen-path generalization and reduced forgetting in continual learning.

Adversarial Robustness of NTK Neural Networks

stat.ML · 2026-04-28 · unverdicted · novelty 6.0

NTK networks achieve minimax optimal adversarial regression rates in Sobolev spaces with early stopping, but minimum-norm interpolants are vulnerable.
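
The contrast this summary draws between early-stopped training and minimum-norm interpolation already shows up in plain kernel regression. In the sketch below a Laplace kernel stands in for the NTK, and only the training residuals of the two estimators are compared; the paper's adversarial evaluation and Sobolev-space rates are not reproduced.

    # Minimum-norm interpolation vs. early-stopped kernel gradient descent on noisy
    # 1-D data. The interpolant drives the training residual to ~0 (it fits the
    # noise); early stopping acts as implicit regularization and leaves it unfit.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 80
    X = rng.uniform(-1, 1, n)
    y = np.sin(3 * X) + 0.3 * rng.standard_normal(n)       # noisy targets

    K = np.exp(-np.abs(X[:, None] - X[None, :]) / 0.2)     # Laplace kernel matrix

    alpha_interp = np.linalg.solve(K, y)                    # minimum-norm interpolant

    alpha_es = np.zeros(n)                                  # kernel gradient descent
    lr = 1.0 / np.linalg.eigvalsh(K).max()
    for _ in range(50):                                     # stop early
        alpha_es += lr * (y - K @ alpha_es)

    print("train residual, interpolant:", np.linalg.norm(K @ alpha_interp - y))
    print("train residual, early stop :", np.linalg.norm(K @ alpha_es - y))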

Misspecified Universal Learning

cs.IT · 2026-05-11 · unverdicted · novelty 5.0

The minimax regret for misspecified universal learning under log-loss is characterized, and the resulting optimal universal learner gives a unified framework for handling arbitrary uncertainty about the data-generating process.
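
For orientation, the standard well-specified version of the quantities this summary refers to is the individual-sequence log-loss regret and its minimax value, attained (when finite) by the normalized maximum likelihood predictor. The notation below is generic background, not necessarily the paper's; the misspecified setting it studies refines how the comparison class and the data source interact.

    \[
      \mathrm{reg}(q, x^n) \;=\; \sup_{\theta} \log p_\theta(x^n) \;-\; \log q(x^n),
      \qquad
      R_n \;=\; \inf_{q}\, \sup_{x^n} \mathrm{reg}(q, x^n),
      \qquad
      q^{*}(x^n) \;=\; \frac{\sup_{\theta} p_\theta(x^n)}{\sum_{y^n} \sup_{\theta} p_\theta(y^n)}.
    \]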

Benefits of Low-Cost Bio-Inspiration in the Age of Overparametrization

cs.RO · 2026-04-22 · unverdicted · novelty 3.0

Shallow MLPs and dense CPGs outperform deeper MLPs and Actor-Critic RL in bounded robot control tasks with limited proprioception, with a Parameter Impact metric indicating extra RL parameters yield no performance gain over evolutionary strategies.

citing papers explorer

Showing 1 of 1 citing paper after filters.

  • Deep Learning Scaling is Predictable, Empirically · cs.LG · 2017-12-01 · unverdicted · none · ref 11 · internal anchor

    Deep learning generalization error follows power-law scaling with training set size across multiple domains, with model size scaling sublinearly with data size.