pith. machine review for the scientific record.

arxiv: 1802.08760 · v3 · submitted 2018-02-23 · stat.ML · cs.AI · cs.LG · cs.NE

Recognition: unknown

Sensitivity and Generalization in Neural Networks: an Empirical Study

Authors on Pith: no claims yet
classification: stat.ML · cs.AI · cs.LG · cs.NE
keywords: generalization · complexity · networks · neural · associated · data · empirical · factors
Original abstract

In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations. Our experiments survey thousands of models with various fully-connected architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets. We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the norm of the input-output Jacobian of the network, and that this correlates well with generalization. We further establish that factors associated with poor generalization, such as full-batch training or using random labels, correspond to lower robustness, while factors associated with good generalization, such as data augmentation and ReLU non-linearities, give rise to more robust functions. Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points.
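The central metric in the abstract, the norm of the input-output Jacobian evaluated near the data, is straightforward to compute for a small model. Below is a minimal sketch, not taken from the authors' code: it assumes a toy fully-connected ReLU network with hypothetical layer sizes and random inputs, and uses JAX's jax.jacrev to get the per-point Jacobian whose Frobenius norm serves as the sensitivity measure.

```python
# Sketch (assumption: illustrative only, not the paper's implementation) of the
# input-output Jacobian Frobenius norm at a single input point, the sensitivity
# metric the abstract refers to.
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Simple fully-connected ReLU network mapping an input vector to logits.
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b  # raw logits

def jacobian_frobenius_norm(params, x):
    # Jacobian of the outputs w.r.t. one input point x; shape (n_out, n_in).
    jac = jax.jacrev(lambda inp: mlp(params, inp))(x)
    return jnp.sqrt(jnp.sum(jac ** 2))

# Hypothetical tiny network and input, purely for illustration.
key = jax.random.PRNGKey(0)
sizes = [8, 16, 3]
keys = jax.random.split(key, len(sizes) - 1)
params = [
    (jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
    for k, m, n in zip(keys, sizes[:-1], sizes[1:])
]
x = jax.random.normal(key, (8,))
print(jacobian_frobenius_norm(params, x))  # sensitivity of the model at x
```

Averaging this quantity over test points near the data manifold, rather than at arbitrary points in input space, is what the paper reports as correlating with generalization.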

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Scale-Aware Adversarial Analysis: A Diagnostic for Generative AI in Multiscale Complex Systems

    cs.LG 2026-05 unverdicted novelty 6.0

    A new scale-aware diagnostic framework shows that unconstrained diffusion generative models exhibit structural freezing and instability instead of smooth physical responses under multiscale perturbations.

  2. Complexity of Linear Regions in Self-supervised Deep ReLU Networks

    cs.LG 2026-04 unverdicted novelty 6.0

    Self-supervised ReLU networks form substantially fewer linear regions than supervised models for comparable accuracy, with contrastive methods rapidly expanding regions and self-distillation consolidating them, enabli...

  3. Escape dynamics and implicit bias of one-pass SGD in overparameterized quadratic networks

    cond-mat.dis-nn 2026-04 unverdicted novelty 6.0

    In overparameterized quadratic networks, one-pass SGD escapes generalization plateaus only modestly faster and selects the initialization-closest zero-loss solution due to a conserved quantity in the overlap ODEs.