pith. machine review for the scientific record.

arxiv: 1808.04295 · v4 · submitted 2018-08-13 · 💻 cs.LG · cs.AI · math.OC · math.ST · stat.ML · stat.TH

Recognition: unknown

Understanding training and generalization in deep learning by Fourier analysis

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · math.OC · math.ST · stat.ML · stat.TH
keywords training · generalization · ability · analysis · deep · dnns · fourier · function
original abstract

Background: It remains an open research question to understand theoretically why Deep Neural Networks (DNNs), which have many more parameters than training samples and are trained by (stochastic) gradient-based methods, often achieve remarkably low generalization error. Contribution: We study DNN training by Fourier analysis. Our theoretical framework explains: i) DNNs trained with (stochastic) gradient-based methods tend to give low-frequency components of the target function higher priority during training; ii) small initialization leads to good generalization ability while preserving the DNN's capacity to fit any function. These results are further confirmed by experiments in which DNNs fit natural images, one-dimensional functions, and the MNIST dataset.
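As a rough illustration of claim i), here is a minimal sketch (not the authors' code; the architecture, initialization scale, learning rate, and target frequencies are all illustrative assumptions) that fits a two-frequency one-dimensional target with a small MLP and tracks the error of each target Fourier component during training:

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# 256 samples of a 1-D target on [0, 1): one low (1-cycle) and one
# high (8-cycle) Fourier component over the sampling window.
x = (torch.arange(256, dtype=torch.float32) / 256).unsqueeze(1)
y = torch.sin(2 * np.pi * x) + 0.5 * torch.sin(2 * np.pi * 8 * x)

# Small tanh MLP; the small-variance init echoes the paper's point ii).
net = nn.Sequential(
    nn.Linear(1, 128), nn.Tanh(),
    nn.Linear(128, 128), nn.Tanh(),
    nn.Linear(128, 1),
)
for p in net.parameters():
    nn.init.normal_(p, std=0.1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def spectrum(v: torch.Tensor) -> np.ndarray:
    """Magnitude spectrum of a signal sampled on the grid x."""
    return np.abs(np.fft.rfft(v.detach().numpy().ravel()))

target = spectrum(y)
for step in range(1, 5001):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # Relative error of each target frequency: the low bin (1)
        # is expected to converge before the high bin (8).
        err = np.abs(spectrum(net(x)) - target) / target.clip(min=1e-8)
        print(f"step {step}: bin-1 err {err[1]:.3f}, bin-8 err {err[8]:.3f}")
```

Under this setup the bin-1 error typically falls well before the bin-8 error, matching the low-frequency-first behaviour the abstract describes.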

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Spectral Energy Centroid: a Metric for Improving Performance and Analyzing Spectral Bias in Implicit Neural Representations

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Spectral Energy Centroid is a new metric that quantifies signal frequency and INR spectral bias, supporting better hyperparameter selection and cross-architecture analysis.
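The cited paper's exact formulation is not given here, but a "spectral energy centroid" is commonly understood as the power-weighted mean frequency of a signal. A hedged sketch of that generic quantity (the function name and normalization are assumptions, not the cited paper's definition):

```python
import numpy as np

def spectral_energy_centroid(signal: np.ndarray, sample_rate: float = 1.0) -> float:
    """Power-weighted mean frequency of a real 1-D signal."""
    power = np.abs(np.fft.rfft(signal)) ** 2                  # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate) # bin frequencies
    return float((freqs * power).sum() / power.sum())

# A higher-frequency signal yields a higher centroid:
t = np.arange(1024) / 1024.0
print(spectral_energy_centroid(np.sin(2 * np.pi * 4 * t)))   # ~4.0
print(spectral_energy_centroid(np.sin(2 * np.pi * 32 * t)))  # ~32.0
```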