pith. machine review for the scientific record.

arxiv: 1710.09412 · v2 · submitted 2017-10-25 · 💻 cs.LG · stat.ML

Recognition: 3 Lean theorem links

mixup: Beyond Empirical Risk Minimization

Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz

Pith reviewed 2026-05-12 15:28 UTC · model grok-4.3

classification 💻 cs.LG stat.ML
keywords mixup · data augmentation · regularization · generalization · adversarial robustness · neural networks · empirical risk minimization · generative adversarial networks

The pith

Training neural networks on convex combinations of example pairs and their labels encourages linear behavior between points and improves generalization.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes mixup as a training principle that creates virtual examples by taking convex combinations of pairs of real inputs and their labels. This regularizes the network to favor simple linear interpolations in the regions between training data rather than complex or memorized functions. A sympathetic reader would care because deep networks often memorize training details and remain fragile to small input changes despite high capacity. If the approach succeeds, it offers a lightweight way to boost performance on image, speech, and tabular tasks while addressing memorization and adversarial sensitivity.

Core claim

Mixup trains a neural network on convex combinations of pairs of examples and their labels, which regularizes the model to exhibit simple linear behavior in between training examples. Experiments on ImageNet-2012, CIFAR-10, CIFAR-100, Google commands, and UCI datasets demonstrate that this yields better generalization than standard empirical risk minimization, reduces memorization of corrupt labels, increases robustness to adversarial examples, and stabilizes generative adversarial network training.

What carries the argument

The mixup procedure, which forms each virtual training example as lambda * x_i + (1 - lambda) * x_j, with the label mixed correspondingly as lambda * y_i + (1 - lambda) * y_j and the mixing coefficient lambda drawn from a Beta(alpha, alpha) distribution.
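
A minimal sketch of that procedure in PyTorch-style Python. This is an illustration under assumptions: pairing each example with a shuffled copy of the same minibatch and the helper names mixup_batch / mixup_loss are ours, not necessarily the authors' exact implementation.

    import numpy as np
    import torch
    import torch.nn.functional as F

    def mixup_batch(x, y, alpha=0.2):
        """Form virtual examples lam * x_i + (1 - lam) * x_j, pairing each
        example with a randomly shuffled partner from the same minibatch;
        lam is drawn from Beta(alpha, alpha)."""
        lam = float(np.random.beta(alpha, alpha)) if alpha > 0 else 1.0
        index = torch.randperm(x.size(0))            # random partner for each example
        x_mixed = lam * x + (1.0 - lam) * x[index]   # convex combination of inputs
        return x_mixed, y, y[index], lam

    def mixup_loss(logits, y_a, y_b, lam):
        # Weighted cross-entropy against both original labels; for one-hot
        # targets this equals cross-entropy against the mixed soft label.
        return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)

In a training loop this replaces the usual forward pass: x_mixed, y_a, y_b, lam = mixup_batch(inputs, targets); loss = mixup_loss(model(x_mixed), y_a, y_b, lam).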

If this is right

  • State-of-the-art networks achieve higher accuracy on large-scale image classification benchmarks.
  • Models become less prone to fitting noisy or corrupted training labels.
  • Networks exhibit greater tolerance to small adversarial perturbations in their inputs.
  • Training dynamics for generative adversarial networks become more stable across runs.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The regularization implicitly smooths the learned function over the convex hull of the training data.
  • Mixup could serve as a drop-in replacement for other vicinal risk minimization strategies that operate only in input space.
  • The same mixing principle may extend naturally to tasks beyond classification, such as regression or sequence modeling, where linear label combinations remain well-defined (a regression sketch follows below).
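
A hedged sketch of that extension for regression, where continuous targets mix directly. The function name and defaults are illustrative assumptions, and the paper does not report such an experiment.

    import numpy as np
    import torch

    def mixup_regression_batch(x, y, alpha=0.2):
        # Continuous targets mix directly: lam * y_i + (1 - lam) * y_j is itself
        # a valid regression target, so no soft-label machinery is needed.
        lam = float(np.random.beta(alpha, alpha))
        index = torch.randperm(x.size(0))
        x_mixed = lam * x + (1.0 - lam) * x[index]
        y_mixed = lam * y + (1.0 - lam) * y[index]
        return x_mixed, y_mixed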

Load-bearing premise

That linear interpolation between training examples in input space corresponds to a meaningful linear interpolation in label space that improves the learned function's generalization.

What would settle it

An experiment on a synthetic classification task whose class boundaries between examples are deliberately non-linear, showing that mixup there produces lower test accuracy than standard training.
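
One way such a falsification experiment could be set up, sketched under assumptions: concentric circles as the deliberately non-linear boundary, a small MLP, and the mixup recipe from above; the dataset choice, architecture, and hyper-parameters are illustrative, and the outcome is left to the experiment.

    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from sklearn.datasets import make_circles
    from sklearn.model_selection import train_test_split

    def run(use_mixup, alpha=1.0, epochs=300, seed=0):
        torch.manual_seed(seed); np.random.seed(seed)
        # Concentric circles: a straight line between an inner-class point and an
        # outer-class point crosses the opposite class region, so linear label
        # interpolation is deliberately at odds with the true boundary.
        X, y = make_circles(n_samples=2000, noise=0.05, factor=0.5, random_state=seed)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=seed)
        X_tr = torch.tensor(X_tr, dtype=torch.float32); y_tr = torch.tensor(y_tr)
        X_te = torch.tensor(X_te, dtype=torch.float32); y_te = torch.tensor(y_te)

        model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                              nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            if use_mixup:
                lam = float(np.random.beta(alpha, alpha))
                idx = torch.randperm(X_tr.size(0))
                logits = model(lam * X_tr + (1 - lam) * X_tr[idx])
                loss = lam * F.cross_entropy(logits, y_tr) + (1 - lam) * F.cross_entropy(logits, y_tr[idx])
            else:
                loss = F.cross_entropy(model(X_tr), y_tr)
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            return (model(X_te).argmax(dim=1) == y_te).float().mean().item()

    print("ERM   test accuracy:", run(use_mixup=False))
    print("mixup test accuracy:", run(use_mixup=True))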

read the original abstract

Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper claims that training neural networks on convex combinations of pairs of examples and their labels (mixup) regularizes the model to exhibit linear behavior between training examples, thereby improving generalization, reducing memorization of corrupt labels, and increasing robustness to adversarial examples. This is demonstrated through experiments on ImageNet-2012, CIFAR-10, CIFAR-100, Google commands, and UCI datasets using various neural network architectures.

Significance. If the results hold, mixup provides a straightforward and computationally efficient regularization technique that extends empirical risk minimization in a novel way. The paper deserves credit for its extensive empirical evaluation across multiple domains and tasks, including the additional findings on label-noise robustness and GAN stabilization. These aspects make the contribution significant for practical deep learning applications.

major comments (1)
  1. [§2] §2: The regularization argument that mixup favors 'simple linear behavior in-between training examples' is presented heuristically via the vicinal distribution construction without a formal derivation, generalization bound, or analysis showing why label-space interpolation is semantically appropriate. This is load-bearing for the central claim of going 'beyond empirical risk minimization' in a principled way, as opposed to a generic augmentation effect.
minor comments (2)
  1. The algorithm description would benefit from explicit pseudocode to aid reproducibility.
  2. Figure captions and axis labels in the experimental sections could be expanded for standalone clarity.

Simulated Authors' Rebuttal

1 response · 0 unresolved

We thank the referee for the positive review, the recognition of the empirical contributions across multiple domains, and the recommendation for minor revision. We address the single major comment below.

read point-by-point responses
  1. Referee: [§2] §2: The regularization argument that mixup favors 'simple linear behavior in-between training examples' is presented heuristically via the vicinal distribution construction without a formal derivation, generalization bound, or analysis showing why label-space interpolation is semantically appropriate. This is load-bearing for the central claim of going 'beyond empirical risk minimization' in a principled way, as opposed to a generic augmentation effect.

    Authors: We agree that the motivation in §2 is heuristic and does not include a formal generalization bound. The argument extends the vicinal risk minimization framework of Chapelle et al. (2000), where the vicinal distribution is instantiated via convex combinations of training examples and labels; this is a deliberate design choice rather than generic augmentation. Label interpolation is semantically motivated for classification because one-hot (or soft) labels represent class probabilities, and linear interpolation in label space encourages the network to output probabilities that vary smoothly between classes, consistent with the assumption that the underlying data manifold is locally linear. We will revise the manuscript to (i) cite the vicinal risk minimization literature more explicitly, (ii) clarify that the linear-behavior inductive bias is the core modeling assumption, and (iii) distinguish mixup from standard augmentation by emphasizing the joint interpolation of inputs and labels. A full theoretical analysis with generalization bounds is beyond the scope of the current work, which prioritizes broad empirical validation. revision: partial
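
A short supporting identity (an editorial addition, not part of the rebuttal): because cross-entropy is linear in the target distribution, the loss against a mixed label decomposes exactly into a lambda-weighted combination of the losses against the two original labels, which is what makes label-space interpolation well-defined once labels are read as class probabilities.

    \ell\bigl(f(\tilde{x}),\ \lambda y_i + (1-\lambda) y_j\bigr)
      = -\sum_c \bigl(\lambda y_{i,c} + (1-\lambda) y_{j,c}\bigr)\,\log f_c(\tilde{x})
      = \lambda\,\ell\bigl(f(\tilde{x}), y_i\bigr) + (1-\lambda)\,\ell\bigl(f(\tilde{x}), y_j\bigr),
    \quad \text{where } \tilde{x} = \lambda x_i + (1-\lambda) x_j .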

Circularity Check

0 steps flagged

No significant circularity; mixup defines an augmentation procedure whose effects are measured empirically on held-out data

full rationale

The paper introduces mixup by defining a vicinal distribution over convex combinations of input-label pairs and then applies standard ERM to samples from that distribution. The claimed regularization toward linear behavior between examples is the direct, definitional consequence of minimizing loss on those constructed pairs; it is not derived as a separate prediction. Generalization, robustness, and stability improvements are reported via experiments on independent test sets (ImageNet, CIFAR, etc.), with no fitted parameters, self-citations, or uniqueness theorems invoked to support the central claim. The derivation chain is therefore self-contained and externally falsifiable.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The method rests on one tunable hyper-parameter for the mixing distribution and on the domain assumption that linear interpolation of inputs and labels yields a useful vicinal distribution.

free parameters (1)
  • alpha
    Controls the Beta(alpha, alpha) distribution from which the mixing coefficient lambda is sampled; chosen per dataset (its effect on lambda is sketched below).
axioms (1)
  • domain assumption: Training on vicinal distributions formed by linear interpolations improves generalization
    Invoked to justify why convex combinations of examples and labels should be used as training data.
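
An illustrative sketch of how alpha shapes the sampled mixing coefficient; the specific alpha values are assumptions, not the paper's recommended settings.

    import numpy as np

    for alpha in (0.1, 0.2, 1.0, 4.0):
        lam = np.random.beta(alpha, alpha, size=100_000)
        # Small alpha pushes lambda toward 0 or 1, so mixed examples stay close
        # to one of the two originals; large alpha pushes lambda toward 0.5,
        # giving more aggressive mixing.
        near_half = ((lam > 0.4) & (lam < 0.6)).mean()
        print(f"alpha={alpha}: mean lambda={lam.mean():.2f}, fraction near 0.5 = {near_half:.2f}")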

pith-pipeline@v0.9.0 · 5430 in / 1106 out tokens · 30378 ms · 2026-05-12T15:28:40.009016+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Forward citations

Cited by 39 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TCP-SSM: Efficient Vision State Space Models with Token-Conditioned Poles

    cs.CV 2026-05 unverdicted novelty 7.0

    TCP-SSM conditions stable poles on visual tokens to explicitly control memory decay and oscillation in SSMs, cutting computation up to 44% while matching or exceeding accuracy on classification, segmentation, and detection.

  2. Efficient and provably convergent end-to-end training of deep neural networks with linear constraints

    math.OC 2026-05 unverdicted novelty 7.0

    An efficiently computable HS-Jacobian acts as a conservative mapping for projections onto polyhedral sets, supporting provably convergent Adam-based end-to-end training of linearly constrained deep neural networks.

  3. LLM-guided Semi-Supervised Approaches for Social Media Crisis Data Classification

    cs.AI 2026-05 conditional novelty 7.0

    LG-CoTrain, an LLM-guided co-training method, outperforms classical semi-supervised baselines for crisis tweet classification in low-resource settings with 5-25 labeled examples per class.

  4. LookWhen? Fast Video Recognition by Learning When, Where, and What to Compute

    cs.CV 2026-05 conditional novelty 7.0

    LookWhen factorizes video recognition into learning when, where, and what to compute via uniqueness-based token selection and dual-teacher distillation, achieving better accuracy-FLOPs trade-offs than baselines on mul...

  5. Domain Generalization through Spatial Relation Induction over Visual Primitives

    cs.CV 2026-05 unverdicted novelty 7.0

    PARSE improves domain generalization accuracy by factoring recognition into visual primitives and their spatial relational compositions learned end-to-end with differentiable predicates.

  6. LEGO: LoRA-Enabled Generator-Oriented Framework for Synthetic Image Detection

    cs.CV 2026-05 unverdicted novelty 7.0

    LEGO uses multiple generator-specific LoRA modules modulated by an MLP and fused with attention to detect synthetic images, achieving better performance than prior methods while using under 10% of the training data.

  7. SignMAE: Segmentation-Driven Self-Supervised Learning for Sign Language Recognition

    cs.CV 2026-05 unverdicted novelty 7.0

    SignMAE uses segmentation-driven masking in a mask-and-reconstruct self-supervised task to learn fine-grained sign representations, achieving state-of-the-art accuracy on WLASL, NMFs-CSL, and Slovo with fewer frames a...

  8. Direct Discrepancy Replay: Distribution-Discrepancy Condensation and Manifold-Consistent Replay for Continual Face Forgery Detection

    cs.CV 2026-04 unverdicted novelty 7.0

    A replay method for continual face forgery detection condenses real-fake distribution discrepancies into compact maps and synthesizes compatible samples from current real faces to reduce forgetting under tight memory ...

  9. Is your algorithm unlearning or untraining?

    cs.LG 2026-04 conditional novelty 7.0

    Machine unlearning conflates reversing the influence of specific training examples (untraining) with removing the full underlying distribution or behavior (unlearning).

  10. Chronos: Learning the Language of Time Series

    cs.LG 2024-03 conditional novelty 7.0

    Chronos pretrains transformer models on tokenized time series to deliver strong zero-shot forecasting across diverse domains.

  11. The DeepFake Detection Challenge (DFDC) Dataset

    cs.CV 2020-06 accept novelty 7.0

    The DFDC dataset is the largest public collection of face-swapped videos and supports detectors that generalize to in-the-wild deepfakes.

  12. HamBR: Active Decision Boundary Restoration Based on Hamiltonian Dynamics for Learning with Noisy Labels

    cs.CV 2026-05 unverdicted novelty 6.0

    HamBR uses Spherical HMC to probe ambiguous regions and synthesize virtual outliers with energy-based repulsion to restore decision boundaries degraded by noisy labels, achieving SOTA on CIFAR and real-world benchmarks.

  13. LiBaGS: Lightweight Boundary Gap Synthesis for Targeted Synthetic Data Selection

    cs.LG 2026-05 unverdicted novelty 6.0

    LiBaGS scores and selects synthetic data near decision boundaries using proximity, uncertainty, density, and validity, with boundary-gap allocation and marginal stopping to improve training accuracy.

  14. Cross-Sample Relational Fusion: Unifying Domain Generalization and Class-Incremental Learning

    cs.CV 2026-05 unverdicted novelty 6.0

    CORF unifies domain generalization and class-incremental learning via selective sample refinement with spatial maps and confidence weighting plus cascaded relational distillation.

  15. ICDAR 2026 Competition on Writer Identification and Pen Classification from Hand-Drawn Circles

    cs.CV 2026-05 accept novelty 6.0

    A new dataset of hand-drawn circles from 66 writers and 8 pens yields competition results of 64.8% top-1 accuracy for open-set writer identification and 92.7% for pen classification.

  16. Leveraging Image Generators to Address Training Data Scarcity: The Gen4Regen Dataset for Forest Regeneration Mapping

    cs.CV 2026-05 conditional novelty 6.0

    Mixing real UAV imagery with 2101 AI-generated image-mask pairs improves semantic segmentation F1 scores for fine-grained forest species by over 15 percentage points overall and up to 30 points for rare classes.

  17. Cheeger--Hodge Contrastive Learning for Structurally Robust Graph Representation Learning

    cs.LG 2026-04 unverdicted novelty 6.0

    CHCL aligns a Cheeger-Hodge joint signature across graph augmentations to produce embeddings that remain stable under local structural changes.

  18. Beyond Binary Contrast: Modeling Continuous Skeleton Action Spaces with Transitional Anchors

    cs.CV 2026-04 unverdicted novelty 6.0

    TranCLR models continuous skeleton action spaces with transitional anchors and multi-level manifold calibration, yielding smoother and more accurate representations than binary contrastive methods.

  19. PAC-Bayes Bounds for Gibbs Posteriors via Singular Learning Theory

    stat.ML 2026-04 unverdicted novelty 6.0

    PAC-Bayes bounds for Gibbs posteriors are obtained via singular learning theory, producing explicit and tighter posterior-averaged risk bounds that adapt to data structure in overparameterized models.

  20. Human Gaze-based Dual Teacher Guidance Learning for Semi-Supervised Medical Image Segmentation

    eess.IV 2026-04 unverdicted novelty 6.0

    HG-DTGL integrates human gaze as an extra teacher in mean-teacher learning via GazeMix, MGP module and Gaze Loss, reporting superior segmentation across ten organs on multiple modalities.

  21. Feature-Aware Anisotropic Local Differential Privacy for Utility-Preserving Graph Representation Learning in Metal Additive Manufacturing

    cs.LG 2026-04 unverdicted novelty 6.0

    FI-LDP-HGAT applies feature-importance-aware anisotropic local differential privacy to a hierarchical graph attention network, recovering 81.5% utility at epsilon=4 and 0.762 defect recall at epsilon=2 on a DED porosi...

  22. OASIC: Occlusion-Agnostic and Severity-Informed Classification

    cs.CV 2026-04 conditional novelty 6.0

    OASIC uses anomaly-based masking and severity estimation to select occlusion-matched models, improving AUC on occluded images by up to 23.7 points.

  23. Can LLMs Learn to Reason Robustly under Noisy Supervision?

    cs.LG 2026-04 conditional novelty 6.0

    Online Label Refinement lets LLMs learn robust reasoning from noisy supervision by correcting labels when majority answers show rising rollout success and stable history, delivering 3-4% gains on math and reasoning be...

  24. Rényi Attention Entropy for Patch Pruning

    cs.CV 2026-04 unverdicted novelty 6.0

    Rényi entropy of attention maps serves as a tunable criterion for pruning redundant patches in vision transformers, reducing compute with preserved accuracy on image recognition.

  25. YOLOv12: Attention-Centric Real-Time Object Detectors

    cs.CV 2025-02 unverdicted novelty 6.0

    YOLOv12 is a new attention-based real-time object detector that reports higher accuracy than YOLOv10, YOLOv11, and RT-DETR variants at comparable or better speed and efficiency.

  26. Revisiting Feature Prediction for Learning Visual Representations from Video

    cs.CV 2024-02 conditional novelty 6.0

    V-JEPA models trained only on feature prediction from 2 million public videos achieve 81.9% on Kinetics-400, 72.2% on Something-Something-v2, and 77.9% on ImageNet-1K using frozen ViT-H/16 backbones.

  27. LiBaGS: Lightweight Boundary Gap Synthesis for Targeted Synthetic Data Selection

    cs.LG 2026-05 unverdicted novelty 5.0

    LiBaGS is a lightweight method that picks synthetic data near decision boundaries while checking density and validity to improve training accuracy over standard oversampling or uncertainty sampling.

  28. CAST: Channel-Aware Spatial Transfer Learning with Pseudo-Image Radar for Sign Language Recognition

    cs.CV 2026-05 unverdicted novelty 5.0

    CAST achieves 80.5% Top-1 accuracy on radar-only sign language recognition by fusing physics-aware CVD and RTM representations through channel-aware spatial attention and asymmetric cross-attention.

  29. Agentic AIs Are the Missing Paradigm for Out-of-Distribution Generalization in Foundation Models

    cs.LG 2026-05 unverdicted novelty 5.0

    Agentic AI systems are required to overcome the parameter coverage ceiling that prevents foundation models from handling certain out-of-distribution cases.

  30. HiMix: Hierarchical Artifact-aware Mixup for Generalized Synthetic Image Detection

    cs.CV 2026-04 unverdicted novelty 5.0

    HiMix combines mixup augmentation to create transitional real-fake samples with hierarchical global-local artifact feature fusion to achieve better generalization in detecting AI-generated images from unseen generators.

  31. Investigating Bias and Fairness in Appearance-based Gaze Estimation

    cs.CV 2026-04 unverdicted novelty 5.0

    First large-scale fairness audit of gaze estimators reveals sizable accuracy disparities by ethnicity and gender, with existing mitigation methods providing only marginal fairness gains.

  32. Beyond Surface Artifacts: Capturing Shared Latent Forgery Knowledge Across Modalities

    cs.CV 2026-04 unverdicted novelty 5.0

    Introduces MAF framework and DeepModal-Bench to capture universal cross-modal forgery traces for better generalization in multimodal deepfake detection.

  33. Multi-Aspect Knowledge Distillation for Language Model with Low-rank Factorization

    cs.CL 2026-04 unverdicted novelty 5.0

    MaKD distills pre-trained language models by deeply mimicking self-attention and feed-forward modules across aspects using low-rank factorization, matching strong baselines at the same parameter budget and extending t...

  34. Why Invariance is Not Enough for Biomedical Domain Generalization and How to Fix It

    eess.IV 2026-04 unverdicted novelty 5.0

    MaskGen improves domain generalization for biomedical image segmentation by using source intensities plus domain-stable foundation model representations with minimal added complexity.

  35. YOLOv4: Optimal Speed and Accuracy of Object Detection

    cs.CV 2020-04 unverdicted novelty 5.0

    YOLOv4 achieves 43.5% AP (65.7% AP50) on MS COCO at ~65 FPS on Tesla V100 by integrating WRC, CSP, CmBN, SAT, Mish activation, Mosaic augmentation, DropBlock, and CIoU loss.

  36. an interpretable vision transformer framework for automated brain tumor classification

    cs.CV 2026-04 unverdicted novelty 4.0

    Vision Transformer with CLAHE preprocessing, two-stage fine-tuning, MixUp/CutMix, EMA, TTA, and attention rollout achieves 99.29% accuracy and 99.25% macro F1 on four-class brain tumor MRI classification from 7023 scans.

  37. A Wasserstein GAN-based climate scenario generator for risk management and insurance: the case of soil subsidence

    cs.LG 2026-04 unverdicted novelty 4.0

    A conditional Wasserstein GAN generates plausible future SWI drought trajectories for French insurance risk management under climate change.

  38. PR3DICTR: A modular AI framework for medical 3D image-based detection and outcome prediction

    cs.CV 2026-04 unverdicted novelty 4.0

    PR3DICTR is a new open-access modular framework for 3D medical image classification and outcome prediction that works with as little as two lines of code.

  39. Image-Based Malware Type Classification on MalNet-Image Tiny: Effects of Multi-Scale Fusion, Transfer Learning, Data Augmentation, and Schedule-Free Optimization

    cs.CR 2026-04 unverdicted novelty 2.0

    Pretraining plus Mixup/TrivialAugment and a feature pyramid network lift macro-F1 from 0.65 to 0.69 on 43-class malware image classification while cutting training epochs from 96 to 10.

Reference graph

Works this paper leans on

45 extracted references · 45 canonical work pages · cited by 38 Pith papers

  1. [1] D. Amodei, S. Ananthanarayanan, R. Anubhai, J. Bai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, Q. Cheng, G. Chen, et al. Deep Speech 2: End-to-end speech recognition in English and Mandarin. ICML, 2016.

  2. [2] D. Arpit, S. Jastrzebski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, T. Maharaj, A. Fischer, A. Courville, Y. Bengio, et al. A closer look at memorization in deep networks. ICML, 2017.

  3. [3] P. Bartlett, D. J. Foster, and M. Telgarsky. Spectrally-normalized margin bounds for neural networks. NIPS, 2017.

  4. [4] O. Chapelle, J. Weston, L. Bottou, and V. Vapnik. Vicinal risk minimization. NIPS, 2000.

  5. [5] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321-357, 2002.

  6. [6] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv, 2013.

  7. [7] M. Cisse, P. Bojanowski, E. Grave, Y. Dauphin, and N. Usunier. Parseval networks: Improving robustness to adversarial examples. ICML, 2017.

  8. [8] W. M. Czarnecki, S. Osindero, M. Jaderberg, G. Świrszcz, and R. Pascanu. Sobolev training for neural networks. NIPS, 2017.

  9. [9] T. DeVries and G. W. Taylor. Dataset augmentation in feature space. ICLR Workshops, 2017.

  10. [10] H. Drucker and Y. Le Cun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6):991-997, 1992.

  11. [11] I. Goodfellow. Tutorial: Generative adversarial networks. NIPS, 2016.

  12. [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. NIPS, 2014.

  13. [13] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015.

  14. [14] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv, 2017.

  15. [15] A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. ICASSP, IEEE, 2013.

  16. [16] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of Wasserstein GANs. NIPS, 2017.

  17. [17] N. Harvey, C. Liaw, and A. Mehrabian. Nearly-tight VC-dimension bounds for piecewise linear neural networks. JMLR, 2017.

  18. [18] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. ECCV, 2016.

  19. [19] M. Hein and M. Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. NIPS, 2017.

  20. [20] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.

  21. [21] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. CVPR, 2017.

  22. [22] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.

  23. [23] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. NIPS, 2012.

  24. [24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 2001.

  25. [25] M. Lichman. UCI machine learning repository, 2013.

  26. [26] K. Liu, 2017. URL https://github.com/kuangliu/pytorch-cifar

  27. [27] G. Pereyra, G. Tucker, J. Chorowski, Ł. Kaiser, and G. Hinton. Regularizing neural networks by penalizing confident output distributions. ICLR Workshops, 2017.

  28. [28] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2015.

  29. [29] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.

  30. [30] P. Simard, Y. LeCun, J. Denker, and B. Victorri. Transformation invariance in pattern recognition: tangent distance and tangent propagation. Neural Networks: Tricks of the Trade, 1998.

  31. [31] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.

  32. [32] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. ICLR Workshops, 2015.

  33. [33] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.

  34. [34] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. ICLR, 2014.

  35. [35] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. CVPR, 2016.

  36. [36] V. N. Vapnik. Statistical Learning Theory. J. Wiley, 1998.

  37. [37] V. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 1971.

  38. [38] A. Veit, 2017. URL https://github.com/andreasveit

  39. [39] P. Warden, 2017. URL https://research.googleblog.com/2017/08/launching-speech-commands-dataset.html

  40. [40] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. CVPR, 2016.

  41. [41] S. Zagoruyko and N. Komodakis. Wide residual networks. BMVC, 2016a.

  42. [42] S. Zagoruyko and N. Komodakis, 2016b. URL https://github.com/szagoruyko/wide-residual-networks

  43. [43] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. ICLR, 2017.

  44. [44] C. Zhang, 2017. URL https://github.com/pluskid/fitting-random-labels

  45. [45] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang. Random erasing data augmentation. arXiv, 2017.