Pith · machine review for the scientific record

arxiv: 1406.2080 · v4 · submitted 2014-06-09 · 💻 cs.CV · cs.LG · cs.NE

Recognition: unknown

Training Convolutional Networks with Noisy Labels

Authors on Pith: no claims yet
classification: 💻 cs.CV · cs.LG · cs.NE
keywords: noisy · data · network · training · convolutional · datasets · label · labels
original abstract

The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark.
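The extra noise layer the abstract describes can be thought of as a row-stochastic confusion matrix applied on top of the network's softmax output, mapping clean-class probabilities to the noisy-label distribution. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the function names and the specific 80%/10% noise matrix are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def noise_layer(p, Q):
    # Q is a row-stochastic confusion matrix with
    # Q[i, j] = P(noisy label j | true class i).
    # Mapping clean predictions p through Q yields the
    # distribution over the observed (noisy) labels.
    return p @ Q

rng = np.random.default_rng(0)
num_classes = 3

# Clean network outputs (softmax over some logits).
logits = rng.normal(size=(4, num_classes))
p = softmax(logits)

# Illustrative noise: 80% chance the label is correct,
# 10% chance of flipping to each other class.
Q = np.full((num_classes, num_classes), 0.1)
np.fill_diagonal(Q, 0.8)

q = noise_layer(p, Q)

# Each row of q is still a valid probability distribution.
assert np.allclose(q.sum(axis=1), 1.0)
```

In training, the cross-entropy loss would be taken against `q` rather than `p`, and `Q` itself can be parameterized and learned jointly with the network weights, which is the "simple modification to current training infrastructures" the abstract refers to.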

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Silhouette Loss: Differentiable Global Structure Learning for Deep Representations

    cs.LG 2026-03 unverdicted novelty 8.0

    Soft Silhouette Loss offers a batch-global differentiable metric to promote intra-class compactness and inter-class separation in learned representations, boosting performance when hybridized with cross-entropy and co...

  2. Can LLMs Learn to Reason Robustly under Noisy Supervision?

    cs.LG 2026-04 conditional novelty 6.0

    Online Label Refinement lets LLMs learn robust reasoning from noisy supervision by correcting labels when majority answers show rising rollout success and stable history, delivering 3-4% gains on math and reasoning be...