pith. machine review for the scientific record.

arxiv: 1703.01780 · v6 · submitted 2017-03-06 · 💻 cs.NE · cs.LG · stat.ML

Recognition: unknown

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results

Authors on Pith: no claims yet
classification 💻 cs.NE · cs.LG · stat.ML
keywords labels · mean · ensembling · teacher · temporal · learning · predictions · architecture
read the original abstract

The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.
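
The mechanism the abstract describes is compact enough to sketch. Below is a minimal, illustrative PyTorch sketch: the teacher's weights are an exponential moving average (EMA) of the student's weights, and a consistency cost penalizes disagreement between student and teacher predictions on the same unlabeled input. The MSE-between-softmax consistency cost and the input noise follow the paper's setup, while names and values such as `ema_decay`, `consistency_weight`, `noise_std`, and the per-batch call structure are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def update_teacher(student, teacher, ema_decay=0.999):
    """EMA weight update: teacher <- decay * teacher + (1 - decay) * student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)

def mean_teacher_loss(student, teacher, x_labeled, y, x_unlabeled,
                      consistency_weight=1.0, noise_std=0.1):
    """Supervised loss on labeled data plus a consistency loss on unlabeled data.

    Hyperparameter values here are placeholders, not the paper's settings.
    """
    # Supervised term on the (few) labeled examples.
    sup_loss = F.cross_entropy(student(x_labeled), y)

    # Consistency term: student and teacher see independently perturbed
    # copies of the same unlabeled input; the teacher's output is the target,
    # so no gradient flows through it.
    student_logits = student(x_unlabeled + noise_std * torch.randn_like(x_unlabeled))
    with torch.no_grad():
        teacher_logits = teacher(x_unlabeled + noise_std * torch.randn_like(x_unlabeled))
    cons_loss = F.mse_loss(F.softmax(student_logits, dim=1),
                           F.softmax(teacher_logits, dim=1))

    return sup_loss + consistency_weight * cons_loss
```

Because the teacher weights can be updated after every optimizer step, the consistency targets improve continuously rather than once per epoch, which is the scaling advantage over Temporal Ensembling that the abstract highlights.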

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TILT: Target-induced loss tilting under covariate shift

    cs.LG 2026-05 conditional novelty 7.0

    TILT adds a target-data penalty on an auxiliary predictor component to induce effective importance weighting for unsupervised domain adaptation under covariate shift.

  2. Learned Neighbor Trust for Collaborative Deployment in Model-Agnostic Decentralized Learning

    cs.LG 2026-05 unverdicted novelty 7.0

    LNTrust has nodes learn compact trust functions from validation evidence that both guide training distillation and define deployment ensembles, yielding higher accuracy with less communication than prior output-only b...

  3. Revisiting Feature Prediction for Learning Visual Representations from Video

    cs.CV 2024-02 conditional novelty 6.0

    V-JEPA models trained only on feature prediction from 2 million public videos achieve 81.9% on Kinetics-400, 72.2% on Something-Something-v2, and 77.9% on ImageNet-1K using frozen ViT-H/16 backbones.

  4. Vision Transformers Need Registers

    cs.CV 2023-09 unverdicted novelty 6.0

    Adding register tokens to Vision Transformers eliminates high-norm background artifacts and raises state-of-the-art performance on dense visual prediction tasks.

  5. ZScribbleSeg: A comprehensive segmentation framework with modeling of efficient annotation and maximization of scribble supervision

    cs.CV 2026-05 unverdicted novelty 5.0

    ZScribbleSeg maximizes scribble supervision with efficient annotation forms, spatial regularization, and EM-estimated class ratios to deliver competitive performance on six medical segmentation tasks without full labels.