pith. machine review for the scientific record.

arxiv: 1702.05373 · v2 · submitted 2017-02-17 · 💻 cs.CV

Recognition: unknown

EMNIST: an extension of MNIST to handwritten letters

André van Schaik, Gregory Cohen, Jonathan Tapson, Saeed Afshar

classification 💻 cs.CV
keywords: mnist, dataset, digits, classification, database, letters, nist, benchmark
abstract

The MNIST dataset has become a standard benchmark for learning, classification and computer vision systems. Contributing to its widespread adoption are the understandable and intuitive nature of the task, its relatively small size and storage requirements, and the accessibility and ease of use of the database itself. The MNIST database was derived from a larger dataset known as NIST Special Database 19, which contains digits as well as uppercase and lowercase handwritten letters. This paper introduces a variant of the full NIST dataset, called Extended MNIST (EMNIST), which follows the same conversion paradigm used to create the MNIST dataset. The result is a set of datasets that constitute more challenging classification tasks involving letters and digits, and that share the same image structure and parameters as the original MNIST task, allowing direct compatibility with all existing classifiers and systems. Benchmark results are presented along with a validation of the conversion process through a comparison of classification results on the converted NIST digits and the MNIST digits.
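Because EMNIST deliberately keeps the same image structure as MNIST, its files use the same IDX binary layout, so any existing MNIST loader works unchanged. The sketch below is a minimal IDX parser to illustrate that compatibility; it is not code from the paper, and the helper name `read_idx` is my own.

```python
import struct

import numpy as np


def read_idx(data: bytes) -> np.ndarray:
    """Parse a buffer in the IDX format shared by MNIST and EMNIST files.

    The big-endian header is: two zero bytes, a type code (0x08 = uint8),
    the number of dimensions, then one 32-bit size per dimension. The raw
    element data follows immediately after the header.
    """
    zero, type_code, ndim = struct.unpack_from(">HBB", data, 0)
    if zero != 0 or type_code != 0x08:
        raise ValueError("expected uint8 IDX data")
    dims = struct.unpack_from(">" + "I" * ndim, data, 4)
    offset = 4 + 4 * ndim  # header bytes before the element data
    return np.frombuffer(data, dtype=np.uint8, offset=offset).reshape(dims)
```

Applied to an EMNIST image file, this yields an array of shape `(n_images, 28, 28)`, exactly as for MNIST, which is why classifiers built against MNIST can be pointed at EMNIST with no loader changes.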

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Rescaled Asynchronous SGD: Optimal Distributed Optimization under Data and System Heterogeneity

    cs.LG 2026-05 unverdicted novelty 6.0

    Rescaled ASGD recovers convergence to the true global objective by rescaling worker stepsizes proportional to computation times, matching the known time lower bound in the leading term under non-convex smoothness and ...

  2. Mistake gating leads to energy and memory efficient continual learning

    cs.AI 2026-04 unverdicted novelty 6.0

    Mistake-gated plasticity reduces neural network updates by 50-80% by gating changes on classification errors, improving efficiency for continual learning without added hyperparameters.

  3. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

    cs.LG 2017-08 accept novelty 6.0

    Fashion-MNIST is a new benchmark dataset of 70,000 fashion product images that serves as a direct drop-in replacement for the original MNIST dataset while being more challenging.

  4. Heterogeneous Connectivity in Sparse Networks: Fan-in Profiles, Gradient Hierarchy, and Topological Equilibria

    cs.LG 2026-04 unverdicted novelty 5.0

    Arbitrary heterogeneous fan-in profiles in sparse networks match uniform random accuracy at high sparsity, but initializing RigL dynamic sparse training with equilibrium-matched lognormal profiles improves performance...