pith. machine review for the scientific record.

arxiv: 1808.04444 · v2 · submitted 2018-08-09 · 💻 cs.CL · cs.AI · cs.LG · stat.ML

Recognition: unknown

Character-Level Language Modeling with Deeper Self-Attention

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.AI · cs.LG · stat.ML
keywords character-level · intermediate · language · modeling · variants · ability · achieving · assume
Original abstract

LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
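To make the abstract's training setup concrete, below is a minimal PyTorch-style sketch of auxiliary losses at intermediate layers and at every sequence position for a fixed-context, causally masked transformer language model. This is not the authors' implementation: the layer count, vocabulary size, and the constant 0.5 weight on non-final layers are illustrative assumptions (the paper anneals intermediate-layer losses over training and also uses additional multiple-target losses).

```python
# Sketch only: hyperparameters are illustrative, not the paper's 64-layer configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepCharTransformerLM(nn.Module):
    def __init__(self, vocab_size=256, d_model=512, n_layers=8, n_heads=8, context=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(context, d_model))  # learned positional embeddings
        # causal self-attention layers; TransformerEncoderLayer + causal mask stands in
        # for the paper's decoder-only transformer
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads,
                                       dim_feedforward=4 * d_model, batch_first=True)
            for _ in range(n_layers)
        )
        self.to_logits = nn.Linear(d_model, vocab_size)  # classifier shared across layers

    def forward(self, x, targets):
        T = x.size(1)
        # boolean causal mask: True = position may NOT be attended to
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        h = self.embed(x) + self.pos[:T]
        total_loss = 0.0
        n = len(self.layers)
        for i, layer in enumerate(self.layers):
            h = layer(h, src_mask=mask)
            # auxiliary loss at intermediate sequence positions: predict the next
            # character at EVERY position, not only at the end of the context
            logits = self.to_logits(h)
            loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
            # intermediate-layer losses get a reduced weight; a constant 0.5 here is a
            # simplification of the paper's schedule, which drops them during training
            weight = 1.0 if i == n - 1 else 0.5
            total_loss = total_loss + weight * loss
        return total_loss

# toy usage
model = DeepCharTransformerLM()
x = torch.randint(0, 256, (2, 128))    # batch of character/byte ids
y = torch.roll(x, shifts=-1, dims=1)   # next-character targets (toy example)
loss = model(x, y)
loss.backward()
```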

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Reformer: The Efficient Transformer

    cs.LG 2020-01 accept novelty 8.0

    Reformer matches standard Transformer accuracy on long sequences while using far less memory and running faster via LSH attention and reversible residual layers.

  2. Generating Long Sequences with Sparse Transformers

    cs.LG 2019-04 unverdicted novelty 7.0

    Sparse Transformers factorize attention to handle sequences tens of thousands of steps long, achieving new state-of-the-art density modeling results on Enwik8, CIFAR-10, and ImageNet-64.