pith. machine review for the scientific record.

arxiv: 1803.03382 · v6 · submitted 2018-03-09 · 💻 cs.LG

Recognition: unknown

Fast Decoding in Sequence Models using Discrete Latent Variables

Authors on Pith: no claims yet
classification: 💻 cs.LG
keywords: sequence models · decoding · latent · discrete variables · autoregressive · during
0 comments
Original abstract

Autoregressive sequence models based on deep neural networks, such as RNNs, WaveNet and the Transformer, attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. Inspired by [arxiv:1711.00937], we present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first auto-encode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. CTRL: A Conditional Transformer Language Model for Controllable Generation

    cs.CL · 2019-09 · unverdicted · novelty 6.0

    CTRL is a large conditional transformer language model that uses naturally occurring control codes to steer text generation style and content.

  2. Optical Context Compression Is Just (Bad) Autoencoding

    cs.CV · 2025-12 · accept · novelty 5.0

    Vision-based optical context compression performs no better than direct autoencoding baselines like mean pooling or hierarchical encoders across compression ratios.