pith. machine review for the scientific record.

arxiv: 1611.02344 · v3 · submitted 2016-11-07 · 💻 cs.CL

Recognition: unknown

A Convolutional Encoder Model for Neural Machine Translation

David Grangier, Jonas Gehring, Michael Auli, Yann N. Dauphin

Authors on Pith: no claims yet
classification: 💻 cs.CL
keywords: translation · accuracy · convolutional · bi-directional · encode · encoder · lstm · machine
0 comments
Original abstract

The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. In this paper we present a faster and simpler architecture based on a succession of convolutional layers. This allows the entire source sentence to be encoded simultaneously, whereas computation in recurrent networks is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve accuracy competitive with the state of the art, and we outperform several recently published results on the WMT'15 English-German task. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. Our convolutional encoder speeds up CPU decoding by more than a factor of two at the same or higher accuracy than a strong bi-directional LSTM baseline.
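The abstract's core idea is replacing the bi-directional LSTM encoder with stacked convolutions so every source position is processed in parallel. A minimal sketch of that idea follows, assuming PyTorch; the embedding size, kernel width, layer count, and tanh nonlinearity are illustrative assumptions, not the paper's exact settings:

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Sketch of a convolutional sentence encoder: word embeddings plus
    position embeddings, then a succession of same-width convolutions
    with residual connections. All positions are computed in parallel."""

    def __init__(self, vocab_size, embed_dim=256, kernel_width=3,
                 num_layers=6, max_len=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.pos_embed = nn.Embedding(max_len, embed_dim)  # word-order signal
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, embed_dim, kernel_width,
                      padding=kernel_width // 2)  # keeps sequence length
            for _ in range(num_layers)
        )

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos_embed(positions)
        x = x.transpose(1, 2)                     # Conv1d wants (batch, dim, seq)
        for conv in self.convs:
            x = x + torch.tanh(conv(x))           # residual around each layer
        return x.transpose(1, 2)                  # (batch, seq_len, embed_dim)
```

Unlike a recurrent encoder, nothing in the forward pass depends on the previous time step, which is the source of the CPU decoding speedup the abstract claims.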

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Graph Attention Networks

    stat.ML · 2017-10 · accept · novelty 7.0

    Graph Attention Networks compute learnable attention coefficients over node neighborhoods to produce weighted feature aggregations, achieving state-of-the-art results on citation networks and inductive protein-protein...
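For context, the attention mechanism that snippet summarizes fits in a few lines. This is a single-head, dense-adjacency simplification written in PyTorch as an assumption; real GAT implementations use sparse edge lists and multiple heads:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head graph attention: score each edge with a learnable vector,
    softmax over each node's neighborhood, then aggregate neighbor features."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention scorer

    def forward(self, h, adj):     # h: (N, in_dim); adj: (N, N), self-loops included
        z = self.W(h)              # (N, out_dim)
        n = z.size(0)
        # Concatenate transformed features for every (i, j) pair.
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), negative_slope=0.2)
        e = e.masked_fill(adj == 0, float('-inf'))  # keep only real edges
        alpha = torch.softmax(e, dim=-1)            # attention coefficients
        return alpha @ z                            # weighted aggregation
```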