pith. machine review for the scientific record.

arxiv: 1508.06615 · v4 · submitted 2015-08-26 · 💻 cs.CL · cs.NE · stat.ML

Recognition: unknown

Character-Aware Neural Language Models

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.NE · stat.ML
keywords model · language · neural · network · character · characters · fewer · inputs
original abstract

We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.
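
As a concrete reading aid, here is a minimal sketch in PyTorch of the pipeline the abstract describes: a character-level CNN with max-over-time pooling, a single highway layer, and an LSTM that makes word-level predictions. Every hyperparameter below (character and word vocabulary sizes, filter widths and counts, hidden size) is an illustrative placeholder, not the paper's reported setting.

```python
# Minimal sketch of the character-aware LM pipeline, assuming PyTorch.
# All hyperparameters are placeholders, not the paper's settings.
import torch
import torch.nn as nn

class CharAwareLM(nn.Module):
    def __init__(self, n_chars=60, n_words=10000, char_dim=15,
                 filters=((1, 25), (2, 50), (3, 75), (4, 100)),
                 hidden=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # One 1-D convolution per filter width; max-over-time pooling follows.
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, n_out, kernel_size=w) for w, n_out in filters)
        cnn_dim = sum(n_out for _, n_out in filters)
        # Single highway layer: y = t * g(W_H x + b_H) + (1 - t) * x
        self.transform = nn.Linear(cnn_dim, cnn_dim)
        self.gate = nn.Linear(cnn_dim, cnn_dim)
        self.lstm = nn.LSTM(cnn_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_words)  # predictions stay word-level

    def forward(self, char_ids):
        # char_ids: (batch, seq_len, word_len) character indices for each word
        b, t, l = char_ids.shape
        x = self.char_emb(char_ids.reshape(b * t, l))   # (b*t, l, char_dim)
        x = x.transpose(1, 2)                           # Conv1d expects (N, C, L)
        pooled = [conv(x).max(dim=2).values for conv in self.convs]
        y = torch.cat(pooled, dim=1)                    # (b*t, cnn_dim)
        t_gate = torch.sigmoid(self.gate(y))
        y = t_gate * torch.relu(self.transform(y)) + (1 - t_gate) * y
        h, _ = self.lstm(y.reshape(b, t, -1))
        return self.decoder(h)                          # (b, seq_len, n_words)

logits = CharAwareLM()(torch.randint(0, 60, (4, 35, 19)))  # smoke test
```

The smoke test feeds random character ids shaped (batch, sentence length, word length) and gets back word-level logits, which is exactly the interface the abstract describes: character-only inputs, word-level outputs.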

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read papers and Pith them without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Neural Machine Translation of Rare Words with Subword Units

    cs.CL · 2015-08 · accept · novelty 8.0

    Subword segmentation via byte pair encoding enables open-vocabulary neural machine translation and improves BLEU scores by 1.1 on English-German and 1.3 on English-Russian WMT 2015 tasks over dictionary back-off baselines (a BPE merge-learning sketch follows this list).

  2. Pointer Sentinel Mixture Models

    cs.CL · 2016-09 · conditional · novelty 7.0

    Pointer sentinel-LSTM mixes context copying with softmax prediction to reach 70.9 perplexity on Penn Treebank using fewer parameters than standard LSTMs (a sketch of the mixture step follows this list).
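
The first citation's method is compact enough to sketch. Below is a minimal reimplementation of the BPE merge-learning loop in the spirit of Sennrich et al.'s published algorithm; the toy word-frequency table and merge budget are illustrative, and a real tokenizer would also apply the learned merges to new text at encoding time.

```python
# Minimal sketch of BPE merge learning, in the spirit of Sennrich et al.
# The toy corpus and merge budget are illustrative placeholders.
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    # Start from single characters, with an end-of-word marker so merges
    # never cross word boundaries.
    vocab = {tuple(word) + ("</w>",): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break  # every word is already a single symbol
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                # Replace each occurrence of the best pair with its merge.
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

print(learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 10))
```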
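
The second citation's core mechanism is a gated mixture of two distributions, p(w) = g * p_vocab(w) + (1 - g) * p_ptr(w). The sketch below shows only that mixture step and assumes the full model has already produced the vocabulary softmax, the attention over the recent context window, and the sentinel gate g; every tensor name here is an illustrative placeholder.

```python
# Minimal sketch of the pointer sentinel mixture step, assuming PyTorch.
# Inputs stand in for quantities the full model would compute.
import torch

def pointer_sentinel_mix(p_vocab, attn, context_ids, g):
    # p_vocab:     (batch, vocab) softmax over the word vocabulary
    # attn:        (batch, window) attention over the recent context window
    # context_ids: (batch, window) vocabulary ids of the context tokens
    # g:           (batch, 1) sentinel gate = mass assigned to the softmax
    # Scatter attention mass onto the vocabulary slots of the context words.
    p_ptr = torch.zeros_like(p_vocab).scatter_add_(1, context_ids, attn)
    return g * p_vocab + (1 - g) * p_ptr

b, vocab, window = 2, 10000, 100
mixed = pointer_sentinel_mix(
    torch.softmax(torch.randn(b, vocab), dim=1),
    torch.softmax(torch.randn(b, window), dim=1),
    torch.randint(0, vocab, (b, window)),
    torch.sigmoid(torch.randn(b, 1)))
assert torch.allclose(mixed.sum(dim=1), torch.ones(b))
```

The assertion checks that the result is still a proper distribution: the gate splits probability mass between the softmax and the pointer, so each row sums to one.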