pith. machine review for the scientific record.

arxiv: 1706.07206 · v2 · submitted 2017-06-22 · 💻 cs.CL · cs.AI · cs.NE · stat.ML

Recognition: unknown

Explaining Recurrent Neural Network Predictions in Sentiment Analysis

Authors on Pith · no claims yet
classification 💻 cs.CL · cs.AI · cs.NE · stat.ML
keywords network · neural · recurrent · propagation · relevances · sentiment · technique · work
Original abstract

Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.
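The core technical move the abstract describes is a relevance propagation rule for gated products (e.g., the LSTM cell update f_t * c_{t-1} + i_t * g_t): the gate is treated as a switch and receives zero relevance, while the signal operand inherits all of it. Below is a minimal NumPy sketch of epsilon-LRP through a linear layer together with that gated-product rule; the function names, the stabilizer form, and the toy inputs are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lrp_linear(x, w, b, r_out, eps=1e-3):
    """Epsilon-LRP through a linear layer z = x @ w + b.

    x: (d_in,) input activations; w: (d_in, d_out); b: (d_out,)
    r_out: (d_out,) relevance arriving from the layer above.
    Returns (d_in,) relevance redistributed onto x.
    """
    z = x @ w + b                                   # forward pre-activations
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilized denominator
    contrib = x[:, None] * w                        # z_ij = x_i * w_ij
    return (contrib / denom[None, :]) @ r_out       # R_i = sum_j z_ij/z'_j * R_j

def lrp_gated_product(r_out):
    """Multiplicative-connection rule sketched from the abstract: in a
    gated product, the gate acts as a switch and gets zero relevance;
    the signal operand inherits all of it."""
    return np.zeros_like(r_out), r_out.copy()       # (R_gate, R_signal)

# toy usage: redistribute relevance from a 2-unit output onto 3 inputs
x = np.array([1.0, -2.0, 0.5])
w = np.random.default_rng(0).normal(size=(3, 2))
b = np.zeros(2)
r_in = lrp_linear(x, w, b, r_out=np.array([1.0, 0.0]))
print(r_in, r_in.sum())  # near-conservation up to bias/stabilizer terms
```

Applied repeatedly through the unrolled bi-directional LSTM, these two rules propagate the output relevance back to word-level input scores, which is what the paper evaluates qualitatively and quantitatively.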

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Understanding intermediate layers using linear classifier probes

    stat.ML · 2016-10 · accept · novelty 7.0

    Linear probes demonstrate that feature separability for classification increases monotonically with network depth in Inception v3 and ResNet-50 (a minimal probe sketch follows this list).
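For context on the cited technique, here is a minimal sketch of what a linear probe measures: a simple classifier fit on the frozen activations of one layer, whose test accuracy serves as that layer's separability score. The variable names and data layout are hypothetical, not taken from the cited paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_accuracy(train_feats, train_y, test_feats, test_y):
    """Fit a linear classifier on frozen activations from one layer;
    its held-out accuracy is the layer's linear-separability score."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_feats, train_y)
    return clf.score(test_feats, test_y)

# hypothetical usage: `layer_acts[name]` holds (n_samples, d) activations
# captured with the backbone frozen; deeper layers scoring higher is the
# monotonic-separability pattern the summary above describes.
```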