pith. machine review for the scientific record.

arxiv: 1506.02078 · v2 · submitted 2015-06-05 · 💻 cs.LG · cs.CL · cs.NE

Recognition: unknown

Visualizing and Understanding Recurrent Networks

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.CL · cs.NE
keywords: analysis · dependencies · interpretable · long-range · lstm · models · networks · recurrent
Original abstract

Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide an analysis of the remaining errors and suggest areas for further study.
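The abstract's setup is concrete enough to sketch. Below is a minimal, hypothetical illustration in PyTorch (not the authors' code, which inspected LSTM cell states rather than hidden states): a character-level LSTM trained on a toy quoted-text corpus, followed by a crude probe that picks the hidden unit whose activation shifts most at quote characters, analogous in spirit to the paper's per-cell activation heatmaps. The corpus, hyperparameters, and probe heuristic are all assumptions made for illustration.

    # Minimal sketch (not the paper's code): a character-level LSTM language
    # model, plus a probe that prints one unit's activation per character.
    # The paper reports cells tracking quotes, brackets, and line length;
    # this only demonstrates the inspection machinery on a toy corpus.
    import torch
    import torch.nn as nn

    text = 'he said "hello there" and left. she said "bye" and smiled.\n' * 50
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    ids = torch.tensor([stoi[c] for c in text])

    class CharLSTM(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, 16)
            self.lstm = nn.LSTM(16, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.head(h), h, state

    model = CharLSTM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Next-character prediction on random windows of the toy corpus.
    for step in range(200):
        i = torch.randint(0, len(ids) - 65, (1,)).item()
        x, y = ids[i:i + 64][None], ids[i + 1:i + 65][None]
        logits, _, _ = model(x)
        loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    # Probe: run one line and pick the hidden unit whose mean activation
    # at quote characters differs most from its overall mean -- a crude
    # stand-in for the paper's visual search over cell activations.
    probe = 'he said "hello there" and left.'
    x = torch.tensor([[stoi[c] for c in probe]])
    with torch.no_grad():
        _, h, _ = model(x)           # h: (1, len(probe), hidden)
    acts = h[0]                      # per-character hidden states
    quote_pos = [i for i, c in enumerate(probe) if c == '"']
    cell = (acts[quote_pos].mean(0) - acts.mean(0)).abs().argmax().item()
    for c, a in zip(probe, acts[:, cell]):
        print(f"{c!r} {a.item():+.2f}")

On such a tiny corpus this probe may or may not isolate a clean quote-tracking unit; the point is the workflow of reading a single unit's activation over a character stream, which is how the paper's interpretable cells were found.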

This paper has not been read by Pith yet.

discussion (0)

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Toy Models of Superposition

    cs.LG 2022-09 accept novelty 8.0

Toy models demonstrate that polysemanticity arises when neural networks store more sparse features than neurons via superposition, producing a phase transition tied to polytope geometry and increased adversarial vulnerability.

  2. Letting the neural code speak: Automated characterization of monkey visual neurons through human language

    q-bio.NC 2026-05 unverdicted novelty 6.0

    Natural-language descriptions generated and verified through generative models and digital twins capture the selectivity of most neurons in macaque V1 and V4.

  3. Federated Cross-Modal Retrieval with Missing Modalities via Semantic Routing and Adapter Personalization

    cs.CV 2026-04 unverdicted novelty 6.0

    RCSR is a personalization-friendly federated framework that improves cross-modal retrieval accuracy and stability under missing modalities via semantic routing and adapters.