pith. machine review for the scientific record.

arxiv: 1503.08895 · v5 · submitted 2015-03-31 · 💻 cs.NE · cs.CL

Recognition: unknown

End-To-End Memory Networks

Authors on Pith: no claims yet
classification: 💻 cs.NE · cs.CL
keywords: memory · model · approach · computational · end-to-end · hops · less · multiple
original abstract

We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (Weston et al., 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
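The multi-hop attention described in the abstract — a controller state attending over external memory, with the output of each hop fed back into the next — can be sketched as follows. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation: the function name, the separate input/output memory matrices, and the additive state update are simplifications of the architecture the abstract summarizes.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hops(memory_in, memory_out, query, num_hops=3):
    """Sketch of a multi-hop attention pass over external memory.

    memory_in  : (n, d) input ("addressing") embeddings of the n memory slots
    memory_out : (n, d) output embeddings of the same slots
    query      : (d,)   controller state u

    Each hop: p = softmax(u . m_i), o = sum_i p_i * c_i, then u <- u + o,
    so later hops can attend conditioned on what earlier hops retrieved.
    """
    u = query
    for _ in range(num_hops):
        p = softmax(memory_in @ u)   # attention weights over memory slots
        o = memory_out.T @ p         # weighted sum of output embeddings
        u = u + o                    # update controller state for next hop
    return u
```

With `num_hops=1` this reduces to a single soft lookup; setting it higher performs the repeated computational steps ("hops") that the abstract credits for the improved results. In a full model the final `u` would be projected through a softmax layer to score candidate answers or next words.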

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Storage Is Not Memory: A Retrieval-Centered Architecture for Agent Recall

    cs.CL 2026-05 conditional novelty 6.0

    True Memory is a verbatim-event retrieval pipeline running on a single SQLite file that reaches 93% accuracy on LoCoMo multi-session questions, outperforming Mem0, Supermemory, Zep, and matching or exceeding EverMemOS...