pith. machine review for the scientific record.

arxiv: 1810.10182 · v1 · submitted 2018-10-24 · 💻 cs.CL · cs.AI

Recognition: unknown

Modeling Localness for Self-Attention Networks

Authors on Pith: no claims yet
classification: 💻 cs.CL, cs.AI
keywords: capturing, localness, networks, self-attention, dependencies, modeling, ability, attention
original abstract

Self-attention networks have proven to be of profound value for their strength in capturing global dependencies. In this work, we propose to model localness for self-attention networks, which enhances their ability to capture useful local context. We cast localness modeling as a learnable Gaussian bias, which indicates the center and scope of the local region that should receive more attention. The bias is then incorporated into the original attention distribution to form a revised distribution. To maintain the strength of capturing long-distance dependencies while enhancing the ability to capture short-range dependencies, we apply localness modeling only to the lower layers of the self-attention networks. Quantitative and qualitative analyses on Chinese-English and English-German translation tasks demonstrate the effectiveness and universality of the proposed approach.
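The abstract describes the mechanism concretely enough to sketch: a Gaussian term, whose center and scope are predicted from each query, is added to the attention logits before the softmax. Below is a minimal single-head sketch in PyTorch under stated assumptions: the projection names (`center_proj`, `window_proj`), the choice of standard deviation as half the predicted window, and the single-head simplification are illustrative, not the paper's exact formulation, and the paper applies the bias only to lower layers of a multi-head Transformer.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalnessSelfAttention(nn.Module):
    """Single-head self-attention with a learnable Gaussian localness bias.

    The center and scope of the local region are predicted from each query,
    turned into a Gaussian penalty over positions, and added to the
    attention logits before the softmax.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.d_model = d_model
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Hypothetical projections predicting the center and window size
        # of the local region from each query vector.
        self.center_proj = nn.Linear(d_model, 1)
        self.window_proj = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, n, _ = x.shape                       # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Standard scaled dot-product logits: (batch, n, n).
        logits = q @ k.transpose(-2, -1) / math.sqrt(self.d_model)

        # Predicted center p_i in (0, n) and window d_i in (0, n) per query.
        p = n * torch.sigmoid(self.center_proj(q))  # (batch, n, 1)
        d = n * torch.sigmoid(self.window_proj(q))  # (batch, n, 1)
        sigma = d / 2.0                             # assumed std-dev choice

        # Gaussian bias G[i, j] = -(j - p_i)^2 / (2 * sigma_i^2): it is
        # always <= 0, so it can only down-weight positions far from the
        # predicted center, never add probability mass of its own.
        positions = torch.arange(n, device=x.device, dtype=x.dtype)
        gauss = -((positions.view(1, 1, n) - p) ** 2) / (2 * sigma ** 2 + 1e-9)

        # The "revised distribution": softmax over the biased logits.
        attn = F.softmax(logits + gauss, dim=-1)
        return attn @ v
```

As a quick smoke test, `LocalnessSelfAttention(64)(torch.randn(2, 10, 64))` returns a `(2, 10, 64)` tensor; because the bias is added inside the softmax, the output rows still form proper probability distributions over positions.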

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.