
Sparse and Constrained Attention for Neural Machine Translation

2018-05-21
Chaitanya Malaviya, Pedro Ferreira, André F. T. Martins

Abstract

In NMT, words are sometimes dropped from the source or generated repeatedly in the translation. We explore novel strategies to address the coverage problem that change only the attention transformation. Our approach allocates fertilities to source words, used to bound the attention each word can receive. We experiment with various sparse and constrained attention transformations and propose a new one, constrained sparsemax, shown to be differentiable and sparse. Empirical evaluation is provided on three language pairs.
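The unconstrained building block behind the paper's proposal is sparsemax (Martins & Astudillo, 2016), a projection of attention scores onto the probability simplex that can assign exactly zero weight to some source words. As a rough illustration, here is a minimal NumPy sketch of plain sparsemax; the paper's constrained sparsemax additionally enforces per-word upper bounds derived from fertilities, which this sketch does not implement, and the function name is ours.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of scores z onto the probability simplex.

    Unlike softmax, the output can contain exact zeros, so attention
    becomes sparse: irrelevant source words receive zero weight.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]          # scores in descending order
    cumsum = np.cumsum(z_sorted)         # running sums of the top-k scores
    k = np.arange(1, len(z) + 1)
    # Support size: largest k with 1 + k * z_sorted[k-1] > sum of top-k scores.
    k_max = k[1 + k * z_sorted > cumsum][-1]
    tau = (cumsum[k_max - 1] - 1) / k_max  # threshold subtracted from all scores
    return np.maximum(z - tau, 0.0)

# Example: one dominant score yields a fully sparse distribution.
print(sparsemax([2.0, 1.0, -1.0]))  # -> [1. 0. 0.]
print(sparsemax([0.5, 0.4, 0.1]))   # -> [0.5 0.4 0.1] (already on the simplex)
```

The constrained variant replaces the simplex with the set {p : 0 <= p <= u, sum(p) = 1}, where u holds the per-word fertility bounds, which is what prevents any single source word from absorbing attention repeatedly.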

URL

https://arxiv.org/abs/1805.08241

PDF

https://arxiv.org/pdf/1805.08241

