
Modeling Latent Sentence Structure in Neural Machine Translation

2019-01-18
Joost Bastings, Wilker Aziz, Ivan Titov, Khalil Sima'an

Abstract

Recently, it was shown that linguistic structure predicted by a supervised parser can benefit neural machine translation (NMT). In this work we investigate a more challenging setup: we incorporate sentence structure as a latent variable in a standard NMT encoder-decoder and induce it in such a way as to benefit the translation task. We consider German-English and Japanese-English translation benchmarks and observe that with RNN encoders the model makes little or no use of the structure-induction apparatus. In contrast, CNN and word-embedding-based encoders rely on the latent graphs and force them to encode useful, potentially long-distance, dependencies.
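
To make the latent-graph idea concrete, the sketch below scores every (dependent, head) pair of encoder states, normalises the scores over candidate heads into a soft adjacency matrix, and then mixes each word with its soft neighbourhood in a GCN-like layer. This is a minimal NumPy sketch under simplifying assumptions: the plain head-selection softmax, the function names, and the parameters W_q, W_k, W_g are illustrative, not the paper's exact structured-attention formulation over dependency trees.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def induce_latent_graph(H, W_q, W_k):
    # Score every (dependent, head) pair of encoder states and normalise
    # over candidate heads, giving a row-stochastic soft adjacency matrix:
    # A[i, j] = probability that word j is the head of word i.
    scores = (H @ W_q) @ (H @ W_k).T   # (n, n) pairwise scores
    np.fill_diagonal(scores, -1e9)     # a word cannot head itself
    return softmax(scores, axis=-1)

def graph_aggregate(H, A, W_g):
    # One GCN-like layer: mix each word with its soft neighbourhood,
    # with a residual connection back to the original state.
    return np.tanh(A @ H @ W_g + H)

rng = np.random.default_rng(0)
n, d = 6, 8                            # toy sentence length and state size
H = rng.normal(size=(n, d))            # stand-in encoder states
W_q, W_k, W_g = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
A = induce_latent_graph(H, W_q, W_k)
H_out = graph_aggregate(H, A, W_g)
print(A.sum(axis=-1))                  # each row sums to 1
print(H_out.shape)                     # (6, 8): structure-enriched states

Because the adjacency matrix is produced by differentiable operations, the translation loss can shape it end-to-end, which is what lets the induced graphs pick up the useful long-distance dependencies the abstract describes.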

URL

http://arxiv.org/abs/1901.06436

PDF

http://arxiv.org/pdf/1901.06436
