
Translational Grounding: Using Paraphrase Recognition and Generation to Demonstrate Semantic Abstraction Abilities of Multilingual NMT

2018-08-21
Jörg Tiedemann, Yves Scherrer

Abstract

In this paper, we investigate whether multilingual neural translation models learn a stronger semantic abstraction of sentences than bilingual ones. We test this hypothesis by measuring the perplexity of such models when applied to paraphrases in the source language. The intuition is that an encoder produces better representations if the decoder is capable of recognizing synonymous sentences in the same language, even though the model is never trained for that task. In our setup, we add 16 different auxiliary languages to a bidirectional bilingual baseline model (English-French) and test it with in-domain and out-of-domain paraphrases in English. The results show that perplexity is significantly reduced in each of these cases, indicating that meaning can be grounded in translation. This is further supported by a study on paraphrase generation that we include at the end of the paper.
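The evaluation described above boils down to scoring a source-language paraphrase with the decoder and computing its perplexity: the lower the perplexity, the better the model "recognizes" the synonymous sentence. Below is a minimal Python sketch of that perplexity computation; the `sentence_perplexity` helper and the log-probability values are illustrative assumptions, not code or numbers from the paper.

```python
import math

def sentence_perplexity(token_logprobs):
    """Perplexity of a sentence from per-token log-probabilities:
    PPL = exp(-(1/N) * sum_i log p(w_i | w_<i, source))."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical log-probabilities a trained NMT decoder might assign to the
# tokens of an English paraphrase when decoding from the English source's
# encoder representation (placeholder numbers for illustration only).
bilingual_logprobs = [-2.3, -1.9, -2.8, -2.1, -2.5]
multilingual_logprobs = [-1.4, -1.1, -1.9, -1.3, -1.6]

print(sentence_perplexity(bilingual_logprobs))     # higher perplexity
print(sentence_perplexity(multilingual_logprobs))  # lower = stronger abstraction
```

Under this setup, a drop in perplexity when auxiliary languages are added would be read as evidence that the shared encoder produces more meaning-oriented representations.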

URL

https://arxiv.org/abs/1808.06826

PDF

https://arxiv.org/pdf/1808.06826

