
Towards Interlingua Neural Machine Translation

2019-05-15
Carlos Escolano, Marta R. Costa-jussà, José A. R. Fonollosa

Abstract

A common intermediate language representation, or interlingua, is the holy grail of machine translation. Thanks to the neural machine translation approach, there now seem to be good prospects of reaching this goal. In this paper, we propose a new architecture based on introducing an interlingua loss as an additional training objective. By adding and enforcing this interlingua loss, we are able to train multiple encoders and decoders, one per language, that share a common intermediate representation. Preliminary translation results on the WMT Turkish/English and WMT 2019 Kazakh/English tasks show improvements over the baseline system. Additionally, since the final objective of our architecture is to obtain compatible encoders and decoders based on a common representation, we visualize and evaluate the learned intermediate representations. Most relevantly, our architecture delivers the benefits of the long-sought interlingua, since it is capable of: (1) reducing the number of production systems, with respect to the number of languages, from quadratic to linear; (2) incrementally adding a new language to the system without retraining the languages already present; and (3) allowing translations from the new language to all other languages in the system.
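To make the training objective concrete, the following is a minimal PyTorch-style sketch of the idea of an interlingua loss: the usual translation cross-entropy is augmented with a distance term that pulls the encoder outputs of the two languages toward a shared representation. The module sizes, the mean-pooling, and the weight lambda_il are illustrative assumptions, not the paper's exact architecture or distance metric; the sketch only shows how a representation-distance term can be added to the translation loss.

import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return hidden  # (batch, seq, dim) intermediate representation

class SimpleDecoder(nn.Module):
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens, context):
        # Condition the decoder on the pooled source representation.
        h0 = context.mean(dim=1, keepdim=True).transpose(0, 1).contiguous()
        hidden, _ = self.rnn(self.embed(tokens), h0)
        return self.out(hidden)

def interlingua_loss(src_repr, tgt_repr):
    # Distance between pooled sentence representations of the two
    # sides of a parallel pair; forcing it toward zero encourages the
    # encoders of different languages to share one space.
    return (src_repr.mean(dim=1) - tgt_repr.mean(dim=1)).pow(2).mean()

# Toy usage with a parallel (src, tgt) token batch and one
# encoder/decoder pair per language (sizes are placeholders).
enc_tr, enc_en = SimpleEncoder(1000), SimpleEncoder(1000)
dec_tr, dec_en = SimpleDecoder(1000), SimpleDecoder(1000)
ce = nn.CrossEntropyLoss()
lambda_il = 1.0  # weight of the interlingua term (assumed)

src = torch.randint(0, 1000, (8, 12))   # Turkish batch
tgt = torch.randint(0, 1000, (8, 12))   # English batch
h_src, h_tgt = enc_tr(src), enc_en(tgt)

logits = dec_en(tgt[:, :-1], h_src)     # translate Turkish -> English
loss = ce(logits.reshape(-1, 1000), tgt[:, 1:].reshape(-1))
loss = loss + lambda_il * interlingua_loss(h_src, h_tgt)
loss.backward()

Because the decoder only consumes the shared representation, a new language can in principle be added by training just its encoder and decoder against this loss, which is what enables the linear growth and incremental-language properties claimed in the abstract.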

URL

http://arxiv.org/abs/1905.06831

PDF

http://arxiv.org/pdf/1905.06831

