papers AI Learner

Auto-Encoding Variational Neural Machine Translation

2019-05-29
Bryan Eikema, Wilker Aziz

Abstract

We present a deep generative model of bilingual sentence pairs for machine translation. The model generates source and target sentences jointly from a shared latent representation and is parameterised by neural networks. We perform efficient training using amortised variational inference and reparameterised gradients. Additionally, we discuss the statistical implications of joint modelling and propose an efficient approximation to maximum a posteriori decoding for fast test-time predictions. We demonstrate the effectiveness of our model in three machine translation scenarios: in-domain training, mixed-domain training, and learning from a mix of gold-standard and synthetic data. Our experiments show consistently that our joint formulation outperforms conditional modelling (i.e. standard neural machine translation) in all such scenarios.
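The abstract relies on two standard pieces of VAE machinery: amortised variational inference with a diagonal-Gaussian posterior, and the reparameterisation trick that makes sampling differentiable. As a minimal sketch (the function names and the NumPy setting are illustrative, not the paper's actual neural parameterisation):

```python
import numpy as np

def reparameterise(mu, log_var, rng):
    """Reparameterised sample z = mu + sigma * eps with eps ~ N(0, I).

    Because the randomness is isolated in eps, gradients can flow
    through mu and log_var -- the key to training with
    'reparameterised gradients' as the abstract describes.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_diag_gaussian(mu, log_var):
    """Closed-form KL(q(z) || N(0, I)) for a diagonal-Gaussian posterior.

    This is the regularisation term of the ELBO; the reconstruction
    term would come from the joint source/target decoder.
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Illustrative use: a 16-dimensional latent with posterior parameters
# (in the paper these would be produced by an inference network).
rng = np.random.default_rng(0)
mu = np.zeros(16)
log_var = np.zeros(16)
z = reparameterise(mu, log_var, rng)   # one latent sample, shape (16,)
kl = kl_diag_gaussian(mu, log_var)     # 0.0 when q equals the prior
```

In the paper's joint formulation, a single such latent z would feed both the source and target decoders, rather than conditioning the target on the source alone.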

URL

http://arxiv.org/abs/1807.10564

PDF

http://arxiv.org/pdf/1807.10564
