Regularization techniques for fine-tuning in neural machine translation

2017-07-31
Antonio Valerio Miceli Barone, Barry Haddow, Ulrich Germann, Rico Sennrich

Abstract

We investigate techniques for supervised domain adaptation for neural machine translation where an existing model trained on a large out-of-domain dataset is adapted to a small in-domain dataset. In this scenario, overfitting is a major challenge. We investigate a number of techniques to reduce overfitting and improve transfer learning, including regularization techniques such as dropout and L2-regularization towards an out-of-domain prior. In addition, we introduce tuneout, a novel regularization technique inspired by dropout. We apply these techniques, alone and in combination, to neural machine translation, obtaining improvements on IWSLT datasets for English→German and English→Russian. We also investigate the amounts of in-domain training data needed for domain adaptation in NMT, and find a logarithmic relationship between the amount of training data and gain in BLEU score.
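The two regularizers named in the abstract can be stated compactly: L2-regularization towards an out-of-domain prior penalizes the distance between the fine-tuned parameters and the out-of-domain ones rather than their distance from zero, and tuneout writes each fine-tuned weight as the out-of-domain weight plus a learned offset, with dropout applied to the offset so that dropped entries revert to their out-of-domain values. The PyTorch sketch below is a minimal illustration under those assumptions, not the authors' code; the names `l2_to_prior_penalty` and `TuneoutLinear`, the coefficient values, and the use of standard inverted dropout on the offset are all illustrative choices.

```python
import torch
import torch.nn.functional as F


def l2_to_prior_penalty(model, prior_params, strength):
    """L2 penalty pulling fine-tuned parameters towards the
    out-of-domain (prior) parameters rather than towards zero.
    Add the returned scalar to the in-domain training loss."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + ((p - prior_params[name]) ** 2).sum()
    return strength * penalty


class TuneoutLinear(torch.nn.Module):
    """Linear layer parameterized as out-of-domain weights plus a
    learned offset, with dropout on the offset: dropped offset
    entries fall back to the out-of-domain values instead of zero."""

    def __init__(self, out_weight, out_bias, p=0.2):
        super().__init__()
        # Frozen out-of-domain parameters act as the prior.
        self.register_buffer("w_out", out_weight.detach().clone())
        self.register_buffer("b_out", out_bias.detach().clone())
        # Only the offsets are learned during fine-tuning.
        self.delta_w = torch.nn.Parameter(torch.zeros_like(self.w_out))
        self.delta_b = torch.nn.Parameter(torch.zeros_like(self.b_out))
        self.drop = torch.nn.Dropout(p)

    def forward(self, x):
        # At eval time Dropout is the identity, so the layer uses
        # w_out + delta_w, i.e. the fully adapted weights.
        w = self.w_out + self.drop(self.delta_w)
        b = self.b_out + self.drop(self.delta_b)
        return F.linear(x, w, b)
```

As a usage sketch, one would swap a fine-tuned layer's `torch.nn.Linear` for `TuneoutLinear(linear.weight, linear.bias)` and add `l2_to_prior_penalty(model, prior, 1e-4)` to the loss, with the prior parameters snapshotted from the out-of-domain model before adaptation. Note that standard PyTorch `Dropout` rescales kept offsets by 1/(1-p) at training time, which may differ from the paper's exact scheme.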

URL

https://arxiv.org/abs/1707.09920

PDF

https://arxiv.org/pdf/1707.09920

