
Greedy Search with Probabilistic N-gram Matching for Neural Machine Translation

2018-09-10
Chenze Shao, Yang Feng, Xilin Chen

Abstract

Neural machine translation (NMT) models are usually trained with a word-level loss under the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching, which avoids the reinforcement framework. In addition, this method performs greedy search during training, using the predicted words as context just as at inference time, to alleviate the problem of exposure bias. Experimental results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.
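The core idea is to replace the hard, non-differentiable n-gram matching used in BLEU with an expected (probability-weighted) match count, so the sequence-level objective can be optimized directly by gradient descent instead of through a high-variance policy-gradient estimator. Below is a minimal sketch of such a loss, not the authors' code: it assumes the model has already greedily decoded a hypothesis `pred` while retaining the per-step token probabilities `probs`, and it weights every clipped n-gram match against the reference by the product of the probabilities of the n-gram's tokens. All names are illustrative, and refinements such as BLEU's brevity penalty are omitted.

```python
import torch
from collections import Counter

def probabilistic_ngram_loss(probs, pred, ref, n=2):
    """Negative expected n-gram match count between a greedy hypothesis
    and a reference, where each clipped match is weighted by the product
    of the predicted token probabilities (differentiable w.r.t. probs).

    probs: FloatTensor (T, V) of per-step token distributions
    pred:  LongTensor (T,) greedy token ids (argmax of probs)
    ref:   LongTensor reference token ids
    """
    T = pred.size(0)
    # Reference n-gram counts, used for clipped matching as in BLEU.
    ref_counts = Counter(tuple(ref[i:i + n].tolist())
                         for i in range(len(ref) - n + 1))
    matched = probs.new_zeros(())
    hyp_counts = Counter()
    for i in range(T - n + 1):
        gram = tuple(pred[i:i + n].tolist())
        hyp_counts[gram] += 1
        # Count a match only up to the reference count (clipping).
        if hyp_counts[gram] <= ref_counts.get(gram, 0):
            # Probability of the n-gram = product of the probabilities
            # of its greedy tokens; gradients flow through probs.
            p = probs.new_ones(())
            for j in range(n):
                p = p * probs[i + j, pred[i + j]]
            matched = matched + p
    return -matched  # minimizing this maximizes expected matches

if __name__ == "__main__":
    # Toy check: 5 decoding steps over a vocabulary of 1000 types.
    logits = torch.randn(5, 1000, requires_grad=True)
    probs = torch.softmax(logits, dim=-1)
    pred = probs.argmax(dim=-1)   # greedy hypothesis
    ref = pred.clone()            # pretend the reference matches exactly
    loss = probabilistic_ngram_loss(probs, pred, ref)
    loss.backward()               # gradients reach the logits
```

Because the greedy tokens themselves serve as the decoding context during training, the model is conditioned on its own predictions just as at inference, which is how the method targets exposure bias while keeping the objective differentiable through `probs`.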

URL

https://arxiv.org/abs/1809.03132

PDF

https://arxiv.org/pdf/1809.03132

