
Analyzing Neural MT Search and Model Performance

2017-08-02
Jan Niehues, Eunah Cho, Thanh-Le Ha, Alex Waibel

Abstract

In this paper, we offer an in-depth analysis of modeling and search performance in neural machine translation. We address the question of whether a more complex search algorithm is necessary. Furthermore, we investigate whether more complex models, which might only be applicable during rescoring, are promising. By separating the search space from the modeling using $n$-best list reranking, we analyze the influence of both parts of an NMT system independently. By comparing NMT systems of different quality, we show that the better translation is already in the search space of the lower-performing systems. These results indicate that the current search algorithms are sufficient for NMT systems. Furthermore, we show that even a relatively small $n$-best list of $50$ hypotheses already contains notably better translations.
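To make the reranking setup concrete, below is a minimal sketch of $n$-best list reranking as described in the abstract: a baseline system proposes an $n$-best list, a second model rescores each hypothesis, and the highest-scoring hypothesis is selected. The `rescore` callable is a hypothetical stand-in for any model's score of a translation given the source; it is not an API from the paper.

```python
from typing import Callable, List, Tuple


def rerank_nbest(
    source: str,
    nbest: List[str],
    rescore: Callable[[str, str], float],
) -> Tuple[str, float]:
    """Return the hypothesis from `nbest` with the highest rescoring score."""
    scored = [(hyp, rescore(source, hyp)) for hyp in nbest]
    return max(scored, key=lambda pair: pair[1])


if __name__ == "__main__":
    # Toy usage with a dummy scorer (prefers shorter hypotheses); in practice
    # the scorer would be a stronger or complementary NMT model.
    dummy_rescore = lambda src, hyp: -float(len(hyp.split()))
    nbest = [
        "this is a translation of the sentence",
        "this is a translation",
        "translation of the sentence this is",
    ]
    best, score = rerank_nbest("ein Beispielsatz", nbest, dummy_rescore)
    print(best, score)
```

Because the $n$-best list fixes the search space, any quality gain from reranking can be attributed to the rescoring model rather than to the search procedure.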

URL

https://arxiv.org/abs/1708.00563

PDF

https://arxiv.org/pdf/1708.00563
