
Towards Decoding as Continuous Optimization in Neural Machine Translation

2017-07-22
Cong Duy Vu Hoang (University of Melbourne), Gholamreza Haffari (Monash University), Trevor Cohn (University of Melbourne)

Abstract

We propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation. We convert decoding, inherently a discrete optimisation problem, into a continuous optimisation problem. The resulting constrained continuous optimisation problem is then tackled using gradient-based methods. Our decoding framework enables decoding with intractable models, such as the intersection of left-to-right and right-to-left (bidirectional) NMT models, as well as source-to-target and target-to-source (bilingual) NMT models. Our empirical results show that the decoding framework is effective and leads to substantial improvements in translations generated from the intersected models, where typical greedy or beam search is not feasible. We also compare our framework against reranking and analyse its advantages and disadvantages.
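To illustrate the general idea of decoding as continuous optimisation, the sketch below relaxes each target-side word choice to a distribution over the vocabulary (kept on the simplex via a softmax over free logits) and runs gradient descent on the combined score of two directional models. This is only a toy illustration of the relaxation, not the authors' implementation: the bilinear scorers `model_score`, `W_l2r`, and `W_r2l` are hypothetical stand-ins for pretrained left-to-right and right-to-left NMT models.

```python
# Toy sketch: continuous relaxation of decoding, optimised by gradient descent
# against the "intersection" of two directional scorers. Assumed stand-ins, not
# the paper's actual models.
import torch

vocab_size, seq_len, emb_dim = 100, 6, 16

# Frozen stand-ins for two pretrained models' scoring functions.
emb = torch.randn(vocab_size, emb_dim)
W_l2r = torch.randn(emb_dim, emb_dim)
W_r2l = torch.randn(emb_dim, emb_dim)

def model_score(expected_emb, W):
    # Hypothetical bilinear score standing in for log p(y | x) of one direction.
    h = expected_emb @ W
    return (h[:-1] * expected_emb[1:]).sum()

# Continuous relaxation: free logits per position; softmax keeps each
# position's relaxed "word" on the probability simplex.
logits = torch.zeros(seq_len, vocab_size, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    probs = torch.softmax(logits, dim=-1)      # relaxed word distributions
    expected_emb = probs @ emb                 # expected embeddings per position
    # Maximise the sum of both directions' scores (intersected objective).
    loss = -(model_score(expected_emb, W_l2r) +
             model_score(torch.flip(expected_emb, dims=[0]), W_r2l))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Round the relaxed solution back to a discrete output sequence.
translation = torch.softmax(logits, dim=-1).argmax(dim=-1)
print(translation.tolist())
```

In the relaxed problem the objective is differentiable in the logits, which is what makes gradient-based optimisation over otherwise intractable intersected models possible; the final argmax step rounds the continuous solution back to discrete words.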

URL

https://arxiv.org/abs/1701.02854

PDF

https://arxiv.org/pdf/1701.02854
