
RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition

2018-05-24
Albert Zeyer, Tamer Alkhouli, Hermann Ney

Abstract

We compare the training and decoding speed of RETURNN on attention models for translation, which is fast thanks to CUDA LSTM kernels and a pure TensorFlow beam search decoder. We show that a layer-wise pretraining scheme for recurrent attention models yields over 1% absolute BLEU improvement and allows training deeper recurrent encoder networks. Promising preliminary results on maximum expected BLEU training are presented. We are able to train state-of-the-art models for translation and end-to-end models for speech recognition, and we show results on WMT 2017 and Switchboard. The flexibility of RETURNN enables a fast research feedback loop for experimenting with alternative architectures, and its generality allows it to be used in a wide range of applications.
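The layer-wise pretraining scheme mentioned in the abstract can be pictured as growing the encoder depth in stages, where each deeper network starts from the weights of the already-trained shallower one. The following is a minimal sketch of that idea in plain TensorFlow/Keras, not RETURNN's actual pretrain config mechanism; the function name, layer counts, and dimensions are assumptions for illustration only.

```python
# Hypothetical sketch of layer-wise pretraining for a deep recurrent
# encoder; this is NOT RETURNN's API, just the general scheme.
import tensorflow as tf

def build_encoder(num_layers, vocab_size=32000, hidden_dim=512):
    """Stack `num_layers` bidirectional LSTM layers over a word embedding."""
    tokens = tf.keras.Input(shape=(None,), dtype="int32")
    x = tf.keras.layers.Embedding(vocab_size, hidden_dim)(tokens)
    for _ in range(num_layers):
        x = tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(hidden_dim, return_sequences=True))(x)
    return tf.keras.Model(tokens, x)

# Grow the encoder step by step: train a shallow stack first, then build
# a deeper one and copy the trained layers into it before continuing.
previous = None
for depth in (2, 4, 6):  # depths are illustrative, not from the paper
    encoder = build_encoder(depth)
    if previous is not None:
        # zip() stops at the shorter layer list, so only the shared
        # (already-trained) layers are copied into the deeper encoder.
        for new_layer, old_layer in zip(encoder.layers, previous.layers):
            new_layer.set_weights(old_layer.get_weights())
    # ... attach the attention decoder and train at this depth ...
    previous = encoder
```

The design point is that the lower layers never have to be learned from scratch inside a very deep stack; each growth step only asks the optimizer to fit the newly added layers, which is what makes deeper recurrent encoders trainable in the first place.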

URL

https://arxiv.org/abs/1805.05225

PDF

https://arxiv.org/pdf/1805.05225

