
Pre-trained Language Model Representations for Language Generation

2019-03-22
Sergey Edunov, Alexei Baevski, Michael Auli

Abstract

Pre-trained language model representations have been successful in a wide range of language understanding tasks. In this paper, we examine different strategies to integrate pre-trained representations into sequence-to-sequence models and apply them to neural machine translation and abstractive summarization. We find that pre-trained representations are most effective when added to the encoder network, where they slow inference by only 14%. Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup. While returns diminish with more labeled data, we still observe improvements when millions of sentence pairs are available. Finally, on abstractive summarization we achieve a new state of the art on the full-text version of CNN/DailyMail.
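
The core idea in the abstract is to feed representations from a pre-trained language model into the encoder of a sequence-to-sequence model. The sketch below illustrates that general scheme in PyTorch; it is not the authors' implementation. The `LMAugmentedEncoder` class, the project-and-add combination, and all dimensions are illustrative assumptions, and the pre-trained LM states are stood in for by random tensors.

```python
import torch
import torch.nn as nn


class LMAugmentedEncoder(nn.Module):
    """Seq2seq encoder whose token embeddings are augmented with
    representations from a frozen pre-trained language model
    (hypothetical sketch, not the paper's exact architecture)."""

    def __init__(self, vocab_size, d_model=512, lm_dim=1024, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Project the (pre-computed, frozen) LM hidden states down to the
        # encoder width so they can be added to the token embeddings.
        self.lm_proj = nn.Linear(lm_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, src_tokens, lm_states):
        # src_tokens: (batch, seq_len) token ids
        # lm_states:  (batch, seq_len, lm_dim) pre-trained LM representations
        x = self.embed(src_tokens) + self.lm_proj(lm_states)
        return self.encoder(x)


# Usage with dummy inputs; lm_states would normally come from a frozen LM.
enc = LMAugmentedEncoder(vocab_size=32000)
tokens = torch.randint(0, 32000, (2, 7))
lm_states = torch.randn(2, 7, 1024)
out = enc(tokens, lm_states)  # (2, 7, 512)
```

Adding the LM states only on the encoder side mirrors the abstract's observation that this placement gives most of the benefit while adding a modest inference cost, since the decoder is left unchanged.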

URL

http://arxiv.org/abs/1903.09722

PDF

http://arxiv.org/pdf/1903.09722
