
Adaptive Input Representations for Neural Language Modeling

2019-02-22
Alexei Baevski, Michael Auli

Abstract

We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train as the popular character input CNN while having a lower number of parameters. On the WikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5 perplexity compared to the previously best published result, and on the Billion Word benchmark we achieve 23.02 perplexity.
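To illustrate the core idea of variable-capacity input embeddings (frequent words receive larger embeddings, rare words smaller ones, all projected to the model dimension), here is a minimal PyTorch sketch. This is not the authors' released implementation; the class name, cutoffs, and reduction factor are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveInputEmbedding(nn.Module):
    """Sketch: partition the vocabulary into frequency bands and give each
    band its own embedding size, projected up to a shared model dimension."""

    def __init__(self, vocab_size, d_model, cutoffs=(20000, 60000), factor=4):
        super().__init__()
        self.cutoffs = list(cutoffs) + [vocab_size]
        self.embeddings = nn.ModuleList()
        self.projections = nn.ModuleList()
        prev = 0
        for i, cutoff in enumerate(self.cutoffs):
            dim = d_model // (factor ** i)  # each band shrinks capacity by `factor`
            self.embeddings.append(nn.Embedding(cutoff - prev, dim))
            self.projections.append(nn.Linear(dim, d_model, bias=False))
            prev = cutoff

    def forward(self, tokens):
        # tokens: LongTensor of word indices sorted by frequency, shape (batch, seq)
        d_model = self.projections[0].out_features
        out = tokens.new_zeros(*tokens.shape, d_model, dtype=torch.float)
        prev = 0
        for emb, proj, cutoff in zip(self.embeddings, self.projections, self.cutoffs):
            mask = (tokens >= prev) & (tokens < cutoff)
            if mask.any():
                # shift indices into the band's local range, embed, then project
                out[mask] = proj(emb(tokens[mask] - prev))
            prev = cutoff
        return out
```

Because rare words are assigned small embeddings, the total parameter count drops relative to a full-capacity embedding table, which is the source of the training-speed advantage the abstract describes.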

URL

http://arxiv.org/abs/1809.10853

PDF

http://arxiv.org/pdf/1809.10853

