papers AI Learner

Better Word Embeddings by Disentangling Contextual n-Gram Information

2019-04-10
Prakhar Gupta, Matteo Pagliardini, Martin Jaggi

Abstract

Pre-trained word vectors are ubiquitous in Natural Language Processing applications. In this paper, we show how training word embeddings jointly with bigram and even trigram embeddings results in improved unigram embeddings. We claim that training word embeddings along with higher-order n-gram embeddings helps remove contextual information from the unigrams, resulting in better stand-alone word embeddings. We empirically validate our hypothesis by outperforming other competing word representation models by a significant margin on a wide variety of tasks. We make our models publicly available.
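The core idea above can be illustrated with a small sketch: build a context representation by averaging the vectors of all unigrams, bigrams, and trigrams in a window, so that phrase-level information (e.g. "new york") is absorbed by the higher n-gram vectors rather than contaminating the unigram vectors. The function names and the random embedding table below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def extract_ngrams(tokens, n_max=3):
    """Return all n-grams up to length n_max (unigrams, bigrams, trigrams)."""
    ngrams = []
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            ngrams.append(" ".join(tokens[i:i + n]))
    return ngrams

# Hypothetical embedding table: every n-gram gets its own trainable vector.
rng = np.random.default_rng(0)
dim = 4
table = {}

def vector(ngram):
    if ngram not in table:
        table[ngram] = rng.normal(size=dim)
    return table[ngram]

tokens = ["new", "york", "is", "big"]
ngrams = extract_ngrams(tokens)
# Context representation: the average over unigram AND higher n-gram vectors.
# During training, phrase-specific information can be captured by the bigram
# vector for "new york", leaving the unigram vectors for "new" and "york"
# cleaner as stand-alone word embeddings.
context = np.mean([vector(g) for g in ngrams], axis=0)
```

At test time, only the unigram vectors would be kept as the word embeddings; the bigram and trigram vectors exist solely to soak up contextual information during training.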

URL

http://arxiv.org/abs/1904.05033

PDF

http://arxiv.org/pdf/1904.05033

