
Evaluating Word Embedding Models: Methods and Experimental Results

2019-01-28
Bin Wang, Angela Wang, Fenxiao Chen, Yunchen Wang, C.-C. Jay Kuo

Abstract

Extensive evaluation of a large number of word embedding models for language processing applications is conducted in this work. First, we introduce popular word embedding models and discuss desired properties of word models and evaluation methods (or evaluators). Then, we categorize evaluators into two types: intrinsic and extrinsic. Intrinsic evaluators test the quality of a representation independently of specific natural language processing tasks, while extrinsic evaluators use word embeddings as input features to a downstream task and measure changes in performance metrics specific to that task. We report experimental results of intrinsic and extrinsic evaluators on six word embedding models. It is shown that different evaluators focus on different aspects of word models, and some are more correlated with natural language processing tasks. Finally, we adopt correlation analysis to study the performance consistency of extrinsic and intrinsic evaluators.
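
The intrinsic/extrinsic distinction can be made concrete with a small example. Below is a minimal sketch, not taken from the paper, of a typical intrinsic evaluator: it scores an embedding on a word-similarity dataset by comparing cosine similarities of word vectors against human similarity ratings via Spearman correlation. The `embeddings` dictionary, the `word_similarity_score` helper, and the example word pairs are hypothetical placeholders, not the authors' code or data.

```python
# Minimal sketch of an intrinsic evaluator (word similarity); illustrative only.
# An embedding is scored by how well cosine similarity between word vectors
# correlates (Spearman) with human-assigned similarity ratings.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_similarity_score(embeddings, pairs_with_ratings):
    """embeddings: dict mapping word -> np.ndarray vector (hypothetical input).
    pairs_with_ratings: list of (word1, word2, human_rating) tuples."""
    model_scores, human_scores = [], []
    for w1, w2, rating in pairs_with_ratings:
        if w1 in embeddings and w2 in embeddings:  # skip out-of-vocabulary pairs
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(rating)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho

# Toy usage with made-up vectors and ratings (for illustration only).
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in ["car", "automobile", "fruit"]}
pairs = [("car", "automobile", 9.0), ("car", "fruit", 1.5)]
print(word_similarity_score(embeddings, pairs))
```

An extrinsic evaluator, by contrast, would feed the same vectors into a downstream model (e.g. a sentiment or tagging classifier) and compare the task metric across embeddings; the paper's correlation analysis then asks how consistently these two kinds of scores rank the six word embedding models.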

URL

http://arxiv.org/abs/1901.09785

PDF

http://arxiv.org/pdf/1901.09785

