
Aligning Vector-spaces with Noisy Supervised Lexicons

2019-03-25
Noa Yehezkel Lubin, Jacob Goldberger, Yoav Goldberg

Abstract

The problem of learning to translate between two vector spaces given a set of aligned points arises in several application areas of NLP. Current solutions assume that the lexicon which defines the alignment pairs is noise-free. We consider the case where the set of aligned points is allowed to contain an amount of noise, in the form of incorrect lexicon pairs, and show that this arises in practice by analyzing the edited dictionaries after the cleaning process. We demonstrate that such noise substantially degrades the accuracy of the learned translation when using current methods. We propose a model that accounts for noisy pairs. This is achieved by introducing a generative model with a compatible iterative EM algorithm. The algorithm jointly learns the noise level in the lexicon, finds the set of noisy pairs, and learns the mapping between the spaces. We demonstrate the effectiveness of our proposed algorithm on two alignment problems: bilingual word embedding translation, and mapping between diachronic embedding spaces for recovering the semantic shifts of words across time periods.
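The abstract describes a generative model fit with EM: each lexicon pair is treated as either a correct pair generated by the linear mapping plus small Gaussian noise, or an incorrect pair drawn from a broad background distribution, and the algorithm alternates between scoring pairs and refitting the map. The sketch below is only an illustration of how such a noise-aware EM alignment could look, assuming a two-component Gaussian mixture and an unconstrained least-squares mapping; the function name `noise_aware_alignment` and all parameterization details are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def noise_aware_alignment(X, Y, n_iter=20, tol=1e-6):
    """Illustrative EM sketch for aligning two embedding spaces with a noisy lexicon.

    X, Y: (n, d) arrays of paired source/target vectors.
    Returns the linear map Q, per-pair clean probabilities r, and the
    estimated fraction of clean pairs alpha.
    """
    n, d = X.shape
    alpha = 0.5                                   # prior prob. that a pair is clean
    Q = np.linalg.lstsq(X, Y, rcond=None)[0]      # initial map from plain least squares
    sigma2 = np.mean((X @ Q - Y) ** 2)            # clean-component variance
    tau2 = np.var(Y)                              # broad variance for the noise component

    for _ in range(n_iter):
        # E-step: posterior probability that each pair is a clean (aligned) pair.
        res2 = np.sum((X @ Q - Y) ** 2, axis=1)
        log_clean = np.log(alpha) - 0.5 * res2 / sigma2 - 0.5 * d * np.log(sigma2)
        dist2 = np.sum((Y - Y.mean(axis=0)) ** 2, axis=1)
        log_noise = np.log(1 - alpha) - 0.5 * dist2 / tau2 - 0.5 * d * np.log(tau2)
        m = np.maximum(log_clean, log_noise)      # log-sum-exp for numerical stability
        r = np.exp(log_clean - m) / (np.exp(log_clean - m) + np.exp(log_noise - m))

        # M-step: weighted least-squares refit of Q, plus noise-level updates.
        w = np.sqrt(r)[:, None]
        Q_new = np.linalg.lstsq(X * w, Y * w, rcond=None)[0]
        alpha = np.clip(r.mean(), 1e-3, 1 - 1e-3)
        sigma2 = max(np.sum(r * np.sum((X @ Q_new - Y) ** 2, axis=1))
                     / (d * r.sum() + 1e-12), 1e-8)

        if np.linalg.norm(Q_new - Q) < tol:
            Q = Q_new
            break
        Q = Q_new

    return Q, r, alpha
```

On exit, pairs with low `r` can be flagged as suspected noisy lexicon entries, mirroring the joint estimation of the noise level, the noisy-pair set, and the mapping described in the abstract; the paper's actual variants (e.g., an orthogonality constraint on the map) may differ from this least-squares sketch.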

URL

http://arxiv.org/abs/1903.10238

PDF

http://arxiv.org/pdf/1903.10238
