
Learning Language Representations for Typology Prediction

2017-07-29
Chaitanya Malaviya, Graham Neubig, Patrick Littell

Abstract

One central mystery of neural NLP is what neural models “know” about their subject matter. When a neural machine translation system learns to translate from one language to another, does it learn the syntax or semantics of the languages? Can this knowledge be extracted from the system to fill holes in human scientific knowledge? Existing typological databases contain relatively full feature specifications for only a few hundred languages. Exploiting the existence of parallel texts in more than a thousand languages, we build a massive many-to-one neural machine translation (NMT) system from 1017 languages into English, and use this to predict information missing from typological databases. Experiments show that the proposed method is able to infer not only syntactic, but also phonological and phonetic inventory features, and improves over a baseline that has access to information about the languages’ geographic and phylogenetic neighbors.
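
The abstract describes the approach only at a high level. As a rough illustration of the "fill holes in typological databases" step, here is a minimal sketch in Python, assuming each language is represented by a fixed-size vector taken from the NMT model's language embeddings and that a simple per-feature classifier is trained on languages whose feature values are known. The variable names, the 512-dimensional placeholder vectors, and the choice of logistic regression are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: predict a missing typological feature from
# language embeddings learned by a many-to-one NMT model. Names and
# the classifier choice are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

np.random.seed(0)

# Assume each language has a vector taken from the NMT model's
# language-token embedding table (placeholder random vectors here).
lang_embeddings = {
    "fra": np.random.randn(512),
    "deu": np.random.randn(512),
    "tur": np.random.randn(512),
    "jpn": np.random.randn(512),
}

# Assume a partially filled typological database: for one binary
# feature (e.g. "object precedes verb"), only some languages are
# labeled; "jpn" is missing.
known_labels = {"fra": 0, "deu": 1, "tur": 1}

# Train a classifier for this feature on the labeled languages...
known = [lang for lang in lang_embeddings if lang in known_labels]
X = np.stack([lang_embeddings[lang] for lang in known])
y = np.array([known_labels[lang] for lang in known])
clf = LogisticRegression().fit(X, y)

# ...and predict the feature value where the database has a hole.
missing = [lang for lang in lang_embeddings if lang not in known_labels]
for lang in missing:
    pred = clf.predict(lang_embeddings[lang].reshape(1, -1))[0]
    print(f"{lang}: predicted feature value = {pred}")
```

In this framing, one such classifier would be trained per typological feature (syntactic, phonological, or phonetic inventory), and the baseline mentioned in the abstract would replace the learned embeddings with features derived from a language's geographic and phylogenetic neighbors.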

URL

https://arxiv.org/abs/1707.09569

PDF

https://arxiv.org/pdf/1707.09569

