
Several Experiments on Investigating Pretraining and Knowledge-Enhanced Models for Natural Language Inference

2019-04-27
Tianda Li, Xiaodan Zhu, Quan Liu, Qian Chen, Zhigang Chen, Si Wei

Abstract

Natural language inference (NLI) is among the most challenging tasks in natural language understanding. Recent work on unsupervised pretraining that leverages unsupervised signals such as language-model and sentence prediction objectives has been shown to be very effective on a wide range of NLP problems. It would still be desirable to further understand how it helps NLI; e.g., whether it learns annotation artifacts in the data or instead learns true inference knowledge. In addition, external knowledge that does not exist in the limited amount of NLI training data may be added to NLI models in two typical ways, i.e., from human-created resources or through an unsupervised pretraining paradigm. We run several experiments here to investigate whether they help NLI in the same way, and if not, how.
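For context on the pretraining paradigm the abstract refers to, the sketch below shows one common way to fine-tune a pretrained model on an NLI sentence pair. This is a minimal illustration, not the authors' code; it assumes the HuggingFace `transformers` library, the `bert-base-uncased` checkpoint, and an illustrative three-way label set.

```python
# Minimal sketch (an assumption, not the paper's method): fine-tuning a
# pretrained BERT model on a single NLI premise/hypothesis pair.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # entailment / neutral / contradiction
)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the sentence pair; BERT's sentence-pair input format handles
# premise and hypothesis natively via segment embeddings.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, 3)

# Cross-entropy against the gold label (0 = entailment here, by assumption);
# an optimizer step on this loss would complete one fine-tuning update.
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([0]))
loss.backward()
```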

URL

http://arxiv.org/abs/1904.12104

PDF

http://arxiv.org/pdf/1904.12104
