
Exploring Unsupervised Pretraining and Sentence Structure Modelling for Winograd Schema Challenge

2019-04-22
Yu-Ping Ruan, Xiaodan Zhu, Zhen-Hua Ling, Zhan Shi, Quan Liu, Si Wei

Abstract

The Winograd Schema Challenge (WSC) was proposed as an AI-hard problem for testing computers' intelligence in commonsense representation and reasoning. This paper presents the new state-of-the-art on WSC, achieving an accuracy of 71.1%. We demonstrate that the leading performance benefits from jointly modelling sentence structures, utilizing knowledge learned from cutting-edge pretraining models, and performing fine-tuning. We conduct detailed analyses, showing that fine-tuning is critical for achieving the performance, but it helps more on the simpler associative problems. Modelling sentence dependency structures, however, consistently helps on the harder non-associative subset of WSC. Analysis also shows that larger fine-tuning datasets yield better performance, suggesting the potential benefit of future work on annotating more Winograd schema sentences.

URL

http://arxiv.org/abs/1904.09705

PDF

http://arxiv.org/pdf/1904.09705
