Abstract
We propose a multi-task learning framework to learn a joint Machine Reading Comprehension (MRC) model that can be applied to a wide range of MRC tasks in different domains. Inspired by recent ideas on data selection in machine translation, we develop a novel sample re-weighting scheme that assigns sample-specific weights to the loss. An empirical study shows that our approach can be applied to many existing MRC models. Combined with contextual representations from pre-trained language models (such as ELMo), we achieve new state-of-the-art results on a set of MRC benchmark datasets. We release our code at: https://github.com/xycforgithub/MultiTask-MRC.
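To make the idea of sample-specific loss weighting in a multi-task setup concrete, here is a minimal sketch. It is not the authors' implementation (see their repository above): the model, the toy data, and the sample_weight function are hypothetical placeholders, and the paper derives its weights from data-selection ideas rather than the confidence-based heuristic used here for illustration.

    # Minimal sketch of a sample-weighted multi-task loss (PyTorch-style; illustrative only).
    import torch
    import torch.nn as nn

    class MultiTaskModel(nn.Module):
        def __init__(self, input_dim=128, hidden_dim=64, num_tasks=2, num_classes=2):
            super().__init__()
            # Shared encoder with one output head per MRC task/domain.
            self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            self.heads = nn.ModuleList(
                [nn.Linear(hidden_dim, num_classes) for _ in range(num_tasks)]
            )

        def forward(self, x, task_id):
            return self.heads[task_id](self.encoder(x))

    def sample_weight(logits, labels):
        # Hypothetical placeholder: weight each auxiliary sample by the model's
        # confidence on its gold label. The paper's actual scheme differs.
        probs = torch.softmax(logits.detach(), dim=-1)
        return probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)

    model = MultiTaskModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses

    # Toy batch drawn from an auxiliary task (task_id=1).
    x = torch.randn(8, 128)
    y = torch.randint(0, 2, (8,))

    logits = model(x, task_id=1)
    per_sample = loss_fn(logits, y)        # shape: (batch,)
    weights = sample_weight(logits, y)     # sample-specific weights
    loss = (weights * per_sample).mean()   # re-weighted loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The key point is that the loss is computed per sample (reduction="none") and scaled by a weight before averaging, so samples from other tasks or domains contribute to training in proportion to their estimated usefulness for the target task.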
URL
http://arxiv.org/abs/1809.06963