Abstract
In this paper, we address the question answering challenge on the SQuAD 2.0 dataset. We design a model architecture that leverages BERT’s context-aware word embeddings and BiDAF’s context–question interaction mechanism. By integrating these two state-of-the-art architectures, our system extracts contextual representations at the word and character levels for a better understanding of both the question and the context, and of the correlations between them. We also propose an original joint posterior probability predictor module and its associated loss functions. Our best model so far obtains an F1 score of 75.842% and an EM score of 72.24% on the test PCE leaderboard.
URL
http://arxiv.org/abs/1904.08109
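
Below is a minimal sketch, not the authors' released code, of how BERT contextual embeddings could feed a BiDAF-style bidirectional attention layer followed by a span predictor, as the abstract describes. The module names, dimensions, checkpoint name ("bert-base-uncased"), and the independent start/end heads are illustrative assumptions; the paper's character-level inputs and joint posterior predictor are omitted for brevity.

```python
# Hypothetical sketch of a BERT + BiDAF-attention span extractor (assumptions,
# not the paper's implementation): BERT encodes question and context separately,
# a BiDAF-style bidirectional attention fuses them, and two linear heads score
# answer-span start and end positions.
import torch
import torch.nn as nn
from transformers import BertModel

class BiDAFAttention(nn.Module):
    """Bidirectional attention flow between context and question encodings."""
    def __init__(self, hidden_size):
        super().__init__()
        # Trilinear similarity: s(c, q) = w_c·c + w_q·q + w_cq·(c ⊙ q)
        self.w_c = nn.Linear(hidden_size, 1, bias=False)
        self.w_q = nn.Linear(hidden_size, 1, bias=False)
        self.w_cq = nn.Parameter(torch.zeros(1, 1, hidden_size))
        nn.init.xavier_uniform_(self.w_cq)

    def forward(self, c, q, q_mask):
        # c: (B, Lc, H) context encodings, q: (B, Lq, H) question encodings
        s = self.w_c(c) + self.w_q(q).transpose(1, 2) \
            + torch.bmm(c * self.w_cq, q.transpose(1, 2))       # (B, Lc, Lq)
        s = s.masked_fill(~q_mask.unsqueeze(1).bool(), -1e9)
        a = torch.softmax(s, dim=-1)                             # context-to-question
        c2q = torch.bmm(a, q)                                    # (B, Lc, H)
        b = torch.softmax(s.max(dim=-1).values, dim=-1)          # question-to-context
        q2c = torch.bmm(b.unsqueeze(1), c).expand_as(c)          # (B, Lc, H)
        return torch.cat([c, c2q, c * c2q, c * q2c], dim=-1)     # (B, Lc, 4H)

class BertBiDAFQA(nn.Module):
    def __init__(self, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        h = self.bert.config.hidden_size
        self.attn = BiDAFAttention(h)
        self.start_head = nn.Linear(4 * h, 1)
        self.end_head = nn.Linear(4 * h, 1)

    def forward(self, c_ids, c_mask, q_ids, q_mask):
        c = self.bert(input_ids=c_ids, attention_mask=c_mask).last_hidden_state
        q = self.bert(input_ids=q_ids, attention_mask=q_mask).last_hidden_state
        g = self.attn(c, q, q_mask)                               # fused representation
        start_logits = self.start_head(g).squeeze(-1)             # (B, Lc)
        end_logits = self.end_head(g).squeeze(-1)                 # (B, Lc)
        # The paper's joint posterior predictor would model p(start, end)
        # jointly; here the two heads are scored independently for brevity.
        return start_logits, end_logits
```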