
Using syntactical and logical forms to evaluate textual inference competence

2019-05-10
Felipe Salvatore, Marcelo Finger, Roberto Hirata Jr

Abstract

In light of recent breakthroughs in transfer learning for Natural Language Processing, much progress has been made on Natural Language Inference. Various models now achieve high accuracy on popular inference datasets such as SNLI, MNLI and SciTail. At the same time, there are several indications that these datasets can be exploited using simple linguistic patterns. This complicates our understanding of the actual capacity of machine learning models to solve the complex task of textual inference. We propose a new set of tasks that require specific capacities over linguistic logical forms, namely: i) Boolean coordination, ii) quantifiers, iii) definite descriptions, and iv) counting operators. Evaluating a model on our stratified dataset makes it possible to pinpoint its specific inferential difficulties for each kind of textual structure. We evaluate two kinds of neural models that implicitly exploit language structure: recurrent models and the Transformer network BERT. We show that although BERT clearly generalizes better over most logical forms, there is room for improvement when dealing with counting operators.
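To make the task design concrete, below is a minimal, hypothetical Python sketch of what instances for two of these task families (Boolean coordination and counting operators) might look like. The vocabulary lists, sentence templates, function names, and the binary entailment / not-entailment labeling are illustrative assumptions for this sketch, not the authors' actual data generation code.

import random

# Hypothetical vocabulary; the paper's dataset uses its own templates.
PEOPLE = ["Anna", "Bruno", "Carla", "Daniel"]
CITIES = ["Paris", "Rome", "Tokyo"]

def boolean_coordination_example():
    # A coordinated premise entails each individual conjunct.
    a, b = random.sample(PEOPLE, 2)
    city = random.choice(CITIES)
    premise = f"{a} and {b} visited {city}."
    hypothesis = f"{a} visited {city}."
    return premise, hypothesis, "entailment"

def counting_operator_example():
    # The hypothesis counts the individuals named in the premise.
    k = random.randint(2, 4)
    group = random.sample(PEOPLE, k)
    city = random.choice(CITIES)
    premise = ", ".join(group[:-1]) + f" and {group[-1]} visited {city}."
    n = random.choice([k, k + 1])
    hypothesis = f"At least {n} people visited {city}."
    # "At least k" follows from k named visitors; "at least k+1" does not.
    label = "entailment" if n <= k else "not-entailment"
    return premise, hypothesis, label

if __name__ == "__main__":
    for make in (boolean_coordination_example, counting_operator_example):
        p, h, label = make()
        print(f"P: {p}  H: {h}  -> {label}")

Generating each task family from its own templates in this way is what makes the dataset stratified: a model's accuracy can be read off per logical form, rather than as a single aggregate score.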

URL

https://arxiv.org/abs/1905.05704

PDF

https://arxiv.org/pdf/1905.05704
