
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference

2019-02-04
R. Thomas McCoy, Ellie Pavlick, Tal Linzen

Abstract

Machine learning systems can often achieve high performance on a test set by relying on heuristics that are effective for frequent example types but break down in more challenging cases. We study this issue within natural language inference (NLI), the task of determining whether one sentence entails another. Based on an analysis of the task, we hypothesize three fallible syntactic heuristics that NLI models are likely to adopt: the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic. To determine whether models have adopted these heuristics, we introduce a controlled evaluation set called HANS (Heuristic Analysis for NLI Systems), which contains many examples where the heuristics fail. We find that models trained on MNLI, including the state-of-the-art model BERT, perform very poorly on HANS, suggesting that they have indeed adopted these heuristics. We conclude that there is substantial room for improvement in NLI systems, and that the HANS dataset can motivate and measure progress in this area.
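To make the heuristics concrete, below is a minimal sketch (not code from the paper) of the naive lexical overlap rule: predict entailment whenever every content word of the hypothesis also appears in the premise. It is run on hypothetical HANS-style premise-hypothesis pairs, chosen for illustration, where the rule predicts entailment incorrectly; the second and third pairs also reflect the subsequence and constituent patterns, since the hypothesis is a contiguous subsequence or an embedded clause of the premise.

```python
import string

# Illustrative sketch of the lexical overlap heuristic as a rule-based
# predictor. The example pairs are hypothetical, written in the spirit
# of HANS; they are not taken from the dataset itself.

def tokens(sentence: str) -> set:
    """Lowercased word set with surrounding punctuation stripped."""
    return {word.strip(string.punctuation).lower() for word in sentence.split()}

def lexical_overlap_predict(premise: str, hypothesis: str) -> str:
    """Predict entailment iff all hypothesis words occur in the premise."""
    return "entailment" if tokens(hypothesis) <= tokens(premise) else "non-entailment"

# Each pair has high lexical overlap but is NOT an entailment,
# so the heuristic gets every one wrong.
examples = [
    ("The doctor was paid by the actor.",   "The doctor paid the actor.", "non-entailment"),
    ("The senator near the lawyer danced.", "The lawyer danced.",         "non-entailment"),
    ("If the artist slept, the judge ran.", "The artist slept.",          "non-entailment"),
]

for premise, hypothesis, gold in examples:
    pred = lexical_overlap_predict(premise, hypothesis)
    print(f"{premise!r} -> {hypothesis!r}: predicted {pred}, gold {gold}")
```

A model that has internalized this shortcut behaves like the rule above: it performs well on MNLI examples where overlap and entailment happen to coincide, but fails systematically on HANS examples constructed so that they do not.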

URL

http://arxiv.org/abs/1902.01007

PDF

http://arxiv.org/pdf/1902.01007
