
Lucid Explanations Help: Using a Human-AI Image-Guessing Game to Evaluate Machine Explanation Helpfulness

2019-04-05
Arijit Ray, Giedrius Burachas, Yi Yao, Ajay Divakaran

Abstract

While there have been many proposals on how to make AI algorithms more transparent, few have attempted to evaluate the impact of AI explanations on human performance on a task that uses AI. We propose a Twenty-Questions-style collaborative image-guessing game, Explanation-assisted Guess Which (ExAG), as a method of evaluating the efficacy of explanations in the context of Visual Question Answering (VQA) - the task of answering natural language questions about images. We study the effect of VQA agent explanations on game performance as a function of explanation type and quality. We observe that "effective" explanations not only improve game performance (by almost 22% for "excellent"-rated explanations) but also help when VQA system answers are erroneous or noisy (by almost 30% compared to no explanations). We also find that players develop a preference for explanations even when their use is penalized, and that the explanations are mostly rated as "helpful".

URL

http://arxiv.org/abs/1904.03285

PDF

http://arxiv.org/pdf/1904.03285
