
Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool

2018-03-16
Feng Liu, Tao Xiang, Timothy M. Hospedales, Wankou Yang, Changyin Sun

Abstract

In recent years, visual question answering (VQA) has become topical. The premise of VQA's significance as a benchmark in AI is that both the image and the textual question need to be well understood and mutually grounded in order to infer the correct answer. However, current VQA models perhaps 'understand' less than initially hoped, and instead master the easier task of exploiting cues given away in the question and biases in the answer distribution. In this paper we propose the inverse problem of VQA (iVQA). The iVQA task is to generate a question that corresponds to a given image and answer pair. We propose a variational iVQA model that can generate diverse, grammatically correct and content-correlated questions that match the given answer. Based on this model, we show that iVQA is an interesting benchmark for visuo-linguistic understanding, and a more challenging alternative to VQA, because an iVQA model needs to understand the image better to be successful. As a second contribution, we show how to use iVQA in a novel reinforcement learning framework to diagnose any existing VQA model by exposing its belief set: the set of question-answer pairs that the VQA model would predict to be true for a given image. This provides a completely new window into what VQA models 'believe' about images. We show that existing VQA models have more erroneous beliefs than previously thought, revealing their intrinsic weaknesses. Suggestions are then made on how to address these weaknesses going forward.
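The task inversion and the belief-set idea can be made concrete with a minimal sketch. All names below are hypothetical illustrations (the abstract describes no API); the sketch only shows how iVQA flips VQA's inputs and outputs, and how a belief set could be enumerated by pairing an iVQA question generator with the VQA model under diagnosis.

```python
# Hypothetical sketch of the VQA/iVQA task inversion and belief-set probing.
# These interfaces are illustrative assumptions, not the authors' code.
from typing import Callable, List, Tuple

# VQA:  (image, question) -> answer
# iVQA: (image, answer)   -> one or more candidate questions
VQAModel = Callable[[bytes, str], str]
IVQAModel = Callable[[bytes, str], List[str]]


def belief_set(image: bytes,
               candidate_answers: List[str],
               ivqa: IVQAModel,
               vqa: VQAModel) -> List[Tuple[str, str]]:
    """Enumerate question-answer pairs a VQA model 'believes' about an image.

    For each candidate answer, the iVQA model proposes questions; a pair is
    kept when the VQA model, asked that question about the same image,
    returns that answer. Erroneous entries in this set expose the VQA
    model's mistaken beliefs.
    """
    beliefs: List[Tuple[str, str]] = []
    for answer in candidate_answers:
        for question in ivqa(image, answer):
            if vqa(image, question) == answer:
                beliefs.append((question, answer))
    return beliefs
```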

URL

https://arxiv.org/abs/1803.06936

PDF

https://arxiv.org/pdf/1803.06936

