
iVQA: Inverse Visual Question Answering

2018-03-16
Feng Liu, Tao Xiang, Timothy M. Hospedales, Wankou Yang, Changyin Sun

Abstract

We propose the inverse problem of Visual Question Answering (iVQA) and explore its suitability as a benchmark for visuo-linguistic understanding. The iVQA task is to generate a question that corresponds to a given image and answer pair. Because answers are less informative than questions, and questions carry less learnable bias, an iVQA model must understand the image better than a VQA model in order to succeed. We pose question generation as a multi-modal dynamic inference process and propose an iVQA model that can gradually adjust its focus of attention, guided by both the partially generated question and the answer. For evaluation, in addition to existing linguistic metrics, we propose a new ranking metric that compares the ground-truth question's rank among a list of distractors, allowing the drawbacks of different algorithms and sources of error to be studied. Experimental results show that our model generates diverse, grammatically correct, and content-correlated questions that match the given answer.
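The "dynamic inference" described above amounts to a decoding loop in which the image attention is recomputed at every step from the answer and the partial question. Below is a minimal greedy-decoding sketch of that idea; `model.init_state`, `model.attend`, and `model.step` are hypothetical interfaces standing in for the paper's encoder/decoder and are not taken from the authors' code.

```python
import numpy as np

def generate_question(model, image_feats, answer_emb, bos_id=1, eos_id=2, max_len=20):
    """Greedy iVQA decoding: at every step the model re-attends over
    image regions, guided by the answer and the question so far."""
    tokens = [bos_id]
    state = model.init_state(answer_emb)  # condition decoding on the answer
    for _ in range(max_len):
        # Dynamic attention: the context vector depends on both the
        # answer and the partially generated question (via `state`).
        context = model.attend(image_feats, answer_emb, state)
        logits, state = model.step(tokens[-1], context, state)
        next_id = int(np.argmax(logits))
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens
```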
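The proposed ranking metric is straightforward to implement: score the ground-truth question and a set of distractors with the model, then report where the ground truth lands. The sketch below assumes a hypothetical `score_fn(image, answer, question) -> float` hook (e.g. the model's log-likelihood of the question), where higher means a better match.

```python
import numpy as np

def gt_question_rank(score_fn, image, answer, gt_question, distractors):
    """Return the rank (1 = best) of the ground-truth question among
    distractor questions, as scored by the model."""
    candidates = [gt_question] + list(distractors)
    scores = np.array([score_fn(image, answer, q) for q in candidates])
    # Ground truth is at index 0; its rank is 1 plus the number of
    # distractors the model scores strictly higher.
    return 1 + int(np.sum(scores[1:] > scores[0]))
```

Averaging this rank (or the fraction of cases with rank 1) over a test set gives a single number, and inspecting which distractors outrank the ground truth exposes the error sources the abstract mentions.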

URL

https://arxiv.org/abs/1710.03370

PDF

https://arxiv.org/pdf/1710.03370

