papers AI Learner

On the Flip Side: Identifying Counterexamples in Visual Question Answering

2018-07-24
Gabriel Grand, Aron Szanto, Yoon Kim, Alexander Rush

Abstract

Visual question answering (VQA) models respond to open-ended natural language questions about images. While VQA is an increasingly popular area of research, it is unclear to what extent current VQA architectures learn key semantic distinctions between visually-similar images. To investigate this question, we explore a reformulation of the VQA task that challenges models to identify counterexamples: images that result in a different answer to the original question. We introduce two methods for evaluating existing VQA models against a supervised counterexample prediction task, VQA-CX. While our models surpass existing benchmarks on VQA-CX, we find that the multimodal representations learned by an existing state-of-the-art VQA model do not meaningfully contribute to performance on this task. These results call into question the assumption that successful performance on the VQA benchmark is indicative of general visual-semantic reasoning abilities.

URL

https://arxiv.org/abs/1806.00857

PDF

https://arxiv.org/pdf/1806.00857
