
Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining

2018-08-01
Yundong Zhang, Juan Carlos Niebles, Alvaro Soto

Abstract

A key aspect of VQA models that are interpretable is their ability to ground their answers to relevant regions in the image. Current approaches with this capability rely on supervised learning and human-annotated groundings to train attention mechanisms inside the VQA architecture. Unfortunately, obtaining human annotations specific for visual grounding is difficult and expensive. In this work, we demonstrate that we can effectively train a VQA architecture with grounding supervision that can be automatically obtained from available region descriptions and object annotations. We also show that our model trained with this mined supervision generates visual groundings that achieve a higher correlation with respect to manually-annotated groundings, while achieving state-of-the-art VQA accuracy.
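The abstract describes training the VQA attention mechanism against grounding supervision mined from region descriptions and object annotations rather than human-annotated groundings. Below is a minimal sketch of what such a training step could look like; the exact loss used in the paper is not specified in the abstract, so the model interface, the KL-based attention loss, and the weighting factor `lambda_att` are all illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code): VQA training with an auxiliary
# attention-supervision loss derived from automatically mined groundings.
import torch
import torch.nn.functional as F

def vqa_training_step(model, images, questions, answers, mined_grounding,
                      lambda_att=0.5):
    """One training step.

    mined_grounding: (batch, num_regions) distribution over image regions,
    obtained automatically from region descriptions / object annotations
    that match the question-answer pair (hypothetical preprocessing).
    """
    # Hypothetical model that returns answer logits together with the
    # attention weights it placed over image regions.
    answer_logits, attention = model(images, questions)  # attention: (batch, num_regions)

    # Standard VQA objective: predict the correct answer.
    loss_vqa = F.cross_entropy(answer_logits, answers)

    # Auxiliary grounding objective: push the model's attention toward the
    # mined supervision (KL divergence between the two region distributions).
    loss_att = F.kl_div(attention.clamp_min(1e-8).log(), mined_grounding,
                        reduction="batchmean")

    return loss_vqa + lambda_att * loss_att
```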

URL

https://arxiv.org/abs/1808.00265

PDF

https://arxiv.org/pdf/1808.00265

