
DualNet: Domain-Invariant Network for Visual Question Answering

2017-05-04
Kuniaki Saito, Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada

Abstract

The visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific contents within the image be understood as indicated by the linguistic context of the question, in order to generate accurate answers. It is therefore critical to build an efficient joint embedding of images and text. We implement DualNet, which takes full advantage of the discriminative power of both image and textual features by separately performing two operations on them. Building an ensemble of DualNets further boosts performance. Contrary to common belief, our method proved effective on both real images and abstract scenes, despite the significantly different properties of the respective domains. Our method outperformed previous state-of-the-art methods in the real images category even without explicitly employing an attention mechanism, and also outperformed our own state-of-the-art method in the abstract scenes category, which recently won first place in the VQA Challenge 2016.
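To make the "two operations" idea concrete, below is a minimal PyTorch sketch of the kind of fusion the abstract describes: image and question features are projected into a common space, then combined with both element-wise addition and element-wise multiplication so that each operation's discriminative signal is kept. The layer sizes, the choice to sum the two fused results, and the classifier head are illustrative assumptions for this sketch, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DualNetFusion(nn.Module):
    """Hypothetical sketch of DualNet-style fusion: two parallel
    element-wise operations on jointly embedded image/question features."""

    def __init__(self, img_dim=2048, txt_dim=1024, hidden_dim=1024, num_answers=1000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)  # embed image features
        self.txt_proj = nn.Linear(txt_dim, hidden_dim)  # embed question features
        self.classifier = nn.Sequential(
            nn.Tanh(),
            nn.Linear(hidden_dim, num_answers),  # score candidate answers
        )

    def forward(self, img_feat, txt_feat):
        v = self.img_proj(img_feat)
        q = self.txt_proj(txt_feat)
        # Two separate operations on the same embedded features:
        added = v + q        # element-wise sum
        multiplied = v * q   # element-wise product
        # Merging by summation is an assumption of this sketch.
        fused = added + multiplied
        return self.classifier(fused)

# Usage with assumed feature sizes (e.g., CNN image feature, LSTM question feature):
model = DualNetFusion()
img = torch.randn(1, 2048)
txt = torch.randn(1, 1024)
logits = model(img, txt)  # answer scores over the vocabulary
```

The ensembling mentioned in the abstract would then amount to training several such networks and averaging their answer scores, a standard way to boost VQA accuracy.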

URL

https://arxiv.org/abs/1606.06108

PDF

https://arxiv.org/pdf/1606.06108

