
Task-driven Visual Saliency and Attention-based Visual Question Answering

2017-02-22
Yuetan Lin, Zhangyang Pang, Donghui Wang, Yueting Zhuang

Abstract

Visual question answering (VQA), a classic problem that unifies visual and textual data in a single system, has witnessed great progress since May 2015. Many enlightening VQA works explore image and question encodings and fusion methods in depth, among which attention is the most effective and influential mechanism. Current attention-based methods focus on adequate fusion of visual and textual features, but pay no attention to where people focus when asking questions about the image. Traditional attention-based methods also attach a single scalar value to the feature at each spatial location, which loses much useful information. To remedy these problems, we propose a general method that performs saliency-like pre-selection on overlapped region features via the interrelation of a bidirectional LSTM (BiLSTM), and a novel element-wise multiplication based attention method that captures richer correlation information between visual and textual features. We conduct experiments on the large-scale COCO-VQA dataset; strong empirical results demonstrate the effectiveness of our model.
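The abstract names two ingredients: a BiLSTM-based saliency-like pre-selection over overlapped region features, and an element-wise multiplication based attention that fuses visual and textual features per dimension before scoring, rather than collapsing each region to a single scalar up front. The snippet below is a minimal NumPy sketch of such an element-wise multiplication attention step, not the paper's exact formulation; the projection weights (W_v, W_q, w_a), the tanh nonlinearity, and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def elementwise_mult_attention(region_feats, question_feat, W_v, W_q, w_a):
    """Attend over image regions using element-wise multiplicative fusion.

    region_feats : (R, Dv) visual features for R (e.g. overlapped) regions
    question_feat: (Dq,)   question embedding (e.g. from a BiLSTM)
    W_v, W_q, w_a: hypothetical projection weights, shapes (Dv, D), (Dq, D), (D,)
    """
    v = np.tanh(region_feats @ W_v)        # (R, D) projected visual features
    q = np.tanh(question_feat @ W_q)       # (D,)   projected question feature
    joint = v * q                          # (R, D) element-wise fusion keeps a
                                           #        per-dimension correlation vector
    scores = joint @ w_a                   # (R,)   scalar relevance per region
    alpha = softmax(scores)                # (R,)   attention distribution
    return alpha @ region_feats, alpha     # attended visual feature and weights

if __name__ == "__main__":
    # Toy demo with random features and weights.
    rng = np.random.default_rng(0)
    R, Dv, Dq, D = 9, 512, 256, 128
    attended, alpha = elementwise_mult_attention(
        rng.standard_normal((R, Dv)),
        rng.standard_normal(Dq),
        0.01 * rng.standard_normal((Dv, D)),
        0.01 * rng.standard_normal((Dq, D)),
        0.01 * rng.standard_normal(D),
    )
    print(attended.shape, alpha.round(3))
```

The point of the element-wise product is that the joint representation stays a D-dimensional vector per region until the final scoring step, which is how such methods retain more correlation information than approaches that reduce each location to one scalar immediately.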

URL

https://arxiv.org/abs/1702.06700

PDF

https://arxiv.org/pdf/1702.06700

