
Saliency-Guided Attention Network for Image-Sentence Matching

2019-04-20
Zhong Ji, Haoran Wang, Jungong Han, Yanwei Pang

Abstract

This paper studies the task of matching images and sentences, where learning appropriate representations across the multi-modal data is the main challenge. Unlike previous approaches that predominantly deploy symmetrical architectures to represent both modalities, we propose the Saliency-guided Attention Network (SAN), which asymmetrically employs visual and textual attention modules to learn the fine-grained correlation between vision and language. The proposed SAN comprises three components: a saliency detector, a Saliency-weighted Visual Attention (SVA) module, and a Saliency-guided Textual Attention (STA) module. Concretely, the saliency detector provides visual saliency information as guidance for the two attention modules. SVA is designed to leverage this saliency information to improve the discriminative power of the visual representations. By fusing the visual information from SVA with the textual information as multi-modal guidance, STA learns discriminative textual representations that are highly sensitive to visual clues. Extensive experiments demonstrate that SAN substantially improves on state-of-the-art results on the Flickr30K and MSCOCO benchmarks.
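
The abstract only outlines the architecture, so the following is a minimal PyTorch sketch of how a saliency-weighted visual attention step and a visually guided textual attention step could be wired together. The module names follow the abstract, but every layer choice, dimension, fusion operation, and the final cosine-similarity scoring here are assumptions made for illustration; the paper's actual design is in the PDF linked below.

```python
# Hypothetical sketch of the two attention modules named in the abstract.
# All internals are assumptions, not the paper's specification.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SaliencyWeightedVisualAttention(nn.Module):
    """SVA (sketch): re-weight image region features by a saliency map."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, regions, saliency):
        # regions: (B, R, D) region features; saliency: (B, R) saliency scores.
        weights = F.softmax(saliency, dim=-1)                     # normalize over regions
        attended = (weights.unsqueeze(-1) * regions).sum(dim=1)   # (B, D) pooled visual vector
        return self.proj(attended)


class SaliencyGuidedTextualAttention(nn.Module):
    """STA (sketch): attend over word features using a visual guidance vector."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, words, guidance):
        # words: (B, T, D) word features; guidance: (B, D) saliency-weighted visual vector.
        g = guidance.unsqueeze(1).expand(-1, words.size(1), -1)          # (B, T, D)
        scores = self.score(torch.cat([words, g], dim=-1)).squeeze(-1)   # (B, T)
        weights = F.softmax(scores, dim=-1)
        return (weights.unsqueeze(-1) * words).sum(dim=1)                # (B, D) sentence vector


if __name__ == "__main__":
    B, R, T, D = 2, 36, 12, 256  # batch, regions, words, feature dim (arbitrary)
    sva = SaliencyWeightedVisualAttention(D)
    sta = SaliencyGuidedTextualAttention(D)
    v = sva(torch.randn(B, R, D), torch.randn(B, R))  # saliency-weighted image vector
    t = sta(torch.randn(B, T, D), v)                  # visually guided sentence vector
    # One plausible way to score the image-sentence pair for matching:
    print(F.cosine_similarity(v, t, dim=-1))
```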

URL

http://arxiv.org/abs/1904.09471

PDF

http://arxiv.org/pdf/1904.09471

