
Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning

2018-04-15
Xin Wang, Yuan-Fang Wang, William Yang Wang

Abstract

A major challenge for video captioning is to combine audio and visual cues. Existing multi-modal fusion methods have shown encouraging results in video understanding. However, the temporal structures of multiple modalities at different granularities are rarely explored, and how to selectively fuse the multi-modal representations at different levels of detail remains uncharted. In this paper, we propose a novel hierarchically aligned cross-modal attention (HACA) framework to learn and selectively fuse both global and local temporal dynamics of different modalities. Furthermore, for the first time, we validate the superior performance of deep audio features on the video captioning task. Finally, our HACA model significantly outperforms the previous best systems and achieves new state-of-the-art results on the widely used MSR-VTT dataset.
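To make the fusion idea concrete, below is a minimal PyTorch sketch of one cross-modal attention step: a decoder state attends separately over audio and visual feature sequences, and a learned gate selectively fuses the two context vectors. This is an illustrative sketch under stated assumptions, not the paper's implementation: the class names, the additive-attention form, the dimensions, and the gating scheme are all hypothetical, and the actual HACA model additionally aligns attentions at both global and local granularities with hierarchical encoders and decoders.

```python
# Illustrative sketch of cross-modal attention + gated fusion.
# NOT the authors' HACA implementation; all names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAttention(nn.Module):
    """Additive attention: a decoder state queries one modality's features."""
    def __init__(self, query_dim, feat_dim, attn_dim):
        super().__init__()
        self.w_q = nn.Linear(query_dim, attn_dim)
        self.w_f = nn.Linear(feat_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, query, feats):
        # query: (B, query_dim); feats: (B, T, feat_dim)
        scores = self.v(torch.tanh(self.w_q(query).unsqueeze(1) + self.w_f(feats)))
        alpha = F.softmax(scores, dim=1)      # attention weights over time steps
        return (alpha * feats).sum(dim=1)     # (B, feat_dim) context vector

class GatedFusion(nn.Module):
    """Selectively fuse audio and visual contexts with a learned sigmoid gate."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, ctx_a, ctx_v):
        g = torch.sigmoid(self.gate(torch.cat([ctx_a, ctx_v], dim=-1)))
        return g * ctx_a + (1 - g) * ctx_v

# Toy usage: one decoding step over local (frame-level) features per modality.
B, T, D = 2, 20, 256
audio_feats, visual_feats = torch.randn(B, T, D), torch.randn(B, T, D)
h_t = torch.randn(B, D)  # hypothetical decoder hidden state

attend_a = CrossModalAttention(D, D, 128)
attend_v = CrossModalAttention(D, D, 128)
fuse = GatedFusion(D)
context = fuse(attend_a(h_t, audio_feats), attend_v(h_t, visual_feats))
print(context.shape)  # torch.Size([2, 256])
```

In this sketch the gate decides, per dimension, how much of the audio context versus the visual context to pass to the next decoding step, which is one simple way to realize the "selective fusion" the abstract describes.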

URL

https://arxiv.org/abs/1804.05448

PDF

https://arxiv.org/pdf/1804.05448

