
Motion-Appearance Co-Memory Networks for Video Question Answering

2018-03-29
Jiyang Gao, Runzhou Ge, Kan Chen, Ram Nevatia

Abstract

Video Question Answering (QA) is an important task in understanding video temporal structure. We observe that there are three unique attributes of video QA compared with image QA: (1) it deals with long sequences of images containing richer information not only in quantity but also in variety; (2) motion and appearance information are usually correlated and can provide useful attention cues to each other; (3) different questions require different numbers of frames to infer the answer. Based on these observations, we propose a motion-appearance co-memory network for video QA. Our network builds on concepts from the Dynamic Memory Network (DMN) and introduces new mechanisms for video QA. Specifically, there are three salient aspects: (1) a co-memory attention mechanism that utilizes cues from both motion and appearance to generate attention; (2) a temporal conv-deconv network to generate multi-level contextual facts; (3) a dynamic fact ensemble method to construct the temporal representation dynamically for different questions. We evaluate our method on the TGIF-QA dataset, and our results significantly outperform the state of the art on all four tasks of TGIF-QA.
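The co-memory attention is the headline mechanism: the attention over appearance facts is conditioned on cues from both the appearance and motion memories (plus the question), and symmetrically for motion facts. The following is a minimal, hypothetical PyTorch sketch of one such attention/memory-update step; the module name, tensor shapes, scoring MLPs, and GRU-based memory update are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CoMemoryAttention(nn.Module):
    """One co-memory attention step (illustrative sketch).

    Attention over each modality's facts is scored using cues from BOTH
    memories and the question; each memory is then updated from its own
    attended read vector.
    """
    def __init__(self, dim):
        super().__init__()
        self.score_a = nn.Sequential(nn.Linear(4 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        self.score_m = nn.Sequential(nn.Linear(4 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        self.update_a = nn.GRUCell(dim, dim)  # appearance memory update
        self.update_m = nn.GRUCell(dim, dim)  # motion memory update

    def forward(self, facts_a, facts_m, mem_a, mem_m, q):
        # facts_a, facts_m: (T, dim) per-frame appearance / motion facts
        # mem_a, mem_m, q:  (dim,) current memories and question embedding
        T = facts_a.size(0)
        cues = torch.cat([mem_a, mem_m, q], dim=-1).unsqueeze(0).expand(T, -1)
        alpha_a = torch.softmax(
            self.score_a(torch.cat([facts_a, cues], dim=-1)).squeeze(-1), dim=0)
        alpha_m = torch.softmax(
            self.score_m(torch.cat([facts_m, cues], dim=-1)).squeeze(-1), dim=0)
        read_a = (alpha_a.unsqueeze(-1) * facts_a).sum(dim=0)  # attended appearance fact
        read_m = (alpha_m.unsqueeze(-1) * facts_m).sum(dim=0)  # attended motion fact
        mem_a = self.update_a(read_a.unsqueeze(0), mem_a.unsqueeze(0)).squeeze(0)
        mem_m = self.update_m(read_m.unsqueeze(0), mem_m.unsqueeze(0)).squeeze(0)
        return mem_a, mem_m
```

In a full model this step would be iterated for several memory hops over multi-level contextual facts, with the final memories fed to the answer decoder; those details are omitted here.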

URL

https://arxiv.org/abs/1803.10906

PDF

https://arxiv.org/pdf/1803.10906
