
Tripping through time: Efficient Localization of Activities in Videos

2019-04-22
Meera Hahn, Asim Kadav, James M. Rehg, Hans Peter Graf

Abstract

Localizing moments in untrimmed videos via language queries is a new and interesting task that requires the ability to accurately ground language in video. Previous works have approached this task by processing the entire video, often more than once, to localize relevant activities. In the real-world applications this task lends itself to, such as surveillance, efficiency is a pivotal trait of a system. In this paper, we present TripNet, an end-to-end system that uses a gated attention architecture to model fine-grained textual and visual representations in order to align text and video content. Furthermore, TripNet uses reinforcement learning to efficiently localize relevant activity clips in long videos by learning how to intelligently skip around the video, extracting visual features for fewer frames to perform activity classification. In our evaluation on the Charades-STA, ActivityNet Captions, and TACoS datasets, we find that TripNet achieves high accuracy and saves processing time by looking at only 32-41% of the entire video.
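To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch of (a) gated-attention fusion, where a sigmoid gate computed from the sentence embedding modulates per-frame visual features, and (b) a policy head over discrete skip actions that a reinforcement-learning agent could use to jump around the video. This is not the authors' implementation; all names, dimensions, and the action set are hypothetical assumptions for illustration.

```python
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    """Gated-attention fusion (assumed form): the sentence embedding
    produces a sigmoid gate that element-wise modulates each frame's
    visual feature vector."""
    def __init__(self, text_dim: int, visual_dim: int):
        super().__init__()
        self.gate = nn.Linear(text_dim, visual_dim)

    def forward(self, text_emb, visual_feats):
        # text_emb: (batch, text_dim); visual_feats: (batch, frames, visual_dim)
        g = torch.sigmoid(self.gate(text_emb)).unsqueeze(1)  # (batch, 1, visual_dim)
        return visual_feats * g  # gate broadcasts over the frame axis

class SkipPolicy(nn.Module):
    """Policy over hypothetical discrete actions, e.g. shift the
    observation window forward/backward by varying amounts or emit
    the current window as the predicted clip."""
    def __init__(self, state_dim: int, num_actions: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

# Hypothetical single step of the localization loop.
text_emb = torch.randn(1, 512)            # sentence encoding (e.g. from an RNN)
window = torch.randn(1, 16, 1024)         # visual features for the current clip window
fused = GatedAttention(512, 1024)(text_emb, window)
state = fused.mean(dim=1)                 # pooled state for the policy
action = SkipPolicy(1024)(state).sample() # e.g. 0: jump back, ..., 4: emit window
```

Under this sketch, the policy would be trained with a policy-gradient method, with a reward that presumably trades off alignment quality (e.g. temporal IoU with the ground-truth clip) against the number of frames observed; only the frames the agent visits need visual features extracted, which is where the claimed 32-41% viewing cost comes from.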

URL

http://arxiv.org/abs/1904.09936

PDF

http://arxiv.org/pdf/1904.09936
