
Joint Event Detection and Description in Continuous Video Streams

2018-12-25
Huijuan Xu, Boyang Li, Vasili Ramanishka, Leonid Sigal, Kate Saenko

Abstract

Dense video captioning is a fine-grained video understanding task that involves two sub-problems: localizing distinct events in a long video stream, and generating captions for the localized events. We propose the Joint Event Detection and Description Network (JEDDi-Net), which solves the dense video captioning task in an end-to-end fashion. Our model continuously encodes the input video stream with three-dimensional convolutional layers, proposes variable-length temporal events based on pooled features, and generates their captions. Proposal features are extracted within each proposal segment through 3D Segment-of-Interest pooling from shared video feature encoding. In order to explicitly model temporal relationships between visual events and their captions in a single video, we also propose a two-level hierarchical captioning module that keeps track of context. On the large-scale ActivityNet Captions dataset, JEDDi-Net demonstrates improved results as measured by standard metrics. We also present the first dense captioning results on the TACoS-MultiLevel dataset.
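The Segment-of-Interest pooling step is what lets variable-length proposals share a single convolutional encoding of the whole stream: each proposed segment, whatever its duration, is pooled down to a fixed-size feature that the captioning module can consume. Below is a minimal NumPy sketch of the temporal half of that idea. It is an illustration under stated assumptions, not the authors' implementation: the function name `soi_pool`, the bin count, and the reduction to 1D pooling over time are all assumptions here (the paper's 3D SoI pooling operates on full spatio-temporal feature volumes).

```python
import numpy as np

def soi_pool(features: np.ndarray, start: int, end: int, num_bins: int = 4) -> np.ndarray:
    """Pool a variable-length temporal segment to a fixed-size feature.

    features: (channels, time) encoding of the full video stream.
    start, end: frame indices (end exclusive) of one event proposal.
    num_bins: fixed temporal resolution of the pooled output.
    """
    assert end > start, "proposal must span at least one time step"
    segment = features[:, start:end]          # (C, L); L differs per proposal
    length = segment.shape[1]
    # Split the segment into num_bins roughly equal sub-windows
    # and max-pool each one, mirroring RoI pooling along time.
    edges = np.linspace(0, length, num_bins + 1).astype(int)
    pooled = np.stack(
        [segment[:, edges[i]:max(edges[i] + 1, edges[i + 1])].max(axis=1)
         for i in range(num_bins)],
        axis=1,
    )
    return pooled                             # (C, num_bins), fixed size

# Two proposals of very different lengths map to the same pooled shape,
# so both can feed the same downstream captioning layers.
feats = np.random.rand(512, 300)              # e.g. C3D-style features over 300 steps
print(soi_pool(feats, 10, 55).shape)          # (512, 4)
print(soi_pool(feats, 120, 290).shape)        # (512, 4)
```

The fixed output shape is the design point: it decouples the proposal module, which is free to predict segments of any length, from the caption decoder, which needs a constant-size input.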

URL

https://arxiv.org/abs/1802.10250

PDF

https://arxiv.org/pdf/1802.10250

