
MTLE: A Multitask Learning Encoder of Visual Feature Representations for Video and Movie Description

2018-09-19
Oliver Nina, Washington Garcia, Scott Clouse, Alper Yilmaz

Abstract

Learning visual feature representations for video analysis is a daunting task that requires a large number of training samples and a proper generalization framework. Many current state-of-the-art methods for video captioning and movie description rely on simple encoding mechanisms through recurrent neural networks to encode temporal visual information extracted from video data. In this paper, we introduce a novel multitask encoder-decoder framework for automatic semantic description and captioning of video sequences. In contrast to current approaches, our method relies on distinct decoders that train a shared visual encoder in a multitask fashion. Our system does not depend on multiple labels and tolerates scarce training data, working even with datasets where only a single annotation is available per video. Our method shows improved performance over current state-of-the-art methods on several metrics, on both multi-caption and single-caption datasets. To the best of our knowledge, ours is the first method to use a multitask approach for encoding video features. Our method demonstrated its robustness at the Large Scale Movie Description Challenge (LSMDC) 2017, where it won the movie description task and its results were ranked, among those of the other competitors, as the most helpful for the visually impaired.
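The core idea in the abstract, one shared visual encoder trained jointly by several distinct decoders, can be sketched as follows. This is a minimal toy illustration under stated assumptions, not the authors' implementation: the encoder, decoder losses, and targets (caption embedding, attribute labels) are all hypothetical stand-ins.

```python
# Toy sketch of a multitask encoder with distinct decoder losses.
# All functions and targets here are illustrative assumptions, not the
# paper's actual architecture.

def encode(frame_features):
    # Shared "encoder": here simply a mean over per-frame feature vectors.
    # In the paper this role is played by a learned visual encoder.
    n = len(frame_features)
    dim = len(frame_features[0])
    return [sum(f[i] for f in frame_features) / n for i in range(dim)]

def caption_loss(encoding, caption_target):
    # Decoder task 1 (stand-in): squared error against a caption embedding.
    return sum((e - t) ** 2 for e, t in zip(encoding, caption_target))

def attribute_loss(encoding, attribute_target):
    # Decoder task 2 (stand-in): absolute error against attribute labels.
    return sum(abs(e - t) for e, t in zip(encoding, attribute_target))

def multitask_loss(frame_features, caption_target, attribute_target, w=0.5):
    # Both decoder losses are computed from the SAME shared encoding, so in
    # a trainable model the gradients of every task would update one encoder.
    enc = encode(frame_features)
    return w * caption_loss(enc, caption_target) + \
        (1 - w) * attribute_loss(enc, attribute_target)
```

The point of the sketch is only the wiring: because every task loss flows through the single `encode` output, a single visual representation is shaped by multiple supervision signals, which is what lets the encoder benefit from multitask training even when each video has only one annotation per task.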

URL

https://arxiv.org/abs/1809.07257

PDF

https://arxiv.org/pdf/1809.07257

