
Learning a Text-Video Embedding from Incomplete and Heterogeneous Data

2018-04-07
Antoine Miech, Ivan Laptev, Josef Sivic

Abstract

Joint understanding of video and language is an active research area with many applications. Prior work in this domain typically relies on learning text-video embeddings. One difficulty with this approach, however, is the lack of large-scale annotated video-caption datasets for training. To address this issue, we aim to learn text-video embeddings from heterogeneous data sources. To this end, we propose a Mixture-of-Embedding-Experts (MEE) model with the ability to handle missing input modalities during training. As a result, our framework can learn improved text-video embeddings simultaneously from image and video datasets. We also show that MEE generalizes to other input modalities such as face descriptors. We evaluate our method on the task of video retrieval and report results on the MPII Movie Description and MSR-VTT datasets. The proposed MEE model demonstrates significant improvements and outperforms previously reported methods on both text-to-video and video-to-text retrieval tasks. Code is available at: this https URL
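To make the MEE idea concrete, here is a minimal PyTorch sketch of a mixture-of-embedding-experts scorer: one embedding expert per video modality, a matching text projection per expert, and text-conditioned gating weights renormalized over the modalities that are actually present. All names and dimensions (e.g. `video_dims`, `embed_dim=512`) are illustrative assumptions, not the authors' released implementation; missing modalities are assumed to arrive as zero-filled placeholder tensors plus a presence mask.

```python
# Minimal sketch of a Mixture-of-Embedding-Experts (MEE) style scorer.
# Hypothetical dimensions and names; not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MEE(nn.Module):
    def __init__(self, text_dim, video_dims, embed_dim=512):
        # video_dims: dict of modality name -> input feature dimension,
        # e.g. {"appearance": 2048, "motion": 1024, "audio": 128, "face": 128}
        super().__init__()
        self.modalities = list(video_dims)
        # One embedding "expert" per video modality.
        self.video_experts = nn.ModuleDict(
            {m: nn.Linear(d, embed_dim) for m, d in video_dims.items()})
        # One text projection per expert, so text is compared expert-wise.
        self.text_experts = nn.ModuleDict(
            {m: nn.Linear(text_dim, embed_dim) for m in video_dims})
        # Gating: predict one mixture weight per expert from the text.
        self.gate = nn.Linear(text_dim, len(self.modalities))

    def forward(self, text, video, mask):
        # text: (B, text_dim); video[m]: (B, video_dims[m]);
        # mask: (B, M) with 1 where a modality is present (at least one per row).
        logits = self.gate(text)                             # (B, M)
        # Renormalize mixture weights over available modalities only:
        # masked experts get -inf logits, hence exactly zero weight.
        logits = logits.masked_fill(mask == 0, float("-inf"))
        weights = F.softmax(logits, dim=1)                   # (B, M)
        sims = []
        for m in self.modalities:
            t = F.normalize(self.text_experts[m](text), dim=1)
            v = F.normalize(self.video_experts[m](video[m]), dim=1)
            sims.append((t * v).sum(dim=1))                  # cosine sim per expert
        sims = torch.stack(sims, dim=1)                      # (B, M)
        # Missing experts contribute nothing: their weights are already zero.
        return (weights * sims).sum(dim=1)                   # (B,) text-video score

# Example: score 4 caption-video pairs with the "audio" stream missing.
model = MEE(text_dim=300, video_dims={"appearance": 2048, "audio": 128})
text = torch.randn(4, 300)
video = {"appearance": torch.randn(4, 2048), "audio": torch.zeros(4, 128)}
mask = torch.tensor([[1., 0.]] * 4)  # audio absent for every clip
scores = model(text, video, mask)    # shape (4,), higher = better match
```

For retrieval, such pairwise scores would typically be computed for all caption-video pairs in a batch and trained with a ranking loss, so that matching pairs score higher than mismatched ones.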

URL

https://arxiv.org/abs/1804.02516

PDF

https://arxiv.org/pdf/1804.02516

