
Beyond Caption To Narrative: Video Captioning With Multiple Sentences

2016-05-18
Andrew Shin, Katsunori Ohnishi, Tatsuya Harada

Abstract

Recent advances in the image captioning task have led to increasing interest in the video captioning task. However, most work on video captioning focuses on generating a caption from a single input of aggregated features, which hardly deviates from the image captioning process and does not fully take advantage of the dynamic content present in videos. We attempt to generate video captions that convey richer content by temporally segmenting the video with action localization, generating multiple captions from multiple frames, and connecting them with natural language processing techniques, in order to produce a story-like caption. We show that our proposed method can generate captions that are richer in content and can compete with the state-of-the-art method without explicitly using video-level features as input.
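
The abstract describes a three-stage pipeline: temporally segment the video with action localization, caption frames from each segment, and connect the resulting sentences into a narrative. The sketch below illustrates only that control flow; the names (`narrate_video`, `localize_actions`, `caption_segment`, `connect_captions`), the `Segment` type, and the connective-based joining are illustrative assumptions, not the paper's actual models or NLP method.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Segment:
    """A temporally localized action segment, in seconds (assumed representation)."""
    start: float
    end: float


def connect_captions(captions: List[str]) -> str:
    """Join per-segment captions into a single story-like narrative.

    The paper connects sentences with NLP techniques; this placeholder
    simply orders them temporally and joins them with connectives.
    """
    connectives = ["First,", "Then,", "Finally,"]
    parts = []
    for i, caption in enumerate(captions):
        prefix = connectives[min(i, len(connectives) - 1)]
        body = caption.rstrip(".")
        parts.append(f"{prefix} {body[0].lower()}{body[1:]}.")
    return " ".join(parts)


def narrate_video(
    video_path: str,
    localize_actions: Callable[[str], List[Segment]],
    caption_segment: Callable[[str, Segment], str],
) -> str:
    """Segment the video by action, caption each segment, then connect the captions."""
    segments = sorted(localize_actions(video_path), key=lambda s: s.start)
    captions = [caption_segment(video_path, seg) for seg in segments]
    return connect_captions(captions)


if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs without any video or language models.
    dummy_segments = [Segment(0.0, 4.2), Segment(4.2, 9.8), Segment(9.8, 15.0)]
    dummy_captions = {
        0.0: "A man walks into the kitchen.",
        4.2: "He chops vegetables on a cutting board.",
        9.8: "He stirs the vegetables in a pan.",
    }
    print(narrate_video(
        "example.mp4",
        localize_actions=lambda _path: dummy_segments,
        caption_segment=lambda _path, seg: dummy_captions[seg.start],
    ))
```

In the paper's actual system, the two callables would be backed by an action localization model and a frame-level caption generator; the sketch only shows how per-segment captions could be combined into one multi-sentence description.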

URL

https://arxiv.org/abs/1605.05440

PDF

https://arxiv.org/pdf/1605.05440

