
Video Captioning with Multi-Faceted Attention

2016-12-01
Xiang Long, Chuang Gan, Gerard de Melo

Abstract

Recently, video captioning has been attracting an increasing amount of interest, due to its potential for improving accessibility and information retrieval. While existing methods rely on different kinds of visual features and model structures, they do not fully exploit relevant semantic information. We present an extensible approach to jointly leverage several sorts of visual features and semantic attributes. Our novel architecture builds on LSTMs for sentence generation, with several attention layers and two multimodal layers. The attention mechanism learns to automatically select the most salient visual features or semantic attributes, and the multimodal layers yield overall representations for the input and outputs of the sentence generation component. Experimental results on the challenging MSVD and MSR-VTT datasets show that our framework outperforms state-of-the-art approaches, while ground-truth-based semantic attributes further elevate the output quality to a near-human level.
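
As a rough illustration of the mechanism the abstract describes, the sketch below implements standard additive (Bahdanau-style) soft attention over two feature banks, one visual and one semantic, and then fuses the resulting context vectors with the decoder state in a simple multimodal layer. All dimensions, weight shapes, and the tanh fusion are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(h, feats, W_h, W_v, w):
    """Additive (Bahdanau-style) attention: score each feature vector
    against the decoder state h, return the weighted context vector."""
    # scores[i] = w . tanh(W_h @ h + W_v @ feats[i])
    scores = np.tanh(W_h @ h + feats @ W_v.T) @ w
    alpha = softmax(scores)          # attention weights over the features
    return alpha @ feats, alpha      # context vector, weights

# Illustrative sizes (hypothetical, not taken from the paper).
d_h, d_f, d_a = 8, 6, 10                  # decoder, feature, attention dims
visual   = rng.standard_normal((5, d_f))  # e.g. per-frame CNN features
semantic = rng.standard_normal((4, d_f))  # e.g. semantic-attribute embeddings
h = rng.standard_normal(d_h)              # current LSTM hidden state

def new_params():
    """Fresh attention weights; each modality gets its own layer."""
    return (rng.standard_normal((d_a, d_h)),
            rng.standard_normal((d_a, d_f)),
            rng.standard_normal(d_a))

ctx_v, _ = attend(h, visual,   *new_params())
ctx_s, _ = attend(h, semantic, *new_params())

# A multimodal layer combines the per-modality contexts with the
# decoder state into one representation for predicting the next word.
W_m = rng.standard_normal((d_h, d_h + 2 * d_f))
fused = np.tanh(W_m @ np.concatenate([h, ctx_v, ctx_s]))
print(fused.shape)  # (8,)
```

At each decoding step the full model would recompute these attention weights, so the generator can shift its focus between frames and semantic attributes from word to word.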

URL

https://arxiv.org/abs/1612.00234

PDF

https://arxiv.org/pdf/1612.00234

