
Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data

2016-04-27
Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Trevor Darrell

Abstract

While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model’s ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.
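The core idea of transferring knowledge between semantically similar concepts can be illustrated with a minimal sketch: a novel word with no paired image-caption data borrows the caption model's output weights from its nearest neighbour in a word-embedding space learned on the external text corpus. This is only an illustration of the transfer idea, not the authors' implementation; the toy vocabulary, embedding values, dimensions, and random weights below are placeholders.

```python
import numpy as np

# Toy word embeddings (in practice these would come from a word2vec/GloVe-style
# model trained on the external text corpus). "otter" stands in for a novel
# word with no paired image-caption data; "dog" and "cat" stand in for words
# that do appear in the paired image-sentence corpus.
embeddings = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "cat":   np.array([0.8, 0.2, 0.1]),
    "otter": np.array([0.85, 0.15, 0.05]),
}
paired_vocab = ["dog", "cat"]   # words seen in image-sentence pairs
hidden_dim = 4

# Output-layer weights of a caption model, one weight vector per known word
# (random here; in a real system these are learned on the paired corpus).
rng = np.random.default_rng(0)
caption_weights = {w: rng.normal(size=hidden_dim) for w in paired_vocab}

def closest_paired_word(novel_word):
    """Return the paired-data word most similar to the novel word."""
    v = embeddings[novel_word]
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(paired_vocab, key=lambda w: cos(v, embeddings[w]))

# Transfer: initialize the novel word's output weights by copying those of its
# nearest neighbour, so the decoder can place the novel word in the same kinds
# of sentence contexts as the semantically similar known word.
novel = "otter"
donor = closest_paired_word(novel)
caption_weights[novel] = caption_weights[donor].copy()
print(f"transferred caption weights for '{novel}' from '{donor}'")
```

In the full model this transfer is combined with a visual classifier trained on a large object recognition dataset, so the novel word is both recognizable in the image and usable by the language decoder.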


URL

https://arxiv.org/abs/1511.05284

PDF

https://arxiv.org/pdf/1511.05284

