From Images to Sentences through Scene Description Graphs using Commonsense Reasoning and Knowledge

2015-11-10
Somak Aditya, Yezhou Yang, Chitta Baral, Cornelia Fermuller, Yiannis Aloimonos

Abstract

In this paper, we propose the construction of linguistic descriptions of images. This is achieved through the extraction of scene description graphs (SDGs) from visual scenes using an automatically constructed knowledge base. SDGs are constructed using both vision and reasoning. Specifically, commonsense reasoning is applied on (a) detections obtained from existing perception methods on given images, (b) a "commonsense" knowledge base constructed using natural language processing of image annotations and (c) lexical ontological knowledge from resources such as WordNet. Amazon Mechanical Turk (AMT)-based evaluations on the Flickr8k, Flickr30k and MS-COCO datasets show that, in most cases, sentences auto-constructed from SDGs obtained by our method give a more relevant and thorough description of an image than a recent state-of-the-art image-captioning approach. Our image-sentence alignment evaluation results are also comparable to those of recent state-of-the-art approaches.
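To make the SDG idea more concrete, below is a minimal, hypothetical sketch of how a scene description graph and a template-based sentence realizer might be represented. The class and function names are illustrative assumptions for this post, not the authors' implementation, which combines perception, a commonsense knowledge base and WordNet.

```python
# Illustrative sketch only (not the authors' code): an SDG modeled as entities
# and events, plus a trivial template-based sentence realizer.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Entity:
    name: str                                   # object label from a perception module, e.g. "dog"
    attributes: List[str] = field(default_factory=list)  # e.g. ["brown"]

@dataclass
class Event:
    verb: str                                   # action inferred via commonsense reasoning, e.g. "running"
    agent: Entity
    preposition: str = ""                       # spatial relation, e.g. "on"
    location: Optional[Entity] = None

def realize(event: Event) -> str:
    """Render a single SDG event as a plain English sentence."""
    subject = " ".join(event.agent.attributes + [event.agent.name])
    sentence = f"A {subject} is {event.verb}"
    if event.location is not None:
        sentence += f" {event.preposition} the {event.location.name}"
    return sentence + "."

dog = Entity("dog", ["brown"])
grass = Entity("grass")
print(realize(Event("running", dog, "on", grass)))
# -> "A brown dog is running on the grass."
```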

URL

https://arxiv.org/abs/1511.03292

PDF

https://arxiv.org/pdf/1511.03292

