Visual Storytelling

2016-04-13
Ting-Hao (Kenneth) Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, Margaret Mitchell

Abstract

We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling. The first release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211 sequences, aligned to both descriptive (caption) and story language. We establish several strong baselines for the storytelling task, and motivate an automatic metric to benchmark progress. Modelling concrete description as well as figurative and social language, as provided in this dataset and the storytelling task, has the potential to move artificial intelligence from basic understandings of typical visual scenes towards more and more human-like understanding of grounded event structure and subjective expression.
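To make the data layout described in the abstract concrete, here is a minimal sketch of how one sequence in such a dataset might be represented: an ordered run of photos, each paired with an isolated descriptive caption and a story sentence that only makes sense in sequence. The class names, field names, and the example content below are illustrative assumptions, not the actual schema of the released SIND v.1 files; the five-photo sequence length reflects the paper's setup but is not enforced here.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StoryPhoto:
    photo_id: str        # identifier of the source photo (hypothetical field name)
    caption: str         # descriptive language: the photo described in isolation
    story_sentence: str  # story language: this photo's sentence within the narrative

@dataclass
class StorySequence:
    story_id: str
    photos: List[StoryPhoto]  # ordered; sequences in the paper are 5 photos long

    def story_text(self) -> str:
        """Concatenate the per-photo story sentences into the full story."""
        return " ".join(p.story_sentence for p in self.photos)

# Example usage with made-up content:
seq = StorySequence(
    story_id="0001",
    photos=[
        StoryPhoto("p1", "A group stands by a lake.", "We met up at the lake early."),
        StoryPhoto("p2", "A man holds a fishing rod.", "Dad couldn't wait to start fishing."),
    ],
)
print(seq.story_text())
```

The point of keeping both annotations on each photo is that the same image gets two very different kinds of language, which is what lets the dataset contrast literal description with narrative, figurative, and social expression.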

URL

https://arxiv.org/abs/1604.03968

PDF

https://arxiv.org/pdf/1604.03968

