
OBJ2TEXT: Generating Visually Descriptive Language from Object Layouts

2017-07-22
Xuwang Yin, Vicente Ordonez

Abstract

Generating captions for images is a task that has recently received considerable attention. In this work we focus on caption generation for abstract scenes, or object layouts, where the only information provided is a set of objects and their locations. We propose OBJ2TEXT, a sequence-to-sequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show that our model, despite encoding object layouts as a sequence, can represent spatial relationships between objects and generate descriptions that are globally coherent and semantically relevant. We test our approach on the task of object-layout captioning, using only object annotations as input. We additionally show that our model, combined with a state-of-the-art object detector, improves an image captioning model from 0.863 to 0.950 (CIDEr score) on the test benchmark of the standard MS-COCO Captioning task.

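The abstract describes the architecture only at a high level, so below is a minimal PyTorch sketch of the encoder-decoder idea it outlines: an LSTM that reads a sequence of (object category, location) pairs, and an LSTM language model that decodes a caption from the resulting state. This is not the authors' implementation; all module names, embedding sizes, and the exact way box coordinates are fed to the encoder are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of the OBJ2TEXT idea: encode a sequence of
# (object category, location) pairs with an LSTM, then decode a caption with an LSTM
# language model. Dimensions and the location encoding are assumptions.
import torch
import torch.nn as nn


class Obj2TextSketch(nn.Module):
    def __init__(self, num_categories, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Object-layout encoder: embed each object category, project its
        # bounding-box coordinates (x, y, w, h), concatenate, and run an LSTM.
        self.obj_embed = nn.Embedding(num_categories, embed_dim)
        self.loc_proj = nn.Linear(4, embed_dim)
        self.encoder = nn.LSTM(2 * embed_dim, hidden_dim, batch_first=True)

        # Caption decoder: an LSTM language model initialized with the encoder's
        # final state, producing a distribution over the caption vocabulary.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, obj_ids, obj_boxes, caption_in):
        # obj_ids: (B, N) category indices; obj_boxes: (B, N, 4) normalized boxes
        # caption_in: (B, T) caption word indices shifted right (teacher forcing)
        enc_in = torch.cat([self.obj_embed(obj_ids), self.loc_proj(obj_boxes)], dim=-1)
        _, (h, c) = self.encoder(enc_in)              # keep only the final state
        dec_out, _ = self.decoder(self.word_embed(caption_in), (h, c))
        return self.out(dec_out)                      # (B, T, vocab_size) logits


# Tiny smoke test with random data.
model = Obj2TextSketch(num_categories=80, vocab_size=1000)
obj_ids = torch.randint(0, 80, (2, 5))                # 2 layouts, 5 objects each
obj_boxes = torch.rand(2, 5, 4)                       # normalized (x, y, w, h)
caption_in = torch.randint(0, 1000, (2, 12))          # 12-token captions
print(model(obj_ids, obj_boxes, caption_in).shape)    # torch.Size([2, 12, 1000])
```
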
URL

https://arxiv.org/abs/1707.07102

PDF

https://arxiv.org/pdf/1707.07102

