
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions

2019-02-14
Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara

Abstract

Current captioning approaches can describe images using black-box architectures whose behavior is hardly controllable or explainable from the outside. As an image can be described in countless ways depending on the goal and the context at hand, a higher degree of controllability is needed to apply captioning algorithms in complex scenarios. In this paper, we introduce a novel framework for image captioning which can generate diverse descriptions by allowing both grounding and controllability. Given a control signal in the form of a sequence or set of image regions, we generate the corresponding caption through a recurrent architecture which predicts textual chunks explicitly grounded on regions, following the constraints of the given control. Experiments are conducted on Flickr30k Entities and on COCO Entities, an extended version of COCO in which we add grounding annotations collected in a semi-automatic manner. Results demonstrate that our method achieves state-of-the-art performance on controllable image captioning, in terms of both caption quality and diversity. Code and annotations are publicly available at: this https URL.
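The abstract's central idea, a recurrent decoder that emits textual chunks while following an ordered control sequence of image regions, can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes a hypothetical ControlledCaptioner module with invented dimensions, a single LSTM cell that is fed the currently active region's features, and a learned "shift" gate that decides when decoding moves on to the next region in the control sequence.

```python
# Hypothetical sketch of region-controlled caption decoding, NOT the
# paper's actual model. All module names and dimensions are assumptions.
import torch
import torch.nn as nn

class ControlledCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, region_dim=2048, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.region_proj = nn.Linear(region_dim, hidden_dim)
        self.lstm = nn.LSTMCell(embed_dim + hidden_dim, hidden_dim)
        self.shift_gate = nn.Linear(hidden_dim, 1)   # advance to next region?
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, regions, tokens):
        # regions: (num_regions, region_dim) ordered control sequence
        # tokens:  (seq_len,) caption token ids (teacher forcing)
        h = torch.zeros(1, self.lstm.hidden_size)
        c = torch.zeros_like(h)
        region_feats = self.region_proj(regions)      # (num_regions, hidden_dim)
        r_idx, logits, shifts = 0, [], []
        for t in range(tokens.size(0)):
            # condition each step on the word embedding plus the features
            # of the region currently being described
            x = torch.cat([self.embed(tokens[t:t + 1]),
                           region_feats[r_idx:r_idx + 1]], dim=-1)
            h, c = self.lstm(x, (h, c))
            shift_prob = torch.sigmoid(self.shift_gate(h))
            shifts.append(shift_prob)
            # greedy chunk boundary: move to the next region when the gate
            # fires (a full model would supervise this with chunk labels)
            if shift_prob.item() > 0.5 and r_idx < regions.size(0) - 1:
                r_idx += 1
            logits.append(self.out(h))
        return torch.cat(logits), torch.cat(shifts)

# toy usage: 3 ordered regions, a 5-token caption
model = ControlledCaptioner(vocab_size=1000)
regions = torch.randn(3, 2048)
tokens = torch.randint(0, 1000, (5,))
logits, shifts = model(regions, tokens)
```

A full model would supervise the shift gate with the grounding annotations the paper collects for COCO Entities, so that chunk boundaries align with annotated region-phrase pairs; this sketch simply thresholds the gate greedily at decode time.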

URL

https://arxiv.org/abs/1811.10652

PDF

https://arxiv.org/pdf/1811.10652

