
Image Captioning with Clause-Focused Metrics in a Multi-Modal Setting for Marketing

2019-05-06
Philipp Harzig, Dan Zecha, Rainer Lienhart, Carolin Kaiser, René Schallner

Abstract

Automatically generating descriptive captions for images is a well-researched area in computer vision. However, existing evaluation approaches focus on measuring the similarity between two sentences, disregarding the fine-grained semantics of the captions. In our setting of images depicting persons interacting with branded products, the subject, predicate, object, and the name of the branded product are important evaluation criteria for the generated captions. Generating image captions under these constraints is a new challenge, which we tackle in this work. By simultaneously predicting integer-valued ratings that describe attributes of the human-product interaction, we optimize a deep neural network architecture in a multi-task learning setting, which considerably improves caption quality. Furthermore, we introduce a novel metric that allows us to assess whether the generated captions meet our requirements (i.e., subject, predicate, object, and product name), and we describe a series of experiments on caption quality and on handling annotator disagreement over the image ratings with an approach called soft targets. We also show that our clause-focused metrics are applicable to other image captioning datasets, such as the popular MSCOCO dataset.
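To make the idea of a clause-focused metric concrete, here is a minimal sketch (not the authors' code) of how such a check could work: each caption is assumed to be pre-parsed into four slots (subject, predicate, object, product name, e.g. via a dependency parser plus a brand lexicon, which the paper does not specify here), and a generated caption is scored per slot against the reference.

```python
# Hypothetical sketch of a clause-focused caption check. The slot names
# and the dict-based interface are assumptions for illustration; the
# paper only states that subject, predicate, object, and product name
# are the evaluation criteria.

SLOTS = ("subject", "predicate", "object", "product")

def clause_match(generated: dict, reference: dict) -> dict:
    """Compare the four clause slots of two pre-parsed captions.

    Returns a dict mapping each slot to True if the generated caption's
    slot value matches the reference, else False.
    """
    return {s: generated.get(s) == reference.get(s) for s in SLOTS}

def clause_accuracy(results: list) -> dict:
    """Per-slot accuracy over a list of clause_match results."""
    n = len(results)
    return {s: sum(r[s] for r in results) / n for s in SLOTS}
```

Under this sketch, a caption that gets the interaction right but names the wrong brand would score on subject, predicate, and object while failing the product slot, which a plain sentence-similarity metric would largely overlook.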

URL

http://arxiv.org/abs/1905.01919

PDF

http://arxiv.org/pdf/1905.01919

