
What is not where: the challenge of integrating spatial representations into deep learning architectures


Abstract

This paper examines to what degree current deep learning architectures for image caption generation capture spatial language. On the basis of the evaluation of examples of generated captions from the literature, we argue that systems capture what objects are in the image data but not where these objects are located: the captions generated by these systems are the output of a language model conditioned on the output of an object detector that cannot capture fine-grained location information. Although language models provide useful knowledge for image captions, we argue that deep learning image captioning architectures should also model geometric relations between objects.
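The architectural point in the abstract is that current captioners condition a language model on object-detector output (object labels and, at best, coarse region features), so fine-grained location information is lost. As a rough illustration of what "modelling geometric relations between objects" could mean in practice, the sketch below computes simple pairwise spatial-relation features from detected bounding boxes. This is not the paper's method; the box format, feature set, and function names are illustrative assumptions.

```python
# A minimal sketch (assumption, not from the paper) of pairwise geometric
# relation features between two detected objects, of the kind a captioning
# model could consume in addition to object labels.
import math
from typing import Dict, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)


def centre(box: Box) -> Tuple[float, float]:
    """Centre point of a bounding box."""
    x_min, y_min, x_max, y_max = box
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0


def pairwise_geometry(box_a: Box, box_b: Box) -> Dict[str, float]:
    """Simple geometric relation features between two detected objects."""
    (xa, ya), (xb, yb) = centre(box_a), centre(box_b)
    wa, ha = box_a[2] - box_a[0], box_a[3] - box_a[1]
    wb, hb = box_b[2] - box_b[0], box_b[3] - box_b[1]
    return {
        "dx": (xb - xa) / wa,                    # horizontal offset, normalised by a's width
        "dy": (yb - ya) / ha,                    # vertical offset, normalised by a's height
        "distance": math.hypot(xb - xa, yb - ya),
        "angle": math.atan2(yb - ya, xb - xa),   # direction from a to b, in radians
        "scale_ratio": (wb * hb) / (wa * ha),    # relative object size
    }


# Example: a "person" box to the left of a "bicycle" box (coordinates are made up).
person = (50.0, 40.0, 110.0, 200.0)
bicycle = (130.0, 120.0, 260.0, 210.0)
print(pairwise_geometry(person, bicycle))
```

Features like these could be concatenated with region features before the language model, so that the decoder sees not only what was detected but also where objects sit relative to one another.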

URL

https://arxiv.org/abs/1807.08133

PDF

https://arxiv.org/pdf/1807.08133

