
Bringing back simplicity and lightliness into neural image captioning

2018-10-15
Jean-Benoit Delbrouck, Stéphane Dupont

Abstract

Neural Image Captioning (NIC), or neural caption generation, has attracted a lot of attention over the last few years. Describing an image in natural language has been an emerging challenge for both the computer vision and natural language processing fields, and a lot of research has focused on driving this task forward with new, creative ideas. So far, the goal has been to maximize scores on automated metrics, and to do so, one has to come up with a plurality of new modules and techniques. Once these add up, the models become complex and resource-hungry. In this paper, we take a small step backwards in order to study an architecture with an interesting trade-off between performance and computational complexity. To do so, we tackle every component of a neural captioning model and propose one or more solutions that lighten the model overall. Our ideas are inspired by two related tasks: Multimodal and Monomodal Neural Machine Translation.
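For context on the kind of encoder-decoder pipeline the abstract refers to, below is a minimal, generic captioning sketch in PyTorch: a pooled CNN image feature initializes a recurrent decoder that predicts the caption token by token. This is an illustration of the standard NIC baseline only, not the authors' proposed lightweight model; the class name `SimpleCaptioner` and all dimensions are hypothetical placeholders.

```python
# Generic NIC baseline sketch (illustrative only, not the paper's model).
# Assumes image features are pre-extracted CNN vectors of size `feat_dim`.
import torch
import torch.nn as nn


class SimpleCaptioner(nn.Module):
    def __init__(self, vocab_size=10000, feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Project the image feature to initialize the decoder's hidden state.
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, captions):
        # image_feats: (batch, feat_dim); captions: (batch, seq_len) token ids
        h0 = torch.tanh(self.init_h(image_feats)).unsqueeze(0)  # (1, batch, hidden)
        emb = self.embed(captions)                               # (batch, seq_len, embed)
        hidden, _ = self.gru(emb, h0)                            # (batch, seq_len, hidden)
        return self.out(hidden)                                  # logits over the vocabulary


# Usage sketch with random tensors standing in for real data.
model = SimpleCaptioner()
feats = torch.randn(4, 2048)                # pooled CNN features for 4 images
caps = torch.randint(0, 10000, (4, 12))     # 4 captions of 12 token ids each
logits = model(feats, caps)                 # shape: (4, 12, 10000)
```

The paper's contribution is to revisit each component of such a pipeline (visual features, attention, decoder, etc.) and replace it with a lighter alternative; the sketch above only fixes the vocabulary of terms.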

URL

https://arxiv.org/abs/1810.06245

PDF

https://arxiv.org/pdf/1810.06245

