
End-to-End Learning Using Cycle Consistency for Image-to-Caption Transformations

2019-03-25
Keisuke Hagiwara, Yusuke Mukuta, Tatsuya Harada

Abstract

So far, research on generating captions from images has been carried out from the viewpoint that a caption should hold sufficient information about an image. If it is possible to regenerate an image close to the input image from a generated caption, i.e., if the caption contains enough natural-language information to reproduce the image, then the caption can be considered faithful to the image. To make such regeneration possible, learning with a cycle-consistency loss is effective. In this study, we propose a method for generating captions by learning end-to-end mutual transformations between images and texts. To evaluate our method, we perform comparative experiments with and without the cycle consistency. The results are assessed by automatic metrics and by crowdsourcing, demonstrating that our proposed method is effective.
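To make the idea concrete, below is a minimal PyTorch sketch of the kind of cycle-consistent objective the abstract describes: an image is captioned, the caption is used to regenerate an image, and the reconstruction error is added to the usual captioning loss. The module names (`Img2Txt`, `Txt2Img`), the toy architectures, the soft-token trick for keeping the text path differentiable, and the weight `lam` are all illustrative assumptions, not the authors' implementation.

```python
# Sketch of cycle-consistent image-to-caption training (illustrative only).
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN = 1000, 64, 128

class Img2Txt(nn.Module):
    """Toy captioner: image -> per-step token logits."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, HIDDEN))
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, img, seq_len=10):
        h = self.cnn(img)                              # (B, HIDDEN)
        # Repeat one state per step; a real model would use an RNN/Transformer.
        return self.head(h).unsqueeze(1).expand(-1, seq_len, -1)

class Txt2Img(nn.Module):
    """Toy image generator: soft token distributions -> image."""
    def __init__(self):
        super().__init__()
        # A soft (probability-weighted) embedding keeps the cycle differentiable
        # through the discrete caption.
        self.embed = nn.Linear(VOCAB, EMBED)
        self.dec = nn.Linear(EMBED, 3 * 32 * 32)

    def forward(self, token_probs):
        e = self.embed(token_probs).mean(dim=1)        # pool over the sequence
        return self.dec(e).view(-1, 3, 32, 32)

img2txt, txt2img = Img2Txt(), Txt2Img()
opt = torch.optim.Adam(
    list(img2txt.parameters()) + list(txt2img.parameters()), lr=1e-4)

images = torch.rand(4, 3, 32, 32)                      # dummy batch
captions = torch.randint(0, VOCAB, (4, 10))            # dummy reference tokens

logits = img2txt(images)                               # image -> caption logits
caption_loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), captions.reshape(-1))

# Cycle consistency: regenerate the input image from the generated caption
# and penalize the distance to the original.
recon = txt2img(logits.softmax(dim=-1))
cycle_loss = nn.functional.l1_loss(recon, images)

lam = 1.0                                              # cycle-loss weight (assumed)
loss = caption_loss + lam * cycle_loss
opt.zero_grad(); loss.backward(); opt.step()
```

A caption that lets the generator reconstruct something close to the input is, in the paper's sense, "faithful" to the image, which is exactly what the cycle term rewards.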

URL

http://arxiv.org/abs/1903.10118

PDF

http://arxiv.org/pdf/1903.10118

