
CanvasGAN: A simple baseline for text to image generation by incrementally patching a canvas

2018-10-05
Amanpreet Singh, Sharan Agrawal

Abstract

We propose a new recurrent generative model for generating images from text captions while attending on specific parts of the captions. Our model creates images by incrementally adding patches to a “canvas” while attending on words from the text caption at each timestep. Finally, the canvas is passed through an upscaling network to generate images. We also introduce a new method for generating visual-semantic sentence embeddings based on self-attention over text. We compare our model’s generated images with those generated by Reed et al.’s model and show that our model is a stronger baseline for text to image generation tasks.
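
The abstract describes a recurrent generator that, at each timestep, attends over the caption's word embeddings, writes a patch onto a low-resolution canvas, and finally upscales the canvas into the output image. Below is a minimal PyTorch sketch of that idea only; the module names, dimensions, dot-product attention form, and upscaling layers are illustrative assumptions, not the authors' actual architecture.

```python
# Sketch (not the authors' code): recurrent canvas patching with attention
# over caption words, followed by an upscaling network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CanvasGeneratorSketch(nn.Module):
    def __init__(self, word_dim=128, hidden_dim=256, canvas_size=16, steps=8):
        super().__init__()
        self.steps = steps
        self.canvas_size = canvas_size
        self.rnn = nn.GRUCell(word_dim, hidden_dim)
        # projects the hidden state into a query for dot-product attention
        self.query = nn.Linear(hidden_dim, word_dim)
        # maps the hidden state to a 3-channel patch the size of the canvas
        self.to_patch = nn.Linear(hidden_dim, 3 * canvas_size * canvas_size)
        # simple upscaling network: 16x16 canvas -> 64x64 image
        self.upscale = nn.Sequential(
            nn.ConvTranspose2d(3, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, word_embs):
        # word_embs: (batch, num_words, word_dim), e.g. from a sentence encoder
        batch = word_embs.size(0)
        h = word_embs.new_zeros(batch, self.rnn.hidden_size)
        canvas = word_embs.new_zeros(batch, 3, self.canvas_size, self.canvas_size)
        for _ in range(self.steps):
            # attend over caption words using the current hidden state as query
            scores = torch.bmm(word_embs, self.query(h).unsqueeze(2)).squeeze(2)
            attn = F.softmax(scores, dim=1)                        # (batch, num_words)
            context = torch.bmm(attn.unsqueeze(1), word_embs).squeeze(1)
            h = self.rnn(context, h)
            # incrementally add a patch to the canvas
            canvas = canvas + self.to_patch(h).view_as(canvas)
        return self.upscale(torch.tanh(canvas))                    # (batch, 3, 64, 64)


if __name__ == "__main__":
    captions = torch.randn(2, 12, 128)      # stand-in word embeddings for 2 captions
    images = CanvasGeneratorSketch()(captions)
    print(images.shape)                      # torch.Size([2, 3, 64, 64])
```

In this sketch the attention context drives the GRU update, so each patch is conditioned on a different weighted view of the caption; the paper's self-attention sentence embedding would supply the word/sentence representations fed in as `word_embs`.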

URL

https://arxiv.org/abs/1810.02833

PDF

https://arxiv.org/pdf/1810.02833

