
Sequential Attention GAN for Interactive Image Editing via Dialogue

2019-04-03
Yu Cheng, Zhe Gan, Yitong Li, Jingjing Liu, Jianfeng Gao

Abstract

In this paper, we introduce a new task, interactive image editing via conversational language, where users can guide an agent to edit images through multi-turn natural language dialogue. In each dialogue turn, the agent takes a source image and a natural language description from the user as input and generates a new image following the textual description. Two new datasets, Zap-Seq and DeepFashion-Seq, are introduced for this task. We propose a novel Sequential Attention Generative Adversarial Network (SeqAttnGAN) framework, which applies a neural state tracker to encode both the source image and the textual description in each dialogue turn, and generates a high-quality new image consistent with both the preceding images and the dialogue context. To achieve better region-specific text-to-image generation, we also introduce an attention mechanism into the model. Experiments on the two new datasets show that the proposed SeqAttnGAN model outperforms state-of-the-art (SOTA) approaches on the dialogue-based image editing task. Detailed quantitative evaluation and a user study further demonstrate that our model is more effective than SOTA baselines at image generation, in terms of both visual quality and text-to-image consistency.
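
The abstract describes the model only at a high level, so the following is a minimal, illustrative PyTorch sketch of what a single SeqAttnGAN dialogue turn might look like, assuming a GRU-based state tracker that fuses the encoded source image with the turn's sentence embedding, dot-product attention over word embeddings for region-specific conditioning, and a small upsampling decoder. All module names, layer sizes, and the 64x64 resolution are hypothetical; the discriminator and adversarial training loop are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqAttnGANTurn(nn.Module):
    """One dialogue turn (illustrative sketch): encode the source image,
    update a recurrent dialogue state, attend over word features, and
    decode the edited image. All sizes are arbitrary assumptions."""

    def __init__(self, img_dim=256, txt_dim=256, state_dim=512):
        super().__init__()
        self.img_enc = nn.Sequential(              # toy image encoder
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, img_dim, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.tracker = nn.GRUCell(img_dim + txt_dim, state_dim)
        self.attn_q = nn.Linear(state_dim, txt_dim)
        self.decode = nn.Sequential(               # toy decoder to 64x64
            nn.Linear(state_dim + txt_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.Upsample(scale_factor=4),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, src_img, sent_emb, word_embs, state):
        # 1. Encode the source image and update the dialogue state.
        v = self.img_enc(src_img)                          # (B, img_dim)
        state = self.tracker(torch.cat([v, sent_emb], 1), state)
        # 2. Dot-product attention over word embeddings gives a
        #    region-specific text context vector.
        q = self.attn_q(state).unsqueeze(1)                # (B, 1, txt_dim)
        scores = torch.bmm(q, word_embs.transpose(1, 2))   # (B, 1, T)
        ctx = torch.bmm(F.softmax(scores, -1), word_embs).squeeze(1)
        # 3. Decode the new image from the state and attended context.
        new_img = self.decode(torch.cat([state, ctx], 1))  # (B, 3, 64, 64)
        return new_img, state

# Usage over a two-turn dialogue (random tensors stand in for real data):
model = SeqAttnGANTurn()
img = torch.randn(2, 3, 64, 64)
state = torch.zeros(2, 512)
for turn in range(2):
    sent, words = torch.randn(2, 256), torch.randn(2, 12, 256)
    img, state = model(img, sent, words, state)  # output feeds the next turn
```

The recurrent state carried across turns is what distinguishes this setup from single-shot text-to-image GANs: each edit conditions on the full dialogue history rather than on the current instruction alone.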

URL

https://arxiv.org/abs/1812.08352

PDF

https://arxiv.org/pdf/1812.08352

