
Semi-Latent GAN: Learning to generate and modify facial images from attributes

2017-04-07
Weidong Yin, Yanwei Fu, Leonid Sigal, Xiangyang Xue

Abstract

Generating and manipulating human facial images using high-level attribute controls are important and interesting problems. The models proposed in previous work can solve one of these two problems (generation or manipulation), but not both coherently. This paper proposes a novel model that learns how to both generate and modify facial images from high-level semantic attributes. Our key idea is to formulate a Semi-Latent Facial Attribute Space (SL-FAS) to systematically learn the relationship between user-defined and latent attributes, as well as between those attributes and RGB imagery. As part of this newly formulated space, we propose a new model, SL-GAN, which is a specific form of Generative Adversarial Network. Finally, we present an iterative training algorithm for SL-GAN. Experiments on the recent CelebA and CASIA-WebFace datasets validate the effectiveness of our proposed framework. We will also make data, pre-trained models and code available.
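
The abstract describes a generator driven by a code split into user-defined attributes and learned latent attributes. Below is a minimal, hypothetical PyTorch sketch of that general idea; the class name `ConditionalGenerator`, the layer sizes, and the 40-dimensional CelebA-style attribute vector are illustrative assumptions, not the paper's actual SL-GAN architecture.

```python
# Hypothetical sketch: a generator conditioned on a code that concatenates
# a latent vector z with a vector y of user-defined facial attributes.
# All names and layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, n_attrs=40, z_dim=100, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # project the concatenated (z, y) code to a 4x4 feature map
            nn.ConvTranspose2d(z_dim + n_attrs, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),   # 8x8
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),   # 16x16
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, img_channels, 4, 2, 1),      # 32x32
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, z, y):
        # z: (B, z_dim) latent attributes; y: (B, n_attrs) user-defined attributes
        code = torch.cat([z, y], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(code)

# Usage: sample noise and binary attribute vectors, then generate images.
z = torch.randn(8, 100)
y = torch.randint(0, 2, (8, 40)).float()
imgs = ConditionalGenerator()(z, y)   # shape (8, 3, 32, 32)
```

Splitting the code this way lets the attribute part be set by a user (for generation or modification) while the remaining latent part captures everything the attributes do not specify, which is the role SL-FAS plays in the paper.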

URL

https://arxiv.org/abs/1704.02166

PDF

https://arxiv.org/pdf/1704.02166

