
Optimizing the Latent Space of Generative Networks

2019-05-20
Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam

Abstract

Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images. In most successful applications, GAN models share two common aspects: solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator function; and parameterizing the generator and the discriminator as deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators using simple reconstruction losses. Across a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors; all of this without the adversarial optimization scheme.
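A minimal sketch of the GLO idea described in the abstract: one learnable latent vector per training image is optimized jointly with a convolutional generator under a simple reconstruction loss, with the latents projected back onto the unit sphere after each update. This is an illustrative assumption-laden example, not the authors' implementation: the generator architecture, the plain L2 loss (the paper also considers a Laplacian pyramid loss), and the names `Generator` and `glo_train` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    # A small DCGAN-style deconvolutional generator (hypothetical architecture).
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),  # 32x32 RGB output in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

def glo_train(images, latent_dim=64, epochs=10, lr=0.01):
    n = images.size(0)
    g = Generator(latent_dim)
    # One learnable latent code per training image, initialized on the unit sphere.
    z = nn.Parameter(F.normalize(torch.randn(n, latent_dim), dim=1))
    # Jointly optimize generator parameters and latent codes.
    opt = torch.optim.SGD([{"params": g.parameters()}, {"params": [z]}], lr=lr)

    for _ in range(epochs):
        for idx in torch.randperm(n).split(128):
            x = images[idx]
            loss = F.mse_loss(g(z[idx]), x)  # simple reconstruction loss (no discriminator)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                # Project the updated latents back onto the unit sphere.
                z[idx] = F.normalize(z[idx], dim=1)
    return g, z
```

After training, interpolation between two images amounts to decoding points on the line (or arc) between their learned latent codes, e.g. `g(F.normalize((1 - t) * z[i] + t * z[j], dim=0).unsqueeze(0))`.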

URL

http://arxiv.org/abs/1707.05776

PDF

http://arxiv.org/pdf/1707.05776
