
Generalization and Equilibrium in Generative Adversarial Nets

2017-08-01
Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang

Abstract

We show that training of a generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful, but the trained distribution may be far from the target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator/generator game for a special class of generators with natural training objectives when generator capacity and training set sizes are moderate. This existence of equilibrium inspires the MIX+GAN protocol, which can be combined with any existing GAN training method and is empirically shown to improve some of them.
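
To make the MIX+GAN idea concrete, the sketch below outlines one training step in which a small mixture of generators and discriminators is maintained, with trainable softmax mixture weights on each side and an entropy term that keeps the weights from collapsing onto one component. This is a minimal sketch assuming PyTorch; the `T`, `latent_dim`, `data_dim` values, the Generator/Discriminator architectures, and the simplified payoff are illustrative placeholders, not the paper's exact formulation.

```python
# Minimal MIX+GAN-style sketch (illustrative, not the paper's exact objective):
# keep T generators and T discriminators, learn softmax mixture weights for
# each side, and add an entropy term so the mixture weights do not collapse.
import torch
import torch.nn as nn
import torch.nn.functional as F

T, latent_dim, data_dim = 3, 64, 784  # placeholder sizes

def make_generator():
    return nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                         nn.Linear(256, data_dim), nn.Tanh())

def make_discriminator():  # outputs a logit
    return nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                         nn.Linear(256, 1))

gens = nn.ModuleList([make_generator() for _ in range(T)])
discs = nn.ModuleList([make_discriminator() for _ in range(T)])
gen_w = nn.Parameter(torch.zeros(T))   # generator mixture logits
disc_w = nn.Parameter(torch.zeros(T))  # discriminator mixture logits

opt_g = torch.optim.Adam(list(gens.parameters()) + [gen_w], lr=2e-4)
opt_d = torch.optim.Adam(list(discs.parameters()) + [disc_w], lr=2e-4)
entropy_coef = 1.0  # placeholder regularization strength

def mixture_payoff(real):
    """Weighted standard GAN payoff over all (generator, discriminator)
    pairs, plus the entropy of the mixture weights."""
    wg, wd = F.softmax(gen_w, dim=0), F.softmax(disc_w, dim=0)
    z = torch.randn(real.size(0), latent_dim)
    payoff = 0.0
    for i, g in enumerate(gens):
        fake = g(z)
        for j, d in enumerate(discs):
            real_term = F.logsigmoid(d(real)).mean()   # E[log D(x)]
            fake_term = F.logsigmoid(-d(fake)).mean()  # E[log(1 - D(G(z)))]
            payoff = payoff + wg[i] * wd[j] * (real_term + fake_term)
    entropy = -(wg * wg.log()).sum() - (wd * wd.log()).sum()
    return payoff, entropy

def train_step(real):
    # Discriminator side ascends the payoff (plus entropy) ...
    payoff, entropy = mixture_payoff(real)
    opt_d.zero_grad()
    (-(payoff + entropy_coef * entropy)).backward()
    opt_d.step()
    # ... generator side descends it while keeping its weights spread out.
    payoff, entropy = mixture_payoff(real)
    opt_g.zero_grad()
    (payoff - entropy_coef * entropy).backward()
    opt_g.step()
```

Because every component is an ordinary generator or discriminator, this wrapper can sit on top of an existing GAN training setup, which is the sense in which the abstract says MIX+GAN "can be combined with any existing GAN training" method.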

URL

https://arxiv.org/abs/1703.00573

PDF

https://arxiv.org/pdf/1703.00573

