
Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect

2018-03-05
Xiang Wei, Boqing Gong, Zixia Liu, Wei Lu, Liqiang Wang

Abstract

Despite their impact on a variety of problems and applications, generative adversarial nets (GANs) are remarkably difficult to train. This issue was formally analyzed by Arjovsky and Bottou (2017), who also proposed an alternative direction to avoid the caveats in the minimax two-player training of GANs. The corresponding algorithm, called Wasserstein GAN (WGAN), hinges on the 1-Lipschitz continuity of the discriminator. In this paper, we propose a novel approach to enforcing the Lipschitz continuity in the training procedure of WGANs. Our approach seamlessly connects WGAN with one of the recent semi-supervised learning methods. As a result, it yields not only more photo-realistic samples than the previous methods but also state-of-the-art semi-supervised learning results. In particular, our approach achieves an inception score of more than 5.0 with only 1,000 CIFAR-10 images and is, to the best of our knowledge, the first to exceed 90% accuracy on the CIFAR-10 dataset using only 4,000 labeled images.
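The abstract describes enforcing the critic's 1-Lipschitz continuity with a consistency term rather than a hard weight constraint. As a rough intuition, the idea is to penalize the critic when its outputs on two stochastic perturbations of the same real sample differ too much. The sketch below illustrates that intuition in plain Python; the function and parameter names (`consistency_term`, `perturb`, `margin`) are illustrative assumptions, not the paper's actual implementation, which operates on deep-network critics with dropout-based perturbations.

```python
import random

def consistency_term(critic, x, perturb, margin=0.0):
    """Sketch of a consistency penalty: if the critic's outputs on two
    perturbed views of the same real input x differ by more than `margin`,
    the excess is penalized, discouraging steep (non-Lipschitz) behavior
    near the data manifold. (Illustrative simplification, not the paper's
    exact formulation.)"""
    x1, x2 = perturb(x), perturb(x)  # two stochastic views of the same input
    return max(0.0, abs(critic(x1) - critic(x2)) - margin)

# Toy 1-D example (hypothetical): a linear "critic" with slope 3 is not
# 1-Lipschitz, so perturbations of the input produce a positive penalty.
critic = lambda v: 3.0 * v
perturb = lambda v: v + random.gauss(0.0, 0.01)
ct = consistency_term(critic, 0.5, perturb)  # >= 0, larger for steeper critics
```

In training, a term like this would be added to the critic's loss alongside the usual WGAN objective, so gradient descent pushes the critic toward Lipschitz-continuous behavior around real samples.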

URL

https://arxiv.org/abs/1803.01541

PDF

https://arxiv.org/pdf/1803.01541
