
A Wasserstein GAN model with the total variational regularization

2018-12-03
Lijun Zhang, Yujin Zhang, Yongbin Gao

Abstract

It is well known that generative adversarial nets (GANs) are remarkably difficult to train. The recently proposed Wasserstein GAN (WGAN) opens principled research directions toward addressing these issues. However, we found in practice that gradient-penalty WGANs (GP-WGANs) still suffer from training instability. In this paper, we incorporate a Total Variation (TV) regularization term into the WGAN formulation in place of weight clipping or the gradient penalty, which enforces the Lipschitz constraint on the critic network. Our proposed method is more stable during training than GP-WGANs and works well across varied GAN architectures. We also present a method to control the trade-off between image diversity and visual quality, and it introduces no additional computational burden.
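The abstract does not give the exact regularizer, so as a rough illustration, the general idea of adding a TV term to a WGAN-style critic loss can be sketched as follows. The anisotropic TV definition, the weighting coefficient `lam`, and the placement of the term in the loss are all assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def total_variation(x):
    """Anisotropic total variation of a 2-D array:
    the sum of absolute differences between neighboring entries
    along each axis (an illustrative choice of TV definition)."""
    dv = np.abs(np.diff(x, axis=0)).sum()  # vertical neighbor differences
    dh = np.abs(np.diff(x, axis=1)).sum()  # horizontal neighbor differences
    return dv + dh

def critic_loss(scores_real, scores_fake, tv_term, lam=10.0):
    """WGAN-style critic objective with a TV penalty standing in for
    the gradient penalty. `lam` is a hypothetical weight; the paper's
    trade-off mechanism is not specified in the abstract."""
    return scores_fake.mean() - scores_real.mean() + lam * tv_term
```

For example, for the checkerboard `[[0, 1], [1, 0]]` every neighboring pair differs by 1, so the TV is 4. In a real training loop the TV term would be computed from the critic network rather than from a fixed array, which this NumPy sketch does not attempt to model.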

URL

https://arxiv.org/abs/1812.00810

PDF

https://arxiv.org/pdf/1812.00810
