
On gradient regularizers for MMD GANs

2018-11-29
Michael Arbel, Dougal J. Sutherland, Mikołaj Bińkowski, Arthur Gretton

Abstract

We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the-art methods on $160 \times 160$ CelebA and $64 \times 64$ unconditional ImageNet.
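To make the setting concrete, below is a minimal PyTorch sketch of an MMD critic loss combined with an additive gradient penalty, i.e. the kind of approximate gradient control the abstract contrasts the proposed exact constraints against. The Gaussian kernel, the penalty weight `lambda_gp`, and the helper names are illustrative assumptions, not the paper's exact construction.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2));
    # x and y are (batch, feature) tensors of critic outputs.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(fx, fy, sigma=1.0):
    # Biased estimate of MMD^2 between critic features of real (fx) and fake (fy) samples.
    kxx = gaussian_kernel(fx, fx, sigma).mean()
    kyy = gaussian_kernel(fy, fy, sigma).mean()
    kxy = gaussian_kernel(fx, fy, sigma).mean()
    return kxx + kyy - 2 * kxy

def critic_loss_with_gradient_penalty(critic, real, fake, lambda_gp=10.0, sigma=1.0):
    # The critic maximizes MMD^2 between real and fake features, so its loss is
    # -MMD^2; an additive WGAN-GP-style penalty pushes the norm of the critic's
    # input gradient toward 1 on points interpolated between real and fake data.
    fx, fy = critic(real), critic(fake)
    loss = -mmd2(fx, fy, sigma)

    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(dim=1)
    return loss + lambda_gp * ((grad_norm - 1) ** 2).mean()
```

The paper's contribution is to replace this kind of additive, sample-based penalty with exact analytical gradient constraints built into the loss itself; the sketch above only illustrates the baseline regularization strategy being improved upon.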

URL

https://arxiv.org/abs/1805.11565

PDF

https://arxiv.org/pdf/1805.11565

