
Self-Supervised GAN to Counter Forgetting

2018-11-29
Ting Chen, Xiaohua Zhai, Neil Houlsby

Abstract

GANs involve training two networks in an adversarial game, where each network’s task depends on its adversary. Recently, several works have framed GAN training as an online or continual learning problem. We focus on the discriminator, which must perform classification under an (adversarially) shifting data distribution. When trained on sequential tasks, neural networks exhibit forgetting. For GANs, discriminator forgetting leads to training instability. To counter forgetting, we encourage the discriminator to maintain useful representations by adding a self-supervision task. Conditional GANs have a similar effect using labels. However, our self-supervised GAN does not require labels, and closes the performance gap between conditional and unconditional models. We show that, in doing so, the self-supervised discriminator learns better representations than regular GANs.
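
The abstract does not say which self-supervision task is added to the discriminator; the authors' related SS-GAN work uses rotation prediction (RotNet-style), so the sketch below assumes that choice. It shows a discriminator with a shared trunk and two heads, one for the usual real/fake decision and one that classifies which of four rotations was applied, and a discriminator loss that combines a hinge GAN loss with the auxiliary rotation cross-entropy. The architecture, hinge formulation, and loss weight are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfSupervisedDiscriminator(nn.Module):
    """Discriminator with an auxiliary self-supervision head (assumed rotation task).

    A shared convolutional trunk feeds two heads: one producing the
    real/fake logit and one predicting which of 4 rotations
    (0/90/180/270 degrees) was applied to the input.
    """
    def __init__(self, channels=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gan_head = nn.Linear(channels * 2, 1)   # real vs. fake logit
        self.rot_head = nn.Linear(channels * 2, 4)   # 4-way rotation logits

    def forward(self, x):
        h = self.trunk(x)
        return self.gan_head(h), self.rot_head(h)


def rotate_batch(x):
    """Return the batch rotated by 0/90/180/270 degrees plus rotation labels."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(rotations, dim=0), labels


def discriminator_loss(disc, real, fake, ss_weight=1.0):
    """Hinge GAN loss plus the auxiliary rotation-prediction loss (illustrative weighting)."""
    real_logit, _ = disc(real)
    fake_logit, _ = disc(fake.detach())
    gan_loss = F.relu(1.0 - real_logit).mean() + F.relu(1.0 + fake_logit).mean()

    # Self-supervision: predict the rotation applied to real images, which
    # pushes the discriminator to keep semantically useful features and
    # thereby counters forgetting under the shifting generator distribution.
    rotated, rot_labels = rotate_batch(real)
    _, rot_logits = disc(rotated)
    ss_loss = F.cross_entropy(rot_logits, rot_labels)

    return gan_loss + ss_weight * ss_loss
```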

URL

https://arxiv.org/abs/1810.11598

PDF

https://arxiv.org/pdf/1810.11598
