papers AI Learner

Normalized Diversification

2019-04-07
Shaohui Liu, Xiao Zhang, Jianqiao Wangni, Jianbo Shi

Abstract

Generating diverse yet specific data is the goal of the generative adversarial network (GAN), but it suffers from the problem of mode collapse. We introduce the concept of normalized diversity, which forces the model to preserve the normalized pairwise distance between sparse samples from a latent parametric distribution and their corresponding high-dimensional outputs. The normalized diversification aims to unfold the manifold of unknown topology and non-uniform distribution, which leads to safe interpolation between valid latent variables. By alternating between maximizing the pairwise distance and updating the total distance (normalizer), we encourage the model to actively explore the high-dimensional output space. We demonstrate that by combining the normalized diversity loss and the adversarial loss, we generate diverse data without suffering from mode collapse. Experimental results show that our method achieves consistent improvement on unsupervised image generation, conditional image generation and hand pose estimation over strong baselines.
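The core idea in the abstract, preserving normalized pairwise distances between latent samples and generator outputs, can be sketched numerically. The hinge form and the `alpha` margin below are illustrative assumptions, not the paper's exact formulation; distances are normalized by their total sum, playing the role of the "normalizer" mentioned above.

```python
import numpy as np

def normalized_pairwise_distances(x):
    """Pairwise Euclidean distances over a batch, divided by the total
    distance (the 'normalizer'), so only relative geometry matters."""
    diff = x[:, None, :] - x[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    return d / (d.sum() + 1e-8)

def normalized_diversity_loss(z, y, alpha=0.8):
    """Hinge-style sketch (alpha and the hinge are assumptions):
    penalize output pairs whose normalized distance falls below
    alpha times the corresponding normalized latent distance."""
    dz = normalized_pairwise_distances(z)
    dy = normalized_pairwise_distances(y)
    return np.maximum(0.0, alpha * dz - dy).mean()
```

Under this sketch, a collapsed generator (all outputs identical) yields a strictly positive loss, while outputs that reproduce the latent geometry incur none, which is the behavior the abstract attributes to the normalized diversity term.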

URL

http://arxiv.org/abs/1904.03608

PDF

http://arxiv.org/pdf/1904.03608
