Abstract
Generative adversarial networks (GANs) have shown remarkable success in the generation of unstructured data such as natural images. However, the discovery and separation of modes in the generated space, which is essential for many tasks beyond naive data generation, remains a challenge. In this paper, we address the problem of imposing desired modal properties on the generated space using a latent distribution engineered in accordance with the modal properties of the true data distribution. This is achieved by training a latent space inversion network in tandem with the generative network using a divergence loss. The latent space is made to follow a continuous multimodal distribution generated by reparameterization of a pair of continuous and discrete random variables. In addition, the modal priors of the latent distribution are learned to match the true data distribution using minimal supervision, with a negligible increase in the number of learnable parameters. We validate our method on multiple tasks, such as mode separation, conditional generation, and attribute discovery, on several real-world image datasets and demonstrate its efficacy over other state-of-the-art methods.
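The abstract's core mechanism, a continuous multimodal latent built by reparameterizing a pair of discrete and continuous random variables, can be sketched as a Gaussian-mixture sampler: a discrete mode index is drawn from the (possibly learned) modal priors, and the continuous part is reparameterized around that mode's statistics. The function name, shapes, and mixture parameterization below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sample_multimodal_latent(n, means, log_stds, mode_priors, rng=None):
    """Draw latents from a Gaussian mixture: a discrete index
    c ~ Categorical(mode_priors) picks a component, and the continuous
    part is reparameterized as z = mu_c + sigma_c * eps, eps ~ N(0, I)."""
    rng = rng or np.random.default_rng()
    k, d = means.shape
    c = rng.choice(k, size=n, p=mode_priors)     # discrete random variable
    eps = rng.standard_normal((n, d))            # continuous random variable
    z = means[c] + np.exp(log_stds[c]) * eps     # reparameterized sample
    return z, c

# Example: 10 modes in a 64-dim latent space with uniform modal priors
# (in the paper, these priors would be learned to match the data).
k, d = 10, 64
means = np.random.default_rng(0).standard_normal((k, d))
log_stds = np.zeros((k, d))                      # unit variance per mode
priors = np.full(k, 1.0 / k)
z, modes = sample_multimodal_latent(256, means, log_stds, priors)
print(z.shape, modes[:8])
```

Because `z` is a deterministic function of `eps` and the mixture parameters given the sampled mode, gradients can flow through `means` and `log_stds`, which is what allows the modal priors and statistics to be trained alongside the generator.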
URL
http://arxiv.org/abs/1811.03692