
Geometry of Deep Generative Models for Disentangled Representations

2019-02-19
Ankita Shukla, Shagun Uppal, Sarthak Bhagat, Saket Anand, Pavan Turaga

Abstract

Deep generative models like variational autoencoders approximate the intrinsic geometry of high-dimensional data manifolds by learning low-dimensional latent-space variables and an embedding function. The geometric properties of these latent spaces have been studied through the lens of Riemannian geometry, via analysis of the non-linearity of the generator function. More recently, deep generative models have been used to learn semantically meaningful 'disentangled' representations that capture task-relevant attributes while being invariant to other attributes. In this work, we explore the geometry of popular generative models for disentangled representation learning. We use several metrics to compare the latent spaces of disentangled representation models in terms of class separability and curvature. Our results establish that the class-distinguishing features in the disentangled latent space exhibit higher curvature than in a variational autoencoder. We evaluate and compare the geometry of three such models with a variational autoencoder on two different datasets. Further, our results show that distances and interpolations in the latent space improve significantly when Riemannian metrics derived from the curvature of the space are used. We expect these results to have implications for making deep networks more robust, generalizable, and interpretable.
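To make the Riemannian view in the abstract concrete: for a decoder g mapping latent codes to data space, the standard pull-back metric on the latent space is G(z) = J_g(z)^T J_g(z), where J_g is the decoder's Jacobian, and curve lengths measured under G give geometry-aware latent distances of the kind the paper compares against Euclidean ones. Below is a minimal sketch of this computation, assuming a hypothetical toy PyTorch decoder; the names `Decoder`, `pullback_metric`, and `curve_length` are illustrative and not from the authors' code.

```python
import torch

# Hypothetical toy decoder g: R^d -> R^D, standing in for a trained VAE decoder.
class Decoder(torch.nn.Module):
    def __init__(self, d=2, D=784):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, D)
        )

    def forward(self, z):
        return self.net(z)

def pullback_metric(decoder, z):
    # Riemannian metric G(z) = J(z)^T J(z) induced on the latent space
    # by the decoder Jacobian J(z), of shape (D, d).
    J = torch.autograd.functional.jacobian(decoder, z)
    return J.T @ J  # (d, d)

def curve_length(decoder, z0, z1, steps=20):
    # Approximate the Riemannian length of the straight line from z0 to z1
    # by summing sqrt(dz^T G(z) dz) over a discretization of the segment.
    ts = torch.linspace(0.0, 1.0, steps + 1)
    zs = [(1 - t) * z0 + t * z1 for t in ts]
    length = 0.0
    for za, zb in zip(zs[:-1], zs[1:]):
        dz = zb - za
        G = pullback_metric(decoder, 0.5 * (za + zb))
        length += torch.sqrt(dz @ G @ dz)
    return length

g = Decoder()
z0, z1 = torch.randn(2), torch.randn(2)
print(curve_length(g, z0, z1).item())
```

Minimizing this length over curves (rather than fixing the straight line) yields geodesic interpolations in the latent space, which is the sense in which curvature-derived metrics can improve interpolation.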

URL

http://arxiv.org/abs/1902.06964

PDF

http://arxiv.org/pdf/1902.06964

