
Universality Theorems for Generative Models

2019-05-27
Valentin Khrulkov, Ivan Oseledets

Abstract

Although generative models are extremely successful in practice, the theory underlying this success is only beginning to catch up. In this work we address the question of the universality of generative models: can neural networks approximate any data manifold arbitrarily well? We provide a positive answer and show that, under mild assumptions on the activation function, one can always find a feedforward neural network that maps the latent space onto a set lying within a specified Hausdorff distance of the desired data manifold. We also prove similar theorems for multiclass generative models and for cycle generative models, which are trained to map samples from one manifold to another and vice versa.
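
The approximation criterion in the theorem is the Hausdorff distance: the largest distance from a point in one set to its nearest point in the other. A minimal sketch of this metric on finite point samples (not code from the paper; the circle "manifold" and noisy generator output below are illustrative assumptions):

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two finite point sets.

    d_H(A, B) = max( max_a min_b ||a - b||, max_b min_a ||a - b|| )
    """
    # Pairwise Euclidean distances between every point of A and every point of B.
    diffs = A[:, None, :] - B[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Directed distances: how far the worst-covered point of one set is from the other.
    d_ab = dists.min(axis=1).max()  # sup over A of inf over B
    d_ba = dists.min(axis=0).max()  # sup over B of inf over A
    return max(d_ab, d_ba)

# Illustrative example: measure how well a hypothetical generator covers the unit circle.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # target "data manifold"
rng = np.random.default_rng(0)
generated = circle + 0.01 * rng.standard_normal(circle.shape)  # generator output (assumed)
print(hausdorff_distance(generated, circle))  # small value -> good coverage
```

In the paper's setting, the theorem guarantees that a feedforward network's image of the latent space can make this quantity smaller than any prescribed tolerance.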

URL

http://arxiv.org/abs/1905.11520

PDF

http://arxiv.org/pdf/1905.11520

