
A Style Transfer Approach to Source Separation

2019-05-01
Shrikant Venkataramani, Efthymios Tzinis, Paris Smaragdis

Abstract

Training neural networks for source separation involves presenting a mixture recording at the input of the network and updating the network parameters so that the output resembles the clean source. Consequently, supervised source separation depends on the availability of paired mixture-clean training examples. In this paper, we interpret source separation as a style transfer problem. We present a variational auto-encoder network that exploits the commonality between the domain of mixtures and the domain of clean sounds and learns a shared latent representation across the two domains. Using these cycle-consistent variational auto-encoders, we learn a mapping from the mixture domain to the domain of clean sounds and perform source separation without explicit supervision from paired training examples.
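
The core idea of the abstract, mapping mixtures to clean sounds through a shared latent space with a cycle-consistency constraint, can be illustrated with a toy sketch. The linear "encoders" and "decoders" below are hypothetical stand-ins (not the paper's actual networks); they only show how a mixture is encoded into the shared latent space, decoded in the clean domain, and cycled back to check consistency:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Hypothetical linear maps standing in for the learned encoders/decoders.
# enc_* map each domain into the shared latent space; dec_* map back out.
enc_m = rng.normal(size=(dim, dim))   # mixture domain -> shared latent
enc_c = rng.normal(size=(dim, dim))   # clean domain   -> shared latent
dec_m = np.linalg.inv(enc_m)          # shared latent  -> mixture domain
dec_c = np.linalg.inv(enc_c)          # shared latent  -> clean domain

def separate(mixture):
    """Map a mixture to the clean domain via the shared latent code."""
    z = enc_m @ mixture       # encode the mixture
    return dec_c @ z          # decode in the clean-sound domain

def cycle_loss(mixture):
    """Cycle consistency: mixture -> latent -> clean -> latent -> mixture."""
    z = enc_m @ mixture
    clean_est = dec_c @ z     # cross-domain "translation"
    z_back = enc_c @ clean_est
    recon = dec_m @ z_back    # map back to the mixture domain
    return float(np.sum((recon - mixture) ** 2))

x = rng.normal(size=dim)
print(cycle_loss(x))  # near zero here, since the toy maps are exact inverses
```

In the paper this role is played by trained variational auto-encoders, and the cycle-consistency term is minimized during training rather than holding exactly; the sketch only makes the shared-latent round trip concrete.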

URL

http://arxiv.org/abs/1905.00151

PDF

http://arxiv.org/pdf/1905.00151

