papers AI Learner

Adversarial Audio Synthesis

2019-02-09
Chris Donahue, Julian McAuley, Miller Puckette

Abstract

Audio signals are sampled at high temporal resolutions, and learning to synthesize audio requires capturing structure across a range of timescales. Generative adversarial networks (GANs) have seen wide success at generating images that are both locally and globally coherent, but they have seen little application to audio generation. In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio. WaveGAN is capable of synthesizing one-second slices of audio waveforms with global coherence, suitable for sound effect generation. Our experiments demonstrate that, without labels, WaveGAN learns to produce intelligible words when trained on a small-vocabulary speech dataset, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano. We compare WaveGAN to a method which applies GANs designed for image generation on image-like audio feature representations, finding both approaches to be promising.
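To make the "one-second slices" concrete: a WaveGAN-style generator upsamples a short latent feature map to a full waveform through a stack of strided 1D transposed convolutions. Below is a minimal sketch of the output-length arithmetic only (not the paper's implementation); the specific numbers (kernel size 25, stride 4, padding 11 with output padding 1, five layers starting from 16 time steps) are assumptions chosen so each layer upsamples exactly 4x, reaching 16384 samples, i.e. about one second of 16 kHz audio.

```python
def conv1d_transpose_len(n_in, kernel, stride, padding, output_padding):
    """Output length of a 1D transposed convolution (standard formula)."""
    return stride * (n_in - 1) + kernel - 2 * padding + output_padding

# Hypothetical WaveGAN-like upsampling chain: each layer multiplies the
# temporal length by the stride (4), so 16 -> 64 -> 256 -> 1024 -> 4096 -> 16384.
n = 16
for _ in range(5):
    n = conv1d_transpose_len(n, kernel=25, stride=4, padding=11, output_padding=1)

print(n)  # 16384 samples, roughly one second at a 16 kHz sample rate
```

The design choice here is that padding and output padding are picked so the length grows by exactly the stride factor per layer, which keeps the number of layers needed to reach a target waveform length easy to reason about.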

URL

http://arxiv.org/abs/1802.04208

PDF

http://arxiv.org/pdf/1802.04208
