
Self-supervised Audio Spatialization with Correspondence Classifier

2019-05-14
Yu-Ding Lu, Hsin-Ying Lee, Hung-Yu Tseng, Ming-Hsuan Yang

Abstract

Spatial audio is an essential medium for delivering a 3D visual and auditory experience to audiences. However, the recording devices and techniques remain expensive or inaccessible to the general public. In this work, we propose a self-supervised audio spatialization network that can generate spatial audio given the corresponding video and monaural audio. To enhance spatialization performance, we use an auxiliary classifier to distinguish ground-truth videos from those whose audio has the left and right channels swapped. We collect a large-scale video dataset with spatial audio to validate the proposed method. Experimental results demonstrate the effectiveness of the proposed model on the audio spatialization task.
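The self-supervision signal described in the abstract comes from a free label: swapping a stereo clip's left and right channels yields a "fake" example, while the original order is "real", and an auxiliary classifier is trained to tell them apart. A minimal sketch of that pair construction in Python, assuming stereo audio is represented as a `(left, right)` tuple of sample lists; the function names are illustrative, not from the paper:

```python
import random

def swap_channels(stereo):
    """Swap the left and right channels of a stereo clip."""
    left, right = stereo
    return (right, left)

def make_training_pair(stereo, rng=random):
    """Return (audio, label) for the correspondence classifier:
    label 1 for the ground-truth channel order, label 0 when the
    left/right channels are swapped (the negative example)."""
    if rng.random() < 0.5:
        return stereo, 1
    return swap_channels(stereo), 0
```

Because the negatives are generated from the data itself, no manual annotation is needed; the spatialization network and the classifier can then be trained jointly on these pairs.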

URL

https://arxiv.org/abs/1905.05375

PDF

https://arxiv.org/pdf/1905.05375

