Abstract
This paper investigates waveform representations for audio signal classification. In recent years, studies on audio waveform classification, such as acoustic event detection and music genre classification, have been increasing, and most of them adopt a deep learning (neural network) framework. Generally, a frequency analysis method such as the Fourier transform is applied to extract frequency or spectral information from the input audio waveform before it is given to a neural network, rather than feeding in the raw audio waveform directly. In contrast to these previous studies, this paper proposes a novel waveform representation method for audio classification, in which audio waveforms are represented as bit sequences. In our experiments, we compare the proposed bit-representation waveform, which is given directly to a neural network, with other representations of audio waveforms, such as the raw audio waveform and the power spectrum, on two classification tasks: an acoustic event classification task and a sound/music classification task. The experimental results show that the bit-representation waveform achieves the best classification performance on both tasks.
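The abstract does not spell out the exact bit encoding, but a minimal sketch of one plausible variant, assuming signed 16-bit PCM samples unpacked into their binary digits (most significant bit first), could look like the following; the function name and all details here are illustrative assumptions, not the authors' implementation.

import numpy as np

def waveform_to_bits(waveform, n_bits=16):
    """Unpack each quantized sample of a waveform into its binary digits.

    waveform: 1-D array of signed integer PCM samples (assumed n_bits-bit).
    Returns an array of shape (len(waveform), n_bits) with values in {0, 1}.
    """
    # Shift signed samples into the unsigned range so every value is non-negative.
    unsigned = waveform.astype(np.int64) + (1 << (n_bits - 1))
    # Extract bits from most-significant to least-significant position.
    shifts = np.arange(n_bits - 1, -1, -1)
    bits = (unsigned[:, None] >> shifts) & 1
    return bits.astype(np.float32)

# Example: three 16-bit samples become a 3 x 16 binary matrix,
# which could be flattened or fed directly to a neural network.
samples = np.array([-32768, 0, 12345], dtype=np.int16)
print(waveform_to_bits(samples).shape)  # (3, 16)

Such an encoding keeps the full amplitude information of the raw waveform while presenting it to the network as binary-valued inputs; whether the paper uses this particular bit ordering or bit depth is not stated in the abstract.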
URL
http://arxiv.org/abs/1904.04364