
Direct Modelling of Speech Emotion from Raw Speech

2019-04-08
Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, Julien Epps

Abstract

Speech emotion recognition is a challenging task and heavily depends on hand-engineered acoustic features, which are typically crafted to echo human perception of speech signals. However, a filter bank designed from perceptual evidence is not guaranteed to be optimal in a statistical modelling framework whose end goal is, for example, emotion classification. This has fuelled the emerging trend of learning representations directly from raw speech, especially using deep neural networks. In particular, the combination of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks has gained traction in this field: LSTMs are used for their intrinsic ability to learn the contextual information crucial for emotion recognition, while CNNs are used for their ability to overcome the scalability problems of regular neural networks. In this paper, we show that there are still opportunities to improve the performance of emotion recognition from raw speech by exploiting the properties of CNNs in modelling contextual information. We propose the use of parallel convolutional layers in the feature extraction block, jointly trained with an LSTM-based classification network for the emotion recognition task. Our results suggest that the proposed model can match the performance of CNNs with hand-engineered features on the IEMOCAP and MSP-IMPROV datasets.

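The abstract describes an architecture that feeds raw waveforms through parallel convolutional layers and trains them jointly with an LSTM classifier. Below is a minimal PyTorch sketch of that general idea; the kernel sizes, filter counts, pooling scheme, and number of emotion classes are illustrative assumptions, not the configuration reported in the paper.

```python
# Sketch only: parallel Conv1d branches on raw speech, jointly trained with an
# LSTM-based classifier. All hyperparameters here are assumptions.
import torch
import torch.nn as nn


class ParallelConvLSTM(nn.Module):
    def __init__(self, n_classes=4, kernel_sizes=(32, 64, 128),
                 n_filters=40, lstm_hidden=128):
        super().__init__()
        # One convolutional branch per kernel size; each sees the raw waveform
        # (batch, 1, samples) and captures context at a different time scale.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, n_filters, kernel_size=k, stride=k // 2),
                nn.BatchNorm1d(n_filters),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(200),  # align branch outputs in time
            )
            for k in kernel_sizes
        ])
        # The LSTM consumes the concatenated branch features frame by frame.
        self.lstm = nn.LSTM(input_size=n_filters * len(kernel_sizes),
                            hidden_size=lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, n_classes)

    def forward(self, waveform):
        # waveform: (batch, samples) of raw speech
        x = waveform.unsqueeze(1)                      # (batch, 1, samples)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        feats = feats.transpose(1, 2)                  # (batch, time, channels)
        _, (h_n, _) = self.lstm(feats)
        return self.classifier(h_n[-1])                # emotion logits


if __name__ == "__main__":
    model = ParallelConvLSTM()
    dummy = torch.randn(2, 16000)   # two 1-second utterances at 16 kHz
    print(model(dummy).shape)       # torch.Size([2, 4])
```

The parallel branches with different kernel widths are one plausible way to let the convolutional front end capture contextual information at several time scales before the LSTM models longer-range dependencies.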
URL

http://arxiv.org/abs/1904.03833

PDF

http://arxiv.org/pdf/1904.03833

