papers AI Learner

The Verbal and Non Verbal Signals of Depression -- Combining Acoustics, Text and Visuals for Estimating Depression Level

2019-04-02
Syed Arbaaz Qureshi, Mohammed Hasanuzzaman, Sriparna Saha, Gaël Dias

Abstract

Depression is a serious medical condition that is suffered by a large number of people around the world. It significantly affects the way one feels, causing a persistent lowering of mood. In this paper, we propose a novel attention-based deep neural network which facilitates the fusion of various modalities. We use this network to regress the depression level. Acoustic, text and visual modalities have been used to train our proposed network. Various experiments have been carried out on the benchmark dataset, namely, Distress Analysis Interview Corpus - a Wizard of Oz (DAIC-WOZ). From the results, we empirically justify that the fusion of all three modalities helps in giving the most accurate estimation of depression level. Our proposed approach outperforms the state-of-the-art by 7.17% on root mean squared error (RMSE) and 8.08% on mean absolute error (MAE).
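As a rough illustration of the attention-based fusion described above, the sketch below weights per-modality embeddings with learned attention scores and regresses a single depression score, with RMSE and MAE computed in the usage example. The `AttentionFusionRegressor` class, the feature dimensions, and the layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: attention-based fusion of acoustic, text and visual features
# for depression-level regression. All dimensions and layers are assumptions.
import torch
import torch.nn as nn


class AttentionFusionRegressor(nn.Module):
    def __init__(self, dims=(74, 300, 388), hidden=128):
        super().__init__()
        # Project each modality (acoustic, text, visual) into a shared space.
        self.projections = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        # One scalar attention score per modality.
        self.attention = nn.Linear(hidden, 1)
        # Regression head producing a single depression-level estimate.
        self.regressor = nn.Sequential(
            nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, acoustic, text, visual):
        # Stack projected modalities: (batch, n_modalities, hidden)
        projected = torch.stack(
            [torch.tanh(p(x)) for p, x in zip(self.projections, (acoustic, text, visual))],
            dim=1,
        )
        # Attention weights over the three modalities, summing to 1 per example.
        weights = torch.softmax(self.attention(projected), dim=1)
        fused = (weights * projected).sum(dim=1)   # attention-weighted fusion
        return self.regressor(fused).squeeze(-1)   # predicted depression level


if __name__ == "__main__":
    model = AttentionFusionRegressor()
    a, t, v = torch.randn(4, 74), torch.randn(4, 300), torch.randn(4, 388)
    pred = model(a, t, v)
    target = torch.tensor([5.0, 12.0, 0.0, 17.0])  # made-up ground-truth levels
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))
    mae = torch.mean(torch.abs(pred - target))
    print(pred.shape, rmse.item(), mae.item())
```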

URL

http://arxiv.org/abs/1904.07656

PDF

http://arxiv.org/pdf/1904.07656

