
VICSOM: VIsual Clues from SOcial Media for psychological assessment

2019-05-15
Mohammad Mahdi Dehshibi, Gerard Pons, Bita Baiani, David Masip

Abstract

Sharing multimodal information (typically images, videos or text) in Social Network Sites (SNS) occupies a relevant part of our time. The particular way in which users expose themselves in SNS can provide useful information for inferring human behaviors. This paper proposes to use multimodal data gathered from Instagram accounts to predict the perceived prototypical needs described in Glasser's choice theory. The contribution is two-fold: (i) we provide a large multimodal database from Instagram public profiles (more than 30,000 images and text captions), annotated by expert psychologists on each perceived behavior according to Glasser's theory, and (ii) we propose to automate the recognition of the needs (unconsciously) conveyed by the users. In particular, we propose a baseline using three different feature sets: visual descriptors based on pixel images (SURF and Visual Bag of Words), a high-level descriptor based on automated scene description using Convolutional Neural Networks, and a text-based descriptor (Word2vec) obtained by processing the captions provided by the users. Finally, we propose a multimodal fusion of these descriptors, obtaining promising results on the multi-label classification problem.
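For readers who want a concrete picture of such a baseline, the sketch below illustrates three of the components named in the abstract: SURF descriptors quantized into a Bag of Visual Words, averaged Word2vec caption embeddings, early fusion by concatenation, and a one-vs-rest multi-label classifier over the Glasser needs. It is an illustrative reconstruction, not the authors' implementation; the function names, the 200-word codebook size, and the fusion/classifier choices are assumptions, and the CNN scene-description branch is only indicated by a comment.

```python
# Minimal sketch (not the authors' code) of a multimodal baseline:
# SURF + Bag-of-Visual-Words image features, averaged Word2vec caption
# features, early fusion by concatenation, and one-vs-rest multi-label
# classification. Hyper-parameters and data layout are assumptions.

import numpy as np
import cv2  # SURF requires the opencv-contrib-python build
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier


def surf_descriptors(image_path, hessian=400):
    """Local SURF descriptors of one image (None if nothing is detected)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    _, desc = surf.detectAndCompute(img, None)
    return desc


def bovw_histogram(desc, codebook):
    """Hard-assign descriptors to visual words; return a normalized histogram."""
    k = codebook.n_clusters
    if desc is None:
        return np.zeros(k)
    hist = np.bincount(codebook.predict(desc), minlength=k).astype(float)
    return hist / (hist.sum() + 1e-8)


def caption_vector(tokens, w2v):
    """Average the Word2vec embeddings of a tokenized caption."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)


def fit_baseline(image_paths, captions, Y, n_words=200):
    """image_paths: list of image files, captions: list of token lists,
    Y: (n_posts, n_needs) binary label matrix (one column per perceived need)."""
    # Visual codebook: cluster pooled SURF descriptors into visual words.
    pooled = [d for p in image_paths if (d := surf_descriptors(p)) is not None]
    codebook = KMeans(n_clusters=n_words, random_state=0).fit(np.vstack(pooled))

    # Word2vec trained on the captions (a pretrained model could be used instead).
    w2v = Word2Vec(sentences=captions, vector_size=100, min_count=1, seed=0)

    # Early fusion: concatenate per-post visual and textual descriptors.
    # A CNN scene descriptor (e.g. the penultimate layer of a scene-recognition
    # network) would be appended to the same vector in the same way.
    X = np.array([
        np.hstack([bovw_histogram(surf_descriptors(p), codebook),
                   caption_vector(c, w2v)])
        for p, c in zip(image_paths, captions)
    ])

    # Multi-label classification: one independent binary classifier per need.
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    return codebook, w2v, clf
```

Early (feature-level) fusion by concatenation is shown here only because it is the simplest option; the paper's actual fusion strategy may differ.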

URL

http://arxiv.org/abs/1905.06203

PDF

http://arxiv.org/pdf/1905.06203

