
Learning multimodal representations for sample-efficient recognition of human actions

2019-03-06
Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura

Abstract

Humans interact in rich and diverse ways with the environment. However, the representation of such behavior by artificial agents is often limited. In this work we present "motion concepts", a novel multimodal representation of human actions in a household environment. A motion concept encompasses a probabilistic description of the kinematics of the action along with its contextual background, namely the location and the objects held during the performance. Furthermore, we present Online Motion Concept Learning (OMCL), a new algorithm that learns novel motion concepts from action demonstrations and recognizes previously learned motion concepts. The algorithm is evaluated in a virtual-reality household environment featuring a human avatar. OMCL outperforms standard motion recognition algorithms on a one-shot recognition task, attesting to its potential for sample-efficient recognition of human actions.
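
The abstract only outlines the structure of a motion concept, so the paper's formal definitions are not reproduced here. The sketch below is a minimal, hypothetical illustration of the idea: per-dimension Gaussian statistics over kinematic features combined with smoothed categorical distributions over contextual cues (location and held object), updated incrementally so that a single demonstration suffices for one-shot recognition. The class and all names (`MotionConcept`, `kin_feats`, `alpha`, etc.) are assumptions for illustration, not the paper's actual OMCL formulation.

```python
# Hypothetical sketch of a "motion concept": probabilistic kinematics
# plus contextual distributions (location, held objects). Illustrative
# only; the paper's OMCL algorithm may be formulated differently.
from dataclasses import dataclass, field
from collections import Counter
import math

@dataclass
class MotionConcept:
    name: str
    # Running per-dimension statistics of kinematic features (assumed Gaussian).
    kin_mean: list = field(default_factory=list)
    kin_var: list = field(default_factory=list)   # Welford M2 accumulator
    # Counts of contextual observations.
    locations: Counter = field(default_factory=Counter)
    objects: Counter = field(default_factory=Counter)
    n: int = 0

    def update(self, kin_feats, location, held_object):
        """Incorporate one demonstration (a single call enables one-shot use)."""
        self.n += 1
        if not self.kin_mean:
            self.kin_mean = list(kin_feats)
            self.kin_var = [1.0] * len(kin_feats)  # broad prior variance
        else:
            for i, x in enumerate(kin_feats):
                delta = x - self.kin_mean[i]
                self.kin_mean[i] += delta / self.n
                self.kin_var[i] += delta * (x - self.kin_mean[i])
        self.locations[location] += 1
        self.objects[held_object] += 1

    def log_score(self, kin_feats, location, held_object, alpha=1.0):
        """Log-likelihood of an observation (Laplace-smoothed context terms)."""
        s = 0.0
        for i, x in enumerate(kin_feats):
            var = max(self.kin_var[i] / max(self.n, 1), 1e-3)
            s += -0.5 * (math.log(2 * math.pi * var)
                         + (x - self.kin_mean[i]) ** 2 / var)
        for counter, obs in ((self.locations, location), (self.objects, held_object)):
            total = sum(counter.values())
            s += math.log((counter[obs] + alpha) / (total + alpha * (len(counter) + 1)))
        return s

# One-shot usage: learn from a single demonstration, then score a new observation.
drink = MotionConcept("drink")
drink.update([0.8, 0.1], location="kitchen", held_object="cup")
print(drink.log_score([0.75, 0.12], "kitchen", "cup"))
```

Recognition of a new demonstration would then amount to scoring it under each learned concept and selecting the highest-scoring one, with low scores across all concepts triggering the creation of a new concept.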

URL

http://arxiv.org/abs/1903.02511

PDF

http://arxiv.org/pdf/1903.02511

