
Action2Vec: A Crossmodal Embedding Approach to Action Learning

2019-01-02
Meera Hahn, Andrew Silva, James M. Rehg

Abstract

We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero-shot action recognition and obtain state-of-the-art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.
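The pipeline the abstract describes — a hierarchical recurrent encoder over per-frame video features, trained with a joint classification-plus-Word2Vec-similarity loss, and evaluated by nearest-neighbor lookup for zero-shot recognition — can be sketched as below. This is a minimal illustration in PyTorch, not the authors' implementation: the two-level LSTM split, the layer sizes, the loss weight `alpha`, and all function names (`HierarchicalVideoEncoder`, `joint_loss`, `zero_shot_predict`) are assumptions made for exposition.

```python
# Hedged sketch of an Action2Vec-style model: hierarchical LSTM encoder,
# joint (classification + Word2Vec-similarity) loss, zero-shot inference.
# Hyperparameters and architecture details are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalVideoEncoder(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, embed_dim=300, num_classes=101):
        super().__init__()
        self.frame_rnn = nn.LSTM(feat_dim, hidden, batch_first=True)    # low level: frames within a segment
        self.segment_rnn = nn.LSTM(hidden, hidden, batch_first=True)    # high level: sequence of segments
        self.to_embed = nn.Linear(hidden, embed_dim)                    # project into the Word2Vec space
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x, segment_len=16):
        # x: (batch, frames, feat_dim) pre-extracted per-frame CNN features
        b, t, d = x.shape
        t = (t // segment_len) * segment_len                            # drop trailing frames
        x = x[:, :t].reshape(b * (t // segment_len), segment_len, d)
        _, (h, _) = self.frame_rnn(x)                                   # summarize each segment
        segs = h[-1].view(b, t // segment_len, -1)
        _, (h, _) = self.segment_rnn(segs)                              # summarize the segment sequence
        z = self.to_embed(h[-1])                                        # video embedding in verb space
        return z, self.classifier(z)

def joint_loss(z, logits, labels, word_vecs, alpha=0.5):
    # word_vecs: (batch, embed_dim) Word2Vec vectors of the ground-truth verbs;
    # alpha trades off classification against embedding similarity (assumed value).
    ce = F.cross_entropy(logits, labels)
    sim = 1.0 - F.cosine_similarity(z, word_vecs, dim=-1).mean()
    return alpha * ce + (1.0 - alpha) * sim

def zero_shot_predict(z, unseen_word_vecs):
    # unseen_word_vecs: (num_unseen, embed_dim) Word2Vec vectors of unseen verbs;
    # return the index of the nearest unseen verb for each video embedding.
    sims = F.cosine_similarity(z.unsqueeze(1), unseen_word_vecs.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)
```

Under this reading, a clip of an unseen action is encoded once and matched against the Word2Vec vectors of the unseen verb labels, which is what makes zero-shot recognition possible without retraining the classifier head.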

URL

https://arxiv.org/abs/1901.00484

PDF

https://arxiv.org/pdf/1901.00484
