
Video-based Person Re-identification via 3D Convolutional Networks and Non-local Attention

2019-04-28
Xingyu Liao, Lingxiao He, Zhouwang Yang, Chi Zhang

Abstract

Video-based person re-identification (ReID) is a challenging problem in which video tracks of people captured by non-overlapping cameras must be matched. Aggregating features from a video track is a key step in video-based person ReID. Many existing methods tackle this problem with average/maximum temporal pooling or RNNs with attention; however, these methods cannot handle temporal dependency and spatial misalignment at the same time. We draw inspiration from video action recognition, which involves identifying different actions from video tracks. First, we apply 3D convolutions to the video volume, instead of 2D convolutions across frames, to extract spatial and temporal features simultaneously. Second, we use a non-local block to tackle the misalignment problem and capture spatial-temporal long-range dependencies. As a result, the network can learn useful spatial-temporal information as a weighted sum of the features at all spatial and temporal positions in the input feature map. Experimental results on three datasets show that our framework outperforms state-of-the-art approaches by a large margin on multiple metrics.
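
The non-local operation mentioned in the abstract computes each output position as a weighted sum over the features at every spatial and temporal position. The sketch below is an illustration only, not the authors' code: a minimal embedded-Gaussian non-local block in PyTorch applied to a 3D-convolutional feature map of shape (batch, channels, frames, height, width). The bottleneck channel count, layer names, and tensor sizes are assumptions for the example.

```python
# Minimal sketch of a 3D non-local block (assumptions: PyTorch, embedded-Gaussian
# pairwise function, bottleneck of half the input channels). Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock3D(nn.Module):
    def __init__(self, in_channels, inter_channels=None):
        super().__init__()
        # Bottleneck channel count (assumed to be half the input channels).
        self.inter_channels = inter_channels or in_channels // 2
        self.theta = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)
        self.phi = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)
        self.g = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)
        # Project back to the input channel count before the residual sum.
        self.out = nn.Conv3d(self.inter_channels, in_channels, kernel_size=1)

    def forward(self, x):
        b, c, t, h, w = x.shape
        n = t * h * w  # number of space-time positions
        # Pairwise affinities between every pair of space-time positions.
        theta = self.theta(x).view(b, self.inter_channels, n).permute(0, 2, 1)  # (b, n, c')
        phi = self.phi(x).view(b, self.inter_channels, n)                        # (b, c', n)
        attn = F.softmax(torch.bmm(theta, phi), dim=-1)                          # (b, n, n)
        # Each output position is a weighted sum of the features at all positions.
        g = self.g(x).view(b, self.inter_channels, n).permute(0, 2, 1)           # (b, n, c')
        y = torch.bmm(attn, g).permute(0, 2, 1).reshape(b, self.inter_channels, t, h, w)
        return x + self.out(y)  # residual connection keeps the original signal

# Example: a clip-level feature map such as one produced by a 3D-conv backbone.
feats = torch.randn(2, 256, 8, 16, 8)   # (batch, channels, frames, H, W) -- illustrative sizes
out = NonLocalBlock3D(256)(feats)       # output has the same shape as the input
```

Because the attention is computed over all frames and spatial locations jointly, the block can relate a body part in one frame to the same part at a different location in another frame, which is how it addresses spatial misalignment across the track.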

URL

http://arxiv.org/abs/1807.05073

PDF

http://arxiv.org/pdf/1807.05073

