
Video-based Person Re-identification with Two-stream Convolutional Network and Co-attentive Snippet Embedding

2019-05-28
Peixian Chen, Pingyang Dai, Qiong Wu, Yuyu Huang

Abstract

Applications of person re-identification in visual surveillance and human-computer interaction are growing rapidly, underscoring the importance of this problem. In this paper, we propose a two-stream convolutional network (ConvNet) based on a competitive similarity aggregation scheme and a co-attentive embedding strategy for video-based person re-identification. By dividing a long video sequence into multiple short snippets, we use each snippet's RGB frames, optical flow maps and pose maps as inputs to residual networks (e.g., ResNet) for feature extraction in the two-stream ConvNet. The extracted features are combined by the co-attentive embedding method, which reduces the influence of noisy frames. Finally, we fuse the outputs of both streams into a snippet embedding and apply competitive snippet-similarity aggregation to measure the similarity between two sequences. Our experiments show that the proposed method significantly outperforms current state-of-the-art approaches on multiple datasets.
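
To make the described pipeline concrete, below is a minimal, hypothetical sketch of the snippet-level workflow: a long sequence is split into short snippets, each snippet is embedded by fusing an appearance stream and a motion stream, and sequence similarity is computed by aggregating the most competitive snippet-pair similarities. All function names, shapes, the ResNet-50 backbones, mean pooling (standing in for the paper's co-attentive embedding), and the top-k aggregation rule are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of the snippet-based pipeline (not the authors' implementation).
# Assumptions: PyTorch, ResNet-50 backbones for both streams, mean pooling in place
# of co-attentive embedding, motion inputs packed into 3-channel tensors, and a
# top-k "competitive" aggregation over snippet-pair cosine similarities.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

appearance_net = resnet50(num_classes=256)   # RGB-frame stream
motion_net = resnet50(num_classes=256)       # optical-flow / pose-map stream


def split_into_snippets(frames, snippet_len=8):
    """Divide a long sequence of shape (T, C, H, W) into short snippets."""
    T = frames.shape[0]
    return [frames[t:t + snippet_len]
            for t in range(0, T - snippet_len + 1, snippet_len)]


def embed_snippet(rgb_snippet, motion_snippet):
    """Embed one snippet by fusing the two streams (frames mean-pooled)."""
    a = appearance_net(rgb_snippet).mean(dim=0)   # appearance embedding
    m = motion_net(motion_snippet).mean(dim=0)    # motion embedding
    return F.normalize(a + m, dim=0)              # fused snippet embedding


def sequence_similarity(snips_a, snips_b, top_k=3):
    """Competitive aggregation: average the top-k cosine similarities
    over all snippet pairs drawn from the two sequences."""
    A = torch.stack(snips_a)          # (Na, D), unit-normalized embeddings
    B = torch.stack(snips_b)          # (Nb, D)
    sims = A @ B.t()                  # pairwise cosine similarities
    k = min(top_k, sims.numel())
    return sims.flatten().topk(k).values.mean()
```

In this sketch, aggregating only the best-matching snippet pairs (rather than averaging all pairs) is the stand-in for the "competitive" aspect: low-quality or noisy snippets contribute little to the final sequence similarity.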

URL

https://arxiv.org/abs/1905.11862

PDF

https://arxiv.org/pdf/1905.11862

