
Intra-clip Aggregation for Video Person Re-identification

2019-05-05
Takashi Isobe, Jian Han, Fang Zhu, Yali Li, Shengjin Wang

Abstract

Video-based person re-id has drawn much attention in recent years due to its prospective applications in video surveillance. Most existing methods concentrate on how to represent discriminative clip-level features. Clip-level data augmentation is also important, especially for the temporal aggregation task. Inconsistent intra-clip augmentation will collapse inter-frame alignment, thus bringing in additional noise. To tackle the above-mentioned problems, we design a novel framework for video-based person re-id, which consists of two main modules: Synchronized Transformation (ST) and Intra-clip Aggregation (ICA). The former module augments intra-clip frames with the same probability and the same operation, while the latter leverages two-level intra-clip encoding to generate more discriminative clip-level features. To confirm the advantage of synchronized transformation, we conduct ablation studies with different synchronized transformation schemes. We also perform cross-dataset experiments to better understand the generality of our method. Extensive experiments on three benchmark datasets demonstrate that our framework outperforms most recent state-of-the-art methods.
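The core idea behind Synchronized Transformation, as described in the abstract, is that random augmentation decisions are drawn once per clip and applied identically to every frame, so inter-frame alignment within the clip is preserved. Below is a minimal sketch of such a synchronized clip-level augmentation; it is an illustration under assumed parameters (crop size, flip probability), not the authors' released implementation.

```python
import random
import torchvision.transforms.functional as F


class SynchronizedTransform:
    """Sample augmentation parameters once per clip and apply the same
    operation to every frame, preserving inter-frame alignment."""

    def __init__(self, out_size=(256, 128), flip_prob=0.5):
        self.out_size = out_size      # (height, width) of the crop; assumed values
        self.flip_prob = flip_prob    # assumed flip probability

    def __call__(self, clip):
        # clip: list of PIL images belonging to the same tracklet clip
        w, h = clip[0].size
        th, tw = self.out_size

        # Draw the random decisions a single time for the whole clip.
        do_flip = random.random() < self.flip_prob
        top = random.randint(0, max(h - th, 0))
        left = random.randint(0, max(w - tw, 0))

        out = []
        for frame in clip:
            if do_flip:
                frame = F.hflip(frame)
            frame = F.crop(frame, top, left, th, tw)
            out.append(F.to_tensor(frame))
        return out
```

Contrast this with applying an independent random transform to each frame, which would flip or crop frames of the same clip differently and thereby break the spatial correspondence that temporal aggregation relies on.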

URL

http://arxiv.org/abs/1905.01722

PDF

http://arxiv.org/pdf/1905.01722

