Abstract
In this work, we propose a novel framework named Region-Aware Network (RANet) to achieve anti-confusing human pose estimation under heavy occlusion, nearby persons, and symmetric appearance. Specifically, our method addresses three key aspects of human pose estimation, i.e., data augmentation, feature learning, and prediction fusion. First, we propose Parsing-based Data Augmentation (PDA) to generate abundant training data with confusing textures. Second, we not only propose a Feature Pyramid Stem (FPS) module to learn better low-level features in the lower stages, but also incorporate an Effective Region Extraction (ERE) module to investigate better human body-specific features. Third, we introduce Cascade Voting Fusion (CVS) to explicitly leverage visibility to exclude deflected predictions and obtain the final accurate pose estimates. Experimental results demonstrate the superiority of our method over the state of the art, with significant improvements on two popular benchmark datasets, MPII and LSP.
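To make the visibility-driven fusion idea concrete, below is a minimal sketch of how per-stage keypoint predictions could be combined while excluding low-visibility (deflected) ones. The function name, array shapes, and threshold are illustrative assumptions; this is not the paper's actual CVS implementation.

```python
import numpy as np

def visibility_gated_fusion(preds, vis_scores, vis_thresh=0.5):
    """Fuse keypoint predictions from several cascade stages,
    excluding stages whose visibility score is low (assumed scheme).

    preds      : (S, K, 2) array of (x, y) predictions from S stages for K joints
    vis_scores : (S, K)    per-stage, per-joint visibility confidences in [0, 1]
    Returns    : (K, 2)    fused keypoint coordinates
    """
    preds = np.asarray(preds, dtype=float)
    vis = np.asarray(vis_scores, dtype=float)
    # Zero out (exclude) predictions whose visibility falls below the threshold.
    weights = np.where(vis >= vis_thresh, vis, 0.0)
    # Normalize per joint; fall back to uniform weights if every stage is rejected.
    norm = weights.sum(axis=0, keepdims=True)
    weights = np.where(norm > 0, weights / np.maximum(norm, 1e-8), 1.0 / vis.shape[0])
    # Weighted average over stages yields the final keypoint estimate.
    return (preds * weights[..., None]).sum(axis=0)
```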
URL
http://arxiv.org/abs/1905.00996