Abstract
Visual tracking is a complex problem due to unconstrained appearance variations and dynamic environments. The key problems addressed in this work are the extraction of complementary information from the object's environment via multiple features and adaptation to the target's appearance variations. To this end, we propose a robust object tracking framework based on multi-cue Unified Graph Fusion (UGF) that adapts to the object's appearance. The proposed cross-diffusion of sparse and dense features not only suppresses the deficiencies of individual features but also extracts complementary information from the multiple cues. This iterative process builds robust unified features that are invariant to object deformations, fast motion, and occlusion. The robustness of the unified features also enables a random forest classifier to precisely distinguish the foreground from the background, adding resilience to background clutter. In addition, we present a novel kernel-based adaptation strategy using outlier detection and a transductive reliability metric.
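The abstract does not spell out the cross-diffusion update, so the following is a minimal sketch under the assumption that it resembles a similarity-network-fusion-style scheme, where each cue's dense affinity matrix is diffused through the other cue's sparse kNN kernel so the two cues exchange complementary structure at every iteration. All function names and parameters here are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def row_normalize(W):
    """Normalize each row of an affinity matrix to sum to 1."""
    return W / W.sum(axis=1, keepdims=True)

def knn_sparsify(W, k):
    """Keep only the k largest affinities per row (sparse kernel)."""
    S = np.zeros_like(W)
    idx = np.argsort(W, axis=1)[:, -k:]
    rows = np.arange(W.shape[0])[:, None]
    S[rows, idx] = W[rows, idx]
    return row_normalize(S)

def cross_diffuse(W1, W2, k=5, n_iter=20):
    """Cross-diffuse two cues' affinity matrices (hypothetical sketch):
    each dense matrix is propagated through the *other* cue's sparse
    kNN kernel, suppressing cue-specific noise while fusing the
    complementary information into a unified affinity."""
    P1, P2 = row_normalize(W1), row_normalize(W2)
    S1, S2 = knn_sparsify(W1, k), knn_sparsify(W2, k)
    for _ in range(n_iter):
        P1_next = S1 @ P2 @ S1.T
        P2_next = S2 @ P1 @ S2.T
        P1, P2 = row_normalize(P1_next), row_normalize(P2_next)
    return (P1 + P2) / 2  # unified affinity over both cues
```

In such a scheme, the sparse kernels act as reliable local neighborhoods that constrain the diffusion, which is one plausible reading of how the fusion "suppresses the individual feature deficiencies" mentioned above.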
URL
http://arxiv.org/abs/1812.06407