
2-Entity RANSAC for robust visual localization in changing environment

2019-03-10
Yanmei Jiao, Yue Wang, Bo Fu, Xiaqing Ding, Qimeng Tan, Lei Chen, Rong Xiong

Abstract

Visual localization has attracted considerable attention due to the low cost and stability of cameras, which makes it desirable in many applications such as autonomous driving, inspection robots and unmanned aerial vehicles. However, current visual localization methods still struggle with environmental changes across weather and seasons, as there is significant appearance variation between the map and the query image. The crucial challenge in this situation is that the percentage of outliers, i.e. incorrect feature matches, is high. In this paper, we derive minimal closed-form solutions for 3D-2D localization with the aid of inertial measurements, using only 2 point matches, or 1 point match and 1 line match. These solutions are further utilized in the proposed 2-entity RANSAC, which is more robust to outliers because both line and point features can be used simultaneously and the number of matches required for pose calculation is reduced. Furthermore, we introduce three feature sampling strategies with different advantages, together with an automatic selection mechanism, so that our 2-entity RANSAC can adapt to environments whose distribution of feature types varies across segments. Finally, we evaluate the method on both synthetic and real-world datasets, validating its performance and effectiveness in inter-session scenarios.
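
The closed-form minimal solvers are the paper's core contribution; the sketch below only illustrates the surrounding 2-point RANSAC structure under the assumption that the inertial measurements have already been used to gravity-align both the map and the camera frame, so only the yaw angle and the translation remain unknown. The linear-plus-quadratic formulation and the function names (`solve_2pt_gravity`, `two_entity_ransac`) are illustrative stand-ins, not the paper's actual derivation, and the point-plus-line solver and sampling-strategy selection are omitted.

```python
# Sketch: 2-point RANSAC for 3D-2D localization with known gravity direction.
# Assumptions (not from the paper): gravity is along the camera y-axis, the
# camera-IMU extrinsics are identity, and image points are in normalized
# coordinates. All helper names are hypothetical.
import numpy as np

def rot_y(theta):
    """Rotation about the gravity axis (taken here as the camera y-axis)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def solve_2pt_gravity(X, x):
    """Candidate (R, t) poses from 2 gravity-aligned 3D-2D point matches.

    X : (2, 3) map points, x : (2, 2) normalized image points.
    Unknowns (cos t, sin t, tx, ty, tz) satisfy 4 linear projection
    equations plus cos^2 + sin^2 = 1, i.e. a quadratic in one scalar.
    """
    A, b = [], []
    for (px, py, pz), (u, v) in zip(X, x):
        A.append([px - u * pz, pz + u * px, 1.0, 0.0, -u]); b.append(0.0)
        A.append([-v * pz, v * px, 0.0, 1.0, -v]);          b.append(-py)
    A, b = np.asarray(A), np.asarray(b)
    theta_p = np.linalg.lstsq(A, b, rcond=None)[0]     # particular solution
    n = np.linalg.svd(A)[2][-1]                        # 1-D null space of A
    # Enforce cos^2 + sin^2 = 1 -> quadratic in the null-space coefficient.
    qa = n[0]**2 + n[1]**2
    qb = 2.0 * (theta_p[0] * n[0] + theta_p[1] * n[1])
    qc = theta_p[0]**2 + theta_p[1]**2 - 1.0
    disc = qb**2 - 4.0 * qa * qc
    if qa < 1e-12 or disc < 0.0:
        return []
    poses = []
    for lam in ((-qb + np.sqrt(disc)) / (2 * qa), (-qb - np.sqrt(disc)) / (2 * qa)):
        c, s, tx, ty, tz = theta_p + lam * n
        poses.append((rot_y(np.arctan2(s, c)), np.array([tx, ty, tz])))
    return poses

def two_entity_ransac(X, x, iters=200, thresh=0.01, seed=0):
    """RANSAC over 2-point minimal samples; returns the pose with most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(X), size=2, replace=False)
        for R, t in solve_2pt_gravity(X[idx], x[idx]):
            Xc = X @ R.T + t                           # map points in camera frame
            ok = Xc[:, 2] > 1e-6                       # keep points in front of camera
            proj = Xc[ok, :2] / Xc[ok, 2:3]
            inliers = int(np.sum(np.linalg.norm(proj - x[ok], axis=1) < thresh))
            if inliers > best_inliers:
                best, best_inliers = (R, t), inliers
    return best, best_inliers
```

Because each minimal sample needs only two correspondences, the probability of drawing an all-inlier sample stays workable even at the high outlier ratios described in the abstract, which is the motivation for the reduced sample size.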

URL

https://arxiv.org/abs/1903.03967

PDF

https://arxiv.org/pdf/1903.03967

