
Learning Local RGB-to-CAD Correspondences for Object Pose Estimation

2019-05-06
Georgios Georgakis, Srikrishna Karanam, Ziyan Wu, Jana Kosecka

Abstract

We consider the problem of 3D object pose estimation. While much recent work has focused on the RGB domain, the reliance on accurately annotated images limits the generalizability and scalability of these methods. On the other hand, the easily available CAD models of objects are rich sources of data, providing a large number of synthetically rendered images. In this paper, we address this key limitation of existing methods, their need for expensive 3D pose annotations, by proposing a new method that matches RGB images to CAD models for object pose estimation. Our key innovations compared to existing work include removing the need for either real-world textures for CAD models or explicit 3D pose annotations for RGB images. We achieve this through a series of objectives that learn how to select keypoints and enforce viewpoint and modality invariance across RGB images and CAD model renderings. We conduct extensive experiments to demonstrate that the proposed method can reliably estimate object pose in RGB images, as well as generalize to object instances not seen during training.
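
The abstract describes objectives that enforce viewpoint and modality invariance between local features of RGB images and CAD renderings. The sketch below is not the paper's implementation; it only illustrates the general idea with a triplet-style metric-learning loss on L2-normalized local descriptors, pulling an RGB keypoint descriptor toward its corresponding CAD-rendering descriptor and pushing it away from a non-corresponding one. The encoder architecture, descriptor dimension, and margin are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' architecture) of a cross-modality
# descriptor-matching objective between RGB patches and CAD renderings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalDescriptorNet(nn.Module):
    """Small conv encoder mapping an image patch to an L2-normalized descriptor."""

    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)


def modality_invariance_loss(rgb_desc, cad_pos_desc, cad_neg_desc, margin=0.2):
    """Triplet loss: RGB descriptor should be closer to its matching CAD-render
    descriptor than to a non-matching one, by at least `margin`."""
    d_pos = (rgb_desc - cad_pos_desc).pow(2).sum(dim=1)
    d_neg = (rgb_desc - cad_neg_desc).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()


if __name__ == "__main__":
    net = LocalDescriptorNet()
    # Toy batch of 32x32 patches: real RGB crops, matching CAD renders, mismatches.
    rgb = torch.randn(8, 3, 32, 32)
    cad_pos = torch.randn(8, 3, 32, 32)
    cad_neg = torch.randn(8, 3, 32, 32)
    loss = modality_invariance_loss(net(rgb), net(cad_pos), net(cad_neg))
    loss.backward()
    print(float(loss))
```

In this kind of setup, sharing the encoder weights across the RGB and rendered inputs is what drives the descriptors toward a modality-invariant embedding; at test time, pose can then be estimated by matching RGB keypoint descriptors against descriptors precomputed from CAD renderings at known viewpoints.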

URL

http://arxiv.org/abs/1811.07249

PDF

http://arxiv.org/pdf/1811.07249
