Abstract
In this work we describe a novel motion-guided method for targetless self-calibration of a LiDAR and a camera, and use the re-projection of LiDAR points onto the image reference frame for real-time depth upsampling. The calibration parameters are estimated by optimizing an objective function that penalizes distances between 2D and re-projected 3D motion vectors obtained from time-synchronized image and point cloud sequences. For upsampling, we propose a simple yet effective and time-efficient formulation that minimizes depth gradients subject to an equality constraint involving the LiDAR measurements. We test our algorithms on real data from urban environments and demonstrate that both methods are effective and suitable for mobile robotics and autonomous vehicle applications with real-time requirements.
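A plausible way to write the upsampling formulation described above is sketched below; the notation is ours and the exact penalty is an assumption, since the abstract only states that depth gradients are minimized subject to an equality constraint on the LiDAR measurements.

```latex
% Hypothetical sketch of the depth-upsampling objective (notation ours):
% D is the dense depth map to recover, \Omega the set of pixels hit by
% re-projected LiDAR points, and z_p the measured depth at pixel p.
\min_{D} \; \sum_{p} \lVert \nabla D(p) \rVert^{2}
\quad \text{subject to} \quad D(p) = z_p \quad \forall p \in \Omega
```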
URL
http://arxiv.org/abs/1803.10681