
Fusing Bird View LIDAR Point Cloud and Front View Camera Image for Deep Object Detection

2018-02-14
Zining Wang, Wei Zhan, Masayoshi Tomizuka

Abstract

We propose a new method for fusing a LIDAR point cloud and camera-captured images in a deep convolutional neural network (CNN). The proposed method constructs a new layer, called the non-homogeneous pooling layer, to transform features between the bird view map and the front view map. The sparse LIDAR point cloud is used to construct the mapping between the two maps. The pooling layer allows efficient fusion of bird view and front view features at any stage of the network, which is favorable for 3D object detection using camera-LIDAR fusion in autonomous driving scenarios. A corresponding deep CNN is designed and tested on the KITTI bird view object detection dataset, where it produces 3D bounding boxes from the bird view map. Compared with other fusion-based object detection networks, the proposed method shows a particular benefit for pedestrian detection in the bird view.
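To make the idea of pooling features across views via the sparse point cloud more concrete, here is a minimal NumPy sketch of one way such a mapping could work: each LIDAR point is projected into the front-view image to gather a feature vector and simultaneously discretized into a bird-view grid cell, where the gathered features are mean-pooled. This is only an illustration under assumed conventions (camera-frame points, a pinhole intrinsic matrix `K`, hypothetical argument names), not the paper's actual non-homogeneous pooling layer, which is implemented inside the network and fuses features in both directions.

```python
import numpy as np

def pool_front_to_bird(front_feat, points, K, bev_range, bev_shape):
    """Pool front-view features into a bird-view grid using LIDAR points
    as the sparse correspondence between the two views (illustrative only).

    front_feat : (H, W, C) front-view feature map
    points     : (N, 3) LIDAR points in the camera frame (x right, y down, z forward)
    K          : (3, 3) camera intrinsic matrix
    bev_range  : ((x_min, x_max), (z_min, z_max)) metric extent of the bird-view grid
    bev_shape  : (rows, cols) of the bird-view grid
    """
    H, W, C = front_feat.shape
    rows, cols = bev_shape
    (x_min, x_max), (z_min, z_max) = bev_range

    # Keep only points in front of the camera.
    pts = points[points[:, 2] > 0.1]

    # Project the points into pixel coordinates with the pinhole model.
    uvw = pts @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    # Discretize the same points into bird-view cells (x -> column, z -> row).
    col = ((pts[:, 0] - x_min) / (x_max - x_min) * cols).astype(int)
    row = ((pts[:, 2] - z_min) / (z_max - z_min) * rows).astype(int)

    # Keep only points that land inside both views.
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H) & \
         (col >= 0) & (col < cols) & (row >= 0) & (row < rows)
    u, v, row, col = u[ok], v[ok], row[ok], col[ok]

    # Scatter the gathered front-view features into the bird-view grid and
    # average over all points falling in the same cell (mean pooling).
    bev = np.zeros((rows, cols, C), dtype=front_feat.dtype)
    count = np.zeros((rows, cols, 1), dtype=front_feat.dtype)
    np.add.at(bev, (row, col), front_feat[v, u])
    np.add.at(count, (row, col), 1.0)
    return bev / np.maximum(count, 1.0)
```

In a network, the gather/scatter above would be expressed with differentiable indexing ops so that gradients flow back to the front-view feature map, which is what allows the fusion to happen at any stage of the CNN.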

URL

https://arxiv.org/abs/1711.06703

PDF

https://arxiv.org/pdf/1711.06703

