
Towards Self-Supervised High Level Sensor Fusion

2019-02-12
Qadeer Khan, Torsten Schön, Patrick Wenzel

Abstract

In this paper, we present a framework to control a self-driving car by fusing raw information from RGB images and depth maps. A deep neural network architecture is used to map the vision and depth information, respectively, to steering commands. Fusing information from these two sensor sources provides redundancy and fault tolerance in the presence of sensor failures: even if one of the input sensors fails to produce the correct output, the other functioning sensor can still maneuver the car. Such redundancy is crucial in the safety-critical application of self-driving cars. The experimental results show that our method is capable of learning to use the relevant sensor information even when one of the sensors fails, without any explicit signal.
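The post does not include code, so the sketch below is only a rough illustration of the kind of high-level fusion architecture the abstract describes: two encoders (one per modality) whose features are concatenated and mapped to a steering command. The layer sizes, input resolution, and single-angle output are assumptions for illustration, not the authors' actual network; it uses PyTorch, which the paper does not specify.

```python
import torch
import torch.nn as nn


class SensorBranch(nn.Module):
    """Small convolutional encoder for one sensor modality (RGB or depth)."""

    def __init__(self, in_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a 64-dim feature
            nn.Flatten(),
        )

    def forward(self, x):
        return self.encoder(x)


class FusionDrivingNet(nn.Module):
    """Fuses RGB and depth features and regresses a steering command.

    Hypothetical sketch: the branch widths and the single-output head are
    illustrative assumptions, not the architecture from the paper.
    """

    def __init__(self):
        super().__init__()
        self.rgb_branch = SensorBranch(in_channels=3)    # RGB image
        self.depth_branch = SensorBranch(in_channels=1)  # depth map
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 64), nn.ReLU(),
            nn.Linear(64, 1),  # steering angle
        )

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    net = FusionDrivingNet()
    rgb = torch.randn(1, 3, 88, 200)    # dummy RGB frame
    depth = torch.randn(1, 1, 88, 200)  # dummy depth map
    print(net(rgb, depth).shape)        # torch.Size([1, 1])
```

In a setup like this, the redundancy the abstract mentions could be probed by zeroing out one modality at inference time and checking whether the prediction degrades gracefully; how the paper actually trains for or measures this fault tolerance is not stated in the post.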

URL

http://arxiv.org/abs/1902.04272

PDF

http://arxiv.org/pdf/1902.04272
