papers AI Learner

Affordance Learning In Direct Perception for Autonomous Driving

2019-03-20
Chen Sun, Jean M. Uwabeza Vianney, Dongpu Cao

Abstract

Recent developments in autonomous driving involve high-level computer vision and detailed road scene understanding. Today, most autonomous vehicles use a mediated perception approach for path planning and control, which relies heavily on high-definition 3D maps and real-time sensors. Recent research efforts aim to substitute these massive HD maps with coarse road attributes. In this paper, we follow the direct perception approach to train a deep neural network for affordance learning in autonomous driving. Our goal in this work is to develop an affordance learning model based on freely available Google Street View panoramas and OpenStreetMap road vector attributes. Driving scene understanding can be achieved by learning affordances from the images captured by car-mounted cameras. Such scene understanding by learning affordances may be useful for corroborating base maps such as HD maps, so that the required data storage space is minimized and the data remain available for real-time processing. We experimentally compare the road-attribute identification capability of human volunteers with that of our model. Our results indicate that this method could serve as a cheaper way to collect training data for autonomous driving. The cross-validation results also indicate the effectiveness of our model.
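The core contrast in the abstract is between mediated perception (reconstruct the full scene, then plan) and direct perception (regress a compact affordance vector straight from the image). The idea can be sketched with a toy linear stand-in for the deep network's regression head; the affordance names, feature dimensions, and synthetic data below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy "direct perception" setup: map an image feature vector directly to a
# small affordance vector (here imagined as lane offset, heading angle, and
# lane count) instead of reconstructing the full 3D scene.
# All shapes and affordance names are hypothetical placeholders.

rng = np.random.default_rng(0)
n_samples, n_features, n_affordances = 200, 64, 3

# Synthetic "image features" and a hidden ground-truth linear mapping
# standing in for real street-view imagery and map-derived labels.
X = rng.normal(size=(n_samples, n_features))
W_true = rng.normal(size=(n_features, n_affordances))
Y = X @ W_true + 0.01 * rng.normal(size=(n_samples, n_affordances))

# A least-squares fit plays the role of training the regression head.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

pred = X @ W_hat
mse = float(np.mean((pred - Y) ** 2))
print(f"training MSE: {mse:.5f}")
```

In the paper's setting, `X` would come from a convolutional backbone applied to Google Street View panoramas and `Y` from OpenStreetMap road attributes; the point of the sketch is only that the target is a handful of driving-relevant numbers, not a full scene model.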

URL

http://arxiv.org/abs/1903.08746

PDF

http://arxiv.org/pdf/1903.08746

