
Uncertainty-Aware Data Aggregation for Deep Imitation Learning

2019-05-07
Yuchen Cui, David Isele, Scott Niekum, Kikuo Fujimura

Abstract

Estimating statistical uncertainties allows autonomous agents to communicate their confidence during task execution and is important for applications in safety-critical domains such as autonomous driving. In this work, we present the uncertainty-aware imitation learning (UAIL) algorithm for improving end-to-end control systems via data aggregation. UAIL applies Monte Carlo Dropout to estimate uncertainty in the control output of end-to-end systems, using states where it is uncertain to selectively acquire new training data. In contrast to prior data aggregation algorithms that force human experts to visit sub-optimal states at random, UAIL can anticipate its own mistakes and switch control to the expert in order to prevent visiting a series of sub-optimal states. Our experimental results from simulated driving tasks demonstrate that our proposed uncertainty estimation method can be leveraged to reliably predict infractions. Our analysis shows that UAIL outperforms existing data aggregation algorithms on a series of benchmark tasks.
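The paper itself provides no code here; as a rough illustration of the core idea described in the abstract (Monte Carlo Dropout applied to an end-to-end control network, with control handed to the expert when predictive variance is high), the sketch below keeps dropout active at inference time and aggregates several stochastic forward passes. The network architecture, dropout rate, sample count, and the `UNCERTAINTY_THRESHOLD` constant are all hypothetical placeholders, not values from the paper.

```python
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Hypothetical end-to-end control network with dropout layers."""

    def __init__(self, state_dim=64, action_dim=2, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, action_dim),
        )

    def forward(self, state):
        return self.net(state)


def mc_dropout_uncertainty(model, state, n_samples=20):
    """Estimate the predictive mean and variance of the control output by
    keeping dropout active and averaging several stochastic forward passes
    (Monte Carlo Dropout)."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(state) for _ in range(n_samples)])
    model.eval()
    return samples.mean(dim=0), samples.var(dim=0)


if __name__ == "__main__":
    # Hypothetical usage: hand control to the expert when uncertainty is high.
    policy = PolicyNet()
    state = torch.randn(1, 64)        # placeholder observation
    action, variance = mc_dropout_uncertainty(policy, state)
    UNCERTAINTY_THRESHOLD = 0.1       # assumed tuning parameter
    if variance.max().item() > UNCERTAINTY_THRESHOLD:
        print("Uncertain state: switch control to the expert and record data.")
    else:
        print("Confident: execute predicted action", action)
```

In a data-aggregation loop of this kind, the states flagged as uncertain are the ones added to the training set with expert labels, which is the selective-acquisition behavior the abstract attributes to UAIL.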

URL

http://arxiv.org/abs/1905.02780

PDF

http://arxiv.org/pdf/1905.02780

