
Bi-directional Value Learning for Risk-aware Planning Under Uncertainty

2019-02-15
Sung-Kyun Kim, Rohan Thakker, Ali-akbar Agha-mohammadi

Abstract

Decision-making under uncertainty is a crucial ability for autonomous systems. In its most general form, this problem can be formulated as a Partially Observable Markov Decision Process (POMDP). The solution policy of a POMDP can be implicitly encoded as a value function. In partially observable settings, the value function is typically learned via forward simulation of the system evolution. Focusing on accurate and long-range risk assessment, we propose a novel method in which the value function is learned in two phases via a bi-directional search in belief space. A backward value learning process provides a long-range, risk-aware base policy. A forward value learning process ensures local optimality and updates the policy via forward simulations. We consider a class of scalable, continuous-space rover navigation problems (RNP) to assess the safety, scalability, and optimality of the proposed algorithm. The results demonstrate the capabilities of the proposed algorithm in evaluating the long-range risk/safety of the planner while addressing continuous problems with long planning horizons.
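The two-phase idea can be illustrated with a heavily simplified sketch. The paper learns a value function over beliefs; here, as an assumption for illustration only, the problem is reduced to a tiny deterministic, fully observable graph (not the paper's belief-space formulation or its rover navigation benchmark). Phase 1 runs value iteration backward from the goal to obtain a long-range, risk-aware base value function; Phase 2 rolls the greedy policy forward and refines the values locally with TD-style updates:

```python
# Tiny deterministic graph: the short path S->R->G crosses a risky
# state R; the longer path S->A->B->G is safe.
EDGES = {"S": ["R", "A"], "R": ["G"], "A": ["B"], "B": ["G"], "G": []}

def reward(s2):
    # Unit step cost plus a large penalty for entering the risky state.
    return -1.0 - (10.0 if s2 == "R" else 0.0)

# Phase 1 (backward): value iteration from the goal yields a
# long-range, risk-aware base value function over the whole graph.
V = {s: 0.0 for s in EDGES}
for _ in range(10):
    for s in EDGES:
        if EDGES[s]:
            V[s] = max(reward(s2) + V[s2] for s2 in EDGES[s])

def greedy(s):
    # One-step lookahead against the current value function.
    return max(EDGES[s], key=lambda s2: reward(s2) + V[s2])

# Phase 2 (forward): roll out the greedy policy from the start and
# refine V locally along the visited states (TD-style update).
alpha = 0.5
s = "S"
while EDGES[s]:
    s2 = greedy(s)
    V[s] += alpha * (reward(s2) + V[s2] - V[s])
    s = s2
```

Because the backward phase already accounts for the -10 penalty at R, the greedy rollout from S takes the longer but safe route through A and B; in the paper this interplay happens in belief space, where the backward phase supplies the risk-aware base policy that the forward simulations then locally improve.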

URL

http://arxiv.org/abs/1902.05698

PDF

http://arxiv.org/pdf/1902.05698
