
A Bayesian Approach to Robust Reinforcement Learning

2019-05-20
Esther Derman, Daniel Mankowitz, Timothy Mann, Shie Mannor

Abstract

Robust Markov Decision Processes (RMDPs) aim to ensure robustness with respect to changing or adversarial system behavior. In this framework, transitions are modeled as arbitrary elements of a known, properly structured uncertainty set, and a robust optimal policy is derived under the worst-case scenario. In this study, we address the problem of learning in RMDPs using a Bayesian approach. We introduce the Uncertainty Robust Bellman Equation (URBE), which encourages safe exploration by adapting the uncertainty set to new observations while preserving robustness. We propose a URBE-based algorithm, DQN-URBE, that scales this method to higher-dimensional domains. Our experiments show that the derived URBE-based strategy yields a better trade-off between conservativeness and robustness in the presence of model misspecification. In addition, we show that the DQN-URBE algorithm adapts significantly faster to changing dynamics online than existing robust techniques with fixed uncertainty sets.
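The worst-case planning the abstract describes can be illustrated with standard robust value iteration: an adversary picks the least favorable transition model from each state-action uncertainty set before the agent maximizes over actions. The sketch below is a minimal toy example of this max-min backup; the specific MDP, rewards, and finite uncertainty sets are invented for illustration and are not from the paper (the paper's contribution, the URBE, goes beyond this fixed-set scheme).

```python
import numpy as np

# Hypothetical toy robust MDP: 2 states, 2 actions, discount 0.9.
# For each (s, a), the uncertainty set is a finite list of candidate
# transition distributions over next states (each vector sums to 1).
n_states, n_actions, gamma = 2, 2, 0.9
rewards = np.array([[1.0, 0.0],
                    [0.0, 2.0]])  # rewards[s, a]
uncertainty_sets = [
    [[np.array([0.9, 0.1]), np.array([0.7, 0.3])],   # s=0, a=0
     [np.array([0.5, 0.5]), np.array([0.2, 0.8])]],  # s=0, a=1
    [[np.array([0.1, 0.9]), np.array([0.3, 0.7])],   # s=1, a=0
     [np.array([0.6, 0.4]), np.array([0.8, 0.2])]],  # s=1, a=1
]

def robust_value_iteration(tol=1e-8, max_iter=1000):
    """Max-min Bellman backups: the adversary selects the worst
    transition model in the uncertainty set at every backup."""
    V = np.zeros(n_states)
    for _ in range(max_iter):
        V_new = np.empty_like(V)
        for s in range(n_states):
            q_values = []
            for a in range(n_actions):
                worst = min(p @ V for p in uncertainty_sets[s][a])
                q_values.append(rewards[s, a] + gamma * worst)
            V_new[s] = max(q_values)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

V_robust = robust_value_iteration()
print(V_robust)
```

Because the inner minimization always assumes the worst model in the set, the resulting policy can be overly conservative when the true dynamics are benign; this is the trade-off the URBE approach targets by adapting the uncertainty set from data.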

URL

http://arxiv.org/abs/1905.08188

PDF

http://arxiv.org/pdf/1905.08188
