papers AI Learner

Tight Regret Bounds for Model-Based Reinforcement Learning with Greedy Policies

2019-05-27
Yonathan Efroni, Nadav Merlis, Mohammad Ghavamzadeh, Shie Mannor

Abstract

State-of-the-art efficient model-based Reinforcement Learning (RL) algorithms typically act by iteratively solving empirical models, i.e., by performing \emph{full-planning} on Markov Decision Processes (MDPs) built from the gathered experience. In this paper, we focus on model-based RL in the finite-state finite-horizon MDP setting and establish that exploring with \emph{greedy policies}, which act by \emph{1-step planning}, can achieve tight minimax performance in terms of regret, $\tilde{\mathcal{O}}(\sqrt{HSAT})$. Thus, full-planning in model-based RL can be avoided altogether without any performance degradation, and, by doing so, the computational complexity decreases by a factor of $S$. The results are based on a novel analysis of real-time dynamic programming, which is then extended to model-based RL. Specifically, we generalize existing algorithms that perform full-planning to ones that act by 1-step planning. For these generalizations, we prove regret bounds with the same rate as their full-planning counterparts.
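To make the contrast concrete, here is a minimal sketch (not the paper's algorithms) of full-planning versus 1-step greedy planning on a tabular finite-horizon empirical MDP. All names (`full_planning`, `one_step_planning`, `P_hat`, `r_hat`, the sizes `S`, `A`, `H`) are illustrative assumptions, and the exploration bonuses used by the actual algorithms are omitted; the point is only the cost difference between a full backward induction and a single Bellman backup at the visited state.

```python
import numpy as np

# Illustrative sketch only: contrasts full-planning (backward induction over the
# whole empirical MDP) with greedy / 1-step planning (one Bellman backup at the
# state actually visited). Exploration bonuses are omitted.

S, A, H = 5, 3, 4                               # states, actions, horizon (placeholder sizes)
rng = np.random.default_rng(0)

# Empirical model built from gathered experience (random placeholders here).
P_hat = rng.dirichlet(np.ones(S), size=(S, A))  # P_hat[s, a] is a distribution over next states
r_hat = rng.uniform(size=(S, A))                # empirical mean rewards in [0, 1]

def full_planning(P_hat, r_hat):
    """Backward induction on the empirical MDP: O(H * S^2 * A) per call."""
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))
    for h in reversed(range(H)):
        Q[h] = r_hat + P_hat @ V[h + 1]         # Bellman backup for every (s, a)
        V[h] = Q[h].max(axis=1)
    return Q

def one_step_planning(Q, V_next, s, h, P_hat, r_hat):
    """Greedy update: refresh Q only at the visited state s and step h,
    reusing the current next-step value estimate. Costs O(S * A)."""
    Q[h, s] = r_hat[s] + P_hat[s] @ V_next      # backup at one state only
    return int(np.argmax(Q[h, s]))              # act greedily w.r.t. the updated Q

# Usage: solve the whole model once, versus a single greedy backup at (h=0, s=2).
Q_full = full_planning(P_hat, r_hat)
Q = np.full((H, S, A), float(H))                # optimistic initialization
V_next = Q[1].max(axis=1)
a = one_step_planning(Q, V_next, s=2, h=0, P_hat=P_hat, r_hat=r_hat)
print("greedy action at (h=0, s=2):", a)
```

The per-step saving is what the abstract refers to: the 1-step update touches a single state rather than all $S$ of them, which removes a factor of $S$ from the planning cost while, per the paper's analysis, keeping the same regret rate.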

URL

http://arxiv.org/abs/1905.11527

PDF

http://arxiv.org/pdf/1905.11527

