
Scaling up budgeted reinforcement learning

2019-03-03
Nicolas Carrara, Edouard Leurent, Romain Laroche, Tanguy Urvoy, Odalric Maillard, Olivier Pietquin

Abstract

Can we learn a control policy able to adapt its behaviour in real time so as to take any desired amount of risk? The general Reinforcement Learning framework solely aims at optimising a total reward in expectation, which may not be desirable in critical applications. In stark contrast, the Budgeted Markov Decision Process (BMDP) framework implements the notion of risk as a hard constraint on a failure signal. Existing algorithms for solving BMDPs rely on strong assumptions and have so far only been applied to toy examples. In this work, we relax some of these assumptions and demonstrate the scalability of our approach on two practical problems: a spoken dialogue system and an autonomous driving task. On both, we reach performance comparable to Lagrangian Relaxation methods, with a significant improvement in sample and memory efficiency.
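
For context, the BMDP objective can be sketched as a constrained optimisation problem. The formulation below uses standard notation (reward R, failure signal C, discount factor gamma, risk budget beta) and may differ from the paper's exact definitions:

\[
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t)\right]
\quad \text{subject to} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} C(s_t, a_t)\right] \le \beta
\]

Unlike a Lagrangian relaxation, which replaces the hard constraint with a penalty weighted by a fixed multiplier, the budgeted setting conditions the policy on the budget beta itself, which is what lets the agent adjust its level of risk at run time.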

URL

http://arxiv.org/abs/1903.01004

PDF

http://arxiv.org/pdf/1903.01004

