
Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning

2019-01-08
Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien

Abstract

We consider the problem of detecting out-of-distribution (OOD) samples in deep reinforcement learning. In a value-based reinforcement learning setting, we propose to use uncertainty estimation techniques directly on the agent's value-estimating neural network to detect OOD samples. The focus of our work lies in analyzing the suitability of approximate Bayesian inference methods and related ensembling techniques that generate uncertainty estimates. Although prior work has shown that dropout-based variational inference techniques and bootstrap-based approaches can be used to model epistemic uncertainty, their suitability for detecting OOD samples in deep reinforcement learning remains an open question. Our results show that uncertainty estimation can be used to differentiate in- from out-of-distribution samples. Over the complete training process of the reinforcement learning agents, bootstrap-based approaches tend to produce more reliable epistemic uncertainty estimates than dropout-based approaches.
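To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of dropout-based epistemic uncertainty estimation on a Q-network, as described in the abstract. The network architecture, the `QNet` and `mc_dropout_uncertainty` names, the number of stochastic forward passes, and the thresholding rule are all hypothetical illustrative choices.

```python
"""Sketch: MC-dropout epistemic uncertainty on a Q-network for OOD detection."""
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small value network with dropout layers, kept active at inference time."""

    def __init__(self, state_dim: int, num_actions: int, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Dropout(p),  # dropout makes repeated forward passes stochastic
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(64, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def mc_dropout_uncertainty(q_net: QNet, state: torch.Tensor,
                           n_samples: int = 30) -> torch.Tensor:
    """Epistemic uncertainty score: variance of Q-values over stochastic passes."""
    q_net.train()  # keep dropout active (MC dropout); no weights are updated here
    with torch.no_grad():
        # Shape: (n_samples, batch, num_actions)
        samples = torch.stack([q_net(state) for _ in range(n_samples)])
    q_net.eval()
    # Variance across passes, averaged over actions, gives a scalar score per state.
    return samples.var(dim=0).mean(dim=-1)


if __name__ == "__main__":
    q_net = QNet(state_dim=4, num_actions=2)
    in_dist = torch.randn(8, 4)          # toy stand-in for in-distribution states
    out_dist = torch.randn(8, 4) * 10.0  # toy stand-in for OOD states
    score_in = mc_dropout_uncertainty(q_net, in_dist)
    score_out = mc_dropout_uncertainty(q_net, out_dist)
    # Flag a state as OOD when its score exceeds a calibrated threshold
    # (the calibration rule below is a hypothetical choice).
    threshold = score_in.mean() + 3 * score_in.std()
    print("OOD flags:", (score_out > threshold).tolist())
```

A bootstrap-based variant, the second family of methods the paper compares, would replace the stochastic forward passes with the spread of Q-value predictions across independently trained ensemble members (or heads), using the same variance-as-uncertainty score.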

URL

http://arxiv.org/abs/1901.02219

PDF

http://arxiv.org/pdf/1901.02219

