
Combining Experience Replay with Exploration by Random Network Distillation

2019-05-18
Francesco Sovrano

Abstract

Our work is a simple extension of the paper “Exploration by Random Network Distillation”. More specifically, we show how to efficiently combine intrinsic rewards with experience replay in order to achieve more efficient and robust exploration (with respect to PPO/RND) and, consequently, better results in terms of agent performance and sample efficiency. We do this through a new technique named Prioritized Oversampled Experience Replay (POER), built upon a definition of which experience is important to replay. Finally, we evaluate our technique on the well-known Atari game Montezuma’s Revenge and other hard-exploration Atari games.
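The abstract does not specify how POER computes its priorities, so the following is only a minimal sketch, assuming the intrinsic bonus is the RND prediction error and that it is reused as a sampling priority. Class names such as `RNDBonus` and `PrioritizedReplay` are illustrative, not the paper's actual implementation.

```python
# Sketch: combining an RND-style intrinsic bonus with a prioritized replay
# buffer. This is an assumed illustration, not the paper's exact POER method.
import numpy as np
import torch
import torch.nn as nn

class RNDBonus:
    """Intrinsic reward = prediction error of a trained predictor network
    against a fixed, randomly initialized target network (as in RND)."""
    def __init__(self, obs_dim, feat_dim=64, lr=1e-4):
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                    nn.Linear(128, feat_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)  # target stays random and frozen
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                       nn.Linear(128, feat_dim))
        self.opt = torch.optim.Adam(self.predictor.parameters(), lr=lr)

    def bonus(self, obs, update=True):
        obs = torch.as_tensor(obs, dtype=torch.float32)
        err = ((self.predictor(obs) - self.target(obs)) ** 2).mean(dim=-1)
        if update:
            self.opt.zero_grad()
            err.mean().backward()
            self.opt.step()
        return err.detach().numpy()  # high error = novel observation

class PrioritizedReplay:
    """Replay buffer that oversamples transitions with high priority
    (here: the intrinsic bonus recorded at insertion time)."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, priority):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(float(priority) + 1e-6)

    def sample(self, batch_size):
        p = np.array(self.priorities) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=p)
        return [self.data[i] for i in idx], idx

# Usage: store transitions with the RND bonus as priority, then sample
# batches biased toward novel (high-bonus) experience.
rnd = RNDBonus(obs_dim=4)
buf = PrioritizedReplay()
obs = np.random.randn(32, 4).astype(np.float32)
for o, b in zip(obs, rnd.bonus(obs)):
    buf.add({"obs": o}, priority=b)
batch, indices = buf.sample(8)
```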

URL

http://arxiv.org/abs/1905.07579

PDF

http://arxiv.org/pdf/1905.07579

