papers AI Learner

Successor Options: An Option Discovery Framework for Reinforcement Learning

2019-05-14
Rahul Ramesh, Manan Tomar, Balaraman Ravindran

Abstract

The options framework in reinforcement learning models the notion of a skill, or a temporally extended sequence of actions. The discovery of a reusable set of skills has typically entailed building options that navigate to bottleneck states. This work adopts a complementary approach, where we attempt to discover options that navigate to landmark states. These states are prototypical representatives of well-connected regions and can hence access the associated region with relative ease. In this work, we propose Successor Options, which leverages Successor Representations to build a model of the state space. The intra-option policies are learnt using a novel pseudo-reward, and the model scales easily to high-dimensional spaces. Additionally, we propose an Incremental Successor Options model that iterates between constructing Successor Representations and building options, which is useful when robust Successor Representations cannot be built solely from primitive actions. We demonstrate the efficacy of our approach on a collection of grid-worlds, and on the high-dimensional robotic control environment of Fetch.
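The Successor Representation underlying this approach can be estimated online with a simple TD(0) update: for each state, it accumulates the expected discounted future occupancy of every other state under the current policy. Below is a minimal tabular sketch of such an update on a toy three-state chain; the environment, function names, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def update_sr(psi, s, s_next, alpha=0.1, gamma=0.95):
    """One TD(0) update of a tabular Successor Representation.

    psi[s, j] estimates the expected discounted future occupancy of
    state j when starting from state s under the current policy.
    (Toy illustration, not the paper's exact implementation.)
    """
    one_hot = np.zeros(psi.shape[1])
    one_hot[s] = 1.0  # current state contributes occupancy 1 now
    td_error = one_hot + gamma * psi[s_next] - psi[s]
    psi[s] += alpha * td_error
    return psi

# Toy deterministic chain: 0 -> 1 -> 2, then stay at 2.
n_states = 3
psi = np.zeros((n_states, n_states))
for _ in range(2000):
    for s, s_next in [(0, 1), (1, 2), (2, 2)]:
        update_sr(psi, s, s_next)

# psi[2, 2] approaches 1 / (1 - gamma) = 20, since the policy
# stays in state 2 forever; psi[0, 0] approaches 1, since state 0
# is visited exactly once.
```

In the paper, landmark states are obtained by clustering these SR vectors (the cluster representatives), and each option's pseudo-reward is derived from the landmark's Successor Representation, so that the intra-option policy learns to navigate toward that landmark.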

URL

https://arxiv.org/abs/1905.05731

PDF

https://arxiv.org/pdf/1905.05731

