
A Cross Entropy based Stochastic Approximation Algorithm for Reinforcement Learning with Linear Function Approximation

2016-09-29
Ajin George Joseph, Shalabh Bhatnagar

Abstract

In this paper, we provide a new algorithm for the problem of prediction in Reinforcement Learning, i.e., estimating the Value Function of a Markov Reward Process (MRP) using the linear function approximation architecture, with memory and computation costs that scale quadratically in the size of the feature set. The algorithm is a multi-timescale variant of the popular Cross Entropy (CE) method, a model-based search method for finding the global optimum of a real-valued function. This is the first time a model-based search method has been used for the prediction problem, and the application of CE to a stochastic setting is a completely unexplored domain. A proof of convergence using the ODE method is provided. The theoretical results are supplemented with experimental comparisons: the algorithm achieves good performance fairly consistently on many RL benchmark problems, demonstrating its competitiveness against least-squares and other state-of-the-art algorithms in terms of computational efficiency, accuracy, and stability.
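
For readers new to the Cross Entropy method, below is a minimal sketch of the generic batch CE method with a Gaussian sampling model, applied to a toy linear value-function fit. It is illustrative only and is not the paper's multi-timescale stochastic-approximation variant; the function names, the 3-state MRP, its features, and all parameters are assumptions made for this example.

```python
import numpy as np

def ce_maximize(f, dim, n_samples=200, n_elite=20, n_iters=60, seed=0):
    """Generic (batch) Cross Entropy method for maximizing f over R^dim.

    Keeps a Gaussian sampling model (mean, std); each iteration samples
    candidates, selects the elite fraction by score, and refits the model
    to the elites so the search concentrates on high-scoring regions.
    """
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        scores = np.array([f(x) for x in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Toy prediction objective: fit a linear value function V_theta(s) = phi(s)^T theta
# to the (here, known) values of a hypothetical 3-state MRP. phi and v_true are
# made up purely for illustration.
phi = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # features, one row per state
v_true = np.array([2.0, 1.5, 1.0])                     # true state values

theta = ce_maximize(lambda t: -np.mean((phi @ t - v_true) ** 2), dim=2)
print("approximate value function:", phi @ theta)
```

Roughly speaking, the paper replaces the batch sample-and-refit loop above with incremental stochastic-approximation updates running on multiple timescales, so the parameters of the sampling model can be tracked from a stream of MRP observations rather than requiring the full batch objective.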

URL

https://arxiv.org/abs/1609.09449

PDF

https://arxiv.org/pdf/1609.09449

