
Contextual Markov Decision Processes using Generalized Linear Models

2019-03-14
Aditya Modi, Ambuj Tewari

Abstract

We consider the recently proposed reinforcement learning (RL) framework of Contextual Markov Decision Processes (CMDP), where the agent has a sequence of episodic interactions with tabular environments chosen from a possibly infinite set. The parameters of these environments depend on a context vector that is available to the agent at the start of each episode. In this paper, we propose a no-regret online RL algorithm for the setting where the MDP parameters are obtained from the context using generalized linear models (GLMs). The proposed algorithm, GL-ORL, relies on efficient online updates and is also memory efficient. Our analysis of the algorithm gives new results in the logit link case and improves previous bounds in the linear case. The algorithm uses efficient Online Newton Step updates to build confidence sets. Moreover, for any strongly convex link function, we show a generic conversion from any online no-regret algorithm to confidence sets.
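As a rough illustration of the GLM parameterization the abstract describes, the sketch below maps a context vector to next-state transition probabilities for a single (state, action) pair through a multinomial-logit link. All names, shapes, and the choice of link are illustrative assumptions, not the paper's notation or algorithm:

```python
import numpy as np

def glm_transition_probs(context, weights):
    """Sketch of a logit-link GLM: turn a context vector into a
    next-state distribution for one (state, action) pair.
    `weights` has one row of parameters per next state (hypothetical
    shapes, chosen for illustration only)."""
    logits = weights @ context          # shape: (num_next_states,)
    logits -= logits.max()              # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum()                  # normalize into a distribution

rng = np.random.default_rng(0)
d, n_states = 4, 3                      # context dimension, next states
W = rng.normal(size=(n_states, d))      # illustrative per-(s,a) parameters
x = rng.normal(size=d)                  # context observed at episode start
probs = glm_transition_probs(x, W)      # valid probability vector
```

In the paper's setting, an online learner would maintain estimates of parameters like `W` across episodes (e.g. via Online Newton Step updates) and use the resulting confidence sets for optimistic planning; the snippet only shows the forward map from context to MDP parameters.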

URL

http://arxiv.org/abs/1903.06187

PDF

http://arxiv.org/pdf/1903.06187
