
Towards Characterizing Divergence in Deep Q-Learning

2019-03-21
Joshua Achiam, Ethan Knight, Pieter Abbeel

Abstract

Deep Q-Learning (DQL), a family of temporal difference algorithms for control, employs three techniques collectively known as the 'deadly triad' in reinforcement learning: bootstrapping, off-policy learning, and function approximation. Prior work has demonstrated that together these can lead to divergence in Q-learning algorithms, but the conditions under which divergence occurs are not well-understood. In this note, we give a simple analysis based on a linear approximation to the Q-value updates, which we believe provides insight into divergence under the deadly triad. The central point in our analysis is to consider when the leading order approximation to the deep-Q update is or is not a contraction in the sup norm. Based on this analysis, we develop an algorithm which permits stable deep Q-learning for continuous control without any of the tricks conventionally used (such as target networks, adaptive gradient optimizers, or using multiple Q functions). We demonstrate that our algorithm performs above or near state-of-the-art on standard MuJoCo benchmarks from the OpenAI Gym.
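
The central object in this analysis is the leading-order (linearized) Q-update and whether it is a contraction in the sup norm. The sketch below is a minimal illustration of that idea, not code from the paper: it assumes a linear approximator Q_theta(s,a) = phi(s,a)^T theta, builds the feature kernel K[i,j] = phi_i . phi_j, and checks a simple triangle-inequality sufficient condition under which the operator Q -> Q + alpha * K * diag(rho) * (TQ - Q) is a sup-norm contraction (T being the gamma-contractive Bellman operator). The feature matrix, update distribution rho, and all function names are illustrative assumptions.

```python
# Illustrative sketch only (assumed setup, not the paper's implementation).
import numpy as np

def update_kernel(phi: np.ndarray) -> np.ndarray:
    """Feature kernel K[i, j] = phi_i . phi_j for feature rows phi."""
    return phi @ phi.T

def sup_norm_contraction_check(phi, rho, alpha, gamma):
    """Row-wise sufficient condition for the linearized update operator
        U(Q) = Q + alpha * K @ diag(rho) @ (T(Q) - Q)
    to be a sup-norm contraction, assuming T is a gamma-contraction in sup norm:
        |1 - alpha*rho_i*K_ii| + alpha*sum_{j!=i} rho_j*|K_ij|
            + gamma*alpha*sum_j rho_j*|K_ij|  <  1   for every row i.
    Returns a boolean array saying which rows satisfy the condition.
    """
    K = update_kernel(phi)
    n = K.shape[0]
    ok = np.empty(n, dtype=bool)
    for i in range(n):
        diag_term = abs(1.0 - alpha * rho[i] * K[i, i])
        off_diag = alpha * sum(rho[j] * abs(K[i, j]) for j in range(n) if j != i)
        bellman = gamma * alpha * sum(rho[j] * abs(K[i, j]) for j in range(n))
        ok[i] = diag_term + off_diag + bellman < 1.0
    return ok

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi = rng.normal(size=(6, 4))                      # 6 state-action pairs, 4 features
    phi /= np.linalg.norm(phi, axis=1, keepdims=True)  # unit-norm features
    rho = np.full(6, 1.0 / 6.0)                        # uniform update distribution
    print(sup_norm_contraction_check(phi, rho, alpha=0.5, gamma=0.99))
```

When every row passes, the bound ||U(Q1) - U(Q2)||_inf <= max_i (row bound_i) * ||Q1 - Q2||_inf guarantees contraction; large off-diagonal kernel entries (heavily aliased state-action pairs) or an overly large step size break the condition, which is the kind of failure of contraction the abstract points to.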

URL

http://arxiv.org/abs/1903.08894

PDF

http://arxiv.org/pdf/1903.08894

