
Neural Temporal-Difference Learning Converges to Global Optima

2019-05-24
Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang

Abstract

Temporal-difference learning (TD), coupled with neural networks, is among the most fundamental building blocks of deep reinforcement learning. However, due to the nonlinearity in value function approximation, such a coupling leads to nonconvexity and even divergence in optimization. As a result, the global convergence of neural TD remains unclear. In this paper, we prove for the first time that neural TD converges at a sublinear rate to the global optimum of the mean-squared projected Bellman error for policy evaluation. In particular, we show how such global convergence is enabled by the overparametrization of neural networks, which also plays a vital role in the empirical success of neural TD. Beyond policy evaluation, we establish the global convergence of neural (soft) Q-learning, which is further connected to that of policy gradient algorithms.
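The abstract concerns (semi-gradient) TD learning with an overparametrized neural value-function approximator. Below is a minimal sketch of that setup: TD(0) with a two-layer ReLU network on a small random-walk Markov reward process. The environment, network width, step size, and the choice to train only the hidden layer (with fixed output weights, as in common lazy-training analyses) are illustrative assumptions, not the paper's construction or experiments.

```python
import numpy as np

# Minimal sketch: semi-gradient TD(0) with an overparametrized two-layer
# ReLU network V(s; W) = (1/sqrt(m)) * sum_r b_r * relu(w_r^T phi(s)).
# The MRP (a 5-state random walk), width m, and step size are illustrative
# assumptions for demonstration only.

rng = np.random.default_rng(0)

n_states, d, m = 5, 5, 256           # states, feature dim (one-hot), network width
gamma, lr, n_steps = 0.9, 0.05, 20000

features = np.eye(n_states)          # one-hot state features
W = rng.normal(size=(m, d))          # hidden-layer weights (trained)
b = rng.choice([-1.0, 1.0], size=m)  # output weights (fixed, as in lazy-training analyses)

def value(s_feat, W):
    """Two-layer ReLU value estimate, scaled by 1/sqrt(m)."""
    pre = W @ s_feat
    return (b * np.maximum(pre, 0.0)).sum() / np.sqrt(m)

def grad_value(s_feat, W):
    """Gradient of the value estimate w.r.t. the hidden-layer weights."""
    pre = W @ s_feat
    act = (pre > 0.0).astype(float)
    return (b * act)[:, None] * s_feat[None, :] / np.sqrt(m)

def step(s):
    """Random-walk MRP: move left/right with prob 1/2, reward +1 at the right edge."""
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

s = n_states // 2
for _ in range(n_steps):
    s_next, r = step(s)
    td_error = r + gamma * value(features[s_next], W) - value(features[s], W)
    # Semi-gradient update: the bootstrap target is treated as a constant.
    W += lr * td_error * grad_value(features[s], W)
    s = s_next

print([round(value(features[i], W), 3) for i in range(n_states)])
```

In the regime the paper studies, the width m is taken large (overparametrization), which keeps the trained weights close to their initialization and is what enables the global-convergence guarantee for the mean-squared projected Bellman error.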

URL

http://arxiv.org/abs/1905.10027

PDF

http://arxiv.org/pdf/1905.10027
