
ODE Analysis of Stochastic Gradient Methods with Optimism and Anchoring for Minimax Problems and GANs

2019-05-26
Ernest K. Ryu, Kun Yuan, Wotao Yin

Abstract

Despite remarkable empirical success, the training dynamics of generative adversarial networks (GANs), which involve solving a minimax game using stochastic gradients, are still poorly understood. In this work, we analyze last-iterate convergence of simultaneous gradient descent (simGD) and its variants under the assumption of convex-concavity, guided by a continuous-time analysis with differential equations. First, we show that simGD, as is, converges with stochastic subgradients under strict convexity in the primal variable. Second, we generalize optimistic simGD to accommodate an optimism rate separate from the learning rate and show its convergence with full gradients. Finally, we present anchored simGD, a new method, and show convergence with stochastic subgradients.
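
To make the three update rules concrete, below is a minimal NumPy sketch on the toy bilinear game f(x, y) = xy. The function names, step sizes, and the 1/(k+2) anchor schedule are illustrative assumptions, not the paper's exact parameterization or step-size conditions; in particular, the paper's results cover stochastic subgradients, while this sketch uses exact gradients.

```python
import numpy as np

def grad_field(z):
    """Saddle-point gradient operator G(z) = (df/dx, -df/dy) for the
    toy bilinear game f(x, y) = x * y."""
    x, y = z
    return np.array([y, -x])

def sim_gd(z0, lr=0.1, steps=200):
    """Plain simultaneous gradient descent: z_{k+1} = z_k - lr * G(z_k)."""
    z = z0.copy()
    for _ in range(steps):
        z = z - lr * grad_field(z)
    return z

def optimistic_sim_gd(z0, lr=0.1, optimism=0.1, steps=200):
    """Optimistic simGD with an optimism rate decoupled from the learning
    rate: z_{k+1} = z_k - lr * G(z_k) - optimism * (G(z_k) - G(z_{k-1}))."""
    z = z0.copy()
    g_prev = grad_field(z)
    for _ in range(steps):
        g = grad_field(z)
        z = z - lr * g - optimism * (g - g_prev)
        g_prev = g
    return z

def anchored_sim_gd(z0, lr=0.1, steps=200):
    """Anchored simGD sketch: blend the plain simGD step with the anchor
    z_0, whose weight 1/(k+2) diminishes over the iterations."""
    z = z0.copy()
    for k in range(steps):
        beta = 1.0 / (k + 2)  # diminishing anchor weight (illustrative)
        z = beta * z0 + (1.0 - beta) * (z - lr * grad_field(z))
    return z

if __name__ == "__main__":
    z0 = np.array([1.0, 1.0])
    print("simGD:           ", sim_gd(z0))             # spirals away from (0, 0)
    print("optimistic simGD:", optimistic_sim_gd(z0))  # approaches (0, 0)
    print("anchored simGD:  ", anchored_sim_gd(z0))    # approaches (0, 0)
```

On this example the plain simGD iterates spiral outward, while the optimistic and anchored variants are drawn toward the saddle point at the origin. This is consistent with the abstract: the bilinear game is convex-concave but not strictly convex in the primal variable, so plain simGD falls outside the regime where its convergence is shown.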

URL

https://arxiv.org/abs/1905.10899

PDF

https://arxiv.org/pdf/1905.10899

