
Stochastic Gradient Descent for Nonconvex Learning without Bounded Gradient Assumptions

2019-02-03
Yunwen Lei, Ting Hu, Ke Tang

Abstract

Stochastic gradient descent (SGD) is a popular and efficient method with wide applications in training deep neural nets and other nonconvex models. While the behavior of SGD is well understood in the convex learning setting, the existing theoretical results for SGD applied to nonconvex objective functions are far from mature. For example, existing results require imposing a nontrivial assumption that the gradients are uniformly bounded over all iterates encountered in the learning process, which is hard to verify in practical implementations. In this paper, we establish a rigorous theoretical foundation for SGD in nonconvex learning by showing that this boundedness assumption can be removed without affecting convergence rates. In particular, we establish sufficient conditions for almost sure convergence as well as optimal convergence rates for SGD applied to both general nonconvex objective functions and gradient-dominated objective functions. Linear convergence is further derived in the case of zero variance.
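The analysis concerns plain SGD with decaying step sizes on a nonconvex objective. As a minimal, hypothetical sketch (not code from the paper), the snippet below runs SGD with the common schedule eta_t = eta_0 / sqrt(t+1) on a simple nonconvex least-squares objective; the objective, data, and step-size constant are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Minimal SGD sketch (assumed example, not the authors' code):
# minimize f(w) = (1/n) * sum_i (sigmoid(x_i . w) - y_i)^2,
# a simple nonconvex objective, using one sampled example per step.

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
d, n = 10, 1000
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (sigmoid(X @ w_true) > 0.5).astype(float)

w = np.zeros(d)
eta0 = 0.5                                   # assumed step-size constant
for t in range(5000):
    i = rng.integers(n)                      # sample one training example
    p = sigmoid(X[i] @ w)
    grad = 2.0 * (p - y[i]) * p * (1.0 - p) * X[i]   # stochastic gradient of the squared loss
    w -= eta0 / np.sqrt(t + 1) * grad        # SGD update with decaying step size
```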

URL

http://arxiv.org/abs/1902.00908

PDF

http://arxiv.org/pdf/1902.00908

