Gradient Descent based Optimization Algorithms for Deep Learning Models Training


Abstract

In this paper, we aim to provide an introduction to the gradient descent based optimization algorithms used for training deep neural network models. Deep learning models, which involve multiple layers of nonlinear projections, are very challenging to train. Nowadays, most deep learning model training still relies on the back propagation algorithm, in which the model variables are updated iteratively with gradient descent based optimization algorithms until convergence. Besides the conventional vanilla gradient descent algorithm, many variants have been proposed in recent years to improve the learning performance, including Momentum, Adagrad, Adam, Gadam, etc., each of which is introduced in this paper.
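To make the difference between these update rules concrete, the following is a minimal illustrative sketch (not code from the paper): it applies vanilla gradient descent, Momentum, and Adam to a toy quadratic objective. The objective, learning rate, and hyper-parameter values are assumptions chosen only for illustration.

```python
# Illustrative sketch, not the paper's implementation: comparing the update
# rules of vanilla gradient descent, Momentum, and Adam on a toy objective
# f(theta) = 0.5 * ||theta||^2. All hyper-parameters below are assumed values.
import numpy as np

def grad(theta):
    # Gradient of the toy quadratic loss f(theta) = 0.5 * ||theta||^2.
    return theta

theta_gd = theta_mom = theta_adam = np.array([5.0, -3.0])
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
v = np.zeros(2)                   # Momentum velocity buffer
m, s = np.zeros(2), np.zeros(2)   # Adam first/second moment estimates

for t in range(1, 101):
    # Vanilla gradient descent: step against the raw gradient.
    theta_gd = theta_gd - lr * grad(theta_gd)

    # Momentum: accumulate an exponentially decaying velocity, then step.
    v = beta1 * v + grad(theta_mom)
    theta_mom = theta_mom - lr * v

    # Adam: bias-corrected first and second moment estimates scale the step.
    g = grad(theta_adam)
    m = beta1 * m + (1 - beta1) * g
    s = beta2 * s + (1 - beta2) * g ** 2
    m_hat, s_hat = m / (1 - beta1 ** t), s / (1 - beta2 ** t)
    theta_adam = theta_adam - lr * m_hat / (np.sqrt(s_hat) + eps)

print(theta_gd, theta_mom, theta_adam)  # all three converge toward the origin
```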

URL

http://arxiv.org/abs/1903.03614

PDF

http://arxiv.org/pdf/1903.03614
