
A Subsampling Line-Search Method with Second-Order Results

2018-11-21
El-houcine Bergou, Youssef Diouane, Vyacheslav Kungurtsev, Clément W. Royer

Abstract

In many contemporary optimization problems, such as hyperparameter tuning for deep learning architectures, it is computationally challenging or even infeasible to evaluate an entire function or its derivatives. This necessitates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees classically obtained through globalization techniques via a trust region or a line search. Using subsampled function values is particularly challenging for the latter strategy, which relies upon multiple evaluations. In addition, there has been increasing interest in nonconvex formulations of data-related problems. For such instances, one aims to develop methods that converge to second-order stationary points, which is particularly delicate to ensure when one only accesses subsampled approximations of the objective and its derivatives. This paper contributes to this rapidly expanding field by presenting a stochastic algorithm based on negative curvature and Newton-type directions, computed for a subsampling model of the objective. A line-search technique is used to enforce suitable decrease for this model, and for a sufficiently large sample, a similar amount of reduction holds for the true objective. By using probabilistic reasoning, we can then obtain worst-case complexity guarantees for our framework, leading us to discuss appropriate notions of stationarity in a subsampling context. Our analysis, which we illustrate through real data experiments, encompasses the fully sampled regime as a special case: it thus provides an insightful generalization of second-order line-search paradigms to subsampled settings.
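As a rough illustration of the kind of step the abstract describes (not the authors' exact algorithm), the sketch below draws a random subsample of the data, forms subsampled gradient and Hessian estimates, chooses either a Newton-type or a negative-curvature direction depending on the smallest eigenvalue of the sampled Hessian, and backtracks until a cubic-type sufficient-decrease condition holds on the sampled model. The function names, the toy per-sample objective in the usage lines, and all parameter values are illustrative assumptions.

```python
import numpy as np

def subsampled_linesearch_step(x, data, f_i, grad_i, hess_i,
                               sample_size=32, eps=1e-6,
                               theta=0.5, c=1e-4, rng=None):
    """One illustrative step: subsample the data, pick a Newton-type or
    negative-curvature direction, then backtrack on the sampled model.
    Sketch of the general idea only, not the paper's algorithm."""
    rng = rng or np.random.default_rng(0)
    sample = rng.choice(len(data), size=min(sample_size, len(data)), replace=False)

    # Subsampled objective, gradient and Hessian estimates.
    m = lambda z: np.mean([f_i(z, data[i]) for i in sample])
    g = np.mean([grad_i(x, data[i]) for i in sample], axis=0)
    H = np.mean([hess_i(x, data[i]) for i in sample], axis=0)

    # Smallest eigenvalue decides between the two direction types.
    eigvals, eigvecs = np.linalg.eigh(H)
    lam_min = eigvals[0]
    if lam_min >= eps:
        d = -np.linalg.solve(H, g)      # Newton-type direction
    else:
        v = eigvecs[:, 0]
        if g @ v > 0:
            v = -v                      # orient the eigenvector for descent
        d = abs(lam_min) * v            # negative-curvature direction

    # Backtracking until a cubic-type sufficient decrease holds on the
    # sampled model (in the spirit of second-order line-search conditions).
    t, m0, dn = 1.0, m(x), np.linalg.norm(d)
    while m(x + t * d) > m0 - (c / 6.0) * t**3 * dn**3 and t > 1e-10:
        t *= theta
    return x + t * d

# Tiny usage example on a toy finite-sum of per-sample quadratics (illustrative only).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
f_i = lambda x, a: 0.5 * (a @ x) ** 2
grad_i = lambda x, a: (a @ x) * a
hess_i = lambda x, a: np.outer(a, a)
x = rng.standard_normal(5)
for _ in range(20):
    x = subsampled_linesearch_step(x, A, f_i, grad_i, hess_i)
```

If the sample is taken to be the full dataset, the sketch reduces to a deterministic second-order line search, echoing the abstract's remark that the fully sampled regime is a special case of the analysis.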

URL

https://arxiv.org/abs/1810.07211

PDF

https://arxiv.org/pdf/1810.07211
