
Stochastic Conditional Gradient Method for Composite Convex Minimization

2019-01-29
Francesco Locatello, Alp Yurtsever, Olivier Fercoq, Volkan Cevher

Abstract

In this paper, we propose the first practical algorithm to minimize stochastic composite optimization problems over compact convex sets. This template allows for affine constraints and therefore covers stochastic semidefinite programs (SDPs), which are widely applicable in both machine learning and statistics. In this setup, stochastic algorithms with convergence guarantees are either not known or not tractable. We tackle this general problem and propose a convergent, tractable, and easy-to-implement algorithm. We prove an $\mathcal{O}(k^{-1/3})$ convergence rate in expectation on the objective residual and $\mathcal{O}(k^{-5/12})$ in expectation on the feasibility gap. These rates are achieved without increasing the batch size, which can be as small as a single sample. We present extensive empirical evidence demonstrating the superiority of our algorithm on a broad range of applications, including optimization of stochastic SDPs.
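To make the template concrete, the following is a minimal sketch of a generic stochastic conditional gradient (Frank-Wolfe) iteration, not the paper's exact algorithm or its constraint-handling scheme. It minimizes an expected least-squares objective over the probability simplex (a compact convex set with a trivial linear minimization oracle), using a single-sample gradient combined with a decaying-weight gradient average; the problem data, step sizes, and averaging weight here are illustrative assumptions.

```python
import numpy as np

# Illustrative example: stochastic Frank-Wolfe on
#   min_x  E_i[ (a_i^T x - b_i)^2 ]  subject to  x in the probability simplex.
# Single-sample gradients are smoothed by a running average before the
# linear minimization oracle (LMO) is called. This is a generic sketch,
# not the algorithm proposed in the paper.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[0], x_true[1] = 0.7, 0.3          # a solution inside the simplex
b = A @ x_true

x = np.full(d, 1.0 / d)                  # start at the simplex barycenter
g = np.zeros(d)                          # averaged gradient estimate

for k in range(1, 2001):
    i = rng.integers(n)                  # batch size of one: a single sample
    grad_i = 2.0 * (A[i] @ x - b[i]) * A[i]
    rho = 1.0 / k ** (2.0 / 3.0)         # decaying averaging weight
    g = (1.0 - rho) * g + rho * grad_i
    # LMO over the simplex: the minimizing vertex is a standard basis vector
    s = np.zeros(d)
    s[int(np.argmin(g))] = 1.0
    gamma = 2.0 / (k + 2)                # standard Frank-Wolfe step size
    x = (1.0 - gamma) * x + gamma * s    # convex combination stays feasible

print(float(np.mean((A @ x - b) ** 2)))  # mean squared residual
```

Because every update is a convex combination of feasible points, the iterate never leaves the simplex, and only a single stochastic gradient is drawn per iteration, matching the single-sample regime the abstract describes.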

URL

http://arxiv.org/abs/1901.10348

PDF

http://arxiv.org/pdf/1901.10348
