
Beyond Alternating Updates for Matrix Factorization with Inertial Bregman Proximal Gradient Algorithms

2019-05-22
Mahesh Chandra Mukkamala, Peter Ochs

Abstract

Matrix Factorization is a popular non-convex objective, for which alternating minimization schemes are most commonly used. They usually suffer from the major drawback that the solution is biased towards one of the optimization variables. A remedy is offered by non-alternating schemes. However, due to the lack of Lipschitz continuity of the gradient in matrix factorization problems, convergence cannot be guaranteed. A recently developed remedy relies on the concept of Bregman distances, which generalize the standard Euclidean distance. We exploit this theory by proposing a novel Bregman distance for matrix factorization problems, which, at the same time, allows for simple, closed-form update steps. Therefore, for non-alternating schemes, such as the recently introduced Bregman Proximal Gradient (BPG) method and an inertial variant, Convex-Concave Inertial BPG (CoCaIn BPG), convergence of the whole sequence to a stationary point is proved for matrix factorization. In several experiments, we observe superior performance of our non-alternating schemes in terms of speed and objective value at the limit point.
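To make the closed-form update concrete, below is a minimal sketch of a joint (non-alternating) BPG step for f(U, V) = 0.5 * ||A - U V^T||_F^2 with a quartic-plus-quadratic Bregman kernel h(U, V) = c1 * (s/2)^2 + c2 * (s/2), where s = ||U||_F^2 + ||V||_F^2. With such a kernel, the mirror step reduces to rescaling the gradient-mapped point by the nonnegative root of a cubic. The constants c1 and c2, the step size lam, and the iteration budget here are illustrative assumptions, not the values prescribed by the paper; consult the arXiv preprint for the actual choices and the CoCaIn inertial variant.

```python
import numpy as np

def positive_cubic_root(a, b):
    # Unique nonnegative real root of a*t^3 + b*t - 1 = 0 for a, b > 0
    # (the left-hand side is strictly increasing, so the root is unique).
    roots = np.roots([a, 0.0, b, -1.0])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return float(real[real >= 0][0])

def bpg_matrix_factorization(A, rank, lam=0.1, c1=3.0, c2=None,
                             iters=500, seed=0):
    """Sketch of a joint BPG iteration for
    f(U, V) = 0.5 * ||A - U V^T||_F^2 under the assumed kernel
    h(U, V) = c1 * (s/2)^2 + c2 * (s/2),  s = ||U||_F^2 + ||V||_F^2.
    lam, c1, c2 are illustrative guesses, not the paper's values."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    if c2 is None:
        c2 = np.linalg.norm(A)  # heuristic scale; an assumption
    for _ in range(iters):
        R = U @ V.T - A                      # residual
        GU, GV = R @ V, R.T @ U              # joint gradient of f at (U, V)
        sq = np.sum(U * U) + np.sum(V * V)   # ||U||_F^2 + ||V||_F^2
        # grad h(U, V) = (c1 * sq + c2) * (U, V) for the kernel above.
        PU = (c1 * sq + c2) * U - lam * GU
        PV = (c1 * sq + c2) * V - lam * GV
        pnorm2 = np.sum(PU * PU) + np.sum(PV * PV)
        # Mirror step: solve grad h(U+, V+) = (PU, PV). The solution is a
        # scalar rescaling theta * (PU, PV), theta the root of a cubic.
        theta = positive_cubic_root(c1 * pnorm2, c2)
        U, V = theta * PU, theta * PV
    return U, V

if __name__ == "__main__":
    A = np.random.default_rng(1).standard_normal((50, 40))
    U, V = bpg_matrix_factorization(A, rank=5)
    print("residual norm:", np.linalg.norm(A - U @ V.T))
```

Note that both factors are updated simultaneously from the same iterate, avoiding the bias of alternating updates mentioned in the abstract; CoCaIn BPG additionally adds an inertial (extrapolation) step with adaptive parameters, which this sketch omits.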

URL

http://arxiv.org/abs/1905.09050

PDF

http://arxiv.org/pdf/1905.09050

