
Meta-learning with differentiable closed-form solvers

2019-02-27
Luca Bertinetto, João F. Henriques, Philip H.S. Torr, Andrea Vedaldi

Abstract

Adapting deep networks to new concepts from a few examples is challenging, due to the high computational requirements of standard fine-tuning procedures. Most work on few-shot learning has thus focused on simple learning techniques for adaptation, such as nearest neighbours or gradient descent. Nonetheless, the machine learning literature contains a wealth of methods that learn non-deep models very efficiently. In this paper, we propose to use these fast convergent methods as the main adaptation mechanism for few-shot learning. The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data. This requires back-propagating errors through the solver steps. While normally the cost of the matrix operations involved in such a process would be significant, by using the Woodbury identity we can make the small number of examples work to our advantage. We propose both closed-form and iterative solvers, based on ridge regression and logistic regression components. Our methods constitute a simple and novel approach to the problem of few-shot learning and achieve performance competitive with or superior to the state of the art on three benchmarks.
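
To make the Woodbury trick concrete, here is a minimal PyTorch sketch of a differentiable ridge-regression step in the style the abstract describes. This is not the authors' released code: the function name, shapes, regularizer value, and the toy episode are illustrative assumptions.

import torch

def ridge_regression_woodbury(X, Y, lam=0.1):
    # X: (n, d) support-set embeddings, Y: (n, c) one-hot labels.
    # Naive closed form W = (X^T X + lam*I_d)^{-1} X^T Y inverts a d x d matrix.
    # The Woodbury identity rearranges this to W = X^T (X X^T + lam*I_n)^{-1} Y,
    # which inverts only an n x n matrix; n (the number of support examples)
    # is tiny in few-shot learning, so the solve is cheap.
    n = X.shape[0]
    gram = X @ X.t()                                       # (n, n)
    A = torch.linalg.solve(gram + lam * torch.eye(n), Y)   # (n, c)
    return X.t() @ A                                       # (d, c)

# Hypothetical 5-way, 1-shot episode with 512-dimensional embeddings.
X = torch.randn(5, 512, requires_grad=True)   # stands in for a CNN's output
Y = torch.eye(5)                              # one-hot labels for the 5 classes
W = ridge_regression_woodbury(X, Y)
logits = torch.randn(10, 512) @ W             # predictions for 10 query embeddings
loss = torch.nn.functional.cross_entropy(logits, torch.arange(10) % 5)
loss.backward()   # gradients flow through the solver back into the embeddings

Because every operation above is differentiable, the episode loss can be back-propagated through the solver into the embedding network, which is the paper's central mechanism; in the same spirit, hyperparameters such as lam could be made learnable tensors and meta-learned across episodes.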

URL

http://arxiv.org/abs/1805.08136

PDF

http://arxiv.org/pdf/1805.08136

