
There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average

2019-02-21
Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson

Abstract

Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters. To understand consistency regularization, we conceptually explore how loss geometry interacts with training procedures. The consistency loss dramatically improves generalization performance over supervised-only training; however, we show that SGD struggles to converge on the consistency loss and continues to make large steps that lead to changes in predictions on the test data. Motivated by these observations, we propose to train consistency-based methods with Stochastic Weight Averaging (SWA), a recent approach which averages weights along the trajectory of SGD with a modified learning rate schedule. We also propose fast-SWA, which further accelerates convergence by averaging multiple points within each cycle of a cyclical learning rate schedule. With weight averaging, we achieve the best known semi-supervised results on CIFAR-10 and CIFAR-100, over many different quantities of labeled training data. For example, we achieve 5.0% error on CIFAR-10 with only 4000 labels, compared to the previous best result in the literature of 6.3%.
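
The two ingredients described above can be sketched in a few lines of code. Below is a minimal, illustrative Python/PyTorch sketch of (1) a consistency loss that penalizes prediction changes under small input perturbations and (2) a fast-SWA-style running weight average collected several times per learning-rate cycle. The names (`consistency_loss`, `FastSWA`), the Gaussian perturbation, and all hyperparameters are assumptions for illustration, not the authors' reference implementation.

```python
# Illustrative sketch only; assumes a generic PyTorch classifier `model`.
import copy
import torch
import torch.nn.functional as F


def consistency_loss(model, x_unlabeled, noise_std=0.1):
    """Penalize changes in predictions under a small random input perturbation."""
    with torch.no_grad():
        p_clean = F.softmax(model(x_unlabeled), dim=1)
    x_perturbed = x_unlabeled + noise_std * torch.randn_like(x_unlabeled)
    p_perturbed = F.softmax(model(x_perturbed), dim=1)
    return F.mse_loss(p_perturbed, p_clean)


class FastSWA:
    """Running average of weights, collected at multiple points per cycle
    of a cyclical learning-rate schedule (hypothetical helper class)."""

    def __init__(self, model):
        self.avg_model = copy.deepcopy(model)
        self.n_collected = 0

    def collect(self, model):
        """Update the running mean of the parameters with the current weights."""
        self.n_collected += 1
        for p_avg, p in zip(self.avg_model.parameters(), model.parameters()):
            p_avg.data += (p.data - p_avg.data) / self.n_collected
```

In a training loop one would add `consistency_loss` to the supervised loss, call `FastSWA.collect(model)` at several points within each learning-rate cycle, and evaluate `avg_model` (after refreshing batch-norm statistics) rather than the final SGD iterate.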

URL

http://arxiv.org/abs/1806.05594

PDF

http://arxiv.org/pdf/1806.05594

