
Inductive Transfer for Neural Architecture Optimization

2019-03-08
Martin Wistuba, Tejaswini Pedapati

Abstract

The recent advent of automated neural network architecture search has led to several methods that outperform state-of-the-art human-designed architectures. However, these approaches are computationally expensive, in extreme cases consuming GPU years. We propose two novel methods which aim to expedite this optimization problem by transferring knowledge acquired from previous tasks to new ones. First, we propose a neural architecture selection method which employs this knowledge to identify strong and weak characteristics of neural architectures across datasets. Thus, these characteristics do not need to be rediscovered in every search, a major weakness of current state-of-the-art searches. Second, we propose a method for learning curve extrapolation to determine whether a training process can be terminated early. In contrast to existing work, we propose to learn from the learning curves of architectures trained on other datasets to improve prediction accuracy on novel datasets. On five different image classification benchmarks, we empirically demonstrate that both of our orthogonal contributions independently lead to an acceleration, without any significant loss in accuracy.
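The abstract's second contribution, learning curve extrapolation for early termination, can be illustrated with a generic sketch. The paper's approach additionally learns from curves gathered on other datasets; the snippet below shows only the basic idea it builds on (not the authors' method): fit a saturating parametric curve to the partial learning curve and stop training when the extrapolated final accuracy cannot beat the best architecture found so far. The power-law form, all function names, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    # Saturating power law: accuracy approaches `a` as epochs grow.
    # (One common parametric family for learning curves; assumed here.)
    return a - b * np.power(t, -c)

def predict_final_accuracy(partial_curve, final_epoch):
    """Fit the power law to observed (epoch, accuracy) points and
    extrapolate to the final epoch."""
    epochs = np.arange(1, len(partial_curve) + 1, dtype=float)
    params, _ = curve_fit(
        power_law, epochs, np.asarray(partial_curve, dtype=float),
        p0=[partial_curve[-1], 0.5, 0.5], maxfev=10000,
    )
    return power_law(float(final_epoch), *params)

def should_terminate(partial_curve, final_epoch, best_accuracy, margin=0.0):
    """Terminate early if the extrapolated final accuracy is not
    expected to reach the best accuracy observed so far."""
    predicted = predict_final_accuracy(partial_curve, final_epoch)
    return predicted + margin < best_accuracy

# Example: validation accuracies for the first 10 of 100 epochs.
curve = [0.42, 0.55, 0.61, 0.65, 0.68, 0.70, 0.71, 0.72, 0.73, 0.735]
print(should_terminate(curve, final_epoch=100, best_accuracy=0.95))
```

The `margin` parameter trades off compute savings against the risk of discarding an architecture whose extrapolation was pessimistic.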

URL

http://arxiv.org/abs/1903.03536

PDF

http://arxiv.org/pdf/1903.03536

