Abstract
Many of the recent improvements in computer vision models have resulted from the discovery of new network architectures. To reduce the computational cost of searching through possible architectures, most prior work has used the performance of candidate models after limited training to automatically guide the search. Could further gains in computational efficiency be achieved by guiding the search with measurements of a high-performing network whose detailed architecture is unknown (e.g., the primate visual system)? As one step toward this goal, we use representational similarity analysis (RSA) to evaluate the similarity of the internal activations of candidate networks to those of a (fixed, high-performing) teacher network. Specifically, our search method, termed “Teacher Guided Search for Architectures by Generation and Evaluation” (TG-SAGE), guides each step of the architecture search by evaluating the similarity between the representational dissimilarity matrices of the candidate and teacher networks. We show that in the space of convolutional cells for visual categorization, TG-SAGE finds a cell structure with performance similar to that found by previous methods, but at a total computational cost two orders of magnitude lower than Neural Architecture Search (NAS) and more than four times lower than Progressive Neural Architecture Search (PNAS). We further show that when the primate visual system is used as the teacher network, with measurements of neural activity from only several hundred neural sites, TG-SAGE finds a network with an ImageNet top-1 error ~2% lower than that achieved by performance-guided architecture search. These results suggest that TG-SAGE can accelerate network architecture search whenever one has access to some or all of the internal representations of a teacher network of interest, such as the brain.
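The scoring mechanism the abstract describes, comparing a candidate network's representational dissimilarity matrix (RDM) against a teacher's, follows the standard RSA recipe. The sketch below illustrates one plausible version of such a score; the choice of 1 − Pearson correlation as the pairwise dissimilarity and Spearman rank correlation to compare RDMs are common RSA conventions assumed here, and the function names are illustrative rather than taken from the paper.

import numpy as np
from scipy.stats import spearmanr

def rdm(activations):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between activation vectors for every pair of stimuli.
    # activations: (n_stimuli, n_features) array of a layer's responses.
    return 1.0 - np.corrcoef(activations)

def rsa_score(candidate_acts, teacher_acts):
    # Similarity of two RDMs: Spearman rank correlation of their
    # upper-triangle entries (a standard RSA comparison).
    rdm_c, rdm_t = rdm(candidate_acts), rdm(teacher_acts)
    iu = np.triu_indices_from(rdm_c, k=1)  # exclude the diagonal
    rho, _ = spearmanr(rdm_c[iu], rdm_t[iu])
    return rho

# Example: score a random candidate layer against a "teacher" layer
# over 100 shared stimuli. Feature dimensions may differ, so the
# teacher could be, e.g., a few hundred recorded neural sites.
stimuli = 100
candidate = np.random.randn(stimuli, 512)
teacher = np.random.randn(stimuli, 300)
print(rsa_score(candidate, teacher))

Because the comparison happens in stimulus space (each RDM is n_stimuli × n_stimuli), the candidate and teacher need not share a feature dimensionality, which is what makes sparse neural recordings usable as a guidance signal.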
URL
http://arxiv.org/abs/1808.01405