Abstract
We present a new methodology for improving the accuracy of small neural networks by applying the concept of a Clos network to achieve maximal expressiveness in a smaller network. We explore the design space to show that more layers are beneficial, given the same number of parameters. We also present findings on how the ReLU nonlinearity affects accuracy in separable networks. We present results from early work with the CIFAR-10 dataset.
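To make the depth-versus-width trade-off concrete, here is a minimal parameter-counting sketch (not from the paper; the layer widths are illustrative choices, not the authors') showing how a deep, narrow fully connected network and a shallow, wide one can sit at roughly the same parameter budget on CIFAR-10-sized inputs:

def mlp_params(widths):
    # Parameters of a fully connected net: weights plus biases per layer.
    return sum(n_in * n_out + n_out for n_in, n_out in zip(widths, widths[1:]))

# CIFAR-10: 32 * 32 * 3 = 3072 inputs, 10 classes.
shallow = [3072, 683, 10]            # one wide hidden layer
deep    = [3072, 512, 512, 512, 10]  # three narrower hidden layers

print(f"shallow: {mlp_params(shallow):,} params")  # ~2.11M
print(f"deep:    {mlp_params(deep):,} params")     # ~2.10M

Under this matched budget, the abstract's claim is that the deeper configuration tends to reach higher accuracy.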
URL
http://arxiv.org/abs/1901.06433