
Implicit Filter Sparsification In Convolutional Neural Networks

2019-05-13
Dushyant Mehta, Kwang In Kim, Christian Theobalt

Abstract

We show that implicit filter-level sparsity manifests in convolutional neural networks (CNNs) which employ Batch Normalization and ReLU activation, and are trained with adaptive gradient descent techniques and L2 regularization or weight decay. Through an extensive empirical study (Mehta et al., 2019) we hypothesize the mechanism behind the sparsification process, and find surprising links to certain filter sparsification heuristics proposed in the literature. The emergence, and subsequent pruning, of selective features is observed to be one of the contributing mechanisms, leading to feature sparsity on par with or better than certain explicit sparsification / pruning approaches. In this workshop article we summarize our findings, and point out corollaries of selective-feature penalization which could also be employed as heuristics for filter pruning.
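As a rough illustration of how the reported filter-level sparsity could be probed, below is a minimal PyTorch sketch, assuming a filter counts as inactive when the magnitude of its learned BatchNorm scale (gamma) falls below a threshold. The 1e-3 threshold, the toy model, and the count_sparse_filters helper are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    def count_sparse_filters(model, threshold=1e-3):
        # Count BatchNorm channels whose learned scale (gamma) has
        # collapsed toward zero; such filters pass almost no signal
        # through the following ReLU and are effectively pruned.
        total, sparse = 0, 0
        for module in model.modules():
            if isinstance(module, nn.BatchNorm2d):
                gamma = module.weight.detach().abs()
                total += gamma.numel()
                sparse += int((gamma < threshold).sum())
        return sparse, total

    # Toy BN+ReLU CNN trained with an adaptive optimizer and weight
    # decay -- the regime in which the paper reports sparsity emerging.
    model = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    # ... training loop omitted ...
    sparse, total = count_sparse_filters(model)
    print(f"{sparse}/{total} filters effectively inactive")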

URL

http://arxiv.org/abs/1905.04967

PDF

http://arxiv.org/pdf/1905.04967
