papers AI Learner

A Survey of Model Compression and Acceleration for Deep Neural Networks

2019-01-21
Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang

Abstract

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep CNN models are computationally expensive and memory intensive, hindering their deployment on devices with limited memory or in applications with strict latency requirements. A natural approach, therefore, is to compress and accelerate deep networks without significantly degrading model performance. Tremendous progress has been made in this area over the past few years. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis of the performance, related applications, advantages, and drawbacks. We then discuss several other recent successful methods, such as dynamic networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible future directions in this topic.
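To make the first of the four schemes concrete, here is a minimal sketch of unstructured magnitude-based parameter pruning, one common instance of the pruning family the survey covers. This is an illustrative toy implementation in NumPy, not code from the paper; the function name and the 50% sparsity level are arbitrary choices for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude.

    A toy example of unstructured magnitude pruning: small weights are
    assumed to contribute little to the output and are set to zero,
    yielding a sparse matrix that can be stored and computed more cheaply.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold     # keep only larger-magnitude weights
    return weights * mask

# Prune a random 4x4 weight matrix to 50% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)
```

In practice, pruning is typically interleaved with fine-tuning so the remaining weights can compensate for the removed ones.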

URL

http://arxiv.org/abs/1710.09282

PDF

http://arxiv.org/pdf/1710.09282

