
Universal Deep Neural Network Compression

2019-02-21
Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

Abstract

In this paper, we investigate lossy compression of deep neural networks (DNNs) by weight quantization and lossless source coding for memory-efficient deployment. Whereas previous work addressed non-universal scalar quantization and entropy coding of DNN weights, we for the first time introduce universal DNN compression by universal vector quantization and universal source coding. In particular, we examine universal randomized lattice quantization of DNNs, which randomizes DNN weights by uniform random dithering before lattice quantization and can perform near-optimally on any source without relying on knowledge of its probability distribution. Moreover, we present a method of fine-tuning vector quantized DNNs to recover the performance loss after quantization. Our experimental results show that the proposed universal DNN compression scheme compresses the 32-layer ResNet (trained on CIFAR-10) and the AlexNet (trained on ImageNet) with compression ratios of $47.1$ and $42.5$, respectively.
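The core randomization step is dithered quantization: a uniform dither is added to the weights before rounding to the nearest lattice point, and the same dither (reproducible from a shared seed) is subtracted at reconstruction, so the quantization error becomes uniform and independent of the weight distribution. Below is a minimal sketch of this idea using the simplest one-dimensional lattice (integer multiples of a step size `delta`); the paper considers general lattice quantizers, and the function names here are illustrative, not from the authors' code.

```python
import numpy as np

def dithered_quantize(weights, delta, seed=0):
    """Randomized lattice quantization with subtractive dither.

    A dither u ~ Uniform(-delta/2, delta/2) is added before rounding to
    the nearest lattice point. The resulting integer indices are what a
    universal entropy coder (e.g., bzip2 / LZMA) would compress.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(-delta / 2, delta / 2, size=weights.shape)
    indices = np.round((weights + u) / delta).astype(np.int64)
    return indices

def dithered_dequantize(indices, delta, seed=0):
    """Reconstruct weights: map indices back to lattice points, then
    subtract the same dither realization regenerated from the seed."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-delta / 2, delta / 2, size=indices.shape)
    return indices * delta - u

# Example: quantize a random weight tensor and check the distortion.
w = np.random.randn(1000)
idx = dithered_quantize(w, delta=0.05, seed=42)
w_hat = dithered_dequantize(idx, delta=0.05, seed=42)
print("max abs error:", np.max(np.abs(w - w_hat)))  # bounded by delta/2
```

With subtractive dither the reconstruction error is always within half a lattice cell regardless of how the weights are distributed, which is what makes the scheme "universal"; in practice the quantized network would then be fine-tuned, as the abstract describes, to recover accuracy lost to quantization.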

URL

http://arxiv.org/abs/1802.02271

PDF

http://arxiv.org/pdf/1802.02271

