papers AI Learner

Distributed Lossy Image Compression with Recurrent Networks

2019-03-23
Enmao Diao, Jie Ding, Vahid Tarokh

Abstract

We propose a new architecture for distributed image compression from a group of distributed data sources. The proposed architecture, which we refer to as a symmetric Encoder-Decoder Convolutional Recurrent Neural Network, significantly outperforms state-of-the-art compression techniques such as JPEG on rate-distortion curves. We also show that training distributed encoders with a joint decoder on correlated data sources yields much better compression performance than training the codecs separately. For 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio (PSNR) of a single codec trained on all data sources. We experiment with distributed sources of varying correlation and show how well our methodology matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC). Our method is also shown to be robust to missing encoded data from a number of the distributed sources. To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with Deep Learning.
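To make the joint-decoding idea concrete, here is a minimal linear sketch of distributed encoding with a joint decoder. This is not the paper's recurrent convolutional architecture; the random-projection encoders, least-squares decoders, and all dimensions below are illustrative assumptions, used only to show that a decoder seeing both codes of correlated sources reconstructs better than one seeing a single code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 64, 8, 2000  # signal dimension, code dimension per source, samples

# Two correlated sources: x2 is a noisy copy of x1 (Slepian-Wolf-style correlation).
x1 = rng.standard_normal((n, d))
x2 = x1 + 0.1 * rng.standard_normal((n, d))

# Distributed encoders: independent random linear projections
# (stand-ins for the paper's learned recurrent encoders).
E1 = rng.standard_normal((d, k)) / np.sqrt(d)
E2 = rng.standard_normal((d, k)) / np.sqrt(d)
c1, c2 = x1 @ E1, x2 @ E2

def psnr(x, xhat):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x - xhat) ** 2)
    peak = np.max(np.abs(x))
    return 10.0 * np.log10(peak ** 2 / mse)

# Separate decoder for source 1: least squares from its own code only.
W_sep, *_ = np.linalg.lstsq(c1, x1, rcond=None)
# Joint decoder: least squares from both codes, exploiting the correlation.
W_joint, *_ = np.linalg.lstsq(np.hstack([c1, c2]), x1, rcond=None)

psnr_sep = psnr(x1, c1 @ W_sep)
psnr_joint = psnr(x1, np.hstack([c1, c2]) @ W_joint)
print(f"separate decoder: {psnr_sep:.2f} dB, joint decoder: {psnr_joint:.2f} dB")
```

Because the second code carries information about `x1` through the correlation between the sources, the joint decoder always attains at least the separate decoder's PSNR here, which mirrors (in a toy linear setting) the gain the paper reports from training distributed encoders with a joint decoder.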

URL

http://arxiv.org/abs/1903.09887

PDF

http://arxiv.org/pdf/1903.09887

