
Cache-Assisted Broadcast-Relay Wireless Networks: A Delivery-Time Cache-Memory Tradeoff

2018-03-11
Jaber Kakar, Alaa Alameer, Anas Chaaban, Aydin Sezgin, Arogyaswami Paulraj

Abstract

An emerging trend in next-generation communication systems is to equip network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided broadcast-relay wireless network consisting of one central base station, $M$ cache-equipped transceivers, and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern, normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. The objective is to design schemes for cache placement and file delivery that minimize the NDT. To this end, we establish a novel converse and two types of achievability schemes applicable to both time-variant and time-invariant channels. The first is a general one-shot scheme for any $M$ and $K$ that synergistically exploits both multicast (coded caching) and distributed zero-forcing opportunities. We show that the proposed one-shot scheme (i) attains gains attributed to both individual and collective transceiver caches and (ii) is NDT-optimal for various parameter settings, particularly at higher cache sizes. The second scheme, on the other hand, designs beamformers to facilitate both subspace interference alignment and zero-forcing at lower cache sizes. Exploiting both schemes, we characterize the optimal tradeoff between cache storage and latency for various special cases of $M$ and $K$ satisfying $K+M\leq 4$. The tradeoff illustrates that the NDT, rather than the commonly used sum degrees-of-freedom (DoF), is the preferred metric for capturing the latency of a system. In fact, our optimal tradeoff refutes the popular belief that increasing cache sizes translates into an increased achievable sum DoF.
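For context, a display of the NDT definition helps make the metric concrete. The notation below follows the standard convention in the cache-aided network literature and is illustrative; the symbols $T$, $F$, $P$, and $\mu$ are assumptions for this sketch rather than the paper's exact notation. For transmit power $P$, file size $F$ bits, and fractional cache size $\mu$, let $T(\mu,F,P)$ denote the worst-case delivery time over all request patterns. The NDT is then

$$\delta(\mu) \;=\; \lim_{P\to\infty}\,\limsup_{F\to\infty}\,\frac{T(\mu,F,P)}{F/\log P},$$

where $F/\log P$ is the time the interference-free reference system needs to deliver one file at the point-to-point capacity $\log P$. Under this definition, a per-user DoF of $d$ corresponds to an NDT of $1/d$, which is why a cache-memory tradeoff stated in terms of NDT can reveal latency behavior that the sum DoF alone obscures.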

URL

https://arxiv.org/abs/1803.04058

PDF

https://arxiv.org/pdf/1803.04058

