Caching policies over unreliable channels
Abstract: Recently, there has been substantial progress in the formal understanding of how caching resources should be allocated when multiple caches each deploy the common LRU policy. Nonetheless, the role played by caching policies beyond LRU in a networked setting, where content may be replicated across multiple caches and channels are unreliable, is still poorly understood. In this paper, we investigate this issue by first analyzing the cache miss rate in a system with two caches of unit size each, under the LRU and LFU caching policies and their combination. Our analytical results show that the joint use of the two policies outperforms LRU, while LFU outperforms all of these policies whenever resource pooling is not optimal. We provide empirical results with larger caches showing that simple alternative policies, such as LFU, offer superior performance compared to LRU even when the space allocation is not fine-tuned. We envision that fine-tuning the cache space used by such policies may yield promising additional gains.
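For intuition on how the miss rates of LRU and LFU can be compared empirically, the following is a minimal single-cache simulation sketch in Python. The Zipf request model, catalogue size, and cache capacity are illustrative assumptions only; the sketch does not reproduce the paper's two-cache model with unreliable channels.

```python
# Illustrative sketch: compare LRU and LFU miss rates on a synthetic
# Zipf-like request trace.  The single cache, catalogue size, capacity,
# and Zipf exponent are assumptions for illustration only; they do not
# model the paper's two-cache system with unreliable channels.
import random
from collections import OrderedDict, defaultdict

def zipf_trace(n_items, n_requests, alpha=0.8, seed=0):
    """Generate a request trace with Zipf-like item popularities."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** alpha for i in range(n_items)]
    return rng.choices(range(n_items), weights=weights, k=n_requests)

def lru_miss_rate(trace, capacity):
    """Least Recently Used: evict the item accessed longest ago."""
    cache = OrderedDict()
    misses = 0
    for item in trace:
        if item in cache:
            cache.move_to_end(item)          # mark as most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict least recently used
            cache[item] = True
    return misses / len(trace)

def lfu_miss_rate(trace, capacity):
    """Least Frequently Used: evict the item with the fewest accesses."""
    cache = set()
    counts = defaultdict(int)
    misses = 0
    for item in trace:
        counts[item] += 1
        if item not in cache:
            misses += 1
            if len(cache) >= capacity:
                victim = min(cache, key=lambda x: counts[x])
                cache.discard(victim)        # evict least frequently used
            cache.add(item)
    return misses / len(trace)

if __name__ == "__main__":
    trace = zipf_trace(n_items=1000, n_requests=100_000)
    print("LRU miss rate:", round(lru_miss_rate(trace, capacity=50), 3))
    print("LFU miss rate:", round(lfu_miss_rate(trace, capacity=50), 3))
```

Under such skewed request popularities, LFU typically retains the persistently popular items and thus tends to exhibit a lower miss rate than LRU, which is consistent with the qualitative comparison discussed in the abstract.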