Management of Caching Policies and Redundancy Over Unreliable Channels
Abstract: Caching plays a central role in networked systems, reducing server load and the delay experienced by users. Despite their relevance, networked caching systems still pose a number of challenges pertaining to their long-term behavior. In this paper, we formally show and experimentally demonstrate conditions under which networked caches tend to synchronize over time. Such synchronization, in turn, leads to performance degradation and aging, motivating the monitoring of caching systems for eventual rejuvenation, as well as the deployment of diverse cache replacement policies across caches to preclude synchronization and its aging effects. Based on trace-driven simulations with real workloads, we show how hit probability is sensitive to channel reliability, cache size, and cache separation, and that a mix of simple policies, such as Least Recently Used (LRU) and Least Frequently Used (LFU), provides competitive performance against state-of-the-art policies. Indeed, our results suggest that diversity in cache replacement policies, rejuvenation, and intentional dropping of requests are strategies that build diversity across caches, preventing or mitigating performance degradation due to cache aging.