Title: Rethinking Distributed Caching with Emerging Programmable Networking Hardware Support 

Advisor: Dan Ports

Supervisory Committee: Dan Ports (Chair), Sreeram Kannan (GSR, EE), Arvind Krishnamurthy, and Magda Balazinska

Abstract: 

Caching frequently used data is a fundamental technique for improving memory subsystem performance. Large-scale web services today rely heavily on in-memory caching to reduce both client request latency and load on backend databases. Designing a distributed caching system for these services, however, presents several challenges. First, the system must cope with unpredictable changes in access patterns, especially sudden surges in demand for certain popular items. Second, data distributed across cache servers and backend databases must be kept consistent, even in the presence of failures in the system.

At the same time, a new class of networking hardware platforms, including programmable switch ASICs, reconfigurable network accelerators, and SmartNICs, is starting to emerge. These devices enable datapath and application-level acceleration by pushing computation directly into the network, presenting opportunities to re-architect distributed caching systems.

In this work, we first review existing caching and cache-coherence techniques used in uniprocessors, multiprocessors, distributed shared memory, and distributed in-memory caches. We then examine several emerging networking hardware platforms and survey some of their use cases. Finally, we propose new distributed caching system designs that exploit programmable networking hardware to accelerate caching and cache-coherence protocols.


Place: 
CSE 305
When: 
Thursday, January 18, 2018, 10:00