Design a Distributed LRU Cache
Design a distributed Least Recently Used (LRU) cache. The distributed aspect of the cache is the key requirement, with high availability and scalability as the core non-functional requirements. The cache itself only needs simple get and set operations, but it should follow the LRU strategy for evicting data.
Designing a distributed Least Recently Used (LRU) cache requires addressing both the technical challenges of implementing the LRU eviction strategy and the complexities of ensuring the cache scales effectively across multiple nodes. The LRU cache itself follows a straightforward principle: when the cache reaches its capacity, it evicts the least recently used items to make room for new data. While this is a well-known pattern in caching, adding the distributed element introduces several critical considerations: how to ensure consistency across multiple nodes, how to synchronize cache state efficiently, and how to handle high availability and fault tolerance.
To implement a distributed LRU cache, the fundamental building blocks are two data structures: a hash map and a doubly linked list. The hash map allows for O(1) time complexity for get and set operations by providing fast lookups. The doubly linked list is used to maintain the order of access, ensuring that the least recently used items can be identified and evicted in O(1) time. Each node in the doubly linked list holds a key-value pair and can be moved to the front of the list on access (via the get operation) or insertion (via the set operation). When the cache reaches its capacity, the least recently used item will be at the tail of the list and can be evicted efficiently. The challenge in the distributed system is how to maintain this order and ensure the cache is synchronized across multiple nodes.
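The single-node building block described above can be sketched as follows. This is a minimal, non-thread-safe illustration: a hash map for O(1) lookup and a doubly linked list (with sentinel head/tail nodes) for O(1) reordering and eviction; the class and method names are illustrative, not from any particular library.

```python
class Node:
    """Doubly linked list node holding one key-value pair."""
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}  # key -> Node, for O(1) lookup
        # Sentinel head/tail simplify edge cases.
        # Most recently used sits just after head; least recently used just before tail.
        self.head, self.tail = Node(None, None), Node(None, None)
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._unlink(node)       # move to front: this key is now most recent
        self._push_front(node)
        return node.value

    def set(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev  # least recently used item lives at the tail
            self._unlink(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._push_front(node)
```

Both operations touch only a constant number of pointers plus one dictionary operation, which is what gives the O(1) bounds discussed above.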
In a distributed setting, we need to consider partitioning the cache to ensure scalability. This is typically done by sharding the cache, where each node in the cluster stores a subset of the data, based on a hash of the key. The hash function determines which shard (node) holds a particular key. However, this introduces challenges in cache eviction. If the distributed system spans multiple nodes, an eviction strategy that operates within a single node may no longer be sufficient, as items evicted from one node could be needed by another. One solution is to implement a global LRU eviction strategy, where each node maintains its local LRU cache, but there is a mechanism (such as a coordinator node or a consistent hashing ring) to manage eviction across the entire system. Additionally, fault tolerance is achieved through replication or backup systems, ensuring that the cache remains available even if one or more nodes fail.
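The consistent hashing ring mentioned above can be sketched as follows. This is a simplified illustration (node names, the virtual-node count, and the MD5 choice are all assumptions for the example): each physical node is placed on the ring many times via virtual nodes, and a key is owned by the first node found clockwise from the key's hash.

```python
import bisect
import hashlib

def _hash(s: str) -> int:
    # MD5 used only for uniform key distribution, not security.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    """Maps cache keys to shard nodes via consistent hashing."""
    def __init__(self, nodes, vnodes=100):
        # Each physical node appears `vnodes` times on the ring so that
        # keys rebalance evenly when nodes join or leave.
        self.ring = sorted(
            (_hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._hashes = [h for h, _ in self.ring]

    def node_for(self, key: str) -> str:
        # The first virtual node clockwise of the key's hash owns the key.
        idx = bisect.bisect(self._hashes, _hash(key)) % len(self.ring)
        return self.ring[idx][1]
```

In this design, each shard returned by `node_for` runs its own local LRU cache (like the single-node structure above); when a node is removed, only the keys it owned are remapped, which limits the cache misses caused by cluster changes.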
In terms of software solutions that already implement distributed caching, systems like Redis and Memcached are popular choices, though they may not strictly implement the LRU eviction strategy in a distributed context out of the box. Redis, for example, offers LRU eviction, but its distributed implementation requires additional layers such as Redis Cluster or Redis Sentinel for high availability and sharding. For a true distributed LRU cache, implementing your own solution would require managing the coordination, fault tolerance, and consistency mechanisms yourself. In doing so, you'd need to carefully consider the trade-offs between consistency and availability, especially if implementing the cache in environments with high load and frequent failures, such as cloud-based microservices or containerized applications.
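For reference, Redis's per-node LRU behavior is enabled through configuration directives like the following (the memory limit here is illustrative; Redis's LRU is approximate, sampling a set of keys rather than tracking exact global order):

```conf
# redis.conf — evict approximately-LRU keys once memory is exhausted
maxmemory 256mb
maxmemory-policy allkeys-lru
```

Note that this governs eviction within a single Redis node; distributing the cache across nodes still requires Redis Cluster (or application-level sharding such as a consistent hashing ring) on top.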
Related Problems
A video service (like YouTube) has many viewers watching videos. Given a stream of the video IDs that are being watched, we need to find the top K most viewed videos for different periods of time (1 hour, 1 day, 1 month, all time). For the top K videos returned, we also want the count of views during that period.
Design an app like Google Maps. The app should provide the quickest possible route between two arbitrary locations. It should also provide an ETA (estimated time of arrival) for the destination, using current traffic data.
Sending user notifications is a common requirement in system design. Design a notification service for an organization. The system will use shared services for the underlying messaging implementation (email, SMS, push notifications, etc.), so the actual messaging implementation does not need to be designed. The system should support a user publishing a notification to a single user or to groups of users. Notifications can be triggered manually via a web UI or programmatically via an API. Users should be able to view the notifications they have previously published. If a user is unable to receive a notification, they should still receive it at the next opportunity and not miss the message. The notification service should scale to billions of notifications per day, with messages delivered within a few seconds, with five-nines uptime.
Design a URL shortener service (similar to TinyURL).
1. Generate an expiring, unique short URL from a provided URL
2. Redirect users to the correct website when they navigate to the short URL