Implementing a Least Recently Used (LRU) Cache in Python

A cache is a component that stores data so future requests for that data can be served faster. Caching is used in many areas of computer science to optimize the performance of applications. One popular caching strategy is the Least Recently Used (LRU) cache. This how-to guide will demonstrate how to implement an LRU cache in Python, explaining the key concepts and providing example code.

Overview of LRU Caches

The Least Recently Used (LRU) cache eviction policy removes the least recently used item first when the cache is full and a new item needs to be inserted. This strategy is based on the heuristic that the items that have been accessed recently are more likely to be needed again compared to items that have not been accessed in a long time.

Some key characteristics of LRU caches:

- A fixed maximum capacity set when the cache is created
- Every access, whether a read or a write, marks an item as the most recently used
- When the cache is full, the least recently used item is evicted to make room
- With the right data structures, lookups and insertions run in O(1) time

LRU caches are commonly used for page replacement in operating systems and web caching. Other applications include database query caching and GPU texture caching for rendering. The goal is faster access to frequently used data.
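
Python's standard library already ships an LRU cache as the functools.lru_cache decorator. A quick example illustrates the eviction policy before we build our own (the square function here is just a stand-in for an expensive computation):

from functools import lru_cache

@lru_cache(maxsize=3)
def square(n):
    # Stand-in for an expensive computation worth caching
    return n * n

square(1)
square(2)
square(3)                     # cache now holds 1, 2, 3
square(1)                     # hit: 1 becomes the most recently used
square(4)                     # miss: evicts 2, the least recently used
print(square.cache_info())    # CacheInfo(hits=1, misses=4, maxsize=3, currsize=3)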

LRU Cache Implementation Overview

We will implement an LRU cache in Python with the following methods:

- get(key) - retrieve the value stored under key, or -1 if it is not present
- put(key, value) - insert or update a key-value pair, evicting the least recently used item if the cache is full
- print() - print the cache contents from most to least recently used

The cache will be initialized with a predetermined maximum capacity. A Python dictionary will store the key-value pairs. We also need to track the order of item access to determine which item is least recently used when evicting.

There are a few common ways to implement an LRU cache:

- An OrderedDict, which keeps key-value pairs in order and supports reordering on access
- A regular dict for storage paired with a deque that tracks access order
- A dict paired with a hand-rolled doubly linked list

We will examine implementations using OrderedDict and deque from Python’s collections module. But first, let’s define the cache interface.

Defining the Cache Interface

We’ll start by defining a Cache class with get(), put(), and print() methods, left as empty stubs for now:

class Cache:

    def __init__(self, capacity):
        # Initialize the cache with a fixed maximum capacity
        self.capacity = capacity

    def get(self, key):
        # Retrieve the value stored under key. Return -1 if the key is not present.
        pass

    def put(self, key, value):
        # Insert or update the value for key. If the cache is at capacity,
        # evict the least recently used item first.
        pass

    def print(self):
        # Print the cache contents from most to least recently used
        pass

This provides a template to build our LRU cache on. Now let’s look at different data structures to track item access order.

Using OrderedDict for LRU Caching

Python’s OrderedDict from the collections module stores items in the order they were inserted while still providing efficient key lookup like a regular dictionary.

We can use an OrderedDict to build our LRU cache, storing the key-value pairs and evicting the item at the front, the least recently used one, when the cache is full. The code is simple:

from collections import OrderedDict

class LRUCache:

    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, key):
        if key not in self.cache:
            return -1
        else:
            self.cache.move_to_end(key)
            return self.cache[key]

    def put(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)

    def print(self):
        # Print from most to least recently used
        for key in reversed(self.cache):
            print(f"{key}: {self.cache[key]}")

The OrderedDict keeps track of insertion order automatically. Calling move_to_end() whenever a key is accessed pushes it to the back, marking it as the most recently used. When the cache grows past capacity, popitem(last=False) evicts the item at the front, which is the least recently used.
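
To see these OrderedDict mechanics in isolation, here is a small sketch of move_to_end() and popitem(last=False), the two methods the cache relies on:

from collections import OrderedDict

od = OrderedDict()
od["a"] = 1
od["b"] = 2
od["c"] = 3

od.move_to_end("a")       # "a" moves to the back, marking it most recently used
print(list(od))           # ['b', 'c', 'a']

od.popitem(last=False)    # removes "b", the item at the front (least recently used)
print(list(od))           # ['c', 'a']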

This implementation is simple and efficient: lookups and insertions run in O(1) time. One limitation is that an OrderedDict uses more memory than a regular dictionary, since it maintains an internal doubly linked list to preserve ordering.

Next, we’ll look at an implementation that pairs a plain dict with a deque to track access order.

Using Deque for LRU Caching

Python’s deque (double-ended queue) from the collections module is a list-like container well suited to tracking recently used items. It supports O(1) appends and pops from either end.

We can use a deque to build the LRU cache, storing keys with the least recently used at the front and the most recently used at the back. When we access a key, we move it to the back of the deque. The cache dictionary holds the key-value pairs.
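
The cache relies on three deque operations: append(), popleft(), and remove(). A quick sketch of how they behave:

from collections import deque

access = deque()
access.append(1)
access.append(2)
access.append(3)          # deque([1, 2, 3]); 3 is the most recently used

access.remove(2)          # drop 2 from wherever it sits (a linear scan)
access.append(2)          # re-append to mark it most recently used
print(access)             # deque([1, 3, 2])

access.popleft()          # evict 1, the least recently used
print(access)             # deque([3, 2])

With those operations in mind, the full deque-based cache looks like this: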

from collections import deque

class LRUCache:

    def __init__(self, capacity):
        self.cache = dict()
        self.capacity = capacity
        self.access = deque()

    def get(self, key):
        if key not in self.cache:
            return -1
        else:
            self.access.remove(key)
            self.access.append(key)
            return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.access.remove(key)
        elif len(self.cache) == self.capacity:
            oldest = self.access.popleft()
            del self.cache[oldest]
        self.cache[key] = value
        self.access.append(key)

    def print(self):
        # Print from most to least recently used
        for key in reversed(self.access):
            print(f"{key}: {self.cache[key]}")

The deque holds keys ordered from least recently used at the front to most recently used at the back. On lookup we move the key to the back. On insertion we append the new key and pop the oldest key from the front when at capacity.

This has the benefit of decoupling order tracking from the key-value storage. Dictionary lookups remain O(1), but note that deque.remove() must scan the deque, so updating the access order is O(n) in the cache size. For small caches this is rarely a problem; for large caches, the OrderedDict version, or a dict paired with a doubly linked list, keeps every operation O(1).
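
For larger caches where that linear removal matters, the classic fix is a dict that maps keys to nodes of a hand-rolled doubly linked list, which keeps every operation O(1). Here is a minimal sketch of that approach (the DLinkedLRUCache class and its _remove/_add_front helpers are illustrative names, not part of the implementations above):

class Node:
    # Doubly linked list node holding one cache entry
    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class DLinkedLRUCache:

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}            # key -> Node
        self.head = Node()         # sentinel on the most recently used side
        self.tail = Node()         # sentinel on the least recently used side
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        # Unlink a node from the list in O(1)
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_front(self, node):
        # Insert a node right after the head sentinel (most recently used)
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.cache:
            return -1
        node = self.cache[key]
        self._remove(node)
        self._add_front(node)
        return node.value

    def put(self, key, value):
        if key in self.cache:
            self._remove(self.cache[key])
        elif len(self.cache) == self.capacity:
            lru = self.tail.prev   # node just before the tail sentinel
            self._remove(lru)
            del self.cache[lru.key]
        node = Node(key, value)
        self.cache[key] = node
        self._add_front(node)

The extra bookkeeping buys constant-time eviction and reordering, which is essentially what OrderedDict does for us internally.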

Next we’ll demonstrate the LRU cache in action.

Demonstrating the LRU Cache

Let’s test our LRU cache implementation with some examples. First we initialize a cache with a capacity of 3 items:

cache = LRUCache(3)

We insert some keys and values:

cache.put(1, "one")
cache.put(2, "two")
cache.put(3, "three")

The cache contents will be:

3: three
2: two
1: one

Now if we access key 2, it will be moved to the most recent position:

cache.get(2)

Cache contents are now:

2: two
3: three
1: one

Next we add a new item, key 4. This will evict the least recently used key 1:

cache.put(4, "four")

Cache:

4: four
2: two
3: three

We continue by accessing key 3 and then adding key 5. Accessing key 3 marks it as most recently used, and inserting key 5 evicts key 2, which is now the least recently used:

cache.get(3)
cache.put(5, "five")

Cache:

5: five
3: three
4: four

The LRU eviction policy removes the least recently used item first. The print() method shows the items ordered from most to least recently used.

This demonstrates the core functionality of an LRU cache implemented in Python! Next we’ll discuss some ways to optimize and extend the implementation.

Optimizations and Extensions

There are further optimizations we can apply to improve performance:

- Thread safety - wrap get() and put() in a lock so concurrent callers cannot corrupt the access-order bookkeeping (a sketch follows at the end of this section)
- Capacity sizing - tune the maximum capacity to the workload and the memory available
- Disk overflow - spill evicted items to disk instead of discarding them so they can be reloaded cheaply

There are also many extensions we can add, such as time-to-live (TTL) expiration for entries, hit and miss statistics to measure cache effectiveness, or a decorator interface for memoizing function calls along the lines of functools.lru_cache.

By tuning performance and adding advanced capabilities, we can build an LRU cache that is robust and production-ready.
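
As an example of the thread-safety point, here is a minimal sketch (the ThreadSafeLRUCache name is illustrative, not part of this guide's code) that wraps the LRUCache class from earlier with a lock:

import threading

class ThreadSafeLRUCache(LRUCache):
    # Hypothetical wrapper: serializes access to the LRUCache defined earlier
    # so concurrent threads cannot corrupt the access-order bookkeeping.

    def __init__(self, capacity):
        super().__init__(capacity)
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return super().get(key)

    def put(self, key, value):
        with self._lock:
            super().put(key, value)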

Application Examples

Some examples of applying LRU caching in real world applications:

Web Caching - LRU helps efficiently serve static resources like images and CSS by caching the most frequently accessed files in memory for low latency. Popular web frameworks provide LRU caches.

Database Query Caching - Stores results of database queries in an LRU cache to return cached result sets instead of re-running expensive queries. Drastically speeds up read-intensive workloads.
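
As a rough illustration of the query-caching case, the LRUCache class from this guide can sit in front of a query function (run_query and the SQL string below are placeholders, not a real database API):

query_cache = LRUCache(128)   # LRUCache from earlier in this guide

def run_query(sql):
    # Stand-in for an expensive database call
    print(f"running query: {sql}")
    return [("row", 1)]

def cached_query(sql):
    result = query_cache.get(sql)
    if result != -1:
        return result              # cache hit: skip the database round trip
    result = run_query(sql)
    query_cache.put(sql, result)
    return result

cached_query("SELECT * FROM users")   # executes the query
cached_query("SELECT * FROM users")   # served from the cache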

Object Pooling - Reuse expensive-to-initialize objects like database connections by retaining a pool of recently used objects. A good use case for an LRU cache.

Rendering Optimizations - Graphics rendering utilizes texture caching and model batching with LRU eviction to optimize GPU memory usage.

Video Streaming - Local media files or recently viewed segments of videos can be cached on end-user devices using LRU to provide smooth streaming.

Content Delivery Networks - LRU caches at CDN edge servers provide low-latency access to frequently requested content.

There are many other examples where LRU caching provides significant performance benefits by taking advantage of locality of access. The simple LRU cache implemented in Python here serves as a good starting point for many applications.

Conclusion

In this guide, we implemented a Least Recently Used cache in Python using both an OrderedDict and a deque to track order efficiently. We saw how LRU caching improves performance by retaining frequently accessed data while removing least used items first when the cache is full.

An LRU cache provides fast O(1) lookups and updates by exploiting locality of reference in typical workloads. We can apply optimizations like concurrent access support, capacity sizing, and disk overflow to make the LRU cache production-ready. There are also many options for extending the cache with advanced capabilities.

LRU caching is an invaluable technique used widely across many domains including web serving, databases, graphics, video streaming, and content delivery. The Python implementation presented here provides a solid foundation to start applying LRU caching in your own applications. By caching hot data intelligently, you can speed up access, reduce resource usage, and build highly performant systems.