What Is LRU in Data Structure?


Heather Bennett

In computer science, LRU (Least Recently Used) is a popular caching algorithm used to manage the eviction of elements from a cache when it reaches its capacity limit. The basic idea behind LRU is to remove the least recently used element from the cache, making space for new elements that are more likely to be accessed in the near future.

How Does LRU Work?

The LRU algorithm maintains a data structure that keeps track of the order in which elements are accessed. This data structure can be implemented using a doubly linked list and a hash map.

Doubly Linked List

A doubly linked list is a linear data structure where each node contains three fields: data, previous, and next. The previous field points to the previous node in the list, and the next field points to the next node.

In an LRU cache, the doubly linked list is used to maintain the order of access of elements. The most recently accessed element is placed at the front of the list, while the least recently accessed element is placed at the end of the list.
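The node described above can be sketched in Python as a small class (the names `Node`, `prev`, and `next` are illustrative, not from a particular library):

```python
class Node:
    """A doubly linked list node holding one cached key-value pair."""

    def __init__(self, key, value):
        self.key = key      # stored so the node can be found in the hash map on eviction
        self.value = value
        self.prev = None    # the neighbouring, more recently used node
        self.next = None    # the neighbouring, less recently used node
```

Storing the key inside the node matters: when the node at the tail is evicted, the cache needs the key to delete the matching hash map entry.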

Hash Map

A hash map is a data structure that allows efficient retrieval and storage of key-value pairs. In an LRU cache, a hash map is used to store references to nodes in the doubly linked list. Each key in the hash map corresponds to an element in the cache, and its value is a reference to its corresponding node in the doubly linked list.

The combination of a doubly linked list and a hash map allows insertion, deletion, and retrieval of elements in an LRU cache in O(1) average time.

LRU Cache Operations

An LRU cache supports the following operations:

  • Get(key): Retrieves the value associated with the given key from the cache. If the key is not present in the cache, it returns null.
  • Put(key, value): Inserts or updates a key-value pair in the cache. If the cache is already at its maximum capacity, it removes the least recently used element before adding the new element.
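The two operations above can be sketched as a minimal Python class combining a dictionary with a doubly linked list. This is an illustrative implementation, not a standard library API; sentinel head and tail nodes are used to avoid empty-list edge cases:

```python
class LRUCache:
    """A sketch of an LRU cache: hash map + doubly linked list.

    The most recently used node sits just after `head`;
    the least recently used node sits just before `tail`.
    """

    class _Node:
        def __init__(self, key, value):
            self.key = key
            self.value = value
            self.prev = None
            self.next = None

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}  # key -> node
        # Sentinel nodes: the list is never truly empty.
        self.head = self._Node(None, None)
        self.tail = self._Node(None, None)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        """Unlink a node from the list."""
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_front(self, node):
        """Insert a node right after head (most recently used position)."""
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._remove(node)
        self._add_front(node)  # accessing a key makes it most recently used
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])
        node = self._Node(key, value)
        self.map[key] = node
        self._add_front(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev          # least recently used node
            self._remove(lru)
            del self.map[lru.key]
```

Both `get` and `put` run in O(1) average time: the hash map locates the node, and the doubly linked list relinks it with a constant number of pointer updates.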

Example

Let’s walk through an example to better understand how LRU works:

Suppose we have an LRU cache with a maximum capacity of 3:

  • cache = {}

We perform the following operations:

  1. Put(1, 'A'): The cache becomes {1: 'A'}.
  2. Put(2, 'B'): The cache becomes {1: 'A', 2: 'B'}.
  3. Put(3, 'C'): The cache becomes {1: 'A', 2: 'B', 3: 'C'}.
  4. Get(1): Returns 'A'. The cache becomes {2: 'B', 3: 'C', 1: 'A'}.
  5. Put(4, 'D'): The cache is full, so the least recently used element (key 2) is evicted. The cache becomes {3: 'C', 1: 'A', 4: 'D'}.
  6. Get(2): Returns null since key 2 is not present in the cache.

In this example, key 1 was accessed after keys 2 and 3, so it became the most recently used element. When the cache was full and we inserted key 4, we had to remove key 2 because it was the least recently used.
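The sequence above can be reproduced with Python's built-in collections.OrderedDict, which remembers insertion order and offers `move_to_end` and `popitem` (the helper names `put` and `get` and the `CAPACITY` constant are illustrative):

```python
from collections import OrderedDict

cache = OrderedDict()
CAPACITY = 3

def put(key, value):
    if key in cache:
        cache.move_to_end(key)  # updating a key also counts as a use
    cache[key] = value
    if len(cache) > CAPACITY:
        cache.popitem(last=False)  # evict the least recently used entry

def get(key):
    if key not in cache:
        return None
    cache.move_to_end(key)  # mark as most recently used
    return cache[key]

put(1, 'A')
put(2, 'B')
put(3, 'C')
get(1)        # returns 'A' and refreshes key 1
put(4, 'D')   # evicts key 2
get(2)        # returns None
```

Here the OrderedDict plays both roles at once: its hash table gives O(1) lookup, and its internal linked list maintains the recency order that the doubly linked list provides in a hand-rolled implementation.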

Conclusion

The LRU algorithm is widely used in applications that involve caching, such as web browsers, operating systems, and databases. By keeping recently accessed elements in the cache and evicting those that have gone longest without use, LRU helps improve performance by reducing the number of expensive disk or network operations.

Understanding how LRU works and implementing it correctly can be crucial for optimizing the performance of your applications that rely on caching.
