
LRU Cache LeetCode Solution: HashMap + Linked List

LeetCode 146 is the most frequently asked design problem at FAANG companies. Learn the O(1) HashMap + Doubly Linked List approach step by step, with edge cases and interview tips.

9 min read


LRU Cache: The Design Problem Every FAANG Company Asks

If there is one design problem you will encounter in coding interviews, it is the LRU cache (LeetCode 146). It appears on interview question lists at Google, Amazon, Meta, Microsoft, Netflix, and Bloomberg — and for good reason. It tests whether you can compose two data structures into a single system that meets strict performance requirements.

An LRU (Least Recently Used) cache is not an academic curiosity. Every operating system uses LRU eviction for page caching. Every CDN uses it for content expiration. Every database uses it for buffer pool management. When an interviewer asks you to design one, they are testing whether you understand how real systems work under the hood.

This walkthrough covers the complete LRU cache leetcode solution from first principles. You will understand why simpler approaches fail, why the HashMap + Doubly Linked List combination is optimal, and how to implement it cleanly under interview pressure.

Understanding the LRU Cache Problem

LeetCode 146 asks you to design a data structure that supports two operations: get(key) and put(key, value). Both must run in O(1) average time. The cache has a fixed capacity, and when a put operation would exceed that capacity, the least recently used item must be evicted before the new item is inserted.

The "least recently used" definition is critical. Every time you call get(key) or put(key, value) on an existing key, that key becomes the most recently used. The item that has gone the longest without being accessed is the one that gets evicted when space is needed.

Here is what the interface looks like. You initialize the cache with a capacity, then call get and put repeatedly. A get on a missing key returns -1. A put either inserts a new key-value pair or updates an existing one.

  • LRUCache(capacity) — initialize the cache with a positive integer capacity
  • get(key) — return the value if the key exists (and mark it as recently used), otherwise return -1
  • put(key, value) — update the value if the key exists, or insert the key-value pair. If insertion exceeds capacity, evict the least recently used key first
  • Both get and put must operate in O(1) average time complexity
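Before building the doubly linked list version, the expected behavior can be sketched with JavaScript's built-in Map, which preserves insertion order (the same ordered-map idea behind Java's LinkedHashMap, mentioned later in this article). This is a behavioral sketch of the interface above, not the from-scratch interview answer:

```javascript
// Behavioral sketch of the LRU Cache interface using JS Map's insertion-order
// guarantee. Deleting and re-inserting a key moves it to the "most recent" end.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // key -> value, oldest entry first
  }

  get(key) {
    if (!this.map.has(key)) return -1;
    // Re-insert to mark the key as most recently used
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  put(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Map iterates in insertion order, so the first key is least recently used
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new LRUCache(2);
cache.put(1, 1);
cache.put(2, 2);
cache.get(1);    // returns 1; key 1 is now most recently used
cache.put(3, 3); // capacity exceeded — evicts key 2, the least recently used
cache.get(2);    // returns -1
```

This trace is the canonical LeetCode 146 example: after put(3, 3), key 2 is gone because get(1) refreshed key 1's recency.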
ℹ️

Industry Insight

LRU Cache (#146) is the single most frequently asked design problem on LeetCode — it appears at Google, Amazon, Meta, Microsoft, Netflix, and Bloomberg.

Why a HashMap Alone Is Not Enough

Your first instinct might be to use a HashMap. After all, HashMaps give you O(1) lookup by key, which handles the get operation perfectly. You could store key-value pairs and retrieve them instantly. But the LRU cache design requires something a HashMap cannot provide on its own — tracking usage order.

A HashMap has no concept of ordering. When the cache is full and you need to evict the least recently used item, which key do you remove? You would need to scan every entry to find the oldest one, making eviction O(n). That breaks the O(1) requirement.

You might consider adding a timestamp to each entry and scanning for the minimum timestamp on eviction. But scanning is still O(n). You might try a sorted structure like a TreeMap to maintain order by timestamp, but insertion and deletion in a TreeMap are O(log n). The LRU cache implementation demands O(1) for everything — lookup, insertion, deletion, and reordering.
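To make the failure concrete, here is a sketch of the timestamp approach (the class name NaiveLRUCache and the logical-clock field are illustrative choices, not from the problem statement). It is functionally correct, which is exactly why it is tempting — but eviction scans every entry:

```javascript
// Naive approach for contrast: each entry carries a logical timestamp, and
// eviction scans all entries for the minimum. get and put on existing keys
// are O(1), but eviction is O(n), breaking the problem's requirement.
class NaiveLRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.entries = new Map(); // key -> { value, stamp }
    this.clock = 0;           // logical clock; higher stamp = more recent
  }

  get(key) {
    const entry = this.entries.get(key);
    if (entry === undefined) return -1;
    entry.stamp = ++this.clock; // mark as most recently used
    return entry.value;
  }

  put(key, value) {
    const existing = this.entries.get(key);
    if (existing !== undefined) {
      existing.value = value;
      existing.stamp = ++this.clock;
      return;
    }
    if (this.entries.size === this.capacity) {
      // O(n) scan for the oldest timestamp — this is the bottleneck
      let oldestKey = null;
      let oldestStamp = Infinity;
      for (const [k, e] of this.entries) {
        if (e.stamp < oldestStamp) {
          oldestStamp = e.stamp;
          oldestKey = k;
        }
      }
      this.entries.delete(oldestKey);
    }
    this.entries.set(key, { value, stamp: ++this.clock });
  }
}
```

In an interview, writing this and then pointing at the eviction loop is a good way to motivate the doubly linked list.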

This is exactly the kind of reasoning interviewers want to hear. Showing that you understand why the naive approach fails is just as important as arriving at the correct solution.

The Optimal Solution: HashMap + Doubly Linked List

The optimal LRU cache design combines two data structures that complement each other perfectly. A HashMap provides O(1) key-to-node lookup. A Doubly Linked List provides O(1) insertion, removal, and reordering — because you can remove any node in constant time if you have a direct pointer to it.

Here is how the HashMap + linked list combination works. The HashMap maps each key to a node in the Doubly Linked List. Each node stores the key, the value, and pointers to the previous and next nodes. The list maintains usage order — the most recently used item sits at the head, and the least recently used item sits at the tail.

When you call get(key), you look up the node in the HashMap in O(1), move that node to the head of the list in O(1), and return its value. When you call put(key, value), you either update an existing node and move it to the head, or create a new node at the head. If the cache exceeds capacity, you remove the tail node from both the list and the HashMap — all in O(1).

  • HashMap: stores key -> node reference for O(1) lookup
  • Doubly Linked List: maintains usage order with most recent at head, least recent at tail
  • get(key): HashMap lookup -> move node to head -> return value — all O(1)
  • put(key, value): HashMap lookup -> insert or update node at head -> evict tail if over capacity — all O(1)
  • Space complexity: O(capacity) for both the HashMap and the linked list
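The node structure underpinning the list half can be sketched as follows (field names are one common choice; the key is stored in the node so that evicting the tail tells you which HashMap entry to delete):

```javascript
// Building block for the doubly linked list half of the design.
class Node {
  constructor(key, value) {
    this.key = key;     // stored so eviction can delete the HashMap entry
    this.value = value;
    this.prev = null;
    this.next = null;
  }
}

// Sentinel wiring: dummy head and tail flank the real nodes, so insertion
// and removal never need null checks.
const head = new Node(0, 0); // dummy head sentinel
const tail = new Node(0, 0); // dummy tail sentinel
head.next = tail;
tail.prev = head;
```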
💡

Key Insight

The key insight is using TWO data structures together — HashMap for O(1) lookup by key, Doubly Linked List for O(1) move-to-front and evict-from-back. Neither alone achieves O(1) for both operations.

Step-by-Step LRU Cache Implementation

The LRU cache implementation starts with a Node class. Each node holds a key, a value, and pointers to the previous and next nodes. You need the key stored in the node so that when you evict the tail, you know which key to remove from the HashMap.

Next, set up the Doubly Linked List with dummy head and tail sentinel nodes. These sentinels simplify the code dramatically — you never need to check for null pointers when inserting or removing. The real nodes always sit between the sentinels.

The helper methods are straightforward. An addToHead method inserts a node right after the head sentinel. A removeNode method unlinks a node from its current position by updating its neighbors' pointers. A moveToHead method combines removeNode and addToHead. A removeTail method removes and returns the node just before the tail sentinel.

  1. Define a Node class with key, value, prev, and next properties
  2. Initialize the LRUCache with a capacity, an empty HashMap, and dummy head/tail sentinel nodes linked together
  3. Implement addToHead(node): insert node between head sentinel and head.next
  4. Implement removeNode(node): set node.prev.next = node.next and node.next.prev = node.prev
  5. Implement moveToHead(node): call removeNode then addToHead
  6. Implement removeTail(): save tail.prev, call removeNode on it, return the removed node
  7. Implement get(key): if key exists in HashMap, moveToHead and return value; otherwise return -1
  8. Implement put(key, value): if key exists, update value and moveToHead; otherwise create new node, add to HashMap and head, and if size exceeds capacity, removeTail and delete that key from HashMap
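Putting the eight steps together, a complete sketch in JavaScript might look like this (method names follow the walkthrough; the size is tracked via the HashMap's own size rather than a separate counter, which is one of several reasonable choices):

```javascript
// Full O(1) LRU Cache: HashMap (key -> Node) + doubly linked list with
// head/tail sentinels. Most recently used sits just after head; least
// recently used sits just before tail.
class Node {
  constructor(key, value) {
    this.key = key;
    this.value = value;
    this.prev = null;
    this.next = null;
  }
}

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();        // key -> Node
    this.head = new Node(0, 0);  // dummy head sentinel
    this.tail = new Node(0, 0);  // dummy tail sentinel
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  addToHead(node) {
    node.prev = this.head;
    node.next = this.head.next;
    this.head.next.prev = node;
    this.head.next = node;
  }

  removeNode(node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  moveToHead(node) {
    this.removeNode(node);
    this.addToHead(node);
  }

  removeTail() {
    const node = this.tail.prev;
    this.removeNode(node);
    return node;
  }

  get(key) {
    const node = this.map.get(key);
    if (node === undefined) return -1;
    this.moveToHead(node);
    return node.value;
  }

  put(key, value) {
    const existing = this.map.get(key);
    if (existing !== undefined) {
      existing.value = value;     // update in place — do not create a new node
      this.moveToHead(existing);
      return;
    }
    const node = new Node(key, value);
    this.map.set(key, node);
    this.addToHead(node);
    if (this.map.size > this.capacity) {
      const evicted = this.removeTail();
      this.map.delete(evicted.key); // keep HashMap and list in sync
    }
  }
}
```

Note the two synchronization points: put on an existing key updates the node in place, and eviction removes the key from both structures. Those are exactly the spots the next section's edge cases target.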

Edge Cases and Common Mistakes

The LRU cache interview question has several edge cases that trip up candidates. The most common mistake is handling capacity 1 incorrectly. With a single-slot cache, every put of a new key evicts the previous entry. A sentinel-based approach handles this correctly because the single real node always sits directly between the two sentinels, and eviction always targets it.

Another frequent bug is forgetting to update the HashMap when an existing key is updated with put. If you insert a duplicate key without removing the old node first, you end up with two nodes for the same key — one orphaned in the list and one in the HashMap pointing to the new node. Always check if the key already exists before creating a new node.

A subtle mistake is returning the wrong value from get after a put overwrites a key. When put is called with an existing key and a new value, the node's value must be updated in place before moving it to the head. If you create a new node instead of updating the existing one, you waste memory and risk capacity miscounts.

Finally, watch out for off-by-one errors in capacity tracking. Maintain a size counter that increments on new insertions and decrements on evictions. Do not increment size when updating an existing key — that is a common source of bugs that causes premature eviction.

  • Capacity 1: every new put evicts the previous entry — sentinels handle this naturally
  • Duplicate key: always check HashMap before creating a new node; update the existing node instead
  • Get on missing key: return -1 without modifying the list order
  • Size tracking: only increment on true insertions, not on updates to existing keys
  • Stale HashMap references: when evicting the tail, always delete the evicted key from the HashMap
⚠️

Common Bug

The most common LRU Cache bug is forgetting to update the HashMap when evicting — always remove the evicted key from the HashMap AND the DLL, or you'll have stale references.

What LRU Cache Teaches You About Data Structure Composition

The LRU cache problem is not just about caching. It teaches a fundamental principle of system design — composing simple data structures to achieve properties that neither structure offers alone. A HashMap gives you O(1) random access. A Doubly Linked List gives you O(1) ordered insertion and removal. Together, they give you an O(1) ordered random-access cache.

This composition pattern appears everywhere in software engineering. Java's LinkedHashMap uses exactly this approach. Python's OrderedDict is built on the same idea. Redis uses a similar structure for its LRU eviction policy. Once you internalize this pattern, you will recognize it in production systems everywhere.

The LRU cache design pattern also prepares you for follow-up questions interviewers love to ask. How would you make this thread-safe? Add a lock around get and put. How would you implement an LFU (Least Frequently Used) cache? Add a frequency counter and a HashMap of frequency buckets. How would you implement TTL (time-to-live) expiration? Add a timestamp field to each node and a cleanup mechanism.

If you want to build lasting recall for the HashMap + linked list pattern and other essential LeetCode structures, YeetCode flashcards help you drill these compositions through spaced repetition — so the approach comes to mind instantly when you see a design problem in your next interview.

Ready to master algorithm patterns?

YeetCode flashcards help you build pattern recognition through active recall and spaced repetition.

Start practicing now