Problem Overview: Design an LFU Cache
LeetCode 460 asks you to design a data structure that implements a Least Frequently Used (LFU) cache. The cache has a fixed capacity. You must support two operations: get(key) returns the value if the key exists and -1 otherwise, and put(key, value) inserts or updates a key-value pair. Both operations must run in O(1) average time complexity.
The eviction policy is the defining challenge: when the cache is full and a new key must be inserted, you must evict the key with the lowest access frequency. Access frequency counts both get and put operations on a key. If multiple keys share the same minimum frequency, you evict the least recently used among them — the one that was accessed least recently within that frequency tier.
This problem is significantly harder than LRU Cache (LeetCode 146) because frequency changes dynamically on every access. A key's frequency can be 1, 2, 3, or any positive integer. You cannot maintain a single ordered structure because promoting a key from frequency f to f+1 must be O(1) — ruling out sorted sets or heaps that require O(log n) updates.
- Capacity: fixed integer given at construction; capacity >= 1
- get(key): return value if key exists (and count this as an access); return -1 if not found
- put(key, value): insert new key or update existing; if at capacity, evict before inserting
- Eviction rule: remove the key with the lowest frequency
- Tie-breaking: among equal-frequency keys, evict the least recently used (LRU)
- Both get and put must be O(1) average time
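To make these requirements concrete before touching the O(1) design, here is a deliberately naive O(n) reference implementation (the `NaiveLFU` name and its fields are our own, not part of the problem). It scans every key on eviction, which is exactly the cost the real design must avoid, but it pins down the frequency-then-LRU eviction semantics:

```python
class NaiveLFU:
    """O(n) reference implementation used only to illustrate LFU semantics."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.vals = {}    # key -> value
        self.freq = {}    # key -> access count
        self.time = {}    # key -> last-access timestamp, for LRU tie-breaking
        self.tick = 0

    def _touch(self, key):
        # Every get or put on an existing key counts as an access.
        self.tick += 1
        self.freq[key] += 1
        self.time[key] = self.tick

    def get(self, key):
        if key not in self.vals:
            return -1
        self._touch(key)
        return self.vals[key]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.vals:
            self.vals[key] = value
            self._touch(key)
            return
        if len(self.vals) == self.capacity:
            # Evict lowest frequency; break ties by oldest access (LRU).
            victim = min(self.vals, key=lambda k: (self.freq[k], self.time[k]))
            del self.vals[victim], self.freq[victim], self.time[victim]
        self.tick += 1
        self.vals[key] = value
        self.freq[key] = 1
        self.time[key] = self.tick
```

Tracing the standard example for this problem with capacity 2: after `put(3, 3)` on a full cache, key 2 (frequency 1) is evicted; after `put(4, 4)`, keys 1 and 3 tie at frequency 2 and the less recently used key 1 goes.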
LRU vs LFU: Key Differences
LRU (Least Recently Used) evicts the key that was accessed least recently, regardless of how many times it was accessed in total. This is straightforward to implement with a doubly linked list plus a hashmap: every access moves the key to the front of the list, and eviction removes from the tail. The ordering is purely temporal — one global list suffices.
LFU (Least Frequently Used) evicts the key with the fewest total accesses. This is harder because the eviction candidate is not determined by time alone but by a count that changes on every access. When a key at frequency f is accessed, it must move to frequency f+1. This means the eviction candidate can change with every operation — a key that was the LFU candidate may no longer be after being accessed once.
The tie-breaking rule adds another layer: among all keys at the minimum frequency, you must evict the least recently used. So within each frequency tier, you still need LRU ordering. This means LFU effectively requires one LRU structure per frequency level rather than a single global one.
Master LRU Cache Before Attempting LFU
LFU builds directly on LRU concepts — each frequency bucket is itself an LRU list. If you have not yet solved LeetCode 146 LRU Cache, do that first. Once you understand how a doubly linked list with head and tail sentinels enables O(1) add and remove, you can apply the same structure to each frequency bucket in the LFU design. The jump from LRU to LFU is a matter of adding frequency tracking on top of an existing pattern.
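As a refresher on that prerequisite pattern, here is a minimal sentinel-based doubly linked list in Python. The method names match those described in the code walkthrough below, but the exact code is our sketch:

```python
class Node:
    """Cache entry: key, value, and current access frequency."""
    __slots__ = ("key", "val", "freq", "prev", "next")

    def __init__(self, key=0, val=0):
        self.key, self.val, self.freq = key, val, 1
        self.prev = self.next = None


class DLinkedList:
    """Doubly linked list with head/tail sentinels; every operation is O(1)."""

    def __init__(self):
        self.head, self.tail = Node(), Node()   # sentinels, never removed
        self.head.next, self.tail.prev = self.tail, self.head
        self.size = 0

    def add_to_tail(self, node):
        # Append just before the tail sentinel (most recently used end).
        node.prev, node.next = self.tail.prev, self.tail
        self.tail.prev.next = node
        self.tail.prev = node
        self.size += 1

    def remove(self, node):
        # Unlink a node anywhere in the list in O(1).
        node.prev.next, node.next.prev = node.next, node.prev
        self.size -= 1

    def pop_head(self):
        # Remove and return the least recently used node (after head sentinel).
        node = self.head.next
        self.remove(node)
        return node
```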
Data Structure Design: Three Maps and minFreq
The O(1) LFU design requires three core data structures working together. The first is a key-to-node map that gives direct O(1) access to any cache entry by key. Each node stores the key, value, and current frequency. The second is a key-to-frequency map that tracks each key's access count. The third is a frequency-to-doubly-linked-list map where each bucket holds all keys currently at that frequency, ordered from least recently used (head) to most recently used (tail).
A global integer minFreq tracks the lowest frequency currently present in the cache. This pointer is essential for O(1) eviction — instead of scanning all frequency buckets to find the minimum, you always evict from the minFreq bucket. Maintaining minFreq correctly on every get and put is the key invariant of the design.
Within each frequency bucket, nodes are kept in LRU order: the head sentinel's next pointer points to the least recently used node, and the tail sentinel's prev pointer points to the most recently used. Accessing a key always appends it to the tail of its new (incremented) frequency bucket, so eviction always removes from the head of the minFreq bucket.
- key_to_node: HashMap&lt;Integer, Node&gt; — direct access to any cache node in O(1)
- key_to_freq: HashMap&lt;Integer, Integer&gt; — tracks the current frequency of each key
- freq_to_list: HashMap&lt;Integer, DLinkedList&gt; — each frequency maps to an LRU doubly linked list
- minFreq: int — the current minimum frequency; updated on get and put
- capacity: int — max cache size; evict when size == capacity before inserting a new key
- Node class: stores key, val, freq fields; DLinkedList has head/tail sentinels for O(1) add/remove
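One Pythonic way to lay out these fields is sketched below. This is our variant, not the only option: an OrderedDict stands in for each per-frequency DLinkedList (it supports O(1) insertion at the end and O(1) removal of the oldest entry), and values are stored directly in a dict rather than in Node objects:

```python
from collections import defaultdict, OrderedDict


class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity    # fixed max size, set at construction
        self.min_freq = 0           # lowest frequency currently in the cache
        self.key_to_val = {}        # key -> value (stands in for key_to_node)
        self.key_to_freq = {}       # key -> current access count
        # freq -> keys at that frequency, in LRU order (oldest first);
        # each OrderedDict plays the role of one DLinkedList bucket.
        self.freq_to_keys = defaultdict(OrderedDict)
```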
Get Operation: Increment Frequency and Move Bucket
The get operation has three steps. First, look up the key in key_to_node. If not found, return -1 immediately. If found, retrieve the node and proceed to update its frequency. Second, remove the node from its current frequency bucket in freq_to_list. If that bucket is now empty, delete it from the map. Third, increment the node's frequency by 1, add it to the tail of the new frequency bucket (creating the bucket if it doesn't exist), and update key_to_freq.
The minFreq update on get is straightforward: if the node we just moved was at minFreq and its old bucket is now empty, increment minFreq by 1. The new minFreq is exactly minFreq + 1 in this case — not some arbitrary minimum — because we only moved one node up by one level, and no other nodes moved. If the old bucket still has other nodes, minFreq stays the same.
Return the node's value after the frequency update. The get operation is O(1) because all map lookups, doubly linked list insertions, and doubly linked list removals are O(1) with sentinels. The total number of operations per get is constant regardless of cache size or frequency values.
minFreq Only Increments by 1 During Get
When get moves a node from frequency f to f+1, the only possible new minFreq is f+1 — but only if the old bucket at f is now empty. If other keys remain at frequency f, minFreq stays at f. You never need to scan for the new minimum; the increment is always exactly 1 because get never creates or deletes keys. This O(1) minFreq maintenance is what makes the whole design work without a priority queue.
Put Operation: Update or Insert with Eviction
The put operation branches on whether the key already exists. If the key is in key_to_node, update its value and call the same frequency-increment logic as get — remove from the current bucket, increment frequency, add to the new bucket, update minFreq if the old bucket is now empty. The key count does not change, so no eviction is needed.
If the key is new and the cache is at capacity, evict before inserting. Eviction removes the head's next node from the minFreq bucket — the least recently used node at the minimum frequency. Remove it from the bucket's doubly linked list, delete its entries from key_to_node and key_to_freq, and if the bucket is now empty, remove it from freq_to_list. You do not need to update minFreq here because the next step always sets minFreq to 1.
After eviction (or if capacity was not reached), insert the new key with frequency 1. Create a new node, add it to freq_to_list[1] at the tail, store it in key_to_node, and set key_to_freq[key] = 1. Finally, set minFreq = 1. New keys always start at frequency 1, so after any put of a new key, minFreq is always 1 — regardless of what minFreq was before.
1. If the key exists: update its value, call increment_freq(key), return (no eviction needed)
2. If the key is new and size == capacity: evict head.next of freq_to_list[minFreq]
3. Remove the evicted node from key_to_node and key_to_freq; remove the bucket from freq_to_list if it is empty
4. Create a new node with freq=1; add it to the tail of freq_to_list[1] (create the bucket if needed)
5. Store it in key_to_node and key_to_freq; set minFreq = 1
6. Increment the current size if a new key was added (not an update)
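The eviction step in the list above can be isolated as a small helper. This is a hypothetical free function over the three maps (the name `evict_lfu` and the OrderedDict buckets are our choices; `popitem(last=False)` pops the oldest, i.e. least recently used, entry):

```python
from collections import OrderedDict


def evict_lfu(freq_to_keys, key_to_val, key_to_freq, min_freq):
    """Evict and return the LRU key from the minimum-frequency bucket."""
    bucket = freq_to_keys[min_freq]
    victim, _ = bucket.popitem(last=False)   # oldest = least recently used
    if not bucket:                           # drop the bucket if it emptied
        del freq_to_keys[min_freq]
    del key_to_val[victim]
    del key_to_freq[victim]
    return victim
```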
Code Walkthrough — Python and Java
The Python implementation uses collections.defaultdict for freq_to_list (defaulting to a custom DLinkedList; a plain deque is not a substitute, because removing an arbitrary element from a deque is O(n)) and plain dicts for key_to_node and key_to_freq. The Node class holds key, val, and freq. The DLinkedList class has sentinel head and tail nodes, an add_to_tail method that appends before the tail, a remove method that unlinks a node in O(1), and a pop_head method that removes and returns the node after head. The LFUCache class initializes capacity, size=0, minFreq=0, and the three maps.
The Java implementation mirrors the Python one closely. Use HashMap&lt;Integer, Node&gt; for key_to_node, HashMap&lt;Integer, Integer&gt; for key_to_freq, and HashMap&lt;Integer, LinkedHashSet&lt;Integer&gt;&gt; or a custom DLinkedList for freq_to_list. LinkedHashSet preserves insertion order (LRU within a bucket) and supports O(1) add, O(1) remove, and O(1) retrieval of the first (oldest) element via its iterator for eviction. The increment_freq helper is extracted as a private method called by both get and put to avoid code duplication.
The increment_freq helper in both languages: (1) get current freq f from key_to_freq; (2) remove the node from freq_to_list[f]; (3) if freq_to_list[f] is empty and f == minFreq, increment minFreq; (4) increment the node's freq to f+1; (5) add node to tail of freq_to_list[f+1]; (6) update key_to_freq[key] = f+1. This helper encapsulates all frequency movement logic and is called in O(1).
Forgetting to Reset minFreq to 1 on New Insert Is the Most Common Bug
After inserting a new key with frequency 1, you must set minFreq = 1 unconditionally. A common mistake is skipping this reset when the cache is not at capacity, or trying to compute the new minFreq by comparing the old minFreq with 1. Since new keys always have frequency 1, and frequency 1 is always the minimum possible, the reset is always correct. Failing to reset minFreq causes the eviction logic to target the wrong bucket and silently corrupt the cache state.