Design Hit Counter: The Bridge Between Algorithms and System Design
Design Hit Counter (#362) is one of those rare interview problems that sits squarely between data structures and system design. It appears on question lists at Amazon, Uber, and DoorDash, and for good reason — it starts simple but opens the door to follow-up questions about concurrency, distributed systems, and rate limiting.
The problem asks you to design a system that counts hits received in the past 5 minutes (300 seconds). You need two methods: hit(timestamp) to record a hit, and getHits(timestamp) to return the total number of hits in the last 300 seconds. Timestamps are given in increasing order.
What makes the design hit counter problem compelling is that the naive solution is obvious, but the optimal solution requires a clever insight about circular buffers. Interviewers use it to test whether you can optimize beyond the first thing that works.
Understanding the Problem Constraints
Before jumping into solutions, you need to fully understand the constraints. The hit counter tracks hits over a rolling 300-second window. Each call to hit(timestamp) records one hit at the given second. Each call to getHits(timestamp) returns the count of all hits that occurred in the range (timestamp - 300, timestamp].
The key constraints are: timestamps are in non-decreasing order (you will never receive a hit from the past), and multiple hits can arrive at the same timestamp. This ordering guarantee is important — it means you do not need to handle out-of-order events, which simplifies both approaches.
The problem is classified as medium difficulty, but it is one of the most frequently asked design questions at top companies. Understanding both solutions gives you a framework for tackling any time-based windowing problem in interviews.
- hit(timestamp): Records a single hit at the given second
- getHits(timestamp): Returns total hits in the window (timestamp - 300, timestamp]
- Timestamps arrive in non-decreasing order — no out-of-order events
- Multiple hits can share the same timestamp
- The window is exactly 300 seconds (5 minutes)
Interview Favorite
Design Hit Counter (#362) is a favorite at Amazon, Uber, and DoorDash — it's the simplest design problem that naturally leads to system design follow-ups about concurrency and distributed counting.
Approach 1: Queue-Based Solution
The most intuitive LeetCode 362 solution uses a queue (or deque). Every time hit(timestamp) is called, you push the timestamp onto the back of the queue. When getHits(timestamp) is called, you first dequeue all entries from the front that are older than 300 seconds, then return the size of the queue.
This approach is correct and easy to implement. The hit operation is O(1) — just an append. The getHits operation is O(n) in the worst case because you may need to remove many expired entries. However, each element is enqueued and dequeued at most once, so the amortized cost across all operations is efficient.
The downside is space. If your system receives thousands of hits per second, the queue can grow very large. Each timestamp is stored individually, even when multiple hits arrive at the same second. For a problem walkthrough in an interview, this is a great starting point, but you should be ready to optimize.
- hit(timestamp): Push timestamp to back of queue — O(1)
- getHits(timestamp): Dequeue expired entries, return queue size — O(n) worst case
- Space: O(n) where n is total hits in the current window
- Amortized O(1) per operation since each element is processed once
- Simple to implement but not space-efficient for high-throughput systems
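The queue approach can be sketched in Python as follows, using the standard LeetCode method signatures (class name and methods match the problem; everything else is a plain deque):

```python
from collections import deque

class HitCounter:
    """Queue-based hit counter: every hit's timestamp is stored individually."""

    WINDOW = 300  # window size in seconds

    def __init__(self):
        self.hits = deque()

    def hit(self, timestamp: int) -> None:
        # O(1): append the new hit to the back of the queue
        self.hits.append(timestamp)

    def getHits(self, timestamp: int) -> int:
        # Evict everything at or before timestamp - 300, i.e. outside
        # the window (timestamp - 300, timestamp]
        while self.hits and self.hits[0] <= timestamp - self.WINDOW:
            self.hits.popleft()
        return len(self.hits)
```

Note that eviction happens lazily inside getHits: each timestamp is appended once and popped at most once, which is where the amortized O(1) per operation comes from.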
Approach 2: Circular Buffer — The Optimal Solution
The circular buffer approach is the optimal hit counter design and the answer interviewers want to see. Instead of storing every individual timestamp, you use two fixed-size arrays of length 300: times[] and counts[]. The index for any timestamp is simply timestamp % 300.
When hit(timestamp) is called, you compute the index as timestamp % 300. If times[index] equals the current timestamp, you increment counts[index]. If it does not (meaning a new 300-second cycle has rolled around to this slot), you reset times[index] to the current timestamp and set counts[index] to 1.
For getHits(timestamp), you iterate through all 300 slots and sum up counts[i] for every slot where timestamp - times[i] < 300. This ensures you only count hits from the current window. The entire operation scans a fixed 300 elements, making it O(300), which is effectively O(1).
This sliding window counter pattern gives you O(1) time for hit and O(1) time for getHits, with O(300) = O(1) space regardless of how many hits your system receives. It is the textbook example of trading a small constant-space overhead for dramatically better scalability.
1. Initialize two arrays: times[300] and counts[300], both filled with zeros
2. On hit(timestamp): compute index = timestamp % 300. If times[index] == timestamp, increment counts[index]. Otherwise, set times[index] = timestamp and counts[index] = 1
3. On getHits(timestamp): loop i from 0 to 299, summing counts[i] where timestamp - times[i] < 300
4. Return the sum — this is your hit count for the past 300 seconds
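The steps above translate directly to Python (same LeetCode class and method names; the rest is a minimal sketch of the two-array layout):

```python
class HitCounter:
    """Circular-buffer hit counter: O(1) space regardless of traffic."""

    WINDOW = 300

    def __init__(self):
        self.times = [0] * self.WINDOW   # last timestamp seen in each slot
        self.counts = [0] * self.WINDOW  # hits recorded for that timestamp

    def hit(self, timestamp: int) -> None:
        idx = timestamp % self.WINDOW
        if self.times[idx] == timestamp:
            self.counts[idx] += 1        # same second: accumulate
        else:
            self.times[idx] = timestamp  # slot held an older cycle: reclaim it
            self.counts[idx] = 1

    def getHits(self, timestamp: int) -> int:
        # Fixed scan of 300 slots; only slots inside the window contribute
        return sum(c for t, c in zip(self.times, self.counts)
                   if timestamp - t < self.WINDOW)
```

Because slots are only overwritten on the next hit that maps to them, stale entries cost nothing: the strict `timestamp - t < 300` check in getHits already filters them out.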
Key Insight
The circular buffer approach uses times[timestamp % 300] to map any timestamp to a fixed-size array — this gives O(1) hit and O(1) getHits, making it the optimal solution for interviews.
Concurrency: The System Design Follow-Up
Once you present the circular buffer solution, most interviewers pivot to the real test: what happens when hit() is called concurrently from multiple threads? This concurrency follow-up separates mid-level candidates from senior engineers.
The circular buffer has a race condition. Two threads could read the same times[index] simultaneously, both see a stale value, and both reset counts[index] to 1 — losing one hit. In a single-machine scenario, you can solve this with a lock per bucket (fine-grained locking) or by using atomic compare-and-swap operations on each slot.
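One way to sketch the fine-grained locking fix in Python (class name `ConcurrentHitCounter` and the one-lock-per-slot layout are illustrative choices, not part of the problem statement):

```python
import threading

class ConcurrentHitCounter:
    """Circular buffer guarded by one lock per slot, so hits to different
    seconds never contend with each other."""

    WINDOW = 300

    def __init__(self):
        self.times = [0] * self.WINDOW
        self.counts = [0] * self.WINDOW
        self.locks = [threading.Lock() for _ in range(self.WINDOW)]

    def hit(self, timestamp: int) -> None:
        idx = timestamp % self.WINDOW
        with self.locks[idx]:  # serialize the read-check-write on this slot
            if self.times[idx] == timestamp:
                self.counts[idx] += 1
            else:
                self.times[idx] = timestamp
                self.counts[idx] = 1

    def getHits(self, timestamp: int) -> int:
        total = 0
        for idx in range(self.WINDOW):
            with self.locks[idx]:
                if timestamp - self.times[idx] < self.WINDOW:
                    total += self.counts[idx]
        return total
```

The lock makes the check-then-update on each slot atomic, closing the race where two threads both see a stale times[index] and both reset the count to 1.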
For distributed systems, the conversation shifts to tools like Redis. The INCR command in Redis is atomic, so you can use keys like hits:{timestamp} with a TTL of 300 seconds. To get the total count, you sum across the relevant keys. This rate counter implementation is exactly how production rate limiters work at companies like Uber and DoorDash.
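The per-second-key pattern looks like this in outline (a plain dict stands in for Redis here so the sketch is self-contained; in production, the increment would be Redis INCR on the key, with EXPIRE 300 set on first write so stale keys evict themselves):

```python
class PerSecondKeyCounter:
    """Sketch of the distributed pattern: one counter key per second.
    The dict plays the role of Redis; key names like 'hits:{ts}' are
    illustrative."""

    WINDOW = 300

    def __init__(self):
        self.store = {}  # maps "hits:{ts}" -> count

    def hit(self, timestamp: int) -> None:
        key = f"hits:{timestamp}"
        # In Redis this is a single atomic INCR, safe under concurrency
        self.store[key] = self.store.get(key, 0) + 1

    def getHits(self, timestamp: int) -> int:
        # Sum the 300 per-second keys inside (timestamp - 300, timestamp]
        return sum(self.store.get(f"hits:{t}", 0)
                   for t in range(timestamp - self.WINDOW + 1, timestamp + 1))
```

The design choice worth calling out: because each second gets its own key, concurrent writers never touch the same counter unless they hit in the same second, and even then the atomic increment resolves the race for you.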
Mentioning these options — fine-grained locks, atomics, or Redis INCR — demonstrates that you think beyond single-threaded textbook solutions. It is the kind of depth that turns a "hire" into a "strong hire" at the senior level.
Edge Cases and Common Pitfalls
Several edge cases trip up candidates in the design hit counter interview. The first is multiple hits at the same timestamp. Both approaches handle this correctly — the queue stores duplicates, and the circular buffer increments the count — but you should explicitly mention it to show thoroughness.
The second edge case is large time gaps. If no hits arrive for 600 seconds and then a burst comes in, the circular buffer handles this gracefully: stale slots get overwritten on the next hit. The queue handles it too, since expired entries are removed on the next getHits call.
A subtle pitfall is the boundary condition: is a hit exactly 300 seconds ago included or excluded? The problem defines the window as (timestamp - 300, timestamp], meaning a hit at time t is NOT counted at time t + 300. Make sure your comparison uses strict less-than (timestamp - times[i] < 300) rather than less-than-or-equal.
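The boundary arithmetic is easy to verify in isolation (a standalone helper, not part of either counter class):

```python
WINDOW = 300

def in_window(hit_time: int, now: int) -> bool:
    # (now - 300, now]: strict less-than excludes a hit exactly 300s old
    return now - hit_time < WINDOW

assert in_window(1, 300)      # 299 seconds old: still counted
assert not in_window(1, 301)  # exactly 300 seconds old: expired
```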
- Multiple hits at the same timestamp: both approaches handle this correctly
- Large time gaps: stale data is naturally overwritten or dequeued
- Boundary at exactly 300 seconds: use strict less-than, not less-than-or-equal
- Timestamps always increase: no need to handle out-of-order arrivals
- Integer overflow: not a concern for 300-second windows in practice
Senior-Level Follow-Up
The concurrency follow-up is the real test for senior candidates — mentioning atomic operations or distributed counting (Redis INCR) shows you think beyond single-threaded solutions.
What Design Hit Counter Teaches You
Design Hit Counter (#362) is more than a single problem — it is a gateway to an entire family of time-based windowing patterns. The circular buffer technique you learn here applies directly to rate limiters, sliding window logs, and request throttling systems. If an interviewer asks you to design a rate limiter, the core data structure is the same.
The problem also teaches the progression interviewers expect: start with a correct brute-force approach (the queue), then optimize for time and space (the circular buffer), then discuss real-world concerns (concurrency and distribution). This pattern of iterative improvement is the hallmark of a strong interview performance.
Practice this problem until you can explain both approaches fluently, analyze their complexity, and discuss the concurrency follow-up without hesitation. YeetCode flashcards can help you drill the circular buffer pattern and the key insight — timestamp % 300 — so it becomes second nature before your next interview.