Problem Walkthrough

Implement Queue Using Stacks: LeetCode 232 Solution Explained

Build a FIFO queue from two LIFO stacks — the ADT composition pattern that tests your data structure fundamentals.

7 min read


Implement Queue Using Stacks: Why LeetCode 232 Matters

Implement Queue Using Stacks (#232) is one of those deceptively simple problems that tests whether you truly understand the relationship between fundamental data structures. Can you build one abstract data type from another? That is what interviewers want to know.

At its core, this problem asks you to implement a FIFO (first-in, first-out) queue using only LIFO (last-in, first-out) stacks. It sounds like a contradiction — stacks and queues have opposite ordering — but with a clever use of two stacks, you can make it work with amortized O(1) time per operation.

In this walkthrough, you will understand the problem constraints, explore two distinct approaches (costly push vs. costly pop), master the amortized analysis that makes the optimal solution shine, and learn the edge cases interviewers look for.

Understanding the Implement Queue Using Stacks Problem

LeetCode 232 asks you to implement a MyQueue class with four operations using only two stacks: push(x) adds element x to the back of the queue, pop() removes the element from the front, peek() returns the front element without removing it, and empty() returns whether the queue is empty.

The key constraint is that you may only use standard stack operations — push to top, peek/pop from top, size, and is empty. You cannot index into the middle of a stack or iterate over it. This forces you to think about how reversing the order of elements can convert LIFO behavior into FIFO behavior.

For example, if you push 1, 2, 3 onto a stack, popping gives you 3, 2, 1 — the reverse of the insertion order. But if you pop all three into a second stack, popping from that second stack gives you 1, 2, 3 — exactly the FIFO order you need.
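The double-reversal trick above can be demonstrated in a few lines of JavaScript, using plain arrays as stacks (push/pop on the end of an array):

```javascript
// Pushing onto one stack, then pouring it into a second, restores insertion order.
const stack1 = [];
[1, 2, 3].forEach((x) => stack1.push(x)); // stack1: [1, 2, 3], 3 on top

const stack2 = [];
while (stack1.length > 0) stack2.push(stack1.pop()); // stack2: [3, 2, 1], 1 on top

const fifoOrder = [];
while (stack2.length > 0) fifoOrder.push(stack2.pop());
console.log(fifoOrder); // [1, 2, 3], the original insertion (FIFO) order
```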

ℹ️

Interview Favorite

Implement Queue Using Stacks (#232) is a favorite at Amazon and Bloomberg — it tests whether you understand the relationship between LIFO and FIFO data structures.

Approach 1: Costly Push — Transfer and Insert

The first approach makes the push operation expensive. Every time you push a new element, you transfer all existing elements from stack1 to stack2, push the new element onto stack1, and then transfer everything back from stack2 to stack1. This ensures that stack1 always holds elements in queue order, with the front of the queue on top.

With this approach, pop and peek are both O(1) because the front element is always sitting on top of stack1. However, every push costs O(n) because you must move all n existing elements twice — once to stack2 and once back to stack1.

This approach works and is easy to understand, but it is not optimal. When the interviewer asks for the time complexity of push, you will have to say O(n). Most interviewers will then ask: can you do better?

  • Push: O(n) — transfer all elements to stack2, push new element, transfer back
  • Pop: O(1) — top of stack1 is always the front of the queue
  • Peek: O(1) — same as pop but without removal
  • Space: O(n) — two stacks hold all elements
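The costly-push approach can be sketched in JavaScript, with plain arrays standing in for stacks (the class name MyQueueCostlyPush is mine, not LeetCode's):

```javascript
// Costly push: stack1 always holds elements in queue order, front on top.
class MyQueueCostlyPush {
  constructor() {
    this.stack1 = []; // front of the queue is on top
    this.stack2 = []; // scratch space used only during push
  }

  push(x) {
    // O(n): empty stack1 into stack2, put x at the bottom, then restore.
    while (this.stack1.length > 0) this.stack2.push(this.stack1.pop());
    this.stack1.push(x);
    while (this.stack2.length > 0) this.stack1.push(this.stack2.pop());
  }

  pop() {
    return this.stack1.pop(); // O(1): front is already on top
  }

  peek() {
    return this.stack1[this.stack1.length - 1]; // O(1)
  }

  empty() {
    return this.stack1.length === 0;
  }
}
```

For example, after push(1), push(2), push(3), calling peek() returns 1 and pop() returns 1, 2, 3 in FIFO order.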

Approach 2: Lazy Transfer — The Optimal Two-Stack Queue

The optimal approach uses an inStack and an outStack with a lazy transfer strategy. When you push, you always push onto inStack — this is O(1) every time. When you pop or peek, you check outStack first. If outStack has elements, you pop from it directly. If outStack is empty, you transfer all elements from inStack to outStack, which reverses their order and gives you FIFO access.

The beauty of this approach is that each element is transferred from inStack to outStack at most once during its lifetime in the queue. Once an element lands in outStack, it stays there until it is popped. This means the transfer cost is spread across all the operations that element participates in.

Here is how it works in practice: suppose you push 1, 2, 3 onto inStack, which now holds [1, 2, 3] with 3 on top. When you call pop, outStack is empty, so you transfer everything — outStack becomes [3, 2, 1] with 1 on top. Popping gives you 1, the correct FIFO result. If you push 4 next, it goes to inStack. Popping again takes 2 from outStack directly — no transfer needed.

For peek, the logic is identical to pop except you do not remove the element. For empty, you simply check whether both inStack and outStack are empty.

  1. push(x): Always push x onto inStack. O(1).
  2. pop(): If outStack is empty, transfer all elements from inStack to outStack. Pop from outStack. Amortized O(1).
  3. peek(): If outStack is empty, transfer all elements from inStack to outStack. Return top of outStack without popping. Amortized O(1).
  4. empty(): Return true if both inStack and outStack are empty. O(1).
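Putting the four operations together, here is a JavaScript sketch of the lazy-transfer queue (arrays stand in for stacks; the _transfer helper name is my own):

```javascript
// Optimal two-stack queue with lazy transfer: amortized O(1) per operation.
class MyQueue {
  constructor() {
    this.inStack = [];  // receives every push
    this.outStack = []; // serves pops and peeks in FIFO order
  }

  push(x) {
    this.inStack.push(x); // O(1), always
  }

  // Pour inStack into outStack, but only when outStack is empty.
  _transfer() {
    if (this.outStack.length === 0) {
      while (this.inStack.length > 0) this.outStack.push(this.inStack.pop());
    }
  }

  pop() {
    this._transfer();
    return this.outStack.pop(); // amortized O(1)
  }

  peek() {
    this._transfer();
    return this.outStack[this.outStack.length - 1]; // amortized O(1), no removal
  }

  empty() {
    return this.inStack.length === 0 && this.outStack.length === 0;
  }
}
```

Tracing the walkthrough above: push 1, 2, 3, then pop() triggers one transfer and returns 1; push 4 goes to inStack, and the next pop() returns 2 straight from outStack with no transfer.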
💡

Pro Tip

The optimal approach: push always goes to inStack. Pop/peek checks outStack first — if empty, dump all of inStack into outStack (reversing the order). This lazy transfer gives amortized O(1).

Why the Optimal Approach Is Amortized O(1)

Amortized analysis looks at the average cost of an operation over a sequence of operations, rather than the worst case of a single operation. For the two-stack queue, the worst-case pop is O(n) when outStack is empty and you must transfer n elements. But that worst case cannot happen on every pop.

Here is the key insight: every element enters inStack exactly once (during push) and leaves inStack exactly once (during a transfer to outStack). Similarly, every element enters outStack exactly once and leaves exactly once (during pop). That means each element participates in exactly four stack operations total across its entire lifecycle — two pushes and two pops.

If you perform N push operations and N pop operations, the total number of individual stack operations is at most 4N — each element is pushed to inStack, popped from inStack, pushed to outStack, and popped from outStack. Dividing 4N total operations by 2N queue operations gives you O(1) amortized cost per queue operation.

This is a powerful result. Even though any single pop might cost O(n), the expensive pops are rare enough that the average cost stays constant. The same amortized O(1) argument applies to peek, since it uses the same transfer logic.
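To make the 4N bound concrete, here is an instrumented sketch that counts every raw stack push and pop over a workload of N queue pushes followed by N queue pops (the rawOps counter and the CountingQueue name are illustrative, not part of the problem):

```javascript
// Count the raw stack operations a lazy-transfer queue actually performs.
let rawOps = 0;

class CountingQueue {
  constructor() {
    this.inStack = [];
    this.outStack = [];
  }
  push(x) {
    this.inStack.push(x);
    rawOps += 1; // one raw push onto inStack
  }
  pop() {
    if (this.outStack.length === 0) {
      while (this.inStack.length > 0) {
        this.outStack.push(this.inStack.pop());
        rawOps += 2; // one pop from inStack plus one push onto outStack
      }
    }
    rawOps += 1; // one raw pop from outStack
    return this.outStack.pop();
  }
}

const N = 1000;
const q = new CountingQueue();
for (let i = 0; i < N; i++) q.push(i); // N raw operations
for (let i = 0; i < N; i++) q.pop();   // one transfer (2N) plus N raw pops
console.log(rawOps, "raw ops for", 2 * N, "queue ops"); // 4N total
```

Every element is touched exactly four times here, so rawOps lands at exactly 4N: constant work per queue operation on average.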

Edge Cases and Common Mistakes

Interviewers love to probe edge cases on this problem. The most common one is calling pop or peek on an empty queue — you should clarify whether the problem guarantees valid inputs (LeetCode 232 does, but real interviews may not). If not guaranteed, add a check and throw an error or return a sentinel value.

Another subtle case is alternating push and pop operations. Some candidates mistakenly transfer from inStack to outStack on every pop, even when outStack still has elements. This destroys the amortized guarantee and makes every pop O(n). The correct approach is to only transfer when outStack is empty.

Finally, be careful with peek. The peek operation should not modify outStack — it should return the top element without removing it. Some candidates accidentally pop during peek and forget to push the element back, which corrupts the queue state.

  • Pop/peek on empty queue: clarify constraints or add defensive checks
  • Alternating push/pop: only transfer when outStack is empty, never on every operation
  • Peek must not remove: return top of outStack without modifying it
  • Single element: push then immediate pop should return that element
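If valid inputs are not guaranteed, a defensive variant can guard pop and peek before touching the stacks (the SafeQueue name and error messages here are illustrative):

```javascript
// Two-stack queue that throws on pop/peek of an empty queue.
class SafeQueue {
  constructor() {
    this.inStack = [];
    this.outStack = [];
  }
  push(x) {
    this.inStack.push(x);
  }
  _transfer() {
    if (this.outStack.length === 0) {
      while (this.inStack.length > 0) this.outStack.push(this.inStack.pop());
    }
  }
  pop() {
    if (this.empty()) throw new Error("pop from empty queue");
    this._transfer();
    return this.outStack.pop();
  }
  peek() {
    if (this.empty()) throw new Error("peek on empty queue");
    this._transfer();
    // Read the top without removing it, so queue state is untouched.
    return this.outStack[this.outStack.length - 1];
  }
  empty() {
    return this.inStack.length === 0 && this.outStack.length === 0;
  }
}
```

This also handles the single-element case cleanly: push then immediate pop returns that element, and a second pop throws instead of silently returning undefined.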
⚠️

Common Mistake

Don't transfer from inStack to outStack on every operation — only transfer when outStack is empty. Transferring every time makes it O(n) per pop instead of amortized O(1).

What Implement Queue Using Stacks Teaches You

LeetCode 232 is fundamentally about ADT composition — building one data structure from another by exploiting their properties. The insight that two stacks can simulate a queue by reversing element order is elegant and reusable. The same idea appears in Implement Stack Using Queues (#225), where you flip the composition around.

This problem also introduces amortized analysis, a technique that comes up repeatedly in data structures like dynamic arrays, splay trees, and union-find. Understanding amortized O(1) here gives you the vocabulary to explain similar guarantees in more complex data structures during interviews.

If you want to drill this pattern and related stack and queue problems, YeetCode flashcards help you build recognition through spaced repetition. Problems like Min Stack (#155), Valid Parentheses (#20), and Daily Temperatures (#739) all test different facets of stack-based thinking. The more patterns you internalize, the faster you recognize which tool to reach for in a live interview.

Ready to master algorithm patterns?

YeetCode flashcards help you build pattern recognition through active recall and spaced repetition.

Start practicing now