
DP Space Optimization: 3 Techniques for O(1) Space

The follow-up question "can you optimize the space?" appears in every DP interview. Here are the 3 techniques that reduce O(n) or O(n^2) space to O(1) or O(n) — and how to apply them systematically.


"Can you optimize the space?" — the DP follow-up you will always get

3 techniques that reduce O(n) space to O(1) in dynamic programming

Can You Reduce the Space Complexity?

You have just solved a dynamic programming problem in an interview. Your solution is correct, your time complexity is optimal, and you are feeling confident. Then the interviewer leans forward and asks: "Can you optimize the space?" This follow-up appears in virtually every DP interview round, and your answer determines whether you get a "strong hire" or a "hire."

The good news is that DP space optimization follows a small set of repeatable techniques. You do not need to reinvent the wheel for each problem. Three patterns — two-variable compression, rolling arrays, and row compression — cover the vast majority of space-optimized dynamic programming problems you will encounter in interviews.

Once you internalize these three techniques, the answer to "can you reduce the space?" is always "yes." This guide walks through each technique with concrete LeetCode examples, shows you exactly when each one applies, and highlights the common bugs that trip people up after compression.

Technique 1: Two Variables (The Fibonacci Pattern)

The simplest DP space optimization occurs when dp[i] depends only on dp[i-1] and dp[i-2]. In this case, you do not need an array at all — two variables are sufficient. This is the Fibonacci pattern, and it appears far more often than you might expect.

Consider Climbing Stairs (LeetCode #70). The standard DP solution builds an array where dp[i] = dp[i-1] + dp[i-2]. But since each value only looks back two steps, you can replace the entire array with two variables: prev and prevPrev. At each step, you compute the current value from these two variables, then shift them forward. The space drops from O(n) to O(1).
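As a sketch in Python (function and variable names are my own), the compressed Climbing Stairs solution looks like this:

```python
def climb_stairs(n: int) -> int:
    # dp[i] = dp[i-1] + dp[i-2]; keep only the last two values.
    prev_prev, prev = 1, 1  # ways to reach step 0 and step 1
    for _ in range(2, n + 1):
        # Compute the current value, then shift the window forward.
        prev_prev, prev = prev, prev + prev_prev
    return prev
```

The tuple assignment performs the "shift" in one step, which avoids the classic bug of overwriting prev before prev_prev has read it.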

House Robber (LeetCode #198) follows the same pattern. The recurrence dp[i] = max(dp[i-1], dp[i-2] + nums[i]) only references the two most recent values. Replace the array with two variables, update them in a single pass, and you have an O(1) space solution that interviewers love to see.
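The same pattern for House Robber might be sketched as follows (assuming the usual LeetCode signature, with names of my own choosing):

```python
def rob(nums: list[int]) -> int:
    # prev = best loot up to the previous house,
    # prev_prev = best loot up to two houses back.
    prev_prev, prev = 0, 0
    for num in nums:
        # dp[i] = max(dp[i-1], dp[i-2] + nums[i]), array compressed away.
        prev_prev, prev = prev, max(prev, prev_prev + num)
    return prev
```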

The key insight is recognizing when a recurrence has a fixed lookback window of size two. Whenever you see dp[i] defined purely in terms of dp[i-1] and dp[i-2], the two-variable technique applies immediately. This pattern is the easiest optimization to spot and implement.

  • Applies when: dp[i] depends only on dp[i-1] and dp[i-2]
  • Space reduction: O(n) to O(1)
  • Classic problems: Climbing Stairs (#70), House Robber (#198), Fibonacci Number (#509)
  • Implementation: Replace array with two variables (prev, prevPrev), shift after each step
  • Time complexity: Unchanged — still O(n)

Technique 2: Rolling Array (Fixed Window Compression)

Sometimes dp[i] depends on more than just the previous two values, but the lookback window is still fixed. When dp[i] references dp[i-1], dp[i-2], and dp[i-3] — or any fixed number of prior states — you can use a rolling array with modular indexing. This rolling-array technique generalizes the two-variable approach to arbitrary window sizes.

The idea is simple: instead of storing all n values, maintain a circular buffer of size k (where k is the lookback window). Use index % k to map positions into the buffer. Writing to slot i % k overwrites the value from position i - k (both map to the same slot), which the recurrence no longer needs, so the overwrite is safe.
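One way to sketch modular indexing in Python, using the Tribonacci recurrence (#1137) as a stand-in for a window of size three (function name mine):

```python
def tribonacci(n: int) -> int:
    # Fixed lookback window of k = 3, so a buffer of size 3 suffices.
    k = 3
    buf = [0, 1, 1]  # T(0), T(1), T(2)
    if n < k:
        return buf[n]
    for i in range(k, n + 1):
        # buf[i % k] currently holds T(i - k), which is no longer needed.
        buf[i % k] = buf[(i - 1) % k] + buf[(i - 2) % k] + buf[(i - 3) % k]
    return buf[n % k]
```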

Paint House (LeetCode #256) is a clean example. You need to track the minimum cost for three color choices across n houses, where each house depends only on the previous house. Instead of an n-by-3 table, keep a rolling buffer of size 2 — the current row and the previous row. This reduces space from O(n) to O(1) while preserving the O(n) time complexity.
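A minimal Python sketch of this idea (function name mine), keeping only the previous row of three running costs:

```python
def min_cost_paint_house(costs: list[list[int]]) -> int:
    # costs[i][j] = cost of painting house i with color j (3 colors).
    # Only the previous row of the n-by-3 table is ever needed.
    prev = [0, 0, 0]
    for row in costs:
        # Each color must differ from the previous house's color.
        prev = [
            row[0] + min(prev[1], prev[2]),
            row[1] + min(prev[0], prev[2]),
            row[2] + min(prev[0], prev[1]),
        ]
    return min(prev)
```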

Coin Change (LeetCode #322) is a useful contrast: its recurrence looks back as far as the largest coin value, so the standard 1D array over amounts is already about as compact as the structure allows. The broader principle is that any time your recurrence looks back a fixed number of steps, you can cap your memory at that fixed window size rather than storing the full history.

  • Applies when: dp[i] depends on dp[i-1] through dp[i-k] for fixed k
  • Space reduction: O(n) to O(k) where k is the window size
  • Implementation: Use modular indexing — dp[i % k] instead of dp[i]
  • Classic problems: Paint House (#256), Decode Ways (#91), Tribonacci-style recurrences (#1137)
  • Advantage: Generalizes two-variable technique to any fixed lookback window
💡

Pro Tip

Always solve the full dp[] array version first, THEN optimize space — this two-step approach impresses interviewers because it shows you understand the optimization rather than having just memorized the trick.

Technique 3: Row Compression (2D to 1D)

Row compression is the most commonly tested DP space optimization in interviews, and it is the technique that separates strong candidates from average ones. When a 2D DP table has the property that each row depends only on the previous row, you can discard all rows except the current and previous — or sometimes just a single row.

Unique Paths (LeetCode #62) is the textbook example. The standard solution fills an m-by-n grid where dp[i][j] = dp[i-1][j] + dp[i][j-1]. Since row i only reads from row i-1, you can keep just one row of length n and update it in place from left to right. The space drops from O(m*n) to O(n).
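A possible Python sketch of the single-row update (helper name mine):

```python
def unique_paths(m: int, n: int) -> int:
    # One row of length n replaces the full m-by-n table.
    row = [1] * n  # first row: only one way to reach each cell
    for _ in range(1, m):
        for j in range(1, n):
            # Before the update, row[j] still holds dp[i-1][j];
            # row[j-1] has already been updated to dp[i][j-1].
            row[j] += row[j - 1]
    return row[-1]
```

Updating left to right matters here: row[j-1] must already hold the current row's value by the time row[j] is computed.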

Minimum Path Sum (LeetCode #64) works identically. Each cell depends on the cell above (same column, previous row) and the cell to the left (same row, already computed). A single 1D array, updated left to right, captures both dependencies perfectly.

Edit Distance (LeetCode #72) is the most impressive application of this 2D-to-1D optimization. The standard solution uses a full 2D table with O(m*n) space. With row compression, you reduce to O(n) space by keeping only the current row and one extra variable to store the diagonal value (the "previous previous" that gets overwritten). Demonstrating this optimization in an interview is a strong signal.
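One way this might look in Python, with the extra diagonal variable called out explicitly (names mine):

```python
def edit_distance(word1: str, word2: str) -> int:
    m, n = len(word1), len(word2)
    # One row of length n+1; `diag` carries the overwritten dp[i-1][j-1].
    row = list(range(n + 1))  # dp[0][j] = j (insert j characters)
    for i in range(1, m + 1):
        diag = row[0]   # dp[i-1][0]
        row[0] = i      # dp[i][0] = i (delete i characters)
        for j in range(1, n + 1):
            tmp = row[j]  # save dp[i-1][j] before overwriting it
            if word1[i - 1] == word2[j - 1]:
                row[j] = diag  # characters match: no extra edit
            else:
                # 1 + min(replace, delete, insert)
                row[j] = 1 + min(diag, row[j], row[j - 1])
            diag = tmp  # tmp becomes the diagonal for column j+1
    return row[n]
```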

The general rule: if your 2D recurrence only references dp[i-1][...] and dp[i][...] (the previous row and the current row), row compression applies. You iterate through rows, and for each row, update a 1D array that represents the current state.

  • Applies when: dp[i][j] depends only on row i-1 and row i (not earlier rows)
  • Space reduction: O(m*n) to O(n) — keep only the current row
  • Classic problems: Unique Paths (#62), Minimum Path Sum (#64), Edit Distance (#72), Longest Common Subsequence (#1143)
  • Implementation: Replace 2D array with 1D array, update in correct iteration order
  • Extra variable: For diagonal dependencies (like Edit Distance), store the overwritten value before updating

When DP Space Optimization Works — and When It Does Not

Space-optimized dynamic programming is not universally applicable. It works when the recurrence has a fixed lookback structure — meaning dp[i] depends on a bounded number of prior states, not on arbitrary earlier values. If the recurrence only looks back one step, two steps, or one row, you can optimize. If it needs random access to the entire table, you cannot.

Longest Increasing Subsequence (LeetCode #300) is a classic example where naive space optimization does not work. The recurrence dp[i] = max(dp[j] + 1) for all j < i with nums[j] < nums[i] requires access to every previous dp value, not just a fixed window. You need the full O(n) array. The O(n log n) patience sorting approach is the real optimization here — it changes the algorithm, not just the space usage.
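For contrast, the O(n log n) patience-sorting idea mentioned above can be sketched with Python's bisect module (function name mine); note that this changes the algorithm itself, not just the storage:

```python
import bisect

def length_of_lis(nums: list[int]) -> int:
    # tails[k] = smallest possible tail of an increasing
    # subsequence of length k + 1.
    tails: list[int] = []
    for num in nums:
        pos = bisect.bisect_left(tails, num)  # first tail >= num
        if pos == len(tails):
            tails.append(num)   # num extends the longest subsequence
        else:
            tails[pos] = num    # num gives a smaller tail for that length
    return len(tails)
```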

Interval DP and tree DP problems also typically resist space compression. When dp[i][j] can depend on dp[i][k] and dp[k+1][j] for any k between i and j, there is no row-by-row structure to exploit. You need the full table.

The mental model is straightforward: draw the dependency arrows in your DP table. If all arrows point from the immediately previous row or the immediately previous few cells, space optimization works. If arrows point to scattered earlier entries, it does not. This dependency check takes seconds and immediately tells you whether to attempt compression.

ℹ️

Key Insight

Row compression (2D to 1D) is the most commonly tested space optimization — it applies to Edit Distance (#72), Longest Common Subsequence (#1143), and every grid DP problem.

Common Mistakes After Space Compression

The most frequent bug when applying DP space optimization is overwriting a value that you still need. In the 2D-to-1D compression of Edit Distance, for example, updating dp[j] destroys the old value of dp[i-1][j] — but you need that old value as the diagonal dependency when computing dp[i][j+1]. The fix is to store the old value in a temporary variable before overwriting.

Iteration direction is the second major pitfall. In the 0/1 Knapsack problem, the standard 2D solution iterates left to right across capacities. After compressing to 1D, you must iterate right to left — otherwise, each item can be "used" multiple times because you read a value already updated during the current item's pass. Getting this backwards produces correct-looking code that gives wrong answers on subtle test cases.
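A sketch of the compressed 0/1 Knapsack in Python (names and signature are my own), showing the reversed inner loop:

```python
def knapsack(weights: list[int], values: list[int], capacity: int) -> int:
    # dp[c] = best value achievable with capacity c.
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacity right to left so dp[c - w] still holds the
        # previous item's value; left to right would let this item be
        # counted more than once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

Flipping the inner loop to left-to-right would turn this into the unbounded knapsack, where reusing an item is intentional — which is exactly why the direction bug is so easy to miss.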

Base case errors are the third common trap. When you replace a dp array with two variables, the initial values of those variables must match what dp[0] and dp[1] would have been. It is easy to forget this step and initialize both variables to zero when one should be one, or vice versa. Always derive your variable initializations from the original base cases.

The best defense against all three mistakes is to write the full-array solution first, verify it works, and then mechanically transform it into the space-optimized version. Do not try to write the optimized version from scratch — the risk of subtle bugs is too high, especially under interview pressure.

  • Overwriting needed values: Store old values in a temp variable before updating (especially diagonal dependencies)
  • Wrong iteration direction: After 2D-to-1D compression of Knapsack, iterate in reverse to avoid reusing items
  • Incorrect base cases: Derive variable initializations from the original dp[0] and dp[1] values
  • Debugging tip: Print both the full-array and compressed versions side by side to spot discrepancies

Practice Strategy: The Two-Step Approach That Impresses Interviewers

The winning strategy in a DP interview is not to jump straight to the space-optimized solution. Instead, use a deliberate two-step approach: first solve with the full dp array, then optimize the space. This demonstrates to the interviewer that you understand the optimization rather than having memorized a trick.

Step one: write the standard DP solution with the complete array or table. Get it correct, explain the recurrence, and state the time and space complexity. This gives you a working baseline and shows solid DP fundamentals.

Step two: identify the dependency structure. Ask yourself — does dp[i] only depend on dp[i-1] and dp[i-2]? Use two variables. Does row i only depend on row i-1? Compress to 1D. State the optimization clearly: "Since each row only depends on the previous row, I can reduce space from O(m*n) to O(n) by keeping a single row." Then implement the change.

This two-step approach is how experienced engineers actually think about space optimization. It is also how interviewers expect you to present it. Jumping directly to an O(1) space solution without explaining the full version raises suspicion that you memorized the answer rather than understanding the underlying principle.

Practice this flow on YeetCode by reviewing the pattern cards for Climbing Stairs, House Robber, Unique Paths, and Edit Distance. For each problem, practice explaining the full solution first, then articulating exactly which technique reduces the space and why it works. This builds the verbal fluency that turns a correct solution into a strong hire signal.

⚠️

Common Bug

After compressing a 2D DP to 1D, you may need to iterate in reverse to avoid overwriting values you still need — this is the most common bug when optimizing Knapsack-type problems.

Ready to master algorithm patterns?

YeetCode flashcards help you build pattern recognition through active recall and spaced repetition.

Start practicing now