Problem Walkthrough

Min Cost Climbing Stairs LeetCode 746

Master LeetCode 746 Min Cost Climbing Stairs using bottom-up dynamic programming with O(1) space — replace the full DP array with just two rolling variables (prev1 and prev2) that track the two most recent costs, the same trick that powers Fibonacci and House Robber space optimizations.

7 min read

Problem Overview

LeetCode 746 — Min Cost Climbing Stairs — gives you an integer array cost where cost[i] is the cost of stepping on stair i. Once you pay the cost, you can move one or two steps forward. You may start from step 0 or step 1 without paying any initial cost. The goal is to reach the top of the floor, which is one step past the last element of the array.

The answer is the minimum total cost to reach the top. Because you can jump from either of the last two stairs to reach the top (past the last index), the final answer is the minimum of the two last dp values. This small detail — that the top is beyond the last index — trips up many first-time solvers who return dp[n-1] instead of min(dp[n-1], dp[n-2]).

This problem is a gateway to the linear DP family: Climbing Stairs, House Robber, Decode Ways, and Fibonacci all share the same recurrence structure where the current state depends only on the previous one or two states. Mastering the rolling-variable optimization here transfers directly to all of those problems.

  • cost[i] is the fee you pay when you step ON stair i
  • After paying, you may jump 1 or 2 steps forward
  • You may start from index 0 or index 1 (no cost to start)
  • The top is one step past the last element — index n
  • Constraints: 2 <= cost.length <= 1000, 0 <= cost[i] <= 999
  • Answer: min cost to reach the top from either last or second-to-last stair

Recurrence Relation

Define dp[i] as the minimum cost to reach step i, where reaching step i means you are standing on it and have already paid cost[i]. The recurrence is: dp[i] = cost[i] + min(dp[i-1], dp[i-2]). The cost of reaching step i is the step's own cost plus the cheaper of the two ways to arrive — jumping from step i-1 (one step) or from step i-2 (two steps).

Base cases are dp[0] = cost[0] and dp[1] = cost[1]. There is no cheaper way to reach steps 0 or 1 since you can start there for free. From index 2 onward, you apply the recurrence. The final answer is min(dp[n-1], dp[n-2]) because you can reach the top (index n) by taking one step from stair n-1 or two steps from stair n-2.

The recurrence captures the optimal substructure of the problem: the cheapest way to reach any stair is determined entirely by the cheapest ways to reach the two stairs before it. This substructure is what makes DP the right tool — each subproblem is solved once and reused, avoiding the exponential blowup of naive recursion.
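A quick trace on the standard LeetCode example, cost = [10, 15, 20], makes the recurrence and the final return concrete:

```python
cost = [10, 15, 20]

# Base cases: you may start on stair 0 or stair 1,
# paying only that stair's own cost.
dp = [0] * len(cost)
dp[0] = cost[0]          # 10
dp[1] = cost[1]          # 15

# dp[2] = cost[2] + min(dp[1], dp[0]) = 20 + min(15, 10) = 30
dp[2] = cost[2] + min(dp[1], dp[0])

# The top is index 3 (one past the last stair), reachable from
# stair 1 (two steps) or stair 2 (one step).
answer = min(dp[1], dp[2])   # min(15, 30) = 15
print(answer)                # 15
```

Starting on stair 1 and jumping two steps straight to the top is cheapest, so the answer is 15, not dp[2] = 30.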

💡

The Top Is Index n, Not n-1 — Return min(dp[n-1], dp[n-2])

The "top" is one step past the last element. The array has indices 0 through n-1, so the top is at index n. Since you can jump 1 step from stair n-1 or 2 steps from stair n-2 to reach index n, the answer is min(dp[n-1], dp[n-2]), not dp[n-1] alone. This is the most common mistake in this problem — off-by-one on the final return.

Bottom-Up DP Array

The bottom-up approach fills a dp array from left to right. Initialize dp[0] = cost[0] and dp[1] = cost[1]. For each index i from 2 to n-1, compute dp[i] = cost[i] + min(dp[i-1], dp[i-2]). After filling the array, return min(dp[n-1], dp[n-2]). This runs in O(n) time and uses O(n) space for the dp array.

Python implementation: seed the base cases dp[0] = cost[0] and dp[1] = cost[1], then run the loop for i in range(2, n): dp[i] = cost[i] + min(dp[i-1], dp[i-2]) and return min(dp[n-1], dp[n-2]). Simple and readable — a good starting point before optimizing.
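Spelled out as a complete function, the bottom-up array version reads:

```python
def minCostClimbingStairs(cost):
    """Bottom-up DP with an O(n) array: dp[i] is the min cost to stand on stair i."""
    n = len(cost)
    dp = [0] * n
    dp[0] = cost[0]        # base case: start on stair 0
    dp[1] = cost[1]        # base case: start on stair 1
    for i in range(2, n):
        # Cheapest way to stand on stair i: pay cost[i] after
        # arriving from stair i-1 (one step) or stair i-2 (two steps).
        dp[i] = cost[i] + min(dp[i - 1], dp[i - 2])
    # The top (index n) is reachable from either of the last two stairs.
    return min(dp[n - 1], dp[n - 2])

print(minCostClimbingStairs([10, 15, 20]))                           # 15
print(minCostClimbingStairs([1, 100, 1, 1, 1, 100, 1, 1, 100, 1]))   # 6
```

The two calls at the bottom are the official LeetCode examples; both answers come from skipping the expensive stairs.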

Java implementation: int n = cost.length; int[] dp = new int[n]; dp[0] = cost[0]; dp[1] = cost[1]; for (int i = 2; i < n; i++) dp[i] = cost[i] + Math.min(dp[i-1], dp[i-2]); return Math.min(dp[n-1], dp[n-2]); The Python and Java versions follow the same structure. The O(n) space version is easier to reason about and debug before applying the rolling-variable optimization.

  1. Set n = cost.length
  2. Allocate a dp array of length n
  3. dp[0] = cost[0], dp[1] = cost[1] (base cases)
  4. For i from 2 to n-1: dp[i] = cost[i] + min(dp[i-1], dp[i-2])
  5. Return min(dp[n-1], dp[n-2])

O(1) Space Optimization

Since dp[i] only depends on dp[i-1] and dp[i-2], you do not need the entire array — just two variables. Replace dp[i-1] with prev1 and dp[i-2] with prev2. Initialize prev2 = cost[0] and prev1 = cost[1]. At each step, compute curr = cost[i] + min(prev1, prev2), then shift: prev2 = prev1, prev1 = curr. After the loop, return min(prev1, prev2).

Python O(1): keep prev2 = cost[0] and prev1 = cost[1], then for each i from 2 onward perform prev2, prev1 = prev1, cost[i] + min(prev1, prev2), and finally return min(prev1, prev2). The simultaneous assignment handles the shift in a single line. Java O(1): int prev2 = cost[0], prev1 = cost[1]; for (int i = 2; i < cost.length; i++) { int curr = cost[i] + Math.min(prev1, prev2); prev2 = prev1; prev1 = curr; } return Math.min(prev1, prev2);
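As a complete Python function, the rolling-variable version is only a few lines:

```python
def minCostClimbingStairs(cost):
    """O(1)-space bottom-up DP: dp[i-2] lives in prev2, dp[i-1] in prev1."""
    prev2, prev1 = cost[0], cost[1]   # base cases dp[0], dp[1]
    for i in range(2, len(cost)):
        # Tuple assignment shifts the window in one step:
        # new prev2 = old prev1, new prev1 = dp[i].
        prev2, prev1 = prev1, cost[i] + min(prev1, prev2)
    # The top is reachable from either of the last two stairs.
    return min(prev1, prev2)

print(minCostClimbingStairs([10, 15, 20]))   # 15
```

Note the order inside the tuple assignment: the right-hand side is evaluated entirely before either variable is reassigned, which is exactly why the one-line shift is safe.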

The O(1) space version is strictly better for production use and is the expected optimal answer in interviews. The logic is identical to the array version — you are just discarding values you no longer need. Reducing from O(n) to O(1) space matters when cost arrays are large and when this subroutine is called repeatedly inside a larger system.

ℹ️

Rolling Variables — The O(1) Space Trick for Any dp[i] = f(dp[i-1], dp[i-2]) Recurrence

This is the same rolling-variable trick used in Fibonacci (space-optimized), House Robber, and Jump Game II. Any DP where dp[i] depends only on dp[i-1] and dp[i-2] can be reduced from O(n) to O(1) space by keeping just two variables. The pattern: initialize two variables for the base cases, iterate updating them each step, return a function of the final two values.
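The pattern in its simplest form is space-optimized Fibonacci, mentioned above as the origin of the trick (a minimal illustrative sketch, not part of the LeetCode 746 solution):

```python
def fib(n):
    """Rolling-variable Fibonacci: dp[i] = dp[i-1] + dp[i-2] in O(1) space."""
    if n < 2:
        return n
    prev2, prev1 = 0, 1                        # base cases dp[0], dp[1]
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev1 + prev2    # shift the two-value window
    return prev1

print(fib(10))   # 55
```

Swap the update rule for cost[i] + min(prev1, prev2) and the return for min(prev1, prev2), and you have the Min Cost Climbing Stairs solution; swap in other functions of the last two states and you get House Robber and friends.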

Top-Down with Memoization

The top-down approach defines a recursive function minCost(i) = cost[i] + min(minCost(i-1), minCost(i-2)) with base cases minCost(0) = cost[0] and minCost(1) = cost[1]. Without memoization, this produces an exponential recursion tree with massive repeated work — minCost(5) calls minCost(4) and minCost(3), minCost(4) calls minCost(3) and minCost(2), and minCost(3) is computed multiple times.

With memoization (a cache dictionary or array), each subproblem is computed exactly once. In Python this is typically a recursive helper decorated with functools.lru_cache: the base case returns cost[i] when i <= 1, the recursive case returns cost[i] + min(minCost(i-1), minCost(i-2)), and the answer is min(minCost(n-1), minCost(n-2)). This achieves the same O(n) time as bottom-up DP but uses O(n) stack space for the recursion depth plus O(n) memo space.
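A complete memoized version, using functools.lru_cache on an inner helper so the cache is fresh per call:

```python
from functools import lru_cache

def minCostClimbingStairs(cost):
    """Top-down DP: memoized recursion matching the mathematical definition."""
    n = len(cost)

    @lru_cache(maxsize=None)
    def minCost(i):
        # Base cases: starting on stair 0 or 1 costs only that stair.
        if i <= 1:
            return cost[i]
        # Recurrence: pay cost[i] after the cheaper of the two arrivals.
        return cost[i] + min(minCost(i - 1), minCost(i - 2))

    # The top is reachable from either of the last two stairs.
    return min(minCost(n - 1), minCost(n - 2))

print(minCostClimbingStairs([10, 15, 20]))   # 15
```

Each minCost(i) is computed once and cached, collapsing the exponential recursion tree to O(n) calls.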

Top-down memoization is often the easiest approach to implement correctly from a recursive intuition. The base cases and recurrence match the mathematical definition directly. The trade-off is stack space: for large inputs (n=1000 at the constraint limit), Python's default recursion limit of 1000 may need to be raised with sys.setrecursionlimit. Bottom-up avoids this issue entirely.

  1. Define a recursive helper minCost(i) with a cache
  2. Base cases: minCost(0) = cost[0], minCost(1) = cost[1]
  3. Recursive case: minCost(i) = cost[i] + min(minCost(i-1), minCost(i-2))
  4. Call min(minCost(n-1), minCost(n-2)) for the answer
  5. Time: O(n); Space: O(n) stack + O(n) memo

Code Walkthrough: Python and Java

All three versions — array DP, O(1) rolling variables, and top-down memoization — run in O(n) time. The Python array version seeds dp[0] = cost[0] and dp[1] = cost[1], fills the rest with dp[i] = cost[i] + min(dp[i-1], dp[i-2]), and returns min(dp[-1], dp[-2]); the O(1) version keeps only the last two values in p2 and p1 and shifts them each iteration. In interview settings, write both as explicit for loops rather than compressed one-liners; readability counts.

Java array DP: public int minCostClimbingStairs(int[] cost) { int n = cost.length; int[] dp = new int[n]; dp[0] = cost[0]; dp[1] = cost[1]; for (int i = 2; i < n; i++) dp[i] = cost[i] + Math.min(dp[i-1], dp[i-2]); return Math.min(dp[n-1], dp[n-2]); } Java O(1): public int minCostClimbingStairs(int[] cost) { int p2 = cost[0], p1 = cost[1]; for (int i = 2; i < cost.length; i++) { int c = cost[i] + Math.min(p1, p2); p2 = p1; p1 = c; } return Math.min(p1, p2); }
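One way to sanity-check any of the versions above is to compare the DP against a brute-force recursion on small random inputs (a verification sketch added here for illustration; the helper names min_cost_dp and min_cost_brute are not from the original walkthrough):

```python
import random

def min_cost_dp(cost):
    # O(1)-space bottom-up version from the walkthrough.
    prev2, prev1 = cost[0], cost[1]
    for i in range(2, len(cost)):
        prev2, prev1 = prev1, cost[i] + min(prev1, prev2)
    return min(prev1, prev2)

def min_cost_brute(cost):
    # Exponential recursion: go(i) = total cost paid from stair i to the top.
    n = len(cost)
    def go(i):
        if i >= n:
            return 0
        return cost[i] + min(go(i + 1), go(i + 2))
    # Free to start on stair 0 or stair 1.
    return min(go(0), go(1))

random.seed(0)
for _ in range(200):
    cost = [random.randint(0, 999) for _ in range(random.randint(2, 12))]
    assert min_cost_dp(cost) == min_cost_brute(cost)
print("all random checks passed")
```

The brute force is only viable for tiny arrays, which is exactly the point: it restates the problem definition directly, so agreement on many random cases is strong evidence the optimized version is right.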

In an interview, present the array DP version first to establish correctness, then explain the O(1) optimization. The interviewer wants to see that you understand why rolling variables work — not just that you memorized the pattern. Explain: "dp[i] only reads dp[i-1] and dp[i-2], so I only need to keep two values at any time." This shows understanding of memory dependency analysis.

⚠️

LeetCode 746 Pays cost[i] When You Step ON Stair i — Answer Is min of Last Two dp Values

Be careful with the problem definition. LeetCode 746 charges cost[i] when you step on stair i, and the answer is min(dp[n-1], dp[n-2]) because the top is past the array. Some variants define cost differently (e.g., cost to leave stair i). Always re-read the problem statement. The answer is NOT dp[n-1] — returning only the last dp value is a common off-by-one bug that passes some test cases but fails others.

Ready to master algorithm patterns?

YeetCode flashcards help you build pattern recognition through active recall and spaced repetition.

Start practicing now