Dynamic Programming Isn't About Memorizing Solutions
Dynamic programming has a reputation for being one of the hardest topics in coding interviews. Engineers spend weeks memorizing solutions to Coin Change, Knapsack, and Longest Common Subsequence — then freeze up the moment they see a problem that's even slightly different.
The problem isn't that DP is inherently hard. It's that most people study it the wrong way. They learn solutions without learning the underlying framework. Once you internalize the two core properties that define a DP problem and the systematic process for building a solution, the category becomes far less intimidating.
This guide teaches you the framework. You'll learn how to recognize when dynamic programming applies, how to define your state, how to write a recurrence relation, and how to choose between memoization and tabulation. By the end, you'll have a repeatable process — not a library of memorized solutions.
What Is Dynamic Programming?
Dynamic programming is an algorithmic technique for solving problems that can be broken down into overlapping subproblems with optimal substructure. Those two properties are the entire definition — if both are present, DP is likely the right tool.
Overlapping subproblems means the same smaller problems are solved repeatedly as you recurse through the problem. In naive recursion for Fibonacci, fib(4) is computed multiple times when calculating fib(6). DP eliminates this redundancy by storing previously computed results.
Optimal substructure means the optimal solution to the full problem can be constructed from optimal solutions to its subproblems. If the shortest path from A to C goes through B, then the sub-path from A to B must also be the shortest path between those two nodes. This property is what makes it valid to build solutions incrementally.
- Overlapping subproblems: the same sub-computation appears multiple times
- Optimal substructure: the global optimum is composed of local optima
- DP trades space for time — cache results to avoid redundant computation
- Both properties must be present for DP; with optimal substructure but no overlapping subproblems, divide and conquer (or a greedy algorithm, if a greedy-choice property also holds) is usually the better fit
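The redundancy is easiest to see in code. Here is naive Fibonacci next to a memoized version, a minimal Python sketch of the space-for-time trade:

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    # Exponential time: fib_naive(4) is recomputed repeatedly inside fib_naive(6).
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Same recursion, but each state is computed once and cached: O(n) time.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

The recursive structure is identical; the cache is the only difference, which is why memoization is often described as "recursion plus a lookup."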
The Dynamic Programming Framework: State, Recurrence, Direction
Every DP solution follows the same three-step process: define the state, write the recurrence relation, and decide whether to go top-down or bottom-up. Rushing past any of these steps is the most common reason DP solutions fail in interviews.
Defining the state is the most important step. Your state is what you need to know at each point to make progress. For Coin Change, the state is the remaining amount. For Longest Common Subsequence, the state is two indices into the two strings. Ask yourself: "What changes as I make decisions?" — those variables become your state.
The recurrence relation is the mathematical relationship between a state and smaller states. For Climbing Stairs (#70): dp[i] = dp[i-1] + dp[i-2]. For Coin Change (#322): dp[amount] = 1 + min(dp[amount - coin]) for each coin. Write this relation in plain English first, then translate it to code.
Once you have the recurrence, you choose direction: top-down starts from the answer and recurses down (using memoization), while bottom-up starts from the base cases and builds up (using a table). Both produce the same result — the choice is about code style and stack depth.
1. Identify what changes between subproblems — these are your state variables
2. Define dp[state] in plain English: "dp[i] is the minimum number of coins to make amount i"
3. Write the recurrence: express dp[state] in terms of smaller states
4. Identify base cases: the smallest states you can answer directly
5. Choose top-down (memoization) or bottom-up (tabulation) and implement
6. Verify with a small example by hand before coding the full solution
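Applied to House Robber (#198), where you rob non-adjacent houses for maximum loot, the steps above produce a sketch like this:

```python
def rob(nums: list[int]) -> int:
    # Steps 1-2: dp[i] is the max loot considering only the first i houses.
    # Step 3 (recurrence): dp[i] = max(dp[i-1], dp[i-2] + nums[i-1])
    #   i.e. skip house i, or take it plus the best from two houses back.
    # Step 4 (base cases): dp[0] = 0 (no houses), dp[1] = nums[0].
    n = len(nums)
    if n == 0:
        return 0
    dp = [0] * (n + 1)
    dp[1] = nums[0]
    for i in range(2, n + 1):  # Step 5: bottom-up
        dp[i] = max(dp[i - 1], dp[i - 2] + nums[i - 1])
    return dp[n]
```

Note that the recurrence is written as a comment before any code, exactly as the framework prescribes.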
Watch Out
Skipping the recurrence relation step and jumping straight to code is the #1 source of DP bugs. Write dp[state] = ... in a comment before you write a single line of code. A clear recurrence prevents off-by-one errors and wrong base cases.
Top-Down (Memoization) vs Bottom-Up (Tabulation)
Top-down DP uses recursion with a cache (usually a hash map or array). You write the solution the same way you'd write naive recursion, then add a check at the top: if the current state is already in the cache, return it immediately. This approach is intuitive because it mirrors how you think about the problem.
Bottom-up DP fills a table iteratively, starting from base cases and working toward the answer. It eliminates recursion entirely, which means no stack overflow risk on large inputs. It's often faster in practice due to better cache locality and no function call overhead.
For interviews, top-down is usually easier to derive under pressure because you can write the recursive solution first and then add memoization as a second pass. Bottom-up is preferred when the interviewer specifically asks for an iterative solution or when input size makes recursion depth a concern.
- Top-down: write recursion first, add a cache — easier to derive from the problem
- Bottom-up: fill a table from base cases up — no stack overflow, slightly faster
- Both have the same time and space complexity for most problems
- Top-down only computes states that are actually needed; bottom-up computes all states
- Start with top-down if you're unsure — it's easier to verify correctness
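To make the contrast concrete, here are both styles applied to Coin Change (#322), sketched in Python:

```python
from functools import lru_cache

def coin_change_top_down(coins: list[int], amount: int) -> int:
    # Recursion with a cache: written like the naive solution, memoized after.
    @lru_cache(maxsize=None)
    def dp(a: int) -> float:
        if a == 0:
            return 0
        if a < 0:
            return float("inf")  # unreachable amount
        return 1 + min(dp(a - c) for c in coins)

    best = dp(amount)
    return -1 if best == float("inf") else int(best)

def coin_change_bottom_up(coins: list[int], amount: int) -> int:
    # Table filled from the base case dp[0] = 0 upward; no recursion.
    dp = [float("inf")] * (amount + 1)
    dp[0] = 0
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                dp[a] = min(dp[a], dp[a - c] + 1)
    return -1 if dp[amount] == float("inf") else int(dp[amount])
```

Both versions implement the same recurrence; only the direction of evaluation differs.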
Classic Dynamic Programming Problems: Walkthrough
Climbing Stairs (#70) is the hello-world of dynamic programming. You can climb 1 or 2 steps at a time — how many distinct ways to reach the top? State: dp[i] = number of ways to reach step i. Recurrence: dp[i] = dp[i-1] + dp[i-2]. Base cases: dp[1] = 1, dp[2] = 2. This is Fibonacci in disguise, and recognizing that pattern trains your eye for similar recurrences.
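Since the recurrence only ever reads the previous two values, the full table can collapse to two variables, a minimal sketch:

```python
def climb_stairs(n: int) -> int:
    # dp[i] = dp[i-1] + dp[i-2]; keep only the last two values for O(1) space.
    if n <= 2:
        return n  # base cases: dp[1] = 1, dp[2] = 2
    prev, curr = 1, 2
    for _ in range(3, n + 1):
        prev, curr = curr, prev + curr
    return curr
```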
Coin Change (#322) is the canonical unbounded knapsack variant. Given coin denominations and a target amount, find the fewest coins needed. State: dp[amount] = minimum coins to make that amount. Recurrence: dp[a] = 1 + min(dp[a - coin]) for each valid coin. Base case: dp[0] = 0. This problem is excellent because it cleanly illustrates how to model "minimum cost to reach a goal."
Longest Common Subsequence (#1143) introduces 2-D DP states. Given two strings, find the length of their longest common subsequence. State: dp[i][j] = LCS of the first i characters of s1 and first j characters of s2. Recurrence: if s1[i] == s2[j], dp[i][j] = dp[i-1][j-1] + 1; otherwise dp[i][j] = max(dp[i-1][j], dp[i][j-1]). This pattern — comparing characters and choosing to include or skip — appears in dozens of string DP problems.
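A direct translation of that recurrence into a 2-D table might look like this sketch:

```python
def lcs(s1: str, s2: str) -> int:
    m, n = len(s1), len(s2)
    # dp[i][j] = LCS length of s1[:i] and s2[:j];
    # row 0 and column 0 are the empty-prefix base cases (LCS with "" is 0).
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # characters match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # skip one side
    return dp[m][n]
```

The 1-indexed table with a zero-filled border is a common trick: it makes the base cases implicit and avoids negative-index checks inside the loop.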
- Climbing Stairs (#70) — 1-D DP, Fibonacci pattern, good entry point
- House Robber (#198) — 1-D DP, skip-or-take decision at each step
- Coin Change (#322) — unbounded knapsack, minimize cost to reach a target
- Longest Increasing Subsequence (#300) — 1-D DP with O(n log n) optimization
- Longest Common Subsequence (#1143) — 2-D DP, comparing two sequences
- Edit Distance (#72) — 2-D DP, three choices at each cell (insert, delete, replace)
- 0/1 Knapsack (various) — foundational bounded include/exclude pattern
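The 0/1 knapsack's include/exclude decision can be sketched as follows (the weights, values, and capacity in the test are hypothetical inputs):

```python
def knapsack(weights: list[int], values: list[int], capacity: int) -> int:
    # dp[i][w] = best value using the first i items with capacity w.
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # exclude item i
            if weights[i - 1] <= w:  # include item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]
```

Each item is considered exactly once, which is what distinguishes 0/1 knapsack from the unbounded variant in Coin Change.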
Pro Tip
Before writing code for any DP problem, manually compute dp values for a tiny example (3-4 elements). If your recurrence produces the right answer by hand, your code will too. If it doesn't, fix the recurrence — not the code.
Common Dynamic Programming Mistakes to Avoid
The most frequent DP mistake is defining the state incorrectly. If your state doesn't capture all the information needed to make a decision, your recurrence will be wrong in subtle ways that are hard to debug. When in doubt, err toward more state variables and optimize later.
Off-by-one errors in base cases are the second most common issue. Always ask: what is dp[0] and dp[1]? What happens at the boundary of your 2-D table? Test your base cases explicitly with the smallest valid input before running on larger examples.
Not memoizing all states is another trap. If you write a recursive solution and forget to cache a state, the algorithm silently falls back to exponential time. In Python, the @lru_cache decorator is the safest way to ensure you never miss a cache hit. In other languages, initialize your memo table with a sentinel value (like -1) and always check before recursing.
- Wrong state definition: missing variables that affect future decisions
- Missing base cases: forgetting dp[0] or edge cases at table boundaries
- Incomplete memoization: caching only some states, leaving others to recompute
- Wrong recurrence direction: confusing "min remaining cost" with "max collected value"
- Mutating input: modifying the array mid-recursion and corrupting cached results
- Off-by-one in table dimensions: allocating dp[n] when you need dp[n+1]
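One way to apply the sentinel pattern in Python is a pre-filled memo table that every call checks before recursing, sketched here for Unique Paths (#62):

```python
def unique_paths(m: int, n: int) -> int:
    # memo initialized with a sentinel (-1); every recursive call checks it
    # first, so no state silently falls back to exponential recomputation.
    memo = [[-1] * n for _ in range(m)]

    def dp(r: int, c: int) -> int:
        if r == 0 or c == 0:
            return 1  # base case: a single row or column has one path
        if memo[r][c] != -1:
            return memo[r][c]
        memo[r][c] = dp(r - 1, c) + dp(r, c - 1)
        return memo[r][c]

    return dp(m - 1, n - 1)
```

Because the memo is written on every return path, no state can be computed twice, which guards against the incomplete-memoization trap above.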
How to Practice Dynamic Programming Effectively
The best way to build DP intuition is to practice in order of increasing state complexity. Start with 1-D DP problems (Climbing Stairs, House Robber, Coin Change), then move to 2-D problems (LCS, Edit Distance, Unique Paths). Each level introduces new ways to define and traverse state space.
After solving a problem, always ask: can I reduce the space complexity? Many 2-D DP solutions can be compressed to O(n) space by observing that each row only depends on the previous row. Recognizing this is a signal of deep understanding that impresses interviewers.
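As one illustration, the 2-D LCS table can be compressed to two rows, since each row reads only the row above (a sketch):

```python
def lcs_two_rows(s1: str, s2: str) -> int:
    # Same recurrence as full 2-D LCS, but only the previous and current
    # rows are kept: O(len(s2)) space instead of O(len(s1) * len(s2)).
    prev = [0] * (len(s2) + 1)
    for i in range(1, len(s1) + 1):
        curr = [0] * (len(s2) + 1)
        for j in range(1, len(s2) + 1):
            if s1[i - 1] == s2[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[len(s2)]
```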
YeetCode's flashcard system is well suited to DP practice because pattern recognition is what DP is really about. Drilling cards that ask "what is the state for LCS?" or "write the recurrence for Coin Change" builds the mental scaffolding you need to derive solutions under interview pressure — not just recognize problems you've seen before.
The 1-D Dynamic Programming category in YeetCode covers the core patterns: Fibonacci-style recurrences, unbounded knapsack, and subsequence problems. Work through these systematically before attempting DP problems on LeetCode directly. The pattern recognition you build with flashcards transfers directly to novel problems.
Good to Know
Dynamic programming is commonly estimated to appear in roughly 30-40% of hard-level interview problems at top tech companies