You Don't Need a CS Degree's Worth of Algorithms
Here is a secret that nobody tells you when you start grinding LeetCode: you do not need to know every algorithm ever invented. Computer science programs spend semesters covering topics like Dijkstra variants, network flow, and amortized analysis — but coding interviews test a surprisingly narrow slice of that material. The gap between what academia teaches and what interviews require is enormous.
The reality is that roughly 15 to 20 core concepts cover the vast majority of problems you will encounter. Sorting, searching, recursion, a handful of paradigms, and a solid grasp of Big O notation form the foundation. Everything else is a variation or combination of these building blocks.
This guide is designed as a single reference you can bookmark and revisit. Each section covers one major algorithm concept, explains when and why it matters for interviews, and gives you the practical knowledge you need — no textbook filler, no proofs, just the algorithms that actually get asked in coding interviews.
Big O Notation Crash Course for Coding Interviews
Big O notation is the language interviewers use to discuss solution quality. Every single coding interview will involve a conversation about time and space complexity, so understanding Big O notation is non-negotiable. The good news is that you only need to internalize a handful of common complexities.
Time complexity describes how the runtime of your algorithm grows as the input size increases. Space complexity describes how much additional memory your algorithm uses. Interviewers care about both, but time complexity usually takes priority unless the problem explicitly constrains memory.
When analyzing your own solutions, focus on the dominant term and drop constants. A loop that runs 3n times is still O(n). A nested loop over the same array is O(n^2). A recursive function that splits the problem in half at each step is O(log n). These patterns become second nature with practice.
- O(1) — Constant: hash map lookup, array index access
- O(log n) — Logarithmic: binary search, balanced BST operations
- O(n) — Linear: single pass through an array, hash map construction
- O(n log n) — Linearithmic: efficient sorting (mergesort, quicksort average case)
- O(n^2) — Quadratic: nested loops, brute-force pair comparisons
- O(2^n) — Exponential: recursive subsets, naive Fibonacci
- O(n!) — Factorial: permutations, brute-force TSP
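To make the quadratic-versus-linear distinction above concrete, here is a minimal sketch contrasting a nested-loop pair search with a single hash-set pass for the same task. The function names are illustrative, not from any particular problem.

```python
def has_pair_quadratic(nums, target):
    # Nested loop over the same array: O(n^2) time, O(1) extra space.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_linear(nums, target):
    # Single pass with a hash set: O(n) time, O(n) extra space.
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answers; the difference only shows up in how runtime grows as the input grows.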
Sorting Algorithms That Matter for LeetCode
There are dozens of sorting algorithms in computer science, but for LeetCode and interviews, only three matter — and you rarely need to implement them from scratch. What you need to understand is the underlying strategies, because they appear as building blocks in more complex problems.
Quicksort uses a partitioning strategy: pick a pivot, rearrange elements so everything smaller goes left and everything larger goes right, then recurse. The partitioning step itself is the real gem. It powers the QuickSelect algorithm used in problems like Kth Largest Element in an Array (LeetCode #215). Average case is O(n log n), worst case O(n^2), but the constant factors make it fast in practice.
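Here is a sketch of that idea in the QuickSelect form used for Kth Largest Element (#215): partition once, then recurse into only the side that contains the answer. This uses a Lomuto-style partition with a random pivot; treat it as an illustration of the technique, not a canonical reference solution.

```python
import random

def kth_largest(nums, k):
    # QuickSelect sketch: reuse quicksort's partition step, but only
    # narrow into the side containing the target index. Average O(n).
    nums = list(nums)               # work on a copy
    target = len(nums) - k          # index of the kth largest when sorted
    lo, hi = 0, len(nums) - 1
    while True:
        # Lomuto partition with a random pivot to dodge worst cases.
        p = random.randint(lo, hi)
        nums[p], nums[hi] = nums[hi], nums[p]
        store = lo
        for i in range(lo, hi):
            if nums[i] < nums[hi]:
                nums[store], nums[i] = nums[i], nums[store]
                store += 1
        nums[store], nums[hi] = nums[hi], nums[store]
        if store == target:
            return nums[store]
        elif store < target:
            lo = store + 1          # answer is in the right part
        else:
            hi = store - 1          # answer is in the left part
```

Note how nothing outside the active range is ever touched again — that is what drops the average cost from O(n log n) to O(n).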
Mergesort follows the divide-and-conquer paradigm: split the array in half, sort each half, then merge the sorted halves. It guarantees O(n log n) in all cases and is stable. The merge step is crucial — it appears in problems like Merge Sorted Array (#88), Sort List (#148), and Count of Smaller Numbers After Self (#315).
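The merge step on its own looks like this — a minimal sketch of merging two already-sorted lists, the core move behind Merge Sorted Array (#88) and Sort List (#148). The function name is illustrative.

```python
def merge_sorted(a, b):
    # Walk both sorted lists with two indices, always taking the
    # smaller head. O(len(a) + len(b)) time.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:            # <= keeps the merge stable
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])               # at most one of these is non-empty
    out.extend(b[j:])
    return out
```

Mergesort is just this merge applied recursively to halves of the array.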
Counting sort is your secret weapon when input values fall within a known, bounded range. Instead of comparing elements, you count occurrences and reconstruct the sorted order. It runs in O(n + k) where k is the range of values. Problems like Sort Colors (#75) and Top K Frequent Elements (#347) use counting-based approaches.
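A counting sort sketch for non-negative integers in a known range, assuming the caller supplies the maximum value. The count-then-rebuild structure is the same one behind Sort Colors (#75).

```python
def counting_sort(nums, max_value):
    # Count occurrences of each value, then rebuild in order.
    # O(n + k) time where k = max_value + 1; no comparisons needed.
    counts = [0] * (max_value + 1)
    for x in nums:
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)     # emit each value `c` times
    return out
```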
Pro Tip
You don't need to implement quicksort from memory — but you DO need to understand partitioning, because it's the basis for problems like Kth Largest Element (#215).
Searching Algorithms — Binary Search, BFS, DFS, and Beyond
Searching is arguably the most important algorithm family for coding interviews. Whether you are finding a target in a sorted array, exploring a graph, or narrowing down a solution space, you are searching. Mastering search strategies is the single highest-leverage investment you can make in your LeetCode preparation.
Binary search is the workhorse of searching on sorted data. The classic version finds a target in O(log n), but the real power is in its variations: finding the leftmost or rightmost occurrence, searching on a rotated array (LeetCode #33), or applying binary search on the answer space itself. Whenever a problem involves sorted data or a monotonic condition, think binary search first.
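The leftmost-occurrence variation mentioned above is worth internalizing, because the same loop shape also powers search-on-answer problems. A minimal sketch:

```python
def leftmost(nums, target):
    # Find the first index whose value is >= target, in O(log n).
    # Returns len(nums) if every element is smaller than target.
    lo, hi = 0, len(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1            # answer lies strictly right of mid
        else:
            hi = mid                # mid could be the answer; keep it
    return lo
```

The half-open `[lo, hi)` convention avoids the off-by-one bugs that plague ad-hoc binary searches; changing only the comparison adapts it to rightmost-occurrence or answer-space searches.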
Breadth-first search (BFS) explores a graph or tree level by level using a queue. It is the go-to algorithm for shortest path in unweighted graphs, level-order traversal of trees (LeetCode #102), and any problem that asks for the minimum number of steps. If you see "shortest" or "minimum moves," reach for BFS.
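A level-order traversal sketch in the style of Binary Tree Level Order Traversal (#102). The small `Node` class is an assumption made here for self-containment, not part of the guide.

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    # BFS with a queue: drain exactly one level per outer iteration.
    if root is None:
        return []
    levels, queue = [], deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):     # nodes currently in the queue = one level
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        levels.append(level)
    return levels
```

The "drain one level at a time" trick (snapshotting `len(queue)`) is also how you count minimum steps in grid and graph BFS problems.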
Depth-first search (DFS) explores as deep as possible along each branch before backtracking. It is implemented with recursion or an explicit stack. DFS is essential for path-finding problems, connected components, cycle detection, and tree traversals (inorder, preorder, postorder). Problems like Number of Islands (#200) and Path Sum (#112) are classic DFS applications.
Two pointers and sliding window are search strategies disguised as techniques. Two pointers converge from opposite ends of a sorted array to find pairs meeting a condition. Sliding window maintains a dynamic range over a sequence to find optimal subarrays. Both reduce O(n^2) brute force to O(n) and appear constantly in interviews.
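Both strategies fit in a few lines. Below are hedged sketches: a two-pointer pair search on a sorted array (Two Sum II style) and a sliding window for the longest substring without repeating characters; the function names are illustrative.

```python
def two_sum_sorted(nums, target):
    # Two pointers converging from both ends of a sorted array. O(n).
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return (lo, hi)
        if s < target:
            lo += 1     # need a bigger sum: move the left pointer up
        else:
            hi -= 1     # need a smaller sum: move the right pointer down
    return None

def longest_unique_substring(s):
    # Sliding window: extend the right edge each step; jump the left
    # edge past the previous occurrence on a repeat. O(n) overall.
    last_seen, left, best = {}, 0, 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```

In both, each pointer only ever moves forward, which is exactly why the quadratic brute force collapses to linear time.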
- Binary search — sorted data, rotated arrays, search-on-answer problems
- BFS — shortest path (unweighted), level-order traversal, minimum steps
- DFS — path finding, connected components, tree traversals, cycle detection
- Two pointers — sorted array pairs, container with most water, three sum
- Sliding window — longest substring without repeating, minimum window substring
Core Algorithm Paradigms Every Candidate Must Know
Beyond individual algorithms, interviewers test whether you can recognize which paradigm applies to a problem. A paradigm is a general approach — a mental framework for breaking down problems. Mastering these paradigms is what separates candidates who solve novel problems from those who can only recall memorized solutions.
Greedy algorithms make the locally optimal choice at each step, hoping it leads to a globally optimal solution. They work when the problem has the "greedy choice property" — meaning a local optimum is part of the global optimum. Classic examples include Jump Game (#55), Task Scheduler (#621), and interval scheduling problems. Greedy solutions are usually elegant and efficient, but proving correctness can be tricky.
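Jump Game (#55) shows how compact a correct greedy can be — a sketch tracking the farthest reachable index, a locally optimal quantity that happens to be globally safe here:

```python
def can_jump(nums):
    # Greedy: keep the farthest index reachable so far. O(n) time.
    farthest = 0
    for i, jump in enumerate(nums):
        if i > farthest:            # this index can never be reached
            return False
        farthest = max(farthest, i + jump)
    return True
```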
Divide and conquer splits a problem into smaller subproblems, solves each independently, and combines the results. Mergesort is the textbook example. In interviews, you will see it in problems like Maximum Subarray (#53 with the recursive approach), Median of Two Sorted Arrays (#4), and any problem where splitting the input in half leads to a clear recursive structure.
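For Maximum Subarray (#53), the recursive divide-and-conquer approach looks like the sketch below. Note this is the O(n log n) paradigm illustration; the well-known O(n) Kadane solution exists but does not demonstrate the split-and-combine structure.

```python
def max_subarray(nums, lo=0, hi=None):
    # The best subarray lies entirely in the left half, entirely in
    # the right half, or crosses the midpoint. O(n log n) time.
    if hi is None:
        hi = len(nums) - 1
    if lo == hi:
        return nums[lo]             # base case: single element
    mid = (lo + hi) // 2

    # Best crossing sum: best suffix ending at mid + best prefix after mid.
    best_suffix, cur = float("-inf"), 0
    for i in range(mid, lo - 1, -1):
        cur += nums[i]
        best_suffix = max(best_suffix, cur)
    best_prefix, cur = float("-inf"), 0
    for i in range(mid + 1, hi + 1):
        cur += nums[i]
        best_prefix = max(best_prefix, cur)

    return max(max_subarray(nums, lo, mid),
               max_subarray(nums, mid + 1, hi),
               best_suffix + best_prefix)
```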
Dynamic programming (DP) solves problems by breaking them into overlapping subproblems and caching results to avoid redundant computation. If a recursive solution recomputes the same states, DP is the fix. It covers problems from Climbing Stairs (#70) to Edit Distance (#72). The key is identifying the state, the recurrence relation, and the base case — the code follows naturally.
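Climbing Stairs (#70) makes the state/recurrence/base-case recipe visible in miniature. State: the number of ways to reach step i. Recurrence: ways(i) = ways(i-1) + ways(i-2). Base cases: ways(0) = ways(1) = 1. A bottom-up sketch with O(1) space:

```python
def climb_stairs(n):
    # Bottom-up DP: keep only the last two states instead of a full table.
    a, b = 1, 1                     # ways(0), ways(1)
    for _ in range(n - 1):
        a, b = b, a + b             # slide the window up one step
    return b
```

Once the recurrence is written down, the loop is mechanical — which is why the section above stresses identifying state and recurrence first.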
Backtracking is exhaustive search with pruning. You explore all possible candidates and abandon a path as soon as you determine it cannot lead to a valid solution. It is the paradigm behind permutation, combination, and constraint satisfaction problems — Subsets (#78), N-Queens (#51), and Sudoku Solver (#37). The pattern is almost always recursive: choose, explore, unchoose.
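The choose/explore/unchoose pattern, sketched for Subsets (#78):

```python
def subsets(nums):
    # Backtracking: every recursion depth decides, for each remaining
    # element, whether to include it. 2^n subsets total.
    result, path = [], []

    def explore(start):
        result.append(path[:])      # snapshot the current subset
        for i in range(start, len(nums)):
            path.append(nums[i])    # choose
            explore(i + 1)          # explore
            path.pop()              # unchoose
    explore(0)
    return result
```

For constraint problems like N-Queens, the only change is adding a validity check before the "choose" step — that check is the pruning.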
Key Insight
80% of LeetCode problems use just 4 paradigms: greedy, two pointers/sliding window, BFS/DFS, and dynamic programming. Master these and you cover most interviews.
Algorithm Complexity Cheat Sheet
Keeping a mental cheat sheet of common operations and their complexities saves time during interviews. Instead of deriving complexities on the spot, internalize these so you can instantly evaluate whether your approach fits within the problem constraints.
For data structure operations, remember that hash maps give O(1) average for insert, lookup, and delete. Binary search trees give O(log n) for the same operations when balanced. Heaps give O(log n) for insert and extract-min/max but O(1) for peek. Arrays give O(1) for index access but O(n) for search and insertion at arbitrary positions.
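The heap behavior described above maps directly onto Python's standard-library `heapq` module (a min-heap over a plain list) — a small sketch:

```python
import heapq

# Build a min-heap by pushing; each push is O(log n).
heap = []
for x in [5, 1, 4, 2]:
    heapq.heappush(heap, x)

smallest = heap[0]              # O(1) peek at the minimum
popped = heapq.heappop(heap)    # O(log n) extract-min
```

For max-heap behavior, the common idiom is to push negated values and negate again on the way out.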
For algorithm complexities, sorting is your baseline at O(n log n) for comparison-based sorts. Binary search is O(log n). BFS and DFS on a graph are O(V + E) where V is vertices and E is edges. Dynamic programming typically runs in O(states times transition cost). Backtracking can be exponential but pruning often brings it within practical limits.
The rule of thumb for interviews: if n is up to 10, O(n!) or O(2^n) is acceptable. If n is up to 1,000, O(n^2) works. If n is up to 100,000, you need O(n log n) or better. If n is up to 10 million, O(n) is required. These rough boundaries help you quickly determine whether your approach will pass within time limits.
- Hash Map — Insert: O(1), Lookup: O(1), Delete: O(1) average
- Binary Search Tree (balanced) — Insert: O(log n), Search: O(log n), Delete: O(log n)
- Heap / Priority Queue — Insert: O(log n), Extract: O(log n), Peek: O(1)
- Array — Access: O(1), Search: O(n), Insert at end: O(1) amortized, Insert at position: O(n)
- Sorting (comparison) — O(n log n) best achievable bound
- BFS / DFS on graph — O(V + E)
- Dynamic Programming — O(states x transition cost)
What to Study First — Priority by Interview Frequency
Not all LeetCode topics are equally important. Interview frequency data from thousands of real interview reports reveals a clear priority order. If you have limited time — and most candidates do — study in this sequence to maximize your coverage with minimum effort.
Start with arrays and hashing, two pointers, and sliding window. These three categories alone cover roughly 40 percent of all coding interview questions. The problems are approachable, the patterns are transferable, and they build the intuition you need for harder topics — sliding window in particular tests whether you can optimize away a brute force. YeetCode organizes these as its first three categories for exactly this reason.
Next, tackle binary search, stack, and linked list problems. These add another 20 to 25 percent of interview coverage. Binary search on sorted data and on the answer space is a high-frequency pattern. Stack problems appear often because they test fundamental understanding of LIFO processing, and linked list problems probe careful pointer manipulation and edge-case handling.
Then move to trees, graphs, and dynamic programming. These are the backbone of medium and hard problems. Tree traversal with DFS and BFS, graph connectivity problems, and DP recurrence relations appear at every major tech company. They require more time to master but are unavoidable if you are targeting senior roles or top-tier companies.
Finally, cover heap/priority queue, backtracking, greedy, and intervals. These appear less frequently as standalone categories but show up as components within harder problems. A solid grasp of the first three tiers means you can often figure out these patterns on the fly during an interview.
- Tier 1 (start here): Arrays and Hashing, Two Pointers, Sliding Window — covers ~40% of questions
- Tier 2 (build on it): Binary Search, Stack, Linked List — adds ~20-25% more coverage
- Tier 3 (go deeper): Trees, Graphs, Dynamic Programming — essential for medium and hard problems
- Tier 4 (round it out): Heap/Priority Queue, Backtracking, Greedy, Intervals — completes your toolkit
Important
Don't memorize algorithm implementations — memorize when to USE each algorithm. Interviewers test pattern recognition, not textbook recall.