What's the Time Complexity? The Question You Will Always Be Asked
You have just finished writing a clean solution to a medium-difficulty LeetCode problem. The interviewer nods, then asks the one question that comes up in every single coding round: "What's the time complexity?" If you hesitate, stumble, or guess wrong, it undermines everything you just built. LeetCode time complexity analysis is not optional; it is the price of admission to every serious technical interview.
The good news is that analyzing time complexity is a learnable skill, not an innate talent. You do not need a computer science degree to master it. What you need is a practical framework that lets you look at any piece of code and quickly determine how its runtime grows as the input gets larger. That is exactly what this guide provides.
Whether you are preparing for FAANG interviews, startup coding rounds, or just want to write more efficient code, a practical grasp of Big O notation will make you a stronger engineer. By the end of this article, you will be able to analyze any solution you write on the spot, confidently and accurately.
Big O Basics: The Seven Complexities You Need to Know
Big O notation measures how an algorithm's runtime or memory usage scales as the input size grows toward infinity. It does not measure actual execution time in seconds — it measures the growth rate. An O(n) algorithm might be slower than an O(n^2) algorithm for tiny inputs, but as n grows to thousands or millions, the O(n) solution will always win.
The key insight behind Big O is simple: we drop constants and lower-order terms because they become irrelevant at scale. An algorithm that takes 3n + 500 steps is still O(n). One that takes n^2 + 10000n is still O(n^2). What matters is the dominant term, the one that grows fastest as n increases.
Nearly every algorithm you will meet in an interview falls into one of seven common growth categories. You should be able to recognize and rank all seven from fastest to slowest, because interviewers expect you to compare solutions fluently.
- O(1) — Constant time: hash map lookup, array index access, arithmetic operations. Runtime does not depend on input size at all.
- O(log n) — Logarithmic: binary search, balanced BST operations. Each step cuts the remaining work in half.
- O(n) — Linear: single loop through the input, linear search, most hash map-based solutions. You touch each element once.
- O(n log n) — Linearithmic: efficient sorting algorithms (merge sort, heap sort), divide-and-conquer approaches. The gold standard for comparison-based sorting.
- O(n^2) — Quadratic: nested loops over the input, brute-force pair comparisons, bubble sort. Acceptable only for small n (typically n <= 10^4).
- O(2^n) — Exponential: recursive solutions without memoization, generating all subsets. Grows astronomically fast.
- O(n!) — Factorial: generating all permutations, brute-force traveling salesman. Only feasible for n up to about 12.
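As a sanity check, the ranking above can be verified numerically for a concrete input size. The dictionary and the choice of n = 1,000 below are purely illustrative:

```python
import math

# Representative step counts for each complexity class at n = 1,000.
n = 1_000
growth = {
    "O(1)": 1,
    "O(log n)": math.log2(n),
    "O(n)": n,
    "O(n log n)": n * math.log2(n),
    "O(n^2)": n ** 2,
    "O(2^n)": 2 ** n,
    "O(n!)": math.factorial(n),
}

# Sorting by step count reproduces the fastest-to-slowest ranking.
ranked = sorted(growth, key=growth.get)
```

Even at a modest n, the gap between classes is enormous: n^2 is a million steps, while 2^n and n! are numbers with hundreds or thousands of digits.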
How to Analyze Your Own Code
Knowing the seven complexity classes is step one. The real skill, the one that impresses interviewers, is being able to look at code you just wrote and determine its complexity on the spot. Calculating time complexity comes down to a few reliable techniques that work for the vast majority of interview problems.
Start with the loops. A single loop that iterates through n elements is O(n). Two nested loops that each iterate through n elements give you O(n^2). A loop with a binary search inside gives you O(n log n). This simple rule of counting nested loops handles about 80 percent of the problems you will encounter.
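A quick way to internalize this is to literally count the operations. This toy function (the name is mine, not from any library) runs two nested loops over the input and returns the number of constant-time steps performed:

```python
def count_comparisons(nums):
    """Brute-force all ordered pairs: two nested loops give O(n^2) work."""
    ops = 0
    for i in range(len(nums)):      # n iterations
        for j in range(len(nums)):  # n iterations for each i
            ops += 1                # constant work per pair
    return ops
```

For an input of 40 elements this performs exactly 40 * 40 = 1600 steps, which is the n^2 the nested-loop rule predicts.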
For recursive algorithms, write down the recurrence relation. If a function calls itself twice with input size n-1, you get O(2^n). If it calls itself once with input size n/2, you get O(log n). If it calls itself once with input size n-1, you get O(n). Draw the recursion tree if the pattern is not immediately obvious — count the total number of nodes, and that is your time complexity.
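Counting recursion-tree nodes can itself be done in code. These two sketches track the number of calls made by the halving and branching recurrences described above (both function names are illustrative):

```python
def calls_halving(n):
    """T(n) = T(n/2) + O(1): each call halves the input, so O(log n) calls."""
    if n <= 1:
        return 1
    return 1 + calls_halving(n // 2)

def calls_branching(n):
    """T(n) = 2T(n-1) + O(1): each call spawns two more, so O(2^n) calls."""
    if n <= 0:
        return 1
    return 1 + calls_branching(n - 1) + calls_branching(n - 1)
```

For n = 16, the halving version makes only 5 calls (log2(16) + 1), while the branching version makes 2^(n+1) - 1 calls, exactly the node count of a full binary recursion tree.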
Finally, identify the dominant term and drop everything else. If your algorithm has a sorting step O(n log n) followed by a linear scan O(n), the overall complexity is O(n log n) because sorting dominates. If an O(n^2) step runs inside an O(n) loop, the total is O(n^3). Always look for the bottleneck; that is what determines your algorithm's class.
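To make the dominant-term rule concrete, here is a sketch of a duplicate check that sorts first and then scans. The O(n log n) sort dominates the O(n) scan, so the whole function is O(n log n):

```python
def contains_duplicate(nums):
    """Sort O(n log n), then linear scan O(n): overall O(n log n)."""
    nums = sorted(nums)               # dominant term
    for a, b in zip(nums, nums[1:]):  # O(n) scan of adjacent pairs
        if a == b:
            return True
    return False
```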
Pro Tip
The fastest way to estimate complexity: count your nested loops. One loop = O(n), two nested = O(n^2), a loop with binary search inside = O(n log n). This covers 80% of interview problems.
Space Complexity: The Dimension Interviewers Never Forget
Most candidates prepare thoroughly for time complexity questions but stumble when asked about space. Space complexity interview questions are just as common, and getting them wrong signals a gap in your understanding. The good news is that space analysis follows the same Big O principles — you are just counting memory allocation instead of operations.
Auxiliary space is the extra memory your algorithm uses beyond the input itself. A solution that creates a hash map of n elements uses O(n) auxiliary space. One that uses only a few pointer variables uses O(1) auxiliary space. The distinction matters because interviewers often ask "can you do this in-place?" — meaning O(1) extra space.
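A minimal illustration of the in-place idea: reversing an array with two index variables uses O(1) auxiliary space, whereas building a reversed copy with something like `list(reversed(nums))` would use O(n):

```python
def reverse_in_place(nums):
    """O(1) auxiliary space: only two index variables, no copy of the input."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        nums[lo], nums[hi] = nums[hi], nums[lo]  # swap ends, move inward
        lo, hi = lo + 1, hi - 1
    return nums
```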
Recursion is the sneakiest source of space usage. Every recursive call adds a frame to the call stack. A recursive DFS on a tree uses O(h) space where h is the height — O(log n) for a balanced tree, O(n) for a skewed one. Recursive Fibonacci without memoization uses O(n) stack space even though most people only think about its O(2^n) time complexity.
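The height dependence is easy to see with a toy tree class (all names here are illustrative). Each recursive call adds one stack frame, so the stack grows to O(h), where h is whatever `height` returns:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def height(node):
    """Recursive height; the call stack is O(h) deep during this traversal."""
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))

# Skewed tree: 100 nodes in a single chain, so height 100 and O(n) stack.
skewed = None
for v in range(100):
    skewed = Node(v, right=skewed)

# Balanced tree: 7 nodes, height 3, so O(log n) stack.
balanced = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6), Node(7)))
```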
The most powerful optimization technique in algorithm efficiency is trading space for time. A brute-force O(n^2) solution can often become O(n) by using a hash map to store previously seen values. Two-sum is the classic example: instead of checking every pair, you store each number in a map and look up its complement in O(1). Understanding this tradeoff is fundamental to solving medium and hard LeetCode problems.
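A sketch of that tradeoff for two-sum, assuming the standard problem statement (return the indices of the two numbers that add up to the target):

```python
def two_sum_brute(nums, target):
    """Check every pair: O(n^2) time, O(1) space."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []

def two_sum_hash(nums, target):
    """One pass with a hash map: O(n) time, O(n) space."""
    seen = {}  # value -> index of where we saw it
    for i, x in enumerate(nums):
        if target - x in seen:      # O(1) average lookup of the complement
            return [seen[target - x], i]
        seen[x] = i
    return []
```

Both return the same answer; the hash-map version buys the n-fold speedup by spending O(n) extra memory.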
Common Complexity Patterns for LeetCode
Once you have the fundamentals down, the next step is building a mental time complexity cheat sheet — a lookup table that maps common algorithmic patterns to their complexities. This saves you from re-deriving everything during an interview and lets you answer confidently in seconds.
These are the patterns you will use most frequently on LeetCode and in interviews. Memorize them like multiplication tables — they should be instant recall, not something you reason through each time.
- Hash map insertion and lookup: O(1) average, O(n) worst case. This is why hash maps turn O(n^2) brute force into O(n) solutions.
- Sorting (merge sort, heap sort, Timsort): O(n log n) time, O(n) space for merge sort, O(1) for heap sort. Sorting is often the first step in an optimal solution.
- Binary search on a sorted array: O(log n). Whenever you see a sorted input or a monotonic condition, think binary search.
- BFS and DFS on a graph: O(V + E) where V is vertices and E is edges. This applies to trees as well — a tree with n nodes has n-1 edges, so traversal is O(n).
- Two pointers on a sorted array: O(n). One pass with two converging pointers — used in pair-sum problems, container with most water, and removing duplicates.
- Sliding window: O(n). Maintains a window that expands and contracts — used for substring problems, maximum sum subarrays, and minimum window substring.
- Dynamic programming with memoization: O(number of unique states * work per state). For example, 0/1 knapsack with n items and capacity W is O(nW).
- Backtracking and generating subsets: O(2^n) for subsets, O(n!) for permutations. Pruning can reduce the constant but not the complexity class.
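Two of the patterns above, sketched minimally (the function names are mine): converging two pointers on a sorted array, and a fixed-size sliding window:

```python
def pair_sum_sorted(nums, target):
    """Two pointers on a sorted array: O(n) time, O(1) space."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return [lo, hi]
        if s < target:
            lo += 1   # need a bigger sum
        else:
            hi -= 1   # need a smaller sum
    return []

def max_window_sum(nums, k):
    """Maximum sum of any length-k window: O(n) with a sliding window."""
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide: add new element, drop old
        best = max(best, window)
    return best
```

Each element enters and leaves the window (or is visited by a pointer) at most once, which is why both are linear rather than quadratic.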
Constraint Hack
When LeetCode says n <= 10^4, they expect O(n^2) or better. When n <= 10^5, they expect O(n log n). When n <= 10^6, they expect O(n). Use constraints to reverse-engineer the expected complexity.
Tricky Cases Interviewers Love to Ask About
Standard complexity analysis covers most problems, but interviewers love to probe edge cases that trip up candidates. These are the gotchas that separate someone who memorized Big O from someone who truly understands algorithm efficiency at a deeper level.
Amortized analysis is the first trap. Consider a dynamic array (like Python's list or Java's ArrayList). Most append operations are O(1), but occasionally the array needs to resize, which costs O(n). The key insight is that resizing happens so infrequently that the average cost per operation is still O(1) amortized. Interviewers love asking about this because it tests whether you understand that "worst case per operation" is different from "average cost over many operations."
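You can watch the amortized bound hold with a toy dynamic array that doubles on resize and counts every element it copies. Over n appends, the total copy work stays below 2n, so the average append is O(1) amortized (the class is a simplified sketch, not how CPython actually implements lists):

```python
class DynamicArray:
    """Doubling dynamic array that tracks total elements copied on resize."""
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None]
        self.copies = 0  # total elements moved across all resizes

    def append(self, x):
        if self.size == self.capacity:        # full: double and copy over
            self.capacity *= 2
            new = [None] * self.capacity
            for i in range(self.size):
                new[i] = self.data[i]
                self.copies += 1
            self.data = new
        self.data[self.size] = x              # the cheap O(1) part
        self.size += 1

arr = DynamicArray()
n = 1000
for i in range(n):
    arr.append(i)
```

The copies form the geometric series 1 + 2 + 4 + ... + 512 = 1023, which is under 2n even though individual resizes cost O(n).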
String concatenation in a loop is another classic gotcha. In most languages, strings are immutable, so concatenating a character to a string of length k creates an entirely new string of length k+1. If you do this n times in a loop, the total work is 1 + 2 + 3 + ... + n = O(n^2), not O(n). This is why you should use StringBuilder in Java, join() in Python, or push to an array in JavaScript.
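A side-by-side sketch of the two approaches. CPython sometimes optimizes `s += c` in place, so the quadratic version uses an explicit `s = s + c` to show the language-independent baseline:

```python
def concat_loop(chars):
    """O(n^2): each iteration builds a brand-new string of length k + 1."""
    s = ""
    for c in chars:
        s = s + c
    return s

def concat_join(chars):
    """O(n): join allocates the result once and copies each char once."""
    return "".join(chars)
```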
Recursive Fibonacci without memoization is the textbook example of hidden exponential complexity. The recurrence T(n) = T(n-1) + T(n-2) produces O(2^n) calls because the same subproblems are solved over and over. Adding a memo table drops this to O(n) — one of the most dramatic complexity improvements in all of computer science.
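Counting calls makes the difference vivid. The naive version below tracks how many times it is invoked; the memoized version uses `functools.lru_cache` to compute each of the n + 1 states exactly once:

```python
from functools import lru_cache

calls = 0

def fib_naive(n):
    """T(n) = T(n-1) + T(n-2): exponentially many calls."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Each state cached after first computation: O(n) time and space."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

naive_result = fib_naive(10)  # makes 177 calls for n = 10
```

At n = 10 the naive version already makes 177 calls; the memoized one makes 11. By n = 40 the gap is roughly a billion calls versus 41.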
Hidden O(n) operations catch many candidates off guard. Array slicing in Python creates a copy — that is O(k) where k is the slice size. The "in" keyword on a list is O(n), not O(1) like it is for sets and dicts. Sorting a result at the end adds O(n log n) to your overall complexity even if the rest of your solution is O(n). Always account for every operation, not just the loops you wrote.
Quick Reference Cheat Sheet
Here is the time complexity cheat sheet you should internalize before any coding interview. This covers the most common data structure operations and the constraint-based heuristics that let you reverse-engineer the expected complexity from the problem constraints.
- Array: access O(1), search O(n), insert/delete at end O(1) amortized, insert/delete at index O(n).
- Hash Map / Hash Set: insert O(1), lookup O(1), delete O(1) — all average case. Worst case is O(n) due to collisions.
- Linked List: access O(n), search O(n), insert/delete at head O(1), insert/delete at position O(n).
- Binary Search Tree (balanced): search O(log n), insert O(log n), delete O(log n). Unbalanced worst case is O(n).
- Heap / Priority Queue: insert O(log n), extract-min/max O(log n), peek O(1). Building a heap from n elements is O(n).
- Stack / Queue: push O(1), pop O(1), peek O(1). These are the simplest data structures with the best guarantees.
- Constraint n <= 10: expect O(n!) or O(2^n) — brute force backtracking or permutation generation is fine.
- Constraint n <= 10^3: expect O(n^2) — nested loops are acceptable at this size.
- Constraint n <= 10^4: expect O(n^2) or better — O(n^2) is borderline, consider optimizing.
- Constraint n <= 10^5: expect O(n log n) — sorting-based or divide-and-conquer solutions.
- Constraint n <= 10^6: expect O(n) — single-pass solutions with hash maps or two pointers.
- Constraint n <= 10^9: expect O(log n) or O(1) — binary search or mathematical formula.
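For the heap row in particular, Python's heapq module demonstrates the costs directly (the variable names are mine):

```python
import heapq

nums = [5, 1, 8, 3, 2]
heapq.heapify(nums)                  # O(n) bottom-up heap construction
heapq.heappush(nums, 0)              # O(log n) insert
smallest = heapq.heappop(nums)       # O(log n) extract-min, returns 0
next_smallest = heapq.heappop(nums)  # next extract-min, returns 1
```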
Common Trap
String concatenation in a loop is O(n^2) in most languages, not O(n) — each concatenation creates a new string. Use StringBuilder (Java), join (Python), or array push (JS) instead.