Problem Walkthrough

Two Sum (LeetCode 1) — Complete Deep Dive

LeetCode 1 Two Sum is solved in O(n) time using a hashmap that stores each number mapped to its index. For every element you compute the complement (target minus the current number) and check whether that complement already exists in the map. This single-pass lookup replaces the O(n^2) brute-force nested loop and makes Two Sum the canonical example of trading O(n) space for O(n) time — the fundamental hashmap complement pattern that recurs throughout the hashmap problem family.

8 min read

Problem Overview

LeetCode 1 — Two Sum — is the most attempted problem on LeetCode and the canonical entry point into technical interviews. Given an array of integers nums and an integer target, return the indices of the two numbers that add up to target. The problem guarantees exactly one valid answer exists, and you may not use the same element twice. For example, given nums = [2, 7, 11, 15] and target = 9, return [0, 1] because nums[0] + nums[1] = 2 + 7 = 9.

Two Sum is rated Easy, but it is a cornerstone problem because the optimal solution introduces the hashmap complement lookup pattern that appears in dozens of harder problems: Three Sum, Four Sum, Group Anagrams, Subarray Sum Equals K, and Longest Consecutive Sequence all build on this same foundation. Understanding why the hashmap solution works — and why sorting does not — is essential interview knowledge.

The problem sits at the intersection of array traversal and hash table design. The naive solution is immediately obvious (check every pair), but the interviewer wants to see that you recognize the complement structure and can jump to the O(n) hashmap solution. Walking through the problem with a concrete example, explaining the trade-off between time and space, and discussing edge cases like duplicate values demonstrates the depth expected at top companies.

  • Input: integer array nums, integer target
  • Output: indices [i, j] such that nums[i] + nums[j] == target
  • Exactly one valid answer is guaranteed
  • You may not use the same element twice (i != j)
  • Return indices in any order
  • Constraints: 2 <= nums.length <= 10^4, -10^9 <= nums[i] <= 10^9

Brute Force

The brute force approach checks every pair of elements using two nested loops. The outer loop iterates over index i from 0 to n-1, and the inner loop iterates over index j from i+1 to n-1. For each pair (i, j), check whether nums[i] + nums[j] == target. If yes, return [i, j]. This is O(n^2) time because there are n*(n-1)/2 pairs in the worst case. Space complexity is O(1) since no extra data structures are used.
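The nested loops just described translate directly into code. A minimal sketch (the function name is my own):

```python
# Brute force: check every pair (i, j) with j > i and return the first
# pair whose values sum to target. O(n^2) time, O(1) extra space.
def two_sum_brute(nums, target):
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):    # j starts at i+1, so i != j always holds
            if nums[i] + nums[j] == target:
                return [i, j]
    return []                        # unreachable: an answer is guaranteed

print(two_sum_brute([2, 7, 11, 15], 9))   # [0, 1]
```

Starting j at i+1 enforces the "same element twice" constraint for free, since every pair is considered exactly once.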

For small inputs the brute force is perfectly acceptable — for n = 100 it performs only about 5,000 comparisons. But LeetCode 1 allows n up to 10,000, giving up to 50 million comparisons in the worst case. More importantly, in real codebases and interviews, the O(n^2) pattern signals that the interviewer expects you to find a better approach. The brute force should be your starting point to confirm understanding, not your final answer.

State the brute force clearly and quickly, then pivot: "This works but is O(n^2). We can do better with a hashmap that trades O(n) space for O(n) time. For each number, instead of scanning the rest of the array for its complement, we check the hashmap in O(1)." This pivot demonstrates algorithmic thinking — recognizing that storing seen elements enables constant-time complement lookup.

💡

The hashmap approach trades O(n) space for O(n) time: for each number, check if its complement (target - num) was already seen

The key insight is that for any element x in the array, you are looking for exactly one other element: target - x. Instead of scanning the array again to find it (O(n) per element, O(n^2) total), store each element in a hashmap keyed by its value. Then the lookup becomes O(1). The trade-off is O(n) extra space for the hashmap. In the vast majority of real-world scenarios this is entirely acceptable — the hashmap holds at most n entries, each a simple integer-to-integer mapping. This time-for-space trade-off is the defining pattern of the hashmap family of problems.

HashMap Single Pass

The optimal solution uses a single pass through the array with a hashmap that maps each value to its index. For each element nums[i], compute the complement as target - nums[i]. If the complement exists in the hashmap, return [hashmap[complement], i] — the index where the complement was stored and the current index. If the complement is not found, store nums[i] → i in the hashmap and continue. The algorithm terminates as soon as a pair is found, and the problem guarantees one always exists.

The time complexity is O(n): each element is visited exactly once, and each hashmap lookup and insert is O(1) average case. The space complexity is O(n): in the worst case (no pair found until the last element), the hashmap stores n-1 entries. In practice the hashmap is much smaller because the pair is found early. Both Python dict and Java HashMap provide O(1) average-case operations, making this solution fast in practice.

This single-pass approach is elegant because it builds the hashmap and checks for complements simultaneously. You never need to go back and re-scan the array. Each element is checked against all previously seen elements in O(1) time, giving the O(n) overall complexity. The complement check before insertion is critical: it ensures you do not accidentally use the same element twice, even when the target is twice a single element (e.g., nums = [3, 3], target = 6 — checking before inserting correctly returns [0, 1]).

  1. Initialize an empty hashmap: seen = {}
  2. Iterate over the array: for i, num in enumerate(nums)
  3. Compute complement = target - num
  4. If complement in seen: return [seen[complement], i]
  5. Otherwise: seen[num] = i
  6. Continue to next element (guaranteed to find answer)
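The steps above can be run directly. The sketch below (the function name and the trace printing are my own additions) shows the hashmap state at each step on the problem's own example, plus the duplicate case:

```python
# One-pass hashmap, instrumented to print the map state at each step.
def two_sum_traced(nums, target):
    seen = {}                        # value -> index of elements seen so far
    for i, num in enumerate(nums):
        complement = target - num
        print(f"i={i} num={num} complement={complement} seen={seen}")
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i                # insert only after the complement check
    return []                        # unreachable: an answer is guaranteed

print(two_sum_traced([2, 7, 11, 15], 9))   # [0, 1]
print(two_sum_traced([3, 3], 6))           # [0, 1]: duplicate values handled
```

On [2, 7, 11, 15] with target 9, index 0 stores {2: 0}; at index 1 the complement 2 is already in the map, so the answer appears after only two iterations.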

Why Sorting Does Not Work Here

A common alternative for "find two numbers summing to target" is sorting the array and using two pointers — one at the left end and one at the right end, moving them inward based on whether the current sum is too large or too small. This gives O(n log n) time (dominated by sorting) and O(1) extra space. It works correctly for finding the values, but it fails for Two Sum because Two Sum asks for indices, not values.

When you sort the array, the original indices are lost. If nums = [3, 2, 4] and target = 6, sorting gives [2, 3, 4]. The two-pointer approach finds the pair (2, 4) at positions 0 and 2 in the sorted array, but the original indices were 1 and 2 — not 0 and 2. To recover the original indices you would need to either (a) store (value, original_index) pairs before sorting, which requires O(n) extra space anyway, or (b) search the original array after finding the values, which risks O(n) extra time per lookup. At that point the hashmap approach is simpler and equally efficient.

The sorting + two-pointer approach is ideal when the problem asks for values instead of indices — for example, Three Sum (LeetCode 15) asks for the actual triplets, not their indices, so sorting is the standard solution there. Recognizing this distinction is a key interview signal: Two Sum asks for indices → use hashmap. Three Sum asks for values (and allows deduplication) → use sorting + two pointers. Always clarify what the output format is before choosing your approach.

ℹ️

This is a crucial distinction: Two Sum asks for indices, not values; if it asked for values, sorting + two pointers would be ideal (like 3Sum)

The Two Sum vs Three Sum comparison illustrates a fundamental design choice. Two Sum: return indices → hashmap O(n) time O(n) space. Three Sum: return value triplets, deduplicated → sort + two pointers O(n^2) time O(1) space. The reason Three Sum uses sorting is not just the extra element — it is that (1) indices are not required, (2) deduplication is required (skip duplicate values), and (3) sorting makes deduplication trivial by grouping identical values. If Three Sum asked for indices, sorting would fail for the same reason it fails in Two Sum. Always ask: does the problem require indices or values? That single question determines whether hashmap or sorting is the right tool.
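Since the contrast with Three Sum carries the argument here, a minimal sketch of the standard sort + two-pointer 3Sum (target sum 0, as in LeetCode 15) may help; the function name and structure are my own illustration:

```python
# 3Sum sketch: sort, fix one anchor element, then two-pointer scan for
# the remaining pair. Sorting is safe because the answer is value
# triplets, and it makes skipping duplicate values trivial.
def three_sum(nums):
    nums = sorted(nums)
    res = []
    n = len(nums)
    for i in range(n - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue                 # skip duplicate anchors: dedup after sorting
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s < 0:
                lo += 1
            elif s > 0:
                hi -= 1
            else:
                res.append([nums[i], nums[lo], nums[hi]])
                lo += 1
                while lo < hi and nums[lo] == nums[lo - 1]:
                    lo += 1          # skip duplicate second elements
                hi -= 1
    return res

print(three_sum([-1, 0, 1, 2, -1, -4]))  # [[-1, -1, 2], [-1, 0, 1]]
```

Every line that skips duplicates relies on identical values being adjacent, which is exactly the property sorting provides and the property that destroys the original indices.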

One Pass vs Two Pass

There are two variants of the hashmap solution: two-pass and one-pass. The two-pass approach makes two iterations: in the first pass, build the entire hashmap mapping every value to its index. In the second pass, for each element check whether its complement exists in the hashmap and is at a different index. This is slightly more code but makes the logic very explicit — build first, query second.
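A minimal sketch of the two-pass variant just described (names are illustrative). Note that the dict comprehension keeps the last index for duplicate values, which is what makes the j != i guard sufficient:

```python
# Two-pass hashmap: build the complete map first, then query it.
def two_sum_two_pass(nums, target):
    # Pass 1: later indices overwrite earlier ones for duplicate values.
    index_of = {num: i for i, num in enumerate(nums)}
    # Pass 2: look up each element's complement, rejecting self-matches.
    for i, num in enumerate(nums):
        complement = target - num
        j = index_of.get(complement)
        if j is not None and j != i:
            return [i, j]
    return []                        # unreachable: an answer is guaranteed

print(two_sum_two_pass([3, 3], 6))   # [0, 1]
```

On [3, 3] with target 6, the map is {3: 1}; at index 0 the complement 3 maps to index 1, the guard passes, and [0, 1] is returned correctly.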

The one-pass approach builds the hashmap and checks complements simultaneously in a single loop. For each element, check if the complement exists in the (partially built) hashmap first, then add the current element. This works because if nums[i] and nums[j] form a valid pair (i < j), by the time we process index j, index i has already been added to the hashmap. The complement lookup at j will find i.

The one-pass approach also handles the duplicate-value edge case more naturally. Consider nums = [3, 3], target = 6. In the one-pass approach: process index 0, num=3, complement=3, not in map yet → store {3: 0}. Process index 1, num=3, complement=3, found at index 0 → return [0, 1]. This is correct. In a naive two-pass implementation you must be careful to ensure j != i when checking the complement — the one-pass approach avoids this because the current element has not been inserted when its complement is checked.

  1. Two-pass: Pass 1 — build full hashmap {value: index} for all elements
  2. Two-pass: Pass 2 — for each element i, check if complement = target - nums[i] exists and complement_index != i
  3. One-pass: Single loop — for each element, check complement first, then insert
  4. One-pass handles duplicates naturally: current element not yet in map when complement is checked
  5. Both variants: O(n) time, O(n) space — one-pass preferred for simplicity

Code Walkthrough: Python and Java

Python solution using a single-pass hashmap:

```python
def twoSum(nums, target):
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
```

Python's enumerate provides both the index and value cleanly. The dict lookup "complement in seen" is O(1) average case. The function returns as soon as the pair is found, so in the best case (pair at the first two elements) it runs in O(1). Time O(n), Space O(n).

Java solution using HashMap:

```java
public int[] twoSum(int[] nums, int target) {
    Map<Integer, Integer> seen = new HashMap<>();
    for (int i = 0; i < nums.length; i++) {
        int complement = target - nums[i];
        if (seen.containsKey(complement)) {
            return new int[]{seen.get(complement), i};
        }
        seen.put(nums[i], i);
    }
    return new int[]{};
}
```

Java's HashMap.containsKey() and get() are both O(1) average. The empty array return at the end satisfies the compiler — the problem guarantees a solution always exists, so it is never reached.

Both solutions follow the same three-line logic per iteration: compute complement, check map, insert if not found. The Python version is more concise thanks to enumerate and in-operator syntax. The Java version is more verbose but equally readable. Both handle the duplicate-value case correctly because the complement is checked before the current element is inserted. For LeetCode 1 with n up to 10,000, both solutions run well under the time limit — HashMap operations in Java and dict operations in Python are highly optimized in their standard libraries.

⚠️

Do not add the current number to the map before checking for its complement: checking first prevents using the same element twice

The order of operations in the single-pass hashmap is critical: (1) compute complement, (2) check if complement is in map, (3) if found return answer, (4) if not found, add current num to map. If you insert num into the map BEFORE checking for the complement, you risk a bug when target = 2 * nums[i] for some i. Example: nums = [3, 5], target = 6. If you insert 3 first then check complement 3, you find it and incorrectly return [0, 0] — the same element used twice. Always check for the complement before inserting the current element. This is the single most common implementation mistake on Two Sum.
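The bug is easy to demonstrate: swapping steps (2) and (4) lets an element match itself. A short sketch (the function name is my own):

```python
# Deliberately buggy variant: the current element is inserted BEFORE the
# complement check, so it can be returned as its own complement.
def two_sum_buggy(nums, target):
    seen = {}
    for i, num in enumerate(nums):
        seen[num] = i                # BUG: inserted before the check
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
    return []

print(two_sum_buggy([3, 5], 6))      # [0, 0]: element 3 matched itself
```

At index 0 the map already contains {3: 0} when the complement 3 is looked up, so the same element is used twice, which the problem forbids.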
