Problem Walkthrough

Remove Duplicates from Sorted Array — LeetCode 26

A complete walkthrough of LeetCode 26 Remove Duplicates from Sorted Array — the two-pointer technique uses slow as the write head that overwrites duplicates in-place while fast scans ahead to find the next unique value, achieving O(n) time and O(1) space with a single pass through the sorted input.

7 min read

Problem Overview

LeetCode 26 — Remove Duplicates from Sorted Array — gives you a sorted integer array nums and asks you to remove duplicates in-place so that each unique element appears only once. You must return k, the count of unique elements. The first k positions of nums must contain the unique values in their original order; everything after index k-1 does not matter.

Because the array is sorted, all duplicates are adjacent — any repeated value appears in a contiguous block. This adjacency property is the key that makes the two-pointer approach work in O(1) space. If the array were unsorted, you would need a HashSet to track which values had been seen.

The problem is judged by a custom checker: it reads k from your return value and verifies that nums[0..k-1] contains the correct unique values in sorted order. The values at indices k and beyond are ignored. You do not need to actually remove elements from the array; you only need to write the unique values to the front.

  • Input: sorted integer array nums (non-decreasing order)
  • Output: k — the count of unique elements
  • First k elements of nums must hold the unique values in order
  • Elements beyond index k-1 are ignored by the judge
  • In-place: no extra array allowed, O(1) space required
  • Constraints: 1 <= nums.length <= 30000, -100 <= nums[i] <= 100
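To make the judging semantics concrete, here is a minimal sketch of what a checker in this style verifies — the `judge` function is hypothetical, not LeetCode's actual code:

```python
def judge(nums, k, expected):
    # Hypothetical checker sketch: only the first k slots are inspected;
    # everything at index k and beyond is ignored.
    return k == len(expected) and nums[:k] == expected

# After a correct in-place solution runs on [1, 1, 2], the array might
# look like [1, 2, 2] with k = 2 -- the trailing 2 does not matter.
print(judge([1, 2, 2], 2, [1, 2]))  # True
```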

Two-Pointer Strategy

The two-pointer strategy assigns distinct roles to each pointer. slow is the write head: it tracks the position of the last confirmed unique element written to the front of the array. fast is the read head: it scans forward through the entire array looking for values that differ from nums[slow]. When fast finds such a value, it has found the next unique element.

When nums[fast] != nums[slow], there are two actions: increment slow to claim the next write position, then copy nums[fast] into nums[slow]. After the copy, slow points to the newly written unique value. fast then advances to the next index regardless of whether a copy happened. This single-pass scan processes every element exactly once.

At the end of the loop, slow is a 0-indexed pointer to the last unique element. The total count of unique elements is therefore slow + 1. Returning slow instead of slow+1 is the most common off-by-one mistake in this problem — slow is an index, not a count.
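The update rule described above, run on a tiny array (the input `[4, 4, 5, 5]` is chosen purely for illustration):

```python
nums = [4, 4, 5, 5]
slow = 0
for fast in range(1, len(nums)):
    if nums[fast] != nums[slow]:
        slow += 1                # claim the next write position
        nums[slow] = nums[fast]  # copy the new unique value forward
# fast advanced every iteration; slow only advanced on new unique values.
# slow is an index (1 here, pointing at the 5); the count is slow + 1.
print(slow, slow + 1)  # 1 2
```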

💡

Slow is the Write Head, Fast is the Read Head

This is the foundational in-place dedup pattern: slow is the "write head" and fast is the "read head". Slow only advances when a new unique value is found; fast always advances. The same pattern appears in Remove Element (LeetCode 27), Move Zeroes (LeetCode 283), and Remove Duplicates II (LeetCode 80) — mastering it here unlocks the entire family.

Algorithm Steps

The algorithm initializes slow = 0, treating nums[0] as the first unique element already in place. fast then starts from index 1 and scans to the end. For every fast position, compare nums[fast] to nums[slow]: if they differ, a new unique element has been found; if they are equal, the element is a duplicate and fast simply advances without writing.

The initialization of slow = 0 correctly handles arrays with a single element: the for loop never executes (fast starts at 1, which equals nums.length for a size-1 array), and the function returns slow+1 = 1, which is correct. Arrays with all identical elements return 1 because slow never advances past 0.
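Both edge cases from the paragraph above can be checked directly (the snake-less `removeDuplicates` name matches the LeetCode signature):

```python
def removeDuplicates(nums):
    slow = 0
    for fast in range(1, len(nums)):
        if nums[fast] != nums[slow]:
            slow += 1
            nums[slow] = nums[fast]
    return slow + 1

print(removeDuplicates([5]))        # 1 -- loop body never runs for a size-1 array
print(removeDuplicates([7, 7, 7]))  # 1 -- slow never advances past index 0
```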

The time complexity is O(n) because fast visits each element exactly once. The space complexity is O(1) because no auxiliary data structures are used — only the two integer pointer variables slow and fast, which are constant space regardless of input size.

  1. Initialize slow = 0 (first unique element is already at index 0)
  2. For fast from 1 to nums.length - 1 (inclusive):
  3. If nums[fast] != nums[slow]: slow++, then nums[slow] = nums[fast]
  4. If nums[fast] == nums[slow]: skip — fast advances at end of loop automatically
  5. Return slow + 1 as the count of unique elements

Why Sorted Order Matters

The sorted precondition is what makes O(1) space possible. In a sorted array, all copies of a given value appear together in a contiguous block. This means a duplicate is always immediately adjacent to its original — comparing nums[fast] to nums[slow] is sufficient to detect duplicates because any repeat of nums[slow] must appear between slow and the next new value.

If the array were unsorted — for example [1, 3, 1, 2, 2] — comparing fast to slow would not work because duplicates are not adjacent. You would need a HashSet to record which values had been seen before. The HashSet approach is O(n) space, which violates the in-place constraint. The sorted precondition eliminates the need for that extra memory entirely.
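For contrast, here is a sketch of the set-based variant that an unsorted input would force — the `dedup_unsorted` name is illustrative, and the O(n) extra space comes from the `seen` set:

```python
def dedup_unsorted(nums):
    # Without sorted order, duplicates are not adjacent, so we must
    # remember every value seen so far -- O(n) auxiliary space.
    seen = set()
    slow = 0
    for fast in range(len(nums)):
        if nums[fast] not in seen:
            seen.add(nums[fast])
            nums[slow] = nums[fast]
            slow += 1
    return slow

nums = [1, 3, 1, 2, 2]
k = dedup_unsorted(nums)
print(nums[:k])  # [1, 3, 2] -- first-seen order preserved
```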

The comparison fast vs slow (rather than fast vs fast-1) is intentional. slow always points to the most recently written unique value. fast compares against slow, not the previous fast position, because what matters is whether the current element is a new unique value relative to what has been written so far — not whether it equals its immediate predecessor in the original array.

ℹ️

Sorted Order Enables O(1) Space — Without It You Need a HashSet

The sorted precondition is what makes this O(1) space: duplicates are adjacent, so comparing fast to slow always catches them. Without sorted order you would need a HashSet to track seen values, turning this into Contains Duplicate territory with O(n) extra space. The sorted constraint is not incidental — it is the algorithmic foundation of the entire approach.

Walk-Through Example

Consider nums = [0,0,1,1,1,2,2,3,3,4]. Initialize slow = 0 (nums[0] = 0 is the first unique element). fast begins at index 1. At index 1: nums[1] = 0 == nums[slow=0] = 0 — duplicate, skip. At index 2: nums[2] = 1 != nums[slow=0] = 0 — new unique value; slow becomes 1, nums[1] = 1.

Continuing: index 3 (1 == nums[1] = 1, skip), index 4 (1 == 1, skip). At index 5: nums[5] = 2 != nums[slow=1] = 1 — new unique; slow becomes 2, nums[2] = 2. At index 6 (2 == 2, skip). At index 7: nums[7] = 3 != nums[slow=2] = 2 — new unique; slow becomes 3, nums[3] = 3.

At index 8 (3 == 3, skip). At index 9: nums[9] = 4 != nums[slow=3] = 3 — new unique; slow becomes 4, nums[4] = 4. Loop ends. Return slow+1 = 5. The first 5 elements of nums are [0,1,2,3,4], which are the 5 unique values in sorted order.

  1. Start: nums=[0,0,1,1,1,2,2,3,3,4], slow=0
  2. fast=2: 1 != 0 → slow=1, nums[1]=1 → [0,1,1,1,1,2,2,3,3,4]
  3. fast=5: 2 != 1 → slow=2, nums[2]=2 → [0,1,2,1,1,2,2,3,3,4]
  4. fast=7: 3 != 2 → slow=3, nums[3]=3 → [0,1,2,3,1,2,2,3,3,4]
  5. fast=9: 4 != 3 → slow=4, nums[4]=4 → [0,1,2,3,4,2,2,3,3,4]
  6. Return slow+1 = 5 (first 5 elements are [0,1,2,3,4])
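The trace above can be reproduced by instrumenting the loop with a print at each write — `remove_duplicates_traced` is an illustrative name for this debugging variant:

```python
def remove_duplicates_traced(nums):
    slow = 0
    for fast in range(1, len(nums)):
        if nums[fast] != nums[slow]:
            slow += 1
            nums[slow] = nums[fast]
            # Log each write so the trace matches the walkthrough steps.
            print(f"fast={fast}: slow={slow}, nums={nums}")
    return slow + 1

nums = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4]
k = remove_duplicates_traced(nums)
print(f"k={k}, unique prefix={nums[:k]}")
```

Running it prints one line per write — at fast = 2, 5, 7, and 9 — matching the four writes in the step list.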

Code Walkthrough — Python and Java

The Python solution:

```python
def removeDuplicates(nums):
    slow = 0
    for fast in range(1, len(nums)):
        if nums[fast] != nums[slow]:
            slow += 1
            nums[slow] = nums[fast]
    return slow + 1
```

O(n) time, O(1) space. The for loop iterates fast from 1 to len(nums) - 1 inclusive. The conditional executes its two-line body only when a new unique value is found; otherwise fast simply advances to the next index.

The equivalent Java:

```java
public int removeDuplicates(int[] nums) {
    int slow = 0;
    for (int fast = 1; fast < nums.length; fast++) {
        if (nums[fast] != nums[slow]) {
            slow++;
            nums[slow] = nums[fast];
        }
    }
    return slow + 1;
}
```

The logic is identical to the Python version: Java's for loop initializes fast = 1 and increments it automatically, mirroring Python's range(1, len(nums)) exactly. Both solutions pass all LeetCode test cases.

Both implementations share the same structure: one variable initialization, one for loop, one conditional with two statements, one return. There are no nested loops, no recursion, no auxiliary data structures. The in-place writes to nums overwrite duplicates with unique values without disturbing the relative order of unique elements since both slow and fast move strictly left to right.
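A quick way to sanity-check either implementation — shown here for the Python version — is to run it against LeetCode's two published examples:

```python
def removeDuplicates(nums):
    slow = 0
    for fast in range(1, len(nums)):
        if nums[fast] != nums[slow]:
            slow += 1
            nums[slow] = nums[fast]
    return slow + 1

# LeetCode 26's two official examples: expected k and unique prefix.
for arr, want_k, want_prefix in [
    ([1, 1, 2], 2, [1, 2]),
    ([0, 0, 1, 1, 1, 2, 2, 3, 3, 4], 5, [0, 1, 2, 3, 4]),
]:
    k = removeDuplicates(arr)
    assert k == want_k and arr[:k] == want_prefix
print("all checks passed")
```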

⚠️

Return slow+1 Not slow — slow Is an Index, Not a Count

Return slow+1 not slow: slow is 0-indexed and points to the last unique element written, so the count of unique elements = slow + 1. Returning slow is off by one — it would report one fewer unique element than actually exists. This is the single most common bug in LeetCode 26 solutions. Always remember: index → count requires adding 1.
