Problem Walkthrough

Balanced Binary Tree LeetCode 110 Guide

A complete walkthrough of LeetCode 110 Balanced Binary Tree using post-order DFS that returns -1 as a sentinel for unbalanced subtrees, checking height and balance simultaneously in a single O(n) pass — eliminating the redundant height recomputation of the naive O(n²) approach.

7 min read

Problem Overview — Height-Balanced at Every Node

LeetCode 110 Balanced Binary Tree asks you to determine whether a given binary tree is height-balanced. A height-balanced binary tree is defined as a tree in which the heights of the left and right subtrees of every node differ by at most 1. This condition must hold for every single node in the tree — not just the root.

The naive reading of the problem leads many developers to compute the height at the root and check whether the left and right subtrees differ by at most 1. This only validates the root node. A tree can have a perfectly balanced root while containing an unbalanced subtree buried deep within one of its branches. The definition requires every node to satisfy the balance condition.
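To make the pitfall concrete, here is a small illustrative sketch (the `TreeNode` class and `height` helper are the usual LeetCode-style definitions, written out here for self-containment):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def height(node):
    """Height in nodes: 0 for an empty subtree, 1 for a leaf."""
    return 0 if node is None else 1 + max(height(node.left), height(node.right))

# Left branch: a chain of three nodes; right branch: a chain of two.
# The root's subtree heights are 3 and 2 (difference 1), but the node
# at the top of the left chain has child heights 2 and 0 (difference 2).
left_chain = TreeNode(1, TreeNode(2, TreeNode(3)))
right_chain = TreeNode(4, TreeNode(5))
root = TreeNode(0, left_chain, right_chain)

root_only_balanced = abs(height(root.left) - height(root.right)) <= 1
print(root_only_balanced)  # True -- yet the tree is NOT height-balanced
```

The root-only check passes even though the tree violates the definition one level down, which is exactly why every node must be examined.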

The constraints are: the number of nodes is between 0 and 5,000, and node values are between -1,000 and 1,000. An empty tree (null root) is considered balanced by definition. The function should return a boolean true if the tree is balanced, false otherwise.

  • Number of nodes in the range [0, 5000]
  • Node values in the range [-1000, 1000]
  • Every node must satisfy: abs(leftHeight - rightHeight) <= 1
  • An empty tree (null) is height-balanced
  • The condition must hold for ALL nodes, not just the root
  • Return true if balanced, false otherwise

Naive O(n²) Approach — Height Recomputed at Every Level

The straightforward approach defines two functions: isBalanced checks the balance condition at the root and recurses on children, while getHeight computes the height of a subtree by recursively visiting every node. For each node, isBalanced calls getHeight on the left and right subtrees, checks that their difference is at most 1, then recurses into isBalanced(node.left) and isBalanced(node.right).

The problem with this approach is that isBalanced calls getHeight at every node it visits, and getHeight itself visits every node in its subtree. At each level of the isBalanced recursion, the getHeight calls collectively touch O(n) nodes. With O(log n) levels in a balanced tree this yields O(n log n) work, and in the worst case the cost degrades to O(n²).

For a skewed tree (effectively a linked list), the naive approach degrades to O(n²) time in the worst case. The root of the inefficiency is that height information computed during the isBalanced traversal is discarded, then recomputed when isBalanced recurses into the children. The optimized approach eliminates this redundancy by computing height and checking balance in the same single pass.
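A minimal sketch of the naive two-function approach described above (names such as `get_height` and `is_balanced_naive` are illustrative, not from the problem statement):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def get_height(node):
    """Height in nodes: 0 for an empty subtree, 1 for a leaf."""
    if node is None:
        return 0
    return 1 + max(get_height(node.left), get_height(node.right))

def is_balanced_naive(node):
    """Check the balance condition here, then recurse into both children.

    get_height re-walks each subtree, so height work is repeated at
    every level of the is_balanced_naive recursion.
    """
    if node is None:
        return True
    if abs(get_height(node.left) - get_height(node.right)) > 1:
        return False
    return is_balanced_naive(node.left) and is_balanced_naive(node.right)

balanced = TreeNode(1, TreeNode(2), TreeNode(3))
skewed = TreeNode(1, TreeNode(2, TreeNode(3, TreeNode(4))))
print(is_balanced_naive(balanced))  # True
print(is_balanced_naive(skewed))    # False
```

This is correct but wasteful: the heights computed inside `get_height` are thrown away as soon as the balance check at the current node finishes.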

💡

The O(n) Trick: Compute Height and Check Balance in One Recursion

Instead of calling a separate getHeight function at each node, combine height computation and balance checking into a single post-order DFS. Return the actual height when the subtree is balanced, or -1 as a sentinel when it is unbalanced. This way each node is visited exactly once, reducing time complexity from O(n²) to O(n).

Optimized Post-Order DFS — Height or -1 as Sentinel

The optimized solution defines a single helper function that performs post-order DFS. The function returns the height of the subtree rooted at the current node if that subtree is balanced, or -1 if it is unbalanced. This dual-purpose return value eliminates the need for a separate boolean — the caller can detect imbalance simply by checking if the return value is -1.

At each node, the helper first recurses into the left child and right child (post-order: children before parent). If either recursive call returns -1, the current subtree is immediately declared unbalanced and the function returns -1 without examining the other subtree. If both calls return valid heights, the function checks whether the absolute difference between left and right heights exceeds 1. If it does, return -1. Otherwise, return 1 + max(leftHeight, rightHeight) as the height of this subtree.

The base case is the null node: it returns 0 (a null subtree has height 0, contributing no edges). The main function calls the helper on the root and checks whether the result is -1 (unbalanced) or a non-negative integer (balanced). This single DFS visits each node exactly once, giving O(n) time and O(h) space for the call stack where h is the tree height.

  1. Define helper(node): if node is None/null, return 0
  2. leftHeight = helper(node.left)
  3. If leftHeight == -1, return -1 immediately (left subtree unbalanced)
  4. rightHeight = helper(node.right)
  5. If rightHeight == -1, return -1 immediately (right subtree unbalanced)
  6. If abs(leftHeight - rightHeight) > 1, return -1 (current node unbalanced)
  7. Else return 1 + max(leftHeight, rightHeight)
  8. Main: return helper(root) != -1

Why -1 Works as a Sentinel — Heights Are Always Non-Negative

The -1 sentinel works cleanly because tree heights are always non-negative integers. The height of a null subtree is 0, the height of a leaf is 1, and heights only increase as you go up the tree. Since -1 can never be a legitimate height value, it is safe to repurpose it as an "unbalanced" signal without any ambiguity.

Once any subtree returns -1, every ancestor in the call stack also returns -1 immediately without performing any further work. This early-exit propagation means that as soon as an imbalance is detected deep in the tree, the recursion short-circuits all the way up to the root. No unnecessary height comparisons are performed after the first imbalance is found.

The final answer in the main function is simply: return helper(root) != -1. If helper returns any non-negative integer, the tree is balanced (that integer is the height of the full tree). If helper returns -1, the tree is unbalanced. There is no need for a separate boolean flag or a global variable — the return value carries both pieces of information.

ℹ️

The Sentinel -1 Serves Double Duty: Stops Computation AND Signals the Answer

The sentinel value -1 serves double duty: it stops further height computation by causing every ancestor to return -1 immediately (early exit), AND it signals the final answer to the caller (helper(root) == -1 means unbalanced). This is a common functional programming trick — encoding a boolean condition into an otherwise impossible value of the return type to avoid extra state.

Comparing O(n²) vs O(n) — Why the Difference Matters

In the naive approach, getHeight has O(n) time complexity because it visits every node in the subtree. The isBalanced function calls getHeight at every node it visits — and isBalanced itself visits every node. For a balanced tree of n nodes, the root triggers getHeight on both halves (O(n) total), each child triggers getHeight on its halves (O(n/2) each, O(n) total again), and so on for O(log n) levels. The total work is O(n log n) for balanced and O(n²) for skewed trees.

In the optimized approach, each node is visited exactly once. The helper function computes the height and checks balance in a single pass through the tree. There is no redundant re-traversal — the height of a subtree is computed once and immediately used by the parent. The result is O(n) time for all tree shapes, whether balanced, skewed, or anything in between.

Space complexity is O(h) in both cases due to the call stack depth, where h is the height of the tree. For a balanced tree, O(h) = O(log n). For a skewed tree, O(h) = O(n). The optimized approach does not improve space complexity, only time complexity — but the time improvement from O(n²) to O(n) is significant for large trees.

  1. Naive: getHeight is O(n) and called once per node = O(n) × O(n) = O(n²)
  2. Naive: for balanced tree, total work is actually O(n log n) due to halving
  3. Naive: for skewed tree (worst case), total work is O(n²)
  4. Optimized: each node visited exactly once in one DFS pass = O(n)
  5. Optimized: height and balance checked simultaneously — no redundant traversal
  6. Both: space complexity O(h) for call stack — O(log n) balanced, O(n) skewed
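The gap between the two approaches can be made visible by counting node visits. The sketch below (helper names like `perfect_tree`, `naive_visits`, and `optimized_visits` are illustrative) instruments both versions on a perfect binary tree of 1,023 nodes:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def perfect_tree(h):
    """Build a perfect binary tree of height h (2**h - 1 nodes)."""
    if h == 0:
        return None
    return TreeNode(0, perfect_tree(h - 1), perfect_tree(h - 1))

def naive_visits(root):
    """Count node visits made by the naive isBalanced + getHeight pair."""
    count = 0
    def get_height(node):
        nonlocal count
        if node is None:
            return 0
        count += 1
        return 1 + max(get_height(node.left), get_height(node.right))
    def is_balanced(node):
        if node is None:
            return True
        if abs(get_height(node.left) - get_height(node.right)) > 1:
            return False
        return is_balanced(node.left) and is_balanced(node.right)
    is_balanced(root)
    return count

def optimized_visits(root):
    """Count node visits made by the single-pass post-order helper."""
    count = 0
    def helper(node):
        nonlocal count
        if node is None:
            return 0
        count += 1
        left = helper(node.left)
        if left == -1:
            return -1
        right = helper(node.right)
        if right == -1:
            return -1
        if abs(left - right) > 1:
            return -1
        return 1 + max(left, right)
    helper(root)
    return count

root = perfect_tree(10)          # 1023 nodes
print(naive_visits(root))        # grows like n * log n
print(optimized_visits(root))    # exactly n = 1023
```

On this balanced input the naive version performs roughly eight times as many node visits as the single-pass version, and the ratio grows with the tree size.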

Code Walkthrough — Python and Java Solutions

Python implementation defines a nested helper function inside isBalanced. The helper takes a node and returns an int: the height if balanced, -1 if not. Base case: if not node, return 0. Then left = helper(node.left); if left == -1: return -1. Then right = helper(node.right); if right == -1: return -1. Then if abs(left - right) > 1: return -1. Else: return 1 + max(left, right). The outer function returns helper(root) != -1. Total: about 10 lines. O(n) time, O(h) space.
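That walkthrough can be sketched as follows (a minimal `TreeNode` class is included so the snippet runs standalone):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

class Solution:
    def isBalanced(self, root):
        def helper(node):
            # Returns the subtree height if balanced, or -1 as the sentinel.
            if not node:
                return 0
            left = helper(node.left)
            if left == -1:
                return -1          # left subtree unbalanced: short-circuit
            right = helper(node.right)
            if right == -1:
                return -1          # right subtree unbalanced: short-circuit
            if abs(left - right) > 1:
                return -1          # imbalance at the current node
            return 1 + max(left, right)
        return helper(root) != -1

# Balanced: a single node. Unbalanced: a left chain of three nodes.
print(Solution().isBalanced(TreeNode(1)))                            # True
print(Solution().isBalanced(TreeNode(1, TreeNode(2, TreeNode(3)))))  # False
```

Each node is visited once, and the sentinel propagates straight up the call stack as soon as any imbalance is found.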

Java implementation uses a private helper method int checkHeight(TreeNode node). Base case: if node == null, return 0. Compute int left = checkHeight(node.left); if left == -1, return -1. Compute int right = checkHeight(node.right); if right == -1, return -1. If Math.abs(left - right) > 1, return -1. Return 1 + Math.max(left, right). The public method isBalanced calls return checkHeight(root) != -1. The -1 sentinel avoids the need for a class-level boolean field or an int[] wrapper, keeping the solution clean.

An alternative Python pattern uses a tuple return: the helper returns (is_balanced, height) as a pair instead of encoding balance into the sign of height. This is more explicit but requires destructuring at every call site and is slightly more verbose. The -1 sentinel approach is more idiomatic in competitive programming contexts where concision matters. Both are correct — choose whichever is clearer to you and your team.
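A sketch of that tuple-return variant, assuming the same `TreeNode` shape (the function name `is_balanced_tuple` is illustrative):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def is_balanced_tuple(root):
    """Explicit variant: helper returns (is_balanced, height) as a pair."""
    def helper(node):
        if node is None:
            return True, 0
        left_ok, left_h = helper(node.left)
        if not left_ok:
            return False, 0        # height no longer matters once unbalanced
        right_ok, right_h = helper(node.right)
        if not right_ok:
            return False, 0
        return abs(left_h - right_h) <= 1, 1 + max(left_h, right_h)
    return helper(root)[0]

print(is_balanced_tuple(TreeNode(1, TreeNode(2), TreeNode(3))))      # True
print(is_balanced_tuple(TreeNode(1, TreeNode(2, TreeNode(3)))))      # False
```

The logic is identical to the sentinel version; it simply trades the compact `-1` encoding for an explicit boolean at the cost of destructuring at every call site.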

⚠️

Check Left AND Right Before Comparing: Short-Circuit to Prune Work

Always check if left == -1 immediately after computing leftHeight, before computing rightHeight. If the left subtree is already unbalanced, there is no need to traverse the right subtree at all. This short-circuit evaluation prunes unnecessary work and is especially important for large unbalanced trees where the imbalance is on the left side. Missing this optimization does not affect worst-case complexity but meaningfully reduces constant factors in practice.

Ready to master algorithm patterns?

YeetCode flashcards help you build pattern recognition through active recall and spaced repetition.

Start practicing now