Problem Walkthrough

Pow(x, n) LeetCode 50 — Fast Exponentiation

Solve LeetCode 50 Pow(x, n) using binary exponentiation (fast power) for O(log n) time complexity. Learn how to halve the exponent at each step, handle negative exponents by inverting the base, and safely handle the edge case where n = Integer.MIN_VALUE would overflow a 32-bit integer.



Problem Overview: Implement pow(x, n)

LeetCode 50 Pow(x, n) asks you to implement a function that computes x raised to the power n, where x is a double and n is a 32-bit signed integer. The function must handle positive exponents, zero (x^0 = 1 for any x), and negative exponents where x^(-n) = 1 / x^n. The result is guaranteed to fit in a double, but n can be as large as 2^31 - 1 or as small as -2^31.

The straightforward interpretation — multiply x by itself n times — produces a correct result but is far too slow for large n. With n up to 2 billion, a loop-based approach would time out long before finishing. The key insight is that exponentiation has recursive structure: x^n = (x^2)^(n/2) when n is even, letting you cut the problem in half at each step and reach O(log n) time.

A few edge cases require careful attention: n = 0 returns 1 regardless of x; n = 1 returns x; n < 0 requires inverting x and negating n; and n = -2^31 cannot simply be negated in a 32-bit integer because 2^31 overflows. Recognizing these cases up front prevents hard-to-find bugs in the final implementation.

  • Input: x is a double (-100.0 <= x <= 100.0), n is a 32-bit integer (-2^31 <= n <= 2^31 - 1)
  • Output: x raised to the power n as a double
  • x^0 = 1.0 for any x (including x = 0.0)
  • x^(-n) = 1.0 / x^n — negative exponents invert the result
  • n range: -2,147,483,648 to 2,147,483,647
  • Result guaranteed to not overflow double precision

Why Linear Multiplication Is Too Slow

The naive approach multiplies x by itself n times in a loop: start with result = 1.0 and do result *= x exactly n times. This is O(n) time. For small n — say, n = 10 — this is perfectly fine. But LeetCode 50 allows n up to 2,147,483,647. That is over two billion multiplications, far too many to finish within a typical time limit of one or two seconds.

Even for n in the hundreds of millions, the linear approach would time out. The solution must reduce the number of multiplications from O(n) to something dramatically smaller. The mathematical structure of exponentiation makes this possible: x^8 does not require 8 multiplications — it only requires 3, because x^8 = ((x^2)^2)^2. Each squaring doubles the exponent, so log2(n) squarings suffice to reach x^n.
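The squaring chain for x^8 can be written out directly (a tiny sketch; the variable names are illustrative):

```python
# x^8 via repeated squaring: 3 multiplications instead of 7.
x = 2.0
x2 = x * x      # x^2  (1st multiplication)
x4 = x2 * x2    # x^4  (2nd)
x8 = x4 * x4    # x^8  (3rd)
print(x8)       # 256.0
```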

This observation is the foundation of binary exponentiation, also called fast power or exponentiation by squaring. It is the standard algorithm for modular exponentiation in cryptography, competitive programming, and any domain where exponents are large. Understanding it deeply also illuminates divide-and-conquer thinking more broadly.

💡

Binary Exponentiation: Square the Base, Halve the Exponent

Binary exponentiation reduces O(n) multiplications to O(log n) by exploiting the recurrence: x^n = (x^2)^(n/2) when n is even, and x^n = x * (x^2)^((n-1)/2) when n is odd. Each step halves n, so after at most log2(n) steps n reaches 0. For n = 2^31, that is only 31 steps instead of 2 billion.

Binary Exponentiation: Recursive Formulation

The recursive definition of binary exponentiation is clean: if n == 0, return 1 (base case); if n is even, return pow(x * x, n / 2); if n is odd, return x * pow(x * x, (n - 1) / 2). Each recursive call squares the base and halves the exponent, so the recursion depth is O(log n). The base case n == 0 returns 1 because x^0 = 1 for any x.

For odd n, you cannot divide evenly — n / 2 would discard the remainder. The correction is to multiply the result by one extra factor of x before recursing. This accounts for the "leftover" that halving loses. Equivalently, you can write the odd case as x * pow(x, n - 1), which makes n - 1 even and allows the next step to halve cleanly.

The even case is a tail call, but the odd case leaves a pending multiplication on the stack, so the recursion as written is not tail-recursive. With O(log n) stack frames, the space complexity of the recursive version is O(log n). For most practical values of n this is negligible, but the iterative version below achieves O(1) space by eliminating the call stack entirely.

  1. Base case: if n == 0, return 1.0
  2. If n < 0: set x = 1/x, n = -n (handle overflow separately)
  3. If n is even: return pow(x * x, n / 2)
  4. If n is odd: return x * pow(x * x, (n - 1) / 2)
  5. Recursion depth: O(log n) — at most 31 levels for 32-bit n
  6. Space: O(log n) for recursive call stack
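The steps above translate almost line for line into Python. This is a sketch, not the only valid form — the helper function is my own structuring, and Python's arbitrary-precision integers make the overflow caveat in step 2 a non-issue here:

```python
def my_pow(x: float, n: int) -> float:
    # Step 2: negative exponent means invert the base and flip the sign.
    # Python ints cannot overflow, so negating n is always safe.
    if n < 0:
        x, n = 1.0 / x, -n

    def helper(x: float, n: int) -> float:
        if n == 0:                    # step 1: base case, x^0 = 1
            return 1.0
        if n % 2 == 0:                # step 3: even -> square base, halve exponent
            return helper(x * x, n // 2)
        return x * helper(x * x, (n - 1) // 2)  # step 4: odd -> peel off one x

    return helper(x, n)

print(my_pow(2.0, 10))   # 1024.0
print(my_pow(2.0, -2))   # 0.25
```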

Iterative Version: Bit-by-Bit Exponentiation

The iterative version eliminates the call stack by processing n bit by bit. Initialize result = 1.0. While n > 0: if the lowest bit of n is 1 (n is odd), multiply result by the current x; then square x and right-shift n by 1 (integer divide by 2). Repeat until n reaches 0. The final result holds x raised to the original n.

To understand why this works, think of n in binary. Each bit position k corresponds to a power-of-two exponent 2^k. When bit k is 1, it contributes x^(2^k) to the product. The loop processes each bit from least significant to most significant, squaring x at each step so it equals x^(2^k) by the time bit k is examined. Multiplying result by x when the bit is 1 accumulates exactly the right contributions.

The iterative version runs in O(log n) time and O(1) space — a strict improvement over the recursive version in space. Both have the same number of multiplications. In practice the iterative version is preferred in production code because it avoids any risk of stack overflow for extremely large n and is slightly faster due to the absence of function call overhead.
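The loop just described can be sketched as follows. This version assumes n >= 0; the sign handling discussed in the next section would wrap around it:

```python
def fast_pow(x: float, n: int) -> float:
    """Iterative fast power for n >= 0 (sign handling is covered separately)."""
    result = 1.0
    while n > 0:
        if n & 1:          # lowest bit is 1: fold the current x^(2^k) into result
            result *= x
        x *= x             # advance x to the next power-of-two exponent
        n >>= 1            # move to the next bit
    return result

print(fast_pow(2.0, 10))   # 1024.0
print(fast_pow(3.0, 5))    # 243.0
```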

ℹ️

Iterative Fast Power Processes Binary Bits of n

The iterative version processes the binary representation of n bit by bit: each 1-bit at position k contributes x^(2^k) to the result. The variable x is squared at each iteration so it equals x^(2^k) when processing bit k. Only when the current bit is 1 does result get multiplied by x. This is equivalent to the recursive formulation but uses O(1) space instead of O(log n) stack frames.

Handling Negative Exponents and Overflow

When n is negative, x^n equals (1/x)^(-n) = 1 / x^(-n). The simplest handling is: if n < 0, replace x with 1.0/x and replace n with -n, then run the standard binary exponentiation on the modified x and positive n. This transforms the negative case into a positive one without any structural changes to the algorithm.

The critical edge case is n = -2^31 = -2,147,483,648. This is the minimum value of a 32-bit signed integer. When you attempt to negate it with unary minus, the result is 2^31 = 2,147,483,648, which exceeds the maximum 32-bit signed integer value of 2^31 - 1 = 2,147,483,647. In Java, -Integer.MIN_VALUE is still Integer.MIN_VALUE due to two's complement overflow — a silent, catastrophic bug.

The fix is to widen n to a 64-bit integer (long in Java, which Python handles automatically with arbitrary-precision integers) before negating. In Java: long absN = -(long)n converts correctly because long can represent 2^31. Alternatively you can handle n = Integer.MIN_VALUE as a special case by computing pow(x, n + 1) * (1/x) — adding 1 avoids the overflow and multiplying by 1/x at the end corrects the result.

  1. If n < 0: set x = 1.0 / x
  2. Cast n to long (Java) before negating: long absN = -(long)n
  3. Python handles this automatically — no overflow in Python integers
  4. Edge case n = Integer.MIN_VALUE: -Integer.MIN_VALUE overflows in Java
  5. Alternative: compute pow(x, n+1) then multiply by 1/x once extra
  6. After handling sign, proceed with standard iterative or recursive fast power
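Python cannot reproduce the overflow natively, but wrapping values into a signed 32-bit range makes the Java bug visible. In this sketch, to_int32 is a hypothetical helper that mimics Java int arithmetic via two's complement wrapping:

```python
INT_MIN = -2**31

def to_int32(v: int) -> int:
    """Wrap v into signed 32-bit range, like Java int arithmetic."""
    v &= 0xFFFFFFFF
    return v - 2**32 if v >= 2**31 else v

# Java's -Integer.MIN_VALUE: the negation wraps back to INT_MIN.
print(to_int32(-INT_MIN))   # -2147483648  (still INT_MIN!)

# Widening first (a Python int plays the role of Java's long): exact negation.
print(-INT_MIN)             # 2147483648
```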

Code Walkthrough: Python and Java

In Python, the iterative solution is about 8 lines: handle the sign by inverting x and taking the absolute value of n, then loop while n > 0 — if n & 1, result *= x; x *= x; n >>= 1. Python integers are arbitrary precision, so no overflow concern exists. (Python's built-in pow on integers uses a form of binary exponentiation internally; for float bases, ** defers to the C library.) The explicit implementation demonstrates the technique clearly for interviews.
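Spelled out, that looks like the following (using LeetCode's myPow method signature; a sketch of one accepted shape, not the only one):

```python
class Solution:
    def myPow(self, x: float, n: int) -> float:
        # Negative exponent: invert the base. Python ints never overflow,
        # so n = -n is safe even for n = -2**31.
        if n < 0:
            x, n = 1.0 / x, -n
        result = 1.0
        while n > 0:
            if n & 1:          # bit k is set: multiply in x^(2^k)
                result *= x
            x *= x             # square the base each iteration
            n >>= 1            # shift to the next bit
        return result

print(Solution().myPow(2.0, 10))   # 1024.0
print(Solution().myPow(2.0, -2))   # 0.25
```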

In Java, the key difference is casting n to long before any negation. A clean implementation: long absN = n < 0 ? -(long)n : (long)n; double x = n < 0 ? 1.0/xIn : xIn; double result = 1.0; while (absN > 0) { if ((absN & 1) == 1) result *= x; x *= x; absN >>= 1; } return result. The time complexity of both versions is O(log n) and the space complexity is O(1) for the iterative form and O(log n) for recursive.

A common interview variation is to write the recursive version first (cleaner to explain), then optimize to iterative to eliminate stack space. In practice both are accepted on LeetCode. Knowing both forms and being able to switch between them on demand demonstrates algorithmic fluency. The negative exponent and overflow handling is the most-asked follow-up, so rehearse that section specifically.

⚠️

Java: Cast n to long Before Negating to Avoid Overflow

In Java, -Integer.MIN_VALUE overflows back to Integer.MIN_VALUE due to two's complement arithmetic. Always cast n to long before negating: long absN = -(long)n. Using int arithmetic to negate Integer.MIN_VALUE is a silent overflow bug — the code will compile and run without errors but produce an incorrect result. Python does not have this issue because its integers have arbitrary precision.

Ready to master algorithm patterns?

YeetCode flashcards help you build pattern recognition through active recall and spaced repetition.

Start practicing now