Problem Overview
LeetCode 371 — Sum of Two Integers — gives you two integers a and b and asks you to return their sum without using the + or - operators. This forces you to implement integer addition purely through bitwise operations. The problem is rated Medium and is a staple of bit manipulation interviews because it tests whether you understand how addition actually works at the hardware level — XOR for the sum bits and AND with left shift for the carry bits.
The problem is deceptively simple to state but requires you to think carefully about what addition means in binary. Every addition you have ever done on a computer ultimately reduces to XOR and AND operations at the transistor level. LeetCode 371 asks you to expose that underlying mechanism and implement it explicitly. Understanding this problem deeply gives you insight into how CPUs compute arithmetic.
The problem appears in the Bit Manipulation section of the LeetCode Top Interview 150 and LeetCode 75 lists. Interviewers use it to distinguish candidates who understand binary arithmetic from those who merely know syntax. The constraints include negative values of a and b, which introduces the two's complement consideration discussed later.
- Given two integers a and b, return a + b without using + or -
- Input range: -1000 <= a, b <= 1000 (LeetCode version)
- Must use bitwise operations only: XOR, AND, OR, shifts
- Negative numbers use two's complement representation
- Python requires special handling due to infinite-precision integers
How Addition Works in Binary
Binary addition follows four simple rules for two single bits: 0 + 0 = 0 (sum=0, carry=0); 0 + 1 = 1 (sum=1, carry=0); 1 + 0 = 1 (sum=1, carry=0); and 1 + 1 = 10 in binary (sum=0, carry=1). The sum bit — the rightmost digit of the result — follows the XOR truth table exactly. The carry bit — the leftmost digit when both inputs are 1 — follows the AND truth table exactly. This is not a coincidence: XOR is the mathematical definition of addition modulo 2.
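The four single-bit rules above can be checked mechanically: the sum bit of x + y is (x + y) mod 2, and the carry bit is (x + y) div 2. A small Python sketch verifying that these match XOR and AND:

```python
# verify the single-bit addition rules against XOR and AND
for x in (0, 1):
    for y in (0, 1):
        assert (x + y) % 2 == x ^ y    # sum bit follows the XOR truth table
        assert (x + y) // 2 == x & y   # carry bit follows the AND truth table
```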
When adding multi-bit integers, you perform this single-bit addition at every bit position simultaneously. The XOR of the two full integers gives all the sum bits at once, as if every bit position were added independently without any carry. The AND of the two integers gives all the positions where a carry is generated. Left-shifting that AND result by 1 moves each carry to the next higher bit position where it needs to be added — exactly what happens when you carry the 1 in elementary school long addition.
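This decomposition is an algebraic identity: a + b equals the carry-less sum plus the shifted carries. A quick Python check over a few arbitrarily chosen pairs (note the identity itself also holds for negatives; it is only the loop termination that Python complicates):

```python
# a + b == (carry-less sum) + (carries shifted into position)
for a, b in [(3, 1), (5, 7), (12, 9), (-3, 5)]:
    assert a + b == (a ^ b) + ((a & b) << 1)
```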
This insight is the entire algorithm. You XOR the two numbers to get a partial sum (with carry discarded), then AND and left-shift to get the carry that must be propagated. You then treat the partial sum as the new a and the carry as the new b, and repeat. Each iteration, the carry shifts further left and eventually falls off the 32-bit range, becoming zero. When carry is zero, the XOR result is the final answer.
XOR is "addition without carry" and AND is "carry detection" — together they replicate what the + operator does at the hardware level
Every addition circuit in every CPU is built from XOR gates (for the sum) and AND gates (for the carry). The half adder, which is the most basic building block of an arithmetic logic unit, is literally just one XOR gate and one AND gate. LeetCode 371 asks you to implement a ripple-carry adder in software: XOR for the sum, AND + left shift to propagate the carry, iterate until no carry remains. This is not a trick — it is the actual mechanism your hardware uses millions of times per second.
The Algorithm
The algorithm is a simple while loop. Each iteration computes two values: the sum bits (XOR of a and b, which gives the sum without carry) and the carry bits (AND of a and b, left-shifted by 1, which gives the carry positioned for the next bit). Then a is set to the sum and b is set to the carry. The loop continues until b (the carry) becomes zero, at which point a holds the complete sum.
The loop terminates because each iteration the carry shifts one position to the left. A 32-bit integer has only 32 bit positions. After at most 32 iterations, the carry has shifted all the way out of the representable range and becomes zero. In practice, the loop terminates much sooner for typical inputs — often in just a few iterations — because carries rarely propagate across all 32 bits.
This algorithm correctly handles all cases for fixed-width integers including zeros, the maximum and minimum values, and all intermediate values. The case where a = 0 terminates immediately since 0 XOR b = b and 0 AND b = 0. The case where a and b have no overlapping set bits (no carries at all) also terminates in one iteration: a XOR b gives the exact sum and a AND b = 0.
1. While b != 0: compute sum = a XOR b (sum bits without carry)
2. Compute carry = (a AND b) << 1 (carry bits shifted to next position)
3. Set a = sum
4. Set b = carry
5. When b == 0, return a (which holds the complete sum)
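The steps above can be sketched directly in Python, assuming non-negative inputs so that the carry naturally reaches zero (negative inputs need the masking covered in the Handling Negative Numbers section; the function name is illustrative):

```python
def get_sum_nonneg(a, b):
    # carry-propagation loop; safe in Python only for non-negative inputs
    while b != 0:
        partial = a ^ b        # sum bits, carries discarded
        carry = (a & b) << 1   # carries, moved one position higher
        a, b = partial, carry
    return a
```

For example, get_sum_nonneg(3, 1) returns 4 after three iterations of the loop.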
Why Left Shift the Carry
The left shift by 1 is the most important operation in the algorithm and the one most candidates struggle to justify. When two bits are both 1, they produce a carry that must be added not to the current bit position but to the next higher bit position. In column addition, this is the "carry the 1" step. Left-shifting by 1 moves every carry bit from the position where it was generated to the position where it must be added — exactly one higher.
Consider adding 3 (011) and 1 (001) in binary. XOR gives 010 (rightmost bits: 1 XOR 1 = 0; middle bits: 1 XOR 0 = 1). AND gives 001 (both rightmost bits are 1, so a carry is generated). Left-shifting the AND by 1 gives 010: the carry now sits at position 1 where it must be added. Now a = 010 and b = 010. Second iteration: XOR gives 000 and AND gives 010, which shifts to 100. Third iteration: XOR gives 100 and AND gives 000, so the loop terminates. The result is 100 = 4 = 3 + 1. Correct.
The left shift is what makes the carry "ripple" upward through the bit positions, just as it does in the pencil-and-paper algorithm. Without the shift, you would be adding the carry back to the same position it came from, creating an infinite loop. The shift ensures progress: each carry bit moves strictly left, and after at most 32 shifts it exits the 32-bit window and vanishes.
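The 3 + 1 trace can be reproduced as a short Python sketch that records the (a, b) state at each iteration:

```python
# trace the carry rippling upward while adding 3 and 1
a, b = 0b011, 0b001
steps = []
while b:
    steps.append((a, b))
    a, b = a ^ b, (a & b) << 1
# steps visited: (3, 1) -> (2, 2) -> (0, 4); final a is 4
```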
The loop terminates because each iteration the carry shifts left — eventually shifting out of the 32-bit range to become 0
A rigorous termination proof: every set bit of a AND b is also a set bit of b, so the lowest set bit of the new carry (a AND b) << 1 is strictly higher than the lowest set bit of the old b. The position of the lowest set bit of the carry therefore increases by at least one every iteration. Since a 32-bit integer has only 32 bit positions, after at most 32 iterations the carry shifts out of the 32-bit range to 0. In languages with fixed-width integers like Java and C++, this termination is guaranteed automatically. In Python, masking is required, as explained in the next section.
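The 32-iteration bound can be exercised with a masked helper that counts iterations; the worst case is a carry that ripples through every bit position, e.g. 0x7FFFFFFF + 1 (helper name is illustrative):

```python
def add_trace(a, b):
    # masked add that also counts loop iterations
    MASK = 0xFFFFFFFF
    iters = 0
    while b & MASK:
        a, b = (a ^ b) & MASK, ((a & b) << 1) & MASK
        iters += 1
    return a, iters

# worst case: the carry ripples through all 32 bit positions
result, iters = add_trace(0x7FFFFFFF, 1)
# result is 0x80000000, the wrapped 32-bit pattern for INT_MIN
```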
Handling Negative Numbers
In Java and C++ with fixed-width 32-bit integers, negative numbers are stored in two's complement representation. Two's complement has the beautiful property that the same XOR-and-carry algorithm produces the correct result for negative inputs without any special casing. Adding -1 (0xFFFFFFFF in 32-bit two's complement) and 1 (0x00000001) produces 0x100000000, but the carry bit falls off the 32-bit integer and the result is 0. This is correct: -1 + 1 = 0. Two's complement makes signed addition identical to unsigned addition at the bit level.
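This wraparound can be observed directly in Python by clamping to 32 bits with a mask (the + here is used only to illustrate the two's complement overflow behavior, not as part of the solution):

```python
MASK = 0xFFFFFFFF
# -1 is stored as 0xFFFFFFFF in 32-bit two's complement
assert -1 & MASK == 0xFFFFFFFF
# adding 1 carries out of bit 31; the wrapped 32-bit result is 0
assert (0xFFFFFFFF + 1) & MASK == 0
```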
Python, however, uses arbitrary-precision integers — there is no 32-bit overflow. When you compute (a AND b) << 1 for negative numbers in Python, the carry never naturally reaches zero because the integer can grow arbitrarily wide. The carry keeps shifting left forever, causing an infinite loop. The fix is to mask both a and b to 32 bits at the start of each iteration using & 0xFFFFFFFF. This simulates the fixed-width overflow behavior of hardware.
After the loop in Python, if the result has bit 31 set (meaning it would be negative in two's complement), you must perform sign extension to convert it back to a Python signed integer. Check if result & 0x80000000 != 0; if so, compute result - 0x100000000 to get the correct negative Python integer. This converts the raw 32-bit pattern back into the signed value it represents.
1. Java/C++: no special handling needed — two's complement and 32-bit overflow handle negatives automatically
2. Python step 1: mask both a and b each iteration: a &= 0xFFFFFFFF; b &= 0xFFFFFFFF
3. Python step 2: run the XOR/AND/shift loop with masking until b == 0
4. Python step 3: after loop, if a & 0x80000000: return a - 0x100000000 (sign extend)
5. Python step 4: else return a (it is already a non-negative result)
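The sign-extension step can be isolated into a small helper that reinterprets a raw 32-bit pattern as a signed value (helper name is illustrative):

```python
def to_signed32(x):
    # reinterpret a raw 32-bit pattern as a signed two's-complement value
    return x - 0x100000000 if x & 0x80000000 else x
```

For example, to_signed32(0xFFFFFFFF) yields -1, while values with bit 31 clear pass through unchanged.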
Code Walkthrough: Java and Python
Java implementation: the while loop is clean and requires no masking. public int getSum(int a, int b) { while (b != 0) { int carry = (a & b) << 1; a = a ^ b; b = carry; } return a; }. Java ints are 32 bits with two's complement, so negative numbers and overflow both work correctly with no additional code. The time complexity is O(1) — at most 32 iterations. The space complexity is O(1) — only three variables.
Python implementation requires the 32-bit mask. def getSum(a, b): mask = 0xFFFFFFFF; while b & mask: carry = ((a & b) << 1) & mask; a = (a ^ b) & mask; b = carry; return a if a <= 0x7FFFFFFF else a - 0x100000000. The mask is applied to both the XOR result and the carry each iteration. After the loop, a holds a 32-bit value; if bit 31 is set, subtract 2^32 to get the correct signed Python integer. The time and space complexity are the same O(1).
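Spelled out with comments, the same Python logic reads:

```python
def getSum(a, b):
    MASK = 0xFFFFFFFF          # clamp every intermediate value to 32 bits
    while b & MASK:            # loop while any carry remains within 32 bits
        carry = ((a & b) << 1) & MASK   # carries, shifted and clamped
        a = (a ^ b) & MASK              # sum bits without carry, clamped
        b = carry
    # if bit 31 is set, the 32-bit pattern represents a negative number
    return a if a <= 0x7FFFFFFF else a - 0x100000000
```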
Both implementations share the same core loop structure. The only differences are the masking in Python and the unsigned-width handling. When walking through this in an interview, start with Java for clarity, then explain the Python variation. Emphasize that the algorithm is O(1) time because the loop runs at most 32 times regardless of the magnitude of the inputs, and O(1) space because no arrays or data structures are used.
Python's infinite-precision integers mean the carry never naturally goes to 0 for negative results — mask both a and b to 32 bits each iteration
A common mistake in Python is writing the loop as while b != 0 without masking. For positive inputs this works fine because the carry naturally diminishes. But for negative inputs, the carry keeps growing: (-1 & 1) << 1 = 2, and the carry propagates upward indefinitely. The fix is to check while b & 0xFFFFFFFF (ensuring b is considered as a 32-bit value) and mask both a and the carry each iteration with & 0xFFFFFFFF. This clamps the computation to 32 bits, exactly as fixed-width hardware would behave. Without masking, your Python solution will loop forever on any input where the mathematical sum is negative.
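The runaway carry can be made visible by running a few unmasked iterations on -1 and 1: Python's unbounded integers let the carry double forever instead of shifting out of a 32-bit window.

```python
# without masking, the carry for -1 + 1 doubles indefinitely in Python
a, b = -1, 1
carries = []
for _ in range(5):
    a, b = a ^ b, (a & b) << 1
    carries.append(b)
# carries grow as 2, 4, 8, 16, 32, ... and never reach 0
```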