Problem Walkthrough

Number of 1 Bits LeetCode 191 Deep Dive

LeetCode 191 — Number of 1 Bits — asks you to count the number of set bits (1 bits) in the binary representation of an unsigned integer, also called the Hamming weight. Three approaches cover the full interview spectrum: Loop and Check iterates 32 times checking the last bit with n & 1 and right-shifting; Brian Kernighan's trick uses the identity n & (n-1) to clear exactly the lowest set bit each iteration, running in O(k) time where k is the number of 1 bits rather than the full 32 iterations; and built-in popcount functions like Python's bin(n).count('1'), Java's Integer.bitCount(n), and C++'s __builtin_popcount(n) handle the problem in a single call. Interviewers expect the manual Brian Kernighan implementation, an explanation of why n & (n-1) works, and awareness of the signed vs unsigned right shift pitfall in Java.


Number of 1 Bits

LeetCode 191

Problem Overview

LeetCode 191 — Number of 1 Bits — gives you an unsigned integer and asks you to return the number of 1 bits in its binary representation. This count is formally called the Hamming weight or popcount. For example, the integer 11 is 1011 in binary and has three 1 bits, so the answer is 3. The integer 128 is 10000000 and has one 1 bit, so the answer is 1. The integer 4294967293 (0xFFFFFFFD) has thirty-one 1 bits.

The problem is rated Easy, but it is a gateway to a broad family of bit manipulation techniques. Interviewers use it to probe whether you can work directly with binary representations, understand the difference between logical and arithmetic right shifts, and apply non-obvious bitwise identities like n & (n-1). A complete answer covers at least two approaches and explains the trade-offs.

The constraints specify that n is treated as an unsigned 32-bit integer. In Python this is natural since integers are arbitrary-precision, but in Java and C++ you must be careful with the right shift operator. The follow-up asks: if this function is called many times, how would you optimize? The answer involves precomputing a lookup table for 8-bit or 16-bit chunks and stitching together the results.

  • Input: unsigned 32-bit integer n
  • Output: integer count of 1 bits (Hamming weight / popcount)
  • Constraints: 0 <= n <= 2^32 - 1 (treated as unsigned)
  • Follow-up: optimize for many calls — precompute lookup table for 8-bit or 16-bit chunks
  • Brute force O(32) loop is acceptable; Brian Kernighan O(k) is the interview-preferred answer
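The lookup-table follow-up can be sketched as follows. This is a minimal illustration, not the only layout: the table size (8-bit chunks here) and the function name `hamming_weight_lookup` are my own choices.

```python
# Precompute popcounts for all 256 possible byte values once,
# then answer each query with four table lookups.
TABLE = [bin(i).count("1") for i in range(256)]

def hamming_weight_lookup(n: int) -> int:
    """Count 1 bits in a 32-bit unsigned integer via an 8-bit lookup table."""
    return (TABLE[n & 0xFF]
            + TABLE[(n >> 8) & 0xFF]
            + TABLE[(n >> 16) & 0xFF]
            + TABLE[(n >> 24) & 0xFF])
```

Building the table costs 256 small popcounts up front; every subsequent call is four array reads and three additions, which is why this pays off when the function is called many times.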

Loop and Check Each Bit

The straightforward approach checks each of the 32 bits one at a time. Use the expression n & 1 to isolate the least significant bit: if the result is 1, that bit is set and you increment the count. Then right-shift n by one position (n >>>= 1 in Java, using the unsigned shift; n >>= 1 in Python) to move the next bit into the least significant position. Repeat exactly 32 times to cover all bits of a 32-bit integer.

This approach runs in O(32) = O(1) time and O(1) space regardless of the input. It is simple to implement and trivial to verify: the loop always runs exactly 32 times, the check-and-shift body is just two lines, and the result accumulates in a single counter variable. For an interview warm-up or a competitive programming submission, this is perfectly acceptable.

The main drawback is that the loop always runs 32 iterations even if n has only one or two 1 bits. For an integer like 1 (binary: 00000000 00000000 00000000 00000001), the loop processes 31 zero bits and one 1 bit. Brian Kernighan's trick eliminates wasted work by jumping directly from one set bit to the next.
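The loop-and-check approach described above can be sketched in a few lines of Python (the function name `hamming_weight_loop` is mine):

```python
def hamming_weight_loop(n: int) -> int:
    """Loop-and-check: inspect each of the 32 bits exactly once."""
    count = 0
    for _ in range(32):
        count += n & 1   # add 1 if the least significant bit is set
        n >>= 1          # move the next bit into the LSB position
    return count
```

Note that in Python the plain `>>` is safe because Python integers are non-negative arbitrary-precision values here; the signed-shift pitfall discussed later applies to Java.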

💡

Brian Kernighan's trick is faster in practice: n & (n-1) clears the lowest set bit, so the loop runs exactly k times where k = number of 1 bits

The loop-and-check approach always runs exactly 32 iterations regardless of how many 1 bits are in n. Brian Kernighan's trick replaces this with a loop that runs exactly k iterations where k is the Hamming weight. For sparse integers — those with only a few 1 bits — this is a significant practical speedup. Both are O(1) in theory, but Kernighan's is always faster or equal in practice. In an interview, mention loop-and-check first to show you understand the basics, then pivot to Brian Kernighan as the preferred implementation.

Brian Kernighan's Trick

Brian Kernighan's bit-counting trick exploits the identity n & (n-1) to clear the lowest set bit of n in a single operation. The algorithm is: initialize count to 0; while n is not zero, execute n = n & (n-1) and increment count; return count. Each iteration removes exactly one 1 bit from n, so the loop runs exactly k times where k is the number of 1 bits — the Hamming weight itself.

This approach is O(k) time where k is the number of 1 bits, which is always less than or equal to 32 for a 32-bit integer. For dense integers (many 1 bits), it performs similarly to the loop-and-check approach. For sparse integers (few 1 bits), it is dramatically faster — an integer with a single 1 bit takes exactly one iteration rather than up to 32. The space complexity is O(1).

The trick was popularized by Brian Kernighan and is documented in the book 'The C Programming Language.' It appears throughout competitive programming and systems code wherever popcount operations are needed without hardware support. Understanding why n & (n-1) works — not just that it works — is what interviewers expect.

  1. Initialize count = 0
  2. While n != 0: execute n = n & (n - 1) and increment count by 1
  3. Each operation clears exactly the lowest set bit of n
  4. The loop runs exactly k times where k is the number of 1 bits
  5. Return count
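The steps above translate directly into code. A minimal Python sketch (the function name `hamming_weight_kernighan` is my own):

```python
def hamming_weight_kernighan(n: int) -> int:
    """Brian Kernighan: each n &= n - 1 clears exactly the lowest set bit."""
    count = 0
    while n:
        n &= n - 1   # drop the lowest set bit
        count += 1
    return count
```

The loop body runs once per set bit, so an input like 1 terminates after a single iteration instead of 32.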

Why n & (n-1) Clears the Lowest Bit

To understand why n & (n-1) clears exactly the lowest set bit, consider what subtracting 1 does in binary. If the lowest set bit of n is at position p, then n ends with a 1 at position p followed by zeros below it. Subtracting 1 flips the bit at position p from 1 to 0, and fills all the zero positions below p with 1s. All bits above position p remain unchanged.

When you AND n with (n-1), the bits above position p are the same in both, so they survive the AND unchanged. The bit at position p is 1 in n and 0 in (n-1), so the AND produces 0 at position p — clearing it. The bits below position p are 0 in n and 1 in (n-1), so the AND produces 0 — they stay zero. The net result: exactly the lowest set bit is cleared, and nothing else changes.

Example: n = 12, binary 1100. n-1 = 11, binary 1011. n & (n-1) = 1100 & 1011 = 1000 = 8. The lowest set bit was at position 2 (value 4), and it has been cleared. Another iteration: n = 8, binary 1000. n-1 = 7, binary 0111. n & (n-1) = 1000 & 0111 = 0000. The loop terminates after 2 iterations — correctly reporting that 12 has two 1 bits.
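The worked example for n = 12 can be reproduced with a small trace helper (a hypothetical illustration; the 4-bit formatting is chosen to match the example, and `trace_kernighan` is my own name):

```python
def trace_kernighan(n: int):
    """Record (before, after) binary states for each n &= n - 1 step.

    Uses 4-bit formatting, which fits the n = 12 example in the text.
    """
    steps = []
    while n:
        steps.append((format(n, "04b"), format(n & (n - 1), "04b")))
        n &= n - 1
    return steps

# trace_kernighan(12) yields [('1100', '1000'), ('1000', '0000')]:
# two iterations, matching the two 1 bits of 12.
```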

ℹ️

This trick appears in many bit problems: check if power of two (n & (n-1) == 0), count bits, and lowest set bit extraction

The identity n & (n-1) is one of the most reusable tools in bit manipulation. To check if n is a power of two: a power of two has exactly one 1 bit, so n & (n-1) == 0 (with n > 0). To extract the lowest set bit: n & (-n) gives the value of the lowest set bit (this is because -n in two's complement is ~n + 1, which sets the lowest bit and clears everything above it). To clear the lowest set bit: n & (n-1) as shown here. These three patterns cover a large fraction of all bit manipulation interview problems.
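The three reusable patterns just listed can be written out as one-liners (function names are mine, chosen for clarity):

```python
def is_power_of_two(n: int) -> bool:
    """A power of two has exactly one set bit, so n & (n - 1) == 0 (n > 0)."""
    return n > 0 and n & (n - 1) == 0

def lowest_set_bit(n: int) -> int:
    """n & -n isolates the lowest set bit (two's complement identity)."""
    return n & -n

def clear_lowest_set_bit(n: int) -> int:
    """n & (n - 1) clears the lowest set bit, as used in Kernighan's trick."""
    return n & (n - 1)
```

For n = 12 (binary 1100): `is_power_of_two(12)` is False, `lowest_set_bit(12)` is 4, and `clear_lowest_set_bit(12)` is 8.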

Built-In Functions

Most production languages provide a built-in popcount function backed by a hardware POPCNT instruction where available. In Python, bin(n).count('1') converts n to its binary string representation and counts the '1' characters — simple and correct but not the fastest. Python also provides int.bit_count() in Python 3.10+, which uses the hardware instruction directly.

In Java, Integer.bitCount(n) returns the number of 1 bits in a 32-bit integer. It is implemented using a sequence of bit-twiddling operations that compute the popcount in parallel across groups of bits — effectively summing pairs, then groups of four, then groups of eight, and so on. In C++, __builtin_popcount(n) calls the compiler intrinsic which maps to the POPCNT instruction on x86 processors with SSE4.2 support.

Using built-in functions is appropriate in competitive programming where correctness and speed matter more than demonstrating understanding. In a technical interview, however, the interviewer almost always expects you to implement the bit counting manually. Mention the built-in as an alternative, implement Brian Kernighan's trick as your primary answer, and explain why the manual implementation is preferred for demonstrating algorithmic knowledge.

  1. Python: bin(n).count('1') — convert to binary string, count '1' characters
  2. Python 3.10+: n.bit_count() — uses hardware POPCNT instruction
  3. Java: Integer.bitCount(n) — parallel bit summation, hardware-backed
  4. C++: __builtin_popcount(n) — compiler intrinsic mapping to POPCNT
  5. Interview: mention built-ins, but implement Brian Kernighan manually as your primary answer
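The two Python built-ins above can be compared side by side; the version guard is needed because int.bit_count only exists from Python 3.10 onward:

```python
import sys

n = 4294967293  # 0xFFFFFFFD, which has thirty-one 1 bits

# String-based popcount: works on any Python version.
print(bin(n).count("1"))   # 31

# Native popcount, available only on Python 3.10+.
if sys.version_info >= (3, 10):
    print(n.bit_count())   # 31
```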

Code Walkthrough Python and Java

Python — loop and check: def hammingWeight(n): count = 0; for _ in range(32): count += n & 1; n >>= 1; return count. Python — Brian Kernighan: def hammingWeight(n): count = 0; while n: n &= n - 1; count += 1; return count. Python — one-liner: def hammingWeight(n): return bin(n).count('1'). All three are O(1) time and O(1) space. Brian Kernighan is the interview favorite for demonstrating bit manipulation depth.

Java — loop and check with unsigned right shift: public int hammingWeight(int n) { int count = 0; for (int i = 0; i < 32; i++) { count += n & 1; n >>>= 1; } return count; }. Java — Brian Kernighan: public int hammingWeight(int n) { int count = 0; while (n != 0) { n &= n - 1; count++; } return count; }. Java — built-in: return Integer.bitCount(n). The Brian Kernighan version avoids the signed/unsigned right shift issue entirely because it does not use right shift at all.

All three approaches share O(1) time and O(1) space complexity for 32-bit inputs. The loop-and-check approach always runs exactly 32 iterations. Brian Kernighan runs k iterations where k is the Hamming weight — optimal. Built-ins run in a single instruction on modern hardware. In an interview, lead with Brian Kernighan, explain the n & (n-1) identity, mention the Java unsigned right shift caveat for the loop approach, and note the built-in as a competitive programming shortcut.

⚠️

In Java, use unsigned right shift (>>>), not signed right shift (>>); in C++, use an unsigned type. Signed shift preserves the sign bit and can cause infinite loops on negative numbers

In Java, the >> operator is a signed right shift that replicates the sign bit. For a negative int (which has a 1 in the most significant bit), >> 1 fills the new leftmost bit with 1, so the number never becomes zero — the loop-and-check approach will run forever. The fix is to use >>> (unsigned right shift), which always fills the new leftmost bit with 0 regardless of the sign bit. Brian Kernighan's approach sidesteps this issue entirely since it uses n &= n-1 and no right shift. In C++, use unsigned int or uint32_t, for which >> performs a logical (zero-filling) shift.
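Since Java's two shift operators behave differently only at the sign bit, the difference can be simulated on a 32-bit bit pattern in Python. This is a sketch for illustration; the helper names `java_signed_shift` and `java_unsigned_shift` are my own:

```python
def java_signed_shift(n: int) -> int:
    """Simulate Java's n >> 1 on a 32-bit int: the sign bit is replicated."""
    n &= 0xFFFFFFFF
    if n & 0x80000000:                 # negative in two's complement
        return (n >> 1) | 0x80000000   # refill the vacated top bit with 1
    return n >> 1

def java_unsigned_shift(n: int) -> int:
    """Simulate Java's n >>> 1: the vacated top bit is always filled with 0."""
    return (n & 0xFFFFFFFF) >> 1

n = 0x80000000  # bit pattern of Integer.MIN_VALUE
# signed:   0xC0000000, 0xE0000000, ... converges to 0xFFFFFFFF, never 0
# unsigned: 0x40000000, 0x20000000, ... halves its way down to 0
```

Iterating `java_signed_shift` on this value never reaches zero, which is exactly why the Java loop-and-check implementation with >> hangs on negative inputs.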

Ready to master algorithm patterns?

YeetCode flashcards help you build pattern recognition through active recall and spaced repetition.

Start practicing now