Welcome to a nerdy blog post on how computers deal with negative numbers. I don’t mean numbers with a bad attitude, but rather numbers like -1, -4, and on down to -∞.

Computers count in binary, which is represented to us humans as 1s and 0s. So to a computer, the binary value `1101` means *thirteen* and not *one thousand, one hundred, one*.

In decimal, where most humans count, each digit represents a power of 10:

101 means 1 hundred, 0 tens, and 1 one.

In binary, where most computers count, each digit represents a power of 2:

`101` means 1 four, 0 twos, and 1 one. That translates into the value 5; binary `101` is decimal 5.
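Here's the place-value math above as a quick Python sketch, with the built-in `int(s, 2)` parser as a cross-check:

```python
# Add up binary place values by hand: each position is worth a power of 2.
bits = "101"

value = 0
for i, bit in enumerate(reversed(bits)):
    value += int(bit) * 2 ** i   # rightmost digit is 2**0, then 2**1, 2**2, ...

print(value)            # 5
print(int("101", 2))    # 5, the same result from Python's built-in parser
```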

In most cases, even programmers don’t need to worry about instantly recognizing binary numbers. The computer easily translates them into decimal when necessary. After all, binary numbers can have dozens of digits and get long and confusing quickly:

The value 1,053,276,123 is represented in binary as `111110110001111011011111011011`.
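You can let Python do that translation for you; `bin()` and `int(s, 2)` are built-ins that convert in both directions:

```python
# Long binary strings are easy to generate and parse programmatically.
n = 1_053_276_123
binary = bin(n)                # a string like '0b1111...'

print(binary)                  # 0b111110110001111011011111011011
assert int(binary, 2) == n     # round-trips back to the same decimal value
```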

One thing programmers do have to deal with, however, is negative numbers.

If you’ve been programming, then you know that there are such things as *signed* and *unsigned* integers.

An integer is simply a whole number, one that can easily be represented in binary. A signed integer is one that can be positive or negative in value. An unsigned integer is always positive.

Figuring out unsigned integers is easy. Here is an animation of a four-digit binary number counter, which displays values from `0000` through `1111`, or zero through fifteen decimal:
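If the animation isn't handy, here's a text version of the same counter as a Python sketch, using the `04b` format spec to zero-pad each value to four binary digits:

```python
# Every unsigned 4-bit value, from 0000 up through 1111.
lines = [f"{n:04b} = {n}" for n in range(16)]
print("\n".join(lines))   # 0000 = 0, 0001 = 1, ... 1111 = 15
```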

So how do you make a *negative* binary number?

You’ll note that there is no – sign available above, which is how negative values are represented in decimal.

Well, actually, there is a – sign. It’s called a *sign bit*. When you specify a signed value in programming, you’re telling the computer that the far left bit determines whether the value is positive or negative.

When the bit is on, or 1, the value is negative. When the bit is zero, the value is positive.

The following animation shows the signed values `0000` through `1111`, which represent values from 0 through 7 and then -8 through -1:

Once that far left bit hits 1, the values become negative. Now it may seem like the value `1001` should be -1. After all, binary `0001` is one, and with the sign bit set, `1001` seems fair game for -1. Not so, however.

Binary values wrap from the highest `0111` to the lowest `1000`. That should make sense because `1001` is one greater than `1000`.
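This wrap-around scheme is what's known as two's complement. A minimal sketch of the rule: read the bits as an ordinary unsigned number, then subtract 2⁴ whenever the sign bit is set.

```python
def signed4(bits: str) -> int:
    """Interpret a 4-bit string as a signed (two's-complement) value."""
    n = int(bits, 2)
    return n - 16 if n >= 8 else n   # subtract 2**4 when the far left bit is 1

print(signed4("0111"))  # 7, the highest signed value
print(signed4("1000"))  # -8, the lowest: the wrap-around point
print(signed4("1001"))  # -7, one greater than -8 (not -1!)
```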

And if it doesn’t make sense, just trust me: It works.

Or just keep staring at the animation above and you’ll get it, or you’ll start seeing things.

The number of bits determines the range. For four bits (above), the range is 0 to 15 unsigned and -8 to 7 signed. Here are the ranges for other, more common integer sizes used in programming:

Type | Bits | Unsigned Range | Signed Range |
---|---|---|---|
char | 8 | 0 to 255 | -128 to 127 |
short int | 16 | 0 to 65,535 | -32,768 to 32,767 |
int / long int | 32 | 0 to 4,294,967,295 | -2,147,483,648 to 2,147,483,647 |
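Every row in the table follows from the bit count alone, so you never have to memorize it. A quick sketch of the arithmetic:

```python
def int_ranges(bits: int):
    """Return the (unsigned, signed) value ranges for a given bit width."""
    unsigned = (0, 2 ** bits - 1)                          # all bits count toward magnitude
    signed = (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)     # one bit reserved for the sign
    return unsigned, signed

for width in (4, 8, 16, 32):
    print(width, *int_ranges(width))
# 8 bits gives unsigned (0, 255) and signed (-128, 127), matching the char row
```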

Does this information help you in real life? Of course not! But it’s yet another curious thing about computers, and it also explains why you see values such as 127 or 255 or 65535 again and again in technology.