# Number Systems

Today, we will go over the types of number systems used by computer programs.
It may not seem too important at the moment, but understanding how these
systems work becomes vital when programming. We will be going over binary,
octal, and hexadecimal.
In our normal everyday life, we tend to look at numbers in the decimal system:
digits go from 0 to 9 before they repeat. You could argue this comes from
counting on our fingers, since we have 10 of them (well, 8 fingers and 2
thumbs). Computers, however, work in binary. They contain bits, or Binary
digITs, which can be either off (0) or on (1). We group these bits into bytes,
which are 8 bits long. This means the most we can represent with a single byte
is 255, or (2^8)-1. The binary representation of the decimal value 14 is "1110".
To convert a number from decimal to binary, consider this:

```
128   64   32   16    8    4    2    1
2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
```
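As a quick sanity check, these place values (and the 255 maximum mentioned above) can be generated in Python:

```python
# Place values of the 8 bits in a byte, from most to least significant.
place_values = [2 ** n for n in range(7, -1, -1)]
print(place_values)       # [128, 64, 32, 16, 8, 4, 2, 1]

# Summing all eight place values gives the largest single-byte value.
print(sum(place_values))  # 255, i.e. (2**8) - 1
```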

So how did I get 1110 from 14? I worked from right to left. I know that 14 is
an even number, so the first bit (2^0) is 0. Next I subtract 2 from 14 to get
12, and I now have the number 10 in binary. Then I subtract 4 from 12, giving
110 in binary with 8 remaining. Finally, I subtract 8 to get 0 in decimal, and
add the final 1 to the binary representation to get 1110. Let's do one more
example:
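The subtraction method described above can be sketched in Python (the function name `to_binary` is my own, for illustration):

```python
def to_binary(n):
    """Convert a non-negative integer to a binary string by subtracting
    out each power of two, from the largest that fits down to 2^0."""
    if n == 0:
        return "0"
    # Find the largest power of two that fits in n.
    power = 1
    while power * 2 <= n:
        power *= 2
    bits = []
    while power >= 1:
        if n >= power:
            bits.append("1")
            n -= power
        else:
            bits.append("0")
        power //= 2
    return "".join(bits)

print(to_binary(14))  # 1110
```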

100: Since it is even, I know that the first bit (2^0) is 0. Because it is a
larger number, I also work from left to right: 100 is less than 128, so the
highest bit (2^7) is 0 as well. Now I start adding up the powers of two, going
from highest to lowest. 64 is 2^6 and 32 is 2^5, so those bits are 1, leaving a
remainder of 4. 4 is 2^2, so I know that bit is 1 as well. The result is
01100100.
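We can double-check this result with Python's built-in base conversions:

```python
# int() with base 2 parses a binary string back to decimal.
print(int("01100100", 2))  # 100

# bin() goes the other way (note it prefixes the result with "0b"
# and drops any leading zeros).
print(bin(100))            # 0b1100100
```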

With octal, we use 8^n rather than 2^n. Digits run from 0 to 7 before the
count rolls over to 10. Octal numerals can be made from binary numerals by
grouping consecutive binary digits together in groups of 3. Octal became
widely used in computing when systems such as the UNIVAC 1050, PDP-8, ICL 1900,
and IBM mainframes employed 6-bit, 12-bit, 24-bit, or 36-bit words. Octal was
an ideal abbreviation of binary for these machines because their word sizes
are divisible by three. To convert a decimal integer to octal, divide the
original number by the largest possible power of 8, then divide the remainder
by successively smaller powers of 8 until the power is 1. The octal
representation is formed by the quotients, written in the order generated by
the algorithm. For example, to convert 125 to octal:

125 = 8^2 * 1 + 61
61 = 8^1 * 7 + 5
5 = 8^0 * 5 + 0

Therefore 125 in decimal is 175 in octal.
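The division-by-powers-of-8 algorithm above can be sketched in Python (again, the function name is my own):

```python
def to_octal(n):
    """Convert a non-negative integer to an octal string by dividing
    by successively smaller powers of 8 and collecting the quotients."""
    if n == 0:
        return "0"
    # Find the largest power of 8 that fits in n.
    power = 1
    while power * 8 <= n:
        power *= 8
    digits = []
    while power >= 1:
        digits.append(str(n // power))  # quotient becomes the next digit
        n %= power                      # remainder carries to the next step
        power //= 8
    return "".join(digits)

print(to_octal(125))  # 175
```

Python's built-in `oct()` performs the same conversion, returning the string with a `0o` prefix.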

Unlike the common way of representing numbers with ten symbols, hexadecimal
uses sixteen distinct symbols: most often "0"–"9" to represent values zero to
nine, and "A"–"F" (or alternatively "a"–"f") to represent values ten to
fifteen. Hexadecimal numerals are widely used by computer system designers and
programmers, as they provide a human-friendly representation of binary-coded
values. Each hexadecimal digit represents four binary digits, also known as a
nibble, which is half a byte. For example, a single byte can hold values
ranging from 00000000 to 11111111 in binary form, which can be conveniently
represented as 00 to FF in hexadecimal. In programming, a number of notations
are used to support hexadecimal representation, usually involving a prefix or
suffix. The prefix 0x is used in C and related languages, so the decimal value
10,995 would be written as 0x2AF3. Hexadecimal is also used in the transfer
encoding Base16, in which each byte of the plain text is broken into two 4-bit
values and represented by two hexadecimal digits. For example, 11001010 in
binary = 202 in decimal = CA in hex. This is because:

1100 = 12 = C
1010 = 10 = A
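The nibble split above can be demonstrated in Python using bit shifts and masks:

```python
byte = 0b11001010           # 202 in decimal

high = byte >> 4            # upper nibble: 1100 -> 12
low = byte & 0x0F           # lower nibble: 1010 -> 10

# format(..., "X") renders a value as an uppercase hex digit.
print(format(high, "X"), format(low, "X"))  # C A

# "02X" pads to two hex digits, giving the whole byte at once.
print(format(byte, "02X"))                  # CA
```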