Foundations For Programming

Binary

Binary is a number system, also called base two: a way of counting using only two different symbols. Base ten, also known as decimal, is the system you already know. Base ten uses ten symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Binary works in exactly the same way as decimal, except for the number of symbols it uses. Binary uses just these two: 0 and 1. It can represent all the same values as decimal; they just look different.
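
For a concrete taste, here is a minimal sketch in Python (the 0b prefix is how Python writes binary literals) showing that the number ten looks different in each system but is the same value:

    # 0b marks a binary literal; both lines print the same value, ten.
    print(0b1010)           # 10
    print(int("1010", 2))   # 10, parsing the string "1010" as base two
    print(bin(10))          # 0b1010, converting back the other way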

Counting

Here is a refresher on counting. To count in decimal, you start with the symbol for nothing (0), then replace it with the symbol for one (1), then replace that with the symbol for two (2), and keep going until you get to the symbol for nine (9). What happens after that? You know, but you probably haven’t thought about exactly what’s going on for a long time. When you reach the highest valued symbol, you write an extra symbol that represents multiples of ten and place it to the left of the lower valued symbol: in this case a 1, to represent one multiple of ten. You must also reduce the lower valued symbol to zero. So to count up from 9, you write another column to its left containing the symbol for one, and understand that by convention it means one multiplied by ten. You also reduce the existing column to zero, knowing that to leave the 9 there would mean ten and nine, totaling nineteen, not the intended ten.

This may seem like a very convoluted way to count to ten, but it is important to think about this detail in order to understand binary.

To count in binary you start with the symbol for nothing (0), then replace it with the symbol for one (1). But already you’ve reached the highest value symbol. So next you write a one to the left, and reduce the existing symbol to zero. In this case, the 1 represents multiples of two, so you have the symbols 10 representing the number two.

As you can see, the system of counting in binary is the same as in decimal; the only difference is the number of symbols, and therefore the value that each digit position in a number represents.
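
To make the carrying rule concrete, here is a small Python sketch (the function name increment and the digit-list representation are illustrative choices, not anything standard) that counts up one step in any base using exactly the procedure described above:

    def increment(digits, base):
        # digits is a list like [1, 0] for "10", most significant digit first.
        i = len(digits) - 1
        while i >= 0:
            if digits[i] < base - 1:
                digits[i] += 1       # room left in this column: just bump it
                return digits
            digits[i] = 0            # highest symbol reached: reset to zero...
            i -= 1                   # ...and carry into the column to the left
        return [1] + digits          # carried past the left edge: a new column

    print(increment([9], 10))  # [1, 0]: nine plus one is ten
    print(increment([1], 2))   # [1, 0]: one plus one is two, in binary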

For illustration, here is the binary and decimal representation of the numbers zero to ten side by side:

Binary  Decimal
     0        0
     1        1
    10        2
    11        3
   100        4
   101        5
   110        6
   111        7
  1000        8
  1001        9
  1010       10
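
The table above can be reproduced in a couple of lines of Python; this sketch uses the built-in bin function and strips its 0b prefix:

    print(f"{'Binary':>6}  {'Decimal':>7}")
    for n in range(11):
        print(f"{bin(n)[2:]:>6}  {n:>7}")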

Bit

A bit is a contraction of Binary Digit. It is the common way to refer to one digit of a binary number. A bit has only two possible values: 0 and 1.

Byte

A byte is a group of eight bits. It is typically the smallest amount of data handled by a computer. It is typical to write bytes in two groups of four digits for easy reading, for example 0101 0011 or 1000 0001. Notice that, unlike the way we usually write numbers in base ten, a byte is written with leading zeros to fill out the eight digits. So the number one written in binary and represented by a byte is 0000 0001.
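
Python’s format specifiers can produce this zero-padded, eight-digit form directly; in this sketch the space between the two groups of four is added by hand (as_byte is just an illustrative helper name):

    def as_byte(n):
        bits = f"{n:08b}"                 # eight binary digits, zero-padded
        return bits[:4] + " " + bits[4:]  # split into two groups of four

    print(as_byte(1))    # 0000 0001
    print(as_byte(83))   # 0101 0011
    print(as_byte(129))  # 1000 0001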

Character

A character is a symbol or grapheme such as a letter of an alphabet, a numerical digit, a punctuation mark, or similar. Characters also include whitespace such as tabs and newlines.
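
In Python, a character is simply a string of length one, and whitespace characters are written with escape sequences; repr makes them visible:

    for ch in ["A", "7", "!", "\t", "\n"]:
        print(repr(ch), "is a character")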

Set

A set in computing is similar to a mathematical set: a group of things in which each member is unique, and the members are in no particular order.
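
Python has a built-in set type that behaves exactly this way: duplicates are discarded, and no particular order is promised.

    s = {"a", "b", "a", "c"}
    print(s)         # {'a', 'b', 'c'} in no guaranteed order; the duplicate is gone
    print(len(s))    # 3
    print("b" in s)  # True: membership tests are what sets are for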

Encoding (Code)

An encoding is a way to represent information in another form. For example, the letter A can be represented by the symbol A, or by the number 65. The letter A could also be represented by a dot followed by a dash, as in Morse code.
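
A minimal sketch in Python: the built-in ord and chr functions convert between a character and the number that encodes it, and a dictionary (just one entry here, for illustration) can stand in for a Morse code table:

    print(ord("A"))  # 65: the number that encodes the letter A
    print(chr(65))   # A: decoding goes the other way

    morse = {"A": ".-"}  # an illustrative one-entry Morse code table
    print(morse["A"])    # .-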

Character Set

A character set is a set of characters in a particular encoding. ASCII is an old example of one commonly used in computing; UTF-8 is a modern example. The ASCII encoding of A is the number 65, and conveniently it is the same in UTF-8.
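
Python can show this agreement directly: encoding the letter A under either name produces a single byte with the value 65.

    print("A".encode("ascii"))        # b'A'
    print("A".encode("utf-8"))        # b'A'
    print(list("A".encode("utf-8")))  # [65]: the byte's numeric value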