Information and Communication Technology
Data is the name given to the information that you store in a computer system.
Data is usually considered to be different from programs, but for practical purposes the two are much the same.
Both are stored in memory as binary numbers, both have their size measured in bytes, and if they were printed out in the form in which they are stored it would be hard to tell them apart.
The image shows the numbers held in a section of memory. There is no easy way of telling if these numbers represent a program, a document, an image or any other type of stored information.
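To make this concrete, here is a small Python sketch (the byte values used are an assumed example, not taken from the image) showing that the same bytes can be read as text, as separate numbers, or as one big number; nothing in the bytes themselves says which interpretation is the right one.

```python
# Five bytes in memory - just numbers; nothing marks what they "are".
# (These particular values are an assumed example.)
data = bytes([72, 101, 108, 108, 111])

print(data.hex(' '))                 # as raw hex: 48 65 6c 6c 6f
print(list(data))                    # as decimal numbers
print(data.decode('ascii'))          # interpreted as text: Hello
print(int.from_bytes(data, 'big'))   # interpreted as one large number
```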
One bit is a single binary digit and can have a value of 1 or 0.
In the early days of computing, several bits were grouped together to form a unit which could hold the code for a single character. The unit was called a Byte and could have anything from 6 to 9 bits in it, depending on the construction of the computer being used.
By the end of the 1960s, the Byte had become a standard size of 8 Bits, and this has continued to the present day.
8 Bits make a Byte.
One Byte can hold any of 256 different values, ranging from Decimal 0 to 255.
Binary 00000000 to 11111111.
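These figures are easy to check in Python (any language with binary formatting would do):

```python
# One byte = 8 bits, so it can hold 2**8 = 256 different values.
assert 2 ** 8 == 256

# The two extremes, written out as 8 binary digits:
print(format(0, '08b'))      # 00000000
print(format(255, '08b'))    # 11111111
assert int('11111111', 2) == 255
```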
Early processors could only deal with 8 bits (1 byte) at a time, so the largest number they could handle in a single operation was 255.
Later processors could deal with 16 bits (2 bytes) at once. These were known as 16-bit processors and could handle numbers up to 65535.
The ones in modern PCs (2013) can deal with 64 bits (8 bytes) at once. These are called 64-bit processors.
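The pattern behind these limits is that an n-bit word can hold 2^n different values, the largest being 2^n - 1. A short Python check:

```python
# Largest unsigned value for each common word size: 2**n - 1.
for n in (8, 16, 32, 64):
    print(f'{n}-bit word: 0 to {2 ** n - 1}')

assert 2 ** 8 - 1 == 255       # 8-bit processors
assert 2 ** 16 - 1 == 65535    # 16-bit processors
```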
Future processors may be able to deal with even larger numbers, but this is unlikely to happen soon, as the main reason for developing 64-bit processing was to enable computers to use a larger number of memory addresses. 32-bit addressing allowed the use of 4 GB of memory; 64-bit addressing allows 4 GB x 4 GB (yes, I know you cannot multiply GBs like that, but I'm sure you get the idea).
The actual figure is 16 exabytes, that is 16 x 1024 x 1024 x 1024 GB.
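The arithmetic behind that figure can be checked directly; here a GB is taken as 1024^3 bytes, as in the text:

```python
GB = 1024 ** 3                      # one gigabyte in bytes

# 32-bit addressing: 2**32 addresses = 4 GB of memory.
assert 2 ** 32 == 4 * GB

# 64-bit addressing: 2**64 addresses = 16 x 1024 x 1024 x 1024 GB,
# i.e. 16 exabytes.
assert 2 ** 64 == 16 * 1024 * 1024 * 1024 * GB
```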
A group of bits which can be dealt with by a processor is called a word. Thus early computers had 8 bit / 1 Byte words, while modern ones may have 64 Bit / 8 Byte words.
Bits and bytes are used for storing and calculating binary numbers.
In the decimal system, the value of a digit in a multi-digit number depends on the column it sits in. So in the decimal number 143:
1 means 1 x 100, 4 means 4 x 10, and 3 means 3 x 1.
In binary, the numbers that you multiply by are different. So in a binary number such as 10001111 the digits mean:
1 x 128, 0 x 64, 0 x 32, 0 x 16, 1 x 8, 1 x 4, 1 x 2, 1 x 1
Giving a total of 128 + 8 + 4 + 2 + 1 = 143.
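The same column-by-column sum can be written as a short Python check:

```python
# Add up each binary digit of 10001111 times its place value.
bits = '10001111'
total = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
print(total)                        # 143
assert total == int(bits, 2) == 143
```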
Unfortunately, apart from the 1, the place values of decimal numbers (1, 10, 100, 1000, etc.) never match the place values of binary numbers (1, 2, 4, 8, 16, etc.).
So if we try to use the usual prefixes for large numbers (kilo = 1000, mega = 1000000, etc.) they don't fit neatly with bytes.
The nearest match between decimal and binary is for a thousand. The decimal for one thousand is 1000, and the binary for 1024 is 10000000000.
1024 has therefore become known as the binary thousand.
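A one-line check of the "binary thousand":

```python
# 2**10 = 1024: the power of two closest to a decimal thousand.
assert 2 ** 10 == 1024
print(format(1024, 'b'))   # 10000000000 - a 1 followed by ten zeros
```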
This means that when dealing with bytes in computing:
1 Kilobyte (KB) = 1024 Bytes
1 Megabyte (MB) = 1024 Kilobytes
1 Gigabyte (GB) = 1024 Megabytes
1 Terabyte (TB) = 1024 Gigabytes
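In round binary terms, each storage unit is 1024 times the one before it; a short sketch (the file size used is an assumed example):

```python
# Each unit is 1024 times the previous one.
KB = 1024            # kilobyte in bytes
MB = 1024 * KB       # megabyte in bytes
GB = 1024 * MB       # gigabyte in bytes

size_in_bytes = 3 * MB             # an assumed example file size
print(size_in_bytes / MB, 'MB')    # 3.0 MB
assert GB == 1024 ** 3
```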
The commonest ways of holding a bit are: