Review of number systems and Binary codes
Decimal Number System
The decimal number system uses the base 10 and consists of ten digits, 0 through 9. Numbers greater than 9 are described using positional weights.
Starting at the Least Significant Digit (LSD), the positional value is 10⁰ or 1, with the next positional value being 10¹ or 10. For each more significant digit, the positional value goes up by a factor of 10.
For example, the number 1024₁₀ can be represented by multiplying each digit in the number by its positional value:
1×10³ + 0×10² + 2×10¹ + 4×10⁰ = 1024₁₀.
Binary Number System
The binary number system uses the base 2 and consists of two binary digits (bits), 0 and 1. Numbers greater than 1 are described using positional notation.
The Least Significant Bit (LSB) positional value is 2⁰ or 1, with the next more significant value being 2¹ or 2. As in decimal, each bit is multiplied by its positional value.
An example of an 8-bit binary number is 01000110₂. On its own, a binary number conveys little to a human reader; to appreciate its size it must be converted to decimal.
To convert a binary number to decimal, each bit in the binary number is multiplied by its positional value and the results are summed. Using the number given previously (01000110₂):
0×2⁷ + 1×2⁶ + 0×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 1×2¹ + 0×2⁰ = 70₁₀.
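The conversion procedure above can be sketched in Python (a minimal illustration of the positional-value method):

```python
# Convert a binary string to decimal by summing bit x positional value,
# mirroring the hand calculation above.
def binary_to_decimal(bits: str) -> int:
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * (2 ** position)
    return total

print(binary_to_decimal("01000110"))  # 70
```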
Bit Group names
Bits are put together in groups. Common bit groupings are shown below along with their corresponding names.
1. 4 bits – nibble
2. 8 bits – byte
3. 16 bits – word
4. 32 bits – double word
5. 64 bits – quad word
Counting in Binary
Counting in binary is similar to counting in decimal. The table below shows a decimal count from 0 to 15 and the equivalent binary count. Notice that a 4-bit grouping is used.
Decimal   Binary      Decimal   Binary
0         0000        8         1000
1         0001        9         1001
2         0010        10        1010
3         0011        11        1011
4         0100        12        1100
5         0101        13        1101
6         0110        14        1110
7         0111        15        1111
Binary Arithmetic
Arithmetic can be performed on binary numbers. The combinations for adding and subtracting two bits are shown in the tables below.
Addition
A   B   Sum   Carry
0   0   0     0
0   1   1     0
1   0   1     0
1   1   0     1
Subtraction (A − B)
A   B   Difference   Borrow
0   0   0            0
0   1   1            1
1   0   1            0
1   1   0            0
Adding Two 8-Bit numbers
The procedure for adding two 8-bit binary numbers is shown below. Start adding bits from the Least Significant Bit (LSB) column. If a carry is generated, carry it into the next column.
Adding Two 8-Bit Numbers

Carry:   1  1  0  0  0  0  1
         0  1  1  1  0  0  0  1
      +  0  1  1  0  1  0  0  1
      -------------------------
Sum:     1  1  0  1  1  0  1  0

(The rightmost column is the LSB; it receives no carry in.)
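The column-by-column procedure can be sketched in Python; the function below returns both the sum and the carry generated out of each column (a simple illustration, assuming both inputs have the same width):

```python
# Add two equal-width binary numbers column by column, starting at the LSB,
# and record the carry generated out of each column.
def add_binary(a: str, b: str) -> tuple:
    carry = 0
    sum_bits, carry_bits = [], []
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        sum_bits.append(str(total % 2))
        carry = total // 2
        carry_bits.append(str(carry))
    return "".join(reversed(sum_bits)), "".join(reversed(carry_bits))

total, carries = add_binary("01110001", "01101001")
print(total)    # 11011010
print(carries)  # carries out of each column, MSB-first
```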
Hexadecimal number system
Hexadecimal (Hex) is a number system that uses the base 16. Hex numbers consist of 16 digits: 0 through 9 and the letters A through F. The table below shows the Hex equivalents for the decimal numbers 0 through 15.
Hexadecimal Numbers
Dec  Hex     Dec  Hex     Dec  Hex     Dec  Hex
0    0       4    4       8    8       12   C
1    1       5    5       9    9       13   D
2    2       6    6       10   A       14   E
3    3       7    7       11   B       15   F
Converting Hex to Decimal
To convert a Hex number to decimal, use the following steps:
1. Find the decimal equivalent for each Hex digit.
2. Multiply each decimal value by the Hex positional value.
3. Add the results from Step 2 to find the decimal value.
As an example, convert 3AF₁₆ to decimal.
1. From the table above you can determine that:
3₁₆ = 3₁₀, A₁₆ = 10₁₀, F₁₆ = 15₁₀.
2. Multiply each decimal value by its Hex positional value and add the results to find the decimal equivalent:
3×16² + 10×16¹ + 15×16⁰ = 943₁₀.
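The same three steps can be sketched in Python (a minimal illustration):

```python
# Convert a hex string to decimal: look up each digit's value, multiply
# by its positional weight (a power of 16), and sum the results.
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(hex_str: str) -> int:
    total = 0
    for position, digit in enumerate(reversed(hex_str.upper())):
        total += HEX_DIGITS.index(digit) * (16 ** position)
    return total

print(hex_to_decimal("3AF"))  # 943
```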
Converting Hex to Binary
To convert a Hex number to a binary number, use the following steps:
1. Write down the Hex number.
2. Below the number, write the 4-bit binary equivalent of each Hex digit.
3. Combine the 4-bit groups into one binary number.
As an example, convert 3AF₁₆ to binary.
Converting Hex to Binary

Hex number:        3      A      F
4-bit equivalent:  0011   1010   1111
Combined bits:     001110101111
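The digit-by-digit substitution can be sketched in Python:

```python
# Replace each hex digit with its 4-bit binary equivalent and
# concatenate the groups, following the three steps above.
def hex_to_binary(hex_str: str) -> str:
    return "".join(format(int(digit, 16), "04b") for digit in hex_str)

print(hex_to_binary("3AF"))  # 001110101111
```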
1’s Complement (Diminished Radix Complement)
All ‘0’s become ‘1’s.
All ‘1’s become ‘0’s.
Example: (10110000)₂ → (01001111)₂.
2’s Complement (Radix Complement)
Take 1’s complement then add 1.
(or)
Toggle all bits to the left of the first ‘1’ from the right.
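Both definitions can be sketched in Python on fixed-width bit strings:

```python
# 1's complement: flip every bit.
# 2's complement: take the 1's complement, then add 1 (kept to the same width).
def ones_complement(bits: str) -> str:
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits: str) -> str:
    value = int(ones_complement(bits), 2) + 1
    return format(value % (2 ** len(bits)), "0{}b".format(len(bits)))

print(ones_complement("10110000"))  # 01001111
print(twos_complement("10110000"))  # 01010000
```

Note that `twos_complement("10110000")` also matches the shortcut rule: keep `10000` (up to the first 1 from the right) and toggle the bits to its left.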
Classification of binary codes
The codes are broadly categorized into the following categories.
• Weighted Codes.
• Non-Weighted Codes.
• Binary Coded Decimal Codes.
• Alphanumeric Codes.
• Error Detecting Codes.
Duality Theorem
The Duality Theorem states that, starting with a Boolean relation, we can derive another Boolean relation by:
(i) changing each OR operation (+) to an AND operation (.), and each AND operation to an OR operation;
(ii) complementing any 0 or 1 appearing in the expression.
Problem
Here we will discuss the complement and the dual of a given expression.
Solution
Theorems of Boolean algebra
The theorems of Boolean algebra can be used to simplify many a complex Boolean expression and also to transform the given expression into a more useful and meaningful equivalent expression.
The theorems are presented as pairs, with the two theorems in a given pair being the dual of each other. These theorems can be very easily verified by the method of perfect induction.
Theorem 1
(a) 0.X = 0 and (b) 1+X= 1.
Here X is not necessarily a single variable; it could be a term or even a large expression. Theorem 1(a) can be proved by substituting all possible values of X, that is 0 and 1, into the given expression and checking whether the LHS equals the RHS.
• For X = 0, LHS = 0.X = 0.0 = 0 = RHS.
• For X= 1, LHS = 0.1 = 0 = RHS.
Thus, 0.X =0 irrespective of the value of X and hence the proof. Theorem 1(b) can be proved in a similar manner. In general, according to theorem 1, 0. (Boolean expression) = 0 and 1+ (Boolean expression) =1.
For example: 0.(A.B+B.C+C.D) = 0 and 1+(A.B+B.C+C.D) = 1, where A, B, C and D are Boolean variables.
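The perfect-induction proof above can be sketched in Python, modeling AND as `&` and OR as `|`:

```python
# Perfect induction: substitute every possible value of X (0 and 1)
# and check that both sides of each identity agree.
for X in (0, 1):
    assert 0 & X == 0      # Theorem 1(a): 0.X = 0
    assert 1 | X == 1      # Theorem 1(b): 1 + X = 1
print("Theorem 1 holds for X = 0 and X = 1")
```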
Theorem 2
(a) 1.X = X and (b) 0+X = X.
where X could be a variable, a term or even a large expression. According to this theorem, ANDing 1 with a Boolean expression or ORing 0 with it makes no difference to the expression.
• For X = 0, LHS = 1.0 = 0= RHS.
• For X = 1, LHS = 1.1 = 1 = RHS.
Also, 1.(Boolean expression) = Boolean expression and 0 + (Boolean expression) = Boolean expression.
For example: 1.(A+B.C+C.D) = 0+(A+B.C+C.D) = A+B.C+C.D.
Theorem 3 (Idempotent or Identity Laws)
(a) X.X.X……X = X and (b) X+X+X +···+X = X.
Theorems 3(a) and (b) are known as the idempotent laws, also called the identity laws.
Theorem 3(a) is a direct outcome of an AND gate operation, whereas theorem 3(b) represents an OR gate operation when all the inputs of the gate have been tied together. The scope of idempotent laws can be expanded further by considering X to be a term or an expression. For example, let us apply idempotent laws to simplify the following Boolean expression:
Theorem 4 (Complementation Law)
(a) X.X′ = 0 and (b) X+X′ = 1.
According to this theorem, in general any Boolean expression ANDed with its complement yields 0 and ORed with its complement yields 1, irrespective of the complexity of the expression.
Hence, theorem 4(a) is proved. Since theorem 4(b) is the dual of theorem 4(a), its proof is implied.
The example below further illustrates the application of complementation laws:
Theorem 5 (Commutative property)
A mathematical identity, called a property or a law, describes how different variables relate to each other in a system of numbers.
One of these properties is known as the commutative property and it applies equally to addition and multiplication.
In essence, the commutative property tells us we can reverse the order of variables that are either added together or multiplied together without changing the truth of the expression:
Commutative property of addition, A + B = B + A.
Commutative property of multiplication, AB = BA.
Theorem 6 (Associative Property)
This property tells us we can associate groups of added or multiplied variables together with parentheses without altering the truth of the equations.
Associative property of addition, A +(B + C) = (A + B) + C.
Associative property of multiplication, A (BC) = (AB) C.
Theorem 7 (Distributive Property)
The distributive property shows how to expand a Boolean expression formed by the product of a sum and, in reverse, how terms may be factored out of Boolean sums-of-products:
Distributive property, A (B + C) = AB + AC.
Theorem 8 (Absorption Law or Redundancy Law)
(a) X+X.Y = X and (b) X.(X+Y) = X.
The proof of the absorption law is straightforward: X+X.Y = X.(1+Y) = X.1 = X. Theorem 8(b) is the dual of theorem 8(a) and hence stands proved.
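Both absorption laws can also be verified by perfect induction over all values of X and Y, sketched here in Python:

```python
from itertools import product

# Exhaustively check the absorption laws for every (X, Y) pair.
for X, Y in product((0, 1), repeat=2):
    assert (X | (X & Y)) == X     # Theorem 8(a): X + X.Y = X
    assert (X & (X | Y)) == X     # Theorem 8(b): X.(X + Y) = X
print("Absorption laws hold for all X, Y")
```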
Noise margin and Noise immunity:
1. A circuit's ability to tolerate noise signals is referred to as its noise immunity.
2. A quantitative measure of noise immunity is called the noise margin.
Noise margin: NMH = VOH − VIH (high level) or NML = VIL − VOL (low level).
Weighted Codes
These codes are positionally weighted: each position within the binary equivalent of a number is assigned a fixed value. Thus they obey the positional weighting principle.
Binary Coded Decimal (BCD) is an example of a weighted code. In this code, the binary equivalent of a number always remains the same.
Examples of positively weighted codes are 8421, 2421, 5421 and 5211; 642-3, 631-1, 84-2-1 and 74-2-1 are negatively weighted codes (one or more of the position weights is negative).
OR gate using NAND gates
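The idea behind building an OR gate from NANDs alone can be sketched in Python: a NAND with both inputs tied together acts as an inverter, and by De Morgan's theorem, NANDing the two inverted inputs yields OR.

```python
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def or_from_nand(a: int, b: int) -> int:
    # Invert each input with a NAND wired as an inverter, then NAND the
    # results: (A'.B')' = A + B by De Morgan's theorem.
    return nand(nand(a, a), nand(b, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, or_from_nand(a, b))
```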
Implementation of the Boolean expression for EXOR gate using NAND and NOR gates
The EX-OR gate expression is Y = A′B + AB′ (that is, Y = A ⊕ B).
i)EX-OR using NAND
Step 1: Draw in AOI logic.
Step 2: Add bubbles at the outputs of the AND gates and at the inputs of the OR gate.
Step 3: Replace by NAND symbol.
ii)EX-OR using NOR
Step 1: In AOI logic, add bubbles at the output of OR gate and input of AND gate.
Step 2: Add inverters in the line that received bubbles and remove double inversions.
Step 3: Replace by NOR Symbol.
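The resulting NAND-only realisation of EX-OR can be sketched in Python using the classic four-NAND network (a simulation of the gate structure, not the original figure):

```python
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def xor_from_nand(a: int, b: int) -> int:
    # Classic four-NAND realisation of A'B + AB'.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nand(a, b) == (a ^ b)
print("XOR truth table matches")
```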
Problem
Subtraction using 1's Complement: Here we perform (11010)₂ − (10000)₂ using the 1's complement method.
Solution
The 1's complement of 10000 is 01111. Add it to the minuend:
11010 + 01111 = 101001. The end-around carry is removed and added back into the LSB:
01001 + 1 = 01010.
So (11010)₂ − (10000)₂ = (01010)₂ = 10₁₀.
Subtraction using 2's Complement: Let us perform (11011)₂ − (100101)₂ using the 2's complement method.
Solution
Pad the minuend to six bits: 11011 ⇒ 011011.
Take the 2's complement of the subtrahend: 100101 ⇒ 011010 (1's complement) ⇒ 011011 (add 1).
Add: 011011 + 011011 = 110110. There is no carry out of the MSB, so the result is negative and is held in 2's complement form.
Magnitude: the 2's complement of 110110 is 001010 = 10₁₀, so the answer is −10₁₀.
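The 1's complement method can be sketched in Python for the case where the minuend is not smaller than the subtrahend (the assumption made in the worked example above):

```python
# Subtraction via the 1's complement method on fixed-width bit strings,
# assuming a >= b so the end-around carry is generated.
def subtract_ones_complement(a: str, b: str) -> str:
    width = len(a)
    mask = (1 << width) - 1
    total = int(a, 2) + (int(b, 2) ^ mask)   # a + 1's complement of b
    if total >> width:                        # end-around carry
        total = (total & mask) + 1
    return format(total, "0{}b".format(width))

print(subtract_ones_complement("11010", "10000"))  # 01010 (decimal 10)
```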
DeMorgan’s Theorem
Theorem 1: (A.B)′ = A′ + B′
This theorem states that the complement of a product is equal to the sum of the individual complements.
Theorem 2: (A + B)′ = A′.B′
This theorem states that the complement of a sum is equal to the product of the individual complements.
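Both theorems can be checked by perfect induction for single-bit variables, sketched in Python:

```python
from itertools import product

# Perfect-induction check of De Morgan's theorems.
def NOT(x: int) -> int:
    return 1 - x

for A, B in product((0, 1), repeat=2):
    assert NOT(A & B) == NOT(A) | NOT(B)   # (A.B)' = A' + B'
    assert NOT(A | B) == NOT(A) & NOT(B)   # (A + B)' = A'.B'
print("De Morgan's theorems hold for all A, B")
```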
Problem
Here we see an example of implementing the following Boolean function with NAND - NAND logic.
Y = AC + ABC + ĀBC + AB + D
Solution
NAND - NAND logic: by De Morgan's theorem, the two-level AND-OR form converts directly to Y = ((AC)′.(ABC)′.(ĀBC)′.(AB)′.D′)′.
Let us see an example of expressing the following switching circuit in binary logic notation.
Solution
L = AC + BC + ABC
Here we see an example of reducing a given expression.
Solution
Let us prove that (a) a + āb = a + b.
Solution
(a) a + āb = a + b
a + āb = a + ab + āb
= a + b(a + ā)    (since a + ā = 1)
= a + b
L.H.S. = R.H.S.
Let us see an example of simplifying a given expression.
Solution
1.2 Error detection and correction codes (Parity and Hamming code)
The most common cause of errors is noise that creeps into the bit stream during transmission from the transmitter to the receiver.
If these errors are not detected and corrected, the results could be disastrous, since digital systems are very sensitive to errors and will malfunction due to the slightest error in transmitted codes.
There are various methods of error detection and correction, such as the addition of extra bits, also called check bits or redundant bits, since they carry no information of their own.
The various codes used for error detection and correction in digital systems are:
• Simple Parity check.
• Two-dimensional Parity check.
• Checksum.
• Cyclic redundancy check.
Parity Code
A parity bit is added to a transmitted string of bits so that any single-bit error in the data can be detected at the receiver end.
Basically, a parity code is nothing but an extra bit added to a string of data. There are two types of parity: even parity and odd parity.
We get even parity when the total number of 1's in the string, including the extra bit, is even. Similarly, we get odd parity when, after adding the extra bit, the total number of 1's in the data is odd.
We can understand this with an example. Suppose we have the eight-bit ASCII code 01000001.
If the added bit is 0, the number becomes 001000001. The total number of 1's is even, so we have even parity. If instead we add a 1, the number becomes 101000001.
Here the number of 1's is 3, which is odd, so we have odd parity. Even parity is normally used and has almost become a convention.
A parity check can detect a single-bit error, but it fails if two bits in the data change, and this is the biggest drawback of the scheme. That is why there are several other codes that can detect and correct multi-bit errors.
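The even-parity scheme described above can be sketched in Python:

```python
# Compute an even-parity bit: the extra bit is chosen so that the total
# number of 1's (data plus parity bit) is even.
def even_parity_bit(data: str) -> str:
    return "1" if data.count("1") % 2 else "0"

word = "01000001"                       # 8-bit ASCII example from the text
print(even_parity_bit(word) + word)     # 001000001
```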
Error Correcting Codes
The techniques that we have discussed so far can detect errors but do not correct them. Error Correction can be handled in two ways.
• In one, when an error is discovered, the receiver has the sender retransmit the entire data unit. This is known as backward error correction.
• In the other, the receiver uses an error-correcting code which automatically corrects certain errors. This is known as forward error correction.
In theory it is possible to correct any number of errors automatically. Error-correcting codes are more sophisticated than error-detecting codes and require more redundant bits. The number of bits required to correct a multiple-bit or burst error is so high that in most cases it is inefficient to do so. For this reason, most error correction is limited to one-, two- or at most three-bit errors.
Single-bit error correction
The concept of error correction can be easily understood by examining the simplest case: single-bit errors. As we have already seen, a single-bit error can be detected by adding a parity bit (VRC) to the data being sent.
A single additional bit can detect an error, but it is not sufficient to correct it. To correct an error, one has to know its exact position, i.e. exactly which bit is in error.
For example, to correct a single-bit error in an ASCII character, the error correction must determine which of the seven bits is in error. To do this, we have to add some additional redundant bits.
To calculate the number of redundant bits (r) required to correct d data bits, let us find the relationship between the two. With d + r as the total number of transmitted bits, r must be able to indicate at least d + r + 1 different states.
Of these, one state means no error and the remaining d + r states indicate the location of an error in each of the d + r positions. So d + r + 1 states must be distinguishable by r bits, and r bits can indicate 2^r states. Hence 2^r must be at least d + r + 1, i.e. 2^r ≥ d + r + 1.
The value of r is determined by putting the value of d into this relation. For example, if d is 7, the smallest value of r that satisfies the relation is 4, so the total number of transmitted bits is 11 (d + r = 7 + 4 = 11).
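Finding the smallest r that satisfies the relation can be sketched in Python:

```python
# Smallest number of redundant bits r such that 2**r >= d + r + 1.
def redundant_bits(d: int) -> int:
    r = 1
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(redundant_bits(7))  # 4
```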
Now let us examine how we can manipulate these bits to discover which bit is in error. A technique developed by R. W. Hamming provides a practical solution. The coding scheme he developed is commonly known as the Hamming code. The Hamming code can be applied to data units of any length and uses the relationship between the data bits and the redundant bits as discussed.
Positions of redundancy bits in hamming code
Basic approach for error detection by using Hamming code is as follows:
• To each group of m information bits, k parity bits are added to form an (m+k)-bit code, as shown in the figure.
• The location of each of the (m+k) digits is assigned a decimal value.
• The k parity bits are placed in positions 1, 2, 4, ..., 2^(k−1), and k parity checks are performed on selected digits of each code word.
• At the receiving end the parity bits are recalculated. The decimal value of the k parity checks gives the bit position in error, if any.
Use of Hamming code for error correction of 4-bit data
The figure shows how the Hamming code is used to correct 4-bit numbers (d4 d3 d2 d1) with the help of three redundant bits (r4, r2, r1). For the example, consider the data 1010. First r1 (= 0) is calculated from the parity of bit positions 1, 3, 5 and 7. Then the parity bit r2 is calculated from bit positions 2, 3, 6 and 7. Finally, the parity bit r4 is calculated from bit positions 4, 5, 6 and 7, as shown.
If any bit of the transmitted code 1010010 is corrupted, the bit position in error can be found by recalculating the three parity checks (c3 c2 c1) at the receiving end. For example, if the received code word is 1110010, the recalculated value of c3 c2 c1 is 110, which indicates that the bit position in error is 6, the decimal value of 110.
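The encode-and-locate procedure for this (7,4) example can be sketched in Python (even parity, with parity bits at positions 1, 2 and 4 as in the text):

```python
# Hamming (7,4): encode 4 data bits (d4 d3 d2 d1) with even-parity bits
# r1, r2, r4 at positions 1, 2, 4; then locate a single-bit error from
# the recomputed parity checks.
def hamming_encode(d4: int, d3: int, d2: int, d1: int) -> list:
    r1 = d1 ^ d2 ^ d4        # checks positions 1, 3, 5, 7
    r2 = d1 ^ d3 ^ d4        # checks positions 2, 3, 6, 7
    r4 = d2 ^ d3 ^ d4        # checks positions 4, 5, 6, 7
    return [r1, r2, d1, r4, d2, d3, d4]   # index i holds bit position i+1

def error_position(code: list) -> int:
    c1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    c2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    c3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    return c3 * 4 + c2 * 2 + c1           # 0 means no error

code = hamming_encode(1, 0, 1, 0)         # data 1010 from the example
code[5] ^= 1                              # corrupt bit position 6
print(error_position(code))               # 6
```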
Parity Generator: The circuit that generates the parity bit in the transmitter is called a parity generator.
Parity Checker: The circuit that checks the parity in the receiver is called parity checker.