Number Systems and Codes part 3
Arithmetic operations in binary
Addition / subtraction
01011 + 111 = ?
“Method” exactly the same as decimal
Arithmetic operations in binary
Addition
X = x_n … x_i … x_0
+ Y = y_n … y_i … y_0
__________________
      d_i
d_i = (x_i + y_i + carry-in) mod r
Addition
For ( i = 0…n ) do
di = (xi + yi + carry-in) mod r
carry-out = (xi + yi + carry-in) div r
End for
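As a concrete illustration of the loop above, here is a minimal Python sketch of digit-wise addition; the function name add_digits and the digit-list representation (least significant digit first) are illustrative choices, not part of the slides.

```python
def add_digits(x, y, r):
    """Add two numbers given as digit lists (least significant digit first) in radix r."""
    carry = 0
    d = []
    for xi, yi in zip(x, y):              # assumes equal length; pad with zeros otherwise
        s = xi + yi + carry
        d.append(s % r)                   # digit of the sum
        carry = s // r                    # carry-out becomes carry-in of the next position
    d.append(carry)                       # final carry, if any
    return d

# 01011 + 00111 in binary (LSB first): 11 + 7 = 18
print(add_digits([1, 1, 0, 1, 0], [1, 1, 1, 0, 0], 2))  # [0, 1, 0, 0, 1, 0] -> 010010 = 18
```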
Arithmetic operations in binary
Subtraction
Can you write an equivalent method
(algorithm) for subtraction?
Algorithm: a systematic sequence of
steps/operations that describes how to
solve a problem
Arithmetic operations in binary
Subtraction
X = x_n … x_i … x_0
- Y = y_n … y_i … y_0
___________________________________________________
      d_i
For ( i = 0 … n ) do
  d_i = ( x_i - y_i - b_{i-1} ) mod r
  borrow-out b_i = 1 if ( x_i - y_i - b_{i-1} ) < 0, else 0
End for
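A matching Python sketch for subtraction with borrow, under the same illustrative digit-list convention (least significant digit first); sub_digits is a hypothetical name.

```python
def sub_digits(x, y, r):
    """Subtract y from x (digit lists, least significant digit first, x >= y) in radix r."""
    borrow = 0
    d = []
    for xi, yi in zip(x, y):
        s = xi - yi - borrow
        d.append(s % r)                   # Python's % already yields a digit in 0..r-1
        borrow = 1 if s < 0 else 0        # borrow-out feeds the next position
    return d

# 01011 - 00111 in binary (LSB first): 11 - 7 = 4
print(sub_digits([1, 1, 0, 1, 0], [1, 1, 1, 0, 0], 2))  # [0, 0, 1, 0, 0] -> 00100 = 4
```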
Subtraction
Arithmetic operations in binary
Multiplication
For ( i = 0…n ) do
d_i = ( x_i · y_i + c_{i-1} ) mod r
c_i = ( x_i · y_i + c_{i-1} ) div r
End for
Is this correct?
Negative numbers (4 traditions):
Signed magnitude
Radix complement
Diminished radix complement
Excess-b (biased)
n = number of symbols (digits, bits, etc.)
r = radix
radix complement of x is
r^n - x = ((r^n - 1) - x) + 1
e.g.
n = 4, r = 10
7216 --> 9999 - 7216 + 1 = 2784 (10s complement)
n = 4, r = 2
0101 --> 1111 - 0101 + 1 = 1011 (2s complement)
diminished radix complement of x is
(r^n - 1) - x
e.g.
n = 4, r = 10
7216 --> 9999 - 7216 = 2783 (9s complement)
n = 4, r = 2
0101 --> 1111 - 0101 = 1010
(1s complement)
Note: operation is reversible (ignoring overflow)
i.e. complement of complement returns original pattern
(for both radix and diminished radix forms)
for zero and its complement:
10s complement: 0000 --> 9999 + 1 = 10000 --> 0000
9s complement: 0000 --> 9999 --> 0000
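The two complements can be checked numerically with a short sketch; radix_complement and diminished_radix_complement are illustrative names, and the modulo in the first function simply discards the carry so that the complement of zero comes back as zero, as noted above.

```python
def radix_complement(x, n, r):
    """r^n - x, e.g. the 10's or 2's complement of x in n digits."""
    return (r**n - x) % r**n              # % keeps the complement of 0 at 0 (carry discarded)

def diminished_radix_complement(x, n, r):
    """(r^n - 1) - x, e.g. the 9's or 1's complement of x in n digits."""
    return (r**n - 1) - x

print(radix_complement(7216, 4, 10))                    # 2784   (10s complement)
print(bin(radix_complement(0b0101, 4, 2)))              # 0b1011 (2s complement)
print(diminished_radix_complement(7216, 4, 10))         # 2783   (9s complement)
print(bin(diminished_radix_complement(0b0101, 4, 2)))   # 0b1010 (1s complement)
```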
Arithmetic with complements
[table: the 16 four-bit patterns (n = 4, r = 2) interpreted as signed magnitude, 2's complement, 1's complement, and Excess-8]
Signed number representations
[[N]] = r^n - (r^n - N) = N
(N + [N]) mod r^n = 0
Example: (n = 4)
+5  0101
-5  1011
    10000 = 2^4
(N + [N]) mod 2^4 = 0
n = 5, r = 10
N = 32546
[N] = 10^5 - 32546 = (67454)10
Two’s complement arithmetic
The value of ( a_{n-1}, a_{n-2}, … , a_0 ) in 2's complement is
  -a_{n-1}·2^(n-1) + Σ_{i=0..n-2} a_i·2^i
– Example: (n = 4)
  0101:  0·2^3 + 1·2^2 + 0·2 + 1·1 = 4 + 1 = 5
  1011: -1·2^3 + 0·2^2 + 1·2 + 1·1 = -8 + 2 + 1 = -5
– Addition/subtraction (n = 5)
  +10   01010
  + 3   00011
        01101 = +13
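A small Python sketch of how the weighted sum above evaluates a 2's complement pattern; twos_complement_value is an illustrative name.

```python
def twos_complement_value(bits):
    """Value of a bit string in 2's complement: the MSB has weight -2^(n-1)."""
    n = len(bits)
    value = -int(bits[0]) * 2**(n - 1)            # sign bit
    for i, b in enumerate(bits[1:], start=1):     # remaining bits have positive weights
        value += int(b) * 2**(n - 1 - i)
    return value

print(twos_complement_value("0101"))   #  5
print(twos_complement_value("1011"))   # -5
print(twos_complement_value("01010"))  # 10
```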
Two’s complement arithmetic
– Addition/subtraction (n = 5)
  +10   01010
  + 7   00111
        10001 = -15   (Overflow)

  +15   01111
  -13   10011
      1 00010 -> Discard carry -> 00010 = +2
When can overflow happen?
Only when both operands have the same sign and the sign bit of the result is different.
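A Python sketch of the sign-based overflow rule just stated, for n-bit patterns held as ordinary integers; add_n_bits is an illustrative name.

```python
def add_n_bits(a, b, n):
    """Add two n-bit 2's complement patterns; report overflow via the sign rule."""
    result = (a + b) & ((1 << n) - 1)             # keep n bits, discard any carry out
    sign = 1 << (n - 1)
    overflow = (a & sign) == (b & sign) and (a & sign) != (result & sign)
    return result, overflow

print(add_n_bits(0b01010, 0b00011, 5))  # (0b01101, False): 10 + 3 = 13 fits
print(add_n_bits(0b01010, 0b00111, 5))  # (0b10001, True):  10 + 7 overflows 5 bits
```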
Cyclic representation (n = 4, r = 2)
avoid discontinuity between
0111 and 1000
Add x:
move x positions clockwise
Subtract x:
move x positions counterclockwise
move (16 - x) positions clockwise
(i.e. add radix complement)
How to detect discontinuity?

   5      0101
 + 6    + 0110
  11      1011

No carry out of the 4-bit register, yet the result is wrong as a signed number:
carry into the most significant bit is 1 while carry out is 0.

  -5      1011
+ -6    + 1010
 -11    1 0101

Carry out of the 4-bit register, and again the result is wrong as a signed number:
carry into the most significant bit is 0 while carry out is 1.

In both cases the error shows up as carry into the sign bit differing from carry out.
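The carry-based test can be sketched the same way; signed_overflow_by_carries is an illustrative name, and the examples reproduce the two sums above.

```python
def signed_overflow_by_carries(a, b, n):
    """Detect overflow by comparing carry into and carry out of the sign-bit position."""
    low_sum = (a & ~(1 << (n - 1))) + (b & ~(1 << (n - 1)))   # add all bits below the sign bit
    carry_in = (low_sum >> (n - 1)) & 1                        # carry into the MSB position
    carry_out = ((a + b) >> n) & 1                             # carry out of the n-bit register
    return carry_in != carry_out

print(signed_overflow_by_carries(0b0101, 0b0110, 4))  # True:   5 +  6 overflows 4 bits
print(signed_overflow_by_carries(0b1011, 0b1010, 4))  # True:  -5 + -6 overflows 4 bits
print(signed_overflow_by_carries(0b0101, 0b0001, 4))  # False:  5 +  1 = 6 fits
```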
Same circuitry
signed numbers
add
subtract (use 2s complement of subtrahend)
Intel architecture OF (overflow flag) detects out-of-range result
unsigned numbers
same protocol
but discontinuity is now between 0000 and 1111
detect out-of-range with carry out for addition,
lack of carry out for subtraction
Intel uses CF (carry flag) to detect out-of-range result
Codes
4-bit codes for digits 0 through 9 (BCD : 8421 weighted)
2421 and Excess-3 are self-complementing
(complement bits to get 9’s complement representation)
Biquinary has two ranges 01… and 10…
one-bit error produces invalid codeword
Number representation inside
computer
Integers – represented in 2’s complement
Single precision – 32 bits
Largest positive integer is 2^31 - 1 = 2,147,483,647
Smallest negative integer is -2^31 = -2,147,483,648
Number representation inside
computer
Floating point
Scientific notation
0.0043271 = 0.43271 × 10^-2
normalized number:
the first digit after the decimal point is not 0
Normalized notation maximizes the use of
significant digits.
Floating point numbers
Floating point
Fields:  S | E | m
N = (-1)^S · m · r^E
S = 0 positive
S = 1 negative
m: normalized fraction; for radix r = 2,
as the MSB digit is always 1, there is no need to store it explicitly;
called the hidden bit, it gives one extra bit of precision
Floating point formats
• IEEE format: (-1)^S (1.m) 2^(E-127)
• DEC format: (-1)^S (0.m) 2^(E-128)
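A Python sketch that decodes a 32-bit pattern according to the IEEE formula above, ignoring the special cases (zero, denormals, infinity, NaN); decode_ieee_single is an illustrative name.

```python
def decode_ieee_single(bits):
    """Decode a 32-bit pattern as (-1)^S * (1.m) * 2^(E-127); normalized numbers only."""
    s = (bits >> 31) & 0x1                 # sign bit
    e = (bits >> 23) & 0xFF                # 8-bit biased exponent
    m = bits & 0x7FFFFF                    # 23-bit fraction field
    mantissa = 1 + m / 2**23               # hidden bit restored: 1.m
    return (-1)**s * mantissa * 2**(e - 127)

print(decode_ieee_single(0x3F800000))  # 1.0
print(decode_ieee_single(0xC0490FDB))  # approximately -3.14159
```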
Floating point formats
• Different manufacturers used different
representations
Mechanical encoder with sequential
sector numbering
At boundaries can get a code different
than either adjacent sector
Gray code:
Gray code algorithm:
input:  b_{n-1} b_{n-2} … b_1 b_0   (binary)
output: g_{n-1} g_{n-2} … g_1 g_0   (Gray code)
g_i = 0 if b_i = b_{i+1}, 1 otherwise   (take b_n = 0; equivalently g_i = b_i XOR b_{i+1})
e.g.
n = 3: 6 -> 110 -> 101
n = 4: 10 -> 1010 -> 1111
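In code, the rule g_i = b_i XOR b_{i+1} collapses to a single shift and exclusive-or; binary_to_gray is an illustrative Python name, and the two prints reproduce the examples above.

```python
def binary_to_gray(b):
    """g_i = b_i XOR b_(i+1), with b_n taken as 0; equivalently b ^ (b >> 1)."""
    return b ^ (b >> 1)

print(format(binary_to_gray(0b110), '03b'))    # 101  (n = 3:  6 -> 110  -> 101)
print(format(binary_to_gray(0b1010), '04b'))   # 1111 (n = 4: 10 -> 1010 -> 1111)
```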
Alternative algorithm (n bits)
If n = 1 use 0->0 and 1->1
If n > 1
first half is Gray code on (n - 1) bits preceded by 0
second half is reversed Gray code on (n - 1) bits preceded by 1
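A Python sketch of this reflected construction; gray_code is an illustrative name.

```python
def gray_code(n):
    """Build the n-bit Gray code list by reflection: prefix 0 to the (n-1)-bit code,
    then prefix 1 to that code in reverse order."""
    if n == 1:
        return ["0", "1"]
    prev = gray_code(n - 1)
    return ["0" + w for w in prev] + ["1" + w for w in reversed(prev)]

print(gray_code(3))
# ['000', '001', '011', '010', '110', '111', '101', '100']
```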
Representing non-numeric data
• Code: systematic and preferably standardized way
of using a set of symbols for representing
information
– Example: BCD (binary coded decimal) for decimal #s
– It is a “weighted” code: the value of a symbol is a weighted sum of the bits
• Extending number of symbols to include
alphabetic characters
– EBCDIC (by IBM): Extended BCD Interchange Code
– ASCII: American Standard Code for Information
Interchange
Codes
Cyclic codes
A circular shift of any code word produces a
valid code word (maybe the same)
Gray code – example of a cyclic code
Code words for consecutive numbers differ by
one bit only
ASCII, EBCDIC codes
State codes, e.g.
Consequences of binary versus one-hot coding:
n-cubes of codewords:
Hamming distance between x and y is count of positions where
x-bit differs from y-bit
Also equals link count in shortest path from x to y
Gray code is a path that visits each vertex exactly once
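A Python sketch of the Hamming distance computation for equal-length bit strings; hamming_distance is an illustrative name.

```python
def hamming_distance(x, y):
    """Count of bit positions in which two equal-length words differ."""
    return sum(1 for a, b in zip(x, y) if a != b)

print(hamming_distance("0001011", "0101001"))  # 2
print(hamming_distance("000", "111"))          # 3
```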
Error-detecting codes
concept:
choose code words such that
corruption generates a non-code word
to detect single-bit error, code words must occupy
non-adjacent cube vertices
Problem: how do we systematically construct such a code?
Solution: add a parity bit
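A Python sketch of even parity as the error-detecting code suggested above; add_even_parity and parity_ok are illustrative names.

```python
def add_even_parity(word):
    """Append one bit so the total number of 1s is even (minimum distance becomes 2)."""
    return word + ("0" if word.count("1") % 2 == 0 else "1")

def parity_ok(codeword):
    """A single-bit error makes the count of 1s odd, which this check catches."""
    return codeword.count("1") % 2 == 0

cw = add_even_parity("1011")       # '10111'
print(cw, parity_ok(cw))           # 10111 True
print(parity_ok("10101"))          # False: one bit flipped, error detected
```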
Error-correcting codes
minimum distance between code words > 1
Example: correct one bit errors or detect two-bit errors
Two-bit error from
0001011
Write minimum distance as 2c + d + 1 bits ==>
corrects c-bit errors and
detects (c + d)-bit errors
Example: min distance = 4 = 2(1) + 1 + 1
But also, 4 = 2(0) + 3 + 1
Why?
Suppose the minimum distance is h and consider a pair of closest code words.
Around each, reserve a correction zone of radius c (errors of up to c bits are corrected).
A word corrupted by up to c + d errors must still stay out of the other code word's
correction zone, so (c + d) + c < h, i.e. at most d = h - 2c - 1.
Then 2c + d + 1 = 2c + (h - 2c - 1) + 1 = h.
Hamming codes:
n = 2^k - 1 total bits, k of them check bits
number bit positions 1, 2, 3, ... n from right to left
bit position is a power of 2 => check bit
else => information bit
e.g. (n = 7)
check bits in positions 1, 2, 4
information bits in positions 3, 5, 6, 7
Create group for each check bit
Express check bit position in binary
Group all information bits whose binary position has a one in same place
e.g. (n = 7)
check 1 (001): information bits 3 (011), 5 (101), 7 (111)
check 2 (010): information bits 3 (011), 6 (110), 7 (111)
check 4 (100): information bits 5 (101), 6 (110), 7 (111)
Code information packets to maintain even parity in groups
e.g. (n = 7)
packet is 1011 => positions 7, 6, 5, 3
7654321
101x1xx
Consult group memberships to compute check bits:
check 1: information bits 3, 5, 7 => bit 1 is 1
check 2: information bits 3, 6, 7 => bit 2 is 0
check 4: information bits 5, 6, 7 => bit 4 is 0
Code word is 1010101
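A Python sketch of the n = 7 construction worked through above, including the syndrome that locates a single-bit error; GROUPS, hamming7_encode, and hamming7_syndrome are illustrative names.

```python
GROUPS = {1: (3, 5, 7), 2: (3, 6, 7), 4: (5, 6, 7)}   # check position -> information positions

def hamming7_encode(info_bits):
    """info_bits maps positions 3, 5, 6, 7 to 0/1; returns the full word as {position: bit}."""
    word = dict(info_bits)
    for check, members in GROUPS.items():
        word[check] = sum(info_bits[p] for p in members) % 2   # even parity in each group
    return word

def hamming7_syndrome(word):
    """Sum of check positions whose parity group is odd: 0 means no single-bit error,
    otherwise it names the corrupted position."""
    return sum(c for c, members in GROUPS.items()
               if (word[c] + sum(word[p] for p in members)) % 2 == 1)

# Packet 1011 in positions 7, 6, 5, 3 (as in the example above)
word = hamming7_encode({7: 1, 6: 0, 5: 1, 3: 1})
print("".join(str(word[p]) for p in range(7, 0, -1)))   # 1010101

word[5] ^= 1                                             # flip bit 5
print(hamming7_syndrome(word))                           # 5 -> correct by flipping bit 5 back
```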
Note:
Single bit error corrupts one or more parity groups => minimum distance > 1
Two-bit error in locations x, y corrupts at least one parity group =>
minimum distance > 2
Three-bit error (e.g. in positions 1, 4, 5) goes undetected => minimum distance = 3
3 = 2(1) + 0 + 1 = 2(0) + 2 + 1 => can correct 1-bit errors or detect errors
of size 1 or 2.
[diagram: parity groups 1, 2, 4, and 8 spanning bit positions 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1]
Pattern generalizes to longer bit strings:
• Single bit error corrupts one or more parity groups => minimum distance > 1
• Consider two-bit error in positions x and y. To avoid corrupting all groups:
-- avoid group 1: bit 1 (lsb) same for both x and y
-- avoid group 2: bit 2 same for both
-- avoid group 4: bit 3 same for both, etc.
-- forces x = y
• So true two-bit error corrupts at least one parity group => min distance > 2
• Three-bit error (two msb and lsb) goes undetected => minimum distance = 3
• Conclude: process always produces a Hamming code with min distance = 3
Traditional to permute check bits to far right
Used in memory protection schemes
Add traditional parity bit (for entire word) to obtain
code with minimum distance = 4
For minimum distance = 4:
total bits n = 2^k
parity bits p = k + 1
information bits i = n - p = n - lg n - 1
payload ratio i/n = (n - lg n - 1)/n -> 1 as n grows
Number of check bits grows slowly with information bits
Two-dimensional codes (product codes)
min distance is product of
row and column distances
for simple parity on each (below)
min distance is 4
Scheme for RAID storage
CRC is extension of Hamming codes
each disk is separate row
column is disk block (say 512 characters)
rows have CRC on row disk
columns have even parity
Checksum codes
mod-256 sum of info bytes becomes checksum byte
mod-255 sum used in IP packets
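A Python sketch of the mod-256 checksum; checksum_byte and the sample bytes are illustrative.

```python
def checksum_byte(data):
    """mod-256 sum of the information bytes; appended as the checksum byte."""
    return sum(data) % 256

packet = [0x12, 0x34, 0xF0, 0x0A]
print(hex(checksum_byte(packet)))   # 0x40
```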
m-hot codes (m out of n codes)
each valid code word has m ones in a frame of n bits
min distance is 2
detect all unidirectional errors (those in which the bit flips are all 0-to-1 or all 1-to-0)
Serial data transmission
(simple) transmit clock and sync with data (3 lines)
(complex) recover clock and/or sync from data line
Serial line codes:
NRZ: clock recovery except during long sequences of zeros or ones
NRZI: nonreturn to zero, invert on ones
zero => no change, one => change polarity
RZ: clock recovery except during long sequences of zeros
DC balance
Bipolar return to zero (aka alternate mark inversion: send one as +1 or -1)
Manchester: zero => 0 to 1 transition, one => 1 to 0 transition
at interval center
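A Python sketch of Manchester encoding using the convention stated above (a zero sent as a 0-then-1 pair of half-intervals, a one as 1-then-0); manchester_encode is an illustrative name.

```python
def manchester_encode(bits):
    """Two half-intervals per bit: a 0 is sent as 0 then 1, a 1 as 1 then 0
    (the transition at the interval center carries the clock)."""
    halves = {"0": "01", "1": "10"}
    return "".join(halves[b] for b in bits)

print(manchester_encode("0110"))   # 01101001, one transition in every bit cell
```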