Random Codes - Haverford College


Chapter 10: Shannon's Theorem
Random Codes
Send an n-bit block code through a binary symmetric channel:

A = {ai : i = 1, …, M}, the M distinct equiprobable n-bit codewords
B = {bj : |bj| = n, j = 1, …, 2^n}, the possible received n-bit blocks

Channel matrix (P = 1 − Q, Q < ½):

  [ P  Q ]
  [ Q  P ]

I2(ai) = log2 M
C = 1 − H2(Q)
Intuitively, each block comes through with n·C bits of information.
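A quick numerical illustration of these quantities (a minimal sketch; the helper names and the value Q = 0.1 are mine, not from the text):

```python
import math

def h2(q):
    """Binary entropy H2(q) in bits."""
    return 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def bsc_capacity(q):
    """Capacity C = 1 - H2(Q) of a binary symmetric channel."""
    return 1.0 - h2(q)

Q, n = 0.1, 100
print(bsc_capacity(Q))        # ~0.531 bits per channel use
print(n * bsc_capacity(Q))    # ~53.1 bits carried by a 100-bit block
```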
Fix a small number ε > 0. To signal close to capacity, we want

I2(ai) = n(C − ε), i.e. M = 2^(n(C − ε)) = 2^(nC) / 2^(nε)

Here 2^(nC) is, intuitively, the number of messages that can get through the channel. By increasing n, the factor 2^(nε) can be made arbitrarily large, so we can choose M to use only a small fraction of the number of messages that could get through: redundancy. Excess redundancy gives us the room required to bring the error rate down. For a large n, pick M random codewords from {0, 1}^n.
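A sketch of this random-code construction (the function name and parameter choices are mine, for illustration only):

```python
import math, random

def random_code(n, Q, eps):
    """Draw M = 2^(n(C - eps)) random n-bit codewords from {0, 1}^n."""
    C = 1.0 - (-Q * math.log2(Q) - (1 - Q) * math.log2(1 - Q))
    M = int(2 ** (n * (C - eps)))
    return [random.getrandbits(n) for _ in range(M)]

code = random_code(n=20, Q=0.1, eps=0.1)
print(len(code), "codewords, a fraction", len(code) / 2 ** 20, "of all 20-bit blocks")
```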
With high probability, almost all ai will be a certain distance apart (provided M << 2^n). Picture the ai in n-dimensional Hamming space. As each ai goes through the channel, we expect nQ errors on average. Consider a sphere of radius n(Q + ε′) about each ai.

[Figure: the sent symbol ai at the center of a sphere of radius n(Q + ε′) = nQ + nε′; the received symbol bj lies about nQ from ai, inside the sphere.]

By the law of large numbers,

lim(n→∞) P(bj ∉ S_n(Q+ε′)(ai)) = 0

Similarly, put a sphere of the same radius around each bj. What is the probability that an uncorrectable error occurs? Either too much noise carries bj outside the sphere about ai, or another codeword a′ lands inside the sphere about bj:

PE ≤ P(ai ∉ S) + Σ_{a′ ∈ A \ {ai}} P(a′ ∈ S)

[Figure: the received symbol bj at the center of a sphere of radius n(Q + ε′); the sent symbol ai is inside it, but another a′ is also inside.]

N.b. P(bj ∉ S(ai)) = P(ai ∉ S(bj)), so the first term can be made < δ.
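A minimal decoding sketch under these definitions (the parameter values are illustrative, not from the text; eps_prime stands for ε′):

```python
import random

def hamming(x, y):
    """Hamming distance between two equal-length bit tuples."""
    return sum(a != b for a, b in zip(x, y))

def sphere_decode(b, code, radius):
    """Decode b to the unique codeword within `radius`, else report failure."""
    inside = [a for a in code if hamming(a, b) <= radius]
    return inside[0] if len(inside) == 1 else None

n, Q, eps_prime = 63, 0.05, 0.05
code = [tuple(random.getrandbits(1) for _ in range(n)) for _ in range(16)]
a = random.choice(code)
b = tuple(bit ^ (random.random() < Q) for bit in a)      # BSC noise
print(sphere_decode(b, code, n * (Q + eps_prime)) == a)  # usually True
```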
Shannon's Theorem

Pick M = 2^(n(C − ε)) = 2^(n(1 − H2(Q) − ε)) codewords, so that M / 2^n → 0 as n → ∞.

C = channel capacity
n = block size (as yet undetermined)
ε: we decide how close we wish to approach the channel capacity.

Number of possible random codes = (2^n)^M = 2^(nM), each equally likely.
Idea: Let P̄E ≡ the probability of error averaged over all random codes. We will show P̄E → 0, ∴ some (indeed most) codes must work. Take any code: it will probably work!
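The averaging step can be estimated by Monte Carlo (a sketch under the definitions above; the parameters are mine and far too small to approach capacity, but they show the shape of the argument):

```python
import random

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def avg_error_rate(n, Q, eps_prime, M, trials=500):
    """Estimate P_E averaged over random codes: draw a fresh code each trial."""
    errors = 0
    for _ in range(trials):
        code = [tuple(random.getrandbits(1) for _ in range(n)) for _ in range(M)]
        a = random.choice(code)
        b = tuple(bit ^ (random.random() < Q) for bit in a)   # BSC noise
        inside = [c for c in code if hamming(c, b) <= n * (Q + eps_prime)]
        if inside != [a]:
            errors += 1   # too much noise, or another codeword too close
    return errors / trials

print(avg_error_rate(n=63, Q=0.05, eps_prime=0.05, M=16))
```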
Recall

PE ≤ P{d(a, b) > n(Q + ε′)}                 (too many errors: prob. of channel error)
     + Σ_{a′ ≠ a} P{d(a′, b) ≤ n(Q + ε′)}   (another codeword is too close)

where a is the codeword sent, and b the one received.
Let X = 0 with probability P (no error), 1 with probability Q (error); X represents an error in the channel. If the error vector is a ⊕ b = (X1, …, Xn), then d(a, b) = X1 + … + Xn.
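A one-line simulation of this error vector (values illustrative):

```python
import random

n, Q = 1000, 0.1                                    # illustrative values
X = [int(random.random() < Q) for _ in range(n)]    # error vector a XOR b
print(sum(X))                                       # d(a, b), close to nQ = 100
```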
P{d(a, b) > n(Q + ε′)} = P{X1 + … + Xn > n(Q + ε′)}
                       = P{(X1 + … + Xn)/n − Q > ε′}
                       ≤ V{X} / (nε′^2) → 0 as n → ∞   (by the law of large numbers, via Chebyshev's inequality)

N.B. Q = E{X} and Q < ½, so pick ε′ such that Q + ε′ < ½.
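Numerically, with V{X} = Q(1 − Q) for this 0/1 variable (the values below are illustrative):

```python
def chebyshev_bound(n, Q, eps_prime):
    """Bound P{d(a,b) > n(Q + eps')} <= V{X} / (n * eps'^2), V{X} = Q(1 - Q)."""
    return Q * (1 - Q) / (n * eps_prime ** 2)

for n in (100, 1000, 10000):
    print(n, chebyshev_bound(n, Q=0.1, eps_prime=0.05))   # shrinks like 1/n
```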
Since the a′ are randomly (uniformly) distributed throughout the space,

P{d(a′, b) ≤ n(Q + ε′)} ≤ 2^(nH2(Q + ε′)) / 2^n

by the binomial bound: the numerator bounds the volume of the sphere of radius n(Q + ε′), and 2^n is the volume of the whole space.
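The binomial bound can be checked directly (a sketch; the values of n and p are mine):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def sphere_volume(n, radius):
    """Number of n-bit words within Hamming distance `radius` of a fixed word."""
    return sum(math.comb(n, k) for k in range(int(radius) + 1))

n, p = 100, 0.15
print(sphere_volume(n, p * n) <= 2 ** (n * h2(p)))   # True whenever p <= 1/2
```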
1 
n   log 1 
Q 
2nH 2 (Q )  2

2n
1
 H (Q   )  H (Q)   H (Q)
2
1 
1
1
1 Q
H (Q)  1  log  1  log
 log
 log  1. Hence,
Q
1 Q
Q
Q 
 H is convex down and Q    
1 
n  log2  1 
Q 
2nH 2 (Q )  2
Therefore the second term of PE satisfies

M · P{d(a′, b) ≤ n(Q + ε′)} ≤ 2^(n(C − ε)) · 2^(nH2(Q)) · 2^(nε′ log2(1/Q − 1)) / 2^n
                            = 2^(n(ε′ log2(1/Q − 1) − ε))     (since C = 1 − H2(Q))
                            → 0 as n → ∞,

provided we choose ε′ log2(1/Q − 1) < ε (possible since log2(1/Q − 1) > 0 for Q < ½).
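A numeric sanity check of the final exponent (the values of Q, ε, ε′ below are mine, chosen so that ε′ · log2(1/Q − 1) < ε):

```python
import math

Q, eps, eps_prime = 0.1, 0.05, 0.01
slope = math.log2(1 / Q - 1)            # log2(9) ~ 3.17, positive since Q < 1/2
exponent = eps_prime * slope - eps      # ~ -0.018, negative by choice of eps_prime
for n in (100, 1000):
    print(n, 2 ** (n * exponent))       # tends to 0 as n grows
```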