The Role of Specialization in LDPC Codes
Jeremy Thorpe
Pizza Meeting Talk
2/12/03
Talk Overview
- LDPC Codes
- Message Passing Decoding
- Analysis of Message Passing Decoding (Density Evolution)
- Approximations to Density Evolution
- Design of LDPC Codes using D.E.
The Channel Coding Strategy
- The encoder chooses the m-th codeword in the codebook C and transmits it across the channel.
- The decoder observes the channel output y and generates m' based on knowledge of the codebook C and the channel statistics.

$m \in \{0,1\}^k \to \text{Encoder} \to x \in C \subseteq X^n \to \text{Channel} \to y \in Y^n \to \text{Decoder} \to m' \in \{0,1\}^k$
Linear Codes
- A linear code C (over a finite field) can be defined in terms of either a generator matrix or a parity-check matrix.
- Generator matrix G (k×n): $C = \{mG\}$
- Parity-check matrix H ((n−k)×n): $C = \{c : cH^T = 0\}$
LDPC Codes
- LDPC codes are linear codes defined in terms of H.
- H has a small average number of non-zero elements per row, e.g.:

$H = \begin{pmatrix} 0&1&1&0&0&1&0&0&0&0 \\ 1&0&1&1&0&0&0&0&1&1 \\ 1&1&0&0&1&0&1&1&1&0 \\ 0&1&0&1&1&1&0&0&1&0 \\ 1&0&0&1&0&0&1&1&0&1 \end{pmatrix}$
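As a quick sanity check, the codewords of this small example can be enumerated by brute force: c is a codeword exactly when every parity check is satisfied, i.e. $cH^T = 0 \pmod 2$. A minimal Python sketch (numpy assumed; not from the talk):

    import itertools
    import numpy as np

    # The example parity-check matrix above: 5 checks on 10 bits.
    H = np.array([[0,1,1,0,0,1,0,0,0,0],
                  [1,0,1,1,0,0,0,0,1,1],
                  [1,1,0,0,1,0,1,1,1,0],
                  [0,1,0,1,1,1,0,0,1,0],
                  [1,0,0,1,0,0,1,1,0,1]])

    # A vector is a codeword iff every check sums to 0 mod 2.
    codewords = [c for c in itertools.product([0, 1], repeat=10)
                 if not (H @ np.array(c) % 2).any()]
    print(len(codewords))  # 2**(10 - rank(H)): 32 codewords if H has rank 5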
Graph Representation of LDPC Codes

[Figure: bipartite graph with variable nodes on one side, check nodes on the other]

- H is represented by a bipartite graph.
- There is an edge from v to c if and only if $H(v, c) \neq 0$.
- A codeword is an assignment of the v's such that $\sum_{v \mid c} x_v = 0$ for every check c.
Regular (λ,ρ) LDPC codes

[Figure: variable-node edge stubs matched to check-node edge stubs through a random permutation π]

- Every variable node has degree λ, every check node has degree ρ.
- The ensemble is defined by matching left edge "stubs" with right edge "stubs" via a random permutation.
Message-Passing Decoding of LDPC Codes
- Message Passing (or Belief Propagation) decoding is a low-complexity algorithm which approximately answers the question "what is the most likely x given y?"
- MP recursively defines messages $m_{v,c}^{(i)}$ and $m_{c,v}^{(i)}$ between each variable node v and each adjacent check node c, for iterations i = 0, 1, ...
Two Types of Messages...
- Likelihood ratio: $\lambda(x, y) = \frac{p(y \mid x=1)}{p(y \mid x=0)}$
- Probability difference: $\pi(x, y) = p(x=1 \mid y) - p(x=0 \mid y)$
- For $y_1, \ldots, y_n$ independent conditionally on x, likelihood ratios multiply: $\lambda(x, y_1, \ldots, y_n) = \prod_i \lambda(x, y_i)$
- For $x_1, \ldots, x_n$ independent, probability differences multiply across a parity: $p(\oplus_i x_i = 0 \mid y) - p(\oplus_i x_i = 1 \mid y) = \prod_i \big( p(x_i = 0 \mid y_i) - p(x_i = 1 \mid y_i) \big)$
...Related by the Bilinear Transform
- Definition: $B(x) = \frac{1 - x}{1 + x}$
- Properties (verified numerically below):

$B(B(x)) = x$

$B(\lambda(x, y)) = B\!\left(\frac{p(y \mid x=1)}{p(y \mid x=0)}\right) = \frac{p(y \mid x=0) - p(y \mid x=1)}{p(y \mid x=0) + p(y \mid x=1)} = \frac{2\,p(x=0 \mid y)\,p(y) - 2\,p(x=1 \mid y)\,p(y)}{2\,p(y)} = p(x=0 \mid y) - p(x=1 \mid y) = -\pi(x, y)$

(the middle step uses a uniform prior on x, so that $p(y \mid x=b) = 2\,p(x=b \mid y)\,p(y)$)

$B(-\pi(x, y)) = \lambda(x, y)$
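The algebra above is easy to confirm numerically. A small sketch (the likelihood values 0.7 and 0.2 are made up; a uniform prior on x is assumed, as in the derivation):

    def B(x):
        """Bilinear transform B(x) = (1 - x) / (1 + x)."""
        return (1 - x) / (1 + x)

    # Hypothetical likelihoods of one observation y under each input bit.
    p_y_given_1, p_y_given_0 = 0.7, 0.2

    lam = p_y_given_1 / p_y_given_0                  # likelihood ratio
    p1 = p_y_given_1 / (p_y_given_1 + p_y_given_0)   # posterior under uniform prior
    pi = p1 - (1 - p1)                               # probability difference

    print(B(B(0.37)))   # 0.37: B is an involution
    print(B(lam), -pi)  # both -0.555...: B(lambda) = -pi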
Message Domains
- B(·) converts between the probability-difference domain, $P(x=0 \mid y) - P(x=1 \mid y)$, and the likelihood-ratio domain, $P(y \mid x=0)/P(y \mid x=1)$.
- B'(·) plays the same role between the log-probability-difference domain and the log-likelihood-ratio domain.
Variable to Check Messages
- On any iteration i, the message from v to c is the likelihood ratio $\frac{P(e \mid x_v = 1)}{P(e \mid x_v = 0)}$, where e is the evidence available to v.
- It is computed as:

$m_{v,c}^{(i)} = \lambda(x_v, y_v) \prod_{c' \mid v,\; c' \neq c} B\!\left(m_{c',v}^{(i-1)}\right)$
Check to Variable Messages
- On any iteration, the message from c to v is the probability difference $P(x_v = 0 \mid e) - P(x_v = 1 \mid e)$.
- It is computed as:

$m_{c,v}^{(i)} = \prod_{v' \mid c,\; v' \neq v} B\!\left(m_{v',c}^{(i)}\right)$

- Assumption: incoming messages are independent.
Decision Rule
- After sufficiently many iterations, combine the channel likelihood ratio with all incoming messages and decide:

$\hat{x}_v = \begin{cases} 0, & \text{if } B\!\left(\lambda(x_v, y_v) \prod_{c \mid v} B\!\left(m_{c,v}^{(i-1)}\right)\right) \geq 0 \\ 1, & \text{otherwise} \end{cases}$
Theorem about MP Algorithm

[Figure: the radius-r neighborhood of a variable node v]

- If the algorithm stops after r iterations, then it returns the maximum a posteriori probability estimate of $x_v$ given y within radius r of v.
- However, this holds only if the variables within radius r of v are related only by the equations within radius r of v, i.e., only if the neighborhood is cycle-free.
Analysis of Message Passing Decoding (Density Evolution)
- In Density Evolution we keep track of message densities, rather than the messages themselves.
- At each iteration, we average over all of the edges which are connected by a permutation.
- We assume that the all-zeros codeword was transmitted (which requires that the channel be symmetric).
D.E. Update Rule
- The update rule for Density Evolution is defined in the additive domain of each type of node.
- Whereas in B.P. we add (log) messages:

$m_{c,v}^{(i)} = \sum_{v' \mid c,\; v' \neq v} B'\!\left(m_{v',c}^{(i)}\right)$

- In D.E. we convolve message densities:

$D\!\left(m_{c,v}^{(i)}\right) = \mathop{*}_{v' \mid c,\; v' \neq v} B'\!\left(D\!\left(m_{v',c}^{(i)}\right)\right)$
Familiar Example:
- If one die has the density function:

[Figure: uniform density, 1/6 on each face 1 through 6]

- The density function for the sum of two dice is given by the convolution:

[Figure: triangular density on the sums 2 through 12]
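The same convolution in a couple of lines of Python (numpy's convolve standing in for the density convolution):

    import numpy as np

    die = np.ones(6) / 6           # uniform density on faces 1..6
    total = np.convolve(die, die)  # density of the sum of two dice, on 2..12
    print(total * 36)              # the familiar triangle: 1, 2, ..., 6, ..., 2, 1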
D.E. Threshold
- Fixing the channel message densities, the message densities will either "converge" to minus infinity, or they won't.
- For the Gaussian channel, the smallest (infimum) SNR for which the densities converge is called the density evolution threshold.
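The converge-or-don't behavior is easiest to see on the binary erasure channel, where each message density collapses to a single erasure probability and density evolution becomes a one-dimensional recursion, with the erasure probability ε playing the role of SNR. A sketch for a regular (λ, ρ) ensemble (the standard BEC recursion, not this talk's Gaussian-channel machinery):

    def converges(eps, lam=3, rho=6, iters=2000):
        """BEC density evolution for a regular (lam, rho) ensemble:
        x is the erasure probability of a variable-to-check message."""
        x = eps
        for _ in range(iters):
            x = eps * (1 - (1 - x) ** (rho - 1)) ** (lam - 1)
        return x < 1e-9

    # Bisect for the threshold: the largest eps for which DE converges.
    lo, hi = 0.0, 1.0
    for _ in range(30):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    print(lo)  # about 0.429 for the (3,6) ensemble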
Regular (λ,ρ) LDPC codes
- Every variable node has degree λ, every check node has degree ρ.
- The best rate-1/2 code is (3,6), with a threshold of 1.09 dB.
- This code had already been invented by Robert Gallager by 1962.
D.E. Simulation of (3,6) codes
- Set SNR to 1.12 dB (.03 dB above threshold).
- Watch the fraction of "erroneous messages" from check to variable.
- (Note that this fraction does not characterize the distribution fully.)
Improvement vs. current error fraction for Regular (3,6)
- Improvement per iteration is plotted against the current error fraction.
- Note there is a single bottleneck which took most of the decoding iterations.
Irregular (λ, ρ) LDPC codes

[Figure: variable nodes of degrees 2, 3, ..., n and check nodes of degrees 4, ..., m connected through a single random permutation π]

- A fraction λ_i of variable nodes have degree i; a fraction ρ_i of check nodes have degree i. (These fractions fix the design rate, as sketched below.)
- Edges are connected by a single random permutation.
- Nodes have become specialized.
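The design rate follows from these node fractions by edge counting: both sides of the graph see the same edges, so n × (average variable degree) = (n − k) × (average check degree). A tiny sketch, with the degree fractions invented for illustration:

    # Hypothetical node-perspective degree fractions.
    var_frac = {2: 0.5, 3: 0.3, 10: 0.2}   # lambda_i: fraction of variable nodes
    chk_frac = {6: 0.6, 8: 0.4}            # rho_i: fraction of check nodes

    avg_var = sum(i * f for i, f in var_frac.items())
    avg_chk = sum(i * f for i, f in chk_frac.items())

    # n * avg_var = (n - k) * avg_chk, so the design rate k/n is:
    print(1 - avg_var / avg_chk)   # assumes H is full rank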
D.E. Simulation of Irregular Codes (Maximum degree 10)
- Set SNR to 0.42 dB (~.03 dB above threshold).
- Watch the fraction of erroneous check-to-variable messages.
- This code was designed by Richardson et al.
Comparison of Regular and Irregular codes
- Notice that the irregular curve is much flatter.
- Note: capacity-achieving LDPC codes for the erasure channel were designed by making this line exactly flat.
Multi-edge-type construction
- Edges of a particular "color" are connected through a permutation.
- Edges become specialized. Each edge type has a different message distribution each iteration.
D.E. of MET codes
- For multi-edge-type codes, Density Evolution tracks the density of each type of message separately (see the sketch below).
- Comparison was made to real decoding, next slide (courtesy of Ken Andrews).
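What one density per edge type means is again easiest to see on the erasure channel, where each type's density collapses to a single erasure probability. Below is a sketch for a protograph-style base matrix, one way to realize a multi-edge-type ensemble; the base matrix is invented for illustration, and base[c][v] counts the type-(c, v) edges:

    import numpy as np

    def met_bec_de(base, eps, iters=500):
        """Per-edge-type density evolution on the erasure channel.
        x[c, v] / y[c, v] are the erasure probabilities of the
        variable-to-check / check-to-variable messages of type (c, v)."""
        base = np.asarray(base)
        y = np.ones(base.shape)   # checks tell us nothing at first
        x = np.zeros(base.shape)
        for _ in range(iters):
            for c, v in zip(*np.nonzero(base)):
                # Erased iff the channel and every other incoming edge erase.
                x[c, v] = eps * np.prod([y[c2, v] ** (base[c2, v] - (c2 == c))
                                         for c2 in range(base.shape[0])])
            for c, v in zip(*np.nonzero(base)):
                # Erased iff any other incoming edge is erased.
                y[c, v] = 1 - np.prod([(1 - x[c, v2]) ** (base[c, v2] - (v2 == v))
                                       for v2 in range(base.shape[1])])
        return x

    base = [[2, 1, 1],    # 2 check types, 3 variable types
            [1, 2, 1]]
    print(met_bec_de(base, 0.30).max())  # prints ~0 if 0.30 is below this ensemble's threshold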
MET D.E. vs. decoder simulation
Regular, Irregular, MET comparison
- Multi-edge-type LDPC codes improve gradually through most of the decoding.
- MET codes have a threshold below the more "complex" irregular codes.
Design of Codes using D.E.
- Density Evolution provides a moderately fast way to evaluate single- and multi-edge-type degree distributions (hypothesis testing); a toy version follows below.
- Much faster approximations to Density Evolution can easily be put into an outer loop which performs function minimization.
- Semi-analytic techniques exist as well.
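As a toy version of that outer loop, again on the erasure channel where density evolution is exact and fast: compute the threshold of candidate edge-perspective degree distributions $\lambda(x) = \sum_i \lambda_i x^{i-1}$, $\rho(x) = \sum_i \rho_i x^{i-1}$ and compare. Both candidates below are made up (the second is a deliberately simple rate-1/2 irregular ensemble), and the whole setup stands in for the real Gaussian-channel evaluation:

    def bec_threshold(lam, rho, bisect_steps=25):
        """Erasure-channel DE threshold for edge-perspective degree
        distributions given as {degree: coefficient} dictionaries."""
        def converges(eps):
            x = eps
            for _ in range(1000):
                # x_{t+1} = eps * lambda(1 - rho(1 - x_t))
                r = 1 - sum(f * (1 - x) ** (i - 1) for i, f in rho.items())
                x = eps * sum(f * r ** (i - 1) for i, f in lam.items())
            return x < 1e-8
        lo, hi = 0.0, 1.0
        for _ in range(bisect_steps):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if converges(mid) else (lo, mid)
        return lo

    # Hypothesis testing: compare two rate-1/2 candidates.
    regular = ({3: 1.0}, {6: 1.0})             # (3,6) regular
    irregular = ({2: 0.5, 6: 0.5}, {6: 1.0})   # made-up irregular ensemble
    for lam, rho in (regular, irregular):
        print(bec_threshold(lam, rho))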
Review
- Iterative Message Passing can be used to decode LDPC codes.
- Density Evolution can be used to predict the performance of MP for infinitely large codes.
- More sophisticated classes of codes can be designed to close the gap to the Shannon limit.
Beyond MET Codes (future work)
- Traditional LDPC codes are designed in two stages: design of the degree distribution and design of the specific graph.
- Using very fast and accurate approximations to density evolution, we can evaluate the effect of placing or removing a single edge in the graph.
- Using a stochastic search algorithm like simulated annealing, we can optimize the graph directly, changing the degree profile as needed; see the sketch below.
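A sketch of that direct search, reusing the erasure-channel protograph evaluator from the earlier sketch as a stand-in for the fast density-evolution approximation. The move set (add or remove one edge of the base matrix), the degree constraints, and all parameters are illustrative only:

    import math
    import random
    import numpy as np

    def threshold(base, bisect_steps=12):
        """Erasure-channel DE threshold of a protograph base matrix
        (same per-edge-type recursion as in the earlier sketch)."""
        base = np.asarray(base)

        def converges(eps, iters=300):
            y = np.ones(base.shape)
            x = np.zeros(base.shape)
            for _ in range(iters):
                for c, v in zip(*np.nonzero(base)):
                    x[c, v] = eps * np.prod([y[c2, v] ** (base[c2, v] - (c2 == c))
                                             for c2 in range(base.shape[0])])
                for c, v in zip(*np.nonzero(base)):
                    y[c, v] = 1 - np.prod([(1 - x[c, v2]) ** (base[c, v2] - (v2 == v))
                                           for v2 in range(base.shape[1])])
            return x.max() < 1e-6

        lo, hi = 0.0, 1.0
        for _ in range(bisect_steps):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if converges(mid) else (lo, mid)
        return lo

    # Simulated annealing directly on the graph: propose single-edge
    # changes, accept with the Metropolis rule, cool slowly.
    base = np.array([[2, 1, 1],
                     [1, 2, 1]])
    score = threshold(base)
    temp = 0.01
    for _ in range(50):
        cand = base.copy()
        c = random.randrange(base.shape[0])
        v = random.randrange(base.shape[1])
        cand[c, v] = max(0, cand[c, v] + random.choice([-1, 1]))
        if cand.sum(axis=0).min() < 2 or cand.sum(axis=1).min() < 2:
            continue  # keep every node degree at least 2
        s = threshold(cand)
        if s > score or random.random() < math.exp((s - score) / temp):
            base, score = cand, s
        temp *= 0.95
    print(base, score)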