
Case Study in Computational Science & Engineering - Lecture 5
Iterative Solution of Linear Systems
Jacobi Method
2x1 + 3x2 + x3 = 13
x1 + x2 + x3 = 6
3x1 + 2x2 + 2x3 = 15

Each equation is rearranged to solve for one unknown:

x1 = 6.5 - 1.5 x2 - 0.5 x3
x2 = 6 - x1 - x3
x3 = 7.5 - 1.5 x1 - x2

Jacobi iteration: every update uses only values from the previous sweep.

x1^old = 0; x2^old = 0; x3^old = 0;
while not converged do {
    x1^new = 6.5 - 1.5 x2^old - 0.5 x3^old
    x2^new = 6 - x1^old - x3^old
    x3^new = 7.5 - 1.5 x1^old - x2^old
    x1^old = x1^new; x2^old = x2^new; x3^old = x3^new;
}
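As a concrete illustration, here is the loop above written out in Python (a minimal sketch: the convergence test, tolerance, and iteration cap are our own choices, since the slide leaves "converged" abstract; note this particular matrix is not diagonally dominant, so Jacobi is not guaranteed to converge on it):

```python
# Jacobi iteration for the 3x3 system above.
x1_old, x2_old, x3_old = 0.0, 0.0, 0.0
for it in range(20):
    # every update uses only the values from the previous sweep
    x1_new = 6.5 - 1.5 * x2_old - 0.5 * x3_old
    x2_new = 6.0 - x1_old - x3_old
    x3_new = 7.5 - 1.5 * x1_old - x2_old
    if max(abs(x1_new - x1_old), abs(x2_new - x2_old),
           abs(x3_new - x3_old)) < 1e-10:   # simple convergence test
        break
    x1_old, x2_old, x3_old = x1_new, x2_new, x3_new
```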
Gauss Seidel Method
2x1 + 3x2 + x3 = 13
x1 + x2 + x3 = 6
3x1 + 2x2 + 2x3 = 15

x1 = 6.5 - 1.5 x2 - 0.5 x3
x2 = 6 - x1 - x3
x3 = 7.5 - 1.5 x1 - x2

Gauss-Seidel iteration: each update uses the newest available values.

x1^old = 0; x2^old = 0; x3^old = 0;
while not converged do {
    x1^new = 6.5 - 1.5 x2^old - 0.5 x3^old
    x2^new = 6 - x1^new - x3^old
    x3^new = 7.5 - 1.5 x1^new - x2^new
    x1^old = x1^new; x2^old = x2^new; x3^old = x3^new;
}
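The same loop with Gauss-Seidel updates, again as a Python sketch (tolerance and iteration cap are our choices). The key difference from Jacobi is that each new value is used immediately in the remaining equations, so a single in-place copy of the vector suffices:

```python
# Gauss-Seidel iteration for the same 3x3 system.
x1, x2, x3 = 0.0, 0.0, 0.0
for it in range(20):
    x1_prev, x2_prev, x3_prev = x1, x2, x3
    x1 = 6.5 - 1.5 * x2 - 0.5 * x3   # uses old x2, x3
    x2 = 6.0 - x1 - x3               # uses the NEW x1
    x3 = 7.5 - 1.5 * x1 - x2         # uses the NEW x1 and x2
    if max(abs(x1 - x1_prev), abs(x2 - x2_prev),
           abs(x3 - x3_prev)) < 1e-10:
        break
```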
Stationary Iterative Methods
• An iterative method can be expressed as:
  x^new = c + M x^old, where M is an iteration matrix.
• Jacobi:
  Ax = b, where A = L + D + U (strictly lower, diagonal, and strictly upper parts), i.e.,
  (L+D+U)x = b  =>  Dx = b - (L+U)x
  =>  x = D^-1 (b - (L+U)x) = D^-1 b - D^-1 (L+U)x
  x^(n+1) = D^-1 (b - (L+U)x^n) = c + M x^n
• Gauss-Seidel:
  (L+D+U)x = b  =>  (L+D)x = b - Ux
  =>  x^(n+1) = (L+D)^-1 (b - Ux^n) = (L+D)^-1 b - (L+D)^-1 U x^n
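The splitting can be checked numerically. Below is a small numpy sketch (our own construction, not from the slides) that forms M and c for Jacobi on the 3x3 example; in practice M is never formed explicitly, and the splitting is applied matrix-free:

```python
import numpy as np

A = np.array([[2., 3., 1.],
              [1., 1., 1.],
              [3., 2., 2.]])
b = np.array([13., 6., 15.])

D = np.diag(np.diag(A))        # diagonal part
LU = A - D                     # strictly lower + strictly upper parts
M = -np.linalg.solve(D, LU)    # M = -D^-1 (L+U)
c = np.linalg.solve(D, b)      # c = D^-1 b

# One Jacobi sweep is x_next = c + M @ x. The iteration converges iff
# the spectral radius of M is below 1; here it is about 2.1, which is
# why plain Jacobi diverges on this small example.
print(max(abs(np.linalg.eigvals(M))))
```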
Conjugate Gradient Method
• A non-stationary iterative method that is very effective for symmetric positive definite matrices.
• The method was derived in the context of quadratic function optimization:
  f(x) = x^T A x - 2 b^T x has a minimum when Ax = b
• The algorithm starts with an initial guess and proceeds along a set of mutually conjugate (A-orthogonal) "search" directions in successive steps.
• Guaranteed to reach the solution (in exact arithmetic) in at most n steps for an n x n system, but in practice it gets close enough to the solution in far fewer iterations.
Conjugate Gradient Algorithm
• Steps in CG algorithm in solving system Ax=y:
so = r0 = y - Ax0
ak = rkTrk/skTAsk
xk+1 = xk + aksk
rk+1 = rk - akAsk
bk+1 = rk+1Trk+1/rkTrk
sk+1 = rk+1 + bk+1sk
• s is the search direction, r is the residual vector, x is the
solution vector; a and b are scalars
• a represents the extent of move along the search direction
• New search direction is the new residual plus fraction b of
the old search direction.
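A direct Python transcription of these steps (a sketch, assuming numpy; the tolerance, the small SPD test matrix, and the helper name conjugate_gradient are our own choices):

```python
import numpy as np

def conjugate_gradient(A, y, x0, tol=1e-10):
    x = x0.copy()
    r = y - A @ x              # initial residual
    s = r.copy()               # initial search direction
    for _ in range(len(y)):    # at most n steps in exact arithmetic
        As = A @ s
        a = (r @ r) / (s @ As)     # step length along s
        x = x + a * s
        r_new = r - a * As         # updated residual
        if np.linalg.norm(r_new) < tol:
            break
        b = (r_new @ r_new) / (r @ r)
        s = r_new + b * s          # new residual plus fraction b of old direction
        r = r_new
    return x

A = np.array([[4., 1.], [1., 3.]])   # small SPD test matrix
y = np.array([1., 2.])
print(conjugate_gradient(A, y, np.zeros(2)))   # approx [0.0909, 0.6364]
```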
Pre-conditioning
• The convergence rate of an iterative method depends on the spectral properties of the matrix, i.e., the range of eigenvalues of the matrix. Convergence is not always guaranteed - for some systems the solution may diverge.
• Often, it is possible to improve the rate of convergence (or facilitate convergence in a diverging system) by solving an equivalent system with better spectral properties.
• Instead of solving Ax = b, solve MAx = Mb, where M is chosen to be close to A^-1. The closer MA is to the identity matrix, the faster the convergence.
• The product MA is not explicitly computed; its effect is incorporated via an additional matrix-vector multiplication or a triangular solve.
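The simplest instance of this idea is a diagonal (Jacobi) preconditioner, M = D^-1. A small numpy sketch (the badly scaled example matrix is our own, purely illustrative):

```python
import numpy as np

A = np.array([[1000., 1.],
              [   1., 1.]])        # badly scaled SPD matrix
M = np.diag(1.0 / np.diag(A))      # M = D^-1, a cheap approximation to A^-1

print(np.linalg.cond(A))           # ~1000: slow iterative convergence
print(np.linalg.cond(M @ A))       # ~2.6: MA is much closer to the identity
```

Forming M @ A explicitly here is only for illustration; as noted above, real solvers apply the effect of M through an extra matrix-vector product or a triangular solve.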
Communication Requirements
• Each iteration of an iterative linear system solver requires a
sparse matrix-vector multiplication Ax. A processor needs xi
iff any of its rows has a nonzero in column i.
[Figure: sparse matrix with block rows mapped to processors P0-P3]
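This rule is easy to express in code. A sketch (assuming scipy's CSR format and a made-up 4x4 tridiagonal matrix with a block-row mapping, not the matrix in the figure):

```python
import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[2., 1., 0., 0.],
                         [1., 2., 1., 0.],
                         [0., 1., 2., 1.],
                         [0., 0., 1., 2.]]))
rows_of = {"P0": [0, 1], "P1": [2, 3]}   # block-row mapping

for p, rows in rows_of.items():
    cols = set()
    for i in rows:                        # columns with a nonzero in row i
        cols.update(A.indices[A.indptr[i]:A.indptr[i + 1]])
    remote = cols - set(rows)             # x entries owned elsewhere
    print(p, "needs x entries", sorted(remote))
# P0 needs x entries [2]; P1 needs x entries [1]
```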
Communication Requirements
• The associated graph of a sparse matrix is very useful in
determining the communication requirements for parallel
sparse matrix-vector multiply.
[Figure: associated graph of the matrix mapped onto P0-P3 in two ways - original mapping: comm. required 8 values; alternate mapping: 5 values]
Minimizing communication
• Communication for parallel sparse matrix-vector
multiplication can be minimized by solving a graph
partitioning problem.
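The objective can be stated concretely: assign graph vertices (rows) to processors so that the load is balanced while the number of cut edges is minimized, since each cut edge corresponds to a value that must be communicated. A toy sketch of evaluating one candidate mapping (the graph and the mapping are made up; real codes use partitioners such as METIS):

```python
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # associated graph
part = {0: "P0", 1: "P0", 2: "P1", 3: "P1"}        # candidate mapping

cut = [(i, j) for i, j in edges if part[i] != part[j]]
print(len(cut), "cut edges:", cut)   # each one implies communication
```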
Communication for Direct Solvers
• The communication needed for a parallel direct sparse solver is very different from that for an iterative solver.
• If rows are mapped to processors, comm. is reqd. between the procs owning rows j and k (k > j) iff Akj is nonzero.
• The associated graph of the matrix is not very useful in producing a load-balanced partitioning, since it does not capture the temporal dependences in the elimination process.
• A different graph structure, called the elimination tree, is useful in determining a load-balanced, low-communication mapping.
Elimination Tree
• The e-tree is a tree data structure that succinctly captures the essential temporal dependences between rows during the elimination process.
• The parent of node j in the tree is the row # of the first nonzero below the diagonal in column j of the "filled-in" matrix (equivalently, for the symmetric filled pattern, the first nonzero past the diagonal in row j).
• If row j updates row k, k must be an ancestor of j in the e-tree.
• Row k can only be updated by a node that is in its subtree.
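A sketch of the parent rule in Python (assuming a dense boolean array for the filled-in pattern; the function name and the example pattern are our own):

```python
import numpy as np

def etree_parents(F):
    """Parent of j = row index of first nonzero below the diagonal in column j."""
    n = F.shape[0]
    parent = [None] * n                       # None marks a root
    for j in range(n):
        below = np.nonzero(F[j + 1:, j])[0]   # subdiagonal nonzeros in column j
        if below.size:
            parent[j] = j + 1 + below[0]      # first such row is the parent
    return parent

# Arrow-shaped filled pattern: every column's first subdiagonal nonzero is row 3.
F = np.array([[1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [1, 1, 1, 1]], dtype=bool)
print(etree_parents(F))   # [3, 3, 3, None] -> node 3 is the root
```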
Using the E-Tree for Mapping
• Recursive mapping strategy.
• Sub-trees that are entirely mapped to a processor need no communication between those rows.
• Sub-trees that are mapped amongst a subset of procs only need communication among that group, e.g. rows 36 only needs comm. from P1.
Iterative vs. Direct Solvers
• Direct solvers:
  - Robust: not sensitive to spectral properties of the matrix
  - User can effectively apply the solver without much understanding of the algorithm or properties of the matrix
  - Best for 1D problems; very effective for many 2D problems
  - Significant increase in fill-in for 3D problems
  - More difficult to parallelize than iterative solvers; poorer scalability
• Iterative solvers:
  - No fill-in problem; no explosion of operation count for 3D problems; excellent scalability for large sparse problems
  - But convergence depends on the eigenvalues of the matrix
  - Preconditioners are very important; good ones are usually domain-specific
  - Effective use of iterative solvers may require a good understanding of the mathematical properties of the equations in order to derive good preconditioners