Linear systems of planar Laplacians in optimal time - UPR-RP


Iterative methods
Iterative methods for $Ax = b$

Iterative methods produce a sequence of approximations $x_0, x_1, x_2, \dots$ to the solution $x^*$ of $Ax = b$.

So, they also produce a sequence of errors $e_i = x^* - x_i$.

We say that a method converges if $e_i \to 0$, i.e. $\|e_i\| \to 0$ as $i \to \infty$.
What is computationally appealing about iterative methods?

Iterative methods produce the approximations using a very basic primitive: matrix-vector multiplication.

If a matrix $A$ has $m$ non-zero entries, then $Ax$ can be computed in time $O(m)$: essentially, every non-zero entry of the matrix is read from RAM only once.

In some applications we don't even know the matrix $A$ explicitly. For example, we may only have a function $f(x)$ which returns $Ax$ for some unknown matrix $A$.
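
As an illustration (not from the slides; the name `apply_A` and the example entries are made up), here is a minimal Python/NumPy sketch of the two points above: a sparse matvec touches each stored entry once, and an iterative method only ever needs the function x ↦ Ax.

```python
import numpy as np
from scipy.sparse import csr_matrix

# A sparse matrix with m non-zero entries, stored in CSR format (hypothetical values).
rows = [0, 0, 1, 2, 2]
cols = [0, 2, 1, 0, 2]
vals = [4.0, -1.0, 3.0, -1.0, 2.0]
A = csr_matrix((vals, (rows, cols)), shape=(3, 3))

# The only primitive an iterative method needs: x -> A @ x.
# For a CSR matrix this touches each stored entry once, i.e. O(m) work.
def apply_A(x):
    return A @ x

x = np.ones(3)
print(apply_A(x))   # the iterative method never looks inside A directly
```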
What is computationally appealing about iterative methods?

The memory requirements are also $O(m)$. If the matrix fits in RAM, the method can run. This is a HUGE IMPROVEMENT over the $O(m^2)$ memory of Gaussian elimination.

In most applications Gaussian elimination is impossible
because the memory is limited. Iterative methods are the
only approach that can potentially work.
What might be problematic about iterative methods?

Do they always converge?

If they converge, how fast do they converge?

Speed of convergence:
How many steps are required to gain one decimal place in the error? In other words, how many steps $k$ do we need so that $\|e_{i+k}\| \le \|e_i\| / 10$?
Richardson’s iteration

Perhaps the simplest iteration: $x_{i+1} = x_i + (b - A x_i)$.

It can be seen as a fixed-point iteration for $x = (I - A)x + b$.

The error reduction equation is simple too: $e_{i+1} = (I - A)\,e_i$, so $e_i = (I - A)^i e_0$.
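
A minimal sketch of this iteration in Python (my own illustration; the function name `richardson`, the iteration count, and the example system are arbitrary choices):

```python
import numpy as np

def richardson(apply_A, b, num_iters=100):
    """Richardson's iteration: x_{i+1} = x_i + (b - A x_i)."""
    x = np.zeros_like(b)
    for _ in range(num_iters):
        x = x + (b - apply_A(x))   # one matvec per step
    return x

# Example on a well-scaled diagonal system (diagonal entries in (0, 2)).
d = np.array([0.5, 1.0, 1.5])
b = np.array([1.0, 2.0, 3.0])
x = richardson(lambda v: d * v, b)
print(x, b / d)   # converges to the true solution b / d
```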
Richardson’s iteration for diagonal matrices

Let's study convergence for positive diagonal matrices $A = \mathrm{diag}(d_1, \dots, d_n)$ with $d_j > 0$.
Assume the initial error is the all-ones vector, $e_0 = \mathbf{1}$.

Then the $j$th coordinate of $e_i$ is $(1 - d_j)^i$.

Let's play with numbers (see the sketch below)...
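
To play with numbers concretely, a short snippet (my own illustration) tabulating the $j$th error coordinate $(1 - d_j)^i$ for a few values of $d_j$:

```python
# jth error coordinate after i Richardson steps on a diagonal matrix: (1 - d_j)**i
for d in [0.5, 1.0, 1.9, 2.5]:
    coords = [(1 - d) ** i for i in range(0, 6)]
    print(f"d = {d}:", [round(c, 3) for c in coords])
# d = 0.5 and d = 1.9 shrink, d = 1.0 is exact after one step,
# d = 2.5 gives (-1.5)**i, which blows up: no convergence.
```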

Richardson’s iteration for diagonal matrices

So it seems that we must have $|1 - d_j| < 1$ for every $j$, i.e. $0 < d_j < 2$,

...and Richardson's won't converge for all matrices.
ARE WE DOOMED?
Maybe we’re not (yet) doomed

Suppose we know a $d$ such that $d \ge d_j$ for all $j$.

We can then try to solve the equivalent system $\tfrac{1}{d} A x = \tfrac{1}{d} b$.

The new matrix $A'$ is a diagonal matrix with elements $d'_j = d_j / d$ such that $0 < d'_j \le 1$.
Richardson's iteration converges for an equivalent "scaled" system.
You're cheating us: diagonal matrices are easy anyway!

Spectral decomposition theorem

Every symmetric matrix has the eigenvalue decomposition $A = V \Lambda V^T$, where the columns of $V$ are orthonormal eigenvectors ($V^T V = I$) and $\Lambda$ is the diagonal matrix of eigenvalues.

So what is $(I - A)^i$?

So if $(I - \Lambda)^i$ goes to zero, then $(I - A)^i$ goes to zero too.
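
Filling in the one-line computation behind this claim (standard linear algebra, not stated explicitly on the slide): since $V^T V = I$,
$$(I - A)^i = \big(V (I - \Lambda) V^T\big)^i = V (I - \Lambda)^i V^T,$$
and the diagonal matrix $(I - \Lambda)^i$ has entries $(1 - \lambda_j)^i$, so it goes to zero exactly when $|1 - \lambda_j| < 1$ for every eigenvalue $\lambda_j$; in that case $(I - A)^i$ goes to zero as well.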
Can we scale any positive system?

Gershgorin's Theorem: every eigenvalue of $A$ lies in a disc centered at a diagonal entry $a_{jj}$ with radius $\sum_{k \ne j} |a_{jk}|$. In particular, every eigenvalue of $A$ is at most $\|A\|_\infty$, so we can always take $d = \|A\|_\infty$.

Recall the definition of the infinity norm for matrices: $\|A\|_\infty = \max_j \sum_k |a_{jk}|$.
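
A small numerical check (my own illustration, assuming a symmetric positive definite $A$) that dividing by $\|A\|_\infty$ puts all eigenvalues of the scaled matrix in $(0, 1]$, so Richardson's iteration converges:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 0.1 * np.eye(5)           # symmetric positive definite

inf_norm = np.abs(A).sum(axis=1).max()  # ||A||_inf = max row sum of |a_jk|
A_scaled = A / inf_norm

eigs = np.linalg.eigvalsh(A_scaled)
print(eigs)                             # all in (0, 1] by Gershgorin's theorem
print(np.all((eigs > 0) & (eigs <= 1)))
```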
Richardson's iteration converges for an equivalent "scaled" system, for all positive matrices.
How about the rate of convergence?

We showed that the $j$th coordinate of the error $e_i$ is $(1 - d'_j)^i$.

How many steps do we need to take to make this coordinate smaller than $1/10$?

...and the answer is roughly $\ln 10 / d'_j$ steps (when $d'_j$ is small).
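Filling in the arithmetic behind this answer (standard, not verbatim from the slide):
$$(1 - d'_j)^i \le \tfrac{1}{10} \iff i \,\ln\!\frac{1}{1 - d'_j} \ge \ln 10 \iff i \ge \frac{\ln 10}{\ln\frac{1}{1 - d'_j}} \approx \frac{\ln 10}{d'_j} \quad \text{for small } d'_j.$$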
How about the rate of convergence?

So in the worst case the number of iterations we need is roughly $\ln 10 / d'_{\min} \approx \ln 10 \cdot (d_{\max} / d_{\min})$, which can be huge.
ARE WE DOOMED version 2?
We may have some more information about the matrix $A$

Suppose we know numbers $d'_j$ that approximate the diagonal entries, say $d_j \le d'_j \le c\, d_j$ for some modest constant $c$.

We can then try to solve the equivalent system $D'^{-1} A x = D'^{-1} b$, where $D' = \mathrm{diag}(d'_1, \dots, d'_n)$.

The new matrix $A' = D'^{-1} A$ is a diagonal matrix with elements $d_j / d'_j$ such that $1/c \le d_j / d'_j \le 1$.
Preconditioning

In general, changing the system to $B^{-1} A x = B^{-1} b$ is called preconditioning, and $B$ is called the preconditioner.

Why? Because we try to make a new matrix with a better condition number.

Do we need to have the inverse of $B$? No: in each iteration we only need to apply $B^{-1}$ to a vector $z$, i.e. solve $B y = z$.
So $B$ must be a matrix such that $B y = z$ can be solved more easily.
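
A minimal sketch (my own, not from the slides) of a preconditioned Richardson step: each iteration multiplies by $A$ once and solves one system with $B$, so $B^{-1}$ is never formed explicitly. Here $B$ is taken to be the diagonal of $A$ (Jacobi preconditioning), purely as an example.

```python
import numpy as np

def preconditioned_richardson(apply_A, solve_B, b, num_iters=100):
    """x_{i+1} = x_i + B^{-1}(b - A x_i): one matvec with A and one solve with B per step."""
    x = np.zeros_like(b)
    for _ in range(num_iters):
        r = b - apply_A(x)       # residual
        x = x + solve_B(r)       # apply the preconditioner by *solving* B y = r
    return x

# Example: A with a badly scaled diagonal, B = diag(A) (Jacobi preconditioning).
A = np.array([[100.0, 2.0], [2.0, 1.0]])
b = np.array([1.0, 1.0])
diag = np.diag(A)
x = preconditioned_richardson(lambda v: A @ v, lambda r: r / diag, b)
print(x, np.linalg.solve(A, b))   # matches the direct solution
```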

Richardson's iteration converges fast for an equivalent system when we have some more info in the form of a preconditioner.
Laplacians of weighted graphs
[Figure: an example weighted graph with edge weights and its Laplacian matrix]
Laplacian: symmetric, negative off-diagonals, zero row-sums.
Symmetric diagonally dominant (SDD) matrix: add a positive diagonal to a Laplacian and flip off-diagonal signs, keeping symmetry.
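
A small sketch (my own illustration; the edge weights are arbitrary, not the ones from the figure) of building a weighted graph Laplacian: weighted degrees on the diagonal, negated edge weights off the diagonal, so every row sums to zero.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Build the Laplacian of a weighted graph: L = D - W."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

# Example: a triangle with edge weights 1, 2, and 20 (arbitrary choices).
L = laplacian(3, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 20.0)])
print(L)
print(L.sum(axis=1))   # every row sums to zero
```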
Preconditioning for Laplacians
The star graph: a simple diagonal scaling makes the condition number good.
The line graph: the condition number is bad even with scaling (see the sketch below).
ARE WE DOOMED version 3?
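
To see the contrast concretely, here is a small numerical check (my own illustration) comparing the star graph and the line (path) graph after diagonal scaling; the condition number reported is the ratio of the largest to the smallest nonzero eigenvalue of $D^{-1/2} L D^{-1/2}$.

```python
import numpy as np

def path_laplacian(n):
    L = np.zeros((n, n))
    for u in range(n - 1):
        L[u, u] += 1; L[u + 1, u + 1] += 1
        L[u, u + 1] -= 1; L[u + 1, u] -= 1
    return L

def star_laplacian(n):
    L = np.zeros((n, n))
    for v in range(1, n):
        L[0, 0] += 1; L[v, v] += 1
        L[0, v] -= 1; L[v, 0] -= 1
    return L

def scaled_condition(L):
    d = np.sqrt(np.diag(L))
    N = L / np.outer(d, d)            # D^{-1/2} L D^{-1/2}
    eigs = np.sort(np.linalg.eigvalsh(N))
    return eigs[-1] / eigs[1]         # skip the Laplacian's zero eigenvalue

n = 100
print("star:", scaled_condition(star_laplacian(n)))   # stays small (about 2)
print("path:", scaled_condition(path_laplacian(n)))   # grows roughly like n**2
```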