Transcript ppt - CLAIR
COMS 6998-06 Network Theory
Week 5: October 6, 2010
Dragomir R. Radev
Wednesdays, 6:10-8 PM
325 Pupin Terrace
Fall 2010
(8) Random walks and electrical networks
Random walks
• Stochastic process on a graph
• Transition matrix E
• Simplest case: a regular 1-D graph
[Figure: a path graph with nodes 0 through 5]
Gambler’s ruin
• A has N pennies and B has M pennies.
• At each turn, one of them wins a penny
with a probability of 0.5
• Stop when one of them loses all his
money.
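As an illustration (my own sketch, not from the slides), the penny game can be simulated directly; the harmonic-function argument on the next slides shows that A's chance of winning everything is N/(N+M):

```python
import random

def gamblers_ruin(n, m, trials=20000, seed=0):
    """Simulate the penny game: A starts with n pennies, B with m.
    Returns the fraction of games in which A wins all n + m pennies."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a = n
        while 0 < a < n + m:
            a += 1 if rng.random() < 0.5 else -1  # fair coin flip
        wins += (a == n + m)
    return wins / trials

# Theory: A ruins B with probability N/(N+M); e.g. 3/(3+2) = 0.6.
print(gamblers_ruin(3, 2))
```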
Harmonic functions
• Harmonic functions:
– P(0) = 0
– P(N) = 1
– P(x) = ½·P(x-1) + ½·P(x+1), for 0 < x < N
– (in general, replace ½ with the bias in the walk)
Simple electrical circuit
[Figure: a path circuit with nodes 0 through 5; V(0) = 0 at one end, V(N) = 1 at the other]
i_xy = (v(x) - v(y)) / R
(v(x+1) - v(x))/R + (v(x-1) - v(x))/R = 0
v(x) = (v(x+1) + v(x-1)) / 2
Arbitrary resistances
v(x) = [ (1/R_{x-1}) / (1/R_{x-1} + 1/R_x) ]·v(x-1) + [ (1/R_x) / (1/R_{x-1} + 1/R_x) ]·v(x+1)
The Maximum principle
• Let f(x) be a harmonic function on a sequence S.
• Theorem:
– A harmonic function f(x) defined on S takes on its
maximum value M and its minimum value m on the
boundary.
• Proof:
– Let M be the largest value of f. Let x be an element of
S for which f(x)=M. Since f(x) is the average of f(x-1)
and f(x+1) and M is the maximum, f(x+1)=f(x-1)=M. If x-1
is still an interior point, continue with x-2, etc. In the
worst case, reach x=0, for which f(0)=M, so the maximum is
attained on the boundary. (The argument for the minimum m
is analogous.)
The Uniqueness principle
• Let f(x) be a harmonic function on a sequence S.
• Theorem:
– If f(x) and g(x) are harmonic functions on S such that
f(x)=g(x) on the boundary points B, then f(x)=g(x) for
all x.
• Proof:
– Let h(x)=f(x)-g(x). Then, if x is an interior point,
(h(x+1) + h(x-1))/2 = (f(x+1) + f(x-1))/2 - (g(x+1) + g(x-1))/2 = f(x) - g(x) = h(x)
and h is harmonic. But h(x)=0 for x in B, and therefore,
by the Maximum principle, its minimal and maximal
values are both 0. Thus h(x)=0 for all x which proves
that f(x)=g(x) for all x.
How to find the unique solution?
• Try a linear function: f(x)=x/N.
• This function has the following properties:
– f(0)=0
– f(N)=1
– (f(x-1)+f(x+1))*1/2=x/N=f(x)
Reaching the boundary
• Theorem:
– The random walker will reach either 0 or N.
• Proof:
– Let h(x) be the probability that the walker
never reaches the boundary. Then
h(x)=1/2*h(x+1)+1/2*h(x-1),
so h(x) is harmonic. Also h(0)=h(N)=0.
According to the maximum principle, h(x)=0
for all x.
Number of steps to reach the
boundary
• m(0) = 0
• m(N) = 0
• m(x) = 1 + ½·m(x+1) + ½·m(x-1), for 0 < x < N
• The expected number of steps until a one-
dimensional random walk goes up to b or
down to -a is ab.
• Examples: (a=1,b=1); (a=2,b=2)
• (also: the displacement varies as sqrt(t)
where t is time).
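The ab claim can be checked empirically; a minimal simulation sketch (mine, not from the slides):

```python
import random

def mean_exit_time(a, b, trials=20000, seed=1):
    """Average number of +/-1 steps until a walk from 0 hits -a or +b."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while -a < pos < b:
            pos += 1 if rng.random() < 0.5 else -1
            steps += 1
        total += steps
    return total / trials

print(mean_exit_time(2, 2))  # theory: a*b = 4
```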
Fair games
• In the penny game, after one iteration, the
expected fortune is ½(k-1) + ½(k+1) = k
• Fair game = martingale
• Now if A has x pennies out of a total of N,
his expected final fortune is:
(1 - p(x))·0 + p(x)·N = p(x)·N
• Is the game fair if A can stop when he
wants? No – e.g., stop playing when your
fortune reaches $x.
(9) Method of relaxations and other methods
for computing harmonic functions
2-D harmonic functions
[Figure: a 2-D lattice with boundary points held at 0 and 1 and interior points x, y, z]
The original Dirichlet problem
[Figure: a metal sheet with boundary segments held at U = 0 and U = 1]
• Distribution of temperature in a sheet of metal.
• One end of the sheet has temperature t=0, the other
end: t=1.
• Laplace’s differential equation:
∇²u = u_xx + u_yy = 0
• This is a special (steady-state) case of the (transient)
heat equation:
k·∇²u = u_t
• In general, the solutions to this equation are called
harmonic functions.
Learning harmonic functions
• The method of relaxations
– Discrete approximation.
– Assign fixed values to the boundary points.
– Assign arbitrary values to all other points.
– Adjust their values to be the average of their neighbors.
– Repeat until convergence.
• Monte Carlo method
– Perform a random walk on the discrete representation.
– Compute f as the probability of a random walk ending in a particular
fixed point.
• Linear equation method
• Eigenvector methods
– Look at the stationary distribution of a random walk
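The relaxation loop is short in code. A sketch (mine, not from the slides), using the three interior points of the small grid example that appears two slides later, where x averages y, z and two boundary 0s; y averages x and one boundary 1; z averages x and two boundary 1s:

```python
def relax(tol=1e-12):
    """Method of relaxations: repeatedly replace each interior value
    by the average of its neighbors (boundary values stay fixed)."""
    x = y = z = 0.5  # arbitrary starting values for the interior points
    while True:
        x_new = (y + z + 0 + 0) / 4
        y_new = (x_new + 1) / 2
        z_new = (x_new + 1 + 1) / 3
        if max(abs(x_new - x), abs(y_new - y), abs(z_new - z)) < tol:
            return x_new, y_new, z_new
        x, y, z = x_new, y_new, z_new

print(relax())  # converges to x = 7/19, y = 13/19, z = 15/19
```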
Monte Carlo solution
• Least accurate of all. Example: 10,000
runs for an accuracy of 0.01
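A Monte Carlo sketch (mine, not from the slides) for the small grid example that follows: from each interior point, step to a uniformly chosen neighbor until a boundary value (0 or 1) is reached, and estimate f as the fraction of walks ending at a 1:

```python
import random

# Interior points and their neighbors; 0 and 1 are fixed boundary values.
NEIGHBORS = {
    "x": ["y", "z", 0, 0],
    "y": ["x", 1],
    "z": ["x", 1, 1],
}

def mc_harmonic(start, walks=100000, seed=2):
    """Estimate f(start) as the fraction of random walks that stop
    at a boundary point with value 1."""
    rng = random.Random(seed)
    ones = 0
    for _ in range(walks):
        node = start
        while node in NEIGHBORS:
            node = rng.choice(NEIGHBORS[node])
        ones += node  # node is now a boundary value, 0 or 1
    return ones / walks

print(mc_harmonic("x"))  # exact value is 7/19, about 0.368
```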
Example
• x = 1/4·(y + z + 0 + 0)
• y = 1/2·(x + 1)
• z = 1/3·(x + 1 + 1)
• Ax = u
• x = A⁻¹u
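The linear equation method for this example, as a numpy sketch (the matrix layout is mine):

```python
import numpy as np

# Rewrite the three equations with the unknowns on the left:
#   4x - y - z = 0
#   -x + 2y    = 1
#   -x + 3z    = 2
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  2.0,  0.0],
              [-1.0,  0.0,  3.0]])
u = np.array([0.0, 1.0, 2.0])
x, y, z = np.linalg.solve(A, u)  # x = A^-1 u
print(x, y, z)  # 7/19, 13/19, 15/19
```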
Effective resistance
• Series: R = R1 + R2
• Parallel: C = C1 + C2, i.e.
1/R = 1/R1 + 1/R2
R = R1R2/(R1 + R2)
Example
• Doyle/Snell page 45
Electrical networks and random walks
• Ergodic (connected) Markov
chain with transition matrix P
[Figure: a four-node network; resistors a-c 1Ω, a-d 1Ω, b-c 1Ω, b-d 0.5Ω, c-d 0.5Ω]
• C_xy = 1/R_xy;  C_x = Σ_y C_xy;  P_xy = C_xy / C_x
• Transition matrix:
        a     b     c     d
  a [   0     0    1/2   1/2 ]
  b [   0     0    1/3   2/3 ]
  c [  1/4   1/4    0    1/2 ]
  d [  1/5   2/5   2/5    0  ]
• Stationary distribution: w = wP, with w_x = C_x / C_T:
  w = (2/14, 3/14, 4/14, 5/14)
From Doyle and Snell 2000
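As a numerical check (my own sketch; the edge resistances a-c 1Ω, a-d 1Ω, b-c 1Ω, b-d 0.5Ω, c-d 0.5Ω are my reading of the garbled figure), numpy can build P from the conductances and confirm the stationary distribution:

```python
import numpy as np

# Conductances C_xy = 1/R_xy for the four nodes (a, b, c, d).
C = np.array([[0, 0, 1, 1],
              [0, 0, 1, 2],
              [1, 1, 0, 2],
              [1, 2, 2, 0]], dtype=float)
P = C / C.sum(axis=1, keepdims=True)   # P_xy = C_xy / C_x
w = C.sum(axis=1) / C.sum()            # w_x = C_x / C_T
print(w)                      # [2/14, 3/14, 4/14, 5/14]
print(np.allclose(w @ P, w))  # w is stationary: w = wP
```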
Electrical networks and random walks
[Figure: the same four-node network, with 1 V applied between a and b]
i_xy = (v_x - v_y) / R_xy = (v_x - v_y)·C_xy
• Kirchhoff’s current law at the interior nodes:
Σ_y i_xy = 0, for x ≠ a, b
• Hence v_x = Σ_y (C_xy / C_x)·v_y = Σ_y P_xy·v_y
• With v_a = 1 and v_b = 0:
v_c = 1/4·v_a + 1/4·v_b + 1/2·v_d = 7/16
v_d = 1/5·v_a + 2/5·v_b + 2/5·v_c = 3/8
• v_x is the probability that a random
walk starting at x will reach a
before reaching b.
• The random walk interpretation
allows us to use Monte Carlo
methods to solve electrical circuits.
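A sketch (mine) that solves the two harmonic equations for the interior voltages, taking the transition probabilities 1/4, 1/4, 1/2 and 1/5, 2/5, 2/5 from the four-node network:

```python
import numpy as np

# Harmonicity at the interior nodes c and d, with v_a = 1, v_b = 0:
#   v_c = 1/4*v_a + 1/4*v_b + 1/2*v_d
#   v_d = 1/5*v_a + 2/5*v_b + 2/5*v_c
# Move the unknowns to the left-hand side and solve:
A = np.array([[ 1.0, -0.5],
              [-0.4,  1.0]])
u = np.array([0.25, 0.2])
vc, vd = np.linalg.solve(A, u)
print(vc, vd)  # 7/16 = 0.4375 and 3/8 = 0.375
```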
Energy-based interpretation
• The energy dissipation through a resistor is
i_xy²·R_xy
• Over the entire circuit,
E = ½·Σ_{x,y} i_xy²·R_xy
• The flow from x to y is defined as follows:
f_xy = -f_yx
Σ_y f_xy = 0, for x ≠ a, b
• Conservation of energy: for a function w and a flow j,
(w_a - w_b)·Σ_y j_ay = ½·Σ_{x,y} (w_x - w_y)·j_xy
Thomson’s principle
• One can show that:
R_eff = ½·Σ_{x,y} i_xy²·R_xy
• The energy dissipated by the unit current flow (for v_b=0
and for i_a=1) is R_eff. This value is the smallest among all
possible unit flows from a to b (Thomson’s Principle)
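Continuing the same worked network (a sketch under my reading of the figure: conductances a-c 1, a-d 1, b-c 1, b-d 2, c-d 2), the dissipated energy can be checked against (v_a - v_b)·i_a, which also gives the effective resistance:

```python
import numpy as np

# Conductances C_xy = 1/R_xy and the voltages found earlier.
C = np.array([[0, 0, 1, 1],
              [0, 0, 1, 2],
              [1, 1, 0, 2],
              [1, 2, 2, 0]], dtype=float)
v = np.array([1.0, 0.0, 7/16, 3/8])   # v_a=1, v_b=0, v_c, v_d
I = (v[:, None] - v[None, :]) * C     # branch currents i_xy
i_a = I[0].sum()                      # total current out of a
# E = 1/2 * sum of i_xy^2 * R_xy (R = 1/C; absent edges carry no current)
energy = 0.5 * (I**2 / np.where(C > 0, C, np.inf)).sum()
print(i_a, energy)   # energy equals (v_a - v_b) * i_a = 19/16
print(1.0 / i_a)     # effective resistance a-b: 16/19
```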
Eigenvectors and eigenvalues
• An eigenvector is an implicit “direction” for a
matrix
Av = λv
where v (eigenvector) is non-zero, though λ
(eigenvalue) can be any complex number in
principle
• Computing eigenvalues:
det(A - λI) = 0
Eigenvectors and eigenvalues
• Example:
  A = [ -1   3 ]
      [  2   0 ]
  A - λI = [ -1-λ    3 ]
           [   2   -λ  ]
• det(A - λI) = (-1-λ)·(-λ) - 3·2 = 0
• Then: λ² + λ - 6 = 0; λ1 = 2; λ2 = -3
• For λ1 = 2:
  [ -3    3 ] [x1]   [ 0 ]
  [  2   -2 ] [x2] = [ 0 ]
• Solutions: x1 = x2
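These hand computations can be checked numerically (a sketch, mine):

```python
import numpy as np

A = np.array([[-1.0, 3.0],
              [ 2.0, 0.0]])
vals, vecs = np.linalg.eig(A)
print(sorted(vals))              # the two eigenvalues, -3 and 2
# The eigenvector for lambda = 2 has equal components (x1 = x2):
v = vecs[:, np.argmax(vals)]
print(v[0] / v[1])
```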
Stochastic matrices
• Stochastic matrices: each row (or column) adds
up to 1 and no value is less than 0. Example:
  A = [ 3/8   5/8 ]
      [ 1/4   3/4 ]
• The largest eigenvalue of a stochastic matrix E is
real: λ1 = 1.
• For λ1, the left (principal) eigenvector is p, the
right eigenvector = 1
• In other words, E^T·p = p.
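A quick numerical check of these claims on the example matrix (a sketch, mine):

```python
import numpy as np

A = np.array([[3/8, 5/8],
              [1/4, 3/4]])
vals, vecs = np.linalg.eig(A.T)   # left eigenvectors of A
k = np.argmax(vals.real)
print(vals[k])                    # largest eigenvalue: 1
p = vecs[:, k].real
p = p / p.sum()                   # normalize to a distribution
print(p)                          # principal left eigenvector [2/7, 5/7]
print(np.allclose(A.T @ p, p))    # E^T p = p
```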
Markov chains
• A homogeneous Markov chain is defined by
an initial distribution x and a Markov kernel E.
• Path = sequence (x0, x1, …, xn).
x_i = x_{i-1}·E
• The probability of a path can be computed as
a product of probabilities for each step i.
• Random walk = find x_j given x0, E, and j.
Stationary solutions
• The fundamental Ergodic Theorem for Markov chains [Grimmett and
Stirzaker 1989] says that the Markov chain with kernel E has a
stationary distribution p under three conditions:
– E is stochastic
– E is irreducible
– E is aperiodic
• To make these conditions true:
– All rows of E add up to 1 (and no value is negative)
– Make sure that E is strongly connected
– Make sure that E is not bipartite
• Example: PageRank [Brin and Page 1998]: use “teleportation”
Example
[Figure: an 8-node graph and bar charts of its PageRank values at t = 0 and t = 1]
This graph E has a second graph E’
(not drawn) superimposed on it:
E’ is the uniform transition graph.
Eigenvectors
• An eigenvector is an implicit “direction” for a
matrix.
Ev = λv, where v is non-zero, though λ can be any
complex number in principle.
• The largest eigenvalue of a stochastic matrix E
is real: λ1 = 1.
• For λ1, the left (principal) eigenvector is p, the
right eigenvector = 1
• In other words, ETp = p.
Computing the stationary
distribution
Solution for the
stationary distribution:
E^T p = p
(I - E^T) p = 0
Convergence rate is O(m)
function PowerStatDist (E):
begin
  p(0) = u; (or p(0) = [1, 0, …, 0])
  i = 1;
  repeat
    p(i) = E^T p(i-1);
    L = ||p(i) - p(i-1)||_1;
    i = i + 1;
  until L < ε
  return p(i)
end
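A runnable Python transcription of PowerStatDist (a sketch, mine; u is the uniform vector, and the 2×2 stochastic matrix from the earlier slide serves as a test case):

```python
import numpy as np

def power_stat_dist(E, eps=1e-10):
    """Power method for the stationary distribution:
    iterate p <- E^T p until the L1 change drops below eps."""
    n = E.shape[0]
    p = np.full(n, 1.0 / n)        # p(0) = u, the uniform distribution
    while True:
        p_next = E.T @ p
        if np.abs(p_next - p).sum() < eps:   # L1 norm of the change
            return p_next
        p = p_next

E = np.array([[3/8, 5/8],
              [1/4, 3/4]])
print(power_stat_dist(E))  # converges to [2/7, 5/7]
```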
Example
[Figure: bar charts of PageRank values on the 8-node graph at t = 0, t = 1, and t = 10]
More dimensions
• Polya’s theorem says that a 1-D random
walk is recurrent and that a 2-D walk is
also recurrent. However, a 3-D walk is
transient: it has a non-zero escape
probability (p ≈ 0.66).
• http://mathworld.wolfram.com/PolyasRandomWalkConstants.html
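A simulation sketch (mine; the step cutoff makes this a finite-horizon estimate that slightly overstates escape) for the 3-D case:

```python
import random

def escape_estimate(walks=2000, max_steps=1000, seed=3):
    """Estimate the escape probability of a 3-D simple random walk:
    the fraction of walks that do not return to the origin
    within max_steps steps."""
    rng = random.Random(seed)
    steps3d = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    escaped = 0
    for _ in range(walks):
        pos = (0, 0, 0)
        for _ in range(max_steps):
            dx, dy, dz = rng.choice(steps3d)
            pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
            if pos == (0, 0, 0):
                break    # returned to the origin
        else:
            escaped += 1
    return escaped / walks

print(escape_estimate())  # close to Polya's value of about 0.66
```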