Transcript: Expanders
Eliyahu Kiperwasser
What is it?
Expanders are graphs with no small cuts. This
gives such graphs several unique traits, such as:
– High connectivity.
– No "bottlenecks".
What is it? (First definition)
G=(V,E) is an expander if and only if, for every
subset S ⊆ V with |S| ≤ |V|/2,
  |E(S, S̄)| / |S| ≥ c
for some constant c > 0.
In words: the number of edges leaving every such
subset of vertices is at least the number of
vertices in it times a constant factor.
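On small graphs the definition can be checked by brute force. The following sketch (the code and the representation by an edge list are ours, not from the lecture) computes the edge expansion min over S of |E(S,S̄)|/|S| and contrasts a clique with a path:

```python
from itertools import combinations

def edge_expansion(n, edges):
    """Brute-force min over S with |S| <= n/2 of |E(S, S-bar)| / |S|."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = set(S)
            # count edges with exactly one endpoint inside S
            cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
            best = min(best, cut / len(S))
    return best

clique6 = [(u, v) for u in range(6) for v in range(u + 1, 6)]
path6 = [(i, i + 1) for i in range(5)]
```

For the 6-clique every subset of size k has k·(6−k) outgoing edges, so the expansion is 3; the path, a poor expander, has a middle cut giving only 1/3.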
That’s Easy…
We are all familiar with graphs which are in
fact expanders with more than a constant
factor:
– e.g. cliques.
The challenge is to find sparse graphs which
are nevertheless expanders.
Construction of Explicit Expanders
In this section, we describe two ways to build
such marvelous objects as expanders. The
following two methods show that constant-degree
regular graphs with good expansion exist.
We will show:
– The Margulis/Gaber-Galil Expander.
– The Lubotsky-Philips-Sarnak Expander.
Lubotsky-Philips-Sarnak Expander
A graph on p+1 nodes, where p is a prime.
Let the vertex set be V = Zp ∪ {∞}.
V is the field Zp with one extension: it also
contains 0⁻¹, meaning ∞.
Given a vertex x, connect it to:
– x+1
– x-1
– 1/x
(all operations modulo p)
Proof of expansion is out of this lecture’s scope.
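The construction can be written down directly. A minimal sketch (our code, not from the lecture; it assumes the conventions ∞ + 1 = ∞ − 1 = ∞ as self-loops, 1/0 = ∞ and 1/∞ = 0):

```python
INF = "inf"  # the extra vertex added to Z_p

def lps_neighbors(x, p):
    """The three neighbors of x in the graph on Z_p ∪ {∞}."""
    if x == INF:
        # assumed convention: ∞ ± 1 = ∞ (self-loops), and 1/∞ = 0
        return [INF, INF, 0]
    # modular inverse via pow(x, -1, p) (Python 3.8+), with 1/0 = ∞
    inv = INF if x == 0 else pow(x, -1, p)
    return [(x + 1) % p, (x - 1) % p, inv]
```

Adjacency is symmetric (x+1 undoes x−1, and inversion is an involution), so every vertex has exactly three neighbors and the result is a 3-regular multigraph.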
Lubotsky-Philips-Sarnak Example
Given a vertex x, connect it to:
– x+1
– x-1
– 1/x
(figure: an example of the graph, including the vertex ∞)
Margulis/Gaber-Galil Expander
A graph on m² nodes.
Every node is a pair (x,y) where x,y ∈ Zm.
(x,y) is connected to:
– (x+y,y), (x-y,y)
– (x+y+1,y), (x-y-1,y)
– (x,y+x), (x,y-x)
– (x,y+x+1), (x,y-x-1)
(all operations are modulo m)
Proof of expansion is out of this lecture’s scope.
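Here too the neighbor list can be written directly. A sketch (our code, not from the lecture) that also checks the adjacency is symmetric, so the construction yields an 8-regular multigraph:

```python
def gg_neighbors(x, y, m):
    """The eight neighbors of (x, y) in the Margulis/Gaber-Galil graph."""
    return [((x + y) % m, y), ((x - y) % m, y),
            ((x + y + 1) % m, y), ((x - y - 1) % m, y),
            (x, (y + x) % m), (x, (y - x) % m),
            (x, (y + x + 1) % m), (x, (y - x - 1) % m)]
```

Each of the eight maps is invertible (e.g. (x+y,y) is undone by (x−y,y)), which is why each vertex also appears in the neighbor lists of all its neighbors.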
Spectral gap
From now on we will discuss only d-regular
undirected graphs, where A(G) is the
adjacency matrix.
Since A is a real symmetric matrix, it has n real
eigenvalues:
λ1 ≥ λ2 ≥ … ≥ λn. Let λ = max{ |λi(G)| : i > 1 }.
We define the spectral gap as d − λ.
A second definition of expanders
We can define an expander graph by looking
at the adjacency matrix A.
A graph is an expander if and only if A’s
spectral gap is larger than zero.
Expansion vs. Spectral gap
Theorem:
– If G is an (n, d, λ)-expander then
  (d − λ)/2 ≤ h(G) ≤ √(2d(d − λ))
We prove only one direction (the lower bound).
Rayleigh Quotient
The following is a lemma which will prove
useful in the future.
Lemma:
  λ2 = max { ⟨Ax, x⟩ / ⟨x, x⟩ : x ∈ Rⁿ, ⟨x, 1⟩ = 0, x ≠ 0 }
Proof:
– For A(G), λ1 = d is easily seen to be the largest
  eigenvalue, with the all-ones vector 1 as an
  eigenvector.
– There exists an orthonormal basis v1, v2, …, vn
  where each vi is an eigenvector of A(G).
Rayleigh Quotient
Proof cont.
– Write x = Σi ai·vi. Since ⟨x, 1⟩ = 0, the
  coefficient on v1 vanishes: a1 = 0.
– Therefore
  ⟨Ax, x⟩ = ⟨Σi λi·ai·vi, Σj aj·vj⟩ = Σi=2..n λi·ai²·⟨vi, vi⟩ = Σi=2..n λi·ai²
  and ⟨x, x⟩ = Σi=2..n ai².
– Hence
  ⟨Ax, x⟩ / ⟨x, x⟩ = (Σi=2..n λi·ai²) / (Σi=2..n ai²) ≤ λ2,
  with equality attained at x = v2. Therefore
  max { ⟨Ax, x⟩ / ⟨x, x⟩ : x ∈ Rⁿ, ⟨x, 1⟩ = 0, x ≠ 0 } = λ2
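The lemma is easy to check numerically. A sketch on the 4-cycle, whose spectrum is {2, 0, 0, −2} so λ2 = 0 (the matrix choice and the check are ours, not from the lecture):

```python
import random

# adjacency matrix of the 4-cycle; eigenvalues are 2, 0, 0, -2, so lambda_2 = 0
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

def rayleigh(A, x):
    """<Ax, x> / <x, x> for a nonzero vector x."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(Ax[i] * x[i] for i in range(n)) / sum(xi * xi for xi in x)

random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(4)]
    mean = sum(x) / 4
    x = [xi - mean for xi in x]      # project off the all-ones eigenvector
    assert rayleigh(A, x) <= 1e-9    # never exceeds lambda_2 = 0
```

The all-ones vector attains the quotient d = 2, while on its orthogonal complement the quotient never exceeds λ2.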
Lower Bound Theorem
In this section we prove the correctness of the
lower bound suggested by the previous theorem.
We prove that h(G) ≥ (d − λ)/2.
Proof:
– Let
    xv = |S̄|  for v ∈ S,
    xv = −|S| for v ∈ S̄.
– Then ⟨x, 1⟩ = Σv xv = |S|·|S̄| − |S̄|·|S| = 0,
– and ‖x‖² = Σv xv² = |S|·|S̄|² + |S̄|·|S|² = |S|·|S̄|·n.
Lower Bound Theorem
Proof cont.
– Since x is orthogonal to 1, combining the Rayleigh
  lemma with the fact that ⟨Ax, x⟩ ≤ ‖Ax‖·‖x‖ gives
  ⟨Ax, x⟩ ≤ λ·⟨x, x⟩.
– We will develop this inner product further:
  ⟨Ax, x⟩ = Σu xu·(Ax)u = Σ(u,v)∈E 2·xu·xv
          = Σ(u,v)∈E, u,v∈S 2·xu·xv + Σ(u,v)∈E, u,v∈S̄ 2·xu·xv + Σ(u,v)∈E(S,S̄) 2·xu·xv
– By d-regularity, the number of edges inside S is
  (d·|S| − |E(S,S̄)|)/2, and similarly for S̄, so
  ⟨Ax, x⟩ = |S̄|²·(d·|S| − |E(S,S̄)|) + |S|²·(d·|S̄| − |E(S,S̄)|) − 2·|S|·|S̄|·|E(S,S̄)|
          = d·|S|·|S̄|·n − |E(S,S̄)|·n²
Lower Bound Theorem
Proof cont.
– After the previous calculations we now have all
  that is needed to use the Rayleigh lemma:
  d·|S|·|S̄|·n − |E(S,S̄)|·n² = ⟨Ax, x⟩ ≤ λ·⟨x, x⟩ = λ·|S|·|S̄|·n
  ⟹ |E(S,S̄)| ≥ (d − λ)·|S|·|S̄| / n
– Since S contains at most half of the graph’s
  vertices, |S̄| ≥ n/2, and we conclude:
  |E(S,S̄)| ≥ ((d − λ)/2)·|S|
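The bound |E(S,S̄)| ≥ ((d − λ)/2)·|S| can be verified exhaustively on a small graph. A sketch on K4, which is 3-regular with eigenvalues 3, −1, −1, −1, so λ = 1 (the example choice and code are ours):

```python
from itertools import combinations

n, d, lam = 4, 3, 1  # K4: 3-regular, lambda = 1
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]

def cut_size(S):
    """Number of edges between S and its complement."""
    return sum(1 for (u, v) in edges if (u in S) != (v in S))

# check the proved lower bound for every S with |S| <= n/2
for k in range(1, n // 2 + 1):
    for S in combinations(range(n), k):
        assert cut_size(set(S)) >= (d - lam) * k / 2
```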
Markov Chains
Definition: A finite state machine with
probabilities for each transition, that is, a
probability that the next state is j given that
the current state is i.
Named after Andrei Andreyevich Markov
(1856 - 1922), who studied poetry and other
texts as stochastic sequences of characters.
We will use Markov chains in the proof of our
final lemma, in order to analyze a random
walk on an expander graph.
In directed graphs
In a directed graph, the probability of reaching a
distant vertex can decrease exponentially with
the distance.
In undirected graphs
In an undirected graph this probability decreases
at most polynomially.
Expanders guarantee an almost uniform
chance to “hit” each vertex. For example, a
clique provides a perfectly uniform distribution.
Random walks
A(G) =
  0 1 0 1
  1 0 1 0
  0 1 0 1
  1 0 1 0
Â(G) =
  0   0.5 0   0.5
  0.5 0   0.5 0
  0   0.5 0   0.5
  0.5 0   0.5 0
The first matrix is the adjacency matrix
associated with the graph (a cycle on 4 vertices).
The second is the Markov chain of transition
probabilities from vertex i to vertex j: Â(G) = A(G)/d.
Random walks - Explanation
Hence, the probability of hitting an arbitrary
vertex v on the i-th step is equal to the sum,
over all neighbors of v, of the probability of
hitting those vertices multiplied by the
probability of the transition to v.
Random walks – Algebraic notation
We can re-write the expression in a compact
manner: the distribution after one step is Âx,
and after i steps Âⁱx,
where x is the initial distribution.
Random walks - Example
Suppose x is the uniform distribution on the
vertices; then after one step on the graph we
receive the same distribution:
  Â(G) · (0.25, 0.25, 0.25, 0.25)ᵀ = (0.25, 0.25, 0.25, 0.25)ᵀ
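A one-step update on the 4-vertex cycle from the slides can be sketched as (our code, not from the lecture):

```python
# transition matrix A_hat = A/d for the 4-cycle (d = 2)
A_hat = [[0.0, 0.5, 0.0, 0.5],
         [0.5, 0.0, 0.5, 0.0],
         [0.0, 0.5, 0.0, 0.5],
         [0.5, 0.0, 0.5, 0.0]]

def step(A_hat, x):
    """One random-walk step: x' = A_hat · x."""
    n = len(x)
    return [sum(A_hat[u][v] * x[v] for v in range(n)) for u in range(n)]
```

The uniform distribution is a fixed point of `step`, while a point mass spreads evenly to its two neighbors.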
An Expander Lemma
Let G be an (n,d,λ)-expander and F a subset of
E. Then the probability that a random walk,
starting in the zero-th step from a random
edge in F, traverses an edge of F on its t-th
step is bounded by
  |F|/|E| + (λ/d)^(t−1)
This lemma is later used in the proof of the PCP theorem.
A random walk as a Markov chain
Proof
– Let x be a vector containing the current
  distribution on the vertices.
– Let x' be the next distribution vector, meaning the
  probability to visit vertex u is
  x'u = Σ(u,v)∈E xv / d
– In algebraic notation, x' = Âx, where Â = A/d.
Expressing the success probability
Proof cont.
– Let x be the initial distribution.
– Observation: the distribution we reach after the
  i-th step is Âⁱx.
– Let P be the probability we are interested in,
  which is that of traversing an edge of F in the
  t-th step.
– Let yw be the number of edges of F incident on
  w, divided by d.
– Then
  P = Σw∈V (Â^(t−1)x)w · yw
Plugging in the initial x
Proof cont.
– To obtain x, we pick an edge in F, then pick
  one of the endpoints of that edge to start on.
– Resulting: xw = (number of edges of F incident on w) / (2|F|),
  so yw = (2|F|/d)·xw.
– Using the previous results we get:
  P = Σw∈V (Â^(t−1)x)w · yw = (2|F|/d) · Σw∈V (Â^(t−1)x)w · xw
    = (2|F|/d) · ⟨Â^(t−1)x, x⟩
Decomposing x
Proof cont.
– Observation: the sum of each row in Â = A/d
  equals one.
– Hence, if u is the uniform distribution on the
  vertices of the graph, then Âu = u.
– We decompose any vector x into its uniform
  component plus the remaining orthogonal
  component: x = x∥ + x⊥.
– More intuitively, we separate x into its v1
  component and the rest of the orthogonal basis.
Final Expander Lemma
Proof cont.
– By linearity,
  Â^(t−1)x = Â^(t−1)x∥ + Â^(t−1)x⊥ = x∥ + Â^(t−1)x⊥
Final Expander Lemma
Proof cont.
– Hence,
  ⟨Â^(t−1)x, x⟩ = ⟨Â^(t−1)x∥, x⟩ + ⟨Â^(t−1)x⊥, x⟩
               = ⟨x∥, x⟩ + ⟨Â^(t−1)x⊥, x⟩
               ≤ 1/n + ‖Â^(t−1)x⊥‖·‖x‖
               ≤ 1/n + (λ/d)^(t−1)·‖x⊥‖·‖x‖
               ≤ 1/n + (λ/d)^(t−1)·‖x‖²
– Here ⟨x∥, x⟩ = 1/n since x∥ = (1/n)·1 and the
  entries of x sum to 1, and on the complement of 1
  every eigenvalue of Â is at most λ/d in absolute
  value.
Final Expander Lemma
Proof cont.
– Since the entries of x are positive,
  ‖x‖² = Σv xv² ≤ (maxv xv) · Σv xv = maxv xv
– The maximum is achieved when all d edges
  incident to v are in F, therefore
  maxv xv ≤ d / (2|F|)
Final Expander Lemma
Proof cont.
– We continue the previous calculation:
  ⟨Â^(t−1)x, x⟩ ≤ 1/n + (λ/d)^(t−1)·‖x‖²
              ≤ 1/n + (λ/d)^(t−1)·maxv xv
              ≤ 1/n + (λ/d)^(t−1)·d/(2|F|)
Final Expander Lemma
Proof cont.
– Let’s see what we have so far:
  P = (2|F|/d) · ⟨Â^(t−1)x, x⟩
  ⟨Â^(t−1)x, x⟩ ≤ 1/n + (λ/d)^(t−1)·d/(2|F|)
– The graph is d-regular, so |E| = nd/2 and
  2|F|/(nd) = |F|/|E|.
– By combining these results, we finish our proof:
  P ≤ (2|F|/d) · (1/n + (λ/d)^(t−1)·d/(2|F|))
    = 2|F|/(nd) + (λ/d)^(t−1)
    = |F|/|E| + (λ/d)^(t−1)
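The bound can be checked exactly on a tiny example. A sketch on K4 (d = 3, λ = 1), taking F to be a single edge and computing P via the formula P = (2|F|/d)·⟨Â^(t−1)x, x⟩ from the proof (the example choice and code are ours, not from the lecture):

```python
n, d, lam = 4, 3, 1          # K4: 3-regular, lambda = 1
F = [(0, 1)]                 # the edge set F: a single edge
num_edges = n * d // 2       # |E| = nd/2 = 6

# transition matrix A_hat = A/d for K4
A_hat = [[0.0 if u == v else 1.0 / d for v in range(n)] for u in range(n)]

# initial distribution: random endpoint of a random edge of F
x = [0.0] * n
for (u, v) in F:
    x[u] += 1.0 / (2 * len(F))
    x[v] += 1.0 / (2 * len(F))

Ps = []                      # exact P for t = 1, 2, ...
y = x[:]                     # y = A_hat^(t-1) x
for t in range(1, 6):
    P = (2 * len(F) / d) * sum(y[w] * x[w] for w in range(n))
    Ps.append(P)
    assert P <= len(F) / num_edges + (lam / d) ** (t - 1) + 1e-12
    y = [sum(A_hat[u][v] * y[v] for v in range(n)) for u in range(n)]
```

At t = 1 the walk sits on an endpoint of the single F-edge and crosses it with probability 1/3, comfortably below the bound 1/6 + 1.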
Additional Lemma
The following lemma will be useful to us
when proving the PCP theorem.
If G is a d-regular graph on the vertex set V
and H is a d'-regular graph on V, then
G' = G∪H = (V, E(G)∪E(H)) is a (d+d')-regular
graph such that λ(G') ≤ λ(G) + λ(H).
Lemma Proof
Choose x which satisfies the following:
  ‖x‖ = 1, ⟨x, 1⟩ = 0, λ(G') = ⟨A(G')x, x⟩
– In words, x is the normalized eigenvector
  achieving λ(G').
Then
  λ(G') = ⟨A(G')x, x⟩ = ⟨A(G)x, x⟩ + ⟨A(H)x, x⟩ ≤ λ(G) + λ(H)
resulting in λ(G') ≤ λ(G) + λ(H).
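The lemma can be checked numerically with a small power iteration (everything below is our sketch, not from the lecture; `lam` roughly estimates λ by iterating on the complement of the all-ones eigenvector). Take G to be the 4-cycle (λ(G) = 2) and H the two "diagonal" edges, a perfect matching with λ(H) = 1; their union is K4, with λ = 1 ≤ 2 + 1:

```python
def lam(A, iters=200):
    """Rough power-iteration estimate of max{|eigenvalue|} over the
    subspace orthogonal to the all-ones eigenvector."""
    n = len(A)
    x = [0.7 ** i * (-1) ** i for i in range(n)]  # arbitrary start vector
    for _ in range(iters):
        mean = sum(x) / n
        x = [xi - mean for xi in x]                   # project off 1
        norm = sum(xi * xi for xi in x) ** 0.5
        x = [xi / norm for xi in x]                   # normalize
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(xi * xi for xi in x) ** 0.5            # ~ |dominant eigenvalue|

G = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]   # 4-cycle
H = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]   # matching
GH = [[G[i][j] + H[i][j] for j in range(4)] for i in range(4)]  # = K4

assert lam(GH) <= lam(G) + lam(H) + 1e-6
```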
The End
Questions?