Module 5 - University of Pittsburgh
School of Information Sciences
University of Pittsburgh
TELCOM2125: Network Science and Analysis
Konstantinos Pelechrinis
Spring 2013
Figures are taken from:
M.E.J. Newman, “Networks: An Introduction”
Part 5: Random Graphs with General Degree Distributions
Generating functions
Consider the probability distribution p_k of a non-negative, integer random variable
E.g., the distribution of the node degree in a network
The (probability) generating function for the probability distribution p_k is:
g(z) = p_0 + p_1 z + p_2 z^2 + p_3 z^3 + \dots = \sum_{k=0}^{\infty} p_k z^k
Hence, if we know g(z) we can recover the probability distribution:
p_k = \frac{1}{k!} \frac{d^k g}{dz^k} \Big|_{z=0}
The probability distribution and the generating function are two different representations of the same quantity
Examples
Consider a variable that only takes 4 values (e.g., 1, 2, 3, 4)
pk = 0 for k=0 or k>4
Let us further assume that p1=0.4, p2=0.3, p3=0.1 and p4=0.2
Then:
g(z) = 0.4z + 0.3z^2 + 0.1z^3 + 0.2z^4
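As a quick check, here is a small sketch (assuming sympy is available) that builds this g(z) and recovers each p_k via the derivative formula from the previous slide:

```python
import sympy as sp

z = sp.symbols('z')
# g(z) for the four-valued example above
g = sp.Rational(4, 10)*z + sp.Rational(3, 10)*z**2 \
    + sp.Rational(1, 10)*z**3 + sp.Rational(2, 10)*z**4

# p_k = (1/k!) * d^k g / dz^k evaluated at z = 0
for k in range(5):
    p_k = sp.diff(g, z, k).subs(z, 0) / sp.factorial(k)
    print(k, p_k)   # prints 0, 2/5, 3/10, 1/10, 1/5
```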
Now let us assume that k follows a Poisson distribution:
p_k = e^{-c} \frac{c^k}{k!}
Then the corresponding probability generating function is:
g(z) = e^{-c} \sum_{k=0}^{\infty} \frac{(cz)^k}{k!} = e^{c(z-1)}
Examples
Suppose k follows an exponential distribution:
p_k = C e^{-\lambda k}, \quad \lambda > 0, \quad C = 1 - e^{-\lambda}
Then the generating function is:
g(z) = (1 - e^{-\lambda}) \sum_{k=0}^{\infty} (e^{-\lambda} z)^k = \frac{e^{\lambda} - 1}{e^{\lambda} - z}
The above sum converges iff z < e^{\lambda}
Given that we are only interested in the range 0 ≤ z ≤ 1, this holds true
Power-law distributions
As we have seen, many real networks exhibit a power-law degree distribution
To reiterate, in its pure form we have:
p_k = C k^{-\alpha}, \quad \alpha > 0, \quad k > 0
where p_0 = 0
The normalization constant is:
C \sum_{k=1}^{\infty} k^{-\alpha} = 1 \Rightarrow C = \frac{1}{\zeta(\alpha)}
where ζ(α) is the Riemann zeta function
Then the probability generating function is:
g(z) = \frac{1}{\zeta(\alpha)} \sum_{k=1}^{\infty} k^{-\alpha} z^k = \frac{\mathrm{Li}_{\alpha}(z)}{\zeta(\alpha)}
where Li_α is the polylogarithm of z:
\frac{\partial \mathrm{Li}_{\alpha}(z)}{\partial z} = \frac{\partial}{\partial z} \sum_{k=1}^{\infty} k^{-\alpha} z^k = \sum_{k=1}^{\infty} k^{-(\alpha-1)} z^{k-1} = \frac{\mathrm{Li}_{\alpha-1}(z)}{z}
Power-law distribution
Real networks, as we have seen, do not follow a power law over all values of k
The power law is generally followed at the tail of the distribution, after a cut-off value k_min
In this case the more accurate generating function is:
g(z) = Q_{k_{min}-1}(z) + C \sum_{k=k_{min}}^{\infty} k^{-\alpha} z^k
Q_n(z) is a polynomial in z of degree n
C is the normalizing constant
The remaining sum is related to the Lerch transcendent
Normalization and moments
If we set z=1 at the generating function we get:
g(1) = \sum_{k=0}^{\infty} p_k
If the underlying probability distribution is normalized to unity, g(1)=1
This is not always the case - recall the distribution of small components for a random graph
The derivative of the generating function is:
g'(z) = \sum_{k=0}^{\infty} k p_k z^{k-1}
Evaluating at z=1 we get:
g'(1) = \sum_{k=0}^{\infty} k p_k = \langle k \rangle
Normalization and moments
The previous result can be generalized to higher moments of the probability distribution:
\langle k^m \rangle = \left[ \left( z \frac{d}{dz} \right)^m g(z) \right]_{z=1} = \frac{d^m g}{d(\ln z)^m} \Big|_{z=1}
This is convenient since many times we first calculate the generating function, and hence we can compute interesting quantities directly from g(z)
Powers of generating functions
A very important property of generating functions relates to their powers
In particular, let us assume that g(z) represents the probability distribution of k (e.g., degree)
If we draw m numbers independently from this distribution, the generating function of their sum is the m-th power of g(z)!
This is a very important property that we will use extensively in the derivations that follow
Powers of generating functions
Given that the m numbers are drawn independently from the distribution, the probability that they take a particular set of values {k_i} is: \prod_i p_{k_i}
Hence the probability π_s that they sum up to s is obtained by considering all the possible combinations of k_i values that sum up to s:
\pi_s = \sum_{k_1=0}^{\infty} \dots \sum_{k_m=0}^{\infty} \delta(s, \sum_i k_i) \prod_{i=1}^{m} p_{k_i}
Substituting into the generating function h(z) for π_s gives h(z) = [g(z)]^m (see the numeric check below)
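A small numeric sketch of this property (plain numpy, reusing the four-valued distribution from earlier): the coefficients of g(z)^m, obtained by repeated convolution of the coefficient array, match the Monte Carlo distribution of the sum of m draws.

```python
import numpy as np

p = np.array([0.0, 0.4, 0.3, 0.1, 0.2])   # p_k from the earlier example

# Distribution of the sum of m = 3 independent draws, via repeated convolution
# (polynomial multiplication is exactly convolution of coefficient arrays).
m = 3
pi = p.copy()
for _ in range(m - 1):
    pi = np.convolve(pi, p)

# Monte Carlo check of the same distribution
rng = np.random.default_rng(0)
samples = rng.choice(len(p), size=(100_000, m), p=p).sum(axis=1)
print(pi[3], (samples == 3).mean())   # both ≈ P(sum = 3) = 0.4^3 = 0.064
```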
Configuration model
In the configuration model we provide a given degree sequence
This sequence specifies the exact degree of every node in the network
The number of edges in the network is therefore fixed
Generalization of G(n,m)
Each vertex i can be thought of as having k_i “stubs” of edges
We choose at each step two stubs uniformly at random from the still available ones and connect them with an edge (a minimal sketch follows)
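A minimal sketch of the stub-matching process, assuming numpy is available; this is an illustration of the idea, not the only possible implementation:

```python
import numpy as np

def configuration_model(degrees, seed=None):
    if sum(degrees) % 2 != 0:
        raise ValueError("the sum of the degrees must be even")
    rng = np.random.default_rng(seed)
    stubs = np.repeat(np.arange(len(degrees)), degrees)  # vertex i appears k_i times
    rng.shuffle(stubs)                                   # a uniformly random matching
    return list(zip(stubs[0::2], stubs[1::2]))           # pair up consecutive stubs

edges = configuration_model([3, 3, 2, 2, 1, 1], seed=42)
print(edges)   # may contain self-loops and multi-edges, as the model allows
```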
Configuration model
The graph created by running the above process once is just one possible matching of stubs
All possible matchings appear with equal probability
Hence, the configuration model can be thought of as the ensemble in which each matching with the chosen degree sequence appears with equal probability
However, the configuration model has a few catches
The sum of the degrees needs to be even
Self-edges and multi-edges might appear
If we modify the process to remove these edges then the network is no longer drawn uniformly from the set of possible matchings
It can be shown that the density of these edges tends to 0
Configuration model
While all matchings appear with equal probabilities, not all networks appear with equal probability!
One network might correspond to multiple matchings
We can create all the matchings for a given network by permuting the stubs at each vertex in every possible way
Total number of matchings for a given network: N({k_i}) = \prod_i k_i!
o Independent of the actual network
With Ω({k_i}) being the total number of matchings, each network indeed appears with equal probability N/Ω
Configuration model
However, in the above we have assumed only simple edges
When we add multi- or self-edges things become more complicated
Not all permutations of stubs correspond to different matchings
Two multi-edges whose stubs are permuted simultaneously result in the same matching
The total number of matchings is reduced by a factor of A_ij!
A_ij is the multiplicity of the edge (i,j)
For self-edges there is a further factor of 2, because the interchange of the two ends of the edge does not generate a new matching
N = \frac{\prod_i k_i!}{\left( \prod_{i<j} A_{ij}! \right) \left( \prod_i A_{ii}!! \right)}
Configuration model
The total probability of a network is still N/Ω, but now N depends on the structure of the network itself
Hence, different networks have different probabilities of appearing
In the limit of large n though, the density of multi- and self-edges is zero, and hence the variations in the above probabilities are expected to be small
A slight modification
Sometimes we might be given the degree distribution p_k rather than the degree sequence
In this case we draw a specific degree sequence from this distribution and work just as above
The two models are not very different
The crucial parameter that comes into calculations is the fraction of nodes with degree k
In the extended model this fraction is p_k in the limit of large n
In the standard configuration model this fraction can be directly calculated from the given degree sequence
Edge probability
What is the probability of an edge between nodes i and j?
There are k_i stubs at node i and k_j at node j
The probability that a given stub of node i connects with one of the stubs of node j is k_j/(2m-1)
Since there are k_i possible stubs for vertex i, the overall probability is:
p_{ij} = \frac{k_i k_j}{2m-1} \approx \frac{k_i k_j}{2m}
The above formula is really the expected number of edges between nodes i and j, but in the limit of large m the probability and the mean value become equal (see the empirical check below)
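A rough empirical check (numpy; the degree sequence is an arbitrary small example): averaging the number of edges between nodes 0 and 1 over many random matchings approaches k_0 k_1/(2m-1).

```python
import numpy as np

degrees = np.array([3, 3, 2, 2, 1, 1])
two_m = degrees.sum()
rng = np.random.default_rng(0)

total = 0
trials = 20_000
for _ in range(trials):
    stubs = np.repeat(np.arange(len(degrees)), degrees)
    rng.shuffle(stubs)
    # count edges between 0 and 1 (with multiplicity) in this matching
    total += sum({a, b} == {0, 1} for a, b in zip(stubs[0::2], stubs[1::2]))

print(total / trials)                         # ≈ 0.818
print(degrees[0] * degrees[1] / (two_m - 1))  # = 9/11 ≈ 0.818
```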
Edge probability
What is the probability of having a second edge between i and j?
p_{ij,2} = \frac{k_i k_j (k_i - 1)(k_j - 1)}{(2m)^2}
This is basically the probability that there is a multi-edge between vertices i and j
Summing over all possible pairs of nodes we can get the expected number of multi-edges in the network:
\frac{1}{2(2m)^2} \sum_{ij} k_i k_j (k_i - 1)(k_j - 1) = \frac{1}{2} \left[ \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} \right]^2
The expected number of multi-edges remains constant as the network grows larger, given that the moments are constant
Edge probability
What is the probability of a self-edge?
The possible number of pairs among the k_j stubs of node j is ½ k_j(k_j - 1). Hence:
p_{jj} = \frac{\frac{1}{2} k_j (k_j - 1)}{2m - 1} \approx \frac{k_j (k_j - 1)}{4m}
Summing over all nodes we get the expected number of self-edges:
\sum_i p_{ii} = \frac{\langle k^2 \rangle - \langle k \rangle}{2 \langle k \rangle}
Edge probability
What is the expected number n_ij of common neighbors of nodes i and j?
Consider a node l. The probability that i is connected with l is k_i k_l/2m
The probability that j is connected to l (after node i connected to it) is k_j(k_l - 1)/(2m - 1)
Hence:
n_{ij} = \sum_l \frac{k_i k_l}{2m} \frac{k_j (k_l - 1)}{2m} = \frac{k_i k_j}{2m} \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} = p_{ij} \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle}
Random graphs with given expected degree
The configuration model can be thought of as an extension of the G(n,m) random graph model
Alternatively, we can assign to each vertex i in the graph a parameter c_i and create an edge between two nodes i and j with probability c_i c_j/2m
We need to allow for self- and multi-edges to keep the model tractable
Hence the probability of an edge between nodes i and j is:
p_{ij} = \begin{cases} \dfrac{c_i c_j}{2m}, & i \neq j \\[4pt] \dfrac{c_i^2}{4m}, & i = j \end{cases}
where \sum_i c_i = 2m
Random graphs with given expected degree
Based on the above graph generation process we have:
Average number of edges in the network:
\sum_{i \le j} p_{ij} = \sum_{i < j} \frac{c_i c_j}{2m} + \sum_i \frac{c_i^2}{4m} = \sum_{ij} \frac{c_i c_j}{4m} = m
Average degree of node i:
\langle k_i \rangle = 2 p_{ii} + \sum_{j \neq i} p_{ij} = \frac{c_i^2}{2m} + \sum_{j \neq i} \frac{c_i c_j}{2m} = \sum_j \frac{c_i c_j}{2m} = c_i
Hence, c_i is the average degree of node i
The actual degree in a realization of the model will in general differ from c_i
It can be shown that the actual degree follows a Poisson distribution with mean c_i (unless c_i = 0). A sampling sketch follows.
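A rough sketch of this expected-degree model (numpy assumed; the values of c_i are illustrative):

```python
import numpy as np

def expected_degree_graph(c, seed=None):
    rng = np.random.default_rng(seed)
    c = np.asarray(c, dtype=float)
    two_m = c.sum()
    edges = []
    for i in range(len(c)):
        if rng.random() < c[i] ** 2 / (2 * two_m):   # self-edge, prob c_i^2 / 4m
            edges.append((i, i))
        for j in range(i + 1, len(c)):
            if rng.random() < c[i] * c[j] / two_m:   # prob c_i c_j / 2m (assumed < 1)
                edges.append((i, j))
    return edges

# A self-edge contributes 2 to the degree; averaged over realizations,
# the degree of node 0 approaches c_0 = 4.
c = [4, 3, 3, 2, 2, 2]
def deg0(edges):
    return sum((i == 0) + (j == 0) for i, j in edges)
print(np.mean([deg0(expected_degree_graph(c, seed=s)) for s in range(3000)]))  # ≈ 4
```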
Random graphs with given expected degree
Hence, in this model we specify the expected degree sequence {c_i} (and consequently the expected number of edges m), but not the actual degree sequence and number of edges
This model is analogous to G(n,p)
The fact that the distribution of the expected degrees c_i is not the same as the distribution of the actual degrees k_i makes this model not widely used
Given that we want to be able to choose the actual degree distribution, we will stick with the configuration model even if it is more complicated
Neighbor’s degree distribution
Considering the configuration model, we want to find the probability that the neighbor of a node has degree k
In other words, we pick a vertex i and we follow one of its edges. What is the probability that the vertex at the other end of the edge has degree k?
Clearly it cannot simply be p_k
Counter example: if the probability we are looking for were p_k, then the probability that this neighbor has degree zero would be p_0 (which is in general non-zero). However, a neighbor is attached to at least one edge, so this probability is clearly 0!
Neighbor’s degree distribution
Since there are k stubs at every node of degree k, there is a k/(2m-1) probability that the edge we follow ends at a node of degree k
In the limit of a large network this probability simplifies to k/2m
The total number of nodes with degree k is n p_k
Hence the probability that a neighbor of a node has degree k is:
\frac{k}{2m} n p_k = \frac{k p_k}{\langle k \rangle}, \quad \text{since } 2m = n \langle k \rangle
Average degree of a neighbor
What is the average degree of an individual’s network neighbor?
We have the degree probability of a neighbor, so we simply need to sum over it:
\text{average degree of a neighbor} = \sum_k k \frac{k p_k}{\langle k \rangle} = \frac{\langle k^2 \rangle}{\langle k \rangle}
Given that:
\frac{\langle k^2 \rangle}{\langle k \rangle} - \langle k \rangle = \frac{1}{\langle k \rangle} \left( \langle k^2 \rangle - \langle k \rangle^2 \right) = \frac{\sigma_k^2}{\langle k \rangle} \ge 0
Your friends have more friends than you!
Even though this result is derived using the configuration model, it has been shown to hold true in real networks as well! (see the sketch below)
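A quick numeric illustration (numpy; the zipf degree sequence is an arbitrary heavy-tailed choice): following a random edge lands on a uniformly random stub, so the mean neighbor degree is the degree averaged over stubs, which equals <k^2>/<k>.

```python
import numpy as np

rng = np.random.default_rng(1)
degrees = rng.zipf(2.5, size=10_000)      # heavy-tailed degree sequence (illustrative)
degrees = degrees[degrees < 100]          # truncate the extreme tail

print(degrees.mean())                          # <k>: your degree, on average
print(np.repeat(degrees, degrees).mean())      # a neighbor's degree, averaged over stubs
print((degrees ** 2).mean() / degrees.mean())  # <k^2>/<k>, identical to the above
```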
Excess degree distribution
In many of the calculations that follow we want to know how many edges the neighbor node has other than the one that connects it to the initial vertex
The number of edges attached to a vertex other than the edge we arrived along is called the excess degree
The excess degree is 1 less than the actual degree, and since the neighbor's degree follows k p_k/<k>, its distribution q_k is:
q_k = \frac{(k+1) p_{k+1}}{\langle k \rangle}
Clustering coefficient
As a simple application of the excess degree distribution, let us calculate the clustering coefficient for the configuration model
Recall that the clustering coefficient is the average probability that two nodes with a common neighbor are neighbors themselves
Consider a node u that has at least two neighbors, which we denote i and j
Being neighbors of u, i and j are both at the ends of edges from u, and hence the numbers of their other edges, k_i and k_j, are distributed according to the excess degree distribution
If the excess degrees of i and j are k_i and k_j respectively, then the probability that they are connected with an edge is k_i k_j/2m
Averaging over the excess distribution for both i and j we get:
C = \frac{1}{2m} \left[ \sum_k k q_k \right]^2 = \frac{1}{n} \frac{\left[ \langle k^2 \rangle - \langle k \rangle \right]^2}{\langle k \rangle^3}
Clustering coefficient
As with the Poisson random graph, the clustering coefficient of the configuration model goes as n^{-1} and vanishes in the limit of large networks
Not a very promising model for real networks with large clustering coefficient
However, in the numerator of the expression there is <k^2>, which can be large in some networks depending on the degree distribution
E.g., power law
Generating functions for degree distributions
We will denote the generating functions for the degree distribution and the excess degree distribution as g_0(z) and g_1(z) respectively:
g_0(z) = \sum_{k=0}^{\infty} p_k z^k
g_1(z) = \sum_{k=0}^{\infty} q_k z^k
We can get the relation between the two generating functions:
g_1(z) = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} (k+1) p_{k+1} z^k = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} k p_k z^{k-1} = \frac{1}{\langle k \rangle} \frac{dg_0}{dz} = \frac{g_0'(z)}{g_0'(1)}
In order to find the excess degree distribution we simply need to find the degree distribution (see the numeric sketch below)
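In coefficient form this relation is just a shifted, normalized derivative of g_0; a short numpy sketch:

```python
import numpy as np

p = np.array([0.0, 0.4, 0.3, 0.1, 0.2])   # degree distribution p_k
k = np.arange(len(p))
mean_k = (k * p).sum()                     # <k> = g0'(1)

q = (k[1:] * p[1:]) / mean_k               # q_{k-1} = k p_k / <k>
print(q)                                   # excess degree distribution q_k
print(q.sum())                             # = 1, as a sanity check
```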
Generating functions for degree distributions
Let us assume that the degree distribution follows a Poisson distribution:
p_k = e^{-c} \frac{c^k}{k!} \Rightarrow g_0(z) = e^{c(z-1)}
g_1(z) = \frac{g_0'(z)}{g_0'(1)} = \frac{c e^{c(z-1)}}{c} = e^{c(z-1)} = g_0(z)
The two generating functions are identical
This is one reason why calculations on the Poisson random graph are relatively straightforward
Generating functions for degree distributions
Let us assume a power-law degree distribution:
p_k = \frac{k^{-\alpha}}{\zeta(\alpha)} \Rightarrow g_0(z) = \frac{\mathrm{Li}_{\alpha}(z)}{\zeta(\alpha)}
g_1(z) = \frac{g_0'(z)}{g_0'(1)} = \frac{\mathrm{Li}_{\alpha-1}(z)}{z\, \zeta(\alpha-1)}
Number of second neighbors of a vertex
Let us calculate the probability p_k^{(2)} that a vertex has exactly k second neighbors
We can break this probability down by conditioning on the number of first neighbors:
p_k^{(2)} = \sum_{m=0}^{\infty} p_m P^{(2)}(k \mid m)
where P^{(2)}(k|m) is the conditional probability of having k second neighbors given that we have m direct neighbors, and p_m is the ordinary degree distribution
The number of second neighbors of a vertex is essentially the sum of the excess degrees of its first neighbors
The probability that the excess degrees of the first neighbors are j_1, ..., j_m is:
\prod_{r=1}^{m} q_{j_r}
Number of second neighbors of a vertex
Summing over all sets of values {j_r} that sum up to k we get:
P^{(2)}(k \mid m) = \sum_{j_1=0}^{\infty} \sum_{j_2=0}^{\infty} \dots \sum_{j_m=0}^{\infty} \delta(k, \sum_r j_r) \prod_{r=1}^{m} q_{j_r}
Therefore:
p_k^{(2)} = \sum_{m=0}^{\infty} p_m \sum_{j_1=0}^{\infty} \dots \sum_{j_m=0}^{\infty} \delta(k, \sum_r j_r) \prod_{r=1}^{m} q_{j_r}
Instead of trying to calculate p_k^{(2)} directly, we calculate its generating function g^{(2)}(z):
g^{(2)}(z) = \sum_{k=0}^{\infty} p_k^{(2)} z^k = \sum_{m=0}^{\infty} p_m \left[ \sum_{j=0}^{\infty} q_j z^j \right]^m
Number of second neighbors of a vertex
The quantity in brackets is the probability generating function g_1(z) of q_k, raised to the power m:
g^{(2)}(z) = \sum_{m=0}^{\infty} p_m [g_1(z)]^m = g_0(g_1(z))
The above equation reveals that once we know the generating functions for the vertices' degrees and the vertices' excess degrees, we can find the probability distribution of the second neighbors (a numeric sketch follows)
The above result could have been obtained more easily by recalling the power property of generating functions (how?)
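A numeric sketch of the composition g^(2)(z) = g_0(g_1(z)) with truncated coefficient arrays (numpy; the truncation degree is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial import polynomial as P

p = np.array([0.0, 0.4, 0.3, 0.1, 0.2])   # degree distribution
k = np.arange(len(p))
q = (k[1:] * p[1:]) / (k * p).sum()        # excess degree distribution

def compose(outer, inner, trunc):
    """Coefficients of outer(inner(z)), truncated to degree < trunc."""
    result = np.zeros(trunc)
    power = np.array([1.0])                # inner(z)^0
    for coeff in outer:
        result[:len(power)] += coeff * power[:trunc]
        power = P.polymul(power, inner)[:trunc]
    return result

p2 = compose(p, q, trunc=16)               # distribution of second neighbors
print(p2[:6], p2.sum())                    # the coefficients sum to 1
```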
Number of d-hop neighbors
Similarly we can calculate the number of 3-hop neighbors
Assuming m second neighbors (2-hop neighbors), the number of 3-hop neighbors is the sum of the excess degrees of each of the second neighbors
P^{(3)}(k|m) is the probability of having k 3-hop neighbors, given that we have m 2-hop neighbors
o Similar to above, P^{(3)}(k|m) has generating function [g_1(z)]^m
Hence, exactly analogously to the second-neighbor case:
g^{(3)}(z) = g^{(2)}(g_1(z)) = g_0(g_1(g_1(z)))
Number of d-hop neighbors
This generalizes to neighbors at d-hop distance:
g^{(d)}(z) = g_0(g_1(\dots g_1(z) \dots))
with d-1 nested applications of g_1
The above holds true for all distances d in an infinite graph
In a finite graph, it holds true for small values of d
It is difficult to use the above equation to obtain closed forms for the probabilities of the sizes of d-hop neighborhoods
We can calculate averages though
Average number of d-hop neighbors
What is the average size of the 2-hop neighborhood?
We need to evaluate the derivative of g^{(2)}(z) at z=1:
\frac{dg^{(2)}}{dz} = g_0'(g_1(z))\, g_1'(z)
But g_1(1)=1, and hence the average number of second neighbors is:
c_2 = g_0'(1)\, g_1'(1)
With g_0'(1) = \langle k \rangle, this gives:
c_2 = \langle k^2 \rangle - \langle k \rangle
Average number of d-hop neighbors
The average number of d-hop neighbors is given by:
c_d = \frac{dg^{(d)}}{dz} \Big|_{z=1} = g^{(d-1)'}(g_1(1))\, g_1'(1) = g^{(d-1)'}(1)\, g_1'(1) = c_{d-1}\, g_1'(1)
where
g_1'(1) = \frac{c_2}{g_0'(1)} = \frac{c_2}{\langle k \rangle} = \frac{c_2}{c_1}
Hence,
c_d = \frac{c_2}{c_1} c_{d-1} = \left( \frac{c_2}{c_1} \right)^{d-1} c_1
The average number of neighbors at distance d increases or falls exponentially with d
If this number increases then we must have a giant component
Hence, the configuration model exhibits a giant component iff c_2 > c_1, which can be written as (see the sketch below):
\langle k^2 \rangle - 2 \langle k \rangle > 0
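A one-function sketch of this criterion for a given degree sequence:

```python
import numpy as np

def has_giant_component(degrees):
    """Giant-component condition for the configuration model: <k^2> - 2<k> > 0."""
    k = np.asarray(degrees, dtype=float)
    return (k ** 2).mean() - 2 * k.mean() > 0

print(has_giant_component([1, 1, 1, 1]))        # False: only dimers can form
print(has_giant_component([3, 3, 3, 3, 3, 3]))  # True: 9 - 6 > 0
```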
Small components
Let π_s be the probability that a randomly selected node belongs to a small component of size s > 0:
h_0(z) = \sum_{s=1}^{\infty} \pi_s z^s
Similar to the case of the Poisson random graph, we can show that a small component is a tree
Following a similar process, we consider a node i, which we then remove from the network, and we calculate the probabilities that i's neighbors belong to small components of size t
Small components
The difference in the configuration model is that the neighbors of i are not typical vertices
The excess distribution is different from the degree distribution
Hence, the components that they belong to are not distributed according to π_s but according to another distribution ρ_s
ρ_s is the probability that a vertex at the end of any edge belongs to a small component of size s after this edge is removed:
h_1(z) = \sum_{s=0}^{\infty} \rho_s z^s
Let us assume that node i has degree k, and P(s|k) is the probability that after i is removed its k neighbors belong to small components whose sizes sum up to s
P(s-1|k) is then the probability that i belongs to a small component of size s, given that it has k neighbors
Small components
We can get the overall probability π_s by averaging over all possible degrees k:
\pi_s = \sum_{k=0}^{\infty} p_k P(s-1 \mid k)
Substituting this into the generating function h_0(z):
h_0(z) = \sum_{s=1}^{\infty} \sum_{k=0}^{\infty} p_k P(s-1 \mid k) z^s = z \sum_{k=0}^{\infty} p_k \sum_{s=0}^{\infty} P(s \mid k) z^s
The last sum is the generating function for the probability that the k neighbors of i belong to components that sum up to s
Each component size is independent of the others and follows the generating function h_1(z)
Hence, from the power property of generating functions, we have:
h_0(z) = z \sum_{k=0}^{\infty} p_k [h_1(z)]^k = z\, g_0(h_1(z))
Small components
Let us see how we can compute h_1(z)
ρ_s is the probability that a neighbor j of the node i we removed belongs to a component of size s
In the limit of large networks the removal of a single node does not change the degree distributions, and hence P(s-1|k) still gives the probability that the node under consideration belongs to a component of size s, given that it has k neighbors
If we apply the above for node j we get:
\rho_s = \sum_{k=0}^{\infty} q_k P(s-1 \mid k)
where we have used the excess distribution q_k
Small components
Using similar arguments and the power property of generating functions we get:
h_1(z) = \sum_{s=1}^{\infty} \sum_{k=0}^{\infty} q_k P(s-1 \mid k) z^s = z \sum_{k=0}^{\infty} q_k \sum_{s=0}^{\infty} P(s \mid k) z^s = z \sum_{k=0}^{\infty} q_k [h_1(z)]^k = z\, g_1(h_1(z))
In principle, we now have two equations that we can use to obtain both h_0(z) and h_1(z)
In practice it might be hard to solve for h_i(z), and even if it is possible it might be hard to use them to obtain actual probabilities
Giant component
Given that π_s is the probability that a randomly selected node belongs to a small component of size s, the probability that a randomly chosen node belongs to any small component is: \sum_s \pi_s = h_0(1)
Hence, the probability that a node belongs to the giant component is S = 1 - h_0(1) = 1 - g_0(h_1(1))
Note that h_0(1) is not necessarily 1, unlike most probability generating functions
Given that h_1(1) = g_1(h_1(1)), setting u = h_1(1) we get u = g_1(u) and hence:
S = 1 - g_0(g_1(u)) = 1 - g_0(u)
A numeric sketch follows.
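A minimal numeric sketch of this fixed-point computation, for a Poisson degree distribution with c = 2 (where, as shown earlier, g_0 = g_1):

```python
import math

c = 2.0

def g(z):               # for Poisson degrees, g0(z) = g1(z) = e^{c(z-1)}
    return math.exp(c * (z - 1))

u = 0.0                 # start below 1 to avoid the trivial fixed point u = 1
for _ in range(200):
    u = g(u)            # fixed-point iteration for u = g1(u)

S = 1 - g(u)            # S = 1 - g0(u)
print(u, S)             # u ≈ 0.203, S ≈ 0.797, the known result for c = 2
```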
Giant component
From the above equations it is obvious that u is a fixed point of g_1(z)
One trivial fixed point is z=1, since g_1(1)=1
With u=1 though, we have S = 1 - g_0(1) = 0, which corresponds to the case where we do not have a giant component
Hence, if there is to be a giant component, there must be at least one more fixed point of g_1(z)
What is the physical interpretation of u?
u = h_1(1) = \sum_s \rho_s
ρ_s is the probability that a vertex at the end of any edge belongs to a small component of size s
Hence, the above sum is the total probability that such a vertex does not belong to the giant component
Graphical solution
When we can find the fixed point of g_1 everything becomes easier
However, most of the time this is not possible, so we resort to a graphical solution
g_1(z) is proportional to the probabilities q_k and hence for z ≥ 0 it is in general positive
Furthermore, its derivatives are also proportional to q_k and hence are in general positive
Thus, g_1(z) is positive, increasing and upward concave
Graphical solution
In order for g_1 to have another fixed point u < 1, its derivative at z=1 needs to be greater than 1:
g_1'(1) = \sum_{k=0}^{\infty} k q_k = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} k(k+1) p_{k+1} = \frac{1}{\langle k \rangle} \sum_{k=0}^{\infty} (k-1) k p_k = \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle}
For the derivative at z=1 to be greater than 1, it needs to hold:
\frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} > 1 \Leftrightarrow \langle k^2 \rangle - 2 \langle k \rangle > 0
This is exactly the condition that we saw previously for the presence of a giant component
Hence, there is a giant component iff there is a fixed point u < 1 for g_1
Mean component sizes
Using an approach similar to that for the Poisson random graph, we can calculate some average quantities
The mean size of the component of a randomly chosen vertex is given by:
\langle s \rangle = \frac{\sum_s s \pi_s}{\sum_s \pi_s} = \frac{h_0'(1)}{h_0(1)} = \frac{h_0'(1)}{1 - S}
Eventually, after some calculations we get:
\langle s \rangle = 1 + \frac{g_0'(1) u^2}{g_0(u)[1 - g_1'(u)]}
As with the Poisson random graph, the above calculation is biased
Following similar calculations we get the actual average small component size:
R = \frac{2}{2 - \langle k \rangle u^2 / (1 - S)}
Complete distribution of small component sizes
\pi_s = \frac{\langle k \rangle}{(s-1)!} \left[ \frac{d^{s-2}}{dz^{s-2}} [g_1(z)]^s \right]_{z=0}, \quad s > 1
\pi_1 = p_0
[Figure: π_s for the configuration model with exponential degree distribution with λ = 1.2]
Random graphs with power law degree
Let’s start with a pure power law:
p_k = \begin{cases} 0 & \text{for } k = 0 \\ \dfrac{k^{-\alpha}}{\zeta(\alpha)} & \text{for } k \ge 1 \end{cases}
A giant component exists iff \langle k^2 \rangle - 2\langle k \rangle > 0
\langle k \rangle = \sum_{k=0}^{\infty} k p_k = \sum_{k=1}^{\infty} \frac{k^{-\alpha+1}}{\zeta(\alpha)} = \frac{\zeta(\alpha-1)}{\zeta(\alpha)}
\langle k^2 \rangle = \sum_{k=0}^{\infty} k^2 p_k = \sum_{k=1}^{\infty} \frac{k^{-\alpha+2}}{\zeta(\alpha)} = \frac{\zeta(\alpha-2)}{\zeta(\alpha)}
The condition is satisfied for α < 3.4788... (see the numeric check below)
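A numeric check of this threshold (assuming scipy is available): we locate the α where ζ(α-2) = 2ζ(α-1), i.e. where <k^2> - 2<k> changes sign.

```python
from scipy.special import zeta
from scipy.optimize import brentq

def f(a):
    # <k^2> - 2<k>, up to the common positive factor 1/zeta(a)
    return zeta(a - 2) - 2 * zeta(a - 1)

print(brentq(f, 3.1, 4.0))   # ≈ 3.4788, the threshold quoted above
```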
Random graphs with power law degree
The above result is of little practical importance, since we rarely have a pure power-law degree distribution
We have seen that a distribution that follows a power law at its tail will have a finite <k^2> iff α > 3, and a finite <k> iff α > 2
Hence, if 2 < α ≤ 3 a giant component always exists (since <k^2> diverges while <k> remains finite)
When α > 3 a giant component might or might not exist
When α ≤ 2 a giant component always exists
Random graphs with power law degree
What is the size S of the giant component when one exists?
Recall, S = 1 - g_0(u), where u is a fixed point of g_1
For a pure power law we have:
g_1(z) = \frac{\mathrm{Li}_{\alpha-1}(z)}{z\, \zeta(\alpha-1)}
Hence,
u = \frac{\mathrm{Li}_{\alpha-1}(u)}{u\, \zeta(\alpha-1)} = \frac{\sum_{k=1}^{\infty} k^{-(\alpha-1)} u^k}{u\, \zeta(\alpha-1)} = \frac{\sum_{k=0}^{\infty} (k+1)^{-(\alpha-1)} u^k}{\zeta(\alpha-1)}
The numerator is strictly positive for non-negative values of u
Hence, u = 0 iff ζ(α-1) diverges
ζ(α-1) diverges for α ≤ 2
Random graphs with power law degree
Hence, for α ≤ 2, u = 0, and there is a giant component with S = 1 - g_0(0) = 1 - p_0 = 1!
The giant component fills the whole network!
Of course this holds true only in the limit of large n
For 2 < α ≤ 3.4788... there is a giant component that fills a proportion S of the network
For α > 3.4788... there is no giant component (i.e., S = 0)