
Advanced Algorithms (6311)
Gautam Das
Notes: 04/28/2009
Ranganath M R.
Outline
• Tail Inequalities
• Markov's Inequality
• Chebyshev's Inequality
• Chernoff Bounds
Tail Inequalities
• Markov's Inequality says that, for a non-negative random variable X with mean µ, the probability of X being greater than t is
  P(X > t) ≤ µ/t
• Chebyshev's Inequality:
  P(|X − µ| ≥ t·σ) ≤ 1/t²
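Both inequalities can be checked numerically. A minimal Python sketch (not part of the lecture; the coin-flip setting and the values of t are illustrative) compares the empirical tail probabilities of a sum of fair coin flips against the two bounds:

```python
import random

# X = number of heads among n fair coin flips; compare empirical tail
# probabilities against Markov's and Chebyshev's bounds.
n = 100
trials = 20000
random.seed(0)

samples = [sum(random.randint(0, 1) for _ in range(n)) for _ in range(trials)]
mu = n / 2               # E[X] for fair coins
sigma = (n ** 0.5) / 2   # std. dev. of Binomial(n, 1/2) = sqrt(n)/2

# Markov: P(X > t) <= mu/t for any t > 0 (X is non-negative)
t = 60
markov_bound = mu / t
empirical_markov = sum(1 for x in samples if x > t) / trials
assert empirical_markov <= markov_bound

# Chebyshev: P(|X - mu| >= t*sigma) <= 1/t^2
t = 2
cheb_bound = 1 / t ** 2
empirical_cheb = sum(1 for x in samples if abs(x - mu) >= t * sigma) / trials
assert empirical_cheb <= cheb_bound
```

In this setting Markov's bound is loose (it only uses the mean), while Chebyshev's is tighter because it also uses the variance — which is why the notes turn to even sharper Chernoff bounds next.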
Chernoff bounds
• For sums of independent Bernoulli trials we can obtain much sharper tail inequalities (exponentially sharp). The more the trials are repeated, the higher the chances of getting very accurate results. Let us see how this is possible.
• Example: imagine we have n coins (X1, …, Xn), and let the probability of heads for coin i be pi (so the probabilities are p1, …, pn).
  Now the random variable X = X1 + X2 + … + Xn,
  and µ = E[X] = p1 + p2 + … + pn in general.
• Some special cases
  – All coins are unbiased, i.e. pi = ½.
  – µ = n · pi = n · ½ = n/2, and σ = √n/2. For example, for n = 100, σ = √100/2 = 5.
• The Chernoff bounds are given by (δ > 0 is the relative deviation from the mean µ = E[X])
  – Upper tail (X above its mean):
    • P(X − µ ≥ δµ) ≤ [e^δ/(1 + δ)^(1+δ)]^µ ----------------- eqn 1
  – Lower tail (X below its mean):
    • P(µ − X ≥ δµ) ≤ e^(−µδ²/2)
  – Here µ appears as the power of the right-hand-side expression. Hence, if more trials are taken, µ = n/2 increases, and since the base e^δ/(1 + δ)^(1+δ) is less than 1, the bound decays exponentially and we get more accurate results.
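The two bounds above can be written as small functions; this sketch (names and the choice δ = 0.2 are illustrative, not from the notes) shows that for a fixed δ the upper-tail bound shrinks as µ grows, which is the point the notes make about taking more trials:

```python
import math

def chernoff_upper(mu, delta):
    # Upper tail: P(X - mu >= delta*mu) <= [e^delta / (1+delta)^(1+delta)]^mu
    base = math.exp(delta) / (1 + delta) ** (1 + delta)
    return base ** mu

def chernoff_lower(mu, delta):
    # Lower tail: P(mu - X >= delta*mu) <= e^(-mu * delta^2 / 2)
    return math.exp(-mu * delta ** 2 / 2)

# For fair coins mu = n/2; doubling n doubles mu and the bound decays.
for n in (100, 200, 400):
    mu = n / 2
    print(n, chernoff_upper(mu, 0.2), chernoff_lower(mu, 0.2))
```

Since the base e^δ/(1 + δ)^(1+δ) is strictly below 1 for δ > 0, raising it to the power µ makes the bound fall off exponentially in the number of trials.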
– An example problem to illustrate this:
  • The probability of a team winning a game is 1/3.
  • What is the probability that the team will win at least 50 out of 100 games?
  • µ = n · pi = 100 · 1/3 = 100/3
  • δ = (number of games to win − µ)/µ
      = (50 − 100/3)/(100/3) = ½
• Now, to calculate this probability, we substitute these values into eqn 1:
  – [e^½/(3/2)^(3/2)]^(100/3) = 0.027 (approx.)
  – Here, if we increase the number of games (in general, the number of trials), µ increases, and since the expression e^δ/(1 + δ)^(1+δ) evaluates to less than 1, the bound becomes smaller. Hence we get more accurate results when more trials are done.
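The arithmetic of this example can be reproduced directly (a short sketch; only the numbers from the notes are used, the variable names are my own):

```python
import math

# Team example: p = 1/3 chance of winning each of n = 100 games.
n, p = 100, 1 / 3
mu = n * p               # 100/3
delta = (50 - mu) / mu   # (50 - 100/3)/(100/3) = 1/2

# eqn 1: [e^delta / (1+delta)^(1+delta)]^mu
bound = (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu
print(round(bound, 3))   # ≈ 0.027, matching the notes

# With 200 games and the same delta, mu doubles and the bound shrinks:
bound_200 = (math.exp(delta) / (1 + delta) ** (1 + delta)) ** (200 * p)
```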
Derivation
• Let X = X1 + X2 + … + Xn
• Let Y = e^(tX) for some parameter t > 0
• P(X − µ ≥ δµ) = P(X ≥ (1 + δ)µ)
                = P(Y ≥ e^(t(1+δ)µ)) ≤ E[Y]/e^(t(1+δ)µ)   (by Markov's Inequality)
• Now E[Y] = E[e^(tX)] = E[e^(tX1 + tX2 + … + tXn)]
           = E[e^(tX1)] · E[e^(tX2)] · … · E[e^(tXn)]   (by independence of the Xi)
• Now let us consider E[e^(tXi)]:
  • Xi is either 0 or 1.
  • Xi is 0 with probability 1 − pi, and 1 with probability pi.
  • E[e^(tXi)] = pi·(e^t) + (1 − pi)·(1)   [if Xi = 1 then e^(tXi) = e^t; if Xi = 0 then e^(tXi) = 1]
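The independence step and the per-coin expectation above can be verified numerically. A small sketch (the coin biases in `ps` are illustrative, not from the notes) checks that E[e^(tX)] equals the product of the per-coin terms pi·e^t + (1 − pi):

```python
import math
import itertools

t = 0.3
ps = [0.5, 0.2, 0.7]  # illustrative biases for three independent coins

# Left side: exact E[e^(tX)] by enumerating all 0/1 outcomes.
lhs = 0.0
for outcome in itertools.product([0, 1], repeat=len(ps)):
    prob = 1.0
    for xi, pi in zip(outcome, ps):
        prob *= pi if xi == 1 else (1 - pi)
    lhs += prob * math.exp(t * sum(outcome))

# Right side: product of E[e^(tXi)] = pi*e^t + (1 - pi) over the coins.
rhs = 1.0
for pi in ps:
    rhs *= pi * math.exp(t) + (1 - pi)

assert abs(lhs - rhs) < 1e-12
```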
to be continued in next class