Transcript Document

Randomized Computation
Roni Parshani
Orly Margalit
Eran Mantzur
Avi Mintz
025529199
037616638
028015329
017629262
RP – Random Polynomial Time
Notation:
• L is a language
• M is a probabilistic polynomial-time Turing machine
Definition:
L ∈ RP if ∃M such that
• x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ ½
• x ∉ L ⇒ Prob[ M(x) = 0 ] = 1
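As a toy illustration of this kind of one-sided error (not from the original notes), here is a randomized identity test for polynomials given as coefficient lists: the language "the two lists represent different polynomials" has an RP-style tester. The name `rp_style_test` and the choice of modulus are ours.

```python
import random

def rp_style_test(p, q, trials=1, prime=10**9 + 7):
    """One-sided-error test for 'p and q are different polynomials'.
    p, q are coefficient lists (index i holds the coefficient of x^i).
    If p != q, a random evaluation point exposes the difference with
    high probability; if p == q, every evaluation agrees, so the
    answer 0 is never wrong on equal inputs."""
    def eval_at(coeffs, x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime

    for _ in range(trials):
        r = random.randrange(prime)
        if eval_at(p, r) != eval_at(q, r):
            return 1    # certainly different
    return 0            # probably equal

# (x + 1)^2 and x^2 + 2x + 1 are equal, so the test always answers 0.
print(rp_style_test([1, 2, 1], [1, 2, 1]))
```

Note the RP shape: a "1" answer is always correct, while a "0" answer is only probably correct.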
RP – Random Polynomial Time
The disadvantage of RP (resp. coRP) is that when the input does not belong to the language (resp. belongs to the language), the machine must return a correct answer every time.
Definition (characteristic function of L):
• x ∈ L ⇒ L(x) = 1
• x ∉ L ⇒ L(x) = 0
RP ⊆ NP
• Proof:
– Given: L ∈ RP
– Aim: L ∈ NP
L ∈ RP ⇒ ∃M such that:
x ∈ L ⇒ at least 50% of the random strings y give M(x,y) = 1 ⇒ ∃y : M(x,y) = 1
x ∉ L ⇒ ∀y : M(x,y) = 0
So the same M, viewed as a verifier with y as the witness, shows L ∈ NP.
coRP - Complementary Random
Polynomial Time
Definition:
L coRP if M such that
• x  L  Prob[ M(x) = 1 ] = 1
• x  L  Prob[ M(x) = 0 ]  ½
An alternative way to define coRP is
_
coRP = { L : L  RP }
coRP  co-NP
• Proof:
– Given: L ∈ coRP
– Aim: L ∈ co-NP
L ∈ coRP ⇒ L̄ ∈ RP ⇒ L̄ ∈ NP ⇒ L ∈ co-NP
RP1
p(.) is a positive polynomial
Definition:
L ∈ RP1 if ∃M, p(.) such that
• x ∈ L ⇒ Prob[ M(x,r) = 1 ] ≥ 1/p(|x|)
• x ∉ L ⇒ Prob[ M(x,r) = 0 ] = 1
RP2
p(.) is a positive polynomial
Definition:
L ∈ RP2 if ∃M, p(.) such that
• x ∈ L ⇒ Prob[ M(x,r) = 1 ] ≥ 1 − 2^(-p(|x|))
• x ∉ L ⇒ Prob[ M(x,r) = 0 ] = 1
RP1 = RP2 = RP
• Aim: RP1 = RP2
RP2 ⊆ RP1
⇒ for every big enough x,
1/p(|x|) < 1 − 2^(-p(|x|))
so the RP2 acceptance bound already implies the RP1 bound.
RP1 = RP2 = RP
RP1 ⊆ RP2
L ∈ RP1 ⇒ ∃M, p(.) such that
x ∈ L ⇒ Prob[ M(x,r) = 1 ] ≥ 1/p(|x|)
We run M(x,r) t(|x|) times:
• If in any of the runs M(x,r) = 1 ⇒ output is 1
• If in all of the runs M(x,r) = 0 ⇒ output is 0
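The repeat-and-OR amplifier above can be sketched in Python (not part of the original notes; `weak_machine` and `amplify_rp` are hypothetical names, and the weak machine's acceptance probabilities are chosen for the demo):

```python
import math
import random

def amplify_rp(M, x, p):
    """RP1 -> RP2 amplification from the proof: run M independently
    t(|x|) = ceil(p(|x|)^2 / log2(e)) times and answer 1 iff some run
    does.  A yes-instance is then missed with probability at most
    (1 - 1/p)^t <= 2^-p(|x|)."""
    t = math.ceil(p(len(x)) ** 2 * math.log(2))   # p^2 / log2(e) = p^2 * ln 2
    return 1 if any(M(x) == 1 for _ in range(t)) else 0

# Hypothetical weak machine: on yes-instances (containing a "1") it
# accepts only with probability 1/p(|x|); on no-instances, never.
def weak_machine(x):
    return 1 if "1" in x and random.random() < 1 / max(4, len(x)) else 0

p = lambda n: max(4, n)
print(amplify_rp(weak_machine, "0000", p))   # no-instance: always 0
```

One-sided error is preserved: a no-instance can never produce a 1 in any run, so amplification only helps the yes side.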
RP1 ⊆ RP2
Select t(|x|) ≥ p(|x|)² / log₂e
Therefore if x ∉ L ⇒ output is 0
If x ∈ L, the output is 0 only if M(x,r) = 0 in all t(|x|) runs:
( Prob[ M(x,r) = 0 ] )^t(|x|) ≤ ( 1 − 1/p(|x|) )^t(|x|) ≤ ( 1 − 1/p(|x|) )^( p(|x|)²/log₂e ) ≤ 2^(-p(|x|))
RP1 ⊆ RP2
⇒ So the probability of outputting 1 is at least 1 − 2^(-p(|x|))
⇒ L ∈ RP2
Conclusion:
RP2 ⊆ RP ⊆ RP1 (immediate from the definitions) and RP1 ⊆ RP2 (shown above),
therefore RP1 = RP = RP2.
BPP – Bounded-Error Probabilistic
Polynomial Time
Definition:
L BPP if M such that
• x  L  Prob[ M(x) = 1 ]  ⅔
• x  L  Prob[ M(x) = 1 ] < 1 3
In other words:
• x : Prob[ M(x) = L(x) ]  ⅔
coBPP = BPP
coBPP = { L : L̄ ∈ BPP }
= { L : ∃M̄ : Prob[ M̄(x) = L̄(x) ] ≥ ⅔ }
= { L : ∃M : Prob[ M(x) = L(x) ] ≥ ⅔ }
= BPP
where M = 1 − M̄(.)
(M(.) exists iff M̄(.) exists)
BPP1
Previously we defined stricter and weaker definitions for RP; in a similar way we will do so for BPP.
Notation:
• p(.) – positive polynomial
• f – polynomial-time computable function
Definition:
L ∈ BPP1 if ∃M, p(.), f such that
• x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ f(|x|) + 1/p(|x|)
• x ∉ L ⇒ Prob[ M(x) = 1 ] < f(|x|) − 1/p(|x|)
BPP = BPP1
Proof:
Aim: BPP ⊆ BPP1
Take f(|x|) = ½ and p(|x|) = 6.
This gives the original definition of BPP.
BPP = BPP1
Proof:
Aim: BPP1 ⊆ BPP
L ∈ BPP1 ⇒ ∃M such that
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ f(|x|) + 1/p(|x|)
x ∉ L ⇒ Prob[ M(x) = 1 ] < f(|x|) − 1/p(|x|)
BPP1 ⊆ BPP
We want to decide with Prob > ⅔ whether
0 ≤ q ≤ f(|x|) − 1/p(|x|)  or  f(|x|) + 1/p(|x|) ≤ q ≤ 1,
where q = Prob[ M(x) = 1 ].
Define: M’ runs M(x) n times, where run i returns t_i.
If (1/n) Σ_{i=1}^{n} t_i > f(|x|), M’ returns YES, else NO.
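The vote-and-threshold machine M’ can be sketched in Python (a sketch, not part of the notes; `gap_machine` is a hypothetical BPP1-style machine with threshold f = 0.5 and gap 1/p(|x|) = 0.1):

```python
import random

def majority_machine(M, x, f, n):
    """M' from the proof: run M(x) n times and answer YES iff the
    empirical acceptance frequency exceeds the threshold f(|x|)."""
    return 1 if sum(M(x) for _ in range(n)) / n > f(len(x)) else 0

# Hypothetical machine: accepts with probability 0.6 on yes-instances
# and 0.4 on no-instances, i.e. threshold 0.5 with gap 0.1.
def gap_machine(x):
    q = 0.6 if x.startswith("yes") else 0.4
    return 1 if random.random() < q else 0

f = lambda n: 0.5
print(majority_machine(gap_machine, "yes-case", f, 2000))
```

With n = 2000 samples and gap 0.1, the Chernoff bound computed on the next slides makes the error probability astronomically small.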
BPP1 ⊆ BPP
Calculation of n
We run n independent Bernoulli variables { x_i }_{i=1}^{n}
with p ≤ ½ and ε : 0 < ε ≤ p(1 − p):
Prob[ | (Σ_{i=1}^{n} x_i) / n − p | ≥ ε ] < 2e^( −ε²n / (2p(1−p)) ) ≤ 2e^( −ε²n / (2·¼) ) = 2e^( −2ε²n )
BPP1 ⊆ BPP
Choose:
ε = 1/p(|x|)
and
n ≥ ln 6 / (2ε²)
so that 2e^( −2ε²n ) ≤ ⅓.
Result: M’ decides L with Prob > ⅔.
BPP2
Notation:
• p(.) – positive polynomial
Definition:
L ∈ BPP2 if ∃M, p(.) such that
• ∀x : Prob[ M(x) = L(x) ] ≥ 1 − 2^(-p(|x|))
BPP = BPP2
Proof:
Aim: BPP2 ⊆ BPP
Take p(|x|) = −ln(⅓)/ln(2), so that 2^(-p(|x|)) = ⅓.
This gives the original definition of BPP.
BPP = BPP2
Proof:
Aim: BPP ⊆ BPP2
L ∈ BPP ⇒ ∃M : ∀x Prob[ M(x) = L(x) ] ≥ ⅔
Define: M’ runs M(x) n times, where run i returns t_i.
If (1/n) Σ_{i=1}^{n} t_i > ½, M’ returns YES, else NO.
We know: Exp[M(x)] ≥ ⅔ if x ∈ L
Exp[M(x)] ≤ ⅓ if x ∉ L
BPP ⊆ BPP2
Chernoff’s Inequality:
Let {X_1, X_2, …, X_n} be a set of independent Bernoulli variables with the same expectation p ≤ ½, and ε : 0 < ε ≤ p(1 − p).
Then
Prob[ | (Σ_{i=1}^{n} X_i) / n − p | ≥ ε ] ≤ 2e^( −2ε²n )
BPP ⊆ BPP2
From Chernoff’s inequality (with ε = 1/6):
Prob[ | (1/n) Σ_{i=1}^{n} t_i − Exp[M(x)] | ≤ 1/6 ] ≥ 1 − 2e^( −n/18 )
But if | (1/n) Σ_{i=1}^{n} t_i − Exp[M(x)] | ≤ 1/6,
then M’ returns a correct answer.
BPP ⊆ BPP2
Prob[ M’(x) = L(x) ] ≥ 1 − 2e^( −n/18 )
For every polynomial p(.) we can choose n (polynomial in |x|) such that
2e^( −n/18 ) ≤ 2^( −p(|x|) )
⇒ Prob[ M’(x) = L(x) ] ≥ 1 − 2^( −p(|x|) )
⇒ L ∈ BPP2
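The majority-vote construction and its error bound can be checked numerically. A sketch, not part of the notes; `two_thirds_machine` is a hypothetical machine that is correct with probability exactly ⅔ on a yes-instance:

```python
import math
import random

def majority_vote(M, x, n):
    """M' from the proof: n independent runs of a (2/3)-correct
    machine, majority answer.  By the Chernoff bound the error
    probability is at most 2e^(-n/18), so taking n = Theta(p(|x|))
    drives it below 2^(-p(|x|))."""
    return 1 if sum(M(x) for _ in range(n)) > n / 2 else 0

# Hypothetical (2/3)-correct machine on a yes-instance.
def two_thirds_machine(x):
    return 1 if random.random() < 2 / 3 else 0

# With n = 540 the bound gives error at most 2e^(-30).
bound = 2 * math.exp(-540 / 18)
print(majority_vote(two_thirds_machine, "x", 540), bound)
```

The point of the calculation above is exactly this exponential decay: linear extra work buys exponentially small error.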
RP ⊆ BPP
Proof:
L ∈ RP if ∃M such that
• x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ ½
• x ∉ L ⇒ Prob[ M(x) = 0 ] = 1
We previously proved BPP = BPP1.
If we instantiate the BPP1 definition with f(.) = ¼ and p(.) = 4,
this gives the original definition of RP.
P ⊆ BPP
Proof:
L ∈ P ⇒ ∃M such that M(x) = L(x)
⇒ ∀x : Prob[ M(x) = L(x) ] = 1 ≥ ⅔
⇒ L ∈ BPP
PSPACE
Definition:
L ∈ PSPACE if ∃M such that M(x) = L(x)
and ∃p such that M uses p(|x|) space.
(No time restriction)
PP – Probabilistic Polynomial
Time
Definition:
L ∈ PP if ∃M such that
• x ∈ L ⇒ Prob[ M(x) = 1 ] > ½
• x ∉ L ⇒ Prob[ M(x) = 0 ] > ½
In other words
• ∀x : Prob[ M(x) = L(x) ] > ½
PP ⊆ PSPACE
Definition: (reminder)
L ∈ PP if ∃M such that
• ∀x : Prob[ M(x) = L(x) ] > ½
Proof:
L ∈ PP ⇒ ∃M, p(.) such that
∀x : Prob[ M(x,r) = L(x) ] > ½
and M is polynomial time.
• Over the random choices of r, M is correct more than 50% of the time.
PP ⊆ PSPACE
Aim: L ∈ PSPACE
• Run M on every single r.
• Count the number of received “1” and “0”.
• The correct answer is the greater count.
PP ⊆ PSPACE
• By the definition of PP, for every L ∈ PP this algorithm is always correct.
• M(x,r) is polynomial in space
⇒ the new algorithm is polynomial in space
⇒ L ∈ PSPACE
Claim: PP = PP1
Here PP1 relaxes the x ∉ L condition of PP from strict to non-strict:
L ∈ PP1 if ∃M such that
• x ∈ L ⇒ Prob[ M(x) = 1 ] > ½
• x ∉ L ⇒ Prob[ M(x) = 0 ] ≥ ½
PP ⊆ PP1
If we have a machine that satisfies PP, it also satisfies PP1
(PP is stricter than PP1: it demands probability greater than ½, while PP1 demands only greater than or equal to ½), so clearly
L ∈ PP ⇒ L ∈ PP1
PP1 ⊆ PP
Let L ∈ PP1 be decided by a machine M that uses p(|x|) coin tosses.
Motivation
The trick is to build a machine that shifts the answer of M towards the NO direction with a very small probability, smaller than the smallest probability difference that M could have. So if M is biased towards YES, our shift will not change the direction of the bias. But if there is no bias (or a bias towards NO), our shift will give us a bias towards the NO answer.
Proof:
Let M’ be defined as:
M’(x, (a_1, a_2, …, a_{p(|x|)+1}, b_1, b_2, …, b_{p(|x|)})) def=
if a_1 = a_2 = … = a_{p(|x|)+1} = 0 then return NO
else return M(x, (b_1, b_2, …, b_{p(|x|)}))
M’ chooses one of two moves:
• With probability 2^( −(p(|x|)+1) ) return NO
• With probability 1 − 2^( −(p(|x|)+1) ) invoke M
If x ∈ L:
Prob[ M’(x) = 1 ] = Prob[ M(x) = 1 ] · (1 − 2^( −(p(|x|)+1) ))
≥ (½ + 2^( −p(|x|) )) · (1 − 2^( −(p(|x|)+1) ))
= ½ + 2^( −p(|x|) ) − 2^( −(p(|x|)+2) ) − 2^( −(2p(|x|)+1) )
> ½
(Prob[ M(x) = 1 ] ≥ ½ + 2^( −p(|x|) ) because M tosses only p(|x|) coins, so a probability strictly above ½ must be at least ½ + 2^( −p(|x|) ).)
If x ∉ L:
Prob[ M’(x) = 0 ] = Prob[ M(x) = 0 ] · (1 − 2^( −(p(|x|)+1) )) + 2^( −(p(|x|)+1) )
≥ ½ · (1 − 2^( −(p(|x|)+1) )) + 2^( −(p(|x|)+1) )
= ½ + 2^( −(p(|x|)+2) )
> ½
⇒ L ∈ PP
Claim: NP ⊆ PP
Suppose that L ∈ NP is decided by a non-deterministic machine M with a running time bounded by the polynomial p(|x|).
The following machine M’ then decides L:
M’(x, (b_1, b_2, …, b_{p(|x|)+1})) def=
if b_1 = 1 then return M(x, (b_2, …, b_{p(|x|)+1}))
else return YES
M’ uses its random coin tosses as a witness for M, except for one toss that it does not pass on to M. This toss is used to choose its move: one of the two possible moves leads to the ordinary computation of M on the same input (with the random coins serving as the witness); the other leads to a computation that always accepts.
Consider a string x.
If M doesn’t have an accepting computation, then the probability that M’ answers 1 is exactly ½.
On the other hand, if M has at least one accepting computation, the probability that M’ answers 1 is greater than ½.
So we get that:
x ∈ L ⇒ Prob[ M’(x) = 1 ] > ½
x ∉ L ⇒ Prob[ M’(x) = 1 ] = ½
Meaning L ∈ PP1, and by the previous claim (PP = PP1) we get that L ∈ PP.
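The acceptance probabilities of this M’ can also be computed exactly by enumeration. A sketch, not part of the notes; `exact_verifier` is a hypothetical NP verifier whose only accepting witness is x itself:

```python
from fractions import Fraction
from itertools import product

def np_to_pp_accept_prob(verifier, x, m):
    """Exact acceptance probability of the M' from the proof: the
    first coin chooses between running the NP verifier on the
    remaining m coins (treated as a witness) and accepting outright."""
    accepting = 0
    for bits in product((0, 1), repeat=m + 1):
        if bits[0] == 1:
            accepting += verifier(x, bits[1:])
        else:
            accepting += 1                 # the always-accept branch
    return Fraction(accepting, 2 ** (m + 1))

# Hypothetical verifier: the only accepting witness is x itself.
exact_verifier = lambda x, w: 1 if w == x else 0

print(np_to_pp_accept_prob(exact_verifier, (1, 0, 1), 3))   # > 1/2
print(np_to_pp_accept_prob(lambda x, w: 0, (1, 0, 1), 3))   # exactly 1/2
```

Even a single accepting witness out of 2^m tips the probability strictly above ½, which is all that PP1 requires.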
ZPP – Zero Error Probability
We define a probabilistic Turing machine which is allowed to reply “I Don’t Know”, symbolized by “┴”.
Definition:
L ∈ ZPP if ∃M such that
• ∀x : Prob[ M(x) = ┴ ] ≤ ½
• ∀x : Prob[ M(x) = L(x) or M(x) = ┴ ] = 1
Claim: ZPP = RP ∩ coRP
ZPP ⊆ RP
Take L ∈ ZPP. Let M be a “ZPP machine”.
We will build a machine M’ that decides L according to the definition of RP:
M’(x) def=
b ← M(x)
if b = ┴ then output 0
else output b itself
If x ∉ L, then by returning 0 when M(x) = ┴ we always answer correctly, because in this case
M(x) ≠ ┴ ⇒ M’(x) = M(x) = L(x) = 0
If x ∈ L, the probability of getting the right answer with M’ is at least ½, since M returns a definite answer with probability at least ½ and M’s definite answers are always correct.
ZPP ⊆ coRP
In the same way, it can be seen that by defining M’(x) as:
M’(x) def=
b ← M(x)
if b = ┴ then output 1
else output b itself
we get that ZPP ⊆ coRP.
RP ∩ coRP ⊆ ZPP
Assume that L ∈ RP ∩ coRP. Let M_RP be the RP machine and M_coRP the coRP machine. We define M’(x) as follows:
M’(x) def=
run M_RP(x); if it says YES then return 1
run M_coRP(x); if it says NO then return 0
otherwise return ┴
If x ∈ L, then we will get a YES answer from M_RP, and hence from M’, with probability at least ½.
If x ∉ L, then we will get a NO answer from M_coRP, and hence from M’, with probability at least ½.
M’ never gives a wrong definite answer: for x ∈ L, M_coRP never says NO, and for x ∉ L, M_RP never says YES. Hence Prob[ M’(x) = ┴ ] ≤ ½ and M’ satisfies the definition of ZPP.
⇒ RP ∩ coRP ⊆ ZPP
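The combiner M’ can be sketched in Python (a sketch only; the two one-sided machines `m_rp` and `m_corp` are hypothetical, built for the toy language "x starts with 1"):

```python
import random

def zpp_machine(m_rp, m_corp, x):
    """M' from the proof: 1 if the RP machine says YES, 0 if the coRP
    machine says NO, and '┴' (don't know) otherwise.  Definite
    answers are always correct; '┴' occurs with probability <= 1/2."""
    if m_rp(x) == 1:
        return 1
    if m_corp(x) == 0:
        return 0
    return "┴"

# Hypothetical machines for the language "x starts with 1":
def m_rp(x):    # may miss yes-instances, never accepts no-instances
    return 1 if x.startswith("1") and random.random() < 0.5 else 0

def m_corp(x):  # may miss no-instances, never rejects yes-instances
    return 0 if (not x.startswith("1")) and random.random() < 0.5 else 1

print(zpp_machine(m_rp, m_corp, "1abc"))
```

Whatever the coin tosses, a no-instance can only yield 0 or ┴, and a yes-instance only 1 or ┴: the two one-sided errors cancel into zero error.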
RSPACE – Randomized Space
Complexity
Definition:
RSPACE(s) = { L ∈ RP : the RP machine M uses at most s(|x|) space and exp(s(|x|)) time }
badRSPACE(s) = RSPACE(s) without the time restriction.
Claim: badRSPACE = NSPACE
badRSPACE ⊆ NSPACE
Let L ∈ badRSPACE.
If x ∈ L, that means there is at least one witness (a random string r with M(x,r) = 1), and the non-deterministic machine of NSPACE will guess it.
If x ∉ L, that means there are no witnesses at all, therefore the non-deterministic machine of NSPACE will also not find a solution.
NSPACE ⊆ badRSPACE
Let L ∈ NSPACE. M is the non-deterministic Turing machine which decides L in space S(|x|).
If x ∈ L, there exists r of length exp(S(|x|)) so that M(x,r) = 1, where r is the non-deterministic guess used by M.
Therefore the probability of selecting r so that M(x,r) = 1 is at least 2^( −exp(S(|x|)) ).
So if we repeatedly invoke M(x,·) on random r’s, we can expect that after 2^( exp(S(|x|)) ) tries we will see an accepting computation.
So what we want our machine M’ to do is run M on x and a newly selected random r (of length exp(S(|x|))) for about 2^( exp(S(|x|)) ) tries, and accept iff M accepts in one of these tries.
Problem:
In order to count to 2^( exp(S(|x|)) ) we need a counter that uses space exp(S(|x|)), and we only have S(|x|).
Solution:
We will use a randomized counter that uses only S(|x|) space.
We flip k = log₂ 2^( exp(S(|x|)) ) = exp(S(|x|)) coins.
If all are heads then stop, else go on. The expected number of tries is 2^k.
But the real counter only needs to count to k, and therefore only needs space
log₂ k = log₂ log₂ 2^( exp(S(|x|)) ) ≈ S(|x|)
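The randomized counter can be simulated directly (a sketch, not part of the notes; `randomized_counter` is our name). Each round flips k coins and stops when all come up heads, which takes 2^k rounds in expectation, while only the loop index up to k is ever stored:

```python
import random

def randomized_counter(k):
    """Approximate counting to 2^k in O(log k) space: flip k coins per
    round and stop on the all-heads round.  The number of completed
    rounds is geometric with mean 2^k."""
    rounds = 0
    while True:
        rounds += 1
        if all(random.random() < 0.5 for _ in range(k)):
            return rounds

# With k = 3 the expected number of rounds is 2^3 = 8.
samples = [randomized_counter(3) for _ in range(2000)]
avg = sum(samples) / len(samples)
print(avg)
```

The counter does not stop after exactly 2^k tries, but in expectation it does, which is all the badRSPACE simulation needs.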