Quadratic Solvability


The PCP starting point
Overview



In this lecture we'll present the Quadratic Solvability problem.
We'll see that this problem is closely related to PCP.
We'll even use it to prove a (very, very weak...) PCP characterization of NP.
Quadratic Solvability

Def (QS[D, F]):
Instance: a set of n quadratic equations over a field F with at most D variables each (or equally: a set of dimension-D, total-degree-2 polynomials).
Problem: to decide if there is a common solution.

Example (F = Z_2, D = 1):
  y_1 = 0
  x_1^2 + x_1 = 0
  x_2 + 1 = 0
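To make the definition concrete, here is a minimal brute-force sketch (illustrative only; the helper name and encoding are my own) that decides a small QS instance over Z_p by trying every assignment:

```python
from itertools import product

def solve_qs(equations, num_vars, p):
    """Brute-force QS[D, F] for F = Z_p: each equation is a callable
    taking the full assignment and returning its value mod p.
    Returns a common solution, or None. Exponential in num_vars."""
    for assignment in product(range(p), repeat=num_vars):
        if all(eq(assignment) % p == 0 for eq in equations):
            return assignment
    return None

# The example above over Z_2, with variables ordered (y_1, x_1, x_2):
eqs = [lambda a: a[0],            # y_1 = 0
       lambda a: a[1]**2 + a[1],  # x_1^2 + x_1 = 0
       lambda a: a[2] + 1]        # x_2 + 1 = 0
print(solve_qs(eqs, 3, 2))        # -> (0, 0, 1)
```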
Solvability

A generalization of this problem:

Def (Solvability[D, F]):
Instance: a set of n polynomials over F with at most D variables each. Each polynomial has degree bound n in each of the variables.
Problem: to decide if there is a common root.
Solvability is Reducible to QS

For example, the equation y^2·x_1 + x_2^2·t + t^l·z + z + 1 = 0 can be rewritten as w_1·x_1 + w_2·t + w_3·z + z + 1 = 0 by introducing auxiliary variables with their own defining equations:
  w_1 = y^2
  w_2 = x_2^2
  w_3 = t^l
(w_3 = t^l is itself built from a chain of squarings, each of degree 2.)
The parameters (D, F) don't change (assuming D > 2)!

Could we use the same "trick" to show Solvability is reducible to Linear Solvability?
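The following sketch illustrates the trick in a generic form (my own encoding): it folds any monomial of total degree > 2 pairwise, introducing an auxiliary variable w = a·b together with its quadratic defining equation:

```python
from itertools import count

def reduce_to_quadratic(equations):
    """Rewrite equations (lists of (coeff, [variables]) monomials) so
    every monomial has total degree <= 2. Auxiliary variables w come
    with defining equations w - a*b = 0, themselves of degree 2."""
    fresh = count()
    defs = {}                       # (a, b) -> auxiliary variable, reused
    new_eqs = []

    def product_var(a, b):
        key = tuple(sorted((a, b)))
        if key not in defs:
            w = f"w{next(fresh)}"
            defs[key] = w
            new_eqs.append([(1, [w]), (-1, list(key))])  # w - a*b = 0
        return defs[key]

    for eq in equations:
        out = []
        for coeff, variables in eq:
            vs = list(variables)
            while len(vs) > 2:      # fold the monomial pairwise
                vs.append(product_var(vs.pop(), vs.pop()))
            out.append((coeff, vs))
        new_eqs.append(out)
    return new_eqs

# y^2*x_1 + x_2^2*t + z + 1 = 0 becomes quadratic plus defining equations:
eqs = [[(1, ["y", "y", "x1"]), (1, ["x2", "x2", "t"]), (1, ["z"]), (1, [])]]
for e in reduce_to_quadratic(eqs):
    print(e)
```

Note why the same trick cannot reach Linear Solvability: the defining equation w = a·b is itself quadratic, so degree 2 is where the recursion bottoms out.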
QS is NP-hard

Let us prove that QS is NP-hard by reducing 3-SAT to it:

Def (3-SAT):
Instance: a 3CNF formula (l_1 ∨ l_2 ∨ l_3) ∧ ... ∧ (l_{m-2} ∨ l_{m-1} ∨ l_m), where each literal l_i ∈ {x_j, ¬x_j} for some 1 ≤ j ≤ n.
Problem: to decide if this formula is satisfiable.
QS is NP-hard

Given an instance of 3-SAT, use the following transformation on each clause:

  Tr[x_i] = 1 - x_i
  Tr[¬x_i] = x_i
  (l_i ∨ l_{i+1} ∨ l_{i+2})  ↦  Tr[l_i] · Tr[l_{i+1}] · Tr[l_{i+2}] = 0

The corresponding instance of Solvability is the set of all resulting polynomials (which, assuming the variables are only assigned Boolean values, is equivalent).
QS is NP-hard

In order to remove the assumption we need to add, for every variable x_i, the equation:

  x_i · (1 - x_i) = 0

This concludes the description of a reduction from 3-SAT to Solvability[O(1), F] for any field F.

What is the maximal degree of the resulting equations? (Each clause polynomial is a product of three linear factors, so the degree is 3.)
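A small sketch of the whole reduction (clause transformation plus the Booleanity equations; the representation and names are my own, with polynomials as strings for readability):

```python
def sat3_to_polynomials(clauses):
    """Map a 3CNF formula to polynomial equations with a common root
    iff the formula is satisfiable. A clause is a tuple of nonzero
    ints: +j stands for x_j, -j for NOT x_j."""
    def tr(lit):
        # Tr[x_j] = 1 - x_j,  Tr[NOT x_j] = x_j
        return f"(1 - x{lit})" if lit > 0 else f"x{-lit}"

    eqs = [" * ".join(tr(l) for l in clause) + " = 0" for clause in clauses]
    variables = {abs(l) for c in clauses for l in c}
    eqs += [f"x{j} * (1 - x{j}) = 0" for j in sorted(variables)]  # Booleanity
    return eqs

# (x1 v ~x2 v x3) & (~x1 v x2 v x3)
for e in sat3_to_polynomials([(1, -2, 3), (-1, 2, 3)]):
    print(e)
```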
QS is NP-hard

According to the two previous reductions, 3-SAT reduces to Solvability[O(1), F], which in turn reduces to QS[O(1), F]; hence QS[O(1), F] is NP-hard for any field F.
Gap-QS

Def (Gap-QS[D, F, ε]):
Instance: a set of n quadratic equations over F with at most D variables each.
Problem: to distinguish between the following two cases:
- There is a common solution.
- No more than an ε fraction of the equations can be satisfied simultaneously.
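For intuition, the promise can be phrased via the best achievable fraction, computed here by an exponential brute-force sketch of my own; a Gap-QS instance guarantees this value is either 1 or at most ε:

```python
from itertools import product

def max_satisfiable_fraction(equations, num_vars, p):
    """Largest fraction of the equations that any single assignment
    over Z_p satisfies; for illustration on tiny instances only."""
    best = 0
    for assignment in product(range(p), repeat=num_vars):
        best = max(best, sum(eq(assignment) % p == 0 for eq in equations))
    return best / len(equations)
```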
Gap-QS and PCP

Def: L ∈ PCP[D, V, ε] if there is a polynomial-time algorithm which, for any input x, produces a set of efficient Boolean functions, each over at most D variables whose range is of size 2^V, so that:
- x ∈ L iff there exists an assignment to the variables which satisfies all the functions;
- x ∉ L iff no assignment can satisfy more than an ε fraction of the functions.

A quadratic equations system is exactly such a set of functions: its variables take values in F (so V = log|F|), and for each quadratic polynomial p_i(x_1, ..., x_D) we add the Boolean function φ_i(a_1, ..., a_D) ≡ [p_i(a_1, ..., a_D) = 0].

Hence: Gap-QS[D, F, ε] ⊆ PCP[D, log|F|, ε].
Gap-QS and PCP

Therefore, every language which is efficiently reducible to Gap-QS[D, F, ε] is also in PCP[D, log|F|, ε].
Thus, proving that Gap-QS[D, F, ε] is NP-hard also proves the PCP[D, log|F|, ε] characterization of NP.
And indeed, our goal henceforth will be proving Gap-QS[D, F, ε] NP-hard for the best D, ε and F we can.
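The containment is witnessed by a very simple verifier. A toy sketch (my own encoding: each equation is a callable plus the tuple of variable indices it mentions): the verifier samples an equation and reads only the at most D proof positions it depends on, each holding a value in F (log|F| bits):

```python
import random

def pcp_verifier(equations, proof, trials=1):
    """The 'proof' is an assignment; each check reads at most D
    positions. A common solution is always accepted; if at most an
    eps fraction of the equations are satisfiable, a single trial
    accepts with probability at most eps."""
    for _ in range(trials):
        poly, var_indices = random.choice(equations)
        if poly(*(proof[v] for v in var_indices)) != 0:
            return False
    return True

p = 5  # work over Z_5
eqs = [(lambda a, b: (a * a + b) % p, (0, 1)),   # x0^2 + x1 = 0
       (lambda a: (a + 4) % p, (1,))]            # x1 + 4 = 0
print(pcp_verifier(eqs, proof=[2, 1]))           # -> True
```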
Gap-QS[n, F, 2/|F|] is NP-hard

Proof: by reduction from QS[O(1), F].

An instance of QS[O(1), F] is a set of degree-2 polynomials p_1, p_2, p_3, ..., p_n. A satisfying assignment makes them all evaluate to 0:

  p_1  p_2  p_3  ...  p_n
   0    0    0   ...   0

but a non-satisfying assignment may still zero all but a few of them:

  p_1  p_2  p_3  ...  p_n
   0    3    7   ...   0
Gap-QS[n, F, 2/|F|] is NP-hard

In order to have a gap, we need an efficient degree-preserving transformation of the degree-2 polynomials such that any non-satisfying assignment satisfies only few of the resulting polynomials:

  p_1  p_2  p_3  ...  p_n   --transformation-->   p_1'  p_2'  p_3'  ...  p_m'

so that under a non-satisfying assignment the new values are mostly nonzero, e.g.:

  p_1'  p_2'  p_3'  ...  p_m'
   0     2     4    ...    3
Gap-QS[n, F, 2/|F|] is NP-hard

For such an efficient degree-preserving transformation E it must hold that:
1) E(0^n) = 0^m
2) $\forall v \ne 0^n:\ \Delta(E(v), 0^m) \ge 1 - 2/|F|$  (Δ denotes relative Hamming distance)

Thus E is an error-correcting code!
We shall now see examples of degree-preserving transformations which are also error-correcting codes.
The linear transformation: multiplication by a matrix

$$(p_1 \;\; p_2 \;\; \cdots \;\; p_n) \cdot \begin{pmatrix} c_{11} & \cdots & c_{1m} \\ \vdots & & \vdots \\ c_{n1} & \cdots & c_{nm} \end{pmatrix} = (p \cdot c_1 \;\; \cdots \;\; p \cdot c_m)$$

Each output entry p·c_j is the inner product of the vector of polynomials with a column of scalars, i.e., a linear combination of the original polynomials; hence the transformation is degree-preserving. It is computable in polynomial time as long as m = n^c.
The linear transformation: multiplication by a matrix

Let e = (e_1, ..., e_n) be the values of the polynomials under some assignment. Then

$$(e_1 \;\; e_2 \;\; \cdots \;\; e_n) \cdot \begin{pmatrix} c_{11} & \cdots & c_{1m} \\ \vdots & & \vdots \\ c_{n1} & \cdots & c_{nm} \end{pmatrix} = (e \cdot c_1 \;\; \cdots \;\; e \cdot c_m)$$

gives the values of the new polynomials under the same assignment, and it is the zero vector if e = 0^n.
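A quick numeric sketch of these two slides (numpy; Z_7 and the sizes are arbitrary choices of mine): the values of the new polynomials under an assignment are just the row vector of old values times the matrix, and the zero vector maps to the zero vector:

```python
import numpy as np

p = 7                                    # an arbitrary small prime field
rng = np.random.default_rng(0)
n, m = 4, 8
C = rng.integers(0, p, size=(n, m))      # the combining matrix

e = rng.integers(0, p, size=n)           # values of p_1..p_n under some assignment
new_values = e @ C % p                   # values of the m combined polynomials
print(new_values)

zero = np.zeros(n, dtype=int)
assert (zero @ C % p == 0).all()         # e = 0^n always maps to 0^m
```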
What's Ahead

We proceed with several examples of linear error-correcting codes:
- the Reed-Solomon code
- a random matrix
- and finally, even a code which suits our needs...
Using Reed-Solomon Codes

Define the matrix A as follows:

$$a_{ij} = \prod_{0 \le k < n,\; k \ne i} \frac{j - k}{i - k}$$

That's really Lagrange's interpolation formula in disguise...

One can prove that for any 0 ≤ j ≤ |F|-1, (vA)_j equals P(j), where P is the unique degree-(n-1) univariate polynomial for which P(i) = v_i for all 0 ≤ i ≤ n-1. For v ≠ 0^n, P is a nonzero polynomial of degree at most n-1, so it has at most n-1 roots; therefore the fraction of zeroes in vA is bounded by (n-1)/|F|.

Using multivariate polynomials we can even get ε = O(log n / |F|).
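A sketch of this matrix over a small prime field (p = 13 and n = 4 are my own parameters), checking both the interpolation property and the distance bound:

```python
p, n = 13, 4        # field Z_13 (prime, p > n), message length n

def inv(a):         # modular inverse via Fermat's little theorem
    return pow(a, p - 2, p)

# a_ij = prod_{k != i} (j - k)/(i - k): row i is the i-th Lagrange basis
# polynomial evaluated at every j in Z_p.
A = [[0] * p for _ in range(n)]
for i in range(n):
    for j in range(p):
        num = den = 1
        for k in range(n):
            if k != i:
                num = num * (j - k) % p
                den = den * (i - k) % p
        A[i][j] = num * inv(den) % p

v = [3, 1, 4, 1]                  # any nonzero vector
vA = [sum(v[i] * A[i][j] for i in range(n)) % p for j in range(p)]
assert vA[:n] == v                # interpolation: P(i) = v_i
zeros = sum(x == 0 for x in vA)
assert zeros <= n - 1             # a nonzero degree-(n-1) poly has <= n-1 roots
print(zeros / p)                  # fraction of zeroes <= (n-1)/|F|
```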
Using a Random Matrix

Lemma: a random matrix A ∈ F^{n×m} satisfies, with high probability:

$$\forall v \ne 0^n:\quad \frac{|\{i : (vA)_i = 0\}|}{m} \le \frac{2}{|F|}$$

that is, for every nonzero v the fraction of zeroes in the output vector vA is at most 2/|F|.
Using a Random Matrix

Proof (by the probabilistic method): let v ∈ F^n, v ≠ 0^n.
Because the inner product of v and a uniformly random vector is uniformly distributed:

$$\forall 1 \le i \le m:\quad \Pr_A[(vA)_i = 0] = \frac{1}{|F|}$$

Hence X_v = |{i : (vA)_i = 0}| is a binomial random variable with parameters m and |F|^{-1}. For this random variable we can bound the probability Pr[X_v ≥ 2m|F|^{-1}] (the probability that the fraction of zeroes exceeds 2|F|^{-1}).
Using a Random Matrix

The Chernoff bound: for a binomial random variable X_v with parameters m and |F|^{-1}:

$$\Pr\left[\,\left|\tfrac{1}{m}X_v - |F|^{-1}\right| \ge \delta\,\right] \le 2e^{-\frac{\delta^2 m}{4|F|^{-1}}}$$

Hence, taking δ = |F|^{-1}:

$$\Pr\left[X_v \ge 2m|F|^{-1}\right] \le 2e^{-\frac{m}{4|F|}}$$
Using a Random Matrix

Overall, the number of different vectors v is at most |F|^n. Hence, by the union bound (the probability of a union of events is at most the sum of their probabilities), we can multiply the previous bound by the number of different vectors v:

$$\Pr\left[\exists v \ne 0^n.\ X_v \ge 2m|F|^{-1}\right] \le |F|^n \cdot 2e^{-\frac{m}{4|F|}}$$

This probability is smaller than 1 for m = O(n|F|log|F|). Hence, for such m, a random matrix satisfies the lemma with positive probability. ∎
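An empirical sketch of the lemma on tiny, arbitrary parameters of my own (exhaustive over all nonzero v; the 2/|F| bound holds for most seeds, as the proof predicts):

```python
import itertools, random

p, n, m = 3, 3, 40
random.seed(1)
A = [[random.randrange(p) for _ in range(m)] for _ in range(n)]

worst = 0.0
for v in itertools.product(range(p), repeat=n):
    if any(v):
        vA = [sum(v[i] * A[i][j] for i in range(n)) % p for j in range(m)]
        worst = max(worst, sum(x == 0 for x in vA) / m)

print(worst, "<= 2/|F| =", 2 / p)
```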
Deterministic Construction

Define the matrix A ∈ F^{n×m} as follows:
- Assume F = Z_p. Let k = log_p(n) + 1. (Assume w.l.o.g. k ∈ N.)
- Let GF(p^k) be the dimension-k extension field of F.
- Associate each row with an index i, 1 ≤ i ≤ p^{k-1}; hence n = p^{k-1}.
- Associate each column with a pair (x, y) ∈ GF(p^k) × GF(p^k); hence m = p^{2k}.
Deterministic Construction

And define A(i, (x, y)) = <x^i, y>, where x^i is computed in GF(p^k) and <·, ·> is the inner product of field elements viewed as k-dimensional vectors over Z_p.

So A is a p^{k-1} × p^{2k} matrix whose (i, (x, y)) entry is <x^i, y>.
Analysis

For any vector v ∈ F^n and any column (x, y) ∈ GF(p^k) × GF(p^k):

$$(vA)_{(x,y)} = \sum_{i=1}^{p^{k-1}} v_i \langle x^i, y \rangle = \Big\langle \sum_{i=1}^{p^{k-1}} v_i x^i,\ y \Big\rangle = \langle G(x), y \rangle$$

where G(x) = Σ_i v_i x^i. For v ≠ 0^n, G is a nonzero polynomial of degree at most p^{k-1}, so it has at most p^{k-1} roots; and for each x with G(x) ≠ 0, exactly p^{k-1} of the p^k vectors y are orthogonal to G(x). Hence the number of zeroes in vA, for v ≠ 0^n, is at most

$$\underbrace{p^{k-1} \cdot p^k}_{x,y:\ G(x)=0} + \underbrace{(p^k - p^{k-1}) \cdot p^{k-1}}_{x,y:\ G(x) \ne 0,\ \langle G(x), y \rangle = 0}$$

And thus the fraction of zeroes is at most

$$\frac{p^{k-1} p^k + (p^k - p^{k-1}) p^{k-1}}{p^{2k}} \le \frac{2}{p}$$
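A toy instance of the construction (my own choices: p = 3, k = 2, so n = p^(k-1) = 3 and m = p^(2k) = 81, with GF(9) realized as Z_3[t]/(t^2 + 1)), verifying the 2/p bound exhaustively:

```python
from itertools import product

P = 3

def mul(a, b):
    # (a0 + a1*t)(b0 + b1*t) mod t^2 + 1, elements encoded as (a0, a1)
    return ((a[0] * b[0] - a[1] * b[1]) % P, (a[0] * b[1] + a[1] * b[0]) % P)

def power(x, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, x)
    return r

F9 = list(product(range(P), repeat=2))      # the 9 field elements
cols = list(product(F9, F9))                # columns: pairs (x, y), m = 81
n = P                                       # rows: 1 <= i <= p^(k-1) = 3

# A[i-1][(x, y)] = <x^i, y>, inner product of coefficient vectors over Z_3
A = [[sum(c * d for c, d in zip(power(x, i), y)) % P for (x, y) in cols]
     for i in range(1, n + 1)]

worst = 0.0
for v in product(range(P), repeat=n):       # every nonzero v in F^n
    if any(v):
        vA = [sum(v[i] * A[i][j] for i in range(n)) % P for j in range(len(cols))]
        worst = max(worst, sum(z == 0 for z in vA) / len(cols))

print(worst, "<= 2/p =", 2 / P)             # here 5/9 <= 2/3
```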
Summary of the Reduction

Given an instance {p_1, ..., p_n} of QS[O(1), F], we found a matrix A which satisfies:

$$\forall v \ne 0^n:\quad |\{j : (vA)_j = 0\}| / m < 2|F|^{-1}$$

Hence {p_1, ..., p_n} ∈ QS[O(1), F] if and only if {(pA)_1, ..., (pA)_m} ∈ Gap-QS[O(n), F, 2|F|^{-1}], where (pA)_j = Σ_i p_i a_{ij} is the j-th linear combination of the original polynomials (each combination may involve all the variables, hence D = O(n)).

This proves that Gap-QS[O(n), F, 2|F|^{-1}] is NP-hard!
Hitting the Road

This proves a PCP characterization with D = O(n) (hardly a “local” test...).
Eventually we'll prove a characterization with D = O(1) ([DFKRS]), using the results presented here as our starting point.