Accelerating Random Walks
Wei Wei and Bart Selman
Dept. of Computer Science
Cornell University
Introduction – local search
 Local search methods are a viable alternative to
backtrack style methods for solving Boolean
satisfiability (SAT) problems.
Early methods were based purely on greedy
hill-climbing search (e.g., GSAT).
 Later, random “walk-style” methods (WalkSat and
its variants) substantially improved performance
— such methods combine a random walk strategy
with a greedy search bias.
Introduction - practice
 Random walk-style methods are successful on hard
randomly generated instances, as well as on a
number of real-world benchmarks.
 However, they are generally less effective in highly
structured domains compared to backtrack methods
such as DPLL.
Key issue: a random walk needs O(N^2) flips to
propagate dependencies among variables, while
unit propagation in DPLL takes only O(N).
 In this talk, we will show how one can accelerate
random walk search methods.
Overview

Random Walk Strategies
- unbiased random walk
- biased random walk
Chain Formulas
- binary chains
- ternary chains
Practical Problems
Conclusion and Future Directions
Unbiased (Pure) Random Walk for SAT

Procedure Random-Walk (RW)
  Start with a random truth assignment
  Repeat
    c := an unsatisfied clause chosen at random
    x := a variable in c chosen at random
    flip the truth value of x
  Until a satisfying assignment is found
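For concreteness, here is a minimal Python sketch of the RW procedure above. The clause representation (a list of clauses, each a list of signed integers, DIMACS-style) and the max_flips cutoff are our own illustrative choices, not part of the procedure as stated.

import random

def unsatisfied(clauses, assign):
    # clauses not satisfied by the current assignment
    return [c for c in clauses
            if not any(assign[abs(l)] == (l > 0) for l in c)]

def random_walk(clauses, n_vars, max_flips=10**6, rng=random):
    # Unbiased random walk (RW) for SAT
    assign = [None] + [rng.random() < 0.5 for _ in range(n_vars)]  # random start
    for _ in range(max_flips):
        unsat = unsatisfied(clauses, assign)
        if not unsat:
            return assign              # satisfying assignment found
        c = rng.choice(unsat)          # an unsatisfied clause chosen at random
        x = abs(rng.choice(c))         # a variable in c chosen at random
        assign[x] = not assign[x]      # flip the truth value of x
    return None                        # give up after max_flips (cutoff is ours)

For example, random_walk([[3, -4], [-3, 4]], 4) searches a tiny 2-SAT instance over x_3 and x_4.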
Unbiased RW on any satisfiable
2SAT Formula
 Given a satisfiable 2SAT formula with n
variables, a satisfying assignment will be
reached by Unbiased RW in O(n2) steps with
high probability. (Papadimitriou, 1991)
 Elegant proof! (next)
Given a satisfiable 2-SAT formula F.
RW starts with a random truth assignment A0.
Consider an unsatisfied clause:
(x_3 or (not x_4))
A0 must have x_3 False and x_4 True (both “wrong”)
A satisfying truth assignment, T, must have
x_3 True or x_4 False (or both)
Now, “flip” truth value of x_3 or x_4.
With (at least) 50% chance, Hamming distance to
satisfying assignment T is reduced by 1.
I.e., we’re moving the right direction! (of course, with 50%
(or less) we are moving in the wrong direction… doesn’t matter!)
[Diagram: Hamming distance between the current assignment A0 and the satisfying assignment T]
We have an unbiased random walk with a reflecting barrier at
distance N from T (max Hamming distance) and an absorbing
barrier (satisfying assignment) at distance 0.
We start at a Hamming distance of approx. ½ N.
Property of unbiased random walks: after N^2 flips, with high
probability, we will hit the origin (the satisfying assignment).
So, O(N^2) randomized algorithm (worst-case!) for 2-SAT.
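A quick simulation (our own sketch, not part of the original argument) of the one-dimensional walk just described: unbiased steps on {0, ..., N} with a reflecting barrier at N and an absorbing barrier at 0, started at N/2. The measured hitting times grow roughly quadratically in N.

import random

def hitting_time(N, rng=random):
    # steps until the walk reaches 0, starting at N // 2,
    # reflecting at N and absorbing at 0
    d, steps = N // 2, 0
    while d > 0:
        if d == N:
            d -= 1                     # reflecting barrier: forced step toward 0
        else:
            d += rng.choice((-1, 1))   # unbiased step
        steps += 1
    return steps

if __name__ == "__main__":
    for N in (20, 40, 80):
        avg = sum(hitting_time(N) for _ in range(200)) / 200
        print(N, avg, avg / N**2)      # the last column stays roughly constant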
Unfortunately, this argument does not work for k-SAT with k >= 3.
Reason: consider an example unsatisfied clause:
(x_1 or (not x_4) or x_5)
Now there is only a 1/3 chance (worst case) of making the right flip!
(Also, Schoening 1999.)
Unbiased RW on 3SAT Formulas
[Diagram: random walk over the Hamming distance between A0 and T]
Random walk takes exponential number of steps to reach 0.
(Also, Parkes CP-2002.)
Comments on RW
1) Random Walk is highly “myopic”: it does not take into
account any gradient of the objective function (= number
of unsatisfied clauses)! Purely “local” fixes.
2) Can we make RW practical for SAT?
Yes --- inject a greedy bias into the walk → biased Random Walk.
Biased Random Walk
(1st minimal greedy bias)

Procedure Random-Walk-with-Freebie (RWF)
  Start with a random truth assignment
  Repeat
    c := an unsatisfied clause chosen at random
    if there exists a variable x in c with break value = 0   // greedy bias
      flip the value of x (a “freebie” flip)
    else
      x := a variable in c chosen at random                  // pure walk
      flip the value of x
  Until a satisfying assignment is found

break value = # of clauses that become unsatisfied by the flip.
Biased Random Walk
(adding more greedy bias)

Procedure WalkSat
  Repeat
    c := an unsatisfied clause chosen at random
    if there exists a variable x in c with break value = 0   // greedy bias
      flip the value of x (freebie move)
    else
      with probability p:                                    // pure walk
        x := a variable in c chosen at random
        flip the value of x
      with probability (1-p):                                // more greedy bias
        x := a variable in c with smallest break value
        flip the value of x
  Until a satisfying assignment is found

Note: tune parameter p.
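A compact Python sketch of WalkSat as described above, using the same illustrative clause representation as the RW sketch; max_flips and the naive break-value scan are our own simplifications (real implementations use incremental data structures). Note that setting p = 1 gives RWF: after freebies, every remaining step is a pure walk step.

import random

def walksat(clauses, n_vars, p=0.5, max_flips=10**6, rng=random):
    assign = [None] + [rng.random() < 0.5 for _ in range(n_vars)]

    def break_value(x):
        # clauses that become unsatisfied if x is flipped: exactly those
        # whose only satisfied literal is the one on x
        count = 0
        for c in clauses:
            sat = [l for l in c if assign[abs(l)] == (l > 0)]
            if len(sat) == 1 and abs(sat[0]) == x:
                count += 1
        return count

    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assign[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return assign                      # satisfying assignment found
        c = rng.choice(unsat)
        variables = [abs(l) for l in c]
        breaks = {x: break_value(x) for x in variables}
        freebies = [x for x in variables if breaks[x] == 0]
        if freebies:                           # greedy bias: freebie move
            x = rng.choice(freebies)
        elif rng.random() < p:                 # pure walk
            x = rng.choice(variables)
        else:                                  # more greedy bias
            x = min(variables, key=breaks.get) # smallest break value
        assign[x] = not assign[x]
    return None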
Chain Formulas
 To better understand the behavior of pure and
biased RW procedures on SAT instances, we
introduce Chain Formulas.
 These formulas have long chains of
dependencies between variables.
 They effectively demonstrate the extreme
properties of RW style algorithms.
Binary Chains
Consider the 2-SAT chain formula F2chain:
x1 → x2
x2 → x3
…
x_{n-1} → x_n
x_n → x1
Note: Only two satisfying assignments -- TTTTTT… and FFFFFF…
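A one-line generator for F2chain under the same clause representation used in the earlier sketches (each implication x_i → x_{i+1} becomes the 2-clause (¬x_i ∨ x_{i+1})); the function name is ours.

def binary_chain(n):
    # x1 -> x2, ..., x_{n-1} -> x_n, x_n -> x1, with a -> b written as (-a, b)
    return [[-i, i + 1] for i in range(1, n)] + [[-n, 1]]

# e.g. binary_chain(4) == [[-1, 2], [-2, 3], [-3, 4], [-4, 1]]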
Binary Chains
The walk is exactly balanced: each flip is equally likely to move toward or away from the nearest satisfying assignment.
Binary Chains
We obtain the following theorem:

Theorem 1. The RW procedure takes Θ(n^2) steps to
find a satisfying assignment of F2chain.

The DPLL algorithm's unit propagation mechanism finds an
assignment for F2chain in linear time.

Greedy bias does not help in this case: both RWF and
WalkSat take Θ(n^2) flips to reach a satisfying assignment
on these formulas.
Speeding up Random Walks on
Binary Chains
[Figure: a pure binary chain vs. a binary chain with redundancies (implied clauses) added]
Aside: Note the small-world flavor (Watts & Strogatz 99, Walsh 00).
Results: Speeding up Random Walks on Binary Chains

          Pure binary chain   Chain with redundancies
RW        Θ(n^2)**            Θ(n^2)**
RWF       Θ(n^2)**            Θ(n^1.2)*
WalkSat   Θ(n^2)*             Θ(n^1.1)*

*:  empirical results
**: theoretical proof available

With redundancies, the biased walks become almost like unit propagation.
Ternary Chains
In general, even a small bias in the wrong direction leads to
exponential time to reach 0.
Ternary Chains
Consider the formulas F3chain,low(i):
x1
x2
x1 ∧ x2 → x3
…
x_low(i) ∧ x_{i-1} → x_i
…
x_low(n) ∧ x_{n-1} → x_n
Note: Only one satisfying assignment: TTTTT…
*These formulas are inspired by Prestwich [2001].
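A generator for F3chain,low(i) in the same clause representation: the unit clauses x1 and x2, plus one 3-clause (¬x_low(i) ∨ ¬x_{i-1} ∨ x_i) for each i from 3 to n. The particular rounding of i/2 and log i below (floor, base-2 log) is our own choice for illustration.

import math

def ternary_chain(n, low):
    # unit clauses x1, x2, then x_low(i) AND x_{i-1} -> x_i for i = 3..n
    clauses = [[1], [2]]
    for i in range(3, n + 1):
        clauses.append([-low(i), -(i - 1), i])
    return clauses

# the low(i) variants considered in the analysis
low_funcs = {
    "i-2":   lambda i: i - 2,                      # highly local
    "i/2":   lambda i: max(1, i // 2),             # intermediate reach
    "log i": lambda i: max(1, int(math.log2(i))),  # intermediate reach
    "1":     lambda i: 1,                          # full back reach
}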
Ternary Chains
[Figure: a ternary chain with short, medium, and long dependency links]
The effect of x1 and x2 needs to propagate through the chain.
Theoretical Results on 3-SAT Chains

low(i) captures how far back the clauses reach.

Function low(i)          Expected run time of pure RW
i-2   (highly local)     ~ Fib(n)              (i.e., exp.)
i/2   (interm. reach)    O(n · n^(log n))      (i.e., quasi-poly)
log i (interm. reach)    O(n^2 · (log n)^2)
1     (full back reach)  O(n^2)                (i.e., poly)
Proof
 The proofs of these claims are quite
involved, and are available at
http://www.cs.cornell.edu/home/selman/weiwei.pdf
 Here, just the intuitions.
 Each RW process on these formulas can be
decomposed into a series of decoupled,
simpler random walks.
Example: Decomposition
110101 111111
110111
010111
110111
100111
110111
111111
110101
010101
110001
100001
110001
111001
110111 111111
Start
110101
010101
010001
110001
100001
110001
111001
101001
111001
111101
111111
Sat assign.
110101
111101
010001
111001
111101 111111
101001
111001
111101
111111
101001
111001
111101
111111
So, the process decomposes into a series of decoupled walks of
the form (requires detailed proof):
11…101…11 → 11…111…11
[Figure: from state 111101, each of the three flips is chosen with probability 1/3, leading to 111111 (walk length z_i), to 111001 (walk length z_i + z_{i-1}), or to 101101 (walk length z_i + z_low(i))]
Recurrence Relations
Our formula structure gives us:
E(f(z_i)) = (E(f(z_low(i))) + E(f(z_i)) + 1)/3
          + (E(f(z_{i-1})) + E(f(z_i)) + 1)/3
          + 1/3

⇒ E(f(z_i)) = E(f(z_low(i))) + E(f(z_{i-1})) + 3
Recurrence Relations

Solving this recurrence for different low(i)'s, we get:

Function low(i)   E(f(z_i))
i-2               ~ Fib(i)
i/2               ~ i^(log i)
log i             ~ i · (log i)^2
1                 ~ i

This leads to the complexity results for the overall RW.
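The recurrence is easy to evaluate numerically; the short sketch below (the base values E(f(z_1)) = E(f(z_2)) = 1 and the rounding of low(i) are our own illustrative choices and do not affect the growth rates) reproduces the qualitative behavior in the table: linear growth for low(i) = 1, Fibonacci-like blow-up for low(i) = i-2.

import math

def expected_length(n, low):
    # E(f(z_i)) = E(f(z_low(i))) + E(f(z_{i-1})) + 3, with E(f(z_1)) = E(f(z_2)) = 1
    E = {1: 1.0, 2: 1.0}
    for i in range(3, n + 1):
        E[i] = E[low(i)] + E[i - 1] + 3
    return E[n]

for name, low in [("i-2",   lambda i: i - 2),
                  ("i/2",   lambda i: max(1, i // 2)),
                  ("log i", lambda i: max(1, int(math.log2(i)))),
                  ("1",     lambda i: 1)]:
    print(name, [round(expected_length(n, low)) for n in (10, 20, 40)])
# "1" grows linearly, "log i" polynomially, "i/2" quasi-polynomially,
# and "i-2" grows like Fib(n)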
Results for RW on 3-SAT Chains

Function low(i)   Expected running time of pure RW
i-2               ~ Fib(n)
i/2               O(n · n^(log n))
log i             O(n^2 · (log n)^2)
1                 O(n^2)
Recap Chain Formula Results
Adding implied constraints capturing long-range dependencies
speeds up random walks on the 2-chain to near-linear time.
 Certain long-range dependencies in 3-SAT
lead to poly-time convergence of random
walks.
 Can we take advantage of these results on
practical problem instances? Yes! (next)
Results on Practical
Benchmarks
Idea: Use a formula preprocessor to uncover long-range
dependencies and add clauses capturing those dependencies
to the formula.
We adapted Brafman's formula preprocessor to do so
(Brafman 2001).
Experiments on a recent verification benchmark (Velev 1999).
Empirical Results

Formulas solved      <40 sec   <400 sec   <4000 sec
(redundancy level)
a = 0.0              15        26         42
a = 0.2              85        98         100
a = 1.0              13        33         64

SSS-SAT-1.0 instances (Velev 1999). 100 total.
a: level of redundancy added (20% is near optimal)
Conclusions
We introduced a method for speeding up random walk
style SAT algorithms based on the addition of constraints
that capture long-range dependencies.
On binary chains, we showed how, by adding implied
clauses, biased RW becomes almost as effective as
unit propagation.
Conclusions, Cont.
In our formal analysis of ternary chains, we showed how
the performance of RW varies from exponential to
polynomial depending on the range of dependency links.
We identified the first subclass of 3-SAT problems
solvable in poly-time by unbiased RW.
We gave a practical validation of our approach.
Future Directions
 It seems likely that many other dependency
structures could speed up random walk style
methods.
It should be possible to develop preprocessors to uncover
other dependencies. For example, in graph coloring
problems we have:
x1 x4 , x2 x5 , x3 x6, …
x1 x4  x7  x10, …
The end.
Practical Problems

Brafman's 2-Simplify method is an ideal tool to help us
discover long-range dependencies.
It simplifies a CNF formula in the following steps:
1. It constructs an implication graph from binary clauses, and
   collapses strongly connected components in this graph
2. It generates the transitive closure of the graph, deduces
   through binary and hyper-resolutions, and removes
   assigned variables
3. It removes transitively redundant links to keep the number
   of edges minimal
4. It translates the graph back to binary clauses
Practical Problems

Our modified preprocessor:
1. constructs an implication graph from binary clauses, and
   collapses strongly connected components in this graph
2. generates the transitive closure of the graph, deduces through
   binary and hyper-resolutions, and removes assigned
   variables
3. steps through the redundancy removal steps, and removes
   each implied link with probability (1-a)
4. translates the graph back to binary clauses
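The sketch below illustrates the modified redundancy-removal idea on a toy implication graph (a dict of successor sets). It is not Brafman's 2-Simplify code: the transitive closure is a naive Warshall-style pass, the redundancy test is the simple one for DAGs (valid once strongly connected components have been collapsed), and the probability-a retention of implied links follows the description in step 3.

import random

def transitive_closure(graph):
    # graph: {node: set(successors)}; naive Warshall-style closure
    reach = {u: set(vs) for u, vs in graph.items()}
    for k in graph:
        for u in graph:
            if k in reach[u]:
                reach[u] |= reach[k]
    return reach

def prune_redundant(closure, a, rng=random):
    # an edge u -> v is transitively redundant if some other successor w of u
    # also reaches v; keep each redundant edge with probability a
    pruned = {}
    for u, succs in closure.items():
        kept = set()
        for v in succs:
            redundant = any(v in closure.get(w, set()) for w in succs if w != v)
            if not redundant or rng.random() < a:
                kept.add(v)
        pruned[u] = kept
    return pruned

def to_binary_clauses(graph):
    # each implication u -> v translates back to the 2-clause (-u, v)
    return [[-u, v] for u, succs in graph.items() for v in sorted(succs)]

# toy chain 1 -> 2 -> 3 -> 4: with a = 1.0 all implied long-range links
# (1 -> 3, 1 -> 4, 2 -> 4) are kept, with a = 0.0 only the original chain remains
g = {1: {2}, 2: {3}, 3: {4}, 4: set()}
print(to_binary_clauses(prune_redundant(transitive_closure(g), a=1.0)))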
Related Work
Cha and Iwama (1996) studied the effect of adding
clauses during the local search process.
But they focused on resolvents of unsatisfied clauses
at local minima, and their selected neighbors.
Our results suggest that long-range dependencies may be
more important to uncover.