Transcript Document

FPSAC, San Francisco State University, August 2010
The Worm Order
and its Applications
Peter Winkler, Dartmouth
joint work in part with Graham Brightwell (LSE) and
in part with Lizz Moseman (NIST)
Slide ‹#›
A pre-order among words on an ordered alphabet

Put a ⊑ b if you can
schedule them so as to
keep a below b pairwise.

For example:
a = <0,3,0,2,-1,2,1,0>
b = <0,2,3,-1,0,2>

Lemma: (i) a ⊑ a;
(ii) a ⊑ b, b ⊑ c implies a ⊑ c;
(iii) a ⊑ b implies min a ≤ min b and max a ≤ max b.

Put a ≡ b if a ⊑ b and b ⊑ a, and mod out by ≡.

Theorem: There is a unique shortest word
(called a worm) in each equivalence class.
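The pairwise scheduling condition above can be checked mechanically. Below is a minimal sketch, under the assumption that a schedule is a walk that starts at the first letter of each word, ends at the last letter of each, and at every step advances in a, in b, or in both, keeping the current letter of a below the current letter of b throughout; the function name is ours:

```python
def worm_le(a, b):
    """Return True if word a precedes word b in the worm order,
    assuming a schedule is a monotone walk through both words."""
    m, n = len(a), len(b)
    ok = lambda i, j: a[i] <= b[j]          # a must stay below b
    reach = [[False] * n for _ in range(m)]
    reach[0][0] = ok(0, 0)
    for i in range(m):
        for j in range(n):
            if (i, j) == (0, 0) or not ok(i, j):
                continue
            # advance in a, in b, or in both
            reach[i][j] = ((i > 0 and reach[i - 1][j]) or
                           (j > 0 and reach[i][j - 1]) or
                           (i > 0 and j > 0 and reach[i - 1][j - 1]))
    return reach[m - 1][n - 1]

# The example from the slide:
a = [0, 3, 0, 2, -1, 2, 1, 0]
b = [0, 2, 3, -1, 0, 2]
print(worm_le(a, b))  # True: a can be scheduled below b
print(worm_le(b, a))  # False: here the relation is strict
```

The reachability table mirrors the lemma: reflexivity is the diagonal walk, and the min/max condition in (iii) follows because the first and last letters of a must be paired against some letters of b.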
A typical worm

[Figure: plot of the worm <0,2,-1,5,-2,6,-5,4,-3,2,-1,1>, with values on a vertical axis running from -6 to 6.]
The lattice of worms of range {0,1,2}

[Figure: the Hasse diagram of the 27 worms of range {0,1,2}, from bottom element 0 to top element 2:
0, 01, 02, 1, 10, 12, 2, 010, 020, 021, 101, 102, 120, 121, 20, 201, 202, 21, 212, 0201, 1020, 1021, 1201, 1202, 2021, 10201, 12021.]

Number of worms with given n-element range:
(4^(n+1) - 3n - 4) / 9
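As a sanity check, the counting formula can be evaluated directly; for n = 3 it gives the 27 worms in the lattice above. A minimal sketch (the function name is ours):

```python
# Evaluate the slide's formula for the number of worms
# with a given n-element range: (4**(n+1) - 3n - 4) / 9.

def num_worms(n):
    # integer division is exact here: 4**(n+1) - 3n - 4 is divisible by 9
    return (4 ** (n + 1) - 3 * n - 4) // 9

print([num_worms(n) for n in range(1, 4)])  # [1, 6, 27]
```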
Some Scheduling Problems to be Solved…
You need to upgrade SFSU’s computer system,
one component at a time, without allowing
performance to fall below a certain level at
any time.
Can it ever be right to downgrade
a component during the process?
Sweeping a Forest
You need to find a lost child in a
portion of the Cascades, using a line of
searchers which cannot be stretched
beyond a certain total length.
Assuming you can do it, can you do it
without ever covering the same point
twice? In other words, can you do it
without retreating?
Planar s-t Networks: Peter Doyle’s problem
[Figure: a planar network between s and t whose edges have lengths 2, 3, and 4.]
Suppose you can make your way from the southernmost s-t
path to the northernmost, crossing one face at a time,
without using a path of length more than some bound B.
Does it follow that you can do it moving only north,
that is, crossing each face only once?
Scheduling Real Words
Given two words (finite
strings of reals), how
hard is it to schedule
them to minimize their
maximum pairwise sum?
For example:
a = <1,4,0,3,-2>
b = <-1,4,1,6,3>
The schedule constitutes
a lattice path and yields
a new word c=a+b.
[Figure: the grid of pairwise sums a_i + b_j, with the schedule drawn as a monotone lattice path through the grid.]
Can we do this efficiently with many words?
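For two words, minimizing the maximum pairwise sum is a small dynamic program over lattice paths. A sketch, assuming a schedule may advance either word or both at each step (the function name is ours):

```python
import math

def min_max_sum(a, b):
    """Smallest possible maximum entry of the word c = a + b,
    over all schedules (monotone lattice paths through the grid
    of pairwise sums a[i] + b[j])."""
    m, n = len(a), len(b)
    best = [[math.inf] * n for _ in range(m)]
    best[0][0] = a[0] + b[0]
    for i in range(m):
        for j in range(n):
            if (i, j) == (0, 0):
                continue
            # cheapest way to arrive: from the left, from above, or diagonally
            prev = min(best[i - 1][j] if i > 0 else math.inf,
                       best[i][j - 1] if j > 0 else math.inf,
                       best[i - 1][j - 1] if i > 0 and j > 0 else math.inf)
            best[i][j] = max(prev, a[i] + b[j])
    return best[m - 1][n - 1]

# The example from the slide:
print(min_max_sum([1, 4, 0, 3, -2], [-1, 4, 1, 6, 3]))  # 4
```

In the example, every schedule must pair some letter of a against the 6 in b, and the cheapest such pairing is -2 + 6 = 4, which the path achieves.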
A Digression to Lattices

A lattice is a poset in which, for every x and y, there is a
least upper bound x ∨ y and a greatest lower bound x ∧ y. If
finite, it thus has a least element 0 and a greatest element 1.

A lattice is distributive if x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z);
equivalently, if it is the lattice of ideals (downward-closed
subsets) of some poset.

[Figure: the sublattices forbidden in a distributive lattice.]

A function f from a lattice to the reals is
submodular if f(x ∨ y) + f(x ∧ y) ≤ f(x) + f(y). A
distributive lattice together with a submodular
function is called a submodular system.
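A quick illustration of the submodular inequality on the distributive lattice of subsets of {0, 1, 2} (join = union, meet = intersection), using set coverage, a standard example of a submodular function; the particular sets are made up for the example:

```python
from itertools import chain, combinations

ground = {0, 1, 2}
sets = {0: {'a', 'b'}, 1: {'b', 'c'}, 2: {'c', 'd'}}  # illustrative data

def f(x):
    """Number of elements covered by the members of x (a coverage function)."""
    return len(set().union(*(sets[i] for i in x)) if x else set())

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Check f(x ∨ y) + f(x ∧ y) <= f(x) + f(y) for every pair x, y.
assert all(f(set(x) | set(y)) + f(set(x) & set(y)) <= f(set(x)) + f(set(y))
           for x in subsets(ground) for y in subsets(ground))
print("coverage is submodular on the subset lattice")
```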
A General Result

Theorem: In any submodular system (L,f) there is
a maximal chain C such that f(C) ⊑ f(P) (in the
worm order) for any path P from 0 to 1 in L.
Corollary: For any two words a and b there is an optimal
way of adding them such that the word a+b precedes the
result of any other schedule, in the worm order.
Consequence for adding many words to minimize the
maximum sum: You can add the words pairwise in any
order, if you use the canonical sum above. This
provides a fast algorithm for adding multiple words.
A Useful (and Easily Proved) Corollary:

If S is a boneless
subset of a
distributive lattice,
and it contains a
path from 0 to 1,
it contains a
maximal chain.

[Figure: a "bone": a configuration built from elements a and b of S together with joins and meets such as a ∨ b and a ∧ b, part of which lies outside S.]
Sketch of a proof:

Let x_0 … x_m be a shortest
path from 0 to 1 in S.
Put y_j = ∨_{i ≤ j} x_i.

If some y_j is not in S:

[Figure: a diagram relating x_i, x_j, y_j and an auxiliary element z.]

Put u_i = x_i ∨ z.
Then u_i is in S,
else x_i and z
constitute a bone.
A 2nd Corollary and a Consequence

Corollary 2: In any submodular system
(L,f) there is a maximal chain C such that
max{f(x): x in C} ≤ max{f(x): x in P}
for any path P from 0 to 1 in L.

Proof: For any bound b, the set of points
x in L where f(x) ≤ b is boneless.
Consequence for
upgrading your computer
system: If performance
is submodular (a
reasonable assumption),
then indeed you can
upgrade the whole system
one component at a time
without any backtracking.
Sweeping a Forest (with detachments)
Again you need to find a lost child
in a forest, without stretching
the line of searchers beyond a
fixed length. But this time you
needn’t start all the searchers in
the same place.
Now, you are guaranteed that if you
can do it at all, then you can do
it without any retreating.
Planar s-t Networks: A Counterexample
[Figure: a planar s-t network whose edges have lengths 1, 10, 20, and 30.]
There is only one way
to migrate from the
southernmost s-t
path to the
northernmost,
crossing one face at a
time, without using a
path of length more
than 40.
To do it, you must cross and recross the rightmost face
several times.
Migrating Through a Directed Network
[Figure: the same planar network between s and t as before, with edges of lengths 2, 3, and 4, now directed.]
Again, we wish to migrate from the
southernmost s-t path to the northernmost,
crossing one face at a time, without using any
path of length greater than L.
But this time, we have a distributive lattice
and submodular function, so we never need
to cross any face more than once.
A new version of graph searching
Vertices of a graph are
contaminated, and we
wish to clear the
vertices, but must use c
units of "blocking material"
along an edge of
capacity c to keep
contamination from
spreading.

Now, the "slipchain" version of our theorem
(cf. Thm 3.2 of Graph Minors X) shows that
if we can decontaminate with a total of b
units of material, we never need to decontaminate
any vertex more than once.
Consequences for Coordinate Percolation
Suppose: uniform random
reals from [0,1] are
assigned to the coordinates;
each grid point inherits the
sum of its coordinate reals;
and any vertex whose sum
exceeds some bound t is
deleted. (t=.75 in figure.)
Then: as a function of t, the
escape probability θ_t behaves as shown:

[Figure: a grid whose rows and columns carry reals such as .3, .7, .1, .4, each vertex labeled with the sum of its row and column reals; vertices with sum exceeding .75 are deleted. Below it, the graph of θ_t against t, for t between 1 and 2.]
…as is believed for
independent percolation.
Proof

We make use of the fact that independent draws
from any continuous distribution fall in a uniformly
random linear order.

To do this, let a_i be the value associated with
the ith column and b_j the value for the jth row.
If we define c_j := t - b_j, then we have a_i + b_j < t
iff a_i < c_j.

Note that in a region with no open lines (i.e.
each a_i and b_j exceeds 1-t), all a's and c's are
now independent draws from [1-t, 1]. This
reduces the problem to: given two uniformly
random words a and c, what is the probability
that a precedes c in the worm order?
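The coordinate-percolation setup above can be simulated. A Monte Carlo sketch under illustrative assumptions (a finite n-by-n grid, with "escape" modeled as a monotone up/right open path from the corner to the far boundary; all names are ours):

```python
import random

def escapes(t, n=30, rng=random):
    """One sample: assign a uniform real to each row and column,
    keep a vertex open iff its coordinate sum is at most t, and
    test for a monotone open path from (0,0) to the far boundary."""
    rows = [rng.random() for _ in range(n)]
    cols = [rng.random() for _ in range(n)]
    open_ = [[rows[i] + cols[j] <= t for j in range(n)] for i in range(n)]
    if not open_[0][0]:
        return False
    reach = [[False] * n for _ in range(n)]
    reach[0][0] = True
    for i in range(n):
        for j in range(n):
            if open_[i][j] and not reach[i][j]:
                reach[i][j] = ((i > 0 and reach[i - 1][j]) or
                               (j > 0 and reach[i][j - 1]))
    return any(reach[n - 1][j] or reach[j][n - 1] for j in range(n))

def escape_probability(t, trials=200):
    return sum(escapes(t) for _ in range(trials)) / trials

# Sums of two uniform [0,1) reals lie strictly between 0 and 2, so:
print(escape_probability(2.0))  # 1.0: nothing is deleted
print(escape_probability(0.0))  # 0.0: every vertex is deleted
```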
We use a time-varying Markov chain to get…

[Equations: the eigenvalues and eigenvectors of the chain's transition matrix; the eigenvectors have the form (·, 1, ·), (·, 1, ·), and (0, 0, 1).]

Then Q(n) := Pr(survive n steps) has a closed form built from
the n-th powers (φ + 1)^n and (φ̂ + 1)^n, with coefficients
involving 1/√5, where

φ = (1+√5)/2 and φ̂ = 1 - φ.
Calculating the probability of percolation

Now, if Q(n) = Pr(survive n steps), then

[Equation: θ_t expressed as a sum over n of (t-1)(2-t) Q(n) terms, with coefficients (5+3√5)/10 and (5-√5)/10 and powers (t-1)^((3±√5)/2),]

giving critical exponent 2 - φ ≈ .382,
as well as showing that θ_t is continuous everywhere,
analytic except at t=1 and t=2, and C¹ except at
the critical point t=1.
Thank you
for your
attention!
References:

Moseman & W., On a form of coordinate percolation, CPC 17 #6 (Nov '08), 837-845.

Brightwell & W., Submodular percolation, SIDMA 23 #3 (2009), 1149-1178.

Brightwell & W., Forward processes and a `lost child' theorem, in preparation.