PowerPoint Transcript
Midterm page 1
The worst-case complexity of an algorithm is ...
a function from N (or size of instance) to N.
Θ(n lg n) is ... a set of functions.
A random variable is ...
a function from elementary event to real number.
Polyhash is ... a set of (hash) functions.
T(n) = 3 T(n/2) + n
T(n) is Θ(n^(lg 3))
T(n) = 3 T(n/3) + 5
T(n) is Θ(n)
T(n) = 3 T(n/4) + n lg n
T(n) is Θ(n lg n)
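As a quick check, here is the master-theorem reasoning behind the three answers (a worked summary added for reference):

\[
\begin{aligned}
T(n) &= 3T(n/2) + n: & n^{\log_2 3} \approx n^{1.58} \text{ dominates } n
  &\;\Rightarrow\; T(n) = \Theta(n^{\lg 3}) \\
T(n) &= 3T(n/3) + 5: & n^{\log_3 3} = n \text{ dominates } \Theta(1)
  &\;\Rightarrow\; T(n) = \Theta(n) \\
T(n) &= 3T(n/4) + n \lg n: & n \lg n \text{ dominates } n^{\log_4 3} \approx n^{0.79}
  &\;\Rightarrow\; T(n) = \Theta(n \lg n)
\end{aligned}
\]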
False: A heap of height n can have 2^(n+1) – 1 nodes.
False: A tree of height k might have only k+1 nodes.
True: A red-black tree can implement a priority queue efficiently.
Midterm page 2
Yes, you can implement “Decrease-Key(S,x,k)” in a max
heap efficiently. Go to node x (which you can do since
you have a pointer to it), change its priority to k, apply
Heapify (which “bubbles downward”).
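A minimal sketch of that answer in Python, assuming the max-heap is stored in a 0-indexed array and that the "pointer" to x is its array index (the names decrease_key and max_heapify are mine):

def decrease_key(heap, x, k):
    """Set heap[x] to the smaller key k, then restore the max-heap property."""
    assert k <= heap[x], "Decrease-Key must not increase the key"
    heap[x] = k
    max_heapify(heap, x)          # "bubble downward"

def max_heapify(heap, i):
    """Sift heap[i] down until both of its children are <= it; O(lg n) time."""
    n = len(heap)
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest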
Given a red-black tree ... in the black subtree:
– Every non-leaf has 2, 3, or 4 children (by P3).
– Every leaf has depth k (by P4).
– Thus # black nodes ≥ 2^(k+1) – 1 (obvious, or by induction).
In the full red-black tree, N ≥ # black nodes ≥ 2^(k+1) – 1.
– So lg N ≥ lg(2^(k+1) – 1) ≥ lg 2^k = k; that is, k ≤ lg N.
– Longest path ≤ 2k (by P4).
– So height of tree = longest path ≤ 2k ≤ 2 lg N.
NOTE: Just saying “Tree is balanced” is insufficient – we
never even really defined what “balanced” means.
Midterm page 3
If the keys are more or less uniformly distributed (so Bucket
Sort takes O(N) time) and K is large (or grows with N, so KN
isn't O(N)), then Bucket Sort will be faster than Radix Sort.
If the keys are very skewed and K is small, Radix Sort will be
faster (actually, all you really need is that K is very small,
i.e., K << lg N).
Greedy change-making can be arbitrarily bad. Given any c,
let D2 = 2c, D3 = 2c+1, and N = 4c. Then greedy will give
one D3 and 2c-1 D1’s, or 2c coins. But optimal is to give
two D2’s, or 2 coins. So greedy is c times worse.
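A quick numerical check of this argument (a sketch; it assumes the third denomination D1 = 1, which the slide uses implicitly):

def greedy_coins(N, denoms):
    """Number of coins greedy uses: always take the largest coin that still fits."""
    count = 0
    for d in sorted(denoms, reverse=True):
        count += N // d
        N %= d
    return count

# For each c, denominations {1, 2c, 2c+1} and amount N = 4c:
# greedy uses 2c coins while two coins of value 2c suffice.
for c in (1, 5, 50):
    print(c, greedy_coins(4 * c, [1, 2 * c, 2 * c + 1]), "vs optimal 2")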
Midterm page 5
Let A(i) = best sum of numbers in the first i columns that
doesn’t use any entry in column i.
B(i) = best sum [of ...] that uses only the top entry in column i
C(i) = best sum that uses only the middle entry in column i
D(i) = best sum that uses only the bottom entry in column i
E(i) = best sum that uses top and bottom entry of column i.
I claim, given A(i), ..., E(i), we can compute A(i+1), ..., E(i+1) in
the “obvious” way. This will take Θ(N) time.
E.g. A(i+1) = max(A(i), B(i), C(i), D(i), E(i))
B(i+1) = T(1,i+1) + max(A(i), C(i), D(i)),
etc.
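A runnable sketch of this dynamic program, under the assumption (suggested by the states above) that the problem asks for a maximum-sum set of entries of a 3×N table T with no two chosen entries adjacent horizontally or vertically; the table T, the remaining transitions, and the function name are my own filling-in:

def best_sum(T):
    """T[r][i]: rows r = 0 (top), 1 (middle), 2 (bottom); columns i = 0..N-1."""
    n = len(T[0])
    # Column 1 of the slide (index 0 here): A = nothing used, B/C/D = one entry, E = top+bottom.
    A, B, C, D, E = 0, T[0][0], T[1][0], T[2][0], T[0][0] + T[2][0]
    for i in range(1, n):
        top, mid, bot = T[0][i], T[1][i], T[2][i]
        A2 = max(A, B, C, D, E)            # next column unused
        B2 = top + max(A, C, D)            # slide's B(i+1) = T(1,i+1) + max(A, C, D)
        C2 = mid + max(A, B, D, E)         # middle entry conflicts only with C(i)
        D2 = bot + max(A, B, C)            # bottom entry conflicts with D(i), E(i)
        E2 = top + bot + max(A, C)         # top+bottom conflict with B(i), D(i), E(i)
        A, B, C, D, E = A2, B2, C2, D2, E2
    return max(A, B, C, D, E)              # Theta(N) time overall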
Midterm page 6
Basic idea: use divide and conquer.
ktile(A, N, K) {
  /* print the K-tiles of A[1], ..., A[N] */
  Let M = select(A, N, K/2);  /* M = the (K/2)-th K-tile, i.e., the median; takes O(N) time */
  Split A by M into Big and Small (each of size N/2);
  if (K > 2) call ktile(Big, N/2, K/2);  /* only recurse while the halves still have tiles */
  print M;
  if (K > 2) call ktile(Small, N/2, K/2);
}
Complexity: T(N,K) = 2T(N/2, K/2) + cN
Recursion tree has lg K levels, each requires cN time.
So complexity is c N lg K.
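A runnable sketch of this recursion, assuming distinct keys, N ≥ K with both powers of two, and taking the K-tiles to be the elements of rank N/K, 2N/K, ..., (K−1)N/K; select() below just sorts, as a stand-in for a true O(N) selection routine, so this version is not actually linear per level:

def select(A, i):
    """The i-th smallest element of A (1-indexed); stand-in for linear-time select."""
    return sorted(A)[i - 1]

def ktile(A, K):
    """Print the K-tiles of A, largest to smallest (same order as the pseudocode above)."""
    if K <= 1 or not A:
        return
    M = select(A, len(A) // 2)            # the median, i.e., the (K/2)-th K-tile
    big = [x for x in A if x > M]         # upper half: ranks N/2+1 .. N
    small = [x for x in A if x <= M]      # lower half: ranks 1 .. N/2 (keeps M, never reprinted)
    ktile(big, K // 2)                    # the (K/2)-tiles of the upper half
    print(M)
    ktile(small, K // 2)                  # the (K/2)-tiles of the lower half

For example, ktile(list(range(1, 17)), 4) prints 12, 8, 4, the quartiles of 1..16.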
CSE 202 - Algorithms
Shortest Paths Problems
11/12/02
Shortest Paths Problems
[Figure: a weighted directed graph on vertices u, v, w, x, y, z, and t with numeric edge weights.]
Given a weighted directed graph,
<u,v,t,x,z> is a path of weight 29 from u to z.
<u,v,w,x,y,z> is another path from u to z; it has
weight 16 and is the shortest path from u to z.
Variants of Shortest Paths Problems
A. Single-pair shortest path problem
• Given s and d, find the shortest path from s to d.
B. Single-source shortest paths problem
• Given s, for each d find the shortest path from s to d.
C. All-pairs shortest paths problem
• For each ordered pair (s, d), find the shortest path.
(A) and (B) seem to have the same asymptotic complexity.
(C) takes longer, but not as long as repeating (B) for each s.
More Shortest Paths Variants
1. All weights are non-negative.
2. Weights may be negative, but no negative cycles
(A cycle is a path from a vertex to itself.)
3. Negative cycles allowed.
Algorithm reports “−∞” if there is a negative cycle on a
path from source to destination.
(2) and (3) seem to be harder than (1).
[Figure: two small example graphs on vertices u, v, w with their all-pairs shortest-path distance matrices; in the second example some entries are −∞.]
Single Source Shortest Paths
Non-negative weights (problem B-1):
From source, construct tree of shortest paths.
• Why is this a tree?
• Is this a Minimum Spanning Tree?
Basic approach: label nodes with shortest path found so far.
Relaxation step: choose any edge (u, v). If it gives a shorter
path to v, update v’s label.
[Figure: a small example graph with node labels before and after a relaxation step; relaxing one edge improves the label of one node.]
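A minimal sketch of the labeling scheme in Python (the names dist, parent, and relax are mine): dist[v] is v's label, and parent records the edge that last improved it, which is what makes the answer a tree of shortest paths.

def relax(u, v, w, dist, parent):
    """If the path through u plus edge (u, v) of weight w is shorter, update v's label."""
    if dist[u] + w < dist[v]:
        dist[v] = dist[u] + w
        parent[v] = u              # remember the tree edge
        return True
    return False

# Labels start at infinity, except the source, which starts at 0:
# dist = {v: float("inf") for v in vertices}; dist[s] = 0; parent = {}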
Single Source Shortest Paths
Simplest strategy:
– Keep cycling through all edges
– When a pass over all edges changes no label, you’re done.
What is an upper bound on the work?
Improvement:
We mark nodes when we’re sure we have best path.
Key insight: if you relax all edges out of marked nodes, the
unmarked node that is closest to source can be marked.
This gives Dijkstra’s algorithm:
Relax edge (u, v) only once, just after u is marked.
Keep nodes on a min-priority queue; need at most E Decrease-Keys.
Gives O(E lg V) (or O(E + V lg V) using a Fibonacci heap).
There’s a simple O(V^2) implementation; when is it fastest?
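A hedged sketch of Dijkstra's algorithm with a binary min-heap; instead of an explicit Decrease-Key it pushes duplicate entries and skips stale ones, which keeps the O(E lg V) bound. It assumes graph maps every vertex to a list of (neighbor, weight) pairs.

import heapq

def dijkstra(graph, source):
    dist = {u: float("inf") for u in graph}
    dist[source] = 0
    marked = set()
    pq = [(0, source)]                       # min-priority queue of (label, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u in marked:
            continue                         # stale entry
        marked.add(u)                        # closest unmarked node can be marked
        for v, w in graph[u]:                # relax edges out of u, just once
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist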
Single Source Shortest Paths
Negative weights allowed (problems B-2 and B-3):
Why can’t we just add a constant to each weight?
Why doesn’t Dijkstra’s algorithm work for B-2 ?
Bellman-Ford: Cycle through edges V-1 times.
O(VE) time.
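A hedged sketch of Bellman-Ford, assuming edges is a list of (u, v, weight) triples and vertices a list of vertex names; a final pass detects a negative cycle reachable from the source (the B-3 case).

def bellman_ford(vertices, edges, source):
    dist = {u: float("inf") for u in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):        # V - 1 rounds over all edges: O(VE)
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                     # still improvable => negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist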
PERT chart: Arbitrary weights, graph is acyclic:
Is there a better algorithm than Bellman-Ford?
All-pairs Shortest Paths
Naïve #1: Solve V single-source problems.
– O(V^2 E) for the general problem.
– O(V^3) or O(V E lg V) for non-negative weights.
Naïve #2: Let d_k(i,j) = shortest path from i to j
using at most k edges.
d_1(i,j) is the original weight matrix.
Compute the d_{k+1}'s from the d_k's by seeing if adding an edge helps:
d_{k+1}(i,j) = min over m of { d_k(i,m) + d_1(m,j) }
Hmmm ... this looks a lot like matrix multiplication.
If there are no negative cycles, d_{V-1}(i,j) is the solution.
If there is a negative cycle, then d_{V-1} ≠ d_V.
Complexity is O(V^4).
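A hedged sketch of Naïve #2, assuming the weight matrix w has 0 on the diagonal and infinity where there is no edge; extending by one more edge is exactly a min-plus "matrix product", and repeating it V − 2 times gives d_{V-1} in O(V^4) time.

def extend(d, w):
    """d_{k+1}(i, j) = min over m of d_k(i, m) + w(m, j)."""
    n = len(d)
    return [[min(d[i][m] + w[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs_slow(w):
    d = w
    for _ in range(len(w) - 2):               # d_1 -> d_2 -> ... -> d_{V-1}
        d = extend(d, w)
    return d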
All-pairs Shortest Paths
No negative cycles – Divide and Conquer says:
“Shortest path from a to b with at most 2^(k+1) hops
is a shortest path from a to c with at most 2^k hops
followed by one from c to b with at most 2^k hops.”
T(k+1) = T(k) + V^3, so T(lg V) is O(V^3 lg V).
This also looks like matrix multiplication, using
d_{2k} = d_k × d_k instead of d_{k+1} = d_k × d_1.
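The same idea with repeated squaring, reusing extend() from the sketch above: square d until at least V − 1 hops are allowed, for lg V min-plus products and O(V^3 lg V) total.

def all_pairs_squaring(w):
    d, hops = w, 1
    while hops < len(w) - 1:                  # until paths of V - 1 hops are covered
        d = extend(d, d)                      # d_{2k} = d_k "x" d_k
        hops *= 2
    return d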
All-pairs Shortest Paths
No negative cycles – Dynamic Programming says:
“Number cities 1 to V. Shortest path from a to b
that only uses cities 1 to k+1 as intermediate
points is a path that goes from a to k+1 using only
cities 1 to k, followed by a path going from k+1 to b
(or shortest from a to b avoiding k+1 altogether).”
T(k+1) = T(k) + cV^2, and T(0) is 0.
Thus, T(V) is O(V^3).
This is the Floyd-Warshall algorithm.
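A hedged sketch of Floyd-Warshall following this recurrence (vertices are numbered 0..V−1 here; w has 0 on the diagonal and infinity for missing edges):

def floyd_warshall(w):
    n = len(w)
    d = [row[:] for row in w]                 # d[i][j], updated in place
    for k in range(n):                        # now allow vertex k as an intermediate point
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d                                  # O(V^3) time, O(V^2) space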
Johnson’s All-Pairs Shortest Paths
Motivation: for sparse graphs with non-negative weights,
“O(V E lg V)” (Dijkstra V times)
is better than
“O(V^3)” (Floyd-Warshall).
We’ll convert the “arbitrary weights” problem (C-3) to the
“non-negative weights” problem (C-1) via reweighting.
[Figure: an example graph with negative edge weights; each node gets a node weight that acts as a “starting bonus” or “finishing penalty” in the reweighting.]
Johnson’s Algorithm
Reweighting can make all edges non-negative
[Figure: the example graph with node weights chosen so that every reweighted edge is non-negative.]
Make node-weight(x) = shortest path to x from anywhere.
Sounds like an all-pairs problem ...
Slick trick: add a new source s with a zero-weight edge to every
node, and use Bellman-Ford from s (only O(VE) time).
[Figure: the graph augmented with the new source s and its zero-weight edges; Bellman-Ford from s gives the node weights.]
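A hedged sketch of the whole algorithm, reusing bellman_ford() and dijkstra() from the earlier sketches and assuming edges is a list of (u, v, weight) triples; h(x) is the node weight, each edge is reweighted to w(u,v) + h(u) − h(v) ≥ 0, and the true distances are recovered at the end.

def johnson(vertices, edges):
    s = object()                                            # fresh source vertex
    h = bellman_ford(list(vertices) + [s],
                     edges + [(s, v, 0) for v in vertices], s)
    graph = {u: [] for u in vertices}
    for u, v, w in edges:
        graph[u].append((v, w + h[u] - h[v]))               # non-negative by construction
    dist = {}
    for u in vertices:                                      # V runs of Dijkstra
        du = dijkstra(graph, u)
        dist[u] = {v: du[v] - h[u] + h[v] for v in vertices}
    return dist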
Glossary (in case symbols are weird)
⊆ subset   ∈ element of   ∅ empty set   ∞ infinity
∀ for all   ∃ there exists   ∩ intersection   ∪ union
Θ big theta   Ω big omega   Σ summation
≥ >=   ≤ <=   ≈ about equal   ≠ not equal   ℕ natural numbers (N)