
COSC 3101A - Design and Analysis of Algorithms
Lecture 8
• Elements of DP
• Memoization
• Longest Common Subsequence
• Greedy Algorithms
Many of these slides are taken from Monica Nicolescu, Univ. of Nevada, Reno, [email protected]
Elements of Dynamic Programming
• Optimal Substructure
– An optimal solution to a problem contains within it an optimal solution to subproblems
– An optimal solution to the entire problem is built in a bottom-up manner from optimal solutions to subproblems
• Overlapping Subproblems
– If a recursive algorithm revisits the same subproblems over and over ⇒ the problem has overlapping subproblems
Optimal Substructure - Examples
• Assembly line
– The fastest way of going through a station j contains the fastest way of going through station j – 1 on either line
• Matrix multiplication
– An optimal parenthesization of Ai · Ai+1 ⋯ Aj that splits the product between Ak and Ak+1 contains an optimal solution to the problem of parenthesizing Ai..k and Ak+1..j
Discovering Optimal Substructure
1. Show that a solution to a problem consists of making a choice that leaves one or more similar problems to be solved
2. Suppose that for a given problem you are given the choice that leads to an optimal solution
3. Given this choice, determine which subproblems result
4. Show that the solutions to the subproblems used within the optimal solution must themselves be optimal
• Cut-and-paste approach
Parameters of Optimal Substructure
• How many subproblems are used in an optimal solution for the original problem?
– Assembly line: one subproblem (the line that gives the best time)
– Matrix multiplication: two subproblems (subproducts Ai..k, Ak+1..j)
• How many choices we have in determining which subproblems to use in an optimal solution?
– Assembly line: two choices (line 1 or line 2)
– Matrix multiplication: j – i choices for k (splitting the product)
Parameters of Optimal Substructure
• Intuitively, the running time of a dynamic programming algorithm depends on two factors:
– the number of subproblems overall
– how many choices we look at for each subproblem
• Assembly line
– Θ(n) subproblems (n stations)
– 2 choices for each subproblem
– Θ(n) overall
• Matrix multiplication:
– Θ(n²) subproblems (1 ≤ i ≤ j ≤ n)
– At most n – 1 choices
– Θ(n³) overall
Memoization
• A top-down approach with the efficiency of the typical (bottom-up) dynamic programming approach
• Maintain an entry in a table for the solution to each subproblem
– “memoize” the inefficient recursive algorithm
• When a subproblem is first encountered, its solution is computed and stored in that table
• Subsequent “calls” to the subproblem simply look up that value
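As an illustration (not from the original slides), here is a minimal Python sketch of the idea, using Fibonacci numbers as a stand-in problem: a dictionary plays the role of the table, and the recursive function consults it before recomputing anything.

def memoized_fib(n, table=None):
    # table maps a subproblem (here, n) to its stored solution
    if table is None:
        table = {}
    if n in table:                      # subsequent “calls”: just look it up
        return table[n]
    if n < 2:
        result = n
    else:
        result = memoized_fib(n - 1, table) + memoized_fib(n - 2, table)
    table[n] = result                   # store the solution on first encounter
    return result

print(memoized_fib(40))                 # 102334155, in O(n) recursive calls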
Memoized Matrix-Chain
Alg.: MEMOIZED-MATRIX-CHAIN(p)
1. n ← length[p] – 1
2. for i ← 1 to n
3.       do for j ← i to n
4.             do m[i, j] ← ∞
5. return LOOKUP-CHAIN(p, 1, n)

Top-down approach. Lines 2–4 initialize the m table with large values that indicate whether the values of m[i, j] have been computed.
Memoized Matrix-Chain
Alg.: LOOKUP-CHAIN(p, i, j)
1. if m[i, j] < ∞
2.    then return m[i, j]
3. if i = j
4.    then m[i, j] ← 0
5.    else for k ← i to j – 1
6.          do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k + 1, j) + pi-1pkpj
7.                if q < m[i, j]
8.                   then m[i, j] ← q
9. return m[i, j]

Running time is O(n³)
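The reconstruction above follows the slide pseudocode; as a sketch (names and indexing conventions are my own), a direct Python transcription might look like this:

import math

def memoized_matrix_chain(p):
    # p[0..n]: matrix A_i has dimensions p[i-1] x p[i]
    n = len(p) - 1
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]   # inf = not yet computed

    def lookup_chain(i, j):
        if m[i][j] < math.inf:          # solution already in the table
            return m[i][j]
        if i == j:
            m[i][j] = 0                 # a single matrix needs no multiplication
        else:
            for k in range(i, j):       # try every split point k = i .. j-1
                q = (lookup_chain(i, k) + lookup_chain(k + 1, j)
                     + p[i - 1] * p[k] * p[j])
                if q < m[i][j]:
                    m[i][j] = q
        return m[i][j]

    return lookup_chain(1, n)

# Dimensions 30x35, 35x15, 15x5, 5x10, 10x20, 20x25 (the CLRS example):
print(memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]))   # 15125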
Dynamic Programming vs. Memoization
• Advantages of dynamic programming vs. memoized algorithms
– No overhead for recursion, less overhead for maintaining the table
– The regular pattern of table accesses may be used to reduce time or space requirements
• Advantages of memoized algorithms vs. dynamic programming
– Some subproblems do not need to be solved
Matrix-Chain Multiplication - Summary
• Both the dynamic programming approach and the memoized algorithm solve the matrix-chain multiplication problem in O(n³) time
• Both methods take advantage of the overlapping-subproblems property
• There are only Θ(n²) distinct subproblems
– Solutions to these subproblems are computed only once
• Without memoization, the natural recursive algorithm runs in exponential time
Longest Common Subsequence
• Given two sequences
X = ⟨x1, x2, …, xm⟩
Y = ⟨y1, y2, …, yn⟩
find a maximum-length common subsequence (LCS) of X and Y
• E.g.:
X = ⟨A, B, C, B, D, A, B⟩
• Subsequences of X:
– A subset of elements in the sequence taken in order
– e.g., ⟨A, B, D⟩, ⟨B, C, D, B⟩, etc.
Example
X = A, B, C, B, D, A, B
X = A, B, C, B, D, A, B
Y = B, D, C, A, B, A
Y = B, D, C, A, B, A
• B, C, B, A and B, D, A, B are longest common
subsequences of X and Y (length = 4)
• B, C, A, however is not a LCS of X and Y
Brute-Force Solution
• For every subsequence of X, check whether it is a subsequence of Y
• There are 2ᵐ subsequences of X to check
• Each check takes Θ(n) time
– scan Y for the first letter, from there scan for the second, and so on
• Running time: Θ(n·2ᵐ)
Notations
• Given a sequence X = ⟨x1, x2, …, xm⟩, we define the i-th prefix of X, for i = 0, 1, …, m, as Xi = ⟨x1, x2, …, xi⟩
• c[i, j] = the length of an LCS of the sequences Xi = ⟨x1, x2, …, xi⟩ and Yj = ⟨y1, y2, …, yj⟩
A Recursive Solution
Case 1: xi = yj
e.g.: Xi = ⟨A, B, D, E⟩
      Yj = ⟨Z, B, E⟩
c[i, j] = c[i – 1, j – 1] + 1
– Append xi = yj to an LCS of Xi-1 and Yj-1
– Must find an LCS of Xi-1 and Yj-1 ⇒ the optimal solution to a problem includes optimal solutions to subproblems
A Recursive Solution
Case 2: xi ≠ yj
e.g.: Xi = ⟨A, B, D, G⟩
      Yj = ⟨Z, B, D⟩
c[i, j] = max { c[i – 1, j], c[i, j – 1] }
– Must solve two subproblems
• find an LCS of Xi-1 and Yj: Xi-1 = ⟨A, B, D⟩ and Yj = ⟨Z, B, D⟩
• find an LCS of Xi and Yj-1: Xi = ⟨A, B, D, G⟩ and Yj-1 = ⟨Z, B⟩
• The optimal solution to a problem includes optimal solutions to subproblems
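Putting both cases together (a sketch, not from the slides): a top-down memoized evaluation of this recurrence in Python, where functools.lru_cache plays the role of the table.

from functools import lru_cache

def lcs_length(X, Y):
    @lru_cache(maxsize=None)            # the table: one entry per prefix pair (i, j)
    def c(i, j):
        if i == 0 or j == 0:            # base case: an empty prefix
            return 0
        if X[i - 1] == Y[j - 1]:        # Case 1: x_i = y_j
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))   # Case 2: x_i != y_j
    return c(len(X), len(Y))

print(lcs_length("ABCBDAB", "BDCABA"))  # 4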
Overlapping Subproblems
• To find an LCS of X and Y
– we may need to find the LCS of X and Yn-1 and that of Xm-1 and Y
– Both of these subproblems have the subproblem of finding the LCS of Xm-1 and Yn-1
• Subproblems share subsubproblems
3. Computing the Length of the LCS
c[i, j] = 0                              if i = 0 or j = 0
c[i, j] = c[i – 1, j – 1] + 1            if xi = yj
c[i, j] = max(c[i, j – 1], c[i – 1, j])  if xi ≠ yj

[Figure: the (m + 1) × (n + 1) table c, with rows indexed by xi and columns by yj; row 0 and column 0 are initialized to 0, and the table is filled row by row (first index i, then index j).]
Additional Information
c[i, j] = 0                              if i = 0 or j = 0
c[i, j] = c[i – 1, j – 1] + 1            if xi = yj
c[i, j] = max(c[i, j – 1], c[i – 1, j])  if xi ≠ yj

A matrix b[i, j]:
• For a subproblem [i, j] it tells us what choice was made to obtain the optimal value
• If xi = yj
      b[i, j] = “↖”
• Else, if c[i – 1, j] ≥ c[i, j – 1]
      b[i, j] = “↑”
  else
      b[i, j] = “←”

[Figure: a partially filled table with rows x1..xm = A, B, C, …, D and columns y1..yn = A, C, D, …, F, illustrating that c[i, j] is computed from c[i – 1, j – 1], c[i – 1, j], and c[i, j – 1].]
Alg.: LCS-LENGTH(X, Y, m, n)
1. for i ← 1 to m
2.       do c[i, 0] ← 0
3. for j ← 0 to n
4.       do c[0, j] ← 0
5. for i ← 1 to m
6.       do for j ← 1 to n
7.             do if xi = yj
8.                   then c[i, j] ← c[i – 1, j – 1] + 1     ► Case 1: xi = yj
9.                         b[i, j] ← “↖”
10.                  else if c[i – 1, j] ≥ c[i, j – 1]
11.                     then c[i, j] ← c[i – 1, j]          ► Case 2: xi ≠ yj
12.                           b[i, j] ← “↑”
13.                     else c[i, j] ← c[i, j – 1]
14.                           b[i, j] ← “←”
15. return c and b

Lines 1–4: the length of the LCS is zero if one of the sequences is empty.
Running time: Θ(mn)
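A Python sketch of LCS-LENGTH as reconstructed above, with the arrows encoded as characters ('D' for ↖, 'U' for ↑, 'L' for ←); the encoding is my own choice.

def lcs_length_table(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]     # row 0 and column 0 stay 0
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:              # Case 1: x_i = y_j
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = 'D'                     # diagonal arrow
            elif c[i - 1][j] >= c[i][j - 1]:      # Case 2: x_i != y_j
                c[i][j] = c[i - 1][j]
                b[i][j] = 'U'                     # up arrow
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = 'L'                     # left arrow
    return c, b

c, b = lcs_length_table("ABCBDAB", "BDCABA")
print(c[7][6])                                    # 4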
Example
c[i, j] = 0                              if i = 0 or j = 0
c[i, j] = c[i – 1, j – 1] + 1            if xi = yj
c[i, j] = max(c[i, j – 1], c[i – 1, j])  if xi ≠ yj

X = ⟨A, B, C, B, D, A, B⟩ (rows, xi)
Y = ⟨B, D, C, A, B, A⟩ (columns, yj)

If xi = yj, b[i, j] = “↖”; else if c[i – 1, j] ≥ c[i, j – 1], b[i, j] = “↑”; else b[i, j] = “←”

[Figure: the filled c and b tables for this X and Y, computed row by row with the recurrence; the bottom-right entry c[7, 6] = 4 is the length of the LCS.]
4. Constructing a LCS
• Start at b[m, n] and follow the arrows
• When we encounter a “↖” in b[i, j] ⇒ xi = yj is an element of the LCS

[Figure: the b table from the previous slide, with the arrow path traced from b[7, 6] back to row 0 / column 0; the “↖” entries along the path give the LCS ⟨B, C, B, A⟩.]
Alg.: PRINT-LCS(b, X, i, j)
1. if i = 0 or j = 0
2.    then return
3. if b[i, j] = “↖”
4.    then PRINT-LCS(b, X, i – 1, j – 1)
5.          print xi
6. elseif b[i, j] = “↑”
7.    then PRINT-LCS(b, X, i – 1, j)
8. else PRINT-LCS(b, X, i, j – 1)

Initial call: PRINT-LCS(b, X, length[X], length[Y])
Running time: Θ(m + n)
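The same procedure in Python (a sketch that matches the tables produced by the earlier lcs_length_table sketch):

def print_lcs(b, X, i, j):
    if i == 0 or j == 0:                # reached row 0 or column 0: done
        return
    if b[i][j] == 'D':                  # diagonal arrow: X[i-1] is in the LCS
        print_lcs(b, X, i - 1, j - 1)
        print(X[i - 1], end='')
    elif b[i][j] == 'U':                # up arrow
        print_lcs(b, X, i - 1, j)
    else:                               # left arrow
        print_lcs(b, X, i, j - 1)

X, Y = "ABCBDAB", "BDCABA"
c, b = lcs_length_table(X, Y)           # from the earlier sketch
print_lcs(b, X, len(X), len(Y))         # prints BCBA
print()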
Improving the Code
• What can we say about how each entry c[i, j] is computed?
– It depends only on c[i – 1, j – 1], c[i – 1, j], and c[i, j – 1]
– We can eliminate table b and compute in O(1) time which of the three values was used to compute c[i, j]
– We save the Θ(mn) space taken by table b
– However, we do not asymptotically decrease the auxiliary space requirements: we still need table c
• If we only need the length of the LCS
– LCS-LENGTH works on only two rows of c at a time: the row being computed and the previous row
– We can reduce the asymptotic space requirements by storing only these two rows, as in the sketch below
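A sketch of the two-row version (only the length is computed; the b table, and with it the ability to reconstruct the LCS, is given up):

def lcs_length_two_rows(X, Y):
    n = len(Y)
    prev = [0] * (n + 1)                # row i-1 of c
    for i in range(1, len(X) + 1):
        curr = [0] * (n + 1)            # row i of c, being computed
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr                     # the new row becomes the previous row
    return prev[n]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))   # 4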
Greedy Algorithms
• Similar to dynamic programming, but a simpler approach
– Also used for optimization problems
• Idea: when we have a choice to make, make the one that looks best right now
– Make a locally optimal choice in the hope of getting a globally optimal solution
• Greedy algorithms don’t always yield an optimal solution
• When the problem has certain general characteristics, greedy algorithms do give optimal solutions
Activity Selection
• Schedule n activities that require exclusive use of a common resource
S = {a1, …, an} – set of activities
• ai needs the resource during the period [si, fi)
– si = start time and fi = finish time of activity ai
– 0 ≤ si < fi < ∞
• Activities ai and aj are compatible if the intervals [si, fi) and [sj, fj) do not overlap, i.e., fi ≤ sj or fj ≤ si
Activity Selection Problem
Select the largest possible set of nonoverlapping (mutually compatible) activities.
E.g.:
i    1  2  3  4  5  6  7  8  9  10  11
si   1  3  0  5  3  5  6  8  8   2  12
fi   4  5  6  7  8  9 10 11 12  13  14
• Activities are sorted in increasing order of finish times
• A subset of mutually compatible activities: {a3, a9, a11}
• Maximum-size sets of mutually compatible activities: {a1, a4, a8, a11} and {a2, a4, a9, a11}
Optimal Substructure
• Define the space of subproblems:
Sij = { ak ∈ S : fi ≤ sk < fk ≤ sj }
– activities that start after ai finishes and finish before aj starts
• Activities that are compatible with the ones in Sij:
– all activities that finish by fi
– all activities that start no earlier than sj
Representing the Problem
• Add fictitious activities
– a0 = [–∞, 0)
– an+1 = [∞, “∞ + 1”)
– S = S0,n+1 – the entire space of activities
– the range for Sij is 0 ≤ i, j ≤ n + 1
• In a set Sij we assume that activities are sorted in increasing order of finish times:
f0 ≤ f1 ≤ f2 ≤ … ≤ fn < fn+1
• What happens if i ≥ j?
– For an activity ak ∈ Sij: fi ≤ sk < fk ≤ sj < fj, a contradiction with fi ≥ fj!
⇒ Sij = ∅ (the set Sij must be empty!)
• We only need to consider sets Sij with 0 ≤ i < j ≤ n + 1
Optimal Substructure
• Subproblem:
– Select a maximum-size subset of mutually compatible activities from set Sij
• Assume that a solution to the above subproblem includes activity ak (Sij is non-empty); ak splits Sij into Sik and Skj
Solution to Sij = (Solution to Sik) ∪ {ak} ∪ (Solution to Skj)
|Solution to Sij| = |Solution to Sik| + 1 + |Solution to Skj|
Optimal Substructure (cont.)
Aij = the optimal solution to Sij, split by ak into Aik and Akj
• Claim: the sets Aik and Akj must be optimal solutions
• Assume ∃ Aik’ that includes more activities than Aik
Then Size[Aij’] = Size[Aik’] + 1 + Size[Akj] > Size[Aij]
⇒ Contradiction: we assumed that Aij contains the maximum # of activities taken from Sij
Recursive Solution
• Any optimal solution (associated with a set Sij) contains within it optimal solutions to subproblems Sik and Skj
• c[i, j] = size of a maximum-size subset of mutually compatible activities in Sij
• If Sij = ∅ ⇒ c[i, j] = 0 (i ≥ j)
Recursive Solution
If Sij ≠ ∅ and we consider that ak is used in an optimal solution (a maximum-size subset of mutually compatible activities of Sij), then
c[i, j] = c[i, k] + c[k, j] + 1
Recursive Solution
c[i, j] = 0                                                     if Sij = ∅
c[i, j] = max { c[i, k] + c[k, j] + 1 : i < k < j, ak ∈ Sij }   if Sij ≠ ∅
• There are j – i – 1 possible values for k
– k = i + 1, …, j – 1
– ak cannot be ai or aj, from the definition Sij = { ak ∈ S : fi ≤ sk < fk ≤ sj }
– We check all the values and take the best one
We could now write a dynamic programming algorithm
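The slides stop short of writing that algorithm out. As a hedged sketch (indexing and names are my own), a bottom-up evaluation of the recurrence, with the fictitious activities a0 and an+1 added as on the previous slide, could look like this:

import math

def activity_selection_dp(s, f):
    # s, f: start/finish times of a_1..a_n, sorted by increasing finish time.
    s = [-math.inf] + list(s) + [math.inf]   # add a_0 = [-inf, 0)
    f = [0] + list(f) + [math.inf]           # and a_{n+1} = [inf, ...)
    n = len(s) - 2
    c = [[0] * (n + 2) for _ in range(n + 2)]
    for span in range(2, n + 2):             # solve subproblems in order of j - i
        for i in range(0, n + 2 - span):
            j = i + span
            for k in range(i + 1, j):        # the j - i - 1 choices for k
                if f[i] <= s[k] and f[k] <= s[j]:    # a_k is in S_ij
                    c[i][j] = max(c[i][j], c[i][k] + c[k][j] + 1)
    return c[0][n + 1]

s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(activity_selection_dp(s, f))           # 4, e.g. {a1, a4, a8, a11}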
Theorem
Let Sij ≠ ∅, and let am be the activity in Sij with the earliest finish time:
fm = min { fk : ak ∈ Sij }
Then:
1. am is used in some maximum-size subset of mutually compatible activities of Sij
– There exists some optimal solution that contains am
2. Sim = ∅
– Choosing am leaves Smj as the only nonempty subproblem
Proof
2. Assume ∃ ak ∈ Sim
fi ≤ sk < fk ≤ sm < fm ⇒ fk < fm
Contradiction! am did not have the earliest finish time
⇒ There is no ak ∈ Sim ⇒ Sim = ∅
Proof: Greedy Choice Property
1. am is used in some maximum-size subset of mutually compatible activities of Sij
• Aij = an optimal solution for activity selection from Sij
– Order the activities in Aij in increasing order of finish time
– Let ak be the first activity in Aij = {ak, …}
• If ak = am, done!
• Otherwise, replace ak with am (resulting in a set Aij’)
– since fm ≤ fk, the activities in Aij’ remain compatible
– Aij’ has the same size as Aij ⇒ am is used in some maximum-size subset
Why is the Theorem Useful?
                          Dynamic programming     Using the theorem
Number of subproblems     2 subproblems:          1 subproblem: Smj
in the optimal solution   Sik, Skj                (Sim = ∅)
Number of choices         j – i – 1 choices       1 choice: the activity with the
to consider                                       earliest finish time in Sij
• Making the greedy choice (the activity with the earliest finish time in Sij):
– reduces the number of subproblems and choices
– lets us solve each subproblem in a top-down fashion
Greedy Approach
• To select a maximum-size subset of mutually compatible activities from set Sij:
– Choose am ∈ Sij with the earliest finish time (greedy choice)
– Add am to the set of activities used in the optimal solution
– Solve the same problem for the set Smj
• From the theorem
– By choosing am we are guaranteed to have used an activity included in an optimal solution
⇒ We do not need to solve the subproblem Smj before making the choice!
– The problem has the GREEDY CHOICE property
Characterizing the Subproblems
• The original problem: find a maximum-size subset of mutually compatible activities for S = S0,n+1
• Activities are sorted by increasing finish time:
a0, a1, a2, a3, …, an+1
• We always choose the activity with the earliest finish time
– The greedy choice maximizes the unscheduled time remaining
– The finish times of the selected activities are strictly increasing
A Recursive Greedy Algorithm
Alg.: REC-ACT-SEL(s, f, i, j)
1. m ← i + 1
2. while m < j and sm < fi        ► find the first activity in Sij
3.       do m ← m + 1
4. if m < j
5.    then return {am} ∪ REC-ACT-SEL(s, f, m, j)
6.    else return ∅

• Activities are ordered in increasing order of finish time
• Running time: Θ(n) – each activity is examined only once
• Initial call: REC-ACT-SEL(s, f, 0, n + 1)
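A Python transcription of REC-ACT-SEL (a sketch; the lists carry the fictitious a0 at index 0 so that the indices match the pseudocode):

def rec_act_sel(s, f, i, j):
    m = i + 1
    while m < j and s[m] < f[i]:        # find the first activity in S_ij
        m += 1
    if m < j:
        return [m] + rec_act_sel(s, f, m, j)
    return []

s = [float('-inf'), 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]   # index 0: a_0
f = [0,             4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(rec_act_sel(s, f, 0, len(s)))     # [1, 4, 8, 11] -> {a1, a4, a8, a11}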
Example

k    0   1  2  3  4  5  6  7  8  9  10  11  12
sk   –∞  1  3  0  5  3  5  6  8  8   2  12   ∞
fk   0   4  5  6  7  8  9 10 11 12  13  14   –

[Figure: timeline trace of REC-ACT-SEL over 0..14. m = 1 selects a1; a2 and a3 are skipped; m = 4 selects a4; a5, a6, a7 are skipped; m = 8 selects a8; a9 and a10 are skipped; m = 11 selects a11. The result is {a1, a4, a8, a11}.]
An Iterative Greedy Algorithm
Alg.: GREEDY-ACTIVITY-SELECTOR(s, f)
1. n ← length[s]
2. A ← {a1}
3. i ← 1
4. for m ← 2 to n
5.       do if sm ≥ fi            ► activity am is compatible with ai
6.             then A ← A ∪ {am}
7.                   i ← m        ► ai is the most recent addition to A
8. return A

• Assumes that activities are ordered in increasing order of finish time
• Running time: Θ(n) – each activity is examined only once
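The same algorithm in Python (a sketch; dummy entries at index 0 keep the 1-based indexing of the pseudocode):

def greedy_activity_selector(s, f):
    n = len(s) - 1
    A = [1]                             # a_1 has the earliest finish time
    i = 1                               # index of the most recent addition to A
    for m in range(2, n + 1):
        if s[m] >= f[i]:                # a_m is compatible with a_i
            A.append(m)
            i = m
    return A

s = [None, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [None, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(greedy_activity_selector(s, f))   # [1, 4, 8, 11]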
Steps Toward Our Greedy Solution
1. Determine the optimal substructure of the problem
2. Develop a recursive solution
3. Prove that one of the optimal choices is the greedy choice
4. Show that all but one of the subproblems resulting from the greedy choice are empty
5. Develop a recursive algorithm that implements the greedy strategy
6. Convert the recursive algorithm to an iterative one
Designing Greedy Algorithms
1. Cast the optimization problem as one in which we make a choice and are left with only one subproblem to solve
2. Prove that there is always an optimal solution to the original problem that makes the greedy choice
– Making the greedy choice is always safe
3. Demonstrate that, after making the greedy choice, the greedy choice + an optimal solution to the resulting subproblem leads to an optimal solution
Correctness of Greedy Algorithms
1. Greedy Choice Property
– A globally optimal solution can be arrived at by making a locally optimal (greedy) choice
2. Optimal Substructure Property
– We know that we have arrived at a subproblem by making a greedy choice
– Optimal solution to subproblem + greedy choice ⇒ optimal solution for the original problem
Activity Selection
• Greedy Choice Property
There exists an optimal solution that includes the greedy choice:
– the activity am with the earliest finish time in Sij
• Optimal Substructure:
If an optimal solution to subproblem Sij includes activity ak ⇒ it must contain optimal solutions to Sik and Skj
Similarly, am + an optimal solution to Smj ⇒ an optimal solution to Sij
Dynamic Programming vs. Greedy Algorithms
• Dynamic programming
– We make a choice at each step
– The choice depends on solutions to subproblems
– Bottom-up solution, from smaller to larger subproblems
• Greedy algorithm
– Make the greedy choice and THEN solve the subproblem arising after the choice is made
– The choice we make may depend on previous choices, but not on solutions to subproblems
– Top-down solution, problems decrease in size
The Knapsack Problem
• The 0-1 knapsack problem
– A thief robbing a store finds n items: the i-th item is worth vi dollars and weighs wi pounds (vi, wi integers)
– The thief can carry only W pounds in his knapsack
– Items must be taken entirely or left behind
– Which items should the thief take to maximize the value of his load?
• The fractional knapsack problem
– Similar to the above, but the thief can take fractions of items
Fractional Knapsack Problem
• Knapsack capacity: W
• There are n items: the i-th item has value vi and weight wi
• Goal:
– find xi, with 0 ≤ xi ≤ 1 for i = 1, 2, …, n, such that
Σ wixi ≤ W and Σ xivi is maximum
Fractional Knapsack Problem
• Greedy strategy 1: pick the item with the maximum value
• E.g.:
– W = 1
– w1 = 100, v1 = 2
– w2 = 1, v2 = 1
– Taking from the item with the maximum value: total value taken = v1/w1 = 2/100
– Smaller than what the thief can take by choosing the other item: total value (choose item 2) = v2/w2 = 1
Fractional Knapsack Problem
Greedy strategy 2:
• Pick the item with the maximum value per pound, vi/wi
• If the supply of that item is exhausted and the thief can carry more, take as much as possible from the item with the next greatest value per pound
• It is good to order items based on their value per pound:
v1/w1 ≥ v2/w2 ≥ … ≥ vn/wn
Fractional Knapsack Problem
Alg.: FRACTIONAL-KNAPSACK(W, v[n], w[n])
1. while w > 0 and there are items remaining
2.       do pick the item i with maximum vi/wi
3.             xi ← min(1, w/wi)
4.             remove item i from the list
5.             w ← w – xiwi

• w – the amount of space remaining in the knapsack (initially w = W)
• Running time: Θ(n) if the items are already ordered; else Θ(n lg n)
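A runnable sketch of the algorithm (the item representation is my own; the items are sorted up front, which gives the Θ(n lg n) case):

def fractional_knapsack(W, items):
    # items: list of (value, weight) pairs; returns the maximum total value.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    w, total = W, 0.0                   # w = space remaining in the knapsack
    for v_i, w_i in items:
        if w <= 0:
            break
        x_i = min(1.0, w / w_i)         # fraction of item i taken
        total += x_i * v_i
        w -= x_i * w_i
    return total

# The example on the next slide: W = 50, items given as (value, weight).
print(fractional_knapsack(50, [(60, 10), (100, 20), (120, 30)]))   # 240.0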
Fractional Knapsack - Example
[Figure: knapsack capacity W = 50. Item 1: 10 pounds, $60 ($6/pound); item 2: 20 pounds, $100 ($5/pound); item 3: 30 pounds, $120 ($4/pound). The greedy solution takes all of item 1, all of item 2, and 20 pounds of item 3 (worth $80), for a total of $60 + $100 + $80 = $240.]
Greedy Choice
Items:              1    2    3   …  j   …  n
Optimal solution:   x1   x2   x3  …  xj  …  xn
Greedy solution:    x1’  x2’  x3’ …  xj’ …  xn’
• We know that x1’ ≥ x1
– the greedy choice takes as much as possible from item 1
• Modify the optimal solution to take x1’ of item 1
– We have to decrease the quantity taken from some item j: the new xj is decreased by (x1’ – x1)·w1/wj
• Increase in profit: (x1’ – x1)·v1
• Decrease in profit: (x1’ – x1)·w1·vj/wj
We need (x1’ – x1)·v1 ≥ (x1’ – x1)·w1·vj/wj ⇔ v1 ≥ w1·vj/wj ⇔ v1/w1 ≥ vj/wj
True, since item 1 had the best value/pound ratio
Optimal Substructure
• Consider the most valuable load that weighs at most W pounds
• If we remove a weight w of item j from the optimal load
⇒ the remaining load must be the most valuable load weighing at most W – w that can be taken from the remaining n – 1 items plus wj – w pounds of item j
The 0-1 Knapsack Problem
• The thief has a knapsack of capacity W
• There are n items: the i-th item has value vi and weight wi
• Goal:
– find xi ∈ {0, 1}, i = 1, 2, …, n, such that
Σ wixi ≤ W and Σ xivi is maximum
0-1 Knapsack - Greedy Strategy
[Figure: the same three items, with knapsack capacity W = 50. Picking by value per pound takes item 1 first; the solutions containing item 1 are worth at most $160 (items 1 + 2) or $180 (items 1 + 3), while the optimal solution (items 2 + 3) is worth $220.]
• None of the solutions involving the greedy choice (item 1) leads to an optimal solution
– The greedy choice property does not hold
0-1 Knapsack - Dynamic Programming
• P(i, w) – the maximum profit that can be obtained from items 1 to i, if the knapsack has size w
• Case 1: thief takes item i
P(i, w) = vi + P(i – 1, w – wi)
• Case 2: thief does not take item i
P(i, w) = P(i – 1, w)
0-1 Knapsack - Dynamic Programming
P(i, w) = max { vi + P(i – 1, w – wi),  P(i – 1, w) }
                (item i was taken)      (item i was not taken)

[Figure: the (n + 1) × (W + 1) table P, with row 0 and column 0 initialized to 0; entry (i, w) is computed from entries (i – 1, w – wi) and (i – 1, w), filling the table row by row (first index i, then index w).]
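A bottom-up Python sketch of this recurrence (names are my own; P[i][w] follows the definition above):

def knapsack_01(W, weights, values):
    n = len(weights)
    P = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 are 0
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            P[i][w] = P[i - 1][w]               # case 2: item i not taken
            if weights[i - 1] <= w:             # case 1: item i fits, maybe take it
                P[i][w] = max(P[i][w],
                              values[i - 1] + P[i - 1][w - weights[i - 1]])
    return P

# The example on the next slide: W = 5.
P = knapsack_01(5, [2, 1, 3, 2], [12, 10, 20, 15])
print(P[4][5])                                  # 37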
Example: W = 5
P(i, w) = max {vi + P(i – 1, w – wi), P(i – 1, w)}

Item    Weight    Value
1       2         12
2       1         10
3       3         20
4       2         15

        w=0   1    2    3    4    5
i=0:    0     0    0    0    0    0
i=1:    0     0   12   12   12   12
i=2:    0    10   12   22   22   22
i=3:    0    10   12   22   30   32
i=4:    0    10   15   25   30   37

P(1, 1) = P(0, 1) = 0
P(1, 2) = max {12 + 0, 0} = 12
P(1, 3) = max {12 + 0, 0} = 12
P(1, 4) = max {12 + 0, 0} = 12
P(1, 5) = max {12 + 0, 0} = 12
P(2, 1) = max {10 + 0, 0} = 10
P(2, 2) = max {10 + 0, 12} = 12
P(2, 3) = max {10 + 12, 12} = 22
P(2, 4) = max {10 + 12, 12} = 22
P(2, 5) = max {10 + 12, 12} = 22
P(3, 1) = P(2, 1) = 10
P(3, 2) = P(2, 2) = 12
P(3, 3) = max {20 + 0, 22} = 22
P(3, 4) = max {20 + 10, 22} = 30
P(3, 5) = max {20 + 12, 22} = 32
P(4, 1) = P(3, 1) = 10
P(4, 2) = max {15 + 0, 12} = 15
P(4, 3) = max {15 + 10, 22} = 25
P(4, 4) = max {15 + 12, 30} = 30
P(4, 5) = max {15 + 22, 32} = 37
Reconstructing the Optimal Solution
[Figure: the P table from the previous slide, with the reconstruction path highlighted: item 4 taken, item 3 not taken, item 2 taken, item 1 taken.]
• Start at P(n, W)
• When you go left-up ⇒ item i has been taken
• When you go straight up ⇒ item i has not been taken
• In the example: items 4, 2, and 1 are taken (total weight 5, total value 37)
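The same walk in Python (a sketch that reuses P and the item weights from the knapsack_01 example above):

def reconstruct(P, weights):
    items, w = [], len(P[0]) - 1        # start at P[n][W]
    for i in range(len(P) - 1, 0, -1):
        if P[i][w] != P[i - 1][w]:      # the “left-up” move: item i was taken
            items.append(i)
            w -= weights[i - 1]
    return items[::-1]

print(reconstruct(P, [2, 1, 3, 2]))     # [1, 2, 4]: items 1, 2 and 4 are taken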
Optimal Substructure
• Consider the most valuable load that weighs at most W pounds
• If we remove item j from this load
⇒ the remaining load must be the most valuable load weighing at most W – wj that can be taken from the remaining n – 1 items
Overlapping Subproblems
P(i, w) = max {vi + P(i – 1, w – wi), P(i – 1, w)}

[Figure: the P table; all the subproblems shown in grey (rows i through n) may depend on P(i – 1, w).]
Readings
• Chapter 15
• Chapter 16