Assembly line, matrices chained

Dynamic Programming
Dr. M. Sakalli, Marmara University
Matrix Chain Problem
Assembly-line scheduling
Elements of dynamic programming
Picture reference: http://www.flickr.com/photos/7271221@N04/3408234040/sizes/l/in/photostream/ (crane strokes).
o Like Divide and Conquer, DP (dynamic programming) solves a problem by partitioning it into sub-problems and combining their solutions. The differences are that:
• D&C is a top-down approach, while DP works bottom-up. (Memoization, however, allows a top-down method.)
• The sub-problems are independent of each other in Divide & Conquer, while they are not independent in dynamic programming.
• Therefore, a DP algorithm solves every sub-problem just ONCE, saves its answer in a TABLE, and then reuses it: memoization.
o Optimization problems: many solutions are possible, and each has a value.
• A solution built from optimal sub-solutions is called an optimal solution to the problem (not "the" optimum, since several solutions may achieve the optimal value). Example: the shortest-path problem.
o The development of a DP algorithm can be in four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.
Assembly-Line Scheduling
o Notation:
• ei: time to enter assembly line i
• xi: time to exit assembly line i
• ti,j: time to transfer from assembly line i after station j
• ai,j: processing time at station Si,j
o Brute-force approach:
• Enumerate all possible sequences through lines i ∈ {1, 2}.
• For each sequence of n stations Sj, j ∈ {1, …, n}, compute the passing time (the computation takes Θ(n) time).
• Record the sequence with the smaller passing time.
• However, there are too many possible sequences: 2^n in total.
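The brute-force enumeration can be sketched in Python (a minimal sketch; function name and indexing with lines 0/1 instead of 1/2 are illustrative choices, not from the slides):

```python
import itertools

def brute_force_fastest(a, t, e, x):
    """Try every assignment of stations to lines (2^n sequences).
    a[i][j]: processing time at station j on line i (i in {0, 1})
    t[i][j]: transfer time when leaving line i after station j
    e[i], x[i]: entry and exit times of line i."""
    n = len(a[0])
    best = float("inf")
    for seq in itertools.product((0, 1), repeat=n):   # all 2^n sequences
        time = e[seq[0]] + a[seq[0]][0]
        for j in range(1, n):
            if seq[j] != seq[j - 1]:                  # switched lines
                time += t[seq[j - 1]][j - 1]
            time += a[seq[j]][j]
        time += x[seq[-1]]
        best = min(best, time)
    return best
```

This is exact but exponential: for n stations it examines 2^n sequences, which is exactly the blow-up the DP below avoids.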
Assembly-Line Scheduling
o DP Step 1: Analyze the structure of the fastest way through the paths.
• Seeking an optimal substructure: the fastest possible way (min f*) through a station Si,j contains the fastest way from the start to Si,j through either assembly line station S1,j-1 or S2,j-1.
– For j = 1, there is only one possibility.
– For j = 2, 3, …, n, two possibilities: from S1,j-1 or S2,j-1
– from S1,j-1: additional time a1,j
– from S2,j-1: additional time t2,j-1 + a1,j
o Suppose the fastest way through S1,j is through S1,j-1; then the chassis must have taken a fastest way from the starting point through S1,j-1. Why? If there were a faster way through S1,j-1, substituting it would give a faster way through S1,j, a contradiction.
o Similar reasoning holds for S2,j-1.
o An optimal solution to a problem contains within it optimal solutions to sub-problems: the fastest way through station Si,j contains within it the fastest way through station S1,j-1 or S2,j-1.
o Thus we can construct an optimal solution to a problem from the optimal solutions to sub-problems.

f* = min(f1[n] + x1, f2[n] + x2)
o DP Step 2: A recursive solution.
o DP Step 3: Computing the fastest times in Θ(n) time.

f1[j] = e1 + a1,1                                              if j = 1
f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j)           if j ≥ 2

f2[j] = e2 + a2,1                                              if j = 1
f2[j] = min(f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j)           if j ≥ 2

o If f1 and f2 are computed by naive recursion, let ri(j) be the number of references made to fi[j]:
r1(n) = r2(n) = 1
r1(j) = r2(j) = r1(j+1) + r2(j+1)
Problem: ri(j) = 2^(n-j), so f1[1] is referenced 2^(n-1) times, and the total number of references to all fi[j] is Θ(2^n).
o Computing the values bottom-up instead gives a running time of Θ(n).
o Step 4: Construct the fastest way through the factory.
o Determine the fastest way through the factory by recording, at each station, which line gave the minimum, and tracing back from the end.
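The Θ(n) recurrences and the Step 4 trace-back can be sketched in Python (a sketch; names are illustrative, lines are indexed 0 and 1 instead of 1 and 2, and the example instance values are illustrative):

```python
def assembly_line(a, t, e, x):
    """Bottom-up assembly-line scheduling.
    a[i][j]: processing time at station j on line i (i in {0, 1})
    t[i][j]: transfer time when leaving line i after station j
    e[i], x[i]: entry and exit times of line i.
    Returns (fastest total time, chosen line per station)."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]      # f[i][j]: fastest time through station j on line i
    line = [[0] * n for _ in range(2)]   # line[i][j]: which line we came from
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            stay = f[i][j - 1] + a[i][j]
            switch = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], line[i][j] = stay, i
            else:
                f[i][j], line[i][j] = switch, 1 - i
    # f* = min(f1[n] + x1, f2[n] + x2)
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        best, i = f[0][n - 1] + x[0], 0
    else:
        best, i = f[1][n - 1] + x[1], 1
    # Step 4: trace back the fastest way
    path = [i]
    for j in range(n - 1, 0, -1):
        i = line[i][j]
        path.append(i)
    path.reverse()
    return best, path

# Example (illustrative 6-station instance):
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
best, path = assembly_line(a, t, e=[2, 4], x=[3, 2])   # best == 38
```

Each station is processed once with two choices, so the running time is Θ(n), matching the analysis above.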
Matrix-chain Multiplication
o Problem definition: Given a chain of matrices A1, A2, ..., An, where matrix Ai has dimension pi-1 × pi, find the order of matrix multiplications minimizing the total number of scalar multiplications to compute the final product.
o Let A be a [p, q] matrix and B be a [q, r] matrix. Then the cost of computing C(p,r) = A(p,q) * B(q,r) is pqr scalar multiplications:

for i ← 1 to p
    for j ← 1 to r
        C[i, j] ← 0
for i ← 1 to p
    for j ← 1 to r
        for k ← 1 to q
            C[i, j] ← C[i, j] + A[i, k] * B[k, j]
o In the matrix-chain multiplication problem, the matrices are not actually multiplied; the aim is to determine an order for multiplying the matrices that has the lowest cost.
o The time invested in determining the optimal order must be more than paid for by the time saved later when actually performing the matrix multiplications.
o Example given in class. Suppose we want to multiply a sequence of matrices A1 A2 A3 A4 with dimensions 2×3, 3×5, 5×7, 7×2.
Remember: matrix multiplication is not commutative (but it is associative, so the product may be parenthesized freely).
1. (A1A2)(A3A4): the total # of multiplications is 30 + 70 + 20 = 120.
2. ((A1A2)A3)A4: the total # of multiplications is 30 + 70 + 28 = 128.
3. A1(A2(A3A4)): the total # of multiplications is 70 + 30 + 12 = 112.
o The aim is to fully parenthesize the product of matrices, with a parenthesization minimizing the number of scalar multiplications.
o For example, for the product A1 A2 A3 A4, a full parenthesization is ((A1 A2) A3) A4.
o A product of matrices is fully parenthesized if it is either a single matrix, or a product of two fully parenthesized matrix products, surrounded by parentheses.
Brute-force approach
a) Enumerate all possible parenthesizations.
b) Compute the number of scalar multiplications of each parenthesization.
c) Select the parenthesization needing the least number of scalar multiplications.
o The number of enumerated parenthesizations of a product of n matrices, denoted by P(n), is the sequence of Catalan numbers, growing as Ω(4^n / n^(3/2)); even a simple bound on the solution of the recurrence is Ω(2^n).

P(n) = 1                                    if n = 1
P(n) = Σ_{k=1..n-1} P(k) · P(n-k)           if n ≥ 2

P(n) = C(n-1) = (1/n) · binom(2(n-1), n-1) = Ω(4^n / n^(3/2))

The brute-force approach is inefficient.
Catalan numbers: the number of ways in which parentheses can be placed in a sequence of numbers to be multiplied, two at a time.
3 numbers: (1 (2 3)), ((1 2) 3)
4 numbers: (1 (2 (3 4))), (1 ((2 3) 4)), ((1 2) (3 4)), ((1 (2 3)) 4), (((1 2) 3) 4)
5 numbers: (1 (2 (3 (4 5)))), (1 (2 ((3 4) 5))), (1 ((2 3) (4 5))), (1 ((2 (3 4)) 5)), (1 (((2 3) 4) 5)), ((1 2) (3 (4 5))), ((1 2) ((3 4) 5)), ((1 (2 3)) (4 5)), ((1 (2 (3 4))) 5), ((1 ((2 3) 4)) 5), (((1 2) 3) (4 5)), (((1 2) (3 4)) 5), (((1 (2 3)) 4) 5), ((((1 2) 3) 4) 5)
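The recurrence for P(n) can be checked directly against these listings (a short sketch; the function name is illustrative):

```python
def num_parenthesizations(n):
    """P(1) = 1; P(n) = sum_{k=1}^{n-1} P(k) * P(n - k) for n >= 2."""
    P = [0] * (n + 1)
    P[1] = 1
    for m in range(2, n + 1):
        # split the m-matrix product between position k and k+1
        P[m] = sum(P[k] * P[m - k] for k in range(1, m))
    return P[n]
```

It reproduces the counts above: P(3) = 2, P(4) = 5, P(5) = 14, i.e. the Catalan numbers C2, C3, C4.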
With DP
o DP Step 1: structure of an optimal parenthesization
• Let Ai..j (i ≤ j) denote the matrix resulting from Ai Ai+1 … Aj.
• Any parenthesization of Ai Ai+1 … Aj must split the product between Ak and Ak+1 for some k (i ≤ k < j). The cost = cost of computing Ai..k + cost of computing Ak+1..j + cost of Ai..k × Ak+1..j.
• If k is the split position of an optimal parenthesization, the parenthesization of the "prefix" subchain Ai Ai+1 … Ak within this optimal parenthesization of Ai Ai+1 … Aj must itself be an optimal parenthesization of Ai Ai+1 … Ak.
• Optimal split: ((A1 A2 ... Ak)(Ak+1 Ak+2 ... An)); combine the optimal sub-parenthesizations.
DP Step 2: Recursively define the value of an optimal solution (a recursive relation)
• Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix Ai Ai+1 … Aj.
• The lowest cost to compute A1 A2 … An would be m[1, n].
• Recurrence:

m[i, j] = 0                                                          if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + pi-1 pk pj } if i < j

• Splitting at k: (Ai … Ak) costs m[i, k] and yields a pi-1 × pk matrix; (Ak+1 … Aj) costs m[k+1, j] and yields a pk × pj matrix. Reminder: the dimension of Ai is pi-1 × pi.
Recursive (top-down) solution using the formula for m[i, j]:

RECURSIVE-MATRIX-CHAIN(p, i, j)
1. if i = j then return 0
2. m[i, j] ← ∞
3. for k ← i to j − 1
4.     q ← RECURSIVE-MATRIX-CHAIN(p, i, k)
5.         + RECURSIVE-MATRIX-CHAIN(p, k+1, j)
6.         + p[i−1] p[k] p[j]
7.     if q < m[i, j] then m[i, j] ← q
8. return m[i, j]

Complexity: line 1 contributes the constant base-case work; lines 4, 5 contribute the recursive calls and line 6 the addition, giving

T(1) ≥ 1
T(n) ≥ 1 + Σ_{k=1..n-1} (T(k) + T(n-k) + 1)   for n > 1
Complexity of the recursive solution

T(n) ≥ 1 + Σ_{k=1..n-1} (T(k) + T(n-k) + 1) = n + 2 Σ_{i=1..n-1} T(i)

o We prove a lower bound by the substitution method: guess a solution, then prove by mathematical induction that it is correct.
o Claim: T(n) = Ω(2^n), that is, T(n) ≥ 2^(n-1) for all n ≥ 1.
o Induction base: T(1) ≥ 1 = 2^0.
o Induction assumption: assume T(k) ≥ 2^(k-1) for all 1 ≤ k < n.
o Induction step:

T(n) ≥ n + 2 Σ_{i=1..n-1} 2^(i-1) = n + 2 Σ_{i=0..n-2} 2^i = n + 2(2^(n-1) − 1) = n + 2^n − 2 ≥ 2^(n-1)
o Step 3: Computing the optimal cost.
• The recursive algorithm is exponential in n, Ω(2^n), no better than brute force.
• But there are only C(n, 2) + n = Θ(n^2) distinct subproblems.
• The recursion revisits the same overlapping subproblems many times.
• By tabling the answers to subproblems, each subproblem is solved only once.
• The second hallmark of DP: overlapping subproblems; solve every subproblem just once.
Step 3: Compute the value of an optimal solution bottom-up
Input: n; an array p[0…n] containing matrix dimensions
State: m[1..n, 1..n] for storing m[i, j]; s[1..n, 1..n] for storing the optimal k used to calculate m[i, j]
Result: minimum-cost table m and split table s

MATRIX-CHAIN-TABLE(p, n)
  for i ← 1 to n
      m[i, i] ← 0
  for l ← 2 to n
      for i ← 1 to n − l + 1
          j ← i + l − 1
          m[i, j] ← ∞
          for k ← i to j − 1
              q ← m[i, k] + m[k+1, j] + p[i−1] p[k] p[j]
              if q < m[i, j]
                  m[i, j] ← q
                  s[i, j] ← k
  return m and s

Takes O(n^3) time; requires Θ(n^2) space.
chains of length 1 (A1: 30×1, A2: 1×40, A3: 40×10, A4: 10×25)

m        j=1    j=2    j=3    j=4
i=1       0
i=2              0
i=3                     0
i=4                            0
chains of length 2

m        j=1    j=2    j=3    j=4
i=1       0    1200
i=2              0     400
i=3                     0    10000
i=4                            0

s[1,2] = 1, s[2,3] = 2, s[3,4] = 3

m[1,2] = m[1,1] + m[2,2] + 30·1·40 = 1200
m[2,3] = m[2,2] + m[3,3] + 1·40·10 = 400
m[3,4] = m[3,3] + m[4,4] + 40·10·25 = 10000
chains of length 3

m        j=1    j=2    j=3    j=4
i=1       0    1200    700
i=2              0     400    650
i=3                     0    10000
i=4                            0

s[1,3] = 1, s[2,4] = 3

m[1,3] = min { m[1,1] + m[2,3] + 30·1·10 = 700,
               m[1,2] + m[3,3] + 30·40·10 = 13200 } = 700

m[2,4] = min { m[2,2] + m[3,4] + 1·40·25 = 11000,
               m[2,3] + m[4,4] + 1·10·25 = 650 } = 650
chains of length 4

m        j=1    j=2    j=3    j=4
i=1       0    1200    700   1400
i=2              0     400    650
i=3                     0    10000
i=4                            0

s[1,4] = 1

m[1,4] = min { m[1,1] + m[2,4] + 30·1·25 = 1400,
               m[1,2] + m[3,4] + 30·40·25 = 41200,
               m[1,3] + m[4,4] + 30·10·25 = 8200 } = 1400
Printing the solution (A1: 30×1, A2: 1×40, A3: 40×10, A4: 10×25)

s        j=2    j=3    j=4
i=1       1      1      1
i=2              2      3
i=3                     3

PRINT(s, 1, 4)
  PRINT(s, 1, 1)
  PRINT(s, 2, 4)
    PRINT(s, 2, 3)
      PRINT(s, 2, 2)
      PRINT(s, 3, 3)
    PRINT(s, 4, 4)

Output: (A1((A2A3)A4))
Step 4: Constructing an optimal solution
o Each entry s[i, j] = k shows where to split the product Ai Ai+1 … Aj for the minimum cost:
A1 … An = ((A1 … As[1, n]) (As[1, n]+1 … An))
o To print the solution, invoke the following function with (s, 1, n) as the parameters:

PRINT-OPTIMAL-PARENS(s, i, j)
1. if i = j then print "A"i
2. else print "("
3.     PRINT-OPTIMAL-PARENS(s, i, s[i, j])
4.     PRINT-OPTIMAL-PARENS(s, s[i, j]+1, j)
5.     print ")"
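MATRIX-CHAIN-TABLE and PRINT-OPTIMAL-PARENS translate almost line-for-line into Python (a sketch; 1-based tables are padded with a dummy row and column, and the print routine returns a string instead of printing):

```python
def matrix_chain_table(p):
    """p[0..n] holds the dimensions; matrix A_i is p[i-1] x p[i].
    Returns (m, s): minimum-cost table and split table (1-based)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 on the diagonal
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                   # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

def print_optimal_parens(s, i, j):
    """Return the optimal parenthesization of A_i..A_j as a string."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"
```

For the running example p = [30, 1, 40, 10, 25], this reproduces the tables above: m[1][4] = 1400 and the parenthesization (A1((A2A3)A4)).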
Suppose A1 A2 … Ai … Ar have dimensions P1×P2, P2×P3, …, Pi×Pi+1, …, Pr×Pr+1 (note the shifted indexing: here Ai is Pi × Pi+1).
Assume mij = the # of multiplications needed to multiply Ai Ai+1 … Aj.
Initial values: mii = mjj = 0. Final value: m1r.
Splitting at k: (Ai … Ak)(Ak+1 … Aj) gives
mij = mik + mk+1,j + Pi Pk+1 Pj+1
where k could be any index with i ≤ k ≤ j−1.
We know the range of k but not the exact value of k; thus
mij = min(mik + mk+1,j + Pi Pk+1 Pj+1) for i ≤ k ≤ j−1
The number of choices for k is (j−1) − i + 1 = j − i, and the table is filled in order of increasing j − i.
Example: Calculate m14 for A1 A2 A3 A4 with dimensions 2×5, 5×3, 3×7, 7×2, so P1 = 2, P2 = 5, P3 = 3, P4 = 7, P5 = 2.

Fill the table diagonal by diagonal, by increasing j − i:
j − i = 0: m11 = m22 = m33 = m44 = 0
j − i = 1: m12 = 30, m23 = 105, m34 = 42
j − i = 2: m13 = 72, m24 = 72
j − i = 3: m14 = 84

mij = min(mik + mk+1,j + Pi Pk+1 Pj+1) for i ≤ k ≤ j−1

m12 = min(m11 + m22 + P1P2P3) for 1 ≤ k ≤ 1
    = min(0 + 0 + 2×5×3) = 30
mij = min(mik + mk+1,j + Pi Pk+1 Pj+1) for i ≤ k ≤ j−1

m13 = min(m1k + mk+1,3 + P1 Pk+1 P4) for 1 ≤ k ≤ 2
    = min(m11 + m23 + P1P2P4, m12 + m33 + P1P3P4)
    = min(0 + 105 + 2×5×7, 30 + 0 + 2×3×7)
    = min(105 + 70, 30 + 42) = 72

m24 = min(m2k + mk+1,4 + P2 Pk+1 P5) for 2 ≤ k ≤ 3
    = min(m22 + m34 + P2P3P5, m23 + m44 + P2P4P5)
    = min(0 + 42 + 5×3×2, 105 + 0 + 5×7×2)
    = min(42 + 30, 105 + 70) = 72

m14 = min(m1k + mk+1,4 + P1 Pk+1 P5) for 1 ≤ k ≤ 3
    = min(m11 + m24 + P1P2P5, m12 + m34 + P1P3P5, m13 + m44 + P1P4P5)
    = min(0 + 72 + 2×5×2, 30 + 42 + 2×3×2, 72 + 0 + 2×7×2)
    = min(72 + 20, 30 + 42 + 12, 72 + 28) = min(92, 84, 100) = 84
Memoized Matrix Chain

Recursion tree of the subproblems of 1..4: the root 1..4 splits into (1..1, 2..4), (1..2, 3..4) and (1..3, 4..4); these split further, e.g. 2..4 into (2..2, 3..4) and (2..3, 4..4). Subproblems such as 1..2, 2..3 and 3..4 recur many times; a memo table computes each of them only once.

LOOKUP-CHAIN(p, i, j)
1. if m[i, j] < ∞ then return m[i, j]
2. if i = j then m[i, j] ← 0
3. else for k ← i to j−1
4.     do q ← LOOKUP-CHAIN(p, i, k)
5.            + LOOKUP-CHAIN(p, k+1, j) + pi-1 pk pj
6.        if q < m[i, j] then m[i, j] ← q
7. return m[i, j]
For a DP to be applicable, an optimization problem must have:
1. Optimal substructure
• An optimal solution to the problem contains within it optimal solutions to subproblems.
2. Overlapping (dependent) subproblems
• The space of subproblems must be small; i.e., the same subproblems are encountered over and over.

DP step 3, memoization: T(n) = O(n^3), space Θ(n^2).
A top-down variation of dynamic programming
o Idea: remember the solutions to subproblems as they are solved by the simple recursive algorithm; the recursion overhead, however, may be quite costly.
o Bottom-up DP is considered better when all subproblems must be calculated, because there is no overhead for recursion.

Initialize all m[i, j] to ∞, then call LOOKUP-TABLE(p, 1, n):

LOOKUP-TABLE(p, i, j)
  if m[i, j] < ∞ then return m[i, j]
  if i = j then m[i, j] ← 0
  else for k ← i to j − 1
      q ← LOOKUP-TABLE(p, i, k) + LOOKUP-TABLE(p, k+1, j) + p[i−1] p[k] p[j]
      if q < m[i, j] then m[i, j] ← q
  return m[i, j]
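The memoized LOOKUP-CHAIN can be sketched in Python with a dictionary standing in for the ∞-initialized table (a sketch; names are illustrative):

```python
def memoized_matrix_chain(p):
    """Top-down memoized matrix-chain cost; A_i is p[i-1] x p[i]."""
    n = len(p) - 1
    memo = {}                        # plays the role of the m table
    def lookup(i, j):
        if (i, j) in memo:           # m[i, j] < infinity: already solved
            return memo[(i, j)]
        if i == j:
            best = 0
        else:
            best = min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))
        memo[(i, j)] = best          # solve each subproblem only once
        return best
    return lookup(1, n)
```

It returns 1400 for p = [30, 1, 40, 10, 25] and 84 for the in-class example p = [2, 5, 3, 7, 2], matching the bottom-up tables.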
Elements of DP
o Optimal substructure
– A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
o Overlapping subproblems
• When a recursive algorithm revisits the same subproblem over and over again, the optimization problem has overlapping subproblems.
o Subtleties
• Better not to assume that optimal substructure applies in general. Two examples in a directed graph G = (V, E) with vertices u, v ∈ V:
• Unweighted shortest path:
– Find a path from u to v consisting of the fewest edges. Good for dynamic programming.
• Unweighted longest simple path:
– Find a simple path from u to v consisting of the most edges. Not good for dynamic programming.
o The running time of a dynamic-programming algorithm depends on the product of two factors: the number of subproblems overall × the number of choices for each subproblem.
• Assembly-line scheduling: Θ(n) subproblems · 2 choices = Θ(n)
• Matrix-chain multiplication: Θ(n^2) subproblems · at most (n−1) choices = O(n^3)
Principle of Optimality (Optimal Substructure)
The principle of optimality applies to a problem (not an algorithm). A large number of optimization problems satisfy this principle.
Principle of optimality: given an optimal sequence of decisions or choices, each subsequence must also be optimal.

Principle of optimality: the shortest path problem
Problem: Given a graph G and vertices s and t, find a shortest path in G from s to t.
Theorem: A subpath P' (from s' to t') of a shortest path P is a shortest path from s' to t' in the subgraph G' induced by P'. (Subpaths are paths that start or end at an intermediate vertex of P.)
Proof: If P' were not a shortest path from s' to t' in G', we could substitute the subpath from s' to t' in P by the shortest path in G' from s' to t'. The result would be a shorter path from s to t than P, which contradicts our assumption that P is a shortest path from s to t.
Principle of Optimality
[Figure: weighted graph G on vertices a, b, c, d, e, f, with shortest path P = {(a,b), (b,c), (c,d), (d,e)} from a to e; the subpath P' = {(c,d), (d,e)} lies in the induced subgraph G'.]
P' must be a shortest path from c to e in G', otherwise P cannot be a shortest path from a to e in G.
[Figure: four cities A, B, C, D; the longest simple route from A to B (solid line) passes through C, shown alongside the longest simple routes from A to C and from C to B.]
Problem: What is the longest simple route between city A and city B?
• Simple = never visit the same spot twice.
o The longest simple route (solid line) has city C as an intermediate city. It does not consist of the longest simple route from A to C followed by the longest simple route from C to B. Therefore the longest-simple-route problem does not satisfy the principle of optimality.