Lecture 2 – (Chapter 2)
More Algorithmic Problem-Solving Techniques
Mark Fienup
([email protected])
Fall 2007
Many Problem-solving Techniques
Greedy
Divide-and-Conquer
Dynamic Programming
Backtracking
Branch-and-Bound (Best-First Search)
Machine learning
Randomized
Problem with Divide-and-Conquer Solution
[Figure: recursion tree for the divide-and-conquer coin-change solution of
29-cents with coin types {1, 5, 10, 12, 25, 50}. Each node subtracts one coin
of each type, so the same subamounts (e.g., 24, 19, 17, 12) recur in many
different branches.]
A lot of redundant calculations!!!
For coins of {1, 5, 10, 12, 25, 50}, typical timings:

Change Amount    Run-Time (seconds)
50               1
55               5
60               26
65               126
70               613
Dynamic Programming
Idea:
Solve smaller problems before larger ones
store their answers
look up answers to smaller problems when solving larger
subproblems
Advantage:
Eliminates redundant work since each smaller problem is solved
only once!
7) In a dynamic programming coin-change solution, you fill an array fewestCoins
from 0 to the amount of change. An element of fewestCoins stores the fewest
number of coins necessary for the amount of change corresponding to its index
value. For 29-cents using the set of coin types {1, 5, 10, 12, 25, 50},
determine the values of fewestCoins[0] to fewestCoins[7].
[Figure: blank fewestCoins array with index labels 0 through 7, ..., 29.]
8) For 29-cents using the set of coin types {1, 5, 10, 12, 25, 50}, determine
the value of fewestCoins[29].
[Figure: fewestCoins array with entries 0 through 28 previously calculated;
entry 29 to be determined.]
Dynamic Programming Coin-change Solution
Fills an array fewestCoins from 0 to the amount of change.
An element of fewestCoins stores the fewest number of coins necessary for
the amount of change corresponding to its index value.
For 29-cents using the set of coin types {1, 5, 10, 12, 25, 50}:
[Figure: fewestCoins array with entries 0 through 28 previously calculated
(e.g., fewestCoins[4] = 4, fewestCoins[17] = 2, fewestCoins[24] = 2,
fewestCoins[28] = 4) and fewestCoins[29] = 3 being computed.]
fewestCoins[29] = minimum(fewestCoins[28], fewestCoins[24], fewestCoins[19],
fewestCoins[17], fewestCoins[4] ) + 1 = 2 + 1 = 3
NOTICE that the algorithm has previously calculated the fewestCoins values for
the change amounts 0, 1, 2, ..., up to 28 cents
Keeping Track of Coins in the Solution
If we record the best, first coin to return for each change amount (found in the
“minimum” calculation) in an array bestFirstCoin, then we can easily recover the
actual coin types to return.
For the 29-cent solution with coin types {1, 5, 10, 12, 25, 50}:
[Figure: fewestCoins and bestFirstCoin arrays. A minimum for 29 is given by the
5-cent coin, so bestFirstCoin[29] = 5 and 29 - 5 = 24; then
bestFirstCoin[24] = 12 and 24 - 12 = 12; then bestFirstCoin[12] = 12 and
12 - 12 = 0.]
Extract the coins in the solution for 29-cents from bestFirstCoin[29],
bestFirstCoin[24], and bestFirstCoin[12].
C++ Dynamic Programming Solution
// Assumptions: 1) coinTypes are sorted in ascending order, and 2) smallest coin is worth 1 cent
void CalculateChange(int changeToBeReturned, int coinTypes[], int numberOfCoinTypes,
                     int coinsToReturn[], int & numberOfCoins) {
    int change, coin, firstCoinOfMin, minNumberOfCoins, subchange;
    // Allocate storage for arrays
    int * fewestCoins = new int [changeToBeReturned + 1];
    int * bestFirstCoin = new int [changeToBeReturned + 1];
    // Initial solution for 0 cents
    fewestCoins[0] = 0;
    bestFirstCoin[0] = 0;
    // Dynamically build up to solution wanted
    for (change = 1; change <= changeToBeReturned; change++) {
        // Find the min. number of coins after giving back one coin of each type
        firstCoinOfMin = 1;
        minNumberOfCoins = fewestCoins[change-1];
        for (coin = 1; coin < numberOfCoinTypes; coin++) {
            subchange = change - coinTypes[coin];
            if (subchange >= 0) {
                if (fewestCoins[subchange] < minNumberOfCoins) {
                    firstCoinOfMin = coinTypes[coin];
                    minNumberOfCoins = fewestCoins[subchange];
                } // end if (fewestCoins...
            } // end if (subchange...
        } // end for (coin...
        // Store the solution for this amount of change
        fewestCoins[change] = minNumberOfCoins + 1;
        bestFirstCoin[change] = firstCoinOfMin;
    } // end for (change...
    // Copy the desired solution to the parameters that return it
    numberOfCoins = 0;
    while (changeToBeReturned > 0) {
        coinsToReturn[numberOfCoins] = bestFirstCoin[changeToBeReturned];
        changeToBeReturned = changeToBeReturned - coinsToReturn[numberOfCoins];
        numberOfCoins = numberOfCoins + 1;
    } // end while
    // Clean up by freeing the allocated storage used for smaller solutions
    delete [] bestFirstCoin;
    delete [] fewestCoins;
} // end CalculateChange
Possible Dynamic Programming Optimizations
1. Reduce execution-time by calculating only smaller
problems needed
Predictable pattern
e.g., Calculation of binomial coefficient
Irregular pattern - use memoization technique
e.g., Knapsack problem
2. Reduce memory needed by reclaiming storage for no longer
needed smaller problem solutions
e.g., Fibonacci
Calculation of binomial coefficient
Handling "Hard" Problems
For many optimization problems (traveling salesperson (TSP),
knapsack, job-scheduling), the best known algorithms have
run-times that grow exponentially.
You could wait centuries for the solution of all but the smallest
problems!
Ways to handle these "hard" problems:
Find the best (or a good) solution "quickly" to avoid
considering the vast majority of worse solutions
e.g., Backtracking and Branch-and-Bound
See if a restricted version of the problem that meets your
needs might have a tractable solution.
e.g., TSP satisfying the triangle inequality
Use an approximation algorithm to find a good, but not
necessarily optimal, solution
Backtracking
Idea:
Search the "state-space tree" using depth-first search
Use the best solution found so far to prune partial
solutions that are not "promising", i.e., cannot lead to a
better solution than one already found.
The goal is to prune enough of the state-space tree (exponential
in size) that the optimal solution can be found in a reasonable
amount of time.
In the worst case, the algorithm is still exponential.
Backtracking State-Space Tree
State-Space Tree for 16-cents with coin types {1, 5, 10, 12, 25, 50}
[Figure: state-space tree for 16-cents, expanding coins from largest (50) to
smallest (1). Most branches are pruned because the remaining change goes
negative; one path finds a 5-coin solution (12 + 1 + 1 + 1 + 1), and a later
path finds the 3-coin solution (10 + 5 + 1).]
Pruning Criteria
1) negative amount of change, or
2) one less coin than the best solution
and more change to return
What would the tree look like if we considered each
subproblem from smallest coin to largest?
[Figure: state-space tree for 16-cents expanding coins from smallest (1) to
largest (50).]
Recursive Backtracking Template
Backtrack( treeNode n ) {
treeNode c;
for each child c of n do
if promising(c) then
if c is a solution that's better than best then
best = c
else
Backtrack(c)
end if
end if
end for
} // end Backtrack
C++ Coin-change Backtracking Solution
void Expand(int numberOfCoins /* level of search tree too */, int changeToBeReturned) {
    int i, j;
    for (i = 0; i < numberOfCoinTypes; i++) {
        partialSolution[i] = partialSolution[i] + 1;
        changeToBeReturned = changeToBeReturned - coinValues[i];
        numberOfCoins++;
        if (Promising(numberOfCoins, changeToBeReturned)) {
            if (changeToBeReturned == 0) {
                if (!solutionFound || numberOfCoins < bestNumberOfCoins) {
                    // Found a new best solution, so save it
                    bestNumberOfCoins = numberOfCoins;
                    solutionFound = TRUE;
                    for (j = 0; j < numberOfCoinTypes; j++) {
                        bestSolutionSoFar[j] = partialSolution[j];
                    } // end for (j...
                } // end if (!solution...
            } else {
                Expand(numberOfCoins, changeToBeReturned);
            } // end if (changeToBeReturned...
        } // end if (Promising...
        partialSolution[i] = partialSolution[i] - 1;
        changeToBeReturned = changeToBeReturned + coinValues[i];
        numberOfCoins--;
    } // end for (i...
} // end Expand

int Promising(int numberOfCoins, int changeToBeReturned) {
    if (changeToBeReturned < 0) {
        return FALSE;
    } else if (solutionFound && changeToBeReturned > 0
               && numberOfCoins + 1 >= bestNumberOfCoins) {
        return FALSE;
    } // end if
    return TRUE;
} // end Promising
Branch-and-Bound (Best-first search)
Idea:
Search the "state-space tree" using no specific search
pattern
Instead, expand the tree node that appears to lead to the
best solution - node with the best bound
Use the bound values to prune partial solutions that are
not "promising", i.e., cannot lead to a better solution than
one already found.
The goal is to prune enough of the state-space tree (exponential
in size) that the optimal solution can be found in a reasonable
amount of time.
In the worst case, the algorithm is still exponential.
Example 2: Fibonacci
The recursive definition of the Fibonacci sequence (0, 1, 1, 2,
3, 5, 8, 13, ...) is:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) for n >= 2
Complete the recursive algorithm that calculates the nth element
in the sequence.
int fib( int n ) {
    if ( __________ ) {
        return __________ ;
    } else if ( __________ ) {
        return __________ ;
    } else {
        return __________ ;
    } // end if
} // end fib
Recursive Fibonacci algorithm that calculates the nth element in
the sequence.
int fib( int n ) {
    if ( n == 0 ) {
        return 0;
    } else if ( n == 1 ) {
        return 1;
    } else {
        return fib( n - 1 ) + fib( n - 2 );
    } // end if
} // end fib
Complete the recursion tree showing the calls for fib(5).
fib(5)
fib(4)
fib(3)
Questions
4) What type of algorithmic problem-solving technique (greedy,
divide-and-conquer, dynamic programming) is recursive fib?
5) Executing fib on large n values takes a long time. What makes
the recursive Fibonacci calculation so slow?
6) Can you think of a nonrecursive/iterative algorithm that would
be faster?
7) What type of algorithmic problem-solving technique (greedy,
divide-and-conquer, dynamic programming) is your iterative
algorithm?
8) How much memory space is needed for your algorithm?
9) Can you reduce the amount of memory space needed for your
algorithm?
Terminology
problem - question we seek an answer for, e.g., "what is the
largest item in an array?"
instance - the specific input to the problem
algorithm - step-by-step procedure for producing a solution
basic operation - fundamental operation in the algorithm (i.e.,
operation done the most)
problem size - input size. For algorithms involving arrays,
the problem size is often the number of elements.
Goal - derive a function for the number of times
that the basic operation is performed related to the
problem size.
Example sum - Add all the numbers in the array S of n numbers.
Inputs: positive integer n, array of numbers S indexed from 1 to n.
Outputs: sum, the sum of the numbers in S.
number sum ( int n, const keytype S[ ] ) {
    index i;
    number result;

    result = 0;
    for (i = 1; i <= n; i++)
        result = result + S[ i ];
    return result;
}
(Note the text uses bold to indicate an array, e.g., a, and subscripts to indicate an array
element, e.g., a_3 or a_j. However, I'll usually use square brackets to denote an array
element, e.g., a[3].)
For algorithm sum, what is the basic operation?
What is the problem size?
Run-time Analysis of sum
number sum ( int n, const keytype S[ ] ) {
    index i;
    number result;

    result = 0;                 // Done once. Assume time to do once is c1 = 100
    for (i = 1; i <= n; i++)    // Done n times. Assume time to do loop once is c2 = 10
        result = result + S[ i ];
    return result;
}
Every-case time complexity analysis: T(n) = c1 + c2*n = 100 + 10n
What determines the values of c1 and c2?
When n is "sufficiently large," the 10n term dominates. For what value of n
does 10n = 100?
Big-oh Definition - asymptotic upper bound
Definition 2.1: A function f(x) is "Big-oh of g(x)",
or "f(x) is O(g(x))", or "f(x) = O(g(x))", if there are
positive, real constants c and x0 such that for all
values of x >= x0,
f(x) <= c * g(x)
[Figure: plot of execution time versus problem size x, where f(x), the
execution-time formula of the algorithm, falls below c * g(x) for all x >= x0.]
Proof: T(n) = c1 + c2 n = 100 + 10 n is O( n ).
Pick c = 110 and x0 = 1.
Then 100 + 10n <= 110n for all n >= 1, since
100 + 10n <= 110n
100 <= 100n
1 <= n, which is true for all n >= 1.
Problem with big-oh:
If T(n) is O(n), then it is also O(n^2), O(n^3),
O(2^n), ..., since these are also upper bounds.