Algorithms
Algorithm Techniques
Steps in Designing Algorithms (1)
1. Understand the problem
   • Specify the range of inputs the algorithm should handle
2. Learn about the model of the implementation technology
   • RAM (random-access machine), sequential execution
3. Choose between an exact and an approximate solution
   • Some problems cannot be solved exactly: nonlinear equations, evaluating definite integrals
   • Exact solutions may be unacceptably slow
4. Choose the appropriate data structures
Steps in Designing Algorithms (2)
5. Choose an algorithm design technique
   • A general approach to solving problems algorithmically that is applicable to a variety of computational problems
   • Provides guidance for developing solutions to new problems
6. Specify the algorithm
   • Pseudocode: a mixture of natural and programming language
7. Prove the algorithm's correctness
   • The algorithm yields the correct result for any legitimate input, in a finite amount of time
   • Mathematical induction, loop invariants
Steps in Designing Algorithms (3)
8. Analyze the algorithm
   Predict the amount of resources required:
   • Memory: how much space is needed?
   • Computational time: how fast does the algorithm run?
   Input size (number of elements in the input):
   • Size of an array, polynomial degree, # of elements in a matrix, # of bits in the binary representation of the input, vertices and edges in a graph
   Def: Running time = the number of primitive operations (steps) executed before termination
   • Arithmetic operations (+, -, *), data movement, control, decision making (if, while), comparison
Steps in Designing Algorithms (4)
9. Code the algorithm
   • Verify the ranges of the input
   • Efficient/inefficient implementation
   • It is hard to prove the correctness of a program (typically done by testing)
Classification of Algorithms
By problem types:
• Sorting
• Searching
• String processing
• Graph problems
• Combinatorial problems
• Geometric problems
• Numerical problems
By design paradigms:
• Divide-and-conquer
• Incremental
• Dynamic programming
• Greedy algorithms
• Randomized/probabilistic
Fundamental Design Paradigms
Greedy method
• Often used in problems involving weighted graphs and in data compression problems
Divide-and-conquer method
• Already seen in merge-sort and quick-sort
• Here we'll concentrate on analyzing problems solved by this method, by solving recurrence relations
Dynamic programming
• A very powerful technique IF we can build a certain characterization
• Used in solving many problems that superficially do not seem to have much in common
There are other paradigms, but these are really quite basic.
The Greedy Method
The Greedy Method Technique
The greedy method is a general algorithm design paradigm, built on the following elements:
• Configurations: different choices, collections, or values to find
• An objective function: a score assigned to configurations, which we want to either maximize or minimize
• A globally-optimal solution can always be found by a series of local improvements from a starting configuration
Example
Given: A set S of n items, with each item i having
• b_i - a positive benefit
• w_i - a positive weight
Goal: Choose items with maximum total value but with weight at most W ("knapsack").

Items:            1      2      3      4      5
Weight:           4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:          $12    $32    $40    $30    $50
Value ($ per ml): 3      4      20     5      50

Solution (10 ml knapsack):
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
Problems That Can Be Solved by the Greedy Method
A game like chess can be won by thinking ahead. But a player focusing entirely on their immediate advantage is usually easy to defeat. For example, in Scrabble, a player can do quite well by simply making whatever move seems best at the moment, without worrying about future consequences.

On each step in the algorithm, the choice must be:
• Feasible - i.e. it satisfies the problem's constraints
• Locally optimal - i.e. it has to be the best local choice among all feasible choices available at that step
• Irrevocable - i.e. once made, it cannot be changed on subsequent steps of the algorithm
Example: Making Change
Problem: A dollar amount to reach and a collection of coin amounts to use to get there.
Configuration: A dollar amount to return to a customer plus the coins already returned.
Objective function: Minimize the number of coins returned.
Greedy solution: Always return the largest coin you can.
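As a concrete sketch, here is the greedy change-making rule in Python (the function name and the coin set in the example are illustrative, not from the slides). Note that this greedy rule is optimal for canonical coin systems such as US coins, but it can fail to minimize the coin count for arbitrary denominations.

def make_change(amount, denominations):
    """Greedy change-making: repeatedly return the largest coin that fits.

    Works in integer cents to avoid floating-point issues.
    """
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:          # always return the largest coin you can
            amount -= coin
            coins.append(coin)
    if amount != 0:
        raise ValueError("amount cannot be represented with these denominations")
    return coins

# Example: 63 cents with US coins -> [25, 25, 10, 1, 1, 1]
print(make_change(63, [1, 5, 10, 25]))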
The Fractional Knapsack Problem
Given: A set S of n items, with each item i having
• b_i - a positive benefit
• w_i - a positive weight
Goal: Choose items with maximum total value but with weight at most W.
• The value of an item is its benefit/weight ratio.
• If we are allowed to take fractional amounts, then this is the fractional knapsack problem.
• In this case, we let x_i denote the amount we take of item i, with 0 ≤ x_i ≤ w_i.
• Objective: maximize Σ_{i∈S} x_i (b_i / w_i)
• Constraint: Σ_{i∈S} x_i ≤ W
The Fractional Knapsack Algorithm
Greedy choice: keep taking the item with the highest value (benefit-to-weight ratio b_i / w_i).
Run time: O(n log n). Why? Use a max-heap priority queue.

Algorithm fractionalKnapsack(S, W)
  Input: set S of items with benefit b_i and weight w_i; max. weight W
  Output: amount x_i of each item i to maximize benefit with weight at most W
  for each item i in S
    x_i ← 0
    v_i ← b_i / w_i        {value}
  w ← 0                    {total weight}
  while w < W
    remove item i with highest v_i
    x_i ← min{w_i, W - w}
    w ← w + min{w_i, W - w}
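A runnable Python sketch of this pseudocode (the function name and the (benefit, weight) input format are my choices). Python's heapq is a min-heap, so values are negated to simulate the max-heap from the slide.

import heapq

def fractional_knapsack(items, W):
    """Greedy fractional knapsack; items is a list of (benefit, weight) pairs.

    Returns (total_benefit, amounts), where amounts[i] is how much of item i
    is taken. The heap on value = benefit/weight gives O(n log n) overall.
    """
    heap = [(-b / w, i, b, w) for i, (b, w) in enumerate(items)]
    heapq.heapify(heap)

    amounts = [0.0] * len(items)
    total_benefit = 0.0
    remaining = W
    while remaining > 0 and heap:
        _, i, b, w = heapq.heappop(heap)   # item with highest value v_i
        take = min(w, remaining)           # x_i <- min{w_i, W - w}
        amounts[i] = take
        total_benefit += take * (b / w)
        remaining -= take
    return total_benefit, amounts

# The 10 ml example above: take items 5, 3, 4 fully, then 1 ml of item 2.
items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
print(fractional_knapsack(items, 10))      # total benefit 124.0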
Other Problems That Can Use the Greedy Method
Here are a few:
• You are to network a collection of computers by linking selected pairs of them. Each link has a maintenance cost, reflected in a weight attached to the link. What is the cheapest possible network?
• The MP3 audio compression scheme encodes a sound signal by using something called a Huffman encoding. In simple terms, given symbols A, B, C, and D, what is the shortest way that a binary string can encode the letters so any string can be decoded unambiguously?
Task Scheduling
Given: a set T of n tasks, each having:
• A start time, s_i
• A finish time, f_i (where s_i < f_i)
Goal: Perform all the tasks using a minimum number of "machines."
Two tasks can execute on the same machine only if f_i ≤ s_j or f_j ≤ s_i (such tasks are called non-conflicting).

[Figure: Gantt chart of tasks assigned to Machines 1-3 over the time axis 1-9]
Example
Given: a set T of n tasks, each having:
• A start time, s_i
• A finish time, f_i (where s_i < f_i)
Goal: Perform all tasks on a minimum number of machines.
Assume T is [4,7], [7,8], [1,4], [1,3], [2,5], [3,7], [6,9].
Ordered by start time: [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8]

[Figure: the resulting three-machine schedule over the time axis 1-9]
Task Scheduling Algorithm
Greedy choice: consider tasks by their start time and use as few machines as possible with this order.
Run time: O(n log n). Why?

Algorithm taskSchedule(T)
  Input: set T of tasks with start time s_i and finish time f_i
  Output: non-conflicting schedule with minimum number of machines
  m ← 0        {no. of machines}
  while T is not empty
    remove task i with smallest s_i
    if there's a machine j for i then
      schedule i on machine j
    else
      m ← m + 1
      schedule i on machine m

Correctness: Suppose there is a better schedule.
• We can use k - 1 machines, while the algorithm uses k.
• Let i be the first task scheduled on machine k.
• Task i must conflict with k - 1 other tasks.
• But that means there is no non-conflicting schedule using k - 1 machines.
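Below is a minimal Python sketch of this algorithm (the function name and the (start, finish) pair format are mine). The "is there a machine j for i" test is implemented with a min-heap of machine finish times, which is what gives the O(n log n) bound.

import heapq

def task_schedule(tasks):
    """Greedy task scheduling; tasks is a list of (start, finish) pairs.

    Returns a list of machines, each a list of the tasks assigned to it.
    """
    machines = []        # machines[j] = tasks scheduled on machine j
    free_at = []         # min-heap of (finish_time, machine_index)
    for s, f in sorted(tasks):              # consider tasks by start time
        if free_at and free_at[0][0] <= s:  # some machine is free for this task
            _, j = heapq.heappop(free_at)
        else:                               # every machine conflicts: open a new one
            j = len(machines)
            machines.append([])
        machines[j].append((s, f))
        heapq.heappush(free_at, (f, j))
    return machines

# The example above needs 3 machines:
T = [(4, 7), (7, 8), (1, 4), (1, 3), (2, 5), (3, 7), (6, 9)]
for j, m in enumerate(task_schedule(T), start=1):
    print(f"Machine {j}: {m}")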
Greedy Algorithms
A greedy algorithm always makes the choice that looks best at the moment.
• The hope: a locally optimal choice will lead to a globally optimal solution.
• For some problems, it works.
Dynamic programming can be overkill; greedy algorithms tend to be easier to code.
Activity Selection:
A Greedy Algorithm
The actual algorithm is simple (see the sketch below):
• Sort the activities by finish time
• Schedule the first activity
• Then schedule the next activity in the sorted list that starts after the previous activity finishes
• Repeat until there are no more activities
The intuition is even simpler:
• Always pick the shortest ride available at the time
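Here is a minimal Python sketch of the algorithm just described (the function name and the (start, finish) pair format are illustrative):

def select_activities(activities):
    """Greedy activity selection: sort by finish time, then repeatedly take
    the first activity that starts after the previous one finishes.
    """
    selected = []
    last_finish = float("-inf")
    for s, f in sorted(activities, key=lambda a: a[1]):  # by finish time
        if s >= last_finish:     # compatible with everything selected so far
            selected.append((s, f))
            last_finish = f
    return selected

# The 11-activity example from the slides below selects {a1, a4, a8, a11}:
acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 13), (12, 14)]
print(select_activities(acts))   # [(1, 4), (5, 7), (8, 11), (12, 14)]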
Activity Selection
Schedule n activities that require exclusive use of a common resource.
S = {a_1, ..., a_n} - set of activities
a_i needs the resource during the period [s_i, f_i)
• s_i = start time and f_i = finish time of activity a_i, with 0 ≤ s_i < f_i < ∞
Activities a_i and a_j are compatible if the intervals [s_i, f_i) and [s_j, f_j) do not overlap, i.e. f_i ≤ s_j or f_j ≤ s_i.
Activity Selection Problem
Select the largest possible set of non-overlapping (mutually compatible) activities.
E.g.:

i   | 1  2  3  4  5  6  7   8   9   10  11
s_i | 1  3  0  5  3  5  6   8   8   2   12
f_i | 4  5  6  7  8  9  10  11  12  13  14

• Activities are sorted in increasing order of finish times
• A subset of mutually compatible activities: {a3, a9, a11}
• Maximal sets of mutually compatible activities: {a1, a4, a8, a11} and {a2, a4, a9, a11}
Characterizing the Subproblems
The original problem: find the maximum subset of mutually compatible activities for S = S_{0,n+1}.
Activities are sorted by increasing finish time: a_0, a_1, a_2, a_3, ..., a_{n+1}.
We always choose an activity with the earliest finish time.
• The greedy choice maximizes the unscheduled time remaining
• The finish time of the activities selected is strictly increasing
Example

k   | 0  1  2  3  4  5  6  7   8   9   10  11  12
s_k | -  1  3  0  5  3  5  6   8   8   2   12  ∞
f_k | 0  4  5  6  7  8  9  10  11  12  13  14  -

Trace of the greedy choice (m = index of the last activity selected):
• a1 is selected (m = 1); a2 and a3 start before f1 = 4, so they are skipped
• a4 is selected (m = 4); a5, a6, a7 are skipped
• a8 is selected (m = 8); a9, a10 are skipped
• a11 is selected (m = 11)
Result: {a1, a4, a8, a11}

[Figure: timeline 0-14 showing each step of the selection]
Steps Toward Our Greedy Solution
1. Determine the optimal substructure of the problem
2. Develop a recursive solution
3. Prove that one of the optimal choices is the greedy choice
4. Show that all but one of the subproblems resulting from the greedy choice are empty
5. Develop a recursive algorithm that implements the greedy strategy
6. Convert the recursive algorithm to an iterative one
Designing Greedy Algorithms
1. Cast the optimization problem as one in which: we make a choice and are left with only one subproblem to solve
2. Prove that there is always an optimal solution to the original problem that makes the greedy choice
   • Making the greedy choice is always safe
3. Demonstrate that after making the greedy choice: the greedy choice + an optimal solution to the resulting subproblem leads to an optimal solution
Divide-and-Conquer

Example recursion (merge-sort on 7 2 9 4):
7 2 9 4 → 2 4 7 9
  7 2 → 2 7          9 4 → 4 9
    7 → 7   2 → 2      9 → 9   4 → 4
Divide-and-Conquer
Divide-and-conquer is a general algorithm design paradigm:
• Divide: divide the input data S into two or more disjoint subsets S1, S2, ...
• Recur: solve the subproblems recursively
• Conquer: combine the solutions for S1, S2, ..., into a solution for S
The base cases for the recursion are subproblems of constant size.
Analysis can be done using recurrence equations.
Many Problems Fall to Divide and Conquer
Merge-sort and quicksort were mentioned earlier. Others include:
• Compute the gcd (greatest common divisor) of two positive integers.
• Compute the median of a list of numbers.
• Multiply two polynomials, e.g. (1 + 2x + 3x^2) * (5 - 4x + 8x^2).
• FFT - the Fast Fourier Transform used in signal processing.
• (Closest Pair) Given points in the plane, find the two that have the minimum distance between them.
Merge-Sort
Merge-sort on an input sequence S with n elements consists of three steps:
• Divide: partition S into two sequences S1 and S2 of about n/2 elements each
• Recur: recursively sort S1 and S2
• Conquer: merge S1 and S2 into a unique sorted sequence

Algorithm mergeSort(S, C)
  Input: sequence S with n elements, comparator C
  Output: sequence S sorted according to C
  if S.size() > 1
    (S1, S2) ← partition(S, n/2)
    mergeSort(S1, C)
    mergeSort(S2, C)
    S ← merge(S1, S2)
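A runnable Python version of this pseudocode (a sketch; the comparator defaults to ≤, and the helper name merge is my choice):

def merge_sort(S, C=lambda x, y: x <= y):
    """Merge-sort following the pseudocode above; C is the comparator."""
    if len(S) <= 1:                      # base case: constant-size subproblem
        return S
    mid = len(S) // 2
    S1 = merge_sort(S[:mid], C)          # recur: sort each half
    S2 = merge_sort(S[mid:], C)
    return merge(S1, S2, C)              # conquer: merge the sorted halves

def merge(S1, S2, C):
    """Merge two sorted sequences into one sorted sequence."""
    out, i, j = [], 0, 0
    while i < len(S1) and j < len(S2):
        if C(S1[i], S2[j]):
            out.append(S1[i]); i += 1
        else:
            out.append(S2[j]); j += 1
    out.extend(S1[i:])
    out.extend(S2[j:])
    return out

print(merge_sort([7, 2, 9, 4]))          # [2, 4, 7, 9]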
Problem: Big Integer Multiplication
Problem: Given two n-bit integers, I and J, that can't be handled by the hardware of a machine, devise an algorithm with good complexity that multiplies these two numbers.
Applications: encryption schemes used in security work.
Can we do better?
We will assume n is a power of 2; otherwise, pad with zeroes.
Note: This provides an alternate way of doing what we did in the first homework assignment, which tacitly assumed the hardware could handle the products.
An Improved Integer Multiplication Algorithm
Algorithm: Multiply two n-bit integers I and J.
• Divide step: Split I and J into their high-order and low-order bits:
  I = I_h · 2^(n/2) + I_l
  J = J_h · 2^(n/2) + J_l
• Observe that there is a different way to multiply the parts:
  I · J = I_h J_h 2^n + [(I_h - I_l)(J_l - J_h) + I_h J_h + I_l J_l] 2^(n/2) + I_l J_l
        = I_h J_h 2^n + [(I_h J_l - I_l J_l - I_h J_h + I_l J_h) + I_h J_h + I_l J_l] 2^(n/2) + I_l J_l
        = I_h J_h 2^n + (I_h J_l + I_l J_h) 2^(n/2) + I_l J_l
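Here is a Python sketch of this three-product recursion (Karatsuba's algorithm); the function name and the cutoff of 16 are illustrative choices, not from the slides:

def karatsuba(i, j):
    """Multiply via I*J = IhJh*2^n + ((Ih-Il)(Jl-Jh) + IhJh + IlJl)*2^(n/2) + IlJl."""
    if i < 16 or j < 16:                 # base case: let the hardware multiply
        return i * j
    half = max(i.bit_length(), j.bit_length()) // 2
    ih, il = i >> half, i & ((1 << half) - 1)   # split into high/low bits
    jh, jl = j >> half, j & ((1 << half) - 1)
    hh = karatsuba(ih, jh)               # product 1: Ih * Jh
    ll = karatsuba(il, jl)               # product 2: Il * Jl
    mid = karatsuba(ih - il, jl - jh)    # product 3: (Ih - Il)(Jl - Jh)
    # mid + hh + ll = Ih*Jl + Il*Jh, the middle coefficient
    return (hh << (2 * half)) + ((mid + hh + ll) << half) + ll

print(karatsuba(1234567, 7654321) == 1234567 * 7654321)   # True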
An Improved Integer Multiplication Algorithm
• The recursion on the last slide requires 3 products of n/2 bits each plus O(n) additional work.
• So T(n) = 3T(n/2) + n, which implies T(n) is Θ(n^(log₂ 3)) by the Master Theorem.
• Thus, T(n) is Θ(n^1.585).
• That's where we obtained the complexity for the algorithm introduced in the Introduction slides.
MATRIX OPERATIONS: Example
Matrix-matrix multiplication: Given A is n × r and B is r × m, then C = AB = [c(i,j)], where

  c[i,j] = Σ_{k=1}^{r} a[i,k] · b[k,j]

For example:

  [ 3 -1 ]                     [ 0  5  3 -1 ]
  [ 1  2 ] × [ 1 2 1 -1 ]   =  [ 7  4  1 -5 ]
  [ 0  1 ]   [ 3 1 0 -2 ]      [ 3  1  0 -2 ]

For example, c[2,3] = a[2,1]·b[1,3] + a[2,2]·b[2,3] = 1·1 + 2·0 = 1.

Note that the following is undefined, because a 2 × 4 matrix can't be multiplied by a 3 × 2 matrix (a 4 × m matrix is required):

  [ 1 2 1 -1 ]   [ 3 -1 ]
  [ 3 1 0 -2 ] × [ 1  2 ]
                 [ 0  1 ]
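A direct Python rendering of this definition (names are illustrative); it uses Θ(n·r·m) scalar multiplications:

def mat_mul(A, B):
    """Classical matrix product: c[i][j] = sum over k of a[i][k] * b[k][j].

    A is n x r and B is r x m, given as lists of lists.
    """
    n, r, m = len(A), len(B), len(B[0])
    assert all(len(row) == r for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(m)]
            for i in range(n)]

# The 3 x 2 times 2 x 4 example above:
A = [[3, -1], [1, 2], [0, 1]]
B = [[1, 2, 1, -1], [3, 1, 0, -2]]
print(mat_mul(A, B))   # [[0, 5, 3, -1], [7, 4, 1, -5], [3, 1, 0, -2]]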
Matrix Multiplication
In trying to improve on this, a first pass would look at the following:
Assume n is a power of 2 and view the array as made up of submatrices, e.g.

  [ 1 2 | 1 -1 ]
  [ 3 1 | 0 -2 ]
  [-----+------]
  [ 5 6 | 1  3 ]
  [ 4 1 | 8 -9 ]

These can be handled recursively by viewing this as a 4 × 4 matrix as shown, and then breaking the 4 × 4 matrix into 2 × 2 matrices.
Matrix Multiplication
Thus,

  [ I J ]   [ A B ] [ E F ]
  [ K L ] = [ C D ] [ G H ]

where
  I = AE + BG
  J = AF + BH
  K = CE + DG
  L = CF + DH

Then, use this idea to divide and conquer.
Matrix Multiplication
With this approach,
  T(n) = 8T(n/2) + bn^2
Unfortunately, the Master Theorem only gives us that T(n) is O(n^3), which isn't any improvement.
However, there is an algorithm called Strassen's Algorithm which is able to handle the multiplication with just seven recursive calls. The technique can be verified (see pgs. 272-273), although the algebra is messy.
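Below is a sketch of Strassen's seven-product recursion in Python, assuming NumPy and n a power of 2 (the block formulas are the standard published ones; the function and variable names are mine). With seven subproblems instead of eight, T(n) = 7T(n/2) + O(n^2), which is Θ(n^(log₂ 7)) ≈ Θ(n^2.81).

import numpy as np

def strassen(A, B):
    """Strassen's algorithm for n x n matrices, n a power of 2."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2                           # split each matrix into 2 x 2 blocks
    a, b, c, d = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    e, f, g, hh = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    m1 = strassen(a + d, e + hh)         # the seven recursive products
    m2 = strassen(c + d, e)
    m3 = strassen(a, f - hh)
    m4 = strassen(d, g - e)
    m5 = strassen(a + b, hh)
    m6 = strassen(c - a, e + f)
    m7 = strassen(b - d, g + hh)
    return np.block([[m1 + m4 - m5 + m7, m3 + m5],     # I, J
                     [m2 + m4, m1 - m2 + m3 + m6]])    # K, L

A = np.random.randint(-9, 9, (4, 4))
B = np.random.randint(-9, 9, (4, 4))
print(np.array_equal(strassen(A, B), A @ B))           # True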