Algorithms and Complexity


Introduction to Algorithms
Theerayod Wiangtong
EE Dept., MUT
Contents

• Introduction
• Divide and conquer, recurrences
• Sorting
• Trees: binary tree
• Dynamic programming
• Graph algorithms

General

• Algorithms are first solved on paper and later keyed in on the computer.
• To solve problems we have procedures, recipes, process descriptions – in one word: algorithms.
• The most important thing is to be
  • Simple
  • Precise
  • Efficient

History

• First algorithm: the Euclidean algorithm for finding the greatest common divisor, 400–300 B.C.
• 19th century – Charles Babbage, Ada Lovelace.
• 20th century – John von Neumann, Alan Turing.

Data Structures and Algorithms

• Data structure
  • Organization of data to solve the problem at hand.
• Algorithm
  • Outline, the essence of a computational procedure, step-by-step instructions.
• Program
  • Implementation of an algorithm in some programming language.

Overall Picture

• Using a computer to help solve problems:
  • Precisely specify the problem.
  • Design programs: architecture, algorithms.
  • Write programs.
  • Verify (test) programs.
• Data structure and algorithm design goals: correctness, efficiency.
• Implementation goals: reusability, robustness, adaptability.

Algorithmic Solution

Input instance, adhering to the specification → Algorithm → Output, related to the input as required

• The algorithm describes actions on the input instance.
• There may be many correct algorithms for the same algorithmic problem.

Definition of an Algorithm

• An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.
• Properties:
  • Precision
  • Determinism
  • Finiteness
  • Efficiency
  • Correctness
  • Generality

How to Develop an Algorithm

• Precisely define the problem. Precisely specify the input and output. Consider all cases.
• Come up with a simple plan to solve the problem at hand.
  • The plan is language independent.
  • The precise problem specification influences the plan.
• Turn the plan into an implementation.
  • The problem representation (data structure) influences the implementation.

Example: Searching Solutions

• Metaphor: shopping behavior when buying a beer:
  • search1: scan products; stop as soon as a beer is found and go to the exit.
  • search2: scan products until you get to the exit; if during the process you find a beer, put it into the basket.
  • search3: scan products; stop as soon as a beer is found and jump out of the window.

Example 1: Searching

INPUT: A – a sorted sequence (increasing order) of n (n > 0) numbers a1, a2, a3, …, an; q – a single number.
OUTPUT: j – the index of number q in sequence A, or NIL.

Examples:
  2 5 4 10 11; q = 5  →  j = 2
  2 5 4 10 11; q = 9  →  NIL

INPUT: precise specifications of what the algorithm gets as an input.
OUTPUT: precise specifications of what the algorithm produces as an output, and how this relates to the input. The handling of special cases of the input should be described.

Searching#1

search1
INPUT: A[1..n] – sorted array of integers, q – an integer.
OUTPUT: index j such that A[j] = q; NIL if ∀j (1 ≤ j ≤ n): A[j] ≠ q

j := 1
while j ≤ n and A[j] ≠ q do j++
if j ≤ n then return j
else return NIL

• The code is written in an unambiguous pseudocode, and the INPUT and OUTPUT of the algorithm are specified.
• The algorithm uses a brute-force technique, i.e., it scans the input sequentially.

Searching, C solution

#include <stdio.h>

#define n 5
int j, q;
int a[n] = { 11, 1, 4, -3, 22 };

int main() {
    j = 0; q = -2;
    while (j < n && a[j] != q) { j++; }
    if (j < n) { printf("%d\n", j); }
    else { printf("NIL\n"); }
    return 0;
}
// compilation: gcc -o search search.c
// execution: ./search

Searching#2

• Run through the array and set a pointer if the value is found.

search2
INPUT: A[1..n] – sorted array of integers, q – an integer.
OUTPUT: index j such that A[j] = q; NIL if ∀j (1 ≤ j ≤ n): A[j] ≠ q

ptr := NIL
for j := 1 to n do
  if A[j] = q then ptr := j
return ptr

Searching#3

• Run through the array and return the index of the value in the array.

search3
INPUT: A[1..n] – sorted array of integers, q – an integer.
OUTPUT: index j such that A[j] = q; NIL if ∀j (1 ≤ j ≤ n): A[j] ≠ q

for j := 1 to n do
  if A[j] = q then return j
return NIL

Efficiency Comparisons

• search1 and search3 return the same result (the index of the first occurrence of the search value).
• search2 returns the index of the last occurrence of the search value.
• search3 is the most efficient: it does not always run the loop to completion (as a general rule, it is good to avoid unnecessary iterations).

Sorting

• Sorting is a classical and important algorithmic problem.
• We look at sorting arrays (in contrast to files, which restrict random access).
• A key constraint is the efficient management of space
  • In-place sorting algorithms
• The efficiency comparison is based on the number of comparisons (C) and the number of movements (M).

Sorting

• Simple sorting methods use roughly n² comparisons
  • Insertion sort
  • Selection sort
  • Bubble sort
• Fast sorting methods use roughly n log n comparisons
  • Merge sort
  • Heap sort
  • Quicksort

Example 2: Sorting

INPUT: a sequence of n numbers a1, a2, a3, …, an
OUTPUT: a permutation b1, b2, b3, …, bn of the input sequence

Example:  2 5 4 10 7  → Sort →  2 4 5 7 10

Correctness (requirements for the output):
For any given input the algorithm halts with the output:
• b1 ≤ b2 ≤ b3 ≤ … ≤ bn
• b1, b2, b3, …, bn is a permutation of a1, a2, a3, …, an

Insertion Sort

Strategy
• Start with one sorted card.
• Insert an unsorted card at the correct position in the sorted part.
• Continue until all unsorted cards are inserted/sorted.

[Figure: the sorted part A[1..j-1] grows as each A[j] is inserted; trace of the passes on the array 44 55 12 42 94 18 06 67.]

Insertion Sort/2

INPUT: A[1..n] – an array of integers
OUTPUT: a permutation of A such that A[1] ≤ A[2] ≤ … ≤ A[n]

for j := 2 to n do
  key := A[j]
  i := j-1
  while i > 0 and A[i] > key do
    A[i+1] := A[i]; i := i-1
  A[i+1] := key

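The pseudocode maps directly to C; below is a sketch using zero-based indexing (the test array is the one from the trace above):

#include <stdio.h>

/* Insertion sort, zero-based translation of the pseudocode above. */
void insertion_sort(int a[], int n) {
    for (int j = 1; j < n; j++) {
        int key = a[j];
        int i = j - 1;
        while (i >= 0 && a[i] > key) {  /* shift larger elements right */
            a[i + 1] = a[i];
            i--;
        }
        a[i + 1] = key;                 /* drop the key into its slot */
    }
}

int main(void) {
    int a[] = { 44, 55, 12, 42, 94, 18, 6, 67 };
    int n = sizeof a / sizeof a[0];
    insertion_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
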
Selection Sort

Strategy
• Start empty handed.
• Enlarge the sorted part by switching the first element of the unsorted part with the smallest element of the unsorted part.
• Continue until the unsorted part consists of one element only.

[Figure: the sorted part A[1..j-1] grows by one swap per pass; trace of the passes on the array 44 55 12 42 94 18 06 67.]

Selection Sort/2

INPUT: A[1..n] – an array of integers
OUTPUT: a permutation of A such that A[1] ≤ A[2] ≤ … ≤ A[n]

for j := 1 to n-1 do
  key := A[j]; ptr := j
  for i := j+1 to n do
    if A[i] < key then ptr := i; key := A[i]
  A[ptr] := A[j]; A[j] := key

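The same pseudocode in C (a sketch; zero-based indices, test array as above):

#include <stdio.h>

/* Selection sort: swap A[j] with the minimum of the unsorted part. */
void selection_sort(int a[], int n) {
    for (int j = 0; j < n - 1; j++) {
        int ptr = j;                      /* index of smallest seen so far */
        for (int i = j + 1; i < n; i++)
            if (a[i] < a[ptr]) ptr = i;
        int tmp = a[ptr]; a[ptr] = a[j]; a[j] = tmp;
    }
}

int main(void) {
    int a[] = { 44, 55, 12, 42, 94, 18, 6, 67 };
    int n = sizeof a / sizeof a[0];
    selection_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
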
Bubble Sort

Strategy
• Start from the back and compare pairs of adjacent elements.
• Switch the elements if the larger comes before the smaller.
• In each step the smallest element of the unsorted part is moved to the beginning of the unsorted part and the sorted part grows by one.

[Figure: trace of the passes on the array 44 55 12 42 94 18 06 67.]

Bubble Sort/2

INPUT: A[1..n] – an array of integers
OUTPUT: a permutation of A such that A[1] ≤ A[2] ≤ … ≤ A[n]

for j := 2 to n do
  for i := n downto j do
    if A[i-1] > A[i] then
      key := A[i-1]; A[i-1] := A[i]; A[i] := key

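In C (a sketch; the backwards inner loop bubbles the smallest remaining element to the front):

#include <stdio.h>

/* Bubble sort: sweep from the back, swapping out-of-order neighbors. */
void bubble_sort(int a[], int n) {
    for (int j = 1; j < n; j++)
        for (int i = n - 1; i >= j; i--)
            if (a[i - 1] > a[i]) {       /* larger before smaller: swap */
                int tmp = a[i - 1]; a[i - 1] = a[i]; a[i] = tmp;
            }
}

int main(void) {
    int a[] = { 44, 55, 12, 42, 94, 18, 6, 67 };
    int n = sizeof a / sizeof a[0];
    bubble_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
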
Divide and Conquer

• Principle: If the problem size is small enough to solve it trivially, solve it. Else:
  • Divide: Decompose the problem into two or more disjoint subproblems.
  • Conquer: Use divide and conquer recursively to solve the subproblems.
  • Combine: Take the solutions to the subproblems and combine them into a solution for the original problem.

Merge Sort

• Sort an array by
  • Dividing it into two arrays.
  • Sorting each of the arrays.
  • Merging the two arrays.

[Figure: 85 24 63 45 17 31 96 50 is split into 85 24 63 45 and 17 31 96 50; the halves sort to 24 45 63 85 and 17 31 50 96 and merge to 17 24 31 45 50 63 85 96.]

Merge Sort Algorithm

• Divide: If S has at least two elements, put them into sequences S1 and S2. S1 contains the first ⌈n/2⌉ elements and S2 contains the remaining ⌊n/2⌋ elements.
• Conquer: Sort sequences S1 and S2 using merge sort.
• Combine: Put the elements back into S by merging the sorted sequences S1 and S2 into one sorted sequence.

Merge Sort: Algorithm

MergeSort(l, r)
  if l < r then
    m := ⌊(l+r)/2⌋
    MergeSort(l, m)
    MergeSort(m+1, r)
    Merge(l, m, r)

Merge(l, m, r)
  Take the smaller of the two first elements of sequences A[l..m] and A[m+1..r] and put it into the resulting sequence. Repeat this until both sequences are empty. Copy the resulting sequence into A[l..r].

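A compact C sketch of MergeSort and Merge as described above; Merge uses an auxiliary buffer, and the test array is the one from the picture:

#include <stdio.h>
#include <string.h>

#define N 8
static int a[N]  = { 85, 24, 63, 45, 17, 31, 96, 50 };
static int buf[N];

/* Merge the sorted runs a[l..m] and a[m+1..r] back into a[l..r]. */
static void merge(int l, int m, int r) {
    int i = l, j = m + 1, k = l;
    while (i <= m && j <= r)
        buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* take smaller head */
    while (i <= m) buf[k++] = a[i++];                 /* copy leftovers */
    while (j <= r) buf[k++] = a[j++];
    memcpy(&a[l], &buf[l], (r - l + 1) * sizeof(int));
}

static void merge_sort(int l, int r) {
    if (l < r) {
        int m = (l + r) / 2;     /* divide  */
        merge_sort(l, m);        /* conquer */
        merge_sort(m + 1, r);
        merge(l, m, r);          /* combine */
    }
}

int main(void) {
    merge_sort(0, N - 1);
    for (int i = 0; i < N; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
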
MergeSort Example

[Figure: 22 steps showing the recursive splits and merges of MergeSort on an example array.]

Merge Sort Summarized

• To sort n numbers
  • if n = 1, done.
  • recursively sort 2 lists of ⌈n/2⌉ and ⌊n/2⌋ elements, respectively.
  • merge the 2 sorted lists of length n/2 in time Θ(n).
• Strategy
  • break problem into similar (smaller) subproblems
  • recursively solve subproblems
  • combine solutions to answer

Recurrences

• Running times of algorithms with recursive calls can be described using recurrences.
• A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs.
• For divide and conquer algorithms:

  T(n) = solving_trivial_problem                                             if n = 1
  T(n) = num_pieces · T(n / subproblem_size_factor) + dividing + combining   if n > 1

• Example: Merge Sort

  T(n) = Θ(1)             if n = 1
  T(n) = 2T(n/2) + Θ(n)   if n > 1

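Unrolling the merge sort recurrence shows where the Θ(n log n) bound comes from (a sketch, assuming n is a power of 2 and writing the Θ(n) term as cn):

\begin{aligned}
T(n) &= 2T(n/2) + cn = 4T(n/4) + 2cn = \dots \\
     &= 2^k\,T(n/2^k) + kcn \\
     &= n\,T(1) + cn\log_2 n \qquad (\text{at } k = \log_2 n) \\
     &= \Theta(n \log n)
\end{aligned}
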
Binary Search

• Find a number in a sorted array:
  • Trivial if the array contains one element.
  • Else divide into two equal halves and solve each half.
  • Combine the results.

INPUT: A[1..n] – a sorted (non-decreasing) array of integers, q – an integer.
OUTPUT: an index j such that A[j] = q; NIL if ∀j (1 ≤ j ≤ n): A[j] ≠ q

BinarySearchRec1(A, l, r, q):
  if l = r then
    if A[l] = q then return l else return NIL
  m := ⌊(l+r)/2⌋
  ret := BinarySearchRec1(A, l, m, q)
  if ret = NIL then
    return BinarySearchRec1(A, m+1, r, q)
  else return ret

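A C sketch of this first recursive version (the test array is my own example; comparing q with A[m] first would let us drop one of the two recursive calls and reach the O(log n) bound discussed later):

#include <stdio.h>

#define NIL (-1)

/* Search the left half; only if that fails, the right half. */
int binary_search_rec1(const int a[], int l, int r, int q) {
    if (l == r)
        return (a[l] == q) ? l : NIL;
    int m = (l + r) / 2;
    int ret = binary_search_rec1(a, l, m, q);
    if (ret == NIL)
        return binary_search_rec1(a, m + 1, r, q);
    return ret;
}

int main(void) {
    int a[] = { 2, 4, 5, 10, 11 };
    printf("%d\n", binary_search_rec1(a, 0, 4, 5));   /* prints 2  */
    printf("%d\n", binary_search_rec1(a, 0, 4, 9));   /* prints -1 */
    return 0;
}
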
Finding Min and Max

• Given an unsorted array, find a minimum and a maximum element in the array.

INPUT: A[l..r] – an unsorted array of integers, l ≤ r.
OUTPUT: (min, max) such that ∀j (l ≤ j ≤ r): A[j] ≥ min and A[j] ≤ max

MinMax(A, l, r):
  if l = r then return (A[l], A[r])                    // trivial case
  m := ⌊(l+r)/2⌋                                       // divide
  (minl, maxl) := MinMax(A, l, m)                      // conquer
  (minr, maxr) := MinMax(A, m+1, r)
  if minl < minr then min := minl else min := minr     // combine
  if maxl > maxr then max := maxl else max := maxr
  return (min, max)

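In C, returning the pair through out-parameters (a sketch; test array as in the sorting slides):

#include <stdio.h>

/* Divide-and-conquer min/max on a[l..r], following the pseudocode. */
static void min_max(const int a[], int l, int r, int *min, int *max) {
    if (l == r) { *min = *max = a[l]; return; }        /* trivial case */
    int m = (l + r) / 2;                               /* divide */
    int minl, maxl, minr, maxr;
    min_max(a, l, m, &minl, &maxl);                    /* conquer */
    min_max(a, m + 1, r, &minr, &maxr);
    *min = (minl < minr) ? minl : minr;                /* combine */
    *max = (maxl > maxr) ? maxl : maxr;
}

int main(void) {
    int a[] = { 44, 55, 12, 42, 94, 18, 6, 67 };
    int min, max;
    min_max(a, 0, 7, &min, &max);
    printf("min=%d max=%d\n", min, max);   /* min=6 max=94 */
    return 0;
}
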
Binary Trees

• Each node may have a left and right child.
  • The left child of 7 is 1
  • The right child of 7 is 8
  • 3 has no left child
  • 6 has no children
• Each node has at most one parent.
  • 1 is the parent of 4
• The root has no parent.
  • 9 is the root
• A leaf has no children.
  • 6, 4 and 8 are leaves

[Figure: binary tree with root 9 and nodes 3, 7, 6, 8, 1, 4.]

Binary Trees/2

• The depth (or level) of a node x is the length of the path from the root to x.
  • The depth of 1 is 2
  • The depth of 9 is 0
• The height of a node x is the length of the longest path from x to a leaf.
  • The height of 7 is 2
• The height of a tree is the height of its root.
  • The height of the tree is 3

[Figure: the same tree with root 9.]

Binary Trees/3

• The right subtree of a node x is the tree rooted at the right child of x.
  • The right subtree of 9 is the tree shown in blue.
• The left subtree of a node x is the tree rooted at the left child of x.
  • The left subtree of 9 is the tree shown in red.

[Figure: the same tree with root 9, subtrees highlighted.]

Complete Binary Trees

• A complete binary tree is a binary tree where
  • all leaves have the same depth.
  • all internal (non-leaf) nodes have two children.
• A nearly complete binary tree is a binary tree where
  • the depths of any two leaves differ by at most 1.
  • all leaves with the maximal depth are as far left as possible.

Heaps

• A binary tree is a binary heap iff
  • it is a nearly complete binary tree
  • each node's weight is greater than or equal to all its children's
• The properties of a binary heap allow
  • efficient storage as an array (because it is a nearly complete binary tree)
  • fast sorting (because of the organization of the values)

Heaps/2

Parent(i)
  return ⌊i/2⌋
Left(i)
  return 2i
Right(i)
  return 2i+1

Heap property: A[Parent(i)] ≥ A[i]

[Figure: an example heap drawn as a tree and stored as an array A[1..10], with index levels marked.]

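The index arithmetic maps directly to C; a small sketch (the example array is my own) that checks the heap property over a 1-based array:

#include <stdio.h>

/* 1-based array layout of a binary heap, as on the slide. */
static int parent(int i) { return i / 2; }   /* integer division = floor */
static int left(int i)   { return 2 * i; }
static int right(int i)  { return 2 * i + 1; }

/* Verify the heap property A[parent(i)] >= A[i] for i = 2..n. */
static int is_heap(const int a[], int n) {
    for (int i = 2; i <= n; i++)
        if (a[parent(i)] < a[i]) return 0;
    return 1;
}

int main(void) {
    /* a[0] is unused so indices match the 1-based formulas. */
    int a[] = { 0, 16, 15, 10, 8, 7, 9, 1, 2, 3, 4 };
    printf("left(2)=%d right(2)=%d\n", left(2), right(2));  /* 4 and 5 */
    printf("is_heap: %d\n", is_heap(a, 10));                /* prints 1 */
    return 0;
}
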
Heap Sort

• Heap sort uses a heap data structure to improve selection sort and make the running time asymptotically optimal.
• Running time is O(n log n) – like merge sort, but unlike selection, insertion, or bubble sorts.
• Sorts in place – like insertion, selection or bubble sorts, but unlike merge sort.
• The heap data structure is used for other things than sorting.

Quick Sort

• Characteristics
  • Very practical, average sort performance O(n log n) (with small constant factors), but worst case O(n²).

Quick Sort – the Principle

• To understand quick sort, let's look at a high-level description of the algorithm.
• A divide-and-conquer algorithm
  • Divide: partition the array into 2 subarrays such that elements in the lower part ≤ elements in the higher part.
  • Conquer: recursively sort the 2 subarrays.
  • Combine: trivial, since sorting is done in place.

Partitioning

• Linear time partitioning procedure

Partition(A, l, r)
01 x := A[l]
02 i := l-1
03 j := r+1
04 while TRUE
05   repeat j := j-1
06   until A[j] ≤ x
07   repeat i := i+1
08   until A[i] ≥ x
09   if i < j
10   then exchange A[i] ↔ A[j]
11   else return j

[Figure: trace of Partition with pivot x = 10 on an example array; elements ≤ 10 end up in the left part and elements ≥ 10 in the right part, with i and j scanning towards each other.]

Quick Sort Algorithm

• Initial call: Quicksort(A, 1, n)

Quicksort(A, l, r)
01 if l < r
02   m := Partition(A, l, r)
03   Quicksort(A, l, m)
04   Quicksort(A, m+1, r)

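A C sketch of this Hoare-style Partition and Quicksort (note the recursion uses m, not m-1, precisely because Partition returns the boundary j rather than a final pivot position):

#include <stdio.h>

/* Split a[l..r] around pivot x = a[l] so that a[l..j] <= x <= a[j+1..r];
   return the boundary j. */
static int partition(int a[], int l, int r) {
    int x = a[l];
    int i = l - 1, j = r + 1;
    for (;;) {
        do { j--; } while (a[j] > x);
        do { i++; } while (a[i] < x);
        if (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
        else return j;
    }
}

static void quicksort(int a[], int l, int r) {
    if (l < r) {
        int m = partition(a, l, r);
        quicksort(a, l, m);
        quicksort(a, m + 1, r);
    }
}

int main(void) {
    int a[] = { 44, 55, 12, 42, 94, 18, 6, 67 };
    quicksort(a, 0, 7);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
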
Summary: Sorting Algorithms

• Nearly complete binary trees
• Heap data structure
• Heapsort
  • based on heaps
  • worst case is n log n
• Quicksort:
  • partition based sort algorithm
  • popular algorithm
  • very fast on average
  • worst case performance is quadratic

Summary/2

• Comparison of sorting methods.
• Absolute values are not important; relate values to each other.
• Relate values to the complexity (n log n, n²).
• Running time in seconds, n = 2048:

              ordered   random   inverse
  Insertion     0.22     50.74    103.8
  Selection    58.18     58.34     73.46
  Bubble       80.18    128.84    178.66
  Heap          2.32      2.22      2.12
  Quick         0.72      1.22      0.76

Binary Search

• Narrow down the search range in stages
  • findElement(22)

[Figure: successive halvings of the search range while looking for 22.]

Run Time of Binary Search

• The range of candidate items to be searched is halved after comparing the key with the middle element.
• Binary search runs in O(log n) time.
• What about insertion and deletion?
  • search: O(log n)
  • insert, delete: O(n)
  • min, max, predecessor, successor: O(1)
• The idea of a binary search can be extended to dynamic data structures → binary trees.

Binary Search Trees

• A binary search tree is a binary tree T with the following properties:
  • each internal node v stores an item (k, e) of a dictionary
  • keys stored at nodes in the left subtree of v are less than or equal to k
  • keys stored at nodes in the right subtree of v are greater than or equal to k
• Example sequence: 2, 3, 5, 5, 7, 8

Divide-and-Conquer Example

• Divide-and-conquer: a natural approach for algorithms on trees.
• Example: find the height of the tree:
  • If the tree is NULL, the height is -1.
  • Else the height is 1 plus the maximum of the heights of the two children.

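In C (a sketch; the tree built in main is my own example, shaped like the earlier slide's tree with root 9):

#include <stdio.h>
#include <stdlib.h>

typedef struct Node { int key; struct Node *lft, *rgt; } Node;

/* Height by divide and conquer: a NULL tree has height -1,
   otherwise 1 + the larger of the children's heights. */
int height(const Node *t) {
    if (t == NULL) return -1;
    int hl = height(t->lft), hr = height(t->rgt);
    return 1 + (hl > hr ? hl : hr);
}

static Node *node(int key, Node *l, Node *r) {
    Node *n = malloc(sizeof *n);
    n->key = key; n->lft = l; n->rgt = r;
    return n;
}

int main(void) {
    Node *t = node(9, node(3, NULL, NULL),
                      node(7, node(1, NULL, node(4, NULL, NULL)),
                              node(8, NULL, NULL)));
    printf("height = %d\n", height(t));   /* prints 3 */
    return 0;
}
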
Searching a BST

• To find an element with key k in a tree T
  • compare k with T.key
  • if k < T.key, search for k in T.left
  • otherwise, search for k in T.right

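A C sketch of the walk; the slide leaves termination implicit, so this version adds the NULL/equal checks, and the tree shape over the slide's keys 2, 3, 5, 5, 7, 8 is my own choice:

#include <stdio.h>
#include <stdlib.h>

typedef struct Node { int key; struct Node *lft, *rgt; } Node;

/* BST search: go left or right depending on the comparison. */
Node *search(Node *t, int k) {
    if (t == NULL || t->key == k) return t;
    return (k < t->key) ? search(t->lft, k) : search(t->rgt, k);
}

static Node *node(int key, Node *l, Node *r) {
    Node *n = malloc(sizeof *n);
    n->key = key; n->lft = l; n->rgt = r;
    return n;
}

int main(void) {
    Node *t = node(5, node(3, node(2, NULL, NULL), NULL),
                      node(7, node(5, NULL, NULL), node(8, NULL, NULL)));
    printf("%s\n", search(t, 7) ? "found" : "NIL");   /* found */
    printf("%s\n", search(t, 6) ? "found" : "NIL");   /* NIL   */
    return 0;
}
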
Search Examples

• Search(T, 11)

[Figure: search path for key 11.]

Search Examples (2)

• Search(T, 6)

[Figure: search path for key 6.]

Balanced Binary Search Trees

• Problem: execution time for tree operations is Θ(h), which in the worst case is Θ(n).
• Solution: balanced search trees guarantee small height h = O(log n).

Dynamic Programming

• Fibonacci numbers
• Optimization problems
• Matrix multiplication optimization
• Principles of dynamic programming
• Longest Common Subsequence

Algorithm design techniques

• Algorithm design techniques so far:
  • Iterative (brute-force) algorithms
    • For example, insertion sort
  • Algorithms that use efficient data structures
    • For example, heap sort
  • Divide-and-conquer algorithms
    • Binary search, merge sort, quick sort

Divide and Conquer/2

• For example, MergeSort
• The subproblems are independent and non-overlapping

Merge-Sort(A, l, r)
  if l < r then
    m := ⌊(l+r)/2⌋
    Merge-Sort(A, l, m)
    Merge-Sort(A, m+1, r)
    Merge(A, l, m, r)

Fibonacci Numbers

• Leonardo Fibonacci (1202):
  • A rabbit starts producing offspring during the second year after its birth and produces one child each generation
  • How many rabbits will there be after n generations?

F(1)=1  F(2)=1  F(3)=2  F(4)=3  F(5)=5  F(6)=8

Fibonacci Numbers/2

• F(n) = F(n-1) + F(n-2)
• F(0) = 0, F(1) = 1
  • 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

FibonacciR(n)
01 if n ≤ 1 then return n
02 else return FibonacciR(n-1) + FibonacciR(n-2)

• The straightforward recursive procedure is slow!

Fibonacci Numbers/3

[Figure: recursion tree for F(6) = 8, expanding down to F(1) and F(0) leaves; the same subtrees appear repeatedly.]

• We keep calculating the same value over and over!
• Subproblems are overlapping – they share subsubproblems

Fibonacci Numbers/4

• How many summations are there, S(n)?
  • S(n) = S(n-1) + S(n-2) + 1
  • S(n) ≥ 2S(n-2) + 1 and S(1) = S(0) = 0
  • Solving the recurrence we get S(n) ≥ 2^(n/2) - 1 ≈ 1.4^n
• Running time is exponential!

Fibonacci Numbers/5

• We can calculate F(n) in linear time by remembering solutions to the solved subproblems (= dynamic programming).
• Compute the solution in a bottom-up fashion.
• Trade space for time!

Fibonacci(n)
01 F[0] := 0
02 F[1] := 1
03 for i := 2 to n do
04   F[i] := F[i-1] + F[i-2]
05 return F[n]

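In C, keeping only the last two values of the table; this is the space reduction ("forgetting" sub-results that are no longer needed) discussed on a later slide:

#include <stdio.h>

/* Bottom-up Fibonacci in linear time and O(1) space. */
unsigned long fib(int n) {
    if (n <= 1) return n;
    unsigned long prev = 0, cur = 1;       /* F(0), F(1) */
    for (int i = 2; i <= n; i++) {
        unsigned long next = cur + prev;   /* F(i) = F(i-1) + F(i-2) */
        prev = cur;
        cur = next;
    }
    return cur;
}

int main(void) {
    for (int i = 0; i <= 10; i++) printf("%lu ", fib(i));
    printf("\n");   /* 0 1 1 2 3 5 8 13 21 34 55 */
    return 0;
}
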
History

• Dynamic programming
  • Invented in the 1950s by Richard Bellman as a general method for optimizing multistage decision processes
  • The term "programming" refers to a tabular method.
  • Often used for optimization problems.

Multiplying Matrices

• Two matrices, A – an n×m matrix and B – an m×k matrix, can be multiplied to get C with dimensions n×k, using nmk scalar multiplications:

  c_{i,j} = Σ_{l=1}^{m} a_{i,l} · b_{l,j}

• Problem: compute a product of many matrices efficiently
• Matrix multiplication is associative
  • (AB)C = A(BC)

Multiplying Matrices/2

• Consider ABCD, where
  • A is 30×1, B is 1×40, C is 40×10, D is 10×25
• Costs:
  • ((AB)C)D : 1200 + 12000 + 7500 = 20700
  • (AB)(CD) : 1200 + 10000 + 30000 = 41200
  • A((BC)D) : 400 + 250 + 750 = 1400
• We need to optimally parenthesize A1 × A2 × … × An, where Ai is a d_{i-1} × d_i matrix

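A quick check of the three costs in C, using the n·m·k rule and the dimensions from the slide:

#include <stdio.h>

/* Cost of multiplying an n x m matrix by an m x k matrix: n*m*k. */
static long cost(long n, long m, long k) { return n * m * k; }

int main(void) {
    /* A: 30x1, B: 1x40, C: 40x10, D: 10x25 */
    long c1 = cost(30,1,40) + cost(30,40,10) + cost(30,10,25); /* ((AB)C)D */
    long c2 = cost(30,1,40) + cost(40,10,25) + cost(30,40,25); /* (AB)(CD) */
    long c3 = cost(1,40,10) + cost(1,10,25) + cost(30,1,25);   /* A((BC)D) */
    printf("((AB)C)D : %ld\n", c1);   /* 20700 */
    printf("(AB)(CD) : %ld\n", c2);   /* 41200 */
    printf("A((BC)D) : %ld\n", c3);   /* 1400  */
    return 0;
}
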
Longest Common Subsequence

• Two text strings are given: X and Y
• There is a need to quantify how similar they are:
  • Comparing DNA sequences in studies of evolution of different species
  • Spell checkers
• One of the measures of similarity is the length of a Longest Common Subsequence (LCS)

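The slides only define the measure; one standard way to compute the LCS length is the classic table-filling DP, sketched here in C (the recurrence and the example strings are my own additions):

#include <stdio.h>
#include <string.h>

#define MAXN 64

/* L[i][j] = LCS length of prefixes x[0..i-1] and y[0..j-1]. */
int lcs_length(const char *x, const char *y) {
    int n = strlen(x), m = strlen(y);    /* both assumed <= MAXN */
    static int L[MAXN + 1][MAXN + 1];
    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            if (i == 0 || j == 0)      L[i][j] = 0;
            else if (x[i-1] == y[j-1]) L[i][j] = L[i-1][j-1] + 1;
            else L[i][j] = (L[i-1][j] > L[i][j-1]) ? L[i-1][j] : L[i][j-1];
        }
    return L[n][m];
}

int main(void) {
    printf("%d\n", lcs_length("GATTACA", "TACGATC"));
    return 0;
}
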
Dynamic Programming

In general, to apply dynamic programming, we have to address a number of issues:

• 1. Show optimal substructure – an optimal solution to the problem contains optimal solutions to sub-problems
  • Solution to a problem:
    • Making a choice out of a number of possibilities (look what possible choices there can be)
    • Solving one or more sub-problems that are the result of a choice (characterize the space of sub-problems)
  • Show that solutions to sub-problems must themselves be optimal for the whole solution to be optimal.

Dynamic Programming/2

• 2. Write a recursive solution for the value of an optimal solution
  • M_opt = min over all choices k { (combination of M_opt of all sub-problems resulting from choice k) + (the cost associated with making the choice k) }
  • Show that the number of different instances of sub-problems is bounded by a polynomial

Dynamic Programming/3

• 3. Compute the value of an optimal solution in a bottom-up fashion, so that you always have the necessary sub-results pre-computed (or use memoization)
  • Check if it is possible to reduce the space requirements by "forgetting" solutions to sub-problems that will not be used any more
• 4. Construct an optimal solution from the computed information (which records a sequence of choices made that lead to an optimal solution)

Graph Searching Algorithms

• Systematic search of every edge and vertex of the graph
• Graph G = (V, E) is either directed or undirected
• Applications
  • Graphics
  • Maze-solving
  • Mapping
  • Networks: routing, searching, clustering, etc.

Breadth-First Search

• A Breadth-First Search (BFS) traverses a connected component of an undirected graph, and in doing so defines a spanning tree.
• BFS in an undirected graph G is like wandering in a labyrinth with a string, exploring the neighborhood first.

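A C sketch of BFS driven by a FIFO queue. The edge list is my reconstruction of the example graph from the queue states on the next slide (an assumption about the exact edge set):

#include <stdio.h>

#define V 8   /* vertices r s t u v w x y -> 0..7 */
static const char *name = "rstuvwxy";

int main(void) {
    /* Assumed edges: r-s r-v s-w w-t w-x t-x t-u x-u x-y u-y */
    static const int edges[][2] = {
        {0,1},{0,4},{1,5},{5,2},{5,6},{2,6},{2,3},{6,3},{6,7},{3,7}
    };
    int adj[V][V] = {{0}}, dist[V], queue[V], head = 0, tail = 0;
    for (unsigned e = 0; e < sizeof edges / sizeof edges[0]; e++) {
        adj[edges[e][0]][edges[e][1]] = 1;
        adj[edges[e][1]][edges[e][0]] = 1;
    }
    for (int i = 0; i < V; i++) dist[i] = -1;   /* -1 stands for infinity */
    dist[1] = 0;                                /* start at s */
    queue[tail++] = 1;
    while (head < tail) {                       /* FIFO queue drives BFS */
        int u = queue[head++];
        for (int w = 0; w < V; w++)
            if (adj[u][w] && dist[w] == -1) {
                dist[w] = dist[u] + 1;          /* discovered one level deeper */
                queue[tail++] = w;
            }
    }
    for (int i = 0; i < V; i++) printf("%c=%d ", name[i], dist[i]);
    printf("\n");   /* expected: r=1 s=0 t=2 u=3 v=2 w=1 x=2 y=3 */
    return 0;
}
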
Breadth-First Search: Example

[Figure: BFS from s on a graph with vertices r, s, t, u (top row) and v, w, x, y (bottom row); each vertex is labeled with its distance from s when it is discovered.]

Queue states as the search proceeds:
  Q: s
  Q: w r
  Q: r t x
  Q: t x v
  Q: x v u
  Q: v u y
  Q: u y
  Q: y
  Q: Ø

Final distances: r=1, s=0, t=2, u=3, v=2, w=1, x=2, y=3

Depth-First Search

• A depth-first search (DFS) in an undirected graph G is like wandering in a labyrinth with a string and following one path to the end.
• We then backtrack by rolling up our string until we get back to a previously visited vertex v.
• v becomes our current vertex and we repeat the previous steps.

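A recursive C sketch over the same assumed edge set as the BFS sketch; the recursion follows one path to the end and the returns are the backtracking (the printed discovery order depends on neighbor ordering, so it need not match the slides exactly):

#include <stdio.h>

#define V 8   /* vertices r s t u v w x y -> 0..7 */
static const char *name = "rstuvwxy";
static int adj[V][V];
static int order[V], counter = 0;

/* Recursive DFS: go deeper while there is an unvisited neighbor. */
static void dfs(int u) {
    order[u] = counter++;               /* discovery order label */
    for (int w = 0; w < V; w++)
        if (adj[u][w] && order[w] == -1)
            dfs(w);                     /* returning here = backtracking */
}

int main(void) {
    static const int edges[][2] = {
        {0,1},{0,4},{1,5},{5,2},{5,6},{2,6},{2,3},{6,3},{6,7},{3,7}
    };
    for (unsigned e = 0; e < sizeof edges / sizeof edges[0]; e++)
        adj[edges[e][0]][edges[e][1]] = adj[edges[e][1]][edges[e][0]] = 1;
    for (int i = 0; i < V; i++) order[i] = -1;
    dfs(1);                             /* start at s */
    for (int i = 0; i < V; i++) printf("%c=%d ", name[i], order[i]);
    printf("\n");
    return 0;
}
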
Depth-First Search: Example

[Figure: DFS from s on the same graph; vertices are numbered as they are discovered along the current path, and the search backtracks when it runs out of unvisited neighbors. Final labels: r=1, s=0, t=2, u=3, v=2, w=1, x=2, y=3.]

Spanning Tree

• A spanning tree of G is a subgraph which
  • is a tree
  • contains all vertices of G
• How many edges are there in a spanning tree, if there are V vertices?

Minimum Spanning Trees

• Undirected, connected graph G = (V, E)
• Weight function W: E → ℝ (assigning cost or length or other values to edges)
• Spanning tree: tree that connects all vertices
• Minimum spanning tree (MST): spanning tree T that minimizes

  w(T) = Σ_{(u,v)∈T} w(u,v)

Idea for an Algorithm

• We have to make V-1 choices (edges of the MST) to arrive at the optimization goal.
• After each choice we have a sub-problem that is one vertex smaller than the original problem.
  • A dynamic programming algorithm would consider all possible choices (edges) at each vertex.
  • Goal: at each vertex, cheaply determine an edge that definitely belongs to an MST.

Greedy Choice

• Greedy choice property: a locally optimal (greedy) choice yields a globally optimal solution.
• Theorem
  • Let G = (V, E) and S ⊂ V
  • S is a cut of G (it splits G into parts S and V-S)
  • (u,v) is a light edge if it is a min-weight edge of G that connects S and V-S
  • Then (u,v) belongs to an MST T of G

Greedy Choice/2

• Proof
  • Suppose (u,v) is light but (u,v) ∉ any MST.
  • Look at the path from u to v in some MST T.
  • Let (x,y) be the first edge on the path from u to v in T that crosses from S to V-S. Swap (x,y) with (u,v) in T.
  • This improves the cost of T → contradiction (T is supposed to be an MST).

[Figure: cut (S, V-S) with edge (x,y) crossing it on the u–v path, and the light edge (u,v).]

Prim-Jarnik's Example

[Figure: weighted undirected graph on vertices A–I; the run starts with MST-Prim(Graph, A). Each entry vertex-parent/key gives a vertex, its current parent candidate, and the weight of the cheapest known edge into the growing tree A; Q is the priority queue of vertices not yet in the tree.]

A = {}
Q = A-NIL/0 B-NIL/∞ C-NIL/∞ D-NIL/∞ E-NIL/∞ F-NIL/∞ G-NIL/∞ H-NIL/∞ I-NIL/∞

A = A-NIL/0
Q = B-A/4 H-A/8 C-NIL/∞ D-NIL/∞ E-NIL/∞ F-NIL/∞ G-NIL/∞ I-NIL/∞

A = A-NIL/0 B-A/4
Q = H-A/8 C-B/8 D-NIL/∞ E-NIL/∞ F-NIL/∞ G-NIL/∞ I-NIL/∞

A = A-NIL/0 B-A/4 H-A/8
Q = G-H/1 I-H/6 C-B/8 D-NIL/∞ E-NIL/∞ F-NIL/∞

A = A-NIL/0 B-A/4 H-A/8 G-H/1
Q = F-G/3 I-G/5 C-B/8 D-NIL/∞ E-NIL/∞

A = A-NIL/0 B-A/4 H-A/8 G-H/1 F-G/3
Q = C-F/4 I-G/5 E-F/10 D-F/13

A = A-NIL/0 B-A/4 H-A/8 G-H/1 F-G/3 C-F/4
Q = I-C/3 D-C/6 E-F/10

A = A-NIL/0 B-A/4 H-A/8 G-H/1 F-G/3 C-F/4 I-C/3
Q = D-C/6 E-F/10

A = A-NIL/0 B-A/4 H-A/8 G-H/1 F-G/3 C-F/4 I-C/3 D-C/6
Q = E-D/9

A = A-NIL/0 B-A/4 H-A/8 G-H/1 F-G/3 C-F/4 I-C/3 D-C/6 E-D/9
Q = {}

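A C sketch of Prim's algorithm with an O(V²) array-based "queue"; the weighted edge list is reconstructed from the relaxations in the run above. With equal keys resolved in favor of the later vertex, this happens to reproduce the parent/key list of that run, but a different tie-break can yield a different (equally cheap) MST:

#include <stdio.h>

#define NV  9        /* vertices A..I */
#define INF 1000000

int main(void) {
    /* Edges reconstructed from the run above (an assumption). */
    static const int e[][3] = {
        {0,1,4},{0,7,8},{1,7,12},{1,2,8},{7,6,1},{7,8,6},{6,5,3},
        {6,8,5},{5,2,4},{5,4,10},{5,3,13},{2,8,3},{2,3,6},{3,4,9}
    };
    int w[NV][NV], key[NV], par[NV], in[NV] = {0};
    for (int i = 0; i < NV; i++)
        for (int j = 0; j < NV; j++) w[i][j] = INF;
    for (unsigned k = 0; k < sizeof e / sizeof e[0]; k++) {
        w[e[k][0]][e[k][1]] = e[k][2];
        w[e[k][1]][e[k][0]] = e[k][2];
    }
    for (int i = 0; i < NV; i++) { key[i] = INF; par[i] = -1; }
    key[0] = 0;                               /* grow the MST from A */
    for (int step = 0; step < NV; step++) {
        int u = -1;                           /* extract-min from Q */
        for (int i = 0; i < NV; i++)
            if (!in[i] && (u == -1 || key[i] <= key[u])) u = i;
        in[u] = 1;
        for (int v = 0; v < NV; v++)          /* relax edges out of u */
            if (!in[v] && w[u][v] < key[v]) {
                key[v] = w[u][v];
                par[v] = u;
            }
    }
    for (int i = 1; i < NV; i++)              /* MST edges, one per vertex */
        printf("%c-%c/%d\n", 'A' + i, 'A' + par[i], key[i]);
    return 0;
}
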
About Greedy Algorithms

• Greedy algorithms make a locally optimal choice (cheapest path, etc.).
• In general, a locally optimal choice does not give a globally optimal solution.
• Greedy algorithms can be used to solve optimization problems if:
  • There is an optimal substructure, and we can prove that a greedy choice at each iteration leads to an optimal solution.

Shortest Path

• Generalize distance to a weighted setting
• Digraph G = (V, E) with weight function W: E → ℝ (assigning real values to edges)
• Weight of path p = v1 → v2 → … → vk is

  w(p) = Σ_{i=1}^{k-1} w(v_i, v_{i+1})

• Shortest path = a path of minimum weight (cost)
• Applications
  • static/dynamic network routing
  • robot motion planning
  • map/route generation in traffic

Shortest-Path Problems

• Shortest-path problems
  • Single-source (single-destination). Find a shortest path from a given source (vertex s) to each of the vertices.
  • Single-pair. Given two vertices, find a shortest path between them. A solution to the single-source problem solves this problem efficiently, too.
  • All-pairs. Find shortest paths for every pair of vertices. Dynamic programming algorithm.
  • Unweighted shortest paths – BFS.

Optimal Substructure

• Concept: subpaths of shortest paths are shortest paths
• Proof:
  • if some subpath were not a shortest path, one could substitute the shorter subpath and create a shorter total path

More ?

Global Optimization Problems
• Exhaustive search (brute-force)
  • BFS, DFS, dynamic programming
• Approximation algorithms
  • Bin-packing based on the non-increasing (first-fit decreasing) first-fit algorithm (within 22% of the optimal number of bins)
• Heuristic algorithms
  • Genetic Algorithm, Simulated Annealing, Tabu Search

THANK YOU!