Transcript Document
Princess Nora University
Faculty of Computer & Information Systems
Computer Science Department
Operating Systems
(CS 340 D)
Dr. Abeer Mahmoud
(Chapter-6)
CPU Scheduling
Chapter 5: CPU Scheduling
1. Basic Concepts
2. Scheduling Criteria
3. Scheduling Algorithms
OBJECTIVES:
To introduce CPU scheduling, which is the basis for
multiprogrammed operating systems
To describe various CPU-scheduling algorithms
Basic Concepts
Basic Concepts
Maximum CPU utilization (keep the CPU as busy as possible) is obtained with multiprogramming.
CPU–I/O Burst Cycle
o Process execution consists of a cycle of CPU execution and I/O wait.
o Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.
o Eventually, the final CPU burst ends with a system request to terminate execution.
Histogram of CPU-burst Times
The durations of CPU bursts
vary greatly from process to
process.
There are a large number of short CPU bursts and a small number of long CPU bursts.
An I/O-bound program typically has many short CPU bursts.
A CPU-bound program typically has a few long CPU bursts.
CPU Scheduler
CPU Scheduler (/ short-term scheduler):
o Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
o The ready queue is not necessarily a first-in, first-out (FIFO) queue. It can be implemented as a FIFO queue, a priority queue, a tree, or an unordered linked list.
Preemptive Scheduling
CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates
Scheduling under 1 and 4 is non-preemptive (/cooperative).
Scheduling under 2 and 3 is preemptive.
Preemptive Scheduling (cont..)
Under non-preemptive scheduling:
o Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
Dispatcher
Dispatcher: the module that gives control of the CPU to the process selected by the short-term scheduler.
The dispatcher should be as fast as possible, since it is invoked
during every process switch.
Dispatch latency – time it takes for the dispatcher to stop
one process and start another running
Scheduling Criteria
Scheduling Criteria
Throughput – # of processes that complete their execution per time unit (e.g., 10 processes/sec)
Turnaround time – amount of time to execute a particular
process (the sum of the periods spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing I/O.)
Scheduling Criteria (cont)
Waiting time – amount of time a process has been waiting in
the ready queue
Response time – amount of time it takes from when a
request was submitted until the first response is produced
(for time-sharing environment)
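These criteria are related by simple arithmetic: turnaround time = completion time − arrival time, and (ignoring I/O) waiting time = turnaround time − total CPU burst time. A minimal sketch of that computation (the function name and values are illustrative, not from the slides):

def criteria(arrival, burst, completion):
    # Turnaround: total time the process spends in the system.
    turnaround = completion - arrival
    # Waiting: time spent ready but not running (I/O ignored here).
    waiting = turnaround - burst
    return turnaround, waiting

# Example: a process arriving at t = 0 with a 24 ms CPU burst that
# completes at t = 30 has turnaround = 30 ms and waiting = 6 ms.
print(criteria(0, 24, 30))   # -> (30, 6)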
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
Scheduling Algorithms
Scheduling Algorithms
1. First-Come, First-Served Scheduling
2. Shortest-Job-First Scheduling
3. Priority Scheduling
4. Round-Robin Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
(1) First-Come, First-Served (FCFS)
The simplest CPU-scheduling algorithm
The process that requests the CPU first is allocated the CPU first.
Can be implemented using FIFO queue:
When a process enters the ready queue, its PCB is linked
onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue. The
running process is then removed from the queue.
FCFS algorithm is non-preemptive
(1) First-Come, First-Served (FCFS) -cont..
Gantt chart: is a bar chart that illustrates a particular schedule,
including the start and finish times of each of the processes.
Example (1): Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:
Process    Burst Time (ms)
P1         24
P2         3
P3         3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1: 0–24 | P2: 24–27 | P3: 27–30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17 ms
(1) First-Come, First-Served (FCFS) -cont..
Example (2): Consider the same set of processes as in Example (1), arriving at time 0, with the length of the CPU burst given in milliseconds:
Process    Burst Time (ms)
P1         24
P2         3
P3         3
Suppose that the processes arrive in the order: P2 , P3 , P1
The Gantt chart for the schedule is:
P2: 0–3 | P3: 3–6 | P1: 6–30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3 ms >>>> much better than Example (1)
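A minimal sketch of the FCFS waiting-time calculation for processes that all arrive at time 0 (illustrative code, not from the slides); it reproduces the averages of Examples (1) and (2):

def fcfs_waiting_times(bursts):
    # Processes run in the given order; each one waits for the total
    # burst time of everything ahead of it in the queue.
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Example (1): order P1, P2, P3 -> [0, 24, 27], average 17 ms
print(fcfs_waiting_times([24, 3, 3]))
# Example (2): order P2, P3, P1 -> [0, 3, 6], average 3 ms
print(fcfs_waiting_times([3, 3, 24]))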
(1) First-Come, First-Served (FCFS) -cont..
Convoy effect – short processes wait for one big process to get off the CPU. This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.
FCFS Pros. (++):
Simplest algorithm
FCFS Cons. (--):
The average waiting time is generally not minimal and
affected by processes’ order.
Lower CPU and device utilization because of convoy effect
Not suitable for time-shared systems
(2) Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst.
Use these lengths to schedule the process with the shortest time
If the next CPU bursts of two processes are the same, FCFS scheduling is
used to select the next process
Two schemes:
– Non-preemptive: once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
– Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
(2) Shortest-Job-First (SJF) Scheduling
Example(3): Consider the following set of processes with the
length of the CPU burst given in milliseconds
Process    Burst Time (ms)
P1         6
P2         8
P3         7
P4         3
SJF scheduling chart
P4: 0–3 | P1: 3–9 | P3: 9–16 | P2: 16–24
Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
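A minimal sketch of non-preemptive SJF for processes that all arrive at time 0 (illustrative code, not from the slides): sort the jobs by burst length and then serve them FCFS in that order; it reproduces the 7 ms average of Example (3).

def sjf_waiting_times(bursts):
    # Shortest job first: serve jobs in order of increasing burst length.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = {}, 0
    for i in order:
        waits[i] = elapsed        # waiting time of the job with this index
        elapsed += bursts[i]
    return waits

# Example (3): bursts P1=6, P2=8, P3=7, P4=3
# -> {3: 0, 0: 3, 2: 9, 1: 16}, average (3 + 16 + 9 + 0)/4 = 7 ms
print(sjf_waiting_times([6, 8, 7, 3]))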
(2) Shortest-Job-First (SJF) Scheduling
Example(4): Consider the following set of processes with the
length of the CPU burst given in milliseconds
Process    Arrival Time    Burst Time (ms)
P1         0.0             7
P2         2.0             4
P3         4.0             1
P4         5.0             4
Non-preemptive SJF
P1: 0–7 | P3: 7–8 | P2: 8–12 | P4: 12–16
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
(2) Shortest-Job-First (SJF) Scheduling
Example(5): Consider the following set of processes with the
length of the CPU burst given in milliseconds
Process    Arrival Time    Burst Time (ms)
P1         0.0             7
P2         2.0             4
P3         4.0             1
P4         5.0             4
Preemptive SJF
P1: 0–2 | P2: 2–4 | P3: 4–5 | P2: 5–7 | P4: 7–11 | P1: 11–16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
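A minimal sketch of preemptive SJF (SRTF) simulated one millisecond at a time (illustrative code, not from the slides); with the data of Example (5) it gives waiting times of 9, 1, 0, and 2 ms.

def srtf_waiting_times(arrival, burst):
    # Each millisecond, run the arrived process with the shortest
    # remaining time; record completion times to derive waiting times.
    n = len(burst)
    remaining, completion, t = list(burst), [0] * n, 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:            # CPU idle until the next arrival
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            completion[i] = t
    # waiting = turnaround - burst = (completion - arrival) - burst
    return [completion[i] - arrival[i] - burst[i] for i in range(n)]

# Example (5): -> [9, 1, 0, 2], average 3 ms
print(srtf_waiting_times([0, 2, 4, 5], [7, 4, 1, 4]))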
(2) Shortest-Job-First (SJF) Scheduling
SJF Pros. (++):
SJF is optimal – gives minimum average waiting time for a
given set of processes
SJF Cons. (--):
The difficulty is knowing the length of the next CPU burst (in practice, this length is usually predicted).
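A common prediction technique is exponential averaging of the measured lengths of previous CPU bursts (a standard approach, noted here for completeness):

τ(n+1) = α · t(n) + (1 − α) · τ(n)

where t(n) is the length of the nth (most recent) CPU burst, τ(n) is its predicted length, and 0 ≤ α ≤ 1. For example, with α = 1/2, a previous prediction of 10 ms and an observed burst of 6 ms, the next prediction is 0.5·6 + 0.5·10 = 8 ms.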
(3) Priority Scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority.
Equal-priority processes are scheduled in FCFS order.
The textbook assumes that the smallest integer means the highest priority.
Two schemes:
– Non-preemptive: once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
– Preemptive: if a new process arrives with a priority higher than that of the currently executing process, preempt.
(3) Priority Scheduling (cont..)
Example(6): Consider the following set of processes with the
length of the CPU burst given in milliseconds
Process    Burst Time (ms)    Priority
P1         10                 3
P2         1                  1
P3         2                  4
P4         1                  5
P5         5                  2
Priority scheduling chart:
P2: 0–1 | P5: 1–6 | P1: 6–16 | P3: 16–18 | P4: 18–19
Average waiting time = (6 + 0 + 16 +18 + 1)/5 = 8.2 ms
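A minimal sketch of non-preemptive priority scheduling for processes that all arrive at time 0 (illustrative code, not from the slides; a smaller number means a higher priority, as the textbook assumes); it reproduces the 8.2 ms average of Example (6).

def priority_waiting_times(bursts, priorities):
    # Serve processes in priority order (smallest number first); because
    # the sort is stable, equal priorities fall back to FCFS order.
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, elapsed = {}, 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# Example (6): -> {1: 0, 4: 1, 0: 6, 2: 16, 3: 18}, average 8.2 ms
print(priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2]))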
(3) Priority Scheduling (cont..)
Priority scheduling Pros. (++):
Simple algorithm
Priority scheduling Cons. (--):
Main problem – Starvation (/indefinite blocking): low-priority processes may never execute.
Solution >> Aging >> as time progresses, increase the priority of processes that have been waiting a long time.
(4) Round Robin Scheduling (RR)
(RR) algorithm is designed especially for timesharing systems.
Each process gets a small unit of CPU time (time quantum ).
After this time has elapsed, the process is preempted and added to
the end of the ready queue.
Time quantum (/ time slice) q >> usually 10–100 ms.
The ready queue is treated as a circular queue and implemented as a FIFO queue.
(4) Round Robin Scheduling (RR)
Example (7): Consider the following set of processes with the length of the CPU burst given in milliseconds; time quantum = 4 ms.
Process    Burst Time (ms)
P1         24
P2         3
P3         3
The Gantt chart is:
P1: 0–4 | P2: 4–7 | P3: 7–10 | P1: 10–14 | P1: 14–18 | P1: 18–22 | P1: 22–26 | P1: 26–30
Average waiting time = (6 + 4 + 7 )/3 = 5.66 ms
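A minimal sketch of Round Robin for processes that all arrive at time 0 (illustrative code, not from the slides); with q = 4 ms it reproduces the 5.66 ms average of Example (7).

from collections import deque

def rr_waiting_times(bursts, quantum):
    # Run the head of the ready queue for up to one quantum, then put it
    # back at the tail if it still has CPU time left.
    remaining = list(bursts)
    completion = [0] * len(bursts)
    queue = deque(range(len(bursts)))      # all processes arrive at t = 0
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)
        else:
            completion[i] = t
    # waiting = completion - burst, since every arrival time is 0
    return [completion[i] - bursts[i] for i in range(len(bursts))]

# Example (7): -> [6, 4, 7], average (6 + 4 + 7)/3 ≈ 5.66 ms
print(rr_waiting_times([24, 3, 3], 4))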
(4) Round Robin Scheduling (RR)
If there are n processes in the ready queue and the time quantum is q, then:
Each process gets 1/n of the CPU time in chunks of at most q time units at once.
No process waits more than (n − 1) · q time units.
Performance depends on the size of the time quantum:
If q is very large, RR behaves the same as FCFS.
If q is very small, performance decreases because context-switch time becomes a significant overhead: switching the CPU to another process requires performing a state save of the current process and a state restore of a different process.
(4) Round Robin Scheduling (RR)
Turnaround time depends on the size of the time quantum.
The average turnaround time can be improved if most
processes finish their next CPU burst in a single time quantum.
(4) Round Robin Scheduling (RR)
RR Scheduling Pros. (++):
Suitable for time-shared systems (better response time)
RR Scheduling Cons. (--):
The average waiting time under the RR policy is often long
Context switch overhead is higher
(5) Multilevel Queue Scheduling
Processes are easily classified into different groups, such as:
o Foreground (interactive) processes (may have higher priority)
o Background (batch) processes
Ready queue is partitioned into separate queues
The processes are permanently assigned to one queue,
Each queue has its own scheduling algorithm.
E.g., foreground processes are scheduled by RR and background processes by FCFS.
Scheduling must be done between the queues.
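A minimal sketch of fixed-priority scheduling between two such queues (illustrative code and process names, not from the slides): the foreground queue is always served first, and the background queue runs only when the foreground queue is empty.

def pick_next(foreground, background):
    # Fixed-priority scheduling between queues: the foreground (RR) queue
    # has absolute priority over the background (FCFS) queue.
    if foreground:
        return foreground[0], "foreground (RR)"
    if background:
        return background[0], "background (FCFS)"
    return None, "idle"

# Hypothetical ready queues:
print(pick_next(["editor"], ["payroll_batch"]))   # the foreground job runs first
print(pick_next([], ["payroll_batch"]))           # batch runs only when foreground is empty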
(5) Multilevel Queue Scheduling
Example (8): A multilevel queue scheduling algorithm with five queues, listed in order of priority: system processes, interactive processes, interactive editing processes, batch processes, and student processes.
Each queue has absolute priority over
lower-priority queues.
E.g. No process in the batch queue
could run unless the queues for system
processes, interactive processes, and
interactive editing processes were all
empty.
If an interactive editing process entered the
ready queue while a batch process was
running, the batch process would be
preempted.
(5) Multilevel Queue Scheduling
Multilevel Queue Scheduling Pros. (++):
Low scheduling overhead
Considers different process properties & requirements
Multilevel Queue Scheduling Cons. (--):
Inflexible: a process can't change its queue
Starvation possibility
(6) Multilevel Feedback Queue
A process can move between the various queues
The idea is to separate processes according to the characteristics of
their CPU bursts.
If a process uses too much CPU time, it will be moved to a lower-priority queue.
This scheme leaves I/O-bound and interactive processes in the
higher-priority queues.
A process that waits too long in a lower-priority queue may be
moved to a higher-priority queue. This form of aging prevents
starvation.
(6) Multilevel Feedback Queue
Example (9): Consider a multilevel feedback queue scheduler with three queues:
Q0 – RR with time quantum 8 milliseconds (highest priority)
Q1 – RR with time quantum 16 milliseconds
Q2 – FCFS (lowest priority)
Scheduling:
A process in a lower-priority queue is selected only if the higher-priority queues are empty.
A new job enters queue Q0, which is served RR. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
If Q0 is empty, the job at the head of Q1 is again served RR and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
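A minimal sketch of the demotion rule from Example (9) (illustrative code, not from the slides): a job that uses its full quantum drops one level, a job that gives up the CPU early (e.g., for I/O) stays where it is, and Q2 jobs remain in the FCFS queue.

def next_queue(level, used_full_quantum):
    # Levels: 0 = Q0 (RR, q=8 ms), 1 = Q1 (RR, q=16 ms), 2 = Q2 (FCFS).
    if level < 2 and used_full_quantum:
        return level + 1   # CPU-hungry job: demote one level
    return level           # blocked before its quantum expired, or already in FCFS

# A CPU-bound job that always uses its full quantum drifts down to Q2:
print(next_queue(0, True))    # -> 1
print(next_queue(1, True))    # -> 2
print(next_queue(2, True))    # -> 2 (stays in FCFS)
# An interactive job that blocks before its quantum expires stays in Q0:
print(next_queue(0, False))   # -> 0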
(6) Multilevel Feedback Queue
A multilevel-feedback-queue scheduler is defined by the following parameters:
Number of queues
Scheduling algorithms for each queue
Method used to determine when to upgrade a process
Method used to determine when to demote a process
Method used to determine which queue a process will enter when that
process needs service
(6) Multilevel Feedback Queue
Multilevel Feedback Queue Scheduling Pros. (++):
Very flexible >>> it is the most general CPU-scheduling algorithm.
Can be configured to prevent starvation.
Multilevel Feedback Queue Scheduling Cons. (--):
Most complex algorithm
Multiple-Processor Scheduling
Multiple-Processor Scheduling
CPU scheduling is more complex when multiple
CPUs are available.
Load sharing becomes possible.
Homogeneous processors – the processors are identical in functionality (i.e., any processor can run any process in the ready queue).
Multiple-Processor Scheduling
Approaches to Multiple-Processor Scheduling
Asymmetric multiprocessing:
• The master processor executes system code and the slave processors execute user code.
• Only the master processor handles all scheduling decisions, I/O processing, and other system activities.
• Simple, and reduces the need for data sharing.
Symmetric multiprocessing (SMP):
• Each processor is self-scheduling.
• All processes are in a common ready queue, or each processor has its own private queue of ready processes.
• The OS must ensure that two processors do not choose the same process and that processes are not lost from the queue.
Thank you
End of Chapter 5