Transcript: 02_08_05_scheduling2
Scheduling
CSCI 3753 Operating Systems
Spring 2005
Prof. Rick Han
Announcements
• Program Assignment #1 due Tuesday Feb. 15 at 11:55 pm
– TA will explain parts b-d in recitation
• Read chapters 6 and 7
Scheduling
• A process can be switched out due to:
– blocking on I/O
– voluntarily yielding the CPU
– being preemptively time sliced, i.e. interrupted by the timer
– termination
• The OS’s scheduler implements a scheduling policy that selects the next process to run from the ready queue, based on several criteria:
– CPU utilization - fraction of time the CPU is kept busy (typically 40% to 90%)
– Throughput - # of processes completed per second
– Turnaround time - how long it takes to execute that process, from submission to completion
– Waiting time - sum of the time spent waiting in the ready queue
– Response time - time until the first response is produced
Scheduling
• The dispatcher gives control of the CPU to
the process selected by the scheduler,
which involves
– switching context
– switching to user mode
– jumping to the proper location in the user
program to restart that program
Scheduling
• Context switch overhead affects the choice of time slice
– a context switch can take about 10 microseconds to copy register state to/from memory
• on a 1 GHz CPU, that's 10,000 wasted cycles per context switch!
– if the time slice is on the order of a context switch, then the CPU spends most of its time context switching
– typically choose the time slice to be large enough that only about 10% of CPU time is spent context switching
– most modern systems choose time slices of 10-100 ms
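As a rough check of these numbers, the fraction of CPU time lost to switching can be computed directly. A small sketch in Python; the 10 microsecond switch cost is the figure quoted above, not a measured value:

```python
# Rough estimate of CPU time lost to context switching.
# The 10 microsecond switch cost is the figure quoted in the slide above.
CONTEXT_SWITCH_US = 10.0  # cost of one context switch, in microseconds

def overhead_fraction(time_slice_us: float) -> float:
    """Fraction of CPU time spent context switching for a given time slice."""
    return CONTEXT_SWITCH_US / (time_slice_us + CONTEXT_SWITCH_US)

for slice_ms in (0.01, 0.1, 1, 10, 100):   # from 10 us up to 100 ms
    frac = overhead_fraction(slice_ms * 1000)
    print(f"time slice {slice_ms:>6} ms -> {frac:.1%} of CPU time lost to switching")
```

With a slice equal to the switch cost (10 us), half the CPU is wasted; at 10-100 ms the overhead drops well below 1%, which is why those slice lengths are typical.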
FCFS Scheduling
• First Come First Serve: order of arrival dictates order of scheduling
• If processes arrive in order P1, P2, P3, then the Gantt chart of CPU service time is:

Process   CPU Service Time
P1        24
P2        3
P3        3

Gantt chart: P1 [0-24] | P2 [24-27] | P3 [27-30]
FCFS Scheduling
• If processes arrive in reverse order P3, P2, P1, then the Gantt chart of CPU service time is:

Process   CPU Service Time
P1        24
P2        3
P3        3

Gantt chart: P3 [0-3] | P2 [3-6] | P1 [6-30]
FCFS Scheduling
Gantt chart, arrival order P1, P2, P3: P1 [0-24] | P2 [24-27] | P3 [27-30]
Gantt chart, arrival order P3, P2, P1: P3 [0-3] | P2 [3-6] | P1 [6-30]
• average wait time is (0+24+27)/3 = 17 seconds in the top scenario
• average wait time is (0+3+6)/3 = 3 seconds in the 2nd scenario
• FCFS wait times are generally not minimal and can change a lot when the order of arrival changes, especially if the process service times vary a lot (are spread out)
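A small Python sketch of the FCFS arithmetic above; the burst times are the ones from the example, and the helper function is illustrative, not any real OS interface:

```python
def fcfs_wait_times(bursts):
    """Given CPU service times in arrival order, return each process's wait time
    under FCFS: a process waits for the sum of all bursts that arrived before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Arrival order P1, P2, P3 (bursts 24, 3, 3) vs. reverse order P3, P2, P1
for order, bursts in (("P1,P2,P3", [24, 3, 3]), ("P3,P2,P1", [3, 3, 24])):
    waits = fcfs_wait_times(bursts)
    print(order, "waits:", waits, "average:", sum(waits) / len(waits))
    # -> [0, 24, 27] average 17.0, then [0, 3, 6] average 3.0
```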
FCFS Scheduling
• If there is one CPU-bound process and many I/O-bound processes, then a convoy effect occurs:
– the I/O-bound processes finish their I/O and wait for the one long CPU-bound process to complete or yield; they then execute quickly, block on I/O again, the CPU-bound process resumes, and the cycle keeps repeating
Shortest Job First Scheduling
• Choose the process/thread with the lowest service time
– gives priority to the shortest processes
– minimizes the average wait time
• can prove this: of the 4! = 24 possible orderings of the four processes below, this one has the lowest average wait time
• intuition: moving a short process ahead of a long one decreases the wait time of the short process more than it increases the wait time of the long process

Process   CPU Service Time
P1        6
P2        8
P3        7
P4        3

Gantt chart: P4 [0-3] | P1 [3-9] | P3 [9-16] | P2 [16-24]
average wait time = (0+3+9+16)/4 = 7 seconds
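The same kind of sketch for the SJF example above; burst times are taken from the table, and `sjf_average_wait` is an illustrative helper, not a real API:

```python
def sjf_average_wait(service_times):
    """Non-preemptive SJF with all jobs available at time 0: run jobs in
    increasing order of service time and average the resulting wait times."""
    elapsed, total_wait = 0, 0
    for burst in sorted(service_times.values()):
        total_wait += elapsed          # this job waited for everything scheduled before it
        elapsed += burst
    return total_wait / len(service_times)

# Example from the slide: P1=6, P2=8, P3=7, P4=3
jobs = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
print(sjf_average_wait(jobs))   # 7.0 seconds, matching the Gantt chart above
```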
Shortest Job First Scheduling
• It is difficult to know the length of each job a priori
– difficult to know the length of each CPU burst of a job
• So predict the burst length of each job/process/thread
– for each thread, track its pattern of CPU usage and keep a running average
• could be a sliding window
• instead, this is often an exponentially weighted average:
– Ave(n) = (1-a)*CPU_usage(n) + a*Ave(n-1), where 0 < a < 1
– unrolling the recursion, this can be rewritten as Ave(n) = sum from i=1 to n of (1-a)*a^(n-i)*CPU_usage(i), plus a^n*Ave(0) for the initial estimate
• Can be preemptive, i.e. when a new job arrives in the ready queue, if its service time is less than the remaining service time of the currently executing job, it preempts the current job
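A minimal sketch of the exponential-average predictor, following the slide's recurrence; the initial guess Ave(0) and the burst history below are made up for illustration:

```python
def update_burst_estimate(prev_avg, latest_burst, a=0.5):
    """Exponentially weighted estimate of the next CPU burst, following
    Ave(n) = (1-a)*CPU_usage(n) + a*Ave(n-1), with 0 < a < 1.
    Larger a gives more weight to history; smaller a tracks recent bursts."""
    return (1 - a) * latest_burst + a * prev_avg

# Hypothetical burst history in milliseconds (illustrative numbers only)
bursts = [6, 4, 6, 4, 13, 13, 13]
estimate = 10.0   # initial guess Ave(0)
for b in bursts:
    estimate = update_burst_estimate(estimate, b)
    print(f"observed {b:>2} ms, predicted next burst {estimate:.2f} ms")
```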
Priority Scheduling
• SJF is a special case of priority scheduling, where priority is given to short jobs
• Any criteria can be used to decide on a priority
– measurable characteristics of the process
– external criteria
• Can be preemptive, i.e. a higher priority process arriving in the ready queue preempts a lower priority process

Process   CPU Service Time   Priority
P1        10                 3
P2        1                  1
P3        2                  4
P4        1                  5
P5        5                  2

Gantt chart (lower number = higher priority): P2 [0-1] | P5 [1-6] | P1 [6-16] | P3 [16-18] | P4 [18-19]
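A sketch of non-preemptive priority scheduling over the example above, using a heap for the ready queue; the names and numbers come from the table, and `priority_schedule` is an illustrative helper, not a real kernel interface:

```python
import heapq

def priority_schedule(jobs):
    """Non-preemptive priority scheduling with all jobs ready at time 0.
    jobs: list of (priority, name, service_time); a smaller number means a
    higher priority, as in the example above. Returns the run order and
    each job's wait time."""
    heap = list(jobs)
    heapq.heapify(heap)                 # ready queue ordered by priority
    elapsed, order, waits = 0, [], {}
    while heap:
        priority, name, burst = heapq.heappop(heap)
        order.append(name)
        waits[name] = elapsed           # time spent waiting before running
        elapsed += burst
    return order, waits

# Example from the slide: (priority, name, CPU service time)
jobs = [(3, "P1", 10), (1, "P2", 1), (4, "P3", 2), (5, "P4", 1), (2, "P5", 5)]
order, waits = priority_schedule(jobs)
print(order)   # ['P2', 'P5', 'P1', 'P3', 'P4'], matching the Gantt chart
print(waits)   # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
```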
Priority Scheduling
• Can starve low priority processes
• A solution is aging: as low priority processes age, their priority is increased
– sample aging policy: if priorities range from 1 to 128 (1 = highest), increase a waiting process's priority, i.e. decrement its priority number, by 1 every T seconds
– eventually, the low priority process will get scheduled on the CPU
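A minimal sketch of how such an aging policy could be expressed, assuming the 1-128 range above with 1 as the highest priority and a tunable period T (`period_s` below); the function and its arguments are illustrative, not a real kernel interface:

```python
def effective_priority(base_priority, wait_seconds, period_s=1.0, highest=1):
    """Aging sketch: a waiting process's priority improves by one level for
    every period_s seconds spent in the ready queue, but never goes past
    the highest priority (priority 1 in the 1-128 scheme above)."""
    boost = int(wait_seconds // period_s)
    return max(highest, base_priority - boost)

print(effective_priority(128, wait_seconds=5))    # 123 after waiting 5 seconds
print(effective_priority(128, wait_seconds=500))  # clamped at priority 1
```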
Deadline Scheduling
• Hard real time systems require that certain processes must complete execution within a certain time, or the system crashes
– e.g. robots
• Admission control policy
– there is no notion of refusing admission to a new process in FCFS, SJF, and priority scheduling
– here, a process can't be admitted if its deadline can't be met
– this is typical of many Quality-of-Service (QoS) scheduling policies in networking - can't admit a new source of packets if its QoS deadlines or guarantees cannot be met

Process   CPU Service Time   Deadline from now
P0        350                575
P1        125                550
P2        475                1050
P3        250                none
P4        75                 200
Deadline Scheduling
i    t(pi)   Deadline
0    350     575
1    125     550
2    475     1050
3    250     (none)
4    75      200

• Allocates service by deadline
• May not be feasible

[Figure: Gantt charts of p0-p4 scheduled by deadline over the interval 0 to 1275, with the deadlines marked at 200, 550, 575, and 1050. Copyright © 2004 Pearson Education.]
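One way the admission-control idea could be sketched: order jobs by deadline (earliest first, no-deadline jobs last) and check that every deadline is met, assuming all jobs are ready now and run non-preemptively. The workload is the one from the table; `can_admit` and the extra process P5 are illustrative, not part of any real API:

```python
def can_admit(jobs, new_job):
    """Admission test for deadline scheduling (a sketch, assuming all jobs are
    ready now and run to completion): sort by deadline, earliest first, with
    jobs that have no deadline last, then check each finish time.
    jobs / new_job are tuples of (name, service_time, deadline_or_None)."""
    candidates = list(jobs) + [new_job]
    candidates.sort(key=lambda j: (j[2] is None, j[2]))   # None sorts last
    elapsed = 0
    for name, service, deadline in candidates:
        elapsed += service
        if deadline is not None and elapsed > deadline:
            return False            # this job would miss its deadline
    return True

# Workload from the slide: (name, CPU service time, deadline from now)
workload = [("P0", 350, 575), ("P1", 125, 550), ("P2", 475, 1050),
            ("P3", 250, None), ("P4", 75, 200)]
print(can_admit(workload[:-1], workload[-1]))    # True: all deadlines can be met
print(can_admit(workload, ("P5", 300, 400)))     # False: hypothetical P5 is refused
```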
Round Robin Scheduling
• The ready queue is treated as a circular queue, and the
CPU scheduler rotates among the processes in the
ready queue, giving each a time slice, after which it is
preempted by a timer interrupt and the next process is
started
– useful for time sharing multitasking systems - most widely used
scheduling algorithm
– combines FCFS and preemption
• simple and fair, though wait times can be long
– Fair: If there are n processes, each process gets 1/n of CPU
– Simple: Don’t need to know service times a priori
• A process can finish before its time slice is up. The
scheduler just selects the next process in the queue
Round Robin Scheduling
• Suppose we use a time slice of 4 ms, and ignore context switch overhead
• Now P1 is time sliced out, and P2 and P3 are allowed to run sooner than under FCFS
• average wait time is 5.66 ms

Process   CPU Service Time (ms)
P1        24
P2        3
P3        3

Gantt chart: P1 [0-4] | P2 [4-7] | P3 [7-10] | P1 [10-14] | P1 [14-18] | P1 [18-22] | P1 [22-26] | P1 [26-30]
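A sketch of the round-robin run above, ignoring context-switch overhead as the slide does; process names and burst times come from the example, and `round_robin` is an illustrative helper:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Round-robin simulation ignoring context-switch overhead.
    jobs: list of (name, service_time), all ready at time 0.
    Returns the Gantt chart as (name, start, end) tuples and the average
    wait time (time spent ready but not running)."""
    queue = deque(jobs)
    ready_since = {name: 0 for name, _ in jobs}   # when each job last became ready
    waits = {name: 0 for name, _ in jobs}
    gantt, clock = [], 0
    while queue:
        name, remaining = queue.popleft()
        waits[name] += clock - ready_since[name]
        run = min(quantum, remaining)
        gantt.append((name, clock, clock + run))
        clock += run
        if remaining > run:                        # time sliced out: requeue
            ready_since[name] = clock
            queue.append((name, remaining - run))
    return gantt, sum(waits.values()) / len(waits)

# Example from the slide: P1=24, P2=3, P3=3 ms with a 4 ms time slice
gantt, avg_wait = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(gantt)               # P1 0-4, P2 4-7, P3 7-10, then P1 in 4 ms slices to 30
print(round(avg_wait, 2))  # 5.67, the ~5.66 ms average wait from the slide
```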
Round Robin Scheduling
• Weighted Round Robin - each process is
given some number of time slices, not just
one per round
• this is a way to provide preferences or
priorities even with preemptive time slicing
Multi-level Queue Scheduling
• Partitions ready queue into several queues
– different processes have different needs, e.g.
foreground and background
– so don’t apply the same scheduling policy to
every process, e.g. foreground gets RR,
background gets FCFS
• Queues can be organized by priority, or
each given a percentage of CPU, or a
hybrid combination
Multi-level Feedback Queues
• Allows processes to move between queues
• Criteria for movement could depend upon:
– age of a process: old processes move to higher priority queues
– behavior of a process: e.g. CPU-bound processes move down the hierarchy of queues, allowing interactive and I/O-bound processes to move up
• give a time slice to each queue, with smaller time slices higher up in the hierarchy
• if a process doesn't finish within its time slice, it is moved down to the next lower queue
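A minimal multi-level feedback queue sketch along the lines described above, assuming all jobs are ready at time 0, no I/O, and illustrative quanta of 4, 8, and 16 ms per level; the quanta and the demotion rule are assumptions, not values from the slides:

```python
from collections import deque

def mlfq(jobs, quanta=(4, 8, 16)):
    """Multi-level feedback queue sketch (all jobs ready at time 0, no I/O):
    a job starts in the top queue; if it uses its whole time slice it is
    demoted one level; lower queues run only when higher ones are empty;
    jobs that exhaust the bottom quantum are requeued at the bottom level.
    jobs: list of (name, service_time). Returns the execution trace."""
    queues = [deque() for _ in quanta]
    for name, burst in jobs:
        queues[0].append((name, burst))
    trace, clock = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        trace.append((name, level, clock, clock + run))
        clock += run
        if remaining > run:
            demoted = min(level + 1, len(quanta) - 1)        # demote CPU hogs
            queues[demoted].append((name, remaining - run))
    return trace

# Illustrative workload: one CPU-bound job and two short interactive-style jobs
for entry in mlfq([("cpu_hog", 40), ("short_a", 3), ("short_b", 2)]):
    print(entry)   # (name, queue level, start time, end time)
```

Note how the short jobs finish in the top queue while the CPU-bound job sinks to the lower queues, which is exactly the behavior the slide describes.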