ITS 225 (Operating Systems) Lecture Notes

Operating Systems Lecture Notes
CPU Scheduling
Matthew Dailey
Some material © Silberschatz, Galvin, and Gagne, 2002
CPU Scheduling
[Figure: the ready queue as a linked list with head and tail pointers; each node is a PCB (e.g. PCB7, PCB9, PCB1) holding the saved registers and other state of a ready process.]
Main Question: Which process to select from ready queue?
– Preemption vs. non-preemption
– Evaluating scheduling algorithms
– Algorithms (FCFS, SJF, RR, Priority, Multilevel Queue)
Readings: Silberschatz et al., chapter 6
CPU Scheduling: What Policy?
Multiprogramming goal: keep the CPU busy by running some process at all times
Policy determines how you allocate resources to processes
No single policy is best for all situations!
Best policy depends on goal of the system
– Single-user desktop PC
– Compute server for scientific applications
– Interactive time-sharing system
CPU Burst – I/O Burst Cycle
Empirically, most programs alternate between CPU bursts and I/O bursts. This makes multiprogramming desirable.
CPU Burst Times
Burst lengths vary with application and hardware, but the histogram almost always looks exponential: most bursts are short, a few are long.
Short-term scheduler
Selects next job from ready queue.
– Ready queue not necessarily FIFO!
Must run when:
– A process switches from running to waiting (I/O, wait(), etc.)
– A process terminates
If preemptive, can also run when:
– A process switches from running to ready (Interrupt)
– A process switches from waiting to ready (I/O completion)
Preemptive scheduler: Pros and Cons
OSs like Windows 3.1 and early Apple MacOS were non-preemptive.
Badly-behaved apps could kill these systems.
Preemption seems a good choice to avoid this. BUT:
– Interrupt during shared user data update
• Problem: can cause inconsistency or corruption of user data
• Remedy: synchronization (Chapter 7) (a headache for application programmers)
– Interrupt during system call
• Problem: could cause kernel data inconsistency or corruption
• Remedy: disable interrupts during kernel data updates
• Disable time must be kept very short, or interrupts might be missed (a headache for the OS designer)
Preemption increases complexity for OS designer AND programmers
Note on Dispatch Latency
The dispatcher is a piece of kernel code that:
– Switches context
– Flips the protection bit to user mode
– Jumps to the correct location in the user program
Dispatch latency is the time it takes from the interrupt to the final jump
Scheduling algorithm: Evaluation criteria
Generally want to maximize:
– CPU utilization: % of time the CPU is in use
– Throughput: # of processes completed per unit time
Generally want to minimize:
– Turnaround time: time from submission to completion (admit time + ready time + CPU time + I/O time)
– Wait time: amount of time spent in the ready queue
– Response time: time from submission to first output (important in interactive systems)
Usually optimize average but there are other choices.
Algorithm 1: First Come, First Served (FCFS)
Characteristics:
– A non-preemptive scheme.
– Like Bangkok Bank when only one service desk is open.
– One long-running process can clog the queue for a long time.
Example:

Process  Arrival  Burst time
P1       0        24
P2       1        3
P3       2        3

Gantt Chart: P1 (0–24) | P2 (24–27) | P3 (27–30)

Avg wait time: (0 + 24 + 27) / 3 = 17 ms
BUT what is the wait time in the opposite order (P3, P2, P1)?
(0 + 3 + 6) / 3 = 3 ms. Much improved!
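Not from the original slides: a minimal Python sketch of the FCFS arithmetic above. The job list and the fcfs_waits helper are made up for illustration, and all three jobs are treated as arriving together, as in the averages on this slide.

    # FCFS: jobs run to completion in the order given (non-preemptive).
    # Each entry is a hypothetical (pid, burst) pair; all jobs are assumed
    # to be in the ready queue at time 0.
    def fcfs_waits(jobs):
        waits, time = {}, 0
        for pid, burst in jobs:
            waits[pid] = time          # time spent in the ready queue
            time += burst              # run the whole burst
        return waits

    long_first = [("P1", 24), ("P2", 3), ("P3", 3)]
    short_first = list(reversed(long_first))
    for order in (long_first, short_first):
        w = fcfs_waits(order)
        print(w, "avg =", sum(w.values()) / len(w))
    # -> avg 17.0 for the order P1, P2, P3 and avg 3.0 for P3, P2, P1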
Algorithm 2: Non-Preemptive Shortest Job First (SJF)
Characteristics:
– Always assign job with shortest next CPU burst to CPU
– Provably optimizes average wait time among non-preemptive algorithms.
Example:

Process  Arrival  Burst
P1       0        7
P2       2        4
P3       4        1
P4       5        4

Gantt Chart: P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16)

Avg wait time: (0 + 6 + 3 + 7) / 4 = 4 ms
FCFS wait time: (0 + 5 + 7 + 7) / 4 = 4.75 ms
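As an illustration only (not the authors' code), here is a sketch of non-preemptive SJF in Python for the table above: whenever the CPU is free, it picks the arrived job with the shortest burst. The sjf_nonpreemptive name and tuple layout are assumptions made for this example.

    # Non-preemptive SJF: when the CPU becomes free, choose the arrived job
    # with the shortest next CPU burst. Tuples are (pid, arrival, burst).
    procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]

    def sjf_nonpreemptive(procs):
        pending = sorted(procs, key=lambda p: p[1])    # order by arrival
        time, schedule, waits = 0, [], {}
        while pending:
            ready = [p for p in pending if p[1] <= time]
            if not ready:                  # CPU idles until the next arrival
                time = pending[0][1]
                continue
            job = min(ready, key=lambda p: p[2])       # shortest burst wins
            pending.remove(job)
            pid, arrival, burst = job
            waits[pid] = time - arrival
            schedule.append((pid, time, time + burst))
            time += burst
        return schedule, waits

    schedule, waits = sjf_nonpreemptive(procs)
    print(schedule)   # [('P1', 0, 7), ('P3', 7, 8), ('P2', 8, 12), ('P4', 12, 16)]
    print(sum(waits.values()) / len(waits))            # 4.0 ms average wait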
Algorithm 3: Preemptive Shortest Job First (SJF)
Characteristics:
– Like Shortest Job First, but running job can be preempted.
– Provably optimizes average wait time among preemptive algorithms.
Example:

Process  Arrival  Burst
P1       0        7
P2       2        4
P3       4        1
P4       5        4

Gantt Chart: P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16)

Average waiting time = (9 + 1 + 0 + 2) / 4 = 3 ms
Reason that SJF is provably optimal: moving a shorter process earlier always decreases the shorter process's wait time by more than it increases the longer process's wait time.
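For comparison, a rough sketch of the preemptive variant (shortest remaining time first), again not from the slides. It advances the clock one time unit at a time, which is simple but inefficient; the srtf name and data layout are invented for the example.

    # Preemptive SJF (shortest remaining time first), one time unit per step.
    procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]

    def srtf(procs):
        remaining = {pid: burst for pid, _, burst in procs}
        arrival = {pid: arr for pid, arr, _ in procs}
        finish, time = {}, 0
        while remaining:
            ready = [pid for pid in remaining if arrival[pid] <= time]
            if not ready:                  # nothing has arrived yet
                time += 1
                continue
            pid = min(ready, key=lambda p: remaining[p])   # shortest remaining
            remaining[pid] -= 1
            time += 1
            if remaining[pid] == 0:
                del remaining[pid]
                finish[pid] = time
        # wait time = turnaround time - burst time
        return {pid: finish[pid] - arr - burst for pid, arr, burst in procs}

    waits = srtf(procs)
    print(waits)                               # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2}
    print(sum(waits.values()) / len(waits))    # 3.0 ms average wait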
Implementing SJF
SJF is great, but how do we implement it?
– We don’t know a priori how long a job’s burst time is
– We have to try to predict the burst time
1. t_n = actual length of the nth CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α = a constant with 0 ≤ α ≤ 1
4. Define τ_{n+1} = α t_n + (1 − α) τ_n
Burst time prediction with exponential average
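The recurrence is only a few lines of code. Below is a hedged sketch (not from the slides); the α value, the initial guess τ0, and the burst history are example values chosen for illustration.

    # Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
    # alpha = 0 ignores recent history; alpha = 1 uses only the last burst.
    def predict_bursts(actual_bursts, alpha=0.5, tau0=10.0):
        tau = tau0                 # initial guess for the very first burst
        predictions = [tau]
        for t in actual_bursts:
            tau = alpha * t + (1 - alpha) * tau
            predictions.append(tau)
        return predictions

    # Hypothetical history of measured CPU burst lengths (ms):
    print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
    # -> [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]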
Priority Scheduling
Characteristics:
– Always schedule the ready process with highest priority
– Priority scheduling can be preemptive or non-preemptive
SJF is a special case of priority scheduling:
– process priority = the inverse of remaining CPU time
Priority Scheduling Issues
Priorities backwards or forwards?
– Unix: -20 is “highest” priority, +20 is “lowest”
– Show output of the “top” Unix program on the ITLAB server
Where do the priorities come from?
– Usually internally derived by the OS
– Sometimes externally derived by users/managers
Problem: Low-priority processes suffer from starvation; they may have to wait indefinitely.
Solution: Process aging (gradually increase priority of old processes)
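A toy sketch of aging (purely illustrative; the aging_rate, field names, and numbers are invented): a process's effective priority improves the longer it waits, so even a nice +19 batch job is eventually selected.

    # Toy aging: lower number = higher priority (Unix-style). The effective
    # priority of a waiting process improves with its time in the ready queue.
    def pick_next(ready, now, aging_rate=1.0):
        def effective(proc):
            waited = now - proc["enqueued_at"]
            return proc["base_prio"] - aging_rate * waited
        return min(ready, key=effective)

    ready = [
        {"pid": "batch", "base_prio": 19, "enqueued_at": 0},      # long wait
        {"pid": "interactive", "base_prio": 0, "enqueued_at": 98},
    ]
    print(pick_next(ready, now=100)["pid"])
    # -> "batch": 19 - 100 = -81 now beats 0 - 2 = -2, so it cannot starve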
Round-Robin
Characteristics:
– For time-sharing systems
– Similar to FCFS but preemptive
– Ready queue is a circular queue
– Define a short time quantum, e.g. 20 ms
Algorithm:
– Before starting a process, set a timer to generate an interrupt when the quantum expires
– If the CPU burst is shorter than the quantum, the process gives up the CPU voluntarily
– Otherwise the timer interrupt fires when the quantum expires:
• the interrupt causes a context switch to kernel mode
• the running process is moved to the tail of the ready queue
• the scheduler dispatches the next process in the queue
Round Robin Example [Quantum = 20]
Process  Arrival  Burst Time
P1       0        53
P2       1        17
P3       2        68
P4       3        24

Gantt Chart: P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162)

Notice the long wait times but short response times.
Issue: What's the "right" time quantum?
– Too short: too many context switches (too much overhead)
– Too long: approaches FCFS performance
– Rule of thumb: large enough to handle 80% of the CPU bursts
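A compact round-robin sketch (an illustration, not the slides' code). All four jobs are treated as queued at time 0, which gives the same schedule as the chart above because they all arrive within the first quantum; the function and variable names are made up.

    # Round-robin with a FIFO ready queue and a fixed time quantum.
    from collections import deque

    def round_robin(jobs, quantum):
        queue = deque(jobs)                # (pid, remaining burst) pairs
        time, schedule = 0, []
        while queue:
            pid, remaining = queue.popleft()
            slice_ = min(quantum, remaining)
            schedule.append((pid, time, time + slice_))
            time += slice_
            if remaining > slice_:         # quantum expired: back to the tail
                queue.append((pid, remaining - slice_))
        return schedule

    print(round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], quantum=20))
    # -> P1 0-20, P2 20-37, P3 37-57, P4 57-77, P1 77-97, P3 97-117,
    #    P4 117-121, P1 121-134, P3 134-154, P3 154-162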
Variations on scheduling: Multilevel queues
Use more than one ready queue
– e.g. a “foreground queue” for interactive programs and a “background queue” for system maintenance and batch programs
Use different scheduling algorithm for each queue
– e.g. RR for the foreground queue, FCFS for background queue
New problem: how to split time between the queues?
– Absolute priority: can cause starvation
– Time division: e.g. 80% foreground, 20% background
Variations on scheduling: Multilevel feedback queues
Don’t fix a process in a queue: let it move.
One example: several queues with different priorities
– Let I/O bound processes float upward (higher priority)
– Move CPU hogs downward (lower priority)
– Move processes that have been waiting too long gradually upward
Flexible system but complex implementation.
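A very rough sketch of the feedback rule (the number of levels, the quanta, and the promotion/demotion policy below are invented for illustration, not taken from any particular OS): a job that burns its whole quantum sinks a level, and a job that blocks early floats up.

    # Toy multilevel feedback rule: queue 0 is the highest priority.
    QUANTA = [8, 16, 32]                       # quantum per level (ms)

    def next_level(level, cpu_used, blocked_early):
        if blocked_early:                      # I/O bound: float upward
            return max(0, level - 1)
        if cpu_used >= QUANTA[level]:          # CPU hog: sink downward
            return min(len(QUANTA) - 1, level + 1)
        return level

    print(next_level(level=1, cpu_used=16, blocked_early=False))   # 2: demoted
    print(next_level(level=1, cpu_used=3,  blocked_early=True))    # 0: promoted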
What have we learned?
What CPU scheduling is
Preemptive vs. non-preemptive scheduling
How to evaluate different algorithms
Some of the important scheduling algorithms:
– FCFS
– SJF (both preemptive and non-preemptive versions)
• Provably optimal, but impossible to implement
• Can be approximated with burst time prediction
– Priority scheduling
– Round-robin