slides - Simon Fraser University


School of Computing Science
Simon Fraser University
CMPT 300: Operating Systems I
Ch 5: CPU Scheduling
Dr. Mohamed Hefeeda
1
Chapter 5: Objectives
 Understand
 Scheduling Criteria
 Scheduling Algorithms
 Multiple-Processor Scheduling
2
Basic Concepts
 Maximum CPU utilization is obtained with multiprogramming
 CPU–I/O Burst Cycle
 Process execution consists of a cycle of CPU execution and I/O wait
 How long is the CPU burst?
3
CPU Burst Distribution
 CPU bursts vary greatly from process to process and from computer to computer
 But, in general, they tend to have the following (exponential) distribution: many short bursts, few long bursts
4
CPU Scheduler
 Selects one process from ready queue to run on CPU
 Scheduling can be
 Nonpreemptive
• Once a process is allocated the CPU, it does not release it unless:
1. it has to wait, e.g., for an I/O request or for a child to terminate
2. it terminates
 Preemptive
• OS can force (preempt) a process off the CPU at any time
– Say, to allocate the CPU to another, higher-priority process
 Which is harder to implement? And why?
 Preemptive is harder: need to maintain consistency of data shared between processes and, more importantly, of kernel data structures (e.g., I/O queues)
– Think of a preemption while the kernel is executing a system call on behalf of a process (many OSs wait for the system call to finish)
5
Dispatcher
 Scheduler: selects one process to run on CPU
 Dispatcher: allocates CPU to the selected
process, which involves:
 switching context
 switching to user mode
 jumping to the proper location (in the selected
process) and restarting it
 Dispatch latency – time it takes for the
dispatcher to stop one process and start another
 How would scheduler select a process to run?
6
Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Maximize
 Throughput – # of processes that complete their execution
per time unit
 Maximize
 Turnaround time – amount of time to execute a particular
process (time from submission to termination)
 Minimize
 Waiting time – amount of time a process has been waiting in
ready queue
 Minimize
 Response time – amount of time it takes from when a
request is submitted until the first response is produced
 Minimize
7
Scheduling Algorithms
 First Come, First Served
 Shortest Job First
 Priority
 Round Robin
 Multilevel queues
 Note: A process may have many CPU bursts, but
in the following examples we show only one for
simplicity
8
First-Come, First-Served (FCFS) Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
 Suppose processes arrive in order: P1, P2, P3
 The Gantt chart for the schedule is:
P1 (0–24) | P2 (24–27) | P3 (27–30)
 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
9
FCFS Scheduling (cont’d)
 Suppose processes arrive in the order: P2, P3, P1 (burst times 3, 3, 24)
 The Gantt chart for the schedule is:
P2 (0–3) | P3 (3–6) | P1 (6–30)
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect: short processes stuck waiting behind a long process (as in the first arrival order)
10
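The two arrival orders above can be checked with a short sketch (hypothetical helper, not from the slides; assumes all processes arrive at time 0):

```python
# Hypothetical helper: computes FCFS waiting times for processes that all
# arrive at time 0, served in the given arrival order.
def fcfs_waiting_times(bursts):
    """bursts: list of (name, burst_time) tuples in arrival order."""
    waits, t = {}, 0
    for name, burst in bursts:
        waits[name] = t      # each process waits until the CPU frees up
        t += burst
    return waits

# Order P1, P2, P3: average waiting time (0 + 24 + 27)/3 = 17
w1 = fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)])

# Order P2, P3, P1: average waiting time (6 + 0 + 3)/3 = 3
w2 = fcfs_waiting_times([("P2", 3), ("P3", 3), ("P1", 24)])
```

The convoy effect shows up directly: putting the long burst first inflates every other process's wait.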
Shortest-Job-First (SJF) Scheduling
 Associate with each process the length of its
next CPU burst. Use these lengths to schedule
the process with the shortest time
 Two schemes:
 nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst
 preemptive – if a new process arrives with CPU burst
length less than remaining time of current executing
process, preempt. This scheme is known as the
Shortest-Remaining-Time-First (SRTF)
 SJF is optimal – gives minimum average waiting
time for a given set of processes
11
Example: Non-Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
 SJF (non-preemptive):
P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16)
 Average waiting time = (0 + 6 + 3 + 7)/4 = 4
12
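The schedule above can be derived with a small sketch (helper name is mine, not the slides'): at each decision point, pick the already-arrived process with the shortest next burst.

```python
# Sketch of non-preemptive SJF scheduling.
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst); returns waiting time per process."""
    pending = sorted(procs, key=lambda p: p[1])    # order by arrival time
    waits, t = {}, 0
    while pending:
        avail = [p for p in pending if p[1] <= t]  # processes already arrived
        if not avail:
            t = pending[0][1]                      # CPU idles until next arrival
            continue
        job = min(avail, key=lambda p: p[2])       # shortest next CPU burst
        name, arrival, burst = job
        waits[name] = t - arrival                  # waited since it arrived
        t += burst                                 # runs burst to completion
        pending.remove(job)
    return waits

# Slide's example: waits are P1=0, P2=6, P3=3, P4=7, average 4
w = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```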
Example: Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
 SJF (preemptive, SRTF):
P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16)
 Average waiting time = (9 + 1 + 0 + 2)/4 = 3
13
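The preemptive schedule can be reproduced with a unit-step simulation (a sketch, assuming integer burst times; helper name is mine): each time unit, run whichever ready process has the least work remaining.

```python
# Sketch of SRTF (preemptive SJF) as a unit-step simulation.
def srtf_waits(procs):
    """procs: list of (name, arrival, burst); returns waiting time per process."""
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1                                  # CPU idle, clock ticks
            continue
        n = min(ready, key=lambda x: remaining[x])  # least remaining work
        remaining[n] -= 1                           # run one time unit
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}

# Slide's example: waits are P1=9, P2=1, P3=0, P4=2, average 3
w = srtf_waits([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```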
Estimating Length of Next CPU Burst
 Can only estimate the length
 Can be done by using the lengths of previous CPU bursts, with exponential averaging:
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α t_n + (1 − α) τ_n
14
Exponential Averaging
 If we expand the formula, we get:
τ_{n+1} = α t_n + (1 − α) α t_{n−1} + (1 − α)^2 α t_{n−2} + … + (1 − α)^{n+1} τ_0
 Examples:
 α = 0 ⇒ τ_{n+1} = τ_n ⇒ last CPU burst does not count (prediction stays at its initial, transient value)
 α = 1 ⇒ τ_{n+1} = t_n ⇒ only the last CPU burst counts (older history is ignored as stale)
15
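The recurrence τ_{n+1} = α t_n + (1 − α) τ_n is a one-liner in code (a sketch; the observed burst lengths below are hypothetical):

```python
# Sketch of the exponential-averaging burst predictor defined above.
def predict_bursts(actual, alpha=0.5, tau0=10.0):
    """actual: observed burst lengths; returns the successive predictions,
    starting with the initial guess tau0."""
    tau, preds = tau0, [tau0]
    for t in actual:
        tau = alpha * t + (1 - alpha) * tau   # tau_{n+1} = a*t_n + (1-a)*tau_n
        preds.append(tau)
    return preds

# With alpha = 0.5 and tau0 = 10 (as on the next slide), hypothetical
# observed bursts 6, 4, 6, 4 give predictions 10, 8, 6, 6, 5
preds = predict_bursts([6, 4, 6, 4])
```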
Prediction of CPU Burst Lengths: Exponential Average
 Assume α = 0.5, τ_0 = 10
16
Priority Scheduling
 A priority number (integer) is associated with each
process
 CPU is allocated to the process with the highest priority
(smallest integer  highest priority)
 Can be preemptive or nonpreemptive
 SJF is a priority scheduling where priority is the predicted
next CPU burst time
 Problem: Starvation – low priority processes may never
execute
 Solution?
 Aging: Increase the priority of a process as it waits in the
system
17
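Aging can be illustrated with a toy helper (hypothetical, not from the slides; priorities are plain integers, smaller = higher priority):

```python
# Toy illustration of aging: each time the scheduler picks a process,
# every process left waiting has its priority number decreased
# (i.e., its priority raised), so no process starves forever.
def pick_with_aging(ready, age_step=1):
    """ready: list of [priority, name] pairs, smaller number = higher priority.
    Pops and returns the highest-priority name, aging the rest in place."""
    ready.sort()                                 # smallest number first
    _, name = ready.pop(0)
    for entry in ready:
        entry[0] = max(0, entry[0] - age_step)   # aging boosts the waiters
    return name

ready = [[5, "A"], [1, "B"], [9, "C"]]
chosen = pick_with_aging(ready)   # picks "B"; A and C get boosted
```

After enough passes, even a process that started at priority 9 reaches 0 and must be chosen.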
Round Robin (RR)
 Each process gets a small unit of CPU time
(time quantum), usually 10-100 milliseconds
 After this time elapses, the process is
preempted and added to end of ready queue
 If there are n processes in ready queue and
time quantum is q, then each process gets 1/n
of the CPU time in chunks of at most q time
units at once
 No process waits more than (n-1)q time units
 Performance
 q large  FCFS
 q small  too much overhead, because of many context switches
18
Example of RR with Time Quantum = 20
Process   Burst Time
P1        53
P2        17
P3        68
P4        24
 The Gantt chart is:
P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162)
 Typically, higher average turnaround than SJF,
but better response
19
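The chart above can be reproduced with a small simulation (a sketch, assuming all four processes arrive at time 0; helper name is mine):

```python
from collections import deque

# Sketch of round-robin scheduling with a fixed time quantum.
def round_robin(procs, q):
    """procs: list of (name, burst); q: time quantum.
    Returns the schedule as (name, start, end) segments."""
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)
    chart, t = [], 0
    while queue:
        n = queue.popleft()
        run = min(q, remaining[n])   # run a full quantum or whatever is left
        chart.append((n, t, t + run))
        t += run
        remaining[n] -= run
        if remaining[n] > 0:
            queue.append(n)          # unfinished: back to end of ready queue
    return chart

chart = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
```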
Time Quantum and Context Switch Time
 Smaller q  more responsive but more context
switches (overhead)
20
Turnaround Time Varies With Time Quantum
 Turnaround time varies with quantum, then
stabilizes
 Rule of thumb for good performance:
 80% of CPU bursts should be shorter than time quantum
21
Multilevel Queue
 Ready queue is partitioned into separate queues:
 foreground (interactive)
 background (batch)
 Each queue has its own scheduling algorithm
 foreground – RR
 background – FCFS
 Scheduling must be done between queues
 Fixed priority scheduling; (i.e., serve all from
foreground then from background). Possibility of
starvation
 Time slice – each queue gets a certain amount of CPU
time which it can schedule amongst its processes; e.g.,
• 80% to foreground in RR
• 20% to background in FCFS
22
Multilevel Queue Scheduling
23
Multilevel Feedback Queue
 A process can move between various queues
 aging can be implemented this way
 Multilevel-feedback-queue scheduler defined by
the following parameters:
 number of queues
 scheduling algorithms for each queue
 method to determine when to upgrade a process
 method to determine when to demote a process
 method to determine which queue a process will
enter when that process needs service
24
Example of Multilevel Feedback Queue
 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling
 A new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job receives 8 milliseconds. If it does not finish within 8 milliseconds, it is moved to queue Q1.
 At Q1 job is again served FCFS and receives 16
additional milliseconds. If it still does not complete, it
is preempted and moved to queue Q2
25
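The three-queue setup above can be sketched as follows (a minimal sketch, assuming all jobs arrive at t=0 and never block for I/O; quanta follow the slide):

```python
from collections import deque

# Minimal sketch of the slide's multilevel feedback queue:
# Q0: RR with q=8, Q1: RR with q=16, Q2: FCFS (run to completion).
def mlfq(jobs):
    """jobs: list of (name, burst). Returns (name, start, end, queue_level)."""
    quanta = [8, 16, None]                  # None = run to completion (FCFS)
    queues = [deque(jobs), deque(), deque()]
    chart, t = [], 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name, rem = queues[level].popleft()
        run = rem if quanta[level] is None else min(quanta[level], rem)
        chart.append((name, t, t + run, level))
        t += run
        if rem - run > 0:                   # unfinished: demote one level
            queues[min(level + 1, 2)].append((name, rem - run))
    return chart

# A 30 ms job gets 8 ms in Q0, 16 more in Q1, and the last 6 in Q2
chart = mlfq([("A", 30)])
```

A real MLFQ would also preempt a low-queue job when a new one enters Q0; that is omitted here since everything arrives at t=0.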
Multilevel Feedback Queues
 Notes:
 Short processes get served faster (higher prio)  more
responsive
 Long processes (CPU bound) sink to bottom  served
FCFS  more throughput
26
Multiple-Processor Scheduling
 Multiple processors ==> divide load among them
 More complex than single CPU scheduling
 How to divide load?
 Asymmetric multiprocessor
• One master processor does the scheduling for others
 Symmetric multiprocessor (SMP)
• Each processor runs its own scheduler
• One common ready queue for all processors, or one
ready queue for each
• Win XP, Linux, Solaris, Mac OS X support SMP
27
SMP Issues
 Processor affinity
 When a process runs on a processor, some data is cached in
that processor’s cache
 Process migrates to another processor ==>
• Cache of new processor has to be re-populated
• Cache of old processor has to be invalidated ==>
• Performance penalty
 Load balancing
 BAD: One processor has too much load and another is idle
 Balance load using
• Push migration: A specific task periodically checks load on
all processors and evenly distributes it by moving (pushing)
tasks
• Pull migration: Idle processor pulls a waiting task from a
busy processor
• Some systems (e.g., Linux) implement both
28
SMP Issues (cont’d)
 Tradeoff between load balancing and processor
affinity: what would you do?
 Maybe invoke the load balancer only when the imbalance exceeds a threshold
29
Real-time Scheduling
 Hard-real time systems
 A task must be finished within a deadline
 Ex: Control of spacecraft
 Soft-real time systems
 A task is given higher priority over others
 Ex: Multimedia systems
30
Operating System Examples
 Windows XP scheduling
 Linux scheduling
31
Windows XP Scheduler
 Priority-based, preemptive scheduler
 The highest-priority thread will always run
 32 levels of priorities, each has a separate queue
 Scheduler traverses queues from highest to
lowest until it finds a thread that is ready to run
 Priorities are divided into classes:
 Real-time class (fixed): Levels 16 to 31
 Other classes (variable): Levels 1 to 15
• Priority may change (decrease or increase)
32
Windows XP Scheduler (cont’d)
 Processes are typically created as members of
NORMAL_PRIORITY_CLASS
 It gets base (NORMAL) priority of that class
 Priority decreases
 After thread’s quantum time runs out
• but never goes below the base (normal) value of its class
 Limit CPU consumption of CPU-bound threads
 Priority increases
 After a thread is released from a wait operation
 Bigger increase if thread was waiting for mouse or
keyboard
 Moderate increase if it was waiting for disk
 Also, the active window gets a priority boost
 Yields good response time
33
Windows XP Scheduler (cont’d)
 Win 32 API defines
 several priority classes, and
 within each class several relative priorities
34
Linux Scheduler
 Priority-based, preemptive scheduler with two separate ranges
 Real-time: 0 to 99
 Nice: 100 to 140
 Higher priority tasks get larger quanta (unlike Win XP,
Solaris)
35
Linux Scheduler (cont’d)
 A task is initially assigned a time slice (quantum)
 Runqueue has two arrays: active and expired
 A runnable task is eligible for the CPU if it has time left in its time slice
 If the time slice runs out, the task is moved to the expired array
 Priority increase/decrease may occur before adding to the expired array
 When there are no tasks in the active array, the expired array becomes the active array and vice versa (a swap of pointers)
36
Algorithm Evaluation
 Deterministic modeling
 Take a particular predetermined workload and define
the performance of each algorithm for that workload
 Not general
 Queuing models
 Use queuing theory to analyze algorithms
 Many (unrealistic) assumptions to facilitate analysis
 Simulation
 Build a simulator and test
• synthetic workload (e.g., generated randomly), or
• Traces collected from running systems
 Implementation
 Code it up and test!
37
Summary
 Process execution: cycle of CPU bursts and I/O bursts
 CPU burst lengths: many short bursts, and few long ones
 Scheduler selects one process from ready queue
 Dispatcher performs the switching
 Scheduling criteria (usually conflicting)
 CPU utilization, waiting time, response time,
throughput, …
 Scheduling Algorithms
 FCFS, SJF, Priority, RR, Multilevel Queues, …
 Multiprocessor Scheduling
 Processor affinity vs. load balancing
 Evaluation of Algorithms
 Modeling, simulation, implementation
 Examples: Win XP, Linux
38