Scheduling Strategies
Scheduling Strategies
Operating Systems
Spring 2004
Class #10
Scheduling Strategies
The job of the scheduling strategy is to select
the next Ready process to enter the Running
state.
[Diagram: a new process enters the Ready List (“Ready”); the Scheduler, using its Strategy, allocates the CPU to one process (“Running”); a running process either finishes (Done), is preempted or voluntarily yields back to the Ready List, or requests a resource from the Resource Manager and waits (“Blocked”) until the resource is allocated.]
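The strategy box in this picture can be thought of as a pluggable selection function. A minimal Python sketch of the dispatch loop, with hypothetical names, might look like:

    # Sketch of the scheduler's job (hypothetical names): pick the next Ready
    # process and give it the CPU until it finishes, blocks, yields, or is preempted.
    def dispatch(ready_list, strategy):
        while ready_list:
            proc = strategy(ready_list)          # select the next Ready process
            ready_list.remove(proc)
            outcome = proc.run()                 # "done", "preempted", "yielded", or "blocked"
            if outcome in ("preempted", "yielded"):
                ready_list.append(proc)          # back to the Ready list
            # "blocked" processes return to the Ready list later, via the resource manager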
Some Scheduling Strategies
Non-Preemptive Strategies:
First-Come-First-Served (FCFS)
Priority*
Shortest Job Next (SJN)*
* These also come in a preemptive flavor.
Preemptive Strategies:
Round Robin
Multi-level Queue
Multi-level Feedback Queue
Evaluating Scheduling Strategies
Some metrics that are useful in evaluating scheduling
strategies:
CPU Utilization: The percentage of time that the CPU spends
executing code on behalf of a user.
Throughput: The average number of processes completed per time
unit.
Turnaround Time*: The total time from when a process first enters the
Ready state to the last time it leaves the Running state.
Wait Time*: The time a process spends in the Ready state before its
first transition to the Running state.
Waiting Time*: The total time that a process spends in the Ready state.
Response Time*: The average time that a process spends in the Ready state
per visit (its waiting time divided by the number of times it enters the Ready state).
* These metrics are typically averaged across a
collection of processes.
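To make these definitions concrete, a small Python sketch (a hypothetical helper, assuming every process enters the Ready state at time 0 and never blocks, as in the deterministic examples below) can compute them from a list of (process, start, end) CPU intervals:

    # Compute the per-process metrics defined above from a schedule given as
    # (process, start, end) CPU intervals, assuming all processes arrive at time 0.
    def metrics(schedule, pid):
        runs = [(s, e) for p, s, e in schedule if p == pid]
        turnaround = runs[-1][1]                  # arrival (0) to last exit from Running
        wait = runs[0][0]                         # time in Ready before the first run
        busy = sum(e - s for s, e in runs)
        waiting = turnaround - busy               # total time spent in the Ready state
        response = waiting / len(runs)            # average Ready time per visit
        return turnaround, wait, waiting, response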
Comparing Scheduling Strategies
There are many techniques for comparing
scheduling strategies:
Deterministic Modeling
Queuing Models
Simulation
Implementation
Deterministic Modeling
In deterministic modeling we’ll ignore I/O
operations and voluntary yields.
[Diagram: the same process state model as above; since I/O operations and voluntary yields are ignored, only the path New Process → Ready List → Scheduler → CPU → Done is exercised.]
FCFS Example

Process   Arrival Order   CPU Burst
p0        0               350 tu
p1        1               125 tu
p2        2               475 tu
p3        3               250 tu
p4        4               75 tu

Gantt Chart: p0: 0-350, p1: 350-475, p2: 475-950, p3: 950-1200, p4: 1200-1275

Throughput = 5 jobs / 1275 tu ≈ 0.004 jobs/tu
Turnaround = (350 + 475 + 950 + 1200 + 1275) tu / 5 jobs = 850 tu/job
Wait = (0 + 350 + 475 + 950 + 1200) tu / 5 jobs = 595 tu/job
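A short Python sketch (hypothetical, under the same no-I/O assumption) reproduces these FCFS numbers from the burst times:

    # FCFS under deterministic modeling: run the jobs in arrival order, back to back.
    bursts = [350, 125, 475, 250, 75]              # p0..p4
    starts, t = [], 0
    for b in bursts:
        starts.append(t)
        t += b
    finishes = [s + b for s, b in zip(starts, bursts)]
    print(len(bursts) / t)                         # throughput ~ 0.004 jobs/tu
    print(sum(finishes) / len(bursts))             # turnaround = 850 tu/job
    print(sum(starts) / len(bursts))               # wait = 595 tu/job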
SJN Example

Process   Arrival Order   CPU Burst
p0        0               350 tu
p1        1               125 tu
p2        2               475 tu
p3        3               250 tu
p4        4               75 tu

Gantt Chart: p4: 0-75, p1: 75-200, p3: 200-450, p0: 450-800, p2: 800-1275

Wait = (0 + 75 + 200 + 450 + 800) tu / 5 jobs = 305 tu/job
Turnaround = (75 + 200 + 450 + 800 + 1275) tu / 5 jobs = 560 tu/job

SJN:
Guaranteed minimum average wait time.
May result in starvation of large jobs.
Need to know the CPU burst times.
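The same sketch applied to SJN simply runs the bursts in ascending order:

    # SJN under deterministic modeling: run the jobs shortest burst first.
    bursts = sorted([350, 125, 475, 250, 75])      # 75, 125, 250, 350, 475
    starts, t = [], 0
    for b in bursts:
        starts.append(t)
        t += b
    print(sum(starts) / len(bursts))               # wait = 305 tu/job
    print(sum(s + b for s, b in zip(starts, bursts)) / len(bursts))   # turnaround = 560 tu/job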
RR Example (w/ 50 tu Time Slice)

Process   Arrival Order   CPU Burst
p0        0               350 tu
p1        1               125 tu
p2        2               475 tu
p3        3               250 tu
p4        4               75 tu

Gantt Chart (50 tu time slices, no overhead):
p0: 0-50, p1: 50-100, p2: 100-150, p3: 150-200, p4: 200-250,
p0: 250-300, p1: 300-350, p2: 350-400, p3: 400-450, p4: 450-475,
p0: 475-525, p1: 525-550, p2: 550-600, p3: 600-650,
p0: 650-700, p2: 700-750, p3: 750-800,
p0: 800-850, p2: 850-900, p3: 900-950,
p0: 950-1000, p2: 1000-1050,
p0: 1050-1100, p2: 1100-1275 (runs alone to completion)

Wait = (0 + 50 + 100 + 150 + 200) tu / 5 jobs = 100 tu/job
Waiting(p0) = (0 + 200 + 175 + 125 + 100 + 100 + 50) tu = 750 tu
Response(p0) = Waiting(p0) / 7 visits = 750 tu / 7 visits ≈ 107 tu/visit
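A small round-robin simulation in Python (a hypothetical sketch: 50 tu quantum, no overhead, all jobs arriving at time 0) reproduces the figures above:

    # Round robin with a 50 tu time slice and no overhead; all jobs arrive at time 0.
    from collections import deque

    def round_robin(bursts, quantum=50):
        queue = deque(enumerate(bursts))           # (pid, remaining burst time)
        runs, t = [], 0                            # runs collects (pid, start, end)
        while queue:
            pid, left = queue.popleft()
            slice_ = min(quantum, left)
            runs.append((pid, t, t + slice_))
            t += slice_
            if left > slice_:
                queue.append((pid, left - slice_)) # unfinished: back to the ready queue
        return runs

    runs = round_robin([350, 125, 475, 250, 75])
    first = {}
    for pid, s, e in runs:
        first.setdefault(pid, s)
    print(sum(first.values()) / len(first))        # wait = 100 tu/job
    p0 = [(s, e) for pid, s, e in runs if pid == 0]
    waiting_p0 = p0[-1][1] - sum(e - s for s, e in p0)
    print(waiting_p0, waiting_p0 / len(p0))        # waiting(p0) = 750 tu, response(p0) ~ 107 tu/visit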
RR Example (w/ 50 tu Time Slice and 10 tu Overhead)

Gantt Chart (10 tu of scheduling overhead precedes every slice after the first):
p0: 0-50, p1: 60-110, p2: 120-170, p3: 180-230, p4: 240-290,
p0: 300-350, p1: 360-410, p2: 420-470, p3: 480-530, p4: 540-565,
p0: 575-625, p1: 635-660, p2: 670-720, p3: 730-780,
p0: 790-840, p2: 850-900, p3: 910-960,
p0: 970-1020, p2: 1030-1080, p3: 1090-1140,
p0: 1150-1200, p2: 1210-1260,
p0: 1270-1320, p2: 1330-1380, p2: 1390-1440, p2: 1450-1500, p2: 1510-1535

CPU Utilization = (350 + 125 + 475 + 250 + 75) tu / 1535 tu × 100 = 1275 / 1535 × 100 ≈ 83%

Throughput, turnaround, wait, waiting, and response
time calculations must now include the overhead.
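With the per-switch overhead included (a hypothetical extension of the previous sketch), the utilization figure falls out directly:

    # RR with a 50 tu slice plus 10 tu of overhead on every dispatch after the first;
    # only the busy time and total elapsed time are computed here.
    from collections import deque

    def rr_with_overhead(bursts, quantum=50, overhead=10):
        queue = deque(enumerate(bursts))
        t, switches = 0, -1                        # the first dispatch costs nothing
        while queue:
            pid, left = queue.popleft()
            switches += 1
            slice_ = min(quantum, left)
            t += slice_
            if left > slice_:
                queue.append((pid, left - slice_))
        return t, t + switches * overhead          # busy time, total elapsed time

    busy, total = rr_with_overhead([350, 125, 475, 250, 75])
    print(busy, total, 100 * busy / total)         # 1275, 1535, ~83%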
Multi-Level Queues
[Diagram: a new process enters one of N ready lists, Ready List0 through Ready ListN-1; the Scheduler allocates the CPU from these lists, and a preempted or voluntarily yielding process returns to its list until it is Done.]
For i < j, all processes at level i run before any process at level j.
Each list may use its own strategy.
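A sketch of the selection rule in Python (hypothetical names): always serve the highest-priority non-empty list, letting that list apply its own strategy:

    # Multi-level queue selection: scan the ready lists from level 0 upward and
    # let the first non-empty list pick the next process with its own strategy.
    def select_next(ready_lists, strategies):
        for level, ready in enumerate(ready_lists):
            if ready:
                return strategies[level](ready)    # e.g. FCFS at one level, RR at another
        return None                                # nothing is ready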
Linux Scheduling
Linux uses a multi-level queue scheduling strategy:
[Diagram: a new process enters one of three queues, FIFO, RR, or OTHER; the Scheduler allocates the CPU from these queues, and a preempted or voluntarily yielding thread returns to its queue until it is Done.]
All threads in FIFO run before any in RR, which in turn run before any in
OTHER.
Within each queue a priority scheme is used with higher priority
threads running first.
User threads run in the OTHER queue.
All queues are preemptive.
Linux OTHER Queue Scheduling
The OTHER queue uses the following strategy:
Each thread, i, has a priority pi
Every new thread is given a default priority, K.
A countdown timer is used to create time slices.
On each timer interrupt the priority of the running thread is
decremented (pi--).
• When a thread’s priority reaches 0, it is blocked.
The highest priority thread is selected to run for the next time slice.
• If no threads are available (i.e. all threads are blocked on a resource or
because they have 0 priority), then a recrediting operation is
performed.
• During recrediting, every thread in the system is assigned a new
priority using the following formula: pi = (pi + K) / 2
• Threads that were blocked because their priority reached 0 now
return to the ready queue.
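A compact Python sketch of this countdown-and-recredit scheme (hypothetical names, and an assumed default priority K = 20):

    # Sketch of the OTHER-queue strategy described above: priorities count down as
    # threads run; when nothing runnable remains, a recrediting pass is performed.
    K = 20                                            # assumed default priority for new threads

    def pick_next(priority, blocked):
        # priority: dict thread -> current priority; blocked: set of threads waiting on resources
        runnable = [t for t in priority if priority[t] > 0 and t not in blocked]
        if not runnable:                              # all at 0 priority or blocked on a resource
            for t in priority:                        # recredit every thread in the system
                priority[t] = (priority[t] + K) // 2  # formula as reconstructed above
            runnable = [t for t in priority if priority[t] > 0 and t not in blocked]
        if not runnable:
            return None                               # everything is still blocked on a resource
        t = max(runnable, key=lambda x: priority[x])  # highest priority thread runs next
        priority[t] -= 1                              # decremented on each timer interrupt
        return t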
Multi-Level Feedback Queues
[Diagram: a new process enters one of N ready lists, Ready List0 through Ready ListN-1; the Scheduler allocates the CPU, and a preempted or voluntarily yielding process re-enters a ready list, possibly at a different level, until it is Done.]
For i < j, all processes at level i run
before any process at level j.
Processes are inserted into queues by
priority.
Process priorities are updated
dynamically.
Each queue typically uses a RR
strategy.
Popular in Modern OS:
Windows
BSD
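One common feedback rule, shown here as an illustrative Python sketch (this particular promotion/demotion policy is an assumption, not taken from the slides): a process that uses its whole time slice drops one level, while one that yields early rises one level:

    # Illustrative feedback rule: a process that used its full slice is demoted one
    # level (lower priority); one that yielded early is promoted one level.
    def requeue(ready_lists, proc, level, used_full_slice):
        if used_full_slice:
            level = min(level + 1, len(ready_lists) - 1)   # looks CPU bound
        else:
            level = max(level - 1, 0)                      # looks I/O bound
        ready_lists[level].append(proc)
        return level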
Scheduling Simulations
Scheduling simulations account for several
important factors that are frequently ignored
by deterministic modeling:
Scheduling Overhead
I/O Operations
Process Profiles:
CPU Bound
I/O Bound
Variable process arrival times.
Project
Implement the RR and Linux OTHER
scheduling strategies and compare their
performance on:
I/O Bound Processes
CPU Bound Processes
A mix of CPU and I/O Bound Processes