Transcript: Scheduling
OPERATING SYSTEMS
TANENBAUM
SECTION 2.4, 10.3
SCHEDULING
WHEN TO SCHEDULE
• Process Creation
• Run parent or child (since both are ready)
• Process Exits
• Examine ready queue
• Process Blocks (on I/O, on a semaphore, etc.)
• Examine ready queue
• I/O Interrupt
• Some I/O is complete: run the process that was running, the process whose I/O completed, or some other process
• Clock Interrupt (quantum has expired)
• Make a scheduling decision
SCHEDULER TYPES
• Non-preemptive
• Scheduled process runs until it blocks or voluntarily releases
CPU
• Preemptive
• Scheduled process runs until it blocks or quantum expires
CATEGORIES OF SCHEDULING ALGORITHMS
• Batch
  • Systems with periodic long jobs and no users waiting for quick responses
  • E.g., inventory, claims processing, interest calculation, etc.
  • Non-preemptive algorithms, or preemptive algorithms with long quanta, to reduce context switches
• Interactive
  • Favor jobs that block quickly or run to completion
  • Systems with lots of interactive users, or servers that serve multiple remote users
• Real time
  • Systems with time constraints
  • Often non-preemptive, because the processes exist only to further the current application and tend to block quickly
SCHEDULING ALGORITHM GOALS
[Figure omitted: table of scheduling algorithm goals. Tanenbaum & Bos, Modern Operating Systems, 4th ed., © 2013 Prentice-Hall, Inc. All rights reserved.]
SCHEDULING IN BATCH SYSTEMS
• First-Come, First-Served
  • Obvious drawback: one long job delays every job queued behind it
• Shortest Job First
SHORTEST JOB FIRST
NON-PREEMPTIVE
(a) Running four jobs (run times 8, 4, 4, 4) in the original order:
    avg turnaround = (8+12+16+20)/4 = 14
(b) Running them in shortest-job-first order:
    avg turnaround = (4+8+12+20)/4 = 11
Provably optimal, but only when (1) all jobs arrive simultaneously and (2) run times are known in advance
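A minimal sketch (not from the book) that reproduces the turnaround arithmetic above, assuming the four jobs of length 8, 4, 4, 4 all arrive at time 0:

  # Average turnaround time for a fixed, non-preemptive run order.
  # Assumes all jobs arrive at time 0 and run times are known in advance
  # (exactly the two SJF preconditions noted above).

  def avg_turnaround(run_times):
      finish = 0
      total = 0
      for t in run_times:
          finish += t        # this job completes after everything ahead of it
          total += finish    # turnaround = completion time - arrival time (0)
      return total / len(run_times)

  jobs = [8, 4, 4, 4]                      # original order (a)
  print(avg_turnaround(jobs))              # 14.0
  print(avg_turnaround(sorted(jobs)))      # 11.0 -- shortest job first (b)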
SCHEDULING IN INTERACTIVE SYSTEMS
• Round-Robin Scheduling
• Priority Scheduling
• Multiple Queues
• Shortest Job Next
ROUND-ROBIN SCHEDULING
PREEMPTIVE
Issue: Quantum Size
• Small quantum: the CPU spends a lot of time servicing context switches (switching memory maps, flushing and reloading the cache, etc.)
• Large quantum: degenerates to non-preemptive scheduling; CPU use is efficient, but late arrivals wait a long time (see the simulation sketch below)
Assumption: all jobs are equally important
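A small round-robin simulation to make the small-quantum overhead concrete; this is an illustration only, and the 1-unit context-switch cost is an assumption:

  # Round-robin with a charge for every context switch: a tiny quantum wastes the
  # CPU on switching, while a huge quantum degenerates toward non-preemptive FCFS.

  def rr_avg_turnaround(run_times, quantum, switch_cost=1):
      remaining = list(run_times)
      finish = [0] * len(run_times)
      clock = 0
      while any(r > 0 for r in remaining):
          for i, r in enumerate(remaining):
              if r == 0:
                  continue
              ran = min(quantum, r)
              clock += ran + switch_cost     # run the slice, then pay for the switch
              remaining[i] -= ran
              if remaining[i] == 0:
                  finish[i] = clock
      return sum(finish) / len(finish)

  jobs = [8, 4, 4, 4]
  for q in (1, 4, 100):
      print(f"quantum={q:3d}  avg turnaround={rr_avg_turnaround(jobs, q)}")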
SOLVED BY
PRIORITY SCHEDULING
Static: priority assigned at process creation
Dynamic: priority reassigned based on behavior (a tiny sketch follows)
• Reward a process that blocks on I/O; punish a process that exhausts its quantum
• Means that a process can be downgraded or upgraded based on its past behavior
• Sometimes called multi-level feedback queues
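A rough sketch of the reward/punish idea; the three priority levels and the one-step promotion/demotion rule are invented for illustration, not taken from any particular system:

  # Multi-level feedback in miniature: priority 0 is best, 2 is worst.
  # Exhausting the quantum demotes a process; blocking early promotes it.

  def adjust_priority(prio, used_full_quantum, lowest=2, highest=0):
      if used_full_quantum:
          return min(prio + 1, lowest)    # punish CPU-bound behavior
      return max(prio - 1, highest)       # reward a process that blocked on I/O

  p = 1
  p = adjust_priority(p, used_full_quantum=True)    # downgraded to 2
  p = adjust_priority(p, used_full_quantum=False)   # upgraded back to 1
  print(p)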
SHORTEST JOB NEXT
PREEMPTIVE (1)
Shortest Job First for Interactive Systems
Uses an aging factor and a recursively defined weighted sum
Let e_1 be the estimated time for the process's first run, whose actual time will be t_1
  e_1 = t_0, where t_0 is an initial guess
After the first run, we know t_1
Let e_2 be the estimated time for the second run
  e_2 = a * e_1 + (1-a) * t_1, where a is an aging factor
      = a * t_0 + (1-a) * t_1
After the second run, we know t_2
Let e_3 be the estimated time for the third run
  e_3 = a * e_2 + (1-a) * t_2
      = a^2 * t_0 + a*(1-a) * t_1 + (1-a) * t_2
After the third run, we know t_3
Let e_4 be the estimated time for the fourth run
  e_4 = a * e_3 + (1-a) * t_3
      = a^3 * t_0 + a^2*(1-a) * t_1 + a*(1-a) * t_2 + (1-a) * t_3
After run n-1, we know t_(n-1)
  e_n = a^(n-1) * t_0 + a^(n-2)*(1-a) * t_1 + ... + (1-a) * t_(n-1)
SHORTEST JOB NEXT
PREEMPTIVE (2)
Compute e_4 for the fourth run with an aging factor a = 1/2
  e_4 = a^3 * t_0 + a^2*(1-a) * t_1 + a*(1-a) * t_2 + (1-a) * t_3
      = 1/8 * t_0 + 1/8 * t_1 + 1/4 * t_2 + 1/2 * t_3
Notice
• the weight of the original guess t_0 has dropped to 1/8 (it was 1/4 in e_3)
• the weight of the first measured time t_1 has likewise dropped to 1/8
• in general, each new run halves the weight of every older term, so recent behavior dominates (t_3 carries weight 1/2)
(A runnable sketch of this estimate follows.)
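A small sketch of the aging estimate; with a = 1/2 it reproduces the 1/8, 1/8, 1/4, 1/2 weights shown above (the measured times 16, 8, 12, 4 are made-up values):

  # Exponentially weighted ("aging") estimate of the next CPU burst:
  #   e_{n+1} = a * e_n + (1 - a) * t_n

  def next_estimate(prev_estimate, measured, a=0.5):
      return a * prev_estimate + (1 - a) * measured

  t0 = 16                # initial guess (e_1 = t_0)
  measured = [8, 12, 4]  # actual run times t_1, t_2, t_3
  e = t0
  for t in measured:
      e = next_estimate(e, t)
  print(e)  # e_4 = 1/8*16 + 1/8*8 + 1/4*12 + 1/2*4 = 8.0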
SCHEDULING IN REAL-TIME SYSTEMS
• Time plays an essential role
• Categories
• Hard real time: absolute deadline
• Soft real time: missing an occasional deadline is OK
• Periodic or aperiodic: events occur regularly or irregularly
• A system might have periodic events with periods p_1, p_2, ..., p_m msec, each requiring c_1, c_2, ..., c_m msec of CPU time
  • That is, event 1 might occur every 100 msec, event 2 every 200 msec, etc.
  • These events might require 50 msec, 30 msec, etc. of CPU time
• Clearly event 1 can't require over 100 msec of CPU time, because the CPU would still be busy when event 1 came due again
  • So c_i/p_i <= 1 for every event
• In the general case the events can all be handled only if
  c_1/p_1 + c_2/p_2 + ... + c_m/p_m <= 1
• A system that meets this condition is said to be schedulable (this ignores context-switch overhead); a one-line check is sketched below
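The schedulability condition as a one-line check; the first call uses the example numbers above plus an assumed third event (every 500 msec, needing 100 msec):

  # A set of periodic events is schedulable (ignoring context-switch overhead)
  # only if the total CPU utilization sum(c_i / p_i) does not exceed 1.

  def schedulable(periods_ms, cpu_ms):
      return sum(c / p for c, p in zip(cpu_ms, periods_ms)) <= 1

  print(schedulable([100, 200, 500], [50, 30, 100]))  # 0.5 + 0.15 + 0.2 = 0.85 -> True
  print(schedulable([100, 200], [50, 120]))           # 0.5 + 0.60        = 1.1 -> False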
POLICY VS. MECHANISM
• Assumption so far: all processes in the system belong to
different users and are all competing with one another for the
CPU
• What if one process has many children? The parent understands the time-critical nature of each of its children, but the scheduler treats each child as just another process.
• Solution: parameterize the scheduler so that the parameters can be filled in by user processes
  • A system might allow a parent to request different priorities for each of its children (see the sketch below)
  • The policy is set by the user process
  • The mechanism is enforced by the scheduler
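One concrete way a parent can express its policy on Linux/UNIX is through the ordinary nice mechanism; a hedged sketch (the child names, nice values, and sleep workload are placeholders):

  # Policy vs. mechanism: the parent decides which child matters more (policy);
  # the kernel scheduler honors the resulting priorities (mechanism).
  # Unix-only: uses fork() and setpriority().
  import os
  import time

  children = {"urgent": 0, "background": 10}      # name -> requested nice value
  pids = []
  for name, nice in children.items():
      pid = os.fork()
      if pid == 0:
          time.sleep(1)                           # placeholder child workload
          os._exit(0)
      os.setpriority(os.PRIO_PROCESS, pid, nice)  # parent fills in the parameter
      pids.append(pid)

  for pid in pids:
      os.waitpid(pid, 0)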
THREAD SCHEDULING
USER LEVEL THREADS
Kernel picks (for example) process A
Process A then schedules its own threads (usually with round-robin or priority scheduling)
Relatively efficient because the threads share one address space (the memory map does not need to be switched)
Pthreads can be implemented as user-level threads
THREAD SCHEDULING
KERNEL LEVEL THREADS
Kernel schedules by thread
Requires a full context switch
SCHEDULING IN LINUX (1)
Three classes of threads for scheduling purposes:
• So-called real-time threads are just high priority
  • Priorities 0 - 99
  • Real-time FIFO: not preemptible (except by a newly ready, higher-priority real-time thread)
  • Real-time round robin: preemptible when its quantum expires
• Timesharing
  • Priorities 100 - 139
(A sketch of selecting these classes through the standard interface follows.)
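For reference, Linux exposes these classes through the sched_setscheduler() interface; a minimal sketch (selecting a real-time class normally requires root, so the call is wrapped in a try):

  # Query the scheduling class of the current process and try to switch it to
  # real-time round robin (SCHED_RR) with priority 10.
  import os

  print(os.sched_getscheduler(0))  # 0 == SCHED_OTHER, the timesharing class
  try:
      os.sched_setscheduler(0, os.SCHED_RR, os.sched_param(10))
      print("now running as a real-time round-robin thread")
  except PermissionError:
      print("real-time classes normally require root privileges")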
SCHEDULING IN LINUX (2)
O(1) Scheduler
• Ready queue organized as two chained hash tables (priority arrays)
  • Active: ready and quantum has not expired
  • Expired: ready and quantum has expired
  • Each list within an array holds one priority level, and each priority level receives a different-sized time slice
• Operation (modeled in the sketch below)
  • Select a task from the highest-priority non-empty queue in the Active array
  • If its time slice expires, move it to the Expired array
  • If it blocks, decrement its time-slice counter and put it back on the Active queue
  • Priority is constantly recalculated
    • Reward interactive processes
    • Punish CPU-intensive processes
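A toy model of the Active/Expired pair (a sketch of the idea only; the real scheduler uses a per-priority bitmap to find the highest non-empty list in constant time):

  from collections import deque

  NUM_PRIOS = 140  # 0-99 real-time, 100-139 timesharing

  class O1Sketch:
      """Two arrays of per-priority run queues, swapped when Active drains."""

      def __init__(self):
          self.active = [deque() for _ in range(NUM_PRIOS)]
          self.expired = [deque() for _ in range(NUM_PRIOS)]

      def enqueue(self, task, prio, quantum_expired=False):
          target = self.expired if quantum_expired else self.active
          target[prio].append(task)

      def pick_next(self):
          for _ in range(2):  # at most one swap
              for prio, queue in enumerate(self.active):  # lowest index = highest priority
                  if queue:
                      return prio, queue.popleft()
              # Active is empty: swap the two arrays and try again.
              self.active, self.expired = self.expired, self.active
          return None

  sched = O1Sketch()
  sched.enqueue("editor", 110)
  sched.enqueue("batch_job", 130)
  print(sched.pick_next())  # (110, 'editor')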
SCHEDULING IN LINUX (3)
CFS (Completely Fair) scheduler
• Ready queue organized as a red-black tree ordered by CPU time used
  • Children on the left have had less time on the CPU
  • Children on the right have had more time on the CPU
• Schedule the task that has had the least amount of CPU time
  • Selecting a node: O(1), because it is always the left-most node (which is cached)
  • Inserting a node: O(log n)
• Accounts for priorities
  • The virtual clock ticks more quickly for low-priority processes
  • The virtual clock ticks more slowly for high-priority processes
(A miniature model follows.)
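CFS in miniature; a sketch only: it keeps tasks in a heap rather than a red-black tree, but it shows the two key ideas, namely always run the task with the least virtual runtime and let the virtual clock tick faster for lower-priority tasks. The task names and weights are invented:

  import heapq

  def run_cfs(weights, slice_ms=10, steps=6):
      """weights: task name -> virtual-clock rate (>1 means lower priority)."""
      heap = [(0.0, name) for name in sorted(weights)]  # (vruntime, task)
      heapq.heapify(heap)
      for _ in range(steps):
          vruntime, name = heapq.heappop(heap)   # least virtual runtime so far
          print("run", name)
          vruntime += slice_ms * weights[name]   # charge the slice, scaled by priority
          heapq.heappush(heap, (vruntime, name))

  run_cfs({"interactive": 1.0, "background": 2.0})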
SCHEDULING IN LINUX (4)
CONVENTIONAL (INTERACTIVE PROCESSES)
Static Priority
• Each interactive process has its own static priority: 100 (highest) - 139 (lowest)
• A new process inherits the static priority of its parent
• The static priority determines the base time quantum, in milliseconds, according to this formula (transcribed into code below):
    if static_priority < 120
        base_time_quantum = (140 - static_priority) * 20
    else
        base_time_quantum = (140 - static_priority) * 5
• Since a higher priority is associated with a lower numerical value, high-priority processes start off with a longer quantum:
  • Highest static priority is 100: btq = (140 - 100) * 20 = 800 ms
  • Default static priority is 120: btq = (140 - 120) * 5 = 100 ms
  • Lowest static priority is 139: btq = (140 - 139) * 5 = 5 ms
• Nice values
  • Older Linux/UNIX systems used nice(), which sets priorities from -20 to 19; these correspond to static priorities 100 to 139
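The base-quantum formula transcribed directly, as a quick check of the three example values above:

  # Base time quantum (in ms) derived from the static priority, per the formula above.
  def base_time_quantum(static_priority):
      if static_priority < 120:
          return (140 - static_priority) * 20
      return (140 - static_priority) * 5

  for prio in (100, 120, 139):
      print(prio, base_time_quantum(prio))  # 800, 100, 5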
SCHEDULING IN LINUX (5)
CONVENTIONAL (INTERACTIVE PROCESSES)
Dynamic Priority
• The priority actually used in making scheduling decisions
• dynamic_priority = max(100, min(static_priority - bonus + 5, 139))
• Bonus
  • Ranges over [0 .. 10]
  • < 5 is a penalty for bad behavior
  • > 5 is a premium for good behavior
  • Related to the average sleep time of the process; a process that has blocked several times has slept more than one that has blocked only once, so it is more interactive
• Example (coded below)
  • static_priority = 130 (i.e., a low static priority)
  • bonus = 10 (the process behaved well)
  • dynamic_priority = max(100, min(130 - 10 + 5, 139)) = max(100, min(125, 139)) = 125
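And the dynamic-priority formula as a one-liner, reproducing the example above (the second call, with bonus 0, is an extra made-up case):

  # dynamic_priority = max(100, min(static_priority - bonus + 5, 139))
  def dynamic_priority(static_priority, bonus):
      return max(100, min(static_priority - bonus + 5, 139))

  print(dynamic_priority(130, 10))  # 125: a well-behaved process gets a better priority
  print(dynamic_priority(130, 0))   # 135: no bonus means a 5-level penalty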