Lecture 7 - Suraj @ LUMS
Thursday, June 15, 2006
Confucius says: He who play in
root, eventually kill tree.
1
telnet 203.128.0.236
instead of
telnet chand.lums.edu.pk
from outside LUMS
2
Another example
3
FCFS
Simplest algorithm – easy to implement
When a running process blocks, it is
placed at the end of the queue, like a
newly arrived process
Non-preemptive
Does not emphasize throughput – long
processes are allowed to monopolize the
CPU.
4
FCFS
Suffers from convoy effect
Penalizes short processes following long ones
Average WT varies if process CPU burst
times vary greatly
Not suitable for time sharing systems
Tends to favor CPU-bound over I/O-bound
processes
5
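The convoy effect can be seen numerically: with all processes arriving at time 0, a process's FCFS waiting time is the sum of the bursts queued ahead of it. The burst values below are illustrative.

```python
def fcfs_waits(bursts):
    """Waiting time of each process when all arrive at time 0
    and run to completion in queue order (FCFS)."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)   # this process waits for everything before it
        t += b
    return waits

# Convoy effect (assumed bursts): one long CPU burst ahead of two
# short ones, versus the short ones first.
long_first = fcfs_waits([24, 3, 3])   # waits 0, 24, 27 -> average 17
short_first = fcfs_waits([3, 3, 24])  # waits 0, 3, 6   -> average 3
print(sum(long_first) / 3, sum(short_first) / 3)
```

The same three bursts give an average wait of 17 or 3 depending only on queue order, which is why short processes following long ones are penalized.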
SRTN
Starvation possible
Throughput vs. turnaround time tradeoff
Introduces context-switching overhead
Assumes burst sizes are known in advance
and all processes are available
6
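The SRTN policy above can be sketched tick by tick: at every time unit, run the arrived process with the least remaining time. The workload below (arrival/burst values) is illustrative, not from the slides.

```python
# Preemptive shortest-remaining-time-next (SRTN) on an assumed workload:
# (name, arrival, burst). At each tick, run the arrived process with the
# least remaining time; ties go to the earlier-listed process.
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
remaining = {n: b for n, a, b in procs}
finish, t = {}, 0
while remaining:
    ready = [n for n, a, b in procs if a <= t and n in remaining]
    if not ready:
        t += 1
        continue
    n = min(ready, key=lambda n: remaining[n])   # least remaining time
    remaining[n] -= 1
    t += 1
    if remaining[n] == 0:
        finish[n] = t
        del remaining[n]

waits = {n: finish[n] - a - b for n, a, b in procs}
print(sum(waits.values()) / 4)   # average waiting time: 6.5
```

Note that P1 is preempted at t = 1 when P2 arrives with a shorter remaining time, illustrating both the context-switching cost and the starvation risk for long bursts.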
Priority Scheduling
A priority number (integer) is associated with
each process
The CPU is allocated to the process with the
highest priority (here, smallest integer =
highest priority; the convention may differ
across systems).
Can be preemptive or non-preemptive
SJF is priority scheduling where the priority is
the predicted length of the next CPU burst.
7
Example
Process   Arrival Time   Burst Time   Priority
P1        0              10           3
P2        1              1            1
P3        2              2            3
P4        3              1            4
P5        4              5            2
8
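Non-preemptive priority scheduling over the table above (smaller number = higher priority, as stated on the previous slide) can be sketched as:

```python
# Non-preemptive priority scheduling on the example table.
procs = {  # name: (arrival, burst, priority); smaller priority wins
    "P1": (0, 10, 3), "P2": (1, 1, 1), "P3": (2, 2, 3),
    "P4": (3, 1, 4), "P5": (4, 5, 2),
}

time, done, wait = 0, [], {}
while len(done) < len(procs):
    ready = [p for p, (a, b, pr) in procs.items()
             if a <= time and p not in done]
    if not ready:
        time += 1
        continue
    # Highest priority first; arrival time breaks ties (FCFS).
    p = min(ready, key=lambda p: (procs[p][2], procs[p][0]))
    a, b, pr = procs[p]
    wait[p] = time - a          # non-preemptive: runs to completion
    time += b
    done.append(p)

print(done)                     # execution order: P1 P2 P5 P3 P4
print(sum(wait.values()) / 5)   # average waiting time: 9.0
```

P1 is alone at time 0 and monopolizes the CPU for 10 units despite its mediocre priority; only then do the higher-priority arrivals get served.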
Priority Scheduling
Problem: Starvation – low-priority processes
may never execute.
Solution: Aging – as time progresses, increase
the priority of waiting processes.
9
Round Robin (RR)
Each process gets a small unit of CPU time
(time quantum), usually 10-100
milliseconds. After this time has elapsed,
the process is preempted and added to the
end of the ready queue.
If there are n processes in the ready queue
and the time quantum is q, then each
process gets 1/n of the CPU time in chunks
of at most q time units at once. No process
waits more than (n-1)q time units.
10
Round Robin (RR)
Performance depends on the quantum q:
q large – RR degenerates to FIFO (FCFS)
q small – more context switches; q must be
large relative to the context-switch time,
otherwise the overhead is too high.
11
Example: RR with Time Quantum = 20
Process   Burst Time
P1        53
P2        17
P3        68
P4        24
Typically, higher average turnaround
than SJF, but better response.
12
The Gantt chart is:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97   117  121  134  154  162
13
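The chart above can be reproduced with a short simulation, assuming all four processes arrive at time 0 in the listed order:

```python
from collections import deque

# Round robin with quantum q = 20, reproducing the Gantt chart.
bursts = {"P1": 53, "P2": 17, "P3": 68, "P4": 24}
q = 20
queue = deque(bursts)          # ready queue: all arrive at time 0
remaining = dict(bursts)
time, chart = 0, []
while queue:
    p = queue.popleft()
    run = min(q, remaining[p])           # at most one quantum
    chart.append((p, time, time + run))
    time += run
    remaining[p] -= run
    if remaining[p] > 0:
        queue.append(p)                  # preempted: back to the tail

print([p for p, start, end in chart])
print(time)                              # total time: 162
```

The final slice (P3 from 154 to 162) is shorter than the quantum because P3 finishes; that matches the chart's last interval.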
How a Smaller Time Quantum Increases Context Switches
14
Multilevel Queue
Ready queue is partitioned into separate
queues:
foreground (interactive)
background (batch)
Each queue has its own scheduling
algorithm
foreground – RR
background – FCFS
15
Multilevel Queue
Scheduling must be done between the
queues
Fixed priority scheduling (i.e., serve all
from foreground, then from background).
Possibility of starvation.
Time slice – each queue gets a certain
amount of CPU time which it can schedule
amongst its processes; e.g., 80% to the
foreground queue (RR)
and 20% to the background queue (FCFS)
16
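The time-slice split between queues can be sketched as a repeating cycle. The sketch below assumes a 10-tick cycle (8 ticks foreground, 2 ticks background, matching the 80/20 split); the function name and workload are illustrative.

```python
from collections import deque

def schedule(foreground, background, cycles):
    """Give 8 of every 10 ticks to the foreground RR queue and
    2 to the background FCFS queue (assumed 10-tick cycle)."""
    fg, bg = deque(foreground), deque(background)
    trace = []
    for _ in range(cycles):
        for _ in range(8):        # foreground share: RR, 1-tick quantum
            if fg:
                p = fg.popleft()
                trace.append(p)
                fg.append(p)      # rotate to the tail (round robin)
        for _ in range(2):        # background share: FCFS
            if bg:
                trace.append(bg[0])   # head keeps running until done
    return trace

trace = schedule(["A", "B"], ["X"], 1)
print(trace)   # A and B alternate for 8 ticks, then X gets 2 ticks
```

Per 10-tick cycle the background process X gets exactly 2 ticks (20%), so it makes progress but can never starve the interactive queue.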
Multilevel Queue Scheduling
17
Multilevel Feedback Queue
A process can move between the various
queues; aging can be implemented this
way
18
Multilevel Feedback Queue
Multilevel-feedback-queue scheduler
defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a
process
method used to determine when to demote a
process
method used to determine which queue a
process will enter when that process needs
service
19
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS.
When it gains CPU, job receives 8 milliseconds. If
it does not finish in 8 milliseconds, job is moved to
queue Q1.
At Q1 job is again served FCFS and receives 16
additional milliseconds. If it still does not complete,
it is preempted and moved to queue Q2.
20
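The three-queue scheme above can be sketched for the simplified case where all jobs arrive at time 0, so no inter-queue preemption occurs. The burst values are made up for illustration.

```python
from collections import deque

def mlfq(bursts):
    """Multilevel feedback queue sketch: Q0 is RR q=8, Q1 is RR q=16,
    Q2 is FCFS. All jobs arrive at time 0, so each queue fully drains
    before the next one runs. Returns completion times."""
    finish, t = {}, 0
    q0 = deque(bursts.items())
    q1, q2 = deque(), deque()
    for queue, quantum, nxt in ((q0, 8, q1), (q1, 16, q2), (q2, None, None)):
        while queue:
            name, rem = queue.popleft()
            run = rem if quantum is None else min(quantum, rem)
            t += run
            if run == rem:
                finish[name] = t
            else:
                nxt.append((name, rem - run))   # demote to the next queue
    return finish

finish = mlfq({"A": 30, "B": 10, "C": 5})
print(finish)   # C finishes in Q0, B in Q1, A drops all the way to Q2
```

Short job C (burst 5) completes within its first 8-ms quantum and never leaves Q0, while the long job A is demoted twice; this is how the scheme approximates SJF without knowing burst lengths in advance.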
Multilevel Feedback Queues
21
Multilevel feed back queue example
Process   Arrival Time   Burst Time
P1        0              17
P2        12             25
P3        28             8
P4        36             32
P5        46             18
22
Multilevel feed back queue example
Multilevel feedback queue scheduling with three
queues Q1, Q2, Q3.
The scheduler first executes processes in Q1, which
is given a time quantum of 8 ms. If a process does
not finish within this time, it is moved to the tail of Q2.
The scheduler executes processes in Q2 only if Q1 is
empty. The process at the head of Q2 is given a
quantum of 16ms. If it does not complete, it is
preempted and put in Q3.
Processes in Q3 run on an FCFS basis, only when
Q1 and Q2 are empty.
A process that arrives in Q1 will preempt a process
in Q2; a process that arrives in Q2 will preempt a
process in Q3.
23
THREAD SCHEDULING
User-level threads with a 50-msec process quantum
and threads that run 5 msec per CPU burst
24
User-level threads with a 50-msec process quantum
and threads that run 5 msec per CPU burst
25
Kernel-level threads with a 50-msec process quantum
and threads that run 5 msec per CPU burst
26
Kernel-level threads with a 50-msec process quantum
and threads that run 5 msec per CPU burst
27
Threads
Goal for threads: Allow each to use
blocking calls but prevent a blocked
thread from affecting other threads.
Threads in user space: Conflict with this
goal.
One compelling reason for threads in
user space: they work with existing
operating systems
28
Threads
System calls can be made non-blocking
select system call
• checking code: jacket / wrapper
Requires changes to the system-call library
An inelegant solution that conflicts with our goal
Changing the semantics of calls means changing
existing user programs
29
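The jacket/wrapper idea above checks first whether a call would block, and only then issues it. A minimal Python sketch using `select` on a pipe (the thread switch is only simulated by returning a sentinel; `nonblocking_read` is an illustrative name, not a real API):

```python
import os
import select

def nonblocking_read(fd, nbytes):
    """Jacket/wrapper around read(): use select() with a zero timeout
    to check whether data is available. A real user-level thread
    library would switch to another thread instead of returning None."""
    ready, _, _ = select.select([fd], [], [], 0)
    if not ready:
        return None   # call would block: let the scheduler run another thread
    return os.read(fd, nbytes)

r, w = os.pipe()
first = nonblocking_read(r, 64)    # pipe empty -> would block -> None
os.write(w, b"hello")
second = nonblocking_read(r, 64)   # data ready -> safe to read
print(first, second)
```

The wrapper never lets `read()` block the whole process, but it changes the call's semantics (a sentinel instead of waiting), which is exactly the inelegance the slide complains about.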
We want:
Combine the advantages of user-level threads
with those of kernel threads.
We want good performance and flexibility, but
without having to make special non-blocking
system calls or check for conditions in advance.
30
Scheduler Activations
Many-to-many model: user threads are
multiplexed onto kernel threads.
Main idea:
Avoid unnecessary transitions between user
and kernel space
• If a thread is waiting locally for another one,
there is no need to involve the kernel
• Some number of virtual processors assigned to
each process by the kernel (LWP: data structure
between user and kernel threads)
31
Scheduler Activations
• Some number of virtual processors
assigned to each process by the kernel
(LWP: data structure between user and
kernel threads)
• LWPs can be requested or released by each
process
• User process can schedule user threads
onto available virtual processors.
32
Scheduler Activations
When the kernel sees that a thread has blocked,
it informs the process's run-time system by
starting it at a well-known address (an upcall).
Now the process can reschedule its threads.
When the data for the blocked thread becomes
available, the kernel makes another upcall.
The process then decides whether to run the
previously blocked thread or put it in the ready
queue.
33
Scheduler Activations
CPU-bound process: maybe one LWP
I/O-bound process: multiple LWPs
• One LWP for each concurrent blocking
system call
34
Thread Scheduling
• Many-to-many model: the thread library
schedules user-level threads onto available
LWPs (process-contention scope, PCS)
• Decision among threads of the same process
• The kernel decides which kernel thread to
schedule onto a CPU (system-contention
scope, SCS)
• One-to-one model systems use only SCS
• Windows, Linux, Solaris 9
35
Scheduling in Unix - other versions also possible
Designed to provide good response to
interactive processes
Uses multiple queues
Each queue is associated with a range of
non-overlapping priority values
36
Scheduling in Unix - other versions also possible
Processes executing in user mode have
positive values
Processes executing in kernel mode
(doing system calls) have negative values
Negative values have the highest priority;
large positive values have the lowest
37
Scheduling in Unix
Only processes that are in memory and ready
to run are located on queues
The scheduler searches the queues starting at
the highest priority
The first process on that queue is chosen and
started; it runs for one time quantum (say
100 ms) or until it blocks
If the process uses up its quantum, it is placed
at the end of its queue
Processes within the same priority range share
the CPU in round-robin fashion
38
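The queue search described above can be sketched as a scan over priority bands. Everything here (band values, process names) is illustrative; the point is only that smaller bands, including the negative kernel-mode bands, always win.

```python
from collections import deque

# Toy version of the classic Unix scheduler described above.
# Queues are keyed by priority band; smaller value = higher priority,
# and negative bands correspond to processes in kernel mode.
QUANTUM = 100  # ms, illustrative

def pick_next(queues):
    """Scan bands from highest priority (smallest value) and take the
    process at the head of the first non-empty queue; processes within
    a band would share the CPU round-robin."""
    for band in sorted(queues):
        if queues[band]:
            return queues[band].popleft()
    return None

queues = {-2: deque(), 0: deque(["editor"]), 3: deque(["batch1", "batch2"])}
queues[-2].append("disk_io_done")   # woke up inside a system call

first = pick_next(queues)    # kernel-mode process wins
second = pick_next(queues)   # then the interactive user process
print(first, second)
```

A process that just finished disk I/O sits in a negative band, so it is dispatched before any user-mode process; this is how the design gives good response to interactive and I/O-bound work.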