CE01000-3 Operating Systems
Lecture 8
Process Scheduling continued and an
introduction to process
synchronisation
Overview of lecture
In this lecture we will be looking at:
more scheduling algorithms –
Priority scheduling,
Round Robin scheduling,
Multi-level queue
Introduction to process synchronisation
including:
critical section problem & mutual exclusion
hardware support for process synchronisation
Priority Scheduling
A priority number (an integer) is associated
with each process and the CPU is allocated to
the process with the highest priority (in some
schemes the smallest integer means the
highest priority).
Can be preemptive or nonpreemptive
SJF is effectively priority scheduling where
priority is the predicted next CPU burst time.
Priority Scheduling (Cont.)
Problem = Starvation – low priority
processes may never execute.
Solution = Aging – as time progresses
increase the priority of the process.
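A minimal sketch of priority selection combined with aging (the `pick_next` helper, the convention that a larger number means higher priority, and the aging increment of 1 are assumptions for illustration):

```python
def pick_next(ready, aging=1):
    # Nonpreemptive priority scheduling: choose the highest-priority process.
    chosen = max(ready, key=lambda p: p[1])
    ready.remove(chosen)
    # Aging: every process left waiting gains priority, so a low-priority
    # process cannot be starved indefinitely.
    for p in ready:
        p[1] += aging
    return chosen[0]

ready = [["A", 3], ["B", 1], ["C", 2]]
order = [pick_next(ready) for _ in range(3)]   # ["A", "C", "B"]
```

After "A" runs, the waiting processes age, so "C" overtakes and "B" eventually catches up rather than starving.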
Round Robin (RR)
Each process gets a small unit of CPU time
(time quantum), usually 10-100
milliseconds. After this time has elapsed,
the process is preempted and added to the
end of the ready queue.
If there are n processes in the ready queue
and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits
more than (n-1)q time units.
Round Robin (RR) (Cont.)
Performance depends on the size of q:
q large – behaves much like FCFS
q small – the overhead of context switching grows;
when q becomes similar in length to the context
switch time, little useful work gets done – the CPU
spends most of its time context switching
Example: RR with Time
Quantum = 20
Process   Burst Time
P1        53
P2        17
P3        68
P4        24

The Gantt chart is:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97  117  121  134  154  162
Typically, RR has higher average turnaround
than SJF, but better response.
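The schedule above can be reproduced with a short simulation (the `round_robin` helper is a hypothetical sketch; it assumes all processes arrive at time 0 and ignores context-switch cost):

```python
from collections import deque

def round_robin(bursts, quantum):
    # Each entry is (process name, remaining burst time).
    ready = deque(bursts)
    clock, slices = 0, []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)            # run for at most one quantum
        slices.append((name, clock, clock + run))
        clock += run
        if remaining > run:                      # unfinished: back of the queue
            ready.append((name, remaining - run))
    return slices

slices = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
# First slice ("P1", 0, 20); last slice ("P3", 154, 162), as in the Gantt chart.
```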
How a Smaller Time Quantum
Increases Context Switches
Turnaround Time Varies With
The Time Quantum
Multilevel Queue
Ready queue is partitioned into separate
queues with processes placed according to
some property e.g.
foreground (interactive) and background (batch)
queues
or system, interactive and batch queues
Each queue may have its own scheduling
algorithm, e.g. foreground – RR and
background – FCFS
Multilevel Queue (Cont.)
Scheduling must be done between the queues.
Fixed priority scheduling; i.e., serve all from
foreground then from background. But possibility
of starvation.
Time slice – each queue gets a certain amount of
CPU time which it can schedule amongst its
processes; e.g.
80% to foreground in RR; 20% to background in
FCFS
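One simple way to realise an 80%/20% split is to hand out CPU ticks to the queues in a weighted rotation (a sketch; the tick-based rotation and the queue names are assumptions, not the only way to implement this):

```python
def queue_for_tick(tick, weights=(("foreground", 4), ("background", 1))):
    # 4 ticks of every 5 (80%) go to the foreground queue,
    # 1 tick of every 5 (20%) to the background queue.
    slot = tick % sum(w for _, w in weights)
    for name, w in weights:
        if slot < w:
            return name
        slot -= w

owners = [queue_for_tick(t) for t in range(5)]
# ["foreground", "foreground", "foreground", "foreground", "background"]
```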
Multilevel Feedback Queue
(Cont.)
A process can move between the various
queues; aging can be implemented this way.
Multilevel-feedback-queue scheduler defined
by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a
process
Multilevel Feedback Queue
(Cont.)
method used to determine when to demote a
process
method used to determine which queue a process
will enter when that process needs service
Example of Multilevel
Feedback Queue
Three queues:
Q0 – time quantum 8 milliseconds
Q1 – time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served in
FCFS order. When it gains CPU, job receives 8
milliseconds. If it does not finish in 8
milliseconds, job is moved to queue Q1.
Example of Multilevel Feedback
Queue (Cont.)
At Q1 job is again served FCFS order and receives
16 additional milliseconds. If it still does not
complete, it is preempted and moved to queue Q2.
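The three-queue example can be sketched as a simulation (the `mlfq` helper is hypothetical; it assumes all jobs arrive at time 0, so preemption of lower queues by new arrivals never arises):

```python
from collections import deque

def mlfq(jobs, quanta=(8, 16)):
    # Q0: RR with quantum 8 ms; Q1: RR with quantum 16 ms; Q2: FCFS.
    queues = [deque(jobs), deque(), deque()]
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        run = remaining if level == 2 else min(quanta[level], remaining)
        clock += run
        remaining -= run
        if remaining == 0:
            finish[name] = clock
        else:
            queues[level + 1].append((name, remaining))      # demote one level
    return finish

finish = mlfq([("A", 5), ("B", 30)])
# A finishes at 5 ms; B gets 8 ms in Q0, 16 ms in Q1, and its last 6 ms in Q2.
```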
Process Synchronization
Topics:
Background
The Critical-Section Problem
Synchronization Hardware
Semaphores (a later lecture)
Classical Problems of Synchronization (a later
lecture)
Background
Concurrent access to shared data may result in
data inconsistency.
Maintaining data consistency requires
mechanisms to ensure the orderly execution of
cooperating processes.
For example 2 processes - one a producer of
data which it writes to a buffer and the other a
consumer of the data which it reads from the
buffer
Background (Cont.)
Suppose that the producer-consumer code has
a shared variable counter, initialized to 0 and
incremented each time a new item is added to
the buffer and decremented when one is
removed
Bounded-Buffer – Shared-Memory Solution
Data structures:
buffer for shared data – an array organised
as a circular buffer
in-index – position where the producer puts
the next data item
out-index – position where the consumer takes
the next data item
counter – keeps count of the number of items
in the buffer
n – the size of the buffer
Bounded-Buffer (Cont.)
Producer Process
repeat
….
produce data item
….
/* buffer full when counter == n */
while counter == n do
no-op;
buffer[in-index] = data item;
in-index = (in-index + 1) modulo size-of-buffer;
counter = counter + 1;
until false;
Bounded-Buffer (Cont.)
Consumer process
repeat
/* buffer empty when counter == 0 */
while counter == 0 do
no-op;
get next data item from buffer[out-index];
out-index = (out-index + 1) modulo size-of-buffer;
counter = counter - 1;
….
consume data item
….
until false;
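The circular-buffer bookkeeping can be sketched directly (single-threaded here, so the busy-wait loops of the pseudocode are replaced by assertions; the names follow the pseudocode above, and a buffer size of 4 is assumed):

```python
N = 4                               # assumed buffer size
buffer = [None] * N
in_index = out_index = counter = 0

def produce(item):
    global in_index, counter
    assert counter < N              # the real producer busy-waits while full
    buffer[in_index] = item
    in_index = (in_index + 1) % N   # wrap around: circular buffer
    counter += 1

def consume():
    global out_index, counter
    assert counter > 0              # the real consumer busy-waits while empty
    item = buffer[out_index]
    out_index = (out_index + 1) % N
    counter -= 1
    return item

for i in range(4):
    produce(i)
taken = [consume() for _ in range(4)]   # [0, 1, 2, 3] – FIFO order preserved
```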
Bounded-Buffer (Cont.)
counter = counter + 1; and counter =
counter - 1; must be executed
atomically.
Code for counter = counter + 1 may be
implemented at machine level by several
instructions, e.g.
move counter, reg1
add  #1, reg1
move reg1, counter
Bounded-Buffer (Cont.)
Multiprogrammed execution of processes
that each access shared data means that the
instructions by which the different processes
access the data will be interleaved in some
arbitrary order. The outcome of the execution
(the final value of the data) depends on the
order in which the accesses occur.
To prevent this we need to be able to ensure
that only one process can access the shared
data at a time.
Example interleaving
Suppose counter is initially 5; P1 is incrementing it while
P2 is decrementing it (each process has its own copy of
reg1, saved and restored on a context switch):

P1: move counter, reg1    ; P1's reg1 = 5
    -- P1 interrupted, context switch --
P2: move counter, reg1    ; P2's reg1 = 5
    -- P2 interrupted, context switch --
P1: add  #1, reg1         ; P1's reg1 = 6
P1: move reg1, counter    ; counter = 6
    -- P1 interrupted, context switch --
P2: sub  #1, reg1         ; P2's reg1 = 4
P2: move reg1, counter    ; counter = 4 – P1's increment is lost
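The interleaving above can be replayed step by step (an initial value of 5 is assumed; `p1_reg` and `p2_reg` stand for each process's private copy of reg1):

```python
counter = 5
p1_reg = counter    # P1: move counter, reg1  (reads 5)
# -- P1 interrupted, context switch to P2 --
p2_reg = counter    # P2: move counter, reg1  (also reads 5)
# -- P2 interrupted, context switch back to P1 --
p1_reg += 1         # P1: add #1, reg1
counter = p1_reg    # P1: move reg1, counter  -> counter == 6
# -- P1 interrupted, context switch to P2 --
p2_reg -= 1         # P2: sub #1, reg1
counter = p2_reg    # P2: move reg1, counter  -> counter == 4
# The increment is lost: one +1 and one -1 should have left counter at 5.
```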
The Critical-Section Problem
Assume you have N processes all competing
to use some shared data - each process has a
code segment, called critical section, in
which the shared data is accessed.
Problem – ensure that when one process is
executing in its critical section, no other
process is allowed to execute in its critical
section
Critical-Section problem
(Cont.)
Execution of critical sections must be mutually
exclusive, which requires synchronisation.
repeat
entry section
// controls entry to critical section
critical section
exit section
// manages exit from critical section
remainder section
// other code
until false;
Solution to Critical-Section
Problem
3 conditions need to be met to provide a
solution to critical section problem
1.Mutual Exclusion. If process Pi is executing in
its critical section, then no other processes can be
executing in their critical sections.
2. Progress. If no process is executing in its critical
section and some processes wish to enter their
critical sections, then the selection of the process
that will enter its critical section next cannot be
postponed indefinitely.
Solution to Critical-Section
Problem (Cont.)
3.Bounded Waiting. A bound must exist on the
number of times that other processes are allowed
to enter their critical sections after a process has
made a request to enter its critical section and
before that request is granted.
Synchronization Hardware
Many processors provide an instruction that, as
a single instruction, both tests a word for a value
and changes that value – being a single
instruction it is atomic (indivisible)
e.g. the Test-and-Set instruction – defined below as if it
were a method
boolean method TestAndSet (boolean wordToBeTested)
{
boolean oldValue = wordToBeTested; // save original value of word
wordToBeTested = true; // set value of word to true
return oldValue; // return the original value
}
Mutual Exclusion with Test-and-Set
Shared data: var lock: boolean (initially false)
Process Pi
repeat
while Test-and-Set (lock) do no-op;
critical section
lock := false;
remainder section
until false;
However, without further modification this does
not satisfy the bounded-waiting condition.
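A sketch of the spin lock above in Python (real hardware makes test-and-set a single atomic instruction; here a `threading.Lock` merely stands in for that atomicity, so this is an illustration of the scheme, not how one would normally lock in Python):

```python
import threading

class SpinLock:
    def __init__(self):
        self._word = False                # shared data: lock, initially false
        self._atomic = threading.Lock()   # stand-in for hardware atomicity

    def _test_and_set(self):
        with self._atomic:                # one indivisible step on real hardware
            old, self._word = self._word, True
            return old

    def acquire(self):
        while self._test_and_set():       # spin until TAS returns False
            pass

    def release(self):
        self._word = False

lock, total = SpinLock(), [0]

def worker():
    for _ in range(10_000):
        lock.acquire()                    # entry section
        total[0] += 1                     # critical section: shared update
        lock.release()                    # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
# total[0] == 40000: mutual exclusion preserved every update.
```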
References
Operating System Concepts. Chapter 5 & 6.