Operating Systems
Operating Systems:
Internals and Design Principles, 6/E
William Stallings
Chapter 5
Concurrency: Mutual
Exclusion and
Synchronization
Dave Bremer
Otago Polytechnic, N.Z.
©2008, Prentice Hall
Roadmap
• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Multiple Processes
• Central to the design of modern Operating
Systems is managing multiple processes
– Multiprogramming within a uniprocessor system
– Multiprocessing within a multiprocessor
– Distributed processing on multiple, distributed systems
• Big Issue is Concurrency
– Managing the interaction of all of these processes:
communication, resource sharing, synchronization,
and allocation of time to processes
Concurrency
Concurrency arises in:
• Multiple applications
– Multiprogramming allows processing time to be dynamically
shared among a number of active applications
• Structured applications
– Applications implemented as a set of concurrent processes
– Extension of modular design and structured programming
• Operating system structure
– The same structuring advantages also apply to system programs
– OS themselves are often implemented as a set of processes or
threads
Key Terms
Interleaving and Overlapping
Processes
• Earlier (Ch2) we saw that processes may
be interleaved on uniprocessors
Interleaving and Overlapping Processes
• And not only interleaved but overlapped on
multi-processors
• Both multiprogramming and multiprocessing techniques can be
viewed as examples of concurrent processing and present the same
problems
Difficulties of Concurrency
• Sharing of global resources (concurrent reads and
writes on a shared variable)
• Difficult to manage the allocation of resources
optimally (for example, an I/O channel is granted to a
process and that process is suspended before using the
channel. Should the OS lock the channel or not? This may
lead to deadlock.)
• Difficult to locate programming errors as results
are not deterministic and reproducible.
A Simple Example
(Consider the echo procedure)
char chin, chout;        /* global variables, shared by all callers */

void echo()
{
    chin = getchar();    /* get a character from the keyboard and store it in chin */
    chout = chin;        /* transfer it to chout */
    putchar(chout);      /* write the character to the display */
}

This procedure may be shared: only a single copy is loaded
into memory that is global to all applications.
A Simple Example: on a uniprocessor
multiprogramming system
P1 invokes echo and is interrupted after executing
chin = getchar(); // assume x is entered
P2 is activated and invokes echo, which runs to
completion
// assume y is entered and displayed
P1 is resumed and executes
chout = chin;
putchar(chout);
// What is the result?
A Simple Example: on a Multiprocessor
Process P1                     Process P2
.                              .
chin = getchar();              .
.                              chin = getchar();
chout = chin;                  chout = chin;
putchar(chout);                .
.                              putchar(chout);
.                              .
          (timeline: time runs downward)
Enforce Single Access
• If we enforce a rule that only one process may enter the
function at a time then:
• P1 & P2 run on separate processors
• P1 enters echo first,
– P2 tries to enter but is blocked – P2 suspends
• P1 completes execution
– P2 resumes and executes echo
• This example shows that it is necessary to protect
shared global variables / resources.
Race Condition
• A race condition occurs when
– Multiple processes or threads read and write
data items
– They do so in a way where the final result
depends on the order of execution of the
processes.
• The output depends on who finishes the
race last.
Operating System Concerns
• What design and management issues are raised
by the existence of concurrency?
• The OS must
– Keep track of various processes
– Allocate and de-allocate resources
(Processor time, memory, Files, I/O devices)
– Protect the data and resources against interference
by other processes.
– Ensure that the processes and outputs are
independent of the processing speed
Process Interaction
(e.g., accessing a printer)
Competition among
Processes for Resources
Three main control problems:
• Need for Mutual Exclusion
– Critical sections
– Enforcement of M.E. creates two additional
control problems: Deadlock and starvation
• Deadlock
• Starvation
Cooperation among Processes by Sharing
One additional control problem: data coherence
P1:
a = a + 1;
b = b + 1;
P2:
b = 2 * b;
a = 2 * a;
Concurrent execution sequence
a = a + 1;
b = 2 * b;
b = b + 1;
a = 2 * a;
Suppose we start with a = b = 1.
What are a and b at the end?
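A worked trace of the interleaved sequence above, starting from a = b = 1:

a = a + 1;   /* P1: a = 2 */
b = 2 * b;   /* P2: b = 2 */
b = b + 1;   /* P1: b = 3 */
a = 2 * a;   /* P2: a = 4 */

Result: a = 4, b = 3. Running P1 entirely before P2 gives a = b = 4, and P2 before P1 gives a = b = 3, so this interleaving leaves the shared data incoherent.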
Cooperation among Processes by Comm.
• Communication consists of messages of some sort
• Primitives for sending and receiving messages may be
provided as part of the programming language or by the
OS kernel.
• Nothing is shared, so mutual exclusion is not a problem
• The problems of deadlock and starvation remain.
• Example:
P1 repeatedly communicates with P2 or P3;
P2 and P3 both attempt to communicate with P1.
It may happen that P1 and P2 exchange information
repeatedly, while P3 is blocked waiting to communicate with
P1: no deadlock, but P3 is starved.
Requirements for Mutual Exclusion
• Only one process at a time is allowed in the
critical section for a resource
• A process that halts in its noncritical section
must do so without interfering with other
processes
• Must not result in deadlock or starvation
• (to be continued on next slide)
Requirements for Mutual Exclusion
• A process must not be delayed access to a
critical section when there is no other process
using it
• No assumptions are made about relative
process speeds or number of processes
• A process remains inside its critical section for a
finite time only
Roadmap
• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores: OS support
• Monitors: Programming language construct
• Message Passing
• Readers/Writers Problem
Disabling Interrupts
• Uniprocessors only allow interleaving
• Interrupt Disabling
– A process runs until it invokes an operating
system service or until it is interrupted
– Disabling interrupts guarantees mutual
exclusion
– Will not work in multiprocessor architecture
Pseudo-Code
while (true) {
    /* disable interrupts */;
    /* critical section */;
    /* enable interrupts */;
    /* remainder */;
}
Special Machine Instructions
• Compare&Swap Instruction
– also called a “compare and exchange
instruction”
• Exchange Instruction
Compare&Swap Instruction
int compare_and_swap (int *word, int testval, int newval)
{
    int oldval;
    oldval = *word;
    if (oldval == testval) *word = newval;
    return oldval;       /* always returns the old memory value */
}

The entire function is carried out atomically, i.e., it is not
subject to interruption.
Mutual Exclusion (fig 5.2)
(Figure: bolt is the shared variable; compare_and_swap is called with testval = 0
and newvalue = 1. While bolt = 1 a process does nothing but loop — busy waiting,
spin waiting; when bolt = 0 it can enter the C.S. A sketch follows.)
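Putting the pieces together, a minimal pseudocode sketch of this protocol (not taken verbatim from the figure; parbegin(...) is an assumed helper meaning "run the listed functions concurrently"):

int const n = /* number of processes */;
int bolt;

void P(int i)
{
    while (true) {
        while (compare_and_swap(&bolt, 0, 1) == 1)
            /* do nothing: busy (spin) waiting */;
        /* critical section */;
        bolt = 0;                /* release: let another process in */
        /* remainder */;
    }
}

void main()
{
    bolt = 0;
    parbegin(P(1), P(2), ..., P(n));
}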
Exchange instruction
void exchange (int register, int memory)
{
int temp;
temp = memory;
memory = register;
register = temp;
}
Exchange the contents of a register
with that of a memory location
Exchange Instruction (fig 5.2)
(Figure: bolt is the shared variable; keyi is local to each process. A sketch follows.)
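A corresponding pseudocode sketch of the exchange-based protocol (assuming exchange swaps the two named locations atomically, as above, and parbegin runs the processes concurrently):

int const n = /* number of processes */;
int bolt;

void P(int i)
{
    int keyi;
    while (true) {
        keyi = 1;
        while (keyi != 0)
            exchange(keyi, bolt);   /* spin until we swap out the 0 */
        /* critical section */;
        exchange(keyi, bolt);       /* put the 0 back into bolt */
        /* remainder */;
    }
}

void main()
{
    bolt = 0;
    parbegin(P(1), P(2), ..., P(n));
}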
Hardware Mutual Exclusion: Advantages
• Applicable to any number of processes on either
a single processor or multiple processors
sharing main memory
• It is simple and therefore easy to verify
• It can be used to support multiple critical
sections
Hardware Mutual Exclusion: Disadvantages
• Busy-waiting consumes processor time
• Starvation is possible when a process leaves a
critical section and more than one process is
waiting.
– Some process could indefinitely be denied access.
• Deadlock is possible
Example: P1 is in its C.S. and is interrupted. The processor is given to P2,
which has higher priority. P2 busy-waits trying to enter the C.S., and P1 is
never dispatched because it has lower priority, so neither process makes progress.
Roadmap
• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Semaphore
• Semaphore:
– An integer value used for signalling among
processes.
• Only three operations may be performed
on a semaphore, all of which are atomic:
– Initialize
– Decrement (semWait)
– Increment (semSignal)
Semaphore Primitives
• semWait: wait for s.count to become positive
• A positive value: the number of processes that can
issue semWait and continue to execute
• A zero value: the next process to issue semWait
will be blocked
• A negative value: its magnitude is the number of
processes waiting to be unblocked
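A pseudocode sketch of what the counting-semaphore primitives can look like, following the description above (the queue operations are left as comments):

struct semaphore {
    int count;
    queueType queue;
};

void semWait(semaphore s)
{
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process */;
    }
}

void semSignal(semaphore s)
{
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */;
        /* place process P on the ready list */;
    }
}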
Binary Semaphore Primitives
Strong / Weak Semaphore
• A queue is used to hold processes waiting on
the semaphore
– In what order are processes removed from the
queue?
• Strong Semaphores use FIFO
• Weak Semaphores don’t specify the order of
removal from the queue
Example of Strong Semaphore Mechanism
(Assume processes A, B, and C depend on a result from process D.
Figure: C, A, and B each issue semWait on the semaphore and block when no
result is available; D's semSignal unblocks the waiting processes in FIFO order.)
Mutual Exclusion Using Semaphores
Processes Using Semaphore
(Figure: as the semaphore is signalled, B and then C are removed from the blocked queue.)
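In code, mutual exclusion with a single semaphore looks roughly like this (a pseudocode sketch; s is initialized to 1, parbegin is assumed pseudocode as before):

const int n = /* number of processes */;
semaphore s = 1;

void P(int i)
{
    while (true) {
        semWait(s);       /* blocks while another process is in its C.S. */
        /* critical section */;
        semSignal(s);     /* unblocks one waiting process, if any */
        /* remainder */;
    }
}

void main()
{
    parbegin(P(1), P(2), ..., P(n));
}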
Producer/Consumer Problem
• General Situation:
– One or more producers are generating data and
placing these in a buffer
– A single consumer is taking items out of the buffer
one at time
– Only one producer or consumer may access the
buffer at any one time
• The Problem:
– Ensure that the Producer can't add data to a full buffer
and the Consumer can't remove data from an empty buffer
Producer/Consumer Animation
Definition of Functions
• Assume an infinite buffer b with a linear array of
elements
Producer:
while (true) {
    /* produce item v */;
    b[in] = v;
    in++;
}

Consumer:
while (true) {
    while (in <= out)
        /* do nothing */;
    w = b[out];
    out++;
    /* consume item w */;
}

in: position for storing v
out: position for consuming
(The assignments b[in] = v and w = b[out] access the shared buffer.)

Buffer
(Figure: the infinite buffer b, with out marking the next item to consume and in the next free slot.)
Incorrect Solution
(Figure annotations: the buffer accesses are the critical sections; the producer
signals the consumer when the buffer was empty prior to this append();
n is the number of elements in the buffer, n = in - out.)
Possible Scenario
(Table: a possible interleaving of producer and consumer, starting from the
initial state, that defeats the incorrect solution.)
Correct Solution
(Figure: the corrected solution; the buffer accesses remain the critical sections, marked C.S.)
Semaphores
A cleaner solution: n is now a counting semaphore whose value
equals the number of items in the buffer.
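A pseudocode sketch of such a solution (append() and take() stand for the buffer operations shown earlier: b[in] = v; in++ and w = b[out]; out++):

semaphore n = 0, s = 1;

void producer()
{
    while (true) {
        produce();
        semWait(s);
        append();         /* b[in] = v; in++ */
        semSignal(s);
        semSignal(n);     /* one more item is available */
    }
}

void consumer()
{
    while (true) {
        semWait(n);       /* wait until at least one item exists */
        semWait(s);
        take();           /* w = b[out]; out++ */
        semSignal(s);
        consume();
    }
}

The questions below refer to this producer/consumer code.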
Q1: what if semSignal(s) and semSignal(n) are interchanged in the producer's code?
Q2: what if semWait(n) and semWait(s) are interchanged in the consumer's code?
Bounded Buffer
Functions in a Bounded Buffer

Producer:
while (true) {
    /* produce item v */;
    while ((in + 1) % n == out)
        /* do nothing */;
    b[in] = v;
    in = (in + 1) % n;
}

Consumer:
while (true) {
    while (in == out)
        /* do nothing */;
    w = b[out];
    out = (out + 1) % n;
    /* consume item w */;
}

in and out: initialized to 0
n: the size of the buffer
Semaphores
(Bounded-buffer solution: semaphore e keeps track of the number of empty spaces.
A sketch follows.)
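A pseudocode sketch of the bounded-buffer solution with three semaphores (s guards the buffer, n counts full slots, e counts empty spaces; names follow the description above):

const int sizeofbuffer = /* buffer size */;
semaphore s = 1, n = 0, e = sizeofbuffer;

void producer()
{
    while (true) {
        produce();
        semWait(e);       /* wait for an empty space */
        semWait(s);
        append();
        semSignal(s);
        semSignal(n);     /* one more full slot */
    }
}

void consumer()
{
    while (true) {
        semWait(n);       /* wait for a full slot */
        semWait(s);
        take();
        semSignal(s);
        semSignal(e);     /* one more empty space */
        consume();
    }
}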
Demonstration
Animations
• Producer/Consumer
– Illustrates the operation of a producer-consumer
buffer.
• Bounded-Buffer Problem Using Semaphores
– Demonstrates the bounded-buffer consumer/producer
problem using semaphores.
Roadmap
• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Monitors
• The monitor is a programming-language
construct that provides equivalent
functionality to that of semaphores and
that is easier to control.
• Implemented in a number of programming
languages, including
– Concurrent Pascal, Pascal-Plus,
– Modula-2, Modula-3, and Java.
Chief characteristics
• Local data variables are accessible only
by the monitor
• Process enters monitor by invoking one of
its procedures
• Only one process may be executing in the
monitor at a time
Synchronization
• Synchronization is achieved by condition
variables within a monitor
– only accessible by the monitor.
• Monitor functions:
– cwait(c): Suspend execution of the calling
process on condition c
– csignal(c): Resume execution of some process
blocked after a cwait on the same condition
Structure of a Monitor
Bounded Buffer Solution Using Monitor
Solution Using Monitor
Bounded Buffer Monitor
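A pseudocode sketch of what a bounded-buffer monitor can look like (the identifiers buffer, N, append, take, notfull and notempty are illustrative):

monitor boundedbuffer;
char buffer[N];                        /* space for N items */
int nextin, nextout;                   /* buffer pointers */
int count;                             /* number of items in the buffer */
cond notfull, notempty;                /* condition variables for synchronization */

void append(char x)
{
    if (count == N) cwait(notfull);    /* buffer full: wait */
    buffer[nextin] = x;
    nextin = (nextin + 1) % N;
    count++;
    csignal(notempty);                 /* resume a waiting consumer */
}

void take(char *x)
{
    if (count == 0) cwait(notempty);   /* buffer empty: wait */
    *x = buffer[nextout];
    nextout = (nextout + 1) % N;
    count--;
    csignal(notfull);                  /* resume a waiting producer */
}

{                                      /* monitor body: runs once at start */
    nextin = 0; nextout = 0; count = 0;
}

Because only one process may be executing in the monitor at a time, the buffer variables need no further protection.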
Roadmap
• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Process Interaction
• When processes interact with one another,
two fundamental requirements must be
satisfied:
– synchronization and
– communication.
• Message Passing is one solution to the
second requirement
– Added bonus: It works with shared memory
and with distributed systems
Message Passing
• The actual function of message passing is
normally provided in the form of a pair of
primitives:
– send (destination, message)
– receive (source, message)
Synchronization
• Communication requires synchronization
– Sender must send before receiver can receive
• What happens to a process after it issues
a send or receive primitive?
– Sender and receiver may or may not be
blocking (waiting for message)
Blocking send, Blocking receive
• Both sender and receiver are blocked until
message is delivered
• Known as a rendezvous
• Allows for tight synchronization between
processes.
Non-blocking Send
• More natural for many concurrent
programming tasks.
• Nonblocking send, blocking receive
– Sender continues on
– Receiver is blocked until the requested
message arrives
• Nonblocking send, nonblocking receive
– Neither party is required to wait
Addressing
• The sending process needs to be able to specify
which process should receive the
message
– Direct addressing
– Indirect Addressing
Direct Addressing
• Send primitive includes a specific identifier
of the destination process
• Receive primitive could know ahead of
time from which process a message is
expected
• Receive primitive could use source
parameter to return a value when the
receive operation has been performed
Indirect addressing
• Messages are sent to a shared data
structure consisting of queues
• Queues are called mailboxes
• One process sends a message to the
mailbox and the other process picks up
the message from the mailbox
Indirect Process Communication
General Message Format
Mutual Exclusion Using Messages
(nonblocking send, blocking receive)
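A pseudocode sketch of how mutual exclusion can be built from these primitives: a mailbox, here called box, starts with a single token message in it (create_mailbox, null, and parbegin are assumed pseudocode helpers):

const int n = /* number of processes */;

void P(int i)
{
    message msg;
    while (true) {
        receive(box, msg);   /* blocks until the token is available */
        /* critical section */;
        send(box, msg);      /* return the token to the mailbox */
        /* remainder */;
    }
}

void main()
{
    create_mailbox(box);
    send(box, null);         /* one token: one process at a time in its C.S. */
    parbegin(P(1), P(2), ..., P(n));
}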
Producer/Consumer Messages
(nonblocking send, blocking receive)
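A pseudocode sketch of a bounded-capacity producer/consumer built on mailboxes (the mailbox names mayproduce and mayconsume are illustrative): permission messages circulate in mayproduce, data messages in mayconsume.

const int capacity = /* buffering capacity */;

void producer()
{
    message pmsg;
    while (true) {
        receive(mayproduce, pmsg);   /* wait for a free slot */
        pmsg = produce();
        send(mayconsume, pmsg);      /* hand the item to the consumer */
    }
}

void consumer()
{
    message cmsg;
    while (true) {
        receive(mayconsume, cmsg);   /* wait for an item */
        consume(cmsg);
        send(mayproduce, null);      /* free one slot */
    }
}

void main()
{
    create_mailbox(mayproduce);
    create_mailbox(mayconsume);
    for (int i = 1; i <= capacity; i++)
        send(mayproduce, null);      /* start with 'capacity' free slots */
    parbegin(producer, consumer);
}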
Roadmap
• Principles of Concurrency
• Mutual Exclusion: Hardware Support
• Semaphores
• Monitors
• Message Passing
• Readers/Writers Problem
Readers/Writers Problem
• A data area is shared among many
processes
– Some processes only read the data area,
some only write to the area
• Conditions to satisfy:
1. Multiple readers may read the file at once.
2. Only one writer at a time may write
3. If a writer is writing to the file, no reader may
read it.
The problem is to regulate the interaction of readers and writers.
Readers have Priority
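A pseudocode sketch of the readers-have-priority solution with semaphores (wsem gives writers exclusive access, x protects readcount; READUNIT and WRITEUNIT are illustrative names for the actual read and write operations):

int readcount = 0;
semaphore x = 1, wsem = 1;

void reader()
{
    while (true) {
        semWait(x);
        readcount++;
        if (readcount == 1) semWait(wsem);   /* first reader locks out writers */
        semSignal(x);
        READUNIT();
        semWait(x);
        readcount--;
        if (readcount == 0) semSignal(wsem); /* last reader lets writers in */
        semSignal(x);
    }
}

void writer()
{
    while (true) {
        semWait(wsem);
        WRITEUNIT();
        semSignal(wsem);
    }
}

As long as at least one reader is reading, wsem stays held and writers wait; this is why a steady stream of readers can starve writers.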
Writers have Priority
Writers have Priority
Message Passing
Message Passing