Module 7: Process Synchronization
Lecture 10
Chapter 6: Process Synchronization
Modified from Silberschatz, Galvin and Gagne & Stallings
Chapter 6: Process Synchronization
Background
The Critical-Section Problem
Peterson’s Solution
Synchronization Hardware
Semaphores
Classic Problems of Synchronization
Monitors
Synchronization Examples
Atomic Transactions
Objectives
To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data
To present both software and hardware solutions to the critical-section problem
To introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity
Background
Concurrent access to shared data may result in data inconsistency
concurrency is a fundamental part of O/S design
concurrency includes
communication among processes/threads
sharing of, and competition for system resources
cooperative processing of shared data
synchronization of process/thread activities
organized CPU scheduling
solving deadlock and starvation problems
Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes
Process Interaction & Concurrency
[figure: hardware view vs. software view of concurrency]
Concurrency can be viewed at different levels:
multiprogramming — interaction between multiple processes running on one CPU
(pseudo-parallelism)
multithreading — interaction between multiple threads running in one process
multiprocessors — interaction between multiple CPUs running multiple
processes/threads (real parallelism)
multicomputers — interaction between multiple computers running distributed
processes/threads
the principles of concurrency are basically the same in all of these categories
Process Interaction & Concurrency
Whether processes or threads: three basic interactions
o processes unaware of each other: they must use shared resources independently, without interfering, and leave them intact for the others [figure: P1 and P2 compete for a shared resource]
o processes indirectly aware of each other: they work on common data and build some result together via the data [figure: P1 and P2 share common data]
o processes directly aware of each other: they cooperate by communicating, e.g., exchanging messages [figure: P1 and P2 exchange messages]
Race Condition
Insignificant race condition in the shopping scenario
there is a “race condition” if the outcome depends on the order of execution
Molay, B. (2002) Understanding
Unix/Linux Programming (1st Edition).
> ./multi_shopping
grabbing the salad...
grabbing the milk...
grabbing the apples...
grabbing the butter...
grabbing the cheese...
>
> ./multi_shopping
grabbing the milk...
grabbing the butter...
grabbing the salad...
grabbing the cheese...
grabbing the apples...
>
Multithreaded shopping diagram and possible outputs
Race Condition
Insignificant race condition in the shopping scenario
the outcome depends on the CPU scheduling or “interleaving” of the threads
(separately, each thread always does the same thing)
> ./multi_shopping
grabbing the salad...
grabbing the milk...
grabbing the apples...
grabbing the butter...
grabbing the cheese...
>
[figure: threads A and B scheduled on one CPU]
> ./multi_shopping
grabbing the milk...
grabbing the butter...
grabbing the salad...
grabbing the cheese...
grabbing the apples...
>
Race Condition
Insignificant race condition in the shopping scenario
the CPU switches from one process/thread to another, possibly on the basis of a
preemptive clock mechanism
> ./multi_shopping
grabbing the salad...
grabbing the milk...
grabbing the apples...
grabbing the butter...
grabbing the cheese...
>
[figure: threads A and B alternately grabbing the salad, milk, apples, butter, and cheese on one CPU]
Thread view expanded in real execution time
Race Condition
Significant race conditions in I/O & variable sharing
char chin, chout;
void echo()
{
do {
1 chin = getchar();
2 chout = chin;
3 putchar(chout);
}
while (...);
}
> ./echo
Hello world!
Hello world!
[figure: lucky CPU scheduling of threads A and B]
Single-threaded echo
char chin, chout;
void echo()
{
do {
4 chin = getchar();
5 chout = chin;
6 putchar(chout);
}
while (...);
}
> ./echo
Hello world!
Hello world!
Multithreaded echo (lucky)
Race Condition
Significant race conditions in I/O & variable sharing
char chin, chout;
void echo()
{
do {
1 chin = getchar();
5 chout = chin;
6 putchar(chout);
}
while (...);
}
> ./echo
Hello world!
Hello world!
[figure: unlucky CPU scheduling of threads A and B]
Single-threaded echo
char chin, chout;
void echo()
{
do {
2 chin = getchar();
3 chout = chin;
4 putchar(chout);
}
while (...);
}
> ./echo
Hello world!
ee....
Multithreaded echo (unlucky)
Race Condition
Significant race conditions in I/O & variable sharing (chin and chout changed to local variables)
void echo()
{
char chin, chout;
do {
1 chin = getchar();
5 chout = chin;
6 putchar(chout);
}
while (...);
}
> ./echo
Hello world!
Hello world!
[figure: unlucky CPU scheduling of threads A and B]
Single-threaded echo
void echo()
{
char chin, chout;
do {
2 chin = getchar();
3 chout = chin;
4 putchar(chout);
}
while (...);
}
> ./echo
Hello world!
eH....
Multithreaded echo (unlucky)
Race Condition
Significant race conditions in I/O & variable sharing
in this case, replacing the global variables with local variables did not solve the problem
we actually had two race conditions here:
one race condition in the shared variables and the order of value assignment
another race condition in the shared output stream:
• which thread is going to write to output first
• this race persisted even after making the variables local to each thread
generally, problematic race conditions may occur whenever resources and/or data are shared by processes unaware of each other or processes indirectly aware of each other
Producer / Consumer Problem
Producer:
while (true) {
    /* Produce an item */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer:
while (true) {
    while (in == out)
        ; // do nothing -- nothing to consume
    // remove an item from the buffer
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
Suppose that we wanted to provide a solution to the consumer-producer
problem that fills all the buffers.
We can do so by having an integer count that keeps track of the
number of full buffers.
Initially, count is set to 0; it is incremented by the producer after it produces a new buffer, and decremented by the consumer after it consumes a buffer.
http://gaia.ecs.csus.edu/~zhangd/oscal/ProducerConsumer/ProducerConsumer.html
Producer / Consumer
Producer:
while (true) {
/* produce an item and put in nextProduced */
while (count == BUFFER_SIZE)
; // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}
Consumer:
while (true) {
while (count == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in nextConsumed */
}
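The two loops above assume a shared circular buffer and its indices. A minimal sketch of those declarations (the names follow the slides; BUFFER_SIZE and the item type are illustrative assumptions):

#define BUFFER_SIZE 10

typedef int item;            /* whatever is being produced; int here for illustration */

item buffer[BUFFER_SIZE];    /* shared circular buffer */
int  in = 0;                 /* next free slot (written by the producer) */
int  out = 0;                /* next full slot (read by the consumer)    */
int  count = 0;              /* number of full slots -- shared by both   */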
Race Condition
count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
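Either final value (4 or 6) is wrong, since the correct result is 5: one of the updates is lost. A minimal sketch (not from the slides, using POSIX threads) that reproduces this lost-update race on a shared counter:

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

static int count = 0;                 /* shared, unprotected counter */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        count++;                      /* register = count; register + 1; count = register */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("count = %d (expected %d)\n", count, 2 * ITERS);   /* usually smaller */
    return 0;
}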
Race Condition
How to avoid race conditions?
find a way to keep the instructions together
this means reverting from too much interleaving and going back to “indivisible” (atomic) blocks of execution
[figure (a): too much interleaving of threads A and B may create race conditions]
[figure (b): keeping “indivisible” blocks of execution avoids race conditions]
Critical Regions
The “indivisible” execution blocks are critical regions
a critical region is a section of code that may be executed by only one process or
thread at a time
[figure: A and B enter a common critical region]
although it is not necessarily the same region of memory or section of program in
both processes
[figure: A’s critical region and B’s critical region are in different sections of code]
but physically different or not, what matters is that these regions cannot be
interleaved or executed in parallel (pseudo or real)
Critical Regions
We need mutual exclusion from critical regions
critical regions can be protected from concurrent access by padding them with entrance
and exit gates
o a thread must try to check in, then it must check out
void echo()
{
char chin, chout;
do {
enter critical region?
chin = getchar();
chout = chin;
putchar(chout);
exit critical region
}
while (...);
}
[thread A above, thread B below: both run the same code]
void echo()
{
char chin, chout;
do {
enter critical region?
chin = getchar();
chout = chin;
putchar(chout);
exit critical region
}
while (...);
}
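One concrete way to realize these enter/exit gates (a sketch, not the mechanism the lecture develops next) is a POSIX mutex around the echo body:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t echo_lock = PTHREAD_MUTEX_INITIALIZER;

void echo(void)
{
    int chin, chout;                       /* int so that EOF is representable */
    do {
        pthread_mutex_lock(&echo_lock);    /* enter critical region */
        chin = getchar();
        chout = chin;
        if (chout != EOF)
            putchar(chout);
        pthread_mutex_unlock(&echo_lock);  /* exit critical region */
    } while (chin != EOF);
}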
Critical Section
do {
entry section
critical section
exit section
remainder section
} while (TRUE);
Mutual Exclusion
Desired effect: mutual exclusion from the critical region
1. thread A reaches the gate to the critical region (CR) before B
2. thread A enters CR first, preventing B from entering (B is waiting or is blocked)
3. thread A exits CR; thread B can now enter
4. thread B enters CR
HOW is this achieved??
Solution to Critical-Section Problem
1. Mutual Exclusion
o If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
2. Progress
o If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely
3. Bounded Waiting
o A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes
Peterson’s Solution
Two process solution
Assume that the LOAD and STORE instructions are atomic;
that is, cannot be interrupted.
The two processes share two variables:
int turn;
Boolean flag[2]
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the critical
section.
flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = FALSE;
remainder section
} while (TRUE);
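A compilable sketch of the same algorithm for two threads i = 0 and j = 1; as the slide notes, it relies on atomic, non-reordered loads and stores, so on modern hardware a memory fence would also be needed (omitted here):

#include <stdbool.h>

volatile bool flag[2] = { false, false };  /* flag[i]: Pi is ready to enter */
volatile int  turn    = 0;                 /* whose turn it is to enter     */

void enter_region(int i)                   /* i is 0 or 1 */
{
    int j = 1 - i;                         /* the other process */
    flag[i] = true;                        /* I am ready        */
    turn = j;                              /* give the other process priority */
    while (flag[j] && turn == j)
        ;                                  /* busy-wait */
}

void leave_region(int i)
{
    flag[i] = false;                       /* I am no longer in the critical section */
}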
Peterson’s Solution
1. A and B each have their own lock; an extra toggle is also masking either lock
2. A arrives first, sets its lock, pushes the mask to the other lock and may enter
3. then, B also sets its lock & pushes the mask, but must wait until A’s lock is reset
4. A exits the CR and resets its lock; B may now enter
[figure: A and B at the critical region in each of the four steps]
Peterson’s Solution
1. A and B each have their own lock; an extra toggle is also masking either lock
2.1 A is interrupted between setting the lock & pushing the mask; B sets its lock
2.2 now, both A and B race to push the mask: whoever does it last will allow the other one inside the CR (pushed last, allowing A; or pushed last, allowing B)
mutual exclusion holds!! (no bad race condition)
Synchronization Hardware
Uniprocessors – could disable interrupts
Currently running code would execute without preemption
what guarantees that the user process is going to ever exit the critical
region?
meanwhile, the CPU cannot interleave any other task
even unrelated to this race condition
Generally too inefficient on multiprocessor systems
Operating systems using this approach are not broadly scalable
Modern machines provide special atomic hardware instructions
Atomic = non-interruptable
Either test memory word and set value
Or swap contents of two memory words
Solution to Critical-section Problem Using Locks
Many systems provide hardware support for critical section code
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
Atomic lock
1. thread A reaches CR and finds the lock at 0 and sets it in one shot, then enters
1.1’ even if B comes right behind A, it will find that the lock is already at 1
2. thread A exits CR, then resets lock to 0
3. thread B finds the lock at 0 and sets it to 1 in one shot, just before entering CR
TestAndSet Instruction
Definition:
boolean TestAndSet (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
Solution using TestAndSet
Shared Boolean variable lock, initialized to FALSE.
Solution:
do {
    while ( TestAndSet (&lock) )
        ;              // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
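The TestAndSet() above is pseudocode for what the hardware provides; a sketch of the same spinlock pattern using a real atomic primitive (GCC/Clang's __sync_lock_test_and_set builtin):

static volatile int lock = 0;              /* 0 = free, 1 = held */

void acquire(void)
{
    while (__sync_lock_test_and_set(&lock, 1))
        ;                                  /* spin while the old value was 1 */
}

void release(void)
{
    __sync_lock_release(&lock);            /* atomically writes 0 */
}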
Swap Instruction
Definition:
void Swap (boolean *a, boolean *b)
{
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution using Swap
Shared Boolean variable lock initialized to FALSE;
Each process has a local Boolean variable key
Solution:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
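The Swap() here is again pseudocode; a sketch of the same idea with C11's atomic_exchange, which atomically swaps a new value into the lock and returns the old one:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock = false;                  /* shared; false = free */

void acquire(void)
{
    bool key = true;
    while (key)                            /* keep swapping until the old value was false */
        key = atomic_exchange(&lock, true);
}

void release(void)
{
    atomic_store(&lock, false);
}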
Bounded-waiting Mutual Exclusion with TestAndSet()
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)      // spin until the lock is free or the CR is handed to us
        key = TestAndSet(&lock);
    waiting[i] = FALSE;

    // critical section

    j = (i + 1) % n;               // scan for the next waiting process
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;              // nobody is waiting: release the lock
    else
        waiting[j] = FALSE;        // hand the CR directly to process j (bounded waiting)

    // remainder section
} while (TRUE);
Semaphore
Synchronization tool that does not require busy waiting
Semaphore S – integer variable
Two standard operations modify S: wait() and signal()
Less complicated than the hardware-based solutions
Can only be accessed via two indivisible (atomic) operations
wait (S) {
while S <= 0
; // no-op
S--;
}
signal (S) {
S++;
}
Semaphore as General Synchronization Tool
Counting semaphore
integer value can range over an unrestricted domain
Binary semaphore
integer value can range only between 0 and 1; can be simpler to implement
Also known as mutex locks
Can implement a counting semaphore S as a binary semaphore
Provides mutual exclusion
Semaphore mutex;
// initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
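The same mutual-exclusion pattern with POSIX semaphores (a sketch; sem_init's middle argument 0 means the semaphore is shared between threads of one process):

#include <semaphore.h>

sem_t mutex;

void setup(void)
{
    sem_init(&mutex, 0, 1);      /* counting semaphore initialized to 1 (binary use) */
}

void worker(void)
{
    sem_wait(&mutex);            /* wait(mutex) */
    /* critical section */
    sem_post(&mutex);            /* signal(mutex) */
}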
Binary Semaphore
State walkthrough (each wait moves one step to the right, each signal one step back to the left):
value = 1 (“off”), no queue -> value = 0 (“on”), no queue -> value = 0, 1 in queue -> value = 0, 2 in queue -> value = 0, 3 in queue -> ...
Counting Semaphore
State walkthrough (each wait moves one step to the right, each signal one step back to the left):
value = +2, no queue -> value = +1, no queue -> value = 0, no queue -> value = –1, 1 in queue -> value = –2, 2 in queue -> ...
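A sketch of the classic use of a counting semaphore: initialized to N, it lets at most N threads hold one of N identical resources at a time (POSIX semaphores assumed; N and the resource are illustrative):

#include <semaphore.h>

#define N 4                       /* number of identical resources, e.g. buffers */

sem_t pool;                       /* counting semaphore, initialized to N */

void setup(void)
{
    sem_init(&pool, 0, N);
}

void use_resource(void)
{
    sem_wait(&pool);              /* blocks once all N resources are taken */
    /* ... use one of the N resources ... */
    sem_post(&pool);              /* return the resource to the pool */
}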
Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal ()
on the same semaphore at the same time
Thus, implementation becomes the critical section problem where the
wait and signal code are placed in the critical section.
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and
therefore this is not a good solution.
Busy waiting
in busy waiting, the PC is always looping (increment & jump back); it can be preemptively interrupted but will loop again tightly whenever rescheduled -> tight polling
when blocked, the process’s PC stalls after executing a “yield” call:
either the process is only timed out, thus it is “Ready” to loop-and-yield again -> sparse polling
or it is truly “Blocked” and put in an event queue -> condition waiting
[figure: two process-state diagrams (Ready, Running, Blocked) with dispatch, timeout, event wait (block), and event occurs (unblock) transitions; in the second diagram the timeout and event wait are voluntary]
Semaphore Implementation with no Busy waiting
With each semaphore there is an associated waiting queue.
Each semaphore has two data items:
a value (of type integer)
a pointer to a list of waiting processes
Two operations:
block: place the process invoking the operation on the appropriate waiting queue
wakeup: remove one of the processes in the waiting queue and place it in the ready queue
Semaphore Implementation with no Busy waiting
Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
Implementation of signal:
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
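The wait() and signal() code above assumes a semaphore structure along these lines (a sketch; the process list type is left abstract, and the field names follow the slides):

typedef struct process process;   /* PCB type, defined by the kernel elsewhere */

typedef struct {
    int      value;               /* may go negative: -value = number of waiters */
    process *list;                /* queue of processes blocked on this semaphore */
} semaphore;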
Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for an event that
can be caused by only one of the waiting processes
Let S and Q be two semaphores initialized to 1
        P0                    P1
    wait (S);             wait (Q);
    wait (Q);             wait (S);
      ...                   ...
    signal (S);           signal (Q);
    signal (Q);           signal (S);
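A sketch (not from the slides) of the usual remedy: if both processes acquire the two semaphores in the same global order, the circular wait above cannot form (POSIX semaphore calls are used for concreteness):

#include <semaphore.h>

sem_t S, Q;                        /* both initialized to 1 elsewhere */

void p0(void)
{
    sem_wait(&S);                  /* always S first ... */
    sem_wait(&Q);                  /* ... then Q         */
    /* use both resources */
    sem_post(&Q);
    sem_post(&S);
}

void p1(void)
{
    sem_wait(&S);                  /* same order as P0, so neither process can hold one
                                      semaphore while waiting for the other */
    sem_wait(&Q);
    /* use both resources */
    sem_post(&Q);
    sem_post(&S);
}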
Starvation – indefinite blocking.
A process may never be removed from the semaphore queue in which it
is suspended
Priority Inversion - Scheduling problem in which a lower-priority process holds a lock needed by a higher-priority process