
Chapter 6: Process Synchronization
Background
• Concurrent access to shared data may
result in data inconsistency
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
Race Condition
• A situation where several processes
access and manipulate the same data
concurrently and the outcome of the
execution depends on the particular order
in which the access takes place, is called
a race condition
• To guard against race conditions, it must be ensured that only one process at a time manipulates the shared variable (e.g., counter), i.e., the processes must be synchronized in some manner (illustrated below).
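To make the race concrete, here is a minimal C sketch (not from the slides; the thread count and iteration count are arbitrary choices) in which two threads increment a shared counter without synchronization. Compiled with -pthread, it usually prints a total smaller than the expected 2000000 because counter++ is not atomic.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                /* shared data */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but interleaved updates are frequently lost,
       so a smaller value is usually printed. */
    printf("counter = %ld\n", counter);
    return 0;
}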
Critical Section
• In a system of processes (P0, P1, P2…. Pn), each
process has a segment of code called a critical
section, in which the process may be changing
common variables, updating a table, writing a file etc.
• When one process is executing in its critical section,
no other process is to be allowed to execute in its
critical section
• Solution is to design a protocol that the processes can use to cooperate. Each process's code is organized into the following sections (see the sketch below):
– Entry section - the code that requests permission to enter the critical section.
– Exit section - the code that follows the critical section and releases it.
– Remainder section - the remaining code of the process.
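As a rough illustration of these sections (an assumption for this transcript, not code from the slides: a POSIX mutex stands in for the entry and exit sections), each process repeatedly passes through entry, critical, exit, and remainder sections:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_table = 0;                 /* common variable updated in the critical section */

static void *process(void *name) {
    for (int i = 0; i < 5; i++) {
        pthread_mutex_lock(&lock);           /* entry section: request permission */
        shared_table++;                      /* critical section: update shared data */
        pthread_mutex_unlock(&lock);         /* exit section: release the critical section */
        printf("%s in remainder section\n",  /* remainder section: unrelated work */
               (const char *)name);
    }
    return NULL;
}

int main(void) {
    pthread_t p0, p1;
    pthread_create(&p0, NULL, process, "P0");
    pthread_create(&p1, NULL, process, "P1");
    pthread_join(p0, NULL);
    pthread_join(p1, NULL);
    printf("shared_table = %d\n", shared_table);
    return 0;
}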
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its
critical section and before that request is granted
– Assume that each process executes at a nonzero speed
– No assumption concerning the relative speed of the N processes
Approaches to handle race
conditions in OS
• Preemptive kernel- allows a process to be
preempted while it is running in kernel
mode
• Non-preemptive kernel - does not allow a process to be preempted while it is running in kernel mode; such a process runs until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
Synchronization Hardware
• Many systems provide hardware support
for critical section code
• Uniprocessors – could disable interrupts
– Currently running code would execute without
preemption
– Generally too inefficient on multiprocessor
systems
• Operating systems using this approach are not broadly scalable
• Modern machines provide special atomic
hardware instructions
• Atomic = non-interruptible
– Either test memory word and set value
– Or swap contents of two memory words
TestAndSet Instruction
• Definition:
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;   /* save the original value of the memory word */
    *target = TRUE;         /* set the word to TRUE */
    return rv;              /* return the original value */
}
Solution using TestAndSet
• Shared boolean variable lock, initialized to FALSE.
• Solution:
do {
    while (TestAndSet(&lock))
        ;   /* do nothing (busy wait) */

    /* critical section */

    lock = FALSE;

    /* remainder section */
} while (TRUE);
Swap Instruction
• Definition:
void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;   /* exchange the contents of the two memory words */
    *a = *b;
    *b = temp;
}
Solution using Swap
• Shared Boolean variable lock, initialized to FALSE; each process has a local Boolean variable key.
• Solution:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);

    /* critical section */

    lock = FALSE;

    /* remainder section */
} while (TRUE);
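On current compilers the same test-and-set behavior is available through C11 atomics; the following minimal sketch (an illustration, not part of the slides) builds the spinlock acquire/release pair on atomic_flag:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;      /* shared lock, initially clear (FALSE) */

void acquire(void) {
    /* atomic_flag_test_and_set atomically sets the flag and
       returns its previous value, just like TestAndSet above */
    while (atomic_flag_test_and_set(&lock))
        ;                                        /* busy wait (spinlock) */
}

void release(void) {
    atomic_flag_clear(&lock);                    /* lock = FALSE */
}

Both atomic_flag_test_and_set and atomic_flag_clear are standard <stdatomic.h> operations, and atomic_flag is guaranteed to be lock-free.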
Semaphore
• Synchronization tool that does not require busy waiting (in the implementation shown later)
• Semaphore S – integer variable
• Can only be accessed via two standard indivisible (atomic) operations: wait() and signal()
– Originally called P() and V()
• Less complicated than the hardware-based solutions
– wait (S) {
      while (S <= 0)
          ;   // no-op (busy wait)
      S--;
  }
– signal (S) {
      S++;
  }
• When one process modifies the semaphore value, no
other process can modify the value of the same
semaphore
Semaphore as General Synchronization
Tool
• Counting semaphore – integer value can range over an
unrestricted domain
• Binary semaphore – integer value can range only
between 0 and 1; can be simpler to implement
– Also known as mutex locks-provide mutual exclusion
• Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of instances available.
• A binary semaphore provides mutual exclusion:
– Semaphore S;   // initialized to 1
– wait (S);
      Critical Section
  signal (S);
• Each process that wishes to use a resource performs a wait on the semaphore (decrementing the count). When the process releases the resource, it performs a signal operation (incrementing the count).
• When the count for a semaphore goes to 0, all instances of the resource are in use.
• Processes that still wish to use the resource must wait until the count becomes greater than 0 again (see the sketch below).
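A minimal sketch of this pattern with POSIX unnamed semaphores (the instance count, thread count, and names here are assumptions for illustration, not from the slides):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_INSTANCES 3                          /* finite number of resource instances (assumed) */

static sem_t resource_sem;

static void *use_resource(void *arg) {
    sem_wait(&resource_sem);                     /* wait(): decrement count, block if none left */
    printf("thread %ld holds an instance\n", (long)arg);
    sleep(1);                                    /* hold it briefly; at most 3 threads are inside */
    sem_post(&resource_sem);                     /* signal(): increment count, wake a waiter */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&resource_sem, 0, NUM_INSTANCES);   /* initialized to the number of instances */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, use_resource, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resource_sem);
    return 0;
}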
• Example
– Process P1 executes statement S1 and process P2 executes statement S2.
– S2 must execute only after S1 has executed.
– P1 and P2 share a semaphore 'synch', initialized to 0

For process P1:
    S1;
    signal(synch);

For process P2:
    wait(synch);
    S2;
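A C sketch of this ordering idiom with POSIX semaphores (S1 and S2 are stand-in print statements; the function names are assumptions, not from the slides):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t synch;                              /* shared semaphore, initialized to 0 */

static void *p1(void *arg) {
    printf("S1\n");                              /* statement S1 */
    sem_post(&synch);                            /* signal(synch): allow P2 to proceed */
    return NULL;
}

static void *p2(void *arg) {
    sem_wait(&synch);                            /* wait(synch): blocks until P1 signals */
    printf("S2\n");                              /* statement S2 runs only after S1 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);                      /* 0, so P2 must wait for P1's signal */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}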
Semaphore Implementation with no Busy waiting
• The semaphore implementation above requires busy waiting, which wastes CPU cycles. This type of semaphore is called a spinlock.
• To overcome this problem, wait() and signal() can be modified: if a process executes wait() and finds that the semaphore value is not positive, it blocks itself instead of busy waiting.
• With each semaphore there is an associated waiting queue. A process that blocks has its state changed to waiting, and control is transferred to the CPU scheduler, which selects another process to execute.
• Each semaphore therefore has two data items (sketched after this list):
– value (of type integer)
– pointer to a list of processes waiting on the semaphore
• Two operations:
– block – place the process invoking the operation on the appropriate waiting queue.
– wakeup – remove one of the processes in the waiting queue and place it in the ready queue.
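In C, the semaphore described here can be sketched as a small structure (field and function names follow the usual textbook convention and are illustrative, not definitions from the slides):

typedef struct process process;                  /* PCB of a waiting process (details omitted) */

typedef struct {
    int value;                                   /* semaphore value */
    process *list;                               /* queue of processes blocked on this semaphore */
} semaphore;

/* block() suspends the process that invokes it;
   wakeup(P) moves process P from the semaphore's waiting queue to the ready queue.
   Both are assumed to be supplied by the kernel. */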
Semaphore Implementation with no
Busy waiting (Cont.)
• Implementation of wait:
wait (semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;   /* the waiting queue */
        block();                       /* suspend the calling process */
    }
}
• Implementation of signal:
signal (semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);                     /* move P to the ready queue */
    }
}
Deadlock and Starvation
• Deadlock – two or more processes are waiting
indefinitely for an event that can be caused by only one
of the waiting processes
• Let S and Q be two semaphores initialized to 1
          P0                          P1
        wait(S);                    wait(Q);
        wait(Q);                    wait(S);
          ...                         ...
        signal(S);                  signal(Q);
        signal(Q);                  signal(S);
• If P0 acquires S and P1 then acquires Q, P0 blocks on wait(Q) and P1 blocks on wait(S); each is waiting for a signal that only the other (blocked) process can issue, so both are deadlocked.
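For illustration only (not from the slides), the following C sketch reproduces this pattern with POSIX semaphores; the two threads usually end up blocked forever in their second sem_wait, so the program hangs:

#include <semaphore.h>
#include <pthread.h>
#include <unistd.h>

static sem_t S, Q;                               /* both initialized to 1 */

static void *p0(void *arg) {
    sem_wait(&S);
    sleep(1);                                    /* widen the window so P1 can grab Q first */
    sem_wait(&Q);                                /* blocks forever once P1 holds Q */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

static void *p1(void *arg) {
    sem_wait(&Q);
    sleep(1);
    sem_wait(&S);                                /* blocks forever once P0 holds S */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);                      /* never returns: the threads are deadlocked */
    pthread_join(t1, NULL);
    return 0;
}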
• Starvation – indefinite blocking. A process may never
be removed from the semaphore queue in which it is
suspended.