Transcript: Processes and Their Interaction

Operating Systems Principles
Process Management and Coordination
Lecture 2: Processes and Their Interaction
Lecturer: 虞台文
Content

- The Process Notion
- Defining and Instantiating Processes
  - Precedence Relations
  - Implicit Process Creation
  - Dynamic Creation With fork And join
  - Explicit Process Declarations
- Basic Process Interactions
  - Competition: The Critical Problem
  - Cooperation
- Semaphores
  - Semaphore Operations and Data
  - Mutual Exclusion
  - Producer/Consumer Situations
- Event Synchronization
The Process Notion

What is a process?

- A process is a program in execution.
  - Also called a task.
- It includes:
  - the program itself (i.e., code or text)
  - data
  - a thread of execution (possibly several threads)
  - resources (such as files)
  - execution info (process relation information kept by the OS)
- Multiple processes are simultaneously existent in a system.
Virtualization

- Conceptually,
  - each process has its own CPU and main memory;
  - processes are running concurrently.
- In reality, many computers are equipped with a single CPU.
- To achieve concurrency, the following are needed:
  - CPU sharing, to virtualize the CPU
  - virtual memory, to virtualize the memory
- Both are usually done by the kernel of the OS:
  - Each process may be viewed in isolation.
  - The kernel provides a few simple primitives for process interaction.
Physical/Logical Concurrency

- An OS must handle a high degree of parallelism.
- Physical concurrency
  - multiple CPUs or processors required
- Logical concurrency
  - time-sharing the CPU
Interaction among Processes

- The OS and user applications are viewed as a collection of processes, all running concurrently.
- These processes
  1. operate almost independently of one another;
  2. cooperate by sharing memory or by sending messages and synchronization signals to each other; and
  3. compete for resources.
Why Use Process Structure?

- Hardware-independent solutions
  - Processes cooperate and compete correctly, regardless of the number of CPUs.
- Structuring mechanism
  - Tasks are isolated, with well-defined interfaces.
Defining and Instantiating Processes
A Process Flow Graph

(Figure: process flow graph of a user session at a workstation.)

S/P notation:
  S(p1, ..., pn)  -- serial execution of processes p1 through pn
  P(p1, ..., pn)  -- parallel execution of processes p1 through pn

Serial and Parallel Processes

  Serial:    S(p1, p2, p3, p4)
  Parallel:  P(p1, p2, p3, p4)
Properly Nested Process Flow Graphs

A process flow graph is properly nested if it can be described by the functions S and P, and only function composition.

Example (properly nested):
  S(p1, P(p2, S(p3, P(p4, p5)), p6), P(p7, p8))
Properly Nested Process Flow Graphs

(Figure: a properly nested graph next to an improperly nested one, which cannot be described with S and P alone.)

Example: Evaluation of Arithmetic Expressions
  (a + b) * (c + d) - e / f
(Figure: the expression tree and the corresponding process flow graph.)
Implicit Process Creation

- Processes are created dynamically using language constructs
  - no process declaration.
- cobegin/coend
  - syntax:   cobegin C1 // C2 // ... // Cn coend
  - meaning:
    - All Ci may proceed concurrently.
    - When all terminate, the statement following cobegin/coend continues.
Implicit Process Creation

  Serial:    C1; C2; C3; C4;                      S(C1, C2, C3, C4)

  Parallel:  cobegin C1 // C2 // C3 // C4 coend   P(C1, C2, C3, C4)
Example: Use of cobegin/coend

User session at a workstation:

  Initialize;
  cobegin
    Time_Date //
    Mail //
    Edit;
    cobegin
      Compile; Load; Execute //
      Edit;
      cobegin
        Print //
        Web
      coend
    coend
  coend;
  Terminate
Data Parallelism

- The same code is applied to different data.
- The forall statement
  - syntax:   forall (parameters) statements
  - meaning:
    - parameters specify a set of data items
    - statements are executed for each item concurrently
Example: Matrix Multiplication

A = B * C, where B is n x r, C is r x m, and A is n x m.

  forall ( i:1..n, j:1..m ) {
      A[i][j] = 0;
      for ( k=1; k<=r; ++k )
          A[i][j] = A[i][j] + B[i][k]*C[k][j];
  }

- Each inner product is computed sequentially.
- All inner products are computed in parallel.
Explicit Process Creation

- cobegin/coend
  - implicit process creation
  - limited to properly nested graphs
- forall
  - limited to data parallelism
- fork/join
  - explicit process creation
  - can express arbitrary functional parallelism
The fork/join/quit Primitives

- Syntax: fork x
  Meaning: create a new process that begins executing at label x.
- Syntax: join t,y
  Meaning: t = t-1; if (t==0) goto y;
  - The operation must be indivisible. (Why? If two processes execute join on the same counter t and the decrement and test interleave, both or neither might take the branch to y.)
- Syntax: quit
  Meaning: process termination.
Example

(Figure: process flow graph of eight processes p1, ..., p8; the starting point of process pi has label Li.)

- Synchronization is needed where branches merge.
- Use down-counters t1 = 2 and t2 = 3 for synchronization.
Example

  t1 = 2; t2 = 3;
  L1: p1; fork L2; fork L5; fork L7; quit;
  L2: p2; fork L3; fork L4; quit;
  L3: p3; join t2,L8; quit;
  L4: p4; join t1,L6; quit;
  L5: p5; join t1,L6; quit;
  L6: p6; join t2,L8; quit;
  L7: p7; join t2,L8; quit;
  L8: p8; quit;
The Unix fork

  procid = fork();
  if (procid == 0)
      do_child_processing;
  else
      do_parent_processing;

- Replicates the calling process.
- Parent and child are identical except for the value of procid.
- Use procid to diverge parent and child.
Explicit Process Declarations

- Designate a piece of code as a unit of execution
  - facilitates program structuring.
- Instantiate:
  - statically (like cobegin), or
  - dynamically (like fork).
Explicit Process Declarations

Syntax:

  process p {
      declarations_for_p;
      executable_code_for_p;
  };

  process type p {
      declarations_for_p;
      executable_code_for_p;
  };

Example:

  process p {
      process p1 {                 /* plain declaration: one instance is */
          declarations_for_p1;     /* created automatically -- similar   */
          executable_code_for_p1;  /* to cobegin/coend                   */
      }
      process type p2 {            /* type declaration: instances are    */
          declarations_for_p2;     /* created dynamically with new --    */
          executable_code_for_p2;  /* similar to fork                    */
      }
      other_declarations_for_p;
      /* executable_code_for_p: */
      ...
      q = new p2;
      ...
  }
Basic Process Interactions
Competition and Cooperation

- Competition
  - Processes compete for resources.
  - Each process could exist without the other.
- Cooperation
  - Each process is aware of the other.
  - Processes synchronize with one another.
  - They exchange information with one another, by
    - shared memory, or
    - message passing.
Resource Competition

(Figure: several processes accessing a shared resource, e.g., common data.)

- When several processes may asynchronously access a common data area, it is necessary to protect the data from simultaneous change by two or more processes.
- If not, the updated area may be left in an inconsistent state.
Example

  x = 0;
  cobegin
    p1: ...
        x = x + 1;
        ...
  //
    p2: ...
        x = x + 1;
        ...
  coend

What should the value of x be after both processes execute?
Example

Each assignment x = x + 1 compiles into a load, an increment, and a store. If p1 runs to completion before p2, the result is x = 2:

  p1: R1 = x;        /* R1 = 0 */
      R1 = R1 + 1;   /* R1 = 1 */
      x  = R1;       /* x  = 1 */
  p2: R2 = x;        /* R2 = 1 */
      R2 = R2 + 1;   /* R2 = 2 */
      x  = R2;       /* x  = 2 */

But if the two processes interleave, one update is lost and the result is x = 1:

  p1: R1 = x;        /* R1 = 0 */
      R1 = R1 + 1;   /* R1 = 1 */
  p2: R2 = x;        /* R2 = 0 */
      R2 = R2 + 1;   /* R2 = 1 */
  p1: x  = R1;       /* x  = 1 */
  p2: x  = R2;       /* x  = 1 */
The Critical Section (CS)

- Any section of code involved in reading and writing a shared data area is called a critical section.
- Mutual exclusion: at most one process is allowed to execute within its critical section at any time.
The Critical Problem

- Guarantee mutual exclusion: at any time, at most one process is executing within its CSi.

  cobegin
    p1: while(1) {CS1; program1;}
  //
    p2: while(1) {CS2; program2;}
  //
    ...
  //
    pn: while(1) {CSn; programn;}
  coend
The Critical Problem

- Guarantee mutual exclusion: at any time, at most one process is executing within its CSi.
- In addition, we need to prevent mutual blocking:
  - A process outside of its CS must not prevent other processes from entering their CSs. (No "dog in the manger".)
  - A process must not be able to repeatedly reenter its CS and starve other processes. (Fairness.)
  - Processes must not block each other forever. (No deadlock.)
  - Processes must not repeatedly yield to each other. ("After you"--"after you" livelock.)
Software Solutions

- Solve the critical problem without taking advantage of special machine instructions and other hardware.

  cobegin
    p1: while(1) {CS1; program1;}
  //
    p2: while(1) {CS2; program2;}
  //
    ...
  //
    pn: while(1) {CSn; programn;}
  coend
Algorithm 1

  int turn = 1;
  cobegin
    p1: while (1) {
          while (turn==2);  /*wait*/
          CS1;
          turn = 2;
          program1;
        }
  //
    p2: while (1) {
          while (turn==1);  /*wait*/
          CS2;
          turn = 1;
          program2;
        }
  coend

- Mutual exclusion: satisfied -- turn selects only one process at a time.
- No mutual blocking: violated -- the processes are forced to alternate strictly, so a process that stays outside its CS (in programi) keeps the other from reentering ("dog in the manger").
Algorithm 2

  int c1 = 0, c2 = 0;
  cobegin
    p1: while (1) {
          c1 = 1;
          while (c2);  /*wait*/
          CS1;
          c1 = 0;
          program1;
        }
  //
    p2: while (1) {
          c2 = 1;
          while (c1);  /*wait*/
          CS2;
          c2 = 0;
          program2;
        }
  coend

- Mutual exclusion: satisfied.
- No mutual blocking: violated -- if both processes set their flags (c1 = c2 = 1) before either tests the other's, both wait forever (deadlock).
Algorithm 3

  int c1 = 0, c2 = 0;
  cobegin
    p1: while (1) {
          c1 = 1;
          if (c2) c1 = 0;      /* retreat if p2 also wants to enter */
          else {
            CS1;
            c1 = 0;
            program1;
          }
        }
  //
    p2: while (1) {
          c2 = 1;
          if (c1) c2 = 0;
          else {
            CS2;
            c2 = 0;
            program2;
          }
        }
  coend

- Mutual exclusion: satisfied.
- No mutual blocking: violated --
  - It may violate the fairness requirement: one process can repeatedly reenter its CS while the other keeps retreating and starves.
  - It may violate the no-livelock requirement: both processes can endlessly set and reset their flags, yielding to each other without either entering its CS.
Algorithm 4 (Peterson)

  int c1 = 0, c2 = 0, willWait;
  cobegin
    p1: while (1) {
          c1 = 1;
          willWait = 1;
          while (c2 && (willWait==1));  /*wait*/
          CS1;
          c1 = 0;
          program1;
        }
  //
    p2: while (1) {
          c2 = 1;
          willWait = 2;
          while (c1 && (willWait==2));  /*wait*/
          CS2;
          c2 = 0;
          program2;
        }
  coend

- Mutual exclusion: satisfied.
- No mutual blocking: satisfied -- no dog in the manger, fairness, no deadlock, no livelock.
Cooperation: Producer/Consumer

(Figure: Producer --Deposit--> Buffer --Remove--> Consumer.)

- The Producer must not overwrite any data before the Consumer can remove it.
- The Consumer must be able to wait for the Producer when the latter falls behind and does not fill the buffer on time.

Client/Server Architecture

- Processes communicate by message passing.
Competition and Cooperation

- Problems with software solutions:
  - difficult to program and to verify;
  - competition and cooperation use entirely different solutions;
  - processes loop while waiting (busy-waiting).
Process States

(Figure: process state diagram, built up over several slides.)

- A running process moves back to ready on a time-out; a ready process is moved to running by a dispatch. Ready processes wait in the ready_queue.
- A running process that calls wait(si) becomes blocked and is placed on the wait-queue associated with si (one wait-queue per signal s1, ..., sn).
- When a running process calls signal(si), a process blocked on si is moved from that wait-queue back to the ready_queue.

P/V Operators

- wait(si) is the P operation; signal(si) is the V operation.
Dijkstra's Semaphores

- Dijkstra (1968) introduced two new operations, called P and V, that considerably simplify the coordination of concurrent processes.
  - They are universally applicable to competition and cooperation among any number of processes.
  - They avoid the performance degradation that results from busy-waiting.
- There is a wait-queue associated with each semaphore s.
Semaphores

- A semaphore s is a non-negative integer -- a down-counter -- with two operations, P and V:

  P(s) /* wait(s) */
  {
      if (s > 0) s--;
      else {
          queue the process on s;
          change its state to blocked;
          schedule another process;
      }
  }

  V(s) /* signal(s) */
  {
      if (s == 0 && queue is not empty) {
          pick a process from the queue on s;
          change its state from blocked to ready;
      }
      else s++;
  }

- P and V are indivisible (atomic) operations.
- There is a wait-queue associated with each semaphore.
Semaphores

(Figure, built up over several slides: four processes share one semaphore s, initialized to 1, together with its wait-queue.)

  Process i:  ... P(s); CSi; V(s); Programi; ...     (i = 1, ..., 4)

- The first process to execute P(s) finds s = 1, decrements it to 0, and enters its critical section.
- Processes that execute P(s) while s = 0 (Process 3 and Process 4 in the figure) are placed on the wait-queue of s.
- Each V(s) wakes one queued process if the queue is not empty; otherwise it increments s back to 1.
Mutual Exclusion with Semaphores

  semaphore mutex = 1;  /* binary semaphore */
  cobegin
    p1: while (1) { P(mutex); CS1; V(mutex); program1; }
  //
    p2: while (1) { P(mutex); CS2; V(mutex); program2; }
  //
    ...
  //
    pn: while (1) { P(mutex); CSn; V(mutex); programn; }
  coend;
Producer/Consumer with Semaphores

  semaphore s = 0;  /* counting semaphore: zero items produced */
  cobegin
    p1: ...            /* Consumer */
        P(s);          /* wait for a signal from the other process if needed */
        ...
  //
    p2: ...            /* Producer */
        V(s);          /* send a signal to wake up an idle process */
        ...
  coend;
The Bounded Buffer Problem

  /* e: the number of empty cells
     f: the number of items available (full cells)
     b: mutual exclusion */
  semaphore e = n, f = 0, b = 1;
  cobegin
    Producer: while (1) {
        Produce_next_record;
        P(e);  /* wait if no cell is empty */
        P(b);  /* mutually exclusive access to the buffer */
        Add_to_buf;
        V(b);  /* release the buffer */
        V(f);  /* signal: data available */
      }
  //
    Consumer: while (1) {
        P(f);  /* wait if no data is available */
        P(b);  /* mutually exclusive access to the buffer */
        Take_from_buf;
        V(b);  /* release the buffer */
        V(e);  /* signal: empty cell available */
        Process_record;
      }
  coend
Event Synchronization
Events

- An event denotes some change in the state of the system that is of interest to a process.
- Events are usually signaled through hardware interrupts and traps, either directly or indirectly.
Two Types of Events

- Synchronous event (e.g., I/O completion)
  - The process waits for it explicitly.
  - Constructs: E.wait, E.post
- Asynchronous event (e.g., arithmetic error)
  - The process provides an event handler.
  - The handler is invoked whenever the event is posted.
Case study: Windows 2000

- WaitForSingleObject
- WaitForMultipleObjects

The calling process blocks until the object is signaled:

  object type   signaled when:
  -----------   ------------------------
  process       all threads complete
  thread        terminates
  semaphore     incremented
  mutex         released
  event         posted
  timer         expires
  file          I/O operation terminates
  queue         item placed on queue