Processes and Threads
Transcript of slides from Dhamdhere: Operating Systems—A Concept-Based Approach, 2ed, Chapter 3. Copyright © 2006.

Threads
• A context switch between processes is an expensive operation; it leads to high overhead.
• A thread is an alternative model for program execution that incurs a smaller overhead when switching between threads of the same application.
Q: How is this achieved?
Thread switching overhead
• A process context switch involves:
  1. saving the context of the process in operation
  2. saving its CPU state
  3. loading the context of the new process
  4. loading its CPU state
• A thread is a program execution within the context of a process (i.e., it uses the resources of a process); many threads can be created within the same process.
• Switching between threads of the same process involves much less switching overhead.
[Figure: a process and the threads within it]
Multithreading (OOP with C++ & Java, D. Samanta, PHI)
• Multithreading means multiple flows of control. It has two advantages:
  1. Better utilization of resources
  2. Several problems are solved better by multiple threads (examples follow)
Examples
• A program that displays animation, plays music, and downloads files from the net, all at the same time
• Network simulation, e.g., of an ad hoc network
• Java is a multithreaded language.
• Because Java threads run in the same memory space, they can easily communicate among themselves: an object in one thread can call a method in another thread without any overhead required from the OS.
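A minimal Java sketch of this point, assuming nothing beyond what the slide states: two threads share one object in the same memory space, and the main thread reads what the other thread wrote by calling the object's method directly. The class and method names are illustrative, not from the text.

    // Two threads communicate through a shared object in the same memory space.
    class MessageBox {
        private String message = "(empty)";
        void put(String m) { message = m; }   // called from one thread
        String read() { return message; }     // called from another thread
    }

    public class SharedMemoryDemo {
        public static void main(String[] args) throws InterruptedException {
            MessageBox box = new MessageBox();
            Thread producer = new Thread(() -> box.put("hello from another thread"));
            producer.start();
            producer.join();                  // join makes the write visible to main
            System.out.println(box.read());   // the main thread reads it directly
        }
    }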
Creating and starting threads (example):
    t1 = new TestThread("thread1", (int)(Math.random() * 2000));
    // t2 and t3 are created similarly
    t2.start();
    t1.start();
    t3.start();
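The TestThread class itself is not shown in the transcript. A minimal sketch consistent with the constructor used above (a thread name and a sleep delay in milliseconds) might look like this; the loop body is purely illustrative.

    // Hypothetical TestThread: prints its name a few times, sleeping 'delay' ms between prints.
    class TestThread extends Thread {
        private final int delay;

        TestThread(String name, int delay) {
            super(name);
            this.delay = delay;
        }

        @Override
        public void run() {
            for (int i = 1; i <= 5; i++) {
                System.out.println(getName() + ": iteration " + i);
                try {
                    Thread.sleep(delay);        // pause so the threads interleave visibly
                } catch (InterruptedException e) {
                    return;                     // stop if interrupted
                }
            }
        }
    }

    public class TestThreadDemo {
        public static void main(String[] args) {
            TestThread t1 = new TestThread("thread1", (int) (Math.random() * 2000));
            TestThread t2 = new TestThread("thread2", (int) (Math.random() * 2000));
            TestThread t3 = new TestThread("thread3", (int) (Math.random() * 2000));
            t2.start();
            t1.start();
            t3.start();
        }
    }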
• Thread methods: start(), stop(), suspend(), resume(), sleep(int n), setPriority(int p), yield(). The yield() method causes the runtime to switch context from the current thread to the next available runnable thread.
• Speciality: these facilities are provided at the language level, under user control.
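A small hedged sketch of some of these calls (note that stop(), suspend(), and resume() are deprecated in current Java because they are unsafe, so only the other methods appear here; the class name is illustrative):

    public class ThreadMethodsDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println(Thread.currentThread().getName() + " step " + i);
                    Thread.yield();              // hint: let another runnable thread go
                    try {
                        Thread.sleep(100);       // sleep for 100 ms
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            };

            Thread low = new Thread(work, "low");
            Thread high = new Thread(work, "high");
            low.setPriority(Thread.MIN_PRIORITY);    // priorities are only hints to the scheduler
            high.setPriority(Thread.MAX_PRIORITY);

            low.start();
            high.start();
            low.join();                              // wait for both to finish
            high.join();
        }
    }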
Threads
• Where are threads useful?
• If two processes share the same address space and the same resources, they have identical contexts
  – switching between these processes involves saving and reloading of their contexts; this overhead is redundant
• In such situations, it is better to use threads.
Threads in process Pi: (a) concept, (b) implementation
Advantages of threads
Different kinds of threads
• Kernel-level threads: Threads are created through system calls. The kernel is aware of their existence, and schedules them.
• User-level threads: Threads are created and maintained by a thread library, whose routines exist as parts of a process. The kernel is oblivious of their existence.
• Hybrid threads: A combination of the above.
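A rough, hedged analogy in Java (assuming Java 21 or later): platform threads are backed one-to-one by kernel-level threads, while virtual threads are scheduled by the JVM onto a small pool of carrier threads, which resembles the user-level/hybrid idea of scheduling outside the kernel.

    public class ThreadKindsDemo {
        public static void main(String[] args) throws InterruptedException {
            // Platform thread: backed by a kernel-level thread (one-to-one).
            Thread pThread = Thread.ofPlatform().name("platform-1")
                    .start(() -> System.out.println("running on a kernel-backed thread"));

            // Virtual thread: scheduled by the JVM onto carrier threads, not directly by the kernel.
            Thread vThread = Thread.ofVirtual().name("virtual-1")
                    .start(() -> System.out.println("running on a JVM-scheduled virtual thread"));

            pThread.join();
            vThread.join();
        }
    }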
Q: Why three kinds of threads?
A: Consider switching overhead and concurrency/parallelism.
Scheduling of kernel-level threads
[Flowchart: event occurs → save thread state → event processing → scheduling → is the selected thread of the same process? If yes, dispatch the thread directly; if no, save the process context and load the new context, then dispatch the thread.]
Scheduling of user-level threads
Kernel-level and user-level threads
• Thread switching overhead
• Concurrency and parallelism
Actions of the threads library
[Figure: N, R, B indicate the running, ready, and blocked states]
Hybrid thread models
• Hybrid threads have elements of both kernel-level and user-level threads
• Each user-level thread has a thread control block (TCB)
• Each kernel-level thread has a kernel thread control block (KTCB)
• Three models of associating user-level and kernel-level threads:
  – One-to-one association
  – Many-to-one association
  – Many-to-many association
Hybrid thread models
[Figure 3.15: schematic of the hybrid thread models]
Process Interactions
Processes may interact in four different ways:
1. Sharing of data – data consistency should be preserved
2. Synchronization of actions – processes should perform their actions in a desired order
3. Passing of messages – messages should be stored and delivered appropriately
4. Sending of signals – processes should be able to send signals to other processes and specify signal handling actions when signals are sent to them
The OS performs message passing and provides facilities for the other three modes of interaction.
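As a thread-level illustration of the message passing mode (the OS-level facility itself is not shown in the transcript, so this is only an analogy with illustrative names), a blocking queue stores messages until the receiver takes them:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class MessagePassingDemo {
        public static void main(String[] args) throws InterruptedException {
            // The queue stores messages until the receiver is ready to take them.
            BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

            Thread sender = new Thread(() -> {
                try {
                    mailbox.put("request #1");       // blocks if the mailbox is full
                    mailbox.put("request #2");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread receiver = new Thread(() -> {
                try {
                    for (int i = 0; i < 2; i++) {
                        String msg = mailbox.take(); // blocks until a message arrives
                        System.out.println("received: " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            sender.start();
            receiver.start();
            sender.join();
            receiver.join();
        }
    }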
An example of data sharing: airline reservations
• Multiple processes – each process servicing one agent terminal
Race conditions
• Let Ds be a shared variable
• Let Ai and Aj be processes which share this variable and operate concurrently:
    Ai : Ds = Ds + 10;
    Aj : Ds = Ds + 5;
• Because of concurrency there is uncertainty about the result – a race condition
• Solution – mutual exclusion
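A minimal Java sketch of this race and its mutual exclusion fix (Ds and the updates by +10 and +5 come from the slide; the looping and everything else are illustrative). Removing the synchronized keyword and running it a few times shows lost updates when the two load-add-store sequences interleave:

    public class RaceConditionDemo {
        static int Ds = 0;                    // shared variable from the slide

        // Mutual exclusion: only one thread may update Ds at a time.
        static synchronized void update(int amount) {
            Ds = Ds + amount;
        }

        public static void main(String[] args) throws InterruptedException {
            // Ai repeatedly adds 10, Aj repeatedly adds 5 (looped so the race is observable).
            Thread ai = new Thread(() -> { for (int i = 0; i < 100_000; i++) update(10); });
            Thread aj = new Thread(() -> { for (int i = 0; i < 100_000; i++) update(5); });
            ai.start();
            aj.start();
            ai.join();
            aj.join();
            // With synchronization the result is always 1,500,000;
            // without it, interleaved updates can be lost and the sum comes out smaller.
            System.out.println("Ds = " + Ds);
        }
    }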
Race conditions in data sharing
Results of operations performed on shared data may be wrong if race conditions exist. Let us see why this may be so.
• Let operations Oi and Oj update the value of a shared data item d, and let fi(d) and fj(d) represent the value of d after the respective operations
• If processes Pi and Pj perform operations Oi and Oj
  – If Oi is performed before Oj, the resulting value of d is fj(fi(d))
  – If Oj is performed before Oi, the resulting value of d is fi(fj(d))
Race conditions in data sharing
• A race condition is a situation in which the result of performing operations Oi and Oj in two processes is neither fi(fj(d)) nor fj(fi(d)).
Q: Why do race conditions exist?
Data sharing by processes of a reservations system
Race conditions in the airline reservations system
• Race conditions in the airline reservations system may have two consequences:
  – nextseatno may not be updated properly
  – The same seat number may be allocated to two passengers
Race conditions in the airline reservations system
Control synchronization between processes: operation sj should be performed after si
[Figure: timing diagrams (a) and (b), with time increasing to the right]
• (Fig. a) Process Pi executes si first, and then Pj executes sj
• (Fig. b) Process Pj is not allowed to execute sj until Pi executes si
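A minimal Java sketch of the requirement in (b), using a CountDownLatch so that Pj cannot execute sj until Pi has executed si (the operation bodies and names are illustrative):

    import java.util.concurrent.CountDownLatch;

    public class ControlSyncDemo {
        public static void main(String[] args) throws InterruptedException {
            CountDownLatch siDone = new CountDownLatch(1);   // signalled once si completes

            Thread pi = new Thread(() -> {
                System.out.println("Pi: performing operation si");
                siDone.countDown();                          // announce that si is done
            });

            Thread pj = new Thread(() -> {
                try {
                    siDone.await();                          // Pj blocks here until si is done
                    System.out.println("Pj: performing operation sj");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            pj.start();   // even if Pj starts first, it waits for Pi's signal
            pi.start();
            pi.join();
            pj.join();
        }
    }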
Example 4.6 (1st edn) – a design to reduce the time of execution
• Compute Y = HCF(Amax, X), where A is an array of N elements
• Insert Y in array A; array A is in ascending order
• The task is to determine the number of processes and the computations in each process
• The problem is split into the following steps:
1. Read the N elements of array A
2. Find Amax
3. Read X
4. Determine Y = HCF(Amax, X)
5. Include Y in array A and arrange A in ascending order
• Each step is considered to be a separate process – then determine which of these processes interact
• Steps 2, 4, and 5 are interacting processes
• Concurrency can be achieved by splitting steps 2 and 5 into two parts
2(a) Copy array A to array B
2(b) Find Amax
5(a) Arrange array B in ascending order
5(b) Include Y in array B at the appropriate place
[Figure: precedence graph of processes P1–P6, covering the steps: read elements of A, copy A to B, find Amax, read X, Y = HCF(Amax, X), arrange B in ascending order, include Y in array B]
Which are the processes that can run concurrently?
• P1 and P3 can run concurrently
• P1 and P2 cannot run concurrently, as they share array A
• P2 must go after P1 finishes
• P4 can be initiated only after P1 and P3 terminate
• P5 can be initiated after P1 terminates, while P6 can start only after P4 and P5 terminate
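A hedged Java sketch of just these ordering constraints (the bodies of P1–P6 are placeholders, since the transcript does not pin down exactly which computation each process performs): start() launches a process-like activity and join() waits for its termination.

    public class PrecedenceDemo {
        // Placeholder for the actual computation of each process Pi.
        static Thread task(String name) {
            return new Thread(() -> System.out.println(name + " runs"), name);
        }

        public static void main(String[] args) throws InterruptedException {
            Thread p1 = task("P1"), p2 = task("P2"), p3 = task("P3"),
                   p4 = task("P4"), p5 = task("P5"), p6 = task("P6");

            p1.start();            // P1 and P3 can run concurrently
            p3.start();

            p1.join();             // P2 and P5 may start only after P1 finishes
            p2.start();
            p5.start();

            p3.join();             // P4 needs both P1 and P3 to have terminated
            p4.start();

            p4.join();             // P6 starts only after P4 and P5 terminate
            p5.join();
            p6.start();

            p2.join();             // wait for everything before exiting
            p6.join();
        }
    }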
Interprocess messages
Advantages of message passing
Implementation of interacting processes
fork-join and parbegin-parend
• fork spawns a new process, and join waits for a previously created process to terminate
[Figure: fork-join schematic – process 0 executes FORK instructions at labels A and B, creating processes 1 and 2; the processes execute JOIN J (join count 3), after which a single process continues from statement J+1]
Fork-join version:

          FOR I = 1 TO 100
             READ A[I];
          M = 3;
          FORK LAB1;
          FORK LAB2;
          GOTO LAB3;
    LAB1: X = MIN(A);
          GOTO LAB3;
    LAB2: Y = MAX(A);
    LAB3: JOIN M;
          RESULT = Y/X;
Parbegin-parend version:

          FOR I = 1 TO 100
             READ A[I];
          PARBEGIN
             X = MIN(A);
             Y = MAX(A);
          PAREND
          RESULT = Y/X;
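For comparison, a hedged Java sketch of the same computation: starting the two computations as threads plays the role of fork/parbegin, and join() plays the role of join/parend (stream min/max stand in for the MIN and MAX routines of the pseudocode):

    import java.util.Arrays;
    import java.util.Random;

    public class ParbeginStyleDemo {
        public static void main(String[] args) throws InterruptedException {
            // Stand-in for "FOR I = 1 TO 100 READ A[I]": fill A with sample values.
            int[] a = new Random().ints(100, 1, 1000).toArray();

            final int[] x = new int[1];   // holds MIN(A)
            final int[] y = new int[1];   // holds MAX(A)

            // FORK / PARBEGIN: the two computations run concurrently.
            Thread minThread = new Thread(() -> x[0] = Arrays.stream(a).min().getAsInt());
            Thread maxThread = new Thread(() -> y[0] = Arrays.stream(a).max().getAsInt());
            minThread.start();
            maxThread.start();

            // JOIN / PAREND: wait until both computations are done.
            minThread.join();
            maxThread.join();

            double result = (double) y[0] / x[0];   // RESULT = Y/X
            System.out.println("RESULT = " + result);
        }
    }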
Signals
• A process can send signals to other processes to convey exceptional situations
• A process must anticipate signals from other processes
• It must provide a signal handler for each such signal. It specifies the handler through a system call, and the kernel notes information about the signal handler
• The kernel activates the signal handler when a signal is sent to the process
• Schematic of signals on the next slide
Signal handling by process Pi
(a) Initialization, (b) Signal processing
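Java provides no portable API for handling arbitrary OS signals, but a JVM shutdown hook gives a rough flavor of the register-a-handler idea described above (here the JVM, not the application, is what actually interacts with the kernel; the class name is illustrative):

    public class SignalFlavorDemo {
        public static void main(String[] args) throws InterruptedException {
            // Register a handler-like action; the JVM runs it when the process is asked
            // to terminate (e.g., Ctrl-C sends SIGINT on Unix-like systems).
            Runtime.getRuntime().addShutdownHook(new Thread(() ->
                    System.out.println("shutdown hook: cleaning up before exit")));

            System.out.println("running; press Ctrl-C to terminate");
            Thread.sleep(10_000);   // give the user time to interrupt
        }
    }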
Processes in Unix
• A process operates in two modes: user mode and kernel mode.
  – When a process makes a system call, it enters the kernel mode and itself executes the kernel code for handling the system call
  – It reenters the user mode after handling the system call
  – Hence two running states: user running and kernel running
  – A process in the kernel running state is non-interruptible
  – A process in kernel running mode gets blocked when it makes an I/O request
  – Kernel code is written in a reentrant manner, so that a process can enter the kernel mode even if other processes are blocked in that mode
Process state transitions in Unix
Threads in Solaris
• Solaris provides three entities for concurrency:
  – User threads: managed by a threads library
  – Lightweight processes (LWPs): a unit of parallelism within a process. The threads library maps user threads onto LWPs. Several LWPs may be created within a process.
  – Kernel threads: a kernel thread is associated with each LWP. The kernel also creates some kernel threads for its own use, e.g., a thread to handle disk I/O.
• The mapping between threads and LWPs influences parallelism (see the hybrid models' schematic)
Processes and threads in Linux
• Linux supports kernel-level threads
• Threads and processes are treated alike except at creation
  – A thread shares the information about memory management, current directory, open files, and signal handlers of its parent process; a process does not share any information of its parent
• A thread or process contains information about
  – Its parent
  – Its deemed parent, to whom its termination should be reported
Processes and threads in Linux
• Process and thread states:
  – TASK_RUNNING: scheduled or waiting to be scheduled
  – TASK_INTERRUPTIBLE: sleeping on an event, but may receive a signal
  – TASK_UNINTERRUPTIBLE: sleeping and may not receive a signal
  – TASK_STOPPED: operation has been stopped by a signal
  – TASK_ZOMBIE: operation completed, but its parent has not issued a system call to check whether it has terminated
• Interruptibility simplifies the implementation of signals
Processes and threads in Windows
• A process is a unit for resource allocation, and a thread is a unit for concurrency. Hence each process must have at least one thread in it
• Thread states:
  – Standby: the thread has been selected to run on a CPU
  – Transition: the thread's kernel stack has been swapped out
Thread state transitions in Windows 2000