MultiThreaded Concepts and Programming
MultiThreaded Concepts
and Programming
Liran Zvibel
4.5.99
Copyright © 1999 Elbit Medical Imaging -- Tor Systems
Organization of the lecture:
• Historical review.
• What is a thread.
• Why and when to use threads.
• When NOT to use threads.
• Can threads cause problems?
• Some programming techniques.
• How the hell do I debug threads!?
Subjects not included
• Thread implementation in different OSes.
• Usage of threads in a specific OS.
• Advanced programming techniques and
algorithms.
• Theory of deadlocks.
Historical Review
In the quest for better system utilization:
• One job at a time systems
• Batch systems with SPOOLing
• MultiProgramming / MultiTasking (timesharing)
• MultiThreaded programming
Thread -- A definition
Thread (thred) n. very thin twist of wool,
cotton, linen, silk, etc.; filament as of
gold, silver; prominent spiral part of
screw; consecutive train of thought.
-- New Webster’s Dictionary.
What is a thread?
A thread is a basic unit of CPU utilization.
A thread consists of:
• A Program Counter
• Register Set
• Stack Space
• Priority
• Thread ID

A thread shares with its peer threads:
• Code Section
• Data Section
• Operating System resources
• Process ID
What is MultiThreaded
programming?
Basically, multithreaded programming is implementing your software so that two
or more activities can be performed in parallel within the same application.
Programmers should be aware of the OS rules and the way it governs the
different threads, and of how threads can interact with the “outer world”
(other threads/processes).
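A minimal sketch of the idea, using POSIX threads for illustration (the lecture does not pick a specific API, and the function names do_io and do_compute are invented for the example):

/* Two activities running in parallel within one process.            */
/* Both threads share the process's code, data and OS resources, but */
/* each gets its own stack, registers and program counter.           */
#include <pthread.h>
#include <stdio.h>

void *do_io(void *arg)      { printf("reading input...\n");     return NULL; }
void *do_compute(void *arg) { printf("crunching numbers...\n"); return NULL; }

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, do_io, NULL);
    pthread_create(&t2, NULL, do_compute, NULL);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}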
Thread Scheduling
The OS schedules each thread in a round-robin (RR) fashion: each thread is
allocated a small amount of time, called a quantum, to run on the CPU.
When switching between two threads the
OS performs a context switch.
• Intra-process Context Switch
• Inter-process Context Switch
Thread States:
• Running
• Ready
• Blocked
• Suspended
Why write a MultiThreaded
program?
• Increased/Maximum Parallelization
• Faster processing
• Simplified design
• Increased robustness
• Increased responsiveness to the user
• Better use of the CPU
When NOT to use threads
• You don’t have a really good reason to
use threads. (Just for the heck of it isn’t
good enough…)
• The OS overhead would not make it worthwhile: if your thread does not
perform a lot of work, the context switch is not worth the cost.
Can threads cause problems?
• Deadlocks
• Starvation
• Destruction of data structures
• Race conditions.
• Problems with synchronization
Race Conditions
Incrementing num = 5 twice:

thread A: fetch num into a register
thread B: fetch num into a register
thread A: increment and write back num
thread B: increment and write back num

Final value of num is 6 and not 7.
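A sketch of the same race in code, again using POSIX threads for illustration. The three steps inside increment() mirror the fetch / increment / write-back sequence above; with no locking, one of the two increments can be lost:

#include <pthread.h>
#include <stdio.h>

int num = 5;

void *increment(void *arg)
{
    int reg = num;   /* fetch num into a register */
    reg = reg + 1;   /* increment                 */
    num = reg;       /* write the new value back  */
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("num = %d\n", num);   /* usually 7, but 6 when the fetches interleave */
    return 0;
}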
Destruction of Data Structures,
an example
typedef struct tagGADGET
{
    struct tagGADGET * pNext;
} GADGET, *PGADGET;

PGADGET gpFreeGadgets = NULL;
[Diagram: gpFreeGadgets -> Gadget1 -> Gadget2 -> Gadget3]
Destruction -- cont.
PGADGET GetAGadget (void)
{
    PGADGET pFreeGadget = gpFreeGadgets;
    if (pFreeGadget) {
        gpFreeGadgets = gpFreeGadgets->pNext;
    }
    return pFreeGadget;
}

void FreeAGadget (PGADGET pGadget)
{
    pGadget->pNext = gpFreeGadgets;
    gpFreeGadgets = pGadget;
}
Destruction -- Cont.
Now consider that there are two threads: thread A only calls GetAGadget, and
thread B only calls FreeAGadget.
Let's assume that the OS lets B work until the first assignment, then
schedules thread A, and then thread B continues processing.
What Happens?
Destruction -- Cont.
[Diagram: the free list at three points in time.
Beginning of thread B: B's FreeAGadget sets FreeGadget->pNext to Gadget1,
while gpFreeGadgets still points at Gadget1 -> Gadget2 -> Gadget3.
A scheduled: A's GetAGadget pops Gadget1, so gpFreeGadgets now points at
Gadget2 -> Gadget3.
B finishes: gpFreeGadgets is set to FreeGadget, whose pNext still points at
Gadget1 -- a gadget thread A is now using -- so the list is corrupted.]
MultiThreaded Concepts
• Atomic operations
• Mutual Exclusion (mutex)
• Race Condition
• Starvation
MT Concepts -- Cont
• Semaphores
• Priority Inversion
• Deadlocks
Solving the Mutex problem
• TestAndSet: an atomic operation that checks whether a memory location
contains 0 and, if so, changes it to the requested value -- all in a single
indivisible step.
• Spinlock (a code sketch follows the next list):
while (testandset (value)) continue;
The problems with SpinLocks:
• Needs hardware assistance
• Busy Waiting.
• Fairness (starvation)
• Priority Inversion
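A sketch of the spinlock idea in code. The lecture predates C11, so the atomic_flag type here merely stands in for the hardware test-and-set instruction:

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear == unlocked */

void spin_lock(void)
{
    /* testandset: atomically set the flag and return its old value; */
    /* keep busy waiting while another thread already holds it.      */
    while (atomic_flag_test_and_set(&lock_flag))
        continue;
}

void spin_unlock(void)
{
    atomic_flag_clear(&lock_flag);
}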
Mutex solution in software
G.L. Peterson, 1981
enum {False = 0, True = 1};
int interested[2] = {False, False}, turn = 0;

void EnterCriticalSection(int this_thread) {
    int other_thread = 1 - this_thread;
    interested[this_thread] = True;
    turn = this_thread;
    while (turn == this_thread && interested[other_thread])
        ;   /* busy wait */
}

void LeaveCriticalSection(int this_thread) {
    interested[this_thread] = False;
}
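A short usage sketch of the code above, assuming exactly two threads that each know their own id (0 or 1); the worker function is invented for the example. Note that on modern CPUs that reorder memory operations the shared variables would also need memory barriers; the algorithm is shown in its classic textbook form:

/* hypothetical worker run once by thread 0 and once by thread 1 */
void *worker(void *arg)
{
    int my_id = *(int *)arg;          /* 0 or 1 */

    EnterCriticalSection(my_id);
    /* ... touch the shared data here ... */
    LeaveCriticalSection(my_id);
    return NULL;
}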
Semaphores
E.W.Dijkstra, 1965
Semaphores provide a method of synchronizing access to a shareable resource
by multiple threads. They allow multiple concurrent accesses to a finite
number of shared resources: the semaphore value is initialized to the number
of shared resources, and each time a thread acquires a resource the semaphore
value is decremented.
Semaphores -- cont.
• Usually semaphores do not have a busy-wait implementation within the OS.
• The interface is easy to use, and allows
for resource counting.
• Can be used as a simple lock (then called
binary semaphore -- the semaphore is
initialized to 1)
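A sketch of resource counting with the POSIX semaphore API (chosen only for illustration; the lecture does not name a specific API). NUM_BUFFERS and free_buffers are invented names:

#include <semaphore.h>

#define NUM_BUFFERS 3

sem_t free_buffers;

void setup(void)
{
    sem_init(&free_buffers, 0, NUM_BUFFERS);  /* value = number of resources */
}

void use_a_buffer(void)
{
    sem_wait(&free_buffers);     /* acquire one resource; blocks when none left */
    /* ... work with the buffer ... */
    sem_post(&free_buffers);     /* give the resource back */
}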
Barrier
I.Carmon, 1988
A barrier is a way to force a number of threads to work together.
All the threads first have to wait until all of them reach a certain point in
the code (the barrier); when the last thread arrives, all of the threads are
set free and continue working.
Barrier -- Implementation
typedef struct {
    lock in = 0, out = 1;   /* pseudocode: 'in' starts unlocked, 'out' starts locked */
    int count = 0;
} barrier;

void Barrier(barrier *b, int n)
{
    lock(b->in);
    if (++b->count < n) {        /* not the last to arrive: wait at the barrier */
        unlock(b->in);
        lock(b->out);            /* blocks until the last thread arrives */
    }
    if (--b->count > 0)
        unlock(b->out);          /* release the next waiting thread */
    else
        unlock(b->in);           /* last one out re-arms the barrier */
}
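For comparison, a sketch of the same barrier idea using the POSIX pthread_barrier_* API (where available), which packages the lock-based logic above behind a single call; NUM_WORKERS, checkpoint and worker are invented names:

#include <pthread.h>

#define NUM_WORKERS 4

pthread_barrier_t checkpoint;

/* call once before starting the workers */
void setup(void)
{
    pthread_barrier_init(&checkpoint, NULL, NUM_WORKERS);
}

void *worker(void *arg)
{
    /* ... phase 1 work ... */
    pthread_barrier_wait(&checkpoint);   /* everyone waits here */
    /* ... phase 2 work: runs only after all NUM_WORKERS have arrived ... */
    return NULL;
}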
Debugging Threads.
• Thinking in parallel
• Remember that those bugs are (mostly)
not easily reproducible.
• Know your code well!
• Use the Visual C++ Thread and Callstack
windows
Debugging Threads -- Cont.
• Know your primary thread.
• Know what thread you are currently
debugging.
• Watch for context switches while
stepping.
• Time does not stand still in the debugger.
The End
Any Questions?