Concurrency in Programs


CHAPTER ELEVEN
Concurrent Programming
Programming Languages – Principles and Paradigms, by Allen Tucker and Robert Noonan
Background:
• Parallelism can provide a distinct way of
conceptualizing problems.
• Modern processors can contain a number of
functional units that cannot be fully utilized
without considering explicit parallelism.
• Operating systems have long embraced the
concept of concurrent programming (more
about this later).
• Applications are finding more and more use for
concurrent/parallel operation (examples?).
Concepts:
• A process is a program in execution.
• A process has state, including resources like
memory and open files, as well as the contents of
its program counter.
• This is the process’s execution context.
• A concurrent program has more than one
execution context.
• This type of program is also referred to as a
multi-threaded program.
• A parallel program is a concurrent program with
more than one thread executing simultaneously.
Concepts:
• A distributed program is a system of programs
designed to execute simultaneously on a network
of computers.
• Because the individual processes may be
executed on different computers, they do not share
memory (as threads do).
• Distributed programs must communicate using
some form of messaging.
• Multi-threaded programs most often
communicate using shared (i.e., non-local)
variables.
Concurrency in Operating Systems:
• The earliest operating systems executed
programs in batch mode, sequentially executing
programs from start to finish.
• Later OSs were multiprogrammed, loading and
executing several programs in an interleaved
fashion.
• A scheduler forced execution to switch among
the active processes (context switching).
• Modern OSs use interactive time-sharing,
quickly switching among processes to give the
appearance of simultaneous execution of many
programs (even on a single CPU).
Concurrency in Programs:
• Concurrent execution of a program can use
multiple processors (or multiple cores within a
single CPU), or can interleave threads on a single
processor using time slicing.
• In Java and Ada, separate threads are applied to
functions or methods.
• Starting a thread is different from the normal
procedure for calling a function (see the sketch
after this list):
 The invoking thread doesn’t wait for the new thread
to complete.
 When the new thread does complete, control doesn’t
return to the invoking thread.
States of a Thread
Figure 11.1 shows the five states a thread can occupy:
• Created but not yet ready to run
• Waiting to gain access to some resource
• Ready to run but waiting for a processor
• Actually executing on a processor
• Finished
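Java exposes these states through the Thread.State enum (NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED); a minimal sketch of observing them:

    public class ThreadStates {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> {
                try { Thread.sleep(100); } catch (InterruptedException e) { }
            });
            System.out.println(t.getState()); // NEW: created, not yet ready to run
            t.start();
            System.out.println(t.getState()); // RUNNABLE (or TIMED_WAITING once asleep)
            t.join();                         // wait for it to finish
            System.out.println(t.getState()); // TERMINATED: finished
        }
    }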
Communication:
• All concurrent programs involve inter-thread
communication or interaction:
 Threads compete for exclusive access to shared
resources (such as?).
 Threads communicate to share data.
• Threads can communicate using:
 Non-local shared variables (used by Java).
 Message passing (used by Ada).
 Access parameters (used by Ada along with
message passing; they behave like pointers).
• Note that Ada uses a mechanism called a rendezvous to
provide points of synchronization among threads.
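Java can also approximate message passing; as an illustrative sketch (not Ada's rendezvous), a java.util.concurrent.BlockingQueue lets one thread hand data to another without exposing a shared variable:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class MessagePassing {
        public static void main(String[] args) {
            BlockingQueue<String> channel = new ArrayBlockingQueue<>(10);
            new Thread(() -> {
                try { channel.put("hello"); }               // send a message
                catch (InterruptedException e) { }
            }).start();
            new Thread(() -> {
                try { System.out.println(channel.take()); } // receive; blocks until a message arrives
                catch (InterruptedException e) { }
            }).start();
        }
    }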
Sharing Access:
• The fundamental problem in shared access is
called a race condition.
• Race conditions can occur when operations on
shared variables are not atomic.
• When two or more threads interleave a sequence
like “load value, change value, store value,” the
results become unpredictable (a sketch follows
below).
• Some computers provide atomic test-and-set
instructions to help avoid such problems (e.g., the
DEC Alpha).
• Programmatically, code that accesses a shared
variable is termed a critical section.
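A minimal sketch of such a race in Java (my example): counter++ is really a load/add/store sequence, so two threads can interleave and lose updates:

    public class Race {
        static int counter = 0;                    // shared variable

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++)
                    counter++;                     // load, add 1, store: not atomic
            };
            Thread a = new Thread(work), b = new Thread(work);
            a.start(); b.start();
            a.join();  b.join();
            System.out.println(counter);           // usually less than 200000
        }
    }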
Critical Sections:
• For a thread to safely execute a critical section
some form of locking mechanism must be
implemented.
• The locking mechanism ensures that only one
thread at a time is executing within the critical
section, making it effectively atomic.
• When implementing locks you must beware of
possible deadlock conditions (how can this
happen?).
• It is also possible for locks to be unfair, giving
preference to some threads and starving others.
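In Java the synchronized keyword supplies such a lock; a sketch (my example) that makes the increment from the race above a critical section:

    public class SafeCounter {
        private int counter = 0;
        private final Object lock = new Object();  // lock guarding the critical section

        public void increment() {
            synchronized (lock) {                  // only one thread at a time past here
                counter++;                         // now effectively atomic
            }
        }

        public int get() {
            synchronized (lock) { return counter; }
        }
    }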
Semaphores:
• Semaphores were invented by Dijkstra in 1968.
• Semaphores consist of a shared integer variable
and an associated queue to hold blocked threads.
• There are two atomic operations defined on
semaphores:
 P(s) – if s > 0 then set s = s – 1; otherwise the thread blocks on s.
 V(s) – if a thread is blocked on s, wake it; otherwise set s = s + 1.
• If a semaphore only takes on the values 0 and 1
it is called a binary semaphore.
• If the semaphore can take on any non-negative
integer value it is called a counting semaphore.
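Java's java.util.concurrent.Semaphore provides the same two operations under the names acquire (P) and release (V); a sketch of a binary semaphore used as a lock:

    import java.util.concurrent.Semaphore;

    public class SemLock {
        static final Semaphore mutex = new Semaphore(1); // binary: initialized to 1
        static int shared = 0;

        static void update() throws InterruptedException {
            mutex.acquire();                             // P(s): decrement, or block if s == 0
            try {
                shared++;                                // critical section
            } finally {
                mutex.release();                         // V(s): wake a blocked thread, or increment
            }
        }
    }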
A word about Edsger Dijkstra:
• Professor Edsger Wybe Dijkstra, a noted pioneer of the
science and industry of computing, died after a long
struggle with cancer on 6 August 2002 at his home in
Nuenen, the Netherlands.
• Dijkstra was the 1972 recipient of the ACM Turing Award,
often viewed as the Nobel Prize for computing.
• He was particularly acerbic about the many sins he
considered encouraged by the Basic programming
language, which he said irreparably harmed young
programmers, and wrote a famous paper: Go To
Statement Considered Harmful.
• The operations get their name from the Dutch proberen
(to test) and verhogen (to increment).
Simple Producer-Consumer Cooperation Using Semaphores
Figure 11.2: With only one producer and one consumer, we can use a pair of binary semaphores to create a critical section around the variable buffer.
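A sketch of the idea in Java (names are mine, not necessarily those in Figure 11.2): the producer waits on empty before writing, the consumer waits on full before reading, and each releases the other's semaphore:

    import java.util.concurrent.Semaphore;

    public class OneProducerOneConsumer {
        static int buffer;                               // the single shared cell
        static final Semaphore empty = new Semaphore(1); // buffer starts empty
        static final Semaphore full  = new Semaphore(0); // nothing to consume yet

        public static void main(String[] args) {
            new Thread(() -> {                           // producer
                for (int i = 1; i <= 5; i++) {
                    try {
                        empty.acquire();                 // wait until the buffer is empty
                        buffer = i;                      // deposit an item
                        full.release();                  // signal: buffer is full
                    } catch (InterruptedException e) { return; }
                }
            }).start();
            new Thread(() -> {                           // consumer
                for (int i = 1; i <= 5; i++) {
                    try {
                        full.acquire();                  // wait until the buffer is full
                        System.out.println(buffer);      // remove the item
                        empty.release();                 // signal: buffer is empty
                    } catch (InterruptedException e) { return; }
                }
            }).start();
        }
    }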
Multiple Producers-Consumers
Figure 11.3: With multiple producers and consumers, we use counting semaphores to keep track of a circular buffer pool. As before, we use binary semaphores to create a critical section protecting the buffer.
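A sketch of the multi-producer version in Java (my naming, not the book's): two counting semaphores track free and filled slots in the circular pool, and a binary semaphore protects the buffer indices:

    import java.util.concurrent.Semaphore;

    public class BoundedBuffer {
        private final int[] pool = new int[8];            // circular buffer pool
        private int in = 0, out = 0;                      // next write / read slots
        private final Semaphore empty = new Semaphore(8); // counting: free slots
        private final Semaphore full  = new Semaphore(0); // counting: filled slots
        private final Semaphore mutex = new Semaphore(1); // binary: guards the pool

        public void put(int item) throws InterruptedException {
            empty.acquire();                 // wait for a free slot
            mutex.acquire();                 // enter the critical section
            pool[in] = item;
            in = (in + 1) % pool.length;     // advance circularly
            mutex.release();                 // leave the critical section
            full.release();                  // one more item available
        }

        public int take() throws InterruptedException {
            full.acquire();                  // wait for an item
            mutex.acquire();
            int item = pool[out];
            out = (out + 1) % pool.length;
            mutex.release();
            empty.release();                 // one more free slot
            return item;
        }
    }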
Next time… More Concurrent Programming