Transcript: Concurrency

Concurrency
CS-502 Operating Systems
Fall 2007
(Slides include materials from Operating System Concepts, 7th ed., by Silberschatz, Galvin, & Gagne and
from Modern Operating Systems, 2nd ed., by Tanenbaum)
CS-502 Fall 2007
Concurrency
1
Concurrency
• Since the beginning of computing, management of
concurrent activity has been the central issue
• Concurrency between computation and input or
output
• Concurrency between computation and user
• Concurrency between essentially independent
computations that take place at same time
• Concurrency between parts of large computations
that are divided up to improve performance
• …
Example – 1960s
• Programmers tried to write programs that
would read from input devices and write to
output devices in parallel with computing
• Card readers, paper tape, line printers, etc.
• Challenges
• Keeping the buffers straight
• Synchronizing between read activity and computing
• Computing getting ahead of I/O activity
• I/O activity getting ahead of computing
Shared Computing Services
• Multiple simultaneous, independent users of
large computing facilities
• E.g., Time Sharing systems of university computing
centers
• Data centers of large enterprises
• Multiple accounting, administrative, and data
processing activities over common databases
• …
Modern PCs and Workstations
• Multiple windows in personal computer doing
completely independent things
• Word, Excel, Photoshop, E-mail, etc.
• Multiple activities within one application
– E.g., in Microsoft Word
• Reading and interpreting keystrokes
• Formatting line and page breaks
• Displaying what you typed
• Spell checking
• Hyphenation
• …
Modern Game Implementations
• Multiple characters in game
• Concurrently & independently active
• Multiple constraints, challenges, and
interactions among characters
• Multiple players
Technological Pressure
• From early 1950s to early 2000s, single
processor computers increased in speed by a
factor of 2 every 18 months or so
• Moore’s Law
• Multiprocessing was somewhat of a niche
problem
• Specialized computing centers, techniques
Technological Pressure (continued)
• No longer!
• Modern microprocessor clock speeds are no
longer increasing with Moore’s Law
• Microprocessor density on chips still is!
• Multi-core processors are now the de facto
standard
• Even on low-end PCs!
Traditional Challenge for OS
• Useful set of abstractions that help to
– Manage concurrency
– Manage synchronization among concurrent
activities
– Communicate information in useful way among
concurrent activities
– Do it all efficiently
Modern Challenge
• Methods and abstractions to help software
engineers and application designers
– Take advantage of inherent concurrency in
modern systems
– Do so with relative ease
Fundamental Abstraction
• Process
• … aka Thread
• … aka Task
• … aka [other terms]
Process (a generic term)
• Definition: A particular execution of a
program.
• Requires time, space, and (perhaps) other resources
• Separate from all other executions of the
same program
• Even those at the same time!
• Separate from executions of other programs
Process (a generic term – continued)
• …
• Can be
• Interrupted
• Suspended
• Blocked
• Unblocked
• Started or continued
• Fundamental abstraction of all modern operating
systems
• Also known as thread (of control), task, job, etc.
• Note: a Unix/Windows “Process” is a heavyweight concept
with more implications than this simple definition
Process (a generic term – continued)
• Concept emerged in 1960s
• Intended to make sense out of mish-mash of
difficult programming techniques that
bedeviled software engineers
• Analogous to police or taxi dispatcher!
Background – Interrupts
• A mechanism in (nearly) all computers by which a
running program can be suspended in order to
cause processor to do something else
• Two kinds:–
• Traps – synchronous, caused by running program
– Deliberate: e.g., system call
– Error: divide by zero
• Interrupts – asynchronous, spawned by some other concurrent
activity or device.
• Essential to the usefulness of computing systems
Hardware Interrupt Mechanism
• Upon receipt of electronic signal, the processor
• Saves current PSW to a fixed location
• Loads new PSW from another fixed location
• PSW — Program Status Word
• Program counter
• Condition code bits (comparison results)
• Interrupt enable/disable bits
• Other control and mode information
– E.g., privilege level, access to special instructions, etc.
• Occurs between machine instructions
• An abstraction in modern processors (see Silberschatz, §1.2.1
and §13.2.2)
Interrupt Handler
/* Enter with interrupts disabled */
Save registers of interrupted computation
Load registers needed by handler
Examine cause of interrupt
Take appropriate action (brief)
Reload registers of interrupted computation
Reload interrupted PSW and re-enable interrupts
or
Load registers of another computation
Load its PSW and re-enable interrupts
Requirements of
interrupt handlers
• Fast
• Avoid possibilities of interminable waits
• Must not count on correctness of interrupted
computation
• Must not get confused by multiple interrupts in
close succession
• …
• More challenging on multiprocessor systems
Result
• Interrupts make it possible to have
concurrent activities at same time
• Don’t help in establishing some kind of
orderly way of thinking
• Need something more
• Hence, emergence of generic concept of
process
– (or whatever it is called in a particular operating
system and environment)
Information the system needs to maintain
about a typical process
• PSW (program status word)
• Program counter
• Condition codes
• Control information – e.g., privilege level, priority, etc.
• Registers, stack pointer, etc.
• Whatever hardware resources needed to compute
• Administrative information for OS
• Owner, restrictions, resources, etc.
• Other stuff …
Process Control Block (PCB)
(example data structure in an OS)
Switching from process to process
Result
• A very clean way of thinking about separate
computations
• Processes can appear to be executing in
parallel
• Even on a single processor machine
• Processes really can execute in parallel
• Multiprocessor or multi-core machine
• …
Process States
The Fundamental Abstraction
• Each process has its own “virtual” CPU
• Each process can be thought of as an
independent computation
• On a fast enough CPU, processes can look
like they are really running concurrently
Questions?
Challenge
• How can we help processes synchronize
with each other?
• E.g., how does one process “know” that another has
completed a particular action?
• E.g., how do separate processes “keep out of each
other’s way” with respect to some shared resource?
Digression (thought experiment)
static int y = 0;

int main(int argc, char **argv)
{
    extern int y;
    y = y + 1;
    return y;
}
Upon completion of main, y == 1
Thought experiment (continued)

static int y = 0;

Process 1:
int main(int argc, char **argv)
{
    extern int y;
    y = y + 1;
    return y;
}

Process 2:
int main2(int argc, char **argv)
{
    extern int y;
    y = y - 1;
    return y;
}

Assuming processes run “at the same time,” what are
possible values of y after both terminate?
Another Abstraction – Atomic Operation
• An operation that either happens entirely or
not at all
• No partial result is visible or apparent
• Non-interruptible
• If two atomic operations happen “at the
same time”
• Effect is as if one is first and the other is second
• (Usually) don’t know which is which
Hardware Atomic Operations
• On (nearly) all computers, reading and writing
operations of machine words can be considered as
atomic
• Non-interruptible
• It either happens or it doesn’t
• Not in conflict with any other operation
• When two attempts are made to read or write the same
data, one is first and the other is second
• Don’t know which is which!
• No other guarantees
• (unless we take extraordinary measures)
Definitions
• Definition: race condition
• When two or more concurrent activities try to do
something with the same variable, so that different
orders of execution yield different values
• Random outcome
• Critical Region (aka critical section)
• One or more fragments of code that operate on the
same data, such that at most one activity at a time
may be permitted to execute anywhere in that set of
fragments.
Synchronization – Critical Regions
Class Discussion
• How do we keep multiple computations
from being in a critical region at the same
time?
• Especially when number of computations is > 2
• Remembering that read and write operations are
atomic
Possible ways to protect critical section
• Without OS assistance — Locking variables
& busy waiting
– Peterson’s solution (p. 195-197)
– Atomic read-modify-write – e.g. Test & Set
• With OS assistance — abstract
synchronization operations
• Single processor
• Multiple processors
Requirements – Controlling Access to a
Critical Section
– Only one computation in critical section at a
time
– Symmetrical among n computations
– No assumption about relative speeds
– A stoppage outside critical section does not lead
to potential blocking of others
– No starvation — i.e. no combination of timings
that could cause a computation to wait forever
to enter its critical section
Non-solution

static int turn = 0;

Process 1:
while (TRUE) {
    while (turn != 0)
        /*loop*/;
    critical_region();
    turn = 1;
    noncritical_region1();
};

Process 2:
while (TRUE) {
    while (turn != 1)
        /*loop*/;
    critical_region();
    turn = 0;
    noncritical_region2();
};
What is wrong with this approach?
Peterson’s solution (2 processes)
static int turn = 0;
static int interested[2];

void enter_region(int process) {
    int other = 1 - process;
    interested[process] = TRUE;
    turn = process;
    while (turn == process && interested[other] == TRUE)
        /*loop*/;
};

void leave_region(int process) {
    interested[process] = FALSE;
};
Another approach: Test & Set Instruction
(built into CPU hardware)
static int lock = 0;
extern int TestAndSet(int &i);
/* sets the value of i to 1 and returns the previous value of i. */

void enter_region(int &lock) {
    while (TestAndSet(lock) == 1)
        /* loop */ ;
};

void leave_region(int &lock) {
    lock = 0;
};
What about this solution?
Variations
• Compare & Swap (a, b)
• temp = b
• b = a
• a = temp
• return (a == b)
• …
• A whole mathematical theory about efficacy of
these operations
• All require extraordinary circuitry in processor,
memory, and bus to implement atomically
With OS assistance
• Implement an abstraction:–
– A data type called semaphore
• Non-negative integer values.
– An operation wait_s(semaphore &s) such that
• if s > 0, atomically decrement s and proceed.
• if s = 0, block the computation until some other
computation executes post_s(s).
– An operation post_s(semaphore &s):–
• Atomically increment s; if one or more
computations are blocked on s, allow precisely one
of them to unblock and proceed.
Critical Section control with Semaphore

static semaphore mutex = 1;

Process 1:
while (TRUE) {
    wait_s(mutex);
    critical_region();
    post_s(mutex);
    noncritical_region1();
};

Process 2:
while (TRUE) {
    wait_s(mutex);
    critical_region();
    post_s(mutex);
    noncritical_region2();
};
Does this meet the requirements for controlling access to
critical sections?
Semaphores – History
• Introduced by E. Dijkstra in 1965.
• wait_s() was called P()
• Initial letter of a Dutch word meaning “test”
• post_s() was called V()
• Initial letter of a Dutch word meaning “increase”
Abstractions
• The semaphore is an example of a new and
powerful abstraction defined by OS
• I.e., a data type and some operations that add a
capability that was not in the underlying hardware or
system.
• Any program can use this abstraction to
control critical sections and to create more
powerful forms of synchronization among
computations.
Process and semaphore data structures

class State {
    long int PSW;
    long int regs[R];
    /* other stuff */
};

class PCB {
    PCB *next, *prev, *queue;
    State s;
    PCB(…);      /* constructor */
    ~PCB();      /* destructor */
};

class Semaphore {
    int count;
    PCB *queue;
    friend wait_s(Semaphore &s);
    friend post_s(Semaphore &s);
    Semaphore(int initial);   /* constructor */
    ~Semaphore();             /* destructor */
};
Implementation

[Diagram: a Ready queue of PCBs; Semaphore A (count = 0) with PCBs
blocked on its queue; Semaphore B (count = 2)]
Implementation

[Diagram: Ready queue of PCBs]

• Action – dispatch a process to CPU
• Remove first PCB from ready queue
• Load registers and PSW
• Return from interrupt or trap
• Action – interrupt a process
• Save PSW and registers in PCB
• If not blocked, insert PCB into ReadyQueue (in some order)
• Take appropriate action
• Dispatch same or another process from ReadyQueue
Implementation – Semaphore actions

[Diagram: Ready queue of PCBs]

• Action – wait_s(Semaphore &s)
• Implement as a Trap (with interrupts disabled)
if (s.count == 0)    /* event wait */
    – Save registers and PSW in PCB
    – Queue PCB on s.queue
    – Dispatch next process on ReadyQueue
else
    – s.count = s.count – 1;
    – Re-dispatch current process
Implementation – Semaphore actions

[Diagram: Ready queue of PCBs; Semaphore A (count = 0) with queued PCBs]

• Action – post_s(Semaphore &s)
• Implement as a Trap (with interrupts disabled)
if (s.queue != null)    /* event completion */
    – Save current process in ReadyQueue
    – Move first process on s.queue to ReadyQueue
    – Dispatch some process on ReadyQueue
else
    – s.count = s.count + 1;
    – Re-dispatch current process
Interrupt Handling
• (Quickly) analyze reason for interrupt
• Execute equivalent post_s to appropriate
semaphore as necessary
• Implemented in device-specific routines
• Critical sections
• Buffers and variables shared with device managers
• More about interrupt handling later in the
course
Timer interrupts
• Can be used to enforce “fair sharing”
• Current process goes back to ReadyQueue
• After other processes of equal or higher priority
• Simulates concurrent execution of multiple
processes on same processor
Definition – Context Switch
• The act of switching from one process to
another
• E.g., upon interrupt or some kind of wait for event
• Not a big deal in simple systems and
processors
• Very big deal in large systems such as
• Linux and Windows
• Pentium 4
• Many microseconds!
Complications for
Multiple Processors
• Disabling interrupts is not sufficient for
atomic operations
• Semaphore operations must themselves be
implemented in critical sections
• Queuing and dequeuing PCBs must also be
implemented in critical sections
• Other control operations need protection
• These problems all have solutions but need
deeper thought!
Summary
• Interrupts transparent to processes
• wait_s() and post_s() behave as if they are
atomic
• Device handlers and all other OS services
can be embedded in processes
• All processes behave as if concurrently
executing
• Fundamental underpinning of all modern
operating systems
Questions?