Processes and Threads
Operating Systems
CST 352
Processes and Threads
4/11/2016
CST 352 - Operating Systems
1
Topics
• Definitions
• Communications
– Process to Process
– Process to Thread
– Thread to Thread
• Scheduling
4/11/2016
CST 352 - Operating Systems
2
Definitions - Prelims
Concurrency – The appearance
that threads are running
simultaneously even though
there is a single CPU.
4/11/2016
CST 352 - Operating Systems
3
Definitions - Prelims
Context – The “processor” state
of a block of executing code.
This includes all registers
required to uniquely identify
this chain of execution.
4/11/2016
CST 352 - Operating Systems
4
Definitions - Process
Process – A group of instructions
along with the context defining the
execution "state(s)" of those
instructions.
Q: How does the concept of a “process” and
the code that describes the process parallel
the concept of a “class” and an “object”?
4/11/2016
CST 352 - Operating Systems
5
Definitions - Process
A process is an abstraction that defines the grouping of resources needed for execution.
In the days of “batch” processing,
processes executed to completion
before another process was loaded,
then started.
4/11/2016
CST 352 - Operating Systems
6
Definitions - Process
Batch Processing – What had to happen?
1. Operator is waiting for input (reading newspaper).
2. User writes a program on punch cards.
3. Operator takes a tray of punch cards from a stack of jobs.
4. Operator loads the cards into the system tray – computer reads each card into memory.
5. Operator starts the job.
6. System compiles the code in the job.
7. System jumps to the first machine instruction of compiled code.
8. Job runs to completion.
9. Operator places output in out box. – Go to 1.
4/11/2016
CST 352 - Operating Systems
7
Definitions - Process
Batch Processing – What had to happen?
The computer is only involved in steps 4, 6, 7, and 8.
Where did the OS reside?
4/11/2016
CST 352 - Operating Systems
8
Definitions - Process
Synchronous Processing – What had to happen?
1. System is waiting for input.
2. User writes program and compiles it on the computer.
3. User starts execution of the written program.
4. System loads program into memory for execution.
5. System jumps to the first instruction of program.
6. Program runs to completion.
7. System jumps back to state of waiting for input.
4/11/2016
CST 352 - Operating Systems
9
Definitions - Process
Preemptive Multitasking – What has to happen?
1. System is waiting for input (running SETI program).
2. User writes program and compiles it on the computer.
3. User starts execution of the written program.
   A. System loads next program into memory for execution.
   B. System jumps to the first instruction of program.
   C. Program runs until time slice time-out or program done.
   D. System jumps back to "A".
4/11/2016
CST 352 - Operating Systems
10
Definitions - Thread
In an older OS, each process had:
• Address space
– stack
– heap
• Single “thread” of control.
– Serial execution of instructions in the
executing program.
4/11/2016
CST 352 - Operating Systems
11
Definitions - Thread
A "thread" breaks apart the grouping of "resources and execution" that a process provides.
A process's main thread can "spawn" off threads. Each thread the process spawns shares the resources of the process.
4/11/2016
CST 352 - Operating Systems
12
Definitions - Thread
Q:
1. Define Process
2. Define Thread
3. Define Context
4/11/2016
CST 352 - Operating Systems
13
Processes and Threads
Processes:
Create primary thread:
1. Set up run-time stack.
2. Set up memory segment.
3. Create context.
4. Set ready for execute.
[Diagram: a newly created process with its Process Memory, Run-Time Stack, and Primary Thread.]
4/11/2016
CST 352 - Operating Systems
14
Processes and Threads
Processes:
• A process has a memory segment assigned to it.
• A process has a primary run-time stack assigned to it (used by the primary thread).
• Processes must communicate with a special mechanism.
  – A process has a protected code segment.
  – A process has a protected memory segment.
  – The OS must provide a special address space for interprocess communication.
4/11/2016
CST 352 - Operating Systems
15
Processes and Threads
Processes and Threads:
Create Thread:
1. Set up run-time stack.
2. Create context.
3. Set ready for execute.
• Execution is controlled by the parent process.
• The parent process must retain a leash on child threads.
• Threads may communicate through the process memory segment.
[Diagram: the Process Memory holds the Primary Thread (with its run-time stack) and a spawned Thread 2 (with its own run-time stack).]
4/11/2016
CST 352 - Operating Systems
16
Processes and Threads
Threads:
• Threads share the memory segment of a process (data and code).
• A thread is assigned its own run-time stack by the Operating System.
• Threads can communicate with other threads contained in the same process using data structures in the process memory segment.
• The data structures must be "thread-safe".
• The process and thread code must be written to be "thread-safe".
4/11/2016
CST 352 - Operating Systems
17
Processes and Threads
A preemptive multitasking OS is
made up of several processes:
1. Foreground processes – those that
require user interaction (shell, GUI,
etc.).
2. Background processes – those that run
in the background, performing their jobs
without user intervention (mailers,
network monitors, printing monitors,
etc).
4/11/2016
CST 352 - Operating Systems
18
Processes and Threads
Background processes that help the
OS with some task are called
daemons (not demons).
Daemon - an attentive benevolent entity. An
intermediary between gods and men.
4/11/2016
CST 352 - Operating Systems
19
Processes and Threads
Process/Thread States:
• Suspend State – Process is waiting for some event to occur (e.g. I/O, time of day, etc.).
• Active State – Process is ready to run, waiting for the CPU.
• Execute State – Process is switched into the CPU and has control of the execution unit.
• Blocked State – Process is waiting for access to a dynamic resource.
4/11/2016
CST 352 - Operating Systems
20
Processes and Threads
Process/Thread/Fiber States:
Keep in mind the processing hierarchy:
Processes contain threads; threads contain fibers.
– Process state change implies thread state change.
– Thread state change implies fiber state change.
– The inverse is not true.
4/11/2016
CST 352 - Operating Systems
21
Processes and Threads
Process State Transition Analysis:

Process Creation:
– Create process data segment.
– Create process code segment.
– Load op codes from disk into memory.
– Build run-time stack.
– Scheduler creates the process control block (TCB) and places it in the suspend list.

[State-transition diagram: states Process Entry, Suspend, Active, Execute, Blocked, Process Exit. Transitions: Process Entry → Suspend; Suspend → Active (Complete Req. -or- Activate); Active → Execute (Dispatch); Execute → Active (Preempt); Execute → Suspend (System Call or Logical Suspend); Execute → Blocked (Resource Request); Blocked → Active (Resource Available); Active → Blocked (Logical Block); Active → Suspend (Logical Suspend); Suspend → Blocked (Suspend Cleared - Process Still Blocked); Execute → Process Exit.]

4/11/2016
CST 352 - Operating Systems
22
Processes and Threads
Process Suspend Activation:
– The process system call is done and the process is ready for execute.
– Scheduler removes the TCB from the suspend list and places it on the active list.
[Same state-transition diagram; highlighted transition: Suspend → Active (Complete Req. -or- Activate).]
4/11/2016
CST 352 - Operating Systems
23
Processes and Threads
Process Dispatch:
– The TCB is switched into the CPU based on the scheduling algorithm:
  – Round Robin
  – Priority Based
  – Etc.
[Same state-transition diagram; highlighted transition: Active → Execute (Dispatch).]
4/11/2016
CST 352 - Operating Systems
24
Processes and Threads
Process Preempt:
– The TCB is switched out of the CPU. The scheduler uses the scheduling algorithm to determine the next TCB to be switched in.
[Same state-transition diagram; highlighted transition: Execute → Active (Preempt).]
4/11/2016
CST 352 - Operating Systems
25
Processes and Threads
Process Execute Block:
– The process requests some unavailable resource.
– The TCB is switched out of the CPU and placed in the "Blocked" list.
[Same state-transition diagram; highlighted transition: Execute → Blocked (Resource Request).]
4/11/2016
CST 352 - Operating Systems
26
Processes and Threads
Process Execute Suspend:
– The process makes a system call that requires a lengthy operation.
– The TCB is switched out of the CPU and placed in the "Suspend" list.
[Same state-transition diagram; highlighted transition: Execute → Suspend (System Call or Logical Suspend).]
4/11/2016
CST 352 - Operating Systems
27
Processes and Threads
Process Suspend Blocked:
– The process system call is done, but the resource required to complete the call is unavailable.
– The TCB is moved from the "Suspend" list to the "Blocked" list.
[Same state-transition diagram; highlighted transition: Suspend → Blocked (Suspend Cleared - Process Still Blocked).]
4/11/2016
CST 352 - Operating Systems
28
Processes and Threads
Process Block Activated:
– The requested resource becomes available.
– The TCB is moved from the Blocked list back to the Active list.
[Same state-transition diagram; highlighted transition: Blocked → Active (Resource Available).]
4/11/2016
CST 352 - Operating Systems
29
Processes and Threads
Process Active Block:
– The currently dispatched process "Blocks" an active process.
– The TCB is moved from the Active list to the Blocked list.
[Same state-transition diagram; highlighted transition: Active → Blocked (Logical Block).]
4/11/2016
CST 352 - Operating Systems
30
Processes and Threads
Process Active Suspend:
– The currently dispatched process "Suspends" an active process.
– The TCB is moved from the Active list to the Suspend list.
[Same state-transition diagram; highlighted transition: Active → Suspend (Logical Suspend).]
4/11/2016
CST 352 - Operating Systems
31
Processes and Threads
Process Terminate:
– The process exits, or the process is terminated by another process.
– Process resources are returned to the OS.
– The process memory segment is cleaned up.
– The process run-time stack is cleaned up.
[Same state-transition diagram; highlighted transition: Execute → Process Exit.]
4/11/2016
CST 352 - Operating Systems
32
Processes and Threads
Q:
1. What states are required for
implementation of a multitasking
kernel?
4/11/2016
CST 352 - Operating Systems
33
Processes and Threads
• Processes must be managed in
kernel space. Why is this the case?
• It is possible to manage threads in
user space. Why is this the case?
• What are advantages of managing
threads in user space vs. kernel
space?
4/11/2016
CST 352 - Operating Systems
34
OS Requirements for Process
Implementation
To exist, a process needs:
• Process Control Block – Contains the context of the process main thread.
• Process Code Segment – Contains the code relocated to a physical address space.
• Process Data Segment – Contains the memory "heap" assigned to the process.
• Process Run-time Stack – Contains the call stack assigned to the process main thread.
4/11/2016
CST 352 - Operating Systems
35
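Not on the slides: a minimal sketch of what a process control block might hold, tying the four items above to fields. The field names and types are illustrative assumptions, not the layout of any real OS.

// Illustrative only: one possible shape for a process control block (PCB/TCB).
#include <cstdint>
#include <vector>

enum class ProcState { Suspend, Active, Execute, Blocked };

struct Context                     // saved CPU state for one chain of execution
{
    std::uint64_t pc;              // program counter
    std::uint64_t sp;              // stack pointer
    std::uint64_t gpr[16];         // general-purpose registers
    std::uint64_t flags;           // status/flags register
};

struct ProcessControlBlock
{
    int              pid;          // process identifier
    ProcState        state;        // current scheduling state
    Context          context;      // context of the process main thread
    void*            codeSegment;  // base of the relocated code segment
    void*            dataSegment;  // base of the heap/data segment
    void*            runtimeStack; // base of the main thread's call stack
    int              priority;     // used by a priority scheduler
    std::vector<int> threadIds;    // threads owned by this process
};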
Processes and Threads
When a process gets suspended it can be
moved out of physical memory.
This activity is called “swapping”.
Don’t confuse swapping with virtual
memory.
4/11/2016
CST 352 - Operating Systems
36
Processes and Threads
Swapping of a process involves taking
all the process components (TCBs,
Code Segment, Data Segment,
Heap, and run-time stack(s)) and
moving them from memory to disk.
4/11/2016
CST 352 - Operating Systems
37
Processes and Threads
Paging of a process involves taking a
piece of a process (e.g. a page of
memory) and moving it from
memory to disk.
4/11/2016
CST 352 - Operating Systems
38
Processes and Threads
The disk space where the process components get written to disk is called the swap file (the page file in virtual memory systems).
• Windows 7 uses virtual memory. The page file is normally on the "C:" drive.
• UNIX sets aside a partition on the hard drive.
4/11/2016
CST 352 - Operating Systems
39
Processes and Threads
An OS that has a poorly tuned process
management scheme will do what is
known as “thrashing”.
Thrashing – The OS spends more time
moving process elements from disk to
memory and back again than it does
running processes.
4/11/2016
CST 352 - Operating Systems
40
Processes and Threads
Q:
1. Define Swapping
2. Define Paging
3. Does swapping require virtual
memory? Why or why not?
4/11/2016
CST 352 - Operating Systems
41
OS Requirements for
Multitasking Implementation
• Data Structures of TCBs for process
control:
– Suspend List (a linked list)
– Active Queue (priority queues, circular
linked list)
– Blocked List (a table of queues keyed on
a resource ID)
4/11/2016
CST 352 - Operating Systems
42
OS Requirements for
Multitasking Implementation
• Interrupt Service routine to
handle:
– I/O devices
– CPU interrupts
» Interrupt asserted for task switching
» This ISR calls the dispatcher
4/11/2016
CST 352 - Operating Systems
43
OS Requirements for Thread
Implementation
Kernel Space Threads:
• Process needs to maintain some
reference to threads it owns in a
thread table associated with the
TCB.
• The Kernel schedules threads based
on run-time activity of controlling
processes.
4/11/2016
CST 352 - Operating Systems
44
OS Requirements for Thread
Implementation
User Space Threads:
• Process needs to maintain some reference to threads it owns in a thread table.
• User space system calls are made to perform thread management. These calls cause the process to change "thread" context.
• The kernel only handles the scheduling of processes.
• User space thread management is analogous to "Cooperative Multitasking". If a thread in a process "runs amok", the other threads in the process will starve.
4/11/2016
CST 352 - Operating Systems
45
OS Requirements for Thread
Implementation
Hybrid Threads (one-scheme):
• Processes live in kernel space.
• Threads are managed in kernel space.
• User space threads are provided, known
as “light-weight” threads (fibers).
4/11/2016
CST 352 - Operating Systems
46
OS Requirements for Thread
Implementation
Hybrid Threads (another scheme):
• Processes live in kernel space.
• Threads live in user space.
• When a thread blocks, it gives the kernel what is basically a pointer to a call-back routine. The kernel then calls the "call-back" when the thread's resource becomes available.
4/11/2016
CST 352 - Operating Systems
47
Process Synchronization
Race Conditions:
Two or more threads are reading
or writing to a shared resource.
The final result depends on who
writes at what precise time.
4/11/2016
CST 352 - Operating Systems
48
Process Synchronization
Race Conditions: - Example
1. Process Thread A writes to a memory area, then gets switched out.
2. Process Thread B writes to the same memory area, then gets switched out.
3. Process Thread A gets switched back in and uses the memory area to make some calculation.
4. Process Thread A has no knowledge that Process Thread B has changed the value used for the calculation.
4/11/2016
CST 352 - Operating Systems
49
Process Synchronization
Race Conditions: - Example
Consider the function:
void swap( int& intOne, int& intTwo )
{
    static int temp;    // static: one copy shared by every thread that calls swap()
    temp = intTwo;
    intTwo = intOne;
    intOne = temp;
}
4/11/2016
CST 352 - Operating Systems
50
Process Synchronization
Race Conditions: - Example
1. Initial Conditions:
   Thread T1 – varOne = 5, varTwo = 8
   Thread T2 – varOne = 15, varTwo = 25
2. Thread T1 calls swap(varOne, varTwo);
3. 8 gets put in temp in the swap procedure.
4. Thread T1 gets switched out of the CPU.
5. Thread T2 gets switched in and calls swap(varOne, varTwo);
6. 25 gets put into temp. Thread T2 finishes swap.
7. Thread T1 gets switched back in. swap finishes.
8. In Thread T1, varOne now contains 25 and varTwo contains 5.
   T1 has no idea there was a problem.
4/11/2016
CST 352 - Operating Systems
51
Process Synchronization
Race Conditions:
Race conditions are a real problem to track down because the behavior changes based on CPU load, the set of currently active processes, the amount of memory, etc.
4/11/2016
CST 352 - Operating Systems
52
Process Synchronization
Race Conditions:
Necessary conditions for avoidance:
1. No two threads may be simultaneously accessing a shared resource.
2. No assumptions should be made about the CPU as a resource.
3. Any threads busy accessing a shared resource cannot be blocked by another thread (with the scheduler as an exception).
4. No thread should have to wait indefinitely for a shared resource.
4/11/2016
CST 352 - Operating Systems
53
Process Synchronization
Critical Regions:
Create an area in a program where a thread cannot be interrupted while it is executing.
This way, a thread is guaranteed to complete its task before any other thread has a chance to mess it up.
4/11/2016
CST 352 - Operating Systems
54
Process Synchronization
Critical Regions:
A critical region can be implemented
by writing a function call such that:
»first thread can enter the code
»subsequent threads will be blocked
until the first thread is done
4/11/2016
CST 352 - Operating Systems
55
Process Synchronization
Critical Regions:
Consider the scenario:
1. Thread 1 enters the critical region.
2. Thread 2 will not get CPU cycles.
3. Thread 1 GPFs inside the critical region.
4. What now happens to Thread 2?
4/11/2016
CST 352 - Operating Systems
56
Process Synchronization
Mutual Exclusion – Busy Waiting
1. Disabling Interrupts
Dangerous, because if the disabling thread dies, the system must be reset.
2. Using a locking variable.
Does not work properly, since there is a possibility that between checking the lock and setting it, the thread gets switched out.
4/11/2016
CST 352 - Operating Systems
57
Process Synchronization
Mutual Exclusion – Busy Waiting
3. Strict Alternation
Threads grant each other access by setting a shared "turn" variable in a cooperative manner.
Called "Strict Alternation" because the two threads strictly alternate turns in the critical region.

Thread 0:                       Thread 1:
While (TRUE)                    While (TRUE)
{                               {
    while (turn != 0);              while (turn != 1);
    criticalRegion( );              criticalRegion( );
    turn = 1;                       turn = 0;
    nonCriticalRegion( );           nonCriticalRegion( );
};                              };
4/11/2016
CST 352 - Operating Systems
58
Process Synchronization
Mutual Exclusion – Busy Waiting
3. Strict Alternation
While Thread 1 is waiting for Thread 0, it is "polling" (busy-waiting on) the turn variable.
This uses up CPU cycles.
4/11/2016
CST 352 - Operating Systems
59
Process Synchronization
Mutual Exclusion – Busy Waiting
4. Peterson's Algorithm.
Similar to Strict Alternation.
Keep a list of PIDs for processes that want a turn at the shared resource.
4/11/2016
CST 352 - Operating Systems
60
Process Synchronization
Mutual Exclusion – Busy Waiting
4. Peterson's Algorithm - Globals

#define MAX_PROC 500    // Maximum number of processes

struct petersonVars
{
    int turn;
    int interested[MAX_PROC];
};
4/11/2016
CST 352 - Operating Systems
61
Process Synchronization
Mutual Exclusion – Busy Waiting
4. Peterson's Algorithm.

void enterRegion(int PID, petersonVars* peteVar)
{
    int other = peteVar->turn;              // Capture who is currently using the resource
    peteVar->interested[PID] = TRUE;        // Set PID interest to true
    peteVar->turn = PID;                    // Set turn to our PID
    //
    // Loop until it is our turn and "we" are the interested process.
    //
    while( peteVar->turn == PID && peteVar->interested[other] == TRUE )   // spin-lock
    {
        if (peteVar->turn != PID)           // Somebody else got our turn
        {
            other = peteVar->turn;          // Get the current user
            peteVar->turn = PID;            // Set the turn to us
        }
    }
}
4/11/2016
CST 352 - Operating Systems
62
Process Synchronization
Mutual Exclusion – Busy Waiting
4. Peterson's Algorithm (cont'd) - Use
void thread( void *dummy )
{
thrdArgs* threadArgs = (thrdArgs*)dummy;
petersonVars* pvars = threadArgs->pVars;
int PID = threadArgs->pid;
while( 1 )
{
enterRegion(PID, pvars);
shared_var = PID;
cout << "Child thread " << PID << endl;
if ( shared_var != PID )
{
cout << "OOPS!!!! ****** Child thread " << PID << endl;
}
leaveRegion(PID, pvars);
}
}
4/11/2016
CST 352 - Operating Systems
63
Process Synchronization
Mutual Exclusion – Busy Waiting
4. Peterson's Algorithm (cont'd).
void leaveRegion(int pid, petersonVars* peteVar)
{
peteVar->interested[pid] = FALSE; // We are done. Set our interest to FALSE;
};
4/11/2016
CST 352 - Operating Systems
64
Process Synchronization
Q:
1 – What is the difference between Strict Alternation and Peterson's Algorithm?
4/11/2016
CST 352 - Operating Systems
65
Process Synchronization
Mutual Exclusion – Busy Waiting
5. The TSL instruction.
This is a special instruction that atomically sets a memory area to 1 and returns the previous value, all in one execution cycle.
Test and Set:
Check the returned value.
If the value was 1, loop until test-and-set returns a previous value of 0.
4/11/2016
CST 352 - Operating Systems
66
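Not on the slides: a minimal sketch of a TSL-style spin lock. C++'s std::atomic_flag::test_and_set() plays the role of the TSL instruction here; the function names are illustrative.

#include <atomic>

std::atomic_flag lockWord = ATOMIC_FLAG_INIT;        // clear (0) means "unlocked"

void enterRegion()
{
    // test_and_set atomically sets the flag to 1 and returns the previous value,
    // which is exactly the job the TSL instruction does in one cycle.
    while (lockWord.test_and_set(std::memory_order_acquire))
        ;                                            // previous value was 1: spin (busy wait)
}

void leaveRegion()
{
    lockWord.clear(std::memory_order_release);       // set the lock word back to 0
}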
Process Synchronization
Mutual exclusion by busy waiting is a valid method for resource sharing.
The major drawback is the requirement of a busy wait.
Low priority processes (or threads) may never gain access to a resource, since they may never be the process that gets the resource access flag.
The inverse problem: once a low priority process (or thread) gains access to a resource, all higher priority processes must wait for the slow one to finish.
4/11/2016
CST 352 - Operating Systems
67
Process Synchronization
Producer – Consumer
Two processes (or threads) share a fixed size buffer.
One process puts information into the buffer
(producer).
The other process takes information out of the
buffer (consumer).
The producer will block when the buffer is full.
The consumer will block when the buffer is empty.
Either will block if the other has access to the buffer.
4/11/2016
CST 352 - Operating Systems
68
Process Synchronization
Producer – Consumer – Code sample
// Shared resource – Bounded Buffer
int buffer[MAX_ITEMS];
int nextInsert = 0;     // Next index to produce to
int nextRemove = 0;     // Next index to consume from
int count = 0;          // Number of items in the buffer
4/11/2016
CST 352 - Operating Systems
69
Process Synchronization
Producer – Consumer – Code sample
// Producer
…
while (1)
{
    bb.addItem(rand());
}

// Bounded Buffer
…
void addItem (int itemToAdd)
{
    if ( count == MAX_ITEMS ) sleep( );    // Suspend
    buffer[nextInsert] = itemToAdd;        // Produce
    ++nextInsert %= MAX_ITEMS;             // Increment the insert point
    count++;                               // Increment the count
    if (count > 0) wakeup(consumer);       // Activate the consumer
}
4/11/2016
CST 352 - Operating Systems
70
Process Synchronization
Producer – Consumer – Code sample
// Consumer
…
int consumerInt;
while (1)
{
    consumerInt = removeItem();
}

// Bounded Buffer
…
int removeItem ( )
{
    int retVal;
    if ( count == 0 ) sleep( );            // Suspend
    retVal = buffer[nextRemove];           // Consume
    ++nextRemove %= MAX_ITEMS;             // Increment the remove point
    count--;                               // Decrement the count
    if (count < MAX_ITEMS)
        wakeup(producer);                  // Activate the producer
    return retVal;
}
4/11/2016
CST 352 - Operating Systems
71
Process Synchronization
Producer – Consumer
We can get into trouble here because of the unconstrained
access to the shared resource variables.
If the count is set to 0, the consumer sleeps, but the sleep
state is not fully entered since the scheduler switches out the
Consumer.
Now the Producer puts a value in the buffer and increments
the count. Count == 1, so the producer signals the consumer
to wakeup.
Since the consumer is not yet asleep, the signal is ignored.
The consumer is then switched back in and finishes going to
sleep, never to wake again.
4/11/2016
CST 352 - Operating Systems
72
Process Synchronization
Producer – Consumer
The problem exists because the consumer lost the wakeup signal: it was still in the process of going to sleep.
To avoid this problem, the “wakeup” signals must be
saved until processes are in a state that will allow
the signal to be correctly applied by the OS.
4/11/2016
CST 352 - Operating Systems
73
Process Synchronization
Counting Semaphores
A semaphore is a construct that
allows the saving of “wakeup”
signals.
4/11/2016
CST 352 - Operating Systems
74
Process Synchronization
Counting Semaphores
Two operations:
1. dec( ) – same as “down” in the
book.
» If semaphore count == 0
Put “this” thread on “this” semaphore block
list.
Remove “this” thread from the active thread
list.
» Decrement count and continue on.
4/11/2016
CST 352 - Operating Systems
75
Process Synchronization
Counting Semaphores
Two operations:
2. inc( ) – same as "up" in the book.
   » Increment semaphore count.
   » If a thread is suspended waiting for "this" semaphore, choose a waiting thread and move it to the active list.
4/11/2016
CST 352 - Operating Systems
76
Process Synchronization
Counting Semaphores
For the semaphore to work properly, checking the count, changing it, and performing the "suspend" or "activate" operation must be done without interruption.
To do this, the OS must disable interrupts before the operation starts, then enable them when it is done.
4/11/2016
CST 352 - Operating Systems
77
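Not on the slides, and one possible answer to the question on the next slide: a minimal counting-semaphore sketch in the style of the course's C++ examples. The kernel types and hooks (TCB, TCBQueue, disableInterrupts(), suspendCurrentThread(), activate(), removeWaiter()) are hypothetical stubs so the sketch compiles; a real kernel supplies them.

// Sketch only: counting semaphore pseudo-code in C++ form.
#include <deque>

struct TCB { int id; };                      // hypothetical kernel types, stubbed here
using TCBQueue = std::deque<TCB*>;

void disableInterrupts() {}                  // would mask interrupts
void enableInterrupts()  {}                  // would unmask interrupts
void suspendCurrentThread(TCBQueue&) {}      // would move the caller off the active list
void activate(TCB*) {}                       // would put a TCB back on the active list
TCB* removeWaiter(TCBQueue& q)               // pick a waiting TCB, or nullptr if none
{
    if (q.empty()) return nullptr;
    TCB* t = q.front();
    q.pop_front();
    return t;
}

class Semaphore
{
    int      count;
    TCBQueue blockList;                      // threads waiting on this semaphore
public:
    explicit Semaphore(int initial) : count(initial) {}

    void dec()                               // "down"
    {
        disableInterrupts();                 // the check and change must not be interrupted
        if (count == 0)
            suspendCurrentThread(blockList); // block; resumes here after some inc()
        --count;                             // decrement count and continue on
        enableInterrupts();
    }

    void inc()                               // "up"
    {
        disableInterrupts();
        ++count;                             // the "wakeup" is saved in the count
        if (TCB* waiter = removeWaiter(blockList))
            activate(waiter);                // move a waiting thread to the active list
        enableInterrupts();
    }
};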
Process Synchronization
Q:
1. Write the pseudo code for a
counting semaphore.
4/11/2016
CST 352 - Operating Systems
78
Process Synchronization
Counting Semaphores
The Producer/Consumer problem can be solved
using semaphores:
• Semaphore called full – counts the number of
full slots.
• Semaphore called empty – counts the number of
empty slots.
• Semaphore called access – initialized to 1 to
control access to shared structures.
4/11/2016
CST 352 - Operating Systems
79
Process Synchronization
Producer – Consumer – semaphore impl.
class semaphore
{
private:
    int count;
    TCB* tcb_Q;
public:
    int inc( );          // increment and activate
    int dec( );          // decrement and suspend
    int getCount( );     // return semaphore state
};
4/11/2016
CST 352 - Operating Systems
80
Process Synchronization
Producer – Consumer – semaphore impl.
// Shared resources
class BoundedBuffer
{
private:
    int buffer[MAX_ITEMS];
    int nextInsert = 0;              // Next index to produce to
    int nextRemove = 0;              // Next index to consume from
    semaphore mutex{1};              // resource access control
    semaphore empty{MAX_ITEMS};      // empty behavior control
    semaphore full{0};               // full behavior control
public:
    void AddItem(int newItem);
    int RemoveItem( void );
};
4/11/2016
CST 352 - Operating Systems
81
Process Synchronization
Producer – Consumer – Code sample
// Producer
static BoundedBuffer bb;
while (1)
{
bb.AddItem(rand( ));
// Produce
}
4/11/2016
CST 352 - Operating Systems
82
Process Synchronization
Producer – Consumer – Code sample
// Add Item
void BoundedBuffer::AddItem (int newItem)
{
    empty.dec( );                    // Suspend if buffer is full
    mutex.dec( );                    // Wait if shared data is in use
    buffer[nextInsert] = newItem;    // Add to the buffer
    ++nextInsert %= MAX_ITEMS;       // Increment the insert point
    mutex.inc( );                    // Done with shared data
    full.inc( );                     // Kick off any consumers waiting
}
4/11/2016
CST 352 - Operating Systems
83
Process Synchronization
Producer – Consumer – Code sample
// Consumer
extern BoundedBuffer bb;
int consumerInt;
…
while (1)
{
consumerInt = bb.RemoveItem ( );
// Consume
}
4/11/2016
CST 352 - Operating Systems
84
Process Synchronization
Producer – Consumer – Code sample
// Consumer
int BoundedBuffer::RemoveItem(void)
{
    int retVal;
    full.dec( );                     // Suspend if buffer is empty
    mutex.dec( );                    // Lock up the shared data
    retVal = buffer[nextRemove];     // Remove an item
    ++nextRemove %= MAX_ITEMS;       // Increment the remove point
    mutex.inc( );                    // Unlock shared data
    empty.inc( );                    // Kick off any producers
    return (retVal);
}
4/11/2016
CST 352 - Operating Systems
85
Process Synchronization
Mutexes
A semaphore provides a general purpose
method to control multiple access to
resources.
When simple process exclusion is all that is
required, a mutex can be used.
A Mutex is a counting semaphore with its count initialized to 1.
4/11/2016
CST 352 - Operating Systems
86
Process Synchronization
Mutexes
A Mutex has two methods:
1. lock( ) – decrement the mutex value to 0. If it is already 0, put the process to sleep.
2. unlock( ) – increment the mutex value from 0 back to 1.
4/11/2016
CST 352 - Operating Systems
87
Process Synchronization
Q:
1. What is the difference between a
Semaphore and a Mutex?
4/11/2016
CST 352 - Operating Systems
88
Process Synchronization
Monitors
A Monitor provides the functionality of a semaphore; the difference is that a monitor is a programming language construct.
4/11/2016
CST 352 - Operating Systems
89
Process Synchronization
Monitors
That is, a monitor requires the support of a compiler.
It has been implemented in various
flavors of Pascal, Modula, and
Java.
4/11/2016
CST 352 - Operating Systems
90
Process Synchronization
Monitors
A monitor is an encapsulation of
procedures and data. Entry into
the monitor is done through a
public procedure.
4/11/2016
CST 352 - Operating Systems
91
Process Synchronization
Monitors
Characteristics:
All data variables are private.
Access to the monitor is done only
through public methods.
Only one process may be active in
the monitor at any given time.
4/11/2016
CST 352 - Operating Systems
92
Process Synchronization
Monitors
Once a process/thread is inside the monitor, it must engage in cooperative multitasking using:
cwait(c) – suspend the process based on some condition c.
csignal(c) – inform the OS that any processes waiting on c may resume.
4/11/2016
CST 352 - Operating Systems
93
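Not on the slides: C++ has no built-in monitor, but the cwait/csignal idea can be roughly imitated with std::mutex and std::condition_variable. This bounded counter is an invented example; it is an analogy, not the formal monitor construct.

#include <condition_variable>
#include <mutex>

// Sketch: a monitor-like class. Only one thread is "inside" at a time because
// every public method locks the same mutex; the condition variable plays the
// role of cwait/csignal on the condition "count > 0".
class CounterMonitor
{
    std::mutex              m;          // enforces one-thread-at-a-time entry
    std::condition_variable nonZero;    // condition variable for "count > 0"
    int                     count = 0;  // local data: private, only reachable via methods

public:
    void increment()                    // public entry point
    {
        std::lock_guard<std::mutex> inside(m);
        ++count;
        nonZero.notify_one();           // csignal(nonZero)
    }

    void decrement()                    // public entry point
    {
        std::unique_lock<std::mutex> inside(m);
        nonZero.wait(inside, [this] { return count > 0; });   // cwait(nonZero)
        --count;
    }
};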
Process Synchronization
Monitors
A Monitor provides an abstraction where data and a set of associated procedures are protected through compiler-generated signaling primitives.
[Diagram: an Entry queue feeds the monitor, which contains Local Data, Condition Variables, Procedures 1..n, and Init Code; a monitor waiting area holds one queue per condition (c1, c2, ..., cn), entered via cwait(ci); threads leave through Exit.]
4/11/2016
CST 352 - Operating Systems
94
Process Synchronization
Monitors
Remember, what we are dealing with
here is defined entry points to data.
The interface defines the points
where a process may gain access.
The process has no control over
when the monitor will block it.
4/11/2016
CST 352 - Operating Systems
95
Process Synchronization
Monitors
A process (thread) enters the monitor by calling one of
the public functions.
The process (thread) will end up in the waiting area if it
calls cwait(c) and c is a condition not yet met.
As soon as the condition c is met, if there are processes
(threads) queued waiting on it, the next one in the
queue will be unblocked.
A process (thread) will cause a condition to be met by
calling csignal(c).
4/11/2016
CST 352 - Operating Systems
96
Process Synchronization
Message Passing
Two processes or threads will use
“send( )” and “receive( )” to
pass an agreed upon message.
4/11/2016
CST 352 - Operating Systems
97
Process Synchronization
Message Passing
send(message msg) – send a
message to a shared message area.
Block if the buffer is full.
message receive( ) – receive a
message from a shared message
buffer. Block if the buffer is
empty.
4/11/2016
CST 352 - Operating Systems
98
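Not on the slides: a minimal in-process sketch of the blocking send()/receive() semantics just described, using a bounded queue guarded by a mutex and condition variables. Real interprocess message passing would go through the OS; the class name and capacity here are illustrative.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Sketch of blocking send()/receive() over a bounded message queue.
class MessageQueue
{
    std::queue<std::string>  q;
    const std::size_t        capacity = 16;      // arbitrary buffer size
    std::mutex               m;
    std::condition_variable  notFull, notEmpty;

public:
    void send(const std::string& msg)            // block if the buffer is full
    {
        std::unique_lock<std::mutex> lock(m);
        notFull.wait(lock, [this] { return q.size() < capacity; });
        q.push(msg);
        notEmpty.notify_one();
    }

    std::string receive()                        // block if the buffer is empty
    {
        std::unique_lock<std::mutex> lock(m);
        notEmpty.wait(lock, [this] { return !q.empty(); });
        std::string msg = q.front();
        q.pop();
        notFull.notify_one();
        return msg;
    }
};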
Process Synchronization
Message Passing
[Diagram: Process 1 sends into a Message Queue; Process 2 receives from it.]
4/11/2016
CST 352 - Operating Systems
99
Process Synchronization
Message Passing
The OS manages the shared
memory area containing
message queues.
The OS manages the blocking and activating of processes using the message queue.
4/11/2016
CST 352 - Operating Systems
100
Process Synchronization
Message Passing
What is the quality of service you
are capable of providing?
Guaranteed delivery?
Ack/Nak protocol?
How can you guarantee delivery for
deadlock avoidance?
How will you provide security?
4/11/2016
CST 352 - Operating Systems
101
Process Synchronization
Barriers
Barriers apply to a group of processes
or threads within a process.
A Barrier is used to make sure a group
of processes (or threads) will all wait
for a particular synchronization point.
This concept is useful for problems that
are divided into parallel computations.
4/11/2016
CST 352 - Operating Systems
102
Process Synchronization
Barriers
1. A process spawns off threads to
perform a computation.
2. The OS assigns threads to run on one
of 32 processors.
3. The parent process then waits for all child threads to finish before continuing on using the results of the computation.
4/11/2016
CST 352 - Operating Systems
103
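Not on the slides: a small sketch of the fork/join pattern in steps 1–3 above using std::thread, where join() is the synchronization point at which the parent waits for every worker. The partial-sum workload and worker count are invented for illustration.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    const int nWorkers = 4;                        // stand-in for "one of 32 processors"
    std::vector<long> partial(nWorkers, 0);
    std::vector<std::thread> workers;

    // 1. The parent spawns threads to perform a computation.
    for (int i = 0; i < nWorkers; ++i)
        workers.emplace_back([i, &partial] {
            for (long k = 0; k < 1000000; ++k)     // each worker computes its share
                partial[i] += k;
        });

    // 3. Barrier: the parent waits for all child threads to finish ...
    for (auto& w : workers)
        w.join();

    // ... before continuing on using the results of the computation.
    long total = std::accumulate(partial.begin(), partial.end(), 0L);
    std::cout << "total = " << total << '\n';
}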
Communication - IPC
Shared Memory
A segment of memory that can be accessed from two different process contexts.
The OS must manage access to this memory segment.
The address of a shared memory segment will not resolve to the process heap.
4/11/2016
CST 352 - Operating Systems
104
Communication - IPC
Shared Memory
There are usually no synchronization
primitives built into a shared memory
segment.
The shared memory must be protected
through the use of some
synchronization primitive (see above).
4/11/2016
CST 352 - Operating Systems
105
Communication - IPC
Named Pipes
A named pipe is a message passing
mechanism.
The message queue (the pipe) is
usually implemented as a file on a disk.
This allows IPC to have a persistent
behavior (communication could occur
across process life).
Disk access is slower than memory access.
4/11/2016
CST 352 - Operating Systems
106
Communication - IPC
Named Pipes
There are also implementations
that can cross network boundaries.
See _pipe( ), _popen( ), and _wpopen( ) in the MSDN for the Microsoft implementation.
4/11/2016
CST 352 - Operating Systems
107
Classic IPC
Dining Philosophers
1. Five philosophers are at a table
2. Each plate has two forks next
to it.
3. For a philosopher to eat, he/she
must have two forks.
4. The life of a philosopher
consists of eating and thinking.
4/11/2016
CST 352 - Operating Systems
108
Classic IPC
Dining Philosophers
5. When a philosopher gets
hungry, he first tries to acquire
his left fork, then his right fork.
6. If he is successful, he eats for a
while, then puts down the forks
and continues to think.
Write a program that will allow all
philosophers to live a well fed
and well thought life.
4/11/2016
CST 352 - Operating Systems
109
Classic IPC
Dining Philosophers
Try #1:
1. Take left fork. If unavailable, block until it is available.
2. Take right fork. If unavailable, block until it is available.
3. Eat.
4. Put down right then left fork.
5. Think.
6. Repeat (go to 1).
Why does this fail?
4/11/2016
CST 352 - Operating Systems
110
Classic IPC
Dining Philosophers
Try #2:
1. Take left fork. If the right fork is unavailable, put down the left fork and wait for a while. Try again.
2. Take right fork.
3. Eat.
4. Put down right then left fork.
5. Think.
6. Repeat (go to 1).
Why does this fail?
4/11/2016
CST 352 - Operating Systems
111
Classic IPC
Dining Philosophers
Try #2:
If each philosopher waited a random
amount of time, the chances of a
deadlock would be reduced.
This is what CSMA/CD does in the
Ethernet protocol. (Carrier Sense
Multiple Access/Collision Detection)
4/11/2016
CST 352 - Operating Systems
112
Classic IPC
Dining Philosophers
Try #3:
Use an array of state information to keep track of the state each philosopher is in – Eating, Thinking, or Hungry (trying to acquire forks to eat).
A philosopher can only move into the Eating state if neither neighbor is eating.
4/11/2016
CST 352 - Operating Systems
113
Classic IPC
Dining Philosophers
Try #3:
Both neighbor philosophers must be
checked before allowing the philosopher to
eat.
This analogy applies to processes in the
CPU
An eating philosopher is a process in the CPU.
The forks are shared resources requiring
exclusive access for a process to run.
4/11/2016
CST 352 - Operating Systems
114
Classic IPC
Dining Philosophers
User space solution:
Each fork is protected by a
semaphore.
The semaphore is implemented
by the OS.
4/11/2016
CST 352 - Operating Systems
115
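Not on the slides: a sketch of the Try #3 state-array approach (the classic textbook solution), using one semaphore per philosopher plus a mutex over the state array rather than literally one semaphore per fork. The small Semaphore helper and all names are illustrative assumptions.

#include <condition_variable>
#include <mutex>

// Tiny counting semaphore, defined here only so the sketch is self-contained.
struct Semaphore
{
    explicit Semaphore(int c = 0) : count(c) {}
    void dec()
    {
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [this] { return count > 0; });
        --count;
    }
    void inc()
    {
        std::lock_guard<std::mutex> l(m);
        ++count;
        cv.notify_one();
    }
private:
    int count;
    std::mutex m;
    std::condition_variable cv;
};

constexpr int N = 5;
enum State { THINKING, HUNGRY, EATING };

State     state[N] = { THINKING, THINKING, THINKING, THINKING, THINKING };
Semaphore stateMutex{1};    // protects state[]
Semaphore self[N];          // one per philosopher, initially 0 ("may not eat yet")

int left(int i)  { return (i + N - 1) % N; }
int right(int i) { return (i + 1) % N; }

void test(int i)            // let philosopher i eat if neither neighbor is eating
{
    if (state[i] == HUNGRY && state[left(i)] != EATING && state[right(i)] != EATING)
    {
        state[i] = EATING;
        self[i].inc();      // saved wakeup: philosopher i may pick up both forks
    }
}

void takeForks(int i)
{
    stateMutex.dec();
    state[i] = HUNGRY;
    test(i);                // try to acquire both forks
    stateMutex.inc();
    self[i].dec();          // block here if the forks were not available
}

void putForks(int i)
{
    stateMutex.dec();
    state[i] = THINKING;
    test(left(i));          // a neighbor may now be able to eat
    test(right(i));
    stateMutex.inc();
}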
Classic IPC
Readers and Writers
Many processes and contained threads
are attempting access to the same
information.
At any given time, any number of
processes and contained threads may
read the information source.
Only one thread may write to the
information source at any time.
4/11/2016
CST 352 - Operating Systems
116
Classic IPC
Readers and Writers
Any reader can gain access.
A writer must wait until all readers
are done before it can be allowed
to write to the information source.
This problem is typical of DBMS
systems.
4/11/2016
CST 352 - Operating Systems
117
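Not on the slides: a minimal sketch of the readers–writers rule just described using C++17's std::shared_mutex: any number of concurrent readers, one exclusive writer that waits until the readers are done. The record class is an invented example.

#include <shared_mutex>   // C++17
#include <string>

class SharedRecord
{
    mutable std::shared_mutex rw;    // readers share it; a writer owns it exclusively
    std::string value;

public:
    std::string read() const
    {
        std::shared_lock<std::shared_mutex> readers(rw);   // any number of readers at once
        return value;
    }

    void write(const std::string& v)
    {
        std::unique_lock<std::shared_mutex> writer(rw);    // waits until all readers are done
        value = v;
    }
};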
Classic IPC
Sleeping Barber
There is a barber shop
one barber.
one barber chair.
“n” chairs for waiting customers.
4/11/2016
CST 352 - Operating Systems
118
Classic IPC
Sleeping Barber
If there are no customers present, the barber
sits down in the barber chair and sleeps.
When a customer arrives, he wakes up the
barber to get a haircut.
If an additional customer arrives, he will sit
down if there are empty chairs. If there are
no chairs, the customer will leave.
4/11/2016
CST 352 - Operating Systems
119
Classic IPC
Sleeping Barber
The problem is to allow the customers
to get their hair cut without getting into
race conditions.
4/11/2016
CST 352 - Operating Systems
120
Classic IPC
Sleeping Barber
One solution – use three semaphores:
A semaphore to count the number of waiting customers.
A semaphore to keep track of the number of barbers.
A Mutex (binary semaphore) to keep track of when a customer is waiting for the barber to start cutting.
waiting – a counter to check the number of waiting customers.
4/11/2016
CST 352 - Operating Systems
121
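Not on the slides: a sketch of that three-semaphore solution using C++20 counting semaphores. The chair count, function names (cutHair, getHaircut), and thread bodies are illustrative assumptions.

#include <semaphore>    // C++20

constexpr int CHAIRS = 5;                      // waiting chairs (arbitrary)

std::counting_semaphore<> customers{0};        // customers waiting for a haircut
std::counting_semaphore<> barbers{0};          // barbers ready to cut hair (0 or 1)
std::binary_semaphore    shopMutex{1};         // protects the 'waiting' counter
int waiting = 0;                               // customers sitting in the waiting chairs

void barber()
{
    while (true)
    {
        customers.acquire();                   // go to sleep if there are no customers
        shopMutex.acquire();
        --waiting;                             // one waiting customer moves to the chair
        barbers.release();                     // the barber is now ready to cut
        shopMutex.release();
        // cutHair();                          // hypothetical: cut the customer's hair
    }
}

void customer()
{
    shopMutex.acquire();
    if (waiting < CHAIRS)                      // is there an empty waiting chair?
    {
        ++waiting;
        customers.release();                   // wake the barber if he is sleeping
        shopMutex.release();
        barbers.acquire();                     // wait until the barber is ready
        // getHaircut();                       // hypothetical
    }
    else
    {
        shopMutex.release();                   // no chairs: the customer leaves
    }
}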
Communication - PTC
Via Shared Heap
Process-to-Thread communication
can be done through the shared
heap.
The shared heap acts like a shared memory segment; however, it does not require the intervention of the OS.
4/11/2016
CST 352 - Operating Systems
122
Communication - PTC
Via Shared Heap
1. Main thread creates shared data
structures in the shared heap.
2. Main thread spawns child threads and passes the address of the created data structures to the child threads.
3. Child threads use the shared memory to
post information to the parent thread.
4/11/2016
CST 352 - Operating Systems
123
Communication - TTC
Via Shared Heap
1. Parent thread sets up shared heap data
structures.
2. Parent thread passes address of
structures to child threads.
3. Threads then use shared structures to
pass information.
4. Shared structures must be protected.
4/11/2016
CST 352 - Operating Systems
124
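Not on the slides: a small sketch of steps 1–4 above, where the parent allocates a shared structure on the heap, hands its address to child threads, and the structure is protected by a mutex. All names and the counter workload are illustrative.

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

struct SharedCounter                        // lives on the heap, shared by all threads
{
    std::mutex m;                           // step 4: shared structure must be protected
    long total = 0;
};

void worker(SharedCounter* shared)          // step 2: the child receives the address
{
    for (int i = 0; i < 100000; ++i)
    {
        std::lock_guard<std::mutex> lock(shared->m);
        ++shared->total;                    // step 3: post information through the heap
    }
}

int main()
{
    auto* shared = new SharedCounter;       // step 1: parent sets up the shared structure
    std::vector<std::thread> children;
    for (int i = 0; i < 4; ++i)
        children.emplace_back(worker, shared);
    for (auto& t : children)
        t.join();
    std::cout << "total = " << shared->total << '\n';   // parent uses the posted results
    delete shared;
}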
Communication - TTC
Q:
1. What is the difference between
inter-process communication and
thread to thread communication?
…yes, one is between processes and the other is
between threads….but what else?
4/11/2016
CST 352 - Operating Systems
125
Scheduling
Goals
Maximize CPU utilization.
Minimize process waiting.
Enforce system processing priorities.
Provide all system resident processes
with fair resource allocation.
4/11/2016
CST 352 - Operating Systems
126
Scheduling
[Same process state-transition diagram as before; the area of scheduling impact, the Active → Execute (Dispatch) and Execute → Active (Preempt) transitions, is shown in blue.]
4/11/2016
CST 352 - Operating Systems
127
Scheduling
Definitions
Scheduler – The OS process or thread that is responsible for moving a TCB from the Active list to the area from which it will next be dispatched into the CPU.
Scheduling Algorithm – The policy the scheduler uses to determine which TCB to move.
4/11/2016
CST 352 - Operating Systems
128
Scheduling
Definitions
Compute Bound Process – A process that spends the majority of its time using (or waiting for) the CPU.
I/O Bound Process – A process that spends the majority of its time waiting for some I/O device.
4/11/2016
CST 352 - Operating Systems
129
Scheduling
Bounded Curve:
As CPUs get faster, they outpace the improvements in I/O devices, causing processes to spend more of their run-time waiting for I/O devices.
[Plot: Degree of Boundedness vs. CPU Speed; as CPU speed increases, the curve moves from CPU Bounded toward I/O Bounded.]
4/11/2016
CST 352 - Operating Systems
130
Scheduling
Q:
1. What happens to a compute
bound process as CPU speed
increases?
2. What happens to an IO bound
process as CPU speed increases?
4/11/2016
CST 352 - Operating Systems
131
Scheduling
When does the scheduler get invoked:
1. Process/Thread Creation – A new process or
thread gets created by the OS. What process will
run next?
2. Process/Thread Exit – What process or thread
will run next?
3. I/O block or process/thread synchronization
block – The current process or thread is
suspended or blocked. What process or thread
will run next?
4/11/2016
CST 352 - Operating Systems
132
Scheduling
When does the scheduler get invoked:
4. I/O interrupt – An I/O device is
now available. What process or
thread will run next?
5. Clock interrupt – The current
process or thread turn is over.
Next?
4/11/2016
CST 352 - Operating Systems
133
Scheduling
Scheduling algorithms apply both to the scheduling of CPU cycles from a "process" point of view (i.e. a collection of one or more threads) and from a "thread" point of view (i.e. how much "process" time a contained thread should get).
4/11/2016
CST 352 - Operating Systems
134
Scheduling
First Come – First Serve
The first process and/or contained
threads in the active queue will run
in the CPU to completion.
This algorithm defines a nonpreemptive system.
4/11/2016
CST 352 - Operating Systems
135
Scheduling
Shortest Process First
Based on historical run-time information or
job-time estimates.
Run the “shortest” job first.
This is a non-preemptive algorithm.
Only works if all processes to run are predetermined.
Typically used for older batch processing
systems.
4/11/2016
CST 352 - Operating Systems
136
Scheduling
Shortest Process First (preemptive
version):
Run the process with the estimated
“shortest” time left.
Must keep a running estimate of
how much time a job will take after
each run.
4/11/2016
CST 352 - Operating Systems
137
Scheduling
Three Level Scheduling
Keep three queues
1. Entering processes
2. Processes “swapped” out to disk
(which ones should come in next?)
3. CPU scheduling – from memory to
the CPU (active dispatch to
execute).
4/11/2016
CST 352 - Operating Systems
138
Scheduling
Three Level Scheduling
This requires tracking of what types of
processes are in the system
I/O bound processes (these may “thrash” the
system).
CPU bound processes (these may be process
pigs).
How much CPU time has the process used.
How much I/O resource has the process used.
Etc.
4/11/2016
CST 352 - Operating Systems
139
Scheduling
Round Robin Scheduling
Keep a circular linked list of TCBs.
Each TCB gets a turn in the CPU
in a circular fashion.
Must balance context switching
overhead with process run-time.
4/11/2016
CST 352 - Operating Systems
140
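Not on the slides: a toy sketch of the circular-list round-robin idea pictured on the next slides, advancing an "Exec" marker around a ring of PCBs. The PCB names and the fixed number of slices are illustrative; a real dispatcher would run each process for its time quantum.

#include <iostream>
#include <list>
#include <string>

int main()
{
    // Circular list of PCBs; 'exec' plays the Exec marker from the diagrams.
    std::list<std::string> ring = { "PCB 1", "PCB 2", "PCB 3", "PCB 4", "PCB 5" };
    auto exec = ring.begin();

    for (int slice = 0; slice < 10; ++slice)         // ten time slices
    {
        std::cout << "dispatch " << *exec << '\n';   // a real OS would run it for one quantum
        ++exec;                                      // advance the Exec marker ...
        if (exec == ring.end())
            exec = ring.begin();                     // ... wrapping around the circular list
    }
}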
Scheduling
Round Robin Scheduling
[Diagram: circular linked list of PCB 1 through PCB 5; the Exec marker is at PCB 1.]
4/11/2016
CST 352 - Operating Systems
141
Scheduling
Round Robin Scheduling
[Diagram: the same circular list; the Exec marker has advanced to the next PCB.]
4/11/2016
CST 352 - Operating Systems
142
Scheduling
Round Robin Scheduling
[Diagram: the same circular list; the Exec marker has advanced again.]
4/11/2016
CST 352 - Operating Systems
143
Scheduling
Round Robin Scheduling
[Diagram: the same circular list; the Exec marker continues around the ring.]
4/11/2016
CST 352 - Operating Systems
144
Scheduling
Round Robin Scheduling
[Diagram: the same circular list; the Exec marker continues around the ring.]
4/11/2016
CST 352 - Operating Systems
145
Scheduling
Round Robin Scheduling
[Diagram: the same circular list; the Exec marker wraps back around to where it started.]
4/11/2016
CST 352 - Operating Systems
146
Scheduling
Priority Scheduling
Each process or contained thread is
assigned a priority.
The process or contained threads with the
highest priority get to run for a time slice.
To prevent the highest priority process
(thread) from stealing the CPU, the OS may
decrease priority based on CPU time used.
4/11/2016
CST 352 - Operating Systems
147
Scheduling
Priority Scheduling
I/O bound processes (threads)
should get a higher priority when
the I/O device becomes available.
Simple algorithm – Increase the
priority to 1/f where f = fraction of
time slice used by the process.
4/11/2016
CST 352 - Operating Systems
148
Scheduling
Priority Scheduling
Example:
If in a time slice of 30 ticks, a process
uses 2 ticks, the priority will be
increased by 15.
e.g. 1/(2/30) = 15
4/11/2016
CST 352 - Operating Systems
149
Scheduling
Priority Queue Scheduling
TCBs are assigned to queues.
Each queue has a priority.
A TCB from a higher priority queue
will get a larger time quantum
multiple than a TCB from a lower
priority queue.
4/11/2016
CST 352 - Operating Systems
150
Scheduling
Guaranteed Scheduling
Guarantee a certain level of CPU
time to a user process.
Given the number of processes and
contained threads running on the
CPU, calculate the amount of time
each process will get.
4/11/2016
CST 352 - Operating Systems
151
Scheduling
Guaranteed Scheduling
Example:
• There are “n” processes (or
threads) running on a system.
• Guarantee each (process or
thread) will receive 1/nth of the
CPU time.
4/11/2016
CST 352 - Operating Systems
152
Scheduling
Guaranteed Scheduling
Example (cont’d):
• The OS must keep track of how
much time in the CPU each
process (and threads) uses.
4/11/2016
CST 352 - Operating Systems
153
Scheduling
Guaranteed Scheduling
Example (cont’d):
For instance:
4 processes on a CPU.
Each process gets ¼ (.25) of the cycles.
2 processes blocked
Process “2” has used 10 of 100
possible CPU cycles -> 1/10 -> .10
-> .10/.25 -> ratio of .4
4/11/2016
CST 352 - Operating Systems
154
Scheduling
Guaranteed Scheduling
Example (cont’d):
Process 1 has used 50/100 cycles =
.50.
Process 1 has used .25 excess
cycles -> ratio of .50/.25 -> ratio
of 2.0
Dispatch process with the lowest
ratio.
4/11/2016
CST 352 - Operating Systems
155
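Not on the slides: a tiny sketch of the ratio computation in the example above (actual CPU share divided by the entitled 1/n share), dispatching the runnable process with the lowest ratio. The numbers mirror the example.

#include <cstdio>

int main()
{
    const double entitled    = 1.0 / 4.0;                    // 4 processes, each entitled to 0.25
    const double used[4]     = { 0.50, 0.10, 0.0, 0.0 };     // fraction of cycles actually used
    const bool   runnable[4] = { true, true, false, false }; // two processes are blocked

    int pick = -1;
    double best = 1e9;
    for (int i = 0; i < 4; ++i)
    {
        if (!runnable[i]) continue;
        double ratio = used[i] / entitled;       // e.g. 0.10 / 0.25 = 0.4, 0.50 / 0.25 = 2.0
        std::printf("process %d ratio %.1f\n", i + 1, ratio);
        if (ratio < best) { best = ratio; pick = i; }
    }
    std::printf("dispatch process %d (lowest ratio)\n", pick + 1);
}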
Scheduling
Lottery
Processes (threads) are chosen from a
pool based on a random decision.
Processes (threads) can be given more
“tickets” to increase their chances of
obtaining the right to be dispatched.
4/11/2016
CST 352 - Operating Systems
156
Scheduling
Fair Share
Base scheduling not only on the process but also on where the process originated (which user owns it).
If user A starts up 1 process and user B starts up 9:
• User A's process will get 50% of the CPU time.
• User B's processes will divide the remaining 50%.
4/11/2016
CST 352 - Operating Systems
157
Scheduling
Real-Time Policies
Hard real-time policies
Configure a process to react in a given
amount of time.
Failure to meet this deadline is
considered a failure.
Soft real-time systems
Time constraint must be met within
certain tolerances.
4/11/2016
CST 352 - Operating Systems
158