Process Management
Process
• A process is a program in execution; process execution must progress in a sequential fashion
• A process includes:
– program counter
– stack (registers) – contains temporary data such as function parameters, return addresses, and local variables
– data section (global variables)
Process Model
• In this model, all the runnable software on the computer, sometimes including the operating system, is organized into a number of sequential processes, or simply processes.
• Conceptually, each process has its own virtual CPU; in reality, the real CPU switches back and forth from process to process (multiprogramming)
The Process Model
(a) Multiprogramming of four programs.
(b) Conceptual model of four independent, sequential processes.
(c) Only one program is active at once.
Difference between Process and Program
• A program is an algorithm expressed in a suitable notation, whereas a process is a program in execution
• A process is an activity of some kind; it has a program, input, output, and a state.
• A single processor may be shared among several processes, with some scheduling algorithm being used to determine when to stop work on one process and service a different one.
Process Creation
Events which cause process creation:
• System initialization.
• Execution of a process-creation system call by a running process.
• A user request to create a new process.
• Initiation of a batch job.
Process Creation - System Initialization
• When an OS is booted, several processes are created
• Foreground processes - interact with the user and perform work for them
• Background processes - not associated with particular users, but instead have a specific function (e.g. a background process that accepts incoming requests for web pages hosted on a machine, waking up when a request comes in)
Process Creation - By a Process-Creation System Call
• A running process may issue system calls to create one or more new processes to help it do its job
• Useful when the work to be done can be formulated in terms of several related, but otherwise independent, interacting processes
• E.g. in Unix, when compiling a large program, the make program invokes the C compiler to convert source files to object code, and then invokes the install program to copy the program to its destination, set ownership and permissions, etc. (a minimal sketch of such a call follows below)
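The following is a minimal sketch of process creation by a system call on a POSIX system (illustrative only; error handling is abbreviated): a running process creates a child with fork() and has the child run another program with exec, much as make invokes the compiler.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* process-creation system call */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* Child: replace its image with another program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        exit(EXIT_FAILURE);
    }
    /* Parent: wait for the child to terminate. */
    waitpid(pid, NULL, 0);
    printf("child %d finished\n", (int)pid);
    return 0;
}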
Process Creation - User Request, Initiation of a Batch Job
• In interactive systems, users can start a program by typing a command
• In batch processing systems, users can submit batch jobs to the system. When the OS decides that it has the resources to run another job, it creates a new process and runs the next job from the input queue in it.
Process Termination
Events which cause process termination:
• Normal exit (voluntary).
• Error exit (voluntary).
• Fatal error (involuntary).
• Killed by another process (involuntary).
Process Termination-Normal Exit
• If a process has completed its work, it performs a normal exit voluntarily.
• E.g. when a compiler has compiled the program given to it, the compiler executes a system call to tell the OS that it has finished the compilation
Process Termination-Error exit
• If a process discovers a fatal error, it performs an error exit voluntarily
• E.g. if a user types the command to compile a program and no such file exists, the compiler simply exits
Process Termination - Fatal Error
• If an error is caused by the process due to a program bug, it incurs a fatal error and terminates involuntarily
• E.g. executing an illegal instruction, referencing nonexistent memory, or dividing by zero
Process Termination - Killed by Another Process
• A process can be terminated when another process executes a system call telling the operating system to kill it (the killing process must have the authorization to do so)
• In some systems, when a process terminates, either voluntarily or involuntarily, all processes it has created are immediately killed
Process Hierarchies
• A parent creates a child process; child processes can create their own processes
• Unix forms a hierarchy
– Unix calls this a "process group"
• Windows has no concept of a process hierarchy
– all processes are created equal
Process States
A process can be in running, blocked, or ready state. Transitions
between these states are as shown.
Process States
• When a process blocks, it does so because logically it cannot continue, typically because it is waiting for input that is not yet available
• A process may also be ready and able to run but stopped because the OS has decided to allocate the CPU to another process for a while
Implementation of Processes
• To implement the process model, the OS maintains a table called the process table (an array of structures), with one entry per process (entries are also called process control blocks)
• This entry contains information about
– the process' state,
– its program counter,
– stack pointer,
– memory allocation,
– the status of its open files,
– its accounting and scheduling information,
– alarms and other signals
that must be saved when the process switches from running to ready state so that it can be restarted later
Implementation of Processes (1)
The lowest layer of a process-structured operating system
handles interrupts and scheduling. Above that layer are
sequential processes.
Implementation of Processes (2)
Text Segment
The text segment (a.k.a. the instruction segment) contains the executable program code and constant data. The text segment is marked by the operating system as read-only and cannot be modified by the process. Multiple processes can share the same text segment. Processes share the text segment if a second copy of the program is to be executed concurrently.
Data Segment
The data segment, which is contiguous (in a virtual sense) with the text segment, can be subdivided into initialized data (e.g. in C/C++, variables that are declared as static or are static by virtue of their placement) and uninitialized (or zero-initialized) data. The uninitialized data area is also called BSS (Block Started by Symbol). For example, the initialized data section is for initialized global or static variables, and BSS is for uninitialized ones. During its execution lifetime, a process may request additional data segment space. Library memory allocation routines (e.g. new, malloc, calloc, etc.) in turn make use of the system calls brk and sbrk to extend the size of the data segment. The newly allocated space is added to the end of the current uninitialized data area. This area of available memory is also called the "heap".
Stack Segment
The stack segment is used by the process for the storage of automatic identifiers, register variables, and function call information. The stack grows towards the uninitialized data segment.
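A short C sketch (illustrative; exact placement is compiler- and OS-dependent) showing which segment each kind of object typically ends up in:

#include <stdlib.h>

int initialized_global = 42;    /* data segment (initialized data)        */
int uninitialized_global;       /* BSS (uninitialized, zero-filled data)  */
const char banner[] = "hello";  /* constant data, kept with the text      */

int main(void)
{
    int local = 7;                       /* stack segment                      */
    static int counter;                  /* BSS (static, uninitialized)        */
    int *p = malloc(100 * sizeof *p);    /* heap, grown via brk/sbrk (or mmap) */

    (void)local; (void)counter;
    free(p);
    return 0;
}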
Implementation of Processes (3)
Some of the fields of a typical process table entry.
Implementation of Processes (4)
Skeleton of what the lowest level of the operating system does
when an interrupt occurs.
Process Control Block (PCB)
Information associated with each process
• Process state
• Program counter
• CPU registers
• CPU scheduling information
• Memory-management information
• Accounting information
• I/O status information
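A hypothetical C sketch of a process table entry holding the fields listed above (field names and sizes are illustrative, not taken from any particular kernel):

#include <stdint.h>

enum proc_state { READY, RUNNING, BLOCKED };

struct pcb {
    int             pid;               /* process identifier              */
    enum proc_state state;             /* process state                   */
    uint64_t        program_counter;   /* saved program counter           */
    uint64_t        registers[16];     /* saved CPU registers             */
    int             priority;          /* CPU scheduling information      */
    void           *page_table;        /* memory-management information   */
    long            cpu_time_used;     /* accounting information          */
    int             open_files[16];    /* I/O status information          */
    struct pcb     *next;              /* link to the next PCB in a queue */
};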
Process Control Block (PCB)
CPU Switch From Process to Process
Process Scheduling
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available process from the set of available processes for program execution on the CPU
Process Scheduling Queues
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main memory, ready and waiting to execute, generally stored as a linked list. A ready-queue header contains pointers to the first and last PCBs in the list. Each PCB includes a pointer to the next PCB in the ready queue (see the sketch below)
• Device queues – set of processes waiting for an I/O device. Each device has its own device queue
• Processes migrate among the various queues
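A minimal, illustrative C sketch of a ready queue kept as a linked list of PCBs with head and tail pointers, as described above (not taken from any real kernel):

#include <stddef.h>

struct pcb {
    int         pid;
    struct pcb *next;     /* pointer to the next PCB in the ready queue */
};

struct ready_queue {
    struct pcb *head;     /* first PCB in the list */
    struct pcb *tail;     /* last PCB in the list  */
};

/* Append a PCB at the tail (process becomes ready). */
void rq_enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* Remove the PCB at the head (process is dispatched to the CPU). */
struct pcb *rq_dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}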
Ready Queue And Various I/O Device Queues
Process Scheduling - Queueing
• A common representation of process scheduling is the queueing diagram.
• Each rectangular box represents a queue (the ready queue and a set of device queues). Circles represent the resources that serve the queues, and arrows show the flow of processes in the system.
• A new process is put in the ready queue. It waits there until it is selected for execution, i.e. dispatched
• Once the process is allocated the CPU and is executing, these events could happen:
– The process could issue an I/O request and then be placed in an I/O queue
– The process could create a new subprocess and wait for its termination.
– Due to an interrupt, the process may be removed from the CPU forcibly and put back in the ready queue
Representation of Process Scheduling
Schedulers
• Long-term scheduler (or job scheduler) –
selects which processes should be brought
into the ready queue
• Short-term scheduler (or CPU scheduler)
– selects which process should be executed
next and allocates CPU
Schedulers (Cont)
• Short-term scheduler is invoked very frequently
(milliseconds) (must be fast)
• Long-term scheduler is invoked very infrequently
(seconds, minutes) (may be slow)
• The long-term scheduler controls the degree of
multiprogramming
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
– CPU-bound process – spends more time doing
computations; few very long CPU bursts
Addition of Medium Term Scheduling
Context Switch
• When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process; this task is known as a context switch (a simplified sketch follows below)
• The context of a process is represented in the PCB
• Context-switch time is overhead; the system does no useful work while switching
• The time is dependent on hardware support
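A highly simplified, illustrative C sketch of what a context switch does with two PCBs. The save/load helpers are stubs here; in a real kernel they are machine-dependent assembly that saves and restores all registers and switches address spaces.

enum proc_state { READY, RUNNING, BLOCKED };

struct cpu_context { unsigned long pc, sp, regs[16]; };

struct pcb {
    enum proc_state    state;
    struct cpu_context ctx;      /* saved CPU state */
};

/* Stubs standing in for machine-dependent code. */
static void save_cpu_state(struct cpu_context *ctx)        { (void)ctx; }
static void load_cpu_state(const struct cpu_context *ctx)  { (void)ctx; }

void context_switch(struct pcb *old, struct pcb *new)
{
    save_cpu_state(&old->ctx);   /* save state of the old process into its PCB */
    old->state = READY;

    new->state = RUNNING;
    load_cpu_state(&new->ctx);   /* load the new process's saved state;
                                    execution then resumes in that process */
}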
Threads
The term thread is shorthand for "thread of control." A
thread is the path taken by a program while running, the
steps performed, and the order in which the steps are
performed. A thread runs code from its starting location
in an ordered, predefined sequence for a given set of
inputs.
• Like a process, except that some state is shared
• Threads have their own registers and stack frames
• Threads share memory
• Tradeoffs: ease of programming + better performance vs. programming complexity + less protection
• Switching to another thread: save registers, find the right stack frame, load registers, run thread
• Typically used to service asynchronous events
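A minimal POSIX threads sketch (assuming pthreads is available; compile with -pthread) showing two threads that share a global variable in the process's memory while each runs on its own stack:

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                 /* data segment: shared by all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    int local = *(int *)arg;            /* each thread has its own stack */
    pthread_mutex_lock(&lock);
    shared_counter += local;            /* shared memory, so protect it  */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int a = 1, b = 2;

    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("shared_counter = %d\n", shared_counter);   /* prints 3 */
    return 0;
}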
Single and Multithreaded Processes
Thread models
• User threads model: all program threads share the same process thread. The scheduling policy allows only one thread to be actively running in the process at a time. The operating system kernel is only aware of a single task in the process.
• Kernel threads model: kernel threads are separate tasks that are associated with a process. It uses a preemptive scheduling policy in which the operating system decides which thread is eligible to share the processor. There is a one-to-one mapping between program threads and process threads. OS/400 supports a kernel thread model.
• MxN thread model: each process has M user threads that share N kernel threads. The user threads are scheduled on top of the kernel threads. The system allocates resources only to the more "expensive" kernel threads.
Comparison: Process & Thread
• A process is an execution of a program, and a program contains a set of instructions, but a thread is a single sequential stream of execution within a process.
• A thread is sometimes called a lightweight process. A single thread allows an OS to perform one task at a time.
• Similarities between processes and threads:
– both share the CPU.
– both execute sequentially.
– both can create children.
– if one thread (or process) is blocked, the next can start to run.
• Dissimilarities:
– threads are not independent of one another, unlike processes.
– all threads can access every address in the task, unlike processes.
– threads are designed to assist one another, whereas processes may or may not assist one another.
Difference between Thread & Process
• Threads share the address space of the process that created them; processes have their own address space.
• Threads have direct access to the data segment of their process; processes have their own copy of the data segment of the parent process.
• Threads can directly communicate with other threads of their process; processes must use inter-process communication to communicate with sibling processes.
• Threads have almost no overhead; processes have considerable overhead.
• New threads are easily created; new processes require duplication of the parent process.
• Threads can exercise considerable control over threads of the same process; processes can only exercise control over child processes.
• Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to the parent process do not affect child processes.
Benefits of Threads over
Processes
• Less time to create a new thread than a process,
because the newly created thread uses the current
process address space.
• Less time to terminate a thread than a process.
• Less time to switch between two threads within the same
process, partly because the newly created thread uses
the current process address space.
• Less communication overheads -- communicating
between the threads of one process is simple because
the threads share everything: address space, in
particular. So, data produced by one thread is
immediately available to all the other threads
Inter-Process Communication
• Processes executing concurrently in the operating system may be either independent processes or cooperating processes
• An independent process cannot affect or be affected by the other processes executing in the system (any process that does not share data with any other process)
• A cooperating process can affect or be affected by the other processes executing in the system (any process that shares data with another process)
• Reasons for cooperating processes:
– Information sharing
– Computation speedup
– Modularity
– Convenience (a user may work on many tasks at the same time)
• Cooperating processes need Inter-Process Communication (IPC)
• Two models of IPC:
– Shared memory
– Message passing
Communications Models
Message passing and shared memory.
Shared Memory Systems
• Communicating processes must establish a region of shared memory.
• A shared memory region resides in the address space of the process creating the shared memory segment. Other processes that wish to communicate using this shared memory segment must attach it to their address space (see the sketch below).
• Normally the OS tries to prevent processes from accessing another process's address space, but this type of IPC requires the processes to agree to remove this restriction. They exchange information by reading and writing in this shared area.
• The form of the data and its location are determined by the processes and are not under the OS's control
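A minimal POSIX shared-memory sketch (writer side; the name and size are illustrative, and error handling is abbreviated). A reader process would shm_open() the same name and mmap() it into its own address space to see the data. On some systems this must be linked with -lrt.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char  *name = "/demo_shm";     /* hypothetical segment name */
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);   /* create the segment */
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* Attach the segment to this process's address space. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Exchange information by writing into the shared area. */
    strcpy(region, "hello from the writer process");

    munmap(region, size);
    close(fd);
    /* shm_unlink(name) would remove the segment once both sides are done. */
    return 0;
}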
Interprocess Communication – Message Passing
• Mechanism for processes to communicate and to
synchronize their actions
• Message system – processes communicate with each
other without resorting to shared variables
• IPC facility provides two operations:
– send(message) – message size fixed or variable
– receive(message)
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
• Implementation of communication link
– physical (e.g., shared memory, hardware bus)
– logical (e.g., logical properties)
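An illustrative sketch of message passing between two related processes using a POSIX pipe, where write() plays the role of send(message) and read() plays the role of receive(message):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                       /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                   /* child: the receiver (process Q) */
        char buf[64];
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof buf - 1);   /* receive(message) */
        if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
        close(fds[0]);
        return 0;
    }

    /* parent: the sender (process P) */
    const char *msg = "hello";
    close(fds[0]);
    write(fds[1], msg, strlen(msg));  /* send(message) */
    close(fds[1]);
    waitpid(pid, NULL, 0);
    return 0;
}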
Direct Communication
• Processes must name each other explicitly:
– send (P, message) – send a message to process P
– receive(Q, message) – receive a message from
process Q
• Properties of communication link
– Links are established automatically
– A link is associated with exactly one pair of
communicating processes
– Between each pair there exists exactly one link
– The link may be unidirectional, but is usually bidirectional
Indirect Communication
• Messages are sent to and received from mailboxes (also referred to as ports)
– Each mailbox has a unique id
– Processes can communicate only if they share a mailbox
• Properties of communication link
– Link established only if processes share a common mailbox
– A link may be associated with many processes
– Each pair of processes may share several communication links
– Link may be unidirectional or bi-directional
Synchronization
• Message passing may be either blocking or nonblocking
• Blocking is considered synchronous
– Blocking send has the sender block until the message is
received
– Blocking receive has the receiver block until a message
is available
• Non-blocking is considered asynchronous
– Non-blocking send has the sender send the message
and continue
– Non-blocking receive has the receiver receive a valid
message or null
Buffering
• Queue of messages attached to the
link; implemented in one of three ways
1. Zero capacity – 0 messages
Sender must wait for receiver
(rendezvous)
2. Bounded capacity – finite length of n
messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
CPU Scheduling
Basic Concepts
• Maximum CPU utilization obtained
with multiprogramming
• CPU–I/O Burst Cycle – Process
execution consists of a cycle of CPU
execution and I/O wait
• CPU burst distribution
Alternating Sequence of CPU And
I/O Bursts
Histogram of CPU-burst Times
CPU Scheduler
• Selects from among the processes in memory
that are ready to execute, and allocates the CPU
to one of them
• CPU scheduling decisions may take place when
a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that program
• Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
• CPU utilization – keep the CPU as busy as
possible
• Throughput – # of processes that complete
their execution per time unit
• Turnaround time – amount of time to
execute a particular process
• Waiting time – amount of time a process
has been waiting in the ready queue
• Response time – amount of time it takes
from when a request was submitted until
the first response is produced, not output
(for time-sharing environment)
Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

• Suppose that the processes arrive in the order: P1, P2, P3
  The Gantt chart for the schedule is:

  | P1                              | P2  | P3  |
  0                                 24    27    30

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
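A small illustrative C snippet that computes the FCFS waiting times for this example and reproduces the average of 17:

#include <stdio.h>

int main(void)
{
    /* Burst times in arrival order: P1, P2, P3 (all arrive at time 0). */
    int burst[] = {24, 3, 3};
    int n = 3;
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);   /* waiting time = start time */
        total_wait += wait;
        wait += burst[i];                        /* next process starts when this one ends */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);   /* 17.00 */
    return 0;
}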
FCFS Scheduling (Cont.)

Suppose that the processes arrive in the order: P2, P3, P1
• The Gantt chart for the schedule is:

  | P2  | P3  | P1                              |
  0     3     6                                 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case
• Convoy effect: short processes stuck behind a long process
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time
• Two schemes:
– nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst
– preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)
• SJF is optimal – gives the minimum average waiting time for a given set of processes
Example of Non-Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

• SJF (non-preemptive):

  | P1                  | P3  | P2          | P4          |
  0                     7     8             12            16

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
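An illustrative C simulation of non-preemptive SJF for the table above; it reproduces the average waiting time of 4:

#include <stdio.h>

int main(void)
{
    double arrival[] = {0.0, 2.0, 4.0, 5.0};   /* P1..P4 arrival times */
    int    burst[]   = {7, 4, 1, 4};           /* P1..P4 burst times   */
    int    done[4]   = {0};
    int    n = 4, finished = 0;
    double t = 0.0, total_wait = 0.0;

    while (finished < n) {
        int pick = -1;
        /* Among arrived, unfinished processes, pick the shortest burst. */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= t &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { t += 1.0; continue; }   /* CPU idle until the next arrival */

        double wait = t - arrival[pick];        /* time spent in the ready queue */
        printf("P%d waits %.1f\n", pick + 1, wait);
        total_wait += wait;
        t += burst[pick];                       /* run to completion (non-preemptive) */
        done[pick] = 1;
        finished++;
    }
    printf("average waiting time = %.2f\n", total_wait / n);   /* 4.00 */
    return 0;
}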
Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

• SJF (preemptive):

  | P1  | P2  | P3  | P2  | P4          | P1              |
  0     2     4     5     7             11                16

• Average waiting time = (9 + 1 + 0 + 2)/4 = 3