Module 4: Processes


Chapter 3: Processes
 Process Concept
 Process Scheduling
 Operations on Processes
 Cooperating Processes
 Interprocess Communication
 Communication in Client-Server Systems
2
Process Concept
 An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
 Textbook uses the terms job and process almost interchangeably
 Process – a program in execution
 A process includes
 Code (or text)
 Data
 Stack
 Current values of the program counter and registers

3
Process Image in Memory
 Logical view of a process image in memory: text (code), data, heap, and stack sections
3.1.1 Fig 3.1
5
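 To make the logical layout concrete, here is a small illustrative C sketch (not from the slides) that prints an address from each region of the process image – text, data, heap, and stack:

/* layout.c -- illustrative sketch: where do objects of a process live? */
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                       /* data section */

int main(void)
{
    int local_var = 0;                     /* stack */
    int *dynamic = malloc(sizeof(int));    /* heap */

    printf("text  (code): %p\n", (void *) main);
    printf("data:         %p\n", (void *) &global_var);
    printf("heap:         %p\n", (void *) dynamic);
    printf("stack:        %p\n", (void *) &local_var);

    free(dynamic);
    return 0;
}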
Process State
 As a process executes, it changes state:
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution
6
Diagram of Process State
3.1.2 Fig 3.2
7
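 As a rough sketch (not from the slides), the five states can be written as a C enumeration; an OS keeps the current value in each process's PCB:

#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

static const char *state_name[] =
    { "new", "ready", "running", "waiting", "terminated" };

int main(void)
{
    /* e.g., a running process that issues an I/O request moves to waiting */
    enum proc_state s = RUNNING;
    s = WAITING;
    printf("current state: %s\n", state_name[s]);
    return 0;
}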
Process State -- Example
 Source program:

/* test.c */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    printf("Hello world\n");
    exit(0);
}

 Compile in Linux:
gcc test.c -o test
 Run test:
./test
 A process test will be created, executed, and will then terminate.
 Process test runs through the following states (in the best case):
 new
 ready
 running
 waiting (I/O due to call of printf)
 ready
 running
 terminated
Process Control Block (PCB)
PCB stores the information associated with each process
 Process state
 Program counter
 CPU registers
 CPU scheduling information
 Memory-management information
 Accounting information
 I/O status information
3.1.3
9
Process Control Block (PCB)
 One of the most important data structures in operating systems
3.1.3 Fig 3.3
10
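 As a hypothetical C sketch (field names and types are invented for illustration; a real PCB, e.g. Linux's task_struct, is far larger), a PCB might look like this:

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;               /* process identifier             */
    enum proc_state state;             /* process state                  */
    void           *program_counter;   /* saved program counter          */
    unsigned long   registers[16];     /* saved CPU registers            */
    int             priority;          /* CPU-scheduling information     */
    void           *page_table;        /* memory-management information  */
    unsigned long   cpu_time_used;     /* accounting information         */
    int             open_files[16];    /* I/O status information         */
    struct pcb     *next;              /* link used by scheduling queues */
};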
CPU Switch from Process to Process
Fig 3.4
11
Process Scheduling Queues
 Job queue – set of all processes entering the system
 Ready queue – set of all processes residing in main memory, ready and
waiting to execute
 Device queues – set of processes waiting for an I/O device
 Queues can be implemented as lists of PCBs (see the sketch below)
 Processes change state ⇒ they migrate among the various queues
3.2.1
12
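 To illustrate "queues as lists of PCBs", here is a minimal sketch (assumptions: a simplified PCB holding only a pid, and FIFO dispatching) of a ready queue implemented as a linked list:

#include <stdio.h>
#include <stdlib.h>

struct pcb { int pid; struct pcb *next; };          /* simplified PCB        */
struct queue { struct pcb *head, *tail; };          /* e.g., the ready queue */

static void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;                                       /* NULL if queue empty   */
}

int main(void)
{
    struct queue ready = { NULL, NULL };
    for (int pid = 1; pid <= 3; pid++) {            /* three processes become ready */
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&ready, p);
    }
    struct pcb *p;
    while ((p = dequeue(&ready)) != NULL) {         /* dispatch in FIFO order */
        printf("dispatching pid %d\n", p->pid);
        free(p);
    }
    return 0;
}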
Ready Queue and Various I/O Device Queues
3.2.1 Fig 3.5
13
Queueing Diagram
3.2.1 Fig 3.6
14
Schedulers (1/3)
 Long-term scheduler (or job scheduler) – selects which processes
should be brought into the ready queue
 UNIX and MS Windows have no long-term scheduler
 Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
15
Schedulers (2/3)
 Addition of medium-term scheduling to regulate the degree of
multiprogramming
 The medium-term scheduler swaps processes out to disk and back into memory to decrease/increase the number of processes in memory
Schedulers (3/3)
 Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast
 Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow
 The long-term scheduler controls the degree of multiprogramming
 Processes can be described as either:
 I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
 CPU-bound process – spends more time doing computations; few very
long CPU bursts
17
Context Switch
 Context switch: When the CPU switches to another process, the system must save the state of the old process and load the previously saved state of the new process
 Context of a process is represented in the PCB
 Context-switch time is the time needed by OS to do a context switch
 Context-switch time is overhead; the system does no useful work while
switching
 Context-switch time is dependent on hardware support
3.2.3
18
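 A real context switch runs in kernel mode and uses architecture-specific assembly; purely as an illustration, the following C sketch simulates saving the old process's state into its PCB and reloading the new process's state:

#include <stdio.h>
#include <string.h>

struct context { unsigned long pc, sp, regs[8]; };  /* simulated CPU state */
struct pcb     { int pid; struct context ctx; };    /* simplified PCB      */

static struct context cpu;             /* stands in for the real CPU registers */

static void context_switch(struct pcb *old, struct pcb *next)
{
    memcpy(&old->ctx, &cpu, sizeof cpu);   /* save state of old process into its PCB   */
    memcpy(&cpu, &next->ctx, sizeof cpu);  /* reload state of new process from its PCB */
}

int main(void)
{
    struct pcb p0 = { .pid = 0, .ctx = { .pc = 0x1000 } };
    struct pcb p1 = { .pid = 1, .ctx = { .pc = 0x2000 } };

    memcpy(&cpu, &p0.ctx, sizeof cpu);     /* p0 is currently "running" */
    context_switch(&p0, &p1);              /* switch the CPU to p1      */
    printf("now running pid %d, pc = 0x%lx\n", p1.pid, cpu.pc);
    return 0;
}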
Process Creation (1/3)
 In Linux, Windows, and many other OSes, a process can create new processes (children, child processes), which in turn create other processes, forming a tree of processes.
 Resource (files, …) sharing possibilities, depending on the OS:
 Parent and children share all resources
 Children share subset of parent’s resources
 Parent and child share no resources
 Execution possibilities
 Parent and children execute concurrently
 The OS blocks the parent until the child finishes
3.3.1
19
Process Tree Linux/Unix
(Figure: tree of processes on a typical Unix/Linux system, with nodes such as root, swapper, pagedaemon, init, bash shells, and commands like gcc, ls, mkdir, and grep.)
Process Creation (2/3)
 In UNIX
 New processes are not created from scratch (except for?)
 fork system call creates a new process
 the new process is identical to its parent process; the only differences are their process IDs and the return value from fork
 Why fork? The exec system call is used after a fork to replace the process' memory space with a new program (this is how a command interpreter runs commands)
21
Process Creation (3/3)
3.3.1 Fig 3.10
22
C Program Forking Separate Process
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        exit(-1);
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete\n");
        exit(0);
    }
}
3.3.1 Fig 3.9
23
Fig from Feitelson
Process Termination
 Process executes last statement and asks the operating system to delete it
(exit)
 Status data is returned from the child to the parent (via wait)
 Process’ resources are deallocated by operating system
 Parent may terminate execution of children processes (abort)
 Child has exceeded allocated resources
 Task assigned to child is no longer required
 If parent is exiting
 Some operating systems do not allow a child to continue if its parent terminates
– All children terminated – cascading termination
3.3.2
26
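 The following minimal sketch (assuming a POSIX system; the exit status 7 is just an example value) shows a parent collecting its child's exit status with wait():

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {                 /* error occurred */
        perror("fork");
        exit(1);
    }
    else if (pid == 0) {           /* child: execute last statement and exit */
        exit(7);
    }
    else {                         /* parent: wait for the child and read its status */
        int status;
        wait(&status);
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int) pid, WEXITSTATUS(status));
    }
    return 0;
}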
Cooperating Processes
 Independent process cannot affect or be affected by the execution of
another process
 Cooperating process can affect or be affected by the execution of another
process
 Advantages of process cooperation
 Information sharing
 Computation speed-up
 Modularity
 Convenience
27
Producer-Consumer Problem
 Paradigm for cooperating processes, producer process produces
information that is consumed by a consumer process
 unbounded-buffer places no practical limit on the size of the buffer
 bounded-buffer assumes that there is a fixed buffer size
28
Bounded-Buffer – Shared-Memory Solution
 Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
 Solution is correct, but can only use BUFFER_SIZE - 1 elements
29
Bounded-Buffer – Insert() Method
while (true) {
    /* produce an item */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}
30
Bounded Buffer – Remove() Method
while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */

    /* remove an item from the buffer */
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
31
Interprocess Communication (IPC)
 Two models for process communication:
 Using shared memory
 Message passing – processes communicate with each other without resorting to shared memory
3.4
32
Communications Models
 Message passing vs. shared memory
3.4 Fig 3.12
Direct Communication
 Processes must name each other explicitly:
 send(P, message) – send a message to process P
 receive(Q, message) – receive a message from process Q
3.4.2.1
34
Indirect Communication (1/3)
 Messages are directed to / received from mailboxes (also referred to as
ports)
 Each mailbox has a unique id
 Processes can communicate only if they share a mailbox
35
Indirect Communication (2/3)
 Operations
 create a new mailbox
 send and receive messages through the mailbox
 destroy a mailbox
 Primitives are defined as:
 send(A, message) – send a message to mailbox A
 receive(A, message) – receive a message from mailbox A
36
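 POSIX message queues are one concrete mailbox-style facility; the sketch below is illustrative only (the mailbox name "/mbox_A" is invented, and on Linux the program is linked with -lrt):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

    /* create a new mailbox (message queue) named "/mbox_A" */
    mqd_t mbox = mq_open("/mbox_A", O_CREAT | O_RDWR, 0600, &attr);
    if (mbox == (mqd_t) -1) { perror("mq_open"); return 1; }

    const char *msg = "hello via mailbox A";
    mq_send(mbox, msg, strlen(msg) + 1, 0);               /* send(A, message)    */

    char buf[128];
    ssize_t n = mq_receive(mbox, buf, sizeof buf, NULL);  /* receive(A, message) */
    if (n >= 0)
        printf("received: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/mbox_A");                                 /* destroy the mailbox */
    return 0;
}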
Indirect Communication (3/3)
 Mailbox sharing
 P1, P2, and P3 share mailbox A
 P1 sends; P2 and P3 receive
 Who gets the message?
 Solutions
 Allow only one process at a time to execute a receive operation
 Allow the system to arbitrarily select the receiver; the sender is notified who the receiver was
3.4.2.1
37
Blocking/Nonblocking Send/Receive
 Blocking
 Blocking send has the sender block until the message is received
 Blocking receive has the receiver block until a message is available
 Non-blocking
 Non-blocking send has the sender send the message and continue
 Non-blocking receive has the receiver receive a valid message or null
3.4.2.2
38
Message Queue Length
 Queue of messages; implemented in one of three ways
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
3.4.2.3
39
Example of shared memory for IPC
 POSIX Shared Memory
 A process first creates a shared memory segment:
segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
 A process wanting access to that shared memory must attach to it:
shared_memory = (char *) shmat(segment_id, NULL, 0);
 Now the process can write to the shared memory:
sprintf(shared_memory, "Writing to shared memory");
 When done, a process can detach the shared memory from its address space:
shmdt(shared_memory);
40
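 Putting the calls together, a minimal self-contained sketch might look as follows (the segment size and the final shmctl(IPC_RMID) removal step are additions not shown on the slide):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>

int main(void)
{
    const int size = 4096;

    /* create the shared memory segment */
    int segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);

    /* attach the segment to this process's address space */
    char *shared_memory = (char *) shmat(segment_id, NULL, 0);

    /* write to and read back from the shared memory */
    sprintf(shared_memory, "Writing to shared memory");
    printf("%s\n", shared_memory);

    /* detach the segment and remove it */
    shmdt(shared_memory);
    shmctl(segment_id, IPC_RMID, NULL);
    return 0;
}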
Client-Server Communication
 Examples

A program running on a workstation is the client of a file server, and
requests it to perform operations on files.
 A program with a graphical user interface running on a workstation is the client of the X server running on that workstation. The X server draws things on the screen for it, and notifies it when input events have occurred in its window.
 A web browser is a client of a web server, and asks it for certain web
pages.
 An ATM is a client of a bank’s central computer, and asks it for
authorization and recording of a transaction.
 Techniques
 Sockets
 Remote Procedure Calls
 Remote Method Invocation (Java)
3.6
41
Sockets
 A socket is defined as an endpoint for communication
 A socket address consists of an IP address and a port number
 The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
 Well-known ports are used for standard services
3.6.1
42
Communication Using Socket
3.6.1 Fig 3.17
43
Communication Using Socket
 Socket primitives
 send(socket, buffer, buffer_length, flags)
 recv(socket, buffer, buffer_length, flags)
 Setting up a connection between client and server:
 Server: socket(), bind(), listen(), accept(); then recv()/send() to communicate; finally close()
 Client: socket(), connect(); then send()/recv() to communicate; finally close()
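 A minimal TCP client sketch in C following the client-side sequence above; the address 161.25.19.8 and port 1625 are just the example values from the earlier slide, so there is no real server behind them:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);         /* socket()  */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port   = htons(1625);
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

    if (connect(sock, (struct sockaddr *) &server,      /* connect() */
                sizeof server) < 0) {
        perror("connect");
        close(sock);
        return 1;
    }

    const char *request = "hello\n";
    send(sock, request, strlen(request), 0);            /* send()    */

    char reply[256];
    ssize_t n = recv(sock, reply, sizeof reply - 1, 0); /* recv()    */
    if (n > 0) {
        reply[n] = '\0';
        printf("received: %s\n", reply);
    }

    close(sock);                                        /* close()   */
    return 0;
}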
Remote Procedure Calls (1/3)
 Remote procedure call (RPC) extends the procedure call mechanism to call a procedure residing on a remote machine – a remote procedure.
 The client program is bound with a client stub – a small library procedure; the server program is bound with a server stub.
 The client-side stub locates the server and marshals (packs) the parameters into a message.
 The server-side stub
 receives this message,
 unmarshals, i.e. unpacks, the parameters,
 and performs the procedure on the server.
3.6.2
46
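 As a purely hypothetical illustration of marshalling (the message layout – an operation code plus two integers in network byte order – is invented for this sketch and is not a real RPC format):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

struct wire_msg { uint32_t op, arg1, arg2; };    /* invented wire format */

static void marshal_add(struct wire_msg *m, int i, int j)
{
    m->op   = htonl(1);                 /* 1 = "add" in this made-up protocol */
    m->arg1 = htonl((uint32_t) i);
    m->arg2 = htonl((uint32_t) j);
}

static int unmarshal_and_execute(const struct wire_msg *m)
{
    int i = (int) ntohl(m->arg1);       /* server stub unpacks the parameters */
    int j = (int) ntohl(m->arg2);
    return i + j;                       /* ... and performs the procedure     */
}

int main(void)
{
    struct wire_msg m;
    marshal_add(&m, 3, 4);              /* client stub packs the parameters   */
    /* ... in a real RPC the message would be sent over a socket ... */
    printf("add(3, 4) = %d\n", unmarshal_and_execute(&m));
    return 0;
}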
RPC (2/3)
Fig from Feitelson
RPC (3/3)
 Example: execution of RPC for calling remote procedure add(i, j) via a client stub and a server stub
3.6.2 Fig 3.20
49
Remote Method Invocation
 Remote Method Invocation (RMI) is a technique similar to RPC, but implemented on the Java Virtual Machine (JVM).
 RMI allows a Java program on one machine to invoke a method on a
remote object.
3.6.3 Fig 3.21
50
Marshalling Parameters
 Parameters are marshalled on the sending side and unmarshalled on the receiving side
51
End of Chapter 3