Interprocess communication


Advanced
Operating Systems
Lecture 5: Interprocess
Communication
University of Tehran
Dept. of EE and Computer Engineering
By:
Dr. Nasser Yazdani
Univ. of Tehran
Distributed Operating Systems
1
How Processes communicate


Some general guides.
References
Univ. of Tehran
Distributed Operating Systems
2
Outline

Why IPC
Univ. of Tehran
Distributed Operating Systems
3
IPC Fundamentals

What is IPC?


Mechanisms to transfer data between
processes
Why is it needed?

Not all important procedures can be easily
built in a single process
Univ. of Tehran
Distributed Operating Systems
4
Why do processes
communicate?





To share resources
Client/server paradigms
Inherently distributed applications
Reusable software components
Other good software engineering reasons
Univ. of Tehran
Distributed Operating Systems
5
The Basic Concept of IPC



A sending process needs to communicate
data to a receiving process
Sender wants to avoid details of receiver’s
condition
Receiver wants to get the data in an
organized way
Univ. of Tehran
Distributed Operating Systems
6
IPC from the OS Point of
View
Diagram: Process A and Process B each have their own private address space; the OS address space sits between them.
Univ. of Tehran
Distributed Operating Systems
7
Fundamental IPC Problem
for the OS



Each process has a private address space
Normally, no process can write to another
process’s space
How to get important data from process
A to process B?
Univ. of Tehran
Distributed Operating Systems
8
Solutions to IPC

Fundamentally, two options
1. Support some form of shared address
space

Shared memory
2. Use OS mechanisms to transport data from
one address space to another

Files, messages, pipes, RPC
Univ. of Tehran
Distributed Operating Systems
9
Differences in Treatment of
IPC

Shared memory




OS has job of setting it up
And perhaps synchronizing
But not transporting data
Messages, etc


OS involved in every IPC
OS transports data
Univ. of Tehran
Distributed Operating Systems
10
Desirable IPC
Characteristics






Fast
Easy to use
Well defined synchronization model
Versatile
Easy to implement
Works remotely
Univ. of Tehran
Distributed Operating Systems
11
IPC and Synchronization

Synchronization is a major concern for
IPC



Allowing sender to indicate when data is
transmitted
Allowing receiver to know when data is ready
Allowing both to know when more IPC is
possible
Univ. of Tehran
Distributed Operating Systems
12
IPC and Connections



IPC mechanisms can be connectionless or connection-oriented
Connectionless IPC mechanisms require no preliminary setup
Connection-oriented IPC mechanisms require negotiation and setup before data flows

Sometimes much is concealed from user,
though
Univ. of Tehran
Distributed Operating Systems
13
Connectionless IPC
Data simply flows
Typically, no permanent data structures shared in the OS by sender and receiver
+ Good for quick, short communication
+ Less long-term OS overhead
- Less efficient for large, frequent communications
- Each communication takes more OS resources per byte
Univ. of Tehran
Distributed Operating Systems
14
Connection-Oriented IPC



Sender and receiver pre-arrange IPC
delivery details
OS typically saves IPC-related information
for them
Advantages/disadvantages pretty much
the opposites of connectionless IPC
Univ. of Tehran
Distributed Operating Systems
15
IPC Through File System



Sender writes to a file
Receiver reads from it
But when does the receiver do the read?


Often synchronized with file locking or lock
files
Special types of files can make file-based
IPC easier
Univ. of Tehran
Distributed Operating Systems
16
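A minimal sketch of the sender side of file-based IPC described on the slide above, assuming a UNIX-like system. The file name /tmp/ipc_data and the message text are made up for illustration; flock() stands in for the lock-file style of synchronization the slide mentions.

/* Sender side of file-based IPC: write data under an advisory lock so the
   reader knows when it is safe to read. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/ipc_data", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    flock(fd, LOCK_EX);                  /* take an exclusive lock */
    const char *msg = "data for the receiving process\n";
    write(fd, msg, strlen(msg));         /* sender writes to the file */
    flock(fd, LOCK_UN);                  /* unlock; the receiver may now read */

    close(fd);
    return 0;
}

The receiving process would open the same file, take a shared lock, and read.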
File IPC Diagram
Diagram: Process A writes data into a file; Process B reads the data from it.
Univ. of Tehran
Distributed Operating Systems
17
Message-Based IPC

Sender formats data into a formal
message




With some form of address for receiver
OS delivers message to receiver’s
message input queue (might signal too)
Receiver (when ready) reads a message
from the queue
Sender might or might not block
Univ. of Tehran
Distributed Operating Systems
18
Message-Based IPC Diagram
Diagram: Process A sends data to Process B; the OS places it on B's message queue, from which B later reads it.
Univ. of Tehran
Distributed Operating Systems
19
Procedure Call IPC

Interprocess communication uses same
procedure call interface as intraprocess




Data passed as parameters
Information returned via return values
Complicated since destination procedure
is in a different address space
Generally, calling procedure blocks till call
returns
Univ. of Tehran
Distributed Operating Systems
20
Procedure Call IPC Diagram
Diagram: Process A executes main() and issues a call(); data is passed as parameters to server() in Process B, and data comes back as return values.
Univ. of Tehran
Distributed Operating Systems
21
Shared Memory IPC

Different processes share a common
piece of memory



Either physically or virtually
Communications via normal reads/writes
May need semaphores or locks

In or associated with the shared memory
Univ. of Tehran
Distributed Operating Systems
22
Shared Memory IPC Diagram
Diagram: Process A executes x = 10, writing variable x into the shared memory; Process B reads variable x from the same location and executes print(x).
Univ. of Tehran
Distributed Operating Systems
23
Synchronizing in IPC


How do sending and receiving processes synchronize their communications?
Many possibilities
Based on which processes block, and when
The examples that follow are in a message context, but apply more generally
Univ. of Tehran
Distributed Operating Systems
24
Blocking Send, Blocking
Receive




Both sender and receiver block
Sender blocks till receiver receives
Receiver blocks until sender sends
Often called message rendezvous
Univ. of Tehran
Distributed Operating Systems
25
Non-Blocking Send, Blocking
Receive


Sender issues send, can proceed without
waiting to discover fate of message
Receiver waits for message arrival before
proceeding

Essentially, receiver is message-driven
Univ. of Tehran
Distributed Operating Systems
26
Non-Blocking Send, Non-Blocking Receive



Neither party blocks
Sender proceeds after sending message
Receiver works until message arrives


Either the receiver periodically checks in a non-blocking fashion
Or some form of interrupt delivered
Univ. of Tehran
Distributed Operating Systems
27
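A small sketch (not from the slides) of the non-blocking receive case, assuming a UNIX-like system: the receiver marks its end of a pipe O_NONBLOCK and polls instead of waiting.

/* Non-blocking receive: read() returns immediately instead of blocking. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    int flags = fcntl(fds[0], F_GETFL, 0);
    fcntl(fds[0], F_SETFL, flags | O_NONBLOCK);   /* non-blocking read end */

    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof(buf));   /* does not block */
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no message yet; receiver keeps working\n");

    return 0;
}

A blocking receive is the same read() without O_NONBLOCK; a blocking send is a write() to a full pipe or queue.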
Addressing in IPC



How does the sender specify where the
data goes?
In some cases, the mechanism makes it
explicit (e.g., shared memory and RPC)
In others, there are options
Univ. of Tehran
Distributed Operating Systems
28
Direct Addressing



Sender specifies name of the receiving
process
Using some form of unique process name
Receiver can either specify the name of the expected sender

Or accept data from any sender
Univ. of Tehran
Distributed Operating Systems
29
Indirect Addressing



Data is sent to queues, mailboxes, or
some other form of shared data structure
Receiver performs some form of read
operations on that structure
Much more flexible than direct addressing
Univ. of Tehran
Distributed Operating Systems
30
Duality in IPC Mechanisms




Many aspects of IPC mechanisms are
duals of each other
Which implies that these mechanisms
have the same power
First recognized in context of messages
vs. procedure calls
At least, IPC mechanisms can be
simulated by each other
Univ. of Tehran
Distributed Operating Systems
31
So which IPC mechanism to
build/choose/use?



Depends on model of computation
And on philosophy of user
In particular cases, hardware or existing
software may make one perform better
Univ. of Tehran
Distributed Operating Systems
32
Typical UNIX IPC
Mechanisms

Different versions of UNIX introduced
different IPC mechanisms






Pipes
Message queues
Semaphores
Shared memory
Sockets
RPC
Univ. of Tehran
Distributed Operating Systems
33
Pipes

Only IPC mechanism in early UNIX
systems (other than files)





Uni-directional
Unformatted
Uninterpreted
Interprocess byte streams
Accessed in file-like way
Univ. of Tehran
Distributed Operating Systems
34
Pipe Details




One process feeds bytes into pipe
A second process reads the bytes from it
Potentially blocking communication
mechanism
Requires close cooperation between
processes to set up

Named pipes allow more flexibility
Univ. of Tehran
Distributed Operating Systems
35
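A sketch of the classic parent/child pipe set-up described on the slides above, assuming a UNIX-like system; the message text is illustrative.

/* Parent writes a byte stream into the pipe; child reads it back out. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                       /* child: the reader */
        close(fds[1]);                       /* close unused write end */
        char buf[32];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child got: %s\n", buf);
        _exit(0);
    }

    close(fds[0]);                           /* parent: the writer */
    write(fds[1], "bytes in a stream", 17);  /* unformatted, uninterpreted bytes */
    close(fds[1]);                           /* EOF for the reader */
    wait(NULL);
    return 0;
}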
Pipes and Blocking

Writing more bytes than pipe capacity blocks the sender

Until the receiver reads some of them
Reading bytes when none are available blocks the receiver

Until the sender writes some
A single pipe can't cause deadlock
Univ. of Tehran
Distributed Operating Systems
36
UNIX Message Queues



Introduced in System V Release 3 UNIX
Like pipes, but data organized into
messages
Message components include:



Type identifier
Length
Data
Univ. of Tehran
Distributed Operating Systems
37
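A sketch of System V message queue use matching the slide above (type identifier, length, data), assuming a UNIX-like system; the key value 0x1234 and the message text are made up for illustration.

/* Send and receive one typed message on a System V message queue. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msg { long mtype; char mtext[64]; };       /* type identifier + data */

int main(void) {
    int qid = msgget(0x1234, IPC_CREAT | 0666);   /* create/locate the queue */
    if (qid < 0) { perror("msgget"); return 1; }

    struct msg m = { .mtype = 1 };
    strcpy(m.mtext, "formatted message");
    msgsnd(qid, &m, strlen(m.mtext) + 1, 0);      /* length excludes mtype */

    struct msg r;
    msgrcv(qid, &r, sizeof(r.mtext), 1, 0);       /* take a message of type 1 */
    printf("received: %s\n", r.mtext);
    return 0;
}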
Semaphores


Also introduced in System V Release 3
UNIX
Mostly for synchronization only


Since they only communicate one bit of
information
Often used in conjunction with shared
memory
Univ. of Tehran
Distributed Operating Systems
38
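A sketch of a System V semaphore used purely for synchronization, as the slide describes, assuming a UNIX-like system; the key 0x2345 is illustrative and the caller must define union semun itself.

/* A single System V semaphore acting as a mutex around a critical section. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; };                        /* minimal definition for SETVAL */

int main(void) {
    int sid = semget(0x2345, 1, IPC_CREAT | 0666);   /* one semaphore */
    if (sid < 0) { perror("semget"); return 1; }

    union semun arg = { .val = 1 };
    semctl(sid, 0, SETVAL, arg);              /* initial value 1 (unlocked) */

    struct sembuf down = { 0, -1, 0 };        /* P(): wait/decrement */
    struct sembuf up   = { 0, +1, 0 };        /* V(): signal/increment */

    semop(sid, &down, 1);
    /* ... critical section, e.g. touching a shared memory segment ... */
    semop(sid, &up, 1);
    return 0;
}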
UNIX Shared Memory




Also introduced in System V Release 3
Allows two or more processes to share
some memory segments
With some control over read/write
permissions
Often used to implement threads
packages for UNIX
Univ. of Tehran
Distributed Operating Systems
39
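A sketch of System V shared memory as described above; the key 0x3456 and the segment size are illustrative, and a real program would pair this with a semaphore for synchronization.

/* Create a shared segment, attach it, and communicate with ordinary loads/stores. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    int shmid = shmget(0x3456, 4096, IPC_CREAT | 0666);  /* one-page segment */
    if (shmid < 0) { perror("shmget"); return 1; }

    char *mem = (char *)shmat(shmid, NULL, 0);   /* map segment into our space */
    if (mem == (void *)-1) { perror("shmat"); return 1; }

    strcpy(mem, "written by process A");         /* normal store = communication */
    printf("segment says: %s\n", mem);           /* normal load  = communication */

    shmdt(mem);                                  /* detach when done */
    return 0;
}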
Sockets



Introduced in 4.2 BSD
A socket is an IPC channel with generated
endpoints
Great flexibility in its characteristics


Intended as building block for communication
Endpoints established by the source and
destination processes
Univ. of Tehran
Distributed Operating Systems
40
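A sketch of socket IPC between two endpoints, assuming a BSD-sockets system; socketpair() is one simple way to get a connected pair of UNIX-domain endpoints, and the message text is illustrative.

/* Parent and child each hold one endpoint of a connected socket pair. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

    if (fork() == 0) {                     /* child uses endpoint sv[1] */
        close(sv[0]);
        char buf[64];
        ssize_t n = recv(sv[1], buf, sizeof(buf) - 1, 0);
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        _exit(0);
    }

    close(sv[1]);                          /* parent uses endpoint sv[0] */
    send(sv[0], "over a socket", 13, 0);
    close(sv[0]);
    wait(NULL);
    return 0;
}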
UNIX Remote Procedure
Calls

Procedure calls from one address space
to another




On the same or different machines
Requires cooperation from both processes
In UNIX, often built on sockets
Often used in client/server computing
Univ. of Tehran
Distributed Operating Systems
41
Problems with Shared
Memory



Synchronization
Protection
Pointers
Univ. of Tehran
Distributed Operating Systems
42
Synchronization



Shared memory itself does not provide
synchronization of communications
Except at the single-word level
Typically, some other synchronization
mechanism is used


E.g., semaphores in UNIX
Events, semaphores, or hardware locks in
Windows NT
Univ. of Tehran
Distributed Operating Systems
43
Protection



Who can access a segment? And in what
ways?
UNIX allows some read/write controls
Windows NT has general security
monitoring based on the object-status of
shared memory
Univ. of Tehran
Distributed Operating Systems
44
Pointers in Shared
Memory


Pointers in a shared memory segment can
be troublesome
For that matter, pointers in any IPC can
be troublesome
Univ. of Tehran
Distributed Operating Systems
45
Shared Memory Containing Pointers
Diagram: Processes A and B share a segment containing data values (w: 5, x: 10, y: 20) and pointer fields (a, b, z) that refer to them.
Univ. of Tehran
Distributed Operating Systems
46
A Troublesome Pointer
Diagram: the same shared segment, but one pointer is troublesome because it does not refer to the same data from both processes' points of view.
Univ. of Tehran
Distributed Operating Systems
47
So, how do you share
pointers?

Several methods are in use




Copy-time translation
Reference-time translation
Pointer Swizzling
All involve somehow translating pointers
at some point before they are used
Univ. of Tehran
Distributed Operating Systems
48
Copy-Time Pointer
Translation





When a process sends data containing
pointers to another process
Locate each pointer within old version of
the data
Then translate pointers as required
Requires both sides to traverse entire
structure
Not really feasible for shared memory
Univ. of Tehran
Distributed Operating Systems
49
Reference-Time
Translation




Encode pointers in shared memory
segment as pointer surrogates
Typically as offsets into some other
segment in separate contexts
So each sharer can have its own copy of
what is pointed to
Slow, pointers in two formats
Univ. of Tehran
Distributed Operating Systems
50
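A sketch of the pointer-surrogate idea, under my own illustrative scheme rather than any particular system's: pointers inside the shared segment are stored as offsets and translated at reference time against each sharer's own base address.

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct node {
    int value;
    size_t next_off;      /* offset of the next node within the segment; 0 = none */
};

/* Reference-time translation: each sharer resolves the surrogate against the
   base address at which *it* sees the segment. */
static struct node *deref(char *base, size_t off) {
    return off ? (struct node *)(base + off) : NULL;
}

int main(void) {
    char *segment = malloc(4096);                    /* stand-in for a shared segment */

    struct node *a = (struct node *)segment;         /* node at offset 0 */
    struct node *b = (struct node *)(segment + 64);  /* node at offset 64 */

    b->value = 20;
    b->next_off = 0;
    a->value = 10;
    a->next_off = 64;     /* a "points to" b via an offset, not a raw pointer */

    printf("a -> %d\n", deref(segment, a->next_off)->value);
    free(segment);
    return 0;
}

Pointer swizzling would cache the result of deref() back into the field after the first reference.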
Pointer Swizzling





Like reference-time, but cache results in
the memory location
Only first reference is expensive
But each sharer must have its own copy
Must “unswizzle” pointers to transfer data
outside of local context
Stale swizzled pointers can cause
problems
Univ. of Tehran
Distributed Operating Systems
51
Shared Memory in a Wide
Virtual Address Space


When virtual memory was created, 16- or 32-bit addresses were available
Reasonable size for one process


But maybe not for all processes on a
machine
And certainly not for all processes ever on a
machine
Univ. of Tehran
Distributed Operating Systems
52
Wide Address Space
Architectures


Computer architects can now give us 64-bit virtual addresses
A 64-bit address space, consumed at 100
MB/sec, lasts 5000 years


Orders of magnitude beyond any process’s
needs
40 bits can address a TB
Univ. of Tehran
Distributed Operating Systems
53
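As a check of the figures above: 2^64 bytes is about 1.8 x 10^19 bytes; consumed at 100 MB/sec (10^8 bytes/sec) that is roughly 1.8 x 10^11 seconds, on the order of the 5000 years quoted. Likewise, 2^40 bytes is exactly one terabyte.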
Do we care?



Should OS designers care about wide
address space?
Well, what can we do with them?
One possible answer:


Put all processes in the same address space
Maybe all processes for all time?
Univ. of Tehran
Distributed Operating Systems
54
Implications of Single Shared
Address Space

IPC is trivial



Shared memory, RPC
Separation of concepts of address space
and protection domain
Uniform address space
Univ. of Tehran
Distributed Operating Systems
55
Address Space and
Protection Domain

A process has a protection domain


The data that cannot be touched by other
processes
And an address space


The addresses it can generate and access
In standard systems, these concepts are
merged
Univ. of Tehran
Distributed Operating Systems
56
Separating Concepts




These concepts are potentially orthogonal
Just because you can issue an address
doesn’t mean you can access it
(Though clearly to access an address you
must be able to issue it)
Existing hardware can support this
separation
Univ. of Tehran
Distributed Operating Systems
57
Context-Independent
Addressing




Addresses mean the same thing in any
execution context
So, a given address always refers to the
same piece of data
Key concept of uniform-address systems
Allows many OS
optimizations/improvements
Univ. of Tehran
Distributed Operating Systems
58
Uniform-Addressing Allows
Easy Sharing

Any process can issue any address



So any data can be shared
All that’s required is changing protection
to permit desired sharing
Suggests programming methods that
make wider use of sharing
Univ. of Tehran
Distributed Operating Systems
59
The Opal System




New OS using uniform-addressing
Developed at University of Washington
Not intended as a slight alteration to an existing UNIX system
Most of the rest of the material is specific to Opal
Univ. of Tehran
Distributed Operating Systems
60
Protection Mechanisms for
Uniform-Addressing


Protection domains are assigned portions
of the address space
They can allow other protection domains
to access them



Read-only
Transferable access permissions
System-enforced page-level locking
Univ. of Tehran
Distributed Operating Systems
61
Program Execution in
Uniform-Access Memory




Executing a program creates a new
protection domain
The new domain is assigned an unused
portion of the address space
But it may also get access to used
portions
E.g., a segment containing the required
executable image
Univ. of Tehran
Distributed Operating Systems
62
Virtual Segments

Global address space is divided into
segments



Each composed of variable number of
contiguous virtual pages
Domains can only access segments they
attach to
Attempting to access unattached segment
causes a segment fault
Univ. of Tehran
Distributed Operating Systems
63
Persistent Memory in Opal


Persistent segments exist even when
attached to no current domain
Recoverable segments are permanently
stored



And can thus survive crashes
All Opal segments can be persistent and
recoverable
Pointers can thus live forever on disk
Univ. of Tehran
Distributed Operating Systems
64
Code Modules in Opal

Executable code stored in modules


Pure modules can be easily shared


Independent of protection domains
Because they are essentially static
Can get benefit of dynamic loading
without run-time linking
Univ. of Tehran
Distributed Operating Systems
65
Address Space
Reclamation




Trivial in non-uniform-address systems
Tricky in uniform-address systems
Problem akin to reclaiming i-nodes in the
presence of hard links
But even if segments improperly
reclaimed, only trusting domains can be
hurt
Univ. of Tehran
Distributed Operating Systems
66
Windows NT IPC

Inter-thread communications

Within a single process
Local procedure calls

Between processes on the same machine
Shared memory
Univ. of Tehran
Distributed Operating Systems
67
Windows NT and
Client/Server Computing




Windows NT strongly supports the
client/server model of computing
Various OS services are built as servers,
rather than part of the kernel
Windows NT needs facilities to support
client/server operations
Which guide users toward building client/server solutions
Univ. of Tehran
Distributed Operating Systems
68
Client/Server Computing
and RPC


In client/server computing, clients
request services from servers
Service can be requested in many ways


But RPC is a typical way
Windows NT uses a specialized service for
single machine RPC
Univ. of Tehran
Distributed Operating Systems
69
Local Procedure Call (LPC)




Similar in many ways to RPC
But optimized to only work on a single
machine
Primarily used to communicate with
protected subsystems
Windows NT also provides a true RPC
facility for genuinely distributed
computing
Univ. of Tehran
Distributed Operating Systems
70
Basic Flow of Control in
Windows NT LPC



Application calls routine in an application
programming interface
Which is usually in a dynamically linked
library
Which sends a message to the server
through a messaging mechanism
Univ. of Tehran
Distributed Operating Systems
71
Windows NT LPC Messaging
Mechanisms



Messages between port objects
Message pointers into shared memory
Using dedicated shared memory
segments
Univ. of Tehran
Distributed Operating Systems
72
Port Objects



Windows NT is generally object-oriented
Port objects support communications
Two types:


Connection ports
Communication ports
Univ. of Tehran
Distributed Operating Systems
73
Connection Ports



Used to establish connections between
clients and servers
Named, so they can be located
Only used to set up communication ports
Univ. of Tehran
Distributed Operating Systems
74
Communication Ports




Used to actually pass data
Created in pairs, between given client
and given server
Private to those two processes
Destroyed when communications end
Univ. of Tehran
Distributed Operating Systems
75
Windows NT Port Example
Diagram: a client process and a server process; the server holds a named connection port.
Univ. of Tehran
Distributed Operating Systems
76
Windows NT Port Example
Diagram: the client locates the server's named connection port and requests a connection.
Univ. of Tehran
Distributed Operating Systems
77
Windows NT Port Example
Diagram: a pair of communication ports is created between the client process and the server process.
Univ. of Tehran
Distributed Operating Systems
78
Windows NT Port Example
Diagram: the client sends a request to the server over its communication port.
Univ. of Tehran
Distributed Operating Systems
79
Message Passing through
Port Object Message Queues

One of three methods in Windows NT to
pass messages
1. Client submits message to OS
2. OS copies to receiver’s queue
3. Receiver copies from queue to its own
address space
Univ. of Tehran
Distributed Operating Systems
80
Characteristics of Message
Passing via Queues


Two message copies required
Fixed-size, fairly short messages

~256 bytes
Port objects stored in system memory

So always accessible to the OS
Fixed number of entries in message queue
Univ. of Tehran
Distributed Operating Systems
81
Message Passing Through
Shared Memory


Used for messages larger than 256 bytes
Client must create section object




Shared memory segment
Of arbitrary size
Message goes into the section
Pointer to message sent to receiver’s
queue
Univ. of Tehran
Distributed Operating Systems
82
Setting up Section Objects




Pre-arranged through OS calls
Using virtual memory to map segment
into both sender and receiver’s address
space
If replies are large, need another
segment for the receiver to store
responses
OS doesn’t format section objects
Univ. of Tehran
Distributed Operating Systems
83
Characteristics of Message
Passing via Shared Memory


Capable of handling arbitrarily large
transfers
Sender and receiver can share a single
copy of data


i.e., data copied only once
Requires pre-arrangement for section
object
Univ. of Tehran
Distributed Operating Systems
84
Server Handling of
Requests



Windows NT servers expect requests
from multiple clients
Typically, they have multiple threads to
handle requests
Must be sufficiently general to handle
many different ports and section objects
Univ. of Tehran
Distributed Operating Systems
85
Message Passing Through
Quick LPC



Third way to pass messages in Windows
NT
Used exclusively with Win32 subsystem
Like shared memory, but with a key
difference

Dedicated resources
Univ. of Tehran
Distributed Operating Systems
86
Use of Dedicated Resources
in Quick LPC

To avoid overhead of copying





Notification messages to port queue
And thread switching overhead
Client sets up dedicated server thread
only for its use
Also dedicated 64KB section object
And event pair object for synchronization
Univ. of Tehran
Distributed Operating Systems
87
Characteristics of Message
Passing via Quick LPC




Transfers of limited size
Very quick
Minimal copying of anything
Wasteful of OS resources
Univ. of Tehran
Distributed Operating Systems
88
Shared Memory in
Windows NT



Similar in most ways to other shared
memory services
Windows NT runs on multiprocessors,
which complicates things
Done through virtual memory
Univ. of Tehran
Distributed Operating Systems
89
Section Views




Given process might not want to waste
lots of its address space on big sections
So a process can have a view into a
shared memory section
Different processes can have different
views of same section
Or multiple views for single process
Univ. of Tehran
Distributed Operating Systems
90
Shared Memory View
Diagram
Diagram: Processes A and B map separate views (view 1, view 2, view 3) of the same section, which is backed by physical memory.
Univ. of Tehran
Distributed Operating Systems
91
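A sketch (not from the slides) of a Windows section object with a mapped view, using the documented Win32 calls CreateFileMapping and MapViewOfFile; the section name and sizes are made up for illustration.

/* Create a 1 MB pagefile-backed section and map a 64 KB view of it. */
#include <windows.h>

int main(void) {
    /* Named, so another process could open the same section with OpenFileMapping. */
    HANDLE section = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, 1 << 20,
                                        "Local\\DemoSection");  /* name is illustrative */
    if (!section) return 1;

    /* Map only a 64 KB view, starting at offset 0 of the section. */
    char *view = (char *)MapViewOfFile(section, FILE_MAP_WRITE, 0, 0, 1 << 16);
    if (!view) return 1;

    view[0] = 'x';                       /* ordinary store into the shared view */
    UnmapViewOfFile(view);
    CloseHandle(section);
    return 0;
}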
Improving IPC
IPC performance is vital for new, especially microkernel-based, operating systems.
IPC improves
Modularity
Flexibility
Security
Scalability
Vital for distributed systems.
On the L3 OS, which is based on a microkernel

IPC was reduced from 100 us to 5 us.
Univ. of Tehran
Distributed Operating Systems
92
Improving IPC (2)



On L3, all interactions with the outside are based on IPC and go through messages. Everything is implemented at user level and accessed through IPC.
Implemented on a 50 MHz Intel machine with a 32-entry TLB.
A null IPC
A null IPC crosses from thread A (user mode) through the kernel to thread B (user mode). The steps include: load id of B, set msg length to 0, call kernel, access thread B, switch stack pointer, switch addr space, load id of A, return to user, inspect received msg.
Univ. of Tehran
Distributed Operating Systems
93
Improving IPC (3)
The bare minimum, omitting everything else, takes 20 instructions, or 172 cycles (3.5 us).
The aim was to reach 7 us per IPC; in practice 5 us was achieved.
Univ. of Tehran
Distributed Operating Systems
94
Improving IPC (Solutions)

System calls

Reduce the number of system calls per IPC
Each system call costs around 2 us
Implement call (send a message and wait for the response) and reply & wait (for the receiver)
Messages

Combine a sequence of send operations to avoid several address-space switches.
Univ. of Tehran
Distributed Operating Systems
95
Improving IPC (Solutions)

Message transfer
Avoid copying; in general the data goes user A space -> kernel space -> user B space

Copying n bytes takes 20 + 0.7n cycles, plus TLB and cache misses
Solution: shared buffer, with drawbacks:

Security
With many clients, a shared buffer can become a bottleneck
No variable-to-variable transfer
Requires explicit opening of communication channels
Univ. of Tehran
Distributed Operating Systems
96
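To see why copying dominates: under the 20 + 0.7n model, copying a 512-byte message costs about 20 + 0.7 x 512 ≈ 378 cycles, or roughly 7.6 us at 50 MHz before any TLB or cache misses, and the two-step path user A -> kernel -> user B pays that twice. That is already larger than the 5 us target for the whole IPC, which is why copies must be avoided.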
Improving IPC (Solutions)

Alternative solution: temporary mapping of address spaces.

Avoid region crossing
Put all Thread Control Blocks (tcbs) in a large virtual array such that the address can be calculated from the thread ID.
Use doubly linked lists for the queues held in tcbs
Between 50% and 80% of messages are short. Transfer short messages through registers.
Reduce cache misses by putting relevant code and data on the same pages.
Univ. of Tehran
Distributed Operating Systems
97
Next Lecture
Scheduling
References
Univ. of Tehran
Distributed Operating Systems
98