CS471-6/14 - George Mason University Department of Computer

CS 471 - Lecture 6
Distributed System Structures
Ch. 3.4.2, 3.6, 16
George Mason University
Fall 2009
Distributed System Structures
• Introduction
• Design Goals
• Distributed Operating Systems
• Network Operating Systems
• Middleware-Based Systems
• Client-Server Model
• Peer-to-Peer Computing Model
• Communication Protocols
• Sockets
• Remote Procedure/Method Calls
GMU – CS 571
6.2
Distributed Systems

A distributed system is a collection of loosely
coupled processors interconnected by a
communication network.

Implications:
• No shared physical memory
• Communication/coordination through message passing
• No global clock
• Difficulty of keeping track of global state with accuracy
• Independent failure considerations
A Distributed System
Example Distributed System: Internet
[Figure: a typical portion of the Internet: intranets connected through ISPs, backbones, and satellite links; symbols show desktop computers, servers, and network links]
Example Distributed System: Intranet

An intranet is a portion of the Internet that is separately
administered and has a boundary that can be configured
to enforce local security policies.
[Figure: a typical intranet: desktop computers and email, print, file, Web, and other servers on local area networks, connected to the rest of the Internet through a router/firewall]
Why build Distributed Systems?

• Resource sharing – expensive or rarely used hardware, large databases
• Computation speedup – divide a computation into tasks that can execute concurrently; can include the ability to use idle cycles elsewhere (SETI@HOME)
• Reliability – via redundancy
• Communication – file transfer, mail, RPC (Remote Procedure Call)
Design Goals in Distributed Systems

• Overcoming Heterogeneity
• Security
• Concurrency
• Transparency
• Failure Handling
• Scalability
Scalability

• A system is described as scalable if it remains effective when there is a significant increase in the number of users and the number of resources.
• The Internet illustrates a distributed system that has scaled through a drastic increase in computers and services:

Date        Computers     Web servers
1979, Dec.          188             0
1989, July      130,000             0
1999, July   56,218,000     5,560,866
2003, Jan.  171,638,297    35,424,956
Scalability (Cont.)

If more users or resources need to be supported, we are often confronted with the limitations of
• Centralized services (e.g. a single server for all users)
• Centralized data (e.g. a single on-line telephone book)
• Centralized algorithms (e.g. doing routing based on complete information)

In decentralized (distributed) algorithms:
• No machine has complete information about the system state.
• Machines make decisions based only on local information.
• Failure of one machine does not ruin the algorithm.
• There is no implicit assumption that a global clock exists.
Design Challenges for Scalability

• Avoiding performance bottlenecks through
  • Caching
  • Replication
  • Distribution
  • Use of distributed algorithms


Case Study in Scalability:
Domain Name System (DNS)
The first component of network communication
is the naming (i.e. the way components refer to
each other) of the systems in the network.
Identify processes on remote systems by
<host-name, identifier> pair.


Need to provide a mechanism to resolve the
symbolic host name into a numerical host-id
that describes the destination system to the
networking hardware.
In Internet, Domain Name System (DNS)
specifies the naming structure of the hosts, as
well as name-to-address resolution.
DNS and Name Resolution

• Generally, DNS resolves addresses by examining the host name components in reverse order. If the host name is flits.cs.vu.nl, then first the name server for the .nl domain will be contacted.
• Name resolution may proceed in either iterative fashion, or recursive fashion. An iterative query expects the best answer the DNS server can provide immediately, without contacting other DNS servers.
• Local caches are usually kept at each name server to enhance the performance.
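The right-to-left walk over the name's components can be sketched in a few lines. This is a toy illustration, not real DNS: the authoritative servers live in a hypothetical in-memory map (`AUTHORITY`, with made-up server names), whereas a real resolver would contact actual name servers over the network.

```java
import java.util.*;

// Sketch of iterative name resolution: walk the labels of a host name
// right to left, asking a hypothetical in-memory table which server is
// authoritative for each suffix.
public class DnsSketch {
    // Illustrative data only; these server names are invented.
    static final Map<String, String> AUTHORITY = Map.of(
        "nl", "ns.nl",               // server for the .nl domain
        "vu.nl", "ns.vu.nl",         // server for vu.nl
        "cs.vu.nl", "ns.cs.vu.nl");  // server for cs.vu.nl

    // Returns the sequence of name servers an iterative resolver
    // would contact for the given host name.
    public static List<String> resolutionPath(String host) {
        String[] labels = host.split("\\.");
        List<String> path = new ArrayList<>();
        String suffix = "";
        for (int i = labels.length - 1; i > 0; i--) { // skip the host label itself
            suffix = suffix.isEmpty() ? labels[i] : labels[i] + "." + suffix;
            path.add(AUTHORITY.get(suffix));
        }
        return path;
    }

    public static void main(String[] args) {
        // .nl server first, then vu.nl, then cs.vu.nl
        System.out.println(resolutionPath("flits.cs.vu.nl"));
    }
}
```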
DNS: Scaling through distribution
OS Structures in Distributed Systems
 Operating systems for distributed systems can
be roughly divided into two categories
• Distributed Operating Systems: The OS
essentially tries to maintain a single, global view
of the resources it manages (Tightly-coupled
operating system)
• Network Operating Systems: Collection of
independent operating systems augmented by
network services (Loosely-coupled operating
system)

Modern distributed systems are mostly
designed to provide a level of transparency
between these two extremes, through the use of
middleware.
Distributed Operating Systems


Full transparency, users are not aware of the
multiplicity of machines.
Access to remote services similar to access to
local resources.
Distributed Operating Systems (Cont.)

• Each node has its own kernel for managing local resources (memory, local CPU, disk, …).
• The only means of communication among nodes is through message passing.
• Above each kernel is a common layer of software that implements the OS, supporting parallel and concurrent execution of various tasks.
• This layer may even provide a complete software implementation of shared memory (distributed shared memory).
• Additional facilities may include task assignment to processors, masking hardware failures, transparent storage, general interprocess communication, or data/computation/process migration.
Network Operating Systems


NOS does not try to provide a single view of the
distributed system.
Users are aware of the multiplicity of the
machines.
Network Operating Systems (Cont.)

NOS provide facilities to allow users to make
use of services in other machines
• Remote login (telnet, rlogin)
• File transfer (ftp)



Users need to explicitly log in to remote machines, or copy files from one machine to another.
Need multiple passwords, multiple access
permissions.
In contrast, adding or removing a machine is
relatively simple.
Middleware-Based Systems


Achieving full and efficient transparency with distributed operating systems is a major task.
On the other hand, a higher level of abstraction is highly desired on top of network operating systems.
Middleware-Based Systems (Cont.)



Each local system forming part of the underlying
NOS provides local resource management in
addition to simple communication.
The concept of middleware was introduced due to
the integration problems of various networked
applications (distributed transactions and
advanced communication facilities).
Example middlewares: Remote Procedure Calls,
Remote Method Invocations, Distributed File
Systems, Distributed Object Systems (CORBA)
Client-Server Model


How to organize processes in a distributed
environment?
Thinking in terms of clients that request services from
servers helps understanding and managing the
complexity.
Client-Server Model (Cont.)


Servers may in turn be clients of other servers:
vertical distribution (example: web crawlers at
a search engine)
Services may be also implemented as several
server processes in separate host computers
interacting as necessary to provide a service to
client processes: horizontal distribution
• The servers may partition the set of objects on
which the service is based and distribute them
between themselves.
• Replication may be used to increase performance,
availability and to improve fault tolerance.
Client-Server Model (Cont.)
An example of horizontal distribution of a Web Service
Peer-to-peer (P2P) systems

As an alternative to the client-server model,
interacting processes may act cooperatively as
peers to perform a distributed activity or
computation
• Example: distributed ‘whiteboard’ application
allowing users on several computers to view and
interactively modify a picture that is shared
between them
• Middleware layers will perform event notification
and group communication.

P2P networks gained popularity in the late 90s
with file-sharing services (e.g. Napster, Gnutella)
P2P Systems (cont.)

[Figure: peer applications (Peer 1, Peer 2, Peer 3, Peer 4, Peers 5 … N) each holding part of the sharable objects and coordinating directly with one another]
Communication Structure

The design of a communication network must address four basic issues:
• Naming and name resolution – How do two processes locate each other to communicate?
• Routing strategies – How are messages sent through the network?
• Connection strategies – How do two processes send a sequence of messages?
• Contention – The network is a shared resource, so how do we resolve conflicting demands for its use?
Communication Protocols



The systems on a network must agree on a
concrete set of rules and formats before
undertaking a communication session.
The rules are formalized in what are called
protocols. Ex: FTP, HTTP, SMTP, telnet, …
The definition of a protocol contains
• A specification of the sequence of messages that must
be exchanged
• A specification of the format of the data in the messages
OSI Protocol Model

The International Standards Organization (ISO)
developed a reference model identifying the various
levels involved, and pointing out which level performs
which task (Open Systems Interconnection Reference
Model – OSI model).
OSI Protocol Model



Each layer provides service to the one above it
through a well-defined interface.
On the sending side, each layer adds a header to the
message passed by the layer above and passes it
down to the layer below.
On the receiving side, the message is passed upward,
with each layer stripping off and examining its own
header.
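The add-a-header-on-the-way-down, strip-it-on-the-way-up discipline can be sketched as a toy program. This is only an illustration of the layering idea: the layer names are stand-ins, and a `|`-separated string stands in for real binary headers.

```java
// Toy illustration of OSI-style encapsulation: on the way down each
// layer prepends its header; on the way up each layer strips the
// header it recognizes. Layer names are illustrative, not a protocol.
public class OsiDemo {
    static final String[] LAYERS = {"APP", "TCP", "IP", "ETH"};

    public static String send(String payload) {
        String msg = payload;
        for (String layer : LAYERS)       // top to bottom: add headers
            msg = layer + "|" + msg;
        return msg;                       // what goes on the wire
    }

    public static String receive(String wire) {
        String msg = wire;
        for (int i = LAYERS.length - 1; i >= 0; i--) { // bottom to top
            String expected = LAYERS[i] + "|";
            if (!msg.startsWith(expected))
                throw new IllegalStateException("bad frame at " + LAYERS[i]);
            msg = msg.substring(expected.length());    // strip this layer's header
        }
        return msg;
    }

    public static void main(String[] args) {
        String wire = send("hello");
        System.out.println(wire);            // ETH|IP|TCP|APP|hello
        System.out.println(receive(wire));   // hello
    }
}
```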
Layers in OSI Protocol Model
1. Physical Layer
Mechanical and electrical network-interface connections –
implemented in the hardware; defines the means of transmitting
raw bits rather than logical data packets.
2. Data Link Layer
Framing, error detection and recovery; node-to-node (hop-to-hop)
frame delivery on the same link.
3. Network Layer
Providing host-to-host connections, routing packets (routers
work at this layer); responsible for source to destination packet
delivery including routing through intermediate hosts
4. Transport Layer
End-to-end connection management, message partitioning
into packets, packet ordering, flow and error control
Layers in OSI Protocol Model (Cont.)

5. Session Layer
Dialog and synchronization control for application entities (remote login, ftp, …); opening, closing, and managing a session between end-user application processes; a session is a dialogue or meeting between two or more communicating devices, or between a computer and a user.

6. Presentation Layer
Data representation transformations to accommodate heterogeneity, encryption/decryption; responsible for the delivery and formatting of information to the application layer; relieves the application layer of concern regarding syntactical differences in data representation within the end-user systems.

7. Application Layer
Protocols designed for specific requirements of different applications, often defining interfaces to services.
TCP/IP Protocols



Dominant “Internetworking” protocol suite used
in Internet.
Fewer layers than the ISO model; combines multiple functions at each layer → high efficiency (but more difficult to implement)
Many application services and application-level
protocols exist for TCP/IP, including the Web
(HTTP), email (SMTP, POP), netnews (NNTP), file
transfer (FTP) and Telnet.
ISO vs. TCP/IP Protocol Stacks
IP Layer

• Performs the routing function.
• Provides a datagram packet delivery service:
  • No set-up is required.
  • Packets belonging to the same message may follow different paths.
  • Packets can be lost, duplicated, delayed or delivered out of order.
• The IP layer puts IP datagrams into network packets suitable for transmission in the underlying networks, and may need to break a datagram into smaller packets.
• Every IP packet contains the full network address of the source and destination hosts.
IP Address Structure

Class A: leading bit 0, 7-bit network ID, 24-bit host ID
Class B: leading bits 10, 14-bit network ID, 16-bit host ID
Class C: leading bits 110, 21-bit network ID, 8-bit host ID
Class D (multicast): leading bits 1110, 28-bit multicast address
Class E (reserved): leading bits 11110, 27 bits unused
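The class of a historical (classful) address follows directly from the leading bits of the first octet. A minimal sketch, mirroring the class layout above; `classify` and the sample addresses are illustrative names, not part of any library:

```java
// Determine the historical address class from the first octet:
// the boundaries correspond to leading bits 0, 10, 110, 1110, 11110.
public class IpClass {
    public static char classify(String dottedQuad) {
        int first = Integer.parseInt(dottedQuad.split("\\.")[0]);
        if (first < 128) return 'A';   // leading bit  0
        if (first < 192) return 'B';   // leading bits 10
        if (first < 224) return 'C';   // leading bits 110
        if (first < 240) return 'D';   // leading bits 1110 (multicast)
        return 'E';                    // leading bits 11110 (reserved)
    }

    public static void main(String[] args) {
        System.out.println(classify("10.0.0.1"));     // A
        System.out.println(classify("130.37.24.6"));  // B
        System.out.println(classify("224.0.0.1"));    // D
    }
}
```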
TCP and UDP (Transport Layer)

• Whereas IP supports communication between pairs of computers (identified by their IP addresses), TCP and UDP, as transport protocols, provide process-to-process communication.
• Port numbers are used for addressing messages to processes within a particular computer.
• UDP is almost a transport-level replica of IP. A UDP datagram is encapsulated inside an IP packet, and includes a short header indicating the source and destination port numbers, a length field and a checksum.
TCP and UDP (Transport Layer)

UDP provides “connectionless” service
• no need for initial connection establishment
• no guarantee for reliable delivery is provided

TCP is “connection-oriented”
• TCP layer software provides delivery guarantee
for all the data presented by the sending process,
in the correct order.
• Before any data is transmitted, the sending and
receiving processes must co-operate to establish
a bi-directional communication channel.
Sockets



A socket is an endpoint for communication made up of an IP
address concatenated with a port number.
A pair of processes communicating over a network employ a pair
of sockets.
The server waits for incoming client requests by listening to a
specified port. Once a request is received, the server accepts a
connection from the client socket to complete the connection.
[Figure: a client socket, bound to any port at Internet address 138.37.94.248, sending a message to a server socket listening on an agreed port at Internet address 138.37.88.249; other ports on the server remain available]
Socket programming

Goal: learn how to build client/server applications that communicate using sockets.

Socket API:
• introduced in BSD4.1 UNIX
• sockets are explicitly created, used, and released by applications
• client/server paradigm
• two types of transport service via the socket API:
  • unreliable datagram (UDP)
  • reliable, byte stream-oriented (TCP)

A socket is a host-local, application-created/owned, OS-controlled interface (a "door") into which an application process can both send and receive messages to/from another (remote or local) application process.
Port Numbers

• Servers implementing specific services listen to well-known ports (all ports below 1024 are considered well-known).
  • Telnet server: port 23
  • FTP server: port 21
  • HTTP server: port 80
• When a client process initiates a request for a connection, it is assigned a port by the host computer.
• Berkeley Sockets Interface and X/Open Transport Interface are well-known socket implementations.
Trying out http (client side) for yourself

1. Telnet to your favorite Web server:
   telnet www.eurecom.fr 80
   Opens a TCP connection to port 80 (the default http server port) at www.eurecom.fr. Anything typed in is sent to port 80 at www.eurecom.fr.
2. Type in a GET http request:
   GET /~ross/index.html HTTP/1.0
   By typing this in (hit carriage return twice), you send this minimal (but complete) GET request to the http server.
3. Look at the response message sent by the http server!
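The same minimal request can be built programmatically. A small sketch: `request` assembles the request line plus the blank line (the two carriage returns); the commented-out part shows how one might write it to a TCP socket, but it is not executed here since it needs network access.

```java
// Build the minimal (but complete) HTTP/1.0 GET request by hand:
// request line, then CRLF CRLF (the "hit carriage return twice").
public class HttpGet {
    public static String request(String path) {
        return "GET " + path + " HTTP/1.0\r\n\r\n";
    }

    public static void main(String[] args) {
        String req = request("/~ross/index.html");
        System.out.print(req);
        // Sending it over a real TCP connection would look like this
        // (commented out: requires network access):
        // try (java.net.Socket s = new java.net.Socket("www.eurecom.fr", 80)) {
        //     s.getOutputStream().write(req.getBytes());
        //     s.getInputStream().transferTo(System.out);  // the response message
        // }
    }
}
```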
TCP/IP Sockets




• Sockets may use the TCP or UDP protocol when connecting hosts in the Internet.
• TCP requires a connection establishment phase and provides guaranteed delivery.
• UDP does not require a connection set-up phase; however, it provides only a best-effort delivery service.
• The communication primitives are slightly different in the two cases.
The programmer's conceptual view of a TCP/IP Internet

[Figure: applications sit on top of TCP and UDP, which both run over IP]
Socket programming with TCP

Client must contact server:
• the server process must first be running
• the server must have created a socket (door) that welcomes the client's contact

Client contacts server by:
• creating a client-local TCP socket
• specifying the IP address and port number of the server process
• when the client creates the socket, the client TCP establishes a connection to the server TCP

When contacted by a client, the server TCP creates a new socket for the server process to communicate with that client
• allows the server to talk with multiple clients

Application viewpoint: TCP provides reliable, in-order transfer of bytes (a "pipe") between client and server.
Client-Server Communication
with Sockets (TCP)
Berkeley Sockets API

Socket primitives for TCP/IP:

Primitive | Meaning
Socket    | Create a new communication endpoint
Bind      | Attach a local address to a socket
Listen    | Announce willingness to accept connections
Accept    | Block caller until a connection request arrives
Connect   | Actively attempt to establish a connection
Send      | Send some data over the connection
Receive   | Receive some data over the connection
Close     | Release the connection
C server (TCP)

/* A simple server in the internet domain using TCP */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>     /* bzero */
#include <unistd.h>     /* read, write */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* error helper (not shown on the original slide) */
void error(const char *msg) { perror(msg); exit(1); }

int main(int argc, char *argv[])
{
    int sockfd, newsockfd, portno, n;
    socklen_t clilen;
    char buffer[256];
    struct sockaddr_in serv_addr, cli_addr;

    /* Create socket at port 6789 */
    sockfd = socket(AF_INET, SOCK_STREAM, 0);   /* open a socket */
    if (sockfd < 0)
        error("ERROR opening socket");
    bzero((char *) &serv_addr, sizeof(serv_addr));
    /* place sizeof(serv_addr) 0-bytes in the area pointed to by serv_addr */
    portno = 6789;
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = INADDR_ANY;
    serv_addr.sin_port = htons(portno);  /* converts to network byte order */
    if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
        error("ERROR on binding");

    /* Wait, on the welcoming socket, for contact by a client */
    listen(sockfd, 5);
    clilen = sizeof(cli_addr);
    newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
    if (newsockfd < 0)
        error("ERROR on accept");

    /* Read/write a line from/to the socket */
    bzero(buffer, 256);
    n = read(newsockfd, buffer, 255);
    if (n < 0) error("ERROR reading from socket");
    printf("Here is the message: %s\n", buffer);
    n = write(newsockfd, "I got your message", 18);
    if (n < 0) error("ERROR writing to socket");
    return 0;
}
Java server (TCP)

import java.io.*;
import java.net.*;

class TCPServer {
    public static void main(String argv[]) throws Exception
    {
        String clientSentence;
        String capitalizedSentence;
        // Create welcoming socket at port 6789
        ServerSocket welcomeSocket = new ServerSocket(6789);
        while (true) {
            // Wait, on the welcoming socket, for contact by a client
            Socket connectionSocket = welcomeSocket.accept();
            // Create input/output streams, attached to the socket
            BufferedReader inFromClient =
                new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
            DataOutputStream outToClient =
                new DataOutputStream(connectionSocket.getOutputStream());
            // Read a line from / write a line to the socket
            clientSentence = inFromClient.readLine();
            capitalizedSentence = clientSentence.toUpperCase() + '\n';
            outToClient.writeBytes(capitalizedSentence);
        }
    }
}
C client (TCP)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>     /* bzero, bcopy */
#include <unistd.h>     /* read, write */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

/* error helper (not shown on the original slide) */
void error(const char *msg) { perror(msg); exit(1); }

int main(int argc, char *argv[])
{
    int sockfd, portno, n;
    struct sockaddr_in serv_addr;
    struct hostent *server;
    char buffer[256];

    /* Create client socket, connect to server */
    portno = atoi(argv[2]);
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd < 0)
        error("ERROR opening socket");
    server = gethostbyname(argv[1]);
    if (server == NULL) {
        fprintf(stderr, "ERROR, no such host\n");
        exit(0);
    }
    bzero((char *) &serv_addr, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    bcopy((char *)server->h_addr,
          (char *)&serv_addr.sin_addr.s_addr,
          server->h_length);  /* bcopy(s1, s2, n) copies n bytes from s1 to s2 */
    serv_addr.sin_port = htons(portno);
    if (connect(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
        error("ERROR connecting");

    /* Send a line to the server */
    printf("Please enter the message: ");
    bzero(buffer, 256);
    fgets(buffer, 255, stdin);
    n = write(sockfd, buffer, strlen(buffer));
    if (n < 0)
        error("ERROR writing to socket");

    /* Read a line from the server */
    bzero(buffer, 256);
    n = read(sockfd, buffer, 255);
    if (n < 0)
        error("ERROR reading from socket");
    printf("%s\n", buffer);
    return 0;
}
Java client (TCP)

import java.io.*;
import java.net.*;

class TCPClient {
    public static void main(String argv[]) throws Exception
    {
        String sentence;
        String modifiedSentence;
        // Create input stream
        BufferedReader inFromUser =
            new BufferedReader(new InputStreamReader(System.in));
        // Create client socket, connect to server
        Socket clientSocket = new Socket("hostname", 6789);
        // Create output/input streams attached to the socket
        DataOutputStream outToServer =
            new DataOutputStream(clientSocket.getOutputStream());
        BufferedReader inFromServer =
            new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
        // Send a line to the server
        sentence = inFromUser.readLine();
        outToServer.writeBytes(sentence + '\n');
        // Read a line from the server
        modifiedSentence = inFromServer.readLine();
        System.out.println("FROM SERVER: " + modifiedSentence);
        clientSocket.close();
    }
}
Socket programming with UDP

UDP: no "connection" between client and server
• no handshaking
• sender explicitly attaches the IP address and port of the destination
• server must extract the IP address and port of the sender from the received datagram

UDP: transmitted data may be received out of order, or lost.

Application viewpoint: UDP provides unreliable transfer of groups of bytes ("datagrams") between client and server.
Client/server socket interaction: UDP

Server (running on hostid):
1. create socket, port = x, for incoming requests: serverSocket = DatagramSocket()
2. read request from serverSocket
3. write reply to serverSocket, specifying client host address and port number

Client:
1. create socket: clientSocket = DatagramSocket()
2. create and address (hostid, port = x) a datagram request, send it using clientSocket
3. read reply from clientSocket
4. close clientSocket
Example: Java server (UDP)

import java.io.*;
import java.net.*;

class UDPServer {
    public static void main(String args[]) throws Exception
    {
        // Create datagram socket at port 9876
        DatagramSocket serverSocket = new DatagramSocket(9876);
        byte[] receiveData = new byte[1024];
        byte[] sendData = new byte[1024];
        while (true)
        {
            // Create space for the received datagram, then receive it
            DatagramPacket receivePacket =
                new DatagramPacket(receiveData, receiveData.length);
            serverSocket.receive(receivePacket);
            String sentence = new String(receivePacket.getData());
            // Get IP address and port # of sender
            InetAddress IPAddress = receivePacket.getAddress();
            int port = receivePacket.getPort();
            String capitalizedSentence = sentence.toUpperCase();
            sendData = capitalizedSentence.getBytes();
            // Create datagram to send to client; write it out to the socket
            DatagramPacket sendPacket =
                new DatagramPacket(sendData, sendData.length, IPAddress, port);
            serverSocket.send(sendPacket);
        }   // end of while loop: loop back and wait for another datagram
    }
}
Example: Java client (UDP)

import java.io.*;
import java.net.*;

class UDPClient {
    public static void main(String args[]) throws Exception
    {
        // Create input stream
        BufferedReader inFromUser =
            new BufferedReader(new InputStreamReader(System.in));
        // Create client socket
        DatagramSocket clientSocket = new DatagramSocket();
        // Translate hostname to IP address using DNS
        InetAddress IPAddress = InetAddress.getByName("hostname");
        byte[] sendData = new byte[1024];
        byte[] receiveData = new byte[1024];
        String sentence = inFromUser.readLine();
        sendData = sentence.getBytes();
        // Create datagram with data-to-send, length, IP addr, port
        DatagramPacket sendPacket =
            new DatagramPacket(sendData, sendData.length, IPAddress, 9876);
        // Send datagram to server
        clientSocket.send(sendPacket);
        // Read datagram from server
        DatagramPacket receivePacket =
            new DatagramPacket(receiveData, receiveData.length);
        clientSocket.receive(receivePacket);
        String modifiedSentence =
            new String(receivePacket.getData());
        System.out.println("FROM SERVER:" + modifiedSentence);
        clientSocket.close();
    }
}
Remote Procedure Calls

• A communication middleware solution.
• Idea: handle communication as transparently as a (local) procedure call.
• Allows programs to call procedures located on other machines.
Remote Procedure Call



When a process on the client machine A calls a
procedure on the server machine B, the calling process
on A is suspended.
Information can be transported from the caller to the
callee in the parameters and can come back in the
procedure result.
No message passing at all is visible to the programmer.
RPC: Client and Server Stubs



For the client and server sides, the library
procedures will act between the caller/callee and
the Operating System.
When a remote procedure is called, the client stub
will pack the parameters into a message and call
OS (through send primitive). It will then block
itself (through receive primitive).
The server stub will unpack the parameters from
the message and will call the local procedure in a
usual way. When the procedure completes, the
server stub will pack the result and return it to the
client.
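The stub behavior described above can be sketched as a toy pair of stubs. This is only an illustration under simplifying assumptions: the "message" is a `byte[]`, the "network" is a direct method call, and `add`, `addRemote`, and `serverStub` are invented names, not any real RPC library.

```java
import java.nio.ByteBuffer;

// Toy client/server stub pair: the client stub marshals two ints into
// a byte[] message, the server stub unpacks them and calls the local
// procedure. A real RPC system would send the bytes over the network.
public class StubSketch {
    // the ordinary local procedure on the server side
    static int add(int a, int b) { return a + b; }

    // client stub: pack parameters, "send", then unpack the result
    public static int addRemote(int a, int b) {
        byte[] request = ByteBuffer.allocate(8).putInt(a).putInt(b).array();
        byte[] reply = serverStub(request);      // stands in for send/receive
        return ByteBuffer.wrap(reply).getInt();
    }

    // server stub: unpack parameters, call the procedure, pack the result
    static byte[] serverStub(byte[] request) {
        ByteBuffer in = ByteBuffer.wrap(request);
        int result = add(in.getInt(), in.getInt());
        return ByteBuffer.allocate(4).putInt(result).array();
    }

    public static void main(String[] args) {
        System.out.println(addRemote(10, 20)); // 30
    }
}
```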
Remote Procedure Calls (1)

A remote procedure call occurs in the following steps:
1. The client procedure calls the client stub in the normal way.
2. The client stub builds a message (marshalling the parameters) and calls the local operating system.
3. The client's OS sends the message across the network to the remote OS.
4. The remote OS gives the message to the server stub.
5. The server stub unpacks the parameters.
6. The server stub calls the server procedure implementing the function.
Continued …
Implementing RPC

• The client stub and server stub communicate over the network to perform the RPC.

[Figure: on the client machine, the client process calls the client stub, which uses the client OS; on the server machine, the server OS delivers the message to the server stub, which calls the server process]
Remote Procedure Calls (2)

A remote procedure call occurs in the following steps (continued):
7. The server does the work and returns the result to the stub.
8. The server stub packs it in a message and calls its local OS.
9. The server's OS sends the message across the network to the client's OS.
10. The client's OS gives the message to the client stub.
11. The stub unpacks the result and returns it to the client.
Parameter Passing in RPC

• Packing parameters into a message is called parameter marshaling.
• Passing value parameters poses no problems as long as both machines have the same representation for all data types (numbers, characters, etc.).
• This may not be taken for granted:
  • Floating-point numbers and integers may be represented in different ways.
  • Some architectures (e.g. Intel Pentium) number their bytes from right to left, while others (e.g. Sun SPARC) number them from left to right.
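The byte-ordering difference is easy to make concrete. A small sketch using `java.nio.ByteBuffer` (the class and names are standard Java; the demo itself is illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// The same 32-bit integer laid out in big-endian (SPARC-style) and
// little-endian (Pentium-style) byte order: the four bytes come out
// reversed, which is why marshaling must fix a common byte order.
public class ByteOrderDemo {
    public static byte[] encode(int value, ByteOrder order) {
        return ByteBuffer.allocate(4).order(order).putInt(value).array();
    }

    public static void main(String[] args) {
        byte[] big = encode(0x01020304, ByteOrder.BIG_ENDIAN);
        byte[] little = encode(0x01020304, ByteOrder.LITTLE_ENDIAN);
        System.out.printf("big:    %d %d %d %d%n", big[0], big[1], big[2], big[3]);
        // big:    1 2 3 4
        System.out.printf("little: %d %d %d %d%n", little[0], little[1], little[2], little[3]);
        // little: 4 3 2 1
    }
}
```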
Parameter Passing in RPC (Cont.)

• Passing reference parameters: pointers, and in general reference parameters, are passed with considerable difficulty, because a pointer is only meaningful in the address space of the process where it is used.
• Solutions:
  • Forbid reference parameters.
  • Copy the entire data structure (e.g. an entire array may be sent if the size is known). This is straightforward for some parameter types (arrays, for example) but not in general.
  • How to handle complex data structures (e.g. general graphs)?
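The copy-the-data-structure workaround can be sketched with Java serialization standing in for marshaling. A toy illustration (`marshal`/`unmarshal` are invented names): the callee gets its own copy in its own "address space", so mutating it does not affect the caller's original.

```java
import java.io.*;
import java.util.*;

// Instead of sending a pointer, serialize the structure itself so the
// callee receives an independent copy.
public class CopyParam {
    public static byte[] marshal(ArrayList<Integer> data) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            new ObjectOutputStream(bytes).writeObject(data);
            return bytes.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    @SuppressWarnings("unchecked")
    public static ArrayList<Integer> unmarshal(byte[] msg) {
        try {
            return (ArrayList<Integer>)
                new ObjectInputStream(new ByteArrayInputStream(msg)).readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ArrayList<Integer> original = new ArrayList<>(List.of(1, 2, 3));
        ArrayList<Integer> copy = unmarshal(marshal(original));
        copy.add(4);                   // the "callee" mutates its own copy
        System.out.println(original);  // [1, 2, 3]  (caller unaffected)
        System.out.println(copy);      // [1, 2, 3, 4]
    }
}
```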
Stub Generation

Hiding a remote procedure call requires that the caller
and callee agree on
• the format of the messages
• the representation of simple data structures
• the transport layer protocol


Once the RPC protocol has been completely defined,
the client and server stubs need to be implemented.
Stubs for the same protocol but different procedures
generally differ only in their interface to applications.
Interfaces are often specified by means of an Interface
Definition Language (IDL). After an interface is
specified in such an IDL, it is then compiled into
client&server stubs.
Writing a Client
and a Server (1)
User generates client, server and an interface definition file. The code file must #include
the header file (to be generated).
Client Code

#include <stdio.h>
#include <rpc/rpc.h>
#include "RPCTest.h"

main(int argc, char **argv) {
    CLIENT *clientHandle;
    char *serverName = "cs1.gmu.edu";
    readargs a;
    Data *data;
    clientHandle =
        clnt_create(serverName, FILEREADWRITE, VERSION, "udp");
    /* creates a socket and a client handle */
    if (clientHandle == NULL) {
        clnt_pcreateerror(serverName); /* unable to contact server */
        exit(1);
    }
    a.val1 = 10;
    a.val2 = 20;
    data = double_1(&a, clientHandle); /* call to remote procedure */
    printf("%d**\n", *data);
    clnt_destroy(clientHandle);
}
Server Code (remote function double)

#include <stdio.h>
#include <rpc/rpc.h>
#include "RPCTest.h"

Data *double_1(readargs *a) {
    static Data result; /* must be static */
    result = a->val1 + a->val2;
    return &result;
}

Interface Specification

typedef int Data;
struct readargs {
    int val1; int val2;
}; /* pack multiple arguments into a struct */
program FILEREADWRITE {
    version VERSION {
        Data DOUBLE(readargs) = 1;
    } = 1;
} = 9999;
Writing a Client and a Server (2)

Four files are output by the Sun RPC IDL compiler from the interface definition file:
• A header file (e.g., interface.h, in C terms)
• An XDR file – describes data details
• The client stub
• The server stub
Writing a Client and a Server (cont.)

[Figure 4-12: The steps in writing a client and a server in Sun RPC. All of the generated code files must #include the header file; the XDR code is compiled by the C compiler into an XDR object file.]
Header File

/*
 * Please do not edit this file.
 * It was generated using rpcgen.
 */
#ifndef _RPCTEST_H_RPCGEN
#define _RPCTEST_H_RPCGEN
#include <rpc/rpc.h>
typedef int Data;
struct readargs { int val1; int val2; };
typedef struct readargs readargs;
#define FILEREADWRITE 9999
#define VERSION 1
#define DOUBLE 1
extern Data *double_1();
extern int filereadwrite_1_freeresult();
/* the xdr functions */
extern bool_t xdr_Data();
extern bool_t xdr_writeargs();
extern bool_t xdr_readargs();
#endif /* !_RPCTEST_H_RPCGEN */
Client Stub

/*
 * Please do not edit this file.
 * It was generated using rpcgen.
 */
#include "RPCTest.h"
#ifndef _KERNEL
#include <stdio.h>
#include <stdlib.h> /* getenv, exit */
#endif /* !_KERNEL */

/* Default timeout can be changed using clnt_control() */
static struct timeval TIMEOUT = { 25, 0 };

Data *double_1(argp, clnt)
    readargs *argp;
    CLIENT *clnt;
{
    static Data clnt_res;
    memset((char *)&clnt_res, 0, sizeof (clnt_res));
    if (clnt_call(clnt, DOUBLE,
            (xdrproc_t) xdr_readargs, (caddr_t) argp,
            (xdrproc_t) xdr_Data, (caddr_t) &clnt_res,
            TIMEOUT) != RPC_SUCCESS) {
        return (NULL);
    }
    return (&clnt_res);
}
Server Stub (lots of code omitted)

/*
 * Please do not edit this file.
 * It was generated using rpcgen.
 */
#include "RPCTest.h"
…
static void
closedown(sig)
    int sig;
{
    …
}

static void filereadwrite_1(rqstp, transp)
    struct svc_req *rqstp;
    register SVCXPRT *transp;
{
    …
    switch (rqstp->rq_proc) {
    case DOUBLE:
        _xdr_argument = xdr_readargs;
        _xdr_result = xdr_Data;
        local = (char *(*)()) double_1;
        break;
    …
    }
}

main()
{
    …
}
XDR file
/*
* Please do not edit this file.
* It was generated using rpcgen.
*/
#include "RPCTest.h"
bool_t xdr_Data(xdrs, objp)
register XDR *xdrs;
Data *objp;
{
#if defined(_LP64) || defined(_KERNEL)
register int *buf;
#else
register long *buf;
#endif
if (!xdr_int(xdrs, objp)) return (FALSE);
return (TRUE);
}
GMU – CS 571
6.76
bool_t xdr_readargs(xdrs, objp)
register XDR *xdrs;
readargs *objp;
{
#if defined(_LP64) || defined(_KERNEL)
register int *buf;
#else
register long *buf;
#endif
if (!xdr_int(xdrs, &objp->val1)) return (FALSE);
if (!xdr_int(xdrs, &objp->val2)) return (FALSE);
return (TRUE);
}
Binding a Client to a Server (1)
 Registration of a server makes it possible for a
client to locate the server and bind to it.
 Server location is done in two steps:
1. Connect to the server’s machine on a well-known port.
2. Locate the server’s port on that machine.
GMU – CS 571
6.77
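The two-step lookup above can be sketched in-process, with a plain map standing in for the port-mapper daemon that listens on the machine's well-known port. Everything here is illustrative: the class name, the map, and port 54321 are made up; only the program number 9999 comes from the slides' header file.

```java
import java.util.HashMap;
import java.util.Map;

public class PortMapperDemo {
    // Stands in for the port-mapper daemon reachable at the
    // machine's well-known port (step 1 of server location).
    static final Map<Integer, Integer> portMapper = new HashMap<>();

    public static void main(String[] args) {
        // Server side: at startup, register program number -> the
        // dynamically assigned port the server is listening on.
        int FILEREADWRITE = 9999;   // program number from the slides
        portMapper.put(FILEREADWRITE, 54321);

        // Client side (step 2): ask the mapper for the server's port,
        // then connect to that port for the actual RPCs.
        int serverPort = portMapper.get(FILEREADWRITE);
        System.out.println("server listening on port " + serverPort);
    }
}
```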
Binding a Client to a Server (2)
Figure 4-13. Client-to-server binding in DCE (Distributed Computing
Environment). Sun RPC does not use a directory server.
GMU – CS 571
6.78
Asynchronous RPC (1)
The interaction between client and server in a traditional RPC.
GMU – CS 571
6.79
Asynchronous RPC (2)
The interaction using asynchronous RPC.
GMU – CS 571
6.80
Asynchronous RPC (3)
A client and server interacting through two asynchronous RPCs.
GMU – CS 571
6.81
Remote Method Invocation (RMI)
 RMI is a Java feature similar to RPC: it allows a
thread to invoke a method on a remote object.
 The main differences from RPC:
• RPC supports procedural programming whereby
only remote procedures or functions may be
called. With RMI, it is possible to invoke methods
on remote objects.
• In RMI, it is possible to pass objects as
parameters to remote methods.
GMU – CS 571
6.82
RMI
 RMI = RPC + Object-orientation
• Java RMI
• CORBA
 Middleware that is language-independent
• Microsoft DCOM/COM+
• SOAP
 RMI on top of HTTP
GMU – CS 571
6.83
Remote and local method invocations
[Figure: objects A–F connected by a chain of local and remote
invocations; invocations within one process are local, while those
crossing process boundaries are remote.]
GMU – CS 571
6.84
Distributed Objects
Common organization of a remote object with client-side proxy (loaded
when the client binds to a remote object).
GMU – CS 571
6.85
Binding a Client to an Object
(a)
Distr_object* obj_ref;     // Declare a systemwide object reference
obj_ref = …;               // Initialize the reference to a distributed object
obj_ref->do_something();   // Implicitly bind and invoke a method

(b)
Distr_object obj_ref;      // Declare a systemwide object reference
Local_object* obj_ptr;     // Declare a pointer to local objects
obj_ref = …;               // Initialize the reference to a distributed object
obj_ptr = bind(obj_ref);   // Explicitly bind and obtain a pointer to the local proxy
obj_ptr->do_something();   // Invoke a method on the local proxy

(a) Example with implicit binding using only global references
(b) Example with explicit binding using global and local references
GMU – CS 571
6.86
Distributed Objects
 Remote object references
• An identifier that can be used throughout a distributed system to refer
to a particular remote object
 Remote interfaces
• CORBA provides an interface definition language (IDL) for specifying a
remote interface
• Java RMI: a Java interface that extends the Remote interface
 Actions: remote invocations
 Remote exceptions may arise for reasons such as partial failure or
message loss
 Distributed garbage collection: cooperation between local garbage
collectors is needed
GMU – CS 571
6.87
RMI Programming
 RMI software
• Generated by IDL compiler
• Proxy (client side)
 Behaves like remote object to clients (invoker)
 Marshals arguments, forwards message to remote object,
unmarshals results, returns results to client
• Skeleton (server side)
 Server-side stub
 Unmarshals arguments, invokes the method, marshals the
results and sends them back to the proxy
• Dispatcher (server side)
 Receives the request message from communication module,
passes on the message to the appropriate method in the skeleton
GMU – CS 571
6.88
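The proxy/skeleton path described above can be sketched in plain Java, with the network hop replaced by a direct call and marshalling reduced to strings. None of these names (Calc, CalcProxy, CalcSkeleton) are the real java.rmi machinery; they are illustrative only.

```java
interface Calc { int doubleIt(int x); }

// The remote object (servant) living on the server.
class CalcServant implements Calc {
    public int doubleIt(int x) { return 2 * x; }
}

// Skeleton: unmarshals the request, invokes the method,
// marshals the result back.
class CalcSkeleton {
    private final Calc servant = new CalcServant();
    String invoke(String request) {          // request: "doubleIt:21"
        String[] parts = request.split(":");
        int result = servant.doubleIt(Integer.parseInt(parts[1]));
        return Integer.toString(result);
    }
}

// Proxy: behaves like the remote object to the client; marshals the
// arguments, forwards the message, and unmarshals the result.
class CalcProxy implements Calc {
    private final CalcSkeleton skeleton;     // stands in for the network hop
    CalcProxy(CalcSkeleton s) { skeleton = s; }
    public int doubleIt(int x) {
        String reply = skeleton.invoke("doubleIt:" + x);
        return Integer.parseInt(reply);
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        Calc remote = new CalcProxy(new CalcSkeleton());
        System.out.println(remote.doubleIt(21));  // looks like a local call
    }
}
```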
RMI Programming
 Binder
• Client programs need a means of obtaining a remote object
reference
• The binder is a service that maintains a mapping from textual
names to remote object references
• Servers need to register the services they are exporting with
the binder
• Examples: Java RMIregistry, CORBA Naming Service
 Server threads
• Several choices: thread per object, thread per invocation
• Remote method invocations must allow for concurrent
execution
GMU – CS 571
6.89
Java RMI
 Features
• Integrated with Java language + libraries
 Security, write once run anywhere, multithreaded
 Object orientation
• Can pass “behavior”
 Mobile code
 Not possible in CORBA, traditional RPC systems
• Distributed Garbage Collection
• Remoteness of objects intentionally not transparent
GMU – CS 571
6.90
Remote Interfaces, Objects, and
Methods
 Objects become remote by implementing a
remote interface
• A remote interface extends the interface
java.rmi.Remote
• Each method of the interface declares
java.rmi.RemoteException in its throws clause in
addition to any application-specific clauses
GMU – CS 571
6.91
Creating distributed applications using RMI
1. Define the remote interfaces
2. Implement the remote objects
3. Implement the client (can be done anytime after remote
interfaces have been defined)
4. Register the remote object in the name server registry
5. Generate the stub and skeleton using rmic
6. Start the registry
7. Start the server
8. Run the client
GMU – CS 571
6.92
import java.rmi.*;
import java.rmi.registry.*;
import java.net.*;
public class RmiClient
{
static public void main(String args[])
{ ReceiveMessageInterface rmiServer;
Registry registry;
String serverAddress=args[0];
String serverPort=args[1];
String text=args[2];
System.out.println("sending "+text+" to "+serverAddress+":"+serverPort);
try{
// get the “registry”
registry=LocateRegistry.getRegistry(serverAddress,(new Integer(serverPort)).intValue() );
// look up the remote object
rmiServer= (ReceiveMessageInterface)(registry.lookup("rmiServer"));
// call the remote method
int i = rmiServer.receiveMessage(text);
System.out.println("received "+i);
}
catch(RemoteException e){ e.printStackTrace();}
catch(NotBoundException e){ e.printStackTrace(); }
}
}
Java Client
GMU – CS 571
6.93
import java.rmi.*;
import java.rmi.registry.*;
import java.rmi.server.*;
import java.net.*;
Java Server (1)
public class RmiServer extends java.rmi.server.UnicastRemoteObject
implements ReceiveMessageInterface
{
int thisPort;
String thisAddress;
Registry registry; // rmi registry for lookup the remote objects.
// This method is called from the remote client by the RMI.
// This is the implementation of the “ReceiveMessageInterface”.
public int receiveMessage(String x) throws RemoteException
{
System.out.println(x);
return 42;
}
GMU – CS 571
6.94
public RmiServer() throws RemoteException
{
try{ // get the address of this host.
thisAddress= (InetAddress.getLocalHost()).toString();
} catch(Exception e){
throw new RemoteException("can't get inet address.");
}
thisPort=3232; // this port(registry’s port)
System.out.println("this address="+thisAddress+",port="+thisPort);
try{
// create the registry and bind the name and object.
registry = LocateRegistry.createRegistry( thisPort );
registry.rebind("rmiServer", this);
} catch(RemoteException e){ throw e;}
}
static public void main(String args[])
{
try{
RmiServer s=new RmiServer();
} catch (Exception e) {
e.printStackTrace();
System.exit(1);
}
}
}
GMU – CS 571
Java Server (2)
6.95
Java Remote interface
import java.rmi.*;
public interface ReceiveMessageInterface extends Remote
{
int receiveMessage(String x) throws RemoteException;
}
GMU – CS 571
6.96
The Naming class of Java RMIregistry
void rebind (String name, Remote obj)
This method is used by a server to register the identifier of a remote
object by name.
void bind (String name, Remote obj)
This method can alternatively be used by a server to register a remote
object by name, but if the name is already bound to a remote object
reference an exception is thrown.
void unbind (String name, Remote obj)
This method removes a binding.
Remote lookup(String name)
This method is used by clients to look up a remote object by name. A
remote object reference is returned.
String [] list()
This method returns an array of Strings containing the names bound in
the registry.
GMU – CS 571
6.97
Object References as Parameters
The situation when passing an object by reference or by value. There is
no analogous activity in RPC.
GMU – CS 571
6.98
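Pass-by-value can be illustrated with Java serialization, the same mechanism RMI uses to marshal non-remote parameter objects: the server receives a copy, so its mutations never reach the caller's object. The Point class and marshalCopy helper below are illustrative, not RMI API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class PassDemo {
    static class Point implements Serializable {
        int x;
        Point(int x) { this.x = x; }
    }

    // Pass-by-value: the object is marshalled and an independent
    // copy is what travels to the server.
    static Point marshalCopy(Point p) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(p);
        out.flush();
        return (Point) new ObjectInputStream(
            new ByteArrayInputStream(bytes.toByteArray())).readObject();
    }

    public static void main(String[] args) throws Exception {
        Point local = new Point(1);
        Point copy = marshalCopy(local);  // the "server's" copy
        copy.x = 99;                      // mutating the copy...
        System.out.println(local.x);      // ...leaves the caller's object alone
    }
}
```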
Classes supporting Java RMI
[Figure: class hierarchy — RemoteObject is extended by RemoteServer,
which is extended by Activatable and UnicastRemoteObject; a servant
class extends one of these.]
GMU – CS 571
6.99