Concurrent Servers

Transcript: Concurrent Servers

Client-Server Programming and Applications
References
• Douglas Comer, David Stevens, “Internetworking with TCP/IP: Client-Server Programming and Applications”, Volume III, Prentice Hall
– Chapter 8: Algorithms and Issues in Server Software Design
– Chapter 11: Concurrent Connection-oriented Servers
– Chapter 12: Single-process Concurrent Servers
• “An Advanced 4.4BSD Interprocess Communication Tutorial”
www.cis.temple.edu/~ingargio/old/cis307s96/readings/docs/ipc.html
Terminology
• Sequential program
– Typical of most programs
– A single thread of control
• Concurrent program
– Multiple threads of control
– Execution proceeds in parallel
– More difficult to create
Types of Servers
A server can be connectionless or connection-oriented, and iterative or concurrent, giving four combinations:
• iterative, connectionless
• concurrent, connectionless
• iterative, connection-oriented
• concurrent, connection-oriented
Iterative (or sequential) Server
• Handles one request at a time
• Client waits for all previous requests to be processed
• Unacceptable to user if long request blocks short
request
while (1) {
    accept a connection (or request) from a client
    service the client
    close the connection (if necessary)
}
Concurrent Server
• Can handle multiple requests at a time by creating
new thread of control to handle each request
• No waiting
while (1) {
    accept a connection/request from a client
    start a new thread to handle this client
    /* the thread must close the connection! */
}
Stateful Server
• Maintains some information between requests
• Requires smaller messages, since some
information is kept between contacts
• May become confused if a connection terminates
abnormally (if the design is not fault tolerant)
• Example: FTP
Stateless Server
• Requires larger messages. That is, the message
must contain all information about the request
since no state information is kept.
• Example: HTTP
Applications and Transport Protocols
Application              Application-Layer Protocol   Underlying Transport Protocol
Email                    SMTP                         TCP
Remote Terminal Access   Telnet                       TCP
Web                      HTTP                         TCP
File Transfer            FTP                          TCP
Remote File Server       NFS                          Typically UDP
Streaming Multimedia     Proprietary                  Typically UDP
Network Management       SNMP                         Typically UDP
Routing Protocol         RIP                          Typically UDP
Name Translation         DNS                          Typically UDP
Concurrency in Client/server
• Concurrency refers to real or apparent
simultaneous computing
– time-sharing
– multiprocessor
• Concurrent processing is fundamental to
distributed computing
– concurrency among machines in a network
– among clients on a single machine
– among client programs running on several machines
Concurrency in Client Software
• “Most client software achieves concurrent operation because the underlying OS allows users to execute client programs concurrently, or because users can execute client software simultaneously”
SERVER SOFTWARE DESIGN
• A server follows a simple set of steps:
– Creates a socket and binds it to the well-known port at which it desires to receive client requests (in UNIX systems, the services are registered in /etc/services).
– Enters an infinite loop in which it accepts the next request that arrives from a client.
– Processes the request by itself.
– Formulates a reply and sends it back to the client according to the predefined service protocol (e.g., POP3, TELNET, FTP, HTTP, etc.).
 This represents the simplest form of server design and is referred to as an iterative server. It can handle only one request at a time.
Why Was There a Need to Introduce Concurrency in Client-Server?
• The conventional iterative method is not effective
 Iterative servers are suitable only for the most trivial services (e.g., Time of Day, Local Time, Echo servers).
 They are NOT suitable for requests that require a substantial amount of time to complete (e.g., FTP, TELNET, file servers).
• Transferring a large amount of data can consume a lot of time
• This deprives other clients of communication with the server
 A concurrent server, on the other hand, is a server that can handle multiple requests at the same time.
How Concurrent Implementation Could
Be an Answer to This Kind of Problem
• A concurrent server allows communication with
many clients simultaneously
• In this kind of configuration a single client is not
allowed to hold all resources
• Concurrency in servers
– more difficult
– server software must be programmed explicitly to
handle requests concurrently
- A concurrent server can handle multiple requests at the same time.
- The server maintains a queue of connections.
[Figure: several clients connecting through the Internet to a concurrent server]
- An iterative server can handle only ONE request at a time.
- No queues are maintained.
- If the server is busy, the client must retry.
[Figure: several clients connecting through the Internet to an iterative server]
CONCURRENT SERVER ALGORITHM
 Introducing concurrency into a server arises from the
need to provide faster response time to multiple clients.
 Concurrency improves response time if:
• forming a response requires significant I/O;
• the processing time varies dramatically among requests;
• the server executes on a computer with multiple processors.
 Concurrency may have significant overhead cost
associated with it.
How Does the Concurrent Model Work?
• Master-slave model
• The server is not connected to the client directly
• The accept call makes the server wait at a particular port for a request to arrive
• Upon its arrival, the master server process creates a slave process to handle the connection
• The master process then blocks in accept again, waiting for the next call
[Figure: the master server process owns a socket for connection requests; slave processes slave1 … slaveN each own a socket for an individual connection. The server application processes run above the operating system.]
Advantages of the Concurrent Model
• More than one slave process can be created and
can be allowed to operate concurrently
A. Concurrent, Connectionless Server:
Parent Step 1: Create a socket and bind; leave the socket unconnected.
Parent Step 2: Repeatedly call recvfrom() to receive the next request from a client, and create a new slave process (either through thr_create() or fork()) to handle the response.
Slave Step 1: Receive a specific request upon creation, as well as access to the socket.
Slave Step 2: Form a reply according to the application protocol and send it back to the client using the sendto() API.
Slave Step 3: Exit (i.e., a slave process terminates after handling one request).
 Because process creation is usually expensive, few connectionless servers have concurrent implementations.
B. Concurrent, Connection-oriented Server:
• Connection-oriented servers use a connection as the
basic paradigm for communication.
• They allow a client to establish a connection to a
server, communicate over that connection, then
discard it after finishing.
• In most cases, the connection between clients and a
server handles more than a single request, thus:
Connection-oriented protocols implement concurrency among
connections rather than individual requests.
The Algorithm
• Parent Step 1: Create a socket and bind, leave the
socket unconnected.
• Parent Step 2: Place the socket in a passive mode,
making it ready for use by a server, e.g.
listen(s, 3).
• Parent Step 3: Repeatedly call accept() to
receive the next request from a client, and create a
new slave process to handle the response.
The Algorithm (cont.)
• Slave Step 1: Receive a connection request upon
creation, i.e., socket for the connection.
• Slave Step 2: Interact with the client using the
connection, read and send back the responses.
• Slave Step 3: Close the connection and exit. The slave process exits after handling all requests from its client.
The Pseudo code
/* create a socket and bind */
s = tcp_open(…);
listen(s, 5);
len = sizeof(client_addr);
for (;;) {
    /* Wait for a connection request */
    ns = accept(s, (struct sockaddr *) &client_addr, &len);
    /* Now create a slave process to handle the request */
    child = fork();
    if (child == 0) {
        close(s);                /* slave does not need the master socket */
        process_the_request(ns);
        exit(0);
    }
    close(ns);                   /* just to be safe */
}
Concurrent ECHO
• Echo service
• Iterative or concurrent implementation?
Concurrent ECHO server example
/* TCPechod.c - main, TCPechod */
/* include header files here */

#define QLEN    5               /* max. connection queue length */
#define BUFSIZE 4096

void reaper(int sig);           /* clean up zombie children */
int TCPechod(int fd);           /* echo data until end of file */
int errexit(const char *format, ...);
int passiveTCP(const char *service, int qlen);

/* main - concurrent TCP server for ECHO service */
int main(int argc, char *argv[])
{
    char *service = "echo";     /* service name or port number */
    struct sockaddr_in fsin;    /* the address of a client */
    unsigned int alen;          /* length of a client's address */
    int msock;                  /* master server socket */
    int ssock;                  /* slave server socket */

    /* check arguments - not detailed here */
    msock = passiveTCP(service, QLEN);
    (void) signal(SIGCHLD, reaper);
    while (1) {
        alen = sizeof(fsin);
        ssock = accept(msock, (struct sockaddr *)&fsin, &alen);
        /* error with accept call not detailed here */
        switch (fork()) {
        case 0:                 /* child */
            (void) close(msock);
            exit(TCPechod(ssock));
        default:                /* parent */
            (void) close(ssock);
            break;
        case -1:
            errexit("fork: %s\n", strerror(errno));
        }
    }
}
Details of Concurrency
• The master server calls accept to wait for a connection request from a client.
• accept creates a new socket and returns its socket descriptor.
• The master server creates a slave process using fork to handle the connection.
• The parent process closes its copy of the new socket.
• The above loop is repeated.
Details of Concurrency
• A close by the master process on the new connection closes only the master's copy of that socket; the connection stays open for the slave.
• A close by the slave process closes the slave's copy of the socket.
• The slave process continues to have access to the new socket until it exits.
• The master continues to retain access to the socket that corresponds to the well-known port.
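The per-process descriptor semantics above can be demonstrated with a pipe standing in for a socket: after fork(), closing a descriptor in one process does not close the other process's copy. A minimal sketch (demo_close_semantics() is an illustrative name, not from the text):

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* After fork(), parent ("master") and child ("slave") each hold their
 * own reference to a descriptor; close() in one does not affect the
 * other.  Returns the number of bytes the parent still reads. */
int demo_close_semantics(char *out, size_t outlen)
{
    int p[2];
    if (pipe(p) < 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {              /* "slave": keeps the write end */
        close(p[0]);             /* closes only its copy of the read end */
        write(p[1], "echo", 4);
        _exit(0);
    }
    close(p[1]);                 /* "master" closes its copy of the write end */
    ssize_t n = read(p[0], out, outlen); /* still readable: slave's copy lives on */
    waitpid(pid, NULL, 0);
    close(p[0]);
    return (int)n;
}
```

The same rule explains why the slave can safely close the master socket and the master can safely close each new connection in the echo server above.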
Details of Concurrency
• The slave closes the master socket and provides the echo service.
• read and write are executed repeatedly, returning the number of bytes read or written.
• Here a negative return value indicates an error and 0 indicates the EOF condition.
• The exit call is used to terminate the process.
• When the slave exits, the system automatically closes its open descriptors.
How to Solve the Problem of Incompletely Terminated Processes
• A signal is sent to the parent process whenever a child process exits.
• The exiting process remains in the zombie state until the parent executes the wait3 system call.
• The server catches the child-termination signal and executes a signal-handling function:
signal(SIGCHLD, reaper)
• The above call indicates that the master server process should execute the function reaper whenever it receives the signal that a child process has exited (SIGCHLD).
• Function reaper calls the system function wait3 to complete the termination of a child that exits.
• By default, wait3 blocks until one or more children exit.
/* reaper - clean up zombie children */
void reaper(int sig)
{
    int status;

    while (wait3(&status, WNOHANG, (struct rusage *)0) >= 0)
        /* empty */ ;
}
wait3 returns a value in the status structure that can be
examined to find out about the process that has exited.
WNOHANG tells the kernel not to block if there are
no terminated children.
Alternative to function reaper
/* reaper - clean up zombie children */
void reaper(int sig)
{
    pid_t pid;
    int status;

    while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
        /* empty */ ;
    return;
}
waitpid returns a value in the status structure that can be examined to find out about the status of the child.
A pid argument of -1 says to wait for any child process.
How to handle an interrupted system call
• Recall:
– The parent is blocked in its call to accept when the SIGCHLD signal is delivered. What happens when the signal handler returns?
– Since the signal was caught while the parent was blocked in a slow system call (accept), the kernel causes accept to return an error of EINTR (interrupted system call). If the parent does not handle this error, it aborts.
• Solution:
if ((ssock = accept(….)) < 0)
    if (errno == EINTR) continue;
    else err_sys(“accept error”);
Summary
• ECHO service example
– use of fork function
– each slave process begins execution immediately
following the call to fork.
Single Process, Concurrent,
Connection-Oriented Servers (TCP)
(Chapter 12)
INTRODUCTION
• Last time: Concurrent Connection-Oriented server
- echo server - that supports multiple clients at the
same time using multiple processes
• Today: similar echo server but uses only one
single process.
Motivation for apparent concurrency
using a single process
• Cost of process creation
• Sharing of information among all connections
• Apparent concurrency among processes that share
memory can be achieved if the total load of
requests presented to the server does not exceed its
capacity to handle them.
Single-process, Concurrent Server
Idea: Arrange for a single process to keep TCP
connections open to multiple clients. In this case a
server handles a given connection when data
arrives. Thus arrival of data is used to trigger
processing.
How Concurrent Execution Differs From
Single-process Execution
• In concurrent execution a server creates a separate slave process to handle each new connection. So, theoretically, it depends on the operating system's time-slicing mechanism to share the CPU among the processes and thus among the connections.
• However in reality the arrival of data controls processing.
“Concurrent servers that require little processing time per
request often behave in a sequential manner where the
arrival of data triggers execution. Timesharing only takes
over if the load becomes so high that the CPU cannot
handle it sequentially.”
How Does the Single-Process Mechanism Work?
• In a single process server, a single server process
has TCP connections open to many clients.
• The process blocks waiting for data to arrive.
• On the arrival of data, on any connection, the
process awakens, handles the request and sends
the reply.
Advantages of Single-process Server Over
Multiple Process Concurrent Server
• A single-process implementation requires less switching between process contexts. Thus it may be able to handle a slightly higher load than an implementation that uses multiple processes.
[Figure: a single server application process with one socket for connection requests and multiple sockets for individual connections, running above the operating system.]
Details of Single-process Server
• A single-process server must perform the duties of both the master and the slave processes.
• A set of sockets is maintained. One socket in the set is bound to the well-known port at which the master can accept connections.
• The other sockets in the set each correspond to a connection over which a slave can handle requests.
Details of Single-process Server
• If the descriptor corresponding to the master socket is ready, the server calls accept on that socket to obtain a new connection. If a descriptor corresponding to a slave socket is ready, the server calls read to obtain the request and answers it.
• The above step is then repeated.
Algorithm
• Create a socket and bind to well-known port for
the service. Add socket to the list of those on
which I/O is possible.
• Use select to wait for I/O on existing sockets.
• If original socket is ready, use accept to obtain the
next connection, and add the new socket to the list
of those on which I/O is possible.
Algorithm (Cont.)
• If some socket other than the original is ready, use
read to obtain the next request, form a response,
and use write to send the response back to the
client.
• Continue processing with step 2 above.
The select system call
• Select provides asynchronous I/O by permitting a
single process to wait for the first of any file
descriptors (not restricted to sockets) in a specified set
to become ready. The caller can also specify a
maximum timeout for the wait.
• For example, tell the kernel to return only when:
– Any of the descriptors in set {1,4,5} are ready for reading
– Any of the descriptors in set {2,7} are ready for writing
– Any of the descriptors in set {1,4,5} have an exception
pending
– Or after 20 seconds have elapsed
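The pattern described above can be sketched as a small helper that waits for a single descriptor to become readable, with a timeout. This is a minimal sketch; wait_readable() is an illustrative name, and a real server would pass its whole descriptor set rather than one fd.

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Wait up to timeout_ms for fd to become readable.
 * Returns 1 if ready, 0 on timeout, -1 on error. */
int wait_readable(int fd, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);               /* start with an empty read set */
    FD_SET(fd, &rfds);            /* watch this one descriptor */
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    int n = select(fd + 1, &rfds, NULL, NULL, &tv);
    if (n < 0)
        return -1;                /* select failed */
    return FD_ISSET(fd, &rfds) ? 1 : 0;
}
```

Passing a NULL timeout instead, as the echo server below does, makes select block indefinitely until some descriptor is ready.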
The select system call
• retcode = select(numfds, rfds, wrfds, exfds, time);
• Arguments:
– int numfds
– fd_set *rfds
– fd_set *wrfds
– fd_set *exfds
– struct timeval *time
• Returns: the number of ready file descriptors
• fd_set data type: descriptor sets are typically an array of integers, with each bit in the integer corresponding to a descriptor.
Macros
• void FD_ZERO(fd_set *fdset);          /* clear all bits in fdset */
• void FD_SET(int fd, fd_set *fdset);   /* turn on the bit for fd in fdset */
• void FD_CLR(int fd, fd_set *fdset);   /* turn off the bit for fd in fdset */
• int FD_ISSET(int fd, fd_set *fdset);  /* is the bit for fd on in fdset? */
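As a minimal illustration of these macros, the following helper builds a descriptor set from a list of fds and tests membership, mirroring the {1,4,5} example earlier. fdset_contains() is a hypothetical name used only for this sketch.

```c
#include <sys/select.h>

/* Build an fd_set containing the nfds descriptors in fds[],
 * then report whether query is a member of the set. */
int fdset_contains(const int *fds, int nfds, int query)
{
    fd_set set;
    int i;

    FD_ZERO(&set);                /* clear all bits */
    for (i = 0; i < nfds; i++)
        FD_SET(fds[i], &set);     /* turn on each listed descriptor */
    return FD_ISSET(query, &set) != 0;
}
```

The echo server below uses the same macros to maintain its active set (afds) and to scan the ready set (rfds) after select returns.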
Example - Single Process ECHO Server
/* TCPmechod.c - main, TCPechod */
/* include header files here */

#define QLEN    5               /* max. connection queue length */
#define BUFSIZE 4096

extern int errno;

int echo(int fd);               /* echo data until end of file */
int errexit(const char *format, ...);
int passiveTCP(const char *service, int qlen);

/* main - single-process concurrent TCP server for ECHO service */
int main(int argc, char *argv[])
{
    char *service = "echo";     /* service name or port number */
    struct sockaddr_in fsin;    /* the from address of a client */
    unsigned int alen;          /* length of a client's address */
    int msock;                  /* master server socket */
    fd_set rfds;                /* read file descriptor set */
    fd_set afds;                /* active file descriptor set */
    int fd;

    /* check arguments - not detailed here */
    msock = passiveTCP(service, QLEN);
    FD_ZERO(&afds);
    FD_SET(msock, &afds);
    while (1) {
        memcpy(&rfds, &afds, sizeof(rfds));
        if (select(FD_SETSIZE, &rfds, (fd_set *)0,
                   (fd_set *)0, (struct timeval *)0) < 0)
            errexit("select: %s\n", strerror(errno));
        if (FD_ISSET(msock, &rfds)) {
            int ssock;

            alen = sizeof(fsin);
            ssock = accept(msock, (struct sockaddr *)&fsin, &alen);
            if (ssock < 0)
                errexit("accept: %s\n", strerror(errno));
            FD_SET(ssock, &afds);
        }
        for (fd = 0; fd < FD_SETSIZE; ++fd)
            if (fd != msock && FD_ISSET(fd, &rfds))
                if (echo(fd) == 0) {
                    (void) close(fd);
                    FD_CLR(fd, &afds);
                }
    }
}
/* echo - echo one buffer of data, returning byte count */
int echo(int fd)
{
    char buf[BUFSIZE];
    int cc;

    cc = read(fd, buf, sizeof(buf));
    if (cc < 0)
        errexit("echo read: %s\n", strerror(errno));
    if (cc && write(fd, buf, cc) < 0)
        errexit("echo write: %s\n", strerror(errno));
    return cc;
}
Summary
• Execution in concurrent servers often driven by
arrival of data
• When the service requires little processing, a
single-process implementation is preferable:
– Can use asynchronous I/O to manage connections to
multiple clients
• Single-process implementation performs the duties
of master and slave processes.