Transport Layer

Context of various layers up to the Network Layer

NL: present in both the LAN and the subnet
MAC sublayer: not present in subnets, only in LANs
DLL: present in both the LAN and the subnet

Context of various layers

Transport Layer: peer (a transport entity, a process in the TL, in host H1) to peer (a transport entity, a process in the TL, in host H2).
Network Layer: end (IP address of H1) to end (IP address of H2).
DLL: local, (DLL address of) router to (DLL address of) router in the subnet, or host to router on a LAN.

Context of various layers contd

DLL: node to node, not source to destination.
MAC addressing also: node to node, not source to destination.
NL: though the addressing is source to destination, the responsibilities are node to node.

DLL revisited

Services offered by the DLL may be CO (connection-oriented) or CL (connectionless).
Implementation of CO service in the DLL is done with the help of frame sequence numbers.
The DLL handles errors due to transmission only, i.e. physical errors such as damaged bits.

Services offered by the DLL may or may not be reliable

When the medium is reliable, or the system is real-time and a fast response matters more than correct data (e.g. voice), the DLL should be kept light, and reliability, if required, must be added in the higher layers.
When the medium is unreliable and it is more important that data is never lost than that the response is fast, it may be worth adding reliable services (ACKs) in the DLL.

NL revisited

CL service (the Internet community): the subnet is inherently unreliable, so reliability, error control and flow control must be done by the destination host anyway; hence there is no need to do that in the NL (and hence at every router).
CO service: the telephone companies.

Implementation of CO in the NL

Is done with virtual circuit numbers between H1 and H2.
Now suppose there are two processes (transport entities) on H1 wanting to establish connections with one or two processes on H2. The NL may establish separate connections or use a common connection between H1 and H2 for the two applications.

Contd..

Now consider the following scenario:
Suppose a transport entity is informed halfway through a long transmission that its network connection has been abruptly terminated (router crash, link down, etc.).
  The TL entity can set up a new connection to the remote TL entity, ask up to what point data was received, and resume from there.
  This cannot be done at the NL, since the NL cannot distinguish between the two TL connections when it was using the same virtual circuit for both.

The Transport Layer

A truly peer-to-peer layer.
Services provided in the TL are similar to those provided in the NL and the DLL together.

Why do we need the Transport Layer

To take care of the issues which were not taken care of by the lower layers.
Or, one could say, a final check.

Why Transport Layer?

Since the transport protocols are developed independently of the protocols in the lower layers, the TL handles all of the following:

  CO service
  Reliability, by adding ACKs. Even if ACKs were added at the DLL, we still need them here, since the DLL only resends frames damaged by transmission errors; packets may also be dropped for other reasons, such as congestion. If this is not taken care of at the NL, which is most often the case, then it must be done at the transport layer (a retransmission sketch follows this list).
  Error control
  Flow control
  Congestion control

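To make the reliability point concrete, here is a minimal stop-and-wait style sketch in Python. It is only an illustration, not the protocol discussed later in these slides: the hooks send_packet and recv_ack are hypothetical stand-ins for the network layer below, and the timeout and retry limit are assumed values.

    MAX_RETRIES = 5     # assumed retry limit
    TIMEOUT = 2.0       # assumed seconds to wait for an ACK

    def send_reliably(seq, tpdu, send_packet, recv_ack):
        # send_packet(seq, tpdu): hand the TPDU to the network layer (it may get lost).
        # recv_ack(timeout): return the sequence number acknowledged, or None on timeout.
        for _ in range(MAX_RETRIES):
            send_packet(seq, tpdu)       # the DLL may deliver it, or congestion may drop it
            if recv_ack(TIMEOUT) == seq:
                return True              # the receiving TL confirmed this TPDU
            # no ACK in time: the packet or its ACK was lost somewhere, so retransmit
        return False                     # give up and report the failure upwards
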
Elements of Transport Protocols

• Addressing
• Connection Establishment
• Connection Release
• Flow Control and Buffering
• Multiplexing
• Crash Recovery

Differences between the NL and the TL

Real networks can lose packets, and the NL may not make an attempt to recover or retransmit the lost packets, whereas TL services are meant to be reliable.
One of the main objectives of the TL: provide a reliable service on top of an unreliable network.
Users: the users of NL services are TL entities and not end users, whereas the users of TL services are the programmers writing applications; hence the TL must be convenient and easy to use.

Similarities between the NL and the TL

Both provide CO/CL services to the higher layer.
Both perform congestion control.

Similarities between the DLL and the TL

TL services more closely resemble those of the DLL:

Error control
Flow control
Sequencing, etc.

Differences between the DLL and the TL

These are due to the dissimilarities between the environments they operate in: in the DLL, two routers communicate directly through a physical link, whereas in the TL the two transport entities communicate through the entire subnet. Thus life is greatly simplified in the DLL as compared to the TL.

Differences contd..

In the DLL, each outgoing line uniquely identifies the destination router, so no explicit addressing is required, whereas in the TL explicit addressing of the destination is required.
Initial connection establishment is more complicated in the TL.
In the DLL, a packet is either delivered or lost. At the TL, a packet may wander around in the subnet, be stored in some router for some time, and emerge later.

Differences contd..

Buffering and flow control: in the DLL, buffer space is allocated to each line, and the number of lines a router is connected to is small and constant. Dedicating buffers to each VC at the TL is not feasible, as the number of VCs is too large.

TL addresses: TSAP (Transport Service Access Point)

Server 1 (say, a time-of-day server) on host 2 attaches itself to TSAP 1522 to continuously listen for an incoming request. This attachment is done through a system call to the OS.
An application process on host 1 attaches itself to an available TSAP, say 1208, and issues a CONNECT request specifying source TSAP 1208 and destination TSAP 1522.
When server 1 receives this request, the connection is eventually established (we'll discuss how later).
The AP asks for the time of day.
The server responds with the TOD.
The connection is released.

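A minimal sketch of this exchange using Python's socket module. The TSAP numbers 1522 and 1208 come from the slide; the server host name and the message format are illustrative assumptions.

    import socket
    import time

    def tod_server():                        # server 1 on host 2
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", 1522))                 # attach to TSAP 1522 (a system call into the OS)
        srv.listen(1)                        # continuously listen for an incoming request
        conn, _ = srv.accept()               # connection established
        conn.recv(128)                       # the AP asks for the time of day
        conn.sendall(time.ctime().encode())  # the server responds with the TOD
        conn.close()                         # connection released

    def tod_client(server_host="host2.example"):   # application process on host 1
        cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        cli.bind(("", 1208))                 # attach to an available TSAP, say 1208
        cli.connect((server_host, 1522))     # CONNECT: source TSAP 1208, destination TSAP 1522
        cli.sendall(b"TIME?")                # ask for the time of day
        print(cli.recv(128).decode())
        cli.close()                          # connection released
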
Addressing

TSAPs, NSAPs and transport connections.

TSAP

There may be other servers on host 2, attached to different TSAPs, waiting for incoming requests.

Issues in Addressing

How does the client know the TSAP (Transport Service Access Point), i.e. the port, used by the server?

  Well-known servers use well-known ports: defined, for example, in the /etc/services file on UNIX (a small lookup sketch follows the port table below).
  Initial Connection Protocol: a process server attached to a well-known TSAP.
  Name server (attached to a well-known TSAP): provides the TSAP of the required service/server.

Some well-known assigned ports

Port   Protocol   Use
21     FTP        File transfer
23     Telnet     Remote login
25     SMTP       E-mail
69     TFTP       Trivial File Transfer Protocol
79     Finger     Lookup info about a user
80     HTTP       World Wide Web
110    POP-3      Remote e-mail access
119    NNTP       USENET news

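On a UNIX system these assignments can be looked up programmatically from the services database (/etc/services); a small sketch using the standard Python library:

    import socket

    # Map a well-known service name to its port number.
    print(socket.getservbyname("http", "tcp"))   # -> 80
    print(socket.getservbyname("smtp", "tcp"))   # -> 25

    # The reverse lookup: which service owns a given port?
    print(socket.getservbyport(21, "tcp"))       # -> 'ftp'
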
Issues contd..

Stable and standardized TSAPs are used for some frequently used servers.
What about servers which are rarely used?
Letting a server listen all day on a fixed TSAP is wasteful.
Solution: a proxy server listening on multiple TSAPs at the same time.

Proxy Server

A client sends a request specifying the TSAP of the required service. If no one is waiting at that TSAP, the request is handed over to the proxy server. The proxy server spawns the requested server, which inherits the properties of the existing connection (a sketch follows below).
Problem: the TSAP of the required service still has to be known, and this works only for services which can be created as and when required, like a TOD server.
Solution: Name Server

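A rough sketch of the proxy server idea in Python. The TSAP table and the handler function are hypothetical, and a real process server would spawn the requested server as a separate process rather than call a function in line.

    import socket
    import selectors
    import time

    def tod_handler(conn):
        conn.sendall(time.ctime().encode())      # the "spawned" TOD server

    SERVICES = {1522: tod_handler}               # hypothetical table: TSAP -> service

    def proxy_server():
        sel = selectors.DefaultSelector()
        for tsap, handler in SERVICES.items():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind(("", tsap))                   # listen on several TSAPs at the same time
            s.listen(1)
            sel.register(s, selectors.EVENT_READ, handler)
        while True:
            for key, _ in sel.select():          # a request arrived on one of the TSAPs
                conn, _ = key.fileobj.accept()   # the proxy accepts the connection...
                key.data(conn)                   # ...and hands it to the requested server
                conn.close()
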
Initial Connection Protocol

How a user process in host 1 establishes a connection with a time-of-day server in host 2.

Name Server

The client connects to a name server on host 2; the name server listens on a well-known TSAP.
It requests the name server to provide the TSAP of the required service.
The NS responds.
The connection with the NS is released and a new connection with the required server is set up.
Needless to say, when a new service comes up it must register itself with the NS, together with its TSAP.

Connection Establishment

Establish the connection
Transfer data
Release the connection

Issues in Connection Establishment

Packets may be lost, duplicated, or may roam around the subnet and emerge later.
Consider the following scenario:
  Connection request
  Transfer data (say banking: transfer some amount to some account)
  Connection release
Suppose all the packets are duplicated, roam around, and emerge again after the connection is released: the connection will be re-established, the data transferred again, and the connection released again.

How to handle

Use a connection identifier (a sequence number) and keep a list of obsolete connections after they are released. But it is not good to keep this history for an infinite amount of time; also, what happens when a host crashes? All the history will be lost.
Kill off ageing packets: bound the packet lifetime, using
  a hop count, or
  a timestamp (requires global synchronization of clocks).
Unfortunately, neither method is foolproof.

Packet Lifetime

To make sure that not only a data packet but also all of its acknowledgements are gone, a small multiple of the true maximum lifetime is used instead.

Tomlinson's method

Each host is equipped with a time-of-day clock which is assumed to keep running even when the host crashes. There is no problem with this assumption: it can be a battery-operated clock which gets charged while the host is powered.
The clocks at different hosts need not be synchronized.

Tomlinson's Method contd..

The clock is actually a binary counter which is incremented at uniform intervals.
The number of bits in the counter must be greater than or equal to the number of bits in the sequence numbers.
When a connection is established, the low-order k bits of the clock are used as the initial sequence number and put into the TPDU (connection request), as sketched below.
Once the initial sequence number is fixed, any sliding window protocol can be used to control the flow of data.
The sequence numbers are then incremented and put into subsequent TPDUs (for the same connection) independently of the clock.

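A small sketch of the clock-based ISN rule, assuming k = 16 sequence-number bits and a one-second clock tick (both assumptions; the slide only fixes the general rule):

    import time

    K = 16                        # bits in a sequence number (assumed)
    SEQ_SPACE = 1 << K

    def clock_ticks():
        # The "clock" is a binary counter incremented at uniform intervals,
        # here derived from the host's time-of-day clock, one tick per second.
        return int(time.time())

    def initial_sequence_number():
        # The low-order k bits of the clock become the ISN in the connection request.
        return clock_ticks() % SEQ_SPACE

    def next_sequence_number(seq):
        # After the ISN is fixed, sequence numbers advance independently of the clock.
        return (seq + 1) % SEQ_SPACE
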
TM contd..

Now we want to ensure that two TPDUs numbered identically are never outstanding at the same time.
If the sequence space is large enough that by the time the sequence numbers wrap around, old TPDUs with the same sequence number are long gone, the problem arises only due to crashes.

Recovery after a crash

Consider the following scenario:
At t = t0, a connection is established with connection identifier 5 and ISN = t0; more TPDUs are generated for this connection, and
at t = 30, a sequence number, say 80, is generated and put into a TPDU for the same connection; call this TPDU X.
Then the host crashes and comes up again immediately.
At t = 60, it starts establishing connections 0-4.
At t = 70, it establishes a new connection 5, with ISN = 70, and within the next 15 seconds it sends TPDUs numbered 70-80.
Thus a new TPDU with CI 5 and SN 80 has been created; call it Y.
If X arrives at the receiver before Y, X may be accepted as the original and Y rejected as a duplicate: a problem.

Solution

We should not have assigned sequence number 80 to X; we should have waited an appropriate amount of time before assigning it.
That is, if we are about to assign a sequence number, say 80, to a TPDU x, and there is a chance that in the near future (within the lifetime T of x) 80 may be assigned as an ISN, then we should wait. This wait period is called the forbidden region of the sequence number.

An example

For example, suppose at t = 30 the host wants to send a TPDU x with CI = 5 and SN = 90.
With T = 60, the TPDU with the desired SN can be generated; but
if it were to be generated at t = 31, it could not be: it would have to wait until t = 91.
That is, a sequence number s should not be used during the time period s - T to s.

Another Example

Suppose at t = 30 the host wants to send a TPDU x with CI = 5 and SN = 31.
It must wait until t = 32.

Connection Establishment (2)

(a) TPDUs may not enter the forbidden region. (b) The resynchronization problem.

Note that if at t = 20 the host wants to send a TPDU with SN = 80, it can send it.

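The rule behind these examples can be written as a single check (sequence-number wraparound is ignored to keep the sketch short):

    def in_forbidden_region(t, s, T):
        # Assigning data sequence number s at clock time t is forbidden if t lies in
        # (s - T, s]: within that window a crash and restart could pick s as a fresh
        # ISN while an old TPDU numbered s is still alive.
        return s - T < t <= s

    T = 60
    print(in_forbidden_region(30, 90, T))   # False: SN 90 may be used at t = 30
    print(in_forbidden_region(31, 90, T))   # True:  must wait until t = 91
    print(in_forbidden_region(30, 31, T))   # True:  must wait until t = 32
    print(in_forbidden_region(20, 80, T))   # False: SN 80 may be used at t = 20
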
What about the control packets?

Once the two parties agree on the ISN, this method works fine.
But what if a control packet, like a CONNECTION REQUEST, gets delayed?

Possible Scenario

A delayed duplicate of a CR from host 1, proposing an ISN, turns up.
Host 2 replies with a CA (CONNECTION ACCEPTED) accepting the request.
The connection will be set up incorrectly.

Three-way handshake for Connection Establishment

Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION REQUEST.
(a) Normal operation.
(b) Old CONNECTION REQUEST appearing out of nowhere.
(c) Duplicate CONNECTION REQUEST and duplicate ACK.

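A minimal sketch of scenario (a), the normal three-way handshake. The message tuples and the send/recv hooks are hypothetical; only the sequence-number logic follows the figure.

    import random

    def host1_connect(send, recv):
        x = random.randrange(1 << 16)        # host 1 picks its initial sequence number x
        send(("CR", x, None))                # CR(seq = x)
        kind, y, ack = recv()                # expect ACK(seq = y, ack = x)
        if kind == "ACK" and ack == x:
            send(("DATA", x, y))             # DATA(seq = x, ack = y)
            return x, y                      # connection established
        raise ConnectionError("handshake failed")

    def host2_accept(send, recv):
        kind, x, _ = recv()                  # a CR(seq = x) arrives (possibly a delayed duplicate)
        if kind != "CR":
            raise ConnectionError("expected CR")
        y = random.randrange(1 << 16)        # host 2 picks its own initial sequence number y
        send(("ACK", y, x))                  # ACK(seq = y, ack = x)
        kind, seq, ack = recv()              # only a live host 1 can acknowledge y correctly,
        if kind == "DATA" and ack == y:      # so an old duplicate CR is weeded out here
            return y, x
        raise ConnectionError("stale or invalid handshake")
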
Connection Release

Asymmetric: as in the telephone system; may result in loss of data.
Symmetric: treats the connection as two unidirectional connections, each of which must be closed separately.

Connection Release

Abrupt disconnection with loss of data.

Connection Release (2)

The two-army problem.

Connection Release (3)

Four protocol scenarios for releasing a connection. (a) Normal case of a three-way handshake. (b) Final ACK lost.

Connection Release (4)

(c) Response lost. (d) Response lost and subsequent DRs lost.

Flow control and buffering

Difference between the DLL and the TL: in the TL there are too many connections, so dedicated buffers for each connection are not a very good idea.

Buffers at the sender vs. at the receiver

If the network service is unreliable, the sender must keep the unacked TPDUs in its buffer.
If the network service is reliable (packets are acknowledged), several tradeoffs are possible.

An Important (side) Note

In a reliable network, if the receiver guarantees enough buffer space to accept every incoming TPDU, the sender need not buffer the unacked TPDUs (remember, the network is reliable).
Otherwise, the sender must. It cannot rely on the acknowledged service of the network, for that only means the packet arrived safely (and was accepted) at the NL of the receiver; whether the TL has enough space to buffer the TPDU is not guaranteed.

Tradeoffs in a reliable network

(a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular buffer per connection.

Tradeoffs in a reliable network

Fixed-size buffers: what should the size be?
Variable-size buffers: better memory utilization, but more complicated buffer management.
One large circular buffer per connection: good if all connections are heavily loaded, but poor otherwise.
In practice, a combination might be used, depending upon the application.

Tradeoffs between buffering at the sender and at the receiver

For a low-bandwidth bursty application, like an interactive terminal:
  dedicated buffers are not a good idea; buffers must be acquired dynamically.
  Hence buffer space at the receiver is not guaranteed, and the sender must keep the unacked TPDUs.
For high-bandwidth traffic, like file transfer:
  the receiver should dedicate buffers to allow a smooth and fast flow of data.
  Hence the sender need not keep them.

Negotiating buffer space in the CR

As connections are opened and closed, and as the traffic pattern changes, the sender and receiver must be able to adjust their buffer allocations dynamically.
The transport protocol should therefore allow the sender to request buffer space at the receiver in the CR,
and the receiver to inform the sender how much buffer space it has set aside for the sender, say in the CA (a credit-based sketch follows below).

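A sketch of the sender-side bookkeeping this negotiation implies; the class and field names are hypothetical and only illustrate the credit idea.

    class CreditSender:
        """Sender-side view of dynamically negotiated buffer space."""

        def __init__(self, requested_buffers):
            self.requested = requested_buffers   # asked for in the CR
            self.credit = 0                      # granted by the receiver (in the CA)
            self.outstanding = 0                 # unacked TPDUs currently in flight

        def on_connection_accepted(self, granted_buffers):
            self.credit = granted_buffers        # receiver says how much space it reserved

        def can_send(self):
            return self.outstanding < self.credit

        def on_send(self):
            self.outstanding += 1

        def on_ack(self, freed, new_credit=None):
            self.outstanding -= freed
            if new_credit is not None:           # the receiver may adjust the grant dynamically
                self.credit = new_credit
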
CO service in the TL: Transport Service Primitives

The primitives for a simple transport service.

The server is continuously LISTENing.
The client sends a CONNECT request.
The server is unblocked and accepts the request.
The two sides exchange data using SEND and RECEIVE.
DISCONNECT can be symmetric or asymmetric (a socket sketch follows below).

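These primitives map almost directly onto the Berkeley socket calls discussed below; a minimal sketch, with the port number and payload purely illustrative.

    import socket

    def server():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("", 6000))
        s.listen(1)                    # LISTEN: block until a client calls
        conn, _ = s.accept()           # unblocked by the client's CONNECT
        data = conn.recv(1024)         # RECEIVE
        conn.sendall(data.upper())     # SEND
        conn.close()                   # DISCONNECT
        s.close()

    def client():
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect(("localhost", 6000)) # CONNECT
        c.sendall(b"hello")            # SEND
        print(c.recv(1024))            # RECEIVE
        c.close()                      # DISCONNECT
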
DISCONNECT

Symmetric: each side is closed separately. When one side issues a DISCONNECT, it means it has no more data to send, but it can still accept data. The other side can continue to send data and issues a separate DISCONNECT when it is done.
Asymmetric: either side can issue a DISCONNECT, and the connection is released when it arrives at the other end.

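With TCP sockets, the symmetric style corresponds to a half-close: shutdown(SHUT_WR) says "no more data from me" while still allowing data to be received. A small sketch:

    import socket

    def symmetric_release(conn):
        conn.shutdown(socket.SHUT_WR)   # our DISCONNECT: nothing more to send
        while True:                     # but we can still accept data
            chunk = conn.recv(4096)
            if not chunk:               # empty read: the peer issued its own DISCONNECT
                break
        conn.close()
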
TPDU: Transport Protocol Data Unit

The nesting of TPDUs, packets, and frames.

Transport Service Primitives (3)

A state diagram for a simple connection management scheme. Transitions labeled in italics are caused by packet arrivals. The solid lines show the client's state sequence. The dashed lines show the server's state sequence.

Berkeley Sockets

The socket primitives for TCP.

Socket Programming Example: Internet File Server

Client code using sockets.

Socket Programming Example: Internet File Server (2)

Client code using sockets.

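The client code referred to here is the textbook's C listing and is not reproduced in this transcript. The following is only a rough Python equivalent of such a file client; the port number and the "send the file name, then read until EOF" protocol are assumptions.

    import socket
    import sys

    SERVER_PORT = 12345                     # assumed port of the file server

    def fetch_file(server, filename):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((server, SERVER_PORT))
        s.sendall(filename.encode())        # tell the server which file we want
        while True:
            block = s.recv(4096)            # read the file contents block by block
            if not block:                   # empty read: the server closed the connection
                break
            sys.stdout.buffer.write(block)
        s.close()

    if __name__ == "__main__":
        fetch_file(sys.argv[1], sys.argv[2])   # usage: client.py <server> <file>
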
Issues in Connection Release

Asymmetric release: one party disconnects, as in the telephone system, and the connection is released; this may lead to loss of data.
Symmetric release: fine if each side has a fixed amount of data to send and knows when it is done; otherwise deciding when to release is a problem (the two-army problem).

Connection Release

Abrupt disconnection with loss of data.

Connection Release

Four protocol scenarios for releasing a connection. (a) Normal case of a three-way handshake. (b) Final ACK lost.

Connection Release

(c) Response lost. (d) Response lost and subsequent DRs lost.

Connection Release

What if the initial DR and all subsequent retries are lost? The sender releases the connection but the receiver stays connected, resulting in a half-open connection.
Use a timer: if no TPDUs arrive for a certain amount of time, disconnect (a sketch follows below).

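A small sketch of the timer idea on a TCP socket; the 30-second idle limit is an assumption.

    import socket

    IDLE_LIMIT = 30.0    # assumed seconds without any TPDU before giving up

    def receive_with_idle_timer(conn):
        conn.settimeout(IDLE_LIMIT)
        try:
            while True:
                data = conn.recv(4096)
                if not data:             # the peer released the connection normally
                    break
                # ... process data ...
        except socket.timeout:
            pass                         # nothing arrived for IDLE_LIMIT: assume half-open
        finally:
            conn.close()                 # disconnect either way
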
Flow Control and Buffering

In the DLL, a sliding window (with buffering) is kept at both the sender's end and the receiver's end of each connection.
However, in a router the number of lines is small, whereas at the TL the number of connections is large.
Thus keeping buffers for each connection may not be very memory efficient.

Flow Control and Buffering: if the subnet provides datagram service

Since the subnet provides an unreliable datagram service, the TL must acknowledge, and hence the sender has to keep the unacked TPDUs, i.e. it has to buffer them.
Buffering at the receiver therefore need not be very elaborate: say, a common pool may be maintained for all the connections.
When a TPDU arrives, if there is room in the pool it is accepted; else it is discarded and retransmitted by the sender.

Flow Control and Buffering: the subnet is reliable

Other optimizations are possible: the receiver can agree to do the buffering.
Assuming the receiver always has sufficient space, the sender need not buffer, and TPDUs are never discarded (and hence never retransmitted) due to insufficient buffer space.

How to make sure that the receiver always has enough space?

Case (a): suitable when most TPDUs are about the same size.
Case (b): variable-sized TPDUs chained together: space efficient, but buffer management is more complicated.
Case (c): one large circular buffer per connection: works well if all connections are heavily loaded, but poorly otherwise.

Flow Control and Buffering

(a) Chained fixed-size buffers. (b) Chained variable-sized buffers. (c) One large circular buffer per connection.

Source buffering vs. destination buffering

Low-bandwidth bursty traffic (e.g. interactive terminal traffic): buffer at the sender's end.
High-bandwidth smooth traffic (e.g. file transfer): buffer at the receiver's end, to allow the data to flow at maximum speed.

Dynamic allocation of buffers

As the traffic pattern changes, buffer allocation strategies must change.
The transport protocol should therefore allow the sender to request buffer space at the receiver's end.

Multiplexing

Upward multiplexing: reduces cost by multiplexing several TL connections onto a single NL virtual circuit. The cost of an NL VC is high because resources are reserved for it.
Downward multiplexing: improves throughput by spreading a single TL connection over multiple NL virtual circuits.
There is a tradeoff between cost and throughput.

Crash Recovery