The Transport Layer
The Transport Layer
How do we ensure that packets get
delivered to the process that needs
them?
Summarizing the past
• The physical layer describes the physical medium that
connects two devices and how to encode data to send it
across that medium.
• The data link layer describes how we ensure the
integrity of the data being transmitted across a
particular link. (node-to-node delivery)
• The network layer describes how we route data
between two devices where that data needs to traverse
multiple physical links because the devices are not
directly connected. (host-to-host)
The Transport Layer
• It has almost always been the case that
devices connected to a network have had
multiple processes trying to simultaneously
use that network connection.
• Multi-user architectures (e.g., UNIX)
• Multi-tasking architectures (e.g., Windows, MacOS)
• The transport layer defines how a given
packet gets delivered to the appropriate
process. (process-to-process delivery)
Process-to-process
communication
• A process is any instance of a program running on a
given device at a given time.
• The same application can generate many processes all
communicating with different network hosts.
• For the purposes of the transport layer, different
threads are equivalent to different processes.
• To allow information to be delivered to the appropriate
process, we must have some way of identifying that
process.
Ports
• The addressing system used to distinguish
different processes on the same device and/or
attached to the same network interface is the port
number.
– An ephemeral port is one assigned by the operating
system to a process when it initiates network
communication.
– The well-known ports are ones that are reserved for
particular application-layer protocols to listen for
requests on.
• A socket address is the combination of an IP
address and a port.
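The idea of a socket address, and the difference between an ephemeral port and a fixed one, can be seen directly in the standard sockets API. The Python sketch below (the port number 54321 is an arbitrary choice for illustration) binds one socket to port 0, letting the operating system choose an ephemeral port, and another to a fixed port of the kind a server would listen on.

```python
import socket

# A socket address is just (IP address, port). Binding to port 0 asks the
# operating system to assign an ephemeral port, exactly as it does when a
# client process starts communicating.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))            # port 0 = "pick an ephemeral port for me"
print("ephemeral socket address:", client.getsockname())

# A server binds to a fixed port so clients know where to find it. Port 54321
# is only an illustrative choice; the true well-known ports are low-numbered
# and usually need elevated privileges to bind.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 54321))
print("fixed socket address:    ", server.getsockname())

client.close()
server.close()
```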
Client/Server communication
• Communication at the transport layer
follows a client/server model.
– The client device initiates communication by
sending a packet to the server requesting data.
– This packet contains the socket address of the
sender, where the port is assigned by the
operating system, and the socket address of the
receiver, where the port is the well-known port
for the process the client wants to connect to.
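A minimal sketch of this client/server pattern using Python's socket module follows; the port number 60000 is an arbitrary stand-in for a well-known port, and both ends run on the loopback interface just so the example is self-contained.

```python
import socket
import threading

HOST, SERVER_PORT = "127.0.0.1", 60000   # illustrative port standing in for a well-known port

# Passive open: the server binds to a fixed port and listens.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, SERVER_PORT))
srv.listen(1)

def serve_one():
    conn, client_addr = srv.accept()       # client_addr is the client's socket address
    with conn:
        print("server saw client at:", client_addr)
        conn.sendall(conn.recv(1024))      # echo the request back

t = threading.Thread(target=serve_one)
t.start()

# Active open: the OS gives the client an ephemeral port, and the client
# addresses its request to the server's (IP, port) pair.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, SERVER_PORT))
    print("client socket address:", cli.getsockname())
    cli.sendall(b"request")
    print("reply:", cli.recv(1024))

t.join()
srv.close()
```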
UDP
• Connectionless and unreliable
• Used primarily for short, simple
transmissions:
– BOOTP
– DNS
– NTP
• No flow or error control
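Because UDP is connectionless, a sender can transmit a datagram knowing nothing more than the receiver's socket address, and nothing guarantees delivery. A minimal local sketch (Python, loopback addresses only):

```python
import socket

# Two UDP sockets on the loopback interface; no connection is ever established.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))                 # ephemeral port chosen by the OS
where = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"single datagram", where)        # fire and forget: no ACK, no retransmission

data, addr = receiver.recvfrom(1024)
print("got", data, "from", addr)

sender.close()
receiver.close()
```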
TCP
• Connection-oriented and reliable.
• Used for transfers that require numerous
packets to be integrated properly and
seamlessly
– HTTP
– Telnet
– SMTP
– FTP
TCP Segments
• TCP divides a transmission up into segments, which it
encapsulates in a header and, most importantly,
numbers in sequence. This numbering is by byte.
• These segments are encapsulated in IP packets, and IP is
unreliable. It is up to TCP to reassemble the segments in
the proper order and request retransmission of lost
segments.
• The sequence number field in the TCP header contains
the number of the first byte of the segment being sent.
• The acknowledgement number field contains the
number of the next expected byte.
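The byte-based numbering can be illustrated with a small worked example. The toy loop below (not real TCP) assumes an initial sequence number of 10,000 and a 400-byte maximum segment size, and prints, for each segment, the sequence number of its first byte and the acknowledgement number the receiver would return.

```python
# A toy illustration of byte-based sequence numbering: the sequence number of
# each segment is the number of its first byte, and the receiver's
# acknowledgement is the number of the next byte it expects.

data = b"x" * 1000          # 1000 bytes of application data
isn = 10_000                # assumed initial sequence number
mss = 400                   # assumed maximum segment size, in bytes

offset = 0
while offset < len(data):
    segment = data[offset:offset + mss]
    seq = isn + offset                       # first byte carried by this segment
    ack_expected = seq + len(segment)        # next byte the receiver will ask for
    print(f"segment: seq={seq}, {len(segment)} bytes -> receiver ACKs {ack_expected}")
    offset += len(segment)
```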
TCP Connections
• Connections are established using a three-way
handshake.
• The server starts listening at a port, usually a well-known
port (passive open).
• The client sends a SYN message to the well-known port
on the server asking for a connection to be opened (active
open) and for the sequence numbers to be synchronized.
• The server responds with a SYN + ACK segment indicating
what port the client should use for future communications
and the sequence number for the client to synchronize
with.
• The client responds to the server with an ACK message.
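A schematic trace of the exchange, under the usual convention that each side picks a random initial sequence number (ISN) and that a SYN consumes one sequence number, might look like the sketch below (just the message pattern, not an implementation):

```python
import random

# Each side chooses its own initial sequence number; the other side
# acknowledges it with ISN + 1 because the SYN consumes one sequence number.
client_isn = random.randrange(2**32)
server_isn = random.randrange(2**32)

print(f"1. client -> server  SYN        seq={client_isn}")
print(f"2. server -> client  SYN + ACK  seq={server_isn}, ack={client_isn + 1}")
print(f"3. client -> server  ACK        seq={client_isn + 1}, ack={server_isn + 1}")
```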
Disconnecting
• TCP connections must also be terminated.
– Three-way handshake:
• Client sends a FIN segment to the server.
• Server sends a FIN + ACK segment back to the client.
• Client sends an ACK to the server.
– Half close:
• If one end (usually the client) is done sending before the
other, it can close its sending side of the connection while
still receiving data.
• Client sends FIN; server returns an ACK.
• Server then sends data
• Finally, server sends FIN and client returns ACK
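The half close maps directly onto the sockets API: shutdown(SHUT_WR) closes only the sending direction, so the peer sees end-of-file while the closing side can still receive. A self-contained local sketch in Python (an ephemeral port is used so nothing here refers to a real service):

```python
import socket
import threading

HOST = "127.0.0.1"

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))                       # let the OS pick a port for this demo
srv.listen(1)
port = srv.getsockname()[1]

def serve_one():
    conn, _ = srv.accept()
    with conn:
        chunks = []
        while True:
            chunk = conn.recv(1024)
            if not chunk:                 # client's FIN arrived: its sending side is closed
                break
            chunks.append(chunk)
        # The server can still send after the client's half close.
        conn.sendall(b"got %d bytes" % sum(map(len, chunks)))

t = threading.Thread(target=serve_one)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, port))
    cli.sendall(b"last of the request data")
    cli.shutdown(socket.SHUT_WR)          # half close: send FIN, keep receiving
    print("reply after half close:", cli.recv(1024))

t.join()
srv.close()
```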
Flow control in TCP
• Uses a sliding window, similar to the data link
layer, with some important differences:
– Window is byte-oriented rather than frame- or
segment-oriented.
– Window can change size depending on various factors,
such as network congestion and how busy the
receiver is.
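A toy calculation of the byte-oriented window (ignoring congestion control, which comes later) shows both points: the limit is expressed in bytes, and a change in the advertised window directly changes how much the sender may transmit.

```python
# A toy calculation, not real TCP: how a byte-oriented sliding window limits
# the sender. The advertised receive window (rwnd) can shrink or grow as the
# receiver's buffer fills and drains.

def usable_window(last_byte_acked, last_byte_sent, rwnd):
    bytes_in_flight = last_byte_sent - last_byte_acked
    return max(rwnd - bytes_in_flight, 0)

# 4,000 bytes sent but not yet acknowledged, receiver advertises 10,000 bytes:
print(usable_window(last_byte_acked=20_000, last_byte_sent=24_000, rwnd=10_000))  # 6000

# The receiver's buffer fills, so it advertises a smaller window:
print(usable_window(last_byte_acked=20_000, last_byte_sent=24_000, rwnd=3_000))   # 0
```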
QuickTime™ and a
decompressor
are needed to see this picture.
Error control in TCP
• A checksum is part of every TCP header to
help the receiver identify damaged TCP
segments.
• Acknowledgements are sent for all properly received
segments, including control segments (pure ACK
segments, however, are not themselves acknowledged).
• Unacknowledged segments are retransmitted, either after
a timeout or after three identical ACKs are received in a
row.
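The two retransmission triggers can be sketched as a simple decision function (a toy model, not real TCP; the one-second timeout value is just an assumption for illustration):

```python
# A toy decision function showing the two retransmission triggers described
# above: a retransmission timeout, and repeated duplicate ACKs. A duplicate
# ACK repeats the previous ACK number, so three duplicates means the same ACK
# has arrived four times in total.

DUP_ACK_LIMIT = 3
RTO = 1.0                        # assumed retransmission timeout, in seconds

def should_retransmit(acks_seen, time_sent, now):
    if now - time_sent > RTO:
        return "timeout"
    # Count the run of duplicate ACK numbers at the end of the list.
    dups = 0
    for prev, cur in zip(acks_seen, acks_seen[1:]):
        dups = dups + 1 if cur == prev else 0
    if dups >= DUP_ACK_LIMIT:
        return "triple duplicate ACK"
    return None

print(should_retransmit([1000, 1000, 1000, 1000], time_sent=0.0, now=0.2))  # triple duplicate ACK
print(should_retransmit([1000, 1400],             time_sent=0.0, now=1.5))  # timeout
print(should_retransmit([1000, 1400],             time_sent=0.0, now=0.2))  # None
```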
SCTP
• Stream Control Transmission Protocol
– Message oriented (like UDP)
– Connection oriented and fully reliable (like TCP)
– Used mainly for streaming applications (VoIP, video,
radio, etc.)
– Multi-streamed, as opposed to TCP, which is
single-streamed.
– Supports multihoming
• What we mean by “message oriented”
– In TCP, the unit that we count is a byte; sequence
numbers are byte-based.
– In SCTP, the unit we count is a data chunk; a message
from the process may be fragmented into several data
chunks.
– The transmission sequence number (TSN) is how we
label these chunks.
• Since SCTP is multistreamed, we have to have
addresses for each stream - the stream identifier
• Data chunks on streams need to be sequenced with
a stream sequence number (SSN).
• TCP makes a distinction between data (bytes in
the data segment) and control information (flags
in the header).
• SCTP packs control information into control
chunks, which can be bundled into an SCTP
packet with data chunks.
• The data chunks in a given packet can all be
destined for different streams or different
multihomed IP addresses.
• Acknowledgements are chunk-oriented based on
the TSN.
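One way to see how the TSN, the stream identifier (SI), and the SSN relate is a toy model of the sending side. This is only an illustration of the numbering, not SCTP's wire format, and the 4-byte "MTU" is deliberately tiny so that fragmentation happens:

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class DataChunk:
    tsn: int          # transmission sequence number: one per chunk, across all streams
    si: int           # stream identifier: which stream this chunk belongs to
    ssn: int          # stream sequence number: position of the message within its stream
    payload: bytes

tsn_counter = count(1)
ssn_counters = {}                 # one SSN counter per stream

def send_message(stream_id, message, mtu=4):
    ssn = ssn_counters.get(stream_id, 0)
    ssn_counters[stream_id] = ssn + 1
    # A message larger than the MTU is fragmented into several data chunks,
    # each with its own TSN but sharing the message's SI and SSN.
    return [DataChunk(next(tsn_counter), stream_id, ssn, message[i:i + mtu])
            for i in range(0, len(message), mtu)]

for chunk in send_message(0, b"hello world") + send_message(1, b"hi"):
    print(chunk)
```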
SCTP Associations
• Because of the multihomed nature of SCTP,
connections are referred to as associations.
• Associations are established with a four-way handshake
• The client sends an INIT chunk to the server
• The server responds with an INIT ACK chunk and a
cookie.
• The client responds with a COOKIE ECHO chunk
containing the server’s cookie and possibly data.
• The server responds with a COOKIE ACK chunk
and possibly data.
Cookies
• What are cookies?
• TCP is vulnerable to SYN flooding attacks (the
root of many Denial of Service attacks on web
sites).
• When a SYN segment is received, TCP allocates the resources
necessary to create and maintain the connection. Excessive
allocation of resources causes the server to fail.
• Cookies eliminate this problem by allowing the
server not to allocate resources until the intact
cookie has been returned in the COOKIE ECHO
chunk.
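A toy sketch of the idea behind the cookie (not SCTP's actual cookie format): the server derives the cookie from the client's address and a timestamp with a keyed hash, so it can later verify a returned cookie without having stored anything in the meantime. The address strings and the 60-second lifetime are assumptions for illustration.

```python
import hmac, hashlib, os, time

SECRET = os.urandom(32)          # known only to the server

def make_cookie(client_addr):
    stamp = str(int(time.time())).encode()
    mac = hmac.new(SECRET, client_addr + b"|" + stamp, hashlib.sha256).digest()
    return stamp + b"|" + mac    # sent to the client in the INIT ACK; nothing is stored

def verify_cookie(client_addr, cookie, max_age=60):
    stamp, mac = cookie.split(b"|", 1)
    expected = hmac.new(SECRET, client_addr + b"|" + stamp, hashlib.sha256).digest()
    fresh = time.time() - int(stamp) < max_age
    return fresh and hmac.compare_digest(mac, expected)

cookie = make_cookie(b"203.0.113.7:54321")
# Only when the intact cookie comes back (COOKIE ECHO) does the server
# allocate resources for the association.
print(verify_cookie(b"203.0.113.7:54321", cookie))   # True
print(verify_cookie(b"198.51.100.9:40000", cookie))  # False: forged or replayed from elsewhere
```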
Data transfer in SCTP
Flow Control in SCTP
Error control in SCTP
Congestion Control
The best network design takes into
account the network traffic when
making decisions about how and
where to send data
Congestion at multiple levels
• Data link: leads to a high rate of collisions
or lost frames from overrun buffers
• Network: leads to many lost packets from
overrun buffers, or slow delivery from time-shared routing
• Transport: Also leads to overrun buffers and
slow delivery.
Defining congestion
• All networks have a capacity of how much traffic
they can send in a given time frame.
• Congestion is what happens when the load on a
network (the amount of data it needs to handle in a
given time frame) exceeds the capacity.
• For direct-connect or virtual-circuit networks,
congestion is less of an issue because the link
between two devices is dedicated; if no link is
available, the connection simply is not set up.
• However, for packet-switched networks without
dedicated connections, congestion can cause
significant data loss.
Open-loop congestion control
• Open-loop congestion methods are designed to prevent
congestion before it happens by addressing the factors
that contribute to it.
– Retransmission policy: Retransmission timers and policies
can be adjusted to prevent congestion
– Window policy: The type of sliding window the sender uses
also affects congestion. Selective Repeat is better than
Go-Back-N, for example.
– Acknowledgement policy: Acknowledgements themselves add
network traffic, so acknowledging less often reduces the
load.
– Discarding policy: Routers can have the option of discarding
certain types of packets if it will not harm the overall
integrity of the transmission
Closed-loop congestion control
• Closed-loop congestion control schemes try to clear
congestion once it has happened by indicating that
senders need to slow down their transmission rates.
– Backpressure: A congested node stops receiving data from its
nearest neighbors, causing those neighbors to become
congested, etc.
– Choke packet: A congested node sends a special packet to a
source telling it, essentially, to shut up (source quench
message in ICMP).
– Implicit signals: The source guesses about congestion
downstream based on clues like lack of acknowledgements,
delay in acknowledgements, etc.
– Explicit signals: Messages can be included in data packets
indicating to the source to shut up.
Congestion control in TCP
• Slow start
– The first phase of data
transmission in TCP
starts at a slow rate,
with the congestion
window (cwnd) set to one
maximum segment size
(MSS).
– Every time an
acknowledgement is
received, cwnd increases
by one MSS.
– This continues until the
slow start threshold
(ssthresh) is reached
• Congestion Avoidance: Once ssthresh
is reached, TCP enters the next phase.
• Instead of increasing the window size for
each acknowledged segment, we increase
cwnd by 1 MSS for each full window of
segments that gets acknowledged.
• Congestion Detection: When congestion occurs, we
must decrease cwnd.
• Whenever a segment needs to be retransmitted due to timeout, TCP
presumes congestion and restarts the slow-start phase with ssthresh
set to 1/2 the current window size.
• When a segment is retransmitted due to three consecutive identical
ACKs, congestion is less likely. TCP sets ssthresh and cwnd both to
1/2 the current window size and starts the congestion avoidance phase
again.
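The three phases can be tied together in a toy, round-by-round model of cwnd, measured in MSS units rather than bytes, with an initial ssthresh of 16 MSS and loss events injected at arbitrary round trips just to show the shape of the curve. Real TCP has many refinements this ignores.

```python
# Toy model of cwnd behaviour: exponential growth in slow start, linear growth
# in congestion avoidance, and the two reactions to loss described above.

cwnd, ssthresh = 1, 16           # assumed starting values, in MSS units

def on_round_acked():
    """All segments in the current window were acknowledged."""
    global cwnd
    if cwnd < ssthresh:
        cwnd *= 2                # slow start: +1 MSS per ACK doubles cwnd each round trip
    else:
        cwnd += 1                # congestion avoidance: +1 MSS per window per round trip

def on_timeout():
    global cwnd, ssthresh
    ssthresh = max(cwnd // 2, 2)
    cwnd = 1                     # go back to slow start

def on_three_dup_acks():
    global cwnd, ssthresh
    ssthresh = max(cwnd // 2, 2)
    cwnd = ssthresh              # continue in congestion avoidance

for rtt in range(1, 13):
    if rtt == 7:
        on_three_dup_acks()      # injected loss event (mild)
    elif rtt == 10:
        on_timeout()             # injected loss event (severe)
    else:
        on_round_acked()
    print(f"RTT {rtt:2d}: cwnd = {cwnd:2d} MSS, ssthresh = {ssthresh:2d} MSS")
```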