Midterm Review


Midterm Review
• In class
• Closed book
• One 8.5” × 11” sheet of paper permitted (single side)
• Calculators recommended
Packet Switching: Statistical Multiplexing
[Figure: hosts A and B share a 10 Mbps Ethernet feeding a 1.5 Mbps link; packets queue while waiting for the output link (statistical multiplexing); C, D, E receive downstream.]
• Sequence of A & B packets does not have a fixed pattern → statistical multiplexing.
• In TDM, each host gets the same slot in a revolving TDM frame.
Packet Switching versus Circuit Switching
• Circuit switching:
– Network resources (e.g., bandwidth) divided into “pieces” for allocation
– Resource piece idle if not used by owning call (no sharing)
– NOT efficient!
• Packet switching:
– Great for bursty data
– Excessive congestion: packet delay and loss
– Protocols needed for reliable data transfer, congestion control
Datagram Packet Switching
• Each packet is independently switched
– Each packet header contains destination address, which determines next hop
– Routes may change during session
• No resources are pre-allocated (reserved) in advance
• Example: IP networks
Internet structure: network of networks
• “Tier-3” ISPs and local ISPs
– last-hop (“access”) network (closest to end systems)
– Tier-3 examples: Turkish Telecom, Minnesota Regional Network
• Local and tier-3 ISPs are customers of higher-tier ISPs connecting them to rest of Internet
[Figure: local and tier-3 ISPs attach to tier-2 ISPs, which attach to tier-1 ISPs interconnected at a NAP.]
Four sources of packet delay
• 1. processing:
– check bit errors
– determine output link
• 2. queueing:
– time waiting at output link for transmission
– depends on congestion level of router
[Figure: packet traveling from A to B incurs processing, queueing, transmission, and propagation delay at each node.]
Delay in packet-switched networks
3. Transmission delay:
• R = link bandwidth (bps)
• L = packet length (bits)
• time to send bits into link = L/R
4. Propagation delay:
• d = length of physical link
• s = propagation speed in medium (~2×10^8 m/sec)
• propagation delay = d/s
Note: s and R are very different quantities!
[Figure: as above, the delay components along the path from A to B.]
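The L/R and d/s formulas above can be checked with a short sketch; the function names and example values here are illustrative, not from any library.

```python
# Sketch: nodal transmission and propagation delay, per the slide's formulas.

def transmission_delay(L_bits: float, R_bps: float) -> float:
    """Time to push L bits onto a link of bandwidth R: L/R seconds."""
    return L_bits / R_bps

def propagation_delay(d_meters: float, s_mps: float = 2e8) -> float:
    """Time for one bit to travel distance d at speed s: d/s seconds."""
    return d_meters / s_mps

# Example: a 1500-byte packet on a 1.5 Mbps link spanning 100 km.
t = transmission_delay(1500 * 8, 1.5e6)   # L/R = 12000 / 1.5e6 = 8 ms
p = propagation_delay(100e3)              # d/s = 1e5 / 2e8 = 0.5 ms
print(f"transmission = {t*1000:.1f} ms, propagation = {p*1000:.2f} ms")
```

Note how the two delays depend on entirely different parameters — exactly the point of the slide's "s and R are very different quantities" remark.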
Internet protocol stack
• application: supporting network applications
– FTP, SMTP, HTTP
• transport: host-host data transfer
– TCP, UDP
• network: routing of datagrams from source
to destination
– IP, routing protocols
• link: data transfer between neighboring
network elements
– PPP, Ethernet
• physical: bits “on the wire”
[Figure: the five-layer stack — application, transport, network, link, physical.]
HTTP connections
Nonpersistent HTTP:
• At most one object is sent over a TCP connection.
• HTTP/1.0 uses nonpersistent HTTP.
Persistent HTTP:
• Multiple objects can be sent over a single TCP connection between client and server.
• HTTP/1.1 uses persistent connections in default mode.
Also covered:
• HTTP message format, responses, methods
• HTTP cookies
Response Time of HTTP
Nonpersistent HTTP issues:
• requires 2 RTTs per object
• OS must work and allocate host resources for each TCP connection
• but browsers often open parallel TCP connections to fetch referenced objects
Persistent HTTP:
• server leaves connection open after sending response
• subsequent HTTP messages between same client/server are sent over connection
Persistent without pipelining:
• client issues new request only when previous response has been received
• one RTT for each referenced object
Persistent with pipelining:
• default in HTTP/1.1
• client sends requests as soon as it encounters a referenced object
• as little as one RTT for all the referenced objects
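The RTT counts above can be compared with a simplified model — a sketch that ignores transmission time and counts only round trips, with function names of my own choosing:

```python
# Sketch: RTTs to fetch a base page plus n referenced objects under the
# three connection models (transmission time and parallelism ignored).

def nonpersistent_rtts(n_objects: int) -> int:
    # 2 RTTs per object (TCP setup + request/response), base page included
    return 2 * (1 + n_objects)

def persistent_no_pipelining_rtts(n_objects: int) -> int:
    # 2 RTTs for the base page, then 1 RTT per referenced object
    return 2 + n_objects

def persistent_pipelined_rtts(n_objects: int) -> int:
    # 2 RTTs for the base page, then as little as 1 RTT for ALL objects
    return 2 + (1 if n_objects else 0)

for f in (nonpersistent_rtts, persistent_no_pipelining_rtts,
          persistent_pipelined_rtts):
    print(f.__name__, f(10))   # 22 vs 12 vs 3 RTTs for a 10-object page
```

The gap widens with object count, which is why pipelining became the HTTP/1.1 default.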
FTP: separate control, data connections
• FTP client contacts FTP server at port 21, specifying TCP as transport protocol
• Client obtains authorization over control connection
• Client browses remote directory by sending commands over control connection
• When server receives a command for a file transfer, the server opens a TCP data connection to client
• After transferring one file, server closes the data connection
• Server opens a second TCP data connection to transfer another file
• Control connection: “out of band”
• FTP server maintains “state”: current directory, earlier authentication
[Figure: FTP client and server, with the TCP control connection on port 21 and the TCP data connection on port 20.]
Electronic Mail: SMTP [RFC 2821]
• uses TCP to reliably transfer email message from client to server, port 25
• direct transfer: sending server to receiving server
[Figure: user agents hand outgoing messages to their mail server's message queue; SMTP carries them between mail servers, which deliver into user mailboxes for the receiving user agents.]
DNS name servers
• no server has all name-to-IP address mappings
Why not centralize DNS?
• single point of failure
• traffic volume
• distant centralized database
• maintenance
→ doesn’t scale!
local name servers:
– each ISP, company has local (default) name server
– host DNS query first goes to local name server
authoritative name server:
– for a host: stores that host’s IP address, name
– can perform name/address translation for that host’s name
DNS example
Root name server:
• may not know authoritative name server
• may know intermediate name server: who to contact to find authoritative name server
[Figure: recursive DNS example — requesting host surf.eurecom.fr asks local name server dns.eurecom.fr, which queries the root; the query is forwarded through intermediate name server dns.nwu.edu to authoritative name server dns.cs.nwu.edu to resolve www.cs.nwu.edu, and the answer returns along the same chain (steps 1–8).]
DNS: iterated queries
recursive query:
• puts burden of name resolution on contacted name server
• heavy load?
iterated query:
• contacted server replies with name of server to contact
• “I don’t know this name, but ask this server”
[Figure: iterated query — requesting host surf.eurecom.fr asks local name server dns.eurecom.fr (1), which in turn contacts the root (2–3), intermediate name server dns.umass.edu (4–5), and authoritative name server dns.cs.umass.edu (6–7) to resolve gaia.cs.umass.edu, then replies to the host (8).]
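The iterated pattern — "I don't know this name, but ask this server" — can be sketched as a toy resolver over made-up zone data (the referral tables and the IP address below are invented for illustration, not real DNS records):

```python
# Toy iterated DNS resolution: each contacted server either answers
# authoritatively or refers the resolver to the next server down.

REFERRALS = {
    "root":          {"umass.edu": "dns.umass.edu"},        # root refers onward
    "dns.umass.edu": {"cs.umass.edu": "dns.cs.umass.edu"},  # intermediate refers
}
AUTHORITATIVE = {
    "dns.cs.umass.edu": {"gaia.cs.umass.edu": "128.119.245.12"},  # made-up A record
}

def iterate(name: str, server: str = "root") -> str:
    """Follow referrals until an authoritative server answers."""
    while True:
        if name in AUTHORITATIVE.get(server, {}):
            return AUTHORITATIVE[server][name]
        for zone, next_server in REFERRALS.get(server, {}).items():
            if name.endswith(zone):      # "ask this server" for that zone
                server = next_server
                break
        else:
            raise LookupError(name)

print(iterate("gaia.cs.umass.edu"))
```

Note that the local name server does all the work here — the burden the slide contrasts with recursive resolution, where each contacted server resolves on the client's behalf.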
Web caches (proxy server)
Goal: satisfy client request without involving origin server
• user sets browser: Web accesses via cache
• browser sends all HTTP requests to cache
– object in cache: cache returns object
– else cache requests object from origin server, then returns object to client
• Why web caching?
[Figure: clients send requests to a proxy server, which fetches misses from the origin servers.]
Caching example (3)
Install cache (setup: institutional network with 10 Mbps LAN and institutional cache, 1.5 Mbps access link to the public Internet and origin servers)
• suppose hit rate is .4
Consequence:
• 40% of requests will be satisfied almost immediately
• 60% of requests satisfied by origin server
• utilization of access link reduced to 60%, resulting in negligible delays (say 10 msec)
• total delay = Internet delay + access delay + LAN delay = .6 × 2 sec + .6 × .01 sec + milliseconds < 1.3 secs
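The slide's arithmetic can be reproduced directly; the 2 s Internet delay and 10 ms access delay are the slide's assumed values, and the LAN delay ("milliseconds") is an illustrative placeholder:

```python
# Sketch of the caching example's delay calculation (hit rate 0.4).

hit_rate = 0.4
internet_delay = 2.0    # seconds per origin fetch (slide's assumption)
access_delay = 0.01     # seconds, once access-link utilization drops to 60%
lan_delay = 0.002       # "milliseconds"; illustrative value

miss_rate = 1 - hit_rate
total = (miss_rate * internet_delay    # only misses cross the Internet
         + miss_rate * access_delay    # only misses use the access link
         + lan_delay)                  # every request crosses the LAN
print(f"total delay = {total:.3f} s")  # just over 1.2 s, under the 1.3 s bound
```

The cache wins twice: hits are served locally, and the lighter access-link load shrinks queueing delay for the misses too.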
Transport Layer
• Transport-layer services
• Multiplexing and demultiplexing
• Connectionless transport: UDP
• Principles of reliable data transfer
• TCP
– Segment structures
– Flow control
– Congestion control
Demultiplexing
• UDP socket identified by two-tuple: (dest IP address, dest port number)
• When host receives UDP segment:
– checks destination port number in segment
– directs UDP segment to socket with that port number
• TCP socket identified by 4-tuple:
– source IP address
– source port number
– dest IP address
– dest port number
• receiving host uses all four values to direct segment to appropriate socket
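Port-based UDP demultiplexing is easy to observe with two loopback sockets — a self-contained sketch using only the standard `socket` module:

```python
import socket

# Sketch: two UDP sockets on one host, bound to different ports. The OS
# demultiplexes each incoming datagram by its destination port alone.

rx1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx1.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
rx2.bind(("127.0.0.1", 0))
rx1.settimeout(2)
rx2.settimeout(2)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"to-first", rx1.getsockname())
tx.sendto(b"to-second", rx2.getsockname())

# Each datagram is delivered only to the socket bound to its destination
# port, regardless of the sender — unlike TCP's 4-tuple demultiplexing.
msg1 = rx1.recv(1024)
msg2 = rx2.recv(1024)
print(msg1, msg2)
for s in (rx1, rx2, tx):
    s.close()
```

With TCP, segments from two different clients to the same server port land on two different sockets; with UDP, they land on the same one.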
UDP: User Datagram Protocol [RFC 768]
[Figure: UDP segment format, 32-bit rows — source port #, dest port #; length, checksum; application data (message).]
Why is there a UDP?
• no connection establishment (which can add delay)
• simple: no connection state at sender, receiver
• small segment header
• no congestion control: UDP can blast away as fast as desired
UDP checksum
Goal: detect “errors” (e.g., flipped bits) in
transmitted segment
Sender:
• treat segment contents as sequence of 16-bit integers
• checksum: addition (1’s complement sum) of segment contents
• sender puts checksum value into UDP checksum field
Example (4-bit words): addition of 0110 and 0101 gives 1011; the 1’s complement sum (the checksum) is 0100.
Receiver:
• addition of all segment contents + checksum: 0110 + 0101 + 0100 = 1111
• check if all bits are 1:
– NO: error detected
– YES: no error detected. But maybe errors nonetheless? More later….
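The same procedure over real 16-bit words can be sketched as follows — the data words are arbitrary examples, and the carry wrap-around is what makes the sum a 1's complement sum:

```python
# Sketch: Internet checksum over 16-bit words (1's complement sum, then
# complement), as UDP uses it.

def ones_complement_sum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any carry back in
    return total

def udp_checksum(words):
    return ~ones_complement_sum(words) & 0xFFFF    # complement of the sum

data = [0x4500, 0x0073]            # arbitrary example segment words
csum = udp_checksum(data)
# Receiver adds everything, checksum included: all-1s means "no error detected"
print(hex(csum), hex(ones_complement_sum(data + [csum])))
```

Adding the checksum back in yields 0xFFFF whenever no bits flipped, which is exactly the receiver's "check if all bits are 1" test.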
Go-Back-N
Sender:
• k-bit seq # in pkt header
• “window” of up to N consecutive unACKed pkts allowed
• ACK(n): ACKs all pkts up to, including seq # n - “cumulative ACK”
– may receive duplicate ACKs (see receiver)
• Single timer for all in-flight pkts
• timeout(n): retransmit pkt n and all higher seq # pkts in window
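The sender-side bookkeeping above can be modeled as a small class — a toy sketch of the state machine only (window, cumulative ACKs, single timer), with no actual transmission:

```python
# Toy Go-Back-N sender state, following the slide's rules.

class GBNSender:
    def __init__(self, N: int):
        self.N = N                 # window size
        self.base = 0              # oldest unACKed seq #
        self.nextseq = 0           # next seq # to use
        self.timer_running = False # single timer for all in-flight pkts

    def can_send(self) -> bool:
        return self.nextseq < self.base + self.N      # window not full

    def send(self):
        assert self.can_send()
        if self.base == self.nextseq:
            self.timer_running = True                 # first in-flight pkt
        self.nextseq += 1

    def on_ack(self, n: int):
        self.base = max(self.base, n + 1)             # cumulative: ACKs all <= n
        self.timer_running = self.base != self.nextseq

    def on_timeout(self):
        # retransmit pkt base and all higher seq #s in the window
        return list(range(self.base, self.nextseq))

s = GBNSender(N=4)
for _ in range(4):
    s.send()
print(s.can_send())      # window full: False
s.on_ack(1)              # cumulative ACK frees seq #0 and #1
print(s.on_timeout())    # [2, 3] would be retransmitted
```

The cumulative-ACK line is why a single ACK can slide the window past several packets at once.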
Selective Repeat
• receiver individually acknowledges all correctly
received pkts
– buffers pkts, as needed, for eventual in-order delivery
to upper layer
• sender only resends pkts for which ACK not
received
– sender timer for each unACKed pkt
• sender window
– N consecutive seq #’s
– again limits seq #s of sent, unACKed pkts
[Figure: selective repeat sender and receiver windows.]
TCP segment structure
[Figure: TCP segment format, 32 bits wide.] Fields:
• source port #, dest port #
• sequence number, acknowledgement number — counting by bytes of data (not segments!)
• header length, unused bits, and flags:
– URG: urgent data (generally not used)
– ACK: ACK # valid
– PSH: push data now (generally not used)
– RST, SYN, FIN: connection estab (setup, teardown commands)
• receive window: # bytes rcvr willing to accept
• checksum: Internet checksum (as in UDP)
• urgent data pointer
• options (variable length)
• application data (variable length)
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
• no explicit feedback from network
• congestion inferred from end-system observed loss, delay
• approach taken by TCP
Network-assisted congestion control:
• routers provide feedback to end systems
– single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
– explicit rate sender should send at
TCP Slow Start (more)
• When connection begins, increase rate exponentially until first loss event:
– double CongWin every RTT
– done by incrementing CongWin for every ACK received
• Summary: initial rate is slow but ramps up exponentially fast
[Figure: Host A/Host B timeline illustrating the doubling per RTT.]
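Why does a per-ACK increment double the window per RTT? A window of k segments elicits k ACKs, each adding one MSS. A minimal simulation (values illustrative):

```python
# Sketch: slow start's per-ACK increment yields per-RTT doubling.

MSS = 1
congwin = 1 * MSS
history = []
for rtt in range(4):
    history.append(congwin)
    acks_this_rtt = congwin // MSS     # one ACK per segment in the window
    for _ in range(acks_this_rtt):
        congwin += MSS                 # CongWin += MSS on each ACK received
print(history)                         # window doubles each RTT: 1, 2, 4, 8
```

This is the exponential ramp-up the slide describes, continuing until the first loss event (or until CongWin crosses Threshold, per the table below).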
TCP sender congestion control
Event: ACK receipt for previously unACKed data
State: Slow Start (SS)
TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to “Congestion Avoidance”
Commentary: Resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
State: Congestion Avoidance (CA)
TCP sender action: CongWin = CongWin + MSS × (MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = Threshold; set state to “Congestion Avoidance”
Commentary: Fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.

Event: Timeout
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = 1 MSS; set state to “Slow Start”
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
TCP sender action: Increment duplicate ACK count for segment being ACKed
Commentary: CongWin and Threshold not changed
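The event table above maps naturally onto a small state machine — a sketch with CongWin and Threshold in MSS units, following the slide's rules rather than any real TCP stack:

```python
# Sketch: TCP sender congestion-control events as a state machine.

class TCPCongestion:
    def __init__(self, threshold: float = 8.0):
        self.state = "SS"                 # Slow Start
        self.congwin = 1.0                # in MSS units
        self.threshold = threshold

    def on_new_ack(self):                 # ACK for previously unACKed data
        if self.state == "SS":
            self.congwin += 1.0                      # CongWin += MSS
            if self.congwin > self.threshold:
                self.state = "CA"                    # enter Congestion Avoidance
        else:
            self.congwin += 1.0 / self.congwin       # += MSS*(MSS/CongWin)

    def on_triple_dup_ack(self):
        self.threshold = self.congwin / 2            # multiplicative decrease
        self.congwin = max(self.threshold, 1.0)      # never below 1 MSS
        self.state = "CA"                            # fast recovery

    def on_timeout(self):
        self.threshold = self.congwin / 2
        self.congwin = 1.0                           # back to 1 MSS
        self.state = "SS"                            # re-enter slow start

tcp = TCPCongestion(threshold=4.0)
for _ in range(4):
    tcp.on_new_ack()
print(tcp.state, tcp.congwin)    # crossed Threshold, now in CA with CongWin 5
tcp.on_timeout()
print(tcp.state, tcp.congwin)    # timeout: SS, CongWin back to 1 MSS
```

Note the asymmetry the table encodes: triple duplicate ACKs halve the window, but a timeout collapses it all the way to 1 MSS.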
Why is TCP fair?
Two competing sessions:
• additive increase gives slope of 1, as throughput increases
• multiplicative decrease decreases throughput proportionally
[Figure: throughput of connection 1 plotted against connection 2 (axes up to link capacity R); alternating congestion-avoidance additive increase and loss-triggered halving drives the operating point toward the equal bandwidth share line.]
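The convergence argument can be checked numerically — a sketch with illustrative capacity and starting rates, assuming both flows see every loss event:

```python
# Sketch: AIMD drives two competing flows toward an equal share.

R = 100.0               # link capacity (illustrative units)
x, y = 80.0, 10.0       # deliberately unequal starting throughputs
for _ in range(200):
    x += 1.0            # additive increase: slope 1 for both flows
    y += 1.0
    if x + y > R:       # shared loss event at capacity
        x /= 2          # multiplicative decrease, proportional to rate
        y /= 2
print(round(x - y, 6))  # gap shrinks toward 0: equal bandwidth share
```

Additive increase preserves the gap between the flows while each halving cuts it in half, so the trajectory zigzags toward the fair-share line — the geometric argument behind the figure.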