Chapter 3 outline
• 3.1 Transport-layer services
• 3.2 Multiplexing and demultiplexing
• 3.3 Connectionless transport: UDP
• 3.4 Principles of reliable data transfer
• 3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
• 3.6 Principles of congestion control
• 3.7 TCP congestion control
True/False Quiz
1. If an HTTP request message uses the Accept-language: fr header, and the server only has an English version of the object, then the server will return the 404 Document Not Found error message.
2. A server can use cookies to determine a user's postal address without the user's consent.
3. The Web typically sends multiple objects in a Web page within a multipart MIME message.
4. With a POP3 client, user folder information is kept on the mail server.
5. If a POP3 client does not send the dele command, copies of the messages that the client has retrieved remain on the mail server.
6. With SMTP, it is possible to send multiple mail messages over the same TCP connection.
7. DNS lookups often involve a combination of recursive and iterative queries.
8. With non-persistent connections between browser and origin server, it is possible for a single TCP segment to carry two distinct HTTP request messages.
9. The Date: header in the HTTP response message indicates when the object in the response was last modified.
10. Network programming on the new Suns is cool!
Principles of Congestion Control
Congestion:
• informally: "too many sources sending too much data too fast for network to handle"
• different from flow control!
• manifestations:
  - lost packets (buffer overflow at routers)
  - long delays (queueing in router buffers)
• a top-10 problem!
Causes/costs of congestion: scenario 1
• two senders, two receivers
• one router, infinite buffers
• no retransmission
(figure: Hosts A and B each send λin of original data into a router with unlimited shared output link buffers; λout is the delivered throughput)
• large delays when congested
• maximum achievable throughput
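
A small numeric sketch of this scenario (the link capacity R and the simple delay model below are illustrative assumptions, not values from the slides): with two senders sharing one outgoing link of capacity R, per-connection throughput can never exceed R/2, and queueing delay grows without bound as the offered load approaches that ceiling.

# Illustrative model of scenario 1: two senders, one router with infinite
# buffers, no retransmission. R and the delay model are assumptions for
# this sketch only.
R = 1.0  # shared output link capacity (normalized)

def per_connection_throughput(lam_in):
    # Each of the two identical senders can get at most half the link.
    return min(lam_in, R / 2)

def approx_queueing_delay(lam_in):
    # Qualitative model: delay grows without bound as load nears R/2.
    if lam_in >= R / 2:
        return float("inf")
    return 1.0 / (R / 2 - lam_in)

for lam in (0.10, 0.25, 0.40, 0.45, 0.49):
    print(f"lambda_in={lam:.2f}  lambda_out={per_connection_throughput(lam):.2f}"
          f"  delay~{approx_queueing_delay(lam):.1f}")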
Causes/costs of congestion: scenario 2
• one router, finite buffers
• sender retransmission of lost packet
(figure: Hosts A and B; λin: original data; λ'in: original data plus retransmitted data; λout; finite shared output link buffers)
Causes/costs of congestion: scenario 2
• transmit only when buffers free: λin = λout (goodput)
• "perfect" retransmission only when loss: λ'in > λout
• retransmission of delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout

"costs" of congestion:
• more work (retransmissions) for given "goodput"
• unneeded retransmissions: link carries multiple copies of a packet
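
A rough numeric illustration of these costs (the loss fractions and the duplicate factor below are assumptions for this sketch, not values from the slides): if a fraction p of transmissions is dropped and every drop is retransmitted, the link must carry 1/(1-p) transmissions per delivered packet, and premature-timeout duplicates add further copies on top of that.

# Illustrative only: p (fraction of transmissions dropped) and the number of
# duplicate copies per packet are assumed values chosen for this sketch.
def link_load_per_goodput(p, duplicates=0.0):
    # Transmissions the link must carry per unit of delivered original data:
    # 1/(1-p) to recover losses, plus any needless duplicate copies.
    return 1.0 / (1.0 - p) + duplicates

for p in (0.0, 0.1, 0.3, 0.5):
    print(f"loss p={p:.1f}: link carries {link_load_per_goodput(p):.2f}x goodput; "
          f"with 0.5 duplicates/pkt: {link_load_per_goodput(p, 0.5):.2f}x")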
Causes/costs of congestion: scenario 3
• four senders
• multihop paths
• timeout/retransmit
Q: what happens as λin and λ'in increase?
(figure: Hosts A and B; λin: original data; λ'in: original data plus retransmitted data; λout; finite shared output link buffers)
Causes/costs of congestion: scenario 3
Another "cost" of congestion:
• when packet dropped, any "upstream" transmission capacity used for that packet was wasted!
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
• no explicit feedback from network
• congestion inferred from end-system observed loss, delay
• approach taken by TCP

Network-assisted congestion control:
• routers provide feedback to end systems
  - single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  - explicit rate sender should send at
Case study: ATM ABR congestion control
ABR: available bit rate:
• "elastic service"
• if sender's path "underloaded": sender should use available bandwidth
• if sender's path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
• sent by sender, interspersed with data cells
• bits in RM cell set by switches ("network-assisted")
  - NI bit: no increase in rate (mild congestion)
  - CI bit: congestion indication
• RM cells returned to sender by receiver, with bits intact
Case study: ATM ABR congestion control
• two-byte ER (explicit rate) field in RM cell
  - congested switch may lower ER value in cell
  - sender's send rate thus set to minimum supportable rate on path
• EFCI bit in data cells: set to 1 in congested switch
  - if data cell preceding RM cell has EFCI set, sender sets CI bit in returned RM cell
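
A sketch of how an ABR sender might react to a returned RM cell, based only on the behavior described on these two slides; the field names, the minimum and peak rates, and the increase step are illustrative assumptions (real ABR sources follow standardized rate-adjustment parameters not covered here).

# Hypothetical RM-cell handling for an ABR sender, following the slides' rules:
# CI set -> throttle to the minimum guaranteed rate, NI set -> hold the rate,
# otherwise increase toward available bandwidth; never exceed the returned ER.
# MCR, PCR, and STEP are assumed values for illustration.
from dataclasses import dataclass

@dataclass
class RMCell:
    ci: bool    # congestion indication bit
    ni: bool    # no-increase bit (mild congestion)
    er: float   # explicit rate, possibly lowered by congested switches

MCR = 1.0     # minimum guaranteed cell rate (assumed)
PCR = 100.0   # peak cell rate (assumed)
STEP = 5.0    # additive increase per returned RM cell (assumed)

def update_rate(rate: float, cell: RMCell) -> float:
    if cell.ci:                # congested path: throttle to guaranteed minimum
        rate = MCR
    elif not cell.ni:          # underloaded path: probe for more bandwidth
        rate = min(PCR, rate + STEP)
    # NI set: leave the rate unchanged
    return min(rate, cell.er)  # ER caps rate at the minimum supportable on path

rate = 10.0
for cell in (RMCell(False, False, 80.0), RMCell(False, True, 80.0), RMCell(True, False, 40.0)):
    rate = update_rate(rate, cell)
    print("allowed rate ->", rate)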
TCP Congestion Control
• end-end control (no network assistance)
• sender limits transmission:
  LastByteSent - LastByteAcked ≤ CongWin
• roughly, rate = CongWin / RTT bytes/sec
• CongWin is dynamic, a function of perceived network congestion

How does sender perceive congestion?
• loss event = timeout or 3 duplicate ACKs
• TCP sender reduces rate (CongWin) after loss event

three mechanisms:
• AIMD
• slow start
• conservative after timeout events
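
A worked instance of the rate approximation above; the window size and RTT are made-up example values, only the formula rate ≈ CongWin/RTT comes from the slide.

# Made-up example values for the rate approximation rate ~= CongWin / RTT.
cong_win_bytes = 10 * 1460   # assume a window of 10 segments of 1460 bytes
rtt_seconds = 0.100          # assume a 100 ms round-trip time

rate_bps = cong_win_bytes * 8 / rtt_seconds
print(f"approximate sending rate: {rate_bps / 1e6:.2f} Mbps")  # about 1.17 Mbps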
TCP AIMD
multiplicative decrease: cut CongWin in half after loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
(figure: congestion window of a long-lived TCP connection over time, a sawtooth moving between roughly 8, 16, and 24 Kbytes)
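
A minimal sketch of this sawtooth, assuming for illustration an MSS of 1 Kbyte, a loss event whenever the window reaches 24 Kbytes, and one window update per RTT; these values are chosen only for illustration.

# AIMD sawtooth: +1 MSS per RTT, halve CongWin on a loss event.
# MSS, the loss point, and the starting window are illustrative assumptions.
MSS_KB = 1
LOSS_AT_KB = 24

cong_win_kb = 12
trace = []
for rtt in range(40):
    trace.append(cong_win_kb)
    if cong_win_kb >= LOSS_AT_KB:   # loss event: multiplicative decrease
        cong_win_kb //= 2
    else:                           # no loss this RTT: additive increase
        cong_win_kb += MSS_KB

print(trace)   # sawtooth oscillating between 12 and 24 Kbytes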
TCP Slow Start
• When connection begins, CongWin = 1 MSS
  - example: MSS = 500 bytes & RTT = 200 msec
  - initial rate = 20 kbps (500 bytes = 4,000 bits sent per 200 msec)
• When connection begins, increase rate exponentially fast until first loss event
• available bandwidth may be >> MSS/RTT
  - desirable to quickly ramp up to respectable rate
TCP Slow Start (more)
• When connection begins, increase rate exponentially until first loss event:
  - double CongWin every RTT
  - done by incrementing CongWin by 1 MSS for every ACK received
• Summary: initial rate is slow but ramps up exponentially fast
(figure: Host A and Host B exchanging segments and ACKs over successive RTTs, with the number of segments sent per RTT doubling each round)
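
A small sketch of why the per-ACK increment doubles the window each RTT (the MSS and the number of rounds are assumptions for illustration): if every segment in the current window is ACKed within one RTT, CongWin gains one MSS per ACK, so it doubles per RTT.

# Slow start: CongWin += 1 MSS for every ACK received. If the whole window is
# ACKed each RTT, the window doubles every RTT. Values are illustrative.
MSS = 1
cong_win = 1 * MSS

for rtt in range(1, 5):
    acks_this_rtt = cong_win // MSS    # one ACK per segment sent in this window
    cong_win += acks_this_rtt * MSS    # +1 MSS per ACK => window doubles
    print(f"after RTT {rtt}: CongWin = {cong_win} MSS")
# prints 2, 4, 8, 16 MSS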
Refinement
• After 3 dup ACKs:
  - CongWin is cut in half
  - window then grows linearly
• But after timeout event:
  - CongWin instead set to 1 MSS
  - window then grows exponentially to a threshold, then grows linearly

Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its previous max value before timeout.

Implementation:
• variable Threshold
• at loss event, Threshold is set to 1/2 of CongWin just before the loss event
(figure: congestion window size (segments) vs. transmission round for TCP Tahoe and TCP Reno, with the threshold marked)
Summary: TCP Congestion Control
• When CongWin is below Threshold, sender is in slow-start phase, window grows exponentially.
• When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.
• When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.
• When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
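
The four rules above can be collected into a small simulation; the sketch below is a simplified model of this summary (one update per transmission round, no fast-recovery details, and a made-up loss schedule), not a faithful TCP implementation.

# Simplified model of the summary rules (units: MSS, one step per round).
# The loss schedule and initial Threshold are illustrative assumptions;
# fast recovery is not modeled.
MSS = 1

def simulate(rounds, losses):
    """losses maps a round number to 'dup_acks' (triple dup ACK) or 'timeout'."""
    cong_win, threshold = 1 * MSS, 64 * MSS
    trace = []
    for r in range(rounds):
        trace.append(cong_win)
        event = losses.get(r)
        if event == "dup_acks":            # Threshold = CongWin/2, CongWin = Threshold
            threshold = max(cong_win // 2, 1)
            cong_win = threshold
        elif event == "timeout":           # Threshold = CongWin/2, CongWin = 1 MSS
            threshold = max(cong_win // 2, 1)
            cong_win = 1 * MSS
        elif cong_win < threshold:         # slow start: exponential growth
            cong_win = min(cong_win * 2, threshold)
        else:                              # congestion avoidance: linear growth
            cong_win += MSS
    return trace

print(simulate(20, {8: "dup_acks", 14: "timeout"}))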
TCP Fairness
Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K
(figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R)
Why is TCP fair?
Two competing sessions:
• additive increase gives slope of 1, as throughput increases
• multiplicative decrease decreases throughput proportionally
(figure: connection 2 throughput vs. connection 1 throughput, both bounded by R; repeated congestion-avoidance (additive increase) and loss (window cut by factor of 2) steps move the operating point toward the equal bandwidth share line)
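
A small sketch of this argument (a standard AIMD phase-plane style simulation; the starting rates, link capacity, and additive step are assumptions for illustration): both connections add the same increment each round and both halve when their combined rate exceeds the link capacity, so the gap between them is cut in half at every loss and the two rates converge toward the equal share.

# Two AIMD flows sharing one bottleneck of capacity R. Starting rates, R, and
# the additive step are illustrative assumptions; the convergence is the point.
R = 100.0
STEP = 1.0

x1, x2 = 10.0, 70.0      # deliberately unequal starting throughputs
for _ in range(200):
    if x1 + x2 > R:      # loss: both connections cut their windows by factor of 2
        x1, x2 = x1 / 2, x2 / 2
    else:                # congestion avoidance: both additively increase
        x1, x2 = x1 + STEP, x2 + STEP

print(f"x1 ~ {x1:.1f}, x2 ~ {x2:.1f}, gap ~ {abs(x1 - x2):.2f}")
# the gap shrinks toward 0; each flow's average approaches R/2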