Transcript Chapter 3
Flow and Congestion Control
Ram Dantu (compiled from various text books)
Transport Layer
1
TCP Flow Control

- receive side of TCP connection has a receive buffer
- app process may be slow at reading from buffer
- flow control: sender won't overflow receiver's buffer by transmitting too much, too fast
- speed-matching service: matching the send rate to the receiving app's drain rate
TCP Flow Control: how it works

(Suppose TCP receiver discards out-of-order segments)

- spare room in buffer = RcvWindow
  = RcvBuffer - [LastByteRcvd - LastByteRead]
- Rcvr advertises spare room by including value of RcvWindow in segments
- Sender limits unACKed data to RcvWindow
  - guarantees receive buffer doesn't overflow
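The spare-room formula can be sketched directly in Python (a toy illustration; the buffer size and byte counts below are invented example values, and the variable names follow the slide):

```python
# Receiver-side accounting for the advertised window.
# RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)

def rcv_window(rcv_buffer: int, last_byte_rcvd: int, last_byte_read: int) -> int:
    """Spare room the receiver can advertise back to the sender."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

# Example: 64 KB buffer, 48 KB received so far, app has read 40 KB of it,
# so 8 KB sits unread in the buffer and 56 KB of spare room remains.
win = rcv_window(64 * 1024, 48 * 1024, 40 * 1024)
print(win)  # 57344 bytes (56 KB) of spare room
```

The sender then keeps its amount of unACKed data at or below this advertised value, so the buffer can never overflow.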
Principles of Congestion Control

Congestion:
- informally: "too many sources sending too much data too fast for network to handle"
- different from flow control!
- manifestations:
  - lost packets (buffer overflow at routers)
  - long delays (queueing in router buffers)
- a top-10 problem!
Congestion: A Close-up View

- knee – point after which throughput increases very slowly and delay increases fast
- cliff – point after which throughput starts to decrease very fast to zero (congestion collapse) and delay approaches infinity

[Figure: throughput and delay plotted against load, with the knee, packet loss region, cliff, and congestion collapse marked]

Note: in an M/M/1 queue, delay = 1/(1 - utilization)
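The M/M/1 note explains why delay shoots up near the cliff; a few sample utilization values (arbitrary, for illustration) make the shape concrete:

```python
# Mean M/M/1 queueing delay, in units of the mean service time:
# delay = 1 / (1 - utilization). As utilization -> 1, delay -> infinity.

def mm1_delay(utilization: float) -> float:
    assert 0.0 <= utilization < 1.0, "formula only valid for utilization < 1"
    return 1.0 / (1.0 - utilization)

for rho in (0.5, 0.9, 0.99):
    print(rho, mm1_delay(rho))
# 2x the service time at 50% load, 10x at 90%, 100x at 99% --
# a gentle "knee" followed by a near-vertical climb toward the cliff.
```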
Congestion Control vs. Congestion Avoidance

- Congestion control goal: stay left of cliff
- Congestion avoidance goal: stay left of knee
- Right of cliff: congestion collapse

[Figure: throughput vs. load, with the knee, cliff, and congestion collapse marked]
Congestion Collapse: How Bad Is It?

Definition: increase in network load results in decrease of useful work done

Many possible causes:
- Spurious retransmissions of packets still in flight
- Undelivered packets: packets consume resources and are dropped elsewhere in network
- Fragments: mismatch of transmission and retransmission units
- Control traffic: large percentage of traffic is for control
- Stale or unwanted packets: packets that are delayed on long queues
Solution Directions…

Problem: demand outstrips available capacity
- demand: the sum of the source rates λ_i (i = 1 … n); capacity: μ

If information about each λ_i and μ is known in a central location where control of λ_i or μ can be effected with zero time delays, the congestion problem is solved!

- Capacity (μ) cannot be provisioned very fast => demand must be managed
- Perfect callback: admit packets into the network from the user only when the network has capacity (bandwidth and buffers) to get the packet across.
Causes/costs of congestion: scenario 3

- four senders
- multihop paths
- timeout/retransmit

Q: what happens as λ_in and λ'_in increase?

- λ_in: original data
- λ'_in: original data, plus retransmitted data

[Figure: Host A and Host B sending over multihop paths through routers with finite shared output link buffers]
Causes/costs of congestion: scenario 3 (cont.)

[Figure: λ_out at Host B falls off as λ'_in at Host A keeps increasing]

Another "cost" of congestion:
- when packet dropped, any "upstream" transmission capacity used for that packet was wasted!
Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
- no explicit feedback from network
- congestion inferred from end-system observed loss, delay
- approach taken by TCP

Network-assisted congestion control:
- routers provide feedback to end systems
  - single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  - explicit rate sender should send at
TCP Congestion Control

- end-end control (no network assistance)
- sender limits transmission:
    LastByteSent - LastByteAcked <= CongWin
- Roughly,
    rate = CongWin / RTT  Bytes/sec
- CongWin is dynamic, a function of perceived network congestion

How does sender perceive congestion?
- loss event = timeout or 3 duplicate ACKs
- TCP sender reduces rate (CongWin) after loss event

three mechanisms:
- AIMD
- slow start
- conservative after timeout events
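The rough rate formula is a one-liner; the window and RTT values below are invented for illustration:

```python
# Roughly, a TCP sender delivers one congestion window per round-trip:
# rate = CongWin / RTT (Bytes/sec).

def tcp_rate_bytes_per_sec(congwin_bytes: int, rtt_sec: float) -> float:
    return congwin_bytes / rtt_sec

# Example: a 10-segment window of 1460-byte segments, RTT of 100 ms.
print(tcp_rate_bytes_per_sec(10 * 1460, 0.1))  # 146000.0 B/s (~1.17 Mbps)
```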
TCP AIMD (Additive Increase, Multiplicative Decrease)

- additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
- multiplicative decrease: cut CongWin in half after loss event

[Figure: congestion window of a long-lived TCP connection, a sawtooth oscillating between roughly 8, 16, and 24 Kbytes over time]
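The sawtooth in the figure can be reproduced with a toy AIMD trace; the starting window and loss rounds below are hand-picked illustration values, not measured data:

```python
# Toy AIMD trace: +1 MSS per RTT (additive increase), halve on loss
# (multiplicative decrease). Window counted in segments for simplicity.

MSS = 1

def aimd_trace(rounds, loss_rounds, start=10):
    w, trace = start, []
    for r in range(rounds):
        if r in loss_rounds:
            w = max(1, w // 2)  # multiplicative decrease after a loss event
        else:
            w = w + MSS         # additive increase: probe for bandwidth
        trace.append(w)
    return trace

# Losses at rounds 5 and 11 produce the familiar sawtooth.
print(aimd_trace(12, {5, 11}))
```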
TCP Slow Start

- When connection begins, CongWin = 1 MSS
  - Example: MSS = 500 bytes & RTT = 200 msec
  - initial rate = 20 kbps
- available bandwidth may be >> MSS/RTT
  - desirable to quickly ramp up to respectable rate
- When connection begins, increase rate exponentially fast until first loss event
TCP Slow Start (more)

When connection begins, increase rate exponentially until first loss event:
- double CongWin every RTT
- done by incrementing CongWin for every ACK received

Summary: initial rate is slow but ramps up exponentially fast

[Figure: Host A and Host B exchanging segments over time; one segment in the first RTT, two in the second, four in the third]
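Why do per-ACK increments double the window every RTT? Each of the W segments sent in a round comes back as an ACK, and each ACK adds 1 MSS. A minimal sketch, assuming no loss and one ACK per segment (a simplification of real TCP):

```python
# One slow-start round: every segment sent this RTT is ACKed,
# and each ACK grows CongWin by 1 segment -> the window doubles.

def slow_start_round(congwin_segments: int) -> int:
    acks = congwin_segments           # one ACK returns per segment sent
    return congwin_segments + acks    # +1 segment per ACK => doubled

w = 1
for _ in range(4):
    w = slow_start_round(w)
print(w)  # 1 -> 2 -> 4 -> 8 -> 16 after four RTTs
```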
Refinement

After 3 dup ACKs:
- CongWin is cut in half
- window then grows linearly

But after timeout event:
- CongWin instead set to 1 MSS
- window then grows exponentially to a threshold, then grows linearly

Philosophy:
- 3 dup ACKs indicates network capable of delivering some segments
- timeout before 3 dup ACKs is "more alarming"
Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
- Variable Threshold
- At loss event, Threshold is set to 1/2 of CongWin just before loss event

[Figure: congestion window size (segments) vs. transmission round for TCP Tahoe and TCP Reno, with the threshold line marked]
Summary: TCP Congestion Control

- When CongWin is below Threshold, sender is in slow-start phase, window grows exponentially.
- When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.
- When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.
- When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
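These four rules fit into a few small update functions. A simplified sketch of the Reno-style behavior described above (window counted in MSS units; real TCP adds fast-recovery details not shown here):

```python
MSS = 1  # window counted in MSS units

def on_ack(congwin: int, threshold: int) -> int:
    if congwin < threshold:
        return congwin * 2      # slow start: exponential growth per RTT
    return congwin + MSS        # congestion avoidance: linear growth

def on_triple_dup_ack(congwin: int):
    threshold = max(congwin // 2, 1)
    return threshold, threshold          # new (CongWin, Threshold)

def on_timeout(congwin: int):
    return MSS, max(congwin // 2, 1)     # restart from 1 MSS; remember half

# Example walk: below threshold -> double; then a triple dup ACK halves both.
cw, th = 8, 16
cw = on_ack(cw, th)          # slow start: 8 -> 16
cw, th = on_triple_dup_ack(cw)
print(cw, th)                # 8 8
```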
TCP Fairness

Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
Why is TCP fair?

Two competing sessions:
- additive increase gives slope of 1, as throughput increases
- multiplicative decrease decreases throughput proportionally

[Figure: Connection 1 throughput vs. Connection 2 throughput, both axes 0 to R; repeated cycles of additive increase (congestion avoidance) and loss-triggered halving move the operating point toward the equal-bandwidth-share line]
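The convergence argument can be checked with a toy simulation of two AIMD flows; the capacity and starting rates below are arbitrary illustration values, and both flows are assumed to see the same loss events:

```python
# Two AIMD flows sharing a link of capacity R: when their sum exceeds R,
# both halve (multiplicative decrease); otherwise both add 1 (additive
# increase). Halving shrinks the gap between them; adding preserves it,
# so the operating point drifts toward the equal-share diagonal.

def aimd_two_flows(x: float, y: float, R: float, rounds: int):
    for _ in range(rounds):
        if x + y > R:
            x, y = x / 2, y / 2   # shared loss event: both halve
        else:
            x, y = x + 1, y + 1   # both probe upward by 1 unit per round
    return x, y

x, y = aimd_two_flows(2.0, 60.0, 100.0, 200)
print(abs(x - y))  # far smaller than the initial gap of 58
```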
Fairness (more)

Fairness and UDP:
- Multimedia apps often do not use TCP
  - do not want rate throttled by congestion control
- Instead use UDP: pump audio/video at constant rate, tolerate packet loss
- Research area: TCP friendly

Fairness and parallel TCP connections:
- nothing prevents app from opening parallel connections between 2 hosts
  - Web browsers do this
- Example: link of rate R supporting 9 connections:
  - new app asks for 1 TCP, gets rate R/10
  - new app asks for 11 TCPs, gets R/2!
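The R/10 vs. R/2 arithmetic, assuming the idealization that the link splits evenly per TCP connection:

```python
# Per-connection fair share: with n total connections each gets R/n,
# so an app holding k of them gets R * k / n.

def app_share(existing_conns: int, app_conns: int, R: float) -> float:
    total = existing_conns + app_conns
    return R * app_conns / total

R = 1.0
print(app_share(9, 1, R))   # 0.1  -> R/10 with one connection
print(app_share(9, 11, R))  # 0.55 -> roughly R/2 with eleven connections
```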