Congestion in Data Networks - NYU Computer Science Department
Data Communication and
Networks
Lecture 7
Congestion in Data Networks
October 16, 2003
Joseph Conron
Computer Science Department
New York University
[email protected]
Principles of Congestion Control
Congestion:
informally: “too many sources sending too much data
too fast for network to handle”
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queuing in router buffers)
a top-10 problem!
What Is Congestion?
Congestion occurs when the number of packets
being transmitted through the network
approaches the packet handling capacity of the
network
Congestion control aims to keep number of
packets below level at which performance falls
off dramatically
Data network is a network of queues
Generally 80% utilization is critical
Finite queues mean data may be lost
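As a rough illustration of why utilization around 80% is treated as critical, the sketch below applies the M/M/1 queueing formula (the model choice and the 1 ms service time are assumptions, not part of the slides): average delay grows as 1/(1 - utilization), so performance falls off steeply as load approaches capacity.

# Minimal sketch (assumptions mine): M/M/1 average delay vs. utilization.

def mm1_avg_delay(service_time_s: float, utilization: float) -> float:
    """Average time in an M/M/1 queue (waiting + service), in seconds."""
    assert 0 <= utilization < 1, "queue is unstable at utilization >= 1"
    return service_time_s / (1.0 - utilization)

if __name__ == "__main__":
    service_time = 0.001  # assume 1 ms to transmit one packet
    for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
        print(f"utilization {rho:4.2f}: avg delay = {mm1_avg_delay(service_time, rho)*1000:6.1f} ms")
    # At 80% utilization delay is already 5x the service time,
    # and it roughly doubles again by 90%.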
Queues at a Node
Effects of Congestion
Packets arriving are stored at input buffers
Routing decision made
Packet moves to output buffer
Packets queued for output transmitted as fast as
possible
Statistical time division multiplexing
If packets arrive too fast to be routed, or to be
output, buffers will fill
Can use flow control
Can discard packets
Can propagate congestion through network
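The following small simulation (arrival/service probabilities and buffer size are made-up values) illustrates the point above: when packets arrive faster than the output line can transmit them, the finite output buffer fills and the excess is discarded.

# Sketch of a single finite output buffer under statistical multiplexing.

import random

def simulate_output_buffer(arrival_rate, service_rate, capacity, ticks=100_000):
    queue_len, dropped, sent = 0, 0, 0
    for _ in range(ticks):
        # Bernoulli arrivals and departures per tick (a crude stand-in
        # for statistical time division multiplexing).
        if random.random() < arrival_rate:
            if queue_len < capacity:
                queue_len += 1
            else:
                dropped += 1          # finite queues mean data may be lost
        if queue_len > 0 and random.random() < service_rate:
            queue_len -= 1
            sent += 1
    return sent, dropped

if __name__ == "__main__":
    random.seed(1)
    for lam in (0.5, 0.8, 0.95):
        sent, dropped = simulate_output_buffer(lam, service_rate=0.7, capacity=20)
        print(f"arrival rate {lam}: sent={sent}, dropped={dropped}")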
Interaction of Queues
Ideal Network Utilization
Causes/costs of congestion: scenario 1
two senders, two receivers
one router, infinite buffers
no retransmission
large delays when congested
maximum achievable throughput
Practical Performance
Ideal assumes infinite buffers and no overhead
Buffers are finite
Overheads occur in exchanging congestion
control messages
Effects of Congestion - No Control
Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of lost packet
Causes/costs of congestion: scenario 2
(λin = original data rate; λ'in = offered load including retransmissions; λout = goodput)
always: λout = λin (and λ'in = λin when no packets are lost)
"perfect" retransmission only when loss: λ'in > λout
retransmission of delayed (not lost) packets makes λ'in larger (than the perfect case) for the same λout
“costs” of congestion:
more work (retrans) for given “goodput”
unneeded retransmissions: link carries multiple copies of pkt
Causes/costs of congestion: scenario 3
four senders
multihop paths
timeout/retransmit
Q: what happens as λin and λ'in increase?
Causes/costs of congestion: scenario 3
Another “cost” of congestion:
when a packet is dropped, any upstream transmission
capacity used for that packet was wasted!
Mechanisms for Congestion Control
Backpressure
If node becomes congested it can slow down or
halt flow of packets from other nodes
May mean that other nodes have to apply
control on incoming packet rates
Propagates back to source
Can restrict to logical connections generating
most traffic in VC system.
Backpressure is of limited utility:
Used in connection-oriented networks that allow hop-by-hop
congestion control (e.g. X.25)
Not used in ATM
Choke Packet
Control packet
Generated at congested node
Sent to source node
e.g. ICMP source quench
From router or destination
Source cuts back until no more source quench messages arrive
Sent for every discarded packet, or when congestion is anticipated
Rather crude mechanism
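A toy sketch of the choke-packet idea follows; the cut-back factor and the recovery step are illustrative assumptions, since ICMP source quench itself only tells the source to slow down.

# Sketch: a source that cuts its rate on each choke packet and
# slowly recovers while no more choke packets arrive.

class ChokeControlledSource:
    def __init__(self, max_rate_pps: float):
        self.max_rate = max_rate_pps
        self.rate = max_rate_pps

    def on_choke_packet(self) -> None:
        # Cut back sharply; the congested node keeps sending chokes
        # until the source has slowed enough.
        self.rate = max(1.0, self.rate * 0.5)

    def on_quiet_interval(self) -> None:
        # No choke packets seen recently: creep back toward the maximum.
        self.rate = min(self.max_rate, self.rate * 1.1)

src = ChokeControlledSource(max_rate_pps=1000)
for _ in range(3):
    src.on_choke_packet()
print(f"rate after 3 choke packets: {src.rate:.0f} pps")   # 125 pps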
Implicit Congestion Signaling
Transmission delay may increase with
congestion
Packet may be discarded
Source can detect these as implicit indications of
congestion
Useful on connectionless (datagram) networks
e.g. IP based
(TCP includes congestion and flow control - see chapter 17)
Used in frame relay LAPF
Explicit Congestion Signaling
Network alerts end systems of increasing
congestion
End systems take steps to reduce offered load
Backwards
Congestion avoidance in opposite direction to packet
required (node receiving packet must slow sending)
Forwards
Congestion avoidance in same direction as packet
required (node receiving packet must tell peer to
slow down)
Used in ATM by ABR Service
Explicit Signaling Approaches
Binary
Bit in packet indicates to receiver that congestion
exists - reduce send rate.
Credit Based
Sender is given “credit” for number of octets or
packets it may send before it must stop and wait for
additional credit.
Rate Based
Sender may transmit at a rate up to some limit.
Rate can be reduced by control message.
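As a rough illustration of the credit-based approach above, the sketch below (class and method names are hypothetical) lets a sender transmit only while it holds octet credit and forces it to stop and wait otherwise.

class CreditSender:
    def __init__(self):
        self.credit_octets = 0

    def grant_credit(self, octets: int) -> None:
        self.credit_octets += octets          # credit message from the receiver

    def try_send(self, payload: bytes) -> bool:
        if len(payload) > self.credit_octets:
            return False                      # must stop and wait for more credit
        self.credit_octets -= len(payload)
        return True

s = CreditSender()
s.grant_credit(1024)
print(s.try_send(b"x" * 800))   # True
print(s.try_send(b"x" * 800))   # False: only 224 octets of credit remain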
Traffic Shaping
Smooth out traffic flow and reduce cell clumping
Used in rate based schemes.
Token bucket algorithm.
Token Bucket for Traffic Shaping
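A minimal token bucket sketch follows (parameter names and values are illustrative): tokens accumulate at a fixed rate up to the bucket depth, and a packet or cell may pass only if enough tokens are available, which smooths bursts and reduces clumping.

import time

class TokenBucket:
    def __init__(self, rate_tokens_per_s: float, depth_tokens: float):
        self.rate = rate_tokens_per_s
        self.depth = depth_tokens
        self.tokens = depth_tokens
        self.last = time.monotonic()

    def allow(self, cost_tokens: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost_tokens:
            self.tokens -= cost_tokens
            return True
        return False                      # caller must delay (shape) the packet

bucket = TokenBucket(rate_tokens_per_s=100, depth_tokens=10)
sent = sum(bucket.allow() for _ in range(50))
print(f"{sent} of 50 back-to-back packets passed immediately")  # roughly the bucket depth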
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
no explicit feedback from network
congestion inferred from end-system observed loss, delay
approach taken by TCP
This is an IMPLICIT approach
Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate sender should send at
This is an EXPLICIT approach
Case study: ATM ABR congestion control
ABR: available bit rate:
“elastic service”
if sender’s path “underloaded”: sender should use available bandwidth
if sender’s path congested: sender throttled to minimum guaranteed rate
RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches (“network-assisted”)
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact
EXPLICIT Case study: ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell
congested switch may lower ER value in cell
sender’s send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 in congested switch
if data cell preceding RM cell has EFCI set, sender sets CI bit in
returned RM cell
ABR Cell Rate Feedback Rules
IF CI == 1
    Reduce ACR to a value >= MCR
ELSE IF NI == 0
    Increase ACR to a value <= PCR
IF ACR > ER
    Set ACR = max(ER, MCR)
ACR = Allowed Cell Rate
MCR = Minimum Cell Rate
PCR = Peak Cell Rate
ER = Explicit Rate
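The rules above translate directly into code; in the sketch below the increase/decrease step sizes are illustrative assumptions, since the slide only gives the bounds (MCR, PCR, ER).

def update_acr(acr: float, ci: int, ni: int, er: float,
               mcr: float, pcr: float,
               decrease_factor: float = 0.5, increase_step: float = 10.0) -> float:
    """Return the new Allowed Cell Rate after processing a returned RM cell."""
    if ci == 1:
        acr = max(mcr, acr * decrease_factor)       # reduce ACR, but not below MCR
    elif ni == 0:
        acr = min(pcr, acr + increase_step)         # increase ACR, but not above PCR
    if acr > er:
        acr = max(er, mcr)                          # never exceed the explicit rate
    return acr

# Example: congestion indicated, switch advertised ER = 80 cells/s.
print(update_acr(acr=200, ci=1, ni=0, er=80, mcr=10, pcr=1000))   # 80.0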
If you want to know more
Implicit Case Study: TCP
Congestion Control
end-end control (no network assistance)
transmission rate limited by congestion window size, Congwin, over segments
w segments, each with MSS bytes, sent in one RTT:
throughput = (w * MSS) / RTT bytes/sec
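A quick worked example of the formula (window size, MSS, and RTT values are just illustrative):

def tcp_window_throughput(w_segments: int, mss_bytes: int, rtt_s: float) -> float:
    return w_segments * mss_bytes / rtt_s   # bytes per second

# e.g. Congwin of 10 segments, 1460-byte MSS, 100 ms RTT:
bps = tcp_window_throughput(10, 1460, 0.100) * 8
print(f"{bps/1e6:.2f} Mbit/s")   # 1.17 Mbit/s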
TCP congestion control:
“probing” for usable bandwidth:
ideally: transmit as fast as possible (Congwin as large as possible) without loss
increase Congwin until loss (congestion)
loss: decrease Congwin, then begin probing (increasing) again
two “phases”:
slow start
congestion avoidance
important variables:
Congwin
threshold: defines the boundary between the slow start phase and the congestion avoidance phase
TCP Slowstart
Slowstart algorithm:
initialize: Congwin = 1
for (each segment ACKed)
    Congwin++
until (loss event OR CongWin > threshold)
exponential increase (per RTT) in window size (not so slow!)
loss event: timeout (Tahoe TCP) and/or three duplicate ACKs (Reno TCP)
[Figure: Host A / Host B timeline over successive RTTs illustrating the slowstart algorithm]
TCP Congestion Avoidance
Congestion avoidance:
/* slowstart is over */
/* Congwin > threshold */
Until (loss event) {
    every w segments ACKed:
        Congwin++
}
threshold = Congwin/2
Congwin = 1
perform slowstart (1)
(1) TCP Reno skips slowstart (fast recovery) after three duplicate ACKs
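Putting the two phases together, the sketch below (Tahoe-style behavior at per-RTT granularity, with a made-up loss pattern) shows how Congwin evolves: exponential growth up to threshold, additive growth afterwards, and on loss threshold becomes Congwin/2 and slowstart restarts at 1.

def simulate_congwin(rtts, loss_rtts, initial_threshold=16):
    congwin, threshold = 1, initial_threshold
    trace = []
    for rtt in range(rtts):
        trace.append(congwin)
        if rtt in loss_rtts:                           # loss event (timeout)
            threshold = max(1, congwin // 2)
            congwin = 1                                # perform slowstart again
        elif congwin < threshold:
            congwin = min(congwin * 2, threshold)      # slowstart: exponential per RTT
        else:
            congwin += 1                               # congestion avoidance: +1 per RTT
    return trace

print(simulate_congwin(16, loss_rtts={8}))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 1, 2, 4, 8, 10, 11, 12]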
AIMD
TCP congestion avoidance (AIMD) :
Additive Increase, Multiplicative Decrease
increase window by 1 per RTT
decrease window by factor of 2 on loss event
TCP Fairness
Fairness goal: if N TCP sessions share same
bottleneck link, each should get 1/N of
link capacity
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
Why is TCP fair?
Two competing sessions:
Additive increase gives slope of 1, as throughput increases
multiplicative decrease decreases throughput proportionally
[Figure: Connection 1 throughput vs Connection 2 throughput, both up to R; the trajectory alternates congestion avoidance (additive increase) and loss (window decrease by factor of 2), converging toward the equal bandwidth share line]
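The convergence argument can be checked with a tiny simulation (synchronized losses and an increase of 1 per RTT are simplifying assumptions): two AIMD flows sharing capacity R drift toward the equal bandwidth share.

def aimd_two_flows(x1: float, x2: float, capacity: float, rounds: int):
    for _ in range(rounds):
        if x1 + x2 > capacity:        # shared bottleneck drops packets: both halve
            x1, x2 = x1 / 2, x2 / 2
        else:                         # additive increase of 1 per RTT for each flow
            x1, x2 = x1 + 1, x2 + 1
    return x1, x2

# Start far apart: connection 1 at 70, connection 2 at 10, capacity R = 100.
print(aimd_two_flows(70.0, 10.0, 100.0, rounds=200))
# Both rates end up close to the fair share R/2 = 50.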