Promoting the Use of End-to-End Congestion Control in the
Internet
Sally Floyd and Kevin Fall
Presented by
Scott McLaren
Overview
Introduction
The problem of unresponsive flows
Identifying flows to regulate
Alternate approaches
Conclusions and future work
Introduction
End-to-end congestion control mechanisms
of TCP are critical to the robustness of the
Internet
We can no longer rely on all end-nodes to
use congestion control for best-effort traffic
We also cannot rely on all developers to use
congestion control
The network itself must participate in controlling
resource utilization (congestion control)
Approach 1 – Packet scheduling in routers
to isolate flows
Per-flow scheduling to regulate bandwidth
Attempt max-min fairness
Approach 2 – Incentives for use of
congestion control
Restrict bandwidth of best-effort flows using a
large share of the bandwidth
Incentive for users, developers, and protocol
designers
Restrict bandwidth of flows without congestion
control
Approach 3 – financial incentives or
pricing mechanisms
Can network providers deploy effective
pricing structures fast enough to keep up with
growth of best-effort traffic?
Definitions
Best-effort – traffic with no reservations or delivery guarantees (UDP, etc.)
Unresponsive flows – flows that do not use end-to-end congestion control and that do not reduce their
load on the network when subjected to packet drops
TCP-friendly – a flow whose arrival rate does not
exceed the arrival rate of a conformant TCP connection
in the same circumstances
Goodput – the bandwidth delivered to the receiver,
excluding duplicate packets
Congestion collapse – when an increase in the
network load results in a decrease in the useful work
done by the network
Unfairness
TCP flows competing with unresponsive UDP
flows for scarce bandwidth
TCP flow reduces load, UDP uses the extra
bandwidth
Due to TCP’s congestion control algorithms,
throughput is proportional to 1/(roundtrip time)
With multiple congested gateways,
throughput is proportional to 1/√(number of congested gateways)
(a small sketch of these relations follows)
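As a rough illustration (not taken from the slides or the paper), the two proportionality relations above can be combined in a small sketch; the constant k and the example roundtrip times are assumed values for illustration only.

import math

def relative_tcp_throughput(rtt_s, congested_gateways, k=1.0):
    # throughput ~ k / (RTT * sqrt(number of congested gateways))
    return k / (rtt_s * math.sqrt(congested_gateways))

# A flow with half the roundtrip time gets roughly twice the bandwidth:
fast = relative_tcp_throughput(rtt_s=0.05, congested_gateways=1)
slow = relative_tcp_throughput(rtt_s=0.10, congested_gateways=1)
print(fast / slow)  # ~2.0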
Simulations
[Figures: 3 TCP flows and 1 UDP flow under FCFS scheduling; 3 TCP flows and 1 UDP flow under WRR scheduling]
Congestion Collapse
Classical – results from unnecessary
retransmissions of packets
First reported in mid 1980s, due to retransmission of
packets that were in transit or already received
Corrected by timer improvements and congestion control in
modern TCP implementations
From undeliverable packets – bandwidth wasted by
delivering packets that will eventually be dropped
Largest unresolved danger
Condition is not stable, it returns to normal once load is
reduced
Not an issue in a circuit-switched network – a sender can
only send when a path exists with the necessary bandwidth
Congestion Collapse Simulations
[Figures: bold top line shows aggregate goodput]
Other forms of congestion collapse
Fragmentation-based – network transmits fragments or cells that
are discarded because they can’t be reassembled into a valid
packet
ATM uses Early Packet Discard – when a cell is dropped, a
complete frame is dropped
A variant: packets received at the transport level are later
discarded before being used by the application
Occurs when web users abort transfers due to delays, then re-request the same data
Increased control traffic – an increasing fraction of the bytes
transmitted belong to control traffic, and a decreasing fraction of
the bytes are data for applications
Control packets – packet headers for small data packets, routing
updates, multicast join and prune messages, session messages,
DNS messages, etc.
Incentives to use Congestion Control
The current Internet “rewards” users that do not use
congestion control: they may get a larger fraction
of the bandwidth
Incentives cannot come from other users; they need
to come from the network
Two ways to prevent congestion collapse from
undeliverable packets
End-to-end congestion control, possibly by using incentives
at routers
Use of virtual-circuits, where packets only enter the
network if they can reach their final destination
Identifying flows to regulate
Designed to detect a small number of
misbehaving flows
Provides an incentive to use congestion control by
limiting the benefits of evading it
Issues not addressed
encryption and fragmentation make it more
difficult to classify packets into flows
IPsec could prevent routers from using source IP
addresses and port numbers
Identifying flows that are not TCP-friendly
No simple test for this
Use this equation to estimate the maximum arrival
rate of a conformant TCP connection in similar
circumstances (a numeric sketch follows):
T ≤ (1.5 · √(2/3) · B) / (R · √p)
T = maximum sending rate (Bps)
B = packet size (bytes)
R = roundtrip time (s)
p = packet drop rate
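A minimal numeric sketch of the bound above, using the variable names from this slide; the example packet size, roundtrip time, and drop rate are illustrative assumptions.

import math

def tcp_friendly_rate(B, R, p):
    # Maximum arrival rate T (bytes/s) of a conformant TCP connection:
    # T = 1.5 * sqrt(2/3) * B / (R * sqrt(p))
    return (1.5 * math.sqrt(2.0 / 3.0) * B) / (R * math.sqrt(p))

# e.g. 1500-byte packets, 100 ms roundtrip time, 1% drop rate:
print(tcp_friendly_rate(B=1500, R=0.1, p=0.01))  # roughly 1.8e5 bytes/s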
Identifying unresponsive flows
TCP-friendly test is based on TCP, routers may want to consider other
protocols
One approach is to verify that a high-bandwidth flow was responsive (its
arrival rate drops when its packet drop rate increases)
If the drop rate increases by a factor of x and the load does not decrease by a factor
close to √x or more, the flow is considered unresponsive
Requires estimates of a flow’s arrival rate and packet drop rate over several
long time intervals
Arrival rate estimated from packet drops from active queue management
Drop rate estimate using aggregate drop rate of the queue
Only applied to high bandwidth flows
Creates an incentive to start with an overly high initial bandwidth, so the flow gets a
larger share of bandwidth even after it is reduced
Example test: if the drop rate increases by a factor of four but the arrival rate
has not decreased to below 90% of its previous value, the flow is judged unresponsive
(a small sketch of this test follows)
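A minimal sketch of the example test above, assuming the router already has per-flow estimates of arrival rate and drop rate over two successive long intervals; the function name and example numbers are mine, the thresholds restate the slide's example.

def looks_unresponsive(arrival_prev, arrival_now, drop_prev, drop_now):
    # Example test: drop rate increased by a factor of four or more,
    # yet the arrival rate has not fallen below 90% of its previous value.
    if drop_prev <= 0:
        return False
    drop_increase = drop_now / drop_prev
    return drop_increase >= 4.0 and arrival_now > 0.9 * arrival_prev

# e.g. drop rate 1% -> 4%, arrival rate 100 KB/s -> 95 KB/s: flagged
print(looks_unresponsive(100e3, 95e3, 0.01, 0.04))  # True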
Identifying unresponsive flows
Other tests
Flows with an increasing or constant average
arrival rate, while the router drop rate is increasing
Flows whose average arrival rate tracks changes
in the router drop rate
Flows whose average arrival rate changes
independently of the router drop rate
Identifying flows using disproportionate
bandwidth
Router might restrict bandwidth of flows even
if they are using congestion control
Even a conformant TCP flow can do so if it
is the only TCP flow with sustained demand
uses larger windows
has a significantly smaller roundtrip time
uses larger packet sizes
Identifying flows using disproportionate
bandwidth
A flow uses a disproportionate share if
its fraction of the aggregate arrival rate > log(3n)/n, for n the number of flows
log(3n)/n is close to one (≈0.9) for n = 2
and grows only slowly as a multiple of 1/n
A flow has a high arrival rate relative to the
level of congestion if
its arrival rate > c/√p Bps, for some constant c and aggregate drop rate p
(a small sketch of these two tests follows)
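A minimal sketch of the two thresholds above; n is the number of flows and p the aggregate packet drop rate, while the constant c is left unspecified here, so the value below is only a placeholder.

import math

def uses_disproportionate_share(flow_fraction, n):
    # Flow's fraction of the aggregate arrival rate exceeds log(3n)/n.
    return flow_fraction > math.log(3 * n) / n

def high_rate_for_congestion(arrival_Bps, p, c=10000.0):
    # Arrival rate exceeds c / sqrt(p) Bps (c is a placeholder constant).
    return arrival_Bps > c / math.sqrt(p)

print(math.log(3 * 2) / 2)                  # ~0.9 for n = 2, as stated above
print(uses_disproportionate_share(0.5, 8))  # True: threshold is log(24)/8 ~ 0.40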
Alternate Approaches
Deploy per-flow scheduling mechanisms such as
round-robin (RR) or Fair Queueing (FQ) at all congested routers
Problem with per-flow scheduling
Encourages flows to make sure their per-flow queue never goes
empty, so they do not lose their turn at the scheduler
FCFS advantages
More efficient to implement – link speeds and active flows
are increasing
Allows packets arriving in a small burst to be transmitted in
a burst, instead of being spread out by the scheduler
(a minimal sketch contrasting the two disciplines follows)
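A minimal sketch (not from the paper) contrasting the two disciplines: FCFS keeps a single shared queue, while per-flow round-robin keeps a queue per flow and serves backlogged flows in turn, which is why a flow risks losing its turn only when its queue goes empty. Class names and structure are illustrative.

from collections import deque

class FCFS:
    # One shared queue; packets leave in arrival order.
    def __init__(self):
        self.q = deque()
    def enqueue(self, flow, pkt):
        self.q.append((flow, pkt))
    def dequeue(self):
        return self.q.popleft() if self.q else None

class RoundRobin:
    # One queue per flow; backlogged flows are served in turn.
    def __init__(self):
        self.queues = {}
        self.turns = deque()
    def enqueue(self, flow, pkt):
        q = self.queues.setdefault(flow, deque())
        if not q:
            self.turns.append(flow)      # newly backlogged flow gets a turn
        q.append(pkt)
    def dequeue(self):
        if not self.turns:
            return None
        flow = self.turns.popleft()
        q = self.queues[flow]
        pkt = q.popleft()
        if q:
            self.turns.append(flow)      # still backlogged, keeps its turn
        return (flow, pkt)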
Alternate Approaches
Compromises between FCFS and full per-flow scheduling
FCFS with differential dropping for flows using a
large fraction of the bandwidth
Scheduling mechanisms such as Class-Based
Queueing or Stochastic Fair Queueing that can
operate on levels other than a single flow or all of
the traffic
Max-min fairness restricts attention to each
component (link) in isolation
It would be better to consider the number of
congested links on each flow’s path
Alternate Approaches
Pricing structures
State required for this would be complex
Conclusions and future work
Arguments for
The need for end-to-end congestion control
The need for mechanisms in the network to detect
and restrict unresponsive or high-bandwidth best-effort flows
Future Work
A specific proposal for mechanisms to identify and
control unresponsive flows
The specifics are not as important as deployment
of a mechanism