Week 10 - Congestion


Data Communications: Congestion in Data Networks
What Is Congestion?
Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network
Congestion control aims to keep the number of packets below the level at which performance falls off dramatically
A data network is a network of queues
Generally, 80% utilization is the critical point
Finite queues mean data may be lost
Figure 23.5: Interaction of queues (incoming packets at each node)
Effects of Congestion
Packets arriving are stored at input buffers (not ATM)
Routing decision made
Packet moves to output buffer
Packets queued for output transmitted as fast as
possible
Statistical time division multiplexing
If packets arrive too fast to be routed, or to be
output, buffers will fill
Can discard packets
Can use flow control
Can propagate congestion through network
Ideal Performance
Top: as load increases, throughput increases proportionally
Middle: as load increases, delay increases
Bottom: power is the ratio of throughput to delay (a numeric sketch follows)
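Here is a minimal numeric sketch of the power ratio, assuming an M/M/1-style queue where delay grows as 1/(capacity - load); the model and constants are illustrative, not from the slides.

```python
# A minimal numeric sketch of "power" = throughput / delay,
# assuming an M/M/1-style delay model. Illustrative only.

CAPACITY = 1.0  # normalized link capacity

def delay(load):
    """Average delay grows without bound as load approaches capacity."""
    return 1.0 / (CAPACITY - load)

def power(load):
    """Power = throughput / delay; here throughput equals offered load."""
    return load / delay(load)  # = load * (CAPACITY - load)

# Power peaks and then falls toward zero as load keeps rising:
for load in (0.2, 0.5, 0.8, 0.95):
    print(f"load={load:.2f}  delay={delay(load):5.1f}  power={power(load):.3f}")
```

For this simple model, power peaks at 50% load; real networks peak at different utilizations (the slides cite roughly 80% as the critical point).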
Practical Performance
Ideal assumes infinite buffers and no overhead
Unfortunately, buffers are finite
Additional overhead occurs in exchanging
congestion control messages
Effects of Congestion: No Control
Congestion Control Objectives
In general, we want congestion control to:
Minimize discards
Maintain agreed QoS (if applicable)
Minimize the chance that one end user monopolizes the network at the expense of others
Be simple to implement, with little overhead on network or user
Create minimal additional traffic
Distribute resources fairly
Limit the spread of congestion
Operate effectively regardless of traffic flow
Have minimal impact on other systems
Minimize variance in QoS
Basic Mechanisms for Congestion Control
Open-Loop Congestion Control (relies on other layers for feedback and control)
Retransmission policy - a good policy can reduce congestion
Window policy - selective-reject is better than go-back-N; use a bigger window size
Acknowledgment policy - don't ack each packet individually
Discard policy - a good policy by routers may prevent congestion and at the same time not harm the integrity of the transmission
Admission policy - a QoS mechanism
Basic Mechanisms for Congestion Control
Closed-Loop Congestion Control
Backpressure - when a router is congested, it informs the upstream router to reduce its rate of outgoing packets
Choke packet - sent by a router to the source; similar to ICMP's source quench packet
Implicit signaling - infer congestion from delays in some other action
Explicit signaling - the router sends an explicit signal
Backward signaling - a bit is set in a packet moving in the direction opposite to the congestion
Forward signaling - a bit is set in a packet moving in the direction of the congestion; the receiver can use a policy such as slowing down acks to alleviate congestion
Basic Mechanisms for Congestion Control (visual examples)
Backpressure
If a node becomes congested, it can slow down or halt the flow of packets from other nodes
May mean that other nodes have to apply control on incoming packet rates
Propagates back to the source
Can be restricted to the logical connections generating the most traffic
Used in connection-oriented networks that allow hop-by-hop congestion control (e.g. X.25)
Not used in ATM or frame relay
Only recently developed for IPv6 (priority field)
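As a rough illustration, here is a minimal hop-by-hop backpressure sketch; the node names and queue thresholds are hypothetical, not from the slides.

```python
# A minimal hop-by-hop backpressure sketch (names and thresholds are
# hypothetical, not from the slides).
from collections import deque

HIGH_WATER = 8  # queue depth at which a node halts its upstream neighbor
LOW_WATER = 3   # queue depth at which it lets the neighbor resume

class Node:
    def __init__(self, name):
        self.name = name
        self.queue = deque()
        self.halt_output = False      # set by the downstream node
        self.upstream_halted = False  # whether we have halted our upstream

    def receive(self, packet):
        self.queue.append(packet)

    def apply_backpressure(self, upstream):
        # Congested: tell the previous hop to stop; if that hop fills up
        # too, the restriction propagates back toward the source.
        if len(self.queue) >= HIGH_WATER and not self.upstream_halted:
            upstream.halt_output = True
            self.upstream_halted = True
        elif len(self.queue) <= LOW_WATER and self.upstream_halted:
            upstream.halt_output = False
            self.upstream_halted = False

a, b = Node("A"), Node("B")
for i in range(9):
    b.receive(i)          # B's queue crosses the high-water mark
b.apply_backpressure(a)
print(a.halt_output)      # True: A must stop sending to B
```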
Choke Packet
A control packet
Generated at the congested node
Sent to the source node
e.g. ICMP source quench
From router or destination
Source cuts back until it stops receiving source quench messages
Sent for every discarded packet, or in anticipation of congestion
A rather crude mechanism
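A minimal sketch of how a source might react to choke packets, in the style of ICMP source quench; the halving and ramp-up constants are illustrative assumptions.

```python
# A minimal sketch of a source reacting to choke packets (ICMP source
# quench style). The halving/recovery constants are illustrative.
class Source:
    def __init__(self, max_rate):
        self.max_rate = max_rate
        self.rate = max_rate  # packets per second

    def on_choke_packet(self):
        # Cut back each time a choke packet arrives.
        self.rate = max(1.0, self.rate / 2)

    def on_quiet_interval(self):
        # No choke packets for a while: cautiously ramp back up.
        self.rate = min(self.max_rate, self.rate * 1.1)

s = Source(max_rate=64.0)
s.on_choke_packet()
s.on_choke_packet()
print(s.rate)  # 16.0: halved once per choke packet received
```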
Implicit Congestion Signaling
Transmission delay may increase with congestion
Packets may be discarded
Source can detect these as implicit indications of
congestion (source is responsible, not network)
Useful on connectionless (datagram) networks
e.g. IP based (TCP includes congestion and flow
control)
Used in frame relay LAPF
Explicit Congestion Signaling
Network alerts end systems of increasing
congestion
Used on connection-oriented networks
End systems take steps to reduce offered load
Backwards
Congestion avoidance info sent in opposite direction of
packet travel
Forwards
Congestion avoidance info sent in same direction as
packet travel - when end system receives info, either
sends it back to source or hands it to higher layer to
take action
Categories of Explicit Signaling
Binary
A bit set in a packet indicates congestion
Credit-based
Indicates how many packets the source may send
Common for end-to-end flow control (a sketch follows this list)
Rate-based
The source may transmit data at a rate up to the set limit
Any node along the path of the connection can reduce the data rate limit in a control message to the source
e.g. ATM
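Here is a minimal credit-based flow control sketch: the receiver grants credits, and the sender may transmit one packet per credit. The class and method names are invented for the example.

```python
# A minimal credit-based flow control sketch (illustrative names).
class CreditSender:
    def __init__(self):
        self.credits = 0

    def grant(self, n):
        """Receiver grants permission for n more packets."""
        self.credits += n

    def try_send(self, packet, link):
        if self.credits > 0:
            self.credits -= 1
            link.append(packet)  # transmit
            return True
        return False             # must wait for more credit

link = []
s = CreditSender()
s.grant(2)
print(s.try_send("p1", link), s.try_send("p2", link), s.try_send("p3", link))
# True True False: the third packet must wait for another credit
```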
How Does TCP Handle / Avoid Congestion? (details in TDC 365 and TDC 463)
To handle: TCP has a sender window size. The sender window is the minimum of the receiver window size and the network congestion window size.
To avoid: TCP can use Slow Start and Additive Increase - at the beginning, TCP sets the congestion window to one maximum segment size, then increases the window with each ack (doubling per round trip); once a threshold is reached, the window grows by one segment per round trip.
Can also use Multiplicative Decrease - after a timeout, the threshold is set to half the current congestion window, and the congestion window is reset to one segment (then slow start).
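A minimal Tahoe-style sketch of these rules, counting the window in segments; real TCP works in bytes with many more details (see TDC 365/463).

```python
# A minimal sketch of slow start, additive increase, and multiplicative
# decrease, in units of segments. Illustrative only.
MSS = 1  # count the window in segments

def on_ack(cwnd, ssthresh):
    """Called once per acknowledged segment."""
    if cwnd < ssthresh:
        return cwnd + MSS, ssthresh     # slow start: +1 per ack (doubles per RTT)
    return cwnd + MSS / cwnd, ssthresh  # additive increase: about +1 per RTT

def on_timeout(cwnd, ssthresh):
    """Multiplicative decrease, then start over with slow start."""
    return MSS, max(2 * MSS, cwnd / 2)  # cwnd = 1 segment, threshold = half of cwnd

cwnd, ssthresh = 1.0, 16.0
for _ in range(40):                          # simulate a run of acks
    cwnd, ssthresh = on_ack(cwnd, ssthresh)
cwnd, ssthresh = on_timeout(cwnd, ssthresh)  # a loss event
print(cwnd, ssthresh)                        # window back to 1, threshold halved
```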
Figure 23.8: Multiplicative decrease
How Does Frame Relay Handle Congestion?
Connection management, coupled with:
Discard strategy
Explicit signaling
Implicit signaling
In more detail:
Connection Management
Before a frame relay network allows a user to transmit data, the user and the network agree on a connection
Some call this an SLA (service level agreement); frame relay calls it the CIR (committed information rate)
Committed burst size - the maximum amount of data the network agrees to transfer under normal conditions
Excess burst size - the maximum amount of data allowed in excess of the committed burst size
Different frame relay companies have different agreements
Connection Management
What happens if you exceed your CIR and the network experiences congestion?
Frame relay may start discarding your frames (Discard Eligible bit = 1) - the discard strategy
Does frame relay tell you that your frames are being tossed?
No. Frame relay assumes a higher-layer protocol (such as TCP) will notice lost or missing frames
Frame relay could discard arbitrarily with no regard for the source, but then there would be no reward for restraint, and end systems would transmit as fast as possible
The CIR is not 100% guaranteed, but the network tries hard
The aggregate CIR should not exceed the physical data rate
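A minimal sketch of how a frame might be classified over one measurement interval using the committed burst size (Bc) and excess burst size (Be); this is a simplified textbook description, and the numbers are illustrative.

```python
# A minimal frame relay marking sketch over one measurement interval,
# using committed burst size (Bc) and excess burst size (Be). Simplified.
def classify_frame(bits_sent_this_interval, frame_bits, Bc, Be):
    total = bits_sent_this_interval + frame_bits
    if total <= Bc:
        return "forward"        # within committed burst: deliver normally
    if total <= Bc + Be:
        return "forward, DE=1"  # excess burst: mark Discard Eligible
    return "discard"            # beyond Bc+Be: drop immediately

# Example: Bc = 64 kbits, Be = 32 kbits per interval
print(classify_frame(60_000, 8_000, 64_000, 32_000))  # forward, DE=1
```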
Figure 23.1: Traffic descriptors (relationship among congestion parameters)
Explicit Signaling
Network alerts end systems of growing congestion
Backward explicit congestion notification (BECN)
Notifies the user that congestion avoidance procedures should be initiated for traffic in the opposite direction; simpler
Forward explicit congestion notification (FECN)
Notification goes forward, so the end user has to somehow get the signal back to the other end to tell it to slow down
Frame handler monitors its queues
May notify some or all logical connections
User response: reduce rate
Figure 23.9: BECN
Figure 23.10: FECN
Figure 23.11: Four cases of congestion
Implicit Signaling
Implicit Congestion Notification - Telecom Definition
In frame relay, inference by user equipment that
congestion has occurred in the network. The inference is
triggered by realization of the receiving frame relay
access device (FRAD) of transmission delays. Based on
block, frame or packet sequence numbers, another
protocol may recognize that one or more frames have
been lost in transit. Control mechanisms at the upper
protocol layers of the end devices then deal with frame
loss by requesting retransmissions.
From: YourDictionary.com
What About ATM?
High speed, small cell size, limited overhead bits
Requirements (difficult to meet):
Majority of traffic not amenable to flow control
Feedback is slow, because transmission time is small compared with propagation delay
Wide range of application demands
Different traffic patterns
Different network services
High-speed switching and transmission increase volatility
Latency/Speed Effects
Consider a typical ATM transmission speed of 150 Mbps
~2.8 x 10^-6 seconds to insert a single cell
Time to traverse the network depends on propagation delay and switching delay
Assume propagation at two-thirds the speed of light
If source and destination are on opposite sides of the USA, propagation time ~ 48 x 10^-3 seconds
Given implicit congestion control, by the time a dropped-cell notification has reached the source, 7.2 x 10^6 bits have been transmitted
So this is not a good strategy for ATM
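A quick arithmetic check of the slide's numbers:

```python
# Checking the numbers on this slide.
RATE = 150e6         # bits per second
CELL_BITS = 53 * 8   # an ATM cell is 53 octets = 424 bits

insertion_time = CELL_BITS / RATE  # time to insert one cell onto the link
prop_time = 48e-3                  # coast-to-coast propagation (from the slide)
bits_in_flight = RATE * prop_time  # bits sent before feedback can arrive

print(f"{insertion_time:.2e} s per cell")  # ~2.83e-06 s
print(f"{bits_in_flight:.2e} bits")        # 7.20e+06 bits
```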
Cell Delay Variation
For ATM voice/video, the data is a stream of cells
Delay across the network must be short
AND the rate of delivery must be constant
There will always be some variation in transit
Solution: delay cell delivery so that a constant bit rate can be maintained to the application
Time Re-assembly of CBR Cells
D(i) = end-to-end delay of the i-th cell
V(0) = estimate of the amount of cell delay variation that an application can tolerate
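A minimal playback-buffer sketch of this idea: hold each arriving cell until its scheduled playout time so the application sees a constant rate. Variable names are illustrative; V0 plays the role of V(0) above.

```python
# A minimal playback-buffer sketch for CBR re-assembly (illustrative).
def playout_time(i, t0, V0, cell_interval):
    """Cell i is delivered at a fixed offset from the first cell's start.

    t0: arrival time of cell 0; V0: tolerable cell delay variation,
    used as the initial buffering delay; cell_interval: CBR spacing.
    """
    return t0 + V0 + i * cell_interval

def handle_cell(i, arrival, t0, V0, cell_interval):
    # A cell arriving before its playout time waits in the buffer;
    # a cell arriving after it has missed its slot and is discarded.
    t_play = playout_time(i, t0, V0, cell_interval)
    return ("buffer until", t_play) if arrival <= t_play else ("late, discard", t_play)

print(handle_cell(i=3, arrival=3.9, t0=0.0, V0=1.0, cell_interval=1.0))
# ('buffer until', 4.0)
```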
Various Network Contributions
to Cell Delay Variation
Packet switched networks
Queuing delays
Routing decision time
Frame relay
As above but to lesser extent
ATM
Less than frame relay
ATM protocol designed to minimize processing
overheads at switches
ATM switches have very high throughput
Only noticeable delay is from congestion
Must not accept load that causes congestion
Cell Delay Variation
At The UNI in ATM
Even if application produces data at fixed rate,
processing at (potentially) three layers of ATM
causes delay
Interleaving cells from different connections
Operation and maintenance signals need to be
interleaved
If using synchronous digital hierarchy frames, potential
delays here are inserted at physical layer
Cannot predict these delays
(See figure next slide)
Origins of Cell Delay Variation
Traffic and Congestion
Control Objectives for ATM
ATM layer traffic and congestion control should
support QoS classes for all foreseeable network
services
ATM layer traffic and congestion control should
not rely on AAL protocols that are network
specific, nor on higher level application specific
protocols
Any traffic and congestion controls should minimize network and end-to-end system complexity
Traffic Management and Congestion Control Techniques
ITU-T and the ATM Forum have defined a range of traffic management functions to maintain the QoS of ATM connections:
Resource management using virtual paths - separate traffic flows according to service characteristics (1)
Connection admission control (2)
Usage parameter control (3)
Traffic shaping (4)
Let’s examine these in more detail
Resource Management Using Virtual Paths (1)
ATM network can use the virtual path to group
similar virtual channels
Connection Admission Control (2)
A good first line of defense
User specifies traffic characteristics for a new connection (VCC or VPC) by selecting a QoS
Network accepts the connection only if it can meet the demand
Traffic contract:
Peak cell rate - upper bound; CBR and VBR
Cell delay variation - CBR and VBR
Sustainable cell rate - average rate; VBR
Burst tolerance - VBR
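A minimal admission sketch using the simplest (and most conservative) rule, peak-rate allocation: admit a connection only if the sum of declared peak cell rates still fits on the link. Real switches use more sophisticated tests; the numbers here are illustrative.

```python
# A minimal connection admission control sketch: peak-rate allocation.
def admit(existing_pcrs, new_pcr, link_capacity):
    """existing_pcrs: peak cell rates of admitted connections (cells/s)."""
    return sum(existing_pcrs) + new_pcr <= link_capacity

# 353,207 cells/s is roughly the cell rate of a 150 Mbps SONET payload.
print(admit([100_000, 150_000], 80_000, 353_207))  # True: the new VCC fits
```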
Usage Parameter Control (3)
Monitor the established connection to ensure traffic conforms to the contract
Protects network resources from overload by a single connection
Done on both VCCs and VPCs
Control of peak cell rate and cell delay variation, or
Control of sustainable cell rate and burst tolerance
Cells that do not conform to the traffic contract are discarded
This is called traffic policing
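A minimal peak-cell-rate policing sketch based on the generic cell rate algorithm (GCRA) in its virtual-scheduling form, which is how ATM usage parameter control is usually specified; the parameter values below are illustrative.

```python
# A minimal GCRA (virtual scheduling) policing sketch. T is the expected
# inter-cell spacing (1/PCR); tau is the tolerance (CDVT).
class Policer:
    def __init__(self, T, tau):
        self.T, self.tau = T, tau
        self.tat = 0.0  # theoretical arrival time of the next cell

    def conforms(self, arrival):
        if arrival < self.tat - self.tau:
            return False                        # too early: tag or discard
        self.tat = max(arrival, self.tat) + self.T
        return True                             # conforming cell

p = Policer(T=1.0, tau=0.5)
print([p.conforms(t) for t in (0.0, 0.4, 1.2, 2.2)])  # [True, False, True, True]
```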
Traffic Shaping (4)
Smooth out the traffic flow and reduce cell clumping
Token bucket and leaky bucket are examples of traffic shaping
A token bucket allows bursts, while a leaky bucket maintains an even flow
(See figures next slides)
Figure 23.18: Token bucket
Figure 23.16: Leaky bucket
Figure 23.17: Leaky bucket implementation
The leaky bucket keeps an average flow moving. If the queue overflows, packets are discarded. Unlike the token bucket, there is no credit for idle periods.
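A minimal sketch of both shapers side by side, with illustrative rates and sizes; note the token bucket earns credit while idle and the leaky bucket does not.

```python
# Minimal token bucket and leaky bucket sketches (illustrative).
from collections import deque

class TokenBucket:
    """Allows bursts up to `capacity`; earns credit while idle."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, now, packets=1):
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packets:
            self.tokens -= packets
            return True   # burst permitted while tokens last
        return False

class LeakyBucket:
    """Emits at a fixed rate regardless of arrival pattern; no credit accrues."""
    def __init__(self, depth):
        self.queue, self.depth = deque(), depth

    def arrive(self, packet):
        if len(self.queue) < self.depth:
            self.queue.append(packet)
            return True
        return False      # queue overflow: discard

    def tick(self):
        """Called once per output slot: send exactly one queued packet."""
        return self.queue.popleft() if self.queue else None
```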
ATM’s Real-time Traffic Management
QoS for CBR and rt-VBR is based on a traffic contract (connection admission control) and UPC (usage parameter control)
There is no feedback in these systems; nonconforming cells are simply discarded. This is called open-loop control.
This is not used for ABR or UBR traffic.
Non-real-time Traffic Management
Some applications (Web, file transfer) do not have well-defined traffic characteristics
Best effort:
Allow these applications to share unused capacity
If congestion builds, cells are dropped (e.g. UBR)
Closed-loop control:
ABR connections share available capacity
Share varies between the minimum cell rate (MCR) and the peak cell rate (PCR)
ABR flow limited to available capacity by feedback
Buffers absorb excess traffic during feedback delay
Low cell loss
Feedback Mechanisms
Transmission rate characteristics:
Allowed cell rate
Minimum cell rate
Peak cell rate
Initial cell rate
Start with ACR=ICR
Adjust ACR based on feedback from network
Resource management cells
Congestion indication (CI) bit
No increase (NI) bit
Explicit cell rate (ER) field
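A minimal sketch of how an ABR source might adjust its allowed cell rate from a returned RM cell's CI/NI bits and ER field; the increase and decrease factors are illustrative constants, not values from the standard.

```python
# A minimal ABR rate-adjustment sketch (constants are illustrative).
def adjust_acr(acr, mcr, pcr, ci, ni, er):
    if ci:                   # congestion indication: multiplicative cut
        acr = acr * 0.875
    elif not ni:             # neither CI nor NI set: may increase additively
        acr = acr + 0.05 * pcr
    acr = min(acr, er, pcr)  # never above the explicit rate or PCR
    return max(acr, mcr)     # never below the minimum cell rate

print(adjust_acr(acr=1000.0, mcr=100.0, pcr=4000.0, ci=False, ni=False, er=3000.0))
# 1200.0: no congestion reported, so the source speeds up
```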
Variations in Allowed Cell Rate
Cell Flow (RM = resource management)
23.7 Integrated Services
Integrated Services (IntServ) is a model used to provide QoS in the Internet at the IP layer.
IntServ is a flow-based model, in that a user needs to create a flow, or virtual circuit, between source and destination.
But IP is connectionless. How do you create a connection? Use RSVP.
Figure 23.19: Path messages
Path messages are sent from the sender (S1) to all receivers (multiple if multicast). This establishes the path.
Figure 23.20: Resv messages
Once the path is set, receivers return Resv messages. Note how reservations are merged (next slide).
Figure 23.21: Reservation merging
R3 takes the larger of the two reservations and sends that upstream.
Figure 23.22: Reservation styles
Wild card filter - the router creates a single reservation for all the senders, based on the largest request.
Fixed filter - the router creates a distinct reservation for each flow.
Shared explicit - the router creates a single reservation which can be shared by a set of flows.
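A minimal sketch of what a router reserves under each style; the bandwidth numbers are illustrative, and the difference between wildcard and shared explicit is which senders the single reservation covers (all senders versus an explicit set).

```python
# A minimal reservation-style sketch (illustrative Mbps figures).
def merged_reservation(style, requests):
    if style == "wildcard":         # one reservation for all senders, largest wins
        return max(requests.values())
    if style == "fixed":            # a distinct reservation per flow
        return dict(requests)
    if style == "shared_explicit":  # one reservation shared by a named set of flows
        return max(requests.values())
    raise ValueError(style)

print(merged_reservation("wildcard", {"S1": 3, "S2": 2}))  # 3: the larger request
print(merged_reservation("fixed", {"S1": 3, "S2": 2}))     # {'S1': 3, 'S2': 2}
```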
23.8 Differentiated Services
An alternative to Integrated Services.
Produced by the IETF to create a class-based QoS model
for IP.
Beyond the scope of this class.
Figure 23.24: Traffic conditioner
The meter checks whether the incoming flow matches the negotiated traffic profile. The marker can re-mark a packet that is using best-effort delivery, or down-mark a packet, based on info received from the meter. The shaper uses the info received from the meter to reshape the traffic. The dropper works like a shaper with no buffer, discarding packets if the flow severely violates the negotiated profile.
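A minimal sketch of the conditioner's decision logic; the rate thresholds separating marking, shaping, and dropping are invented for the example, not part of the DiffServ specification.

```python
# A minimal DiffServ-style conditioner sketch: meter the flow, then
# mark, shape, or drop. Thresholds are illustrative assumptions.
def condition(measured_rate, profile_rate):
    if measured_rate <= profile_rate:
        return "forward"                    # in profile
    if measured_rate <= 1.5 * profile_rate:
        return "down-mark"                  # out of profile: lower-priority marking
    if measured_rate <= 2.0 * profile_rate:
        return "shape"                      # buffer to smooth the excess
    return "drop"                           # severe violation: the dropper discards

print([condition(r, 100) for r in (80, 120, 180, 300)])
# ['forward', 'down-mark', 'shape', 'drop']
```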