More Traffic Engineering
• TE with MPLS
• TE in the inter-domain
Practical QoS routing - CSPF
• Some form of QoS routing is actually
deployed
– TE extensions for IGP and Constrained SPF
(CSPF)
– Find a path to a single destination
• Not a shortest path tree to all destinations like SPF
– Take into account
• available bandwidth on links
• Link attributes
• Administrative weight
– These are advertised by TE extensions to the
IGP
TE info advertised by the IGP
• Bandwidth
– This is the available bandwidth on a link
• Link attributes
– Bitmap of 32 bits to encode “link affinity”
– I can use this to avoid certain paths/links, e.g., if they
belong to the same SRLG (Shared Risk Link Group)
– Helps achieve path diversity
• Administrative weight
– It is a link cost specific to TE
– Different from the IGP link cost
• IGP cost usually reflects link bandwidth
– Can be used to encode TE specific information
• Link delays for example
• It is still static – configured by the administrator
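To make CSPF concrete, here is a minimal Python sketch under illustrative assumptions (the graph encoding and field names are made up, not any router's implementation): prune links that fail the bandwidth or affinity constraints, then run plain Dijkstra on what remains, stopping at the single destination.

```python
import heapq

def cspf(graph, src, dst, bw_needed, exclude_affinity=0):
    """Constrained SPF sketch: prune infeasible links, then Dijkstra.

    graph: {node: [(neighbor, te_weight, avail_bw, affinity_bits), ...]}
    Links are skipped if they lack bandwidth or match excluded affinities.
    """
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:                         # single destination, not a full SPF tree
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w, avail_bw, affinity in graph.get(u, []):
            if avail_bw < bw_needed:         # bandwidth constraint
                continue
            if affinity & exclude_affinity:  # link-affinity constraint
                continue
            nd = d + w                       # w is the TE administrative weight
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in prev and dst != src:
        return None                          # no feasible path
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return list(reversed(path))
```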
Other TE knobs/functions
• Re-optimization
– After I compute a route for an LSP I can check for
better ones
– Periodically, manually or triggered by an event
• Tunnel priorities and pre-emption
– Each LSP has a 0-7 priority
– When establishing a higher priority LSP it is possible to
preempt an existing lower priority LSP
• Load sharing
– Can split the load arbitrarily among multiple LSPs to
the same destination
• “auto-bandwidth”
– Monitor the bandwidth used by the LSP dynamically
– Adjust the path/reservation of the LSP accordingly
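A toy sketch of the auto-bandwidth idea, assuming hypothetical hooks measure_lsp_rate() and resize_lsp() in place of real counter sampling and RSVP-TE re-signaling:

```python
def auto_bandwidth(lsp, measure_lsp_rate, resize_lsp,
                   samples=12, headroom=1.1, threshold=0.1):
    """Toy auto-bandwidth step: sample the LSP's observed rate, then
    adjust the reservation if it drifted enough.

    measure_lsp_rate() and resize_lsp() are hypothetical hooks; a real
    router samples interface counters and re-signals the LSP.
    """
    rates = [measure_lsp_rate(lsp) for _ in range(samples)]
    target = max(rates) * headroom        # reserve for the observed peak
    current = lsp["reserved_bw"]
    # Only adjust if the change is significant, to avoid churn.
    if abs(target - current) > threshold * current:
        resize_lsp(lsp, target)           # may trigger a new CSPF computation
        lsp["reserved_bw"] = target
```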
Sending traffic into an LSP
• The ingress router can send traffic into the LSP
– Include it in its SPF computation; the LSP can be used as a nexthop for traffic to the destination
– Can configure a “cost” for the tunnel
• Use the LSP if the IGP path fails
• Or vice versa
• Could even load share between an LSP and an IGP path
• “Forwarding adjacency”
– Advertise an LSP in IGP as if it were a regular link
– It will be considered when computing shortest paths
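A small sketch of the ingress choice described above, with illustrative inputs: the configured tunnel cost competes with the native IGP cost, each path can back up the other, and a tie permits load sharing.

```python
def pick_nexthop(igp_cost, tunnel_cost, igp_up=True, tunnel_up=True):
    """Sketch of the ingress choice between the native IGP path and an
    LSP tunnel, each with a configured cost. Returns the name(s) of the
    best choice; equal costs return both, allowing load sharing.
    """
    candidates = []
    if igp_up:
        candidates.append(("igp", igp_cost))
    if tunnel_up:
        candidates.append(("lsp", tunnel_cost))
    if not candidates:
        return None                       # both paths down
    best = min(cost for _, cost in candidates)
    return [name for name, cost in candidates if cost == best]
```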
Off-line MPLS TE
• Compute best LSP paths for the whole network
– Signal them using RSVP
• When something changes
– Re-compute all the LSPs again
• Off-line allows for better control
– Compute best LSP paths for the whole network
• No oscillations
– Global view can optimize resource usage
• But can not respond to sudden traffic changes
– Attacks
– Flash crowds
On-line MPLS TE
• Full mesh of LSPs with on-line optimization
– Compute the LSPs independently in each ingress
– Will have to be done in some automated fashion
• Periodic re-optimization
• Triggered re-optimization
• Auto-bandwidth
– May consume a lot of CPU in a large full mesh
• 10,000 – 50,000 LSPs to re-optimize
– Can have the oscillations of IGP routing
• Re-optimization is a local decision at ingress nodes
• Can make conflicting decisions
– There can be setup collisions
• Two ingress routers attempt to set up an LSP on a link at the
same time and fail
Improved on-line MPLS TE
• Minimum interference routing [Kar, Kodialam, Lakshman,
2000]
– Avoid loading links that are on multiple shortest paths
– Similar to trunk reservation
• MATE [Elwalid et al. 2001]
– Multiple LSPs for each source/destination pair
– Split flow between a source/destination pair among these LSPs
• Goal: minimize the cost of routing
• Problem: granularity of flow splitting
– Each source decides independently using an iterative gradient
descent method
– Feedback from the network using probe packets
• Measure delay over an LSP
• TeXCP [Kandula et al., 2005]
– Similar to the above
– Monitor the link utilization with explicit feedback
• Problem: Needs support from the network
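A minimal sketch of the MATE-style adjustment, under heavy simplification (made-up step size, delay as the only cost signal): each ingress independently shifts load away from LSPs whose measured delay exceeds its traffic-weighted average.

```python
def mate_step(splits, measure_delay, step=0.05):
    """One iterative-adjustment step in the spirit of MATE: shift load
    from high-delay LSPs toward low-delay ones (gradient descent on a
    delay-based cost). 'splits' maps LSP id -> traffic fraction;
    measure_delay(lsp) stands in for probe-based feedback.
    """
    delays = {lsp: measure_delay(lsp) for lsp in splits}
    avg = sum(delays[l] * splits[l] for l in splits)   # traffic-weighted mean
    for lsp in splits:
        # Move load away from LSPs whose delay exceeds the weighted mean.
        splits[lsp] -= step * (delays[lsp] - avg) * splits[lsp]
        splits[lsp] = max(splits[lsp], 0.0)
    total = sum(splits.values())
    for lsp in splits:                    # renormalize to a valid split
        splits[lsp] /= total
    return splits
```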
Limitations of On-line solutions
• Consider only point-to-point flows
– What happens with multicast traffic?
• No failures
– What happens when links fail?
• Effects of BGP/hot-potato?
– Can have a dramatic effect on traffic inside the
network
• Must avoid oscillations
– Adjustments to the flow splits must be done
carefully
Other hard questions
• How does my optimal routing depend on the
traffic matrix estimation?
– If my estimate is wrong, how bad is my routing?
– Are there any “universally” good routings?
• Overlays?
– Traffic may flow very differently from what my routing
tables suggest
• BGP effects
– Hot potato mostly
• Security?
– Can I adapt traffic engineering fast enough to react to
DoS attacks?
When links fail?
• All this TE is good when links do not fail
• What happens in failures?
• MPLS TE
– Failures are handled by fast reroute
– Some minimal optimization when determining backup LSPs
– Global re-optimization if the failure lasts for too long
• IGP weight optimization
– It is possible to optimize weights to take into account single link
failures
• Other approaches:
– Multi-topology traffic engineering
– Optimize the weights in the topologies that are used for protecting
from failures
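A sketch of how a weight setting can be scored against single-link failures; shortest_path_load() is a hypothetical helper that routes all demands on shortest paths for a given topology and returns per-link load:

```python
def worst_failure_utilization(links, weights, demands, shortest_path_load):
    """Sketch: score an IGP weight setting by the worst link utilization
    over all single-link failure scenarios. 'links' maps link -> capacity;
    shortest_path_load(links, weights, demands) is a hypothetical helper
    returning {link: load}. Weight optimization minimizes this score.
    """
    worst = 0.0
    for failed in links:
        # Remove one link and re-route everything on shortest paths.
        surviving = {l: c for l, c in links.items() if l != failed}
        load = shortest_path_load(surviving, weights, demands)
        for l, traffic in load.items():
            worst = max(worst, traffic / surviving[l])
    return worst
```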
More IGP TE
• Can take BGP effects into account
– Consider them when computing how traffic is
routed after each weight setting
• It is possible to extend this in a multi-topology model
– Optimize the different backup topologies
– Achieve optimal traffic engineering even after
failures
• And very fast repair times
– Active research area
Diff-Serv and TE and MPLS
• Diff-Serv defines multiple classes of service
• Can reserve resources for each class of service
– Not per flow
• Different trade-off
– Better scalability
– Worse QoS guarantees for my traffic
• In diff-serv each packet is marked
– IP packet
• 6 DSCP bits in the IP header
– MPLS packet
• 3 EXP bits in the MPLS header
Issues
• 6 DSCP bits do not fit in the 3 EXP bits
– Can define DSCP -> EXP mappings
– Can have different LSPs for each DSCP value
• L-LSP mode
• Available bandwidth is now advertised per
EXP class
– Different amount of bandwidth per class
available on each link
• CSPF can take this into account
• Admission control also takes this into
account
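An illustrative DSCP-to-EXP mapping for the E-LSP case; the class groupings below are one plausible operator policy, not a standard:

```python
# Illustrative DSCP -> EXP mapping for E-LSP mode: the 6-bit DSCP space
# is collapsed onto the 3 EXP bits. These groupings are an assumption
# for the example, not a standardized mapping.
DSCP_TO_EXP = {
    46: 5,   # EF (voice)
    34: 4,   # AF41 (video)
    26: 3,   # AF31
    18: 2,   # AF21
    10: 1,   # AF11
    0:  0,   # best effort
}

def exp_for_push(dscp):
    """EXP bits to write when pushing an MPLS label onto an IP packet.
    Unknown DSCPs fall back to best effort; in 'pipe' mode the inner
    DSCP is left untouched regardless of what we write here.
    """
    return DSCP_TO_EXP.get(dscp, 0)
```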
How to combine IP and MPLS
diff-serv?
• Tunneling
– IP inside MPLS
– MPLS inside MPLS
– Each level has its own EXP/DSCP bits
– How to set them?
• What are the new EXP bits when I do a push?
• What are the old EXP bits when I do a pop?
• Different models
– Uniform: all levels have the same values
• Single ISP, single DS model
– Pipe: different levels have their own values
• Do not touch the values at the lower levels
• ISP provides transit to traffic with different DS models
How to tunnel Diff-Serv traffic
over MPLS
• E-LSP:
– Can combine different Diff-serv levels of traffic into a
single LSP
– The service given to each packet is determined by the
EXP bits in the MPLS header
• Can have only 8 Diff-Serv levels
• L-LSP:
– Each LSP carries only a single Diff-Serv level
– The service given to each packet is determined by the
MPLS label
– Can have more precise control of the QoS given to the
traffic
– Worse scaling though, will need many more LSPs
How to select where to send QoS
traffic?
• How do I choose where to send traffic?
– May have data traffic and VoIP traffic
– VoIP traffic should go over the DS LSP
• Use the FEC
– Define a filter that will cover the traffic
– May be able to include DSCP values in the
filter
• Based on its destination
– Need to design my network so that I can
distinguish traffic
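A sketch of this FEC-style classification at the ingress; the field names and the string prefix match are simplified stand-ins for a real forwarding table:

```python
def classify_to_lsp(pkt, filters):
    """Sketch of FEC-style classification at the ingress: the first
    filter whose fields all match decides which LSP the packet enters.
    Field names (dst_prefix, dscp) are illustrative; the string prefix
    test stands in for a real longest-prefix match.
    """
    for f in filters:
        if "dst_prefix" in f and not pkt["dst"].startswith(f["dst_prefix"]):
            continue
        if "dscp" in f and pkt["dscp"] != f["dscp"]:
            continue
        return f["lsp"]
    return "default-igp-path"

# Example: send EF-marked (VoIP) traffic into the DS LSP, rest by prefix.
filters = [
    {"dscp": 46, "lsp": "voice-lsp"},
    {"dst_prefix": "10.1.", "lsp": "data-lsp"},
]
```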
Traffic engineering and overlays
• Overlays may/will get popular
– Will carry a lot of traffic
• Overlays may affect how traffic moves in my network
– Overlays often have a mechanism to monitor (overlay) links and
choose the ones that are better
• Usually lower latency
– This may cause a lot of traffic shifts in the IP network
• Will interfere with the TE in the IP network
• TE and overlay decisions are probably
– Independent
– Conflicting
• Different goals
• May end up causing oscillations in the IP network
– Not too different from the BGP effects
What is going on in the Inter-domain
• All the above apply to the network of a
single provider
• Things are much harder in the Inter-domain
• Can rely only on BGP
– And BGP was not designed for TE
– Does not convey information about the
performance of paths
BGP techniques
• Basic BGP tools are
– For inbound traffic
• AS path pre-pending
• Export policies
– Provider gives some options to the customer
– Customer selects according to his goals
• De-aggregation and selective advertisement
• MED
– For outgoing traffic
• Local preference
• The problem is that it is hard to predict the effect
of certain changes
– It may make things worse
Overlays for improving QoS in
the Inter-domain
• Use an overlay to “augment” the performance of BGP
– Overlays have been used for a long time
– To introduce new functionality
• Multicast, IPv6
• Content distribution networks
• RON – Resilient Overlay Network
– Build a small (<50 nodes) overlay
• N nodes in a full mesh
– Use probing to constantly estimate the quality of the paths between
the nodes
• Path availability
• Path loss rate
• Path TCP throughput
– Flood this information over the overlay using a link-state
protocol
– Select which path to use based on rich application-specific policies
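A sketch of RON-style path selection, exploiting the observation below that one extra hop is usually enough: compare the direct overlay link against every single-intermediate alternative using a probe-derived metric (metric() is a hypothetical hook returning, e.g., latency or a loss-adjusted cost; lower is better):

```python
def best_overlay_path(src, dst, nodes, metric):
    """RON-style selection sketch: compare the direct overlay link with
    every one-intermediate-hop alternative. metric(a, b) returns a
    probe-derived cost for the overlay link a -> b; lower is better.
    """
    best_path = [src, dst]
    best_cost = metric(src, dst)
    for mid in nodes:
        if mid in (src, dst):
            continue
        cost = metric(src, mid) + metric(mid, dst)
        if cost < best_cost:
            best_cost, best_path = cost, [src, mid, dst]
    return best_path, best_cost
```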
RON works but there are some
problems
• In most cases RON improved the
performance seen by the packets
– One extra hop is sufficient
• But
– Overlay forwarding is a bit slower than native
forwarding
– Like all load-sensitive routing, it can have oscillations
• Keep using a good path for a while
– Latency measurements are round-trip
• But overlay links are asymmetric
– Probing limits the scalability of the overlay
• Overlay nodes may be connected through DSL
Overlays and multi-homing
• Similar problems
• An overlay can improve the performance of a
singly connected network
• But a network connected through multiple
providers (> 2) can do quite well on its own
– If it implements clever route selection
– Based on monitoring paths similar to the way
the overlay monitors overlay links