Dave Siegel Presentation
Operational Experience with MPLS
Dave Siegel
Vice President
IP Engineering
Table of Contents
»Overview of hub architecture
»History of Network architecture
»Early challenges and how MPLS solved them
»Challenges with MPLS today
»VPN Deployment
»Architecture
»Capabilities (RFC2547, l2VPN)
»Provisioning aspects
»Customer experiences
»IPv6 deployment w/ MPLS
»How the network would behave without MPLS
[Hub architecture diagram: WR1/WR2, CR1/CR2, AR1/AR2, PR1, RR1; OC-48/OC-192, OC-12/OC-48, and OC-3/OC-12 links; modems, Ethernet switches, ADMs; GBLX]
Early Challenges
» Hop Count
» Network diameter ranged from 14-18 hops
» GRE tunnels were not supported on GSR images
» Traffic Engineering
» Large numbers of DS3’s and OC-3c’s in metro regions proved difficult to manage with IS-IS metrics
» Future VPN Product
MPLS Solutions
» Hop Count
» MPLS had no-decrement-ttl option
» MPLS tunnels were in implementation phase for GSR images
» Traffic Engineering
» MPLS provided for much more efficient utilization of metro bandwidth
MPLS Solutions
» Hop Count
» Established cross-country tunnels to mask many of the hops normally encountered in the core
» Traffic Engineering
» Established regional meshes of LSPs between devices
Multi-vendor networks
» Theory: having multiple suppliers gives you best-of-breed, plus contingency plans if you have major problems with your primary supplier
» Reality: Once a vendor is entrenched in your network, replacing them completely is simply too capital intensive
» Reality: you have worst-of-breed, because you must wait for both of your vendors to have a compatible implementation of a feature before deployment.
Multi-vendor networks
» Early interoperability issues (circa 1999-2000)
» Penultimate hop (NULL label vs. strip)
» No-decrement-ttl issues (it’s a one-hop network!)
» Current Issues
» Fast Re-route
» Secondary LSP
» Auto-bw
Current Stats
» MPLS core LSP mesh
» 9,900 tunnels make up the core mesh (a full mesh of 100 core routers: 100 × 99 = 9,900 LSP’s)
» 1,200 tunnels between PR’s make up the IP-VPN ExpressRoute product
» 11,100 tunnels total in the core
» Complexity requires automated management tools
MPLSrobot
» Bot components
» High-speed SNMP poller
» Tunnel resize script w/ tons of knobs (a rough sketch follows below)
» Graphing capability
» Path database
» Configuration push scripts
» Day-to-day challenges involve conditioning of collected data
» Run daily but configs pushed weekly
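As a rough illustration only: a minimal Python sketch of the kind of resize decision such a script might make, with a couple of the "knobs" mentioned above. The knob names, thresholds, and tunnel label are assumptions for the sketch, not the actual MPLSrobot code.

    # Hypothetical sketch of a tunnel-resize decision; not the actual MPLSrobot code.
    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class ResizeKnobs:
        min_kbps: int = 1_000            # floor: never reserve below this
        max_kbps: int = 2_400_000        # ceiling: roughly an OC-48 worth of bandwidth
        min_change_pct: float = 0.10     # skip changes smaller than 10% to limit churn

    def proposed_reservation(current_kbps: int, observed_peak_kbps: int,
                             knobs: ResizeKnobs) -> Optional[int]:
        """Return a new RSVP bandwidth value, or None if the change is too small to bother."""
        target = min(max(observed_peak_kbps, knobs.min_kbps), knobs.max_kbps)
        if current_kbps and abs(target - current_kbps) / current_kbps < knobs.min_change_pct:
            return None                  # not worth re-signaling the LSP
        return target

    # Run daily; accumulate changes and push configs weekly, as on the slide above.
    pending: Dict[str, int] = {}
    for tunnel, (current, peak) in {"CR1->CR2": (100_000, 140_000)}.items():
        new_bw = proposed_reservation(current, peak, ResizeKnobs())
        if new_bw is not None:
            pending[tunnel] = new_bw     # queued for the weekly config push
    print(pending)                       # {'CR1->CR2': 140000}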
WANDL
Getting Started
» Remove roadblocks
» Look for features of your network design that increase complexity or introduce roadblocks to implementing MPLS
» Multiple AS’s
» Multiple levels/areas in your IGP
» Lack of TE support in your IGP
Getting Started
» Choose reasonable RSVP bandwidth
» Set bandwidth values on new tunnels to 0 Mbps, and then measure over 24 hours.
» Set tunnel bandwidth to observed peak + some fudge factor (e.g. 95th %tile peak + 10%; see the sketch below)
» Do tunnel implementations slowly over time…don’t introduce too much churn in the network
» Tune link utilization with RSVP bandwidth values during transition
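A minimal sketch, assuming 5-minute utilization samples and an illustrative helper name, of the sizing rule above (95th-percentile peak plus a 10% fudge factor); it is not taken from any GX tooling.

    # Hypothetical sketch: derive an RSVP bandwidth value from 24 hours of measurements.
    import math
    from typing import List

    def rsvp_bandwidth_kbps(samples_kbps: List[int], percentile: float = 0.95,
                            fudge: float = 0.10) -> int:
        """95th-percentile peak plus a fudge factor, per the rule of thumb above."""
        if not samples_kbps:
            return 0
        ordered = sorted(samples_kbps)
        idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
        return int(ordered[idx] * (1 + fudge))

    # e.g. one day of 5-minute samples collected after starting the tunnel at 0 Mbps
    day_of_samples = [42_000, 49_000, 55_000, 58_000, 61_000]   # kbps, truncated for brevity
    print(rsvp_bandwidth_kbps(day_of_samples))                  # -> 67100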
Follow our Roadmap!
» Q4 1998 MPLS lab trials begin
» Q1 1999 MPLS limited production trial begins (regional mesh + ttl masking hack)
» Q2 1999 national LSP mesh between all CR’s complete
» Q2 2000 global LSP mesh complete
» Q2 2001 RFC 2547 IP VPN’s and L2-VPN’s with DiffServ (2 CoS’s)
Operational issues uncovered
» Through 1999, MPLS was blamed for a variety of outages and/or performance degradation issues, including:
» High latency
» Loss
» Reachability
» Workarounds included bouncing LSP’s
» Most of the time, CEF bugs were to blame!
Operational issues uncovered
» Except when WRED was to blame
Operational issues uncovered
» Training, Training, Training
»Cannot be overstated
» Experience, Experience, Experience
»GX had 2 full years of operational experience with MPLS before adding MPLS-based VPN’s to the network
IP-VPN (ExpressRoute) architecture
» Objective is to provide as much isolation from the Internet as possible
»Separate ASN (not AS3549)
»Private Routers (PR’s) not reachable from outside gblx.net (non-advertised address space)
»Full mesh of LSP’s between all PR’s
»Full iBGP mesh among all PR’s
IP-VPN (ExpressRoute) architecture
» Secondary Objective is to provide as high a class of service as possible.
»IP-VPN LSP’s have higher priority than LSP’s for Internet service, so they always get the best (lowest latency) routes.
»ToS bits are painted into a Business Class (vs. Best Effort for Internet service), which is re-written into the EXP field (see the sketch below)
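A minimal sketch of the two-class marking described above: a class is painted onto the packet per service, and that 3-bit value is re-written into the MPLS EXP field. The specific code points below are assumptions for illustration; the slides do not give the actual values.

    # Hypothetical two-CoS marking sketch; code points are assumptions, not GX's actual scheme.
    BUSINESS_PRECEDENCE = 4      # assumed precedence painted onto ExpressRoute (Business Class) traffic
    BEST_EFFORT_PRECEDENCE = 0   # assumed precedence for Internet (Best Effort) traffic

    def painted_precedence(is_expressroute: bool) -> int:
        """IP precedence (top three ToS bits) painted onto the packet for this service."""
        return BUSINESS_PRECEDENCE if is_expressroute else BEST_EFFORT_PRECEDENCE

    def exp_from_precedence(precedence: int) -> int:
        """Re-write the 3-bit class into the 3-bit EXP field of the MPLS shim header."""
        return precedence & 0b111

    assert exp_from_precedence(painted_precedence(is_expressroute=True)) == 4
    assert exp_from_precedence(painted_precedence(is_expressroute=False)) == 0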
IP-VPN Customers
» Connected: Even mix of RFC 2547 and L2-VPN
» Sales Funnel: Majority (60%) want L3-VPN
» Sales Funnel: good mix between carrier and enterprise
» Largest customer is RFC 2547 with approximately 50 circuits
» Market interest is still gaining momentum for this product set (ISP-provided IP-VPN)
IP-VPN Provisioning pros/cons
» L2-VPN’s are the easiest to engineer for the customer, but adds/deletes/moves require updating configs on every PR where the customer is connected.
» L3-VPN’s require less configuration on the ISP side, but are less preferred due to the high level of CPE engineering coordination required.
» L3-VPN’s were designed as a complete outsourcing of a customer’s routing, but in reality customers use this service in conjunction with another VPN
IPv6 deployment using MPLS
» GX has 3 routers located at native IPv6 exchange points
» Sure, you could use GRE tunnels to interconnect them over IPv4, but MPLS gives you:
»Per-tunnel utilization statistics
»Path info
»Scalability (as the product grows, you can add devices as an overlay network without impacting stability on existing platforms)
How the network would behave without MPLS
» WANDL simulations show that there would be no congestion in the network based on IGP TE with IS-IS, so MPLS is not needed today for TE.
» Bandwidth reservations for MPLS-based VPNs would not be as meaningful with large amounts of native IP traffic on backbone trunks.
Questions
THANK YOU