Dallas Terabit Routing - Stupid Networks
Terabit Backbones
A Reality Check
Vijay Gill <[email protected]>
Agenda
Current State of the Internet
Side detour through
VPNs
DiffServ/QoS/CoS
The Converged Core (hype machine that goes to 11)
State Of the Internet Address
Reality Based Internet Economics
1. The amount of state at any level should be constrained and must not exceed Moore's Law for economically viable solutions.
2. Ideally, growth of state should trail Moore's Law.
We’re in trouble
“If you’re not scared, you don’t understand” – Mike O’Dell
Growth of State
Recent trends show high growth in Internet state (routes, prefixes, etc.)
Isolate this growth as a predictor of future growth
Compare growth to Moore's law
Source: Tony Li (Procket Networks)
The Very Bad News
Growth rate is Increasing
Hyper-exponential growth
Will eventually outgrow Moore’s law
Moore’s law may fail
Source: Tony Li (Procket Networks)
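The "outgrow Moore's law" claim is easy to sanity-check numerically. Below is a minimal sketch with purely illustrative assumed numbers (starting table size, growth rate, and how fast that rate itself accelerates), comparing hyper-exponential state growth against capacity that doubles every 18 months:

```python
# Illustrative comparison: hyper-exponential routing-state growth vs. Moore's law.
# All starting values and growth rates below are assumed for illustration only.

routes = 100_000            # assumed starting number of routes
capacity = 500_000          # assumed router capacity, in routes it can comfortably hold
growth = 1.30               # assumed 30% route growth in year 0...
growth_acceleration = 1.10  # ...and the growth rate itself rises 10% per year
moore = 2 ** (12 / 18)      # Moore's law: 2x every 18 months, ~1.59x per year

for year in range(1, 11):
    routes *= growth
    growth *= growth_acceleration   # hyper-exponential: the exponent keeps growing
    capacity *= moore
    flag = "  <-- state outgrows capacity" if routes > capacity else ""
    print(f"year {year:2d}: routes ~{routes:12,.0f}  capacity ~{capacity:12,.0f}{flag}")
```

With these assumed parameters the route count overtakes capacity within a decade; the exact crossover year shifts with the inputs, but any growth rate that keeps accelerating eventually passes a fixed doubling rate.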
The Real Problems
If we don’t fix these, the other problems won’t matter
Hyper-exponential growth will exceed Moore’s law
Safety margins are at risk
We need concerted effort on a new routing architecture
Multi-homing must not require global prefixes
Example: IPv6 plus EIDs
BGP Advertisement Span
Nov 1999: an average advertisement spanned about 16,000 individual addresses
Dec 2000: about 12,100 individual addresses
Increase in the average prefix length from /18.03 to /18.44 (see the arithmetic sketch below)
Dense peering (Rise of Exchange Points) and Multi-homing
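The span figures above follow directly from the average prefix length, since a /p prefix covers 2^(32-p) addresses. A quick arithmetic check:

```python
# Address span of an average BGP advertisement: a /p prefix covers 2**(32 - p) addresses.
for date, avg_prefix_len in [("Nov 1999", 18.03), ("Dec 2000", 18.44)]:
    span = 2 ** (32 - avg_prefix_len)
    print(f"{date}: average /{avg_prefix_len} -> ~{span:,.0f} addresses per advertisement")
# Prints roughly 16,000 and 12,100 addresses, matching the figures above.
```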
State Now
# of Paths vs. # of Prefixes
Large amounts of peering
CPU needed to crunch the RIB to populate the FIB
More state requires more CPU time (see the sketch below)
Leads to delayed convergence
BGP is TCP rate-limited; just adding pure CPU isn't the entire answer
The issue is with propagating state around
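A minimal sketch (a toy model, not any vendor's implementation) of why the CPU cost tracks paths times prefixes rather than prefixes alone: populating the FIB means running best-path selection across every path in the RIB, so dense peering multiplies the work even when the prefix count is flat.

```python
# Toy RIB -> FIB: best-path selection cost grows with paths per prefix, not just prefixes.
# The RIB maps each prefix to all paths learned from peers; the FIB keeps only the best.
from random import randint, seed

seed(1)
NUM_PREFIXES = 100_000
PATHS_PER_PREFIX = 8          # dense peering: many candidate paths per prefix

# Each path is (local_pref, as_path_len, peer_id).
rib = {
    prefix: [(100, randint(1, 8), peer) for peer in range(PATHS_PER_PREFIX)]
    for prefix in range(NUM_PREFIXES)
}

comparisons = 0
fib = {}
for prefix, paths in rib.items():
    best = paths[0]
    for path in paths[1:]:
        comparisons += 1
        # Prefer higher local-pref, then shorter AS path (a simplified decision process).
        if (path[0], -path[1]) > (best[0], -best[1]):
            best = path
    fib[prefix] = best

print(f"{NUM_PREFIXES:,} prefixes x {PATHS_PER_PREFIX} paths -> {comparisons:,} comparisons")
```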
Convergence Times
[Chart: convergence time in seconds vs. number of routes, rising from tens of seconds at tens of thousands of routes to over 1,100 seconds as the table approaches 200,000 routes.]
Problem With State
Issues with interactions of increased state, CPU, and message processing
Run-to-completion processing <-> missed hellos
Time diameter exceeds hold down
IGP meltdowns
Pegged CPU on primary causes slave to initiate takeover
Decoupled Hello processing from the routing process
VoIP? What VoIP?
IGPs on average converge an order of magnitude faster than BGP
Leads to temporary black holing
Router reboots (like that ever happens)
IGP converges away, BGP teardown
Router comes back up
IGP converges and places router in forwarding path
BGP is still converging
Packets check in, but don’t check out
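A minimal sketch of that reboot timeline, with assumed convergence times (IGP in tens of seconds, BGP roughly an order of magnitude slower, per the slide): the IGP pulls traffic toward the router well before BGP has repopulated its table, and packets arriving in that window are blackholed.

```python
# Toy timeline of a router reboot: IGP converges ~10x faster than BGP (assumed numbers).
IGP_CONVERGED_AT = 30     # seconds after reboot (assumed)
BGP_CONVERGED_AT = 300    # seconds after reboot (assumed)

def forwarding_state(t):
    in_igp_path = t >= IGP_CONVERGED_AT       # IGP has put the router back in the path
    has_bgp_routes = t >= BGP_CONVERGED_AT    # BGP table fully repopulated
    if in_igp_path and not has_bgp_routes:
        return "traffic attracted, no BGP route -> blackholed"
    if in_igp_path and has_bgp_routes:
        return "forwarding normally"
    return "not yet in forwarding path"

for t in (10, 30, 60, 120, 300, 310):
    print(f"t={t:3d}s: {forwarding_state(t)}")
```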
VPNs - Operational Reality Check
Vendors can barely keep one routing table straight
Potential for several thousand internal customer prefixes inside our edge routers
Customer Enragement Feature, IBGP withdraw bugs
Into this mess, we're going to throw another couple hundred routing tables, as some VPN proposals suggest?
Revenge of RIP
Provider Provisioned VPNs – Just Say No.
What Is Going to Work
Some people will optimize for high-touch edges (provider-provisioned VPNs, etc.)
But if they are talking with the rest of the world, welcome to the new reality: it sucks.
For the Rest….
“Already, data dominates voice traffic on our networks”
- Fred Douglis, AT&T Labs
[Exhibit originally published in Peter Ewens, Simon Landless, and Stagg Newman, "Showing some backbone," The McKinsey Quarterly, 2001 Number 1, available at www.mckinseyquarterly.com. Used by permission.]
What to optimize for
Optimize for IP
Parallel backbones
Some ISPs already have to do this based on the volume of traffic for IP alone
Do not cross the streams
Voice traffic has well known properties
Utilize them
Optical network: utilize DWDM and OXCs to virtualize the fiber
Solution
[Diagram: Internet (IP), VPN, Voice/Video, and CES services all carried over a common Multi-Service Optical Transport Fabric]
NEWS FLASH
Simple & Stupid Trumps Complex & Smart Every Time
Networks Powered by PowerPoint ™
Stuff looks good on slides; then we try to hire people to implement and operate it
Operational Reality Beats PowerPoint every time
The Converged Core ™
For the fortunate few
Utilize OXCs + DWDM to impose arbitrary topologies onto fiber
For the rest trying to run IP over Voice…
Nice knowing you….
Voice: use SONET as normal; it's not growing very fast, so don't mess with it (e.g., WCOM, T)
Network Design Principle
The main problem is SCALING
Everything else is secondary
If we can scale, we’re doing something right
State Mitigation
Partition State
What you don’t know, can’t hurt you
Common Backbone
Application Unaware
Rapid innovation
Clean separation between transport, service, and application
Allows new applications to be constructed without modification to the transport fabric.
Less Complex (overall)
Why A Common Backbone?
Spend once, use many
Easier capacity planning and implementation
Elastic Demand
An increase of N at the edge necessitates roughly 3-4N of core growth (see the sketch below)
Flexibility in upgrading bandwidth allows you to drop pricing faster than rivals
By carrying more traffic, a carrier can lower costs by up to 64%
[Exhibit originally published in Peter Ewens, Simon Landless, and Stagg Newman, "Showing some backbone," The McKinsey Quarterly, 2001 Number 1, available at www.mckinseyquarterly.com. Used by permission.]
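A back-of-the-envelope sketch of the elastic-demand point above. The 3-4x multiplier is from the slide; the traffic figure, and the reading of the multiplier as the number of core links each edge flow traverses, are assumptions for illustration only.

```python
# Assumed example of why edge growth multiplies in the core: each edge flow
# traverses several core links, so its bandwidth is counted once per hop.
new_edge_traffic_gbps = 10    # assumed new customer demand at the edge
avg_core_hops = 3.5           # the slide's 3-4x multiplier, read here as core links traversed

core_capacity_needed = new_edge_traffic_gbps * avg_core_hops
print(f"{new_edge_traffic_gbps} Gb/s at the edge -> ~{core_capacity_needed:.0f} Gb/s of core capacity")
```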
Bandwidth Blender - Set on Frappe
[Chart: price per STM-1 ($m), 1996-2005 - historical and forecast market price (PRICE) and unit cost (COST) of a transatlantic STM-1 circuit on a 25-year IRU lease. Source: KPCB]
Problem
We keep hearing the phrase ‘bandwidth glut’
So are we experiencing a glut or not?
No matter how many terabits of core bandwidth get turned up…
Capacity constraints are at the edges
Go drop 2-4 racks in colocation facilities
The Q in QoS stands for Quantity, not Quality
We don't need to boil the oceans; all we want is a poached fish
How To Build A Stupid Backbone
Optical backbones cannot scale at the STS-1 level
A high-speed backbone reduces complexity and increases manageability…
Impose a Hierarchy
Optical backbone provides high-speed provisioning/management: OC-192/48
Sub-rate clouds multiplex lower speed traffic onto core lightpaths
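Part of the hierarchy argument is simple SONET arithmetic: an OC-48 carries 48 STS-1s and an OC-192 carries 192, so grooming sub-rate traffic into whole lightpaths keeps the optical core from having to switch tens of thousands of STS-1-level circuits. A small sketch with an assumed circuit count:

```python
# SONET arithmetic: why the optical core should not switch at the STS-1 level.
STS1_MBPS = 51.84
OC48_STS1S, OC192_STS1S = 48, 192      # standard SONET multiplexing ratios

sts1_circuits = 20_000                 # assumed number of STS-1-equivalent circuits to carry
print(f"{sts1_circuits:,} STS-1 circuits ~= {sts1_circuits * STS1_MBPS / 1000:,.0f} Gb/s")
print(f"as individual STS-1s: {sts1_circuits:,} circuits for the core to manage")
print(f"groomed into OC-48 lightpaths:  {sts1_circuits // OC48_STS1S:,}")
print(f"groomed into OC-192 lightpaths: {sts1_circuits // OC192_STS1S:,}")
```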
Regional-Core Network Infrastructure
[Diagram: core OXCs form the core network; multi-service platforms (client equipment) connect Metro SubNetworks to the regional core]
Requirements
Support multiple services
Voice, VPN, Internet, Private Line
Improving service availability with stable approaches where possible
Use MPLS if your SONET ring builds are taking too long (anyone still building SONET rings for data?)
If you have to use MPLS….
Stabilize The Edge
LSPs re-instantiated as p2p links in the IGP
e.g., an ATL-to-DEN LSP looks like a p2p link with metric XYZ (see the sketch below)
Helps obviate BGP blackholing issues
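A minimal sketch of the idea (hypothetical cities and metrics, not any vendor's configuration or protocol code): to the IGP's SPF calculation the ATL-DEN LSP is just another point-to-point link with a metric, so the metric you assign controls whether traffic is drawn onto the LSP.

```python
# Toy SPF over an IGP topology where an LSP is advertised as a p2p link.
# Cities and metrics are hypothetical; the point is that SPF treats the LSP as one hop.
import heapq

def spf(graph, src):
    """Plain Dijkstra: shortest-path cost from src to every node."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, metric in graph[node]:
            nd = d + metric
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

def add_link(graph, a, b, metric):
    graph.setdefault(a, []).append((b, metric))
    graph.setdefault(b, []).append((a, metric))

igp = {}
for a, b, metric in [("ATL", "DFW", 10), ("DFW", "DEN", 10), ("ATL", "IAD", 12), ("IAD", "DEN", 25)]:
    add_link(igp, a, b, metric)

print("without LSP: ATL->DEN cost =", spf(igp, "ATL")["DEN"])           # 20, via DFW

# Re-instantiate the ATL-DEN LSP as a p2p link with a chosen metric.
add_link(igp, "ATL", "DEN", 15)
print("with LSP (metric 15): ATL->DEN cost =", spf(igp, "ATL")["DEN"])  # 15, via the LSP
```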
Stabilize The Core
Global instability propagated via BGP
Fate sharing with the global Internet
All decisions are made at the edge where the traffic comes in
Rethink functionality of BGP in the core
LSP Distribution
LDP alongside RSVP
Routers on edge of RSVP domain do fan-out
Multiple Levels of Label Stacking
Backup LSPs
Primary and Backup in RSVP Core
Speed convergence
Removes hold-down issues (signaling too fast in a bouncing network)
The protect path should be separate (disjoint) from the working path (see the sketch below)
There are other ways, including RSVP E2E
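A minimal sketch (hypothetical hop names) of the "protect path should be separate from working" check: verify that the backup LSP shares no links with the primary before relying on it for restoration.

```python
# Check that a backup LSP is link-disjoint from the working LSP (hypothetical hops).
def links(path):
    """Turn a hop list into a set of undirected links."""
    return {frozenset(pair) for pair in zip(path, path[1:])}

working = ["ATL", "DFW", "DEN"]
backup  = ["ATL", "IAD", "ORD", "DEN"]

shared = links(working) & links(backup)
if shared:
    print("NOT disjoint; shared links:", [tuple(l) for l in shared])
else:
    print("protect path is link-disjoint from the working path")
```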
Implementation
IP + Optical
Virtual Fiber
Mesh Protection
Overlay
We already know where the big traffic will be
NFL Cities, London, Paris, Frankfurt, Amsterdam
DWDM + Routers
IP + Optical
IP / Routers over Optical Switching over Fiber (DWDM / 3R)
Virtual Fiber
Embed arbitrary fiber topology onto physical fiber
Mesh restoration
Private Line
Increased velocity of service provisioning
Higher cost, added complexity
[Diagram: backbone fiber with DWDM terminals and optical switches in the core; metro collectors and DWDM at the edge]
IP + Optical Network
Out of port capacity or switching speed on routers? Bypass intermediate hops
Dual Network Layers
Optical Core (DWDM Fronted by OXC)
Metro/Sub-rate Collectors
Fast Lightpath provisioning
Remember - Routers are very expensive OEO devices
Multiservice Platforms, Edge Optical Switches
Groom into lightpaths or dense fiber.
Demux in the PoP (light or fiber)
Eat Own Dog Food
Utilize customer private line provisioning internally to run the IP network.
Questions
Vijay “Route around the congestion, we must” Gill
Many thanks to Tellium (Bala Rajagopalan and Krishna Bala) for providing icons at short notice!
Nota Bene – This is not a statement of direction for MFN!