Gopher GigaNet - A Next Generation Campus Network
Gopher GigaNet
A Next Generation Campus Network
David Farmer ([email protected])
Winter 2005 Joint Techs
February 14th 2005
Alternate Titles
How I spent my summer
– Without any Vacation
Firewalls everywhere
– But not a Policy to implement
Why MPLS?
– Policy, Policy, Policy
– Or, I want to build a broken network, but still manage it
Agenda
About UMN
The Old Network
Design Goals
Key Technologies We Picked
Architecture Components
The “Big Picture”
Twin Cities Campus
Vital Statistics
897 surface acres
– East Bank, West Bank, St. Paul
251 Buildings
– 20-story Office Towers to Garden Sheds
Nearly 13M Assignable ft²
– Nearly 22M Gross ft²
50,954 Student Enrollment – Fall 2004
– Second largest nationally (the largest leads by only 41 students)
Ranked 10th in total research
Twin Cities Campus
Network Statistics
More than 200 on-net Buildings
1730 Wire Centers (Closets or Terminal Panels)
– 842 With Network Electronics
2774 Edge Access Switches (3750G-24TS)
312 Aggregation Switches (3750G-12S)
29 Core Switches (6509-NEB-A)
5000 Virtual Firewall Instances
The Old Network
Originally installed Sept ’97 – Dec ’99
– Took way too long
10Mb Switched Ethernet to desktop
– Small amount of 100Mb for high-end desktops and servers
Typically multiple 100Mb building links
Partial-Mesh OC3 ATM backbone
The Old Network
Cisco 1924 Closet Switches
– 4 switches per 100Mb uplink
Cisco 2924M-XL Closet Switches
– Used for small amounts of 100Mb for servers and desktops
– Single switch with two 100Mb uplinks
Cisco 5500 Core Switches
– With RSMs for routing
– 25 Core Nodes
FORE ASX-200 and ASX-1000 ATM switches for Core network
The Old Network – Midlife Upgrade
Installed Aug ’00
Added GigE Backbone
Cisco 5500 Core Switches
– Upgraded to Sup3s with GigE uplinks & MLS
Foundry BigIron
– Center of Star Topology GigE Backbone
Design Goals
Divorce Logical and Physical Topologies
Provide more than 4096 VLANs network wide
“Advanced” Services
Routed (L3) Core, Switched (L2) Aggregation and Edge
Network Policy – AKA Security
Network Intercept
Other Stuff
Design Goals
Divorce Logical and Physical Topologies
– Administrative Topology
– Policy Topology
Security or Firewalls
Bandwidth shaping or Usage
QoS
– Functional or Workgroup Topology
Design Goals
Provide more than 4096 VLANs network wide
– More than 1000 VLANs now
– Micro-segmentation for Security and other Policy could easily require 4X over the next 5 years
– Even if we don’t exceed 4096 VLANs (the 12-bit 802.1Q tag allows only 4096 IDs), the VLAN number space will be very full
Design Goals
“Advanced” Services
– Native IPv4 Multicast (a sketch follows this slide)
PIM Sparse Mode, MSDP, BGP for Routing
IGMP v3 (SSM support) for L2 switching
– IPv6
Unicast for sure
Multicast best shot
– Jumbo Frames
9000-byte MTU, clean end-to-end
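To make the Multicast bullet concrete, here is a minimal IOS-style sketch of the pieces named above (PIM Sparse Mode, MSDP, IGMPv3); the addresses, interface, and MSDP peer are hypothetical, not the actual GigaNet configuration:

    ip multicast-routing
    ! User-facing SVI: PIM-SM plus IGMPv3 so hosts can use SSM
    interface Vlan100
     ip pim sparse-mode
     ip igmp version 3
    ! MSDP peering exchanges active-source information with another RP
    ip msdp peer 192.0.2.2 connect-source Loopback0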
Design Goals
Routed (L3) Core, Switched (L2) Aggregation and Edge
– How many L3 control points do you want to configure?
– Limit scope of Spanning Tree
If possible, eliminate Spanning Tree
Minimally, limit it to protecting against mistakes, NOT an active part of the Network Design (a sketch follows)
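One common way to demote Spanning Tree to a mistake-catcher, in the spirit of the slide above, is PortFast plus BPDU guard on user ports; a minimal IOS-style sketch with a hypothetical interface:

    interface GigabitEthernet1/0/1
     switchport mode access
     spanning-tree portfast           ! user port, never part of the topology
     spanning-tree bpduguard enable   ! err-disable the port if a BPDU arrives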
Design Goals
Network Policy – AKA Security
– Security is, at least partly, the network’s problem
Let’s design it into the network, rather than add it as an afterthought
– The network needs to enforce Policies
Only some of these are actually related to Security
– Rate Shaping, CoS/QoS, AAA, just to name a few
– Firewalls with stateful inspection are necessary in some locations
– Network Authentication (802.1x), sketched below
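A minimal IOS-style sketch of the 802.1x item: port authentication against RADIUS. The server address, key, and interface are placeholders:

    aaa new-model
    aaa authentication dot1x default group radius
    radius-server host 192.0.2.10 key PLACEHOLDER
    dot1x system-auth-control
    ! Edge port: no traffic forwarded until the user authenticates
    interface GigabitEthernet1/0/1
     switchport mode access
     dot1x port-control auto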
Design Goals
Network Intercept
– Intrusion Detection and Prevention
– Troubleshooting
– Measurement and Analysis
– Legal Intercept and Evidence Collection
– Sinkhole Routing
Design Goals
Other Stuff
– Core Services
DNS
DHCP
NTP
– Measurement
– Localized Logging, as sketched below
Syslog
Netflow
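Localized logging at a Core Node might look like this minimal IOS-style sketch; the collector address is hypothetical:

    logging 10.16.0.20                           ! syslog to the local Log Server
    ip flow-export version 5
    ip flow-export destination 10.16.0.20 9996   ! NetFlow to the same collector
    interface Vlan100
     ip route-cache flow                         ! account for this subnet's traffic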
Design Goals
Other Stuff
– Data Centers
Intend to support 6 – 12 Data Centers on campus
Create Separate Infrastructure
– Allows different maintenance windows
– Provide Higher SLA/SLE
– Provide things that can’t scale to the rest of campus
Server load balancing
Dual fiber entrances
Single L2 Domain
Redundant Routers
Design Goals
Other Stuff
– Management Network
Console Servers
Remote Power Control
Redundant GigE network
– Allow access to critical Core Network equipment at all times
Dial-up Modem on Console Server for Emergency Backup
Key Technologies We Picked
MPLS VPNs
Cisco StackWise Bus on 3750s
– Cross-Stack EtherChannel provides redundancy without creating loops in the Spanning Tree topology, as sketched below
Cisco FWSM with Transparent Virtual Firewalls
– Policy as L2 bumps on the wire
– Let the Routers Route
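A minimal IOS-style sketch of Cross-Stack EtherChannel: uplinks on two different 3750 stack members bundled into one logical trunk, so losing a stack member needs no Spanning Tree reconvergence. Interface numbers are hypothetical, and early 3750 software required channel mode “on” for cross-stack bundles:

    interface GigabitEthernet1/0/25
     channel-group 1 mode on          ! uplink on stack member 1
    interface GigabitEthernet2/0/25
     channel-group 1 mode on          ! uplink on stack member 2
    interface Port-channel1
     switchport trunk encapsulation dot1q
     switchport mode trunk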
How to Scale
A network with those numbers doesn’t fit in your head
– My mind is too small to hold it all
– How about yours?
“A foolish consistency is the hobgoblin of little minds”
– Emerson
Consistency is the answer to Scaling
MPLS VPNs – Short Tutorial
RFC 2547 defines Layer 3 routed MPLS VPNs
Uses BGP for routing of VPNs
Routers create a VRF (VPN Routing & Forwarding) Instance
VRFs are to Routers as VLANs are to Ethernet Switches (a config sketch follows)
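To make the VRF/VLAN analogy concrete, a minimal IOS-style sketch of one VRF on a router; the VRF name, AS number, route distinguisher, and addresses are all hypothetical:

    ip vrf RESNET                        ! a per-VPN routing table, the L3 analog of a VLAN
     rd 65000:100
     route-target export 65000:100
     route-target import 65000:100
    interface Vlan100
     ip vrf forwarding RESNET            ! place this subnet in the RESNET VPN
     ip address 10.100.0.1 255.255.255.0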
MPLS VPNs – Short Tutorial
P – “Provider” Router
– No knowledge of customer VPNs
– Strictly routes MPLS tagged packets
PE – “Provider Edge” Router
– Knowledge of customer VPNs & provider network
– Routes packets from customer network across the provider network by adding a VPN MPLS tag and a tag for the remote PE
MPLS VPNs – Short Tutorial
CE – “Customer Edge” Router
– No knowledge of provider network
– Strictly routes IP packets to PE
Only PE routers are necessary in the MPLS VPN Architecture
– This is important in a Campus Network (a PE sketch follows)
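A minimal IOS-style sketch of the PE role: MP-BGP carries VPN routes between PEs, with extended communities carrying the route-targets. The peer address, AS number, and VRF name are hypothetical:

    ip cef
    interface GigabitEthernet1/1
     mpls ip                                      ! label-switched link toward the core
    router bgp 65000
     neighbor 10.0.0.2 remote-as 65000            ! the remote PE
     neighbor 10.0.0.2 update-source Loopback0
     address-family vpnv4
      neighbor 10.0.0.2 activate
      neighbor 10.0.0.2 send-community extended   ! carries the route-targets
     address-family ipv4 vrf RESNET
      redistribute connected                      ! advertise the VRF's subnets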
Example Campus MPLS VPN Architecture
[Diagram: an example MPLS VPN topology with separate VRFs for HIPAA Data, ResNet, Campus, Campus No Firewall, and Campus No PacketShaper, all reaching the Internet through the Campus Border]
Architecture Components
– Campus Border: Border Routers
– Core Network: Backbone Nodes and Core Nodes
– Aggregation Networks: Regional Aggregators, Area Aggregators, and Building Aggregators
– Edge: Edge Nodes
– Control Plane & Management
Campus Border
Border Routers
– Redundant routers in diverse locations
– Act as CE routers for all VRFs that need Internet Access
– Cisco 6509
Dual SUP720-3BXL
Dual Power Supplies and Fans
All 6700 Series Interface Cards
Campus Border
Border Policy Enforcement
– Layer 2 bumps on the wire
Cisco FWSM (sketched after this slide)
Packeteer 9500
Home-grown ResNet Authentication Control & Scanner (RACS)
– Attach to or contained within Border Router
Packets get a little dizzy passing through the Border Router L2 or L3 switching fabric several times
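A sketch of the FWSM transparent virtual-firewall idea from the slide above. FWSM syntax varies by release, so treat this as illustrative 3.x-style configuration with hypothetical context names and VLANs:

    ! System context: carve out one virtual firewall and hand it a VLAN pair
    context hipaa-data
     allocate-interface Vlan100            ! outside
     allocate-interface Vlan101            ! inside
     config-url disk:/hipaa-data.cfg
    ! Inside the hipaa-data context: an L2 bump in the wire bridging the two VLANs
    firewall transparent
    interface Vlan100
     nameif outside
    interface Vlan101
     nameif inside
    ip address 10.100.0.2 255.255.255.0    ! management address for the bridge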
Core Network
Backbone Nodes
– 2 Backbone Nodes producing a Dual-Star Topology
– Collocated with the Border Routers
– 10Gb interconnection between Backbone Nodes
– 10Gb connection to each Core Node
– Cisco 6509
Core Network
Core Nodes
– Located at 16 Fiber aggregation sites around campus
– 10Gb connection to each Backbone Node
– 2 or 3Gb to Aggregators or Edge Nodes
– Cisco 6509-NEB-A
Core Network
Core Nodes
– Layer 3 routing provided for End User Subnets
Layer 3 MPLS VPNs provide separate Routing Domains
– Virtual Firewalls provided per Subnet as needed
– Root of a VLAN Domain
802.1q tags have local significance only
VLANs connected between Core Nodes using Layer 2 MPLS VPNs as needed, as sketched below
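A minimal IOS-style sketch of stitching one VLAN between two Core Nodes with an EoMPLS (Layer 2 MPLS VPN) pseudowire; the interface, VLAN, VC ID, and peer loopback are hypothetical:

    interface GigabitEthernet1/1.100
     encapsulation dot1Q 100
     xconnect 10.0.0.2 100 encapsulation mpls   ! pseudowire to the remote Core Node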
Aggregation Networks
Layer 2 only
Aggregates Edge Nodes & connects them to a Core Node
Cisco 3750G-12S
Aggregation Networks
Regional Aggregator
– 3Gb Connection to Core Node
Area Aggregator
– 3Gb Connection to Regional Aggregator
Building Aggregator
– 2 or 3Gb Connection to Regional or Area Aggregator, or directly to Core Node
Edge Nodes
Connects users and servers to the Network
Connects to a Building Aggregator
– If more than one closet in a building
– Otherwise connects to
Core Node
Regional Aggregator
Area Aggregator
Cisco 3750G-24TS
Typical Building
[Diagram: a typical building. Each floor has an Edge stack of 3750G-24TS switches (24 + 4 ports each) joined by 1G stack links; stacks of 1-6 switches get 2G uplinks, stacks of 7 or more get 3G uplinks. A Building Aggregator stack of 3750G-12S switches (0 + 12 ports each) in the MDF handles building distribution over the stack bus and uplinks to an Area, Regional, or Core Node]
Data Center Networks
Data Center Core Nodes
– Redundant Routers servicing all Data Centers on Campus
– Collocated with the Border Routers and Backbone Nodes
– 10Gb interconnection between Data Center Core Nodes
– 10Gb connection to each Backbone Node
– 2Gb up to 10Gb connection to each Data Center
– Cisco 6509-NEB-A
Data Center Networks
Data Center Aggregator
– Connected to both Data Center Core Nodes
– Two 3750G-12S or two 3750G-16TD
– Feeds Data Center Edge Nodes within a single Data Center
Data Center Networks
Data Center Edge Nodes
– Min Stack of two 3750G-24TS
– Connects to Data Center Aggregator
Or directly to Data Center Core Node if a single stack serves the Data Center
– Want hosts to EtherChannel to separate switches in the Stack for redundancy
Management Network
Management Node
– 3750G-24TS collocated with each Core Node
– Routed as part of Control Plane & Management network
– Cyclades Console Server and Remote Power Control
Management Aggregator
– Connects all the Mgmt Nodes
Management Network
Measurement Server collocated with each Core Node
Log Server collocated with each Core Node
DNS, DHCP, and NTP Servers collocated with each Core Node
– Using Anycast for DNS Redundancy, as sketched below
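A minimal IOS-style sketch of the DNS anycast approach: every Core Node’s local resolver answers on the same shared service address, and each Core Node injects a host route to its local copy, so clients reach the nearest live server. The addresses and the choice of IGP are hypothetical:

    ! Host route to the local DNS server answering on the shared anycast address
    ip route 10.0.0.53 255.255.255.255 10.16.1.10
    router ospf 1
     redistribute static subnets      ! advertise the anycast /32 into the IGP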
Analysis Network
Analysis Node
– All switches collocated in a single location
– Provides access to every Core Node for testing and Analysis
– Provides for remote packet sniffing of any traffic on campus
– Provides Sinkhole Drains for each Core Node, as sketched below
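A minimal IOS-style sketch of a Sinkhole Drain: traffic for an address under attack (or for unused dark space) is steered toward a collector on the Analysis Network instead of being forwarded normally. All addresses and the AS number are hypothetical:

    ! Pull traffic for the targeted host toward the sinkhole collector
    ip route 192.0.2.66 255.255.255.255 10.200.0.2 tag 666
    router bgp 65000
     redistribute static route-map SINKHOLE   ! advertise the drain campus-wide
    route-map SINKHOLE permit 10
     match tag 666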
A Closer Look
[Map: one slice of the campus showing 2G and 3G building uplinks, including University Rec Center (1169), McNamara Center (1182), Ridder Arena & Baseline Tennis (1181), University Avenue Ramp (1188), Cooke Hall (1056), University Press Bldg (1102), Aquatics Center (1167), Mast Lab (1191), Univ Office Plaza (1985), Information Technology (1184), Lions Research Building (1174), Williams Arena (1050), Mariucci Hockey Arena (1176), YMCA (1093), University Village (1986), Center for Magnetic Resonance (1180), U Tech East (X130), and Dinnaken Office Building (X965)]
That’s enough
That’s enough rambling for now!
I really want to do more, but!
Find me and let’s talk more!
– I’ll even argue if you want
– Email me ([email protected])