TRB Informational Briefing Review


Smithsonian Network
Infrastructure Program
ITPA Meeting
Martin Beckman
Director IT Engineering and Plans
6 November 2014
Background
• BSEE – USMA, West Point
• MA – Foreign Policy and National Strategy
• Retired Army Colonel (Infantry) after 30 years: 7 years Active, 23 years Reserves, with 9 months in Bosnia and 5 years at USASOC, Ft Bragg, NC.
• 1992-1995: USASOC, Fort Bragg, NC. Built the ASOC Command Center and the SOCOM COOP. Implemented encrypted IP over HF and VHF.
• 1996-2001: Pentagon. Designed and built the pre-9/11 core network (FDDI) for the Unclassified and Secret networks.
• 2001-2007: DISA. Systems and Network Engineer on NIPRNet, SIPRNet, and the IPv6 Pilot.
• 2007-2012: Designed and built the Virtualization Environment and the Private Cloud Infrastructure (VIDAR/DAVE).
• 2012-Present: Smithsonian Institution. Director IT Operations and now Director IT Engineering and Plans. Backbone Network and Private Cloud.
Vision and Goals for the Smithsonian
Challenges:
• The buildings themselves range in age from the mid-1800s (the Castle, the Renwick Gallery, and the Arts and Industries Building) to the 21st century (NMAI).
• The main data center for all of the activities is in Herndon, Virginia, over 30 miles from the National Mall.
• Ongoing efforts in digitization, genomic research, enhanced use of technology to support the exhibits, and demands for data storage and access have pushed the current infrastructure to capacity.
• Fixed or declining budgets and insufficient manpower limit speed, flexibility, and responsiveness to the Smithsonian community.
• No engineering staff, very limited technical expertise, and core circuits over-utilized (90% or greater).
• Stalled virtualization effort with a sub-optimal implementation (switching and storage).
Vision and Goals for the Smithsonian
Goals and Objectives:
• 64-bit and follow-on 128-bit processing and operating systems for all servers, each accessing the network at dual 10Gbps (physical servers) or dual 40Gbps (virtual server cluster).
• 64-bit processing and operating systems on all desktops and notebooks, with 1Gbps Ethernet access to the network infrastructure.
• Desktop video-conferencing and instant messaging within the Smithsonian network at the desktop, notebook, and device.
• Utilization of virtual desktops to support mobility for security operations, guest services, volunteers, interns, the NOC, the Help Desk, and training rooms.
• Full integration of smartphones and devices connecting over the Wi-Fi infrastructure. Note: 802.11ac adds considerable bandwidth requirements.
• Server virtualization using 10Gbps FCoE and Fibre Channel storage arrays with 40Gbps access to the network.
• Reduce operational recurring costs for circuits, space, HVAC, power consumption, and equipment maintenance.
• Long-term storage on 100GB Blu-ray discs in storage arrays.
Vision and Goals for the Smithsonian
Goals and Objectives:
• Core network between Herndon and the National Mall at quad 100Gbps, increasing in 100Gbps increments on two independent, resilient links.
• High-performance firewall structure operating at 40Gbps rates, securing all traffic to and from the Mall, the Data Center, and the Internet.
• Tier One Internet and Internet2 access rates, each at 10Gbps, scalable to 40Gbps in 10Gbps increments.
• All museums, the National Zoo, and Smithsonian sites connect to the backbone network at dual 10Gbps rates (except for small office sites with under 20 personnel).
• High-definition (HD 1080p/60) video networking between sites via an independent Fibre Channel network that also supports storage operations.
• Ubiquitous Wi-Fi access with accountability, authentication, and access control for all staff, guests, exhibits, and the public.
• Private Cloud operations to provide 100% resilient network services: Domain Name Service (DNS), IP address assignment via DHCP, Active Directory access control, telephone service (VoIP), and IP video conferencing.
Vision and Goals for the Smithsonian
Goals and Objectives:
• Migration from diverse PRI circuits to dual-homed SIP trunks (VA and NYC).
• Commercial encrypted remote access services via dynamic and static Virtual Private Networks (VPN) from anywhere globally.
• Fault-tolerant power implementations standardizing all equipment rooms, closets, and cabinets.
• Standardized 50-micron fiber optical distribution racks to eliminate wall-mounted fiber cabinets.
• Reduced operational recurring costs for circuits, space, HVAC, power consumption, and equipment maintenance.
• Enhanced digital radio and satellite telephone services as required by remote locations in Arizona, Panama, Florida, New York, and Boston, including flyaway communications support via INMARSAT for teams travelling to remote locations.
• Long-term storage on Blu-ray discs in storage arrays.
Optical Networking in Phases – VA, DC, MD
Smithsonian Optical Network Program
Four core SMR2 ONS 15454 optical switches with an initial capacity of five 10Gbps Ethernet circuits and two 8Gbps Fibre Channel circuits. The core upgrade adds 100Gbps Ethernet channels to the core in 2014, allowing the core of the backbone network to operate at 100Gbps. There are four loops: Internet, Maryland, Zoo, and Virginia. Maryland and Internet are of near-equal priority and are planned for 2014/2015; Zoo and Virginia are follow-on priorities planned for 2015/2016, or as funding permits. The driving forces are the Genomics and Digitization programs, which affect the Internet2 connection on the Internet Loop and the major work done for these programs at Pennsy and Suitland.
All loops have SMR1 ONS 15454 switches with two 10Gbps Ethernet channels, except the Equinix NAP switches, which have five 10Gbps Ethernet channels. Total of four SMR2 and ten SMR1 switches (14 switches overall).
[Diagram: Core Optical Ring (quad 10Gbps plus dual 8Gbps FC in 2013; 100Gbps in 2014). All ONS switches on the Core (Herndon, NMAH, and the Castle) are SMR2; loop switches are SMR1. Loops: Internet Loop (4x10Gbps), Maryland Loop (2x10Gbps: NMAH-Pennsy, Pennsy-Suitland, Suitland-Castle), Zoo Loop (2x10Gbps), and Virginia Loop (10Gbps). Sites shown: Mall North, Mall South, HDC1, HDC2, Equinix NAP, Admin/DZR, NZP, Pennsy, Suitland, and Front Royal.]
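As an aside (not part of the briefing itself), the loop plan above can be written down as a small data structure and the planned capacity tallied. A minimal Python sketch, assuming only the channel counts and deployment windows stated on the slide; the class and field names are this sketch's own:

```python
# Minimal sketch of the optical loop plan described above.
# Channel counts and planned windows come from the slide; names are illustrative.
from dataclasses import dataclass

@dataclass
class Loop:
    name: str
    channels_10g: int   # 10Gbps Ethernet channels on the loop
    planned: str        # planned deployment window

loops = [
    Loop("Internet", 4, "2014/2015"),
    Loop("Maryland", 2, "2014/2015"),
    Loop("Zoo",      2, "2015/2016"),
    Loop("Virginia", 1, "2015/2016"),
]

for loop in loops:
    print(f"{loop.name:8s} loop: {loop.channels_10g * 10} Gbps planned, {loop.planned}")

# Core ring: quad 10Gbps plus dual 8Gbps FC in 2013, with 100Gbps Ethernet added in 2014.
core_2013_gbps = 4 * 10 + 2 * 8
print(f"Core ring (2013): {core_2013_gbps} Gbps aggregate; 100Gbps Ethernet channels added in 2014")
```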
Optical Networking – NYC
New York City Metro Area Network - Dual Fiber Loop
[Diagram: Dual fiber loop linking the Smithsonian Backbone Network (Washington DC/Herndon, VA; Nexus 7004 switch/routers, VRF internal; connection via Equinix/Ashburn) to the New York metro sites. Herndon core switches hdc-ndsw1 and hdc-ndsw2 (Nexus 7009). The NYC fiber leases provide a 10Gbps backbone with dual 10Gbps switched service to Ashburn, VA via Equinix. Metro sites (all on Nexus 5548 switches):
• Equinix Data Center NY9, 111 8th Ave, New York, NY: nyc-csw2
• Equinix Data Center NY2, 275 Hartz, Secaucus, NJ: nyc-csw1
• CHNDM Off-Site, 560 Irvine Turner Blvd, Newark, NJ: chof-dsw1 and chof-dsw2
• George Gustav Heye Center, 1 Bowling Green, NYC, NY: gghc-dsw1 and gghc-dsw2
• Cooper Hewitt National Design Center, 2 East 91st Street, NYC, NY: chndm-dsw1 and chndm-dsw2
Each leased path is a 10Gbps routed link estimated at $3,000-$5,000/Mo; the Equinix circuits (est. $3,000/Mo, 10Gbps trunk links) are routed/switched through Equinix BGP routers.]
Internet/DMZ Upgrade Plan
[Diagram: Internet/DMZ upgrade plan. Smithsonian AS 25829, 160.111.0.0/16.
• External BGP peers: Cogent (AS 174, 10Gbps), Internet2 (AS 10866, 10Gbps), and Level 3 (AS 3356, 10Gbps).
• Border routers si-bgp1 and si-bgp2 (Cisco 6506e; Lo0 160.111.184.1/32 and 160.111.184.2/32), linked by 20Gbps iBGP (2x10Gbps LACP) on 160.111.184.4/30 (bgp1: .5, bgp2: .6).
• 10Gbps leased fiber, HDC to Equinix: 160.111.184.8/30 (bgp1: .9, Firewall #1: .10) and 160.111.184.12/30 (bgp2: .13, Firewall #2: .14).
• Gigamon switches for port-tap monitoring (multiple 10Gbps taps); hdc-inet-sw1 (C2970G, 63.88.104.230); hdc-inet1 G0/2 to esi-n5k-sw01 and G0/3 to esi-n5k-sw02 (ports TBD).
• DMZ switches esi-n5k-sw01 and esi-n5k-sw02 (Nexus 5548 w/FEX; Lo10 160.111.184.253/32 and 160.111.184.254/32).
• Core firewalls #1 si-esicl01-r1 and #2 si-esicl01-r2 (Checkpoint 12600; Lo0 10.250.255.253/32 and 10.250.255.254/32), cross-connected at 20Gbps (2x10Gbps LACP) on 160.111.184.16/30 (FW#1: .17, FW#2: .18).
• Firewall-to-core links at 20Gbps (2x10Gbps LACP): 10.250.0.244/30 (hdc-ndsw1: .245, si-esicl01-r1: .246) and 10.250.0.248/30 (hdc-ndsw1: .249, si-esicl01-r1: .250).
• Core switches hdc-ndsw1 and hdc-ndsw2 (Nexus 7010/7009; Lo10 10.250.255.1/32 and 10.250.255.2/32); a 40Gbps aggregate (4x10Gbps LACP trunked) is also shown.
• OSPF process 25829, Area 0 networks: 160.111.0.0 0.0.255.255, 10.250.0.0 0.0.3.255, and 10.250.255.0 0.0.0.255.]
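As a hedged illustration (not part of the plan), the /30 point-to-point assignments above can be sanity-checked with Python's standard ipaddress module. The subnets and host addresses are copied from the diagram; the link names ("link A", "link B", etc.) are this sketch's own labels:

```python
# Quick check that each /30 point-to-point subnet in the upgrade plan contains the
# host addresses listed for its two ends. Subnets and octets come from the diagram.
import ipaddress

p2p_links = {
    "iBGP, si-bgp1 <-> si-bgp2":    ("160.111.184.4/30",  [5, 6]),
    "si-bgp1 <-> Core Firewall #1": ("160.111.184.8/30",  [9, 10]),
    "si-bgp2 <-> Core Firewall #2": ("160.111.184.12/30", [13, 14]),
    "Core Firewall #1 <-> #2":      ("160.111.184.16/30", [17, 18]),
    "Firewall-to-core link A":      ("10.250.0.244/30",   [245, 246]),
    "Firewall-to-core link B":      ("10.250.0.248/30",   [249, 250]),
}

for name, (cidr, expected_octets) in p2p_links.items():
    net = ipaddress.ip_network(cidr)
    # A /30 has exactly two usable hosts; compare their last octets to the diagram.
    hosts = [int(str(h).rsplit(".", 1)[1]) for h in net.hosts()]
    status = "ok" if hosts == expected_octets else "MISMATCH"
    print(f"{name}: {cidr} -> usable host octets {hosts} [{status}]")
```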
Private Cloud Core Plan - 2015
[Diagram: Two virtual server clusters, each of 14 physical ESXi VM servers (ESXi Server #1-#14): one at Herndon behind HDC CSW1 and HDC CSW2 (Nexus 7009), and one at NMAH behind Mall CSW1 (Nexus 5548, NMAH) and Mall CSW2 (Nexus 5548, Castle). Cluster uplinks are 4x10Gbps LACP trunked port channels (Port Channel 2 and Port Channel 100). Core subnets: 10.250.1.4/30 (Port Channel 101.x), 10.250.1.8/30 (Port Channel 101.y), and 10.250.1.12/30. Cluster VLANs ride port-channel subinterfaces: vCenter VLAN 990 (101.990, 192.168.250.0/24), vMotion VLAN 991 (101.991, 192.168.251.0/24), and vBackup VLAN 992 (101.992, 192.168.252.0/24). Network core: 40Gbps Ethernet ring (current).]
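A small sketch (illustrative only) of the cluster addressing above: the VLAN IDs, subinterfaces, and prefixes come from the diagram, and the quick overlap check simply confirms that the /24 cluster VLANs and the /30 core subnets are disjoint.

```python
# Cluster VLAN/subnet layout from the Private Cloud core plan, with an overlap check.
import ipaddress
from itertools import combinations

# VLANs shared by both ESXi clusters (from the diagram).
vlan_subnets = {
    "vCenter (VLAN 990, Po101.990)": "192.168.250.0/24",
    "vMotion (VLAN 991, Po101.991)": "192.168.251.0/24",
    "vBackup (VLAN 992, Po101.992)": "192.168.252.0/24",
}
# Core point-to-point subnets for the cluster uplinks (from the diagram).
core_subnets = {
    "Core subnet, Port Channel 101.x": "10.250.1.4/30",
    "Core subnet, Port Channel 101.y": "10.250.1.8/30",
    "Core subnet, Port Channel 100":   "10.250.1.12/30",
}

nets = {name: ipaddress.ip_network(cidr)
        for name, cidr in {**vlan_subnets, **core_subnets}.items()}
for name, net in nets.items():
    print(f"{name}: {net} ({net.num_addresses - 2} usable hosts)")

overlapping = [(a, b) for (a, na), (b, nb) in combinations(nets.items(), 2)
               if na.overlaps(nb)]
print("Overlapping subnets:", overlapping or "none")
```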
IT Infrastructure and Private Cloud Core Points
• Private Cloud requires a resilient, high-speed backbone. 1Gbps is the minimum speed and 10Gbps is optimal.
• Private and leased fiber fixes the recurring costs for circuits (OPEX) and improves bandwidth and latency; however, the speed of light is still in effect: round trip coast to coast is ~5,500 miles, or about 30ms, and OEO adds a couple of microseconds per hop (see the sketch after this list).
• Meeting the carriers at the meet-me data center saves money and provides for additional services, for example MPLS, Comcast, Ashburn-to-NYC switched paths, etc.
• 10Gbps FCoE reduces the CPU use of servers, since the network is faster than the bus speeds of the servers (and other hosts): servers = 3.3GHz versus network access = 12GHz.
• Leasing fiber requires understanding the optical limits. Standard single mode reaches up to 2KM, Extended Reach (ER) up to 40KM, and optical amplification (OEO) reaches up to 180 miles.
• SIP trunking for VoIP is a major cost savings to an enterprise; however, many organizations lack technical depth and understanding.
• Contracting for fiber, Internet access, or data center services is a simple matter of writing Statements of Work for the COs. Keep it simple and use others' work as a template.
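The sketch referenced in the latency bullet above: only the 5,500-mile round-trip distance comes from the briefing; the propagation speeds are textbook values. The ~30ms figure corresponds to vacuum light speed, while light in fiber is roughly a third slower.

```python
# Back-of-the-envelope check of the "~5,500 miles round trip, about 30ms" figure.
# Only the distance comes from the briefing; propagation speeds are textbook values.
MILES_TO_KM = 1.609344
round_trip_km = 5500 * MILES_TO_KM            # ~8,851 km coast to coast and back

C_VACUUM_KM_S = 299_792                       # speed of light in vacuum
C_FIBER_KM_S = C_VACUUM_KM_S / 1.47           # roughly 2/3 c in glass (refractive index ~1.47)

rtt_vacuum_ms = round_trip_km / C_VACUUM_KM_S * 1000
rtt_fiber_ms = round_trip_km / C_FIBER_KM_S * 1000

print(f"Round trip distance: {round_trip_km:,.0f} km")
print(f"RTT at vacuum light speed: {rtt_vacuum_ms:.1f} ms   # the ~30 ms on the slide")
print(f"RTT in fiber:              {rtt_fiber_ms:.1f} ms   # closer to what is measured")
print("Each OEO regeneration or switch hop adds on the order of microseconds more.")
```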
Getting things done with IT Infrastructure
• Planning, design, execution: all require LEADERSHIP and THINKING. Going to leased fiber and DWDM was a follow-on idea after costing an OC48 pipe.
• Look for follow-on opportunities. The fiber leases led to discovering Equinix (the new MAE-East) and meeting the carriers for a major reduction in Internet access costs. That in turn led to finding other switched circuit alternatives (Equinix 10Gbps to NYC, Comcast, MPLS, cloud storage vendors, etc.).
• One size does not fit all; the solutions are a combination of technologies to meet the organizational missions and organization. Organizations may need realignment!
• A common network infrastructure (including Private Cloud computing) saves time, effort, and money.
• Contracting out technical expertise will result in limited progress and improvements to the infrastructure and adoption of new technologies.
• IT is personal, and the new appliances and applications are driving the infrastructure. Look ahead and lead the wave or get crushed under the weight of the double overhead coming ashore.
• Carriers and vendors need to make a profit and cannot read minds; read, learn, and train.
• There is NO box, except the one we make for ourselves.
Personal Views about Future Trends & Other Ideas
• Virtual desktops and VDI. Running a Linux/Windows VM on a Mac/Windows desktop. A failed VM can be copied down and installed on the fly within minutes! $$$$
• The new 802.11ac Wi-Fi allows for 1Gbps Wi-Fi and will slam existing network switching; however, with encryption of the Wi-Fi connection it can and may supplant copper to the user in most cases. $$$$
• Look out for 128-bit processing and the next generation of processing power, as well as HTML5 enhancements for VoIP/video use.
• HPC InfiniBand clusters (20Gbps) for clustered large-scale database computing (1TB or greater tables and indexes). Distributed virtualization has core limits of sockets per server.
• Storage will increase, since the resolution of still imaging and video seems to never stop (and it won't). Remember when a 10MB file was considered a beast and a 512KB USB drive was big?
• Inventiveness and innovation are singular and personal; it takes leadership to see them within a team or organization. A camel is a horse designed by a committee!
• Planning reduces waste. Do you binge spend at year end, OR do you have a deliberate shopping list? No plan survives first contact, but you cannot survive (and thrive) first contact without a plan! $$$$
• Virtualize your data centers with a private cloud infrastructure and save power, space, HVAC, and staffing. $$$$
• Beware of outsourcing and running blindly after things (SDN, Public Cloud, etc.). $$$$