
Network Research Infrastructure
“Back to the Future”
(aka National Lambda Rail – NLR)
Bob Aiken ([email protected])
&
Javad Boroumand ([email protected])
© 2004, Cisco Systems, Inc. All rights reserved.
Why go back to the future?
• Major advances and impact came during the ARPANET and early NSFNET years
They had concurrent application and network R&D
• The Web took off in the early 90s because a network research infrastructure existed
• In the mid-to-late 90s we saw production networks only
No collaboration between application and network/system researchers (i.e., network researchers were estranged)
• Merging and interdependence of applications, networks, systems, storage, content, …
• Dark fiber now enables the Morphnet concept of 1997
Why go back to the future? (continued)
• The need has never been greater for a research infrastructure that concurrently supports application, network, system, & security research and their interplay, e.g.:
GRIDs, GRID applications, Middleware, Multi-media …
Evolution of real end-to-end networks and protocols
Real network data for monitoring & analysis of network traffic (a small analysis sketch follows this list)
Scalable, secure, highly available, resilient networks
New applications, e.g. VoIP, NEES, sensor networks, gaming, on-demand computing, HD multi-media collaborations …
Adaptive networks
Network management, control planes, provisioning
Scaling of all aspects w.r.t. speeds, users, networks, hosts, …
Emulabs and VPNs are good for proof of concept, but:
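As one small illustration of the traffic monitoring & analysis item above, here is a minimal sketch that aggregates flow records to find the top senders by bytes. The CSV file name and column names are assumptions for illustration, not an NLR or slide-specified format.

    # Minimal traffic-analysis sketch over hypothetical flow records.
    # Assumed CSV columns: src_ip, dst_ip, protocol, bytes (illustrative only).
    import csv
    from collections import Counter

    bytes_by_src = Counter()
    with open("flows.csv", newline="") as f:
        for row in csv.DictReader(f):
            bytes_by_src[row["src_ip"]] += int(row["bytes"])

    # Report the ten largest senders seen in the capture window.
    for src, total in bytes_by_src.most_common(10):
        print(f"{src:>15}  {total / 1e9:.2f} GB")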
We need Network Research Infrastructure like
the NLR for the following types of research
• End to end
Security (IDS, DoS, firewalls, crypto)
QoS
Predictability
• GRIDs and middleware
• TCP evolution & efficiency (see the sketch after this list)
FAST, HS-TCP, XCP
SuperJumbo frames
Reliable UDP
RDMA
Alternative transport
• High availability, resiliency, failover, …
• Peering (Lambda, Frame, Packet)
• Provisioning
VPNs, IP, L2, L1, policy, security
• Routing and addressing
• Network management
• Control planes and signalling
• Merging of end systems and network-provided storage, computing, …
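For the TCP evolution item above, a minimal sketch of how a Linux end host could be pointed at an alternative congestion-control algorithm for a single connection. The algorithm name is a placeholder, the matching kernel module is assumed to be loaded, and the TCP_CONGESTION socket option is Linux-specific.

    # Minimal sketch: select an alternative TCP congestion-control algorithm
    # for one connection on Linux (assumes the named module is already loaded).
    import socket

    ALGO = b"htcp"  # placeholder; a research variant under test would go here

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(socket, "TCP_CONGESTION"):  # Linux-only socket option
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, ALGO)
        # Read back what the kernel actually applied for this socket.
        print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    s.connect(("example.org", 80))
    s.close()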
Historical Perspective: NLR is the next big US
Research and Education community initiative
NSFnet (1988)
• NSF funded
• Connecting "regionals" which, along with the backbone, commercialized the Internet
• National footprint; leased circuits; own IP service
• Production traffic with limited experimentation

vBNS (1993)
• NSF funded; part of the NGI program
• Originally connecting supercomputing centers and NAPs; later expanded to all research universities
• National footprint; MCI-managed ATM and IP service
• Production traffic but had a separate "testnet"

Abilene (1998)
• Higher-education membership funded through UCAID
• Main Internet2 backbone connecting "GigaPoPs"
• National footprint; Qwest-managed SONET and wave service; own IP service
• Production traffic only

NLR (2003)
• Research-university ownership with significant Cisco investment
• Connecting a new generation of "regionals" and driving regional R&E fiber projects
• National footprint; own dark fiber, DWDM, Ethernet and IP services
• First-ever "dual-mission" experimental and production national-scale infrastructure

Every 5 years the US national research networking infrastructure evolves to the next level.
NLR User Communities
• Big Science Projects
High-performance experimental and production network infrastructure of unparalleled capabilities for application-level and distributed-systems research (e.g., TeraGrid, OptIPuter, HEP experiments, …)
• Networking & Distributed Systems Research
Breakable and programmable experimental infrastructure for networking research at various layers (i.e., layers 1 through 9)
• Commodity and general-purpose R&E networking
Production infrastructure for cost reduction, regional aggregation, commodity Internet access, K-12, Federal/Gov networks and international transit, etc.
Research & Education Network Tiers
Tiers (network type, leaders, capabilities/users):
• Experimental Networks
Leaders: Web100, optical packet switching, NLR
Capabilities/users: experimental environments for network researchers
• Research Networks
Leaders: TeraGrid, Data Grid, CALREN, CENIC, SINET, NLR
Capabilities/users: next-generation architecture and applications for the research community
• Advanced Education Networks
Leaders: I2-Abilene, UKERNA, DFN, CALREN, GEANT, NLR
Capabilities/users: advanced services for education
• Commodity Internet
Leaders: ISPs
Capabilities/users: general use
(In the slide diagram, a vertical "NATIONAL LAMBDA RAIL" banner runs alongside the tiers.)
Network Tiers are a good idea - but …
• CENIC, NSF, and others advocate 3 types of "research networks"
It is good to focus on the 3 types of requirements, but:
• What we really need is ONE network that combines and concurrently supports all 3 types of network requirements
Original goal of the HPCC and NGI programs
End to end (host-campus-LAN-MAN-WAN)
WaveNet (L1), FrameNet (L2), PacketNet (L3) via the same infrastructure
• The NLR is that network!
NLR networking research use vs.
production use (MORPHNET concept)
[Diagram: the same PacketNet, FrameNet, and WaveNet infrastructure shared between networking research use and production use. Source: [email protected], 12 Jan 2003.]
NLR - Potential Use Examples
[Diagram: potential NLR use examples (courtesy of John Silvester [USC], modified by Aiken). The first fiber pair carries the production DWDM network, supporting the WaveNet, FrameNet, and PacketNet layers alongside experimental L1-3, L2-3, and L3 networks; additional fiber pairs remain available. Example uses: optical packet switching architecture; ETF distributed backplane; production switched Ethernet; production routed IP network; XCP reference implementation; deterministic UltraLight access; new routing protocols infrastructure; AUP-free Internet service; Internet BGP visibility.]
NLR Footprint & PoP Types
Phase 1 and planned Phase 2
[Map: NLR-owned fiber and managed-wave routes with PoPs at SEA, POR, SYR, BOI, CHI, OGD, SVL, PIT, DEN, CLE, KAN, SLC, NYC, WDC, RAL, LAX, PHO, ALB, TUL, SAN, ATL, DAL, ELP, PEN, JAC, BAT, SAA, HOU. PoP types: WaveNet + FrameNet + PacketNet; WaveNet + FrameNet; WaveNet only.]
Challenges
• Network-aware applications
Checkpointing & "serial 1 mentality" (a checkpointing sketch follows this list)
• Merged network, system, middleware, and applications
• Exposure of network and systems to applications in a secure manner
• End-to-end networking
System, building, campus, metro, regional, state, national, international
• L1-L3 networks
Convergence, control
Complexity of design and operation
• NOC support for concurrent production & research infrastructures
Research in and of itself
• Mindset
Research verticals (apps, networks, systems, GRIDs) need to work together
Concurrent production & research
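For the checkpointing challenge above, a minimal sketch of a long-running transfer that records how far it has gotten so a failure does not force a restart from byte zero. File names, the receiver host and port, and the chunk size are illustrative assumptions, and the receiver-side handshake to agree on the resume offset is omitted.

    # Hypothetical checkpointed transfer: remember the byte offset reached so
    # a failed long-haul transfer can resume rather than restart.
    import json, os, socket

    CHECKPOINT = "transfer.ckpt"            # illustrative checkpoint file
    SOURCE = "dataset.bin"                  # illustrative source file
    DEST = ("receiver.example.org", 9000)   # illustrative receiver
    CHUNK = 1 << 20                         # send in 1 MiB chunks

    def load_offset():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)["offset"]
        return 0

    def save_offset(offset):
        with open(CHECKPOINT, "w") as f:
            json.dump({"offset": offset}, f)

    offset = load_offset()
    with open(SOURCE, "rb") as src, socket.create_connection(DEST) as conn:
        src.seek(offset)
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            conn.sendall(chunk)             # handed to the kernel send buffer
            offset += len(chunk)
            save_offset(offset)             # checkpoint after each chunk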
Summary
• To really make advances, we need to go back to the ARPANET and early NSFNET model, where applications, middleware, security, network management, systems management, and network (layers 1-9) R&D coexist and complement each other on the same infrastructure
These were the goals of the HPCC and NGI programs
• We need campus, metro, state, regional, and national research infrastructures to support this goal
• NLR is the only national infrastructure designed to support this challenge
References
• NLR: http://www.nlr.net/
• Communications of the ACM, January 2004, Volume 47, Number 1, pp. 93-98
"Network and Computing Research Infrastructure: The Need to Go Back to the Future" by Aiken, Boroumand, and Wolff
• Morphnet (1997)
http://www.anl.gov/ECT/Public/research/morphnet.html
Extra slides
• The rest are extra slides
NLR PoP Architecture
[Diagram: NLR PoP architecture. In the carrier colo, Cisco 15808 systems face west and east over DWDM; the NLR demarc houses a 7600, a CRS-1, a 15500, and a 15454. Campus pull-thru connects metro and regional nets via 15500 and 15454 systems. Link legend: DWDM; 10GE or OC192; 1GE; 10GE (6500); 10GE (15540/30); 1G (various).]
NLR Planned Capabilities and Services
• Point-to-point waves
10GE, OC192 (future: 1GE, OC48)
Using Cisco 15808 long-haul and extended-long-haul and Cisco 15454 extended-metro DWDM systems
• Switched Ethernet
Using Cisco 6500 switches
• Routed IP
Using Cisco 7600, 12400, and CRS-1 routers
• Dark fiber for optical-layer research
• Traditional NOC services plus "Experiments Support Center" services
Instrumentation, measurement, config/re-config management, tool development (a measurement sketch follows this list)
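As a hedged illustration of the measurement and tool-development services above (not an actual NLR tool; the target host and port are assumptions), the sketch below uses TCP connection-setup time as a crude round-trip latency probe.

    # Illustrative latency probe: time TCP handshakes to a target host.
    import socket
    import statistics
    import time

    TARGET = ("measurement.example.org", 80)  # hypothetical target host/port
    SAMPLES = 10

    rtts = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with socket.create_connection(TARGET, timeout=5):
            pass                              # close right after the handshake
        rtts.append((time.perf_counter() - start) * 1000.0)
        time.sleep(0.2)

    print(f"min/median/max connect time (ms): "
          f"{min(rtts):.1f} / {statistics.median(rtts):.1f} / {max(rtts):.1f}")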
NLR Regional Networks
[Map: NLR regional networks, showing NLR fiber routes, NLR leased managed-wave routes, and NLR PoPs. Regionals shown: PNWG, Northern Lights, Cornell/NYSERnet/NEREN, PGP, CENIC, CIC, FRGP, OARNet, MATP, GPN, NMG, NCLR, OneNet, LEARN, SLR, LONI, FLR. Solid color indicates current or shortly pending NLR membership; striped color indicates interest in NLR membership.]
Current NLR Participants
• Corporation for Education Network Initiatives in California (CENIC)
• Pacific Northwest GigaPoP (PNWGP)
• Pittsburgh Supercomputing Center
• Duke (representing a coalition of NC universities)
• Mid-Atlantic Terascale Partnership
• Cisco Systems
• Internet2
• Florida LambdaRail
• Georgia Institute of Technology
• Committee on Institutional Cooperation (CIC)
• Lone Star Education and Research Network (LEARN) - Texas
• Cornell University – New York
• Louisiana Board of Regents (LONI)
• University of New Mexico