U.S. Optical Network Status
U.S. Optical Network Testbeds
Status
Grant Miller
National Coordination Office
July 3, 2004
U.S. Optical Network Testbeds
UltraScienceNet: DOE
CHEETAH: NSF
DRAGON: NSF
StarLight: NSF
HOPI: Internet2
OMNInet: Nortel
National LambdaRail (NLR): Consortium
Also: CalREN, Colorado, Connecticut, Florida, Indiana
(I-Light), Illinois (I-WIRE), Maryland/DC/Virginia (MAX),
Michigan, Minnesota, New York (NEREN), North Carolina, Ohio,
Oregon, Rhode Island, SURA, Texas, Utah, Wisconsin
Applications and Network Performance
Application requirements are the drivers for bandwidth needs (see the timing sketch after this list)
DWDM delivers up to 100 Gbps
SONET framing delivers up to 40 Gbps
TCP/IP delivers about 15 Gbps
Site firewalls deliver about 7 Gbps to an application
But high-end applications require 40+ Gbps, e.g.:
– Terascale Supernova Initiative: Terabyte in days
– High Energy Physics: Terabyte data transfers
Therefore, consider optical networking options to:
– Bypass firewalls
– Carry non-IP frames (e.g., Fibre Channel over SONET)
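
As a rough illustration of why these per-layer figures matter, here is a minimal Python sketch that computes transfer time at each layer; the quoted rates come from the list above, and the one-terabyte payload is an assumed example:

# Back-of-the-envelope transfer times for a 1-terabyte dataset at the
# per-layer throughputs quoted above (illustrative figures only).

TERABYTE_BITS = 8 * 10**12  # 1 TB expressed in bits (decimal units)

layer_gbps = {
    "DWDM":          100,  # raw optical capacity
    "SONET framing":  40,
    "TCP/IP":         15,
    "Site firewall":   7,  # what an application actually sees
}

for layer, gbps in layer_gbps.items():
    seconds = TERABYTE_BITS / (gbps * 10**9)
    print(f"{layer:>14}: {seconds / 60:6.1f} minutes per terabyte")

At the roughly 7 Gbps a site firewall delivers, each terabyte takes about 19 minutes, which is what motivates the bypass options above.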
UltraScienceNet
Planned Capabilities
Sparse, lambda-switched, dedicated, channel-provisioned testbed
Connects hubs close to DOE’s largest science
users: Users pay last-mile costs
Provide an evolving matrix of switching
capabilities
Separately fund research projects to support
applications
– High-performance protocols
– Control
– Visualization
UltraScienceNet
Resources
Off-hours capacity from DOE’s ESnet: Expected 2 x OC48
between Sunnyvale and Chicago
Dedicated lambdas on NLR
– 2 x 10G lambdas between Chicago and Sunnyvale
– Possibly two more in year 2 or 3
Two dedicated lambdas on the Oak Ridge National
Laboratory Chicago-Atlanta Connector
Switching technologies
– Ciena, Cisco, or Sycamore (SONET), migrating to
– Calient or Glimmerglass all-optical switches, or a hybrid
– Progression of point-to-point (P2P) transport technologies (Fibre
Channel, InfiniBand)
Migrate to the production ESnet environment
UltraScienceNet
Engineering Approach
Network engineering
– Connect Atlanta-Chicago via ORNL
– 16 P2P circuits: OC192, 10 Gig-E
– Provide the NLR alternate route to close its ring
– Buy IRUs from Qwest and TVA
– Light with equipment from Ciena
Please see: http://www.csm.ornl.gov/ultranet
CHEETAH: Sponsored by the NSF
January 2004-December 2007
Goal: Develop the infrastructure and networking
technologies to support a broad class of e-science, and
specifically the Terascale Supernova Initiative
Concept: Create a network providing on-demand, end-to-end
dedicated bandwidth channels to applications, as well as an
IP path, to support:
– High throughput file transfers
– Interactive remote visualization
– Remote computational steering
– Multipoint collaborative computation
Participation by:
– Oak Ridge National Laboratory
– University of Virginia
– North Carolina State University
– City University of New York
CHEETAH
Technology
Dedicated channel: High-speed Ethernet mapped
to Ethernet-over-SONET circuit
Leverage existing technologies
– 100 Mbps/1 Gbps Ethernet in LANs
– SONET in MANs/WANs
– Availability of Multi-Service Provisioning Platform
(MSPP)-class devices (see the sizing sketch after this list) that can:
Map Ethernet to Ethernet over SONET
Cross-connect dynamically
Rate-control Ethernet ports
Provide a 1 Gbps ORNL-Atlanta Channel
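
To make the Ethernet-to-SONET mapping concrete, here is a minimal Python sketch of the virtual-concatenation (VCAT) sizing such an MSPP performs; the 48.384 Mbps of usable payload per STS-1 member is the standard SONET figure, while treating Gigabit Ethernet as a flat 1000 Mbps load is a simplifying assumption:

import math

STS1_PAYLOAD_MBPS = 48.384   # usable payload per STS-1 member

def vcat_members(client_mbps: float) -> int:
    """Smallest number of STS-1 members whose combined payload
    carries the client rate."""
    return math.ceil(client_mbps / STS1_PAYLOAD_MBPS)

n = vcat_members(1000.0)                   # 1 Gbps Ethernet client
print(f"STS-1-{n}v: {n * STS1_PAYLOAD_MBPS:.1f} Mbps of payload")
# -> STS-1-21v: 1016.1 Mbps, slightly over-provisioned for 1 GbE

Twenty-one members yield about 1.016 Gbps of payload, a slight over-provision that lets a rate-controlled Gigabit Ethernet port run at full line rate.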
CHEETAH
Implementation
Application tools
– File transfer
– Visualization: EnSight or Aspect/ParaView, custom OpenGL
codes
– Computational steering
Transport protocols
– File transfers
– Control channels: small portion of channel bandwidth
Rate-based flow control (sketched below): 2 x Dell 2.4 GHz PCs with
100 MHz 64-bit PCI buses
Make it wide-area: e.g., use NLR, MPLS tunnels through
Abilene, or collocated switches at Abilene PoPs
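
As a minimal illustration of rate-based flow control over a dedicated circuit, the Python sketch below paces a sender at a fixed configured rate rather than relying on TCP's loss-driven control; the host address, port, and 900 Mbps target rate are hypothetical example values:

import socket
import time

RATE_BPS = 900_000_000          # pace slightly under the 1 Gbps circuit
CHUNK = 64 * 1024               # bytes per send call

def paced_send(data: bytes, host: str = "198.51.100.10", port: int = 5001):
    # Fixed interval per chunk so the average rate never exceeds RATE_BPS.
    interval = (CHUNK * 8) / RATE_BPS
    with socket.create_connection((host, port)) as sock:
        next_send = time.monotonic()
        for off in range(0, len(data), CHUNK):
            sock.sendall(data[off:off + CHUNK])
            next_send += interval
            delay = next_send - time.monotonic()
            if delay > 0:       # stay on schedule; never burst ahead
                time.sleep(delay)

Pacing slightly below the provisioned 1 Gbps leaves headroom for framing overhead on the Ethernet-over-SONET channel.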
DRAGON: Funded by NSF
Provide Cyberinfrastructure application support and
advanced network services on an experimental
infrastructure using emerging standards and technology
Advanced services
– Dynamic provisioning of deterministic end-to-end paths
– Rapid provisioning of application-specific net topologies
– Reserve resources and topology in advance, instantiate as needed
– Provide AAA
– Protocol, format, framing agnostic: direct transmission of any
optical signal
DRAGON
Design
All-optical transport in the metro core: Edge-to-edge
wavelength switching. Push the OEO demarc to the edge
Standardized GMPLS protocols for dynamic
provisioning of intra-domain connections
Develop inter-domain protocols to distribute
Transport Layer Capability Sets (TLCS) across
multiple domains
DRAGON
Research Areas
Inter-domain routing to advertise the TLCS:
Network Aware Resource Broker (NARB)
Ability to request deterministic network
resources
Virtual label switched routers: Translate GMPLS
requests into configuration commands to
switches via SNMP (see the sketch after this list)
Minimize OEO requirements for light-paths
Formalized definition language to instantiate
complex application topologies
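
A toy Python sketch of the virtual label switched router idea follows: it expands a GMPLS-style cross-connect request into per-switch configuration operations. The request fields, switch names, and command strings are all hypothetical; a real VLSR would issue SNMP SET operations against the switch's MIB rather than print strings:

from dataclasses import dataclass

@dataclass
class CrossConnect:
    switch: str      # management address of the Ethernet switch
    in_port: int
    out_port: int
    vlan: int        # VLAN tag standing in for a GMPLS label

def provision(path: list[CrossConnect]) -> list[str]:
    # Expand an end-to-end path into per-hop configuration commands.
    cmds = []
    for xc in path:
        cmds.append(f"{xc.switch}: create vlan {xc.vlan}")
        cmds.append(f"{xc.switch}: add ports {xc.in_port},{xc.out_port} "
                    f"to vlan {xc.vlan} (tagged)")
    return cmds

# Example: a two-hop path provisioned with VLAN 302 as the label.
for cmd in provision([CrossConnect("switch-a", 1, 24, 302),
                      CrossConnect("switch-b", 24, 3, 302)]):
    print(cmd)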
DRAGON
Network Points
University of Maryland
NASA GSFC
DC, Northeast
NCSA
ISI-East
Connection to Bossnet, MIT/Haystack
Note: Commercial partner is Movaz
StarLight
Exchange point: 1 GigE and 10 GigE for national and
international research networks (over 30 networks)
NSF TeraGrid (10 x 10 Gbps over I-WIRE), Extensible Terascale
Facility (ETF), NLR
UltraScienceNet (DOE)
Global Lambda Integrated Facility (GLIF):
– GEANT
– WIDE
– APAN
– SURFnet
– Many others
Calient DiamondWave switches at StarLight and
NetherLight facilities
Hybrid Optical Packet Infrastructure
(HOPI) Project
Architecture based on availability of optical infrastructure:
dark fiber acquisitions at national, regional, local level
Implement a hybrid of shared IP packet switching and
dynamically provisioned optical lambdas.
Infrastructure
– MPLS tunnels on Abilene
– Internet2 Wave on the NLR footprint
– Regional Optical Networks (RONs)
Model waves using deterministic paths (see the sketch after this list)
Provide basic service of 1 GigE or 10 GigE unidirectional
point-to-point path
Access through Abilene via a direct connection or an MPLS
L2VPN tunnel
Support 15-20 experiments, e.g., dynamic provisioning
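
As a minimal sketch of what selecting a deterministic path involves, the Python below finds the fewest-hop route whose links all still have a free wave of the requested capacity; the topology and capacities are hypothetical, loosely echoing the CERN-to-LA demonstration described next:

import heapq

def find_path(links, src, dst, gbps):
    # links: {(a, b): free capacity in Gbps} for bidirectional links.
    adj = {}
    for (a, b), cap in links.items():
        if cap >= gbps:                 # keep only links that can carry it
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    frontier, seen = [(0, src, [src])], set()
    while frontier:
        hops, node, path = heapq.heappop(frontier)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in adj.get(node, []):
            heapq.heappush(frontier, (hops + 1, nxt, path + [nxt]))
    return None                         # request blocks: no qualifying route

topo = {("LA", "Chicago"): 10, ("Chicago", "StarLight"): 10,
        ("StarLight", "CERN"): 10, ("LA", "Sunnyvale"): 0}
print(find_path(topo, "CERN", "LA", 10))
# -> ['CERN', 'StarLight', 'Chicago', 'LA']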
HOPI
Status
Deterministic path: CERN to LA
– Internet2
– GEANT
– CANARIE
– Others: StarLight, SURFnet
Address issues:
– Different technologies
– Crossing administrative domains
– Dynamic provisioning
http://hopi.internet2.edu
Optical Metro Network Initiative
(OMNInet)
Metropolitan 10 Gbps DWDM WAN and LAN photonic
switched network trial
Partnership of Nortel Networks, SBC Communications, and the
International Center for Advanced Internet Research
(iCAIR)/Northwestern University
Services: O-VPNs, dial-a-lambda service, router bypass
Emerging applications: Optical Grids, storage on-demand,
data mining, 3D teleconferencing, large-science apps,
visualization
OMNInet
Architecture
4 sites in Chicago
6 fiber spans
4 wavelength planes: switching without
wavelength translation (sketched below)
DWDM Lightpaths
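
Switching without wavelength translation implies the classic wavelength-continuity constraint: a lightpath must occupy the same wavelength plane on every fiber span it crosses. Below is a minimal first-fit Python sketch; the occupancy map is a hypothetical stand-in for the Chicago spans:

PLANES = 4  # wavelength planes available on each span

def first_fit_plane(route, in_use):
    # route: list of span ids; in_use: {span: set of busy planes}.
    # Return the lowest plane free on every span, or None (blocked).
    for plane in range(PLANES):
        if all(plane not in in_use.get(span, set()) for span in route):
            return plane
    return None

busy = {"span-1": {0}, "span-2": {0, 1}, "span-3": set()}
print(first_fit_plane(["span-1", "span-2", "span-3"], busy))  # -> 2

When no plane is free on every span the request blocks; OEO wavelength translation would relax the constraint, at the cost the all-optical design avoids.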
National LambdaRail (NLR)
National-scale member-owned/managed optical networking and
research facility
NLR Objective: Bridge the gap between optical networking
research and state-of-the-art applications research
NLR is a set of facilities, capabilities, and services supporting
multiple experimental and production networks for the U.S.
research community
Networks exist side-by-side on the same fiber but are physically
and operationally distinct
Virtuous circles: Participants dedicate optical capability from
campus labs to the NLR network; NLR works with RONs to
deliver NLR capabilities to campuses.
NLR
Characteristics
Experimental platform for research
– Optical switching and network layers
– 50% of capacity is reserved for research
– Experimental Support Center
Use high-speed Ethernet for WAN transport: the
first national-scale Ethernet deployment
Sparse backbone technology: Members develop
local optical networking and performance in
their areas
No Acceptable Use Policy (AUP-free)
NLR
Planned Capabilities
Point-to-point waves: 10 GigE LAN PHY, OC192 (Cisco systems)
Switched Ethernet network using Cisco switches
Experimental IP network using Cisco routers
Dark fiber for optical layer research
Traditional NOC services
Dense Wavelength Division Multiplexing (DWDM) national
optical footprint: Capacity of 40 wavelengths per
fiber pair deployed on 10,000 miles of dark fiber
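
For scale, a short Python calculation of what that footprint implies, assuming each wave runs at 10 Gbps to match the 10 GigE/OC192 services above and, purely for illustration, that every wave is lit:

WAVES_PER_PAIR = 40     # DWDM wavelengths per fiber pair (from the slide)
GBPS_PER_WAVE = 10      # 10 GigE / OC192 per wave (assumed)
ROUTE_MILES = 10_000    # dark-fiber footprint (from the slide)

total_gbps = WAVES_PER_PAIR * GBPS_PER_WAVE
print(f"{total_gbps} Gbps ({total_gbps / 1000:.1f} Tbps) per fully lit fiber pair")
print(f"over roughly {ROUTE_MILES:,} route-miles of dark fiber")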
NLR
Deployment
Initial 4 lambdas
– One lambda for national switched Ethernet experimental
network
– One lambda for national 10 Gbps IP network
– One lambda for quick start facility for new research
projects
– One lambda for Internet2 HOPI testbed
Additional lambdas provisioned as needed
National deployment (California to DC to
Florida) by August 2004
http://www.nationallambdarail.org