
Creating a Global Lambda GRID: International
Advanced Networking and StarLight
Presented by
Joe Mambretti, Director,
International Center for Advanced Internet Research (www.icair.org)
Director, Metropolitan Research and Education Network (www.mren.org)
([email protected])
Based on StarLight Presentation Slides by Tom DeFanti –
PI, STAR TAP, Director EVL, University of Illinois, Chicago
([email protected])
APAN Conference
Phuket, Thailand
January 24, 2002
Introduction to iCAIR:
Accelerating Leading Edge Innovation
and Enhanced Global Communications
through Advanced Internet Technologies,
in Partnership with the Global Community
• Creation and Early Implementation of Advanced Networking Technologies - The Next Generation Internet, All Optical Networks, Terascale Networks
• Advanced Applications, Middleware and Metasystems, Large-Scale Infrastructure, NG Optical Networks and Testbeds, Public Policy Studies and Forums Related to NG Networks
Tom DeFanti, Maxine Brown
Principal Investigators, STAR TAP
Linda Winkler, Bill Nickless, Alan Verlo, Andy Schmidt
STAR TAP Engineering
Joe Mambretti, Tim Ward
StarLight Facilities, et al
Who is StarLight?
StarLight is jointly managed and engineered by:
• Electronic Visualization Laboratory (EVL), University of Illinois at Chicago
– Tom DeFanti, Maxine Brown, Andy Schmidt, Jason Leigh, Cliff Nelson, and Alan Verlo
• International Center for Advanced Internet Research (iCAIR), Northwestern University
– Joe Mambretti, David Carr and Tim Ward
• Mathematics and Computer Science Division (MCS), Argonne National Laboratory
– Linda Winkler and Bill Nickless; Rick Stevens and Charlie Catlett
• In Partnership with Bill St. Arnaud, Kees Neggers, Olivier Martin, etc.
What is StarLight?
StarLight is an experimental optical infrastructure and proving ground for network services optimized for high-performance applications.
StarLight leverages $32M (FY2002-3) in experimental networks (I-WIRE, TeraGrid, OMNInet, SURFnet, CA*net4, DataTAG).
[Photo: Chicago view from 710 N. Lake Shore Drive -- Abbott Hall, Northwestern University]
Where is StarLight?
• Located in Northwestern’s Downtown Campus: 710 N. Lake Shore Drive
[Map: StarLight’s location relative to carrier POPs and the Chicago NAP]
StarLight Infrastructure
• StarLight is a large research-friendly co-location facility with space, power and fiber that is being made available to university and national/international network collaborators as a point of presence in Chicago
StarLight Infrastructure
• StarLight is a production GigE and trial 10GigE switch/router facility for high-performance access to participating networks
StarLight is Operational
Equipment at StarLight
• StarLight Equipment installed:
– Cisco 6509 with GigE
– IPv6 Router
– Juniper M10 (GigE and OC-12 interfaces)
– Cisco LS1010 with OC-12 interfaces
– Data mining cluster with GigE NICs
– Visualization/video server cluster (on order)
• SURFnet’s 12000 GSR
• Multiple vendor plans for 10GigE, DWDM and Optical Switch/Routing in the future
Carriers at StarLight: SBC-Ameritech, Qwest, AT&T, Global Crossing, Teleglobe…
StarLight Connections
• STAR TAP (NAP) connection with two OC-12c ATM circuits
• The Netherlands (SURFnet) has two OC-12c POS circuits from Amsterdam and a 2.5Gbps OC-48 to StarLight this month
• Abilene will soon connect via two GigE circuits
• Canada (CA*net3/4) is connected via GigE, soon 10GigE
• I-WIRE, a State-of-Illinois-funded dark-fiber multi-10GigE DWDM effort involving Illinois research institutions, is being built; 36 strands to the Qwest Chicago PoP are in
• NSF TeraGrid (DTF) 4x10GigE network is being engineered by PACI and Qwest
• NORDUnet is now sharing StarLight’s OC-12 ATM connection
• TransPAC/APAN is bringing in an OC-12, later an OC-48
• CERN’s OC-48 is in the advanced funding stages
Evolving StarLight Optical Network Connections
[Map: planned optical connections radiating from Chicago (StarLight) -- CA*net4 via Vancouver and Seattle; AsiaPacific links at Seattle/Portland and Los Angeles/San Diego (SDSC); SURFnet and CERN to the east via NYC; plus San Francisco, U Wisconsin, IU, PSC, NCSA, Atlanta, and AMPATH]
Source: Maxine Brown 12/2001
StarLight Services and Locations

                                   AADS NAP                  StarLight                  Qwest
                                   225 West Randolph St      710 North Lakeshore Dr     455 North Cityfront Plaza
IPv4 and IPv6 STAR TAP
  Transit (AS 10764)               Int’l R&E Networks        Int’l R&E Networks         FedNets / NGIX
ATM PVC Mesh to
  Other Participants               Yes                       -                          -
GigE 802.1q Policy-Free VLANs      -                         Yes                        FedNets / NGIX
Co-Location Space, Power           -                         Yes                        Qwest Customers
Fiber Patches                      -                         $T&M Install, $0 Monthly   $ Install, $ Monthly
TeraNodes in Action: Interactive Visual Tera Mining
(Visual Data Mining of Tera Byte Data Sets)
[Diagram: data mining servers (TND-DSTP) in Chicago (NWU, NCSA, ANL, etc.) and Amsterdam feed parallel data mining correlation (TNC) and parallel visualization (TNV) stages producing Tera Map and Tera Snap views]
– The problem is to touch a terabyte of data interactively and to visualize it
– 100 Mb/s: 24 hours to access 1 terabyte of data
– 500 Mb/s: 4.8 hours using a single PC
– 10 Gb/s: 14.4 minutes using a 20-node PC cluster
– Need to parallelize data access and rendering (see the sketch below)
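These access times follow from simple arithmetic; the sketch below (a quick check I've added, assuming 1 TB = 2^40 bytes and decimal line rates) roughly reproduces the slide's figures.

```python
# Back-of-envelope check of the slide's terabyte access times.
# Assumption: 1 TB = 2**40 bytes, link rates in decimal bits/s.

TERABYTE_BITS = 8 * 2**40  # one terabyte expressed in bits

def transfer_seconds(rate_bps: float) -> float:
    """Seconds needed to move one terabyte at the given line rate."""
    return TERABYTE_BITS / rate_bps

for label, rate_bps in [("100 Mb/s", 100e6),
                        ("500 Mb/s", 500e6),
                        ("10 Gb/s", 10e9)]:
    t = transfer_seconds(rate_bps)
    print(f"{label}: {t / 3600:5.1f} hours ({t / 60:6.1f} minutes)")

# Prints ~24.4 h, ~4.9 h, and ~14.7 min respectively, in line with the
# slide's 24 hours / 4.8 hours / 14.4 minutes estimates.
```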
Prototyping the Global Lambda Grid in Chicago:
A Photonic-Switched Experimental Network of Light Paths
[Diagram: application clusters connected by dynamically allocated lightpaths (λ1, λ2) across switch fabrics, with physical monitoring feeding a new control plane]
Multi-leveled Architecture
Metros As International Nexus Points
[Diagram: StarLight as a prototype Global Lambda Grid nexus linking CA*net4, ANL, APAN (Tokyo?), CERN/DataTAG, CalTech, I-WIRE, NetherLight in Amsterdam, NCSA, SDSC, the TeraGrid, an AsiaPacific metro, Optical Metro Europe, and Miami toward South America(?); metro nodes are built from CSW and ASW switch stages, clusters, and OFAs]
10GE Links
[Diagram: IEEE 802.3 LAN PHY interfaces (e.g., 15xx nm) carry 10GE serial signals, one per wavelength (λ1-λ4), with multiple λ per multiwavelength fiber; CSW and ASW stages interconnect GE, 10GE, and DWDM links (N*N*N GE links); a multiwavelength optical amplifier with a power spectral density processor compares source and measured PSD to track multiple optical impairment issues, including accumulations, alongside optical λ monitors for wavelength precision, etc.; computer clusters attach at 1GE per node and scale to 10s, 100s, 1000s of nodes]
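As a rough illustration I've added (numbers are examples, not from the slide), aggregate fiber capacity scales linearly with the wavelength count, which is what lets 1GE-per-node clusters grow into the hundreds or thousands of nodes:

```python
# Illustrative only: aggregate capacity of a multiwavelength fiber
# carrying one 10GE serial signal per lambda, and how many such
# wavelengths a 1GE-per-node cluster could fill.
lambdas_per_fiber = 4      # e.g. λ1..λ4 as in the diagram
gbps_per_lambda = 10       # 10GE LAN PHY per wavelength
cluster_nodes = 1000       # "10s, 100s, 1000s of nodes" at 1 GE each

fiber_gbps = lambdas_per_fiber * gbps_per_lambda
print(f"One fiber: {fiber_gbps} Gb/s")                   # 40 Gb/s
print(f"Cluster offered load: {cluster_nodes} Gb/s")     # 1000 Gb/s
print(f"Wavelengths to match: {cluster_nodes // gbps_per_lambda}")  # 100
```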
Optical Layer Control Plane
[Diagram: client controllers in the client layer control plane reach the optical layer control plane through a UNI; optical-layer controllers interconnect over CI interfaces, with an I-UNI between domains; the client layer traffic plane, linking client devices, rides over the optical layer's switched traffic plane]
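To make the layering concrete, here is a minimal toy sketch of the kind of lightpath-setup request a client-layer controller might pass across the UNI. Every name in it is hypothetical; this is not an actual UNI API.

```python
# Toy sketch of a UNI-style lightpath request; all names are
# hypothetical stand-ins for whatever the real control plane defines.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LightpathRequest:
    src_client: str                        # requesting client device
    dst_client: str                        # far-end client device
    bandwidth_gbps: int                    # e.g. 10 for a 10GE-mapped lambda
    wavelength_nm: Optional[float] = None  # None = optical layer chooses

def submit_over_uni(req: LightpathRequest) -> str:
    """Pretend UNI call: validate the request and 'provision' a path."""
    if req.bandwidth_gbps <= 0:
        raise ValueError("bandwidth must be positive")
    chosen = req.wavelength_nm if req.wavelength_nm else 1550.0  # C-band default
    return (f"lightpath {req.src_client} -> {req.dst_client} "
            f"on {chosen} nm at {req.bandwidth_gbps} Gb/s")

print(submit_over_uni(LightpathRequest("cluster-A", "cluster-B", 10)))
```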
OMNInet Technology Trial: January 2002
[Diagram: four Chicago sites -- Northwestern U, UIC, StarLight, and CA*net3--Chicago -- each with an optical switching platform, a Passport 8600, and an 8x1GE application cluster, interconnected by 2x10GE links; the StarLight site also houses an OPTera Metro 5200]
• A four-site network in Chicago -- the first 10GE service trial!
• A test bed for all-optical switching and advanced high-speed services
• Partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL
StarLight On Ramps: Proposed Development Phase I
Gigabit Ethernet NICs to 10 Gigabit Ethernet MAN
[Diagram: TND3, TNV3, and TNC3 clusters at UIC/LAC, EVL, and StarLight attach to Cisco 6509s, which uplink over 10GigE and DWDM to a 6513 at StarLight carrying 2x10GigE and 10x10GigE]
TNDs = Datamining Clusters
TNVs = Visualization Clusters (gigapixel/sec)
TNCs = TeraGrid On-Ramps
StarLight On Ramps: Proposed Development Phase II
10 Gigabit Ethernet to 2x80 Gb MAN
[Diagram: the LAC, EVL, and UIC clusters (TND3, TNV3, TNC3) move to 8x10GigE attachments through 6509s, with 2x40GigE DWDM runs (UIC fiber) into a 6513 and an O-E-O or O-O-O switch at StarLight carrying 10x10GigE]
TND: Upgrade NICs in TND clusters to (8)x 10GigE
TNV: Upgrade NICs in TNV3 clusters to (8)x 10GigE
O-E-O or O-O-O: Optical Switch at StarLight(?)
DWDM: (2) 40Gb(?) and (8) 10Gb
NSF’s Distributed Terascale Facility (DTF)
TeraGrid Interconnect Objectives
• Traditional: Interconnect sites/clusters using a WAN
– WAN bandwidth balances cost and utilization; the objective is to keep utilization high to justify the high cost of WAN bandwidth
• TeraGrid: Build a wide-area “machine room” network
– TeraGrid WAN objective is to handle peak M2M traffic
– Partnering with Qwest to begin with 40 Gb/s and grow to ≥80 Gb/s within 2 years
• Long-Term TeraGrid Objective
– Build a petaflops-capable distributed system, requiring petabytes of storage and a terabit/second network
– The current objective is to step toward this goal
– A terabit/second network will require many lambdas operating at OC-768 minimum, and its architecture is not yet clear (see the lambda count sketch below)
Source: Rick Stevens 12/2001
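A one-line calculation makes the "many lambdas" point concrete; the sketch below (added here, with line rates rounded to 10 and 40 Gb/s) counts the wavelengths needed for one terabit per second.

```python
# How many parallel wavelengths does a terabit/second network need?
# Line rates rounded: OC-192 ~ 10 Gb/s, OC-768 ~ 40 Gb/s.
import math

target_gbps = 1000  # 1 Tb/s
for name, rate_gbps in [("OC-192", 10), ("OC-768", 40)]:
    print(f"{name}: {math.ceil(target_gbps / rate_gbps)} lambdas")

# OC-192: 100 lambdas; OC-768: 25 lambdas -- even at OC-768,
# "many lambdas" are required, as the slide notes.
```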
Trends: Cyberinfrastructure
• Advent of regional dark fiber infrastructure
– Community owned and managed (via 20-yr IRUs)
– Typically supported by state or local resources
• Lambda services (IRUs) are viable replacements for bandwidth service contracts
– Need to be structured with built-in capability escalation (BRI)
– Need strong operating capability to exploit this
• Regional groups are moving faster (much faster!) than national network providers and agencies
– A viable path to putting bandwidth on a Moore’s law curve
– Source of new ideas for national infrastructure architecture
Source: Rick Stevens 12/2001
13.6 TF Linux TeraGrid
[Diagram: the four DTF sites and their interconnect, per the slide's layout --
– Caltech: 32 nodes, 0.5 TF, 0.4 TB memory, 86 TB disk; 256p HP X-Class, 128p HP V2500, 92p IA-32, HPSS; links to Calren and NTON
– Argonne: 64 nodes, 1 TF, 0.25 TB memory, 25 TB disk; 574p IA-32 Chiba City, 128p Origin, HR Display & VR Facilities, HPSS, Extreme Black Diamond; links to vBNS, Abilene, Calren, ESnet
– SDSC: 256 nodes, 4.1 TF, 2 TB memory, 225 TB disk; 1176p IBM SP Blue Horizon, Sun Starcat, HPSS; Juniper M40 with OC-12/OC-3 links to vBNS, Abilene, MREN
– NCSA: 500 nodes, 8 TF, 4 TB memory, 240 TB disk; 1024p IA-32, 320p IA-64, 1500p Origin, Sun E10K, UniTree; Juniper M40 with OC-12/OC-3 links to ESnet, HSCC, MREN/Abilene, Starlight
– New clusters are quad-processor McKinley servers (128p @ 4GF with 8 or 12 GB memory/server, plus a 64p tier) behind Fibre Channel switches and routers or switch/routers, on Myrinet Clos spines; link legend: 32x 1GbE, 64x/32x Myrinet, 32x/8x FibreChannel, 10 GbE; sites interconnect through a Juniper M160 over OC-48/OC-12]
Source: Rick Stevens 12/2001
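The headline figure is simply the sum of the four site peaks; a trivial check I've added (site numbers taken from the slide):

```python
# The 13.6 TF headline is the sum of the four DTF site peaks.
site_teraflops = {"Caltech": 0.5, "Argonne": 1.0, "SDSC": 4.1, "NCSA": 8.0}
print(f"Total: {sum(site_teraflops.values()):.1f} TF")  # -> 13.6 TF
```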
TeraGrid Network Architecture
• Cluster interconnect using a multi-stage switch/router tree with multiple 10 GbE external links
• Separation of cluster aggregation and site border routers is necessary for operational reasons
• Phase 1: Four routers or switch/routers
– Each with three OC-192 or 10 GbE WAN PHY interfaces
– MPLS to allow for >10 Gb/s between any two sites
• Phase 2: Add core routers or switch/routers
– Each with ten OC-192 or 10 GbE WAN PHY interfaces
– Expandable with additional 10 Gb/s interfaces
Source: Rick Stevens 12/2001
Option 1: Full Mesh with MPLS
[Diagram: DWDM spans link Los Angeles (One Wilshire, carrier fiber collocation facility) and Chicago (455 N. Cityfront Plaza, Qwest fiber collocation facility) over 2200 mi of OC-192, with metro legs to the Qwest San Diego POP and on to Caltech and SDSC (115/140/25 mi), and to 710 N. Lakeshore (Starlight, 1 mi) and on to ANL and NCSA (20 mi legs); at each of the four sites (Caltech, SDSC, ANL, NCSA) a cluster aggregation switch/router sits behind a site border router or switch/router at 10 GbE, over Ciena CoreStream DWDM (metro DWDM TBD), with other site resources attached alongside (an LSP-count sketch follows below)]
Source: Rick Stevens 12/2001
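A full mesh among the four sites implies a fixed LSP count; the sketch below (added here, site names from the slide) enumerates the n*(n-1) unidirectional label-switched paths.

```python
# Option 1 implies one unidirectional MPLS LSP per ordered pair of
# distinct sites: n sites -> n*(n-1) LSPs (12 for the four DTF sites).
from itertools import permutations

sites = ["Caltech", "SDSC", "ANL", "NCSA"]
lsps = list(permutations(sites, 2))
for src, dst in lsps:
    print(f"LSP {src} -> {dst}")
print(f"{len(lsps)} LSPs total")  # 4 * 3 = 12
```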
Expansion Capability: “StarLights”
[Diagram: the same Los Angeles (One Wilshire) to Chicago (455 N. Cityfront Plaza) OC-192 DWDM layout, extended with regional fiber aggregation points where additional sites and networks can attach; at each hub either an IP router (packets) or a lambda router (circuits) terminates the Ciena CoreStream DWDM (metro DWDM TBD), in front of the site border routers, cluster aggregation switch/routers, and the Caltech, SDSC, ANL, and NCSA clusters plus other site resources; metro legs again reach 710 N. Lakeshore (StarLight) and the Qwest San Diego POP]
Source: Rick Stevens 12/2001
Leverage Regional/Community Fiber: Experimental Interconnects
Illinois’ I-WIRE Logical and Transport Topology
[Diagram: dark-fiber topology linking Starlight (NU-Chicago), Argonne, Qwest at 455 N. Cityfront, UC Gleacher Center at 450 N. Cityfront, UIC, McLeodUSA (151/155 N. Michigan, Doral Plaza), Level(3) at 111 N. Canal, the Illinois Century Network, the James R. Thompson Ctr/City Hall/State of IL Bldg, IIT, UChicago, and UIUC/NCSA, with strand counts of 2-18 per span]
Next Steps:
– Fiber to FermiLab, other sites
– Additional fiber to ANL, UIC
– DWDM terminals at Level(3), McLeodUSA locations
– Experiments with OC-768, optical switching/routing
Source: Rick Stevens 12/2001
MREN: Metropolitan Research and Education Network
• An Advanced Network for Advanced Applications
• Designed in 1993; initial production in 1994; managed at L2 & L3
• Created by a consortium of research organizations -- over 20
• Partner to STAR TAP/StarLight, I-WIRE, NGI and R&E net initiatives, Grid and Globus initiatives, etc.
• Model for Next Generation Internets
• Developed the world’s first GigaPOP
• Next – the “Optical MREN”
• Soon – Optical ‘TeraPOP’ Services
GigaPoPs → TeraPoPs (OIX)
[Map: Pacific Lightrail and the TeraGrid interconnect linking key hubs and critical-mass sites -- the top 10 research universities, the next 15 research universities, centers and labs, and international 10gig & λ connections]
GigaPoP data from Internet2; map by Rick Stevens, Charlie Catlett
Source: Ron Johnson 12/2001 (draft 12/4/01)
CA*net 4 Physical Architecture
By Bill St. Arnaud (Provider of Excellence in Advanced Networking)
[Diagram: a large channel WDM system with OBGP switches spans Vancouver, Calgary, Regina, Winnipeg, Toronto, Ottawa, Montreal, Fredericton, Halifax, Charlottetown, and St. John’s, with drops to Seattle, Chicago, New York, Los Angeles, Miami, and Europe; each site can take a dedicated wavelength or SONET channel, with an optional Layer 3 aggregation service]
NSF ANIR
• NSF will emphasize support for domestic and international collaborations involving resource-intensive applications and leading-edge optical wavelength telecommunication technologies
• But NSF will not abandon needed international collaboration services (e.g., STAR TAP)
StarLight Thanks
• StarLight planning, research, collaborations, and outreach efforts at the University of Illinois at Chicago are made possible, in part, by funding from:
– National Science Foundation (NSF) awards ANI-9980480, ANI-9730202, EIA-9802090, EIA-9871058, and EIA-0115809
– NSF Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement ACI-9619019 to the National Computational Science Alliance
– State of Illinois I-WIRE Program, and UIC cost sharing
• Northwestern University for providing space, engineering and management
• Argonne National Laboratory for StarLight and I-WIRE network engineering and planning leadership
• NSF/ANIR, Bill St. Arnaud of CANARIE, Kees Neggers of SURFnet, and Olivier Martin and Harvey Newman of CERN for global networking leadership
• NSF/ACIR and NCSA/SDSC for DTF/TeraGrid opportunities
• UCAID/Abilene for Internet2 and their ITN
• CA*net3/4 and CENIC/Pacific Light Wave for planned North America and West Coast transit
Coming…
iGrid 2002
Grid-Intensive Application Control of Lambda-Switched Networks
www.startap.net/igrid2002
University of Illinois at Chicago and Indiana University, in collaboration with The GigaPort Project and SURFnet5 of The Netherlands
September 2002, Amsterdam, The Netherlands
Maxine Brown
STAR TAP/StarLight co-Principal Investigator
Associate Director, Electronic Visualization Laboratory
A showcase of applications that are “early adopters” of very high bandwidth national and international networks
Further Information
www.startap.net/starlight
www.evl.uic.edu
www.icair.org
www.mren.org
www.canarie.ca
www.anl.gov
www.surfnet.nl
www.globalgridforum.org
www.globus.org
www.ietf.org
www.ngi.gov
[Book cover: “The Grid,” ed. by Foster & Kesselman]
“Bring Us Your Lambdas!”
www.startap.net/starlight
[email protected]