ESnet Update
Joint Techs Meeting
Minneapolis, MN
Joe Burrescia
ESnet General Manager
2/12/2007
Office of Science Collaborators (or “Why I am Here in the Frozen North”)
• In FY2008, the Office of Science (SC) supports the research of about 25,500 Ph.D.’s, Postdoctoral Research Associates, and Graduate Students
• Half of the more than 21,500 users of SC’s scientific facilities in FY2008 will come from universities
SC National User Facilities – User Affiliations (from Dr. Orbach’s FY2008 Budget Request for the Office of Science):
– Universities: 49%
– DOE Laboratories: 30%
– Industry, international collaborators, and others: 21%
Collaborative Effort: OSCARS
• On-demand Secure Circuits and Advance Reservation System (OSCARS)
• Collaborative effort status
  – Working with Internet2 and DRAGON to support interoperability between OSCARS/BRUW and DRAGON
  – Working with Internet2, DRAGON, and Terapaths to determine an appropriate interoperable AAL framework (this is in conjunction with GEANT2's JRA5)
  – Working with the DICE (Dante, Internet2, CANARIE, ESnet) Control Plane group to determine schema and methods of distributing topology and reachability information
• Completed porting OSCARS from Perl to Java to better support web services
  – This is now the common code base for OSCARS and Internet2's BRUW
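To make the advance-reservation idea concrete, the sketch below shows, in Python, roughly what a circuit reservation carries: endpoints, a bandwidth guarantee, and a start/end window. The field names, the `request_circuit` helper, and the endpoint identifiers are illustrative assumptions, not the actual OSCARS web-service API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical illustration of an advance circuit reservation --
# field names and endpoints are assumptions, not the real OSCARS schema.
@dataclass
class CircuitReservation:
    source: str          # ingress edge router / site
    destination: str     # egress edge router / site
    bandwidth_mbps: int  # guaranteed bandwidth for the circuit
    start: datetime      # when the circuit should come up
    end: datetime        # when the reservation expires

def request_circuit(reservation: CircuitReservation) -> str:
    """Stand-in for submitting a reservation to a web service;
    it only validates the window and returns a made-up ticket ID."""
    if reservation.end <= reservation.start:
        raise ValueError("reservation must end after it starts")
    return f"resv-{reservation.source}-{reservation.destination}-{int(reservation.start.timestamp())}"

if __name__ == "__main__":
    resv = CircuitReservation(
        source="fnal-mr1",            # hypothetical site identifiers
        destination="starlight-sdn1",
        bandwidth_mbps=1000,
        start=datetime(2007, 3, 1, 8, 0),
        end=datetime(2007, 3, 1, 8, 0) + timedelta(hours=6),
    )
    print(request_circuit(resv))
```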
Collaborative Effort: perfSONAR
• perfSONAR is a global collaboration to design, implement, and deploy a network measurement framework
• Collaborators
  – ARNES, Belnet, CARnet, CESnet, Dante, University of Delaware, DFN, ESnet, FCCN, FNAL, GARR, GEANT2, Georgia Tech, GRNET, Internet2, IST, POZNAN Supercomputing Center, Red IRIS, Renater, RNP, SLAC, SURFnet, SWITCH, Uninett, and others…
• ESnet deployed services
  – Link Utilization Measurement Archive
  – Virtual Circuit Status
• In development
  – Active latency and bandwidth tests
  – Topology service
  – Additional visualization capabilities
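The deployed Link Utilization Measurement Archive is essentially a queryable store of per-interface utilization samples. The sketch below is a conceptual illustration only: the real perfSONAR Measurement Archive is a SOAP/XML web service, and the interface name, `query_link_utilization` helper, and in-memory "archive" here are all assumptions made for illustration.

```python
from datetime import datetime, timedelta

# Conceptual sketch of a link-utilization query against a measurement
# archive. Not the real perfSONAR protocol; all names are assumptions.

def query_link_utilization(archive, interface, start, end):
    """Return (timestamp, bits_per_second) samples for one interface."""
    return [
        (ts, bps)
        for ts, iface, bps in archive          # iterable of stored samples
        if iface == interface and start <= ts <= end
    ]

if __name__ == "__main__":
    now = datetime(2007, 2, 12, 12, 0)
    fake_archive = [                           # stand-in for an MA session
        (now - timedelta(minutes=10), "chi-sl-sdn1:xe-0/1/0", 4.2e9),
        (now - timedelta(minutes=5),  "chi-sl-sdn1:xe-0/1/0", 6.8e9),
        (now,                         "chi-sl-sdn1:xe-0/1/0", 9.1e9),
    ]
    samples = query_link_utilization(
        fake_archive, "chi-sl-sdn1:xe-0/1/0", now - timedelta(hours=1), now
    )
    for ts, bps in samples:
        print(ts.isoformat(), f"{bps / 1e9:.1f} Gb/s")
```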
Current ESnet Network Status
• Since the July Joint Techs meeting…
  o ESnet transited LHC traffic between FNAL and CERN (via USLHCnet) for the first time this month
  o 10 GE NLR connection between Seattle and Chicago
    - In production, currently carrying LHC traffic to FNAL
    - Backup for IP circuits
  o 10 GE NLR connection between Chicago and Washington, DC
    - Has been accepted
    - Will serve as one transition point between the current ESnet backbone and ESnet4
  o Chicago MAN dark fiber physically in place
  o Direct peering between ESnet and Latin America on both coasts
    - CUDI in San Diego
    - AMPATH at MANLAN
Current ESnet Topology
[Map of the current ESnet topology: the Qwest-supplied 10 Gbps backbone (production IP core) and NLR-supplied 10 Gbps circuits (NLR core), with backbone hubs at Sunnyvale, Albuquerque, San Diego, New York, and Washington, DC; Metro Area Networks; primary DOE Labs; and major research and education (R&E) network peering points. Lab-supplied international connections and peerings to Canada, Europe, CERN, Russia and China, Asia-Pacific, Australia, Latin America, and South America are shown. Distance annotations: 2700 miles / 4300 km and 1200 miles / 1900 km.]
ESnet Site Availability, 3/2006 through 2/2007
[Bar chart of per-site availability and monthly outage minutes, Mar 2006 – Feb 2007; dually connected sites are marked, and thresholds for “3 nines” (>99.5%), “4 nines” (>99.95%), and “5 nines” (>=99.995%) are annotated.]
Site availability (percent):
– “3 nines” (>99.5%): ORAU 99.660, Lamont 99.718, Ames-Lab 99.778, Bechtel 99.811, OSTI 99.852, NOAA 99.855, DOE-GTN 99.889, INL 99.893
– “4 nines” (>99.95%): NREL 99.968, BNL 99.969, Allied 99.972, Pantex 99.976, SNLA 99.977, LANL 99.977, DOE-ALB 99.977, JLab 99.985, IARC 99.989, ANL 99.989, MIT 99.990, LLNL-DC 99.991, LBNL-DC 99.991, LANL-DC 99.991, Yucca 99.992, PPPL 99.992, SRS 99.994
– “5 nines” (>=99.995%): LLNL 99.995, FNAL 99.995, BJC 99.996, DOE-NNSA 99.997, NERSC 99.998, SNLL 99.999, SLAC 99.999, PNNL 99.999, ORAU-DC 99.999, LBL 99.999, JGI 99.999, GA 99.999, ORNL 100.000
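For reference, the “nines” thresholds on the chart translate directly into annual outage budgets; a minimal arithmetic sketch (the threshold percentages come from the chart, the conversion is simply one minus availability times the minutes in a year):

```python
# Convert an availability percentage into outage minutes over one year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def outage_minutes(availability_percent: float) -> float:
    return (1 - availability_percent / 100) * MINUTES_PER_YEAR

# Thresholds from the chart: "3 nines" >99.5%, "4 nines" >99.95%, "5 nines" >=99.995%
for label, pct in [("3 nines", 99.5), ("4 nines", 99.95), ("5 nines", 99.995)]:
    print(f"{label} ({pct}%): <= {outage_minutes(pct):.0f} outage minutes/year")
# 3 nines (99.5%): <= 2628 outage minutes/year
# 4 nines (99.95%): <= 263 outage minutes/year
# 5 nines (99.995%): <= 26 outage minutes/year
```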
ESnet4 Status
• ESnet4 progress is being made:
  o All hardware needed to deploy Phase 1 has been ordered
    - At a cost of about $4M
    - This hardware continues to arrive at LBNL
  o A transition plan and schedule are in place; we are scheduled to be fully transitioned off the Qwest backbone circuits by September 2007
    - ~30 new 10G circuits in 2007 (WAN and MAN) – 1 new 10GE circuit every 12 days (see the quick check after this list)
    - ~40 new 10G circuits in 2008 (WAN and MAN) – 1 new 10GE circuit every 9 days
  o The first ESnet4 Science Data Network switch has just been installed in New York City; it is connected to the Level3 Infinera gear
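As referenced in the transition bullet above, the circuit cadence is just the year divided by the circuit count; a quick check of that arithmetic (circuit counts from this slide, a 365-day year assumed):

```python
# Deployment cadence implied by the circuit counts on this slide.
for year, circuits in [(2007, 30), (2008, 40)]:
    print(f"{year}: {circuits} circuits -> one every {365 / circuits:.0f} days")
# 2007: 30 circuits -> one every 12 days
# 2008: 40 circuits -> one every 9 days
```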
ESnet4 IP + SDN Configuration, April 2007
[Map of the planned ESnet4 IP + SDN configuration for April 2007; all circuits are 10 Gb/s. The map marks ESnet IP switch/router hubs, IP switch-only hubs, and SDN switch hubs; Layer 1 optical nodes at eventual ESnet Points of Presence and those not currently in ESnet plans; and lab sites. Link types: ESnet IP core, ESnet Science Data Network core, ESnet SDN core on existing NLR links, lab-supplied links, LHC-related links, MAN links, and international IP connections.]
ESnet4 IP + SDN Configuration, September 2007
[Map of the planned configuration for September 2007; all circuits are 10 Gb/s, with one OC48 link also labeled. Hub, node, and link-type legend as in the April 2007 map.]
ESnet4 IP + SDN Configuration, September 2008
[Map of the planned configuration for September 2008; all circuits are 10 Gb/s or multiples thereof, with one OC48 link also labeled and the ESnet IP core carrying a single wave. Hub, node, and link-type legend as in the April 2007 map.]
ESnet4 IP + SDN, 2011 Configuration
[Map of the planned 2011 configuration; individual core links are annotated with wave counts (values of 3, 4, 5, and 6 appear, and one annotation reads “(>1λ)”), and one OC48 link is also labeled. Hub, node, and link-type legend as in the April 2007 map.]
ESnet4 Built Out
Core networks: 50-60 Gbps by 2009-2010 (10 Gb/s circuits), 500-600 Gbps by 2011-2012 (100 Gb/s circuits)
[Map of the built-out ESnet4: a production IP core (10 Gbps) and a Science Data Network core (20-30-40-50 Gbps), with IP core hubs, SDN hubs, possible hubs, primary DOE Labs, high-speed cross-connects with Internet2/Abilene, and MANs (20-60 Gbps) or backbone loops for site access. The core network fiber path is ~14,000 miles / 24,000 km; distance annotations of 1625 miles / 2545 km and 2700 miles / 4300 km are also shown. International connections include Canada (CANARIE), Asia-Pacific, GLORIAD (Russia and China), CERN (30+ Gbps), Europe (GEANT), Australia, Latin America, and South America (AMPATH).]
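The core capacity figures above are just the per-circuit rate times the number of waves on a path; a minimal sketch of that arithmetic, assuming the 50-60 Gbps and 500-600 Gbps ranges correspond to 5-6 waves per core path (the wave count is inferred from this slide's numbers, not stated explicitly):

```python
# Aggregate core capacity = waves per path x per-circuit rate.
def core_capacity_gbps(waves: int, circuit_rate_gbps: int) -> int:
    return waves * circuit_rate_gbps

# 2009-2010: 5-6 waves of 10 Gb/s circuits -> 50-60 Gbps
print(core_capacity_gbps(5, 10), core_capacity_gbps(6, 10))    # 50 60
# 2011-2012: 5-6 waves of 100 Gb/s circuits -> 500-600 Gbps
print(core_capacity_gbps(5, 100), core_capacity_gbps(6, 100))  # 500 600
```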