ESnet Update
ESnet Update
Winter 2008
Joint Techs Workshop
Joe Burrescia
ESnet General Manager
Energy Sciences Network
Lawrence Berkeley National Laboratory
January 21, 2008
Networking for the Future of Science
ESnet 3 with Sites and Peers (Early 2007)

[Network map: the ESnet IP core (packet-over-SONET optical ring and hubs) and the ESnet Science Data Network (SDN) core connect DOE labs and facilities (LBNL, NERSC, SLAC, LLNL, SNLL, JGI, PNNL, LANL, SNLA, FNAL, ANL, BNL, ORNL, PPPL, JLab, and others) with R&E and commercial peering points. International peers include GÉANT (France, Germany, Italy, UK, etc.), SINet (Japan), CA*net4 (Canada), AARNet (Australia), TANet2/ASCC (Taiwan), Kreonet2 (Korea), GLORIAD (Russia, China), SINGAREN, Russia (BINP), CERN (USLHCnet, DOE+CERN funded), and AMPATH (S. America), with NSF/IRNC-funded links and high-speed peering points with Internet2/Abilene.]

42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), Joint sponsored (3), Other sponsored (NSF LIGO, NOAA), Laboratory sponsored (6).

Link legend: International (high speed); 10 Gb/s SDN core; 10 Gb/s IP core; 2.5 Gb/s IP core; MAN rings (≥ 10 Gb/s); Lab-supplied links; OC12 ATM (622 Mb/s); OC12 / GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less.

ESnet 3 Backbone as of January 1, 2007

[Backbone map. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); MAN rings (≥ 10 Gb/s); Lab-supplied links; ESnet hubs and future ESnet hubs.]
ESnet 4 Backbone as of April 15, 2007

[Backbone map; hubs labeled: Boston, Cleveland. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); Lab-supplied links; ESnet hubs and future ESnet hubs.]

ESnet 4 Backbone as of May 15, 2007

[Backbone map; hubs labeled: Boston, Cleveland. Legend as above.]

ESnet 4 Backbone as of June 20, 2007

[Backbone map; hub labels now include Kansas City and Houston. Legend as above.]

ESnet 4 Backbone August 1, 2007 (last JT meeting, at FNAL)

[Backbone map; hub labels now include Los Angeles. Legend as above.]

ESnet 4 Backbone September 30, 2007

[Backbone map; hub labels now include Boise. Legend as above.]

ESnet 4 Backbone December 2007

[Backbone map; the Qwest IP core is now a 2.5 Gb/s IP tail. Other legend entries as above.]

ESnet 4 Backbone December 2008

[Planned backbone map; "x2" markers indicate core segments doubled to two 10 Gb/s circuits. Legend as above.]
ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (12/2007)

[Network map; geography is only representational. ~45 end user sites (DOE labs, lab DC offices, NASA Ames, NOAA, NETL, and others) connect through ESnet core hubs to R&E networks and commercial peering points (Equinix, PAIX-PA, etc.). R&E peers include Internet2/Abilene, NLR PacketNet, GÉANT (France, Germany, Italy, UK, etc.), SINet (Japan), CA*net4 (Canada), AARNet (Australia), TANet2/ASCC (Taiwan), Kreonet2 (Korea), GLORIAD (Russia, China), SINGAREN, KAREN/REANNZ, ODN Japan Telecom America, Russia (BINP), CERN (USLHCnet: DOE+CERN funded; NSF/IRNC-funded links), and AMPATH (S. America).]

Site sponsorship: Office of Science sponsored (22), NNSA sponsored (13+), Joint sponsored (3), Other sponsored (NSF LIGO, NOAA), Laboratory sponsored (6).

Link legend: International (1-10 Gb/s); 10 Gb/s SDN core (I2, NLR); 10 Gb/s IP core; MAN rings (≥ 10 Gb/s); Lab-supplied links; OC12 / GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less.
ESnet4

Core networks: 50-60 Gbps by 2009-2010 (10 Gb/s circuits), 500-600 Gbps by 2011-2012 (100 Gb/s circuits).

[Planned topology map: a production IP core (10 Gbps) and a Science Data Network core (20-30-40-50 Gbps), with IP core hubs, SDN hubs, possible hubs, and primary DOE labs at and around Boston, New York, Washington DC, Jacksonville, Denver, Tulsa, Albuquerque, Boise, LA, and San Diego; MANs (20-60 Gbps) or backbone loops for site access; high-speed cross-connects with Internet2/Abilene; and international connections to Canada (CANARIE), Asia-Pacific, Europe (GEANT), GLORIAD (Russia and China), CERN (30+ Gbps), Australia, and South America (AMPATH). The core network fiber path is ~14,000 miles / 24,000 km; distance annotations mark 1625 miles / 2545 km and 2700 miles / 4300 km.]
A Tail of Two ESnet4 Hubs

[Photos of the Sunnyvale, CA hub and the Chicago hub, showing Juniper MX960 switches, Cisco 6509 switches, and T320 routers.]

ESnet's SDN backbone is implemented with Layer 2 switches; the Cisco 6509s and the Juniper MX960s each present their own unique challenges.
ESnet 4 Factoids as of January 21, 2008

• ESnet4 installation to date:
  o 32 new 10 Gb/s backbone circuits
    - over three times the number from the last JT meeting
  o 20,284 10 Gb/s backbone route miles
    - more than doubled since the last JT meeting
  o 10 new hubs since the last meeting, including
    – Seattle
    – Sunnyvale
    – Nashville
  o 7 new routers, 4 new switches
  o Chicago MAN now connected to the Level3 POP
    - 2 x 10GE to ANL
    - 2 x 10GE to FNAL
    - 3 x 10GE to StarLight
ESnet Traffic Continues to Exceed 2 PetaBytes/Month

[Chart: bytes accepted per month, January 2000 through October 2007, on a scale up to 3.0 x 10^15 bytes. Traffic passed 1 PByte/month in April 2006 and reached 2.7 PBytes in July 2007. Overall traffic tracks the very large science use of the network.]

ESnet traffic historically has increased 10x every 47 months.

When a few large data sources/sinks dominate traffic, it is not surprising that overall network usage follows the patterns of the very large users. This trend will reverse in the next few weeks as the next round of LHC data challenges kicks off.
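The "10x every 47 months" figure is easy to restate in month-over-month terms. A small Python sketch of the arithmetic (ours, not ESnet tooling), checked against the slide's own data points:

```python
# Compound-growth arithmetic behind "10x every 47 months".
GROWTH_FACTOR = 10.0
PERIOD_MONTHS = 47.0

def projected_traffic(start_pbytes: float, months: float) -> float:
    """Monthly traffic volume projected `months` out, assuming the
    historical 10x-per-47-months curve continues to hold."""
    return start_pbytes * GROWTH_FACTOR ** (months / PERIOD_MONTHS)

# Implied compound rates: about +5% per month, about 1.8x per year.
monthly_factor = GROWTH_FACTOR ** (1 / PERIOD_MONTHS)   # ~1.050
annual_factor = GROWTH_FACTOR ** (12 / PERIOD_MONTHS)   # ~1.80

# Sanity check: 1 PByte/month in April 2006 is 15 months before
# July 2007; the trend line predicts ~2.1 PBytes (observed: 2.7).
print(round(projected_traffic(1.0, 15), 1))  # 2.1
```

The July 2007 figure running ahead of the long-term trend line is consistent with the slide's point that a few very large users dominate the curve.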
ESnet Continues to be Highly Reliable, Even During the Transition

ESnet Availability 2/2007 through 1/2008

[Chart: outage minutes per site (0-1800) over the twelve months Feb 2007 through Jan 2008, with reference bands at "3 nines" (>99.5%), "4 nines" (>99.95%), and "5 nines" (>99.995%). Dually connected sites cluster at the high-availability end.]

Site availability (percent): SRS 99.704, Lamont 99.754, NOAA 99.756, OSTI 99.851, Ames-Lab 99.852, ORAU 99.857, BJC 99.862, Y12 99.863, KCP 99.871, Bechtel 99.885, INL 99.909, GA 99.916, Yucca 99.917, DOE-NNSA 99.917, MIT 99.947, NREL 99.965, BNL 99.966, Pantex 99.967, SNLA 99.971, LANL 99.972, DOE-ALB 99.973, JLab 99.984, PPPL 99.985, IARC 99.985, JGI 99.988, LANL-DC 99.990, NSTEC 99.991, LLNL-DC 99.991, MSRI 99.994, LBL 99.996, SNLL 99.997, DOE-GTN 99.997, LLNL 99.998, PNNL 99.998, NERSC 99.998, LIGO 99.998, FNAL 99.999, SLAC 100.000, ANL 100.000, ORNL 100.000.

Note: These availability measures cover only the ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide the circuits from the site to an ESnet hub, so the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In these cases, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
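The "nines" bands on the chart translate directly into outage minutes over the measurement window. A quick sketch of that conversion (our arithmetic, assuming a 365-day year):

```python
# Convert an availability percentage into outage minutes over one year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def outage_minutes(availability_pct: float) -> float:
    """Annual outage minutes implied by an availability percentage."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

# The chart's reference bands:
for label, pct in [('"3 nines"', 99.5), ('"4 nines"', 99.95), ('"5 nines"', 99.995)]:
    print(f'{label} (>{pct}%): under {outage_minutes(pct):.0f} outage minutes/year')

# Example: SRS at 99.704% availability, the lowest on the chart.
print(round(outage_minutes(99.704)))  # 1556
```

This matches the chart's vertical axis: the lowest-availability sites sit around 1500-1600 outage minutes, well inside its 0-1800 range.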
OSCARS Overview
On-demand Secure Circuits and Advance Reservation System

[Diagram: OSCARS guaranteed-bandwidth virtual circuit services are built from three functions:]
• Path Computation: topology, reachability, constraints
• Scheduling: AAA, availability
• Provisioning: signalling, security, resiliency/redundancy
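The scheduling piece is the distinctive part of an advance-reservation system: a request is admitted only if the bandwidth fits under link capacity for its whole time window. The sketch below is a toy model of that idea; every name in it (Reservation, Link, admit) is hypothetical and not the real OSCARS API or implementation.

```python
from dataclasses import dataclass, field

# Toy model of OSCARS-style advance bandwidth reservation on one link.
# Hypothetical names throughout -- not the real OSCARS code.

@dataclass
class Reservation:
    src: str
    dst: str
    bandwidth_mbps: int
    start: int  # reservation window start (e.g. epoch seconds)
    end: int    # reservation window end

@dataclass
class Link:
    capacity_mbps: int
    reservations: list = field(default_factory=list)

    def admit(self, req: Reservation) -> bool:
        """Admit `req` only if it fits under capacity alongside every
        already-accepted reservation whose window overlaps it
        (a conservative check: all overlapping demand is summed)."""
        overlapping = [r for r in self.reservations
                       if r.start < req.end and req.start < r.end]
        committed = sum(r.bandwidth_mbps for r in overlapping)
        if committed + req.bandwidth_mbps > self.capacity_mbps:
            return False
        self.reservations.append(req)
        return True

# One 10GE SDN segment: two 6 Gb/s reservations can share it only if
# their time windows do not overlap.
link = Link(capacity_mbps=10_000)
print(link.admit(Reservation("BNL", "CERN", 6_000, start=100, end=200)))   # True
print(link.admit(Reservation("FNAL", "CERN", 6_000, start=150, end=250)))  # False
print(link.admit(Reservation("FNAL", "CERN", 6_000, start=200, end=300)))  # True
```

The real service layers AAA and multi-link path computation on top of this kind of admission test, but the core trade is the same: guaranteed bandwidth in exchange for booking a window in advance.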
OSCARS Status Update

• ESnet-Centric Deployment
  o Prototype layer 3 (IP) guaranteed bandwidth virtual circuit service deployed in ESnet (1Q05)
  o Prototype layer 2 (Ethernet VLAN) virtual circuit service deployed in ESnet (3Q07)
• Inter-Domain Collaborative Efforts
  o Terapaths (BNL)
    - Inter-domain interoperability for layer 3 virtual circuits demonstrated (3Q06)
    - Inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
  o LambdaStation (FNAL)
    - Inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
  o HOPI/DRAGON
    - Inter-domain exchange of control messages demonstrated (1Q07)
    - Integration of OSCARS and DRAGON has been successful (1Q07)
  o DICE
    - First draft of the topology exchange schema formalized, in collaboration with NMWG (2Q07); interoperability test demonstrated 3Q07
    - Initial implementation of reservation and signaling messages demonstrated at SC07 (4Q07)
  o UVA
    - Integration of token-based authorization into OSCARS under testing
  o Nortel
    - Topology exchange demonstrated successfully 3Q07
    - Inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
Network Measurement Update

• ESnet
  o About 1/3 of the 10GE bandwidth test platforms and 1/2 of the latency test platforms for ESnet 4 have been deployed.
    - 10GE test systems are being used extensively for acceptance testing and debugging.
    - Structured and ad-hoc external testing capabilities have not been enabled yet.
    - Clocking issues at a couple of POPs are not yet resolved.
  o Work is progressing on revamping the ESnet statistics collection, management, and publication systems:
    - ESxSNMP, TSDB, and the perfSONAR Measurement Archive (MA)
    - perfSONAR TS and the OSCARS topology DB
    - NetInfo being restructured to be perfSONAR based
• LHC and perfSONAR
  o perfSONAR-based network measurement solutions for the Tier 1/Tier 2 community are nearing completion.
  o A proposal from DANTE to deploy a perfSONAR-based network measurement service across the LHCOPN at all Tier 1 sites is being evaluated by the Tier 1 centers.
End