
ESnet Status and Plans
Internet2 All Hands Meeting, Sept. 28, 2004
William E. Johnston, ESnet Dept. Head and Senior Scientist
R. P. Singh, Federal Project Manager
Michael S. Collins, Stan Kluz,
Joseph Burrescia, and James V. Gagliardi, ESnet Leads
Gizella Kapus, Resource Manager
and the ESnet Team
Lawrence Berkeley National Laboratory
ESnet Provides Full Internet Service
to DOE Facilities and Collaborators with High-Speed Access to
Major Science Collaborators
[Map: ESnet mid-2004. The core is a Packet over SONET optical ring with hubs, serving 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). Peering points, hubs, and high-speed peering points connect to CA*net4, KDDI (Japan), SInet (Japan), TANet2 (Taiwan), ASCC (Taiwan), Singaren, Australia, GEANT (Germany, France, Italy, UK, etc.), CERN (DOE link), Japan - Russia (BINP), StarTap, and MREN. Link speeds: International (high speed), OC192 (10 Gb/s optical), OC48 (2.5 Gb/s optical), Gigabit Ethernet (1 Gb/s), OC12 ATM (622 Mb/s), OC3 (155 Mb/s), T3 (45 Mb/s), and T1 (1.5 Mb/s).]
ESnet’s Peering Infrastructure
Connects the DOE Community With its Collaborators
[Map: ESnet peering (connections to other networks), with university, international, and commercial peers: 2 peers at the SEA hub, 19 peers at the distributed 6TAP, Abilene and CalREN2 peerings via PNW-GPOP and LBNL, 3 peers at PAIX-W, 39 peers at FIX-W/MAE-W, 5 and 26 peers at the NYC hubs, 22 peers at MAE-E and 20 peers at PAIX-E via the MAX GPOP, 6 peers at EQX-SJ, plus LANL TECHnet and the ATL hub; international links include CA*net4, CERN, GEANT (Germany, France, Italy, UK, etc.), KDDI and SInet (Japan), KEK, Japan - Russia (BINP), StarTap, MREN, TANet2 and ASCC (Taiwan), Singaren, and Australia.]
ESnet provides access to all of the Internet by managing the
full complement of global Internet routes (about 150,000) at
10 general/commercial peering points, plus high-speed peerings
with Abilene and the international R&E networks. This is a lot
of work and is very visible, but it provides full access for DOE.
Major ESnet Changes in FY04
• Dramatic increase in international traffic as major large-scale science experiments start to ramp up
• CERNlink connected at 10 Gb/s
• GEANT (main European R&E network – like Abilene and
ESnet) connected at 2.5 Gb/s
• Abilene-ESnet high-speed cross-connects (2@2.5 Gb/s and 1@10 Gb/s)
• In order to meet the Office of Science program needs, a new architectural approach has been developed
o Science Data Network (a second core network for high-volume traffic)
o Metropolitan Area Networks (MANs)
Predictive Drivers for Change
August 13-15, 2002, organized by the Office of Science
Mary Anne Scott (Chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, and Vicky White
• Focused on science requirements that drive
o Advanced Network Infrastructure
o Middleware Research
o Network Research
o Network Governance Model
Workshop panel chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett
• The requirements for DOE science were developed by the OSC science
community representing major DOE science disciplines
o Climate
o Magnetic Fusion Energy Sciences
o Spallation Neutron Source
o Chemical Sciences
o Macromolecular Crystallography
o Bioinformatics
o High Energy Physics
Available at www.es.net/#research
Evolving Quantitative Science Requirements for Networks
Science Areas | Today End2End Throughput | 5 Years End2End Throughput | 5-10 Years End2End Throughput | Remarks
High Energy Physics | 0.5 Gb/s | 100 Gb/s | 1000 Gb/s | high bulk throughput
Climate (Data & Computation) | 0.5 Gb/s | 160-200 Gb/s | N x 1000 Gb/s | high bulk throughput
SNS NanoScience | Not yet started | 1 Gb/s | 1000 Gb/s + QoS for control channel | remote control and time-critical throughput
Fusion Energy | 0.066 Gb/s (500 MB/min. burst) | 0.198 Gb/s (500 MB / 20 sec. burst) | N x 1000 Gb/s | time-critical throughput
Astrophysics | 0.013 Gb/s (1 TBy/week) | N*N multicast | 1000 Gb/s | computational steering and collaborations
Genomics Data & Computation | 0.091 Gb/s (1 TBy/day) | 100s of users | 1000 Gb/s + QoS for control channel | high throughput and steering
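The throughput columns follow from the quoted data volumes by straightforward unit conversion. As a quick illustrative check (not from the original slides; decimal units assumed, 1 TBy = 10^12 bytes):

```python
# Reproduce the "today" throughput figures from the quoted data volumes.
# Decimal units assumed: 1 TBy = 1e12 bytes, 1 Gb/s = 1e9 bits/s.

def gbps(nbytes: float, seconds: float) -> float:
    """Average rate in Gb/s for nbytes transferred over seconds."""
    return nbytes * 8 / seconds / 1e9

DAY = 86_400
WEEK = 7 * DAY

print(f"Fusion today (500 MB/min burst):  {gbps(500e6, 60):.3f} Gb/s")   # 0.067
print(f"Fusion 5 yrs (500 MB/20 s burst): {gbps(500e6, 20):.3f} Gb/s")   # 0.200
print(f"Astrophysics today (1 TBy/week):  {gbps(1e12, WEEK):.3f} Gb/s")  # 0.013
print(f"Genomics today (1 TBy/day):       {gbps(1e12, DAY):.3f} Gb/s")   # 0.093
```

The small differences from the table's 0.066, 0.198, and 0.091 Gb/s presumably reflect rounding or slightly different byte conventions.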
Observed Drivers for Change
ESnet Inter-Sector Traffic Summary, Jan 2003 / Feb 2004: 1.7X overall traffic increase, 1.9X OSC increase.
(The international traffic is increasing due to BaBar at SLAC and the LHC tier 1 centers at FNAL and BNL.)
[Diagram: traffic flows between the DOE sites, ESnet, the peering points, and the R&E (mostly universities), commercial, and international (almost entirely R&E sites) sectors; green = traffic coming into ESnet, blue = traffic leaving ESnet, with percentages as shares of total ingress/egress traffic: DOE sites 72/68%, R&E 53/49% (DOE collaborator traffic, incl. data), commercial 9/26%, international 4/6%.]
DOE is a net supplier of data because DOE facilities are used by universities and commercial entities, as well as by DOE researchers.
Note that more than 90% of the ESnet traffic is OSC traffic.
ESnet Appropriate Use Policy (AUP): all ESnet traffic must originate and/or terminate on an ESnet site (no transit traffic is allowed).
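The AUP reduces to a predicate on each flow's endpoints. A minimal sketch of the rule, assuming a hypothetical site list and deliberately simplified hostname matching (illustrative only, not ESnet's actual enforcement mechanism):

```python
# ESnet AUP as a predicate: traffic must originate and/or terminate at an
# ESnet site; pure transit traffic is not allowed. The site suffixes below
# are a hypothetical sample.

ESNET_SITE_SUFFIXES = (".lbl.gov", ".fnal.gov", ".bnl.gov", ".ornl.gov")

def on_esnet_site(host: str) -> bool:
    return host.endswith(ESNET_SITE_SUFFIXES)  # endswith accepts a tuple

def aup_allows(src: str, dst: str) -> bool:
    # At least one endpoint must be an ESnet site (no transit traffic).
    return on_esnet_site(src) or on_esnet_site(dst)

print(aup_allows("dtn1.fnal.gov", "cms.cern.ch"))         # True: one end on ESnet
print(aup_allows("www.example.com", "mail.example.net"))  # False: transit
```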
ESnet Top 20 Data Flows, 24 hr. avg., 2004-04-20
[Chart: the top 20 flows by volume, with a 1 Terabyte/day reference level marked.]
A small number of science users account for a significant fraction of all ESnet traffic.
Since BaBar production started, the top 20 ESnet flows have consistently accounted for > 50% of ESnet's monthly total traffic (~130 of 250 TBy/mo).
As LHC data starts to move, this will increase a lot (200-2000 times).
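The > 50% figure is simply the top-20 share of the total monthly volume. A small sketch of the calculation with invented flow volumes (the slide's real accounting is ~130 TBy in the top 20 flows of a ~250 TBy month):

```python
# Share of total monthly traffic carried by the largest N flows.
# The flow volumes (TBy/month) below are invented for illustration.

def top_n_share(volumes: list[float], n: int = 20) -> float:
    ranked = sorted(volumes, reverse=True)
    return sum(ranked[:n]) / sum(ranked)

flows = [22, 18, 15, 12, 11, 9, 8, 7, 6, 6, 5, 4, 3, 3, 2] + [0.5] * 260
print(f"Top 20 flows carry {top_n_share(flows):.0%} of total traffic")  # ~51%
```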
ESnet Top 10 Data Flows, 1 week avg., 2004-07-01
The traffic is not transient: Daily and weekly averages are about the same.
ESnet and Abilene
• Abilene and ESnet together provide most of the nation's transit networking for science
• Abilene provides national transit networking for most of the US universities by interconnecting the regional networks (mostly via the GigaPoPs)
• ESnet connects the DOE Labs
• The goal is that DOE Lab ↔ Univ. connectivity should be as good as Lab ↔ Lab and Univ. ↔ Univ.
• ESnet and Abilene have recently established high-speed interconnects and cross-network routing
- Constant monitoring is the key
Monitoring DOE Lab ↔ University Connectivity
• Current monitor infrastructure (red) and target infrastructure
• Uniform distribution around ESnet and around Abilene
• Need to set up similar infrastructure with GEANT
[Map: the ESnet and Abilene backbones with monitoring points. Legend: DOE Labs with monitors (LBNL, FNAL, BNL, ORNL), universities with monitors (OSU, NCSU, SDSC), initial site monitors, network hubs, and high-speed ESnet ↔ Internet2/Abilene cross-connects; international links to AsiaPac, Japan, and CERN/Europe.]
Initial Monitoring is with OWAMP One-Way Delay Tests
• These measurements are very sensitive – e.g., an NCSU metro DWDM reroute of about 350 microseconds is easily visible
[Plot: one-way delay in ms (y-axis 41.5 to 42.0), showing a clean step at the fiber re-route.]
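A 350 µs shift sits well above the noise in these one-way delay series, so a simple windowed-mean comparison finds it. A self-contained sketch with invented samples (not ESnet's actual analysis code):

```python
# Locate a persistent step in one-way delay samples, like the ~0.35 ms
# shift from a metro DWDM fiber re-route. Samples (ms) are invented.

def find_step(samples, window=5, threshold_ms=0.2):
    """Index where the windowed mean delay jumps the most, or None if no
    jump exceeds threshold_ms."""
    best_i, best_jump = None, threshold_ms
    for i in range(window, len(samples) - window + 1):
        before = sum(samples[i - window:i]) / window
        after = sum(samples[i:i + window]) / window
        if abs(after - before) > best_jump:
            best_i, best_jump = i, abs(after - before)
    return best_i

delays = [41.62, 41.60, 41.61, 41.63, 41.59, 41.61, 41.60, 41.62,
          41.96, 41.95, 41.97, 41.96, 41.94, 41.96, 41.95, 41.97]
print(find_step(delays))  # 8: the first sample after the re-route
```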
Initial Monitor Results (http://measurement.es.net)
ESnet, GEANT, and CERNlink
• GEANT plays a role in Europe similar to Abilene and ESnet in the US – it interconnects the European National Research and Education networks, to which the European R&E sites connect
• GEANT currently carries essentially all ESnet international traffic (LHC use of CERNlink to DOE labs is still ramping up)
• GN2 is the second phase of the GEANT project
o The architecture of GN2 is remarkably similar to the new ESnet Science Data Network + IP core network model
• CERNlink will be the main CERN-to-US LHC data path
o Both US LHC tier 1 centers are on ESnet (FNAL and BNL)
o ESnet connects directly to CERNlink at 10 Gb/s
o The new ESnet architecture (Science Data Network) will accommodate the anticipated 40 Gb/s from LHC to the US
GEANT and CERNlink
• A recent meeting between ESnet and GEANT produced proposals in a number of areas designed to ensure robust and reliable science data networking between ESnet and Europe
o A US-EU joint engineering task force ("ITechs") should be formed to coordinate US-EU science data networking
- Will include, e.g., ESnet, Abilene, GEANT, CERN
- Will develop joint operational procedures
o ESnet will collaborate in GEANT development activities to ensure some level of compatibility
- Bandwidth-on-demand (dynamic circuit setup)
- Performance measurement and authentication
- End-to-end QoS and performance enhancement
- Security
o 10 Gb/s connectivity between GEANT and ESnet will be established by mid-2005, and a backup 2.5 Gb/s link will be added
New ESnet Architecture Needed to Accommodate OSC
• The essential DOE Office of Science requirements cannot be met with the current, telecom-provided, hub-and-spoke architecture of ESnet
[Diagram: the ESnet core ring with hubs at New York (AOA), Washington, DC (DC), Atlanta (ATL), El Paso (ELP), and Sunnyvale (SNV); DOE sites hang off the ring on point-to-point tail circuits.]
• The core ring has good capacity and resiliency against single point failures, but the point-to-point tail circuits are neither reliable nor scalable to the required bandwidth
A New ESnet Architecture
• Goals
o full redundant connectivity for every site (see the sketch after this list)
o high-speed access for every site (at least 10 Gb/s)
• Three-part strategy
1) MAN rings provide dual site connectivity and much higher site-to-core bandwidth
2) A Science Data Network core for
- multiply connected MAN rings for protection against hub failure
- expanded capacity for science data
- a platform for provisioned, guaranteed bandwidth circuits
- an alternate path for production IP traffic
- carrier circuit and fiber access neutral hubs
3) An IP core (e.g. the current ESnet core) for high-reliability production IP service
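The redundancy goal is checkable directly on a planned topology: no single circuit failure should isolate a site. A minimal self-contained sketch (the ring below is a toy example, not the actual ESnet circuit inventory):

```python
# Verify that no single link failure disconnects any site, which is what
# MAN rings plus the second (SDN) core are meant to guarantee.
# The topology below is a toy example, not the real circuit list.

def connected(nodes, links):
    """Reachability (depth-first) over an undirected link list."""
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        n = frontier.pop()
        if n in seen:
            continue
        seen.add(n)
        frontier.extend(b if a == n else a for a, b in links if n in (a, b))
    return seen == set(nodes)

def survives_single_failures(nodes, links):
    return all(connected(nodes, [l for l in links if l != down])
               for down in links)

sites = {"SNV-IP", "SNV-SDN", "LBNL", "NERSC", "SLAC", "LLNL"}
ring = [("SNV-IP", "LBNL"), ("LBNL", "NERSC"), ("NERSC", "SLAC"),
        ("SLAC", "LLNL"), ("LLNL", "SNV-SDN"), ("SNV-SDN", "SNV-IP")]
print(survives_single_failures(sites, ring))       # True: closed ring
print(survives_single_failures(sites, ring[:-1]))  # False: open ring = tail circuits
```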
A New ESnet Architecture:
Science Data Network + IP Core
[Diagram: the ESnet Science Data Network (2nd core) and the ESnet IP core, tied together by Metropolitan Area Rings, with hubs at Sunnyvale (SNV), New York (AOA), Washington, DC (DC), Atlanta (ATL), and El Paso (ELP), and links to CERN, GEANT (Europe), and Asia-Pacific. Legend: existing hubs, new hubs, possible new hubs, DOE/OSC Labs.]
ESnet Long-Term Architecture
[Diagram: at a typical hub, 10 Gigabit Ethernet switches and a core router sit in the ESnet management domain, optical channel (λ) equipment in the carrier management domain, and the site router in the site management domain, with ESnet management and monitoring equipment throughout. The ESnet SDN core ring and the ESnet IP core ring each run over one or more independent fiber pairs and connect to the ESnet Metropolitan Area Networks. Legend: production IP; provisioned circuits carried over optical channels / lambdas; provisioned circuits tunneled through the IP core via MPLS.]
ESnet New Architecture, Part 1: MANs
• The MAN architecture is designed to provide
o At least one redundant path from sites to the ESnet hub
o Scalable bandwidth options from sites to the ESnet hub
o The first step in point-to-point provisioned circuits
- With endpoint authentication, these are private and intrusion-resistant circuits, so they should be able to bypass site firewalls if the endpoints trust each other
- End-to-end provisioning will initially be provided by a combination of Ethernet switch management of λ paths in the MAN and MPLS paths in the ESnet POS backbone (OSCARS project)
- Provisioning will initially be done by manual circuit configuration, and on demand in the future (OSCARS); see the sketch after this list
o Cost savings over two or three years, when including future site needs for increased bandwidth
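A provisioned circuit, whether configured manually or on demand, is essentially endpoints plus a guaranteed rate plus a time window. A hypothetical sketch of such a reservation record (all field names and checks invented; the slides do not describe the OSCARS interface):

```python
# Hypothetical shape of a point-to-point circuit reservation of the kind
# OSCARS-style on-demand provisioning implies. All names and the capacity
# check are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    src_site: str          # e.g. "FNAL"
    dst_site: str          # e.g. "CERN"
    bandwidth_gbps: float  # guaranteed rate on the provisioned path
    start: datetime
    duration: timedelta

    def validate(self, link_capacity_gbps: float = 10.0) -> None:
        if self.src_site == self.dst_site:
            raise ValueError("endpoints must differ")
        if not 0 < self.bandwidth_gbps <= link_capacity_gbps:
            raise ValueError("rate must fit within link capacity")

req = CircuitRequest("FNAL", "CERN", 5.0,
                     datetime(2004, 10, 1, 6, 0), timedelta(hours=8))
req.validate()  # would precede MAN lambda setup and MPLS path creation
```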
ESnet MAN Architecture – logical (Chicago, e.g.)
[Diagram: the Qwest hub (T320 router, ESnet IP core, international peerings, and the DOE-funded CERN link), StarLight, ANL, and FNAL joined in a ring. The ring carries ESnet production IP service plus ESnet-managed λ / circuit services, with the ESnet SDN core attached and managed circuits tunneled through the IP backbone; each site has a gateway router, site LAN, site equipment, and a monitor, under ESnet management and monitoring.]
ESnet Metropolitan Area Network Rings (MANs)
• In the near term, MAN rings will be built in the San Francisco and Chicago areas
• In the long term there will likely be MAN rings on Long Island, in the Newport News, VA area, in northern New Mexico, in Idaho-Wyoming, etc.
• San Francisco Bay Area MAN ring progress
o Feasibility has been demonstrated with an engineering study from CENIC
o A competitive bid and "best value source selection" methodology will select the ring provider within two months
SF Bay Area MAN
[Diagram: the SF Bay Area MAN ring connecting the Joint Genome Institute, LBNL, NERSC, SLAC, LLNL, and SNLL through a Level 3 hub and the Qwest / ESnet hub; the ring ties into the ESnet Science Data Network core (toward Seattle and Chicago, and NLR / UltraScienceNet) and the ESnet IP core ring (toward Chicago, LA and San Diego, and El Paso).]
Proposed Chicago MAN
[Map of the proposed ring connecting:]
• ESnet CHI-HUB: Qwest - NBC Bldg., 455 N Cityfront Plaza Dr, Chicago, IL 60611
• StarLight: 910 N Lake Shore Dr, Chicago, IL 60611
• FNAL: Feynman Computing Center, Batavia, IL 60510
• ANL: 9700 S Cass Ave, Lemont, IL 60439
ESnet New Architecture – Part 2: Science Data Network
• SDN (second core) rationale: add major points of presence in carrier circuit and fiber access neutral facilities at Sunnyvale, Seattle, San Diego, and Chicago
o Enable the UltraSciNet cross-connect with ESnet
o Provide access to NLR and other fiber-based networks
o Allow for more competition in acquiring circuits
• Initial steps toward the Science Data Network (SDN)
o Provide a second, independent path between major northern route hubs
- Alternate route for ESnet core IP traffic
o Provide high-speed paths on the West Coast to reach PNNL, GA, and AsiaPac peering
o Increase ESnet connectivity to other R&E networks
ESnet New Architecture Goal FY05:
Science Data Network Phase 1 and SF BA MAN
[Map: the existing ESnet IP core (Qwest) plus the new SDN core along the northern route, with hubs at SEA, CHI, SNV, NYC, DEN, DC, ALB, SDG, ELP, and ATL, the SF Bay Area MAN, and international links to AsiaPac, Japan, and Europe, including CERN at 2x10 Gb/s. Legend: MANs; current ESnet hubs; new ESnet hubs; high-speed cross-connects with Internet2/Abilene; major DOE Office of Science sites; ESnet IP core (Qwest); ESnet SDN core; lab-supplied links; major international links; UltraSciNet; 2.5 Gb/s and 10 Gb/s links; future phases.]
ESnet New Architecture Goal FY06:
Science Data Network Phase 2 and Chicago MAN
[Map: as in FY05, with the SDN core extended in its second phase, the Chicago MAN added, and CERN connectivity increased to 3x10 Gb/s. Legend unchanged.]
ESnet Beyond FY07
[Map: the production IP ESnet core and a high-impact science core as dual rings spanning the SEA, CHI, SNV, NYC, DEN, DC, ALB, ATL, SDG, and ELP hubs, with MANs and international links to AsiaPac, Japan, CERN, and Europe. Legend: ESnet IP core (Qwest) hubs; ESnet SDN core hubs; high-speed cross-connects with Internet2/Abilene; major DOE Office of Science sites; production IP ESnet core; high-impact science core; lab-supplied links; major international links; link speeds of 2.5, 10, 30, and 40 Gb/s; future phases.]