ESnet4
IP Network and Science Data Network
Configuration and Roll Out Schedule
Projected Schedule as of September 2006
For more information contact
William E. Johnston ([email protected]), ESnet Dept. Head,
or
Joe Burrescia ([email protected]), ESnet General Manager
DOE Office of Science and ESnet – the ESnet Mission
• ESnet's primary mission is to enable the large-scale science that is the mission of DOE's Office of Science (SC):
  o Sharing of massive amounts of data
  o Supporting thousands of collaborators world-wide
  o Distributed data processing
  o Distributed data management
  o Distributed simulation, visualization, and computational steering
• ESnet provides network and collaboration services to Office of Science laboratories and to sites of other DOE programs in cases where this increases cost effectiveness
ESnet3 Today (Summer, 2006) Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators
[Map: the ESnet IP core (packet over SONET on an optical ring with hubs) and the ESnet Science Data Network (SDN) core, connecting 42 end user sites – Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6) – along with commercial peering points, R&E peering points, high-speed peering points with Internet2/Abilene, and international peers such as GÉANT (France, Germany, Italy, UK, etc.), SINet (Japan), CA*net4 (Canada), GLORIAD (Russia, China), AARNet (Australia), Kreonet2 (Korea), TANet2/ASCC (Taiwan), AMPATH (S. America), and CERN via the CERN+DOE funded USLHCnet. Link legend: 10 Gb/s SDN core; 10 Gb/s IP core; 2.5 Gb/s IP core; MAN rings (≥ 10 Gb/s); lab supplied links; OC12 ATM (622 Mb/s); OC12 / GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less.]
A Changing Science Environment is the Key Driver of the Next Generation ESnet
• Large-scale collaborative science – big facilities, massive data, thousands of collaborators – is now a dominant feature of the Office of Science ("SC") programs
• Distributed systems for data analysis, simulations, instrument operation, etc., are essential and are now common
• These changes are supported by network traffic pattern observations
Footprint of Largest SC Data Sharing Collaborators (50% of ESnet traffic) Drives the Footprint that ESnet Must Support
[Map: fraction of top 100 AS-AS traffic by collaborator location]
Evolution of ESnet Traffic Patterns
[Chart: ESnet Monthly Accepted Traffic, January 2000 – June 2006, in terabytes/month (0-1400 scale), with the portion generated by the top 100 sites highlighted]
• ESnet is currently transporting more than 1 petabyte (1000 terabytes) per month (a growth-rate sketch follows)
• More than 50% of the traffic is now generated by the top 100 work flows (system to system)
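To put the slope of that curve in perspective, here is a minimal sketch of the doubling time implied by exponential growth. The June 2006 endpoint (~1000 TB/month) is from the slide above; the January 2000 starting levels are illustrative assumptions, not numbers read off the chart.

```python
import math

def doubling_time_months(start_tb: float, end_tb: float, months: float) -> float:
    """Doubling time implied by exponential growth from start_tb to end_tb."""
    return months * math.log(2) / math.log(end_tb / start_tb)

# End point from the slide: >1000 TB/month by June 2006.
# The January 2000 level is an assumption; vary it to see the sensitivity.
for start in (25.0, 50.0, 100.0):
    dt = doubling_time_months(start, 1000.0, 77)  # Jan 2000 -> Jun 2006 ≈ 77 months
    print(f"assumed start {start:>5} TB/month -> doubling every {dt:.1f} months")
```

Under any of these assumptions the traffic doubles roughly every one to two years, consistent with the exponential trend the deck describes.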
Large-Scale Flow Trends, June 2006 (subtitle: "Onslaught of the LHC")
[Bar chart: traffic volume in terabytes of the top 30 AS-AS flows, June 2006 (AS-AS = mostly Lab to R&E site, a few Lab to R&E network, a few "other"). Flows are categorized by DOE Office of Science program: LHC / High Energy Physics Tier 0-Tier 1, LHC / HEP T1-T2, HEP, Nuclear Physics, Math. & Comp. (MICS), LIGO (NSF), Lab - university, and Lab - commodity. The flows include: CERN -> BNL; Fermilab -> U. Neb.-Lincoln; Fermilab -> MIT; Fermilab -> Italy R&E; BNL -> RIKEN (Japan); Fermilab -> DESY-Hamburg (Germany); SLAC -> Italy R&E; Fermilab -> Estonia; Fermilab -> Germany R&E; RIKEN (Japan) -> BNL; Abilene (US R&E) -> PNNL; SLAC -> UK R&E; U. Neb.-Lincoln -> Fermilab; SLAC -> IN2P3 (France); Fermilab -> Belgium R&E; ESnet -> CalTech; PNNL -> Abilene (US R&E); Italy R&E -> Fermilab; SLAC -> Karlsruhe (Germany); Fermilab -> Swiss R&E; BNL -> French R&E; UC San Diego -> Fermilab; Fermilab -> UK R&E; Fermilab -> U. Oklahoma; Argonne -> US Commodity; Italy R&E -> SLAC; IN2P3 (France) -> Fermilab; Fermilab -> U. Florida; NERSC (DOE Supercomputer) -> LBNL. FNAL -> CERN traffic is comparable to BNL -> CERN, but rides on layer 2 flows that are not yet monitored for traffic (monitoring is coming soon).]
Traffic Patterns are Changing Dramatically
[Four charts: total ESnet accepted traffic in TBy (0-1200 scale) as of 1/05, 7/05, 1/06, and 6/06, each annotated at the 2 TB/month flow level]
• While the total traffic is increasing exponentially:
  o Peak flow bandwidth – that is, system-to-system bandwidth – is decreasing
  o The number of large flows is increasing
The Onslaught of Grids
Question: Why is peak flow bandwidth decreasing while total traffic is increasing?
Answer: Most large data transfers are now done by parallel / Grid data movers. (In the traffic charts, plateaus indicate the emergence of parallel transfer systems – many systems transferring the same amount of data at the same time.)
• In June 2006, 72% of the hosts generating the top 1000 work flows were involved in parallel data movers (Grid applications)
• This, combined with the dramatic increase in the proportion of traffic due to large-scale science (now 50% of all traffic), represents the most significant traffic pattern change in the history of ESnet
• This probably argues for a network architecture that favors path multiplicity and route diversity (a parallel-transfer sketch follows)
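To make the mechanism concrete – this is a minimal illustrative sketch, not ESnet's or any particular Grid tool's implementation, and the host, port, and file names are hypothetical – a parallel data mover splits one file across several concurrent TCP streams. Each individual flow then carries only a fraction of the aggregate rate, which is exactly the signature of falling peak-flow bandwidth alongside rising total traffic.

```python
import socket
import threading

CHUNKS = 8  # number of parallel streams; each flow carries ~1/8 of the data

def send_chunk(host: str, port: int, data: bytes) -> None:
    """Send one chunk of the file over its own TCP connection (one 'flow')."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(data)

def parallel_transfer(host: str, port: int, path: str) -> None:
    """Split a file into CHUNKS pieces and send them concurrently.

    Aggregate throughput is the sum over all streams, but any single
    flow (what a per-flow traffic monitor sees) is only ~1/CHUNKS of it.
    """
    with open(path, "rb") as f:
        data = f.read()
    size = (len(data) + CHUNKS - 1) // CHUNKS
    threads = [
        threading.Thread(target=send_chunk,
                         args=(host, port, data[i * size:(i + 1) * size]))
        for i in range(CHUNKS)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Hypothetical usage:
# parallel_transfer("receiver.example.org", 5000, "dataset.dat")
```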
Network Observation – Circuit-like Behavior
• For large-scale data handling projects (e.g. LIGO - Caltech) the work flows (system to system data transfers) exhibit circuit-like behavior
• This circuit has a duration of about 3 months (all of the top traffic generating work flows are similar to this) - this argues for a circuit-oriented element in the network architecture (a hypothetical reservation sketch follows the chart)
[Chart: daily transfer volume in Gigabytes/day (roughly -50 to 1550 scale), September 2004 – September 2005, with no data for October 2004, showing a sustained plateau of about three months]
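As a thought experiment for what such a circuit-oriented element might look like – a hypothetical interface only, not an actual ESnet service API, with made-up site names – a reservation would pair endpoints with a bandwidth and a months-long duration:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CircuitReservation:
    """A hypothetical virtual-circuit request for a long-lived science flow."""
    src: str                # source site endpoint
    dst: str                # destination site endpoint
    bandwidth_gbps: float   # guaranteed rate for the circuit
    start: date
    duration: timedelta

    def ends(self) -> date:
        return self.start + self.duration

# A LIGO-Caltech-style flow: months-long, steady, point-to-point.
req = CircuitReservation(src="ligo.example.net", dst="caltech.example.net",
                         bandwidth_gbps=1.0,
                         start=date(2005, 1, 1),
                         duration=timedelta(days=90))
print(f"Reserve {req.bandwidth_gbps} Gb/s {req.src} -> {req.dst} until {req.ends()}")
```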
The Evolution of ESnet Architecture
ESnet to 2005:
• A routed IP network with sites singly attached to a national core ring
ESnet from 2006-07:
• A routed IP network with sites dually connected on metro area rings or dually connected directly to the core ring
• A switched network providing virtual circuit services for data-intensive science
[Diagrams: ESnet IP core; ESnet Science Data Network (SDN) core; metro area rings (MANs); other IP networks; circuit connections to other science networks (e.g. USLHCNet); ESnet sites; ESnet hubs / core network connection points]
ESnet4
• Internet2 has partnered with Level 3 Communications Co. for a dedicated optical fiber infrastructure with a national footprint and a rich topology - the "Internet2 Network"
  o The fiber will be provisioned with Infinera Dense Wave Division Multiplexing equipment that uses an advanced, integrated optical-electrical design
  o Level 3 will maintain the fiber and the DWDM equipment
  o The DWDM equipment will initially be provisioned to provide 10 optical circuits (lambdas - λs) across the entire fiber footprint (80 λs is the maximum; see the capacity sketch after this list)
• ESnet has partnered with Internet2 to:
  o Share the optical infrastructure
  o Develop new circuit-oriented network services
  o Explore mechanisms that could be used for the ESnet Network Operations Center (NOC) and the Internet2/Indiana University NOC to back each other up for disaster recovery purposes
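For scale, a minimal back-of-the-envelope sketch of the per-path capacity these lambda counts imply, taking 10 Gb/s per lambda as the circuit speed used throughout this deck:

```python
GBPS_PER_LAMBDA = 10  # each optical circuit (lambda) carries 10 Gb/s

initial_lambdas, max_lambdas = 10, 80
print(f"Initial provisioning: {initial_lambdas * GBPS_PER_LAMBDA} Gb/s per fiber path")
print(f"Fully built out:      {max_lambdas * GBPS_PER_LAMBDA} Gb/s per fiber path")
# -> 100 Gb/s initially, 800 Gb/s at the 80-lambda maximum
```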
ESnet4
• ESnet will build its next generation IP network and its new circuit-oriented Science Data Network primarily on the Internet2 Network circuits (λs) that are dedicated to ESnet, together with a few National Lambda Rail and other circuits
  o ESnet will provision and operate its own routing and switching hardware that is installed in various commercial telecom hubs around the country, as it has done for the past 20 years
  o ESnet's peering relationships with the commercial Internet, various US research and education networks, and numerous international networks will continue and evolve as they have for the past 20 years
ESnet4
• ESnet4 will also involve an expansion of the multi-10Gb/s Metropolitan Area Rings in the San Francisco Bay Area, Chicago, Long Island, and the Newport News, VA / Washington, DC area
  o provide multiple, independent connections for ESnet sites to the ESnet core network (see the ring-redundancy sketch after this list)
  o expandable
• Several 10Gb/s links provided by the Labs will be used to establish multiple, independent connections to the ESnet core
  o currently PNNL and ORNL
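Why rings? A ring gives every site two disjoint paths to the core, so it survives any single link cut. A minimal sketch of that property, using a toy four-node MAN ring (the site names are hypothetical):

```python
sites = ["hub-A", "lab-1", "lab-2", "hub-B"]  # hypothetical 4-site MAN ring
ring = {tuple(sorted((sites[i], sites[(i + 1) % len(sites)])))
        for i in range(len(sites))}

def connected(edges: set) -> bool:
    """Depth-first reachability over an undirected edge set."""
    seen, stack = {sites[0]}, [sites[0]]
    while stack:
        node = stack.pop()
        for a, b in edges:
            nxt = b if a == node else a if b == node else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen == set(sites)

# Cut each link in turn: the ring stays connected every time, which is
# what "multiple, independent connections" buys each site on the ring.
assert all(connected(ring - {cut}) for cut in ring)
print("ring survives any single link cut")
```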
Internet2 Network Footprint
[Two maps: core network fiber path is ~14,000 miles / 24,000 km, with scale markers of 1,625 miles / 2,545 km and 2,700 miles / 4,300 km]
ESnet4 IP + SDN Configuration, mid-August, 2007
[Map: all circuits are 10 Gb/s. Hubs run from Seattle, Portland, Sunnyvale, LA, and San Diego through Boise, Salt Lake City, Denver, Albuquerque, El Paso, KC, Tulsa, Houston, and Baton Rouge to Chicago, Cleveland, Boston, NYC, Philadelphia, Pittsburgh, Washington DC, Raleigh, Nashville, Atlanta, and Jacksonville, with existing NLR circuits and existing site-supplied circuits noted. Legend: ESnet IP switch/router hubs; ESnet IP switch-only hubs; ESnet SDN switch hubs; layer 1 optical nodes at eventual ESnet Points of Presence; layer 1 optical nodes not currently in ESnet plans; lab sites; ESnet IP core; ESnet Science Data Network core; ESnet SDN core, NLR links; lab supplied links; LHC related links; MAN links; international IP connections.]
ESnet4 Metro Area Rings, 2007 Configurations
[Maps: all circuits are 10 Gb/s. West Chicago MAN (600 W. Chicago, Starlight, USLHCNet, FNAL, ANL); Long Island MAN (32 AoA NYC, USLHCNet, BNL); San Francisco Bay Area MAN (JGI, LBNL, SLAC, NERSC, LLNL, SNLL); Newport News - ELITE / Washington DC area (Wash. DC, MATP, MAX, JLab, ELITE, ODU). Same legend as the national configuration map.]
ESnet4 IP + SDN, 2008 Configuration
[Map: all circuits are 10 Gb/s, or multiples thereof, as indicated by per-segment lambda counts (e.g. 2λ = 20 Gb/s); most core segments carry 1λ-3λ. Same hubs and legend as the 2007 configuration map.]
ESnet4 Metro Area Rings, 2008 Configurations
[Maps: the West Chicago, Long Island, San Francisco Bay Area, and Newport News - ELITE metro area rings at their 2008 lambda counts; circuits are 10 Gb/s or multiples thereof as indicated (e.g. 2λ = 20 Gb/s). Same sites and legend as the 2007 MAN slide.]
ESnet4 IP + SDN, 2009 Configuration
[Map: the national core at its 2009 lambda counts (mostly 2λ-4λ per segment). Same hubs and legend as earlier configuration maps.]
ESnet4 Metro Area Rings, 2009 Configurations
[Maps: the West Chicago, Long Island, San Francisco Bay Area, and Newport News - ELITE metro area rings at their 2009 lambda counts (mostly 2λ-3λ per segment). Same sites and legend as earlier MAN slides.]
ESnet4 IP + SDN, 2010 Configuration
[Map: the national core at its 2010 lambda counts (mostly 3λ-4λ per segment). Same hubs and legend as earlier configuration maps.]
ESnet4 IP + SDN, 2011 Configuration
[Map: the national core at its 2011 lambda counts (mostly 4λ-5λ per segment). Same hubs and legend as earlier configuration maps.]
ESnet4
Core networks: 50-60 Gbps by 2009-2010, 200-600 Gbps by 2011-2012
[Map: the planned ESnet4 IP core and Science Data Network core, showing IP core hubs, SDN hubs, primary DOE labs, high speed cross-connects with Internet2/Abilene, and possible hubs; core network fiber path is ~14,000 miles / 24,000 km. International connections (IP peering connections not shown): Canada (CANARIE), Europe (GEANT), Asia-Pacific, Australia, GLORIAD (Russia and China), CERN (30+ Gbps), and South America (AMPATH). Link legend: production IP core (10 Gbps); SDN core (20-30-40-50 Gbps); MANs (20-60 Gbps) or backbone loops for site access; international connections.]
Outstanding Issues (From Rome Meeting, 4/2006)
• Is a single point of failure at the Tier 1 edges a reasonable long term design?
• Bandwidth guarantees in outage scenarios
  o How do the networks signal to the applications that something has failed?
  o How do sites sharing a link during a failure coordinate BW utilization?
• What expectations should we set for fail-over times?
  o Should BGP timers be tuned?
• We need to monitor the backup paths' ability to transfer packets end-to-end to ensure they will work when needed (see the probe sketch after this list)
  o How are we going to do it?
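One possible shape of an answer, as a minimal sketch rather than a production design: periodically push a small amount of test traffic across the backup path and alarm when end-to-end delivery fails. The probe hostname, port, and the assumption of an echo responder at the far end are all hypothetical.

```python
import socket
import time
from typing import Optional

# Hypothetical probe endpoint reachable only via the backup path,
# running a simple TCP echo responder.
BACKUP_PATH_TARGETS = [("probe.backup-hub.example.net", 7)]
TIMEOUT_S = 2.0

def probe(host: str, port: int) -> Optional[float]:
    """Return round-trip time in ms for a small end-to-end payload, or None on failure."""
    try:
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=TIMEOUT_S) as sock:
            sock.sendall(b"esnet-backup-path-probe")
            sock.recv(64)  # echo responder returns the payload
        return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

for host, port in BACKUP_PATH_TARGETS:
    rtt = probe(host, port)
    if rtt is None:
        print(f"ALARM: backup path to {host} is not passing packets end-to-end")
    else:
        print(f"backup path to {host}: {rtt:.1f} ms")
```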
Possible USLHCNet - US T1 Redundant Connectivity
Harvey Newman
ESnet4 Metro Area Rings, 2009 Configurations
[Repeat of the 2009 metro area rings map (West Chicago MAN, Long Island MAN, San Francisco Bay Area MAN, Newport News - ELITE), shown for reference alongside the USLHCNet - US Tier 1 redundancy discussion]