Nuclear Physics Network Requirements Workshop
Washington, DC
Eli Dart, Network Engineer
ESnet Network Engineering Group
Energy Sciences Network
Lawrence Berkeley National Laboratory
May 6, 2008
Networking for the Future of Science
Overview
• Logistics
• Network Requirements
  – Sources, workshop context
• Case Study Example
  – Large Hadron Collider
• Today’s Workshop
  – Structure and Goals
Logistics
• Mid-morning break, lunch, afternoon break
• Self-organization for dinner
• Agenda on workshop web page
– http://workshops.es.net/2008/np-net-req/
• Round-table introductions
Network Requirements
• Requirements are primary drivers for ESnet – science focused
• Sources of Requirements
  – Office of Science (SC) Program Managers
  – Direct gathering through interaction with science users of the network
    • Examples of recent case studies
      – Climate Modeling
      – Large Hadron Collider (LHC)
      – Spallation Neutron Source at ORNL
  – Observation of the network
  – Other sources (e.g. Laboratory CIOs)
Program Office Network Requirements Workshops
• Two workshops per year
• One workshop per program office every 3 years
• Workshop Goals
  – Accurately characterize current and future network requirements for the Program Office science portfolio
  – Collect network requirements from scientists and the Program Office
• Workshop Structure
  – Modeled after the 2002 High Performance Network Planning Workshop conducted by the DOE Office of Science
  – Elicit information from managers, scientists and network users regarding usage patterns, science process, instruments and facilities – codify in “Case Studies”
  – Synthesize network requirements from the Case Studies
Large Hadron Collider at CERN
LHC Requirements – Instruments and Facilities
• Large Hadron Collider at CERN
  – Networking requirements of two experiments have been characterized – CMS and Atlas
  – Petabytes of data per year to be distributed
• LHC networking and data volume requirements are unique to date
  – First in a series of DOE science projects with requirements of unprecedented scale
  – Driving ESnet’s near-term bandwidth and architecture requirements
  – These requirements are shared by other very-large-scale projects that are coming on line soon (e.g. ITER)
• Tiered data distribution model
  – Tier0 center at CERN processes raw data into event data
  – Tier1 centers receive event data from CERN
    • FNAL is the CMS Tier1 center for the US
    • BNL is the Atlas Tier1 center for the US
    • CERN to US Tier1 data rates: 10 Gbps in 2007, 30-40 Gbps by 2010/11 (see the data-volume sketch after this list)
  – Tier2 and Tier3 sites receive data from Tier1 centers
    • Tier2 and Tier3 sites are end-user analysis facilities
    • Analysis results are sent back to Tier1 and Tier0 centers
    • Tier2 and Tier3 sites are largely universities in the US and Europe
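A rough data-volume check helps put these rates in context. The sketch below is illustrative and not from the original slides; it assumes fully utilized links and decimal units:

```python
# Back-of-envelope data-volume estimate (illustrative assumptions:
# fully utilized links, decimal units, 1 PB = 1e15 bytes).

def petabytes_per_year(rate_gbps: float, utilization: float = 1.0) -> float:
    """Petabytes transferred per year at a sustained rate given in Gb/s."""
    bytes_per_second = rate_gbps * 1e9 / 8 * utilization
    seconds_per_year = 365 * 24 * 3600
    return bytes_per_second * seconds_per_year / 1e15

print(petabytes_per_year(10))   # ~39 PB/year on a fully used 10 Gbps path
print(petabytes_per_year(40))   # ~158 PB/year at the 2010/11 target of 40 Gbps
```

At these idealized rates a single 10 Gbps path can move on the order of 40 PB per year, which is why petabyte-scale yearly distribution translates directly into sustained multi-Gbps flows.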
LHC Requirements – Process of Science
• The strictly tiered data distribution model is only part of the picture
  – Some Tier2 scientists will require data not available from their local Tier1 center
  – This will generate additional traffic outside the strict tiered data distribution tree
  – CMS Tier2 sites will fetch data from all Tier1 centers in the general case
• Network reliability is critical for the LHC
  – Data rates are so large that buffering capacity is limited
  – If an outage lasts more than a few hours, the analysis could fall permanently behind (see the backlog sketch after this list)
    • Analysis capability is already maximized – little extra headroom
• CMS/Atlas require DOE federated trust for credentials and federation with LCG
• Several unknowns will require ESnet to be nimble and flexible
  – Tier1 to Tier1, Tier2 to Tier1, and Tier2 to Tier0 data rates could add significant additional requirements for international bandwidth
  – Bandwidth will need to be added once requirements are clarified
  – Drives architectural requirements for scalability and modularity
• Service guarantees will play a key role
  – Traffic isolation for unfriendly data transport protocols
  – Bandwidth guarantees for deadline scheduling
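To make the backlog concern concrete, here is an illustrative sketch; the 10 Gbps inbound rate and the 5% spare capacity are assumptions for the example, not figures from the slides:

```python
# Illustrative backlog-recovery estimate (assumed numbers, not from the slides):
# a steady 10 Gbps inbound stream and only 5% spare transfer/analysis capacity.

def recovery_hours(outage_hours: float, rate_gbps: float = 10.0,
                   headroom_fraction: float = 0.05) -> float:
    """Hours needed to drain the backlog accumulated during an outage."""
    backlog_gbits = rate_gbps * 3600 * outage_hours   # gigabits queued up
    spare_gbps = rate_gbps * headroom_fraction        # capacity beyond steady state
    return backlog_gbits / (spare_gbps * 3600)

print(recovery_hours(4))    # a 4-hour outage takes ~80 hours to catch up
```

With only 5% headroom, every hour of outage costs roughly 20 hours of catch-up, which is why even modest outages can put the analysis permanently behind.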
LHC Ongoing Requirements Gathering Process
• ESnet has been an active participant in LHC network planning and operation
  – Active participant in the LHC network operations working group since its creation
  – Jointly organized the US CMS Tier2 networking requirements workshop with Internet2
  – Participated in the US Atlas Tier2 networking requirements workshop
  – Participated in US Tier3 networking workshops
LHC Requirements Identified To Date
• 10 Gbps “light paths” from FNAL and BNL to CERN
  – CERN / USLHCnet will provide 10 Gbps circuits to Starlight, to 32 AoA, NYC (MAN LAN), and between Starlight and NYC
  – 10 Gbps each in the near term, additional lambdas over time (3-4 lambdas each by 2010)
• BNL must communicate with TRIUMF in Vancouver
  – This is an example of Tier1 to Tier1 traffic – 1 Gbps in the near term
  – Circuit is currently up and running
• Additional bandwidth requirements between US Tier1s and European Tier2s
  – Served by the USLHCnet circuit between New York and Amsterdam
• Reliability
  – 99.95%+ uptime (a small number of hours of downtime per year; see the calculation after this list)
  – Secondary backup paths
  – Tertiary backup paths – virtual circuits through the ESnet, Internet2, and GEANT production networks, and possibly GLIF (Global Lambda Integrated Facility) for transatlantic links
• Tier2 site connectivity
  – 1 to 10 Gbps required
  – Many large Tier2 sites require direct connections to the Tier1 sites – this drives bandwidth and virtual circuit deployment (e.g. UCSD)
• Ability to add bandwidth as additional requirements are clarified
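For reference, a short calculation (not part of the slide) converts the availability target into allowable downtime per year:

```python
# Convert an availability target into allowable downtime per year.

def downtime_hours_per_year(availability_percent: float) -> float:
    return (1.0 - availability_percent / 100.0) * 365 * 24

print(downtime_hours_per_year(99.95))   # ~4.4 hours of downtime per year
print(downtime_hours_per_year(99.5))    # ~43.8 hours, an order of magnitude worse
```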
Identified US Tier2 Sites
• Atlas (BNL Clients)
  – Boston University
  – Harvard University
  – Indiana University Bloomington
  – Langston University
  – University of Chicago
  – University of New Mexico Albuquerque
  – University of Oklahoma Norman
  – University of Texas at Arlington
• CMS (FNAL Clients)
  – Caltech
  – MIT
  – Purdue University
  – University of California San Diego
  – University of Florida at Gainesville
  – University of Nebraska at Lincoln
  – University of Wisconsin at Madison
• Calibration site
  – University of Michigan
LHC ATLAS Bandwidth Matrix as of April 2007

Site A | Site Z | ESnet A | ESnet Z | A-Z 2007 Bandwidth | A-Z 2010 Bandwidth
CERN | BNL | AofA (NYC) | BNL | 10Gbps | 20-40Gbps
BNL | U. of Michigan (Calibration) | BNL (LIMAN) | Starlight (CHIMAN) | 3Gbps | 10Gbps
BNL | Boston University, Harvard University (Northeastern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3Gbps | 10Gbps
BNL | Indiana U. at Bloomington, U. of Chicago (Midwestern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3Gbps | 10Gbps
BNL | Langston University, U. Oklahoma Norman, U. of Texas Arlington (Southwestern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3Gbps | 10Gbps
BNL | Tier3 Aggregate | BNL (LIMAN) | Internet2 / NLR Peerings | 5Gbps | 20Gbps
BNL | TRIUMF (Canadian ATLAS Tier1) | BNL (LIMAN) | Seattle | 1Gbps | 5Gbps
LHC CMS Bandwidth Matrix as of April 2007

Site A | Site Z | ESnet A | ESnet Z | A-Z 2007 Bandwidth | A-Z 2010 Bandwidth
CERN | FNAL | Starlight (CHIMAN) | FNAL (CHIMAN) | 10Gbps | 20-40Gbps
FNAL | U. of Michigan (Calibration) | FNAL (CHIMAN) | Starlight (CHIMAN) | 3Gbps | 10Gbps
FNAL | Caltech | FNAL (CHIMAN) | Starlight (CHIMAN) | 3Gbps | 10Gbps
FNAL | MIT | FNAL (CHIMAN) | AofA (NYC) / Boston | 3Gbps | 10Gbps
FNAL | Purdue University | FNAL (CHIMAN) | Starlight (CHIMAN) | 3Gbps | 10Gbps
FNAL | U. of California at San Diego | FNAL (CHIMAN) | San Diego | 3Gbps | 10Gbps
FNAL | U. of Florida at Gainesville | FNAL (CHIMAN) | SOX | 3Gbps | 10Gbps
FNAL | U. of Nebraska at Lincoln | FNAL (CHIMAN) | Starlight (CHIMAN) | 3Gbps | 10Gbps
FNAL | U. of Wisconsin at Madison | FNAL (CHIMAN) | Starlight (CHIMAN) | 3Gbps | 10Gbps
FNAL | Tier3 Aggregate | FNAL (CHIMAN) | Internet2 / NLR Peerings | 5Gbps | 20Gbps
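The aggregate link loadings on the following slides are essentially roll-ups of per-flow commitments like these. The sketch below is illustrative only: it groups the 2010 CMS figures above by ESnet endpoint and uses the 20 Gbps lower bound for the CERN-FNAL entry.

```python
# Roll per-flow commitments up into aggregate loadings per ESnet endpoint
# (illustrative only; rows transcribed from the CMS matrix above, 2010 column).
from collections import defaultdict

cms_2010_gbps = [
    ("FNAL (CHIMAN)", 20),             # CERN -> FNAL (lower bound of 20-40Gbps)
    ("Starlight (CHIMAN)", 10),        # U. of Michigan (Calibration)
    ("Starlight (CHIMAN)", 10),        # Caltech
    ("AofA (NYC) / Boston", 10),       # MIT
    ("Starlight (CHIMAN)", 10),        # Purdue University
    ("San Diego", 10),                 # U. of California at San Diego
    ("SOX", 10),                       # U. of Florida at Gainesville
    ("Starlight (CHIMAN)", 10),        # U. of Nebraska at Lincoln
    ("Starlight (CHIMAN)", 10),        # U. of Wisconsin at Madison
    ("Internet2 / NLR Peerings", 20),  # Tier3 Aggregate
]

totals = defaultdict(int)
for endpoint, gbps in cms_2010_gbps:
    totals[endpoint] += gbps

for endpoint, gbps in sorted(totals.items()):
    print(f"{endpoint}: {gbps} Gbps")   # e.g. Starlight (CHIMAN): 50 Gbps
```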
Estimated Aggregate Link Loadings, 2007-08

[Figure: map of the ESnet backbone showing estimated aggregate link loadings for 2007-08. Unlabeled links are 10 Gb/s; numeric labels give committed bandwidth in Gb/s, including existing site-supplied circuits. The legend identifies ESnet IP switch/router hubs, IP switch-only hubs, SDN switch hubs, Layer 1 optical nodes (at eventual ESnet points of presence and not currently in ESnet plans), lab sites, the ESnet IP core, the ESnet Science Data Network core, ESnet SDN core / NLR links, lab-supplied links, LHC-related links, MAN links, and international IP connections.]
ESnet4 2007-8 Estimated Bandwidth Commitments

[Figure: map of ESnet4 showing estimated bandwidth commitments for 2007-8. Unlabeled links are 10 Gb/s; numeric labels give committed bandwidth in Gb/s. The map includes the Long Island MAN (BNL, 32 AoA NYC), the West Chicago MAN (FNAL, ANL, Starlight, 600 W. Chicago), the Bay Area MAN (LBNL, SLAC, NERSC, JGI, LLNL, SNLL), the Washington DC / Newport News area (MAX, MATP, JLab, ELITE, ODU), and USLHCNet circuits from CERN to Starlight and to 32 AoA, NYC. Legend as on the previous map.]
Estimated Aggregate Link Loadings, 2010-11

[Figure: map of the ESnet backbone showing estimated aggregate link loadings for 2010-11. Unlabeled links are 10 Gb/s; labeled links give capacity and loading in Gb/s, with core links in the 20-50 Gb/s range. Legend as on the previous maps.]
ESnet4 2010-11 Estimated Bandwidth Commitments

[Figure: map of ESnet4 showing estimated bandwidth commitments for 2010-11. Unlabeled links are 10 Gb/s; numeric labels give committed bandwidth and link capacity in Gb/s (core paths reach 80-100 Gb/s), Internet2 circuit numbers appear in parentheses, and USLHCNet circuits connect CERN to Starlight and to 32 AoA, NYC for the FNAL and BNL Tier1 centers. Legend as on the previous maps.]
2008 NP Workshop
• Goals
  – Accurately characterize the current and future network requirements for the NP Program Office’s science portfolio
  – Codify the requirements in a document
    • The document will contain the case studies and summary matrices
• Structure
  – Discussion of ESnet4 architecture and deployment
  – NP science portfolio
  – Internet2 perspective
  – Round-table discussions of case study documents
    • Ensure that networking folks understand the science process, instruments and facilities, collaborations, etc. outlined in the case studies
    • Provide opportunity for discussions of synergy, common strategies, etc.
    • Interactive discussion rather than formal PowerPoint presentations
  – Collaboration services discussion – Wednesday morning
Questions?
• Thanks!