
Internet2 Update
June 29th 2010, LHCOPN
Jason Zurawski – Internet2
Introduction
• Internet2 is an advanced networking consortium led by
members of the Research and Education (R&E) community.
• We promote the missions of our members, in part through the
development and support of networking activities and related
initiatives.
• We are committed to supporting scientific use of the network,
including the LHC.
– Enabling large scale data transfers over a high capacity nationwide
network
– Dynamic circuit capability through the ION service
– Performance Monitoring through perfSONAR
– Support for the debugging of Network Performance, end to end.
Outline
• Internet2 Network and Advanced Services Update
• ARRA and Stimulus Update
• A Blast from the Past
• LHC Traffic Observations
Network Architecture
• Backbone router upgrades
– Houston, Kansas City, Salt Lake City, Los Angeles, Seattle routers
upgraded to Juniper MX960s in early 2010
– Chicago router will remain Juniper T1600 in 2010 (getting crowded!)
• Backbone augments
– All backbone circuits re-framed from OC-192 to 10GigE LANPHY
– Additional backbone links lit between all adjacent routers; adjacent
nodes now connected with 20G of bandwidth
• Optical capacity added between Denver and Salt Lake City (10
additional waves)
• 10G capacity between Internet2 and TransitRail added in Los
Angeles, Seattle, Chicago, Washington DC
IP NETWORK VISUALIZED
• Note: New York to Chicago is now 20G, more later…
IP Service
• Less Than Best Effort (LBE) service available on the Internet2 IP network
– Researchers can signal LBE service by marking the IP ToS/DSCP field (see the sketch at the end of this slide)
– Internet2 will evolve the service and documentation over the coming months
• Backbone routers configured to handle MPLS transport of ION
services
• Route Statistics
– R&E IPv4: 13,094 routes
– R&E IPv6: 812 routes
– CPS* IPv4: 154,272 routes
– CPS* IPv6: 2,850 routes
*CPS = Commercial Peering Service (http://noc.net.internet2.edu/i2network/commercial-peering-service.html)
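To make the LBE signalling above concrete, here is a minimal sketch of how an application could mark its traffic with a scavenger-class code point before a bulk transfer. The specific DSCP value (CS1) and the endpoint name are assumptions for illustration only; consult the Internet2 LBE documentation for the marking the backbone actually honors.

```python
import socket

# Assumed scavenger / less-than-best-effort code point (CS1 = DSCP 8);
# confirm the exact value against Internet2's LBE documentation.
DSCP_CS1 = 8
TOS_VALUE = DSCP_CS1 << 2  # DSCP occupies the upper six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.connect(("data.example.edu", 5001))   # hypothetical bulk-transfer endpoint
sock.sendall(b"low-priority bulk data ...")
sock.close()
```

Traffic marked this way can be de-prioritized by the backbone when links are congested, which is the point of the LBE class for large, non-interactive transfers.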
ION Service
• ION = Interface to Dynamic Circuits (similar to Autobahn/SDN)
• Transition from Ciena CoreDirectors to Juniper MX960s in 1H2010
– Move from the SONET-based network on the Cienas to an MPLS-based service operating on the current IP network
• MPLS transport makes more efficient use of resources
– Bandwidth reserved for circuit instantiation is available for use by
other users when circuit owner not utilizing circuit for transfer
– Opportunity to provide circuits that can burst above their requested
commit rate, if sufficient headroom available
• ION will be a production service managed by the Internet2 NOC
• ION circuits provisioned using a simple and secure web-based
interface or IDC signalling
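As a rough illustration of the reservation workflow above, the sketch below shows the kind of parameters an ION circuit request carries (endpoints, committed bandwidth, schedule). The field names, the URN-style endpoint identifiers, and the submit_reservation helper are all hypothetical; the real service is driven through the web interface or IDC signalling.

```python
from datetime import datetime, timedelta

# Hypothetical shape of a dynamic-circuit request; not the actual IDC message format.
reservation = {
    "source":      "urn:example:internet2.edu:node=chic:port=xe-0/0/0",  # placeholder IDs
    "destination": "urn:example:internet2.edu:node=newy:port=xe-1/0/0",
    "bandwidth_mbps": 2000,  # committed rate; bursting above it may be allowed if headroom exists
    "start": datetime.utcnow() + timedelta(hours=1),
    "end":   datetime.utcnow() + timedelta(hours=4),
    "description": "Tier-2 to Tier-2 bulk transfer",
}

def submit_reservation(request):
    """Stand-in for the web/IDC call that would actually provision the circuit."""
    print(f"Requesting {request['bandwidth_mbps']} Mb/s "
          f"from {request['source']} to {request['destination']} "
          f"({request['start']:%H:%M} - {request['end']:%H:%M} UTC)")

submit_reservation(reservation)
```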
INTERNET2 HISTORICAL OFFERED LOAD
Outline
• Internet2 Network and Advanced Services Update
• ARRA and Stimulus Update
• A Blast from the Past
• LHC Traffic Observations
Internet2 and ARRA Stimulus
• Many Internet2 connectors are looking to expand through NTIA
BTOP program.
• Internet2 is exploring ways to upgrade/expand capabilities to match the expected growth of the regionals and to ensure fees remain the same or are potentially reduced.
• Internet2 has submitted a Round 2 proposal to the ARRA-funded Broadband Technology Opportunities Program (BTOP), administered by the NTIA
– Seeks to acquire nationwide dark fiber, optical equipment to light
the fiber at 100G speeds, and an upgraded IP network delivering
100GigE to the Internet2 Community
New Network Builds in Proposal
Combined US UCAN System Capability
Upgraded IP Backbone
Outline
• Internet2 Network and Advanced Services Update
• ARRA and Stimulus Update
• A Blast from the Past
• LHC Traffic Observations
A Blast from the Past
• The following slides were given at Summer Joint Techs 2007
(Fermilab)
– Rick Summerhill and Eric Boyd
– http://www.internet2.edu/presentations/jt2007jul/20070716boyd-summerhill.ppt
• Background – Internet2 used to sponsor workshops as a service
to our members and connectors to prepare for the LHC
– Data and network requirements
– Common stumbling blocks to success (e.g. network performance
and design)
• These have since evolved into a more general ‘Network Performance’ workshop
Are you ready for LHC?
[Diagram: LHC data distribution hierarchy]
• Tier 0 (CERN) – Raw Data
• LHCOPN
• Tier 1 (12 orgs; FNAL, BNL) – Shared Data Storage and Reduction
• GEANT-ESnet-Internet2
• US Tier 2 (15 orgs; Atlas 6-7, CMS 7) – Provides Data to Tier 3
• Internet2/Connectors
• US Tier 3 (68 orgs) – Scientists Request Data
• Local Infrastructure
• US Tier 4 (1500 US scientists) – Scientists Analyze Data
Peak Flow Network Requirements
[Diagram: same tier hierarchy, annotated with peak flow estimates]
• Tier 0 to Tier 1 (LHCOPN): requires 10-40 Gbps
• Tier 1 to Tier 2 (GEANT-ESnet-Internet2): requires 10-20 Gbps
• Tier 1 or Tier 2 to Tier 3 (Internet2/Connectors): estimated 1.6 Gbps per transfer (2 TB in 3 hours)
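As a quick sanity check of the Tier 1/2 to Tier 3 estimate above, the arithmetic below converts '2 TB in 3 hours' into an average rate; the result (about 1.5 Gbps) is consistent with the quoted 1.6 Gbps once protocol overhead and imperfect throughput are allowed for.

```python
# Average rate needed to move 2 TB (decimal terabytes) in a 3-hour window.
data_bits = 2e12 * 8             # 2 TB expressed in bits
window_seconds = 3 * 60 * 60     # 3-hour transfer window
rate_gbps = data_bits / window_seconds / 1e9
print(f"{rate_gbps:.2f} Gbps")   # ~1.48 Gbps
```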
What are the Implications for Normal Network Operations from T2 to T3?
• Example: 13 people (3 Professors and 10 Graduate Students) require ten 3-hour timeslots a month to receive 8 Gigabit data flows.
[Graph: link utilization with 10 Gig and 4 Gig levels marked]
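Reading the example above as ten 3-hour slots for the group each month (an assumption; the slide does not say whether the slots are per person), the short calculation below shows why these flows are disruptive: each slot moves on the order of 10 TB, and an 8 Gbps flow leaves only about 2 Gbps of headroom on a 10 Gig link for everything else.

```python
# Rough footprint of the example's 8 Gbps flows on a 10 Gbps link.
flow_gbps = 8
slot_hours = 3
link_gbps = 10

data_per_slot_tb = flow_gbps * slot_hours * 3600 / 8 / 1e3   # gigabits -> terabytes
print(f"Data moved per 3-hour slot: {data_per_slot_tb:.1f} TB")   # ~10.8 TB
print(f"Headroom left for other traffic: {link_gbps - flow_gbps} Gbps")
```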
CMS T2 Traffic at UNL
Internet2 Connectors
[Map of the Internet2 backbone and connectors: CalREN-2 South, NYSERNet, 3ROX, Great Plains Network, Indiana GigaPoP, MAGPI, MREN, NoX, Merit, OARnet, ESnet, Oregon GigaPoP, LONI, SoX, OmniPoP, Pacific Northwest GigaPoP]
Cyberinfrastructure Requirements
• Data storage
• Robust campus infrastructure
• Security and Authorization
• IT support for local and remote resources
• Network Performance monitoring tools
Cyberinfrastructure Components
[Diagram: applications layered on top of the network cyberinfrastructure]
• Applications: Bulk Transport, 2-Way Interactive Video, Real-Time Communications, …
• Network Cyberinfrastructure: Phoebus, Middleware, Performance Infrastructure / Tools, Control Plane, Network
• Applications call on Network Cyberinfrastructure
Blast from the Past Summary
• Was the message heard at the Tier2 Level?
– Absolutely – most (if not all) US Tier2s are extremely well
connected (diverse and capable network paths) and can (do) flood
the network at will (see examples later)
– Cyberinfrastructure components are well deployed and useful
• perfSONAR-PS available at all USATLAS Tier1/Tier2s. Gaining at
USCMS as well. Striving for Tier3s to have a deployment available
• Lambda Station/Terapaths/Phoebus are successful data movement
tools that utilize the Dynamic Circuit networks
• What more needs to be done?
– Tier3s – what is the worst-case scenario?
– Bridging the gap – Campus IT vs. the Science Disciplines
– The workshops were valuable; why can't they continue?
• Internet2 wants to be involved, but needs the support and help of the scientific communities (beyond the LHC as well) and network partners
Internet2 LHC Project Connectivity (2009)
And a note on perfSONAR-PS…
• Based on conversations with John and others yesterday, some clarifications:
– perfSONAR-MDM: managed service, i.e. support is available for the installation, configuration, and management of open source software based on the perfSONAR protocols
– perfSONAR-PS: non-managed service (i.e. a pure open source support model) for the use of open source software based on the perfSONAR protocols
• Is there a difference between the two?
– Only in the management and development; the software is interoperable at the protocol level
• Key stakeholders (for both)
– Networks (R&E and Commercial)
– Campuses
– Federal Labs
– VOs
• Open development opportunities
– Yes! There are APIs and the data is available
– Traditional (Python, Perl, Java), REST gaining strength
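As a flavor of the open data access described above, the snippet below fetches measurement results over HTTP and decodes JSON. The host name, query path, and response fields are hypothetical placeholders, not the actual perfSONAR-PS interfaces; consult the project documentation for the real APIs and schemas.

```python
import json
import urllib.request

# Hypothetical measurement-archive query; substitute a real perfSONAR host
# and the query interface documented by the project.
URL = "https://ps-archive.example.edu/measurements/throughput?src=hostA&dst=hostB"

with urllib.request.urlopen(URL) as response:
    samples = json.load(response)        # assume a JSON list of samples

for sample in samples:
    # Field names are illustrative only.
    print(sample.get("timestamp"), sample.get("throughput_mbps"))
```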
Outside Development Gaining Traction
Outline
• Internet2 Network and Advanced Services Update
• ARRA and Stimulus Update
• A Blast from the Past
• LHC Traffic Observations
So … Where is All the Data?
• Aggregate traffic from Fermilab/BNL on Internet2 Network
• Dates: 3/30 to 4/2 (First collision through data dissemination)
• Note the ‘peaks’ of around 3-5G. They didn't last long, which is consistent with the maximum size of the data set.
• Graph courtesy of Chris Robb.
So … Where is All the Data?
[Same graph as the previous slide, annotated: “Possible Transfers?”]
So … Where is All the Data?
• Despite these facts on data size and where it came from, did we
see the data on Internet2?
– “Some”, but not all
– A little later than first availability (more with Tier2 and Tier3
transfers)
• Who saw the data?
– Purpose-built R&E nets (Ultralight, USLHCNet)
– ESnet (into and out of Fermilab/BNL)
– Internet2/NLR
• From Tier1s
• Between Tier2s*
• To Tier3s
*Expected, and becoming common
Connectivity
• Minor experiment by me to see how Tier-2s route to each
other, and the Tier-1 for USATLAS.
• pS Performance Toolkit (http://psps.perfsonar.net/toolkit/)
available at Tier-1 and almost all Tier-2s.
– Co-located near the rest of the processing/storage
– Using the available performance tools, analyze the routes
– Determine the paths over which the data is flowing
– Check the times/data stores to find evidence of the transfers
– ‘Reverse Traceroute’ Tool – developed by SLAC
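In the spirit of the route survey above, the sketch below is one minimal way to collect forward paths toward a handful of sites. It simply shells out to the system traceroute rather than using the pS Performance Toolkit or SLAC's reverse traceroute tool, and the host names are placeholders.

```python
import subprocess

# Hypothetical measurement hosts at a Tier-1 and two Tier-2 sites.
SITES = ["ps.tier1.example.edu", "ps.tier2a.example.edu", "ps.tier2b.example.edu"]

def forward_path(destination):
    """Return traceroute output toward destination (numeric output, no DNS lookups)."""
    result = subprocess.run(["traceroute", "-n", destination],
                            capture_output=True, text=True, timeout=120)
    return result.stdout

for site in SITES:
    print(f"=== path toward {site} ===")
    print(forward_path(site))
```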
Connectivity – BNL (Tier 1)
• Tier 1 for USATLAS
• Connectivity to other sites (Tier-2s, Tier-3s)
– MSU/UMich – Ultralight
– Indiana – ESnet
– U of Chicago – Private R&E Network/Peering
– Boston Univ. – Private R&E Network/Peering
– Oklahoma – ESnet
– U of Texas at Arlington – ESnet/NLR
– SMU – ESnet/Internet2
– U of Wisconsin – ESnet
– LBNL/NERSC – ESnet
• As expected for a Tier1, there is not much touching Internet2
Connectivity – Boston Univ. (Tier 2)
• Tier 2 (Northeast Tier2 [NET2] w/ Harvard)
• Connectivity to other sites
– BNL (Tier-1) – Private Network/Peering
– MSU/UMich (Tier-2) – Internet2
– Indiana (Tier-2) – Internet2
– U of Chicago (Tier-2) – Internet2
– Oklahoma (Tier-2) – Internet2
– U of Texas at Arlington (Tier-2) – Internet2
– SMU (Tier-3) – Internet2
– U of Wisconsin (Tier-3) – Internet2
– LBNL/NERSC (Tier-3) – ESnet
• Private connectivity to the Tier1 (shared with NET2 partner
Harvard), but T2-T2 transfers almost exclusively R&E
Connectivity – Oklahoma (Tier 2)
• Tier 2 (Southwest Tier2 [SWT2] w/ U of T at A)
• Connectivity to other sites
– BNL (Tier-1) – ESnet
– MSU/UMich (Tier-2) – NLR
– Indiana (Tier-2) – NLR
– U of Chicago (Tier-2) – Internet2
– Boston Univ. (Tier-2) – Internet2
– U of Texas at Arlington (Tier-2) – NLR
– SMU (Tier-3) – NLR
– U of Wisconsin (Tier-3) – NLR
– LBNL/NERSC (Tier-3) – ESnet
• A well-connected site with a mix of different R&E peerings. Diversity of paths is a good thing.
What We are Expecting
• Tier-2 to Tier-2 Transfers
– We are always monitoring and looking for pinch points
– Some activity is more visible than others…
– Fully expect this type of transfer to occur (and increase!) as the project matures
• Tier-2 Transfers, sometimes International (CMS)
– Expecting this based on the CMS model
– This has directly led to capacity changes on the network
• New 10G between New York and Chicago – Early May 2010
– Ready and willing to add capacity where it is needed
• RE: David’s slides yesterday regarding ‘protection’
• Will be speaking with heavy network users as traffic increases about solutions (e.g. extra capacity at the regional/campus network, use of ION, etc.)
• Working with regional partners to increase capacity into heavy-use campuses (e.g. Vanderbilt University [new Heavy Ion Tier-2] -> SoX)
Example of T2 – T2: 4/26 7 EDT
• Inbound to Internet2 from GPN (UNL – A CMS Tier-2)
Example of T2 – T2: 4/26 7 EDT
• Outbound to CalREN (Caltech – A CMS Tier-2)
Example of T2 – T2: 4/26 7 EDT
• Backed up by CMS PhEDEx data
Example of T2 – T2: 4/26 7 EDT
• Backed up by CMS PhEDEx data
Other Example of a T2: 4/15 to date
• Backbone traffic heating up (NEWY-CHIC)
Other Example of a T2: 4/15 to date
• Tracked to University of Wisconsin (USCMS/USATLAS Tier-2)
Other Example of a T2: 4/15 to date
• PhEDEx confirms (into UofWisc)
Other Example of a T2: 4/15 to date
• Some (not all) coming out of a Tier-1 in Germany (KIT/GridKA)
LHC Science Preparedness
• Backbones are ready for the challenges
– Underestimates can be met with action to increase capacity
• Regional Networks should be prepared as well.
– Working to upgrade heavy users to increase the science capability
• Campus preparedness will vary
– Large campus – more than likely aware of the demands of big
science
– Small campus – prepared? Time is running out to find out…
• Internet2's Role
– Support the missions of our members, no matter the project
– Deliver networking
– Support key cyberinfrastructure, either through software
development, instruction, or advanced services
Internet2 Update
June 29th 2010, LHCOPN
Jason Zurawski – Internet2
For more information, visit www.internet2.edu