The LHC Open Network Environment (LHCONE)


Networking@DESY
Volker Gülzow, Kars Ohrenberg
Computing Seminar
Zeuthen, 23.04.2013
Network Topology
[Topology diagrams for Hamburg; traffic plots: Zeuthen inbound and outbound]
Bandwidth Evolution @ DFN
> DFN is upgrading the optical platform of the X-WiN
■ Contract awarded to ECI Telecom (http://www.ecitele.com)
■ Migration work is currently underway
> High bandwidth capabilities
■ 88 wavelengths per fiber
■ Up to 100 Gbps per wavelength
■ thus 8.8 Tbps per fiber! (see the sketch below)
■ 1 Tbps switching fabric (aggregation of 10 Gbps lines onto a single 100 Gbps line)
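A quick sanity check of the aggregate figure (plain arithmetic, shown only to make the wavelengths-times-rate multiplication explicit):

# Per-fiber capacity: 88 wavelengths at 100 Gbps each
wavelengths = 88
gbps_per_wavelength = 100
print(wavelengths * gbps_per_wavelength / 1000, "Tbps per fiber")  # -> 8.8 Tbps per fiber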
Growing WiN Capacities
Bandwidth Evolution @ DFN
> Significantly cheaper components for 1 Gbps and 10 Gbps -> reduced cost for VPN connections, new DFN pricing
> New DFN conditions starting 1.7.2013
■ DESY's contract of 2 x 2 Gbps will go to 2 x 5 Gbps without additional costs
■ New cost model for point-to-point VPNs:
■ 1) Initial installation payment
■ 10 Gbps ~ 11,400 €, 40 Gbps ~ 38,000 €, 100 Gbps ~ 94,000 €
■ 2) Annual fee now depends on the distance
■ Hamburg <> Berlin at ~ 20% of the current costs (for 10 Gbps)
■ Hamburg <> Berlin at ~ 80% of the current costs (for 40 Gbps)
■ Hamburg <> Karlsruhe at ~ 45% of the current costs (for 10 Gbps)
■ Hamburg <> Karlsruhe at ~ 150% of the current costs (for 40 Gbps)
GÉANT3 topology
Networking for LHC
LHC Computing Infrastructure
> WLCG in brief:
■ 1 Tier-0, 11 Tier-1s, ~ 140 Tier-2s, O(300) Tier-3s worldwide
The LHC Optical Private Network
> The LHCOPN (from http://lhcopn.web.cern.ch):
■ The LHCOPN is the private IP network that connects the Tier0 and the Tier1 sites of the LCG.
■ The LHCOPN consists of any T0-T1 or T1-T1 link which is dedicated to the transport of WLCG traffic and whose utilization is restricted to the Tier0 and the Tier1s.
■ Any other T0-T1 or T1-T1 link not dedicated to WLCG traffic may be part of the LHCOPN, assuming the exception is communicated to and agreed by the LHCOPN community.
> Very closed and restricted access policy
> No gateways
LHCOPN Network Map
Data transfers
[Plots: CERN → Tier-1 export rates and global transfer rates; by Ian Bird, CRRB, 4/13]
> Global transfer rates are always significant (12–15 Gb/s) – permanent ongoing workloads
> CERN export rates driven (mostly) by LHC data export
Resource usage: Tier 0/1
[Plot: Tier-0/1 resource usage; slide by Ian Bird]
Resource use vs pledge
[Plots: CERN and Tier-1 resource use versus pledges]
Resource vs pledges: Tier 2
[Plot: Tier-2 resource use versus pledges; slide by Ian Bird]
Connectivity (100 Gb/s)
[Map: 100 Gb/s connectivity; slide by Ian Bird]
> Latency measured; no problems anticipated
Computing Models Evolution
> The original MONARC model was strictly hierarchical
> Changes introduced gradually since 2010
> Main evolutions:
■ Meshed data flows: any site can use any other site as a source of data
■ Dynamic data caching: analysis sites pull datasets from other sites "on demand", including from Tier-2s in other regions
■ Remote data access
> Variations by experiment
> LHCOPN only connects T0 and T1
LHC Open Network Environment
> With the successful operation of the LHC accelerator and the start of the data analysis, there has come a re-evaluation of the computing and data models of the experiments
> The goal of LHCONE (LHC Open Network Environment) is to ensure better access to the most important datasets by the worldwide HEP community
> Traffic patterns have altered to the extent that substantial data transfers between major sites are regularly being observed on the General Purpose Networks (GPN)
> The main principle is to separate the LHC traffic from the GPN traffic, thus avoiding degraded performance
> The objective of LHCONE is to provide entry points into a network that is private to the LHC T1/2/3 sites
> LHCONE is not intended to replace LHCOPN but rather to complement it
LHCONE Architecture
LHCONE VRF Map (from Bill Johnston, ESnet)
LHCONE: A global Infrastructure
LHCONE Activities
> With the above in mind, LHCONE has defined the following activities:
> VRF-based multipoint service: a "quick-fix" to provide multipoint LHCONE connectivity, with logical separation from the R&E GPN
> Layer 2 multipath: evaluate use of emerging standards such as TRILL (IETF) or Shortest Path Bridging (SPB, IEEE 802.1aq) in WAN environments
> OpenFlow: there was wide agreement that SDN is the most probable candidate technology for LHCONE in the long term (but needs more investigation)
> Point-to-point dynamic circuit pilots
> Diagnostic infrastructure: each site to have the ability to perform E2E performance tests with all other LHCONE sites (see the sketch below)
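As an illustration of such a diagnostic mesh, the minimal Python sketch below runs pairwise throughput tests with iperf3 and parses its JSON output. The endpoint hostnames are invented placeholders, and production LHCONE monitoring relies on perfSONAR rather than an ad-hoc loop like this:

import json
import subprocess

# Hypothetical LHCONE measurement endpoints (placeholders, not real hosts).
REMOTE_SITES = ["t2-hamburg.example.org", "t2-zeuthen.example.org"]

def measure_throughput(host: str, seconds: int = 10) -> float:
    """Run one iperf3 client test and return the received rate in Mbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", str(seconds), "-J"],  # -J: JSON output
        capture_output=True, text=True, check=True,
    )
    report = json.loads(out.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    for site in REMOTE_SITES:
        try:
            print(f"{site}: {measure_throughput(site):.0f} Mbit/s")
        except (subprocess.CalledProcessError, KeyError) as exc:
            print(f"{site}: test failed ({exc})")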
Software-Defined Networking (SDN)
> Is a form of network virtualization in which the control plane is separated from the data plane and implemented in a software application
> This architecture allows network administrators to have programmable central control of network traffic without requiring physical access to the network's hardware devices
> SDN requires some method for the control plane to communicate with the data plane. One such mechanism is OpenFlow, a standard interface for controlling computer networking switches
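To make the control-plane/data-plane split concrete, here is a toy Python sketch; it is not tied to any real controller framework, and all names (FlowTable, Controller, the port labels) are invented for illustration:

class FlowTable:
    """Data plane: a destination -> output-port lookup table on one switch."""
    def __init__(self):
        self.rules = {}  # destination prefix -> output port

    def forward(self, dst: str) -> str:
        return self.rules.get(dst, "drop")

class Controller:
    """Control plane: central software that programs all switches."""
    def __init__(self, switches):
        self.switches = switches

    def install_rule(self, switch: str, dst: str, port: str) -> None:
        # With OpenFlow this would be a flow-mod message sent to the switch.
        self.switches[switch].rules[dst] = port

# The controller reroutes traffic without physical access to the hardware.
switches = {"sw1": FlowTable()}
controller = Controller(switches)
controller.install_rule("sw1", "131.169.98.0/24", "port-to-lhcone")
print(switches["sw1"].forward("131.169.98.0/24"))  # -> port-to-lhcone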
Gülzow/Ohrenberg, Zeuthen | 23.4.2013 | Seite 28
LHCONE VRF
> Implementation of multiple logical router instances inside a physical device (virtualized Layer 3)
> Logical control plane separation between multiple clients
> VRF in LHCONE: regional networks implement VRF domains to logically separate LHCONE from other flows
> BGP peerings used inter-domain and to end-sites
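A minimal sketch of what this logical separation means, using Python's standard ipaddress module; the VRF names, prefixes, and interface labels are illustrative, not DESY's actual configuration:

import ipaddress
from typing import Optional

# One physical router, two independent logical routing tables (VRFs).
VRF_TABLES = {
    "lhcone":  {ipaddress.ip_network("131.169.98.0/24"): "ge-0/0/1"},
    "general": {ipaddress.ip_network("0.0.0.0/0"): "ge-0/0/2"},
}

def lookup(vrf: str, dst: str) -> Optional[str]:
    """Longest-prefix match restricted to a single VRF's table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in VRF_TABLES[vrf] if addr in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)
    return VRF_TABLES[vrf][best]

print(lookup("lhcone", "131.169.98.17"))   # ge-0/0/1 (LHCONE path)
print(lookup("general", "131.169.98.17"))  # ge-0/0/2 (default route)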
Multipath in LHCONE
> Multipath problem:
■ How to use the many (transatlantic) paths at Layer 2 among the many partners, e.g. USLHCNet, GEANT, SURFnet, NORDUnet, ...
> Layer 3 (VRF) can use some BGP techniques (see the sketch after this list)
■ MED, AS padding, local preference, restricted announcements
■ works in a reasonably small configuration; not clear it will scale up to O(100) end-sites
> Some approaches to Layer 2 multipath:
■ IETF: TRILL (TRansparent Interconnection of Lots of Links)
■ IEEE: 802.1aq (Shortest Path Bridging)
> None of these L2 protocols is designed for the WAN!
■ R&D needed
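For illustration, the sketch below mimics the first tie-breakers of the standard BGP best-path decision (highest local preference, then shortest AS path, then lowest MED) on invented candidate routes over paths like those named above; real BGP applies several further tie-breakers:

from dataclasses import dataclass, field

@dataclass
class Route:
    via: str                 # which (transatlantic) path carries the route
    local_pref: int = 100    # higher wins
    as_path: list = field(default_factory=list)  # AS padding lengthens this
    med: int = 0             # lower wins

def best_path(routes):
    """First steps of BGP best-path selection only."""
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path), r.med))

candidates = [
    Route(via="USLHCNet", local_pref=200, as_path=[1, 2]),
    Route(via="GEANT",    local_pref=100, as_path=[1]),        # lower pref loses
    Route(via="NORDUnet", local_pref=200, as_path=[1, 1, 2]),  # padded path loses
]
print(best_path(candidates).via)  # -> USLHCNet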
LHCONE Routing Policies
> Only the networks which are announced to LHCONE are allowed to reach the LHCONE
■ Networks announced by DESY:
■ 131.169.98.0/24, 131.169.160.0/21, 131.169.191.0/24 (Tier-2 Hamburg)
■ 141.34.192.0/21, 141.34.200.0/24 (Tier-2 Zeuthen)
■ 141.34.224.0/22, 141.34.228.0/24, 141.34.229.0/24, 141.34.230.0/24 (NAF Hamburg)
■ 141.34.216.0/23, 141.34.218.0/24, 141.34.219.0/24, 141.34.220.0/24 (NAF Zeuthen)
■ e.g. networks announced by CERN:
■ 128.142.0.0/16 but not 137.138.0.0/16
> Only these networks will be reachable via the LHCONE (a membership check is sketched below)
> Other traffic uses the public, general purpose networks
> Asymmetric routing should be avoided as this will cause problems for traffic passing (public) firewalls
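Since only announced prefixes are reachable over LHCONE, membership can be checked mechanically; a minimal sketch using the DESY prefixes listed above and Python's standard ipaddress module:

import ipaddress

# DESY prefixes announced to LHCONE, as listed on this slide.
LHCONE_PREFIXES = [ipaddress.ip_network(p) for p in (
    "131.169.98.0/24", "131.169.160.0/21", "131.169.191.0/24",   # Tier-2 Hamburg
    "141.34.192.0/21", "141.34.200.0/24",                        # Tier-2 Zeuthen
    "141.34.224.0/22", "141.34.228.0/24", "141.34.229.0/24", "141.34.230.0/24",  # NAF Hamburg
    "141.34.216.0/23", "141.34.218.0/24", "141.34.219.0/24", "141.34.220.0/24",  # NAF Zeuthen
)]

def reaches_lhcone(address: str) -> bool:
    """True if traffic to/from this address may use LHCONE paths."""
    addr = ipaddress.ip_address(address)
    return any(addr in net for net in LHCONE_PREFIXES)

print(reaches_lhcone("141.34.200.5"))  # True: Tier-2 Zeuthen
print(reaches_lhcone("137.138.0.1"))   # False: not announced to LHCONE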
LHCONE - the current status
> Currently ~100 network prefixes
■ German sites currently participating in LHCONE
■ DESY, KIT, GSI, RWTH Aachen, Uni Wuppertal
■ Europe
■ CERN, SARA, GRIF (LAL + LPNHE), INFN, FZU, PIC, ...
■ US:
■ AGLT2 (MSU + UM), MWT2 (UC), BNL, ...
■ Canada
■ TRIUMF, Toronto, ...
■ Asia
■ ASGC, ICEPP, ...
> Detailed monitoring via perfSONAR
LHCONE Monitoring
R&D Network Trends
> Increased multiplicity of 10 Gbps links in the major R&E networks: GEANT, Internet2, ESnet, various NRENs, ...
> 100 Gbps backbones in place and transition now underway
■ GEANT, DFN, ...
■ CERN - Budapest 2 x 100G for the LHC remote Tier-0 Center
> OpenFlow (software-defined switching and routing) taken up by much of the network industry and R&E networks
WAN + LHCONE Infrastructure at DESY
Summary
> The LHC computing and data models continue to evolve towards more dynamic, less structured, on-demand data movement, thus requiring different network structures
> LHCOPN and LHCONE may merge in the future
> With the evolution of the new optical platforms, bandwidth will become more affordable