EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH
EUROPEAN LABORATORY FOR PARTICLE PHYSICS
T0/T1/T2 Networking Issues
May 2005
David Foster
Communications Systems Group Leader
CERN
CERN Networking for LCG

LCG will require:
• Several thousand Gigabit Ethernet ports in the computer center
• Hundreds of Ten Gigabit Ethernet connections in the computer center
• 10+ Ten Gigabit Ethernet links to the T1s (WAN)
• 8+ Ten Gigabit Ethernet links to the Experiments

Challenges:
• Operation of the system as ONE entity
• Ensuring security and protection of the system
• Good monitoring to understand how the network is being used
• T1/T2 campus infrastructures
LCG cluster network
[Diagram: LCG cluster network — a 2.4 Tbps core interconnecting the WAN, the experimental areas and the campus network over Gigabit, Ten Gigabit and double Ten Gigabit Ethernet links (one double link not shown). A distribution layer fans each 10 Gb/s uplink out to 88×1 Gb/s ports towards ~6000 CPU servers, and to 32×1 Gb/s and 10×10 Gb/s groups towards ~2000 tape and disk servers.]
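As a back-of-envelope check on the fan-out shown above, a minimal Python sketch (the 88×1 Gb/s fan-out, the ~6000 CPU server count and the 2.4 Tbps core figure come from the diagram; the assumption that every server attaches at 1 Gb/s is ours):

```python
# Back-of-envelope check of the distribution-layer fan-out shown above.
# The 88*1 Gb/s fan-out, ~6000 CPU servers and 2.4 Tbps core are taken
# from the diagram; a 1 Gb/s attachment per server is an assumption.

cpu_servers = 6000        # ~6000 CPU servers, assumed 1 Gb/s each
uplink_gbps = 10          # one 10 Gb/s uplink per distribution switch
ports_per_uplink = 88     # each uplink fans out to 88 * 1 Gb/s ports
core_gbps = 2400          # "2.4 Tbps CORE"

oversubscription = ports_per_uplink / uplink_gbps       # 8.8 : 1
switches = -(-cpu_servers // ports_per_uplink)          # ceiling division
edge_tbps = cpu_servers * 1 / 1000                      # raw edge capacity

print(f"oversubscription per uplink: {oversubscription:.1f}:1")
print(f"distribution switches for the CPU farm: ~{switches}")
print(f"raw CPU-farm edge capacity: {edge_tbps:.1f} Tb/s vs {core_gbps/1000:.1f} Tb/s core")
```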
The future ???
• Coming faster than we imagine … ???
• 3 Core
• 1 TFlop??
• Liquid Cooled
• Networked, 20G Disk
• <$300 ??
• And then there is the PS3 …
T0/T1/T2 Interconnectivity
T2s and T1s are inter-connected by the general purpose research networks, while dedicated 10 Gbit links connect the Tier-1s to the Tier-0. Any Tier-2 may access data at any Tier-1.

[Diagram: the Tier-1 sites (GridKa, IN2P3, TRIUMF, Brookhaven, ASCC, Nordic, Fermilab, CNAF, RAL, SARA, PIC) on dedicated 10 Gbit links, each surrounded by Tier-2s reached over the general purpose research networks.]
LHC T1 Networking

• Needs to provide high bandwidth production connections to the T0
  – For the first time for HEP, the WAN is an integral part of the computing system.
  – Dedicated 10 Gb/sec links (see the sketch after this list)
• Is the combination of a number of initiatives:
  – GÉANT networking deployed in Europe – GÉANT2
    · SURFNET, SARA
    · UKERNA, RAL
    · RENATER, IN2P3
    · DFN, FZK
    · NORDUNET, Nordic T1
    · RedIRIS, PIC
    · GARR, Bologna
  – Dedicated transatlantic and transpacific links
    · TRIUMF
    · ASCC
    · FNAL, BNL
  – Networking initiatives
    · NetherLight, UKLight, GLIF, Gloriad, UltraLight and many others interconnecting China, Asia Pacific, North America, South America, etc.
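To give a feel for what one such dedicated link represents, a minimal sketch (the 10 Gb/sec figure is from the slide; the sustained-utilisation factor is an assumption for illustration):

```python
# What a dedicated 10 Gb/s T0-T1 link can move per day.
# The 10 Gb/s figure is from the slide; the utilisation factor is assumed.

link_gbps = 10           # dedicated T0-T1 link capacity
utilisation = 0.7        # assumed sustained utilisation (protocol + operational overhead)
seconds_per_day = 86400

tb_per_day = link_gbps * 1e9 * utilisation * seconds_per_day / 8 / 1e12
print(f"~{tb_per_day:.0f} TB/day per 10 Gb/s link at {utilisation:.0%} utilisation")
```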
LHC T2 Networking

• Needs to provide connectivity between T2s and from T2s to T1s. No particular access pattern is assumed.
• T2s are expected to have good (1 Gb/sec -> 10 Gb/sec) access to national and international research networks (see the sketch below), e.g.:
  – GÉANT
  – ESnet
  – Abilene
  – …
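For illustration, the sketch below compares the two ends of that access range for a hypothetical transfer (the 10 TB dataset size and 70% goodput are assumptions, not figures from the slide):

```python
# Illustrative transfer times for a T2 pulling data from a T1 over the
# 1 Gb/s -> 10 Gb/s access range quoted on the slide.
# The 10 TB dataset size and 70% goodput are assumptions for illustration.

dataset_tb = 10
goodput = 0.7   # assumed fraction of line rate achieved end-to-end

for link_gbps in (1, 10):
    seconds = dataset_tb * 1e12 * 8 / (link_gbps * 1e9 * goodput)
    print(f"{link_gbps:>2} Gb/s access: ~{seconds / 3600:.1f} hours for {dataset_tb} TB")
```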


GÉANT2 Topology

• Up to 15 of 30 consortium partners will be connected to dark fibre (DF)
• Selection of preferred providers is expected to be completed on 9 May 2005 in Pisa
• On 31 March, all DF routes relevant to the 7 European T1s were selected
GÉANT2 DF Topology as relevant to LHC

[Map: the GÉANT2 dark fibre footprint relevant to LHC — SNIC, CPH, SARA, AMS, RAL, LON, FRA, FZK, PAR, GVA, IN2P3, PIC, MAD, MIL and BOL, plus up to 6 further Central European locations.]
Cost Model

• GÉANT2 does not have prices, it shares cost
• The NRENs on the dark fibre cloud subscribe to a GEANT+ service – about 2 M€ per NREN per year (see the sketch after this list)
  – This subscription finances the DF backbone
  – 10 Gb/s IP and 10 Gb/s worth of p2p services
• Extra wavelengths for projects at marginal cost
• Opportunities for more direct connections (T2–T1) than first thought?
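A rough illustration of the scale of the shared-cost model (the 2 M€ per NREN per year is from the slide; the count of NRENs on the dark fibre cloud is taken from the "up to 15 of 30" figure on the GÉANT2 topology slide and is only indicative):

```python
# Rough scale of the GEANT2 shared-cost model.
# 2 MEUR per NREN per year is from the slide; the NREN count is the
# "up to 15 of 30 consortium partners on dark fibre" figure, used here
# as an assumption.

subscription_meur = 2.0      # GEANT+ subscription per NREN per year
nrens_on_df = 15             # assumed number of NRENs on the dark-fibre cloud

backbone_funding_meur = subscription_meur * nrens_on_df
print(f"~{backbone_funding_meur:.0f} MEUR/year financing the DF backbone")
```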
GLIF Map (from GLIF)

[Map of the Global Lambda Integrated Facility (GLIF) infrastructure.]
ESnet Goal – 2007/2008
• 10 Gbps enterprise IP traffic
• 40-60 Gbps circuit-based transport

[Map: planned ESnet topology — a production IP core (≥10 Gbps), a second Science Data Network core (30-50 Gbps, National Lambda Rail), metropolitan area rings, existing and new ESnet hubs (SEA, SNV, SDG, ALB, ELP, DEN, CHI, ATL, NYC, DC), high-speed cross connects with Internet2/Abilene, and lab-supplied and major international links (10-40 Gb/s) towards CERN, Europe, Japan, Australia and Asia-Pacific; major DOE Office of Science sites are marked.]
Tier0/1 Network Topology
2005/2006 Evolution

[Diagram: the CERN Tier-0 linked to the Tier-1s (TRIUMF, BNL, FNAL, ASCC, GridKa, IN2P3, PIC, CNAF, SARA, Nordic, RAL) via GÉANT2, the NRENs (Renater, DFN, RedIris, GARR, Surfnet, Nordunet, UKERNA), ESnet and Canarie, and the exchange points ManLan, StarLight, NetherLight and UKLight; link capacities range from 1 Gb/sec, 2x1 Gb/sec, 6x1 Gb/sec and nx1 Gb/sec up to 10 Gb/sec.]
Basic Network Issues

• Collections of circuits are not a network.
  – GEANT will provide diverse backup routes for dedicated circuits.
  – DOE/CERN will provide circuits to New York (BNL) and Chicago (Fermilab), with additional transit between New York and Chicago.
  – The backup connectivity for TRIUMF (Canada) and ASCC (Taipei) is still being discussed.
• Testbeds and production are not the same thing.
  – Many issues remain to be resolved that need funding; testbeds are important.
  – We need to evolve a clear plan for production infrastructures for LHC.
• The LHC network design is being discussed.
  – The "T0/T1 Networking" sub-group of the Grid Deployment Board is preparing an architecture document.
    · It aims to reach agreement on how the IP network on the dedicated circuits will be designed.
    · It will indicate what type of equipment will be required: "who should put what, where".
• Some technology investigations are underway:
  – The use of long-distance WAN-PHY links, given the OC192 interface costs.
  – UCLP
  – GFP and VCAT technologies.
Basic Operational Issues

• Many parties are involved in operations support.
  – At the network layer there are a number of partners with different spheres of influence: GEANT, the NRENs, commercial links, and the T0 and T1 centers.
  – At the grid layer there are operations centers: Regional Operations Centers (ROC), the Grid Operations Center (GOC), Core Infrastructure Centers (CIC), etc.
  – For the end user there is the Global Grid User Support (GGUS).
  – Performance Enhancement and Response Teams (PERT) are emerging in some NRENs.
• The process for resolving end user problems has yet to be fully defined.
• We still need to decide on a monitoring strategy and deploy appropriate monitoring tools (one possible building block is sketched below).
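One possible building block for such monitoring, sketched here purely for illustration, is periodic polling of interface byte counters (SNMP ifHCInOctets-style) to derive link utilisation; the counter values, polling interval and link speed below are made up:

```python
# Minimal sketch of deriving link utilisation from two readings of a
# 64-bit interface byte counter (e.g. SNMP ifHCInOctets).
# Counter values, interval and link speed are hypothetical.

def utilisation(octets_t0: int, octets_t1: int, interval_s: float,
                link_bps: float) -> float:
    """Fraction of link capacity used over the polling interval."""
    delta = (octets_t1 - octets_t0) % 2**64   # handle 64-bit counter wrap
    return delta * 8 / (interval_s * link_bps)

# Example: two polls 300 s apart on a 10 Gb/s T0-T1 link (made-up numbers).
u = utilisation(octets_t0=1_200_000_000_000,
                octets_t1=1_450_000_000_000,
                interval_s=300,
                link_bps=10e9)
print(f"link utilisation over 5 min: {u:.1%}")
```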
Far-reaching Issues


• There is tremendous momentum behind world-wide networking initiatives, but it is important that this continues.
• What we want to do today … "requirements"
  – A typical approach: size needs according to current understanding.
• What we can do tomorrow … "opportunity"
  – Affordability of high capacity end-to-end networking:
    · CPU-Memory-Disk
    · Bus-NIC
    · NIC-Campus
    · Campus-WAN
  – An opportunity for "business transformation": to conceive of new ways of collaborating and sharing resources.
    · Current grid use is largely off-line and batch-like.
    · Continued advancement in "on-demand" end-to-end networking will provide for increasingly cost-effective, real-time and interactive usage.
• World class networking available for everyone will bring dramatic changes and opportunities.
  – Digital divide issues need to be addressed to make cost-effective access to high performance networking accessible to everyone.
  – Pervasive high performance networking is needed to realise the vision of pervasive high performance grid services.