LHCOPN connectivity for FR-CCIN2P3


Network infrastructure at FR-CCIN2P3
Guillaume Cessieux – CCIN2P3 network team
Guillaume . Cessieux @ cc.in2p3.fr
On behalf of CCIN2P3 network team
LHCOPN meeting, Vancouver, 2009-09-01
FR-CCIN2P3
Since 1986
Now 74 staff
~5,300 cores
10 PB of disk
30 PB of tape
Computing room ~730 m²
1.7 MW
RENATER-4 → RENATER-5: Dark fibre galore
~7,500 km of dark fibre
[Map: RENATER-4 → RENATER-5 topology — dark fibres, 2.5G leased lines, 1G (GE) leased lines; sites shown include Kehl, Le Mans, Angers, Tours, Genève (CERN) and Cadarache]
Pop RENATER-5 Lyon

(D)WDM based

Previously
– Alcatel 1.6k series
– Cisco 6500 & 12400

Upgraded to
– Ciena CN4200
– Cisco 7600 & CRS-1

Hosted by CCIN2P3
– Direct access to RENATER's backbone
• No last-mile or MAN issues
Terminating two 10G LHCOPN links:
– CERN-IN2P3-LHCOPN-001
– GRIDKA-IN2P3-LHCOPN-001
[Layer 3 diagram: CERN-GRIDKA-LHCOPN-001 shown as a candidate for L1 redundancy; ~100 km path]
WAN connectivity related to T0/T1s
[Diagram: LAN edge and backbone connected to the WAN — dedicated LHCOPN 10G links towards Geneva (CERN) and Karlsruhe (GRIDKA) for Tier-1 traffic, with dedicated data servers for LCG and MDM appliances; generic IP connectivity via RENATER and GÉANT2 towards the Internet, Chicago, French Tier-2s (2x1G) and other Tier-2s (1G); one path is marked "Beware: Not for LHC"]
LAN: Just fully upgraded!
[Diagram: before → after — computing, SATA storage and FC+tape storage blocks of the LAN]
Now “top of rack” design

Really eases mass handling of devices
– Enables buying pre-wired racks directly
• Just plug in power and fibre – 2 connections!
…
Current LAN for data analysis
Backbone: 40G

Computing:
– 2 distributing switches, linked to the backbone with 4x10G
– 1 switch per rack (36 access switches), 1x10G uplink each, 48x1G per switch
– 1G per server; 36 computing racks, 34 to 42 servers per rack

Storage:
– 3 distributing switches, linked to the backbone with 4x10G
– 34 access switches with trunked 2x10G uplinks
– Data SATA: 816 servers in 34 racks, 24 servers per switch, 2x1G per server
– Data FC: 27 servers, 10G per server
– Tape: 10 servers, 10G per server
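These figures give a rough idea of the oversubscription at each layer. A minimal sketch, using only the per-rack and uplink counts listed above (the grouping of servers per uplink is as read from the diagram, so treat the ratios as approximations):

```python
# Rough oversubscription estimates for the LAN described above.
# Counts are taken from the slide; the ratios are illustrative only.

def oversubscription(server_count, gbps_per_server, uplink_gbps):
    """Ratio of aggregate server bandwidth to uplink capacity."""
    return server_count * gbps_per_server / uplink_gbps

# Computing rack: up to 42 servers at 1G behind a single 10G uplink.
print(oversubscription(42, 1, 10))       # ~4.2:1

# SATA storage rack: 24 servers at 2x1G behind a trunked 2x10G uplink.
print(oversubscription(24, 2, 20))       # 2.4:1

# Distribution: 36 computing access switches (1x10G each) behind
# 2 distributing switches, each with 4x10G towards the 40G backbone.
print(oversubscription(36, 10, 2 * 40))  # 4.5:1
```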
Main network devices and configurations used
Backbone & Edge (x5):
– Cisco 6513: 24x10G (12 blocking) + 96x1G + 336x1G blocking (1G per 8 ports), or 48x10G (24 blocking) + 96x1G
– Cisco 6509: 64x10G (32 blocking)

Distribution (x5):
– Cisco 4900: 16x10G

Access (x70):
– Cisco 4948: 48x1G + 2x10G

> 13 km of copper cable & > 3 km of 10G fibre
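A quick tally of the access-layer cabling is consistent with the figures above. A minimal sketch — the switch and port counts come from the inventory, while the average copper run length is an assumption for illustration only:

```python
# Back-of-envelope check of the cabling figures quoted above.
# Switch/port counts are from the slide; the run length is assumed.

access_switches = 70           # Cisco 4948 (x70)
copper_ports_per_switch = 48   # 48x1G per access switch

copper_runs = access_switches * copper_ports_per_switch
print(copper_runs, "copper runs")              # 3360

# At an assumed average of ~4 m per run, that already exceeds 13 km.
print(copper_runs * 4 / 1000, "km of copper")  # 13.44 km
```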
Tremendous flows
LHCOPN links are not heavily used yet
[Traffic graphs: CERN-IN2P3-LHCOPN-001 and GRIDKA-IN2P3-LHCOPN-001]
But regular peaks of 30G are already seen on the LAN backbone

LAN

Other details
– Big devices preferred to a meshed bunch of small ones
– We avoid too much device diversity
• Eases management & spares
– No spanning tree, trunking is enough
• Redundancy only at the service level, when required
– Routing only in the backbone (EIGRP)
• 1 VLAN per rack (see the sketch after this list)

No internal firewalling
– ACLs on the border routers are sufficient
• Only on incoming traffic and per interface
– Preserves CPU
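A minimal sketch of what "1 VLAN per rack, routing only in the backbone" amounts to; the VLAN numbering and the address block are assumptions made for the example, not values from the presentation:

```python
# Illustrative per-rack VLAN/subnet plan: access and distribution switches
# stay pure layer 2 (trunked uplinks, no spanning tree), and the only
# layer-3 hop is the backbone router holding each rack's gateway.
# The VLAN base and the 10.10.0.0/16 block are assumed, not from the slides.
from ipaddress import IPv4Network

SITE_BLOCK = IPv4Network("10.10.0.0/16")  # assumed site block
BASE_VLAN = 100                           # assumed numbering scheme

def rack_plan(rack_count, prefixlen=24):
    """One VLAN and one subnet per rack, gateway on the backbone router."""
    subnets = SITE_BLOCK.subnets(new_prefix=prefixlen)
    plan = {}
    for rack, subnet in zip(range(1, rack_count + 1), subnets):
        plan[f"rack-{rack:02d}"] = {
            "vlan": BASE_VLAN + rack,
            "subnet": str(subnet),
            "gateway": str(next(subnet.hosts())),  # backbone-side interface
        }
    return plan

# 36 computing racks, as on the earlier LAN slide.
for name, info in list(rack_plan(36).items())[:3]:
    print(name, info)
```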
Monitoring

Home-made flavour of NetFlow
– EXTRA: External Traffic Analyzer
• http://lpsc.in2p3.fr/extra/
• But some scalability issues around 10G...

Cricket & Cacti + home-made tools
– ping & TCP tests + rendering (see the sketch below)

Several publicly shared
– http://netstat.in2p3.fr/
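A minimal sketch of the kind of home-made ping & TCP tests mentioned above; the target host and port are hypothetical placeholders, and the real tooling behind netstat.in2p3.fr is of course more elaborate:

```python
# Simple reachability probes: ICMP via the system ping, plus TCP connect time.
import socket
import subprocess
import time

TARGETS = [("data-server.example.in2p3.fr", 22)]  # hypothetical host/port

def ping(host, count=3):
    """True if the host answers ICMP echo (relies on the system 'ping')."""
    return subprocess.call(
        ["ping", "-c", str(count), "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ) == 0

def tcp_connect_ms(host, port, timeout=2.0):
    """TCP connect time in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

for host, port in TARGETS:
    print(host, "icmp ok:", ping(host), "tcp:", tcp_connect_ms(host, port), "ms")
```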
Ongoing (1/3)

WAN - RENATER
– Upcoming transparent L1 redundancy, Ciena-based
– 40G & 100G testbed
• The short FR-CCIN2P3 – CH-CERN path is a good candidate
Ongoing (2/3)

LAN
– Improving servers' connectivity
• 1G → 2x1G → 10G per server
• Starting with the most demanding storage servers
– 100G LAN backbone
• Investigating Nexus-based solutions (see the sketch below)
– 7018: 576x10G (worst case ~144 at wirespeed)
– From a flat to a star design, with Nx40G / Nx100G uplinks
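The wirespeed figure quoted for the 7018 option translates into a simple oversubscription ratio; a quick check, using only the port counts from the slide:

```python
# Wirespeed fraction of the Nexus 7018 option quoted above.
total_10g_ports = 576     # "576x10G"
wirespeed_ports = 144     # "worst case ~144 at wirespeed"

print(total_10g_ports / wirespeed_ports)                      # 4.0 -> ~4:1
print(wirespeed_ports * 10, "Gbit/s non-blocking capacity")   # 1440
```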
Ongoing (3/3)

A new computer room!
– 850 m² on two floors
• one floor for cooling, UPS, etc.
• one floor for computing devices
– Target: 3 MW (starting at 1 MW)
– Expected early 2011
Conclusion

WAN
– Excellent LHCOPN connectivity provided by RENATER
– Demand from T2s may be the next working area

LAN
– Linking capacity recently tripled
– Next step will be the core backbone upgrade