The USLHCNet Project


US LHCNet Update
Dan Nae
California Institute of Technology
US LHCNet + ESnet (2007)

[Map: the 2007 US LHCNet + ESnet topology, showing ESnet hubs across the US, the BNL and FNAL Tier1 sites, and the Amsterdam and CERN endpoints. Legend: ESnet4 Science Data Network core, 20-30-40-50 Gbps circuit-based transport; production IP ESnet core, ≥10 Gbps, carrying 10 Gbps enterprise IP traffic; US LHCNet Data Network, 3 x 10 Gbps to the US; Metropolitan Area Rings; major DOE Office of Science sites; major international connections to ESnet hubs in New York and Chicago at 20-30-40 Gb/s.]

LHCNet Data Network
• NSF/IRNC circuit; GVA-AMS connection via SURFnet or GEANT2
• Redundant "light-paths" to BNL and FNAL
• Redundant 10 Gbps peering with Abilene
• Access to USNet/HOPI for R&D
Multiple Fiber Paths: Reliability Through Diversity

[Map: transatlantic fiber routes from GVA-CERN via Paris, Frankfurt and London over the AC-1, AC-2 and VSNL cable systems to AMS-SARA, NYC-MANLAN (111 8th Ave., 60 Hudson St.), Brookhaven/Bellport and CHI-Starlight, with diverse landing points (Whitesands, Highbridge, Bude, Pottington (UK)).]

Four providers on both the west and east segments: Colt, Qwest, Global Crossing, GEANT

LCG availability requirement: 99.95%
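To put the 99.95% target in perspective, a quick back-of-the-envelope check (not from the slides): the allowed downtime is

(1 − 0.9995) × 8766 h/year ≈ 4.4 hours per year, or about 22 minutes per month,

far too little to ride out a transatlantic fiber cut unless an alternate path is already provisioned. Hence the emphasis on diversity.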
Additional Slides
Equipment Diversity

US LHCNet, ESnet and the two US Tier1s (FNAL and BNL) are working to achieve complete equipment diversity for the primary and backup paths. In the current setup there is still a single point of failure for the CERN-FNAL traffic.
Equipment Diversity (cont.)

• The new setup allows for independent paths and can survive the failure of any single piece of equipment
• A great advantage in case of hardware or software maintenance
• Similar setup for the CERN-BNL connection
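Whether a setup really has no single point of failure can be checked mechanically. Below is a minimal sketch (the node names are invented placeholders, not the actual US LHCNet topology): remove each piece of equipment in turn and test whether CERN can still reach FNAL.

```python
# Minimal single-point-of-failure check (placeholder topology, not the
# real US LHCNet one): remove each intermediate node in turn and test
# whether the endpoints can still reach each other.
from itertools import chain

# Hypothetical primary + backup paths between CERN and FNAL.
links = {
    ("CERN", "AMS"), ("AMS", "NYC"), ("NYC", "CHI"), ("CHI", "FNAL"),   # primary
    ("CERN", "NYC-alt"), ("NYC-alt", "CHI-alt"), ("CHI-alt", "FNAL"),   # backup
}

def reachable(src, dst, dead=None):
    """Depth-first search from src to dst, skipping the 'dead' node."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {src}, [src]
    while stack:
        node = stack.pop()
        for nxt in adj.get(node, ()):
            if nxt != dead and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return dst in seen

nodes = set(chain.from_iterable(links)) - {"CERN", "FNAL"}
spofs = [n for n in sorted(nodes) if not reachable("CERN", "FNAL", dead=n)]
print("single points of failure:", spofs or "none")
```

With fully independent primary and backup paths, as above, the check reports none; the pre-2008 setup would flag the shared device carrying the CERN-FNAL traffic.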
Ciena Transitions (Today)

[Diagram: GEANT-ESnet peering, FNAL-GridKa and BNL-GridKa circuits.]

• Two parallel networks, one Force10-based and one Ciena-based
• Today the main links (CERN-BNL, CERN-FNAL) go over the Force10s (proven reliability, stable configuration)
• Circuit-oriented services are being developed on the Cienas
Planned Configuration (2008)

Emerging standards: VCAT, LCAS

Robust fallback at layer 1 plus a next-generation hybrid optical network: dynamic circuit-oriented network services with bandwidth guarantees
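VCAT (virtual concatenation) bundles several SONET/SDH containers into one logical link, and LCAS (link capacity adjustment scheme) lets members be added or removed while the link is in service. A toy model of the idea (illustrative only; the class and rates are invented, not vendor code):

```python
# Toy model (not vendor code): a VCAT group is a bundle of fixed-rate
# SONET/SDH members; LCAS lets members join or leave in service, so the
# capacity changes without tearing the logical link down.
class VcatGroup:
    MEMBER_GBPS = 10.0  # e.g. one OC-192/STM-64 member

    def __init__(self, members=1):
        self.members = members

    @property
    def capacity_gbps(self):
        return self.members * self.MEMBER_GBPS

    def lcas_add(self, n=1):
        """Hitless capacity increase: traffic keeps flowing while members join."""
        self.members += n

    def lcas_remove(self, n=1):
        """Hitless decrease (or reaction to a failed member)."""
        self.members = max(0, self.members - n)

link = VcatGroup(members=3)   # 3 x 10G SDH, as in the forecast below
print(link.capacity_gbps)     # 30.0
link.lcas_add()               # grow to 4 x 10G SDH
print(link.capacity_gbps)     # 40.0
```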
Ciena "Mesh Restoration" of a Circuit

Provisioned circuits over a failed SONET link can be re-routed according to priorities and can preempt lower-priority circuits. Fallback is automatic and very fast (<50 ms once the failure is detected).
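A minimal sketch of the priority logic just described (illustrative only, not Ciena's implementation): when a link fails, the affected circuits are restored in priority order, preempting lower-priority circuits on the surviving path when capacity runs short.

```python
# Illustrative restoration logic (not Ciena's implementation): circuits on
# a failed link are re-homed in priority order; a high-priority circuit may
# preempt lower-priority ones to free capacity on the alternate path.
def restore(failed_circuits, alt_path):
    """failed_circuits: dicts with 'name', 'priority' (0 = highest), 'gbps'.
    alt_path: dict with 'free_gbps' and the 'circuits' already riding it."""
    for c in sorted(failed_circuits, key=lambda c: c["priority"]):
        # Preempt the lowest-priority circuits first, but never anything of
        # equal or higher priority than the circuit being restored.
        victims = sorted(alt_path["circuits"], key=lambda v: -v["priority"])
        while alt_path["free_gbps"] < c["gbps"] and victims and \
                victims[0]["priority"] > c["priority"]:
            v = victims.pop(0)
            alt_path["circuits"].remove(v)
            alt_path["free_gbps"] += v["gbps"]
            print(f"preempted {v['name']}")
        if alt_path["free_gbps"] >= c["gbps"]:
            alt_path["circuits"].append(c)
            alt_path["free_gbps"] -= c["gbps"]
            print(f"restored {c['name']}")
        else:
            print(f"could not restore {c['name']}")

restore(
    [{"name": "CERN-FNAL", "priority": 0, "gbps": 10}],
    {"free_gbps": 0,
     "circuits": [{"name": "test-circuit", "priority": 5, "gbps": 10}]},
)
```

In the real equipment this decision runs in the control plane and completes within the quoted <50 ms once the failure is detected.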
Network Forecast

[Diagram: planned 2009 and 2010 configurations for the Geneva, Amsterdam, New York and Chicago PoPs: n x 10G SDH circuits between the PoPs (e.g. 3 x 10G SDH growing to 4 x 10G SDH), 10GBE fan-out at the sites (8 x 10GBE growing to 10 x 10GBE at Geneva, plus 2 x 10GBE and 4 x 10GBE elsewhere), and connections to circuit-enabled regional networks.]
Ports at Each PoP

[Tables: 10 GbE and OC-192/STM-64 port counts at each PoP (GVA, AMS, NYC, CHI) for the two forecast years, with links to the ESnet SDN. Hardware notes: two-port 10 GbE card; double-density cards and matrix. Open question: 40 Gbps wavelengths?]
LHCNet connections to ESnet: FY09/FY10

[Map: LHCNet and ESnet waves across the US (Seattle, Sunnyvale, LA, San Diego, Denver, Albuquerque, El Paso, Las Cruces, Tulsa, Dallas, San Antonio, Houston, Baton Rouge, Pensacola, Jacksonville, Atlanta, Raleigh, Washington DC, Pittsburgh, Cleveland, Kansas City, Boise, Phoenix, Chicago, New York), with CERN (Geneva) attached. Legend: NLR regeneration / OADM sites; NLR wavegear sites; ESnet via NLR (10 Gbps waves); LHCNet (10 Gbps waves).]

LHCNet: to ~80 Gbps by 2009-10; routing plus dynamic managed circuit provisioning
Network Monitoring

• MonALISA (TL1 module; see the sketch after this list)
• Spectrum (CERN first line)
• Various open-source tools (cricket, nagios, rancid, syslog-ng, etc.)
• perfSONAR (GEANT E2ECU)
• True end-to-end (host-to-host) monitoring using MonALISA
• "Network intelligence", i.e. the ability to reconfigure the circuits based on performance, changing network conditions or high-priority scheduled transfers
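For context, TL1 is the line-oriented text protocol the SONET/SDH equipment speaks. Below is a minimal sketch of the kind of alarm polling such a module performs (illustrative only, not the actual MonALISA TL1 module; host, port, credentials and target identifier are placeholders):

```python
# Minimal TL1 polling sketch (illustrative; not the MonALISA TL1 module).
# TL1 is a line-oriented text protocol spoken by SONET/SDH gear; the host,
# credentials and TID below are placeholders.
import socket

HOST, PORT = "ciena-pop.example.net", 3083   # placeholder management address
TID = "NYC-CIENA-1"                          # placeholder target identifier

def tl1_command(sock, cmd):
    """Send one TL1 command and read until the terminating ';'."""
    sock.sendall(cmd.encode("ascii"))
    buf = b""
    while not buf.rstrip().endswith(b";"):
        chunk = sock.recv(4096)
        if not chunk:
            break
        buf += chunk
    return buf.decode("ascii", errors="replace")

with socket.create_connection((HOST, PORT), timeout=10) as s:
    # ACT-USER logs in; RTRV-ALM-ALL retrieves all standing alarms.
    print(tl1_command(s, f"ACT-USER:{TID}:monitor:1::secret;"))
    print(tl1_command(s, f"RTRV-ALM-ALL:{TID}::2;"))
```

In production, results like these feed the "network intelligence" layer that decides when circuits should be reconfigured.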
US LHCNet Working Methods

Production Network
• High performance, high bandwidth, reliable network
• D0, CDF, BaBar, CMS, ATLAS
• HEP & DoE roadmaps
• PPDG/iVDGL, OSG, WLCG, DISUN
• LHCOPN

Pre-Production
• N x 10 Gbps transatlantic testbed
• GRID applications
• New data transport protocols
• Interface and kernel settings
• HOPI / UltraScience Net
• UltraLight / CHEPREO / LambdaStation
• VRVS/EVO

Networks for Research and Development
• Develop and build next-generation networks
• Testbed for Grid interconnection of US and EU Grid domains
• Lightpath technologies
• Vendor partnerships

http://ultralight.caltech.edu
Four Continent Testbed and Facility

Caltech, Florida, FIU, UMich, SLAC, FNAL, CERN, Internet2, NLR, UERJ (Rio), USP, CENIC, Starlight, Cisco

Building a global, network-aware, end-to-end managed real-time Grid
Network Services for Managed End-to-End Data Transfers

Robust network services based on (see the scheduling sketch after this list):
• Bandwidth guarantees
• Virtual circuits
• Scheduled transfers
• Transfer classes
• Priorities
• Monitoring of all components end-to-end
  - Network elements
  - End-hosts
• Interface to other circuit-oriented systems
  - Be part of a heterogeneous end-to-end infrastructure
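A minimal sketch of a fixed-bandwidth circuit scheduler of the kind these services (and the milestones below) call for (illustrative only; names, rates and windows are invented, not the production US LHCNet scheduler):

```python
# Illustrative fixed-bandwidth transfer scheduler (invented names/rates,
# not the production US LHCNet scheduler): each request asks for a
# guaranteed rate over a time window; it is admitted only if the circuit's
# capacity is never oversubscribed anywhere in that window.
from dataclasses import dataclass, field

@dataclass
class Scheduler:
    capacity_gbps: float
    booked: list = field(default_factory=list)  # (start, end, gbps, name)

    def load(self, t):
        """Total guaranteed bandwidth in use at time t."""
        return sum(g for s, e, g, _ in self.booked if s <= t < e)

    def request(self, name, start, end, gbps):
        """Admit if capacity holds at every load-change point in the window."""
        points = {start} | {s for s, *_ in self.booked} | \
                 {e for _, e, *_ in self.booked}
        if all(self.load(t) + gbps <= self.capacity_gbps
               for t in points if start <= t < end):
            self.booked.append((start, end, gbps, name))
            return True
        return False

circuit = Scheduler(capacity_gbps=10)
print(circuit.request("FNAL dataset", 0, 4, 7))   # True: fits
print(circuit.request("BNL dataset", 2, 6, 7))    # False: would exceed 10G
print(circuit.request("BNL dataset", 4, 8, 7))    # True: after the first
```

Checking load only at window boundaries is enough here because the guaranteed load is piecewise constant, changing only where a booking starts or ends.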
Problem Finding and Resolution
• Problems encountered today are hard to track because no single party has a global view of the system
• Example situation: the system recognizes an end-host problem during the transfer and takes mitigating actions, re-scheduling transfers and notifying operators

End-to-end Monitored Managed Transfers
• Track the problem to its source (network or end-host) and take the appropriate action (a toy sketch follows below):
  - Change the transfer path
  - Adjust end-host parameters
  - Re-schedule the transfer
• Provide experts with relevant (real-time) information
• Keep the user/application up-to-date on transfer progress
• Progressive automation: target optimal resource utilization
• Developed in the field-proven MonALISA framework
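A toy sketch of that decision step (invented names; the real system is built on MonALISA services, not this code): classify the fault as network- or end-host-side, then pick a mitigation.

```python
# Toy decision logic for the mitigation step above (invented names; the
# real system is built on MonALISA services, not this code).
def mitigate(problem, transfer):
    """problem: {'where': 'network'|'end-host', 'detail': str}."""
    if problem["where"] == "network":
        transfer["path"] = "backup"          # change the transfer path
        return f"rerouted {transfer['name']} to the backup path"
    if problem["where"] == "end-host":
        if problem["detail"] == "tcp-tuning":
            return f"adjusted end-host parameters for {transfer['name']}"
        transfer["window"] = "next-slot"     # re-schedule the transfer
        return f"re-scheduled {transfer['name']}; operators notified"
    return "escalated to experts with real-time monitoring data"

print(mitigate({"where": "end-host", "detail": "disk"},
               {"name": "CMS dataset 42"}))
```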
GLIF
Concept
Tom Lehman, GLIF 2007 Winter Workshop
http://www.glif.is/meetings/2007/winter/controlplane/lehman-dynamic-services.pdf
US LHCNet Milestones

• "Pre-production": The new infrastructure, initially deployed in 2007, will offer circuit-based services intended to provide redundant paths and on-demand, high-bandwidth, end-to-end dedicated circuits. Circuit-switched services will be used to directly interconnect the DOE laboratories to CERN and will be available on demand to policy-driven, data-intensive applications, managed by MonALISA services.
• End of 2007: initial deployment of our circuit-oriented network services on US LHCNet; a simple scheduler with fixed-bandwidth circuits for site-to-site on-demand data set transfers.
• Spring 2008: interaction with the data transfer applications of the experiments, as well as with other intra-domain and inter-domain control plane services (LambdaStation, TeraPaths, DRAGON, OSCARS), in order to provide end-to-end path reservation.
• LHC startup, July 2008: we will begin to exercise the network and services with real data, in close cooperation with the LHC experiments.