UltraLight: A Managed Network Infrastructure for HEP
Shawn McKee
University of Michigan
CHEP06, Mumbai, India
February 14, 2006
Introduction
The UltraLight Collaboration was formed to develop the network as a managed component of our infrastructure for High Energy Physics (see Rick Cavanaugh's overview from yesterday).
Given a funding level at 50% of our request, we had to reduce our scope while maintaining the core network infrastructure.
This talk will focus on the UltraLight network and
the activities of the UL network group to
enable effective use of 10 Gbps networks.
Overview
Overview of UltraLight Network Effort
Network Engineering Plans and Status
Monitoring
Kernel/End-host Development
Status and Summary
The UltraLight Network Engineering Team
Many individuals have worked on deploying and maintaining the UltraLight network:
S. McKee (UM, Team Leader)
S. Ravot (LHCNet)
D. Nae (LHCNet)
R. Summerhill (Abilene/HOPI)
D. Pokorney (FLR)
S. Gerstenberger (MiLR)
C. Griffin (UF)
S. Bradley (BNL)
J. Bigrow (BNL)
J. Ibarra (WHREN, AW)
C. Guok (ESnet)
L. Cottrell (SLAC)
C. Heerman (I2/HOPI)
D. Petravick (FNAL)
M. Crawford (FNAL)
R. Hockett (UM)
E. Rubi (FIU)
UltraLight Backbone
UltraLight has a non-standard core network, with dynamic links and varying bandwidth interconnecting our nodes: an optical hybrid global network.
The core of UltraLight evolves dynamically as a function of the resources available on other backbones such as NLR, HOPI, Abilene and ESnet.
The main resources for UltraLight:
LHCnet (IP, L2VPN, CCC)
Abilene (IP, L2VPN)
ESnet (IP, L2VPN)
UltraScienceNet (L2)
Cisco NLR wave (Ethernet)
Cisco Research Service (L3)
HOPI NLR waves (Ethernet; provisioned on demand)
UltraLight nodes: Caltech, SLAC, FNAL, UF, UM,
StarLight, CENIC PoP at LA, CERN, Seattle
UltraLight Network Infrastructure Elements
Trans-US 10G waves riding on NLR, plus CENIC, FLR, MiLR
LA – CHI (4 Waves): HOPI (2 Waves), USN, and Cisco Research
CHI – JAX (Florida Lambda Rail/NLR)
Dark Fiber Caltech – L.A.: 2 x 10G Waves (One to WAN in Lab);
10G Wave L.A. to Sunnyvale for UltraScience Net Connection
Dark Fiber with 10G Waves (2 Waves): StarLight – Fermilab
Dedicated Wave StarLight (1 + 2 Waves) – Michigan Light Rail
SLAC: ESnet MAN to Provide Links (from July):
One for Production, and One for Research
Partner with Advanced Research & Production Networks
LHCNet (Starlight- CERN), Abilene/HOPI, ESnet, NetherLight,
GLIF, UKLight, CA*net4
Intercontinental extensions: Brazil (CHEPREO/WHREN), GLORIAD,
Tokyo, AARNet, Taiwan, China
UltraLight Points-of-Presence
StarLight (Chicago)
HOPI (2 x 10GE), USNet (2 x 10GE), NLR (4 x 10GE)
UM (3 x 10GE), TeraGrid, ESnet, Abilene
FNAL, US-LHCNet (2 x 10GE), TIFR (Mumbai)
MANLAN (New York)
HOPI (2 x 10GE), US-LHCNet (2 x 10GE), BNL,
Buffalo (2 x 10GE), Cornell, Nevis
Seattle
GLORIAD, JGN2, Pwave, NLR (2 x 10GE)
CENIC (Los-Angeles)
HOPI (2 x 10GE), NLR (4 x 10GE)
Caltech (2 x 10GE), Pwave
Level3 (Sunnyvale)
USNet (2 x 10GE), NLR, SLAC
International Partners
One of the UltraLight program's strengths is its large number of important international partners.
UltraLight is thus well positioned to develop and coordinate global advances in networking for LHC physics.
UltraLight Global Services
Global Services support management and co-scheduling of multiple resource types with a strategic end-to-end view:
Provide strategic recovery mechanisms from system failures
Schedule decisions based on CPU, I/O, network capability and end-to-end task performance estimates, including loading effects
Constrained by local and global policies
Global Services consist of:
Network and System Resource Monitoring
Network Path Discovery and Construction Services
Policy-Based Job Planning Services
Task Execution Services
These types of services are required to deliver a managed network.
See “VINCI” talk on Thursday
UltraLight Network Engineering
GOAL: Determine an effective mix of bandwidth-management techniques for this application space, particularly:
Best-effort and “scavenger” using “effective” protocols
MPLS with QoS-enabled packet switching
Dedicated paths arranged with TL1 commands, GMPLS
PLAN: Develop and test the most cost-effective integrated combination of network technologies on our unique testbed:
1. Exercise UltraLight applications on NLR, Abilene and campus
networks, as well as LHCNet, and our international partners
2. Deploy and systematically study ultrascale protocol stacks
(such as FAST) addressing issues of performance & fairness
3. Use MPLS/QoS and other forms of BW management, to optimize
end-to-end performance among a set of virtualized disk servers
4. Address “end-to-end” issues, including monitoring and end-hosts
UltraLight: Effective Protocols
The protocols used to reliably move data are a critical
component of Physics “end-to-end” use of the network
TCP is the most widely used protocol for reliable data transport, but it becomes increasingly ineffective as the bandwidth-delay product of networks grows.
UltraLight is exploring extensions to TCP (HSTCP,
Westwood+, HTCP, FAST, MaxNet) designed to maintain
fair-sharing of networks and, at the same time, to allow
efficient, effective use of these networks.
We identified a need to provide an “UltraLight” kernel to
make protocol testing easy among the UltraLight sites.
UltraLight plans to identify the most effective fair
protocol and implement it in support of our “Best Effort”
network components.
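To make this concrete: on a stock Linux kernel the congestion-control algorithm can be selected per socket, which is how alternatives such as H-TCP or Westwood+ can be exercised once the corresponding kernel module is loaded (FAST itself requires the modified UltraLight kernel described later). A minimal sketch; the endpoint and the algorithm choice are illustrative, not part of the UltraLight configuration:

```python
import socket

# The algorithm must be available in the running kernel; see
# /proc/sys/net/ipv4/tcp_available_congestion_control for the list.
ALGORITHM = b"htcp"  # could also be b"westwood" or b"highspeed" (HSTCP)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, ALGORITHM)

# Read back which algorithm the kernel actually applied to this socket.
in_use = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("congestion control:", in_use.strip(b"\x00").decode())

sock.connect(("transfer.example.org", 5001))  # hypothetical test endpoint
```

Selecting the stack per socket rather than system-wide is what makes side-by-side fairness tests between, say, Reno and H-TCP flows practical on a shared testbed host.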
FAST Protocol Comparisons
[Figure: FAST vs. TCP Reno on a gigabit WAN. With small delay, FAST sustains roughly 5x higher utilization (95% for FAST vs. 19% for Reno and others); under random packet loss, FAST achieves roughly 10x higher throughput and is resilient to the loss.]
MPLS/QoS for UltraLight
UltraLight plans to explore the full range of end-to-end connections across the network, from best-effort, packet-switched service through dedicated end-to-end light-paths.
MPLS paths with QoS attributes fill a middle ground in
this network space and allow fine-grained allocation of
virtual pipes, sized to the needs of the application or
user.
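At the application edge, traffic is typically steered into such a QoS class by marking packets with a DSCP code point that the routers along the provisioned path match on. A minimal sketch of the host side, assuming the network has been configured to honor the marking; the endpoint and the choice of Expedited Forwarding are illustrative:

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte, so DSCP 46
# (Expedited Forwarding) is written as 46 << 2 = 0xB8.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Packets from this socket now carry the EF code point; routers with a
# matching QoS policy will place them in the prioritized virtual pipe.
sock.connect(("qos-test.example.org", 5001))  # hypothetical endpoint
```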
[Figure: TeraPaths SC|05 QoS data movement, BNL→UM]
UltraLight, in conjunction with the DoE/MICS-funded TeraPaths and OSCARS efforts, is working toward extensible solutions for implementing such capabilities in next-generation networks.
See "TeraPaths" talk in the previous session today
Optical Path Developments
Emerging “light path” technologies are arriving:
They can extend and augment existing grid computing
infrastructures, currently focused on CPU/storage, to include the
network as an integral Grid component.
These technologies appear to be the most effective way to offer on-demand network resource provisioning between end-systems.
We have developed a multi-agent system for secure light path
provisioning based on dynamic discovery of the topology in distributed
networks. [See VINCI talk on Thursday, Feb. 16]
We are working to further develop this distributed agent system and to provide integrated network services capable of efficiently using and coordinating shared, hybrid networks, improving performance and throughput for data-intensive grid applications.
This includes services able to dynamically configure routers and to
aggregate local traffic on dynamically created optical connections.
Monitoring for UltraLight
Real-time end-to-end network monitoring is essential for UltraLight: you can't manage what you can't see!
We need to understand our network infrastructure and
track its performance both historically and in real-time
to enable the network as a managed robust component
of our infrastructure.
MonALISA http://monalisa.cern.ch
IEPM http://www-iepm.slac.stanford.edu/bw/
We have a new effort to push monitoring to the “ends”
of the network: the hosts involved in providing
services or user workstations.
MonALISA UltraLight Repository
The UL repository: http://monalisa-ul.caltech.edu:8080/
Host Monitoring with UltraLight
Many "network" problems are actually end-host problems: misconfigured or underpowered end-systems.
[Diagram: monitored quantities include TCP settings, host/system information, and network device information]
The LISA application (see Iosif's talk later) was designed to monitor the end-host and its view of the network.
For SC|05 we developed a Perl script to gather the relevant
host details related to network performance and
integrated the script with ApMon (an API for MonALISA)
to allow us to “publish” this data to a MonALISA
repository.
System information, TCP configuration and network device setup were gathered and made accessible from a single site.
Future plans are to coordinate this with LISA and deploy it as part of OSG; the Tier-2 centers are a primary target. A sketch of the approach is shown below.
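The SC|05 script itself was Perl, but the idea is easy to sketch in Python: collect the TCP settings from /proc and publish them through ApMon, MonALISA's lightweight UDP reporting API. The repository address and cluster name below are placeholders, and the ApMon Python binding is assumed to be installed:

```python
import socket

params = {}

# TCP buffer limits: each /proc file holds "min default max" in bytes.
for name in ("tcp_rmem", "tcp_wmem"):
    with open("/proc/sys/net/ipv4/" + name) as f:
        params[name + "_max"] = int(f.read().split()[2])

with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
    params["congestion_control"] = f.read().strip()

# Publish to a MonALISA repository via ApMon's sendParameters() call.
try:
    from apmon import ApMon
    apm = ApMon(["monalisa-repository.example.org:8884"])  # placeholder
    apm.sendParameters("UltraLight-Hosts", socket.gethostname(), params)
except ImportError:
    print(params)  # no ApMon binding available: just report locally
```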
UltraLight Kernel
Kernels and the associated
device drivers are critical to the
achievable performance of
hardware and software.
The FAST protocol
implementation for Linux requires
a modified kernel to work.
We have learned to deal with many pitfalls in the configuration and varying versions of Linux kernels, particularly in how they impact a system's network performance.
We are currently working on a new version based on kernel 2.6.15.3, which will include FAST, NFSv4, and the newest drivers for 10GE NICs and RAID cards.
Because of the need to have FAST easily available, and the desire to achieve the best possible performance, we plan for all UltraLight "hosts" to install and utilize this kernel.
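Buffer limits are one of those pitfalls: the kernel's maximum socket buffers must cover the bandwidth-delay product of the path, or the TCP window can never open far enough to fill it. A back-of-the-envelope check, with an assumed (illustrative) trans-Atlantic round-trip time:

```python
# Bandwidth-delay product: the bytes that must be in flight to fill the pipe.
rate_bps = 10e9   # 10 Gbps path
rtt_s = 0.17      # ~170 ms CERN-Caltech RTT (assumed, for illustration)

bdp = rate_bps / 8 * rtt_s
print(f"BDP = {bdp / 2**20:.0f} MiB")  # ~203 MiB

# So the max fields of net.ipv4.tcp_rmem / tcp_wmem must be raised far
# above the few-hundred-kB defaults, e.g.:
#   sysctl -w net.ipv4.tcp_rmem="4096 87380 268435456"
#   sysctl -w net.ipv4.tcp_wmem="4096 65536 268435456"
```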
End-Systems Performance
Latest disk-to-disk over a 10 Gbps WAN: 4.3 Gbits/sec (536 MB/sec), using 8 TCP streams from CERN to Caltech; Windows, 1 TB file, 24 JBOD disks
Quad Opteron AMD 848 2.2 GHz processors with 3 AMD-8131 chipsets: 4 x 64-bit/133 MHz PCI-X slots
3 Supermicro Marvell SATA disk controllers + 24 SATA 7200 rpm disks
Local disk I/O: 9.6 Gbits/sec (1.2 GBytes/sec read/write, with <20% CPU utilization)
10 GE NIC: 9.3 Gbits/sec (memory-to-memory, with 52% CPU utilization, PCI-X 2.0, Caltech–StarLight)
2 x 10 GE NIC (802.3ad link aggregation): 11.1 Gbits/sec (memory-to-memory)
Open questions: Is PCI-Express needed? TCP offload engines? A 64-bit OS? Which architectures and hardware?
GOAL: Single server to server at 1 GByte/sec
Discussions are underway with 3Ware and Myricom to prototype viable servers capable of driving 10 GE networks in the WAN, targeting Spring 2006. The window arithmetic behind the multi-stream record is sketched below.
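The stream count in the disk-to-disk record follows from simple window arithmetic: a single TCP stream carries at most window/RTT, so splitting the transfer lets modest per-stream windows add up. Illustrative numbers (the RTT is assumed):

```python
# Per-stream throughput is bounded by window / RTT.
total_Bps = 536e6   # the 536 MB/s disk-to-disk record above
rtt_s = 0.17        # assumed CERN-Caltech RTT, for illustration
streams = 8

per_stream_Bps = total_Bps / streams      # 67 MB/s per stream
window_needed = per_stream_Bps * rtt_s    # bytes in flight per stream
print(f"window per stream: {window_needed / 2**20:.1f} MiB")  # ~10.9 MiB
```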
UltraLight Network Summary
The network technical group has been hard at work on
implementing UltraLight. Network and basic services are
deployed and operating.
Significant progress has been made in monitoring, kernels,
prototype services and disk-to-disk transfers in the WAN.
Our global collaborators are working with us on achieving
the UltraLight vision.
2006 will be a busy, critical (for the LHC) and productive year of work on UltraLight!