The UltraLight Program

UltraLight: An Overview and Update
Shawn McKee
University of Michigan
HENP SIG • Austin, TX • September 27th, 2004
UltraLight Topics
• Introduction: What is the UltraLight Program?
• History
• Program Goals and Details
• Current Status and Summary
What is UltraLight?
• UltraLight is a program to explore the integration of
cutting-edge network technology with the grid computing
and data infrastructure of HEP/Astronomy
• The program intends to explore network configurations ranging from
common shared infrastructure (current IP networks) through dedicated
point-to-point optical paths.
• A critical aspect of UltraLight is its integration with two
driving application domains in support of their national and
international eScience collaborations: LHC/HEP and eVLBI/Astronomy
• The Collaboration includes:
— Caltech
— Florida Int. Univ.
— MIT
— Univ. of Florida
— Univ. of Michigan
— UC Riverside
— BNL
— FNAL
— SLAC
— UCAID/Internet2
Some History…
• The UltraLight Collaboration was originally formed in
Spring 2003 in response to an NSF Experimental
Infrastructure in Networking (EIN) RFP in ANIR
• After not being selected, the program was refocused on
LHC/HEP and eVLBI/Astronomy and submitted to “Physics
at the Information Frontier” (PIF) in MPS at NSF
• The Collaboration was notified at the end of 2003 that the PIF
program was being postponed by one year; it was suggested that
proposals be redirected to the NSF ITR program.
• ITR Deadline was February 25th, 2004.
• We were selected for funding and officially started on
September 15, 2004!
HENP Network Roadmap
LHC Physics will require large bandwidth
capability over a globally distributed
network. The HENP Bandwidth Roadmap
is shown in the table below:
Table 1: Bandwidth Roadmap (Gbps) for Major HENP Network Links

Year | Production              | Experimental             | Remarks
2001 | 0.155                   | 0.622–2.5                | SONET/SDH
2002 | 0.622                   | 2.5                      | SONET/SDH; DWDM; GigE Integration
2003 | 2.5                     | 10                       | DWDM; 1 & 10 GigE Integration
2005 | 10                      | 2–4 × 10                 | λ Switch, λ Provisioning
2007 | 2–4 × 10                | ~10 × 10 (and 40)        | 1st Gen. λ Grids
2009 | ~10 × 10 (or 1–2 × 40)  | ~5 × 40 (or 20–50 × 10)  | 40 Gbps λ Switching
2011 | ~5 × 40 (or ~20 × 10)   | ~5 × 40 (or 100 × 10)    | 2nd Gen. λ Grids, Terabit networks
2013 | ~Terabit                | ~Multi-Terabit           | ~Fill one fiber
UltraLight Architecture
UltraLight envisions extending and
augmenting the existing grid computing
infrastructure (currently focused on
CPU/storage) to include the network as an
integral component.
A second aspect is strengthening and extending “end-to-end”
monitoring and planning.
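To make this concrete, here is a minimal sketch of what treating the
network as an integral grid component could mean for a scheduler. It
is purely illustrative and not part of the UltraLight design: the
site names, measured figures, and the PathMeasurement and
choose_replica_source names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class PathMeasurement:
    """End-to-end monitoring result for one source -> destination path."""
    src: str
    dst: str
    achievable_gbps: float  # measured achievable throughput
    rtt_ms: float           # measured round-trip time

def choose_replica_source(measurements, replica_sites, dst):
    """Pick the replica site with the best measured network path to dst.

    A conventional grid scheduler considers only CPU and storage; here
    the measured network path enters the decision as well.
    """
    candidates = [m for m in measurements
                  if m.dst == dst and m.src in replica_sites]
    if not candidates:
        raise ValueError("no monitored path to any replica site")
    return max(candidates, key=lambda m: m.achievable_gbps).src

# Hypothetical monitoring data, for illustration only.
monitoring = [
    PathMeasurement("Caltech", "Michigan", achievable_gbps=6.2, rtt_ms=58.0),
    PathMeasurement("FNAL", "Michigan", achievable_gbps=8.9, rtt_ms=7.5),
    PathMeasurement("CERN", "Michigan", achievable_gbps=3.1, rtt_ms=120.0),
]

print(choose_replica_source(monitoring, {"Caltech", "FNAL", "CERN"}, "Michigan"))
# prints: FNAL
```

The same end-to-end measurements would also feed the planning side:
a scheduler could defer a transfer when no monitored path currently
meets its requirements.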
Workplan and Phased Deployment
• UltraLight envisions a four-year program to deliver a new,
high-performance, network-integrated infrastructure:
• Phase I will last 12 months and focus on deploying the
initial network infrastructure and bringing up first services
• Phase II will last 18 months and concentrate on
implementing all the needed services and extending the
infrastructure to additional sites
• Phase III will complete UltraLight and last 18 months.
The focus will be on a transition to production in support of
LHC Physics and eVLBI Astronomy
UltraLight Network: PHASE I
• Implementation via
“sharing” with
HOPI/NLR
• MIT not yet “optically”
coupled
UltraLight Network: PHASE II
• Move toward multiple
“lambdas”
• Bring in BNL and MIT
UltraLight Network: PHASE III
• Move into production
• Optical switching fully
enabled amongst
primary sites
• Integrated international
infrastructure
UltraLight Network
• UltraLight is a hybrid packet- and circuit-switched network
infrastructure employing ultrascale protocols and dynamic building of
optical paths to provide efficient fair sharing on long-range
networks at up to 10 Gbps, while protecting the performance of
real-time streams and enabling them to coexist with massive data
transfers.
• Circuit switched: “Intelligent photonics” (using wavelengths
dynamically to construct and tear down wavelength paths rapidly and
on demand through cost-effective wavelength routing) is a natural
match for the peer-to-peer interactions required to meet the needs of
leading-edge, data-intensive science.
• Packet switched: Many applications can effectively utilize the
existing, cost-effective networks provided by shared packet-switched
infrastructure. A subset of applications require more stringent
guarantees than a best-effort network can provide, so we are planning
to utilize MPLS as an intermediate option.
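Purely as an illustration of the packet/circuit split described above
(the Flow class, the threshold, and the placement rule are invented
assumptions, not UltraLight's actual policy), a hybrid infrastructure
might steer each flow either onto a dynamically built lightpath or
onto the shared packet network:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    expected_gbytes: float  # expected transfer volume
    realtime: bool          # e.g. an eVLBI or interactive analysis stream

# Illustrative threshold: only very large transfers justify a circuit.
CIRCUIT_THRESHOLD_GBYTES = 500.0

def place_flow(flow: Flow) -> str:
    """Decide which part of the hybrid network a flow should use.

    Real-time streams stay on the shared packet-switched path (with
    MPLS/QoS protection), while massive transfers get a lightpath that
    is built on demand and torn down when the transfer completes.
    """
    if flow.realtime:
        return "packet path (MPLS/QoS protected)"
    if flow.expected_gbytes >= CIRCUIT_THRESHOLD_GBYTES:
        return "dedicated optical circuit (built on demand, torn down after)"
    return "best-effort packet path"

for f in (Flow("eVLBI stream", 50, realtime=True),
          Flow("LHC dataset replication", 20000, realtime=False),
          Flow("calibration files", 2, realtime=False)):
    print(f.name, "->", place_flow(f))
```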
UltraLight Optical Exchange Point
• Hybrid packet- and circuit-switched PoP
• L1, L2 and L3 services
• Interfaces
— 1GE and 10GE
— 10GE WAN-PHY (SONET friendly)
— Interface between packet- and circuit-switched networks
• Control plane is L3
• Locations: Los Angeles, Geneva, Chicago (in the future)
MPLS Topology
• Current network engineering
knowledge is insufficient to
predict what combination of
“best-effort” packet switching,
QoS-enabled packet switching,
MPLS and dedicated circuits will
be most effective in supporting
these applications.
• We will use MPLS and other
modes of bandwidth
management, along with
dynamic adjustments of optical
paths and their provisioning, in
order to develop the means to
optimize end-to-end
performance among a set of
virtualized disk servers, a
variety of real-time processes,
and other traffic flows.
MPLS deployment
• Compute a path from one given node to another such that the path
does not violate any constraints (bandwidth/administrative
requirements), as in the sketch below
• Ability to set the path the traffic will take through the network
(with simple configuration, management, and provisioning mechanisms)
— Take advantage of the multiplicity of waves/L2 channels across
the US (NLR, HOPI, Ultranet and Abilene/ESnet MPLS services)
• VPLS: a single broadcast domain for users who want to deploy their
own private L2 network
• EoMPLS will be used to build Layer 2 paths
• A natural step toward the deployment of GMPLS?
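The first bullet above amounts to constraint-based path computation.
Below is a rough sketch of a bandwidth-constrained shortest-path
search; the topology, link costs, and available-bandwidth figures are
invented for illustration and do not describe the actual UltraLight
footprint:

```python
import heapq

def constrained_shortest_path(graph, src, dst, min_gbps):
    """Dijkstra over links, skipping any link whose available
    bandwidth falls below the requested constraint.

    graph: {node: [(neighbor, cost, available_gbps), ...]}
    Returns the cheapest feasible path as a node list, or None.
    """
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, link_cost, gbps in graph.get(node, []):
            if gbps >= min_gbps and nbr not in visited:
                heapq.heappush(queue, (cost + link_cost, nbr, path + [nbr]))
    return None

# Hypothetical topology: costs and available bandwidth (Gbps) are invented.
topology = {
    "Caltech":  [("Chicago", 1, 10), ("LA", 1, 10)],
    "LA":       [("Chicago", 2, 2.5)],
    "Chicago":  [("Michigan", 1, 10), ("BNL", 1, 10)],
    "Michigan": [],
    "BNL":      [],
}

print(constrained_shortest_path(topology, "Caltech", "Michigan", min_gbps=5))
# prints: ['Caltech', 'Chicago', 'Michigan']  (the LA link is too small)
```

In a real deployment the same idea would be applied through MPLS
traffic engineering (and later GMPLS), with constraints drawn from
the administrative requirements mentioned above rather than a static
table.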
SC2004
Targets:
• 100 Gbps of aggregated throughput to the Caltech & SLAC/FNAL booths
• 1–2 GByte/s of disk-to-disk transfers
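As a back-of-the-envelope check (not from the talk itself), the two
targets can be related by a simple unit conversion: 1–2 GByte/s of
disk-to-disk throughput is roughly 8–16 Gbps on the wire, i.e. one to
two 10 Gbps waves, while the 100 Gbps aggregate is on the order of
ten such waves:

```python
# Disk-to-disk target expressed in network units (1 byte = 8 bits).
for gbyte_per_s in (1, 2):
    gbps = gbyte_per_s * 8
    print(f"{gbyte_per_s} GByte/s = {gbps} Gbps = {gbps / 10:.1f} x 10 Gbps waves")

# Aggregate target: 100 Gbps combined to the two booths,
# i.e. on the order of ten 10 Gbps waves.
print(int(100 / 10), "x 10 Gbps waves for the aggregate target")
```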
Summary and Status
• UltraLight promises to deliver the critical missing
component for future eScience: the integrated,
managed network
• We have a strong team in place, as well as a detailed
plan, to provide the needed infrastructure and
services for production use by LHC turn-on at the
end of 2007
• Currently we are ramping up to “turn on” UltraLight
• The SC2004 demo will help jumpstart UltraLight and
provide a glimpse of what we hope to enable
• We plan to augment the proposal through additional grants to
enable us to reach our goal of having UltraLight be a pervasive and
effective infrastructure for LHC physics
Questions? (or Answers?)