All-Optical Networks for Grids:
Dream or Reality?
Payam Torab
Lambda Optical Systems Corporation
September 28, 2005
www.lambdaopticalsystems.com
Grids – Tflops vs. Tbps
Emergence of Grids is the result of the synergism between communications and computing, just as cybernetic systems came out of the synergism between communications and control
Role of the network in Grids: to provide throughput
– Application-aware networks, or network-aware applications?
– Network providing services, or network as a service?
– Throughput is the theme unifying connectivity, delay and bandwidth
Balanced growth of networking and computing results in Grids
[Figure: conceptual plot of computing power (Tflops) against networking power (Tbps): clusters sit high on the computing axis, internets high on the networking axis, and Grids along the balanced-growth diagonal; example Grids include TeraGrid, NEESgrid, the North European Grid and SURFnet.]
Enabling Data-Intensive Grid Applications with Advanced Optical Technologies - 9/28/2005
Need for High Throughput
Throughput is a grid resource: uniform grid growth requires growth in throughput
Throughput growth requires improvement in bandwidth, delay and availability
Examples of throughput requirements
– GridFTP applications
– Large Hadron Collider (LHC) at CERN
Bandwidth roadmap (Gbps):

Year | Production             | Experimental           | Remarks
2001 | 0.155                  | 0.622-2.5              | SONET/SDH
2002 | 0.622                  | 2.5                    | SONET/SDH; DWDM; GigE Integ.
2003 | 2.5                    | 10                     | DWDM; 1 + 10 GigE Integration
2005 | 10                     | 2-4 X 10               | Switch; Provisioning
2007 | 2-4 X 10               | ~10 X 10; 40 Gbps      | 1st Gen. Grids
2009 | ~10 X 10 or 1-2 X 40   | ~5 X 40 or ~20-50 X 10 | 40 Gbps Switching
2011 | ~5 X 40 or ~20 X 10    | ~25 X 40 or ~100 X 10  | 2nd Gen Grids; Terabit Networks
2013 | ~Terabit               | ~MultiTbps             | ~Fill One Fiber

Source: Larry Smarr, "The Optiputer - Toward a Terabit LAN," The On*VECTOR Terabit LAN Workshop hosted by Calit2, University of California, San Diego, January 2005.

PHENIX experiment: used GridFTP to transfer 270 TB of data from Long Island, NY to Japan.
[Figure: the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Lab, Long Island, NY (600 Mbps peak, 250 Mbps average), with an OC-48 link to ESnet (ESnet outage?) and a transpacific 10 Gbps line to SINET in Japan.]
Source: www.cerncourier.com/main/article/45/7/15
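The RHIC/PHENIX numbers above can be sanity-checked with a little arithmetic. The sketch below assumes 1 TB = 10^12 bytes and a perfectly sustained rate equal to the quoted figures; real transfers see protocol overhead and outages, so these are lower bounds.

```python
def transfer_days(terabytes: float, rate_mbps: float) -> float:
    """Days needed to move `terabytes` of data at a sustained `rate_mbps`."""
    bits = terabytes * 1e12 * 8          # 1 TB = 10^12 bytes
    seconds = bits / (rate_mbps * 1e6)   # Mbps -> bits per second
    return seconds / 86400

# 270 TB from Brookhaven to Japan at the quoted RHIC rates
print(round(transfer_days(270, 250)))   # at the 250 Mbps average: ~100 days
print(round(transfer_days(270, 600)))   # at the 600 Mbps peak: ~42 days
```

A 100-day transfer at the average rate makes plain why the roadmap calls for multi-10-Gbps lambdas.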
Photonic Switching: Key to End-to-End Transparency
Evolution of the switching node:
– Electrical Cross-Connect (EXC): O-E-O; WDM and electrical switching; ~O(10^2) wavelengths
– Photonic Cross-Connect (PXC), separate WDM and optical switching: O-O-O; ~O(10^2) wavelengths at ~O(10^2) Gbps per wavelength
– PXC with integrated WDM and photonic switching: WDM + photonic switching
Full transparency
– End-to-end transparency
• Bitrate transparency (10 Gbps, 40 Gbps, …)
• Payload transparency (SONET, SDH, Ethernet, …)
– Transmission robustness
• Simplification or even elimination of windowing
• No packet loss due to congestion/buffer overrun
• Simpler transport protocols, higher throughput
From: “Development of a Large-scale 3D MEMS Optical
Switch Module,” T. Yamamoto, J. Yamaguchi and R.
Sawada, NTT Technical Review, Vol. 1, No. 7, Oct. 2003
Wavelength Switching Scalability
Grid-scale applications will ultimately press even wavelength switching. Example: the bandwidth roadmap (previous slide) requires too many optical ports to provide nonblocking connectivity!
Wavelength switching: 4 wavelengths over 4 hops require 32 optical ports (each wavelength is switched individually at every PXC along the path).
Waveband switching: 4 wavelengths over 4 hops require 8 optical ports (a waveband multiplexer at one end and a waveband demultiplexer at the other; the band crosses each PXC as one switched unit).
As with any other switching technology, aggregation is essential for the scalability of wavelength switching; hence the emergence of transparent multigranular (wavelength and waveband) switching architectures.
From: “A Graph Model for Dynamic Waveband Switching
in WDM Mesh Networks,” M. Li and B. Ramamurthy,
IEEE ICC 2004, Vol. 3, June 2004, pp. 1821-1825.
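One simple way to reproduce the port counts in the two examples above: under wavelength switching every wavelength occupies two ports at each intermediate PXC plus one add and one drop port at the endpoints, while under waveband switching the whole band crosses each node as a single switched unit. This sketch ignores the waveband multiplexer/demultiplexer ports at the endpoints; it is only meant to illustrate the aggregation argument, not the full analysis of the cited papers.

```python
def wavelength_switching_ports(num_wavelengths: int, hops: int) -> int:
    # Each wavelength: 2 ports at each of the (hops - 1) intermediate
    # nodes, plus 1 add port and 1 drop port at the endpoints.
    per_wavelength = 2 * (hops - 1) + 2
    return num_wavelengths * per_wavelength

def waveband_switching_ports(hops: int) -> int:
    # The whole band is switched as one unit, so the per-wavelength
    # count collapses to a single band circuit.
    return 2 * (hops - 1) + 2

# The slide's example: 4 wavelengths over 4 hops
print(wavelength_switching_ports(4, 4))  # 32 optical ports
print(waveband_switching_ports(4))       # 8 optical ports
```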
Waveband Switching Efficiency
Waveband switching efficiency: the relative saving in the total number of optical ports in a network when waveband switching is used instead of wavelength switching:

    e = 1 - nb / nw

where
nw = number of optical ports under wavelength switching
nb = number of optical ports under waveband switching
h = average number of physical hops in each waveband circuit
b = average number of wavelengths in a waveband
u = average waveband utilization (fraction of the wavelengths in a band actually used)

Waveband switching only becomes more efficient (more saving in optical ports) as more wavelength circuits are carried over longer paths.
Example: GridFTP using 4 parallel TCP streams over 4 x 40 Gbps circuits carried over 6 hops gives more than 0.1 Tbps of throughput over 6 hops using only 30 optical ports.
[Figure: surface plot of switching efficiency (%) against physical hops in the waveband path (h = 1..10) and waveband-switched circuits (bu = 0..4); efficiency ranges from about -100% to 60%; above the e = 0 boundary lies the waveband-switching-efficient region, and efficiency grows with increased waveband utilization and increased waveband path length.]
More on Waveband Switching Efficiency
Example: WDM WAN with ~80 nodes and ~140 links
– 615 circuits @ 40 Gbps ~ 2.5 Tbps: ~6200 ports with wavelength switching, ~5500 ports with waveband switching
– 998 circuits @ 40 Gbps ~ 40 Tbps: ~11800 ports with wavelength switching, ~9700 ports with waveband switching
– 2085 circuits @ 40 Gbps ~ 85 Tbps: ~28400 ports with wavelength switching, ~21800 ports with waveband switching
This simple analysis does not consider the extra scalability from the increase in bit rate (160 Gbps and beyond, OTDM).
More to appear in: P. Torab and V. Hutcheon, "Waveband switching efficiency in all-optical networks: analysis and case study," in preparation for OFC 2006.
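Applying the port-saving definition from the previous slide, e = 1 - nb/nw, to the three scenarios above shows the saving growing with network load. The port counts are taken from this slide; the percentages are back-of-the-envelope figures, not from the cited paper.

```python
def waveband_efficiency(nw: int, nb: int) -> float:
    """Relative port saving: e = 1 - nb/nw."""
    return 1.0 - nb / nw

scenarios = [
    ("~2.5 Tbps", 6200, 5500),
    ("~40 Tbps", 11800, 9700),
    ("~85 Tbps", 28400, 21800),
]
for label, nw, nb in scenarios:
    print(f"{label}: {100 * waveband_efficiency(nw, nb):.1f}% ports saved")
# The saving rises from roughly 11% to roughly 23% as throughput grows.
```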
[Figure: required optical ports (0-30000) against network throughput (20-100 Tbps), comparing wavelength switching and waveband switching at the 2.5, 40 and 80 Tbps operating points; transmission breakthroughs increase throughput without an increase in ports, and waveband switching gets more efficient as throughput grows. Inset: the waveband-switching-efficient region in the plane of physical hops in the waveband path (h = 1..10) versus waveband-switched circuits (bu = 0..4).]
Hierarchical Transparent Switching
Waveband switching adds another level of switching to the transparent switching hierarchy: multigranular switching enables logical WDM topologies.
[Figure: multigranular switching hierarchy (fiber > waveband > wavelength > IP/TDM) with waveband and wavelength cross-connects (XCs), waveband multiplexers/demultiplexers and wavelength interfaces at nodes A, B and C. One diagram shows a bandpath bp1 between nodes A and B spanning h physical hops as one logical hop and carrying two lightpaths with the same route; another shows bandpaths bp1 (h1 hops, A to B) and bp2 (h2 hops, B to C) carrying lightpaths lp1 and lp2 with partially overlapping routes and payload-transparent switching at the intermediate node.]
Several physical hops are lumped into one logical WDM link, requiring switching only at the link endpoints. The result is fast and still flexible dynamic wavelength service over a reduced number of hops.
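The hop-lumping idea can be made concrete: a bandpath over h physical hops appears as a single logical WDM link, so a lightpath riding it touches the wavelength layer only at the bandpath endpoints. A minimal sketch, with illustrative numbers of my own:

```python
def wavelength_layer_switching_points(physical_hops: int,
                                      use_bandpath: bool) -> int:
    """Nodes where a lightpath must be switched at the wavelength layer.

    Without a bandpath, every node along the route (hops + 1 nodes)
    switches the wavelength. Over a bandpath, the h physical hops
    collapse into one logical hop, leaving only the two endpoints.
    """
    if use_bandpath:
        return 2
    return physical_hops + 1

h = 5  # illustrative path length
print(wavelength_layer_switching_points(h, use_bandpath=False))  # 6 nodes
print(wavelength_layer_switching_points(h, use_bandpath=True))   # 2 endpoints
```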
Logical (Virtual) WDM
Combined wavelength and waveband switching allows dynamic configuration of transparent optical topologies supporting dynamic lambdas (from connection on-demand to topology on-demand).
Example: During the next 14 days, the computing facility at site A, the storage center at site B, and the visualization room at site C will participate in an experiment that will require multiple dynamic lambdas (e.g., setup timescale in seconds).
[Figure: logical WDM topology connecting Computing (A), Storage (B) and Visualization (C) through waveband connections; dynamic lambdas with fast setup and teardown ride over this topology.]
Lambda OpticalSystems Solutions
Dedicated to transparent switching technology
Addressing research community and carrier needs
Deployed at the U.S. Naval Research Lab (NRL) and StarLight

LambdaNode 200
– Transparent 64x64 full-duplex ports
– GMPLS, CLI and web interfaces
– 5.25 inches tall

LambdaNode 2000
– Integrated WDM and photonic switching
– Multigranular (waveband and wavelength) switching for maximum scalability
– GMPLS, CLI, TL1 and web interfaces
NL 101, NL 103 Demos at iGrid 2005
[Figure: demo network diagram spanning San Diego/UCSD (SAN), Chicago/StarLight (CHI) and Amsterdam/NL (AMS). AAA/DRAC instances at each site and a VMT controller sit above the data plane; a VMT visualization host and hosts nud05, nud06, vangogh 5 and vangogh 6 attach through E600/E1200 (or other L2) switches in VLAN 350 at iGRID A and iGRID B. A LambdaNode 200 connects via 2 x GbE circuits; HDXc and OME transport elements carry GbE/OC-192/STM-64 links over CENIC and Qwest (or other) wave services.]
Control Plane: Enabler of Transient Services
The Grid's balanced growth needs dynamic, on-demand, high network throughput. What do we need to provide high throughput?
1. Dynamism: make optimum use of all network resources for the task at hand
• Example: If 1.0 Tbps of throughput is needed between A and B for one hour, fill up the network with 25 x 40 Gbps connections and tear them down an hour later.
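The dynamism example is easy to turn into numbers. A minimal sketch, assuming 40 Gbps lambdas; the protected variant simply doubles the count, matching the "plan for twice the capacity" argument of 1+1 protection:

```python
import math

def lambdas_needed(target_tbps: float, lambda_gbps: float = 40.0,
                   protected: bool = False) -> int:
    """Number of lambdas to provision for a target throughput.

    With 1+1 protection every working lambda needs a dedicated
    backup, i.e. twice the capacity.
    """
    n = math.ceil(target_tbps * 1000 / lambda_gbps)
    return 2 * n if protected else n

print(lambdas_needed(1.0))                  # 25 x 40 Gbps, as in the example
print(lambdas_needed(1.0, protected=True))  # 50 with 1+1 protection
```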
2. Availability: the ability to maintain high throughput through fast recovery
• Network failures do happen, so high bandwidth alone does not guarantee high throughput
• In a transient-service environment, protection is not as expensive
– Telco thinking: 1+1 protection is expensive; I need to plan for twice the capacity, so I charge my customer twice as much (bronze service, silver service, platinum service, ...)
– Grid thinking: provide as much protection as your schedule allows; the connections will not be there in an hour. The more network resources, the more protected circuits.
• (Dynamic) restoration can also add to reliability when (dedicated) protection is unavailable
Key effort needed: integrating traditional service levels (1+1 protection, 1:N protection, shared mesh restoration, ...) into Grid services
– Can a GridFTP application ask for transfer over a 1+1 protected connection?
– Trade-off between application intelligence (replication, migration) and network recovery intelligence (protection, restoration)
– Where does the optimal performance stand?
Generalized Multiprotocol Label Switching (GMPLS)
An IP-based control-plane paradigm to control packet, time slot (TDM), wavelength, waveband and space (fiber) switching across multiple switching layers and across multiple domains.
Developed by the IETF CCAMP working group, with liaison work with the OIF and ITU-T.
A mature standard now (RFC 3945), with various extensions for different switching technologies (Layer 2, wavelength/waveband, SONET/SDH, ...).
Basic functionalities/protocols
– Neighbor discovery/link management (Link Management Protocol - LMP)
– Routing with traffic engineering extensions (OSPF-TE, ISIS-TE)
– Signaling (RSVP-TE with GMPLS extensions)
Applications/solutions
– Recovery (protection, restoration)
– Make-before-break
– Layer 1 VPN (L1VPN working group)
[Figure: RSVP-TE LSP setup across ingress node A, transit node B and egress node C. In standard setup, the PATH message travels A to B to C and the RESV message travels back C to B to A; each cross-connect is set upon receiving the RESV message, with an optional RESVCONF confirmation. In RFC 3473 bidirectional LSP setup, both cross-connects are set upon receiving the PATH message, giving a more efficient setup of the bidirectional data plane.]
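The difference between the two setup styles in the figure can be sketched as a toy model of when each node programs its cross-connect. This illustrates the message ordering only, not an RSVP-TE implementation; node names are hypothetical.

```python
def xc_program_order(nodes, rfc3473_bidirectional: bool):
    """Order in which nodes program cross-connects during LSP setup.

    Standard setup: the label arrives in the RESV message, which travels
    egress -> ingress, so cross-connects are set on RESV reception.
    RFC 3473 bidirectional setup: labels ride in the PATH message, so
    each node can set its cross-connects as PATH travels
    ingress -> egress -- one message pass sooner.
    """
    path_order = list(nodes)               # PATH: ingress -> egress
    if rfc3473_bidirectional:
        return path_order
    return list(reversed(path_order))      # RESV: egress -> ingress

route = ["A (ingress)", "B (transit)", "C (egress)"]
print(xc_program_order(route, rfc3473_bidirectional=True))
print(xc_program_order(route, rfc3473_bidirectional=False))
```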
Generalized Multiprotocol Label Switching (GMPLS)
New directions
– Separation of path computation as a service
– Attention to Ethernet as a Layer 2 transport
– Inter-domain traffic engineering
• Good work at NSF's DRAGON project: inter-domain circuit setup, path computation element (Network Aware Resource Broker - NARB)
• The next step is interoperability with other networks
[Figure: three autonomous systems (AS 1, AS 2, AS 3), each with a NARB, performing transport-layer capability set exchange to connect end systems across domains.]
Source: Jerry Sobieski, Tom Lehman, Bijan Jabbari, “Dynamic Resource
Allocation via GMPLS Optical Networks (DRAGON),” Presented to the
NASA Optical Network Technologies Workshop, August 8, 2004
Conclusions: Dream or Reality?
Key word for Grid networks is high throughput
Lambda Grids are the only way to keep up with throughput demand – Reality
When is access to dark fiber going to be cheap? – Dream
– Starting as islands of transparency
• Regional Optical Networks (RONs)
• Fiber sharing is critical; RONs have to have transparent access to each other
• Wavebands as highways between RONs (photonic access to a super-highway for RONs?)
– Islands growing as optical reach/transmission improves
• Digital wrapper, FEC
High throughput needs end-to-end transparency
– Data plane transparency
• WDM and photonic switching
– Control plane transparency
• Inter-domain end-to-end circuit setup
Availability and recovery are the new QoS for lambda grids
Ethernet will be the dominant end-to-end payload (HOPI node)
– Transparent networks are ready for payload change