OptIPuter Backplane: Architecture, Research Plan,
Implementation Plan
Joe Mambretti, Director, ([email protected])
International Center for Advanced Internet Research (www.icair.org)
Director, Metropolitan Research and Education Network (www.mren.org)
Partner, StarLight/STAR TAP, PI-OMNINet (www.icair.org/omninet)
OptIPuter Backplane Workshop
OptIPuter AHM
CalIT2
January 17, 2006
LambdaGrid Control Plane Paradigm Shift

Traditional Provider Services:
• Invisible, Static Resources, Centralized Management
• Invisible Nodes, Elements
• Hierarchical, Centrally Controlled, Fairly Static
• Limited Functionality, Flexibility

New Paradigm:
• Distributed Devices, Dynamic Services
• Visible & Accessible Resources, Integrated As Required By Apps
• Unlimited Functionality, Flexibility

Ref: OptIPuter Backplane Project, UCLP
OptIPuter Architecture
Joint Project w/ UCSD, EVL, UIC
ODIN: Signalling, Control, Management Techniques
Source: Andrew Chien, UCSD, OptIPuter Software Architect
Optical Control Plane

[Figure: A client-layer control plane sits above an optical-layer control plane. A client controller signals an optical-layer controller over a UNI (with an I-UNI variant); optical-layer controllers interconnect over CI interfaces. Beneath them, client devices in the client-layer traffic plane attach to the optical-layer switched traffic plane.]
Network Side Interface
• WS BPEL
• APIs
• GMPLS as a uniform control plane
• SNMP, with extensions, as the basis of a management plane
• Extended MIB capabilities
  – L3
  – IEEE L2 MIB Developments
  – MIB Integration with higher layer functionality
• Monitoring, Analysis, Reporting
L2 10 GE
• 10 GE Node Compute Clusters
• APIs
• 10 GE NICs
• 10 Gbps Switch on a Chip
• Currently, Low Cost Devices, Low Per Port Cost
• 240 GE
• SNMP
• Standard Services
  – Spanning Tree
  – vLANs
  – Priority Queuing
• IEEE Enhancing Scalability
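The Spanning Tree service listed above can be illustrated with a minimal sketch of 802.1D root-bridge election, where the bridge with the numerically lowest bridge ID (priority first, MAC address as tiebreaker) becomes the root. The bridge priorities and MAC addresses here are invented for illustration.

```python
def elect_root(bridges):
    """bridges: list of (priority, mac) tuples; returns the winning bridge ID.

    Tuple comparison is lexicographic, so priority is compared first and
    the MAC address breaks ties -- exactly the 802.1D ordering rule.
    """
    return min(bridges)

lan = [
    (32768, "00:1b:2c:aa:00:01"),
    (4096,  "00:1b:2c:aa:00:02"),   # lowest priority -> becomes root
    (32768, "00:1b:2c:aa:00:03"),
]
root = elect_root(lan)
print(root)  # (4096, '00:1b:2c:aa:00:02')
```

Lowering a bridge's priority field is the standard way an operator forces root placement; the MAC tiebreaker only matters between bridges left at the default priority.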
IEEE L2 Scaling Enhancements
• Current Lack of Hierarchy
• IEEE Developing Hierarchical Architecture
• Network Partitioning (802.1q, vLAN tagging)
• Multiple Spanning Trees (802.1s)
• Segmentation (802.1ad, “Provider Bridges”)
• Enables Subnets To Be Characterized Differently Than Core
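As a concrete illustration of the 802.1q vLAN tagging mentioned above, here is a minimal sketch of how the 4-byte 802.1Q tag is built: a TPID of 0x8100 followed by the TCI, which packs PCP (3 bits), DEI (1 bit), and the 12-bit VID. The VID and priority values are illustrative.

```python
import struct

def dot1q_tag(vid, pcp=0, dei=0):
    """Build the 4-byte 802.1Q tag inserted into an Ethernet header."""
    if not 0 <= vid <= 4095:
        raise ValueError("VID is a 12-bit field")
    # TCI layout: PCP in bits 15-13, DEI in bit 12, VID in bits 11-0.
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)  # network byte order

tag = dot1q_tag(vid=100, pcp=5)
print(tag.hex())  # 8100a064
```

The 12-bit VID is exactly the scaling limit the 802.1ad "Provider Bridges" work addresses: stacking a second (service) tag in front of the customer tag lifts the 4094-VLAN ceiling.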
IETF – Architecture for Closer Integration With Ethernet
– GMPLS As Uniform Control Plane
– Generalized UNI for Subnets
– Link State Routing In Control Plane
– TTL Capability in Data Plane
– Pseudo-Wire Capabilities
L2 Services Enhancements
• Metro Ethernet Forum
• Three Primary Technical Specifications/Standards
– Ethernet Services Model (ESM)
• Ethernet Service Attributes (Core “Building Blocks”)
• Architectural Framework For Creating an Ethernet Service
• No Specific Service – Any Potential Service
– Ethernet Services Definitions (ESD)
• Guidelines for Using ESM Components for Service Development
• Provides Example Service Types and Variations of Types
– Ethernet Line (E-Line)
– Ethernet LAN (E-LAN)
– Ethernet Traffic Management (ETM)
• Implications for Operations, Traffic Management, Performance, e.g., Managing Services vs. Pipes
• Quality of Service Agreements, Guarantees
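The ESM idea above — services assembled from attribute "building blocks" rather than from a fixed menu, with E-Line and E-LAN as example types — can be sketched as follows. The class and field names are simplifications for illustration, not the MEF-defined attribute names.

```python
from dataclasses import dataclass

@dataclass
class EthernetService:
    """A service composed from attribute building blocks (ESM style)."""
    name: str
    connection_type: str   # "point-to-point" or "multipoint"
    cir_mbps: int          # committed information rate (a bandwidth profile)
    vlan_preserved: bool   # whether customer VLAN tags pass through

def e_line(name, cir_mbps):
    # E-Line: the point-to-point service type, built from the same blocks.
    return EthernetService(name, "point-to-point", cir_mbps, True)

def e_lan(name, cir_mbps):
    # E-LAN: the multipoint service type, same blocks, different attribute.
    return EthernetService(name, "multipoint", cir_mbps, True)

svc = e_line("lab-interconnect", cir_mbps=1000)
print(svc.connection_type)  # point-to-point
```

The point is that E-Line and E-LAN differ only in attribute values, which is why the ESM defines "no specific service — any potential service."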
L1 10 Gbps
• 10 GE Node Compute Clusters
• APIs
• Automated Switch Panels
• GMPLS
• IETF GMPLS UNI (vs ONI UNI, Implications for Restoration Reliability)
• 10 G Ports
• MEMs Based
– Services
• Lightpaths with Attributes, Uni-directional, Bi-directional
• Highly Secure Paths
• OVPN
• Optical Multicast
• Protected Through Associated Groups
• ITU-T SG Generic VPN Architecture (Y.1311), Service Requirements (Y.1312), L1 VPN Architecture (Y.1313)
[Figure: ODIN server — high-performance applications (HP-PPFS, HP-APP2, HP-APP3, HP-APP4, each with a VS) connect over TCP to the ODIN Server, which creates/deletes lightpaths, answers status inquiries, and runs a discovery/resource manager covering link groups and addresses. Previously OGSA/OGSI, soon OGSA/OASIS WSRF.]
Lambda Routing:
Topology discovery, DB of physical links
Create new path, optimize path selection
Traffic engineering
Constraint-based routing
O-UNI interworking and control integration
Path selection, protection/restoration tool - GMPLS
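The lambda-routing steps listed above can be sketched end to end: a topology DB of physical links, shortest-path selection over it, and first-fit wavelength assignment (with the wavelength-continuity constraint) standing in for constraint-based routing. Node names, link weights, and the lambda count are illustrative, not OMNInet's actual values.

```python
import heapq

def shortest_path(links, src, dst):
    """links: {(a, b): cost}, undirected; returns the cheapest node path."""
    adj = {}
    for (a, b), cost in links.items():
        adj.setdefault(a, []).append((b, cost))
        adj.setdefault(b, []).append((a, cost))
    heap, seen = [(0, src, [src])], set()
    while heap:  # Dijkstra over the topology DB
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

def first_fit_lambda(path, in_use, num_lambdas=4):
    """Lowest wavelength free on every hop (wavelength continuity)."""
    hops = list(zip(path, path[1:]))
    for lam in range(num_lambdas):
        if all(lam not in in_use.get(frozenset(h), set()) for h in hops):
            return lam
    return None  # path is blocked at this load

topology = {("EVL", "StarLight"): 1, ("StarLight", "NU"): 1, ("EVL", "NU"): 3}
path = shortest_path(topology, "EVL", "NU")
print(path)  # ['EVL', 'StarLight', 'NU']
lam = first_fit_lambda(path, {frozenset(("EVL", "StarLight")): {0}})
print(lam)  # 1
```

A real control plane would fold protection/restoration constraints into the path search (as the GMPLS tool does) rather than assigning wavelengths as a separate pass, but the two-step version makes the continuity constraint easy to see.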
[Figure: ODIN server internals — over TCP, the server applies access policy (AAA) and process registration, and drives GMPLS tools: LP signaling for the I-NNI, attribute designation (e.g., uni-/bi-directional), LP labeling, and link group designations, alongside process instantiation and monitoring backed by a ConfDB. A System Manager covers discovery, config, communication, interlinking, stop/start of modules, resource balancing, and interface adjustments. An OSM and UNI-N front the data plane, where physical processing, monitoring, and adjustment span the resources: control channel monitoring, physical fault detection, isolation, adjustment, connection validation, etc.]
The OptIPuter LambdaGrid

[Map: OptIPuter LambdaGrid sites and interconnects — UIC/EVL and Northwestern (NU) via StarLight in Chicago; PNWGP/Pacific Wave in Seattle; the University of Amsterdam (UoA) via NetherLight in Amsterdam; CERN; NASA Ames, JPL, and Goddard; ISI and UCI via the CENIC Los Angeles GigaPOP; UCSD via CalREN-XD, Level 3, and the CENIC San Diego GigaPOP; wide-area links over NLR.]
OMNInet Network Configuration 2006

[Diagram: Four photonic nodes — at 1890 W. Taylor, 710 Lake Shore, 750 North, and 600 S. Federal — each carrying λ1–λ4 and pairing an Optera 5200 10Gb/s TSPR and Optera Metro 5200 OFA with a PP 8600 (10/100/GigE plus 10 GE). OM5200s serve TECH/NU-E (with the DOT clusters), LAC/UIC, and EVL/UIC over campus fiber runs of 4 and 16 strands. Nodes interconnect over fiber links NWUEN-1 through NWUEN-9; StarLight interconnects with other research networks; a 10GE LAN PHY link was added Dec 03. Initial config: 10 lambdas (all GigE). The photonic switch is an 8x8x8λ scalable design with 10 G WDM on the trunk side and OFAs on all trunks.]
| NWUEN Link | Span Length (km) | Span Length (mi) |
| --- | --- | --- |
| 1* | 35.3 | 22.0 |
| 2  | 10.3 | 6.4  |
| 3* | 12.4 | 7.7  |
| 4  | 7.2  | 4.5  |
| 5  | 24.1 | 15.0 |
| 6  | 24.1 | 15.0 |
| 7* | 24.9 | 15.5 |
| 8  | 6.7  | 4.2  |
| 9  | 5.3  | 3.3  |
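As a quick sanity check on the span-length figures above, each mile value should equal the kilometre value times roughly 0.6214, within the table's rounding (the link-1 entry of 22.0 mi is rounded slightly high from 21.9).

```python
MI_PER_KM = 0.6214  # statute miles per kilometre

spans_km = [35.3, 10.3, 12.4, 7.2, 24.1, 24.1, 24.9, 6.7, 5.3]
spans_mi = [22.0, 6.4, 7.7, 4.5, 15.0, 15.0, 15.5, 4.2, 3.3]

# Every row agrees with the conversion to within 0.1 mi of rounding slack.
for km, mi in zip(spans_km, spans_mi):
    assert abs(km * MI_PER_KM - mi) <= 0.1, (km, mi)

print(round(sum(spans_km), 1))  # total span length across links 1-9, in km
```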
[Diagram notes: connection to Ca*Net 4; 10/100/GigE; 1310 nm 10 GbE WAN PHY interfaces; the legend distinguishes fiber in use from fiber not in use. Default configuration: tribs can be moved as needed; only the TFEC link can support OC-192c (10G WAN) operation; the non-TFEC link is used to transport GE traffic; there could be 2 facing L2 switches.]
OMNInet 2005

[Diagram: Sites at 1890 W. Taylor, 600 N. Federal, 710 North Lake Shore Drive, and 750 North Lake Shore Drive, each with an optical switch and a high-performance L2 switch, interconnected by 1 x 10G WAN links and Ge (x2) pairs. Trib content legend: OC-192 with TFEC, OC-192 without TFEC, Ge, OC-48 (counts 16, 12, 8, 0). One TFEC link and one non-TFEC link; TFEC = out-of-band error correction.]
Extensions to Other Sites Via Illinois’ I-WIRE

[Map: I-WIRE connects StarLight, Argonne, UIC/EVL, UIUC CS, and NCSA.]

Research Areas:
• Latency-Tolerant Algorithms
• Interaction of SAN/LAN/WAN technologies
• Clusters

Research Areas:
• Displays/VR
• Collaboration
• Rendering
• Applications
• Data Mining

Source: Charlie Catlett
Summary Optical Services: Baseline + 5 Years

| 2005 | 2006 | 2007 | 2008 | 2009 | 2010 |
| --- | --- | --- | --- | --- | --- |
| Dedicated Lightpaths | Enhanced Direct Addressing | Additional LPs, National, Global | Additional LPs, National, Global | Site Expansion: Multiple Labs | Site Expansion: Multiple Labs |
| Dynamic Lightpath Allocation | Increased Number of Nodes on LPs | Increased Allocation Capacity, US | Increased Allocation Capacity, Global | Increased Allocation Capacity: Sites | Increased Allocation Capacity: Sites |
| Highly Distributed Control Plane | Persistent Inter-Domain Signaling, National, Global | Multi-Domain Distributed Control | Extension to Additional Net Elements | Persistence: Common Facilities | Additional Facility Implementations |
| Deterministic Paths | Close Integration w/ App Signaling | Increased Attribute Parameters | Increased Adjustment Parameters | Performance Metrics and Methods | Enhanced Recovery, Restoration |
| Autonomous Dyn. Lightpath Peering | Multi-Domain ADLP | Integration with Management Sys | Extensions of ADLP Peering | E2E ADLP | Recovery, Restoration |
| Multi-Service Layer Integration | 5-10 MSI Facilities | 10-20 MSI Facilities | 20-40 MSI Facilities | Additional US, Global Facilities | Additional US, Global Facilities |
| Optical Multicast | Enhanced Control of OM | OM/App Integration | Expansion to Addtn’l Objects | Expansion to Addtn’l Apps | Expansion to Addtn’l Apps |
| App/Optical Integration | App API-Op Ser Validation | Integration with Optical Services | Monitoring Techniques | Analysis Techniques | Recovery, Restoration |
| Wavelength Routing | Persistent Wavelength Routing | Multi-Domain Wavelength Routing | Multi-layer Integration | Multi-Services Integration | Enhanced Recovery, Restoration |
Summary Optical Technologies: Baseline + 5 Years

| 2005 | 2006 | 2007 | 2008 | 2009 | 2010 |
| --- | --- | --- | --- | --- | --- |
| O-APIs | Additional Experiments w/ Architecture | App Specific APIs | Variable APIs Integrated with Common Services | Enhancement of Architecture | Additional Deployment |
| Distributed Control Systems, Multi-Domain | Additional Experiments w/ Architecture | Integration with ROADMs | Expansion to Edge Sites | Enhancement of Architecture | Additional Deployment |
| OOO Switches | At Selected Core Sites | At Selected Core, Edge Sites | + Experimental Solid State OSWs | Solid State OSW Deployment | Solid State at Core, Edge |
| O-UNIs | At Selected Core Sites | At Selected Core, Edge Sites | Deployment at All Key Sites | Additional Deployments | Wide Deployment |
| Service Abstraction – GMPLS Integr. | Additional Signaling Integration | Increased Transparency & Layer Elimination | Increased Integration with ID/Obj. Discovery | Prototype Arch for App Specific Serv Abstractns. | Formalization of Enhanced Architecture |
| Policy Based Access Control | Additional Experiments w/ Architecture | Formalization of Architecture, e.g., via WS | Expansion to Additional Resources | L1 Security Enhancements | Formalization of Enhanced Architecture |
| New Id, Object and Discovery Mechanisms | Integration of New Id, Obj, Dis w/ New Arch. | Integration with Multiple Integrated Serv. | Integration w/ New Management Sys | Extensions to Various TE Functions | Persistent at Core, Edge Facilities |
| DWDM | DWDM, CWDM | Integration with Edge Optics | Integration with BP Optics | Additional MUX/DMUX | Increased Stream Granularity |
Summary Optical Interoperability Issues: Baseline + 5 Years

| 2005 | 2006 | 2007 | 2008 | 2009 | 2010 |
| --- | --- | --- | --- | --- | --- |
| Common Open Services Definitions | Common Services R&D | Common Services Experimentation | Initial Standards Formalization | Establish CSD Enhancement Process | On-Going |
| COS Architecture | Initial Implementations | Expansion of Functionality | Initial Standards Formalization | Enhancement Process | On-Going |
| Open Protocols and Standards | Initial Implementations | Expansion of Functionality | Initial Standards Formalization | Enhancement Process | On-Going |
| Distributed Control | V2 with WS Integration | Multi-Service Integration | New Services Integration | Extensions, Horizontal, Vert | Integration with New Opt Core |
| Multi-Domain Interoperability | Enhancement of Signaling Functionality | Access Policy Services | Expansion to Additional Domains | Increasing US, Global Extensions | Increasing US, Global Extensions |
| Implementation at GLIF Open Exchanges (4) | 5-10 OE Sites | 10-20 OE Sites | 20-30 OE Sites | 30-40 OE Sites | 40-50 OE Sites |
| Basic Services at Key US, Global Research Sites | 15-30 Sites | 30-60 Sites | 60-90 Sites | 90-120 Sites | 120-150 Sites |
| Basic Services at Key US Science Sites | 7-15 Sites | 15-30 Sites | 30-45 Sites | 45-60 Sites | 60-75 Sites |
| Service Est. at Selected Labs | 30-60 Labs | 60-120 Labs | 120-180 Labs | 180-240 Labs | 240-300 Labs |
Overall Networking Plan

[Map: Dedicated lightpaths link San Diego (iGRID, UCSD) through CENIC/Pacific Wave to Seattle and over NLR to StarLight in Chicago — 4 x 1 Gbps paths plus one control channel — with Route B via Seattle, and four dedicated paths from StarLight to NetherLight and the University of Amsterdam.]
AMROEBA Network Topology

[Diagram: iGRID demonstration topology — L2 switches at the iGRID conference, StarLight, and SURFNet/University of Amsterdam, interconnected through an OME over L3 (GbE) with a control channel, linking the iCAIR DOT Grid clusters and the UvA VanGogh Grid clusters to visualization endpoints.]

www.startap.net/starlight