Tesseract
A 4D Network Control Plane
Carnegie Mellon University
Microsoft Research
Rice University
Presented by: Alberto Gonzalez, Whitney Young
Current Designs
No direct control
Subtle dependencies
Example: load-balance forwarding by tuning
OSPF link weights, but this impacts inter-domain
routing
4D Architecture
Control plane:
Decision
Dissemination
Discovery
Data
Services:
Dissemination
Node configuration
Design Goals
Timely reaction to network changes
Resilient to decision plane failure
Robust and secure control channels
Minimal switch configuration
Backward compatibility
Support diverse decision algorithms
Support multiple data planes
Implementation Overview
Switch
Implements data plane
Decision Element (DE)
Implements discovery, dissemination, and
decision planes
Decision Plane
Any network control algorithm can be easily
integrated
Incremental shortest path first
Spanning tree
Joint packet filtering/routing
Link cost-based traffic engineering
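Algorithms like the first one in the list above compute forwarding state from a global link-state view. A minimal sketch (not Tesseract's actual code) of a decision element deriving per-destination next hops with Dijkstra's algorithm; the graph format and function name are illustrative assumptions:

```python
import heapq

def shortest_path_next_hops(graph, source):
    """Compute {destination: next_hop} forwarding entries for `source`.

    graph: {node: {neighbor: link_cost}} -- a link-state view such as a
    decision element might assemble from discovery-plane reports.
    """
    dist = {source: 0}
    next_hop = {}
    visited = set()
    heap = [(0, source, None)]  # (cost so far, node, first hop from source)
    while heap:
        d, node, hop = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            next_hop[node] = hop
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                # leaving the source, the first hop is the neighbor itself
                heapq.heappush(heap, (nd, nbr, hop if hop is not None else nbr))
    return next_hop
```

In the 4D split, only this computation changes when a different algorithm is plugged in; dissemination and discovery stay the same.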
Resiliency to DE failure
Hot standbys receiving heartbeats
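The hot-standby scheme above can be sketched as a standby DE that promotes itself when the master's heartbeats stop arriving. The timeout value and promotion rule here are assumptions, not the paper's parameters:

```python
class StandbyDE:
    """Hot-standby decision element (a sketch): stays passive while
    heartbeats from the master keep arriving, and takes over once the
    most recent heartbeat is older than `timeout` seconds."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_heartbeat = None
        self.is_master = False

    def on_heartbeat(self, now):
        # A live master demotes any standby that thought it was in charge.
        self.last_heartbeat = now
        self.is_master = False

    def tick(self, now):
        # Promote only if we have heard from a master before but not recently.
        if self.last_heartbeat is not None and now - self.last_heartbeat > self.timeout:
            self.is_master = True
        return self.is_master
```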
Dissemination Plane
Goal: communication between DEs and
switches
DEs handle most of the dissemination plane, but
switches help out
Path to destination handled by DE
Switches keep a separate queue in which
dissemination packets get higher priority
Security protects switches and the info passed
through the dissemination plane, and guards
against compromised DEs
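The separate-queue idea above amounts to strict-priority scheduling at each switch output. A small sketch, with illustrative class and queue names:

```python
from collections import deque

class SwitchScheduler:
    """Two-queue output scheduler (a sketch): dissemination-plane packets
    sit in their own queue and always drain before data packets, so
    control traffic survives data-plane congestion."""

    def __init__(self):
        self.dissemination = deque()
        self.data = deque()

    def enqueue(self, packet, is_control=False):
        (self.dissemination if is_control else self.data).append(packet)

    def dequeue(self):
        # Strict priority: control first, then data, else nothing to send.
        if self.dissemination:
            return self.dissemination.popleft()
        if self.data:
            return self.data.popleft()
        return None
```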
Discovery Plane
Goal: minimize manual configuration
Switches send HELLO messages
Once switches are active, DEs instruct them on
what to do
Initiate eBGP session with outside world
Backward compatibility (bootstrapping end
hosts)
Discovery plane as DHCP proxy
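Neighbor discovery via the HELLO messages mentioned above can be sketched as a switch recording, per port, which neighbor it hears from. The message fields here are illustrative, not Tesseract's wire format:

```python
def process_hello(neighbor_table, in_port, hello):
    """Record a neighbor learned from a HELLO received on `in_port`.

    neighbor_table: {local_port: (neighbor_switch_id, neighbor_port)}
    hello: e.g. {"switch_id": "S7", "port": 1} -- field names assumed.
    The updated table is what the switch reports to the DEs, which use
    it to build the network-wide topology with no manual configuration.
    """
    neighbor_table[in_port] = (hello["switch_id"], hello["port"])
    return neighbor_table
```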
Data Plane
Configured by decision plane
WriteTable exposed as a simple interface to
provide a configuration service to the decision plane
Allows easy implementation of different services
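A WriteTable-style interface can be sketched as the decision plane pushing (table, key, value) entries that the switch applies to whatever lookup structure backs each table. The method and table names below are assumptions for illustration:

```python
class DataPlane:
    """Sketch of a data plane configured purely through table writes.
    Because the interface is just "write this entry into that table",
    different services (IP forwarding, Ethernet switching, packet
    filters) map onto different tables without changing the interface."""

    def __init__(self):
        self.tables = {}

    def write_table(self, table, key, value):
        self.tables.setdefault(table, {})[key] = value

    def lookup(self, table, key):
        return self.tables.get(table, {}).get(key)
```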
Decision/Dissemination Interface
Function independently of each other
Only 3 functions used to interface between them
(plus 2 more purely to improve performance)
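To make the narrowness of that interface concrete, here is a sketch of what a three-function boundary between the decision and dissemination planes could look like. The function names (`send`, `flood`, `receive`) are hypothetical, not the paper's actual API:

```python
class DisseminationService:
    """Hypothetical three-call interface the decision plane programs
    against: push state to one switch, push it to many, and pull
    reports coming back. Keeping the boundary this small is what lets
    the two planes evolve independently."""

    def __init__(self):
        self.outbox = []  # (switch_id, msg) pairs queued for delivery
        self.inbox = []   # reports arriving from switches

    def send(self, switch_id, msg):
        self.outbox.append((switch_id, msg))

    def flood(self, switches, msg):
        for s in switches:
            self.send(s, msg)

    def receive(self):
        return self.inbox.pop(0) if self.inbox else None
```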
Performance Evaluation
Single Link Failures
Switch & Regional Failures
Link Flapping
10-hop to 12-hop path change
Tesseract can handle network changes
Performance Evaluation
1347 nodes & 6244 edges
DE Computation Time
Worst case: 151 ms
99th percentile: 40 ms
Bandwidth overhead
Worst case: 4.4 MB
90% of switches updated with new state
Performance Evaluation
Failover times
Applications
In enterprise network:
Computes both new routes & packet-filter
placements
Loads them into routers with no forbidden traffic
leaked
No human involvement once security policy is
specified
Ethernet
Key features
Widely implemented frame format
Support for broadcasting frames
Transparent address learning model
Tesseract keeps these properties.
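The transparent address-learning property listed above is the familiar Ethernet behavior: bind each frame's source MAC to its arrival port, and flood frames whose destination is still unknown. A conventional sketch (not Tesseract's implementation) of that learning table:

```python
class LearningTable:
    """Transparent MAC learning as in conventional Ethernet: remember
    which port each source address was seen on, forward to that port
    when it is the destination later, and flood otherwise."""

    def __init__(self):
        self.mac_to_port = {}

    def on_frame(self, src_mac, dst_mac, in_port):
        # Learn the sender's location, then decide the output.
        self.mac_to_port[src_mac] = in_port
        return self.mac_to_port.get(dst_mac, "FLOOD")
```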
Ethernet
Throughput comparison
Tesseract control plane, TCP flows:
Started at 570 Mbps
Leveled at 280 Mbps after a failure
Conventional RSTP control plane:
Started at 280 Mbps
Hit zero after failure
Recovered after 7-8 seconds at ~180 Mbps
Summary
Tesseract
Robust
Secure
Convergence & Throughput
Scalable
Ethernet or IP
Good Performance
Enterprise Network
Reusable
Decision/Dissemination Planes
1,000+ Switches
Enables Direct Control
Easier to Understand and Deploy