Multi Node Label Routing Protocol

MULTI NODE LABEL ROUTING – A LAYER 2.5 ROUTING PROTOCOL
ADDRESSING ROUTING SCALABILITY ISSUES
NSF IGNITE PROJECT
ROUTING PROTOCOL CHALLENGES
• All routing protocols face:
  • Scalability issues – dependency on network size
  • Flooding on information changes
  • High churn rates
  • Packet looping
  • Unreliable and unstable routing
• Work with IP and avoid the impacts of IP forwarding when necessary
WHAT SHOULD THE SOLUTION LOOK LIKE?
• Forward packets at layer 2.5, transparent to IP at layer 3 – MPLS-like?
• But MPLS is not adequate for emergency situations
  • Setup time, failure recovery, dependency on routing protocols, etc.
WHAT DO WE HAVE?
• Every network has a structure
  • CORE (backbone (BB) routers)
  • DISTRIBUTION (DB routers)
  • ACCESS (AC routers)
  • Tiers of routers for specific operations
• If a network does not have this structure, it is easy to set up a (virtual) one
• Packets between access networks have to be forwarded via distribution and core
  • Either flat routing based on logical IP addresses, OR
  • Use the structure to forward packets
USING THE STRUCTURE – THE TIER STRUCTURE
• Capture the attributes of the structure as LABELS
  • Based on the structure and connectivity
  • Routers can have multiple LABELS
• We will use the LABELS to route and forward
TIER STRUCTURE AND LABELS
Let us introduce routers and assign LABELS that capture the structure's properties.

[Figure: three-tier topology. Tier 1 – BB routers 1.1, 1.2, 1.3. Tier 2 – DB router sets 2.1:1, 2.2:1, 2.3:1, 2.3:2. Tier 3 – AC router sets 3.1:1:1, 3.2:1:1, 3.3:1:1, 3.3:2:1.]

The label structure is TierValue.UniqueID:
• At tier 2, UniqueID = parentID:childUniqueID (e.g., 2.1:1).
• At tier 3, UniqueID = grandparentID:parentID:childUniqueID (e.g., 3.1:1:1).
• The UniqueID carries the parent-child relationship (grandparent : parent : child).
• The labels form a tree that can be used for routing and forwarding.
• The TierValue provides a level of aggregation.
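To make the label structure concrete, here is a minimal sketch in C (the language MNLR is coded in, per the tasks slide); the types and names are illustrative assumptions, not the project's actual code:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_TIER 3

/* A label such as 3.1:1:1 -> tier 3, UniqueID {1, 1, 1}.
 * Hypothetical representation; field names are illustrative. */
typedef struct {
    int tier;           /* TierValue: 1 = BB, 2 = DB, 3 = AC    */
    int id[MAX_TIER];   /* UniqueID components, ancestors first */
    int id_len;         /* number of UniqueID components        */
} mnlr_label;

/* The parent label drops the last UniqueID component and moves one
 * tier up, e.g. 3.1:1:1 -> 2.1:1. */
static mnlr_label parent_of(const mnlr_label *n)
{
    mnlr_label p = *n;
    p.tier   -= 1;
    p.id_len -= 1;
    return p;
}

/* c is a direct child of p when p's UniqueID is a prefix of c's
 * UniqueID and the tiers differ by exactly one. */
static bool is_direct_child(const mnlr_label *c, const mnlr_label *p)
{
    return c->tier == p->tier + 1 &&
           memcmp(c->id, p->id, p->id_len * sizeof(int)) == 0;
}

int main(void)
{
    mnlr_label ac = { 3, {1, 1, 1}, 3 };  /* 3.1:1:1 */
    mnlr_label db = parent_of(&ac);       /* 2.1:1   */
    printf("direct child: %d\n", is_direct_child(&ac, &db));  /* prints 1 */
    return 0;
}
```

The prefix property is what makes the tree usable for forwarding: relationships can be tested by comparing label components alone, with no routing table.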
ROUTING AND FORWARDING IN THE STRUCTURE
Each node records a neighbor table.

Neighbor Table of 2.1:1

  Label     Port
  3.1:1:1   1
  2.3:1     2
  1.1       3

[Figure: the three-tier topology, highlighting node 2.1:1 and its neighbors 3.1:1:1, 2.3:1, and 1.1.]

Frame from source 3.1:1:1 to destination 3.3:1:1:
• At 3.1:1:1, check my neighbor table.
• 3.3:1:1 is not my direct child or parent (compare UniqueIDs 1:1:1 with 3:1:1).
• Send to my parent; the packet reaches 2.1:1.
ROUTING AND FORWARDING IN THE STRUCTURE
Using the neighbor table of 2.1:1 shown above:

Frame from source 3.1:1:1 to destination 3.3:1:1 (continued):
• At 2.1:1, the node checks whether the destination is a relation of any one of its neighbors.
• 3.3:1:1 is a child of 2.3:1, so node 2.1:1 forwards the packet to port 2.
• If this were not the case, it would send the packet to its parent.
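In the same illustrative C style, here is a sketch of the per-node forwarding decision the last two slides walk through; the longest-match tie-break is an assumption on my part, so that a direct child wins over a higher-tier ancestor when both match:

```c
#include <stdbool.h>
#include <string.h>

#define MAX_TIER       3
#define SEND_TO_PARENT -1   /* sentinel: no neighbor matched */

typedef struct {
    int tier;
    int id[MAX_TIER];
    int id_len;
} mnlr_label;

typedef struct {
    mnlr_label label;   /* neighbor's label              */
    int        port;    /* port leading to that neighbor */
} neighbor_entry;

/* dst is a "relation" of neighbor nb when nb's UniqueID is a prefix
 * of dst's, i.e. nb is the destination itself or an ancestor of it. */
static bool is_relation(const mnlr_label *dst, const mnlr_label *nb)
{
    return nb->id_len <= dst->id_len &&
           memcmp(dst->id, nb->id, nb->id_len * sizeof(int)) == 0;
}

/* If the destination is a relation of some neighbor, send out that
 * neighbor's port (most specific match wins); otherwise hand the
 * packet to the parent. */
static int forward_port(const neighbor_entry *tbl, int n, const mnlr_label *dst)
{
    int best_port = SEND_TO_PARENT, best_len = -1;
    for (int i = 0; i < n; i++)
        if (is_relation(dst, &tbl[i].label) && tbl[i].label.id_len > best_len) {
            best_len  = tbl[i].label.id_len;
            best_port = tbl[i].port;
        }
    return best_port;
}
```

On the example above, at 2.1:1 the destination 3.3:1:1 (UniqueID 3:1:1) matches neighbor 2.3:1 (UniqueID 3:1), so forward_port returns port 2, exactly as the slide describes.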
ROUTING AND FORWARDING IN THE STRUCTURE
• At each node, the destination is compared with the neighbor table entries, and the packet is directed to the proper port.
• If there is no match, send to the parent.
• No routing tables as in OSPF or BGP.
• No flooding of routing updates – only local neighbor information is exchanged.
• Tier 1 may get more traffic – that normally happens anyway.
• Tier 1 is a partial mesh – at most 2 hops (e.g., the Seattle POP of the AT&T network has 44 routers).
• The neighbor table can record up to a maximum of 2 hops (in progress).
ROUTING AND FORWARDING IN THE STRUCTURE
• On a link or node failure, the packet follows another decision path – no flooding, low churn rates.
• Example: the link between 2.3:1 and 2.1:1 fails.
  • Only node 2.1:1 records a change.
  • 2.1:1 would send the packet to 1.1.
  • The packet will take the path 1.1, 1.3, 2.3:1.
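A minimal sketch of the local failover this slide describes, assuming per-port liveness flags refreshed by the periodic hellos (the names are hypothetical):

```c
#include <stdbool.h>

#define MAX_PORTS      8
#define SEND_TO_PARENT -1

/* Hypothetical per-port liveness, refreshed by hello messages; a port
 * is marked dead after enough hellos are missed. */
static bool port_alive[MAX_PORTS];

/* If the port chosen by the normal decision (e.g. toward 2.3:1) is
 * down, only this node reacts: it deflects the packet to its parent
 * (e.g. 1.1). Nothing is flooded; upstream nodes simply re-run the
 * same neighbor table comparison. */
static int pick_port(int preferred_port, int parent_port)
{
    if (preferred_port != SEND_TO_PARENT && port_alive[preferred_port])
        return preferred_port;
    return parent_port;
}
```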
HOW TO IMPLEMENT IN CURRENT NETWORKS?
• The solution should be deployable, with an easy/smooth transition
• Run as a layer 2.5 routing protocol – similar to MPLS
• Forward traffic from IP networks connected at the edge
• Learn the mapping between edge IP networks and the labels of the MNLR nodes they connect to
• Disseminate this information to all edge nodes (a sketch of such a mapping follows)
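As a rough illustration of the edge learning step above (hypothetical types, not the project's code), the disseminated state amounts to a table mapping each edge IP network to the label of the MNLR node it attaches to:

```c
#include <stdint.h>
#include <string.h>

/* One disseminated mapping: an edge IP network and the label of the
 * MNLR edge node that connects to it. */
typedef struct {
    uint32_t prefix;          /* network address, host byte order */
    uint32_t mask;            /* e.g. 0xFFFFFF00 for a /24        */
    char     edge_label[16];  /* e.g. "3.1:1:1"                   */
} edge_map_entry;

/* Look up the egress label for a destination IP; kept simple here,
 * with the first matching entry winning. */
static const char *label_for_ip(const edge_map_entry *map, int n,
                                uint32_t dst_ip)
{
    for (int i = 0; i < n; i++)
        if ((dst_ip & map[i].mask) == map[i].prefix)
            return map[i].edge_label;
    return NULL;  /* destination network unknown at this edge node */
}
```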
MULTI NODE LABEL ROUTING (MNLR)
IMPLEMENTATION DETAILS
DEMO AVAILABLE WITH 30 NODES ON GENI - SAI
MNLR DOMAIN – EDGE IP NETWORKS

[Figure: an MNLR domain consisting of MNLR core nodes surrounded by MNLR edge nodes; end IP networks 1-4 attach to the domain at the edge nodes.]
TASKS COMPLETED
• MNLR coded in C and implemented over GENI
• Automated scripts for GENI setup and performance metric collection
• BGP performance evaluation over Emulab
DEMO: 27 NODES, MNLR VS BGP
• MNLR operation in a 27-node scenario, compared with BGP operation
Automated Process Flow
1. Parse topology information from the manifest (Rspec.xml)
2. Tier allocation and command formation
3. Git update on the local repository (GitHub)
4. Copy code to all GENI nodes
5. Code compilation / error check
6. Trigger MNLR on all nodes
7. Trigger performance analysis test cases using iperf
8. Notification email
SOFTWARE DEFINED NETWORKS
• SDN can be used for:
  • Label assignment and connectivity information dissemination at tier 1 – a kind of management function
  • Dissemination of the end IP address to label map to all edge MNLR nodes
  • Placement of SDN controllers
SDN CONTROLLERS

[Figure: SDN controllers C1-C5 overlaid on the three-tier topology. Tier 1 – backbone routers A 1.1, B 1.2, C 1.3. Tier 2 – distribution routers D 2.1:1, E 2.1:2, F 2.2:1, G 2.2:2, H 2.3:1, I 2.3:2, and 2.3:3. Tier 3 – access routers J-O with labels 3.1:1:1, 3.1:1:2, 3.1:2:1, 3.2:1:1, 3.2:2:1, 3.3:1:1, 3.3:2:1, 3.3:2:2. IP NW 1-4 attach at tier 3.]
MNLR ROUTERS (EDGE AND CORE)

[Figure: the MNLR domain with edge and core nodes; end IP networks 1-4 attach at the edge nodes.]

• CONFIGURATION
  • Label assignment to all nodes in an MNLR domain
  • Hello messages (build the neighbor table) and Connect messages (label assignment at tiers 2 and 3)
• CORE CONNECTIVITY
  • Periodic 'hello' messages from MNLR nodes to their neighbors -> neighbor table
• EDGE CONNECTIVITY
  • End IP network address dissemination to all egress/ingress MNLR nodes
• EDGE ENCAP/DE-ENCAP
  • Encapsulation of incoming IP packets in a special MNLR header: MNLR SRC | MNLR DST | IP PACKET (see the sketch after this list)
  • De-encapsulation of MNLR packets to deliver the IP packet to the destination IP network
• CORE FORWARDING
  • Forwarding of MNLR-encapsulated packets towards egress MNLR nodes based on MNLR labels
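A minimal sketch of the edge encap/de-encap step; the slides specify only the field order MNLR SRC | MNLR DST | IP PACKET, so the fixed 8-byte label encoding here is an assumption for illustration:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical on-the-wire MNLR header: source and destination
 * labels in front of the original IP packet. */
struct mnlr_hdr {
    uint8_t src_label[8];   /* encoded ingress label, e.g. 3.1:1:1 */
    uint8_t dst_label[8];   /* encoded egress label                */
};

/* Ingress edge node: prepend the MNLR header to an incoming IP
 * packet. out must have room for sizeof(struct mnlr_hdr) + ip_len. */
static size_t mnlr_encap(const uint8_t *ip_pkt, size_t ip_len,
                         const uint8_t src[8], const uint8_t dst[8],
                         uint8_t *out)
{
    struct mnlr_hdr h;
    memcpy(h.src_label, src, 8);
    memcpy(h.dst_label, dst, 8);
    memcpy(out, &h, sizeof h);
    memcpy(out + sizeof h, ip_pkt, ip_len);
    return sizeof h + ip_len;
}

/* Egress edge node: strip the MNLR header and return a pointer to
 * the inner IP packet for delivery to the destination IP network. */
static const uint8_t *mnlr_decap(const uint8_t *frame, size_t frame_len,
                                 size_t *ip_len)
{
    if (frame_len < sizeof(struct mnlr_hdr))
        return NULL;
    *ip_len = frame_len - sizeof(struct mnlr_hdr);
    return frame + sizeof(struct mnlr_hdr);
}
```

Core nodes never look past the MNLR header, which is what keeps forwarding independent of IP.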
OPTIMIZATION WITH MNLR
• The baseline operates with one-hop neighbors in the neighbor table
• How many hops should the neighbor table record?
  • This is an optimization problem
• Should it record only same-tier neighbors?
IMPLEMENTED AT LAYER 2.5 IN THE GENI TESTBEDS
• Evaluated for several topologies and compared with BGP
• Quagga BGP run on Emulab
SETTING UP THE TOPOLOGY AND CONFIGURATIONS
• TIME CONSUMING
• AUTOMATION SCRIPTS CAN SET UP LARGE TOPOLOGIES IN GENI
TEST CASES
• IPN1 -> IPN2: N5 – N3 – N4 – N8
• IPN1 -> IPN4: N5 – N3 – N0 – N1 – N13 – N16
• IPN1 -> IPN6: N5 – N3 – N0 – N2 – N24 – N26
• IPN1 -> IPN7: N5 – N3
TEST CASE – LINK FAILURE
• IPN1 -> IPN2: N5 – N3 – N4 – N8
• After the link between N3 and N4 fails, traffic follows the path:
  • N5 – N3 – N0 – N4 – N8
BGP ROUTING TABLE SIZE
• The routing table size equals the number of networks in the topology
• All nodes hold routing information for all networks in the topology
• For this topology, the routing table size is 29
LINK FAILURE EXAMPLE
• We fail the link between nodes 0 and 1 (BGP):
  • Churn rate: 18/27
  • Convergence time: 156 seconds (the HELLO interval is 60 seconds)
• In comparison, with MNLR:
  • The churn rate equals the number of nodes that experience the link failure (2 in this case)
  • The convergence time equals the number of hello intervals needed to detect the link failure plus the time to update the affected nodes' neighbor tables (2-second hello interval, 3 hellos to declare a link failure, i.e., about 3 x 2 = 6 seconds of detection time)
FUTURE WORK
• Video images
• Files
• Round-trip time
• Reliability
• Demo in early May
DEMO
MNLR VS BGP
DISCUSSIONS
• A truly clean-slate technology
  • No distance vector, link state, or path vector
• Decouples routing and its operations from network size
  • Scalable
• Complements SDN – modularized control plane operations
  • MNLR is modular
• Can be used for intra- or inter-AS routing
• Suggestions?
BACKUP SLIDES
ROUTING STRUCTURE AND MODULARITY
Let us introduce routers and assign IDs that capture the structure's properties.

[Figure: ISP A (label 1.2) with three POPs – Seattle POP 1.2:1, New York POP 1.2:2, Chicago POP 1.2:3. Within a POP: Tier 1 BB routers 1.1, 1.2, 1.3; Tier 2 DB router sets 2.1:1, 2.2:1, 2.3:1, 2.3:2; Tier 3 AC router sets 3.1:1:1, 3.2:1:1, 3.3:1:1, 3.3:2:1; end devices at tier 4.]

• Forwarding between 3.3:1:1 and 3.3:2:1 goes via 2.3:1 and 2.3:2.
• Forwarding between 3.3:1:1 in the Seattle POP and the NY POP: the packet leaves the Seattle cloud with the address 1.2:1(3.3:1:1). The device in the NY POP will accordingly have an address of the form 1.2:2(3.3:1:1…) – resolved via name services.
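A rough sketch of composing and splitting the external address form used above, POPlabel(nodeLabel); the string handling is illustrative only, not the project's code:

```c
#include <stdio.h>
#include <string.h>

/* Compose the external form used when a packet leaves its POP,
 * e.g. POP "1.2:1" + node "3.3:1:1" -> "1.2:1(3.3:1:1)". */
static void make_external(char *out, size_t n,
                          const char *pop_label, const char *node_label)
{
    snprintf(out, n, "%s(%s)", pop_label, node_label);
}

/* Split an external address back into its POP and node labels. */
static int split_external(const char *ext, char *pop, char *node, size_t n)
{
    const char *open  = strchr(ext, '(');
    const char *close = strrchr(ext, ')');
    if (!open || !close || close < open)
        return -1;
    snprintf(pop,  n, "%.*s", (int)(open - ext), ext);
    snprintf(node, n, "%.*s", (int)(close - open - 1), open + 1);
    return 0;
}

int main(void)
{
    char ext[64], pop[32], node[32];
    make_external(ext, sizeof ext, "1.2:1", "3.3:1:1");
    if (split_external(ext, pop, node, sizeof pop) == 0)
        printf("POP %s, node %s\n", pop, node);  /* POP 1.2:1, node 3.3:1:1 */
    return 0;
}
```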