Multi Protocol Label Switching
(MPLS)
UK HEP Grid Networking meeting
UCL - 8th May
David Salmon - RAL
Overview
• Background
– Why MPLS?
• Work proposed
– “Lab” (LAN) experiments
• RAL testbed
– Wide area (WAN) experiments
• PIPSS proposal
Why the interest?
• Bottom line
– Get the service/s we want from the network!
• Grid/LHC computing requirements
– reliable high-volume data-transfer
– data movements on demand…
• Need more control over traffic in the network
• Network Engineering techniques
– Traffic engineering
• Multi Protocol Label Switching (MPLS)
– Quality of Service (QoS)
• End to end
– Class/es of Service (CoS)
• in domains - WANs, MANs, LANs
History
• Standard Internet
– “best-efforts” traffic
– competes on equal terms
– OK if average bandwidth utilisation is low
• little congestion
• Managed bandwidth - Janet & TEN-155
– Dedicated bandwidth for research projects
– restricted - only finite bandwidth available
– complex to set up (by hand)
– expensive - needed ATM link with kit on site
• not available to most sites
Traffic classes
• PPNCG document
– submitted to UKERNA for SJ4 requirements
– specifies various classes of traffic
– details of latency & bandwidth requirements
• Simplified view
– Data-transfer
• Quasi-continuous, repository to repository
• On-demand (user driven)
– Interactive
– Real-time
• Audio & Video
Traffic handling
• Classification
– IP header information
• Protocol, source, ToS...
• Prioritisation
– Queuing disciplines
– Round robin, weighted queues, priority queues...
• Rate control
– allocating bandwidth to traffic classes
• Build traffic models from these components (sketched below)
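As a rough illustration of how these pieces combine on a Linux router, the following sketch classifies packets by IP header fields into the bands of a simple priority queue using the tc command. The device name, source address and ToS value are illustrative assumptions, not taken from the testbed.

# minimal classification + prioritisation sketch (illustrative values)
# three-band priority qdisc; band 1:1 is served first
tc qdisc add dev eth0 root handle 1: prio bands 3
# packets from this source carrying ToS 0x10 go to the highest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip src 10.1.0.1/32 \
    match ip tos 0x10 0xff \
    flowid 1:1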
MPLS
• Connection oriented switching in the core
• Can establish explicit paths across the network
– Label switched paths (LSPs)
– Traffic engineering
• Packets get an extra component, the “Label”
– MPLS Shim, inserted between the L2 header and the L3 header & contents:
[ L2 header | MPLS Shim | L3 header & contents ]
• Packets are forwarded based on the Label
• IP routing still used & routers exchange routing
information over IP - NOT MPLS
– Dynamic label distribution protocols available (at least 2)
– LDP & RSVP based
MPLS shim structure (32 bits)
• Bits 0-19 (20 bits): Label - structureless (but see below)
• Bits 20-22 (3 bits): Experimental - e.g. Diffserv/MPLS
• Bit 23 (1 bit): Bottom of stack - S
• Bits 24-31 (8 bits): Time to live - TTL
NB labels are structureless in the sense that they don’t contain any addressing information related to the source/destination systems - they are local to links. However, at least one VPN architecture builds labels incorporating AS numbers to ensure global uniqueness.
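For concreteness, the shim can be written out as a 32-bit word, label in the top 20 bits and TTL in the bottom 8. A worked example using shell arithmetic (illustrative values only):

# label 17, EXP 0, bottom of stack (S=1), TTL 64
# shim = (label << 12) | (exp << 9) | (S << 8) | ttl
printf '0x%08x\n' $(( (17 << 12) | (0 << 9) | (1 << 8) | 64 ))   # prints 0x00011140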
MPLS transport
LER: Label Edge Router
LSR: Label Switched Router
[Diagram: an IP packet (L2 + L3(IP)) is IP routed to the ingress LER, which pushes an MPLS shim; the core LSRs forward on the label alone, swapping it at each hop (the example labels in the figure are 45, 33, 87 and 112); the egress LER pops the label and the packet is IP routed on to its destination. The edge links are IP routed, the core links are label switched.]
RAL testbed
• Motivation
– need to learn about MPLS & CoS/QoS concepts
– Linux kernels include advanced network features
• hierarchical traffic classification
• several queue types available
• rate control
– PCs are much cheaper than commercial routers
– Open source MPLS software available for Linux
• Rationale
– Establish a small testbed to emulate a network backbone
with both core and edge routers
– Define traffic models using MPLS & CoS/QoS techniques
– Run application tests over the network to test traffic
control and CoS/QoS features
• Need 5+ PCs, each with 2 network cards
MPLS & VPNs
• Virtual private networks (VPNs)
• May offer a convenient way of managing a path
across the network
– with rate control may give managed/protected bandwidth
facilities similar to an ATM virtual circuit
– not so interested yet in privacy/partitioning issues which
are motivators for commercial VPNs
• High priority area for investigation in both
testbed and WAN tests
MPLS Software
• Open source software for Linux systems
– seems to be under active development
• www.sourceforge.net
– MPLS-Linux
• MPLS switching software
– MPLS-LDP
• MPLS label distribution protocol
• something for future investigation!
• There are others….not investigated yet
MPLS set-up
• mplsadm command
– rather like adding static routes in IP
– create incoming and outgoing labels
– bind them to FECs
• “Assembler level” routing configuration!
– very detailed & prone to error
– dynamic routing is for later
• too complex to start with
RAL testbed configuration
[Diagram: linear testbed topology, with a gateway to the RAL HEP LAN. D1 and D2 are end hosts, E1 and E2 edge routers, C1 and C2 core routers.]
D1 (10.1.0.1) -- eth0 (10.1.0.2) E1 eth1 (10.3.0.1) -- eth0 (10.3.0.2) C1 eth1 (10.3.1.1) --
eth1 (10.3.1.2) C2 eth0 (10.3.2.2) -- eth1 (10.3.2.1) E2 eth0 (10.2.0.2) -- D2 (10.2.0.1)
LSP D1 -> D2 carries label 17 on each hop; LSP D2 -> D1 carries labels 39, 39 and 42
No through traffic
NB: all STATIC routing, both IP and MPLS
Edge 1 (E1) configuration
# assign label spaces to eth0 and eth1
#
mplsadm -v -L eth0:0
mplsadm -v -L eth1:0
# explicitly add a route to say that the core knows about D2
#
route add 10.2.0.1/32 gw 10.3.0.2
# explicitly add a generic MPLS label.
# “If you see a packet for D2 (10.2.0.1/32) going out on eth1 then give it
# MPLS label 17 and use 10.3.0.2 as the next hop”
#
# MPLS label 17 originates here
#
mplsadm -v -A -B -O gen:17:eth1:ipv4:10.3.0.2 -f 10.2.0.1/32
# “If you see a packet with label 42 on it, pop the label and pass
# the packet to the IP layer”
mplsadm -v -A -I gen:42:0
Edge 2 (E2) configuration
# assign label spaces to eth0 and eth1
#
mplsadm -v -L eth0:0
mplsadm -v -L eth1:0
# explicitly add a route to say that the core knows about D1
#
route add 10.1.0.1/32 gw 10.3.2.2
# explicitly add a generic MPLS label.
# “If you see a packet for D1 (10.1.0.1/32) going out on eth1 then give it
# MPLS label 39 and use 10.3.2.2 as the next hop”
#
# MPLS label 39 originates here
#
mplsadm -v -A -B -O gen:39:eth1:ipv4:10.3.2.2 -f 10.1.0.1/32
# “If you see a packet with label 17 on it, pop the label and pass
# the packet to the IP layer”
mplsadm -v -A -I gen:17:0
Core 1 (C1) configuration
# assign label spaces to eth0 and eth1
#
mplsadm -v -L eth0:0
mplsadm -v -L eth1:0
# Set up part of an LSP in one direction
# “If you see label 17 coming in (on space 0), switch it out with
# label 17 and use 10.3.1.2 as the next hop”
#
mplsadm -v -A -I gen:17:0 -O gen:17:eth1:ipv4:10.3.1.2 -B
# Set up part of an LSP in the other direction
# “If you see label 39 coming in (on space 0), switch it out with
# label 42 and use 10.3.0.1 as the next hop”
#
mplsadm -v -A -I gen:39:0 -O gen:42:eth0:ipv4:10.3.0.1 -B
Core 2 (C2) configuration
# assign label spaces to eth0 and eth1
#
mplsadm -v -L eth0:0
mplsadm -v -L eth1:0
# Set up part of an LSP in one direction
# “If you see label 39 coming in (on space 0), switch it out with
# label 39 and use 10.3.1.1 as the next hop”
#
mplsadm -v -A -I gen:39:0 -O gen:39:eth1:ipv4:10.3.1.1 -B
# Set up part of an LSP in the other direction
# “If you see label 17 coming in (on space 0), switch it out with
# label 17 and use 10.3.2.1 as the next hop”
#
mplsadm -v -A -I gen:17:0 -O gen:17:eth0:ipv4:10.3.2.1 -B
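Once the four routers are configured, a quick sanity check (a sketch, assuming the hosts and addresses shown in the testbed diagram) is to ping across the LSPs from D1 and watch the labelled traffic transit a core router; whether the MPLS shim is decoded in the output depends on the tcpdump version in use.

# on D1: end-to-end connectivity over the two label switched paths
ping -c 5 10.2.0.1
# on C1: watch traffic crossing the core
# (newer tcpdump versions decode the shim and show the label)
tcpdump -n -i eth1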
Testbed Plan
• Get label switching working as outlined
• Configure a simple traffic model
– two traffic types
• high priority
• low priority
– hierarchical bandwidth allocation
• low priority traffic can use all available bandwidth if no high
priority traffic is present
• high priority traffic can use up to a certain bit rate
– use ftp for both traffic types
• on-demand ftp is high priority
• background ftp is low priority
• Monitor the traffic and measure conformance
Example
• On a 100 Mbit/sec link (ethernet)
• Force the interface to 10Mbit/sec (see sketch below)
– easier to understand at lower bandwidths
– systems may not have the power to cope at 100Mbit/s
• Configure high priority FTP at 2Mbit/s
• Start a long low priority transfer
– monitor the transfer rate
• Start a high-priority transfer
– check if it gets the 2Mbit/s allocated
– check that the low priority traffic reduces its rate
• Once understood, move on to more complex models
and higher bandwidths
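Forcing the interface down to 10 Mbit/s can be done with the usual Linux tools; a minimal sketch, assuming a NIC driver that supports ethtool (mii-tool is the older alternative) and an illustrative device name:

# force eth1 to 10 Mbit/s full duplex with autonegotiation off
ethtool -s eth1 speed 10 duplex full autoneg off
# on older systems: mii-tool -F 10baseT-FD eth1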
Config: CoS/QoS
• Rate control: TC command (see sketch below)
– define parameters for token bucket algorithms
– committed information rate (like Cisco CAR)
• IPROUTE2 ?
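As a concrete sketch of the rate-control side, the commands below use tc to build the two-class hierarchy from the example above: 2 Mbit/s committed to high-priority traffic on a 10 Mbit/s link, with low-priority traffic free to use whatever is left. The HTB queueing discipline and the port-based filter are assumptions for illustration; the testbed itself may well have used CBQ or other IPROUTE2 disciplines.

# minimal sketch, not the testbed's actual configuration
# root qdisc: unclassified traffic falls into the low-priority class 1:20
tc qdisc add dev eth1 root handle 1: htb default 20
# parent class capped at the 10 Mbit/s link rate
tc class add dev eth1 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit
# high priority: committed 2 Mbit/s, capped at 2 Mbit/s
tc class add dev eth1 parent 1:1 classid 1:10 htb rate 2mbit ceil 2mbit prio 0
# low priority: can take the whole link when high-priority traffic is absent
tc class add dev eth1 parent 1:1 classid 1:20 htb rate 8mbit ceil 10mbit prio 1
# treat ftp-data from the on-demand server as high priority (illustrative filter)
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
    match ip sport 20 0xffff flowid 1:10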
PIPSS proposal
• “Collaborative project to demonstrate
end-to-end traffic management services and
high-performance data-transport applications
required for Grid operations”
• Partners
– Cisco
– CLRC
– Manchester University
– University College London
– UKERNA
PIPSS project aims
• WAN tests of MPLS with CoS/QoS
• Initially similar to testbed work, but..
• Production-scale - SuperJANET4 Development
Network
– very high bandwidth - 2.5 Gbit/s
– real routers/switches
• Configure traffic models
• Measure QoS/CoS performance for applications
• Extend across multiple domains
– test interworking of traffic model implementations
SJDN & Site links
[Map: SuperJANET4 Development Network nodes at Manchester, Warrington, Leeds, Reading and London (C-PoPs with Cisco 12008 routers), with site links to UCL and RAL]
Multiple management domains
[Diagram: an end-to-end path crosses several management domains: Organisation LAN - Regional Network (MAN) - National Backbone - International Backbone - National Backbone - Regional Network (MAN) - Organisation LAN]
Defined tasks
• TM1: Understand MPLS for traffic engineering in
the SJDN
• TM2: Demonstrate end-to-end traffic
management across multiple domains with live Grid
traffic.
• TM3: Demonstrate end-to-end QoS and traffic
management UK <-> USA
• TM4: Demonstrate end-to-end QoS and traffic
management to CERN
• TP1: Demonstrate high-performance data-transport applications across the WAN in a Grid context - aim for 1 Gbit/s or higher.
No more
Classification
• Source / Destination
• Protocol (port no)
• CoS
– ToS Byte - (Diffserv code point)
Traffic prioritisation
• Queuing models (disciplines)
– PQ, CBQ, WFQ, …
Rate Control
• Token bucket
• rate control algorithms
• time slices
MPLS routing & FECs
• Forwarding Equivalence Classes
Traffic Engineering/Management
• Control what goes where
• Fishtail diagram
LSPs & VPNs