MCT Design Options and Best Practices Guide for NetIron

MCT Design Options & Best Practices
Single Level MCT
• Active-active load balancing between servers and access switches
• High availability with both link-level and switch-level redundancy
• Sub-second failover without Spanning Tree Protocol
• Works with existing LACP and server trunks
• Coexists with existing Spanning Tree based networks for migration
• Dual-homed default gateway
Multi-tier MCT – Full Mesh MCT
• Active-active load balancing between access and aggregation/core switches
• Full protection against failures in both the upstream and downstream directions
• Removes Spanning Tree Protocol from the aggregation/core layer
• Works with existing LACP
Best Practice of Single Level MCT and Multi-tier MCT
• ICL ports must be tagged members of the session VLAN (see the configuration sketch after this list)
• The ICL is preferably a LAG, for link redundancy and higher bandwidth for packets crossing the ICL
• If the CCEPs are 1 GbE, the ICL is preferably a 10G LAG
• If the CCEPs are 10 GbE, the ICL is preferably a 100G LAG
• All member VLANs, including the MCT client VLANs, must run on the ICL
• Non-MCT VLANs from a CEP can coexist with client VLANs on the ICL on the MLXe
• Slow failover mode prevents port flapping on the ICL
• The topology should preferably be symmetric to avoid single points of failure
• The number of clients, ports, and VLANs must conform to the scalability formula
• Exceeding the supported scalability may leave the cluster in an unstable forwarding state
• With the scalability enhancement, more clients can be added later without interrupting traffic from existing clients
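
The following is a minimal configuration sketch of these recommendations on a NetIron MLXe pair; the LAG, VLAN IDs, interface numbers, names, and addresses are illustrative assumptions, and the exact syntax should be verified against the release in use.

! ICL as a static LAG (two 10G members) for redundancy and bandwidth
lag "MCT-ICL" static id 1
 ports ethernet 1/1 to 1/2
 primary-port 1/1
 deploy

! Session VLAN: the ICL (LAG primary port) must be a tagged member
vlan 4090 name MCT-SESSION
 tagged ethernet 1/1
 router-interface ve 100
interface ve 100
 ip address 10.1.1.1/30

! MCT cluster; the member (client) VLANs must also run over the ICL
cluster MCT1 1
 rbridge-id 1
 session-vlan 4090
 member-vlan 100 to 200
 icl MCT-ICL ethernet 1/1
 peer 10.1.1.2 rbridge-id 2 icl MCT-ICL
 deploy
 client SERVER-SW1
  rbridge-id 101
  client-interface ethernet 2/1
  deploy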
Best Practice of Single Level MCT and Multi-tier MCT
• Fiber ports are preferable where the data center requires low latency
• Fiber ports converge faster than copper ports when a link failure occurs
• Copper interfaces need more time to initialize
• Port Loop Detection (PLD) is strongly recommended when configuring MCT for the first time (see the sketch after this list)
• It prevents loops caused by misconfiguration
• If PLD is still enabled after deployment, it is recommended to configure loop-detection shutdown-disable on the ICL port
• This prevents the ICL from being shut down by PLD
• CPU protection and VLAN hardware flooding are recommended if MCT is used only in an L2 domain
• The MCT control protocol and FDB synchronization are CPU-intensive
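
A sketch of the loop-detection and CPU-protection settings mentioned above; the command placement (VLAN versus interface level) and the cpu-protection keyword are assumptions to verify against the NetIron release in use.

! Loop detection on a client VLAN while first bringing MCT up
vlan 100
 loop-detection

! If PLD stays enabled after deployment, keep it from shutting down the ICL
interface ethernet 1/1
 loop-detection shutdown-disable

! L2-only MCT domain: protect the CPU and flood in hardware on the client VLAN
vlan 100
 cpu-protection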
Best Practice of Single Level MCT and Multi-tier MCT
• Hitless failover and hitless upgrade are not supported by MCT itself, but MCT is compatible with them
• During a failover/upgrade, the MCT peer is responsible for forwarding
• After the failover/upgrade, the MCT peers resynchronize the FDB and re-establish MCT
• PB and PBB are not supported on CCEP ports
• It is recommended to configure the keep-alive VLAN between the two MCT nodes only, and to dedicate that VLAN to this purpose (see the sketch after this list)
• Only one VLAN can be configured as the keep-alive VLAN
• If the cost of extra ports on the two MCT nodes for the keep-alive VLAN is a concern, the keep-alive VLAN can be carried over one of the uplinks to the L3 network
• The MCT control plane can span continents
• CCP runs over the connection-oriented TCP protocol
• The keep-alive timer defaults to a 300 ms hello timer and a 900 ms down timer; routed packets between San Francisco and Tokyo take about 73 ms each way, well within these defaults
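
A sketch of a keep-alive VLAN dedicated to the two MCT nodes, carried on a separate direct link (the VLAN ID and port are illustrative assumptions):

! Dedicated keep-alive VLAN on a direct link between the two MCT nodes
vlan 4091 name MCT-KEEPALIVE
 tagged ethernet 1/3

cluster MCT1 1
 keep-alive-vlan 4091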
Best Practice of Single Level MCT and Multi-tier MCT
Client Isolation Mode
[Diagram: two topologies of an MCT pair (MCT master/VRRP master and MCT slave/VRRP backup with SPF) connected by the ICL to a routed Layer 3 network; one includes a keep-alive VLAN between the nodes, the other does not and an L2 loop can form through the client]
• Loose client isolation mode is recommended, to keep traffic forwarding when the ICL fails
• With a keep-alive VLAN in loose client isolation mode, the MCT slave blocks its client ports
• A keep-alive VLAN is recommended with loose client isolation mode to prevent an unexpected L2 loop
Best Practice of Single Level MCT and Multi-tier MCT
Client Isolation Mode (cont.)
[Diagram: MCT pair (VRRP master and VRRP backup with SPF) connected by the ICL to a routed Layer 3 network, with the client ports blocked when the ICL fails]
• Strict client isolation mode isolates the client network when the ICL fails (see the configuration sketch below)
• Regardless of the keep-alive VLAN, both MCT peers block their client ports
• This prevents packets from a problem client network from propagating into the whole network
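
A sketch of selecting the isolation mode under the cluster; loose is assumed to be the default, and the keyword should be verified against the release in use.

cluster MCT1 1
 ! Strict mode: on an ICL failure, both peers block their client ports
 client-isolation strict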
Best Practice of MCT with VRRP/VRRP-E
[Diagram: two MCT topologies with VRRP master and VRRP backup (SPF) peers connected by the ICL to a routed Layer 3 network]
• VRRP/VRRP-E is recommended as the default gateway for clients
• IPv4 and IPv6 VRRP/VRRP-E (without SPF) are both supported with MCT
• VRRP-E Short Path Forwarding (SPF) is recommended to prevent overloading the ICL with traffic destined for the Layer 3 uplink (see the sketch after this list)
• A VRRP-E backup with SPF acts as a hidden VRRP-E master for default-gateway traffic
• IPv6 VRRP-E Short Path Forwarding is not supported
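
A sketch of VRRP-E with short path forwarding on the client-facing VE of one MCT node (addresses, VRID, and priority are illustrative assumptions):

router vrrp-extended

interface ve 100
 ip address 10.10.100.2/24
 ip vrrp-extended vrid 100
  backup priority 110
  advertise backup
  ip-address 10.10.100.1
  ! SPF lets the backup route traffic itself instead of sending it across the ICL
  short-path-forwarding
  activate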
Best Practice of MCT with VRRP/VRRP-E
[Diagram: MCT pair with VRRP-E master and VRRP-E backup, SPF enabled on both, each with an L3 uplink to a routed Layer 3 network; when one uplink fails, routed traffic can be black-holed]
• When a Layer 3 uplink fails and the routes to the Layer 3 network are all learned from that one uplink, the routed traffic may be black-holed regardless of whether SPF is enabled
• Configuring the Layer 3 uplink as a VRRP/VRRP-E track port with a track priority can force a VRRP/VRRP-E master failover; the MCT node without the L3 uplink then forwards traffic to the new VRRP/VRRP-E master, which has the routes (see the sketch after this list)
• The track port priority should be configured large enough to force the VRRP/VRRP-E master failover when the Layer 3 uplink fails
• Adding IP interfaces to propagate the routes from the VRRP/VRRP-E backup to the master is an alternative way to resolve the black-hole situation
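
A sketch of tracking the Layer 3 uplink in the same VRID; the track-priority keyword and value are assumptions, and the value must be large enough to push the master's priority below the backup's when the uplink fails.

interface ve 100
 ip vrrp-extended vrid 100
  ! Drop the priority by 60 if the tracked uplink goes down
  backup priority 110 track-priority 60
  track-port ethernet 1/10
  activate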
Best Practice of Routing with MCT
[Diagram: MCT pair (VRRP master and VRRP backup with SPF) connected to a routed Layer 3 network, with clients in IP subnets 1-4 routed above the default gateway]
• Before R5.4, routing occurs above the default gateway
• Routing with MCT is not supported on CCEP and ICL ports before R5.4
• A virtual router is required as the default gateway, adding another layer of routers just for routing
• Packets cannot be routed across the ICL as a shortest path
• Only IPv4 routing is supported if VRRP-E short path forwarding is configured
Best Practice of Routing with MCT
[Diagram: MCT pair connected to a routed Layer 3 network, routing directly between IP subnets 1-4 on the CCEP and ICL]
• With R5.4, Layer 3 routing can occur on the CCEP and ICL
• Routing with MCT supports IPv4 and IPv6 passive interfaces on the CCEP (see the sketch after this list)
• Routing happens at the MCT peers themselves, without a virtual router
• Routed packets can cross the ICL as a shortest path
• Connects IP-based VMs in different subnets in a data center with less latency
• Connects routed customer networks in the WAN
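
One way to use the passive-interface support is to run the IGP passively on the client-facing VE, so its subnet is advertised without forming adjacencies over the CCEP; a sketch assuming OSPF (area and addresses are illustrative):

interface ve 100
 ip address 10.10.100.2/24
 ip ospf area 0
 ! No OSPF adjacency is formed toward the MCT client
 ip ospf passive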
Best Practice of MCT with L2 Metro Ring
Metro Ring Protocol
[Diagram: MCT pair dual-homed to a metro ring (MRP master) in the provider network, with Customer A and Customer B networks attached via PBB or L2VPN endpoints]
• Dual-homing to the metro ring builds large, resilient Layer 2 domains
• Sub-second re-convergence for any failure, from the access switches to the border routers
Best Practice of MCT with L2 Metro Ring
Metro Ring Protocol
• The secondary port of the MRP master must not be configured on the ICL
• The ICL must not be in a blocking state
• The convergence time requires balancing the preforwarding time against the number of MRP instances
• With the default MRP preforwarding timer, a topology group is recommended to reduce the number of MRP instances (see the sketch after this list)
• G.8032 (ERP) is not recommended
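
A sketch of a topology group, so that a single MRP instance on the master VLAN controls the forwarding state of many member VLANs (IDs are illustrative assumptions):

topology-group 1
 master-vlan 100
 member-vlan 101 to 200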
Best Practice of MCT For VPLS
[Diagram: MCT cluster of PE routers with CE routers attached via LAGs to the MCT cluster client edge ports; active-active data paths toward the CEs, and active/standby spoke-PWs across the point-to-multipoint VPLS in the MPLS network toward the remote PEs]
• High availability between point-to-multipoint clients
• Active-active paths to the customer edge router
• Active-standby paths to the remote endpoints
• Multiple standby paths between local and remote endpoints
• Provides cloud services across the MPLS network between data centers
• Does not require remote PEs to be aware of MCT
Best Practice of MCT For VLL
[Diagram: MCT cluster of PE routers with CE routers attached via LAGs to the MCT cluster client edge ports; active-active data paths toward the CEs, and active/standby spoke-PWs across the point-to-point VLL in the MPLS network toward the remote PEs]
• High availability between point-to-point clients
• Active-active paths to the customer edge router
• Active-standby path to the remote endpoint
• Multiple standby paths between local and remote endpoints
• Provides cloud services across the MPLS network between data centers
• Does not require remote PEs to be aware of MCT
Best Practice of MCT for VPLS/VLL
• The two MCT PE peers must be configured with the same VC mode, the same VPLS MAC table size, and the same set of remote peers
• Both tagged and raw modes are supported in MCT for VPLS/VLL
• MCT will not form if these configurations are not identical
• It is preferable to use an MCT spoke-PW between the MCT PE nodes
• In an MPLS network, a direct ICL with L2 session VLANs may not be achievable
• As long as there is a routed path between the two MCT PEs, the MCT spoke-PW never fails
• An L2 ICL and session can be supported simultaneously with MCT for VPLS/VLL
• MCT for VPLS/VLL always requires the L2VPN peer to be configured in the cluster (see the sketch after this list)
• Administrators who are familiar with L2 MCT often forget to configure the L2VPN peer
• The cluster will not form without the L2VPN peer
• On a node failure, recovery takes more than 10 seconds (about 30 seconds)
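
A sketch of the piece that is commonly missed, the L2VPN peer definition under the cluster; the l2vpn-peer keyword and its arguments are an assumption to confirm against the MCT for VPLS/VLL documentation.

cluster MCT-PE 1
 rbridge-id 1
 session-vlan 4090
 icl MCT-ICL ethernet 1/1
 peer 10.1.1.2 rbridge-id 2 icl MCT-ICL
 ! Required for MCT for VPLS/VLL; the cluster will not form without it
 l2vpn-peer 10.1.1.2 rbridge-id 2
 deploy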
Best Practice of MCT for VPLS/VLL
• MCT for VPLS/VLL requires the keep-alive timer to confirm that CCP is down
• CCP can run across a Layer 3 domain without a direct-link ICL
• The keep-alive timer is adjustable to accommodate the time a Layer 3 path reroute takes
• MAC address withdrawal is recommended in MCT for VPLS to avoid black-holing of packets from remote PEs
• VE over VPLS does not support routing over MCT for VPLS/VLL
Introduction of Multicast Routing with MCT (New in R5.4)
PIM-SM over MCT as Last Hop
• PIM and IGMP run natively on the MCT VLAN (VE); see the sketch after this list
• PIM sets up peering between the MCT chassis via the ICL
• IGMP query messages are sent natively on the CCEP
• IGMP membership is synchronized across the ICL to the MCT peer when an IGMP join message is received on a CCEP
• IGMP membership is installed in the PIM mcache on both MCT peers
• A hashing algorithm determines whether the actual forwarding port is local or remote when the source is on the uplink and the receiver is on a CCEP; for other combinations of sources and receivers, MCT multicast uses the shortest path
• Both MCT peers use PIM to join the RPF path towards the RP
• Multicast traffic arrives on both MCT peers, but only the peer with the local forwarding port forwards the traffic to the client
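
A sketch of enabling PIM-SM natively on the MCT VLAN's VE on each peer (the VE number and address are illustrative assumptions):

router pim

interface ve 100
 ip address 10.10.100.2/24
 ! PIM-SM and IGMP run natively on the MCT VLAN (VE)
 ip pim-sparse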
Introduction of Multicast Routing with MCT
Synchronize IGMP State On CCEPs
[Diagram (new in R5.4): an MCT client (receiver) dual-homed to CCEP1 and CCEP2 of an MCT pair attached to a routed L3 network; IGMP reports for (*,G1) and (*,G2) arrive on different CCEPs and are synchronized to the peer via MDUP]
• The receiver sends IGMP reports for (*,G1) and (*,G2)
• The MCT client forwards (*,G1) to CCEP1 and (*,G2) to CCEP2
• The MCT switches synchronize IGMP reports received locally to the peer CCEPs via MDUP
Introduction of Multicast Routing with MCT
Receivers Behind MCT Client
[Diagram (new in R5.4): multicast sources S1/S2 in the routed L3 network and S3/S4 on the CEPs, with the receiver behind the MCT client on CCEP1/CCEP2; groups (G1)-(G4) are present on both MCT peers]
• Streams requested by the receiver are added to the CCEP on both MCT peers
• Streams requested by the receiver are pulled to both MCT peers
• Only one MCT switch forwards a stream to its CCEP; the MCT peer drops the stream
• For streams that ingress from a CEP (S3, S4), the MCT switch connected to the source forwards to its CCEP; if the local CCEP goes down, the peer forwards to its CCEP
• A stream that ingresses from the ICL (S3, S4) is dropped until the remote CCEP goes down
• For a stream that ingresses from the uplink (S1, S2), the MCT hash function decides which MCT switch forwards the stream
Introduction of Multicast Routing with MCT
Receivers on CEP
[Diagram (new in R5.4): receiver on CEP1, sources S1/S3 behind the MCT client, S2 in the routed L3 network, and S4 on CEP2; streams pulled to the peer are dropped by the peer]
• Streams requested by the receiver are added to CEP1
• A stream from the routed L3 network (S2) is pulled by the receiver-side MCT switch via the uplink and forwarded to CEP1
• A stream sourced from CEP2 (S4) is pulled by the receiver-side MCT switch via the ICL and forwarded to CEP1
• Streams sourced from the MCT client (S1, S3) are load-balanced on the LAG to one of the MCT switches
• A stream load-balanced to the receiver-side MCT switch (S1) is forwarded natively to CEP1
• A stream load-balanced to the remote MCT switch (S3) is forwarded to CEP1 via the ICL
Introduction of Multicast Routing with MCT
Sources Behind MCT Client
[Diagram (new in R5.4): sources S1-S4 behind the MCT client, with receivers R1/R2 in the routed L3 network, R3/R4 on the CEPs, and R5 behind the MCT client; streams that the peer does not need are dropped by the peer]
• Streams sourced behind the MCT client are load-balanced on the LAG to the MCT switches
• Streams load-balanced to the MCT switch that has the OIF (S2, S3) are forwarded to the OIFs locally, regardless of whether the OIF is the uplink, a CEP, or another CCEP
• Streams load-balanced to the MCT switch whose OIFs are on the MCT peer (S1 to R1, S4 to R4) are forwarded to the OIF via the ICL, regardless of whether the OIF is the uplink or a CEP
• By the above rule, the stream (S4 to R4) is not forwarded to the remote CCEP, since the ingress is the ICL and both CCEPs are up
• The MCT client forwards the stream (S4) to the local receiver (R5) through normal VLAN flooding
Introduction of Multicast Routing with MCT
Sources on CEP
[Diagram (new in R5.4): sources S1-S4 on CEP1, with receivers R1/R2 in the routed L3 network, R3/R5 behind the MCT client, and R4 on CEP2; streams that the peer does not need are dropped by the peer]
• Streams with OIFs on the local uplink are forwarded locally by the MCT switch (S1 to R1)
• Streams with OIFs on the local CCEP are forwarded locally by the MCT switch (S1 to R5)
• Streams with OIFs on the remote CCEP are forwarded to the MCT peer via the ICL and then forwarded to the MCT client (S3 to R3)
• By the above rule, streams are also forwarded to the MCT peer via the ICL but are dropped until the local CCEP goes down (S1 to the MCT peer)
• Streams with OIFs on the remote uplink (R2) and remote CEP (R4) are forwarded to the OIFs via the ICL (S2 to R2, S4 to R4)
Best Practice of Multicast for MCT
[Diagram: source in the routed L3 network and a receiver behind the MCT client, shown with and without a keep-alive VLAN; on an ICL failure without the keep-alive VLAN both CCEPs stay in the forwarding state, while with it one CCEP is shut down]
• When the source is reached via the L3 uplink and the receiver is on the MCT client, an ICL failure without a keep-alive VLAN leaves the client interface on both MCT nodes in the forwarding state
• With the client interfaces on both MCT nodes forwarding, multicast traffic is duplicated at the receiver
• A keep-alive VLAN is strongly recommended for multicast with MCT
Best Practice of Multicast for MCT
[Diagram: source in the routed L3 network and a receiver behind the MCT client, with a keep-alive VLAN; the designated forwarder's uplink fails, and an extra IP interface between the MCT nodes provides the alternate path]
• Following the previous scenario: if the uplink of the designated forwarder fails, the designated forwarder does not change to the peer MCT node, even though only the peer MCT node can now pull the traffic from the source
• Only a CCEP failure triggers the designated forwarder to change to the peer MCT node
• An extra L3 interface between the two MCT nodes should be configured as the next hop for routed multicast traffic, to prevent black-holing on an uplink failure (see the sketch after this list)
• The IP interface can be configured either on the ICL or on an extra link
• An extra L3 interface between the two MCT nodes is required for multicast with MCT if the sources and receivers are positioned as in this topology
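
A sketch of the extra L3 interface between the two MCT nodes, here carried in a dedicated VLAN over the ICL with PIM enabled so it can serve as the next hop for routed multicast traffic if an uplink fails (IDs and addresses are illustrative assumptions):

! Dedicated VLAN over the ICL for the extra routed interface
vlan 300 name MCT-MCAST-BACKUP
 tagged ethernet 1/1
 router-interface ve 300

interface ve 300
 ip address 10.1.3.1/30
 ip pim-sparse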
Best Practice of Multicast for MCT
• The (S, G) registry is synchronized to both MCT nodes, but only one MCT node in the pair forwards the multicast traffic to clients or the uplink, not both
• Because double the amount of data traffic flows through the ICL, the ICL requires more bandwidth when multicast is deployed with MCT
• Multicast for MCT is recommended only for single-tier MCT
Summary
Key takeaways
• High availability: sub-second failover in the event of a link, module, switch fabric, control plane, or node failure
• Active-active links: no idle Ethernet links in the network
• Optimal forwarding: Layer 2 and Layer 3 forwarding regardless of the VRRP-E state
• Traffic load balancing: flow-based load balancing rather than sharing VLANs across network links
• Simple deployment and operation: minimal configuration and easy troubleshooting
• No rip and replace: provides resiliency regardless of the type or vendor of the edge device
• Flexibility: provides this resiliency regardless of the traffic type (Layer 2, Layer 3, or non-IP)
• Scalable for VMotion: interacts with MRP to build larger resilient Layer 2 domains