Switching - Westcon Security Solutions Netherlands


JUNIPER METAFABRIC
Westcon 5-Day Event
February 2014
Washid Lootfun
Sr. Pre-Sales Engineer
[email protected]
METAFABRIC ARCHITECTURE PILLARS

Simple: easy to deploy & use
• Mix-and-match deployment
• One OS
• Universal building block for any network architecture
• Seamless 1GE → 10GE → 40GE → 100GE upgrades

Open: maximize flexibility
• IT automation via open interfaces: VMware, Puppet, Chef, Python
• JUNOS scripting & SDK
• Standard optics
• Open standards-based interfaces: L2, L3, MPLS
• Open SDN protocol support: VXLAN, OVSDB, OpenFlow

Smart: save time, improve performance
• Elastic (scale-out) fabrics
• QFabric
• Virtual Chassis
• Virtual Chassis Fabric
METAFABRIC ARCHITECTURE PORTFOLIO
• Switching: flexible building blocks; simple switching fabrics
• Routing: universal data center gateways
• Management: smart automation and orchestration tools
• SDN: simple and flexible SDN capabilities
• Data Center Security: adaptive security to counter data center threats
• Solutions & Services: reference architectures and professional services
EX SWITCHES
EX SERIES PRODUCT FAMILY
One Junos across the family, managed by Network Director.

ACCESS (fixed)
• EX2200, EX2200-C, EX3300: entry-level access switches
• EX4200: proven access switch
• EX4300: versatile, powerful access/aggregation switch

AGGREGATION/CORE (modular)
• EX6210: dense access/aggregation switch
• EX8208, EX8216: core/aggregation switches
• EX9204, EX9208, EX9214: programmable core/distribution switches
EX4300 SERIES SWITCHES
Product Description
• 24/48 x 10/100/1000BASE-T access ports
• 4 x 1/10GbE (SFP/SFP+) uplink ports
• 4 x 40GbE (QSFP+) VC/uplink ports
• PoE/PoE+ options
• Redundant / field-replaceable components (power supplies, fans, uplinks)
• DC power options
• AFI and AFO airflow options
Notable Features
• L2 and basic L3 (static, RIP) included
• OSPF, PIM available with enhanced license
• BGP, IS-IS available with advanced license
• Virtual Chassis: 10 members; 160-320 Gbps VC backplane
• 12 hardware queues per port
• Front-to-back and back-to-front airflow options
SKU                 Ports   PoE/PoE+ ports   PoE power budget
EX4300-24P          24      24               550 W
EX4300-24T          24      -                -
EX4300-48P          48      48               900 W
EX4300-48T          48      -                -
EX4300-48T-AFI      48      -                -
EX4300-48T-DC       48      -                -
EX4300-48T-DC-AFI   48      -                -

Target Applications
• Campus data closets
• Top-of-rack data center / high-performance 1GbE server attach
• Small network cores
INTRODUCING THE EX9200 ETHERNET SWITCH
Available March 2013 in three chassis: EX9204, EX9208, EX9214.
• Native programmability (Junos image); automation toolkit; programmable control/management planes and SDK (SDN, OpenFlow, etc.)
• 1M MAC addresses; 256K IPv4 and 256K IPv6 routes; 32K VLANs (bridge domains)
• L2 and L3 switching; MPLS & VPLS/EVPN*; ISSU; Junos Node Unifier
• 4, 8 & 14 slots; 240G/slot
• 40x1GbE, 32x10GbE, 4x40GbE & 2x100GbE line cards
• Powered by Juniper One custom silicon
* On roadmap
EX9200 LINE CARDS
• EX9200-40F/40T (1GbE line cards): 40 x 10/100/1000BASE-T or 40 x 100FX/1000BASE-X SFP
• EX9200-32XS (10GbE line card): 32 x 10GbE SFP+; up to 240G throughput
• EX9200-4QS (40GbE line card): 4 x 40GbE QSFP+; up to 120G throughput
• EX9200-2C-8XS (100GbE line card): 2 x 100GbE CFP + 8 x 10GbE SFP+; up to 240G throughput
EX9200 FLEXIBILITY: VIRTUAL CHASSIS (13.2R2)
• High availability: redundant REs and switch fabric; redundant power/cooling (dual REs required per chassis)
• Performance and scale: modular configuration; high-capacity backplane
• Easy to manage: single image, single config, one management IP address
• Single control plane: single protocol peering; single RT/FT
Virtual Chassis takes this a notch up:
• Scale ports/services beyond one chassis
• Physical placement flexibility
• Redundancy beyond one chassis
• One management and control plane for the access switches below it
ENTERPRISE SWITCHING ARCHITECTURES
Problem: existing architectures lack scale and flexibility, and are operationally complex. Network Director manages all three designs below.
• Multi-tier (access/distribution/core). Solution: Virtual Chassis at both the access and distribution layers. Benefit: management simplification, reduced opex.
• Collapsed distribution & core. Solution: collapse the core and distribution layers; Virtual Chassis at the access layer. Benefit: simplification through consolidation, scale, aggregation, performance.
• Distributed access. Solution: Virtual Chassis at the access layer across wiring closets. Benefit: flexibility to expand and grow, scale, simplification.
VIRTUAL CHASSIS DEPLOYMENT IN THE ENTERPRISE
Span horizontally (connect wiring closets) or vertically (collapse a vertical building).
(Diagram: EX3300/EX4300 Virtual Chassis in wiring closets and in buildings A/B (EX3300VC-1a, EX4300VC-2a, EX4300VC-3a, EX6200-1b), joined over 40G VCPs; 10GbE/40GbE uplinks and LAGs into an EX4550VC-1a / EX9200VC-1b aggregation/core; an SRX Series cluster to the Internet; centralized DHCP and other services; 10/40GbE app servers; a WLC cluster serving WLAs in each closet.)
DEPLOYING MPLS AND VPNs IN THE ENTERPRISE: METRO/DISTRIBUTED CAMPUS
Stretch connectivity for a seamless network: a private MPLS campus core with VPLS or L3VPN. Core switches act as PEs, access switches act as CEs, and wireless access points hang off the access layer at each of sites 1-3. Site VLANs (VLAN1, VLAN2, VLAN3) map into VPNs such as a Finance/Business Ops VPN, an R&D VPN, and a Marketing/Sales VPN, as sketched below.
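A minimal Junos sketch of one such VPN on a core switch acting as PE (instance name, interface, and community values are illustrative):

set routing-instances FINANCE-VPN instance-type vrf
set routing-instances FINANCE-VPN interface ge-0/0/10.100     # CE-facing VLAN subinterface
set routing-instances FINANCE-VPN route-distinguisher 65000:100
set routing-instances FINANCE-VPN vrf-target target:65000:100
set routing-instances FINANCE-VPN vrf-table-label

Each site's PE carries one such instance per VPN; MP-BGP over the MPLS core distributes the routes between sites.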
JUNIPER ETHERNET SWITCHING
Simple. Reliable. Secure.
• #3 in market share within 2 years
• 20,000+ switching customers across enterprise & service providers
• 23+ million ports deployed
QFX5100 PLATFORM
QFX5100 SERIES
• Next-generation top-of-rack switches
  – Multiple 10GbE/40GbE port-count options
  – Supports multiple data center switching architectures
• New innovations:
  – Rich L2/L3 feature set, including MPLS
  – Low latency
  – SDN-ready
  – Topology-independent in-service software upgrades
  – Analytics
  – GRE tunneling
QFX5100 NEXT-GENERATION TOP OF RACK
• QFX5100-48S: 48 x 1/10GbE SFP+; 6 x 40GbE QSFP+ uplinks; 1.44 Tbps throughput; 1U fixed form factor
• QFX5100-96S: 96 x 1/10GbE SFP+; 8 x 40GbE QSFP+ uplinks; 2.56 Tbps throughput; 2U fixed form factor
• QFX5100-24Q: 24 x 40GbE QSFP+; 8 x 40GbE via two expansion slots; 2.56 Tbps throughput; 1U fixed form factor
Low latency │ Rich L2/L3 feature set │ Optimized FCoE
QFX5100-48S (Q4CY2013)
Front side (port side): 48 x 1/10GbE SFP+ interfaces; 6 x 40GbE QSFP+ interfaces; console, USB, Mgmt1 (SFP) and Mgmt0 (RJ45) ports.
• 4+1 redundant fan tray, color-coded (orange: AFO, blue: AFI), hot-swappable
• 1+1 redundant 650W power supplies, color-coded, hot-swappable
• Each 40GbE QSFP+ interface can be converted to 4 x 10GbE interfaces without a reboot
• Maximum 72 x 10GbE interfaces, 720 Gbps
• CLI to change port speed:
set chassis fpc <fpc-slot> pic <pic-slot> port <port-number> channel-speed 10G
set chassis fpc <fpc-slot> pic <pic-slot> port-range <low> <high> channel-speed 10G
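For example, assuming the first QSFP+ uplink is port 48 on PIC 0 (as on a standalone QFX5100-48S; slot numbering is illustrative):

set chassis fpc 0 pic 0 port 48 channel-speed 10g
commit
# the single et-0/0/48 interface is replaced by xe-0/0/48:0 through xe-0/0/48:3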
QFX5100-96S (Q1CY2014)
Front side (port side): 96 x 1/10GbE SFP+ interfaces; 8 x 40GbE QSFP+ interfaces.
• Supports two port configuration modes: 96 x 10GbE SFP+ plus 8 x 40GbE, or 104 x 10GbE interfaces
• 1.28 Tbps (2.56 Tbps full duplex) switching performance
• New 850W 1+1 redundant, color-coded, hot-swappable power supplies
• 2+1 redundant, color-coded, hot-swappable fan tray
QFX5100-24Q (Q1CY2014)
Front side (port side): 24 x 40GbE QSFP+ interfaces, plus two hot-swappable 4x40GbE QSFP+ expansion modules (QICs). Same FRU-side configuration as the QFX5100-48S.
Port configuration has four modes; a mode change requires a reboot:
1. Default (fully subscribed) mode: QICs are not supported; maximum of 24 x 40GbE or 96 x 10GbE interfaces, with line-rate performance for all packet sizes.
2. 104-port mode: only the first 4x40GbE QIC is supported, with its last two 40GbE interfaces disabled; its first two QSFPs work as 8 x 10GbE. The second QIC slot cannot be used, and there is no native 40GbE support. All 24 base ports can be channelized to 4 x 10GbE (24 x 4 = 96), for a total of 104 x 10GbE interfaces.
3. 4x40GbE PIC mode: all base ports can be channelized; only the 4x40GbE QIC is supported (it works in either QIC slot) but cannot be channelized. Yields 32 x 40GbE, or 96 x 10GbE + 8 x 40GbE.
4. Flexi PIC mode: supports all QICs, but QICs cannot be channelized; only base ports 4-24 can be channelized. Also supports a 32 x 40GbE configuration.
ADVANCED JUNOS SOFTWARE ARCHITECTURE
Provides the foundation for advanced functions:
• ISSU (In-Service Software Upgrade) enables hitless upgrades
• Other Juniper applications deliver additional services in a single switch
• Third-party applications
• Much faster system bring-up
Architecture: active and standby Junos VMs, Juniper apps, and third-party applications run over KVM on a Linux (CentOS) kernel, joined by a host network bridge.
QFX5100 HITLESS OPERATIONS
Dramatically reduces maintenance windows.
High-level QFX5100 architecture: master and backup Junos VMs run as kernel-based virtual machines on a Linux kernel over x86 hardware, each with PFE state, in front of the Broadcom Trident II ASIC. Compared with competitive ISSU approaches, topology-independent ISSU keeps things simple and preserves network performance, network resiliency, and data center efficiency during switch software upgrades.
Benefits:
• Seamless upgrade
• No traffic loss
• No performance impact
• No resiliency risk
• No port flap
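On the QFX5100 the upgrade itself is a single operational command (the package path and name here are placeholders):

request system software in-service-upgrade /var/tmp/jinstall-qfx-5-flex-x.tgz

The backup Junos VM boots the new image, state is synchronized, and mastership switches over while the PFE keeps forwarding.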
INTRODUCING THE VCF ARCHITECTURE
• Spines: integrated L2/L3 switches; connect leafs, core, WAN, and services
• Leafs: integrated L2/L3 gateways (1 RU, 48 SFP+ & 1 QIC); connect to virtual and bare-metal servers; local switching
• Any-to-any connections between leafs and spines
• A single switch to manage
(Diagram: services GW off the spine switches; leaf switches down to vSwitches on virtual servers and to bare-metal servers hosting VMs.)
PLUG-N-PLAY FABRIC
• New leafs are auto-provisioned
• Automatic configuration and image sync
• Any node not at factory defaults is treated as a network device
(Diagram: the same fabric of 1 RU, 48 SFP+ & 1 QIC leafs, with the services GW and WAN/core attached to the spines.)
VIRTUAL CHASSIS FABRIC DEPLOYMENT OPTIONS
A 10G/40G Virtual Chassis Fabric (VCF): QFX5100-24Q spines beneath an EX9200 core; QFX5100-48S leafs (1 RU, 48 SFP+ & 1 QIC) for 10G access; QFX3500 for existing 10G access; EX4300 for existing 1G access.
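A minimal sketch of bringing up such a fabric in auto-provisioned mode, configured on the spine REs (serial numbers are hypothetical):

set virtual-chassis auto-provisioned
set virtual-chassis member 0 role routing-engine serial-number TA0000000001
set virtual-chassis member 1 role routing-engine serial-number TA0000000002

Only the spines are declared; factory-default leafs that are cabled in are discovered and join on their own, per the plug-n-play behavior above.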
QFX5100 – SOFTWARE FEATURES
Planned FRS features* (Q4 2013):
• L2: xSTP, VLAN, LAG, LLDP/LLDP-MED
• L3: static routing, RIP, OSPF, IS-IS, BGP, vrf-lite, GRE
• Multipath: MC-LAG, L3 ECMP
• IPv6: neighbor discovery, router advertisement, static routing, OSPFv3, BGPv6, IS-ISv6, VRRPv3, ACLs
• MPLS, L3VPN, 6PE
• Multicast: IGMPv2/v3, IGMP snooping/querier, PIM-Bidir, ASM, SSM, Anycast, MSDP
• QoS: classification, CoS/DSCP rewrite, WRED, SP/WRR, ingress/egress policing, dynamic buffer allocation, FCoE/lossless flow, DCBX, ETS, PFC, ECN
• Security: DAI, PACL, VACL, RACL, storm control, control plane protection
• 10G/40G FCoE, FIP snooping
• Micro-burst monitoring, analytics
• sFlow, SNMP
• Python
*Please refer to the release notes and manual for the latest information.
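As one example of the monitoring features above, a minimal sFlow sketch (the collector address, port, and rates are illustrative):

set protocols sflow collector 192.0.2.10 udp-port 6343
set protocols sflow interfaces xe-0/0/0
set protocols sflow sample-rate ingress 2048   # sample 1-in-2048 ingress packets
set protocols sflow polling-interval 20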
Planned post-FRS features (Q1 2014):
• Virtual Chassis, mixed mode: a 10-member Virtual Chassis mixing QFX5100, QFX3500/QFX3600, and EX4300
• Virtual Chassis Fabric: 20 nodes at FRS with a mix of QFX5100, QFX3500/QFX3600, and EX4300
• Virtual Chassis features: parity with standalone; HA: NSR, NSB, GR for routing protocols, GRES
• ISSU on standalone QFX5100 and on all-QFX5100 Virtual Chassis / Virtual Chassis Fabric
• NSSU in mixed-mode Virtual Chassis or Virtual Chassis Fabric
• 64-way ECMP
• VXLAN gateway*
• OpenStack, CloudStack integration*
* After the Q1 time frame
QFX5100 FABRIC OPTIONS
Managed as a single switch:
• Virtual Chassis: up to 10 members
• Virtual Chassis Fabric: up to 20 members
• QFabric: up to 128 members
Plus spine-leaf Layer 3 fabrics built from QFX5100s.
VCF OVERVIEW
Simple:
• Single device to manage
• Predictable performance
• Integrated RE and integrated control plane
Flexible:
• Up to 768 ports
• 1, 10, 40G
• 2-4 spines; 10G and 40G spines
• L2, L3, and MPLS
Available:
• 4 x integrated REs
• GRES/NSR/NSB
• ISSU/NSSU
• Any-to-any connectivity
• 4-way multipath
Automated:
• Plug-n-play
• Analytics for traffic monitoring
• Network Director
CDBU SWITCHING ROADMAP SUMMARY
Timeline: 2T2013 → 3T2013 → 1T2014 → 2T2014 → future.

Hardware: EX4300, plus later EX4300 fiber solutions; EX4550 10GBASE-T and 40GbE module; QFX5100 (48 SFP+, 24 QSFP+, 96 SFP+, 24 SFP+, and 10GBASE-T variants); EX9200 line cards (2x100G, 6x40GbE, MACsec) and 400GbE per slot; Opus PTP.

Software: AnalyticsD; ND 1.5, then ND 2.0; QFX3000-M/G enhancements (L3 multicast, 40GbE, QinQ/MVRP, QFX5100 48 SFP+ node, 10GBASE-T node); Virtual Chassis with QFX Series; VXLAN gateway and ISSU on Opus; VXLAN routing on EX9200; OpenFlow 1.3; V20 Virtualized IT DC; solution releases DC 1.0, Campus 1.0, DC 1.1 (ITaaS & VDI), and DC 2.0 (IaaS with overlay).
MX SERIES
SDN AND THE MX SERIES
Delivering innovation inside and outside of the data center: flexible, SDN-enabled silicon provides seamless workload mobility and connections between private and public cloud infrastructures.
• USG (Universal SDN Gateway): the most advanced and flexible SDN bridging and routing gateway
• EVPN (Ethernet VPN): next-generation technology for connecting multiple data centers and providing seamless workload mobility
• VMTO (VM Mobility Traffic Optimizer): creates the most efficient network paths for mobile workloads
• ORE (Overlay Replication Engine): a hardware-based, high-performance services engine for broadcast and multicast replication within SDN overlays
VXLAN AS PART OF THE UNIVERSAL GATEWAY FUNCTION ON MX (1H 2014)
• High-scale multi-tenancy: a VTEP per tenant virtual DC (VTEP #0/VNID 0 through VTEP #N/VNID N), each stitched to its own bridge domain (VLAN-ID) and IRB interface; P2P and P2MP tunnels
• Ties into the full L2/L3 feature set of the MX: unicast and multicast forwarding; IPv4 and IPv6; L2 bridge domains and virtual switches
• DC gateway between LAN interfaces, WAN services (VPLS, EVPN, L3VPN), and the overlay, tying all media together and giving the DC operator migration options
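A minimal sketch of one tenant's stitching on the MX, assuming multicast-mode VXLAN flooding (names, VNID, and addresses are illustrative):

set switch-options vtep-source-interface lo0.0
set bridge-domains BD-1001 vlan-id 1001
set bridge-domains BD-1001 routing-interface irb.0
set bridge-domains BD-1001 vxlan vni 1001
set bridge-domains BD-1001 vxlan multicast-group 239.1.1.1   # BUM flooding over the multicast underlay
set interfaces irb unit 0 family inet address 10.1.1.1/24    # tenant's L3 gateway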
NETWORK DEVICES IN THE DATA CENTER
The USG (Universal SDN Gateway) must connect four classes of devices:
• L4-7 appliances: firewalls, load balancers, NAT, intrusion detection, VPN concentrators
• Bare metal servers: databases, HPC, legacy apps, non-x86, IP storage
• Virtualized servers: ESX, ESXi, Hyper-V, KVM, Xen
• SDN servers: NSX ESXi, NSX KVM, SC Hyper-V, Contrail KVM, Contrail Xen
USG (UNIVERSAL SDN GATEWAY)
Introducing four new options for SDN enablement:
• Layer 2 USG (SDN to IP, Layer 2): SDN-to-non-SDN translation, same IP subnet
• Layer 3 USG (SDN to IP, Layer 3): SDN-to-non-SDN translation, different IP subnet
• SDN USG (SDN to SDN): SDN-to-SDN translation, same or different IP subnet, same or different overlay
• WAN USG (SDN to WAN): SDN-to-WAN translation, same or different IP subnet, same or different encapsulation; reaches remote data centers, branch offices, and the Internet
USGs INSIDE THE DATA CENTER
Using Layer 2 USGs to bridge between devices that reside within the same IP subnet, e.g. a VXLAN SDN pod bridged to legacy pods over native IP L2:
1. Bare metal servers such as high-performance databases, non-x86 compute, IP storage, and non-SDN VMs
2. Layer 4-7 services such as load balancers, firewalls, application delivery controllers, and intrusion detection/prevention gateways
USGs INSIDE THE DATA CENTER
Using Layer 3 USGs to route between devices that reside in different IP subnets, e.g. a VXLAN SDN pod routed to legacy pods over native IP L3:
1. Bare metal servers such as high-performance databases, non-x86 compute, IP storage, and non-SDN VMs
2. Layer 4-7 services such as load balancers, firewalls, application delivery controllers, and intrusion detection/prevention gateways
USGs INSIDE THE DATA CENTER
Using SDN USGs to communicate between islands of SDN, e.g. a VXLAN pod joined to an MPLS-over-GRE pod:
1. NSX to NSX: risk, scale, change control, administration
2. NSX to Contrail: multi-vendor, migrations
USGs FOR REMOTE CONNECTIVITY
Using WAN USGs to communicate with resources outside the local data center:
1. Data center interconnect: SDN to [VPLS, EVPN, L3VPN] (e.g., EVPN to an NSX SDN pod in Data Center 2)
2. Branch offices: SDN to [GRE, IPsec]
3. Internet: SDN to IP (Layer 3)
UNIVERSAL GATEWAY SOLUTIONS
(Composite diagram: in Data Center 1, the four USGs connect the local VXLAN SDN pod to legacy pods and L4-7 services over native IP L2/L3, to a Contrail SDN pod over VXLAN, to an NSX SDN pod in Data Center 2 over MPLS-over-GRE and EVPN, to branch offices over GRE, and to the Internet over native IP L3.)
USG COMPARISONS
• Layer 2 USG: SDN-to-non-SDN translation, same IP subnet. Use case: NSX or Contrail talks Layer 2 to non-SDN VMs, bare metal, and L4-7 services. Platforms: QFX5100, MX Series/EX9200, x86 appliances, competing ToRs, competing chassis.
• Layer 3 USG: SDN-to-non-SDN translation, different IP subnet. Use case: NSX or Contrail talks Layer 3 to non-SDN VMs, bare metal, L4-7 services, and the Internet. Platforms: MX Series/EX9200, x86 appliances.
• SDN USG: SDN-to-SDN translation, same or different IP subnet, same or different overlay. Use case: NSX or Contrail talks to other pods of NSX or Contrail. Platform: MX Series/EX9200.
• WAN USG: SDN-to-WAN translation, same or different IP subnet. Use case: NSX or Contrail talks to other remote locations (branch, DCI). Platform: MX Series/EX9200.
EVPN (Ethernet VPN)
Next-generation technology for connecting multiple data
centers and providing seamless workload mobility
PRE-EVPN: LAYER 2 STRETCH BETWEEN DATA CENTERS
Without EVPN:
• Data plane: only one path can be active at a given time; the remaining links are put into standby mode
• Control plane: Layer 2 MAC tables are populated via the data plane (as on a traditional L2 switch), and out-of-sync MAC tables cause flooding of packets across the WAN

Router 1's MAC table (Data Center 1):  MAC AA, VLAN 10, interface xe-1/0/0.10
Router 2's MAC table (Data Center 2):  MAC BB, VLAN 10, interface xe-1/0/0.10

(Diagram: Server 1, MAC AA, on VLAN 10 behind Router 1; Server 2, MAC BB, on VLAN 10 behind Router 2. Each router's xe-1/0/0.10 interface faces its servers and its ge-1/0/0.10 interfaces face the private MPLS WAN, where only one path forwards.)
POST-EVPN: LAYER 2 STRETCH BETWEEN DATA CENTERS
With EVPN:
• Data plane: all paths are active, and inter-data center traffic is load-balanced across all WAN links
• Control plane: Layer 2 MAC tables are populated via the control plane (similar to QFabric), eliminating flooding by keeping MAC tables synchronized between all EVPN nodes

Router 1's MAC table (Data Center 1):  AA, VLAN 10, xe-1/0/0.10;  BB, VLAN 10, ge-1/0/0.10
Router 2's MAC table (Data Center 2):  BB, VLAN 10, xe-1/0/0.10;  AA, VLAN 10, ge-1/0/0.10

(Diagram: the same topology over the private MPLS WAN, now with both ge-1/0/0.10 WAN paths forwarding.)
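A minimal MX-style sketch of the EVPN instance behind this behavior (instance name, RD, and community values are illustrative):

set protocols bgp group WAN family evpn signaling
set routing-instances EVPN-VLAN10 instance-type evpn
set routing-instances EVPN-VLAN10 vlan-id 10
set routing-instances EVPN-VLAN10 interface xe-1/0/0.10
set routing-instances EVPN-VLAN10 route-distinguisher 192.0.2.1:10
set routing-instances EVPN-VLAN10 vrf-target target:65000:10
set routing-instances EVPN-VLAN10 protocols evpn

Locally learned MACs are advertised as BGP EVPN routes, which is how the remote router's MAC table stays synchronized.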
VMTO (VM MOBILITY TRAFFIC OPTIMIZER)
Creating the most efficient network paths for mobile workloads.

THE NEED FOR L2 LOCATION AWARENESS
(Diagram: DC1 and DC2 both carry VLAN 10 across a private MPLS WAN. Without VMTO, inter-DC traffic hairpins through whichever DC holds the active gateway; with VMTO enabled, it takes the direct path.)
WITHOUT VMTO: THE EGRESS TROMBONE EFFECT
(Diagram: Server 1, 20.20.20.100/24, on VLAN 20 in DC 1; Server 2, 10.10.10.100/24, on VLAN 10 in DC 2, which hosts the active VRRP gateway, DG 10.10.10.1; Server 3, 10.10.10.200/24, on VLAN 10 in DC 3 with a standby VRRP gateway; all three DCs joined by a private MPLS WAN.)

Task: Server 3 in Data Center 3 needs to send packets to Server 1 in Data Center 1.
Problem: Server 3's active default gateway for VLAN 10 is in Data Center 2.
Effect:
1. Traffic must travel via Layer 2 from Data Center 3 to Data Center 2 to reach VLAN 10's active default gateway.
2. The packet must reach the default gateway before it can be routed towards Data Center 1. This results in duplicate traffic on WAN links and suboptimal routing, hence the "egress trombone effect."
WITH VMTO: NO EGRESS TROMBONE EFFECT
(Diagram: the same topology, but every router on VLAN 10 now has an active IRB gateway, DG 10.10.10.1.)

Task: Server 3 in Data Center 3 needs to send packets to Server 1 in Data Center 1.
Solution: virtualize and distribute the default gateway so it is active on every router that participates in the VLAN.
Effect: egress packets can be sent to any router on VLAN 10, so routing happens in the local data center. This eliminates the "egress trombone effect" and creates the optimal forwarding path for inter-DC traffic.
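A minimal sketch of such a distributed gateway on each participating MX (bridge-domain name and addressing are illustrative); every router carries the same IRB gateway address, and EVPN's default-gateway synchronization reconciles the gateway MACs:

set bridge-domains VLAN10 vlan-id 10
set bridge-domains VLAN10 routing-interface irb.10
set interfaces irb unit 10 family inet address 10.10.10.1/24   # same gateway IP on every DC router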
WITHOUT VMTO: THE INGRESS TROMBONE EFFECT

DC 1's edge router table without VMTO:
Route        Mask  Cost  Next hop
10.10.10.0   /24   5     Data Center 2
10.10.10.0   /24   10    Data Center 3

Task: Server 1 in Data Center 1 needs to send packets to Server 3 in Data Center 3.
Problem: Data Center 1's edge router prefers the path to Data Center 2 for the 10.10.10.0/24 subnet; it has no knowledge of individual host IPs.
Effect:
1. Traffic from Server 1 is first routed across the WAN to Data Center 2, due to the lower-cost route for the 10.10.10.0/24 subnet.
2. The edge router in Data Center 2 then sends the packet via Layer 2 to Data Center 3.

(Diagram: Server 1, 20.20.20.100/24, on VLAN 20 in DC 1; Server 2, 10.10.10.100/24, and Server 3, 10.10.10.200/24, on VLAN 10 in DC 2 and DC 3, over the private MPLS WAN.)
WITH VMTO: NO INGRESS TROMBONE EFFECT

DC 1's edge router table with VMTO:
Route          Mask  Cost  Next hop
10.10.10.0     /24   5     Data Center 2
10.10.10.0     /24   10    Data Center 3
10.10.10.100   /32   5     Data Center 2
10.10.10.200   /32   5     Data Center 3

Task: Server 1 in Data Center 1 needs to send packets to Server 3 in Data Center 3.
Solution: in addition to the 10.10.10.0/24 summary route, the data center edge routers advertise host routes representing the location of local servers.
Effect: ingress traffic destined for Server 3 is sent directly across the WAN from Data Center 1 to Data Center 3. This eliminates the "ingress trombone effect" and creates the optimal forwarding path for inter-DC traffic.
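A minimal sketch of the host-route advertisement on a DC edge router (policy and group names are illustrative): the policy matches /32s inside the server subnet and exports them alongside the summary.

set policy-options policy-statement ADVERTISE-HOSTS term HOSTS from route-filter 10.10.10.0/24 prefix-length-range /32-/32
set policy-options policy-statement ADVERTISE-HOSTS term HOSTS then accept
set protocols bgp group WAN export ADVERTISE-HOSTS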
NETWORK DIRECTOR
Smart network management from a single pane of glass: Network Director manages both physical and virtual networks and exposes an API on top.
• Visualize: physical and virtual visualization
• Analyze: smart and proactive networks
• Control: lifecycle and workflow automation
CONTRAIL SDN CONTROLLER: OVERLAY ARCHITECTURE
• The SDN controller (JunosV Contrail Controller) is horizontally scalable, highly available, and federated: its configuration, control, and analytics functions cluster internally and federate with other controllers via BGP
• The orchestrator drives the controller through a REST API
• Control nodes program each virtualized server over XMPP: the server runs the KVM hypervisor plus the JunosV Contrail vRouter/agent (L2 & L3), which forwards for the tenant VMs
• Gateway routers (Juniper MX or third-party) peer with the controller via BGP + NETCONF
• Everything rides an IP fabric underlay of Juniper QFabric/QFX/EX or third-party switches
METAFABRIC ARCHITECTURE: WHAT WILL IT ENABLE?
Simple, open, and smart: accelerated time to value, and increased value over time.
www.juniper.net/metafabric
THANK YOU