
Juniper SP Products Update
Ivan Lysogor
4th September 2015
MARKET & CHALLENGES
• Margin pressure
• Long lead times
• Operational complexity
Successful business requires Velocity - Agility - Continuity
FULL PORTFOLIO COVERAGE
[End-to-end network diagram: ACX in access and pre-aggregation, MX in aggregation, edge and EPC, PTX in the core/backbone, plus vMX, NorthStar Controller and Contrail Controller, connecting mobile, home, branch, HQ, metro and CPE sites]
Solution requirements
• Solution cost
- Port cost
- Maintenance cost
- Upgrade cost
• Reliability
• New service rollout speed
MX Product Update
MX Portfolio Overview
One TRIO Architecture - One UNIVERSAL EDGE
• vMX: N x 10 Gbps
• MX104: 80 Gbps
• MX240: 520 Gbps
• MX480: 1560 Gbps
• MX960: 2860 Gbps
• MX2010: 4800 Gbps
• MX2020: 9600 Gbps
* Current full duplex capacity is shown
MPC5E 24 x 10GE or 6 x 40GE
Description
240 Gbps line card with flexible 10GE/40GE interface configuration options, increased scale and OTN support
Interface Features
• Interface combinations:
• 24 x 10GE (MIC0 and MIC1)
• 6 x 40GE (MIC2 and MIC3)
• 12 x 10GE and 3 x 40GE (MIC0 and MIC3)
• 12 x 10GE and 3 x 40GE (MIC1 and MIC2)
• Port queues with optional 32K queues upgrade license
• 1M queues option
Applications and Scale
• Up to 10M IP Routes (in hardware)
• Full scale L3VPN and VPLS
• Increased Inline IPFIX Scale
[Card block diagram: MIC0-MIC3 QSFP ports connected to the XM/XL/XQ chipset]
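As an aside, a minimal sketch (Python, using only figures from this slide) checking that each of the interface combinations listed above adds up to the 240 Gbps line-card capacity; the 2x100GE + 4x10GE variant on the next slide lands on the same budget (2x100 + 4x10):

```python
# Hedged sketch: verify the MPC5E MIC combinations against the 240 Gbps card capacity.
# Port counts are taken from the slide; everything else is illustrative.

CARD_CAPACITY_GBPS = 240

combinations = {
    "24 x 10GE (MIC0 + MIC1)":            {"10GE": 24, "40GE": 0},
    "6 x 40GE (MIC2 + MIC3)":             {"10GE": 0,  "40GE": 6},
    "12 x 10GE + 3 x 40GE (MIC0 + MIC3)": {"10GE": 12, "40GE": 3},
    "12 x 10GE + 3 x 40GE (MIC1 + MIC2)": {"10GE": 12, "40GE": 3},
}

for name, ports in combinations.items():
    total = ports["10GE"] * 10 + ports["40GE"] * 40
    assert total == CARD_CAPACITY_GBPS, name
    print(f"{name}: {total} Gbps")
```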
MPC5E 2 x 100 GE and 4 x 10GE
Description
240 Gbps line card providing 100GE and 10GE connectivity, increased scale and OTN support
Interface Features
• Interfaces:
• 4 x 10GE SFP+
• 2 x 100GE CFP2
• Port queues with optional 32K queues upgrade license
• 1M queues option
Applications and Scale
• Up to 10M IP Routes (in hardware)
• Full scale L3VPN and VPLS
• Increased Inline IPFIX Scale
[Card block diagram: CFP2 and SFP+ MICs connected to the XM/XL/XQ chipset]
MPC6E Overview
Description
480 (520) Gbps modular line card for MX2K platform, increased scale and performance
Interface Features
• Interface cards supported:
• 2x100G CFP2 w/ OTN (OTU4)
• 4x100G CXP
• 24x10G SFP+
• 24x10G SFP+ w/ OTN (OTU2)
• Port-based queueing
• Limited-scale per-VLAN queueing
Applications and Scale
• Up to 10M IP Routes
• Full scale L3VPN and VPLS
• Increased Inline IPFIX Scale
Routing system upgrade scenario
Highlights
• Protects investments into hardware (SFBs)
• Reduces software qualification efforts
• No JUNOS upgrade required; driver installation is in service
Upgrade path:
1. MPC6 installed, 480 Gbps per slot
2. Driver package installed, with Continuity support
3. New higher-density MPCs added; 8 x SFB, 800 Gbps per slot; may upgrade to a new fabric or use the same one
4. Future router: happy network engineers working on something else (probably SDN-related)
MX Data Center Gateway
[Diagram: Internet - DC Gateway (EVPN) - DC Fabric (VXLAN)]
• 13.2R1 First implementation (MPLS encapsulation)
• 14.1R2 VM mobility support
• 14.1R4 Active/active support
• 14.1R4 VXLAN encapsulation
• 14.1R4 VMware NSX integration
EVPN Advantages
• Link Efficiency: all-active forwarding with built-in L2 loop prevention
• Convergence: leading high availability, convergence and fast-reroute capabilities
• L2 & L3 Layers Tie-In: built into the protocol
• Optimal Routing: ingress and egress VM mobility optimizations (MAC-mobility sketch below)
[Diagram: virtual machine mobility between DC fabrics through DC gateways over MPLS / IP]
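To make the VM-mobility advantage concrete, here is a minimal sketch (Python, not Junos code) of the EVPN MAC-mobility idea from RFC 7432 that underpins it: when a moved VM's MAC/IP route is re-advertised with a higher mobility sequence number, remote gateways switch their forwarding entry to the new egress PE. The route fields and gateway names here are illustrative, not the MX implementation.

```python
# Hedged sketch of EVPN MAC mobility (RFC 7432 style), not the MX feature itself.
# A remote gateway keeps the Type-2 route with the highest mobility sequence
# number for each MAC address.

from dataclasses import dataclass

@dataclass
class MacIpRoute:                 # simplified EVPN Type-2 route
    mac: str
    next_hop_pe: str              # egress PE / DC gateway
    mobility_seq: int = 0         # MAC Mobility extended community

class EvpnMacTable:
    def __init__(self):
        self.best = {}            # mac -> best MacIpRoute seen so far

    def receive(self, route: MacIpRoute):
        current = self.best.get(route.mac)
        # Prefer the advertisement with the higher mobility sequence number.
        if current is None or route.mobility_seq > current.mobility_seq:
            self.best[route.mac] = route

table = EvpnMacTable()
table.receive(MacIpRoute("00:aa:bb:cc:dd:ee", "gw-dc1", mobility_seq=0))
# VM moves to DC2; its MAC is re-advertised with an incremented sequence number.
table.receive(MacIpRoute("00:aa:bb:cc:dd:ee", "gw-dc2", mobility_seq=1))
assert table.best["00:aa:bb:cc:dd:ee"].next_hop_pe == "gw-dc2"
```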
PTX Update
PTX Series Routers
PTX 1000: Distributed Converged Supercore Power
Performance (Powered by ExpressPlus)
• 2.88Tb Distributed Core Router
• Flexible 288x10GbE, 72x40GbE, 24x100GbE
• Combining Full IP with Express MPLS
Deployability
• Industry's Only Fixed Core Router
• Only 2RU and 19" Rack Mountable
OS + SDN
• JUNOS: 15 Years of Routing Innovations
• SDN: 25 Years Perfecting IP/MPLS-TE Traffic Optimization Algorithms
* Counting bits in & bits out.
PTX Series Routers
PTX Product Family of Routers: Technical Specifications
PTX1K | PTX3K | PTX5K
Slot Capacity at FRS: Fixed 2.88 Tbps per system | 1T/slot | 3T/slot
System Capacity: 2.88 Tbps (288x10GE) | 8T (80x100GE) | 24T (240x100GE)
Power (Typical): ~1.35 kW | ~6 kW | 13 kW*
Power (Maximum): 1.65 kW | ~7.2 kW | 18 kW*
Height: 2 RU | 22 RU | 36 RU
Depth: 31" | 270 mm | 33"
No. of FPCs/PICs: N/A | 8/8 | 8/16
Type of FPCs supported: N/A | SFF-FPC | FPC1/FPC2/FPC3
100GE Density: 24 | 80 | 240
10GE Density: 288 | 768 | 1536
Timing: 2HCY15 | 2HCY15 | 2HCY15
vMX introduction
Physical vs. Virtual
Each option has its own strengths and is designed with a different focus.
Physical | Virtual
High throughput, high density | Flexibility to reach higher scale in control plane and service plane
Guarantee of SLA | Agile, quick to start
Low power consumption per throughput | Low power consumption per control plane and service
Scale up | Scale out
Higher entry cost in $ and longer time to deploy | Lower entry cost in $ and shorter time to deploy
Distributed or centralized model | Optimal in centralized, cloud-centric deployments
Well-developed network mgmt systems, OSS/BSS | Same platform mgmt as physical, plus same VM mgmt as software on a server in the cloud
Variety of network interfaces for flexibility | Cloud-centric, Ethernet-only
Excellent price per throughput ratio | Ability to apply a "pay as you grow" model
vMX overview
• Efficient separation of control and data plane
• Data packets are switched within vTRIO
• Multi-threaded SMP implementation allows core elasticity
• Only control packets are forwarded to JUNOS
• Feature parity with JUNOS (CLI, interface model, service configuration)
• NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0); see the naming sketch below
[Diagram: VCP guest OS (JUNOS) running RPD, DCD, CHASSISD and SNMP; VFP guest OS (Linux) running vTRIO and the LC kernel; an internal TCP connection between the two; both guests on a hypervisor over x86 hardware]
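As a purely hypothetical naming illustration of the last bullet (not vMX code), host NICs handed to the forwarding plane show up inside JUNOS under the familiar ge-0/0/N naming:

```python
# Hypothetical illustration of how host NICs could map onto JUNOS-style interface
# names in vMX (eth0 -> ge-0/0/0, eth1 -> ge-0/0/1, ...). Naming scheme only;
# this is not vMX code.

def junos_interface_name(host_nic: str) -> str:
    """Map a Linux NIC name like 'eth3' to a JUNOS name like 'ge-0/0/3'."""
    if not host_nic.startswith("eth") or not host_nic[3:].isdigit():
        raise ValueError(f"unexpected NIC name: {host_nic}")
    port = int(host_nic[3:])
    return f"ge-0/0/{port}"

for nic in ["eth0", "eth1", "eth2"]:
    print(nic, "->", junos_interface_name(nic))
```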
vMX Use Cases
General considerations for vMX deployment
• vMX behaves the same way as the physical MX, so it can serve as a virtual CPE, virtual PE, virtual BNG, … (*Note-1)
• Great option as lab equipment for general qualification or function validation
• In cloud or NFV deployment, when virtualization is the preferred technology
• Fast service enablement without overhead of installing new HW
• Solution for scaling services when control plane scale is the bottleneck
• When network functions are centralized in DC or cloud
• When service separation is preferred by deploying different routing platforms
Note-1: Please see product feature and performance details in later slides; also see other comments on where vMX is the more suitable choice.
Agility example: Bring up a new service in a POP
1. Install a new vMX to start offering a new service without impact to the existing platform
2. Scale out the service with vMX quickly if the traffic profile fits the requirements
3. Add the service directly to the physical MX GW, or add more physical MX routers, if the service is successful and there is more demand with significant traffic growth
4. Integrate the new service into the existing PE when the service is mature
[Diagram: POP with two vMX instances and a physical MX acting as PEs toward L3 CPEs across the SP network for VPN service]
Proof of concept lab validation or SW certification
• CAPEX or OPEX reduction for lab validation or network POC
vCPE solution with vMX as the SDN Gateway
• vMX as SDN GW router, providing support for BGP and overlay tunneling protocols
• vMX also addresses the VRF scaling issue for L3 service chaining
• vCPE service: virtualized services (NAT, firewall, DPI, caching) for VPN customers, applied before internet access or between VPN sites
[Diagram: L3 CPEs in New York, New Jersey, Chicago, Honolulu, Las Vegas and LA attached to PEs across the SP network for VPN service; vMX SDN gateways in data centers stitch service chains (Service-chain-X1, Service-chain-Y1) into the per-site VRFs (VPN-starbucks-core, -NY, -LA, -Hawaii)]
Virtual Route Reflector
Virtual RR on VMs, on standard servers
[Diagram: Virtual RR peering with Client 1 ... Client n]
• vMX can be used as a Route Reflector and deployed the same way as a physical RR in the network
• vMX can act both as a vRR and as any typical router function with forwarding capability
Virtual BNG cluster in a data center
vMX as vBNG
[Diagram: BNG cluster of vMX instances in a data center or CO serving 10K~100K subscribers]
• Potentially the BNG function can be virtualized, and vMX can help form a BNG cluster at the DC or CO (roadmap item, not at FRS)
• Suitable for heavy-load BNG control-plane work where little bandwidth is needed
• Pay-as-you-grow model
• Rapid deployment of a new BNG router when needed
• Scale-out works well due to the S-MPLS architecture, leveraging inter-domain L2VPN, L3VPN and VPLS
vMX Product Details
vMX Components
VCP (Virtualized Control Plane)
• Virtual JUNOS hosted on a VM
• Follows standard JUNOS release cycles
• Additional software licenses for different control-plane applications such as Virtual Route Reflector
VFP (Virtualized Forwarding Plane)
• VM that runs the packet forwarding engine, which is modeled after the Trio ASIC
• Can be hosted on a VM (offered at FRS) or run as a Linux container (bare metal) in the future
vMX system architecture
• Optimized data path from physical NIC to vNIC via SR-IOV (Single Root I/O Virtualization)
• vSwitch for VFP-to-VCP communication (internal host path)
• OpenStack (Icehouse) for VM management (Nova) and provisioning of infrastructure network connections (Neutron)
[Diagram: VCP guest VM (FreeBSD) and VFP guest VM (Linux + DPDK) with virtual NICs, attached via SR-IOV and the hypervisor vSwitch to the physical layer: cores, physical NICs, memory]
Product Offering at FRS
FRS Product Offering
• FRS at Q2 2015 with JUNOS release 14.1R4
• Function: feature parity with MX, except functions related to HA and QoS
• Performance: SR-IOV with PCI pass-through, along with DPDK integration
• Hypervisor support: KVM
• VM implementation: VFP-to-VCP 1:1 mapping
• OpenStack integration (to be finalized)
[Stack diagram: VFP and VCP are the Juniper deliverable; hypervisor/Linux, NIC drivers and DPDK, and the server, CPU and NIC are customer defined]
Reference server configuration
CPU: Intel Xeon 3.1 GHz
Cores: minimum 10 cores
RAM: 20 GB
OS: Ubuntu 14.04 LTS (with libvirt 1.2.2; for better performance, upgrade to 1.2.6)
Kernel: Linux 3.13.0-32-generic
libvirt: 1.2.6
NICs: Intel 82599EB (for 10G)
QEMU-KVM: version 2.0
Note: Initially requires a minimum of 10 cores: 1 for the RE VM, 7 for the PFE VM (4 packet-processing cores, 2 I/O cores, 1 for host0if processing), and 2 cores for the RE and PFE emulations (QEMU/KVM); a later version will have a smaller footprint, requiring fewer cores or less RAM. The core budget is tallied in the sketch below.
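A minimal sketch (Python) that simply tallies the core budget from the note above; the breakdown mirrors the slide and nothing else is assumed:

```python
# Core budget for the FRS reference configuration, per the note above.
# Pure arithmetic check; the dictionaries only mirror the slide's breakdown.

pfe_vm = {
    "packet processing": 4,
    "I/O": 2,
    "host0if processing": 1,
}

allocation = {
    "RE VM": 1,
    "PFE VM": sum(pfe_vm.values()),          # 7 cores
    "RE/PFE emulation (QEMU/KVM)": 2,
}

total = sum(allocation.values())
print(allocation, "total =", total)           # total = 10 (minimum core count)
assert total == 10
```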
Performance
Test setup
[Test setup diagram: traffic tester connected to a single vMX instance over six 10G ports, with 8.9G of traffic shown per port in each direction]
• Setup:
  • Single instance of vMX with 6 ports of 10G sending bidirectional traffic
  • 16 cores total (among those, 6 for I/O, 6 for packet processing)
  • 20G RAM total, 16G memory for the vFP process
  • Basic routing is enabled, no filter configured
• Performance:
  • 60G bidirectional traffic per vMX instance @ 1500 bytes (packet-rate arithmetic below)
  • No packet loss
  • Complete RFC 2544 results to follow
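For context, a small sketch (Python) converting the quoted 60 Gbps of 1500-byte traffic into an approximate aggregate packet rate; ignoring Ethernet overheads (preamble, inter-frame gap, FCS) is a simplifying assumption, so this is not the tester's exact math:

```python
# Rough packet-rate arithmetic for the quoted result: 60 Gbps bidirectional
# at 1500-byte packets. Overheads are ignored, so this is an approximation.

THROUGHPUT_GBPS = 60          # bidirectional, per vMX instance (from the slide)
PACKET_BYTES = 1500

bits_per_packet = PACKET_BYTES * 8
packets_per_second = THROUGHPUT_GBPS * 1e9 / bits_per_packet

print(f"~{packets_per_second / 1e6:.1f} Mpps aggregate")   # ~5.0 Mpps
```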
Thank you