Brocade z13 RoCE SMCR High level Presentation


Shared Memory Communications – RDMA (SMC-R)
Utilizing the 10GbE RoCE Express and Brocade VDX
Agenda
• Introduction: Why RoCE on z Systems?
• RDMA, RoCE and SMC-R basics
• SMC-R with RoCE on z Systems
• Brocade VCS Fabric Technology benefits for SMC-R
• Examples and Performance Summary
CPU Savings (cost reduction) with response time improvements
• SMC-R can lead to substantial CPU savings (lower costs)
• SMC-R also simultaneously improves response times
Lower costs with improved performance
Why SMC-R (RoCE) on z Systems?
CPU Savings and application response time improvement
• Significant response time improvements and CPU savings for CICS transactions using Distributed Program Link (DPL) when using SMC-R vs. standard TCP/IP
‒ Up to 48% reduction in response time and 10% CPU savings
• Significant overall transaction response time improvement for WebSphere Application Server (WAS) accessing DB2 in another system
‒ 40% reduction vs. standard TCP/IP
• Brocade VCS Fabric technology further enhances this performance
Part 1: RDMA, RoCE, SMC-R and z Systems
RDMA
Remote Direct Memory Access (RDMA)
• Allows a host to write or read memory from a remote host without involving the remote host's CPU and operating system (OS).
• Bypasses OS layers and many communications protocol layers that are otherwise required for communication between applications.
• Reduces software overhead, providing for high-throughput, low-latency networking.
RoCE - RDMA over Converged Ethernet
• RDMA-based technology has been available in the industry for many years, primarily based on InfiniBand (IB)
‒ RDMA technology provides the capability to allow hosts to logically share memory
‒ InfiniBand requires a completely unique network ecosystem (unique hardware such as host adapters, switches, host application software, system management software/firmware, security controls, etc.) – IB is common in the HPC market
• RDMA technology is now available on Ethernet – RDMA over Converged Ethernet (RoCE)
RoCE
RDMA over Converged Ethernet (standard)
• RDMA protocol over Ethernet
• Uses a RoCE Network Interface Adapter (RNIC) and Layer 2 switches with IEEE Converged Enhanced Ethernet (CEE) capability.
• Provides low-latency, high-bandwidth, high-throughput, low-processor-utilization data transfer between hosts.
• Direct RoCE port to RoCE port (no switch) is possible, but not recommended by IBM.
• Switches must support the global pause frame (IEEE 802.3x); see the sketch below.
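As an illustrative sketch only (the interface name 1/0/5 is an example; verify the commands against your Network OS release), enabling IEEE 802.3x flow control on a VDX host-facing port looks like this:

  configure terminal
   interface TenGigabitEthernet 1/0/5
    ! Pause in both directions so the switch can pause the RoCE
    ! sender instead of dropping frames under congestion.
    qos flowcontrol tx on rx on
    no shutdown

Within a VCS fabric, per-priority flow control via the default CEE map (covered later in this deck) is the preferred way to make RoCE traffic lossless.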
Shared Memory Communications – Remote Direct Memory Access (SMC-R) Definition
• Shared Memory Communications – Remote Direct Memory Access (SMC-R) is a new communication protocol aimed at providing transparent acceleration for sockets-based TCP/IP applications and middleware
‒ Remote Direct Memory Access (RDMA) technology provides low-latency, high-bandwidth, high-throughput, low-processor-utilization attachment between hosts
‒ SMC-R utilizes RDMA over Converged Ethernet (RoCE) as the physical transport layer
• SMC-R is built on the following concepts:
‒ RDMA enablement of the communications fabric
‒ Partitioning a part of OS host real memory into buffers and using RDMA technology to access this memory
‒ Establishing an 'out of band' connection over which data is passed to the partner peer using RDMA writes and signaling
SMC-R
Shared Memory Communications over RDMA
• Sockets-over-RDMA communication protocol that allows existing TCP applications to transparently benefit from RoCE.
• Requires no application changes.
• Provides host-to-host direct memory access without the traditional TCP/IP processing overhead.
• z/OS V2R1 includes SMC-R support.
• SMC-R is only used over 10GbE RoCE Express features to a partner z/OS V2R1 system.
SMC-R Additional Benefits
• Load balancing
‒ The first application's data is sent over one RoCE pair between the two hosts
‒ The second application's data would be sent over the second RoCE pair between the two hosts
• High availability
‒ If the connection between the first RoCE pair fails, all the sessions using the first RoCE pair transparently move to the second RoCE pair.
SMC-R Additional Benefits (continued)
• Provides high availability and load balancing when redundant network hardware paths are available.
• Introduces minimal administrative and operational changes.
• Provides dynamic discovery of partner RDMA capabilities and dynamic setup of RDMA connections over RoCE.
The Network
• A single Layer 2 (no IP router) 10GbE network is required.
‒ RoCE provides low-latency, high-bandwidth, high-throughput, low-processor-utilization data transfer between hosts by avoiding TCP/IP capabilities such as IP routing.
• Both partners must be in the same IP subnet (no IP router between them).
OSA channels (Open Systems Adapter)
SMC-R does require OSA channels
• SMC-R still requires the OSA TCP connection.
• SMC-R uses the TCP connection to determine eligibility for RoCE and to build the point-to-point SMC-R link.
‒ The actual data traffic is then sent over the SMC-R link.
• The TCP session is also used for keepalive and to terminate both the RDMA and TCP connections when the session ends.
• OSA channels can be connected to the same switches as the RNICs.
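A hedged way to verify the resulting setup is the z/OS Netstat DEVLINKS report, which shows both the OSD interfaces and the RNICs (the SMC modifier and exact output vary by z/OS level, and the omitted stack name in the operator form assumes a single stack):

  D TCPIP,,NETSTAT,DEVLINKS    (operator console: interface status, including the RNICs)
  NETSTAT DEVLINKS SMC         (TSO: SMC-R link and link group detail)

The TCP connection itself remains visible in the normal Netstat connection displays even while its data flows over the SMC-R link.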
Recommended configuration
Minimum of two RoCE features and minimum of two switches
• Once a session has been switched to SMC-R, it cannot fall back to the TCP/IP OSA path.
• If a failure occurs on the RoCE connection and an alternate RoCE path exists, the active sessions on the original path will transparently move to the alternate path.
• If a failure occurs on the RoCE connection and an alternate RoCE path is not available, then all active sessions will fail.
‒ New sessions will not switch to SMC-R but will flow over the TCP/IP OSA path instead.
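On z/OS, this pairing of features is reflected in the TCP/IP profile: SMC-R is enabled on the GLOBALCONFIG statement by listing the PCIe function IDs (PFIDs) of the RoCE Express features. A minimal sketch, assuming two features that HCD defined as PFIDs 0018 and 0019 (the values are examples):

  GLOBALCONFIG SMCR PFID 0018 PORTNUM 1
                    PFID 0019 PORTNUM 1

With the two PFIDs on two physical features, each cabled to a different switch, the stack can build the redundant SMC-R link groups described on the next slide.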
SMC-R Link Group
• Two SMC-R links, using different RNICs, between two peers are logically grouped into an SMC-R link group.
• This provides redundancy and load balancing.
‒ If one link fails, all active connections are automatically and dynamically moved to the other link without interruption.
‒ After recovery of the failed link, no connections are moved back.
‒ All new connections are set up over the recovered link, and load balancing is restored.
Application use cases for SMC-R
• Application servers such as the z/OS WebSphere Application Server communicating (via TCP-based communications) with CICS, IMS, or DB2 – particularly when the application is network-intensive and transaction-oriented
• Transactional workloads that exchange larger messages (e.g., web services such as WAS to DB2 or CICS) will see benefit
• Applications that use z/OS-to-z/OS TCP-based communications using Sysplex Distributor
z13 SMC-R advantages
• Allows concurrent sharing of a RoCE Express feature by multiple virtual servers (OS instances)
‒ Up to 31 virtual servers (OS instances: LPARs or second-level guests under z/VM) can share a single feature
• Support for up to 16 RoCE Express features per zCPC
• Enables concurrent use of both RoCE Express ports by z/OS (SMC-R)
• For high availability, each OS instance requires access to two unique (physical) features
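This sharing works because each RoCE Express feature is a native PCIe adapter virtualized into PCIe functions, each identified by a function ID (FID). As a rough, illustrative IOCDS sketch only (every operand here – FID, VF, PCHID, partition names, and TYPE – is an assumption; define the real functions through HCD and your IOCP documentation), two LPARs sharing one physical feature might look like:

  FUNCTION FID=018,VF=1,PCHID=140,PART=((LPARA)),TYPE=ROCE
  FUNCTION FID=019,VF=2,PCHID=140,PART=((LPARB)),TYPE=ROCE

The FIDs defined here are the PFID values that the z/OS TCP/IP profile references.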
z13: 10GbE RoCE Express Sample Configuration
On z13, each 10GbE RoCE Express (FC 0411) can support up to 31 logical partitions. (Two or more features for each server are recommended.)
[Diagram: two z13 CPCs, each with redundant RoCE Express features and OSA/OSD adapters, attached through a pair of Brocade VDX switches. LPARs run z/OS V2.1 natively and as guests under z/VM V6.3.]
• This configuration allows redundant SMC-R connectivity among LPAR A, LPAR C, LPAR 1, LPAR 2, and LPAR 3
• LPAR-to-LPAR OSD connections are required to establish the SMC-R communications
‒ 1 GbE OSD connections can be used instead of 10 GbE
‒ OSD connections can flow through the same 10 GbE switches or different switches
‒ z13 exclusive: simultaneous use of both 10 GbE ports on 10 GbE RoCE Express features
Measuring CPU usage
• TCP/IP address space CP usage in the time interval is shown in:
‒ RMF Report Class for TCP/IP
‒ SMF Record Type 30
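As a sketch of pulling that number from dumped SMF data with the RMF post-processor (the data set and report class names are examples; the report class must already be defined in WLM and assigned to the TCP/IP started task):

  //RMFPP    EXEC PGM=ERBRMFPP
  //* Input: dumped SMF records (data set name is an example)
  //MFPINPUT DD   DISP=SHR,DSN=SYS1.SMFDUMP
  //SYSIN    DD   *
    SYSRPTS(WLMGL(RCLASS(TCPIP)))
  /*

Comparing the report class CPU time for intervals before and after enabling SMC-R isolates the TCP/IP address space savings.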
CPU Usage by Active Jobs
Performance benchmarks of SMC-R at distance
• Performance summary
‒ Technology viable even at 100 km distances with DWDM
‒ At 10 km: retains significant latency reduction and increased throughput
‒ At 100 km: large savings in latency and significant throughput benefits for larger payloads; modest savings in latency for smaller payloads
‒ CPU benefits of SMC-R for larger payloads are consistent across all distances
• Use cases for SMC-R at distance
‒ TCP workloads deployed on a Parallel Sysplex spanning sites
‒ Software-based (i.e., TCP-based) replication across sites (disaster recovery), e.g., the InfoSphere Data Replication suite for z/OS
‒ File transfers across z/OS systems in different sites (FTP, Connect:Direct, SFTP, etc.)
‒ Opportunity: lower CPU cost for sending/receiving data while boosting throughput and lowering latency
Part 2: Brocade VCS Fabric Technology and SMC-R
Brocade VCS Fabrics Evolve Data Centers
Continual evolution with VCS fabrics

Efficient – greater network utilization:
• All links fully active
• Multipathing at all layers: Layers 1, 2, and 3
• IP storage-aware

Automated – lower OpEx:
• Automatic provisioning
• Zero-touch VM discovery, Layer 2/Layer 3 configuration, and mobility
• Self-forming trunks
• Manage many switches as a single logical device

Cloud-optimized – faster time to application deployment:
• Multitenancy at scale with the Brocade VCS Virtual Fabric feature
• Scale out non-disruptively
• Orchestrated via OpenStack
• DevOps support
Ethernet Fabrics vs. Legacy Networks

Classic hierarchical architecture (access / aggregation / core):
• Rigid architecture, north-south optimized
• Inefficient link utilization
• Individually managed switches
• VM-ignorant
• No network virtualization

Ethernet fabric architecture (leaf/spine):
• Scale-out
• Automated provisioning
• All links active, L1/2/3 multipathing
• Fabric managed as one logical switch
• VM-aware
• Native and overlay network virtualization
Key VDX Capabilities for SMC-R
Operational automation and efficiency
• Provides an automated RoCE fabric
• Single Logical Chassis for the entire fabric
• Scale-out fabric technology
• Automatic trunk formation
• Per-frame load balancing between switches
• Deep on-chip buffering
• Simplified automation and visibility
Automated RoCE fabric with Brocade VDX
RDMA over Converged Ethernet – the automatic lossless fabric

RoCE is enabled automatically on all fabric ports, providing automatic, end-to-end lossless Ethernet throughout the VCS Ethernet fabric.

RoCE hosts:
• Configure the host's RoCE network adapter to enable DCB Priority Flow Control (DCB PFC) and set it for PFC class 3 (priority 3).
• On the VDX's host-connected interfaces (combined into a single sketch below):
‒ Ensure the CEE map is set to its default with the global command: cee-map default
‒ Disable LLDP with the interface command: lldp disable
‒ Enable the CEE map with the interface command: cee default
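Putting those commands together, a host-facing port configuration is a short sketch (the interface name is an example):

  configure terminal
   cee-map default
   ! verify the CEE map is still the unmodified default, then exit
   interface TenGigabitEthernet 1/0/5
    lldp disable
    cee default
    no shutdown

With the default CEE map applied, PFC protects priority 3, matching the PFC class 3 setting on the host adapters.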
Advanced Flexibility
Address business-critical SLAs with a resilient, high-performance fabric
• Deliver predictable performance and unmatched resiliency for business-critical applications
• Provision network capacity with minimal intervention and virtually no learning curve
• Configure and manage multiple switches in the fabric as a single logical element
• Provide storage-class resiliency with non-disruptive failover after a path or link failure
Auto-Configuration with Logical Chassis
Others vs. Brocade

Others – configuring a LAG (for 2 members). Execute the following commands on one switch:
  configure terminal
  interface port-channel 1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan all
  qos flowcontrol tx on rx on
  mtu 9208
  no shutdown
  interface tengigabitethernet 1/0/5
  channel-group 1 mode active type standard
  no shutdown
  interface tengigabitethernet 1/0/6
  channel-group 1 mode active type standard
  no shutdown
  exit
Repeat the same commands on the other end switch. Total commands: 30.

Brocade – configuring ISL Trunking (for up to 8 members):
• Simplifies Brocade VCS fabric deployment, scalability, and management of the network
• Enables VCS fabric capabilities on each switch (on by default)
• Connect the switches and the fabric automatically forms
‒ Common configuration across all switches; Inter-Switch Link (ISL) trunks auto-form
• Managed as a single logical chassis
Absolutely no configuration required. Total commands: 0.
Eliminate Protocol Exotica
You don't have to be an IP expert

A traditional shared network plus a dedicated IP storage network (Layer 2+3 IP connectivity) means mastering a sprawl of protocols: BGP, MP-BGP, RSVP-TE, OSPF, PIM-SM, IS-IS, EIGRP, RIP, MSDP, X.509, VRRP, IRDP, VPLS, MS-CHAP, MPLS, ISIS-TE, MSTP, IGMP, TACACS+, DVMRP, CIDR, RADIUS, DSCP, LISP, DRR, LLDP, IPsec, STP, RSVP, AAA, WRED, HMAC, sFlow, WFQ, WRR, IPv6, LDP, IKE, L2TP, RSTP, RED, PWE3, SNTP, NTP, MLD, GRE, SHA, DWFQ.

vs. Layer 2 Ethernet fabric connectivity:
• Step 1: Configure addressing and VLANs
• Step 2: Connect switches and automatically form the fabric
• Step 3: Connect hosts and storage
Brocade VCS Logical Chassis—Architecture
Simple and efficient

Virtual IP management:
• Configuration management
• Centralized software upgrade/downgrade and auto-provisioning
• Centralized monitoring and troubleshooting
Brocade Trunking
Frame-based, high-throughput ISL trunking using Brocade ASICs
• Brocade ISL Trunking provides high link utilization and ease of use
‒ Not all 10 GbE ports are alike
• Frame-level, hardware-based trunking at Layer 1
‒ Near 100% link utilization, versus ~60–70% link utilization for 802.3ad LAG groups
‒ Spill-and-fill across links in the trunk group
‒ Single flows can be split across all links
‒ Built into the Brocade ASIC
• ISL trunks form automatically
‒ Once both switches are in VCS mode, multiple ISLs automatically form a trunk
‒ No CLI entries necessary
With 8 links active, frame-based trunking at Layer 1 (Brocade ISL Trunking) sustains roughly 80 Gbps, while flow-based trunking at Layer 2 (802.3ad link aggregation) sustains roughly 50 Gbps.
VDX 6740 & 6940 Buffering
Results of real-world performance of deep on-chip buffering

Device                              | 10GbE egress rate without loss
Broadcom Trident-based competitor   | 4.0 Gbps
Brocade VDX 6740                    | 8.4 Gbps

The VDX 6740/6940 has twice the on-chip buffering of in-class competitive products, allowing absorption of much longer bursts and resulting in greater throughput without loss.
BNA for Storage Stakeholders
Extending operational ownership into IP storage
• Building on the combined dashboard, customizable today
• Ongoing strategy: a combined usability focus via a "Policy Configuration Center" across both FOS and NOS, to include:
‒ Extending MAPS to include NOS
‒ Leveraging dynamic group capabilities from dashboards across MAPS and configuration policies
• Consistent usability focus
Shared concepts: a unified storage dashboard, common troubleshooting navigation, and a common configuration policy UI spanning MAPS and configuration policy.
BNA MAPS Dashboard
Storage Innovation: MAPS
Simplified storage monitoring and alerting

Groups:
• Monitor a group of similar components as one entity
• Use pre-defined groups (SFPs, fans, power supplies)
• Filter network scope to view specific port groups

Policies, rules, actions:
• Apply aggressive, moderate, or conservative policy levels
• Policies based on multiple rules with unique actions
• Automate policy application across fabrics with Brocade Network Advisor integration
• Reduce errors and laborious manual effort

Reporting:
• Monitor storage health and performance in dashboards and reports
• Track the status of each monitored category
• View out-of-range conditions and rules
• Compare policies to identify drift from default
Brocade VDX Switches
Data center fabric building blocks
A complete breadth of portfolio, spanning leaf to spine, ordered by scalability:
• Brocade VDX 6740 family (VDX 6740, VDX 6740T, VDX 6740T-1G): 1 GbE/10 GbE/10GBASE-T optimized, with 40 GbE uplinks
• Brocade VDX 6940 family (VDX 6940-36Q, VDX 6940-144S): 10 GbE/40 GbE optimized, with 100 GbE uplinks
• Brocade VDX 8770 family (VDX 8770-4, VDX 8770-8): 10 GbE/10GBASE-T/40 GbE/100 GbE optimized modular system
Brocade VDX 6740 Switches (VDX 6740, VDX 6740T, VDX 6740T-1G)
Product details

Simplicity and automation:
• Brocade VCS Fabric technology
• Automatic Migration of Port Profiles (AMPP)
• VCS Logical Chassis
• Auto-fabric provisioning

Advanced capabilities:
• Full IP storage support with DCB capabilities
• Auto QoS prioritizes storage traffic
• VCS Virtual Fabric feature supports multitenancy
• VCS Gateway for NSX unifies virtual and physical networks
• SDN-capable (OpenFlow support)
• IPv6 hardware-ready

Leading scalability and performance:
• Fixed 48 1/10 GbE SFP+ ports (VDX 6740) or 48 1/10GBASE-T ports (VDX 6740T), plus 4 40 GbE QSFP+ ports; options up to 64 10 GbE ports
• Fixed 48 1 GbE with a 10 GbE software upgrade option (VDX 6740T-1G)
• Each 40 GbE QSFP+ port can optionally be configured as 4×10 GbE in break-out mode
• High-performance Layer 2/Layer 3 switching
• Industry-leading deep buffers with dynamic buffering
• Increased MACs, VLANs, and port profiles deliver increased scalability
• Single ASIC, non-blocking, cut-through architecture
• Up to 160 Gbps of Brocade ISL Trunking improves switch capacity
• Efficient multilayer multipathing for reliability and elasticity
Brocade VDX Flex Ports
Panel layout for the Brocade VDX 6740
• Flex Ports can be configured individually as Fibre Channel or Ethernet
• The Brocade VDX 6740 is the only converged platform with Gen 5 Fibre Channel support
• Panel layout: console and 1 GbE management ports; 48 10 GbE ports, split between CEE/FC (Flex) ports and CEE-only ports; 4 40 GbE ports
Brocade VDX 6940-36Q and 6940-144S Switches
Industry's highest-density 10/40/100 GbE compact leaf and spine switches

Industry-leading performance:
• Industry's highest-density 10/40 GbE switch in a fixed form factor
‒ Line-rate 144×10 GbE in 1RU: 40 percent higher density than that of the closest competition
‒ Line-rate 36×40 GbE in 1RU: the industry's highest 40 GbE density
‒ Line-rate 96×10 GbE and 12×40 GbE (or up to 4×100 GbE) in 2RU*
• Low-latency, non-blocking, cut-through architecture
• High on-chip buffer with dynamic buffering capability

Advanced capabilities:
• Distributed VXLAN gateway inside the VCS fabric
• Virtual Fabric extension (extend Layer 2 over Layer 3) capability
• Hardware-optimized ISSU for Layer 2, Layer 3, Fibre Channel, and FCoE protocols
• OpenFlow 1.3-capable

[Chart omitted from transcript: plots latency versus buffer depth and 10/40 GbE port density, placing the Brocade VDX 6940 ahead of Cisco, Arista, and Juniper on optimized buffering, latency, and leading port density.]

* 100 GbE will be enabled in a later release.
Brocade VDX 8770 Switch
Product details

Simplicity and automation:
• Brocade VCS Fabric technology
• Automatic Migration of Port Profiles (AMPP)
• Brocade Fabric Watch provides proactive monitoring and notification of critical switch component failures

Leading scalability and performance:
• Supports 1 GbE/10 GbE/40 GbE/100 GbE
• Scales from 12 ports to over 8,000 ports per fabric
• Backplane scales to 4 Tbps per slot
• Best-in-class, 3.5-microsecond any-to-any latency
• Efficient multipathing for reliability and elasticity

Built to last:
• Best-in-class power efficiency
• 100 GbE connectivity
• Hardware-enhanced network virtualization*
• Cloud management via RESTful Application Programming Interfaces (APIs)*

* Hardware-ready; some features to be enabled post-GA.
VDX Mainframe Deployment Example
[Diagram: two IBM z13 systems, each attached to a FICON SAN with DASD and virtual tape, interconnected through VDX 6740 switches carrying RoCE (10GbE) and 10GbE OSA channels; management via Brocade Network Advisor (SAN + IP).]
• Use the VDX for:
‒ RoCE/SMC-R
‒ IBM DB2AA
‒ GDPS AA Qrep
‒ TS7700 Grid
Performance impact of SMC-R on real z/OS workloads

WebSphere to DB2 communications using SMC-R:
• 40% reduction in overall transaction response time for a WebSphere Application Server v8.5 Liberty profile TradeLite workload accessing z/OS DB2 in another system, measured in internal benchmarks.*
• Setup: a workload client simulator (JIBE) on Linux on x drives HTTP/REST over TCP/IP into WAS Liberty (TradeLite) on z/OS SYSA, which accesses DB2 on z/OS SYSB via JDBC/DRDA over SMC-R (RoCE).

File transfers (FTP) using SMC-R:
• Up to 50% CPU savings for FTP binary file transfers across z/OS systems when using SMC-R versus standard TCP/IP.**
• Setup: FTP client on z/OS SYSA transfers to the FTP server on z/OS SYSB over SMC-R (RoCE).

* Based on projections and measurements completed in a controlled environment. Results may vary by customer based on individual workload, configuration and software levels.
** Based on internal IBM benchmarks in a controlled environment using the z/OS V2R1 Communications Server FTP client and FTP server, transferring a 1.2 GB binary file using SMC-R (10GbE RoCE Express feature) vs standard TCP/IP (10GbE OSA Express4 feature). The actual CPU savings any user will experience may vary.
Performance impact of SMC-R on real z/OS workloads (continued)

CICS to CICS IP intercommunications (IPIC) using SMC-R:
• Up to 48% reduction in response time and up to 10% CPU savings for CICS transactions using DPL (Distributed Program Link) to invoke programs in remote CICS regions in another z/OS system via CICS IP interconnectivity (IPIC) when using SMC-R vs standard TCP/IP.*
• Setup: CICS A on z/OS SYSA issues DPL calls over IPIC and SMC-R (RoCE) to CICS B (Program X) on z/OS SYSB.

WebSphere MQ for z/OS using SMC-R:
• WebSphere MQ for z/OS realizes up to a 200% increase in the messages per second it can deliver across z/OS systems when using SMC-R vs standard TCP/IP.**
• Setup: WebSphere MQ on z/OS SYSA exchanges MQ messages with WebSphere MQ on z/OS SYSB over SMC-R (RoCE).

* Based on internal IBM benchmarks using a modeled CICS workload driving a CICS transaction that performs 5 DPL (Distributed Program Link) calls to a CICS region on a remote z/OS system via CICS IP interconnectivity (IPIC), using 32K input/output containers. Response times and CPU savings were measured on the z/OS system initiating the DPL calls. The actual response times and CPU savings any user will experience will vary.
** Based on internal IBM benchmarks using a modeled WebSphere MQ for z/OS workload driving non-persistent messages across z/OS systems in a request/response pattern. The benchmarks included various data sizes and numbers of channel pairs. The actual throughput and CPU savings users will experience may vary based on the user workload and configuration.
Network performance comparison
• Network latency reduced up to 80% for z/OS TCP/IP multi-tier OLTP workloads such as web-based claims and payment systems*
• Compared over time, a transaction without SMC-R spends most of its network time in TCP processing; the same transaction with SMC-R spends far less time in the network, reducing latency and CPU consumption and improving wall clock time.
* Based on internal IBM benchmarks of modeled z/OS TCP sockets-based workloads with request/response traffic patterns using SMC-R vs TCP/IP. The actual throughput that any user will experience will vary.
SMC-R and Brocade VCS – summary
• Optimized network performance (leveraging RDMA technology)
• Transparent to (TCP socket-based) application software
• More efficient, highly available SMC-R fabric
• Simpler to manage and configure
• Preserves the existing network security model
• Resiliency (dynamic failover to redundant hardware)
• Transparent to load balancers
• Preserves existing IP topology and network administrative and operational models