
Measuring zSeries System
Performance
Dr. Chu J. Jong
School of Information Technology
Illinois State University
06/11/2012
Sponsored in part by Deere & Company
Outline
• Computer System Performance
• Performance Factors and Measurements
• zSeries Performance
– Measuring Application Performance
– Measuring System Performance
• Additional Tools used
• Discussion
Computer System Performance
• Amount of time used to complete a task
• Amount of work completed in a unit of time
• Resources required and resource usage
• Others
– Storage
– Channel
– Scalability
– Availability (MTBF and MTTF)
– Power
– Etc.
Factors of System Performance
Hardware
• Processor
• Bus
• Clock
• Memory
• Secondary Storage
• I/O Devices
• Network
• Cooling
• Power
Software
• Resource Allocation
• Resource Sharing
• Process Distribution
• Operating Systems
• Algorithms
• Context Switches
• Compilation
• Optimization
• Others … and more
Performance Measurement
• Metrics include: availability, response time,
channel capacity, latency, completion time,
service time, bandwidth, throughput,
scalability, performance per watt,
compression ratio, speed up, …, and more.
• Two metrics are used here (a small sketch computing both follows below):
– Response Time
– Throughput
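As a rough illustration (not part of the original slides), the Python sketch below shows how the two metrics relate when timing repeated runs of a task: response time is the average seconds per completed task, and throughput is tasks completed per second over the same measurement window.

import time

def measure(task, runs=45):
    """Time a callable over several runs and report both metrics."""
    per_run = []
    start = time.perf_counter()
    for _ in range(runs):
        t0 = time.perf_counter()
        task()                                     # the unit of work being measured
        per_run.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    response_time = sum(per_run) / len(per_run)    # average seconds per task
    throughput = runs / elapsed                    # tasks completed per second
    return response_time, throughput

if __name__ == "__main__":
    rt, tp = measure(lambda: sum(i * i for i in range(100_000)))
    print(f"response time: {rt:.6f} s/task, throughput: {tp:.2f} tasks/s")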
Measuring zSystem Performance
• Application – DB2
Compare the performance of accessing data stored in DB2 tables against reading the same data directly from VSAM files on z/OS hosted on an IBM zSystem.
• System – CP, the Hypervisor
Compare and analyze the performance of resource management by the hypervisor against resource management by z/VM and by Linux guests of z/VM hosted on an IBM zSystem.
Application – DB2
A Normalized Comparison of DB2 and Direct Access
Performance under z/OS Environment
(By Christopher Corso)
Compares the performance of accessing data stored in DB2 tables against reading the same data values directly from VSAM files. Validation testing is performed on MVS mainframes running DB2 version 9 under z/OS. The comparison is between a DB2 FETCH and a read until end of file on a direct access VSAM file. The resulting CPU processing times of the two methods are discussed and conclusions are offered.
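The sketch below is a wall-clock-only analogue of this comparison, not the original SPEED programs (which are not reproduced in the slides). The connection string, table name, and file name are hypothetical placeholders, and the real study also captured TCB, SRB, and CPU times, which this Python sketch does not measure.

import time
import ibm_db  # IBM Db2 driver for Python

def time_db2_fetch(conn_str, table):
    """FETCH every row of a DB2 table and return (rows, wall clock seconds)."""
    conn = ibm_db.connect(conn_str, "", "")
    t0 = time.perf_counter()
    stmt = ibm_db.exec_immediate(conn, "SELECT * FROM " + table)
    rows = 0
    while ibm_db.fetch_tuple(stmt):
        rows += 1
    elapsed = time.perf_counter() - t0
    ibm_db.close(conn)
    return rows, elapsed

def time_direct_read(path):
    """Read the same data from a flat file until end of file and time it."""
    t0 = time.perf_counter()
    rows = 0
    with open(path, "r") as f:
        for _ in f:
            rows += 1
    return rows, time.perf_counter() - t0

if __name__ == "__main__":
    # The connection string, table, and file name below are placeholders.
    dsn = "DATABASE=testdb;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;UID=user;PWD=pw"
    print("DB2 fetch:   ", time_db2_fetch(dsn, "SPEED_DATA"))
    print("Direct read: ", time_direct_read("speed_data.txt"))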
Testing and System Configuration
Time differentials of:
• Task Control Blocks (TCB)
• Service Request Blocks (SRB)
• Computer clock speeds (CPU)
Systems Used
• ISU Mainframe (z890) – z/OS, DB2 version 9
• IIC Mainframe (IBM Innovation Center – Dallas) VM – z/OS, DB2 version 9
Table Relationships
Testing Programs and Names
• DB2 only
• VSAM only
• DB2 with internal files
• VSAM with internal files
• SPEED1 – DB2 processing only
• SPEED2 – direct access VSAM processing only
• SPEED3 – direct access VSAM with internal files
• SPEED4 – DB2 with internal files
Testing Results
• Wall Clock Time
• Task Control Block (TCB) Time
• Service Request Block (SRB) Time
• CPU Time
Wall Clock Time
[Chart: wall clock time in seconds (0–60) by run number (1–45) on the ISU mainframe, for SPEEDA1 through SPEEDA4]
Wall Clock Time
[Chart: wall clock time in seconds (0–16) by run number (1–28) on the IIC mainframe, for SPEEDA1 through SPEEDA4]
TCB time by run
[Chart: TCB time in seconds (0–0.45) by run number (1–45) on the ISU mainframe, for SPEEDA1 through SPEEDA4]
SRB time per run
[Chart: SRB time in seconds (0–0.25) by run number (1–45) on the ISU mainframe, for SPEEDA1 through SPEEDA4]
CPU time per run
[Chart: CPU clock time in seconds (0–2.5) by run number (1–45) on the ISU mainframe, for SPEEDA1 through SPEEDA4]
Observation
• DB2 proved faster than direct VSAM access
• Neither method is significantly faster to process than the other
• The least practical method of storing the data is
to put it in variables within the source code itself
• Converting direct access VSAM is not always the
best option
• It is much easier for users of the data to access it
via DB2 tables
System – CP Hypervisor
Compare and analyze the performance of resource management by the hypervisor against resource management by z/VM and by Linux guests of z/VM hosted on an IBM zSystem. The purpose is to analyze and correlate the relationship between the resource management of the guest virtual machine (Linux on z/VM) and the hypervisor of the hosting virtual machine (z/VM). We will run benchmarks on combinations of different numbers of processes and guest VMs to collect their performance data and draw conclusions.
Optimizing Guest System’s Addressing Space in a Virtual
Environment
By Niranjan Sharma
Performance of CP and z/VM
• Resource Allocation
– By CP, By z/VM, By guest O/S – Linux
• Memory Management
– By CP, By z/VM, By guest O/S – Linux
• CPU Cycle Distribution
– By CP, By z/VM, by guest O/S – Linux
• Mainframe Resource Utilization and Scalability
– Do they fit in a distributed system?
– What about cloud computing and parallel computing?
Performance Concerns
• LPAR Optimization
– How many LPARs become too many?
– What are the overheads of managing LPARs?
• Guest Optimization
– How many guest O/S's become too many?
– What are the overheads of managing guest O/S's?
• Process Optimization
– How many processes can a guest O/S handle while maintaining scalability?
– What are the costs of context switches?
• Resource Sharing
– Processor assignment
– Memory allocation
– Buffer space and channel distribution
Benchmark Testing
• CPU Intensive Application Response Time and Throughput (a sketch of this scaling harness follows below)
– Scaling from 1 to 2^n processes per guest O/S
– Scaling from 1 to 2^m guests per LPAR
– Scaling from 1 to 2^k LPARs per system
• Memory Intensive Application Response Time and Throughput
– Scaling from 1 to 2^n processes per guest O/S
– Scaling from 1 to 2^m guests per LPAR
– Scaling from 1 to 2^k LPARs per system
• Resource Utilization and Scalability
• LINPACK test suite – parallel computing
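As an assumed illustration of the process-scaling leg only (guest and LPAR scaling, and the LINPACK runs, would be driven outside a single script), the Python sketch below runs a fixed batch of CPU-bound tasks under 1, 2, 4, ... worker processes and records response time and throughput at each scale.

import time
from multiprocessing import Pool

def cpu_work(_):
    """A fixed CPU-bound task standing in for the real benchmark kernel."""
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

def scale_test(max_exp=4, tasks=64):
    """Run the same batch of tasks under 1, 2, 4, ... 2**max_exp processes."""
    for exp in range(max_exp + 1):
        workers = 2 ** exp
        t0 = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(cpu_work, range(tasks))
        elapsed = time.perf_counter() - t0
        print(f"{workers:3d} processes: "
              f"response time {elapsed / tasks:.4f} s/task, "
              f"throughput {tasks / elapsed:.2f} tasks/s")

if __name__ == "__main__":
    scale_test()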
Tivoli Performance Monitoring Tools
Discussions
• Under what circumstances does CP allocate its resources adequately?
• Under what circumstances does z/VM manage its resources effectively?
• Scalability of CPU intensive applications
• Scalability of memory intensive applications
• Mainframes vs. Distributed Systems – A Collaborative Approach
Questions?