T3 Consortium Presentation Template



Professional Services
Performance Testing
Center of Excellence
Application Performance Management Solution
Information Technology Services
Today’s Challenges
Decentralized approach to building and testing software.
Business lines rarely share information on tool research, usage, testing practices, and cost containment.
Software Development Life Cycle models frequently compromise testing initiatives to meet deliverable deadlines.
Performance testing practices vary widely from project to project.
Performance tools are expensive and complex to integrate.
Knowledge transfer – performance testing information leaves with consultants.
Expensive consultants are frequently hired for performance testing.
No global approach to performance testing tools, techniques, or costs.
Many projects cannot afford to make long-term commitments to performance testing.
Industry Trends
Application owners often ask the following:
Does my application scale to meet business goals? 75% of ERP systems fail to do so (Gartner), and 70% of performance problems can be resolved by configuration changes (survey of 3,000 HP/Mercury engagements).
Can I resolve application bottlenecks? 40% of application bottlenecks are first noticed by a customer or executive (Mercury survey).
Have I overspent on infrastructure? More than $2B was spent on application server overcapacity from 2001 to 2003 (Gartner).
One Platform Solution
T3 works to empower Business, Quality Assurance, and Development teams to deliver better software by centralizing all aspects of testing and performance management.
Become a performance center of excellence
Centralize performance testing and monitoring tools
Global performance testing execution capability
Focus on quick “cost savings” wins!
Reduce dependency on expensive consultancy
Quick start: bring in automation quickly to address an immediate need
Reduce infrastructure support for testing
“Bringing the pieces together”
Take a ‘value’ approach and network it
Forge alliances & partnerships
Leverage vendor purchasing power
Performance Approach
Phase-driven approach to Performance Optimization

Phase 1: Plan – Fully plan the project and organize the team
Determine the business goals
Define the business process
Inventory HW, SW, and network
Identify key participants
Mobilize test team
Create test plan
Agree execution schedule

Phase 2: Baseline – Quantify the system performance
Create test cases
Deploy monitoring agents
Run initial test plan
Document the baseline profile
Compare baseline with target

Phase 3: Optimize – Iteratively isolate and reduce bottlenecks
Emulate production load
Monitor system performance
Identify problem areas
Analyze root cause
Determine resolution
Apply modification

Phase 4: Report – Assess the performance improvements and report
Document improved throughput, increased capacity, reduced error rate, greater stability, and better user response time
Compare with baseline and target
Produce findings/recommendations
Retain data for future comparison
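As a simple illustration of the Phase 4 comparison step, here is a minimal sketch (not part of the T3 methodology; all metric names and numbers are hypothetical) that computes the change between a baseline profile and post-optimization results:

```python
# Minimal sketch: compare a baseline performance profile with post-optimization
# results and report the change per metric. Metric names and numbers are
# hypothetical examples, not measurements from any T3 engagement.

METRICS = {
    # name: (baseline, optimized, lower_is_better)
    "throughput_tx_per_min": (100.0, 160.0, False),
    "avg_response_time_sec": (10.0, 4.5, True),
    "error_rate_pct":        (10.0, 1.2, True),
}

def pct_change(before: float, after: float) -> float:
    """Percentage change relative to the baseline value."""
    return (after - before) / before * 100.0

def report(metrics: dict) -> None:
    for name, (before, after, lower_is_better) in metrics.items():
        change = pct_change(before, after)
        improved = change < 0 if lower_is_better else change > 0
        status = "improved" if improved else "regressed"
        print(f"{name}: {before} -> {after} ({change:+.1f}%, {status})")

if __name__ == "__main__":
    report(METRICS)
```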
Why HP/Mercury
1. Proven Leadership
2. Market Visionaries
3. Strategic Alliance

Worldwide Performance Testing Market Share (Source: Newport Group, Inc., © 2004):
HP/Mercury 71%, Rational 9%, Other 9%, Empirix 4%, Compuware 4%, Segue 2%, RadView 1%
Automated Performance Testing
[Diagram: a controller drives user simulation over the Internet/WAN against the web server, application server, and database, with performance monitors at every tier]
Replaces real users with thousands of virtual users
Generates consistent, measurable, and repeatable load, managed from a single point of control
Efficiently isolates performance bottlenecks across all tiers/layers with automated reporting and analyses
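To make the virtual-user idea concrete, here is a minimal sketch of the pattern, an illustration only and not the LoadRunner implementation; the target URL, user count, and iteration count are placeholder assumptions:

```python
# Minimal sketch of a virtual-user load generator: many simulated users driven
# from a single point of control, producing measurable, repeatable results.
# This illustrates the concept, not how LoadRunner works internally; the
# target URL, user count, and iteration count are placeholder assumptions.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # placeholder endpoint
VIRTUAL_USERS = 50                           # concurrent simulated users
ITERATIONS_PER_USER = 10                     # requests issued by each user

def virtual_user(user_id: int) -> list:
    """One simulated user: issue requests and record response times."""
    timings = []
    for _ in range(ITERATIONS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
        except OSError:
            continue  # failed requests are simply skipped in this sketch
        timings.append(time.perf_counter() - start)
    return timings

def run_load_test() -> None:
    # The executor is the single point of control for all virtual users.
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    all_timings = [t for user_timings in results for t in user_timings]
    if all_timings:
        print(f"successful requests: {len(all_timings)}")
        print(f"avg response time:   {statistics.mean(all_timings):.3f}s")
        print(f"max response time:   {max(all_timings):.3f}s")

if __name__ == "__main__":
    run_load_test()
```

A real load-testing tool layers scripted business transactions, ramp-up schedules, and per-tier monitoring on top of this basic pattern.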
Types of Performance Testing
1. Stress Testing
2. Capacity Planning
3. Throughput Improvement
4. Server Consolidation
5. Baseline Assessment
6. New Version Impact Analysis
7. Ensure Business Performance
Application Monitoring
Application management monitors the entire infrastructure from the user's perspective.
[Diagram: the end-to-end path from ISPs through routers, a firewall, and a load balancer to the web servers, application server, and database server]
98% of sites experience critical performance problems
– Typical applications encounter problems at 15% of design capacity
Problems occur both inside and outside the firewall
– 25% are network and bandwidth related
– 23% are application server related
– 20% are load balancer, web server, or proxy server issues
Source: Mercury Interactive hosted services
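As a hedged illustration of monitoring from the user's perspective, the sketch below periodically runs user-facing checks and flags failures or slow responses; the URLs, SLA threshold, and alerting are placeholder assumptions rather than features of any particular product:

```python
# Minimal sketch of user-perspective synthetic monitoring: periodically run
# user-facing checks end to end and alert when a check fails or is too slow.
# The URLs, SLA threshold, interval, and "alerting" (print statements) are
# placeholder assumptions, not features of a specific monitoring product.
import time
import urllib.request

CHECKS = {
    "login_page":  "http://localhost:8080/login",   # placeholder URL
    "search_page": "http://localhost:8080/search",  # placeholder URL
}
RESPONSE_TIME_SLA_SEC = 3.0   # alert when a check takes longer than this
CHECK_INTERVAL_SEC = 60       # how often the full set of checks is run

def probe(name: str, url: str) -> None:
    """Execute one synthetic check and report its outcome."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        elapsed = time.perf_counter() - start
        if elapsed > RESPONSE_TIME_SLA_SEC:
            print(f"ALERT {name}: slow response {elapsed:.2f}s (SLA {RESPONSE_TIME_SLA_SEC}s)")
        else:
            print(f"OK    {name}: {elapsed:.2f}s")
    except OSError as exc:
        print(f"ALERT {name}: request failed ({exc})")

if __name__ == "__main__":
    while True:
        for check_name, check_url in CHECKS.items():
            probe(check_name, check_url)
        time.sleep(CHECK_INTERVAL_SEC)
```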
A Day In the Life… Daily Challenges
07:05 AM  Problem detected - data center serving e-Bank customers is down
07:17 AM  Ticket opened
07:31 AM  Bridge line opened - 8 people
08:03 AM  Check UNIX issue, escalate to hardware vendor
08:06 AM  Business urgency discussed, not well understood
10:20 AM  Security clearance delays hardware vendor from entering data center
10:51 AM  Hardware is OK, Informix DB appears down
11:48 AM  4 more people paged, DBA joins line
04:05 PM  Root cause identified: misconfigured connection pooling caused DB crash at peak traffic. At least 800 customers affected, business impact unknown
04:20 PM  IT representative sent to client impact assessment meeting
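The root cause above was misconfigured connection pooling. As a generic illustration (not the configuration of the affected system), the sketch below shows how a bounded pool caps concurrent database connections so that peak traffic queues requests instead of crashing the database:

```python
# Minimal sketch of a bounded database connection pool: a semaphore caps how
# many connections can be in use at once, so a traffic peak queues requests
# instead of opening ever more connections and overwhelming the database.
# The pool size and the open/close/query placeholders are assumptions for
# illustration, not the configuration of the system described above.
import threading
from contextlib import contextmanager

MAX_POOL_SIZE = 20  # must be sized against what the database can sustain

class ConnectionPool:
    def __init__(self, max_size: int):
        self._slots = threading.Semaphore(max_size)

    @contextmanager
    def connection(self):
        self._slots.acquire()           # callers wait here when the pool is full
        conn = self._open_connection()  # placeholder for a real database connect
        try:
            yield conn
        finally:
            self._close_connection(conn)
            self._slots.release()

    def _open_connection(self):
        return object()  # placeholder connection handle

    def _close_connection(self, conn) -> None:
        pass             # placeholder close

pool = ConnectionPool(MAX_POOL_SIZE)

def handle_request(query: str) -> None:
    # Each request borrows a connection; excess load queues at the pool rather
    # than piling new connections onto the database at peak traffic.
    with pool.connection() as conn:
        pass  # placeholder: execute `query` using `conn`
```

Capping the pool, and sizing it against what the database can actually sustain, turns a traffic spike into queued requests rather than a database crash.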
Diagnose Root Cause
[Screenshot: Business Availability Center (top view, Define SLAs, business console, customer impact, SLM, event viewer) feeding the Resolution Center across Tier 1, Tier 2, and Tier 3]
Application Management Dashboard
[Screenshot: application management dashboard showing the same Business Availability Center views (top view, Define SLAs, business console, customer impact, SLM, event viewer) alongside the Resolution Center tiers]
Appendix
Supplemental Information
Introducing LoadRunner
Our Center of Excellence approach uses the Mercury Interactive LoadRunner product to deliver an integrated solution for business technology optimization.
[Diagram: LoadRunner at the center, linked to automated scripts, a system monitor, and a data repository]
It provides an end-to-end tool set for performance tuning:
a user simulation module to create business transaction load
a system monitoring module to display infrastructure behavior and application errors
a set of automated scripts that identifies modifications
a data repository for future comparisons
Components covered span the application, infrastructure, and security layers: application servers, client systems, network bandwidth, gateways, firewalls, databases, legacy systems, routers, web servers, IDS systems, middleware, .NET services, switches, cache servers, DDoS systems, back office, Sun ONE services, hosting equipment, LAN/WAN, and load balancers.
Powering the CoE
PERFORMANCE CoE
Application Delivery Console: LoadRunner, Tuning, Diagnostics, Capacity Planning
Global Management: Multi-project Management, Resource Management
Application Delivery Foundation: Monitors, Virtual Users, Protocols
Application Monitoring: Business Availability Center, Resolution Center
LoadRunner Coverage
LoadRunner capabilities in an optimization exercise
Performance tuning steps: User Simulation, Monitoring, Problem Identification, Recommendation, Resolution
Tunable components:
Infrastructure (e.g. Linux, Solaris, Windows)
Vendor Product (e.g. Oracle, WebSphere)
In-house Apps: method level for J2EE apps, program level for other types
* J2EE source code line-level identification available using OptiBench add-on.
LoadRunner Overview
USER SIMULATION PROTOCOLS
ERP/CRM: SAP, Oracle, Siebel, PeopleSoft
Web: HTTP(S), XML, Citrix ICA, SOAP, WAP
Middleware: EJBs, CORBA, COM, RMI, MQSeries
Databases: Oracle, MS SQL Server, DB2, ODBC
Legacy: 3270, 5250, VT100

[Diagram: clients connect over the Internet/intranet to the web server, application server, and database]

PERFORMANCE MONITORS
Operating Systems: Windows, Unix, Linux
Network: SNMP, WAN Emulation
Web Servers: MS IIS, iPlanet, Apache
App Servers: BEA WebLogic, IBM WebSphere, ATG Dynamo, iPlanet App Server
Java: EJB, JDBC, JSP, Sitraka JMonitor
Databases: Oracle, MS SQL Server, DB2
LoadRunner Deployment
Distributed system stress simulation across the infrastructure, application, and database tiers.
[Diagram: load is applied through firewalls and a load balancer to web servers, application servers, applications, database servers, storage, and the telecomm fabric, with the environment under user load]

Step 1: Define business process
Step 2: Capture user behavior
Step 3: Create simulation profile
Step 4: Apply controlled load

Automated and manual tuning, with application and system monitoring:
Measure Performance → Identify Constraints → Apply Modification

Typical statistics
System infrastructure capacity consumption:
User count: 100
CPU: 28 CPUs
Transaction rate: 100/min
Memory: 2,560 MB
Disk I/O: 70%
Network: 80%
User experience:
Response time: 10 sec
Error rate: 10%

Sample recommendations
WebLogic: misconfigured Java VM heap size
Database: missing indexes, full table scan
...
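As a small illustration of the measure-then-identify-constraints step, the sketch below compares monitored utilization against alert thresholds; the readings echo the sample statistics above, while the thresholds are illustrative assumptions rather than product defaults:

```python
# Minimal sketch of the "measure performance -> identify constraints" step:
# compare monitored utilization against alert thresholds and flag likely
# bottlenecks. The readings echo the sample statistics above; the thresholds
# are illustrative assumptions, not defaults of any monitoring product.

THRESHOLDS = {            # utilization above this is treated as a constraint
    "cpu_pct":        85.0,
    "disk_io_pct":    60.0,
    "network_pct":    75.0,
    "error_rate_pct":  1.0,
}

READINGS = {              # sample readings at 100 concurrent users
    "cpu_pct":        55.0,   # assumed; the slide gives capacity (28 CPUs), not utilization
    "disk_io_pct":    70.0,
    "network_pct":    80.0,
    "error_rate_pct": 10.0,
}

def identify_constraints(readings: dict, thresholds: dict) -> list:
    """Return the metrics whose readings exceed their alert thresholds."""
    return [name for name, value in readings.items()
            if value > thresholds.get(name, float("inf"))]

if __name__ == "__main__":
    for metric in identify_constraints(READINGS, THRESHOLDS):
        print(f"constraint: {metric} = {READINGS[metric]} "
              f"(threshold {THRESHOLDS[metric]})")
```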
Metrics
Web servers
Apache: #Busy Servers, #Idle Servers, CPU Usage, Hits/Sec, KBytes Sent/Sec
MS IIS: Connection Attempts/Sec, Files Received/Sec, Logon Attempts/Sec, Total Files Transferred, plus 50 other counters
iPlanet (Netscape): Bad requests/Sec, Bytes Sent/Sec, Hits/Sec, plus 11 other counters

App servers
IBM WebSphere: ThreadCreates, ActiveThreads, ConnectionPoolSize, SessionsActive, plus 82 other counters
BEA WebLogic: HeapSizeCurrent, ActiveConnectionsCurrentCount, WaitingForConnectionCurrentCount, plus 118 other counters
iPlanet Application Server: nasKesEngAutoStart, nasEngSYBPreparedQueryTotal, nasEngThreadWait, plus 118 other counters
Microsoft Active Server Pages: Requests Executing, Requests Queued, Request Bytes Out Total, Requests/Sec, Transactions Aborted, Transactions Pending
Also monitored: Oracle 9iAS HTTP Server, Allaire ColdFusion, SilverStream, Ariba, ATG Dynamo, Microsoft COM+ Monitor, BroadVision

Databases
Oracle: Total file opens, Enqueue deadlocks, Enqueue waits, Opened cursors current, SQL*Net roundtrips to/from client, plus 164 other counters
Sybase: Disk Reads, Memory, Disk Writes, Disk Waits, Locks count, % Hits, % Processor Time (process), plus 49 other counters
Microsoft SQL Server: SQL Re-Compilations/Sec, I/O - Outstanding Reads, Lock Wait Time (ms), Total Latch Wait Time (ms), plus 38 other counters
DB2: Local_con_exec, Total_sorts, plus 174 other counters

Additional monitor families: Streaming Media Monitors, Checkpoint Firewall Server, ERP Performance Monitors, Middleware Monitors, Java Performance Monitors, Citrix MetaFrame Monitors, Network Delay
Customized counters capability for databases and other software and hardware
The Tuning Process
LoadRunner’s tuning agent can be deployed to automatically recommend optimum settings for a wide range of industry-standard applications:
Apache Web Server 1.x/2.x
Microsoft IIS 4/5
SAP Enterprise Portals 5
BEA WebLogic 6.x/7.x
Microsoft Active Server Pages 2/3
Siebel 7.x
IBM HTTP Server
IBM WebSphere Advanced 4.x
Oracle Database
Oracle 9iAS
IBM WebSphere Single Server 4.x
PeopleSoft 8.x
iPlanet Enterprise Server 6 and higher
SQL Server 7.5/2000
Windows: NT, 2000, and XP
UNIX: Solaris, HP, AIX, and Linux

IT specialists will manually optimize additional hardware and software based on LoadRunner performance metrics.
Benefits of Tuning
• Optimizes application and infrastructure performance
• Isolates and resolves performance bottlenecks
• Establishes optimized configuration settings for production
Benefits of LoadRunner Diagnostics
• Pinpoints application bottlenecks, e.g., J2EE to method/SQL level
• Reduces time to resolution for application issues
• Integrated with Mercury LoadRunner: combines end-user response time with diagnostics
Visit Us Online
www.t3consortium.com
Global Virtual Testing Capabilities
For additional information contact:
[email protected]