Predictors of Customer Perceived Software Quality
Paul Luo Li (ISRI – CMU)
Audris Mockus (Avaya Research)
Ping Zhang (Avaya Research)
1
Need to View Quality from the Customer's Perspective

"… We translate these advanced technologies into value for our customers …"
  - IBM (#9 on the Fortune 500)

"… Our strategy is to offer products, services and solutions that are high tech, low cost and deliver the best customer experience."
  - HP (#11 on the Fortune 500)

"… We deliver unparalleled value to our customers. Only by serving our customers well do we justify our existence as a business."
  - Avaya (#401 on the Fortune 500)
2
What Would Be Ideal

Predict customer perceived quality:
- Using customer characteristics
- For each customer

Key idea: Focus on the customer
3
Possible Applications of Predictions

- How do I plan deployment to meet the quality expectations of the customer? -> Predict customer experience for each customer
- How do I target improvement efforts? -> Identify possible causes of problems
- How do I allocate the right resources to deal with customer problems? -> Predict customer interactions
4
Solutions for Software Producers
5
To Improve Customer Perceived Quality
6
Gaps in Current Research

Prior work examined:
- Software defect prediction for a single customer (Musa et al. 1987, Lyu et al. 1996)
- Software defect prediction for modules or features (Jones et al. 1999, Khoshgoftaar et al. 1996)

These approaches are not scalable.
7
Not Focused on Customers

These prior approaches tell us nothing about a specific customer.
8
Does Not Capture Other Aspects of Customer Perceived Quality

Prior work also does not predict aspects of customer perceived quality that are not code related.
9
Research Contributions

- Predict software defects for each customer in a cost-effective manner
- Predict other aspects of customer perceived quality for each customer
- Empirically validate deployment, usage, software, and hardware predictors
10
Rest of This Talk

- The setting
- Customer interactions (outputs)
- Customer characteristics (inputs)
- Results
- Conclusion
11
Empirical Results from a Real World Software System

Avaya telephone call processing software system:
- 7 million+ lines of C/C++
- Fixed release schedule
- Process improvement efforts
- Tens of thousands of customers: 90% of Fortune 500 companies use it
- Professional support organization
12
Data Used Are Commonly Available

- Customer issue tracking system: trouble ticket database
- The equipment database
- Change management: Sablime database

These data are collected as a part of everyday operations, and the same data sources are available at other organizations, e.g. IBM and HP.
13
At Other Organizations
14
Customer Interactions (Outputs)

We assume customer interactions reflect customer perceived quality. We examine five customer interactions (Chulani et al. 2001, Buckley and Chillarege 1995) within 3 months of deployment:
- Software defects: high-impact problem
- System outages: high-impact problem
- Technician dispatches
- Calls
- Automated alarms

These are important for Avaya, and likely for other organizations as well.
15
Examine Customer Installations

[Chart: number of deployments vs. months after general availability]
16
Capture Characteristics of Each Installation

[Chart: number of deployments vs. months after general availability, annotated per installation]
Customer 1: deployed first month, a large system, Linux…
Customer 2: deployed first month, a small system, Windows…
Customer 3: deployed first month, a large system, proprietary OS…
Customer 4: deployed first month, a small system, Linux…
Customer 5: deployed first month, a large system, Linux…
17
Analyze Using Statistical Analysis

[Chart: the annotated installations, analyzed for similarities and differences]
18
Category of Predictors (Kinds of Inputs)

We examine:
- Deployment issues
- Usage patterns
- Software platform
- Hardware configurations

Prior work examines:
- Software product
- Development process

These are common sense issues, but they lack empirical validation.
19
Category of Predictors (Kinds of Inputs)

Key idea: From the customer's perspective, software product and development process measures are not good predictors (i.e. they do not vary within a single release).
20
Specific Predictors (Inputs)

- Total deployment time: deployment issue
- Operating system: software platform, hardware configurations
- System size: hardware configurations, software platform, usage patterns
- Ports: usage patterns, hardware configurations
- Software upgrades: deployment issue
21
Recap

Predict for each customer (outputs):
- Software defects
- System outages
- Technician dispatches
- Calls
- Automated alarms

Using predictors (inputs):
- Total deployment time
- Operating system
- System size
- Ports
- Software upgrades

Using logistic regression and linear regression, for a real world software system.
22
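The modeling setup above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the data are synthetic, and every variable name and coefficient is an assumption chosen only to mirror the direction of the talk's findings.

```python
# Sketch: logistic regression predicting whether an installation
# experiences a field defect within 3 months, from the slide's inputs.
# Synthetic data; names and effect sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Predictors (inputs) named on the slide
deploy_time = rng.uniform(0, 12, n)      # total deployment time, months
os_windows = rng.integers(0, 2, n)       # 1 if commercial OS (Windows)
os_proprietary = rng.integers(0, 2, n)   # 1 if proprietary OS
system_size = rng.uniform(1, 100, n)     # system size
ports = rng.integers(1, 500, n)          # number of ports
upgrades = rng.integers(0, 4, n)         # software upgrades applied

X = np.column_stack([deploy_time, os_windows, os_proprietary,
                     system_size, ports, upgrades])

# Synthetic outcome: later deployment lowers the odds of a defect,
# mirroring the direction (not the magnitude) of the talk's results
logit = 1.0 - 0.4 * deploy_time + 0.5 * os_windows - 0.5 * os_proprietary
defect = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=2000).fit(X, defect)

# Exponentiated coefficients are odds ratios per unit of each predictor
odds_ratios = np.exp(model.coef_[0])
print(dict(zip(["deploy_time", "windows", "proprietary",
                "size", "ports", "upgrades"], odds_ratios.round(2))))
```

An odds ratio below 1 for a predictor means higher values of that predictor lower the odds of a field defect, which is how effects like those on the following slides are read off the fitted model.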
Example: Field Defect Predictions
23
Predictors
24
Nuisance Variables
25
All Predictors are Important
26
The Most Important Predictor

Total deployment time (deployment issue): systems deployed halfway into our observational period are 13 to 25 times less likely to experience a software defect.
27
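As a quick arithmetic check of what "13 to 25 times less likely" means in a logistic model: it corresponds to an odds ratio between 1/13 and 1/25. The coefficients below are back-computed for illustration only, not taken from the paper.

```python
import math

# "k times less likely" means an odds ratio of 1/k, i.e. a
# logistic-regression coefficient b with exp(b) = 1/k.
# Illustrative values, not the paper's fitted coefficients.
for k in (13, 25):
    b = -math.log(k)
    print(f"k={k}: coefficient b={b:.2f}, odds ratio exp(b)={math.exp(b):.4f}")
```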
May Enable Deployment Adjustments

The total deployment time effect may be due to software patching, better tools, and more experienced technicians.
28
Another Important Predictor

Operating system (software platform, hardware configurations):
- Systems running on the proprietary OS are 3 times less likely to experience a software defect compared with systems running on the open OS (Linux)
- Systems running on the commercial OS (Windows) are 3 times more likely to experience a software defect compared with systems running on the open OS (Linux)
29
May Allow for Targeted Improvement or Improved Testing

The operating system effect may be due to familiarity with the operating system, or to operating system complexity.
30
More Results in Paper

- The complete results and analyses for field defects
- Predictions for other customer interactions
31
Validation of Results and Method

- We accounted for data reporting differences: included indicator variables in the models to identify populations (e.g. US or international customers)
- We independently validated the data collection process: independently extracted data and performed analyses
- We interviewed personnel to validate findings: programmers and field technicians
32
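The indicator-variable idea in the first bullet can be sketched as below. The column names and data are invented for illustration; the point is that a 0/1 population indicator absorbs systematic reporting differences so they do not bias the other coefficients.

```python
# Sketch (assumed encoding, not the paper's exact model): add a 0/1
# indicator column for reporting population before fitting the model.
import pandas as pd

installs = pd.DataFrame({
    "deploy_time": [1.0, 3.5, 7.2, 2.1],
    "population": ["US", "international", "US", "international"],
})
# One indicator per population; drop_first avoids a redundant column
encoded = pd.get_dummies(installs, columns=["population"], drop_first=True)
print(encoded.columns.tolist())
```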
Summary: Identified Predictors of Customer Perceived Quality

We identified and quantified characteristics, like time of deployment, that can affect customer perceived quality by more than an order of magnitude.
33
Summary: Modeled Customer Interactions

We created models that can predict various customer interactions, and found that predictors have consistent effects across interactions.
34
Summary: Deployment Is Important for High Reliability

We learned that controlled deployment may be the key to high-reliability systems.
35
Improve Customers' Experiences

- You can target improvement efforts
- You can allocate the right resources to deal with customer-reported problems
- You can adjust deployment to meet the quality expectations of your customers
36
Predictors of
customer perceived
software quality
Paul Luo Li ([email protected])
Audris Mockus (Avaya Research)
Ping Zhang (Avaya Research)
37
Predicted Number of Calls Matches Actual Number of Calls

[Chart: calls over time for the next release, actual vs. predicted; predictions are made at the point marked on the time axis]
38