An Exploration of Errors in Web-based Applications in the Context of Web-based Application Testing
PhD Proposal
Kinga Dobolyi
May 2009
1
The shopping cart
2
The shopping cart
3
The shopping cart
4
What is going on?
• Problem: faults in web-based applications
cause losses of revenue, and they are hard to
test
• Approach: study errors in web-based
applications in the context of web testing
• Solution: improve the state of the art in web
testing techniques through guidelines targeted at
high severity faults and automation and
precision in regression testing
5
Outline
• Introduction and motivation
• Thesis statement
• Background
• Goals and approaches
• Preliminary work
• Expected contributions
6
Motivation
• Testing of web-based applications in
particular deserves further examination
due to economic considerations:
– Monetary throughput: backbone of e-commerce and communications businesses
– Customers: low customer loyalty
– Development: Companies are choosing not to
test due to resource constraints
7
Motivation: e-commerce
• Internet usage: 73% of people in the US in 2008
– Browsers are dominant application
– $204 billion in Internet retail sales annually
• Global online B2B transactions total several trillion dollars annually
• One hour of downtime at Amazon.com cost $1.5 million
• 70% of major online sites exhibit user-visible
failures
8
Motivation: customers
• Customer loyalty is notoriously low
– Determined by the usability of the application
[Offutt 2002]
– Freedom and options
9
Motivation: customers
• Lesson learned: web-based applications need to be well-designed and well-tested
• Are they?
10
Motivation: development
• Technology challenges:
– Heterogeneous, opaque components
– Dynamic page content generation
– Persistent state operated upon by concurrent, global users around the clock
11
Motivation: development
• Web-based applications are often not
tested
– Enormous pressure to change
• Short delivery times, high developer turnover rates,
and quickly evolving user needs
– No formal process model
12
Motivation: summary
• Problem: faults in web-based applications
cause losses of revenue, and they are hard to
test
• Approach: study errors in web-based
applications in the context of web application
testing
• Solution: improve the state of the art in web
testing techniques through guidelines targeted at
high severity faults and automation and
precision in regression testing
13
Thesis statement
• Hypothesis: web-based applications have
special properties that can be exploited to
build tools and models that improve the
current state of web application testing
and development:
– Tend to fail in predictable and similar ways
– Human centric definition of acceptability
14
Thesis statement
• Problem: faults in web-based applications
cause losses of revenue, and they are hard to
test
• Approach: study errors in web-based
applications in the context of web testing
• Solution: improve the state of the art in web
testing techniques through guidelines targeted at
high severity faults and automation and
precision in regression testing
15
Background: testing techniques
• Non-functional (static) validation
– Server load testing
– Link testing
– HTML/spelling validation
• Modeling approaches
• Capture-replay
– User session-based testing
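For instance, link testing needs no application-specific oracle; a minimal sketch of the idea using only the Python standard library (the page markup below is hypothetical):

import urllib.request
from urllib.error import URLError
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets of anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links.extend(v for k, v in attrs if k == 'href' and v)

def check_links(page_html):
    """Report links that fail to resolve: a static check, no oracle needed."""
    extractor = LinkExtractor()
    extractor.feed(page_html)
    broken = []
    for url in extractor.links:
        try:
            urllib.request.urlopen(url, timeout=5)
        except (URLError, ValueError):  # ValueError: malformed/relative URL
            broken.append(url)
    return broken

print(check_links('<a href="http://localhost/missing.html">broken?</a>'))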
16
Background: oracles
• Oracles (oracle-comparator)
< <P>The same table could be indented.
< <TABLE border="1">
---
> <p>The same table could be indented.</p>
> <table border="1" summary="">
– False positives from diff-like tools
– Want precise comparators
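To make the false-positive problem concrete, here is a minimal sketch (mine, not the proposal's comparator): a textual diff flags the two fragments above even though they differ only in tag case, a closing tag, and an empty attribute, while a normalizing parser judges them equivalent.

import difflib
from html.parser import HTMLParser

old = '<P>The same table could be indented.\n<TABLE border="1">'
new = '<p>The same table could be indented.</p>\n<table border="1" summary="">'

# A diff-like tool flags every line as changed: a false positive.
print(sum(1 for line in difflib.ndiff(old.splitlines(), new.splitlines())
          if line.startswith(('+', '-'))))  # 4 lines flagged

class Normalizer(HTMLParser):
    """Record lower-cased tags, non-empty attributes, and trimmed text."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append((tag, sorted((k, v) for k, v in attrs if v)))
    def handle_data(self, data):
        if data.strip():
            self.events.append(('#text', data.strip()))

def normalize(html):
    parser = Normalizer()
    parser.feed(html)
    return parser.events

# A structure-aware comparator sees no difference: false positive avoided.
print(normalize(old) == normalize(new))  # True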
17
Background: automation
• Automation
– Test case generation: VeriWeb, PHP
– Test case replay
• URL + post-data
– Failure detection
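Replay itself is simple; a sketch of the "URL + post-data" idiom using only the Python standard library (the endpoint and form fields are hypothetical):

from urllib.parse import urlencode
from urllib.request import urlopen

def replay(url, post_data=None):
    """Re-execute one recorded request: GET when post_data is None,
    otherwise POST the recorded form fields. Returns the page body,
    which is then handed to the oracle-comparator."""
    data = urlencode(post_data).encode() if post_data else None
    with urlopen(url, data=data) as response:
        return response.read().decode(errors='replace')

# Hypothetical recorded test case for a shopping cart application.
body = replay('http://localhost/cart.php', {'item_id': '42', 'quantity': '1'})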
18
Background: metrics
• How do we measure success?
– Code coverage
– Fault seeding
• Human
• Automatic
– Cost
• How do we know these are indicative of
the real world?
19
Background: fault definition
• Defining an error:
– "the inability to obtain and deliver information, such as documents or computational results, requested by web users." [Ma & Tian 2003]
• Fault taxonomies
– Figure from [Marchetto et al. 2007]
20
Background: challenges
• Functional validation remains a challenge
– Regression testing should be more precise
and automatic
• We do not know if test suite efficacy
metrics are indicative of the real world
– We should examine the severity of uncovered
faults
21
Goals and approaches
• Problem: faults in web-based applications
cause losses of revenue, and they are hard to
test
• Approach: study errors in web-based
applications in the context of web testing
• Solution: improve the state of the art in web
testing techniques through guidelines targeted at
high severity faults and automation and
precision in regression testing
22
Goals and approaches
• I propose to:
– Model errors in web-based applications
• Identify them more accurately
• Automate the oracle-comparator process
– Make web testing more cost-effective
• Devise a model of fault severity that will guide test
case design, selection, and prioritization
• Validate or refute the current underlying
assumption that all faults are equally severe in
fault-based testing
23
Goals and approaches: Goals
• Reduce the cost of regression testing web-based applications
– Use special structure of web-based application output
to precisely identify errors
• Automate web-based application regression
testing
– Unrelated web-based applications tend to fail in
similar ways
• Understand customer-perceived severities of
web application errors.
24
Goals and approaches: Goals
• Formally ground the current state of industrial
practice
– Validate or refute fault injection as a standard for
measuring web application test suite quality
• Understand how to avoid high-severity faults
during web application design and development
• Reduce the cost of regression testing web
applications by exposing high-severity faults
– Test case design, selection, and prioritization (test
suite reduction)
25
Goals and approaches: Outline
26
Goals and approaches: Step 1 –
oracle-comparator
• Construct a precise oracle-comparator that
uses the tree-structured nature of
XML/HTML output and other features
– Model errors on a per-project basis
– Semantic distance metric to reduce false
positives
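As a sketch of what a semantic distance over tree-structured output might look like (the costs below are assumed placeholders, not the proposal's learned model): charge different penalties for tag, attribute, and text mismatches, and flag a pair only when the total crosses a threshold.

import xml.etree.ElementTree as ET

# Assumed per-feature costs; a deployed comparator would tune these per project.
TAG_COST, ATTR_COST, TEXT_COST = 1.0, 0.25, 0.5

def distance(a, b):
    """Crude semantic distance between two element trees."""
    d = 0.0
    if a.tag != b.tag:
        d += TAG_COST
    if a.attrib != b.attrib:
        d += ATTR_COST
    if (a.text or '').strip() != (b.text or '').strip():
        d += TEXT_COST
    for child_a, child_b in zip(list(a), list(b)):
        d += distance(child_a, child_b)
    d += TAG_COST * abs(len(a) - len(b))  # unpaired children
    return d

oracle = ET.fromstring('<div><p>Total: $10</p></div>')
actual = ET.fromstring('<div><p class="bold">Total: $10</p></div>')
print(distance(oracle, actual))  # 0.25: attribute-only drift, likely benign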
27
Goals and approaches: Step 2 –
automated oracle-comparator
• Exploit the similar way in which web
applications fail to avoid the need for
human annotations in training a precise
oracle-comparator
– Train a precise oracle-comparator on data
from other, unrelated web applications
– Use fault injection to improve the results when
necessary
28
Goals and approaches: Step 3 –
human study
• Conduct a human study of real-world fault
severity to identify a model of fault severity
– Severities differ from those self-reported in bug repositories
– Screenshots of current-next idiom
– Also survey developers
29
Goals and approaches: Step 4 –
fault seeding validation
• Compare the severities of real-world faults
to seeded faults using human data
(validate fault seeding)
– Do the severities of seeded errors follow a uniform distribution?
– Does the severity distribution of seeded errors match the distribution of real-world errors, according to the results of the survey from Step 3?
30
Goals and approaches: Step 5 –
software engineering guidelines
• Identify underlying technologies and methodologies that correlate with high-severity faults
– As an alternative to testing
– Tie high severity errors to underlying code,
components, programming languages, and
software engineering practices
31
Goals and approaches: Step 6 –
testing techniques
• Identify testing techniques to maximize return on investment by targeting high-severity faults
– Introduce a new metric for the (web
application) test suite reduction research
community
32
Preliminary Work: Step 1
• Step 1: Construct a precise oracle-comparator
using tree structured XML/HTML output and
other features
– Multiple versions of 10 open-source benchmarks
– 7154 pairs of oracle/test case output, 919 of which were labeled as "possibly an error"
33
Preliminary Work: Step 1
• Evaluation: F-measure (precision and
recall) using our model
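For reference, the F-measure combines precision (what fraction of flagged pairs are real errors) and recall (what fraction of real errors are flagged); the counts below are illustrative only, not the study's results.

def f_measure(true_pos, false_pos, false_neg):
    """Harmonic mean of precision and recall over 'is this an error?' labels."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 900 real errors flagged, 40 false alarms, 19 missed.
print(round(f_measure(900, 40, 19), 3))  # 0.968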
34
Preliminary Work: Step 1
• Longitudinal study to measure effort saved
– Calculate the ratio of the cost of looking to the cost of missing
– A low ratio means we are saving effort
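A hypothetical accounting of that ratio (all costs and counts invented for illustration):

# Suppose the comparator flags 919 pairs for inspection at 1 unit of effort
# each, and silently misses 5 real errors that would cost 500 units each
# if they reached production.
cost_of_looking = 919 * 1
cost_of_missing = 5 * 500
print(cost_of_looking / cost_of_missing)  # ~0.37; below 1 means effort saved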
35
Preliminary Work: Step 2
• Step 2: Exploit similarities in web
application failures to avoid human
annotations when training a precise
oracle-comparator
– Same setup as Step 1
– Use existing, annotated pairs of test-oracle
output from unrelated applications to train a
comparator for the application under test
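A sketch of the cross-application training idea (the feature set and toy data are assumed for illustration; the actual features are richer and tree-aware): learn an error/no-error classifier from annotated pairs belonging to other applications, then apply it to the application under test.

from sklearn.tree import DecisionTreeClassifier

def features(old_html, new_html):
    """Hypothetical surface features of an oracle/test output pair."""
    return [abs(len(old_html) - len(new_html)),
            int('fatal error' in new_html.lower()),
            int(new_html.strip() == '')]  # white screen

# Toy annotated pairs from *other* applications: (oracle, output, is_error).
training = [
    ('<p>ok</p>', '<p>ok</p>', 0),
    ('<p>cart: 2 items</p>', '', 1),                      # white screen
    ('<p>hi</p>', 'Fatal error: Call to undefined fn', 1),
    ('<p>v1.0</p>', '<p>v1.1</p>', 0),                    # benign drift
]
clf = DecisionTreeClassifier().fit(
    [features(o, n) for o, n, _ in training],
    [label for _, _, label in training])

# Classify a pair from the application under test, with no local annotations.
print(clf.predict([features('<p>total: $5</p>', '')]))  # likely flags an error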
36
Preliminary Work: Step 2
• Evaluation: measure precision and recall
37
Preliminary Work: Step 2
• Use fault seeding to introduce project-specific faults into the training data set
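One plausible shape for such seeding (operators assumed here, mirroring common web failure modes; not necessarily the proposal's tool):

import random

def seed_fault(html):
    """Inject one synthetic failure into a known-good page."""
    op = random.choice(['white_screen', 'stack_trace', 'drop_tag'])
    if op == 'white_screen':
        return ''
    if op == 'stack_trace':
        return 'Fatal error: Uncaught exception in index.php on line 3\n' + html
    return html.replace('</table>', '', 1)  # truncate markup

# Each (good, mutated, 1) pair augments the cross-application training set.
good = '<table><tr><td>item</td></tr></table>'
print(seed_fault(good))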
38
Preliminary Work: Step 3
• Step 3: Model real-world fault severity
based on a human study.
– Collect 400 real-world faults
• Evaluation: have subjects use the model to classify faults
39
Preliminary Work: Step 4
• Step 4: Compare the severities of real-world faults to seeded faults using human
data.
– Test the same human subjects with 200 manually-injected and 200 automatically-injected faults
– Conduct a survey of developers for fault severity
40
Preliminary Work: Step 5
• Step 5: Identify technologies and methodologies that
correlate with high-severity faults
• Do high severities correlate with:
– Programming language
– Level in three-tier architecture
– COTS component
– User error (usability issue)
– Type of error (business logic, resource allocation, syntax error, etc.)
– Fault taxonomies (existing)
– Surface features of visible output: white screens, stack traces, misspellings, formatting errors (see the sketch below)
• Evaluation: developer survey (time permitting)
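A sketch of how those surface features could be detected automatically (the heuristic patterns are assumed for illustration):

import re

def surface_features(page):
    """Coarse failure symptoms visible in rendered output."""
    return {
        'white_screen': not page.strip(),
        'stack_trace': bool(re.search(r'Fatal error|Traceback|Exception in', page)),
        'raw_sql_leak': 'SQL syntax' in page,
        'unbalanced_tables': page.count('<table') != page.count('</table>'),
    }

print(surface_features('Fatal error: Uncaught exception in cart.php on line 3'))
# {'white_screen': False, 'stack_trace': True, ...}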
41
Preliminary Work: Step 6
• Step 6: Identify testing techniques to target
high-severity faults
– Targeted testing
– Assign each test case a severity rating a priori
• Evaluation: compare the severity of faults
exposed with our technique versus other
test suite reduction approaches
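A sketch of the a priori idea (field names and ratings are hypothetical; severity would come from the Step 3 model): order test cases by predicted severity per unit cost, and keep a prefix of that ordering that fits the testing budget.

def prioritize(test_cases):
    """Order test cases by predicted fault severity per unit execution cost."""
    return sorted(test_cases, key=lambda t: t['severity'] / t['cost'], reverse=True)

suite = [
    {'name': 'checkout_total', 'severity': 9.1, 'cost': 3},  # revenue-critical
    {'name': 'footer_layout',  'severity': 1.2, 'cost': 1},
    {'name': 'login_session',  'severity': 7.5, 'cost': 2},
]
for t in prioritize(suite):
    print(t['name'])
# Reduction: keep only as many top-ranked cases as the budget allows.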
42
Expected Contributions
• Problem: faults in web-based applications
cause losses of revenue, and they are hard to
test
• Approach: study errors in web-based
applications in the context of web testing
• Solution: improve the state of the art in web
testing techniques through guidelines targeted at
high severity faults and automation and
precision in regression testing
43
Expected Contributions
• Reduce the cost of regression testing by
constructing a precise oracle-comparator
• Develop a model of customer-perceived
severities of web application faults
• Validate or refute fault injection as a standard for
measuring web application test suite quality
• Propose new software engineering guidelines
for web application development and testing
44
Questions?
45
Original Contributions
• Fault Severity Model
– Severity has not been studied in this domain, and
customers are an integral part of these applications
– Providing a new metric to the research community
– Validate/refute fault seeding
• Precise oracle-comparators
– First to use different versions of benchmarks
– Can be completely automated
– XML and HTML
46
Expected Impact
• Fault Severity Model
– Can be applied to testing techniques in this field to
make them financially feasible for developers
– Change the way in which test suite efficacy is
measured
– Potentially impact web site design as usability issues
may become more evident
• Precise oracle-comparator
– Automation makes it much more feasible for adoption
than existing techniques
– Potentially allow companies to conduct regression
testing if they were not testing beforehand
47
Timeline
• Steps 1 and 2: precise comparators
– Completed, 2 papers under submission
• Steps 3 and 4: human study
– Data collection completed, analysis under way for submission of
1 paper
– Expected completion by September
• Step 5: software engineering guidelines
– Expected completion by October
– Expected 0.5 paper
• Step 6: testing according to fault model
– Expected completion by February
– Expected 1 paper
48