Lectures 13, 14 & 15


Transcript Lectures 13, 14 & 15

CSE 7314
Software Testing and Reliability
Robert Oshana
[email protected]
Lecture 13
Analysis and Design
Chapter 5
Introduction
• Many design techniques available
• Choice based on many factors
– Nature of system
– Risk of implementation
– Level of test
– Skill set of testers
• Create inventories
Process for creating an inventory
1. Gather reference materials
• Requirements documentation
• Design documentation
• User’s manuals
• Product specifications
• Training manuals
• Customer feedback
2. Form a brainstorming team
• Test manager
• Systems architect
• Senior developer
• Business analyst
• Marketing representative
3. Determine test objectives
• Functions or methods
• Constraints or limits
• System configurations
• Interfaces with other systems
• Conditions on input and output attributes
• Behavior rules
4. Prioritize objectives
• Scope
• Breadth
• Risk
• Choose an objective that has broad coverage of the system
5. Parse objectives into lists
• Parse the highest-priority objectives into more detailed components
• Lower-priority objectives are parsed into detail only if time allows
• Start by not making them too detailed, or the test cases can become overwhelming
6. Create inventory tracking matrix
7. Identify tests for unaddressed conditions
• Some conditions may not have a test mapped to them
• Rather than modify existing test cases, it is easier to add new test cases to address untested conditions
8. Evaluate each inventory item
• Evaluate for adequacy of coverage and add additional test cases as required
– Never quite complete
9. Maintain the testing matrix
Black box vs White box
BB and WB testing
• Used alone, white box or black box testing each improves quality by roughly 40%; used together, they improve quality by roughly 60%
Black box science
Equivalence partitioning
• A group of tests forms an equivalence class if you believe that:
– They all test the same thing
– If one catches a bug, the others probably will too
– If one doesn’t catch a bug, the others probably won’t either
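A minimal sketch in Python of one test per partition (the discount function and its classes are hypothetical, not from the lecture):

```python
# Hypothetical function whose input domain falls into three
# equivalence classes: child (<13), adult (13-64), senior (65+).
def discount(age: int) -> float:
    if age < 13:
        return 0.50
    elif age < 65:
        return 0.00
    return 0.30

# One representative test per class; any other value in the same
# class should exercise the same logic and find the same bugs.
assert discount(8) == 0.50    # class: age < 13
assert discount(30) == 0.00   # class: 13 <= age < 65
assert discount(70) == 0.30   # class: age >= 65
```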
Boundary value analysis
• Boundaries are often prone to failure
• Does it make sense to also test in the middle?
• Procedure
– Test exact boundaries
– Value immediately above upper boundary
– Value immediately below lower boundary
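A minimal sketch of the procedure above, using a hypothetical valid range of 1..100:

```python
# Hypothetical check: valid input is 1..100 inclusive.
def in_range(n: int) -> bool:
    return 1 <= n <= 100

assert in_range(1) and in_range(100)   # exact boundaries
assert not in_range(101)               # just above the upper boundary
assert not in_range(0)                 # just below the lower boundary
assert in_range(50)                    # optional middle value
```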
Decision tables
• List all possible conditions (inputs) and all possible actions (outputs)
• Useful for describing critical components of a system that can be defined by a set of rules
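A minimal sketch with a hypothetical payroll-style rule set (not the lecture's payroll example): each column of the decision table becomes one rule and one test case.

```python
# Conditions (inputs): is_salaried, worked more than 40 hours.
# Actions (outputs): how pay is computed. One entry per rule.
RULES = {
    (True,  True):  "flat salary",
    (True,  False): "flat salary",
    (False, True):  "hourly plus overtime",
    (False, False): "hourly",
}

def pay_action(is_salaried: bool, hours: float) -> str:
    return RULES[(is_salaried, hours > 40)]

# One test case per rule in the table
assert pay_action(True, 50) == "flat salary"
assert pay_action(False, 50) == "hourly plus overtime"
assert pay_action(False, 30) == "hourly"
```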
Decision table
Test cases for payroll example
End of Lecture – 10 minute break
Lecture 14
State transition diagrams
• An old but effective method for describing a system design and guiding our testing
• Functionality depends on the current input and also on past input (state and transitions)
• Each transition maps to a requirement
• States are expected output opportunities
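A minimal sketch of testing from a state transition model (the turnstile states and events here are hypothetical):

```python
# Each (state, event) pair maps to a next state; in a real system
# each transition would trace back to a requirement.
TRANSITIONS = {
    ("LOCKED",   "coin"): "UNLOCKED",
    ("LOCKED",   "push"): "LOCKED",
    ("UNLOCKED", "push"): "LOCKED",
    ("UNLOCKED", "coin"): "UNLOCKED",
}

def step(state: str, event: str) -> str:
    return TRANSITIONS[(state, event)]

# One test per transition; the resulting state is the expected output.
assert step("LOCKED", "coin") == "UNLOCKED"
assert step("UNLOCKED", "push") == "LOCKED"
assert step("LOCKED", "push") == "LOCKED"
assert step("UNLOCKED", "coin") == "UNLOCKED"
```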
Orthogonal arrays
• Two-dimensional array of integers
• Choosing any two columns in the array gives all combinations of the numbers
• Used when there are too many tests to write and execute
• OATS (Orthogonal Array Testing Strategy) allows the choice of a “good” subset
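A small sketch of the defining property, using the standard L4(2^3) array (three two-level factors covered by four tests):

```python
from itertools import combinations, product

# L4(2^3): 4 rows (tests), 3 columns (factors), levels 0 and 1.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Any two columns, taken together, contain every pair of levels,
# so 4 tests give full pairwise coverage instead of all 8 combinations.
for c1, c2 in combinations(range(3), 2):
    pairs = {(row[c1], row[c2]) for row in L4}
    assert pairs == set(product((0, 1), repeat=2))
```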
Orthogonal array
Example orthogonal array
Black box art
Ad hoc testing
• Based on experience
• Pareto analysis approach
• Risk analysis (importance to the user)
• Problematic situations (boundaries, etc.)
• Make sure the problem can be replicated
Random testing
• Creating tests where the data is in the format of real data but all of the fields are generated randomly, often using a tool
• Minimally defined parameters
– “Monkeys”
– “Intelligent monkeys”
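A minimal sketch of a data “monkey” (the record schema is hypothetical); the fixed seed addresses the replication problem listed on the next slide:

```python
import random
import string

def random_record() -> dict:
    """One record shaped like real data, with randomly generated fields."""
    return {
        "name": "".join(random.choices(string.ascii_letters, k=8)),
        "age": random.randint(0, 120),
        "zip": "".join(random.choices(string.digits, k=5)),
    }

random.seed(42)  # fixed seed so a failing run can be recreated
for _ in range(100):
    record = random_record()
    # feed each record to the system under test here
```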
Random testing weaknesses
• Tests are often not realistic
• No gauge of actual coverage
• No measure of risk
• Many tests become redundant
• Lots of time to develop expected results
• Hard to recreate
• Hard to recreate
Semi-random testing
• Refined random testing
• Uses equivalence partitioning
• Adds little confidence beyond systematic techniques
• The number of tests may explode if not careful
• The “intelligent” monkey
Exploratory testing
• Test design and execution are conducted concurrently
• Results prompt the tester to delve deeper
• Not the same as ad hoc testing
• A good alternative to structured testing techniques
White box science
White box testing
• Look inside a component and create tests based on its implementation
Cyclomatic complexity
• From mathematical graph theory
• C = e – n + 2p
– e = number of edges in the graph (number of arrows)
– n = number of nodes (basic blocks)
– p = number of independent procedures
Example
C = 7 – 6 + 2(1) = 3
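A hypothetical function whose control-flow graph matches the numbers above (e = 7, n = 6, p = 1):

```python
def classify(x: int) -> str:
    if x < 0:        # decision node 1
        return "negative"
    elif x == 0:     # decision node 2
        return "zero"
    else:
        return "positive"

# Nodes (n = 6): test x < 0, test x == 0, three return blocks, exit.
# Edges (e = 7): two out of each decision, three returns into exit.
# C = e - n + 2p = 7 - 6 + 2(1) = 3, i.e. three independent paths,
# exercised by the inputs x = -1, x = 0, and x = 1.
```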
Code coverage
• Design test cases using the techniques discussed
• Measure code coverage
• Examine unexecuted code
• Create test cases to exercise uncovered code (if time permits)
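A usage sketch with coverage.py (a real, widely used tool; assumes `pip install coverage`). The report flags the unexecuted line, telling us which test to add next:

```python
import coverage

cov = coverage.Coverage()
cov.start()

def sign(x: int) -> int:
    if x > 0:
        return 1
    return -1          # never executed by the test below

assert sign(5) == 1    # only the positive branch is exercised

cov.stop()
cov.report(show_missing=True)  # lists the missed line numbers
```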
Structure of a test procedure specification
Specification for a typical system-level test
Test implementation
Chapter 6
Test implementation process
• Acquiring test data
• Developing test procedures
• Preparing the test environment
• Selecting and implementing the tools used to facilitate the process
Test environment
• Collection of various pieces
– Data
– Hardware configurations
– People (testers)
– Interfaces
– Operating systems
– Manuals
– Facilities
People
• Not just the execution of tests
• Design and creation as well
• Should be done by people who understand the environment at a certain level
– Unit testing by developers
– Integration testing by systems people
End of Lecture – 10 minute break
Lecture 15
Hardware configuration
• Each customer could have a different configuration
• Develop “profiles” of customers
• Valuable when a customer calls with a problem
• If cost is limited, create a “typical” environment
Co-habitating software
• Applications installed on a PC will have other apps running alongside them
• Do they share common files?
• Is there competition for resources between the applications?
• Inventory and profile them
Interfaces
• Difficult to test and a common source of problems once the system is delivered
• Systems may not have been built to work together
– Different standards and technology
• Many tests have to be simulated, which adds to the difficulty
Source of test data
• Goal should be to create the most realistic data possible
• Real data is desirable
• Challenges
– Different data formats
– Sensitive
– Classified (military)
• Adds to the overall cost
Data source characteristics
Volume of test data
• In many cases a limited volume of data is sufficient
• Volume, however, can have a significant impact on performance
• The data mix is also important
Repetitive and tedious tasks
Test tooling traps
• No clear strategy
• Great expectations
• Lack of buy-in
• Poor training
• Automating the wrong thing
• Choosing the wrong tool
• Ease of use
• Choosing the wrong vendor
Test tooling traps (continued)
• Unstable software
• Doing too much, too soon
• Underestimating time/resources
• Inadequate or unique testing environment
• Poor timing
• Cost of tools
Evaluating testware
• QA group
• Reviews
• Dry runs
• Traceability
Defect seeding
• Developed to estimate the number of bugs resident in a piece of software
• The software is seeded with bugs, and tests are then run to determine how many of the seeded bugs are found
• Can predict the number of bugs remaining
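The standard seeding arithmetic, with hypothetical counts: if testing finds seeded and native bugs at the same rate, the ratio estimates the native total.

```python
seeded_planted = 20   # bugs we deliberately inserted
seeded_found = 15     # of those, how many testing caught
native_found = 30     # real bugs testing caught

# Testing caught 15/20 = 75% of the seeded bugs, so assume it also
# caught ~75% of the native bugs.
estimated_native_total = native_found * seeded_planted / seeded_found
remaining = estimated_native_total - native_found
print(estimated_native_total, remaining)  # 40.0 10.0
```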
Defect Seeding
Mutation analysis
• Used as a method for auditing the quality of unit testing
• Insert a mutant statement (bug) into the code
• Run the unit tests
• The result determines whether the unit testing was comprehensive or not
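A minimal sketch (hypothetical unit and mutant): a suite that misses the boundary lets the mutant survive, revealing the gap.

```python
def is_adult(age: int) -> bool:
    return age >= 18

def is_adult_mutant(age: int) -> bool:
    return age > 18            # mutant: >= changed to >

def weak_suite(fn) -> bool:
    """No boundary test, so it cannot tell the two apart."""
    return fn(30) is True and fn(5) is False

assert weak_suite(is_adult)
assert weak_suite(is_adult_mutant)   # mutant survives -> suite inadequate

# Adding the boundary case kills the mutant:
assert is_adult(18) is True
assert is_adult_mutant(18) is False  # mutant detected ("killed")
```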
Steps in mutation analysis
End of Lecture – 10 minute break