A Novel Approach to Unit Test: The Aspect
JAOUT: Automated Generation of Aspect Oriented Unit Test
Guoqing Xu, Z. Yang, H. Huang, Q. Chen, L. Chen and F. Xu
Software Engineering Lab (SEL)
East China Normal University
APSEC'04, Dec. 2nd 2004, Pusan, Korea
Background
Test oracle problem
-- how to identify oracles in unit testing?
Automating test oracle generation
-- Manually writing test code is a labor-intensive job.
-- JMLUnit framework [CL02].
Automating test case generation
-- Korat [BKM02]
-- JMLAutoTest [XY03]
Problems
The current test automation process relies on formal assertions and predicates.
Conventional specifications focus only on a program's functional behavior, with little support for specifying non-functional behavior, e.g. temporal logic, performance...
How can we find a common approach to identifying oracles for testing non-functional aspects? How can this process be automated?
Our approach
Using a crosscutting property of the program as the criterion for checking the correctness of the application in the corresponding aspect is well suited to unit testing problems.
-- In AOP, crosscutting properties are used to model the program from different aspects.
-- Some problems that are difficult to handle in conventional ways are solved easily.
Application-specific Aspect
A new notion: application-specific aspects (ASS)
-- top-level, application-related aspects.
-- established at the design level.
-- may be picked up from low-level language aspects.
-- all the ASS for the same application share some common features, e.g. testing ASS, tracing ASS...
-- may be translated into language-level aspects.
Aspect-Oriented Test Description Language
How to describe ASS?
-- a formal way is needed.
-- it cannot be too complicated.
Aspect-Oriented Test Description Language (AOTDL)
-- used by the designer at the design level.
-- can be translated into AspectJ aspects.
Basic units in AOTDL
-- Utility unit
-- MeaninglessCase Advice unit
-- Error Advice unit
Advice syntax: advicetype (arguments): pointcuts: conditions: message
AOTDL (cond.)
class Stack{
    public void init(){...}
    public void push(Node n){...}
    ...
}

TestingAspect tempLogic{
    Utility{
        protected boolean isInitialized = false;
        // push is reached
        pointcut pushReached(Stack st):
            target(st) && call(void Stack.push(Node));
        // init is reached
        pointcut initReached(Stack st):
            target(st) && call(void Stack.init());
        // after advice
        after(Stack st): initReached(st){
            isInitialized = true;
        }
    }
AOTDL (cond.)
    MeaninglessCase Advice{
        /* advices for specifying criteria of meaningless test cases */
        before(Stack s) : pushReached(s) :
            s.getSize() >= MAX : "Overflow";
        ...
    }

    Error Advice{
        /* advices for specifying criteria of test errors */
        before(Stack s) : pushReached(s) :
            !isInitialized : "Not Initialized";
        ...
    }
}
AOTDL (cond.)
// error advices
before(Stack s) throws TestErrorException: pushReached(s){
    if (!isInitialized){
        TestErrorException ex = new TestErrorException("Not Initialized");
        ex.setSource("TempLogic");
        throw ex;
    }
}
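The translated advice relies on two exception types that the slides use but do not define. A minimal sketch of what they might look like is given below; only the names TestErrorException, MeaninglessCaseException, setSource and getSource appear in the slides, and extending RuntimeException is an assumption so that advice can throw them at join points whose methods declare no checked exceptions.

// Sketch (assumption): exception types used by the generated testing aspects.
public class TestErrorException extends RuntimeException {
    private String source; // name of the Testing Aspect that detected the error

    public TestErrorException(String message) {
        super(message);
    }

    public void setSource(String source) {
        this.source = source;
    }

    public String getSource() {
        return source;
    }
}

// Thrown when an advice decides that the current input is a meaningless test case.
class MeaninglessCaseException extends RuntimeException {
    public MeaninglessCaseException(String message) {
        super(message);
    }
}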
JAOUT Framework Overview
[Figure: An overview of the basic technique. The JAOUT/Translator translates AOTDL Testing Aspects into AspectJ aspects; these are woven with the Java programs and compiled by the AspectJ compiler (ajc) into bytecode files. The JAOUT/Generator generates JUnit test classes, which are executed by the JUnit test runner; JMLAutoTest supplies the test cases, the test is run, and the test results are reported.]
JAOUT: Automated Generation of AO Testing Framework
JAOUT takes a Java class M.java as input and automatically generates a JUnit test framework:
-- Aspect_M_Test.java, the JUnit unit test class.
-- Aspect_M_TestCase.java, the test case provider.
-- Aspect_M_TestClient.java, the JMLAutoTest test case generation class.
Test Framework Definition
For a class C and its method M(A1 a1, A2 a2, ..., An an), the generated test suite is defined as
-- C[ ] receivers;
-- A1[ ] vA1; ... ; An[ ] vAn;
There is a corresponding init_Ai method for each type Ai and a method init_receiver in the test case provider to initialize the test cases (sketched below).
Testers use the APIs provided by JMLAutoTest to generate the test case space in the test client and pass it to the test case provider.
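As an illustration, here is a sketch of what the generated test case provider might look like for the earlier Stack.push(Node) example. The class name Stack_Aspect_TestCase is taken from the test result slide and the field and method names follow the naming scheme above, but the class body itself is an assumption, not the literal code emitted by the JAOUT/Generator.

// Sketch (assumption): shape of a generated test case provider for Stack.push(Node).
public class Stack_Aspect_TestCase {
    // the test suite data: receiver objects plus one array per parameter type
    protected Stack[] receivers;   // C[]  receivers
    protected Node[]  vA1;         // A1[] vA1 -- the single parameter of push

    // one init_Ai method per parameter type Ai ...
    protected void init_Node() {
        // populated from a test case space built with JMLAutoTest in the
        // test client and passed to this provider
        vA1 = new Node[] { /* cases supplied by the test client */ };
    }

    // ... and one init method for the receiver objects
    protected void init_receiver() {
        receivers = new Stack[] { /* receivers under test */ };
    }
}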
Generated Test Method
public void testM(){
    for (int i0 = 0; i0 < receivers.length; i0++){
        for (int i1 = 0; i1 < vA1.length; i1++){
            ...
            try {
                receivers[i0].M(vA1[i1], ..., vAn[in]);
            } catch (MeaninglessCaseException e) {
                /* ... tell framework test case was meaningless ... */
                continue;
            } catch (TestErrorException e) {
                String msg = e.getSource();
                fail(msg + NEW_LINE + e.getMessage());
            } catch (java.lang.Throwable e) {
                continue;
            } finally {
                setUp(); // restore test cases
            }
        }
    }
}
Test Result
...in push
false
F
Time: 0.06
There was 1 failure:
1) testPush(sample.Stack_Aspect_TestCase)
junit.framework.AssertionFailedError: In Testing Aspect TempLogic: Not Initialized!
        at sample.Stack_Aspect_Test.testPush(Stack_Aspect_Test.java:155)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at sample.Stack_Aspect_Test.run(Stack_Aspect_Test.java:26)
        at sample.Stack_Aspect_TestCase.main(Stack_Aspect_TestCase.java:24)
FAILURES!!!
Tests run: 3, Failures: 1, Errors: 0, Meaningless: 0
Double-phase Testing
There are a large number of meaningless test inputs in the generated test case space.
It is a waste of time to run programs with meaningless inputs.
The effectiveness of the test results is compromised if the test is exercised with too many meaningless cases.
Double-phase Testing (Cond.)
Goals:
1. Prevent meaningless test cases from being processed in the final test, and thereby save time.
2. Do not require testers to know the details of the program under test.
Double-phase Testing (cond.)
Steps
1. Establish the operational profile.
2. The first phase test (pre-test).
3. The second phase test (final test).
Methods
1. Perform statistics-based test case selection.
2. Use the pre-test as the cost of reducing the number of meaningless cases in the final test.
Working Sequence
Operational Profile
The operational profile is the criterion defined by the tester to divide the generated test case space into several partitions.
The validity of double-phase testing relies on the quality of the operational profile.
It is a good idea to start out with several models and evaluate their predictive accuracy before settling on one.
The first phase
Take a relatively small number (e.g. 10%) of test cases out of each partition according to an even distribution.
Run these groups of cases separately.
Record the number of meaningless test cases that appear in each group.
The second phase
Estimate the probability that a test case in each partition is meaningless.
Reorganize the test cases in inverse proportion to the share of meaningless cases in each partition, e.g. take 80% of the cases from the partition that produced 20% of the meaningless ones in the first phase (sketched below).
Run the final test.
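A small sketch of how the second-phase reallocation could be computed from the first-phase statistics follows; the class, method and array names are assumptions, since the slides only give the inverse-proportion rule itself.

// Sketch (assumption): derive the final-test share of each partition from the
// pre-test statistics, in inverse proportion to its rate of meaningless cases.
public class DoublePhaseAllocator {

    public static double[] secondPhaseShares(int[] preTestSizes, int[] meaninglessInPreTest) {
        int k = preTestSizes.length;
        double[] shares = new double[k];
        double sum = 0.0;
        for (int i = 0; i < k; i++) {
            // estimated probability that a case from partition i is meaningless
            double pMeaningless = (double) meaninglessInPreTest[i] / preTestSizes[i];
            // weight each partition by its estimated share of meaningful cases
            shares[i] = 1.0 - pMeaningless;
            sum += shares[i];
        }
        for (int i = 0; i < k; i++) {
            // normalize so the shares of the final-test budget add up to 1
            shares[i] = (sum == 0.0) ? 1.0 / k : shares[i] / sum;
        }
        return shares;
    }

    public static void main(String[] args) {
        // two partitions producing 20% and 80% meaningless cases in the pre-test
        double[] shares = secondPhaseShares(new int[] {100, 100}, new int[] {20, 80});
        System.out.printf("final-test shares: %.2f / %.2f%n", shares[0], shares[1]);
        // prints 0.80 / 0.20, matching the example on this slide
    }
}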
Experimental results
A sample class
public class BinaryTree{
    public Node root;
    protected int size;
    public int getSize() {...}
    public JMLObjectSequence toObjectSet() {...}
    ...
}

public class Node{
    public Node left;
    public Node right;
    public int ID;
}
BinaryTree findSubTree(BinaryTree parentTree, Node thisNode)
Testing ASS
MeaninglessCase Advice{
    before(BinaryTree t) : MethodReached(t) :
        parentTree == null || thisNode == null ||
        (forall Node n; parentTree.toObjectSet().has(n); n.ID != thisNode.ID)
    }
}
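By analogy with the translated error advice shown earlier, this MeaninglessCase advice might be translated into AspectJ roughly as sketched below. The pointcut definition and the recursive tree walk are assumptions; only the logical condition comes from the slide, and the forall over parentTree is checked here by walking the tree through the public left/right fields rather than through JMLObjectSequence.

// Sketch (assumption): one possible AspectJ translation of the MeaninglessCase
// advice for findSubTree. The exception type matches the MeaninglessCaseException
// sketched earlier.
public aspect BinaryTreeMeaninglessAspect {

    pointcut MethodReached(BinaryTree parentTree, Node thisNode):
        call(BinaryTree *.findSubTree(BinaryTree, Node)) && args(parentTree, thisNode);

    before(BinaryTree parentTree, Node thisNode): MethodReached(parentTree, thisNode) {
        if (parentTree == null || thisNode == null
                || !containsId(parentTree.root, thisNode.ID)) {
            // the test case cannot exercise findSubTree meaningfully
            throw new MeaninglessCaseException("No node with this ID in the parent tree");
        }
    }

    // true iff some node in the subtree rooted at n has the given ID
    private static boolean containsId(Node n, int id) {
        if (n == null) return false;
        return n.ID == id || containsId(n.left, id) || containsId(n.right, id);
    }
}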
Test case generation
We use JMLAutoTest to generate the test case space of type BinaryTree with a few nodes (five through eight).
We also generate the test case space of type Node. It contains 12 nodes whose IDs range from 0 to 11.
Divide the test case space
For the test case space of type BinaryTree, we do not divide it; we leave it as the only partition.
For the space of type Node, we divide it into two partitions: the first contains the nodes whose IDs range from 0 to 5, and the second contains the rest (sketched below).
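For illustration, the operational profile on this slide could be expressed in code roughly as follows; the class and method names are assumptions, while the ID boundary of 5 comes from the slide.

import java.util.ArrayList;
import java.util.List;

// Sketch (assumption): divide the Node test case space into the two partitions
// described above -- IDs 0..5 versus IDs 6..11.
public class NodeOperationalProfile {

    public static List<List<Node>> partition(List<Node> nodeCases) {
        List<Node> lowIds  = new ArrayList<Node>();   // first partition: IDs 0..5
        List<Node> highIds = new ArrayList<Node>();   // second partition: the rest
        for (Node n : nodeCases) {
            if (n.ID <= 5) {
                lowIds.add(n);
            } else {
                highIds.add(n);
            }
        }
        List<List<Node>> partitions = new ArrayList<List<Node>>();
        partitions.add(lowIds);
        partitions.add(highIds);
        return partitions;
    }
}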
A Comparison
Columns: number of nodes in the binary tree; then, for the double-phase way and for the conventional way, the meaningful/total cases in the final test, the time of the first phase (s), and the total time of the test (s).

nodes | Double-phase way                        | Conventional way
      | meaningful/total   1st phase  total     | meaningful/total   1st phase  total
  5   | 410/492            0.079      0.266     | 420/1008           0          0.219
  6   | 1572/1572          0.188      0.422     | 1584/3168          0          0.468
  7   | 5136/5136          0.36       0.766     | 6006/10296         0          1.25
  8   | 17148/17148        0.703      2.016     | 22880/34320        0          3.484
Related Work (Spec-based test)
TestEra -- Automating OO test generation. [MK01] Mulsaw project, MIT.
JMLUnit -- Generating test oracles from JML runtime assertions. [CL02] Iowa State Univ.
Korat -- Generating test cases based on Java predicates. [BKM02] Mulsaw project, MIT.
JMLAutoTest -- Generating a test framework from JML runtime assertions and test cases based on class invariants. [XY03] SEL, ECNU.
Jov -- Java automated unit testing based on inferred program properties. [XN03] Univ. of Washington.
Conclusions
Traditional formal predicates do not deal with non-functional properties of the program.
AOP is well suited to unit testing problems.
Designers use AOTDL to build Application-Specific Testing Aspects.
JAOUT translates Testing Aspects into AspectJ aspects automatically.
JAOUT automatically generates the JUnit test framework and uses the runtime messages thrown by Testing ASS as test oracles.
JAOUT uses a double-phase testing approach to filter out meaningless cases.
Thank you… Questions?