Automatically Extracting Mock Object Behavior from Design by Contract(TM) Specification for Test Data Generation
Stefan J. Galler, Andreas Maller, and Franz Wotawa
Institute for Software Technology, Graz University of Technology
Inffeldgasse 16b/II, 8010 Graz, Austria
{galler,wotawa}@ist.tugraz.at

ABSTRACT
Test data generation is an important task in the process of automated unit test generation. Random and heuristic approaches are well known for test input data generation. Unfortunately, in the presence of complex pre-conditions, especially in the case of non-primitive data types, those approaches often fail. A promising technique for generating an object that exactly satisfies a given pre-condition is mocking, i.e., replacing the concrete implementation with an implementation that only considers the behavior necessary for a specific test case. In this paper we follow this technique and present an approach for automatically deriving the behavior of mock objects from given Design by Contract(TM) specifications. The generated mock objects behave according to the Design by Contract(TM) specification of the original class. Furthermore, we make sure that the observed behavior of the mock object satisfies the pre-condition of the method under test. We evaluate the approach using the Java implementations of 20 common Design Patterns and a stack-based calculator. Our approach clearly outperforms pure random data generation in terms of line coverage.

Categories and Subject Descriptors
D.1.5 [PROGRAMMING TECHNIQUES]: Object-oriented Programming; D.2.5 [SOFTWARE ENGINEERING]: Testing and Debugging - data generator

General Terms
Algorithms, Verification

Keywords
test data generation; automated unit testing; mock object; Design-by-Contract

1. INTRODUCTION
Testing is a very important but resource-demanding part of every software development process to ensure software quality. Automating this process reduces costs while improving software quality at the same time. Our work focuses on automatically generating test input data for JUnit tests based on a provided Design by Contract(TM) specification. In particular, we are interested in generating values that satisfy the given pre-condition for non-primitive type parameters of the method under test.

Consider the add() method in Figure 1 as the method under test. To satisfy the pre-condition, a Stack object has to be constructed that returns a value greater than or equal to two when size() is called. Consider the actual java.util.Stack implementation: it provides one method to put an element on the stack, one to remove an element from the stack, and three methods to observe the current state of the stack without changing it. Therefore, generating a Stack object that satisfies the given pre-condition requires at least two successive calls to push() at the very end of the object construction method call sequence. For a random approach such as JET [3], or a state space exploring technique such as PKorat [14], this means that a pre-condition satisfying Stack object is generated only in one out of 25 cases: for both the first and the second method call, only one out of five available methods is suitable (1/5 * 1/5 = 1/25). Using more advanced random approaches, like adaptive random testing, may improve this probability slightly. On the other hand, real-world software may have even more complex pre-conditions, so even more runs are required to generate test input data that satisfies the given pre-condition of the method under test. Generating objects that cannot be used costs time, and we try to avoid it.

Therefore, we suggest replacing all non-primitive parameters of the method under test with mock objects. Our approach uses Design by Contract(TM) specifications to automatically configure the mock object such that the return values of all methods of the mocked object satisfy their post-conditions and the pre-condition of the method under test. The approach increases overall line coverage, since it is able to generate objects that satisfy the pre-condition of the method under test immediately; it does not have to construct multiple objects to obtain one that fits, as a random approach has to. The main contributions of this paper are an approach for automatically deriving the behavior of mock objects from Design by Contract(TM) specifications, and its evaluation on two case studies.
The evaluation of the approach on two case studies is presented in Section 4. After the discussion of related work in Section 5 we draw our conclusions in Section 6.

class Stack<T> {
  @Post("size()==@Old(size())+1")
  void push(T elem) { ... }

  @Pre("size()>0")
  @Post("@Return!=null")
  T peek() { ... }

  @Post("size()==@Old(size())-1")
  T pop() { ... }

  @Post("@Return>=0")
  int size() { ... }
}

class AdditionOperator {
  @Pre("stack.size()>=2 && otherParameter.isValid()==true && stack.peek().doubleValue()<=...")
  void add(...) { ... }
}

Figure 1: Running example. The Design by Contract(TM) annotated Stack class and the add() method under test.

2. DESIGN BY CONTRACT

Bertrand Meyer promoted Design by Contract(TM) [11] with the Eiffel programming language. In addition to typical interface, class and method declarations, Design by Contract(TM) allows the programmer to specify the semantics of object-oriented software components. Similar to legal contracts between two partners, a programmer can specify what a method requires from its caller and what it guarantees after execution. These contracts are written as conjunctions/disjunctions of Boolean expressions that are checked at runtime. The three main concepts of Design by Contract(TM) are [10]: pre-conditions, post-conditions, and class invariants.

...
var := [a-zA-Z][a-zA-Z0-9]*

Figure 2: Grammar of the supported pre-condition syntax. The pre-condition (PRECOND) is built up from Boolean expressions (BEXPR), each of them defining the return value of a method call (MCALL). The comparison of two method calls is not supported by the current implementation.
3. APPROACH

To generate test input data for a non-primitive parameter of the method under test, the following steps are performed:

1. the mock object for the given type is instantiated;
2. the required behavior of the mock object is derived from the Design by Contract(TM) specification of the given type;
3. the mock object is configured through the API of the used mock library to behave accordingly;
4. the generated mock object is returned and used as test input.

The required behavior of the mock object is determined based on the Design by Contract(TM) specification of the corresponding type and the given pre-condition of the method under test. The Design by Contract(TM) specification of the type defines the principal behavior of the mock object. The pre-condition strengthens this specification further, to guarantee that the mock object reacts as if it were in a state that satisfies the pre-condition. Therefore, we configure all methods of the mock object such that they return a value according to the conjunction of the pre-condition of the method under test and the post-condition of the mocked method. A detailed description of this configuration process is presented in Subsection 3.2. The presented approach leads to higher code coverage, in terms of covered lines, since it generates test input data that satisfies the pre-condition of more methods under test.
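For the running example, the configuration produced by steps 2 and 3 corresponds roughly to the following mock setup. This is a minimal sketch using Mockito; the paper does not name the mock library it uses, and the concrete return values stand in for generated ones.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Stack;

public class StackMockExample {
    @SuppressWarnings("unchecked")
    static Stack<Integer> preconditionSatisfyingStack() {
        Stack<Integer> stack = mock(Stack.class);
        // size(): post-condition @Return>=0, strengthened by the
        // pre-condition stack.size()>=2 of the method under test.
        when(stack.size()).thenReturn(2);
        // peek(): post-condition @Return!=null; the value 1 stands in
        // for a generated value satisfying the remaining constraints.
        when(stack.peek()).thenReturn(1);
        return stack;
    }
}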
3.1 Supported Specification Syntax

Figure 2 defines the syntax of the pre-conditions currently supported by the implementation. The pre-condition is a conjunction/disjunction of Boolean expressions. Each Boolean expression may consist of a left-hand side, a comparison operator, and a right-hand side; the latter two are optional in case the left-hand side evaluates to a Boolean value. The left-hand side is a method call or a single statement of multiple successive method calls, e.g., stack.pop().toString(). The right-hand side defines the comparison value for the return value of the last method call, in the example the call to toString(). All other method calls on the left-hand side are considered to return a valid instance, i.e., not null. The currently implemented syntax does not support a method call on the right-hand side. The pre-condition of our running example in Figure 1 is matched by the specified syntax.
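For illustration, the following pre-condition strings would be accepted or rejected under this syntax (examples of our own, not taken from the implementation):

  stack.size()>=2                 (accepted: method call, operator, value)
  stack.isEmpty()                 (accepted: Boolean left-hand side, no comparison)
  stack.pop().toString()!=null    (accepted: successive method calls on the left-hand side)
  a.size()==b.size()              (rejected: method call on the right-hand side)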
3.2 From Design by Contract(TM) to Mock Behavior
Automatically extracting mock object behavior from a Design by Contract(TM) specification for test data generation requires the class type to generate and its Design by Contract(TM) specification. The algorithm is shown in Figure 3. Each step is explained in detail in the following paragraphs.
build a map. First, a map containing all non-void methods of the mock object is created. Second, each method is mapped to its corresponding post-condition. Figure 4 shows a summary of the map for the java.util.Stack class of the running example (Figure 1).
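In code, this first step could look as follows. This is a minimal sketch assuming the contracts are available as runtime-visible annotations; the @Post annotation type shown here and the use of plain reflection are our assumptions, not the paper's actual implementation.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Hypothetical annotation mirroring the paper's @Post specification syntax.
@Retention(RetentionPolicy.RUNTIME)
@interface Post { String value(); }

final class SpecMapBuilder {
    // Step 1: map every non-void method of the mocked type to its post-condition.
    // (Filtering of java.lang.Object methods is omitted for brevity.)
    static Map<Method, String> buildMap(Class<?> mockedType) {
        Map<Method, String> map = new HashMap<>();
        for (Method method : mockedType.getMethods()) {
            if (method.getReturnType() == void.class) {
                continue; // void methods cannot be configured with a return value
            }
            Post post = method.getAnnotation(Post.class);
            // "true" marks a method without post-condition as unconstrained.
            map.put(method, post != null ? post.value() : "true");
        }
        return map;
    }
}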
derive requested state. To derive the requested state of the mock object, the pre-condition of the method under test (which includes constraints on the mock object) is transformed by rewriting the specification according to a number of rules. For each method of the mocked object, a sub-condition of the pre-condition is extracted such that the sub-condition constrains only the corresponding method. A sub-condition in that sense is a conjunction/disjunction of Boolean expressions with equal left-hand sides. The conditions m1 and m2 in Figure 5 are two Boolean expressions matching the BEXPR rule from Figure 2. Both impose constraints on the same method m - one of those present in the map. Condition o refers to any other Boolean expression. The given rules are applied repeatedly until no rule is applicable anymore, i.e., until the pre-condition does not include any Boolean expression that references the currently processed method. Basically, the rules are applied to the overall Boolean expression iteratively for each method m in the map. The result is a Boolean expression where all left-hand sides equal m, i.e., one imposing constraints only on method m.
1. build a map: method ↦ post-condition (postcond) of the method
2. derive the requested state of the mock object from the pre-condition of the method under test (precond):
   (a) split the pre-condition into a conjunction of sub-conditions (subcond), where each Boolean expression in one sub-condition always references the same method: subcond1 && ... && subcondn ⇒ precond
   (b) update the map to: method ↦ subcond && postcond
3. generate return values that satisfy the specification
4. generate Java code

Figure 3: From Design by Contract(TM) to mock behavior. An outline of the algorithm used to generate Java source code that configures a mock object which satisfies a given pre-condition and can be used as test input data.

java.util.Stack
method  ↦  constraint
size()  ↦  @Return>=0
pop()   ↦  size()==@Old(size())-1
peek()  ↦  @Return!=null

Figure 4: Map content after the first step of the transformation process. The map contains an entry for each non-void method: method ↦ post-condition.
Any Boolean expression o whose left-hand side differs from m1 can be removed. Therefore, rules 1 and 2 only keep m1 and m2. Rules 3, 4 and 5 are similar to 1 and 2, but in those cases even m2 can be removed, since the overall Boolean expression is satisfied as soon as m1 is satisfied. Following those rules guarantees that if all sub-conditions are satisfied, the original pre-condition is satisfied too. Each generated sub-condition is then conjoined with the already present specification of the corresponding method in the map. The resulting map of this step in the transformation process is given in Figure 6.
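As an illustration, consider extracting the sub-condition for size() from the pre-condition of add() in Figure 1 (a worked sketch; the elided comparison value of the peek() expression is abbreviated as "..."):

  stack.size()>=2 && otherParameter.isValid()==true && stack.peek().doubleValue()<=...

The expressions on isValid() and peek() constrain other methods and play the role of o, so they are removed, leaving stack.size()>=2. Conjoined with the post-condition @Return>=0 of size(), and with the method call rewritten to @Return, this yields the map entry @Return>=0 && @Return>=2 shown in Figure 6.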
method  ↦  constraint
size()  ↦  @Return>=0 && @Return>=2
pop()   ↦  size()==@Old(size())-1
peek()  ↦  @Return!=null && @Return.doubleValue()<=...

Figure 6: Map content after the second step of the transformation process.
m1 && o && m2 ⇒ m1 && m2    (1)
m1 || o && m2 ⇒ m1 && m2    (2)
m1 && o || m2 ⇒ m1          (3)
m1 || o || m2 ⇒ m1          (4)
m1 || m2 ⇒ m1               (5)

Figure 5: Extraction rules. Applying those rules iteratively until no rule is applicable anymore results in a set of sub-conditions. Each sub-condition imposes constraints on one and the same method. m1 and m2 are two Boolean expressions constraining the same method; o constrains a different method.
generate return values. For each method in the map, return values are generated that satisfy the specification present in the map. This is done by calling the value generator component of the test data generation tool and passing the specification as argument. Depending on the required return type, different value generators are either called with the given specification or called until the generated value satisfies the given sub-condition (or a maximum number of trials is reached). In our case, a value for primitive data types is generated randomly and checked against the specification; if the specification does not hold, a new value is generated randomly. For non-primitive data types the value generator component constructs a mock object recursively (up to a manually set depth). The provided specification is the sub-condition stripped of all Boolean expressions that do not constrain a return value of method calls on the requested type itself. For this elimination the same algorithm as described in Figure 5 is applied, this time with m1 and m2 referencing the method called on the object under generation. In addition, a new temporary variable has to be introduced that references the required return value; in the specification, the method call on the object under generation in m1 and m2 is replaced by this temporary variable. Given the pre-condition of the add() method in Figure 1 and generating the return value for the method call peek(), the specification provided to the value generator component is stack.peek().doubleValue()<=... with the call to peek() replaced by the temporary variable.
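For primitive return types, the generate-and-check loop described above can be sketched as follows. The names ValueSpec and satisfies and the bound MAX_TRIALS are hypothetical; the paper does not show its value generator's API.

import java.util.Random;

// Hypothetical representation of a parsed sub-condition such as
// "@Return>=0 && @Return>=2", evaluated against a candidate value.
interface ValueSpec {
    boolean satisfies(int candidate);
}

final class PrimitiveValueGenerator {
    private static final int MAX_TRIALS = 1000; // assumed retry budget
    private final Random random = new Random();

    // Generate random candidates until one satisfies the specification
    // or the maximum number of trials is reached.
    Integer generate(ValueSpec spec) {
        for (int trial = 0; trial < MAX_TRIALS; trial++) {
            int candidate = random.nextInt();
            if (spec.satisfies(candidate)) {
                return candidate;
            }
        }
        return null; // no satisfying value found within the budget
    }
}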
static state of the generated mock object. The presented approach works neither for implementations that expect state changes of the mock object, nor for Design by Contract(TM) specifications that specify the state of the mocked object after method execution. Since the generated mock object for the Stack in Figure 9 always returns the same value for each call to size() - one that satisfies the pre-condition of the method under test - the given implementation leads to an endless loop.

int sum(Stack<Integer> stack) {
  int tmp = 0;
  while (stack.size() > 0) {
    tmp += stack.pop();
  }
  return tmp;
}

Figure 9: Example implementation of a method that expects a state change of the mock object. Currently the generated test results in an endless loop.

This is caused by the static behavior configuration of the mock object: the configuration does not consider any internal state changes caused by method calls. Our future work will focus on extending the mock object with dynamic state information. The initial state of the mock object is generated by the presented approach; afterwards, the mock object records all method calls, and return values of method calls on the mock object are not configured statically but take the history of method calls into account.

return values must not depend on parameter values. Consider a method under test that requires the Math object in Figure 10 as test input.

public class Math {
  @Pre("x>0 && y>0")
  @Post("@Return==x*y")
  int multiply(int x, int y) {
    return x * y;
  }
}

Figure 10: Example specification of a post-condition for a method that returns the product of two integer values passed as parameters.

The presented approach uses the specified post-condition to generate, at test generation time, a return value that satisfies it. This does not work whenever the return value of a method depends on its parameters: the parameter values are not defined before test execution time, so the test data generation algorithm cannot generate a return value at test generation time. Future work will focus on generating the return value at test execution time by introducing a return value generator component that is called from the mocked method and returns a value satisfying the post-condition at runtime, similar to the symbolic mock object approach of Tillmann and Schulte [16].
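The future-work idea of deferring value generation to test execution time can be illustrated with a mock callback. The following is a speculative sketch using Mockito's thenAnswer (our choice of library for illustration, not the paper's tool), hard-wiring the post-condition of the Math class from Figure 10, which is assumed to be on the classpath.

import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class RuntimeValueExample {
    // Math refers to the annotated class from Figure 10, not java.lang.Math.
    static Math mathMock() {
        Math math = mock(Math.class);
        // The return value is computed at test execution time, when the
        // actual arguments are known, so @Post("@Return==x*y") holds.
        when(math.multiply(anyInt(), anyInt())).thenAnswer(invocation -> {
            int x = invocation.getArgument(0);
            int y = invocation.getArgument(1);
            return x * y;
        });
        return math;
    }
}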
4. CASE STUDY AND RESULTS

Using test data that is automatically generated by extracting mock object behavior from Design by Contract(TM) specifications improves line coverage on our case studies by at least 20%. Both test generation and test execution time increase only slightly.
4.1 Case Studies
We evaluate our approach on two different case studies.
Table 1: Case studies source code measurements. This table shows the size of the case studies in terms of number of classes, methods and non-commenting source statements (NCSS).

attribute          StackCalc   design pattern
classes                   40              161
methods                   71              825
testable methods          68              754
NCSS                     422             3223
Table 2: Amount of specifications. The table lists the amount of specifications written for each of the case studies.

attribute    StackCalc   design patterns
@Invariant           2                 0
@Pre                40               225
@Post               21               191
@SpecCase           44                 5
@Pure                4               169
@Also               15                 2
@Helper              3                71
sum                129               663
Both case studies were implemented without knowledge that they would later be annotated with Design by Contract(TM) specifications and used as case studies. Therefore, we can guarantee that both case studies are not tailored to this evaluation.

1. StackCalc: This case study was developed by two students as course work. It implements a stack-based calculator with a command line input/output interface. Multiple mathematical operators are implemented that operate on the Stack.

2. Design Pattern: This case study is available online from the homepage of the Design Patterns in Java [9] book. The implementation consists of all Design Patterns introduced by Gamma et al. [5]. We use this case study because well-designed software heavily builds upon Design Patterns; if we can show an improvement in the testability of Design Patterns, we believe the approach is useful for more general software components as well.

The Design by Contract(TM) specifications were added to the case studies by the authors without changing any implementation or design details. To give an impression of the case studies in terms of size and amount of specification, Table 1 shows some common software metrics and Table 2 lists the number of specifications for both case studies. All methods annotated with @Helper are not tested (e.g., getter methods in our case studies).
4.2 Evaluation Process

We compare the presented approach to random data generation. For the evaluation we use multiple servers with the same hardware and software configuration (AMD Athlon 64-bit dual core, 2.2 GHz; 2 GB RAM).
Both our approach and the random approach are executed with 25 different parameter value combinations. Each of these configurations is executed 30 times for each approach. The two varying parameters are:

1. mutating probability: This parameter defines the probability that, after executing a method call on a generated object, another method is chosen to change the object's state (mutate the object state). This parameter is only used by the random approach and takes one of the following values: 0.0, 0.2, 0.4, 0.6, 0.8.

2. attempts: This parameter defines how many tests are generated for each method of the case study. We used the values 1, 2, 3, 4, 5.

We claim that using automatic generation of mock behavior from Design by Contract(TM) specifications increases line and branch coverage compared to the random approach. This is due to the ability of the presented approach to generate input data that satisfies the given pre-condition for more methods. Using more sophisticated methods usually takes more resources. Since the approach is only applicable to industrial projects when the increase in test generation and execution time is worth it, we also evaluate the approaches with respect to test generation time and test execution time.
4.3 Detailed Results
The presented approach leads to a significant increase in line and branch coverage for both case studies. The improvement of 50% for the StackCalc case study (Table 3) is due to its design structure. About half of the methods under test operate on the Stack (e.g., they sum up, multiply, subtract or divide the last two elements on the stack) and therefore have very similar pre-conditions: they require at least two elements on the stack. The random approach was hardly ever able to generate a Stack object with two elements, whereas the presented approach always succeeded in generating a Stack object that satisfies the given pre-condition. The enormous increase in branch coverage can be tracked down to the same issue: all methods operating on the stack use conditional statements, and as soon as the test input data satisfies the pre-condition of the method under test, at least one conditional statement is executed.

The Design Pattern case study, as the more general one in terms of software design, still shows an increase in line coverage of 20%, as presented in Table 4. Since this case study includes all design patterns, this increase cannot be explained by a special software design anymore. One might have expected the coverage to decrease because mock objects replace real implementations. This is not the case, because each class is still tested, but separately; only when a class is used as a parameter for a method is its implementation mocked.

As expected, the presented approach requires slightly more time for both test generation and test execution. Tables 5 and 6 show, in the first column of each approach, the required time to generate all tests divided by the number of methods for which the approach tried to generate a test; the second column shows the required time divided by the number of actually generated tests. The overhead is caused by heavy use of Java reflection and the cost of instantiating a mock object.
Table 3: StackCalc: achieved coverage. Line coverage increased by 50% for the StackCalc case study. This is mainly due to the fact that the random approach hardly ever constructed a Stack object with two elements, so no methods that operate on the Stack (e.g., multiply, add, subtract) could be tested.

              RANDOM            MOCK
attempts   line    branch    line    branch
1          11%     1%        17%     8%
2          11%     1%        17%     9%
3          12%     1%        18%     10%
4          14%     2%        21%     12%
5          12%     2%        17%     10%
avg.       12%     1%        18%     10%

Table 4: Design Pattern: achieved coverage. Line coverage increased by 20% only. Since the design patterns are widely used and very general, we claim that our approach is helpful in testing arbitrary programs.

              RANDOM            MOCK
attempts   line    branch    line    branch
1          33%     13%       41%     20%
2          35%     15%       42%     22%
3          35%     16%       41%     22%
4          39%     18%       45%     24%
5          35%     17%       40%     23%
avg.       35%     16%       42%     22%
Table 5: StackCalc: required time per test.

               RANDOM                         MOCK
att.   gen./sum  gen./succ.  exec.    gen./sum  gen./succ.  exec.
1      0.83      0.93        0.79     1.08      1.41        0.41
2      0.55      0.61        0.55     0.71      0.93        0.26
3      0.44      0.49        0.42     0.57      0.75        0.20
4      0.38      0.42        0.36     0.47      0.62        0.16
5      0.41      0.46        0.32     0.51      0.68        0.14
avg.   0.45      0.50        0.45     0.57      0.76        0.20
But with increasing size of the case study, this premium decreases relative to the overall generation time. In addition to the significant increase in line coverage of the system under test, the presented approach inherits all advantages of using mock objects. The Design Pattern case study uses Java File objects. The random approach cluttered the file system by generating thousands of randomly named files, which had to be deleted manually after the test generation process. The presented approach did not create a single file: since the File object was mocked, its createFile() method was never actually executed. Note that one class of the StackCalc case study could not be tested due to the limitation shown in Figure 9.
Table 6: Design Pattern: required time on average per test.

               RANDOM                         MOCK
att.   gen./sum  gen./succ.  exec.    gen./sum  gen./succ.  exec.
1      0.81      0.96        0.28     0.89      1.03        0.43
2      0.64      0.74        0.18     0.68      0.79        0.32
3      0.62      0.71        0.14     0.62      0.70        0.26
4      0.55      0.62        0.12     0.59      0.65        0.23
5      0.52      0.58        0.10     0.55      0.61        0.21
avg.   0.58      0.66        0.14     0.62      0.69        0.26

5. RELATED WORK

This section gives an overview of recent work on test data generation to show where current research efforts focus. Both Tillmann et al. [15] and Saff et al. [13] use mock objects to generate test input data. In addition to the program under test, they require a set of system tests or unit tests, respectively. The given test set is executed and the interaction of the method under test with its parameters is recorded. This information is later used in two different scenarios:

• The next time the test set is executed, the parameter objects are replaced by mock objects to reduce test execution time.

• Automatically generated unit tests use mock objects as test input, configured to behave exactly as recorded earlier.

Neither approach guarantees that the behavior of the mock object corresponds to the intended one, since no formal specification is present. In addition, they may report errors that are caused only by a change in the method call sequence, even though the semantics did not change at all.

Related approaches that use different technologies for test data generation can be categorized into random based approaches, state space exploring approaches, and approaches based on evolutionary methods. Cheon [3] and Ciupa et al. [4] work on random based approaches. The key idea of Cheon is to construct objects incrementally: first, a constructor for instantiating the requested object is chosen randomly; afterwards, the object state is transformed by calling a random number of methods on it. This random approach is used for our case studies. Ciupa extended Chen's adaptive random testing [2] to object types. ARTOO [4] works with two sets of test data: the candidate set and the applied data set. The candidate set is filled with randomly generated values. Values used for test generation are moved from the candidate set to the applied data set; the value from the candidate set with the highest distance to the already applied values is used next. For this purpose, Ciupa introduced the notion of an object distance, which is calculated recursively based on the objects' members and the distance in the inheritance tree of the two compared objects. Both approaches depend on randomly constructed class instances, which they store in a pool for later usage. ARTOO uses a more sophisticated algorithm to choose which object from the pool is used next, but in any case the sequence of method calls to transform the object state is chosen randomly. Therefore, it is very unlikely that one of the objects satisfies a given pre-condition.

Khurshid et al. [14] and Howe et al. [1] work on state space exploring techniques. The former approach explores the whole state space - up to a given bound. This approach is very expensive in terms of object constructions to find one that satisfies the given pre-condition. The latter approach
uses an AI planner to directly find a method sequence leading to a state that satisfies the given pre-condition. This approach is very promising, but still requires human interaction - in contrast to our approach, which is fully automated. Mark Harman [6] works on search-based algorithms for test data generation, which are based on the theory of evolution. The problem of constructing non-primitive objects is modeled as a search problem, where a fitness function determines how good the data is with respect to a given goal. Techniques such as crossover and mutation help in generating new sets of data from an initial (random) data set.
6. CONCLUSION

In this paper we presented an approach for the automated generation of mock behavior from Design by Contract(TM) specifications. Using automatically generated mock objects in unit testing allows test generation tools to easily generate data that satisfies a given pre-condition. The evaluation of the presented approach on two case studies shows an increase above 20% in terms of line coverage. At the same time, test generation time and test execution time increased only slightly; especially for the larger case study, the premium of using automated mock behavior generation was very low relative to the overall execution time. In addition, all advantages of using mock objects for testing are inherited: generated tests report only errors in the method under test, not in the implementation of test input data called by the method under test; the generation overhead for instantiating complex objects (e.g., objects that require multiple other objects to be instantiated first) is reduced; and the tests are independent from system objects such as databases, files and network connections.

Our future research will focus on the automated generation of dynamic mock behavior. This will eliminate the two limitations of the approach: that no changes of the mock object's state are possible at test execution time, and that parameter references in post-conditions cannot be handled. The former can be removed by introducing a virtual state of the mock object: the mock object's initial state is generated as presented in this paper, but afterwards the interaction with the mock object is recorded and stored. State-observing method calls (usually getter methods) may then return different values depending on the preceding sequence of method calls.
Acknowledgment
The research herein is partially conducted within the competence network Softnet Austria (www.soft-net.at) and funded by the Austrian Federal Ministry of Economics (bm:wa), the province of Styria, the Steirische Wirtschaftsförderungsgesellschaft mbH. (SFG), and the city of Vienna in terms of the center for innovation and technology (ZIT).
7. REFERENCES
[1] A. K. Amschler Andrews, C. Zhu, M. Scheetz, E. Dahlman, and A. E. Howe. AI planner assisted test generation. Software Quality Journal, 10(3):225-259, November 2002.
[2] T. Y. Chen, H. Leung, and I. K. Mak. Adaptive random testing. Pages 320-329, 2004.
[3] Y. Cheon. Automated random testing to detect specification-code inconsistencies. Technical report, Department of Computer Science, The University of Texas at El Paso, 500 West University Avenue, El Paso, Texas, USA, 2007.
[4] I. Ciupa, A. Leitner, M. Oriol, and B. Meyer. ARTOO: Adaptive random testing for object-oriented software. In International Conference on Software Engineering 2008, May 2008.
[5] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns. Addison-Wesley, Reading, MA, 2002.
[6] M. Harman, F. Islam, T. Xie, and S. Wappler. Automated test data generation for aspect-oriented programs. In AOSD '09: Proceedings of the 8th ACM International Conference on Aspect-Oriented Software Development, pages 185-196, New York, NY, USA, 2009. ACM.
[7] G. T. Leavens, A. L. Baker, and C. Ruby. Preliminary design of JML: A behavioral interface specification language for Java. SIGSOFT Software Engineering Notes, 31(3):1-38, May 2006.
[8] T. Mackinnon, S. Freeman, and P. Craig. Endo-testing: Unit testing with mock objects. Pages 287-301, 2001.
[9] S. J. Metsker and W. C. Wake. Design Patterns in Java. 2006.
[10] B. Meyer. Applying "design by contract". Computer, 25(10):40-51, October 1992.
[11] B. Meyer. Object-Oriented Software Construction. Prentice Hall PTR, Englewood Cliffs, NJ, first edition, 1997.
[12] J. Rieken. Design by contract for Java - revised. Master's thesis, Department für Informatik, Universität Oldenburg, April 2007.
[13] D. Saff, S. Artzi, J. H. Perkins, and M. D. Ernst. Automatic test factoring for Java. In ASE '05: Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, pages 114-123, New York, NY, USA, 2005. ACM.
[14] J. H. Siddiqui and S. Khurshid. PKorat: Parallel generation of structurally complex test inputs. In Software Testing, Verification and Validation, 2009 (ICST '09), International Conference on, pages 250-259, 2009.
[15] N. Tillmann and W. Schulte. Mock-object generation with behavior. In ASE '06: Proceedings of the 21st IEEE International Conference on Automated Software Engineering, pages 365-368, Washington, DC, USA, 2006. IEEE Computer Society.
[16] N. Tillmann and W. Schulte. Unit tests reloaded: Parameterized unit testing with symbolic execution. IEEE Software, volume 23, pages 38-47, July 2006.