Automated Requirements-based Generation of Test Cases for Product Families
Clémentine Nebut, Simon Pickin, Yves Le Traon and Jean-Marc Jézéquel
IRISA, Campus Universitaire de Beaulieu, 35042 Rennes Cedex, France
{Clementine.Nebut, Simon.Pickin, Yves.Le_Traon, Jean-Marc.Jezequel}@irisa.fr

Abstract
Due to increasingly strict productivity and time-to-market constraints in industry, software product families (PF) are becoming one of the key challenges of software engineering. Product family models and methods aim at providing support for efficient reuse by making reuse an integral part of the development process. However, despite recent interest in this area, the extent to which the close relationship between PF and requirements engineering is exploited to guide the V&V (Validation and Verification) tasks is still limited. In particular, PF processes generally lack support for generating test cases from requirements. This is a difficult problem, since test cases generated for different products from the same requirement may not be identical, due to interference from variants. In this article, we propose a requirements-based approach to functional testing of product lines, based on a formal test generation tool. We show how product-specific test cases can be automatically generated from functional requirements expressed in UML, even when the latter are common to a product family. We illustrate the use of the method and tool by means of a case study, and study the efficiency of the generated test cases.

1 Introduction

Building product families (PF) is not a new idea: as early as 1976, the author of [15] reported "on the design and development of program families". The idea of product families now goes much further than just the reuse of code: it involves all the phases of development of a system, from marketing analysis to testing and integration. Product families (or product lines) are now seen as "a set of products that share a set of requirements and significant variability" [7].

1 This work has been partially supported by the CAFE European project, Eureka Σ! 2023 Program, ITEA project ip 0004.

Product families are currently undergoing a resurgence of interest, some contributing factors being the relentless pace of hardware development, the growing complexity of software systems, and the need to respond promptly to rapidly-changing consumer habits. PF conception and design bring up a large number of novel issues, among them PF testing methods. As underlined in [11], PF requirements assist design evaluation, and consequently play a crucial role in driving the functional testing task. However, there is a real difficulty in ensuring that the requirements of the PF are satisfied on all the products. In other words, requirements-based testing of a product family is an arduous task, which is in serious need of automation. A PF potentially contains a very large set of products, due to the combinatorial aspect of using variants. The testing task is all the more tedious since all the common requirements and the shared variant requirements have to be tested for each product. The same piece of functional test code, derived from a requirement, cannot be reused as-is, since the objects addressed may differ from one product to another, due to the crossing of different variation points. For example, the initialization sequence leading up to the testing of a particular point may be totally different from one product to another. Consequently, from a single requirement, a test case has to be written for each specific product. In this article, we propose a method and a toolset to generate test cases for each of the different products of a PF from the same PF functional requirements. By using our approach, test designers can take advantage of the presence of a common core, and consequently of common product behavior, to reuse tests. We suppose that requirements are documented with high-level sequence diagrams: it is common practice in industry for the main contractor to provide high-level functional requirements to their subcontractors (e.g. in the defense industry). These sequence diagrams are combined into reusable assets, which we call behavioral test patterns (behTP), and are used to automatically generate test cases specific to each product.

This approach is consistent with the conclusion of the survey of industrial software projects [18], which insists on the industrial need to base system tests on use cases and scenarios. At first sight, however, such reuse of generic test scenarios leads to a contradiction:


- on the one hand, to be reusable, test scenarios must not rely on any precise modeling details of the variants; they must be expressed at a very high level to ensure that they are not dependent on the specific products;


- on the other hand, generic scenarios are too vague and incomplete to be directly used on a specific product.


Our proposal for resolving this apparent contradiction is to build behavioral test patterns from the functional requirements, and then to use suitable tools to automatically synthesize test cases for each specific product. The rest of the paper is organized as follows: Section 2 outlines our method and presents the case study used in our experiments. Section 3 defines our behTPs and explains how they are built. Section 4 presents the test generation method based on these behTPs. Section 5 presents and analyses the results of our method on the case study; one of the goals is to determine whether test cases generated from very high-level requirements are still relevant at code level, in terms of code coverage. Sections 6 and 7 respectively discuss related work and present conclusions and future work.

2 A requirement-based testing method

Our main objective is to demonstrate the feasibility, utility and efficiency of generating test cases for each product from the requirements of the entire PF. Since our method is based on the principle that the test cases can be derived from functional requirements, we deal with black-box testing. We do not address the problem of modeling PFs and, in particular, we do not treat the case where a product is first created and tested adequately, and a PF is then built from this first product. Nor do we deal with the question of reusing tests by "reverse engineering" this first product. We focus on synthesizing test cases from the functional requirements of a PF.

Method
Our method is illustrated in Figure 1. In the first part of the method (detailed in Section 3), the test designer constructs a particular type of test objectives, or test purposes, which we call behavioral test patterns (behTP), from certain of the use-case scenarios. Independently of the test development process, the PF design is developed and the final products are instantiated. Further evolutions can also be carried out without changing the behTP library, which is considered an asset of the PF.


Figure 1. The requirements-based methodology for PF testing

An important aspect of our method is that the behTPs are defined once, and then kept unchanged and reused throughout the software life-cycle, even if the code or the model evolves: a behTP is derived from the requirements of the PF, and is thus reusable for all its products. Then, for each instantiated product, a set of test cases is generated from the model of that product together with the set of behTPs. The generated test cases are thus specific to each product, since they stem from product-specific specifications. Test case synthesis is the second part of the method, and is detailed in Section 4.

Behavioral test patterns are independent of low-level design and implementation choices. While defining a high-level test scenario is not difficult once the main classes have been identified, refining and adapting it to the final software product is an arduous process. However, the details of the low-level design are contained in the product-specific model; completing the behTP with these details is therefore a task which can be left to the automatic synthesis. Allowing test designers to work at the behTP level rather than at the test case level thus frees them from the need to specify the low-level detail, enabling them to concentrate on the essential aspects of the test.

Assumptions and notations
Use of our method and toolset rests on two assumptions. First, the requirements of the PF are expressed in the form of use cases documented with generic scenarios (UML sequence diagrams).

These sequence diagrams can be generic with respect to the name and type of the instances involved, as well as with respect to the method names and the names and types of the parameters in the method calls. The notations used here are a subset of the test language Tela [16]; the syntax of the UML Testing Profile, on which Tela had some influence, could be used instead once it is finalized. The main notations reused from Tela are wildcards and variables. A wildcard can replace any name or type, in an instance or a parameter. For example, an instance declared as *:T stands for any instance of type T. In a message, a method call f(1,*) stands for any call to method f with 1 as first parameter (and any valid value for the second parameter). The variables used are static variables, that is, their value is the same at each occurrence in the same scenario.

Second, we assume that, at the end of the design process, a UML model is available for each product, containing: (1) a class diagram, (2) state machines for the main classes, and (3) an object or deployment diagram representing the initial state of the system. These product-specific UML models can either be built by hand or derived automatically from a model of the PF, e.g. using framework techniques such as the one presented in [3].

Case study: a virtual meeting server
Our case study is a virtual meeting server PF offering simplified web conference services. The same system has been implemented in Java, Eiffel and C#, and is used in the advanced courses of the University of Rennes. The whole system contains more than 80 classes, but a simplified version with only a few variants is presented here for the sake of readability (only functional variants appear, since we address functional testing). Our case study is thus a simple example of a PF, but it is sufficient to illustrate our method.

The virtual meeting server PF (VMPF) allows several different kinds of work meetings to be organized on a distributed platform. When connected to the server, a user can enter or exit a meeting, speak, or plan new meetings. Each meeting has a manager: the participant who planned the meeting and set its main parameters (such as its name and its agenda). Each meeting may also have a moderator, designated by the meeting manager. The moderator gives the floor to a participant who has asked to speak. Before opening a meeting, he or she may decide that it is to be recorded in a log file; the log file is sent to the moderator after the closing of the meeting. The corresponding use case diagram is given in Figure 2. Three types of meetings exist:
- standard meetings, where the current speaker is designated by a moderator (nominated by the organizer of the meeting). In order to speak, a participant has to ask for the floor and then be designated as the current speaker by the moderator. The speaker can speak as long as he or she wants, and can decide to stop speaking by sending a particular message, on reception of which the moderator can designate another speaker;
- democratic meetings, which are like standard meetings except that the moderator is a FIFO robot (the first client to ask for permission to speak is the first to speak);
- private meetings, which are standard meetings with access limited to a certain set of users.

Figure 2. Use case diagram of the virtual meeting

We define our PF by describing the variation points and the products (the commonalities correspond to the basic functionalities of a virtual meeting server, as described above).

Variation points: in this article, for the sake of simplicity, we only present 5 variation points of our VMPF:
- the limitation, or not, of the number of participants to three;
- the types of meetings available; possible instantiations correspond to a selection of 1, 2 or all 3 of the possible meeting types;
- the presence or absence of a facility enabling the moderator to ask for the meeting to be recorded;
- the languages supported by the server (the possible languages being English, Spanish and French);
- the presence or absence of a supervisor of the whole system, able to spy on it and log it.
The other variation points, not described here, concern the presence of a translator, the OS on which the software must run, the various interfaces (from textual to graphical), the network interface, etc. Testing all the possible products independently is inescapable: in our case, this would mean testing 2*7*2*7*2*3*2 = 2352 products (considering 3 OSs and 2 GUIs).
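As an aside, the product count quoted above is simply the product of the number of alternatives offered by each variation point (a non-empty selection among 3 meeting types or 3 languages gives 2^3 - 1 = 7 choices). The short Java sketch below reproduces this arithmetic; it is purely illustrative, not part of the published toolset, and all names in it are hypothetical.

// Hypothetical sketch: counting the products induced by the VMPF variation points.
public class ProductCount {

    // number of non-empty subsets of a set of n alternatives
    static int nonEmptySubsets(int n) {
        return (1 << n) - 1;
    }

    public static void main(String[] args) {
        int limitation   = 2;                  // participants limited to three, or not
        int meetingTypes = nonEmptySubsets(3); // selections of {std, democ, priv} -> 7
        int recording    = 2;                  // recording facility present or absent
        int languages    = nonEmptySubsets(3); // selections of {En, Fr, Sp} -> 7
        int supervisor   = 2;                  // supervisor present or absent
        int os           = 3;                  // operating systems (variation point not detailed here)
        int guis         = 2;                  // textual or graphical interface

        int products = limitation * meetingTypes * recording
                     * languages * supervisor * os * guis;
        System.out.println("Number of distinct products: " + products); // prints 2352
    }
}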

edition               demonstration   personal              enterprise
meeting limitation    true            true                  false
meeting types         {std}           {std, democ, priv}    {std, democ, priv}
recording             false           false                 true
language              {En}            {En}                  {En, Fr, Sp}
supervisor            false           false                 true

Table 1. Variation points and products

In order to simplify the presentation, in this paper we only consider 3 products: a demonstration edition, a personal edition, and an enterprise edition. However, this does not in any way reflect a restriction on the method. The characteristics of the 3 products are given in Table 1. The demonstration edition manages standard meetings only, limited to a certain number of participants; in this paper, this maximal number is 3, though it could have been treated as a variant parameter. Only English is supported, and no supervisor is present. The personal edition provides the three kinds of meetings but still limits the number of participants; only English is supported, and no supervisor is present. The enterprise edition limits neither the type of meeting nor the number of participants. In this product, it is possible for the moderator to record the meeting and to obtain a log file. This edition supports English, Spanish and French; each participant chooses his or her preferred language, the default being English. A supervisor is present.

To illustrate the method, we present some elements of the UML models of our case study. Figure 3 gives a simplified class diagram for the product line architecture. Products are easily instantiated thanks to an Abstract Factory, and a Decorator is used to allow the orthogonal properties of a meeting to be combined. To model behavior, state diagrams are attached to the main classes, which represent the users and the meetings. Concerning the meetings, a state diagram is attached to the root of the concrete meeting hierarchy (ConcreteMeeting) and to the decorator LimitedMeeting (see Figure 4); they differ by one state, which expresses meeting saturation. Note that in Figure 4 not all the guards and actions are shown, in order to keep the diagrams readable, though this information is necessary for the model to be simulatable and therefore usable for test synthesis.

3 Step 1: from requirements to behavioral test patterns

In this section, we define and illustrate the test objectives we deal with, called behavioral test patterns. Then we explain how to build them from the requirements of the system under test (SUT).

Figure 3. Class diagram for the virtual meeting PF
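The class diagram of Figure 3 is not reproduced here, but the Abstract Factory / Decorator combination it describes can be sketched roughly as follows. This is a hedged illustration only: the class and method bodies are hypothetical, and the actual VMPF classes may well differ.

// Rough sketch of the Abstract Factory / Decorator combination of Figure 3.
// All names and bodies are illustrative, not those of the actual VMPF implementation.
interface Meeting {
    void enter(String user);
}

class StandardMeeting implements Meeting {          // one of the concrete meeting types
    public void enter(String user) { /* grant entrance */ }
}

class LimitedMeeting implements Meeting {           // decorator adding the saturation behavior
    private final Meeting wrapped;
    private final int maxParticipants;
    private int participants = 0;

    LimitedMeeting(Meeting wrapped, int maxParticipants) {
        this.wrapped = wrapped;
        this.maxParticipants = maxParticipants;
    }

    public void enter(String user) {
        if (participants >= maxParticipants) {
            throw new IllegalStateException("meeting saturated"); // entrance refused
        }
        participants++;
        wrapped.enter(user);
    }
}

interface MeetingFactory {                          // abstract factory, one per product
    Meeting createStandardMeeting();
}

class DemonstrationEditionFactory implements MeetingFactory {
    public Meeting createStandardMeeting() {
        // the demonstration edition only offers standard meetings, limited to 3 participants
        return new LimitedMeeting(new StandardMeeting(), 3);
    }
}

The decorator is what allows the participant limitation to be combined orthogonally with any concrete meeting type, which is the design intent stated above.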

Figure 4. State diagrams for the meetings (limited and unlimited meetings)

3.1 Definition of the behavioral Test Patterns

The use-case diagram and the sequence diagrams attached to the use cases represent the requirements of the system. The behavioral test patterns are defined on the basis of these requirements. The sequence diagrams are of two types: those describing nominal scenarios, expressing the standard use-case occurrence, and those describing exceptional ones.

A behavioral test pattern (behTP) is here defined as a set of generic scenario structures representing a high-level view of some scenarios which the system under test may engage in, according to its specification, and which we want to test. The genericity may involve abstraction on instance names and types, on method names or on method parameters. The scenario structures are elaborated with the aim of using them as selection criteria to derive a test case; they will usually include a subset of the communications of the corresponding test case.

We suppose that, in the derived test cases, the SUT environment role will be played by the tester. A test case derived from a behTP specifies how to stimulate the SUT via the test interface, as well as the expected responses at this interface, in such a way as to cause a conformant implementation to execute a scenario that fits the behTP. In our method, a behTP comprises three parts, each part being specified using sequence diagrams:

- an accept scenario: the specification of the (visible or internal) behavior the test designer wants to test; the accept scenario structure serves to select the scenarios of the specification which are relevant for the test case;
- reject scenarios: the specification of the (visible or internal) behavior the test designer wants to avoid in the test; the reject scenario structures serve to eliminate the scenarios of the specification which are irrelevant for the test case;
- a prefix: the specification of the behavior needed to place the SUT in a state in which the accept behavior can take place; the prefix scenario serves to factorize the part of the accept scenario which may be common to several test patterns.
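In other words, a behTP is a small composite of scenario structures. The sketch below (illustrative Java, with a deliberately minimal Scenario placeholder standing in for a generic sequence diagram) only fixes this three-part structure; it is an assumption introduced here for explanatory purposes, not the representation used by the toolset.

import java.util.ArrayList;
import java.util.List;

// Minimal illustration of the three-part structure of a behavioral test pattern.
// Scenario stands in for a generic sequence-diagram structure (possibly containing
// wildcards and variables); its content is irrelevant to the structure shown here.
class Scenario {
    final String description;
    Scenario(String description) { this.description = description; }
}

class BehavioralTestPattern {
    final Scenario accept;                              // behavior to be tested (mandatory)
    final List<Scenario> rejects = new ArrayList<>();   // behaviors to avoid (optional)
    Scenario prefix;                                    // behavior placing the SUT in the right state (optional)

    BehavioralTestPattern(Scenario accept) { this.accept = accept; }

    BehavioralTestPattern withPrefix(Scenario p) { this.prefix = p; return this; }
    BehavioralTestPattern withReject(Scenario r) { rejects.add(r); return this; }
}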

3.2 Building a behTP

To specify a behavioral test pattern, the tester selects, from among the nominal and exceptional scenarios, an accept scenario describing the behavior that has to be tested, one or several (optional) reject scenarios describing the behaviors that are not significant for the test, and one (optional) prefix scenario that must precede the accept scenario. Since this task does not involve writing test code or reading SUT code, the tester can concentrate on building relevant behTPs. Particular care must be taken regarding the scenario variables and their scope or constraints: the scope of a scenario variable is, by default, the scenario; nevertheless, constraints can be attached to the behTP to require the values of two variables to be identical. We present here three examples of behTP.

behTP 1: a user may be rejected on requesting entry into a meeting that is currently open. This behTP is presented in Figure 5. The prefix puts the system into a state where a meeting has been created by a user. The reject scenarios prevent any actor from leaving or creating a meeting, from consulting the current state, and from disconnecting from the server (only two of these three reject scenarios are illustrated in Figure 5). The accept scenario shows an entrance request being refused.

behTP 2: a user can speak in a meeting. The prefix is identical to that of behTP 1.

Figure 5. Behavioral test pattern 1: (a) prefix, (b) accept scenario, (c) reject scenario 1, (d) reject scenario 2

The accept scenario of this behTP is presented in Figure 6; it represents the fact that a user can speak in a meeting and receives the message he or she has sent. Note the use of a scenario parameter: the method speak takes as parameters the name of the meeting and the speech. The meeting name is represented by the variable x, which is also present in the return message. The reject scenarios prevent any actor from creating a meeting, from consulting the current state, and from disconnecting from the server.

behTP 3: a user can record a meeting of which he or she is the moderator. The accept scenario, presented in the right-hand side of Figure 6, shows a user asking for the meeting to be recorded and eventually receiving the log file. The prefix is the same as for behTPs 1 and 2, and the reject scenarios are the same as for behTP 2.

The behTPs are incomplete and generic in the sense that (1) they do not specify the objects or types involved and (2) they specify only a small part of the messages involved. They are thus easy to build, since they consist only of test objectives and not of exact test sequences. The behTPs are built during the analysis phase of the PF development and added as test assets of the PF. When the testers are asked to test the PF, they test the products one by one, with sets of test cases automatically generated from the test assets (i.e. the sets of test patterns) and the UML model of each product. Each behTP will produce a number of different test cases depending on the product for which the tests are generated, as described in the next section.
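Continuing the illustrative structure sketched in Section 3.1 (and compiling against those placeholder classes), behTP 1 could be assembled along the following lines. The scenario contents here are plain-text placeholders; in the method they are generic UML sequence diagrams with wildcards, so this is only a hypothetical rendering.

// Illustrative assembly of behTP 1 (an entrance request is refused), reusing the
// placeholder Scenario and BehavioralTestPattern classes sketched in Section 3.1.
public class BehTpExamples {
    public static void main(String[] args) {
        BehavioralTestPattern behTP1 =
            new BehavioralTestPattern(new Scenario("an entrance request is refused"))
                .withPrefix(new Scenario("a meeting is created by a user"))
                .withReject(new Scenario("an actor leaves or creates a meeting"))
                .withReject(new Scenario("an actor consults the current state"))
                .withReject(new Scenario("an actor disconnects from the server"));
        System.out.println("behTP 1 has " + behTP1.rejects.size() + " reject scenarios");
    }
}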

Figure 6. Accept scenarios for behavioral test patterns 2 and 3: (a) test pattern 2, (b) test pattern 3

Figure 7. The synthesis method: the UML specification of a product (class diagram, object diagram, statecharts) and the generic behTPs are exported in XMI format from the Objecteering case tool; UMLAUT pre-processes the behTPs and builds the LTS specification model via a simulation API; TGV then synthesizes an IOLTS test case from the LTS test objective, which is rendered as a product-specific test case.

4 Step 2: from reusable behavioral test patterns to test cases

In this section, we explain the test synthesis mechanism. The formal model and tool (UMLAUT-TGV [8, 9]) are not detailed here, since they have already been presented in the context of conformance testing for distributed, protocol-based software [9, 17]. The tool uses on-the-fly model-checking algorithms. In this article, we demonstrate the value of applying these powerful techniques to the generation of product-specific test cases from PF requirements. We adapt the techniques to the UML and PF context by extending the test objectives of [17] to behavioral test patterns built from use cases, and by defining the pre-processing necessary to adequately treat genericity. More detailed information (on the case study, the test scenario language and the synthesis process) can be found at [1]. The synthesis method is summarized in Figure 7. In this section, we present the pre-processing of the behTPs and give an overview of the transformation into an intermediate format and of the synthesis itself. The core of the synthesis is based on the Labeled Transition System (LTS) and Input-Output Labeled Transition System (IOLTS) formal models [9].

4.1 Pre-processing the behTP

The requirements, and consequently the behTPs, are generic in the sense that, thanks to the use of wildcards or scenario variables, neither the objects involved nor the exact parameters are specified. The exact objects and parameters involved must be deduced from the specification of the SUT, i.e., in our context, from the exact product of the PF to be tested. This refinement of the test pattern may increase the number of pre-processed behTPs, as explained below.


The pre-processing resolves only part of the genericity involved in the behTP: it resolves the genericity of the object instances in the accept and prefix scenarios, the genericity of non-primitive types, and the genericity introduced by variables. The remaining genericity is resolved later by the test synthesis, since the TGV tool can handle regular expressions in test objectives. During the pre-processing, the actual objects to be used must be found in the object diagram describing the initial state of the system (for dynamically created objects, naming conventions are used). A simplified version of the pre-processing algorithm is given in Algorithm 1. Its main principle is that an accept scenario or a prefix is replaced by its instantiation using the first object of the right type found, whereas a reject scenario is replaced by all of its possible instantiations.

A key part of giving the behTPs their product-independent status is to deal with variation modeled using inheritance. Indeed, a behTP scenario may involve objects of a given class C (such as Meeting), while the refined behTP (and thus the resulting test cases) may involve objects of descendant classes of C, exactly which descendants depending on the product under test (in our example, the concrete meeting types available in a product). Applying Algorithm 1 to behTP 1 and the demonstration edition, we suppose that user1 is the user object chosen for the accept scenario, and that the server is replaced by the demonstration server. Since the wildcards used in the message exchanges represent primitive types, they are left unchanged by the pre-processing.

Algorithm 1. behTP pre-processing
boolean done = false  // treatment of the reject scenarios
for each btp in ens_bTP until done {
    find a declaration w = *:T with a wildcard or variable in btp.reject
    for each subtype T' of T {
        for each object o:T' in OD {
            btp' = clone(btp)
            btp'.replace(w, o:T')
            ens_bTP.add(btp')
        }
    }
    done = not(ens_bTP owns a btp | btp.reject owns *:T)
}
done = false  // treatment of the accept and prefix scenarios
for each btp in ens_bTP until done {
    find a declaration w with a wildcard or variable in btp.prefix or btp.accept
    case (w = *:*)   { irrelevant }
    case (w = *:T)   {
        for each subtype T' of T {
            choose one object o:T' in OD
            btp' = clone(btp)
            btp'.replace(w, o.name:o.type)
            ens_bTP.add(btp')
        }
    }
    case (w = obj:*) {
        find o in OD | o.name = obj
        btp' = clone(btp)
        btp'.replace(w, o.name:o.type)
        ens_bTP.add(btp')
    }
    done = not(ens_bTP owns a btp | (btp.prefix or btp.accept) owns (*:T or x:*))
}
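For readers more comfortable with code than with the pseudocode above, the sketch below gives a rough Java rendering of the wildcard-resolution idea for a single declaration. Declaration and the Map used as an object diagram are assumed helper types introduced here for illustration; they are not part of the actual UMLAUT/TGV implementation, and subtyping is deliberately ignored.

import java.util.Map;

// Hypothetical sketch of the accept/prefix wildcard resolution of Algorithm 1: a
// declaration such as "*:T" is replaced by a concrete object of type T taken from
// the object diagram (OD) describing the initial state of the product under test.
class Declaration {
    final String name;  // "*" stands for a wildcard instance name
    final String type;  // "*" stands for a wildcard type
    Declaration(String name, String type) { this.name = name; this.type = type; }
}

class WildcardResolver {
    // objectDiagram maps concrete object names to their types, e.g. "user1" -> "User"
    static Declaration resolve(Declaration d, Map<String, String> objectDiagram) {
        if (d.name.equals("*") && !d.type.equals("*")) {
            // *:T -> first object of a matching type (subtyping not handled in this sketch)
            for (Map.Entry<String, String> e : objectDiagram.entrySet()) {
                if (e.getValue().equals(d.type)) {
                    return new Declaration(e.getKey(), e.getValue());
                }
            }
        } else if (!d.name.equals("*") && d.type.equals("*")) {
            // obj:* -> look the object up by name and recover its type from the diagram
            String type = objectDiagram.get(d.name);
            if (type != null) {
                return new Declaration(d.name, type);
            }
        }
        return d; // *:* declarations and already-concrete declarations are left as they are
    }
}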

4.2 Processing the behTP into a labeled transition system

TGV test objectives are then derived from the pre-processed behTPs. These LTSs, with distinguished accept and refuse transitions, define the semantics of the behTPs.


The prefix is composed with the accept scenario (using weak sequential composition) and transformed (under certain assumptions, see [17]) into an LTS whose final state is an acceptance state. Then, by composing the reject scenarios, the LTS is completed with transitions leading to a reject state. For behTP 1 pre-processed for the demonstration edition, this step leads to the LTS of Figure 8.
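To fix ideas about the object manipulated at this stage, the sketch below encodes a test objective as a labeled transition system with distinguished accept and refuse states, roughly following the shape suggested by Figure 8. It is an illustrative data structure under our own naming assumptions, not the internal representation used by UMLAUT or TGV.

import java.util.ArrayList;
import java.util.List;

// Illustrative LTS with distinguished accept and refuse states. Labels are event
// strings such as "user1!enter(*)" (stimulus sent to the SUT) or "user1?nok"
// (observation expected from the SUT).
class Transition {
    final int source;
    final String label;
    final int target;
    Transition(int source, String label, int target) {
        this.source = source; this.label = label; this.target = target;
    }
}

class TestObjectiveLts {
    final List<Transition> transitions = new ArrayList<>();
    int acceptState;   // reaching this state means the accept scenario has been observed
    int refuseState;   // reaching this state means a reject scenario has been observed

    void add(int source, String label, int target) {
        transitions.add(new Transition(source, label, target));
    }
}

class Figure8Sketch {
    public static void main(String[] args) {
        TestObjectiveLts lts = new TestObjectiveLts();
        // prefix: a meeting is created by user1
        lts.add(0, "user1!connect(*)", 1);
        lts.add(1, "user1?ok", 2);
        lts.add(2, "user1!plan(*,*)", 3);
        lts.add(3, "user1?ok", 4);
        // accept scenario: the entrance request is refused
        lts.add(4, "user1!enter(*)", 5);
        lts.add(5, "user1?nok", 6);
        lts.acceptState = 6;
        // reject scenarios (shown for one source state only): leave or plan leads to refusal
        lts.add(4, "leave(*,*) or plan(*,*)", 7);
        lts.refuseState = 7;
        System.out.println(lts.transitions.size() + " transitions");
    }
}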

4.3 Test cases synthesis

The technical part of the synthesis, and especially the tools used, is described in detail in [9, 17]. Our purpose here is not to go far into the details of the underlying principles and algorithms, but just to explain briefly how we use the synthesis process. The global process is summarized in Figure 7. UMLAUT builds a simulation API from the UML specification (class diagram, state machines of the main classes, and an object or deployment diagram giving the initial state of the system), in order to be able to lazily build the LTS representing the operational semantics of the entire system. Then, using the simulation API, TGV finds a test case conformant to the system specification and satisfying the test objective, i.e. a test case which makes the test objective progress to an accept state.

This test case is found in the form of an IOLTS, which is then transformed back into a sequence diagram. Since TGV accepts regular expressions, we can handle very generic scenarios; but of course, the more imprecise the test objective, the more space- and time-consuming the test case synthesis. The use of a prefix and of reject scenarios helps to reduce the size of the state space to be explored, as does the pre-processing phase. Figure 9 shows a test case generated from behTP 1 (more precisely, from the LTS of Figure 8) and the UML model of the demonstration edition. Note that, though in this case the generated test case is the smallest possible, in general the test cases generated by the TGV tool are not necessarily minimal (due to fundamental limits on on-the-fly determinisation). Here, we provide the minimal test cases matching the behTP and the target product.

Figure 8. TGV test objective for behTP 1: an LTS whose transitions follow the prefix (user1!connect(*), user1?ok, user1!plan(*,*), user1?ok) and then the accept scenario (user1!enter(*), user1?nok) up to an ACCEPT state, while leave(*,*) or plan(*,*) transitions lead to a REFUSE state

5 Experiments and discussion

The aim of this experimental case study is threefold: illustrating the approach, determining how different the test cases are from one product to another, and estimating the efficiency of such a high-level test generation technique in terms of code coverage on a given product.

General results
The number of test cases generated per product for the 3 behTPs described above is given in Table 2. Since the case study is simplified, the number of generated test cases remains low; it increases as a function of the number of variants and products.

behTP                            1      2      3
Demonstration                    3      1      0
Personal                        10      3      0
Enterprise                       7      3      3
Total number of different TC    10      3      3
Av. number of TC occurrences     2    2.3      1

Table 2. Number of test cases generated for each product

Figure 9. Test case: entrance in a meeting that is already full

Columns of Table 2 represent the behTPs and, for each product (row), the number of generated test cases is given.

Detailed results for behTP 1
In this section, we present the results obtained for behTP 1. 10 test cases are automatically generated for the 3 products (note that these 10 test cases are generated by varying the prefix and reject scenarios, in a way that is not yet automated):
- 3 test cases correspond to the common requirement that entering a closed meeting, of any type, fails. This common requirement has to be tested for each type of meeting, since the meeting type is a variation point of the meeting.
- 3 other test cases are like the first 3, but instead of a closed meeting, they deal with an as-yet unopened meeting.
- 1 test case corresponds to entry into a private meeting: when a participant who is not on the access list tries to enter a private meeting, he or she is rejected. This test case is only generated for the products in which private meetings can be created, i.e. the personal and enterprise editions.
- 3 test cases test the limited-number-of-participants variant: when 3 participants have already entered a meeting, the fourth is rejected. For example, for the demonstration edition, the generated test case is given in Figure 9.

To give a synthetic view of how different the test cases are from one product to another, we give the average number of occurrences of a test case (ANO), defined as:

ANO = (total number of generated test cases) / (number of different test cases)

For example, for behTP 1, ANO = (3 + 10 + 7) / 10 = 2.