Slicing-Based Test Case Generation from UML Activity Diagrams

Philip Samuel
School of Engineering, Cochin University of Science and Technology, Cochin, India 682022
E-mail: [email protected]

Rajib Mall
Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur (WB), India 721302
E-mail: [email protected]

ACM SIGSOFT Software Engineering Notes, November 2009, Volume 34, Number 6. DOI: 10.1145/1640162.1640172

Abstract

UML diagrams are important design and modeling artifacts. These diagrams can also be used to generate test cases. We present a novel test case generation method that is based on dynamic slicing of UML activity diagrams. We use the flow dependence graph (FDG) of an activity diagram to generate dynamic slices. Dynamic slices are created using an edge marking method. Slices are constructed corresponding to each conditional predicate on the activity edges, and test cases are automatically generated with respect to each slice. The generated test cases satisfy the path coverage criterion. Our test data generation scheme is automatic, and the test data are generated considering the slice condition. We have implemented our approach in a prototype tool.

Keywords: software testing, UML activity diagram, test case generation, dynamic slicing, UML testing.

1 Introduction

One of the requirements for white box as well as grey box testing [LJX+ 04] of large software products is that the software be decomposed into manageable pieces and that an accurate picture of the dependencies among different parts of the software be made available to the tester. An important problem with test generation methodologies is that of scalability. This is especially true since the sizes of software solutions are becoming large. In this context, program slicing is a powerful tool that can be used to tackle the problem of size and to automatically generate test cases based on the various dependency relations existing among different components. Program slicing is essentially a decomposition technique that extracts only those program statements that are relevant to a particular computation [BG96]. Of late, slicing has also been used as a reduction technique on specifications like state models [KSTV03]. Informally, a program slice provides an answer to the question: which program statements potentially affect the computation of a variable v at some statement s? The need for slicing arises from the fact that the slice of a program p is (usually) simpler than p, whilst it maintains the effect of p upon the considered variables, or vice versa [HD95]. As originally introduced, slicing (static slicing) considers all possible executions of a program. Korel and Laski [KJ88] introduced the concept of dynamic slicing. Dynamic slicing considers a particular execution of the program and hence significantly reduces the size of the computed slice. A dynamic program slice can be thought of as that part of a program that "affects" the computation of a variable of interest during a program execution on a specific program input [KR98]. A dynamic slice is no larger than, and usually smaller than, a static slice, because run-time information collected during execution is used to compute the slice.

Ever since Weiser [Wei84] introduced the concept of program slicing, researchers have shown considerable interest in this field, probably due to its application potential. Slicing has been found to be useful in several important application areas such as software maintenance, reengineering [GL91, LD98], testing [HD95, KFS93, KR98, MMS02], decomposition and integration [HPR89], decompilation [CF97], program comprehension [LFM96, HFH+ 99], and debugging [LW87]. Most of the work reported on slicing concerns improvements and extensions to algorithms for slice construction [Luc01, HK98, KR98, F.T95, BG96]. Even though dynamic slicing has been identified as a powerful tool for software testing [KR98, MMS02], reported work on how dynamic slicing can actually be used in testing is rare in the literature. In 1993, Kamkar et al. [KFS93] reported a use of dynamic slicing for interprocedural testing. Their work addressed testing procedural programs. To the best of our knowledge, there is no work reported in the literature that investigates the use of dynamic slicing in testing object-oriented programs. In this paper, we propose a method to generate test cases automatically by dynamic slicing of UML activity diagrams.
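To make the static/dynamic distinction concrete, the following toy Java fragment (our own illustration, not part of the paper's case study) shows how a dynamic slice can be smaller than the corresponding static slice.

    // Toy illustration of static vs. dynamic slicing (hypothetical example).
    public class SliceDemo {
        public static void main(String[] args) {
            int a = Integer.parseInt(args[0]);   // program input
            int x = 0;
            if (a > 10) {
                x = a * 2;                       // S1
            } else {
                x = a - 1;                       // S2
            }
            System.out.println(x);               // slicing criterion: value of x here
        }
    }
    // Static slice w.r.t. x at the print statement: both S1 and S2 (all executions).
    // Dynamic slice for the input a = 3: only S2, because S1 is never executed on this run.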


Weiser [Wei82] used a control flow graph as an intermediate representation of programs for constructing slices. In both Agrawal and Horgan [JA90] and Mund et al. [MMS02], dynamic slices were constructed using a program dependence graph as the intermediate representation. We use a flow dependence graph (Section 4.1) as an intermediate representation of an activity diagram to generate dynamic slices. We compute dynamic slices using an edge marking method.


Design specifications are an intermediate artifact between the requirement specification and the final code. They preserve the essential information from the requirements and are the basis of the code implementation. Moreover, in component-based software development, often only the specifications are available and the source code is proprietary. Test case generation from design specifications has the added advantage of allowing test cases to be available early in the software development cycle, thereby making test planning more effective. It is therefore desirable to generate test cases from the software design or analysis documents, in addition to designing test cases from the code. An automated model-driven test generation framework is usually recommended. In this context, we propose a novel grey box approach based on UML activity diagrams.

Figure 1: An Activity Diagram Modelling an ATM Withdraw Cash Situation. The actions include Display Transaction Menu, Accept Pin, Validate Pin, Accept Amount, Check Cash Reserve, Find Balance, Display Cash Available, Display Insufficient Balance, Display Insufficient Cash Reserve, Update Account, Update Cash Reserve and Print Passbook. The numbered activity edges and notes carry the guards and expressions 1. [Withdraw Selected], 2. [Pin Entered], 3. [Valid], 4. [a > 500], 5. [a < 500], 6. {r = res − a}, 7. [r < a], 8. [r > a], 9. {b = bal − a}, 10. [b < 1000], 11. [b > 1000], 12. {bal = b} and 13. {res = r}.

This paper is organized as follows: A brief discussion of UML 2.0 activity diagrams is given in the next section. Related research in the area of UML based testing is discussed in Section 3. Section 4 describes our methodology for generating test cases from activity diagrams and explains it using an example. An implementation of our approach is discussed in Section 5. We provide a comparison with related work in Section 6, and the conclusions are given in Section 7.

2 UML Activity Diagrams


In UML 2.0, an activity diagram consists of actions (executable activity nodes) and their relationships [HMBD04]. An activity diagram models the various activities in a use case, a package, or the implementation of an operation. Actions are the basic units of behavior that make up an activity. Actions are linked together using activity edges or transitions. An action may have several incoming and outgoing activity edges that specify control and data flow from and to other nodes [OMG05]. An activity diagram can describe how actions take place, what they do (change of object states) and in what sequence actions take place [HMBD04].

An activity diagram can be viewed as a description of the workflow behavior of a system that depicts the business logic flow and the actions that are performed. In an activity diagram, an activity is shown as a rectangle with rounded corners. An activity edge is represented by an arrow pointing towards the next activity. Edges are annotated by labels placed near the arrows. A guard condition may also be associated with an edge. The guard is a Boolean expression which has to evaluate to true for control or data to flow along the edge. Activity diagrams can represent conditional and parallel activities. A fork construct is used when multiple activities occur at the same time.


UML has become the de facto standard for object-oriented modeling and design. Recently, several methods have been proposed to execute UML models [RFBLO01, MB02, RFW+ 04, HG97, EHSW99, DT03]. Executable UML [MB02, RFW+ 04] allows model specifications to be efficiently translated into code. Executable UML formalizes requirements and use cases into a set of verifiable diagrams. The models are executable and testable and can be translated directly into code by executable UML model compilers. Our approach can also work on executable UML models.


In UML 2.0, the basic modeling element is termed a classifier. A classifier represents a group of things with common properties. A class is an example of a classifier. When an activity diagram models the behavior of a classifier, the activity can access the attributes, operations and objects associated with the respective classifier [Pil05]. UML 2.0 also allows an element called a note for adding additional information to the activity diagram. Notes are shown as dog-eared rectangle symbols linked to activities through a dashed line, as shown in Fig. 1. Notes are convenient for including pseudocode, constraints, pre-conditions, post-conditions, text annotations, etc. in an activity diagram. However, in our approach we restrict the notes to contain only executable statements denoting the implementation of the corresponding activity. Notes capture the executions involved in the corresponding activity. Though not mandated by UML, we number the predicates and expressions on activity edges as well as the notes, as shown in Fig. 1. This numbering is done in an arbitrary manner. The numbers help to establish the correspondence between an activity diagram and its flow dependency graph (Section 4.1).


An activity diagram can represent both conditional and parallel activities. A fork construct is used when concurrent activities occur in the activity diagram. For example, in Fig. 1 a fork construct appears after the action Display Cash Available. This indicates that the Update Account and Update Cash Reserve actions occur concurrently. All concurrent activities must be synchronized by a join construct before the final activity node is reached. In our approach, we have not considered concurrency and synchronization issues for the purpose of test case generation. A decision node is used to select different output flows subject to the satisfaction of certain conditions. A decision node is represented using a diamond symbol. In Fig. 1 a decision node appears after the Accept Amount action. A decision node has only one incoming activity edge, whereas it can have many outgoing edges in the activity diagram. A decision node chooses output flows based on the guard conditions associated with the outgoing edges [Pil05]. In our approach, we require that a decision node not enable more than one output flow at the same time.

3 Related Work

In this section, we briefly discuss related work. There has been an increasing interest among researchers in using UML models for test generation. Linzhang et al. [LJX+ 04] proposed a grey-box testing method using UML activity diagrams. In their approach, test scenarios are directly derived from an activity diagram modeling an operation. They proposed an algorithm to generate test scenarios from an activity diagram. The information regarding input/output sequences, parameters, constraint conditions and the expected object method sequence is extracted from each test scenario. They recommend applying the category-partition method [OB98] to generate possible values of all the input/output parameters.

Hartmann et al. [HVFR05] describe an interesting approach for generating and executing system tests. Their approach models the system behavior using UML use cases and activity diagrams. They also describe various types of model annotations that can be added by a test designer before test generation. In this approach, a test designer first manually annotates the UML models, which may have been semi-automatically extracted from existing textual use case documentation, with test requirements. In the second step, the test generation tool automatically creates a set of textual test procedures or executable test scripts. In the third step, a test executor runs these against the system under test using a commercial testing tool. The coverage criterion applied is transition coverage.

Briand and Labiche [BL02] describe the TOTEM (Testing Object-orienTed systEms with the Unified Modeling Language) system test methodology. Functional system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and OCL constraints used in all these artifacts. They represent sequential dependencies between use cases using an activity diagram for each actor in the system. The derivation of use case sequences from the activity diagram is done using a depth-first search through a directed graph capturing the activity diagram. They generate test requirements for system testing. Their test requirements are specific things (like possible execution sequences of use cases) that must be satisfied or covered during testing. They generate legal sequences of use cases according to the sequential dependencies specified in the activity diagram.


Among all UML diagrams, test generation from state chart diagrams has possibly received the maximum attention from researchers [CCD04, HIM00, KR03, KHBC99, OA99, SvAMF99, KWD05]. Offutt and Abdurazik [OA99, OLAP03] generated test cases from UML state diagrams. They introduced several useful test coverage criteria for UML state charts, such as (1) transition coverage, (2) full predicate coverage, (3) transition-pair coverage, and (4) complete sequence coverage. Their approach is intended to help perform class-level testing. Kansomkeat et al. [KR03] proposed a method for generating test sequences using UML state chart diagrams. They transform the state chart diagram into a flattened structure of states, called a Testing Flow Graph (TFG). From the TFG, they list possible event sequences, which they consider as test sequences. The testing criterion used to guide the generation of test sequences is the coverage of the states and transitions of the TFG. Cavarra et al. [CCD04] describe how UML class diagrams, state diagrams, and object diagrams can be translated into a formal language called IF (intermediate format) to characterize the behavior of a system. They use IF as a basis for test generation. The behavioral descriptions are written in a language of communicating state machines, and from this they form a test graph. Their test graph consists of all traces leading to a valid state together with branches that might produce an inconclusive result.


Hartmann et al. [HIM00] augment the UML description with specific notations to create a design-based testing environment. The developers first define the dynamic behavior of each system component using a state diagram. The interactions between components are then specified by annotating the state diagrams, and the resulting global FSM that corresponds to the integrated system behavior is used to generate the tests. Kim et al. [KWD05] describe a way to generate test sequences starting with a UML state machine specification that captures the underlying functionality of given Java-based concurrent systems. This approach applies the model checker SAL to generate the test sequences, which involves manually translating the UML model to the SAL model. Kim et al. [KHBC99] proposed a method for generating test cases for class testing using UML state chart diagrams. They transform state charts to extended finite state machines (EFSMs) to derive test cases. In the resulting EFSMs, the hierarchical and concurrent structure of states is flattened and broadcast communications are eliminated. Then data flow is identified by transforming the EFSMs into flow graphs, to which conventional data flow analysis techniques are applied. Scheetz et al. [SvAMF99] developed an approach for generating system (black box) test cases using an AI (Artificial Intelligence) planner. They used UML class diagrams and state diagrams to represent the conceptual architecture of a system under test.

Supavita et al. [SS05] describe an approach for testing polymorphic interactions, which are defined in UML sequence diagrams. The paper discusses several forms of polymorphic interactions and their factors, along with guidelines for test design. Chen [Che03] proposed an approach for object-oriented cluster-level testing using a formal specification language called Contract. They propose a set of guidelines to transform a given UML sequence or collaboration diagram into the corresponding Contract specification. Dinh-Trong et al. [DTKG+ 05] present an approach to test designs described by UML class diagrams and interaction diagrams. They describe a prototype tool that transforms UML design models into executable forms. The executable forms of the design models are generated from class diagrams and activity diagrams. They [DTKG+ 05] also identify the structural and behavioral characteristics that need to be observed during testing.

Bertolino and Basanieri [BB00] proposed a method to generate test cases following the sequence of messages between components in a sequence diagram. They develop sequence diagrams for each use case and use the category partition method [OB98] to generate test data. They characterize a test case as a combination of all suitable choices of the involved settings and interactions in a sequence of messages. In another interesting work, Basanieri et al. [BBM02] describe the CowSuite approach, which provides a method to derive test suites and a strategy for test prioritization and selection. CowSuite is mainly based on the analysis of the use case diagrams and sequence diagrams. From these two diagrams they construct a graph structure which is essentially a mapping of the system architecture. This graph is searched using a modified depth-first search algorithm. They use the category partition method [OB98] for generating tests. They construct test procedures using the information retrieved from the UML diagrams. Their test procedure consists of a sequence of messages and the associated parameters.

The works reported in [AFGC03, RKS05, AO00, MP05] propose several novel test criteria based on UML models. Abdurazik and Offutt [AO00] proposed several test criteria based on collaboration diagrams. Ghosh et al. [GFB+ 03] present a testing method in which executable forms of Unified Modeling Language (UML) models are tested. Their method incorporates the use of test adequacy criteria based on UML class diagrams and interaction diagrams. Andrews et al. [AFGC03] describe several useful test adequacy criteria for testing executable forms of UML. The criteria proposed for class diagrams include the association-end multiplicity criterion, the generalization criterion and the class attribute criterion. The interaction diagram criteria, such as condition coverage, full predicate coverage, each message on link, all message paths and collection coverage, are used to determine the sequences of messages that should be tested. They also describe a test process. Engels et al. [EHHS02] discuss how consistency among different UML models can be tested.

Pilskalns et al. [PAGF03] present an approach to integrate two complementary views of UML designs: structural views through class diagrams and behavioral views through sequence diagrams. They then use information from both these views to derive and execute tests that satisfy certain test adequacy criteria. They extract attributes and parameters from class diagrams, transform sequence diagrams into directed acyclic graphs (OMDAGs), and finally present an algorithm that merges the structural and behavioral representations into a table called the OMET (object-method test execution) table. Fraikin et al. [FL02] provide guidelines for generating testable sequence diagrams, and SeDiTeC, a test tool that supports automated generation of test stubs based on testable sequence diagrams. For classes and methods whose behavior is specified in sequence diagrams, and for which the corresponding test case data sets are also given, SeDiTeC can automatically generate test stubs.


Kamkar et al. [KFS93] explain how interprocedural dynamic slicing can be used to increase the reliability and precision of interprocedural data flow testing. Harman and Danicic [HD95] present an interesting work that illustrates how slicing can remove statements which do not affect a program variable at a location, thereby simplifying the process of testing and analysis. They also provide a program transformation algorithm to make a program robust.

4 Dynamic Slicing-Based Test Case Generation from Activity Diagrams

In this section, we describe our proposed methodology for automatic test case generation from UML activity diagrams using dynamic slicing.

4.1 Definitions and Concepts

The following definitions are used to describe our methodology.

Flow Dependency Graph (FDG): We define an FDG as a directed graph FDG = (N, D), where N is a set of nodes and D is a set of edges. Here, N = E ∪ R, where E is a set of e nodes; an e node e ∈ E represents an activity edge of the corresponding activity diagram together with its attributes, such as its guard condition and label. R is a set of r nodes; an r node r ∈ R represents a note in the activity diagram. Here we assume that notes are attached to activities and that a note captures the executions involved in the corresponding action. The edges of the FDG, called dependency edges, represent dependencies among nodes and are defined subsequently. In an FDG there is no distinction between control and data dependence edges. A distinction, however, is made between stable and unstable edges, which are also defined subsequently. The subgraph of the FDG corresponding to the activity diagram in Fig. 1 on the node set {4, 5, 6, 7, 8, 9, 10, 11, 12, 13} is shown in Fig. 2.

UseVar(x): The set of all nodes in the FDG that use the value of the variable x.

AllotVar(x): A set consisting of two types of FDG nodes, i.e. AllotVar(x) = F ∪ C. F is the set of all r nodes that define the variable x. Nodes of type C arise as follows: if x is used to specify guards in more than one e node, such e nodes are also treated as members of AllotVar(x). In the rest of the paper, we use the term allotment to indicate that a variable x is either defined or used to specify a guard. The C type nodes arise due to decision nodes in activity diagrams, where only one of the guards associated with the decision node evaluates to true for a particular execution. For example, consider nodes 4 and 5 of the FDG shown in Fig. 2. These nodes correspond to the activity edges 4.[a > 500] and 5.[a ≤ 500] respectively in Fig. 1. For a particular input value of the variable a, only one of these activity edges is taken. Hence, both nodes 4 and 5 of the FDG are treated as members of AllotVar(a); a use of a will require only one of them.

Dependency Edge: If a node i ∈ N is either data or control dependent on a node j ∈ N, then there can be an edge in the FDG from node i to node j, called a dependency edge. An outgoing dependence edge is an edge that emanates out of a node.

Dependency Path: Given an FDG, a dependency path from a node vi to another node vk is a sequence of nodes and edges in the FDG from vi to vk.
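One way to realise these definitions in code is sketched below. The class and field names are our own and are not prescribed by the paper; the sketch only records e/r nodes, dependency edges with a stable/unstable flag, and the UseVar/AllotVar sets.

    import java.util.*;

    // A minimal sketch of the flow dependence graph (FDG) described above (hypothetical names).
    class FdgNode {
        enum Kind { E_NODE, R_NODE }        // E: activity edge, R: note
        final int id;                        // number taken from the activity diagram
        final Kind kind;
        final String predicateOrExpr;        // guard condition or executable statement
        FdgNode(int id, Kind kind, String text) { this.id = id; this.kind = kind; this.predicateOrExpr = text; }
    }

    class DependencyEdge {
        final FdgNode from, to;              // 'from' depends on 'to'
        final boolean stable;                // recorded once, when the FDG is built
        boolean marked;                      // unstable edges are marked/unmarked at run time
        DependencyEdge(FdgNode from, FdgNode to, boolean stable) {
            this.from = from; this.to = to; this.stable = stable; this.marked = stable;
        }
    }

    class Fdg {
        final Map<Integer, FdgNode> nodes = new HashMap<>();
        final List<DependencyEdge> edges = new ArrayList<>();
        final Map<String, Set<FdgNode>> useVar = new HashMap<>();    // UseVar(x)
        final Map<String, Set<FdgNode>> allotVar = new HashMap<>();  // AllotVar(x)
    }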


Slicing Criterion: A slice is constructed based on a slicing criterion. In our case, the slicing criterion (m) specifies the location (i.e. identity) m of an e node in FDG.


Dynamic Slice: A dynamic slice of an activity diagram is defined using its corresponding FDG. We say that a node d affects another node q if q is control or data dependent on node d. For a given set of inputs and a slicing criterion, let T ⊆ N be the set of nodes that affect a node q ∈ N for a given execution of node q. The dynamic slice is the induced subgraph of the FDG on the node set T. We call this slice a dynamic slice of the activity diagram.


Figure 2: The Subgraph of the FDG of the Activity Diagram in Fig. 1 on the Node Set {4, 5, 6, 7, 8, 9, 10, 11, 12, 13} (stable and unstable edges are drawn with different line styles)

Unstable Edge: Let M, Mi and Mj be three nodes in the FDG. An outgoing dependence edge (M, Mi) from node M in the FDG is said to be unstable if there exists an outgoing dependence edge (M, Mj), with Mi not equal to Mj, such that the nodes Mi and Mj are both members of AllotVar(x), where x is any variable that is used at M. For example, (9,4) and (9,5) are unstable edges in Fig. 2. The nodes 4, 5 and 9 of the FDG correspond to 4.[a > 500], 5.[a ≤ 500] and 9.{b = bal − a} respectively in Fig. 1.
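Continuing the hypothetical Fdg classes sketched earlier, an outgoing dependence edge can be flagged as unstable exactly when its source node uses a variable that has more than one possible allotment, as with edges (9,4) and (9,5) of Fig. 2.

    // Sketch: an edge (m, mi) is unstable if some other edge (m, mj) exists with mi != mj
    // and both mi, mj in AllotVar(x) for a variable x used at m.
    static boolean isUnstable(Fdg g, FdgNode m, FdgNode mi) {
        for (Map.Entry<String, Set<FdgNode>> entry : g.allotVar.entrySet()) {
            String x = entry.getKey();
            Set<FdgNode> allot = entry.getValue();
            boolean mUsesX = g.useVar.getOrDefault(x, Set.of()).contains(m);
            if (!mUsesX || !allot.contains(mi)) continue;
            for (DependencyEdge d : g.edges) {
                if (d.from == m && d.to != mi && allot.contains(d.to)) {
                    return true;   // e.g. node 9 uses 'a'; nodes 4 and 5 both allot 'a'
                }
            }
        }
        return false;
    }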

Stable Edge: An edge in a dependency graph is said to be stable if it is not an unstable edge. For example, (10,9) and (11,9) are stable edges in Fig. 2.

Slice Condition: Consider a dynamic slice S for the slicing criterion (m). Corresponding to each node in S, there can be predicates or expressions in the respective activity diagram. We define the slice condition of the slice S as the conjunction of all predicates and expressions present in the activity diagram corresponding to the dynamic slice for a given execution. For example, let {a = 400, bal = 1800, res = 1000} be a data set for a particular execution of the diagram given in Fig. 1. For the slicing criterion (11), the dynamic slice S consists of nodes {5, 9} of the FDG and the slice condition is [(a ≤ 500) and (b = bal − a)].

Slice Domain: The slice domain of a slice S is the set of all input data values for which the slice condition associated with S is satisfied.

Boundary: A slice domain is surrounded by a boundary. A boundary might consist of several segments, and each segment of the boundary is called a border [HF98]. Each border is determined by a single simple predicate in the slice condition. For example, in Fig. 1 the condition (b > 1000) defines a border. The domain space for the variable b is the set of all integers, and the condition becomes false for values (b ≤ 1000). We say that a border crossing occurs for some input for which the conditional predicate changes its truth value from true to false or vice versa.

4.2 Test Coverage

A software test data adequacy criterion (or coverage criterion) can be used to find out whether a set of test cases is sufficient, or adequate, for testing a given piece of software. Several test coverage criteria, such as message path criteria and full predicate coverage, have been proposed in the literature [AFGC03]. We discuss a few coverage criteria that are used later in this paper.

Path Coverage Criterion: We define the path coverage criterion for an activity diagram as follows: consider a test set T and an FDG corresponding to an activity diagram AD. In order to satisfy the path coverage criterion, T must cause all dependency paths in the FDG to be taken at least once.

In the UML 1.x based testing literature, the term transition was used to denote a connection between activities. In UML 2.0, transitions among activities have been replaced by activity edges. A test set that ensures the coverage of all transitions in an activity diagram is said to achieve transition coverage.

Theorem. Transition coverage does not ensure path coverage in an activity diagram.

Proof. We use a counterexample. Assume that transition coverage ensures path coverage. Consider three transitions a, b, c in an activity diagram, and assume further that the path sequences b → a → c and a → b → c are both possible during some executions. A test set that covers the path sequence b → a → c covers all three transitions but does not test the other path sequence a → b → c. This contradicts our assumption. Hence transition coverage does not ensure path coverage.

4.2.1 Full Predicate Coverage

Full predicate coverage for an activity diagram is defined as follows: Given a test set T and an activity diagram AD, T must cause each clause in every predicate on each activity edge in AD to take on the values TRUE and FALSE, while all other clauses in the predicate have values such that the value of the predicate will still be the same as the clause being tested. This ensures that each clause in a condition is tested separately [OA99, AFGC03]. In other words, each clause in each predicate on every activity edge must independently determine the outcome of the predicate.
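For a guard with more than one clause, full predicate coverage can be illustrated with a small truth-table style check. The two-clause guard below is a hypothetical illustration of ours, not part of the ATM example.

    // Hypothetical guard with two clauses: [r > a && b > 1000].
    // Full predicate coverage: each clause takes both truth values while the other
    // clause is held so that the varied clause determines the predicate outcome.
    public class FullPredicateCoverageDemo {
        static boolean guard(int r, int a, int b) { return r > a && b > 1000; }

        public static void main(String[] args) {
            // Vary clause (r > a) while (b > 1000) stays true:
            System.out.println(guard(600, 500, 1500));  // clause true  -> predicate true
            System.out.println(guard(400, 500, 1500));  // clause false -> predicate false
            // Vary clause (b > 1000) while (r > a) stays true:
            System.out.println(guard(600, 500, 1500));  // clause true  -> predicate true
            System.out.println(guard(600, 500, 900));   // clause false -> predicate false
        }
    }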

4.2.2 Boundary Testing Criterion

Testers have frequently observed that domain boundaries are particularly fault-prone and should therefore be carefully checked [JW94]. The boundary testing criterion is applicable whenever the test input domain is subdivided into subdomains by decisions. Let us select an arbitrary border for each predicate p. We assume that the predicates on the activity diagram are relational expressions (inequalities). That is, all predicates are of the form E1 op E2, where E1 and E2 are arithmetic expressions and op is one of {<, ≤, >, ≥}. Jeng and Weyuker [JW94] have reported that a border due to an inequality can be adequately tested by using only two points of the test input domain, one named the ON point and the other the OFF point. The ON point can be anywhere on the given border; it does not even have to lie exactly on the given border. All that is necessary is that it satisfies the slice condition associated with the border. The only requirement for the OFF point is that it be as close to the ON point as possible, but it should lie outside the border.


The boundary testing criterion can now be defined as follows: The boundary testing criterion is satisfied for inequality borders, if each selected inequality border b is tested by two points (ON-OFF) of test input domain such that, if for one of the points the outcome of a selected predicate q is true, then for the other point the outcome of q is false. Also the points should satisfy the slice condition associated with b and the points should be as close as possible to each other [HF98].


Border testing criteria for equality and non-equality borders are defined in a similar way in [JW94, HF98]. For conciseness, we do not consider them here; however, they can easily be accommodated in our approach. We use boundary testing along with the path coverage criterion. The number of test cases that need to be generated to achieve path coverage can be large when random testing is used. Instead of generating a set of test cases randomly and selecting from this set the test cases that satisfy the slice condition, we generate two test cases for each predicate using boundary testing.
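A minimal sketch (our own) of how ON and OFF points can be chosen for an inequality border such as (b > 1000) while a slice condition such as (a ≤ 500) and (b = bal − a) is respected. The values reproduce the points used in the worked example of Section 4.6.

    // Sketch: ON/OFF test points for the border b > 1000 under the slice condition
    // a <= 500 and b = bal - a.  Values and names are illustrative only.
    public class BoundaryPoints {
        public static void main(String[] args) {
            int a = 500;                        // satisfies a <= 500
            int onBal = 1501, offBal = 1500;    // adjacent points on either side of the border
            int onB = onBal - a;                // b = 1001 -> border predicate true  (ON point)
            int offB = offBal - a;              // b = 1000 -> border predicate false (OFF point)
            System.out.println("ON  point [a,bal] = [" + a + "," + onBal + "], b>1000 is " + (onB > 1000));
            System.out.println("OFF point [a,bal] = [" + a + "," + offBal + "], b>1000 is " + (offB > 1000));
        }
    }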

4.3 Generation of Test Cases

In an activity diagram, all activity edges are labeled with guard conditional predicates. Of course, the conditional predicate might trivially be an empty predicate, which is always true. For each activity edge, there is a corresponding node in the FDG. From the FDG, we create the dynamic slice for the slicing criterion (m) for the given set of inputs for each e node. In our case, the slicing criterion (m) specifies the location (i.e. identity) m of an e node in the FDG. For the given set of inputs, those nodes of the FDG that do not affect the predicate at m for a given execution are removed to form the dynamic slice of the activity diagram for the slicing criterion (m). With respect to each slice, we generate test data. The generated test data for the predicate associated with m correspond to the true and false values of the predicate, and these test data values are generated subject to the slice condition.
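The test cases generated from a slice later take the form (Source Action, [test data], Destination Action), as shown in Section 4.6. A plain record such as the hypothetical class below (our own sketch, not necessarily UTG's internal representation) is enough to hold them.

    import java.util.Arrays;

    // Sketch of a generated test case in the form (Source Action, [test data], Destination Action).
    class GeneratedTestCase {
        final String sourceAction;
        final int[] testData;            // e.g. values of [a, bal]
        final String destinationAction;

        GeneratedTestCase(String src, int[] data, String dst) {
            this.sourceAction = src; this.testData = data; this.destinationAction = dst;
        }

        @Override public String toString() {
            return "(" + sourceAction + ", " + Arrays.toString(testData) + ", " + destinationAction + ")";
        }
    }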

4.4 Dynamic Slice of Activity Diagrams

In our approach, the slices of an activity diagram are generated using the flow dependency graph (FDG). An FDG is constructed statically and needs to be constructed only once. For a given set of inputs, we create dynamic slices corresponding to e nodes. For creating dynamic slices, we use an edge marking method. Edge marking methods are reported in [MMS02] for generating dynamic slices in the context of procedural programs. The edge marking methods in [MMS02] use a program dependence graph; we generate a flow dependence graph from the UML activity diagram and apply the edge marking technique to it. The edge marking algorithm involves marking and unmarking the unstable edges appropriately as and when dependencies arise and cease at run time. After an execution of a node x at run time, an unstable edge (x, y) is marked if the node x uses the value of a variable v allotted at y. A marked unstable edge (x, y) is unmarked after an execution of a node z if the nodes y and z are in AllotVar(v) and the value of v computed at the node y does not affect the present value of v at node z.

Before execution of an activity diagram, the type of each edge of the FDG is appropriately recorded as either stable or unstable. The dependence associated with a stable edge exists in all executions of an activity diagram, whereas the dependence associated with an unstable edge may change across executions. We mark an unstable edge when its associated dependence exists, and unmark it when its associated dependence ceases to exist. Each stable edge is marked and each unstable edge is unmarked at the time of construction of the FDG. We mark and unmark unstable edges during the execution of the activity edge sequences, as and when dependencies arise or cease; a stable edge is never unmarked. Let dslice(n) denote the dynamic slice with respect to the recent execution of the node n, and let (n, x1), (n, x2), ..., (n, xk) be all the marked outgoing dependence edges of n in the updated FDG after an execution of the node n. It is clear that the dynamic slice with respect to the present execution of the node n is dslice(n) = {x1, x2, ..., xk} ∪ dslice(x1) ∪ ... ∪ dslice(xk).

We now present our edge marking dynamic slicing algorithm for activity diagrams in pseudocode form. Subsequently, we explain this algorithm using an example.

Edge Marking Dynamic Slicing Algorithm for Activity Diagrams

1. Initialization, before execution:

(a) Record each edge in the FDG as either a stable or an unstable edge.

(b) Unmark all the unstable edges.

(c) Set dslice(n) = NULL for every node n of the FDG.


2. For each node n ∈ N of the FDG, when n is executed, do the following:

(a) For every variable used at n, mark the unstable edge corresponding to its most recent allotment.

(b) Update dslice(n).

(c) If n ∈ AllotVar(x) and n is not a UseVar(x) node, then:


i. Unmark every marked unstable edge (n1, n2), where n1 ∈ UseVar(x) and n2 is a node that does not affect the present allotment of the variable x.


A marked unstable edge (n1, n2) representing the dependence of node n1 on node n2 in the previous execution of node n1 will not continue to represent the same dependence in the next execution of node n1 when the input data changes. For example, let {a = 700, bal = 1800, res = 1000} be an input data set for the diagram given in Fig. 1. For the slicing criterion (11), the edges (9,4) and (9,5) are initially unmarked unstable edges, as seen in Fig. 2. During the execution of node 11, for the given data set, we mark the unstable edge (9,4) whereas the unstable edge (9,5) remains unmarked, since the value of a at present is 700. Hence the dynamic slice for the slicing criterion (11) is {9} ∪ dslice(4) and does not include node 5.
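A compact sketch of the per-node update, reusing the hypothetical Fdg classes sketched in Section 4.1; names and the way the current allotments are supplied are our own assumptions.

    import java.util.*;

    // Sketch of the update performed when a node n of the FDG is executed.
    class EdgeMarkingSlicer {
        final Map<FdgNode, Set<FdgNode>> dslice = new HashMap<>();

        void onNodeExecuted(Fdg g, FdgNode n, Map<String, FdgNode> recentAllotment, Set<String> varsUsedAtN) {
            // (a) For every variable used at n, mark the unstable edge to its most recent allotment.
            for (DependencyEdge e : g.edges) {
                if (e.stable || e.from != n) continue;
                for (String x : varsUsedAtN) {
                    if (e.to == recentAllotment.get(x)) e.marked = true;
                }
            }
            // (b) dslice(n) = {x1,...,xk} U dslice(x1) U ... U dslice(xk),
            //     where (n, x1)...(n, xk) are the marked outgoing dependence edges of n.
            Set<FdgNode> slice = new HashSet<>();
            for (DependencyEdge e : g.edges) {
                if (e.from == n && e.marked) {
                    slice.add(e.to);
                    slice.addAll(dslice.getOrDefault(e.to, Set.of()));
                }
            }
            dslice.put(n, slice);
            // (c) Unmarking of edges whose dependence has ceased (step 2(c) of the algorithm) is omitted here.
        }
    }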

4.5 Generating Test Data

In our approach, we select a conditional predicate on an activity edge during the execution of an activity diagram with a random input set. A predicate is not selected if test data has already been generated for it using a different input set. For each activity edge in an activity diagram, there is a corresponding node in the FDG. From the FDG, we create the dynamic slice for the slicing criterion (m). Here m specifies the location (i.e. identity) of an e node in the FDG. Those nodes of the FDG that do not affect the predicate at m for a given set of inputs are removed to form the dynamic slice of the activity diagram for the slicing criterion (m). The dynamic slice S contains the nodes of the FDG that actually affect the predicate at run time. From S, we form the slice condition, which is the conjunction of all predicates and expressions present in the activity diagram corresponding to the dynamic slice. For each predicate, we generate test data satisfying the slice condition of the computed dynamic slice. For generating test data automatically, we transform each predicate into a predicate function and use function minimization on it.

Consider an initial set of data I0 for the variables that affect a predicate p of an activity diagram. As already mentioned, based on the slice we generate, we compute two points named ON and OFF for a given border satisfying the boundary testing criterion. For this, we transform the predicate p into a function F called a predicate function. If the predicate p is of the form (E1 op E2), where E1 and E2 are arithmetic expressions and op is a relational operator, then F = (E1 − E2) or (E2 − E1), depending on whichever is positive for the data I0. Next, we successively modify the input data I0 such that the function F decreases and finally turns negative. When F turns negative, it corresponds to the alternation of the outcome of the predicate. Hence, as a result of the predicate transformation, the problem of finding the value at which the predicate p changes its Boolean value corresponds to the problem of minimizing the function F. This minimization can be achieved through repeated modification of the input data values. This step is explained in the following paragraph.

The search procedure we use for finding the minimum of the predicate function F is the alternating variable method [Kor90]. The method consists of minimizing F with respect to each input variable in turn. Each input data variable xi is increased or decreased in steps of uxi, while keeping all the other data variables unchanged. Here uxi refers to a unit step of the variable xi. The choice of unit step depends on the data type being considered; for integer values the unit step naturally is 1. The method works with many other types of data such as float, double, array and pointer. However, this method may not be applicable when a variable assumes only a discrete set of values. Each conditional predicate in the slice can be considered as a constraint. If any of the constraints is not satisfied in the slice for some input data value, we say that a constraint violation has taken place. We compute the value of F after each input data item is modified by uxi. If the function F decreases on the modified data and no constraint violation occurs, then the given data variable and the appropriate direction are selected for minimizing F further. Here, appropriate direction refers to whether we increase or decrease the data variable xi. We search for a minimum with an input variable while keeping all the other input variables constant. We go on modifying the value of xi until a solution is found (the predicate function becomes negative) or the minimum of the predicate function F is located with the current input variable xi. In the latter case, the search continues from this minimum with the next input variable.
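A small sketch of ours of the predicate-function transformation and the alternating-variable search, applied to the border predicate (b > 1000) under the slice condition (a ≤ 500) of the example that follows; step sizes, start values and the stopping rule are illustrative.

    // Sketch of the alternating variable method on F = (bal - a) - 1000 for b > 1000,
    // with the slice condition a <= 500 kept as a constraint.
    public class AlternatingVariableDemo {
        static int f(int a, int bal) { return (bal - a) - 1000; }      // F > 0 means predicate true

        public static void main(String[] args) {
            int a = 450, bal = 1550;                                    // initial data I0, F = 100
            int[] best = {a, bal};
            while (f(best[0], best[1]) > 0) {
                int a1 = best[0] + 1;                                   // increasing a decreases F
                if (a1 <= 500 && f(a1, best[1]) < f(best[0], best[1])) { best[0] = a1; continue; }
                best[1] = best[1] - 1;                                  // then decrease bal
            }
            // best now lies just outside the border; one unit step of bal back gives the ON point.
            System.out.println("OFF point [a,bal] = [" + best[0] + "," + best[1] + "]");
            System.out.println("ON  point [a,bal] = [" + best[0] + "," + (best[1] + 1) + "]");
        }
    }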

4.6 An Example

In this subsection, we explain the working of our methodology using an example. Consider the activity diagram shown in Fig. 1. This activity diagram represents the withdraw cash situation for a bank ATM. Initially, the ATM displays the transaction menu. When a user selects the withdraw transaction and supplies the pin (or password), the machine tries to validate it. In our example, for simplicity, we have shown only the case of a successful password validation. Once password validation is successful, the user enters the amount (represented by a) to be withdrawn. Here, it is assumed that the user can withdraw cash up to a maximum of $500 in a day. Each user has to maintain a minimum balance of $1000 with the bank. Assume that the machine allows cash withdrawal only if the condition (b > 1000) is true, where b represents the balance before the account is updated. Each time a user tries to withdraw an amount a, the ATM computes the remaining balance using the expression (b = bal − a), where bal represents the current account balance; bal is updated to b once the withdraw transaction completes.

Using this example, we demonstrate test data generation for the predicate (b > 1000), which corresponds to node 11 of the FDG in Fig. 2. Consider an execution of the situation where the user has a current balance of $1550 with the bank and tries to withdraw an amount of $450 from the ATM. Assume that in this situation the ATM has a cash reserve of $5000. Hence a = 450, bal = 1550, res = 5000 is the initial data set. For the slicing criterion (11), the edges (9,4) and (9,5) are initially unmarked unstable edges, as shown in Fig. 2. During the execution of node 11, for the given data set, we mark the unstable edge (9,5) whereas the unstable edge (9,4) remains unmarked, since the value of a at present is 450. Hence the dynamic slice of node 11 for the slicing criterion (11) is {9} ∪ dslice(5). The dynamic slice for this predicate consists of nodes 5 and 9 of the FDG, which correspond to the slice condition [(a ≤ 500) and (b = bal − a)]. The generated test data should satisfy these constraints in the slice.

Let I0 be the initial data. Here I0 is [450,1550], where (a = 450, bal = 1550). The condition (b > 1000) is true for I0. The predicate function F is the expression (b − 1000), and F(I0) = 100. We should minimize F in order to alter the Boolean truth value of the predicate (b > 1000), which is true initially. In order to minimize F, we carry out a series of function minimization iterations as discussed in Section 4.5 and finally arrive at two points [a,bal] = [500,1501] and [500,1500] for which the predicate (b > 1000) has different truth values.

Therefore, the test cases we generate for the predicate (b > 1000) are (Find Balance-Decision, [500,1501], Display Cash Available) and (Find Balance-Decision, [500,1500], Find Balance-Decision); they correspond to the two Boolean truth values of the predicate (b > 1000). Here, Find Balance-Decision denotes the decision node following the Find Balance action. Test cases have the form (Source Action, [test data], Destination Action). In this example, the test data consists of values of [a,bal]. We require only two test data points, Iin: [500,1501] and Iout: [500,1500], to test the predicate (b > 1000). These test cases are generated subject to the slice condition. With our proposed method we generate test cases for each such conditional predicate on the activity diagram.
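If the withdraw-cash behaviour were implemented, the two generated test cases could be exercised roughly as below. The Atm-style withdrawAllowed method is a hypothetical stand-in of ours for the system under test, used only to show how the boundary values would be driven through it.

    // Hypothetical harness for the two generated test cases of the predicate (b > 1000).
    public class WithdrawCashTest {
        static boolean withdrawAllowed(int a, int bal) {   // stand-in for the real ATM logic
            int b = bal - a;
            return a <= 500 && b > 1000;
        }

        public static void main(String[] args) {
            // (Find Balance-Decision, [500, 1501], Display Cash Available): predicate true
            assert withdrawAllowed(500, 1501);
            // (Find Balance-Decision, [500, 1500], Find Balance-Decision): predicate false
            assert !withdrawAllowed(500, 1500);
            System.out.println("Both boundary test cases executed.");
        }
    }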

5 An Implementation

To the best of our knowledge, no full-fledged, ready-made tool is publicly available to execute UML models. Hence, for generating dynamic slices in our experimentation, we have simulated the executions with random input values.

We have implemented our method for generating test cases automatically from UML activity diagrams in a prototype tool named UTG. Here, UTG stands for UML behavioral Test case Generator. UTG has been implemented using Java and is able to integrate with UML CASE tools like MagicDraw UML [Inc] that support exporting and importing models in XML (Extensible Markup Language) format. Since UTG takes UML models in XML format as input, UTG is independent of any specific CASE tool.

Figure 3: Class Diagram of the Activity Component of UTG (classes: XmlBoundary, ActivityTestCaseController, TestCaseBoundary, SliceGenerator, SliceRecord, TestDataRecord, Stack, ActivityDocumentParser)

Figure 4: The GUI Screen of UTG with an Example Activity Diagram

Fig. 3 shows the important classes that we used in our implementation. The ActivityDocumentParser class parses the XML file corresponding to a UML activity diagram. We have used the Document Object Model (DOM) API, which comes with the standard edition of the Java platform, for parsing XML files. The package org.w3c.dom.* provides the interfaces for the DOM. The DOM parser begins by creating a hierarchical object model of the input XML document. This object model is then made available to the application so that it can access the information it contains in a random access fashion. This allows an application to process only the data of interest and ignore the rest of the document. By not having to process every element of a document, the programmer using DOM is freed from having to define a complete set of classes that map to the elements in the document. DOM also makes it easy to revisit elements encountered earlier in the document. After using factory methods to create a DOM parser, one has to pass the XML file to the parser, and it returns an org.w3c.dom.Document object. This document object contains all the information parsed from the file, and we call its methods to extract individual elements or lists of elements from the object.
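A minimal sketch of the DOM-based parsing step described above. The file name and the tag name "edge" are placeholders of ours, since the exact XML schema depends on the exporting CASE tool.

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;
    import java.io.File;

    // Minimal sketch of parsing an exported activity-diagram XML file with the DOM API.
    public class ActivityDocumentParserSketch {
        public static void main(String[] args) throws Exception {
            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new File("activity.xml"));   // builds the in-memory object model
            doc.getDocumentElement().normalize();
            NodeList edges = doc.getElementsByTagName("edge");        // random-access lookup of elements
            System.out.println("Activity edges found: " + edges.getLength());
        }
    }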






Figure 5: A Screen Shot of UTG with a Portion of the XML File Corresponding to example in Fig. 4

Figure 6: A Screen Shot of UTG with Generated Test Cases Corresponding to the Example of Fig. 4



XmlBoundary is the class of the program from which the execution starts. It accepts an XML file of an activity diagram from a user. It then extracts the parent tag of the XML file and passes the tag (called head) to the ActivityTestCaseController class. The ActivityTestCaseController class coordinates the different activities of the program. The TestCaseBoundary class is responsible for displaying the list of test cases for an activity diagram. The source and destination activities as well as the slice condition are printed along with the test data.

The SliceGenerator class uses the flow dependency graph. It creates the sets AllotVar and UseVar for each variable in the activity diagram. It forms slices based on the slicing criteria for a random set of inputs for the given activity diagram. The SliceRecord class keeps a record of the generated slices; each slice is later used for test data generation.

In our prototype implementation, we assume that only integer and Boolean variables are used to form conditional expressions in activity diagrams. Other data types such as float and double can, however, easily be supported through a simple extension. The GUI was developed using the Swing components of Java. A GUI screen along with a sample activity diagram is shown in Fig. 4. The GUI gives the flexibility to view the activity diagram, its XML representation and the generated test cases. Fig. 5 shows the UTG display of the XML file of the example given in Fig. 4. The corresponding generated test cases are shown in Fig. 6. Our tool allows storing the test cases as text files, which can be used by a test execution tool to automatically test software. We have found that our tool is effective in generating test cases from a given activity diagram.

6 Comparison With Related Work

In this section we provide a comparison of our work with other UML-based test generation methods. To the best of our knowledge, no work has been reported in the literature that discusses the application of dynamic slicing to UML models for test case generation. Linzhang et al. [LJX+ 04] proposed a method to derive test scenarios directly from UML activity diagrams. Automatic generation of test data values from the test scenarios was not addressed. They use the category partition method [OB98] and generate test data using a semi-manual method. Further, they generate test scenarios using a DFS traversal of the activity diagram, considering it as a graph. Their approach [LJX+ 04] does not capture the actual dependencies of the system. This is due to the fact that they use only the static structural information available in an activity diagram. Run-time behaviors due to data dependency are not captured. Since we use dynamic slicing based on the flow dependency graph of an activity diagram, the slice captures the actual dependencies of the system at run time, based on the inputs.




Hartmann et al. [HVFR05] describe an approach for generating and executing system tests using UML 1.x use cases and activity diagrams. Their method achieves transition coverage. We have proved, using the theorem given in Section 4.2, that transition coverage does not ensure path coverage. In UML 2.0, transitions among activities are replaced by activity edges. Transition coverage, however, does not cover the dependencies that might exist among transitions [OA99, OLAP03]. This is because certain sequences of transitions must also be considered when dependencies exist among these transitions. Compared to [HVFR05], our approach uses dynamic slices in which the run-time dependency of one activity edge on another is considered. Test cases are generated considering the actual dependencies in a dynamic slice. Hence, our approach can achieve high path coverage. Further, [HVFR05] use the category partition method [OB98] to generate test data, which is a semi-manual method. Compared to this, our test data generation scheme can be fully automated. Also, in our approach, test data is generated considering the slice condition. This helps in satisfying all conditions that make a node execute. This is not considered in [LJX+ 04, HVFR05].

Cavalli et al. [CMPV04] discuss testing of the dotLRN platform based on activity diagrams, use case diagrams and class diagrams, but do not generate test data. Briand and Labiche [BL02] generated functional system test requirements from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and the OCL used in all these artifacts. They represent sequential dependencies between use cases by means of an activity diagram for each actor in the system. They generate test requirements meant for system testing, but do not generate test cases. Compared to these approaches [LJX+ 04, HVFR05, CMPV04, BL02], our approach automatically generates test data considering the actual dependencies arising at run time.

When test data is generated manually, as in the case of much related work, the achieved test coverage is usually poor. This is especially true for large systems, for which it is practically impossible to generate test data by hand that achieves path-based coverage criteria. In the example given in Fig. 1, we have shown that the predicate (b > 1000) can be tested with the two test data points for [a,bal] given by [500,1501] and [500,1500]. Generating these values by hand, even in this simple example, is cumbersome and time consuming, because the generated test values have to satisfy all constraints in the slice condition for (b > 1000). A limitation of our approach is the number of input sets that we need to consider for creating dynamic slices of an activity diagram. Our approach can achieve complete path coverage if all executions for all inputs are considered. This is not practical, and hence a test designer has to judiciously limit the number of input sets considered for an activity diagram in order to achieve high path coverage.

7 Conclusion

We have presented an execution-based scheme to generate test cases using dynamic slicing of UML activity diagrams. Originally, dynamic slicing was defined and used with respect to program code. We have defined dynamic slices and slicing criteria in terms of the flow dependency graph to make them applicable to activity diagrams. A dynamic slice helps to consider the dependencies among activities that arise at run time. We generate dynamic slices based on the FDG of an activity diagram using an edge marking method. Construction of the FDG is the only static part of our approach. Slices are created from the FDG corresponding to the conditional predicate on each activity edge in the respective activity diagram. We apply function minimization methods to generate test data based on each slice. Our approach automatically generates test data to achieve high path coverage. The test cases generated through our approach can be used for testing cluster-level behaviors. In our approach, we have not addressed specific issues like concurrency and polymorphism. We can also use the test cases for the activity diagram to check whether the software conforms to the requirements in the activity diagram. This helps to ensure conformance of an implementation with the actual design.

Acknowledgements

The authors would like to thank Mr. Ajay Kumar Bothra, who helped in implementing the prototype tool.

References



[AFGC03] A. Andrews, R. France, S. Ghosh, and G. Craig. Test adequacy criteria for UML design models. Software Testing Verification and Reliability, 13:97–127, 2003.
[AO00] A. Abdurazik and J. Offutt. Using UML collaboration diagrams for static checking and test generation. In Proceedings of the 3rd International Conference on the UML, Lecture Notes in Computer Science, volume 1939, pages 383–395, York, U.K., October 2000. Springer-Verlag GmbH.
[BB00] A. Bertolino and F. Basanieri. A practical approach to UML-based derivation of integration tests. In Proceedings of the 4th International Software Quality Week Europe and International Internet Quality Week Europe, Brussels, Belgium, 2000. QWE.
[BBM02] F. Basanieri, A. Bertolino, and E. Marchetti. The Cow_Suite approach to planning and deriving test suites in UML projects. In Proceedings of the Fifth International Conference on the UML, LNCS, volume 2460, pages 383–397, Dresden, Germany, October 2002. Springer-Verlag GmbH.
[BG96] D. Binkley and K. Gallagher. Program Slicing, volume 43 of Advances in Computers. Academic Press, 1996.
[BL02] L. Briand and Y. Labiche. A UML-based approach to system testing. Journal of Software and Systems Modeling, 1(1):10–42, 2002.
[CCD04] A. Cavarra, C. Crichton, and J. Davies. A method for the automatic generation of test suites from object models. Information and Software Technology, 46(5):309–314, 2004.
[CF97] C. Cifuentes and A. Fraboulet. Intraprocedural static slicing of binary executables. In IEEE International Conference on Software Maintenance (ICSM97), pages 188–195. IEEE Computer Society Press, Los Alamitos, USA, 1997.
[Che03] H. Y. Chen. An approach for object-oriented cluster-level tests based on UML. In IEEE International Conference on Systems, Man and Cybernetics, volume 2, pages 1064–1068, 2003.
[CMPV04] A. Cavalli, S. Maag, S. Papagiannaki, and G. Verigakis. From UML models to automatic generated tests for the dotLRN e-learning platform. Electronic Notes in Theoretical Computer Science, 116:133–144, January 2004.
[DT03] T. T. Dinh-Trong. Rules for generating code from UML collaboration diagrams and activity diagrams. Master's thesis, Colorado State University, Fort Collins, Colorado, 2003.
[DTKG+05] T. T. Dinh-Trong, N. Kawane, S. Ghosh, R. France, and A. A. Andrews. A tool-supported approach to testing UML design models. In Proceedings of the 10th IEEE International Conference on Engineering of Complex Computer Systems, pages 519–528, Shanghai, China, June 2005. IEEE Computer Society.
[EHHS02] G. Engels, J. H. Hausmann, R. Heckel, and S. Sauer. Testing the consistency of dynamic UML diagrams. In Proceedings of the Sixth International Conference on Integrated Design and Process Technology (IDPT), USA, 2002. Society for Design and Process Science.
[EHSW99] G. Engels, R. Hucking, S. Sauer, and A. Wagner. UML collaboration diagrams and their transformations to Java. In Proceedings of the 2nd International Conference on the UML, LNCS, volume 1723, pages 473–488, Berlin/Heidelberg, October 1999. Springer.
[FL02] F. Fraikin and T. Leonhardt. SeDiTeC - testing based on sequence diagrams. In Proceedings of the 17th IEEE International Conference on Automated Software Engineering, pages 261–266. IEEE Computer Society, September 2002.
[F.T95] F. Tip. A survey of program slicing techniques. Journal of Programming Languages, 3(3):121–189, June 1995.
[GFB+03] S. Ghosh, R. France, C. Braganza, N. Kawane, A. Andrews, and O. Pilskalns. Test adequacy assessment for UML design model testing. In Proceedings of the 14th International Symposium on Software Reliability Engineering (ISSRE03), pages 332–343. IEEE Computer Society, November 2003.
[GL91] K. B. Gallagher and J. R. Lyle. Using program slicing in software maintenance. IEEE Transactions on Software Engineering, 17(8):751–761, August 1991.
[HD95] M. Harman and S. Danicic. Using program slicing to simplify testing. Software Testing Verification and Reliability, 5(3):143–162, September 1995.
[HF98] A. Hajnal and I. Forgacs. An applicable test data generation algorithm for domain errors. In ACM SIGSOFT Software Engineering Notes, Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis, volume 23, 1998.
[HFH+99] M. Harman, C. Fox, R. M. Hierons, D. Binkley, and S. Danicic. Program simplification as a means of approximating undecidable propositions. In 7th IEEE Workshop on Program Comprehension, pages 208–217. IEEE Computer Society Press, Los Alamitos, USA, 1999.
[HG97] D. Harel and E. Gery. Executable object modeling with statecharts. IEEE Computer, 30(7):31–42, 1997.
[HIM00] J. Hartmann, C. Imoberdorf, and M. Meisinger. UML-based integration testing. In ACM SIGSOFT Software Engineering Notes, Proceedings of the International Symposium on Software Testing and Analysis, volume 25, August 2000.
[HK98] M. Harman and K. B. Gallagher. Program slicing. Information and Software Technology, 40:577–581, December 1998.
[HMBD04] H. Eriksson, M. Penker, B. Lyons, and D. Fado. UML 2 Toolkit. Wiley, 2004.
[HPR89] S. Horwitz, J. Prins, and T. Reps. Integrating noninterfering versions of programs. ACM Transactions on Programming Languages and Systems, 11(3):345–387, July 1989.
[HVFR05] J. Hartmann, M. Vieira, H. Foster, and A. Ruder. A UML-based approach to system testing. Innovations in Systems and Software Engineering, 1(1):12–24, 2005.
[Inc] No Magic Inc. MagicDraw UML, Version 9.5. Golden, CO, www.magicdraw.com.
[JA90] J. Horgan and H. Agrawal. Dynamic program slicing. In Proceedings of the ACM SIGPLAN'90 Conference on Programming Language Design and Implementation, SIGPLAN Notices, volume 25, pages 246–256, White Plains, New York, 1990.
[JW94] B. Jeng and E. J. Weyuker. A simplified domain-testing strategy. ACM Transactions on Software Engineering and Methodology (TOSEM), 3(3), 1994.
[KFS93] M. Kamkar, P. Fritzson, and N. Shahmehri. Interprocedural dynamic slicing applied to interprocedural data flow testing. In Proceedings of the Conference on Software Maintenance, pages 386–395. IEEE Computer Society, Washington, DC, USA, 1993.
[KHBC99] Y. G. Kim, H. S. Hong, D. H. Bae, and S. D. Cha. Test cases generation from UML state diagrams. IEE Proceedings - Software, 146(4):187–192, 1999.
[KJ88] B. Korel and J. Laski. Dynamic program slicing. Information Processing Letters, 29(3):155–163, October 1988.
[Kor90] B. Korel. Automated software test data generation. IEEE Transactions on Software Engineering, 16(8):870–879, 1990.
[KR98] B. Korel and J. Rilling. Dynamic program slicing methods. Information and Software Technology, 40:647–659, 1998.
[KR03] S. Kansomkeat and W. Rivepiboon. Automated-generating test case using UML statechart diagrams. In Proceedings of SAICSIT 2003, pages 296–300. ACM, 2003.
[KSTV03] B. Korel, I. Singh, L. H. Tahat, and B. Vaysburg. Slicing of state-based models. In Proceedings of the 19th International Conference on Software Maintenance (ICSM), pages 34–43. IEEE, 2003.
[KWD05] S. K. Kim, L. Wildman, and R. Duke. A UML approach to the generation of test sequences for Java-based concurrent systems. In Proceedings of the Australian Software Engineering Conference (ASWEC05), pages 100–109. IEEE Computer Society, 2005.
[LD98] A. Lakhotia and J. C. Deprez. Restructuring programs by tucking statements into functions. Information and Software Technology, Special Issue on Program Slicing, 40:677–689, 1998.
[LFM96] A. De Lucia, A. R. Fasolino, and M. Munro. Understanding function behaviours through program slicing. In 4th IEEE Workshop on Program Comprehension, pages 9–18. IEEE Computer Society Press, Los Alamitos, USA, 1996.
[LJX+04] W. Linzhang, Y. Jiesong, Y. Xiaofeng, H. Jun, L. Xuandong, and Z. Guoliang. Generating test cases from UML activity diagrams based on gray-box method. In Proceedings of the 11th Asia-Pacific Software Engineering Conference (APSEC04), pages 284–291. IEEE, 2004.
[Luc01] A. De Lucia. Program slicing: methods and applications. In First IEEE International Workshop on Source Code Analysis and Manipulation, pages 142–149. IEEE, November 2001.
[LW87] J. R. Lyle and M. Weiser. Automatic program bug location by program slicing. In 2nd International Conference on Computers and Applications, pages 877–882. IEEE Computer Society Press, Los Alamitos, USA, 1987.
[MB02] S. J. Mellor and M. J. Balcer. Executable UML: A Foundation for Model Driven Architecture. Addison-Wesley, Reading, MA, 2002.
[MMS02] G. B. Mund, R. Mall, and S. Sarkar. An efficient program slicing technique. Information and Software Technology, 44:123–132, 2002.
[MP05] J. A. McQuillan and J. F. Power. A survey of UML-based coverage criteria for software testing. Technical report, National University of Ireland, Maynooth, Co. Kildare, Ireland, 2005.
[OA99] J. Offutt and A. Abdurazik. Generating tests from UML specifications. In Proceedings of the 2nd International Conference on the UML, Lecture Notes in Computer Science, volume 1723, pages 416–429, Fort Collins, TX, 1999. Springer-Verlag GmbH.
[OB98] T. J. Ostrand and M. J. Balcer. The category-partition method for specifying and generating functional tests. Communications of the ACM, 31(6), June 1998.
[OLAP03] J. Offutt, S. Liu, A. Abdurazik, and P. Ammann. Generating test data from state-based specifications. Software Testing Verification and Reliability, 13:25–53, 2003.
[OMG05] OMG. Unified Modeling Language Specification, Version 2.0. Object Management Group, www.omg.org, August 2005.
[PAGF03] O. Pilskalns, A. Andrews, S. Ghosh, and R. France. Rigorous testing by merging structural and behavioral UML representations. In Proceedings of the 6th International Conference on the Unified Modeling Language, volume 2863, pages 234–248, San Francisco, CA, USA, October 2003. Springer-Verlag GmbH.
[Pil05] D. Pilone. UML 2.0. O'Reilly, first edition, 2005.
[RFBLO01] D. Riehle, S. Fraleigh, D. Bucka-Lassen, and N. Omorogbe. The architecture of a UML virtual machine. In Proceedings of the 16th ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, volume 36, pages 327–341. ACM SIGPLAN Notices, ACM Press, USA, October 2001.
[RFW+04] C. Raistrick, P. Francis, J. Wright, C. Carter, and I. Wilkie. Model Driven Architecture with Executable UML. Cambridge University Press, 2004.
[RKS05] A. Rountev, S. Kagan, and J. Sawin. Coverage criteria for testing of object interactions in sequence diagrams. In Proceedings of the 8th International Conference on Fundamental Approaches to Software Engineering, LNCS, volume 3442, pages 234–248, Edinburgh, UK, April 2005. Springer-Verlag GmbH.
[SS05] S. Supavita and T. Suwannasart. Testing polymorphic interactions in UML sequence diagrams. In Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC05), volume 2, pages 449–454. IEEE Computer Society Press, April 2005.
[SvAMF99] M. Scheetz, A. von Mayrhauser, and R. France. Generating test cases from an object-oriented model with an AI planning system. In Proceedings of the 10th International Symposium on Software Reliability Engineering (ISSRE 99), pages 250–259. IEEE Computer Society Press, 1999.
[Wei82] M. Weiser. Programmers use slices when debugging. Communications of the ACM, 25(7):446–452, 1982.
[Wei84] M. Weiser. Program slicing. IEEE Transactions on Software Engineering, 10(4):352–357, 1984.
