
Using Traceability to Support Model-Based Regression Testing

Leila Naslavsky
Advisor: Debra J. Richardson
Donald Bren School of Information and Computer Sciences
University of California, Irvine
Irvine, CA 92697-3425 USA

ABSTRACT
Model-driven development is leading to increased use of models in conjunction with source code in software testing. Model-based testing, however, introduces new challenges for testing activities, among them the creation and maintenance of traceability information among test-related artifacts. Traceability is required to support activities such as selective regression testing. In fact, most automated model-based testing approaches concentrate on test generation and execution, while support for other activities, such as model-based selective regression testing, coverage analysis, and behavioral result evaluation, is limited.

To address this problem, we propose a solution that uses model transformation to create a traceable infrastructure of test-related artifacts. We use this infrastructure to support model-based selective regression testing.

Categories and Subject Descriptors
D.2.5 [Testing and Debugging]: Testing tools (e.g., data generators, coverage testing)

General Terms
Measurement, Documentation, Design.

Keywords
Model-driven development, model-based testing, traceability.

1. INTRODUCTION
The increasing complexity of software systems has resulted in the widespread adoption of Model-Driven Development (MDD). MDD facilitates development by adding a new level of abstraction to software artifacts. However, it imposes a new requirement on testing activities: demonstrating compliance of the implementation with the models. Model-Based Testing (MBT) uses these high-level artifacts as the basis for test generation. It is regarded as complementary to code-based testing (CBT): while CBT approaches test the implementation, MBT approaches test the implementation against expectations about the system (in the form of models). Therefore, MBT often uncovers faults that CBT does not, such as missing functionality. Additionally, automated support for testing systems based on their models leverages the effort put into creating those models. MBT, however, introduces new challenges for testing activities.

One main challenge for MBT is how to create and maintain explicit fine-grained relationships among test-related artifacts. The role played by these relationships in supporting the automation of testing activities has long been recognized [5]. They are established for different purposes, including: mapping concepts from models to the implementation, which supports test script generation or execution of abstract test scripts; and relating models, test suites, and results, which supports coverage measurement and model-based selective regression testing.

Regression testing is “one necessary but expensive maintenance task” [14]. Selective regression testing aims at reducing regression testing effort by selecting a subset of the test cases to be re-run while obtaining a similar outcome. The granularity of the relationships used to support selective regression testing influences selection precision: the more fine-grained they are, the more precise the selection can be. For instance, consider a test suite created to exercise n paths of a program P's control-flow graph (cfg), where nodes in the cfg represent statements in the program. Suppose P changes to P', modifying nodes in its cfg in such a way that only k paths derived from the cfg might be impacted. Accordingly, only the test cases that exercise those k paths need to be re-run to check whether the change introduced new faults into the system. Locating these test cases, however, requires fine-grained relationships from nodes (as end points) in the cfg to test cases [13]. Applying this idea to models instead of code requires fine-grained relationships from concepts in models to test cases.
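As an illustration of this selection step, here is a minimal sketch in Python, assuming each test case's path is recorded as a set of cfg node ids (the function and data names are hypothetical, not part of any actual tool):

# Select only the test cases whose recorded cfg paths traverse a changed node.
def select_tests(paths_by_test, changed_nodes):
    return {test for test, nodes in paths_by_test.items()
            if nodes & changed_nodes}

paths = {"t1": {"n1", "n2", "n12"},
         "t2": {"n1", "n3", "n7"},
         "t3": {"n1", "n2", "n12", "n15"}}
print(select_tests(paths, {"n12"}))  # -> {'t1', 't3'}; t2 need not be re-run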

Regression testing is part of the traditional testing activities, which also comprise test case generation, test execution, result evaluation, and coverage analysis. While automated code-based regression-testing support has been studied extensively and is provided by many tools, there is little automated support for, or study of the feasibility of, model-based regression testing. In fact, most automated MBT approaches concentrate on test case generation and execution, while limited support is available for other activities [11]. With MDD, changes are made to the models, so the result of comparing different models, as opposed to different implementations, should be used as the basis for regression test selection. Accordingly, this work investigates model-based selective regression testing.

In this paper, we describe an approach to model-based testing that uses model transformation techniques [10] to create an infrastructure comprising model-based testing artifacts and fine-grained relationships among them. In addition to test generation, the proposed approach supports model-based regression testing by leveraging the fine-grained relationships established during the test generation process. We plan to use this approach to investigate the feasibility of using such relationships to support model-based selective regression testing.

Copyright is held by the author/owner(s). ASE'07, November 5–9, 2007, Atlanta, Georgia, USA. ACM 978-1-59593-882-4/07/0011.





2. HYPOTHESIS
The availability of more automated model-based testing solutions can stimulate the use of models by leveraging the effort put into creating them. When changes happen, selective regression testing plays an important role in reducing the chance that newly introduced faults go undetected. Given the importance of this activity, automated testing approaches should improve their support for selective regression testing.

Relationships that support model-based selective regression testing relate models to the test cases derived from them, and support identification of the test cases to be re-run. The level of granularity of their end points influences selection precision (fine-grained relationships increase precision). Previous work [11] reinforces the observation [5] that relationships play an important role in supporting model-based testing activities. A key insight behind our approach is that the information concerning the relationships used to support model-based regression testing is embedded in the test case generation strategy. Consequently, if these relationships were made explicit and persistent during the test case generation process, the information embedded in the generation strategy could be leveraged to support other testing activities. We therefore propose an approach based on the hypothesis that when fine-grained relationships among test cases and their originating models are made explicit and persistent, they can be used to effectively support model-based selective regression testing.

Another insight behind our approach is that test case generation can be supported with model transformation. Moreover, model transformation solutions address the traceability problem by creating relationships among transformed artifacts as part of the transformation process [10], and they support traceability at different levels of abstraction. Thus, we can leverage their traceability capabilities to make the relationships used (and required) by testing activities explicit and persistent. Regarding the adopted behavioral models, both scenario-based and state-based specification languages have emerged as important perspectives. A fair amount of academic research explores state-based specification testing; these languages often have a degree of formalism that reduces the likelihood of adoption by practitioners when compared to scenarios. In fact, practitioners have adopted scenarios as a means to express system requirements and specifications [16]. UML sequence diagrams are scenarios at the design level of abstraction, and for this reason we adopt them as the basis for our automated testing approach.
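To illustrate the idea, here is a minimal sketch, in Python rather than a transformation language, of a transformation step that records a trace link for every element it maps; the class and function names are hypothetical, and the actual infrastructure relies on ATL's traceability support [10]:

from dataclasses import dataclass, field

@dataclass
class TraceModel:
    # Each link relates a source model element to the element derived from it.
    links: list = field(default_factory=list)

    def link(self, source_id, target_id):
        self.links.append((source_id, target_id))

def message_to_node(message_id, trace):
    # Map one sequence-diagram message to one control-flow node and
    # persist the fine-grained relationship as part of the transformation.
    node_id = "node_" + message_id
    trace.link(message_id, node_id)
    return node_id

trace_uml_to_mbcfg = TraceModel()
message_to_node("startSession", trace_uml_to_mbcfg)
print(trace_uml_to_mbcfg.links)  # -> [('startSession', 'node_startSession')]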

2.1 Approach Overview
Our approach provides testers with an alternative (model-based) method that supports selective regression testing. It leverages model transformation traceability techniques to create fine-grained relationships among model-based testing artifacts, and it uses the resulting infrastructure to support model-based regression testing. The approach compares different versions of models, as opposed to different versions of implementations, and uses the resulting comparison as the basis for selecting the test cases to be regression tested. We expect it to be at least as cost-effective as available code-based selective regression testing approaches [13] when changes to code are made to reflect changes to models. We also expect it to be more cost-effective than model-based selective regression testing approaches that only consider explicit coarse-grained relationships [4].

We build upon model transformation techniques to support model-based test generation and traceability of test-related artifacts. The traceability models comprise fine-grained relationships, and they are created during the test generation process (overview in figure 1). The approach ultimately generates a test generation hierarchy [15] from the sequence and class diagrams of a UML model. We adopt a test strategy [3] that consists of: (a) deriving a control flow model from the sequence diagram; (b) developing a set of paths that achieves a particular coverage criterion over the control flow model (e.g., branch coverage or path coverage); and (c) identifying inputs that cause the paths in the set to be taken. Thus, each path derived from the sequence diagram partitions the valid input set into test case data inputs. We implement (a) and (b) as model transformations; these are steps A and B in figure 1. Input identification is a complementary task that is part of step B.
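As a concrete illustration of step (b), the following Python sketch enumerates paths over a control flow model until every edge is exercised; the graph encoding is hypothetical, and loop iterations are bounded so that enumeration terminates:

def paths_for_edge_coverage(edges, start, end, max_visits=2):
    # Collect paths, keeping one only when it covers a not-yet-covered edge;
    # bounded revisits (max_visits) keep loops from expanding forever.
    covered, selected = set(), []

    def walk(node, path):
        if node == end:
            new = {(path[i], path[i + 1]) for i in range(len(path) - 1)}
            if new - covered:
                covered.update(new)
                selected.append(path)
            return
        for succ in edges.get(node, []):
            if path.count(succ) < max_visits:
                walk(succ, path + [succ])

    walk(start, [start])
    return selected

cfg = {"n1": ["n2", "n3"], "n2": ["n4"], "n3": ["n4"], "n4": []}
print(paths_for_edge_coverage(cfg, "n1", "n4"))
# -> [['n1', 'n2', 'n4'], ['n1', 'n3', 'n4']]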

In figure 1, step A is a model transformation that creates a model-based control flow graph model (mbcfgV1) from a UML model (umlV1), where one sequence diagram becomes one model-based control flow graph. It also creates a traceability model with fine-grained relationships (traceUmlToMbcfg). Step B then uses both the UML and the model-based control flow graph models to generate a test generation hierarchy model (tghV1). The first level of the hierarchy tghV1 describes the possible paths for each sequence diagram in the UML model. Concrete test case values are then generated; these are values that, when assigned to the parameters of a sequence diagram, trigger each path. This task considers each possible path in the hierarchy and identifies the constraints on the sequence diagram's parameters responsible for triggering that path. It then applies data selection strategies to partition the parameters' input domains. A solution that satisfies the conjunction of these constraints is one test case input.
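The following sketch restates input identification under simplifying assumptions: guards along one path are modeled as Python predicates, and a naive search over a small domain stands in for a constraint solver (the scenario, parameter, and values are hypothetical):

def solve(constraints, domain):
    # Return one value satisfying the conjunction of all constraints, or None.
    return next((v for v in domain if all(c(v) for c in constraints)), None)

# Guards collected along one path of a hypothetical "withdraw" scenario:
# the path is taken when 0 < amount <= balance.
balance = 100
path_constraints = [lambda amount: amount > 0,
                    lambda amount: amount <= balance]
print(solve(path_constraints, range(-50, 200)))  # -> 1, one valid input for this path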

Figure 1 - Approach overview: creating the infrastructure.

2.2 Model-Based Regression Testing
Our approach creates an underlying infrastructure composed of test-related models (the umlV1, mbcfgV1, and tghV1 models) and fine-grained relationships (the traceUmlToMbcfg and traceMbcfgToTgh models). This infrastructure supports model-based regression testing (figure 2). It works as follows: (1) comparison of two versions of a UML model (umlV1 and umlV2), and generation of a model describing the changes (diffUmls); (2) identification of the subset of elements in the model-based control flow graph model (mbcfgV1') not impacted by the changes; (3) creation of a new version of the model-based control flow graph (mbcfgV2); (4) comparison of the two versions of the model-based control flow graph model (mbcfgV1 and mbcfgV2), and generation of a model describing the changes (diffMbcfgs); (5) test case selection.
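A minimal sketch of this flow, assuming models are represented as sets of element ids, a diff is a symmetric set difference, and trace models are id-to-id mappings (all simplifications; the real artifacts are full models):

def diff(model_v1, model_v2):
    # Steps (1)/(4): the elements added or removed between two versions.
    return model_v1 ^ model_v2

def select_impacted_tests(mbcfg_v1, mbcfg_v2, trace_mbcfg_to_tgh, tests_by_path):
    changed_nodes = diff(mbcfg_v1, mbcfg_v2)                        # step (4)
    impacted_paths = set().union(*(trace_mbcfg_to_tgh.get(n, set())
                                   for n in changed_nodes))         # trace lookup
    return set().union(*(tests_by_path.get(p, set())
                         for p in impacted_paths))                  # step (5)

print(select_impacted_tests(
    {"n11", "n12"}, {"n11", "n12a"},                 # n12 replaced by n12a
    {"n12": {"path1", "path5"}},
    {"path1": {"tc7"}, "path5": {"tc21"}}))          # -> {'tc7', 'tc21'}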


Step (2) reduces the effort of transforming UML into model-based control flow graph models because it avoids processing non-impacted elements. Step (3) builds the new version from: (1) elements from the non-impacted subset (mbcfgV1'), and (2) new (or removed) elements required to address the changes to the UML model. Step (5) uses the traceability model (traceMbcfgToTgh) to identify the paths in the test generation hierarchy (tghV1) model impacted by the modifications to the model-based control flow graph model. Test case values under a path in the test generation hierarchy result from partitioning the input space of the parameters that trigger that path. Therefore, the test cases under impacted paths should be regression tested.
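Restated with the same set-based simplifications as above, steps (2) and (3) amount to reusing the unimpacted subset and adding only the newly transformed elements (again an illustrative sketch, not the actual transformation):

def build_mbcfg_v2(mbcfg_v1, impacted, new_elements):
    unimpacted = mbcfg_v1 - impacted       # step (2): skip re-processing these
    return unimpacted | new_elements       # step (3): add elements for the changes

print(build_mbcfg_v2({"n11", "n12", "n14"}, {"n12"}, {"n12a", "n12b", "n12c"}))
# -> {'n11', 'n14', 'n12a', 'n12b', 'n12c'}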

3. EXAMPLE
In this section, we demonstrate support for model-based regression testing with a simple ATM example.

Figure 2 - Sequence diagram "start session".

Figure 3 depicts two versions of a sequence diagram (umlV1 and umlV2) with their corresponding model-based control flow graphs (mbcfgV1 and mbcfgV2). From umlV1 and mbcfgV1, 6 paths and 37 test cases were derived. In umlV2, a new message (logSessionAccount) was added. We compared mbcfgV1 and mbcfgV2, and we used the resulting model to locate the differences (nodes n12a, n12b, and n12c, highlighted in mbcfgV2). This change modifies the paths containing node n12 and should therefore impact the test cases in the test generation hierarchy model under those path partitions. We used the traceability model (traceMbcfgToTgh in figure 1) to locate node n12 in the test generation hierarchy model. This node is under paths #1 (n1, n2, n3, n6, n9, n11, n12, n14, n25) and #5 (n1, n2, n3, n6, n9, n11, n12, n14, n15-n18, n15-n18, n15-n18, n25). Therefore, the test cases under those two paths in the hierarchy should be regression tested.

This information could be acquired because we used the fine-grained relationships between test case partitions and the nodes that influenced each partition's creation. If fine-grained relationships were not made explicit and persistent, as they are in our approach, they would remain embedded in the test generation algorithm. As a result, identifying a precise subset of test cases to be regression tested would be a more labor-intensive task. Additionally, given the same change and only coarse-grained relationships between sequence diagrams and derived test cases, we would identify all 37 test cases to be regression tested.
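In terms of the sketches above, the example's lookup reduces to the following (the path contents are taken from the text; the data structures remain illustrative):

trace_mbcfg_to_tgh = {"n12": {"path1", "path5"}}
paths_in_tgh = {
    # Node sequences each impacted path covers, as listed in the text.
    "path1": ["n1", "n2", "n3", "n6", "n9", "n11", "n12", "n14", "n25"],
    "path5": ["n1", "n2", "n3", "n6", "n9", "n11", "n12", "n14",
              "n15-n18", "n15-n18", "n15-n18", "n25"],
}
impacted_paths = trace_mbcfg_to_tgh["n12"]   # -> {'path1', 'path5'}
# Only the test cases under these two of the six derived paths are re-run;
# the test cases under the remaining four paths are safely skipped.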

4. IMPLEMENTATION PLAN
Automation of the presented approach is in progress; this section describes our efforts to date. The transformations were implemented using the ATLAS Transformation Language (ATL). We do not yet address all possible sequence diagram constructs and combinations, but we plan to. Generation of concrete test data, which is part of the test generation hierarchy, is under development. In addition, we are customizing an Eclipse plug-in that supports model comparison; the customization aims at improving the semantics of identifying changes (differences) between models. The next steps in the implementation plan include automating test selection based on the identified differences among models.


5. EVALUATION PLAN
We plan to conduct comparative studies between our approach and other selective regression testing approaches. First, we will compare it to code-based approaches [13] when changes to code are made to reflect changes to models. Second, we will compare it to model-based approaches that only consider explicit coarse-grained relationships [4]. We assume the cost of result evaluation to be the same whether performing model-based or code-based regression testing. Therefore, we will consider our approach successful if it (1) reduces the test suite, (2) maintains or reduces the cost of selecting the test suite subset, and at the same time (3) maintains (or increases) fault detection effectiveness. We plan to use one small and one medium-size project (such as Aqualush [1]). We expect a small project to have a smaller test suite, so re-running all test cases may well be more cost-effective. The medium-size project, on the other hand, should have more test cases, so the combined cost of selecting and re-running the subset could be lower than re-running them all.


6. RELATED WORK
Many automated model-based testing approaches exploit implicit and/or explicit relationships. Implicit relationships are embedded in a tool's algorithms and models; explicit relationships are created either manually or automatically. Some approaches use implicit relationships to support test generation, execution, and evaluation [7, 12, 17], while others use implicit relationships to support regression testing [4].


Additional approaches use explicit relationships to support test generation [2], test execution and evaluation [8, 9], or coverage analysis [9].


In the COWtest pluS UIT Environment (COW_SUITE) [2], relationships among UML models are made explicit when the model is loaded into the tool. The Automated Generation and Execution of test suites for DIstributed component-based Software (AGEDIS) [9] tool uses explicit relationships to execute and evaluate test scripts. The Abstract State Machine Language (AsmL) tooling [8] also uses explicit relationships to execute and evaluate abstract test scripts; it supports parallel execution of the model and its implementation.


Like these approaches, we propose the use of explicit relationships among testing artifacts. Unlike them, we externalize fine-grained relationships from the models to a test case generation hierarchy, which we believe will support cost-effective selective regression testing.


Our approach adopts UML sequence diagrams as its behavioral model, as do Sequence Diagram Test Center (SeDiTeC) [7], SCENTOR [17], COW_SUITE, and Testing Object-orienTed systEMs with the unified Modeling language (TOTEM) [4].


SeDiTeC executes sequence diagrams to test integration among objects. The user manually creates concrete parameters and expected results; result evaluation is based on source code instrumentation that logs the observed method calls, concrete parameters, and results. SCENTOR extends JUnit to generate concrete test scripts. The user likewise specifies methods' concrete parameter values and expected results, while the assumption of a direct mapping between sequence diagrams and the implementation supports generation of concrete test scripts. COW_SUITE supports test case generation based on category partition. It analyzes sequence diagrams and interacts with users to gather choice values, and constraints for contradictory choice combinations.


In [6], a graph that integrates class and sequence diagram information is used as the basis for test case generation. This approach adopts symbolic execution to derive test input constraints from the graph and solves these constraints with the Alloy constraint solver.


While SCENTOR and SeDiTeC address generating and executing concrete test scripts, COW_SUITE manages test cases and their prioritization; emphasis is not given to automatic creation of test input data. Regarding test generation, our approach is closest to [6], since it also proposes the use of constraint solvers to discover test input data satisfying a conjunction of constraints obtained from a UML model. TOTEM supports test generation, and an extension of this work supports regression testing based on UML diagrams and the relationships among them. Unlike our approach, TOTEM uses coarse-grained and implicit relationships. Experiments with the tool showed that the precision of regression test selection was limited and that further investigation exploring guard conditions was needed.


7. REFERENCES
[1] Aqualush, https://users.cs.jmu.edu/foxcj/Public/ISED/index.htm.
[2] Basanieri, F., Bertolino, A., Marchetti, E., The Cow_Suite Approach to Planning and Deriving Test Suites in UML Projects, 5th International Conference on The Unified Modeling Language, 2002.
[3] Binder, R. V., Round-trip Scenario Test, Testing Object-Oriented Systems - Models, Patterns, and Tools, Addison-Wesley, 2000, pp. 579-591.
[4] Briand, L. C., Labiche, Y., Soccar, G., Automating Impact Analysis and Regression Test Selection Based on UML Designs, International Conference on Software Maintenance (ICSM), 2002.
[5] Dick, J., Faivre, A., Automating the Generation and Sequencing of Test Cases from Model-Based Specifications, 1st International Symposium of Formal Methods Europe on Industrial-Strength Formal Methods, 1993.
[6] Dinh-Trong, T. T., Ghosh, S., France, R. B., A Systematic Approach to Generate Inputs to Test UML Design Models, 17th International Symposium on Software Reliability Engineering (ISSRE'06), 2006.
[7] Fraikin, F., Leonhardt, T., SeDiTeC - Testing Based on Sequence Diagrams, 17th IEEE International Conference on Automated Software Engineering, 2002.
[8] Grieskamp, W., Nachmanson, L., Tillmann, N., Veanes, M., Test Case Generation from AsmL Specifications - Tool Overview, 10th International Workshop on Abstract State Machines, 2003.
[9] Hartman, A., Nagin, K., The AGEDIS Tools for Model Based Testing, ACM SIGSOFT International Symposium on Software Testing and Analysis, 2004.
[10] Jouault, F., Loosely Coupled Traceability for ATL, European Conference on Model Driven Architecture Workshop on Traceability, 2005.
[11] Naslavsky, L., Ziv, H., Richardson, D. J., Scenario-based and State Machine-based Testing: An Evaluation of Automated Approaches, ISR, University of California, Irvine, 2006.
[12] Nebut, C., Fleurey, F., Traon, Y. L., Jézéquel, J., Automatic Test Generation: A Use Case Driven Approach, IEEE Transactions on Software Engineering, 32 (2006), pp. 140-155.
[13] Orso, A., Shi, N., Harrold, M. J., Scaling Regression Testing to Large Software Systems, 12th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE 2004), 2004.
[14] Rothermel, G., Harrold, M. J., Analyzing Regression Test Selection Techniques, IEEE Transactions on Software Engineering, 1996, pp. 529-551.
[15] Stocks, P., Carrington, D., A Framework for Specification-Based Testing, IEEE Transactions on Software Engineering, 22 (1996), pp. 777-793.
[16] Weidenhaupt, K., Pohl, K., Jarke, M., Haumer, P., Scenarios in System Development: Current Practice, IEEE Software, 1998.
[17] Wittevrongel, J., Maurer, F., SCENTOR: Scenario-Based Testing of E-Business Applications, Tenth IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, 2001.
