A Generic Platform for Model-Based Regression Testing

Philipp Zech, Michael Felderer, Philipp Kalb, and Ruth Breu

Institute of Computer Science, University of Innsbruck, Austria
{philipp.zech,michael.felderer,philipp.kalb,ruth.breu}@uibk.ac.at

Abstract. Model-based testing has gained widespread acceptance in the last few years. Models enable the platform-independent analysis and design of tests in an early phase of software development, resulting in effort reduction in terms of time and money. Furthermore, test models are easier to maintain than test code when software systems evolve, due to their platform independence and traceability support. Nevertheless, most regression testing approaches, which ensure that system evolution does not introduce unintended effects, are solely code-based. Additionally, many model-based testing approaches do not consider regression testing when applied in practice, mainly due to the lack of appropriate tool support. Therefore, in this paper we present a generic tool platform for model-based regression testing based on the model versioning and evolution framework MoVE. Our approach enhances existing model-based testing approaches with regression testing capabilities, aiming at better tool support for model-based regression testing. In a case study, we apply our platform to the model-based testing approaches UML Testing Profile and Telling TestStories.

1 Introduction

In recent years, model-based testing has found its way into practice and is still an active area of research [1, 2]. Model-based testing (MBT) applies model-based design to the modeling of test artifacts and/or the automation of test activities. MBT has several advantages, like the abstractness of test cases, the early detection of faults, and the high level of automation, that justify the additional effort of test model design and maintenance. However, when considering existing model-based testing approaches in terms of providing a complete testing process, most of them lack one important feature, namely tool support for regression testing [1]. Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended side effects and that the system or component still complies with its specified requirements [3].

Under consideration of the modeling effort, model-based regression test selection has several advantages over test selection on the code level [4]: the effort for testing can be estimated earlier, tools for regression testing can be largely technology independent, the management of traceability and test automation at the model level is more practical, no complex static and dynamic code analysis is required, and models are smaller than the corresponding modifiable code elements because they are more abstract.


A potential solution to the lack of model-based regression testing support is provided by the Model Versioning and Evolution (MoVE) platform [5]. MoVE provides a platform for versioning various software models based on the well-known Subversion versioning system. The versioning procedure of MoVE is based on state machines attached to model elements. When a new version of a model is committed, a change set of modified model elements is identified, which triggers change events processed by the attached state machines to calculate a new consistent version of the model in the repository. MoVE is based on the generic machine-readable exchange format XML Metadata Interchange (XMI) and is not tailored to any specific type of model representation. Thus, due to its change management, model versioning and model representation capabilities, MoVE provides a promising platform for generic model-based regression testing.

In this paper we present an approach for generic model-based regression testing based on the MoVE platform. The approach provides regression testing support for arbitrary XMI-based model representations and can be parameterized with different change identification, impact analysis, and regression test selection strategies defined in the Object Constraint Language (OCL). Additionally, the approach guarantees full traceability between the various artifacts for efficient fault detection, and assures model consistency and validity, and hence also test suite validity, due to the change management capabilities of MoVE. To show the applicability of our generic model-based regression testing platform we apply it to regression testing of two independent model-based testing approaches, namely the UML Testing Profile (UTP) [6] and Telling TestStories (TTS) [7].

The remainder of this paper is structured as follows. Section 2 positions our approach with respect to related work. Section 3 introduces the technologies underlying the platform and its case study, and Section 4 introduces our generic model-based regression testing platform and its implementation. We then provide a case study applying our approach to regression testing of two different model-based testing approaches, i.e. UTP and TTS, in Section 5, and finally conclude in Section 6.

2 Related Work

Rerunning every test after each modification is not feasible; thus a trade-off has to be found between the confidence gained from regression testing and the resources used for it. For this reason, several regression testing techniques for test case minimization, prioritization and selection have been proposed over the years [8]. Most regression testing approaches operate on the source code [9], although model-based regression testing has several advantages, as mentioned in the introduction [4]. Most model-based regression testing approaches are built on UML and select tests based on change identification and impact analysis [10]. Our platform provides tool support for this important class of model-based regression testing techniques. UML-based system models typically consider class models and a specific type of UML behavior model, such as state machines in Farooq et al. [11], sequence diagrams in Briand et al. [4] or activity diagrams in Chen et al. [12]. Some of the proposed UML-based regression testing approaches have quite mature but very specific tool implementations, like START (STAte-based Regression testing Tool) [11] or RTSTool (Regression Test Selection Tool) [4], which do not provide a generic platform (as MoVE does) applicable to other model-based testing approaches.

Although some industrial model-based testing tools are available and applied in practice, advanced model-based regression testing is still not supported adequately by these tools. A generic model-based regression testing platform like MoVE provides additional support for model-based regression testing. For code-based regression testing, several industrial and research tools are available. Industrial regression testing tools and platforms used nowadays, both commercial, e.g. [13], and open source (see [14] for an overview), usually focus only on the automatic execution of tests, the collection of results, and the creation of test reports. These tools typically do not apply the advanced regression testing techniques supported on the model level. More advanced academic regression testing tools [15, 16] are very specific and only available on the code level. For instance, TestTube [15] has been developed for selective retesting of C programs. It instruments the source code to capture which part of the system is covered by each test, and then computes which tests are needed for a given modification.

Besides MoVE, other platforms have recently been developed for model versioning [17–19]. Differing from MoVE, these platforms focus on a single modeling tool and have not yet been applied to model-based regression testing. The Eclipse Modeling Project provides two solutions for model persistence [20, 21]. These approaches do not provide the model versioning support which is needed for model-based regression testing and provided by our approach.

3 Building Blocks

In this section we give an overview of the model versioning framework MoVE, which our model-based regression testing approach is based on. Besides, we also introduce the two model-based testing approaches, i.e. the UML Testing Profile and Telling TestStories, which our model-based regression testing approach is applied to later in a case study (see Section 5).

3.1 MoVE - Model Versioning and Evolution

MoVE is a model repository supporting versioning of arbitrary models. In the MoVE context we do not only consider models as in UML, but also other models, e.g. an Excel spreadsheet, which can be interpreted as an instance in tabular representation of a previously specified metamodel. Modeling tools can be integrated with MoVE using MoVE adapters. Adapters consist of two parts: the server-side part is responsible for providing the data in a readable format to MoVE; the client-side adapter integrates into the modeling tool, using the tool's Application Programming Interface (API), and provides methods for the modeling tool to communicate with the MoVE server. A minimal requirement for the tool's API is the possibility to access the data stored in the tool and to call an external script or process. Both are standard features of a tool API.
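To make the adapter contract concrete, the following minimal sketch shows what a client-side adapter interface could look like; the type and method names are our illustrative assumptions, not the actual MoVE API.

  import org.eclipse.emf.ecore.resource.Resource;

  // Hypothetical client-side adapter contract, as described above.
  interface MoveClientAdapter {

      // Export the tool's current model in a format the MoVE server can
      // read (MoVE exchanges models as XMI).
      Resource exportWorkingModel();

      // Push the current model version to the MoVE server as a new commit.
      void commit(String message);

      // Replace the tool's model with the given version from the repository.
      void checkout(long revision);
  }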


MoVE supports a change-driven process as described by Breu [22]. A change-driven process combines three aspects: change propagation, states, and support of state machines. Fig. 1 shows the change-driven process in MoVE: on every commit the MoVE repository calculates the changes of the new version of the model with respect to the previous version and generates change events for each change. MoVE provides an API to develop plugins and to register each plugin for a certain type of event. Each change event is sent to the registered plugin(s), which may trigger further change events and alter the model.

[Figure 1 depicts the change-driven process: committing a Working Model to the repository triggers the Change Detection component, which compares it with the Base Model and emits change events such as "Class A1 changed", "State of A1 changed", and "Class A2 added". The Plugin Registry dispatches these events to registered plug-ins, among them MetaModel Evolution, TestCalculation, and State Machine.]

Fig. 1. Change-Driven Process in MoVE
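A server-side plugin reacting to such change events could, for illustration, look as follows; all names are assumptions, since the paper does not show the actual MoVE plugin API.

  // Hypothetical sketch of the change-event dispatch of Fig. 1.
  enum ChangeKind { ADDED, CHANGED, DELETED, MOVED }

  interface ChangeEvent {
      String elementId();   // id of the affected model element
      ChangeKind kind();    // what happened to it
  }

  interface MovePlugin {
      // Invoked by the plugin registry for every event type the plugin
      // registered for; may alter the model and trigger further events.
      void handle(ChangeEvent event);
  }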

In the MoVE context every model element can have a state machine attached, which is defined in the common metamodel. The state machine can define transitions between states of the model element and also actions that are triggered when a state is reached. This use of state machines allows us to define behavior not only on the model element under focus but also on different model elements which are related to the current model element. A task system enriches the state machines with user interaction. The change-driven process is used to identify state changes. In case a state change occurs, the state machines are used to check the correctness of the state change and to derive possible actions belonging to the state change. As part of several industrial projects, the MoVE approach is currently being evaluated and enhanced for large models.
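A minimal sketch of such an attached state machine, reusing the hypothetical ChangeKind type from the plugin sketch above (the states and transition rules are our assumptions):

  enum TestState { VALID, SUSPECT, INVALID }

  // Illustrative state machine attached to a model element; reaching a
  // state may trigger actions on related elements (e.g. marking linked
  // test cases as suspect).
  final class ElementStateMachine {
      private TestState state = TestState.VALID;

      TestState onChange(ChangeKind kind) {
          switch (kind) {
              case DELETED: state = TestState.INVALID; break;
              case CHANGED:
              case MOVED:   state = TestState.SUSPECT; break;
              default:      break; // ADDED: element stays VALID
          }
          return state;
      }
  }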

3.2 Model-Based Testing Approaches

In this section we describe two current model-based testing approaches, i.e. the UML Testing Profile and Telling TestStories.

UML Testing Profile. The UML Testing Profile (UTP) [23] provides concepts to develop test specifications and test models for black-box testing. UTP has been standardized by the OMG, and mapping rules to the executable test definition languages TTCN-3 and JUnit have been defined. The profile introduces four concept groups for Test Architecture, Test Behavior, Test Data and Time. The concepts of the test architecture are related to the structure and the configuration of tests, each consisting of test components and a test context. Test components interact with each other and the SUT to realize the test behavior. The test context encapsulates the SUT and a set of tests as well as the necessary arbiter and scheduler interfaces for verdict generation and for controlling test execution, respectively. The composite structure of the test context is referred to as the test configuration. The concepts of the test behavior specify a test in terms of sequences, alternatives, loops, stimuli, and observations from the SUT. During execution a test verdict is returned to the arbiter. The arbiter assesses the correctness of the SUT and finally sets the verdict of the whole test. The test data is supplied via so-called data pools, either in the form of data partitions (equivalence classes) or as explicit values. The test data is used in stimuli and observations of a test. A stimulus represents the test data sent to the SUT in order to assess its reaction. The concepts of test time are related to time constraints and observations within a test specification. A timer controls the test execution and reacts to start and stop requests as well as timeout events. In Baker et al. [24] all UTP concepts and their meaning are explained in detail.

Telling TestStories. Telling TestStories (TTS) [7] is a model-based methodology for the requirements-driven system testing of service-centric systems. TTS is based on tightly integrated, yet separated, platform-independent requirements, system and test models annotated with a UML profile. The requirements model is based on a hierarchy of functional and non-functional requirements, attachable to test cases. The system model describes the system structure and system behavior in a platform-independent way. Its static structure is based on the notions of services, components and types. The test model contains the test scenarios as so-called test stories. Test stories are controlled sequences of service operation invocations exemplifying the interaction of components. The necessary test data is provided in a table-based manner for each test story. The manual test design process is supported by validation and coverage checks in and between the requirements, system and test model, guaranteeing a high quality of the models. TTS is capable of test-driven development on the model level and provides full traceability between all system and testing artifacts. The test stories are transformed to executable test code in Java invoking running services via adapters which are automatically generated from WSDL files. Felderer et al. [25] proposed a test evolution management methodology for TTS attaching state machines to model elements and propagating changes. Based on the actual state of model elements, regression tests are selected. However, the proposed approach has not been implemented so far; it can now be implemented on top of our generic regression testing platform.

4 Model-Based Regression Testing Platform

In this section we lay out our idea of a generic platform for model-based regression testing. Besides discussing the theoretical foundations of our approach, we also present an implementation of the framework based on the Eclipse platform.

4.1 A Generic Model-Based Regression Testing Approach

When developing a generic approach for model-based regression testing, the primary issue to overcome is to provide support not only for a certain type of model but for a broad range of different types of (meta)models. This requirement simply follows from the broad diversity of existing model-driven and model-based testing approaches [1], which use different types of models and metamodels.

[Figure 2 illustrates the approach on an example: the System Model (base copy), containing the classes C1-C6, is compared with the System Model (working copy) in the delta calculation, yielding the delta {C5, C6}; the delta expansion enlarges it to the expanded delta {C3, C4, C5, C6}; the test set generation then selects the associated test cases (activities TC3-TC6) from the Test Model. Three OCL queries annotate the figure:

SUT scope (below the base model):
  context Class:
    self.base_Property.owner.getAppliedStereotype('Test::SUT') <> null;

Delta expansion strategy (right margin):
  context Class:
    self.base_Property.owner->asSequence()
      ->union(self.base_Property.ownedElements);

Test case scope (bottom, at the test model):
  context Activity:
    self.base_Property.owner.getAppliedStereotype('Test::Testcase') <> null;]

Fig. 2. Overview of the Generic Model-based Regression Testing Approach

Fig. 2 gives an overview of our idea of model-based regression testing. We start by comparing two different versions of the same model, i.e. the Base Model (initial development model) and the Working Model (current development model), and calculate a delta from them, the so-called change set, containing the differences between the two model versions. As a next step, a regression test selection strategy is used to expand the delta by including additional elements from the SUT model. Finally, with the given expanded delta, a new test set is derived by means of selection. To achieve the necessary level of genericity, our approach itself is agnostic of any concrete model; by allowing the calculations in each step to be customized and constrained by means of OCL queries, we nevertheless support a broad range of existing models. In the following, the three tasks, namely delta calculation, delta expansion and test set generation, are discussed in more detail.
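The three tasks can be pictured as a small pipeline parameterized by three OCL strategy strings; the following Java skeleton is illustrative only, with all names being our assumptions:

  import java.util.List;
  import java.util.Map;
  import java.util.Set;
  import org.eclipse.emf.ecore.EObject;

  // Illustrative skeleton of the three-phase workflow of Fig. 2.
  interface RegressionWorkflow {

      // Phase 1: compare base and working model, restrict the raw diff
      // to the SUT scope defined by the delta calculation strategy.
      List<EObject> calculateDelta(EObject baseModel, EObject workingModel,
                                   String deltaCalculationOcl);

      // Phase 2: expand the sanitized delta, e.g. along links, as defined
      // by the delta expansion strategy.
      Set<EObject> expandDelta(List<EObject> sanitizedDelta,
                               String deltaExpansionOcl);

      // Phase 3: select the test cases associated with the expanded
      // delta, as defined by the test set generation strategy.
      Map<EObject, Set<EObject>> generateTestSet(Set<EObject> expandedDelta,
                                                 String testGenerationOcl);
  }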


Delta Calculation. The calculation of the delta (change set) is the initial task of our model-based regression testing approach. Prior to calculating the delta, however, the scope of the SUT needs to be defined. For example, if one wants to restrict the scope to elements of type Class, the OCL query shown below the Base Model in Fig. 2 achieves this task. This query assumes that each SUT element has the stereotype SUT, defined in a profile named Test, applied. It should be mentioned that, depending on whether one uses a combined SUT/test model or separate ones, OCL queries at such an early point may be omitted. With the SUT scope defined, the change set is ready to be calculated. The underlying idea is to use the notion of a left (Base Model) and a right (Working Model) model and to calculate the change set from left to right. Put another way, elements from the right model are compared to elements from the left model by their matching IDs, and changes are extracted. Newly added or deleted elements pose no difficulty: a deleted element no longer exists in the right model and is simply ignored, whereas a newly created element is automatically added to the change set, as no matching existing element can be found. In case the IDs of the model elements change, we use a backup strategy based on metrics to determine the similarity of model elements from the left and right model, respectively. Section 4.2 gives a detailed description of how this backup strategy works. In the case of Fig. 2, the change set would contain the classes C5 and C6, as they have been changed in the Working Model.

Delta Expansion. After the change set has been calculated, the chosen regression test selection strategy comes into play, as it defines in which way the delta is expanded. The initially calculated delta already represents a regression test selection strategy based on the minimal change set, viz. taking only the modified elements and nothing else into account. In most cases, however, this clearly does not suffice. Hence, we allow the expansion of the delta to be customized by means of OCL queries. For example, the OCL query depicted at the right margin of Fig. 2 expands the delta by all classes either referring to elements of the delta or referred to by elements of the delta. Subsequently, the delta would be expanded by adding C3 and C4, as both classes use one or both of C5 and C6. Section 4.2 gives a detailed description of how the expansion actually works programmatically.

Test Set Generation. As a last step, we calculate the new test set based on the expanded delta. First, as in the previous tasks, the scope of possible test cases needs to be constrained by means of an OCL-based test set generation strategy. For example, the OCL query shown at the bottom of Fig. 2 searches for possible test cases based on activity diagrams. The query assumes that each test case has a distinct stereotype Testcase applied, defined in a profile named Test. With the given set of possible test cases, in a last substep, the new test set is calculated. We evaluate associations between elements of the delta and possible test cases, i.e. we attempt to resolve the links which interconnect each element of the SUT with a given test case. If such a link exists, either from an element of the delta to a test case or vice versa (from a test case to an element of the delta), the test case is selected and added to the new test set.

The definition of the OCL queries for each of the above-mentioned steps is currently done manually; however, we are working on a library of OCL queries to be used for regression testing. When defining queries for the purpose of regression testing, a tester does not have to follow any requirements posed by our approach; only the respective model-based testing approach must be applied correctly. Hence, our approach is also completely language independent: it can deal with any kind of model and thus with any target language into which test cases are generated.

As our approach emerges from the area of model versioning rather than software testing, our terminology differs slightly from the testing terminology defined, e.g., in [26]: the delta calculation corresponds to change identification in [26], the delta expansion to impact analysis, and the test set generation to regression test selection.
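To illustrate how such an OCL strategy string could be evaluated element-wise, the following sketch uses the classic Eclipse (MDT) OCL Ecore binding; it is our illustration of the mechanism, not the platform's actual code (for UML models with stereotypes, the corresponding UML binding would be used instead):

  import java.util.ArrayList;
  import java.util.List;
  import org.eclipse.emf.ecore.EClassifier;
  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.ocl.ParserException;
  import org.eclipse.ocl.ecore.EcoreEnvironmentFactory;
  import org.eclipse.ocl.ecore.OCL;
  import org.eclipse.ocl.expressions.OCLExpression;

  final class OclStrategy {

      // Keep every element for which the boolean OCL strategy query holds,
      // e.g. the SUT-scoping query of Fig. 2.
      static List<EObject> filter(List<EObject> elements, String queryText)
              throws ParserException {
          OCL ocl = OCL.newInstance(EcoreEnvironmentFactory.INSTANCE);
          List<EObject> result = new ArrayList<EObject>();
          for (EObject element : elements) {
              OCL.Helper helper = ocl.createOCLHelper();
              // Parse the query with the element's metaclass as context.
              helper.setContext(element.eClass());
              OCLExpression<EClassifier> expr = helper.createQuery(queryText);
              if (ocl.createQuery(expr).check(element)) {
                  result.add(element);
              }
          }
          return result;
      }
  }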

4.2 A Generic Model-Based Regression Testing Implementation

In the previous section we gave a generic description of our approach. This section provides more details on our implementation and shows how the MoVE tool is used to automate our model-based regression testing approach. The implementation of our regression testing methodology has two parts, one on the client and one on the server side. On the client side we provide MoVE adapters (as described in Section 3.1), tightly integrated into modeling tools such as MagicDraw, Eclipse or Papyrus. We thus support model versioning among various tools and do not restrict our approach to a single tool. Fig. 3 shows a component-based view of the MoVE environment with the testing plugin. In this view MagicDraw is used to model tests with TTS, whereas Papyrus is used to model scenarios with UTP.

[Figure 3 shows the component architecture: the modeling tools MagicDraw (holding the TTS model) and Papyrus (holding the UTP model) are connected through the MoVE Client to the MoVE Repository, which stores the base model and hosts the Testing Plugin; the MoVE Configuration View produces the Plugin Configuration consumed by the plugin.]

Fig. 3. Architecture of the MoVE Regression Testing Plugin

On the server side, MoVE is a repository containing previous versions of the test models (see Base Model in Fig. 3). The MoVE server also provides a plugin interface which we used to write and deploy a testing plugin. The Testing Plugin is our implementation of the concepts explained in Section 4.1. The plugin can be configured with the MoVE configuration view, which creates a plugin configuration for each model or project. The plugin configuration is an XML file that consists of three parts, corresponding to the three tasks identified in Section 4.1, i.e. delta calculation, delta expansion and test set generation. For each task we define a strategy in the configuration file. Fig. 4 shows the schema of the XML file; each part contains the OCL expression used for the respective task.

[Figure 4 sketches the configuration schema: a PluginConfiguration contains a Delta Calculation Strategy, a Delta Expansion Strategy, and a Test Generation Strategy, each holding an OCL statement.]

Fig. 4. Schema of Plugin Configuration

Our testing plugin for MoVE follows the workflow shown in Fig. 2. The delta calculation consists of two subtasks: the calculation of the change set and the restriction of this set. MoVE supports difference calculation as part of the change-driven process. This calculation is based on a modified version of EMF Compare [27], which was enhanced by several small patches to improve the comparison of UML models. The result is a delta model containing all elements which were either changed, added, deleted or moved in the current version of the model compared to the base model in the MoVE repository. The delta model is very fine-grained and usually contains elements that are not relevant for regression testing. To restrict the set of delta model elements, we use the OCL expression defined in the delta calculation strategy section of the plugin configuration. The result is a sanitized delta, containing only the elements that are relevant for the regression testing strategy.

In the next step the sanitized delta is expanded. To this end, our implementation reads the delta expansion strategy from the plugin configuration and iteratively applies the OCL expression to each element of the sanitized delta. This strategy strongly depends on the regression testing method one wants to apply, and it is profile independent.

The last step is to identify the test cases associated with the elements of the expanded delta. In doing so, we read the test set generation strategy from the plugin configuration. Again, this strategy consists of an OCL expression that returns the affected test cases for the context element. This query is applied to every element of the expanded delta and returns a set of test cases. The final result of the plugin is a map that contains all elements of the expanded delta and the associated test cases.
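For illustration, the following sketch shows how the raw delta could be computed and sanitized; it uses the current EMF Compare 3.x API rather than the patched version mentioned above, and reuses the illustrative OclStrategy helper sketched in Section 4.1:

  import java.util.ArrayList;
  import java.util.List;
  import org.eclipse.emf.common.notify.Notifier;
  import org.eclipse.emf.compare.Comparison;
  import org.eclipse.emf.compare.Diff;
  import org.eclipse.emf.compare.EMFCompare;
  import org.eclipse.emf.compare.scope.DefaultComparisonScope;
  import org.eclipse.emf.compare.scope.IComparisonScope;
  import org.eclipse.emf.ecore.EObject;

  final class DeltaCalculation {

      // Compare the working model against the base model and return the
      // changed elements, restricted by the delta calculation strategy
      // (an OCL expression taken from the plugin configuration).
      static List<EObject> sanitizedDelta(Notifier working, Notifier base,
                                          String deltaCalculationOcl)
              throws Exception {
          IComparisonScope scope =
                  new DefaultComparisonScope(working, base, null);
          Comparison comparison = EMFCompare.builder().build().compare(scope);

          List<EObject> changed = new ArrayList<EObject>();
          for (Diff diff : comparison.getDifferences()) {
              EObject element = diff.getMatch().getLeft(); // working side
              if (element != null) {
                  changed.add(element);
              }
          }
          // Restrict the fine-grained delta to regression-relevant elements.
          return OclStrategy.filter(changed, deltaCalculationOcl);
      }
  }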

5 Case Study

In this section we present a case study applying our generic regression testing platform to the two model-based testing approaches UTP and TTS. The goal of the case study is to show that, regardless of which model-based testing approach is used, the regression test sets calculated by our approach do not differ for identical changes in the system model.

5.1 System Under Test

For the purpose of our case study, we use a simple calculator service. Its system model is shown in Fig. 5. The service offers five different components, i.e. AdderService, SubtractService, DivideService, MultiplyService and PowService (see Fig. 5a). Each of the service components offers a distinct calculation interface with a corresponding name (see Fig. 5b) via an implementing class, e.g. the interface IAdder is implemented by the class AdderServiceImpl, which itself is offered via the component AdderService. In the case of the PowService, its implemented interface IPow extends IMultiply. Each of the interfaces offers two operations providing the mathematical operation of the declared type name (in this case the interface) for both integer and float types, e.g. IAdder offers the operations addInt and addFloat.
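Rendered as plain Java, the interface structure of Fig. 5b roughly corresponds to the following sketch (a hedged transcription of the diagram):

  // Note that IPow extends IMultiply, which matters for the link-based
  // delta expansion applied later in this section.
  interface IAdder {
      int addInt(int a, int b);
      float addFloat(float a, float b);
  }

  interface IMultiply {
      int multiplyInt(int a, int b);
      float multiplyFloat(float a, float b);
  }

  interface IPow extends IMultiply {
      int powInt(int x, int y);
      float powFloat(float x, float y);
  }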

5.2 Application of the Model-Based Regression Testing Platform

In this section we apply our model-based regression testing platform to UTP and TTS. Due to space limitations we skip the explicit presentation of the TTS modeling fragments and only print the UTP modeling artifacts in this paper. For the interpretation of the findings, however, we consider the results achieved with both UTP and TTS. Additionally, we refer to [7] for an in-depth explanation of TTS.

Test Modeling. Fig. 5 shows the model of the SUT with the UTP-specific stereotypes applied. As one can see in Fig. 5a, each component is tagged as SUT. The interfaces (see Fig. 5b) and also the classes remain untagged, as they are inherently part of the tagged SUT components. Fig. 6 shows some of the UTP test artifacts, i.e. a test case and a test context. In the context of the UTP, test cases are often modeled using notions of UML sequence diagrams.

[Figure 5 shows (a) the component diagram with the «SUT»-stereotyped components AdderService, SubtractService, MultiplyService, DivideService, and PowService, each containing its implementation class (AdderServiceImpl, SubtractServiceImpl, MultiplyServiceImpl, DivideServiceImpl, PowServiceImpl), and (b) the interface diagram with IAdder (addInt, addFloat), ISubtract (subtractInt, subtractFloat), IMultiply (multiplyInt, multiplyFloat), IDivide (divideInt, divideFloat), and IPow (powInt, powFloat), all operations typed int or float.]

Fig. 5. SUT model with the necessary UTP Stereotypes applied


[Figure 6 shows (a) the Add Integers Test as a sequence diagram, in which the «TestContext» CalculatorUnitTestContext calls addInt(a="10", b="20") expecting "30" and addInt(a="5", b="-12") expecting "-7" on an AdderServiceImpl instance and returns the verdict pass, and (b) the «TestContext» class CalculatorUnitTestContext declaring the «TestCase» operations addIntegersTest, addFloatTest, subtractIntegerTest, subtractFloatTest, multiplyIntegerTest, multiplyFloatTest, divideIntegerTest, divideFloatTest, powIntegerTest, and powFloatTest, each returning a Verdict.]

Fig. 6. UTP Test Model Artifacts

The test case in Fig. 6a validates the proper behavior of the operation addInt of the AdderService. Fig. 6b shows the associated test context as required by the UTP. The test context itself is a collection of test cases and is the basis for the configuration and execution of tests. In our example the test context contains ten test cases, one for each of the operations defined in Fig. 5b. We skip the presentation of further UTP-specific test artifacts like the arbiter or the scheduler, both due to space restrictions and because they are hardly relevant for regression testing.
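Since UTP defines mapping rules to JUnit (cf. Section 3.2), the test case of Fig. 6a could, for instance, translate into a JUnit test along the following lines; this is our hedged illustration of such a mapping, not code generated by the platform (IAdder and AdderServiceImpl are the model's names rendered in Java):

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  // Hand-written sketch of a JUnit rendering of addIntegersTest (Fig. 6a);
  // the arbiter's pass/fail verdict corresponds to the assertion outcome.
  public class CalculatorUnitTest {

      @Test
      public void addIntegersTest() {
          IAdder adder = new AdderServiceImpl(); // SUT instance
          assertEquals(30, adder.addInt(10, 20));
          assertEquals(-7, adder.addInt(5, -12));
      }
  }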

Changing the System Model. We show the derivation of a regression test set by means of expanding the initial delta after a system change. Section 4.1 introduced the approach in general terms; in this section we explain it by example and interpret the results. The system model shown in Fig. 7 has been changed compared to Fig. 5 by adapting the return types of the operations divideInt, multiplyInt, and multiplyFloat.

[Figure 7 repeats the SUT model of Fig. 5 with a changed interface diagram: divideInt now returns float, multiplyInt returns long, and multiplyFloat returns double.]

Fig. 7. Changed SUT Model with Applied UTP Stereotypes

Now, with the changed system model, the initial change set is calculated (delta calculation) with MoVE. As described before, this set contains the model elements of the SUT model with immediate changes (as opposed to elements that merely refer to changed elements, which would constitute implicit changes). By applying the procedure presented in Section 4.1, the initially calculated delta is:

Delta = {IDivide, IMultiply}

context DiffElement:
  ... or self.oclIsTypeOf(ReferenceChange);

context NamedElement:
  self.ownedElement
    ->select(e | e.oclIsKindOf(DirectedRelationship))
    ->collect(obj : Element | obj.oclAsType(DirectedRelationship).target)
    ->asSet()
    ->iterate(obj2 : Element; result2 : Set(Element) = Set{} |
        result2->union(obj2->asSet())
               ->union(obj2.ownedElement
                   ->select(e | e.oclIsKindOf(DirectedRelationship))
                   ->collect(obj3 : Element |
                       obj3.oclAsType(DirectedRelationship).target)));

Listing 1. OCL Query for Link-based Delta Expansion Strategy

After the initial delta has been calculated, we expand it by applying a delta expansion strategy (delta expansion). For instance, the query shown in Listing 1 identifies model elements that are associated with the elements contained in the initial delta by means of links. Such an expansion is reasonable, e.g., if a component refers to (implements or extends) a changed component. The link strategy, as depicted in Listing 1, extends the sanitized delta by all components that are linked with an association to, or inherit from, a changed component. Hence, by applying this strategy we retrieve the expanded delta:

Delta_exp = {IDivide, IMultiply, IPow}

We implemented two more delta expansion strategies for our model-based regression testing approach. (1) The minimal strategy does not extend the delta but only retests elements that changed; the result is therefore the sanitized delta. It is also possible to use the type of change as impact. (2) The added strategy restricts the sanitized delta to all elements that were added to the changed model. Since our case study does not add components or interfaces, the result of the added strategy is an empty set.

Finally, based on the expanded delta, we derive the new test set (test set generation). By subsequently applying another OCL query, which evaluates the associations between system artifacts and test cases, we retrieve the set of associated test cases:

TestSet = {divideIntegerTest, divideFloatTest, multiplyIntegerTest, multiplyFloatTest, powIntegerTest, powFloatTest}

The generated set of regression tests, which is equal for UTP and TTS (see Table 1), is then further processed and executed by the respective model-based testing environment. The number of test cases in this case study certainly does not suffice for real-world testing of the Calculator system. However, as our aim is not to prove that the Calculator system works but to show the proper working of our approach, this number of test cases clearly suffices.

Table 1. Number of Test Cases for Different Regression Testing Strategies

  Number of            UTP                      TTS
  Testcases      Minimal  Link  Added     Minimal  Link  Added
  10             4        6     0         4        6     0

Table 1 shows the results of our case study with the delta expansion strategies minimal, link and added as explained before. Each component has two tests (one for each of its operations), so the sum of all tests is 10, which is shown in the first column of the table. The minimal strategy results in 4 tests, namely the tests for IMultiply and IDivide. The missing errors in the interface IPow are only detected with the link strategy, which adds the missing 2 tests of IPow. The added strategy does not execute any tests since no component was added. The results of UTP and TTS were equal, which shows that our approach delivers the same results for different profiles, i.e. model-based testing approaches.

Table 2. Metrics on the used OCL Queries for Regression Testing

  Strategy   UTP                                    TTS
             Calculation  Expansion  Generation     Calculation  Expansion  Generation
  Minimal    10/5/18      –          12/10/30       5/3/11       –          15/9/35
  Link       10/5/18      19/21/50   12/10/30       5/3/11       19/21/50   15/9/35
  Added      10/5/18      25/30/60   12/10/30       5/3/11       25/30/60   15/9/35

The variation points of our regression testing approach, namely the delta calculation, the delta expansion, and the test set generation, are controlled purely by OCL-based strategies. Table 2 shows the complexity of the OCL queries for the phases delta calculation (Calculation), delta expansion (Expansion), and test set generation (Generation), for the approaches UTP and TTS, and for the minimal, link and added delta expansion strategies. Each table entry has the form x/y/z, where x denotes the lines of code, y the number of referenced metamodel/profile elements, and z the overall number of words of the respective OCL query. The OCL queries for delta expansion are the most complex ones, i.e. they have the highest values for lines of code, number of referenced profile elements, and overall number of words. As the delta expansion strategies are independent of the profile or metamodel, however, their values are equal for UTP and TTS. Thus, there is a trade-off between the complexity and the genericity of the OCL queries for delta calculation, delta expansion and test set generation.

6 Conclusion

In this paper we presented a generic model-based regression testing platform based on the model versioning tool MoVE. The model-based regression testing approach consists of the three phases delta calculation, delta expansion, and test set generation, which are controlled purely by OCL queries. After an overview of the platform's implementation we performed a case study in which we applied our platform to the model-based testing approaches UML Testing Profile (UTP) and Telling TestStories (TTS). In the case study, we applied the minimal, link and added delta expansion strategies to UTP and TTS. We have shown that our platform derives the same regression test sets for UTP and TTS for each of the three delta expansion strategies, providing evidence that our approach is applicable to various model-based testing approaches. On the one hand, it turned out that the OCL queries for delta expansion are more complex than the OCL queries for delta calculation and test set generation. On the other hand, the delta expansion queries are independent of the applied testing metamodel. Our approach is based on the standardized XMI model interchange format and is not tailored to a specific test model representation.

Currently, our approach only supports selection-based regression testing strategies built on delta expansion. As future work, we also consider prioritization-based regression testing techniques. Another future research task is to define a library of parameterized OCL queries implementing various regression testing strategies; the queries are parameterized with stereotypes or other metamodel elements. Such a library concept would greatly enhance the applicability of our platform, as the tedious task of writing custom OCL queries is reduced to a minimum.

Acknowledgement. This research was partially funded by the research projects MATE (FWF P17380) and QE LaB - Living Models for Open Systems (FFG 822740).

References

1. Dias Neto, A.C., Subramanyan, R., Vieira, M., Travassos, G.H.: A Survey on Model-based Testing Approaches: A Systematic Review. In: 1st ACM International Workshop on Empirical Assessment of Software Engineering Languages and Technologies, pp. 31–36. ACM (2007)
2. Utting, M., Legeard, B.: Practical Model-Based Testing: A Tools Approach. Morgan Kaufmann Publishers Inc., San Francisco (2007)
3. IEEE: Standard Glossary of Software Engineering Terminology. IEEE (1990)
4. Briand, L.C., Labiche, Y., He, S.: Automating Regression Test Selection based on UML Designs. Inf. Softw. Technol. 51(1) (2009)
5. Breu, M., Breu, R., Low, S.: Living on the MoVE: Towards an Architecture for a Living Models Infrastructure. In: The Fifth International Conference on Software Engineering Advances, pp. 290–295 (2010)
6. OMG: OMG UML Testing Profile (UTP), V1.0 (2007)
7. Felderer, M., Zech, P., Fiedler, F., Breu, R.: A Tool-based Methodology for System Testing of Service-oriented Systems. In: The Second International Conference on Advances in System Testing and Validation Lifecycle, pp. 108–113. IEEE (2010)


8. Yoo, S., Harman, M.: Regression Testing Minimization, Selection and Prioritization: A Survey. Software Testing, Verification and Reliability 22(2), 67–120 (2012)
9. von Mayrhauser, A., Zhang, N.: Automated Regression Testing using DBT and Sleuth. Journal of Software Maintenance 11(2) (1999)
10. Fahad, M., Nadeem, A.: A Survey of UML Based Regression Testing. In: Shi, E., Mercier-Laurent, D., Leake, D. (eds.) Intelligent Information Processing IV. IFIP, vol. 288, pp. 200–210. Springer, Boston (2008)
11. Farooq, Q., Iqbal, M., Malik, Z., Riebisch, M.: A Model-based Regression Testing Approach for Evolving Software Systems with Flexible Tool Support. In: International Conference and Workshops on Engineering Computer-Based Systems (2010)
12. Chen, Y., Probert, R.L., Sims, D.P.: Specification-based Regression Test Selection with Risk Analysis. In: CASCON 2002 (2002)
13. IBM: IBM Rational Quality Manager (2011), http://www-01.ibm.com/software/rational/offerings/quality/ (accessed: January 5, 2011)
14. Aberdour, M.: Opensourcetesting (2011), http://www.opensourcetesting.org/ (accessed: January 5, 2011)
15. Chen, Y.F., Rosenblum, D.S., Vo, K.P.: TestTube: A System for Selective Regression Testing. In: ICSE, pp. 211–220 (1994)
16. Seidl, H., Vojdani, V.: Region Analysis for Race Detection. In: Palsberg, J., Su, Z. (eds.) SAS 2009. LNCS, vol. 5673, pp. 171–187. Springer, Heidelberg (2009)
17. Aldazabal, A., Baily, T., Nanclares, F., Sadovykh, A., Hein, C., Ritter, T.: Automated Model Driven Development Processes. In: ECMDA Workshop on Model Driven Tool and Process Integration (2008)
18. Altmanninger, K., Kappel, G., Kusel, A., Retschitzegger, W., Schwinger, W., Seidl, M., Wimmer, M.: AMOR - Towards Adaptable Model Versioning. In: 1st Int. Workshop on Model Co-Evolution and Consistency Management (2008)
19. Amelunxen, C., Klar, F., Königs, A., Rötschke, T., Schürr, A.: Metamodel-based Tool Integration with MOFLON. In: ICSE (2008)
20. Eclipse Teneo, http://wiki.eclipse.org/Teneo#teneo (accessed: April 25, 2012)
21. Eclipse CDO, http://wiki.eclipse.org/CDO (accessed: April 25, 2012)
22. Breu, R.: Ten Principles for Living Models - A Manifesto of Change-Driven Software Engineering. In: CISIS, pp. 1–8. IEEE Computer Society (2010)
23. OMG: UML Testing Profile, Version 1.0 (2005), http://www.omg.org/spec/UTP/1.0/PDF (accessed: February 25, 2011)
24. Baker, P., Dai, Z.R., Grabowski, J., Haugen, Ø., Schieferdecker, I., Williams, C.E.: Model-Driven Testing - Using the UML Testing Profile. Springer (2007)
25. Felderer, M., Agreiter, B., Breu, R.: Evolution of Security Requirements Tests for Service-Centric Systems. In: Erlingsson, Ú., Wieringa, R., Zannone, N. (eds.) ESSoS 2011. LNCS, vol. 6542, pp. 181–194. Springer, Heidelberg (2011)
26. Farooq, Q.u.a., Iqbal, M.Z., Malik, Z., Riebisch, M.: A Model-Based Regression Testing Approach for Evolving Software Systems with Flexible Tool Support, pp. 41–49 (2010)
27. EMF Compare Project, http://www.eclipse.org/emf/compare/ (accessed: April 8, 2012)
