Practical Problems with Modeling Variability in Test Cases – an Industrial Perspective

Sachin Patel
TCS Innovation Labs, Tata Consultancy Services, Pune, India
+912066086333 ext. 86467
[email protected]

ABSTRACT
There is a growing trend towards using Commercial Off-The-Shelf (COTS) software within enterprises, as opposed to developing custom-built software. IT service providers, who specialize in executing COTS implementation projects, have to deal with the problem of managing variability across the implementations at different customer enterprises. Customer-specific implementations vary along dimensions such as the product used, industry vertical, business processes, navigational flows, user interface, technology platform and so on. In this paper, we describe the practical problems faced by service providers in managing variability within test cases for COTS implementations. We draw upon the experience shared with us by practitioners from COTS implementation testing teams and by teams who have been developing reusable test cases for various COTS products. We motivate the need for further research on test notations and meta-models for business applications and on variability management within these models.
Categories and Subject Descriptors D.2.13 [Reusable Software]: Reuse Models
General Terms Design, Documentation, Management
Keywords Variability Management, COTS Implementation, Test Case Reuse, Test Notation/Meta-Model, ERP Testing
1. INTRODUCTION
There is a growing trend towards using Commercial Off-The-Shelf (COTS) software within enterprises, as opposed to developing custom-built software. One implication of this trend for software engineering processes is a shift in focus from design and development to configuration, customization and integration. A similar shift occurs in the testing process: rather than designing new test cases, practitioners select, configure and modify reusable COTS test cases for use within their context.
When a COTS product is implemented in an enterprise, it has to be configured, customized and integrated with existing systems in order to suit the enterprise-specific needs. This configuration is then tested with enterprise-specific data to ascertain functional correctness. IT service providers have to deal with the problem of testing many such configurations created for different enterprises. In order to run the testing organization efficiently, it is imperative that the commonality across these configurations be leveraged. This is possible only if the test organization has a mechanism to specify commonality and variability in the test cases. We posit that, as the use of COTS products becomes more prevalent, there will be a need to maintain repositories of reusable test cases for the popular products. Organizations that maintain such repositories will have to deal with the problem of variability management in the repository.
In this paper, we discuss the practical problems faced by IT service providers in managing variability within test cases. We restrict the discussion to test cases for commercial enterprise software such as SAP, Oracle ERP, PeopleSoft, Siebel and so on. In Section 2, we explain the relevance of the variability management problem in the context of the IT services industry. In Section 3, we describe the methods employed by practitioners to deal with the problem. This is followed by a description of the state of the art in test modeling and variability management in Section 4. We conclude the paper with a list of open problems in Section 5.
2. BACKGROUND
Enterprise Software (ES) is the term used to describe information systems designed to integrate and optimize the business operations of an enterprise. These include operations such as enterprise resource planning, accounting, business intelligence, customer relationship management, human resource management and so on. An Enterprise Software Implementation (ESI) tends to be a complex project that involves tasks such as business process re-engineering, configuration, customization, integration, testing, data migration, training and so on. ESI requires specialized domain, technology and management skills, and it is common to engage an IT service provider to execute it. Tata Consultancy Services is an IT services, consulting and business solutions organization. It provides testing and end-to-end implementation services for enterprise software. While the organization also provides custom software development and maintenance services, the discussion in this paper is limited to issues related to testing of commercial enterprise software such as SAP, Oracle ERP, PeopleSoft, Siebel and so on.
TABLE 1: Structure of a Test Case

Test Case 1
Test Steps:
  a. Create a new Order.
  b. Add Items and choose the promotion code created previously.
  c. Go to the review screen to check the total amount for the order and then submit the order on the review screen.
Pre-Conditions / Test data: Customer = TCS; Items to add: 3 TV Cables, 5 batteries of 12V DC; Use promotion code for Corporate-Offer.
Expected Result: The total price calculated should be (3 * price of TV cable) + (5 * price of 12V DC battery) - (5% of the total obtained so far). Also check that this amount is correctly entered in the database record.
Actual Result: As expected.

Test Case 2
Test Steps:
  a. Go to the Order list screen, select the previously created order and click Edit Order.
  b. Add one more item to the order.
  c. Go to the review screen to check the total amount for the order and then submit the order on the review screen.
Pre-Conditions / Test data: Order created in previous test case; Item to add: 2 batteries of 5V DC.
Expected Result: Check the total of the order as in the previous test case and also check that the database row is correctly updated.
Actual Result: As expected.

Test Case 3
Test Steps:
  a. Select the Order created in the previous step and choose "Generate Invoice".
  b. Send the invoice to the customer.
Expected Result: An invoice with a new number should be generated in the system. An email with a PDF report of the invoice should be sent to the corresponding customer email address.
Actual Result: As expected.

Aspects marked in the table as having a bearing on reusability: references to UI elements and UI navigations embedded in the test steps, the test behavior (the oracle) in the Expected Result column, the use of a test data descriptor ("Order created in previous test case"), and the fact that test case 2, step c is identical to test case 1, step c.
When a new ESI project is initiated, a testing team is formed to test the enterprise-specific implementation. The team comprises domain experts, product experts and testers. The team analyzes the business process documentation and develops a set of test cases to test the implementation. The test cases are documented in a spreadsheet or a test management tool for use during execution. The following are some aspects of an ES implementation that need to be tested:

1. Configuration: These are feature variations provided by the product, and in mature products they are already rigorously tested. Hence, the focus of testing is not to test the product, but to test whether all the business operations can be executed as expected by the enterprise.

2. Customization: When a product does not provide a feature or a feature variation required by the enterprise, custom components have to be developed to fulfill the need. These components have to be tested, and such testing requires a complete test cycle, as for any newly developed component.

3. Integration: When the product interacts with other existing products, all data flows between the different products have to be tested. Integration testing is challenging as it requires the tester to be knowledgeable about multiple products and domains.

4. Data Migration: Most enterprises have existing systems containing data that has to be migrated to the newly implemented product. After the data is migrated, the testing team validates it by executing business scenarios in the new system.

5. Product Upgrades: When there are new releases of a product, the implementation has to be upgraded. Testing such upgrades involves all of the above-mentioned tasks, depending on the new features in the upgrade.
Owing to the similarity in the domain, there is a lot of commonality across implementations, and reuse of test cases is necessary in order to execute the business efficiently. Tata Consultancy Services has set up a center to promote reuse in ESI projects. The center has built a repository of reusable test cases for some of the commonly used ES products. A set of test cases is provided for all the major business processes and scenarios in the products. These reusable test cases are further specialized for use within different industry verticals. A reusable test case aims at testing a particular business flow and may contain 4-5 test steps. Each step represents some user activity performed on the interface of the system. Some of the complex business scenarios may have as many as 50 test steps in one test case. The reuse repository is large, containing 16,000 test cases for 6 products. As many as 2,000 to 3,000 test cases may be developed in an average-sized implementation. Smaller implementations may require 500 to 1,000 test cases.
3. STATE OF THE PRACTICE

3.1 Structure of a Test Case
The common practice in ESI projects is to specify test cases in a tabular form with the following information. Test steps explain the navigation and the form fields to be entered on the user interface. Test data is specified or described for the major form fields of each step. Observations are made at each step and compared with an expected behavior. Information about the data setup required for a test case is sometimes written in a separate column and sometimes forms part of the test steps. See Table 1 for a couple of sample test cases of a billing application. In the table we have also marked some aspects of the test cases that have a bearing on their reusability.
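To make the tabular structure concrete, the following minimal sketch (in Python, purely illustrative; the class and field names are our own and not part of any existing tool) shows how one test case of Table 1 maps onto such a record.

from dataclasses import dataclass
from typing import List

# Illustrative only: hypothetical names, not the schema of an actual test management tool.
@dataclass
class TestStep:
    description: str  # free text mixing business action, UI navigation and form fields

@dataclass
class TestCase:
    steps: List[TestStep]
    preconditions_and_data: List[str]  # concrete values or natural-language descriptors
    expected_result: str               # the oracle, written against the whole test case
    actual_result: str = ""

# Test case 1 of Table 1 expressed in this structure.
tc1 = TestCase(
    steps=[
        TestStep("Create a new Order."),
        TestStep("Add Items and choose the promotion code created previously."),
        TestStep("Go to the review screen, check the total amount and submit the order."),
    ],
    preconditions_and_data=[
        "Customer = TCS",
        "Items to add: 3 TV Cables, 5 batteries of 12V DC",
        "Use promotion code for Corporate-Offer",
    ],
    expected_result="Total = 3 * price(TV cable) + 5 * price(12V DC battery) - 5% discount; "
                    "the amount must also be stored correctly in the database record.",
)

Note that the UI wording, the test data and the oracle are all entangled in free-text fields, which is the root of the reuse problems discussed below.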
TABLE 2: Test case variant for another billing product

Test Case 1
Test Steps:
  a. Create a New Invoice.
  b. Add Items to the list and enter the discount rate.
  c. Select "Allow partial payment for this invoice" and submit the invoice.
Pre-Conditions / Test data: Customer = TCS; Items to add: 3 TV Cables, 5 batteries of 12V DC.
Expected Result: The total price calculated should be (3 * price of TV cable) + (5 * price of 12V DC battery) - (5% of the total obtained so far). Also check that this amount is correctly entered in the database record.
Actual Result: As expected.

Test Case 2
Test Steps:
  a. Go to the Invoice list screen, select the previously created invoice and click Edit Invoice.
  b. Add one more item to the invoice.
  c. Click Modify invoice.
Pre-Conditions / Test data: Invoice created in previous test case; Item to add: 2 batteries of 5V DC.
Expected Result: Check the invoice amount as in the previous test case and also check that the database row is correctly updated.
Actual Result: As expected.

Test Case 3
Test Steps:
  a. Open the invoice selected in the previous step and click Send invoice.
Expected Result: An email with a PDF report of the invoice should be sent to the corresponding customer email address.
Actual Result: As expected.
1. User Interface Elements: We can observe that elements of the user interface (UI), such as field names and navigational flows, are embedded within the test step descriptions. This restricts the reuse of test steps across different products. For example, if two implementations use different ES products but implement a similar "Purchase Order" flow, testers would like to reuse the flow and refine it later with details such as the UI structure. This is not possible with the current structure (a possible way of decoupling these concerns is sketched after this list).

2. Test Steps: Another observation is that test steps are repeated and can be reused. Developing reusable test steps is complicated because a test step has its own test data, referenced UI elements and observations. The test oracle is specified at the test case level and may use the observations made at individual test steps. If a test step is to be designed for reusability, it should be possible to know all the kinds of observations that may be required by the test cases in which the step may be reused.

3. Test Data: Testers from ESI projects mention that designing test data is the most time-consuming part of implementation testing. Unfortunately, it is also the part that is least reused. This is because the objective of testing is to test the operations with enterprise-specific data. People who reuse test cases are likely to discard the test data values and replace them with their own. In many cases we observe the use of test data descriptors instead of actual values. A data descriptor is a natural-language specification of the data that should be used in the test case, for example, "Use an order number for which partial payment has been done" or "Select a customer who is placing an order for the first time". Such descriptors help make the test case reusable.

4. Test Oracles: The example test cases also show a case where there is a need to reuse the test oracle. If the oracle is very short, testers repeat it in the test case; otherwise they simply refer to the other test cases where the same oracle was used. In many cases test oracles refer to the observations made in one or more of the test steps of the test case.
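As a thought experiment (a sketch under our own assumptions, not an existing notation or tool), the observations above suggest separating the business-level action of a step from the UI elements it references, allowing test data to be either a concrete value or a descriptor, and naming the observations a step produces so that oracles can refer to them:

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ReusableStep:
    action: str              # business-level action, free of product-specific UI wording
    ui_refs: List[str]       # logical UI element names, resolved per product
    data: Dict[str, str]     # field -> concrete value or natural-language descriptor
    observations: List[str]  # named observations that oracles may refer to

edit_order = ReusableStep(
    action="Edit an existing order and add an item",
    ui_refs=["order.list_screen", "order.edit_button", "order.add_item_button"],
    data={
        "order": "Order created in previous test case",  # a test data descriptor
        "item": "2 batteries of 5V DC",                  # a concrete value
    },
    observations=["order.total_amount", "order.db_row"],
)

# An oracle can now reference named observations instead of repeating step text.
oracle = "order.total_amount equals the previous total plus 2 * price(5V DC battery)"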
3.2 Dimensions of Variation
An analysis of the repository reveals that there are many common features across the various products and industry-specific instances. For example, the "Order to Cash" process may be quite similar
across different ERP products. However, every product implements the process with subtle variations in naming conventions or UI structure. The test cases shown in Table 1 correspond to the invoice creation process in a billing product. Table 2 shows a variant of the test cases for the same process in another billing product. The two products differ in the way they provide the invoicing feature. In the first product, users are required to place orders with the items the customer wants to purchase; once the order is finalized, an invoice is generated and sent to the customer. In the second product there is no concept of creating orders: the user directly creates an invoice with the required items and sends it to the customer. Another difference between the two products is that the first product has a concept of "promotions". A promotion code has to be selected when choosing the discount for a particular order. In the second product this concept does not exist and the user directly enters a discount rate in the invoice being created. We can see that the same business functionality is being accomplished in the test cases of Tables 1 and 2, even though the test cases themselves vary. Similarly, there are variations of the business process across different enterprises. For example, a particular enterprise may allow partial deliveries for its orders, whereas for other enterprises only a full delivery of the order may be possible. This is an example of variation due to "Process". We have observed that the "User Interface" is one of the most significant sources of variation in ES implementations. One reason for this is that every product may have a different UI. The other reason is that enterprises want the UIs to be similar to those of their old systems, with which their end-users are familiar. Another important source of variation is the "Technology Platform". By this we mean the same functional features being provided on different technical architectures such as mobile applications, desktop applications, web interfaces, point-of-sale systems, web services and so on.
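Purely as an illustration (the variant names below are ours, not a model shipped with either product), the invoicing variation between the two billing products of Tables 1 and 2 could be recorded as a small set of variation points:

from dataclasses import dataclass
from enum import Enum

class InvoiceFlow(Enum):
    ORDER_THEN_INVOICE = "order placed first, invoice generated from it"  # product of Table 1
    DIRECT_INVOICE = "invoice created directly with the required items"   # product of Table 2

class DiscountMechanism(Enum):
    PROMOTION_CODE = "discount chosen via a promotion code"
    DISCOUNT_RATE = "discount rate entered directly on the invoice"

@dataclass
class BillingVariant:
    invoice_flow: InvoiceFlow
    discount: DiscountMechanism
    partial_delivery_allowed: bool  # an enterprise-level process variation (toy value below)

product_a = BillingVariant(InvoiceFlow.ORDER_THEN_INVOICE, DiscountMechanism.PROMOTION_CODE, True)
product_b = BillingVariant(InvoiceFlow.DIRECT_INVOICE, DiscountMechanism.DISCOUNT_RATE, False)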
3.3 Mechanisms of Variability Management
ESI projects are long-running, complex projects and it is customary to have a dedicated team work on every implementation. In such an arrangement, ESI teams do not have to deal with the variability management problem at all. However, this requires the presence of domain and product experts in each team. When such experts are not available, the team members
have to acquire the required knowledge before developing test cases for the enterprise. This increases the test design time. To deal with the shortage of experts, the test case repository was developed. The experts develop reusable test cases for use by ESI teams. The repository is organized in a hierarchical structure with multiple layers (see Figure 1). The topmost layer consists of the various products and modules for which testing services are provided. The second layer consists of diagrams of the business processes in all the modules. Each business process is sub-divided into the scenarios belonging to that process. The last layer contains the test cases designed to test each scenario. These test cases are designed for the default configuration of the product, called the standard instance of the product. The standard test cases are also specialized into industry-specific repositories so that they have a higher reuse potential in implementations for that industry. This specialized instance is a configuration containing the variations most often used in a particular industry vertical. When a new ESI project is initiated, the repository team provides a specialized instance for the particular industry and product, if available; otherwise a standard instance is provided.

Figure 1: Organization of the Test Case Repository

The repository contains different sets of test cases for each product as well as each industry vertical. The repository team mentioned that this was necessary because different groups of experts were required for developing each of these variants. This structure of the repository is useful in choosing the enterprise-specific variations of test cases. The ESI team analyzes the business process documentation to identify candidates for reuse from the repository. If they find that a business process is the same as one in the repository, they reuse the test cases for that process. If they find that a particular business scenario in the process is different, they reuse the test cases for the remaining business scenarios and develop their own test cases for the differing scenario. They develop a whole new set of test cases if a particular business process is very different from the one available in the repository. When the differences are minor, such as navigational flows, user interface structure or naming conventions, they make a copy of the test cases from the repository and modify them as required. This reuse mechanism has resulted in savings of up to 15% in the test design effort of ESI projects; however, it is inefficient since a copy is created for every variation required, which leads to redundancy. One of the advantages of the mechanism is the ease of test case selection.
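The layered organization and the instance selection described above can be approximated as follows; this is our own sketch with hypothetical product, process and test case names, not the repository's actual implementation.

# repository[product][business process][scenario] -> list of test case identifiers
standard_repo = {
    "BillingProductA": {
        "Order to Cash": {
            "Create and invoice an order": ["TC-001", "TC-002", "TC-003"],
        },
    },
}

# Industry-specialized instances override the standard instance where they exist.
industry_repos = {
    ("BillingProductA", "Retail"): {
        "Order to Cash": {
            "Create and invoice an order": ["TC-001-RETAIL", "TC-002", "TC-003"],
        },
    },
}

def instance_for(product: str, industry: str) -> dict:
    """Return the specialized instance if available, otherwise the standard instance."""
    return industry_repos.get((product, industry), standard_repo.get(product, {}))

print(instance_for("BillingProductA", "Retail"))     # specialized instance
print(instance_for("BillingProductA", "Insurance"))  # falls back to the standard instance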
4. STATE OF THE ART
In this section, we discuss the state of the art in the area of test case notations and variability management mechanisms for testing.
4.1 Test Models / Notations
The standards defined for test notations are the UML Testing Profile (UTP) [1] and the Testing and Test Control Notation (TTCN-3) [3]. UTP provides a complete meta-model covering various testing concepts. The authors of [2] propose a mechanism to model variability in a UTP model, thus making it a complete approach for VM in test cases. However, there is a major difference between a UTP specification and the test case structure we require. The test behavior specification in UTP can be one of the UML structures such as a state-transition diagram, activity diagram or sequence diagram. None of these is designed to specify the information that we would like to capture in a test case. For example, one cannot specify user interface activity in a state-transition diagram. Activity diagrams can be used to specify user activity, but one cannot specify oracles and navigations in them. Sequence diagrams can be used to specify navigations, but it is difficult to describe form fields, test data and the corresponding oracles on the diagram. TTCN-3 is another notation widely used in a variety of domains such as telecommunications, automotive and medical devices [3]. There have also been case studies on the application of TTCN-3 to the testing of Web applications [4]. Our objective is to specify test cases in a non-redundant manner, not necessarily to automate their execution. For example, if the user interface (UI) model of the product could be specified separately from the test behavior, then changes to the user interface would not have to be made in every test case; they could be made in the UI model and propagated to all test cases. We could not find an easy way to do this with TTCN-3.
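The kind of separation we have in mind can be illustrated with a short sketch (our own construction, not a UTP or TTCN-3 mechanism; all names are hypothetical): an abstract step is written against logical UI names, each product supplies its own UI model, and a UI change is made once in the model rather than in every test case.

# An abstract step written against logical names only.
abstract_step = {"action": "open_item_list", "enter": {"item": "TV Cable", "quantity": 3}}

# One UI model per product maps logical names to concrete labels and navigation paths.
ui_model_product_a = {
    "open_item_list": "Orders > New Order > Add Items",
    "item": "Item Name",
    "quantity": "Qty",
}
ui_model_product_b = {
    "open_item_list": "Invoices > New Invoice > Line Items",
    "item": "Product",
    "quantity": "Quantity",
}

def concretize(step: dict, ui_model: dict) -> str:
    """Render an abstract step as a concrete instruction for one product's UI."""
    fields = ", ".join(f"{ui_model[k]} = {v}" for k, v in step["enter"].items())
    return f"Navigate to '{ui_model[step['action']]}' and enter: {fields}"

print(concretize(abstract_step, ui_model_product_a))
print(concretize(abstract_step, ui_model_product_b))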
4.2 Variability Management in Testing
In our literature review, we have not come across an empirically validated approach for Variability Management (VM) in test cases. [8] provides a survey on product family testing. The survey identifies traceability from requirements to implementation and test assets as the means to deal with commonality and variability in test assets. Another suggestion from the survey is that the same modifications that are made to the product realization assets, such as architecture and components, should also be made to the test assets. [7] is a systematic mapping study of software product line testing research. Some of the conclusions of this study are that there are still many open issues regarding variation and testing, such as: what is the impact of designing variations in test assets on effort reduction? What is the most suitable strategy for handling variability within test assets: use cases and test cases, or perhaps sequence or class diagrams? A review of the literature on variability management [6] indicates that Feature Models (FM) and UML models have been the most prominent forms used to express variability. The majority of these approaches separate the variability representation from the representation of software engineering artifacts. In a case study [5], an FM was used to express variability and features were mapped to test cases. It was found that the majority of the test cases span multiple features, which results in a many-to-many mapping between features and test cases. Such a mapping results in too many test cases being selected during feature-based selection, and many of them are not necessarily relevant. In our opinion, this issue may not arise if variability is modeled in the test artifact instead of in an application model. The authors of [9] propose a method to define variability in extended UML activity diagrams and use them as a basis for testing. [10] proposes an approach to develop domain test cases from use cases that contain variability and to derive application test cases from them. [11] suggests a mechanism to deal with variability in the test steps of a test case. It requires the development of a separate variability diagram and a mapping of variable features to the corresponding test steps. This is relevant to our requirement but does not discuss how to deal with variations in test oracles, user interfaces and test data. We intend to experiment with such a method in the future.
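To make the over-selection effect reported in [5] concrete, the following toy example (our own; the feature and test case names are invented) uses a many-to-many mapping between features and test cases; selecting by a single feature also pulls in test cases whose other behavior is not relevant to the variation under test.

# Many-to-many mapping between features and test cases (toy data).
test_case_features = {
    "TC-001": {"Promotions", "Order Creation"},
    "TC-002": {"Order Editing", "Partial Delivery"},
    "TC-003": {"Promotions", "Invoicing", "Email Notification"},
}

def select_by_feature(feature: str) -> list:
    """Feature-based selection: every test case touching the feature is selected."""
    return [tc for tc, feats in test_case_features.items() if feature in feats]

# Selecting for "Promotions" also drags in invoicing and notification behavior (TC-003).
print(select_by_feature("Promotions"))  # ['TC-001', 'TC-003']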
5. OPEN PROBLEMS
In this section, we summarize the challenges discussed in Sections 3 and 4 and conclude the paper. The problems listed are restricted to our requirement of maintaining a reusable test case repository for enterprise software.
5.1 Test Definition
The test case structure discussed in Section 3.1 suffers from multiple issues, such as redundant information and the lack of a formal structure. In the current method of specifying test steps, the three elements of user interface navigations, user interface elements and business process flow seem to be inseparable. This is a major impediment to reducing redundancy in the test case repository. We perceive a need for a test definition mechanism in which the various elements, or granules, of a test case can be specified separately and combined to form new test cases.
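A composition mechanism along these lines might look like the following sketch (our own illustration; the granule names are invented): independently specified granules are combined by reference into a new test case.

# A shared library of independently specified test case granules (toy example).
granules = {
    "step.create_order":  "Create a new Order.",
    "step.add_items":     "Add the items listed in the test data.",
    "step.review_submit": "Go to the review screen, check the total amount and submit.",
    "oracle.total_price": "Total equals the sum of item prices minus the applicable discount.",
}

# A test case is a combination of granule references plus its own data.
test_case = {
    "steps": ["step.create_order", "step.add_items", "step.review_submit"],
    "data": {"items": "3 TV Cables, 5 batteries of 12V DC"},
    "oracle": "oracle.total_price",
}

# Resolving the references yields a concrete, readable test case.
for ref in test_case["steps"]:
    print(granules[ref])
print("Expected:", granules[test_case["oracle"]])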
5.2 Change Propagation
Enterprise software products are upgraded periodically, and it is important to be able to distribute upgraded test cases to users of the repository. This means that test case parts should be dynamically linked, just like components. Also, testers who reuse test case parts should be able to inherit or extend them to modify behavior. This will allow users to maintain their own set of test cases and still take updated test case parts from the repository when there is a product upgrade.
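One way to read "inherit or extend" is sketched below with hypothetical classes: a project reuses a repository step by subclassing it and overriding only what differs, so an upgrade to the repository's base step reaches the project's copy without re-editing every test case.

class ReviewAndSubmitOrder:
    """Repository-owned step; a product upgrade changes only this class."""
    navigation = "Orders > Review"

    def render(self) -> str:
        return f"Go to '{self.navigation}', check the total amount and submit the order."

class ReviewAndSubmitOrderRetail(ReviewAndSubmitOrder):
    """Project-specific extension: overrides only the navigation path that differs."""
    navigation = "Sales > Order Review"

# Project test cases link to the subclass; behavior inherited from the repository step
# is picked up automatically when the repository version is upgraded.
print(ReviewAndSubmitOrder().render())
print(ReviewAndSubmitOrderRetail().render())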
5.3 Variability Specification
We believe that a test definition mechanism with linkable test case granules will be amenable to existing variability management techniques. However, defining such test cases using the granules and specifying variability along the many dimensions discussed in Section 3.2 might become cumbersome. Considering that the repository contains thousands of test cases, the number of variation points is likely to be even higher. This might compromise the ease of specification, selection and understandability that the current tabular structure offers, and may result in people reverting to the practice of creating copies of test cases for every variation required.
5.4 Tool Support
There will be a need for tool support for activities such as defining test case granules, constructing test cases, specifying variability, test case variant selection and so on. The whole process of VM in test cases depends on effective tool support, and it could be a critical success factor for any new approach developed.
6. REFERENCES
[1] UML Testing Profile (UTP). http://utp.omg.org/
[2] Beatriz Pérez Lamancha, Pedro Reales Mateo, Ignacio Rodríguez de Guzmán, Macario Polo Usaola and Mario Piattini Velthius. 2009. Automated model-based testing using the UML testing profile and QVT. In Proceedings of the 6th International Workshop on Model-Driven Engineering, Verification and Validation (MoDeVVa '09), Article 6.
[3] Ina Schieferdecker. 2010. Test Automation with TTCN-3 - State of the Art and a Future Perspective. In Testing Software and Systems, Lecture Notes in Computer Science, Volume 6435, pp. 1-14.
[4] Cosmin Rentea, Ina Schieferdecker and Valentin Cristea. 2009. Ensuring Quality of Web Applications by Client-side Testing Using TTCN-3. In 9th International Conference on Web Engineering.
[5] Sachin Patel, Priya Gupta and Vipul Shah. 2013. Feature Interaction Testing of Variability Intensive Systems. In 4th International Workshop on Product Line Approaches in Software Engineering, ICSE 2013.
[6] Lianping Chen, Muhammad Ali Babar and Nour Ali. 2009. Variability management in software product lines: a systematic review. In Proceedings of the 13th International Software Product Line Conference (SPLC '09), Carnegie Mellon University, Pittsburgh, PA, USA, 81-90.
[7] Paulo Anselmo da Mota Silveira Neto, Ivan do Carmo Machado, John D. McGregor, Eduardo Santana de Almeida and Silvio Romero de Lemos Meira. 2011. A systematic mapping study of software product lines testing. Information and Software Technology, Volume 53, Issue 5, May 2011, Pages 407-423.
[8] Antti Tevanlinna, Juha Taina and Raine Kauppinen. 2004. Product family testing: a survey. SIGSOFT Software Engineering Notes 29, 2 (March 2004).
[9] André Heuer, Vanessa Stricker, Christof J. Budnik, Sascha Konrad, Kim Lauenroth and Klaus Pohl. 2013. Defining variability in activity diagrams and Petri nets. Science of Computer Programming 78, 12.
[10] Erik Kamsties, Klaus Pohl, Sacha Reis and Andreas Reuys. 2003. Testing Variabilities in Use Case Models. In Software Product-Family Engineering, 5th International Workshop, PFE 2003, Siena, Italy, November 4-6, 2003, Lecture Notes in Computer Science, Vol. 3014.
[11] Klaus Pohl, Günter Böckle and Frank J. van der Linden. 2005. Software Product Line Engineering: Foundations, Principles and Techniques. Springer-Verlag New York, Inc., Secaucus, NJ, USA.