An Open Environment for Automated Integrated Testing

Oliver Niese¹, Tiziana Margaria¹, Markus Nagelmann¹, Bernhard Steffen², Georg Brune³ and Hans-Dieter Ide³

¹ METAFrame Technologies GmbH, Dortmund, Germany
{ONiese, TMargaria, MNagelmann}@METAFrame.de
² Chair of Programming Systems, University of Dortmund, Germany
[email protected]
³ Siemens AG, Witten, Germany
{Georg.Brune, Hans-Dieter.Ide}@wit.siemens.de
Keywords: Automated, Integrated, Distributed Testing; Testing Environments; Test Management

Abstract. The increasing complexity of today's testing scenarios demands an integrated, open and flexible approach to support the management of the overall test process. Furthermore, systems under test become composite (e.g. including Computer Telephony Integrated platform aspects), embedded (e.g. with hardware/software codesign) and run on distributed architectures (e.g. client/server architectures). In addition, it is increasingly unrealistic to restrict testing activities to single units, since complex subsystems affect each other and require scalable, integrated test methodologies. In this paper, we present a test management layer driving the generation, execution and evaluation of system-level tests in a highly heterogeneous landscape. The management layer introduces the required flexibility into the overall architecture of the test environment: it is a modular and open environment, to which tools and units under test can be added as needed. By means of a CORBA/RMI-based implementation of the external interfaces of the management layer, we are able to address and encapsulate a wide range of commercial test tools. This increases over time the reach and the capabilities of the resulting environment.
1 Introduction
The increasing complexity of today's testing scenarios demands an integrated, open and flexible approach to support the management of the overall test process, e.g. the specification of tests, the execution of tests and the analysis of test results. Furthermore, systems under test (SUT) become composite (e.g. including Computer Telephony Integrated (CTI) platform aspects), embedded (e.g. with hardware/software codesign) and run on distributed architectures (e.g. client/server architectures). In addition, it is increasingly unrealistic to restrict testing activities to single units of the system, since complex subsystems affect each other and require scalable, integrated test methodologies.
Fig. 1. Example of an integrated CTI platform
As an example of an integrated CTI platform, fig. 1 shows a telephone switch and its environment. The switch is connected to the ISDN telephone network, or more generally to the public switched telephone network (PSTN), and acts as a "normal" telephone switch to the phones. Additionally, it communicates directly via a LAN, or indirectly via an application server, with CTI applications that are executed on PCs. The CTI applications are also active components, like the phones. So it is possible for an application to control the switch (e.g. initiate a call) and vice versa (e.g. notification of an incoming call). In a system test, it is important to investigate the interaction between the subsystems.
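To make the two control directions concrete, the following minimal Java sketch models them as a pair of interfaces: one through which an application drives the switch, and one through which the switch notifies the application. All names (SwitchControl, CallObserver, etc.) are illustrative assumptions, not the actual platform API.

```java
// Hypothetical sketch of the two CTI control directions (names are
// illustrative, not the actual platform API).

// Direction 1: a CTI application controls the switch, e.g. initiates a call.
interface SwitchControl {
    void makeCall(String callingExtension, String calledNumber);
    void clearCall(String extension);
}

// Direction 2: the switch notifies the application, e.g. of an incoming call.
interface CallObserver {
    void incomingCall(String extension, String callerNumber);
    void callCleared(String extension);
}

// A CTI application implements the observer and uses the control interface.
class ScreenPopApplication implements CallObserver {
    private final SwitchControl switchControl;

    ScreenPopApplication(SwitchControl switchControl) {
        this.switchControl = switchControl;
    }

    @Override
    public void incomingCall(String extension, String callerNumber) {
        // e.g. pop up caller information on the PC of the called party
        System.out.println("Incoming call on " + extension + " from " + callerNumber);
    }

    @Override
    public void callCleared(String extension) {
        System.out.println("Call cleared on " + extension);
    }
}
```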
More generally speaking, in our context a typical system under test is composed of several independent subsystems communicating with each other. To test this kind of composite system we need to resort to specific test tools for each participating subsystem. Thus, we must be able to coordinate heterogeneous test tools in a context of heterogeneous platforms, a task exceeding the capabilities of today's commercial test management tools.
2 Overview
Fig. 2. Architectural Overview of the Test Environment

Figure 2 shows the general architecture of the test environment. The system under test (SUT) is composed of several subsystems communicating with and affecting each other. The subsystems can be hardware or software components, often also including and driving external applications or devices. Each of the components can also be used in completely different configurations in other scenarios. In general, each subsystem has to be tested using its specific test tool, or at least using its own instance of a test tool. To coordinate the different control and inspection activities of an integrated test, a test management layer is mandatory. This test management layer, called Test Coordinator in fig. 2, communicates with the test tools by means of the Common Object Request Broker Architecture (CORBA) [3] or Remote Method Invocation (RMI) [4]. However, when testing composite systems it is not sufficient to support only the coordination of test tools; the whole process, from test specification to the analysis of test results, must be covered. Therefore, the following aspects of the test process have to be supported by an integrated test environment:
1. Organization of test relevant data
2. Design of test cases and composition of test suites
3. Coordination of test execution
4. Analysis of test execution results
2.1 Organization
The organizational aspects of the test process are, among others:

Version control. Besides the test cases themselves, many other files have to be organized throughout the test process, e.g. configuration files or test documentation. Because of changes throughout the test process, it is important to capture the history of changes and the dependencies between versions.

Configuration management. It is mandatory, especially when considering integrated tests, that the system under test is in a well-defined state before executing tests. This is a non-trivial task because we treat complex systems, where the initialization of one component can affect the state of other components. Moreover, it is also important to document the versions of the subsystems and test tools and to ensure that they work correctly together.

Structuring of tests. Tests have to be structured to

– provide a simple mechanism to build test suites out of the set of test cases via criteria, e.g. regression test or feature test,
– eliminate redundant test cases, which may dramatically reduce the whole test execution time.
2.2 Design
The design of test cases, i.e. specifying which control or inspection activities have to be performed and in which order, should be possible without any expertise in applying a specific test tool. Therefore, the design should be intuitive, yet reliable concerning executability and other frame conditions.

Design. Test cases are specified on the level of SUT usage and are formulated graphically. Hierarchical design, i.e. the use of a macro mechanism, should be possible.

Analysis. Consistency checks of test cases at design time ensure the correct specification of the test cases.

Automatic generation. Besides the manual design of test cases, the automatic derivation of new test cases from the system specification should be possible.

Variation. The automatic variation of existing test cases should be possible by means of parameter variation, as sketched below.
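As an illustration of parameter variation, the following Java sketch derives variants of a test case template by substituting values from a parameter domain. The TestCase representation and all parameter names are assumptions made for illustration, not the environment's actual data model.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: derive test case variants by parameter variation.
// A test case is abstracted here to a name and a parameter assignment.
record TestCase(String name, Map<String, String> parameters) {}

class ParameterVariation {
    // Produce one variant of the template per value of the varied parameter.
    static List<TestCase> vary(TestCase template, String parameter, List<String> values) {
        List<TestCase> variants = new ArrayList<>();
        for (String value : values) {
            Map<String, String> params = new HashMap<>(template.parameters());
            params.put(parameter, value);
            variants.add(new TestCase(template.name() + "[" + parameter + "=" + value + "]", params));
        }
        return variants;
    }

    public static void main(String[] args) {
        TestCase template = new TestCase("BasicCall", Map.of("calledNumber", "100"));
        // e.g. vary the called number to cover internal, external and invalid numbers
        for (TestCase tc : vary(template, "calledNumber", List.of("100", "0231555123", "999"))) {
            System.out.println(tc);
        }
    }
}
```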
2.3 Coordination
The whole test execution process must be supported, including:

– Initialization of the whole test scenario (SUT components, test tools)
– Execution of the test cases, i.e. instructing the test tools
– Analysis of the results of test runs
– Documentation of the test runs and their results

A minimal sketch of such a coordination loop is given below.
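The following strongly simplified Java sketch illustrates what instructing the test tools could look like: the coordinator initializes every registered tool, sends each test step to the responsible tool, and records the verdicts. The TestTool interface and all method names are hypothetical; the actual coordinator operates on graph-structured test cases rather than linear step lists.

```java
import java.util.List;
import java.util.Map;

// Hypothetical, strongly simplified view of a test tool as seen by the coordinator.
interface TestTool {
    void initialize();                 // bring the tool and its SUT part into a defined state
    boolean execute(String command);   // perform one stimulus or check; true = passed
}

class Coordinator {
    // Run a linear test case: each step names the responsible tool and a command.
    static boolean run(Map<String, TestTool> tools, List<String[]> steps) {
        tools.values().forEach(TestTool::initialize);   // 1. initialize the test scenario
        for (String[] step : steps) {                   // 2. execute the test case
            String tool = step[0], command = step[1];
            boolean passed = tools.get(tool).execute(command);
            System.out.println(tool + ": " + command + " -> " + (passed ? "ok" : "failed"));
            if (!passed) {
                return false;                           // 3. analysis: first failure decides
            }
        }
        return true;                                    // 4. documentation omitted here
    }
}
```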
2.4 Analysis
The analysis of the results of test runs for error detection and diagnosis is an essential feature of every test environment. It must be possible to analyse results on an abstract level, in order to hide unnecessary details. However, if required, an inspection of the details must be possible.
3 Test formalism
When providing such wide support for test management, an adequate formalism for the description of tests is mandatory. Since we are considering system-level tests, we are in fact performing black-box testing. Thus we abstract from the internals of the system and take only externally observable "inputs" and "outputs" of the system into account, cf. fig. 3.
Fig. 3. Black-box testing

In order to specify test cases at this abstract level, i.e. to allow flexible test modelling, we build test cases out of generic basic functionalities. Such a basic functionality can be either a single stimulus for the system (input) or a single check point which checks the status of the system (output). In the context of testing an application, these basic functionalities may be e.g. "pressing a specific button on the GUI" or the check "is a specific button enabled?".
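The distinction between stimuli and check points could be captured as follows. This is a minimal Java sketch; the interface and the two example functionalities are assumptions made for illustration only.

```java
// Minimal sketch: a basic functionality is either a stimulus (input to the SUT)
// or a check point (observation of the SUT). Names are illustrative only.
interface BasicFunctionality {
    /** Perform the stimulus or evaluate the check against the SUT. */
    boolean execute();
}

// A stimulus: "press a specific button on the GUI".
class PressButton implements BasicFunctionality {
    private final String buttonName;
    PressButton(String buttonName) { this.buttonName = buttonName; }

    @Override
    public boolean execute() {
        System.out.println("pressing button: " + buttonName);
        return true;   // a stimulus normally succeeds unless it cannot be applied
    }
}

// A check point: "is a specific button enabled?".
class CheckButtonEnabled implements BasicFunctionality {
    private final String buttonName;
    CheckButtonEnabled(String buttonName) { this.buttonName = buttonName; }

    @Override
    public boolean execute() {
        return queryGui(buttonName);   // the verdict of the check point
    }

    private boolean queryGui(String buttonName) {
        return true;   // placeholder: in reality answered by the GUI test tool
    }
}
```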
" !$#&%('*)(+-,/.10325476 89;:7=@?BADC;E@F
ONP HIGJ QRS KML TU VWX Y[\Z
Fig. 4. Test coordinator as instance of the Agent Building Center
4 Realisation
The heart of the test management environment is the Test Coordinator, built on top of METAFrame's Agent Building Center (ABC) [1,2], a generic and flexible workflow management system. In this application, we view system-level test cases as executable workflows within the integrated test environment (which plays the role of an extended runtime environment). Building on the ABC's capabilities, test cases can be graphically designed from palettes of subsystem-specific, generic test components or basic functionalities. Test cases are combinations of these basic functionalities, which are connected through edges to describe the control flow, see fig. 5 and the sketch below. The main portion of the test coordinator is the ABC itself, cf. fig. 4. Basically, it offers the functionalities necessary to cover the organizational and the design-related aspects. Some extensions are needed before the ABC can be used as a test coordinator:

Testing-specific extensions. Among others, the communication with specific test tools must be integrated into the ABC.

Basic functionalities. Stimuli and check points which are necessary to test a SUT's functionality must be provided as basic functionalities. They are implemented using the specific commands provided by the respective test tool.

While the basic functionalities are developed by experts of the particular system, the test cases can be designed graphically at an intuitive level by testers. The test coordinator uses the capabilities of the ABC, e.g. for the design and analysis of the test cases.
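As a rough illustration of this view of test cases, the following Java sketch represents a test case as a graph of basic functionalities connected by control-flow edges labelled with the outcome they follow. The representation is an assumption for illustration; the ABC's actual internal model is not described at this level of detail in the paper.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a test case as a control-flow graph over basic functionalities.
// Each edge is labelled with the outcome ("ok"/"failed") it follows.
class TestCaseGraph {
    static class Node {
        final String functionality;              // e.g. "Phone.hookOff"
        final List<Edge> outgoing = new ArrayList<>();
        Node(String functionality) { this.functionality = functionality; }
    }

    static class Edge {
        final String label;                      // branching condition, e.g. "ok"
        final Node target;
        Edge(String label, Node target) { this.label = label; this.target = target; }
    }

    public static void main(String[] args) {
        Node hookOff = new Node("Phone.hookOff");
        Node check = new Node("Phone.checkConnectState");
        Node report = new Node("reportError");
        hookOff.outgoing.add(new Edge("ok", check));     // continue if stimulus applied
        check.outgoing.add(new Edge("failed", report));  // branch on a failed check
    }
}
```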
Figure 5 shows how test cases can be designed within the ABC. On the left-hand side is the editor, which allows the design of test cases. On the other side is a browser for the available basic functionalities, which are structured into classes (each class is represented by its own icon, so that they can be distinguished easily in the graphical notation). In this example there are three classes of basic functionalities: GuiCommon, Phone and SimplyPhone. The class GuiCommon provides generic functionalities for the usage of a graphical user interface, e.g. checkWindow checks whether a specific window is visible on the desktop or not. The class Phone provides input actions (e.g. hookOff) and output actions (e.g. checkConnectState) which allow the tester to control a "real" phone, whereas the class SimplyPhone can be used to test a specific application called SimplyPhone, a CTI application that simulates a phone.

The analysis methods of the ABC are based on formal verification methods (model checking). In particular,

– libraries of constraints capture the essence of the test designers' expertise about do's and don'ts of test definition. Automatically accessed by the model checker during verification, they express vital properties concerning the interplay between the components of a test as global correctness and consistency conditions of the test logic;
– automatic verification of test-dependent frame conditions (from the constraint library) allows designers to verify that the test is consistent before executing it;
– automatic error location delivers the exact portions of the test where violations of frame conditions occur. This eliminates the manual search for the erroneous segments in the test. It can be used either at design time (see above) or at runtime, when analysing erroneous test runs.
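To give a flavour of such a frame condition, the sketch below checks one hypothetical constraint over a test graph: a check point must not be reachable from the start before its enabling stimulus has occurred. It is a plain reachability check, far simpler than the ABC's actual model checker, and all names are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of one frame-condition check: the check point must not be reachable
// from the start node on any path that avoids the required stimulus.
class FrameConditionCheck {
    // successors: node -> list of successor nodes (edge labels omitted for brevity)
    static boolean violates(Map<String, List<String>> successors,
                            String start, String stimulus, String checkPoint) {
        Set<String> visited = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(List.of(start));
        while (!work.isEmpty()) {
            String node = work.pop();
            if (!visited.add(node) || node.equals(stimulus)) {
                continue;    // paths through the stimulus are fine; do not follow them
            }
            if (node.equals(checkPoint)) {
                return true; // check point reachable without the stimulus: violation
            }
            work.addAll(successors.getOrDefault(node, List.of()));
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = Map.of(
            "start", List.of("checkConnectState"),   // test forgot the hookOff stimulus
            "checkConnectState", List.of());
        System.out.println(violates(g, "start", "hookOff", "checkConnectState")); // true
    }
}
```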
5 Integration of test tools
To communicate with different test tools, the test environment offers a general CORBA interface: ToolAccess, cf. fig. 6. This interface comprises basic methods (e.g. getName, getVersion) which all test tools have to support. Special features (e.g. input and output commands for testing a special SUT component) can be added by extending the ToolAccess interface. From the basic functionality execution environment, the special methods of the test tool are accessible to the tester via a test-tool-specific adapter. The extension of the ABC environment through the integration of adapters is the key to our approach.
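The paper names only getName and getVersion as ToolAccess methods; the Java sketch below mirrors that shape as a plain interface, together with a phone-specific extension whose methods are our own assumptions.

```java
// Java rendering of the ToolAccess idea. Only getName and getVersion are
// named in the paper; the extension and its commands are illustrative.
interface ToolAccess {
    String getName();      // identify the test tool
    String getVersion();   // document the tool version for configuration management
}

// A derived interface adds the special input/output commands of one SUT
// component, here a phone (hypothetical method names).
interface PhoneToolAccess extends ToolAccess {
    void hookOff(String extension);                 // stimulus
    boolean checkConnectState(String extension);    // check point
}
```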
Fig. 5. Example of a test case
The integration process consists of two main activities:

1. integration of the interface provided by the test tool into the test coordinator, and
2. implementation of the interface functionality by the test tool.

When the ToolAccess interface is not already implemented in the test tool by its vendor, there are different ways to implement it:

Plug-in approach. If the test tool supports customisation via loading plug-ins or libraries, the ToolAccess interface can be integrated by implementing such a plug-in, including the CORBA object request broker.

Separated server process. If a test tool offers remote access via an interface of its own (e.g. COM, DCOM or CORBA), a separate server process can communicate via this interface with the test tool, cf. fig. 6. This special server then implements the interface or a derivation of it and communicates with the test coordinator, i.e. it can be seen as a relay between the test coordinator and the test tool; a minimal sketch of such an adapter follows.
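A hedged sketch of the separated-server idea: an adapter class implements the coordinator-facing interface and relays every call to the tool's own remote API. The PhoneTool interface it delegates to, and its command strings, are invented for illustration.

```java
// Coordinator-facing interfaces (as sketched in Sect. 5, repeated for completeness):
interface ToolAccess { String getName(); String getVersion(); }
interface PhoneToolAccess extends ToolAccess {
    void hookOff(String extension);
    boolean checkConnectState(String extension);
}

// Stand-in for the test tool's own COM/DCOM/CORBA API (invented).
interface PhoneTool {
    void sendCommand(String command);
    String query(String command);
}

// The relay: translates coordinator calls into tool-specific commands.
class PhoneToolAdapter implements PhoneToolAccess {
    private final PhoneTool tool;
    PhoneToolAdapter(PhoneTool tool) { this.tool = tool; }

    @Override public String getName()    { return "PhoneToolAdapter"; }
    @Override public String getVersion() { return tool.query("VERSION"); }

    @Override
    public void hookOff(String extension) {
        tool.sendCommand("HOOK_OFF " + extension);          // translate the stimulus
    }

    @Override
    public boolean checkConnectState(String extension) {
        return "CONNECTED".equals(tool.query("STATE " + extension)); // translate the check
    }
}
```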
Fig. 6. Integration of test tools
6 Conclusion
To test composite systems in an integrated way, we have added a test management layer responsible for the generation, execution and evaluation of system-level tests in a highly heterogeneous landscape. The management layer introduces the required flexibility into the overall architecture of the test environment: it is a modular and open environment, to which tools and units under test can be added as needed. By means of a CORBA/RMI-based implementation of the communication layer, we are able to address and encapsulate a wide range of commercial test tools; this increases over time the reach and the capabilities of the resulting environment. Besides the test tool coordination described in this paper, we will in the near future support all other important aspects of the test process in a more comprehensive way:

1. Organization of test relevant data
2. Design of test cases and building of test suites
3. Analysis of test execution results

Together with Siemens ICN, an integrated test environment has been developed as a proof of concept, testing the interaction between a medium-range telephone switching system and a variety of computer-telephony applications.
Acknowledgements. We would like to thank the whole ITE team, especially Andreas Hagerer, for helpful comments and criticism on draft copies.
References

1. B. Steffen, T. Margaria: METAFrame in Practice: Intelligent Network Service Design. In: E.-R. Olderog, B. Steffen (eds.): Correct System Design – Issues, Methods and Perspectives, LNCS 1710, Springer Verlag, 1999, pp. 390-415.
2. B. Steffen, T. Margaria: Coarse-grain Component Based Software Development: The METAFrame Approach. Invited contribution, 3. Fachkongress "Smalltalk und Java in Industrie und Ausbildung" (STJA'97), September 10-11, 1997, Erfurt.
3. Object Management Group: The Common Object Request Broker: Architecture and Specification, Revision 2.3, 1999.
4. Sun Microsystems: Java(TM) Remote Method Invocation, http://java.sun.com/products/jdk/rmi.