Validation of Functional (In)Correctness for Large-scale Component-based Systems using Model-driven Engineering

James H. Hill and Aniruddha Gokhale
Vanderbilt University, Nashville, TN, USA
{hillj, gokhale}@dre.vanderbilt.edu
http://www.dre.vanderbilt.edu

Abstract. Validating the functional (in)correctness of large-scale component-based distributed systems continuously grows in importance. The underlying “business logic” inside the individual components that make up such a system is usually tested thoroughly. Once that logic is encapsulated within components, however, testing typically does not resume until system integration. This poster describes an approach, based on model-driven engineering and generative programming techniques, that pervades the entire system development lifecycle to validate the functional (in)correctness of the system. Our approach enables validation both of individual components and of assemblies of components deployed in the target environment.

1  Introduction

Methods for developing large-scale component-based distributed systems are transitioning from manual development, in which developers handcraft the vast majority of system files such as IDL files and deployment and configuration descriptors, to model-driven engineering (MDE) [7]. One of the main benefits of MDE is its ability to raise the level of abstraction so that developers interact with artifacts closer to their domain. Moreover, MDE techniques provide tools that help alleviate tasks that are hard and error-prone when done manually, such as handcrafting IDL files, deployment and configuration descriptors consisting of thousands of lines, or input files for model checking tools. Although MDE is raising the level of abstraction in many areas of large-scale component-based system development, one area lacking such advances is verification and validation (V&V) [3]. In many cases, developers test the “business logic” of the system (i.e., individual classes from the underlying framework) [6], but do not test that logic once it is encapsulated inside a component [5]. Encapsulating business logic inside a component also requires validating functional correctness at the component boundary (e.g., correct translation of input/output values between the component architecture and the business logic, or correct generation of exceptions at the component level). Instead, developers wait until system integration time to perform such testing on the system as a whole rather than on individual isolated components, which can be detrimental to the development process and to time-to-market pressures [1].
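As a small illustration of the generative flavor of MDE, the sketch below renders a CORBA-style IDL interface from an in-memory component model, instead of having a developer handcraft the file. The model schema and the `TrackProcessor` example are hypothetical illustrations, not the actual CUTS toolchain format.

```python
# Toy sketch of MDE-style generation: emit an IDL interface from a
# small in-memory "model" rather than handcrafting the file.
# The model dictionary schema below is a hypothetical illustration.

def generate_idl(component: dict) -> str:
    """Render a minimal IDL interface from a component model."""
    lines = [f"interface {component['name']} {{"]
    for op in component["operations"]:
        params = ", ".join(f"in {t} {n}" for t, n in op["params"])
        lines.append(f"  {op['returns']} {op['name']}({params});")
    lines.append("};")
    return "\n".join(lines)

# Hypothetical component model: one operation with two input parameters.
model = {
    "name": "TrackProcessor",
    "operations": [
        {"name": "process", "returns": "long",
         "params": [("string", "track_id"), ("double", "bearing")]},
    ],
}

print(generate_idl(model))
```

The point is not the ten-line generator but the division of labor: the developer edits the model, and thousand-line descriptors are regenerated consistently on every change.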

Proposed Solution. In earlier work, we developed a system execution modeling tool suite called the Component Workload Emulator (CoWorkEr) Utilization Test Suite (CUTS) [1]. CUTS uses MDE techniques to provide continuous system integration throughout the entire development lifecycle, along with real-time data collection for capturing QoS metrics and presenting them graphically. This poster illustrates the V&V capabilities, primarily the validation capabilities, that we added to CUTS to support the synthesis of testing components. These capabilities allow developers to empirically test “real” components in isolation, or within a partially or completely assembled and deployed system. Moreover, true V&V involves formal analysis followed by empirical analysis. To support the formal aspects of V&V, our enhancements to CUTS enable seamless integration with backend model checking tools by automating the generation of the necessary configuration files. Our additions allow developers to test the system more thoroughly while using existing features of CUTS to provide continuous system integration and QoS provisioning throughout the entire development lifecycle. This poster illustrates our approach to validating the functional (in)correctness of large-scale component-based systems, described in Section 2. It will also be accompanied by a demo, described in Section 3. Lastly, it highlights future work, described in Section 4.

2  Testing for Functional (In)Correctness

This section describes our approach to overcoming key challenges in testing the functional (in)correctness of large-scale, component-based distributed systems. Our poster outlines the challenges and the approach we have taken to overcome them.

Validation of Component Behavior. Although components may be developed independently by third-party vendors, CUTS allows system engineers to model the expected behavior of “real” components using a DSML named the Component Behavior Model Language (CBML) [2]. The behavior model is then synthesized into timed input/output automata [4] source files for validation (e.g., correctness testing, reachability analysis, and simulation) by external tools such as the Tempo Toolkit (www.veromodo.com) and UPPAAL (www.uppaal.com).

Generation of Test Drivers for Instrumentation. CUTS provides a secondary modeling language named the Workload Modeling Language (WML) [2], which complements CBML. WML allows developers to parameterize generic actions in CBML with realistic workloads to create realistic behavior. The enhanced model is then used to generate, or developers can manually create, test driver models for individual components. The newly constructed models are then used to synthesize source code that tests components in their target environment by subjecting them to valid and invalid inputs while monitoring their outputs.
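To make the test-driver idea concrete, here is a minimal sketch of what a synthesized driver does: subject a component to valid and invalid inputs while checking both return values and exceptions at the component boundary. The `TrackFilter` component and its input ranges are hypothetical stand-ins; CUTS generates analogous drivers from the CBML/WML models.

```python
# Hedged sketch of a synthesized test driver. The component and its
# valid/invalid input sets are illustrative assumptions, not CUTS output.

class TrackFilter:
    """Stand-in component wrapping some 'business logic'."""
    def filter(self, bearing: float) -> float:
        if not (0.0 <= bearing < 360.0):
            raise ValueError("bearing out of range")
        return bearing % 180.0  # fold the bearing onto a half-plane

def run_driver(component) -> dict:
    results = {"passed": 0, "failed": 0}
    # Valid inputs: the output must stay within the documented range.
    for bearing in (0.0, 90.0, 359.9):
        out = component.filter(bearing)
        ok = 0.0 <= out < 180.0
        results["passed" if ok else "failed"] += 1
    # Invalid inputs: the component must raise, not silently compute.
    for bearing in (-1.0, 360.0):
        try:
            component.filter(bearing)
            results["failed"] += 1   # a missing exception is a fault
        except ValueError:
            results["passed"] += 1
    return results

print(run_driver(TrackFilter()))   # → {'passed': 5, 'failed': 0}
```

This captures the two failure modes mentioned above: incorrect translation of input/output values, and missing exception generation at the component level.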

3  Demonstration

In this section, we describe the demo that will accompany the poster. The demo will cover the V&V capabilities of CUTS described in this document. We will show a representative system from the domain of shipboard computing environments. Our demo will illustrate the following key capabilities of CUTS:

1. Capture the behavior and characteristics of system applications using CBML and WML to test for functional correctness.
2. Synthesize application components and test drivers to test application components for functional (in)correctness.
3. Synthesize configuration files for model checking tools such as the Tempo Toolkit and UPPAAL.
4. Demonstrate the validation of a component and of a small system using a backend model checking tool, as well as via the CUTS empirical benchmarking system.

Moreover, the demo will illustrate how CUTS can help system developers and engineers understand how a component’s performance differs based on its deployment, i.e., where it is located with respect to other components in the system.
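To give a flavor of the configuration-file synthesis for backend model checkers, the sketch below serializes a tiny two-state behavior model into an UPPAAL-style XML system description. The element names (`nta`, `template`, `location`, `transition`) follow UPPAAL's input format, but the generator and the `TrackProcessor` model are illustrative assumptions, not the actual CUTS synthesis code.

```python
# Toy generator in the spirit of the CUTS backend integration: serialize
# a small behavior model into an UPPAAL-style XML description.
# Simplified sketch only; real UPPAAL files carry more structure
# (declarations, process instantiations, guards, clocks).
import xml.etree.ElementTree as ET

def to_uppaal(name, states, edges, init):
    nta = ET.Element("nta")
    tmpl = ET.SubElement(nta, "template")
    ET.SubElement(tmpl, "name").text = name
    for s in states:
        ET.SubElement(tmpl, "location", id=s)
    ET.SubElement(tmpl, "init", ref=init)
    for src, dst in edges:
        t = ET.SubElement(tmpl, "transition")
        ET.SubElement(t, "source", ref=src)
        ET.SubElement(t, "target", ref=dst)
    ET.SubElement(nta, "system").text = f"system {name};"
    return ET.tostring(nta, encoding="unicode")

doc = to_uppaal("TrackProcessor",
                states=["idle", "busy"],
                edges=[("idle", "busy"), ("busy", "idle")],
                init="idle")
print(doc)
```

Automating this step is what lets developers run reachability and correctness checks without ever hand-editing a model checker's input format.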

4  Future Work

CUTS allows developers to model the behavior of components still under development; however, we currently generate test driver components based only on a component’s exposed interfaces (i.e., a form of black-box testing). Our future work therefore involves extending CUTS to traverse a component’s behavior model to generate test drivers that exercise the component’s complete functionality (i.e., a form of white-box testing). We are also aware that the ability to exercise a component depends directly on the developer’s ability to accurately model the component’s behavior.
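The white-box extension can be sketched as a traversal of the behavior model. In the sketch below the model is a plain adjacency map (an assumption standing in for CBML), and each enumerated acyclic path from the start state would drive one generated test case.

```python
# Hedged sketch of the proposed white-box extension: enumerate every
# acyclic execution path through a behavior model so that one test
# driver can be generated per path. The adjacency-map representation
# is an assumption, not the actual CBML metamodel.

def all_paths(graph, start):
    paths = []
    def dfs(node, path):
        succs = graph.get(node, [])
        if not succs:                # terminal state: record the path
            paths.append(path)
            return
        for nxt in succs:
            if nxt in path:          # cut cycles in this simple sketch
                paths.append(path)
                continue
            dfs(nxt, path + [nxt])
    dfs(start, [start])
    return paths

# Hypothetical behavior model of a component's receive/validate cycle.
behavior = {"recv": ["validate"],
            "validate": ["process", "reject"],
            "process": ["send"],
            "reject": []}
print(all_paths(behavior, "recv"))
# → [['recv', 'validate', 'process', 'send'], ['recv', 'validate', 'reject']]
```

Black-box generation sees only the `recv` interface; path enumeration additionally forces the `reject` branch, which interface-level drivers may never reach.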

References

1. James H. Hill, Douglas C. Schmidt, and John Slaby. System Execution Modeling Tools for Rapid Evaluation of Enterprise Distributed Real-time and Embedded System Quality of Service. In Pierre F. Tiako, editor, Designing Software-Intensive Systems: Methods and Principles. 2007.
2. James H. Hill, Sumant Tambe, and Aniruddha Gokhale. Model-driven Engineering for Development-time QoS Validation of Component-based Software Systems. In Proceedings of the 14th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS), Tucson, AZ, March 2007.
3. IEEE Computer Society. IEEE Standard for Software Verification and Validation, revision of 1012-1998 edition, June 2005.
4. Dilsun K. Kaynar, Nancy Lynch, Roberto Segala, and Frits Vaandrager. The Theory of Timed I/O Automata. Synthesis Lectures in Computer Science. Morgan and Claypool Publishers, San Rafael, CA, April 2006.
5. Zhongjie Li, Wei Sun, Zhong Bo Jiang, and Xin Zhang. BPEL4WS Unit Testing: Framework and Implementation. In ICWS ’05: Proceedings of the IEEE International Conference on Web Services, pages 103–110, Orlando, FL, 2005. IEEE Computer Society.
6. Vincent Massol and Ted Husted. JUnit in Action. Manning Publications Co., Greenwich, CT, USA, 2003.
7. Douglas C. Schmidt. Model-Driven Engineering. IEEE Computer, 39(2):25–31, 2006.
