Software Component Test Cases: The Need for a Conceptual Taxonomy

Allen Parrish
David Cordes
The University of Alabama
Department of Computer Science
Tuscaloosa, AL 35487
Tel: (205) 348-3749
Fax: (205) 348-0219
Email: [email protected]

David Hale
Joanne Hale
Shane Sharpe
The University of Alabama
Department of Management Science and Statistics
Tuscaloosa, AL 35487

Abstract

Component-based software engineering is becoming an increasingly popular endeavor, in part because of the perceived gains in terms of software reuse. However, we feel that for component engineering to be successful in fully exploiting reuse opportunities, components must be independent of the processes used to derive them. That is, there should be standard canonical forms for software artifacts that do not carry with them the baggage associated with a particular development methodology or process. This is the type of effort associated with the rapidly emerging discipline of software architecture, and would make the notion of "software component" much more like the analogous "hardware component" concept from hardware engineering. In this paper, we outline some initial modest ideas with regard to capturing a process-independent notion of a component "test case." Ultimately, the idea behind this work is to produce a complete conceptual framework for understanding test cases as objects. Such a framework would support the specification and reuse of test cases as part of a standardized component architecture.

Keywords: Software testing, conceptual models, test cases, test reuse
Working Groups: Component certification


1 Introduction

One of the most important aspects of the field of software architecture is its impact on software reuse [4, 11]. Making software architecture an explicit object of study is part of an evolution toward standard models of software components. The use of standard models of software components can support reuse, in that such components can be viewed without the baggage associated with either a particular design methodology or implementation language. Of course, certain design methodologies do lead to certain types of components, but many types of components may be identified that are common to many methodologies. If components are to be procured and reused in the context of different design methodologies, then the conceptual model of a component should be independent of the notation, terminology, etc. associated with a specific design methodology. A number of recent papers in the area of software architecture support this and other worthwhile goals [1, 5, 9, 11, 12].

In this paper, we would like to extend this idea of "methodology independence" to the notion of a test case. If testing is to be used as the approach to validating components, then optimally, one would like to associate test cases with specific components. Indeed, test cases that have been identified as part of a component's validation suite should be somehow linked to the component for its entire lifetime for regression testing purposes. Thus, someone wanting to reuse a particular component could obtain the test cases along with (or as part of) the component. To support such a paradigm, a conceptual model is needed that promotes test cases as objects capable of being reused. Note that classical software testing frameworks offer little support for this idea. Classical software "test plans" tend to define the type of testing to be done in terms of process (e.g., "all branches in the program must be executed") or in non-standard, semi-formal or informal descriptions. Our efforts in this paper are centered around providing formal templates that characterize test case structure in order to promote the conceptualization of test cases as objects.

Although this paper is not intended as a research paper, we wish to provide some insight into this line of research. Accordingly, in the rest of this paper, we present the beginnings of such a conceptual model for test cases. We identify two types of test cases for a restricted class of components (i.e., object-oriented components). Although this work is in its infancy, we feel that it is ripe for extension, and invite others to extend the model by identifying other types of test cases for a larger, less restricted class of components.

2 Basic Definitions

We assume a restricted model of components, where a component is simply an object-oriented class. The component may or may not have some type of functional specification attached; the presence or absence of a specification makes no difference in our formalization of test cases. We assume the classical definitions of classes and objects from the object-oriented literature.

Regardless of whether a functional specification is present, we assume that all classes have a syntactic part called the definition. The definition contains two parts: an interface consisting of a list of operations that can be performed by instances of the class, and a body consisting of the implementation of the operations and the data attributes for an instance. The implementation of an operation is sometimes called a method, and invoking an operation with respect to a given object is sometimes referred to as sending a message to the object, which responds to the message by executing a specific method. Additionally, every object has state; this state may be characterized either by its history of method invocations or by the current values of its attributes.

A method may have input parameters and output parameters. An input parameter is a parameter whose value is transmitted into the method. An output parameter is a parameter whose value is returned by the method. Input parameters may include both value and reference parameters (or IN and IN OUT parameters in Ada), as well as the "implicit" object parameter found in languages such as C++. Output parameters may include reference parameters (or IN OUT and OUT parameters in Ada), the implicit object parameter in C++-like languages (which may be changed by the method), or function return values. Some parameters may be both input and output parameters (e.g., IN OUT parameters in Ada).

Our formalization of test cases is based on how different types of methods are ordered during execution. To characterize the various types of methods, we use the classification scheme developed in [7] for data abstractions in general. In particular, a method may be either a constructor, a modifier, or an observer. A constructor method returns a new object of the class from scratch (i.e., without that object as an input parameter). A modifier method modifies an existing class instance (i.e., it changes the state of one or more of the data attributes in the instance). An observer method inspects the object and returns a value characterizing one or more of its state attributes. In this classification system, it is assumed that observer methods do not cause side effects on the object state; thus, any methods producing side effects are either constructors or modifiers.

Also, to support our discussion, we refer periodically to the class under test (the CUT), as well as to one or more objects under test (referred to as OUTs). Testing involves invoking constructor, modifier, and observer methods on one or more OUTs. Note that such OUTs serve as output parameters for CUT constructors, input parameters for CUT observers, and input/output parameters for CUT modifiers.
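To make the classification concrete, the following C++ sketch shows a stack class whose methods fall into each category. The class is our own illustration: the operations Add, Remove, and Depth anticipate the stack examples of Section 3, while Top and the vector-based representation are assumptions made only for this sketch.

    // Hypothetical stack class illustrating the constructor/modifier/observer
    // classification of [7]. The default constructor plays the role of the
    // "New" constructor used in the examples of Section 3.
    #include <vector>

    class Stack {
    public:
        Stack() {}                                  // constructor: builds an OUT from scratch

        void Add(int x) { items.push_back(x); }    // modifier: changes the OUT's state
        void Remove()   { items.pop_back(); }      // modifier

        int Depth() const { return (int)items.size(); }  // observer: no side effects
        int Top()   const { return items.back(); }       // observer (assumed operation)

    private:
        std::vector<int> items;                     // data attributes comprising the state
    };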

3 Component Test Cases

We have identified two distinct categories of test cases:

1. State inspection test cases;
2. State comparison test cases.

State inspection test cases involve executing sequences of methods that yield "results," and then physically examining those results. State comparison test cases involve executing two sequences of methods and comparing their results to determine whether they satisfy some predetermined relationship. We consider these models in the subsections below.

3.1 State Inspection Testing

We first consider state inspection testing. Test cases of this form involve generating a "result" (by sending a sequence of messages to an OUT) and then examining this "result" for correctness. We have the following model for state inspection testing:

1. Execute some constructor method to produce an OUT in some initial state.

2. Execute a sequence of (0 or more) modifier methods that modify the OUT in some way. Each modifier begins with the OUT in the state that resulted from the previous method invocation. The modifier method induces a state change in the OUT.

3. Using as input the OUT generated by steps (1) and (2), repeatedly apply observer methods to this OUT. Each observer method returns a result characterizing some state attribute of the OUT.

4. The results returned by each of the observer methods are inspected. We assume that all observer methods return operationally observable objects. Operationally observable objects either:


   (a) Contain methods for display on standard output devices, or
   (b) Contain observer methods that return operationally observable objects.

Thus, we are executing a constructor, followed possibly by modifier(s), and terminating with observer(s). Let C be any constructor, M be any modifier, and O be any observer. Then, state inspection test cases may be characterized by the regular expression CM*O+. We claim that although there may be a number of different ways to implement state-inspection-based test cases, all such characterizations may be reduced to this form. A "proof" of this is found in [10]. Effectively, therefore, CM*O+ constitutes a template for defining the "fundamental form" of state-inspection-based test cases.
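As an illustration, a state inspection test case in this form for the stack sketch of Section 2 might be written as follows. This is a minimal sketch; the use of assert to inspect the observer results is merely one possible realization of step (4).

    // A state inspection test case of the form CM*O+ for the Stack sketch
    // of Section 2. Any mechanism for examining observer results would do;
    // assert is used here for brevity.
    #include <cassert>

    void StateInspectionTest() {
        Stack s;                 // C:  constructor produces the OUT in its initial state
        s.Add(1);                // M:  each modifier induces a state change in the OUT
        s.Add(2);                // M
        assert(s.Depth() == 2);  // O+: results returned by observers are inspected
        assert(s.Top() == 2);    // O+
    }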

3.2 State Comparison Testing

We now consider state comparison testing. State comparison testing involves generating two object states and comparing them to determine whether or not they satisfy some predetermined relationship. For example, sending an Add message followed by a Remove message to a stack object s should result in an object equivalent to s. To conduct this state comparison test, one would generate s, send Add followed by Remove to s, and compare the final result with the original stack s. A second example might involve sending Depth to some stack s (yielding an integer object), and then verifying that the integer returned is one less than the integer object returned by Depth after sending Add to s. This type of testing is commonly done when using algebraic specifications [2, 3], or when using class invariants [6, 8].

Thus, in this model, two objects must be generated and compared. These objects may be either: (a) objects of the CUT, or (b) objects returned by an observer method of the CUT.

Case (a) corresponds to the case where testing a stack class might involve generating two stack objects and comparing them (e.g., verifying that New.Add.Add.Remove is equal to New.Add). In such a case, it is necessary to generate two objects of the CUT by executing a constructor followed by a sequence of modifiers (where each modifier takes the previous CUT object state as input and produces a new CUT object for consumption by the next modifier in the sequence). The resulting CUT objects can then be compared using a CUT comparison operator (e.g., an Equal method for the stack class).

Case (b) corresponds to the case where testing a stack class might involve comparing the Depths of two stacks and determining that the appropriate relationship is satisfied (e.g., verifying that New.Add.Depth is one less than New.Add.Add.Depth). In such a case, it is necessary to generate two integer objects. Each object is generated by executing a (stack) constructor, followed by a sequence of (stack) modifiers as above, followed by an observer (Depth).

More formally, the state comparison model can be described using a triple (MS1, MS2, comp), where each MS is a method sequence that has a "result" that is compared using the comparison operator comp. The MS sequences may be of two forms, corresponding to cases (a) and (b) above:

(a) CM*, indicating that comp is comparing objects of the CUT, or
(b) CM*O, indicating that comp is comparing the objects returned by the observer O at the end of the sequence.

As with state inspection testing, the state comparison model is expressed in the most fundamental form possible. Formal arguments to this effect are found in [10].
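To illustrate, both comparison forms for the stack examples above might be sketched as follows. The Equal operation is the hypothetical CUT comparison operator mentioned in case (a); the depth-and-top comparison it performs here is an assumption made for the sketch, not a complete equality check.

    // Sketches of the two state comparison forms (MS1, MS2, comp), using the
    // Stack class from Section 2.

    // Hypothetical CUT comparison operator (an "Equal method" in the text);
    // for this sketch it compares only depth and top element.
    bool Equal(const Stack& a, const Stack& b) {
        if (a.Depth() != b.Depth()) return false;
        return a.Depth() == 0 || a.Top() == b.Top();
    }

    // Case (a): both sequences have the form CM*; comp compares CUT objects
    // (verifying that New.Add.Add.Remove equals New.Add).
    bool CaseA() {
        Stack s1; s1.Add(1); s1.Add(2); s1.Remove();   // MS1 = C M M M
        Stack s2; s2.Add(1);                           // MS2 = C M
        return Equal(s1, s2);                          // comp: Equal on the CUT
    }

    // Case (b): both sequences have the form CM*O; comp relates the objects
    // returned by the trailing observer (verifying that New.Add.Depth is one
    // less than New.Add.Add.Depth).
    bool CaseB() {
        Stack s1; s1.Add(1);                  // MS1 ends with observer Depth
        Stack s2; s2.Add(1); s2.Add(2);       // MS2 ends with observer Depth
        return s1.Depth() + 1 == s2.Depth();  // comp: integer relationship
    }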


4 Conclusion

The idea behind this work is to present a framework for thinking about test cases as "first-class objects," where each test case is capable of being independently reused. In this paper, we have developed conceptual models of two distinct types of component test cases. State inspection test cases are of the form CM*O+; state comparison test cases are of the form (MS1, MS2, comp), where the MS method sequences are either of the form CM* or CM*O. This characterization presents an abstract template for these two types of test cases. Such a template supports a clear conceptual framework, and may ultimately be used to standardize the structure of component test case documentation.

Just as many in the software architecture community have identified categories of components (e.g., clients, servers, filters, classes, and objects), we have identified categories of component test cases. We do not claim that the universe of test cases is restricted to the categories that we have identified, but we do feel that we are on the right track with this direction of thought. We invite others to extend this model, or to identify alternative models for addressing the problems identified here.

References

[1] Dean, T.R., and J.R. Cordy, "A Syntactic Theory of Software Architecture," IEEE Transactions on Software Engineering, vol. 21, no. 4, April 1995, pp. 302-313.

[2] Doong, R. and P. Frankl, "Case Studies on Testing Object-Oriented Programs," Proceedings of the Fourth Symposium on Software Testing, Analysis and Verification, October 1991, pp. 165-177.

[3] Gannon, J., P. McMullin, and R. Hamlet, "Data Abstraction, Implementation, Specification and Testing," ACM Transactions on Programming Languages and Systems, vol. 3, July 1981, pp. 211-223.

[4] Garlan, D. and D. Perry, "Introduction to the Special Issue on Software Architecture," IEEE Transactions on Software Engineering, vol. 21, no. 4, April 1995, pp. 269-274.

[5] Garlan, D., R. Allen, and J. Ockerbloom, "Architectural Mismatch or Why It's Hard to Build Systems out of Existing Parts," Proceedings of the 17th International Conference on Software Engineering, 1995, pp. 179-185.

[6] Horstmann, C., Mastering Object-Oriented Design in C++, Wiley, 1995.

[7] Liskov, B. and J. Guttag, Abstraction and Specification in Program Development, McGraw-Hill, New York, 1986.

[8] Liskov, B. and J. Wing, "Specifications and Their Use in Defining Subtypes," Proceedings of OOPSLA '93, pp. 16-28.

[9] Moriconi, M., X. Qian, and R. Riemenschneider, "Correct Architecture Refinement," IEEE Transactions on Software Engineering, vol. 21, no. 4, April 1995, pp. 356-372.

[10] Parrish, A., D. Cordes, and J. McGregor, "Class Development and Testing Models: A Contribution to Object-Oriented Pedagogy," Department of Computer Science Technical Report, The University of Alabama, February 1996.

[11] Shaw, M., R. DeLine, D. Klein, T. Ross, D. Young, and G. Zelesnik, "Abstractions for Software Architecture and Tools to Support Them," IEEE Transactions on Software Engineering, vol. 21, no. 4, April 1995, pp. 314-335.

[12] Soni, D., R.L. Nord, and C. Hofmeister, "Software Architecture in Industrial Applications," Proceedings of the 17th International Conference on Software Engineering, 1995, pp. 196-207.


Biography

Allen Parrish is an Associate Professor in the Department of Computer Science at The University of Alabama. His research interests are in software engineering, particularly in software testing, object-oriented systems, and software reuse. Dr. Parrish is also quite involved with computer science and software engineering education, and has received funding from the National Science Foundation, the Advanced Research Projects Agency, and the Ada Joint Program Office to investigate new techniques in this regard.

David Cordes is also an Associate Professor in the Department of Computer Science at The University of Alabama, with research interests in software engineering. His particular interests are in requirements analysis, specification, and testing of object-oriented systems. Additionally, Dr. Cordes is actively involved with the Foundation Coalition (FC), a seven-school partnership funded by the National Science Foundation to focus on the re-tooling of undergraduate engineering education. He is a member of the National Management Team for the FC, and is its (nationwide) Strategy Director for Dissemination.

David Hale is Director of the Enterprise Integration Laboratory in the College of Commerce at The University of Alabama. His research interests are in enterprise integration and modeling, collaborative human-computer problem-solving systems, database management system design, accounting information systems, and software maintenance. Dr. Hale's research has appeared in a number of research journals (including Management Information Systems Quarterly, Journal of Management Information Systems, Information and Management, IEEE Systems, Man, and Cybernetics, and Journal of Software Maintenance), and has been funded by Texas Instruments, EDS, DEC, Data General, SEMATECH, and Mobil.

Joanne Hale is an Assistant Professor in the Management Information Systems program at The University of Alabama. Her research interests include decision support systems for crisis management, information system quality assurance, human-computer collaboration, strategic uses of information technology, and software reuse.

Shane Sharpe is an Assistant Professor in the Management Information Systems program at The University of Alabama. His interests include enterprise modeling, workflow systems, requirements specification in software engineering, and software reuse and maintenance. His research has appeared in a number of publications including the International Journal of Human-Computer Studies, the Journal of Computer Information Systems, the Journal of Systems Management, and the Journal of Software Maintenance. Dr. Sharpe has work experience in the healthcare, pharmaceutical manufacturing and distribution, banking, and public utilities industries.
