Test Templates: A Specification-based Testing Framework

P. A. Stocks
D. A. Carrington
August 26, 1993

Abstract
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications; in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.
Contents

1 INTRODUCTION
  1.1 Specification-based testing
  1.2 Scope of the framework
  1.3 Related work
  1.4 This document

2 TEST TEMPLATE FRAMEWORK
  2.1 Test template overview
  2.2 Constructing a test hierarchy
  2.3 Model of test templates and hierarchies
  2.4 Valid input space
  2.5 Domain partitioning
    2.5.1 Notes on the choice of domains
    2.5.2 Continuing subdivision
  2.6 Instantiation and instance templates
  2.7 Common properties of templates

3 EXAMPLE: Simple file read operation
  3.1 Z Specification
  3.2 Input spaces and the hierarchy
  3.3 Equivalence partitioning and boundary analysis
    3.3.1 Equivalence partitioning
    3.3.2 Boundary analysis selection
  3.4 Domain testing
    3.4.1 Domain testing partitioning
    3.4.2 Domain test point selection
  3.5 Removing redundant templates
  3.6 Test data: Instantiating templates

4 DISCUSSION
  4.1 Analysis of test set
  4.2 Analysis of heuristics
  4.3 Application areas
    4.3.1 More testing heuristics
    4.3.2 Refinement
    4.3.3 Maintenance
    4.3.4 Oracles
    4.3.5 Robustness tests
    4.3.6 Software design: Components
  4.4 Future work
  4.5 Conclusion

A Notes on the schema model of templates
  A.1 Schemas vs sets
  A.2 Sets of templates: Z peculiarities
    A.2.1 Syntax of singleton sets of templates
    A.2.2 The difference between schema types and schemas
1 INTRODUCTION
1.1 Specification-based testing
Specification-based Testing (SBT) offers many advantages in software testing. The (formal) specification of a software product can be used as a guide for designing functional tests for the product. The specification precisely defines fundamental aspects of the software, while more detailed and structural information is omitted. Thus, the tester has the important information about the product's functionality without having to extract it from unnecessary detail. SBT from formal specifications offers a simpler, structured, and more formal approach to the development of functional tests than standard testing techniques. The strong relationship between specification and tests facilitates error pin-pointing and can simplify regression testing. An important application of specifications in testing is providing test oracles. The specification is an authoritative description of system behaviour and can be used to derive expected results for test data. Other benefits of SBT include developing tests concurrently with design and implementation, using the derived tests to validate the original specification, and simplified auditing of the testing process. Specifications can also be analysed with respect to their testability (e.g., [Fre91]).

With so many advantages to be gained, it is surprising to note the limited research into SBT. The idea is not new, but has only recently received renewed attention. One explanation is that usage and stability of formal methods has increased considerably over the past decade, making SBT more effective.

This discussion uses the following terminology. A functional unit is a distinct piece of functionality the system is meant to provide. Usually this is expressed by a single operation, but it may involve (parts of) multiple operations. Testing involves more than just the test data, e.g., oracle information to validate the output of the test, functional units to be tested, constraints on test data, purpose of test data, test plans, dependency information, etc. The general term test information is used to
denote any information used in the testing process. A test case is the combination of test data and oracle.
1.2 Scope of the framework
This paper describes the Test Template Framework (TTF), which is a structured strategy and a formal framework for SBT. The TTF deals with (functional) unit testing, particularly deriving and defining tests for operations defined in the specification. Integration testing is an area for future work. The strategy is based on partition testing, but, as discussed in [JW89], partition testing can include strategies such as equivalence partitioning, branch testing, and even mutation testing, if the partitioning is selective enough. The TTF is not yet another testing strategy; rather, it is a general framework for SBT. No particular testing heuristic is better than all the others. They all have strengths and weaknesses, and should be combined to increase the effectiveness of testing. The objective of testing is to identify errors in any way possible. The TTF is very flexible, allowing many different testing strategies to be incorporated. It provides a common foundation for applying testing strategies, and for classifying and comparing these strategies.

The Z specification notation [Hay87, Spi92] is used as a Test Description Language (TDL), and to define the basic components of the TTF. This enables a formal description of test data, and other test information such as test oracles. The uniform use of the Z notation improves the clarity and structure of the test information. The advantages of using an existing specification language as a TDL are a formal syntax and semantics, reasoning power in the expression of properties and constraints, and familiarity with the notation.

The framework's application in SBT is demonstrated on specifications expressed in model-based notations, which facilitate SBT due to their explicit state transition model. Note that the TTF can be used on specifications expressed in notations other than Z (and the framework can be defined in other languages) that use a state-transition model, particularly where pre- and post-conditions for operations can be determined, e.g., VDM [Jon90] and RSL [RAI92]. In this discussion, the framework is applied to Z specifications. The potential to use the same notation for test definition and formal specification of a system is another strength of the TTF. The test information is largely derivable from the specification and is represented in the same notation.
1.3 Related work
Various methods have been employed to derive test information from specifications, concentrating on generating test data and, in some cases, test oracles. Most SBT work is based on process-based specification languages, e.g., [AHKN90, BGM91]. Some work has been done on test information derivation from model-based specification languages like Z and VDM, e.g., [Hay86, RAO92], but to our knowledge, a framework for SBT using model-based notations is novel. In some aspects, the framework is quite similar to category-partitioning [OB88], which uses a description language to define tests derived from the possible combinations of values of variables in the input space. The framework, however, has more structure and does not restrict itself to one testing strategy. It allows different heuristics and strategies to be used to determine interesting subdomains of the input space and to select tests for these subdomains.
1.4 This document
Familiarity with Z is assumed (tutorial references include [WL88] and the opening chapters of [Hay87, Spi92]). Familiarity with testing strategies is also assumed [Bei90]. Section 2 defines the framework. Section 3 demonstrates specification-based testing using the framework on an example specification. Section 4 discusses the framework and its many applications.
2 TEST TEMPLATE FRAMEWORK

This section describes the Test Template Framework (TTF), beginning with a general discussion of test templates and a test hierarchy, then discussing each aspect in more detail.
2.1 Test template overview
The central concept of the framework is the Test Template (TT), which is the basic unit for defining data. The art of designing test data is determining the particular aspects of the implementation that are to be tested, and determining the distinguishing characteristics of input data that test these aspects. Once these classes of requirements are defined, any actual input satisfying them is appropriate test data. Most important is defining the classes of requirements test data must satisfy. A TT is a formal statement of a set of requirements, and is thus a minimal description of data that is generic, instantiable, derivable from a formal specification, and refinable.
TTs constrain the important features of data without placing unnecessary restrictions.
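To make this concrete, the following sketch is an illustration only (it is not part of the TTF definitions, and all names in it are hypothetical): it represents a test template as a constraint over named input components, so that any binding satisfying the constraint is acceptable test data.

    # Hypothetical sketch: a test template viewed as a constraint over named input
    # components.  A "binding" maps component names to values; a template accepts a
    # binding exactly when the binding satisfies the template's requirements.

    def template(binding):
        """Example template: two natural-number inputs with x strictly below y."""
        x, y = binding["x"], binding["y"]
        return x >= 0 and y >= 0 and x < y

    # Any binding satisfying the template is acceptable (abstract) test data;
    # the template states the requirements without fixing one particular value.
    print(template({"x": 1, "y": 4}))   # True  -> suitable test datum
    print(template({"x": 4, "y": 1}))   # False -> outside the template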
2.2 Constructing a test hierarchy
Test data derivation is simplified by a structured approach involving the systematic application of various testing heuristics; templates are derived in stages. TTs are organised into a hierarchy representing the tests for a functional unit. The algorithm for constructing the test hierarchy is based on the familiar partition testing approach, where the input space of an operation is partitioned into subspaces whose elements display some common characteristics.

The Input Space (IS) of an operation is the set of all possible inputs to the operation based on the signature of the input alone. For example, if the input is of type integer, then the IS is the set of integers. The Valid Input Space (VIS) of an operation is the subset of the IS for which the operation is defined, i.e., the subset which satisfies the operation's pre-condition. For example, the VIS of the factorial function defined on integers is the non-negative integers. The VIS may equal the IS.

The first step is to determine the VIS of the functional unit under test. The VIS is derived directly from the specification of the functional unit. This derivation is largely an automatic process. There is one VIS for each functional unit. The next step is to subdivide the VIS into the desired subsets, or partitions, called domains. Choice of domains is not determined by the TTF. Rather, testing strategies and heuristics are used to subdivide the input space. The goal is to derive domains which are equivalence classes of error detection ability for the function under test, and which cover the input space. That is, the goal is to choose domains so that each element of a domain has the same error detecting ability. Some, but not all, strategies assume every element of a domain is equivalent to all the others for this purpose and so only one need be chosen. However, this assumption is often invalid. To preserve the flexibility to choose tests for domains selectively, the domain derivation step is used repeatedly, dividing domains into further sub-domains, until the tester is satisfied that the domains represent the desired equivalence classes.

This derivation results in a collection of test templates, related to each other by their derivation and the heuristics used in their derivation. The collection can be considered a directed graph of templates, where the heuristics represent the edges of the graph. Typically a TT hierarchy looks something like Figure 1. Instances of the terminal nodes in the hierarchy represent test data. Some strategies do not advocate domain partitioning (e.g., random testing), in which case instances are derived directly from the VIS.
[Figure 1: Typical TTF hierarchy. The VIS is the root; templates Ta1, .., Tam and Tb1, .., Tbn are derived from it using heuristics ha and hb, and further templates Tc1, .., Tco, Td1, .., Tdp, Te1, .., Teq, and Tf1, .., Tfs are derived from those using heuristics hc, hd, he, and hf.]

Some partitioning strategies assume each member of a domain is equivalent to all others, in which case only one level of derivation is required. Some heuristics may advocate further subdividing already derived templates. Figure 1 shows a common structure.
2.3 Model of test templates and hierarchies

Preamble
The Z model of the test template framework is presented here. Z schemas are used to represent test templates. Appendix A discusses the motivation for using schemas rather than sets to define templates, and discusses some semantic issues of using sets of templates (schemas) in Z. It is not necessary to understand these Z intricacies to understand this discussion. One point arising from the discussions in appendix A that must be noted is a notational inconvenience: the syntactic representation of singleton sets of schemas must use double braces to define the set. So, the set of templates containing only the template T is written {{T}}, rather than {T} as one would expect.
The model
The hierarchy of templates for each operation is a directed acyclic graph. The root of each hierarchy is the valid input space of the operation being tested. The valid input space is a test template. Notationally, the valid input space template is represented by VIS subscripted with the operation name, for example, VIS_Op. All templates in the hierarchy are sub-schemas of the valid input space. This means that each template in the hierarchy describes a set of bindings which is a subset of the set of all possible bindings satisfying the valid input space. So, the test template type (TT_Op) is defined

TT_Op == ℙ VIS_Op
The hierarchy shows the derivation structure of the templates as a relationship between sets of templates derived from some other template using some heuristic. The generic set of heuristics is introduced and deliberately left abstract:

[HEURISTIC]

The Test Template Hierarchy (TTH) graph is a set of mappings from parent template/heuristic tuples to the set of child templates derived from the parent using the heuristic:
TTH_Op : TT_Op × HEURISTIC ⇸ ℙ TT_Op
Templates are defined in terms of their parents and additional constraints. For example, a template T1, derived from VIS_Op with the additional constraint cst, is defined

T1 ≙ [VIS_Op | cst]
If the heuristic used in this derivation was h, then its position in the TTH can be described by

h : HEURISTIC
T1 ∈ TTH_Op(VIS_Op, h)
If T1 is the only template derived from the valid input space using h, then this section of the hierarchy can be completely defined by

TTH_Op(VIS_Op, h) = {{T1}}
Useful relationships amongst templates in the hierarchy can be defined. We define two standard functions over templates in a hierarchy: children_Op and descendants_Op.

children_Op : TT_Op → ℙ(TT_Op)
children_Op = (λ T : TT_Op ⦁ ⋃ {h : HEURISTIC ⦁ TTH_Op(T, h)})

descendants_Op : TT_Op → ℙ(TT_Op)
descendants_Op = (λ T : TT_Op ⦁ children_Op(T) ∪ ⋃ {T2 : children_Op(T) ⦁ descendants_Op(T2)})

children_Op determines the set of templates directly derived from some template using any heuristic. For example, given the hierarchy in Figure 1,

children_Fig1(VIS) = {Ta1, .., Tam, Tb1, .., Tbn}

descendants_Op determines the set of templates directly or indirectly derived from some template using any heuristic. That is, the descendant templates of some template are all the templates in the sub-graph extending from that template.
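As an informal illustration of this model (the representation and all names below are assumptions, not part of the paper's Z definitions), a template hierarchy can be prototyped as a mapping from (template, heuristic) pairs to sets of derived templates, with children and descendants computed by traversal of the acyclic graph.

    # Hypothetical sketch of a test template hierarchy (TTH) for one operation.
    # Templates are identified by name; the graph maps (parent, heuristic) pairs
    # to the set of templates derived from the parent using that heuristic.

    TTH = {
        ("VIS", "ha"): {"Ta1", "Ta2"},
        ("VIS", "hb"): {"Tb1"},
        ("Ta1", "hc"): {"Tc1", "Tc2"},
    }

    def children(tth, template):
        """Templates directly derived from `template` using any heuristic."""
        derived = [kids for (parent, _), kids in tth.items() if parent == template]
        return set().union(*derived) if derived else set()

    def descendants(tth, template):
        """Templates directly or indirectly derived from `template`."""
        direct = children(tth, template)
        result = set(direct)
        for child in direct:
            result |= descendants(tth, child)   # terminates: the graph is acyclic
        return result

    print(sorted(children(TTH, "VIS")))      # ['Ta1', 'Ta2', 'Tb1']
    print(sorted(descendants(TTH, "VIS")))   # adds 'Tc1' and 'Tc2' from the sub-graph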
2.4 Valid input space
The VIS can be derived for an operation by using its pre-condition as the constraint on the IS. The pre-condition is the minimum constraint which must hold on the input for the operation not to fail (note this does not require a deterministic post-state to exist for every input satisfying the pre-condition). For example, the operation

Converge ≙ [x, x′ : ℤ | (x < 5 ∧ x′ = x + 1) ∨ (x > 5 ∧ x′ = x - 1)]

has VIS

VIS_Converge ≙ pre Converge = [x : ℤ | x < 5 ∨ x > 5]

All inputs for which the operation is defined conform to this structure. This is significant because at this level it is meaningless to derive test data outside the valid input space, since there is no description of intended behaviour for such input, i.e., any behaviour satisfies the specification. Consequently, as mentioned in [SC91], the expression ¬ VIS_Op describes input not conforming to the requirements of the operation. It is a concise description of what is not acceptable input for the operation and may alert the designer to some error in the specification. Depending on design decisions and testing strategies, this may also be the source of robustness tests for the operation. This need not always be the case, though. It may be convenient in the implementation for each operation to assume its pre-conditions are checked elsewhere, in which case robustness testing operations in isolation is not meaningful. If this is the case, then it is necessary to show that the pre-condition for an operation holds whenever the operation is invoked.
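As a small illustration (a sketch only, transcribing the Converge example into executable form; nothing here is part of the framework itself), the valid input space can be treated as the pre-condition viewed as a predicate, and its negation characterises candidate robustness inputs.

    # Hypothetical sketch: the VIS of Converge as an executable predicate, and its
    # negation as a description of input outside the operation's requirements.

    def vis_converge(x):
        """pre Converge: the operation is defined for x < 5 or x > 5."""
        return x < 5 or x > 5

    def outside_vis(x):
        """not VIS: input with no specified behaviour (possible robustness tests)."""
        return not vis_converge(x)

    print([x for x in range(0, 10) if vis_converge(x)])   # 0..4 and 6..9
    print([x for x in range(0, 10) if outside_vis(x)])    # [5]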
2.5 Domain partitioning
Heuristics determine interesting features of the operation to be used as determining factors in domain definition. Using heuristics introduces informality, but test derivation can still be rigorous. Including a heuristic in the framework involves stating guidelines and, if possible, defining some properties of the heuristic. For example, consider equivalence partitioning: dividing the VIS into equivalent domains (in other words, classes of input to the operation with the same error detection ability). Domains are determined according to classes of input for which the operation does the same thing. The whole input space could be partitioned. In the previous example of the operation Converge, it is clear that the input is divided by the classes x < 5 and x > 5. These are expressed as TTs as follows

P1 ≙ [VIS_Converge | x < 5]
P2 ≙ [VIS_Converge | x > 5]
and included in the hierarchy.

equiv_part : HEURISTIC
TTH_Converge(VIS_Converge, equiv_part) = {P1, P2}
In partition testing, the domains are required to partition the input space. That is, they must cover it and have no elements in common. Corresponding relationships between templates in general can be expressed by

covers : ℙ(TT_Converge) ↔ TT_Converge
∀ TS : ℙ(TT_Converge); T : TT_Converge ⦁ TS covers T ⇔ ⋃ TS = T

mutex : TT_Converge ↔ TT_Converge
∀ T1, T2 : TT_Converge ⦁ T1 mutex T2 ⇔ T1 ∩ T2 = {}
A set of templates covers a template if each element of the template can be found in one of the templates of the set. Two templates are mutex if they have no elements in common. The following predicates express the partitioning property for the equivalence-partitioning heuristic.
∀ anc : TT_Converge ⦁ TTH_Converge(anc, equiv_part) covers anc
∀ anc : TT_Converge ⦁ (∀ T1, T2 : TTH_Converge(anc, equiv_part) | T1 ≠ T2 ⦁ T1 mutex T2)

The heuristic is thus incorporated in the TTF as a systematic application of guidelines for selecting interesting classes of the input space, and as a collection of properties, expressed using Z, of the domain templates derived.
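The covers and mutex properties can be checked mechanically over small, finite fragments of an input space. The sketch below is an illustration only, under the assumption that the VIS of Converge is explored over a bounded range of integers; it is not a general decision procedure.

    # Hypothetical sketch: checking `covers` and `mutex` for the Converge partitions
    # by enumerating a bounded slice of the (infinite) valid input space.

    BOUND = range(-20, 21)                       # finite sample of the integers

    VIS = {x for x in BOUND if x < 5 or x > 5}   # valid input space (bounded)
    P1  = {x for x in VIS if x < 5}              # template P1
    P2  = {x for x in VIS if x > 5}              # template P2

    def covers(templates, parent):
        """A collection of templates covers a parent if their union equals the parent."""
        return set().union(*templates) == parent

    def mutex(t1, t2):
        """Two templates are mutually exclusive if they share no elements."""
        return not (t1 & t2)

    print(covers([P1, P2], VIS))   # True on this bounded slice
    print(mutex(P1, P2))           # True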
2.5.1 Notes on the choice of domains
Obviously, the heuristic used for determining domains is very important, and perhaps hard to formalise. However, most partition methods are very structured and quite straightforward. For example, determining equivalence classes of inputs in the VIS can be a very simple process given a Z specification of the operation; consider the operation Converge discussed in section 2.4. But it isn't always easy. Consider the similar operation below.

Converge2 ≙ [x, x′ : ℤ | (x < 5 ⇒ x′ = x + 1) ∧ (x > 5 ⇒ x′ = x - 1)]

Here the pre-condition is true, since the operation does not fail when x = 5; it is merely non-deterministic. However, it is reasonable to use the same domains for Converge2 as were used for Converge, with the additional domain {x : ℤ | x = 5}, because to guarantee the behaviour x′ = x + 1 or x′ = x - 1 the input must satisfy x < 5 or x > 5 respectively. This example shows that a cause-effect method is really being used: selecting input domains based on what output will be achieved. Again, though, the domains are obvious, given the specification. Now consider the equivalent operation formed by expressing the implications as disjunctions

Converge3 ≙ [x, x′ : ℤ | (x ≥ 5 ∨ x′ = x + 1) ∧ (x ≤ 5 ∨ x′ = x - 1)]

The pre-condition is true, but what is not obvious is that the domains are x < 5, x > 5, and x = 5, as for Converge2. This is so because these predicates guarantee each different behaviour of the operation. The point is that heuristics should be clear about what factors really determine the domains of interest. These domains will not always be simple to derive. In most cases, though, it is a simple process and it is reasonable to assume that partitioning can be performed.
2.5.2 Continuing subdivision
A curious aspect of most partition testing methods is the assumption that each element of a partition is as good as every other element at detecting errors. On the surface it is a valid assumption, and it simplifies the testing process by vastly reducing the required number of test cases. Studies have shown that, under certain conditions, partition testing is no more effective than random testing, which is often considered the benchmark for a testing heuristic [JW89]. However, due to this assumption, these studies do not consider further subdividing the partitions based on heuristics. Further subdividing the partitions can increase the focus of tests, which can increase the error-detecting ability of a test suite. The TTF permits further heuristics to be applied to derive the most desirable test data for particular domains. For example, choosing points close to the domain boundaries:

B1 ≙ [P1 | x = 4]
B2 ≙ [P2 | x = 6]
boundary : HEURISTIC
TTH_Converge(P1, boundary) = {{B1}}
TTH_Converge(P2, boundary) = {{B2}}
Note that these templates have only one possible instantiation, but this need not always be so.
2.6 Instantiation and instance templates
After applying all the desired heuristics, the template hierarchy is considered complete. If no further subdivision of templates is to be undertaken, each instance of the terminal templates in the hierarchy graph is considered equivalent to all others for testing purposes. For a complete description of the test data, the only remaining task is to instantiate the terminal templates in the hierarchy. There are two ways to view the instantiation of templates. Before discussing these, however, it must be noted that an instance of a template is a precisely defined object, but it is still abstract. That is, it exists at the same level of abstraction as the templates. An instance of a template will most likely not serve as final test data because it probably has some data refinement to undergo. For example, suppose one input class identified by a test template for queue operations involves a two element queue (of natural numbers, say) with duplicate elements. In Z, the queue would be represented by a sequence, so this template would be

QT1 ≙ [q : seq ℕ | #q = 2 ∧ #(ran q) = 1]
Any instances of this template expressed in Z describe specific Z sequences (e.g., ⟨1, 1⟩), but if the final implementation refined the sequence representation of the queue to a linked list, the instances of templates would also have to be refined to suitable linked-list equivalents. The most straightforward way to describe instances of templates is to use schema instantiation. If QT1 is a template, then

Q : QT1
is an instance of the template: it is (abstract) test data. This form of instantiation is no more useful than the original template because no new information is presented. Constraints can be defined on instances, so this approach could be used to describe the test datum mentioned above:

Q : QT1
Q.q = ⟨1, 1⟩
The preferred approach to describing instances is to define instance templates. These are merely templates (schemas) with only one possible instantiation. This approach is more attractive in three ways. Firstly, it presents more information, as in the second example of schema instantiation above. Secondly, uniform use of schemas and templates is made in the hierarchy. Thirdly, some normal templates derived using heuristics may have only one instantiation (for example, templates B1 and B2 in section 2.5.2), so, again, the uniformity of the model is preserved. The instance template corresponding to the instance of QT1 described above is simply defined as

Q ≙ [QT1 | q = ⟨1, 1⟩]
Again, the final translation of instance templates to concrete test data is implementation dependent. Instance templates are incorporated into the hierarchy. The heuristic used to derive instance templates is assumed in the TTH:

instantiation : HEURISTIC
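One way to obtain an instance template mechanically, sketched below purely as an illustration (the bounded search space and the encoding of QT1 are assumptions, not part of the TTF), is to search a small space of candidate bindings for one that satisfies a terminal template's constraints.

    # Hypothetical sketch: instantiating a terminal template by bounded search.
    # The queue template QT1 requires a two-element sequence of naturals whose
    # range has exactly one element (i.e., both elements are equal).

    from itertools import product

    def qt1(q):
        """The constraint of template QT1 over a candidate sequence q."""
        return len(q) == 2 and len(set(q)) == 1

    def instantiate(template, candidates):
        """Return the first candidate satisfying the template, if any."""
        for candidate in candidates:
            if template(candidate):
                return candidate
        return None

    # Search all length-2 sequences over a small carrier set of naturals.
    candidates = (list(t) for t in product(range(3), repeat=2))
    print(instantiate(qt1, candidates))   # e.g. [0, 0] -- one concrete instance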
2.7 Common properties of templates
It is possible to build a library of common properties of templates, such as the relations covers and mutex defined above. These properties can be used in defining properties of testing heuristics. A notion of template equivalence is useful when using multiple heuristics in test development, since some of the templates derived may be equivalent, and can thus be discarded. The Z schema calculus has an equivalence operation on schemas: ⇔. Schemas are equivalent when they describe the same collection of bindings. Another useful function describes the subset of a template not covered by its children:

notcovered : TT_Op → TT_Op
notcovered = (λ T : TT_Op ⦁ T \ ⋃(children_Op(T)))
This identifies regions of a domain for which tests are not derived, and can aid static checking of the application of heuristics in test development. Not all domain subdivisions require the entire input domain to be covered. For example, with an ideal, revealing domain partitioning [WO80], where the input is partitioned into the set of all error-causing inputs and the set of all correct inputs, one need only consider the error-causing domains. Checking these properties is useful in detecting incorrect use of heuristics when defining templates. Expression of such properties also helps increase understanding of heuristics.
3 EXAMPLE: Simple file read operation

This section presents test derivation from a specification. The TTF is used to derive tests for a simple file read operation. Two different approaches to test derivation are examined; two test suites are derived. The first approach uses a simple combination of equivalence partitioning and boundary analysis (functional testing [How86]). The second approach uses domain testing [WC80, CHR82].
3.1 Z Specification
This Z specification is a simplified specification of a read operation on files, based on the specification of the UNIX read operation in [Hay87].

[BYTE]
ReadStatus ::= ok | file_empty | file_too_short

MaxFileSize : ℕ

File ≙ [file : seq BYTE | #file ≤ MaxFileSize]
The simplified read operation takes an input length and reads that many characters from the beginning of the file.
Read_0 ≙ [ΞFile; len? : ℕ; data! : seq BYTE; stat! : ReadStatus |
           #file > 0 ∧ len? ≤ #file ∧ data! = (1 .. len?) ◁ file ∧ stat! = ok]

FileEmpty ≙ [ΞFile; data! : seq BYTE; stat! : ReadStatus |
             #file = 0 ∧ data! = ⟨⟩ ∧ stat! = file_empty]

FileTooShort ≙ [ΞFile; len? : ℕ; data! : seq BYTE; stat! : ReadStatus |
                #file > 0 ∧ len? > #file ∧ data! = ⟨⟩ ∧ stat! = file_too_short]

Read ≙ Read_0 ∨ FileEmpty ∨ FileTooShort
The fully expanded version of Read is

Read ≙ [file, file′ : seq BYTE; len? : ℕ; data! : seq BYTE; stat! : ReadStatus |
         #file ≤ MaxFileSize ∧ #file′ ≤ MaxFileSize ∧ file′ = file ∧
         ((#file > 0 ∧ len? ≤ #file ∧ data! = (1 .. len?) ◁ file ∧ stat! = ok) ∨
          (#file > 0 ∧ len? > #file ∧ data! = ⟨⟩ ∧ stat! = file_too_short) ∨
          (#file = 0 ∧ data! = ⟨⟩ ∧ stat! = file_empty))]
3.2 Input spaces and the hierarchy
The input space of Read describes the set of all possible inputs to Read. It is given by the signature of Read restricted to the input variables.
IS_Read ≙ [file : seq BYTE; len? : ℕ]
The first step in determining the VIS is to determine the pre-condition of Read. For every output state, some input state can be found which satisfies the Read relation, so there are no implicit restrictions on the input space due to unreachable output states. An expression for the pre-condition is

#file ≤ MaxFileSize ∧ ((#file > 0 ∧ len? ≤ #file) ∨ (#file > 0 ∧ len? > #file) ∨ (#file = 0))

which is the predicate describing the various combinations of input variables. This predicate simplifies to

#file ≤ MaxFileSize ∧ #file ≥ 0

which can be further simplified to

#file ≤ MaxFileSize

since the range of # is ℕ. The valid input space is the input space restricted by this predicate:

VIS_Read ≙ [file : seq BYTE; len? : ℕ | #file ≤ MaxFileSize]
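The simplification of the pre-condition can be sanity-checked mechanically on a bounded portion of the input space. The following sketch is an illustration only; MAX_FILE_SIZE and the bounded enumeration are assumptions made for the check.

    # Hypothetical sketch: checking that the raw pre-condition of Read and its
    # simplified form agree on a bounded sample of the input space.

    MAX_FILE_SIZE = 4                       # assumed small bound for illustration

    def pre_raw(file_len, length):
        """Pre-condition as read off the disjuncts of Read."""
        return file_len <= MAX_FILE_SIZE and (
            (file_len > 0 and length <= file_len) or
            (file_len > 0 and length > file_len) or
            (file_len == 0))

    def pre_simplified(file_len, length):
        """Simplified pre-condition: #file <= MaxFileSize."""
        return file_len <= MAX_FILE_SIZE

    samples = [(f, n) for f in range(0, 8) for n in range(0, 8)]
    print(all(pre_raw(f, n) == pre_simplified(f, n) for f, n in samples))   # True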
With the definition of the valid input space, the template hierarchy for Read can be defined

TTH_Read : TT_Read × HEURISTIC ⇸ ℙ(TT_Read)
As templates are derived, the template hierarchy can be further defined by expressing the actual relationships amongst elements of the hierarchy.
3.3 Equivalence partitioning and boundary analysis
In deriving the first collection of test data, equivalence partitioning is used to partition the valid input space into sub-domains representing equivalence classes of input. Test points within these domains are chosen using boundary analysis, a "common sense" heuristic that suggests that the problem cases occur near the boundaries of input spaces.
3.3.1 Equivalence partitioning
There are three partitions, as shown by the three distinct classes of input.

P1 ≙ [VIS_Read | #file > 0 ∧ len? ≤ #file]
P2 ≙ [VIS_Read | #file > 0 ∧ len? > #file]
P3 ≙ [VIS_Read | #file = 0]

equiv_part : HEURISTIC
TTH_Read(VIS_Read, equiv_part) = {P1, P2, P3}
We can check the constraints placed on templates derived using equivalence partitioning, as defined in section 2.5. Each partition is clearly a subset of the valid input space, i.e., no contradictions are introduced by the further restrictions placed by each template on the valid input space. Equivalence partitioning requires the domains to cover the valid input space and to be mutually exclusive. Coverage: the following predicate must hold.
TTH_Read(VIS_Read, equiv_part) covers VIS_Read

From the definition of covers and the reduction of TTH_Read(VIS_Read, equiv_part):

TTH_Read(VIS_Read, equiv_part) covers VIS_Read ⇔ ⋃{P1, P2, P3} = VIS_Read

⋃{P1, P2, P3}
= (P1 ∪ P2) ∪ P3
= ({b : VIS_Read | #b.file > 0 ∧ b.len? ≤ #b.file} ∪ {b : VIS_Read | #b.file > 0 ∧ b.len? > #b.file}) ∪ P3
= {b : VIS_Read | (#b.file > 0 ∧ b.len? ≤ #b.file) ∨ (#b.file > 0 ∧ b.len? > #b.file)} ∪ P3
= {b : VIS_Read | #b.file > 0} ∪ P3
= {b : VIS_Read | #b.file > 0} ∪ {b : VIS_Read | #b.file = 0}
= {b : VIS_Read | #b.file ≥ 0}
= {b : VIS_Read}
= VIS_Read

The derived templates cover the valid input space.

Mutual exclusion: each case is very similar, so only P1 mutex P2 is shown. From the definition of mutex, the following predicate must hold.

P1 mutex P2 ⇔ P1 ∩ P2 = {}

P1 ∩ P2
= {b : VIS_Read | #b.file > 0 ∧ b.len? ≤ #b.file} ∩ {b : VIS_Read | #b.file > 0 ∧ b.len? > #b.file}
= {b : VIS_Read | #b.file > 0 ∧ b.len? ≤ #b.file ∧ #b.file > 0 ∧ b.len? > #b.file}
= {b : VIS_Read | #b.file > 0 ∧ b.len? ≤ #b.file ∧ b.len? > #b.file}
= {b : VIS_Read | false}
= {}

P1 and P2 are mutually exclusive.
3.3.2 Boundary analysis selection
Boundary value points based on each partition are now defined. The boundary analysis heuristic is introduced.

boundary : HEURISTIC

For the first domain, data close to the file size limits are chosen in combination with length values close to, and less than, the file size.

B11 ≙ [P1 | #file = MaxFileSize ∧ len? = #file]
B12 ≙ [P1 | #file = MaxFileSize ∧ len? = #file - 1]
B13 ≙ [P1 | #file = MaxFileSize - 1 ∧ len? = #file]
B14 ≙ [P1 | #file = MaxFileSize - 1 ∧ len? = #file - 1]
B15 ≙ [P1 | #file = 1 ∧ len? = #file]
B16 ≙ [P1 | #file = 1 ∧ len? = #file - 1]

TTH_Read(P1, boundary) = {B11, B12, B13, B14, B15, B16}
For the second partition, data close to the file size limits are chosen in combination with length values close to, and larger than, the file size.

B21 ≙ [P2 | #file = MaxFileSize ∧ len? = #file + 1]
B22 ≙ [P2 | #file = MaxFileSize - 1 ∧ len? = #file + 1]
B23 ≙ [P2 | #file = 1 ∧ len? = #file + 1]
TTH_Read(P2, boundary) = {B21, B22, B23}
The last partition varies the length of data read, while keeping the file size at zero.

B31 ≙ [P3 | #file = 0 ∧ len? = #file]
B32 ≙ [P3 | #file = 0 ∧ len? = #file + 1]

TTH_Read(P3, boundary) = {B31, B32}
3.4 Domain testing
Domain testing is a testing heuristic designed to detect "shifts" in the boundaries between input sub-domains caused by errors in the input specifications. There are a number of assumptions, one of which is that the boundaries can be represented in a linear space as lines. The simple predicates defining domains represent the domain boundaries. Given this, a finite number of test points for each line are selected so that if the boundary has shifted from the "correct" position it will be detected. The intricacies of domain testing are not under discussion here; the interested reader is referred to [WC80, CHR82]. The method is shown as an interesting contrast to equivalence partitioning and boundary analysis. The input space is represented using #file and len? as the axes.
3.4.1 Domain testing partitioning
There are five domain borders in the IS, represented by the simple predicates in the specification:

B1 : #file ≤ MaxFileSize
B2 : #file > 0
B3 : #file = 0
B4 : len? ≤ #file
B5 : len? > #file
These borders form only three input domains, corresponding to the partitions in the previous section. Points to test each border are derived. Other borders are considered only when considering the extreme points along the border being tested. The three domains are easily defined

D1 ≙ P1
D2 ≙ P2
D3 ≙ P3

domain_deriv : HEURISTIC
TTH_Read(VIS_Read, domain_deriv) = {D1, D2, D3}
3.4.2 Domain test point selection
Firstly, the error margin, ε, is defined; it is the maximum distance of the OFF points from the boundary being tested. In the discrete space of this example the minimum value for ε is 1.

ε : ℕ
ε = 1
The heuristic for selecting points using domain testing is introduced

domain_select : HEURISTIC
Domain D1 has two significant borders that need to be tested: #file ≤ MaxFileSize and len? ≤ #file (B1 and B4). Each requires four test points: 2 ON and 2 OFF. [WC80] uses only 1 OFF point for this type of inequality, but, as noted in [Sco88], another OFF point is required to distinguish this inequality from an equality. The points to test boundary B1 are

B1ON1 ≙ [VIS_Read | #file = MaxFileSize ∧ len? = 0]
B1ON2 ≙ [VIS_Read | #file = MaxFileSize ∧ len? = MaxFileSize]
B1OFF1 ≙ [VIS_Read | #file = MaxFileSize - ε ∧ len? = MaxFileSize div 2]
B1OFF2 ≙ [VIS_Read | #file = MaxFileSize + ε ∧ len? = MaxFileSize div 2]

B1OFF2 falls outside the valid input space (it contradicts the constraints of VIS_Read), and is ignored. The points to test boundary B4 are

B4ON1 ≙ [VIS_Read | #file = 1 ∧ len? = 1]
B4ON2 ≙ [VIS_Read | #file = MaxFileSize ∧ len? = MaxFileSize]
B4OFF1 ≙ [VIS_Read | #file = (MaxFileSize div 2) - ε ∧ len? = MaxFileSize div 2]
B4OFF2 ≙ [VIS_Read | #file = (MaxFileSize div 2) + ε ∧ len? = MaxFileSize div 2]

There is also the pathological case due to the discrete input space [WC80]

D1PATH ≙ [VIS_Read | #file = 1 ∧ len? = 0]

TTH_Read(D1, domain_select) = {B1ON1, B1ON2, B1OFF1, B4ON1, B4ON2, B4OFF1, B4OFF2, D1PATH}

D2 also has two significant boundaries that need to be tested: #file > 0 and len? > #file (B2 and B5). Both of these require three test points: 2 ON and 1 OFF.

B2ON1 ≙ [VIS_Read | #file = 0 ∧ len? = 0]
B2ON2 ≙ [VIS_Read | #file = 0 ∧ len? = MaxFileSize]
B2OFF ≙ [VIS_Read | #file = 1 ∧ len? = MaxFileSize div 2]
B5ON1 ≙ [VIS_Read | #file = 1 ∧ len? = 1]
B5ON2 ≙ [VIS_Read | #file = MaxFileSize ∧ len? = MaxFileSize]
B5OFF ≙ [VIS_Read | #file = (MaxFileSize div 2) - ε ∧ len? = MaxFileSize div 2]

TTH_Read(D2, domain_select) = {B2ON1, B2ON2, B2OFF, B5ON1, B5ON2, B5OFF}

D3 is an equality boundary requiring five test points: 3 ON and 2 OFF; only 1 OFF point is used because the other falls in the space where file sizes have negative values, which does not exist.

B3ON1 ≙ [VIS_Read | #file = 0 ∧ len? = 0]
B3ON2 ≙ [VIS_Read | #file = 0 ∧ len? = MaxFileSize]
B3ON3 ≙ [VIS_Read | #file = 0 ∧ len? = MaxFileSize div 2]
B3OFF ≙ [VIS_Read | #file = 1 ∧ len? = MaxFileSize div 2]

TTH_Read(D3, domain_select) = {B3ON1, B3ON2, B3ON3, B3OFF}
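The ON/OFF point selection can be sketched as a small helper for one family of borders. The code below is an illustration only: it handles a single linear border of the form len? ≤ #file in a discrete space, with the positions of the ON and OFF points chosen to mirror the templates above; it is not a general domain-testing tool, and the bound is an assumption.

    # Hypothetical sketch: ON/OFF test points for the border len? <= #file (B4)
    # in the discrete (#file, len?) space, with error margin eps = 1.

    MAX_FILE_SIZE = 8
    EPS = 1

    def on_points():
        """Two ON points lying exactly on the border len? = #file."""
        return [(1, 1), (MAX_FILE_SIZE, MAX_FILE_SIZE)]

    def off_points():
        """Two OFF points a distance EPS from the border, one on each side."""
        mid = MAX_FILE_SIZE // 2
        return [(mid - EPS, mid), (mid + EPS, mid)]

    def in_valid_input_space(file_len, length):
        """The points must still lie in VIS_Read (#file <= MaxFileSize)."""
        return 0 <= file_len <= MAX_FILE_SIZE and length >= 0

    points = on_points() + off_points()
    print([p for p in points if in_valid_input_space(*p)])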
3.5 Removing redundant templates
Many of the templates derived are equivalent. Equivalent templates can be derived using one heuristic (e.g., domain testing), or can arise due to using multiple heuristics. A template is redundant if it is a terminal node in the template hierarchy and it is equivalent to another template in the hierarchy. Redundant templates can be ignored. None of the 11 partition-boundary templates are redundant amongst themselves. Of the 18 domain test points, 10 aren't redundant amongst themselves. A strength of domain testing is the relatively small number of test points generated to infer the correctness of the domain boundaries. The redundancies are

B11 ⇔ B1ON2 ⇔ B4ON2 ⇔ B5ON2
B15 ⇔ B4ON1 ⇔ B5ON1
B16 ⇔ D1PATH
B31 ⇔ B2ON1 ⇔ B3ON1
B2ON2 ⇔ B3ON2
B2OFF ⇔ B3OFF
B4OFF1 ⇔ B5OFF
Overall, there are 17 non-redundant templates representing test data for Read: 1) B11, 2) B12, 3) B13, 4) B14, 5) B15, 6) B16, 7) B21, 8) B22, 9) B23, 10) B31, 11) B32, 12) B1ON1, 13) B1OFF1, 14) B4OFF1, 15) B2ON2, 16) B2OFF, 17) B3ON3.
3.6 Test data: Instantiating templates
The above collection of TTs represents the test data set for Read. Each template describes a set of possible instances, yet each instance of a particular template is deemed equivalent in error detecting ability. The factors differentiating instances of a template are not factors that differentiate the operation's behaviour. In this case, the actual BYTE values of the file contents distinguish many instances of each TT from each other, but the Read operation is not concerned with such distinctions. Example instance templates are

T5 ≙ [B15 | file = ⟨'a'⟩ ∧ len? = 1]

and

T10 ≙ [B31 | file = ⟨⟩ ∧ len? = 0]

assuming 'a' : BYTE. The instance templates are refined as more information about the concrete representation of data becomes available.
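As a final illustration of this step (a sketch under the assumption that BYTE is refined to single-character strings; the helper names are hypothetical), the two instance templates translate directly into concrete test inputs, and their membership in the parent templates can be checked.

    # Hypothetical sketch: concrete counterparts of the instance templates T5 and
    # T10, with a check that each still satisfies its parent template's constraint.

    MAX_FILE_SIZE = 8

    def b15(file, length):
        """Parent template B15: #file = 1 and len? = #file (within P1)."""
        return len(file) == 1 and length == len(file) and len(file) <= MAX_FILE_SIZE

    def b31(file, length):
        """Parent template B31: #file = 0 and len? = #file (within P3)."""
        return len(file) == 0 and length == len(file)

    t5  = {"file": ["a"], "len": 1}   # concrete datum described by T5
    t10 = {"file": [],    "len": 0}   # concrete datum described by T10

    print(b15(t5["file"], t5["len"]))     # True
    print(b31(t10["file"], t10["len"]))   # True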
4 DISCUSSION

The previous sections describe the basic mechanics of the TTF. The following sections discuss ideas on further applications of the framework beyond specification of test data, and future work areas.
4.1 Analysis of test set
Expressing the test data using a formal notation facilitates analysis of the test set. For example, [JW89] lists a number of properties of partition testing (using random selection of tests within partitions) useful for comparing the test set with a collection of random tests. In particular, Observation 4 states that if all partitions are of the same size and the same number of tests is chosen for each partition, then the test set is at least as good as a random test set. Since templates are Z schemas, and Z schemas are sets, we use the cardinality operator on sets (#) to represent the size of a template. Observation 4 may be expressed
∀ T : TT_Op ⦁
  (∀ c1, c2 : children_Op(T) ⦁
     #c1 = #c2 ∧ #(TTH_Op(c1, instantiation)) = #(TTH_Op(c2, instantiation)))
4.2 Analysis of heuristics
The TTF provides common ground for comparing and contrasting testing heuristics. Domain testing [WC80, CHR82] usually derives fewer test points, but it is a more difficult strategy to apply, and it is not applicable in many cases. It is interesting to note which test data are common in the derivations from different heuristics, and whether any TTs that are not equivalent could actually satisfy the criteria of other heuristics. For example, instantiation templates derived using any heuristic are also valid templates for random testing. Templates derived independently using different heuristics might have other, more general, relationships among them. One TT might be a subset of another, or a collection of TTs might partition another. If so, the heuristics and templates should be examined to determine any redundancy or to consider using the heuristics in conjunction. Naturally, the heuristics can be compared based on error detection when run on implementations, which makes the common templates particularly interesting.
4.3 Application areas

4.3.1 More testing heuristics
The TTF facilitates development of more SBT heuristics, particularly in the partitioning of the input space into domains. What are good partitioning strategies? Is analysis of the output space to determine unique input partitions for all cases worthwhile? Is it ever acceptable not to consider portions of the input space? Similar questions are raised when deriving sub-templates. What information is there in the specification that indicates good subdivisions for a partition? Also of interest are examples showing how to incorporate existing SBT and other testing strategies into the TTF.
4.3.2 Refinement
The TTs are as abstract as the specification. There are many possible implementations of a specification, and correspondingly there are many concrete representations of the abstract test information. The specification is refined to an implementation; corresponding refinements can be made to the TTs to describe (more concretely) the test data and test information for the refined specification. This simplifies test derivation in two ways. Firstly, it is usually easier to derive tests at higher levels of abstraction. Secondly, more information about the final implementation is introduced in stages, so that additional tests due to increased knowledge of structure are required in small manageable amounts, which greatly simplifies structural, or white-box, testing. The treatment of refinement in the TTF is in its early stages, and deals primarily with data refinement.
Between a specification (at the abstract level) and a refinement of the specification (at the concrete level, though not necessarily the final implementation) there exists an abstraction relation, say Abs (see Chapter 1 of [Spi92] for more details and examples). Refined templates are derived from their abstract counterparts by conjoining the abstract template and the abstraction relation, and then hiding the components of the abstract data representation. For example, consider implementing the data type File from section 3 by a combination of an array of bytes (represented as a mapping from ℕ to BYTE) and a size variable representing the size of the array

File1 ≙ [elts : ℕ ⇸ BYTE; size : ℕ | size ≤ MaxFileSize]
The relation between abstract and concrete files is

Abs ≙ [File; File1 | #file = size ∧ (∀ i : dom file ⦁ file(i) = elts(i))]
To see how this refinement can be applied to the test templates, consider B11 derived in section 3:

B11 ≙ [P1 | #file = MaxFileSize ∧ len? = 0]

The template refined from the abstract template B11 using Abs (represented by the superscript R on the name) is

B11^R ≙ (B11 ∧ Abs) \ (file)
That is,

B11^R ≙ [elts : ℕ ⇸ BYTE; size : ℕ; len? : ℕ | size = MaxFileSize ∧ len? = 0]
If testing is conducted separately on the refined specification, one of the TTs produced using boundary analysis is

Read1_B11 ≙ [elts : ℕ ⇸ BYTE; size : ℕ; len? : ℕ | size = MaxFileSize ∧ len? = 0]
As expected,

B11^R ⇔ Read1_B11
Graphically, this relationship is shown in the diagram below.

[Diagram: a commuting square with SPEC and TT columns. Read derives the template B11; Read is refined (using Abs) to Read1; B11 is refined (using Abs) to B11^R; Read1 derives the template Read1_B11; and B11^R = Read1_B11.]
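A small sketch of this data refinement step (an illustration only; the concrete representation as a dict of byte values plus a size field is an assumption standing in for the array of the File1 schema) shows how an abstract instance can be carried across the abstraction relation Abs.

    # Hypothetical sketch: refining an abstract file (a sequence of bytes) into the
    # concrete File1 representation (elts mapping plus size), following Abs.

    def refine(abstract_file):
        """Apply Abs: #file = size and file(i) = elts(i) for i in dom file."""
        return {"elts": {i + 1: b for i, b in enumerate(abstract_file)},  # Z sequences index from 1
                "size": len(abstract_file)}

    def satisfies_b11_refined(concrete, max_file_size, length):
        """The refined template B11^R: size = MaxFileSize and len? = 0."""
        return concrete["size"] == max_file_size and length == 0

    MAX_FILE_SIZE = 3
    abstract_instance = ["a", "b", "c"]        # an abstract instance with #file = MaxFileSize
    concrete_instance = refine(abstract_instance)

    print(concrete_instance)                                             # {'elts': {1: 'a', 2: 'b', 3: 'c'}, 'size': 3}
    print(satisfies_b11_refined(concrete_instance, MAX_FILE_SIZE, 0))    # True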
Another simple example of refinement demonstrates how more detailed information can require additional tests. At some point the type ℕ will probably be replaced by the type integer (ℤ). When this step is made, the implicit boundary len? ≥ 0 must be made explicit. The new VIS derived after this refinement is applied to Read is

VIS_Read2 ≙ [file : seq BYTE; len? : ℤ | #file ≤ MaxFileSize ∧ len? ≥ 0]
which adds a new domain boundary to the valid input space. More tests must be derived. For example, using domain testing, four more points must be derived:

B6ON1 ≙ [VIS_Read2 | #file = 0 ∧ len? = 0]
B6ON2 ≙ [VIS_Read2 | #file = MaxFileSize ∧ len? = 0]
B6OFF1 ≙ [VIS_Read2 | #file = MaxFileSize div 2 ∧ len? = 1]
B6OFF2 ≙ [VIS_Read2 | #file = MaxFileSize div 2 ∧ len? = -1]
Template B6OFF2 contradicts the valid input space and is ignored. Templates B6ON1 and B6ON2 are redundant, as they are equivalent to templates B2ON1 and B1ON1 respectively.
4.3.3 Maintenance
In a similar vein to refinement, the TTF is also useful in software maintenance. When the specification is updated, the TTs that require updating can be determined and re-derived. This reduces the effort required during regression testing. A tool that performed dependency analysis on the specification would be very useful, since effects of specification changes may propagate to unexpected areas of the specification.
4.3.4 Oracles
The formal specification does more than describe conditions on the input. The relationship between input states and output states is precisely specified. This means that the specification can serve as a test oracle. Deriving test oracles from specifications has received some attention, e.g., [RAO92]. A simple approach to deriving an oracle in the TTF is to generate output TTs corresponding to input TTs. An oracle template is derived for a test data template by using the input-output relationship of the operation to derive an expression for the output components for certain input. The oracle template is defined over the output space of the operation. Note that the oracle template defines a set of possible outputs if the operation is non-deterministic, and that an oracle template can be determined for a TT that isn't an instantiation TT, also resulting in a set of possible outputs.
We define the output space (OS) of an operation, similarly to the IS, as the signature of the operation restricted to the output variables. A general expression for the oracle template of any test template T derived from operation Op is

(Op ∧ T) ↾ OS_Op

We use the expression oracle_Op to represent this:

oracle_Op(T1) == (Op ∧ T1) ↾ OS_Op
Thus, a description of the expected output for each TT can be derived. Again, final concrete instantiation of this oracle template depends on the final implementation. The combination of test data and test oracle forms a test case.
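To illustrate the oracle idea, the following sketch transcribes the expanded Read schema into an executable predicate and uses it to judge an implementation's output for a given test input; read_impl is a hypothetical implementation under test, and the encoding is an assumption made for this illustration.

    # Hypothetical sketch: using the Read specification as a test oracle.
    # read_spec encodes the input-output relation of the expanded Read schema;
    # an implementation's output is acceptable iff the relation holds.

    MAX_FILE_SIZE = 8

    def read_spec(file, length, data, stat):
        """The Read relation over input and output (file unchanged is implicit here)."""
        if len(file) > MAX_FILE_SIZE:
            return False
        return ((len(file) > 0 and length <= len(file)
                 and data == file[:length] and stat == "ok") or
                (len(file) > 0 and length > len(file)
                 and data == [] and stat == "file_too_short") or
                (len(file) == 0 and data == [] and stat == "file_empty"))

    def read_impl(file, length):
        """A hypothetical implementation under test."""
        if len(file) == 0:
            return [], "file_empty"
        if length > len(file):
            return [], "file_too_short"
        return file[:length], "ok"

    # Test case: the instance of template T5 (file = <'a'>, len? = 1) plus oracle.
    data, stat = read_impl(["a"], 1)
    print(read_spec(["a"], 1, data, stat))   # True -> output consistent with the spec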
4.3.5 Robustness tests
As mentioned earlier, the negation of the VIS could be used as the source of robustness tests for an operation. ¬ VIS_Op could form the root of another TT hierarchy, and robustness TTs could then be derived using the TTF.
4.3.6 Software design: Components
Brad Cox describes a software engineering ideal where software development involves constructing programs from pre-defined components, similar to conventional engineering's use of nuts and bolts [Cox90]. A software engineering revolution similar to the industrial revolution is discussed, which will move software development from a "build everything from scratch" approach to a "build from re-usable components where possible" approach. Real, hardware components are defined by a specification of their purpose/parameters and some sort of gauge to determine whether a proposed piece of hardware is satisfactory. The TTF is useful in component definition because the set of test cases derived from the specification can act as the component gauge. The test cases can be used to determine whether implementations of the component conform to the specification.
4.4 Future work
The TTF described in this paper is usable, but additional work is required. Many interesting areas of future work have been identified from applying the framework, notably examination of methods for selecting input partitions, extending the refinement model, and refining the test oracle model.
A major area for future work is tool support. A simple tool interface to the template collection would greatly assist using the TTF. Such a tool would keep track of the derivation structure so that only new information would need to be entered. Other useful tools that are more complicated are a pre-/post-condition analyser that derives pre- or post-conditions from operations' specifications, a theorem prover that can assist in verifying properties of TTs, and a test data generator that automatically instantiates TTs given the TT definition. Another aspect for consideration is applying the TTF beyond unit testing. The specification defines operation interfaces, which is useful in module and integration testing. As it stands, the TTF is useful for deriving test data, but means of testing modules on the data are required. Integrating the TTF with module testing approaches such as in [Hay86, HB89], which focus on developing test scripts to apply (sequences of) tests rather than deriving tests, should be investigated.
4.5 Conclusion
A generally applicable testing framework has been described, which allows test information to be structured in a useful and usable manner. Expressing properties of test classes and data is facilitated by the formal nature of the language used to define test information. The TTF is a specification-based testing method with the advantages of SBT, with applications beyond testing software. Its flexible nature facilitates incorporating new heuristics. The existence of useful specification-based methods and tools may encourage increased use of formal methods. Finally, one interesting point about a specification-based testing method is that, if there are certain styles of specification where the method is more applicable or easier to use, it will encourage specifiers to structure their specifications in these styles. Design for testability encourages specifiers and designers to consider factors that could otherwise be delayed far into the development process, and usually improves the clarity of the specification or design.
Acknowledgements

Much of this work was conducted at the University of Massachusetts at Amherst during a six month visit by the first author. Many thanks are due to Lori Clarke for her efforts and interest in this work. We are grateful for the useful discussions on this work held with Barbara Lerner and other members of the Software Development Laboratory at the University of Massachusetts. We thank Ian Hayes, at the University of Queensland, for some very insightful comments on the basic model of TTs. Phil Stocks is supported by an Australian Postgraduate Research Award and a scholarship with the Centre for Expertise in Distributed Information Systems, which is funded by Telecom Research Laboratories Australia under Telecom Australia Contract # 7015.
References

[AHKN90] J. Arkko, V. Hirvisalo, J. Kuusela, and E. Nuutila. Supporting testing of specifications and implementations. EUROMICRO Journal, 30(1-5):297-302, August 1990. EUROMICRO '90.
[Bei90] B. Beizer. Software Testing Techniques. Van Nostrand Reinhold, New York, second edition, 1990.
[BGM91] G. Bernot, M.-C. Gaudel, and B. Marre. Software testing based on formal specifications: A theory and a tool. Software Engineering Journal, 6(6):387-405, November 1991.
[CHR82] L. A. Clarke, J. Hassell, and D. J. Richardson. A close look at domain testing. IEEE Transactions on Software Engineering, 8(4):380-390, July 1982.
[Cox90] B. J. Cox. Planning the software industrial revolution. IEEE Software, 7(6):25-33, November 1990.
[Fre91] R. S. Freedman. Testability of software components. IEEE Transactions on Software Engineering, 17(6):553-564, June 1991.
[Hay86] I. J. Hayes. Specification directed module testing. IEEE Transactions on Software Engineering, 12(1):124-133, January 1986.
[Hay87] I. Hayes, editor. Specification Case Studies. Series in Computer Science. Prentice Hall International, 1987.
[HB89] Daniel Hoffman and Christopher Brealey. Module test case generation. Software Engineering Notes, 14(8):97-102, December 1989. Proceedings of the ACM SIGSOFT '89 Third Symposium on Software Testing, Analysis, and Verification (TAV3).
[How86] William E. Howden. A functional approach to program testing and analysis. IEEE Transactions on Software Engineering, 12(10):997-1005, October 1986.
[Jon90] C. B. Jones. Systematic Software Development Using VDM. Series in Computer Science. Prentice Hall International, second edition, 1990.
[JW89] B. Jeng and E. J. Weyuker. Some observations on partition testing. Software Engineering Notes, 14(8):38-47, December 1989. Proceedings of the ACM SIGSOFT '89 Third Symposium on Software Testing, Analysis, and Verification (TAV3).
[OB88] T. J. Ostrand and M. J. Balcer. The category-partition method for specifying and generating functional tests. Communications of the ACM, 31(6):676-686, June 1988.
[RAI92] The RAISE Language Group. The RAISE Specification Language. BCS Practitioner Series. Prentice Hall International, 1992.
[RAO92] D. J. Richardson, S. L. Aha, and T. O. O'Malley. Specification-based test oracles for reactive systems. In Proceedings of the 14th International Conference on Software Engineering, pages 105-118, May 1992.
[SC91] P. Stocks and D. A. Carrington. Deriving software test cases from formal specifications. In 6th Australian Software Engineering Conference, pages 327-340, July 1991.
[Sco88] L. T. Scott. On the problem of software testing and the generation of test data. Master's thesis, The University of Queensland, Queensland 4072, Australia, 1988.
[Spi92] J. M. Spivey. The Z Notation: A Reference Manual. Series in Computer Science. Prentice Hall International, second edition, 1992.
[WC80] L. J. White and E. I. Cohen. A domain strategy for computer program testing. IEEE Transactions on Software Engineering, 6(3):247-257, May 1980.
[WL88] J. Woodcock and M. Loomes. Software Engineering Mathematics. Pitman, 1988.
[WO80] E. J. Weyuker and T. J. Ostrand. Theories of program testing and the application of revealing subdomains. IEEE Transactions on Software Engineering, 6(3):236-246, May 1980.
A Notes on the schema model of templates

A.1 Schemas vs sets
It has already been noted that templates describe sets of test data, so it may seem strange not to use sets to define templates. Instead, Z schemas are used. Defining test data for an operation involves assigning values to the input components (both state and parameter) of the operation, that is, defining a binding between input component identifiers and values. Thus, a template defines a set of bindings, which is exactly what a Z schema defines. The set of bindings can be constrained by predicates in the same way as sets are defined using set comprehension. Schema types define generalised tuples, where ordering of components is not significant, and individual components can be referenced. Consider these alternatives

SchemaT ≙ [x, y : ℕ | x < y]
SetT == {x, y : ℕ | x < y}

Used as a template, SetT defines a set of ordered pairs, where individual components cannot be referenced. SchemaT defines a set of bindings of values to the identifiers x and y. If B were such a binding (B : SchemaT), then B.x and B.y reference the components of the binding. The descriptive power of schemas fits the idea of describing test data. What does using a schema to represent a template mean? As a test template, SchemaT describes the set of test data consisting of two components, x and y, both natural numbers, satisfying the condition that x is less than y. Templates are, of course, types in the Z notation. An instance of a template is a particular binding of values to components, and represents an actual test.
A.2 Sets of templates: Z peculiarities
The particulars of the Z syntax and semantics raise two points in the usage of schemas as test templates. These do not restrict the use of templates, but must be made clear. A useful concept in the framework is reasoning with sets of templates, that is, sets of Z schemas. Both points relate to this usage.
A.2.1 Syntax of singleton sets of templates
The first point is one of Z syntax. Schemas are Z types. Defining objects with schema types (bindings) has the syntax inst : Schema. However, this is a shorthand for the syntax inst : {Schema}, which states that inst is a member of the set of bindings defined by Schema. Because of this shorthand, a singleton set of schemas, containing only schema S, cannot be declared {S}, since this is merely the schema type "set of all bindings defined by S". Rather, the singleton set must be defined using double braces, to unambiguously describe the correct set: {{S}}. Non-singleton sets of templates can be defined normally, since there is no ambiguity.
A.2.2 The difference between schema types and schemas
There is a subtle difference between schemas and schema types in Z, best illustrated with an example. Consider the following definitions, with the type of each defined entity noted alongside (bindings and schema types are written using the notation of [Spi92]).

Schema ≙ [x, y : ℕ]     type: ℙ⟨| x : ℕ; y : ℕ |⟩
SSet == Schema           type: ℙ⟨| x : ℕ; y : ℕ |⟩
S : Schema               type: ⟨| x : ℕ; y : ℕ |⟩
SS : SSet                type: ⟨| x : ℕ; y : ℕ |⟩

Both Schema and SSet define sets of bindings, and instances of each are as expected. However, they are not exactly the same. Despite the similarity, Schema is a Z schema, and SSet is only a set of
bindings. This means that operations of the schema calculus cannot be applied to SSet : it is a set, not a schema. In every other regard Schema and SSet are identical. Instances of both are bindings (with no ordering of elements and component reference). Because the types of schemas and sets of bindings are so similar, schemas can be used in set expressions. Set operations require all sets in the expression to have the same signature. The resulting type of a set expression involving schemas is a set of bindings.