Lutess: a testing environment for synchronous software*

L. du Bousquet†, F. Ouabdesselam†, I. Parissis‡, J.-L. Richier†, N. Zuanon†

† LSR-IMAG, BP 72, 38402 St-Martin-d'Hères, France ([email protected])
‡ France Telecom - CNET, 28 chemin du Vieux Chêne, 38243 Meylan, France ([email protected])
Abstract. Several studies have shown that automated testing is a promising approach to save significant amounts of time and money in the reactive software industry. But automated testing requires a formal framework and adequate means to generate test data. In the context of synchronous reactive software, we have built such a framework and its associated tool, Lutess, to integrate various well-founded testing techniques. This tool automatically constructs test harnesses for fully automated test data generation and verdict return. This paper describes the four black-box testing techniques which are coordinated in the uniform framework of Lutess.
1 Introduction

Testing receives increasing attention from research teams working on formal techniques for software specification, development and verification, for two reasons. First, testing appears to be the only means to perform the validation of a piece of software when formal verification is impracticable because of lack of memory and/or time. Second, testing brings a practical solution for the specifications themselves. It can help one gain confidence in the consistency and relevance of the specifications. It can also reveal discrepancies between the specifications and the specifier's intentions. So, testing is more and more often used jointly with, and as a complement to, formal verification [2]. Besides, to be a significant support to validation, the testing techniques must either provide a basis for reliability analysis [8], or be aimed at revealing errors in the software application to improve its correctness. In this paper, we present Lutess, a tool for testing synchronous reactive software. Lutess provides a formal framework based on the use of the Lustre language [3]. It embodies several testing techniques: random testing with or without operational profiles [10, 4], specification-based testing [12] and behavioral pattern oriented testing [11].
* This work has been partially supported by a contract between CNET-France Telecom and University Joseph Fourier, #957B043.
Section 2 introduces the issue of testing reactive software and presents Lutess from the tester's viewpoint. Section 3 describes the Lutess functional generation methods, while section 4 presents their formal definitions and section 5 gives some aspects of their implementation. Section 6 briefly considers the applicability of Lutess through its actual experiments.
2 Testing reactive systems

2.1 Specific attributes of reactive systems

An important feature of a reactive system is that it is developed under assumptions about the possible environment behavior. For example, in a steam-boiler, the physical phenomenon evolves at a low speed. This makes it impossible for the temperature to rise, at once, from a low level to an emergency situation. Consider a program which monitors the temperature through three input signals (temperatureOK, BelowDesiredTemperature, AboveDesiredTemperature). This program will observe a gradual evolution of the parameters. Thus the input domain of the reactive program is not equal to the Cartesian product of the input variable domains. Not only is it restricted to subsets of this product but, in addition, the environment properties rule out some sequences of successive values for some input variables. The environment properties constrain the test data which are to be generated. Besides, testing reactive systems can hardly be based on manually generated data. The software input data depend on the software outputs produced at the previous step of the software cyclic behavior. Such a process requires an automatic and dynamic generation of input data sequences. An input data vector must be exhibited at every cycle.
2.2 Lutess: operational principle

The operation of Lutess requires three elements: a random generator, a unit under test and an oracle (as shown in Figure 1). Lutess automatically constructs the test harness which links these three components, coordinates their executions and records the sequences of input-output relations and the associated oracle verdicts. The three components are just connected to one another and not linked into a single executable code. The unit under test and the oracle are both synchronous and reactive programs, with boolean inputs and outputs. Optionally, they can be supplied as Lustre programs. The generator is automatically built by Lutess from Lustre specifications. These specifications are grouped into a specific syntactic unit, called a testnode. This notion of testnode has been introduced as a slight extension to Lustre [13].
Figure 1. Lutess operational principle: a constrained random generator, the unit under test, and an oracle.

The aforementioned specifications correspond to environment constraints and possibly properties which serve as test guides (guiding properties). The environment constraints define the valid environment behaviors. Guiding properties define a subset of the environment behaviors which is supposed to be interesting for the test. Both are supplied by the user. Notice that, from the tester's viewpoint, guiding properties are considered as an environment extension. This allows us to have a uniform framework to define both environment constraints and testing guides. Moreover, all the testing techniques perform a random generation of values from a domain according to constraints. For this reason, they are all implemented as constrained random generation techniques. The test is operated on a single action-reaction cycle, driven by the generator. The generator randomly selects an input vector for the unit under test and sends it to the latter. The unit under test reacts with an output vector and feeds it back to the generator. The generator proceeds by producing a new input vector and the cycle is repeated. The oracle observes the program inputs and outputs, and determines whether the software specification is violated. The testing process is stopped when the user-defined length of the test sequence is reached. The construction of the generator is carried out in two steps. First, the environment description (i.e. the environment constraints and the guiding properties) is compiled into a finite state automaton [9]. This automaton recognizes all input and output value sequences satisfying the environment constraints. Then, this automaton is made nondeterministic and transformed into a generator.
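The action-reaction cycle described above can be sketched as follows (an illustrative Python skeleton, not part of Lutess; `next_input`, `react` and `check` stand in for the generator, the unit under test and the oracle):

```python
import random

def run_harness(next_input, react, check, length, seed=0):
    """One test run of the cycle: the generator proposes an input vector,
    the unit under test reacts synchronously, and the oracle returns a
    verdict for each step; outputs are fed back to the generator."""
    rng = random.Random(seed)
    trace, prev_output = [], None
    for _ in range(length):
        i = next_input(rng, prev_output)   # generation may use the last outputs
        o = react(i)                       # synchronous reaction of the unit
        trace.append((i, o, check(i, o)))  # record the oracle verdict
        prev_output = o
    return trace
```

The point of the skeleton is the feedback edge: the generator's next choice may depend on the outputs of the previous cycle, which is why test data cannot be prepared offline.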
Lutess has a user-friendly interface, which offers the user an integrated environment:
- to define the testnode, the oracle and the unit to be tested,
- to command the construction of the test harness, to compile Lustre programs, and to build constrained random generators,
- to run the testing process, to set the number and the length of the data sequences, and to replay a given sequence with a different oracle,
- to visualize the progression of the testing process and to format the sequences of inputs, outputs and verdicts,
- to abstract global results from the sequences for both testing efficiency analysis and software reliability evaluation.
Example. Let us consider the validation of a telephony system simulation program (fig. 2). ...

Figure 2. Telephony system simulation program and its environment interface.

The environment of this program is composed of physical telephones. It is characterized by the events issued by the devices (On-Hook, Off-Hook, Dial...), and the signals emitted by the program (e.g. the classical tones such as Dialing-tone, Busy-tone...).
Examples of environment constraints:
- at most one action can be carried out on a phone at each instant of time,
- one cannot go off (resp. on) the hook twice, without going on (resp. off) the hook in between,
- one can dial only after the phone has received a Dialing tone.

Examples of oracle properties:
- outputs are correctly produced,
- a phone never receives two different tones at the same time (tone outputs are mutually exclusive).

In the following section, this example is detailed to illustrate the different testing methods.
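The tone-exclusiveness oracle property can be written as a simple predicate over the program outputs. The sketch below is illustrative only: the signal names are assumptions, and in Lutess the oracle would be a synchronous (e.g. Lustre) program rather than Python.

```python
def tones_mutually_exclusive(outputs):
    """At most one tone signal may be emitted at any instant
    (outputs: a dict of boolean output signals; names are hypothetical)."""
    tones = ("dialing_tone", "busy_tone", "ring_tone")
    return sum(1 for t in tones if outputs.get(t, False)) <= 1
```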
3 Testing methods

This section presents the various testing techniques provided by Lutess.

Basic testing

This method corresponds to an environment simulator. Test data are selected only according to the environment constraints. Therefore, the test data selection criterion is the weakest one can define for synchronous software. The test data generation is performed in such a manner that the data distribution is uniform. However, for complex systems, a uniform distribution is far from reality: realistic environment behaviors may be a small subset of all valid behaviors. The following three methods aim at providing a solution to this problem.

Statistical testing
In [15], Whittaker has shown that, to be useful, the description of the operation modes of a piece of software must contain multiple probability distributions of input variables. The problem with the operational profile is that the user should define it completely. Usually, building a complete operational profile is a useless effort for two reasons. In industrial contexts, the specifications are most often incomplete. Furthermore, the user has only a partial knowledge of the environment characteristics. To bypass this drawback, Lutess offers facilities to define a multiple probability distribution in terms of conditional probability values associated with the input variables of the unit under test [10]. The variables which have no associated conditional probabilities are assumed to be uniformly distributed. An algorithm to translate a set of conditional probabilities into an operational profile (and vice versa) is described in [4].
Examples of conditional probabilities:
- the probability that someone dials his own number is low,
- the probability for someone to go on the hook is high when the busy tone has been received.

Property-oriented testing
A tester may want to test a program against important program properties regardless of any input distribution. In this case, the testing process is directed toward the violation of the properties. Such properties are, for example, safety properties which state that something bad never happens. Random generation is not well adapted when the observation of such properties corresponds to very few data sequences. The actual aim of property-oriented testing is to analyze those properties and to automatically generate relevant input values, i.e. values that are the most liable to cause a failure with respect to these properties. Let us consider the simple property P: i ⇒ o, where i is an input and o is an output. P states that o must be true every time i is true. P holds whenever i = false. So, when i is false, the program has no chance to violate P. Hence, to have a good chance to reveal an error, the interesting value for i is true [12]. A similar analysis is carried out by an automated analyzer on any safety property, allowing the relevant input values to be characterized for the property to be taken into account. It must be noticed that property-oriented testing is always applied with test data which satisfy the environment constraints. This technique is however limited, since it consists in an instantaneous guiding. If we consider a safety property like pre i ⇒ o (2), the analysis will not reveal that setting i to true will test the property at the following step.
Example of property:
- if one goes off the hook on a phone which was previously idle, then the dialing tone should be received at this end-point.

Behavioral pattern-based testing
As complexity grows, reasonable behaviors for the environment may reduce to a small part of all possible ones with respect to the constraints. Some interesting features of a system may not be tested efficiently, since their observation may require sequences of actions which are too long and complex to occur frequently at random. The behavioral pattern-based method aims at guiding further input generation so that the most interesting sequences will be produced. A behavioral pattern characterizes those sequences by listing the actions to be produced, as well as the conditions that should hold on the intervals between two successive actions. Regarding input data generation, all sequences matching the pattern are favored and get a higher chance to occur. To that end, desirable actions appearing in the pattern are preferred, while inputs that do not satisfy interval conditions get a lower chance to be chosen. Unlike the constraints, these additional guidelines are not strictly enforced. As a result, all valid behaviors are still possible, while the more reasonable ones are more frequent. The model of the environment is thus more "realistic". Here again, the generation method is always applied with test data which satisfy the environment constraints.
Example of a (simple) behavioral pattern:
- Let us consider the following pattern (t: instant, l: interval), which guides input generation so that user A will call user B when the latter is talking to C: (t) B and C should be talking together, (l) B and C should not go on the hook, (t) A should be previously idle and should go off the hook, (l) B and C should not go on the hook, (t) A should dial B's number.

(2) pre i is a Lustre expression which returns the previous value of i.
4 Foundations

This section presents the foundations of the techniques described above. A constrained random generator is defined formally as a generating machine and a selection algorithm to determine the values which are sent to the unit under test. A generating machine is an I/O machine (definition 1), whose inputs (respectively outputs) are the unit under test outputs (resp. inputs).

Definition 1. An I/O machine is a 5-tuple M = (Q, q_init, A, B, t) where
- Q is a finite set of states,
- q_init ∈ Q is the initial state,
- A is a set of input variables,
- B is a set of output variables,
- t : Q × V_A × V_B → Q is the (possibly partial) transition function.

In the following, for any set X of boolean variables, V_X denotes the set of values of the variables in X; x ∈ V_X is an assignment of values to all variables in X.
4.1 Basic generating machine

The following definition is mainly inspired by [13]. A similar approach can be found in [7] and [14] to deal with the formal verification problem. A constrained random generator must be a reactive machine. A reactive machine is never blocked: in every state, whatever the input is, a new output can be computed to enable a transition. This means that the generator is always able to compute a new input for the unit under test.
Definition 2. A generating machine, i.e. a machine associated with a constrained random generator, is an I/O machine M_env = (Q, q_init, O, I, t_env, env) where
- O (resp. I) is the set of the unit under test output (resp. input) variables,
- Q is the set of all possible environment states; a state q is an assignment of values to all variables in L, I, and O (L is the set of the testnode local variables),
- env ⊆ Q × V_I represents the environment constraints,
- t_env : Q × V_O × V_I → Q is the transition function constrained by env; t_env(q, o, i) is defined if and only if (q, i) ∈ env.
Behavior outline: A basic generating machine operates in a cyclic way as follows: from its current state, it chooses an input satisfying env for the unit under test, gets the unit's response and finally uses this response to compute its new state. In each state, the possible inputs have an equally probable chance of being selected.
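A minimal sketch of this behavior, assuming an explicit constraint predicate rather than the symbolic BDD representation Lutess actually uses (section 5):

```python
import itertools
import random

def step_basic(env, state, input_vars, rng):
    """Enumerate all boolean input vectors, keep those allowed by the
    environment constraint env(state, inputs), and pick one uniformly."""
    candidates = []
    for bits in itertools.product((False, True), repeat=len(input_vars)):
        i = dict(zip(input_vars, bits))
        if env(state, i):
            candidates.append(i)
    if not candidates:
        # a reactive machine never blocks; reaching this means bad constraints
        raise RuntimeError("blocking state: constraints are not reactive")
    return rng.choice(candidates)
```

Explicit enumeration is exponential in the number of input variables; it is shown here only to make the selection criterion concrete.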
Remark 1: The environment constraints env may be given by the user as formulas involving output variables. But, at each cycle, these constraints can only depend on the previous values of the output variables. That justifies the definition of env.

Remark 2: The variables from L are intermediate variables created for the evaluation of the environment constraints. For example, they can be used to store the previous values of the output variables.

Remark 3: If the machine constrained by env is not reactive, we consider that there is an error in the expression of the constraints. The user is always informed of this problem, even if it is sometimes possible to transform env and make it generating. Indeed, the means to determine whether env is generating is to compute the set of reachable states, or its complement (i.e. the set of states leading inevitably to the violation of env). These computations are based on a least fixed point calculation which can be impracticable in some cases. Nevertheless, the constrained random generator can always operate, since it detects blocking situations. It is the responsibility of the tester to rewrite the constraints.
4.2 Statistical-guided machine

Statistical testing enables a Lutess user to specify a multiple probability distribution in terms of conditional probability values associated with the input variables of the unit under test.
Definition 3. A statistical-guided machine is defined as M_stat = (M_env, CPL) where
- M_env = (Q, q_init, O, I, t_env, env) is a basic generating machine,
- CPL = (cp_0, cp_1, ..., cp_k) is a list of conditional probabilities associated with M_env,
- each cp is a 3-tuple (i, v, f_cp) where i is an input variable (i ∈ I), v is a probability value (v ∈ [0, 1]), and f_cp is a condition (f_cp : Q × V_O × V_I → {0, 1}); v denotes the probability that the variable i takes on the value true when the condition f_cp is true.
Behavior outline: A statistical-guided machine has the same behavior as the basic generating machine, except that it selects an input satisfying env with the probability specified by the list of conditional probabilities. By default, it uses the equally probable distribution. When the conditional probability list is empty, the machine is equivalent to the basic one.
4.3 Property-guided machine

In property-oriented testing, test data are selected in order to facilitate the detection of property violations. The following definition aims at characterizing such test data:

Definition 4. Let M_env = (Q, q_init, O, I, t_env, env) be a generating machine and f_P : Q × V_O × V_I → {false, true} be a predicate representing a property P. The fact that a software input value i ∈ V_I (adequately) tests P in state q ∈ Q is defined as adequate_P(i, q):

adequate_P(i, q) iff ∃o ∈ V_O, f_P(q, o, i) = false

That is, input data for which the properties are true (regardless of the values of the state and output variables) are not able to adequately test these properties.
Definition 5. A property-guided machine is defined as M_P = (M_env, P) where
- M_env = (Q, q_init, O, I, t_env, env) is a generating machine,
- P is a conjunction of properties.

Behavior outline: Whenever it is possible to produce an input value which adequately tests the properties, a property-guided machine ignores all input values which do not adequately test these properties. Thus, from the current state q, whenever possible, the machine chooses an input i satisfying env(q, i) ∧ adequate_P(i, q); otherwise it chooses i such that (q, i) ∈ env. Note that this machine has the same transition function as the basic generating machine.
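Under the same explicit (non-symbolic) toy encoding used earlier, this choice rule can be sketched as:

```python
import itertools
import random

def step_property_guided(env, f_P, state, input_vars, output_vars, rng):
    """Prefer inputs that adequately test P: i is adequate in state q iff
    some output o makes f_P(q, o, i) false. Fall back to any valid input
    when no adequate one exists."""
    def vectors(names):
        return [dict(zip(names, bits))
                for bits in itertools.product((False, True), repeat=len(names))]
    valid = [i for i in vectors(input_vars) if env(state, i)]
    adequate = [i for i in valid
                if any(not f_P(state, o, i) for o in vectors(output_vars))]
    return rng.choice(adequate or valid)
```

For the property P: i ⇒ o discussed in section 3, the only adequate input is i = true, and the sketch indeed always selects it.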
4.4 Pattern-guided machine

A behavioral pattern (BP) is made out of alternating instant conditions and interval conditions. The instant conditions must be satisfied one after the other. Each interval condition shall be continually satisfied between the two successive instant conditions which border it. A behavioral pattern characterizes the class of input sequences that match the sequence of conditions. A behavioral pattern is built with the following syntax rule, where SP is a Lustre boolean expression which does not include the current outputs:

BP ::= [SP] SP | [SP] SP BP

The meaning of a BP is that one wants the sequence of non-bracketed predicates to hold one after the other, and the bracketed predicates to hold continually in between. [true] X [Y] Z means "X should hold, then Z, and not(Y) should not occur in the meantime". This provides a means to describe only the significant parts of an event sequence. A progress variable is associated with each behavioral pattern. It indicates what prefix of the BP has been satisfied so far. To any value of this variable corresponds a pair of predicates (inter, cond): cond is the next-to-appear predicate and inter is the predicate that should continually hold in the meantime.
Definition 6. A pattern-guided machine is defined as M_BP = (M_env, BP) where
- M_env is a generating machine,
- BP is a behavioral pattern, BP = {(inter_k, cond_k)}_k, k ∈ [0, ..., n].
Behavior outline: Let j be an integer variable taking its values in [-1, ..., n]. j represents a progress index on BP; j is initialized to 0. Let S_H(q, j), S_L(q, j), S_N(q, j) be three input variable sets defined as:
- S_H(q, j) = {i ∈ V_I | (q, i) ⊢ cond_j}
- S_L(q, j) = {i ∈ V_I | (q, i) ⊬ inter_j ∧ (q, i) ⊬ cond_j}
- S_N(q, j) = {i ∈ V_I | (q, i) ⊢ inter_j ∧ (q, i) ⊬ cond_j}

At the current state, a pattern-guided machine first chooses one non-empty input variable set. Second, it selects an input i in the chosen set. The machine then updates the progress index as follows:
- if S_H(q, j) was chosen, j ← j + 1,
- if S_N(q, j) was chosen, j is unchanged,
- if S_L(q, j) was chosen, j ← -1.

If j becomes equal to -1 or n + 1, the pattern is negatively or positively finished, and j is then reset to 0. Intuitively, the transition function of the basic machine is partitioned into three classes. The partition is motivated by the status of the transitions regarding the progression of the guiding process: S_H identifies all transitions that make the process go forward, S_L identifies those that lead to the process stopping, while S_N identifies all transitions that do not affect the process.
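The evolution of the progress index can be sketched directly from the rules above:

```python
def advance(j, chosen_set, n):
    """Update the progress index j in [-1 .. n]: choosing from S_H moves
    the pattern forward, S_N leaves it unchanged, S_L aborts it; reaching
    -1 (negative end) or n + 1 (positive end) resets j to 0."""
    if chosen_set == "H":
        j += 1
    elif chosen_set == "L":
        j = -1
    if j == -1 or j == n + 1:
        j = 0          # pattern finished (negatively or positively)
    return j
```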
5 Some implementation issues

The automaton obtained by compiling the environment constraints is coded using a symbolic notation in which the states are represented by a set of variables, and the transitions by boolean functions. For a given state and given values of the inputs and outputs, one can check whether the environment constraints are satisfied. The boolean functions are implemented as a single Binary Decision Diagram (BDD) [1]. At every instant, the test data generator uses the environment description to randomly choose input values which satisfy the constraints, so that the associated boolean function takes the value true. Each node of the diagram carries a variable and each of its outgoing branches is labeled with the value taken by that variable. Variables are ordered as follows: the lower order variables are the local variables, next come the output variables, while the input variables occur at the bottom of the diagram. As a result, it is easy to locate the sub-diagram corresponding to a given state of the environment, since a state is fully defined by the local variables.

Basic constrained random generation
The basic random input generation algorithm produces equally probable input values. It consists of two steps. The first one is a recursive traversal which labels the environment BDD nodes with a pair of integers. This pair indicates how many valid input states can be reached from the associated node. The value of the variable e_i is set with respect to the label (v_0, v_1):

p(e_i = true) = v_1 / (v_0 + v_1) and p(e_i = false) = v_0 / (v_0 + v_1)

The second step of the algorithm produces the input vector at each cycle. To that end, the generator performs the four operations below:
- locate, in the diagram describing the environment constraints, the sub-diagram corresponding to the current values of the state,
- generate a random value for the software inputs satisfying the boolean function associated with that diagram,
- read the new software outputs,
- compute the next state by computing the next value of each state variable.

In other words, the generator searches the diagram associated with the constraints for a path leading to a true leaf.

Statistical testing generation
The statistical testing-oriented generation produces input data using both the previous BDD labeling and the conditional probability list. Let CP(e) be a list of conditional probabilities associated with the input variable e: CP(e) = ((p_1, ce_1), (p_2, ce_2), ..., (p_r, ce_r)). In CP(e), p_j denotes the probability that the variable e takes on the value true when the condition ce_j is true. The selection function assigns a value to e according to the following algorithm:

p(e = true) = if ce_1 then p_1 else if ce_2 then p_2 else ... if ce_r then p_r else v_1 / (v_0 + v_1), with v_1 and v_0 referring to the basic labeling
p(e = false) = 1 - p(e = true)
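The labeling and the selection rule can be sketched as follows, assuming a toy tuple encoding of a decision diagram (the real tool works on shared BDDs; names and encoding are illustrative):

```python
import random

def count_paths(node, counts):
    """Label each node of a toy decision diagram with (v0, v1): the number
    of paths to a True leaf through its false and true branches.
    A node is a tuple (var, low, high); leaves are the booleans True/False."""
    if isinstance(node, bool):
        return 1 if node else 0
    v0 = count_paths(node[1], counts)
    v1 = count_paths(node[2], counts)
    counts[node] = (v0, v1)
    return v0 + v1

def draw_variable(rng, cond_probs, state, v0, v1):
    """p(e = true): the first conditional probability whose condition holds
    in the current state wins; otherwise fall back to the uniform labeling
    v1 / (v0 + v1)."""
    p_true = v1 / (v0 + v1)
    for p, cond in cond_probs:          # ((p1, ce1), ..., (pr, cer))
        if cond(state):
            p_true = p
            break
    return rng.random() < p_true
```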
Property-oriented testing

This technique is implemented by building a new BDD from the environment constraints and the properties to be tested. On this BDD, one can check whether a given state and given values of the inputs both satisfy the environment constraints and are liable to exhibit an error with respect to the properties. The basic algorithm is modified as follows:
- locate, in this new diagram, the sub-diagram corresponding to the current value of the state,
- check whether there exists at least one value of the inputs which can lead to a true leaf in this diagram,
- if so, randomly select one of these values; otherwise, perform the basic algorithm.

Behavioral pattern-based generation
Given the pattern to be matched, the method drives the generator to consider at every cycle the pair of predicates (inter, cond) corresponding to the current value of the progress variable. At each step, the input space is first computed to get all the possible inputs meeting the environment specification. It is then divided into three categories:
- inputs which can satisfy cond, called H;
- inputs falsifying inter, called L;
- inputs in neither of the first two categories, called N.
A probability is assigned to each category so that an input in the first one is favored over an input in the third category, which, in turn, is preferred to an input from the second category. These probabilities are determined with respect to the cardinality of each partition and to given weights associated with them: w_H, w_L and w_N. A partition is said to be of higher priority than another if its weight is greater. The input selection is a two-step process. First, one has to select a category according to the determined probabilities. Each category c in C = {H, L, N} has a probability p_c of being selected:

p_c = (w_c * card(c)) / (Σ_{j ∈ C} w_j * card(j))

Then, an input is chosen in an equally probable manner from the selected category. As a result, the probability for any input i in c to be chosen is:

p_{i,c} = (1 / card(c)) * p_c = w_c / (Σ_{j ∈ C} w_j * card(j))
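The two-step selection can be sketched as follows (illustrative; the real implementation reads the cardinalities off the labeled BDDs described below, and the sets are assumed non-empty overall):

```python
import random

def pick_category(rng, sets, weights):
    """Choose among the H, L, N input sets with probability proportional to
    weight * cardinality, i.e. p_c = w_c * card(c) / sum_j w_j * card(j),
    then pick an input uniformly inside the chosen set."""
    cats = [c for c in ("H", "L", "N") if sets[c]]   # ignore empty sets
    total = sum(weights[c] * len(sets[c]) for c in cats)
    r, acc = rng.random() * total, 0.0
    for c in cats:
        acc += weights[c] * len(sets[c])
        if r < acc:
            return c, rng.choice(sets[c])
    return cats[-1], rng.choice(sets[cats[-1]])      # floating-point edge case
```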
The implementation of the algorithm is also based on the environment BDD Env. Each predicate in the pattern is represented by a BDD, called Cond_i for the i-th predicate to be satisfied, and InterCond_i for the i-th interval predicate. With each value i of the progress variable are associated three BDDs, computed as follows:

Env_H(i) = Env ∧ Cond_i
Env_L(i) = Env ∧ ¬InterCond_i
Env_N(i) = Env ∧ ¬Cond_i ∧ InterCond_i

Each of these BDDs indicates whether a given state and given values of the inputs satisfy both the environment constraints and the given predicate. These BDDs are labeled in the very same manner as for the basic generation. Every generation step therefore involves the traversal of the three diagrams corresponding to the current value of the progress variable. The traversal leads to the sub-diagrams corresponding to the current environment state, where the cardinalities of H, L and N can be retrieved, thanks to the labeling. The selection is then performed with respect to the given weights and the obtained cardinalities.
6 Application

Lutess has been used since 1996 [13]. Its latest developments (statistical and behavioral pattern-based testing) have shown their practicality and their efficiency on industrial case studies conducted in partnership with CNET/France Telecom. The experimentation has consisted in the validation of a synchronous model of a standard telephony system. Given the basic call service and additional service specifications, the goal was to detect possible and undesired interactions between those services [5]. The specifications were drawn from the ITU Recommendations. This experiment has concerned five services. Another experiment has addressed twelve services in the framework of the "Feature Interaction Detection Tool Contest" held in association with the 5th Feature Interaction Workshop [6]. The goal of this contest was to compare different feature interaction detection tools on a single benchmark collection of features. Both experiments have shown that the use of Lutess is well adapted to the feature interaction problem. In particular, Lutess has won the "Best Tool Award" of the aforementioned contest, since its use has allowed L. du Bousquet and N. Zuanon to find the largest number of interactions. The synchronous approach has led to concise validations, thanks to the reduced number of states in the model. Indeed, all transitions are observable. The executable model is of a higher abstraction level and avoids the state space explosion problem. As a consequence, five execution steps are enough to initiate a call, while two steps suffice to terminate a communication. In addition, Lustre is able to generate naive C code, instead of a structured automaton. This avoids the state space explosion even further. On average, a 10,000-step test case takes about one hour, depending on the environment complexity, on a Sparc Ultra-1 station with 128 MB RAM.
Practical results have demonstrated that the guiding techniques are excellent at finding problems involving rare scenarios. The case studies have also confirmed that Lutess can be valuably applied at an earlier stage in software validation, in order to tune specifications. Experimentation has however highlighted the following drawbacks. It has clearly shown that the more detailed the properties, the higher the chances to detect a problem. However, detailing properties is not always possible, and/or would lead us away from the user's view which, so far, we have tried to favor. One therefore has to find a good balance for the property precision level. In addition, specifying the software environment by means of invariant properties is a rather difficult task. Indeed, one should adequately choose a set of properties which do not "overspecify" the environment. Overspecifying may prevent some realistic environment behaviors from being generated. Conversely, a loose specification may cause the generation of invalid environment behaviors.
References

1. S.B. Akers. Binary Decision Diagrams. IEEE Transactions on Computers, C-27:509-516, June 1978.
2. J. Bicarregui, J. Dick, B. Matthews, and E. Woods. Making the most of formal specification through animation, testing and proof. Science of Computer Programming, 29(1-2), July 1997.
3. P. Caspi, N. Halbwachs, D. Pilaud, and J. Plaice. LUSTRE, a declarative language for programming synchronous systems. In 14th Symposium on Principles of Programming Languages (POPL 87), Munich, pages 178-188. ACM, 1987.
4. L. du Bousquet, F. Ouabdesselam, and J.-L. Richier. Expressing and implementing operational profiles for reactive software validation. In 9th International Symposium on Software Reliability Engineering, Paderborn, Germany, November 1998.
5. L. du Bousquet, F. Ouabdesselam, J.-L. Richier, and N. Zuanon. Incremental feature validation: a synchronous point of view. In Feature Interactions in Telecommunications Systems V. IOS Press, 1998.
6. N.D. Griffeth, R. Blumenthal, J.-C. Gregoire, and T. Ohta. Feature interaction detection contest. In K. Kimbler and L.G. Bouma, editors, Feature Interactions in Telecommunications Systems V, pages 327-359. IOS Press, 1998.
7. N. Halbwachs, F. Lagnier, and P. Raymond. Synchronous Observers and the Verification of Reactive Systems. In M. Nivat, C. Rattray, T. Rus, and G. Scollo, editors, Third Int. Conf. on Algebraic Methodology and Software Technology, AMAST'93, Twente. Workshops in Computing, Springer Verlag, June 1993.
8. D. Hamlet and R. Taylor. Partition Analysis Does Not Inspire Confidence. IEEE Transactions on Software Engineering, pages 1402-1411, December 1990.
9. F. Ouabdesselam and I. Parissis. Testing Synchronous Critical Software. In 5th International Symposium on Software Reliability Engineering, Monterey, USA, November 1994.
10. F. Ouabdesselam and I. Parissis. Constructing operational profiles for synchronous critical software. In 6th International Symposium on Software Reliability Engineering, pages 286-293, Toulouse, France, October 1995.
11. F. Ouabdesselam, J.-L. Richier, and N. Zuanon. Using behavioral patterns for guiding the test of service specifications. Technical report PFL, IMAG-LSR, Grenoble, France, 1998.
12. I. Parissis and F. Ouabdesselam. Specification-based Testing of Synchronous Software. In 4th ACM SIGSOFT Symposium on the Foundations of Software Engineering, San Francisco, USA, October 1996.
13. I. Parissis. Test de logiciels synchrones spécifiés en Lustre (Testing of synchronous software specified in Lustre). PhD thesis, Grenoble, France, 1996.
14. P. Ramadge and W. Wonham. Supervisory Control of a Class of Discrete Event Processes. SIAM Journal on Control and Optimization, 25(1):206-230, January 1987.
15. J. Whittaker. Markov chain techniques for software testing and reliability analysis. PhD thesis, University of Tennessee, May 1992.