Synthesis of Fault-Tolerant Concurrent Programs (Extended Abstract)

Paul C. ATTIE
School of Computer Science
Florida International University
attie@fiu.edu

Anish ARORA
Department of Computer Sciences
The Ohio State University
[email protected]

E. Allen EMERSON
Department of Computer Sciences
The University of Texas at Austin
[email protected]

Abstract

Methods for mechanically synthesizing concurrent programs from temporal logic specifications have been proposed (cf. [EC82, MW84, PR89, PR89b, AM94]). An important advantage of these synthesis methods is that they obviate the need to manually construct a program and compose a proof of its correctness. A serious drawback of these methods in practice, however, is that they produce concurrent programs for models of computation that are often unrealistic. In particular, all extant synthesis methods assume completely fault-free operation, i.e., the programs they produce are fault-intolerant. In this paper, we show how to mechanically synthesize fault-tolerant concurrent programs for various fault models. We illustrate the method by synthesizing fault-tolerant solutions to the mutual exclusion and barrier synchronization problems.

1 Introduction

Methods for synthesizing concurrent programs from Temporal Logic specifications, based on the use of a decision procedure for testing temporal satisfiability, have been proposed by Emerson and Clarke [EC82] and Manna and Wolper [MW84]. An important advantage of these synthesis methods is that they obviate the need to manually compose a program and manually construct a proof of its correctness. One only has to formulate a precise problem specification; the synthesis method then mechanically constructs a correct solution. A serious drawback of these methods in practice, however, is that they deal only with functional correctness properties. Non-functional properties such as fault tolerance are not addressed. For example, the method of Manna and Wolper [MW84] produces CSP programs in which all communication takes place between a central synchronizing process and one of its satellite processes. Thus, the failure of the central synchronizer blocks the entire system. In this paper, we present a method for synthesizing concurrent programs for various fault models. This method is an extension of the method of [EC82]. Essentially, we first synthesize a correct but fault-intolerant program. We then model the occurrence of faults as fault transitions, and the recovery from faults as recovery transitions. The addition of these transitions to the original program results in the final, fault-tolerant program. We illustrate our method by synthesizing fault-tolerant solutions for the mutual exclusion and barrier synchronization problems. The paper is organized as follows: Section 2 defines the model of computation, the specification language, and the fault models that we consider. Sections 3 to 6 present synthesis methods for the various fault models. Section 7 establishes soundness results for our synthesis methods, and Section 8 presents our conclusions and discusses extensions of this work. For reasons of space, all proofs are omitted, and are provided in the full paper.
An expanded version of this paper may be found at ftp://ftp.cis.ohio-state.edu/pub/anish/anish/ft-syn-expanded.ps.gz. Attie was supported in part by the Air Force Office of Scientific Research, U. S. Air Force, Grant No. F49620-96-1-0221. Arora was supported in part by NSF Grant CCR-93-08640 and OSU Grant 221506. Emerson was supported in part by NSF Grant CCR-9415496 and by SRC Contract DP 95-DP-388.


2 Preliminaries

2.1 Model of Parallel Computation

We consider concurrent programs of the form P = P1 || ... || PK which consist of a finite number of fixed sequential processes P1, ..., PK running in parallel. With every process Pi, we associate a single, unique index, namely i. We observe that for most actual concurrent programs the portions of each process responsible for interprocess synchronization can be cleanly separated from the sequential, applications-oriented computations performed by the process. This suggests that we focus our attention on synchronization skeletons, which are abstractions of actual concurrent programs in which detail irrelevant to synchronization is suppressed. We may view the synchronization skeleton of an individual process Pi as a state machine where each state represents a region of code intended to perform some sequential computation and each arc represents a conditional transition (between different regions of sequential code) used to enforce synchronization constraints. For example, there may be a node labeled Ci representing the critical section of process Pi. While in Ci, the process Pi may simply increment a single variable, or it may perform an extensive series of updates on a large database. In general, the internal structure and intended application of the regions of sequential code in an actual concurrent program are unspecified in the synchronization skeleton. By virtue of the abstraction to synchronization skeletons, we thus eliminate all steps of the sequential computation from consideration. Formally, the synchronization skeleton of each process Pi is a directed graph where each node is labeled by a unique name (si), and each arc is labeled with a synchronization command B → A consisting of an enabling condition (i.e., guard) B and a corresponding action A to be performed (i.e., a guarded command [Dij76]). Self-loops, where there is an arc from a node to itself, are disallowed.
A global state is a tuple of the form (s1, ..., sK, x1, ..., xm) where each si is the current local state of Pi and x1, ..., xm is a list (possibly empty) of shared synchronization variables. A guard B is a predicate on global states and an action A is a parallel assignment statement which updates the values of the shared variables. If the guard B is omitted from a command, it is interpreted as true and we simply write the command as A. If the action A is omitted, the shared variables are unaltered and we write the command as B. We model parallelism in the usual way by the nondeterministic interleaving of the "atomic" transitions of the individual synchronization skeletons of the processes Pi. Hence, at each step of the computation, some process with an enabled transition is nondeterministically selected to be executed next. Assume that the current state is s = [s1, ..., si, ..., sK, x1, ..., xm] and that process Pi contains an arc from node si to s′i labeled by the command B → A (such a Pi-arc will be written as (si, B → A, s′i)). If B is true in the current state, then a permissible next state is s′ = [s1, ..., s′i, ..., sK, x′1, ..., x′m], where x′1, ..., x′m is the list of updated shared variables resulting from action A (we notate this transition as s →(i,A) s′). A computation path is any sequence of states where each successive pair of states is related by the above next-state relation. The synthesis task thus amounts to supplying the commands to label the arcs of each process's synchronization skeleton so that the resulting computation trees of the entire program P1 || ... || PK meet a given Temporal Logic specification.
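As an illustration, the skeleton-plus-interleaving semantics can be encoded directly. The following sketch is our own toy encoding, not a program from this paper: each arc is a (guard, action, next-local-state) triple in the sense of B → A, and the guards implement a simple strict-turn discipline, which guarantees mutual exclusion but deliberately ignores the liveness clauses of the specification given below.

```python
# Toy synchronization skeletons: a hypothetical strict-turn mutual
# exclusion protocol (NOT the synthesized solution of Figure 1).
# A global state is a tuple (local1, local2, x); each arc is a
# (guard, action, next_local_state) triple standing for B -> A.
SKELETONS = {
    1: {"N": [(lambda l1, l2, x: True,   lambda x: x, "T")],
        "T": [(lambda l1, l2, x: x == 1, lambda x: x, "C")],
        "C": [(lambda l1, l2, x: True,   lambda x: 2, "N")]},  # pass the turn
    2: {"N": [(lambda l1, l2, x: True,   lambda x: x, "T")],
        "T": [(lambda l1, l2, x: x == 2, lambda x: x, "C")],
        "C": [(lambda l1, l2, x: True,   lambda x: 1, "N")]},  # pass the turn
}

def successors(state):
    """All permissible next states under nondeterministic interleaving."""
    l1, l2, x = state
    local = {1: l1, 2: l2}
    out = []
    for i, skel in SKELETONS.items():
        for guard, action, nxt in skel[local[i]]:
            if guard(l1, l2, x):           # enabled transition of process i
                nl = dict(local)
                nl[i] = nxt
                out.append((nl[1], nl[2], action(x)))
    return out
```

Exploring all states reachable from (N, N, x=1) by repeated application of `successors` reproduces, in miniature, the global state transition diagram view of a program that the method works with.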

2.2 The Specification Language

Our specification language is the propositional branching time temporal logic CTL (Computation Tree Logic, cf. [EC82]). The basic modalities of CTL consist of a path quantifier, either A (for all paths) or E (for some path), followed by one of the linear time modalities G (always), F (sometime), X (nexttime), and U (until). CTL formulae are built up from atomic propositions, the boolean operators ∧, ∨, ¬, and the basic modalities. The logic CTL* extends CTL in that the basic modalities of CTL* consist of a path quantifier followed by any number of linear time modalities. We use CTL* to state some of our correctness results. We define the semantics of CTL and CTL* formulae with respect to a Kripke structure M = (S0, S, R, L) consisting of a countable set S of global states, a set S0 ⊆ S of initial states, a binary transition relation R ⊆ S × S giving the transitions of every process, and a labelling function L : S → 2^AP which labels each state with the set of atomic propositions true in the state. R is partitioned into relations R1, ..., RK, where Ri gives the transitions of process i. We require that R be total, i.e., ∀x ∈ S, ∃y : (x, y) ∈ R. A fullpath is an infinite sequence of states (s0, s1, s2, ...) such that ∀i : (si, si+1) ∈ R. For the formal definitions of CTL and CTL* syntax and semantics, the reader is referred to the appendix. We remark that since synchronization skeletons are in general finite state, the propositional version of temporal logic can be used to specify their properties. This is essential, since only propositional temporal logics enjoy the finite model property, which is the underlying basis of the synthesis method of [EC82] which this paper builds upon. An example CTL specification is that for the two process mutual exclusion problem:

(0) Initial State (both processes are initially in their Noncritical region):

N1 ∧ N2

(1) It is always the case that any move P1 makes from its Noncritical region is into its Trying region and such a move is always possible. Likewise for P2:
AG(N1 ⇒ (AX1 T1 ∧ EX1 T1))
AG(N2 ⇒ (AX2 T2 ∧ EX2 T2))
(2) It is always the case that any move P1 makes from its Trying region is into its Critical region. Likewise for P2:
AG(T1 ⇒ AX1 C1)
AG(T2 ⇒ AX2 C2)
(3) It is always the case that any move P1 makes from its Critical region is into its Noncritical region and such a move is always possible. Likewise for P2:
AG(C1 ⇒ (AX1 N1 ∧ EX1 N1))
AG(C2 ⇒ (AX2 N2 ∧ EX2 N2))
(4) P1 is always in exactly one of the states N1, T1, or C1. Likewise for P2:
AG(N1 ≡ ¬(T1 ∨ C1)) ∧ AG(T1 ≡ ¬(N1 ∨ C1)) ∧ AG(C1 ≡ ¬(N1 ∨ T1))
AG(N2 ≡ ¬(T2 ∨ C2)) ∧ AG(T2 ≡ ¬(N2 ∨ C2)) ∧ AG(C2 ≡ ¬(N2 ∨ T2))
(5) P1, P2 do not starve:
AG(T1 ⇒ AF C1)
AG(T2 ⇒ AF C2)
(6) P1, P2 do not access critical resources together:
AG(¬(C1 ∧ C2))
(7) It is always the case that some process can move:
AG EX true

(1–4) are local structure specifications; they describe the structure of the synchronization skeleton for each process.

Another example specification is for the two process barrier synchronization problem. Each process consists of a cyclic sequence of two terminating phases, phase 1 and phase 2. Process i, i = 1, 2, is in exactly one of 4 local states, s1i, e1i, s2i, e2i, corresponding to the start of phase 1, the end of phase 1, the start of phase 2, and the end of phase 2, respectively.
(0) Initial State (both processes are initially at the start of phase 1):
s11 ∧ s12
(1) The start of phase 1 is always followed by the end of phase 1:
AG(s11 ⇒ AX1 e11)
AG(s12 ⇒ AX2 e12)
(2) The end of phase 1 is always followed by the start of phase 2:
AG(e11 ⇒ AX1 s21)
AG(e12 ⇒ AX2 s22)
(3) The start of phase 2 is always followed by the end of phase 2:
AG(s21 ⇒ AX1 e21)
AG(s22 ⇒ AX2 e22)
(4) The end of phase 2 is always followed by the start of phase 1:

AG(e21 ⇒ AX1 s11)
AG(e22 ⇒ AX2 s12)
(5) Process 1 is in exactly one of the states s11, e11, s21, e21. Likewise for process 2:
AG(s11 ≡ ¬(e11 ∨ s21 ∨ e21)) ∧ AG(e11 ≡ ¬(s11 ∨ s21 ∨ e21)) ∧ AG(s21 ≡ ¬(s11 ∨ e11 ∨ e21)) ∧ AG(e21 ≡ ¬(s11 ∨ e11 ∨ s21))
AG(s12 ≡ ¬(e12 ∨ s22 ∨ e22)) ∧ AG(e12 ≡ ¬(s12 ∨ s22 ∨ e22)) ∧ AG(s22 ≡ ¬(s12 ∨ e12 ∨ e22)) ∧ AG(e22 ≡ ¬(s12 ∨ e12 ∨ s22))
(6) No process starts phase 2 before both processes end phase 1:
AG((s11 ∧ s12) ⇒ A[(¬s21 ∧ ¬s22) U (e11 ∧ e12)])
(7) No process ends phase 2 before both processes start phase 2:
AG((e11 ∧ e12) ⇒ A[(¬e21 ∧ ¬e22) U (s21 ∧ s22)])
(8) After the first iteration, no process starts phase 1 before both processes end phase 2:
AG((s21 ∧ s22) ⇒ A[(¬s11 ∧ ¬s12) U (e21 ∧ e22)])
(9) No process ends phase 1 before both processes start phase 1:
AG((e21 ∧ e22) ⇒ A[(¬e11 ∧ ¬e12) U (s11 ∧ s12)])
(10) It is always the case that some process can move:
AG EX true

Note that we require that progress is made from one phase to another, since the until (U) modality requires that its second operand eventually holds.
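The branching-time modalities used in these specifications have a simple explicit-state reading over a finite Kripke structure. The following sketch is our own illustration, not the [EC82] decision procedure: it evaluates EX and AX as set operations, and computes AG p as the greatest fixpoint of Z = p ∩ AX Z.

```python
# A toy explicit-state reading of some CTL modalities (our own sketch,
# not the [EC82] decision procedure).  States are hashable values, R is
# a set of (source, target) pairs, and a "property" is represented by
# the set of states satisfying it.
def EX(S, R, good):
    """States with at least one R-successor satisfying `good`."""
    return {s for s in S if any(t in good for (u, t) in R if u == s)}

def AX(S, R, good):
    """States all of whose R-successors satisfy `good` (R assumed total)."""
    return {s for s in S if all(t in good for (u, t) in R if u == s)}

def AG(S, R, good):
    """Greatest fixpoint of Z = good & AX(Z): `good` holds along all paths."""
    Z = set(good)
    while True:
        Z2 = Z & AX(S, R, Z)
        if Z2 == Z:
            return Z
        Z = Z2
```

On this reading, the deadlock-freedom clause AG EX true of both specifications simply says that every reachable state has at least one R-successor, which is the totality requirement on R.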

2.3 The Fault Models

The faults that a concurrent program is subject to may be categorized in a variety of ways. One categorization distinguishes between faults whose effect is to "corrupt" the global state of the program and faults whose effect is to corrupt program processes. Another categorization distinguishes between faults that are "detectable" by processes and faults that are undetectable by processes. Yet another distinguishes between faults whose duration is "permanent" and faults whose duration is "transient". To demonstrate the generality of our approach, we consider three models of faults that are representative of the different categories mentioned here. The first fault model that we consider is the well-known fail-stop failures and repairs [Sc84, Sc90]: a fault in this model stops an arbitrarily chosen process from executing any actions for a finite length of time, after which that process resumes its normal computation. Processes that are not stopped can detect processes that are stopped. Thus, fail-stop failures and repairs effectively corrupt processes in a detectable and transient manner. The second fault model that we consider is fail-stop failures only: a fault in this model stops a process from executing any actions, possibly forever. Thus, fail-stop failures effectively corrupt processes in a detectable and transient/permanent manner. If a set of processes fails permanently, then the remaining processes can be thought of as a "subsystem" of the original system. We parametrize these subsystems by the set φ of the permanently failed processes, and notate them as subsystem(φ). In keeping with a standard approach to representing fail-stop failures [AG93], we introduce the distinguished atomic proposition Di for process i. Di is true iff process i has stopped and has not yet been repaired.
The third fault model that we consider is general state failures: a fault in this model arbitrarily perturbs the state of a process or a shared variable, without being detected by any process. As a result, the program may be placed in a state that it would not have reached under normal computation of the processes. Such state failures are general in the sense that by a sequence of such faults the program may reach arbitrary global states; thus, faults effectively corrupt global state in an undetectable and transient manner. A transition that models a fault is called a fault-transition. A transition that models the recovery of a process is called a recovery-transition. All other transitions are called normal transitions.

2.4 Technical Definitions

For a global state s = [s1, ..., sK, x1, ..., xm], let s↑i =df si. We extend the labeling function L to local states as follows: L[si] = L[s] ∩ AP_i. For Qi ∈ AP_i, we write si(Qi) = true, s(Qi) = true iff Qi ∈ L[si], Qi ∈ L[s] respectively, and si(Qi) = false, s(Qi) = false otherwise, respectively. s↓i =df s − {si}, i.e., s↓i is s with its Pi-component removed. Likewise, s↓i1...in =df s − {si1, ..., sin}. SH denotes the set {x1, ..., xm} of shared variables.

{s} =df "(∧_{s(Q)=true} Q) ∧ (∧_{s(Q)=false} ¬Q) ∧ (∧_{x∈SH} x = s(x))", where Q ranges over AP, and {s↓i} =df "(∧_{s(Q)=true} Q) ∧ (∧_{s(Q)=false} ¬Q) ∧ (∧_{x∈SH} x = s(x))", where Q ranges over AP − AP_i. {s} characterizes s in that s ⊨ {s}, and s′ ⊭ {s} for all states s′ such that s′ ≠ s, i.e., it converts a state into a propositional formula.

A particular Pi-arc (si, B → A, ti) generates a set of Pi-transitions, one for every state s such that s↑i = si and s(B) = true. We call such a set a family, and use the term Pi-family to specify a family generated by an arc of process Pi. Formally, a Pi-family F in a Kripke structure M = (S0, S, R, L) is a maximal subset of R such that: 1) all members of F are Pi-transitions having the same label (i, A), and 2) for any pair s →(i,A) t, s′ →(i,A) t′ of members of F, s↑i = s′↑i and t↑i = t′↑i. If s →(i,A) t ∈ F, then let F.start, F.finish, F.assig denote s↑i, t↑i, and A, respectively. Given that T.begin denotes the source state of transition T, i.e., T.begin = s for transition T = s →(i,A) t, let F.guard denote ∨_{T∈F} {(T.begin)↓i}.

Definition 1 (Program Extraction) Let M = (S0, S, R, L) be an arbitrary Kripke structure. Then the program P = P1 || ... || PK is extracted from M as follows. For all i ∈ {1, ..., K}: (si, B → A, ti) ∈ Pi iff there exists a Pi-family F in M such that si = F.start, ti = F.finish, A = F.assig, B = F.guard.

A state in which Pi has failed has the form [Di s↓i] (for some global state s) and is called an i-fault state. A state in which a single unspecified process has failed is called a single-fault state. Likewise, a state in which Pi1, ..., Pin have failed has the form [Di1, ..., Din s↓i1, ..., in] and is called an i1...in-fault state. A state in which n unspecified processes have failed is called an n-fault state.
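Definition 1 can be phrased operationally: group the structure's transitions into families and read off one arc per family. The sketch below uses our own data representation (the name `extract_program` is hypothetical); a guard is kept as a set of projected states, standing for the disjunction of the characteristic formulae {(T.begin)↓i}.

```python
from collections import defaultdict

# A sketch of Definition 1 in code (our own data representation).  A
# transition is (s, i, A, t): process i moves global state s to t while
# performing assignment A.  Global states are tuples of local states.
def extract_program(transitions):
    families = defaultdict(set)
    for s, i, A, t in transitions:
        key = (i, A, s[i], t[i])               # same process, label, endpoints
        families[key].add(s[:i] + s[i + 1:])   # accumulate (T.begin) with the
                                               # Pi-component projected away
    skeletons = defaultdict(list)
    for (i, A, si, ti), guard in families.items():
        # one arc per family: (F.start, F.guard -> F.assig, F.finish)
        skeletons[i].append((si, guard, A, ti))
    return dict(skeletons)
```

Each resulting arc is enabled exactly in those global states whose projection lies in its guard set, which mirrors the disjunction F.guard of the definition.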

3 Synthesis of Programs for Fail-Stop Failures and Repairs

Let f be a specification, expressed in CTL, for a concurrent program. We proceed as follows. We first apply the CTL decision procedure to f as in [EC82]. If f is satisfiable, then the decision procedure yields a model M = (S0, S, R, L) of f. M can be viewed as the global state transition diagram of a fault-intolerant program P which satisfies f, and P can be extracted from M via Definition 1. To extract a fault-tolerant program from M, we first transform M by adding to it transitions that model the failure and subsequent recovery of processes. Failure of Pi in (global) state s is modelled by a fault transition from s to the new state [Di s↓i]. Recovery of Pi from state [Di s↓i] is modelled by a recovery transition from [Di t↓i] to a reachable state t such that t↓i = s↓i. We note that cascaded faults must also be modelled, i.e., faults can occur in global states in which some processes have already failed. When all of the fault and recovery transitions have been added to M, the fault-tolerant program is extracted from M via Definition 1. Our method is given in detail below:

1. Apply the CTL decision procedure of [EC82] to the CTL specification f. The result is a model M = (S0, S, R, L) of f.
2. For all i ∈ [1 : K] do

2.1 Partition S into equivalence classes cl(s) = {t | t↓i = s↓i}.
2.2 For each equivalence class cl(s) do
2.2.1 Add a fault transition from every state in cl(s) to the i-fault state [Di s↓i].
2.2.2 Add a recovery transition from [Di s↓i] to every state in cl(s).
2.3 Repeat step 2, this time partitioning the single-fault states introduced by step 2.2.1, adding the fault transitions from the single-fault states, and adding the recovery transitions to the single-fault states. This will introduce double-fault states, and will model all possible cascades of two faults.
2.4 Continue iteratively in this manner. After n applications of step 2 (1 ≤ n ≤ K), all cascades of n faults have been modelled. On the (n+1)-st application of step 2, the n-fault states thereby introduced are partitioned into equivalence classes, and the fault/recovery transitions from/to these states are then introduced, which extends the current model to all cascades of n+1 faults. Termination occurs after K applications of step 2, since we cannot have cascades of more than K faults.
3. Using Definition 1, extract the fault-tolerant program from the Kripke structure produced by step 2.

Synthesis method for the fail-stop failures and repairs model

We now apply the method to the mutual exclusion specification given above. Figure 1 shows the fault-free model produced for this specification by the CTL decision procedure of [EC82]. Figure 2 shows the final model produced by the synthesis method. For clarity, only the fault/recovery transitions corresponding to the failure of P1 followed by the failure of P2 are shown. The transitions corresponding to the other order of failure can be deduced by symmetry considerations. Finally, figure 3 shows the skeletons extracted from the final model.
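Steps 2.1–2.4 can be sketched concretely. The code below is our own toy encoding (the name `add_fault_and_recovery` is hypothetical, and `"_"` stands for the Pi-component forgotten by failure, i.e., the fault state [Di s↓i]): failing process i maps every member of an equivalence class cl(s) to the same i-fault state, recovery transitions return to every member of the class, and iterating over the growing frontier of fault states models all cascades of up to K faults.

```python
# A toy sketch of step 2 of the method (our own encoding).  A state is a
# (down, locals) pair: `down` is a frozenset of failed process indices
# and `locals` a tuple of local states, with "_" marking a component
# forgotten by failure.
def add_fault_and_recovery(states, K):
    R_fault, R_rec = set(), set()
    frontier = {(frozenset(), s) for s in states}
    seen = set(frontier)
    while frontier:                  # level n models all cascades of n faults
        nxt = set()
        for down, loc in frontier:
            for i in range(K):
                if i in down:        # Pi is already down in this state
                    continue
                masked = loc[:i] + ("_",) + loc[i + 1:]  # identifies cl(s)
                fstate = (down | {i}, masked)
                R_fault.add(((down, loc), fstate))       # step 2.2.1
                R_rec.add((fstate, (down, loc)))         # step 2.2.2
                if fstate not in seen:
                    seen.add(fstate)
                    nxt.add(fstate)
        frontier = nxt               # empties after at most K iterations
    return seen, R_fault, R_rec
```

Because a fault state is keyed by the masked tuple, all members of one equivalence class converge on a single i-fault state, exactly as in steps 2.2.1 and 2.2.2, and termination after K levels mirrors step 2.4.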

[Figure 1: Model of the Mutual Exclusion Specification Produced by the CTL Decision Procedure. The initial state set is {[N1 N2]}.]

4 Synthesis of Programs for Fail-Stop Failures

The synthesis method presented in the preceding section produces programs in which the only outgoing transitions from a fault state are recovery transitions. Thus, once a process fails, the rest of the system is effectively locked, waiting for that process to recover. Not only does this restrict applicability of the method to fault models where every failed process is assumed to eventually recover, but it also results in inefficient programs that contain superfluous synchronization. For example, in the mutual exclusion problem, when one process fails, that should not prevent other processes from making progress and eventually entering the critical section (note that if a process Pi fails inside the critical section then it leaves the critical section, since it transits to the detectable local fault state Di). We now remedy these defects by introducing normal transitions between fault states (i.e., transitions executed by non-failed processes in the normal course of operation). Suppose Pi1, ..., Pin have failed. Since we admit the possibility of these processes "staying down" forever, we consider the "subsystem" consisting of the remaining processes. We attempt to synthesize this subsystem using the synthesis method of [EC82]. To do this, we must derive a specification for this subsystem from the CTL specification for the "full system," i.e., the system consisting of all the processes. Let pnf(f) denote the positive normal form of f (i.e., with negations pushed inwards as far as possible using De Morgan's laws and CTL dualities). Define a minimal conjunct of f to be a conjunct of pnf(f) that does not have ∧ as its main ("outermost") operator. Then, the fault residue of f with respect to Pi (notated res_i(f)) is:

∧ { f′ : f′ is a minimal conjunct of f and (atomic-props(f′) ⊄ AP_i or atomic-props(f′) = ∅) }

where atomic-props(f′) is the set of atomic propositions mentioned in formula f′. Essentially, res_i(f) is f with all conjuncts which mention only Pi removed. The fault residue with respect to several processes, notated res_{i1,...,in}, is obtained by repeated application of the above definition in the straightforward way. The CTL decision procedure of [EC82], when applied to a CTL formula f, associates with each state s in the generated structure M a label FL(s), which is a subset of the Fisher-Ladner closure of f. FL(s) is the set of all CTL formulae that must be true in s in order for M to be a model of f. We use these labels to construct the specification for the subsystem as follows. Let s be an n-fault state that is reached from an (n−1)-fault state t by the failure of Pi. Then:

FL(s) = { AG Di, AG(∧_{si ∈ Pi − Di} ¬{si}), ∩_{t′ ∈ cl(t)} res_i(FL(t′)) }

The first formula states that Pi remains failed forever. The second formula is a consequence of the first: since Pi is failed forever, it can never (re-)enter one of its normal local states. The third formula is the intersection of the labels of all the states in the equivalence class from which s was reached (by the failure of Pi). The intuition behind this is that once failure has occurred, it is not known which state in cl(t) the program was in immediately before the failure. Thus, the only CTL formulae that one might reasonably use as a specification for the subsystem corresponding to n faults are those that held in all states of the (n−1)-fault subsystem. By applying the above definition iteratively, we extend the labelling from the non-fault states of the original structure M (which is generated by the CTL decision procedure) to all fault states. Now, to synthesize a particular subsystem, we apply the CTL decision procedure to the label of all the states in that subsystem, merging any duplicate states that are generated. Also, the [EC82] synthesis method generates assignments to shared variables along transitions leading into propositionally identical states that nevertheless have different Fisher-Ladner labels. We must ensure that such assignments are not placed along fault-transitions, since this would require (costly) stable storage. We do this by requiring that all propositionally identical states of the subsystem also have identical Fisher-Ladner labels. If this requirement is not met, then fault-tolerance cannot be achieved without resort to stable storage. In other words, we have an impossibility result in this case. We now apply the method to the mutual exclusion specification. Figure 4 shows the final model produced by the synthesis method. The only difference with the structure of figure 2 is that the P1-fault states have outgoing normal P2-transitions. Thus, P2 is not prevented from making progress by the failure of P1. As in

figure 2, only the fault/recovery transitions corresponding to the failure of P1 followed by the failure of P2 are shown. Figure 5 shows the skeletons extracted from the structure of figure 4. The only difference with the skeletons of figure 3 is that between the normal local states of P2 there is an extra arc that is enabled when D1 is true, reflecting the fact that P2 can progress when P1 is down. Consider also the barrier synchronization specification. The structure generated by the CTL decision procedure applied to this specification is given in figure 6. Suppose a P1-fault occurs in state [s11 e12]. The resulting fault-state will have a Fisher-Ladner label containing, among other formulae, the formulae AG ¬e11 and A[(¬s21 ∧ ¬s22) U (e11 ∧ e12)]. These formulae contradict each other, and so this label is not satisfiable. This means that a P1-subsystem cannot be generated. In other words, when P1 fails in this state, P2 can do nothing except wait for P1 to recover. Indeed, we can verify this from the meaning of the barrier synchronization problem: the progress of P2 requires the concomitant progress of P1, whereas this was not the case in the mutual exclusion problem, which is why we were able to synthesize a P1-subsystem for that specification. The nonexistence of a P1-subsystem for the barrier-synchronization problem may be interpreted as an impossibility result. Since P2 cannot make progress unless P1 comes back up, it follows that there is no fault-tolerant program for barrier-synchronization in the fail-stop failures model. By contrast, the (1-residue of the) mutual exclusion specification can still be satisfied even if P1 stays down forever. Thus, our method is capable of generating impossibility results mechanically.
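The residue computation itself is mechanical. The following sketch uses our own toy formula encoding (nested tuples with the operator first; the names `minimal_conjuncts`, `atomic_props`, and `res` are hypothetical) and assumes the input is already in positive normal form.

```python
# A toy encoding of the fault residue res_i(f) (our own, not the paper's
# implementation).  A formula in positive normal form is a nested tuple,
# e.g. ("and", l, r); atoms are strings.
def minimal_conjuncts(f):
    """Conjuncts of f whose main operator is not "and"."""
    if isinstance(f, tuple) and f[0] == "and":
        return minimal_conjuncts(f[1]) + minimal_conjuncts(f[2])
    return [f]

def atomic_props(f):
    """The set of atomic propositions mentioned in f."""
    if isinstance(f, tuple):
        return set().union(*(atomic_props(x) for x in f[1:]))
    return {f} if f not in ("true", "false") else set()

def res(f, AP_i):
    """Keep the minimal conjuncts not confined to AP_i (or mentioning
    no propositions at all), and re-conjoin them."""
    kept = [c for c in minimal_conjuncts(f)
            if not atomic_props(c) <= AP_i or not atomic_props(c)]
    out = "true" if not kept else kept[0]
    for c in kept[1:]:
        out = ("and", out, c)
    return out
```

On the two-process mutual exclusion specification, for example, res_1 drops the conjunct AG(T1 ⇒ AF C1), which mentions only P1's propositions, but retains the mutual exclusion clause AG(¬(C1 ∧ C2)).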

5 Synthesis of Programs for General State Failures

We now address the general state failures model. Since each fault arbitrarily perturbs a single global state component (i.e., local state or shared variable), a cascade of such faults can generate arbitrary global states. Thus, we generate all possible (new) global states and add them to the structure M produced by the CTL decision procedure. Then, we construct paths from each such fault-state back to some non-fault state (i.e., state of M). From the resulting structure we then extract the fault-tolerant program. Formally, the method is as follows:

1. Apply the CTL decision procedure of [EC82] to the CTL specification f. The result is a model M = (S0, S, R, L) of f.
2. Add all possible global states to M. Call the result M′ = (S0′, S′, R′, L′) (then, the fault states are the states in S′ − S).
3. Construct a path from every fault state in M′ to some non-fault state in M′.
4. Using Definition 1, extract the fault-tolerant program from M′.

Synthesis method for the general state failures model

Figure 6 gives the structure M for the barrier synchronization specification, and figure 7 shows the skeletons extracted. Figure 8 gives the structure M′ generated by our synthesis method, and figure 9 shows the skeletons extracted. It is interesting that in the fault-intolerant program of figure 7, a process can move if the other process is at the same state or one state "ahead", whereas in the fault-tolerant program of figure 9, a process can move if the other process is at the same state, one state ahead, or two states ahead. The fault-intolerant program deadlocks in any of the fault-states, whereas the fault-tolerant program, with its weaker guards, does not deadlock. Note, however, that these weaker guards do not permit the fault-tolerant program to generate any new states or transitions under normal (fault-free) operation.
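Steps 2 and 3 of the method can be sketched as follows. This is our own heavily simplified encoding (the name `add_recovery` is hypothetical): states are tuples of components, every tuple over the component domains is generated, and each fault state is given one corrective transition toward a state that is one component-change closer to M, so that following the recovery map always leads back into M.

```python
from itertools import product
from collections import deque

# A simplified sketch of the general-state-failures method (our own,
# not the paper's construction).  `domains` lists, per state component,
# the values that component may take.
def add_recovery(M_states, domains):
    all_states = set(product(*domains))      # step 2: every possible state
    fault_states = all_states - set(M_states)
    recovery = {}                            # fault state -> next state on
    repaired = set(M_states)                 # its path back toward M
    frontier = deque(M_states)
    while frontier:                          # reverse BFS outward from M
        s = frontier.popleft()
        for i, dom in enumerate(domains):
            for v in dom:
                t = s[:i] + (v,) + s[i + 1:]   # one component-change away
                if t in fault_states and t not in repaired:
                    recovery[t] = s          # step 3: path back toward M
                    repaired.add(t)
                    frontier.append(t)
    return recovery
```

Since the reverse BFS starts from M, every chain of `recovery` pointers is a path of normal transitions ending in a non-fault state, as step 3 requires.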

6 Synthesis of Atomic Read/Write Programs for Fail-Stop Failures

In [AE96], we presented a method for the synthesis of concurrent programs for a model of computation in which all interprocess synchronization and communication is achieved by atomic reads and writes of single variables. We now extend this method to produce fault-tolerant atomic read/write programs for the fail-stop failures model. Roughly speaking, we first generate the fault-intolerant program using the method of [AE96]. Then, we generate the fault-subsystems as in the synthesis method for fail-stop failures of Section 4, but adding in only the fault-transitions, and not the recovery transitions. Next, we add recovery. Recovery is achieved in two stages. In the read stage, the recovering process reads a single global state component and uses this information to determine which local state it can recover to (the process cannot recover to an arbitrary local state, since doing so could introduce new global states that violate the specification). In the write stage, the process rewrites its local state to a state that it has determined is permissible to recover to. For space reasons, we omit a full description, and give an example of a fault-tolerant atomic read/write mutual exclusion program in figure 10. In this figure, Li, i = 1, 2, is a location counter that encodes the value of all the atomic propositions of Pi (including Di). The integer superscripts shown are introduced to distinguish propositionally identical local states and essentially represent a hidden component of the location counter of each process that is not visible to other processes.

7 Correctness Results

Let the synthesis methods presented in Sections 3 to 6 be called methods 1–4 respectively. Let f be the (assumed satisfiable) CTL specification, and M′ be the structure produced by the synthesis method in question. We assume f has the form f″ ∧ AG f′, i.e., AG f′ represents the "invariant" component of f. This assumption in no way restricts f, since f″ is an arbitrary CTL formula. For the fail-stop models, let crash_i be a transition-assertion that is true along an i-fault transition and false along any other type of transition. For the general state failures model, let fault be a transition-assertion that is true along a fault-transition and false along any other type of transition. We note that all of our methods are complete, by virtue of the completeness of the [EC82] synthesis method. We now present some soundness results. Note that G∞ is an abbreviation for FG, and means "true for all but a finite number of states along the path."

Theorem 1 For all four methods:

M′↾fault-free, S0 ⊨ f

where M′↾fault-free is the fault-free portion of M′, i.e., the portion that is reached from the initial states via normal transitions only.

Proof sketch: The fault-free portion of M′ is just M, the structure produced by the CTL synthesis method of [EC82]. By the soundness of this method, we have M, S0 ⊨ f. □

Theorem 2 For method 1:

M′, S0 ⊨ A( G¹(⋀_{i∈[1:K]} ¬crash_i) ⇒ G¹(fault-free ∧ AGf′) )

where fault-free =df ⋁( s ∈ M : {s} ), i.e., fault-free is a propositional formula that is true of a state iff that state is a state of the fault-free portion of M′, i.e., the portion that is reached from the initial states via normal transitions only.

Proof sketch: By inspection of M′ we see that once faults stop occurring, the only possible infinite paths consist of some finite (possibly empty) sequence of recovery actions, followed by an infinite suffix wholly contained in the fault-free portion M. A fault-free state satisfies fault-free by definition. Now AGf′ is a conjunct of the specification (of the "full" system). Thus, every initial state satisfies AGf′. Thus, by CTL semantics, every state reachable from some initial state via normal transitions, i.e., every fault-free state, satisfies AGf′. Hence, every state along the suffix of the path in question satisfies fault-free ∧ AGf′, and so the path in question satisfies G¹(fault-free ∧ AGf′). □

Theorem 3 For method 2:

M′, S0 ⊨ A( Φ ∧ G¹(⋀_{i∈[1:K]} ¬crash_i) ⇒ G¹(fault-free ∧ AGf′) )

where Φ is weak process fairness.

Proof sketch: Analogous to the previous theorem, except we observe the need for fairness. Previously, once faults stopped occurring, the only possible transitions were recovery transitions, and these always lead back to the fault-free structure M. Now, it is also possible to execute normal transitions from a fault-state (these being the transitions of the various subsystems synthesized by the method). However, the method produces a recovery transition for P_i in every P_i-fault state. Thus, P_i is continuously enabled in a P_i-fault state, and, under weak process fairness, will eventually execute its recovery transition. Thus, once faults stop, eventually all failed processes will recover, and so every infinite path satisfying the antecedent of the implication must eventually enter the fault-free region, where it subsequently remains. □

Note that this theorem goes counter to the intent of the second method, which is to synthesize programs where failed processes are allowed to stay down forever. Here, the use of weak process fairness prevents this, and forces each failed process to eventually recover. Although this strictly violates the fail-stop failures model, the point of this theorem is to establish that, as in the structure M′ generated by method 1, there exist paths leading back to the fault-free region from any state whatsoever (theorem 2 establishes the existence of these paths for the structure produced by method 1). In other words, what this theorem really says is that if faults stop occurring, and all the failed processes do happen to recover, then the program eventually enters the fault-free region and stays there forever. Weak process fairness is used to capture this assumption that all failed processes do eventually recover.
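The fairness argument can be made concrete on lasso-shaped paths: weak process fairness excludes exactly those repeated cycles in which a process is continuously enabled yet never takes a step, which is how a failed process with an always-enabled recovery transition is forced to recover. The following sketch assumes a toy trace representation (state names, predicate functions) introduced here for illustration.

```python
def weakly_fair(cycle, enabled, acts):
    """Check weak process fairness for one process on the repeated cycle
    of a lasso-shaped infinite path (the finite prefix is irrelevant).

    enabled(s): the process has an enabled transition in state s.
    acts(s, t): the step from s to successor t is a step of this process.
    (Representation assumed for illustration.)
    """
    continuously_enabled = all(enabled(s) for s in cycle)
    takes_step = any(
        acts(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))
    )
    # Weak fairness: continuously enabled implies infinitely often taken.
    return (not continuously_enabled) or takes_step

# Hypothetical scenario: P1 sits in fault state D1 while P2 keeps moving.
# Repeating the cycle [D1N2, D1T2] starves P1's enabled recovery transition,
# so the path violates weak fairness and is excluded by the antecedent.
enabled = lambda s: s.startswith("D1")      # P1's recovery enabled in D1
acts = lambda s, t: not t.startswith("D1")  # a P1 step leaves D1
print(weakly_fair(["D1N2", "D1T2"], enabled, acts))  # -> False
```

A cycle that passes through a non-D1 state (so P1 is not continuously enabled) is vacuously weakly fair under this check.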

Theorem 4 For method 2:

M′, S0 ⊨ A( G(⋀_{i∈φ} D_i ∧ ⋀_{i∈[1:K]−φ} ¬D_i) ⇒ G(good_φ ∧ AGf′_φ) )

where good_φ =df ⋁( s ∈ subsystem(φ) : {s} ) and f′_φ is the invariant component of the specification of subsystem(φ).

Proof sketch: The antecedent of the implication states that the only paths considered are those that are wholly contained in the region corresponding to subsystem(φ). good_φ is a propositional formula which is true of a state iff that state is in the region of subsystem(φ). Thus, it is true along all states of such a path. AGf′_φ is a conjunct of the subsystem specification by construction of method 2. Since the CTL decision procedure is applied to all the states in subsystem(φ), we conclude, by soundness of the CTL decision procedure, that AGf′_φ holds in all states of the subsystem, and therefore along all states of the path in question. □

Theorem 5 For method 3:

M′, S0 ⊨ A( G¹¬fault ⇒ G¹(fault-free ∧ AGf′) )

Proof sketch: By construction of the method, the only paths from a fault-state lead to some fault-free state. Thus, once faults stop occurring, every infinite path eventually enters the fault-free region and stays there forever. Thus, every infinite path has some suffix composed entirely of fault-free states. A fault-free state satisfies fault-free by definition. Now AGf′ is a conjunct of the specification (of the "full" system). Thus, every initial state satisfies AGf′. Thus, by CTL semantics, every state reachable from some initial state via normal transitions, i.e., every fault-free state, satisfies AGf′. Hence, every state along the suffix of the path in question satisfies fault-free ∧ AGf′, and so the path in question satisfies G¹(fault-free ∧ AGf′). □

Theorem 6 For method 4:

M′, S0 ⊨ A( Φ ∧ G¹(⋀_{i∈[1:K]} ¬crash_i) ⇒ G¹(fault-free ∧ AGf′) )

where Φ is weak process fairness if every i-fault state has an outgoing i-recovery transition, and is strong process fairness if not.

Proof sketch: Analogous to the proof of theorem 3, with the exception that method 4 does not guarantee that every i-fault state has an outgoing i-recovery transition, in which case strong process fairness is needed, since P_i will be enabled infinitely often rather than continuously in this case. □

Theorem 7 For method 4:

M′, S0 ⊨ A( G(⋀_{i∈φ} D_i ∧ ⋀_{i∈[1:K]−φ} ¬D_i) ⇒ G(good_φ ∧ AGf′_φ) )

where good_φ =df ⋁( s ∈ subsystem(φ) : {s} ) and f′_φ is the invariant component of the specification of subsystem(φ).

Proof sketch: Analogous to the proof of theorem 4, except that rather than invoke the soundness of the CTL decision procedure [EC82], we invoke the soundness of the atomic read/write synthesis method of [AE96]. □

8 Conclusions and Further Work

We have presented several methods for the synthesis of fault-tolerant programs from specifications expressed in temporal logic. We have dealt with different fault models (fail-stop failures, repairs, state corruption). We have also shown how impossibility results may be mechanically generated using our methods. We note that in the mutual exclusion example, safety is preserved in all states, not just the fault-free states. An important extension of our results, then, is to characterize the specifications that are masked, i.e., satisfied in all states.

In [AE96], we presented a method for the synthesis of concurrent programs for a model of computation in which all interprocess synchronization and communication is achieved by atomic reads and writes of single variables. This method can be easily extended to generate fault-tolerant atomic read/write programs for the fail-stop failures model. All that is needed is to break up a recovery transition into two transitions: a transition that reads a single global state component (i.e., a shared variable or the local state of some process), followed by a write transition that changes the state of the recovering process appropriately. We omit a full description and example for space reasons, but note that we have synthesized a fault-tolerant version of Peterson's mutual exclusion algorithm [Pe81] for the fail-stop failures model.

One potential difficulty with our methods is the state explosion problem: the number of states in a structure usually increases exponentially with the number of processes, thereby restricting the applicability of synthesis methods based on state enumeration to small systems. In [AE89], a method is proposed for overcoming the state explosion problem by considering the interaction of processes pairwise. The exponentially large global product of all the processes in the system is never constructed. Instead, using the CTL synthesis method of [EC82], small Kripke structures depicting the product of two processes are constructed and used as the basis for synthesis of programs with arbitrarily many processes. Using the synthesis methods given here instead of the [EC82] method, we can construct these "two-process structures," which will now incorporate fault-tolerant behavior. We then plan to extend the synthesis method of [AE89] to take these structures as input and produce fault-tolerant programs of arbitrary size.

Another possible extension of this work is to deal with real time. By using a timed version of CTL, such as Real-Time CTL [EMSS93], it should be possible to synthesize real-time fault-tolerant programs. Since we already have some preliminary results regarding the extension of the many-process synthesis method of [AE89] to real time, we expect that these two extensions can be integrated into one overall method.

Finally, an important issue we have not dealt with is that of the consistency of underlying data. In the synchronization skeleton model, we assume that each local state has a piece of sequential terminating code associated with it that manipulates "underlying", possibly shared, data. When a fault occurs, the resulting change of state in the synchronization skeletons may leave the underlying data in an inconsistent state. We intend to address this problem through the use of generic transformations, such as distributed reset [AG94] or checkpointing/recovery.

References

[AE89] P. Attie and E.A. Emerson, "Synthesis of Concurrent Systems with Many Similar Sequential Processes (Extended Abstract)," Proceedings of the Sixteenth Annual ACM Symposium on Principles of Programming Languages, Austin, January 1989, pp. 191–201.
[AE96] P. Attie and E.A. Emerson, "Synthesis of Concurrent Systems for an Atomic Read / Atomic Write Model of Computation (Extended Abstract)," Proceedings of the Fifteenth Annual ACM Symposium on Principles of Distributed Computing, Philadelphia, May 1996, pp. 111–120.
[AG93] A. Arora and M.G. Gouda, "Closure and convergence: A foundation of fault-tolerant computing," IEEE Transactions on Software Engineering, 19(11): 1015–1027 (1993).
[AG94] A. Arora and M.G. Gouda, "Distributed reset," IEEE Transactions on Computers, 43(9): 1026–1038 (1994).
[AM94] A. Anuchitanukul and Z. Manna, "Realizability and Synthesis of Reactive Modules," CAV 94, Springer LNCS 818, (1994), 156–169.
[CGB86] E.M. Clarke, O. Grumberg, and M.C. Browne, "Reasoning About Networks With Many Identical Finite-State Processes," Proc. 5th ACM PODC, (1986), 240–248.
[Dij76] E.W. Dijkstra, A Discipline of Programming, Prentice-Hall, 1976.
[EC82] E.A. Emerson and E.M. Clarke, "Using Branching Time Temporal Logic To Synthesize Synchronization Skeletons," Science of Computer Programming 2 (1982) 241–266.
[Em90] E.A. Emerson, "Temporal and Modal Logic," in Handbook of Theoretical Computer Science, vol. B, J. van Leeuwen (ed.), 1990.
[EMSS93] E.A. Emerson, A.K. Mok, A.P. Sistla, and J. Srinivasan, "Quantitative temporal reasoning," Real-Time Systems Journal 2: 331–352 (January 1993).
[Ho86] R. Hoogerwoord, "An Implementation of Mutual Inclusion," IPL, 23 (1986) 77–80.
[MW84] Z. Manna and P. Wolper, "Synthesis of Communicating Processes from Temporal Logic Specifications," ACM TOPLAS, 6 (1984) 68–93.
[Pe81] G.L. Peterson, "Myths About the Mutual Exclusion Problem," IPL, 12 (1981) 115–116.
[PR89] A. Pnueli and R. Rosner, "On the Synthesis of a Reactive Module," Proc. 16th ACM POPL, (1989), 179–190.
[PR89b] A. Pnueli and R. Rosner, "On the Synthesis of Asynchronous Reactive Modules," Proc. 16th ICALP, Springer LNCS 372, (1989), 652–671.
[Sc84] F.B. Schneider, "Byzantine Generals in Action: Implementing Fail-Stop Processors," ACM TOCS 2(2): 145–154 (1984).
[Sc90] F.B. Schneider, "Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial," ACM Computing Surveys, 22(4): 299–319 (1990).


Figure 2: Two-Process Mutual Exclusion Structure for the Fail-stop Failures and Repairs Model. The initial state set is { [N1 N2] }. (State-transition diagram not reproducible in this text extraction.)

Figure 3: Two-Process Mutual Exclusion Skeletons for the Fail-stop Failures and Repairs Model. (Skeleton diagrams not reproducible in this text extraction.)

Figure 4: Two-Process Mutual Exclusion Structure for the Fail-stop Failures Model. The initial state set is { [N1 N2] }. (State-transition diagram not reproducible in this text extraction.)

Figure 5: Two-Process Mutual Exclusion Skeletons for the Fail-stop Failures Model. (Skeleton diagrams not reproducible in this text extraction.)

Figure 6: Structure for the Barrier-Synchronization Problem. The initial state set is { [s11 s12] }; the double-boxed state at the bottom is the initial state, duplicated for clarity of the figure. (State-transition diagram not reproducible in this text extraction.)

Figure 7: Skeletons for the Barrier-Synchronization Problem. (Skeleton diagrams not reproducible in this text extraction.)

Figure 8: Barrier-Synchronization Structure for the General State Failures Model. The initial state set is { [s11 s12] }; the fault-states are [s21 s12], [s11 s22], [e21 e12], [e11 e22]. (State-transition diagram not reproducible in this text extraction.)

Figure 9: Barrier-Synchronization Skeletons for the General State Failures Model. (Skeleton diagrams not reproducible in this text extraction.)

Figure 10: Two-Process Mutual Exclusion Skeletons for the Fail-Stop Failures Model and the Atomic Read / Atomic Write Model of Concurrent Computation. (Skeleton diagrams not reproducible in this text extraction.)

Appendix: The Temporal Logics CTL* and CTL

We have the following syntax for CTL* [Em90], where p denotes an atomic proposition, and f, g denote (sub)formulae. The atomic propositions are drawn from a set AP that is partitioned into sets AP_1, …, AP_K; AP_i contains the atomic propositions local to process i. We inductively define a class of state formulae (true or false of states) using rules (S1–3) below and a class of path formulae (true or false of paths) using rules (P1–3) below.

(S1) each atomic proposition p is a state formula
(S2) if f, g are state formulae, then so are f ∧ g, ¬f (where ∧, ¬ indicate conjunction and negation, respectively)
(S3) if f is a path formula, then Ef and Af are state formulae
(P1) each state formula is also a path formula
(P2) if f, g are path formulae, then so are f ∧ g, ¬f
(P3) if f, g are path formulae, then so are Xj f, f U g

Ef is a formula which intuitively means that there is some maximal path for which f holds, and Af is a formula which intuitively means that f holds of every maximal path. Xj f (process-indexed strong nexttime) is a formula which intuitively means that the immediate successor state along the maximal path under consideration is reached by executing one step of process Pj, and formula f holds in the immediate successor state. f U g is a formula which intuitively means that there is some state along the maximal path under consideration where g holds, and f holds at every state along this path up to, but not necessarily including, that state. CTL* consists of the set of state formulae generated by the above rules. The linear time temporal logic PTL [MW84] consists of the set of path formulae generated by rules (S1), (P1–3). The logic CTL [EC82, Em90] is obtained by replacing rules (P1–3) by:

(P0) if f, g are state formulae, then Xj f, f U g are path formulae

The set of state formulae generated by rules (S1–3) and (P0) forms CTL. Formally, we define the semantics of CTL* formulae with respect to a (K-process) structure M = (S0, S, R1, …, RK, L) consisting of

S  — a countable set of states
S0 — ⊆ S, the set of initial states
Ri — ⊆ S × S, a binary relation on S giving the transitions of sequential process i
L  — a labelling of each state with the set of atomic propositions true in the state.

Let R = R_{i1} ∪ … ∪ R_{iK}. A path is a sequence of states (s1, s2, …) such that ∀i: (si, si+1) ∈ R, and a fullpath is a maximal path. A fullpath (s1, s2, …) is infinite unless for some sk there is no sk+1 such that (sk, sk+1) ∈ R. We use the convention that π = (s1, s2, …) denotes a fullpath and π^i denotes the suffix (si, si+1, si+2, …) of π, provided i ≤ |π|, where |π|, the length of π, is ω when π is infinite and k when π is finite and of the form (s1, …, sk); otherwise π^i is undefined. We also use the usual notation to indicate truth in a structure: M, s1 ⊨ f (respectively M, π ⊨ f) means that f is true in structure M at state s1 (respectively of fullpath π). In addition, we use M, S ⊨ f to mean ∀s ∈ S: M, s ⊨ f, where S is a set of states. We define ⊨ inductively:

(S1) M, s1 ⊨ p iff p ∈ L(s1)


(S2) M, s1 ⊨ f ∧ g iff M, s1 ⊨ f and M, s1 ⊨ g
     M, s1 ⊨ ¬f iff not(M, s1 ⊨ f)
(S3) M, s1 ⊨ Ef iff there exists a fullpath π = (s1, s2, …) in M such that M, π ⊨ f
     M, s1 ⊨ Af iff for every fullpath π = (s1, s2, …) in M, M, π ⊨ f
(P1) M, π ⊨ f iff M, s1 ⊨ f
(P2) M, π ⊨ f ∧ g iff M, π ⊨ f and M, π ⊨ g
     M, π ⊨ ¬f iff not(M, π ⊨ f)
(P3) M, π ⊨ Xj f iff π^2 is defined and (s1, s2) ∈ Rj and M, π^2 ⊨ f
     M, π ⊨ f U g iff there exists i ∈ [1:|π|] such that M, π^i ⊨ g, and for all j ∈ [1:(i−1)], M, π^j ⊨ f

We write ⊨ f to indicate that f is valid, i.e., true at all states in all structures. When the structure M does not affect the truth of the formula (e.g., M, s1 ⊨ p depends only on s1 and p), it may be omitted (e.g., M, s1 ⊨ p is written as s1 ⊨ p). We introduce the abbreviations f ∨ g for ¬(¬f ∧ ¬g), f ⇒ g for ¬f ∨ g, and f ≡ g for (f ⇒ g) ∧ (g ⇒ f), which denote logical disjunction, implication, and equivalence, respectively. We also introduce a number of additional modalities as abbreviations: Yj f for ¬Xj ¬f, Ff for [true U f], Gf for ¬F¬f, f Uw g for [f U g] ∨ Gf, F¹f for GFf, G¹f for FGf. Yj is the "process-indexed weak nexttime" modality: Yj f intuitively means that if the immediate successor state along the maximal path exists, and is reached by executing one step of process Pj, then formula f holds in the immediate successor state. F is the "eventually" modality: Ff intuitively means that there is some state along the maximal path where f holds. G is the "always" modality: Gf intuitively means that f holds at every state along the maximal path. Uw is the "weak until" (or "unless") modality: f Uw g intuitively means that either g holds at some state along the maximal path and f holds at every state along the maximal path up to (but not necessarily including) that state, or g never holds along the maximal path and f holds at all states of the maximal path.
Finally, F¹ and G¹ are the infinitary modalities "infinitely often" and "almost everywhere" respectively: F¹f (respectively G¹f) is a formula which intuitively means that f holds in infinitely many states (respectively in all but a finite number of states) along the maximal path. Particularly useful modalities are AFf, which means that for every maximal path there exists a state on the path where f holds, and AGf, which means that f holds at every state along every path. Note that the unindexed nexttime operators can be viewed as abbreviations:

Xf ≡ (Xi1 f ∨ … ∨ XiK f)
Yf ≡ (Yi1 f ∧ … ∧ YiK f)

A formula of the form A[f U g] or E[f U g] is an eventuality formula. An eventuality corresponds to a liveness property in that it makes a promise that something does happen; this promise must be fulfilled. The eventuality A[f U g] (E[f U g]) is fulfilled for s in M provided that for every (respectively, for some) path starting at s, there exists a finite prefix of the path in M whose last state satisfies g and all of whose other states satisfy f. Since AFg and EFg are special cases of A[f U g] and E[f U g], respectively, they are also eventualities. In contrast, A[f Uw g], E[f Uw g] (and their special cases AGf and EGf) are invariance formulae. An invariance corresponds to a safety property since it asserts that whatever happens to occur (if anything) will meet certain conditions.
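On a finite structure, these branching-time modalities admit a standard fixpoint-based evaluation. The sketch below ignores the process-indexed nexttime Xj (it uses the unindexed relation R) and is an illustration of the semantics, not the paper's synthesis method; the state names and the toy structure are assumed for the example.

```python
def check_EX(states, R, sat_f):
    """EX f: states with some R-successor satisfying f."""
    return {s for s in states if any(u == s and t in sat_f for (u, t) in R)}

def check_EU(states, R, sat_f, sat_g):
    """E[f U g] as a least fixpoint: E[f U g] = g ∨ (f ∧ EX E[f U g])."""
    sat = set(sat_g)
    while True:
        new = sat | (sat_f & check_EX(states, R, sat))
        if new == sat:
            return sat
        sat = new

def check_AG(states, R, sat_f):
    """AG f = ¬E[true U ¬f]."""
    return states - check_EU(states, R, states, states - sat_f)

# Toy fragment of a mutual exclusion structure (hypothetical encoding):
# the violating state C1C2 is present but unreachable, so AG ¬(C1 ∧ C2)
# fails only there.
S = {"N1N2", "T1N2", "C1N2", "C1C2"}
R = {("N1N2", "T1N2"), ("T1N2", "C1N2"), ("C1N2", "N1N2")}
bad = {"C1C2"}                      # states violating ¬(C1 ∧ C2)
print(sorted(check_AG(S, R, S - bad)))
# -> ['C1N2', 'N1N2', 'T1N2']
```

EF, AF, and the weak-until forms follow from the abbreviations given above (e.g., EFg = E[true U g], AGf as shown), so these three procedures suffice for the invariant components AGf′ used in section 7.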

