Backward Stochastic Bisimulation in CSL Model Checking

Jeremy Sproston and Susanna Donatelli
Dipartimento di Informatica, Università di Torino, 10149 Torino, Italy
[email protected], [email protected]

Supported in part by MIUR-FIRB Perf.

Abstract

Equivalence relations can be used to reduce the state space of a system model, thereby permitting more efficient analysis. We study backward stochastic bisimulation in the context of model checking continuous-time Markov chains against Continuous Stochastic Logic (CSL) properties. While there are simple CSL properties that are not preserved when reducing the state space of a continuous-time Markov chain using backward stochastic bisimulation, we show that the equivalence can nevertheless be used in the verification of a practically significant class of CSL properties. Furthermore, we identify the logical properties for which the requirement on the equality of state-labelling sets (normally imposed on state equivalences in a model-checking context) can be omitted from the definition of the equivalence, resulting in a better state-space reduction.
1. Introduction

The practical applicability of model checking [13] is often limited by the excessive size of the system's state space. One method to reduce the size of the state space is to use a notion of state equivalence in order to combine suitably equivalent states into a single super-state. The resulting, reduced state space, called a quotient, can then be used for analysis. In this paper, we study the use of equivalence relations to reduce the state space of stochastic models, represented as continuous-time Markov chains (CTMCs). Forward bisimulation of stochastic systems, which is defined according to the outgoing transitions of a state, has been widely studied [20, 9, 19, 18, 6]. Forward stochastic bisimulation is related to the notion of ordinary (or strong) lumpability of Markov chains [20]. Instead, our focus is on backward stochastic bisimulation, which is related to exact lumpability of Markov chains [23, 8, 10]. In contrast to forward bisimulation on stochastic systems, backward stochastic bisimulation is defined according to the incoming transitions of a state. An advantage of backward stochastic bisimulation over its forward counterpart is that, given an appropriate side condition on the initial distribution of the CTMC, each state within an equivalence class has an equal probability, both for the transient and the steady-state distributions.

We study backward stochastic bisimulation in the context of model checking Continuous Stochastic Logic (CSL) properties [1, 4]. The logic CSL includes a probabilistic operator, which can refer to the probability with which a certain property is satisfied, and a steady-state operator, which can refer to the probabilities of the system being in certain states in equilibrium. The model-checking approach consists of considering a temporal logic formula $\Phi$, and identifying which states of the system model satisfy progressively larger subformulae of $\Phi$, until finally $\Phi$ itself is considered. A useful property of an equivalence for reducing the state space of a system model in this context is that all states within a class of the equivalence either satisfy a formula or do not satisfy it. In such a case, it suffices to reason about classes rather than states, as classes can be used to identify exactly the state sets which satisfy a formula. On the one hand, we show in this paper that there are CSL properties that are not preserved in this way when reducing the state space of a CTMC using backward stochastic bisimulation: for example, probabilities of reaching state sets in the future may differ in backward stochastic bisimilar states. On the other hand, we show that the equivalence can be used in the verification of a practically significant class of CSL properties, namely those formulae without nesting of probabilistic or steady-state operators, given an initial distribution of the CTMC. Furthermore, in the context of qualitative properties, we extend the results of Buchholz [10] to show that backward stochastic bisimilar states exhibit the same infinite traces, and therefore satisfy the same linear temporal logic properties. This result makes use of the approach of Lynch and Vaandrager [22] for forward and backward simulation relations of untimed, non-stochastic transition systems. The forward (backward) stochastic bisimulation quotient can be computed using the algorithm defined in [15], which runs in $O(m \log n)$ time for a CTMC with $n$ states and
$m$ transitions. This quotient is essentially the strongly (exactly) lumped CTMC, and can be augmented with a labelling condition.

In Section 2, we recall the definition of CTMCs, and in Section 3 both forward and backward stochastic bisimulation are defined. We consider qualitative properties of backward stochastic bisimulation in Section 4, whereas in Section 5 the use of backward stochastic bisimulation in the verification of subclasses of CSL is explored. In Section 6 we study how the subclass of CSL considered in Section 5 may be extended by applying backward stochastic bisimulation to certain parts of the state space of a CTMC, and forward stochastic bisimulation elsewhere. Finally, in Section 7 we conclude the paper.

2. Continuous-Time Markov Chains

We consider continuous-time Markov chains with a labelling condition, which associates with every state a set of atomic propositions which are valid in that state. Let $AP$ be a set of atomic propositions, which will be fixed throughout the remainder of the paper, and let $\mathbb{R}_{\geq 0}$ be the set of non-negative reals. A probability distribution on a set of elements $S$ is a function $\mu : S \to [0,1]$ such that $\sum_{s \in S} \mu(s) = 1$. We use $\mu_s$ to denote the probability distribution which assigns probability 1 to the single element $s$.
Definition 2.1 A continuous-time Markov chain (CTMC) is a quadruple $\mathcal{C} = (S, R, p, L)$ comprising a finite set of states $S$, a rate transition function $R : S \times S \to \mathbb{R}_{\geq 0}$, an initial probability distribution $p$ on $S$, and a labelling function $L : S \to 2^{AP}$.

The interpretation of the rate transition function is that $R(s, s') > 0$ if and only if there exists a transition from state $s$ to state $s'$, and that the probability that this transition is triggered within $t$ time units is $1 - e^{-R(s,s')t}$ (that is, the duration of a transition from $s$ to $s'$ is governed by an exponential distribution with rate $R(s, s')$). As in [4], we model self-looping transitions, corresponding to $R(s, s) > 0$, in an explicit manner, in order to retain the standard interpretation of temporal logic formulae. In our context, we may distinguish as different two CTMCs that have the same infinitesimal generator (and the same transient and steady-state distributions), but have different self-looping behaviour.

Let the exit rate $E(s)$ for the state $s \in S$ be defined by $E(s) = \sum_{s' \in S} R(s, s')$. A state $s$ is called absorbing if and only if $E(s) = 0$. If $R(s, s') > 0$ and $t \in \mathbb{R}_{\geq 0}$, then we say that there exists a transition of duration $t$ from state $s$ to state $s'$, denoted by $s \xrightarrow{t} s'$. An infinite path is an infinite sequence $s_0 \xrightarrow{t_0} s_1 \xrightarrow{t_1} \cdots$ of transitions. A finite path is a sequence $s_0 \xrightarrow{t_0} s_1 \xrightarrow{t_1} \cdots s_{n-1} \xrightarrow{t_{n-1}} s_n$ of transitions such that $s_n$ is absorbing. Let $Path^{\mathcal{C}}$ be the set of paths of $\mathcal{C}$.

Let $\mu$ be a probability distribution on the set $S$ of states. The probability measure given by $\mu$ using the standard cylinder set construction is denoted by $Prob^{\mathcal{C}}_{\mu}$ [4]: then, for a set of paths $\Pi \subseteq Path^{\mathcal{C}}$, the probability of $\mathcal{C}$ exhibiting the paths in $\Pi$ after commencing from the starting distribution $\mu$ is $Prob^{\mathcal{C}}_{\mu}(\Pi)$. Often, we let the starting distribution be the initial distribution of the CTMC (that is, $\mu = p$), or let the starting distribution assign probability 1 to a single state $s$; in the latter case, we write $Prob^{\mathcal{C}}_{s}$ rather than $Prob^{\mathcal{C}}_{\mu_s}$.

For any infinite path $\omega = s_0 \xrightarrow{t_0} s_1 \xrightarrow{t_1} \cdots$ and any $i \in \mathbb{N}$, let $\omega(i) = s_i$, the $(i+1)$st state of $\omega$, let $\delta(\omega, i) = t_i$, and, for $t \in \mathbb{R}_{\geq 0}$ and $i$ the smallest index such that $t \leq \sum_{j=0}^{i} t_j$, let $\omega@t = \omega(i)$, the state occupied at time $t$. For any finite path $\omega = s_0 \xrightarrow{t_0} s_1 \xrightarrow{t_1} \cdots s_l$, the state $\omega(i)$ and duration $\delta(\omega, i) = t_i$ are only defined if $i < l$, and are defined as in the infinite-path case. We also let $\delta(\omega, l) = \infty$. Furthermore, for $t > \sum_{j=0}^{l-1} t_j$, let $\omega@t = s_l$; otherwise, $\omega@t$ is defined as in the infinite-path case.

A transient probability is the probability of being in a certain state $s$ at time $t$ given an initial distribution $\mu$. In the model-checking context, we can express a transient probability in terms of paths, as $\pi^{\mathcal{C}}(\mu, s, t) = Prob^{\mathcal{C}}_{\mu}\{\omega \in Path^{\mathcal{C}} \mid \omega@t = s\}$. The steady-state probabilities are used to refer to the long-run average probability of the CTMC being in a state, and are defined by $\pi^{\mathcal{C}}(\mu, s) = \lim_{t \to \infty} \pi^{\mathcal{C}}(\mu, s, t)$. If the CTMC is ergodic, the steady-state distribution does not depend on $\mu$, and we write $\pi^{\mathcal{C}}(s)$ rather than $\pi^{\mathcal{C}}(\mu, s)$. For $S' \subseteq S$, let $\pi^{\mathcal{C}}(\mu, S', t) = \sum_{s \in S'} \pi^{\mathcal{C}}(\mu, s, t)$ and let $\pi^{\mathcal{C}}(\mu, S') = \sum_{s \in S'} \pi^{\mathcal{C}}(\mu, s)$.
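To make these quantities concrete, the following sketch (ours, not the paper's) computes the transient distribution $\pi^{\mathcal{C}}(\mu, \cdot, t)$ for a CTMC stored as a dense rate matrix, using standard uniformisation; the function names, the dense representation and the simple truncation of the Poisson weights are assumptions for illustration only.

```python
import numpy as np

def exit_rates(R):
    """E(s) = sum of R(s, s') over all states s'."""
    return R.sum(axis=1)

def transient(R, mu, t, eps=1e-12):
    """pi(mu, ., t): state distribution at time t, starting from mu.

    Plain uniformisation of the generator Q = R - diag(E); adequate for
    moderate values of lambda*t (a production tool would use Fox-Glynn)."""
    mu = np.asarray(mu, dtype=float)
    E = exit_rates(R)
    Q = R - np.diag(E)
    lam = max(E.max(), 1e-9)           # uniformisation rate
    P = np.eye(len(mu)) + Q / lam      # stochastic matrix of the uniformised DTMC
    dist = np.zeros(len(mu))
    term = mu.copy()                   # mu * P^k, built incrementally
    k, weight = 0, np.exp(-lam * t)    # Poisson(lam*t) weight for k = 0
    while weight > eps or k <= lam * t:
        dist += weight * term
        k += 1
        weight *= lam * t / k
        term = term @ P
    return dist
```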
3. Stochastic bisimulation

We proceed to define the state equivalence relation for CTMCs called stochastic bisimulation. The equivalence is studied in two principal forms: the first, forward bisimulation, identifies states as equivalent if their outgoing transitions can be related, whereas the second, backward bisimulation, identifies states as equivalent if their incoming transitions can be related. Forward stochastic bisimulation is related to the notion of ordinary lumpability (also called strong lumpability) in Markov chains [20], while backward stochastic bisimulation is a variant of exact lumpability [23]. We base the equivalences on the rate transition function $R$ of the CTMC, and not on its infinitesimal generator (hence, our equivalences take self-loops into account). Given a CTMC $\mathcal{C} = (S, R, p, L)$, a state $s \in S$ and a set of states $C \subseteq S$, let $R(s, C) = \sum_{s' \in C} R(s, s')$, and let $R(C, s) = \sum_{s' \in C} R(s', s)$.
Definition 3.1 Let $\mathcal{C} = (S, R, p, L)$ be a CTMC. An equivalence relation $\sim$ over the set $S$ of states is a forward stochastic bisimulation of $\mathcal{C}$ if, for all states $s, s' \in S$ such that $s \sim s'$, we have $R(s, C) = R(s', C)$ for each equivalence class $C \in S/\!\sim$.

[Figure 1. CTMCs which can be reduced by only one of forward or backward stochastic bisimulation: $\mathcal{C}_A$ (forward, not backward) and $\mathcal{C}_B$ (backward, not forward).]
Note that forward stochastic bisimilar states $s, s'$ have the same exit rate; that is, $E(s) = E(s')$. We now consider backward stochastic bisimulation, and use a definition adapted from the inverse performance bisimulation of [10]. Our definition of backward stochastic bisimulation includes a condition which stipulates that the exit rates of equivalent states are the same, which is necessary in our context for making the equivalence useful for the verification of CSL path properties (that is, the properties next and until; see Section 5). As noted above, this condition on exit rates is implicit in the definition of forward stochastic bisimulation.

Definition 3.2 Let $\mathcal{C} = (S, R, p, L)$ be a CTMC. An equivalence relation $\sim$ over the set $S$ of states is a backward stochastic bisimulation of $\mathcal{C}$ if, for all states $s, s' \in S$ such that $s \sim s'$, we have:

1. $R(C, s) = R(C, s')$ for each equivalence class $C \in S/\!\sim$, and

2. $E(s) = E(s')$.
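Both definitions can be checked directly for a candidate partition of the state space. The sketch below is ours (the paper fixes no algorithm or data structure here): `R` is a dense rate matrix, `partition` a list of lists of state indices, and rates are rounded before comparison to sidestep floating-point noise.

```python
import numpy as np

def is_forward_bisimulation(R, partition):
    """Definition 3.1: R(s, C) agrees for all states of the same block."""
    for block in partition:
        for C in partition:
            if len({round(R[s, C].sum(), 12) for s in block}) > 1:
                return False
    return True

def is_backward_bisimulation(R, partition):
    """Definition 3.2: R(C, s) agrees within each block, and so do exit rates."""
    E = R.sum(axis=1)
    for block in partition:
        if len({round(E[s], 12) for s in block}) > 1:                  # condition 2
            return False
        for C in partition:
            if len({round(R[C, s].sum(), 12) for s in block}) > 1:     # condition 1
                return False
    return True
```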
Example. In Figure 1, we illustrate two CTMCs which have the same transition structure, but with different rates (for simplicity, we do not consider the initial distribution or the labelling function). Applying forward stochastic bisimulation to $\mathcal{C}_A$ results in the aggregation of $s_1$ and $s_2$ into a single equivalence class, and also of $s_3$ and $s_4$ into a single class; however, backward stochastic bisimulation will not result in the aggregation of any states. Vice versa, backward stochastic bisimulation applied to $\mathcal{C}_B$ will aggregate $s_1$ and $s_2$ into a class, and $s_3$ and $s_4$ into another class. Forward stochastic bisimulation cannot aggregate any states of $\mathcal{C}_B$.

Two states $s, s' \in S$ of a continuous-time Markov chain $\mathcal{C}$ are forward stochastic bisimilar, denoted by $s \cong^f s'$, if there exists a forward stochastic bisimulation $\sim$ such that $s \sim s'$. If the equivalence is instead a backward stochastic bisimulation, then states can also be identified as backward stochastic bisimilar, denoted by $\cong^b$. Note that $\cong^f$ and $\cong^b$ correspond to the coarsest equivalence relations of each type.

We now consider two common "static" conditions on states, which can be combined with the two definitions of stochastic equivalence given above to obtain equivalence relations which can be more distinguishing. Firstly, we consider a condition which requires that equivalent states must have the same initial probability (possibly 0); secondly, we consider a condition which requires equality of state-labelling in equivalent states.

Definition 3.3 Let $\mathcal{C} = (S, R, p, L)$ be a CTMC. An equivalence relation $\sim$ over the set $S$ of states satisfies:

- the initial condition if, for all states $s, s' \in S$ such that $s \sim s'$, we have $p(s) = p(s')$;

- the state-labelling condition if, for all states $s, s' \in S$ such that $s \sim s'$, we have $L(s) = L(s')$.
We use subscripts I and L to denote the initial and state-labelling conditions, respectively, that an equivalence satisfies. For example, $\sim_I$ denotes an equivalence relation satisfying the initial condition, $\sim_L$ satisfies the state-labelling condition, whereas $\sim_{IL}$ satisfies both conditions. The imposition of a state-labelling condition is standard when relating temporal logic and bisimulation [7, 2, 17, 4]. The initial condition is relevant in the case of the calculation of transient probabilities, and of steady-state probabilities for non-ergodic CTMCs. We combine the notation for stochastic bisimulation and the static conditions of Definition 3.3 to obtain new equivalences, for example, $\cong^f_I$ (the coarsest forward stochastic bisimulation satisfying the initial condition) or $\cong^b_{IL}$ (the coarsest backward stochastic bisimulation satisfying both the initial and state-labelling conditions).

The definition of an equivalence relation on the state space of a CTMC can be used to define a quotient CTMC, the states of which are classes of the equivalence relation. We consider the definition of Buchholz [10].

Definition 3.4 [10] Let $\mathcal{C} = (S, R, p, L)$ be a CTMC and $\sim$ be an equivalence relation on $S$. The quotient of $\mathcal{C}$ with respect to $\sim$ is a CTMC $\mathcal{C}/\!\sim\; = (S/\!\sim, R/\!\sim, p/\!\sim, L/\!\sim)$, where:

- $S/\!\sim$ is the set of equivalence classes of $\sim$;

- $R/\!\sim\; : S/\!\sim \times S/\!\sim \to \mathbb{R}_{\geq 0}$ is defined such that, for each pair of classes $C, C' \in S/\!\sim$, we have
$$R/\!\sim(C, C') = \frac{\sum_{s \in C} \sum_{s' \in C'} R(s, s')}{|C|};$$

- $p/\!\sim\; : S/\!\sim \to [0, 1]$ is defined by $p/\!\sim(C) = \sum_{s \in C} p(s)$ for each $C \in S/\!\sim$;

- $L/\!\sim\; : S/\!\sim \to 2^{AP}$ is defined by $L/\!\sim(C) = \bigcup_{s \in C} L(s)$ for each $C \in S/\!\sim$.
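A direct transcription of Definition 3.4 is straightforward once a partition is available. The sketch below is ours, with invented names; it builds the quotient rate matrix, initial distribution and labelling from dense inputs.

```python
import numpy as np

def quotient(R, p, labels, partition):
    """Quotient CTMC of Definition 3.4 for a partition (list of lists of states)."""
    k = len(partition)
    Rq, pq, Lq = np.zeros((k, k)), np.zeros(k), []
    for i, C in enumerate(partition):
        for j, D in enumerate(partition):
            # R/~(C, C') = (sum of R(s, s') over s in C, s' in C') / |C|
            Rq[i, j] = R[np.ix_(C, D)].sum() / len(C)
        pq[i] = sum(p[s] for s in C)                     # p/~(C)
        Lq.append(set().union(*(labels[s] for s in C)))  # L/~(C): union of labels
    return Rq, pq, Lq
```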
For each $C \in S/\!\sim$, if $\sim$ satisfies the state-labelling condition, then $L/\!\sim(C) = L(s)$ for any/each state $s \in C$. We note that the labelling function $L/\!\sim$ will be used in this paper only in the context of equivalence relations which satisfy the state-labelling condition (hence $L/\!\sim$ could equally have been defined as the intersection, rather than the union, of the label sets of a class's constituent states). In the case in which $\sim$ is a forward stochastic bisimulation, the definition above collapses to the definitions of [9, 19, 4]. Both the forward and the backward stochastic bisimulation equivalence classes of a CTMC can be computed in time $O(|R| \log |S|)$, where $|R|$ is the number of state pairs with positive rate according to $R$ [15].

We recall the following result from [10], which relates the transient and steady-state probabilities of the original and quotient CTMCs obtained from backward stochastic bisimulation under the initial condition. More precisely, the theorem states that each state in an equivalence class has the same probability of being reached after $t$ time units, and that this probability can be obtained by calculating the probability of reaching the class after $t$ time units in the quotient system and then dividing by the cardinality of the class.

Theorem 3.5 [10] Let $\mathcal{C} = (S, R, p, L)$ be a CTMC, and $\mathcal{C}/\!\cong^b_I$ be its quotient CTMC with respect to $\cong^b_I$. Then, for all classes $C \in S/\!\cong^b_I$ and for any state $s \in C$, we have:
$$\pi^{\mathcal{C}}(p, s, t) = \frac{\pi^{\mathcal{C}/\cong^b_I}(p/\!\cong^b_I, C, t)}{|C|}.$$

Note that this theorem implies that $\pi^{\mathcal{C}}(p, s) = \pi^{\mathcal{C}/\cong^b_I}(p/\!\cong^b_I, C)/|C|$. The theorem can also be applied to the equivalence relation $\cong^b_{IL}$, because $\cong^b_{IL}$ is more distinguishing than $\cong^b_I$. The theorem describes the main (theoretical) advantage of backward equivalence over forward equivalence, for which the theorem does not hold.
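The theorem can be checked numerically on small examples. The sketch below is an illustration of ours, not taken from the paper: the five-state CTMC, the partition and the function name are invented, the partition is a backward stochastic bisimulation satisfying the initial condition, and transient distributions are computed with scipy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def transient_expm(R, mu, t):
    Q = R - np.diag(R.sum(axis=1))      # infinitesimal generator
    return np.asarray(mu, dtype=float) @ expm(Q * t)

# Invented example: state 0 moves to states 1 and 2 (rate 2 each); 1 and 2 move
# to the absorbing states 3 and 4 with swapped rates 1 and 2.
R = np.zeros((5, 5))
R[0, 1] = R[0, 2] = 2.0
R[1, 3], R[1, 4] = 1.0, 2.0
R[2, 3], R[2, 4] = 2.0, 1.0
p = np.array([1.0, 0, 0, 0, 0])
partition = [[0], [1, 2], [3, 4]]       # backward bisimulation; p equal inside blocks

# Quotient as in Definition 3.4 (rates divided by the source-class size).
Rq = np.array([[R[np.ix_(C, D)].sum() / len(C) for D in partition] for C in partition])
pq = np.array([p[C].sum() for C in partition])

t = 0.7
pi, piq = transient_expm(R, p, t), transient_expm(Rq, pq, t)
for i, C in enumerate(partition):
    for s in C:
        assert np.isclose(pi[s], piq[i] / len(C))   # Theorem 3.5
```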
4. Qualitative properties

In this section, we consider briefly the qualitative properties of backward stochastic bisimulation. More precisely, we add to the results of Buchholz [10], which described the equivalence of the set of finite behaviours of the unreduced CTMC and those of its quotient CTMC, in order to consider infinite behaviours.

A transition system $T = (S, \Rightarrow, \alpha, L)$ is a tuple comprising a finite set $S$ of states, a transition relation $\Rightarrow \subseteq S \times S$, a set of initial states $\alpha \subseteq S$, and a labelling function $L : S \to 2^{AP}$.

Definition 4.1 The transition system of a CTMC $\mathcal{C} = (S, R, p, L)$ is the tuple $T_{\mathcal{C}} = (S, \Rightarrow, \alpha, L)$, where:

- $\Rightarrow \subseteq S \times S$ is such that $(s, s') \in \Rightarrow$ if and only if $R(s, s') > 0$, and

- $\alpha \subseteq S$ is such that $s \in \alpha$ if and only if $p(s) > 0$.
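For illustration, the transition system of Definition 4.1 can be derived as follows; this is a sketch of ours, and the dense-matrix representation and function name are assumptions.

```python
import numpy as np

def transition_system(R, p):
    """Edges and initial states of T_C; the state set and labelling carry over unchanged."""
    n = len(p)
    edges = {(s, t) for s in range(n) for t in range(n) if R[s, t] > 0}
    init = {s for s in range(n) if p[s] > 0}
    return edges, init
```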
An initialized path of $T_{\mathcal{C}}$ is an infinite sequence $s_0 s_1 s_2 \cdots$ such that $s_0 \in \alpha$ and $(s_i, s_{i+1}) \in \Rightarrow$ for all $i \in \mathbb{N}$. The trace of an initialized path $s_0 s_1 s_2 \cdots$ is the sequence $L(s_0) L(s_1) L(s_2) \cdots$. The set of traces of a transition system $T$, denoted by $Traces(T)$, is the set of traces of all initialized paths of $T$. We now proceed to define backward bisimulation on transition systems, following [22].

Definition 4.2 Let $T = (S, \Rightarrow, \alpha, L)$ be a transition system. An equivalence relation $\sim$ over the set $S$ of states is a backward bisimulation of $T$ if, for all states $s, s' \in S$ such that $s \sim s'$, if $(u, s) \in \Rightarrow$ then there exists $(u', s') \in \Rightarrow$ such that $u \sim u'$. Furthermore, the equivalence satisfies the initial condition if $s \sim s'$ implies that $s \in \alpha$ if and only if $s' \in \alpha$, and the state-labelling condition if $s \sim s'$ implies that $L(s) = L(s')$.

Given a transition system, let $\simeq^b_{IL}$ be the coarsest backward bisimulation which satisfies the initial condition and the state-labelling condition. Given two transition systems $T_1 = (S_1, \Rightarrow_1, \alpha_1, L_1)$ and $T_2 = (S_2, \Rightarrow_2, \alpha_2, L_2)$ such that $S_1 \cap S_2 = \emptyset$, we can construct the "union" transition system $T_1 \cup T_2$ in the following way: let $T_1 \cup T_2 = (S_1 \cup S_2, \Rightarrow_1 \cup \Rightarrow_2, \alpha_1 \cup \alpha_2, L_{12})$, where $L_{12}(s) = L_1(s)$ if $s \in S_1$ and $L_{12}(s) = L_2(s)$ if $s \in S_2$. Two transition systems $T_1 = (S_1, \Rightarrow_1, \alpha_1, L_1)$ and $T_2 = (S_2, \Rightarrow_2, \alpha_2, L_2)$ are backward bisimilar if, in the union transition system $T_1 \cup T_2$, for each $s_1 \in \alpha_1$ there exists $s_2 \in \alpha_2$ such that $s_1 \simeq^b_{IL} s_2$ (and vice versa). It follows from the work of Lynch and Vaandrager concerning backward simulations of (image-finite) transition systems [22] that the sets of traces of (finite-state) backward bisimilar transition systems are equal, as stated formally by the following theorem.
Theorem 4.3 [22] Let $T_1$ and $T_2$ be transition systems. If $T_1$ and $T_2$ are backward bisimilar, then $Traces(T_1) = Traces(T_2)$.

We now show that $T_{\mathcal{C}}$ and $T_{\mathcal{C}/\cong^b_{IL}}$ are backward bisimilar. Once we have established this result, from Theorem 4.3 we will have that the sets of traces of $T_{\mathcal{C}}$ and $T_{\mathcal{C}/\cong^b_{IL}}$ are equal. Our proof is similar to that of Buchholz [10], which considered the transition-by-transition correspondence of $T_{\mathcal{C}}$ and $T_{\mathcal{C}/\cong^b_{IL}}$ in order to show equivalence of finite traces.

Proposition 4.4 Let $\mathcal{C}$ be a CTMC. Then $T_{\mathcal{C}}$ and $T_{\mathcal{C}/\cong^b_{IL}}$ are backward bisimilar.

Proof. Let $\mathcal{C} = (S, R, p, L)$, $\mathcal{C}/\!\cong^b_{IL}\; = (\widetilde{S}, \widetilde{R}, \widetilde{p}, \widetilde{L})$, $T_{\mathcal{C}} = (S, \Rightarrow, \alpha, L)$, and $T_{\mathcal{C}/\cong^b_{IL}} = (\widetilde{S}, \widetilde{\Rightarrow}, \widetilde{\alpha}, \widetilde{L})$. We show that, for $s \in S$ and $C \in \widetilde{S}$, we have that $s \in C$ implies $s \simeq^b_{IL} C$.

The first task is to show that $s \in C$ implies $L(s) = \widetilde{L}(C)$, which follows immediately from the state-labelling condition and the definition of $\widetilde{L}$.

The second task is to show that $s \in C$ implies that $s \in \alpha$ if and only if $C \in \widetilde{\alpha}$. On the one hand, $s \in \alpha$ implies that $p(s) > 0$. Then, as $s \in C$, and as $\widetilde{p}(C) = \sum_{s \in C} p(s)$, we have that $\widetilde{p}(C) > 0$ and hence $C \in \widetilde{\alpha}$. On the other hand, $\widetilde{p}(C) > 0$ implies $p(s) > 0$ because $\cong^b_{IL}$ satisfies the initial condition, and therefore $p(s)$ is equal for all states in $C$. Hence $C \in \widetilde{\alpha}$ implies $s \in \alpha$.

The third task is to show that $s \in C$ implies that: $(s', s) \in \Rightarrow$ implies $(C', C) \in \widetilde{\Rightarrow}$ for the class $C'$ such that $s' \in C'$. From $(s', s) \in \Rightarrow$, we must have $R(s', s) > 0$, which, by Definition 3.4, implies that $\widetilde{R}(C', C) > 0$, which in turn gives us $(C', C) \in \widetilde{\Rightarrow}$.

The fourth task is to show that $s \in C$ implies that: $(C', C) \in \widetilde{\Rightarrow}$ implies $(s', s) \in \Rightarrow$ for some $s' \in C'$. From $(C', C) \in \widetilde{\Rightarrow}$, we must have $\widetilde{R}(C', C) > 0$. Now, because $C$ is a $\cong^b_{IL}$-equivalence class, we know that $R(C', u) = R(C', u')$ for all states $u, u' \in C$. Hence, by Definition 3.4, $\widetilde{R}(C', C) > 0$ implies that $R(C', u) > 0$ for all states $u \in C$. Then we know that $R(s', s) > 0$ for some $s' \in C'$, and we are done.

Corollary 4.5 Let $\mathcal{C}$ be a CTMC. Then $Traces(T_{\mathcal{C}}) = Traces(T_{\mathcal{C}/\cong^b_{IL}})$.

Because trace equivalence implies equivalence of linear temporal logic properties, these results mean that we can use the quotient CTMC $\mathcal{C}/\!\cong^b_{IL}$ in place of the unreduced CTMC $\mathcal{C}$ for qualitative model checking of such properties.
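For finite transition systems, the coarsest backward bisimulation of Definition 4.2 satisfying both static conditions can be computed by a simple signature-refinement loop. The following sketch is ours and is not taken from the paper or from [22]; the representation (sets of states, edges as pairs) and all names are assumptions.

```python
def coarsest_backward_bisimulation(states, edges, init, labels):
    """Refine a partition on signatures: current block, plus the set of blocks
    containing predecessors; the initial partition separates by label and by
    membership in the set of initial states."""
    pred = {s: {u for (u, v) in edges if v == s} for s in states}
    blocks = {}
    for s in states:
        blocks.setdefault((frozenset(labels[s]), s in init), set()).add(s)
    partition = list(blocks.values())
    while True:
        index = {s: i for i, block in enumerate(partition) for s in block}
        refined = {}
        for s in states:
            sig = (index[s], frozenset(index[u] for u in pred[s]))
            refined.setdefault(sig, set()).add(s)
        new_partition = list(refined.values())
        if len(new_partition) == len(partition):
            return new_partition
        partition = new_partition
```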
5. Continuous Stochastic Logic

5.1. Syntax and semantics

We now recall the syntax and semantics of Continuous Stochastic Logic (CSL) [1, 4].

Definition 5.1 The syntax of CSL is defined as follows:
$$\Phi ::= a \mid \Phi \wedge \Phi \mid \neg\Phi \mid P_{\bowtie\lambda}(X^I \Phi) \mid P_{\bowtie\lambda}(\Phi\, U^I\, \Phi) \mid S_{\bowtie\lambda}(\Phi)$$
where $a \in AP$ is an atomic proposition, $I \subseteq \mathbb{R}_{\geq 0}$ is a non-empty interval, $\bowtie \in \{<, \leq, \geq, >\}$ is a comparison operator, and $\lambda \in [0, 1]$ is a probability.

The interpretation of the path formulae $X^I \Phi$ and $\Phi_1 U^I \Phi_2$ is as follows: $X^I \Phi$ is true for a path if the state reached after the first transition along the path satisfies $\Phi$, and the duration of this transition lies in the interval $I$; the formula $\Phi_1 U^I \Phi_2$ is true along a path if $\Phi_2$ is true at some state along the path, the time elapsed before reaching this state lies in $I$, and $\Phi_1$ is true along the path until that state. The probabilistic quantifier $P$ is used to refer to the probability of satisfying a path formula, while the steady-state quantifier $S$ refers to the steady-state probability of satisfying a CSL subformula.

Examples of CSL formulae, taken from [4], are the following. The formula $S_{\leq 10^{-5}}(a)$ is true if the probability of being in a state labelled by the atomic proposition $a$ in steady-state is not greater than 0.00001. The formula $P_{\leq 0.01}(a\, U^{[10,20]}\, b)$ is true if the probability of being in a $b$-labelled state after between 10 and 20 time units have elapsed, while remaining in $a$-labelled states before that point, is not greater than 0.01. A third example, $S_{\bowtie 0.9}(P_{\cdots}(\cdots))$, nests a probabilistic operator inside the steady-state operator.

A subgraph $B$ of the graph $G$ underlying the CTMC is a bottom strongly connected component (BSCC) if it is a maximal strongly connected component such that edges from states within $B$ always point at vertices which are also within $B$. Let $\mathcal{B}$ be the set of BSCCs of $\mathcal{C}$. Note that a strongly connected CTMC is an ergodic CTMC. Let $S' \subseteq S$ be a subset of states of $\mathcal{C}$. Then $\mathcal{C}[S'] = (S, R', p, L)$ is the CTMC obtained from $\mathcal{C}$ by letting $R'(s', s) = 0$ for each $s' \in S'$, $s \in S$ (that is, all states in $S'$ are made absorbing). The following lemma asserts that the states of a BSCC may be turned into absorbing states without altering the satisfaction sets of a $P$-until formula in three cases: (1) if the second argument of the until formula is satisfied in no state of the BSCC, (2) if the arguments of the until formula are both satisfied in all states of the BSCC, and (3) if the second argument of the until formula is satisfied in all states of the BSCC and the lower bound of the formula's time interval is 0. We write $\models_{\mathcal{C}}$ for the CSL satisfaction relation $\models$ interpreted on $\mathcal{C}$.
Lemma 6.1 Let $B$ be a BSCC of the CTMC $\mathcal{C}$, and let $P_{\bowtie\lambda}(\Phi_1 U^I \Phi_2)$ be a CSL formula. Then, for each state $s \in S \setminus B$, if at least one of the following conditions holds:

1. $s' \not\models_{\mathcal{C}} \Phi_2$ for all states $s' \in B$,

2. $s' \models_{\mathcal{C}} \Phi_1 \wedge \Phi_2$ for all states $s' \in B$,

3. $s' \models_{\mathcal{C}} \Phi_2$ for all states $s' \in B$, and $\inf I = 0$,

then we have $s \models_{\mathcal{C}} P_{\bowtie\lambda}(\Phi_1 U^I \Phi_2)$ if and only if $s \models_{\mathcal{C}[B]} P_{\bowtie\lambda}(\Phi_1 U^I \Phi_2)$.

For example, consider the formula $P_{\leq 0.01}(a\, U^{[10,20]}\, b)$: if the CTMC enters a BSCC in which $b$ is not true for any of its states, then $b$ will never become true in the future. Hence, all states in the BSCC can be made absorbing without affecting the probability of satisfying the until formula. Similarly, if the CTMC enters a BSCC in which $a$ and $b$ are both always true, then the until formula must be satisfied. Finally, in the case of the formula $P_{\leq 0.01}(a\, U^{[0,15]}\, b)$, if a BSCC in which $b$ is always true is entered, and the elapsed time is less than 15 on entry (which can be evaluated on the state space outside of the BSCC), then the until formula will be satisfied. We note that related reductions have been introduced by Baier et al. [4] (although not in the context of BSCCs). The conversion to absorbing states described by Lemma 6.1 requires a step to find the BSCCs of the CTMC in question (which has time complexity $O(|S| + |R|)$ [14]), followed by a check of whether $\Phi_1$ and/or $\Phi_2$ hold in the states of the BSCCs (which takes $O(|S|)$ time).
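The two preprocessing steps just described can be sketched as follows. The sketch is ours, with networkx standing in for the cited linear-time SCC algorithm: it computes the bottom strongly connected components of the CTMC's graph, and makes a chosen set of states absorbing, as in the construction of $\mathcal{C}[S']$.

```python
import numpy as np
import networkx as nx

def bsccs(R):
    """Bottom SCCs of the graph with an edge (s, t) whenever R[s, t] > 0."""
    n = len(R)
    G = nx.DiGraph((s, t) for s in range(n) for t in range(n) if R[s, t] > 0)
    G.add_nodes_from(range(n))                  # keep absorbing states as nodes
    bottom = []
    for scc in nx.strongly_connected_components(G):
        if all(t in scc for s in scc for t in G.successors(s)):
            bottom.append(sorted(scc))          # no edge leaves the component
    return bottom

def make_absorbing(R, states):
    """The CTMC C[S']: remove all outgoing rates of the states in S'."""
    Rb = R.copy()
    Rb[list(states), :] = 0.0
    return Rb
```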
In order to simplify the following explanation, we consider formulae of the form $P_{\bowtie'\lambda'}(\psi'\, U^{[0,t]}\, S_{\bowtie\lambda}(\psi))$ and $P_{\bowtie'\lambda'}(\Diamond^I S_{\bowtie\lambda}(\psi))$, where $\psi, \psi'$ are formulae of the syntax $\psi ::= a \mid \psi \wedge \psi \mid \neg\psi$ and where we use $\Diamond^I \Phi$ to abbreviate $\mathit{true}\, U^I\, \Phi$. From the semantics of the steady-state operator we have the following fact: for any BSCC $B$, either $s \models S_{\bowtie\lambda}(\psi)$ for all states $s \in B$, or $s \not\models S_{\bowtie\lambda}(\psi)$ for all states $s \in B$ (see [5]). Hence, in the case of $P_{\bowtie'\lambda'}(\psi'\, U^{[0,t]}\, S_{\bowtie\lambda}(\psi))$ and $P_{\bowtie'\lambda'}(\Diamond^I S_{\bowtie\lambda}(\psi))$, we know that all BSCCs of a CTMC satisfy at least one of the conditions of Lemma 6.1: either $S_{\bowtie\lambda}(\psi)$ is not satisfied within a BSCC, in which case condition (1) applies; or $S_{\bowtie\lambda}(\psi)$ is satisfied throughout a BSCC, in which case conditions (2) and (3) apply in the case of $P_{\bowtie'\lambda'}(\Diamond^I S_{\bowtie\lambda}(\psi))$ and $P_{\bowtie'\lambda'}(\psi'\, U^{[0,t]}\, S_{\bowtie\lambda}(\psi))$, respectively.

We now consider how the results of Section 5.2.2 can be used to define a model-checking algorithm for formulae such as $P_{\bowtie'\lambda'}(\Diamond^I S_{\bowtie\lambda}(\psi))$ and $P_{\bowtie'\lambda'}(\psi'\, U^{[0,t]}\, S_{\bowtie\lambda}(\psi))$ which exploits backward stochastic bisimulation. For the CTMC $\mathcal{C}$, we consider each of the BSCCs of $\mathcal{C}$ in turn, with the aim of determining whether the formula $S_{\bowtie\lambda}(\psi)$ holds in the BSCC. For a BSCC $B \in \mathcal{B}$, consider the CTMC $\mathcal{C}|_B$ obtained from $\mathcal{C}$ by restricting the components of $\mathcal{C}$ to states in $B$. As the resulting CTMC is strongly connected, steady-state probabilities are independent of the initial distribution; hence, we can write $\pi^B(s)$ to denote the steady-state probability of state $s$ within $B$, instead of $\pi^{\mathcal{C}|_B}(\mu, s)$. Next, observe that the BSCC $B$ is necessarily contained within some BSCC $\widetilde{B}$ of $\mathcal{C}|_B/\!\cong^b$: for every finite path $s_0 \to s_1 \to \cdots \to s_n$ in $B$, there exists a path $C_0 \to C_1 \to \cdots \to C_n$ in $\widetilde{B}$ such that $s_i \in C_i$ for each $0 \leq i \leq n$. Note that the fact that we consider the equivalence $\cong^b$, and not the equivalence $\cong^b_I$, which depends on the initial condition, suffices for the following reason. The initial condition is irrelevant to the steady-state distribution in $B$; hence, taking such an (arbitrarily chosen) initial distribution into account when constructing the quotient CTMC of $B$ would make no difference to the correspondence of results between the CTMC of $B$ and the resulting quotient CTMC. Then, as in the case of Section 5.2.2, we can obtain the following result:
$$\sum_{s \in B \cap Sat(\psi)} \pi^B(s) \;=\; \sum_{C \in \widetilde{B}} \pi^{\widetilde{B}}(C)\, \frac{|C \cap Sat(\psi)|}{|C|}.$$
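The quantities in this equation can be computed as follows. The sketch is ours: `steady_state` solves the balance equations of a strongly connected CTMC (one equation replaced by the normalisation constraint), and `sat_probability` evaluates the right-hand side on the backward quotient $\widetilde{B}$ of the BSCC, given its rate matrix, its partition into $\cong^b$-classes and the set of states satisfying $\psi$; the names and representation are assumptions.

```python
import numpy as np

def steady_state(R):
    """Stationary distribution of a strongly connected CTMC: solve pi Q = 0, sum(pi) = 1."""
    n = len(R)
    Q = R - np.diag(R.sum(axis=1))
    A = np.vstack([Q.T[:-1], np.ones(n)])   # drop one balance equation, add normalisation
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def sat_probability(Rq, partition, sat):
    """Right-hand side: sum over classes C of pi(C) * |C ∩ Sat(psi)| / |C|."""
    piq = steady_state(Rq)
    return sum(piq[i] * len(set(C) & set(sat)) / len(C)
               for i, C in enumerate(partition))
```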
Hence, by applying backward stochastic bisimulation without the initial and state-labelling conditions to a BSCC $B$, we can obtain $\sum_{s \in B \cap Sat(\psi)} \pi^B(s)$. Then, if $(\sum_{s \in B \cap Sat(\psi)} \pi^B(s)) \bowtie \lambda$, we know that all states within $B$ satisfy $S_{\bowtie\lambda}(\psi)$; otherwise, all states within $B$ do not satisfy $S_{\bowtie\lambda}(\psi)$. We can repeat this process for all the BSCCs of the CTMC.

It remains to consider whether $S_{\bowtie\lambda}(\psi)$ holds in the states of $\mathcal{C}$ which lie outside of the BSCCs. Observe the following fact, adapted from [4]:
$$\pi^{\mathcal{C}}(s, Sat(\psi)) = \sum_{B \in \mathcal{B}} \Big( ProbReach^{\mathcal{C}}(s, B) \cdot \sum_{s' \in B \cap Sat(\psi)} \pi^B(s') \Big), \qquad (1)$$

where $ProbReach^{\mathcal{C}}(s, S') = Prob^{\mathcal{C}}_s\{\omega \in Path^{\mathcal{C}} \mid \exists i.\ \omega(i) \in S'\}$ for $S' \subseteq S$. We advocate constructing a forward stochastic bisimulation quotient from all of the states outside of the BSCCs. The following lemma is immediate, given the fact that forward stochastic bisimulation with the state-labelling condition preserves CSL.
Lemma 6.2 Let $\mathcal{C}$ be a CTMC and $\mathcal{C}/\!\cong^f_L$ be its quotient CTMC with respect to forward stochastic bisimulation and the state-labelling condition. Then, for any state $s \in S$, the class $C \in S/\!\cong^f_L$ for which $s \in C$, and an atomic proposition $a \in AP$, we have $ProbReach^{\mathcal{C}}(s, Sat(a)) = ProbReach^{\mathcal{C}/\cong^f_L}(C, Sat(a))$.
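For completeness, reachability probabilities of the kind used in Equation (1) and in Lemma 6.2 can be approximated on the embedded discrete-time chain. The fixed-point iteration below is a sketch of ours; a model checker would instead solve a linear system after a graph-based precomputation of the states with reachability probability 0.

```python
import numpy as np

def prob_reach(R, target, max_iters=100_000, eps=1e-12):
    """ProbReach(s, target): probability of eventually visiting the target set, for every s."""
    n = len(R)
    E = R.sum(axis=1)
    P = np.zeros((n, n))
    for s in range(n):
        if E[s] > 0:
            P[s] = R[s] / E[s]        # embedded DTMC
        else:
            P[s, s] = 1.0             # absorbing states self-loop
    x = np.zeros(n)
    x[list(target)] = 1.0
    for _ in range(max_iters):
        new = P @ x
        new[list(target)] = 1.0       # target states are reached with probability 1
        if np.max(np.abs(new - x)) < eps:
            return new
        x = new
    return x
```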
We then extend the set of atomic propositions $AP$ with an atomic proposition $at_B$ for each $B \in \mathcal{B}$, and extend the labelling of a state $s \in S$ with $at_B$ if and only if $s \in B$. We also use Lemma 6.1 to make the states of each BSCC absorbing, obtaining the CTMC $\mathcal{C}[\bigcup\mathcal{B}]$ (recall that at least one condition of this lemma must hold, given our restriction to the formulae $P_{\bowtie'\lambda'}(\psi'\, U^{[0,t]}\, S_{\bowtie\lambda}(\psi))$ and $P_{\bowtie'\lambda'}(\Diamond^I S_{\bowtie\lambda}(\psi))$). Next, we construct the quotient CTMC $\widetilde{\mathcal{C}} = \mathcal{C}[\bigcup\mathcal{B}]/\!\cong^f_L$ using the equivalence $\cong^f_L$, where the state-labelling condition takes into account the additional labels of the form $at_B$. Then, to obtain $\pi^{\mathcal{C}}(s, Sat(\psi))$ for states $s$ outside of the BSCCs, it suffices to use Equation (1), but substituting $ProbReach^{\widetilde{\mathcal{C}}}(C, Sat(at_B))$ for $ProbReach^{\mathcal{C}}(s, B)$, where $s \in C$, according to Lemma 6.2. For each $B \in \mathcal{B}$, we have already computed the sum $\sum_{s \in B \cap Sat(\psi)} \pi^B(s)$, and hence we can substitute these values into Equation (1). Given the computed values of $\pi^{\mathcal{C}}(s, Sat(\psi))$, we can then decide whether $s \models S_{\bowtie\lambda}(\psi)$ for each $s \in S$. Thus, we have obtained a method establishing whether each state of the CTMC satisfies $S_{\bowtie\lambda}(\psi)$, first by using backward stochastic bisimulation on the states of the BSCCs, and then forward stochastic bisimulation on the remaining states.

A further reduction in the size of the state space can be made by combining all of the states of a BSCC into a single state after they have been made absorbing, as done in [4]. Moreover, if two BSCCs are found to have the same value $\sum_{s \in B \cap Sat(\psi)} \pi^B(s)$, then there is no need to introduce different atomic propositions for each of these BSCCs; this can have the effect of reducing the amount of subdivision that is necessary to distinguish BSCCs when computing the forward stochastic bisimulation relation. Furthermore, the forward stochastic bisimulation quotient CTMC may be reused to establish the states which satisfy the formulae $P_{\bowtie'\lambda'}(\psi'\, U^{[0,t]}\, S_{\bowtie\lambda}(\psi))$ and $P_{\bowtie'\lambda'}(\Diamond^I S_{\bowtie\lambda}(\psi))$: hence, we can use the quotient $\widetilde{\mathcal{C}}$ not only for obtaining the satisfaction set of $S_{\bowtie\lambda}(\psi)$, but also for resolving the $P$ part of the overall formula.

Although we concentrated our attention on the two formulae $P_{\bowtie'\lambda'}(\psi'\, U^{[0,t]}\, S_{\bowtie\lambda}(\psi))$ and $P_{\bowtie'\lambda'}(\Diamond^I S_{\bowtie\lambda}(\psi))$ for simplicity, the method of this section can be applied to more general formulae. For example, the method can be adapted in a straightforward way to $P_{\bowtie'\lambda'}(\Diamond^I (a \wedge S_{\bowtie\lambda}(\psi)))$ if the atomic proposition $a$ either holds in all states of a BSCC, or holds in none of the BSCC's states. Similarly, the method could be used to verify properties of the form $P_{\bowtie'\lambda'}(S_{\bowtie''\lambda''}(\psi'')\, U^I\, S_{\bowtie\lambda}(\psi))$. Furthermore, we can construct the backward stochastic bisimulation quotient only for some BSCCs, and apply forward stochastic bisimulation with the state-labelling condition to the others. This can be useful if some BSCCs satisfy the conditions of Lemma 6.1 and others do not. A disadvantage of the approach of this section is that it can only be applied after the CTMC has been constructed, and therefore cannot be used in, for example, a compositional manner on system sub-components.
7. Conclusions

We have presented a study of backward stochastic bisimulation in the context of CSL model checking. It has been shown that the choice between applying backward or forward equivalences is formula-dependent, in the sense that backward stochastic bisimulation cannot be applied to CSL formulae with arbitrary nesting. However, the degree of nesting in the practical examples of CSL formulae that have been given in the literature thus far is usually limited; this leads us to believe that the methods of Section 5 and Section 6 are often applicable to a significant class of properties. Our work has some parallels with that of [17] in the non-stochastic context, in which it was shown that branching-time temporal logic properties (with nesting of "path quantifiers") are not preserved by backward bisimulation, but linear-time properties are preserved.

We note that the results of this paper can also be applied in the case of discrete-time Markov chains and the temporal logic PCTL [16]. Furthermore, the results can be adapted to the case of Markov reward models and (an extension of) the logic CSRL [3]. Equivalences on Markov reward models are usually defined as satisfying a reward-labelling condition, which stipulates that reward labels in equivalent states must be equal. It can be shown that, for certain non-nested expected-reward properties, either the state-labelling condition or the reward-labelling condition can be omitted from backward stochastic bisimulation. However, it is not the case that both reward- and state-labelling conditions can be removed simultaneously; see [24] for more details.

Although in this paper we have worked at the CTMC level, the lumpability notions and stochastic bisimulations considered have also been used successfully for producing a quotient CTMC directly from a high-level description of the stochastic system, as is the case, for example, in PEPA [19] and in the symbolic reachability graph (SRG) construction of Stochastic Well-formed Nets (SWN) [12]. The SRG is based on strong and exact lumpability (forward and backward bisimulation). The more recent work on the Extended SRG is based on a strong (forward) equivalence; as observed in [11], an exact (backward) equivalence could be used instead, if it leads to a smaller number of equivalence classes. However, at the current state of research it is not possible to foresee which of the two works better for a given SWN model. Future work includes an implementation of the ideas of this paper within the framework of the probabilistic model checker PRISM [21], in particular to compare the reductions obtained from forward and backward equivalences on case studies.
References

[1] A. Aziz, K. Sanwal, V. Singhal, and R. Brayton. Model-checking continuous time Markov chains. ACM Transactions on Computational Logic, 1(1):162–170, 2000.
[2] A. Aziz, V. Singhal, F. Balarin, R. K. Brayton, and A. L. Sangiovanni-Vincentelli. It usually works: The temporal logic of stochastic systems. In P. Wolper, editor, Proceedings of the 7th International Conference on Computer Aided Verification (CAV'95), volume 939 of LNCS, pages 155–165. Springer-Verlag, 1995.
[3] C. Baier, B. Haverkort, H. Hermanns, and J.-P. Katoen. On the logical characterisation of performability properties. In U. Montanari, J. D. P. Rolim, and E. Welzl, editors, Proceedings of the 27th International Colloquium on Automata, Languages and Programming (ICALP'00), volume 1853 of LNCS, pages 780–792. Springer-Verlag, 2000.
[4] C. Baier, B. Haverkort, H. Hermanns, and J.-P. Katoen. Model-checking algorithms for continuous-time Markov chains. IEEE Transactions on Software Engineering, 29(6):524–541, 2003.
[5] P. Ballarini. Towards Compositional CSL Model Checking. PhD thesis, Dipartimento di Informatica, Università di Torino, 2004.
[6] M. Bernardo and R. Gorrieri. A tutorial on EMPA: A theory of concurrent processes with nondeterminism, priorities, probabilities and time. Theoretical Computer Science, 202:1–54, 1998.
[7] M. C. Browne, E. M. Clarke, and O. Grumberg. Characterizing finite Kripke structures in propositional temporal logic. Theoretical Computer Science, 59:115–131, 1988.
[8] P. Buchholz. Exact and ordinary lumpability in finite Markov chains. Journal of Applied Probability, 31:59–74, 1994.
[9] P. Buchholz. On a Markovian process algebra. Technical Report 500, Fachbereich Informatik, University of Dortmund, 1994.
[10] P. Buchholz. Exact performance equivalence: An equivalence relation for stochastic automata. Theoretical Computer Science, 215(1–2):263–287, 1999.
[11] L. Capra, C. Dutheillet, G. Franceschinis, and J.-M. Ilié. Exploiting partial symmetries for Markov chain aggregation. In F. Corradini and P. Inverardi, editors, Proceedings of the 1st International Workshop on Models for Time-Critical Systems (MTCS 2000), volume 39 of Electronic Notes in Theoretical Computer Science. Elsevier, 2000.
[12] G. Chiola, C. Dutheillet, G. Franceschinis, and S. Haddad. Stochastic Well-Formed coloured nets for symmetric modelling applications. IEEE Transactions on Computers, 42(11):1343–1360, 1993.
[13] E. M. Clarke, O. Grumberg, and D. A. Peled. Model Checking. MIT Press, 1999.
[14] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, 1990.
[15] S. Derisavi, H. Hermanns, and W. Sanders. Optimal state-space lumping in Markov chains. Information Processing Letters, 87(6):309–315, 2003.
[16] H. Hansson and B. Jonsson. A logic for reasoning about time and reliability. Formal Aspects of Computing, 6(5):512–535, 1994.
[17] T. A. Henzinger, O. Kupferman, and S. Qadeer. From prehistoric to postmodern symbolic model checking. Formal Methods in System Design, 23:303–327, 2003.
[18] H. Hermanns, U. Herzog, and V. Mertsiotakis. Stochastic process algebras: Between LOTOS and Markov chains. Computer Networks and ISDN Systems, 30(9–10):901–924, 1998.
[19] J. Hillston. A Compositional Approach to Performance Modelling. Cambridge University Press, 1996.
[20] J. G. Kemeny and J. L. Snell. Finite Markov Chains. Van Nostrand, 1960.
[21] M. Kwiatkowska, G. Norman, and D. Parker. PRISM: Probabilistic symbolic model checker. In T. Field, P. G. Harrison, J. T. Bradley, and U. Harder, editors, Proceedings of the 12th International Conference on Computer Performance Evaluation, Modelling Techniques and Tools (TOOLS 2002), volume 2324 of LNCS, pages 200–204. Springer-Verlag, 2002.
[22] N. Lynch and F. W. Vaandrager. Forward and backward simulations, Part I: Untimed systems. Information and Computation, 121(2):214–233, 1995.
[23] P. J. Schweitzer. Aggregation methods for large Markov chains. In G. Iazeolla, P. J. Courtois, and A. Hordijk, editors, Mathematical Computer Performance and Reliability, pages 275–302. Elsevier, 1984.
[24] J. Sproston and S. Donatelli. Backward stochastic bisimulation in CSL model checking. Available from: http://www.di.unito.it/~sproston/Research/research.html.