Equivalence Relations for Stochastic Automata Networks

Peter Buchholz

Universität Dortmund, Informatik IV, D-44221 Dortmund, Germany; e-mail: [email protected]

Abstract

Stochastic Automata Networks (SANs) are an efficient means to describe and analyze parallel systems under Markovian assumptions. The main advantage of SANs is the possibility to describe and analyze a complex parallel system in a compositional way: the transition matrix of the Markov chain underlying the complete SAN is described by small matrices specifying the single automata, which are combined by means of tensor operations. This approach allows, up to a certain extent, the handling of the state space explosion resulting from complex Markov models. In this paper equivalence relations for stochastic automata are introduced such that an automaton in a network can be substituted by an equivalent and usually smaller automaton without affecting the results of an analysis. We consider equivalence according to several performance quantities and for stationary and transient analysis of SANs.

1 Introduction

Stochastic Automata Networks (SANs) under Markovian timing have been proposed by Plateau and her coworkers in several papers [12, 13, 15] as a good modeling tool for the quantitative analysis of complex parallel and concurrent systems. A SAN is described by a number of isolated stochastic automata (SAs) which are dependent due to synchronization and due to transition rates of a local SA depending on the state of other SAs in the SAN. The main advantage of SANs over most other approaches to specify Markov processes is that the transition matrix of the Markov process underlying a SAN can be completely described by the combination of small matrices specifying the single SAs, which are combined using operations from tensor algebra. This decomposition of the transition matrix can be exploited in iterative numerical solution techniques by implementing the basic operation, namely vector-matrix multiplication, without first generating the complete transition matrix. Thus SANs allow a significant reduction of storage requirements, since the large transition matrix need not be stored as a whole. The approach therefore enables the analysis of very large models which cannot be handled with conventional analysis techniques. In this paper we consider a new aspect of the analysis of SANs. The motivation is to reduce the complexity of a SAN and compute exact results from the reduced model. Such an approach is called exact aggregation. The goal is to find for a SA a smaller but equivalently behaving representation. Equivalence is a very important concept in the functional analysis of complex systems which result from the composition of less complex parts, as is the case in process algebras (see e.g. [10]). Equivalence according to quantitative aspects is not so established, since quantitative analysis based on composition was nearly unknown in the past. However, here we show that a well formalized concept of equivalence according to quantitative results can be defined for SANs.
(Revised version accepted for the 2nd Int. Workshop on the Numerical Solution of Markov Chains, Raleigh, NC, 16-18 January 1995.)

Apart from its theoretical importance, equivalence in SANs is practically applicable. An equivalent SA can be computed for a given SA using only very limited information about the rest of the SAN; in particular, only matrices

for a single SA need to be considered. The use of the reduced SA yields a reduction of the overall state space by a factor similar to the reduction factor reached on the local state space of the SA. The results in this paper are based on lumpability in finite Markov chains as originally defined in [9]; basic results for this paper are given in [4]. For a different class of models, described by hierarchical queueing networks or stochastic Petri nets, exact aggregation of submodels has been introduced in [2, 3]. Although the basic ideas of these papers are similar, there are clear differences to the results presented here, since the model class and the structure of the transition matrix are different; in particular, communication between SAs is synchronous rather than asynchronous. Furthermore, equivalence of SAs is introduced here in a way which is very similar to the inductive definition of equivalence according to the functional behavior of a process (see e.g., [10]). This allows the computation of a smallest representation of a SA according to a given equivalence relation, unique up to the ordering of states. The outline of the rest of the paper is as follows. In the next section we introduce SANs and the structure of the underlying transition matrix based on the combination of matrices for single SAs. Afterwards, equivalence and exact aggregation of Markov reward processes is considered in general. In section 4 equivalence of SAs in a SAN is introduced, followed by two examples for the approach. The paper ends with conclusions and directions for future research.

2 Stochastic Automata Networks

SANs have been defined in [13] for continuous time and in [12] for a discrete time scale. We rely here on continuous time models; however, the results can easily be transferred to discrete time models. A SAN consists of a number of individual SAs. Each SA is characterized by a number of states and by transition rules describing the dynamic behavior. Embedded in an environment, the SA has a Markovian behavior. The state of a SAN including N SAs can be interpreted as an N-dimensional CTMC, where each component describes the state of one SA. Transitions of a SA in a SAN are classified into the three following categories.

- Local constant transitions, which are completely characterized by the current state of the SA.
- Local functional transitions, which occur locally in a SA but have a transition rate which depends on the state of some other SAs in the SAN.
- Synchronized transitions, which are performed simultaneously by several SAs in the SAN.

Functional and synchronized transitions describe interactions and dependencies between SAs in a SAN. For the purpose of this paper it is sufficient to consider SANs which consist of only two SAs; the results can easily be transferred to SANs with an arbitrary number of SAs, as explained below. In particular, we can assume that the individual SAs are composed from smaller SAs. Number the SAs in the SAN with 1 and 2 and let S^(i) be the state space of SA i, including n_i states. Local transitions of SA i are described in an n_i × n_i matrix Q^(i). Position Q^(i)(x, y) includes the transition rate between states x and y, which is, in case of a functional transition, a function of the state of the other SA. Diagonal elements of Q^(i) are defined as the negative sum of the non-diagonal elements, which might also be, in case of functional transition rates, a function of the state of the other SA. Let Q^(i)_x be the matrix of local transition rates in SA i when the state of SA j is x (if we use in the sequel indices i and j, then i ≠ j is implicitly assumed). Observe that the matrices Q^(i)_x include only constant values. Let A be a set of labels describing synchronized transitions. We assume that each synchronized transition a ∈ A has a fixed rate λ_a independent of the state of the SAs. A synchronized transition with state dependent rates can be represented by a number of transitions with fixed rates, one for each state dependent rate, which, of course, increases the complexity of the specification. However, we


do not claim to use this approach for practical modeling, but the theoretical possibility of expressing state dependent transition rates by state independent rates enables us to consider only the latter in the following theorems and proofs. Nevertheless, the results still hold for the general model class including state dependency also for synchronized transition rates. For each synchronized transition a ∈ A and each SA i = 1, 2 we define two matrices. E_a^(i) is an n_i × n_i matrix which includes only non-negative elements. The row sum of each row of E_a^(i) is 1, if transition a is possible in the corresponding state, or 0, if not. E_a^(i)(x, y) = p > 0 indicates that SA i is able to perform synchronized transition a in state x and the successor state is y with probability p (x = y is not excluded). However, transition a has to be performed as a joint transition in both automata. For each matrix E_a^(i) define an n_i × n_i matrix D_a^(i) such that D_a^(i)(x, x) = Σ_{y=1}^{n_i} E_a^(i)(x, y); all non-diagonal elements of D_a^(i) are 0. The matrices D_a^(i) are used to correct the diagonal elements of the generator matrix. This representation of synchronized transitions is slightly different from the original approach introduced by Plateau [13], where one SA is interpreted as active in a synchronized transition by defining the rate and the others are interpreted as passive. Here all SAs are on the same level, since the transition rate is outside the matrix representation. However, both approaches differ only slightly and allow the expression of exactly the same behavior. The above matrices completely characterize the SAN; before we introduce the computation of the generator matrix, some additional notation is needed. Functional transition rates in one SA are usually of such a type that the rate is constant for a subset of states in the other SA.
Thus we define an aggregated description T^(i) of the state space of SA i: T^(i) includes one state for each subset of states from S^(i) which cause an identical transition rate for all transitions in SA j. We denote the elements of T^(i) by Greek letters; e.g., α ∈ T^(i) represents states x, y ∈ S^(i) such that Q^(j)_x = Q^(j)_y = Q^(j)_α. If SA j has only constant transitions, then T^(i) consists of one element. We use the notation x ∈ α, if x ∈ S^(i) is represented by α ∈ T^(i). Let L^(i)_α be an n_i × n_i matrix with L^(i)_α(x, x) = 1 for x ∈ α and 0 otherwise. Now consider the state space of the SAN; each state is characterized by the state of SA 1 and the state of SA 2. Let S = S^(1) × S^(2) be the state space of the SAN. Not all states from S need to be reachable from a given initial state. With these notations Q, the generator matrix of the CTMC underlying the SAN, can be expressed as (see [13, 15] for further details)

Q = Σ_{α₂ ∈ T^(2)} Q^(1)_{α₂} ⊗ L^(2)_{α₂} + Σ_{α₁ ∈ T^(1)} L^(1)_{α₁} ⊗ Q^(2)_{α₁} + Σ_{a ∈ A} λ_a (E_a^(1) ⊗ E_a^(2) − D_a^(1) ⊗ D_a^(2))    (1)

Here ⊗ denotes the tensor (or Kronecker) product of matrices; details about this operation can be found in the appendix and in [5, 13, 15]. The above representation of the generator matrix is important, since it can be directly used in iterative numerical solution techniques for stationary and transient analysis such that Q never needs to be stored as a whole. Thus the complete model can be solved using only matrices of size n_1 × n_1 or n_2 × n_2 and not of size n_1 n_2 × n_1 n_2. It is usually sufficient to store one matrix Q^(i) for SA i including appropriate functions for the functional transition rates; these functions are evaluated during the iteration to yield the matrices Q^(i)_α. The approach can easily be generalized to more than two SAs, yielding further improvements concerning the size of the matrices. However, the advantages of the approach decrease with an increasing number of synchronized events or functional transitions. Let p be a vector of size n_1 n_2 including the stationary probabilities and let p(t) be a vector of the same size including the transient probabilities at time t for some given initial vector p(0). If S includes more than one ergodic subset of states, then p, like p(t), depends on p(0). Like the matrices, the initial vector p(0) is defined in terms of the initial distributions of the individual SAs. Let p^(i)(0) be the initial distribution of SA i, then

p(0) = p^(1)(0) ⊗ p^(2)(0)    (2)

Let q_max = (1 + ε) max_{x ∈ S} |Q(x, x)| for some 1 ≥ ε > 0 and define the following iteration sequence.
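Before turning to the iteration, equation (1) can be made concrete with a minimal pure-Python sketch. All matrices and the rate below are invented for illustration: a hypothetical SAN of two 2-state automata with constant local rates (so each T^(i) has one element and the L matrices are identities) and a single synchronized transition a.

```python
def kron(A, B):
    # Kronecker (tensor) product of two matrices given as nested lists
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(m) for l in range(q)]
            for i in range(n) for k in range(p)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def smul(c, X):
    return [[c * x for x in row] for row in X]

I2 = [[1.0, 0.0], [0.0, 1.0]]

# local generators of SA 1 and SA 2 (constant rates)
Q1 = [[-2.0, 2.0], [3.0, -3.0]]
Q2 = [[-1.0, 1.0], [4.0, -4.0]]

# synchronized transition a: enabled in state 1 of SA 1 and state 0 of SA 2
E1, D1 = [[0.0, 0.0], [1.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]
E2, D2 = [[0.0, 1.0], [0.0, 0.0]], [[1.0, 0.0], [0.0, 0.0]]
rate_a = 5.0

# equation (1): Q = Q1 (x) I + I (x) Q2 + rate_a * (E1 (x) E2 - D1 (x) D2)
Q = madd(madd(kron(Q1, I2), kron(I2, Q2)),
          smul(rate_a, madd(kron(E1, E2), smul(-1.0, kron(D1, D2)))))

row_sums = [sum(row) for row in Q]   # each row of a generator sums to 0
```

The zero row sums confirm that the D matrices correct the diagonal exactly; the small example also makes the lexicographical ordering of states (x^(1), x^(2)) visible. In realistic sizes Q is never assembled explicitly; the iteration defined next needs only vector-matrix products with the Kronecker factors.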

π^k = π^{k-1} (I + (1/q_max) Q) = π^{k-1} R, where π^0 = p(0)    (3)

If p(0) includes non-zero probabilities in only one ergodic subset of states, then lim_{k→∞} π^k = p (and also lim_{t→∞} p(t) = p). The transient probability vector can be expressed by means of the vectors π^k using the randomization formula (see e.g., [7]).

p(t) = Σ_{k=0}^{∞} e^{-q_max t} ((q_max t)^k / k!) π^k    (4)
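The iteration (3) and the randomization formula (4) translate directly into code. The sketch below (pure Python, invented example matrices) computes p(t) for a small CTMC by accumulating Poisson-weighted iterates of the uniformized matrix R; the truncation point K of the infinite sum is a hypothetical fixed bound rather than an error-controlled one.

```python
import math

def vecmat(v, M):
    # row vector times matrix
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def transient(Q, p0, t, eps=0.1, K=150):
    # q_max = (1 + eps) * max |Q(x, x)|, cf. the definition before (3)
    n = len(Q)
    qmax = (1 + eps) * max(abs(Q[i][i]) for i in range(n))
    # uniformized matrix R = I + Q / q_max
    R = [[(1.0 if i == j else 0.0) + Q[i][j] / qmax for j in range(n)]
         for i in range(n)]
    pi = list(p0)                    # pi^0 = p(0)
    w = math.exp(-qmax * t)          # Poisson weight for k = 0
    pt = [w * x for x in pi]
    for k in range(1, K + 1):
        pi = vecmat(pi, R)           # pi^k = pi^{k-1} R, equation (3)
        w *= qmax * t / k            # next Poisson weight, equation (4)
        pt = [pt[j] + w * pi[j] for j in range(n)]
    return pt

p_t = transient([[-2.0, 2.0], [1.0, -1.0]], [1.0, 0.0], t=10.0)
```

For t = 10 the two-state chain above has essentially reached its stationary distribution (1/3, 2/3), and the entries of p_t sum to 1 up to the truncation error.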

The goal of an analysis of a SAN is usually not the stationary or transient vector; the required results are much coarser and describe measures like state probabilities for a specific SA or higher level results like waiting times for resource access, occupation of resources etc. A commonly used approach to specify such measures on state space level is to define a reward function which extends the CTMC to a Markov reward process (MRP). For SANs it is natural to define rewards for single SAs. Let r^(i) be a vector of size n_i including in position r^(i)(x) the reward gained in state x ∈ S^(i); rewards are defined as real values [8]. For the SAN a reward vector r is computed from the reward vectors of the SAs as

r = r^(1) ⊗ e^(2) + e^(1) ⊗ r^(2),    (5)

where e^(i) is a vector of size n_i with all elements equal to 1.

If the analysis of a SAN is performed according to one SA and the other SA serves only as environment, then the rewards for the environment SA are all chosen equal to 0. The stationary reward of a SAN is computed as

M = Σ_{x ∈ S} p(x) r(x)

and the transient reward at time t equals

M(t) = Σ_{x ∈ S} p(t)(x) r(x)

If lim_{t→∞} p(t) = p, then this implies lim_{t→∞} M(t) = M. Cumulative measures might also be computed (see [8, 11]). By means of state based rewards a wide variety of application oriented measures for performance, dependability and performability analysis can be expressed (see [8]). In what follows we introduce equivalence for MRPs and SAs, which implies that equivalent MRPs or SAs yield identical results for a given reward specification. Thus equivalence depends on the structure of the underlying CTMC and on the reward structure, which implicitly defines the measure to be evaluated.
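Equation (5) and the reward sums have a direct one-line realization. The sketch below uses invented values: a hypothetical 2 × 2-state SAN whose stationary vector p is simply assumed, with SA 2 acting as pure environment (all its rewards are 0).

```python
def kron_vec(u, v):
    # tensor product of two row vectors
    return [a * b for a in u for b in v]

# reward 1 in state 0 of SA 1; SA 2 serves only as environment
r1, r2 = [1.0, 0.0], [0.0, 0.0]
e1, e2 = [1.0, 1.0], [1.0, 1.0]

# equation (5): r = r^(1) (x) e^(2) + e^(1) (x) r^(2)
r = [a + b for a, b in zip(kron_vec(r1, e2), kron_vec(e1, r2))]

p = [0.1, 0.2, 0.3, 0.4]   # an assumed stationary vector of the SAN
M = sum(px * rx for px, rx in zip(p, r))
```

Here r = [1, 1, 0, 0], so M = 0.1 + 0.2 = 0.3 is exactly the marginal probability that SA 1 occupies state 0, illustrating how state based rewards encode measures for one automaton.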

3 Equivalence and Exact Aggregation of Markov Reward Processes

In this section we consider equivalence of MRPs. Very roughly, two MRPs are equivalent if they yield identical results for the stationary and/or transient reward. The results presented in this section are mainly based on [4] and [11]. An MRP X is completely characterized by the generator matrix Q_X, reward vector r_X and initial distribution p_X(0). M_X and M_X(t) are the stationary and transient reward gained from X. We first consider strong equivalence of MRPs.

Definition 1 Two MRPs X and Y are strongly equivalent, if a permutation matrix P (permutation matrices are defined in the appendix) exists such that

Q_X = P^T Q_Y P,  r_X = r_Y P  and  p_X(0) = p_Y(0) P.


Theorem 1 If X and Y are two strongly equivalent MRPs, then M_X = M_Y and M_X(t) = M_Y(t) for each t ≥ 0.

Proof: It is sufficient to show that π_X^k = π_Y^k P for all k, since this implies p_X(t) = p_Y(t) P for all t, which implies p_X(t) (r_X)^T = p_Y(t) P (r_Y P)^T = p_Y(t) (r_Y)^T. For k = 0 we have π_X^0 = p_X(0) = p_Y(0) P = π_Y^0 P. Furthermore, Q_X = P^T Q_Y P implies R_X = I + (1/q_max) Q_X = I + (1/q_max) P^T Q_Y P = P^T R_Y P. Now assume that the relation has been proved for k; the induction step to k + 1 is

π_X^{k+1} = π_X^k R_X = π_Y^k P P^T R_Y P = π_Y^{k+1} P,

which completes the proof. □

Strong equivalence is a very restrictive relation which only relates MRPs with an identical number of states and can therefore not be used to represent a given SA by a smaller but equivalently behaving SA, which is the major goal of our approach. The next step is to introduce lumpability of MRPs, which is the base to define weaker equivalence relations for MRPs. For the definition of equivalence based on lumpability we first define equivalence classes of states on the state space of a single MRP; based on these, the MRP can be reduced to a smaller but equivalent representation, and equivalence of two MRPs can be established. This approach is well known in the area of qualitative system analysis to define strong or observational equivalence of different specifications (see e.g. [1, 10]). Let R ⊆ S × S be a binary relation on the state space of an MRP. We use the notation (x, y) ∈ R to express that x and y are in relation R; additionally we define R^{-1}(x) = {y | (x, y) ∈ R}.

Definition 2 Let X be an MRP with state space S = {1, ..., n}, generator matrix Q, reward vector r and initial vector p(0). Let R ⊆ S × S be a binary relation between states. R implies

- ordinary lumpability, if for all (x, x') ∈ R and all y ∈ S:
  Σ_{z ∈ R^{-1}(y)} Q(x, z) = Σ_{z ∈ R^{-1}(y)} Q(x', z) and r(x) = r(x') holds;

- exact lumpability, if for all (x, x') ∈ R and all y ∈ S:
  Σ_{z ∈ R^{-1}(y)} Q(z, x) = Σ_{z ∈ R^{-1}(y)} Q(z, x') and p(0)(x) = p(0)(x') holds.

Both relations are equivalence relations on S. Exact lumpability depends on the initial distribution: states which are in relation R must initially have equal probabilities. This condition is not necessary for ordinary lumpability; conversely, ordinary lumpability requires identical rewards for all states in one equivalence class, whereas exact lumpability is independent of the reward structure. If only stationary results are to be computed and S includes only one recurrent subset of states, then the initial distribution need not be part of the MRP specification, and an initial distribution can be chosen to meet the conditions for exact lumpability. According to the resulting equivalence classes we can define a state space S̃ including one state per equivalence class and a reduced generator Q̃ on S̃. Let x̃ ∈ S̃; then R^{-1}(x̃) is the set of all states x ∈ S which belong to the equivalence class represented by x̃. For the reduced (aggregated) MRP the probability vectors and performance results are denoted by p̃(t), p̃, M̃(t) and M̃, respectively. The following theorems show how to compute the reduced generator matrices and prove that M = M̃ and M(t) = M̃(t) hold. It is sufficient to prove equivalence for transient results, since lim_{t→∞} p(t) = p in the cases where p is uniquely defined.

Theorem 2 Let R be a binary relation on S implying ordinary lumpability and let S̃ be the reduced state space according to the equivalence classes of R. The elements of the reduced generator matrix Q̃ and the reduced reward vector r̃ are computed as

Q̃(x̃, ỹ) = Σ_{y ∈ R^{-1}(ỹ)} Q(x, y)  and  r̃(x̃) = r(x)  for some x ∈ R^{-1}(x̃).


The following relations between the results of the original and the reduced MRP hold:

Σ_{x ∈ R^{-1}(x̃)} p(t)(x) = p̃(t)(x̃)  and  M̃(t) = Σ_{x̃ ∈ S̃} p̃(t)(x̃) r̃(x̃) = M(t).

Proof: We can define R̃ = I + (1/q_max) Q̃. As shown in [16], lumpability of Q according to R implies lumpability of R (as defined in (3)) according to R. In [4] it is proved that Σ_{x ∈ R^{-1}(x̃)} π^k(x) = π̃^k(x̃), which implies Σ_{x ∈ R^{-1}(x̃)} p(t)(x) = p̃(t)(x̃). Since states from one equivalence class have identical rewards, M(t) depends only on the probabilities of the equivalence classes and not on the individual states. □
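Theorem 2 can be illustrated with a minimal pure-Python sketch. The 3-state generator below is invented so that states 1 and 2 form one equivalence class: the per-class row sums are compared for equality, and the reduced generator is assembled from one representative per class.

```python
def aggregate_ordinary(Q, partition):
    # theorem 2: Q~(x~, y~) = sum over y in class(y~) of Q(x, y),
    # taken for any representative x of class(x~)
    cls = {x: c for c, states in enumerate(partition) for x in states}
    m = len(partition)
    Qa = [[0.0] * m for _ in range(m)]
    for c, states in enumerate(partition):
        x = states[0]                       # any representative
        for y, q in enumerate(Q[x]):
            Qa[c][cls[y]] += q
    return Qa

def is_ordinarily_lumpable(Q, partition):
    # rows summed per class must agree for all states of one class
    cls = {x: c for c, states in enumerate(partition) for x in states}
    m = len(partition)
    def row_sig(x):
        sig = [0.0] * m
        for y, q in enumerate(Q[x]):
            sig[cls[y]] += q
        return sig
    return all(row_sig(x) == row_sig(states[0])
               for states in partition for x in states)

Q = [[-2.0, 1.0, 1.0],
     [3.0, -3.0, 0.0],
     [3.0, 0.0, -3.0]]
partition = [[0], [1, 2]]            # states 1 and 2 form one class
Q_agg = aggregate_ordinary(Q, partition)
```

Here Q_agg equals [[-2, 2], [3, -3]]: the two symmetric states collapse into one without changing any reward computed from the aggregated process.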

Theorem 3 Let R be a binary relation on S implying exact lumpability and let S̃ be the reduced state space according to the equivalence classes of R. The elements of the reduced generator matrix Q̃, the reduced reward vector r̃ and the reduced initial vector p̃(0) are computed as

Q̃(x̃, ỹ) = (‖R^{-1}(ỹ)‖ / ‖R^{-1}(x̃)‖) Σ_{x ∈ R^{-1}(x̃)} Q(x, y)  for some y ∈ R^{-1}(ỹ),

r̃(x̃) = Σ_{x ∈ R^{-1}(x̃)} r(x) / ‖R^{-1}(x̃)‖  and  p̃(0)(x̃) = ‖R^{-1}(x̃)‖ p(0)(x)  for some x ∈ R^{-1}(x̃),

where ‖R^{-1}(x̃)‖ counts the number of elements in the set. The following relations between the results of the original and the reduced MRP hold:

p(t)(x) = p̃(t)(x̃) / ‖R^{-1}(x̃)‖  and  M̃(t) = Σ_{x̃ ∈ S̃} p̃(t)(x̃) r̃(x̃) = M(t).

Proof: In [4] it is proved that π^k(x) = π̃^k(x̃) / ‖R^{-1}(x̃)‖, which implies p(t)(x) = p̃(t)(x̃) / ‖R^{-1}(x̃)‖. The computation of M(t) becomes

M(t) = Σ_{x=1}^{n} p(t)(x) r(x) = Σ_{x̃ ∈ S̃} Σ_{x ∈ R^{-1}(x̃)} p(t)(x) r(x)
     = Σ_{x̃ ∈ S̃} (p̃(t)(x̃) / ‖R^{-1}(x̃)‖) Σ_{x ∈ R^{-1}(x̃)} r(x) = Σ_{x̃ ∈ S̃} p̃(t)(x̃) r̃(x̃). □

It is often convenient to describe the computation of the reduced generator matrices in terms of matrix products. Following [4] we define an n × ñ collector matrix V belonging to R, where n is the number of states of the original MRP and ñ the number of equivalence classes of R: V(x, x̃) = 1 for x ∈ R^{-1}(x̃) and 0 otherwise. Let e_x denote the row vector with a 1 in position x and 0 elsewhere. The condition for ordinary lumpability of matrix Q can be reformulated in terms of the matrices as

∀x, y ∈ R^{-1}(x̃): e_x Q V = e_y Q V.

The condition for exact lumpability of matrix Q becomes

∀x, y ∈ R^{-1}(x̃): e_x Q^T V = e_y Q^T V.

We often use the matrix formulation to prove exact/ordinary lumpability; however, the formulation in the above theorems and the matrix formulation are completely equivalent. A distributor matrix W is defined as W = V̄^T, where V̄ equals V with each non-zero column normalized such that its elements sum to 1 (note that V^T includes in each row at least one non-zero element). With these matrices the generator matrix, reward vector and initial vector of the aggregated MRP can be computed as

Q̃ = W Q V,  r̃ = r W^T  and  p̃(0) = p(0) V.    (6)

The resulting matrix and vectors equal the matrix and vectors computed in theorems 2 and 3 (see [4]). Thus we are ready to define weak equivalence of MRPs.
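As a concrete instance of (6), the following pure-Python sketch uses an invented 3-state generator that is exactly lumpable (its columns, summed per class, agree): V and W are built from the partition, the condition e_x Q^T V = e_y Q^T V is checked, and Q̃ = W Q V is computed.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def collector(partition, n):
    # V(x, c) = 1 iff state x belongs to equivalence class c
    V = [[0.0] * len(partition) for _ in range(n)]
    for c, states in enumerate(partition):
        for x in states:
            V[x][c] = 1.0
    return V

def distributor(partition, n):
    # W = normalized V^T: each row of W sums to 1
    W = [[0.0] * n for _ in range(len(partition))]
    for c, states in enumerate(partition):
        for x in states:
            W[c][x] = 1.0 / len(states)
    return W

def exactly_lumpable(Q, partition):
    # the rows of Q^T V must coincide for all states of one class
    QtV = matmul(transpose(Q), collector(partition, len(Q)))
    return all(QtV[x] == QtV[states[0]] for states in partition for x in states)

Q = [[-2.0, 1.0, 1.0],
     [2.0, -3.0, 1.0],
     [2.0, 1.0, -3.0]]
partition = [[0], [1, 2]]
V = collector(partition, 3)
W = distributor(partition, 3)
Q_agg = matmul(matmul(W, Q), V)      # equation (6)
```

The aggregated generator is [[-2, 2], [2, -2]], consistent with theorem 3; an initial vector that is uniform inside the class, e.g. p(0) = (0, 1/2, 1/2), satisfies the remaining condition of exact lumpability.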

Definition 3 Two MRPs X and Y are weakly equivalent, if there exist relations R_X and R_Y with corresponding collector matrices V_X, V_Y, which imply exact or ordinary lumpability for X and Y, and there exists a permutation matrix P such that the following relations hold:

Q̃_X = W_X Q_X V_X = P^T W_Y Q_Y V_Y P = P^T Q̃_Y P,
r̃_X = r_X (W_X)^T = r_Y (W_Y)^T P = r̃_Y P  and
p̃_X(0) = p_X(0) V_X = p_Y(0) V_Y P = p̃_Y(0) P.

Weak equivalence allows the comparison of MRPs with a different number of states; however, a large number of relations R might exist which imply exact or ordinary lumpability. We are interested in, ideally unique, relations which include a minimum number of equivalence classes, in order to reduce the state space as much as possible. In the following section we extend equivalence of MRPs to SAs and show that indeed unique relations for exact and ordinary lumpability exist which include a minimal number of equivalence classes. Each relation R implying exact or ordinary lumpability can be refined to these minimal relations, and the minimal relations can be computed efficiently.

4 Equivalent Representations for a Stochastic Automaton in a Network

We start this section by transferring the definition of strong equivalence and of exact/ordinary lumpability from MRPs to SAs. It should be noticed that a SA without synchronized and functional transitions equals an MRP. However, the existence of functional and/or synchronized transitions implies that the behavior of a SA is influenced by and also influences other SAs in the SAN. Thus a definition of equivalence has to take into account that equivalent SAs are indistinguishable from their environment (i.e., the other SAs in the SAN) and are equivalent according to all influences from their environment.

Definition 4 Two SAs i and i' are strongly equivalent, if a permutation matrix P exists such that for all α ∈ T^(j), β ∈ T^(i) and a ∈ A:

Q^(i')_α = P^T Q^(i)_α P,  L^(i')_β = P^T L^(i)_β P,  E_a^(i') = P^T E_a^(i) P,  r^(i') = r^(i) P  and  p^(i')(0) = p^(i)(0) P.

Theorem 4 Let 1 and 1' be two strongly equivalent SAs and let P be the permutation matrix transforming 1 into 1'. Let Q, r and p(0) be the generator matrix and vectors describing the MRP resulting from the combination of SA 1 with some SA 2, and let Q', r' and p'(0) be the corresponding quantities resulting from the combination of SA 1' with SA 2. Then

Q' = (P^T ⊗ I_{n₂}) Q (P ⊗ I_{n₂}),  r' = r (P ⊗ I_{n₂})  and  p'(0) = p(0) (P ⊗ I_{n₂}),

where I_x is the identity matrix of order x and P ⊗ I_{n₂} is a permutation matrix.

Proof: The proof is a direct application of the properties of the matrix operators defined in the appendix, i.e.,

A' = P^T A P  ⇒  A' ⊗ B = (P^T A P) ⊗ B = (P^T ⊗ I_{n_B}) (A ⊗ B) (P ⊗ I_{n_B}),

which can be applied to each term of the sums in (1) to compute the generator and can also be applied to the vectors. □

We extend the definition of ordinary and exact lumpability to the matrices and vectors describing a SA.
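Before doing so, the permutation identity used in the proof can be checked numerically. The sketch below (pure Python, arbitrary example matrices) verifies that (P^T ⊗ I)(A ⊗ B)(P ⊗ I) = (P^T A P) ⊗ B, an instance of the mixed-product property of the Kronecker product.

```python
def kron(A, B):
    # Kronecker (tensor) product of nested-list matrices
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(m) for l in range(q)]
            for i in range(n) for k in range(p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
P = [[0, 1], [1, 0]]                 # permutation swapping the two states
I2 = [[1, 0], [0, 1]]

lhs = matmul(matmul(kron(transpose(P), I2), kron(A, B)), kron(P, I2))
rhs = kron(matmul(matmul(transpose(P), A), P), B)
```

lhs and rhs coincide: permuting the states of one automaton before or after forming the tensor product yields the same generator, which is exactly the step used in the proof of theorem 4.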

Definition 5 Let R^(i) be a binary relation on the state space of SA i and let V^(i) be the corresponding collector matrix. R^(i) implies

- ordinary lumpability, if for all (x, y) ∈ R^(i), all α ∈ T^(j), β ∈ T^(i) and a ∈ A:
  e_x Q^(i)_α V^(i) = e_y Q^(i)_α V^(i),  e_x L^(i)_β V^(i) = e_y L^(i)_β V^(i),
  e_x E_a^(i) V^(i) = e_y E_a^(i) V^(i),  and r^(i)(x) = r^(i)(y);

- exact lumpability, if for all (x, y) ∈ R^(i), all α ∈ T^(j), β ∈ T^(i) and a ∈ A:
  e_x (Q^(i)_α)^T V^(i) = e_y (Q^(i)_α)^T V^(i),  e_x (L^(i)_β)^T V^(i) = e_y (L^(i)_β)^T V^(i),
  e_x (E_a^(i))^T V^(i) = e_y (E_a^(i))^T V^(i),  e_x (D_a^(i))^T V^(i) = e_y (D_a^(i))^T V^(i)
  and p^(i)(0)(x) = p^(i)(0)(y).

Using a collector matrix V^(i), a corresponding distributor matrix W^(i) = (V̄^(i))^T can be computed. With these matrices the matrices and vectors of an aggregated SA are given by

Q̃^(i)_α = W^(i) Q^(i)_α V^(i),  Ẽ_a^(i) = W^(i) E_a^(i) V^(i),  D̃_a^(i) = W^(i) D_a^(i) V^(i),
r̃^(i) = r^(i) (W^(i))^T  and  p̃^(i)(0) = p^(i)(0) V^(i).    (7)

Now consider the SAN which results from the combination of the aggregated SA i with SA j. Let Q̃ be the generator matrix and r̃ the reward vector of the underlying MRP. We can assume without loss of generality that i = 1; otherwise the SAs are renumbered, which yields a strongly equivalent MRP (this can be proved by repeated use of theorem 12 from the appendix).

Theorem 5 Consider a SAN consisting of SA 1 and SA 2. Let V^(1) be a collector matrix implying exact/ordinary lumpability for SA 1. Then V = V^(1) ⊗ I_{n₂} is a collector matrix implying exact/ordinary lumpability for the MRP underlying the SAN.

Proof: We present the proof here for ordinary lumpability; the corresponding proof for exact lumpability is straightforward. We have to prove that V implies ordinary lumpability on Q and that the rewards of states from one partition group are the same (cf. definition 2). First consider the conditions on the rewards. Let x = (x^(1), x^(2)), y = (y^(1), y^(2)) ∈ S = S^(1) × S^(2). Due to the construction of V, x and y belong to the same partition group, if x^(2) = y^(2) and x^(1), y^(1) belong to the same partition group according to V^(1). Using (5), r(x) = r(x^(1)) + r(x^(2)) and r(y) = r(y^(1)) + r(y^(2)). We have r(x^(2)) = r(y^(2)) (since x^(2) = y^(2)) and r(x^(1)) = r(y^(1)), since x^(1) and y^(1) belong to the same partition group of an ordinarily lumpable partition of S^(1). Thus rewards are identical for states from one equivalence class, which ends the first part of the proof.

(Note that no conditions on the matrices D_a^(i) are needed for ordinary lumpability, since e_x E_a^(i) V^(i) = e_y E_a^(i) V^(i) implies e_x D_a^(i) V^(i) = e_y D_a^(i) V^(i) due to the construction of D_a^(i). However, this does not hold for the transposed matrices; thus the additional conditions are necessary for exact lumpability.)

Using equation (1), Q = Σ_{t=1}^{T} A^(t) ⊗ B^(t), where the A^(t) are matrices describing SA 1 and the B^(t) are matrices describing SA 2 (see also [15]). We have to show that e_x Q V = e_y Q V for x = (x^(1), x^(2)), y = (y^(1), y^(2)) belonging to the same equivalence class. Using the above arguments we can assume that x^(2) = y^(2) and x^(1), y^(1) belong to the same equivalence class according to V^(1). e_x is a vector of length n_1 n_2 which includes a 1 in position x = (x^(1) − 1) n_2 + x^(2) and 0 elsewhere. This is caused by the lexicographical ordering induced by the tensor operations [13]. Thus we can express e_x = e_{x^(1)} ⊗ e_{x^(2)}, where e_{x^(i)} is a vector of size n_i with a 1 in position x^(i) and 0 elsewhere. Since SA 1 is ordinarily lumpable according to V^(1), we have e_{x^(1)} A^(t) V^(1) = e_{y^(1)} A^(t) V^(1) for all t and all x^(1), y^(1)

belonging to the same equivalence class. By elementary application of the properties of our matrix calculus (cf. the appendix) we have

e_x Q V = (e_{x^(1)} ⊗ e_{x^(2)}) Σ_{t=1}^{T} (A^(t) ⊗ B^(t)) (V^(1) ⊗ I_{n₂})
        = Σ_{t=1}^{T} (e_{x^(1)} A^(t) V^(1)) ⊗ (e_{x^(2)} B^(t) I_{n₂})
        = Σ_{t=1}^{T} (e_{y^(1)} A^(t) V^(1)) ⊗ (e_{y^(2)} B^(t) I_{n₂})
        = e_y Q V,

which proves the theorem for ordinary lumpability. For exact lumpability we have to use the transposed matrices instead. □

The above theorem shows that ordinary or exact lumpability, as defined here for a SA, is preserved by composing the SA with an environment, represented by another SA which, of course, can itself consist of several SAs. Thus the aggregated SA, instead of the original one, can be combined with the environment, yielding an aggregated MRP. Since aggregation is based on exact/ordinary lumpability, all results which are specified by the reward vector are preserved. Lumpability is thus preserved by the composition of SAs. However, two points remain open. First, there might be a large number of relations R implying ordinary/exact lumpability, and we are interested in a unique one to decide whether two arbitrary SAs are equivalent. Second, we have no method yet to compute for a given SA an equivalent SA with a smaller state space. The following definition and theorem introduce an inductive method to compute a relation on the state space which implies ordinary lumpability and which is also the smallest relation implying ordinary lumpability. Similar ideas for exact lumpability will be introduced afterwards.

Definition 6 Let R_k^(i) ⊆ S^(i) × S^(i) be a family of binary relations and V_k^(i) the corresponding collector matrices. The relations R_k^(i) are defined as

1. R_0^(i) = {(x, y) | r^(i)(x) = r^(i)(y)},

2. R_{k+1}^(i) = {(x, y) | (x, y) ∈ R_k^(i); ∀α ∈ T^(j): e_x Q^(i)_α V_k^(i) = e_y Q^(i)_α V_k^(i);
   ∀β ∈ T^(i): e_x L^(i)_β V_k^(i) = e_y L^(i)_β V_k^(i); and ∀a ∈ A: e_x E_a^(i) V_k^(i) = e_y E_a^(i) V_k^(i)}.

Theorem 6 Let R_k^(i) be a family of relations as defined above. Then

- R_k^(i) ⊆ R_{k-1}^(i);
- if R_k^(i) = R_{k+1}^(i), then R_{k+l}^(i) = R_k^(i) for all l > 0.

Proof: The proof is straightforward. □

Definition 7 Two states x, y ∈ S^(i) are ordinarily equivalent, denoted as x ∼_o y, if (x, y) ∈ R_k^(i) for all k ≥ 0, where the relations R_k^(i) are those of definition 6.

Theorem 7 The relation ∼_o implies ordinary lumpability for SA i.

Proof: The proof is straightforward, since exactly the conditions of definition 5 are applied in definition 6. □

Definition 6 together with theorem 6 suggests an iterative algorithm to compute the sequence of relations R_k^(i) until R_k^(i) = R_{k+1}^(i). The resulting relation includes all ordinarily equivalent states. It should be noticed that only matrices of size n_i × n_i are used in the approach; however, the effort grows with the number of matrices needed to specify the SA. Let V_o^(i) be the collector matrix belonging to the relation ∼_o and W_o^(i) the corresponding distributor matrix.
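The fixed-point computation of definition 6 is an ordinary partition-refinement algorithm. The following pure-Python sketch (hypothetical helper names; reward-based initial partition; the signature of a state consists of its row sums over the current blocks, encoding e_x A V_k = e_y A V_k) iterates until the partition is stable. It accepts any list of n_i × n_i matrices, i.e. the Q^(i)_α, L^(i)_β and E_a^(i) of a SA.

```python
def coarsest_ordinary_partition(matrices, rewards):
    # step 1 of definition 6: group states with identical reward
    n = len(rewards)
    groups = {}
    for x in range(n):
        groups.setdefault(rewards[x], []).append(x)
    blocks = list(groups.values())
    while True:
        # signature of x: for every matrix, its row sums over the
        # current blocks (the condition e_x A V_k = e_y A V_k)
        def signature(x):
            return tuple(tuple(sum(A[x][z] for z in block) for block in blocks)
                         for A in matrices)
        refined = {}
        for b, block in enumerate(blocks):
            for x in block:
                refined.setdefault((b, signature(x)), []).append(x)
        new_blocks = list(refined.values())
        if len(new_blocks) == len(blocks):   # fixed point: R_k = R_{k+1}
            return blocks
        blocks = new_blocks

# invented example: states 1 and 2 carry the same reward and the same
# aggregated rates, so they end up in one equivalence class
Q = [[-2.0, 1.0, 1.0, 0.0],
     [1.0, -3.0, 0.0, 2.0],
     [1.0, 0.0, -3.0, 2.0],
     [0.0, 0.0, 0.0, 0.0]]
rewards = [0.0, 1.0, 1.0, 2.0]
blocks = coarsest_ordinary_partition([Q], rewards)
```

For the invented generator above the algorithm returns the partition {0}, {1, 2}, {3}; by theorem 8 no coarser ordinarily lumpable partition exists.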

Theorem 8 Let R be a binary relation implying ordinary lumpability on S^(i); then (x, y) ∈ R implies x ∼_o y.

Proof: The proof is inductive. Assume that (x, y) ∈ R_k is necessary for the conditions for ordinary lumpability; we show that then also (x, y) ∈ R_{k+1} is necessary. Obviously (x, y) ∈ R_0 is necessary, since otherwise r^(i)(x) ≠ r^(i)(y) for (x, y) ∈ R and R cannot imply ordinary lumpability. Let A be one of the matrices for SA i used in definition 6 and let S_k be the set of equivalence classes of R_k. Now assume that e_x A V_k e_{z̃} ≠ e_y A V_k e_{z̃} for some z̃ ∈ S_k. Since (x, y) ∈ R_k is necessary for ordinary lumpability as assumed, each ordinarily lumpable partition R includes partition groups containing only states from R_k^{-1}(z̃). There exists no refinement of R_k^{-1}(z̃) such that e_x A V_{k+1} e_{ż} = e_y A V_{k+1} e_{ż} for all ż, where R_k^{-1}(z̃) is refined into disjoint sets R_{k+1}^{-1}(ż) with ż ∈ Ż and R_k^{-1}(z̃) = ∪_{ż ∈ Ż} R_{k+1}^{-1}(ż), since this would require

e_x A V_k e_{z̃} = Σ_{ż ∈ Ż} e_x A V_{k+1} e_{ż} = Σ_{ż ∈ Ż} e_y A V_{k+1} e_{ż} = e_y A V_k e_{z̃},

which does not hold by assumption. Thus either the relation R is not ordinarily lumpable or (x, y) ∈ R_{k+1} is also necessary for ordinary lumpability. By induction we conclude that x ∼_o y is necessary for ordinary lumpability. □

The above theorem shows that we cannot find a relation that implies ordinary lumpability and is not a refinement of ∼_o. Thus for each R implying ordinary lumpability a relation R' exists such that R ∪ R' equals ∼_o. Hence ∼_o is, up to the ordering of partition groups, the unique relation with the smallest number of equivalence classes observing the conditions for ordinary lumpability. Furthermore, ∼_o is an equivalence relation, since it is obviously reflexive and symmetric, and by the previous arguments we can also conclude that it is transitive. The previous theorems and definitions are all based on ordinary lumpability; we now introduce the corresponding theorems and definitions for exact lumpability, before weak equivalence of SAs is formally defined.

Definition 8 Let R_k^(i) ⊆ S^(i) × S^(i) be a family of binary relations and V_k^(i) the corresponding collector matrices. The relations R_k^(i) are defined as

1. R_0^(i) = {(x, y) | p^(i)(0)(x) = p^(i)(0)(y) and ∀a ∈ A: D_a^(i)(x, x) = D_a^(i)(y, y)},

2. R_{k+1}^(i) = {(x, y) | (x, y) ∈ R_k^(i),
   ∀t ∈ T^(j): e_x (Q_t^(i))^T V_k^(i) = e_y (Q_t^(i))^T V_k^(i),
   ∀t ∈ T^(i): e_x (L_t^(i))^T V_k^(i) = e_y (L_t^(i))^T V_k^(i),
   ∀a ∈ A: e_x (E_a^(i))^T V_k^(i) = e_y (E_a^(i))^T V_k^(i)}.

It is easy to show that theorem 6 also holds for the above relations.

Definition 9 Two states x, y ∈ S^(i) are exactly equivalent, denoted as x ≃ y, if (x, y) ∈ R_k^(i) for all k ≥ 0 and the relations R_k^(i) from definition 8.

Theorem 9 The relation ≃ implies exact lumpability for SA i.

Proof: The proof is straightforward, since exactly the conditions of definition 5 are applied in definition 8. □

Theorem 10 Let R be a binary relation implying exact lumpability on S^(i); then (x, y) ∈ R implies x ≃ y.

Proof: The proof is similar to the proof of theorem 8. □

We denote the collector and distributor matrices belonging to SA i and the relation ≃ as Ṽ^(i) and W̃^(i), respectively. With the above relations we have, up to the ordering of states, a unique relation to relate a given SA with the smallest equivalent SA according to exact or ordinary lumpability. This suggests the following definition of weak equivalence of SAs.
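For a concrete partition the collector matrix V and a matching distributor matrix W can be built directly. A minimal sketch (with uniform weights inside each group, a common choice since under exact lumpability the states of a group are equally probable) is:

```python
import numpy as np

def collector_distributor(partition, n):
    """Collector matrix V (n x m, 0/1 group membership) and a distributor
    matrix W (m x n) with uniform weights in each group, so that W V = I."""
    m = len(partition)
    V = np.zeros((n, m))
    W = np.zeros((m, n))
    for j, block in enumerate(partition):
        V[block, j] = 1.0
        W[j, block] = 1.0 / len(block)
    return V, W

# for a lumpable generator Q the aggregated generator is Q_agg = W @ Q @ V
V, W = collector_distributor([[0, 3], [1, 2]], 4)
print(W @ V)   # 2 x 2 identity
```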

Definition 10 Two SAs i and j are weakly equivalent if they can be reduced to strongly equivalent SAs ĩ and j̃ using the relation ∼ or ≃.

The above results have been proved for a SAN including two SAs. We now give an outline of the extension of the approach to SANs including an arbitrary number of SAs. First it should be mentioned that SAs can be combined to yield a new SA; the approach is compositional. Thus the equivalence relations for an isolated SA hold for the SA in an arbitrary environment, since each environment can be interpreted as a single SA. Although it has been shown that the equivalence relations are the smallest according to exact/ordinary lumpability for a single SA in a SAN, lumpability of a single SA is only sufficient, not necessary, for lumpability on the complete MRP. The combination of isolated SAs to a new SA might allow further reductions of the resulting SA.

An important application is the composition of identical SAs with symmetric interactions with their environments. When N identical SAs, each with n states, are composed to form a new SA, the resulting state space has n^N states. However, exploiting symmetry, states can be aggregated if they are identical up to the ordering of indices of the identical SAs. The resulting partition is exactly and ordinarily lumpable and includes (N+n-1 choose N) states. We formalize this kind of aggregation in the theorem below, but first some additional notations are necessary. Define for SA i a set of permutation matrices P^(i) such that for all P ∈ P^(i) the following identities hold: for all matrices A^(i) which describe SA i, A^(i) = P^T A^(i) P, and r^(i) = r^(i) P, p^(i)(0) = p^(i)(0) P. Since P P^T = P^T P = I for permutation matrices, P ∈ P^(i) implies P^T ∈ P^(i).
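The count of symmetry classes is the number of multisets of size N over n local states, which can be computed directly (function name is illustrative):

```python
from math import comb

def symmetric_state_count(n, N):
    """Number of multisets of size N over n local states: the size of the
    lumped state space when N identical automata are aggregated."""
    return comb(N + n - 1, N)

# N = 4 identical three-state automata: 3**4 = 81 raw states, 15 classes
print(3 ** 4, symmetric_state_count(3, 4))   # 81 15
```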

Theorem 11 Let R be a binary relation on S^(i) such that (x, y) ∈ R if a permutation matrix P ∈ P^(i) with P(x, y) = 1 exists. Then R implies exact and ordinary lumpability on S^(i).

Proof: First of all, R is an equivalence relation, since P(x, y) = 1 implies P^T(y, x) = 1, such that (x, y) ∈ R implies (y, x) ∈ R. The conditions for the vectors r^(i) and p^(i)(0) are observed, since P ∈ P^(i) implies r^(i)(x) = r^(i)(y) and p^(i)(0)(x) = p^(i)(0)(y). Each matrix A^(i) is invariant under left and right multiplication with some P ∈ P^(i). Let V be the collector matrix belonging to R; it is easy to show that V = PV = P^T V. Furthermore we have e_y = e_x P if P(x, y) = 1. Now assume that (x, y) ∈ R and P(x, y) = 1; then we have for each A^(i):

e_y A^(i) V = e_x P P^T A^(i) P P^T V = e_x A^(i) V,

which is the condition for ordinary lumpability. To prove exact lumpability we use the matrices (A^(i))^T instead and get an identical result. □

If SA i is composed from identical and symmetric SAs, then P^(i) includes, apart from the identity matrix, other permutation matrices, since a renumbering of identical and symmetric SAs obviously does not modify SA i. The superposition and aggregation of identical SAs is well known and has been used in [3] for a class of hierarchical queueing networks and in [14] for SANs. However, it is a special case of the more general concept of equivalence presented here.

The above discussion suggests the following approach to handle a SAN: first find equivalent representations for each SA, then combine SAs and find reduced representations for the combined SAs until an equivalent reduced representation for the whole MRP underlying the SAN has been found. However, there is one drawback with this approach: if SAs are composed and an equivalent representation is found, then the resulting reduced matrices cannot directly be represented by matrices

of the combined SAs using tensor operations. There are two possibilities to handle this problem. The first is to compute the matrices of the SA resulting from the composition of the isolated SAs and to store the reduced matrices. The second is to compute the collector and distributor matrices for the aggregation and to use these matrices in each iteration step without computing the reduced matrices for the composed SA. Which of the two approaches is used depends on the structure of the model: the former approach usually needs additional space, whereas the latter needs more computational effort.
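The second approach can be sketched in a few lines: an iteration step on the aggregated space multiplies with W P V evaluated as three successive vector-matrix products, so the product W P V itself is never formed (here P stands for the, possibly tensor-structured, matrix of the composed SA; names are illustrative):

```python
import numpy as np

def aggregated_step(x_agg, P, V, W):
    """One step of an iterative solver on the aggregated state space:
    multiply with W P V without ever forming the product W @ P @ V."""
    return ((x_agg @ W) @ P) @ V

# small demonstration with a partition {0}, {1,2}, {3}
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.4, 0.3, 0.2],
              [0.1, 0.3, 0.4, 0.2],
              [0.1, 0.2, 0.2, 0.5]])
groups = [[0], [1, 2], [3]]
V = np.zeros((4, 3)); W = np.zeros((3, 4))
for j, g in enumerate(groups):
    V[g, j] = 1.0
    W[j, g] = 1.0 / len(g)
x = np.array([0.2, 0.5, 0.3])
print(np.allclose(aggregated_step(x, P, V, W), x @ (W @ P @ V)))   # True
```

The trade-off mentioned above is visible here: storing W @ P @ V saves the two extra products per iteration, at the price of additional memory.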

5 Examples

We present here two examples. The first is a simple double processor with failure and repair of the processors. One SA describes the state of both processors; each processor can be in one of the following states.

-1 The processor is currently down and has to be repaired.
0 The processor is currently idle and waits for an arriving job to be served.
1 The processor is currently processing a job.

The state space of the resulting SA is characterized by the state of each processor and the number of jobs; we allow up to two jobs. Jobs arrive via a synchronized transition labeled with a and are generated by some environment which is not considered here. The state space S^(i) includes the following states; states are described as triples including the number of jobs and the state of processor 1 and 2, respectively.

1) (0,-1,-1)  2) (0,-1,0)  3) (0,0,-1)  4) (0,0,0)  5) (1,-1,-1)  6) (1,-1,1)  7) (1,0,1)
8) (1,1,-1)  9) (1,1,0)  10) (2,-1,-1)  11) (2,-1,1)  12) (2,1,-1)  13) (2,1,1)

In the sequel we use the above numbering of states. We first consider the arrival of jobs. An arrival is only possible if less than two jobs are actually there. If at an arrival instant a processor is idle and the other one is processing or down, then the idle processor starts processing the new job. If both processors are idle, then with probability p the first processor starts processing the job, and with probability 1-p the second processor gets the job. The corresponding transitions are collected in the following matrix E_a^(i).

E_a^(i) =

( –   –   –   –   1   –   –    –   –   –   –   –   – )
( –   –   –   –   –   1   –    –   –   –   –   –   – )
( –   –   –   –   –   –   –    1   –   –   –   –   – )
( –   –   –   –   –   –   1-p  –   p   –   –   –   – )
( –   –   –   –   –   –   –    –   –   1   –   –   – )
( –   –   –   –   –   –   –    –   –   –   1   –   – )
( –   –   –   –   –   –   –    –   –   –   –   –   1 )
( –   –   –   –   –   –   –    –   –   –   –   1   – )
( –   –   –   –   –   –   –    –   –   –   –   –   1 )
( –   –   –   –   –   –   –    –   –   –   –   –   – )
( –   –   –   –   –   –   –    –   –   –   –   –   – )
( –   –   –   –   –   –   –    –   –   –   –   –   – )
( –   –   –   –   –   –   –    –   –   –   –   –   – )

where – denotes a zero entry.

Now assume that the processing rate of processor i is μ_i, the failure rate is λ_i and the repair rate equals ν_i. This yields the following matrix

Q^(i) =

( *    ν_2  ν_1  –    –    –    –    –    –    –    –    –    –   )
( λ_2  *    –    ν_1  –    –    –    –    –    –    –    –    –   )
( λ_1  –    *    ν_2  –    –    –    –    –    –    –    –    –   )
( –    λ_1  λ_2  *    –    –    –    –    –    –    –    –    –   )
( –    –    –    –    *    ν_2  –    ν_1  –    –    –    –    –   )
( –    μ_2  –    –    λ_2  *    ν_1  –    –    –    –    –    –   )
( –    –    –    μ_2  –    λ_1  *    λ_2  –    –    –    –    –   )
( –    –    μ_1  –    λ_1  –    –    *    ν_2  –    –    –    –   )
( –    –    –    μ_1  –    λ_1  –    λ_2  *    –    –    –    –   )
( –    –    –    –    –    –    –    –    –    *    ν_2  ν_1  –   )
( –    –    –    –    –    μ_2  –    –    –    λ_2  *    –    ν_1 )
( –    –    –    –    –    –    –    μ_1  –    λ_1  –    *    ν_2 )
( –    –    –    –    –    –    μ_1  –    μ_2  –    λ_1  λ_2  *   )

where * denotes the negative sum of the non-diagonal elements of the row. If both processors are identical and p = 0.5, then the following relation R is exactly and ordinarily lumpable; states which are in relation R are surrounded by brackets.

(1) (2, 3) (4) (5) (6, 8) (7, 9) (10) (11, 12) (13)

The resulting aggregated SA includes 9 instead of 13 states and yields exact transient results if the rewards of states from a partition group are identical (i.e., no processor specific measures are needed) or the initial probabilities of states from a partition group are identical (i.e., both processors are initially in an identical state). Stationary results gained from the aggregated SA are always exact. It should be noticed that the aggregated SA can be combined with any environment for the original SA and the resulting state space is reduced by a factor 9/13.

Now assume that p ≠ 0.5. In this case one processor is preferred if both are idle. The processors are no longer symmetric and the above relation does not imply exact lumpability. However, it still implies ordinary lumpability, thus the overall performance is not modified; processor specific measures, however, can no longer be gained from the aggregated SA. We can define another relation R' which puts states 7 and 9 into their own individual partition groups and is identical to R for the other states. R' implies exact and ordinary lumpability.

The second example has been discussed several times in the context of SANs (see e.g., [13, 15]). The model consists of N jobs which are processed on a degradable multiprocessor. A job starts processing by forking into up to C subprocesses which are processed in parallel; processing is completed after all subprocesses have completed their service. The behavior is cyclic, i.e., after termination of all subprocesses another fork operation is performed to start the subprocesses. The state of a SA specifying a single process is characterized by the number of subprocesses actually processing. Thus each process can be characterized by a SA including C + 1 states. Denote by μ_x^(i)(x̃) the processing rate of subprocesses in state x for SA i, where x̃ denotes the state of the rest of the system.
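The ordinary-lumpability claim for the arrival matrix E_a^(i) of the first example can be checked numerically: for every p the row signatures agree within each partition group, since the two arcs of row 4 into the group (7, 9) always sum to (1-p) + p = 1. The sketch below rebuilds E_a^(i) from the arrival transitions described in the text (`check_ordinary` is an ad-hoc helper, not part of the paper's formalism):

```python
import numpy as np

def check_ordinary(M, partition):
    """Ordinary lumpability test for one matrix: rows of M V must agree
    within every partition group."""
    n = M.shape[0]
    V = np.zeros((n, len(partition)))
    for j, g in enumerate(partition):
        V[g, j] = 1.0
    MV = M @ V
    return all(np.allclose(MV[g[0]], MV[x]) for g in partition for x in g)

p = 0.3                       # any p works for ordinary lumpability
Ea = np.zeros((13, 13))
for src, dst, w in [(1, 5, 1), (2, 6, 1), (3, 8, 1), (4, 7, 1 - p), (4, 9, p),
                    (5, 10, 1), (6, 11, 1), (7, 13, 1), (8, 12, 1), (9, 13, 1)]:
    Ea[src - 1, dst - 1] = w  # states numbered 1..13 as in the text
groups = [[1], [2, 3], [4], [5], [6, 8], [7, 9], [10], [11, 12], [13]]
groups = [[s - 1 for s in g] for g in groups]
print(check_ordinary(Ea, groups))   # True
```

For p ≠ 0.5 the column sums into states 7 and 9 differ (1-p versus p), which is exactly why exact lumpability fails for this partition while ordinary lumpability survives.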
The processing rate of a SA depends on the state of other SAs, since all subprocesses are served by a multiprocessor which can be in different modes and serves processes according to a processor sharing discipline. Thus the processing rate of a single SA depends on the number of jobs which are actually served (determined by each of the N SAs which specify processes) and by the state of the multiprocessor (described by the (N+1)-th SA). Let D be the number of modes of the multiprocessor. We assume that the multiprocessor changes its mode independently of the current load and denote by ρ_xy the rate of going from mode x to y for x < y and by φ_xy the rate of going from mode x to y for x > y. We assume that lower numbers indicate a higher degradation of the multiprocessor; thus ρ_xy can be interpreted as repair and φ_xy as failure rates. The degradation of the multiprocessor affects running processes only by a possibly reduced processing capacity. Additionally there are also hard failures which bring the multiprocessor immediately down to the lowest degradation mode and cause all running processes to roll back to the

latest checkpoint, which is assumed to be the last fork operation. Hard errors are modeled by a synchronized transition between the multiprocessor and all processes. The size of the state space of the overall model equals (C+1)^N D; the generator matrix can be described by matrices of size (C+1)^2 and D^2.

It is not possible to find a lumpable partition for a SA modeling a single process, since the behavior is strictly sequential. However, if we consider all processes described by a single SA and assume that processes are identical, the resulting SA includes a lot of symmetries. An aggregated description based on exact and ordinary lumpability can be found for the N processes, which has only (N+C choose N) instead of (C+1)^N states. This is, of course, a significant reduction; however, to describe the resulting SA directly we need matrices of size (N+C choose N) × (N+C choose N). The alternative is to use only collector and distributor matrices in an iterative solution algorithm without computing the matrices for the resulting SA. Which of the two ways is used depends on the parameters of the model; normally matrices with less than 5000 states can be stored efficiently as sparse matrices on today's computers.

To show the effect of the proposed reduction consider a model with N = 5, C = 5 and D = 5. The original model has 38880 states; using the above aggregation we get a reduced model with only 1260 states, which causes a significant reduction of the solution time needed by an iterative algorithm. Additionally the memory requirements are also reduced, although we generate the matrices for the aggregated SA describing the processes. The aggregated SA includes only 252 states and the resulting matrices are rather sparse. Thus not too much additional memory is needed to store the new matrix instead of the matrices for the isolated processes. On the other hand, we only need two vectors of size 1260 to perform an iterative solution, whereas the original model needs two vectors of size 38880 each.
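The state counts quoted above follow directly from the formulas:

```python
from math import comb

N, C, D = 5, 5, 5
original   = (C + 1) ** N * D     # (C+1)^N process states times D modes
aggregated = comb(N + C, N) * D   # multisets of N identical processes
print(original, aggregated)       # 38880 1260
```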
Now we consider very briefly the SA for the multiprocessor. For D = 4 it is characterized by the following matrices:

Q^(N+1) = ( σ_1  ρ_12 ρ_13 ρ_14 )        E_a^(N+1) = ( 1 0 0 0 )
          ( φ_21 σ_2  ρ_23 ρ_24 )                     ( 1 0 0 0 )
          ( φ_31 φ_32 σ_3  ρ_34 )                     ( 1 0 0 0 )
          ( φ_41 φ_42 φ_43 σ_4  )                     ( 1 0 0 0 )

where σ_x denotes the negative sum of the non-diagonal elements in row x. An aggregation of states is only possible for states with an identical processing power. This might be the case in systems working with cold spares, such that a failed part is immediately substituted by an identical part without affecting the speed of the processor. Now assume that the states 2, 3 and 4 realize an identical processing power and state 1 is the state where the multiprocessor has completely failed. States 2-4 can be aggregated according to ordinary lumpability if φ_21 = φ_31 = φ_41; the states can be aggregated according to exact lumpability if ρ_12 = ρ_13 = ρ_14 and σ_2 + φ_32 + φ_42 = ρ_23 + σ_3 + φ_43 = ρ_24 + ρ_34 + σ_4.
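Both conditions can be checked on a numeric instance. The sketch below picks rates satisfying them (a fully symmetric choice among modes 2-4; the values are illustrative, not from the paper) and verifies ordinary lumpability via the rows of Q V and exact lumpability via the rows of Q^T V:

```python
import numpy as np

r, f, a = 0.2, 0.5, 0.4       # repair rate, hard-degradation rate, internal rate
Q = np.array([[0, r, r, r],
              [f, 0, a, a],
              [f, a, 0, a],
              [f, a, a, 0]], dtype=float)
np.fill_diagonal(Q, -Q.sum(axis=1))   # diagonal = negative off-diagonal row sum

V = np.array([[1, 0],                  # groups {1} and {2, 3, 4}
              [0, 1],
              [0, 1],
              [0, 1]], dtype=float)
QV, QtV = Q @ V, Q.T @ V
ordinary = np.allclose(QV[1], QV[2]) and np.allclose(QV[1], QV[3])
exact    = np.allclose(QtV[1], QtV[2]) and np.allclose(QtV[1], QtV[3])
print(ordinary, exact)                 # True True
```

Note that the synchronized matrix E_a^(N+1) poses no obstacle here, since its rows are identical anyway.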

6 Conclusions

We have considered a new aspect in the analysis of SANs, namely equivalence and exact aggregation of SAs. A definition of equivalence of SAs has been given such that a SA can be substituted by an equivalent SA without modifying the behavior of the SAN. Equivalence is very important for the functional analysis of state based specifications (see e.g., [10]). Here we present a definition of quantitative equivalence which is an extension of qualitative equivalence; additionally, the proposed equivalence is a congruence with respect to the composition which is used to combine SAs to a SAN.

Although the paper does not introduce an algorithm to compute a reduced and equivalent representation for a given SA i, it is rather straightforward to define such an algorithm based on the inductive definition of ordinarily or exactly lumpable relations. In particular, a reduced SA can be computed locally on matrices of size n_i × n_i. This implies that aggregation of several SAs in a SAN can be performed independently, possibly in parallel. The proposed equivalence is of practical importance, since larger systems tend to include parts which can be reduced due to lumpability. This is in particular true for models of multiprocessor or

fault tolerant systems. It is normally not possible to specify the reduced models directly, since the reduction often hides a lot of the details of the modeled system; thus it is important to implement algorithms which automatically reduce a given specification. There are several topics for future research. In particular we think of extending the approach to near lumpability, which allows an additional reduction of a SA; however, the price to pay is an approximation error. Results on nearly lumpable Markov chains can be found in [4].

References

[1] T. Bolognesi, S.A. Smolka; Fundamental results for the verification of observational equivalence: a survey; In: H. Rudin, C. West (eds.), Protocol Specification, Testing and Verification VII, North Holland (1987) 165-179.
[2] P. Buchholz; The Aggregation of Markovian Submodels in Isolation; Universitat Dortmund, Fachbereich Informatik, Forschungsbericht Nr. 369, 1990 (revised version submitted for publication).
[3] P. Buchholz; Hierarchical Markovian Models - Symmetries and Aggregation; In: J. Hillston, R. Pooley (eds.), Computer Performance Evaluation '92, Edinburgh University Press (1993) 305-320.
[4] P. Buchholz; Exact and Ordinary Lumpability in Finite Markov Chains; Journ. of Appl. Prob. 31 (1994) 59-75.
[5] M. Davio; Kronecker Products and Shuffle Algebra; IEEE Trans. on Comp. 30 (1981) 116-125.
[6] S. Donatelli; Superposed Stochastic Automata: a class of Stochastic Petri nets amenable to parallel solution; In: Proc. of the Fourth Int. Workshop on Petri Nets and Performance Models, IEEE Press (1991) 54-63.
[7] D. Gross, D.R. Miller; The Randomization Technique as a Modeling Tool and Solution Procedure for Transient Markov Processes; Operations Research 32 (1984) 343-361.
[8] B.R. Haverkort, K.S. Trivedi; Specification Techniques for Markov Reward Models; Discrete Event Dynamic Systems: Theory and Application 3 (1993) 219-247.
[9] J.G. Kemeny, J.L. Snell; Finite Markov Chains; Springer, New York, Heidelberg, Berlin 1976.
[10] R. Milner; Communication and Concurrency; Prentice Hall 1989.
[11] V. Nicola; Lumping in Markov Reward Processes; Research Report RC 14719, IBM Yorktown Heights (1989).
[12] B. Plateau, K. Atif; Stochastic Automata Networks for Modeling Parallel Systems; IEEE Trans. on Softw. Eng. 17 (1991) 1093-1108.
[13] B. Plateau, J.M. Fourneau; A Methodology for Solving Markov Models of Parallel Systems; Journ. of Paral. and Distr. Comp. 12 (1991).
[14] M. Siegle; Structured Markovian Performance Modelling with Automatic Symmetry Exploitation; In: G. Haring, G. Kotsis (eds.), Computer Performance Evaluation, Modelling Techniques and Tools, Springer LNCS 794 (1994).
[15] W.J. Stewart, K. Atif, B. Plateau; The Numerical Solution of Stochastic Automata Networks; IMAG, Rapport Apache (LGI, LMC) No. 6, Grenoble, France (1993).
[16] U. Sumita, M. Rieders; Lumpability and time reversibility in the aggregation-disaggregation method for large Markov chains; Commun. Statist. - Stoch. Models 5 (1989) 63-81.

A Some Matrix Operations

This appendix summarizes some definitions and theorems on matrices and vectors which are used in the paper.

Definition 11 A permutation matrix P is an n × n matrix which includes in each row and each column exactly one element equal to 1; all other elements are 0.

For permutation matrices P P^T = P^T P = I holds. Tensor products are defined as follows (see also [5, 12, 13]).

Definition 12 The tensor (Kronecker) product of two matrices A ∈ R^{r1 × c1} and B ∈ R^{r2 × c2} is defined as C = A ⊗ B, C ∈ R^{r1 r2 × c1 c2}, where C((i1 - 1) r2 + i2, (j1 - 1) c2 + j2) = A(i1, j1) B(i2, j2) for 1 ≤ i_x ≤ r_x, 1 ≤ j_x ≤ c_x, x ∈ {1, 2}.

For tensor products the following properties hold if the ordinary matrix products are defined (see [5, 13]).

1. Associativity: (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C).
2. Compatibility with multiplication: (A B) ⊗ (C D) = (A ⊗ C)(B ⊗ D).
3. Distributivity over addition: (A + B) ⊗ (C + D) = A ⊗ C + A ⊗ D + B ⊗ C + B ⊗ D.

Tensor products are not commutative; however, the following theorem relates A ⊗ B and B ⊗ A.
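The three identities can be checked numerically on random matrices, e.g. with NumPy's `kron`:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.random((2, 2)) for _ in range(4))

# 1. associativity
assert np.allclose(np.kron(np.kron(A, B), C), np.kron(A, np.kron(B, C)))
# 2. compatibility with ordinary multiplication
assert np.allclose(np.kron(A @ B, C @ D), np.kron(A, C) @ np.kron(B, D))
# 3. distributivity over addition
assert np.allclose(np.kron(A + B, C + D),
                   np.kron(A, C) + np.kron(A, D) + np.kron(B, C) + np.kron(B, D))
print("all three identities hold")
```

The compatibility property is what makes vector-matrix multiplication with a tensor-structured SAN descriptor possible without building the full matrix.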

Theorem 12 Let A be an n_A × n_A matrix and B an n_B × n_B matrix; then B ⊗ A = P^T (A ⊗ B) P, where P is an n_A n_B × n_A n_B permutation matrix such that P(i, j) = 1 for i = (i_A - 1) n_B + i_B and j = (i_B - 1) n_A + i_A.

Proof: The proof can be found in [5]. □

As an example we consider

A = ( A(1,1)  A(1,2) )        B = ( B(1,1)  B(1,2) )
    ( A(2,1)  A(2,2) )            ( B(2,1)  B(2,2) )

and get

C = A ⊗ B = ( A(1,1)B(1,1)  A(1,1)B(1,2)  A(1,2)B(1,1)  A(1,2)B(1,2) )
            ( A(1,1)B(2,1)  A(1,1)B(2,2)  A(1,2)B(2,1)  A(1,2)B(2,2) )
            ( A(2,1)B(1,1)  A(2,1)B(1,2)  A(2,2)B(1,1)  A(2,2)B(1,2) )
            ( A(2,1)B(2,1)  A(2,1)B(2,2)  A(2,2)B(2,1)  A(2,2)B(2,2) )
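The permutation matrix of theorem 12 (often called a perfect shuffle) can be constructed and the identity verified directly:

```python
import numpy as np

def shuffle_perm(nA, nB):
    """Permutation matrix P of theorem 12: P(i, j) = 1 for
    i = (iA - 1) * nB + iB and j = (iB - 1) * nA + iA (1-based indices)."""
    P = np.zeros((nA * nB, nA * nB))
    for iA in range(1, nA + 1):
        for iB in range(1, nB + 1):
            P[(iA - 1) * nB + iB - 1, (iB - 1) * nA + iA - 1] = 1.0
    return P

rng = np.random.default_rng(1)
A, B = rng.random((2, 2)), rng.random((3, 3))
P = shuffle_perm(2, 3)
print(np.allclose(np.kron(B, A), P.T @ np.kron(A, B) @ P))   # True
```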
