Knowledge and the ordering of events in distributed systems (Extended Abstract)

Paul J. Krasucki, Dept. of Math. Sciences, Rutgers University, Camden College, Camden, NJ 08102
[email protected]
R. Ramanujam, The Institute of Mathematical Sciences, C.I.T. Campus, Madras 600 113, India
[email protected]
ABSTRACT: In asynchronous distributed systems, logical time is usually interpreted as "possible causality", a partial order on event occurrences. We investigate the relationship between the passage of time and changes in the knowledge of agents. We show that there is a certain duality between knowledge transition systems (defined here to model changes in the states of knowledge of agents) and partially ordered sets of event occurrences (the model of n-Asynchronously Communicating Sequential Agents).
1 Introduction

Consider a distributed system of n agents acting autonomously. Agents communicate by passing messages from one to another. Assume that every message sent is eventually delivered to the intended recipient, and that messages are delivered in the order in which they were sent. Such a model is standard in the theory of distributed computing. Lamport [Lam] discussed the ordering of event occurrences in such systems and argued that each agent 'locally' sees a linear order of event occurrences, whereas 'globally' only a partial order consistent with the local linear ones is available. In this discussion, the ordering refers to causal dependency between event occurrences; thus incomparability under the ordering denotes causal independence, and therefore (in a sense) concurrency. If we see concurrency as causal independence, one way of phrasing the assertion "event occurrences e1 and e2 can be concurrent" is: "no agent in the system knows that e1 must precede e2 or that e2 must precede e1". In a sense, this identifies the states of the system with the states of knowledge of agents in the system.

Since Halpern and Moses [HM] there has been extensive work in the study of knowledge states of agents in distributed systems. In particular, looking at how the occurrence
of an event can cause a change in agents' states of knowledge leads to viewing distributed protocols as goal-oriented activity. Such a protocol is then a transformation from the given initial state of knowledge in the system to a desired state where agents know some specific facts.

[Figure 1: two boxes, "Knowledge state-transition systems" and "Partially ordered sets of event occurrences", with arrows back and forth between them.]

We wish to study to what extent these notions of agents' knowledge (specified as equivalence relations on states, one relation for each agent) and partial orders on event occurrences are dual. In Figure 1, can we go back and forth without losing information about agents' behaviour? In the process, we would also like to understand more precisely assertions like the following: "an agent cannot lose knowledge by receiving a message", "an agent cannot gain knowledge by sending a message", and so on. Such statements are commonly used in the analysis of distributed protocols, and they do make intuitive sense.

Similar questions have been addressed in the literature, but in different contexts: Chandy and Misra [CM] have related chains of messages to change in knowledge of agents, and Parikh and Krasucki [PK] have precisely characterized levels of knowledge of agents (for a formula) and specified what sequences of messages are required to attain a given level. In the area of knowledge-based protocols there has been extensive work relating states of knowledge of agents and message histories (see [DM], [HZ], [Ma] for some expositions). However, the question we study here is the formal relationship between knowledge structures specified as transition systems and temporal structures specified as partially ordered sets of event occurrences. In a sense, it is closer to the spirit of [Pra], [NPW].
In the following sections we show that there is a simple class of transition systems (Knowledge Transition Systems, KTSs for short), enriched with equivalence relations on states, which corresponds to a natural partial-order model of event occurrences in distributed systems (Asynchronously Communicating Sequential Agents, abbreviated ACSAs). This correspondence is precise in the following sense: we associate a KTS with an ACSA, and conversely an ACSA with a KTS, in such a way that ACSA → KTS → ACSA is an isomorphism and KTS → ACSA → KTS is a simulation.
2 ACSAs

In [LRT], a model of distributed systems has been defined in the following manner: assume a collection of n agents, each of which is sequential, interacting by message passing. Each agent is 'tree-like', in the sense that its behaviour is given as a 'backwards-linear' poset of event occurrences. The formal definition is as follows:

Def 1: A system of n-Asynchronously Communicating Sequential Agents (n-ACSA) is a triple E = (E, ≤, λ), where n ∈ N, n > 0, and

(i) E is a set of event occurrences,
(ii) ≤ ⊆ E × E is a partial order called the causality relation, and
(iii) λ : E → {1,...,n} is a naming function such that for all e and all i ∈ {1,...,n}, the set {e′ | e′ ≤ e} ∩ λ⁻¹(i) is totally ordered by ≤. □

We will use the notation ↓x for the initial segment of the poset (X, ≤) up to the element x, i.e. for x ∈ X, ↓x = {y | y ≤ x}; similarly, for X′ ⊆ X, ↓X′ = {y | ∃x ∈ X′, y ≤ x}. We will also speak of ACSAs, leaving n implicit.
An agent is simply the set of event occurrences having the same name. We will often speak of the set Ei = λ⁻¹(i) as agent i. Note that condition (iii) imposes backward-linearity, making each agent tree-like; it is in this sense that the agents are sequential. Since λ is a function, each event occurrence is uniquely 'owned' by an agent, forcing asynchrony in communication: any communication is necessarily split into 'send' and 'receive'. ≤ is a causality relation in the sense that when e1 ≤ e2, every observation of event occurrence e2 necessarily implies an earlier observation of e1. For example, when e1 ≤ e2, λ(e1) = i, λ(e2) = j ≠ i and there is no e3 'between' e1 and e2, we can interpret e2 as the receipt by j of a message from i, where e1 constitutes the sending of this message. Clearly, causality here is in the sense that the sending of a message causally precedes its receipt.

Given such a notion of causality, a computation in an ACSA is simply a downward-closed set of event occurrences. This leads us to notions of conflict and concurrency in ACSAs. When we consider event occurrences which are incomparable under the causality ordering, conflicting ones are those which cannot both occur in the same computation, and concurrent ones are those which can. These are formally given below.

We say that event occurrences e1 and e2 are in local conflict when neither e1 ≤ e2 nor e2 ≤ e1, and λ(e1) = λ(e2). Since agents are sequential, we do not interpret causal independence within agents as potential concurrency, but as denoting a choice made in computation. e1 and e2 are in conflict if and only if there exist e1′ ≤ e1, e2′ ≤ e2 such that e1′ and e2′ are in local conflict. When are two event occurrences concurrent? We say that e1 and e2 are concurrent if λ(e1) ≠ λ(e2), e1 and e2 are incomparable under the causal ordering, and ↓e1 ∪ ↓e2 is conflict-free. These three notions, namely causality, conflict and concurrency, form the backbone of the behaviour theory of distributed systems. Note that for any e1, e2 ∈ E, we have e1 ≤ e2, or e2 ≤ e1, or e1 is in conflict with e2, or e1 and e2 are concurrent.

[Figure 2: a 2-ACSA with agent 1's event occurrences e1, e2, e3 and agent 2's event occurrences f1, f2; e2 and e3 are in local conflict, e2 and f2 in (non-local) conflict.]

We can now define a notion of a global state in an ACSA: c ⊆ E is a global state iff ↓c ⊆ c and c is conflict-free. Thus a state, being downward-closed and conflict-free, can be thought of as a partial run. We will often refer to global states as configurations. Note that the empty set is always a configuration. More importantly, for any e ∈ E, the set ↓e is a configuration, a fact which we will use crucially later on.

Consider the 2-ACSA in Figure 2. Events e2 and e3 are in local conflict, whereas e2 and f2 are in conflict; e2 and f1 are concurrent. For this example, {e1, e3, f1} is a configuration, whereas {e3, f1} and {e1, e2, e3, f1, f2} are not.

We say that an ACSA is finitary if and only if ↓e is finite for every e ∈ E. In the context of computation, it is natural to restrict attention to finitary ACSAs, and we will do precisely that. In fact, we will also be interested only in the finite configurations of finitary ACSAs. Note that finitariness implies discreteness of the partial order, a fact we will freely make use of. In discussions we will often implicitly assume a fixed n without specifying it.

3 Knowledge Transition Systems

Def 3: A Knowledge Transition System of n agents is a quadruple K = (S, →, s0, Eqn) where
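The notions above can be made concrete in a short program. The following Python sketch encodes my reading of Figure 2 (the covering pairs e1 ≤ e2, e1 ≤ e3, f1 ≤ f2, e3 ≤ f2 are an assumption reconstructed from the surrounding text) and checks local conflict, conflict, concurrency and configurations by brute force:

```python
from itertools import combinations

# A finitary ACSA sketch: events, causality given by covering pairs, and
# a naming function.  The concrete example is my reading of Figure 2;
# the exact arrows of the figure are an assumption.
E = {"e1", "e2", "e3", "f1", "f2"}
cover = {("e1", "e2"), ("e1", "e3"), ("f1", "f2"), ("e3", "f2")}
name = {"e1": 1, "e2": 1, "e3": 1, "f1": 2, "f2": 2}

def leq(x, y):
    """x <= y: reflexive-transitive closure of the covering pairs."""
    if x == y:
        return True
    return any(leq(z, y) for (w, z) in cover if w == x)

def down(s):
    """Downward closure of a set of events."""
    return {x for x in E for y in s if leq(x, y)}

def local_conflict(x, y):
    return not leq(x, y) and not leq(y, x) and name[x] == name[y]

def conflict(x, y):
    return any(local_conflict(a, b) for a in down({x}) for b in down({y}))

def conflict_free(s):
    return not any(conflict(a, b) for a, b in combinations(s, 2))

def concurrent(x, y):
    return (name[x] != name[y] and not leq(x, y) and not leq(y, x)
            and conflict_free(down({x, y})))

def is_configuration(s):
    return down(s) == set(s) and conflict_free(s)
```

On this example the sketch confirms the claims in the text: {e1, e3, f1} is a configuration, while {e3, f1} (not downward-closed) and {e1, e2, e3, f1, f2} (containing the locally conflicting pair e2, e3) are not.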
(a) S is a set of states (at most countable),
(b) → ⊆ S × S is a transition relation on S,
(c) s0 ∈ S is an initial state such that every state in S is reachable from s0 by paths through →, and
[Figure 4: Granularity of transitions in a KTS: i-transitions s0 → s1 → s2, together with a single coarser i-transition s0 → s2.]
(d) Eqn = (∼1, ..., ∼n) is an n-tuple of equivalence relations ∼i ⊆ S × S satisfying the following conditions:

(i) for all s, s′ ∈ S, if s → s′, then there exists a unique i such that s ≁i s′ [we can therefore define ⇒ ⊆ S × {1,...,n} × S by: s ⇒i s′ iff s → s′ and s ≁i s′],
(ii) for all s1, s2, s3 ∈ S and i ≠ j ∈ {1,...,n}, if s1 ⇒i s2 and s1 ⇒j s3, then there exists s4 ∈ S such that s2 ⇒j s4 and s3 ⇒i s4, and
(iii) for all s1, s2, s3 ∈ S and i ∈ {1,...,n}, if s1 ⇒i s2, s1 ⇒i s3 and s2 ≠ s3, then s2 ≁i s3. □
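Conditions (i)-(iii) can be checked mechanically on a finite structure. A minimal sketch follows; the concrete two-agent system, with states represented as sets of events, is an illustrative assumption, not taken from the paper:

```python
from itertools import product

# Brute-force check of Def 3's conditions (i)-(iii) on a small
# 2-agent example (an assumption chosen for illustration).
n = 2
S = [frozenset(s) for s in [(), ("a",), ("b",), ("a", "b")]]
T = {(frozenset(x), frozenset(y)) for x, y in
     [((), ("a",)), ((), ("b",)), (("a",), ("a", "b")), (("b",), ("a", "b"))]}
owner = {"a": 1, "b": 2}

def equiv(i, s, t):
    """s ~_i t iff agent i's events agree in s and t."""
    return {e for e in s if owner[e] == i} == {e for e in t if owner[e] == i}

def arrow(i, s, t):
    """s =>_i t iff s -> t and s ~_i t fails."""
    return (s, t) in T and not equiv(i, s, t)

def check_kts():
    for (s, t) in T:                       # (i): a unique distinguished agent
        if sum(not equiv(i, s, t) for i in range(1, n + 1)) != 1:
            return False
    for s1, s2, s3 in product(S, repeat=3):
        for i, j in product(range(1, n + 1), repeat=2):
            if i != j and arrow(i, s1, s2) and arrow(j, s1, s3):
                if not any(arrow(j, s2, s4) and arrow(i, s3, s4) for s4 in S):
                    return False           # (ii): confluence fails
            if i == j and arrow(i, s1, s2) and arrow(i, s1, s3) and s2 != s3:
                if equiv(i, s2, s3):
                    return False           # (iii): local choice not visible
    return True
```

For this toy system, `check_kts()` succeeds, since the two transitions enabled at the initial state belong to different agents and commute.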
Condition (i) refers to the locality of event occurrences in the systems we wish to study, and to the fact that observers see only one event occurrence at a time (this condition can be relaxed, and we will discuss that later on). In a sense, this ensures that transitions in KTSs correspond to event occurrences in ACSAs. Note that we still have considerable flexibility in the granularity of transitions. We can have the situation of Figure 4, where the transitions s0 → s1 and s1 → s2 can mean "add 1 to local variable y", whereas s0 → s2 is "add 2 to local variable y". Condition (ii) ensures confluence: if both an i-action and a j-action are enabled at a state, i ≠ j, then they can be performed in either order. Such 'forward-diamond conditions' are typical in models of 'true' concurrency. Condition (iii) asserts that every agent knows the effect of making a local choice. This corresponds with the intuition that two i-transitions enabled at the same state are in local conflict.

In the previous section, we discussed how a transition system with equivalence relations can be associated with an ACSA. As may be expected, it is a KTS, as the following proposition asserts:

Proposition 4: Let E = (E, ≤, λ) be a finitary n-ACSA, and let C denote the set of all finite configurations of E. Define the structure K = (C, →, ∅, Eqn) by:
(a) c → c′ iff there exists e ∈ E such that c′ = c ∪ {e}, and
[Figure 5: a 2-ACSA. Agent 1 has event occurrences e1 ≤ e2; agent 2 has f1 ≤ f2 ≤ f3; in addition f1 ≤ e2 (a message from agent 2 to agent 1) and e2 ≤ f3 (a message from agent 1 to agent 2).]

[Figure 6: the KTS associated with the 2-ACSA from Figure 5. States are configurations: s0 = ∅, s1 = {f1}, s2 = {f1, f2}, s3 = {e1}, s4 = {e1, f1}, s5 = {e1, f1, f2}, s6 = {e1, e2, f1}, s7 = {e1, e2, f1, f2}, s8 = {e1, e2, f1, f2, f3}. Horizontal arrows are ⇒2, vertical arrows are ⇒1.]
(b) Eqn = (∼1, ..., ∼n), where ∼i ⊆ C × C is given by: c ∼i c′ iff c ∩ Ei = c′ ∩ Ei, for all i ∈ {1,...,n}. Then K is a KTS. □

Consider the 2-ACSA in Figure 5; Figure 6 shows the transition system associated with it. Note that in Figure 6, s5 is the latest state in which agent 1 could have sent the message to agent 2 (event occurrence e2), and s8 is the earliest state in which agent 2 could receive that message. We also have s5 ⇒1 s7 ⇒2 s8, but there is no state s such that s5 ⇒2 s ⇒1 s8. A similar remark applies to s3 ⇒2 s4 ⇒1 s6, in the context of the message sent from agent 2 to agent 1 (event occurrence f1). We thus have a situation where s5 ∼2 s7 but the capabilities of agent 2 are different in the two states!
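Proposition 4's construction can be run on a small example. The sketch below rebuilds the configuration KTS of Figure 6 from the Figure 5 ACSA; the covering pairs (e1 ≤ e2, f1 ≤ f2 ≤ f3, and the messages f1 ≤ e2, e2 ≤ f3) are my reconstruction of the figure:

```python
from itertools import combinations

# Proposition 4 as code: from a finitary ACSA, build the KTS whose
# states are the finite configurations.  The ACSA is my reconstruction
# of Figure 5 (an assumption about the figure's arrows).
E = ["e1", "e2", "f1", "f2", "f3"]
cover = {("e1", "e2"), ("f1", "f2"), ("f2", "f3"), ("f1", "e2"), ("e2", "f3")}
name = {"e1": 1, "e2": 1, "f1": 2, "f2": 2, "f3": 2}

def below(y):
    """Strict predecessors of y under the causality relation."""
    preds = {x for (x, z) in cover if z == y}
    return preds | {w for p in preds for w in below(p)}

def configurations():
    # This ACSA is conflict-free, so configurations = downward-closed sets.
    cs = []
    for k in range(len(E) + 1):
        for s in combinations(E, k):
            if all(below(y) <= set(s) for y in s):
                cs.append(frozenset(s))
    return cs

def kts_transitions(cs):
    """c -> c' iff c' = c plus a single new event e."""
    return {(c, d) for c in cs for d in cs if c < d and len(d - c) == 1}

C = configurations()
T = kts_transitions(C)

def equiv(i, c, d):
    """c ~_i d iff c and d contain the same i-events."""
    return {e for e in c if name[e] == i} == {e for e in d if name[e] == i}
```

Running it yields the nine states s0, ..., s8 of Figure 6 and the eleven single-event transitions between them.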
[Figure 7: a KTS on states s0, ..., s7 with i- and j-transitions, exhibiting the event types listed in the caption below.]
Figure 7: KTS event types: concurrency ((s0, s1) and (s0, s2)); local conflict ((s0, s2) and (s1, s7)); conflict ((s5, s6) and (s1, s7)); local causality ((s0, s2) and (s3, s5)); and remote causality ((s1, s3) and (s5, s6)).

This brings us to the crucial point: situations where we have s1 ⇒i s2 ⇒j s3, but without any s such that s1 ⇒j s ⇒i s3, represent messages from agent i to agent j. We will now identify 'event types' specified in the KTS, and then proceed to associate an ACSA with a given KTS. Consider Figure 7, where we depict four different situations that occur in KTSs, which are intuitively interpreted as concurrency, conflict (local as well as inherited), local causality and remote causality. When the (cyclic) KTS is unfolded, these event types yield event occurrences. To this end we look at paths in the KTS.

For the rest of the section, fix a KTS K = (S, →, s0, Eqn). Define R_K ⊆ S*, the runs of the KTS, to be the set of sequences t1...tk, k ≥ 0, where t1 = s0 and, for 1 ≤ l < k, t_l → t_{l+1}. We use σ, σ′ etc. to range over elements of R (we will often omit the subscript K when the context is clear). Usually we will be interested only in non-null runs in the KTS. Let R+ = {σ ∈ R : |σ| > 1}; these have at least one state transition. Define λ : R+ → {1,...,n} by: λ(σ) = i, where σ = σ′ss′ and s ⇒i s′. We use ⊑ to denote the prefix ordering on sequences.

Let σ = t1 ... t_l ⇒i t_{l+1} ⇒i1 ... ⇒iq t_p ⇒j1 t_{p+1} ⇒j2 ... ⇒jr t_{m−1} ⇒j t_m ... t_n be a run of the KTS. We will call the pair of transitions t_l ⇒i t_{l+1} and t_{m−1} ⇒j t_m concurrent iff there exists a run of the KTS, σ′ = t1 ... t_l ⇒j1 u_{p+1} ⇒j2 ... ⇒jr u_{m−1} ⇒j u_m, such that for all v, p+1 ≤ v ≤ m, t_v ≠ u_v, the sets I1 = {i, i1, i2, ..., iq} and I2 = {j1, j2, ..., jr, j} are disjoint, and u_{p+1} ∼j1 t_{p+1}, u_{p+2} ∼j2 t_{p+2}, ..., u_m ∼j t_m. Note that this implies that there is a 'grid' of transitions in the KTS between t_l and t_m.

Consider a run σ ∈ R+, say t1...tk (k > 1). Let λ(σ) = i and let j ≠ i.
In such a case, we say that the prefix t1...t_l (1 < l < k) is the j-predecessor of σ if t_{l−1} ⇒j t_l is the maximal j-transition in σ not concurrent with t_{k−1} ⇒i t_k. If no such l exists, we define the j-predecessor of σ to be the null sequence. Note that the j-predecessor is now defined for all j ≠ λ(σ).
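The non-commuting corner s1 ⇒i s2 ⇒j s3 can be searched for by brute force. A sketch follows, on the KTS of Figure 6; the state sets are as in the text (my reconstruction of the figure), and every single-event extension between listed states is treated as a transition, which is how the configuration KTS behaves:

```python
# Detecting messages in a KTS: the pattern s1 =>_i s2 =>_j s3 with no
# state s such that s1 =>_j s =>_i s3.  States of Figure 6, written out
# directly as configurations.
S = {
    "s0": set(), "s1": {"f1"}, "s2": {"f1", "f2"}, "s3": {"e1"},
    "s4": {"e1", "f1"}, "s5": {"e1", "f1", "f2"}, "s6": {"e1", "e2", "f1"},
    "s7": {"e1", "e2", "f1", "f2"}, "s8": {"e1", "e2", "f1", "f2", "f3"},
}
name = {"e1": 1, "e2": 1, "f1": 2, "f2": 2, "f3": 2}

def step(s, t):
    """s =>_i t for the unique i, else None (t adds exactly one event)."""
    new = S[t] - S[s]
    if S[s] <= S[t] and len(new) == 1:
        return name[new.pop()]
    return None

def messages():
    """All (i, j, s1, s3) where an i-then-j corner cannot be commuted."""
    found = []
    for s1 in S:
        for s2 in S:
            i = step(s1, s2)
            if i is None:
                continue
            for s3 in S:
                j = step(s2, s3)
                if j is None or j == i:
                    continue
                if not any(step(s1, t) == j and step(t, s3) == i for t in S):
                    found.append((i, j, s1, s3))
    return found
```

It finds exactly the two messages discussed earlier: the corner s3 ⇒2 s4 ⇒1 s6 (the send f1, from agent 2 to agent 1) and s5 ⇒1 s7 ⇒2 s8 (the send e2, from agent 1 to agent 2).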
The intuition behind the above definition should be obvious: within a run, it helps us identify event occurrences which constitute receipts of messages and match them with the corresponding send occurrences. As an example, suppose σ = s0 ⇒j s1 ⇒k s2 ⇒i s3, where i, j, k are distinct. If there exists s such that s1 ⇒i s and s ∼i s3, then s ⇒k s3; the transitions (s1, s2) and (s2, s3) are then concurrent and cannot be interpreted as a send-receive pair. Otherwise, we have the lack of "forward-confluence" referred to above, which we interpret as message passing. On the other hand, we can have s1 ⇒i s but no s′ such that s0 ⇒i s′ and s′ ∼i s3; this again leads us to interpret (s0, s1) as a send and (s2, s3) as a receive. When neither s nor s′ exists, we can think of a message from j to i routed through k.

Proposition 5: Suppose σ1 ⊑ σ2, λ(σ1) ≠ i and λ(σ2) ≠ i. Then the i-predecessor of σ1 is also a prefix of the i-predecessor of σ2. □

Let [s]i = {s′ ∈ S | s ∼i s′} and Si = {[s]i | s ∈ S}, 1 ≤ i ≤ n. Define Li ⊆ Si*, the local i-runs of the KTS, to be the set of sequences x1...xk, k ≥ 0, where s0 ∈ x1 and, for 1 ≤ l < k, there exist s ∈ x_l, s′ ∈ x_{l+1} such that s ⇒i s′. A local run is a local i-run, for some 1 ≤ i ≤ n. We use μ, μ′ etc. to range over local runs. The following proposition asserts that an i-run cannot (usually) also be a j-run: the exceptions are the null sequence, and the singleton sequence when ⇒i is empty.

Proposition 6: Suppose μ ∈ Li, |μ| > 1. Then μ ∉ Lj, for every j ≠ i.
Proof: Suppose μ ∈ Li, μ = x1x2μ′. Let s ∈ x1, s′ ∈ x2 be such that s ⇒i s′. Clearly s ∉ [s′]i, while for j ≠ i we have s ∼j s′, so {s, s′} ⊆ [s′]j. Therefore there can be no μ′′ ∈ Lj starting with x1. □

Define the projection maps πi : R → Li as follows: let σ = t1...tk, k ≥ 0.
πi(σ) = null, if there is no 1 ≤ l < k such that t_l ⇒i t_{l+1}, and πi(σ) = [s]i [s′]i πi(σ2), if σ = σ1 s s′ σ2, s ⇒i s′ and πi(σ1) = null.

Let the map Φ : R+ → L1 × ... × Ln be defined as follows: Φ(σ) = (π1(σ1), ..., πn(σn)), where σi = σ if i = λ(σ), and σi is the i-predecessor of σ otherwise. Clearly the map Φ is well-defined. We use Φ to associate an ACSA with the KTS.

Proposition 7: Let K = (S, →, s0, Eqn) be a KTS. Define the structure E = (E, ≤, λ) by:
E = {Φ(σ) | σ ∈ R+},
≤ = {(Φ(σ1), Φ(σ2)) | there exists σ such that Φ(σ) = Φ(σ1), and either (λ(σ) = λ(σ2) and σ ⊑ σ2), or (there exists σ′ such that λ(σ) = λ(σ′) = j ≠ λ(σ2), σ ⊑ σ′, and σ′ is the j-predecessor of σ2)}, and
[Figure 8: the ACSA associated with the KTS from Figure 7. With i-classes x0 = {s0, s1}, x1 = {s2, s3, s4}, x2 = {s7}, x3 = {s5, s6} and j-classes y0 = {s0, s2}, y1 = {s1, s3, s5, s7}, y2 = {s4, s6}, agent i has event occurrences e1 = (x0x1, null), e2 = (x0x1x3, y0y1), e3 = (x0x2, y0y1), and agent j has f1 = (null, y0y1), f2 = (x0x1, y0y1y2).]
λ(Φ(σ)) = λ(σ). Then E is an n-ACSA.
Proof: Reflexivity and antisymmetry of ≤ are trivial. To prove transitivity, suppose that e1 ≤ e2 ≤ e3, and let e1 = Φ(σ1), e2 = Φ(σ2), e3 = Φ(σ3), σ1 ⊑ σ2 ⊑ σ3. Clearly σ1 ⊑ σ3, so we are done if λ(σ1) = λ(σ3). Hence suppose that λ(σ1) = i ≠ j = λ(σ3). Let σ3′ be the i-predecessor of σ3. We now have two cases. If λ(σ2) = i, then σ1 ⊑ σ2 ⊑ σ3′, and we are done. Now consider the case when λ(σ2) ≠ i, and let σ2′ be the i-predecessor of σ2. Since σ1 ⊑ σ2′ and, by Proposition 5 above, σ2′ ⊑ σ3′, we get σ1 ⊑ σ3′, as required.

To check that ≤ is backwards-linear within agents, consider e1 ≤ e3 and e2 ≤ e3 such that λ(e1) = λ(e2) = i, say. Let e1 = Φ(σ1), e2 = Φ(σ2), e3 = Φ(σ3), and σ1 ⊑ σ3, σ2 ⊑ σ3. Clearly σ1 ⊑ σ2 or σ2 ⊑ σ1. By the definition of ≤, we see that e1 ≤ e2 in the former case, and e2 ≤ e1 in the latter. Thus ↓e3 ∩ Ei is totally ordered by ≤, as required. □
We refer to the structure E as the ACSA associated with K and denote it E_K. In Figure 8 we have an example of the ACSA associated with the KTS from Figure 7.
4 Forth and Back

Given an ACSA E, we can associate a KTS K_E with it, and with this KTS we can associate an ACSA E_{K_E}. What is the relationship between E and E_{K_E}?

Theorem 8: Let E = (E, ≤, λ) be a finitary n-ACSA, and let E_{K_E} = (E′, ≤′, λ′). Then there is a bijection f : E → E′ such that:
for all e, e′ ∈ E, e ≤ e′ iff f(e) ≤′ f(e′), and for all e ∈ E, λ(e) = λ′(f(e)). In other words, E_{K_E} is isomorphic to E.

Proof: Let E = (E, ≤, λ) be the given ACSA, and fix K_E to be K. Let C denote the set of finite configurations of E. For c ∈ C, let [c]i stand for the equivalence class of c under the relation ∼i. Given e ∈ E with λ(e) = i, consider the set {[↓e′]i | e′ ≤ e, λ(e′) = i}; since E is finitary and backwards-linear within agents, this set can be written as a finite sequence of elements [↓e1]i, ..., [↓ek]i, where for j ∈ {1,...,k−1}, ej < ej+1, and ek = e. Call this sequence, prefixed by [∅]i, the local history of i at e, denoted lh(e). Now consider the tuple ⟨μ1, ..., μn⟩, where for j ∈ {1,...,n}, μj is the null sequence if ↓e ∩ Ej = ∅, and otherwise μj is lh(e′), where e′ is the maximal j-event in ↓e. Call this tuple the i-view at e.

We claim that the i-view at e is indeed an i-event occurrence in K and hence a member of E′. To see this, observe firstly that for any e′ ∈ E, lh(e′) is a local j-run in K, where λ(e′) = j, and |lh(e′)| > 1. We only need to check that every such i-view is generated as Φ(σ) for some run σ in K. A schedule of a configuration c ∈ C is a sequence e1...ek such that l ≠ m implies el ≠ em, c = {e1,...,ek}, and if el ≤ em then l ≤ m. Note that every schedule (of any configuration) corresponds to a run in K, and conversely that every run from ∅ to a configuration c ∈ C defines a schedule of c. Let e1...ek be a schedule of ↓ek, λ(ek) ≠ j. Suppose ↓ek ∩ Ej ≠ ∅ and el is the maximal j-event occurrence in ↓ek. It can easily be seen that the schedule e1...el (of ↓el = c′, say) is the j-predecessor of e1...ek: since el ∈ ↓ek, el and ek cannot be concurrent. This shows that Φ(σ) is exactly the i-view at ek, where σ = ∅{e1}...{e1,...,ek}, e1...ek is a schedule of ↓ek, and λ(ek) = i. We can thus meaningfully define the map F : E → E′ given by: F(e) = the λ(e)-view at e. To prove that F is injective, suppose e1 ≠ e2.
If λ(e1) = λ(e2) = i, then clearly lh(e1) ≠ lh(e2), and hence the i-view at e1 is distinct from the i-view at e2. Otherwise, suppose λ(e1) = i ≠ j = λ(e2), but the i-view at e1 is identical to the j-view at e2. This means that e1 is the maximal i-event in ↓e2, and that e2 is the maximal j-event in ↓e1. But then we get e1 ≤ e2, e2 ≤ e1 and e1 ≠ e2, contradicting the antisymmetry of ≤. Thus F is injective.

Now consider e′ ∈ E′ with λ′(e′) = i. Then e′ is an i-event occurrence in K_E, say ⟨μ1,...,μn⟩. Since |μi| > 1, μi is of the form μxx′, where there exist c, c′ ∈ C, c ∈ x, c′ ∈ x′, and for some e ∈ Ei, c′ = c ∪ {e}. It can then be easily checked that e′ is indeed the i-view at e, that is, F(e) = e′, proving surjectivity of F. The fact that λ(e) = λ′(F(e)) is trivial.

Now consider e1, e2 ∈ E such that e1 ≤ e2. Then every schedule σ2 for ↓e2 includes as a prefix a schedule σ1 for ↓e1. Since σ1 ⊑ σ2, Φ(σ1) ≤′ Φ(σ2), that is, F(e1) ≤′ F(e2), as required. On the other hand, suppose e1, e2 ∈ E are such that F(e1) ≤′ F(e2). Again let F(e1) be an i-event occurrence in K_E and F(e2) a j-event occurrence. Then if F(e2) = ⟨μ1,...,μn⟩, μj is of the form μxx′, where there exist c ∈ x, c′ ∈ x′ such that c′ = c ∪ {e2}, and μi is of the form μ′x1x2μ′′x3x4, where there exist cl ∈ xl, l ∈ {1,...,4}, such that c2 = c1 ∪ {e1}, c2 ⊆ c3, c4 = c3 ∪ {e3}, and e3 is the maximal i-event in ↓e2. Thus we get e1 ≤ e3 ≤ e2, and hence e1 ≤ e2. Thus F is order-preserving too, and we have the result. □
5 Back and Forth

Given a KTS K, we can associate an ACSA E_K with it, and with this ACSA we can associate a KTS K_{E_K}. What is the relationship between K and K_{E_K}? Before we answer this question, we define a notion of simulation between transition systems, which is a kind of unfolding.

Def 9: Let K = (S, →, s0, Eqn) and K′ = (S′, →′, s0′, Eqn′) be KTSs. We say that K′ is a simulation of K iff there exists a surjective map f : S′ → S such that:

(i) f(s0′) = s0,
(ii) for s1′, s2′ ∈ S′, if s1′ →′ s2′ then f(s1′) → f(s2′),
(iii) for s1, s2 ∈ S, if s1 → s2 and f(s1′) = s1, then there exists s2′ ∈ S′ such that s1′ →′ s2′ and f(s2′) = s2, and
(iv) for s1′, s2′ ∈ S′, if s1′ ∼′i s2′ then f(s1′) ∼i f(s2′). □
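Def 9 can also be checked by brute force. A minimal sketch follows, on a toy one-agent example: a two-state cyclic KTS and a four-state unfolding of it; all names are illustrative assumptions:

```python
from itertools import product

# Checking the simulation conditions of Def 9: a two-state cyclic KTS K
# is simulated by its four-state unfolding K', via the surjection f.
S  = ["s0", "s1"]
T  = {("s0", "s1"), ("s1", "s0")}
eq = {1: {frozenset({"s0"}), frozenset({"s1"})}}          # ~_1 classes in K

S2  = ["t0", "t1", "t2", "t3"]
T2  = {("t0", "t1"), ("t1", "t2"), ("t2", "t3"), ("t3", "t0")}
eq2 = {1: {frozenset({"t0", "t2"}), frozenset({"t1", "t3"})}}

f = {"t0": "s0", "t1": "s1", "t2": "s0", "t3": "s1"}
init, init2 = "s0", "t0"

def related(classes, x, y):
    return any(x in c and y in c for c in classes)

def is_simulation():
    if set(f.values()) != set(S) or f[init2] != init:      # surjective, (i)
        return False
    if not all((f[a], f[b]) in T for (a, b) in T2):        # (ii)
        return False
    for (s1, s2), a in product(T, S2):                     # (iii)
        if f[a] == s1 and not any((a, b) in T2 and f[b] == s2 for b in S2):
            return False
    return all(related(eq[1], f[a], f[b])                  # (iv)
               for a, b in product(S2, repeat=2)
               if related(eq2[1], a, b))
```

The cycle in K is 'unrolled' twice in K′; each preimage of a state of K can still mimic every transition K makes, which is exactly what condition (iii) demands.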
For the rest of this section, fix a KTS K = (S, →, s0, Eqn), and let E_K = (E, ≤, λ). Before we proceed with the "back and forth" argument, we need some technical results. Recall that R denotes the set of all runs of the KTS. Define the map g : R → 2^E by: g(s0) = ∅, and for σ ∈ R+, g(σ) = {Φ(σ′) | σ′ ⊑ σ, σ′ ∈ R+}.

Proposition 10: Suppose σ ∈ R. Then g(σ) ∈ C. Further,
(a) |g(σ)| = |σ| − 1, and
(b) if σ1 is a prefix of σ2, then g(σ1) ⊆ g(σ2). □

Proposition 11: Suppose g(σ1) = g(σ2). Then there exists s such that σ1 is of the form σs and σ2 is of the form σ′s.
[Figure 9: the 'grid' used in the proof of Proposition 11: j-transitions t_m ⇒j t_{m+1} ⇒j1 ... ⇒jr t_p matched by s_{m+1} ⇒j u_{m+1} ⇒j1 ... ⇒jr u_p, with i-transitions t_m = s_m ⇒i s_{m+1} and t_l ⇒i u_l, ending at t_{p+1} = u_p.]
Proof: Suppose g(σ1) = g(σ2) = c. Then |σ1| = |σ2| = k, say. When k = 1, the result is obvious, and the required s = s0. So suppose k > 1. Let σ1 = s1 ⇒i1 s2 ⇒i2 ... sk and σ2 = t1 ⇒j1 t2 ⇒j2 ... tk. Clearly s1 = t1 = s0, the initial state of the KTS. Let m be the latest index of agreement between σ1 and σ2, i.e. for 1 ≤ l ≤ m, sl = tl, and s_{m+1} ≠ t_{m+1}. Since the first state of both runs is s0, m > 0. Thus 0 ≤ k − m < k. We show the result by induction on k − m. The base case, when m = k, is trivial, since then the required s = sk = tk.

Now assume by the induction hypothesis that the result holds for all pairs of runs whose latest index of agreement exceeds m, and consider runs σ1 and σ2 with m < k. To fix notation, let σ = s1s2...sm (= t1t2...tm), c0 = g(σ), sm ⇒i s_{m+1} and sm ⇒j t_{m+1}. Further, let c1′ = g(σ s_{m+1}) = c0 ∪ {e1} and c2′ = g(σ t_{m+1}) = c0 ∪ {e2}. That such configurations exist is assured by the construction of g. Clearly, e1 and e2 are respectively i- and j-event occurrences. Note that by monotonicity of g, e1 ∈ c1′ ⊆ c and e2 ∈ c2′ ⊆ c, thus {e1, e2} ⊆ c.

Now suppose i = j. If e1 = e2, then [s_{m+1}]i = [t_{m+1}]i; we have sm ⇒i s_{m+1}, sm ⇒i t_{m+1} and s_{m+1} ∼i t_{m+1}, therefore s_{m+1} = t_{m+1}, contradicting the fact that the latest index of agreement is m < k. Therefore e1 ≠ e2. But then e1 and e2 are in immediate conflict in E, contradicting the fact that {e1, e2} ⊆ c. Thus we find that i ≠ j.

Let p be the smallest index such that tp ⇒i t_{p+1}; clearly, m < p < k. Now for each l with m < l ≤ p, let g(σ t_{m+1}...tl) = cl, with cl+1 = cl ∪ {el}. By the definition of KTSs, we can find a sequence of states (see Figure 9) u_{m+1}...up such that for every l with m < l ≤ p, tl ⇒i ul; further, s_{m+1} ⇒j u_{m+1}, and for all m < l < p, ul ⇒jl u_{l+1} whenever tl ⇒jl t_{l+1}. Now, for m < l ≤ p, let dl = g(σ s_{m+1} u_{m+1}...ul). From the fact that sm ∼i t_{m+1} ∼i ... ∼i tp and s_{m+1} ∼i u_{m+1} ∼i ... ∼i up, we get, for all m < l ≤ p, dl = cl ∪ {e1}. Now let e′ be the event occurrence with c_{p+1} = cp ∪ {e′}. Thus we have two i-event occurrences e1, e′ enabled at cp. If e1 ≠ e′, they are in (immediate) conflict, and hence e1 ∉ d for every configuration d with c_{p+1} ⊆ d. But by monotonicity of g, c_{p+1} ⊆ g(σ2) = c, and e1 ∈ c, a contradiction. Therefore e1 = e′, implying up ∼i t_{p+1}, which is possible only if up = t_{p+1}.

Now let σ2′ = σ s_{m+1} u_{m+1}...up t_{p+2}...tk. Since g(σ s_{m+1} u_{m+1}...up) = g(σ t_{m+1}...tp t_{p+1}), we also have g(σ2) = g(σ2′) = c. But now consider the two runs σ1, σ2′: their g-image is the same, and their latest index of agreement is at least m + 1; by the induction hypothesis, sk = tk, which is what we set out to prove. □

Corollary 1: Suppose c ∈ C. Then there exists σ ∈ R such that g(σ) = c.
Proof: If c = ∅, then g(s0) = c. Inductively assume that for every configuration c with |c| < k there exists σ such that g(σ) = c. Now suppose c ∈ C, |c| = k, k > 0. There exist a configuration c′ and e ∈ c such that c = c′ ∪ {e}. By the inductive assumption, there exists σ′ such that g(σ′) = c′. Let e = Φ(σ), σ = σ1ss′. Then g(σ1s) = c′, hence by the previous proposition σ′ is of the form μs. Thus g(σ′s′) = c, as required. □

Theorem 12: Let K be a KTS. Then K_{E_K} is a simulation of K.
Proof: Let K = (S, →, s0, Eqn), E_K = (E, ≤, λ) and K_{E_K} = (S′, →′, s0′, Eqn′). Clearly S′ is the set C of finite configurations of E. Define f : C → S by: f(∅) = s0, and for all nonempty configurations c, f(c) = s, where there exists a run σs ∈ R+ such that g(σs) = c. The previous proposition and corollary together ensure that f is indeed well-defined. Now if c′ = c ∪ {e}, f(c) = s and f(c′) = s′, then by an argument similar to the one above we can check that e = Φ(σss′), and hence s → s′. On the other hand, if s → s′ in K, there is a run σss′ such that g(σss′) = g(σs) ∪ {Φ(σss′)}; that is, when f(c) = s, there exists c′ with f(c′) = s′ such that c →′ c′. Finally, suppose c ∼′i c′, that is, c ∩ Ei = c′ ∩ Ei. Let g(σs) = c and g(σ′s′) = c′. We find that πi(σs) = πi(σ′s′), and hence s ∼i s′, as required. □
6 Discussion

It is interesting to explore the implications of condition (i) in the definition of knowledge transition systems. Condition (i) asserts that every state is a knowledge state, and hence any change in state is due to the fact that the knowledge of some agent (in fact, a unique one) has changed. In this view, there is no "state of the real world", but only what is got by pooling the information available to all the agents. One technical reason for such a condition is the following: we wish to see every state change as an event in the system, and if the new state is indistinguishable from the old one for every agent in the system, this means an event occurrence unobservable to all agents. In such a situation, we have taken the attitude that events outside the system might as well not exist. However, we could indeed drop this stringent condition and work with external events; this means a similar change in the definition of n-ACSAs: the naming function now maps E to {1, 2, ..., n} ∪ {∗}, where λ(e) = ∗ denotes that e is an external event.

The other aspect is that of uniqueness: why should a state change not be accompanied by knowledge change for several agents? In the previous section, we have utilized uniqueness
principally for convenience in extracting asynchronously communicating sequential agents, where every internal event occurrence is associated with only one agent. We can in fact generalize these structures to allow shared events, as proposed in [LRT]:

Def 13: A system of n Communicating Sequential Agents (n-CSA) is a triple C = (E, ≤, λ) where

(i) E is a set of event occurrences,
(ii) ≤ ⊆ E × E is a partial order called the causality relation, and
(iii) λ : E → 2^{1,...,n} is a naming function such that for all e and all i ∈ {1,...,n}, the set {e′ | e′ ≤ e} ∩ {e′ | i ∈ λ(e′)} is totally ordered by ≤. □
This gives us considerable generality to discuss systems with the possibilities of both synchronous and asynchronous communication. With a view to relating KTSs with n-CSAs, we can drop the uniqueness conditions. However, the confluence and other conditions become more difficult to express now. We need to generalize the relation ⇒i to ⇒u, where u ⊆ {1, 2, ..., n}, with the interpretation that s ⇒u s′ means that for all i ∈ u, s ≁i s′, and for all j ∉ u, s ∼j s′ (along with the condition s → s′). Then the confluence condition is specified for s ⇒u s1, s ⇒v s2, where u ∩ v = ∅; on the other hand, the local choice condition is now for s ⇒u s1, s ⇒v s2, where u ∩ v ≠ ∅. With such modifications on both sides, we believe that (the generalized) KTSs can be related to n-CSAs as before. However, the system can now have many "major" state transitions which in a sense 'bypass' many paths in the system, and this creates many technical difficulties. We believe that a result similar to the one in the previous section can be proved for this class as well.

Another question relates to the fixed number n of agents in the system: is that crucial? It can easily be checked that the results in the previous section extend to systems with countably many agents. However, when we allow shared events as proposed above, we need to ensure that λ maps events only to finite sets of agents, and we need conditions to ensure that every state change in the KTS causes knowledge change for only finitely many agents.

In a different direction, we can relate knowledge transition systems to "communicating" transition systems. We can think of these as automata making local transitions, except that there can be dependencies between transitions across systems. Every KTS can easily be "decomposed" into such automata, and we can associate n-ACSAs with the automata so that a simulation of the original KTS can be recovered.
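The generalized relation ⇒u is determined by → and the relations ∼i. A minimal sketch follows; the two-agent system with a single shared event (one that changes both agents' views at once) is an illustrative assumption:

```python
# Section 6's generalized transition s =>_u s', derived from -> and the
# relations ~_i.  The toy system below has one shared event, so the
# unique-agent condition (i) of Def 3 is deliberately violated.
n = 2
S = ["s0", "s1"]
T = {("s0", "s1")}
# each agent's view at each state (illustrative values)
view = {("s0", 1): 0, ("s1", 1): 1, ("s0", 2): 0, ("s1", 2): 1}

def equiv(i, s, t):
    """s ~_i t iff agent i cannot distinguish s from t."""
    return view[(s, i)] == view[(t, i)]

def u_of(s, t):
    """The set u with s =>_u t: the agents whose knowledge changes."""
    if (s, t) not in T:
        return None
    return frozenset(i for i in range(1, n + 1) if not equiv(i, s, t))
```

Here the single transition is a ⇒u step with u = {1, 2}: a shared event observed by both agents, which a plain KTS would have to forbid.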
Def 14: A system of n communicating transition systems is an (n+1)-tuple CA = (TS1, ..., TSn, Comm), where for each i ∈ {1, 2, ..., n}, TSi = (Xi, →i) is a local transition system with local states Xi and local transition relation →i ⊆ Xi × Xi. Comm is the communication constraint, Comm ⊆ (X × X) × (X × X), where X = ∪i Xi, and (x1, x2) Comm (x3, x4) implies that for some i ≠ j, x1 →i x2 and x3 →j x4.
Now suppose we are given a KTS (S, →, s0, Eqn); we can associate a CA with it as follows. Define:
Xi = {x | x is an equivalence class under ∼i},
x →i y iff there exist s ∈ x, s′ ∈ y such that s ⇒i s′, and
(x, y) Comm (x′, y′) iff there exist s0 ∈ x, s1 ∈ y ∩ x′ and s2 ∈ y′ such that s0 ⇒i s1 ⇒j s2, and there is no s3 ∈ S such that s0 ⇒j s3 ⇒i s2, where x, y are i-equivalence classes and x′, y′ are j-equivalence classes, i ≠ j. □
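The construction above can be sketched on the configuration KTS of Figure 6 (state sets as in the text; they are my reconstruction of the figure):

```python
# Deriving a CA from a KTS: local states X_i as ~_i-classes, local
# transitions ->_i, and the communication constraint Comm.
S = {
    "s0": set(), "s1": {"f1"}, "s2": {"f1", "f2"}, "s3": {"e1"},
    "s4": {"e1", "f1"}, "s5": {"e1", "f1", "f2"}, "s6": {"e1", "e2", "f1"},
    "s7": {"e1", "e2", "f1", "f2"}, "s8": {"e1", "e2", "f1", "f2", "f3"},
}
name = {"e1": 1, "e2": 1, "f1": 2, "f2": 2, "f3": 2}

def view(i, s):
    return frozenset(e for e in S[s] if name[e] == i)

def classes(i):
    """X_i: equivalence classes of states under ~_i."""
    out = {}
    for s in S:
        out.setdefault(view(i, s), set()).add(s)
    return {frozenset(v) for v in out.values()}

def step(s, t):
    new = S[t] - S[s]
    return name[next(iter(new))] if S[s] <= S[t] and len(new) == 1 else None

def local_arrow(i, x, y):
    """x ->_i y iff some s in x, s' in y with s =>_i s'."""
    return any(step(s, t) == i for s in x for t in y)

def comm(i, j, x, y, xp, yp):
    """(x, y) Comm (x', y') per the definition above."""
    return any(step(a, b) == i and step(b, c) == j
               and not any(step(a, d) == j and step(d, c) == i for d in S)
               for a in x for b in (y & xp) for c in yp)
```

Here agent 1 has three local states (its views ∅, {e1}, {e1, e2}) and agent 2 four; the non-commuting corner s5 ⇒1 s7 ⇒2 s8 shows up as a Comm-constrained pair of local transitions.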
The unfolding of CAs to obtain n-ACSAs is straightforward. In this manner, we can relate CAs and KTSs.
7 Conclusion

We have argued that the notion of partially ordered time in distributed systems is in a sense dual to the notion of knowledge change of agents in the system, and we have attempted to formalize this (widely believed) result. Further, while it is customary to assert that "knowledge is not lost by receiving messages" and that "knowledge is not gained by sending messages", here we utilize such information to conclude whether an event occurrence constitutes an internal event, the sending of a message, or the receipt of a message.

This setting seems to be appropriate for category-theoretic analysis. Note that in knowledge transition systems, each state has an S5 Kripke structure in it, and transitions refine the information partitions and thus modify the structures. Further, we hope to place the relationships between KTSs and ACSAs on firmer foundations by studying categories of knowledge transition systems and categories of ACSAs. We would like to study the category of KTSs (with KTSs as objects and simulations as morphisms) on one hand, and the category of ACSAs (with label-preserving, order-preserving maps as morphisms) on the other. The question we study in this paper then becomes the following: given the functor from the category of ACSAs to the category of KTSs, obtain the left adjoint of that functor. Our results about isomorphism in Section 4 and simulation in Section 5 then constitute a coreflection.

We would like to study knowledge transition systems using a propositional modal logic which contains the usual S5 modalities Ki for i ∈ {1,...,n}, as well as the temporal modalities □ and ◇. Such a logic is easily defined, and seen to be decidable too. However, obtaining a complete axiomatization seems to be a nontrivial problem. Another interesting question relates to viewing KTSs as automata. In the last section we mentioned communicating transition systems, but to make sense of these as automata, we need to introduce action-labelled transitions. The alphabet of actions can then be partitioned n ways. We can then consider finite-state KTSs as acceptors of regular languages with some notion of communication dependency (in the same way as Zielonka automata [Zie] accept the regular trace languages of Mazurkiewicz [Maz]). We thus have hopes of KTSs providing a transition-system model of message passing.
References

[CM] Chandy, M., and Misra, J., "How processes learn", Distributed Computing, vol. 1, #1, pp. 40-52.
[DM] Dwork, C., and Moses, Y., "Knowledge and common knowledge in a Byzantine environment: Crash failures", Information and Computation, vol. 88 (1990), pp. 156-186.
[HF] Halpern, J., and Fagin, R., "Modelling knowledge and action in distributed systems", Distributed Computing, vol. 3, #4, 1989, pp. 159-177.
[HM] Halpern, J., and Moses, Y., "Knowledge and Common Knowledge in a Distributed Environment", JACM, vol. 37, pp. 549-578.
[HZ] Halpern, J., and Zuck, L., "A little knowledge goes a long way: simple knowledge-based derivations and correctness proofs for a family of protocols", Proc. 6th ACM Symp. on Principles of Distributed Computing, 1987, pp. 269-280.
[Lam] Lamport, L., "Time, Clocks and the Ordering of Events in a Distributed System", CACM, vol. 21, #7, July 1978, pp. 558-565.
[LRT] Lodaya, K., Ramanujam, R., and Thiagarajan, P. S., "Temporal Logics for Communicating Sequential Agents: I", Intl. Jnl. on Found. of Comp. Sci., vol. 3, #2, 1992, pp. 117-159.
[Ma] Mazer, M., "A link between knowledge and communication in faulty distributed systems (Preliminary Report)", TARK III, 1990, pp. 289-304.
[Maz] Mazurkiewicz, A., "Basic notions of trace theory", LNCS 354 (1989), pp. 285-363.
[NPW] Nielsen, M., Plotkin, G., and Winskel, G., "Petri Nets, Event Structures and Domains, Part I", Theoretical Computer Science, vol. 13, #1, 1980, pp. 86-108.
[PK] Parikh, R., and Krasucki, P., "Levels of knowledge in distributed systems", Sadhana, vol. 17, Part 1, March 1992, pp. 167-191.
[PR] Parikh, R., and Ramanujam, R., "Distributed Processes and the Logic of Knowledge", Logics of Programs, Springer LNCS 193, pp. 256-268.
[Pra] Pratt, V., "The duality of time and information", Proceedings of CONCUR 92, Springer LNCS 630, pp. 237-253.
[Zie] Zielonka, W., "Notes on finite asynchronous automata", RAIRO Inf. Theor. et Appl., vol. 21 (1987), pp. 99-135.