Optimal Snap-Stabilizing PIF Algorithms in Un-Oriented Trees∗

Alain Cournier,1 Ajoy K. Datta,2 Franck Petit,1 Vincent Villain1

1 LaRIA, CNRS FRE 2733, Université de Picardie Jules Verne, France.
2 Department of Computer Science, University of Nevada, Las Vegas.
Abstract: A snap-stabilizing protocol, starting from any arbitrary initial system configuration, always behaves according to its specification. In other words, a snap-stabilizing protocol is a self-stabilizing protocol which stabilizes in zero steps. In this paper, we first establish the minimum number of states required on processors to design a snap-stabilizing Propagation of Information with Feedback (PIF) algorithm in arbitrary un-oriented trees running under any distributed daemon (four states for the internal processors and two states for the root and the leaf processors). Then, we propose two snap-stabilizing PIF algorithms for un-oriented trees. The former works under any (fair or unfair, central or distributed) daemon, and matches the lower bound in terms of number of states established in this paper. The latter works under any (fair or unfair) central daemon. It uses only three states for the internal processors (two states for the root and the leaves), and is optimal in terms of number of states assuming a central daemon. Thus, both algorithms are optimal both in terms of the stabilization time (zero steps) and the state requirement per processor.

Keywords: Distributed systems, fault-tolerance, PIF, self-stabilization, snap-stabilization, wave algorithms.
1
Introduction
The wave scheme is a fundamental and widely used approach in distributed computing [Cha82, Seg83]. The concept of wave can be used to solve various versions of the following two problems: the token circulation (TC) problem (also called the Token Traversal problem), and the propagation of information with feedback (PIF) problem. The solutions to these basic problems (TC and PIF) can then be used as the basis for the solution to a wide class of problems in distributed computing, e.g., mutual exclusion, spanning tree construction, distributed infimum function computation, termination detection, and synchronization. So, designing efficient fault-tolerant wave algorithms is an important task in distributed computing research.

Self-stabilization [Dij74] is the most general technique to design a system that tolerates arbitrary transient faults. A self-stabilizing system, regardless of the initial states of the processors and the initial messages in the links, is guaranteed to converge to the intended behavior in finite time. Snap-stabilization was introduced in [BDPV99c]. A snap-stabilizing algorithm guarantees that it always behaves according to its specification. In other words, a snap-stabilizing algorithm is a self-stabilizing algorithm which stabilizes in zero steps. Obviously, a snap-stabilizing protocol is optimal in stabilization time.∗ This notion of zero stabilization time is a surprising result in the area of self-stabilization. It is important to note that this new paradigm does not guarantee that all components of the system always work as expected (in a non-faulty environment), but it ensures that if an execution of a protocol is initiated by some processor, the protocol behaves as expected. Consider the problem of mutual exclusion. Starting from an arbitrary configuration, a snap-stabilizing protocol does not guarantee that in this configuration, several processors cannot be in the critical section. But it guarantees that for any new request, the requesting processor will not enter the critical section before the above problem is solved. Self-stabilizing (but not snap-stabilizing) protocols cannot provide this guarantee.

∗ A preliminary version of this work appeared in [CDPV01a].

Related Work. On linear chain networks, self-stabilizing PIF algorithms can easily be deduced from self-stabilizing TC algorithms [BGW89, Gho93, GH96, Vil99]. Self-stabilizing PIF algorithms for tree and arbitrary networks have been proposed in [BDPV99b, CDPV01b, DIM97, KMM02, Var93]. Self-stabilizing PIF protocols have also been used in the area of synchronizers [ABDT98, AKM+93, AV91] and in the design of reset protocols [AKY90, AG94, APSV91]. Snap-stabilizing PIF protocols were proposed in [BDPV99c, BDPV99a] for tree networks and in [CDPV02] for general graphs. Other than the algorithms in [Gho93, Vil99], all TC algorithms for linear chains work on oriented chains. A chain is said to be oriented if every processor (with two neighbors) can distinguish between its left (connected to its left neighbor) and right (connected to its right neighbor) links. Moreover, the notion of left and right must be consistent, i.e., the left (right) of every processor must lead to the same end of the chain. Consistency is not required in un-oriented chains.
It is easier to build un-oriented chains because the processors can be plugged in freely, with no restriction to follow a particular orientation (such as a left or right connection port). The concept of orientation is also applicable to trees. In an oriented tree, every processor knows which one of its links leads to a particular processor called the root. In an un-oriented tree, no processor knows which one of its links leads to the root (if such a root exists).

PIF protocols can be used to compute a global task, or to maintain a global structure or a global property of a system. Any processor can initiate a global computation. PIF algorithms designed for un-oriented rooted trees [BDPV99a, BDPV99b] need to maintain only one spanning tree of the network (instead of one per processor in the case of oriented trees), regardless of the type of global computation and the number of initiators of the computation. To deal with concurrent executions of the same global task, the identity of the associated network component (e.g., the resource or the root identity) must be added to the messages exchanged in the network, and must also be maintained by every processor. So, with an oriented tree, each processor in the network has to maintain n spanning trees rooted at the n distinct processors of the network (n being the number of processors in the network). Thus, for every spanning tree, every processor needs to maintain a subset of its neighbors as descendants, and a parent pointer. So, the total memory requirement of each processor p is O((2^∆p)^n) states (or O(∆p × n) bits), where ∆p is the degree of the processor p. On the other hand, to design a PIF algorithm for the above situation, i.e., to deal with multiple initiators, if we use an un-oriented tree, every processor needs to maintain only one spanning tree of the network (instead of n spanning trees). So, every processor p needs a space of 2^∆p states, or ∆p bits, in order to maintain the spanning tree.
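The memory comparison in the preceding paragraph can be made concrete with a few lines of arithmetic. The sketch below (function names are ours, purely illustrative) contrasts the O(∆p × n) bits needed to maintain n spanning trees on an oriented tree with the ∆p bits needed for the single spanning tree of the un-oriented case:

```python
# Back-of-the-envelope sketch (function names are ours) of the per-processor
# memory bounds discussed above, as a function of the degree delta_p and the
# network size n.

def bits_oriented(delta_p: int, n: int) -> int:
    """Oriented tree, multiple initiators: n spanning trees, each needing a
    delta_p-bit subset of neighbours marked as descendants."""
    return delta_p * n          # i.e. (2**delta_p)**n states

def bits_unoriented(delta_p: int) -> int:
    """Un-oriented tree: a single spanning tree suffices."""
    return delta_p              # i.e. 2**delta_p states

# Example: a processor of degree 3 in a 100-processor network.
print(bits_oriented(3, 100))    # 300
print(bits_unoriented(3))       # 3
```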
The solution for un-oriented chains of [Gho93] uses four-state processors. A self-stabilizing TC algorithm for un-oriented chains with three-state processors was proposed in [Vil99]. The stabilization time of the solution in [Gho93] is O(n²) rounds, whereas that of the solution in [Vil99] is O(n) rounds. Villain also proved that the algorithm of [Vil99] is optimal in terms of the number of states per processor. Except for [BDPV99b, BDPV99a], all self-stabilizing PIF algorithms for (spanning) trees in the current literature require the tree to be oriented. The PIF algorithm proposed in [BDPV99b] is optimal (as proven in [BDPV99c]) in space requirement (three states per internal processor and only two states for the root and the leaves), and stabilizes in O(h²) rounds, where h is the height of the tree. In [BDPV99c], a state-optimal and snap-stabilizing PIF algorithm is presented. However, the algorithm of [BDPV99c] requires the tree to be oriented. The PIF algorithm proposed in [BDPV99a] is also snap-stabilizing and works on un-oriented trees. But each internal processor p requires ∆p + 2 states, where ∆p is the degree of p.

Contributions. In this paper, we first show that the minimum number of states required per processor to solve the snap-stabilizing PIF problem in arbitrary un-oriented trees under any distributed daemon is four states for the internal processors, and two states for the root and the leaf processors. Next, we propose two snap-stabilizing PIF algorithms for un-oriented trees. The former assumes an unfair distributed daemon, so it works under any (fair or unfair, central or distributed) daemon. It matches the lower bound in terms of number of states established in this paper. The latter works under any (even unfair) central daemon. It uses only three states for the internal processors (two states for the root and the leaves). It is optimal in terms of number of states assuming a central daemon [BDPV99c]. Therefore, both algorithms presented in this paper are truly optimal, both in terms of the stabilization time (0 rounds) and the space requirement.

Outline of the paper.
In the next section (Section 2), we describe the distributed systems and the model we consider in this paper. We also present the notion of snap-stabilization there, followed by the specification of the PIF scheme. The lower bound on the number of states per processor is shown in Section 3. A space-optimal snap-stabilizing PIF algorithm under an unfair distributed daemon is presented in Section 4. The proofs of snap-stabilization, state optimality, and the maximum time the root must wait to broadcast its message in the tree network are presented in the same section. The space-optimal snap-stabilizing PIF algorithm working under an unfair central daemon is presented in Section 5. We discuss in Section 6 the trade-off between the state requirement and the (maximum) delay required for the root to broadcast its message. Finally, we make some concluding remarks in Section 7.
2
Preliminaries
In this section, we define the distributed systems and programs considered in this paper, and state what it means for a protocol to be snap-stabilizing. We then present the statement of the problems considered in this paper.
2.1
Distributed System
System. A distributed system is an undirected connected graph, S = (V, E), where V is a set of nodes (|V| = n) and E is the set of edges. Nodes represent processors, and edges represent bidirectional communication links. A communication link (p, q) exists iff p and q are neighbors. Every processor p associates a label lp to each of its incident communication links (p, q). The labels
are stored in the set Np. Each processor can distinguish among its incident links, i.e., ∀ lp, kp ∈ Np, where lp and kp are the labels assigned by p to the links (p, q) and (p, r), respectively, lp = kp iff q = r. The degree of p, denoted by ∆p, is the number of labels in Np.

Programs. We consider semi-uniform protocols, i.e., every processor with the same degree executes the same program, except one processor called the root. The program consists of a set of locally shared variables (henceforth referred to as variables) and a finite set of actions. A processor can only write to its own variables, and read its own variables and the variables owned by the neighboring processors. Each action is of the following form: < label > :: < guard > −→ < statement >. The guard of an action in the program of p is a boolean expression involving the variables of p and its neighbors. The statement of an action of p updates one or more variables of p. When p executes a statement, we say that "p moves" or "p executes an action". An action can be executed only if its guard evaluates to true. We assume that actions are executed atomically, meaning that the evaluation of a guard and the execution of the corresponding statement of an action, if executed, are done in one atomic step. The state of a processor is defined by the values of its variables. The state of a system is the product of the states of all processors (∈ V). In the sequel, we refer to the state of a processor and of the system as a (local) state and a configuration, respectively. Let a distributed protocol P be a collection of binary transition relations, denoted by ↦, on C, the set of all possible configurations of the system. A computation of a protocol P is a maximal sequence of configurations e = γ0, γ1, ..., γi, γi+1, ..., such that for i ≥ 0, γi ↦ γi+1 (a single computation step) if γi+1 exists, or γi is a terminal configuration.
Maximality means that the sequence is either infinite, or it is finite and no action of P is enabled in the final configuration. All computations considered in this paper are assumed to be maximal. The set of all possible computations of P in system S is denoted by E. A processor p is said to be enabled in γ (γ ∈ C) if there exists an action A such that the guard of A is true at p in γ. When there is no ambiguity, we will omit γ. Similarly, an action A is said to be enabled (in γ) at p if the guard of A is true at p (in γ). We refer to the two following types of daemons in this paper: Central Daemon: in every computation step, if one or more processors are enabled, then the daemon chooses exactly one of these enabled processors to execute an action; Distributed Daemon: in every computation step, if one or more processors are enabled, then the daemon chooses a nonempty subset of those enabled processors to execute an action. In both cases, no further assumption is made on the daemon. Such daemons are called unfair. In particular, even if a processor p is continuously enabled, p may never be chosen by the daemon unless p is the only enabled processor. In order to compute the time complexity measure, we use the metric called round [DIM97]. This definition captures the execution rate of the slowest processor in any computation. Given a computation e (e ∈ E), the first round of e (let us call it e′) is the minimal prefix of e containing one (local) atomic step of every processor continuously enabled from the first configuration. Let e′′ be the suffix of e such that e = e′e′′. Then the second round of e is the first round of e′′, and so on.
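The computation model just described can be illustrated with a small simulator. The following sketch is our own toy illustration, not an algorithm from this paper: processors run guarded actions, and in each step the daemon selects either exactly one enabled processor (central) or an arbitrary nonempty subset of them (distributed); guards are evaluated on the old configuration, so a whole step is atomic.

```python
import random

# A toy illustration (ours, not an algorithm from this paper) of the model:
# processors run guarded actions; in each step, the daemon picks one enabled
# processor (central) or a nonempty subset of them (distributed).

def step(cfg, actions, daemon="central"):
    def enabled(p):
        return any(guard(cfg, p) for guard, _ in actions)
    en = [p for p in range(len(cfg)) if enabled(p)]
    if not en:
        return cfg, False                      # terminal configuration
    if daemon == "central":
        chosen = [random.choice(en)]
    else:                                      # distributed daemon
        chosen = random.sample(en, random.randint(1, len(en)))
    nxt = list(cfg)
    for p in chosen:
        for guard, stmt in actions:
            if guard(cfg, p):                  # guards read the OLD
                nxt[p] = stmt(cfg, p)          # configuration: atomic step
                break
    return nxt, True

# Toy program on a chain: a token 'T' travels left to right. A processor
# copies the token from its left neighbour; a processor retires ('D') once
# its right neighbour holds the token too.
actions = [
    (lambda c, p: c[p] == "W" and p > 0 and c[p - 1] == "T",
     lambda c, p: "T"),
    (lambda c, p: c[p] == "T" and p + 1 < len(c) and c[p + 1] == "T",
     lambda c, p: "D"),
]

cfg, progress = ["T", "W", "W", "W"], True
while progress:
    cfg, progress = step(cfg, actions, daemon="distributed")
print(cfg[-1])  # T -- the token always reaches the right end
```

Note that the toy program is written so that it behaves correctly under both daemons: the hand-over is a single action at the receiver, so two simultaneously chosen processors cannot lose the token.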
2.2
Snap-stabilization.
Let X be a set. x ⊢ P means that an element x ∈ X satisfies the predicate P defined on the set X .
Definition 2.1 (Snap-stabilization) Let T be a task, and SPT the specification of T . The protocol P is snap-stabilizing for the specification SPT on E if and only if the following condition holds: ∀e ∈ E :: e ⊢ SPT .
2.3
Specifications of the Problems to Be Solved.
Specification 2.1 (PIF Cycle) A finite computation e = γ0, . . . , γi, γi+1, . . . , γt ∈ E is called a PIF Cycle if and only if the following condition is true: If the root processor broadcasts a message m in the computation step γ0 ↦ γ1, then:

[PIF1] For each p ≠ root, there exists a unique i ∈ [1, t − 1] such that p receives m in γi ↦ γi+1, and

[PIF2] In γt, the root receives an acknowledgment of the receipt of m from every processor p ≠ root.

Remark 2.1 To prove that an algorithm is a snap-stabilizing PIF algorithm, we have to show that every execution of the algorithm satisfies the following two conditions: 1. if the root has a message m to broadcast, then it will do so in finite time, and 2. starting from any configuration where the root is ready to broadcast, the system satisfies Specification 2.1.
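Specification 2.1 can be read operationally as a predicate on execution traces. The checker below uses an event encoding of our own (hypothetical "receive" and "feedback" events, not part of the paper) to test [PIF1] (every non-root processor receives m exactly once) and [PIF2] (the root is acknowledged by every other processor):

```python
# A small checker (our own encoding, for illustration) for Specification 2.1.
# A trace is a list of (event, processor) pairs produced by one PIF cycle.

def is_pif_cycle(trace, processors, root):
    receipts = {p: 0 for p in processors if p != root}
    acked = set()
    for event, p in trace:
        if event == "receive":          # p receives m in some step
            receipts[p] += 1
        elif event == "feedback":       # root gets p's acknowledgment
            acked.add(p)
    pif1 = all(count == 1 for count in receipts.values())
    pif2 = acked == set(receipts)       # every non-root processor acked
    return pif1 and pif2

procs = ["r", "p1", "p2", "p3"]
good = [("receive", "p1"), ("receive", "p2"), ("receive", "p3"),
        ("feedback", "p3"), ("feedback", "p2"), ("feedback", "p1")]
bad = [("receive", "p1"), ("receive", "p1"), ("feedback", "p1")]
print(is_pif_cycle(good, procs, "r"))  # True
print(is_pif_cycle(bad, procs, "r"))   # False: p1 received m twice
```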
3
Lower Bound of the State Requirement Under a Distributed Daemon
In this section, we study the minimal number of states required by the processors to design snap-stabilizing PIF algorithms for arbitrary un-oriented trees under any distributed daemon. We start by borrowing the following result from [BDPV99c].

Theorem 3.1 The minimal state requirement to design any PIF algorithm on any arbitrary (oriented or not) tree network is (i) two states for the root, (ii) two states for the leaf processors, and (iii) three states for the internal processors.

We now show that three states for the internal processors are not enough to make a PIF cycle snap-stabilizing on an arbitrary un-oriented rooted tree if the daemon is distributed. We prove this result by contradiction, i.e., we assume that there exists a snap-stabilizing PIF algorithm working under a distributed daemon and requiring only three states for the internal processors. Obviously, if such an algorithm exists for any un-oriented tree, it works on any un-oriented (linear) chain. The major part of this section is to show that even on such a simple tree, no such algorithm exists. Thus, we arrive at the contradiction in the general case.

We consider an un-oriented chain. As in [BDPV99c], the chain is assumed to have at least four processors (it is easy to verify that the 3-state algorithm proposed in Section 5 for the central daemon also works for un-oriented trees of diameter two under the distributed daemon). Without loss of generality, we assume that the links of internal processors are labeled L and R. Since the chain is un-oriented, the labeling is not necessarily consistent, i.e., the labels L (or R) of all processors may not lead to the same end of the chain. Again without loss of generality, we only consider that side of the chain which extends from the root to one of the two possible leaves
(the so-called "right leaf", or simply the "leaf"), and the root is the left extremity of this segment of the chain. So, the internal processors we consider are only the processors between the root and the right leaf. Henceforth, Processor 0 refers to the root, Processor 1 to its right neighbor, Processor 2 to the right neighbor of Processor 1, and so on.

Let p be an internal processor, q and q′ the neighbors of p, and Si the state of Processor i. Since the labeling may not be consistent, no internal processor p can decide which one of its neighbors q and q′ is on the path from p to the root. So, without considering any ordering between q and q′, an action of p can be written as follows:

Label :: PrX(Sp) ∧ PrY(Sq) ∧ PrZ(Sq′) → Sp := a

where PrX, PrY, and PrZ are predicates defined on Sp, Sq, and Sq′, respectively, and a is one of the possible state values of Sp. For example:

Label :: Sp = 0 ∧ Sq = 1 ∧ Sq′ = 0 → Sp := 1

We consider the direction of the message flow from the root to the leaf (in the PIF Cycle) as the broadcast phase.

Remark 3.1 There must be at least one action in the internal processors to implement the broadcast phase. We call this the B-Action. The B-Action can be written as follows:

B-Action :: PrWait(Sp) ∧ PrSucc(Sq) ∧ PrWait(Sq′) → Sp := B

PrWait(Sp) means that p is waiting for the message M to be broadcast (in the PIF Cycle). PrSucc(Sq) means that the successor of q to receive M is p. The effect of the B-Action is the following: Message M moves from q to p. Next, p uses M, if necessary. Finally, Sp becomes B, indicating that p is now ready to broadcast M to its successor. Note that M must pass through p before reaching q′. So, both p and q′ must be in a waiting configuration to avoid the merging of two broadcast movements. PrSucc(Sp) would hold for q′ after the execution of the B-Action by p. So, M would move to q′.
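Because p cannot order its neighbors q and q′, a guard of the above form is enabled whenever some assignment of the two (unordered) neighbor states to q and q′ satisfies it. A minimal sketch (ours, for illustration) of this evaluation:

```python
# Because an internal processor cannot tell which neighbour lies toward the
# root, a guard PrX(Sp) ∧ PrY(Sq) ∧ PrZ(Sq') must hold for SOME assignment
# of the two (unordered) neighbour states to q and q'.

def guard_enabled(s_p, neighbours, pr_x, pr_y, pr_z):
    a, b = neighbours
    return pr_x(s_p) and ((pr_y(a) and pr_z(b)) or (pr_y(b) and pr_z(a)))

# The example action  Sp = 0 ∧ Sq = 1 ∧ Sq' = 0 → Sp := 1  is enabled for
# the local configurations 1-0-0 and 0-0-1 alike:
eq = lambda v: (lambda s: s == v)
print(guard_enabled(0, (1, 0), eq(0), eq(1), eq(0)))  # True
print(guard_enabled(0, (0, 1), eq(0), eq(1), eq(0)))  # True
print(guard_enabled(0, (1, 1), eq(0), eq(1), eq(0)))  # False
```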
So, the state value B can be viewed as keeping a trace of the broadcast movement (called the broadcast trace) of M in the PIF Cycle. A possible B-Action could be the following:

B-Action :: Sp = W ∧ Sq = B ∧ Sq′ = W → Sp := B

The above guard indicates the fact that p is in W, one of its neighbors is in B, and the other neighbor is in W. Since the chain is not oriented, the local configuration centered at p corresponding to the above guard could be written as BWW or WWB.

Remark 3.2 On a chain, the acknowledgment phase, called the feedback phase, is similar to the broadcast phase except that it proceeds from the right to the left of the chain. It must be implemented by using at least one action in the internal processors. We call this action the F-Action.

We now classify the B values (i.e., the broadcast traces) into distinguishable classes. The classes we consider are related to the direction of the root. If a B value cannot indicate to any internal processor i that the broadcast is coming from either i − 1 or i + 1, then B belongs to the class B̄. Otherwise, B belongs to either the class ←B or the class →B, depending on the direction of the broadcast trace, from the left to the right or from the right to the left of the chain, respectively. The broadcast traces that belong to the same class are said to be B-equivalent. Examples showing the different cases of classes are given in Figure 3.1; the arrows under the processors indicate the local perception of the broadcast direction. In Figure 3.1, Case (a), bi and bi+1 belong to the same class B̄ because neither i nor i + 1 can establish the direction of the broadcast. In Case (b), bL and bR on Processors i and i + 1, respectively, indicate the same broadcast trace coming from the left of the chain. So, both bi and bi+1 belong to the same class ←B. In Case (c), bL on both i and i + 1 indicates broadcast traces belonging to ←B and →B, respectively.

Figure 3.1: Examples of broadcast traces. (a) bi and bi+1 belong to B̄. (b) bi and bi+1 belong to ←B. (c) bi ∈ ←B and bi+1 ∈ →B.

Remark 3.3 A processor can only detect a local orientation of the broadcast phase, and not the global orientation. So, an internal processor cannot locally determine which class its state belongs to. For instance, in both Cases (b) and (c), neither i nor i + 1 can (locally) determine if their states belong to ←B or →B.

Let us denote the class corresponding to the value B on a Processor i by B̄i. Consider Figure 3.1 again. In Case (b), Si ≠ Si+1 (bL ≠ bR), but B̄i = B̄i+1 = ←B. In Case (c), B̄i = ←B and B̄i+1 = →B. So, Si = Si+1, but B̄i ≠ B̄i+1. To maintain a consistent set of notations, we assume that each value X which is not a broadcast trace belongs to a distinct class X̄ = {X}.

We denote by 3-PIF a PIF algorithm (not necessarily snap-stabilizing) with three states for the internal processors.

Lemma 3.1 Let A be a 3-PIF algorithm running on an un-oriented chain. Then A cannot include a B-Action of the following form:

B-Action :: PrWait(Sp) ∧ PrSucc(Sq) ∧ PrWait(Sq′) → Sp := B

such that the state values in the guard satisfy the condition Sp ≠ Sq ≠ Sq′ ≠ Sp.

Proof. We prove this lemma for any non-stabilizing A running on an un-oriented chain. Then, the result must hold for stabilizing algorithms as well. Assume that A has a B-Action where the states of the three processors are all distinct.
An initial configuration of any semi-uniform PIF algorithm can be defined as follows: the only enabled processor is the root (Processor 0), and since the algorithm is semi-uniform, all other processors have the same initial state. Assume that the initial configuration is Ib Im^(n−2) It, where Ib, Im, and It are the initial states of the root processor, the internal processors, and the leaf processor, respectively. The second configuration is Bb Im^(n−2) It. In order to apply the B-Action, there must exist an action (let us call it an X-Action) such that Processor 1 is enabled for this X-Action and, after executing the X-Action, the next configuration becomes Bb Xm Im^(n−3) It, where Bb, Xm, and Im are the required states to enable a B-Action. Now, Processor 1 is enabled, and the next configuration can be Bb Bm Im^(n−3) It. Similarly, after Processor 2 executes an X-Action, the next configuration becomes Bb Bm Xm Im^(n−4) It. We can write the X-Action as follows:

X-Action :: Sp = I ∧ Sq = B ∧ Sq′ = I → Sp := X

By induction, it is clear that we can reach the following configuration: Bb Bm^(n−2) It. The feedback phase being similar to the broadcast phase (Remark 3.2), A includes one of the following two actions:
F1-Action :: Sp = B ∧ Sq = B ∧ Sq′ = Y → Sp := Y
or
F2-Action :: Sp = Z ∧ Sq = B ∧ Sq′ = Y → Sp := Y

Assume that A includes the F1-Action. It is obvious that Y ∉ {B, I}. So, Y = X. We now consider the problem of the PIF Cycle termination. We can observe that, while the state of the root processor is B, the state of its neighbor takes the three possible values of an internal processor. So, the root cannot detect the termination while its state value is B. Assume that the root changes its state value to T while the state of its neighbor is B. Consider the following execution: I^n → B I^(n−1) → B X I^(n−2) → B B I^(n−2) → B B X I^(n−3) → T B X I^(n−3). After the message has visited every processor, we can also obtain the following last configuration: T B X X^(n−3). In both cases, the local state of Processor 1 remains the same (T B X). So, in both cases, either the root can decide the termination after Processor 1 changes its state to X, or it cannot. In the first case, the root can decide the termination even if the PIF Cycle is not complete. In the second case, the root never decides, even if the PIF Cycle is complete. If we consider that A includes the F2-Action, we arrive at the same situation following a similar reasoning. □

We now consider initial configurations. If the initial configuration contains a trace of a broadcast phase not initiated by the root processor, then the broadcast phase is an abnormal broadcast phase and this initial configuration is an abnormal configuration. This broadcast phase has two ends, and both of the end processors can be internal processors. On an oriented chain, any processor can distinguish the left (oriented towards the root) from the right (oriented towards the leaf). So, even if there is only one class of broadcast traces B̄, it is easy to detect the tail of the broadcast phase (the first processor at the left end of the trace) and the head (the last processor at the right end of the trace).
An example of such an abnormal initial configuration is shown in Figure 3.2. Since the chain considered here is oriented, it is easy to (locally) detect on Processor 5 and on Processor 6 (the tail of the abnormal broadcast phase) the existence of the abnormal broadcast phase: Processor 5 has its right neighbor in State B while it is itself waiting for the message broadcast by the root from its left. Also, Processor 6 can locally detect that it is in State B while its left neighbor is in State W (while it should be in State B). This configuration can be corrected as follows: Processor 5 does not broadcast the message coming from the right to its left neighbor (Processor 4), while Processor 6 corrects its state to W.

Figure 3.2: An Example With An Oriented Chain.

On an un-oriented chain, both end processors could be the tail or the head. So, in this case of a unique class B̄, a processor cannot decide if it is the tail or the head of a broadcast phase. Figure 3.3 shows a configuration similar to Figure 3.2 on an un-oriented chain, but with classes ←B and →B. Processor 6 can (locally) detect that it is a tail of the abnormal broadcast phase because the neighbor from which the broadcast trace is supposed to come (Processor 5) is in state W.
Figure 3.3: An Example With An Un-oriented Chain.

In Figure 3.4, the directions of the broadcast in Processors 6 and 7 are opposite to those in the same processors in Figure 3.3. In this configuration, the local configurations of Processors 4 and 5 are similar. So, neither of them can decide whether the root is on Processor 3's side or on Processor 6's side.
Figure 3.4: An Example Illustrating Remark 3.3.

Lemma 3.2 Let A be a PIF algorithm running under a distributed daemon on an un-oriented chain. If A contains broadcast traces in ←B (→B, resp.), then A contains broadcast traces in →B (←B, resp.).

Proof. Consider two neighboring internal processors i and i + 1 locally oriented in opposite directions. (For instance, that is the case for the following pairs of processors in Figure 3.4: (1, 2), (2, 3), (4, 5), and (6, 7).) Without loss of generality, assume that bL is a possible value of Si and that it belongs to ←B on i. Since all internal processors have the same variables (A is a semi-uniform algorithm), bL is also a possible value of Si+1, where, the local orientation of i + 1 being opposite, it belongs to →B. Thus, →B is not empty. □

Lemma 3.3 Let A be a 3-PIF running under a distributed daemon on an un-oriented chain. If A is snap-stabilizing, then A contains no broadcast traces in B̄.
Proof. Assume, by contradiction, that A contains broadcast traces in B̄.

1. Assume that A includes a B-Action such that Sp = X ∉ B̄, Sq ∈ B̄, and Sq′ ∈ B̄. Let e be an execution such that the root processor initiates the broadcast phase, so that the second configuration of e is BXB . . . In this configuration, Processor 2 has its B-Action enabled. The next configuration is then BBB . . . Now, no Processor i (3 ≤ i ≤ n) will ever receive the message sent by the root during this PIF Cycle. Hence, [PIF1] is violated.

2. Assume that A has a B-Action such that Sp = X, Sq ∈ B̄, Sq′ = X, and X ∉ B̄. Let e be an execution such that the root processor initiates a broadcast phase, so that the second configuration of e is BXXB . . . Since Processor 4 cannot decide if it is the head or the tail of the broadcast phase, Processors 2 and 3 must have their B-Actions enabled. Using a distributed daemon, the next configuration could be BBBB . . ., where no Processor i (3 ≤ i ≤ n) will ever be able to receive the message sent by the root during this PIF Cycle. Hence, this violates [PIF1]. Note that Processor 4 cannot be the leaf processor. So, the above reasoning holds for chains such that n ≥ 5, where n is the number of processors of the chain. This is due to the fact that the leaf processor has no successor, so it does not require a broadcast trace. But it does need a feedback trace to initiate the feedback phase. However, the reasoning also holds for n = 4. In that case, we consider XXB . . . such that the left processor is the root. The behaviors of the two left processors are similar to the behaviors of Processors 2 and 3 described above.

The above two results imply that A must include at least a B-Action such that Sp = X, Sq ∈ B̄, Sq′ = Y, where X ∉ B̄, Y ∉ B̄, and X ≠ Y. By Lemma 3.1, A cannot be a PIF algorithm. □

Theorem 3.2 There exists no snap-stabilizing 3-PIF running under a distributed daemon on an arbitrary un-oriented chain.

Proof. By Lemma 3.3, if there exists a snap-stabilizing 3-PIF A, then A contains no broadcast traces in B̄.
By Lemmas 3.1 and 3.2, A may contain the following two B-Actions: ←
B1 -Action :: P rW ait (Sp ) ∧ P rSucc (Sq ) ∧ P rW ait (Sq′ ) → Sp := b ∈ B ←
where Sp = X, Sq = B ∈ B , and Sq′ = X →
B2 -Action :: P rW ait (Sp ) ∧ P rSucc (Sq ) ∧ P rW ait (Sq′ ) → Sp := b′ ∈ B →
where Sp = X, Sq ∈ B , and Sq′ = X. Note the following: ←
→
1. X ∈ / B ∪ B and, since the internal processors have only three states, X is the same in both B1 -Action and B2 -Action. 2. By Remark 3.3, the network being un-oriented, B1 -Action and B2 -Action can only be written as a single action because no internal processor can locally distinguish its right from its left neighbor. We show this action as two actions only to simplify the presentation. 3. By Remark 3.3, B1 -Action and B2 -Action are enabled iff q is the left neighbor of p and q is the right neighbor of p, respectively.
10
Let e be an execution such that the root processor initiates a broadcast phase so that the second configuration of e is ←B X X →B . . . B1-Action and B2-Action are enabled at Processors 2 and 3, respectively. Using a distributed daemon, the next configuration could be ←B ←B →B →B . . . In this configuration, neither Processor 2 nor Processor 3 can locally detect that it is the head of the normal or of the abnormal broadcast phase, respectively. The only way to ensure the correct behavior of the normal broadcast phase is that the tail of the abnormal broadcast phase takes this responsibility, i.e., initiates the correction of the trace. (That is always possible thanks to the local orientation.) Let Y be the value used to correct the trace. Obviously, Y ∉ ←B ∪ →B and Y ≠ X. Hence, the internal processors must have at least four states, and the algorithm cannot be A. □

The following follows from Theorem 3.2:

Theorem 3.3 There exists no snap-stabilizing 3-PIF algorithm running under a distributed daemon on any un-oriented rooted tree.
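The pivotal step of the proof of Theorem 3.2 — two symmetric B-Actions firing in the same step under a distributed daemon — can be sketched in a few lines of Python. The chain encoding, the simplified guard, and the `view` helper below are ours, purely for illustration; the hypothetical algorithm A has no concrete implementation.

```python
# Illustrative sketch (not the paper's algorithm): the key step of the proof of
# Theorem 3.2 on a 4-processor chain. Encoding and names are ours.

def b_action_enabled(config, i):
    """Hypothetical B-Action of internal processor i: it waits in state X and
    at least one of its two neighbors carries a broadcast state B."""
    return config[i] == "X" and "B" in (config[i - 1], config[i + 1])

config = list("BXXB")  # second configuration of the execution e
enabled = [i for i in (1, 2) if b_action_enabled(config, i)]

# A distributed daemon may activate every enabled processor in the same step.
for i in enabled:
    config[i] = "B"

def view(i):
    """Local view of internal processor i: (left neighbor, own state, right)."""
    return (config[i - 1], config[i], config[i + 1])
```

Both internal processors end up with the identical local view (B, B, B), so neither can tell whether it belongs to the broadcast initiated by the root or to the abnormal one — exactly the ambiguity that lets [PIF1] be violated.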
4 Optimal Snap-Stabilizing PIF Algorithm Under a Distributed Daemon
In this section, we first present the snap-stabilizing PIF algorithm for un-oriented trees assuming an (unfair) distributed daemon. We then prove its snap-stabilization and state optimality, and analyze the delay to initiate a PIF cycle.
4.1 Algorithm
The four-state snap-stabilizing PIF on trees is presented in Algorithm 1. In the rest of this section, we denote the set of leaf processors by L and the set of internal processors by I. Every processor p maintains a variable Sp, called the state variable of p. The root and the leaf processors use only two states: the root uses B and C, and the leaf processors use F and C. Each internal processor has four different state values: B, F, R, and C.

States B and F denote the broadcast phase and the feedback phase, respectively. For any processor p, Sp = C means that p is in the initial (also called cleaning) state; this is the state a processor is in before it participates in the PIF cycle. State R denotes the ready phase. The ready phase precedes the broadcast phase: it changes the state of a processor from C to R (see R-action), meaning that the processor is ready to participate in the broadcast phase. This prevents the broadcast phase from meeting another broadcast phase which may exist due to some abnormal initial configuration; so, the ready phase avoids the disruption of the broadcast phase initiated by the root. A processor then changes its state from R to B to execute its broadcast phase (B-action). The feedback phase changes Sp from B to F (F-action). After initiating the feedback phase, in the next step, the leaf processors can initiate the cleaning phase by changing their state from F to C (C-action). Thus, the feedback and the cleaning phases can run concurrently. All we have to ensure is that the cleaning phase does not meet the broadcast phase, i.e., that the processors in the cleaning phase do not confuse the processors in the broadcast phase. We implement this constraint as follows: an internal processor can execute its C-action (i.e., change its state from F to C) only if none of its neighbors is still in the broadcast phase (i.e., no neighbor is in state B).
Thus, as soon as the ancestor of a leaf processor executes its F-action (i.e., changes its state from B to F), the leaf processor can execute its C-action.
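To make the cycle described above concrete, here is a small, hypothetical Python simulation of Algorithm 1 on the three-processor chain root–i–leaf, serialized one move at a time (a central schedule, which is one legal behavior of a distributed daemon). The processor names and the scheduling policy are ours.

```python
# A hypothetical simulation of Algorithm 1 on the chain root -- i -- leaf;
# processor names and the serialized scheduler are illustrative choices.

NEIGHBORS = {"root": ["i"], "i": ["root", "leaf"], "leaf": ["i"]}
ROLE = {"root": "root", "i": "internal", "leaf": "leaf"}
TARGET = {"R-action": "R", "B-action": "B", "F-action": "F",
          "C-action": "C", "Correction": "C"}

def enabled_actions(S, p):
    """Actions of Algorithm 1 enabled at processor p in configuration S."""
    qs = NEIGHBORS[p]
    acts = []
    if ROLE[p] == "root":
        if S[p] == "C" and all(S[q] == "C" for q in qs):
            acts.append("B-action")
        if S[p] == "B" and all(S[q] == "F" for q in qs):
            acts.append("C-action")
    elif ROLE[p] == "leaf":
        q = qs[0]
        if S[p] == "C" and S[q] == "B":
            acts.append("F-action")
        if S[p] == "F" and S[q] != "B":
            acts.append("C-action")
    else:  # internal processor
        def exists_B_rest(allowed):
            return any(S[q] == "B" and all(S[r] in allowed for r in qs if r != q)
                       for q in qs)
        if S[p] == "C" and exists_B_rest({"F", "C"}):
            acts.append("R-action")
        if S[p] == "R" and exists_B_rest({"C"}):
            acts.append("B-action")
        if S[p] == "B" and exists_B_rest({"F"}):
            acts.append("F-action")
        if S[p] == "F" and all(S[q] != "B" for q in qs):
            acts.append("C-action")
        if S[p] in {"R", "B"} and all(S[q] != "B" for q in qs):
            acts.append("Correction")
    return acts

def run_one_cycle(S):
    """Fire one enabled action at a time until one full PIF cycle completes."""
    trace, done = [], False
    while True:
        moved = False
        for p in S:
            if done and ROLE[p] == "root":
                continue  # do not let the root start the next cycle
            for a in enabled_actions(S, p):
                S[p] = TARGET[a]
                trace.append((p, a))
                done = done or (p, a) == ("root", "C-action")
                moved = True
                break
            if moved:
                break
        if not moved:
            return trace

S = {"root": "C", "i": "C", "leaf": "C"}
trace = run_one_cycle(S)
```

Starting from the clean configuration, the trace is the root's B-action, then i's R-action and B-action, the leaf's F-action, i's F-action, the root's C-action, and finally the cleaning of i and the leaf, after which every state is back to C.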
Algorithm 1 Four-State Snap-Stabilizing PIF Under a Distributed Daemon.

Variable: Sp
  Sp ∈ {B, R, F, C} if p is an internal processor (p ∈ I).
  Sp ∈ {B, C} if p is the initiator (p = root).
  Sp ∈ {F, C} if p is a leaf processor (p ∈ L).

Notation: q is a neighbor of p, i.e., q ∈ Np.

Actions:
{For the internal processors (p ∈ I)}
R-action   :: Sp = C ∧ (∃q ∈ Np :: (Sq = B) ∧ (∀q′ ∈ Np \ {q} :: Sq′ ∈ {F, C})) −→ Sp := R
B-action   :: Sp = R ∧ (∃q ∈ Np :: (Sq = B) ∧ (∀q′ ∈ Np \ {q} :: Sq′ = C)) −→ Sp := B
F-action   :: Sp = B ∧ (∃q ∈ Np :: (Sq = B) ∧ (∀q′ ∈ Np \ {q} :: Sq′ = F)) −→ Sp := F
C-action   :: Sp = F ∧ (∀q ∈ Np :: Sq ≠ B) −→ Sp := C
Correction :: (Sp = R ∨ Sp = B) ∧ (∀q ∈ Np :: Sq ≠ B) −→ Sp := C

{For the initiator (p = root)}
B-action :: Sp = C ∧ (∀q ∈ Np :: Sq = C) −→ Sp := B
C-action :: Sp = B ∧ (∀q ∈ Np :: Sq = F) −→ Sp := C

{For the leaf processors (p ∈ L)}
F-action :: Sp = C ∧ Sq = B −→ Sp := F
C-action :: Sp = F ∧ Sq ≠ B −→ Sp := C

4.2 Proof of Snap-Stabilization
We now prove that, despite an arbitrary (possibly erroneous) initial configuration, the root eventually executes a B-action (Lemma 4.3). Next, we show that, starting from any configuration from which the root executes its B-action, the system behaves according to its specification, i.e., it is snap-stabilizing (Theorem 4.1). We use the following notations:

− Pp: the topological parent of p.
− Tp: the subtree rooted at p.
− Lp: the set of leaf processors of Tp.
− hp: the height of Tp.
− np: the number of processors in Tp.
− µp: any path from p to a leaf.
The state sequence of a path µp is the sequence of the states of the processors on µp, ordered from p to the leaf; it can be expressed by the regular expression {B, R, F, C}∗{F, C}. In a normal configuration, the state sequence of any path µroot (from the root to any leaf) can be expressed by the regular expression {B+R, B+, C}{F, C}+.

Lemma 4.1 Let p be a processor which never moves. Then, no processor in Tp \ {p} is enabled (which implies that no processor in Tp \ {p} will ever move) iff the following two conditions are true:

1. Sp ∈ {R, F, C} implies that every processor in Tp \ {p} is in state C (Condition 1).

2. Sp = B implies that the state sequence of every µp is in BF C∗ (and, in this case, p cannot be a leaf) (Condition 2).
Proof. (Proof of the "if" part) Follows directly from Algorithm 1.

(Proof of the "only if" part) We need to prove the following: if either Condition 1 or Condition 2 is false, then there exists at least one enabled processor in Tp \ {p}. We prove this by induction on hp.

1. Let hp be equal to 1 and q be a leaf processor (q ∈ Tp). Assume that Sp ∈ {R, F, C} and Condition 1 is false. Then Sq must be equal to F (q, being a leaf processor, can be neither in state B nor in state R), and q is enabled (C-action). Now, assume that Sp = B and Condition 2 is false. Then Sq = C, and q is enabled (F-action).

2. Assume that the lemma statement ("only if") is true for any p such that hp ≤ k, where k ≥ 1. We now show that the statement is also true for any p such that hp = k + 1.

(a) Assume that Condition 1 is false for p. Let q be the first processor on a path µp not in state C. Thus, every processor q′ between p and q, if any, is in state C; so, in particular, SPq = C. (Note that q does not know that Pq is its parent; we use this notation only to simplify the presentation.) If no processor exists between p and q (p = Pq), then SPq ∈ {R, F, C}. Since hq ≤ k, by the induction hypothesis, the lemma statement is true for q. We need to consider two cases:

i. Assume that Sq ∈ {F, R}. This implies that every processor in Tq \ {q} is in state C (Condition 1). Then, q is enabled (C-action if Sq = F, Correction if Sq = R).

ii. Assume that Sq = B. Then the state sequence of every µq is in BF C∗ (Condition 2), and q is enabled (Correction).

(b) Assume that Condition 2 is false for p. Let q be a neighbor of p such that Sq ≠ F (the case of a processor q not in state C and not a neighbor of p is similar to Case 2(a)). Again, since hq ≤ k, the lemma statement is true for q.

i. Assume that Sq ∈ {C, R}. This implies that every processor in Tq \ {q} is in state C (Condition 1). Then, q is enabled (R-action if Sq = C, B-action if Sq = R).

ii. Assume that Sq = B. Then the state sequence of every µq is in BF C∗ (Condition 2), and q is enabled (F-action). □

Lemma 4.2 Let p be a processor which never moves. Then, the number of possible moves in Tp \ {p} is finite.

Proof.
We prove this lemma by induction on hp .
1. Let hp be equal to 1. Then, any leaf processor q (q ∈ Lp) can move at most once. So, the total number of moves in Tp \ {p} is at most |Lp|, which is finite.

2. Assume that Lemma 4.2 holds for any p such that hp ≤ k, where k ≥ 1. We now show that Lemma 4.2 also holds for any p such that hp = k + 1.
(a) Assume that Sp ∈ {B, R, F}. Then, it is easy to verify by checking all the different configurations that every q ∈ Np \ {Pp} can move at most three times. (The worst case occurs when Sp = B and Sq = C.)

(b) Assume that Sp = C. Then, every q ∈ Np \ {Pp} can follow either the cycle of states (1) C → R → B → F → C → . . . or the cycle (2) C → R → C → . . .

i. Assume that there exists q which executes Cycle (1) infinitely often. So, q executes the B-action infinitely often. Let a B-set be a maximal connected set of processors in state B, where maximal means that no neighbor of this set is in state B. For a B-set, let q′ be the parent of its processor nearest to the root. If Sq′ = F, then the B-set is said to be closed; otherwise (Sq′ ∈ {R, C}), the B-set is said to be open. Let α be the number of open B-sets in Tp \ {p}. By the algorithm, no open B-set can be created. Each time q executes an F-action, q closes a B-set. So, q can execute Cycle (1) at most α times, which contradicts the assumption.

ii. Assume that every q executes Cycle (1) a finite number of times, and consider the suffix of the execution in which no q executes Cycle (1) any more. Assume that there exists q which executes Cycle (2) infinitely often. Then, there exists a neighbor q′ ∈ Np \ {q} which executes the B-action infinitely often. But, by induction on every processor on µq′, eventually q′ does not move any more. By applying the same reasoning to each q′ ∈ Np \ {q}, eventually q does not move any more. A contradiction.

iii. Hence, eventually, no q ∈ Np \ {Pp} moves, and, by the induction hypothesis, Lemma 4.2 holds for every q ∈ Np \ {Pp}.

Thus, in all cases, every q executes a finite number of moves, and, by the induction hypothesis (hq ≤ k), between each two moves of q there is a finite number of moves in Tq \ {q}. So, for each neighbor q of p, the number of moves in Tq is finite. □

Lemma 4.3 The root eventually executes a B-action, even if the daemon is unfair.

Proof. We prove this by contradiction.
Assume that the root never executes its B-action. Then the system reaches a configuration from which the root never moves (because the root cannot execute a C-action twice without executing a B-action in between). By Lemma 4.2, if the root never moves, then the number of moves in T \ {root} is finite. When eventually no processor in T \ {root} is enabled, by Lemma 4.1, either Sroot = C and every processor is in state C, or Sroot = B and the state sequence of every µroot is in BF C∗. In both cases, the root is enabled (B-action or C-action, respectively). Since the root is the only enabled processor, the daemon, even an unfair one, must select it and allow it to move. Thus, we arrive at a contradiction. □

We now show that, once the root executes a B-action, the system behaves according to SP_PIF, even if the configuration from which the root executes its B-action is abnormal (Lemma 4.5). The system is in an abnormal configuration if there exists at least one internal processor p (∈ I) which is an abnormal source (Sp = B and SPp ≠ B). Part (i) of Figure 4.5 shows an abnormal configuration. In such a configuration, the system can be split into two parts: the "normal" part, corresponding to the normal PIF, and the "abnormal" part, containing the other processors. If the state Sp of every processor p in the abnormal part is
changed to C or F (as shown in Part (ii) of Figure 4.5), then the configuration becomes a normal configuration with respect to the PIF cycle.

Figure 4.5: Abnormal and Normal Configurations.

Note the processor p in Part (i) of Figure 4.5. Since SPp = B and Sp = C, p could be enabled with its R-action. But it is not enabled, because the states of its descendants are not all in {F, C}. So, we generalize the notion of "enabled" action (processor) to "pre-enabled" action (processor) as follows:

Definition 4.1 A PIF-action is said to be pre-enabled at a processor p if:
1. p = root and the state of p satisfies the guard of the PIF-action, or
2. p ≠ root, and the states of p and Pp satisfy the guard of the PIF-action.

A processor p at which a PIF-action is pre-enabled is referred to as a pre-enabled processor. Note that, by definition, any enabled processor is also a pre-enabled processor. The following remark directly follows from Algorithm 1:

Remark 4.1 In the normal part of the PIF cycle, every processor p at which a PIF-action A is pre-enabled remains pre-enabled for the same action A as long as p does not move; when p does move, p executes the statement of A.

Lemma 4.4 Every pre-enabled processor of the normal PIF cycle eventually moves.

Proof. Assume that a pre-enabled processor p of the normal PIF cycle never moves. By Lemma 4.3, p cannot be the root. So, every processor from the root to Pp is in state B, and Sp ∈ {R, B, C}; hence the B-action, F-action, or R-action is pre-enabled at p. Since p never moves, every processor from the root to Pp remains in state B forever. Thus, the root never moves, which contradicts Lemma 4.3. □

Lemma 4.5 Starting from a configuration from which the root executes its B-action, the system behaves according to SP_PIF.

Proof. After the first system transition (in which the root executed its B-action), Sroot = B.
Then, the state sequence of every µroot has a prefix of the form B+R, B+C, or B+F.
1. Let p be the last processor of the sequence B+R (p ∈ I). Then, p is pre-enabled (B-action). By Remark 4.1 and Lemma 4.4, p will eventually propagate the broadcast phase: p has received the message sent by the root and broadcasts it towards the leaves.

2. Let p be the last processor of the sequence B+C. Then, p is pre-enabled (R-action if p ∈ I, or F-action if p ∈ L). Again, by Remark 4.1 and Lemma 4.4: if p ∈ I, then the state of p eventually becomes R, and we are back to Case 1; otherwise (p ∈ L), p will eventually initiate the feedback phase.

3. Let p be the last-but-one processor of the sequence B+F, i.e., the last processor in state B. Then, p is pre-enabled (F-action if p ∈ I, or C-action if p = root). Again, by Remark 4.1 and Lemma 4.4: if p ∈ I, then p will eventually propagate the feedback phase; otherwise (p = root), p will eventually complete the feedback phase, which completes the PIF cycle. □

By Remark 2.1 and Lemmas 4.3 and 4.5, we can claim the following result:

Theorem 4.1 Running under any daemon, Algorithm 1 is a snap-stabilizing PIF protocol.

From Theorems 4.1 and 3.3:

Theorem 4.2 Algorithm 1 is optimal in terms of the number of states per processor to implement a snap-stabilizing PIF cycle under a distributed daemon on a rooted un-oriented tree network.
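The B-set bookkeeping used in the proof of Lemma 4.2 can be illustrated on a single root-to-leaf state sequence. The string encoding (written root-first) and the classification function below are ours; the proof itself works on whole subtrees.

```python
def b_sets(path):
    """Count maximal runs of B in a root-first state sequence.

    A run whose processor just above it is in R or C is an *open* B-set;
    a run preceded by F is *closed*. As an illustrative convention, a run
    starting at the root itself is counted as closed, since the root is a
    legitimate broadcast source.
    """
    open_sets = closed_sets = 0
    i = 0
    while i < len(path):
        if path[i] == "B":
            if i > 0 and path[i - 1] in ("R", "C"):
                open_sets += 1
            else:
                closed_sets += 1
            while i < len(path) and path[i] == "B":
                i += 1  # skip the rest of this maximal run
        else:
            i += 1
    return open_sets, closed_sets
```

Since Algorithm 1 never creates a new open B-set and each F-action closes one, the open count can only decrease: this is the quantity α that bounds how many times a processor can repeat the cycle C → R → B → F → C.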
4.3 Delay of the PIF Cycle
We now compute the maximal delay necessary to start the PIF cycle from an abnormal configuration on an un-oriented tree network.

Theorem 4.3 The delay to start the PIF cycle using Algorithm 1 is O(h²) rounds.

Proof. From the proof of Lemma 4.3, the worst case is obtained when the number of B-sets in which no processor can execute a Correction is maximal. Consider FL (for "First Layer"), the set of B-sets having no B-set in their subtrees. Since they cannot apply any Correction, the only way to remove the B-sets of FL is to broadcast towards the leaves of their subtrees; the leaves then initiate the corresponding feedback phase, which removes each B-set of FL in O(h) rounds. Since there exist at most h/2 distinct B-sets on the path from the root to each leaf, all the B-sets disappear in O(h²) rounds. □
5 Snap-Stabilizing PIF Algorithm Under a Central Daemon
In this section, we present a snap-stabilizing PIF algorithm for un-oriented trees assuming an (unfair) central daemon. From the proofs of Lemma 3.3 and Theorem 3.2, the main argument against the existence of a 3-state algorithm is that, in the configuration BCCB, both processors in state C can execute a B-action simultaneously. This cannot occur under a central daemon, even an unfair one. The proposed algorithm (Algorithm 2) is very similar to Algorithm 1, except that the state value R and every part of the code concerning R are removed.
Algorithm 2 Three-State Snap-Stabilizing PIF Under a Central Daemon.

Variable: Sp
  Sp ∈ {B, F, C} if p is an internal processor (p ∈ I).
  Sp ∈ {B, C} if p is the initiator (p = root).
  Sp ∈ {F, C} if p is a leaf processor (p ∈ L).

Notation: q is a neighbor of p, i.e., q ∈ Np.

Actions:
{For the internal processors (p ∈ I)}
B-action   :: Sp = C ∧ (∃q ∈ Np :: (Sq = B) ∧ (∀q′ ∈ Np \ {q} :: Sq′ = C)) −→ Sp := B
F-action   :: Sp = B ∧ (∃q ∈ Np :: (Sq = B) ∧ (∀q′ ∈ Np \ {q} :: Sq′ = F)) −→ Sp := F
C-action   :: Sp = F ∧ (∀q ∈ Np :: Sq ≠ B) −→ Sp := C
Correction :: Sp = B ∧ (∀q ∈ Np :: Sq ≠ B) −→ Sp := C

{For the initiator (p = root)}
B-action :: Sp = C ∧ (∀q ∈ Np :: Sq = C) −→ Sp := B
C-action :: Sp = B ∧ (∀q ∈ Np :: Sq = F) −→ Sp := C

{For the leaf processors (p ∈ L)}
F-action :: Sp = C ∧ Sq = B −→ Sp := F
C-action :: Sp = F ∧ Sq ≠ B −→ Sp := C
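The central-daemon argument can be sketched directly: in the configuration BCCB on a chain, both internal processors have the B-action of Algorithm 2 enabled, but as soon as the daemon activates one of them, the other's guard is falsified. The chain indexing and encoding below are ours.

```python
# Sketch of the B C C B configuration discussed above, on a 4-processor chain
# with indices 0..3 (processors 1 and 2 are internal). Encoding is ours.

def b_action_enabled(S, i):
    """Internal B-action of Algorithm 2: p is in C, one neighbor is in B,
    and every other neighbor is in C."""
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(S)]
    return S[i] == "C" and any(
        S[q] == "B" and all(S[r] == "C" for r in nbrs if r != q) for q in nbrs
    )

S = list("BCCB")
enabled = [i for i in (1, 2) if b_action_enabled(S, i)]

# Both internal processors are enabled, but a central daemon fires only one.
S[enabled[0]] = "B"
still_enabled = [i for i in (1, 2) if b_action_enabled(S, i)]
```

After the single move, the remaining C-processor sees two B-neighbors, so its B-action is disabled: the fatal simultaneous step of Section 3 cannot happen.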
The proof of snap-stabilization of Algorithm 2 follows the same pattern as, but is simpler than, that of Section 4. So, we can claim:

Theorem 5.1 Running under any central daemon, Algorithm 2 is a snap-stabilizing PIF protocol.

The next theorem follows from Theorems 3.1 and 5.1:

Theorem 5.2 Algorithm 2 is optimal in terms of the number of states per processor to implement a snap-stabilizing PIF cycle under a central daemon on a rooted un-oriented tree network.
6 State Requirement vs. Delay
Denote the height of the tree by h. In Subsection 4.3, we established that the delay to allow the root to initiate a PIF cycle is O(h²) rounds in the worst case. We now show that this complexity is not optimal. Consider the following initial configuration: the root processor is in a state as if it had just finished broadcasting a message, and the other processors are waiting to participate in the broadcast phase. (In Algorithm 1, the states of the root and of the other processors are B and C, respectively.) Such a configuration looks like a normal configuration even though the root did not initiate the broadcast phase. Starting from it, even the best possible algorithm cannot do anything but wait for the completion of the broadcast and feedback phases before the root can initiate the first complete PIF cycle of the computation. Obviously, the minimum number of rounds required for this (almost complete) PIF cycle to terminate is at least the time needed to perform the broadcast phase from the neighbors of the root all the way to the leaf processors (h − 2 rounds), followed by the feedback phase from the leaves to the root (h rounds), i.e., a total of 2(h − 1) rounds. Hence, assuming any daemon, the delay to start the PIF cycle from an abnormal initial configuration is Ω(h) rounds. It is shown in [BDPV99a] that
the lower bound for this delay can be reached by an algorithm using ∆p + 2 states for the internal processors, where ∆p denotes the degree of p. Furthermore, considering the normal behavior of Algorithm 1, we can also notice that, due to the ready phase, the time needed to complete a PIF cycle is about 3h rounds, instead of 2h rounds as in [BDPV99c]. These two facts show the trade-off between the state requirement on one hand, and the delay to start the PIF cycle or the time needed to perform a PIF cycle on the other.
7 Concluding Remarks
We have shown that four states for each internal processor, two states for the root, and two states for each leaf processor are required to design a snap-stabilizing PIF scheme in arbitrary un-oriented trees under any (even unfair) distributed daemon. Then we proposed two snap-stabilizing PIF algorithms for un-oriented trees: one working under any daemon, the other under any central daemon. Both algorithms are optimal in terms of the number of states and the stabilization time. We also showed the trade-off between the state requirement and either the delay to start the PIF cycle or the time needed to perform a PIF cycle. We conjecture that any algorithm with a delay of O(h) rounds must use Ω(∆p) states for each internal processor p.
Acknowledgments We are grateful to the anonymous reviewers for their valuable comments which significantly improved the presentation of the paper.
References

[ABDT98] L. O. Alima, J. Beauquier, A. K. Datta, and S. Tixeuil. Self-stabilization with global rooted synchronizers. In ICDCS'98, Proceedings of the 18th International Conference on Distributed Computing Systems, pages 102–109, 1998.

[AG94] A. Arora and M. G. Gouda. Distributed reset. IEEE Transactions on Computers, 43:1026–1038, 1994.

[AKM+93] B. Awerbuch, S. Kutten, Y. Mansour, B. Patt-Shamir, and G. Varghese. Time optimal self-stabilizing synchronization. In STOC'93, Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 652–661, 1993.

[AKY90] Y. Afek, S. Kutten, and M. Yung. Memory-efficient self-stabilization on general networks. In WDAG'90, Distributed Algorithms, 4th International Workshop Proceedings, Springer-Verlag LNCS 486, pages 15–28, 1990.

[APSV91] B. Awerbuch, B. Patt-Shamir, and G. Varghese. Self-stabilization by local checking and correction. In FOCS'91, Proceedings of the 31st Annual IEEE Symposium on Foundations of Computer Science, pages 268–277, 1991.

[AV91] B. Awerbuch and G. Varghese. Distributed program checking: a paradigm for building self-stabilizing distributed protocols. In FOCS'91, Proceedings of the 31st Annual IEEE Symposium on Foundations of Computer Science, pages 258–267, 1991.

[BDPV99a] A. Bui, A. K. Datta, F. Petit, and V. Villain. Snap-stabilizing PIF algorithm in tree networks without sense of direction. In SIROCCO'99, The 6th International Colloquium on Structural Information and Communication Complexity Proceedings, pages 32–46. Carleton University Press, 1999.

[BDPV99b] A. Bui, A. K. Datta, F. Petit, and V. Villain. Space optimal PIF algorithm: self-stabilizing with no extra space. In IPCCC'99, IEEE International Performance, Computing, and Communications Conference, pages 20–26. IEEE Computer Society Press, 1999.

[BDPV99c] A. Bui, A. K. Datta, F. Petit, and V. Villain. State-optimal snap-stabilizing PIF in tree networks. In Proceedings of the Fourth Workshop on Self-Stabilizing Systems, pages 78–85. IEEE Computer Society Press, 1999.

[BGW89] G. M. Brown, M. G. Gouda, and C. L. Wu. Token systems that self-stabilize. IEEE Transactions on Computers, 38:845–852, 1989.

[CDPV01a] A. Cournier, A. K. Datta, F. Petit, and V. Villain. Optimal snap-stabilizing PIF in un-oriented trees. In OPODIS 2001, 5th International Conference on Principles of Distributed Systems Proceedings, pages 71–90, 2001.

[CDPV01b] A. Cournier, A. K. Datta, F. Petit, and V. Villain. Self-stabilizing PIF algorithm in arbitrary rooted networks. In ICDCS-21, 21st International Conference on Distributed Computing Systems, pages 91–98. IEEE Computer Society Press, 2001.

[CDPV02] A. Cournier, A. K. Datta, F. Petit, and V. Villain. Snap-stabilizing PIF algorithm in arbitrary networks. In ICDCS-22, 22nd International Conference on Distributed Computing Systems, pages 199–206, 2002.

[Cha82] E. J. H. Chang. Echo algorithms: depth parallel operations on general graphs. IEEE Transactions on Software Engineering, SE-8:391–401, 1982.

[Dij74] E. W. Dijkstra. Self-stabilizing systems in spite of distributed control. Communications of the ACM, 17:643–644, 1974.

[DIM97] S. Dolev, A. Israeli, and S. Moran. Uniform dynamic self-stabilizing leader election. IEEE Transactions on Parallel and Distributed Systems, 8(4):424–440, 1997.

[GH96] M. G. Gouda and F. F. Haddix. The stabilizing token ring in three bits. Journal of Parallel and Distributed Computing, 35:43–48, 1996.

[Gho93] S. Ghosh. An alternative solution to a problem on self-stabilization. ACM Transactions on Programming Languages and Systems, 15:735–742, 1993.

[KMM02] D. Kondou, H. Masuda, and T. Masuzawa. A self-stabilizing protocol for pipelined PIF in tree networks. In ICDCS-22, 22nd International Conference on Distributed Computing Systems, pages 181–190, 2002.

[Seg83] A. Segall. Distributed network protocols. IEEE Transactions on Information Theory, IT-29:23–35, 1983.

[Var93] G. Varghese. Self-stabilization by local checking and correction. Ph.D. thesis, Technical Report MIT/LCS/TR-583, MIT, 1993.

[Vil99] V. Villain. A key tool for optimality in the state model. In Proceedings of the DIMACS Workshop on Distributed Data and Structures, pages 133–148. Carleton University Press, 1999.