A Dynamic Algorithm for Reachability Games Played on Trees

Bakhadyr Khoussainov¹, Jiamou Liu²*, Imran Khaliq¹

¹ Department of Computer Science, University of Auckland, New Zealand
² Universität Leipzig, Institut für Informatik, Germany
[email protected], [email protected], [email protected]
Abstract. This paper starts the investigation of dynamic algorithms for solving games that are played on finite graphs. The dynamic game determinacy problem calls for efficient algorithms that decide the winner of a game whose underlying graph undergoes repeated modifications. In this paper, we focus on turn-based reachability games. We provide an algorithm that solves the dynamic reachability game problem played on trees. The amortized time complexity of our algorithm is O(log² n) for updates and O(log n) for queries, where n is the number of nodes in the current graph.

1 Introduction

We start the investigation of dynamic algorithms for solving games played on finite graphs. Games played on graphs, with reachability, Büchi, Muller, Streett, parity and similar winning conditions, have recently attracted great attention due to their connections with model checking and verification problems, automata and logic [7][12][14][19]. Here we focus on two-player games that are turn-based, deterministic and with perfect information. Each such game is defined on a finite directed graph. The two players play the game by moving a token along the edges of the underlying graph in turn. The goal of one player is to move the token along a path that satisfies the winning condition, while the other player wants the opposite. Given one of these games, to solve the game means to design an (efficient) algorithm that tells us from which nodes a given player wins the game. Formally, the game determinacy problem is defined as follows:

INPUT: A game G and a node u on the underlying graph of G.
QUESTION: Does Player 0 win the game G starting from the node u?

Polynomial-time algorithms exist for some of the games mentioned above, while efficient algorithms for others remain unknown. For example, on a graph with n nodes and m edges, the reachability game problem is solvable in time O(n + m) and is PTIME-complete [9], and Büchi games are solvable in time O(n · m) [2]. Parity games are known to be in NP ∩ co-NP but are not known to be in P. The game determinacy problem can be answered by either a static or a dynamic algorithm. In the static setting, the game, once given as input, remains unchanged over time. All the work on games played on graphs mentioned above belongs to this category. In the dynamic setting, by contrast, the game is modified over time. Examples of situations where such dynamic algorithms are of interest include: 1) The game is used for modeling a system which undergoes certain changes over time. 2)
In the case where we do not have full information about the game, we may solve it by performing a series of refinements on approximations of the game. Each approximation is itself a game, and each refinement can be thought of as an update to the previous approximation. We pose the dynamic game determinacy problem as follows: we would like to maintain the graph of a game that undergoes a sequence of update and query operations in such a way that the current game can be solved efficiently.

* The second author is supported by the DFG research project GELO.

In contrast to the static case, the dynamic determinacy problem takes as input a game G and a (finite or infinite) sequence α1, α2, α3, . . . of update or query operations. Each update operation makes a unit change to the current game, such as inserting or deleting a node or an edge. Since each change to the game is small, we hope to handle these updates more efficiently than by re-solving the game from scratch. The dynamic algorithm that solves this problem is a collection of computations that handle all the operations. There has recently been increasing interest in dynamic graph algorithms (see, for example, [5][6]). The dynamic reachability problem on graphs has been investigated in a series of papers by King [10], Demetrescu and Italiano [4], Roditty [15] and Roditty and Zwick [16][17]. In [17], it is shown that for directed graphs with m edges and n nodes, there is a dynamic algorithm for the reachability problem with an amortized update time of O(m + n log n) and a worst-case query time of O(n). This paper extends this line of research to dynamic reachability game algorithms. In the setting of games, for a given directed graph G and a player σ, a set of nodes T is reachable from a node u in G if there is a strategy for player σ such that, starting from u, all plays in which player σ follows that strategy reach T, regardless of the actions of the opponent. In this manner, graphs can be seen as games in which one of the players has no power to change the course of the play. Hence, the dynamic reachability game problem can be viewed as a generalization of the dynamic reachability problem for graphs. In this paper we describe a dynamic algorithm that solves dynamic reachability games played on trees. We analyze the amortized time complexity of the algorithm, which measures the average running time per operation over a worst-case sequence of operations¹.
We concentrate on trees because: (1) trees are simple data structures, and the study of dynamic algorithms on trees is a first step towards the dynamic game determinacy problem; (2) even in the case of trees, the techniques one needs to employ are non-trivial; (3) the amortized time analysis for the dynamic reachability game problem on general graphs is an interesting hard problem; (4) finally, we give a satisfactory solution to the problem on trees. We show that the amortized time complexity of our algorithm is of order O(log² n) for updates and O(log n) for queries, where n is the number of nodes in the tree. The rest of the paper is organized as follows. Section 2 describes the known static algorithm that solves reachability games. Section 3 lays out the basic framework of the dynamic reachability game problem and of reachability games played on trees. Section 4 describes the data structures we use in the dynamic algorithm. A crucial technique in the algorithm is that the tree is partitioned into a collection of paths; nodes on the same path are processed collectively under an update operation. Sections 5 and 6 describe the algorithm in detail. Finally, Section 7 analyzes the amortized time complexity of the algorithm.

2 A Static Algorithm for Reachability Games

We now describe two-person reachability games played on finite directed graphs. The two players are Player 0 and Player 1. The arena A of the game is a directed graph (V0, V1, E), where V0 is a finite set of 0-nodes, V1 is a finite set of 1-nodes disjoint from V0, and E ⊆ (V0 × V1) ∪ (V1 × V0) is the edge relation. We use V to denote V0 ∪ V1. A reachability game G is a pair (A, T) where A is the arena and T ⊆ V is the set of target nodes for Player 0. We call (V, E) the underlying graph of G. The players start by placing a token on some initial node v ∈ V and then move the token in rounds. At each round, the token is moved along an edge, respecting the direction of the edge: if the token is placed at u ∈ Vσ, where σ ∈ {0, 1}, then Player σ moves the token from u to a node v such that (u, v) ∈ E. The play stops when the token reaches a node with no outgoing edge or a target node; otherwise, the play continues forever. Formally, a play is a (finite or infinite) sequence π = v0 v1 v2 . . . such that (vi, vi+1) ∈ E for all i. Player 0 wins the play π if π is finite and the last node in π is in T. Otherwise, Player 1 wins the play.

¹ See a standard textbook such as [3] (Chapter 17) for a thorough introduction to amortized complexity analysis.

Let G = (A, T) be a reachability game. A (memoryless) strategy for Player σ is a partial function fσ : Vσ → V1−σ. A play π = v0 v1 . . . is consistent with fσ if vi+1 = fσ(vi) whenever vi ∈ Vσ and fσ is defined on vi, for all i. All strategies in this paper are memoryless. A winning strategy for Player σ from v is a strategy fσ such that Player σ wins all plays starting from v that are consistent with fσ. A node u is a winning position for Player σ if Player σ has a winning strategy from u. The σ-winning region, denoted Wσ, is the set of all winning positions for Player σ. Note that the winning regions are defined for memoryless strategies. A game enjoys memoryless determinacy if the regions W0 and W1 partition V. The rest of the section describes the known static algorithm for solving reachability games.

Theorem 1 (Reachability game determinacy [8]). Reachability games enjoy memoryless determinacy. Moreover, there is an algorithm that computes W0 and W1 in time O(n + m), where n and m are respectively the number of nodes and directed edges in A.

Proof. For Y ⊆ V, set

Pre(Y) = {v ∈ V0 | ∃u[(v, u) ∈ E ∧ u ∈ Y]} ∪ {v ∈ V1 | ∀u[(v, u) ∈ E → u ∈ Y]}.

Define a sequence T0, T1, . . . such that T0 = T and, for i > 0, Ti = Pre(Ti−1) ∪ Ti−1.
Since A is finite, there is an s such that Ts = Ts+1. We say a node u has rank r, r ≥ 0, if u ∈ Tr − Tr−1. A node u has infinite rank if u ∉ Ts. By induction on the rank of u, one proves that u ∈ W0 if u has a finite rank. On the other hand, if u ∈ W0, then there is a winning strategy f0 of Player 0 such that all plays consistent with f0 starting from u reach T. Let π be a play consistent with f0 starting from u. Note that π must reach T in fewer than n steps, as otherwise π would continue forever without reaching T. Therefore one may easily prove that u ∈ Tn. Hence W0 = {u | u has a finite rank}. The algorithm finds W0 inductively. It sets T0 = T. At each round i, the algorithm computes all the nodes of rank i + 1 by examining each edge (u, v) where v has rank i. The procedure examines each node and each edge at most once and hence takes time O(n + m). ⊓⊔
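As an illustration, the backward induction in this proof can be sketched as follows. This is our own rendering (the function name and data representation are ours, not from the paper); it implements the O(n + m) version by keeping, for each 1-node, a counter of successors not yet known to be winning for Player 0. Note that a non-target node with no outgoing edges never enters W0, matching the rule that a play stopping outside T is won by Player 1.

```python
from collections import deque

def solve_reachability(v0, v1, edges, targets):
    """Compute Player 0's winning region W0 by backward induction,
    following the Pre(.) iteration in the proof of Theorem 1.

    v0, v1  : disjoint sets of 0-nodes and 1-nodes
    edges   : set of directed edges (u, v)
    targets : set T of target nodes
    """
    nodes = v0 | v1
    preds = {u: [] for u in nodes}   # reverse adjacency
    outdeg = {u: 0 for u in nodes}
    for (u, v) in edges:
        preds[v].append(u)
        outdeg[u] += 1

    w0 = set(targets)                # T0 = T
    queue = deque(targets)
    remaining = dict(outdeg)         # for 1-nodes: successors not yet in W0
    while queue:
        v = queue.popleft()
        for u in preds[v]:
            if u in w0:
                continue
            if u in v0:
                # A 0-node wins as soon as one successor wins.
                w0.add(u)
                queue.append(u)
            else:
                # A 1-node wins for Player 0 only when all successors do.
                remaining[u] -= 1
                if remaining[u] == 0:
                    w0.add(u)
                    queue.append(u)
    return w0
```

Each edge is inspected at most once (when its head is dequeued), giving the O(n + m) bound of the theorem.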

3 Basic Framework

3.1 Dynamic reachability game problem

As mentioned above, the dynamic game determinacy problem takes as input a reachability game G = (A, T) and a sequence α1, α2, . . . of update and query operations. The operations produce the sequence of games G0, G1, . . . such that Gi is obtained from Gi−1 by applying the operation αi. A dynamic algorithm should solve the game Gi for each i. We define the following seven update operations and one query operation:

1. InsertNode(u, i, j), where i, j ∈ {0, 1}, creates a new node u in Vi; u is set as a target if j = 1 and as a non-target if j = 0.
2. DeleteNode(u) deletes the node u ∈ V.
3. InsertEdge(u, v) inserts an edge from u to v.
4. DeleteEdge(u, v) deletes the edge from u to v.
5. SetTarget(u) sets the node u as a target.
6. UnsetTarget(u) sets the node u as a non-target.
7. SwitchPosition(u) changes u from a σ-node to a (1 − σ)-node, where u originally belongs to Vσ.
8. Query(u) returns true if u ∈ W0 and false if u ∈ W1.

By convention, we assume that the initial game G0 is the empty game (its underlying graph is empty). At stage s, s > 0, the algorithm applies the operation αs to the current game. Using the static algorithm from Theorem 1, one obtains two lazy dynamic algorithms for reachability games. The first algorithm re-runs the static algorithm after each update, so each query takes constant time. The second algorithm modifies the game graph after each update operation without re-computing the winning positions, and instead runs the static algorithm for each Query(u) operation. In this way, the update operations take constant time, but Query(u) takes the same time as the static algorithm. The amortized time complexity of both algorithms is the same as that of the static algorithm. Our goal is to improve upon these two algorithms.
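To make the second baseline concrete, it can be sketched as below. This is a sketch under our own naming (the paper does not prescribe an implementation), and only a subset of the eight operations is shown: updates edit the stored game in O(1), and each query re-solves the game with a naive fixpoint of the Pre(·) operator.

```python
class LazyReachabilityGame:
    """Second 'lazy' baseline: updates only edit the stored game in O(1);
    Query re-runs the static algorithm of Theorem 1 from scratch."""

    def __init__(self):
        self.owner = {}     # node -> 0 or 1
        self.target = set()
        self.succ = {}      # node -> set of successor nodes

    # -- update operations: constant-time bookkeeping only --
    def insert_node(self, u, i, j):
        self.owner[u] = i
        self.succ[u] = set()
        if j == 1:
            self.target.add(u)

    def insert_edge(self, u, v):
        self.succ[u].add(v)

    def delete_edge(self, u, v):
        self.succ[u].discard(v)

    def set_target(self, u):
        self.target.add(u)

    # -- query: re-solve the whole game --
    def query(self, u):
        w0 = set(self.target)
        changed = True
        while changed:                  # naive fixpoint of Pre(.)
            changed = False
            for v, succs in self.succ.items():
                if v in w0 or not succs:
                    continue            # dead ends outside T lose
                wins = (any(s in w0 for s in succs) if self.owner[v] == 0
                        else all(s in w0 for s in succs))
                if wins:
                    w0.add(v)
                    changed = True
        return u in w0
```

Each query here costs up to O(n · m) with this naive fixpoint (the queue-based version of Theorem 1 would bring it to O(n + m)), while every update is O(1); the tree algorithm of the following sections improves on both baselines simultaneously.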

3.2 Reachability games played on trees

We view trees as directed, acyclic, weakly-connected graphs where each node has a set of zero or more children and at most one parent. The node with no incoming edge is called the root. Nodes with no children are called leaves. A forest consists of pairwise disjoint trees. Since the underlying tree of the game undergoes changes, the game will in fact be played on forests. We nevertheless say that a reachability game G is played on trees if its arena is a forest F. A node u is an ancestor of v (and v is a descendant of u) in the forest F if there is a path in F from u to v. We write u ≤F v to denote that u is an ancestor of v in F. For two nodes u, v with u ≤F v, the path from u to v is the set Path[u, v] = {w | u ≤F w ≤F v}. Let G = (V0, V1, E) be a game played on trees. Recall that for σ ∈ {0, 1}, Wσ denotes the σ-winning region in the forest F = (V, E). By Theorem 1, each node of F belongs to either W0 or W1. We make the following definition.

Definition 1. A node u is in state σ if u ∈ Wσ. We denote the state of u by State(u).

For any node u ∈ V, the value of State(u) depends on the states of the children of u. The following lemma is immediate.

Lemma 1. The value of State(u) is determined as follows:
1. If u is a target, then State(u) = 0.
2. If u is a leaf and is not a target, then State(u) = 1.
3. In all other cases:
   – If u ∈ V0, then State(u) = 0 if and only if at least one child of u has state 0.
   – If u ∈ V1, then State(u) = 0 if and only if all children of u have state 0.

In subsequent sections, we describe a dynamic algorithm for solving reachability games played on trees. The algorithm maintains a data structure (the base structure) which stores the current game, and an auxiliary data structure to facilitate efficient answers to the query operations. Essentially, the problem amounts to updating the auxiliary structure efficiently. We now describe the base structure used to store a reachability game played on trees.
The underlying forest F is implemented as a doubly linked list List(F) of nodes. A node u is represented by the tuple (p(u), pos(u), tar(u)), where p(u) is a pointer to the parent of u (p(u) = null if u is a root), pos(u) = σ if u ∈ Vσ, and tar(u) is a Boolean variable with tar(u) = true iff u is a target. We make the following assumptions about the operations and their implementations:

– Inputs of the update and query operations are given as pointers to their representatives in the base structure.
– The InsertNode(u) operation adds an isolated node² to the current forest F.
– We assume that DeleteNode(u) is only performed when u is a root. The operation replaces the tree containing u with several trees, each rooted at a child of u. When u is not a root, we can first perform DeleteEdge(p(u), u) to make u a root.

² A node is isolated if it has no incoming or outgoing edges.

– To preserve the forest structure, the InsertEdge(u, v) operation is applied only when v is the root of a tree not containing u. InsertEdge(u, v) links the trees containing u and v. DeleteEdge(u, v) does the opposite: it splits the tree containing u and v into two trees, one containing u and the other having v as its root.
– For simplicity, we assume that SetTarget(u) is applied only when u is not a target node; similarly, UnsetTarget(u) is applied only when u is a target node.
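Under these conventions, the state of every node is determined bottom-up by Lemma 1. A from-scratch evaluator can be sketched as follows; this is our own sketch, using a hypothetical dictionary representation of the forest rather than the List(F) pointer structure of the base structure.

```python
def state(u, owner, target, children, memo=None):
    """Compute State(u) by structural recursion, per Lemma 1.

    owner[u]    : 0 or 1 (which player owns node u)
    target      : set of target nodes
    children[u] : list of u's children in the forest
    """
    if memo is None:
        memo = {}
    if u in memo:
        return memo[u]
    if u in target:                # case 1: targets have state 0
        s = 0
    elif not children[u]:          # case 2: non-target leaves have state 1
        s = 1
    else:                          # case 3: combine the children's states
        kids = [state(v, owner, target, children, memo) for v in children[u]]
        if owner[u] == 0:
            s = 0 if any(k == 0 for k in kids) else 1
        else:
            s = 0 if all(k == 0 for k in kids) else 1
    memo[u] = s
    return s
```

The dynamic algorithm of Sections 5 and 6 avoids exactly this full recomputation: after an update it touches only the nodes whose state actually changes.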

4 Data structures

4.1 Splay trees

The algorithm makes use of the splay tree data structure introduced by Sleator and Tarjan [18]. Splay trees form a dynamic data structure for maintaining elements drawn from a totally ordered domain D. Each splay tree is itself a tree, identified by its root element. Elements of D are arranged in a collection PD of splay trees with the requirement that each element of D belongs to some splay tree in PD and no element appears in two different splay trees in PD. The data structure supports the following splay tree operations:

– Splay(A, u): reorganizes the splay tree A so that u is at the root, if u ∈ A.
– Join(A, B): joins two splay trees A, B ∈ PD, where each element in A is less than each element in B, into one tree.
– Split(A, u): splits the splay tree A ∈ PD into two new splay trees R(u) = {x ∈ A | x > u} and L(u) = {x ∈ A | x ≤ u}.
– Max(A)/Min(A): returns the maximum/minimum element in A ∈ PD.

Readers are referred to standard textbooks such as [11] or [13] for proofs of the following theorem.

Theorem 2 (splay trees). For the splay trees on PD, the amortized time of each of the operations above is O(log n), where n is the cardinality of D.
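For intuition, the semantics of these four operations can be modeled naively with sorted lists. This reference model is ours and is only meant to pin down the contracts; each operation here costs O(n), whereas the splay-tree implementation of [18] achieves the amortized O(log n) of Theorem 2.

```python
class NaivePathSet:
    """Naive reference model of the splay-tree operations the algorithm
    needs: each 'tree' is a sorted Python list.  O(n) per operation,
    not the amortized O(log n) of real splay trees."""

    def __init__(self, elems=()):
        self.elems = sorted(elems)

    def join(self, other):
        # Precondition of Join(A, B): every element of A < every element of B.
        assert (not self.elems or not other.elems
                or self.elems[-1] < other.elems[0])
        self.elems += other.elems
        return self

    def split(self, u):
        # Split(A, u): L(u) = {x <= u} stays here, R(u) = {x > u} is returned.
        left = [x for x in self.elems if x <= u]
        right = [x for x in self.elems if x > u]
        self.elems = left
        return NaivePathSet(right)

    def max(self):
        return self.elems[-1]

    def min(self):
        return self.elems[0]
```

In the algorithm, the elements of a single "tree" are the nodes of one path of the partition V^Path, ordered by the forest order ≤F restricted to that path.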

4.2 Partition a forest by paths

For a reachability game played on the forest F, we make the following definition.

Definition 2.
1. A node u ∈ V is stable if either u is a target node, or u ∈ Vσ ∩ Wσ for some σ ∈ {0, 1} and |{v | State(u) = State(v) ∧ (u, v) ∈ E}| ≥ 2. We use Z to denote the set of stable nodes.
2. A path Path[u, v] in F is homogeneous if all nodes in Path[u, v] have the same state.
3. A path Path[u, v] is stable if it contains at most one stable node, and the stable node can only appear as the ≤F-maximum element of Path[u, v].

As the auxiliary data structure, the algorithm maintains a partition V^Path of the node set V, where each element P ∈ V^Path is a homogeneous and stable path in F. The partition V^Path is represented using the splay tree data structure described above, where each element of V^Path forms a splay tree. We assume V^Path is equipped with the Splay(A, u), Join(A, B), Split(A, u) and Max(A)/Min(A) operations. The assumed "total order" on the nodes is the forest order ≤F. The order relation ≤F is not total; however, it becomes a total order when restricted to a particular path. Therefore, when applying the Join(A, B) operation, we need to make sure that the resulting set A ∪ B again forms a path in the forest. Note that the structure

F^Path = (V^Path, {(P1, P2) | ∃u ∈ P1 ∃v ∈ P2 : (u, v) ∈ E})

again forms a forest, which we call the partition forest of F. We use Pu to denote the path in V^Path containing u. We assume that from each node u in the linked list List(F) there is a pointer to the corresponding element u in the path Pu; hence accessing Pu from u takes constant time. The algorithm maintains the following additional variables as auxiliary data structures:

1. For each Pu ∈ V^Path, the algorithm maintains State(Pu), which is the state of all nodes in Pu. This variable is linked to by a pointer from the root of the splay tree representing Pu. It can be accessed from Pu by performing the Splay(Pu, u) operation.
2. For each node u, the algorithm maintains h(u) = |{v | (u, v) ∈ E}|.
3. For each node u, the algorithm maintains Stable(u) ∈ {true, false} such that Stable(u) = true if and only if u is stable.
4. For each stable node u in F, the algorithm maintains ν(u) = |{v | State(u) = State(v) ∧ (u, v) ∈ E}|.

Accessing h(u), Stable(u) and ν(u) from u takes constant time. Let α0, α1, . . . be a sequence of operations as described above. We sometimes use the notation Fi = (V0,i ∪ V1,i, Ei), Vi, Ti, Wσ,i, hi(u), νi(u), Zi, Stablei(u), Vi^Path, Pi,u, Fi^Path to denote the underlying forest F, the node set V, the target set T, the σ-winning region Wσ, the variables h(u) and ν(u), Z, Stable(u), V^Path, Pu and F^Path as they appear at stage i. We say that the node u changes state at stage i + 1 if u is moved either from W0,i to W1,i+1 or from W1,i to W0,i+1.
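The invariants that these variables must satisfy can be stated as a from-scratch recomputation. The following sketch (with our own names) is only a specification check: in the algorithm itself, h(u), ν(u) and Stable(u) are maintained incrementally, never recomputed this way.

```python
def recompute_aux(u, owner, target, children, state):
    """Recompute h(u), nu(u) and Stable(u) from their definitions
    (Definition 2 and Section 4.2).

    owner[u]    : 0 or 1;  target : set of target nodes
    children[u] : u's children;  state[u] : State(u) of every node
    """
    h = len(children[u])
    # nu(u): children sharing u's state (edges of a forest go to children).
    nu = sum(1 for v in children[u] if state[v] == state[u])
    # Definition 2.1: u is stable if it is a target, or u is in V_sigma
    # and W_sigma for the same sigma (owner equals state) and nu >= 2.
    stable = (u in target) or (owner[u] == state[u] and nu >= 2)
    return h, nu, stable
```

A check like this is useful, for instance, in a test harness that compares the incrementally maintained variables against their definitions after every update.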

5 Trace up and change state

Suppose an update operation is applied at stage s + 1, resulting in a modified forest Fs+1. Let u be the ≤Fs-maximum node that changes state at stage s + 1. This change may trigger a sequence of state changes on the ancestors of u according to Lemma 1 (nodes that are not ancestors of u are not affected). Let Ps(u) be the maximal homogeneous stable path³ that contains u at stage s. The following lemma shows how a state change on u influences its ancestors in the forest.

Lemma 2. Suppose the node u changes state at stage s + 1. Let x be the ≤Fs-least node in Ps(u). Then a node y ≤Fs u changes state at stage s + 1 if and only if y ∈ Path[x, u].

Proof. Let x0 >Fs x1 >Fs . . . >Fs xm be the sequence of nodes in Path[x, u]. Note that x0 = u and xm = x. Suppose x0 ∈ W0,s. By the assumption that Path[x, u] is homogeneous, States(xi) = 0 for each 0 ≤ i ≤ m. Suppose for 0 ≤ i < m, xi changes state from 0 to 1. By the assumption that xi+1 is not a stable node at stage s, xi+1 is not a target. Furthermore, if xi+1 ∈ V0,s, then xi+1 has exactly one child in state 0 at stage s, namely xi. If xi+1 ∈ V1,s, then all children of xi+1 are in state 0 at stage s. In both cases, xi+1 changes state at stage s + 1. This proves that all nodes in Path[x, u] change state. Now take y <Fs x.

Hence u changes state if and only if (3) holds.

(5) Say SwitchPosition(u) is performed at stage s + 1. Suppose u changes state. Then u is not a target node at stage s. If poss(u) ≠ States(Pu), then by Lemma 1 all children of u have state States(Pu) and thus States+1(u) = States(u). Therefore it must be that poss(u) = States(Pu). If u is stable, then some child of u has state 1 − States(Pu), as otherwise u would not change state. This implies hs(u) − νs(u) > 0. If u is not stable, then u has exactly one child with state States(Pu) and some other children with state 1 − States(Pu). This implies hs(u) > 1. Hence (4) holds.
On the other hand, if (4) holds, then u is not a target and u ∈ Vσ,s ∩ Wσ,s for some σ ∈ {0, 1}. If u is stable and hs(u) > νs(u), then some child of u has state 1 − σ at stage s + 1. If u is not stable and hs(u) > 1, then again some child of u has state 1 − σ at stage s + 1. In both cases u changes state. ⊓⊔

For each update operation above, the algorithm checks whether the respective condition listed in Lemma 5 is met. If the condition holds, then the algorithm applies ChangeState(u). The following lemma follows easily from Lemmas 4 and 5.

Lemma 6. Suppose an update operation is applied at stage s + 1. Then u is a winning position for Player States+1(Pu) in the game Gs+1.

6.2 Update the variables h(u), ν(u) and Stable(u)

It remains to describe the computation for updating the values of h(u), ν(u) and Stable(u) after applying an update operation. Note that h(u) can be updated easily: in the case of InsertEdge(u, v), hs+1(u) is set to hs(u) + 1; in the case of DeleteEdge(u, v), hs+1(u) is set to hs(u) − 1; SetTarget(u), UnsetTarget(u) and SwitchPosition(u) do not change h(u). Recall from Section 4.2 that ν(u) is only defined on the set of stable nodes. For simplicity, we also allow ν(u) to be defined when u is not stable; in that case ν(u) need not be equal to |{v | (u, v) ∈ E, State(u) = State(v)}|. Below we list the computations for updating ν(u) and Stable(u) after applying an update operation.

1. Suppose InsertEdge(u, v) is performed at stage s + 1. Then set

νs+1(u) =
  νs(u) + 1   if u ∈ Zs and States(u) = States(v),
  2           if u ∉ Zs, hs(u) > 0 and States(u) = States(v) = poss(u),
  νs(u)       otherwise;

Stables+1(u) =
  true        if u ∉ Zs, hs(u) > 0 and States(u) = States(v) = poss(u),
  Stables(u)  otherwise.

2. Suppose DeleteEdge(u, v) is performed at stage s + 1. Then set

νs+1(u) =
  νs(u) − 1   if u ∈ Zs and States(u) = States(v),
  νs(u)       otherwise;

Stables+1(u) =
  false       if u ∈ Zs and νs+1(u) < 2,
  Stables(u)  otherwise.

3. Suppose SetTarget(u) is performed at stage s + 1. Then set

νs+1(u) =
  0               if hs(u) = 0 or u ∈ V0,s ∩ W1,s,
  hs(u)           if u ∈ V1,s ∩ W0,s,
  νs(u)           if u ∈ Zs ∩ V0,s,
  hs(u) − νs(u)   if u ∈ Zs ∩ V1,s,
  1               if u ∉ Zs and u ∈ V0,s ∩ W0,s,
  hs(u) − 1       if u ∉ Zs and u ∈ V1,s ∩ W1,s;

Stables+1(u) = true.

4. Suppose UnsetTarget(u) is performed at stage s + 1. Then set

νs+1(u) =
  hs(u) − νs(u)   if States+1(u) ≠ States(u),
  νs(u)           otherwise;

Stables+1(u) =
  true    if νs+1(u) > 1 and poss+1(u) = States+1(Pu),
  false   otherwise.

5. Suppose SwitchPosition(u) is performed at stage s + 1. Then set

νs+1(u) =
  hs(u)           if States(u) ≠ poss(u),
  hs(u) − νs(u)   if u ∈ Zs, u ∉ Ts and hs(u) > νs(u),
  hs(u) − 1       if u ∉ Zs, States(u) = poss(u) and hs(u) ≥ 1,
  νs(u)           otherwise;

Stables+1(u) =
  true        if u ∉ Zs, States+1(u) = poss+1(u), νs+1(u) > 1 and hs(u) > 0,
  false       if u ∈ Zs, u ∉ Ts and νs+1(u) < 2,
  Stables(u)  otherwise.

The algorithm updates the variables ν(u) and Stable(u) using the computations above. If u turns into a stable node, the algorithm applies Split(Pu, u) to preserve the stableness property of Pu. This finishes the description of the algorithm at stage s + 1. The correctness of the algorithm is proved by Lemma 6 and the following lemma.

Lemma 7. Suppose an update operation is applied at stage s + 1. The following hold at stage s + 1.
1. If u is stable then νs+1(u) = |{v | (u, v) ∈ Es+1, States+1(u) = States+1(v)}|.
2. The node u is stable if and only if Stables+1(u) is true.
3. The node u has exactly hs+1(u) children.

Proof. The third statement is immediate from the description of the algorithm.
To prove the first statement, we write n(u) for the number |{v | (u, v) ∈ Es+1, States+1(u) = States+1(v)}| and show that u ∈ Zs+1 implies νs+1(u) = n(u). We prove the first two statements of the lemma for each type of update operation in turn.

1. Say InsertEdge(u, v) is applied at stage s + 1. Suppose u ∈ Zs+1. If u ∈ Zs, then by Lemma 5, u does not change state at stage s + 1. Therefore n(u) = νs(u) + 1 if States(u) = States(v), and n(u) = νs(u) otherwise. If u ∉ Zs, then u turns into a stable node at stage s + 1 only if States(u) = poss(u) and States(v) = States(u), and in this case νs+1(u) = 2. This proves the first statement of the lemma. If u ∈ Zs, then u remains a stable node at stage s + 1. Suppose u ∉ Zs. If hs(u) > 0 and States(u) = States(v) = poss(u), then u has a unique child w at stage s with States(w) = States(u). Thus u ∈ Zs+1. On the other hand, suppose u ∈ Zs+1. If States(u) ≠ States+1(u), then States(u) ≠ poss(u). This means that all children of u are in WStates(u),s and u ∉ Zs+1. Therefore it must be that u did not change state. By the definition of a stable node, hs(u) > 0 and States(u) = States(v) = poss(u). This proves the second statement.

2. Say DeleteEdge(u, v) is applied at stage s + 1. Suppose u ∈ Zs+1. If u ∈ Zs, then deleting the edge (u, v) does not cause u to change state. Therefore n(u) = νs(u) − 1 if States(u) = States(v), and n(u) = νs(u) otherwise. If u ∉ Zs, then u must change its state at stage s + 1 to become stable. This means that poss(u) ≠ States(u) and all children of u have state States(u) at stage s. By Lemma 5, this means that u is a leaf at stage s + 1 and States(u) = 0. But then u ∉ Zs+1, a contradiction. Therefore u ∈ Zs+1 implies u ∈ Zs and νs+1(u) = n(u). This proves the first statement. We proved above that u ∉ Zs implies u ∉ Zs+1. Suppose u ∈ Zs. Then u does not change its state at stage s + 1. Therefore u ∉ Zs+1 if and only if States(u) = States(v) and νs+1(u) < 2. Hence Stables+1(u) is true if and only if u ∈ Zs+1. This proves the second statement of the lemma.

3. Say SetTarget(u) is applied at stage s + 1. By definition u ∈ Zs+1, and thus the second statement is proved.
If hs(u) = 0 (u is a leaf), then n(u) = 0. Note that States+1(u) = 0. If u ∈ V0,s ∩ W1,s, all children of u have state 1 at stage s and thus again n(u) = 0. If u ∈ V1,s ∩ W0,s, then all children of u have state 0 at stage s and thus n(u) is the number hs(u) of children of u. Checking n(u) = νs+1(u) in the remaining cases is similar in spirit to the argument above and is straightforward.

4. Say UnsetTarget(u) is applied at stage s + 1. Suppose u ∈ Zs+1. Since u is no longer a target, it must be that States+1(u) = poss+1(u). If States(u) = States+1(u), then u does not change state at stage s + 1 and thus n(u) = νs(u). Otherwise u changes state and n(u) = hs(u) − νs(u). This proves the first statement. By the definition of a stable node, it is easy to see that u ∉ Zs+1 unless poss+1(u) = States+1(Pu) and νs+1(u) > 1. Hence Stables+1(u) is true if and only if u ∈ Zs+1.

5. Say SwitchPosition(u) is applied at stage s + 1. Suppose u ∈ Zs+1. If u is a target at stage s, then it is stable at stage s + 1 and n(u) = νs(u) = νs+1(u). If States(u) ≠ poss(u), then all children of u have state States(u) at stage s and States+1(u) = States(u). In this case n(u) = hs(u) = νs+1(u). Now suppose States(u) = poss(u) and u is not a target. Then, after applying the operation, u must change state in order to be stable at stage s + 1. This means that some children of u have state 1 − States(u) = States+1(u) at stage s. If u ∈ Zs, then the number of such children is hs(u) − νs(u), and if u ∉ Zs, then the number of such children is hs(u) − 1, because there is exactly one child of u with state States(u). In all cases n(u) = νs+1(u). This proves the first statement of the lemma. Suppose u ∉ Zs. Then either States(u) ≠ poss(u), or States(u) = poss(u) and at most one child of u has state States(u) at stage s. In the former case, u ∈ Zs+1 if and only if States+1(u) = poss+1(u) and hs(u) = νs+1(u) > 1, if and only if Stables+1(u) is true.
In the latter case, u ∈ Zs+1 if and only if hs(u) > 0, States+1(u) = poss+1(u) and νs+1(u) = hs(u) − 1 > 1, if and only if Stables+1(u) is true. Now suppose u ∈ Zs. Then u ∉ Zs+1 if and only if u is not a target and νs+1(u) = hs(u) − νs(u) < 2, if and only if Stables+1(u) is false. This shows that Stables+1(u) is true if and only if u ∈ Zs+1, and the second statement is proved. ⊓⊔

7 Amortized Complexity

Lastly, we analyze the amortized complexity of the algorithm. In the analysis, we count operations such as pointer manipulations and comparisons as low-level operations with constant time complexity, and splay tree operations as unit high-level operations. Recall from Theorem 2 that each splay tree operation has amortized time complexity O(log n), where n denotes the number of nodes in the underlying forest. We discuss the amortized time complexity of each operation below:

– Each Query(u) operation takes the parameter u and searches for the canonical element in Pu. This requires applying the Splay(Pu, u) operation. By Theorem 2, Query(u) has amortized time complexity O(log n).
– Each InsertNode(u, i, j) operation runs a fixed number of low-level operations and hence takes constant time. For the same reason, DeleteNode(u) also takes constant time under the assumption that u is a root. When u is not a root, DeleteNode(u) applies DeleteEdge(p(u), u) and hence has the same time complexity as the DeleteEdge operation.
– Each of the InsertEdge(u, v), DeleteEdge(u, v), SetTarget(u), UnsetTarget(u) and SwitchPosition(u) operations involves applying the ChangeState(u) algorithm at most once, together with a fixed number of splay tree operations and low-level operations. The ChangeState(u) algorithm applies the TraceUp operation at most twice, plus a fixed number of other splay tree or low-level operations. In turn, TraceUp(u) iteratively runs a while-loop, each iteration of which contains a fixed number of splay tree or low-level operations.

From the analysis above, the time it takes to perform each of InsertEdge(u, v), DeleteEdge(u, v), SetTarget(u), UnsetTarget(u) and SwitchPosition(u) is O(log n + t log n), where t is the number of iterations of the while-loop run by the TraceUp(u) algorithm. Recall from Section 5 that Ps(u) denotes the maximal homogeneous stable path that contains u at stage s.
For any u ∈ Vs , define Ts (u) as that the subgraph of Fs restricted to the nodes {v | Ps (u) ∩ Ps (v) 6= ∅}. It is clear that Ts (u) is a tree and v ∈ Ts (u) implies Pv ⊆ Ts (u). Therefore we let TsPath (u) be the subgraph of the partition forest FsPath restricted on the set {Pv | v ∈ Ts (u)}. Suppose one of the above update operation is applied at stage s + 1 and it runs ChangeState(u). Let w be the parent of the ≤Fs+1 -least node in Pu at stage s + 1. By the description of the ChangeState(u) algorithm, the number t of while-loop iterations ran at stage s + 1 is less or equal to the sum of the number of ancestors of Pu in the tree TsPath (u) and the number of ancestors of Pw in TsPath (w). In the worst case, before running TraceUp(u), each ancestor of Pu in TsPath (u) and of Pw in TsPath (w) contains exactly one node, and the TraceUp(u) operation will run exactly |Path[r, u]| while-loop iterations where r is the root of the tree Ts (w). This leads to O(n log n) time cost. On the other hand, we will argue below that over a sequence of operations, the total time used can be small. Lemma 8. The amortized number of while-loop iterations ran by TraceUp(u) is O(log n). We use a credit accounting scheme (See [11]) to analyze the amortized number of while-loop iterations ran by TraceUp(u). At each stage s, each element P in the partition forest is stored a certain number of credits cs (P ). Instead of indicating the number of while-loop iterations that have already occurred, these time credits store the numbers of iterations that we can “afford” in the future. Our plan is: – At each stage s, we introduce in total O(log n) new credits, which will be added to the credits of some P ∈ VsPath .

Path – To run a while-loop iteration, we first need to deduce one credit from some P ∈ Vs+1 ; this is called paying for the iteration. – The credits which are not paid at this stage are carried over to subsequent stages. They can be used to pay for the while-loop iterations in the future.
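The three bullet points above amount to a simple bookkeeping discipline, sketched here as a toy simulation (class and method names are ours, not from the paper):

```python
class CreditAccount:
    """Toy simulation of the credit scheme: O(log n) credits arrive per
    stage; each while-loop iteration spends one; unspent credits carry
    over, so total iterations never exceed total credits ever created."""

    def __init__(self):
        self.balance = 0     # credits currently stored in the forest
        self.created = 0     # total credits ever introduced
        self.iterations = 0  # total while-loop iterations paid for

    def new_stage(self, fresh: int) -> None:
        """Introduce `fresh` new credits at the start of a stage."""
        self.balance += fresh
        self.created += fresh

    def pay_iteration(self) -> None:
        """Pay one credit for a while-loop iteration."""
        assert self.balance > 0, "invariant: never spend credits we lack"
        self.balance -= 1
        self.iterations += 1
```

Since every iteration is paid for by a credit created at some stage, the amortized number of iterations per stage is bounded by the number of credits introduced per stage, i.e. O(log n).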

We want to make sure that at each stage, the total number of credits stored in the forest remains nonnegative. In this way, we can make sure that the amortized number of while-loop iterations performed at this stage is O(log n). We define Ts(Pu) as the subtree of TsPath(u) rooted at Pu and let δs(Pu) = ⌊log |Ts(Pu)|⌋. In the rest of the section, we describe a way to create and allocate credits at each stage s that preserves the following invariant:

(I) For all P ∈ VsPath, cs(P) ≥ δs(P) after performing operation αs.

To prove Lemma 8, we first describe a way to create and allocate credits for the TraceUp(u) operation alone (without state changes or any changes to the underlying graph F). We then describe how to take into account the state changes and the changes to F at stage s + 1.

7.1 Credit analysis for TraceUp(u)
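The quantity δs(P) in invariant (I) is just the floor of the base-2 logarithm of the subtree size. A small helper (our own, assuming integer subtree sizes) makes the accounting concrete:

```python
def delta(subtree_size: int) -> int:
    """delta(P) = floor(log2 |T(P)|): the minimum number of credits that
    invariant (I) requires block P to store.  bit_length gives the exact
    floor-log for positive integers, avoiding float rounding."""
    assert subtree_size >= 1
    return subtree_size.bit_length() - 1
```

Note that a block must gain exactly one credit each time its subtree doubles in size, so the credit requirement of any block is at most log n.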

In this subsection we assume that no state changes or changes to the underlying graph F take place. Our goal is to prove the following lemma.

Lemma 9. Suppose TraceUp(u) is applied. We can create O(log n) new credits to pay for the while-loop iterations and preserve the invariant (I).

Proof. Suppose (I) holds at stage s and TraceUp(u) is applied at stage s + 1. When u is not the ≤Fs-maximum element in Pu, the operation splits Pu into two paths L(u) and R(u), where L(u) becomes the new Pu. Let δs,0(P), Ts,0(P) and cs,0(P) denote respectively the updated values of δs(P), Ts(P) and cs(P). We move the cs(Pu) credits to the new Pu; in other words, we set cs,0(Pu) = cs(Pu). Note that R(u) is a new element in VPath that has no credits assigned to it. Therefore we create δs,0(R(u)) new credits and assign them to R(u) to preserve (I).

Let P0 >FsPath P1 >FsPath . . . >FsPath Pm be the sequence of ancestors of Pu in the tree TsPath(u). Note that Pu = P0. For 0 < j ≤ m, we use δs,j(P), Ts,j(P) and cs,j(P) to denote respectively the values of δs(P), Ts(P) and cs(P) in the updated partition forest after running j iterations of the while-loop. Suppose (I) holds after running j − 1 iterations of the while-loop, 0 < j ≤ m. During the jth iteration, let w = p(Min(Pu)). The algorithm splits Pj into two paths P′ = L(w) and P′′ = R(w). It then joins the current Pu with P′ to form the new Pu, which we denote by Pu′. Note that |Ts,j(Pu′)| = |Ts,j−1(Pj)|, and therefore we “move” the cs,j−1(Pj) credits on Pj to Pu′ by setting cs,j(Pu′) = cs,j−1(Pj). This satisfies the invariant (I) on the updated Pu′. Then we create 2(δs,0(Pj) − δs,0(Pj−1)) new credits, among which 1 credit is used to pay for this iteration. The remaining new credits, together with the cs,j−1(Pu) credits that were assigned to Pu at stage s, are assigned to P′′. In other words, we let

cs,j(P′′) = 2(δs,0(Pj) − δs,0(Pj−1)) + cs,j−1(Pu) − 1.

By (I), cs,j−1(Pu) ≥ δs,j−1(Pu). Thus

cs,j(P′′) ≥ 2(δs,0(Pj) − δs,0(Pj−1)) + δs,j−1(Pu) − 1 ≥ δs,j−1(Pu) − 1.

Therefore, if δs,j−1(Pu) > δs,j(P′′), then cs,j(P′′) ≥ δs,j(P′′), and (I) is satisfied for P′′. Hence we may assume

δs,j−1(Pu) ≤ δs,j(P′′).   (5)
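The credit transfer in the jth iteration can be sketched as toy bookkeeping (our own names; it mirrors the two assignments cs,j(Pu′) = cs,j−1(Pj) and cs,j(P′′) = 2(δs,0(Pj) − δs,0(Pj−1)) + cs,j−1(Pu) − 1):

```python
def iterate_credits(c_Pj: int, d_Pj: int, d_Pj_minus_1: int, c_Pu: int):
    """One while-loop iteration: Pj splits into P' and P'', and Pu
    absorbs P'.  2*(delta(Pj) - delta(Pj-1)) fresh credits are created;
    one pays for the iteration, and the rest, plus Pu's old credits,
    are stored on P''.  Returns (credits on new Pu, credits on P'')."""
    fresh = 2 * (d_Pj - d_Pj_minus_1)
    c_new_Pu = c_Pj                      # Pj's credits move to the merged block
    c_P_dprime = fresh + c_Pu - 1        # minus the 1 spent on this iteration
    return c_new_Pu, c_P_dprime
```

For example, with cs,j−1(Pj) = 5, δ-values 4 and 2, and cs,j−1(Pu) = 3, the merged block keeps 5 credits and P′′ receives 2·2 + 3 − 1 = 6.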

Suppose δs,0(Pj) = δs,0(Pj−1). Note that

|Ts,j−1(Pu)| = |Ts,0(Pj−1)|   (6)

and

|Ts,j(P′′)| ≤ |Ts,j−1(Pj)| = |Ts,0(Pj)|.   (7)

By (5) and (6) we have δs,j(P′′) ≥ δs,j−1(Pu) = δs,0(Pj−1). By (7) we have δs,j(P′′) ≤ δs,j−1(Pj) = δs,0(Pj). Therefore δs,0(Pj−1) = δs,j(P′′) = δs,0(Pj). Note also that |Ts,0(Pj)| ≥ |Ts,j(P′′)| + |Ts,0(Pj−1)| + 1. This implies

δs,0(Pj) ≥ ⌊log(|Ts,0(Pj−1)| + |Ts,j(P′′)| + 1)⌋ ≥ ⌊log(2|Ts,0(Pj−1)|)⌋ = 1 + ⌊log |Ts,0(Pj−1)|⌋ > δs,0(Pj−1),

which contradicts the assumption that δs,0(Pj) = δs,0(Pj−1). Hence

δs,0(Pj) > δs,0(Pj−1).   (8)

Therefore we have

cs,j(P′′) = 2(δs,0(Pj) − δs,0(Pj−1)) + cs,j−1(Pu) − 1
         ≥ δs,0(Pj) − δs,0(Pj−1) + cs,j−1(Pu)   (by (8))
         ≥ δs,0(Pj) − δs,0(Pj−1) + δs,j−1(Pu)   (by the inductive assumption)
         ≥ δs,0(Pj)   (by (6))
         ≥ δs,j(P′′).   (by (7))

Hence (I) is satisfied after j iterations of the while-loop.

Summing up, we create δs,0(R(u)) new credits before running the while-loop and 2(δs,0(Pj) − δs,0(Pj−1)) new credits for the jth iteration of the while-loop. Hence the total number of new credits created is

δs,0(R(u)) + Σ1≤j≤m 2(δs,0(Pj) − δs,0(Pj−1)) ≤ 2δs,0(Pm) − δs,0(Pu) ∈ O(log n). ⊓⊔
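Two arithmetic facts drive this proof: doubling a subtree size raises the floor-log by exactly one (the contradiction establishing (8)), and the per-iteration credits telescope along the ancestor path. A quick sanity check (toy code, our own names):

```python
def floor_log2(x: int) -> int:
    """Exact floor(log2 x) for positive integers."""
    return x.bit_length() - 1

# Doubling raises the floor-log by exactly one, as used in the
# contradiction establishing (8).
assert all(floor_log2(2 * a) == 1 + floor_log2(a) for a in range(1, 1000))

def total_new_credits(deltas, delta_R):
    """Fresh credits created by one TraceUp: delta(R(u)) up front plus
    2*(delta(Pj) - delta(Pj-1)) per iteration; the sum telescopes to
    delta(R(u)) + 2*delta(Pm) - 2*delta(P0)."""
    return delta_R + sum(2 * (deltas[j] - deltas[j - 1])
                         for j in range(1, len(deltas)))

# With delta(R(u)) <= delta(P0) and delta strictly increasing along the
# ancestor path (by (8)), the total is at most 2*delta(Pm) - delta(P0).
deltas = [1, 2, 4, 7]
assert total_new_credits(deltas, 1) <= 2 * deltas[-1] - deltas[0]
```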

7.2 Credit analysis for the update operations

Suppose the invariant (I) holds at stage s and an update operation is applied at stage s + 1 which also runs ChangeState(u). Let w be the parent of the ≤Fs+1-least node in Pu after running ChangeState(u).

Suppose w ∉ Zs. Then by Lemma 3, w turns into a stable node at stage s + 1. In this case the algorithm splits Pw into L(w) and R(w), where L(w) becomes the new Pw. Since w ∈ Zs+1, δs+1(Pw) ≤ δs(Pw) and (I) is preserved for Pw. The path R(w) is a newly created block in Vs+1Path which does not have any credits. Therefore we preserve (I) on R(w) by creating and assigning to R(w) δs+1(R(w)) new credits. Note that δs+1(R(w)) ∈ O(log n).

Suppose w ∈ Zs. Then the ChangeState(u) operation may result in w turning into a non-stable node. In this case, the ChangeState(u) operation runs TraceUp(w). By Lemma 9, we create O(log n) new credits to perform the TraceUp(w) operation. Afterwards, Pw has at least δ′s+1(Pw) credits, where δ′s+1(Pw) is the value of δs+1(Pw) assuming w is stable. Note that the updated Pw is the root in the tree Ts+1Path(w). Since w becomes non-stable, the tree Ts+1Path(w) is expanded by attaching all trees T ∈ {TsPath(v) | (v, w) ∈ Es+1, States+1(v) = States+1(w)} as subtrees of Pw. Therefore we preserve (I) on Pw by creating and assigning to Pw δs+1(Pw) new credits. Note that δs+1(Pw) ∈ O(log n). Since Pw is the root of Ts+1Path(w), (I) is preserved on Pv for all v
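The case split above can be summarized as toy bookkeeping (our own names; `traceup_budget` stands for the O(log n) credits of Lemma 9):

```python
def credits_for_change_state(w_was_stable: bool, delta_new_Rw: int,
                             delta_new_Pw: int, traceup_budget: int) -> int:
    """Fresh credits created when ChangeState(u) touches w, the parent
    of the least node of Pu:
    - w was non-stable (w not in Z_s): Pw is split, and only the new
      block R(w) needs delta(R(w)) fresh credits;
    - w was stable (w in Z_s) and becomes non-stable: TraceUp(w) is paid
      for by Lemma 9's O(log n) budget, and Pw (now a root with newly
      attached subtrees) receives delta(Pw) fresh credits.
    Either way the total is O(log n)."""
    if not w_was_stable:
        return delta_new_Rw
    return traceup_budget + delta_new_Pw
```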
