THE APPLICATION AND EVALUATION OF BANKER’S ALGORITHM FOR DEADLOCK FREE BUFFER SPACE ALLOCATION IN FLEXIBLE MANUFACTURING SYSTEMS
Mark Lawley,∗ Assistant Professor School of Industrial Engineering, Purdue University 250 Grissom Hall, West Lafayette, Indiana 47905 E-Mail
[email protected], Phone (765) 494-5415 FAX (765) 494-1299
Spyros Reveliotis, Assistant Professor School of Industrial and Systems Engineering Georgia Institute of Technology 765 Ferst Drive, Atlanta, GA 30332-0205
Placid Ferreira, Associate Professor Department of Mechanical and Industrial Engineering University of Illinois at Urbana Champaign 361 Computer and Systems Research Laboratory 1308 West Main Street, Urbana, IL 61801
Appearing in International Journal of Flexible Manufacturing Systems, 10(1): 73-100, 1998
∗ Corresponding Author
ABSTRACT Deadlock free operation is essential for operating highly automated manufacturing systems. The seminal deadlock avoidance procedure, Banker's Algorithm, was developed for computer operating systems, an environment where very little information regarding the future resource requirements of executing processes is known. Manufacturing researchers have tended to dismiss Banker’s as being too conservative in the manufacturing environment where future resource requirements are well defined by part routes. In this work, we investigate this issue by developing variants of Banker’s Algorithm that are applicable to buffer space allocation in flexible manufacturing. We show that these algorithms are not overly conservative, and that indeed, Banker’s approach can provide very good operational flexibility when properly applied to the manufacturing environment.
keywords: Banker’s Algorithm, deadlock, flexible manufacturing, manufacturing system control
Table 1. Selected notation.

b(x,n,p) : binomial distribution with parameters n and p
Ci : buffer capacity of resource i
E : set of all FMS state transitions
Eadmit : set of all state transitions between states admitted by a DAP
Ec : set of all state transitions of set E, each reversed in sense
fk(j) : jth machine of the route of part type k
G : state space digraph
Gadmit : subdigraph induced by a DAP
Gc : co-space of G
h : total ordering on parts that defines the sequence in which parts can be finished
h(x,N,n,k) : hypergeometric distribution with parameters N, n, and k
O : "big O" notation; indicates approximate number of steps to perform a computation
Oj : number of occupied units of buffer capacity at resource j
p : true proportion of safe state space admitted by a DAP, i.e., |Vadmit| / |Vsafe|
P : set of part types produced by FMS
Pk : part type k
Pkm : mth stage of part type k
R : set of FMS resources
RTk : route of part type k
S : FMS state vector
SE : empty state
S+ : set of part type stages represented by parts in system
s+ : subset of part type stages that can be ordered
V : set of all FMS states
Va : subset of sampled states admitted by a DAP
Vadmit : set of all states accepted by a DAP
Vm : set of Vm ordered states
Vo : set of all ordered states
VR : set of all states reachable from the empty state using transitions in set E
VRo : set of all ordered states reachable from the empty state along paths confined to Vo
Vsafe : set of all states from which the FMS can be emptied using transitions in set E
VUR : set of all states unreachable from the empty state using transitions in set E
Vunsafe : set of all states from which the FMS cannot be emptied using transitions in set E
Vsample : sample of FMS states
ξi : set of part stages processed by resource i
πkm : part of type k in its mth stage of processing
ρ : point estimator of p
1 INTRODUCTION

Deadlock free operation is an important operational requirement in flexible manufacturing systems (FMSs). In general, deadlock is the situation in which there exists a set of concurrent processes with each process in the set awaiting an event that can be caused only by another process in the set (Silberschatz and Peterson, 1991). It is a ubiquitous problem in discrete event systems that results from various aspects of system operation such as resource allocation and communications (Holt, 1972). In an FMS, deadlock is caused by imprudent allocation of buffer space, tooling, and material handling equipment (Cho, Kumaran, and Wysk, 1995). The FMS controller must, therefore, incorporate some strategy for handling deadlock; otherwise continuing system operation cannot be guaranteed.

The deadlock phenomenon has been extensively studied in computer operating systems (Haberman, 1969, Shoshani and Coffman, 1970, Coffman, Elphick, and Shoshani, 1971, and Holt, 1972). In these systems, executing processes compete for computing resources such as I/O channels, disk space, and memory. The operating system's objective is to allocate these resources so that all processes eventually acquire required resources and terminate successfully. Very little information regarding how processes request resources can be assumed, that is, when a process enters the system, the operating system knows very little about the ensemble of resource requests that the process will eventually make. Fortunately, it is valid to assume that resource requirements are bounded.

The seminal deadlock avoidance algorithm for these systems, Banker's Algorithm (Haberman, 1969), assumes that as each process enters the system, it declares the maximum number of each resource that it might ever require at one time. It further assumes that if a process is simultaneously allocated its stated maximum of each resource, then the process will terminate without additional requests and release all of its allocated resources. These resources
then become available for allocation to other processes. Banker’s Algorithm avoids deadlock by allowing an allocation if the processes can be ordered so that the maximal resource needs of the ith process, Pi, can be met by pooling available resources with those allocated to processes P1,P2,...,Pi. The order defines a sequence in which all processes in the system can be terminated successfully, i.e., the maximal needs of P1 can be met by pooling available resources with those already held by P1, the maximal needs of P2 can be met by pooling available resources with those held by P1 and P2, etc. As a result, Banker’s Algorithm is very conservative since it assumes that all processes simultaneously require their stated maximums. Clearly, processes might not require their stated maximums or might not again require resources already requested, allocated, and released. On the other hand, testing for an order on executing processes is of polynomial complexity (Shoshani and Coffman, 1970), an indispensable requirement for real-time control applications. In contrast to computer operating systems, a part in an FMS visits a predictable sequence of machines for processing, the part route. At each machine, it requires a certain set of resources (buffer space, cutting tools, etc.) to complete its processing before moving on to the next machine. The following taxonomy of resource allocation in manufacturing establishes the structure of part requests in the FMS: • Single Unit RAS (SU-RAS): At each step of its processing, a part requires a single unit of a single resource type. The SU-RAS applies when the only relevant resource is the buffering capacity of FMS equipment. • Single Type RAS (ST-RAS): At each step of its processing, a part requires several units of a single resource type. The ST-RAS applies when the only relevant resource is buffering capacity, and parts are batched into tightly coupled groups which have varying buffer capacity
requirements. • Conjunctive RAS (AND-RAS): At each step of its processing, a part requires several units of each resource type from a set of resource types. The AND-RAS applies when combining buffer capacity allocation, as described above, with tool and fixturing requirements. • Conjunctive/Disjunctive RAS (AND/OR-RAS): At each step of its processing, a part makes a set of AND type requests, the allocation of any one of which will suffice. The AND/OR RAS applies when buffer capacity and tooling are allocated in an environment that supports dynamic routing flexibility. Almost all manufacturing-related deadlock literature deals with the allocation of buffer capacity under SU-RAS classification of the taxonomy (see Lawley, Reveliotis, and Ferreira, 1997a, for a literature review), where buffer capacity is assumed to be the total number of physical locations at a machine and/or material handling device where a part can sit or be held. This includes space for part staging as well as machine processing locations. The following assumptions prevail: (1) The FMS is composed of a set of machines and an interconnecting material handling system that supports the production of a set of part types (throughout the remainder of the paper, machines and material handling devices will be referred to collectively as the system resources); (2) each resource possesses a finite amount of buffer capacity; and (3) at each stage of its processing route, a part requires a single unit of buffer capacity on the required resource. To visit a resource, a part first requests a single unit of that resource’s buffer capacity. It then waits at its currently held unit of buffer capacity until the system controller grants the request. The part then releases the unit and proceeds to the next resource. The system deadlocks when there is a set of parts such that each part in the set has requested and is awaiting buffer
capacity occupied by another part in the set. The system controller must allocate resource buffer capacity so that all parts in the system are able to visit required resources and terminate normally.

Because of the additional structure in FMS buffer capacity allocation as opposed to process requests in computer operating systems, manufacturing researchers have tended to dismiss earlier operating systems work, particularly Banker's Algorithm. For example, in their seminal paper, Banaszak and Krogh (1990) say "Dijkstra's Banker's Algorithm and its generalizations for deadlock avoidance are too restrictive when applied to an FMS ... Banker's Algorithm assumes nothing about the order in which resources will be requested and released." Wysk, Yang, and Joshi (1991) give a similar argument: "Although techniques have been developed to handle deadlocks in computer engineering applications, sufficient differences exist to prevent direct application to manufacturing systems." Leung and Sheen (1993), D'Souza and Khator (1994), Hsieh and Chang (1994), Fanti, Maione, Mascolo, and Turchiano (1996), and Reveliotis and Ferreira (1996) repeat the same contention.

Indeed, the complete knowledge of future requests assumed in manufacturing does render the optimal resource allocation policy computable: a part is allocated capacity at its next resource if and only if after the allocation, there exists a sequence in which all parts are able to visit required resources and terminate successfully. Unfortunately, computing the optimal policy is NP-complete even for the SU-RAS (Araki, Sugiyama, Kasami, and Okui, 1977). This carries two related implications: (1) the optimal policy cannot be used in real-time control since it might not return an answer in the required time-frame, and (2) tractable resource allocation mechanisms intended to avoid deadlock will generally be sub-optimal, that is, they will prevent some allocations that could be made. Given the requirement for computational tractability, it is not clear that Banker's Algorithm (using proper assumptions) is unduly conservative for FMS
buffer capacity allocation. Our objective is to present variants of Banker's Algorithm specialized for the FMS and report results on their operational flexibility.

In section 2, we review basic concepts of deadlock avoidance in flexible manufacturing. In section 3, we introduce the concepts of ordered and partially ordered allocation states and show that constraining system operation to these states leads to tractable deadlock avoidance. We then develop modified versions of Banker's Algorithm for detecting these states. In section 4, we introduce a partition on the safe allocation state space that allows us to move outside the region of allocation states admitted by Banker's Algorithm. The structure of this partition allows us to guarantee incremental increases in the number of admissible states through controlled increases in computational effort. In section 5, we evaluate the operational flexibility of these algorithms by estimating the coverage of the safe state space for a number of randomly generated FMSs. Furthermore, we discuss theoretical relationships between Banker's Algorithm and recent deadlock avoidance techniques developed for manufacturing. In section 6, we conclude by discussing some interesting directions for future research.

2 DEADLOCK AVOIDANCE IN FLEXIBLE MANUFACTURING SYSTEMS

In this section, we define the deadlock avoidance problem as it arises in the FMS context. We use the SU-RAS to model allocation of resource buffer capacity (where resources are assumed to be machines or material handling devices). We begin by developing sufficient formalism to describe relevant FMS features. In the following notation, we use the convention that bold symbols represent sets, possibly ordered, and plain symbols represent non-divisible entities. Further, we say that if ϒ is a set of objects, then |ϒ| is the number of objects in the set. Table 1 provides a summary of selected notation.
Let R represent the set of FMS resources, where Cj and Oj are the respective total and occupied units of buffer capacity at Rj∈R. As previously stated, Cj is the number of locations at Rj where a part can be physically located. This includes part staging as well as machine processing locations. Let P be the set of part types produced with each Pk∈P represented as an ordered tuple of processing stages. For example, part type k is represented as Pk = 〈Pk1, Pk2, ..., Pk|Pk|〉 where Pkm represents the mth processing stage of Pk. Let the route of Pk be the sequence of resources required to completely process one part of type Pk. We assume that the route is pre-specified in the process plan of the part type. Let RTk = 〈Rfk(1), Rfk(2), ..., Rfk(|Pk|)〉 represent the route of Pk, where the stage Pkm requires a single unit of buffer capacity at the resource Rfk(m), that is, fk(m) returns the resource number required for the mth processing step of Pk. Let ξj = {Pkm : fk(m)=j ∀k=1,...,|P| and m=1,...,|Pk|} represent the set of all part stages that require Rj. Finally, let RTkm = 〈Rfk(m+1), ..., Rfk(|Pk|)〉 be the remaining route of Pkm.

We use the symbol πkm to represent a currently executing part of type k in its mth stage of processing, i.e., we can think of πkm as an instantiation of Pkm. Note that there might be several πkm's in the system at any given time. Let |πkm| denote the number of currently processing πkm's. The buffer allocation state of an FMS describes the current allocation of resource buffer capacity to parts in the system and is defined as follows:

Definition 1. The buffer allocation state, S, of an FMS is a nonnegative integer vector of the form S = 〈|πkm|, k=1,...,|P|, m=1,...,|Pk|〉^T, where ∑_{Pkm∈ξj} |πkm| ≤ Cj, ∀Rj∈R. The dimension of S is the cumulative route length, CRL = ∑_{k=1}^{|P|} |RTk|.
The state vector describes the current allocation of buffer capacity along with the current stage of processing for all parts in the FMS. We assume that the FMS changes state in one of three ways: (1) a new part is loaded into the system, (2) some part already in the system is advanced one step in its route, or (3) a finished part leaves the system. We let S+ = {Pkm : |πkm| ≥ 1 in S} be the set of stages represented by parts processing in the system in state S. Define p: {1,2,...,|S+|} → S+ to be an indexing function on S+, that is, p(i)=Pkm is the ith member of S+. Since p is one to one, p⁻¹(Pkm) returns the index of Pkm∈S+.

The FMS state space is the set of all possible allocation states augmented with state transition information. This space is conveniently represented by a directed graph, G=(V,E), wherein vertices represent states and directed edges represent state transitions. A directed edge 〈Su, Sv〉 is present if and only if the single step advancement of one part in Su results in Sv. To clarify the state space concept, consider the small FMS of Figure 1, which consists of two part types, P1 and P2, with RT1=〈R1,R2〉 and RT2=〈R2,R1〉. (Boxes and circles represent resources and resource capacity, respectively.) The digraph denotes all possible buffer allocation states for this system as well as all possible state transitions that result by loading, advancing, and finishing parts. We distinguish the empty state, SE, since it is the initial and, hopefully, the final state of any successful production run.

In general, V can be partitioned based on reachability from the empty state (where a state is reachable from SE if and only if there exists a sequence of state transitions leading from SE to that state) and safeness (where a state is safe if and only if there exists a sequence of state transitions leading back to SE). These partitions are defined inductively in Definition 2 and summarized in Figure 2.

Definition 2. Let VR and Vsafe represent the set of reachable and safe states respectively. Then (1) SE∈VR ∩ Vsafe, (2) Su∈VR ∧ 〈Su,Sv〉∈E ⇔ Sv∈VR, (3) Sv∈Vsafe ∧ 〈Su,Sv〉∈E ⇔ Su∈Vsafe, (4) Su∉VR ⇔ Su∈VUR, and (5) Su∉Vsafe ⇔ Su∈Vunsafe.
In Figure 1, for example, all states are safe except S4 (a reachable deadlock). All states are reachable except S8 (an unreachable safe state). Clearly, for normal system operation, we are interested in states that are both reachable and safe. In graph theoretic terms, these states constitute a strongly connected region of the state space digraph, i.e., the region Vsafe∩VR. Note that this is the reachable admissible region of the optimal policy (as given in the introduction). As previously noted, however, this policy typically requires super-polynomial computation and is, therefore, not suitable for real-time control. Thus, the objective of any practical deadlock avoidance policy (DAP) is to constrain FMS operation to a strongly connected subregion of Vsafe∩VR that contains SE, and to do so in a computationally efficient manner (see Lawley et al., 1997a, and Reveliotis and Ferreira, 1996). We characterize this class of DAPs as correct and scalable. In section 3, we discuss such a strongly connected subregion of Vsafe∩VR that can be identified by a polynomial algorithm.

3 APPLYING BANKER'S ALGORITHM TO THE FMS

In this section we introduce the set of ordered allocation states along with a Banker's Algorithm that detects them in polynomial time. We then relax the requirement for total orders and develop the Partial Banker's Algorithm, a more permissive and computationally efficient approach.

3.1 Ordered States and Banker's Algorithm

Definition 3. A state Su∈V is an ordered state if Su=SE or if there exists a bijection, h: Su+ → {1,2,...,|Su+|}, such that for each Pkm∈Su+, the following holds: ∀Rj∈RTkm, Cj - Oj + |Hkm∩ξj| > 0, where Hkm = {Ppq∈Su+ : h(Ppq) ≤ h(Pkm)}. Vo⊂V is the set of ordered states.

The definition implies that a state is ordered if there exists an ordering of parts in the system such that the remaining buffer space needs of each part can be satisfied from available buffer space
plus that held by parts no higher in the order. Thus, the parts in the system can be finished in the order given by the bijection, h. For example, if h(Pkm)=1, then all πkm's in the system can finish using currently available buffer space plus that already held by the πkm's. Further, if h(Ppq)=2, then all πpq's can finish using currently available buffer space plus that held by the πkm's plus that held by the πpq's, and so forth.

To better explain the definition, suppose πkm occupies buffer capacity at resource Rfk(m) in state Su. The remaining route of πkm is RTkm = 〈Rfk(m+1), ..., Rfk(|Pk|)〉. If Cj - Oj > 0 ∀Rj∈RTkm, then πkm can finish using currently available buffer capacity. Suppose ∃Rj∈RTkm such that Cj - Oj = 0. We need to examine two cases. First, if Rj = Rfk(m), then πkm must revisit its current resource to finish. Clearly, when finishing, πkm can re-use its currently occupied unit of buffer capacity. We cover this case by including Pkm in the definition of Hkm, i.e., Pkm∈ξj and Pkm∈Hkm ⇒ |Hkm∩ξj| > 0. Secondly, suppose ∃Rj∈RTkm, Rj≠Rfk(m), such that Cj - Oj = 0. Then πkm cannot finish unless some part at Rj, say πpq, is finished first, i.e., unless ∃Ppq∈ξj such that Ppq∈Hkm ⇒ |Hkm∩ξj| > 0.

Note that the definition does not specify an order on the actual parts in the system. Rather, it requires the set of part type stages that have corresponding non-zero entries in the state vector to be ordered. We do this because multiple instances of Pkm have identical remaining resource needs, and thus, if we can finish one πkm, we can clearly finish all πkm's. As another example of an ordered state, consider S6 of Figure 1. From this state, the system can be emptied by first finishing π12 and then finishing π11. Hence, S6 is ordered. Clearly, S4 is not. The following lemma guarantees that if a state is ordered, then there exists a sequence of ordered states leading to the empty state. This condition essentially guarantees the correctness of a DAP that admits the ordered states.
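Banker's test for an ordered state can be written down directly in terms of the NEED, ALLOCATION, and AVAILABLE structures that Figure 4 uses to trace Algorithm 1. The following Python sketch conveys the idea; it follows the spirit of Algorithm 1 rather than reproducing its listing (which uses a sorted search to obtain its stated complexity), and the 0/1 encoding of NEED over remaining routes is an assumption for illustration.

def is_ordered(need, allocation, available):
    # need[i][j]: units of buffer capacity that stage p(i) still requires at resource j
    #             (1 if Rj appears in the remaining route, counting the current resource for revisits)
    # allocation[i][j]: units currently held by the parts of stage p(i) at resource j
    # available[j]: free units of buffer capacity at resource j
    avail = list(available)
    unfinished = set(range(len(need)))
    while unfinished:
        accessible = None
        for i in unfinished:
            # Stage i can be placed next in the order if its remaining needs fit in the
            # available space pooled with the space its own parts already hold.
            if all(need[i][j] <= avail[j] + allocation[i][j] for j in range(len(avail))):
                accessible = i
                break
        if accessible is None:
            return False                            # no accessible stage: the state is not ordered
        for j in range(len(avail)):
            avail[j] += allocation[accessible][j]   # its parts finish and release everything they hold
        unfinished.remove(accessible)
    return True                                     # a total order exists: the state is ordered

For the state traced in Figure 4, is_ordered([[0, 1, 1, 1], [1, 1, 1, 0]], [[1, 0, 0, 0], [0, 0, 0, 2]], [1, 1, 2, 0]) returns True, mirroring the order h(P21)=1, h(P11)=2 found there.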
Lemma 4. ∀Su∈Vo, ∃{S0(=Su), S1, S2, ..., Sk, SE} ⊂ Vo such that 〈Si, Si+1〉∈Eo, where Eo = {〈Sx, Sy〉: Sx, Sy∈Vo ∧ 〈Sx, Sy〉∈E}. In words, for every ordered state, there is a sequence of ordered states that leads to the empty state.

Proof. Suppose that h: Su+ → {1,2,...,|Su+|} defines an order in which parts in the system can be completed, and let Pkm be the first stage in the order. Further, suppose that we advance a part πkm one step in its route and that the resulting state is Sv. To construct an order for Sv, we enumerate the following cases: (1) Sv+ = Su+, (2) Sv+ = {Pk,m+1}∪Su+, (3) Sv+ = Su+\{Pkm}, and (4) Sv+ = {Pk,m+1}∪Su+\{Pkm}. For (1), construct a new order, f, as follows: let f(Pk,m+1)=1, and ∀Ppq∈Sv+\{Pk,m+1}, if h(Ppq)

0, let su+ be the maximal partially ordered subset of Su and assume that Sv+ = Su+ \ su+ ≠ ∅. (Sv is the state resulting when the maximal partially ordered subset is removed from Su.) Then, Su∈Vm if the following two conditions hold: (1) Su∉Vn for n

1. Then, by Definition 8, Sv∈Vm-c+1, again a contradiction. Finally, suppose that ∃n∈{0,1,...,k-1} such that Sv+n∈Vm-1. This contradicts the assumption that Sw was the first Vm-1 state encountered. By elimination, Sv+n∈Vm ∀n∈{0,1,...,k-1}. Thus, if the system is in a Vm ordered state, the process of removing the maximal partially ordered subset causes no change in safety level. Further, if the state is not partially ordered, then the process of advancing a part to yield a Vm-1 ordered state results in a single change in safety level, from Vm to Vm-1.

Lemma 11 establishes an upper bound on the safety level of immediate predecessors of Vm ordered states.

Lemma 11. Suppose that Sv∈Vm, 〈Su, Sv〉∈E, and Su∈Vn. Then n ≤ m+1.

Proof. 〈Su, Sv〉∈E implies the existence of πpq such that by advancing πpq one step, the system enters state Sv∈Vm. If n > m, then by Definition 8, Su∈Vm+1.

Lemma 11 states that all predecessors of Vm ordered states must fall in safety levels 0 to m+1. In particular, if Su is in a higher level than Sv, then Su is in the next higher level. Thus, it is impossible to move from the mth safety level to the (m-k)th safety level (k > 0) without passing through states in every intervening level, that is, Vm≠∅ ⇒ Vm-k≠∅ ∀k∈{0,1,...,m}. Notice, however, that a transition from a lower safety level to a higher safety level might span several
levels. For example, consider the Vo state of Figure 7, where a single transition, advance π31, results in a V2 state. After advancing π31, we must advance π21 one step and π32 one step to return to Vo. In summary, it is possible to skip safety levels when moving up in state complexity, but it is impossible to skip safety levels when moving down in state complexity (see Figure 8).

Using Lemma 11, we now establish that Definition 8 partitions the entire safe state space by proving that every safe state is in some safety level.

Theorem 12. If Su∈Vsafe, then there exists a nonnegative integer m such that Su∈Vm.

Proof. Suppose that Su∈Vsafe. Clearly, there exists Sv∈Vn (for some n≥0) such that Sv is reachable from Su. Consider the sequence of states leading from Su to Sv, i.e., 〈Su, Su+1, Su+2, ..., Su+k〉 where Su+k = Sv. Note that by Lemma 11:

Su+k-1 ∈ Vp, where 0 ≤ p ≤ n+1;
Su+k-2 ∈ Vq, where 0 ≤ q ≤ n+2;
Su+k-3 ∈ Vr, where 0 ≤ r ≤ n+3;
...
Su+k-k = Su ∈ Vm, where 0 ≤ m ≤ n+k.

We have now established the fact that the set of safe states is partitioned by the definition of Vm ordered states. Furthermore, if we construct algorithms that admit Vo ∪ V1 ∪ ... ∪ Vm, we guarantee that (1) each increment in m (up to some maximum) will increase our coverage of the safe state space, and (2) the reachable admissible region for such an algorithm will be correct in the strongly connected sense. For practical purposes, we are interested in those classes that are significant in terms of admitted states and for which computation does not become excessive (the lower order classes). In this paper, we study the set of V1 ordered states, i.e., states for which (after removing the maximal partially ordered subset) there exists a part, πkm, such that by advancing πkm a finite
number of steps, an ordered state can be reached. Note that the states of Figures 3 and 6 are V1 ordered.

Algorithm 3 detects V1 ordered states. It assumes that the state has already been checked and rejected by either Algorithm 1 or Algorithm 2. These algorithms manipulate the data structures so that any partially ordered subset of parts is advanced to completion. Therefore, the state represented by the data structures presented to Algorithm 3 is not partially ordered. Algorithm 3 selects a part and attempts to advance it one step. If the part is successfully advanced, then Algorithm 1 is called on the resulting state, i.e., Algorithm 3 checks to see whether the resulting state is ordered. If so, Algorithm 3 returns ADMIT. If not, it attempts to advance the part again. This process continues until an ordered state is encountered or the part can advance no further. In the latter case, the algorithm resets the data structures to the original state and selects the next part. Theorem 13 establishes the complexity of Algorithm 3.

Theorem 13. Algorithm 3 belongs to the class of polynomial algorithms.1

Proof. Suppose Γ is the set of stages in Sv+ corresponding to parts in the system that can move forward at least one step. Let kmax be the maximum number of steps that any such part can move. In the worst case, Γ = Sv+, and we attempt to advance πkm by kmax steps for each Pkm∈Sv+, calling Algorithm 1 at each step. An upper bound on the number of calls to Algorithm 1 is |Sv+|×|RTmax|, where kmax ≤ |RTmax|, the maximum route length. As previously noted, the complexity of Algorithm 1 is O(|R|×|Sv+|×log|Sv+|). Therefore, the complexity of Algorithm 3 is O(|RTmax|×|R|×|Sv+|²×log|Sv+|).
1. For a fixed level m, detecting Vo ∪ V1 ∪ ... ∪ Vm is polynomial; however, computation increases exponentially in m.
Algorithm 3.
Query: Is Sv∈V1, i.e., is the state in question V1 ordered?
Input: ALLOCATION, NEED, AVAILABLE representations of Sv.
Output: ADMIT, REJECT
Begin
  For i = 1 ... |Sv+|
    // Copy data structures. //
    temp1 = ALLOCATION; temp2 = NEED; temp3 = AVAILABLE
    Loop
      // Update data structures by attempting to advance p(i). //
      Result = move_part(temp1, temp2, temp3, i)
      If Result == BLOCKED    // Part is blocked and cannot move. //
        Break From Loop
      Else If Result == MOVED    // Part was successfully advanced. //
        // Is resulting state ordered? //
        Result = Algorithm 1(temp1, temp2, temp3)
        If Result == ADMIT
          Return ADMIT    // Sv∈V1. //
    End Loop
  End For
  Return REJECT    // Sv∉V1. //
End

The complete logic of a look-ahead policy accepting states in Vo∪V1 is implemented in Algorithm 4. This algorithm assumes that a dispatching or order release policy has selected a part to advance, and that the state vector has been updated (although the actual part has not been allowed to move). Algorithm 4 determines whether deadlock-free operation can be guaranteed if this proposed move is executed. It starts by calling Algorithm 2 to determine whether the resulting state is partially ordered with respect to the selected part. If so, Algorithm 4 admits the state. If not, it calls Algorithm 3 to see if the resulting state is V1 ordered. If so, it admits the state, otherwise it rejects the proposed move. Its complexity is similar to that of Algorithm 3.
Algorithm 4.
Query: Is Sv∈Vo∪V1? (Assume πkm is the last part to advance, and Sv is the resulting state.)
Input: Sv, πkm
Output: ADMIT, REJECT
Begin
  // Set up ALLOCATION, NEED, and AVAILABLE for Sv. //
  configure_data_structures(Sv)
  // Check whether Sv is partially ordered with respect to Pkm. //
  Result = Algorithm 2(Pkm, ALLOCATION, NEED, AVAILABLE)
  If Result == ADMIT
    Return ADMIT    // State is partially ordered wrt Pkm. //
  // Check whether Sv is V1 ordered. //
  Result = Algorithm 3(ALLOCATION, NEED, AVAILABLE)
  If Result == ADMIT
    Return ADMIT    // State is V1 ordered. //
  Else
    Return REJECT    // Sv∉Vo∪V1. Do not allow proposed move. //
End
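For illustration, the look-ahead logic of Algorithms 3 and 4 can be phrased in Python on top of the is_ordered sketch of section 3.1. The move_part and is_partially_ordered callables stand in for the single-step advance routine and for Algorithm 2, whose listings are not reproduced in this section, so their interfaces here are assumptions; the MOVED/BLOCKED convention mirrors the pseudocode above.

from copy import deepcopy

def is_v1_ordered(need, allocation, available, move_part):
    # Look-ahead in the spirit of Algorithm 3 (a sketch, not the paper's listing).
    # move_part(need, alloc, avail, i) is assumed to try to advance one part of stage i
    # a single step, mutate the three structures accordingly, and return "MOVED" or "BLOCKED".
    for i in range(len(need)):
        n, a, v = deepcopy(need), deepcopy(allocation), list(available)   # work on copies
        while True:
            if move_part(n, a, v, i) == "BLOCKED":
                break                        # this part can advance no further; try the next one
            if is_ordered(n, a, v):          # did the single-step advance reach an ordered state?
                return True                  # Sv is V1 ordered: admit
    return False                             # no single part leads back to an ordered state: reject

def admit_move(need, allocation, available, advanced_stage, is_partially_ordered, move_part):
    # Admission logic of Algorithm 4: accept the proposed move if the resulting state is
    # partially ordered with respect to the advanced stage (Algorithm 2), or, failing that,
    # if it is V1 ordered.
    if is_partially_ordered(advanced_stage, need, allocation, available):
        return True
    return is_v1_ordered(need, allocation, available, move_part)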
In section 5, we turn our attention to evaluating the safe state space coverage of these algorithms. We also investigate the computational savings of Algorithm 2 over Algorithm 1.

5 EVALUATING BANKER'S ALGORITHM

In the previous sections, we developed variants of Banker's Algorithm for deadlock avoidance in FMS buffer space allocation. Algorithms 1, 2, and 3 use the information contained in remaining routes to determine state admissibility. In this section, we seek to estimate the operational flexibility of our algorithms. We begin by using inferential statistics to draw comparisons with the optimal policy, the deadlock avoidance policy that admits all safe states. We then develop analytical relationships between Banker's Algorithm and other polynomial deadlock avoidance methods developed for manufacturing. As a secondary interest, we study the computational savings afforded by seeking partial rather than total orders (Algorithm 1 versus Algorithm 2). We begin by developing a measure of operational flexibility along with a statistical sampling and estimation scheme.
5.1 Estimating Operational Flexibility

The algorithms developed in sections 3 and 4 can be thought of as "cuts" on the FMS state space, where determining whether or not a state is inside the cut is a polynomial operation (see Figure 9). Polynomial execution together with the NP-completeness of state safety implies suboptimality, that is, polynomial DAPs will tend to reject some safe states. A DAP that rejects too many safe states will restrict flexibility and will not be usable. For example, a policy that allows only one part in the system at a time is both correct and computationally tractable. However, it is clearly too restrictive to be given serious consideration.

Let Vadmit represent the set of states admitted by a DAP. Then, from Figure 9, it is clear that the ratio, p = |Vadmit| / |Vsafe|, provides one measure of DAP operational flexibility. This is the ratio of DAP admissible space to the admissible space of the optimal policy (the safe state space). Unfortunately, closed-form expressions for either of these subspaces are not known. We, therefore, develop an empirical approach for estimating this ratio (Walpole and Myers, 1993, provides statistical details of the following development).

To get a point estimate of p, we collect a sample of safe states, Vsample⊂Vsafe, and let Va = Vsample∩Vadmit. If Va contains no redundant states, then x = |Va| is a hypergeometric random variable where the total number of items in the population is N = |Vsafe|; the number of admissible items in the population is k = |Vadmit|; and the number of items sampled from the population is n = |Vsample|. It is a standard result that the ratio ρ = x/n is an unbiased estimator of p, i.e., E(ρ) = E(x/n) = E(x)/n = (nk/N)/n = k/N = p (where E denotes expected value).

To get an interval estimate of p, we invoke the binomial approximation to the hypergeometric, which holds when N ≈ N-n. We argue that N ≈ N-n as follows: it is easily shown that the number of states in the FMS state space, |V|, is given by the product |V| = ∏_{i=1}^{|R|} C(Ci + |ξi|; Ci), where C(n;k) denotes the binomial coefficient "n choose k". This is never smaller than 2^|R| and tends to be much larger. Thus, |V| is exponential in |R|. If we assume that |Vsafe| ≥ |V|/2^j, then |Vsafe| is also exponential in |R| and can be quite large. (In the system of Table 2, if we assume, for example, that at least 1/32 of the total states are safe, then (1.06×10^8)/32 ≈ (1.06×10^8)/32 - 90,833, and the assumptions are valid.) When approximating h(x,N,n,k) with b(x,n,p), note that p = k/N, E(x) = np, and Var(x) = np(1-p), where Var is variance. Now, consider the variance of the unbiased estimator, ρ: Var(ρ) = Var(x/n) = Var(x)/n² = np(1-p)/n² = p(1-p)/n. If n is large enough (n > 30), then by the Central Limit Theorem, the statistic Z = (ρ - E(ρ))/√Var(ρ) follows the standard normal distribution. We use this fact to get the following (1-α)100% confidence interval on p: ρ - Zα/2 √(ρ(1-ρ)/n) < p < ρ + Zα/2 √(ρ(1-ρ)/n) (where α is the probability of a type I error). By selecting n ≥ Z²α/2 / (4e²) (where e is a specified error), we can be at least (1-α)100% confident that the closed interval [ρ-e, ρ+e] covers p. For example, to be 95% confident that the interval [ρ-0.01, ρ+0.01] covers p, the sample size should exceed 9604.

We now present a method of generating samples of safe states for a given FMS configuration. Random state generation is not feasible since the general problem of deciding whether a state is safe or unsafe is NP-complete. We require the following definition:

Definition 14. Let G=(V,E) be the state space digraph for a given FMS. Then the co-space is defined as Gc=(V, Ec), where Ec = {〈Sv, Su〉 : 〈Su, Sv〉∈E}.

In words, the co-space, Gc, reverses the sense of directed edges in G. For example, the co-space of the system in Figure 1 would simply reverse the direction of all edges in the given state space. Theorem 15 provides the basis of a safe state space sampling procedure (see Lawley et al., 1997a, for proof).
Theorem 15. Let G=(V,E) and Gc=(V,Ec) represent the state space and co-space of an FMS, respectively. If VRc is the set of states reachable in the co-system, then VRc = Vsafe.

Consider again Figure 1 and suppose that, beginning at SE, we traverse the following edges of the co-space: 〈SE, S3〉〈S3, S1〉〈S1, S6〉. States S3, S1, and S6 are reachable in Gc and, therefore, by Theorem 15, safe in G. Note that in moving from SE to S3 to S1 to S6, we push parts in the reverse direction of their routes, i.e., Gc is the state space generated by reversing the routes of part types in G. Therefore, to collect safe states, we reverse the routes of all part types, simulate the system running backwards, and collect a sample of the states encountered. To help avoid bias, we randomize state transitions during simulation. (Although this does not guarantee unbiased sampling, there is currently no other tractable method of collecting samples of safe states.) After state collection is completed, the states must be sorted and redundant states removed. Since we do not know how many unique states have been collected until this step is completed, several simulation runs might be required to collect a sample of the desired size.

5.2 Operational Flexibility Results

To investigate the operational flexibility of the algorithms developed in the previous sections, we randomly generated eight FMS configurations and collected a sample of safe states from each. (The generation scheme randomly selected the number of resources, the number of part types, and the route composition for each part type. We held machine capacity at one unit per machine to evaluate the algorithms under the most severe restrictions.) We then used the safe samples to compute an interval estimate of p = |Vadmit| / |Vsafe| for each algorithm on each configuration. The results for one randomly generated system are given in Table 2. This system consists of nine machines and produces eight part types. The state space is composed of 1.06×10^8 states, and we collected 90,833 safe states using the sampling technique developed in the previous section.
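The point and interval estimates reported below follow the computation of section 5.1. A short Python sketch of that computation (the function names are illustrative) is:

from math import sqrt

def flexibility_estimate(accepted, sample_size, z=2.576):
    # Point estimate and confidence interval for p = |Vadmit| / |Vsafe|.
    # accepted = |Va|, the number of sampled safe states admitted by the DAP;
    # z = 2.576 gives a 99% interval, z = 1.96 a 95% interval.
    rho = accepted / sample_size
    half_width = z * sqrt(rho * (1.0 - rho) / sample_size)
    return rho, (rho - half_width, rho + half_width)

def required_sample_size(error, z=1.96):
    # Smallest n such that [rho - error, rho + error] covers p with the stated confidence;
    # with z = 1.96 and error = 0.01 this exceeds 9604, as noted in section 5.1.
    return int(z * z / (4.0 * error * error)) + 1

For Algorithm 1 on the system of Table 2, flexibility_estimate(50700, 90833) returns roughly 0.558 with a 99% interval of about (0.554, 0.562), matching the values reported there.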
Experimental results are given at the bottom of Table 2. Algorithm 1’s acceptance of 50,700 of the 90,833 states in the safe sample indicates that 55.8% of the states in the sample are ordered.
Table 2. Operational flexibility of DAPs for a randomly generated FMS configuration.

System Number: 4
Number of Resources: 9
Number of Part Types: 8
State Space Size: 1.06E+08
Safe Sample: 90,833

Route 1: 〈6 2 5 4 1 9 3 4 2 6 7 3〉    Route 5: 〈3 2 3 5 6 9 5 1〉
Route 2: 〈7 2 8 5〉                    Route 6: 〈8 9 6 5 4 1 4〉
Route 3: 〈1 8 3 9 4 1 9 8 6 7 6 3〉    Route 7: 〈3 7 5 7 5 3 2 9 2〉
Route 4: 〈3 5 6 2 5 3 8 2〉            Route 8: 〈7 8 3〉

               States Accepted   Operational Flexibility   99% CI
Algorithm 1    50,700            .558                      [.554, .562]
Algorithm 2    63,858            .703                      [.699, .707]
Algorithm 4    81,816            .900                      [.898, .903]
RUN            13,935            .153                      [.150, .156]
Furthermore, we are 99% confident that the interval [.554, .562] covers the true value of p for Algorithm 1 on this system. Algorithm 2’s acceptance of 63,858 of the 90,833 states in the safe sample indicates that 70.3% of the states in the sample are partially ordered with respect to the last part to move forward. Furthermore, 13,158 (20.6%) of these partially ordered states are not totally ordered, and we are 99% confident that the interval [.699, .707] covers the true value of p for Algorithm 2 on this system. Next, Algorithm 4’s acceptance of 81,816 of the 90,833 states in the safe sample indicates that 90% of the states in the sample are in Vo∪V1, i.e., 31,116 states (34.3%) are V1 ordered. Furthermore, we are 99% confident that the interval [.898, .903] covers the true value of p for Algorithm 4 on this system. Finally, for comparative purposes, we estimate the operational flexibility of a DAP developed specifically for manufacturing systems, the resource upstream neighborhood policy (RUN, see
Lawley, Reveliotis, and Ferreira, 1997b). The intuition behind RUN is that high capacity machines can serve as buffers for their parts while parts in other portions of the system are pushed to completion. Parts temporarily stored on high capacity machines can then continue along their routes and finish without interference. Although it is easy to show that RUN permits states not allowed by Banker's, Banker's approach provides much better operational flexibility for the systems examined here. (In the example system of Table 2, RUN admits 15.3% of the safe sample.)

The sample ratios of all eight systems are summarized in Figures 10 and 11. Figure 10 plots FMS configuration vs. sample ratio for Algorithms 1, 2, 4, and RUN, while Figure 11 plots system load vs. sample ratio for Algorithms 1, 2, and 4. Note that Algorithms 2 and 4 are less sensitive to increasing system load than is Algorithm 1. This indicates that highly loaded states are less likely to be totally ordered and more likely to be either partially or V1 ordered (which is what one would expect). Finally, note that Algorithm 4 accepts 34% more states on average than Algorithm 1. In summary, we believe that Banker's approach can provide good operational flexibility when properly applied to FMS buffer space allocation, particularly when augmented with the V1 ordered states. In the next section, we discuss theoretical relationships between Banker's Algorithm and several DAPs developed specifically for manufacturing.

5.3 Theoretical Relationships Between Banker's Algorithm and DAPs for Manufacturing

The logic of Banker's Algorithm is intimately related to the successful operation of several DAPs developed for manufacturing. For example, when there are no unshared resources, the deadlock avoidance algorithm (DAA) presented by Banaszak and Krogh (1990) is subsumed by Algorithm 1. DAA partitions each part route into a set of zones, with each zone consisting of a sequence of
shared resources (the shared subzone) followed by a sequence of unshared resources (the unshared subzone; a resource is unshared if it is used by only one part type). DAA constrains the number of parts in a zone to be no greater than the capacity of the unshared subzone. Furthermore, a part can proceed in the shared subzone only if capacity is available on all remaining resources in the shared subzone. Now, if there are no unshared resources, the part route is one long shared subzone, and a part can proceed only if all resources in the remainder of its route have available capacity. In other words, DAA will not allow a part to move forward unless the part can be placed first in some order generated by Algorithm 1. Any state permitted by DAA will, therefore, be permitted by Algorithm 1, but clearly the converse is not true. If unshared subzones exist, then DAA imposes a two-level ordering on parts. First, it guarantees an order in which all parts in shared resources can be advanced into their associated unshared subzones. The resulting state is ordered and, therefore, accepted by Algorithm 1.

The resource order policy (RO, see Lawley et al., 1997a) is based on the notion that parts flowing in opposite directions through the same set of machines must be able to pass. The resources are ordered, and parts are categorized according to how they flow with respect to that order. A part requiring at least one more move up (down) the order is 'right' ('left'). The policy rejects precisely those states that exhibit a pair of resources such that the one low in the order is filled with right parts, and the one high in the order is filled with left parts. It is easily shown that RO is subsumed by Algorithm 1 and that the converse is not true.

The RUN policy, discussed in the previous section, is a resource reservation scheme that guarantees an order in which parts can be advanced into the "peaks" of their routes. (The peaks of a route are those resources in the route that are followed by resources of smaller capacity.) Once parts are accumulated in route peaks, RUN guarantees an order in which parts can advance from
peak to peak (and eventually out of the system). Thus, RUN, like DAA, guarantees a two-level ordering on parts in the system. If every route is nondecreasing in capacity, then RUN permits only non-conflicting states (non-conflicting states are ordered states in which every resource in the remaining route of every part has available capacity) since parts reserve capacity on all resources in their routes when entering the system. In this case, Algorithm 1 subsumes RUN.

Finally, the avoidance method of Hsieh and Chang (1994) uses scheduling heuristics to constrain the search for a reachable state exhibiting sufficient available resources for every part in the system to finish. In other words, this method admits a state if and only if a constrained search locates a reachable non-conflicting state (admissible by Algorithm 1). Like Banker's, the search is sufficiently constrained to remain polynomial. No statement can be made about relative operational flexibilities since the approach depends on the heuristics used in the search.

In summary, ordering parts is a powerful concept that underlies the correctness of many DAPs developed for manufacturing. In the next section, we extend our investigation to the computational implications of seeking partial rather than total orders.

5.4 Computational Savings of Partial vs. Total Orders

In this section, we compare the performance of Algorithms 1 and 2 on acceptable states (the algorithms perform similarly for rejected states). Table 3 provides a summary of computational results for the example system of Table 2 (we assume that an iteration is a single pass through the part set when searching for an accessible part). Notice that Algorithm 1 requires an average of 10.581 iterations to accept, whereas Algorithm 2 requires an average of 4.499. This indicates that Algorithm 2 requires a good deal less searching than does Algorithm 1. Furthermore, average iterations to accept increases more rapidly with system load for Algorithm 1 than for Algorithm 2. Thus, not only is Algorithm 2 more likely to accept heavily loaded states (see section
5.2), it also requires significantly less computation on average. These results are summarized in Figures 12 and 13. Figure 12 plots system versus average number of iterations to accept, while Figure 13 plots system load (the number of parts in the system) versus average number of iterations to accept.

Table 3 also records the variance in iterations to accept for the example system of Table 2. Note that variance grows with increasing system load for both algorithms, although much more rapidly for Algorithm 2 than for Algorithm 1. We expect this since, for some situations, Algorithm 2 accepts very quickly, whereas for others, it requires as much computation as Algorithm 1. Variance results are summarized in Figures 14 and 15.

Table 3. Computational results for Algorithms 1 and 2.

            Average Iterations to Accept      Variance in Iterations to Accept
Load        Algorithm 1    Algorithm 2        Algorithm 1    Algorithm 2
1           0.873          0.873              0.113          0.113
2           2.517          1.793              0.679          1.221
3           4.620          2.611              2.369          4.327
4           7.155          3.354              5.592          10.015
5           10.475         4.434              10.133         19.525
6           14.227         5.251              15.982         32.007
7           18.083         6.057              21.760         47.299
8           22.084         6.907              24.957         65.734
9           25.615         9.310              18.113         97.086
avg:        10.581         4.439      pooled: 10.588         23.765

In summary, Algorithm 2 provides significant computational savings over Algorithm 1, with the inevitable increase in computational variance as one side effect.

6 CONCLUSION

In this paper, we applied Banker's Algorithm to FMS buffer space allocation and evaluated its effectiveness. Results indicate that modifications of Banker's Algorithm can provide good
operational flexibility, particularly when augmented with the V1 algorithm. Higher flexibility is possible by adding V2, V3, etc., although computational requirements grow quickly (exponentially in safety level). Furthermore, these modifications of Banker's Algorithm should easily extend to other classes of the taxonomy of section 1.

Theoretically, the concept of ordered allocation states establishes deep relationships between Banker's Algorithm, other FMS deadlock avoidance methods, and matroid theory. This is the subject of future research. For now, we note that for an ordered state, the language of all possible orders generated by Banker's Algorithm forms an antimatroid.

Finally, our motivation in seeking highly flexible DAPs is to provide maximal latitude to order release and dispatching policies. Simulation analysis has shown, however, that highly permissive DAPs allow naive order release mechanisms to achieve high system workload, resulting in high blocking and low throughput. Development of performance policies capable of effectively using high operational flexibility is of utmost significance and is another ongoing research topic.
REFERENCES

Araki, T., Sugiyama, Y., Kasami, T., and Okui, J., "Complexity of the Deadlock Avoidance Problem," Proceedings of 2nd IBM Symposium on the Mathematical Foundations of Computer Science, IBM Japan, Tokyo, pp. 229-252 (1977).

Banaszak, Z. and Krogh, B., "Deadlock Avoidance in Flexible Manufacturing Systems with Concurrently Competing Process Flows," IEEE Transactions on Robotics and Automation, Vol. 6, No. 6, pp. 724-734 (December 1990).

Coffman, E., Elphick, M., and Shoshani, A., "System Deadlocks," ACM Computing Surveys, Vol. 3, pp. 67-78 (June 1971).

Cho, H., Kumaran, T., and Wysk, R., "Graph-Theoretic Deadlock Detection and Resolution for Flexible Manufacturing Systems," IEEE Transactions on Robotics and Automation, Vol. 11, No. 3, pp. 413-421 (June 1995).

D'Souza, K. and Khator, S., "A Survey of Petri Net Applications in Modeling Controls for Automated Manufacturing Systems," Computers in Industry, Vol. 24, pp. 5-16 (1994).

Fanti, M., Maione, B., Mascolo, S., and Turchiano, B., "Performance of Deadlock Avoidance Algorithms in Flexible Manufacturing Systems," Journal of Manufacturing Systems, Vol. 15, No. 3, pp. 164-178 (1996).

Haberman, A., "Prevention of System Deadlocks," Communications of the ACM, Vol. 12, No. 7, pp. 373-377 (July 1969).

Holt, R., "Some Deadlock Properties of Computer Systems," ACM Computing Surveys, Vol. 4, No. 3, pp. 179-196 (September 1972).

Hsieh, F. and Chang, S., "Dispatching Driven Deadlock Avoidance Controller Synthesis for Flexible Manufacturing Systems," IEEE Transactions on Robotics and Automation, Vol. 10, No. 2, pp. 196-209 (April 1994).

Lawley, M., Reveliotis, S., and Ferreira, P., "Configurable and Scalable Real Time Control Policies for Deadlock Avoidance in Flexible Manufacturing Systems," Proceedings of the Sixth International FAIM Conference, Atlanta, GA, pp. 758-767 (May 1996).

Lawley, M., Reveliotis, S., and Ferreira, P., "Design Guidelines for Deadlock Handling Strategies in Flexible Manufacturing Systems," International Journal of Flexible Manufacturing Systems, Vol. 9, No. 1 (January 1997a).

Lawley, M., Reveliotis, S., and Ferreira, P., "FMS Structural Control and the Neighborhood Policy: Parts 1 and 2," IIE Transactions, forthcoming (1997b).

Leung, Y. and Sheen, G., "Resolving Deadlocks in Flexible Manufacturing Cells," Journal of Manufacturing Systems, Vol. 12, No. 4, pp. 291-304 (1993).
Reveliotis, S. and Ferreira, P., "Deadlock Avoidance Policies for Automated Manufacturing Cells," IEEE Transactions on Robotics and Automation, Vol. 12, pp. 845-857 (November 1996).

Shoshani, A. and Coffman, E., "Sequencing Tasks in Multiprocess Systems to Avoid Deadlocks," Proceedings of the 11th Annual Symposium on Switching and Automata Theory, pp. 225-233 (October 1970).

Silberschatz, A. and Peterson, G., Operating System Concepts, Addison-Wesley, Reading, MA (1991).

Wysk, R., Yang, N., and Joshi, S., "Detection of Deadlocks in Flexible Manufacturing Cells," IEEE Transactions on Robotics and Automation, Vol. 7, No. 6, pp. 853-859 (December 1991).
Figure 1. Example FMS state space
Figure 2. FMS state space partitions
Figure 3. Unordered safe state
R = {R1, R2, R3, R4} where C1=2, C2=1, C3=2, C4=2, and P = {P1, P2} with RT1 = 〈R1, R2, R3, R4〉 and RT2 = 〈R4, R3, R2, R1〉. In the state considered, one π11 holds a unit of buffer capacity at R1 and two π21's hold units at R4, so Su+ = {P11, P21}. Let p(1) = P11 and p(2) = P21. Then Π = {1, 2}, and AVAILABLE = 〈1 1 2 0〉.

        NEED                   ALLOCATION
        R1  R2  R3  R4         R1  R2  R3  R4
p(1)     0   1   1   1          1   0   0   0
p(2)     1   1   1   0          0   0   0   2

Iteration 1: NEED[1][j] = 〈0 1 1 1〉 is not ≤ AVAILABLE[j] + ALLOCATION[1][j] = 〈2 1 2 0〉; NEED[2][j] = 〈1 1 1 0〉 ≤ AVAILABLE[j] + ALLOCATION[2][j] = 〈1 1 2 2〉. Therefore, access P21 (advance the π21's out of the system). Π = {1}, and AVAILABLE = 〈1 1 2 0〉 + 〈0 0 0 2〉 = 〈1 1 2 2〉.

Iteration 2: NEED[1][j] = 〈0 1 1 1〉 ≤ AVAILABLE[j] + ALLOCATION[1][j] = 〈2 1 2 2〉. Therefore, access P11 (advance the π11 out of the system). Π = ∅, and AVAILABLE = 〈1 1 2 2〉 + 〈1 0 0 0〉 = 〈2 1 2 2〉.

Terminate by admitting the state since every Pkm∈Su+ is accessed, i.e., the state is ordered with h(P21) = 1 and h(P11) = 2. The reader should verify that the state given in Figure 3 would be rejected by Algorithm 1.

Figure 4. Algorithm 1 acceptance of ordered state
Figure 5. Example partially ordered state
Figure 6. V1 ordered state
RT1 = 〈R2, R1, R4〉, RT2 = 〈R3, R5, R2, R1〉, RT3 = 〈R6, R4, R3, R5〉.

Figure 7. Single transition move from Vo to V2
Figure 8. Partition topology (safety levels Vm-2 through Vm+2)
Figure 9. Region of DAP admissibility
Figure 10. System versus point estimate of p = |Vadmit| / |Vsafe|
Figure 11. System load versus point estimate of p = |Vadmit| / |Vsafe|
Figure 12. System versus average iterations to accept
Figure 13. System load versus average iterations to accept
Figure 14. System versus variance in iterations to accept
Figure 15. System load versus variance in iterations to accept