Approximate MVA Algorithms for Solving Queueing Network Models

by

Hai Wang

A thesis submitted in conformity with the requirements for the degree of Master of Science Graduate Department of Computer Science University of Toronto

© Copyright by Hai Wang 1997


Abstract

As distributed systems such as the Internet and Integrated Services Digital Networks (ISDN) become increasingly complex, the interactions among numerous components and workloads make system performance difficult to predict reliably. It is increasingly important to apply suitable performance evaluation tools to aid system analysis and design, system configuration and capacity planning, and resource management. Queueing network models are widely adopted for the performance evaluation of computer systems and communication networks. Approximate Mean Value Analysis (MVA) is a popular technique for analyzing queueing networks because of the efficiency and accuracy that it affords. In this thesis, all of the existing approximate MVA algorithms for separable queueing networks are surveyed. New numerical and complexity analysis results for existing approximate MVA algorithms are presented. The relative merits and tradeoffs of different implementations of the existing approximate MVA algorithms are examined quantitatively. Furthermore, two new approximate MVA algorithms, the Queue Line (QL) algorithm and the Fraction Line (FL) algorithm, for multiple class separable queueing networks are presented. Both the QL and FL algorithms are analyzed with respect to

their computational costs and accuracies. Based on theoretical and empirical results, the QL algorithm is always more accurate than, and yet has the same computational efficiency as, the Bard-Schweitzer Proportional Estimation (PE) algorithm, the most popular approximate MVA algorithm. The FL algorithm has the same computational cost and yet yields more accurate solutions than both the QL and PE algorithms for noncongested separable queueing networks. The existence, uniqueness, and convergence properties of the solutions of these two algorithms are also investigated. The QL algorithm can replace the PE algorithm in a spectrum based on the trade-off between computational costs and accuracies of all of the approximate MVA algorithms.


Acknowledgements

I am deeply indebted to my supervisor, Professor K. C. Sevcik, for the guidance and encouragement that he has given me. His influence on this work and my graduate career has been great. I would like to express my gratitude to Professor K. R. Jackson for his valuable suggestions that have helped to improve this thesis to a great extent, especially for his proof of the convergence rate. Many people have aided me over the years in ways both academic and personal. My thanks to all of them. I would like in particular to acknowledge the influence of my grandparents and my parents. If there were to be a dedication of this work, it would be to them. Finally, I would like to acknowledge the generous financial support provided by the Natural Sciences and Engineering Research Council of Canada.


Contents

1 Introduction 1

2 Background 5
  2.1 Exact Mean Value Analysis 6
  2.2 Approximate Mean Value Analysis 6
    2.2.1 The Bard Large Customer Population Algorithm 7
    2.2.2 The Bard-Schweitzer Proportional Estimation Algorithm 8
    2.2.3 The Chandy-Neuse Linearizer Algorithm 8
    2.2.4 The Chow Second Approximation Algorithm 10
    2.2.5 The Eager Looping Algorithm 10
    2.2.6 The Hsieh-Lam Proportional Approximation Methods 13
    2.2.7 The de Souza e Silva-Lavenberg-Muntz Clustering Approximation Algorithms 15
    2.2.8 The Zahorjan-Eager-Sweillam Aggregate Queue Length Algorithm 16
    2.2.9 The de Souza e Silva-Muntz Improved Linearizer Algorithm 17
    2.2.10 The Schweitzer-Serazzi-Broglia Algorithms 18
    2.2.11 Summary 20

3 New Results for Existing Approximate MVA Algorithms 22
  3.1 New Numerical Analysis Results 24
    3.1.1 Definitions and Notation 24
    3.1.2 The Successive Substitution Method for Approximate MVA Algorithms 28
    3.1.3 Newton's Techniques for Approximate MVA Algorithms 30
    3.1.4 Existence of the Solution of the Linearizer Algorithm 31
  3.2 New Complexity Analysis Results 32
    3.2.1 Space Complexity Analysis 32
    3.2.2 Time Complexity Analysis 33
  3.3 Summary 35
  Appendix 3.A 36
  Appendix 3.B 38
  Appendix 3.C 43

4 Two New Approximate MVA Algorithms 45
  4.1 Properties of Separable Queueing Networks 46
  4.2 The Queue Line Algorithm 47
  4.3 The Fraction Line Algorithm 49
  4.4 Existence, Uniqueness, and Convergence Results 53
  4.5 Summary 56
  Appendix 4.A 57
  Appendix 4.B 59
  Appendix 4.C 61
  Appendix 4.D 63
  Appendix 4.E 70
  Appendix 4.F 72
  Appendix 4.G 73

5 Comparison of the PE, QL, and FL Algorithms 77
  5.1 Analytic Results on the Accuracies of the PE, QL, and FL Algorithms 77
    5.1.1 Accuracies of the Assumptions of the PE and QL Algorithms 78
    5.1.2 Accuracies of the Solutions of the PE and QL Algorithms 88
    5.1.3 Accuracy of the FL Algorithm 96
  5.2 Experimental Results on the Accuracies of the PE, QL, and FL Algorithms 97
    5.2.1 Accuracies of the PE, QL, and FL Algorithms for Single Class Separable Queueing Networks 99
    5.2.2 Accuracies of the PE, QL, and FL Algorithms for Multiple Class Separable Queueing Networks 103
  5.3 Summary 104
  Appendix 5.A 105

6 Conclusions and Directions for Further Research 112
  6.1 Conclusions 112
  6.2 Directions for Further Research 113

Bibliography 115

List of Tables

2.1 Summary of all of the existing approximate MVA algorithms and the exact MVA algorithm 21
3.1 Computational cost of matrix factorization 33
3.2 Parameters for different implementations of the PE algorithm 34
3.3 Parameters for the network 35
3.4 Empirical results of different implementations of the PE algorithm 35
5.1 Parameters for generating single class separable queueing networks 100
5.2 Parameters for generating multiple class separable queueing networks 104
5.3 Statistical results of the three algorithms for single class separable queueing networks in Experiment 1 105
5.4 Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 1 106
5.5 Statistical results of the three algorithms for single class separable queueing networks in Experiment 2 106
5.6 Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 2 107
5.7 Statistical results of the three algorithms for single class separable queueing networks in Experiment 3 107
5.8 Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 3 108
5.9 Statistical results of the three algorithms for single class separable queueing networks in Experiment 4 108
5.10 Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 4 109
5.11 Statistical results of the three algorithms for single class separable queueing networks in Experiment 5 109
5.12 Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 5 110
5.13 Summary of results of accuracy tests of each algorithm for multiple class separable queueing networks 111

List of Figures

1.1 The spectrum of approximate MVA algorithms 3
3.1 Plot of g(R(N)) vs. R(N) for the Core algorithm 41
4.1 Plot of g(R(N)) vs. R(N) for the QL algorithm 63
4.2 Plot of G(R(N)) vs. R(N) for the QL algorithm 74
5.1 Plot of the function X_c(\vec{N}) with respect to N_c 79
5.2 Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c, and the approximations of the PE and QL algorithms when D_{c,1} = D_{c,2} = \cdots = D_{c,K} and Z_c = 0 81
5.3 Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c, and the approximations of the PE and QL algorithms when D_{c,1} = D_{c,2} = \cdots = D_{c,K} and Z_c > 0 82
5.4 Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c when all service centers can be classified as bottleneck centers and nonbottleneck centers for class c customers 84
5.5 Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c and the approximations of the PE and QL algorithms when center k is a bottleneck center for class c customers 85
5.6 Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c and the approximations of the PE and QL algorithms when center k is a nonbottleneck center for class c customers 86
5.7 Plot of the function X_c(\vec{n}) with respect to n_c, and the relationship between X_c(\vec{N}) - X_c(\vec{N} - \vec{1}_c) and [X_c(\vec{N}) - X_c(\vec{N} - (N_c - 1)\vec{1}_c)]/(N_c - 1) 94

Chapter 1

Introduction

As computer technology has advanced, computer systems and communication networks have become increasingly complex. For instance, distributed systems such as the Internet and Integrated Services Digital Networks (ISDN) involve the interconnection of a large number of components via networks. Moreover, coupled with the improvements in the hardware performance of computer-communication systems, there has been a vast proliferation of workloads and widely varying resource demands, both within each load type and also between different types. Since client-server applications and multimedia packages are now widely used, the interactions among numerous components and workloads make human intuition no longer adequate to predict reliably how changes in the system configuration will impact overall system performance. Hence, it is increasingly important to apply quantitative tools to aid system analysis and design, system configuration and capacity planning, and resource management.

Queueing network models have been widely adopted for the performance evaluation of modern computer systems and communication networks [24]. While it is infeasible to compute the exact solution for the general class of queueing networks, solutions can be computed for an important subset of queueing networks, separable (or product-form) queueing networks [5], which have become increasingly popular. The solution for separable queueing networks can be obtained with modest computational effort using several comparatively efficient exact algorithms.

The first of these algorithms is the convolution algorithm which was developed by


Buzen [7]. Buzen's convolution algorithm applied only to single class separable queueing networks with First Come First Serve (FCFS) exponential centers. Then Reiser and Kobayashi [29] developed a similar algorithm for multiple class separable queueing networks. Reiser and Lavenberg [31] later developed an alternative analysis technique, Mean Value Analysis (MVA). Sevcik and Mitrani [35] and Lavenberg and Reiser [23] independently proved the Arrival Instant Distribution theorem that provides a foundation for the MVA algorithm. Both the convolution and MVA algorithms have identical time and space complexities, and either of them is derivable from the other [21]. The primary advantage of the MVA algorithm over the convolution algorithm is that the MVA algorithm can overcome the numeric scaling problems that sometimes occur with the convolution algorithm [30]. A number of other exact algorithms for separable queueing networks have also been discovered, namely, the local balance algorithm for normalization constants (LBANC) [10], the tree convolution [22], the tree MVA [40, 19], recursion by chain (RECAL) [11], and the mean value analysis by chain (MVAC) [12] algorithms. However, the time and space complexities of these algorithms in the worst case are not as good as those of the MVA algorithm and the convolution algorithm.

Even for separable queueing networks, however, the computational cost of an exact solution becomes prohibitively expensive as the number of classes, customers, and centers grows. Bard [2] first introduced several approximate analysis techniques based on the MVA algorithm. These algorithms are termed approximate MVA algorithms, and are classified into two types: algorithms of one type are intended to analyze nonseparable queueing networks, and algorithms of the other type are simply approximate versions of the exact MVA algorithm for separable queueing networks with reduced computational expense.
Starting with Bard's paper, numerous approximate MVA algorithms of both types have been devised. This thesis addresses approximate MVA algorithms for separable queueing networks. Among all of the approximate MVA algorithms of this type [2, 33, 9, 8, 16, 20, 36, 42, 38, 34], two major algorithms have gained wide acceptance among performance analysts: the Bard-Schweitzer Proportional Estimation (PE) algorithm [33] and the Chandy-Neuse Linearizer algorithm [9]. Both algorithms are iterative approximation algorithms with


substantially smaller computation requirements than the exact MVA algorithm. The Linearizer algorithm is more accurate than the PE algorithm, but at the cost of higher computational requirements. A number of variants [42, 38, 34] have been proposed to reduce the time and space complexities of the original Linearizer algorithm. Figure 1.1 shows several popular algorithms with respect to their accuracies and computational costs. As shown in Figure 1.1, the variants of the original Linearizer algorithm dominate the original Linearizer algorithm in the spectrum of techniques which trades off computational cost and accuracy.

[Figure: error (%) versus computational cost, locating the exact MVA, PE, original Linearizer, AQL, improved Linearizer, SSB, and QL algorithms.]

Figure 1.1: The spectrum of approximate MVA algorithms

In this thesis, two new approximate MVA algorithms for separable queueing networks are presented. The Queue Line (QL) algorithm is based on the approximation of the average queue length for each customer class at each center in the network. The Fraction Line (FL) algorithm is based on the approximation of the fraction of the customers of each class at each center in the network. Because the QL algorithm has the same asymptotic space and time complexities as the PE algorithm, and always yields more accurate solutions, it may replace the PE algorithm in the spectrum of different algorithms that


trade off accuracy and efficiency. The FL algorithm also has the same asymptotic space and time complexities as the QL and PE algorithms, and it yields more accurate solutions than both the QL and PE algorithms for noncongested separable queueing networks where queue lengths are quite small.

The major contributions of this thesis are the following:

1. New complexity and numerical analysis results for existing approximate MVA algorithms for separable queueing networks.

2. Two new approximate MVA algorithms for multiple class separable queueing networks and analysis of their computational costs.

3. Theoretical and experimental evidence of the existence, uniqueness, and convergence properties of solutions for the new approximate MVA algorithms.

4. Theoretical and experimental evidence of the error behavior of the solutions of the new approximate MVA algorithms.

5. A comparison of the new and existing approximate MVA algorithms and their applicability to different types of separable queueing networks.

The remainder of the thesis is organized as follows. Chapter 2 reviews material relating to approximate MVA algorithms. Chapter 3 contains new complexity and numerical analysis results for existing approximate MVA algorithms for separable queueing networks. Chapter 4 presents the QL and FL algorithms and their complexity and numerical analysis results. Chapter 5 provides a comparison of the accuracies of the PE, QL, and FL algorithms and discusses their relative merits. Finally, a summary and conclusions are provided in Chapter 6.

Chapter 2

Background

Consider a closed separable queueing network [5, 31] with C customer classes (also termed routing chains) and K load independent (also termed fixed-rate or single-server) service centers. The customer classes are indexed as classes 1 through C, and the centers are indexed as centers 1 through K. The customer population of the queueing network is denoted by the vector \vec{N} = (N_1, N_2, ..., N_C), where N_c denotes the number of customers belonging to class c for c = 1, 2, ..., C. Also, the total number of customers in the network is denoted by N = N_1 + N_2 + \cdots + N_C. The mean service demand of a class c customer at center k is denoted by D_{c,k} for c = 1, 2, ..., C and k = 1, 2, ..., K. (This service demand is defined as the product of two factors: the mean service time of a class c customer during each visit to center k, and the mean number of visits by a class c customer to center k.) The think time of class c, Z_c, is the sum of the delay (also termed infinite-server) center service demands of class c. For convenience, we define the following performance measures of interest:

R_{c,k}(\vec{N}) = the average residence time of a class c customer at center k, given the network population vector \vec{N}.

R_c(\vec{N}) = the average response time of a class c customer in the network, given the network population vector \vec{N}.

X_c(\vec{N}) = the throughput of class c, given the network population vector \vec{N}.

Q_{c,k}(\vec{N}) = the mean queue length of class c at center k, given the network population vector \vec{N}.

Q_k(\vec{N}) = the mean total queue length at center k, given the network population vector \vec{N}.

2.1 Exact Mean Value Analysis

Based on the Arrival Instant Distribution theorem [23, 35] and Little's Law [25], the exact MVA algorithm [31] involves the repeated application of the following six equations:

A_k^{(c)}(\vec{n}) = Q_k(\vec{n} - \vec{1}_c), \qquad (2.1)

R_{c,k}(\vec{n}) = D_{c,k} \left( 1 + A_k^{(c)}(\vec{n}) \right), \qquad (2.2)

R_c(\vec{n}) = \sum_{k=1}^{K} R_{c,k}(\vec{n}), \qquad (2.3)

X_c(\vec{n}) = \frac{n_c}{Z_c + R_c(\vec{n})}, \qquad (2.4)

Q_{c,k}(\vec{n}) = R_{c,k}(\vec{n}) \cdot X_c(\vec{n}), \qquad (2.5)

Q_k(\vec{n}) = \sum_{c=1}^{C} Q_{c,k}(\vec{n}), \qquad (2.6)

with initial conditions Q_k(\vec{0}) = 0 for k = 1, 2, ..., K. Here \vec{n} = (n_1, n_2, ..., n_C) is a population vector ranging from \vec{0} to \vec{N}. A_k^{(c)}(\vec{n}) is the average number of customers a class c customer finds already at center k when it arrives there, given the network population \vec{n}. \vec{1}_c is a C-dimensional vector whose cth element is one and whose other elements are zeros, and (\vec{n} - \vec{1}_c) denotes the population vector \vec{n} with one class c customer removed. The key component of the exact MVA algorithm is the recursive expression (2.1). This recursive dependence on the performance measures of lower population levels causes both the space and time complexities of the exact MVA algorithm to be \Theta(KC \prod_{c=1}^{C} (N_c + 1)).
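The recursion (2.1)-(2.6) can be sketched as follows; the function and variable names (`exact_mva`, `D`, `Z`, `N`) are illustrative choices, not from the thesis.

```python
from itertools import product

def exact_mva(D, Z, N):
    """Exact MVA for a closed separable network (equations 2.1-2.6).

    D[c][k]: service demand of class c at center k
    Z[c]:    think time of class c
    N[c]:    population of class c
    Returns (X, Q): per-class throughputs and mean queue lengths Q[c][k]
    at the full population N.
    """
    C, K = len(D), len(D[0])
    # Q_total[n][k]: mean total queue length at center k for population n.
    Q_total = {tuple([0] * C): [0.0] * K}
    X = [0.0] * C
    Q = [[0.0] * K for _ in range(C)]
    # Lexicographic enumeration visits every n - 1_c before n.
    for n in product(*(range(Nc + 1) for Nc in N)):
        if sum(n) == 0:
            continue
        Qk = [0.0] * K
        for c in range(C):
            if n[c] == 0:
                continue
            prev = list(n)
            prev[c] -= 1
            A = Q_total[tuple(prev)]                            # (2.1)
            R = [D[c][k] * (1.0 + A[k]) for k in range(K)]      # (2.2)
            Xc = n[c] / (Z[c] + sum(R))                         # (2.3)-(2.4)
            for k in range(K):
                Qk[k] += R[k] * Xc                              # (2.5)-(2.6)
            if n == tuple(N):
                X[c] = Xc
                Q[c] = [R[k] * Xc for k in range(K)]
        Q_total[n] = Qk
    return X, Q
```

Storing one queue-length vector per population vector makes the \Theta(KC \prod_c (N_c + 1)) space and time costs of the recursion directly visible.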

2.2 Approximate Mean Value Analysis

The approximate MVA algorithms for separable queueing networks improve the time and space complexities by substituting approximations for A_k^{(c)}(\vec{N}) that are not recursive.


Typically, an approximate MVA algorithm approximates A_k^{(c)}(\vec{N}) by a function of Q_k(\vec{N}) and Q_{c,k}(\vec{N}) and other quantities that can be computed without recursion, and the set of equations is solved iteratively. Among all approximate MVA algorithms for separable queueing networks, the Bard-Schweitzer Proportional Estimation (PE) algorithm [33] and the Chandy-Neuse Linearizer algorithm [9] are two major algorithms that are currently in wide use. In the following, all of the existing approximate MVA algorithms for separable queueing networks are presented in chronological order of publication date.

2.2.1 The Bard Large Customer Population Algorithm

The Bard Large Customer Population (LCP) algorithm is the first approximate MVA algorithm for separable queueing networks [2], and is the only approximate MVA algorithm whose error bounds are known. It is based on the approximation

Q_{j,k}(\vec{N} - \vec{1}_c) \approx Q_{j,k}(\vec{N}) \quad \text{for any class } j, \qquad (2.7)

since with large populations, reducing a class population by merely one customer could not significantly change the mean queue length at any center. Hence the approximation equation of the LCP algorithm is

A_k^{(c)}(\vec{N}) = Q_k(\vec{N} - \vec{1}_c) = \sum_{j=1}^{C} Q_{j,k}(\vec{N} - \vec{1}_c) \approx Q_k(\vec{N}). \qquad (2.8)

Using this approximation equation, the resulting MVA equations only involve variables for a single population and can be solved iteratively. The asymptotic space and time complexities of the LCP algorithm are both O(KC), if it is assumed that the number of iterations is a constant independent of the network parameters. Chow [8] proved the existence and uniqueness of the solutions of the LCP algorithm, suggested iteration initializations that yield convergence to the solution, and established the error bounds of the solution. Unfortunately, the algorithm does not yield very accurate solutions for separable queueing networks with small populations.


2.2.2 The Bard-Schweitzer Proportional Estimation Algorithm

Schweitzer proposed the Bard-Schweitzer Proportional Estimation (PE) algorithm [33], based upon the Bard LCP algorithm. In practice, the PE algorithm requires much less execution time and has lower space requirements than the exact MVA algorithm, and yields relatively accurate solutions for separable queueing networks with either large or small populations. It has been adopted for many modeling applications [3, 4]. The PE algorithm is based on the approximation

Q_{j,k}(\vec{N} - \vec{1}_c) \approx \begin{cases} \frac{N_c - 1}{N_c} \cdot Q_{c,k}(\vec{N}) & \text{for } c = j, \\ Q_{j,k}(\vec{N}) & \text{for } c \neq j. \end{cases} \qquad (2.9)

Hence the approximation equation of the PE algorithm is

A_k^{(c)}(\vec{N}) = Q_k(\vec{N} - \vec{1}_c) = \sum_{j=1}^{C} Q_{j,k}(\vec{N} - \vec{1}_c) \approx Q_k(\vec{N}) - \frac{1}{N_c} Q_{c,k}(\vec{N}). \qquad (2.10)

As with the LCP algorithm, the solution of the PE algorithm is obtained iteratively. The space and time complexities of the PE algorithm are both O(KC), if it is assumed that the number of iterations is a constant independent of the network parameters. Eager and Sevcik [17] and Agrawal [1] independently showed that the PE algorithm has a unique solution and that iterations converge to that solution for single class separable queueing networks. Pattipati et al. [28] proved the existence, asymptotic (i.e., as the number of customers in each class approaches infinity) uniqueness and asymptotic convergence of the solutions of the PE algorithm for multiple class separable queueing networks. Furthermore, it has also been shown that the PE algorithm yields pessimistic bounds on performance measures for single class separable queueing networks [17].
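The LCP and PE algorithms iterate the same MVA equations and differ only in their approximation of the arrival-instant queue length. A minimal sketch of this shared fixed-point scheme, with the LCP approximation (2.8) and the PE approximation (2.10) plugged in; all names are illustrative assumptions, and each class is assumed to have at least one customer:

```python
def approx_mva(D, Z, N, arrival, tol=1e-10, max_iter=10000):
    """Generic fixed-point scheme shared by the LCP and PE algorithms.

    arrival(c, k, Qk, Q, N) supplies the approximation of A_k^{(c)};
    the MVA equations (2.2)-(2.6) are then iterated to a fixed point.
    """
    C, K = len(D), len(D[0])
    # Initial guess: each class spread evenly over the centers.
    Q = [[N[c] / K for _ in range(K)] for c in range(C)]
    X = [0.0] * C
    for _ in range(max_iter):
        Qk = [sum(Q[c][k] for c in range(C)) for k in range(K)]
        Qnew = [[0.0] * K for _ in range(C)]
        for c in range(C):
            A = [arrival(c, k, Qk, Q, N) for k in range(K)]
            R = [D[c][k] * (1.0 + A[k]) for k in range(K)]
            X[c] = N[c] / (Z[c] + sum(R))
            Qnew[c] = [R[k] * X[c] for k in range(K)]
        delta = max(abs(Qnew[c][k] - Q[c][k])
                    for c in range(C) for k in range(K))
        Q = Qnew
        if delta < tol:
            break
    return X, Q

# LCP approximation (2.8): an arriving customer sees the full queue.
lcp = lambda c, k, Qk, Q, N: Qk[k]
# PE approximation (2.10): remove a 1/Nc share of class c's own queue.
pe = lambda c, k, Qk, Q, N: Qk[k] - Q[c][k] / N[c]
```

Only the one-line `arrival` callback changes between the two algorithms, which makes the O(KC) per-iteration cost of both schemes apparent.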

2.2.3 The Chandy-Neuse Linearizer Algorithm

Chandy and Neuse [9] proposed an iterative approximate MVA algorithm by deriving an algebraically equivalent expression for Q_k(\vec{N} - \vec{1}_c). The algorithm is termed the Chandy-Neuse Linearizer algorithm in the literature, and it is based on the expression

A_k^{(c)}(\vec{N}) = \sum_{i=1}^{C} \left[ \left( N_i - \delta_c^{(i)} \right) \left( \frac{Q_{i,k}(\vec{N})}{N_i} + \Delta_{c,k}^{(i)}(\vec{N}) \right) \right], \qquad (2.11)

where

\delta_c^{(i)} = \begin{cases} 0 & \text{for } i \neq c, \\ 1 & \text{for } i = c, \end{cases}

and

\Delta_{c,k}^{(i)}(\vec{N}) = \frac{Q_{i,k}(\vec{N} - \vec{1}_c)}{N_i - \delta_c^{(i)}} - \frac{Q_{i,k}(\vec{N})}{N_i}, \qquad (2.12)

and the approximation that

\Delta_{c,k}^{(i)}(\vec{N} - \vec{1}_j) \approx \Delta_{c,k}^{(i)}(\vec{N}) \quad \text{for any class } j. \qquad (2.13)

For convenience, the system of nonlinear equations consisting of (2.11) and (2.2)-(2.6) is called the Core algorithm. The Core algorithm is similar to the PE algorithm. Actually, the approximation used in the PE algorithm is equivalent to assuming that each of the \Delta-terms is zero. The fundamental problem here is to estimate the \Delta-terms. Many approaches have been suggested [9, 28, 34]. The original Chandy-Neuse Linearizer algorithm uses iterations to successively determine better approximations for the \Delta-terms as follows.

- Initialization. Set \Delta_{c,k}^{(i)}(\vec{N}) = \Delta_{c,k}^{(i)}(\vec{N} - \vec{1}_j) = 0 for k = 1, 2, ..., K, c = 1, 2, ..., C, i = 1, 2, ..., C, and j = 1, 2, ..., C.
- Step 1. Solve the Core algorithm at population \vec{N}.
- Termination Test. If the termination conditions are satisfied, then stop.
- Step 2. Solve the Core algorithm at each of the \vec{N} - \vec{1}_i populations.
- Step 3. Update the \Delta-terms via (2.12) and (2.13), and go to Step 1.

Although the Linearizer algorithm yields much more accurate solutions than the PE algorithm, its computational expense is higher. Chandy and Neuse [9] used only three iterations for the \Delta-terms in order to keep the execution time of the Linearizer algorithm low. However, Zahorjan et al. [42] suggested that the three-iteration termination condition does not consistently yield the most accurate answers, and they proposed improved termination criteria.
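The steps above can be sketched as follows, with the Core algorithm solved by simple fixed-point iteration. All helper and variable names are illustrative, not from the thesis, and every class is assumed to have at least one customer:

```python
def linearizer(D, Z, N, outer_iters=3, tol=1e-10, max_iter=10000):
    """Sketch of the Chandy-Neuse Linearizer outer loop (Steps 1-3).

    solve_core iterates the Core algorithm (eq. 2.11 with 2.2-2.6) at a
    given population; the outer loop re-estimates the Delta-terms via
    (2.12)/(2.13).
    """
    C, K = len(D), len(D[0])

    def solve_core(n, delta):
        Q = [[n[c] / K for _ in range(K)] for c in range(C)]
        X = [0.0] * C
        for _ in range(max_iter):
            Qnew = [[0.0] * K for _ in range(C)]
            for c in range(C):
                if n[c] == 0:
                    continue
                # Arrival-instant approximation (2.11).
                A = [sum((n[i] - (1 if i == c else 0)) *
                         (Q[i][k] / n[i] + delta[c][i][k])
                         for i in range(C) if n[i] > 0)
                     for k in range(K)]
                R = [D[c][k] * (1.0 + A[k]) for k in range(K)]
                X[c] = n[c] / (Z[c] + sum(R))
                Qnew[c] = [R[k] * X[c] for k in range(K)]
            diff = max(abs(Qnew[c][k] - Q[c][k])
                       for c in range(C) for k in range(K))
            Q = Qnew
            if diff < tol:
                break
        return X, Q

    # delta[c][i][k] estimates (2.12); zero is exactly the PE assumption.
    delta = [[[0.0] * K for _ in range(C)] for _ in range(C)]
    for _ in range(outer_iters):
        _, Q = solve_core(list(N), delta)       # Step 1
        for c in range(C):                      # Step 2: each N - 1_c
            n1 = list(N)
            n1[c] -= 1
            _, Q1 = solve_core(n1, delta)
            for i in range(C):                  # Step 3: update via (2.12)
                ni = N[i] - (1 if i == c else 0)
                for k in range(K):
                    delta[c][i][k] = ((Q1[i][k] / ni if ni > 0 else 0.0)
                                      - Q[i][k] / N[i])
    return solve_core(list(N), delta)
```

Each outer iteration solves C + 1 Core systems, which is the source of the higher O(KC^2) space and O(KC^3) time costs relative to the PE algorithm.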


The asymptotic space complexity of the Linearizer algorithm is O(KC^2), and the asymptotic time complexity is O(KC^3), if it is assumed that the number of iterations is a constant independent of the network parameters.

2.2.4 The Chow Second Approximation Algorithm

The Chow Second Approximation (SA) algorithm [8] is based on the expression

A_k^{(c)}(\vec{N}) = Q_k(\vec{N} - \vec{1}_c) = Q_k(\vec{N}) \left( 1 + \tau_{c,k} \right), \qquad (2.14)

where

\tau_{c,k} = \frac{Q_k(\vec{N} - \vec{1}_c) - Q_k(\vec{N})}{Q_k(\vec{N})}. \qquad (2.15)

Chow [8] suggested that the \tau-terms can be approximated by

\hat{\tau}_{c,k} = \frac{\hat{Q}_k(\vec{N} - \vec{1}_c) - \hat{Q}_k(\vec{N})}{\hat{Q}_k(\vec{N})}, \qquad (2.16)

or

\hat{\tau}_{c,k} = \frac{\hat{Q}_k(\vec{N}) - \hat{Q}_k(\vec{N} + \vec{1}_c)}{\hat{Q}_k(\vec{N} + \vec{1}_c)}, \qquad (2.17)

where the \hat{Q}_k-terms are the solutions of the LCP algorithm. The results of the algorithm are obtained iteratively. Chow [8] found that generally (2.17) results in more accurate solutions than (2.16). Note that the approximation equation of the LCP algorithm is equivalent to assuming that each of the \tau-terms is zero.
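A sketch of the SA scheme using the (2.17) estimate: LCP solutions at populations N and N + 1_c supply the correction terms, after which the MVA equations are iterated with the arrival-instant approximation (2.14). Names are illustrative, not from the thesis:

```python
def chow_sa(D, Z, N, tol=1e-10, max_iter=10000):
    """Sketch of the Chow Second Approximation algorithm (eqs. 2.14-2.17)."""
    C, K = len(D), len(D[0])

    def lcp(n):
        # Bard LCP fixed point: arriving customers see Q_k(N) (eq. 2.8).
        Q = [sum(n) / K] * K
        for _ in range(max_iter):
            Qnew = [[0.0] * K for _ in range(C)]
            for c in range(C):
                R = [D[c][k] * (1.0 + Q[k]) for k in range(K)]
                Xc = n[c] / (Z[c] + sum(R))
                Qnew[c] = [R[k] * Xc for k in range(K)]
            Qk = [sum(Qnew[c][k] for c in range(C)) for k in range(K)]
            if max(abs(Qk[k] - Q[k]) for k in range(K)) < tol:
                return Qk
            Q = Qk
        return Q

    Qhat = lcp(list(N))
    tau = [[0.0] * K for _ in range(C)]
    for c in range(C):
        up = list(N)
        up[c] += 1
        Qup = lcp(up)
        for k in range(K):                      # estimate (2.17)
            tau[c][k] = (Qhat[k] - Qup[k]) / Qup[k]

    # Iterate the MVA equations with A_k^{(c)} = Q_k(N) (1 + tau[c][k]).
    Q = [[N[c] / K] * K for c in range(C)]
    X = [0.0] * C
    for _ in range(max_iter):
        Qk = [sum(Q[c][k] for c in range(C)) for k in range(K)]
        Qnew = [[0.0] * K for _ in range(C)]
        for c in range(C):
            R = [D[c][k] * (1.0 + Qk[k] * (1.0 + tau[c][k]))
                 for k in range(K)]
            X[c] = N[c] / (Z[c] + sum(R))
            Qnew[c] = [R[k] * X[c] for k in range(K)]
        if max(abs(Qnew[c][k] - Q[c][k])
               for c in range(C) for k in range(K)) < tol:
            return X, Qnew
        Q = Qnew
    return X, Q
```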

2.2.5 The Eager Looping Algorithm

The Looping algorithm was proposed by Eager [16] in conjunction with the performance bound hierarchy (PBH) algorithms [15, 18]. The Looping algorithm computes the initial pessimistic and optimistic estimates of performance measures for the PBH algorithms. Because the number of iterations of the Looping algorithm is assumed to be a constant independent of the network parameters, one advantage of the Looping algorithm is that there is no additional cost incurred when the number of iterations to be performed is not given in advance, facilitating a goal of minimizing the cost of the PBH algorithms under some specified accuracy constraint.


An important innovation is the use of heaps in the Looping algorithm. Pessimistic and optimistic heaps of class c are defined respectively as upper and lower bounds on the difference between the exact mean number of class c customers at the service centers and an approximate value obtained by summing class c mean queue length lower bounds. Intuitively, a heap represents the congestion in the network that has not been accounted for by the mean queue length lower bounds. Also, there is an inflation factor associated with each heap. A pessimistic inflation factor reflects the allocation of a pessimistic heap, across all service centers, that will yield the most pessimistic bounds on the mean response time. Similarly, an optimistic inflation factor reflects the allocation of an optimistic heap, across all service centers, that will yield the most optimistic bounds on the mean response time.

Let B_j(\vec{N} - \vec{1}_c) and J_j(\vec{N} - \vec{1}_c) denote, respectively, the level 0 multiple closed class PBH algorithm upper and lower bounds on the mean response time of class j in the queueing network with population \vec{N} - \vec{1}_c [18]. The pessimistic and optimistic bounds on performance measures are superscripted by (pess) and (opt) respectively. The superscript +k on the performance measures indicates that these performance measures are obtained in the queueing network resulting from the addition of a service center with service demand identical to that of service center k. Let H_{c,k}(\vec{N}) represent a heap in a queueing network with population \vec{N}, and let L_c be the optimistic inflation factor of the optimistic heap of class c, and V_c be the pessimistic inflation factor of the pessimistic heap of class c. By the definitions, L_c = \min_k \{D_{c,k}\} and V_c = \max_k \{D_{c,k}\}. The Looping algorithm is based on the expression

Q_{c,k}(\vec{N} - \vec{1}_j) = \frac{X_j^{+k}(\vec{N} - \vec{1}_c)}{X_j(\vec{N})} \, Q_{c,k}(\vec{N}), \qquad (2.18)

where X_j^{+k}(\vec{N} - \vec{1}_c) is the throughput of class j in the queueing network resulting from the addition of a service center with service demand identical to that of service center k. This expression is derived by Zahorjan [41] from the convolution algorithm solution expressions. By utilizing Little's Law [25] and the level 0 multiple closed class PBH algorithm upper bound on mean response time [18], X_j^{+k}(\vec{N} - \vec{1}_c) can be estimated by

\frac{N_j - \delta_{c,j}}{Z_j + B_j^{+k}(\vec{N} - \vec{1}_c)},

where

\delta_{c,j} = \begin{cases} 0 & \text{for } j \neq c, \\ 1 & \text{for } j = c. \end{cases}

The Looping algorithm iterates on the following equations:

Q_k(\vec{N} - \vec{1}_c) = \sum_{j=1}^{C} Q_{j,k}(\vec{N} - \vec{1}_c), \qquad (2.19)

R_{c,k}(\vec{N}) = D_{c,k} \left( 1 + Q_k(\vec{N} - \vec{1}_c) \right), \qquad (2.20)

R_c(\vec{N}) = \sum_{k=1}^{K} R_{c,k}(\vec{N}), \qquad (2.21)

R_c^{(pess)}(\vec{N}) = R_c(\vec{N}) + V_c \sum_{j=1}^{C} H_j^{(pess)}(\vec{N} - \vec{1}_c), \qquad (2.22)

X_c^{(pess)}(\vec{N}) = \frac{N_c}{Z_c + R_c^{(pess)}(\vec{N})}, \qquad (2.23)

R_c^{(opt)}(\vec{N}) = \max\left\{ \max_k \left[ \frac{N_c D_{c,k}}{1 - \sum_{j \neq c} X_j^{(pess)}(\vec{N}) D_{j,k}} \right] - Z_c, \; R_c(\vec{N}) + L_c \sum_{j=1}^{C} H_j^{(opt)}(\vec{N} - \vec{1}_c) \right\}, \qquad (2.24)

Q_{c,k}(\vec{N}) = X_c^{(pess)}(\vec{N}) \, R_{c,k}(\vec{N}), \qquad (2.25)

Q_{j,k}(\vec{N} - \vec{1}_c) = \frac{N_j - \delta_{c,j}}{N_j} \cdot \frac{Z_j + R_j^{(opt)}(\vec{N})}{Z_j + B_j(\vec{N} - \vec{1}_c)} \, Q_{j,k}(\vec{N}), \qquad (2.26)

H_j^{(opt)}(\vec{N} - \vec{1}_c) = \max\left\{ 0, \; \frac{J_j(\vec{N} - \vec{1}_c)}{Z_j + J_j(\vec{N} - \vec{1}_c)} \left( N_j - \delta_{c,j} \right) - \sum_{k=1}^{K} Q_{j,k}(\vec{N} - \vec{1}_c) \right\}, \qquad (2.27)

H_j^{(pess)}(\vec{N} - \vec{1}_c) = \frac{B_j(\vec{N} - \vec{1}_c)}{Z_j + B_j(\vec{N} - \vec{1}_c)} \left( N_j - \delta_{c,j} \right) - \sum_{k=1}^{K} Q_{j,k}(\vec{N} - \vec{1}_c), \qquad (2.28)

with initial conditions Q_{j,k}(\vec{N} - \vec{1}_c) = \frac{N_j - \delta_{c,j}}{K} and H_j^{(opt)}(\vec{N} - \vec{1}_c) = H_j^{(pess)}(\vec{N} - \vec{1}_c) = 0 for c = 1, 2, ..., C, j = 1, 2, ..., C, and k = 1, 2, ..., K. The space and time complexities of the Looping algorithm are both O(KC^2), if it is assumed that the number of iterations is a constant independent of the network parameters. Eager [16] also proved that the Looping algorithm guarantees convergence to a solution which is suitable as an initial estimation for the multiple closed class PBH algorithm.

c;j

for c = 1; 2; :::; C , j = 1; 2; :::; C , and k = 1; 2; :::; K . The space and time complexities of the Looping algorithm are both O(KC 2), if it is assumed that the number of iterations is a constant independent of the network parameters. Eager [16] also proved that the Looping algorithm guarantees convergence to a solution which is suitable as an initial estimation for the multiple closed class PBH algorithm.


Chapter 2. Background

2.2.6 The Hsieh-Lam Proportional Approximation Methods

Hsieh and Lam [20] proposed three noniterative approximate MVA algorithms for separable queueing networks. These algorithms are termed Proportional Approximation Methods (PAM), and they are the PAM BASIC (PAMB), PAM IMPROVED (PAMI), and PAM TWO (PAMT) algorithms. Since the PAM algorithms are noniterative, they are particularly suitable for applications in which execution speed is more important than high accuracy. The PAMB algorithm involves the following calculations:

    E_{c,k} = \frac{D_{c,k}}{\sum_{i=1}^{K} D_{c,i}},    (2.29)

    Q_{c,k}(\vec{N}) = E_{c,k} \cdot N_c,    (2.30)

    Q_{j,k}(\vec{N} - \vec{1}_c) = \begin{cases} Q_{j,k}(\vec{N}) & \text{for } c \ne j, \\ Q_{j,k}(\vec{N}) - E_{j,k} & \text{for } c = j, \end{cases}    (2.31)

    R_{c,k}(\vec{N}) = D_{c,k} \cdot \left(1 + \sum_{j=1}^{C} Q_{j,k}(\vec{N} - \vec{1}_c)\right),    (2.32)

    R_c(\vec{N}) = \sum_{k=1}^{K} R_{c,k}(\vec{N}),    (2.33)

    X_c(\vec{N}) = \frac{N_c}{R_c(\vec{N}) + Z_c},    (2.34)

for c = 1, 2, ..., C, and k = 1, 2, ..., K.

The PAMI algorithm improves on the PAMB algorithm by checking the utilizations of centers to see whether any utilization exceeds one. The throughput of a class at a center is scaled down if the utilization of the center exceeds one. The following is a summary of the PAMI algorithm:

1. Execute the PAMB algorithm.

2. Obtain the utilizations of centers using the following formula:

    U_k(\vec{N}) = \sum_{c=1}^{C} D_{c,k} \cdot X_c(\vec{N}),    (2.35)

for k = 1, 2, ..., K, where U_k(\vec{N}) is the utilization of center k given network population vector \vec{N}.


3. Let S_c = \max_{k:\, D_{c,k} \ne 0} U_k(\vec{N}) for c = 1, 2, ..., C and k = 1, 2, ..., K.

4. If S_c > 1, then X_c(\vec{N}) = X_c(\vec{N})/S_c for c = 1, 2, ..., C.
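As a reading aid, the PAMB calculations (2.29)-(2.34) and the PAMI utilization correction (2.35 and steps 3-4) can be sketched in a few lines of Python. The function and argument names (D for the demand matrix, N for the population vector, Z for the think times) are this sketch's own conventions, not notation taken from the references.

```python
def pamb(D, N, Z):
    """Noniterative PAMB: D[c][k] is the service demand of class c at
    center k, N[c] the class population, Z[c] the class think time.
    Returns per-class throughputs X, response times R, and the
    proportionally allocated queue lengths Q."""
    C, K = len(D), len(D[0])
    # (2.29)-(2.30): split each class's population across centers in
    # proportion to its service demands (demands assumed positive).
    E = [[D[c][k] / sum(D[c]) for k in range(K)] for c in range(C)]
    Q = [[E[c][k] * N[c] for k in range(K)] for c in range(C)]
    X, R = [], []
    for c in range(C):
        # (2.31)-(2.32): remove one class-c customer before computing
        # the response time seen by class c.
        Rck = [D[c][k] * (1 + sum(Q[j][k] - (E[j][k] if j == c else 0.0)
                                  for j in range(C)))
               for k in range(K)]
        Rc = sum(Rck)                     # (2.33)
        R.append(Rc)
        X.append(N[c] / (Rc + Z[c]))      # (2.34)
    return X, R, Q

def pami(D, N, Z):
    """PAMB followed by the PAMI throughput scaling (steps 2-4).
    Returns the (possibly scaled) throughputs and the utilizations
    computed before scaling."""
    C, K = len(D), len(D[0])
    X, R, Q = pamb(D, N, Z)
    U = [sum(D[c][k] * X[c] for c in range(C)) for k in range(K)]  # (2.35)
    for c in range(C):
        S = max(U[k] for k in range(K) if D[c][k] != 0)            # step 3
        if S > 1:
            X[c] /= S                                              # step 4
    return X, U
```

For example, for a single class with demands [2.0, 0.1], population 10, and no think time, PAMB yields a bottleneck utilization above one, and the PAMI scaling reduces the class throughput so that the bottleneck utilization becomes exactly one.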

The PAMT algorithm improves on the accuracy of the PAMB and PAMI algorithms by executing the final two steps of the MVA algorithm instead of just the last step. This approach of trading computational cost for accuracy is similar to that of the Linearizer algorithm. The following is a summary of the PAMT algorithm:

1. Let E_{c,k} = \frac{D_{c,k}}{\sum_{i=1}^{K} D_{c,i}} for c = 1, 2, ..., C and k = 1, 2, ..., K.

2. Let Q_{c,k}(\vec{N}) = E_{c,k} \cdot N_c, and

    Q_{c,k}(\vec{N} - \vec{1}_i - \vec{1}_j) = \begin{cases} Q_{c,k}(\vec{N}) & \text{for } c \ne i \text{ and } c \ne j, \\ Q_{c,k}(\vec{N}) - E_{c,k} & \text{for } (c = i \text{ or } c = j) \text{ and } i \ne j, \\ Q_{c,k}(\vec{N}) - 2E_{c,k} & \text{for } c = i = j, \end{cases}    (2.36)

for c = 1, 2, ..., C, i = 1, 2, ..., C, j = 1, 2, ..., C, and k = 1, 2, ..., K.

3. For i = 1, 2, ..., C, repeat the following:

(a) For j = 1, 2, ..., C, repeat the following calculations:

    R_{j,k}(\vec{N} - \vec{1}_i) = D_{j,k} \cdot \left(1 + \sum_{c=1}^{C} Q_{c,k}(\vec{N} - \vec{1}_i - \vec{1}_j)\right) for k = 1, 2, ..., K,    (2.37)

    R_j(\vec{N} - \vec{1}_i) = \sum_{k=1}^{K} R_{j,k}(\vec{N} - \vec{1}_i),    (2.38)

    X_j(\vec{N} - \vec{1}_i) = \begin{cases} \frac{N_j}{R_j(\vec{N} - \vec{1}_i) + Z_j} & \text{for } i \ne j, \\ \frac{N_j - 1}{R_j(\vec{N} - \vec{1}_i) + Z_j} & \text{for } i = j, \end{cases}    (2.39)

    Q_{j,k}(\vec{N} - \vec{1}_i) = X_j(\vec{N} - \vec{1}_i) \cdot R_{j,k}(\vec{N} - \vec{1}_i) for k = 1, 2, ..., K.    (2.40)

(b) Let R_{i,k}(\vec{N}) = D_{i,k} \cdot \left(1 + \sum_{c=1}^{C} Q_{c,k}(\vec{N} - \vec{1}_i)\right) for k = 1, 2, ..., K, and R_i(\vec{N}) = \sum_{k=1}^{K} R_{i,k}(\vec{N}), and X_i(\vec{N}) = \frac{N_i}{R_i(\vec{N}) + Z_i}.

4. Execute steps 2-4 of the PAMI algorithm to ensure that the utilization of each center does not exceed one.
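The PAMT steps can likewise be sketched in Python. This is an illustrative reading of equations (2.36)-(2.40); step 4 (the PAMI utilization check) is omitted for brevity, and all names are this sketch's own conventions.

```python
def pamt(D, N, Z):
    """Sketch of PAMT: run the final two exact-MVA steps starting from
    proportionally allocated queue lengths (steps 1-3 of the text)."""
    C, K = len(D), len(D[0])
    E = [[D[c][k] / sum(D[c]) for k in range(K)] for c in range(C)]   # step 1
    Q = [[E[c][k] * N[c] for k in range(K)] for c in range(C)]        # step 2

    def q_two_removed(c, i, j, k):
        # (2.36): Q_{c,k}(N - 1_i - 1_j) via proportional decrements
        drop = (1 if c == i else 0) + (1 if c == j else 0)
        return Q[c][k] - drop * E[c][k]

    X, R = [0.0] * C, [0.0] * C
    Qm = [[[0.0] * K for _ in range(C)] for _ in range(C)]  # Qm[i][j][k] ~ Q_{j,k}(N - 1_i)
    for i in range(C):                                      # step 3
        for j in range(C):                                  # step 3(a)
            Rjk = [D[j][k] * (1 + sum(q_two_removed(c, i, j, k)
                                      for c in range(C)))
                   for k in range(K)]                       # (2.37)
            Rj = sum(Rjk)                                   # (2.38)
            pop = N[j] - (1 if i == j else 0)
            Xj = pop / (Rj + Z[j])                          # (2.39)
            Qm[i][j] = [Xj * Rjk[k] for k in range(K)]      # (2.40)
        Rik = [D[i][k] * (1 + sum(Qm[i][c][k] for c in range(C)))
               for k in range(K)]                           # step 3(b)
        R[i] = sum(Rik)
        X[i] = N[i] / (R[i] + Z[i])
    # step 4 (the PAMI utilization check) is omitted in this sketch
    return X, R
```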


The space complexities of all three PAM algorithms are O(KC). The time complexities of the PAMB and PAMI algorithms are O(KC), and the time complexity of the PAMT algorithm is O(KC^2).

2.2.7 The de Souza e Silva-Lavenberg-Muntz Clustering Approximation Algorithms

de Souza e Silva et al. [36, 37] proposed a set of algorithms, termed Clustering Approximation (CA) algorithms, whose costs are intermediate between those of the PE and Linearizer algorithms. Before applying these algorithms, the network must be divided into subnetworks, which are not necessarily disjoint but whose union must be the entire network. For each subnetwork, a subset of the customer classes that visit the subnetwork are designated as local classes (LC) and the remaining customer classes that visit the subnetwork are designated as foreign classes (FC). The criteria for choosing the local classes are that LC customers contribute little to the utilizations of centers in the complement of the subnetwork and FC customers contribute little to the utilizations of centers in the subnetwork. However, the utilizations of centers are unknown at this stage. de Souza e Silva et al. [36, 37] suggested that the Hsieh-Lam PAMB algorithm could be used to estimate the utilizations of centers because of the low computational costs of the PAMB algorithm. Each of the CA algorithms consists of three steps:

1. For each subnetwork S and every local class c of S, estimate the mean delay time P_c of a local class c customer in the complement of S, and the utilization U_k of all foreign classes at every center k in S, using the following formulas:

    P_c = \sum_{k \notin S} \frac{D_{c,k} \cdot (1 + Q_k(\vec{N}))}{1 + D_{c,k} \cdot X_c(\vec{N})/N_c}, \quad \forall c \in LC(S),    (2.41)

and

    U_k = \sum_{i \in FC(S)} \frac{D_{i,k} \cdot X_i(\vec{N})}{1 + D_{i,k} \cdot X_i(\vec{N})/N_i}, \quad \forall k \in S.    (2.42)


2. Solve each subnetwork S using either the PE algorithm or the Linearizer algorithm, but replacing (2.4) and (2.6) with

    X_c(\vec{N}) = \frac{N_c}{\sum_{k \in S} R_{c,k}(\vec{N}) + Z_c + P_c}, \quad \forall c \in LC(S),    (2.43)

and

    Q_k(\vec{N}) = \frac{\sum_{c \in LC(S)} R_{c,k}(\vec{N}) \cdot X_c(\vec{N}) + U_k}{1 - U_k}, \quad \forall k \in S.    (2.44)

3. If the differences between the current estimates of Q_{c,k}(\vec{N}) and the previous ones are all less than a specified tolerance, then terminate. Otherwise, apply (2.41) and (2.42) to obtain new estimates of P_c and U_k, and go to step 2.

Note that the CA algorithm that chooses the PE algorithm for all subnetworks at step 2 is identical to the PE algorithm for the entire network. Since the Linearizer algorithm is more accurate and more costly than the PE algorithm, these CA algorithms provide the flexibility to trade computational costs for accuracy. However, the solutions of the same CA algorithm will differ depending on the way the network is divided and the local classes for each subnetwork are selected. Also, the computational costs of the CA algorithms depend on the approximate MVA algorithm used to solve each subnetwork and on the choice of subnetworks and their local and foreign classes.
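The two estimation formulas of step 1 reduce to straightforward sums. The following hedged sketch assumes that the current estimates X[c] (class throughputs) and Qk[k] (total queue lengths) are supplied by some other algorithm such as PAMB, and represents the subnetwork S and the local class set as Python sets of indices; all names are this sketch's conventions.

```python
def ca_estimates(D, N, X, Qk, S, local):
    """Step 1 of the CA algorithms, eqs (2.41)-(2.42): the mean delay
    P_c outside subnetwork S for each local class, and the utilization
    U_k of the foreign classes at each center of S."""
    C, K = len(D), len(D[0])
    foreign = [c for c in range(C) if c not in local]
    P = {c: sum(D[c][k] * (1.0 + Qk[k]) / (1.0 + D[c][k] * X[c] / N[c])
                for k in range(K) if k not in S)
         for c in local}                                            # (2.41)
    U = {k: sum(D[i][k] * X[i] / (1.0 + D[i][k] * X[i] / N[i])
                for i in foreign)
         for k in S}                                                # (2.42)
    return P, U
```

These estimates would then feed the modified equations (2.43)-(2.44) inside the per-subnetwork PE or Linearizer solve.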

2.2.8 The Zahorjan-Eager-Sweillam Aggregate Queue Length Algorithm

Zahorjan et al. [42] proposed a modification, termed the Aggregate Queue Length (AQL) algorithm, of the Linearizer algorithm. The AQL algorithm is more efficient than, and yet obtains accuracy similar to, the Linearizer algorithm. Like the Linearizer algorithm, the AQL algorithm replaces (2.1) with an algebraically equivalent expression

    A_k^{(c)}(\vec{N}) = (N - 1) \left( \frac{Q_k(\vec{N})}{N} + \gamma_{c,k}(\vec{N}) \right),    (2.45)

where

    \gamma_{c,k}(\vec{N}) = \frac{Q_k(\vec{N} - \vec{1}_c)}{N - 1} - \frac{Q_k(\vec{N})}{N}.    (2.46)


Also, the AQL algorithm uses the approximation

    \gamma_{c,k}(\vec{N} - \vec{1}_j) \approx \gamma_{c,k}(\vec{N}) \quad \text{for any class } j,    (2.47)

and uses iterations to successively determine better approximations for the γ-terms in the same fashion as the Linearizer algorithm. As with the Linearizer algorithm, many different approaches have been suggested. A simple implementation of the AQL algorithm is as follows.

• Initialization. Set \gamma_{c,k}(\vec{N}) = \gamma_{c,k}(\vec{N} - \vec{1}_j) = 0 for k = 1, 2, ..., K, c = 1, 2, ..., C, and j = 1, 2, ..., C.

• Step 1. Solve the system of nonlinear equations consisting of (2.45) and (2.2)-(2.6) at population \vec{N}.

• Termination Test. If the termination conditions are satisfied, then stop.

• Step 2. Solve the system of nonlinear equations consisting of (2.45) and (2.2)-(2.6) at each of the \vec{N} - \vec{1}_i populations.

• Step 3. Update the γ-terms via (2.46) and (2.47), and go to Step 1.

As with any iterative algorithm, the accuracy of the AQL algorithm depends on the choice of termination conditions. Zahorjan et al. [42] suggested that terminating after three iterations of the Linearizer algorithm is not sufficiently accurate. Approaches that prevent premature termination without the desired accuracy have been recommended [42]. The space complexity of the AQL algorithm is O(KC), as compared to O(KC^2) for the Linearizer algorithm. The time complexity of the AQL algorithm is O(KC^2), as compared to O(KC^3) for the Linearizer algorithm, if it is assumed that the number of iterations is a constant independent of the network parameters.

2.2.9 The de Souza e Silva-Muntz Improved Linearizer Algorithm

de Souza e Silva and Muntz [38] proposed a variant of the Linearizer algorithm, called the Improved Linearizer (IL) algorithm, which reduces the time complexity by a factor of C but does not alter the numerical value of the solutions. The main idea is to rewrite (2.11) as

    A_k^{(c)}(\vec{N}) = \left[ \sum_{i=1}^{C} (N_i - \delta_c^{(i)}) \frac{Q_{i,k}(\vec{N})}{N_i} \right] + \gamma_{c,k}(\vec{N}),    (2.48)

where

    \delta_c^{(i)} = \begin{cases} 0 & \text{for } i \ne c, \\ 1 & \text{for } i = c, \end{cases}

and

    \gamma_{c,k}(\vec{N}) = \sum_{i=1}^{C} (N_i - \delta_c^{(i)}) \left[ \frac{Q_{i,k}(\vec{N} - \vec{1}_c)}{N_i - \delta_c^{(i)}} - \frac{Q_{i,k}(\vec{N})}{N_i} \right].    (2.49)

Hence the number of unknowns (the γ-terms) in (2.48) is reduced by a factor of C, in comparison to the number of unknowns in (2.11). Also, the approximation (2.13) of the original Linearizer algorithm is replaced by

    \gamma_{c,k}(\vec{N} - \vec{1}_j) \approx \gamma_{c,k}(\vec{N}) \quad \text{for any class } j.    (2.50)

If it is assumed that the number of iterations is a constant independent of the network parameters, then the space and time complexities of the IL algorithm are both O(KC^2), as contrasted with O(KC^2) and O(KC^3) respectively for the original Linearizer algorithm. Moreover, since the IL algorithm is nearly always more accurate than the AQL algorithm while having computational cost comparable to it, the IL algorithm dominates the AQL algorithm in practice.

2.2.10 The Schweitzer-Serazzi-Broglia Algorithms

Schweitzer et al. [34] proposed two new algorithms, called the Schweitzer-Serazzi-Broglia (SSB) algorithms. They are based on the expression

    A_k^{(c)}(\vec{N}) = Q_k(\vec{N}) + \vartheta_{c,k}(\vec{N}) - 1,    (2.51)

where

    \vartheta_{c,k}(\vec{N}) = Q_k(\vec{N} - \vec{1}_c) - [Q_k(\vec{N}) - 1].    (2.52)

The main issue of the algorithms is how to approximate the ϑ-terms. Schweitzer et al. [34] suggested that one simple approach is to use

    \vartheta_{c,k}(\vec{N} - \vec{1}_j) \approx \vartheta_{c,k}(\vec{N}) \quad \text{for any class } j,    (2.53)


and another more complicated and more accurate approximation is

    \vartheta_{c,k}(\vec{N} - \vec{1}_j - \vec{1}_i) \approx \vartheta_{c,k}(\vec{N} - \vec{1}_j) + \vartheta_{c,k}(\vec{N} - \vec{1}_i) - \vartheta_{c,k}(\vec{N}) \quad \text{for any classes } i \text{ and } j.    (2.54)

The SSB algorithm involving (2.53) is called the two-level SSB algorithm, and the SSB algorithm involving (2.54) is called the three-level SSB algorithm. Schweitzer et al. [34] suggested that the three-level SSB algorithm yields more accurate solutions than the Linearizer algorithm while having space complexity comparable to it. Although the SSB algorithms can find solutions using the successive substitution method, as does the original Linearizer algorithm, Schweitzer et al. [34] suggested that Newton's method can also be used to find solutions for the SSB algorithms. The equations of each SSB algorithm can be reformulated as an equivalent system of equations which is suitable for Newton's method. Let

    \delta_{c,j} = \begin{cases} 0 & \text{for } j \ne c, \\ 1 & \text{for } j = c, \end{cases}

and

    \omega_{c,j,t} = \begin{cases} 0 & \text{for } j \ne c \text{ and } t \ne c, \\ 2 & \text{for } t = j = c, \\ 1 & \text{otherwise.} \end{cases}

The new system of equations of the two-level SSB algorithm, which can be solved by Newton's method, is as follows:

    Q_k(\vec{N}) = \sum_{c=1}^{C} \frac{N_c D_{c,k} [Q_k(\vec{N}) + \vartheta_{c,k}(\vec{N})]}{Z_c + \sum_{k=1}^{K} D_{c,k} [Q_k(\vec{N}) + \vartheta_{c,k}(\vec{N})]},    (2.55)

    Q_k(\vec{N} - \vec{1}_j) = \sum_{c=1}^{C} \frac{(N_c - \delta_{c,j}) D_{c,k} [Q_k(\vec{N} - \vec{1}_j) + \vartheta_{c,k}(\vec{N})]}{Z_c + \sum_{k=1}^{K} D_{c,k} [Q_k(\vec{N} - \vec{1}_j) + \vartheta_{c,k}(\vec{N})]},    (2.56)

and

    \vartheta_{j,k}(\vec{N}) = 1 + Q_k(\vec{N} - \vec{1}_j) - Q_k(\vec{N}),    (2.57)

for 1 \le k \le K and 1 \le j \le C. The unknowns of the system are Q_k(\vec{N}), Q_k(\vec{N} - \vec{1}_j), and \vartheta_{j,k}(\vec{N}).
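Although the text reformulates the two-level system (2.55)-(2.57) for Newton's method, it can also be attacked by damped successive substitution. The following sketch does exactly that; the damping factor, iteration count, and names (Qk[k] for Q_k(N), Qm[j][k] for Q_k(N - 1_j), th[j][k] for the ϑ-terms) are this sketch's own choices, and convergence is not guaranteed for all inputs.

```python
def ssb_two_level(D, N, Z, iters=500, damp=0.5):
    """Damped fixed-point iteration on (2.55)-(2.57)."""
    C, K = len(D), len(D[0])
    Qk = [sum(N) / K] * K                      # balanced initial guess
    Qm = [[sum(N) / K] * K for _ in range(C)]
    th = [[1.0] * K for _ in range(C)]
    for _ in range(iters):
        newQk = [sum(N[c] * D[c][k] * (Qk[k] + th[c][k]) /
                     (Z[c] + sum(D[c][m] * (Qk[m] + th[c][m])
                                 for m in range(K)))
                     for c in range(C))
                 for k in range(K)]                                   # (2.55)
        newQm = [[sum((N[c] - (1 if c == j else 0)) * D[c][k] *
                      (Qm[j][k] + th[c][k]) /
                      (Z[c] + sum(D[c][m] * (Qm[j][m] + th[c][m])
                                  for m in range(K)))
                      for c in range(C))
                  for k in range(K)]
                 for j in range(C)]                                   # (2.56)
        Qk = [damp * n + (1 - damp) * o for n, o in zip(newQk, Qk)]
        Qm = [[damp * newQm[j][k] + (1 - damp) * Qm[j][k]
               for k in range(K)] for j in range(C)]
        th = [[1.0 + Qm[j][k] - Qk[k] for k in range(K)]
              for j in range(C)]                                      # (2.57)
    return Qk, Qm, th
```

When the iteration settles, the total queue length cannot exceed the total population, and the ϑ-terms remain moderate, consistent with their definition (2.52).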


The new system of equations of the three-level SSB algorithm, which can be solved by Newton's method, is as follows:

    Q_k(\vec{N}) = \sum_{c=1}^{C} \frac{N_c D_{c,k} [Q_k(\vec{N}) + \vartheta_{c,k}(\vec{N})]}{Z_c + \sum_{k=1}^{K} D_{c,k} [Q_k(\vec{N}) + \vartheta_{c,k}(\vec{N})]},    (2.58)

    Q_k(\vec{N} - \vec{1}_j) = \sum_{c=1}^{C} \frac{(N_c - \delta_{c,j}) D_{c,k} [Q_k(\vec{N} - \vec{1}_j) + \vartheta_{c,k}(\vec{N} - \vec{1}_j)]}{Z_c + \sum_{k=1}^{K} D_{c,k} [Q_k(\vec{N} - \vec{1}_j) + \vartheta_{c,k}(\vec{N} - \vec{1}_j)]},    (2.59)

    Q_k(\vec{N} - \vec{1}_j - \vec{1}_t) = \sum_{c=1}^{C} \frac{(N_c - \omega_{c,j,t}) D_{c,k} [Q_k(\vec{N} - \vec{1}_j - \vec{1}_t) + \vartheta_{c,k}(\vec{N} - \vec{1}_j) + \vartheta_{c,k}(\vec{N} - \vec{1}_t) - \vartheta_{c,k}(\vec{N})]}{Z_c + \sum_{k=1}^{K} D_{c,k} [Q_k(\vec{N} - \vec{1}_j - \vec{1}_t) + \vartheta_{c,k}(\vec{N} - \vec{1}_j) + \vartheta_{c,k}(\vec{N} - \vec{1}_t) - \vartheta_{c,k}(\vec{N})]},    (2.60)

    \vartheta_{j,k}(\vec{N}) = 1 + Q_k(\vec{N} - \vec{1}_j) - Q_k(\vec{N}),    (2.61)

and

    \vartheta_{j,k}(\vec{N} - \vec{1}_t) = 1 + Q_k(\vec{N} - \vec{1}_j - \vec{1}_t) - Q_k(\vec{N} - \vec{1}_t),    (2.62)

for 1 \le k \le K, 1 \le j \le C, and 1 \le t \le C. The unknowns of the system are Q_k(\vec{N}), Q_k(\vec{N} - \vec{1}_c), Q_k(\vec{N} - \vec{1}_c - \vec{1}_t), \vartheta_{j,k}(\vec{N}), and \vartheta_{j,k}(\vec{N} - \vec{1}_t).

2.2.11 Summary

Table 2.1 summarizes the computational costs and ranks the accuracies of all of the existing approximate MVA algorithms and the exact MVA algorithm for separable queueing networks. Since there is no analytic result on the accuracies of the algorithms, and the statistical results from different papers are generally incomparable, all of the existing approximate MVA algorithms are classified into five categories regarding their accuracies: (1) the algorithms whose accuracies are generally greater than that of the Linearizer algorithm; (2) the algorithms whose accuracies are approximately equal to that of the Linearizer algorithm; (3) the algorithms whose accuracies are generally greater than that of the PE algorithm and less than that of the Linearizer algorithm; (4) the algorithms whose accuracies are approximately equal to that of the PE algorithm; (5) the algorithms whose accuracies are generally less than that of the PE algorithm.


    Algorithm           | Time Complexity               | Space Requirement             | Rank of Accuracy | Main Reference
    --------------------|-------------------------------|-------------------------------|------------------|---------------
    Exact MVA           | O(KC \prod_{c=1}^{C}(N_c+1))  | O(KC \prod_{c=1}^{C}(N_c+1))  | --               | [31]
    LCP                 | O(KC)                         | O(KC)                         | 5                | [2]
    PE                  | O(KC)                         | O(KC)                         | 4                | [33]
    Original Linearizer | O(KC^3)                       | O(KC^2)                       | 2                | [9]
    SA                  | O(KC)                         | O(KC)                         | 3                | [8]
    Looping             | O(KC^2)                       | O(KC^2)                       | 3 (1)            | [16]
    PAMB                | O(KC)                         | O(KC)                         | 5                | [20]
    PAMI                | O(KC)                         | O(KC)                         | 5                | [20]
    PAMT                | O(KC^2)                       | O(KC)                         | 5                | [20]
    CA                  | O(KC) to O(KC^2)              | O(KC) to O(KC^2)              | 3                | [36, 37]
    AQL                 | O(KC^2)                       | O(KC)                         | 2                | [42]
    Improved Linearizer | O(KC^2)                       | O(KC^2)                       | 2                | [38]
    Two-level SSB       | O(KC^2)                       | O(KC^2)                       | 3 (1)            | [34]
    Three-level SSB     | O(KC^3) (1)                   | O(KC^2) (1)                   | 1                | [34]

Table 2.1: Summary of all of the existing approximate MVA algorithms and the exact MVA algorithm

(1) These figures are not from the original references. They are derived as part of the preliminary research work for this thesis.

Chapter 3

New Results for Existing Approximate MVA Algorithms

Although the approximate MVA algorithms are widely used by performance analysts, there is more work to be done. The following is a summary of the previous results concerning the behavior of the existing approximate MVA algorithms for separable queueing networks:

1. Although some results have been obtained regarding the convergence of certain iterative approximate MVA algorithms, the convergence of most iterative algorithms, as well as the existence and uniqueness of solutions of the algorithms, have not been proven, especially for multiple class separable queueing networks. Chow [8] proved the convergence of the LCP algorithm and the existence and uniqueness of the solutions of the LCP algorithm for multiple class separable queueing networks. Eager and Sevcik [17] proved the existence and uniqueness of the solutions of the PE algorithm for single class separable queueing networks, and Pattipati et al. [28] proved the existence and asymptotic (i.e., as the number of customers in each class approaches infinity) uniqueness of the solutions of the PE algorithm for multiple class separable queueing networks. Eager and Sevcik [17] and Agrawal [1] independently proved that the PE algorithm with different initialization procedures converges to a unique solution for single class separable queueing networks. Pattipati et al. [28] showed the convergence of the PE algorithm for multiple class separable queueing networks when the number of customers in each class tends to infinity.

2. It is unknown whether the order of the number of iterations required for these iterative algorithms to reach convergence is independent of the network parameters or not. This is important because the number of iterations will affect the execution time of the algorithms. If the number of iterations is independent of the network parameters, then the time complexities of the algorithms are independent of the network parameters. Most previous works have made this assumption.

3. It is still unclear whether general purpose fast numerical solution techniques, such as Newton's method, can be used instead of the successive substitution method to reduce the execution time of the existing iterative approximate MVA algorithms. When the approximate MVA algorithms were first introduced, the successive substitution method was recommended because of its simplicity. Schweitzer [33], Chow [8], Pattipati et al. [28], and others suggested the use of fast numerical solution techniques instead of the successive substitution method to reduce the execution time. However, Zahorjan et al. [42] found that no general purpose fast numerical solution technique is sufficiently reliable, and that the successive substitution method generally converges very quickly.

4. There are few formal results about the convergence rates of the existing iterative approximate MVA algorithms. Since only lower bounds on the convergence rates of numerical solution techniques such as the successive substitution method and Newton's method are known, it is still unclear whether the fast numerical solution techniques, instead of the successive substitution method, can be used to accelerate convergence of the existing iterative approximate MVA algorithms. Moreover, if the exact convergence rates were known, then it could be concluded whether the order of the number of iterations of the approximate MVA algorithms is independent of the network parameters or not.

5. There are few results about error bounds for the existing approximate MVA algorithms. The only analytic error bounds were developed by Chow [8] for the LCP algorithm.


In this chapter, new numerical and complexity analysis results for existing approximate MVA algorithms will be presented regarding the above open questions. Most of the new results will focus on the PE and Linearizer algorithms because they are the two most popular approximate MVA algorithms for separable queueing networks. Some additional notation will prove useful. For single class separable queueing networks, the subscript c on the performance measures defined in Chapter 2 is dropped, and the network population is denoted by a scalar variable N. Also, the superscript (i) on performance measures denotes the quantities resulting from the ith iteration using a particular iterative technique, starting from the initial values superscripted by (0).

3.1 New Numerical Analysis Results

The systems of nonlinear equations for existing approximate MVA algorithms can be solved numerically using different general purpose numerical solution techniques, such as the successive substitution method and Newton's techniques [27, 32, 14]. Of course, the final solution of any iterative approximate MVA algorithm should be independent of the numerical technique chosen. In this section, the convergence rates of different numerical techniques will be examined for the PE and Linearizer algorithms.

3.1.1 Definitions and Notation

Let ℘ be the solution vector of the approximate MVA algorithms. It can be a vector consisting of all performance measures of interest defined in Chapter 2, or it can be just a vector consisting of a subset of these performance measures with which ℘ can be solved, and the remaining performance measures can subsequently be derived from ℘ directly. For example, ℘ can be a vector consisting of only the X_c, the class throughputs. Let F be the mapping defined by the algorithms such that ℘ = F(℘). The successive substitution method [32] computes the estimates of the results by a formula of the form

    ℘^{(i+1)} = F(℘^{(i)}).    (3.1)
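As a concrete instance of (3.1), the sketch below applies successive substitution to the single class PE (Bard-Schweitzer) algorithm, taking the solution vector to be the vector of mean queue lengths. The mapping F used here is the standard single class PE sweep (the estimate Q_k(N - 1) ≈ ((N - 1)/N) Q_k(N) inside one MVA step); all names are this sketch's conventions.

```python
def pe_step(Q, D, N, Z):
    """One application of the mapping F: an MVA sweep with the PE
    estimate Q_k(N-1) ~ ((N-1)/N) * Q_k(N). D[k] is the demand at
    center k, N the population, Z the think time."""
    R = [D[k] * (1.0 + (N - 1) / N * Q[k]) for k in range(len(D))]
    X = N / (Z + sum(R))
    return [X * r for r in R]

def successive_substitution(D, N, Z, tol=1e-12, max_iter=10000):
    """Iterate Q(i+1) = F(Q(i)) until successive iterates agree."""
    Q = [N / len(D)] * len(D)   # balanced initial estimate
    for i in range(1, max_iter + 1):
        nxt = pe_step(Q, D, N, Z)
        if max(abs(a - b) for a, b in zip(nxt, Q)) < tol:
            return nxt, i
        Q = nxt
    return Q, max_iter
```

At the fixed point the construction conserves the population: with throughput X computed from the converged Q, the quantity sum(Q) + X*Z equals N.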


Let G(℘) = F(℘) − ℘. If G is continuously differentiable, Newton's method [27, 32, 14] computes the estimates of the results by a formula of the form

    ℘^{(i+1)} = ℘^{(i)} − J^{-1}(℘^{(i)}) · G(℘^{(i)}),    (3.2)

where J is the Jacobian matrix of G and must be nonsingular. If the analytic Jacobian is not available or is too complicated to be computed, it can be replaced numerically with a finite-difference approximation or with secant approximations such as Broyden's approximation [27, 32, 14]. Moreover, all Newton's method based techniques only guarantee local convergence, which means they are sensitive to initial values and may yield unphysical solutions for some sets of initial values. The globally convergent modifications of Newton's method generally require initial bounds on the location of the solution and require more computational effort [14]. In this thesis, all Newton's method based techniques (including secant methods) are referred to as Newton's techniques.

Since the inverse of the Jacobian matrix, or an approximation to it, must be computed during the iterations of Newton's techniques, the computational costs of Newton's techniques increase with the dimension of the Jacobian matrix or its approximation [14]. In order to reduce this dimension, before the nonlinear equation systems of the existing iterative approximate MVA algorithms are solved numerically, they can be algebraically reduced to equivalent simpler nonlinear systems with fewer unknown variables. Normally, the reduced nonlinear equation systems will contain only R_c-terms [17], or X_c-terms [8, 28], or Q_k-terms [33, 34]. The rest of the performance measures can be computed directly from these variables. Although this approach generally reduces the execution time of the approximate MVA algorithms when Newton's techniques are used, it will not reduce the execution time when the successive substitution method is used.

Most of the major issues mentioned at the beginning of this chapter concern the convergence of the iterative approximate MVA algorithms and their convergence rates.
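As a concrete instance of (3.2), the sketch below solves a single class PE (Bard-Schweitzer) fixed point, written as G(Q) = F(Q) − Q = 0, using a forward finite-difference Jacobian and Gaussian elimination. The step size, tolerances, and names are this sketch's own choices, and the PE mapping F is the standard single class sweep.

```python
def pe_step(Q, D, N, Z):
    """The mapping F: one MVA sweep with the PE estimate."""
    R = [D[k] * (1.0 + (N - 1) / N * Q[k]) for k in range(len(D))]
    X = N / (Z + sum(R))
    return [X * r for r in R]

def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting (tiny systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def newton_pe(D, N, Z, tol=1e-10, max_iter=50, h=1e-7):
    K = len(D)
    Q = [N / K] * K
    for it in range(1, max_iter + 1):
        G = [f - q for f, q in zip(pe_step(Q, D, N, Z), Q)]
        if max(abs(g) for g in G) < tol:
            return Q, it
        J = [[0.0] * K for _ in range(K)]
        for j in range(K):                  # finite-difference Jacobian of G
            Qh = Q[:]
            Qh[j] += h
            Gh = [f - q for f, q in zip(pe_step(Qh, D, N, Z), Qh)]
            for i in range(K):
                J[i][j] = (Gh[i] - G[i]) / h
        step = solve_linear(J, [-g for g in G])
        Q = [q + s for q, s in zip(Q, step)]
    return Q, max_iter
```

In small experiments of this kind, the Newton iteration needs far fewer iterations than successive substitution, at the price of forming and factoring the Jacobian at each step.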
We summarize here the definitions of the convergence and the convergence rate of an iterative algorithm that can be found in some numerical analysis books [27, 32, 14].

Definition 3.1 Let ‖·‖ be an arbitrary vector norm, let ℘^{(i)} be the solution vector resulting from i iterations of an iterative algorithm, and let ℘* be a solution of the associated system of equations. The iterative algorithm converges if and only if

    lim_{i→∞} ‖℘^{(i)} − ℘*‖ = 0.    (3.3)

□

Definition 3.2 The above definition implies that an iterative algorithm converges if any one of the following conditions is satisfied:

(1) There are a constant c ∈ [0, 1) and an integer î such that

    ‖℘^{(i+1)} − ℘*‖ ≤ c ‖℘^{(i)} − ℘*‖, ∀i ≥ î.    (3.4)

(2) There exist a sequence α_i tending to 0 and an integer î such that

    ‖℘^{(i+1)} − ℘*‖ ≤ α_i ‖℘^{(i)} − ℘*‖, ∀i ≥ î.    (3.5)

(3) There are constants C ≥ 0 and β > 1 and an integer î such that

    ‖℘^{(i+1)} − ℘*‖ ≤ C ‖℘^{(i)} − ℘*‖^β, ∀i ≥ î.    (3.6)

□

Definition 3.3 For any convergent sequence {℘^{(i)}} in an arbitrary Euclidean space with limit ℘*, the quantities

    Q_p{℘^{(i)}} = 0, if ℘^{(i)} = ℘* for i ≥ î;
    Q_p{℘^{(i)}} = limsup_{i→∞} ‖℘^{(i+1)} − ℘*‖ / ‖℘^{(i)} − ℘*‖^p, if ℘^{(i)} ≠ ℘* for i ≥ î;
    Q_p{℘^{(i)}} = +∞, otherwise,

for all p ∈ [1, +∞) and an integer î, are the quotient-convergence factors (q-factors) of {℘^{(i)}} with respect to the particular norm. □

Definition 3.4 Let C(Ω, ℘*) denote the set of all sequences with limit ℘* generated by an iterative algorithm Ω. Then the q-factors of Ω with respect to the particular norm are defined as

    Q_p(Ω, ℘*) = sup{ Q_p{℘^{(i)}} | {℘^{(i)}} ∈ C(Ω, ℘*) }, for all p ∈ [1, +∞).

□


Definition 3.5 For any convergent sequence {℘^{(i)}} in an arbitrary Euclidean space with limit ℘*, the quantities

    R_p{℘^{(i)}} = limsup_{i→∞} ‖℘^{(i)} − ℘*‖^{1/i}, if p = 1;
    R_p{℘^{(i)}} = limsup_{i→∞} ‖℘^{(i)} − ℘*‖^{1/p^i}, if p > 1,

for all p ∈ [1, +∞), are the root-convergence factors (r-factors) of {℘^{(i)}} with respect to the particular norm. □

Definition 3.6 Let C(Ω, ℘*) denote the set of all sequences with limit ℘* generated by an iterative algorithm Ω. Then the r-factors of Ω with respect to the particular norm are defined as

    R_p(Ω, ℘*) = sup{ R_p{℘^{(i)}} | {℘^{(i)}} ∈ C(Ω, ℘*) }, for all p ∈ [1, +∞).

□

Definition 3.7 Let Ω be an iterative algorithm with limit ℘*. The q-order of the convergence rate of Ω at ℘* is defined as

    O_q(Ω, ℘*) = +∞, if Q_p(Ω, ℘*) = 0 for all p ∈ [1, +∞);
    O_q(Ω, ℘*) = inf{ p ∈ [1, +∞) | Q_p(Ω, ℘*) = +∞ }, otherwise;

and the r-order of the convergence rate of Ω at ℘* is defined as

    O_r(Ω, ℘*) = +∞, if R_p(Ω, ℘*) = 0 for all p ∈ [1, +∞);
    O_r(Ω, ℘*) = inf{ p ∈ [1, +∞) | R_p(Ω, ℘*) = +∞ }, otherwise.

□

These orders of the convergence rate are both norm-independent, and O_q(Ω, ℘*) ≤ O_r(Ω, ℘*) for any iterative algorithm Ω with limit ℘* [27, 32]. Also, if two iterative algorithms Ω_1 and Ω_2 share the same limit point ℘*, and O_q(Ω_1, ℘*) > O_q(Ω_2, ℘*), then Ω_1 is q-faster than Ω_2 at ℘*. Correspondingly, Ω_1 is r-faster than Ω_2 at ℘* when O_r(Ω_1, ℘*) > O_r(Ω_2, ℘*) [27, 32]. Based on the above definitions, an iterative algorithm satisfying (3.4) is said to converge at least q-linearly to the solution (i.e., O_r ≥ O_q ≥ 1). Similarly, an iterative algorithm satisfying (3.5) is said to converge at least q-superlinearly to the solution (i.e., O_r ≥ O_q > 1), and one satisfying (3.6) is said to converge to the solution with q-order at least β (i.e., O_r ≥ O_q ≥ β) [27, 14]. Only lower bounds on the q-order or r-order of the convergence rate have been established for most of the iterative techniques for solving nonlinear equation systems. For instance, Newton's method converges to the solution at least q-quadratically (i.e., O_r ≥ O_q ≥ 2), and Broyden's secant method converges to the solution at least q-superlinearly (i.e., O_r ≥ O_q > 1) [27, 32, 14].

3.1.2 The Successive Substitution Method for Approximate MVA Algorithms

It is desirable to know the convergence rates of different iterative algorithms as precisely as possible in order to compare the computational complexities of the algorithms. The following theorems show the exact convergence rate of the PE algorithm when it is solved by the successive substitution method.

Theorem 3.1 For single class separable queueing networks, when the PE algorithm is solved by the successive substitution method with any initialization procedure, if the algorithm converges and N > 1, then both the q-order and r-order of the convergence rate of the algorithm are exactly 1. □

The proof of Theorem 3.1 is given in Appendix 3.A. Eager and Sevcik [17] and Agrawal [1] independently proved that the PE algorithm with different initialization procedures converges to a unique solution for single class separable queueing networks.

Corollary 3.1 In the worst case, the q-order and r-order of the convergence rate of the PE algorithm solved by the successive substitution method are exactly 1. □

Corollary 3.2 For single class separable queueing networks, if the PE algorithm is solved by the successive substitution method and it converges, then the order of the number of iterations of the PE algorithm is a constant independent of the network parameters. □
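Theorem 3.1 can also be checked empirically: under successive substitution, the PE error norms should shrink by an (asymptotically) constant factor per iteration, the signature of exactly linear (q-order 1) convergence. The following sketch measures those ratios against a high-accuracy reference solution; the network parameters are arbitrary choices for illustration, and all names are this sketch's own.

```python
def pe_step(Q, D, N, Z):
    """One single class PE (Bard-Schweitzer) sweep."""
    R = [D[k] * (1.0 + (N - 1) / N * Q[k]) for k in range(len(D))]
    X = N / (Z + sum(R))
    return [X * r for r in R]

def error_ratios(D, N, Z, n_iter=80):
    """Ratios ||Q(i+1) - Q*|| / ||Q(i) - Q*|| in the max norm."""
    Qstar = [N / len(D)] * len(D)      # high-accuracy reference
    for _ in range(5000):
        Qstar = pe_step(Qstar, D, N, Z)
    Q = [N / len(D)] * len(D)
    errs = []
    for _ in range(n_iter):
        Q = pe_step(Q, D, N, Z)
        errs.append(max(abs(a - b) for a, b in zip(Q, Qstar)))
    # keep ratios while the error is well above the noise floor
    return [errs[i + 1] / errs[i]
            for i in range(len(errs) - 1) if errs[i] > 1e-6]
```

The tail of the ratio sequence settles near a constant in (0, 1), consistent with a q-order of exactly 1 and with the observation that an arbitrarily large number of iterations may be needed.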


When the Linearizer algorithm is solved by the successive substitution method, the structure of the algorithm is the same as that shown in Section 2.2.3, with the Core algorithm being solved by successive substitution. For single class queueing networks, the γ-terms in the Core algorithm simplify to γ_k = γ_k(N) = γ_k(N − 1) for k = 1, 2, ..., K. The following theorems show the exact convergence rate of the Linearizer algorithm when it is solved by the successive substitution method.

Theorem 3.2 For single class separable queueing networks, when the Linearizer algorithm is solved by the successive substitution method, both the q-order and r-order of the convergence rate of the algorithm are at most 1, if the following conditions hold:

1. The algorithm converges;

2. N > 1;

3. For 1 ≤ k ≤ K, γ_k is always greater than −1/(N − 1) during the iterations. □

The proof of Theorem 3.2 is given in Appendix 3.B.

Corollary 3.3 In the worst case, the q-order and r-order of the convergence rate of the Linearizer algorithm solved by the successive substitution method are less than or equal to 1, if the Linearizer algorithm converges. □

Corollary 3.4 For single class separable queueing networks, when the Linearizer algorithm is solved by the successive substitution method, the order of the number of iterations of the Linearizer algorithm is a constant independent of the network parameters if the following conditions hold:

1. The algorithm converges;

2. N > 1;

3. For 1 ≤ k ≤ K, γ_k is always greater than −1/(N − 1) during the iterations. □


The next theorem shows that condition 3 in Theorem 3.2 and Corollary 3.4 is satisfied.

Theorem 3.3 Suppose the Linearizer algorithm is solved by the successive substitution method. For single class separable queueing networks with N > 1, the γ-terms are all greater than −1/(N − 1) during each iteration of the Linearizer algorithm. □

The proof of Theorem 3.3 is given in Appendix 3.C. Zahorjan et al. [42] noticed that there exist some separable queueing networks for which the PE algorithm may require an arbitrarily large number of iterations to converge to the solution when it is solved by the successive substitution method. Schweitzer et al. [34] also noted that, for some separable queueing networks, the Linearizer algorithm requires a large number of iterations to converge to the solution when it is solved by the successive substitution method. Theorems 3.1 and 3.2 confirm their observations, because any iterative algorithm with a q-order of 1 for the convergence rate could require an arbitrarily large number of iterations to converge to the solution. Although the results that have been presented in this section have been derived only for the PE and Linearizer algorithms, similar results may be applicable to other iterative approximate MVA algorithms for separable queueing networks as well, because all such algorithms share the same structure. In Chapter 4, we will obtain similar results for two new algorithms introduced there.

3.1.3 Newton's Techniques for Approximate MVA Algorithms Both the q-order and r-order of the convergence rate of Newton's method are at least 2, and the convergence rates of all of the Newton's method based techniques are greater than 1, if the requirements of these techniques are satis ed [27, 32, 14]. When an existing approximate MVA algorithm is solved by Newton's method and if the requirements of Newton's method are satis ed, then the algorithm converges q-quadratically to the solution. Also, when any existing approximate MVA algorithm is solved by any one of Newton's techniques and if the requirements of the technique are satis ed, then the algorithm converges q-superlinearly to the solution. It is obvious that number of iterations for Newton's techniques is much less than that

Chapter 3. New Results for Existing Approximate MVA Algorithms 31

of the successive substitution method, when the approximate MVA algorithms are solved by them. Pattipati et al. [28] observed that the average number of iterations for a globally convergent version of Newton's method for the PE algorithm is approximately $\frac{1}{5}$ that of the successive substitution method for the PE algorithm, although the cost per iteration may differ in the two schemes. However, since not all separable queueing networks can meet the subtle requirements of Newton's techniques, Newton's techniques may fail to converge or may converge to unphysical solutions [42, 34].

3.1.4 Existence of the Solution of the Linearizer Algorithm

Although the existence and uniqueness properties of the solution of the PE algorithm have been well studied [17, 28], the existence and uniqueness of the solution of the Linearizer algorithm remain open. In this section, the existence of a solution of the Linearizer algorithm is established for single class separable queueing networks.

Theorem 3.4 There exists at least one solution to the Linearizer algorithm for single class separable queueing networks with $N > 1$. $\Box$


Proof. Lemma 3.1 in Appendix 3.B shows that there exists a unique solution to the Core algorithm for single class separable queueing networks, given that the $\delta$-terms are all greater than $-\frac{1}{N-1}$. By Theorem 3.3, all $\delta$-terms are always greater than $-\frac{1}{N-1}$ during the iterations of the Linearizer algorithm for single class separable queueing networks with $N > 1$, when the Linearizer algorithm is solved by the successive substitution method. Moreover, when the Linearizer algorithm is solved by the successive substitution method, the final solution of the Linearizer algorithm is the solution of the Core algorithm invoked during the last iteration. Because the existence of the final solution of an iterative algorithm is independent of the numerical technique used to solve the algorithm, there exists at least one solution to the Linearizer algorithm for single class separable queueing networks.


3.2 New Complexity Analysis Results

The space and time complexities are two measures that are widely used to evaluate the performance of an algorithm. The time complexity of an iterative algorithm is proportional to two factors: the cost of a single iterative step and the number of iterations. In the previous section, the number of iterations of the existing approximate MVA algorithms was investigated. In this section, we examine the space requirements, the cost of a single iterative step, and the overall time complexities of the existing approximate MVA algorithms when they are solved by different numerical techniques.

3.2.1 Space Complexity Analysis

When the existing approximate MVA algorithms are solved by the successive substitution method, most implementations of the algorithms require $R_{c,k}$, $R_c$, $X_c$, $Q_{c,k}$, and $Q_k$ to be stored simultaneously. Hence the space requirements of all of the iterative approximate MVA algorithms are at least $(2KC + 2C + K)$ when they are solved by the successive substitution method. However, before the nonlinear equation systems of the existing approximate MVA algorithms are solved numerically, they can be algebraically reduced to equivalent simpler nonlinear systems with fewer unknown variables. Normally, the reduced nonlinear equation systems contain only the $R_c$ or $X_c$ terms, and the rest of the performance measures can be computed directly from these variables. Therefore, the space requirements can be reduced to $C$ by this approach. This approach does not, however, reduce the time complexities of the existing approximate MVA algorithms when they are solved by the successive substitution method. When the existing approximate MVA algorithms are solved by Newton's techniques, the above approach can be applied as well. Therefore, a lower bound on the space requirement of all of the approximate MVA algorithms is $C$.


3.2.2 Time Complexity Analysis

When the PE algorithm is solved by the successive substitution method, it requires at least $(4KC - K)$ additions/subtractions and $(3KC + C)$ multiplications/divisions during each iterative step. When the PE algorithm is solved by Newton's techniques, it is necessary to compute the inverse of the Jacobian matrix or an approximation to it during the iterations [27, 32, 14]. Some of Newton's techniques with the highest known convergence rates, such as Newton's method, require one to compute the inverse of the Jacobian matrix or an approximation to it at each iteration. On most computers it is usually more effective to compute a matrix factorization rather than the inverse of the matrix; this may not be true on some vector computers [14]. Let $Y$ be the dimension of the matrix; the computational costs of all the factorizations are proportional to $Y^3$ and are given in Table 3.1 [14].

Factorization technique     Additions/Subtractions   Multiplications/Divisions
$PLU$                       $Y^3/3$                  $Y^3/3$
$QR$                        $2Y^3/3$                 $2Y^3/3$
$LL^T$, $LDL^T$             $Y^3/6$                  $Y^3/6$
$PLD_BL^TP^T$               $Y^3/6$                  $Y^3/6$
$PLTL^TP^T$                 $Y^3/6$                  $Y^3/6$

Table 3.1: Computational cost of matrix factorization

Obviously, the minimum dimension of the Jacobian matrix or its approximation is $C$, and the computational cost of a single iteration of the PE algorithm based on a Newton's technique that computes a matrix factorization at each step is at least $C^3/6$ additions/subtractions and $C^3/6$ multiplications/divisions. Therefore, although the number of iterations of Newton's techniques is much less than that of the successive substitution method, it may take more execution time when the PE algorithm is solved by Newton's techniques than when it is solved by the successive substitution method,


especially for separable queueing networks with a large number of customer classes. In fact, even for single class separable queueing networks, the computational cost during each iteration of Newton's techniques is large. The following example shows that for some single class separable queueing networks, solving the PE algorithm by Newton's techniques requires fewer iterations, but more execution time, than solving it by the successive substitution method.

Example 3.1 The PE algorithm is implemented, respectively, using the successive substitution method and a globally convergent modification of Newton's method [14]. The parameters for the successive substitution method implementation and the Newton's method implementation are identical and are listed in Table 3.2. For the network with the configuration given in Table 3.3, empirical results show that the PE algorithm requires far fewer iterations, but more execution time, when solved by Newton's method than when solved by the successive substitution method. The empirical results are summarized in Table 3.4. All of our experiments were conducted on a Sun Sparc 20 running SunOS 4.1. The execution time entries in Table 3.4 represent user mode CPU seconds, and were obtained by executing each algorithm 1,000,000 times inside an outer loop, then dividing the total execution time by 1,000,000 in order to amortize the measurement overhead. $\Box$

Implementation algorithm   Successive substitution            Newton's method
Initial estimates          Balanced queue lengths             Balanced queue lengths
Stopping criterion         Maximum change in queue lengths    Maximum change in queue lengths
                           less than a tolerance              less than a tolerance
Tolerance                  0.00005                            0.00005

Table 3.2: Parameters for different implementations of the PE algorithm

Because the Core algorithm of the Linearizer algorithm and the other existing approximate MVA algorithms share the same structure as the PE algorithm, similar conclusions are applicable to most of the iterative approximate MVA algorithms.


Server discipline                 Load independent
Number of classes ($C$)           1
Population size ($N$)             82
Thinking time of customers ($Z$)  0.0
Number of centers ($K$)           2
Loadings ($D_k$)                  112.1231, 234.4561

Table 3.3: Parameters for the network

Implementation algorithm              Number of iterations   Execution time
The successive substitution method    23                     $8.920 \times 10^{-5}$ seconds
Newton's method                       15                     $9.525 \times 10^{-5}$ seconds

Table 3.4: Empirical results of different implementations of the PE algorithm
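The qualitative effect in Tables 3.3 and 3.4 can be reproduced with a small sketch. The code below is a simplified reimplementation, not the thesis's actual code: it solves the single class PE equations both by successive substitution on the queue lengths and by a bisection-safeguarded Newton's method on the reduced equation $F(R) = R - \phi(R)$ from (3.22), with the derivative from (3.23), using the network of Table 3.3. Iteration counts depend on implementation details, so they only roughly match Table 3.4.

```python
def phi(R, D, N, Z):
    # Reduced PE fixed point map, equation (3.22).
    return sum(d * (Z + R) / (d * (1 - N) + Z + R) for d in D)

def dphi(R, D, N, Z):
    # phi'(R), equation (3.23).
    return sum(d * d * (1 - N) / (d * (1 - N) + Z + R) ** 2 for d in D)

def pe_substitution(D, N, Z, tol=5e-5, max_iter=1000):
    # Successive substitution on the queue lengths, balanced initialization.
    K = len(D)
    Q = [N / K] * K
    for it in range(1, max_iter + 1):
        Rk = [D[k] * (1 + (N - 1) / N * Q[k]) for k in range(K)]
        X = N / (Z + sum(Rk))
        Q_new = [X * r for r in Rk]
        if max(abs(a - b) for a, b in zip(Q_new, Q)) < tol:
            return sum(Rk), it
        Q = Q_new
    return sum(Rk), max_iter

def pe_newton(D, N, Z, tol=5e-5, max_iter=200):
    # Safeguarded Newton on F(R) = R - phi(R).  F is strictly increasing on
    # the feasible region (omega, +inf), so a bisection bracket guards
    # against Newton steps that would leave the region.
    omega = max(0.0, max((N - 1) * d - Z for d in D))
    lo, hi = omega + 1e-9 * (1.0 + omega), N * sum(D) + Z
    R = hi
    for it in range(1, max_iter + 1):
        F = R - phi(R, D, N, Z)
        if abs(F) < tol:
            return R, it
        if F > 0:
            hi = R
        else:
            lo = R
        step = R - F / (1.0 - dphi(R, D, N, Z))
        R = step if lo < step < hi else 0.5 * (lo + hi)
    return R, max_iter

D, N, Z = [112.1231, 234.4561], 82, 0.0   # the network of Table 3.3
R_sub, it_sub = pe_substitution(D, N, Z)
R_newt, it_newt = pe_newton(D, N, Z)
```

Both solvers agree on $R(N)$ to within their tolerances, with the Newton variant using visibly fewer iterations but more work per iteration.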

3.3 Summary

In this chapter, we have shown that the Newton's technique implementations of the PE and Linearizer algorithms may lead to lower numbers of iterations than the successive substitution method implementation. We have also shown that the Newton's technique implementations of the iterative approximate MVA algorithms may not always yield lower execution times than the successive substitution method implementations. Of course, we still expect that the former will yield lower execution times than the latter for most separable queueing networks with a small number of classes. As well, we have given a lower bound on the space complexities of the existing approximate MVA algorithms, and suggested an implementation that achieves it. The existence of a solution to the set of equations that arise from the Linearizer algorithm has been established for single class separable queueing networks.


Appendix 3.A

This appendix establishes Theorem 3.1. In order to prove Theorem 3.1, we need the following theorems.

Theorem 3.5 Let $\Phi$ be an iterative algorithm and let $C(\Phi, \wp^*)$ be the set of sequences generated by $\Phi$ which converge to the limit $\wp^*$. Suppose there exist $p \in [1, +\infty)$ and a constant $c_2$ such that, for any $\{\wp^{(i)}\} \in C(\Phi, \wp^*)$,
$$\| \wp^{(i+1)} - \wp^* \| \le c_2 \| \wp^{(i)} - \wp^* \|^p, \qquad \forall i \ge i_0 = i_0(\{\wp^{(i)}\}); \eqno(3.7)$$
then $O_r(\Phi, \wp^*) \ge O_q(\Phi, \wp^*) \ge p$. On the other hand, if there is a constant $c_1 > 0$ and some sequence $\{\wp^{(i)}\} \in C(\Phi, \wp^*)$ such that
$$\| \wp^{(i+1)} - \wp^* \| \ge c_1 \| \wp^{(i)} - \wp^* \|^p > 0, \qquad \forall i \ge i_0 = i_0(\{\wp^{(i)}\}); \eqno(3.8)$$
then $O_q(\Phi, \wp^*) \le O_r(\Phi, \wp^*) \le p$. Hence, if (3.7) and (3.8) both hold, then $O_r(\Phi, \wp^*) = O_q(\Phi, \wp^*) = p$. $\Box$

Ortega and Rheinboldt [27] proved this theorem.

Theorem 3.6 There exists a unique solution to the PE algorithm for single class separable queueing networks. $\Box$

Eager and Sevcik [17] proved this theorem.

Proof of Theorem 3.1. For single class separable networks, the PE algorithm defines the mapping
$$R(N) = \phi(R(N)), \eqno(3.9)$$
where $\phi$ is a continuous and differentiable function on the domain of $R(N)$. By Theorem 3.6, there exists a unique solution $R^*(N)$ of the residence time of the PE algorithm:
$$R^*(N) = \phi(R^*(N)). \eqno(3.10)$$
When the PE algorithm is solved by the successive substitution method, the algorithm computes the estimation of the results by a formula of the form
$$R^{(i+1)}(N) = \phi(R^{(i)}(N)). \eqno(3.11)$$


By (3.10), (3.11), and the Mean-Value Theorem [27],
$$| R^{(i+1)}(N) - R^*(N) | = | \phi(R^{(i)}(N)) - R^*(N) | = | \phi(R^{(i)}(N)) - \phi(R^*(N)) | = | \phi'(\xi^{(i)}) | \cdot | R^{(i)}(N) - R^*(N) |, \eqno(3.12)$$
where $\xi^{(i)}$ is a value between $R^{(i)}(N)$ and $R^*(N)$. Suppose $\phi'$ is a continuous function on the domain of $R(N)$ and
$$| \phi'(R^*(N)) | = c \neq 0. \eqno(3.13)$$
Hence, $c > 0$, and we can choose two constants $c_1$ and $c_2$ such that $0 < c_1 < c < c_2$. Since $\phi'$ is a continuous function, there exists a constant $\varepsilon > 0$ such that $c_1 \le | \phi'(\xi) | \le c_2$ for all $\xi \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon]$. By the convergence assumption of the theorem, as $i \to +\infty$, $R^{(i)}(N) \to R^*(N)$. Hence, there exists an integer $\iota$ such that $R^{(i)}(N) \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon]$ for $i \ge \iota$. Since $\xi^{(i)}$ in (3.12) is between $R^{(i)}(N)$ and $R^*(N)$, $\xi^{(i)} \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon]$ for $i \ge \iota$ as well. Hence,
$$c_1 \le | \phi'(\xi^{(i)}) | \le c_2, \qquad \forall i \ge \iota. \eqno(3.14)$$
Therefore, by (3.12),
$$c_1 | R^{(i)}(N) - R^*(N) | \le | R^{(i+1)}(N) - R^*(N) | \le c_2 | R^{(i)}(N) - R^*(N) |, \qquad \forall i \ge \iota. \eqno(3.15)$$
Hence, by Theorem 3.5, the theorem is established. To complete the proof, we only need to show that $\phi'$ is a continuous function on the domain of $R(N)$ and $| \phi'(R^*(N)) | \neq 0$ for $N > 1$. Consider the mapping defined by the PE algorithm:
$$R_k(N) = D_k \left[ 1 + (N-1) \frac{R_k(N)}{Z + R(N)} \right] \quad \text{for } 1 \le k \le K, \eqno(3.16)$$
and
$$R(N) = \sum_{k=1}^{K} R_k(N). \eqno(3.17)$$


Hence, (3.16) can be rewritten as
$$R_k(N) \left[ 1 - \frac{(N-1) D_k}{Z + R(N)} \right] = D_k. \eqno(3.18)$$
Since $D_k$ and $R_k(N)$ must be positive, any feasible solution must satisfy
$$1 - \frac{(N-1) D_k}{Z + R(N)} > 0 \quad \text{for } 1 \le k \le K, \eqno(3.19)$$
which is equivalent to
$$R(N) > (N-1) D_k - Z \quad \text{for } 1 \le k \le K. \eqno(3.20)$$
Also, $R(N)$ must be positive; therefore,
$$R(N) > \max \left\{ 0, \max_{k=1}^{K} \{ (N-1) D_k - Z \} \right\}. \eqno(3.21)$$
Isolating $R_k(N)$ on the left-hand side in (3.18), summing over $k$, and applying (3.17) yields
$$R(N) = \sum_{k=1}^{K} \frac{D_k}{1 - \frac{(N-1) D_k}{Z + R(N)}} = \sum_{k=1}^{K} \frac{D_k [Z + R(N)]}{D_k - N D_k + Z + R(N)} = \phi(R(N)). \eqno(3.22)$$
Hence,
$$\phi'(R(N)) = \sum_{k=1}^{K} \frac{D_k^2 (1-N)}{[D_k (1-N) + Z + R(N)]^2}. \eqno(3.23)$$
In the region defined by inequality (3.21), it is not difficult to verify that $\phi'$ is a continuous function, and $\phi'(R(N)) < 0$ for $N > 1$. Moreover, $\phi'(R(N)) \to 0$ if and only if $R(N) \to +\infty$. By Theorem 3.6, $R(N)$ has a unique solution $R^*(N)$ (i.e., $R^*(N) \not\to +\infty$). Hence, $\phi'(R^*(N)) \neq 0$. Therefore, $| \phi'(R^*(N)) | \neq 0$ for $N > 1$. Hence, the proof of Theorem 3.1 is complete.

Appendix 3.B

This appendix establishes Theorem 3.2. In order to prove Theorem 3.2, we need the following lemmas.

Lemma 3.1 For single class separable queueing networks, there exists a unique solution to the Core algorithm, given that $\delta_k(N) > -\frac{1}{N-1}$ for all $k$. $\Box$


Proof. For $N = 1$, the Core algorithm sets $R_k(N) = D_k$ for all $k$, and the lemma holds. For $N > 1$, any solution to the Core algorithm must satisfy
$$R_k(N) = D_k \left\{ 1 + \left[ \frac{R_k(N)}{Z + R(N)} + \delta_k(N) \right] (N-1) \right\} \quad \text{for } 1 \le k \le K, \eqno(3.24)$$
and
$$R(N) = \sum_{k=1}^{K} R_k(N). \eqno(3.25)$$
Equation (3.24) can be rewritten as
$$R_k(N) \left[ 1 - \frac{(N-1) D_k}{Z + R(N)} \right] = D_k [1 + (N-1) \delta_k(N)]. \eqno(3.26)$$
Since $R_k(N)$ and $D_k$ must be positive, and $\delta_k(N) > -\frac{1}{N-1}$, any feasible solution must satisfy
$$1 - \frac{(N-1) D_k}{Z + R(N)} > 0 \quad \text{for } 1 \le k \le K, \eqno(3.27)$$
which is equivalent to
$$R(N) > (N-1) D_k - Z \quad \text{for } 1 \le k \le K. \eqno(3.28)$$
Also, $R(N)$ must be positive; therefore,
$$R(N) > \max \left\{ 0, \max_{k=1}^{K} \{ (N-1) D_k - Z \} \right\}. \eqno(3.29)$$
Isolating $R_k(N)$ on the left-hand side in (3.26), summing over $k$, and applying (3.25) yields
$$R(N) = \sum_{k=1}^{K} \frac{D_k [1 + (N-1)\delta_k(N)]}{1 - \frac{(N-1) D_k}{Z + R(N)}} = \sum_{k=1}^{K} \frac{D_k [1 + (N-1)\delta_k(N)] [Z + R(N)]}{Z + R(N) - (N-1) D_k}. \eqno(3.30)$$
Dividing both sides of (3.30) by $R(N)$ yields
$$1 = \sum_{k=1}^{K} \left[ \frac{D_k [1 + (N-1)\delta_k(N)] Z}{R(N) [Z + R(N) - (N-1) D_k]} + \frac{D_k [1 + (N-1)\delta_k(N)]}{Z + R(N) - (N-1) D_k} \right]. \eqno(3.31)$$
Since $N$, $Z$, $D_k$, and $\delta_k(N)$ are all known (i.e., constants), and the only unknown variable in (3.31) is $R(N)$, we define functions $g$, $f_k$, and $h_k$ with respect to $R(N)$ for $1 \le k \le K$, where
$$f_k(R(N)) = \frac{D_k [1 + (N-1)\delta_k(N)] Z}{R(N) [Z + R(N) - (N-1) D_k]}, \eqno(3.32)$$
and
$$h_k(R(N)) = \frac{D_k [1 + (N-1)\delta_k(N)]}{Z + R(N) - (N-1) D_k}, \eqno(3.33)$$
and
$$g(R(N)) = \sum_{k=1}^{K} [f_k(R(N)) + h_k(R(N))]. \eqno(3.34)$$
Hence, equation (3.31) can be rewritten as
$$1 = \sum_{k=1}^{K} [f_k(R(N)) + h_k(R(N))] = g(R(N)). \eqno(3.35)$$
Note that in the region defined by inequality (3.29), all of $g$, $f_k$, and $h_k$ are continuous functions with respect to $R(N)$ for $\delta_k(N) > -\frac{1}{N-1}$. In fact, if $\delta_k(N) > -\frac{1}{N-1}$, then $f_k'(R(N)) < 0$ and $h_k'(R(N)) < 0$ for $1 \le k \le K$ in the region defined by inequality (3.29). Therefore, as shown in Figure 3.1, $g$ is a monotone decreasing function with respect to $R(N)$ in the region defined by inequality (3.29) if $\delta_k(N) > -\frac{1}{N-1}$. Let $\omega$ denote the right-hand side of inequality (3.29), i.e.,
$$\omega = \max \left\{ 0, \max_{k=1}^{K} \{ (N-1) D_k - Z \} \right\}.$$
It is easy to see that
$$\lim_{R(N) \to \omega} g(R(N)) = +\infty, \eqno(3.36)$$
and
$$\lim_{R(N) \to +\infty} g(R(N)) = 0. \eqno(3.37)$$
Since $g(R(N))$ decreases monotonically from $+\infty$ to $0$ as $R(N)$ increases from $\omega$ to $+\infty$, it must take on the value $1$ exactly once; this value of $R(N)$ is the unique solution to equations (3.24) and (3.35). According to (3.26), the $R_k(N)$ values are uniquely determined by $R(N)$, and the lemma holds.
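Because $g$ decreases monotonically from $+\infty$ to $0$ on $(\omega, +\infty)$, the Core fixed point can be located by simple bracketing and bisection on $g(R) = 1$. The sketch below is illustrative only; all parameters, including the $\delta_k$ values, are hypothetical:

```python
def core_g(R, D, delta, N, Z):
    # g(R) = sum_k [f_k(R) + h_k(R)], equations (3.32)-(3.34).
    total = 0.0
    for d, dl in zip(D, delta):
        num = d * (1 + (N - 1) * dl)
        den = Z + R - (N - 1) * d
        total += num * Z / (R * den) + num / den
    return total

def solve_core(D, delta, N, Z, tol=1e-10):
    # Bracket the crossing g(R) = 1 in (omega, hi], then bisect.
    omega = max(0.0, max((N - 1) * d - Z for d in D))
    lo = omega + 1e-12 * (1.0 + omega)   # g -> +inf as R -> omega+
    hi = omega + 1.0
    while core_g(hi, D, delta, N, Z) > 1.0:   # g -> 0 as R -> +inf
        hi = 2.0 * hi + 1.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        if core_g(mid, D, delta, N, Z) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical single class network and delta-terms (each > -1/(N-1)):
D, delta, N, Z = [0.2, 0.5], [0.01, -0.02], 10, 1.0
R = solve_core(D, delta, N, Z)
# Per-center residence times recovered from equation (3.26):
Rk = [d * (1 + (N - 1) * dl) / (1 - (N - 1) * d / (Z + R))
      for d, dl in zip(D, delta)]
```

At the returned $R$, the residence times from (3.26) sum back to $R$, which is exactly the fixed-point condition (3.25).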

Lemma 3.2 For single class separable queueing networks, when the Core algorithm is solved by the successive substitution method with any initial values, if the Core algorithm converges and $\delta_k(N) > -\frac{1}{N-1}$ and $N > 1$, then both the q-order and r-order of the convergence rate of the algorithm are exactly 1. $\Box$

Figure 3.1: Plot of $g(R(N))$ vs $R(N)$ for the Core algorithm

Proof. For single class separable networks, the Core algorithm defines the mapping
$$R(N) = \phi(R(N)), \eqno(3.38)$$
where $\phi$ is a continuous and differentiable function on the domain of $R(N)$. By Lemma 3.1, there exists a unique solution $R^*(N)$ of the residence time of the Core algorithm:
$$R^*(N) = \phi(R^*(N)). \eqno(3.39)$$
When the Core algorithm is solved by the successive substitution method, the algorithm estimates the results by a formula of the form
$$R^{(i+1)}(N) = \phi(R^{(i)}(N)). \eqno(3.40)$$
By (3.39), (3.40), and the Mean-Value Theorem [27],
$$| R^{(i+1)}(N) - R^*(N) | = | \phi(R^{(i)}(N)) - R^*(N) | = | \phi(R^{(i)}(N)) - \phi(R^*(N)) | = | \phi'(\xi^{(i)}) | \cdot | R^{(i)}(N) - R^*(N) |, \eqno(3.41)$$

where $\xi^{(i)}$ is a value between $R^{(i)}(N)$ and $R^*(N)$. Suppose $\phi'$ is a continuous function on the domain of $R(N)$ and
$$| \phi'(R^*(N)) | = c \neq 0. \eqno(3.42)$$
Hence, $c > 0$, and we can choose two constants $c_1$ and $c_2$ such that $0 < c_1 < c < c_2$. Since $\phi'$ is a continuous function, there exists a constant $\varepsilon > 0$ such that $c_1 \le | \phi'(\xi) | \le c_2$ for all $\xi \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon]$. By the convergence assumption of the lemma, as $i \to +\infty$, $R^{(i)}(N) \to R^*(N)$. Hence, there exists an integer $\iota$ such that $R^{(i)}(N) \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon]$ for $i \ge \iota$. Since $\xi^{(i)}$ in (3.41) is between $R^{(i)}(N)$ and $R^*(N)$, $\xi^{(i)} \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon]$ for $i \ge \iota$ as well. Hence,
$$c_1 \le | \phi'(\xi^{(i)}) | \le c_2, \qquad \forall i \ge \iota. \eqno(3.43)$$
Therefore, by (3.41),
$$c_1 | R^{(i)}(N) - R^*(N) | \le | R^{(i+1)}(N) - R^*(N) | \le c_2 | R^{(i)}(N) - R^*(N) |, \qquad \forall i \ge \iota. \eqno(3.44)$$
Hence, by Theorem 3.5, the lemma is established. To complete the proof, we only need to show that $\phi'$ is a continuous function on the domain of $R(N)$ and $| \phi'(R^*(N)) | \neq 0$ for $N > 1$. Consider the mapping defined by the Core algorithm:
$$R_k(N) = D_k \left\{ 1 + (N-1) \left[ \frac{R_k(N)}{Z + R(N)} + \delta_k(N) \right] \right\} \quad \text{for } 1 \le k \le K, \eqno(3.45)$$
and
$$R(N) = \sum_{k=1}^{K} R_k(N). \eqno(3.46)$$

Hence, (3.45) can be rewritten as
$$R_k(N) \left[ 1 - \frac{(N-1) D_k}{Z + R(N)} \right] = D_k [1 + (N-1)\delta_k(N)]. \eqno(3.47)$$
Since $D_k$ and $R_k(N)$ must be positive, and $\delta_k(N) > -\frac{1}{N-1}$, any feasible solution must satisfy
$$1 - \frac{(N-1) D_k}{Z + R(N)} > 0 \quad \text{for } 1 \le k \le K, \eqno(3.48)$$
which is equivalent to
$$R(N) > (N-1) D_k - Z \quad \text{for } 1 \le k \le K. \eqno(3.49)$$
Also, $R(N)$ must be positive; therefore,
$$R(N) > \max \left\{ 0, \max_{k=1}^{K} \{ (N-1) D_k - Z \} \right\}. \eqno(3.50)$$
Isolating $R_k(N)$ on the left-hand side in (3.47), summing over $k$, and applying (3.46) yields
$$R(N) = \sum_{k=1}^{K} \frac{D_k [1 + (N-1)\delta_k(N)]}{1 - \frac{(N-1) D_k}{Z + R(N)}} = \sum_{k=1}^{K} \frac{D_k [1 + (N-1)\delta_k(N)] [Z + R(N)]}{D_k - N D_k + Z + R(N)} = \phi(R(N)). \eqno(3.51)$$
Hence,
$$\phi'(R(N)) = \sum_{k=1}^{K} \frac{D_k^2 (1-N) [1 + (N-1)\delta_k(N)]}{[D_k (1-N) + Z + R(N)]^2}. \eqno(3.52)$$
In the region defined by inequality (3.50), it is not difficult to verify that $\phi'$ is a continuous function, and $\phi'(R(N)) < 0$ for $N > 1$ and $\delta_k(N) > -\frac{1}{N-1}$. Moreover, $\phi'(R(N)) \to 0$ if and only if $R(N) \to +\infty$. By Lemma 3.1, $R(N)$ has a unique solution $R^*(N)$ (i.e., $R^*(N) \not\to +\infty$). Hence, $\phi'(R^*(N)) \neq 0$. Therefore, $| \phi'(R^*(N)) | \neq 0$ for $N > 1$ and $\delta_k(N) > -\frac{1}{N-1}$. Hence, the proof of this lemma is complete.

Proof of Theorem 3.2. The solution vector of the Linearizer algorithm contains only the performance measures at population $\vec{N}$. Moreover, the Core algorithm is invoked only once at population $\vec{N}$ in the outer loop of the Linearizer algorithm. If the Linearizer algorithm converges, then the Core algorithm must converge. When $N > 1$ and $\delta_k(N) > -\frac{1}{N-1}$ for all $k$, by Lemma 3.2, both the q-order and r-order of the convergence rate of the Core algorithm are exactly 1. Hence, both the q-order and r-order of the convergence rate of the Linearizer algorithm are less than or equal to 1.

Appendix 3.C

This appendix establishes Theorem 3.3.


Proof of Theorem 3.3. For single class separable queueing networks with $N > 1$,
$$Q_k(N) - Q_k(N-1) \le 1 \eqno(3.53)$$
for $k = 1, 2, \ldots, K$. Hence, the estimates of $Q_k(N)$ and $Q_k(N-1)$ during each iteration of the Linearizer algorithm must satisfy
$$Q_k(N-1) \ge Q_k(N) - 1 \eqno(3.54)$$
for $k = 1, 2, \ldots, K$. Therefore, during each iteration, for $k = 1, 2, \ldots, K$,
$$\delta_k(N) = \frac{Q_k(N-1)}{N-1} - \frac{Q_k(N)}{N} \ge \frac{Q_k(N) - 1}{N-1} - \frac{Q_k(N)}{N} > \frac{Q_k(N) - 1}{N-1} - \frac{Q_k(N)}{N-1} = -\frac{1}{N-1}. \eqno(3.55)$$
Hence, during each iteration, for $k = 1, 2, \ldots, K$,
$$\delta_k(N) = \delta_k(N-1) > -\frac{1}{N-1}. \eqno(3.56)$$

Chapter 4

Two New Approximate MVA Algorithms

In this chapter, two new iterative approximate MVA algorithms, the Queue Line (QL) algorithm and the Fraction Line (FL) algorithm, for multiple class separable queueing networks are presented. Both algorithms improve on the accuracy of the PE algorithm while maintaining the same asymptotic time and space complexities as the PE algorithm. Without loss of generality, we assume $D_1 \le D_2 \le \cdots \le D_K$, and let $D = \sum_{k=1}^{K} D_k$ and $W = D_K$ for single class separable queueing networks. We also define a bottleneck service center of a single class separable queueing network as a service center with service demand $W$. Let $B$ denote the set of all bottleneck service centers, and let $P$ denote the set of all nonbottleneck service centers for single class separable queueing networks. Also, let the service center subscript $b$ denote the index of a bottleneck service center, and let the service center subscript $p$ denote the index of a nonbottleneck service center for single class separable queueing networks. Note that in a single class separable queueing network, there must be at least one bottleneck service center, but there may be no nonbottleneck service center.



4.1 Properties of Separable Queueing Networks

Any approximate MVA algorithm should observe the following fundamental properties of separable queueing networks.

Proposition 4.1 For multiple class separable queueing networks, $Q_{c,k}(\vec{0}) = 0$ for $c = 1, 2, \ldots, C$, and $k = 1, 2, \ldots, K$. $\Box$

Proof. Since there is no customer in the network, the mean queue length of each class at each center has to be zero.

Proposition 4.2 For single class separable queueing networks, $Q_k(1) = \frac{D_k}{D + Z}$ for $k = 1, 2, \ldots, K$. $\Box$

Proof. By (2.3) and (2.4), for single class separable queueing networks,
$$X(N) = \frac{N}{Z + \sum_{k=1}^{K} R_k(N)}.$$
When there is only one customer in the network (i.e., $N = 1$), by Proposition 4.1 and (2.1) and (2.2),
$$X(1) = \frac{1}{Z + \sum_{k=1}^{K} R_k(1)} = \frac{1}{Z + \sum_{k=1}^{K} D_k [1 + Q_k(0)]} = \frac{1}{Z + D}. \eqno(4.1)$$
Hence, by Proposition 4.1 and (2.5) and (4.1),
$$Q_k(1) = X(1) R_k(1) = X(1) D_k [1 + Q_k(0)] = \frac{D_k}{Z + D}, \eqno(4.2)$$
for $k = 1, 2, \ldots, K$.
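A quick numeric check of Proposition 4.2 at $N = 1$; the service demands and think time below are hypothetical:

```python
# With one customer there is no queueing, so exact MVA gives R_k(1) = D_k,
# X(1) = 1/(Z + D), and Q_k(1) = X(1) R_k(1) = D_k / (D + Z).
D = [0.4, 0.6, 1.0]                 # hypothetical service demands D_k
Z = 2.0                             # hypothetical think time
Rk = [d * (1 + 0.0) for d in D]     # Q_k(0) = 0 by Proposition 4.1
X1 = 1.0 / (Z + sum(Rk))
Q1 = [X1 * r for r in Rk]
```

The resulting queue lengths also satisfy $\sum_k Q_k(1) = 1 - X(1) Z$, which is equation (4.3) of Proposition 4.3 specialized to one class and one customer.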

Proposition 4.3 For multiple class separable queueing networks,
$$\sum_{k=1}^{K} Q_{c,k}(\vec{N}) = \frac{N_c R_c(\vec{N})}{Z_c + R_c(\vec{N})} = N_c - \frac{N_c Z_c}{Z_c + R_c(\vec{N})} = N_c - X_c(\vec{N}) Z_c, \quad \text{for } c = 1, 2, \ldots, C. \eqno(4.3)$$
Moreover,
$$\sum_{k=1}^{K} Q_{c,k}(\vec{N}) = N_c, \quad \text{when } Z_c = 0, \eqno(4.4)$$
$$\sum_{k=1}^{K} Q_k(\vec{N}) = N, \quad \text{when } Z_c = 0 \text{ for all } c, \eqno(4.5)$$
$$\sum_{k=1}^{K} Q_{c,k}(\vec{N} - \vec{1}_c) = N_c - 1, \quad \text{when } N_c \ge 1 \text{ and } Z_c = 0, \eqno(4.6)$$
and
$$\sum_{k=1}^{K} Q_k(\vec{N} - \vec{1}_c) = N - 1, \quad \text{when } N_c \ge 1 \text{ and } Z_h = 0 \text{ for all } h. \eqno(4.7)$$
$\Box$

Proof. By (2.3), (2.4), and (2.5), equation (4.3) holds. Equations (4.4), (4.5), (4.6), and (4.7) can be obtained by utilizing (4.3).

4.2 The Queue Line Algorithm

The Queue Line (QL) algorithm for multiple class separable queueing networks is based on the following assumptions:

Assumption 4.1 (Assumptions of the QL Algorithm)
$$\begin{cases} Q_{j,k}(\vec{N} - \vec{1}_c) = 0, & \text{when } N_c = 1 \text{ and } c = j, \\[4pt] \dfrac{Q_{j,k}(\vec{N}) - Q_{j,k}(\vec{N} - \vec{1}_c)}{1} = \dfrac{Q_{j,k}(\vec{N}) - Q_{j,k}(\vec{N} - (N_c - 1)\vec{1}_c)}{N_c - 1}, & \text{when } N_c > 1 \text{ and } c = j, \\[4pt] Q_{j,k}(\vec{N} - \vec{1}_c) = Q_{j,k}(\vec{N}), & \text{when } N_c \ge 1 \text{ and } c \neq j, \end{cases} \eqno(4.8)$$
where $\vec{N} - \vec{1}_c$ denotes the population $\vec{N}$ with one class $c$ customer removed, and $\vec{N} - (N_c - 1)\vec{1}_c$ denotes the population $\vec{N}$ with $N_c - 1$ class $c$ customers removed. $\Box$

Like the PE algorithm, the QL algorithm assumes that removing a class $c$ customer does not affect the proportion of time spent by customers of any other classes at each service center. However, while the PE algorithm assumes a linear relationship between $Q_{c,k}(\vec{N})$ and $N_c$, the QL algorithm estimates $Q_{c,k}(\vec{N} - \vec{1}_c)$ by linear interpolation between the points $(1, Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c))$ and $(N_c, Q_{c,k}(\vec{N}))$. For simplicity, Assumption 4.1 can be refined as follows.



Assumption 4.2 (Refined Assumptions of the QL Algorithm)
$$\text{If } N_c = 1: \quad Q_{j,k}(\vec{N} - \vec{1}_c) = \begin{cases} 0, & c = j, \\ Q_{j,k}(\vec{N}), & c \neq j. \end{cases}$$
$$\text{If } N_c > 1: \quad Q_{j,k}(\vec{N} - \vec{1}_c) = \begin{cases} \dfrac{N_c - 2}{N_c - 1} Q_{c,k}(\vec{N}) + \dfrac{1}{N_c - 1} Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c), & c = j, \\ Q_{j,k}(\vec{N}), & c \neq j. \end{cases} \eqno(4.9)$$
$\Box$

Based on Assumption 4.1 or 4.2, the following theorem establishes the approximation equation for the QL algorithm.

Theorem 4.1 Under Assumption 4.2 (or Assumption 4.1), the approximation equation of the QL algorithm is
$$A_k^{(c)}(\vec{N}) = Q_k(\vec{N}) - Q_{c,k}(\vec{N}), \quad \text{when } N_c = 1,$$
$$A_k^{(c)}(\vec{N}) = Q_k(\vec{N}) - \frac{1}{N_c - 1} \left[ Q_{c,k}(\vec{N}) - Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c) \right] = Q_k(\vec{N}) - \frac{1}{N_c - 1} \left[ Q_{c,k}(\vec{N}) - \frac{D_{c,k} [1 + Q_k(\vec{N}) - Q_{c,k}(\vec{N})]}{Z_c + \sum_{l=1}^{K} D_{c,l} [1 + Q_l(\vec{N}) - Q_{c,l}(\vec{N})]} \right], \quad \text{when } N_c > 1. \eqno(4.10)$$
$\Box$

The proof of Theorem 4.1 is given in Appendix 4.A. Note that the approximation (4.10) satisfies the properties (Propositions 4.1, 4.2, and 4.3) of separable queueing networks. The nonlinear equations (4.10) and (2.2) through (2.6) of the QL algorithm can be solved by any general purpose numerical technique, such as the successive substitution method and Newton's method.

Suppose both the QL and PE algorithms are solved by the successive substitution method. For networks such that $N_c > 1$ for all $c$, the QL algorithm requires at least $(10KC - K - C)$ additions/subtractions and $(6KC + C)$ multiplications/divisions during each iteration, while the PE algorithm requires only $(4KC - K)$ additions/subtractions and $(3KC + C)$ multiplications/divisions during each iteration. For the networks such



that $N_c = 1$ for all $c$, the QL algorithm requires the same number of operations as the PE algorithm. Hence, the time complexity of the QL algorithm is $O(KC)$ and is the same as that of the PE algorithm. Since the QL algorithm has a similar structure to the PE algorithm, the space complexity results for the PE algorithm in Chapter 3 are also applicable to the QL algorithm. Hence, the space complexity of the QL algorithm is identical to that of the PE algorithm.

4.3 The Fraction Line Algorithm

The Fraction Line (FL) algorithm for multiple class separable queueing networks is based on the following assumptions:

Assumption 4.3 (Assumptions of the FL Algorithm)
$$\begin{cases} Q_{j,k}(\vec{N} - \vec{1}_c) = 0, & \text{when } N_c = 1 \text{ and } c = j, \\[4pt] \dfrac{\frac{Q_{j,k}(\vec{N})}{N_c} - \frac{Q_{j,k}(\vec{N} - \vec{1}_c)}{N_c - 1}}{1} = \dfrac{\frac{Q_{j,k}(\vec{N})}{N_c} - Q_{j,k}(\vec{N} - (N_c - 1)\vec{1}_c)}{N_c - 1}, & \text{when } N_c > 1 \text{ and } c = j, \\[4pt] Q_{j,k}(\vec{N} - \vec{1}_c) = Q_{j,k}(\vec{N}), & \text{when } N_c \ge 1 \text{ and } c \neq j, \end{cases} \eqno(4.11)$$
where $\vec{N} - \vec{1}_c$ denotes the population $\vec{N}$ with one class $c$ customer removed, and $\vec{N} - (N_c - 1)\vec{1}_c$ denotes the population $\vec{N}$ with $N_c - 1$ class $c$ customers removed. $\Box$

Like the PE and QL algorithms, the FL algorithm assumes that removing a class $c$ customer does not affect the proportion of time spent by customers of any other classes at each service center. Furthermore, the FL algorithm estimates $Q_{c,k}(\vec{N} - \vec{1}_c)$ by linear interpolation between the points $(1, Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c))$ and $(N_c, \frac{Q_{c,k}(\vec{N})}{N_c})$. For simplicity, Assumption 4.3 can be refined as follows.



Assumption 4.4 (Refined Assumptions of the FL Algorithm)
$$\text{If } N_c = 1: \quad Q_{j,k}(\vec{N} - \vec{1}_c) = \begin{cases} 0, & c = j, \\ Q_{j,k}(\vec{N}), & c \neq j. \end{cases}$$
$$\text{If } N_c > 1: \quad Q_{j,k}(\vec{N} - \vec{1}_c) = \begin{cases} \dfrac{N_c - 2}{N_c} Q_{c,k}(\vec{N}) + Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c), & c = j, \\ Q_{j,k}(\vec{N}), & c \neq j. \end{cases} \eqno(4.12)$$
$\Box$

Based on Assumption 4.3 or 4.4, the following theorem establishes the approximation equation for the FL algorithm.

Theorem 4.2 Under Assumption 4.4 (or Assumption 4.3), the approximation equation of the FL algorithm is
$$A_k^{(c)}(\vec{N}) = Q_k(\vec{N}) - Q_{c,k}(\vec{N}), \quad \text{when } N_c = 1,$$
$$A_k^{(c)}(\vec{N}) = Q_k(\vec{N}) - \frac{2}{N_c} Q_{c,k}(\vec{N}) + Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c) = Q_k(\vec{N}) - \frac{2}{N_c} Q_{c,k}(\vec{N}) + \frac{D_{c,k} [1 + Q_k(\vec{N}) - Q_{c,k}(\vec{N})]}{Z_c + \sum_{l=1}^{K} D_{c,l} [1 + Q_l(\vec{N}) - Q_{c,l}(\vec{N})]}, \quad \text{when } N_c > 1. \eqno(4.13)$$
$\Box$

The proof of Theorem 4.2 is given in Appendix 4.B. Note that the approximation (4.13) also satisfies the properties (Propositions 4.1, 4.2, and 4.3) of separable queueing networks. The nonlinear equations (4.13) and (2.2) through (2.6) of the FL algorithm can be solved by any general purpose numerical technique, such as the successive substitution method and Newton's method.

When the FL algorithm is solved by the successive substitution method, for networks such that $N_c > 1$ for all $c$, it requires at least $(10KC - K - C)$ additions/subtractions and $(7KC + C)$ multiplications/divisions during each iteration, as contrasted to $(4KC - K)$ and $(10KC - K - C)$ additions/subtractions, and $(3KC + C)$ and $(6KC + C)$ multiplications/divisions, respectively, for the PE and QL algorithms. For the networks such that



$N_c = 1$ for all $c$, the FL algorithm requires the same number of operations as the PE and QL algorithms. Hence, the time complexity of the FL algorithm is $O(KC)$ and is identical to those of the PE and QL algorithms. Since the FL algorithm has a similar structure to the PE and QL algorithms, the space complexity results for the PE algorithm in Chapter 3 are also applicable to the FL algorithm. Therefore, the space and time complexities of the PE, QL, and FL algorithms are the same. The following is a summary of the PE, QL, and FL algorithms solved by the successive substitution method with the balanced queue length initialization.

Algorithm 4.1 (The Common Structure of the PE, QL, and FL Algorithms)

$N \leftarrow \sum_{c=1}^{C} N_c$
$\forall k: \; Q_k(\vec{N}) \leftarrow \frac{N}{K}$
$\forall c: \forall k: \; Q_{c,k}(\vec{N}) \leftarrow \frac{N_c}{K}$
REPEAT
    FOR $c = 1$ TO $C$ DO
        Line (1): This line of code differs among the algorithms.
        FOR $k = 1$ TO $K$ DO
            IF $N_c = 1$ THEN
                $A_k^{(c)}(\vec{N}) \leftarrow Q_k(\vec{N}) - Q_{c,k}(\vec{N})$
            ELSE (* $N_c > 1$ *)
                Line (2): This line of code differs among the algorithms.
            END IF
        END $k$ FOR LOOP
    END $c$ FOR LOOP
    $\forall c: \forall k: \; R_{c,k}(\vec{N}) \leftarrow D_{c,k} [1 + A_k^{(c)}(\vec{N})]$
    $\forall c: \; R_c(\vec{N}) \leftarrow \sum_{k=1}^{K} R_{c,k}(\vec{N})$
    $\forall c: \; X_c(\vec{N}) \leftarrow \frac{N_c}{Z_c + R_c(\vec{N})}$
    $\forall c: \forall k: \; Q_{c,k}(\vec{N}) \leftarrow X_c(\vec{N}) R_{c,k}(\vec{N})$
    $\forall k: \; Q_k(\vec{N}) \leftarrow \sum_{c=1}^{C} Q_{c,k}(\vec{N})$
UNTIL $Q_{c,k}(\vec{N})$ ARE ALL STABLE. $\Box$

Algorithm 4.2 (The PE Algorithm) The PE algorithm is the same as Algorithm 4.1 except that Line (1) is empty, and Line (2) is
$$A_k^{(c)}(\vec{N}) \leftarrow Q_k(\vec{N}) - \frac{Q_{c,k}(\vec{N})}{N_c}. \eqno(4.14)$$
$\Box$

Algorithm 4.3 (The QL Algorithm) The QL algorithm is the same as Algorithm 4.1 except that Line (1) is
$$Y \leftarrow \sum_{l=1}^{K} D_{c,l} \left[ 1 + Q_l(\vec{N}) - Q_{c,l}(\vec{N}) \right], \eqno(4.15)$$
and Line (2) is
$$A_k^{(c)}(\vec{N}) \leftarrow Q_k(\vec{N}) - \frac{1}{N_c - 1} \left\{ Q_{c,k}(\vec{N}) - \frac{D_{c,k} [1 + Q_k(\vec{N}) - Q_{c,k}(\vec{N})]}{Z_c + Y} \right\}. \eqno(4.16)$$
$\Box$

Algorithm 4.4 (The FL Algorithm) The FL algorithm is the same as Algorithm 4.1 except that Line (1) is
$$Y \leftarrow \sum_{l=1}^{K} D_{c,l} \left[ 1 + Q_l(\vec{N}) - Q_{c,l}(\vec{N}) \right], \eqno(4.17)$$
and Line (2) is
$$A_k^{(c)}(\vec{N}) \leftarrow Q_k(\vec{N}) - \frac{2}{N_c} Q_{c,k}(\vec{N}) + \frac{D_{c,k} [1 + Q_k(\vec{N}) - Q_{c,k}(\vec{N})]}{Z_c + Y}. \eqno(4.18)$$
$\Box$
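Algorithms 4.1 through 4.4 can be collected into a single routine. The sketch below is a simplified reimplementation, not the thesis's code, and the demo network at the bottom is hypothetical; the `variant` argument selects the PE line (4.14), the QL line (4.16), or the FL line (4.18):

```python
def solve_mva(D, Npop, Z, variant="PE", tol=1e-6, max_iter=10000):
    """D[c][k]: service demands; Npop[c]: class populations; Z[c]: think times."""
    C, K = len(D), len(D[0])
    N = sum(Npop)
    Q = [[Npop[c] / K for _ in range(K)] for c in range(C)]   # Q_{c,k}, balanced
    Qk = [N / K for _ in range(K)]                            # Q_k
    for _ in range(max_iter):
        A = [[0.0] * K for _ in range(C)]
        for c in range(C):
            # Line (1): Y, the one-customer denominator used by QL and FL
            Y = sum(D[c][l] * (1 + Qk[l] - Q[c][l]) for l in range(K))
            for k in range(K):
                if Npop[c] == 1:
                    A[c][k] = Qk[k] - Q[c][k]
                elif variant == "PE":                         # (4.14)
                    A[c][k] = Qk[k] - Q[c][k] / Npop[c]
                else:
                    q1 = D[c][k] * (1 + Qk[k] - Q[c][k]) / (Z[c] + Y)
                    if variant == "QL":                       # (4.16)
                        A[c][k] = Qk[k] - (Q[c][k] - q1) / (Npop[c] - 1)
                    else:                                     # FL, (4.18)
                        A[c][k] = Qk[k] - 2 * Q[c][k] / Npop[c] + q1
        Q_new = []
        for c in range(C):
            Rck = [D[c][k] * (1 + A[c][k]) for k in range(K)]
            Xc = Npop[c] / (Z[c] + sum(Rck))
            Q_new.append([Xc * r for r in Rck])
        change = max(abs(Q_new[c][k] - Q[c][k]) for c in range(C) for k in range(K))
        Q = Q_new
        Qk = [sum(Q[c][k] for c in range(C)) for k in range(K)]
        if change < tol:
            break
    X = [Npop[c] / (Z[c] + sum(D[c][k] * (1 + A[c][k]) for k in range(K)))
         for c in range(C)]
    return Q, X

# Hypothetical two class, two center demo network:
Q_pe, X_pe = solve_mva([[0.2, 0.3], [0.1, 0.4]], [3, 2], [1.0, 0.5], "PE")
Q_ql, X_ql = solve_mva([[0.2, 0.3], [0.1, 0.4]], [3, 2], [1.0, 0.5], "QL")
Q_fl, X_fl = solve_mva([[0.2, 0.3], [0.1, 0.4]], [3, 2], [1.0, 0.5], "FL")
```

Every iterate satisfies the population constraint (4.3) by construction, since each $Q_{c,k}$ is set to $X_c R_{c,k}$, so $\sum_k Q_{c,k} + X_c Z_c = N_c$ holds exactly for all three variants.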


53

4.4 Existence, Uniqueness, and Convergence Results

In this section, the existence of a unique solution of the QL and FL algorithms for single class separable queueing networks is established, and performance measure initializations that guarantee convergence are exhibited for both algorithms. The existence and uniqueness properties of the solution of an iterative approximate MVA algorithm are independent of the numerical technique used to solve the algorithm. The following theorem establishes the existence and uniqueness properties of the solution of the QL algorithm for single class separable queueing networks.

Theorem 4.3 There exists a unique solution to the QL algorithm for single class separable queueing networks. □

The proof of Theorem 4.3 is given in Appendix 4.C. Generally, the convergence of an iterative algorithm depends on the numerical technique used and the initialization procedure of the algorithm. However, it is easy to see that the QL algorithm always converges to the exact solution for multiple class separable queueing networks when $N_c = 1$ for all $c$, regardless of the numerical technique and the initialization procedure used. The following theorems show that the QL algorithm converges to a unique solution for single class separable queueing networks with $N > 1$.

Theorem 4.4 For single class separable queueing networks with $N > 1$ and $Z = 0$, with the initial performance measures $Q_k^{(0)}(N) = N/K$, $1 \le k \le K$, the QL algorithm monotonically converges to a unique solution when it is solved by the successive substitution method. In particular, the above performance measure initializations guarantee that the response time $R(N)$ estimated by the QL algorithm monotonically increases to the unique solution, and the throughput $X(N)$ estimated by the QL algorithm monotonically decreases to the unique solution. □

The proof of Theorem 4.4 is given in Appendix 4.D. For single class separable queueing networks with $Z \neq 0$, the convergence of the QL algorithm may not be monotonic when it is solved by the successive substitution method with the balanced queue length initialization. Since the convergence is not monotonic,


the behavior of the algorithm during the transient phase is very complex. When the QL algorithm is solved by the successive substitution method with balanced queue length initializations, we find empirically that it always converges for $Z \neq 0$; however, we lack a formal proof of this. When the QL algorithm is solved by the successive substitution method with a particular initialization procedure other than the balanced queue length initialization, or when it is solved by Newton's method with any initialization procedure, we are able to show that it always converges for any single class separable queueing network.
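The monotone behavior asserted by Theorem 4.4 is easy to observe numerically. The following sketch (illustrative code, not from the thesis; `ql_successive_substitution` is an assumed name) iterates the single class QL update, equations (4.36)-(4.37), from the balanced queue length initialization with $Z = 0$ and records the response time estimates, which should form a nondecreasing sequence:

```python
# Illustrative: single class QL algorithm solved by successive substitution
# with the balanced queue length initialization Q_k^(0) = N/K (Theorem 4.4).
# Returns the sequence of response time estimates R^(i)(N).
def ql_successive_substitution(D_k, Z, N, iters=200):
    K = len(D_k)
    D = sum(D_k)
    Q = [N / K] * K                       # balanced queue length initialization
    history = []
    for _ in range(iters):
        # R_k = D_k [1 + (N-2)/(N-1) Q_k + D_k / ((N-1)(D+Z))], cf. Eq. (4.37)
        R_k = [d * (1 + (N - 2) / (N - 1) * q + d / ((N - 1) * (D + Z)))
               for d, q in zip(D_k, Q)]
        R = sum(R_k)
        history.append(R)
        Q = [N * r / (Z + R) for r in R_k]   # Q_k = X(N) R_k with X = N/(Z+R)
    return history
```

With $Z = 0$ the recorded estimates increase monotonically to the fixed point, exactly as Theorem 4.4 predicts; with $Z \neq 0$ the sequence typically still converges, but not necessarily monotonically.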

Theorem 4.5 For single class separable queueing networks with $N > 1$, $\forall b \in B$, and $\forall p \in P$, with the initial performance measures $Q_b^{(0)}(N) = \frac{N^2 D}{T(Z + ND)}$ and $Q_p^{(0)}(N) = 0$, where $T$ is the number of bottleneck service centers in the network, the QL algorithm monotonically converges to a unique solution when it is solved by the successive substitution method. In particular, the above performance measure initializations guarantee that the response time $R(N)$ estimated by the QL algorithm monotonically decreases to the unique solution, and the throughput $X(N)$ estimated by the QL algorithm monotonically increases to the unique solution. □

The proof of Theorem 4.5 is given in Appendix 4.E.

Theorem 4.6 For single class separable queueing networks with $N > 1$, with any initialization procedure, the QL algorithm converges to a unique solution when it is solved by a globally convergent modification of Newton's method. □

The proof of Theorem 4.6 is given in Appendix 4.F. Since most Newton's method based numerical techniques differ from Newton's method only in using a different approximate Jacobian matrix, it can be shown, by similar procedures, that the QL algorithm converges to a unique solution when it is solved by other Newton's techniques with proper initial performance measure estimates. Just as for the other approximate MVA algorithms, we lack a formal proof that the QL algorithm will converge to a unique solution for multiple class separable queueing networks. However, in all our numerical experiments, the QL algorithm always converged when it was solved by the successive substitution method and yielded the same solution


despite different initialization procedures. Hence, for multiple class separable queueing networks, the final solution of the QL algorithm appears to always exist and be unique. Since the QL algorithm is an iterative algorithm, it is desirable to know the convergence rates of different implementations of the QL algorithm as precisely as possible in order to compare the execution costs of different implementations. The following theorems show the convergence rate of the QL algorithm when it is solved by the successive substitution method and by Newton's techniques, respectively.

Theorem 4.7 For single class separable queueing networks, when the QL algorithm is solved by the successive substitution method with any initialization procedure, if the algorithm converges and $N > 2$, then both the q-order and r-order of the convergence rate of the algorithm are exactly 1. □

The proof of Theorem 4.7 is given in Appendix 4.G.

Corollary 4.1 In the worst case, both the q-order and r-order of the convergence rate of the QL algorithm solved by the successive substitution method are exactly 1. □

Corollary 4.2 For single class separable queueing networks, if the QL algorithm is solved by the successive substitution method and it converges, then the order of the number of iterations of the QL algorithm is a constant, independent of the network parameters. □

Theorem 4.8 For single class separable queueing networks, when the QL algorithm is solved by a globally convergent modification of Newton's method with any initialization procedure, both the q-order and r-order of the convergence rate of the algorithm are at least 2. □

Proof. For single class separable queueing networks, when the QL algorithm is solved by a globally convergent modification of Newton's method, as shown in the proof of Theorem 4.6 in Appendix 4.F, the Jacobian at a solution is nonsingular and the derivative of the Jacobian exists. Since both the q-order and r-order of the convergence rate of Newton's method are at least 2 whenever the algorithm converges to a unique solution, the Jacobian at the unique solution is nonsingular, and the derivative of the Jacobian exists [27, 32, 14], the theorem follows by Theorems 4.3 and 4.6.


Because the convergence rates of all of the Newton's techniques are greater than 1 if the requirements of these techniques are satisfied [27, 32, 14], we expect that the QL algorithm converges q-superlinearly to the solution when it is solved by these techniques. Hence, the Newton's technique implementations of the QL algorithm generally require fewer iterations than the successive substitution method implementation. However, as shown in Chapter 3, the computational cost of each iteration of Newton's techniques may be high. Hence, the execution time of the successive substitution method implementation may be less than those of the Newton's technique implementations for some networks. Since the PE, QL, and FL algorithms have similar structures, most of the numerical analysis and complexity analysis results for one algorithm are also applicable to the others. In fact, the above results on the existence and uniqueness of a solution and on the convergence of the QL algorithm are similar to the theorems proved by Eager and Sevcik [17], Agrawal [1], and Pattipati et al. [28] for the PE algorithm, and to the theorems presented in Chapter 3. Hence, the existence, uniqueness, and convergence results above are also applicable to the FL algorithm. Furthermore, although the results presented in this section have been derived only for queueing networks with a single customer class, they provide insight that is applicable to multiple class separable queueing networks as well.

4.5 Summary

In this chapter, two new approximate MVA algorithms for separable queueing networks have been presented. Results about the existence and uniqueness of a solution and about the convergence to that solution have been obtained for both new algorithms. It has been shown that both new algorithms have the same space and time complexities as the PE algorithm.


Appendix 4.A

This appendix establishes Theorem 4.1. To prove Theorem 4.1, the following lemmas are needed.

Lemma 4.1 Under Assumption 4.2, when $c \neq j$, $Q_{j,k}(\vec N - i\vec 1_c) = Q_{j,k}(\vec N)$, for $1 \le i \le N_c$, $c = 1, 2, \ldots, C$, and $k = 1, 2, \ldots, K$. □

Proof. By induction on $i$.

Under Assumption 4.2, the lemma holds for the induction base $i = 1$. Assume that the lemma holds for $i < N_c$, and consider the lemma for $i = N_c$. When $c \neq j$,

$$Q_{j,k}(\vec N - N_c\vec 1_c) = Q_{j,k}(\vec N - (N_c - 1)\vec 1_c - \vec 1_c) = Q_{j,k}(\vec N - (N_c - 1)\vec 1_c) \quad \text{(applying the base case)}$$
$$= Q_{j,k}(\vec N) \quad \text{(by the induction hypothesis)}.$$

Hence, the lemma holds for $i = N_c$.

Lemma 4.2 Under Assumption 4.2, $Q_{j,k}(\vec N - N_c\vec 1_c) = 0$ for all $k$ when $c = j$. □

Proof. Let $\vec P = \vec N - (N_c - 1)\vec 1_c$, and let $p$ denote the number of class $c$ customers when the network population is $\vec P$. Hence, $p = 1$. By Assumption 4.2, $Q_{j,k}(\vec P - \vec 1_c) = 0$ when $p = 1$ and $j = c$. Therefore, we obtain

$$Q_{j,k}(\vec N - (N_c - 1)\vec 1_c - \vec 1_c) = 0.$$

Thus, when $c = j$,

$$Q_{j,k}(\vec N - N_c\vec 1_c) = 0.$$

Lemma 4.3 Under Assumption 4.2, $Q_k(\vec N - N_c\vec 1_c) = Q_k(\vec N) - Q_{c,k}(\vec N)$. □

Proof. By Lemmas 4.1 and 4.2,

$$Q_{j,k}(\vec N - N_c\vec 1_c) = \begin{cases} 0, & c = j, \\ Q_{j,k}(\vec N), & c \neq j. \end{cases}$$

Therefore, we obtain

$$Q_k(\vec N - N_c\vec 1_c) = \sum_{j=1}^{C} Q_{j,k}(\vec N - N_c\vec 1_c) = Q_k(\vec N) - Q_{c,k}(\vec N).$$

Lemma 4.4 Under Assumption 4.2, if $N_c = 1$, then $A_k^{(c)}(\vec N) = Q_k(\vec N) - Q_{c,k}(\vec N)$. □

Proof. The proof follows from Lemma 4.3 and the Arrival Instant Theorem [23, 35].

Lemma 4.5 Under Assumption 4.2, if $N_c > 1$, then

$$A_k^{(c)}(\vec N) = Q_k(\vec N) - \frac{1}{N_c - 1}\left\{Q_{c,k}(\vec N) - \frac{D_{c,k}\left[1 + Q_k(\vec N) - Q_{c,k}(\vec N)\right]}{Z_c + \sum_{l=1}^{K} D_{c,l}\left[1 + Q_l(\vec N) - Q_{c,l}(\vec N)\right]}\right\}. \quad □$$

Proof.

$$A_k^{(c)}(\vec N) = Q_k(\vec N - \vec 1_c) \quad \text{(by the Arrival Instant Theorem)}$$
$$= \sum_{j=1}^{C} Q_{j,k}(\vec N - \vec 1_c) \quad \text{(by (2.6))}.$$

Under Assumption 4.2, if $N_c > 1$, then, for $c = j$,

$$Q_{j,k}(\vec N - \vec 1_c) = \frac{N_c - 2}{N_c - 1} Q_{c,k}(\vec N) + \frac{1}{N_c - 1} Q_{c,k}(\vec N - (N_c - 1)\vec 1_c)$$
$$= Q_{c,k}(\vec N) - \frac{1}{N_c - 1}\left[Q_{c,k}(\vec N) - Q_{c,k}(\vec N - (N_c - 1)\vec 1_c)\right],$$

and, for $c \neq j$,

$$Q_{j,k}(\vec N - \vec 1_c) = Q_{j,k}(\vec N).$$

Hence, if $N_c > 1$,

$$A_k^{(c)}(\vec N) = Q_k(\vec N) - \frac{1}{N_c - 1}\left[Q_{c,k}(\vec N) - Q_{c,k}(\vec N - (N_c - 1)\vec 1_c)\right]. \qquad (4.19)$$

By (2.3), (2.4), and (2.5),

$$Q_{c,k}(\vec N - (N_c - 1)\vec 1_c) = X_c(\vec N - (N_c - 1)\vec 1_c)\, R_{c,k}(\vec N - (N_c - 1)\vec 1_c)
= \frac{R_{c,k}(\vec N - (N_c - 1)\vec 1_c)}{Z_c + R_c(\vec N - (N_c - 1)\vec 1_c)}
= \frac{R_{c,k}(\vec N - (N_c - 1)\vec 1_c)}{Z_c + \sum_{l=1}^{K} R_{c,l}(\vec N - (N_c - 1)\vec 1_c)}.$$

Therefore, (4.19) can be rewritten as

$$A_k^{(c)}(\vec N) = Q_k(\vec N) - \frac{1}{N_c - 1}\left[Q_{c,k}(\vec N) - \frac{R_{c,k}(\vec N - (N_c - 1)\vec 1_c)}{Z_c + \sum_{l=1}^{K} R_{c,l}(\vec N - (N_c - 1)\vec 1_c)}\right].$$

Because, for any center $l$,

$$R_{c,l}(\vec N - (N_c - 1)\vec 1_c) = D_{c,l}\left[1 + A_l^{(c)}(\vec N - (N_c - 1)\vec 1_c)\right] \quad \text{(by (2.2))}$$
$$= D_{c,l}\left[1 + Q_l(\vec N - N_c\vec 1_c)\right] \quad \text{(by the Arrival Instant Theorem)}$$
$$= D_{c,l}\left[1 + Q_l(\vec N) - Q_{c,l}(\vec N)\right] \quad \text{(by Lemma 4.3)},$$

we obtain

$$A_k^{(c)}(\vec N) = Q_k(\vec N) - \frac{1}{N_c - 1}\left\{Q_{c,k}(\vec N) - \frac{D_{c,k}\left[1 + Q_k(\vec N) - Q_{c,k}(\vec N)\right]}{Z_c + \sum_{l=1}^{K} D_{c,l}\left[1 + Q_l(\vec N) - Q_{c,l}(\vec N)\right]}\right\} \quad \text{for } N_c > 1.$$

Proof of Theorem 4.1. The proof follows from Lemmas 4.4 and 4.5.

Appendix 4.B

This appendix establishes Theorem 4.2. The following lemmas are needed to prove Theorem 4.2.

Lemma 4.6 Under Assumption 4.4, $Q_k(\vec N - N_c\vec 1_c) = Q_k(\vec N) - Q_{c,k}(\vec N)$. □

Proof. The proof is identical to the proof of Lemma 4.3, since both Assumptions 4.2 and 4.4 contain the conditions that are required for this proof.

Lemma 4.7 Under Assumption 4.4, if $N_c = 1$, then $A_k^{(c)}(\vec N) = Q_k(\vec N) - Q_{c,k}(\vec N)$. □

Proof. The proof follows from Lemma 4.6 and the Arrival Instant Theorem [23, 35].

Lemma 4.8 Under Assumption 4.4, if $N_c > 1$, then

$$A_k^{(c)}(\vec N) = Q_k(\vec N) - \frac{2}{N_c} Q_{c,k}(\vec N) + \frac{D_{c,k}\left[1 + Q_k(\vec N) - Q_{c,k}(\vec N)\right]}{Z_c + \sum_{l=1}^{K} D_{c,l}\left[1 + Q_l(\vec N) - Q_{c,l}(\vec N)\right]}. \quad □$$

Proof.

$$A_k^{(c)}(\vec N) = Q_k(\vec N - \vec 1_c) \quad \text{(by the Arrival Instant Theorem)}$$
$$= \sum_{j=1}^{C} Q_{j,k}(\vec N - \vec 1_c) \quad \text{(by (2.6))}.$$

By Assumption 4.4, if $N_c > 1$, then, for $c = j$,

$$Q_{j,k}(\vec N - \vec 1_c) = \frac{N_c - 2}{N_c} Q_{c,k}(\vec N) + Q_{c,k}(\vec N - (N_c - 1)\vec 1_c)$$
$$= Q_{c,k}(\vec N) - \frac{2}{N_c} Q_{c,k}(\vec N) + Q_{c,k}(\vec N - (N_c - 1)\vec 1_c),$$

and, for $c \neq j$,

$$Q_{j,k}(\vec N - \vec 1_c) = Q_{j,k}(\vec N).$$

Hence, if $N_c > 1$,

$$A_k^{(c)}(\vec N) = Q_k(\vec N) - \frac{2}{N_c} Q_{c,k}(\vec N) + Q_{c,k}(\vec N - (N_c - 1)\vec 1_c). \qquad (4.20)$$

By (2.3), (2.4), and (2.5),

$$Q_{c,k}(\vec N - (N_c - 1)\vec 1_c) = X_c(\vec N - (N_c - 1)\vec 1_c)\, R_{c,k}(\vec N - (N_c - 1)\vec 1_c)
= \frac{R_{c,k}(\vec N - (N_c - 1)\vec 1_c)}{Z_c + \sum_{l=1}^{K} R_{c,l}(\vec N - (N_c - 1)\vec 1_c)}.$$

Therefore, (4.20) can be rewritten as

$$A_k^{(c)}(\vec N) = Q_k(\vec N) - \frac{2}{N_c} Q_{c,k}(\vec N) + \frac{R_{c,k}(\vec N - (N_c - 1)\vec 1_c)}{Z_c + \sum_{l=1}^{K} R_{c,l}(\vec N - (N_c - 1)\vec 1_c)}.$$

Because, for any center $l$,

$$R_{c,l}(\vec N - (N_c - 1)\vec 1_c) = D_{c,l}\left[1 + A_l^{(c)}(\vec N - (N_c - 1)\vec 1_c)\right] \quad \text{(by (2.2))}$$
$$= D_{c,l}\left[1 + Q_l(\vec N - N_c\vec 1_c)\right] \quad \text{(by the Arrival Instant Theorem)}$$
$$= D_{c,l}\left[1 + Q_l(\vec N) - Q_{c,l}(\vec N)\right] \quad \text{(by Lemma 4.6)},$$

we obtain

$$A_k^{(c)}(\vec N) = Q_k(\vec N) - \frac{2}{N_c} Q_{c,k}(\vec N) + \frac{D_{c,k}\left[1 + Q_k(\vec N) - Q_{c,k}(\vec N)\right]}{Z_c + \sum_{l=1}^{K} D_{c,l}\left[1 + Q_l(\vec N) - Q_{c,l}(\vec N)\right]} \quad \text{for } N_c > 1.$$

Proof of Theorem 4.2. The proof follows from Lemmas 4.7 and 4.8.


Appendix 4.C

This appendix establishes Theorem 4.3.

Proof of Theorem 4.3. For $N = 1$, the QL algorithm sets $R_k(N) = D_k$ for all $k$, and the theorem holds. For $N > 1$, any solution to the QL algorithm must satisfy

$$R_k(N) = D_k\left[1 + \frac{N(N-2)R_k(N)}{(N-1)\left[Z + R(N)\right]} + \frac{D_k}{(N-1)(D+Z)}\right], \quad 1 \le k \le K, \qquad (4.21)$$

and

$$R(N) = \sum_{k=1}^{K} R_k(N), \qquad (4.22)$$

and

$$D = \sum_{k=1}^{K} D_k. \qquad (4.23)$$

Equation (4.21) can be rewritten as

$$R_k(N)\left\{1 - \frac{N(N-2)D_k}{(N-1)\left[Z + R(N)\right]}\right\} = D_k\left[1 + \frac{D_k}{(N-1)(D+Z)}\right]. \qquad (4.24)$$

Since $R_k(N)$ and $D_k$ must be positive, any feasible solution must satisfy

$$1 - \frac{N(N-2)D_k}{(N-1)\left[Z + R(N)\right]} > 0, \quad 1 \le k \le K, \qquad (4.25)$$

which is equivalent to

$$R(N) > \frac{N(N-2)}{N-1} D_k - Z, \quad 1 \le k \le K. \qquad (4.26)$$

Also, $R(N)$ must be positive; therefore,

$$R(N) > \max\left\{0, \max_{k=1}^{K}\left[\frac{N(N-2)}{N-1} D_k - Z\right]\right\}. \qquad (4.27)$$

Isolating $R_k(N)$ on the left-hand side in (4.24), summing over $k$, and applying (4.22) yields

$$R(N) = \sum_{k=1}^{K} \frac{D_k\left[1 + \frac{D_k}{(N-1)(D+Z)}\right]}{1 - \frac{N(N-2)D_k}{(N-1)\left[Z + R(N)\right]}}
= \sum_{k=1}^{K} \frac{D_k\left[Z + R(N)\right]\left[(N-1)(D+Z) + D_k\right]}{\left[Z + R(N)\right](N-1)(D+Z) - N(N-2)D_k(D+Z)}$$
$$= \sum_{k=1}^{K}\left\{\frac{R(N)\,D_k\left[(N-1)(D+Z) + D_k\right]}{\left[Z + R(N)\right](N-1)(D+Z) - N(N-2)D_k(D+Z)} + \frac{Z\,D_k\left[(N-1)(D+Z) + D_k\right]}{\left[Z + R(N)\right](N-1)(D+Z) - N(N-2)D_k(D+Z)}\right\}. \qquad (4.28)$$

Dividing both sides of (4.28) by $R(N)$ yields

$$1 = \sum_{k=1}^{K}\left\{\frac{D_k\left[(N-1)(D+Z) + D_k\right]}{\left[Z + R(N)\right](N-1)(D+Z) - N(N-2)D_k(D+Z)} + \frac{Z\,D_k\left[(N-1)(D+Z) + D_k\right]}{R(N)\left\{\left[Z + R(N)\right](N-1)(D+Z) - N(N-2)D_k(D+Z)\right\}}\right\}. \qquad (4.29)$$

Since $N$, $D$, $Z$ and $D_k$ are all known constants, and the only unknown variable in (4.29) is $R(N)$, we define functions $g$, $f_k$ and $h_k$ with respect to $R(N)$ for $1 \le k \le K$, where

$$f_k(R(N)) = \frac{D_k\left[(N-1)(D+Z) + D_k\right]}{\left[Z + R(N)\right](N-1)(D+Z) - N(N-2)D_k(D+Z)}, \qquad (4.30)$$

and

$$h_k(R(N)) = \frac{Z\,D_k\left[(N-1)(D+Z) + D_k\right]}{R(N)\left\{\left[Z + R(N)\right](N-1)(D+Z) - N(N-2)D_k(D+Z)\right\}}, \qquad (4.31)$$

and

$$g(R(N)) = \sum_{k=1}^{K}\left[f_k(R(N)) + h_k(R(N))\right]. \qquad (4.32)$$

Hence, equation (4.29) can be rewritten as

$$1 = \sum_{k=1}^{K}\left[f_k(R(N)) + h_k(R(N))\right] = g(R(N)). \qquad (4.33)$$

Note that in the region defined by inequality (4.27), all of $g$, $f_k$ and $h_k$ are continuous functions with respect to $R(N)$. It is easy to see that $f_k'(R(N)) < 0$ and $h_k'(R(N)) < 0$ for $1 \le k \le K$ in this region. Therefore, as shown in Figure 4.1, $g$ is a monotone decreasing function of $R(N)$ in the region defined by inequality (4.27). Let $\omega$ denote the right-hand side of inequality (4.27), i.e.,

$$\omega = \max\left\{0, \max_{k=1}^{K}\left[\frac{N(N-2)}{N-1} D_k - Z\right]\right\}.$$

[Figure 4.1: Plot of g(R(N)) vs R(N) for the QL algorithm]

It is easy to see that

$$\lim_{R(N) \to \omega} g(R(N)) = +\infty, \qquad (4.34)$$

and

$$\lim_{R(N) \to +\infty} g(R(N)) = 0. \qquad (4.35)$$

Since $g(R(N))$ decreases monotonically from $+\infty$ to $0$ as $R(N)$ increases from $\omega$ to $+\infty$, it must take on the value $1$ exactly once; the value of $R(N)$ at which it does so is the unique solution to equations (4.21) and (4.33). According to (4.24), the $R_k(N)$ values are uniquely determined by $R(N)$, and the theorem holds.
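Because $g$ decreases monotonically from $+\infty$ to $0$ on $(\omega, +\infty)$, the unique root of $g(R(N)) = 1$ can also be located by simple bisection. The sketch below (illustrative only, not code from the thesis; the name `ql_residence_time` is an assumption) evaluates $f_k$ and $h_k$ as defined in (4.30) and (4.31):

```python
# Illustrative: find the unique single class QL solution R(N), N > 1, by
# bisection on g(R) - 1 = 0 over (omega, +inf), using Eqs. (4.27)-(4.33).
def ql_residence_time(D_k, Z, N, tol=1e-10):
    D = sum(D_k)
    omega = max([0.0] + [N * (N - 2) * d / (N - 1) - Z for d in D_k])

    def g(R):
        total = 0.0
        for d in D_k:
            num = d * ((N - 1) * (D + Z) + d)
            den = (Z + R) * (N - 1) * (D + Z) - N * (N - 2) * d * (D + Z)
            total += num / den + Z * num / (R * den)   # f_k + h_k
        return total

    lo = omega + 1e-9
    hi = omega + 1.0
    while g(hi) > 1.0:         # g decreases from +inf to 0, so expand rightward
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Bisection is slower than Newton's method but needs no derivative and cannot leave the feasible region, which makes it a convenient cross-check for iterative implementations.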

Appendix 4.D

This appendix establishes Theorem 4.4. In order to prove Theorem 4.4, the following theorems and lemmas are needed.

Theorem 4.9 Suppose an iterative algorithm generates the estimate $\wp^{(i)}$ after the $i$th iteration, where $\wp^{(i)}$ is a scalar. Let $\wp^{(0)}$ denote the initial estimate of the algorithm. For some constant $\mu$ and some integer $\nu \ge 0$, if $\wp^{(i)} \le \mu$ for all $i \ge 0$, and if $\wp^{(i+1)} \ge \wp^{(i)}$ for all $i \ge \nu$, then the algorithm converges. Moreover, if $\nu = 0$, then it is said that the estimate obtained by the algorithm monotonically increases to the solution. □

Proof. Let $\wp^* = \lim_{i \to +\infty} \wp^{(i)}$. Hence, $\wp^{(i)} \le \wp^{(i+1)} \le \wp^* \le \mu$. If $\wp^{(i+1)} = \wp^{(i)}$ for $i \ge \nu$, then the algorithm obviously converges. If $\wp^{(i)} < \wp^{(i+1)}$ for $i \ge \nu$, then $|\wp^{(i)} - \wp^*| > |\wp^{(i+1)} - \wp^*|$. Hence,

$$\frac{|\wp^{(i+1)} - \wp^*|}{|\wp^{(i)} - \wp^*|} < 1.$$

Therefore,

$$\lim_{i \to \infty} \frac{|\wp^{(i)} - \wp^*|}{|\wp^{(\nu)} - \wp^*|} = \lim_{i \to \infty} \frac{|\wp^{(i)} - \wp^*|}{|\wp^{(i-1)} - \wp^*|} \cdot \frac{|\wp^{(i-1)} - \wp^*|}{|\wp^{(i-2)} - \wp^*|} \cdots \frac{|\wp^{(\nu+1)} - \wp^*|}{|\wp^{(\nu)} - \wp^*|} = 0.$$

Since $|\wp^{(\nu)} - \wp^*| \neq 0$, $\lim_{i \to \infty} |\wp^{(i)} - \wp^*| = 0$. By Definition 3.1, the algorithm converges.

Theorem 4.10 Suppose an iterative algorithm generates the estimate $\wp^{(i)}$ after the $i$th iteration, where $\wp^{(i)}$ is a scalar. Let $\wp^{(0)}$ denote the initial estimate of the algorithm. For some constant $\mu$ and some integer $\nu \ge 0$, if $\wp^{(i)} \ge \mu$ for all $i \ge 0$, and if $\wp^{(i+1)} \le \wp^{(i)}$ for all $i \ge \nu$, then the algorithm converges. Moreover, if $\nu = 0$, then it is said that the estimate obtained by the algorithm monotonically decreases to the solution. □

Proof. Let $\wp^* = \lim_{i \to +\infty} \wp^{(i)}$. Hence, $\wp^{(i)} \ge \wp^{(i+1)} \ge \wp^* \ge \mu$. If $\wp^{(i+1)} = \wp^{(i)}$ for $i \ge \nu$, then the algorithm obviously converges. If $\wp^{(i)} > \wp^{(i+1)}$ for $i \ge \nu$, then $|\wp^{(i)} - \wp^*| > |\wp^{(i+1)} - \wp^*|$. Hence,

$$\frac{|\wp^{(i+1)} - \wp^*|}{|\wp^{(i)} - \wp^*|} < 1.$$

Therefore,

$$\lim_{i \to \infty} \frac{|\wp^{(i)} - \wp^*|}{|\wp^{(\nu)} - \wp^*|} = \lim_{i \to \infty} \frac{|\wp^{(i)} - \wp^*|}{|\wp^{(i-1)} - \wp^*|} \cdot \frac{|\wp^{(i-1)} - \wp^*|}{|\wp^{(i-2)} - \wp^*|} \cdots \frac{|\wp^{(\nu+1)} - \wp^*|}{|\wp^{(\nu)} - \wp^*|} = 0.$$

Since $|\wp^{(\nu)} - \wp^*| \neq 0$, $\lim_{i \to \infty} |\wp^{(i)} - \wp^*| = 0$. By Definition 3.1, the algorithm converges.

Lemma 4.9 For single class separable queueing networks with $N > 1$, when the QL algorithm is solved by the successive substitution method with the initial performance measures $Q_k^{(0)}(N) = N/K$, if

$$Q_k^{(0)}(N) \le Q_k^{(1)}(N) \le \cdots \le Q_k^{(I)}(N)$$

for some integer $I \in [0, +\infty)$, then

$$Q_{k+1}^{(0)}(N) \le Q_{k+1}^{(1)}(N) \le \cdots \le Q_{k+1}^{(I)}(N). \quad □$$

Proof. For single class separable queueing networks with $N > 1$, when the QL algorithm is solved by the successive substitution method, utilizing (4.10) and (2.2) through (2.6) yields

$$Q_k(N) = \frac{D_k X(N)\left[1 + \frac{D_k}{(N-1)(D+Z)}\right]}{1 - \frac{N-2}{N-1} D_k X(N)}, \qquad (4.36)$$

and

$$Q_k^{(i+1)}(N) = \frac{N D_k\left[1 + \frac{N-2}{N-1} Q_k^{(i)}(N) + \frac{D_k}{(N-1)(D+Z)}\right]}{Z + R^{(i)}(N)}. \qquad (4.37)$$

Hence,

$$\frac{Q_{k+1}^{(i+1)}(N)}{Q_k^{(i+1)}(N)} = \frac{D_{k+1}\left[1 + \frac{N-2}{N-1} Q_{k+1}^{(i)}(N) + \frac{D_{k+1}}{(N-1)(D+Z)}\right]}{D_k\left[1 + \frac{N-2}{N-1} Q_k^{(i)}(N) + \frac{D_k}{(N-1)(D+Z)}\right]}. \qquad (4.38)$$

When $D_k \le D_{k+1}$, it is easy to show that $Q_k(N) \le Q_{k+1}(N)$ by utilizing (4.36). When $Q_k^{(0)}(N) = Q_{k+1}^{(0)}(N) = N/K$, it is easy to show that $Q_k^{(i)}(N) \le Q_{k+1}^{(i)}(N)$ by utilizing (4.38). Moreover, when

$$Q_k^{(0)}(N) \le Q_k^{(1)}(N) \le \cdots \le Q_k^{(I)}(N)$$

for some integer $I \in [0, +\infty)$, utilizing (4.37) yields

$$Q_k^{(i)}(N) \le \frac{N D_k\left[1 + \frac{N-2}{N-1} Q_k^{(i)}(N) + \frac{D_k}{(N-1)(D+Z)}\right]}{Z + R^{(i)}(N)} \qquad (4.39)$$

for $i \in [0, I-1]$. Hence, when $D_k \le D_{k+1}$ and $Q_k^{(0)}(N) = Q_{k+1}^{(0)}(N) = N/K$, utilizing (4.38) and (4.39) yields

$$Q_{k+1}^{(i)}(N) \le \frac{N D_{k+1}\left[1 + \frac{N-2}{N-1} Q_{k+1}^{(i)}(N) + \frac{D_{k+1}}{(N-1)(D+Z)}\right]}{Z + R^{(i)}(N)} = Q_{k+1}^{(i+1)}(N) \qquad (4.40)$$

for $i \in [0, I-1]$. Therefore,

$$Q_{k+1}^{(0)}(N) \le Q_{k+1}^{(1)}(N) \le \cdots \le Q_{k+1}^{(I)}(N).$$


Lemma 4.10 For single class separable queueing networks with $N > 1$ and $Z = 0$, when the QL algorithm is solved by the successive substitution method with the initial conditions $Q_k^{(0)}(N) = N/K$, if $Q_k^{(m)}(N) \le Q_k^{(m-1)}(N)$ for $k = 1, \ldots, p$, and $Q_k^{(m)}(N) > Q_k^{(m-1)}(N)$ for $k = p+1, \ldots, K$, where $m$ and $p$ are some integers, then $R^{(m)}(N) \ge R^{(m-1)}(N)$, $X^{(m)}(N) \le X^{(m-1)}(N)$, and $Q_k^{(m+1)}(N) \le Q_k^{(m)}(N)$ for $k = 1, \ldots, p$. □

Proof. For single class separable queueing networks with $N > 1$, when the QL algorithm is solved by the successive substitution method,

$$Q_k^{(i+1)}(N) = \frac{N D_k\left[1 + \frac{N-2}{N-1} Q_k^{(i)}(N) + \frac{D_k}{(N-1)(D+Z)}\right]}{Z + \sum_{l=1}^{K} D_l\left[1 + \frac{N-2}{N-1} Q_l^{(i)}(N) + \frac{D_l}{(N-1)(D+Z)}\right]}. \qquad (4.41)$$

Since $Z = 0$ and $Q_k^{(0)}(N) = N/K$, by (4.3) in Proposition 4.3, for all $i \ge 0$,

$$\sum_{k=1}^{K} Q_k^{(i)}(N) = \frac{N R^{(i)}(N)}{Z + R^{(i)}(N)} = N, \qquad (4.42)$$

and

$$\sum_{k=1}^{K} Q_k^{(i+1)}(N) = \frac{N R^{(i+1)}(N)}{Z + R^{(i+1)}(N)} = N. \qquad (4.43)$$

Hence, when $Z = 0$, for all $i \ge 0$,

$$\sum_{k=1}^{K} Q_k^{(i)}(N) = \sum_{k=1}^{K} Q_k^{(i+1)}(N) = N. \qquad (4.44)$$

Moreover, by (4.44) and $D_1 \le D_2 \le \cdots \le D_K$, if $Q_k^{(m)}(N) \le Q_k^{(m-1)}(N)$ for $k = 1, 2, \ldots, p$, and $Q_k^{(m)}(N) > Q_k^{(m-1)}(N)$ for $k = p+1, \ldots, K$, where $m$ and $p$ are some integers, then

$$\sum_{k=1}^{K} D_k Q_k^{(m)}(N) \ge \sum_{k=1}^{K} D_k Q_k^{(m-1)}(N). \qquad (4.45)$$

Hence,

$$R^{(m)}(N) = \sum_{l=1}^{K} D_l\left[1 + \frac{N-2}{N-1} Q_l^{(m)}(N) + \frac{D_l}{(N-1)(D+Z)}\right] \ge \sum_{l=1}^{K} D_l\left[1 + \frac{N-2}{N-1} Q_l^{(m-1)}(N) + \frac{D_l}{(N-1)(D+Z)}\right] = R^{(m-1)}(N). \qquad (4.46)$$

Also, by (2.4),

$$X^{(m)}(N) = \frac{N}{Z + R^{(m)}(N)} \le \frac{N}{Z + R^{(m-1)}(N)} = X^{(m-1)}(N). \qquad (4.47)$$

Therefore, for $k = 1, \ldots, p$,

$$Q_k^{(m+1)}(N) = \frac{N D_k\left[1 + \frac{N-2}{N-1} Q_k^{(m)}(N) + \frac{D_k}{(N-1)(D+Z)}\right]}{Z + \sum_{l=1}^{K} D_l\left[1 + \frac{N-2}{N-1} Q_l^{(m)}(N) + \frac{D_l}{(N-1)(D+Z)}\right]}
\le \frac{N D_k\left[1 + \frac{N-2}{N-1} Q_k^{(m-1)}(N) + \frac{D_k}{(N-1)(D+Z)}\right]}{Z + \sum_{l=1}^{K} D_l\left[1 + \frac{N-2}{N-1} Q_l^{(m)}(N) + \frac{D_l}{(N-1)(D+Z)}\right]}$$
$$\le \frac{N D_k\left[1 + \frac{N-2}{N-1} Q_k^{(m-1)}(N) + \frac{D_k}{(N-1)(D+Z)}\right]}{Z + \sum_{l=1}^{K} D_l\left[1 + \frac{N-2}{N-1} Q_l^{(m-1)}(N) + \frac{D_l}{(N-1)(D+Z)}\right]} = Q_k^{(m)}(N). \qquad (4.48)$$

Lemma 4.11 For single class separable queueing networks with $N > 1$ and $Z = 0$, suppose that the QL algorithm is solved by the successive substitution method with the initial conditions $Q_k^{(0)}(N) = N/K$. Let $i$ be the number of iterations. For $i \in [1, +\infty)$ and some integer $p \in [1, K]$, there exists some integer $I \in [0, i)$ such that

$$Q_k^{(0)}(N) \le Q_k^{(1)}(N) \le \cdots \le Q_k^{(I)}(N) \qquad (4.49)$$

and

$$Q_k^{(I)}(N) > Q_k^{(I+1)}(N) \ge Q_k^{(I+2)}(N) \ge \cdots \qquad (4.50)$$

for $k = 1, \ldots, p$, and there exists some integer $\sigma \le i$ such that

$$Q_k^{(0)}(N) \le Q_k^{(1)}(N) \le \cdots \le Q_k^{(\sigma)}(N) \qquad (4.51)$$

for $k = p+1, \ldots, K$. □

Proof. By induction on $i$.

When $i = 1$, because $D_k \le D_{k+1}$ and $Q_k^{(0)}(N) = N/K$, by (4.38),

$$Q_1^{(1)}(N) \le Q_2^{(1)}(N) \le \cdots \le Q_K^{(1)}(N).$$

Also, by (4.44), there is some integer $p$ such that

$$Q_k^{(1)}(N) < Q_k^{(0)}(N) = N/K, \quad \text{for } k = 1, \ldots, p,$$

and

$$Q_k^{(1)}(N) \ge Q_k^{(0)}(N) = N/K, \quad \text{for } k = p+1, \ldots, K.$$

Hence, when $i = 1$, there exists an integer $I = 0$ such that (4.49) and (4.50) hold for $k = 1, \ldots, p$, and there exists an integer $\sigma \le i$ such that (4.51) holds for $k = p+1, \ldots, K$. Therefore, the lemma is valid for $i = 1$.

Assume that the lemma is valid for $i = m$. Therefore, by Lemmas 4.9 and 4.10, there is some integer $r$ such that

1. for $k = 1, \ldots, r$, there exists $I \in [0, i)$ such that (4.49) and (4.50) hold;
2. for $k = r+1, \ldots, K$, there exists $\sigma \le m$ such that (4.51) holds.

When $i = m+1$, by Lemma 4.10, $Q_k^{(m+1)}(N) \le Q_k^{(m)}(N)$ for $k = 1, \ldots, r$. Let $s \ge r$ be the smallest integer such that $Q_{s+1}^{(m+1)}(N) \ge Q_{s+1}^{(m)}(N)$. Then, by Lemma 4.9, $Q_k^{(m+1)}(N) \ge Q_k^{(m)}(N)$ for $k = s+1, \ldots, K$. Also, $Q_k^{(m+1)}(N) < Q_k^{(m)}(N)$ for $k = r+1, \ldots, s$, i.e., $I = m$ for $k = r+1, \ldots, s$. Hence, when $i = m+1$, the lemma is valid, and there is some integer $s$ such that

1. for $k = 1, \ldots, s$, there exists $I \in [0, i)$ such that (4.49) and (4.50) hold;
2. for $k = s+1, \ldots, K$, there exists $\sigma \le i$ such that (4.51) holds.

Therefore, by induction, the lemma is valid for all $i \in [0, +\infty)$.

Corollary 4.3 For single class separable queueing networks with $N > 1$ and $Z = 0$, when the QL algorithm is solved by the successive substitution method with the initial conditions $Q_k^{(0)}(N) = N/K$, there exists an integer $I \in [0, +\infty)$ for each $k$, $1 \le k \le K$, such that

$$Q_k^{(0)}(N) \le Q_k^{(1)}(N) \le \cdots \le Q_k^{(I)}(N)$$

and

$$Q_k^{(I)}(N) > Q_k^{(I+1)}(N) \ge Q_k^{(I+2)}(N) \ge \cdots \quad □$$


Lemma 4.12 For single class separable queueing networks with $N > 1$ and $Z = 0$, when the QL algorithm is solved by the successive substitution method with the initial conditions $Q_k^{(0)}(N) = N/K$, for any integer $i \ge 1$, $R^{(i)}(N) \ge R^{(i-1)}(N)$ and $X^{(i)}(N) \le X^{(i-1)}(N)$. □

Proof. Since $Z = 0$ and $Q_k^{(0)}(N) = N/K$, by (4.3) in Proposition 4.3, for all $i \ge 0$,

$$\sum_{k=1}^{K} Q_k^{(i)}(N) = \frac{N R^{(i)}(N)}{Z + R^{(i)}(N)} = N, \qquad (4.52)$$

and

$$\sum_{k=1}^{K} Q_k^{(i+1)}(N) = \frac{N R^{(i+1)}(N)}{Z + R^{(i+1)}(N)} = N. \qquad (4.53)$$

By Lemma 4.11, for each $i \ge 1$, there is an integer $p$ such that $Q_k^{(i-1)}(N) \ge Q_k^{(i)}(N)$ for $k = 1, 2, \ldots, p$, and $Q_k^{(i-1)}(N) \le Q_k^{(i)}(N)$ for $k = p+1, \ldots, K$. Therefore, when $D_1 \le D_2 \le \cdots \le D_K$ and $\sum_{k=1}^{K} Q_k^{(i-1)}(N) = \sum_{k=1}^{K} Q_k^{(i)}(N) = N$ for all $i \ge 1$,

$$\sum_{k=1}^{K} D_k Q_k^{(i-1)}(N) \le \sum_{k=1}^{K} D_k Q_k^{(i)}(N). \qquad (4.54)$$

Hence, for all $i \ge 1$,

$$R^{(i)}(N) = \sum_{l=1}^{K} D_l\left[1 + \frac{N-2}{N-1} Q_l^{(i)}(N) + \frac{D_l}{(N-1)(D+Z)}\right] \ge \sum_{l=1}^{K} D_l\left[1 + \frac{N-2}{N-1} Q_l^{(i-1)}(N) + \frac{D_l}{(N-1)(D+Z)}\right] = R^{(i-1)}(N). \qquad (4.55)$$

By (2.4),

$$X^{(i)}(N) = \frac{N}{Z + R^{(i)}(N)} \le \frac{N}{Z + R^{(i-1)}(N)} = X^{(i-1)}(N). \qquad (4.56)$$

Proof of Theorem 4.4. Suppose the QL algorithm is solved by the successive substitution method with the initial performance measures $Q_k^{(0)}(N) = N/K$. For single class separable queueing networks with $N > 1$ and $Z = 0$, $0 \le Q_k^{(i)}(N) \le N$ for all $i \in [0, +\infty)$. By Theorems 4.9 and 4.10, Corollary 4.3, and Lemma 4.12, the theorem is established.


Appendix 4.E

This appendix establishes Theorem 4.5. In order to prove Theorem 4.5, the following lemma is needed.

Lemma 4.13 For single class separable queueing networks with $N > 1$, when the QL algorithm is solved by the successive substitution method, suppose that two sets of initial performance measure estimates, superscripted by $(0)$ and $(0)'$, are such that

$$R^{(0)'}(N) \le R^{(0)}(N),$$
$$Q_b^{(0)'}(N) \le Q_b^{(0)}(N), \quad \forall b \in B,$$
$$Q_p^{(0)'}(N) \ge Q_p^{(0)}(N), \quad \forall p \in P.$$

Then, for all $i \ge 0$,

$$R^{(i)'}(N) \le R^{(i)}(N), \qquad (4.57)$$
$$Q_b^{(i)'}(N) \le Q_b^{(i)}(N), \quad \forall b \in B, \qquad (4.58)$$
$$Q_p^{(i)'}(N) \ge Q_p^{(i)}(N), \quad \forall p \in P. \qquad (4.59)$$

□

Proof. By induction on $i$.

By assumption, the lemma holds for $i = 0$. Assume that the lemma holds for $i$, and consider the lemma for $i+1$ ($i \ge 0$). When the QL algorithm is solved by the successive substitution method, relation (4.57) is equivalent to

$$\sum_{k=1}^{K} D_k\left[1 + \frac{N-2}{N-1} Q_k^{(i)'}(N) + \frac{D_k}{(N-1)(D+Z)}\right] \le \sum_{k=1}^{K} D_k\left[1 + \frac{N-2}{N-1} Q_k^{(i)}(N) + \frac{D_k}{(N-1)(D+Z)}\right]$$
$$\iff \sum_{k=1}^{K} D_k Q_k^{(i)'}(N) \le \sum_{k=1}^{K} D_k Q_k^{(i)}(N)$$
$$\iff \sum_{b \in B} D_b\left[Q_b^{(i)}(N) - Q_b^{(i)'}(N)\right] - \sum_{p \in P} D_p\left[Q_p^{(i)'}(N) - Q_p^{(i)}(N)\right] \ge 0.$$

Since $0 \le D_k \le D_b$ for $1 \le k \le K$, the relation (4.57) will be established if it can be shown that

$$\sum_{b \in B} D_b\left[Q_b^{(i)}(N) - Q_b^{(i)'}(N)\right] - \sum_{p \in P} D_b\left[Q_p^{(i)'}(N) - Q_p^{(i)}(N)\right] \ge 0. \qquad (4.60)$$

Since

$$\sum_{k=1}^{K} Q_k^{(i)}(N) = \frac{N R^{(i)}(N)}{Z + R^{(i)}(N)}, \qquad (4.61)$$

and

$$\sum_{k=1}^{K} Q_k^{(i)'}(N) = \frac{N R^{(i)'}(N)}{Z + R^{(i)'}(N)}, \qquad (4.62)$$

it is easy to see that (4.60) is equivalent to

$$\frac{R^{(i)'}(N)}{Z + R^{(i)'}(N)} \le \frac{R^{(i)}(N)}{Z + R^{(i)}(N)}. \qquad (4.63)$$

Since $R^{(i)'}(N) \le R^{(i)}(N)$ by the inductive hypothesis, relation (4.63) holds, establishing (4.57) for $i+1$. Relation (4.63) then yields

$$\frac{N R^{(i+1)'}(N)}{Z + R^{(i+1)'}(N)} \le \frac{N R^{(i+1)}(N)}{Z + R^{(i+1)}(N)}, \qquad (4.64)$$

and

$$\sum_{k=1}^{K} \frac{N R_k^{(i+1)'}(N)}{Z + R^{(i+1)'}(N)} \le \sum_{k=1}^{K} \frac{N R_k^{(i+1)}(N)}{Z + R^{(i+1)}(N)}. \qquad (4.65)$$

From (4.65) it is easy to see that relation (4.58) or (4.59) could be violated for $i+1$ only if there is a center $p \in P$ such that

$$\frac{N R_p^{(i+1)}(N)}{Z + R^{(i+1)}(N)} > \frac{N R_p^{(i+1)'}(N)}{Z + R^{(i+1)'}(N)}$$
$$\iff \frac{1 + \frac{N-2}{N-1} Q_p^{(i)}(N) + \frac{D_p}{(N-1)(D+Z)}}{Z + R^{(i+1)}(N)} > \frac{1 + \frac{N-2}{N-1} Q_p^{(i)'}(N) + \frac{D_p}{(N-1)(D+Z)}}{Z + R^{(i+1)'}(N)}$$
$$\iff \frac{Z + R^{(i+1)'}(N)}{Z + R^{(i+1)}(N)} > \frac{1 + \frac{N-2}{N-1} Q_p^{(i)'}(N) + \frac{D_p}{(N-1)(D+Z)}}{1 + \frac{N-2}{N-1} Q_p^{(i)}(N) + \frac{D_p}{(N-1)(D+Z)}}.$$

However, by the inductive hypothesis,

$$\frac{Z + R^{(i+1)'}(N)}{Z + R^{(i+1)}(N)} \le 1,$$

and

$$\frac{1 + \frac{N-2}{N-1} Q_p^{(i)'}(N) + \frac{D_p}{(N-1)(D+Z)}}{1 + \frac{N-2}{N-1} Q_p^{(i)}(N) + \frac{D_p}{(N-1)(D+Z)}} \ge 1.$$

As this is a contradiction, relations (4.58) and (4.59) are established for $i+1$. Therefore, the lemma holds for $i+1$, and by induction, the lemma holds for all $i \ge 0$.

Proof of Theorem 4.5. For single class separable queueing networks with $N > 1$, $\forall b \in B$, and $\forall p \in P$, when the QL algorithm is solved by the successive substitution method with the initial conditions $Q_b^{(0)}(N) = \frac{N^2 D}{T(Z + ND)}$ and $Q_p^{(0)}(N) = 0$, it yields $R^{(0)}(N) = ND$, $R_b^{(0)}(N) = \frac{ND}{T}$, and $R_p^{(0)}(N) = 0$. Since $R^{(i)}(N) \ge D$ for all $i$, if it can be shown that $R^{(i+1)}(N) \le R^{(i)}(N)$ for all $i \ge 0$, then by Theorem 4.10, monotonic convergence of the QL algorithm will be established. When $N > 1$,

$$R^{(1)}(N) = D + \frac{(N-2)N^2 D D_b}{(N-1)(Z + ND)} + \frac{\sum_{k=1}^{K} D_k^2}{(N-1)(D+Z)}, \qquad (4.66)$$

$$R_b^{(1)}(N) = D_b\left[1 + \frac{(N-2)N^2 D}{(N-1)T(Z + ND)} + \frac{D_b}{(N-1)(D+Z)}\right], \qquad (4.67)$$

$$R_p^{(1)}(N) = D_p\left[1 + \frac{D_p}{(N-1)(D+Z)}\right]. \qquad (4.68)$$

It can be verified that

$$R^{(1)}(N) \le R^{(0)}(N) = ND, \qquad (4.69)$$
$$Q_b^{(1)}(N) \le Q_b^{(0)}(N) = \frac{N^2 D}{T(Z + ND)}, \qquad (4.70)$$
$$Q_p^{(1)}(N) \ge Q_p^{(0)}(N) = 0. \qquad (4.71)$$

Since the estimates resulting from one iteration can be treated as alternative initial estimates superscripted by $(0)'$, Lemma 4.13 implies that $R^{(i+1)}(N) = R^{(i)'}(N) \le R^{(i)}(N)$ for all $i \ge 0$. Moreover, by (2.4), $X^{(i+1)}(N) \ge X^{(i)}(N)$ for all $i \ge 0$. Therefore, the theorem is established.
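The monotone decrease asserted by Theorem 4.5 can be checked numerically. The sketch below (illustrative code, not from the thesis; `ql_bottleneck_start` is an assumed name, and the initialization is the one reconstructed above) runs the single class QL successive substitution from the bottleneck initialization and records the response time estimates, which should form a nonincreasing sequence:

```python
# Illustrative check of Theorem 4.5: single class QL successive substitution
# started from Q_b^(0) = N^2 D / (T (Z + N D)) at the T bottleneck centers
# and Q_p^(0) = 0 elsewhere; returns the sequence of R^(i)(N) estimates.
def ql_bottleneck_start(D_k, Z, N, iters=200):
    D = sum(D_k)
    Dmax = max(D_k)
    T = sum(1 for d in D_k if d == Dmax)     # number of bottleneck centers
    Q = [N * N * D / (T * (Z + N * D)) if d == Dmax else 0.0 for d in D_k]
    history = []
    for _ in range(iters):
        R_k = [d * (1 + (N - 2) / (N - 1) * q + d / ((N - 1) * (D + Z)))
               for d, q in zip(D_k, Q)]
        R = sum(R_k)
        history.append(R)
        Q = [N * r / (Z + R) for r in R_k]
    return history
```

Unlike the balanced initialization of Theorem 4.4, this starting point gives monotone convergence even when $Z \neq 0$, which is its practical appeal.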

Appendix 4.F

This appendix establishes Theorem 4.6.

Proof of Theorem 4.6. Theorem 4.3 shows that the QL algorithm has a unique solution when it is solved by Newton's method. Let $f_k(R(N))$, $h_k(R(N))$, and $g(R(N))$ have the same definitions as in the proof of Theorem 4.3 given in Appendix 4.C. In the proof of Theorem 4.3, it was shown that $g'(R(N)) < 0$. It is easy to see that, for $1 \le k \le K$, $f_k''(R(N)) > 0$ and $h_k''(R(N)) > 0$. Hence, $g''(R(N)) > 0$. Also, equation (4.33) can be rewritten as

$$G(R(N)) = g(R(N)) - 1 = 0, \qquad (4.72)$$

where $G(R(N))$ is a function with respect to $R(N)$. Newton's method solves for $R(N)$ by a formula of the form

$$R^{(i+1)}(N) = R^{(i)}(N) - \frac{G(R^{(i)}(N))}{G'(R^{(i)}(N))}. \qquad (4.73)$$

Since $G'(R(N)) < 0$ and $G''(R(N)) > 0$, $G(R(N))$ is monotonically decreasing and convex, as shown in Figure 4.2. Moreover, by Theorem 4.3, $G(R(N)) = 0$ has a unique solution. Since, given a mapping $f(x) = 0$, Newton's method with a globally convergent modification always converges to the unique solution from any starting point when the unique solution exists, $f'(x) < 0$, and $f''(x) > 0$ for all $x$ in the domain [14], the theorem is established.
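The globally convergent modification can be imitated with a damped Newton iteration on $G(R) = g(R) - 1$. The following sketch is an assumption-laden illustration, not the thesis's implementation: the derivative is approximated by central differences, and the name `newton_ql` is invented. Any step that would leave the feasible region $(\omega, +\infty)$ is halved:

```python
# Illustrative damped Newton iteration for G(R) = g(R) - 1 = 0.
# g is assumed monotone decreasing and convex on (omega, +inf), as
# established in Appendix 4.C, so halved steps stay in the region.
def newton_ql(g, omega, R0, tol=1e-12, max_iter=100):
    R = R0
    for _ in range(max_iter):
        G = g(R) - 1.0
        h = 1e-7 * max(abs(R), 1.0)
        dG = (g(R + h) - g(R - h)) / (2.0 * h)   # central-difference G'(R)
        step = G / dG
        R_new = R - step
        while R_new <= omega:                    # damping keeps R feasible
            step *= 0.5
            R_new = R - step
        if abs(R_new - R) < tol:
            return R_new
        R = R_new
    return R
```

On the balanced two-center example of Appendix 4.C, where $g(R) = 3/R$ and the root is $R = 3$, the iteration settles in a handful of steps, consistent with the at-least-quadratic rate of Theorem 4.8.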

Appendix 4.G

This appendix establishes Theorem 4.7.

Proof of Theorem 4.7. For single class separable networks, the QL algorithm de nes the mapping

R(N ) = (R(N ));

(4.74)

where is a continuous and di erentiable function on the domain of R(N ). By Theorem 4.3, there exists a unique solution R(N ) of the residence time of the QL algorithm:

R(N ) = (R(N )):

(4.75)

Figure 4.2: Plot of G(R(N)) vs R(N) for the QL algorithm

When the QL algorithm is solved by the successive substitution method, the algorithm computes successive estimates by a formula of the form

R^{(i+1)}(N) = \phi(R^{(i)}(N)).   (4.76)

By (4.75), (4.76), and the Mean-Value Theorem [27],

| R^{(i+1)}(N) - R^*(N) | = | \phi(R^{(i)}(N)) - R^*(N) |
                          = | \phi(R^{(i)}(N)) - \phi(R^*(N)) |
                          = | \phi'(\xi^{(i)}) | \cdot | R^{(i)}(N) - R^*(N) |,   (4.77)

where \xi^{(i)} is a value between R^{(i)}(N) and R^*(N). Suppose \phi' is a continuous function on the domain of R(N) and

| \phi'(R^*(N)) | = c \ne 0.   (4.78)

Hence, c > 0, and we can choose two constants c_1 and c_2 such that 0 < c_1 < c < c_2. Since \phi' is a continuous function, there exists a constant \varepsilon > 0 such that c_1 \le | \phi'(\xi) | \le c_2 for all \xi \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon].


By the convergence assumption of the theorem, R^{(i)}(N) \to R^*(N) as i \to +\infty. Hence, there exists an integer \eta such that R^{(i)}(N) \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon] for i \ge \eta. Since \xi^{(i)} in (4.77) is between R^{(i)}(N) and R^*(N), \xi^{(i)} \in [R^*(N) - \varepsilon, R^*(N) + \varepsilon] for i \ge \eta as well. Hence,

c_1 \le | \phi'(\xi^{(i)}) | \le c_2, for all i \ge \eta.   (4.79)

Therefore, by (4.77),

c_1 | R^{(i)}(N) - R^*(N) | \le | R^{(i+1)}(N) - R^*(N) | \le c_2 | R^{(i)}(N) - R^*(N) |, for all i \ge \eta.   (4.80)

Hence, by Theorem 3.5, the theorem is established. To complete the proof, we only need to show that \phi' is a continuous function on the domain of R(N) and that | \phi'(R^*(N)) | \ne 0 for N > 2. Consider the mapping defined by the QL algorithm:

R_k(N) = D_k \left[ 1 + \frac{N-2}{N-1} \cdot \frac{N R_k(N)}{Z + R(N)} + \frac{D_k}{(N-1)(D+Z)} \right], for 1 \le k \le K,   (4.81)

and

R(N) = \sum_{k=1}^{K} R_k(N).   (4.82)

Hence, (4.81) can be rewritten as

R_k(N) \left[ 1 - \frac{N(N-2)D_k}{(N-1)[Z + R(N)]} \right] = D_k \left[ 1 + \frac{D_k}{(N-1)(D+Z)} \right].   (4.83)

Since D_k and R_k(N) must be positive, any feasible solution must satisfy

1 - \frac{N(N-2)D_k}{(N-1)[Z + R(N)]} > 0, for 1 \le k \le K,   (4.84)

which is equivalent to

R(N) > \frac{N(N-2)}{N-1} D_k - Z, for 1 \le k \le K.   (4.85)

Also, R(N) must be positive; therefore,

R(N) > \max \left\{ 0, \max_{k=1}^{K} \frac{N(N-2)}{N-1} D_k - Z \right\}.   (4.86)


Isolating R_k(N) on the left-hand side in (4.83), summing over k, and applying (4.82) yields

R(N) = \sum_{k=1}^{K} \frac{D_k \left[ 1 + \frac{D_k}{(N-1)(D+Z)} \right]}{1 - \frac{N(N-2)D_k}{(N-1)[Z+R(N)]}}
     = \sum_{k=1}^{K} \frac{D_k [Z + R(N)] [(N-1)(D+Z) + D_k]}{(N-1)(D+Z)[Z + R(N)] - N(N-2)D_k(D+Z)}
     = \phi(R(N)).   (4.87)

Hence,

\phi'(R(N)) = \sum_{k=1}^{K} \frac{D_k^2 (2-N) N (ND + NZ - D - Z + D_k)}{(D+Z)[NZ + NR(N) - Z - R(N) - N^2 D_k + 2N D_k]^2}.   (4.88)

In the region defined by inequality (4.86), it is not difficult to verify that \phi' is a continuous function, and \phi'(R(N)) < 0 for N > 2. Moreover, \phi'(R(N)) \to 0 if and only if R(N) \to +\infty. By Theorem 4.3, R(N) has a unique solution R^*(N) (i.e., R^*(N) does not tend to +\infty). Hence, \phi'(R^*(N)) \ne 0. Therefore, | \phi'(R^*(N)) | \ne 0 for N > 2. Hence, the proof is complete.
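The successive substitution scheme (4.76) and its linear convergence can be observed numerically with a short sketch (illustrative names; this assumes the mapping φ of equation (4.87) and the initial estimate R^(0) = ND):

```python
# Sketch: successive substitution R_(i+1) = phi(R_(i)) for the single-class
# QL mapping of equation (4.87), starting from R^(0) = N * D.

def phi(R, D, Z, N):
    Dtot = sum(D)
    total = 0.0
    for Dk in D:
        num = Dk * (Z + R) * ((N - 1) * (Dtot + Z) + Dk)
        den = (N - 1) * (Dtot + Z) * (Z + R) - N * (N - 2) * Dk * (Dtot + Z)
        total += num / den
    return total

def successive_substitution(D, Z, N, tol=1e-12, max_iter=100_000):
    """Iterate phi to a fixed point; return (R, number_of_iterations)."""
    R = N * sum(D)  # initial estimate R^(0) = N D
    for i in range(1, max_iter + 1):
        R_next = phi(R, D, Z, N)
        if abs(R_next - R) < tol:
            return R_next, i
        R = R_next
    return R, max_iter
```

The successive error ratios |R^(i+1) − R*| / |R^(i) − R*| settle near |φ′(R*)|, which is the linear convergence rate guaranteed by the theorem.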

Chapter 5

Comparison of the PE, QL, and FL Algorithms

In this chapter, a comparison of the accuracies of the PE, QL, and FL algorithms is presented, and the merits of these algorithms are investigated.

5.1 Analytic Results on the Accuracies of the PE, QL, and FL Algorithms

It is clear that the degrees of accuracy of the PE, QL, and FL algorithms depend upon the approximations these algorithms make to Q_k(\vec{N} - \vec{1}_c). The accuracy of the solution of each approximate algorithm is proportional to the accuracy of the approximation to Q_k(\vec{N} - \vec{1}_c). As shown formally in (2.9), (4.8), and (4.11), all three algorithms assume that removing a class c customer does not affect the proportion of time spent by customers of any other class at each service center (i.e., Q_{j,k}(\vec{N} - \vec{1}_c) = Q_{j,k}(\vec{N}) for j \ne c). Moreover, the PE algorithm estimates Q_{c,k}(\vec{N} - \vec{1}_c) by linear interpolation between (0, 0) and (N_c, Q_{c,k}(\vec{N})), while the QL algorithm estimates Q_{c,k}(\vec{N} - \vec{1}_c) by linear interpolation between the points (1, Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c)) and (N_c, Q_{c,k}(\vec{N})), and the FL algorithm estimates Q_{c,k}(\vec{N} - \vec{1}_c) by linear interpolation between the points (1, Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c)) and (N_c, Q_{c,k}(\vec{N})/N_c).
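The three interpolation rules can be stated compactly in code. In the sketch below (hypothetical names), q_full stands for Q_{c,k}(\vec{N}), q_one for Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c), and each function evaluates its interpolating line at population N_c - 1 to estimate Q_{c,k}(\vec{N} - \vec{1}_c):

```python
# The three linear-interpolation estimates of Q_{c,k}(N - 1_c), each evaluated
# at population N_c - 1.  Illustrative sketch; q_full = Q_{c,k}(N) and
# q_one = Q_{c,k}(N - (N_c - 1) 1_c).

def pe_estimate(q_full, n_c):
    # PE: line through (0, 0) and (N_c, q_full)
    return (n_c - 1) / n_c * q_full

def ql_estimate(q_full, q_one, n_c):
    # QL: line through (1, q_one) and (N_c, q_full)
    return q_one + (n_c - 2) / (n_c - 1) * (q_full - q_one)

def fl_estimate(q_full, q_one, n_c):
    # FL: line through (1, q_one) and (N_c, q_full / N_c)
    return q_one + (n_c - 2) / (n_c - 1) * (q_full / n_c - q_one)
```

Note that for N_c = 2 the QL and FL estimates both reduce to q_one, which is exact for a single class network with population 2.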


5.1.1 Accuracies of the Assumptions of the PE and QL Algorithms

Let A_{j,k}^{(c)[PE]} and A_{j,k}^{(c)[QL]} denote the approximations of Q_{j,k}(\vec{N} - \vec{1}_c) under the assumptions of the PE and QL algorithms respectively for j = 1, ..., C, and let A_k^{(c)[PE]} and A_k^{(c)[QL]} denote the approximations of Q_k(\vec{N} - \vec{1}_c) under the assumptions of the PE and QL algorithms respectively. Hence, the errors in the assumptions of the PE and QL algorithms are defined by

E_{j,k}^{(c)[PE]} = A_{j,k}^{(c)[PE]} - Q_{j,k}(\vec{N} - \vec{1}_c), for j = 1, ..., C, c = 1, ..., C, and k = 1, ..., K,   (5.1)

E_{j,k}^{(c)[QL]} = A_{j,k}^{(c)[QL]} - Q_{j,k}(\vec{N} - \vec{1}_c), for j = 1, ..., C, c = 1, ..., C, and k = 1, ..., K,   (5.2)

and

E_k^{(c)[PE]} = A_k^{(c)[PE]} - Q_k(\vec{N} - \vec{1}_c), for c = 1, ..., C, and k = 1, ..., K,   (5.3)

E_k^{(c)[QL]} = A_k^{(c)[QL]} - Q_k(\vec{N} - \vec{1}_c), for c = 1, ..., C, and k = 1, ..., K.   (5.4)

It is obvious that

E_k^{(c)[PE]} = \sum_{j=1}^{C} E_{j,k}^{(c)[PE]}   (5.5)

and

E_k^{(c)[QL]} = \sum_{j=1}^{C} E_{j,k}^{(c)[QL]}.   (5.6)

It has been proved that

X_j(\vec{N}) - X_j(\vec{N} - \vec{1}_c) \begin{cases} > 0 & \text{if } j = c \\ < 0 & \text{if } j \ne c \end{cases}   (5.7)

and

Q_{c,k}(\vec{N}) - Q_{c,k}(\vec{N} - \vec{1}_c) > 0   (5.8)

for multiple class separable queueing networks [1, 39]. Also, it has been shown that

X_c(\vec{N} + \vec{1}_c) - X_c(\vec{N}) \le X_c(\vec{N}) - X_c(\vec{N} - \vec{1}_c),   (5.9)

which means that the function X_c(\vec{N}) with respect to N_c is a monotonically increasing concave function, as shown in Figure 5.1 [13, 16].
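The monotone, concave growth of the throughput can be observed with the standard exact single-class MVA recursion (the recursion follows (2.1)-(2.6); the demands and think time in the example are illustrative assumptions):

```python
# Standard exact single-class MVA recursion for load-independent centers,
# used to observe the monotone, concave growth of the throughput X(n).
# The demands and think time below are illustrative only.

def exact_mva_throughputs(D, Z, N):
    """Return [X(1), ..., X(N)] for demands D[k] and think time Z."""
    K = len(D)
    Q = [0.0] * K                      # queue lengths at population n - 1
    X_values = []
    for n in range(1, N + 1):
        R_k = [D[k] * (1.0 + Q[k]) for k in range(K)]
        X = n / (Z + sum(R_k))         # throughput at population n
        Q = [X * r for r in R_k]       # queue lengths at population n
        X_values.append(X)
    return X_values

X = exact_mva_throughputs([1.0, 2.0, 0.5], Z=5.0, N=12)
increments = [b - a for a, b in zip(X, X[1:])]
```

The sequence X is increasing while the increments shrink, matching (5.9).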



Figure 5.1: Plot of the function X_c(\vec{N}) with respect to N_c (curves shown: the exact throughput, the balanced system bounds, and the asymptotic bounds)

Both the PE and QL algorithms assume that Q_{j,k}(\vec{N} - \vec{1}_c) = Q_{j,k}(\vec{N}) for j \ne c. Therefore,

A_{j,k}^{(c)[PE]} = A_{j,k}^{(c)[QL]} = Q_{j,k}(\vec{N}), for j \ne c,   (5.10)

and

E_{j,k}^{(c)[PE]} = E_{j,k}^{(c)[QL]}, for j \ne c.   (5.11)

However, the PE and QL algorithms use different approximations for Q_{c,k}(\vec{N} - \vec{1}_c). The relationship between E_{c,k}^{(c)[PE]} and E_{c,k}^{(c)[QL]} is given in the following theorem.

Theorem 5.1

| E_{c,k}^{(c)[PE]} | \ge | E_{c,k}^{(c)[QL]} |   (5.12)

and

E_{c,k}^{(c)[PE]} E_{c,k}^{(c)[QL]} \ge 0.   (5.13)  □

Proof. When N_c = 1,

A_{c,k}^{(c)[PE]} = A_{c,k}^{(c)[QL]} = Q_{c,k}(\vec{N} - \vec{1}_c) = 0.   (5.14)

Hence,

E_{c,k}^{(c)[PE]} = E_{c,k}^{(c)[QL]} = 0,   (5.15)

and the theorem holds.

Consider the case when N_c > 1. Since the PE algorithm estimates Q_{c,k}(\vec{N} - \vec{1}_c) by linear interpolation between (0, 0) and (N_c, Q_{c,k}(\vec{N})), and the QL algorithm estimates Q_{c,k}(\vec{N} - \vec{1}_c) by linear interpolation between the points (1, Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c)) and (N_c, Q_{c,k}(\vec{N})) when N_c > 1, the relationship between E_{c,k}^{(c)[PE]} and E_{c,k}^{(c)[QL]} depends on the behavior of the function Q_{c,k}(\vec{n}) with respect to n_c, where \vec{n} = (n_1, n_2, ..., n_C) is a population vector ranging from \vec{0} to \vec{N}.

When N_c > 1 and D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c = 0, by Proposition 4.3, \sum_{i=1}^{K} Q_{c,i}(\vec{n}) = n_c. Since the network is balanced, Q_{c,k}(\vec{n}) = n_c / K. Therefore, as shown in Figure 5.2, the function Q_{c,k}(\vec{n}) is a linear function with respect to n_c. Since both the PE and QL algorithms assume the same linear function Q_{c,k}(\vec{n}) with respect to n_c, it is obvious that

A_{c,k}^{(c)[PE]} = A_{c,k}^{(c)[QL]} = Q_{c,k}(\vec{N} - \vec{1}_c).   (5.16)

Hence,

E_{c,k}^{(c)[PE]} = E_{c,k}^{(c)[QL]} = 0.   (5.17)

Therefore, (5.12) and (5.13) hold, and the theorem is established in this case.

When N_c > 1 and D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c > 0, the network is balanced, and Q_{c,k}(\vec{n}) = \sum_{i=1}^{K} Q_{c,i}(\vec{n}) / K. By Proposition 4.3 and (5.7),

\sum_{i=1}^{K} Q_{c,i}(\vec{n}) - \sum_{i=1}^{K} Q_{c,i}(\vec{n} - \vec{1}_c)
  = [n_c - X_c(\vec{n}) Z_c] - [(n_c - 1) - X_c(\vec{n} - \vec{1}_c) Z_c]
  = 1 + Z_c [X_c(\vec{n} - \vec{1}_c) - X_c(\vec{n})] < 1.   (5.18)

However, by (5.8),

\sum_{i=1}^{K} Q_{c,i}(\vec{n}) - \sum_{i=1}^{K} Q_{c,i}(\vec{n} - \vec{1}_c) > 0.   (5.19)

Moreover, by (5.9) and (5.18),

\sum_{i=1}^{K} Q_{c,i}(\vec{n} + \vec{1}_c) - \sum_{i=1}^{K} Q_{c,i}(\vec{n}) \ge \sum_{i=1}^{K} Q_{c,i}(\vec{n}) - \sum_{i=1}^{K} Q_{c,i}(\vec{n} - \vec{1}_c).   (5.20)


Figure 5.2: Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c, and the approximations of the PE and QL algorithms, when D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c = 0 (the exact queue length and the QL and PE approximations all coincide with the line Q_{c,k}(\vec{n}) = n_c / K)

Therefore,

0 < Q_{c,k}(\vec{n}) - Q_{c,k}(\vec{n} - \vec{1}_c) < \frac{1}{K},   (5.21)

and

Q_{c,k}(\vec{n} + \vec{1}_c) - Q_{c,k}(\vec{n}) \ge Q_{c,k}(\vec{n}) - Q_{c,k}(\vec{n} - \vec{1}_c).   (5.22)

Hence, as shown in Figure 5.3, the function Q_{c,k}(\vec{n}) with respect to n_c is a monotonically increasing convex function, and it is easy to verify that

A_{c,k}^{(c)[PE]} > A_{c,k}^{(c)[QL]} > Q_{c,k}(\vec{N} - \vec{1}_c),   (5.23)

and

E_{c,k}^{(c)[PE]} > E_{c,k}^{(c)[QL]} > 0.   (5.24)

Therefore, (5.12) and (5.13) hold, and the theorem is established in this case.


Figure 5.3: Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c, and the approximations of the PE and QL algorithms, when D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c > 0 (curves shown: the exact queue length, the QL approximation, and the PE approximation, together with the reference lines Y = n_c and Y = n_c / K)

Now consider the case when N_c > 1 and D_{c,i} \ne D_{c,j} for some i and j. For multiple class separable queueing networks, let W_c = \max_k \{ D_{c,k} \} for c = 1, ..., C. A service center with service demand W_c for class c is defined as a bottleneck center for class c customers. When D_{c,i} \ne D_{c,j} for some i and j, each service center can be classified as either a bottleneck center or a nonbottleneck center for class c customers. Suppose center b is a bottleneck center and center p is a nonbottleneck center for class c customers. Since D_{c,b} > D_{c,p}, by (2.1)-(2.6), (5.7), and (5.8),

Q_{c,b}(\vec{n}) - Q_{c,b}(\vec{n} - \vec{1}_c) > Q_{c,p}(\vec{n}) - Q_{c,p}(\vec{n} - \vec{1}_c) > 0.   (5.25)

By Proposition 4.3 and (5.7),

\sum_{i=1}^{K} Q_{c,i}(\vec{n}) - \sum_{i=1}^{K} Q_{c,i}(\vec{n} - \vec{1}_c)
  = [n_c - X_c(\vec{n}) Z_c] - [(n_c - 1) - X_c(\vec{n} - \vec{1}_c) Z_c]
  = 1 + Z_c [X_c(\vec{n} - \vec{1}_c) - X_c(\vec{n})] \le 1.   (5.26)

Also, by (5.8),

\sum_{i=1}^{K} Q_{c,i}(\vec{n}) - \sum_{i=1}^{K} Q_{c,i}(\vec{n} - \vec{1}_c) > 0.   (5.27)

Moreover, by (5.9) and (5.18),

\sum_{i=1}^{K} Q_{c,i}(\vec{n} + \vec{1}_c) - \sum_{i=1}^{K} Q_{c,i}(\vec{n}) \ge \sum_{i=1}^{K} Q_{c,i}(\vec{n}) - \sum_{i=1}^{K} Q_{c,i}(\vec{n} - \vec{1}_c).   (5.28)

Since Q_{c,b}(\vec{n}) = Q_{c,p}(\vec{n}) = 0 when n_c = 0, as shown in Figure 5.4, it is easy to see that

0 < Q_{c,b}(\vec{n}) - Q_{c,b}(\vec{n} - \vec{1}_c) < Q_{c,b}(\vec{n} + \vec{1}_c) - Q_{c,b}(\vec{n}) < 1,   (5.29)

and

1 > Q_{c,p}(\vec{n}) - Q_{c,p}(\vec{n} - \vec{1}_c) > Q_{c,p}(\vec{n} + \vec{1}_c) - Q_{c,p}(\vec{n}) > 0,   (5.30)

which means that the function Q_{c,b}(\vec{n}) with respect to n_c is a monotonically increasing convex function and the function Q_{c,p}(\vec{n}) with respect to n_c is a monotonically increasing concave function. For bottleneck centers, as shown in Figure 5.5, it is easy to verify that

A_{c,k}^{(c)[PE]} > A_{c,k}^{(c)[QL]} > Q_{c,k}(\vec{N} - \vec{1}_c),   (5.31)

and

E_{c,k}^{(c)[PE]} > E_{c,k}^{(c)[QL]} > 0.   (5.32)

Therefore, (5.12) and (5.13) hold, and the theorem is established in this case. For nonbottleneck centers, as shown in Figure 5.6, it is easy to verify that

A_{c,k}^{(c)[PE]} < A_{c,k}^{(c)[QL]} < Q_{c,k}(\vec{N} - \vec{1}_c),   (5.33)

and

E_{c,k}^{(c)[PE]} < E_{c,k}^{(c)[QL]} < 0.   (5.34)

Therefore, (5.12) and (5.13) hold, and the theorem is established in this case. □

The following three theorems establish the relationship between the error associated with the approximations of the QL algorithm and the error associated with the approximations of the PE algorithm to Q_k(\vec{N} - \vec{1}_c).


Figure 5.4: Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c when all service centers can be classified as bottleneck centers and nonbottleneck centers for class c customers

Theorem 5.2 For single class separable queueing networks (i.e., C = 1),

| E_k^{(c)[PE]} | \ge | E_k^{(c)[QL]} |,   (5.35)

and

E_k^{(c)[PE]} E_k^{(c)[QL]} \ge 0.   (5.36)  □

Proof. When C = 1, by (5.5) and (5.6),

E_k^{(c)[PE]} = E_{c,k}^{(c)[PE]},   (5.37)

and

E_k^{(c)[QL]} = E_{c,k}^{(c)[QL]}.   (5.38)


Figure 5.5: Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c and the approximations of the PE and QL algorithms when center k is a bottleneck center for class c customers

Hence, by Theorem 5.1, this theorem is established. □

Theorem 5.3 For multiple class separable queueing networks, when N_c = 1,

E_k^{(c)[PE]} = E_k^{(c)[QL]}.   (5.39)  □

Proof. For multiple class separable queueing networks, by (5.5) and (5.6),

E_k^{(c)[PE]} = \sum_{j=1}^{C} E_{j,k}^{(c)[PE]} = E_{c,k}^{(c)[PE]} + \sum_{j \ne c} E_{j,k}^{(c)[PE]},   (5.40)

and

E_k^{(c)[QL]} = \sum_{j=1}^{C} E_{j,k}^{(c)[QL]} = E_{c,k}^{(c)[QL]} + \sum_{j \ne c} E_{j,k}^{(c)[QL]}.   (5.41)


Figure 5.6: Plot of the exact Q_{c,k}(\vec{n}) with respect to n_c and the approximations of the PE and QL algorithms when center k is a nonbottleneck center for class c customers

When N_c = 1, by (5.11) and (5.15),

E_k^{(c)[PE]} = \sum_{j \ne c} E_{j,k}^{(c)[PE]} = \sum_{j \ne c} E_{j,k}^{(c)[QL]} = E_k^{(c)[QL]}.   (5.42)  □

Theorem 5.4 For multiple class separable queueing networks, there exists an integer N^* such that if N_c > N^* then

| E_k^{(c)[PE]} | \ge | E_k^{(c)[QL]} |,   (5.43)

and

E_k^{(c)[PE]} E_k^{(c)[QL]} \ge 0.   (5.44)  □


Proof. For multiple class separable queueing networks, by (5.5) and (5.6),

E_k^{(c)[PE]} = \sum_{j=1}^{C} E_{j,k}^{(c)[PE]} = E_{c,k}^{(c)[PE]} + \sum_{j \ne c} E_{j,k}^{(c)[PE]},   (5.45)

and

E_k^{(c)[QL]} = \sum_{j=1}^{C} E_{j,k}^{(c)[QL]} = E_{c,k}^{(c)[QL]} + \sum_{j \ne c} E_{j,k}^{(c)[QL]}.   (5.46)

In the remainder of this proof, we will show that the theorem holds for each of the following cases:

1. | E_{c,k}^{(c)[PE]} | = | E_{c,k}^{(c)[QL]} |,
2. E_{c,k}^{(c)[PE]} > E_{c,k}^{(c)[QL]} \ge 0,
3. E_{c,k}^{(c)[PE]} < E_{c,k}^{(c)[QL]} \le 0.

By Theorem 5.1, these cases include all possible relationships between E_{c,k}^{(c)[PE]} and E_{c,k}^{(c)[QL]}.

When | E_{c,k}^{(c)[PE]} | = | E_{c,k}^{(c)[QL]} |, as shown by (5.15) and (5.17) in the proof of Theorem 5.1, E_{c,k}^{(c)[PE]} = E_{c,k}^{(c)[QL]} = 0 for all N_c > 0. Moreover, by (5.11), E_{j,k}^{(c)[PE]} = E_{j,k}^{(c)[QL]} for j \ne c. Hence, E_k^{(c)[PE]} = E_k^{(c)[QL]}, and the theorem is established in this case.

When E_{c,k}^{(c)[PE]} > E_{c,k}^{(c)[QL]} \ge 0 or E_{c,k}^{(c)[PE]} < E_{c,k}^{(c)[QL]} \le 0, by (5.11),

\lim_{N_c \to +\infty} E_{j,k}^{(c)[PE]} = \lim_{N_c \to +\infty} E_{j,k}^{(c)[QL]} = \lim_{N_c \to +\infty} [ Q_{j,k}(\vec{N}) - Q_{j,k}(\vec{N} - \vec{1}_c) ] = 0.   (5.47)

Hence,

\lim_{N_c \to +\infty} \sum_{j \ne c} E_{j,k}^{(c)[PE]} = \lim_{N_c \to +\infty} \sum_{j \ne c} E_{j,k}^{(c)[QL]} = 0,   (5.48)

\lim_{N_c \to +\infty} E_k^{(c)[PE]} = \lim_{N_c \to +\infty} E_{c,k}^{(c)[PE]},   (5.49)

and

\lim_{N_c \to +\infty} E_k^{(c)[QL]} = \lim_{N_c \to +\infty} E_{c,k}^{(c)[QL]}.   (5.50)

By the definition of limit, there exists an integer N^* such that if N_c > N^* then

E_k^{(c)[PE]} = E_{c,k}^{(c)[PE]}   (5.51)

and

E_k^{(c)[QL]} = E_{c,k}^{(c)[QL]}.   (5.52)

Therefore, by Theorem 5.1, this theorem is established. □

Theorems 5.1, 5.2, 5.3, and 5.4 show that the error yielded by the assumption of the QL algorithm is less than or equal to the error yielded by the assumption of the PE algorithm. Next we investigate error propagation.

5.1.2 Accuracies of the Solutions of the PE and QL Algorithms

It is obvious that the QL algorithm yields the exact solution for single class separable queueing networks with population 1 or 2, while the PE algorithm yields a higher mean response time and a lower throughput [17]. The following theorems show the relationship between the errors of the solutions of the PE and QL algorithms for general separable queueing networks. Let the superscripts [PE] and [QL] on the performance measures denote the quantities estimated by the PE and QL algorithms respectively.

Theorem 5.5 For single class separable queueing networks (i.e., C = 1),

| R_{c,k}^{[PE]}(\vec{N}) - R_{c,k}(\vec{N}) | \ge | R_{c,k}^{[QL]}(\vec{N}) - R_{c,k}(\vec{N}) |.   (5.53)  □

Proof. By (5.3) and (5.4),

Q_k^{[PE]}(\vec{N} - \vec{1}_c) = Q_k(\vec{N} - \vec{1}_c) + E_k^{(c)[PE]},   (5.54)

and

Q_k^{[QL]}(\vec{N} - \vec{1}_c) = Q_k(\vec{N} - \vec{1}_c) + E_k^{(c)[QL]}.   (5.55)

Therefore, by (2.1),

R_{c,k}^{[PE]}(\vec{N}) = D_{c,k} \left[ 1 + Q_k(\vec{N} - \vec{1}_c) + E_k^{(c)[PE]} \right],   (5.56)

and

R_{c,k}^{[QL]}(\vec{N}) = D_{c,k} \left[ 1 + Q_k(\vec{N} - \vec{1}_c) + E_k^{(c)[QL]} \right].   (5.57)

Hence,

| R_{c,k}^{[PE]}(\vec{N}) - R_{c,k}(\vec{N}) | = D_{c,k} | E_k^{(c)[PE]} |,   (5.58)

and

| R_{c,k}^{[QL]}(\vec{N}) - R_{c,k}(\vec{N}) | = D_{c,k} | E_k^{(c)[QL]} |.   (5.59)

By Theorem 5.2, the theorem is established.

Theorem 5.6 For multiple class separable queueing networks, when N_c = 1,

| R_{c,k}^{[PE]}(\vec{N}) - R_{c,k}(\vec{N}) | = | R_{c,k}^{[QL]}(\vec{N}) - R_{c,k}(\vec{N}) |.   (5.60)  □

Proof. By (5.58), (5.59), and Theorem 5.3, the theorem is established.

Theorem 5.7 For multiple class separable queueing networks, there exists an integer N^* such that if N_c > N^* then

| R_{c,k}^{[PE]}(\vec{N}) - R_{c,k}(\vec{N}) | \ge | R_{c,k}^{[QL]}(\vec{N}) - R_{c,k}(\vec{N}) |.   (5.61)  □

Proof. By (5.58), (5.59), and Theorem 5.4, the theorem is established.

Theorem 5.8 For single class separable queueing networks (i.e., C = 1),

R_c^{[PE]}(\vec{N}) \ge R_c^{[QL]}(\vec{N}) \ge R_c(\vec{N}).   (5.62)  □

Proof. By (2.2), (5.56), and (5.57),

R_c^{[PE]}(\vec{N}) = R_c(\vec{N}) + \sum_{k=1}^{K} D_{c,k} E_k^{(c)[PE]},   (5.63)

and

R_c^{[QL]}(\vec{N}) = R_c(\vec{N}) + \sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]}.   (5.64)


When C = 1, E_k^{(c)[PE]} = E_{c,k}^{(c)[PE]} and E_k^{(c)[QL]} = E_{c,k}^{(c)[QL]}. In the remainder of this proof, we will show that

\sum_{k=1}^{K} D_{c,k} E_k^{(c)[PE]} \ge \sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]} \ge 0   (5.65)

and

R_c^{[PE]}(\vec{N}) \ge R_c^{[QL]}(\vec{N}) \ge R_c(\vec{N})   (5.66)

for each of the following cases:

1. C = 1 and N_c = 1;
2. C = 1 and N_c > 1 and D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c = 0;
3. C = 1 and N_c > 1 and D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c > 0;
4. C = 1 and N_c > 1 and D_{c,i} \ne D_{c,j} for some i and j.

When C = 1 and N_c = 1, by (5.15) in the proof of Theorem 5.1,

E_k^{(c)[PE]} = E_k^{(c)[QL]} = 0, for all k = 1, ..., K.   (5.67)

Hence,

R_c^{[PE]}(\vec{N}) = R_c^{[QL]}(\vec{N}) = R_c(\vec{N}),   (5.68)

and the theorem holds.

Now consider the case when C = 1 and N_c > 1. By (5.17) in the proof of Theorem 5.1, when N_c > 1 and D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c = 0,

E_k^{(c)[PE]} = E_k^{(c)[QL]} = 0.   (5.69)

Hence, when C = 1 and N_c > 1 and D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c = 0,

\sum_{k=1}^{K} D_{c,k} E_k^{(c)[PE]} = \sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]} = 0,   (5.70)

and

R_c^{[PE]}(\vec{N}) = R_c^{[QL]}(\vec{N}) = R_c(\vec{N}).   (5.71)

Therefore, the theorem is established in this case.


By (5.24) in the proof of Theorem 5.1, when N_c > 1 and D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c > 0,

E_k^{(c)[PE]} > E_k^{(c)[QL]} > 0.   (5.72)

Hence, when C = 1 and N_c > 1 and D_{c,1} = D_{c,2} = ... = D_{c,K} and Z_c > 0,

\sum_{k=1}^{K} D_{c,k} E_k^{(c)[PE]} > \sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]} > 0,   (5.73)

and

R_c^{[PE]}(\vec{N}) > R_c^{[QL]}(\vec{N}) > R_c(\vec{N}).   (5.74)

Therefore, the theorem is established in this case.

When C = 1 and N_c > 1 and D_{c,i} \ne D_{c,j} for some i and j, let B denote the set of bottleneck service centers, whose load demand is W_c = \max_k \{ D_{c,k} \}, and let P denote the set of nonbottleneck service centers, whose load demand is less than W_c. By (5.63) and (5.64),

R_c^{[PE]}(\vec{N}) - R_c^{[QL]}(\vec{N}) = \sum_{k=1}^{K} D_{c,k} E_k^{(c)[PE]} - \sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]},   (5.75)

and

R_c^{[QL]}(\vec{N}) - R_c(\vec{N}) = \sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]}.   (5.76)

As shown by (5.32) and (5.34) in the proof of Theorem 5.1,

\sum_{k \in B} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right) > 0,   (5.77)

and

\sum_{k \in P} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right) < 0.   (5.78)

If

\sum_{k=1}^{K} E_k^{(c)[PE]} \ge \sum_{k=1}^{K} E_k^{(c)[QL]} \ge 0,   (5.79)

then

\sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]} \ge 0,   (5.80)

and

\sum_{k=1}^{K} E_k^{(c)[PE]} - \sum_{k=1}^{K} E_k^{(c)[QL]} = \sum_{k=1}^{K} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right) \ge 0.   (5.81)


Hence, if (5.79) holds, by (5.76),

R_c^{[QL]}(\vec{N}) - R_c(\vec{N}) \ge 0.   (5.82)

Also, if (5.79) holds, by (5.75), (5.77), (5.78), (5.81), and the definition of W_c,

R_c^{[PE]}(\vec{N}) - R_c^{[QL]}(\vec{N}) = \sum_{k=1}^{K} D_{c,k} E_k^{(c)[PE]} - \sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]} = \sum_{k=1}^{K} D_{c,k} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right)
  = \sum_{k \in B} D_{c,k} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right) + \sum_{k \in P} D_{c,k} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right)
  > W_c \sum_{k \in B} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right) + W_c \sum_{k \in P} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right)
  = W_c \sum_{k=1}^{K} \left( E_k^{(c)[PE]} - E_k^{(c)[QL]} \right) \ge 0.   (5.83)

Therefore, when C = 1 and N_c > 1 and D_{c,i} \ne D_{c,j} for some i and j, if

\sum_{k=1}^{K} E_k^{(c)[PE]} \ge \sum_{k=1}^{K} E_k^{(c)[QL]} \ge 0,   (5.84)

then

R_c^{[PE]}(\vec{N}) \ge R_c^{[QL]}(\vec{N}) \ge R_c(\vec{N}),   (5.85)

and the theorem is established.

To complete the proof, we only need to show that (5.84) holds when C = 1 and N_c > 1 and D_{c,i} \ne D_{c,j} for some i and j. When N_c > 1,

A_{c,k}^{(c)[PE]} = \frac{N_c - 1}{N_c} Q_{c,k}(\vec{N}),   (5.86)

and

A_{c,k}^{(c)[QL]} = \frac{N_c - 2}{N_c - 1} Q_{c,k}(\vec{N}) + \frac{1}{N_c - 1} Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c).   (5.87)

When C = 1,

\sum_{k=1}^{K} E_k^{(c)[PE]} - \sum_{k=1}^{K} E_k^{(c)[QL]} = \sum_{k=1}^{K} A_k^{(c)[PE]} - \sum_{k=1}^{K} A_k^{(c)[QL]} = \sum_{k=1}^{K} A_{c,k}^{(c)[PE]} - \sum_{k=1}^{K} A_{c,k}^{(c)[QL]}
  = \sum_{k=1}^{K} \left[ \frac{N_c - 1}{N_c} Q_{c,k}(\vec{N}) - \frac{N_c - 2}{N_c - 1} Q_{c,k}(\vec{N}) - \frac{1}{N_c - 1} Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c) \right]
  = \sum_{k=1}^{K} \left[ \frac{Q_{c,k}(\vec{N})}{N_c (N_c - 1)} - \frac{Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c)}{N_c - 1} \right]
  = \frac{1}{N_c - 1} \left[ \frac{\sum_{k=1}^{K} Q_{c,k}(\vec{N})}{N_c} - \sum_{k=1}^{K} Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c) \right]
  = \frac{1}{N_c - 1} \left[ \left( 1 - \frac{Z_c}{Z_c + R_c(\vec{N})} \right) - \left( 1 - \frac{Z_c}{Z_c + R_c(\vec{N} - (N_c - 1)\vec{1}_c)} \right) \right]   (by Proposition 4.3)
  = \frac{1}{N_c - 1} \left[ \frac{Z_c}{Z_c + R_c(\vec{N} - (N_c - 1)\vec{1}_c)} - \frac{Z_c}{Z_c + R_c(\vec{N})} \right].   (5.88)

Because R_c(\vec{N}) \ge R_c(\vec{N} - (N_c - 1)\vec{1}_c) for C = 1, the right-hand side of (5.88) is greater than or equal to 0. Hence,

\sum_{k=1}^{K} E_k^{(c)[PE]} \ge \sum_{k=1}^{K} E_k^{(c)[QL]}.   (5.89)

Moreover, when C = 1,

\sum_{k=1}^{K} E_k^{(c)[QL]} = \sum_{k=1}^{K} \left[ A_k^{(c)[QL]} - Q_k(\vec{N} - \vec{1}_c) \right] = \sum_{k=1}^{K} \left[ A_{c,k}^{(c)[QL]} - Q_{c,k}(\vec{N} - \vec{1}_c) \right]
  = \sum_{k=1}^{K} \left[ \frac{N_c - 2}{N_c - 1} Q_{c,k}(\vec{N}) + \frac{1}{N_c - 1} Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c) - Q_{c,k}(\vec{N} - \vec{1}_c) \right]
  = \sum_{k=1}^{K} \left[ Q_{c,k}(\vec{N}) - Q_{c,k}(\vec{N} - \vec{1}_c) - \frac{Q_{c,k}(\vec{N}) - Q_{c,k}(\vec{N} - (N_c - 1)\vec{1}_c)}{N_c - 1} \right]
  = [N_c - X_c(\vec{N}) Z_c] - [N_c - 1 - X_c(\vec{N} - \vec{1}_c) Z_c] - \frac{[N_c - X_c(\vec{N}) Z_c] - [1 - X_c(\vec{N} - (N_c - 1)\vec{1}_c) Z_c]}{N_c - 1}   (by Proposition 4.3)
  = Z_c \left\{ \frac{X_c(\vec{N}) - X_c(\vec{N} - (N_c - 1)\vec{1}_c)}{N_c - 1} - [X_c(\vec{N}) - X_c(\vec{N} - \vec{1}_c)] \right\}.   (5.90)

As shown in Figures 5.1 and 5.7, it is easy to verify that the right-hand side of (5.90) is greater than or equal to 0. Hence,

\sum_{k=1}^{K} E_k^{(c)[QL]} \ge 0.   (5.91)

Therefore, by (5.89) and (5.91), the proof is complete. □


Figure 5.7: Plot of the function X_c(\vec{n}) with respect to n_c, and the relationship between \frac{X_c(\vec{N}) - X_c(\vec{N} - (N_c - 1)\vec{1}_c)}{N_c - 1} and [X_c(\vec{N}) - X_c(\vec{N} - \vec{1}_c)]

Corollary 5.1 For single class separable queueing networks (i.e., C = 1),

| R_c^{[PE]}(\vec{N}) - R_c(\vec{N}) | \ge | R_c^{[QL]}(\vec{N}) - R_c(\vec{N}) |.   (5.92)  □

Theorem 5.9 For single class separable queueing networks (i.e., C = 1),

X_c^{[PE]}(\vec{N}) \le X_c^{[QL]}(\vec{N}) \le X_c(\vec{N}).   (5.93)  □

Proof. The proof follows from (2.4) and Theorem 5.8.


Corollary 5.2 For single class separable queueing networks (i.e., C = 1),

| X_c^{[PE]}(\vec{N}) - X_c(\vec{N}) | \ge | X_c^{[QL]}(\vec{N}) - X_c(\vec{N}) |.   (5.94)  □

Theorem 5.10 For multiple class separable queueing networks, there exist integers N_c^* for c = 1, ..., C, such that if N_c > N_c^* for all c = 1, ..., C, then

R_c^{[PE]}(\vec{N}) \ge R_c^{[QL]}(\vec{N}) \ge R_c(\vec{N}).   (5.95)  □

Proof. By (2.2), (5.56), and (5.57),

R_c^{[PE]}(\vec{N}) = R_c(\vec{N}) + \sum_{k=1}^{K} D_{c,k} E_k^{(c)[PE]},   (5.96)

and

R_c^{[QL]}(\vec{N}) = R_c(\vec{N}) + \sum_{k=1}^{K} D_{c,k} E_k^{(c)[QL]}.   (5.97)

Because

\lim_{N_c \to +\infty} E_{j,k}^{(c)[PE]} = \lim_{N_c \to +\infty} E_{j,k}^{(c)[QL]} = \lim_{N_c \to +\infty} [ Q_{j,k}(\vec{N}) - Q_{j,k}(\vec{N} - \vec{1}_c) ] = 0   (5.98)

for j \ne c,

\lim_{N_c \to +\infty} E_k^{(c)[PE]} = \lim_{N_c \to +\infty} \left[ E_{c,k}^{(c)[PE]} + \sum_{j \ne c} E_{j,k}^{(c)[PE]} \right] = \lim_{N_c \to +\infty} E_{c,k}^{(c)[PE]}   (5.99)

and

\lim_{N_c \to +\infty} E_k^{(c)[QL]} = \lim_{N_c \to +\infty} \left[ E_{c,k}^{(c)[QL]} + \sum_{j \ne c} E_{j,k}^{(c)[QL]} \right] = \lim_{N_c \to +\infty} E_{c,k}^{(c)[QL]}   (5.100)

for all c = 1, ..., C.

The rest of the proof is identical to the proof of Theorem 5.8, because the entire proof of Theorem 5.8 relies only on the relationship between E_{c,k}^{(c)[PE]} and E_{c,k}^{(c)[QL]}, together with the fact that E_k^{(c)[PE]} = E_{c,k}^{(c)[PE]} and E_k^{(c)[QL]} = E_{c,k}^{(c)[QL]} when C = 1.

Corollary 5.3 For multiple class separable queueing networks, there exist integers N_c^* for c = 1, ..., C, such that if N_c > N_c^* for all c = 1, ..., C, then

| R_c^{[PE]}(\vec{N}) - R_c(\vec{N}) | \ge | R_c^{[QL]}(\vec{N}) - R_c(\vec{N}) |.   (5.101)  □

Theorem 5.11 For multiple class separable queueing networks, there exist integers N_c^* for c = 1, ..., C, such that if N_c > N_c^* for all c = 1, ..., C, then

X_c^{[PE]}(\vec{N}) \le X_c^{[QL]}(\vec{N}) \le X_c(\vec{N}).   (5.102)  □

Proof. The proof follows from (2.4) and Theorem 5.10.

Corollary 5.4 For multiple class separable queueing networks, there exist integers N_c^* for c = 1, ..., C, such that if N_c > N_c^* for all c = 1, ..., C, then

| X_c^{[PE]}(\vec{N}) - X_c(\vec{N}) | \ge | X_c^{[QL]}(\vec{N}) - X_c(\vec{N}) |.   (5.103)  □

By Theorems 5.8, 5.9, 5.10, and 5.11, for single class separable queueing networks or multiple class separable queueing networks with large populations, the solutions of both the QL and PE algorithms are pessimistic relative to the exact solution. In particular, both the QL and PE algorithms yield a higher mean system response time and a lower throughput, and the solution of the QL algorithm is more accurate than that of the PE algorithm. These results are consistent with the analytic results obtained by Eager and Sevcik for the PE algorithm [17].
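These orderings can be checked numerically. The sketch below (illustrative code, not the thesis's implementation) solves a small unbalanced single-class network exactly and by PE and QL fixed-point iterations:

```python
# Sketch: single-class PE (Bard-Schweitzer) and QL iterations versus exact
# MVA, illustrating R_PE >= R_QL >= R_exact and X_PE <= X_QL <= X_exact
# (Theorems 5.8 and 5.9).  Illustrative code only.

def exact_mva(D, Z, N):
    Q = [0.0] * len(D)
    for n in range(1, N + 1):
        R_k = [d * (1.0 + q) for d, q in zip(D, Q)]
        X = n / (Z + sum(R_k))
        Q = [X * r for r in R_k]
    return sum(R_k), X

def solve_fixed_point(D, Z, N, residence, tol=1e-12):
    Q = [N / len(D)] * len(D)          # common initial queue-length estimate
    while True:
        R_k = residence(D, Z, N, Q)
        X = N / (Z + sum(R_k))
        Q_new = [X * r for r in R_k]
        if max(abs(a - b) for a, b in zip(Q, Q_new)) < tol:
            return sum(R_k), X
        Q = Q_new

def pe_residence(D, Z, N, Q):
    # PE assumption: Q_k(N - 1) is approximated by (N - 1)/N * Q_k(N)
    return [d * (1.0 + (N - 1) / N * q) for d, q in zip(D, Q)]

def ql_residence(D, Z, N, Q):
    # QL assumption: interpolate between Q_k(1) = D_k/(D + Z) and Q_k(N)
    Dtot = sum(D)
    return [d * (1.0 + (N - 2) / (N - 1) * q + d / ((N - 1) * (Dtot + Z)))
            for d, q in zip(D, Q)]

D, Z, N = [2.0, 1.0], 0.0, 3
R_exact, X_exact = exact_mva(D, Z, N)
R_pe, X_pe = solve_fixed_point(D, Z, N, pe_residence)
R_ql, X_ql = solve_fixed_point(D, Z, N, ql_residence)
```

For D = (2, 1), Z = 0, and N = 3 this gives R_PE ≈ 6.56 > R_QL ≈ 6.48 > R_exact = 45/7 ≈ 6.43.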

5.1.3 Accuracy of the FL Algorithm

When N_c = 1 for some c = 1, ..., C, the FL algorithm yields the same performance measures for class c as the exact MVA, PE, and QL algorithms at every service center k = 1, ..., K. Moreover, when N_c = 1 for all c = 1, ..., C, the FL algorithm yields the same solution as the PE and QL algorithms. Also, for single class separable queueing networks with population 2, the FL algorithm yields the same solution as the exact MVA and QL algorithms; thus, it is more accurate than the PE algorithm. However, in general, when N_c > 1, the behavior of the solution of the FL algorithm is complicated and is highly dependent on the network parameters. Because we feel that the QL algorithm is generally superior to the FL algorithm, we do not pursue further analytic results on the FL algorithm.

5.2 Experimental Results on the Accuracies of the PE, QL, and FL Algorithms

We have not obtained analytic results on

1. the magnitude of the errors yielded by the PE, QL, and FL algorithms,
2. the behavior of the solutions of the QL and PE algorithms for multiple class separable queueing networks with relatively small populations,
3. the behavior of the solution of the FL algorithm.

In this section, we attempt to obtain information about these topics through extensive experimentation. In each of these experiments, two thousand random networks were generated and solved by each of the three approximate algorithms, and by the exact MVA algorithm. Statistics were gathered and analyzed during each run. The following are the specific error measures we have employed in these experiments:

\Delta_X = the mean absolute relative error in the approximate network throughput of each class, defined as follows:

\Delta_X = \frac{1}{C} \sum_{c=1}^{C} \frac{| X_c - X_c^* |}{X_c^*},   (5.104)

where X_c is the approximate throughput of class c, and X_c^* is the exact value.

\Delta_R = the mean absolute relative error in the approximate system response time of each class, defined as follows:

\Delta_R = \frac{1}{C} \sum_{c=1}^{C} \frac{| R_c - R_c^* |}{R_c^*},   (5.105)

where R_c is the approximate response time of class c, and R_c^* is the exact value.

\Delta_Q = the mean absolute relative error in the queue lengths of each class at each center, defined as follows:

\Delta_Q = \frac{1}{KC} \sum_{c=1}^{C} \sum_{k=1}^{K} \frac{| Q_{c,k} - Q_{c,k}^* |}{Q_{c,k}^*},   (5.106)

where Q_{c,k} is the approximate queue length of class c at center k, and Q_{c,k}^* is the exact value.

\delta_Q = the maximum absolute relative error in the queue lengths of each class at each center, defined as follows:

\delta_Q = \max_{c,k} \frac{| Q_{c,k} - Q_{c,k}^* |}{Q_{c,k}^*},   (5.107)

where Q_{c,k} is the approximate queue length of class c at center k, and Q_{c,k}^* is the exact value.

For single class separable queueing networks, we define two more error measures in addition to the error measures defined above:

\epsilon_X = the signed relative error of the approximate network throughput for single class separable queueing networks, defined as follows:

\epsilon_X = \frac{X - X^*}{X^*},   (5.108)

where X is the approximate throughput, and X^* is the exact throughput for single class separable queueing networks.

\epsilon_R = the signed relative error of the approximate system response time for single class separable queueing networks, defined as follows:

\epsilon_R = \frac{R - R^*}{R^*},   (5.109)

where R is the approximate response time, and R^* is the exact response time for single class separable queueing networks.

For convenience, we also define

\bar{M} = the sample mean of M in all trials of an experiment, where M is one of the error measures defined above.

S_M = the sample standard deviation of M in all trials of an experiment, where M is one of the error measures defined above.

max(\delta_Q) = the maximum value of \delta_Q in all trials of an experiment.

Since the error measures of one approximate MVA algorithm are paired with the measures of another algorithm in our experiments, we further define

\bar{D}_M = the mean of the difference between two corresponding Ms of two algorithms in all trials of an experiment, where M is one of the previously defined error measures.

Sd_M = the standard deviation of the difference between two corresponding Ms of two algorithms in all trials of an experiment, where M is one of the previously defined error measures.
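As a concrete sketch (illustrative names), the measure (5.104) and the observed t statistic used in the hypothesis tests of this section can be computed as follows:

```python
import math

# Sketch of the error measure (5.104) and of the observed t statistic used in
# the paired comparisons of Section 5.2.  Names are illustrative.

def delta_x(X_approx, X_exact):
    """Mean absolute relative throughput error over classes, eq. (5.104)."""
    C = len(X_exact)
    return sum(abs(a - e) / e for a, e in zip(X_approx, X_exact)) / C

def t_statistic(diffs):
    """Observed value of t = mean(d) / (s_d / sqrt(n)) for paired differences d."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)
```

With the sample values reported for Experiment 1 (mean difference -0.0040, standard deviation 0.0033, n = 2000), the formula reproduces the observed value of about -54.2.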

5.2.1 Accuracies of the PE, QL, and FL Algorithms for Single Class Separable Queueing Networks

In order to determine the magnitude of the errors of the PE, QL, and FL algorithms for single class separable queueing networks, and to determine the behavior of the solution of the FL algorithm for single class separable queueing networks, five experiments for single class separable queueing networks were performed. The parameters used to generate the random networks in each experiment are given in Table 5.1, and the resulting statistics on error measures are shown in Tables 5.3 through 5.12 in Appendix 5.A.

In Experiments 1 and 2, we are interested in the accuracies of the three approximate MVA algorithms for single class separable queueing networks with small populations and large populations. In Experiment 1, the population size is relatively small, and the average errors in throughput and system response time for all three approximate MVA algorithms are around 2% or less. In Experiment 2, the population size is relatively large, and the average errors in throughput and system response time for all three algorithms are less than 1%.

Parameter                        Experiments   Value
Population size (N)              1             Uniform(2,10)
                                 2             Uniform(20,100)
                                 3,4,5         Uniform(2,100)
Number of centers (K)            1,2,3         Uniform(2,10)
                                 4             Uniform(50,100)
                                 5             Uniform(100,200)
Loadings (D_k)                   All           Uniform(0.1,20.0)
Think time of customers (Z)      All           Uniform(0.0,100.0)
Server discipline                All           Load independent
Number of trials (samples)       All           2000

Table 5.1: Parameters for generating single class separable queueing networks

In Experiments 3, 4, and 5, we are interested in the accuracies of the three algorithms for single class separable queueing networks with a small number of centers and a large number of centers. In Experiment 3, the number of centers is relatively small, and the average errors in throughput and system response time for all three algorithms are on average 1% or less. In Experiments 4 and 5, the number of centers is relatively large, and the average errors in throughput and system response time for all three algorithms are less than 0.2%.

Moreover, we find that the sign of the errors of the PE and QL algorithms is consistent across all trials in each experiment. Hence, the average signed and absolute relative errors have the same magnitude for the PE and QL algorithms in all experiments.

Using hypothesis testing techniques [6, 26], we can further reveal the relationships of the PE, QL, and FL algorithms with respect to accuracy based on the experimental results. Since the sample size (2000) is large, by the Central Limit Theorem [26] and the robustness property of the T test [6], the T test is used for hypothesis testing. For example, to determine whether the PE algorithm, on the average, yields a lower throughput than the

Table 5.1: Parameters for generating single class separable queueing networks In Experiments 3, 4, and 5, we are interested in the accuracies of the three algorithms for single class separable queueing networks with small number of centers and large number of centers. In Experiment 3, the number of centers is relatively small, and average errors in throughput and system response time for all three algorithms are on average 1% or less. In Experiments 4 and 5, the number of centers is relatively large, and average errors in throughput and system response time for all three algorithms are less than 0:2%. Moreover, we nd that the sign of errors of the PE and QL algorithms is consistent across all trials in each experiment. Hence, the average signed and absolute relative errors have the same magnitude for the PE and QL algorithms in all experiments. Using hypothesis testing techniques [6, 26], we can further reveal the relationships of the PE, QL and FL algorithms with respect to accuracy based on the experimental results. Since the sample size (2000) is large, by the Central Limit Theorem [26] and the robustness property of the T test [6], the T test is used for hypothesis testing. For example, to determine whether the PE algorithm, on the average, yields a lower throughput than the


exact MVA algorithm for single class separable queueing networks with parameters given in Experiment 1, we intend to test

H_0: \mu = 0    versus    H_1: \mu < 0,

where \mu is the mean of all possible values of \epsilon_X yielded by the PE algorithm for all possible networks with parameters given in Experiment 1. From our experimental data in Table 5.3 in Appendix 5.A, the observed value of the test statistic is

\frac{\bar{\epsilon}_X - 0}{S/\sqrt{2000}} = \frac{-0.0170 - 0}{0.0135/\sqrt{2000}} = -56.3.

Based on the T_{2000-1} = T_{1999} \approx T_{\infty} distribution, the P value for this test is much less than 0.0002 (t_{.0002} = -3.49). Since this probability is very small, we reject H_0 and conclude that, on the average, the PE algorithm yields a lower throughput than the exact MVA algorithm for single class separable queueing networks with parameters given in Experiment 1. To determine whether the QL algorithm, on the average, yields a more accurate throughput than the PE algorithm for single class separable queueing networks with parameters given in Experiment 1, we intend to test

H_0: \mu_1 = \mu_2    versus    H_1: \mu_1 < \mu_2,

where \mu_1 is the mean of |\epsilon_X| yielded by the QL algorithm, and \mu_2 is the mean of |\epsilon_X| yielded by the PE algorithm for all possible networks with parameters given in Experiment 1. Thus, this is equivalent to testing

H_0: \mu_D = 0    versus    H_1: \mu_D < 0,

where \mu_D is the mean of the difference between |\epsilon_X| yielded by the QL algorithm and |\epsilon_X| yielded by the PE algorithm for all possible networks with parameters given in Experiment 1. From our experimental data in Table 5.4 in Appendix 5.A, the observed value


of the test statistic is

\frac{\bar{D} - 0}{S_D/\sqrt{2000}} = \frac{-0.0040 - 0}{0.0033/\sqrt{2000}} = -54.2.

Based on the T_{1999} distribution, the P value for this test is much less than 0.0002. Since this probability is very small, we reject H_0 and conclude that, on the average, the QL algorithm yields a more accurate throughput than the PE algorithm for single class separable queueing networks with parameters given in Experiment 1. By applying the same statistical testing procedure, we can conclude that, in each experiment,


1. The solutions of both the PE and QL algorithms are pessimistic relative to those of the exact MVA algorithm. In particular, both algorithms yield a higher mean system response time and a lower throughput.

2. The QL algorithm is more accurate than the PE algorithm.

3. The solution of the FL algorithm tends to be optimistic relative to that of the exact MVA algorithm. In particular, it tends to yield a lower mean system response time and a higher throughput. However, networks for which the FL algorithm yields a higher mean system response time and a lower throughput have been observed.

4. The FL algorithm tends to be more accurate than the PE and QL algorithms when the population size is small (i.e., in Experiment 1) and less accurate than the PE and QL algorithms when the population size is large (i.e., in Experiment 2).

5. The FL algorithm tends to be less accurate than the PE and QL algorithms when the number of centers is small (i.e., in Experiment 3) and more accurate than the PE and QL algorithms when the number of centers is large (i.e., in Experiments 4 and 5).

These experimental results are consistent with the analytic results presented in Section 5.1. Also, from these experimental results, we observed that, given a network, as the population increases, the accuracy of the FL algorithm degrades. Moreover, given the network population, when the number of centers increases, the accuracy of the FL


algorithm increases, and the FL algorithm yields more accurate solutions than the other two algorithms. The reason for these phenomena is that when the network is less congested, the queue length at each center is small, and the approximations of the PE and QL algorithms are not very accurate; in this case, the approximation of the FL algorithm is a good approximation of the reality. When the network is congested, the queue length at each center is large, and the assumptions of both the PE and QL algorithms yield better approximations than that of the FL algorithm.
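The hypothesis tests of this section can be reproduced from the summary statistics alone. A sketch, using the Experiment 1 values reported in Tables 5.3 and 5.4 (the helper name is ours):

```python
import math

def one_sample_t(mean, s, n):
    """t statistic for H0: mu = 0 against H1: mu < 0,
    given the sample mean, sample standard deviation, and sample size."""
    return (mean - 0.0) / (s / math.sqrt(n))

# Experiment 1, n = 2000 trials.
t_pe = one_sample_t(-0.0170, 0.0135, 2000)    # mean signed eps_X for PE vs 0
# The paired comparison of QL against PE tests the per-trial differences
# |eps_X(QL)| - |eps_X(PE)|; with summary statistics it reduces to the
# same one-sample formula applied to the mean difference.
t_diff = one_sample_t(-0.0040, 0.0033, 2000)

print(round(t_pe, 1), round(t_diff, 1))       # -56.3 and -54.2, as in the text
```

Both statistics fall far below the 0.0002-level critical value of -3.49, so H_0 is rejected in each case.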

5.2.2 Accuracies of the PE, QL, and FL Algorithms for Multiple Class Separable Queueing Networks

In order to determine the accuracies of the PE, QL, and FL algorithms for multiple class separable queueing networks with relatively small populations, a number of experiments for multiple class separable queueing networks were performed. In these experiments, two thousand random networks were generated and solved by each of the three approximate algorithms and by the exact MVA algorithm for each number of classes from one to four. The parameters used to generate the random networks are given in Table 5.2. The resulting statistics on error measures are shown in Table 5.13 in Appendix 5.A. By applying the same statistical testing procedure as in Section 5.2.1, we found that

1. The accuracies of all three approximate MVA algorithms decrease as the number of classes increases.

2. The FL algorithm is the most accurate, and the PE algorithm is the least accurate, among the three algorithms for multiple class separable queueing networks with relatively small populations.

However, these conclusions are only valid for small networks with a small number of classes and small customer populations. Although we wish to gain some insight into the behavior of the three approximate MVA algorithms for larger networks with more classes, the execution time of obtaining the exact solution of such networks prevented us from doing so.
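The exact multiclass MVA recursion must evaluate every population vector dominated by the full population, so its cost grows as the product of (N_c + 1) over the classes; this growth is why exact solutions of larger networks with more classes were impractical in these experiments. A quick illustration of the growth (straightforward arithmetic, not code from the thesis):

```python
from math import prod

def mva_population_vectors(populations):
    """Number of population vectors (0..N_c customers per class) that the
    exact multiclass MVA recursion must evaluate."""
    return prod(n + 1 for n in populations)

# Ten customers per class, one to six classes:
for classes in range(1, 7):
    print(classes, mva_population_vectors([10] * classes))
# Four classes of ten customers already require 11**4 = 14641 vectors,
# and each vector carries per-center, per-class state.
```

By contrast, each iteration of the PE, QL, or FL fixed point touches only the full-population vector, which is what makes the approximate algorithms usable at scales where exact MVA is not.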


Server discipline:              Load independent
Population size (Nc):           Class 1: Uniform(1,10)
                                Class 2: Uniform(1,10)
                                Class 3: Uniform(1,10)
                                Class 4: Uniform(1,10)
Number of centers (K):          Uniform(2,10)
Loadings (Dc,k):                Uniform(0.1,20.0)
Think time of customers (Zc):   Uniform(0.0,100.0)
Number of trials (samples):     2000

Table 5.2: Parameters for generating multiple class separable queueing networks

5.3 Summary

In this chapter, the PE, QL, and FL algorithms have been compared with respect to accuracy. We have shown that

1. The solutions of both the PE and QL algorithms are pessimistic relative to those of the exact MVA algorithm for single class separable queueing networks, and for multiple class separable queueing networks with large populations. In particular, both algorithms yield a higher mean system response time and a lower throughput.

2. The solution of the QL algorithm is more accurate than that of the PE algorithm for single class separable queueing networks, and for multiple class separable queueing networks with large populations.

Also, based on the experimental results, we have shown that

1. The solution of the QL algorithm is more accurate than that of the PE algorithm for multiple class separable queueing networks with small populations.

2. The solution of the FL algorithm tends to be optimistic relative to that of the exact MVA algorithm for single class separable queueing networks. In particular, it tends


to yield a lower mean system response time and a higher throughput for single class separable queueing networks. However, networks for which the FL algorithm yields a higher mean system response time and a lower throughput have been observed.

3. The solution of the FL algorithm is more accurate than those of the PE and QL algorithms for noncongested networks, and is less accurate than those of the PE and QL algorithms for congested networks.

Appendix 5.A

This appendix contains the statistical results of all experiments. In the tables below, eps_X, eps_R, and eps_Q denote the relative errors in throughput, mean system response time, and mean center queue length, respectively; "Mean" denotes a sample mean and "S" the corresponding sample standard deviation over the 2000 trials.

Measure              PE         QL         FL
Mean eps_X        -0.0170    -0.0130     0.0008
S of eps_X         0.0135     0.0119     0.0050
Mean |eps_X|       0.0170     0.0130     0.0026
S of |eps_X|       0.0135     0.0119     0.0043
Mean eps_R         0.0289     0.0221    -0.0001
S of eps_R         0.0278     0.0244     0.0078
Mean |eps_R|       0.0289     0.0221     0.0041
S of |eps_R|       0.0278     0.0244     0.0067
Mean |eps_Q|       0.0295     0.0234     0.0056
S of |eps_Q|       0.0245     0.0220     0.0075
max |eps_Q|        0.1540     0.1368     0.0955

Table 5.3: Statistical results of the three algorithms for single class separable queueing networks in Experiment 1

Measure               QL-PE      FL-PE      FL-QL
Mean D, |eps_X|     -0.0040    -0.0144    -0.0104
S_D,  |eps_X|        0.0033     0.0120     0.0105
Mean D, |eps_R|     -0.0068    -0.0249    -0.0181
S_D,  |eps_R|        0.0059     0.0238     0.0203
Mean D, |eps_Q|     -0.0061    -0.0239    -0.0178
S_D,  |eps_Q|        0.0048     0.0194     0.0169

Table 5.4: Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 1

Measure              PE         QL         FL
Mean eps_X        -0.0053    -0.0050     0.0078
S of eps_X         0.0049     0.0047     0.0037
Mean |eps_X|       0.0053     0.0050     0.0079
S of |eps_X|       0.0049     0.0047     0.0036
Mean eps_R         0.0061     0.0057    -0.0083
S of eps_R         0.0073     0.0070     0.0047
Mean |eps_R|       0.0061     0.0057     0.0085
S of |eps_R|       0.0073     0.0070     0.0043
Mean |eps_Q|       0.0388     0.0375     0.0395
S of |eps_Q|       0.0239     0.0236     0.0207
max |eps_Q|        0.1730     0.1715     0.3195

Table 5.5: Statistical results of the three algorithms for single class separable queueing networks in Experiment 2

Measure               QL-PE         FL-PE         FL-QL
Mean D, |eps_X|    -3.20x10^-4    0.0026        0.0029
S_D,  |eps_X|       3.08x10^-4    0.0063        0.0061
Mean D, |eps_R|    -3.65x10^-4    0.0024        0.0028
S_D,  |eps_R|       3.93x10^-4    0.0075        0.0074
Mean D, |eps_Q|    -1.30x10^-3    7.10x10^-4    0.0020
S_D,  |eps_Q|       8.48x10^-4    0.0348        0.0346

Table 5.6: Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 2

Measure              PE         QL         FL
Mean eps_X        -0.0077    -0.0069     0.0070
S of eps_X         0.0085     0.0076     0.0048
Mean |eps_X|       0.0077     0.0069     0.0073
S of |eps_X|       0.0085     0.0076     0.0043
Mean eps_R         0.0101     0.0090    -0.0073
S of eps_R         0.0145     0.0129     0.0061
Mean |eps_R|       0.0101     0.0090     0.0081
S of |eps_R|       0.0145     0.0129     0.0051
Mean |eps_Q|       0.0391     0.0371     0.0343
S of |eps_Q|       0.0244     0.0239     0.0220
max |eps_Q|        0.1730     0.1716     0.3195

Table 5.7: Statistical results of the three algorithms for single class separable queueing networks in Experiment 3

Measure               QL-PE          FL-PE          FL-QL
Mean D, |eps_X|    -7.96x10^-4    -3.80x10^-4     4.16x10^-4
S_D,  |eps_X|       0.0015         0.0100         0.0091
Mean D, |eps_R|    -0.0011        -0.0020        -8.79x10^-4
S_D,  |eps_R|       0.0025         0.0147         0.0053
Mean D, |eps_Q|    -0.0020        -0.0048        -0.0028
S_D,  |eps_Q|       0.0023         0.0346         0.0339

Table 5.8: Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 3

Measure              PE            QL            FL
Mean eps_X        -0.0014       -0.0014        1.34x10^-4
S of eps_X         9.59x10^-4    9.54x10^-4    2.15x10^-4
Mean |eps_X|       0.0014        0.0014        1.62x10^-4
S of |eps_X|       9.59x10^-4    9.54x10^-4    1.95x10^-4
Mean eps_R         0.0015        0.0014       -1.38x10^-4
S of eps_R         9.93x10^-4    9.89x10^-4    2.22x10^-4
Mean |eps_R|       0.0015        0.0014        1.68x10^-4
S of |eps_R|       9.93x10^-4    9.89x10^-4    2.01x10^-4
Mean |eps_Q|       0.0041        0.0040        0.0010
S of |eps_Q|       0.0032        0.0032        0.0011
max |eps_Q|        0.1742        0.1733        0.1034

Table 5.9: Statistical results of the three algorithms for single class separable queueing networks in Experiment 4

Measure               QL-PE          FL-PE          FL-QL
Mean D, |eps_X|    -3.66x10^-5    -0.0013        -0.0012
S_D,  |eps_X|       1.95x10^-5     8.46x10^-4     8.41x10^-4
Mean D, |eps_R|    -3.86x10^-5    -0.0013        -0.0013
S_D,  |eps_R|       2.13x10^-5     8.78x10^-4     8.72x10^-4
Mean D, |eps_Q|    -8.30x10^-5    -0.0031        -0.0030
S_D,  |eps_Q|       3.66x10^-5     0.0021         0.0021

Table 5.10: Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 4

Measure              PE            QL            FL
Mean eps_X        -4.32x10^-4   -4.22x10^-4    3.71x10^-5
S of eps_X         2.78x10^-4    2.76x10^-4    3.71x10^-5
Mean |eps_X|       4.32x10^-4    4.22x10^-4    3.73x10^-5
S of |eps_X|       2.78x10^-4    2.76x10^-4    3.69x10^-5
Mean eps_R         4.43x10^-4    4.32x10^-4   -3.80x10^-5
S of eps_R         2.84x10^-4    2.83x10^-4    3.80x10^-5
Mean |eps_R|       4.43x10^-4    4.32x10^-4    3.82x10^-5
S of |eps_R|       2.84x10^-4    2.83x10^-4    3.77x10^-5
Mean |eps_Q|       0.0011        0.0011        1.62x10^-4
S of |eps_Q|       7.87x10^-4    7.83x10^-4    1.86x10^-4
max |eps_Q|        0.0252        0.0250        0.0102

Table 5.11: Statistical results of the three algorithms for single class separable queueing networks in Experiment 5

Measure               QL-PE          FL-PE          FL-QL
Mean D, |eps_X|    -1.02x10^-5    -3.95x10^-4    -3.85x10^-4
S_D,  |eps_X|       6.33x10^-6     2.48x10^-4     2.47x10^-4
Mean D, |eps_R|    -1.05x10^-5    -4.04x10^-4    -3.94x10^-4
S_D,  |eps_R|       6.49x10^-6     2.55x10^-4     2.53x10^-4
Mean D, |eps_Q|    -2.27x10^-5    -9.25x10^-4    -9.02x10^-4
S_D,  |eps_Q|       1.04x10^-5     6.09x10^-4     6.04x10^-4

Table 5.12: Statistical results for the algorithm-pairs for single class separable queueing networks in Experiment 5


Algorithm  Measure         1 class   2 classes   3 classes   4 classes
PE         Mean |eps_X|     0.40%     0.73%       0.82%       0.90%
           S of |eps_X|     0.74%     0.98%       0.96%       0.84%
           Mean |eps_R|     0.59%     0.99%       1.04%       1.09%
           S of |eps_R|     1.28%     1.60%       1.42%       1.17%
           Mean |eps_Q|     0.69%     1.23%       1.45%       1.64%
           S of |eps_Q|     1.38%     1.82%       1.94%       1.88%
           max |eps_Q|     15.19%    14.82%      14.63%      16.20%

QL         Mean |eps_X|     0.31%     0.67%       0.79%       0.88%
           S of |eps_X|     0.61%     0.91%       0.72%       0.83%
           Mean |eps_R|     0.45%     0.91%       0.99%       1.06%
           S of |eps_R|     1.06%     1.47%       1.35%       1.13%
           Mean |eps_Q|     0.55%     1.15%       1.40%       1.61%
           S of |eps_Q|     1.17%     1.73%       1.89%       1.85%
           max |eps_Q|     13.13%    14.41%      14.49%      16.12%

FL         Mean |eps_X|     0.05%     0.47%       0.67%       0.80%
           S of |eps_X|     0.18%     0.67%       0.80%       0.76%
           Mean |eps_R|     0.06%     0.63%       0.84%       0.96%
           S of |eps_R|     0.26%     1.05%       1.17%       1.04%
           Mean |eps_Q|     0.11%     0.83%       1.20%       1.47%
           S of |eps_Q|     0.37%     1.38%       1.70%       1.74%
           max |eps_Q|     10.82%    11.98%      13.64%      15.62%

Table 5.13: Summary of results of accuracy tests of each algorithm for multiple class separable queueing networks

Chapter 6

Conclusions and Directions for Further Research

This chapter summarizes the thesis contents and suggests directions for future research.

6.1 Conclusions

The cost of using an exact solution algorithm to solve separable queueing networks may be prohibitive as the number of classes, customers, and centers grows. Approximate MVA algorithms can be used to reduce this cost, but they do not provide exact solutions to the models. This thesis has proposed two new approximate MVA algorithms, the Queue Line (QL) algorithm and the Fraction Line (FL) algorithm. Both the QL and FL algorithms have the same computational costs as the Bard-Schweitzer Proportional Estimation (PE) algorithm, the most popular approximate MVA algorithm. Based on the theoretical and experimental results, the QL algorithm is more accurate than the PE algorithm. As with the PE algorithm, the solution of the QL algorithm exists and is unique. Moreover, like the PE algorithm, the QL algorithm monotonically converges to that solution when a particular initialization procedure is used. Therefore, the QL algorithm dominates the PE algorithm in the spectrum of MVA algorithms that trade off accuracy and efficiency. The FL algorithm is more accurate than the QL and PE algorithms for noncongested


separable queueing networks, and is less accurate than the QL and PE algorithms for congested separable queueing networks. Like the PE algorithm and the Linearizer algorithm (another popular approximate MVA algorithm), both the QL and FL algorithms are iterative algorithms. Analytic results about the existence and uniqueness of a solution, and about the convergence to that solution, have been obtained for the QL and FL algorithms for single class separable queueing networks. The existence of a solution of the Linearizer algorithm has also been established for single class separable queueing networks. The rates of convergence of the PE, Linearizer, QL, and FL algorithms have been obtained for various implementations of these algorithms. The advantages and disadvantages of each implementation method for these iterative approximate MVA algorithms have been investigated. Each implementation method has its own merits and demerits. Generally, the successive substitution implementation is reliable and always yields correct solutions, while the Newton's technique implementations are typically faster than the successive substitution implementation. In the worst case, however, the Newton's technique implementations are slower than the successive substitution implementation. Moreover, because not all separable queueing networks meet the subtle requirements of Newton's techniques, Newton's techniques may fail to converge or may converge to non-physical solutions.
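The contrast between the two implementation styles can be seen on a toy fixed point. The sketch below (an illustration of the two generic methods, not code from the thesis) solves the scalar PE-style fixed point q = f(q) for a single load independent center with demand D = 1, think time Z = 1, and population N = 5, once by successive substitution and once by Newton's method on g(q) = q - f(q) with a numerical derivative.

```python
def f(q, n=5, d=1.0, z=1.0):
    """PE-style fixed point map for one center:
    R = D*(1 + (n-1)/n * q), X = n/(Z + R), q' = X*R."""
    r = d * (1.0 + (n - 1) / n * q)
    return (n / (z + r)) * r

def successive_substitution(q=0.0, tol=1e-10, max_iter=10_000):
    """Iterate q <- f(q) until successive iterates agree."""
    for _ in range(max_iter):
        q_next = f(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    raise RuntimeError("successive substitution did not converge")

def newton(q=1.0, tol=1e-10, h=1e-7, max_iter=100):
    """Newton's method on g(q) = q - f(q), derivative by central difference.
    Note g'(0) = 0 here, so starting at q = 0 would fail: the method is
    faster near the root but more fragile about where it starts."""
    for _ in range(max_iter):
        g = q - f(q)
        dg = 1.0 - (f(q + h) - f(q - h)) / (2 * h)
        q_next = q - g / dg
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    raise RuntimeError("Newton's method did not converge")

# Both converge to the same physical root of 0.8*q**2 - 2*q - 5 = 0,
# q = (2 + sqrt(20)) / 1.6, approximately 4.0451.
```

Successive substitution plods toward the root with a guaranteed contraction, while Newton's method takes far fewer steps but needs a well-behaved derivative and a reasonable starting point, mirroring the reliability-versus-speed trade-off described above.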

6.2 Directions for Further Research

Because queueing network models have been widely used for the performance evaluation of modern computer systems and communication networks, this thesis is an initial attempt to study the algorithms for solving queueing network models. Much work in this area remains to be done. There are several extensions of this thesis. For single class separable queueing networks, these extensions include:

1. investigating the properties of the FL algorithm,

2. proving the convergence of the Linearizer algorithm.


For multiple class separable queueing networks, the extensions include:

1. proving the existence and uniqueness of solutions of the PE, QL, FL, and Linearizer algorithms,

2. proving the convergence of these algorithms,

3. investigating the properties of the FL algorithm,

4. developing new algorithms based on the QL and FL algorithms.

Bibliography

[1] S. C. Agrawal. Metamodeling: A Study of Approximations in Queueing Models. The MIT Press, Cambridge, Massachusetts, 1985.

[2] Y. Bard. Some extensions to multiclass queueing network analysis. In: M. Arato, A. Butrimenko and E. Gelenbe, eds. Performance of Computer Systems, North-Holland, Amsterdam, Netherlands, 1979.

[3] Y. Bard. A model of shared DASD and multipathing. Communications of the ACM, 23(10):564-572, October 1980.

[4] Y. Bard. A simple approach to system modeling. Performance Evaluation, 1(3):225-248, 1981.

[5] F. Baskett, K. M. Chandy, R. R. Muntz and F. G. Palacios. Open, closed, and mixed networks of queues with different classes of customers. Journal of the ACM, 22(2):248-260, April 1975.

[6] J. V. Bradley. Distribution Free Statistical Tests. Prentice-Hall, Englewood Cliffs, NJ, 1968.

[7] J. P. Buzen. Computational algorithms for closed queueing networks with exponential servers. Communications of the ACM, 16(9):527-531, September 1973.

[8] W.-M. Chow. Approximations for large scale closed queueing networks. Performance Evaluation, 3(1):1-12, 1983.


[9] K. M. Chandy and D. Neuse. Linearizer: A heuristic algorithm for queueing network models of computing systems. Communications of the ACM, 25(2):126-134, February 1982.

[10] K. M. Chandy and C. H. Sauer. Computational algorithms for product form queueing networks. Communications of the ACM, 23(10):573-583, October 1980.

[11] A. E. Conway and N. D. Georganas. RECAL - A new efficient algorithm for the exact analysis of multiple-chain closed queueing networks. Journal of the ACM, 33(4):768-791, October 1986.

[12] A. E. Conway, E. de Souza e Silva and S. S. Lavenberg. Mean value analysis by chain of product form queueing networks. IEEE Transactions on Computers, C-38(3):432-442, March 1989.

[13] L. W. Dowdy, D. L. Eager, K. D. Gordon and L. V. Saxton. Throughput concavity and response time convexity. Information Processing Letters, 19(4):209-212, November 1984.

[14] J. E. Dennis, Jr. and R. B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, NJ, 1983.

[15] D. L. Eager and K. C. Sevcik. Performance bound hierarchies for queueing networks. ACM Transactions on Computer Systems, 1(2):99-115, May 1983.

[16] D. L. Eager. Bounding Algorithms for Queueing Network Models of Computer Systems. Ph.D. Thesis, Tech. Rept. CSRG-156, University of Toronto, Toronto, Ontario, Canada, 1984.

[17] D. L. Eager and K. C. Sevcik. Analysis of an approximation algorithm for queueing networks. Performance Evaluation, 4(4):275-284, 1984.

[18] D. L. Eager and K. C. Sevcik. Bound hierarchies for multiple-class queueing networks. Journal of the ACM, 33(1):179-206, January 1986.


[19] K. P. Hoyme, S. C. Bruell, P. V. Afshari and R. Y. Kain. A tree-structured mean value analysis algorithm. ACM Transactions on Computer Systems, 4(2):178-185, May 1986.

[20] C. T. Hsieh and S. S. Lam. PAM - A noniterative approximate solution method for closed multichain queueing networks. ACM SIGMETRICS Performance Evaluation Review, 16(1):261-269, May 1988.

[21] S. S. Lam. A simple derivation of the MVA and LBANC algorithms from the convolution algorithm. IEEE Transactions on Computers, C-32(11):1062-1064, November 1983.

[22] S. S. Lam and Y. L. Lien. A tree convolution algorithm for the solution of queueing networks. Communications of the ACM, 26(3):203-215, March 1983.

[23] S. S. Lavenberg and M. Reiser. Stationary state probabilities at arrival instants for closed queueing networks with multiple types of customers. Journal of Applied Probability, 17(4):1048-1061, December 1980.

[24] E. D. Lazowska, G. S. Graham, J. Zahorjan, and K. C. Sevcik. Quantitative System Performance: Computer System Analysis Using Queueing Network Models. Prentice-Hall, Englewood Cliffs, NJ, 1984.

[25] J. D. C. Little. A proof of the queueing formula L = λW. Operations Research, 9(3):383-387, May/June 1961.

[26] J. S. Milton and J. C. Arnold. Introduction to Probability and Statistics: Principles and Applications for Engineering and the Computing Sciences. McGraw-Hill, New York, NY, 1990.

[27] J. M. Ortega and W. C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, NY, 1970.

[28] K. R. Pattipati, M. M. Kostreva and J. L. Teele. Approximate mean value analysis algorithms for queueing networks: existence, uniqueness, and convergence results. Journal of the ACM, 37(3):643-673, July 1990.


[29] M. Reiser and H. Kobayashi. Queueing networks with multiple closed chains: Theory and computational algorithms. IBM Journal of Research and Development, 19(3):283-294, May 1975.

[30] M. Reiser and H. Kobayashi. On the convolution algorithm for separable queueing networks. Proceedings of the International Symposium on Computer Performance Measurement, Modeling, and Evaluation, 109-117, Cambridge, Great Britain, 1976.

[31] M. Reiser and S. S. Lavenberg. Mean value analysis of closed multichain queueing networks. Journal of the ACM, 27(2):313-322, April 1980.

[32] W. C. Rheinboldt. Methods for Solving Systems of Nonlinear Equations. SIAM, Philadelphia, PA, 1974.

[33] P. J. Schweitzer. Approximate analysis of multiclass closed networks of queues. Proceedings of the International Conference on Stochastic Control and Optimization, 25-29, Amsterdam, Netherlands, 1979.

[34] P. J. Schweitzer, G. Serazzi and M. Broglia. A new approximation technique for product-form queueing networks. Submitted for publication, 1996.

[35] K. C. Sevcik and I. Mitrani. The distribution of queueing network states at input and output instants. Journal of the ACM, 28(2):358-371, April 1981.

[36] E. de Souza e Silva, S. S. Lavenberg and R. R. Muntz. A perspective on iterative methods for the approximate analysis of closed queueing networks. In: G. Iazeolla, P. J. Courtois and A. Hordijk, eds. Mathematical Computer Performance and Reliability, North-Holland, Amsterdam, Netherlands, 1984.

[37] E. de Souza e Silva, S. S. Lavenberg and R. R. Muntz. A clustering approximation technique for queueing network models with a large number of chains. IEEE Transactions on Computers, C-35(5):419-430, May 1986.

[38] E. de Souza e Silva and R. R. Muntz. A note on the computational cost of the linearizer algorithm for queueing networks. IEEE Transactions on Computers, 39(6):840-842, June 1990.


[39] R. Suri. A concept of monotonicity and its characterization for closed queueing networks. Operations Research, 33(3):606-624, May-June 1985.

[40] S. Tucci and C. H. Sauer. The tree MVA algorithm. Performance Evaluation, 5(3):187-196, 1985.

[41] J. Zahorjan. The Approximate Solution of Large Queueing Network Models. Ph.D. Thesis, Tech. Rept. CSRG-122, University of Toronto, Toronto, Ontario, Canada, 1980.

[42] J. Zahorjan, D. L. Eager and H. M. Sweillam. Accuracy, speed, and convergence of approximate mean value analysis. Performance Evaluation, 8(4):255-270, 1988.
