UNIVERSITY OF MARIBOR FACULTY OF LOGISTICS

Working paper

INTRODUCTION TO QUEUING MODELS

Dejan Dragan Borut Jereb

Celje, June 2013

CONTENTS

1 BASICS OF MODELING AND SIMULATION ... 4
  1.1 Description of the system ... 7
  1.2 Definition of the model and simulation ... 10
  1.3 Relationship between the system and the model ... 11
  1.4 Modeling and simulation ... 11
  1.5 Modeling methodology ... 12
  1.6 Model classification ... 13
  1.7 Mathematical modeling ... 14
  1.8 Theoretical and experimental modeling ... 15
  1.9 Computer simulation methodology ... 17
  1.10 Modeling and simulation iterative procedure ... 19
  1.11 The classification of simulation ... 20
  1.12 The methodology of system dynamics ... 22

2 DISCRETE EVENT SIMULATION ... 24
  2.1 Introduction of time ... 24
  2.2 Inter-arrival times distribution ... 25
    2.2.1 Poisson distribution ... 26
  2.3 Distribution of the number of arrivals ... 27
  2.4 Introduction to queueing systems ... 28
    2.4.1 Basic characteristics of queueing systems ... 31
    2.4.2 Queueing terminology and basic parameters ... 33
    2.4.3 Types of queueing systems ... 39
  2.5 Some probability basics of simulation ... 40
  2.6 Random generators ... 43

3 THEORY OF STOCHASTIC PROCESSES ... 46
  3.1 Definition of stochastic processes and basic properties ... 46
  3.2 Markov processes ... 49
  3.3 Markov chains ... 49
  3.4 Poisson processes ... 53
    3.4.1 Derivation of the distribution of the number of events ... 54
    3.4.2 Derivation of the distribution of the times between events ... 60
  3.5 Birth processes ... 61
  3.6 Death processes ... 63
  3.7 Birth-death processes ... 65

4 INTRODUCTION TO BASIC QUEUEING MODELS ... 70
  4.1 Single-channel queueing models ... 70
    4.1.1 Basic model M/M/1 ... 71
    4.1.2 Model M/M/1 with limited waiting space ... 76
    4.1.3 Model M/M/1 (gradually "scared" or "balked" customers) ... 80
    4.1.4 Model M/M/1 with limited number of customers ... 83
    4.1.5 Model M/M/1 with additional server for longer queues ... 87
  4.2 Multiple-channel queueing models ... 92
    4.2.1 Basic model M/M/r ... 92
    4.2.2 Model M/M/r with limited waiting space ... 101
    4.2.3 Model M/M/r with large number of servers ... 105
    4.2.4 Model M/M/r (waiting space is not allowed) ... 108
    4.2.5 Model M/M/r with limited number of customers ... 109
  4.3 Conclusion about the queueing models ... 112

LITERATURE ... 114

1 BASICS OF MODELING AND SIMULATION

The building of models by means of observations and the study of model properties are main components of modern science. Models can have a more or less formal character ("hypotheses", "laws of nature", "paradigms", etc.) and try to unite observations into a pattern which has the same characteristics as the observed system [Ljung]. A system is a confined arrangement of mutually affected entities (processes) that influence one another, where a process denotes the conversion and/or transport of material, energy and/or information [Isermann]. When we interact with a system, we need some concept of how its variables relate to each other. With a broad definition, we call such an assumed relationship among observed signals a model of the system. A model of a system is anything to which an "experiment" can be applied in order to answer questions about that system. This implies that a model can be used to answer questions about a system without doing experiments on the real system. Instead, we perform simplified "experiments" on the model, which in turn can be regarded as a kind of simplified system that reflects properties of the real system. In the simplest case, a model can just be a piece of information that is used to answer questions about the system [Fritzson]. Models, just like systems, are hierarchical in nature: we can cut out a piece of a model, which becomes a new model that is valid for a subset of the experiments for which the original model is valid. Naturally, there are different kinds of models depending on how the model is represented: mental models, verbal models, physical models, mathematical models, etc.

Mental models help us answer questions about persons' behaviour, verbal models are expressed in words, physical models are physical objects that mimic some properties of the real system, and mathematical models are descriptions of the system where the relationships between its variables are expressed in mathematical form [Fritzson]. In many cases, there is a possibility of performing "experiments" on models instead of on the real systems corresponding to them. This is actually one of the main uses of models and is denoted by the term simulation, from the Latin "simulare", which means to pretend. We usually define simulation as follows: "A simulation is an experiment performed on a model".


Simulation allows repeated observation of the model, where one or many simulation runs can be done. Afterwards, an analysis can be performed, which aids the ability to draw conclusions, verify and validate the research, and make recommendations based on various iterations or simulations of the model. In this way, modeling and simulation give us a strong problem-based discipline, which allows repeated testing of a hypothesis [Sokolowski]. Simulation, as a way of solving important and complex problems, is a very old research discipline. For example, princes and rulers in the years before our era examined the different possible strategies of their enemies and their answers to them while conducting military exercises. Also, in many important, complex and interconnected industries, simulation methods in the broadest sense were used more intuitively than scientifically. With the advent of the first computers, simulation became a scientific discipline and a part of the system-theory approach. Nowadays, the range of application of simulation models is wide and extends to all areas of science, especially the organizational, industrial, economic, transportation, technical and other important sciences. One of the main purposes of simulation is the analysis of the responses of a system in the future, or increasing the understanding of the system under consideration. In this way, the costs of experiments on real systems and potential hazards can be avoided. In addition, simulation is used when the treated problem is very complex and cannot be solved by other methods. Simulation models are not exclusively bound to the computer, but the use of computer simulation is nowadays so widespread that the term "computer simulation" has become synonymous with simulation as a problem-solving technique [Kljajić]. For wider use of simulation in business environments, simple and efficient solutions are needed. They must be coordinated with real data and should not demand too high a level of specialized computer knowledge. This is especially important because business simulation data are dedicated mostly to managers, who must take quick decisions based on reliable and up-to-date information. Accelerating the deployment of business-system simulation is a consequence of improved communication between the human and the machine, which has enabled more frequent use of simulation models by managers as well, especially where business models are concerned [Kljajić].

Nowadays, highly sophisticated graphical interfaces enable much easier model building than before. Thus simulation, design of experiments, testing of different scenarios and analysis of system behavior are possible for almost everybody with basic knowledge of informatics. Consequently, business simulation has actually been transferred from highly specialized laboratories to the desks of common users [Kljajić]. Different studies around the world and the existing literature [Dijk, Kljajić, Saltman, Tung] show that the combination of simulation and decision support systems enables decisions of higher quality. Simulation accompanied by animation, which shows the operations of the modeled system, can help users learn the specifics of the system's working mechanism very quickly. Moreover, the research in [Dijk] confirms that decision makers understand simulation results better when they are also presented with animation. Thus, combined simulation and animation motivate decision makers to search for new solutions, since the testing of scenarios that are impossible in the real world is enabled in the simulated world [Kljajić]. Historically, scientists and engineers have concentrated on studying natural phenomena which are well modeled by the laws of gravity, classical and non-classical mechanics, physical chemistry, etc. In so doing, we typically deal with quantities such as the displacement, velocity, and acceleration of particles and rigid bodies, or the pressure, temperature, and flow rates of fluids and gases. These are "continuous variables" in the sense that they can take on any real value as time itself "continuously" evolves. Based on this fact, a vast body of mathematical tools and techniques has been developed to model, analyze, control and simulate such systems (continuous simulation).
It is fair to say that the study of ordinary and partial differential equations currently provides the main infrastructure for system analysis and control. But in today's technological and increasingly computer-dependent world, we notice two things. Firstly, many of the quantities we deal with are "discrete", typically involving counting in integer numbers (how many parts are in an inventory, how many planes are on a runway, how many telephone calls are active). Secondly, many of the processes depend on instantaneous "events" such as the pushing of a button, hitting a keyboard key, or a traffic light turning green. In fact, much of the technology we have invented is event-driven, like communication networks, manufacturing facilities, or the execution of computer programs [Cassandras].


In this case we are talking about discrete and/or event-driven simulation, where the states of the system change at discrete time moments. These changes are called discrete events, which happen periodically at specific time moments, or asynchronously depending on conditions determined by the values of state variables [Zupančič]. Discrete event simulation, among other functionalities, enables the determination of the efficiency of existing technology and the identification of bottlenecks, the determination of time dependence for the supply of products in inventory control, the design of models for operational planning of production, the analysis of the system in the future, etc. [Kljajić]. In the past, the complexity of model construction and simulation experiments limited the use of simulation in business environments. Nowadays these problems are mostly bridged, especially when visual interactive modeling is possible. The latter enables us to develop the model and run simulations in an interactive environment, which simplifies the model design and improves the perception of the system performance [Kljajić].

1.1 Description of the system

The system is a group of elements or units which are connected into a certain whole. Each element has certain properties (attributes) and activities. Figure 1 shows the basic classification of systems.

Figure 1: The basic classification of systems.


Naturally, there are more or less complex systems in the real world. For example, figure 2 shows a relatively complex system: a house with solar-heated warm tap water, together with clouds and sunshine [Fritzson].

Figure 2: A house with solar-heated warm tap water

Figure 3 shows two less complex second-order systems: a mechanical system and an electrical system.

Figure 3: A mechanical system and an electrical system

In general, a system can be described by (c.f. figure 4):

• Input variables X = {X_i}, i = 1, 2, 3, ..., m
• States of the system Z = {Z_i}, i = 1, 2, 3, ..., n
• Outputs from the system Y = {Y_i}, i = 1, 2, 3, ..., l


Figure 4: System

Table 1 shows examples of different systems, their elements, properties, and activities.

Table 1: Examples of different systems, their elements, properties, and activities

The state of the system is defined as the value of the state variables at a certain time. It can be described by the elements of the system, their properties and activities. A process is a change of the state of the system, induced by the influence of input variables or internal events in the system [Kljajić]. The behavior of the system can be defined as the response (reaction) of the system to input signals (stimuli). Figure 5 shows the response y(t) of the system to a step input signal x(t) [Zupančič, reg].

Figure 5: The response y(t) of the system to a step input signal x(t)
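Such a step response can also be produced numerically. The following is a minimal sketch, assuming a simple first-order system T·y'(t) + y(t) = K·x(t) integrated with the explicit Euler method; the gain K and time constant T are arbitrary illustrative values, not taken from the text.

```python
# Euler simulation of a first-order system T*y'(t) + y(t) = K*x(t)
# driven by a unit step input x(t) = 1 for t >= 0.
# K and T are arbitrary illustrative values.

def step_response(K=2.0, T=5.0, dt=0.01, t_end=30.0):
    y, t = 0.0, 0.0
    trajectory = [(t, y)]
    while t < t_end:
        x = 1.0                      # unit step input
        dydt = (K * x - y) / T       # first-order dynamics
        y += dt * dydt               # explicit Euler step
        t += dt
        trajectory.append((t, y))
    return trajectory

traj = step_response()
```

The response rises monotonically toward the steady-state value K, which mirrors the qualitative shape shown in figure 5.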


1.2 Definition of the model and simulation

A model of the system is a simplified visualization (concept) of the real system. System simulation is a methodology of problem solving by means of experimentation on a computer model, where the main purpose is to analyze the operation of the whole system, or of particular parts of the system, under certain conditions [Kljajić]. Simulation is a dynamic visualization of the system behavior for the following purposes [Kljajić]:

1. the description of the system or its parts,
2. the explanation of the system behavior in the past,
3. the prediction of the system behavior in the future, and
4. the understanding of the system principles.

When treating computer simulation, the modeling procedure consists of the following steps [Kljajić]:

• definition of the problem,
• determination of objectives,
• a draft of the study,
• creation of the mathematical model,
• creation of the computer program,
• validation of the model,
• preparation of the experiment (simulation scenarios),
• simulation and analysis of the results.


1.3 Relationship between the system and the model

The relationship between the system and the model can be defined in the following way (c.f. figure 6) [Kljajić]:

1. The system is deterministic and the model is deterministic. Examples are models of simple mechanical systems, like for instance the second-order differential equation

   x''(t) + B·x'(t) + C·x(t) = U(t)                    (1.1)

2. The system is deterministic and the model is stochastic. An example is the methodology of simplification of complicated functions by means of the Monte Carlo method.
3. The system is stochastic and the model is deterministic. Examples are the congruential generators of random numbers.
4. The system is stochastic and the model is stochastic. Examples are complex organizational systems, where solving by means of system simulation is needed.
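As a minimal sketch of case 1, the deterministic model (1.1) can be simulated numerically by rewriting it as two first-order state equations and integrating with the explicit Euler method; the coefficients B, C and the constant forcing U below are arbitrary illustrative values, not taken from the text.

```python
# Explicit Euler simulation of the deterministic model (1.1):
#   x''(t) + B*x'(t) + C*x(t) = U(t)
# rewritten as two first-order state equations.
# B, C and the constant forcing U are illustrative values only.

def simulate_second_order(B=0.5, C=1.0, U=1.0, dt=0.001, t_end=20.0):
    x, v = 0.0, 0.0              # initial position and velocity states
    for _ in range(int(t_end / dt)):
        a = U - B * v - C * x    # acceleration solved from (1.1)
        x += dt * v              # Euler update of the position state
        v += dt * a              # Euler update of the velocity state
    return x

x_final = simulate_second_order()
```

For a constant input U the solution settles toward the static equilibrium x = U/C, which offers a quick sanity check of the simulation.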

Figure 6: Relationship between the system and the model

1.4 Modeling and simulation

Modeling represents the relation between the simulated system and the model, while simulation represents the relation between the model and the computer process. Within this philosophy, the following can be stated [Kljajić]: Object X simulates object Y only if: a) X and Y are both systems, b) Y represents the simulated system (system), c) X represents a simplification of the simulated system (model), d) the validity of X with respect to Y is not necessarily complete.


The following can also be stated: simulation is the process of generating model behavior. In other words, it represents experimenting on the model [Kljajić].

1.5 Modeling methodology

When the problem that is the subject of study is precisely defined, the modeling procedure can begin. Within this framework, all important variables and their interconnections must first be defined. We focus only on relevant data, which are dominant for the chosen aspect of the system study. At this stage we cannot define specific rules for how to approach this matter. The only exception are natural systems, where conservation laws and continuity hold. The basic procedure of modeling is shown in figure 7 [Kljajić].

Figure 7: The basic procedure of modeling

The researcher can make certain simplifications with respect to the objectives and knowledge of the system. He can determine the structure of the system, collect all the data and then build a suitable model within the framework of existing theory. Subsequently, he can study the properties of the real system by using the model, or try to choose the most appropriate strategy of influencing the real system. A view of the modeling and simulation procedure from a certain angle is shown in figure 8 [Kljajić].

Figure 8: The modeling and simulation procedure

With respect to the way of description and the sequence of formation, the following classification of models can be introduced [Kljajić]:

• Verbal models, where the principles of the observed system are described in natural language.
• Physical models, which are usually miniature images of the observed system. They can be very useful for research where experimenting with the original system would be expensive or even dangerous.
• Mathematical or formal models, which are the most precise descriptions of a certain system.

1.6 Model classification

There are many classifications of models. One of the possible classifications was introduced by [Forrester] and is shown in figure 9.

Figure 9: Classification of models by [Forrester]

The choice of model depends on the system which is taken into consideration. From figure 9 it is evident that models can be divided into physical and abstract models as possible ways of studying systems. Naturally, every chosen model depends on the theory which was used to construct it. With respect to the abstraction of the used theory, we can talk about exact or verbal theories, depending on the phenomenon we want to interpret. Physical models are usually simplified and reduced real systems, which precisely define the behavior of the real system, especially its more important properties, in a certain working environment and under specific conditions. Static physical models are, for instance, illustrations of urban or architectural solutions in the form of scale models, which enable visual imagination of a certain spatial form. Dynamical physical models are, for example, aerodynamic tunnels for the investigation of aircraft properties, while hydrodynamic channels are used for the investigation of the hydrodynamic properties of ships. The advantages of dynamical physical models are their clearness and transparency. But they also have drawbacks, since they are usually too big and inflexible, and often do not reflect the causal dependencies between certain phenomena and variables.

1.7 Mathematical modeling

A mathematical model is an abstract visualization of a certain system and is useful for the investigation of system properties. It enables some conclusions about the characteristics and behavior of the system. Mathematical models are more or less homomorphic to the real system. Whether the achieved results and conclusions are reliable and significant enough depends on the level of model suitability. It must be noted here that practice and the developer's experience are crucial in attempts to design an adequate model which sufficiently reflects the properties of the system. Mathematical models can be classified into very different categories, like graphs, tables, logical symbols, etc., which represent a certain state of the system and its behavior. Due to their precision in expressive power and the possibility of analyzing the future behavior of systems in quantitative form, they can be very interesting for system theory. With their help, the behavior of systems can be analyzed and decision making or system control can be applied. But we must be aware that it is not always possible to find an appropriate mathematical model.


1.8 Theoretical and experimental modeling

For the derivation of mathematical models of dynamic systems, one typically discriminates between theoretical and experimental modeling [Isermann]. For the theoretical analysis, also termed theoretical modeling, the model is obtained by applying methods from calculus to equations as e.g. derived from physics. One typically has to apply simplifying assumptions concerning the system, as only this will make the mathematical treatment feasible in most cases. In general, the following types of equations are combined to build the model [Isermann]:

1. Balance equations: balance of mass, energy, momentum. For distributed parameter systems, one typically considers infinitesimally small elements; for lumped parameter systems, a larger (confined) element is considered.
2. Physical or chemical equations of state: these are the so-called constitutive equations and describe reversible events, such as e.g. inductance or the second Newtonian postulate.
3. Phenomenological equations, describing irreversible events, such as friction and heat transfer. An entropy balance can be set up if multiple irreversible processes are present.
4. Interconnection equations according to e.g. Kirchhoff's node and mesh equations, torque balance, etc.

By applying these equations, one obtains a set of ordinary or partial differential equations, which finally leads to a theoretical model with a certain structure and defined parameters if all equations can be solved explicitly. In many cases, the model is too complex or too complicated, so that it needs to be simplified to be suitable for subsequent application. Even if this is not possible, the individual model equations in many cases still give us important hints concerning the model structure and thus can still be useful [Isermann]. In the case of an experimental analysis, which is also termed identification, a mathematical model is derived from measurements [Isermann].
Here, one typically has to rely on certain a priori assumptions, which can either stem from theoretical analysis or from previous (initial) experiments. Measurements are carried out and the input as well as the output signals are subjected to some identification method in order to find a mathematical model that describes the relation between the input and the output [Isermann].


The theoretically and the experimentally derived models can also be compared if both approaches can be applied. If the models do not match, then one can get hints from the character and the size of the deviation as to which steps of the theoretical or the experimental modeling have to be corrected [Isermann]. The system analysis can typically be neither completely theoretical nor completely experimental. To benefit from the advantages of both approaches, one rarely uses only theoretical modeling (leading to so-called white-box models) or only experimental modeling (leading to so-called black-box models), but rather a mixture of both, leading to what is called gray-box models (c.f. figure 10) [Isermann]. Despite the fact that theoretical analysis can in principle deliver more information about the system, provided that the internal behavior is known and can be described mathematically, experimental analysis has found ever increasing attention over the past 50 years. The main reasons are the following [Isermann]:

• Theoretical analysis can become quite complex even for simple systems.
• Model coefficients derived from theoretical considerations are mostly not precise enough.
• Not all actions taking place inside the system are known.
• The actions taking place cannot be described mathematically with the required accuracy.
• Some systems are very complex, making the theoretical analysis too time-consuming.
• Identified models can be obtained in a shorter time with less effort compared to theoretical modeling.

The experimental analysis allows the development of mathematical models by measurement of the input and output of systems of arbitrary composition [Isermann]. One major advantage is the fact that the same experimental analysis methods can be applied to diverse and arbitrarily complex systems. By measuring the input and output only, one does however obtain only models governing the input-output behavior of the system, i.e. the models will in general not describe the precise internal structure of the system. These input-output models are approximations but are still sufficient for many areas of application [Isermann].
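A toy version of such an identification step can be sketched as follows. The "measurements" here are generated synthetically from a known first-order discrete-time model y[k] = a·y[k-1] + b·u[k-1] (the model form and all values are illustrative assumptions, not taken from the text), and the parameters are then recovered by least squares via the normal equations; real identification would of course use recorded plant signals instead.

```python
# Least-squares identification of the input-output model
#   y[k] = a*y[k-1] + b*u[k-1]
# from (synthetic) input-output measurements.
import random

random.seed(1)
a_true, b_true = 0.8, 0.5
u = [random.uniform(-1, 1) for _ in range(200)]   # excitation signal
y = [0.0]
for k in range(1, 200):
    y.append(a_true * y[k - 1] + b_true * u[k - 1])

# Normal equations for the two unknowns (a, b):
s11 = sum(y[k-1] * y[k-1] for k in range(1, 200))
s12 = sum(y[k-1] * u[k-1] for k in range(1, 200))
s22 = sum(u[k-1] * u[k-1] for k in range(1, 200))
r1  = sum(y[k] * y[k-1] for k in range(1, 200))
r2  = sum(y[k] * u[k-1] for k in range(1, 200))
det = s11 * s22 - s12 * s12
a_hat = (r1 * s22 - r2 * s12) / det
b_hat = (r2 * s11 - r1 * s12) / det
```

Since the synthetic data are noise-free, the estimates recover the true parameters essentially exactly; with noisy measurements the same procedure would yield approximate values.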


Figure 10: Different kinds of mathematical models ranging from white box models to black box models [Isermann]

1.9 Computer simulation methodology

As mentioned in the previous chapter, the analytical solution of the differential equations which describe a certain dynamical system can be found only for the simplest and most idealized systems. In the case of more complicated systems of differential equations, the use of numerical methods is usually the only possible way to find their solutions. Within this framework, computer simulation is definitely the most popular approach. In the last two decades, many problem-oriented simulation languages have been developed. Naturally, the corresponding modeling demands thorough knowledge of the problem we want to investigate by means of computer simulation. Figure 11 shows the basic modeling approach for the purpose of computer simulation.


Figure 11: The basic modeling approach for the purpose of computer simulation

The basic modeling approach for the purpose of computer simulation consists of the following stages [Kljajić]:

a) Definition of the problem: define the level and the objective of the modeling, the volume of the treated system, etc.
b) Define the variables, the feedback loops, and the interactions between variables and the parts of the system.
c) Analyze the problem in a wider frame, which establishes the connection between the observed system and concrete solutions from a more general point of view.
d) Construct the mathematical details of the system with the corresponding equations, suitable for the chosen simulation language.
e) Repeat all the stages until a satisfactory solution is found.


1.10 Modeling and simulation iterative procedure

In this chapter, let us describe the modeling and simulation procedure more precisely. As mentioned before, every real object is studied by observation of available data and by experiments which can be done on this object. Mathematical (conceptual) models are then constructed on the basis of data analysis. As we know, they represent an imitation of the real object and behave similarly for the purposes they serve. When building the conceptual mathematical model, the following issues must also be taken into consideration [Zupančič]:

• The purpose of the modeling must be clearly defined,
• The constraints and the limitations of the model must also be defined,
• The attributes of the object which will be included in the model must be chosen, where insignificant details are neglected,
• The assumptions about certain real principles must be idealized,
• The structure and parameters of the model must be defined, which represent the connections between particular attributes in the system (for example differential or difference equations).

When the conceptual model is constructed, as much information as possible about its behavior must be collected. Only in simple cases can analytical solutions be derived. In general this is not possible, and a simulation model, based on the conceptual model, must be constructed. The data about the model behavior can be collected by deduction (analytical treatment) from the conceptual model or by experimenting with the simulation model. Afterwards, validation of the model must be applied, where the analysis of the fit between real data and simulated data is done. Naturally, verification of the simulation model must also be executed, where it is tested whether the simulation model reflects the properties of the conceptual model in the proper way (the computer simulation program is without any faults). The analysis of the system, the construction of the model, the experimenting with it, verification and validation must usually be done repeatedly, until the demanded results are achieved. Thus, [Neelemkavil] defines simulation as a slow, iterative and experimentally oriented technique (c.f. figure 12).


Figure 12: Iterative procedure of modeling and simulation [Neelemkavil]

1.11 The classification of simulation

Simulation as a methodology for the analysis and design of systems can be classified into continuous, discrete and combined simulation [Zupančič], if the classification is related to the type of the used model. With respect to the used tool or technique (type of computer) by which it is executed, simulation can be classified into analog, digital, or hybrid simulation [Zupančič]. Continuous simulation can be analog or digital, discrete simulation is always digital, while combined simulation can be digital or hybrid. Figure 13 shows the classification of simulation [Zupančič]. The speed of simulation execution with respect to real system time determines the way the model can be simulated:

• slower than in real system time,
• in real system time,
• faster than in real system time.


Figure 13: The classification of simulation [Zupančič]

Simulation which is not executed in real system time is the most common and can be applied on general-purpose computers. Whether the execution is faster or slower than real system time depends on the time constants of the real system and on how fast the simulation tool can perform the simulation. If the simulation is done in real system time, the computer must usually be connected to the real system. Efficient simulation in real system time is possible only if we have specific software and hardware (modern simulation workstations, analog-hybrid computers) [Zupančič]. Continuous simulation enables us to simulate systems which can be described by linear or nonlinear ordinary (ODE) or partial (PDE) differential equations with constant or varying coefficients. The condition for this kind of simulation is that the state variables and derivatives are continuous throughout the entire simulation execution, where the independent variable is usually time. In discrete simulation, the states of the system can change only at discrete time instants. These changes are called discrete events, which happen periodically at specific time moments (usually in the theory of automatic control [Zupančič]), or asynchronously depending on conditions determined by the values of state variables [Zupančič]. A classical example of asynchronous simulation is a post office with one server and a waiting line (queue). The customers arrive randomly into the system and are served according to the FIFO (first in, first out) discipline. In this case, the computer can simulate the arrivals of the customers into the system and the working mechanism inside the queueing system. In this way, very complex systems can be treated and simulated, where analytical solutions usually could not be found at all. According to [Cellier], combined or hybrid simulation is simulation which can be described on the whole observation interval by means of differential equations, where at least one state variable or its derivative is not a continuous quantity. By this kind of definition, we can see that in reality almost every simulation of a real system is actually a combined continuous-discrete simulation.

1.12 The methodology of system dynamics

There are several equivalent ways of visualizing a system that are suitable for computer simulation. In the simulation of business systems, the so-called methodology of system dynamics, proposed by [Forrester], has achieved the most prominent position. In this methodology, the so-called block diagrams for discrete event simulation have also found their place. The methodology of system dynamics is not only the generation of equations that describe the dynamics of business processes; it is a whole methodology for solving dynamical problems (c.f. figure 14) [Forrester].

Figure 14: The concept of problem solving in the methodology of system dynamics [Forrester]

As can be noticed from figure 14, the arrows show the step-by-step solving procedure, where the next step can iteratively influence the previous step, which means that the corresponding steps are mutually dependent.

The methodology of system dynamics can be very useful for the development of decision support systems in business environments, where the following can be achieved [Kljajič]:

• Learning about the behaviour of an integral simulation system, which can support business decisions, strategic planning, and the analysis of organizational systems,
• Improvement of the process of planning and decision making,
• Acquisition of new knowledge about the behaviour and management of complex systems, and
• Education of professional personnel for planning and governance of the company.

2 DISCRETE EVENT SIMULATION

The focus of discrete event simulation is on events, which can influence the system. These events can [Kljajič]:

• cause a change of value of a certain system variable,
• trigger or disconnect a system variable,
• activate or deactivate a certain process.

Discrete events can be treated from two points of view [Kljajič]:

• In the case of element orientation (particle orientation), the system elements represent the starting point for simulation analysis,
• In the case of event orientation, the system events represent the starting point for simulation analysis.

2.1 Introduction of time

Time is represented by an internal simulation clock when the simulation of discrete events is treated. The relationship between the simulation time and the real time depends on the nature of the observed system, where the generation of the arrival times of elements (transactions) depends on system conditions. The basic terms, when we study the ways waiting lines are formed, are [Kljajič]:

• The sequence of arrivals of elements (transactions) (arrival patterns),
• The processing of system elements (transactions) (service process),
• The ways waiting lines are formed (queueing disciplines).

The processing (server service) of the system elements (transactions) can be described by the service time and the capacity of processing (capacity of server, service capacity). The processing time (time of service) is the time needed for the processing of a system element (transaction, dynamic entity). The capacity of processing (capacity of server service) represents the number of system elements (transactions) that can be processed simultaneously.

When modeling the system, the probability distributions of the times between two consecutive elements arrivals (inter-arrival times) and the service processing time must be given. There are several possibilities of waiting line formations (disciplines), like FIFO, LIFO, random, etc, which will be more precisely defined in the sequel.

2.2 Inter-arrival times distribution

The arrivals of elements into the system are usually described by inter-arrival times. In real systems, the number of different inter-arrival time distributions is practically unlimited. Theoretical distributions are used to describe the real ones; naturally, the theoretical distributions are only more or less accurate approximations of the real distributions. The most frequent theoretical distributions, which can be used for the purpose of simulation, are [Kljajič]:

• uniform distribution,
• exponential distribution,
• normal distribution, and
• Erlang distribution.
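As an illustration (not part of the original text), all four distributions above can be sampled with Python's standard `random` module; the parameter values below are arbitrary, and an Erlang draw is built as the sum of k independent exponential draws:

```python
import random

rng = random.Random(0)
N = 50_000

# Sample means for the four common inter-arrival/service-time models:
uni_mean = sum(rng.uniform(2.0, 6.0) for _ in range(N)) / N   # uniform on [2, 6], mean -> 4
exp_mean = sum(rng.expovariate(0.5) for _ in range(N)) / N    # exponential, mean -> 1/0.5 = 2
nrm_mean = sum(rng.gauss(4.0, 1.0) for _ in range(N)) / N     # normal(4, 1), mean -> 4
# Erlang(k, rate): the sum of k independent exponential variables
erl_mean = sum(sum(rng.expovariate(1.5) for _ in range(3))
               for _ in range(N)) / N                         # mean -> 3/1.5 = 2
```

Each sample mean should land close to the theoretical mean of the corresponding distribution.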

For the description of the arrivals of elements (transactions, customers) into the system, the following parameters are usually used [Winston]:

• Ta ... time interval between two consecutive arrivals,
• E(Ta) = 1/λ = ∫_0^∞ t·f(t) dt ... the mean or average inter-arrival time,
• λ = 1/E(Ta) ... the arrival rate (units of arrivals per time unit).

Naturally, Ta is a random variable and f(t) is supposed to be the probability density function of the random variable Ta.

2.2.1 Poisson distribution

Suppose we are dealing with the random variable Ta, which represents the time interval (0, t] between the time origin and the first following event (arrival). In the case of a Poisson distribution of the number of arrivals, it can be shown that the probability for Ta to take a value between t and t + dt follows the exponential law [Dragan, Winston, Taha, Hillier]:

f_Ta(t) = f(t) = λ·e^(−λ·t),  t > 0                  (2.1)

where f(t) is the exponential probability density function of the random variable Ta, and λ is a positive constant. If the inter-arrival times have an exponential distribution, then it can be shown that they also have the so-called no-memory property [Winston]. This finding is very important, because it implies that if we want to know the probability distribution of the time until the next arrival, then it does not matter how long it has been since the last arrival [Winston]. Figure 15 shows four examples of the exponential distribution for arrival rates λ = 2, 1, 0.5, 0.25.

Figure 15: Four examples of exponential distribution for arrival rates λ = 2, 1, 0.5, 0.25


Figure 15 conveys information about the chance that the next event has not yet happened by time t. From figure 15 we can conclude that very long inter-arrival times are unlikely [Winston, Dragan]: the longer the inter-arrival time, the smaller the chance that the next event has not yet happened.
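The no-memory property can be checked empirically. The sketch below (an illustration with arbitrarily chosen values of λ, s, and t, not part of the original text) samples exponential inter-arrival times and compares the conditional tail probability P(Ta > s + t | Ta > s) with the unconditional P(Ta > t):

```python
import random

rng = random.Random(7)
lam = 2.0
times = [rng.expovariate(lam) for _ in range(100_000)]

mean = sum(times) / len(times)      # estimates E(Ta) = 1/lam = 0.5

# No-memory property: P(Ta > s + t | Ta > s) should equal P(Ta > t) = e**(-lam*t)
s, t = 0.3, 0.4
p_cond = (sum(1 for x in times if x > s + t)
          / sum(1 for x in times if x > s))
p_uncond = sum(1 for x in times if x > t) / len(times)
# Both probabilities should be close to e**(-0.8), about 0.449
```

The two empirical probabilities agree, regardless of how much time s has already elapsed.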

2.3 Distribution of the number of arrivals

It can be shown that there is a strong connection between the Poisson distribution of the number of arrivals and the exponential distribution of the inter-arrival times. If the inter-arrival times are exponential, the probability distribution of the number of arrivals occurring in any time interval of length t is given by the following important theorem [Winston]:

Inter-arrival times are exponential with parameter λ if and only if the number of arrivals to occur in an interval of length t follows a Poisson distribution with parameter λ·t. In general, a discrete random variable N has a Poisson distribution with parameter λ if the probability distribution function has the form [Winston]:

P[N = n] = (λ^n / n!)·e^(−λ),  n = 0, 1, 2, ...                  (2.2)

If we define N(t) to be the number of arrivals to occur during any time interval of length t, from the previous theorem we can apply the following expression [Winston]:

P[N(t) = n] = ((λ·t)^n / n!)·e^(−λ·t),  n = 0, 1, 2, ...                  (2.3)

Since N(t) has a Poisson distribution with parameter λ·t, it can be shown that the expectation and variance are [Winston]:

E[N(t)] = VAR[N(t)] = λ·t                  (2.4)

It follows that an average of λ·t arrivals occur during a time interval of length t, so λ may be thought of as the average number of arrivals per time unit, or the arrival rate [Winston].
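The theorem can be illustrated numerically (a sketch, not part of the original text): generate exponential inter-arrival times, count the arrivals that fall in an interval of length t, and check that both the mean and the variance of the count are close to λ·t, as stated by (2.4):

```python
import random

rng = random.Random(3)
lam, t = 2.0, 5.0

def count_arrivals(rng, lam, t):
    """Number of arrivals in (0, t] when inter-arrival times are exponential(lam)."""
    n, clock = 0, rng.expovariate(lam)
    while clock <= t:
        n += 1
        clock += rng.expovariate(lam)
    return n

counts = [count_arrivals(rng, lam, t) for _ in range(20_000)]
mean = sum(counts) / len(counts)                            # -> lam*t = 10
var = sum(c * c for c in counts) / len(counts) - mean ** 2  # -> lam*t = 10
```

For a Poisson-distributed count, the mean and variance coincide, which the two estimates confirm.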


Figure 16 shows six examples of Poisson distribution for arrival rates λ = 0.25, 1, 2, 3, 5, 12 .

Figure 16: Six examples of Poisson distribution for arrival rates λ = 0.25, 1, 2, 3, 5, 12 .

2.4 Introduction to queueing systems

Queues (waiting lines) are a part of everyday life. We all wait in queues to buy a movie ticket, make a bank deposit, pay for groceries, mail a package, obtain food in a cafeteria, start a ride in an amusement park, etc. We have become accustomed to considerable amounts of waiting, but still get annoyed by unusually long waits [Hillier]. However, having to wait is not just a petty personal annoyance. The amount of time that a nation’s populace wastes by waiting in queues is a major factor in both the quality of life there and the efficiency of the nation’s economy. For example, before its dissolution, the U.S.S.R. was notorious for the tremendously long queues that its citizens frequently had to endure just to purchase basic necessities. Even in the United States today, it has been estimated that Americans spend 37,000,000,000 hours per year waiting in queues. If this time could be spent productively instead, it would amount to nearly 20 million person-years of useful work each year! [Hillier]


Even this staggering figure does not tell the whole story of the impact of causing excessive waiting. Great inefficiencies also occur because of other kinds of waiting than people standing in line. For example, making machines wait to be repaired may result in lost production. Vehicles (including ships and trucks) that need to wait to be unloaded may delay subsequent shipments. Airplanes waiting to take off or land may disrupt later travel schedules. Delays in telecommunication transmissions due to saturated lines may cause data glitches. Causing manufacturing jobs to wait to be performed may disrupt subsequent production. Delaying service jobs beyond their due dates may result in lost future business [Hillier].

Let us condense some of the most typical real cases of queueing systems [Dragan]:

• Waiting for service in restaurants, banks, shops, or at the doctor's office,
• Waiting for transfer by bus, train, or plane,
• Waiting for a ticket to the cinema, theater or game,
• Waiting of cars at a gas station,
• Waiting of aircraft on takeoff or landing,
• Waiting of machinery to be repaired,
• Waiting of the material in the warehouse for sale,
• Waiting for a telephone call to establish a connection, and so on.

The limited number of servers is usually the reason why waiting lines are formed, when all the customers cannot be served simultaneously. But in some cases, waiting lines are also formed as a consequence of time limitations, when the service is possible only in certain time intervals. For example, passing through an intersection with a traffic light is possible only when the light is green.

The entities that queue for service are called customers, or users, or jobs, depending on what is appropriate for the situation at hand [Stewart]. Customers who require service are said to “arrive” at the service facility and place service “demands” on the resource.
At a doctor’s office, the waiting line consists of the patients awaiting their turn to see the doctor; the doctor is the server who is subject to a limited resource, in this case time. The planes may be viewed as users and the runway viewed as a resource that is assigned by an air traffic controller. The resources are of finite capacity, meaning that there is not an infinity of them, nor can they work infinitely fast. Furthermore, arrivals place demands upon the resource, and these demands are unpredictable in their arrival times and unpredictable in their size. Taken


together, limited resources and unpredictable demands imply a conflict for the use of the resource and hence queues of waiting customers [Stewart]. Our ability to model and analyze systems of queues helps to minimize their inconvenience, to maximize the use of the limited resources, and to enable possible improvements of the existing system. An analysis may tell us something about the expected time that a resource will be in use, or the expected time that a customer must wait. This information may then be used to make decisions as to when and how to upgrade the system: for an overworked doctor to take on an associate or an airport to add a new runway, for example [Stewart].

For the efficient analysis and optimization of queueing systems, the corresponding models must be built. They can be used to answer questions like the following [Winston]:

1. What fraction of the time is each server idle?
2. What is the expected number of customers present in the queue?
3. What is the expected time that a customer spends in the queue?
4. What is the probability distribution of the number of customers present in the queue?
5. What is the probability distribution of a customer’s waiting time?

Queueing theory is the study of waiting systems [Hillier]. It uses queueing models to represent the various types of queueing systems (systems that involve queues of some kind) that arise in practice. Formulas for each model indicate how the corresponding queueing system should perform. Therefore, they are very helpful for determining how to operate a queueing system in the most effective way. Providing too much service capacity to operate the system involves excessive costs. But not providing enough service capacity results in excessive waiting and all its unfortunate consequences. So the models enable finding an appropriate balance between the cost of service and the amount of waiting [Hillier].


2.4.1 Basic characteristics of queueing system

The basic characteristics of queueing systems are [Kljajič]:

• The distribution of inter-arrival times of customers,
• The distribution of service times,
• The number of servers,
• The capacity of the queueing system,
• The discipline of the queue,
• The number of serving levels.

In the sequel, we are going to look at the characteristics of the queueing systems more closely.

The Basic Queueing Process

The basic process assumed by most queueing models is the following [Hillier]. Customers requiring service are generated over time by an input source. These customers enter the queueing system and join a queue. At certain times, a member of the queue is selected for service by some rule known as the queue discipline. The required service is then performed for the customer by the service mechanism, after which the customer leaves the queueing system. This process is depicted in Figure 17 [Hillier].

Figure 17: The Basic Queueing Process

1. Input source: One characteristic of the input source is its size. The size is the total number of customers that might require service from time to time, i.e., the total number of distinct potential customers. This population from which arrivals come is referred to as the calling


population. The size may be assumed to be either infinite or finite (so that the input source also is said to be either unlimited or limited). The finite case is more difficult analytically because the number of customers in the queueing system affects the number of potential customers outside the system at any time. However, the finite assumption must be made if the rate at which the input source generates new customers is significantly affected by the number of customers in the queueing system [Hillier].

2. Queue: The queue is where customers wait before being served. A queue is characterized by the maximum permissible number of customers that it can contain. Queues are called infinite or finite, according to whether this number is infinite or finite. The assumption of an infinite queue is the standard one for most queueing models, even for situations where there actually is a (relatively large) finite upper bound on the permissible number of customers, because dealing with such an upper bound would be a complicating factor in the analysis. However, for queueing systems where this upper bound is small enough that it actually would be reached with some frequency, it becomes necessary to assume a finite queue [Hillier].

3. Queue Discipline: The queue discipline refers to the order in which members of the queue are selected for service. For example, it may be first-come-first-served, random, according to some priority procedure, or some other order. First-come-first-served usually is assumed by queueing models, unless it is stated otherwise [Hillier].

4. Service Mechanism: The service mechanism consists of one or more service facilities, each of which contains one or more parallel service channels, called servers [Hillier]. If there is more than one service facility, the customer may receive service from a sequence of these (service channels in series). At a given facility, the customer enters one of the parallel service channels and is completely serviced by that server. A queueing model must specify the arrangement of the facilities and the number of servers (parallel channels) at each one. Most elementary models assume one service facility with either one server or a finite number of servers. The time elapsed from the commencement of service to its completion for a customer at a service facility is referred to as the service time (or holding time). A model of a particular queueing system must specify the probability distribution of service times for each server, although it is common to assume the same distribution for all servers. The service-time distribution that is most frequently assumed in practice is the exponential distribution. Other


important service-time distributions are the degenerate distribution (constant service time) and the Erlang (gamma) distribution [Hillier].

5. Servers in parallel and servers in series: This kind of server classification is the most usual in queueing theory [Winston]. Servers are in parallel if all servers provide the same type of service and a customer need only pass through one server to complete service. For example, the tellers in a bank are usually arranged in parallel; any customer need only be serviced by one teller, and any teller can perform the desired service. Servers are in series if a customer must pass through several servers before completing service. An assembly line is an example of a series queueing system.

6. Finite source models and the phenomenon of balking: These are two typical situations that can happen in reality. In the first case, the arrivals are drawn from a small population; for example, we have machines waiting to be repaired (a limited number of customers). In the second case, the arrival process depends on the number of customers already present in the system: the rate at which customers arrive at the facility decreases when the facility becomes too crowded. For example, if you see that the bank parking lot is full, you might pass by and come another day. If a customer arrives but fails to enter the system, we say that the customer has balked. Here we can distinguish between two different situations. In the first situation, the arrival rate gradually decreases, which means that the more customers are already present in the system, the more likely a new potential customer is to balk. In the second situation, the arrival rate is constant until the waiting space becomes full. At that moment, the arrival rate immediately drops to 0, and new potential customers will definitely go away un-served (an example of limited waiting space).

2.4.2 Queueing terminology and basic parameters

According to Kendall, each queueing system is described by six characteristics 1/2/3/4/5/6, which are [Winston]:

1. The nature of the arrival process,
2. The nature of the service times,
3. The number of parallel servers,
4. The queue discipline,
5. The maximum allowable number of customers in the system (including customers who are waiting and customers who are in service),
6. The size of the population from which customers are drawn.

Another version of the description of the queueing system can be given as follows [Kljajič]:

Notation = A / B / X / Y / Z                  (2.5)

A − the distribution of inter-arrival times
B − the distribution of service times
X − the number of parallel servers
Y − the limitations of service capacity
Z − the discipline of the queue                  (2.6)

Table 2 shows the meaning of the symbols in (2.6), which denote the basic characteristics of queueing systems [Kljajič].

Table 2: The basic characteristics of queueing systems


Example: Let us have the following queueing system:

Notation = M / D / 1 / 5 / PRI                  (2.7)

In this case, the inter-arrival times are exponentially distributed, the service time is deterministic, we have one server with a total capacity of five, and the service has a priority mode of serving.

An even simpler version of the description of the queueing system can be used at a given capacity, if the default is the FIFO discipline [Winston, Hillier]:

Notation = inter-arrival distribution / service times distribution / number of servers                  (2.8)

The following standard abbreviations are used (either for inter-arrival times or service times) [Winston]:

M: Poisson random process [Winston] with the exponential distribution of times,
G: some general distribution of times,
D: deterministic distribution of times,
Ek: Erlang distribution of times with shape parameter k.

If we are dealing, for instance, with the system M/M/r, the abbreviations mean the following: the input arrival process is the Poisson process, the service times are distributed exponentially, and the number of servers is r. On the other hand, if we are dealing, for instance, with the system M/G/1, it means that the input arrival process is the Poisson process, the service times are distributed with respect to some general distribution, and the number of servers is 1. Let us list some other typical queueing systems: G/M/r, M/D/r, G/G/r, etc.

The queueing systems can be treated as so-called birth-death systems [Winston, Dragan], where the customers in the system represent the observed population, the arrivals are the births, and the departures are the deaths. The birth-death processes are stochastic processes, which will be more precisely explained in chapter 3.
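As a small illustration (the helper below and all its names are hypothetical, not part of the original text), the short Kendall notation of (2.8) can be expanded into words with a few lines of Python:

```python
# Hypothetical helper: expand the short A/B/r Kendall notation into words.
SYMBOLS = {"M": "exponential (Markovian)", "G": "general", "D": "deterministic"}

def describe(notation):
    """Return a plain-English reading of a simple A/B/r Kendall string."""
    a, b, r = notation.split("/")
    arrivals = SYMBOLS.get(a, a)   # unknown symbols (e.g. Ek) pass through
    services = SYMBOLS.get(b, b)
    return (f"inter-arrival times: {arrivals}; "
            f"service times: {services}; {r} parallel server(s)")

print(describe("M/G/1"))
# inter-arrival times: exponential (Markovian); service times: general; 1 parallel server(s)
```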


The systems of type M/M/r have the so-called Markovian property (the already mentioned no-memory property), where the theory of Markov processes can be applied. On the contrary, the other types of queueing systems do not have this property and must be treated by use of special, more difficult approaches [Winston, Dragan].

The basic quantities, which deserve a lot of attention in the theory of queueing systems, are [Schaums, Dragan, Winston]:

• The number of the customers in the system and in the queue, and
• The customers' time of being in the system and the customers' waiting in the queue.

The basic parameters, which define the properties of a queueing channel, are [Kljajič, Dragan, Hillier, Winston]:

E(N) = L ................. average number of the customers in the system (including those which are already in the service process),
E(Nq) = Lq ............... average number of the customers in the queue,
E(Ns) = Ls ............... average number of the customers in the service process,
E(W) ..................... average customer's time of being in the system (sum of the waiting time in the queue and the time being in the service process),
E(Wq) .................... average customer's waiting time in the queue,
E(Ws) .................... average customer's time being in the service process.                  (2.9)

where »E« denotes the expectation.


Let us define some other quantities, which are also important in the queueing theory [Kljajič, Hillier]:

State of the system ...... number of the customers in the system,
Queue length ............. number of the customers waiting for service to begin,
N(t) ..................... number of the customers in the system at time t,
pn(t) .................... probability of exactly n customers in the queueing system at time t,
r ........................ number of the servers in the queueing system,
λn ....................... mean arrival rate (expected number of arrivals per time unit) of new customers, when n customers are already in the system,
µn ....................... mean service rate for the overall system (expected number of customers completing service per time unit), when n customers are already in the system.                  (2.10)

When λn is constant for all n, this constant is denoted by λ. Similarly, when the mean service rate is constant for all n, this constant is denoted by µ [Hillier]. The quantity λ can also be treated as the speed of arrivals of transactions into the system, while the quantity µ can also be treated as the speed of the service, for example in a one-channel system the speed of the single server [Kljajič].

In the case of a single channel system, we can introduce the following utilization factor [Hillier]:

ρ = λ / µ                  (2.11)

which represents the fraction of the system’s service capacity (µ) that is being utilized on the average by the arriving customers (λ) [Hillier].


In the case of a multiple channel system, we can introduce the following utilization factor [Hillier]:

ρ = λ / (r·µ)                  (2.12)

Certain notation is also required to describe steady-state results [Hillier]. When a queueing system has recently begun operation, the state of the system (number of customers in the system) will be greatly affected by the initial state and by the time that has since elapsed. The system is then said to be in a transient condition. However, after sufficient time has elapsed, the state of the system becomes essentially independent of the initial state and the elapsed time. The system has now essentially reached a steady-state condition, where the probability distribution of the state of the system remains the same (the steady-state or stationary distribution) over time. Queueing theory has tended to focus largely on the steady-state condition, partially because the transient case is more difficult analytically [Hillier].

Relationships between L, Lq, E(W), E(Wq):

It has been proved that in a steady state the so-called Little's formula can be applied [Hillier]:

L = λ·E(W)                  (2.13)

Also, we can introduce the following expression:

Lq = λ·E(Wq)                  (2.14)

and:

E(W) = E(Wq) + 1/µ                  (2.15)

These relationships are extremely important because they enable all four of the fundamental quantities (L, Lq, W, Wq) to be immediately determined as soon as one of them is found analytically. This situation is fortunate, because some of these quantities often are much easier to find than others when a queueing model is solved from basic principles [Hillier].
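These relationships can be illustrated with a small simulation (a sketch under simplifying assumptions, not part of the original text). An M/M/1 FIFO queue is simulated with the recursion "departure = max(arrival, previous departure) + service time", and the estimates of L and E(W) are compared with the known M/M/1 results L = ρ/(1 − ρ) and E(W) = 1/(µ − λ):

```python
import random

def simulate_mm1(lam, mu, n=200_000, seed=42):
    """Event-driven simulation of an M/M/1 FIFO queue.

    Departure recursion: d_i = max(a_i, d_{i-1}) + s_i.
    Returns (L, W): time-average number in system and mean time in system.
    """
    rng = random.Random(seed)
    arrival = 0.0
    departure = 0.0
    total_w = 0.0
    for _ in range(n):
        arrival += rng.expovariate(lam)          # exponential inter-arrival time
        start = max(arrival, departure)          # FIFO: wait until the server is free
        departure = start + rng.expovariate(mu)  # exponential service time
        total_w += departure - arrival           # sojourn time W_i of this customer
    T = departure                # total observed time span
    W = total_w / n              # mean time in system, E(W)
    L = total_w / T              # the integral of N(t) dt equals the sum of the W_i
    return L, W

L, W = simulate_mm1(lam=1.0, mu=2.0)
# M/M/1 theory with rho = 0.5: E(W) = 1/(mu - lam) = 1, L = rho/(1 - rho) = 1
```

Note that L = (total sojourn time)/T and W = (total sojourn time)/n, so L = (n/T)·W, which is exactly Little's formula (2.13) with the empirical arrival rate n/T in place of λ.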


2.4.3 Types of queueing systems

In this chapter, let us shortly introduce some typical queueing systems [Kljajič]. Figure 18 shows a simple queueing system with one queue and a single server.

Figure 18: The simple queueing system with one queue and single server

Figure 19 shows a simple queueing system with one queue and multiple parallel servers.

Figure 19: The simple queueing system with one queue and multiple parallel servers

Figure 20 shows a closed queueing system for the maintenance of machinery.

Figure 20: The closed queueing system of maintenance of machinery


Figure 21 shows a more complex queueing system with many queues and many channels, where priority logic is also involved.

Figure 21: More complex queueing system

2.5 Some probability basics of simulation

As mentioned before, the behavior of complex organization systems must be described by means of stochastic functions. Within this framework, the following situations are possible [Kljajič]:

1. The knowledge about the system is incomplete, so the system must be described randomly with an appropriate distribution function,
2. The description of the system is very complicated, thus it must be approximated by use of a specific stochastic function,
3. The behavior of the system has a significantly stochastic character.

Since computer simulation represents experimenting on the computer, random events must be generated for the inputs and outputs of the system model. The variable that represents the outcome of a random event is called a random (stochastic) variable. For the events described by a random variable, only the set of possible values and the probability distribution are known. Based on these two characteristics, the mean (expected) value and the variance can also be found. If the set of values and the probability distribution are


discrete, then we are talking about discrete random variables; otherwise, we are talking about continuous random variables.

Let us have some continuous random variable X, which has the probability density function f(x). The probability that the random variable X takes some value in the interval a ≤ X ≤ b can be derived as follows (c.f. figure 22):

P(a ≤ X ≤ b) = ∫_a^b f(x) dx                  (2.16)

where the following expression is always valid:

∫_{−∞}^{∞} f(x) dx = 1                  (2.17)

Figure 22: Illustration of probability P(a ≤ X ≤ b)
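Expression (2.16) can be evaluated numerically. The sketch below (an illustration with an arbitrarily chosen exponential density, not part of the original text) approximates the integral with the midpoint rule and compares it with the closed-form value e^(−λa) − e^(−λb):

```python
import math

def prob_between(f, a, b, n=10_000):
    """Approximate P(a <= X <= b), i.e. the integral of the density f
    over [a, b], using the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lam = 0.5
density = lambda t: lam * math.exp(-lam * t)          # exponential density
p = prob_between(density, 1.0, 3.0)
exact = math.exp(-lam * 1.0) - math.exp(-lam * 3.0)   # about 0.3834
```

The numerical and the analytical values agree to many decimal places.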

Particularly important for the description of the random variable X is the so-called cumulative distribution function, which can be expressed as follows (c.f. figure 23):

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt                  (2.18)


Figure 23: Illustration of cumulative function

We can give some additional properties, which are valid for the random variable X [Dragan]:

1. Since F(x) is monotonically increasing, it follows: f(x) = dF/dx ≥ 0                  (2.19)

2. F(∞) = P(X ≤ ∞) = ∫_{−∞}^{∞} f(t) dt = 1

3. F(b) − F(a) = P(a ≤ X ≤ b)

The expectation of the random variable can be defined as:

E(X) = ∫_{−∞}^{∞} x·f(x) dx                  (2.20)

and the variance can be defined as [Dragan]:

VAR(X) = E(X²) − [E(X)]² = ∫_{−∞}^{∞} x²·f(x) dx − ( ∫_{−∞}^{∞} x·f(x) dx )²                  (2.21)
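Formulas (2.20) and (2.21) can likewise be checked numerically. The sketch below (an illustration, again with an exponential density, for which E(X) = 1/λ and VAR(X) = 1/λ² are known) truncates the infinite integration range at a large finite bound:

```python
import math

def moment(f, k, upper=200.0, n=200_000):
    """k-th moment of a nonnegative random variable with density f,
    approximated by the midpoint rule on the truncated range [0, upper]."""
    h = upper / n
    return sum(((i + 0.5) * h) ** k * f((i + 0.5) * h) for i in range(n)) * h

lam = 0.5
density = lambda x: lam * math.exp(-lam * x)
m1 = moment(density, 1)               # E(X)   -> 1/lam    = 2
var = moment(density, 2) - m1 ** 2    # VAR(X) -> 1/lam**2 = 4
```

Truncating at upper = 200 is harmless here, since the exponential tail beyond that point is vanishingly small.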


In the case of a discrete random variable X, where the set of values is x_1, x_2, x_3, ..., x_n and the corresponding probabilities are p_1, p_2, p_3, ..., p_n, with the property Σ_{i=1}^{n} p_i = 1, the following expressions can be given:

F(x) = P(X ≤ x) = Σ_{x_i ≤ x} p(x_i) = Σ_{x_i ≤ x} p_i

E(X) = Σ_{i=1}^{n} x_i·p_i

VAR(X) = E(X²) − [E(X)]² = Σ_{i=1}^{n} x_i²·p_i − ( Σ_{i=1}^{n} x_i·p_i )²                  (2.22)
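A concrete discrete example (an illustration, not part of the original text) is a fair six-sided die, for which the formulas in (2.22) give E(X) = 3.5 and VAR(X) = 35/12:

```python
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6                                       # fair die: the p_i sum to 1

mean = sum(x * p for x, p in zip(values, probs))          # E(X)    = 21/6 = 3.5
second = sum(x ** 2 * p for x, p in zip(values, probs))   # E(X**2) = 91/6
var = second - mean ** 2                                  # VAR(X)  = 35/12
cdf_4 = sum(p for x, p in zip(values, probs) if x <= 4)   # F(4)    = 4/6
```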

2.6 Random generators

For the purpose of computer simulation, a so-called random generator is used [Kljajič, Winston]. It serves for generating random numbers, which represent the time instants of the arrivals of customers into the system, and for the generation of the times related to the service with respect to some probability law. One of the typical random generators is the so-called roulette random generator, where behavior similar to that of a roulette wheel is applied. This roulette analogy is also the reason why the famous Monte Carlo simulation method got its name [Winston]. The procedure of segmenting and using a roulette wheel is equivalent to generating integer random numbers in a certain interval, for example between 00 and 99 [Winston]. This follows from the fact that each random number in the sequence has an equal probability of showing up (for example, in the case of the interval between 0 and 99, the probability is 0.01). In addition, each random number is independent of the numbers that precede and follow it. The definition of a random number generator is [Hillier]:

A random number generator is an algorithm that produces sequences of numbers that follow a specified probability distribution and possess the appearance of randomness.


Random numbers usually come from some form of uniform random observations, where all possible numbers are equally likely. When we are interested in some other probability distribution, we shall refer to random observations from that distribution [Hillier]. Uniform random numbers can be divided into two main categories [Hillier]:

• A random integer number is a random observation from a discretized uniform distribution over some range,
• A uniform random number is a random observation from a continuous uniform distribution over some interval [a, b].

If a random number is defined as an independent random sample drawn from a continuous uniform distribution, the probability density function is given by the expression [Winston]:

f(x) = 1/(b − a)  for a ≤ x ≤ b,   0 otherwise                             (2.23)

When a and b are not specified, they are assumed to be a = 0, b = 1.

Pseudo-random generators: There are several methods for the numeric generation of random numbers, which are usually based on recursive algorithms. From an initial number (called the "seed" [Kljajič]), the next number is generated. Since the numbers are required to be uniformly distributed over some interval and mutually independent, we are talking about a deterministic procedure for generating a stochastic process. Therefore, the corresponding numerical generators are called pseudo-random, since they are not purely random. Typical generators for generating random numbers are [Kljajič]:

• the mid-square generator,
• the mid-product generator,
• the logistic equation,
• the congruential generator, etc.


Most random number generators use some form of a congruential relationship [Winston]. Examples of such generators include the linear congruential generator, the multiplicative generator, and the mixed generator. The linear congruential generator is by far the most widely used [Winston]. In fact, most built-in random number functions on computer systems use this generator. With this method, we produce a sequence of integers x_1, x_2, x_3, ..., x_n between 0 and m − 1 according to the following recursive relation [Winston]:

x_{i+1} = (a⋅x_i + c) modulo m,    i = 0, 1, 2, ...                        (2.24)

The initial value x_0 is called the seed, a is the constant multiplier, c is the increment, and m is the modulus. These four variables are called the parameters of the generator. Using this relation, the value of x_{i+1} equals the remainder from the division of (a⋅x_i + c) by m. A random number between 0 and 1 is then generated using the equation [Winston]:

R_i = x_i / m,    i = 1, 2, ...                                            (2.25)
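The recursion (2.24)-(2.25) can be sketched in a few lines; the tiny parameters below are our own choice, picked only so the arithmetic can be followed by hand, and are far too small for any real simulation use.

```python
def lcg(seed, a, c, m, n):
    """Linear congruential generator: x_{i+1} = (a*x_i + c) mod m, cf. (2.24)."""
    xs, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        xs.append(x)
    return xs

def uniforms(xs, m):
    """Map the integers to [0, 1) via R_i = x_i / m, cf. (2.25)."""
    return [x / m for x in xs]

# Illustrative parameters only (not statistically sound):
xs = lcg(seed=7, a=5, c=3, m=16, n=6)
rs = uniforms(xs, 16)
# From x_0 = 7: x_1 = (5*7 + 3) mod 16 = 6, x_2 = (5*6 + 3) mod 16 = 1, ...
```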

The generator introduced by (2.24) and (2.25) is the so-called mixed generator. It becomes an additive generator if a = 1, and a multiplicative generator if c = 0. By carefully selecting the values of a, c, m, x_0, the pseudo-random numbers can be made to meet all the statistical properties of true random numbers [Winston]. In addition to the statistical properties, random number generators must have several other important characteristics if they are to be used efficiently within computer simulations [Winston]: (1) the routine must be fast; (2) the routine should not require a lot of core storage; (3) the random numbers should be replicable; and (4) the routine should have a sufficiently long cycle, that is, we should be able to generate a long sequence without repetition of the random numbers. Most programming languages have built-in library functions that provide random (or pseudo-random) numbers directly [Winston]. Therefore, most users need only know the library function for a particular system. In some systems, a user may have to specify a value for the seed, x_0, but it is unlikely that a user would have to develop or design a random number generator.

In the following two chapters, Theory of stochastic processes and Introduction to basic queueing models, we are going to give some theoretical basics, which can also be very useful for a deeper understanding of discrete-event simulation of queueing systems.

3 THEORY OF STOCHASTIC PROCESSES

The theory of stochastic processes is generally defined as the "dynamic" part of probability theory, in which one studies a collection of random variables (called a stochastic process) from the point of view of their interdependence and limiting behavior [Parzen]. One is observing a stochastic process whenever one examines a process developing in time in a manner controlled by probabilistic laws. Examples of stochastic processes are provided by the path of a particle in Brownian motion, the growth of a population such as a bacterial colony, the fluctuating number of particles emitted by a radioactive source, and the fluctuating output of gasoline in successive runs of an oil-refining mechanism. Stochastic or random processes often occur in nature. They also occur in medicine, biology, physics, oceanography, economics, and psychology, to name only a few scientific disciplines. If a scientist is to take account of the probabilistic nature of the phenomena with which he is dealing, he should undoubtedly make use of the theory of stochastic processes [Parzen].

3.1 Definition of stochastic processes and basic properties

Stochastic processes are processes which change in dependence of time and/or space with respect to probability laws [Hillier, Winston, Dragan]. Their definition is as follows [Hillier]: a random (stochastic) process is a family (sequence) of random variables:

{ X_t = X(t), t ∈ T }                                                      (3.1)

where the time t is the parameter, which in general takes its values from the set of real numbers.

In other words, this means that a stochastic process is defined to be an indexed collection of random variables { X_t = X(t), t ∈ T }, where the index t runs through a given set T [Hillier]. Often T is taken to be the set of nonnegative integers, and X_t represents a measurable characteristic of interest at time t. For example, X_t might represent the inventory level of a particular product at the end of week t. So stochastic processes are of interest for describing the behavior of a system operating over some period of time [Hillier]. Figure 24 represents two typical examples of stochastic processes: a) random behavior of the temperature as the hours progress, b) stock value behavior as the days progress.

Figure 24: Illustration of stochastic process: a) Random behavior of the temperature in dependence of hours time progress, b) Stock value behavior in dependence of days time progress.


The stochastic processes are usually separated into two main categories [Winston]:
• Discrete stochastic processes (X_t is a function of discrete time moments),
• Continuous stochastic processes (X_t is a function of continuous time moments).

A discrete-time stochastic process is simply a stochastic process in which the state of the system can be viewed at discrete instants in time [Winston]. On the contrary, a continuous-time stochastic process is a stochastic process in which the state of the system can be viewed at any time. For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process. Since the price of a share of stock can be observed at any time (not just the beginning of each trading day), it may also be viewed as a continuous-time stochastic process. Viewing the price of a share of stock as a continuous-time stochastic process has led to many important results in the theory of finance, including the famous Black–Scholes option pricing formula [Winston]. Let us list some more stochastic processes which are typical in real practice [Hudoklin, Dragan]:
• Markov processes,
• Random walk processes,
• Poisson processes,
• Birth-death processes,
• Epidemic processes,
• Diffusion processes, etc.

With respect to the nature of time and the nature of the state space, stochastic processes can be divided into the following categories [Hudoklin, Dragan]:
• Processes with discrete time and discrete state space,
• Processes with discrete time and continuous state space,
• Processes with continuous time and discrete state space,
• Processes with continuous time and continuous state space.


3.2 Markov processes

Markov processes represent an important group of stochastic processes, since they can describe many real situations. They got their name after the scientist Andrey Markov, who introduced the so-called Markov chains in the year 1907 [Hudoklin]. A certain random process is a Markov process if the following expression applies (Markovian property) [Winston, Hillier]:

P( X_{t_n} | X_{t_1}, X_{t_2}, ..., X_{t_{n−1}} ) = P( X_{t_n} | X_{t_{n−1}} )      (3.2)

It means that the conditional probability of the random variable X taking some value at the time t_n depends only on the last state of the process X_{t_{n−1}}, and does not depend on the process states at the previous times t_1, ..., t_{n−2}. The Markov process thus has no memory: if we want to predict the process behavior in the future, we must only get familiar with the present and not with the past.

In other words, the Markovian property implies that the conditional probability of any future "event," given any past "event" and the present state, is independent of the past event and depends only upon the present state.

The Markov processes can be divided into two main categories [Hudoklin, Winston]:
• Markov processes with discrete states in discrete time (Markov chains), and
• Markov processes with discrete states in continuous time.

In the sequel, let us briefly introduce the Markov chains.

3.3 Markov chains

Markov chains are Markov processes with discrete states in discrete time, where the time is defined as:

t_{n+1} → n + 1,  t_n → n,  ...,  t_1 → 1,  t_0 → 0                        (3.3)


Random variables can then be defined in the form { X_n, n ≥ 0 }, where some space of possible states S = {1, 2, ..., m} is given, in which the number of states can be finite (finite chains) or countably infinite (infinite chains). If the relation X_n = i holds, it means that the given Markov chain is located in state i at time n. A discrete Markov chain can be characterized by the following expression [Winston, Hillier]:

P( X_{n+1} = j | X_0 = i_0, X_1 = i_1, ..., X_n = i ) = P( X_{n+1} = j | X_n = i )      (3.4)

where the conditional probabilities:

P( X_{n+1} = j | X_n = i ) = p_ij = P( i → j )                             (3.5)

are called (one-step) transition probabilities. The probability p_ij is the probability that, given the system is in state i at time n, it will be in state j at time n + 1. If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred. Equation (3.5) implies that the probability law relating the next period's state to the current state does not change (or remains stationary) over time. For this reason, the relation (3.5) is often called the Stationary Assumption. Any Markov chain that satisfies (3.5) is called a stationary Markov chain [Winston].


In most applications, the transition probabilities are displayed in an (N+1) × (N+1) dimensional transition probability matrix P (one row and column per state). The transition probability matrix P may be written as [Winston, Hillier]:

               0      1    ...    N
        0   | p_00   p_01  ...  p_0N |
        1   | p_10   p_11  ...  p_1N |
P = {p_ij} = .   |  .      .    ...   .   |                                (3.6)
        .   |  .      .    ...   .   |
        N   | p_N0   p_N1  ...  p_NN |

where {0, 1, 2, ..., N} are the discrete states of the process.

For every transition matrix the following properties can be expressed [Winston, Hillier]:
1. It is a square stochastic matrix with non-negative elements,
2. The sum of the probabilities in every row is equal to 1.

To get more familiar with the use of Markov chains, let us show one example, which is called the Gambler's Ruin [Winston].

Example: At time 0, I have 2 EUROS. At times 1, 2, ..., I play a game in which I bet 1 EURO. With probability p, I win the game, and with probability 1 − p, I lose the game. My goal is to increase my capital to 4 EUROS, and as soon as I do, the game is over. The game is also over if my capital is reduced to 0 EUROS. If we define X_n to be my capital position after the time n game (if any) is played, then X_0, X_1, ..., X_n may be viewed as a discrete-time stochastic process. Note that X_0 = 2 is a known constant, but X_1 and later X_n are random. For example, with probability p, X_1 = 3, and with probability 1 − p, X_1 = 1. Note that if X_n = 4, then X_{n+1} and all later X_n will also equal 4. Similarly, if X_n = 0, then X_{n+1} and all later X_n will also equal 0. For obvious reasons, this type of situation is called a gambler's ruin problem. Find the transition matrix and state-transition diagram!


Since the amount of money I have after n + 1 plays of the game depends on the past history of the game only through the amount of money I have after n plays, we definitely have a Markov chain. Since the rules of the game do not change over time, we also have a stationary Markov chain. The transition matrix is as follows (state i means that we have i EUROS):

           0      1     2     3     4
    0   |  1      0     0     0     0  |
    1   | 1−p     0     p     0     0  |
P = 2   |  0     1−p    0     p     0  |                                   (3.7)
    3   |  0      0    1−p    0     p  |
    4   |  0      0     0     0     1  |

where {0, 1, 2, 3, 4} is the state space, observed in EUROS.

If the state is 0 or 4 EUROS, I do not play the game anymore, so the state cannot change; hence, p_00 = p_44 = 1. For all other states, we know that with probability p, the next period's state will exceed the current state by 1, and with probability 1 − p, the next period's state will be 1 less than the current state. The corresponding state-transition diagram is shown in Figure 25.

Figure 25: The state-transition diagram for the Gambler's Ruin problem
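The chain in (3.7) can also be simulated; the sketch below (with p = 0.4 as our own arbitrary choice) estimates the ruin probability by repeated play, which the classical gambler's ruin formula puts at about 0.69 for these parameters.

```python
import random

# Transition matrix (3.7) with p = 0.4; states are the capital 0..4 EUROS.
p = 0.4
P = [
    [1.0,   0.0,   0.0,   0.0,   0.0],
    [1 - p, 0.0,   p,     0.0,   0.0],
    [0.0,   1 - p, 0.0,   p,     0.0],
    [0.0,   0.0,   1 - p, 0.0,   p],
    [0.0,   0.0,   0.0,   0.0,   1.0],
]

def play(rng, start=2):
    """Run one game from `start` EUROS until absorption in state 0 or 4."""
    state = start
    while state not in (0, 4):
        r, cum = rng.random(), 0.0
        for j, pij in enumerate(P[state]):   # sample the next state from row `state`
            cum += pij
            if r < cum:
                state = j
                break
    return state

row_sums = [sum(row) for row in P]           # every row of (3.7) must sum to 1
rng = random.Random(1)
ruined = sum(play(rng) == 0 for _ in range(20000)) / 20000
```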


3.4 Poisson processes

Poisson processes can be ranked into the category of Markov processes which have discrete states and continuous time [Kay]. In contrast to the Markov chains, the transitions between states are observed over a short time interval (t, t + ∆t).

The Poisson process is one of the simplest Markov processes with discrete states and continuous time. Since the theory of this kind of processes is much harder to understand, the Poisson process represents a welcome asset for understanding more complex stochastic processes [Hudoklin, Kay]. It can be used as a model of a great number of real stochastic processes, such as [Hudoklin]:

• the decay of radioactive nuclei,
• thermal emission of electrons,
• machinery repair and maintenance,
• inventory resupply,
• road accidents,
• queueing theory,
• spatial distribution of animals or plants, etc.

So the Poisson process is a random process that is very useful for modeling events occurring in time [Kay]. A typical realization is shown in Figure 26 in which the events occur randomly in time. The random process that counts the number of events in the time interval [0, t] and which is denoted by N ( t ) , is called the Poisson counting random process. It is clear from Figure 26 that the two random processes are equivalent descriptions of the same random phenomenon. Note that N ( t ) is a continuous-time/discrete-valued random process. Also, because N ( t ) counts the number of events from the initial time t = 0 up to and including the time t, the value of N ( t ) at a jump is N ( t + ) . Thus, N ( t ) is right-continuous [Kay].

In the sequel, we are going to derive some important results for a Poisson process. These results will represent a good basis for a better understanding of the findings already expressed in chapters 2.2 and 2.3, when the inter-arrival times distribution and the distribution of the number of arrivals were introduced.

Figure 26: The typical realization of the Poisson counting process ( N ( t ) counts the number of events, time points t1 , t2 ,... represent the formation of new events, while z1 , z2 ,... are the times between these events).

3.4.1 Derivation of distribution of the number of events

First, we give the following definition of the Poisson process [Kay, Winston, Hudoklin]:

Suppose we are dealing with a sequence of events which occur individually and completely at random. Let us denote by N(t, t + ∆t) the number of events in the time interval (t, t + ∆t], and by N(t) the number of events in the time interval (0, t]. The Poisson process is then the family { N(t) }, which for a certain positive constant λ and ∆t → 0 satisfies the following conditions [Kay, Winston, Hudoklin]:


a) P[ N(t, t + ∆t) = 0 ] = 1 − λ⋅∆t + o(∆t)   ..... no event in (t, t + ∆t]
b) P[ N(t, t + ∆t) = 1 ] = λ⋅∆t + o(∆t)       ..... one event in (t, t + ∆t]          (3.8)
c) P[ N(t, t + ∆t) > 1 ] = o(∆t)              ..... more than one event in (t, t + ∆t]
d) The number N(t, t + ∆t) is completely independent of the number of events in the interval (0, t] (Markovian property)

where o(∆t) is some function which tends to 0 faster than ∆t, and λ has the dimension number of events per time unit. Thus λ represents the frequency of events and is the parameter of the process.

Let us now denote by N(t + ∆t) the number of events in the time interval (0, t + ∆t]. The probability p_i(t + ∆t) = P[ N(t + ∆t) = i ], i.e. the probability that i events happened until the time t + ∆t, can be expressed in the following way [Kay, Hudoklin]:

P[ N(t + ∆t) = i ] = P[ { N(t) = i and N(t, t + ∆t) = 0 }      or
                        { N(t) = i − 1 and N(t, t + ∆t) = 1 }  or
                        { N(t) = i − 2 and N(t, t + ∆t) = 2 }  or
                        { N(t) = i − 3 and N(t, t + ∆t) = 3 }  or          (3.9)
                        ...................................... or
                        { N(t) = 0 and N(t, t + ∆t) = i } ]

This means the probability that i events happened until the time t and 0 events happened in the interval (t, t + ∆t], or the probability that i − 1 events happened until the time t and 1 event happened in the interval (t, t + ∆t], or the probability that i − 2 events happened until the time t and 2 events happened in the interval (t, t + ∆t], etc.


The expression (3.9) can also be written in the form:

P[ N(t + ∆t) = i ] =
   P[ N(t) = i ] ⋅ P[ N(t, t + ∆t) = 0 | N(t) = i ] +
   P[ N(t) = i − 1 ] ⋅ P[ N(t, t + ∆t) = 1 | N(t) = i − 1 ] +
   P[ N(t) = i − 2 ] ⋅ P[ N(t, t + ∆t) = 2 | N(t) = i − 2 ] +              (3.10)
   ............. +
   P[ N(t) = 0 ] ⋅ P[ N(t, t + ∆t) = i | N(t) = 0 ]

Since we are dealing with independent events, the conditional probabilities can be dropped, thus we have:

P[ N(t + ∆t) = i ] =
   P[ N(t) = i ] ⋅ P[ N(t, t + ∆t) = 0 ] +
   P[ N(t) = i − 1 ] ⋅ P[ N(t, t + ∆t) = 1 ] +
   P[ N(t) = i − 2 ] ⋅ P[ N(t, t + ∆t) = 2 ] +                             (3.11)
   ............. +
   P[ N(t) = 0 ] ⋅ P[ N(t, t + ∆t) = i ]

Based on the expressions in (3.8), we can now write:

P[ N(t + ∆t) = i ] =
   P[ N(t) = i ] ⋅ [ 1 − λ⋅∆t + o(∆t) ] +
   P[ N(t) = i − 1 ] ⋅ [ λ⋅∆t + o(∆t) ] +
   P[ N(t) = i − 2 ] ⋅ o(∆t) +                                             (3.12)
   ............. +
   P[ N(t) = 0 ] ⋅ o(∆t)


It can be shown that the terms in which the probabilities are multiplied by o(∆t) vanish when ∆t → 0. Thus we have:

P[ N(t + ∆t) = i ] = P[ N(t) = i ] ⋅ [ 1 − λ⋅∆t + o(∆t) ] + P[ N(t) = i − 1 ] ⋅ [ λ⋅∆t + o(∆t) ]      (3.13)

and consequently:

P[ N(t + ∆t) = i ] = P[ N(t) = i ] ⋅ (1 − λ⋅∆t) + P[ N(t) = i − 1 ] ⋅ λ⋅∆t +
                     { P[ N(t) = i ] + P[ N(t) = i − 1 ] } ⋅ o(∆t) =       (3.14)
                   = P[ N(t) = i ] ⋅ (1 − λ⋅∆t) + P[ N(t) = i − 1 ] ⋅ λ⋅∆t + o(∆t)

If we now (for the sake of simplicity) apply the relations p_i(t + ∆t) = P[ N(t + ∆t) = i ] and p_i(t) = P[ N(t) = i ], we have:

p_i(t + ∆t) = p_i(t) − p_i(t)⋅λ⋅∆t + p_{i−1}(t)⋅λ⋅∆t + o(∆t)

p_i(t + ∆t) − p_i(t) = −p_i(t)⋅λ⋅∆t + p_{i−1}(t)⋅λ⋅∆t + o(∆t)         | : ∆t

[ p_i(t + ∆t) − p_i(t) ] / ∆t = −p_i(t)⋅λ + p_{i−1}(t)⋅λ + o(∆t)/∆t   | lim ∆t → 0      (3.15)

dp_i(t)/dt = −p_i(t)⋅λ + p_{i−1}(t)⋅λ          (since o(∆t)/∆t → 0)


Thus, we get the following system of differential equations [Hudoklin, Kay]:

dp_i(t)/dt = −λ⋅p_i(t) + λ⋅p_{i−1}(t),    i = 0, 1, 2, ...                 (3.16)

where p_i(t) = P[ N(t) = i ]. The system can also be written in the form:

dp_0(t)/dt = −λ⋅p_0(t) + λ⋅p_{−1}(t) = −λ⋅p_0(t)      (since p_{−1}(t) = 0)
dp_1(t)/dt = −λ⋅p_1(t) + λ⋅p_0(t)                                          (3.17)
dp_2(t)/dt = −λ⋅p_2(t) + λ⋅p_1(t)
..........................
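Before solving the system analytically, it can be checked numerically; the sketch below integrates (3.16)-(3.17) with a simple forward Euler scheme (step size and truncation index are our own choices) and compares the result with the closed form (λt)^i/i!⋅e^(−λt) derived below as (3.21).

```python
import math

def poisson_odes(lam, t_end, imax=20, dt=5e-5):
    """Integrate dp_i/dt = -lam*p_i + lam*p_{i-1} (3.16) by forward Euler."""
    p = [0.0] * (imax + 1)
    p[0] = 1.0                                # p_0(0) = 1: no events at time 0
    for _ in range(int(t_end / dt)):
        prev = 0.0                            # plays the role of p_{-1}(t) = 0
        new = []
        for pi in p:
            new.append(pi + dt * (-lam * pi + lam * prev))
            prev = pi
        p = new
    return p

lam, t = 2.0, 1.5
p = poisson_odes(lam, t)
# Closed-form solution (3.21): p_i(t) = (lam*t)**i / i! * exp(-lam*t)
exact = [(lam * t) ** i / math.factorial(i) * math.exp(-lam * t) for i in range(6)]
```

Since each p_i depends only on p_{i−1}, truncating the system at imax introduces no error for the states i ≤ imax.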

In the sequel, the corresponding system of differential equations must be solved, for example by use of the Laplace transformation. For this purpose, we can first try to solve the first equation:

dp_0(t)/dt = −λ⋅p_0(t)

   L:        s⋅p_0(s) − p_0(0) = −λ⋅p_0(s)

             p_0(s) = p_0(0) / (s + λ) = 1 / (s + λ)                       (3.18)

   L⁻¹:      p_0(t) = e^(−λ⋅t)

where "L" denotes the Laplace operator and p_0(0) = 1, since it is certain that no event happened until the time 0.


The result from (3.18) can now be put into the second differential equation, which can also be solved by use of the Laplace transformation. The solution of the second differential equation is [Dragan]:

p_1(t) = λ⋅t⋅e^(−λ⋅t)                                                      (3.19)

Similarly, we can solve the third, fourth, and all other differential equations. The solutions are [Dragan]:

p_2(t) = (λ²/2)⋅t²⋅e^(−λ⋅t)

p_3(t) = (λ³/(2⋅3))⋅t³⋅e^(−λ⋅t)                                            (3.20)

p_4(t) = (λ⁴/(2⋅3⋅4))⋅t⁴⋅e^(−λ⋅t)

etc.

If we now combine the results from (3.18), (3.19) and (3.20) all together, we get the following expression [Dragan, Hudoklin, Kay]:

p_i(t) = (λ^i / i!)⋅t^i⋅e^(−λ⋅t)

and:                                                                       (3.21)

p_i(t) = P[ N(t) = i ] = ((λ⋅t)^i / i!)⋅e^(−λ⋅t),    i = 0, 1, 2, ...

The result (3.21) can now be compared with the result (2.3). Obviously, both results are the same, which proves that the distribution of the number of arrivals is the Poisson distribution and is actually governed by the Poisson stochastic process.
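The defining conditions (3.8) also suggest a direct simulation check: split (0, t] into small slices of length ∆t and let one event occur in each slice with probability λ⋅∆t. The sketch below (slice length, run count, and seed are our own choices) compares the simulated counts with (3.21).

```python
import math, random

def simulate_counts(lam, t, dt, runs, rng):
    """Approximate a Poisson process by one Bernoulli trial per time slice dt,
    following conditions (3.8): one event per slice with probability lam*dt."""
    slices = int(t / dt)
    return [sum(1 for _ in range(slices) if rng.random() < lam * dt)
            for _ in range(runs)]

lam, t = 3.0, 1.0
counts = simulate_counts(lam, t, dt=0.002, runs=5000, rng=random.Random(42))
mean = sum(counts) / len(counts)           # should be close to lam*t = 3
p0_hat = counts.count(0) / len(counts)     # should be close to (3.21) with i = 0
p0_exact = math.exp(-lam * t)
```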


3.4.2 Derivation of distribution of the times between events

Let us go back to the typical realization of a Poisson random process as it was shown in Figure 26. The times t1, t2, ... can be called the arrival times, while the time intervals z1, z2, ... can be called the interarrival times. The interarrival times shown in Figure 26 are realizations of the continuous random variables Z1, Z2, ... We wish to be able to compute probabilities for a finite set, say Z1, Z2, ..., Zk. To begin, we first determine the probability density function f_Z1(z1). Note that Z1 = T1, where T1 is the random variable denoting the first arrival. By the definition of the first arrival, we can conclude (cf. Figure 26): if Z1 > ξ1, then N(ξ1) = 0 (the first arrival has not yet occurred). So we can introduce the following expression [Kay]:

P( Z1 > ξ1 ) = P[ N(ξ1) = 0 ] = ((λ⋅ξ1)⁰ / 0!)⋅e^(−λ⋅ξ1) = e^(−λ⋅ξ1)       (3.22)

Based on the expression (3.22), the probability density function f_Z1(z1) can be derived [Kay]:

f_Z1(z1) = dF_Z1(z1)/dz1 = d/dz1 [ P(Z1 ≤ z1) ] = d/dz1 [ 1 − P(Z1 > z1) ] =
         = d/dz1 [ 1 − e^(−λ⋅z1) ] = λ⋅e^(−λ⋅z1)                           (3.23)

Thus, for the probability density function of the time of the first event ("arrival"), which is also the interarrival time between 0 and the occurrence of the first event, we can write [Kay]:

f_Z1(z1) = λ⋅e^(−λ⋅z1)  for z1 ≥ 0,   0  for z1 < 0                        (3.24)

The result (3.24) can now be compared with the result (2.1). Obviously, both results are the same, which proves that the distribution of the inter-arrival time in the interval (0, t], between the time origin and the first following event (arrival), follows the exponential law.
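This exponential law also shows how interarrival times can be produced from the uniform random numbers of section 2.6: inverting F(z) = 1 − e^(−λz) gives z = −ln(1 − u)/λ. A minimal sketch (sample size and seed are our own choices):

```python
import math, random

def exp_interarrival(lam, rng):
    """Sample Z ~ Exp(lam) by the inverse transform of F(z) = 1 - exp(-lam*z)."""
    u = rng.random()                   # uniform random number on [0, 1)
    return -math.log(1.0 - u) / lam    # F^{-1}(u)

lam, rng = 2.0, random.Random(7)
zs = [exp_interarrival(lam, rng) for _ in range(100000)]
mean = sum(zs) / len(zs)               # should approach E(Z) = 1/lam = 0.5
```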


3.5 Birth processes

Pure birth processes can be used as models of real situations such as the reproduction of bacteria, the chain reaction of nuclear fission, etc. [Hudoklin, Taha]. In order to model the pure birth process, the following assumptions must be made [Hudoklin]:

• Let us have a population of specimens, where the probability that a specimen present at time t will give birth to a new specimen in the interval (t, t + ∆t] is λ⋅∆t + o(∆t), while the probability that it will not give birth to a new specimen in the interval (t, t + ∆t] is 1 − λ⋅∆t + o(∆t).
• The probability of birth should be the same for all specimens, independent of their age. Of course, the births of different specimens should also be independent of each other.
• The births of each specimen represent an independent Poisson process. Since there are several specimens observed in a population, we obviously have a combination of several independent Poisson processes, where the events represent the individual births. This kind of process is the so-called combined Poisson process [Hudoklin, Dragan], with a frequency of events equal to the sum of the frequencies of events of the individual processes.
• The frequency of events of the individual processes is in our case presumably the same for all specimens and is equal to λ. If at time t there are i specimens present in the population, then the frequency of events of the combined process is obviously equal to i⋅λ. The probability of a new birth of the combined process in the interval (t, t + ∆t] is then equal to i⋅λ⋅∆t + o(∆t), while the probability of not having a new birth is 1 − i⋅λ⋅∆t + o(∆t).

The number of specimens present in the population at time t represents a random variable, with probabilities denoted by p_i(t) = P[ N(t) = i ]. Let us presume that the initial size of the population is equal to n0. Then, with respect to some additional assumptions, the following expression can be applied for the probability that we have n0 specimens in the population at time t + ∆t [Hudoklin]:


p_n0(t + ∆t) = p_n0(t)⋅P(no new birth) + p_{n0−1}(t)⋅P(a new birth) + o(∆t)
p_n0(t + ∆t) = p_n0(t)⋅[1 − n0⋅λ⋅∆t] + p_{n0−1}(t)⋅(n0 − 1)⋅λ⋅∆t + o(∆t)         (3.25)

where o(∆t) is some function which tends to 0 faster than ∆t. So either there remain n0 specimens, none of which gave birth to a new specimen, or there were n0 − 1 specimens, one of which gave birth to a new specimen. In a similar manner, we can draw further conclusions. The probability that we have n0 + 1 specimens in the population at time t + ∆t can be written in the following way:

p_{n0+1}(t + ∆t) = p_{n0+1}(t)⋅P(no new birth) + p_n0(t)⋅P(a new birth) + o(∆t)
p_{n0+1}(t + ∆t) = p_{n0+1}(t)⋅[1 − (n0 + 1)⋅λ⋅∆t] + p_n0(t)⋅n0⋅λ⋅∆t + o(∆t)     (3.26)

The probability that we have n0 + 2 specimens in the population at time t + ∆t can be written in the following way:

p_{n0+2}(t + ∆t) = p_{n0+2}(t)⋅P(no new birth) + p_{n0+1}(t)⋅P(a new birth) + o(∆t)
p_{n0+2}(t + ∆t) = p_{n0+2}(t)⋅[1 − (n0 + 2)⋅λ⋅∆t] + p_{n0+1}(t)⋅(n0 + 1)⋅λ⋅∆t + o(∆t)      (3.27)

In general, we obviously have [Hudoklin]:

p_i(t + ∆t) = p_i(t)⋅P(no new birth) + p_{i−1}(t)⋅P(a new birth) + o(∆t)
p_i(t + ∆t) = p_i(t)⋅[1 − i⋅λ⋅∆t] + p_{i−1}(t)⋅(i − 1)⋅λ⋅∆t + o(∆t),    i = n0, n0 + 1, n0 + 2, ...      (3.28)

Naturally, during this derivation it was considered that a specimen in a very short time ∆t → 0 cannot give birth to more than one new specimen. The expression (3.28) can be modified, which leads us to:


p_i(t + ∆t) − p_i(t) = −i⋅λ⋅∆t⋅p_i(t) + p_{i−1}(t)⋅(i − 1)⋅λ⋅∆t + o(∆t)         | : ∆t

[ p_i(t + ∆t) − p_i(t) ] / ∆t = −i⋅λ⋅p_i(t) + (i − 1)⋅λ⋅p_{i−1}(t) + o(∆t)/∆t   | lim ∆t → 0      (3.29)

where the terms with o(∆t) vanish in the limit with respect to the reasons already mentioned before. This way, we have got the following system of differential equations [Hudoklin]:

dp_i(t)/dt = −i⋅λ⋅p_i(t) + (i − 1)⋅λ⋅p_{i−1}(t),    i = n0, n0 + 1, n0 + 2, ...      (3.30)

The corresponding system could be solved by use of a similar procedure (sequential use of the Laplace transformation) as in the case of the Poisson process derivation, where we would get the following solution [Hudoklin, Dragan]:

p_i(t) = C(i − 1, i − n0) ⋅ e^(−n0⋅λ⋅t) ⋅ [ 1 − e^(−λ⋅t) ]^(i − n0),    i = n0, n0 + 1, n0 + 2, ...      (3.31)

where C(i − 1, i − n0) denotes the binomial coefficient.
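The solution (3.31) can be checked numerically: the probabilities should sum to 1, and the mean population size of a pure birth process grows as n0⋅e^(λt). The sketch below (parameters and truncation index are our own choices) evaluates (3.31) directly.

```python
import math

def yule_pmf(i, n0, lam, t):
    """p_i(t) from (3.31) for a pure birth process started with n0 specimens."""
    if i < n0:
        return 0.0
    return (math.comb(i - 1, i - n0)
            * math.exp(-n0 * lam * t)
            * (1.0 - math.exp(-lam * t)) ** (i - n0))

n0, lam, t = 3, 0.5, 2.0
probs = [yule_pmf(i, n0, lam, t) for i in range(n0, 400)]
total = sum(probs)                                  # should be ~1 (truncated tail)
mean = sum(i * pr for i, pr in zip(range(n0, 400), probs))
mean_exact = n0 * math.exp(lam * t)                 # expected population size
```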

3.6 Death processes

Pure death processes can be used as models of real situations such as the emptying of inventory in a warehouse, the failure of a group of non-repairable products, etc. In order to model the pure death process, the following assumptions must be made [Hudoklin]:

• Let us have a population of specimens, where the probability that a specimen present at time t dies in the interval (t, t + ∆t] is µ⋅∆t + o(∆t), while the probability that it will not die in the interval (t, t + ∆t] is 1 − µ⋅∆t + o(∆t).
• The probability of death should be the same for all specimens, independent of their age. Of course, the deaths of different specimens should also be independent of each other.


• The deaths of each specimen represent an independent Poisson process. Since there are several specimens observed in a population, we obviously have a combination of several independent Poisson processes, where the events represent the individual deaths. This kind of process is also called the combined Poisson process, as in the case of birth processes, with a frequency of events equal to the sum of the frequencies of events of the individual processes.
• The frequency of events of the individual processes is in our case presumably the same for all specimens and is equal to µ. If at time t there are i specimens present in the population, then the frequency of events (deaths) of the combined process is obviously equal to i⋅µ. The probability of a new death of the combined process in the interval (t, t + ∆t] is then equal to i⋅µ⋅∆t + o(∆t), while the probability of not having a new death is 1 − i⋅µ⋅∆t + o(∆t).

The number of specimens present in the population at time t represents a random variable, with probabilities denoted by p_i(t) = P[ N(t) = i ]. Let us presume that the initial size of the population is equal to n0. Then, with respect to some additional assumptions, the following expression can be applied for the probability that we have n0 specimens in the population at time t + ∆t [Hudoklin]:

p_n0(t + ∆t) = p_n0(t)⋅P(no new death) + p_{n0+1}(t)⋅P(a new death) + o(∆t)
p_n0(t + ∆t) = p_n0(t)⋅[1 − n0⋅µ⋅∆t] + p_{n0+1}(t)⋅(n0 + 1)⋅µ⋅∆t + o(∆t)         (3.32)

The probability that we have n0 − 1 specimens in the population at time t + ∆t is:

p_{n0−1}(t + ∆t) = p_{n0−1}(t)⋅P(no new death) + p_n0(t)⋅P(a new death) + o(∆t)
p_{n0−1}(t + ∆t) = p_{n0−1}(t)⋅[1 − (n0 − 1)⋅µ⋅∆t] + p_n0(t)⋅n0⋅µ⋅∆t + o(∆t)     (3.33)


The probability that we have n0 − 2 specimens in the population at time t + ∆t is:

p_{n0−2}(t + ∆t) = p_{n0−2}(t) · P(not a new death) + p_{n0−1}(t) · P(a new death) + o(∆t)
p_{n0−2}(t + ∆t) = p_{n0−2}(t) · [1 − (n0 − 2)·µ·∆t] + p_{n0−1}(t) · (n0 − 1)·µ·∆t + o(∆t)     (3.34)

In general, we obviously have [Hudoklin]:

pi(t + ∆t) = pi(t) · P(not a new death) + p_{i+1}(t) · P(a new death) + o(∆t)
pi(t + ∆t) = pi(t) · [1 − i·µ·∆t] + p_{i+1}(t) · (i + 1)·µ·∆t + o(∆t),     i = n0, n0 − 1, ..., 1, 0     (3.35)

Similarly as in the case of the birth process, we can derive the following system of differential equations from the expression (3.35) [Hudoklin]:

dpi(t)/dt = −i·µ·pi(t) + (i + 1)·µ·p_{i+1}(t),     i = n0, n0 − 1, ..., 1, 0     (3.36)

The corresponding system can be solved by a similar procedure (sequential use of the Laplace transformation) as in the case of the Poisson process derivation, which gives the following solution [Hudoklin, Dragan]:

pi(t) = C(n0, i) · e^(−i·µ·t) · (1 − e^(−µ·t))^(n0−i),     i = n0, n0 − 1, ..., 1, 0     (3.37)

where C(n0, i) = n0!/(i!·(n0 − i)!) is the binomial coefficient.
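The solution (3.37) is a binomial distribution with individual survival probability e^(−µ·t). As a quick numerical sketch (the function name and the parameter values below are illustrative, not from the text), it can be checked that the probabilities sum to 1 and that the expected number of survivors equals n0·e^(−µ·t):

```python
from math import comb, exp, isclose

def death_process_pmf(n0, mu, t):
    """Probabilities p_i(t) of i survivors at time t for the pure death
    process, following the binomial form of equation (3.37)."""
    p = exp(-mu * t)  # survival probability of a single specimen up to time t
    return [comb(n0, i) * p**i * (1 - p)**(n0 - i) for i in range(n0 + 1)]

n0, mu, t = 10, 0.3, 2.0          # illustrative parameters
pmf = death_process_pmf(n0, mu, t)
mean = sum(i * pi for i, pi in enumerate(pmf))
assert isclose(sum(pmf), 1.0)                # the distribution is normalized
assert isclose(mean, n0 * exp(-mu * t))      # mean number of survivors
```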

3.7 Birth-Death processes

In this case, we observe a population which can change either by new births or by new deaths. For easier understanding of the subject, the birth-death process will be explained through the treatment of a queueing system, where the customers in the waiting line and in service will be considered. For this purpose, a model in the form of a state transition diagram (c.f. figure 27) is also introduced [Winston, Stewart, Hillier].

Figure 27: The model of the birth-death process in the form of a state transition diagram (states 0, 1, 2, ..., i, i + 1, ... with birth rates λ0, λ1, ..., λi, ... and death rates µ1, µ2, ..., µ_{i+1}, ...)

The individual states represent the current number of customers (specimens) in the system. When a new customer enters the system, a new birth happens, and when a served customer departs (leaves) the system, a new death happens. Let us assume that in a very short time ∆t → 0 only one customer can enter the system (only one birth can happen). Similarly, let us assume that in a very short time ∆t → 0 only one customer can depart the system (only one death can happen). Thus in a very short time the number of specimens in the system can increase or decrease only by one specimen, which means that only transitions between neighboring states are possible. In figure 27, the quantities pi denote the probabilities that the system is in the i-th state. In other words, the i-th state of the system means that there are i specimens present in the system.

Let us assume that we have i specimens in the system. The probability for a birth in the interval (t, t + ∆t] is λi·∆t + o(∆t), and the probability not to have a birth in this interval is 1 − λi·∆t + o(∆t); the probability for a death in the interval (t, t + ∆t] is µi·∆t + o(∆t), and the probability not to have a death in this interval is 1 − µi·∆t + o(∆t).

The number of specimens present in the population at time t is again a random variable, with pi(t) = P[N(t) = i]. Then, with respect to some additional assumptions, the following expression can be applied for the probability that we have 0 specimens in the population at time t + ∆t [Hudoklin]:

p0(t + ∆t) = p0(t) · P(not a new birth) + p1(t) · P(a new death)
p0(t + ∆t) = p0(t) · [1 − λ0·∆t + o(∆t)] + p1(t) · [µ1·∆t + o(∆t)]     (3.38)

The probability, that we have 1 specimen in the population at time t + ∆t , is [Hudoklin]:

p1 ( t + ∆t ) = p1 ( t ) ⋅ P ( not a new birth) ⋅ P ( not a new death) + p0 ( t ) ⋅ P ( a new birth) + + p2 ( t ) ⋅ P ( a new death) (3.39)

p1 ( t + ∆t ) = p1 ( t ) ⋅ 1− λ1 ⋅∆t + o ( ∆t )  ⋅ 1− µ1 ⋅∆t + o ( ∆t )  + p0 ( t ) ⋅ λ0 ⋅∆t + o ( ∆t )  + + p2 ( t ) ⋅ µ2 ⋅∆t + o ( ∆t )  The probability, that we have 2 specimens in the population at time t + ∆t , is [Hudoklin]:

p2 ( t + ∆t ) = p2 ( t ) ⋅ P ( not a new birth) ⋅ P ( not a new death) + p1 ( t ) ⋅ P ( a new birth) + + p3 ( t ) ⋅ P ( a new death) (3.40)

p2 ( t + ∆t ) = p2 ( t ) ⋅ 1− λ2 ⋅∆t + o ( ∆t )  ⋅ 1− µ2 ⋅∆t + o ( ∆t )  + p1 ( t ) ⋅ λ1 ⋅∆t + o ( ∆t )  + + p3 ( t ) ⋅ µ3 ⋅∆t + o ( ∆t )  In general, we obviously have [Hudoklin, Stewart, Winston]:

p0 ( t + ∆t ) = p0 ( t ) ⋅ 1− λ0 ⋅∆t + o ( ∆t )  + p1 ( t ) ⋅ µ1 ⋅∆t + o ( ∆t )  pi ( t + ∆t ) = pi ( t ) ⋅ 1− λi ⋅∆t + o ( ∆t )  ⋅ 1− µi ⋅∆t + o ( ∆t )  + pi−1 ( t ) ⋅ λi−1 ⋅∆t + o ( ∆t )  +

(3.41)

+ pi+1 ( t ) ⋅ µi+1 ⋅∆t + o ( ∆t ) 

67

Similarly as in the cases of the birth and the death process, we can derive the following system of differential equations from the expression (3.41) [Hudoklin, Stewart, Winston]:

dp0(t)/dt = −p0(t)·λ0 + p1(t)·µ1
dpi(t)/dt = −pi(t)·(µi + λi) + p_{i−1}(t)·λ_{i−1} + p_{i+1}(t)·µ_{i+1},     i = 1, 2, ..., n − 1     (3.42)

where n is the capacity of the system (naturally, n can also go to infinity). As it turns out, contrary to the case of the pure birth and the pure death process, it is very difficult to find the solution of the system (3.42). Thus, in the sequel we will rather observe the situation in stationary conditions, where the analysis of steady-state probabilities will be applied. In stationary conditions, the derivatives can be set to 0, so we have [Hudoklin, Stewart, Winston, Hillier]:

0 = −p0·λ0 + p1·µ1
0 = −pi·(µi + λi) + p_{i−1}·λ_{i−1} + p_{i+1}·µ_{i+1},     i = 1, 2, ..., n − 1     (3.43)

It follows:

p0·λ0 = p1·µ1
pi·(µi + λi) = p_{i−1}·λ_{i−1} + p_{i+1}·µ_{i+1},     i = 1, 2, ..., n − 1     (3.44)

and:

p0·λ0 = p1·µ1
p1·(µ1 + λ1) = p0·λ0 + p2·µ2
p2·(µ2 + λ2) = p1·λ1 + p3·µ3
p3·(µ3 + λ3) = p2·λ2 + p4·µ4
...
pi·(µi + λi) = p_{i−1}·λ_{i−1} + p_{i+1}·µ_{i+1}
...
p_{n−1}·(µ_{n−1} + λ_{n−1}) = p_{n−2}·λ_{n−2} + pn·µn     (3.45)

The expressions in (3.45) are obviously the balance equations for the states in figure 27, which reflect the Rate In = Rate Out principle [Hillier]. The system (3.45) can be relatively easily solved. In the sequel, let us stress the corresponding solutions [Dragan, Hudoklin, Hillier, Stewart]. The steady-state probability for 0 specimens in the system is:

p0 = 1/S = 1 / (1 + Σ_{i=1}^{n} Π_{j=0}^{i−1} λj/µ_{j+1}) = 1 / (1 + Σ_{i=1}^{n} (λ0·λ1·λ2·...·λ_{i−1})/(µ1·µ2·µ3·...·µi))     (3.46)

The steady-state probabilities for one, two and more specimens in the system are:

p1 = (λ0/µ1) · p0
p2 = (λ0·λ1)/(µ1·µ2) · p0
p3 = (λ0·λ1·λ2)/(µ1·µ2·µ3) · p0
...
pi = (λ0·λ1·λ2·...·λ_{i−1})/(µ1·µ2·µ3·...·µi) · p0
...
pn = (λ0·λ1·λ2·...·λ_{n−1})/(µ1·µ2·µ3·...·µn) · p0     (3.47)


The probabilities in (3.47) can also be written in a more compact form:

pi = p0 · Π_{j=0}^{i−1} λj/µ_{j+1},     i = 1, 2, ..., n     (3.48)
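The product formula (3.46)-(3.48) can be evaluated numerically for any finite set of rates. A minimal sketch follows (the function name and the rates below are illustrative, not from the text); it also verifies the first balance equation of (3.45):

```python
from math import isclose

def birth_death_steady_state(lam, mu):
    """Steady-state probabilities p_0..p_n from the product formula (3.48),
    given birth rates lam[0..n-1] and death rates mu[1..n] (mu[0] unused)."""
    n = len(lam)
    terms = [1.0]
    for i in range(n):
        terms.append(terms[-1] * lam[i] / mu[i + 1])  # prod of lam_j / mu_{j+1}
    p0 = 1.0 / sum(terms)                             # normalization (3.46)
    return [p0 * t for t in terms]

lam = [2.0, 1.5, 1.0]          # illustrative lambda_0..lambda_2
mu = [None, 1.0, 2.0, 3.0]     # illustrative mu_1..mu_3 (index 0 unused)
p = birth_death_steady_state(lam, mu)
assert isclose(sum(p), 1.0)                       # probabilities sum to one
assert isclose(p[0] * lam[0], p[1] * mu[1])       # balance: p0*lam0 = p1*mu1
```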

At this point, we have finished the short overview of stochastic processes theory. The results derived in this chapter will help us to better understand the methodology of the following chapter, which introduces the basic queueing models.

4 INTRODUCTION TO BASIC QUEUEING MODELS

In this chapter, we will get more familiar with some basic queueing models. The basic characteristics of queueing systems, the terminology and the basic parameters were already shortly introduced in chapter 2.4. In this chapter, we will first focus on the derivation of the basic M/M/1 model, where all the significant details of the modeling will be shown. Secondly, the results for some other typical models of the type »M/M/…« (models with the Markovian property) will also be shortly stressed. Naturally, as already mentioned in the previous chapter, we will observe only the situation in stationary conditions, where the analysis of steady-state probabilities and other important statistical quantities will be applied. With the theoretical background acquired in this chapter, the working mechanisms of queueing simulation models will also become easier to understand.

4.1 Single channel queueing models

The main property of single channel queueing models is that they have only a single waiting line and a single server. For all models treated in this chapter we will suppose that the input arrival process is a Poisson process and the service times are distributed exponentially, while the queue discipline is FIFO (first in, first out).


4.1.1 Basic model M/M/1

For the basic model, the following assumptions must be made [Hudoklin]:
• The population of customers is infinite, and
• The waiting space is infinitely big.
The arrival process is a Poisson process with arrival rate λ, the service times are exponentially distributed random variables, and the departure rate (mean service rate) is equal to µ. The corresponding system is shown in figure 28, and its state transition diagram is shown in figure 29.

Figure 28: The basic model M/M/1

Figure 29: The state transition diagram of the basic model M/M/1

As can be seen from figure 29, we assume arrival and departure rates which are always constant regardless of the number of customers already present in the system. So we have:

λ0 = λ1 = ... = λ_{n−1} = ... = λ
µ1 = µ2 = ... = µn = ... = µ     (4.1)


The derivation of steady-state probabilities

Firstly, we are going to calculate the steady-state probabilities for the number of customers present in the system. The expressions in (3.47) take the following form:

p1 = (λ0/µ1) · p0 = (λ/µ) · p0
p2 = (λ0·λ1)/(µ1·µ2) · p0 = (λ·λ)/(µ·µ) · p0 = (λ^2/µ^2) · p0
p3 = (λ^3/µ^3) · p0
...
pi = (λ^i/µ^i) · p0
...
pn = (λ^n/µ^n) · p0
...     (4.2)

Now let us try to calculate the probability p0. For this purpose, the following expression can be given:

p0 + p1 + p2 + ... + pn + ... = 1
p0 + (λ/µ)·p0 + (λ^2/µ^2)·p0 + ... + (λ^n/µ^n)·p0 + ... = 1
p0 · [1 + λ/µ + λ^2/µ^2 + ... + λ^n/µ^n + ...] = 1
p0 = 1 / (1 + λ/µ + λ^2/µ^2 + ... + λ^n/µ^n + ...) = 1/S     (4.3)

Naturally, the result (4.3) could also be directly calculated from the result (3.46). Now, if we introduce the utilization factor from (2.11), the expression for p0 takes the following form:

p0 = 1 / (1 + ρ + ρ^2 + ... + ρ^n + ...) = 1/S     (4.4)

The condition for the existence of the stationary distribution is [Hudoklin, Winston]:

S = 1 + ρ + ρ^2 + ρ^3 + ... < ∞     (4.5)


which implies:

ρ < 1

Let us also derive the probability P(i > N), which means that there are more than N customers in the system:

P(i > N) = Σ_{i=N+1}^{∞} pi = Σ_{i=N+1}^{∞} ρ^i·(1 − ρ) = ρ^{N+1}·(1 − ρ) · Σ_{i=N+1}^{∞} ρ^{i−(N+1)}     (4.19)

Let us introduce a new variable s = i − (N + 1), which leads us to:

P(i > N) = ρ^{N+1}·(1 − ρ) · Σ_{s=0}^{∞} ρ^s = ρ^{N+1}·(1 − ρ) · 1/(1 − ρ)

P(i > N) = ρ^{N+1} = (λ/µ)^{N+1}     (4.20)

where the relationship for the geometric series was also used.
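Since the derivation above uses the M/M/1 stationary distribution pi = ρ^i·(1 − ρ), the closed form (4.20) can be checked by direct summation. A short sketch with illustrative values of ρ and N (the truncation point 400 is just a numerical cutoff for the infinite sum):

```python
from math import isclose

rho, N = 0.7, 5                                    # illustrative values
# tail of the geometric stationary pmf p_i = rho^i * (1 - rho), i > N
tail = sum(rho**i * (1 - rho) for i in range(N + 1, 400))
assert isclose(tail, rho**(N + 1), rel_tol=1e-9)   # matches closed form (4.20)
```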

4.1.2 Model M/M/1 with limited waiting space

Similarly as in the case of the basic model, a modified model can be derived for the situation when there is a limited waiting space. Here, we are going to treat the situation where the arrival rate is constant until the waiting space becomes fully loaded. At that moment, the arrival rate immediately falls to 0 and new potential customers will definitely go away un-served. Typical systems of this kind are for example automotive services, laundry services, etc. [Hudoklin]. The state transition diagram of the model M/M/1 with limited waiting space is shown in figure 30, where the maximum possible number of customers in the system is equal to n, which means the states 0, 1, ..., n.

Figure 30: The state transition diagram of the model M/M/1 with limited waiting space (the total number of customers in the system can be at most n)

So, we are dealing with the situation when maximally n − 1 customers can be in the waiting queue, while one customer is in service. For the arrival and departure rates the following expressions can be written [Hudoklin, Gross]:

λi = λ for i < n, and λi = 0 for i ≥ n  ⇒  λ0 = λ1 = λ2 = ... = λ_{n−1} = λ
µi = µ,     i = 1, 2, ..., n     (4.21)

The derivation of steady-state probabilities

As in the case of the basic model, we are first going to calculate the steady-state probabilities for the number of customers present in the system. The expressions in (4.2) now take the following form:

pi = (λ^i/µ^i) · p0 = ρ^i · p0,     i = 1, 2, ..., n     (4.22)

The expression for p0, which was given for the basic model in (4.4), now takes the following form:

p0 = 1 / (1 + ρ + ρ^2 + ... + ρ^n) = 1/S     (4.23)

It can be shown that for the series S the following expression can be given [Gross, Bose]:

S = 1 + ρ + ρ^2 + ... + ρ^n = Σ_{i=0}^{n} ρ^i = (1 − ρ^{n+1})/(1 − ρ)     (4.24)


Thus, the probability p0 is:

p0 = 1/S = (1 − ρ)/(1 − ρ^{n+1})     (4.25)

and the stationary probability distribution for the number of i ≤ n customers in the system is:

pi = ρ^i · (1 − ρ)/(1 − ρ^{n+1}),     i = 0, 1, 2, ..., n     (4.26)
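As a quick numerical sketch of (4.26) (the function name and parameter values are illustrative), note that because the state space is finite, the distribution is well defined even for ρ > 1:

```python
from math import isclose

def mm1n_pmf(rho, n):
    """Stationary distribution (4.26) for M/M/1 with at most n customers."""
    norm = (1 - rho**(n + 1)) / (1 - rho)   # finite geometric series (4.24)
    return [rho**i / norm for i in range(n + 1)]

pmf = mm1n_pmf(rho=1.2, n=6)   # rho > 1 is allowed here: the space is finite
assert isclose(sum(pmf), 1.0)
```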

The derivation of basic statistical parameters

The derivation of the basic statistical parameters is now a little more complicated than it was in the case of the basic model. In the sequel, we will just show how the average number of customers in the system E(N) = L can be derived. The details of the derivation of the other statistical parameters (E(Nq) = Lq, E(W) and E(Wq)) can be investigated in the literature [Gross, Bose, Hudoklin, Stewart, Dragan]. At first, let us apply the following expression:

L = E(N) = Σ_{i=0}^{n} i·pi = Σ_{i=0}^{n} i·ρ^i · (1 − ρ)/(1 − ρ^{n+1}) = (1 − ρ)/(1 − ρ^{n+1}) · Σ_{i=0}^{n} i·ρ^i = (1 − ρ)/(1 − ρ^{n+1}) · ρ · Σ_{i=0}^{n} i·ρ^{i−1}     (4.27)

Since we know that the following relation holds:

d(ρ^i)/dρ = i·ρ^{i−1}     (4.28)

we obviously have:

L = E(N) = (1 − ρ)/(1 − ρ^{n+1}) · ρ · Σ_{i=0}^{n} d(ρ^i)/dρ = (1 − ρ)/(1 − ρ^{n+1}) · ρ · d/dρ [Σ_{i=0}^{n} ρ^i]     (4.29)


Due to the relation (4.24) we can write:

L = (1 − ρ)/(1 − ρ^{n+1}) · ρ · d/dρ [(1 − ρ^{n+1})/(1 − ρ)]     (4.30)

which leads to:

L = (1 − ρ)/(1 − ρ^{n+1}) · ρ · [−(n + 1)·ρ^n·(1 − ρ) − (1 − ρ^{n+1})·(−1)] / (1 − ρ)^2 =
  = (1 − ρ)/(1 − ρ^{n+1}) · ρ · [−(n + 1)·ρ^n·(1 − ρ) + (1 − ρ^{n+1})] / (1 − ρ)^2 =
  = (1 − ρ)/(1 − ρ^{n+1}) · ρ · [−(n + 1)·ρ^n + (n + 1)·ρ^{n+1} + 1 − ρ^{n+1}] / (1 − ρ)^2 =
  = (1 − ρ)/(1 − ρ^{n+1}) · ρ · [1 − (n + 1)·ρ^n + n·ρ^{n+1}] / (1 − ρ)^2     (4.31)

So the average number of customers in the system E(N) = L is equal to:

L = E(N) = ρ · [1 − (n + 1)·ρ^n + n·ρ^{n+1}] / [(1 − ρ^{n+1}) · (1 − ρ)]     (4.32)
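The closed form (4.32) can be checked against the direct sum Σ i·pi over the distribution (4.26). A short sketch with illustrative values of ρ and n:

```python
from math import isclose

rho, n = 0.8, 10                     # illustrative values
# stationary pmf (4.26) for M/M/1 with limited waiting space
pmf = [(1 - rho) / (1 - rho**(n + 1)) * rho**i for i in range(n + 1)]
L_direct = sum(i * p for i, p in enumerate(pmf))
# closed form (4.32)
L_closed = rho * (1 - (n + 1) * rho**n + n * rho**(n + 1)) / \
           ((1 - rho**(n + 1)) * (1 - rho))
assert isclose(L_direct, L_closed)   # (4.32) agrees with the direct sum
```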


4.1.3 Model M/M/1 (gradually "scared" or "balked" customers)

In this case, the arrival process depends on the number of customers which are already present in the system. The rate at which customers arrive at the facility decreases gradually when the facility becomes too crowded. For example, if a customer sees that the bank parking lot is almost full, he might change his mind, pass by and come another day. If a customer arrives but fails to enter the system, we say that the customer has balked. In this kind of system, we are treating the situation when the arrival rate gradually decreases. It means that the bigger the number of customers already present in the system is, the more balked ("scared") a new potential customer will become. Let us assume that in this case the arrival rate decreases in proportion to the increase of the queue length. The state transition diagram of the model M/M/1 with gradually "scared" or "balked" customers is shown in figure 31 [Hudoklin].

Figure 31: The state transition diagram of the model M/M/1 with gradually "scared" or "balked" customers

For the arrival and departure rates the following expressions can be written [Hudoklin]:

λi = λ/(i + 1),     i = 0, 1, ...
µi = µ,     i = 1, 2, ...     (4.33)

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 31 (Rate In = Rate Out principle):

µ·p1 = λ·p0
µ·p2 = (λ/2)·p1
µ·p3 = (λ/3)·p2
...
µ·pi = (λ/i)·p_{i−1}
...
µ·pn = (λ/n)·p_{n−1}
...     (4.34)

which leads us to:

p1 = (λ/µ) · p0
p2 = (λ/2µ) · p1 = (λ/2µ)·(λ/µ) · p0 = (λ^2/µ^2) · (1/2!) · p0
p3 = (λ/3µ) · p2 = (λ/3µ)·(λ/2µ)·(λ/µ) · p0 = (λ^3/µ^3) · (1/3!) · p0
...
pi = (λ/iµ)·(λ/(i−1)µ)·...·(λ/µ) · p0 = (λ^i/µ^i) · (1/i!) · p0 = (ρ^i/i!) · p0
...
pn = (λ/nµ)·(λ/(n−1)µ)·...·(λ/µ) · p0 = (λ^n/µ^n) · (1/n!) · p0 = (ρ^n/n!) · p0     (4.35)

Thus, in general we have:

pi = (ρ^i/i!) · p0,     i = 0, 1, 2, ...     (4.36)


The expression for p0 in this case can be derived as follows [Hudoklin, Dragan]:

p0 + p1 + p2 + ... + pi + ... = 1
p0 + (ρ/1!)·p0 + (ρ^2/2!)·p0 + ... + (ρ^i/i!)·p0 + ... = 1
p0 · [1 + ρ/1! + ρ^2/2! + ... + ρ^i/i! + ...] = 1
p0 = 1/S = 1 / Σ_{i=0}^{∞} ρ^i/i! = 1/e^ρ = e^(−ρ)     (4.37)

and the stationary probability distribution for the number of i customers in the system is:

pi = (ρ^i/i!) · e^(−ρ),     i = 0, 1, 2, ...     (4.38)

So, the stationary probability of the system being in the i-th state is in this case governed by the Poisson distribution.

The derivation of basic statistical parameters

In the sequel, we will just show how the average number of customers in the system E(N) = L can be derived. The details of the derivation of the other statistical parameters (E(Nq) = Lq, E(W) and E(Wq)) can be investigated in the literature [Bhat, Gross, Bose, Stewart, Ross, Takacz]. At first, let us apply the following expression:

L = E(N) = Σ_{i=0}^{∞} i·pi = 0·p0 + Σ_{i=1}^{∞} i · (ρ^i/i!) · e^(−ρ) = e^(−ρ) · ρ · Σ_{i=1}^{∞} ρ^{i−1}/(i − 1)!     (4.39)

Now, let us introduce a new variable m = i − 1, which leads us to:

L = e^(−ρ) · ρ · Σ_{m=0}^{∞} ρ^m/m! = e^(−ρ) · ρ · e^ρ = ρ     (4.40)

So the average number of customers in the system is equal to the utilization factor.

4.1.4 Model M/M/1 with limited number of customers

In this case we are dealing with a limited number of n customers which can visit the queueing system. Naturally, if there are k customers already in the system, only n − k customers remain in the origin population. The state transition diagram of the model M/M/1 with limited number of customers is shown in figure 32 [Bose, Gross, Stewart].

Figure 32: The state transition diagram of the model M/M/1 with limited number of customers (arrival rates n·λ, (n − 1)·λ, ..., λ; departure rate µ in every state)

A typical application of this kind of system is the maintenance and repair of machinery. Let us suppose that we have n machines of the same type. We also assume that the repairman works all the time and does not lose any time walking from one machine to another. Let the frequency of faults of a machine be equal to λ, and the frequency of completions of repairs be equal to µ. The faulty machines represent the customers which are waiting for repair. Naturally, if all machines are working properly, the queueing system is empty (state 0 in figure 32), and the arrival rate is n·λ (which means that n machines can become faulty at any time in the future). If suddenly one machine becomes faulty and goes to repair, the system jumps into state 1 in figure 32, and the arrival rate is now (n − 1)·λ (which means that n − 1 machines can become faulty at any time in the future). The same logic can be further applied for the other states and arrival rates of the system. The working mechanism of the model M/M/1 with limited number of customers is clearly shown in figure 33.

Figure 33: The working mechanism of the model M/M/1 with limited number of customers.

For the arrival and departure rates the following expressions can be written [Hudoklin, Gross]:

λi = (n − i)·λ for i = 0, 1, 2, ..., n, and λi = 0 for i > n
µi = µ,     i = 1, 2, ...     (4.41)

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 32 (Rate In = Rate Out principle):

µ·p1 = n·λ·p0
µ·p2 = (n − 1)·λ·p1
µ·p3 = (n − 2)·λ·p2
...
µ·pk = (n − (k − 1))·λ·p_{k−1}
...
µ·pn = (n − (n − 1))·λ·p_{n−1}     (4.42)


which leads us to:

p1 = (n·λ/µ) · p0
p2 = ((n − 1)·λ/µ) · p1
p3 = ((n − 2)·λ/µ) · p2
...
pk = ((n − (k − 1))·λ/µ) · p_{k−1}
...
pn = ((n − (n − 1))·λ/µ) · p_{n−1}     (4.43)

If these expressions are slightly modified, we have:

p1 = n · (λ/µ) · p0
p2 = ((n − 1)·λ/µ) · p1 = (n − 1)·n · (λ^2/µ^2) · p0
p3 = ((n − 2)·λ/µ) · p2 = (n − 2)·(n − 1)·n · (λ^3/µ^3) · p0
...
pk = (n − k + 1)·(n − k + 2)·...·(n − 2)·(n − 1)·n · (λ^k/µ^k) · p0 = (n!/(n − k)!) · (λ^k/µ^k) · p0
...
pn = (n − n + 1)·(n − n + 2)·...·(n − 2)·(n − 1)·n · (λ^n/µ^n) · p0 = 1·2·...·n · (λ^n/µ^n) · p0 = n! · (λ^n/µ^n) · p0     (4.44)


Thus, in general we have:

pk = (n − k + 1)·...·(n − 2)·(n − 1)·n · ρ^k · p0 = (n!/(n − k)!) · ρ^k · p0,     k = 1, ..., n and ρ = λ/µ     (4.45)

The expression for p0 in this case can be derived as follows [Hudoklin, Dragan]:

p0 + p1 + ... + pk + ... + pn = 1
p0 + n·ρ·p0 + (n − 1)·n·ρ^2·p0 + ... + n!·ρ^n·p0 = 1
p0 · [1 + n·ρ + (n − 1)·n·ρ^2 + ... + n!·ρ^n] = 1
p0 = 1 / (1 + n·ρ + (n − 1)·n·ρ^2 + ... + n!·ρ^n) = 1/S     (4.46)

and the stationary probability distribution for the number of k customers in the system is:

pk = (n!/(n − k)!) · ρ^k / (1 + n·ρ + (n − 1)·n·ρ^2 + ... + n!·ρ^n),     k = 1, ..., n and ρ = λ/µ     (4.47)

The derivation of basic statistical parameters

In general, the average number of customers in the system E(N) = L can be derived by using the expression:

L = E(N) = Σ_{k=0}^{n} k·pk = Σ_{k=0}^{n} k · (n − k + 1)·...·(n − 1)·n · ρ^k · p0 = Σ_{k=0}^{n} k · (n!/(n − k)!) · ρ^k · p0     (4.48)


It is not easy to simplify the expression (4.48). It can be shown that after a more extensive derivation we get the following result [Hillier, Hudoklin]:

L = E(N) = n − (1/ρ) · (1 − p0)     (4.49)

The details of the derivation of the other statistical parameters (E(Nq) = Lq, E(W) and E(Wq)) can be investigated in the literature [Bhat, Gross, Bose, Stewart, Hillier, Winston].
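The distribution (4.47) and the closed form (4.49) can be checked against each other numerically. A minimal sketch (the function name and the values of n and ρ are illustrative):

```python
from math import factorial, isclose

def repair_pmf(n, rho):
    """Stationary distribution (4.46)-(4.47) for the machine-repair
    M/M/1 model with a population of n machines."""
    weights = [factorial(n) // factorial(n - k) * rho**k for k in range(n + 1)]
    total = sum(weights)                 # the series S of (4.46)
    return [w / total for w in weights]

n, rho = 5, 0.3                          # illustrative values
pmf = repair_pmf(n, rho)
L_direct = sum(k * p for k, p in enumerate(pmf))   # direct sum (4.48)
L_closed = n - (1 - pmf[0]) / rho                  # closed form (4.49)
assert isclose(sum(pmf), 1.0)
assert isclose(L_direct, L_closed)
```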

4.1.5 Model M/M/1 with additional server for longer queues

In practice it can often happen that the number of serving places depends on the number of customers in the system (for example, the number of store cashiers, the number of bank counters, etc.). Let us suppose that we have a single server spot as long as the number of customers is ≤ n. As soon as the number of customers exceeds the value n, an additional server spot is employed. The latter is closed as soon as the number of customers is ≤ n again. The state transition diagram of the model M/M/1 with additional server for longer queues is shown in figure 34.

Figure 34: The state transition diagram of the model M/M/1 with additional server for longer queues


For the arrival and departure rates the following expressions can be written [Hudoklin]:

λi = λ,     i = 0, 1, 2, ...
µi = µ for i = 1, ..., n, and µi = 2µ for i = n + 1, n + 2, ...     (4.50)

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 34 (Rate In = Rate Out principle):

µ·p1 = λ·p0
µ·p2 = λ·p1
µ·p3 = λ·p2
...
µ·pi = λ·p_{i−1}
...
µ·pn = λ·p_{n−1}
2µ·p_{n+1} = λ·pn
2µ·p_{n+2} = λ·p_{n+1}
...     (4.51)

which implies for the states between 0 and n:

p1 = (λ/µ) · p0
p2 = (λ/µ) · p1 = (λ/µ)^2 · p0
p3 = (λ/µ) · p2 = (λ/µ)^3 · p0
...
pi = (λ/µ)^i · p0
...
pn = (λ/µ)^n · p0     (4.52)


while for the states bigger than n we have:

p_{n+1} = (λ/2µ) · pn = (λ/2µ) · (λ/µ)^n · p0
p_{n+2} = (λ/2µ) · p_{n+1} = (λ/2µ)^2 · (λ/µ)^n · p0
p_{n+3} = (λ/2µ) · p_{n+2} = (λ/2µ)^3 · (λ/µ)^n · p0
...     (4.53)

Thus, in general we have [Hudoklin]:

pi = (λ/µ)^i · p0 for i = 0, 1, ..., n
pi = (λ/2µ)^{i−n} · (λ/µ)^n · p0 for i = n + 1, n + 2, ...     (4.54)

If the utilization factor ρ = λ/µ is additionally applied, we get the following form:

pi = ρ^i · p0 for i = 0, 1, ..., n
pi = (ρ/2)^{i−n} · ρ^n · p0 = ρ^i · p0 / 2^{i−n} for i = n + 1, n + 2, ...     (4.55)


The expression for p0 in this case can be derived as follows [Hudoklin, Dragan]:

p0 + p1 + ... + pn + p_{n+1} + p_{n+2} + ... = 1
p0 + ρ·p0 + ρ^2·p0 + ... + ρ^n·p0 + (ρ^{n+1}/2^1)·p0 + (ρ^{n+2}/2^2)·p0 + ... = 1
p0 · [1 + ρ + ρ^2 + ... + ρ^n + ρ^{n+1}/2^1 + ρ^{n+2}/2^2 + ...] = 1
p0 = 1 / (Σ_{i=0}^{n} ρ^i + Σ_{i=n+1}^{∞} ρ^i/2^{i−n}) = 1/S     (4.56)

The series S can be slightly modified, which leads us to:

S = Σ_{i=0}^{n} ρ^i + Σ_{i=n+1}^{∞} ρ^i/2^{i−n} = Σ_{i=0}^{n−1} ρ^i + Σ_{i=n}^{∞} ρ^i/2^{i−n} = (1 − ρ^n)/(1 − ρ) + ρ^n · Σ_{i=n}^{∞} (ρ/2)^{i−n}     (4.57)

If now the new variable m = i − n is introduced, we get (using the geometric series):

S = (1 − ρ^n)/(1 − ρ) + ρ^n · Σ_{m=0}^{∞} (ρ/2)^m = (1 − ρ^n)/(1 − ρ) + ρ^n/(1 − ρ/2) = (1 − ρ^n)/(1 − ρ) + 2·ρ^n/(2 − ρ)     (4.58)


and consequently:

S = [(1 − ρ^n)·(2 − ρ) + 2·ρ^n·(1 − ρ)] / [(1 − ρ)·(2 − ρ)] = [2 − ρ − 2·ρ^n + ρ^{n+1} + 2·ρ^n − 2·ρ^{n+1}] / [(1 − ρ)·(2 − ρ)] = (2 − ρ − ρ^{n+1}) / [(1 − ρ)·(2 − ρ)]     (4.59)

So, the expression for p0 in this case can be written as follows [Hudoklin, Gross]:

p0 = 1/S = (1 − ρ)·(2 − ρ) / (2 − ρ − ρ^{n+1}),     ρ = λ/µ     (4.60)

and the stationary probability distribution for the number of i customers in the system is [Hudoklin, Gross]:

pi = ρ^i · (1 − ρ)·(2 − ρ)/(2 − ρ − ρ^{n+1}) for i = 0, 1, ..., n
pi = (ρ^i/2^{i−n}) · (1 − ρ)·(2 − ρ)/(2 − ρ − ρ^{n+1}) for i = n + 1, n + 2, ...,     where ρ = λ/µ     (4.61)
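The normalization of (4.60)-(4.61) can be checked numerically. A minimal sketch (the function name and the values of ρ and n are illustrative; note that once the second server operates, the series converges for ρ < 2, and the truncation at 400 terms is just a numerical cutoff):

```python
from math import isclose

def extra_server_pmf(rho, n, terms=400):
    """Stationary probabilities (4.61) for M/M/1 with a second server
    opened once more than n customers are present."""
    p0 = (1 - rho) * (2 - rho) / (2 - rho - rho**(n + 1))   # (4.60)
    return [p0 * rho**i if i <= n else p0 * rho**i / 2**(i - n)
            for i in range(terms)]

pmf = extra_server_pmf(rho=0.9, n=4)
assert isclose(sum(pmf), 1.0, rel_tol=1e-9)   # distribution is normalized
```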

The derivation of basic statistical parameters

As it turns out, the derivation of the basic statistical parameters is in this case not so easy to carry out. The details of the derivation of the statistical parameters (E(Nq) = Lq, E(W) and E(Wq)) can be investigated in the literature [Bhat, Gross, Bose, Stewart, Ross, Takacz]. Let us just mention that, for instance, the most suitable way to calculate the average number of customers in the system E(N) = L is to use the following expression [Hudoklin, Gross]:

E(N) = L = Σ_{i=0}^{∞} i·pi     (4.62)


4.2 Multiple channel queueing models

The main property of multiple channel queueing models is that they have multiple servers (more than one server). For all models treated in this chapter we will suppose that the input arrival process is a Poisson process and the service times of the servers are distributed exponentially, while the queue discipline is FIFO (first in, first out).

4.2.1 Basic model M/M/r

For the basic model, the following assumptions must be made [Hudoklin]:
• The population of customers is infinite, and
• The waiting space is infinitely big.
Let us suppose that we have r servers in the system, which operate independently from each other. We also assume that the arriving customers form one single queue in the sequence in which they arrive into the system. As soon as a certain service spot becomes empty, the customer which is first in the queue enters that spot. If more servers are free, the customer can go to any of them. The input arrival process is a Poisson process with the arrival rate λ, while the service times of the servers are random variables which are, due to the equivalency of the servers, distributed exponentially, where the departure rate (mean service rate) for each server is µ. While the number of customers is lower than or equal to the number of servers (i ≤ r), all the customers will be simultaneously in service. But as soon as i > r, the number of customers becomes bigger than the number of servers, which implies that besides the r simultaneously served customers, i − r customers will have to wait. The corresponding system is shown in figure 35, and its state transition diagram is shown in figure 36.


Figure 35: The basic model M/M/r

Figure 36: The state transition diagram of the basic model M/M/r

For the arrival and departure rates the following expressions can be written [Hudoklin, Gross, Winston, Hillier]:

λi = λ,     i = 0, 1, 2, ...
µi = i·µ for 1 ≤ i < r, and µi = r·µ for i ≥ r     (4.63)

where µ is the mean service rate of an individual server.

The derivation of steady-state probabilities

Now let us apply the balance equations for the states in figure 36 (Rate In = Rate Out principle):

µ·p1 = λ·p0
2µ·p2 = λ·p1
3µ·p3 = λ·p2
...
i·µ·pi = λ·p_{i−1}
...
(r − 1)·µ·p_{r−1} = λ·p_{r−2}
r·µ·pr = λ·p_{r−1}
r·µ·p_{r+1} = λ·pr
r·µ·p_{r+2} = λ·p_{r+1}
...     (4.64)

which leads us to:

p1 = (λ/µ) · p0
p2 = (λ/2µ) · p1 = (λ/2µ)·(λ/µ) · p0 = λ^2/(2!·µ^2) · p0
p3 = (λ/3µ) · p2 = λ^3/(3!·µ^3) · p0
...
pi = (λ/iµ) · p_{i−1} = λ^i/(i!·µ^i) · p0
...
p_{r−1} = (λ/((r − 1)·µ)) · p_{r−2} = λ^{r−1}/((r − 1)!·µ^{r−1}) · p0
pr = (λ/rµ) · p_{r−1} = λ^r/(r!·µ^r) · p0
p_{r+1} = (λ/rµ) · pr = (λ/rµ) · λ^r/(r!·µ^r) · p0
p_{r+2} = (λ/rµ) · p_{r+1} = (λ/rµ)^2 · λ^r/(r!·µ^r) · p0
... etc.     (4.65)

Thus, in general we have:

pi = λ^i/(i!·µ^i) · p0,     i = 0, 1, 2, ..., r
pr = λ^r/(r!·µ^r) · p0
pj = (λ/rµ)^{j−r} · pr = (λ/rµ)^{j−r} · λ^r/(r!·µ^r) · p0,     j = r + 1, r + 2, ..., ∞     (4.66)

In the case of the M/M/r system, the utilization factor is defined in the following way [Hudoklin, Winston, Hillier]:

ρ = λ/(r·µ)  ⇒  λ/µ = ρ·r     (4.67)

so we have:

pi = (ρ·r)^i/i! · p0,     i = 0, 1, ..., r
pr = (ρ·r)^r/r! · p0
pj = ρ^{j−r} · pr = ρ^{j−r} · (ρ·r)^r/r! · p0 = (ρ^j·r^r/r!) · p0,     j = r + 1, r + 2, ..., ∞     (4.68)

and finally:

pi = (ρ·r)^i/i! · p0 for i = 0, 1, ..., r
pi = (ρ^i·r^r/r!) · p0 for i = r + 1, r + 2, ..., ∞,     where ρ = λ/(r·µ)     (4.69)


The expression for p0 in this case can be derived as follows [Hudoklin, Dragan]:

p0 + p1 + ... + pr + p_{r+1} + p_{r+2} + ... = 1
p0 + (ρ·r/1!)·p0 + ((ρ·r)^2/2!)·p0 + ... + ((ρ·r)^r/r!)·p0 + (ρ^{r+1}·r^r/r!)·p0 + (ρ^{r+2}·r^r/r!)·p0 + ... = 1
p0 · [1 + ρ·r/1! + (ρ·r)^2/2! + ... + (ρ·r)^r/r! + ρ^{r+1}·r^r/r! + ρ^{r+2}·r^r/r! + ...] = 1
p0 = 1 / (1 + ρ·r/1! + (ρ·r)^2/2! + ... + (ρ·r)^r/r! + ρ^{r+1}·r^r/r! + ρ^{r+2}·r^r/r! + ...) = 1/S     (4.70)

The series S can be slightly modified:

S = 1 + ρ·r/1! + (ρ·r)^2/2! + ... + (ρ·r)^r/r! + ρ^{r+1}·r^r/r! + ρ^{r+2}·r^r/r! + ... =
  = 1 + Σ_{i=1}^{r−1} (ρ·r)^i/i! + (ρ^r·r^r/r!) · (1 + ρ + ρ^2 + ...) =
  = Σ_{i=0}^{r−1} (ρ·r)^i/i! + (ρ·r)^r/(r!·(1 − ρ))     (4.71)

where the geometric series was used again.

So the probability p0 is equal to:

p0 = 1/S = 1 / (Σ_{i=0}^{r−1} (ρ·r)^i/i! + (ρ·r)^r/(r!·(1 − ρ))),     ρ = λ/(r·µ)     (4.72)

and the stationary probability distribution for the number of i customers in the system is [Hudoklin, Gross]:

pi = ((ρ·r)^i/i!) · p0 for i = 0, 1, ..., r
pi = (ρ^i·r^r/r!) · p0 for i = r + 1, r + 2, ..., ∞,     where ρ = λ/(r·µ) and p0 is given by (4.72)     (4.73)

The condition for the existence of the stationary distribution is [Hudoklin, Winston]:

ρ = λ