Int J Adv Manuf Technol (2005) 25: 1232–1240 DOI 10.1007/s00170-003-1951-0

ORIGINAL ARTICLE

K.-J. Park · Y.-H. Lee

An On-line Simulation Approach to Search Efficient Values of Decision Variables in Stochastic Systems

Received: 23 June 2003 / Accepted: 2 September 2003 / Published online: 5 May 2004
© Springer-Verlag London Limited 2004

Abstract This paper deals with a discrete simulation optimization method for designing a complex probabilistic discrete event system. The algorithm proposed in this paper searches for effective and reliable alternatives that satisfy the target values of the system to be designed, through a single run in a relatively short time period. It estimates an auto-regressive model and constructs a mean and confidence interval in order to evaluate correctly the objective function obtained from a small amount of output data. Experimental results using the proposed method are also shown.

K.-J. Park (✉)
Division of Digital Business Administration, Gwangju University, Jinwol-Dong, Gwangju-City, Korea
E-mail: [email protected]

Y.-H. Lee
Dept. of Industrial Engineering, Hanyang University, Ansan-City, Kyunggi-Do, Korea

1 Introduction

As manufacturing practice advances, systems are becoming larger and more complex than in the past. Such systems therefore cannot be solved by simple analytical or mathematical methods, which require theoretical assumptions. For example, in a complex FMS (Flexible Manufacturing System) modelled as a queueing network, where inter-arrival times of parts or service times of machines follow stochastic distributions, or where a number of products have finite buffers and various complex priorities, the traditional mathematical methods give no solution, so we should use discrete event simulation techniques. Also, when designing a complex and stochastic discrete event system (for example, choosing the number of operators in a work station, or the reorder point and order-up-to level in (s,S) inventory problems), we cannot express an objective function analytically because some or all components of the system are stochastic and complex. When we observe the modelling process of various complex systems, we rarely find a system that can be expressed by an accurate analytical model; even when an analytical expression is obtained, it has nonlinear or stochastic characteristics. Study using discrete event simulation is therefore accepted as the best technique for complex systems. Simulation modelling methods are used in various fields to analyse or evaluate complex and stochastic discrete event systems such as FMSs. However, most traditional methods used to search for an alternative (a feasible or optimal solution) needed by a system designer rely on multiple simulation runs, which incur a high computation cost.

We should solve several problems in order to use discrete event simulation as a design tool for a complex and stochastic discrete event system. First, the values of the objective functions and the constraints used to evaluate the performance measures of a system are obtained not by simple calculation of known functions but by simulation runs. Second, in the case of a stochastic discrete event system, because the results of a simulation run include stochastic elements, we require efficient statistical analysis and stochastic optimization. Third, when the simulation is a steady state simulation, calculating the performance measures of an alternative in steady state requires a large amount of output data and expensive computation because the output data have auto-correlation properties. Fourth, if the search space of the considered problem is large, the computation cost grows beyond that of the general case (Azadivar 1992).

When we design a discrete event system using simulation, the steps processed by a general designer can be described as follows:

Step 1: Define the responses of the system, assign values to the design variables of the system, and determine alternatives of the system.
Step 2: Run independent replications of the simulation of the defined system configuration and collect output data.
Step 3: Analyse the obtained output data. If the output data are accepted, move to Step 4; otherwise, change the values of the system design variables and return to Step 2.
Step 4: Select the best system alternative and stop the simulation.
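For concreteness, the conventional multi-run procedure of Steps 1–4 can be sketched as below. All names and the toy response model are illustrative assumptions, not the authors' code:

```python
import random

def simulate_replications(design, n_reps=10):
    """Step 2 stand-in: replicate a stochastic model and collect one
    performance value per independent replication (toy response model)."""
    rng = random.Random(hash(design) % (2**32))
    return [sum(design) + rng.gauss(0, 0.1) for _ in range(n_reps)]

def acceptable(outputs, target, tol):
    """Step 3 stand-in: accept a design if its mean response is near the target."""
    mean = sum(outputs) / len(outputs)
    return abs(mean - target) <= tol

def design_by_simulation(candidates, target, tol=1.0):
    """Steps 1-4: evaluate candidate designs by replicated simulation
    until one is accepted; each rejection forces a fresh set of runs."""
    for design in candidates:
        if acceptable(simulate_replications(design), target, tol):
            return design  # Step 4: first accepted alternative
    return None

best = design_by_simulation([(1, 2), (2, 3), (3, 4)], target=5.0)
print(best)  # → (2, 3): the design whose mean response lies within tol of 5.0
```

The recursion between Step 2 and Step 3 in this loop is exactly the multi-run cost that the single-run method of this paper aims to remove.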


The difficulty in the above steps is the recursive execution of Step 2 and Step 3. As the number of design variables of a system increases, we must assign values to the design variables, execute replication runs of the simulation model, and analyse the simulation output data; if the result is not accepted, we must reassign the values of the design variables and execute the replication runs again. These procedures are laborious and consume considerable time. This study therefore deals with a discrete simulation method that obtains the system evaluation criteria required for designing a complex probabilistic discrete event system and searches for effective and reliable alternatives satisfying the objective values of the given system through an on-line, single-run method within a short time period. To find such an alternative, we construct an algorithm that changes the values of the decision variables and the design alternative during the run, together with a stopping algorithm that ends the simulation in the steady state of the system.

In this section we discussed general problems of simulation techniques. In Sect. 2, we review the literature on simulation optimization methods. In Sect. 3, we propose an algorithm to search for a feasible solution; the basic idea of the proposed algorithm, the detailed algorithm to configure the values of the decision variables, and the stopping conditions of the proposed algorithm are discussed. For steady state simulation, we propose an evaluation method that accurately estimates the objective functions from a small amount of output data by fitting an auto-regressive model. In Sect. 4, to demonstrate the applicability and efficiency of the proposed algorithm, we experiment with and analyse an (s,S) inventory model. Finally, we summarize the results of this research and propose future research in Sect. 5.

2 Literature reviews

The problems considered in this paper fall, broadly speaking, into the category of simulation optimization; strictly speaking, however, they are not optimization problems but problems of finding alternatives that satisfy given objectives (a value or an interval) of objective functions evaluated by simulation output analysis. With the proposed algorithm we may obtain best (or optimal) solutions, or we may use its result as an initial solution for obtaining better solutions. Simulation optimization problems have been discussed continuously by Glynn (1986), Meketon (1987), Jacobson and Schruben (1989), Safizadeh (1990), Ho and Cao (1991), Rubinstein and Shapiro (1992), etc. Methods using finite differences, which are widely used in optimization, have the disadvantage that at least n + 1 simulation runs are necessary to estimate the gradient of a given problem when the number of parameters is n (Heidergott 1995). Therefore, to avoid the multiple replication runs that occur in simulation optimization, we must develop optimization methods using single-run simulation. In the single-run optimization field, Perturbation Analysis (Ho and Cao 1983) and the Score Function method (Rubinstein and Shapiro 1993) have been developed, but both focus on continuous decision variables.

Until now, the study of simulation optimization has mainly focused on continuous decision variables, and discrete decision variables were not considered important. Nowadays, however, discrete decision variables matter more than in the past, in fields such as buffer optimization and resource optimization of a system. In simulation optimization, discrete stochastic optimization methods with discrete decision variables have been studied using simulated annealing (Lee and Iwata 1991; Ahmed et al. 1997), the stochastic ruler method (Yan and Mukai 1992), the stochastic comparison method (Gong et al. 1992), random walk (Andradottir 1992, 1995, 1996), the nested partitions method (Shi and Sigurdur 1997), evolutionary (genetic) algorithms (Pierreval and Tautou 1997), the multi-armed bandit method (Barry and Fristedt 1985), learning automata (Yakowitz and Lugosi 1990), etc. However, these methods optimize via multiple simulation runs. Chen (1994) proposes an algorithm using Monte Carlo sampling and retrospective approximation (Fu and Healy 1997) for stochastic root-finding problems expressed by g(x) = γ. Because that method requires many replication runs of the simulation process and uses simple mathematical simulation as the evaluation tool for the objective functions, it applies only when the objective function has the form g(x) = γ; it is not suitable for steady state simulations, which have complex properties and long simulation times per run. Wild and Pignatiello (1994) proposed a method called reverse simulation to find an alternative satisfying given objective values (g(x) ≤ γ) in the design of steady state discrete simulations. The method changes the values of the decision variables during the simulation to find a feasible solution. It is usable in simple cases such as M/M/s systems, where changing the value of a decision variable reveals the direction in which the objective moves. However, the method does not give stable and reliable alternatives, because the values of the alternatives change whenever an entity is released, and it cannot be used when there are various types of objective functions. Also, in the optimal design of discrete event systems by discrete event simulation, previous research did not consider the long simulation runs required, focusing only on stochastic optimization algorithms based on simulation. This study therefore deals with a discrete simulation method that obtains the system evaluation criteria required for designing a complex probabilistic discrete event system and searches for effective and reliable alternatives satisfying the objective values of the given system through an on-line, single run within a short time period.

3 Feasible solution search algorithms

A method to search, in a single simulation run, for the design alternatives that arise in manufacturing systems is proposed as Algorithm 3.1.


[Algorithm 3.1]
Step 1: Set the objective functions (f_i(X)), the decision variables (x_j, j = 1, ..., n), the objective values (A_i), the monotonic increment/decrement (∆x_j) of each decision variable x_j, j = 1, ..., n, and the time interval (∆t) at which the objective functions are evaluated and the values of the decision variables are modified, and start the simulation.
Step 2: During the simulation, compare the objective functions with the objective values, change the value of each decision variable by ∆x_j in the right direction, and continue the simulation (reference: Algorithm 3.2).
Step 3: Check the stopping condition of the algorithm. If the conditions are satisfied, then the combination of decision variable values that has been visited most frequently up to the current time is set as x*_j, j = 1, ..., n (reference: Algorithm 3.3).
Step 4: Using the obtained final solution, run a verification simulation of ample run length and collect the necessary data.

In this paper we suppose that x_j takes discrete values, so ∆x_j is expressed by the minimal resolution of a decision variable. The minimal resolution for integer values is generally 1, but if the search space is very large the user may configure the size of the minimal resolution. The time interval ∆t, at which the objective functions are evaluated and the values of the decision variables are set, can be counted in transactions after the previous modification of the decision variables. For example, when ∆t is 10, we must collect the information of 10 transactions before moving to the next step. If the simulation is a terminating one, ∆t can be the total run length of the simulation. In the case of steady state simulation, however, because an alternative (a combination of the given decision variable values) requires long simulation runs to obtain output data in steady state, the total time to reach the final solution while evaluating alternatives is very long. We should therefore endeavour to reduce it. Generally speaking, with a small ∆t we can evaluate the objective functions frequently, and the system converges more rapidly to the values of the decision variables that satisfy the objective values. However, because the gathered data are few, the evaluation errors of the objective functions are larger than with a large ∆t. Since much time would otherwise be consumed in evaluating the objective functions and configuring the values of the decision variables, we propose an algorithm that effectively estimates the values of the objective functions in steady state from a short simulation. In Step 2 of Algorithm 3.1, the value of each decision variable x_j is changed by ∆x_j in the right direction during the simulation.

3.1 Configuration of decision variables

In Step 2 of Algorithm 3.1 we explained that the value of each decision variable x_j is adjusted by ∆x_j in the desirable direction

using the compared results of the objective functions and the objective values at each ∆t during the simulation. According to whether the value of x_j increases or decreases monotonically, two cases arise: the resulting change of the objective function values is either easily grasped or it is not. As an example of the former case, in a single job shop problem, increasing (decreasing) the number of machines (a decision variable) decreases (increases) the average waiting time (an objective function) of the entities. Likewise, in a job shop problem, increasing (decreasing) the number of machines in the work space increases (decreases) the average machine breakdown time, while increasing (decreasing) the number of repairers decreases (increases) it. As an example of the latter case, we must allow for decision variables whose effect on the objective functions has no known direction: in an (s,S) inventory control system, we cannot tell how the average inventory cost per unit period responds to changing the values of the decision variables (s,S) in the increasing (decreasing) direction, and the proposed algorithm must take this into account.

We now explain the basic algorithm that changes the value of a decision variable x_j at each ∆t during the simulation according to the compared result of the objective functions and the objective values. We assume that the objective function value obtained from the simulation output is either a monotonically increasing or a monotonically decreasing function of each decision variable. We use the form of the given objective A_i ({f_i = c}, {f_i > c}, or {f_i < c}), the values x_{j,t−1} and x_{j,t} of decision variable j at times t − 1 and t, the values y_{i,t−1} and y_{i,t} of objective function i at times t − 1 and t obtained from the simulation model, and the given objective value c. According to the increase or decrease of the decision variable values and the accumulated counts of unchanged decision variables, we change the value of each decision variable by ∆x_j in the direction whose accumulated count has the largest value. With this technique we can handle the case in which the changing direction of a decision variable value conflicts with the characteristics of the objective functions.

3.2 The explanation of notations and algorithm

In this section, the notation and the algorithm for configuring the values of the decision variables are defined as follows:

x_{j,t}:

The output value of decision variable j at time t in the stochastic simulation model.
y_{i,t}: The simulated value of objective function i at time t in the stochastic simulation model.
c: The given objective value of the stochastic simulation model.
ε: The allowed tolerance between y_{i,t} and c.
count^0_j: The count index indicating no change of y_i with decision variable j.
count^+_j: The count index indicating the increasing direction of y_i with decision variable j.


count^−_j: The count index indicating the decreasing direction of y_i with decision variable j.
count*: The maximum of count^0_j, count^+_j, and count^−_j.
a_j: The lower bound of x_j for decision variable j.
b_j: The upper bound of x_j for decision variable j.
L^{++}_i: The set indicating a proportional relation between decision variable j and objective function i. If the value of y_i is monotonically increasing (decreasing) when the value of x_j is monotonically increasing (decreasing), then L^{++}_i includes x_j.
L^{+−}_i: The set indicating an inversely proportional relation between decision variable j and objective function i. If the value of y_i is monotonically increasing (decreasing) when the value of x_j is monotonically decreasing (increasing), then L^{+−}_i includes x_j.
L^0_i: The set of decision variables not included in L^{++}_i or L^{+−}_i.

According to the rules of Sect. 3.1, we can construct a search algorithm to find efficient values of the decision variables as follows.

[Algorithm 3.2]

Step 0: INITIALIZE VARIABLES
Step 1: IF f_i < c AND x_{j,t} > x_{j,t−1} THEN
  IF x_j ∈ L^0_i THEN
    y_{i,t} < c → count^0_j += 1
    y_{i,t−1} ≤ y_{i,t} & y_{i,t} ≥ c → count^−_j += 1
    y_{i,t−1} > y_{i,t} & y_{i,t} ≥ c → count^+_j += 1
  ELSE
    y_{i,t} < c → count^0_j += 1
    y_{i,t} ≥ c & x_j ∈ L^{++}_i → count^−_j += 1
    y_{i,t} ≥ c & x_j ∈ L^{+−}_i → count^+_j += 1
  END IF
END IF
Step 2: IF f_i > c AND x_{j,t} > x_{j,t−1} THEN
  IF x_j ∈ L^0_i THEN
    y_{i,t} > c → count^0_j += 1
    y_{i,t−1} ≤ y_{i,t} & y_{i,t} ≤ c → count^+_j += 1
    y_{i,t−1} > y_{i,t} & y_{i,t} ≤ c → count^−_j += 1
  ELSE
    y_{i,t} > c → count^0_j += 1
    y_{i,t} ≤ c & x_j ∈ L^{++}_i → count^+_j += 1
    y_{i,t} ≤ c & x_j ∈ L^{+−}_i → count^−_j += 1
  END IF
END IF
Step 3: IF f_i < c AND x_{j,t} ≤ x_{j,t−1} THEN
  IF x_j ∈ L^0_i THEN
    y_{i,t} < c → count^0_j += 1
    y_{i,t−1} ≤ y_{i,t} & y_{i,t} ≥ c → count^+_j += 1
    y_{i,t−1} > y_{i,t} & y_{i,t} ≥ c → count^−_j += 1
  ELSE
    y_{i,t} < c → count^0_j += 1
    y_{i,t} ≥ c & x_j ∈ L^{++}_i → count^−_j += 1
    y_{i,t} ≥ c & x_j ∈ L^{+−}_i → count^+_j += 1
  END IF
END IF
Step 4: IF f_i > c AND x_{j,t} ≤ x_{j,t−1} THEN
  IF x_j ∈ L^0_i THEN
    y_{i,t} > c → count^0_j += 1
    y_{i,t−1} ≤ y_{i,t} & y_{i,t} ≤ c → count^−_j += 1
    y_{i,t−1} > y_{i,t} & y_{i,t} ≤ c → count^+_j += 1
  ELSE
    y_{i,t} > c → count^0_j += 1
    y_{i,t} ≤ c & x_j ∈ L^{++}_i → count^+_j += 1
    y_{i,t} ≤ c & x_j ∈ L^{+−}_i → count^−_j += 1
  END IF
END IF
Step 5: IF f_i = c AND x_{j,t} ≠ x_{j,t−1} THEN
  IF x_j ∈ L^0_i THEN
    y_{i,t} = c → count^0_j += 1
    y_{i,t−1} ≤ y_{i,t} & y_{i,t} < c → count^+_j += 1
    y_{i,t−1} > y_{i,t} & y_{i,t} < c → count^−_j += 1
    y_{i,t−1} ≤ y_{i,t} & y_{i,t} > c → count^−_j += 1
    y_{i,t−1} > y_{i,t} & y_{i,t} > c → count^+_j += 1
  ELSE
    y_{i,t} = c → count^0_j += 1
    y_{i,t} < c & x_j ∈ L^{++}_i → count^+_j += 1
    y_{i,t} < c & x_j ∈ L^{+−}_i → count^−_j += 1
    y_{i,t} > c & x_j ∈ L^{++}_i → count^−_j += 1
    y_{i,t} > c & x_j ∈ L^{+−}_i → count^+_j += 1
  END IF
END IF
Step 6: IF f_i = c AND x_{j,t} = x_{j,t−1} THEN
  IF |y_{i,t} − c| ≤ ε THEN
    count^0_j += 1
  ELSE
    IF x_j ∈ L^0_i THEN
      SELECT count^+_j += 1 .OR. count^−_j += 1
    ELSE
      x_j ∈ L^{++}_i → count^−_j += 1
      x_j ∈ L^{+−}_i → count^+_j += 1
    END IF
  END IF
END IF
Step 7: SELECT count* AND GO TO Step 1.
  count* = max[count^0_j, count^+_j, count^−_j]
  count* .EQ. count^+_j → x_{j,t+1} = min[(x_{j,t} + ∆x_j), b_j]
  count* .EQ. count^−_j → x_{j,t+1} = max[a_j, (x_{j,t} − ∆x_j)]

In Step 0 of Algorithm 3.2, we initialize all variables to start the simulation. In Step 1, if the given objective function is f_i < c and the decision variable value obtained in the simulation model satisfies x_{j,t} > x_{j,t−1}, we check the condition x_j ∈ L^0_i. If it holds and the objective value obtained from the simulation model is less than the given objective value, we add 1 to count^0_j to preserve the status; this means that the given objective function is satisfied by the stochastic simulation model. If y_{i,t−1} ≤ y_{i,t} and y_{i,t} ≥ c, we add 1 to count^−_j to move the stochastic simulation model in the lower direction. If y_{i,t−1} > y_{i,t} and y_{i,t} ≥ c, we add 1 to count^+_j to move in the higher direction. In the case x_j ∉ L^0_i, if y_{i,t} ≥ c and x_j ∈ L^{++}_i, we add 1 to count^−_j to move in the lower direction; if y_{i,t} ≥ c and x_j ∈ L^{+−}_i, we add 1 to count^+_j to move in the higher direction. In short, if the obtained value is increasing past the objective we move in the lower direction, if it is decreasing past the objective we move in the higher direction, and if it is holding we keep the steady status of the stochastic simulation model. Steps 2 to 6 follow the same rules as Step 1. In Step 7 we obtain count*: if count* is count^+_j we increase the value of the decision variable, and if count* is count^−_j we decrease it. We then continue the simulation and check for a stopping condition.

When we evaluate a system, we can consider either the direction of the objective function or its size. But when we consider the direction and the size simultaneously in the stochastic simulation model, the changed size depends on the considered system. Therefore, we need not consider the direction and the size of the objective function simultaneously.

3.3 Stopping algorithm

When the algorithm satisfies the objective functions or alternatives at an arbitrary time point, we should stop the simulation. The notation and algorithm for inspecting the stopping conditions of the proposed algorithm at time t are as follows.

x*_j: The value of decision variable j with the highest visit count at the current time t over the last K trials.
K: The number of trials for which the decision variables must visit the allowed intervals of the objective values before the stopping conditions are inspected. K is a constant set by the user (K = 1, 2, ..., n).
η: The integer value used to calculate the tolerance (x*_j ± η∆x_j) of x_j, j = 1, 2, ..., n for the stopping conditions (η = 0, 1, 2, ..., m).

[Algorithm 3.3]
Step 0: Set the integer values of K and η.
Step 1: For the decision variables x_j, j = 1, ..., n, process the following and, if it is satisfied by all decision variables, go to Step 2. Using Algorithm 3.1, obtain x*_j; if the values of all decision variables x_j, j = 1, ..., n have been included in [x*_j ± η∆x_j] over the last K evaluations, go to Step 2. Otherwise, do not stop the proposed algorithm; continue the simulation to time t + ∆t.
Step 2: If the conditions are satisfied for every objective value i, stop the simulation. Otherwise: if η is 0, stop the algorithm and the simulation, and conclude that no alternative satisfying the objective values exists; if η > 0, change η to η = max[0, η − 1] and continue the simulation to time t + ∆t.

In Step 0, the value of K (K = 1, 2, ..., n) depends on the size of the probability obtained by simulation. The value of η (η = 0, 1, ..., m) is changed by the characteristics of the defined problem and decision variables; usually we set η to 0 or 1. In Step 2, even though the values of all decision variables are included in the allowed intervals for stopping the simulation, if they do not satisfy the objective values we should reduce the allowed intervals. If no allowed interval remains (η = 0), we conclude that no alternative satisfying the objective values exists and stop the simulation. If an allowed interval remains (η > 0), we continue the simulation. A large allowed interval means that the changing depth of the decision variables has a large interval, so the decision variables converge rapidly. However,


we should then decrease the allowed interval and simulate again, because the changing depth of the objective value is large relative to the convergence time.
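As an illustration, the interaction of Algorithms 3.2 and 3.3 can be sketched for a single decision variable whose objective is inversely proportional to it (an L^{+−} case with objective f < c). The response function and all names below are illustrative assumptions, and the sketch collapses the count bookkeeping to one observation per interval ∆t:

```python
def search_single_run(simulate_dt, x0, dx, c, lower, upper, K=10, max_steps=500):
    """Sketch of Algorithms 3.2/3.3 for one decision variable x with
    objective f < c, where y is assumed inversely proportional to x (L+-).
    simulate_dt(x): returns the objective estimate y over one interval dt.
    Stops once some value of x has been visited K times (eta = 0 case)."""
    x = x0
    visits = {}
    for _ in range(max_steps):
        y = simulate_dt(x)
        visits[x] = visits.get(x, 0) + 1
        if visits[x] >= K:
            return x                          # Algorithm 3.3 stopping rule
        counts = {"0": 0, "+": 0, "-": 0}
        if y < c:
            counts["0"] += 1                  # objective satisfied: keep x
        else:
            counts["+"] += 1                  # y too large and y ~ 1/x: raise x
        best = max(counts, key=counts.get)    # Step 7: pick the most-voted move
        if best == "+":
            x = min(x + dx, upper)            # b_j: upper bound
        elif best == "-":
            x = max(x - dx, lower)            # a_j: lower bound
    return max(visits, key=visits.get)        # most-visited value so far

# toy M/M/s-like response: average waiting falls as the server count s grows
waiting = lambda s: 20.0 / s
print(search_single_run(waiting, x0=1, dx=1, c=4.0, lower=1, upper=10))  # → 6
```

Starting from one server, the sketch walks upward until the simulated waiting time first falls below c = 4, then keeps revisiting that value until the visit count reaches K and the search stops.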

3.4 The evaluation of the objective functions in steady state

In Algorithm 3.1, explained in the previous section, when ∆t becomes small we can evaluate an objective function frequently and the system converges more rapidly to the values of the decision variables that satisfy the objective values. However, the evaluation errors of the objective functions are larger than with a large ∆t because the gathered data are few. To solve this problem, we propose an algorithm that efficiently estimates the objective functions in steady state using the small amount of data obtained from short simulation runs. As pointed out in the previous section, obtaining estimates unaffected by the initial bias of the simulation output requires very long simulation runs, and many researchers have worked to reduce the initial bias (Law and Kelton 1995). Voss et al. (1996) propose an algorithm that efficiently obtains the values of objective functions in steady state from a short simulation of the transient period using an auto-regressive model. In their experiments, compared with the previous methods of unweighted batch means (Law and Kelton 1995) and weighted batch means (Bischak et al. 1993), the method is superior or similar in terms of mean error, root mean square error, coverage, mean interval length, etc. Therefore, in this paper we develop an algorithm to estimate the values of the objective functions during a short ∆t based on the method proposed by Voss et al. Because the output data obtained during a short transient period are strongly correlated, the output process is expressed very well by the auto-regressive model (Voss et al. 1996).

3.4.1 Auto-regressive model

Output data of the simulation model in short transient periods have strong auto-correlation. We can therefore describe the output process with an auto-regressive model (Fishman 1978), which supposes that a new observation y_t is linearly related to the previous observations. The auto-regressive model AR(p) of order p is expressed by Eq. 1 (Fuller 1996):

y_t = φ_0 + Σ_{i=1}^{p} φ_i y_{t−i} + ε_t ,   t = 1, 2, . . .   (1)

where the order p is finite and {ε_t, t = 1, 2, . . .} are IID normal random variables with mean 0 and variance σ²_ε < ∞. Φ = {φ_0, φ_1, . . . , φ_p} denotes the vector of auto-regressive coefficients. If we suppose the average of the system responses converges to a single point, the average in steady state, µ = µ(Φ), is expressed by Eq. 2:

µ(Φ) = lim_{t→∞} E[X_t] = φ_0 (1 − Σ_{i=1}^{p} φ_i)^{−1}   (2)

If we estimate Φ̂ as the estimate of the coefficient vector Φ, the steady-state estimate of the auto-regressive model is expressed by µ(Φ̂) = φ̂_0 {1 − Σ_{i=1}^{p} φ̂_i}^{−1}. The conditional least squares estimate Φ̂ is obtained by Eq. 3 (Fuller 1996):

Φ̂ = A_n^{−1} v_n ,   (3)

where

A_n = (1/(n − p)) Σ_{t=p+1}^{n} Y_t Y_t′ ,
Y_t = (1, y_{t−1}, . . . , y_{t−p})′ ,
v_n = (1/(n − p)) Σ_{t=p+1}^{n} Y_t y_t .

If Eq. 1 holds and ε_t has the properties of the normal distribution, Φ̂ becomes the conditional maximum likelihood estimate and µ(Φ̂) becomes the conditional maximum likelihood estimate of µ (Voss et al. 1996).

3.4.2 The order estimates of the auto-regressive model

The selection of an effective order p of the AR(p) model of Eq. 1, given the number of data n, is based on the method proposed by Broersen and Wensink (1993). Algorithm 3.4 explains the procedure.

[Algorithm 3.4]
Step 1: Select the maximum value p_max of p.
Step 2: Calculate S²(p) for 0 ≤ p ≤ p_max.
Step 3: Calculate FIC(p) for 0 ≤ p ≤ p_max.
Step 4: Select the order p* that minimizes FIC(p).

S²(p) denotes the residual variance and the ν_i are the finite sample variance coefficients, for which equations of many types exist. In this paper we use the Yule-Walker method, for which the variance of Φ̂ is minimized in the experimentation of Broersen and Wensink (1993); the formulas are expressed by Eq. 4:

S²(p) = (1/n) Σ_{t=1}^{n} y_t² · Π_{i=1}^{p} (1 − ν_i)
ln{S²(p)} = ln((1/n) Σ_{t=1}^{n} y_t²) + Σ_{i=1}^{p} ln(1 − ν_i)
ν_i = (n − i) / (n(n + 2)) ≈ 1/n   (4)


FIC(p) means the Finite Information Criterion and is a standard that measures the prediction error of the model using the coefficients of order p. In this paper we use Eq. 5 to consider the aptness and efficiency for future data that are not used:

FIC(p) = ln{S²(p)} + 2 Σ_{i=1}^{p} ν_i   (5)
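Under the formulas above, Algorithm 3.4 can be sketched as follows. The residual variance here is computed via the standard Levinson-Durbin recursion on the sample autocovariances, which is one way to realize the Yule-Walker method named in the text; the finite-sample coefficients ν_i and FIC follow Eqs. 4 and 5. A sketch under stated assumptions, not the authors' implementation:

```python
import math
import random

def autocov(y, k):
    """Biased sample autocovariance at lag k (mean assumed removed)."""
    n = len(y)
    return sum(y[t] * y[t - k] for t in range(k, n)) / n

def levinson(y, p):
    """Levinson-Durbin (Yule-Walker) recursion: residual variance E_p
    of the best AR(p) linear predictor."""
    e = autocov(y, 0)
    a = []
    for m in range(1, p + 1):
        k = (autocov(y, m)
             - sum(a[i] * autocov(y, m - 1 - i) for i in range(m - 1))) / e
        a = [a[i] - k * a[m - 2 - i] for i in range(m - 1)] + [k]
        e *= 1 - k * k
    return e

def select_order(y, p_max=20):
    """Algorithm 3.4: choose p* minimizing FIC(p) = ln S^2(p) + 2 sum(nu_i),
    with nu_i = (n - i) / (n * (n + 2)) as in Eq. 4."""
    n = len(y)
    best_p, best_fic = 0, float("inf")
    for p in range(0, p_max + 1):
        nu = [(n - i) / (n * (n + 2)) for i in range(1, p + 1)]
        s2 = levinson(y, p) * math.prod(1 - v for v in nu)  # finite-sample S^2(p)
        fic = math.log(s2) + 2 * sum(nu)                    # Eq. 5
        if fic < best_fic:
            best_p, best_fic = p, fic
    return best_p

# illustration: a strongly correlated AR(1) series should yield p* >= 1
random.seed(1)
y = [0.0]
for _ in range(400):
    y.append(0.8 * y[-1] + random.gauss(0, 1))
print(select_order(y))
```

For white noise the penalty term dominates and p* tends to 0, while for correlated output the drop in residual variance outweighs the penalty, so a positive order is selected.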

The reason to use FIC is that FIC is approximately similar to GIC (Generalized Information Criterion), the generalized form of AIC (the Akaike Information Criterion) (Broersen and Wensink 1993; Voss et al. 1996). In theoretical terms, when we use the batch means method, p_max takes the value (batch size − 1). In this paper, however, we set p_max = 20 to cover the value p = 15 (n ≥ 256) proposed by Voss et al. (1996) and the p value used by the weighted batch means method of Bischak et al. (1993).

3.4.3 The estimates of average and confidence interval in the steady state condition

The formulas to estimate the average and confidence interval in steady state from the n data obtained by a single run are derived as follows. If we set

Ȳ_{n−p} = (1/(n − p)) Σ_{t=p+1}^{n} y_t

as the p-truncated sample mean and {Y_t} converges to a covariance stationary process satisfying Eq. 1, then µ̂_n, the estimate of the steady-state average µ(Φ), is expressed by Eq. 6, which combines the standard average with a bias correction. The method to obtain the average in steady state is arranged in Algorithm 3.5, and µ̂_n is applied in Algorithms 3.1, 3.2, and 3.3.

µ̂_n = Ȳ_{n−p} + [ Σ_{i=1}^{p} φ̂_i ( Σ_{t=p−i+1}^{p} y_t − Σ_{t=n−i+1}^{n} y_t ) ] / [ (n − p) (1 − Σ_{i=1}^{p} φ̂_i) ]   (6)

[Algorithm 3.5]
Step 1: Calculate p* using Algorithm 3.4.
Step 2: Calculate Φ̂ using Eq. 3.
Step 3: Calculate µ̂_n using Eq. 6.
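A minimal sketch of Steps 2–3 for small p, assuming the simpler steady-state mean µ(Φ̂) = φ̂_0 {1 − Σ φ̂_i}^{−1} of Eq. 2 rather than the bias-corrected Eq. 6; the conditional least squares estimate of Eq. 3 is computed from the normal equations in pure Python (illustrative, not the authors' code):

```python
import random

def fit_ar_mean(y, p):
    """Conditional least squares for y_t = phi_0 + sum_i phi_i y_{t-i} + e_t
    (Eq. 3), then the steady-state mean mu = phi_0 / (1 - sum_i phi_i)
    (Eq. 2). Dense normal equations: intended for small p only."""
    n = len(y)
    rows = [[1.0] + [y[t - i] for i in range(1, p + 1)] for t in range(p, n)]
    targets = [y[t] for t in range(p, n)]
    d = p + 1
    # normal equations A phi = v with A = X'X, v = X'y
    A = [[sum(r[a] * r[b] for r in rows) for b in range(d)] for a in range(d)]
    v = [sum(r[a] * yt for r, yt in zip(rows, targets)) for a in range(d)]
    # Gaussian elimination with partial pivoting
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for cc in range(col, d):
                A[r][cc] -= f * A[col][cc]
            v[r] -= f * v[col]
    phi = [0.0] * d
    for r in range(d - 1, -1, -1):
        phi[r] = (v[r] - sum(A[r][cc] * phi[cc] for cc in range(r + 1, d))) / A[r][r]
    mu = phi[0] / (1 - sum(phi[1:]))
    return phi, mu

# illustration: AR(1) with phi_0 = 2, phi_1 = 0.5 has steady-state mean 4
random.seed(7)
y = [4.0]
for _ in range(2000):
    y.append(2.0 + 0.5 * y[-1] + random.gauss(0, 1))
phi, mu = fit_ar_mean(y, p=1)
print(round(mu, 2))
```

With enough data the fitted mean approaches the true steady-state value of 4; the bias correction of Eq. 6 refines this estimate when the series is short.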

A company that sells a single product would like to decide how many items it should have in inventory for each of the next n months. The times between demands are IID exponential random variables with a mean of 0.1 month. The size of the demands, D, are IID random variables (independent of when the demands occur), with  1 w. p.1/6    2 w. p.1/3 , D=  3 w. p.1/3    4 w. p.1/6 where w.p. is read “with probability.” At the beginning of each month, the company reviews the inventory level and decides how many items to order from its supplier. If the company orders Z items, it incurs a cost of K + i Z, where K = $32 is the setup cost and i = $3 is the incremental cost per item ordered. (If Z = 0, no cost is incurred.) When an order is placed, the time required for it to arrive (called the delivery lag or lead time) is a random variable that is distributed uniformly between 0.5 and 1 month. The company uses a stationary (s,S) policy to decide how much to order, i.e.,  S − I if I < s Z= , 0 if I ≥ 0 where I is the inventory level at the beginning of the month. And the more detailed contents are explained in Law and Kelton (1995). In the (s,S) inventory model the occurred costs are ordering cost, holding cost, and shortage cost and we set objective functions as average total cost which is the sum of ordering cost, holding cost, and shortage cost. In this experimentation, we set (s,S) as (40, 60) and process long run simulations. Using the results we test to see if we find the (s,S) near to the experimental value with changing ∆t. To determine the experimental value we simulate during 10 000 months with condition (40, 60), from which we obtain an average total cost of 125.74, which is the experimental value. Therefore the mathematical model of the explained (s,S) inventory control system is explained as following.

Finally, in order to select the optimal system alternative from a small amount of data, we substitute µ̂_n from Algorithm 3.5 for y_{i,t} in Step 2 of Algorithm 3.2.

arg{E[f(X)] ≤ c}

4 Experimentation and evaluation

[x1, x2] = [s: reorder point, S: quantity]

In Sect. 3 we developed an algorithm focused on two cases: whether or not, when the value of a decision variable is changed, the direction in which the objective function moves can be guessed. We therefore take an (s,S) inventory control problem, which covers both cases, as the example under consideration and explain the results. We use the example described by Law and Kelton (1995).

In the proposed algorithm we initialize the lower and upper bounds of the reorder point, s, as (10, 50) and the lower and upper bounds of the quantity, S, as (20, 70). We start the simulation with the initial value of (s,S) set to (10, 30). Also, we set the stopping condition, K, as 10, the allowed error of variable A, α, as ±5, and the incremental values of the decision variables, ∆s and

E[f(X)] = average inventory cost incurred in each unit period
A = {125.74 − α ≤ average cost incurred in each unit period ≤ 125.74 + α}


∆S, as 1. In this paper, we suppose the applied auto-regressive model has correlation between data, so we maintain the correlation of y_{i,t} obtained from the (s,S) model using the accumulated average of entities. Figure 1 shows the results of the proposed algorithm when ∆t is 10, 30, and 100. We also see that the total cost converges during the simulation to 125.74, the experimental value. Table 1 gives the values of (s,S) and the average inventory costs obtained when the simulation stops, for each ∆t.

In Table 1, when ∆t is 50 we observe that the average inventory cost converges to 125.20, close to the experimental value of 125.74, and that the obtained value (39, 59) of the decision variables (s,S) is approximately equal to the experimental value (40, 60). When ∆t is 100, the proposed algorithm determines (s,S) as (39, 59) and the obtained average total cost, 125.62, is again close to 125.74. When ∆t is less than or equal to 30, the simulation ends sooner than for ∆t > 30, but the deviations of the obtained objective function values and decision variable values from the experimental values are large, so we know that ∆t should be increased.

To find an adequate ∆t, we experimented with an M/M/s model, a known-function problem, and the (s,S) inventory model. However, we could not find good design alternatives when ∆t was larger than 1000 in problems with more than 4 decision variables and in problems where we cannot guess the direction in which the objective function value moves as a decision variable is changed. In general, to obtain an adequate system alternative that satisfies the given system requirements, we should increase the size of ∆t. However, because the size of ∆t is intimately tied to the problem at hand, the properties of ∆t are problem dependent.

Table 1. The results of simulation by the proposed algorithm in the (s,S) inventory system

 ∆t    (s,S)       Average total cost
 10    (29, 49)    121.05
 30    (32, 52)    122.12
 50    (39, 59)    125.20
 100   (39, 59)    125.62

Fig. 1. Average total cost at ∆t in the (s,S) inventory system

5 Conclusions and future research

This study deals with a discrete simulation method for obtaining the system evaluation criteria required to design a complex probabilistic discrete event system, and for searching for effective and reliable alternatives that satisfy the objective values of the given system through an on-line, single-run method within a short time period. When we analyse alternatives of the decision variables in steady state, an auto-regressive model is used to prevent data loss, so that the given objective functions can be evaluated from short simulation runs and few output data. Using the proposed algorithm, we obtain values of the decision variables that satisfy the desired objective levels with a single run and few output data, without replication runs.

Because the size of ∆t that satisfies the objective functions can vary greatly with the domain of the given problem and the characteristics of the variables, care must be taken to select a suitable ∆t. In future research, the algorithm for searching and stopping in steady-state simulation should be improved for the case when the decision variables of a system are qualitative factors: operating rules (FIFO, LIFO, etc.), job sequences, or facility layouts.

References

1. Ahmed MA, Alkhamis TM, Hasan M (1997) Optimizing discrete stochastic systems using simulated annealing and simulation. Comput Ind Eng 32(4):823–836
2. Andradottir S (1996) A global search method for discrete stochastic optimization. SIAM J Optim 6(2):513–530
3. Andradottir S (1995) A method for discrete stochastic optimization. Manage Sci 41(12):1946–1961
4. Andradottir S (1992) Discrete optimization in simulation: a method and applications. Proceedings, 1992 Winter Simulation Conference, pp 483–486
5. Azadivar F (1992) A tutorial on simulation optimization. Proceedings, 1992 Winter Simulation Conference, pp 198–204
6. Barry DA, Fristedt B (1985) Bandit problems. Chapman and Hall, London
7. Bischak DP, Kelton WD, Pollock SM (1993) Weighted batch means for confidence intervals in steady state simulations. Manage Sci 39:1002–1019
8. Broerson PM, Wensink HE (1993) On finite sample theory for autoregressive model order selection. IEEE Trans Signal Proc 41:194–204
9. Chen H (1994) Stochastic root finding in system design. Working paper SMS94-8, School of Industrial Engineering, Purdue University, USA
10. Fishman GS (1978) Principles of discrete event simulation. Wiley, New York
11. Fu MC, Healy KJ (1997) Techniques for optimization via simulation: an experimental study on an (s,S) inventory system. IIE Trans 29(3):191–200
12. Fuller WA (1996) Introduction to statistical time series. Wiley, New York
13. Glynn PW (1986) Optimization of stochastic systems. Proceedings, 1986 Winter Simulation Conference, pp 52–59
14. Gong WB, Ho YC, Zhai W (1992) Stochastic comparison algorithm for discrete optimization with estimation. Proceedings of the 31st IEEE Conference on Decision and Control, pp 795–800
15. Heidergott B (1995) Sensitivity analysis of a manufacturing workstation using perturbation analysis techniques. Int J Prod Res 3:611–622
16. Ho YC, Cao XR (1991) Perturbation analysis of discrete event dynamic systems. Kluwer, Boston
17. Ho YC, Cao XR (1983) Perturbation analysis and optimization of queueing networks. J Optim Theory Appl 4(4):559–582
18. Jacobson SH, Schruben LW (1989) Techniques for simulation response optimization. Oper Res Lett 8:1–9
19. Law AM, Kelton WD (1995) Simulation modeling and analysis. McGraw-Hill, Maidenhead
20. Lee YH, Iwata K (1991) Part ordering through simulation-optimization in an FMS. Int J Prod Res 29(7):1309–1323
21. Meketon MS (1987) Optimization in simulation: a survey of recent results. Proceedings, 1987 Winter Simulation Conference, pp 58–67
22. Pierreval H, Tautou L (1997) Using evolutionary algorithms and simulation for the optimization of manufacturing systems. IIE Trans 29:181–189
23. Rubinstein RY, Shapiro A (1993) Discrete event systems. Wiley, New York
24. Rubinstein RY, Shapiro A (1992) Discrete event systems: sensitivity analysis and stochastic optimization by the score function method. Wiley, New York
25. Safizadeh MH (1990) Optimization in simulation: current issues and the future outlook. Naval Res Logistics 37:807–825
26. Shi L, Sigurdur O (1997) Nested partitions method for stochastic optimization. Technical report, Dept of IE, University of Wisconsin-Madison
27. Voss PA, Haddock J, Willemain TR (1996) Estimating steady state mean from short transient simulations. Proceedings, 1996 Winter Simulation Conference, pp 222–229
28. Wild RH, Pignatiello JJ (1994) Finding stable system designs: a reverse simulation technique. Commun ACM 35(10):87–98
29. Yakowitz S, Lugosi E (1990) Random search in the presence of noise with application to machine learning. SIAM J Sci Stat Comput 11:702–712
30. Yan D, Mukai H (1992) Stochastic discrete optimization. SIAM J Control Optim 30:594–612
