Social Sciences Research Journal, Volume 5, Issue 1, 150-160 (March 2016), ISSN: 2147-5237
Comparison of Periodic Review Inventory Models under Poisson Demand Distribution

İbrahim Zeki Akyurt*
İstanbul University School of Business

ABSTRACT

This study deals with single-echelon, single-item inventory models under a Poisson demand distribution. Inventory control is based on periodic review: at the beginning of each period, the decision maker must decide whether to replenish the inventory. In the first model, the decision maker orders up to the replenishment point in every period, regardless of the inventory level. In the second model, the decision maker reviews the inventory level at the beginning of the period and orders only when the inventory level is at or below the reorder point. Demand during the period is stochastic and follows a Poisson distribution. The demand structure is analyzed as Markov modulated demand, so the inventory control model is formulated as a stochastic decision process. The model accounts for the penalty cost and the order cost, and there is no backlogging. The holding cost is very low and is therefore ignored in this study. Long-term average costs of the two periodic review models are compared using raw material demand data from an international polyurethane firm.

Keywords: Inventory control, periodic review, Markov, Poisson distribution
Introduction

Single-echelon, single-location inventory models have been studied extensively in the literature (Kiesmüller et al., 2011). In industry, the periodic review (R,s,S) model and the continuous review (s,S) model are widely used. However, some kinds of material are not suitable for continuous review. In successful inventory management, the decision maker places an order at the point where the sum of holding cost and order cost is minimal. Sometimes demand exceeds the inventory on hand and a shortage occurs. At that moment, production may stop if the item is a raw material; if it is a finished product, lost sales or backorders may emerge. Many inventory models have been developed accordingly.

In addition to the classic economic order quantity model based on a fixed order quantity, periodic review models play a significant role in inventory management. Under periodic review, the inventory is reviewed at fixed intervals, and the decision maker decides whether or not to replenish it. In the base stock inventory model, replenishment is performed at the beginning of each period, and the order quantity varies so as to fill the inventory up to a certain replenishment point. For instance, the inventory level of a certain product in a supermarket is counted every week, and the shelves are refilled completely. This model contains two parameters, the period (R) and the replenishment point (S), and is therefore called the (R,S) model. The other periodic review model also reviews the inventory level periodically. If the inventory level is higher than the reorder point at the beginning of the period, no order is placed. However, if it is at or below the reorder point, an order is placed to fill the inventory up to a certain replenishment point, so the order quantity may differ from period to period.

*
Assist. Prof.,
[email protected].
This model contains three parameters, the period (R), the reorder point (s), and the replenishment point (S), and is therefore called the (R,s,S) model. It can be seen as a periodic review version of the two-bin system. In both models, demand is random, so the analysis and modeling of its probability distribution is necessary. Demand may be continuous or discrete. In this study, demand follows a discrete Poisson distribution and is formulated as a Markov process; the problem therefore becomes a stochastic Markov decision process.

Markov decision processes can be used in inventory problems to determine the optimal inventory policy (Zhu et al., 2015; Taylor & Karlin, 1984; Papoulis, 1984; Doob, 1990; Hillier & Lieberman, 2000; Ross, 2003). Krishnamoorthy (2013) solved the continuous review (s,S) inventory policy problem using a Markov decision process under a Poisson distribution. Diaz et al. (2016) formulated demand as a Markov process and developed a simulation-based algorithm for a periodic review model with shortage. Nasr & Maddah (2015) analyzed the continuous review inventory model as a Markov modulated Poisson process; in their study, lead time is normally distributed. Zhang et al. (2008) studied Markov decision processes in the context of pricing strategy and inventory policy. Ahıska & King (2009) proposed an optimal inventory policy for a remanufacturable product.

This study deals with two periodic inventory control models. Demand follows a discrete Poisson distribution and is formulated as a Markov process. The two models are compared in terms of long-term expected average inventory costs. Model parameters are taken as fixed values.

Mathematical Model

In this study, the periodic review (R,s,S) and (R,S) inventory control models are compared in terms of long-term expected average inventory costs. A random process in which the next state depends only on the current state is called a Markov process.
There is only a serial dependence between adjacent periods; earlier states are irrelevant. A Markov process is a stochastic process in which the probability of the transition to the next state depends only on the present state, not on previous states (Saldana & Changho, 2000: 204). A stochastic discrete-state process with the Markov property is called a Markov chain (Dayar, 1994: 2). All possible values of the Markov chain are represented as positive integers. For X_t = i_t, Equation 1 expresses the Markov property:
P(X_{t+1} = i_{t+1} | X_t = i_t, ..., X_0 = i_0) = P(X_{t+1} = i_{t+1} | X_t = i_t)    (1)
In our model, the demand distribution is stochastic. The demands ξ_1, ξ_2, ξ_3, ..., ξ_n over n periods are random variables, with probabilities as in Equation 2:

α_k = P(ξ_n = k),  k = 0, 1, 2, 3, 4, ...    (2)
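Under this Poisson assumption, the probabilities α_k of Equation 2 can be computed directly from the demand rate. A minimal sketch (the rate value 3.9 is illustrative; the paper's numerical example uses it later):

```python
import math

def demand_pmf(lam: float, k: int) -> float:
    """alpha_k = P(xi_n = k) for Poisson-distributed period demand (Equation 2)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# first few alpha_k values for an illustrative weekly demand rate
alphas = [demand_pmf(3.9, k) for k in range(5)]
```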
According to the (R,s,S) model, the review time points are t_n = nR, n ≥ 1. An amount of demand k occurs during the period. The inventory level at the beginning of the next period (x_1, x_2, x_3, ..., x_n) is determined by k (Equation 3).
x_{n+1} = { S − ξ_{n+1},    if X_n ≤ s
          { X_n − ξ_{n+1},  if X_n > s    (3)
It can be seen that ξ_{n+1} has no time dependency. The inventory level x_n is defined as a state, and the next state is influenced only by this state. Therefore, the (R,s,S) inventory model described above has the Markov property. To formulate the Markov transition matrix, the conditional probability of the transition from state i = x_n to state j = x_{n+1} has to be calculated. If the inventory level is between 0 and s at the end of period n, an order is placed and the inventory level at the beginning of period n+1 becomes S. If the demand ξ_{n+1} is equal to or higher than the replenishment point, state j will be 0 (Equation 4).

P(X_{n+1} = 0 | X_n = i) = P(S − ξ_{n+1} ≤ 0) = P(ξ_{n+1} ≥ S) = Σ_{k=S}^{∞} α_k    (4)
If the demand of the period ξ_{n+1} is between 0 and S − 1 units, x_{n+1} will not be 0, i.e. j = 1, 2, ..., S. In this case, the conditional probability is given in Equation 5:

P(X_{n+1} = j | X_n = i) = P(S − ξ_{n+1} = j) = P(ξ_{n+1} = S − j) = α_{S−j}    (5)

If the inventory level is between s and S at the end of period n, no order is placed and the inventory level at the beginning of period n+1 stays at level i. To formulate the transition matrix, we have to analyze the cases where j equals 0, lies between 0 and i, or is higher than i. The conditional probability for j = 0 is shown in Equation 6.
P(X_{n+1} = 0 | X_n = i) = P(i − ξ_{n+1} ≤ 0) = P(ξ_{n+1} ≥ i) = Σ_{k=i}^{∞} α_k    (6)
The conditional probability for 0 < j ≤ i is obtained when the demand ξ_{n+1} is between 0 and i − 1 (Equation 7):

P(X_{n+1} = j | X_n = i) = P(i − ξ_{n+1} = j) = P(ξ_{n+1} = i − j) = α_{i−j}    (7)

The conditional probability for the states j > i is equal to 0 (Equation 8):

P(X_{n+1} = j | X_n = i) = 0    (8)

The transition probabilities can therefore be summarized as in Equation 9:

p_ij = { Σ_{k=S}^{∞} α_k,   if 0 ≤ i ≤ s and j = 0
       { α_{S−j},           if 0 ≤ i ≤ s and 1 ≤ j ≤ S
       { Σ_{k=i}^{∞} α_k,   if s+1 ≤ i ≤ S and j = 0    (9)
       { α_{i−j},           if s+1 ≤ i ≤ S and 1 ≤ j ≤ i
       { 0,                 if s+1 ≤ i ≤ S and j ≥ i+1

Using these probabilities, the transition matrix is obtained as in Figure 1. Because every row sums to 1, and the states are countable and discrete, the chain has a steady state and is therefore suitable for analysis.
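The case analysis above can be collected into a single matrix builder. A sketch assuming the Poisson demand of Equation 2 (the function and parameter names are ours, not the paper's):

```python
import math

def poisson_pmf(lam: float, k: int) -> float:
    return math.exp(-lam) * lam ** k / math.factorial(k)

def transition_matrix(s: int, S: int, lam: float) -> list:
    """Transition matrix of the (R,s,S) chain per Equation 9.

    States are inventory levels 0..S. Rows 0..s order up to S at review;
    rows s+1..S place no order."""
    alpha = [poisson_pmf(lam, k) for k in range(S + 1)]
    tail = lambda m: 1.0 - sum(alpha[:m])     # P(demand >= m)
    P = [[0.0] * (S + 1) for _ in range(S + 1)]
    for i in range(S + 1):
        top = S if i <= s else i              # stock level after the review decision
        P[i][0] = tail(top)                   # demand wipes out the stock
        for j in range(1, top + 1):
            P[i][j] = alpha[top - j]          # demand was exactly top - j units
    return P
```

Each row sums to 1 by construction, since the tail probability absorbs all demand at or above the post-review stock level.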
Figure 1. Transition Matrix of (R,s,S)

          j = 0                j = 1      j = 2      ...   j = S−s    ...   j = S−1   j = S
i = 0     Σ_{k=S}^{∞} α_k      α_{S−1}    α_{S−2}    ...   α_{S−s}    ...   α_1       α_0
i = 1     Σ_{k=S}^{∞} α_k      α_{S−1}    α_{S−2}    ...   α_{S−s}    ...   α_1       α_0
...
i = s     Σ_{k=S}^{∞} α_k      α_{S−1}    α_{S−2}    ...   α_{S−s}    ...   α_1       α_0
i = s+1   Σ_{k=s+1}^{∞} α_k    α_s        α_{s−1}    ...   α_0        0     ...       0
i = s+2   Σ_{k=s+2}^{∞} α_k    α_{s+1}    α_s        ...   α_1        α_0   0 ...     0
...
i = S−1   Σ_{k=S−1}^{∞} α_k    α_{S−2}    α_{S−3}    ...   α_{S−s−1}  ...   α_0       0
i = S     Σ_{k=S}^{∞} α_k      α_{S−1}    α_{S−2}    ...   α_{S−s}    ...   α_1       α_0

Rows 0 through s are identical, since every such state orders up to S; for i > s, row i ends with zeros in the columns j > i.
Given the state-space distribution π = [π_0, π_1, ..., π_S] of an irreducible ergodic Markov chain, lim_{n→∞} P^n is as in Equation 10 (Winston, 2004: 934), and Equation 11 follows (Ross, 2003: 200):

lim_{n→∞} P^n = [ π_0  π_1  ...  π_S ]
                [ π_0  π_1  ...  π_S ]    (10)
                [ ...              ]
                [ π_0  π_1  ...  π_S ]

π_j = lim_{n→∞} P_ij^n    (11)

The vector π = [π_0, π_1, ..., π_S] is called the steady state (Medhi, 2003: 5). As a result, the transition matrix representing the long-term behavior takes the following values (Equation 12).
π_j = lim_{n→∞} P_ij^n = lim_{n→∞} P_ij^{n+1},  where  P_ij^{n+1} = Σ_k P_ik^n p_kj,

so that

π_j = Σ_{k=0}^{S} π_k p_kj,  i.e.  π = πP    (12)

Equation 12 alone admits infinitely many solutions. Therefore, the following normalization condition must also be fulfilled (Equation 13) (Ross, 2003: 201):

Σ_{j=0}^{S} π_j = 1    (13)
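Equations 12 and 13 can be solved numerically by iterating the chain until the state distribution stops changing. A sketch using simple power iteration (the iteration count is an assumption; any convergence test would do):

```python
def stationary_distribution(P: list, iters: int = 500) -> list:
    """Approximate pi satisfying pi = pi P (Equation 12) with sum(pi) = 1
    (Equation 13), by repeatedly multiplying a uniform start vector by P."""
    n = len(P)
    pi = [1.0 / n] * n                      # uniform start already sums to 1
    for _ in range(iters):
        pi = [sum(pi[k] * P[k][j] for k in range(n)) for j in range(n)]
    return pi
```

For an irreducible ergodic chain the iterates converge to the unique steady state regardless of the starting vector.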
The Markov decision process consists of five elements: decision epochs, states, actions, transition probabilities, and penalties or rewards (Puterman, 1994: 17). The set of decision epochs T may be discrete or continuous, finite or infinite. The periodic review inventory model is discrete, because the review periods are fixed and equal to each other. At each decision epoch, the process is in a state i belonging to the discrete state set M. In state i, the decision maker chooses an action from A_i; in our model, A_i is both discrete and finite. As a result of the action a ∈ A_i, the reward r_t(i, a) is received. In our inventory model, the action always leads to a negative reward, namely a cost. The reward depends on the state at time t + 1 and can be written r_t(i, a, j). The expected value of r_t(i, a) is given in Equation 14:

r_t(i, a) = Σ_{j∈S} r_t(i, a, j) p_t(j | i, a),  where  Σ_{j∈S} p_t(j | i, a) = 1    (14)
In sum, the (R,s,S) inventory policy has the Markov property. The decision maker takes an action a ∈ A_i at every decision epoch t, and each action incurs a negative reward (cost). This is therefore a Markov decision process policy, and the policy is both deterministic and stationary (Hillier & Lieberman, 2001: 1057). The number of decisions is S − s + 1. The decisions and the acts they define within the policy R_b are shown in Table 1.

Table 1. Decisions and Acts in the (R,s,S) Inventory Policy

Decision d_i(R)    Act a ∈ A_i
0                  No order
1                  Order S − s units
2                  Order S − s + 1 units
...                ...
K − 2              Order S − 2 units
K − 1              Order S − 1 units
K                  Order S units

The order cost for u units is shown in Equation 15.
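Because the policy is deterministic and stationary, the decision in Table 1 is a fixed function of the observed state. A one-line sketch of that mapping (names are ours):

```python
def act(i: int, s: int, S: int) -> int:
    """Order quantity chosen in state i under the (R,s,S) policy (Table 1):
    order up to S when the level is at or below s, otherwise place no order."""
    return S - i if i <= s else 0
```

For example, with the reorder point s = 4 and replenishment point S = 7 used later in the paper, state 0 yields an order of 7 units and state 5 yields no order.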
O(u) = { K + c(u),  u > 0
       { 0,         u = 0    (15)
The holding cost of u units during a time period is denoted h(u). As a result, all these costs are a function of the demand and the state. The expected long-term average cost follows from Equation 14 and is shown in Equation 16 (Hillier & Lieberman, 2001: 816).
lim_{n→∞} E[ (1/n) Σ_{t=1}^{n} C(X_t, ξ_{t+1}) ] = Σ_{j=0}^{S} k(j) π_j    (16)

k(j) = E[ C(X_t, ξ_{t+1}) ]    (17)
Equation 17 shows the expected cost in state X_t with demand ξ_{t+1}.

Numerical Example

This study is based on real data on the raw material MDI from an international firm producing and exporting polyurethane systems. MDI (Methylene Diphenyl Diisocyanate) is the raw material of polyurethane and has limited production worldwide. MDI is highly nondurable and is imported from Germany and Hungary. Although MDI is priced and purchased per kilogram, it can be ordered and imported only in containers; a container holds 14,625 kg of MDI, so the inventory model is formulated in container units. Since it is impossible to review the MDI level continuously, the inventory is reviewed periodically. To obtain the inventory level X_n, the reviewed inventory level is divided by 14,625 kg and rounded to the nearest integer. Weekly demand data from the last 3 years follow a Poisson distribution with λ = 3.9, σ = 1.9748, and σ² = 3.9. The probabilities of the weekly demand values under the Poisson distribution are shown in Table 2. The firm reviews the inventory level weekly and has set the reorder point s at 4 containers and the replenishment point S at 7 containers. The resulting inventory policy (R_b) is displayed in Table 3, and Figure 2 shows the transition matrix of this policy.

Table 2. Probability of Weekly Demand Distribution

Demand      Probability
0           0.0400
1           0.0800
2           0.1600
3           0.1933
4           0.1733
5           0.1733
6           0.0530
7           0.0670
5 and 5+    0.3534
6 and 6+    0.1801
7 and 7+    0.1271
8 and 8+    0.0601
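The point and tail probabilities for the firm's demand rate can be computed as follows. Note that values computed from the fitted Poisson pmf need not match Table 2's reported entries exactly, since those appear to reflect the firm's empirical frequencies:

```python
import math

lam = 3.9  # weekly demand rate estimated from the firm's 3 years of data

def pmf(k: int) -> float:
    return math.exp(-lam) * lam ** k / math.factorial(k)

# point probabilities for 0..7 containers, and tails P(demand >= m) as in Table 2
point = {k: pmf(k) for k in range(8)}
tail = {m: 1.0 - sum(pmf(k) for k in range(m)) for m in (5, 6, 7, 8)}
```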
Table 3. The Inventory Policy (R,s,S)

State    Inventory Level (kg)    Decision d_i(R)    Act a ∈ A_i
0        0 < X ≤ 14,625
1        14,625
2
3
4
5
6
7