Proceedings of the 40th Hawaii International Conference on System Sciences - 2007
A Convolution Algorithm for Evaluating Supply Chain Delivery Performance

Alfred L. Guiffrida, Kent State University, Kent, OH, USA
Robert A. Rzepka, Rzepka and Associates, Alexandria, VA, USA
Mohamad Y. Jaber, Ryerson University, Toronto, Ontario, Canada
Abstract

The effective management of a supply chain requires performance measures that accurately represent the underlying structure of the supply chain. Measures such as delivery performance to the final customer often require summing a set of random variables that capture the stochastic nature of activities across the various stages of the supply chain. The convolution calculus required to evaluate these measures is complex and often leads to intractable results. In this paper we present a discrete convolution algorithm that simplifies this evaluation. The algorithm is demonstrated for a delivery performance measure in a three-stage serial supply chain. Numerical results and a supporting error analysis are presented for a set of experiments utilizing reproductive and non-reproductive probability density functions.
1. Introduction

A review of the literature identifies three major limitations to current supply chain performance measures. First, several researchers have concerns over the lack of cost-based performance measures [1], [2]. Cost-based performance measures are attractive since they are compatible across all stages of a supply chain and easily integrate into existing cost analyses that are required for process improvement. An extensive collection of articles that examine supply chain management from the cost management and financial perspectives is found in [3]. A second concern is that supply chain performance measures often ignore variability [4], [5]. A typical supply chain consists of three fundamental stages: procurement, production, and distribution. Within each of these stages there are
numerous operations and processes which are stochastic in nature. An effective performance measure must incorporate this inherent uncertainty. The reduction of variability is recognized as a critical aspect of improving supply chain performance [6]. Lastly, simplifying assumptions are often used when modeling the stochastic components (e.g., supply, demand, processing times) of the individual stages of the supply chain. Densities that are reproductive under addition, such as the normal, Poisson, or exponential, are routinely used since the additive property vastly simplifies the mathematical complexity of the modeling effort [7], [8]. Hence, performance measures that are used for inventory and process management may be limited in their general application.

Given the wide array of possible performance measures and supply chain configurations, no single modeling effort can capture all aspects of a supply chain. In this research we concentrate on one aspect of supply chain performance: delivery timeliness to the final customer in a serial supply chain operating under a centralized management structure. Delivery performance has been identified as a strategic-level performance metric in supply chain management [9]. Recent empirical research has also identified delivery performance as a key concern among supply chain practitioners [10], [11]. In a centralized supply chain structure, a single decision maker attempts to optimize overall chain performance using information provided by decision makers at each stage of the supply chain. This approach to supply chain management has been addressed in detail in the literature [12].

The objectives of this paper are as follows: (1) develop a cost-based performance measure for analyzing delivery performance within the supply chain, and (2) develop a framework for incorporating the variability found in the
individual components of delivery lead time into the delivery performance measure.

In satisfying the first research objective, a cost-based performance metric for evaluating delivery performance will be developed. Delivery lead time is defined as the elapsed time from the receipt of an order at the first stage of a supply chain to the receipt of the order by the final customer at the terminating stage of the supply chain. Delivery lead time is composed of a series of internal manufacturing and processing lead times and external distribution and transportation lead times found at the various stages of the supply chain. Delivery to the final customer is analyzed with regard to the customer's specification of delivery timeliness as defined by an on-time delivery window. Untimely deliveries, i.e., early and late deliveries, are subject to penalty costs.

In fulfilling the second research objective, a systems-based modeling approach is used to quantify uncertainty found within the supply chain. Under the assumption of independence among stages in a supply chain, we model each component of delivery lead time as a random variable, with overall delivery lead time defined as the sum of the internal (manufacturing and processing) and external (distribution and transportation) lead time components found within the various stages of the supply chain. Unlike previous research, we do not restrict our model solely to distributions that are reproductive under addition. A Perl-based discrete convolution algorithm is used to evaluate the cost-based delivery performance metric.

This paper is organized as follows. In Section 2, the cost-based delivery performance measure is developed. The measure is demonstrated for a supply chain when delivery lead time is defined by both reproductive and non-reproductive random variables. In Section 3, a discrete convolution algorithm is introduced for approximating the expected cost-based delivery performance measure. Numerical experiments are presented in Section 4. Conclusions and directions for future research are presented in Section 5.
2. Model Development

Consider an n-stage serial supply chain structure where an activity at each stage contributes to the overall delivery time to the final customer. All members of the supply chain
are assumed to operate on a make-to-order basis. Clearly, other supply chain configurations exist (e.g., make-to-stock, combination make-to-stock and make-to-order); however, at this stage of the research we restrict our modeling to a make-to-order orientation. Let the activity duration of stage i, $W_i$, be defined by a continuous probability density function $f_W(w; \theta)$ with parameter set $\theta$. Delivery time to the final customer is defined by $X = \sum_{i=1}^{n} W_i$. Under the assumption of independence between stages, the form of the probability density function $f_X(x)$ is defined by the following convolutions
$$f_{W_1+W_2}(x) = \int_{-\infty}^{\infty} f_{W_1}(x - w_2)\, f_{W_2}(w_2)\, dw_2$$

$$f_{W_1+W_2+W_3}(x) = \int_{-\infty}^{\infty} f_{W_1+W_2}(x - w_3)\, f_{W_3}(w_3)\, dw_3 \qquad (1)$$

$$\vdots$$

$$f_{W_1+W_2+\cdots+W_n}(x) = \int_{-\infty}^{\infty} f_{W_1+W_2+\cdots+W_{n-1}}(x - w_n)\, f_{W_n}(w_n)\, dw_n.$$
The mathematics defined in (1) is vastly simplified when the summand random variables $W_i$ have densities that are reproductive under addition. For conciseness, this paper restricts the number of stages to three, i.e., n = 3. Our assumption of independence between stages is not unreasonable, as it has been commonly adopted in the literature on serial systems [7], [13].

Delivery to the final customer is analyzed with regard to the customer's specification of delivery timeliness as defined by a delivery window. Under the concept of a delivery window, the customer supplies an earliest allowable delivery date and a latest allowable delivery date. A delivery window is defined as the difference between the earliest acceptable delivery date and the latest acceptable delivery date. Within the delivery window, a delivery may be classified as early, on-time, or late (see Figure 1). Delivery lead time, X, is a random variable with probability density function $f_X(x)$. The on-time portion of the delivery window is defined by $c_2 - c_1$. Ideally, $c_2 - c_1 = 0$. However, the extent to which $c_2 - c_1 > 0$ may be measured in hours, days, or weeks depending on the industrial situation. Delivery windows are widely used in modeling supply chain and time-based manufacturing systems [14], [15].
[Figure 1. Normally distributed delivery window. The density $f_X(x)$ is plotted over the delivery window, with regions marked early ($a$ to $c_1$), on-time ($c_1$ to $c_2$), and late ($c_2$ to $b$). Legend: $a$ = earliest delivery time; $c_1$ = beginning of on-time delivery; $c_2$ = end of on-time delivery; $b$ = latest acceptable delivery time.]
Consider an n-stage serial supply chain in operation over a time horizon of length T years, where a demand requirement of the final customer for a single product of D units will be met with a constant delivery lot size Q. The expected penalty cost per delivery period for untimely delivery, Y, is

$$Y = QH \int_{a}^{c_1} (c_1 - x)\, f_X(x)\, dx + K \int_{c_2}^{b} (x - c_2)\, f_X(x)\, dx \qquad (2)$$

where: Q = constant delivery lot size per cycle; H = inventory holding cost per unit per unit time; K = penalty cost per time unit late (levied by the final customer); $a, b, c_1, c_2$ = parameters defining the delivery window; $W_j$ = duration of delivery lead time component j, $j = 1, 2, \ldots, n$; and $f_X(x) = f_{W_1+W_2+\cdots+W_n}(x)$ is the density function of delivery time X. It is a common purchasing agreement practice to allow the final customer to charge suppliers for untimely deliveries [16].

A critical computational issue when using (2) is to determine the analytical form of $f_X(x)$. For density functions that are reproductive under addition (such as the normal or gamma), $f_X(x)$ is relatively easy to define and the evaluation of (2) is straightforward. For derivations of (2) when $f_X(x)$ follows the normal and gamma densities, see [17]. However, when X is composed of the sum of non-reproductive densities, the evaluation of $f_X(x)$ is often intractable and prevents an exact representation of (2). We demonstrate the computational aspects of evaluating $f_X(x)$ in the following subsections.
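When a closed form for $f_X(x)$ is available, (2) can also be evaluated by straightforward numerical quadrature. The following minimal Python sketch is our own illustration (the implementation discussed later in this paper is Perl-based); it assumes SciPy is available and borrows the base-case parameter values used in Section 4, with X taken to be normal with mean 30 and variance 6 purely for demonstration.

```python
# Illustrative sketch: evaluating the expected penalty cost (2) by quadrature.
# Parameter values follow the Section 4 base case; the density chosen for X
# is an assumption made for demonstration purposes.
from scipy.integrate import quad
from scipy.stats import norm

Q, H, K = 500, 3.0, 1500.0               # lot size, holding cost, lateness penalty
a, b, c1, c2 = 20.0, 40.0, 29.0, 31.0    # delivery window parameters (days)

f_X = norm(loc=30.0, scale=6.0 ** 0.5).pdf   # density of delivery time X

early, _ = quad(lambda x: (c1 - x) * f_X(x), a, c1)   # expected earliness
late, _ = quad(lambda x: (x - c2) * f_X(x), c2, b)    # expected lateness

Y = Q * H * early + K * late
print(f"Expected penalty cost per delivery period: ${Y:,.2f}")
```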
2.1 Modeling Reproductive Densities

In this section we illustrate the expected penalty cost model for an n-stage serial supply chain when the density functions that define the activity times at each stage are reproductive under addition. Let the individual stage activity times follow a normal (Gaussian) probability density function. The normal probability density function is defined by the parameter set $\theta = \{\mu, v\}$ and is reproductive under addition with respect to both the mean $\mu$ and the variance $v$. For $f_W(w; \mu, v)$ defined as a normal (N) probability density function, applying the convolution calculus outlined in (1) yields a delivery time X with density function $X \sim N\left(x;\ \mu = \sum_{i=1}^{n} \mu_i,\ v = \sum_{i=1}^{n} v_i\right)$, where $\mu_i$ and $v_i$ are the delivery mean and variance for stage i, $i \in [1, n]$. Evaluating (2) for the reproductive normal and introducing $\phi$ and $\Phi$ as the standard normal density and cumulative distribution functions, respectively, yields the total expected penalty cost for normally distributed delivery time

$$Y = QH\left[\sqrt{v}\,\phi\!\left(\frac{c_1 - \mu}{\sqrt{v}}\right) + (c_1 - \mu)\,\Phi\!\left(\frac{c_1 - \mu}{\sqrt{v}}\right)\right] + K\left[\sqrt{v}\,\phi\!\left(\frac{c_2 - \mu}{\sqrt{v}}\right) - (c_2 - \mu)\left(1 - \Phi\!\left(\frac{c_2 - \mu}{\sqrt{v}}\right)\right)\right] \qquad (3)$$

(see [17] for derivation details).
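The closed form (3) can be coded directly; the sketch below uses SciPy's standard normal $\phi$ and $\Phi$ and is our own transcription of the equation, not the authors' implementation.

```python
# Sketch of equation (3): expected penalty cost for normally distributed
# delivery time with summed stage mean mu and variance v.
from math import sqrt
from scipy.stats import norm

def expected_cost_normal(mu, v, Q, H, K, c1, c2):
    s = sqrt(v)
    z1, z2 = (c1 - mu) / s, (c2 - mu) / s
    early = s * norm.pdf(z1) + (c1 - mu) * norm.cdf(z1)         # E[(c1 - X)+]
    late = s * norm.pdf(z2) - (c2 - mu) * (1.0 - norm.cdf(z2))  # E[(X - c2)+]
    return Q * H * early + K * late

# Base case of Section 4: mu = 10 + 8 + 12, v = 2 + 1 + 3.
print(expected_cost_normal(30.0, 6.0, Q=500, H=3.0, K=1500.0, c1=29.0, c2=31.0))
```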
2.2 Modeling Non-Reproductive Densities

When delivery time X is defined as a sum of non-reproductive independent densities, the convolution mathematics defined by (1) is complex and the defining form of $f_X(x)$ often becomes intractable for even small n. We demonstrate this complexity with two examples.
2.2.1 Example 1

Consider the independent sum of a uniform random variable with density function $f_1(w_1) = 1$ for $0 \le w_1 \le 1$, and an exponential random variable with density function $f_2(w_2) = e^{-w_2}$ for $0 \le w_2 < \infty$. Finding the density function for the sum $X = W_1 + W_2$ requires the inversion of the Laplace transform $L_r[g(x)] = L_r[f_1(w_1)]\, L_r[f_2(w_2)]$ and then the evaluation of the resulting contour integral

$$f_X(x) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \frac{1 - e^{-r}}{r} \cdot \frac{e^{rx}}{r + 1}\, dr \qquad (4)$$

for residues with $\mathrm{Re}(r) \le 0$. After considerable mathematical evaluation and simplification (see [18] for details), the density function of the sum is

$$f_X(x) = \begin{cases} 1 - e^{-x}, & 0 \le x \le 1 \\ e^{-(x-1)} - e^{-x}, & 1 \le x < \infty. \end{cases} \qquad (5)$$
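Equation (5) is easy to sanity-check numerically. The sketch below (our addition, assuming NumPy is available) discretizes the two densities on a fine grid, convolves them, and compares the result with the closed form at a sample point.

```python
# Sketch: numerical check of the closed-form density (5) for X = W1 + W2.
import numpy as np

dx = 0.001
w = np.arange(0.0, 10.0, dx)
f1 = np.where(w <= 1.0, 1.0, 0.0)   # uniform(0, 1) density
f2 = np.exp(-w)                     # exponential(1) density

f_X = np.convolve(f1, f2)[: len(w)] * dx   # discretized convolution on the grid

x = 2.0
exact = np.exp(-(x - 1.0)) - np.exp(-x)    # equation (5) for 1 <= x < infinity
print(f_X[int(x / dx)], exact)             # the two values should nearly agree
```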
2.2.2 Example 2

Let $X = W_1 + W_2 + W_3$ represent the sum of three independent uniform random variables defined as follows: $f_1(w_1) = 1/m$ for $0 \le w_1 \le m$; $f_2(w_2) = 1/n$ for $0 \le w_2 \le n$; and $f_3(w_3) = 1/p$ for $0 \le w_3 \le p$, where $m \le n \le p$. After considerable calculus and mathematical simplification (see [19]), the density function for X is

$$f_X(x) = \begin{cases}
\dfrac{x^2}{2mnp}, & 0 \le x \le m \\[1ex]
\dfrac{k}{2mnp}, & m \le x \le n \\[1ex]
\dfrac{k - (x-n)^2}{2mnp}, & n \le x \le g \\[1ex]
\dfrac{k - (x-n)^2 - (x-p)^2}{2mnp}, & g \le x \le h \text{ and } p \le m+n \\[1ex]
\dfrac{1}{p}, & g \le x \le h \text{ and } p > m+n \\[1ex]
\dfrac{2mn - (x-p)^2}{2mnp}, & h \le x \le m+p \\[1ex]
\dfrac{m + 2n + 2p - 2x}{2np}, & m+p \le x \le n+p \\[1ex]
\dfrac{(m+n+p-x)^2}{2mnp}, & n+p \le x \le m+n+p
\end{cases}$$

where: $k = x^2 - (x-m)^2$; $g = \min(p, m+n)$; $h = \max(p, m+n)$.
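The case analysis above is error-prone to apply by hand; a direct transcription into code (our sketch, assuming nothing beyond the closed form itself) makes it usable and easy to validate, e.g., by checking that it integrates to one.

```python
# Sketch: the piecewise density of X = W1 + W2 + W3 for uniform summands on
# (0, m), (0, n), (0, p) with m <= n <= p, transcribed from the closed form.
def f_X(x, m, n, p):
    g, h = min(p, m + n), max(p, m + n)
    k = x**2 - (x - m)**2
    if 0 <= x <= m:
        return x**2 / (2*m*n*p)
    if m < x <= n:
        return k / (2*m*n*p)
    if n < x <= g:
        return (k - (x - n)**2) / (2*m*n*p)
    if g < x <= h:   # the middle region depends on whether p exceeds m + n
        return (k - (x - n)**2 - (x - p)**2) / (2*m*n*p) if p <= m + n else 1.0 / p
    if h < x <= m + p:
        return (2*m*n - (x - p)**2) / (2*m*n*p)
    if m + p < x <= n + p:
        return (m + 2*n + 2*p - 2*x) / (2*n*p)
    if n + p < x <= m + n + p:
        return (m + n + p - x)**2 / (2*m*n*p)
    return 0.0
```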
2.3 Approximation Procedures
The two examples presented in Section 2.2 demonstrate the complexity introduced in defining $f_X(x)$ for densities that are not reproductive under addition. In both examples, the mathematical definitions of the summand exponential and uniform densities were relatively simple. Despite these elementary forms, the resulting probability density function for the sum of these random variables, $f_X(x)$, was complex to the point that its use would lead to a difficult, if not intractable, derivation of the expected penalty cost. Given this complexity, an alternative approach is to use a methodology for approximating the probability density function $f_X(x)$.

Two approximation techniques have been proposed in the literature for finding the convolution of the sum of independent random variables (see [20] for a detailed discussion). The first method involves polynomial approximation, while the second involves discretization. Under the polynomial approximation method, the distribution functions of the random variables of interest are approximated by piece-wise polynomial equations that are stated in terms of the parameters of the associated densities. An iterative integration procedure is then used to evaluate the piece-wise polynomial equations over a defined set of intervals to approximate the true convolution. The polynomial approximation method can lead to computational complexity concerns due to the rapid growth in the order of the polynomials resulting from the convolution operations, and also due to the number of subintervals over which the convolution is defined.

Under the discretization procedure, the density function of a continuous random variable is represented as a discrete density function. Once the summand densities are represented in discrete form, a discrete convolution operation to determine $X = \sum_{i=1}^{n} W_i$ can be easily performed (see Section 3 for details of the algorithm), and the expected penalty cost model defined by (2) can be directly approximated by

$$Y = QH \sum_{i=1}^{m_1} (c_1 - x_i)\, p_X(x_i) + K \sum_{i=1}^{m_2} (x_i - c_2)\, p_X(x_i) \qquad (6)$$

where: $m_1$ = the number of early deliveries $(x < c_1)$ and $m_2$ = the number of late deliveries $(x > c_2)$. Note that (6) is no longer dependent on obtaining a mathematically defined form for $f_X(x)$.
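A sketch of (6) in Python (our illustration; the paper's algorithm is Perl-based) shows how little machinery the discrete form requires once a probability mass function for X is in hand.

```python
# Sketch of equation (6): expected penalty cost from a discrete pmf for X,
# given as support points xs with probabilities p_X.
import numpy as np

def expected_cost_discrete(xs, p_X, Q, H, K, c1, c2):
    xs, p_X = np.asarray(xs), np.asarray(p_X)
    early = xs < c1   # the m1 early delivery points
    late = xs > c2    # the m2 late delivery points
    return (Q * H * np.sum((c1 - xs[early]) * p_X[early])
            + K * np.sum((xs[late] - c2) * p_X[late]))
```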
An approximate form for $f_X(x)$ can be determined by evaluating the raw moments $\alpha_k = \sum x^k\, p_X(x)$ to obtain estimates of the mean $\mu_X$, variance $\sigma_X^2$, coefficient of skewness $\gamma_1$, and coefficient of excess kurtosis $\gamma_2$ for X using the following formulae [21]:

$$\mu_X = \alpha_1 \qquad (7)$$

$$\sigma_X^2 = \alpha_2 - \alpha_1^2 \qquad (8)$$

$$\gamma_1 = \frac{\alpha_3 - 3\alpha_1\alpha_2 + 2\alpha_1^3}{\left(\alpha_2 - \alpha_1^2\right)^{1.5}} \qquad (9)$$

$$\gamma_2 = \frac{\alpha_4 - 4\alpha_1\alpha_3 + 6\alpha_1^2\alpha_2 - 3\alpha_1^4}{\left(\alpha_2 - \alpha_1^2\right)^2} - 3. \qquad (10)$$

Using the values of (9) and (10), the $(\gamma_1, \gamma_2)$ plane illustrated in Figure 2 can be searched to find a candidate density for approximating $f_X(x)$. A quantitative distance measure for determining the best fit of an empirically estimated $(\gamma_1, \gamma_2)$ pair to the theoretical values of $\gamma_1$ and $\gamma_2$ illustrated in Figure 2 is found in [22].
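Equations (7)-(10) translate directly into code. The sketch below (ours, assuming NumPy) computes the four raw moments of a discrete pmf and returns the shape statistics used to search the $(\gamma_1, \gamma_2)$ plane.

```python
# Sketch of equations (7)-(10): shape statistics from the raw moments alpha_k.
import numpy as np

def shape_statistics(xs, p_X):
    xs, p_X = np.asarray(xs, dtype=float), np.asarray(p_X, dtype=float)
    a1, a2, a3, a4 = (np.sum(xs**k * p_X) for k in range(1, 5))
    mu = a1                                                       # (7)
    var = a2 - a1**2                                              # (8)
    skew = (a3 - 3*a1*a2 + 2*a1**3) / var**1.5                    # (9)
    ex_kurt = (a4 - 4*a1*a3 + 6*a1**2*a2 - 3*a1**4) / var**2 - 3  # (10)
    return mu, var, skew, ex_kurt
```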
[Figure 2. $(\gamma_1, \gamma_2)$ plane for common densities. Legend: N = normal, Lo = logistic, La = Laplace, E = exponential.]

3. Discrete Convolution Algorithm

The main functional areas that this algorithm needed to address were:
1. Code Library
2. Experiment Design
3. Calculate Discrete Convolutions
4. Calculate Statistical Measures
3.1 Code Library

The code library contains code that generates variates, or values, that correspond to a mathematical function with specific statistical characteristics. The probability density functions in this implementation include the normal, exponential, gamma, uniform, Weibull, and triangular [23], [24]. The first code library requirement was that the functions be portable: code that is independent of operating system or hardware platform, can be ported to other programming languages, and maintains the same characteristics across all executions [24]. The second requirement is the ability to initialize a function each time it is executed. The seed, or seed state, sets an arbitrary starting point for the code generating the variates. The seed value should be unique for each execution; otherwise, the variates from one execution to another will not be unique [24]. A third requirement is periodicity. Given a sufficient number of iterations, a function executed on a deterministic computer will revisit a previous value at some point. The goal is that the underlying algorithm should have a period long enough that a value is not repeated too soon.
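A minimal sketch of such a library in Python follows; NumPy's seedable Generator is our stand-in for the Perl routines described above, and the parameter conventions (e.g., mean and variance for the normal) are assumptions chosen to match the paper's notation.

```python
# Sketch: a portable, seedable variate generator for the densities named above.
import numpy as np

def make_sampler(density, params, seed):
    rng = np.random.default_rng(seed)   # seed sets a reproducible starting state
    draw = {
        "normal":      lambda size: rng.normal(params[0], np.sqrt(params[1]), size),
        "exponential": lambda size: rng.exponential(params[0], size),
        "gamma":       lambda size: rng.gamma(params[0], params[1], size),
        "uniform":     lambda size: rng.uniform(params[0], params[1], size),
        "weibull":     lambda size: params[1] * rng.weibull(params[0], size),
        "triangular":  lambda size: rng.triangular(params[0], params[1], params[2], size),
    }
    return draw[density]

stage1 = make_sampler("normal", (10.0, 2.0), seed=2007)
print(stage1(5))   # five variates from stage 1's density
```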
3.2 Experiment Design
The key goal in this functional area is the ability to specify the probability density functions of an n-stage model. The analyst is presented with a series of options requiring structured responses. This information forms the basis of the experimental design that the individual is attempting to execute. The primary option is the choice of the probability density function at each stage of the experiment, as well as the value of N, the number of variates to be generated from each density function. The structured responses specify the applicable parameters for each density function. For example, the normal distribution requires the specification of a mean and a
variance, while the triangular distribution requires a minimum, modal and maximum value.
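The structured responses can be captured in a simple specification; the field names below are our own invention, not the paper's, but illustrate the information the design step collects (shown here for experiment 3 of Section 4).

```python
# Sketch: an experiment specification of the kind the design step assembles.
experiment_3 = {
    "N": 400,            # variates generated from each stage's density
    "replications": 5,   # sample runs per experiment
    "stages": [
        {"density": "triangular", "params": (13, 15, 17)},
        {"density": "gamma",      "params": (4, 2)},
        {"density": "uniform",    "params": (10, 12)},
    ],
}
```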
3.3 Calculate Discrete Convolutions

In mathematical theory, a convolution is an operation that takes two functions and produces a third. While this definition is conceptually easy, its execution can be quite daunting. Computationally, we must iteratively calculate the sum of two variates and the product of their probabilities, where the first variate is drawn from stage n's density function and the second variate is drawn from stage n + 1's density function. Conceptually, this is not difficult until the magnitude of the number of calculations is understood. The complexity arises from the fact that the calculations must be executed in a nested loop structure, as sketched below.
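A hedged Python rendering of that nested loop follows; binning the summed values (here by rounding) is our own device for keeping the support of the resulting mass function manageable.

```python
# Sketch: pairwise discrete convolution -- sum the variate values, multiply
# their probabilities, and accumulate over the nested loops.
from collections import defaultdict

def convolve_pmfs(pmf_a, pmf_b, decimals=2):
    out = defaultdict(float)
    for xa, pa in pmf_a.items():        # values/probabilities from stage i
        for xb, pb in pmf_b.items():    # values/probabilities from stage i + 1
            out[round(xa + xb, decimals)] += pa * pb
    return dict(out)
```

Applying the operation n - 1 times across the stage mass functions yields the pmf for X used in (6)-(10).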
3.4 Calculate Statistical Measures

The final functional area of the algorithm performed statistical calculations on the delivery time X and evaluated the expected penalty cost for untimely delivery. Initially, the first four raw moments for the distribution of X were determined. Using these values, the mean, variance, and coefficients of skewness and excess kurtosis were calculated as defined by equations (7)-(10). Lastly, the expected cost for untimely delivery, Y, as defined by equation (6), was evaluated.
4. Numerical Experiments

The first set of numerical experiments was designed to evaluate the error associated with estimating the expected penalty cost model using the discrete convolution algorithm. A three-stage supply chain configuration was selected with the activity times per stage each defined by a normal distribution. Delivery time to the customer was defined as $X = W_1 + W_2 + W_3$ with $W_1 \sim N(10, 2)$, $W_2 \sim N(8, 1)$ and $W_3 \sim N(12, 3)$. Since the normal density is reproductive under addition, the delivery distribution defined by $f_X(x)$ is known to be normal and (3) can be used to determine an exact cost. This exact baseline cost was then compared to the estimated cost resulting from the discrete convolution algorithm (6), and the absolute percentage error (APE) was evaluated using

$$APE = \left| \frac{eq(6) - eq(3)}{eq(3)} \right| \times 100\%. \qquad (11)$$

The parameters used were: Q = 500 units; H = \$3 per unit per day; K = \$1,500 per day late; a = 20 days, b = 40 days, $c_1$ = 29 days and $c_2$ = 31 days. The number of samples used at each stage of the discrete convolution algorithm was varied over N = 200, 300, 400 and 500, and five sample runs were conducted for each value of N. Figure 3 illustrates the average APE that resulted across the five sample runs.
[Figure 3. Error analysis for normal distribution (base case) experiments: 95% confidence intervals for the absolute percentage error at N = 200, 300, 400, and 500.]
As expected, the average APE decreased as N increased, with an APE of less than 1.0% achieved for N = 500. These test results suggest that for large N (N = 400 or 500), the discrete convolution algorithm provides a reasonably good approximation (APE no worse than 2%) to the exact expected delivery cost.

Additional experiments with the same parameter set and supply chain configuration were conducted for N = 400, with 5 runs per experiment, to estimate the expected delivery cost when the activity times at each stage were defined by independent densities that are not reproductive under addition (see Table 1). Given the non-reproductive nature of the densities defined in Table 1, $f_X(x)$ would have a highly complex and likely intractable form if traditional convolution calculus or a polynomial functional approximation were applied. Results for the experiments based on the discrete convolution algorithm are summarized in Table 2. All table entries represent values averaged over the 5 replications for each experiment.
Table 1. Experiments for non-reproductive densities.

Experiment | Stage | Density (parameters in days)
1 | 1 | triangular(4, 5, 6)
1 | 2 | triangular(11, 15, 19)
1 | 3 | triangular(8, 10, 12)
2 | 1 | uniform(4, 6)
2 | 2 | triangular(11, 15, 19)
2 | 3 | normal(10, 2)
3 | 1 | triangular(13, 15, 17)
3 | 2 | gamma(4, 2)
3 | 3 | uniform(10, 12)
Table 2. Expected cost and parameter values.

Experiment | Expected Cost | $f_X(x)$ Parameters
1 | $1,023 | $\mu_X$ = 29.9 days; $\sigma_X^2$ = 3.3; $\gamma_1$ = 0.0; $\gamma_2$ = 0.3
2 | $1,471 | $\mu_X$ = 30.1; $\sigma_X^2$ = 5.1; $\gamma_1$ = 0.0; $\gamma_2$ = 0.2
3 | $1,443 | $\mu_X$ = 29.98; $\sigma_X^2$ = 5.2; $\gamma_1$ = 0.8; $\gamma_2$ = 1.0
Examining Table 2, we observe that the distributions of X in experiments 1 and 2 are symmetric ($\gamma_1 = 0$), while the distribution in experiment 3 is positively skewed ($\gamma_1 = 0.8$). This result is not surprising, since all summand densities used in experiments 1 and 2 were symmetric to start with (see Table 1). In experiment 3, the stage 2 gamma distribution was positively skewed and hence contributed to the positively skewed result found in experiment 3.

The coefficient of excess kurtosis $\gamma_2$ provides a measure of the "peakedness" of a density relative to the normal distribution. As a base case, the normal is termed mesokurtic, with $\gamma_2 = 0$. Densities with $\gamma_2 > 0$ have a sharper peak than the normal and are termed leptokurtic; densities with $\gamma_2 < 0$ have a more flattened peak than the normal and are termed platykurtic. In comparison to the normal, the distributions of X resulting from experiments 1 and 2 are similar to
a normal due to their common symmetry, but are not exactly normal due to their leptokurtic behavior. The density of X for experiment 3 is most different from the normal, as evidenced by its lack of symmetry and its more pronounced leptokurtic behavior. Comparing the $(\gamma_1, \gamma_2)$ values of each experiment to Figure 2 suggests that the distributions of X may be approximated by known densities. For experiments 1 and 2 a Weibull density represents a reasonable approximation, while for experiment 3 the lognormal density appears to be a reasonable fit.
5. Summary and Conclusions

Given the inherently uncertain nature of activities in a supply chain, it is likely that performance measures for the supply chain will be based on the sum of random variables. Under the assumption of independence, the defining probability density function for such a performance measure can be evaluated using a convolution. However, the mathematics associated with a convolution can become complex and often intractable even for relatively simple model environments. In this paper we presented a discrete convolution model that eliminates the mathematical complexity associated with the use of convolution calculus.

The model presented herein was demonstrated for a cost-based delivery performance measure in a three-stage supply chain. In comparison to a base case model involving the reproductive normal distribution, the discrete convolution algorithm generated results that had an absolute error of less than one percent. The robustness of the discrete convolution model for evaluating sums of random variables that are not reproductive under addition was demonstrated through a series of numerical experiments. Numerical solutions and candidate forms of known density functions were easily identified for a set of experiments whose mathematical convolution solution would have been extremely complex and perhaps intractable.

When modeling more complex supply chain environments, computational aspects of the model may be a limitation and require attention. The realization is that the time to complete processing will increase greatly as the number of stages increases beyond three and when multiple performance measures are evaluated simultaneously. One consideration is that this application may need to be executed as a batch
or transaction processing type application. This concept is not uncommon in many supply chain applications due to the magnitude and/or complexity of the required calculations.

Further research will focus on relaxing the assumption of independence among the activity times comprising the various stages of the supply chain. A limited set of experiments was reported in this research. The scope of the modeling effort can be expanded to support more numerical experiments. These experiments would investigate the effect of symmetry and kurtosis aspects of the summand densities on the resulting shape characteristics of the density of the sum. A set of experiments could also be designed to investigate under what conditions (e.g., number of stages, symmetry and kurtosis aspects of the summand densities) a normal density would accurately represent the sum of a set of densities that are not reproductive under addition.

Lastly, the modeling approach developed in this paper can serve as a test bed for future research on improving supply chain operations. For example, the model presented herein captures variability at each stage of the supply chain and translates this variability into an overall variability measure for delivery timeliness. This structure can be used as a model to study the bullwhip effect as it relates to delivery performance. An interesting extension to the current model would be the incorporation of an optimization model for allocating resources to reduce the magnitude and the translation of variance through the supply chain.
6. References

[1] L. M. Ellram, "Strategic Cost Management in the Supply Chain: A Purchasing and Supply Management Perspective", CAPS Research, 2002.

[2] R. A. Lancioni, "New Developments in Supply Chain Management for the Millennium", Industrial Marketing Management, 29(1), 2000, 1-6.

[3] S. Seuring, and M. Goldbach (Eds.), Cost Management in Supply Chains, Physica-Verlag, Heidelberg, New York, 2002.

[4] J. Blackhurst, T. Wu, and P. O'Grady, "Network-based Approach to Modeling Uncertainty in a Supply Chain", International Journal of Production Research, 42(8), 2004, 1639-1658.

[5] E. H. Sabri, and B. M. Beamon, "A Multi-objective Approach to Simultaneous Strategic and Operational Planning in Supply Chain Design", Omega, 28(5), 2000, 581-598.

[6] M. E. Johnson, and T. Davis, "Improving Supply Chain Performance by Using Order Fulfillment Metrics", National Productivity Review, 17(3), 1998, 3-16.

[7] L. B. Schwarz, and Z. K. Weng, "The Design of a JIT Supply Chain: The Effect of Leadtime Uncertainty on Safety Stock", Journal of Business Logistics, 20(1), 1999, 141-163.

[8] H-S. Ahn, and P. Kaminsky, "Production and Distribution Policy in a Two-Stage Stochastic Push-Pull Supply Chain", IIE Transactions, 37(7), 2005, 609-621.

[9] A. Gunasekaran, C. Patel, and R. E. McGaughey, "A Framework for Supply Chain Performance Measurement", International Journal of Production Economics, 87(3), 2004, 333-347.

[10] A. Lockamy III, and K. McCormack, "Linking SCOR Planning Practices to Supply Chain Performance", International Journal of Operations and Production Management, 24(12), 2004, 1192-1218.

[11] S. Vachon, and R. D. Klassen, "An Exploratory Investigation of the Effects of Supply Chain Complexity on Delivery Performance", IEEE Transactions on Engineering Management, 49(3), 2002, 218-230.

[12] R. Pibernik, and E. Sucky, "Centralised and Decentralized Supply Chain Planning", International Journal of Integrated Supply Management, 2(1/2), 2006, 6-27.

[13] S. J. Erlebacher, and M. R. Singh, "Optimal Variance Structures and Performance Improvement of Synchronous Assembly Lines", Operations Research, 47(4), 1999, 601-618.

[14] W. Jaruphongsa, S. Cetinkaya, and C-H. Lee, "Warehouse Space Capacity and Delivery Time Window Considerations in Dynamic Lot-sizing for a Simple Supply Chain", International Journal of Production Economics, 92(2), 2004, 169-180.

[15] C-H. Lee, S. Cetinkaya, and A. P. M. Wagelmans, "A Dynamic Lot-sizing Model with Demand Time Windows", Management Science, 47(10), 2001, 1384-1395.

[16] A. M. Schneiderman, "Metrics for the Order Fulfillment Process (Part 1)", Journal of Cost Management, 10(2), 1996, 30-42.
[17] A. L. Guiffrida, Cost Characterizations of Supply Chain Delivery Performance, Ph.D. thesis, Department of Industrial Engineering, University at Buffalo (SUNY), 2005.

[18] M. D. Springer, The Algebra of Random Variables, John Wiley and Sons, New York, NY, 1979.

[19] V. Chew, "Distribution of the Sum of Independent Uniform Variables with Unequal Ranges", The Virginia Journal of Science, 12, 1961, 45-50.

[20] M. K. Agrawal, and S. E. Elmaghraby, "On Computing the Distribution Function of the Sum of Independent Random Variables", Computers and Operations Research, 28(5), 2001, 473-483.

[21] N. Balakrishnan, and V. B. Nevzorov, A Primer on Statistical Distributions, Wiley-Interscience, Hoboken, NJ, 2003.

[22] Y. Wang, R. C. M. Yam, and M. J. Zuo, "A Multi-criterion Evaluation Approach to Selection of the Best Statistical Distribution", Computers and Industrial Engineering, 47(2-3), 2004, 165-180.

[23] A. M. Law, and W. D. Kelton, Simulation Modeling and Analysis, McGraw-Hill, New York, NY, 1982.

[24] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, England, 1988.