Inventory-Service Optimization in Configure-to-Order Systems

Feng Cheng, Markus Ettl, Grace Lin
IBM Research Division, T.J. Watson Research Center, Yorktown Heights, NY 10598
[email protected]

David D. Yao



IEOR Department, 302 Mudd Building, Columbia University, New York, NY 10027
[email protected]
June 2000; revision: March, December 2001

Abstract

This study is motivated by a process-reengineering problem in PC (personal computer) manufacturing, i.e., to move from a build-to-stock operation that is centered around end-product (machine type model) inventory, towards a configure-to-order (CTO) operation that eliminates end-product inventory (in fact, CTO has made irrelevant the whole notion of pre-configured machine types) and focuses instead on maintaining the right amount of inventory at the components. Indeed, CTO appears to be the ideal operational model that provides both mass customization and a quick response time to order fulfillment. To quantify the inventory-service tradeoff in the CTO environment, we develop a nonlinear optimization model with multiple constraints, reflecting the service levels offered to different market segments. To solve the optimization problem, we develop an exact algorithm for the important case of demand in each market segment having (at least) one unique component, and a greedy heuristic for the general (non-unique component) case. Furthermore, we show how to use sensitivity analysis, along with simulation, to fine-tune the solutions. The performance of the model and the solution approach is examined by extensive numerical studies on realistic problem data. We demonstrate that the model can generate considerable new insights into the key benefits of the CTO operation, in particular the impact of risk pooling and improved forecast accuracy. We present the major findings in applying our model to study the inventory/service impacts in the reengineering of a PC manufacturing process.



Research undertaken while an academic visitor at IBM Research Division, T.J. Watson Research Center.

1 Introduction

A configure-to-order (CTO) system is a hybrid of make-to-stock and make-to-order operations: a set of components (subassemblies) are built to stock whereas the end products are assembled to order. This hybrid model is most suitable in an environment where the time it takes to assemble the end product is negligible, while the production/replenishment leadtime for each component is much more substantial. The manufacturing process of personal computers (PC's) is a good example of such an environment. By keeping inventory at the component level, customer orders can be filled quickly. On the other hand, postponing the final assembly until order arrival provides a high level of flexibility in terms of product variety, and also achieves resource pooling in terms of maximizing the usage of component inventory. Therefore, the CTO system appears to be an ideal business process model that provides both mass customization and a quick response time to order fulfillment. Such a hybrid model is often referred to as an assemble-to-order (ATO) system in the research literature (e.g. [8, 16]). In an ATO system, usually there is a pre-configured set of end-product types from which customers must choose. In contrast, a CTO system takes the ATO concept one step further, in allowing each customer to configure a product in terms of selecting a personalized set of components that go into the product. Aside from some consistency check on the product so configured, there is no "menu" of product types that limits the customer's choice. From a modeling perspective, it might appear that the CTO system is no different from the ATO system: one can exhaust all possible configurations under CTO, and treat all the configurations as part of a "mega-menu". In applications, however, this conceptual reduction does not offer anything practically useful, as one will have to deal with the curse of dimensionality, not only in terms of computation, but also in terms of data collection and forecasting.
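The curse of dimensionality just mentioned is easy to make concrete with a back-of-the-envelope count (a small sketch of ours; the component count n = 20 and segment count m = 3 below are illustrative numbers, not data from the study):

```python
# Forecasting effort under the two views of a CTO system:
# enumerating every possible configuration of n components would require
# 2**n demand streams, while the aggregate-demand-plus-feature-ratio
# approach needs only on the order of m * n forecasts.
n = 20   # hypothetical number of components
m = 3    # hypothetical number of market segments / product families

configurations = 2 ** n          # size of the "mega-menu"
feature_ratio_forecasts = m * n  # aggregate demands plus feature ratios

print(configurations)            # 1048576
print(feature_ratio_forecasts)   # 60
```

Even for this modest component count, the configuration view is four orders of magnitude larger than the feature-ratio view.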
For instance, suppose any subset of the n components is a possible configuration. Then, one will have to deal with the intractable task of forecasting 2^n demand streams. The CTO model we develop here formalizes what has become prevailing industry practice, i.e., starting from forecasting the aggregate demand (say, of a market segment), along with the "feature ratios" (of component usage) associated with such demand. Hence, suppose there are m aggregate demand streams (or, product families). Then, demand forecasting amounts to an effort of order O(mn), including forecasts of feature ratios. Our study reported here is part of a larger project that aims at helping IBM's Personal Systems Group (PSG) to migrate from its existing operation to a CTO system. The PC


manufacturing at PSG is traditionally a build-to-plan (or build-to-forecast) process, a process that is internally referred to as the MTM ("machine-type model") operation. There is a set of end products, or MTMs. Demand forecasts over a future planning horizon are generated for each MTM, and updated periodically for each planning cycle (typically, a weekly cycle). An MRP-type ("material requirements planning") explosion technique is then used to determine the requirements for the components over the planning horizon based on the bill-of-materials (BOM) structure of each end product. Because of the random variation involved in demand forecasts, safety stock is usually kept for each end product between the channel partners and IBM in order to meet a desirable customer service level. However, holding finished goods inventory for any length of time is very costly in the PC business, where product life cycle is measured in months and price reduction takes place almost every other week. To move from this business process to a web-based CTO operation (in particular, customer orders will be taken from the Internet), the finished-goods inventory will be eliminated, and the emphasis will be shifted to the components, or "building blocks" (BB). Because of their long leadtimes, the BBs will be planned off a forecast and executed through a replenishment process. The focus of our study is on the inventory-service trade-off of the new system, and on the performance gains, in terms of reduced inventory cost and increased service level. There are other benefits as well: the CTO operation can achieve better forecast accuracy through demand aggregation, and customer demand is expected to increase, as orders will no longer be confined within a restricted set of pre-configured MTMs. The optimization model we develop below provides an analytical tool to quantify these performance impacts. A brief review of related literature for the ATO system is in order.
There are many studies of ATO systems that differ quite substantially in the detailed modeling assumptions and approaches. For example, Hausman, Lee and Zhang [9] and Zhang [20] study periodic-review (discrete-time) models with multivariate normal demand and constant component replenishment leadtimes. Song [14, 15] studies continuous-review models with multivariate compound Poisson demand and deterministic leadtimes. Song et al. [16], Glasserman and Wang [8] and Wang [19] also consider multivariate (compound) Poisson demand, but the supply process for each component is capacitated and modeled as a single-server queue. Gallien and Wein [6] consider uncapacitated leadtimes, focusing on a single demand stream, and assume order synchronization. Cheung and Hausman [3] also assume uncapacitated leadtimes in the context of a repair shop. They use a combination of order synchronization and disaggregation in their analysis. In terms of approaches, Song [14, 15] and Song et al. [16] focus on developing exact and

approximate performance evaluation procedures that are computationally efficient. Glasserman and Wang [8] study the leadtime-inventory tradeoff using a large deviations approach, and derive a linear relationship between leadtime and inventory in the limiting sense of high fill rates. Wang [19] further applies this asymptotic result in an optimization problem to minimize average inventory holding cost with a constraint on the order fill rate. Swaminathan and Tayur [18] use stochastic programming models to study three different strategies at the assembly stage: utilizing component commonality, postponement (the "vanilla box approach"), and integrating assembly task design and operations. Other related recent works, not necessarily in the ATO setting, include Aviv and Federgruen [1], Garg and Lee [7], Li [11], Mahajan and van Ryzin [12], and Zipkin [21]. Whereas most studies in the literature focus on certain segments of the supply chain, modeled as simple stand-alone queues, the recent work of Ettl et al. [5] aims at modeling large-scale, end-to-end enterprise supply chains, such as those in the PC industry. In contrast, the model we develop in this paper is simpler in the network configuration (there are only two levels of BOM: the components and the end products), but more demanding in service requirement (the fill rate of each product is essentially the off-the-shelf availability of all its components). To deal with this more stringent service requirement, we use a lower bound as a surrogate for the analytically intractable fill rate. This treatment is essentially in the same spirit as the approach in [4, 16, 17]. The novelty of our approach here is to exploit the simpler BOM structure to come up with algorithms that solve the optimization problem in a more efficient manner than the gradient search in [5]. The rest of the paper is organized as follows.
In the next three sections we present details of our model: the given data required as input to the model (§2), the base-stock control policy that we focus on (§3), and the modeling of the customer service requirement (§4). The optimization problem and the algorithms that solve the problem are presented in §5. These are followed by §6, where the performance of the model and the solution technique is examined in detail. In §7, we present the major findings in applying our model to study the inventory/service impacts in reengineering PSG's business process. Brief concluding remarks are presented in §8.

2 The Model and Given Data

We consider a hybrid model, by which each end product is assembled to order from a set of components, which, in turn, are built to stock. In other words, no finished goods inventory is kept for any end product, whereas each component ("building block") has its own inventory, replenished from a supplier following a base-stock policy.

Each component inventory is indexed by i, i ∈ S, where S denotes the set of all components. Associated with each component is a "store," where the inventory is kept. In the CTO environment, there is no pre-specified product "menu"; in principle, every order can require a distinct set of components. Let M denote the set of product/demand families that use the same set of components. For instance, M = {low-end machines, high-end machines, servers}; or M = {individuals, small business, corporations}. (Also, refer to Figure 1 for a numerical example considered in §6.)

Time is discrete, indexed by t, with each time unit called a period. Let D_m(t) denote the demand associated with product family m in period t. Each order of type m requires a random number of units from component i, denoted as X_{mi}, which takes on non-negative integer values. Denote:

    S_m := {i ∈ S : X_{mi} ≢ 0},    M_i := {m ∈ M : X_{mi} ≢ 0}.

That is, S_m denotes the set of components used in type-m products, whereas M_i denotes all the product families that use component i. (Here, X_{mi} ≡ 0 means P(X_{mi} = 0) = 1.)

There are two kinds of leadtimes: those associated with the components, and those associated with the end products:

- L^in_i, i ∈ S: the in-bound leadtime, the time for the supplier of component i to replenish store i once an order is placed. Assume this leadtime is known through a given distribution; for instance, a normal distribution with mean and variance given.
- L^out_m, m ∈ M: the out-bound leadtime, the time to supply a customer demand of type m, provided there is no stockout of any component i ∈ S_m. This time includes the order processing time, the assembly/reconfiguration time, and the transportation time to deliver the order. The distribution of this leadtime is also assumed known.

The first step in our analysis is to translate the end-product demand into demand for each component i:

    D_i(t) = Σ_{m∈M_i} Σ_{k=1}^{D_m(t+L^out_m)} X_{mi}(k),    (1)

where X_{mi}(k), for k = 1, 2, ..., are i.i.d. copies of X_{mi}, and the inclusion of L^out_m in the end-product demand is a standard MRP type of demand-leadtime offset. Assume the mean and the variance of X_{mi} are known. For instance, these can be derived from empirical demand data. Applying Wald's identity (refer to, e.g., [13]) and conditioning on L^out_m, we can then derive the

mean and the variance of D_i(t):

    E[D_i(t)] = Σ_{m∈M_i} E(X_{mi}) E[D_m(t + L^out_m)]
              = Σ_{m∈M_i} E(X_{mi}) Σ_ℓ E[D_m(t + ℓ)] P[L^out_m = ℓ],    (2)

and

    Var[D_i(t)] = Σ_{m∈M_i} { E[D_m(t + L^out_m)] Var(X_{mi}) + Var[D_m(t + L^out_m)] E²(X_{mi}) }
                = Σ_{m∈M_i} Var(X_{mi}) Σ_ℓ E[D_m(t + ℓ)] P[L^out_m = ℓ]
                  + Σ_{m∈M_i} E²(X_{mi}) { Σ_ℓ E[D²_m(t + ℓ)] P[L^out_m = ℓ]
                  − ( Σ_ℓ E[D_m(t + ℓ)] P[L^out_m = ℓ] )² }.    (3)

The variance calculation above assumes the independence of the demands across product families. If these are correlated, the above derivation can be modi ed without essential diÆculties by incorporating the covariance terms. Also note that in applying Wald's identity to (1) certain dependence between X and D is allowed. mi

m

3 Base-Stock Control

As mentioned earlier, each store i follows a base-stock policy. Let R_i(t) denote the base-stock level at store i in period t, i.e., on-hand inventory plus any outstanding orders (on-order position). Let I_i(t) and B_i(t) denote the inventory and backorder levels in period t. A combination of standard inventory theory (e.g., [21]) and queueing analysis ([5]) yields the following results. Denote:

    D_i(t−ℓ, t) := D_i(t−ℓ) + D_i(t−ℓ+1) + ⋯ + D_i(t),

and

    μ_i(t−ℓ, t) = Σ_{s=t−ℓ}^{t} E[D_i(s)],    (4)
    σ²_i(t−ℓ, t) = Σ_{s=t−ℓ}^{t} Var[D_i(s)].    (5)

Assuming that demands follow normal distributions, we write

    D_i(t−ℓ, t) = μ_i(t−ℓ, t) + Z · σ_i(t−ℓ, t),

where Z denotes the standard normal variate; and write

    R_i(t) = μ_i(t−ℓ^in_i, t) + k_i(t) · σ_i(t−ℓ^in_i, t),    (6)

where ℓ^in_i := E[L^in_i] is the expected in-bound leadtime, and k_i(t) is often referred to as the safety factor. Then,

    I_i(t) = [R_i(t) − D_i(t−ℓ^in_i, t)]⁺  and  B_i(t) = [D_i(t−ℓ^in_i, t) − R_i(t)]⁺.    (7)

(Note that since time is discrete, we shall round up any real-valued ℓ^in_i to the next integer.) Furthermore, recall the following standard function in inventory theory (e.g., [21]):

    G(x) := E[Z − x]⁺ = ∫_x^∞ (z − x) φ(z) dz = φ(x) − x Φ̄(x),    (8)

with Z denoting the standard normal variate, φ and Φ denoting, respectively, the density function and the distribution function of Z, and Φ̄(x) := 1 − Φ(x). A related function is:

    H(x) := E[x − Z]⁺ = x + G(x) = φ(x) + x Φ(x).    (9)

We can then derive:

    E[I_i(t)] = E[R_i(t) − D_i(t−ℓ^in_i, t)]⁺ = σ_i(t−ℓ^in_i, t) H(k_i(t)),    (10)
    E[B_i(t)] = E[D_i(t−ℓ^in_i, t) − R_i(t)]⁺ = σ_i(t−ℓ^in_i, t) G(k_i(t)),    (11)

and

    P[I_i(t) = 0] = P[D_i(t−ℓ^in_i, t) ≥ R_i(t)] = P[Z ≥ k_i(t)] = Φ̄(k_i(t)).    (12)

To facilitate implementation, it is often desirable to translate R_i(t) into "days of supply" (DOS), or more precisely, periods of supply. To do so, note that the μ_i part of R_i(t) simply translates into ℓ^in_i periods (up to t) of demand. In addition, we can turn the safety-stock part of R_i(t) into k_i(t) σ_i(t−ℓ^in_i, t) ℓ^in_i / μ_i(t−ℓ^in_i, t) periods of demand. Hence, we can express R_i(t) in terms of periods of demand, or DOS, as follows:

    DOS_i(t) = ℓ^in_i [ 1 + k_i(t) σ_i(t−ℓ^in_i, t) / μ_i(t−ℓ^in_i, t) ].    (13)

Note the intuitively appealing form of (13), in particular the safety-stock (or rather, safety-time) part, which is equal to the product of the safety factor and the coefficient of variation (i.e., the ratio of standard deviation to mean) of the component demand over the (in-bound) leadtime.

Next, suppose demand is stationary, i.e., for each product family m, D_m(t) is invariant in distribution over time. Then, (4) and (5) reduce to the following (omitting the time arguments):

    μ_i = ℓ^in_i E[D_i]  and  σ²_i = ℓ^in_i Var[D_i].    (14)

We can then write

    R_i = ℓ^in_i E[D_i] + k_i √(ℓ^in_i) sd[D_i],    (15)

and hence,

    DOS_i = R_i / E[D_i] = ℓ^in_i + k_i √(ℓ^in_i) ε_i = ℓ^in_i [ 1 + k_i ε_i / √(ℓ^in_i) ],    (16)

where ε_i := sd[D_i] / E[D_i] is the coefficient of variation of the demand per period for component i. (Hence ε_i / √(ℓ^in_i) is the coefficient of variation of the demand over the leadtime ℓ^in_i, which is consistent with the general formula in (13).)

Sometimes it is more appropriate to adjust the demand distribution to account for non-negativity. Specifically, instead of D = μ + σZ, where Z is the standard normal variate, we should have D̃ = [μ + σZ]⁺. The adjusted mean follows from (8):

    E[D̃] = σ E[Z − (−μ/σ)]⁺ = σ G(−μ/σ).    (17)

To derive the adjusted variance, note the following:

    E{[(Z − x)⁺]²} = ∫_x^∞ (z − x)² φ(z) dz
                   = x φ(x) + Φ̄(x) − 2x φ(x) + x² Φ̄(x)
                   = Φ̄(x) − x G(x),

where the last equation makes use of (8). Hence,

    Var[D̃] = σ² Var{[Z − (−μ/σ)]⁺}
            = σ² E{[(Z − (−μ/σ))⁺]²} − [E(D̃)]²
            = σ² [ Φ̄(−μ/σ) + (μ/σ) G(−μ/σ) − G²(−μ/σ) ].    (18)
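Formulas (6)-(16) are straightforward to evaluate numerically. Below is a minimal sketch of the stationary-demand case (the inputs `lead`, `mean_d`, `sd_d` and the safety factor `k` are made-up illustrative values, not numbers from the study), using the standard normal loss functions G and H from (8)-(9):

```python
import math

def phi(x):   # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def G(x):     # E[Z - x]^+  (eq. (8))
    return phi(x) - x * (1.0 - Phi(x))

def H(x):     # E[x - Z]^+ = x + G(x)  (eq. (9))
    return x + G(x)

def base_stock_summary(lead, mean_d, sd_d, k):
    """Stationary-demand base-stock quantities, eqs. (14)-(16), (10)-(12)."""
    mu = lead * mean_d                      # eq. (14)
    sigma = math.sqrt(lead) * sd_d          # eq. (14)
    r = mu + k * sigma                      # eq. (15)
    return {
        "base_stock": r,
        "dos": r / mean_d,                  # eq. (16)
        "exp_inventory": sigma * H(k),      # eq. (10)
        "exp_backorders": sigma * G(k),     # eq. (11)
        "stockout_prob": 1.0 - Phi(k),      # eq. (12)
    }

s = base_stock_summary(lead=9, mean_d=100.0, sd_d=25.0, k=2.0)
# With k = 2 the stockout probability is about 2.3%.
```

Note the identity H(k) − G(k) = k, which is a quick internal consistency check on any implementation of (8)-(9).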
For moderately large x (say, x ≥ 2), from (8) we have G(−x) ≈ x, and hence

    E[D̃] ≈ μ = E[D],    Var[D̃] ≈ σ² = Var[D],

from (17) and (18). Therefore, the above adjustment is only needed when the coefficient of variation of the demand, σ/μ, is relatively large, say, 0.5 or above. (In the numerical examples of §6 we indeed implement this adjustment in the relevant cases.)

From (6), (15) and (16), it is clear that to identify the base-stock policy is tantamount to specifying the safety factor k_i for each component inventory. In the following sections, we discuss how to set the safety factor values so as to achieve the best inventory-service performance as specified in the optimization problems below. For ease of exposition, we shall focus on stationary demand. For non-stationary demand, we can simply solve the optimization problems below period by period.
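The adjustment (17)-(18) and the large-x approximation above can be checked numerically. The sketch below (our illustration, not code from the paper) evaluates the adjusted moments of D̃ = [μ + σZ]⁺ and confirms that they approach μ and σ² as the coefficient of variation shrinks:

```python
import math

def _phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def _G(x):                       # E[Z - x]^+, eq. (8)
    return _phi(x) - x * (1.0 - _Phi(x))

def truncated_moments(mu, sigma):
    """Mean and variance of [mu + sigma*Z]^+, per eqs. (17)-(18)."""
    x = -mu / sigma
    mean = sigma * _G(x)                                          # eq. (17)
    var = sigma ** 2 * ((1.0 - _Phi(x)) - x * _G(x) - _G(x) ** 2) # eq. (18)
    return mean, var

# Low coefficient of variation (0.25): the adjustment is negligible.
m_low, v_low = truncated_moments(100.0, 25.0)
# High coefficient of variation (1.0): the adjustment matters.
m_high, v_high = truncated_moments(100.0, 100.0)
```

At cv = 0.25 the adjusted moments agree with (μ, σ²) to within a fraction of a percent, while at cv = 1.0 the truncation raises the mean and cuts the variance noticeably, which matches the paper's rule of thumb of applying the adjustment from cv ≈ 0.5 upward.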

4 Service Requirement

4.1 Off-the-Shelf Availability

To start with, consider the special case of each order of type m requiring exactly one unit of each component i ∈ S_m. Let α_m be the required service level, defined here as the immediate (i.e., off-the-shelf) availability of all the components required to assemble a unit of type-m product, for any m. Let E_i denote the event that component i is out of stock. Then, we require

    P[∪_{i∈S_m} E_i] ≤ 1 − α_m.

Making use of the well-known inclusion-exclusion formula (e.g., [13]):

    P[∪_{i∈S_m} E_i] = Σ_i P(E_i) − Σ_{i<j} P(E_i ∩ E_j) + Σ_{i<j<k} P(E_i ∩ E_j ∩ E_k) − ⋯
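To see why a tractable surrogate for this union probability is useful, the sketch below (our illustration, under an independence assumption across component stores that the paper's analysis does not require) compares the exact no-stockout probability with the first-order truncation of the inclusion-exclusion series, i.e., the union bound:

```python
def exact_availability(stockout_probs):
    """P[no component in the order is out of stock], assuming the
    stockout events E_i are independent across stores."""
    avail = 1.0
    for p in stockout_probs:
        avail *= (1.0 - p)
    return avail

def union_bound_availability(stockout_probs):
    """1 - sum_i P(E_i): keeps only the first inclusion-exclusion term,
    hence a lower bound on the true availability."""
    return 1.0 - sum(stockout_probs)

p = [0.02, 0.03, 0.05]               # hypothetical per-component stockout probs
exact = exact_availability(p)
lower = union_bound_availability(p)
assert lower <= exact                 # the surrogate is conservative
```

Enforcing the service constraint through such a lower bound is conservative: a solution that meets the target on the bound over-achieves the true availability, which is exactly the effect quantified in the numerical studies of §6.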

Suppose the required service level α_m is perturbed by a small amount δ > 0, from α_m to α_m − δ; the corresponding first-order change in the optimal objective value involves the terms λ r_{mi}(1 − α_m) summed over i ∈ S_m, with λ = dz/dα. This sensitivity analysis can also be used in another way. Since the service level used in the optimization problem is a lower bound of the true value, the optimal solution is conservative, in that it returns a service level that is higher than what is required. Our numerical experience shows this typically results in an objective value that is 10-15% higher (than if the true service levels were used). Therefore, after solving the optimization problem, we can use the above sensitivity analysis to find the reductions on the α_m values (required service levels) that correspond to reducing the objective value by, say, 10%; and then re-solve the optimization problem using the reduced α_m values. (Refer to the discussions on the results in Tables 4 and 5 of the next section.)

6 Numerical Results

For our numerical studies, we consider a family of desktop computers which are assembled from a set of 12 different components i = 1, 2, ..., 12, as illustrated in Figure 1.

FIGURE 1 ABOUT HERE

All components used in the assembly of an end product are purchased from external suppliers. The supplier lead times are deterministic with E(L^in_i) time units, which represent the
time required to manufacture the component and ship it from a supplier warehouse to the PC assembly plant. The lead times, unit costs and demand configurations are summarized in Table 1. These are based on actual data from a PC assembly system, with the necessary alterations to protect proprietary information. The modified values, however, still provide a realistic product representation. As the final assembly process for end products takes no more than a few hours, which is an order of magnitude shorter than the component lead times, the system fits well within the framework discussed in §4.

                                                                     P(X_mi > 0)
     i  category      component            E(L^in_i)  cost c_i  low-end  mid-end  high-end
     1  shell         Base unit                    5       215      1.0      1.0       1.0
     2  memory        128MB card                  15       232      1.0      1.0       1.0
     3  motherboard   450 MHz board               12       246      1.0       --        --
     4  motherboard   500 MHz board               12       316       --      1.0        --
     5  motherboard   600 MHz board               12       639       --       --       1.0
     6  storage       7GB disk drive              18       215      1.0      0.4        --
     7  storage       13GB disk drive             18       250       --      0.6       1.0
     8  software      Pre-load A                   4        90      0.7      0.5       0.3
     9  software      Pre-load B                   4        90      0.3      0.5       0.7
    10  options       CD ROM                      10       126      1.0      1.0       1.0
    11  options       Video graphics card          6        90       --      0.3       0.6
    12  options       Ethernet card               10        90       --      0.2       0.5

Table 1: Bill-of-materials structure for the example configure-to-order system.

We consider 3 customer segments, i.e., m = 1, 2, 3, representing low-end, mid-end and high-end demands, respectively. Orders for end products differ by customer segment, in terms of the possible selection of components that constitute the end product. We assume that each order requires no more than one unit of component i. The random variables X_mi defined in §2 thus take on values of zero or one. The marginal distribution of X_mi is displayed in the last three columns of Table 1 for each customer segment. It determines the possible combinations of components that can be selected by orders from each customer segment. The distribution indicates the proportion of orders that request a specific component. For instance, for high-end customers, 100% of the orders select a 13GB disk drive. For mid-end customers, 40% of orders request a 7GB disk drive, and 60% request a 13GB hard drive. Each order requests exactly one component from each component category. The only exception is the "options" category, of which more than one component can be selected.
We represent the customer orders per time unit as i.i.d. normal random variables with mean E(D_m) = 100 and coefficient of variation cv(D_m) = 0.25 and 0.50, for all m = 1, 2, 3. Notice that end products from any given customer segment use a motherboard that is unique to that customer segment. Therefore, we can use the algorithm described in §5.1 to compute the optimal base-stock policies.
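Given the feature ratios in Table 1 and E(D_m) = 100 for every segment, the mean per-period demand seen by each component store follows directly from (2) with zero-one usage. A small check of ours (the numbers come from Table 1; the dictionary layout is an assumption of this sketch):

```python
# Feature ratios P(X_mi > 0) from Table 1, ordered (low-end, mid-end, high-end).
feature_ratios = {
    6:  (1.0, 0.4, 0.0),   # 7GB disk drive
    7:  (0.0, 0.6, 1.0),   # 13GB disk drive
    11: (0.0, 0.3, 0.6),   # video graphics card
}
mean_segment_demand = 100.0  # E(D_m) = 100 for every segment

def mean_component_demand(i):
    # Since each X_mi is 0/1, E(X_mi) = P(X_mi > 0), and eq. (2) reduces
    # to summing ratio * E(D_m) over the segments.
    return sum(r * mean_segment_demand for r in feature_ratios[i])

print(mean_component_demand(6))   # all of low-end plus 40% of mid-end
print(mean_component_demand(7))   # 60% of mid-end plus all of high-end
```

For instance, the 13GB disk drive store faces a pooled mean demand of about 160 units per period, aggregated across the mid-end and high-end segments.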

6.1 Validation of the Analytical Model

Here we examine two aspects of the analytical model developed in the previous sections: the solution algorithm (of §5.1), and the lower bound (for the fill rate) involved in the service-level constraints. For the first issue, recall the bisection procedure is guaranteed to converge, but it may converge to a sub-optimal solution. More specifically, the convergent point (solution) is optimal if and only if all the service constraints are binding. Hence, in Table 2, we present the solutions for two different demand scenarios (in terms of the coefficients of variation), and several different levels of required service, ranging from 80% to 98% (but uniform across customer segments). The column labeled α_analy reports the fill rates corresponding to the solution returned by the algorithm, with the objective values reported in the column labeled z*_analy. There are three solutions in each demand scenario that are sub-optimal, reflected by the gaps between α and α_analy. For each of these solutions, we perform a random search of 5,000,000 points around the neighborhood of the point that the bisection procedure converges to, and report under z*_search the best objective value among all solutions feasible to (28). We can see that the objective values match very well. In particular, in the two cases where the algorithm over-achieves the required service level, the objective value is only slightly above the best value returned by the search procedure. Table 3 extends the same study to allow different service requirements for different customer segments (one demand scenario only). The various combinations of service requirements are displayed in the first three columns. For all the above examples, the number of iterations until convergence is between 15 and 23, for a tolerance of ε = 10^{-6}.
To address the second issue, the quality of the lower bound on the fill rate (off-the-shelf availability) involved in the service constraints, for every combination of the service requirements listed in Table 3 we determine the "true" fill rate for each customer segment by using simulation to evaluate the solution generated by the analytical method. Figures 2 to 4 show the comparisons between the analytical and simulated values. (The coefficient of variation of demand was 0.50.) In each figure, we varied the service requirements of one customer segment and


                        cv(D_m) = 0.25
    α      α_analy               z*_analy    z*_search   rel.dev.
   .800   (.800,.800,.800)       437,637           --        --
   .820   (.820,.819,.818)       450,311      451,121      0.2%
   .840   (.839,.837,.840)       462,308      463,088      0.2%
   .860   (.860,.860,.860)       477,489           --        --
   .880   (.880,.880,.880)       494,050           --        --
   .900   (.901,.900,.900)       513,383      512,050     -0.3%
   .920   (.920,.920,.920)       536,004           --        --
   .940   (.940,.940,.940)       564,446           --        --
   .960   (.960,.960,.960)       602,862           --        --
   .980   (.980,.980,.980)       664,478           --        --

                        cv(D_m) = 0.50
    α      α_analy               z*_analy    z*_search   rel.dev.
   .800   (.800,.800,.800)       875,273           --        --
   .820   (.820,.819,.818)       900,621      902,243      0.2%
   .840   (.839,.837,.840)       924,616      926,176      0.2%
   .860   (.860,.860,.860)       954,978           --        --
   .880   (.880,.880,.880)       988,100           --        --
   .900   (.901,.900,.900)     1,026,766    1,024,199     -0.3%
   .920   (.920,.920,.920)     1,072,007           --        --
   .940   (.940,.940,.940)     1,128,892           --        --
   .960   (.960,.960,.960)     1,205,723           --        --
   .980   (.980,.980,.980)     1,328,956           --        --
Table 2: Quality of the algorithm: identical service target for all customer segments. held the other two constant. For example, the service requirements (targets) for the high-end and mid-end customer segments in Figure 4 were 0.98 and 0.95, respectively, while the service target for the low-end segment varies between 0.90 and 0.99. (The simulation point estimates are obtained by the batch-mean method, dividing every run into 10 batches of 5,000 arrivals per customer segment.) Clearly, the analytical values, being a lower bound of the true ll rate, underestimate the achieved service level (i.e., over-achieve the service requirements). Overall, however, the analytical curves track the simulated curves quite closely, and the gap decreases as the service requirement increases. FIGURES 2-4 ABOUT HERE Next, we continue examining the second issue raised above, but from a di erent angle: Since the lower bounds involved in the service constraints over-achieve the service requirements, what is the true optimal solution that exactly meets the service requirements? To study this, we rst apply simulation to the cases in Table 2, and report the results in Table 4, where the column under analy lists the actual (i.e., simulated) service levels achieved by the analytical solutions,  and the column under zanaly lists the corresponding objective values (which are the same as in Table 2). Next, we use the sensitivity analysis in x5.3, combined with simulation, to search for the best solution that exactly meets the service requirements (as veri ed by simulation). Speci cally, we gradually decrease the service requirements in the optimization model (i.e., the values), using the sensitivity analysis as a guideline; solve the new optimization problem, and then use simulation to nd the achieved service levels. This procedure is repeated until the analytical solution returns a set of simulated service levels that are within a pre-de ned tolerance of the original service requirements. 
Furthermore, when this procedure terminates, we m


                              cv(D_m) = 0.50
    α_1    α_2    α_3    α_analy              z*_analy    z*_search   rel.dev.
   .920   .950   .900   (.921,.950,.900)     1,083,953    1,085,977     0.2%
   .920   .950   .920   (.920,.950,.920)     1,102,866           --       --
   .920   .950   .940   (.919,.950,.940)     1,128,973    1,131,144     0.2%
   .920   .950   .960   (.919,.950,.960)     1,168,104    1,172,183     0.3%
   .920   .950   .980   (.918,.948,.980)     1,239,232    1,244,627     0.4%
   .920   .900   .980   (.918,.908,.980)     1,215,895    1,217,521     0.1%
   .920   .920   .980   (.919,.925,.980)     1,224,201    1,226,235     0.2%
   .920   .940   .980   (.916,.943,.980)     1,234,501    1,237,110     0.2%
   .920   .960   .980   (.920,.964,.980)     1,256,906    1,255,050    -0.1%
   .920   .980   .980   (.923,.980,.980)     1,292,961    1,290,429    -0.2%
   .900   .950   .980   (.904,.947,.980)     1,233,337    1,238,372     0.4%
   .920   .950   .980   (.918,.948,.980)     1,239,232    1,244,627     0.4%
   .940   .950   .980   (.942,.949,.980)     1,251,034    1,254,381     0.3%
   .960   .950   .980   (.958,.950,.980)     1,263,566    1,267,927     0.3%
   .980   .950   .980   (.980,.955,.980)     1,298,052    1,297,527     0.0%

Table 3: Quality of the algorithm: different service targets for different customer segments.

generate 10,000 more points randomly around the neighborhood of the identified solution. The best solution found in this procedure (i.e., the one with the lowest expected inventory cost) is reported under the columns labeled α_search and z*_search. The relative difference between z*_analy and z*_search is reported in the column labeled δ_z. We applied the same study to the cases in Table 3, with the results summarized in Table 5.
                        cv(D_m) = 0.25
    α      α_analy    z*_analy    α_search    z*_search     δ_z
   .800     .890       437,636      .805       372,116    15.0%
   .820     .896       450,310      .839       385,133    14.5%
   .840     .908       462,307      .852       398,653    13.8%
   .860     .919       477,492      .877       414,776    13.1%
   .880     .928       494,050      .893       428,233    13.3%
   .900     .939       513,383      .915       452,212    11.9%
   .920     .950       536,003      .934       477,074    11.0%
   .940     .962       564,446      .947       503,809    10.7%
   .960     .974       602,861      .969       553,908     8.1%
   .980     .985       664,477      .982       610,014     8.2%

                        cv(D_m) = 0.50
    α      α_analy    z*_analy    α_search    z*_search     δ_z
   .800     .893       875,272      .813       744,232    15.0%
   .820     .902       900,620      .828       767,392    14.8%
   .840     .911       924,614      .844       792,946    14.2%
   .860     .919       954,983      .865       823,529    13.8%
   .880     .928       988,099      .895       856,465    13.3%
   .900     .939     1,026,766      .915       904,428    11.9%
   .920     .949     1,072,007      .931       954,148    11.0%
   .940     .961     1,128,892      .950     1,011,283    10.4%
   .960     .973     1,205,723      .968     1,107,816     8.1%
   .980     .985     1,328,955      .982     1,220,027     8.2%

Table 4: Comparisons between the analytical and the search solutions: identical service target for all customer segments.

As the numbers in the two Tables 4 and 5 illustrate, the objective values corresponding to the

    α_1    α_2    α_3    α_analy               z*_analy    α_search              z*_search     δ_z
   .920   .950   .900   (.947, .969, .930)    1,083,953   (.925, .959, .901)      922,025    14.9%
   .920   .950   .920   (.945, .968, .945)    1,102,866   (.925, .961, .920)      948,462    14.0%
   .920   .950   .940   (.945, .966, .962)    1,128,973   (.932, .951, .941)      978,332    13.3%
   .920   .950   .960   (.942, .964, .977)    1,168,104   (.926, .955, .966)    1,013,516    13.2%
   .920   .950   .980   (.940, .958, .989)    1,239,232   (.927, .953, .982)    1,077,292    13.1%
   .920   .900   .980   (.944, .919, .991)    1,215,895   (.924, .902, .980)    1,046,427    13.9%
   .920   .920   .980   (.943, .936, .990)    1,224,201   (.924, .921, .986)    1,112,248     9.1%
   .920   .940   .980   (.940, .953, .989)    1,234,501   (.922, .940, .983)    1,123,316     9.0%
   .920   .960   .980   (.941, .972, .988)    1,256,906   (.920, .960, .981)    1,094,290    12.9%
   .920   .980   .980   (.938, .986, .986)    1,290,429   (.926, .981, .981)    1,114,633    13.6%
   .900   .950   .980   (.930, .958, .990)    1,233,337   (.901, .951, .981)    1,071,165    13.1%
   .920   .950   .980   (.940, .958, .989)    1,239,232   (.927, .953, .982)    1,077,292    13.1%
   .940   .950   .980   (.959, .956, .989)    1,251,034   (.941, .955, .981)    1,118,018    10.6%
   .960   .950   .980   (.972, .955, .988)    1,263,566   (.962, .958, .981)    1,141,255     9.7%
   .980   .950   .980   (.987, .956, .987)    1,297,527   (.985, .956, .980)    1,129,761    12.9%

Table 5: Comparisons between the analytical and the search solutions: di erent service targets for di erent customer segments. analytical solution are about 8-15% sub-optimal, due to the lower bounds used in the service constraints. In practice, the analytical method can be used to quickly generate a starting solution. It can also be combined with with simulation and sensitivity analysis, as outlined above, to ne-tune the starting solution until optimality. 6.2 E ect of Risk-Pooling

In the examples presented above, the inventory of each common component forms a common buffer from which the demand in all customer segments can draw (when the component is needed). Hence, demand for each component is an aggregate of the demand over all customer segments. To the extent that the safety stock of the component inventory is determined, among other things, by the variability of this aggregate demand on the component, the inventory level required to achieve a given service target can be lowered by this aggregation, or risk pooling. In general, the impact of risk pooling becomes more significant when the number of product configurations is large and the correlation among demands is small (e.g., Aviv and Federgruen [1], Brown et al. [2], Lee [10]).

To understand how risk pooling influences the performance of a CTO system, we compare some of the examples studied above with their no-risk-pooling counterparts. Specifically, we consider the cases in Table 4, with the coefficient of variation of demand being 0.5. For the no-risk-pooling scenario, we apply the optimization to each customer segment separately, and then sum up the expected inventory costs over all three customer segments. The comparison against the risk-pooling results is plotted in Figure 5.

FIGURE 5 ABOUT HERE

In the figure, the service levels are the actual realized values obtained by simulating the analytical solutions. We observe that risk pooling results in a significant reduction of inventory investment: for the same service target, the expected inventory cost is 20-25% lower with risk pooling, with the largest gap appearing at the high end: 25.8% when the service target is 0.98. Our other experiments indicate that the gap also increases with demand variability. For instance, when the coefficient of variation of demand is reduced from 0.5 to 0.25, the relative difference at a service target of 0.98 is 18.4%.
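The mechanics of the pooling gain can be sketched with a toy calculation (this is an illustration, not the paper's optimization model): under independent, normally distributed segment demands, a pooled buffer needs safety stock proportional to the square root of the summed variances, rather than the sum of the individual standard deviations. All standard deviations below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

# Safety factor for a 95% service target under a normal demand model.
z = NormalDist().inv_cdf(0.95)

# Hypothetical per-segment demand standard deviations over the leadtime.
segment_sigmas = [100.0, 120.0, 80.0]

# No risk pooling: each segment carries its own safety stock.
separate = sum(z * s for s in segment_sigmas)

# Risk pooling: one buffer faces the aggregate demand, whose standard
# deviation (assuming independence) is the root of the summed variances.
pooled = z * sqrt(sum(s * s for s in segment_sigmas))

print(f"separate: {separate:.0f}, pooled: {pooled:.0f}")
print(f"pooling savings: {1 - pooled / separate:.1%}")
```

With these toy numbers, pooling cuts safety stock by roughly 40%, more than the 20-25% reported above, because the sketch ignores demand correlation and the rest of the cost structure captured by the full model.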

                               days-of-supply     days-of-supply (no risk-pooling)
No.  Component                 (risk-pooling,     low-end   mid-end   high-end
                               all segments)
 1   Base unit                 1.6                2.5       2.6       2.7
 2   128MB card                2.4                3.8       3.9       4.2
 3   450 MHz board             3.0                3.5       --        --
 4   500 MHz board             3.0                --        3.3       --
 5   600 MHz board             2.9                --        --        3.0
 6   7GB disk drive            3.2                4.2       4.7       --
 7   13GB disk drive           3.2                --        3.7       4.4
 8   Pre-load A                2.3                2.3       2.7       3.1
 9   Pre-load B                2.3                3.0       2.7       2.5
10   CD ROM                    3.3                3.7       3.8       4.0
11   Video graphics card       2.5                --        3.6       3.2
12   Ethernet card             3.4                --        4.8       4.2

Table 6: Comparison between risk-pooling and no risk-pooling scenarios, service target 0.90 for all customer segments.

Table 6 shows the optimal component safety stock levels expressed in days-of-supply (as defined in (16)) for both risk-pooling and no risk-pooling. As expected, risk-pooling requires a lower days-of-supply value. Further, the amount of safety stock needed tends to be lower when the commonality of a component is higher. For example, the 128MB memory card, which is used in all customer segments, requires 2.4 days-of-supply, whereas the Ethernet card, which is used only in the high-end and mid-end segments, requires 3.4 days-of-supply.
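For readers without equation (16) at hand, days-of-supply figures of this kind can be read as inventory normalized by mean daily component demand; the one-liner below illustrates that common convention with hypothetical numbers (the paper's exact definition is its equation (16) and may differ in detail).

```python
# Hypothetical sketch of a days-of-supply conversion (a common form;
# the paper's exact definition is its equation (16)).
def days_of_supply(inventory_units: float, mean_daily_demand: float) -> float:
    """Inventory expressed in days of average component demand."""
    return inventory_units / mean_daily_demand

# e.g. a pooled buffer of 1,200 units of a component facing an aggregate
# demand of 500 units/day corresponds to 2.4 days-of-supply
print(days_of_supply(1200.0, 500.0))  # -> 2.4
```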


7 A Process Reengineering Application

Here we describe the study mentioned in the Introduction, which was part of a larger project aimed at reengineering IBM/PSG's business process: from a build-to-stock operation to a configure-to-order operation centered around "building blocks" (i.e., keeping inventory only at the component level). To carry out the study, we have developed two basic models: the "as-is" model, which reflects PSG's present operation, and the "to-be" model, which is based on the optimization model described in the previous sections, in particular, with the component inventory levels generated by the algorithm in Section 5. For both models, we aggregate PSG's production-inventory system into two stages: the first stage consists of the components, or building blocks, and the second stage includes the assembly and the order fulfillment. Three factors have been identified as focal points of our study: (i) manufacturing strategy, i.e., the "as-is" operation versus the "to-be" model; (ii) the accuracy of demand forecast at the end-product level versus at the component level; (iii) the effect of mass customization as a result of, for instance, direct sales over the Internet.

To study the first factor, we select a major product family at PSG, which consists of 18 end products assembled from a total of 17 components. We use PSG's existing data, including BOM, unit costs, and assembly and procurement leadtimes, to run a detailed simulation model. The demand for each end product is statistically generated, based on historical data. The days-of-supply targets are set to meet a required service level of 95% (for all end products). Following PSG's current practice, these targets are determined using a combination of simple heuristic rules and judgement calls from product managers, and verified by simulation (via trial and error).
We then feed the same data, including the same statistically generated demand streams, into the optimization model, which eliminates the finished-goods inventory at the end-product level and optimally sets the base-stock level for each component inventory. The optimization model minimizes the overall inventory cost while meeting the same service level of 95% for all end products. We take the optimal base-stock levels, and rerun the simulation to verify the analytical results. Figure 6 shows the comparison between the "as-is" and the "to-be" models, in terms of the overall inventory cost. (To protect proprietary information, the vertical axis in all figures is normalized with respect to the inventory cost of the "as-is" model, which is set to 100.) As expected, the inventory cost at the end-product level is virtually eliminated in the "to-be" model. (The cost shown is due to WIP; the cost due to finished goods is nil.) In contrast, the "as-is" model keeps a significant amount of end-product inventory. On the other hand, the amount of component inventory is higher in the "to-be" model, which is again expected, since the required service level of 95% is common to both models. Overall, the "to-be" model reduces the overall inventory cost by about 30%.

FIGURE 6 ABOUT HERE

Note that in the above study, both models use the same demand forecast, at the end-product level. The "to-be" model, however, can easily switch to forecasting demand directly at the component level. This will result in improved forecast accuracy, as we can take advantage of parts commonality, since each component is generally used in several end products. Hence, in our study of the second factor, we evaluate the effect of forecast accuracy through a sensitivity analysis. Figure 7 shows the overall inventory cost associated with three different levels of forecast accuracy. The first two columns repeat the comparison in the last figure, i.e., both the "as-is" and the "to-be" models assume a 30% forecast error (i.e., the coefficient of variation equals 0.3) at the end-product level; the next two columns represent improved forecast errors, of 20% and 10%, achieved by the "to-be" model through forecasting at the component level.

FIGURE 7 ABOUT HERE

Our study of the third factor aims at analyzing the impact on inventory when the system supports a richer product set, in terms of product variety. The motivation is to support mass customization: in the Internet-based, direct-sales environment, for instance, the number of different product configurations that customers want to order can be significantly larger than what is currently supported in the build-to-stock environment.
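The interaction between product variety and forecast error can be sketched with a standard square-root argument (an assumption of the experiment design used for the 10x scenarios in Table 7, not a derived result): if a fixed total volume is split evenly over k variants with independent forecast errors, the aggregate (component-level) coefficient of variation is unchanged, while each variant's coefficient of variation grows by a factor of sqrt(k). All numbers are hypothetical.

```python
from math import sqrt

# Square-root scaling of forecast error under product proliferation:
# aggregate demand has mean M and std dev cv*M; each of k variants has
# mean M/k and, with independent errors, std dev (cv*M)/sqrt(k),
# so its coefficient of variation is cv*sqrt(k).
def per_variant_cv(aggregate_cv: float, k: int) -> float:
    return aggregate_cv * sqrt(k)

# A 30% aggregate error spread over 10 variants, as in the 10x scenarios.
print(round(per_variant_cv(0.30, 10), 2))  # -> 0.95, i.e. 30%·sqrt(10)
```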
Figure 8 shows the inventory costs: the four columns on the left correspond to the current product set (1x), with the first scenario (S1) being the "as-is" model, and the other three being the "to-be" model at the current (S2) and improved (S3, S4) forecast accuracy levels, respectively; the four columns on the right repeat these scenarios with a product set that is ten times as large in variety (10x), while maintaining the overall volume. (Also refer to Table 7 for a summary of all the different scenarios.)

FIGURE 8 ABOUT HERE

Scenario         Description
1x cases
S1 ("as-is")     original product set, 30% forecast error, 90% service
S2 ("to-be")     forecast at MTM level, 30% forecast error, 90% service
S3 ("to-be")     forecast at BB level, 20% forecast error, 90% service
S4 ("to-be")     forecast at BB level, 10% forecast error, 90% service
10x cases
S1               ten times larger product set, 30%·sqrt(10) forecast error at MTM level
S2               ten times larger product set, forecast error as in S2 (1x)
S3               ten times larger product set, forecast error as in S3 (1x)
S4               ten times larger product set, forecast error as in S4 (1x)

Table 7: Summary of the scenarios used to study the effect of product variety.

Observe that as the product variety increases, a significantly higher level of inventory is required in the "as-is" model. This is because forecast accuracy deteriorates as the end products proliferate (i.e., larger variety at smaller volumes). In the "to-be" environment, on the other hand, the increase in inventory cost is very modest, because the proliferation of end products has minimal effect on the forecast accuracy at the building-block level, thanks to parts commonality. This strongly supports the view that the building-block model is the right process to support an Internet-based, direct-sales operation.

8 Concluding Remarks

We have developed an analytical model for the configure-to-order operation, which has become an important new business model in Internet commerce. We use the model to study the optimal inventory-service tradeoff, formulated as a nonlinear programming problem with a set of constraints reflecting the service levels offered to different market segments. To solve the optimization problem, we have developed an exact algorithm for the important case of demand in each market segment having (at least) one unique component, and a greedy heuristic for the non-unique component case. We have also shown how to use sensitivity analysis, along with simulation, to fine-tune the solutions. On the qualitative side, we have highlighted several new insights into some of the key benefits of the CTO operation, in terms of risk pooling and improved forecast accuracy.

As part of a larger project aimed at the reengineering of IBM/PSG's business process, we have applied our model to study three factors, in terms of their impact on reducing inventory capital and enhancing customer service: (i) manufacturing strategy, i.e., the machine-type-model based operation versus the building-block based operation; (ii) the accuracy of demand forecast at the end-product (machine configuration) level versus at the market segment level; and (iii) the effect of mass customization as a result of direct sales over the Internet.

Acknowledgements

We thank Maike Schwarz for her assistance in obtaining the CTO simulation results. We also thank Lisa Koenig of the IBM Personal Systems Group for providing us with the opportunity to work on this project.

References

[1] Aviv, Y. and Federgruen, A., The Benefits of Design for Postponement. In: Quantitative Models for Supply Chain Management, S. Tayur, R. Ganeshan and M. Magazine (eds.), Kluwer Academic Publishers, Norwell, 1999, 553-584.

[2] Brown, A., Lee, H. and Petrakian, R., Xilinx Improves its Semiconductor Supply Chain Using Product and Process Postponement, Interfaces, 30 (4), 2000, 65-80.

[3] Cheung, K.L. and Hausman, W., Multiple Failures in a Multi-Item Spare Inventory Model, IIE Transactions, 27 (1995), 171-180.

[4] Connors, D.P. and Yao, D.D., Methods for Job Configuration in Semiconductor Manufacturing, IEEE Trans. Semiconductor Manufacturing, 9 (1996), 401-411.

[5] Ettl, M., Feigin, G.E., Lin, G.Y. and Yao, D.D., A Supply Network Model with Base-Stock Control and Service Requirements, Operations Research, 48 (2000), 216-232.

[6] Gallien, J. and Wein, L., A Simple and Effective Component Procurement Policy for Stochastic Assembly Systems, Queueing Systems, 38 (2001), 221-248.

[7] Garg, A. and Lee, H.L., Managing Product Variety: An Operations Perspective. In: Quantitative Models for Supply Chain Management, S. Tayur, R. Ganeshan and M. Magazine (eds.), Kluwer Academic Publishers, Norwell, 1999, 467-490.

[8] Glasserman, P. and Wang, Y., Leadtime-Inventory Tradeoffs in Assemble-to-Order Systems, Operations Research, 46 (1998), 858-871.

[9] Hausman, W.H., Lee, H.L. and Zhang, A.X., Order Response Time Reliability in a Multi-Item Inventory System, European J. of Operational Research, 109 (1998), 646-659.

[10] Lee, H., Effective Inventory and Service Management Through Product and Process Redesign, Operations Research, 44 (1996), 151-159.

[11] Li, L., The Role of Inventory in Delivery-Time Competition, Management Science, 38 (1992), 182-197.

[12] Mahajan, S. and van Ryzin, G.J., Retail Inventories and Consumer Choice. In: Quantitative Models for Supply Chain Management, S. Tayur, R. Ganeshan and M. Magazine (eds.), Kluwer Academic Publishers, Norwell, 1999, 491-552.

[13] Ross, S.M., Stochastic Processes, 2nd ed., Wiley, New York, 1998.

[14] Song, J.S., Evaluation of Order-Based Backorders, Management Science, to appear.

[15] Song, J.S., On the Order Fill Rate in a Multi-Item, Base-Stock Inventory System, Operations Research, 46 (1998), 831-845.

[16] Song, J.S., Xu, S. and Liu, B., Order Fulfillment Performance Measures in an Assemble-to-Order System with Stochastic Leadtimes, Operations Research, 47 (1999), 131-149.

[17] Song, J.S. and Yao, D.D., Performance Analysis and Optimization in Assemble-to-Order Systems with Random Leadtimes, Operations Research, to appear.

[18] Swaminathan, J.M. and Tayur, S.R., Stochastic Programming Models for Managing Product Variety. In: Quantitative Models for Supply Chain Management, S. Tayur, R. Ganeshan and M. Magazine (eds.), Kluwer Academic Publishers, Norwell, 1999, 585-624.

[19] Wang, Y., Service Levels in Production-Inventory Networks: Bottlenecks, Tradeoffs, and Optimization, Ph.D. Dissertation, Columbia University, 1988.

[20] Zhang, A.X., Demand Fulfillment Rates in an Assemble-to-Order System with Multiple Products and Dependent Demands, Production and Operations Management, 6 (1997), 309-324.

[21] Zipkin, P., Foundations of Inventory Management, Irwin/McGraw-Hill, New York, 2000.


Figure 1: A configure-to-order system with three product families (components 1-12, grouped into shell, memory, motherboard, storage, software, and options, feeding the low-end, mid-end, and high-end desktop products).

Figure 2: Comparison between analytical and simulated fill rates (order fill rate vs. service target α3, with α1 = 0.92 and α2 = 0.95; simulated and analytical curves for the high-end, mid-end, and low-end segments).

Figure 3: Comparison between analytical and simulated fill rates (order fill rate vs. service target α1, with α2 = 0.95 and α3 = 0.98; simulated and analytical curves for the high-end, mid-end, and low-end segments).

Figure 4: Comparison between analytical and simulated fill rates (order fill rate vs. service target α2, with α1 = 0.92 and α3 = 0.98; simulated and analytical curves for the high-end, mid-end, and low-end segments).

Figure 5: Effect of risk pooling (expected inventory cost in $ vs. simulated order fill rate, with and without risk pooling).

Figure 6: Comparison between build-to-stock (\as-is") and assemble-to-order (\to-be") system.


Figure 7: Effect of improving forecast accuracy.


Figure 8: Effect of product variety on inventory.
