Using forecasted information from early returns of used products to set remanufacturing capacity

Suphalat Chittamvanich and Sarah M. Ryan*
Department of Industrial & Manufacturing Systems Engineering, 2019 Black Engineering Building, Iowa State University, Ames, IA 50011-2164

*Corresponding author: 515-294-4347 [email protected]

Abstract

We jointly analyze forecasting and capacity management of returned products to evaluate the benefit of information obtained from early returns. Using this information, the return time distribution parameters are estimated with confidence intervals by maximum likelihood with censored data. We formulate a single-period model for capacity planning to determine the optimal level of remanufacturing capacity.

Through numerical simulation, we study how the combined effects of revenue discounting and variability in the estimates of the return time distribution affect the capacity decision. If the decision can be delayed, the reduction in revenue discounting encourages building capacity. Variability in the parameter estimates obtained from highly censored information can lead to capacity decision errors, e.g., foregoing capacity that would lead to higher revenue or building capacity that is not needed. Cases representing corporate returns and individual consumer returns differ in both the direction of these errors and their frequency.

Keywords: Forecasting; Capacity planning; Remanufacturing; Maximum likelihood


1. Introduction

Remanufacturing, the process of restoring or reusing parts from returns as input for new products, is becoming more prevalent in many industries. Numerous studies have claimed that remanufacturing should be considered a value-added business process. Guide et al. (2000) pointed out that remanufacturing generates over $53 billion of total sales per year and suggested that there is a high potential for reuse of products in the consumer electronics market. Remanufacturing returned products not only reduces the amount of new material consumed but also compensates for the costs of taking back products. It is estimated that remanufacturing reduces the manufacturing cost of a new product by 40 to 65% (Ginsburg, 2001). In addition, market share can be improved by offering affordably priced products to more customers. However, it is well known that management of remanufacturing operations is made complex by the uncertainty in the timing and amount of returns (Fleischmann et al., 1997). There is a broad range of examples in numerous industries, e.g., reusable beverage containers, disposable cameras, toner cartridges, and PCs. Although there are many studies of return forecasting to reduce this uncertainty for a variety of products, the application to electronic goods has been limited.

Because it is difficult to forecast how many products will be returned in the future, additional resources must be available to mitigate some of the risk inherent in an irregular return stream. However, carrying too much capacity should be avoided because it creates unnecessary investment in unused capacity that potentially lowers profit. On the other hand, if we fail to have enough capacity, we lose the opportunity to earn profit from processing all the future returns. This research finds the optimal capacity level for remanufacturing by utilizing information from early returns.

Forecasting product returns involves monitoring current and previous sales along with early returns in order to determine future product availability for remanufacturing. Currently, due to rapid advances in technological innovation, the economic life of electronic products has been reduced considerably. The National Safety Council reported in 1999 that the average lifespan of a personal computer (PC), which was 4.5 years in 1992, would be reduced to only 2 years in 2005, and estimated that more than 315 million PCs would be obsolete by 2004. This prediction has been confirmed by Grenchus et al. (2002), who observed that "the useful life of a PC has dropped to between 2 and 3 years".

In the presence of lead times to add capacity, the remanufacturing capacity must be planned ahead of time to ensure that sufficient capacity will be available to extract value from the returns before they become obsolete. This is a variation of the capacity expansion problem, which consists of adding the right facilities at the right time so that the total cost of expansion is minimized (Luss, 1982). In comprehensive reviews of the major remanufacturing research (Fleischmann et al., 1997; Guide et al., 2000), the authors indicated that although there has been significant research development in remanufacturing, consideration of capacity management in the remanufacturing context has been limited.

To determine the optimal remanufacturing capacity, in this paper we use information from early returns to estimate future returns, which constitute a demand for remanufacturing capacity. Adopting the viewpoint of an original equipment manufacturer of electronic products, the goal is to maximize the expected profit from remanufacturing returned products, taking into account all relevant capacity costs. This research explores the effect of variability in the return time distribution and studies how decisions regarding building new capacity change based on the information available at the decision time. After describing the relation to previous research in the next section, we present the return and capacity expansion models in section 3 and methods for estimating parameters and computing capacity levels in section 4. In section 5, we discuss numerical examples. Finally, we summarize the results and describe future directions for this research.

2. Relevant research studies

This paper estimates the necessary parameters, along with their confidence intervals, for the return time distribution. Several methods to estimate future returns have appeared in the literature. The early work on return forecasting is by Goh and Varaprasad (1986), who used basic time series techniques and modeled the amount of returns of reusable containers as a proportion of present and all earlier returns. They applied the Box and Jenkins statistical approach to estimate the return proportion. Kelle and Silver (1989a, 1989b) utilized the estimate of the return proportion as described by Goh and Varaprasad (1986) to forecast the net demand and its variability based on a variety of available information. They compared the results to the maximum information case, which records the movements of individual items, i.e., when an item is issued and when it is returned. They also suggested that, in the case where individual item information is available, a substantial improvement in estimation could be achieved. We also consider different patterns of return that follow the product life cycle curve and develop a method for estimating the necessary parameters for such return distributions.

The idea of using the product life cycle curve together with time series forecasting techniques to estimate the future availability of returns was first introduced by Srivastava and Guide (1995). They proposed a method to forecast the future availability and material recovery rates from returns. They modeled a reverse relationship between product availability and material recovery rate; e.g., product availability increases as sales increase, but the longer a product is in service, the lower the material recovery will be. Hess and Mayhew (1997) considered the merchandise returns problem and offered both a split adjusted hazard model and a regression model with logit split to estimate the return rate. They explained that the split hazard model, commonly used in the measurement of reliability, can take into account information from returned and non-returned items, unlike the split regression model that uses only data from returned items, so that the split hazard model can explain not only the timing but also the probability of return. Using observations of actual returns of apparel, the results showed that the split hazard model is more robust and offers better estimation than the regression model. Marx-Gómez et al. (2002) introduced a fuzzy reasoning model to forecast the quantities and timing of photocopier returns. They generated return data by a simulation model. The fuzzy model forecasts the returns by considering relevant life-cycle data and other influencing factors together with a fuzzy rule base developed from prior expert knowledge.

Another study of product return forecasting is based on partially observed information. Toktay et al. (2000) studied the inventory procurement problem for producing single-use cameras and remanufacturing returned cameras. Using data obtained from Kodak, they modeled the return flow with a geometrically distributed lag between sales and returns and noted that it is right-censored (more detail about right-censoring is given in the next section). A Bayesian approach and the Expectation Maximization (EM) algorithm, a method for computing maximum likelihood estimates, were used to estimate the probability that a product will be returned and the probability that a sold item is returned in the next period given that it will be returned. In this paper, we use maximum likelihood estimation with partial observations and investigate the predictability performance of the estimated parameters.

Several studies have investigated the impacts of unobserved data, and we mention only one that is closely related to our work. Ding et al. (2002) studied the impact of unobserved demand on the optimal inventory policy in a newsvendor model.

That is, the demand is censored by the inventory level. The optimal inventory level was determined by using a Bayesian Markov decision process in each period. They showed that in early periods the impact of unobserved (censored) demand would lead to a higher optimal inventory level than fully observed demand. The reason is that maintaining a higher inventory (resulting in less censoring) yields better information about future demand, which facilitates better decisions in later periods. In this paper, product returns constitute a demand for remanufacturing capacity, and the demand is censored by the lead time to obtain remanufacturing capacity. Also, we investigate the impacts of censored data on the capacity decision for a variety of patterns of the sales time and the mean time to return.

Concerning remanufacturing capacity, Guide et al. (1997) developed a capacity planning method for remanufacturing operations that considers the material recovery rate and probabilistic routing and determines the amount of capacity needed at each workstation. Shih (2001) studied reverse logistics planning for electronic products in Taiwan. Using historical data, the author presented a model to determine the optimal capacity expansion plans of storage and disassembly facilities for different product take-back rates; a method for parameter estimation regarding the amount of future returns and relevant costs was also discussed. Recently, Aytekin and Savaskan (2004) studied the relationship between new product pricing and remanufacturing decisions over the life cycle of either durable or non-durable products. In their model, the remanufacturing capacity has no limit, so the amount remanufactured is bounded only by the number of items returned at any time. Finally, Franke et al. (2005) developed a model for mobile phone remanufacturing to determine the required capacities for remanufacturing operations. They used information about uncertainties in the amount and condition of returns, as well as combinatorial optimization, to determine the capacities of work stations.

In this research, we develop a single-period model for capacity planning that determines the optimal amount of expansion for different lead times to obtain remanufacturing capacity. In addition, we explore the use of the information available from early returns to help forecast later returns. The difference between this research and past work is that we focus jointly on the forecasting and capacity management of returned products. This enables evaluation of the benefit of information from early returns in determining the optimal remanufacturing capacity for a variety of patterns of the sales time and the mean time to return.

In addition, we consider the effect of variability in the return time distribution and study how the variation in the estimates based on partially observed information affects the capacity decision.

3. Model Formulation

We present a simple model for planning remanufacturing capacity. Although stylized, the model captures the key features of a lead time to obtain remanufacturing capacity, perishability of the remanufactured product caused by impending obsolescence, and the ability to use early return times to help forecast later returns.

3.1 Model assumptions and notation

Let τ0 be the time at which the product becomes obsolete. The capacity decision is made at time τ1 < τ0. After the capacity decision is made, the capacity will be available at time τ0 and all the products returned up to time τ0 will be remanufactured then. Thus, the lead time to install capacity is τ0 − τ1. No additional returns will be remanufactured after time τ0 because they are considered worthless. Expansion costs consist of a fixed cost K and a unit variable cost c. Processing the returns results in a net revenue of V per unit. The decision variable is the amount of capacity, denoted by x, which represents the maximum number of returns that can be processed instantaneously at time τ0. The goal is to determine the value of x that maximizes the expected profit. Because the decision must be made before all the returns are observed, we wish to use the available information to balance the potential revenue from remanufacturing against the risk of excessive capacity.

The timeline of events in the model is as follows: (a) obtain a forecast at time τ1 of the number of returns up to time τ0, (b) determine the optimal capacity, and (c) expand capacity to process the future returns at time τ0. This problem is similar to the newsvendor problem, in which there is only a single opportunity to decide the amount of product to purchase while taking into account uncertainty in the product demand (Gallego and Moon, 1993). Let Nr(τ0) be the number of items returned from the time of product introduction to τ0. At time τ1, we wish to maximize the expected profit at time τ0 discounted by the one-period discount factor γ. The optimal expected profit is given by:

$$r_{\tau_0,\tau_1} = \max_{x}\left\{ V\gamma^{-(\tau_0-\tau_1)}\, E_{N_r}\!\left[\min\left(N_r(\tau_0),\,x\right)\right] - K\,\delta(x) - cx \right\}, \qquad (1)$$

where

$$\delta(x) = \begin{cases} 0 & \text{if } x = 0,\\ 1 & \text{if } x > 0. \end{cases}$$

We assume the following:

1. All items returned can be remanufactured profitably, i.e., Vγ^(−(τ0−τ1)) > c. In reality, not all items can be remanufactured and sold; some may be dismantled for parts for new products. The net revenue V represents an average value across all returned products. Considering different amounts of net revenue based on the different conditions of the returns is an interesting area for research, but it is beyond the scope of this paper.
2. Every item is eventually returned. Typically, it is unusual for all items to be returned. However, under extended producer responsibility, an item will eventually be returned to the electronics manufacturer, because manufacturers are responsible for all end-of-life products.
3. No remanufacturing capacity exists at the beginning of the study horizon.
4. No net revenue is received from returns before τ0 in excess of capacity or from returns after τ0. This assumption corresponds to assigning responsibility for excess and late returns to a third party, with negligible net profit or cost.
5. The amount of capacity can be determined only once. This single-decision model provides a first cut at incorporating the information available at τ1 in the forecast of future returns, and it allows a simple computation of the optimal solution. Multi-period capacity planning is a promising area for research that we leave for future work.
6. The total number of items to be sold is known. That is, we capture a situation of planning for capacity investment based on the best or most likely sales projection. This assumption correspondingly provides an optimistic or most likely estimate of the returns that can occur and provides some analytical tractability.

Given a distribution of Nr(τ0), it is simple to solve equation (1), but there are potential risks associated with choosing the optimal capacity based on an incorrect distribution of Nr(τ0). In particular, using an incorrect distribution can lead to foregoing capacity expansion when it is necessary or building capacity when it is not required. If we fail to have enough capacity, we lose an opportunity to earn revenue from processing all the future returns before product obsolescence.

On the other hand, having too much capacity also creates a risk because not all the capacity will be utilized. Carrying excess capacity should be avoided because it creates unnecessary investment in unused capacity that lowers profit. Furthermore, the capacity decision is also influenced by the discount factor. If the lead time (the interval between τ1 and τ0) can be decreased, the revenue discounting lessens, so the expected discounted revenue increases relative to the capacity cost; therefore, the optimal decision is to build more capacity. To the remanufacturing manager, the value of τ0 is fixed. However, by careful planning, choice of technology, and training a flexible workforce, it may be possible to delay the decision time τ1 relative to τ0 by reducing the capacity lead time. In addition to decreasing revenue discounting, lead time reduction influences decisions by allowing them to be made with better information. Analyzing this effect requires a probability model for returns.

3.2 Product return model

We consider two random variables: (1) the time until an individual unit, such as an individual PC, is sold, measured from when the product, e.g., Pentium 4, is introduced to the market, and (2) the time from the sale to the return of each unit. The gamma distribution is chosen to represent both random variables because of its flexibility to fit various distributional shapes and its reproductive property. The gamma probability density function with shape parameter α and scale parameter β is

$$f(y \mid \alpha, \beta) = \frac{1}{\beta^{\alpha}\,\Gamma(\alpha)}\, y^{\alpha-1} \exp\!\left(\frac{-y}{\beta}\right), \quad 0 < y, \qquad (2)$$

where α > 0, β > 0, and Γ(·) is the gamma function defined as

$$\Gamma(z) = \int_0^{\infty} u^{z-1} e^{-u}\, du, \quad 0 < z. \qquad (3)$$

The mean and variance are μ = αβ and σ² = αβ², respectively. We measure the variability in the distribution by the coefficient of variation (C.V.), which is defined as the ratio of the standard deviation to the mean. For a gamma distribution, the C.V. equals $1/\sqrt{\alpha}$. In addition, the gamma distribution has a reproductive property: the sum of two independent gamma distributed random variables with possibly different shape parameters (α′, α″) but with a common scale parameter β also has a gamma distribution with the same value of β and with shape parameter α = α′ + α″ (Johnson et al., 1994, p. 340).

The distribution of electronic product sales over the life of a product has been observed to follow a bell-shaped curve, which is characterized by five phases: introduction, growth, maturity, decline, and obsolescence (Bayus, 1998; Bollen, 1999; Solomon et al., 2000; Tibben-Lembke, 2002). We model the distribution of the time to sale of each item as a gamma distribution with a large shape parameter, which approximates a bell shape. However, unlike a normal random variable, a gamma random variable can take on only nonnegative values, which makes it more appropriate for modeling uncertain lengths of time. Several researchers have applied different distributions to represent the time interval between the sale and the return of an item. De Brito and Dekker (2003) studied the distribution of the returns by analyzing real data and suggested that the time to return of an item could be modeled by a negative exponential distribution. Toktay et al. (2000) explored the data on returns of Kodak single-use cameras and modeled the time to return with a discrete-time distributed-lag model using geometric and Pascal distributions. We model the distribution of the time from sale to return of each item as a gamma distribution because of its flexibility. For instance, the negative exponential distribution is a special case of the gamma distribution with α = 1.

Let i be the index number of products (i = 1, ..., N), where N is the total number of units sold. Let Y1,i be the time at which unit i is sold, measured from the time of product introduction to the market, and Y2,i be the time between sale and return, i.e., the time the unit spends with the original customer. The return time of unit i is denoted by Ti, where

Ti = Y1,i + Y2,i . We assume that {Y1,i } are independent and identically distributed (iid ) random variables having a gamma distribution with parameters α 1 and β , and {Y2,i } are iid gamma distributed with parameter α 2 and with the same value of the scale parameter. We assume that Y2,i is independent of Y1,i for each i . The reproductive property of the gamma distribution with

common parameter β implies that {Ti : i = 1,..., N } are iid gamma random variables with parameters α 3 = α 1 + α 2 and β .
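As an informal check of this reproductive property, the sketch below (illustrative Python only, not part of the original study; the parameter values echo case 1 of Table 1) simulates sale and dwell times and compares the resulting return times Ti = Y1,i + Y2,i with a gamma(α1 + α2, β) distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha1, alpha2, beta = 4.0, 4.0, 25.0   # illustrative values (case 1 in Table 1)

y1 = rng.gamma(shape=alpha1, scale=beta, size=100_000)  # time to sale, Y_{1,i}
y2 = rng.gamma(shape=alpha2, scale=beta, size=100_000)  # time from sale to return, Y_{2,i}
t = y1 + y2                                             # return time T_i

# Compare the simulated sums with the gamma(alpha1 + alpha2, beta) distribution.
fitted = stats.gamma(a=alpha1 + alpha2, scale=beta)
print("simulated mean/var   :", t.mean(), t.var())
print("gamma(a1+a2) mean/var:", fitted.mean(), fitted.var())
print("KS statistic         :", stats.kstest(t, fitted.cdf).statistic)
```

The Kolmogorov-Smirnov statistic should be close to zero, consistent with the closed-form result that the sum is again gamma with shape α1 + α2 and scale β.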

Capacity decisions are influenced by the expected proportion of used items returned before time τ0. We estimate this quantity using the probability that unit i will be returned before time τ0, given by

$$R(\alpha_3, \beta) \equiv \Pr(T_i < \tau_0) = \int_0^{\tau_0} \frac{1}{\beta^{\alpha_3}\,\Gamma(\alpha_3)}\, t^{\alpha_3-1} e^{-t/\beta}\, dt. \qquad (4)$$

Assuming a known fixed number, N, of items sold, the number of items that will be returned before product obsolescence follows a binomial distribution. With R ≡ R(α3, β), the probability distribution of the number of returns up to time τ0 is

$$\Pr[N_r(\tau_0) = n] = \binom{N}{n} R^{n} (1-R)^{N-n}, \quad n = 0, 1, \ldots, N. \qquad (5)$$

Note that this distribution is unknown and must be estimated at time τ1.

3.3 Optimal capacity level
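To make (4) and (5) concrete, the short sketch below (Python with SciPy, used here purely for illustration; the parameter values are the case-1 figures from Table 1 and Section 5) evaluates R via the regularized incomplete gamma function and forms the binomial distribution of the number of returns.

```python
import numpy as np
from scipy import special, stats

alpha3, beta = 8.0, 25.0   # illustrative return-time parameters (case 1)
tau0, N = 300.0, 750       # obsolescence time and number of units sold

# Eq. (4): R = Pr(T_i < tau0) is the regularized lower incomplete gamma function.
R = special.gammainc(alpha3, tau0 / beta)

# Eq. (5): the number of returns by tau0 is Binomial(N, R).
n_returns = stats.binom(N, R)
print(f"R = {R:.3f}, E[N_r(tau0)] = {n_returns.mean():.1f}")
print("Pr[N_r(tau0) <= 650] =", n_returns.cdf(650))
```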

Using the information about the return time distribution obtained from early returns, the goal is to determine the capacity level that maximizes the expected profit from processing the returns before product obsolescence. Let F(u) be the cumulative distribution function (CDF) of Nr(τ0), where F(0) = 0. For analytical convenience, we approximate Nr(τ0) as a continuous random variable; however, the results are similar if Nr(τ0) is discrete. Excluding the fixed cost in (1), we can express the expected marginal profit at time τ0 for a given capacity level x as

$$H(x) = b\left[\int_0^{x} u f(u)\, du + \int_x^{\infty} x f(u)\, du\right] - cx, \qquad (6)$$

where b = Vγ^(−(τ0−τ1)). Differentiating (6) with respect to x and applying Leibniz's rule, we get

$$\frac{dH(x)}{dx} = b\left(1 - F(x)\right) - c \qquad (7)$$

and

$$\frac{d^2 H(x)}{dx^2} = -b f(x) \le 0. \qquad (8)$$

From (7) and (8), we see that, since H(x) is concave, the capacity level that maximizes marginal profit corresponds to a critical fractile:

$$x^{*} = F^{-1}\!\left(1 - \frac{c}{b}\right). \qquad (9)$$

We note that H(0) = 0 and lim_{x→∞} H(x) = −∞. In addition, H′(0) = b − c and H′(∞) = −c. Thus, the desired capacity level is positive and finite. Substituting (9) in (6), we obtain

$$H(x^{*}) = b \int_0^{F^{-1}(1-c/b)} u f(u)\, du. \qquad (10)$$

Equivalently,

$$H(x^{*}) = b \int_0^{F^{-1}(1-c/b)} \left(1 - \frac{c}{b} - F(x)\right) dx. \qquad (11)$$

The derivation of (11) is presented in Appendix A. Considering (10) together with the fixed cost, we can categorize the solution into two cases:

Case 1: Expand capacity to x*, if H(x*) − K > 0.
Case 2: Do not expand capacity, if H(x*) − K ≤ 0.
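A minimal sketch of this decision rule for the binomial return distribution (5) is given below (illustrative Python under the stated assumptions, not the authors' code; the cost figures echo Section 5). The discrete critical fractile is taken as the smallest x satisfying F(x) ≥ 1 − c/b, and the fixed cost K is then checked against H(x*).

```python
import numpy as np
from scipy import stats

def capacity_decision(N, R, V, c, K, gamma, tau0, tau1):
    """Newsvendor-style capacity choice when N_r(tau0) ~ Binomial(N, R)."""
    b = V * gamma ** (-(tau0 - tau1))            # discounted unit revenue, Eq. (6)
    dist = stats.binom(N, R)
    x_grid = np.arange(N + 1)
    # Smallest x with F(x) >= 1 - c/b (critical fractile, discrete case).
    x_star = int(x_grid[dist.cdf(x_grid) >= 1 - c / b][0])
    # Expected marginal profit H(x*) = b * E[min(N_r, x*)] - c * x*.
    expected_processed = np.sum(np.minimum(x_grid, x_star) * dist.pmf(x_grid))
    H = b * expected_processed - c * x_star
    # Case 1 vs. Case 2: expand only if H(x*) exceeds the fixed cost K.
    return (x_star, H) if H - K > 0 else (0, 0.0)

# Illustrative numbers in the spirit of the Section 5 parameters.
print(capacity_decision(N=750, R=0.91, V=100, c=82.75, K=3000,
                        gamma=1.0009387, tau0=300, tau1=200))
```

Running the function with a stochastically larger R (holding the costs fixed) yields a larger x*, consistent with the ordering argument that follows.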

Note that, in case expanding capacity shows a negative profit, we can choose not to invest in remanufacturing capacity (although legislation might require subcontracting to a third-party remanufacturer). In addition, since Nr(τ0) follows the binomial distribution, its CDF is a discontinuous step function. Thus, for the discrete case let x* be the smallest value of x that satisfies F(x) ≥ 1 − c/b.

In view of the potential risk associated with choosing the optimal capacity based on an incorrect distribution of Nr(τ0), we use stochastic ordering to analyze the impacts of different distributions for the number of returns on the capacity decision. Let V1 and V2 be two continuous random variables with respective CDFs F1(v) and F2(v). Suppose F1(v) and F2(v) are strictly increasing on (0, ∞). If F1(v) ≤ F2(v) for all v, then we say that V1 is stochastically larger than V2, denoted by V1 ≥st V2 (Shaked and Shanthikumar, 1994). Suppose that F1(v) ≤ F2(v) for all v and define x1* = F1⁻¹(1 − c/b) and x2* = F2⁻¹(1 − c/b). Then we have

$$F_2(x_1^{*}) \ge F_1(x_1^{*}) = F_2(x_2^{*}), \qquad (12)$$

and because F2 is a strictly increasing function, we can conclude that x1* ≥ x2*. Therefore, we have H(x1*) − K ≥ H(x2*) − K. As a result, a stochastically larger number of returns leads to a larger optimal capacity level and expected profit.

This effect has implications for capacity decisions made with different lead times. At decision point τ1, if the expected profit is greater than the fixed cost (case 1 solution), we choose to install capacity equal to the critical fractile (9). The stochastic ordering property may hold for the estimated distributions of Nr(τ0) at different τ1 because, for fixed N and n, the binomial CDF is a nonincreasing function of R (Rao, 1952). Let R̂j be the estimated return probability and x*(τ1j) the optimal capacity level derived from this estimate at a possible decision time τ1j, j = 1, 2. Then if R̂1 > R̂2, stochastic ordering implies that x*(τ11) ≥ x*(τ12). Reducing the lead time influences the capacity decision by decreasing the discounting of future revenues and by increasing the available information with which to estimate the return probability. We can predict the effect of a changing return probability estimate on the optimal decision, but we cannot predict how the estimate may change with additional information. In addition, since the effect of variability in parameter estimates cannot be easily predicted, those effects are studied using numerical simulation in section 5.

4. Parameter estimation and computation

We forecast the parameters of the product return distribution using maximum likelihood estimation (MLE) for the gamma distribution with censored observations and extend this estimation framework to determine confidence intervals for estimated quantities such as the shape and scale parameters and a cumulative probability. Recall our assumption that all items will be returned eventually. In each time period, the return data are considered incomplete observations, in which some individual return times may not be observed. For these individuals, only a portion of the return time is known, and its remainder is observed only to exceed a certain time value (Cox et al., 1984). Suppose that S, a censoring time, is an observation period such that observation on an individual item ceases at S if its return has not occurred by then. Let Wij be the observed return time of unit i at censoring time Sj, where j is the index for the censoring time period. The observed return time is defined by

$$W_{ij} = \min(T_i, S_j). \qquad (13)$$

If Ti ≤ Sj, item i is an uncensored datum, and if Ti > Sj, item i is a censored datum. Let Dj ≡ {i : Ti ≤ Sj} be the set of indices for uncensored data and Cj ≡ {i : Ti > Sj} be the set of indices for censored data at time Sj. We refer to these observations as Type I censored at Sj; moreover, this type of censoring results in what are also called "right-censored" data, which means that if the event of interest lies to the right of the censoring time, it is excluded from the analysis (Lawless, 1982).

4.1 Maximum likelihood function

The parameters of the gamma distribution with censored data are estimated using the maximum likelihood method. Let ti be the observed value of Wij for a given Sj. The likelihood function for a censored sample is given as (Lawless, 1982):

$$l(\alpha_3, \beta) = \left[\prod_{i \in D} \frac{1}{\beta\,\Gamma(\alpha_3)} \left(\frac{T_i}{\beta}\right)^{\alpha_3-1} \exp\!\left(\frac{-T_i}{\beta}\right)\right] \left[\prod_{i \in C} Q\!\left(\alpha_3, \frac{S_i}{\beta}\right)\right], \qquad (14)$$

where

$$Q\!\left(\alpha, \frac{S_i}{\beta}\right) = \frac{1}{\Gamma(\alpha)} \int_{S_i/\beta}^{\infty} u^{\alpha-1} e^{-u}\, du.$$

For computational convenience, it is more common to work with the log-likelihood function, the logarithm of (14), rather than the likelihood function itself. For a set of observed product return times such that |D| = r and |C| = Ns − r, where Ns denotes the number of items sold so far and r denotes the total number of products returned, the log-likelihood may be written in terms of

$$\bar{t} = \frac{1}{r}\sum_{i \in D} t_i \quad \text{and} \quad \tilde{t} = \left(\prod_{i \in D} t_i\right)^{1/r}$$

as

$$L(\alpha_3, \beta) = r\left[(\alpha_3 - 1)\log(\tilde{t}\,) - \alpha_3 \log(\beta) - \log\Gamma(\alpha_3) - \frac{\bar{t}}{\beta}\right] + \sum_{i \in C} \log\!\left[Q\!\left(\alpha_3, \frac{t_i}{\beta}\right)\right]. \qquad (15)$$

4.2 Parameter, interval and probability estimation

MLE is employed to derive the estimators of the scale and shape parameters. For our model these values (α̂3, β̂) are not available in analytical form but must be found numerically. We convert the likelihood maximization to an equivalent minimization problem and use the NMinimize function in Mathematica® (Wolfram, 2003) to identify the parameter values that minimize it.
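For illustration, the same numerical maximization can be sketched in Python (SciPy standing in for Mathematica's NMinimize; the simulated data and starting values below are hypothetical). The code works with the per-unit form of (14)-(15), in which each censored unit contributes log Q(α3, S/β), the regularized upper incomplete gamma function.

```python
import numpy as np
from scipy import optimize, special

def neg_log_likelihood(params, returned, censored_at):
    """Negative of the censored-gamma log-likelihood (15).

    returned    : observed return times t_i for units returned by S_j (set D)
    censored_at : censoring times S_j for units not yet returned (set C)
    """
    alpha3, beta = params
    if alpha3 <= 0 or beta <= 0:
        return np.inf
    ll = np.sum((alpha3 - 1) * np.log(returned) - alpha3 * np.log(beta)
                - special.gammaln(alpha3) - returned / beta)
    # Q(alpha3, S/beta): regularized upper incomplete gamma (gamma survival function).
    ll += np.sum(np.log(special.gammaincc(alpha3, censored_at / beta)))
    return -ll

# Hypothetical censored sample, generated for illustration only.
rng = np.random.default_rng(1)
true_alpha3, true_beta, S = 8.0, 25.0, 200.0
t = rng.gamma(true_alpha3, true_beta, size=750)
returned, censored_at = t[t <= S], np.full((t > S).sum(), S)

res = optimize.minimize(neg_log_likelihood, x0=[5.0, 30.0],
                        args=(returned, censored_at), method="Nelder-Mead")
alpha3_hat, beta_hat = res.x
print(alpha3_hat, beta_hat)
```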

The technical conditions necessary for maximum likelihood estimates to be asymptotically normal (AN) (Serfling, 1980) are met for our model (Lawless, 1982, pp. 525-526). Thus, inference may be based on the fact that

$$(\hat{\alpha}_3, \hat{\beta}) \ \text{is} \ AN\!\left((\alpha_3, \beta),\ I_{tot}^{-1}(\alpha_3, \beta)\right), \qquad (16)$$

where I_tot is the Fisher observed information in a random sample (see Appendix B). The Fisher information used to find the estimation intervals of the parameters is given by

$$I_{tot}(\alpha_3, \beta) = -\begin{bmatrix} \frac{\partial^2 L(\alpha_3,\beta)}{\partial \alpha_3^2} & \frac{\partial^2 L(\alpha_3,\beta)}{\partial \alpha_3\, \partial \beta} \\ \frac{\partial^2 L(\alpha_3,\beta)}{\partial \alpha_3\, \partial \beta} & \frac{\partial^2 L(\alpha_3,\beta)}{\partial \beta^2} \end{bmatrix}. \qquad (17)$$

Let

$$I_{tot}^{-1}(\alpha_3, \beta) = \begin{bmatrix} i^{1,1}(\alpha_3, \beta) & i^{1,2}(\alpha_3, \beta) \\ i^{1,2}(\alpha_3, \beta) & i^{2,2}(\alpha_3, \beta) \end{bmatrix}. \qquad (18)$$

We estimate it as

$$I_{tot}^{-1}(\hat{\alpha}_3, \hat{\beta}) = \begin{bmatrix} i^{1,1}(\hat{\alpha}_3, \hat{\beta}) & i^{1,2}(\hat{\alpha}_3, \hat{\beta}) \\ i^{1,2}(\hat{\alpha}_3, \hat{\beta}) & i^{2,2}(\hat{\alpha}_3, \hat{\beta}) \end{bmatrix}, \qquad (19)$$

where

$$\mathrm{Var}(\hat{\alpha}_3) = i^{1,1}(\hat{\alpha}_3, \hat{\beta}) \qquad (20)$$

and

$$\mathrm{Var}(\hat{\beta}) = i^{2,2}(\hat{\alpha}_3, \hat{\beta}). \qquad (21)$$

We estimate R(α3, β) with R̂, where

$$\hat{R} = \int_0^{\tau_0} \frac{1}{\hat{\beta}^{\hat{\alpha}_3}\,\Gamma(\hat{\alpha}_3)}\, t^{\hat{\alpha}_3-1} e^{-t/\hat{\beta}}\, dt. \qquad (22)$$

By the invariance property, R̂(α̂3, β̂) is an MLE for R(α3, β). Let

$$G = \left[\left.\frac{\partial R}{\partial \alpha_3}\right|_{\alpha_3=\hat{\alpha}_3,\,\beta=\hat{\beta}} \quad \left.\frac{\partial R}{\partial \beta}\right|_{\alpha_3=\hat{\alpha}_3,\,\beta=\hat{\beta}}\right].$$

Then, according to the Delta method (Appendix B), R̂(α̂3, β̂) is AN(R(α3, β), G I_tot⁻¹ Gᵀ), where

$$\mathrm{Var}(\hat{R}) = G\, I_{tot}^{-1}\, G^{T}. \qquad (23)$$
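The observed information (17) and the delta-method variance (23) can likewise be approximated numerically. The sketch below uses simple finite differences and assumes the `neg_log_likelihood`, `returned`, `censored_at`, `alpha3_hat`, and `beta_hat` objects from the earlier sketch, with τ0 = 300; it is illustrative only, not the authors' implementation.

```python
import numpy as np
from scipy import special

def observed_information(nll, params, args, h=1e-4):
    """Numerical Hessian of the negative log-likelihood, i.e. I_tot in Eq. (17)."""
    p = len(params)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            e_i, e_j = np.eye(p)[i] * h, np.eye(p)[j] * h
            H[i, j] = (nll(params + e_i + e_j, *args) - nll(params + e_i - e_j, *args)
                       - nll(params - e_i + e_j, *args) + nll(params - e_i - e_j, *args)) / (4 * h * h)
    return H   # Hessian of -L equals the observed information matrix

def R_hat(alpha3, beta, tau0=300.0):
    # Eq. (22): estimated probability of return before obsolescence.
    return special.gammainc(alpha3, tau0 / beta)

theta_hat = np.array([alpha3_hat, beta_hat])
I_tot = observed_information(neg_log_likelihood, theta_hat, (returned, censored_at))

# Gradient G of R with respect to (alpha3, beta), again by finite differences.
h = 1e-5
G = np.array([(R_hat(alpha3_hat + h, beta_hat) - R_hat(alpha3_hat - h, beta_hat)) / (2 * h),
              (R_hat(alpha3_hat, beta_hat + h) - R_hat(alpha3_hat, beta_hat - h)) / (2 * h)])

var_R = G @ np.linalg.inv(I_tot) @ G   # Eq. (23)
print(R_hat(alpha3_hat, beta_hat), np.sqrt(var_R))
```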


5. Numerical simulation results

We consider an example based on data collected on the useful life of personal computers (Grenchus et al., 2002). Six cases were constructed to represent different product sales characteristics and varying lengths of time the products are kept by customers. We assumed the expected time until a unit is sold equals 100 time units (weeks). To describe the sales patterns of three different types of products, we used C.V.s of 0.5, 0.35, and 0.25 for the time to sale distribution. The scale parameters were chosen correspondingly to equal 25, 12.5, and 6.25; therefore, the shape parameters equal 4, 8, and 16, respectively. The various intensities of variability imply several types of sales patterns: low variability describes a product that is slow to gain popularity at introduction, then sells fast, but whose acceptance diminishes shortly after the peak in sales; high variability implies a product that achieves a high sales volume soon after it is introduced to the market and whose sales stay at a steady high level for a long time before they start to decrease. For each time to sale distribution, we formulated two time to return distributions, representing corporate and individual customers, respectively. We fixed the scale parameter to match the time to sale distribution and set the shape parameters to produce a mean time to return of 100 weeks for corporate use and 200 weeks for consumer use (Grenchus et al., 2002). Table 1 summarizes the parameters of the six cases. The probability density functions g(t; α3, β) of the return times Ti are shown in Fig. 1, along with the time τ0 ≡ 300 at which products were assumed to become obsolete.

These six cases were used in a simulation to illustrate the effect of different patterns of the sales distribution and of the mean time to return from different return origins on the predictability of returns, and to show how the capacity decision depends on the decision time. The results are averages over 100 replications, each of which represents N = 750 items. To focus on early returns, the censoring times were chosen as S1 = 125, S2 = 150, ..., S8 = 300. Because of the different mean times to return, "early" returns for cases 1-3 and cases 4-6 are considered to occur at times τ1 ∈ {S1, S2, S3, S4} and τ1 ∈ {S4, S5, S6, S7}, respectively. The simulations were conducted in Mathematica® (Wolfram, 2003).
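As a rough sketch of this setup (illustrative Python, not the original Mathematica code), the fragment below draws one replication of case 1 and reports, for each candidate censoring time, the fraction of the 750 units not yet returned, which loosely corresponds to the D column of Tables 2 and 3.

```python
import numpy as np

rng = np.random.default_rng(2)
N, tau0 = 750, 300
alpha1, alpha2, beta = 4.0, 4.0, 25.0          # case 1 parameters from Table 1
censoring_times = np.arange(125, 325, 25)       # S_1 = 125, ..., S_8 = 300

sale_times = rng.gamma(alpha1, beta, size=N)    # Y_{1,i}
dwell_times = rng.gamma(alpha2, beta, size=N)   # Y_{2,i}
return_times = sale_times + dwell_times         # T_i

for S in censoring_times:
    observed = return_times[return_times <= S]          # uncensored set D at time S
    frac_censored = 1 - observed.size / N
    print(f"S = {S:3d}: {frac_censored:.0%} censored, {observed.size} returns observed")
```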

Table 1. The parameters for the lifetime distributions

Case | α1 | C.V. of Y1,i | μ1  | α2 | C.V. of Y2,i | μ2  | α3 | C.V. of Ti | μ3  | β
1    | 4  | 0.500        | 100 | 4  | 0.500        | 100 | 8  | 0.354      | 200 | 25
2    | 8  | 0.354        | 100 | 8  | 0.354        | 100 | 16 | 0.250      | 200 | 12.5
3    | 16 | 0.250        | 100 | 16 | 0.250        | 100 | 32 | 0.177      | 200 | 6.25
4    | 4  | 0.500        | 100 | 8  | 0.354        | 200 | 12 | 0.289      | 300 | 25
5    | 8  | 0.354        | 100 | 16 | 0.250        | 200 | 24 | 0.204      | 300 | 12.5
6    | 16 | 0.250        | 100 | 32 | 0.177        | 200 | 48 | 0.144      | 300 | 6.25

Fig. 1. Probability density functions of return time distribution

5.1 Capacity expansion

To explore the effects of variability in lifetime data on the capacity decision, the cost and revenue parameters were chosen as follows: γ = 1.0009387 per week (an annual interest rate of 5%), V = 100, c = 82.75, K = 3000, and τ0 = 300. For each value of the decision (and censoring) time, Tables 2 and 3 show: γ(τ1) = the discount factor, where γ(τ1) = γ^(−(τ0−τ1)); D = the average (over replications) percentage of censored observations at time τ1; R = the probability that a unit will be returned by τ0 using the true parameters; R̄ = the average probability estimated at τ1 that a unit will be returned by τ0; and the number of runs in which building (x > 0) or not building (x = 0) new capacity was found to be optimal using the estimated return probability. In addition, 95% confidence intervals (CI) for the optimal capacity levels (x*) are given for the case where x > 0.

The results show how the revenue discounting and the variability in the estimates of R combine to affect the capacity decision. From Section 3.3, we know that if the lead time decreases, the expected discounted revenue increases relative to the capacity costs; therefore, the optimal decision tends toward building capacity even if R is small. Large (estimated) values of R also encourage building capacity. However, we cannot predict how additional information at a later decision time will change the estimated value of R, other than to improve its accuracy and precision.
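As a check on the discount column of Tables 2 and 3, the factors γ(τ1) = γ^(−(τ0−τ1)) can be reproduced directly (illustrative Python):

```python
gamma, tau0 = 1.0009387, 300
for tau1 in (125, 150, 175, 200, 225, 250, 275):
    print(tau1, round(gamma ** -(tau0 - tau1), 3))
# 125 0.849, 150 0.869, 175 0.889, 200 0.91, 225 0.932, 250 0.954, 275 0.977
```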

The top margins of Tables 2 and 3 also show the true value of R, which represents perfect information. Under perfect information, it is profitable to build new capacity at time 175 for cases 1 and 2 and at time 150 for case 3. For cases 4-6, by time 200, building capacity is preferred. In Tables 2 and 3, decision errors correspond to replications whose build/no-build choice differs from the perfect-information decision just described. We observe the following:

- As expected, for larger values of R (moving to the right in each table), it is optimal to build capacity at earlier decision times.
- As also expected, the reduction in revenue discounting (moving down the column for each case) shifts the optimal decision from not investing in capacity toward building capacity.
- Considering the combined effect of revenue discounting and the trend in R̂, the revenue discounting appears to dominate the effect of R̂. That is, the reduction in revenue discounting influences cases 1, 2, and 4-6 to build capacity in more instances, even though R̄ decreases over time and even though the values of R in cases 4-6 are little more than half of those in cases 1-3.
- For corporate returns (cases 1-3), although the effect of revenue discounting is pronounced, the variability in R̂ causes errors in the direction of building capacity when it is not profitable to do so. On the other hand, for individual returns (cases 4-6), even though the revenue discounting is less, the effect of variability in R̂ leads to foregoing capacity expansion when it would be profitable.
- Overall, there are fewer decision errors for corporate returns than for individual returns. One explanation is that the decisions made in the former case were obtained from less heavily censored observations than in the latter, so that R̂ could be obtained with less variability.
- Considering different decision times within a case, we observe that larger R̄ mostly resulted in larger x*, corresponding to the stochastic ordering discussed in section 3.3. An exception occurred only in case 3, in which R̄ slightly increased from 0.991 at time 150 to 0.993 at time 175 while x* decreased from 743 to 741. One explanation is that the calculation of x* at time 150 was obtained from the 87 runs choosing to build capacity, but R̄ was taken from all 100 runs, including the runs with lower R̂ values in which capacity was not added.

The variability in the estimates of R leads to mixed decisions, e.g., both decisions appear in case 2 at decision time 150. The variation in the parameter estimates will be explored in terms of the accuracy and precision of the estimates in the next two subsections.

Table 2. The discount factor at different decision times, D, R̄, the number of runs with each decision, and the 95% CI for x*, for cases 1-3

Case 1 (R = 0.910):
τ1  | γ(τ1) | D   | R̄     | x = 0 | x > 0 | 95% CI for x*
125 | 0.849 | 87% | 0.955 | 100   | 0     | 0
150 | 0.869 | 74% | 0.946 | 100   | 0     | 0
175 | 0.889 | 60% | 0.934 | 0     | 100   | 690 ± 2.53
200 | 0.910 | 45% | 0.925 | 0     | 100   | 683 ± 1.86

Case 2 (R = 0.966):
τ1  | γ(τ1) | D   | R̄     | x = 0 | x > 0 | 95% CI for x*
125 | 0.849 | 95% | 0.976 | 100   | 0     | 0
150 | 0.869 | 84% | 0.971 | 67    | 33    | 734 ± 1.21
175 | 0.889 | 67% | 0.969 | 0     | 100   | 719 ± 1.56
200 | 0.910 | 47% | 0.967 | 0     | 100   | 718 ± 1.15

Case 3 (R = 0.994):
τ1  | γ(τ1) | D   | R̄     | x = 0 | x > 0 | 95% CI for x*
125 | 0.849 | 99% | 0.929 | 100   | 0     | 0
150 | 0.869 | 93% | 0.991 | 13    | 87    | 743 ± 0.94
175 | 0.889 | 75% | 0.993 | 0     | 100   | 741 ± 0.76
200 | 0.910 | 48% | 0.994 | 0     | 100   | 742 ± 0.42

Table 3. The discount factor at different decision times, D, R̄, the number of runs with each decision, and the 95% CI for x*, for cases 4-6

Case 4 (R = 0.538):
τ1  | γ(τ1) | D   | R̄     | x = 0 | x > 0 | 95% CI for x*
200 | 0.910 | 89% | 0.567 | 19    | 81    | 421 ± 7.06
225 | 0.932 | 81% | 0.543 | 0     | 100   | 391 ± 5.97
250 | 0.954 | 70% | 0.545 | 0     | 100   | 394 ± 3.43
275 | 0.977 | 58% | 0.539 | 0     | 100   | 390 ± 3.07

Case 5 (R = 0.527):
τ1  | γ(τ1) | D   | R̄     | x = 0 | x > 0 | 95% CI for x*
200 | 0.910 | 96% | 0.541 | 42    | 58    | 449 ± 16.69
225 | 0.932 | 90% | 0.538 | 1     | 99    | 390 ± 10.74
250 | 0.954 | 79% | 0.529 | 0     | 100   | 380 ± 4.91
275 | 0.977 | 64% | 0.530 | 0     | 100   | 383 ± 3.20

Case 6 (R = 0.519):
τ1  | γ(τ1) | D   | R̄     | x = 0 | x > 0 | 95% CI for x*
200 | 0.910 | 99% | 0.543 | 43    | 55    | 541 ± 21.05
225 | 0.932 | 97% | 0.539 | 23    | 77    | 434 ± 19.55
250 | 0.954 | 88% | 0.519 | 0     | 100   | 374 ± 8.76
275 | 0.977 | 71% | 0.520 | 0     | 100   | 376 ± 3.81

5.2 Accuracy of estimation

We measure predictability performance in terms of accuracy and precision as in the previous analysis, using the average over 100 replications. Accuracy describes the closeness of the estimate to the true value. We measure inaccuracy by the percent deviation from the true value, defined as

$$\frac{\hat{\theta} - \theta}{\theta} \times 100\%,$$

where θ is the true value and θ̂ is an estimated value.

Figs. 2-4 illustrate the percent deviations from the true values of α̂3, β̂, and R̂ at different censoring times. The results show that the percent errors in the estimates tend to decrease as the censoring time increases. Overall, the errors in R̂ were considerably smaller than the errors in α̂3 and β̂; in other words, the estimate R̂ was insensitive to the errors in the parameter estimates. The errors in R̂ for cases 1-3 were smaller than for cases 4-6, even though highly censored data were used.

Fig. 2. Comparisons of % error in estimating α̂3 at different decision times

Fig. 3. Comparisons of % error in estimating βˆ at different decision times

Fig. 4. Comparisons of % error in estimating Rˆ at different decision times

5.3 Precision of estimation

The precision can be measured by the narrowness of the approximate confidence interval and quantified by the relative standard deviation (RSD),

$$\frac{\sqrt{\mathrm{Var}(\hat{\theta})}}{\hat{\theta}} \times 100,$$

where Var(θ̂) denotes the variance of an estimate (National Institute of Standards and Technology, 2003). Figs. 5-7 illustrate the RSDs for α̂3, β̂, and R̂ at different censoring times. In all cases the RSDs decreased and then leveled out over time. Generally, the RSDs for cases 1-3 stabilized faster, at lower values, than those for cases 4-6, even though highly censored data were used. Thus, the precision of the parameter estimates for cases 1-3 improved faster than for cases 4-6. Notice that the RSDs for R̂ were lower than those for α̂3 and β̂; hence, the confidence intervals for R̂ were generally narrower than those for the parameter estimates.

Either inaccuracy or imprecision in the estimates can cause decision errors. Even with highly censored data, the overall percent errors and RSDs for R̂ were lower for cases 1-3 than for cases 4-6; the effect is that fewer decision errors are observed in Table 2 than in Table 3. For cases 2 and 3, the decision errors occurred at time 150. Although with perfect information it is not optimal to build capacity then, given the heavy revenue discounting, some high estimates of R resulted in incorrect decisions to build. These errors show the benefit of delaying the capacity decision if possible, because with better information the risk of carrying excess capacity can be avoided. For cases 4-6, the decision errors occurred at times 200 and 225, where the variability in R̂ obtained from highly censored observations was larger than in cases 1-3, causing more decision errors. In these cases, all errors were in the direction of failing to build capacity and losing the opportunity to earn revenue from processing the returns nearing obsolescence. A cost of reducing the lead time, which allows the decision to be delayed while more returns are received, might be outweighed by this added revenue.

Considering the results in all six cases, the decisions improved as the quality of the parameter estimates increased, even with highly censored observations; e.g., case 6 at time 250 has 88 percent of the data censored, but the error and RSD for R̂ are both less than ten percent. Overall, the percent errors and RSDs in R̂ for cases 1-3 were lower than those for cases 4-6; therefore, the decisions made in the corporate return cases 1-3 had less variability and fewer decision errors than those in the individual return cases.

Fig. 5. Comparisons of % RSD in estimating α̂3 at different decision times

Fig. 6. Comparisons of % RSD in estimating βˆ at different decision times

Fig. 7. Comparisons of % RSD in estimating Rˆ at different decision times

6. Summary

In this paper, we jointly analyzed forecasting and capacity management of returned products to demonstrate the benefit of information from early returns in determining the optimal remanufacturing capacity. We used maximum likelihood estimation for the gamma distribution with censored data to estimate the parameters of the product return distribution, including the probability that an item will be returned before obsolescence. We used a single-period model for capacity planning to determine the optimal size of the capacity expansion at different lead times. Using numerical simulation, we studied the combined effect on the capacity decision of revenue discounting and of the estimates based on censored data.

As expected, the reduction in revenue discounting shifts the optimal decision from not building toward building capacity. In addition, the reduction in revenue discounting has a stronger effect on the decision than the return probability; that is, it influences the firm to build capacity at later decision points even though the estimated return probability mostly decreases over time. Variability in the return time distribution parameter estimates affects the capacity decisions; that is, the capacity decision is sensitive to the error in these estimates. Specifically, for corporate returns the error in the estimates causes errors of building capacity when it will not be profitable; therefore, it is beneficial to delay the capacity decision to reduce the risk of excess capacity. On the other hand, for individual returns, even though the revenue discounting is less, the effect of high variability in the parameter estimates leads to foregoing capacity expansion when it would be profitable. In this case, the benefit of additional information on returns, which could be obtained with a shorter capacity lead time, is to earn additional revenue from reprocessing returns. In particular, the estimated probability of return before obsolescence had greater accuracy and precision than the estimated parameters of the return time distribution. The capacity decisions observed in the cases representing corporate returns, with shorter expected time to return, showed fewer errors than for individual returns. Therefore, it may be beneficial to offer incentives for timely return of end-of-life products.

An interesting extension of this research is to apply other estimation approaches and compare their performance with that of the MLE method. Another possible extension is to relax the assumption that all units will be returned, to better represent the remanufacturing environment in which only a portion of units sold will be returned. Finally, considering a series of capacity decisions over multiple periods is an interesting area for future research.

Appendix A. Derivation of Eq. (11)

$$\int_0^{F^{-1}(1-\frac{c}{b})} x f(x)\, dx = \int_0^{F^{-1}(1-\frac{c}{b})} \left(\int_0^{x} dt\right) f(x)\, dx = \int_0^{F^{-1}(1-\frac{c}{b})} \int_t^{F^{-1}(1-\frac{c}{b})} f(x)\, dx\, dt$$

$$= \int_0^{F^{-1}(1-\frac{c}{b})} \left[F\!\left(F^{-1}\!\left(1-\frac{c}{b}\right)\right) - F(t)\right] dt = \int_0^{F^{-1}(1-\frac{c}{b})} \left(1 - \frac{c}{b} - F(t)\right) dt. \qquad (A.1)$$

Appendix B. We review some useful methods, properties, and theorems related to MLE that we have used in our analysis. More detail can be found in Serfling (1980) and Lawless (1982). Define θ ≡ (θ1, θ2, ..., θp)ᵀ, where p denotes the number of estimated parameters. Let θ̂n be an MLE for θ and let Y1, Y2, ..., Yn be iid with density or mass functions f(yi; θ) that depend on θ.

1. Invariance: If θ̂n ≡ (θ̂1, θ̂2, ..., θ̂p)ᵀ is an MLE for θ, and if g(·) is a real-valued function, then g(θ̂) is an MLE for g(θ).

2. Asymptotic normality: The MLE θ̂n is AN(θ, 1/(n I(θ))).

3. Fisher information: Let

$$I(\theta) \equiv \left[ E\!\left\{ \frac{\partial}{\partial \theta_i} \log f(y_k;\theta)\, \frac{\partial}{\partial \theta_j} \log f(y_k;\theta) \right\} \right]_{p \times p} = \left[ -E\!\left\{ \frac{\partial^2}{\partial \theta_i\, \partial \theta_j} \log f(y_k;\theta) \right\} \right]_{p \times p}, \quad i, j = 1, 2, \ldots, p.$$

Then the total information is

$$I_{tot}(\theta) = \sum_{k=1}^{n} \left( -E\!\left[ \frac{\partial^2}{\partial \theta_i\, \partial \theta_j} \log f(y_k;\theta) \right] \right)_{p \times p} = n\, I(\theta).$$

4. Delta method: If θ̂n is AN(θ, V(θ)/n) and D = [∂g(θ)/∂θ1, ..., ∂g(θ)/∂θp], then g(θ̂n) is AN(g(θ), D I_tot⁻¹(θ) Dᵀ).

5. Approximate interval: A (1 − φ)100% approximate interval for θ is

$$\left( \hat{\theta} - z_{1-\varphi/2} \left[\frac{1}{n I(\theta)}\right]^{1/2},\ \hat{\theta} + z_{1-\varphi/2} \left[\frac{1}{n I(\theta)}\right]^{1/2} \right),$$

and a (1 − φ)100% approximate interval for g(θ) is

$$\left( g(\hat{\theta}) - z_{1-\varphi/2} \left[D I_{tot}^{-1}(\theta) D^{T}\right]^{1/2},\ g(\hat{\theta}) + z_{1-\varphi/2} \left[D I_{tot}^{-1}(\theta) D^{T}\right]^{1/2} \right).$$
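As a two-line illustration of item 5, the interval for g(θ) = R can be formed from the delta-method variance. The sketch below assumes the hypothetical `R_hat`, `alpha3_hat`, `beta_hat`, and `var_R` objects from the Section 4.2 sketch.

```python
from scipy import stats

z = stats.norm.ppf(1 - 0.05 / 2)                 # 95% interval, phi = 0.05
R_point = R_hat(alpha3_hat, beta_hat)
ci = (R_point - z * var_R ** 0.5, R_point + z * var_R ** 0.5)
print(ci)
```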

References

Bayus, B. 1998. An analysis of product lifetimes in a technologically dynamic industry. Management Science 44 (6), 763-775.
Bollen, N. 1999. Real options and product lifecycles. Management Science 45 (5), 670-684.
Cox, D. R., and Oakes, D. 1984. Analysis of Survival Data. Chapman and Hall, London.
De Brito, M. P., and Dekker, R. 2003. Modeling product returns in inventory control: exploring the validity of general assumptions. International Journal of Production Economics 81-82, 225-241.
De Brito, M. P., and van der Laan, E. 2002. Inventory management with product returns: the impact of (mis)information. Econometric Institute Report EI 2002-29, Erasmus University Rotterdam, The Netherlands.
Ding, X., Puterman, M. L., and Bisi, A. 2002. The censored newsvendor and the optimal acquisition of information. Operations Research 50 (3), 517-527.
Fleischmann, M., Bloemhof-Ruwaard, J. M., Dekker, R., van der Laan, E., van Nunen, J. A. E. E., and Van Wassenhove, L. N. 1997. Quantitative models for reverse logistics: a review. European Journal of Operational Research 103, 1-17.
Gallego, G., and Moon, I. 1993. The distribution free newsboy problem: review and extensions. Journal of the Operational Research Society 44 (8), 825.
Goh, T. N., and Varaprasad, N. 1986. A statistical methodology for the analysis of the life-cycle of reusable containers. IIE Transactions 18, 42-47.
Grenchus, E., Keene, R., Luce, R., and Nobs, C. 2002. Composition and value of returned industrial information technology equipment revisited. Proceedings of the IEEE Symposium on Electronics and the Environment, San Francisco, California, pp. 157-160.
Guide, Jr., V. D. R., Jayaraman, V., Srivastava, R., and Benton, W. C. 2000. Supply-chain management for recoverable manufacturing systems. Interfaces 30 (3), 125-142.
Guide, Jr., V. D. R., Srivastava, R., and Spencer, M. S. 1997. An evaluation of capacity planning techniques in a remanufacturing environment. International Journal of Production Research 35 (1), 67-82.
Hess, J. D., and Mayhew, G. E. 1997. Modeling merchandise returns in direct marketing. Journal of Direct Marketing 11 (2), 20-35.
Johnson, N. L., Kotz, S., and Balakrishnan, N. 1994. Continuous Univariate Distributions. Wiley, New York.
Kelle, P., and Silver, E. A. 1989a. Purchasing policy of new containers considering the random returns of previously issued containers. IIE Transactions 21 (4), 349-354.
Kelle, P., and Silver, E. A. 1989b. Forecasting the returns of reusable containers. Journal of Operations Management 8 (1), 17-35.
Kokkinaki, A. I. D., van Nunen, J., and Pappis, C. 2000. An exploratory study on electronic commerce for reverse logistics. Supply Chain Forum (1), 10.
Krupp, J. 1992. Core obsolescence forecasting in remanufacturing. Production and Inventory Management Journal 33 (2), 12-17.
Lawless, J. F. 1982. Statistical Models and Methods for Lifetime Data. Wiley, New York.
Luss, H. 1982. Operations research and capacity expansion problems: a survey. Operations Research 30 (5), 907-947.
Marx-Gómez, J., Rautenstrauch, C., and Nürnberger, A. 2002. Neuro-fuzzy approach to forecast returns of scrapped products to recycling and remanufacturing. Knowledge-Based Systems 15, 119-128.
National Institute of Standards and Technology. Retrieved March 30, 2003, from http://www.itl.nist.gov/div898/software/dataplot.html/refman1/ch2/relsd.pdf.
Rao, C. R. 1952. Advanced Statistical Methods in Biometric Research. Wiley, New York.
Saar, S., and Thomas, V. 2002. Advanced product tags for recycling. Proceedings of the IEEE Symposium on Electronics and the Environment, San Francisco, California, pp. 254-256.
Serfling, R. J. 1980. Approximation Theorems of Mathematical Statistics. Wiley, New York.
Shaked, M., and Shanthikumar, J. G. 1994. Stochastic Orders and Their Applications. Academic Press, Boston.
Solomon, R., Sandborn, P. A., and Pecht, M. G. 2000. Electronic part life cycle concepts and obsolescence forecasting. IEEE Transactions on Components and Packaging Technologies 23 (4), 707-717.
Srivastava, R., and Guide, Jr., V. D. R. 1995. Forecasting for parts recovery in a remanufacturing environment. Decision Sciences Institute, 1206-1208.
Tibben-Lembke, R. S. 2002. Life after death: reverse logistics and the product life cycle. International Journal of Physical Distribution and Logistics 32 (3), 223-244.
Toktay, B., Wein, L., and Zenios, S. 2000. Inventory management of remanufacturable products. Management Science 46 (11), 1412-1426.
Wolfram, S. 2003. The Mathematica Book. Cambridge University Press.
