int. j. prod. res., 2004, vol. 42, no. 2, 379–390

Optimization design of control charts based on Taguchi's loss function and random process shifts

Z. WU†*, M. SHAMSUZZAMAN† and E. S. PAN‡

An algorithm was developed for the optimization design of control charts based on the probability distribution of the random process shifts (e.g. the mean shift). The design objective was to minimize the overall mean of Taguchi's loss function per out-of-control case (denoted ML) by adjusting the sample size, sampling interval and control limits of the chart in an optimal manner. The optimal chart was therefore named the ML chart. A three-phase operational scenario for statistical process control (SPC) was also proposed to design and operate the ML chart. The probability distribution of the mean shifts can be modelled by a Rayleigh distribution based on the sample data of the mean shifts acquired in the three-phase scenario. Unlike the economic control chart designs, the design of the ML chart requires only a limited number of specifications that can be easily determined. The results of the comparison studies show that the ML chart is significantly superior to the Shewhart control chart in terms of overall performance. Although the ML chart is discussed in detail only for the X̄ chart, the general idea can be applied to many other charts such as CUSUM and EWMA.

1. Introduction

The Shewhart X̄ control chart is widely used in industry to monitor process mean shifts in SPC. Typically, the sample size n is small, often 4, 5 or 6 (Montgomery 2001). The 3-sigma control limits are commonly used, but the control limits LCL and UCL can easily be adjusted to satisfy different requirements on the false alarm rate. The sampling interval h is usually decided according to the concept of rational subgroups. However, since on-line measurement and distributed computing systems are becoming the norm in today's SPC applications (Woodall and Montgomery 1999), the sampling interval may be much smaller than the working shift and the notion of rational subgrouping is rarely enforced (Palm et al. 1997). While the Shewhart X̄ chart is easy to design and operate, its performance is unsatisfactory from either the statistical or the economic viewpoint, especially for small or moderate process shifts. The performance of the Shewhart X̄ chart and other control charts can be measured by the Average Time to Signal (ATS), the average time required to signal a process shift after it has occurred. The out-of-control ATS is commonly used as an indicator of the power (or effectiveness) of the control chart, and the in-control ATS0 as an indicator of the false alarm rate.
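For a concrete sense of these quantities, the in-control ATS0 of a standard 3-sigma Shewhart chart follows directly from the false alarm probability. A minimal Python sketch (the one-hour sampling interval is an illustrative value, not from the paper):

```python
from statistics import NormalDist

# False-alarm probability of a 3-sigma Shewhart chart: both tails of N(0, 1)
alpha = 2 * (1 - NormalDist().cdf(3.0))   # ~0.0027

h = 1.0                 # sampling interval in hours (illustrative)
ats0 = h / alpha        # in-control average time to signal
print(round(ats0))      # ~370 hours between false alarms
```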

Revision received July 2003.
†School of Mechanical and Production Engineering, Nanyang Technological University, Singapore 639798.
‡Department of Industrial and Management Engineering, Shanghai Jiaotong University, P. R. China 200030.
*To whom correspondence should be addressed. e-mail: [email protected]
International Journal of Production Research ISSN 0020-7543 print/ISSN 1366-588X online © 2004 Taylor & Francis Ltd, http://www.tandf.co.uk/journals, DOI: 10.1081/00207540310001614169


Research effort has been made to develop more advanced control charts in two main directions, i.e. statistical designs (Page 1954, Keats et al. 1995, Prabhu et al. 1997, Wu et al. 2002) and economic designs (Duncan 1956, Montgomery 1986, Castillo and Montgomery 1996, Al-Oraini and Rahim 2002). The statistical design minimizes the out-of-control ATS at one or a few out-of-control states by optimizing n, h, LCL and UCL, on condition that the in-control ATS0 is no smaller than a specified value. However, the statistical design does not directly measure the costs (or losses) resulting from the out-of-control cases. The economic design aims at minimizing the total cost associated with the implementation of SPC in a process. Many researchers (Woodall 1986, Montgomery 2001) have pointed out the weak points of the economic design, including poor statistical properties, complex mathematical models, and the difficulty of estimating costs and other model parameters. Another problem for both statistical and economic designs stems from the fact that only one or a few values of the process shifts (e.g. the mean shift δ) are taken into consideration during the optimization design (Page 1954, Wheeler 1983, Saniga 1989), and these δ values are usually decided subjectively. As a result, the design procedure may be inadequate to reflect the real working conditions. In other words, the resultant control chart may have optimal performance for the particular value(s) of δ, but may work unsatisfactorily for other values of δ. The mean shift δ is a random variable and has different probability distributions for different processes. If data on δ can be collected during the field operation of the control chart, the optimization design can be carried out based on these data (or the corresponding probability distribution). Consequently, the performance of the control chart will be improved over the whole range of δ.
This article proposes a design algorithm based on the data of the process shifts (e.g. mean shifts δ). These data are acquired from the observations of the out-of-control cases within a three-phase SPC scenario. The probability distribution of δ may be approximated by a Rayleigh distribution (Wu et al. 2002). The design algorithm optimizes n, h, LCL and UCL in order to minimize ML (the overall mean of the loss function per out-of-control case). The loss function is used broadly in industry to measure the cost due to poor quality (Spiring and Yeung 1998). The loss ℓ associated with a particular value x of the quality characteristic is proportional to the square of the difference between x and the target value T:

ℓ = k(x − T)²,   (1)

where k is a constant depending on the cost associated with the specification limits. Clearly, the smaller the loss, the closer x is to T and the better the product quality. Therefore, reducing ℓ is consistent with the effort for continuous quality improvement. The expected value, L, of ℓ over the distribution of x is equal to (Ross 1989):

L = k[σ² + (μ − T)²],   (2)

where μ and σ are the mean and standard deviation of x. This paper assumes that the probability distribution of the quality characteristic x is normal with known in-control mean μ0 and standard deviation σ0. When a mean shift occurs, the process mean μ will change:

μ = μ0 + δσ0,   (3)


where δ is the mean shift in terms of σ0. When the process is in control, δ = 0. It is also assumed that, in the discussion of the X̄ chart, the shift in standard deviation is not taken into consideration (i.e. σ ≈ σ0).
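Equation (2) can be checked numerically by comparing the closed form with a Monte Carlo estimate of E[k(x − T)²]. A quick sketch with illustrative values of k, T, μ and σ (not taken from the paper):

```python
import random

k, T = 1.0, 0.0          # loss constant and target (illustrative values)
mu, sigma = 0.5, 1.0     # shifted process mean and standard deviation

# Closed form, equation (2): L = k * (sigma^2 + (mu - T)^2)
L_exact = k * (sigma**2 + (mu - T)**2)
print(L_exact)           # 1.25

# Monte Carlo estimate of E[k * (x - T)^2] with x ~ N(mu, sigma)
random.seed(1)
n = 200_000
L_mc = sum(k * (random.gauss(mu, sigma) - T) ** 2 for _ in range(n)) / n
print(round(L_mc, 2))
```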

2. Optimization design

2.1. Specifications

To design the optimal ML X̄ chart, only the following three parameters need to be specified by the designers:

τ: the minimum allowable in-control ATS0,
R: the maximum allowable inspection rate,
μδ: the mean of the mean shifts δ.

The value of τ is specified based on the trade-off between the false alarm rate and the detection power. If the cost of handling false alarms is high, a larger τ should be used to reduce the false alarm frequency. However, a large τ may at the same time impair the effectiveness of the control chart. The actual in-control ATS0 must be greater than or equal to τ. R is decided according to the available resources (operators and measuring instruments) and can be estimated from the field test during the pilot runs. The value of μδ can be estimated from the m sample values of δ (denoted by d1, d2, …, dm) that are obtained from the observations of m out-of-control cases during the operation of the control chart. In fact, the specification of μδ can be substituted by the m discrete values di. The optimization design of the ML X̄ chart can be carried out by fixing μ0 at zero and σ0 at one. The final control limits can be determined by a simple modification using the actual values of μ0 and σ0.

2.2. Optimization model

The design algorithm of the ML X̄ chart can be described by the following optimization model:

Objective function: ML = minimum,   (4)

Constraint functions: ATS0 ≥ τ,   (5)

r ≤ R,   (6)

Design variables: n, h, LCL, UCL,

where r is the actual (or resultant) inspection rate. The optimization model optimizes n, h, LCL and UCL in order to minimize ML (the overall mean of the loss function per out-of-control case, over the probability distribution of the random mean shift δ), on condition that the constraints on ATS0 and r are both satisfied. The minimization of ML will reduce the loss in quality (or the cost, or the damage) incurred by the out-of-control cases. Among the four design variables n, h, LCL and UCL, the sample size n is the only independent variable. The other three variables can be determined as follows:

(1) Sampling interval (h): to satisfy constraint (6) on the inspection rate r and to make full use of the available resources, it is desired to have

R = r = n/h.   (7)

Therefore,

h = n/R.   (8)

(2) Upper control limit (UCL): to satisfy constraint (5) on ATS0 and to make the control chart most powerful, it is ideal to have

ATS0 = τ.   (9)

The corresponding type I error probability is

α = h/ATS0 = h/τ.   (10)

Therefore, UCL can be calculated as follows:

UCL = μ0 + Φ⁻¹(1 − 0.5α) · σ0/√n,   (11)

where Φ⁻¹( ) is the inverse of the cumulative probability function of the standard normal distribution.

(3) Lower control limit (LCL): since the control limits of the X̄ chart are symmetric about μ0, the lower control limit LCL can be determined easily:

LCL = 2μ0 − UCL.   (12)
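Steps (1) to (3) determine h, UCL and LCL entirely from the sample size n and the specifications. A sketch in Python (the function name is ours), using the standard normal inverse CDF from the standard library:

```python
from math import sqrt
from statistics import NormalDist

def chart_parameters(n, R, tau, mu0=0.0, sigma0=1.0):
    """Sampling interval and control limits for a given sample size n,
    following equations (8) and (10)-(12)."""
    h = n / R                                   # equation (8)
    alpha = h / tau                             # equation (10)
    z = NormalDist().inv_cdf(1 - 0.5 * alpha)
    ucl = mu0 + z * sigma0 / sqrt(n)            # equation (11)
    lcl = 2 * mu0 - ucl                         # equation (12)
    return h, lcl, ucl

# With the specifications used later in study one (tau = 400, R = 4) and n = 5:
h, lcl, ucl = chart_parameters(n=5, R=4, tau=400)
print(h, round(lcl, 3), round(ucl, 3))   # 1.25 -1.322 1.322
```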

After n, h, LCL and UCL have all been determined, the objective function ML can be calculated by

ML = ∫₀^∞ g · ATS(δ) · L(δ) · f(δ) dδ,   (13)

where L(δ) and ATS(δ) are the expected values of the loss function and the out-of-control ATS, respectively, for a given value of the mean shift δ. The variable g is the number of products fabricated in a time unit. The product of g and ATS is the average number of units produced under an out-of-control case. The calculation of the probability density function f(δ) of δ will be detailed in the next section. It is noted that only positive mean shifts need to be considered because of the symmetry of the normal distribution of x. It is well known that a small mean shift δ may contribute a substantial amount to ML, because a small δ results in a large ATS. On the other hand, a large δ may also significantly increase the value of ML, because a large δ makes the expected loss L(δ) great. Integral (13) takes into account mean shifts of various sizes and probabilities, and provides a more comprehensive measure of the overall loss. The objective function can be rewritten as (see equations 2 and 13, and note T = μ0):

ML = kg ∫₀^∞ ATS(δ) · (σ² + δ²σ0²) · f(δ) dδ.   (14)

Since the product of k and g is a constant and has no effect on the optimal values of n, h, LCL and UCL, it will be neglected in the subsequent discussion for simplicity.

It is assumed that x has stabilized at the in-control distribution by the time the process shift occurs and that the random time of the process shift has a uniform distribution within a sampling interval (Reynolds et al. 1990). Then, ATS(δ) is calculated by

ATS(δ) = ARL(δ) · h − 0.5h,   (15)

where the Average Run Length (ARL) is the average number of samples required to signal a process shift after it has occurred:

ARL(δ) = 1 / (1 − β(δ)),   (16)

β(δ) = Φ((UCL − (μ0 + δσ0)) / (σ0/√n)) − Φ((LCL − (μ0 + δσ0)) / (σ0/√n)),   (17)

where β is the probability of the type II error for a given value of δ. It is well known that ARL is a decreasing function of the sample size n, and h is an increasing function of n (because of constraint 6). Namely, if a small n is used, ARL will be large and h will be small. Conversely, if a large n is used, ARL will be small and h will be large. As a result, changing the independent parameter n in either direction makes ARL and h compete with each other. Since both ARL and h are factors of ATS (equation 15), there must be an optimal value of n that minimizes ATS for a given value of δ. Furthermore, from an overall viewpoint, there must be an optimal value of n that makes the objective function ML minimum. The final expression of ML is

ML = ∫₀^∞ [ h / (1 − Φ((UCL − μ0 − δσ0)/(σ0/√n)) + Φ((LCL − μ0 − δσ0)/(σ0/√n))) − 0.5h ] · (σ² + δ²σ0²) · f(δ) dδ.   (18)

The entire optimization design is implemented as a single-variable search. The single variable n is increased from one in steps of size one. The procedure of the search algorithm is outlined below:

(1) Specify τ, R, μδ and set μ0 = 0, σ0 = 1.
(2) Initialize MLmin as a very large number, say 10⁷ (MLmin is used to store the minimum ML).
(3) Search n from one in steps of one until ML cannot be further reduced. For a given value of n:
    (3.1) Calculate h by using equation (8).
    (3.2) Calculate UCL by using equations (10) and (11).
    (3.3) Calculate LCL by using equation (12).
    (3.4) Calculate ML by using equation (18).
    (3.5) If the calculated ML is smaller than the current MLmin, replace the latter by the former, and store the current values of n, h, LCL and UCL as the temporary optimal design.
(4) At the end of the search, the optimal ML X̄ chart that produces the minimum ML is identified, together with the corresponding optimal values of n, h, LCL and UCL.

(5) Finally, the control limits are converted back using the actual values of μ0 and σ0:

LCLfinal = μ0 + σ0 · LCL,
UCLfinal = μ0 + σ0 · UCL.   (19)

The design algorithm can be implemented in a computer program, by which the optimization design of an ML X̄ chart can be completed almost instantly on a personal computer.
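The full procedure can be sketched in Python (function names and the quadrature scheme are ours; a simple rectangle rule stands in for whatever numerical integration the authors used), assuming μ0 = 0 and σ = σ0 = 1:

```python
from math import sqrt, pi, exp
from statistics import NormalDist

PHI = NormalDist().cdf
PHI_INV = NormalDist().inv_cdf

def rayleigh_pdf(delta, mu_d):
    # Equation (22): Rayleigh density written in terms of its mean mu_d
    return (pi * delta / (2 * mu_d**2)) * exp(-pi * delta**2 / (4 * mu_d**2))

def ml_value(n, R, tau, mu_d):
    """Objective (18) with mu0 = 0, sigma = sigma0 = 1 and the constant kg
    dropped, integrated over the Rayleigh density by a rectangle rule."""
    h = n / R                                   # equation (8)
    alpha = h / tau                             # equation (10)
    ucl = PHI_INV(1 - 0.5 * alpha) / sqrt(n)    # equation (11)
    lcl = -ucl                                  # equation (12)
    rn = sqrt(n)
    steps, top = 4000, 8.0 * mu_d               # truncate the upper limit
    dd = top / steps
    total = 0.0
    for i in range(1, steps + 1):
        d = i * dd
        beta = PHI((ucl - d) * rn) - PHI((lcl - d) * rn)   # equation (17)
        ats = h / (1.0 - beta) - 0.5 * h                   # equations (15), (16)
        total += ats * (1.0 + d * d) * rayleigh_pdf(d, mu_d) * dd
    return total, h, lcl, ucl

def design_ml_chart(R, tau, mu_d, n_max=200):
    """Single-variable search over n, following steps (1)-(5)."""
    best = None
    for n in range(1, n_max + 1):
        ml, h, lcl, ucl = ml_value(n, R, tau, mu_d)
        if best is None or ml < best[0]:
            best = (ml, n, h, lcl, ucl)
        elif n > best[1] + 10:    # ML has stopped improving; end the search
            break
    return best

# Study one later reports n = 36, h = 9.00, limits ±0.380, ML = 18.702
# for the specifications tau = 400, R = 4, mu_delta = 0.8
ml, n, h, lcl, ucl = design_ml_chart(R=4, tau=400, mu_d=0.8)
print(n, h, round(lcl, 3), round(ucl, 3), round(ml, 3))
```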

2.3. Three-phase SPC scenario and the probability distribution of the mean shift

The probability distribution of δ can be established through a three-phase SPC scenario. In the first phase, data on x are collected in order to estimate the in-control μ0 and σ0 and to determine the control limits of a workable control chart (Montgomery 2001). The workable control chart is then used to monitor the process in the second phase and to acquire m sample values of δ (i.e. d1, d2, …, dm). Suppose an out-of-control condition is signalled at the k2th sample and the follow-up investigation discovers that the responsible assignable cause occurred between the (k1 − 1)th and the k1th samples (k1 ≤ k2). This means that the k1th to k2th samples were taken under the out-of-control condition. As a result, the grand average X̿ of the sample means X̄K (K = k1, k1 + 1, …, k2) of these samples can be used to estimate the mean shift di (the sample value of δ for this particular out-of-control case):

X̿ = (Σ_{K=k1}^{k2} X̄K) / (k2 − k1 + 1),

di = (X̿ − μ0) / σ0.   (20)
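A small sketch of equation (20); the in-control parameters and the out-of-control sample means below are hypothetical values chosen for illustration:

```python
mu0, sigma0 = 80.0, 0.002      # in-control mean and standard deviation (illustrative)

# Sample means X-bar_K taken while the process was out of control,
# i.e. samples k1..k2 (hypothetical measurements):
out_of_control_means = [80.0031, 80.0028, 80.0035, 80.0030]

# Equation (20): the grand average of the out-of-control sample means,
# standardized by sigma0, estimates the mean shift d_i
grand_avg = sum(out_of_control_means) / len(out_of_control_means)
d_i = (grand_avg - mu0) / sigma0
print(round(d_i, 2))   # 1.55
```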

When m sample values di are available, the ML X̄ chart can be designed by using one of the following two approaches.

(1) Non-parametric approach: the sample values of δ (d1, d2, …, dm) are used directly, without being fitted to a theoretical probability distribution. The expression for ML (equation 14) becomes (with the product kg neglected):

ML = (1/m) Σ_{i=1}^{m} ATS(di) · (σ² + di²σ0²).   (21)

ATS(di) is still calculated by equations (15)-(17).

(2) Parametric approach: the sample values d1, d2, …, dm are first fitted to a theoretical probability distribution. The Rayleigh distribution may be an appropriate candidate (Wu et al. 2002). This probability distribution is often used to model the positional deviation from a target in geometrical tolerancing. The probability density function of the Rayleigh distribution is given by

f(δ) = (πδ / (2μδ²)) · exp(−πδ² / (4μδ²)),   (22)

which is characterized by a single parameter: the mean μδ of δ. Figure 1 shows the density function of a Rayleigh distribution with μδ = 1.

[Figure 1. Rayleigh distribution for the mean shift (μδ = 1.0).]

The parameter μδ can be estimated from (d1, d2, …, dm):

μδ ≈ (Σ_{i=1}^{m} di) / m.   (23)

Then, equation (18) can be used directly to calculate ML. The designed ML X̄ chart can now be used to monitor the forthcoming process in phase three. Moreover, when new data on δ become available, the ML X̄ chart can be redesigned using the latest information. Consequently, the three-phase SPC scenario is able to continuously update the ML X̄ chart and maintain it as an optimal one. The three-phase SPC scenario is clearly superior to the current two-phase SPC procedure (in which phase one is used to estimate μ0 and σ0 in order to design the workable control chart, and the workable chart is then used to monitor the process in phase two). While the two-phase SPC scenario results in just a workable chart with average performance, the three-phase SPC scenario produces an optimal chart that is designed according to the actual working conditions of a particular process and will have first-rate performance for that process.
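The parametric approach (equations 22 and 23) can be sketched as follows; the mean-shift observations are hypothetical, and the integration check simply confirms that the fitted density is a proper pdf:

```python
from math import pi, exp

def rayleigh_pdf(delta, mu_d):
    # Equation (22): Rayleigh density written in terms of its mean mu_d
    return (pi * delta / (2 * mu_d**2)) * exp(-pi * delta**2 / (4 * mu_d**2))

# Hypothetical mean-shift observations gathered in phase two:
d = [0.45, 0.62, 1.10, 0.38, 0.95, 0.71, 1.42, 0.55]

mu_hat = sum(d) / len(d)      # equation (23)

# Sanity check: the fitted density should integrate to ~1 over [0, inf)
step, total = 0.001, 0.0
for i in range(1, 10000):
    total += rayleigh_pdf(i * step, mu_hat) * step
print(round(mu_hat, 4), round(total, 3))
```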


Whether the parametric or the non-parametric approach is employed, the accuracy of the calculated ML depends on the number m of the sample values of the mean shifts δ. The larger the m, the smaller the following error:

e = |MLa − MLm| / MLa,   (24)

where MLa is the accurate value of ML (assuming that the probability distribution of δ is exactly known) and MLm is the estimated value (i.e. the probability distribution is approximated based on the m sample values of δ). A simulation test shows that, if δ follows a Rayleigh distribution, the average value of e is smaller than 10% for m ≥ 10 and smaller than 5% for m ≥ 40.

3. Comparison studies

The performance of the ML X̄ chart and three other charts is compared in this section, in which the mean shift δ is assumed to follow a Rayleigh distribution, and μ0 and σ0 are fixed at zero and one, respectively. The four control charts are as follows:

(1) A Shewhart X̄ chart that uses a fixed sample size of five.

(2) An X̄1.0 chart. The design of this chart searches for the optimal values of n, h, LCL and UCL in order to minimize ML. However, it considers only a fixed mean shift value (δ = 1.0) rather than the whole probability distribution of δ. Referring to equation (21):

ML = ATS(1.0) · (σ² + 1.0² · σ0²).   (25)

The fixed value (δ = 1.0) is adopted in many previous optimization designs of control charts (Prabhu et al. 1997, Das and Jain 1997).

(3) An X̄μδ chart. It is almost the same as the X̄1.0 chart, except that δ is fixed at μδ rather than at 1.0. This implies that μδ is known.

(4) The ML X̄ chart that is proposed here.

To facilitate the comparison, a normalized value MLnormalized is calculated for each chart by using the ML of the ML X̄ chart as the norm:

MLnormalized = ML / MLML.   (26)

Obviously, if the MLnormalized of a chart is > 1, the performance of this chart is inferior to that of the ML X̄ chart, and vice versa.

3.1. Study one

The comparison is first conducted under the following general conditions:

τ = 400, R = 4, μδ = 0.8.   (27)

The charting parameters, the ML and the normalized MLnormalized for each of the four control charts are worked out and listed below:

Shewhart X̄ chart: n = 5, h = 1.25, LCL = −1.322, UCL = 1.322, ML = 45.568, MLnormalized = 2.437.
X̄1.0 chart: n = 14, h = 3.50, LCL = −0.701, UCL = 0.701, ML = 24.492, MLnormalized = 1.310.
X̄μδ chart: n = 20, h = 5.00, LCL = −0.559, UCL = 0.559, ML = 20.932, MLnormalized = 1.119.
ML X̄ chart: n = 36, h = 9.00, LCL = −0.380, UCL = 0.380, ML = 18.702, MLnormalized = 1.000.

Note that all of the above charts satisfy the constraint functions on both ATS0 and r (equations 5 and 6). The MLnormalized values of the four charts clearly indicate that the ML X̄ chart outperforms all the other charts. The ML X̄ chart is more effective (in terms of ML) than the Shewhart X̄ chart, the X̄1.0 chart and the X̄μδ chart by 143.7, 31.0 and 11.9%, respectively, for this particular case. Note that both the ML X̄ chart and the X̄μδ chart are designed based on information about μδ. However, since the ML X̄ chart is designed by taking into account the entire probability distribution of δ, it is considerably more effective than the X̄μδ chart. It is also found that, even though the first three charts are all designed based on a single value of δ, the X̄μδ chart substantially outperforms the Shewhart X̄ chart and the X̄1.0 chart. This is because the X̄μδ chart uses the mean μδ of δ rather than a subjectively determined value of δ. This finding highlights the importance and usefulness of the three-phase SPC scenario. Thanks to this scenario, the sample values of δ can be acquired and used to estimate μδ, which can, in turn, be used to design the X̄μδ chart or the ML X̄ chart.

3.2. Study two

Next, the performance of the four charts is further studied by a 2³ factorial experiment (Montgomery 2001). The three parameters τ, R and μδ are used as the input factors, and MLnormalized is taken as the response. Each of the three factors varies at two levels, resulting in eight runs (i.e. eight combinations of the values of the three factors). The low and high levels for each factor are decided below:

τ: 100, 600;  R: 2, 10;  μδ: 0.7, 1.8.   (28)

The MLnormalized values of the control charts under each of the eight runs are calculated and enumerated in table 1. It can be seen that the ML X̄ chart is always the most effective chart in all of the runs. The average of the MLnormalized values for each chart over the eight runs is calculated and listed at the bottom of table 1. These averages indicate that, from an overall viewpoint (over different combinations of the values of τ, R and μδ), the ML X̄ chart is more effective than the Shewhart X̄ chart, the X̄1.0 chart and the X̄μδ chart by 91.7, 28.6 and 13.3%, respectively.

Run | τ | R | μδ | Shewhart | X̄1.0 | X̄μδ
1 | 100.0 | 2.0 | 0.7 | 1.421 | 1.122 | 1.005
2 | 100.0 | 2.0 | 1.8 | 1.022 | 1.314 | 1.000
3 | 100.0 | 10.0 | 0.7 | 2.285 | 1.333 | 1.077
4 | 100.0 | 10.0 | 1.8 | 1.069 | 1.135 | 1.069
5 | 600.0 | 2.0 | 0.7 | 2.431 | 1.394 | 1.090
6 | 600.0 | 2.0 | 1.8 | 1.099 | 1.103 | 1.099
7 | 600.0 | 10.0 | 0.7 | 4.414 | 1.884 | 1.315
8 | 600.0 | 10.0 | 1.8 | 1.591 | 1.003 | 1.406
Average | | | | 1.917 | 1.286 | 1.133

Table 1. Comparison study.

3.3. Study three

Finally, it may also be interesting to compare the performance of the four control charts under economic conditions. The four charts are designed under the combination (τ = 400, R = 4, μδ = 0.8) and their charting parameters n, h, LCL and UCL are listed in study one. In this study, the loss cost ℓc defined by Duncan (1956) in his pioneering work on economic chart designs is used as the objective function. It differs from the total cost only by a constant. A set of likely numerical values selected by Duncan in his paper is also used here for the following additional parameters:

λ = 0.01, the probability of occurrence of the assignable cause;
E = 0.05 h, the time taken to take and inspect a unit for a sample;
D = 2 h, the average time taken to find the assignable cause after a sample point is plotted beyond the control limits;
M = $100, the difference between the in-control and out-of-control incomes per hour from the operation of the process;
V = $50, the cost per occasion of looking for an assignable cause when none exists;
W = $25, the average cost per occasion of finding the assignable cause when it exists;
b = $0.50, the cost per sample of sampling and plotting that is independent of the sample size;
c = $0.10, the cost per unit of sampling, testing and computation that is related to the sample size.

The mean MLC of ℓc over the probability distribution of the mean shift is calculated by

MLC = ∫₀^∞ ℓc(δ) · f(δ) dδ,   (29)

where ℓc(δ) is the loss cost for a given mean shift δ. The MLC values produced by the Shewhart X̄ chart, the X̄1.0 chart, the X̄μδ chart and the ML X̄ chart are 21.834, 14.640, 13.469 and 13.221, respectively, for this particular case. This indicates that the ML X̄ chart still outperforms the other charts under the economic conditions, even though it is not designed to minimize the loss cost. Moreover, the performance ranking of the four charts is the same as in study one. The reason may be that Taguchi's loss function is somewhat analogous to the loss cost used in Duncan's economic designs. It is also noted that the superiority of the ML X̄ chart over the Shewhart X̄ chart is quite significant in this case.
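The normalized values and averages reported in studies one and two can be cross-checked directly from the reported figures; a quick sketch:

```python
# ML values reported in study one (tau = 400, R = 4, mu_delta = 0.8)
ml = {"Shewhart": 45.568, "Xbar_1.0": 24.492, "Xbar_mu": 20.932, "ML": 18.702}

# Equation (26): each chart's ML divided by that of the ML X-bar chart
normalized = {name: round(v / ml["ML"], 3) for name, v in ml.items()}
print(normalized)   # {'Shewhart': 2.437, 'Xbar_1.0': 1.31, 'Xbar_mu': 1.119, 'ML': 1.0}

# Column averages from table 1 (study two)
shewhart = [1.421, 1.022, 2.285, 1.069, 2.431, 1.099, 4.414, 1.591]
xbar_10 = [1.122, 1.314, 1.333, 1.135, 1.394, 1.103, 1.884, 1.003]
print(round(sum(shewhart) / 8, 4), round(sum(xbar_10) / 8, 4))
```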

No. | di | No. | di
1 | 0.566 | 9 | 0.432
2 | 0.619 | 10 | 0.406
3 | 0.462 | 11 | 0.402
4 | 0.600 | 12 | 0.575
5 | 1.697 | 13 | 0.507
6 | 0.226 | 14 | 1.275
7 | 1.865 | 15 | 0.465
8 | 1.065 | |

Table 2. Sample values of the mean shift.

4. Example

A manufacturing factory produces a special type of bearing. The diameter x of the bearing is a key dimension and is specified as 80 ± 0.008 mm. The process mean can be easily adjusted to the nominal value (80 mm) at the centre between the lower and upper specification limits. The quality assurance engineer specifies the minimum allowable ATS0 (τ) as 400 h and the maximum allowable inspection rate R as 4 units/h. In the phase-one operation, the pilot runs are conducted. It is found that the distribution of the diameter can be well approximated by a normal distribution and that the standard deviation σ0 is very close to 0.002 mm. From these data, a workable Shewhart X̄ chart is designed in which the sample size n is set at five. The other parameters are also worked out as follows: n = 5, h = 1.25 h, LCL = 79.99736 mm, UCL = 80.00264 mm. Then, in the phase-two operation, the Shewhart X̄ chart starts to monitor the process mean. Meanwhile, 15 sample values of δ (d1, d2, …, d15) are observed and recorded (table 2; μδ ≈ 0.774 by equation 23). At the end of phase two, these di values are used to design the ML X̄ chart directly (non-parametric procedure). The parameters of the optimal ML X̄ chart are: n = 36, h = 9.00 h, LCL = 79.99924 mm, UCL = 80.00076 mm. Both the Shewhart X̄ chart and the ML X̄ chart produce the same in-control ATS0 and inspection rate r, equal to the specified values. The number g of units produced per hour is equal to 45 and the constant k of the loss function (see equation 1) is taken as one by proper scaling. The ML values resulting from the Shewhart X̄ chart and the ML X̄ chart are 2057 and 618, respectively, indicating a significant improvement achieved by the ML X̄ chart: the loss incurred per out-of-control case under the Shewhart X̄ chart is, on average, 232% higher than under the ML X̄ chart.
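The control limits quoted in the example follow from equations (8) and (10)-(12) with τ = 400 h, R = 4 units/h, μ0 = 80 mm and σ0 = 0.002 mm; a quick check in Python (the function name is ours):

```python
from math import sqrt
from statistics import NormalDist

mu0, sigma0 = 80.0, 0.002     # mm, from the example
tau, R = 400.0, 4.0           # specified ATS0 (h) and inspection rate (units/h)

def limits(n):
    h = n / R                                      # equation (8)
    alpha = h / tau                                # equation (10)
    half_width = NormalDist().inv_cdf(1 - 0.5 * alpha) * sigma0 / sqrt(n)
    return mu0 - half_width, mu0 + half_width      # equations (11), (12)

print([round(x, 5) for x in limits(5)])    # Shewhart chart: [79.99736, 80.00264]
print([round(x, 5) for x in limits(36)])   # ML chart: [79.99924, 80.00076]
```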
5. Conclusion

An ML X̄ control chart was proposed that minimizes the overall mean ML of the loss function per out-of-control case by optimizing the sample size, sampling interval and control limits of the chart. Unlike the statistical designs, the design algorithm of the ML X̄ chart links the operating characteristics of the chart with quality cost. However, in contrast to the economic designs (in which many input costs have to be determined), the design specifications of the ML X̄ chart are easy to handle. The design algorithm considers the whole probability distribution of the mean shifts, which can be modelled by a Rayleigh distribution. Comparison studies show that the ML X̄ chart outperforms, to a significant degree and from an overall viewpoint, the Shewhart X̄ chart, as well as some other charts that are designed based on fixed mean shifts.

The design and operation of the ML chart was implemented through a three-phase SPC scenario, in which the data for the random mean shifts δ can be collected and the probability distribution model built. The concept of the three-phase SPC scenario paves the way for converting the workable control chart with average performance into an optimal chart. The design algorithm of the ML X̄ chart can be easily computerized. The design of a chart has to be conducted only once, and the resultant ML X̄ chart can be used continuously until the process parameters and conditions change. More importantly, the use of the ML X̄ chart will not in any sense make it more difficult for the operators to run and understand the control chart. The general ideas of the ML X̄ chart discussed here can be easily modified and then applied to the designs of other types of control charts such as CUSUM and EWMA.

References

AL-ORAINI, H. A. and RAHIM, M. A., 2002, Economic statistical design of X̄ control charts for systems with gamma(λ, 2) in-control times. Computers and Industrial Engineering, 43, 645–654.
CASTILLO, E. D. and MONTGOMERY, D. C., 1996, A general model for the optimal design of X̄ charts used to control short or long run processes. IIE Transactions, 28, 193–201.
DAS, T. K. and JAIN, V., 1997, An economic design model for X̄ charts with random sampling policies. IIE Transactions, 29, 507–518.
DUNCAN, A. J., 1956, The economic design of X̄ charts used to maintain current control of a process. Journal of the American Statistical Association, 51, 228–242.
KEATS, J. B., MISKULIN, J. D. and RUNGER, G. C., 1995, Statistical process control scheme design. Journal of Quality Technology, 27, 214–225.
MONTGOMERY, D. C., 1986, Economic design of an X̄ control chart. Journal of Quality Technology, 14, 40–43.
MONTGOMERY, D. C., 2001, Introduction to Statistical Quality Control (New York: Wiley).
PAGE, E. S., 1954, Control charts for the mean of a normal population. Journal of the Royal Statistical Society, B16, 131–135.
PALM, A. C., RODRIGUEZ, R. N., SPIRING, F. A. and WHEELER, D. J., 1997, Some perspectives and challenges for control chart methods. Journal of Quality Technology, 29, 122–127.
PRABHU, S. S., RUNGER, G. C. and MONTGOMERY, D. C., 1997, Selection of the subgroup size and sampling interval for a CUSUM control chart. IIE Transactions, 29, 451–457.
REYNOLDS, M. R., JR, AMIN, R. W. and ARNOLD, J. C., 1990, CUSUM charts with variable sampling intervals. Technometrics, 32, 371–384.
ROSS, P. J., 1989, Taguchi Techniques for Quality Engineering: Loss Function, Orthogonal Experiments, Parameter and Tolerance Design (New York: McGraw-Hill).
SANIGA, E. M., 1989, Economic statistical control chart designs with an application to X̄ and R charts. Technometrics, 31, 313–320.
SPIRING, F. A. and YEUNG, A. S., 1998, A general class of loss functions with individual applications. Journal of Quality Technology, 30, 152–162.
WHEELER, D. J., 1983, Detecting a shift in process average: tables of the power function for X̄ charts. Journal of Quality Technology, 15, 155–170.
WOODALL, W. H., 1986, Weaknesses of the economic design of control charts. Technometrics, 28, 408–409.
WOODALL, W. H. and MONTGOMERY, D. C., 1999, Research issues and ideas in statistical process control. Journal of Quality Technology, 31, 376–386.
WU, Z., XIE, M. and TIAN, Y., 2002, Optimization design of the X̄&S charts for monitoring process capability. Journal of Manufacturing Systems, 21, 83–92.
