Int J Adv Manuf Technol (2013) 66:1611–1621 DOI 10.1007/s00170-012-4444-1

ORIGINAL ARTICLE

Modeling quality control data using Weibull distributions in the presence of a change point

Jorge Alberto Achcar · Claudio Luis Piratelli · Roberto Molina de Souza

Received: 17 April 2012 / Accepted: 1 August 2012 / Published online: 24 August 2012 © Springer-Verlag London Limited 2012

Abstract In this paper, we introduce a Bayesian analysis of a data set selected at a Brazilian food company. This data set is related to the times taken by different quality control analysts to test samples of manufactured products that arrive at the company's quality control department. A sample is selected from each batch of products arriving at the company's quality control sector, and these samples contain a mix of different products, which spend different times in quality control testing. From a preliminary analysis, the histograms of the data show different clusters, with different parametrical distributions. Thus, we assumed two standard Weibull distributions in the presence of a covariate and a change point to analyze the data and to obtain standards to be used in the control of the analysts' work. Inferences and predictions are obtained using a Bayesian approach with standard existing Markov Chain Monte Carlo methods. We also assumed a mixture of two Weibull distributions as an alternative model to be fitted to the data. The great advantage of the two models proposed under a Bayesian approach is related to better simulation procedures to be used by the quality control engineers, better predictions, and the possibility of including the informative prior knowledge commonly available to industrial engineers in the choice of prior distributions.

Keywords Weibull distributions · Change point · Regression · Quality control times · MCMC methods

J. A. Achcar
Department of Social Medicine, University of São Paulo, Av. Bandeirantes, 3900-Monte Alegre, CEP 14049-900, Ribeirão Preto, São Paulo, Brazil
e-mail: [email protected]

C. L. Piratelli
Master Program in Production Engineering, University Center of Araraquara - UNIARA, Rua Carlos Gomes, 1338-Centro, CEP 14801-340, Araraquara, São Paulo, Brazil
e-mail: [email protected]

R. M. Souza (B)
Coordination of Mathematics, Federal Technological University of Paraná, Av. Alberto Carazzai, 1640-Centro, CEP 86300-000, Cornélio Procópio, Paraná, Brazil
e-mail: [email protected]

1 Introduction

Since variability is described in statistical terms, statistical methods have been used in quality improvement efforts. The first to apply statistical methods to the problem of quality control was Walter A. Shewhart of Bell Telephone Laboratories. On May 16, 1924, he introduced a technical report that became the draft of the modern control chart. In 1931, Shewhart published a book on statistical quality control, Economic Control of Quality of Manufactured Product, published by Van Nostrand in New York. Two other Bell Labs statisticians, H.F. Dodge and H.G. Romig, made a great effort to apply statistical theory to sampling inspection. The work done by these three pioneers constitutes much of what nowadays comprises the theory of statistical quality control (see, for example, [13, 15, 16, 20, 22, 26, 39, 47, 49, 57]).

Variations in service processes are observed in various practical situations in which the production rate is highly time-dependent. Within a company, the statistical modeling of these data is essential for choosing a queuing model suited to the production system that is
to be diagnosed (establishing performance indicators) or for carrying out experiments using discrete event simulation. These two approaches to operational research allow managers to make better decisions about investments, production planning, the allocation of productive capacity, and so on. In practice, two kinds of data are essential for the statistical modeling of production systems: the counts of units produced per unit of time and the service (or production) times of these units. To model the former, one possibility is to assume homogeneous or nonhomogeneous Poisson processes [10, 29, 46, 51]. For the latter, one alternative is to collect and statistically analyze the production times of the units, denoted by

t_i = b_i - a_i \qquad (1)

where i = 1, 2, ..., n denotes the i-th unit; a_i denotes the production start time for the i-th unit; b_i denotes the production end time for the i-th unit; and n is the number of units produced in a certain period of time.
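As a simple illustration of Eq. (1), the R sketch below computes production times from recorded start and end times; the timestamps are hypothetical and only show the arithmetic.

```r
# Illustration of Eq. (1): t_i = b_i - a_i, using hypothetical timestamps.
a <- as.POSIXct(c("2012-03-01 08:00:00", "2012-03-01 08:12:30"))  # start times a_i
b <- as.POSIXct(c("2012-03-01 08:07:45", "2012-03-01 08:21:10"))  # end times b_i
t <- as.numeric(difftime(b, a, units = "mins"))                   # production times t_i in minutes
t                                                                 # 7.75 and about 8.67 minutes
```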

In our study, we consider a data set of quality control tests at a food company in São Paulo state, Brazil. In this company, the times for the quality control tests vary greatly due to different factors, such as the expertise of the quality control analysts and the different kinds of products (with short and long quality testing times), among many others. This paper presents and compares two alternatives with which to analyze these data: as a first alternative, we use two standard Weibull distributions with two parameters in the presence of a change point and one covariate (see, for example, [2]); another alternative is the use of a mixture of parametrical distributions (see, for example, [11, 12, 17, 45, 53, 54, 56]).

According to Miguel [38], this study can be classified methodologically as applied and descriptive, taking a quantitative approach. Bertrand and Fransoo [7] define quantitative research in production engineering as research in which the problem is modeled in terms of variables with causal and quantitative relations. It is thus possible to quantify the behavior of the dependent variables within a specific domain, allowing researchers to make forecasts. In general, quantitative research uses mathematical, statistical, or computational (simulation) modeling; in this paper, statistical modeling is adopted. As for research techniques, bibliographical research and direct intensive observation are used, as classified by Lakatos and Marconi [30], or bibliographical research and a case study, as classified by Gil [19].

In queuing theory (QT), the statistical analysis of times—the times between the arrivals of entities at a system and the service times—is essential for choosing the appropriate queuing model for the system to be diagnosed (identifying performance indicators, such as the average use of resources, the number of entities in the queue, the average time the entities remain in the system, etc.) (see, for example, [21, 28, 31]). Several articles in the QT area deal with the analysis of arrival times (see, for example, [4, 5, 14, 33, 41, 50, 55, 59, 60]). In discrete event simulation, one of the main objectives of the analyst is to carry out experiments on models that represent real systems as a way to forecast future behavior. The identification of the probability distributions of the times (arrivals or services) and their parameters, or the identification of a mixture of parametrical distributions, is crucial to obtaining a faithful model of the real system being modeled [23, 27, 42]. The stage of identifying the probability distributions of the times and their parameters is often dealt with superficially in articles on the modeling process in discrete event simulation (see, for example, Sakurada and Miyake [48]). This paper, then, illustrates how to identify a Weibull distribution adherent to the industrial data on the times for the products arriving at a quality control department (which is what the company wants to diagnose).

Change-point models are applied in many areas. For example, medical researchers are interested in knowing whether a new therapy for leukemia produces a departure from the usual experience of a constant relapse rate after the induction of a remission (see, for example, [24, 35, 36]). Bayesian analyses of change-point models have been introduced by many authors (see, for example, [44]). A Bayesian approach for lifetime data with a constant hazard function and censored data in the presence of a change point was used by Achcar and Bolfarine [2].

2 The model

To analyze the times for the quality control tests, we considered a very popular distribution model used in industrial reliability applications: the two-parameter Weibull distribution (see [25, 40, 58]), with probability density function given by

f(t_i) = \frac{\alpha t_i^{\alpha - 1}}{\theta^{\alpha}} \exp\left[-\left(\frac{t_i}{\theta}\right)^{\alpha}\right] \qquad (2)

where t_i > 0 denotes the time for a quality control test. The parameters θ and α denote, respectively, the scale and shape parameters. Different values of α give different shapes for the distribution; that is, we have a lot of flexibility in fitting the times for quality control tests. Note that a three-parameter Weibull distribution is also commonly used in the study of reliability and breakage data; this three-parameter distribution has an additional location parameter. Since, in our case, we have no interest in this additional location parameter, and since it is usually difficult to estimate the parameters of the three-parameter Weibull distribution, we decided to assume a two-parameter Weibull distribution in the analysis of our data. The mean and variance of the Weibull distribution with density (2) are given, respectively, by

\mu = E(T) = \theta\,\Gamma\left(1 + \frac{1}{\alpha}\right), \qquad \sigma^2 = \mathrm{var}(T) = \theta^2\left[\Gamma\left(1 + \frac{2}{\alpha}\right) - \Gamma^2\left(1 + \frac{1}{\alpha}\right)\right] \qquad (3)

where Γ(·) denotes the gamma function, \Gamma(z) = \int_0^{\infty} e^{-t} t^{z-1}\,dt (see, for example, [1]).
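To illustrate Eqs. (2) and (3), the following R sketch evaluates the two-parameter Weibull density and its mean and variance; the parameter values are illustrative only (not the estimates obtained later), and R's dweibull uses the same shape/scale parametrization as Eq. (2).

```r
# Sketch of the two-parameter Weibull density of Eq. (2) and its moments, Eq. (3).
# dweibull(shape = alpha, scale = theta) matches the parametrization of Eq. (2).
weibull_mean <- function(theta, alpha) theta * gamma(1 + 1 / alpha)
weibull_var  <- function(theta, alpha) theta^2 * (gamma(1 + 2 / alpha) - gamma(1 + 1 / alpha)^2)

theta <- 0.8; alpha <- 6.5                                # illustrative values only
dweibull(c(0.5, 0.8, 1.0), shape = alpha, scale = theta)  # density f(t_i) at a few times
c(mean = weibull_mean(theta, alpha), var = weibull_var(theta, alpha))
```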

In the presence of a change point, we assume two Weibull distributions with densities given, respectively, by

f_1(t_i) = \alpha_1 \mu_{1i} t_i^{\alpha_1 - 1} \exp\left(-\mu_{1i} t_i^{\alpha_1}\right) \ \text{for } t_i \le \zeta, \qquad f_2(t_i) = \alpha_2 \mu_{2i} t_i^{\alpha_2 - 1} \exp\left(-\mu_{2i} t_i^{\alpha_2}\right) \ \text{for } t_i > \zeta \qquad (4)

where μ1i = λ1 exp(β1 xi) and μ2i = λ2 exp(β2 xi), for i = 1, ..., n. Note that we are using the reparametrization μ = 1/θ^α in the probability density function (2). In our case, the covariate xi is related to the two analysts; that is, xi = 0 for analyst 1 and xi = 1 for analyst 2. Thus, we have the following:

(a) For ti ≤ ζ (product 1), μ1i = λ1 for analyst 1 and μ1i = λ1 exp(β1) for analyst 2.
(b) For ti > ζ (product 2), μ2i = λ2 for analyst 1 and μ2i = λ2 exp(β2) for analyst 2.

Assuming an indicator variable δi = 1 if ti ≤ ζ and δi = 0 if ti > ζ, the likelihood function for θ = (α1, α2, λ1, λ2, β1, β2, ζ) is given by

L(\theta) = \prod_{i=1}^{n} \left[\alpha_1 \mu_{1i} t_i^{\alpha_1 - 1} \exp\left(-\mu_{1i} t_i^{\alpha_1}\right)\right]^{\delta_i} \left[\alpha_2 \mu_{2i} t_i^{\alpha_2 - 1} \exp\left(-\mu_{2i} t_i^{\alpha_2}\right)\right]^{1 - \delta_i} \qquad (5)
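As an illustration of Eq. (5), the R sketch below evaluates the log-likelihood of the change-point model under the reparametrization μ = 1/θ^α; it is not the authors' OpenBUGS code, and the function and argument names are ours.

```r
# Sketch of the log of the likelihood in Eq. (5) for the change-point model,
# with mu_{1i} = lambda1*exp(beta1*x_i) and mu_{2i} = lambda2*exp(beta2*x_i).
loglik_cp <- function(par, t, x, zeta) {
  alpha1 <- par[1]; alpha2 <- par[2]
  lambda1 <- par[3]; lambda2 <- par[4]
  beta1 <- par[5]; beta2 <- par[6]
  delta <- as.numeric(t <= zeta)                 # indicator delta_i of Eq. (5)
  mu1 <- lambda1 * exp(beta1 * x)                # scale for observations with t_i <= zeta
  mu2 <- lambda2 * exp(beta2 * x)                # scale for observations with t_i >  zeta
  l1 <- log(alpha1) + log(mu1) + (alpha1 - 1) * log(t) - mu1 * t^alpha1
  l2 <- log(alpha2) + log(mu2) + (alpha2 - 1) * log(t) - mu2 * t^alpha2
  sum(delta * l1 + (1 - delta) * l2)
}
```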

For a Bayesian analysis of the model, we assume the following prior distributions for the parameters α1, α2, λ1, λ2, β1, and β2:

\alpha_j \sim \mathrm{Gamma}(a_j, b_j), \qquad \lambda_j \sim U(0, c_j), \qquad \beta_j \sim N(0, d_j^2) \qquad (6)

where j = 1, 2; Gamma(a, b) denotes a gamma distribution with mean a/b and variance a/b²; U(0, c) denotes a uniform distribution on the continuous interval (0, c); and N(0, d²) denotes a normal distribution with mean zero and variance d². The hyperparameters a_j, b_j, c_j, and d_j are assumed known, j = 1, 2. For the change-point parameter, we assume a discrete uniform distribution over the observations; that is, considering ζ taking any value t_i, i = 1, ..., n,

P(\zeta = t_i) = p_i = \frac{1}{n} \qquad (7)

for i = 1, ..., n, where n is the sample size and \sum_{i=1}^{n} p_i = 1. We further assume independence among the parameters. Using the Bayes formula, we combine the likelihood function with the priors (6) and (7) to get the joint posterior distribution for θ = (α1, α2, λ1, λ2, β1, β2, ζ) given the data (see, for example, [8]). Simulated samples of the joint posterior distribution for θ = (α1, α2, λ1, λ2, β1, β2, ζ) given the data are obtained using standard Markov Chain Monte Carlo (MCMC) methods, such as the popular Gibbs sampling algorithm (see, for example, [18]) or the Metropolis–Hastings algorithm (see, for example, [9]).
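The R sketch below shows how the unnormalized log-posterior combining the likelihood (5) with the priors (6) and (7) could be evaluated; it reuses loglik_cp from the previous sketch, takes ζ to be one of the observed times, and uses hyperparameter defaults that mirror the approximately noninformative choices of Section 3.1. Any MCMC scheme (Gibbs sampling or Metropolis–Hastings) could then target this density.

```r
# Sketch of the (unnormalized) log-posterior combining Eq. (5) with the priors (6) and (7).
# loglik_cp is the function defined in the previous sketch; zeta is one of the observed t_i.
log_post <- function(par, zeta, t, x,
                     a = c(0.1, 0.1), b = c(0.1, 0.1), cc = c(5, 5), d = c(1, 1)) {
  alpha <- par[1:2]; lambda <- par[3:4]; beta <- par[5:6]
  if (any(alpha <= 0) || any(lambda <= 0) || any(lambda >= cc)) return(-Inf)
  lp <- sum(dgamma(alpha, shape = a, rate = b, log = TRUE)) +   # alpha_j ~ Gamma(a_j, b_j)
        sum(dunif(lambda, 0, cc, log = TRUE)) +                 # lambda_j ~ U(0, c_j)
        sum(dnorm(beta, 0, d, log = TRUE)) -                    # beta_j ~ N(0, d_j^2)
        log(length(t))                                          # P(zeta = t_i) = 1/n, Eq. (7)
  lp + loglik_cp(par, t, x, zeta)
}
```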

3 Case study

The food company studied is in São Paulo state, Brazil, and it produces tomato sauce, mustard, condiments, and tinned food (such as peas, corn, sweet preserves, etc.). Each product type has a specific standard guaranteed in its recipe. There are operators who are responsible for preparing each recipe and collecting samples to be analyzed by the quality control department (the company inspects every recipe so that all the products are of guaranteed quality). In the quality control department, the samples arrive in random fashion and are analyzed in the order they arrive—first come, first served—by two analysts trained to inspect any kind of product. After the physical, chemical, and organoleptic tests, the approved samples have their recipes released for the packaging lines.

Fig. 1 Histograms of times for analysts 1 and 2

The company has two production shifts, and the quality control department works 10 h a day. The data on the times taken for the analysis of the recipes by each analyst in the department were obtained with a chronometer, according to Eq. 1, over a period of 30 consecutive production days. This data set consists of the quality control times of the two analysts observed on different days, for random samples selected from all the manufactured product batches. The quality control tests for these products take different amounts of time: some short and some long. In Fig. 1, we observe that the two analysts perform the control tests differently (analyst 1 with 1,200 samples and analyst 2 with 1,504 samples). A discordant observation (a time greater than 64 min) for analyst 1 was discarded. We observe that analyst 2 has shorter times than analyst 1. In Fig. 2, we have the histogram for the combined data of both analysts (n = 2,704 observations). From the histograms in Figs. 1 and 2, we note a mixture of two distributions for the times of the quality control tests, where one proportion of the units has short times and a second proportion has long times. This paper aims to present and compare two alternatives for analyzing these data: as a first alternative, we used two standard Weibull distributions with two parameters in the presence of a change point and one covariate (see, for example, [2]); another alternative is the use of a mixture of parametrical distributions (see, for example, [11, 12, 17, 45, 53, 54, 56]). In general, companies are very interested in modeling such data sets to make better inferences and predictions, to identify factors that may increase or decrease these times, and to build more accurate simulation models for capacity planning.

Fig. 2 Histogram of times for two analysts

The step of identifying the probability distributions of the times and their parameters is often dealt with superficially in papers on discrete event simulation modeling (see, for example, [48]). In queuing theory, on the other hand, the identification of distributions and of possible covariates that affect the data has been the focus of several studies (see, for example, [4, 5, 50, 59], among others). The Weibull distribution has been used in medical data analysis (see, for example, [3]) and in industrial data analysis (see, for example, [6]) due to its great flexibility of fit (see, for example, [37, 40]).

3.1 A Bayesian analysis of the food company

To analyze the times for the quality control tests done by the two analysts at the food company under a Bayesian approach, we assume the Weibull distributions (4) in the presence of a change point ζ and the prior distributions (6) and (7), with hyperparameter values a1 = a2 = b1 = b2 = 0.1, c1 = c2 = 5, and d1 = d2 = 1 (approximately noninformative priors). For the simulation of samples from the joint posterior distribution of α1, α2, λ1, λ2, β1, β2, and ζ, we used the OpenBUGS software (the code is given in Appendix 1 at the end of the paper), free software available at http://openbugs.info/w/Downloads (see, for example, [34]). OpenBUGS only requires the specification of the distribution of the data and the prior distributions of the model parameters; the conditional posterior distributions used by the Gibbs sampling algorithm do not need to be specified, so the simulation of samples from the joint posterior distribution of interest is greatly simplified. In the simulation procedure, we initially simulated 20,000 samples of the joint posterior distribution, which were discarded to eliminate the effect of the initial values used in the iterative routine (the burn-in sample). After this burn-in period, we generated another 50,000 Gibbs samples, taking every 50th sample to obtain approximately uncorrelated samples, which gives a final simulated sample of size 1,000 used to compute the posterior summaries of interest. Convergence of the Gibbs sampling algorithm was monitored using the usual trace plots for each parameter. Table 1 shows the posterior summaries obtained, assuming the Weibull distributions in the presence of a change point.

Table 1 Posterior summaries (Weibull distributions with a change point)

Parameter   Mean        SD           95 % Credible interval
α1          6.546       0.1517       (6.259, 6.854)
α2          4.431       0.09625      (4.23, 4.607)
β1          0.1669      0.05984      (0.04588, 0.2851)
β2          0.1889      0.05519      (0.08325, 0.2942)
λ1          4.291       0.2431       (3.837, 4.763)
λ2          0.0000694   0.00001552   (0.000045, 0.000106)
k           1302        1.078        (1300.0, 1304.0)

SD standard deviation; k denotes the index of the change-point (kth) observation

From the formulas given by Eq. 3, we get the estimated Bayesian means for the quality control test times of the two analysts at the food company, considering the two different products:

(a) Product 1 (times t_i such that i ≤ 1302):

\mathrm{mean}_{11} = \frac{\Gamma(1 + 1/\alpha_1)}{\lambda_1^{1/\alpha_1}} \ \text{(analyst 1)}, \qquad \mathrm{mean}_{12} = \frac{\Gamma(1 + 1/\alpha_1)}{(\lambda_1 e^{\beta_1})^{1/\alpha_1}} \ \text{(analyst 2)} \qquad (8)

(b) Product 2 (times t_i such that i > 1302):

\mathrm{mean}_{21} = \frac{\Gamma(1 + 1/\alpha_2)}{\lambda_2^{1/\alpha_2}} \ \text{(analyst 1)}, \qquad \mathrm{mean}_{22} = \frac{\Gamma(1 + 1/\alpha_2)}{(\lambda_2 e^{\beta_2})^{1/\alpha_2}} \ \text{(analyst 2)} \qquad (9)

Assuming the Bayes estimates given in Table 1 for α1, α2, λ1, λ2, β1, and β2 (the posterior means), we get from Eqs. 8 and 9 the Bayesian estimates for the mean times of the two analysts for the two products, considering the estimated change point (the kth observation): (a) product 1 (times t_i such that i ≤ 1302): analyst 1, estimated mean11 = 0.746182; analyst 2, estimated mean12 = 0.727397; (b) product 2 (times t_i such that i > 1302): analyst 1, estimated mean21 = 7.91428; analyst 2, estimated mean22 = 7.58398. That is, shorter times are observed for analyst 2 for both products.

In Fig. 3, we have the dispersion plots of the quality control test times of the two analysts at the food company, for the two different products, against simulated samples from Weibull distributions with parameters α1 = 6.546 and μ1 = λ1 = 4.291 (product 1, analyst 1); α1 = 6.546 and μ1 = λ1 exp(β1) = 5.0704 (product 1, analyst 2); α2 = 4.431 and μ2 = λ2 = 0.0000694 (product 2, analyst 1); and α2 = 4.431 and μ2 = λ2 exp(β2) = 0.00008383 (product 2, analyst 2), obtained from the Bayesian estimates for α1, α2, λ1, λ2, β1, and β2 given in Table 1. From the dispersion plots of Fig. 3, we observe a good fit of the Weibull distributions to the quality control test times of the two analysts of the food company, for both products.
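The estimated means above can be reproduced directly from the posterior means in Table 1; the short R sketch below (function names are ours) applies Eqs. (8) and (9).

```r
# Reproducing the estimated means of Eqs. (8) and (9) from the posterior means in Table 1.
post_mean <- list(alpha1 = 6.546, alpha2 = 4.431, beta1 = 0.1669, beta2 = 0.1889,
                  lambda1 = 4.291, lambda2 = 0.0000694)
wmean <- function(mu, alpha) gamma(1 + 1 / alpha) / mu^(1 / alpha)  # E(T) with mu = 1/theta^alpha
with(post_mean, c(
  mean11 = wmean(lambda1, alpha1),               # product 1, analyst 1 (approx. 0.746)
  mean12 = wmean(lambda1 * exp(beta1), alpha1),  # product 1, analyst 2 (approx. 0.727)
  mean21 = wmean(lambda2, alpha2),               # product 2, analyst 1 (approx. 7.91)
  mean22 = wmean(lambda2 * exp(beta2), alpha2))) # product 2, analyst 2 (approx. 7.58)
```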

3.2 Using a mixture of two Weibull distributions to analyze the data

Another possibility for analyzing the times taken by the different quality control analysts to test the samples of manufactured products that arrive at the company's quality control department is to assume a mixture of two Weibull distributions. In the parametric mixture model, the component distributions come from a known parametric family with parameters θ_j, and the probability density function is given by

f(t) = \sum_{j=1}^{k} p_j f_j(t \mid \theta_j) \qquad (10)

for some mixture proportions 0 ≤ p_j ≤ 1, where p_1 + p_2 + ... + p_k = 1. If k = 2, we have a mixture of two distributions. In this case, we assume in Eq. 10 a mixture of two Weibull distributions (see, for example, [32]), given by the density

f(t_i) = p\, f_1(t_i \mid \mu_{1i}, \nu_1) + (1 - p)\, f_2(t_i \mid \mu_{2i}, \nu_2) \qquad (11)

where

f_j(t_i \mid \mu_{ji}, \nu_j) = \nu_j \mu_{ji} t_i^{\nu_j - 1} \exp\left(-\mu_{ji} t_i^{\nu_j}\right)

and the scale parameter of the Weibull distributions is given by μ_{ji} = λ_j exp(β_j X_i), j = 1, 2; i = 1, 2, ..., 2704; λ_2 = θ λ_1; ν_j is the shape parameter of the Weibull distribution; and X_i = 0 (analyst 1) and X_i = 1 (analyst 2).
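For illustration, the mixture density of Eq. (11) can be written as a short R function; the sketch below (names and argument order are ours) follows the parametrization μ_{ji} = λ_j exp(β_j X_i) with λ_2 = θ λ_1.

```r
# Sketch of the two-component Weibull mixture density of Eq. (11), with scale parameters
# mu_ji = lambda_j * exp(beta_j * x) (lambda_2 = theta * lambda_1) and shape parameters nu_j.
dweibmix <- function(t, x, p, lambda1, theta, beta1, beta2, nu1, nu2) {
  lambda2 <- theta * lambda1
  mu1 <- lambda1 * exp(beta1 * x)
  mu2 <- lambda2 * exp(beta2 * x)
  f1 <- nu1 * mu1 * t^(nu1 - 1) * exp(-mu1 * t^nu1)   # first Weibull component
  f2 <- nu2 * mu2 * t^(nu2 - 1) * exp(-mu2 * t^nu2)   # second Weibull component
  p * f1 + (1 - p) * f2
}
```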

Fig. 3 Histograms of the observed times and simulated Weibull times (four panels: product 1/analyst 1, product 1/analyst 2, product 2/analyst 1, and product 2/analyst 2)

For a Bayesian analysis of the model, we assume the following prior distributions for the parameters:

p \sim \mathrm{Beta}(1, 1), \quad \theta \sim \mathrm{Gamma}(0.1, 0.1), \quad \lambda_1 \sim \mathrm{Gamma}(1, 1), \quad \nu_j \sim \mathrm{Gamma}(0.1, 0.1), \quad \beta_j \sim N(0, 10), \quad j = 1, 2 \qquad (12)

where Beta(a, b) denotes a beta distribution with mean a/(a + b) and variance ab/[(a + b)²(a + b + 1)]. We also assume prior independence among the parameters. In the simulation procedure for samples of the joint posterior distribution, the OpenBUGS software was also used.
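The R sketch below shows how an unnormalized log-posterior for the mixture model could be evaluated, combining Eq. (11) with the priors of Eq. (12); it reuses dweibmix from the earlier sketch, the function name is ours, and we assume here that the N(0, 10) prior is specified by its variance.

```r
# Sketch of the (unnormalized) log-posterior for the mixture model of Eq. (11) with priors (12).
# dweibmix is the function defined in the earlier sketch.
log_post_mix <- function(par, t, x) {
  p <- par[1]; lambda1 <- par[2]; theta <- par[3]
  beta1 <- par[4]; beta2 <- par[5]; nu1 <- par[6]; nu2 <- par[7]
  if (p <= 0 || p >= 1 || lambda1 <= 0 || theta <= 0 || nu1 <= 0 || nu2 <= 0) return(-Inf)
  lp <- dbeta(p, 1, 1, log = TRUE) +                            # p ~ Beta(1, 1)
        dgamma(theta, 0.1, 0.1, log = TRUE) +                   # theta ~ Gamma(0.1, 0.1)
        dgamma(lambda1, 1, 1, log = TRUE) +                     # lambda_1 ~ Gamma(1, 1)
        sum(dgamma(c(nu1, nu2), 0.1, 0.1, log = TRUE)) +        # nu_j ~ Gamma(0.1, 0.1)
        sum(dnorm(c(beta1, beta2), 0, sqrt(10), log = TRUE))    # beta_j ~ N(0, 10), variance 10 assumed
  lp + sum(log(dweibmix(t, x, p, lambda1, theta, beta1, beta2, nu1, nu2)))
}
```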


In the simulation procedure, we initially simulated 5,000 samples of the joint posterior distribution, which were discarded to eliminate the effect of the initial values used in the iterative routine (the burn-in sample). After this burn-in period, we generated another 20,000 Gibbs samples, taking every 20th sample to obtain approximately uncorrelated samples, which gives a final simulated sample of size 1,000 used to compute the posterior summaries of interest. Convergence of the Gibbs sampling algorithm was monitored using the usual trace plots for each parameter. Table 2 shows the posterior summaries obtained assuming the mixture of two Weibull distributions. From the results in Table 2, since μ_{ji} = λ_j exp(β_j X_i), j = 1, 2, i = 1, 2, ..., 2704, we get the following Bayesian estimates for the scale parameters of the Weibull distributions: \hat{\mu}_1^{(1)} = \hat{\lambda}_1 = 4.265 and \hat{\mu}_2^{(1)} = \hat{\lambda}_2 = 0.000076 for analyst 1 (X_i = 0), and \hat{\mu}_1^{(2)} = \hat{\lambda}_1 \exp(\hat{\beta}_1) = 4.265 exp(0.1733) = 5.0720 and \hat{\mu}_2^{(2)} = \hat{\lambda}_2 \exp(\hat{\beta}_2) = 0.000076 exp(0.1839) = 0.000091 for analyst 2 (X_i = 1).

Table 2 Posterior summaries (mixture of two Weibull distributions)

Parameter   Mean       SD         95 % Credible interval
p           0.4807     0.0099     (0.4615; 0.5015)
1 − p       0.5193     0.0099     (0.4991; 0.5386)
β1          0.1733     0.0594     (0.0592; 0.2868)
β2          0.1839     0.0554     (0.0760; 0.2945)
λ1          4.265      0.2283     (3.8420; 4.6990)
λ2          0.000076   0.00002    (0.000046; 0.00016)
θ           0.000018   0.000005   (0.00001; 0.00003)
ν1          6.5520     0.1460     (6.2630; 6.8580)
ν2          4.3970     0.1170     (4.1950; 4.5970)

SD standard deviation

From Eq. 3, we get Bayesian estimates for the means for both analysts, given by:

(a) Analyst 1:

\widehat{\mathrm{mean}}_1^{(1)} = \frac{\Gamma(1 + 1/\hat{\nu}_1)}{\left(\hat{\mu}_1^{(1)}\right)^{1/\hat{\nu}_1}} = \frac{\Gamma(1 + 1/6.552)}{4.265^{1/6.552}} = 0.7471

for cluster 1, and

\widehat{\mathrm{mean}}_2^{(1)} = \frac{\Gamma(1 + 1/\hat{\nu}_2)}{\left(\hat{\mu}_2^{(1)}\right)^{1/\hat{\nu}_2}} = \frac{\Gamma(1 + 1/4.397)}{0.000076^{1/4.397}} = 7.8796

for cluster 2 of the mixture of two Weibull distributions.

(b) Analyst 2:

\widehat{\mathrm{mean}}_1^{(2)} = \frac{\Gamma(1 + 1/\hat{\nu}_1)}{\left(\hat{\mu}_1^{(2)}\right)^{1/\hat{\nu}_1}} = \frac{\Gamma(1 + 1/6.552)}{5.07205^{1/6.552}} = 0.7276

for cluster 1, and

\widehat{\mathrm{mean}}_2^{(2)} = \frac{\Gamma(1 + 1/\hat{\nu}_2)}{\left(\hat{\mu}_2^{(2)}\right)^{1/\hat{\nu}_2}} = \frac{\Gamma(1 + 1/4.397)}{0.000091^{1/4.397}} = 7.5633

for cluster 2 of the mixture of two Weibull distributions.

We observe very similar results for the means of the two distributions under the two model formulations (the mixture of Weibull distributions and the Weibull distributions in the presence of a change point).
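As an illustration of how the fitted mixture could feed the kind of simulation model mentioned earlier (e.g., for capacity planning), the R sketch below (not part of the original analysis; the function name is ours) draws quality control times for a given analyst using the posterior means from Table 2.

```r
# Sketch: simulating quality control times for one analyst from the fitted mixture
# (posterior means of Table 2), e.g. as input to a discrete event simulation model.
# rweibull uses scale = theta = mu^(-1/shape), so mu is converted accordingly.
sim_times <- function(n, x, p = 0.4807, lambda1 = 4.265, lambda2 = 0.000076,
                      beta1 = 0.1733, beta2 = 0.1839, nu1 = 6.552, nu2 = 4.397) {
  mu1 <- lambda1 * exp(beta1 * x); mu2 <- lambda2 * exp(beta2 * x)
  comp <- rbinom(n, 1, p)                               # 1 = short-time cluster, 0 = long-time cluster
  ifelse(comp == 1,
         rweibull(n, shape = nu1, scale = mu1^(-1 / nu1)),
         rweibull(n, shape = nu2, scale = mu2^(-1 / nu2)))
}
set.seed(1)
summary(sim_times(2000, x = 0))   # analyst 1
summary(sim_times(2000, x = 1))   # analyst 2
```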

4 Concluding remarks

In industrial applications, managers and industrial engineers are interested in modeling the times taken in tasks carried out by different operators, especially to get performance indicators for their systems. This paper analyzed the times taken in quality control tests carried out by two different analysts at a Brazilian food company. A main goal of the company is to standardize and optimize these times, providing a reference to be followed by all analysts in the company. In many cases, as with the data set from the food company considered in this paper, the batches of manufactured products arrive in random order at the company's quality control department, with a mixture of different products, and the quality control tests usually take different times. To analyze this data set, we considered popular parametrical distributions, namely the Weibull distribution in the presence of a change point and covariates, or a mixture of parametrical distributions in the presence of a covariate. Under both model approaches, we had some difficulty getting inferences using standard classical inference methods. Thus, the use of a Bayesian approach, especially considering the existing MCMC methods to simulate samples from the joint posterior distribution of interest, was a good alternative for obtaining the inferences of interest. The great advantage of the two proposed models under a Bayesian approach is related to better simulation procedures to be used by quality control engineers, better predictions, and the possibility of including the informative prior knowledge commonly available to industrial engineers in the choice of prior distributions. By using informative priors, we usually get better inference results. It is important to point out that each assumed model has its own computational costs and different interpretations that could be useful for industrial researchers and engineers in companies. In some cases, it is important to estimate the change point; in other cases, the usual mixture model (10) is better in terms of interpretation. As observed in this paper, both models give similar inferences in terms of the estimated means for each combination of product and analyst. These results could be of great interest for managers and industrial engineers.

It should also be pointed out that quality engineers usually consider using standard classical inference approaches to analyze this kind of data, assuming four data set groups. Under this approach, the maximum likelihood estimates for the parameters of the Weibull distribution, obtained using the R software [43], are given, respectively, as follows: shape = 6.73721, scale = 0.802847 (product-1-analyst-1); shape = 6.42842, scale = 0.779206 (product-1-analyst-2); shape = 4.58276, scale = 8.74355 (product-2-analyst-1); and shape = 4.30636, scale = 8.34323 (product-2-analyst-2). Note that the shape estimates are very similar to the Bayesian results obtained using the two proposed models (see Tables 1 and 2). Comparing the shape and scale parameters of the Weibull distribution between the two analysts, we have the following test results from R (a sketch of one way to obtain such fits is given after the list below):

Fig. 4 Weibull probability plots for the four groups (density panels for each product–analyst combination)

(a) Product-1-analyst-1; product-1-analyst-2: Chi-square test for equal shape parameters (p value = 0.266); Chi-square test for equal scale parameters (p value = 0.001).

(b) Product-2-analyst-1; product-2-analyst-2: Chi-square test for equal shape parameters (p value = 0.176); Chi-square test for equal scale parameters (p value
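The fits and tests above are reported as obtained with R [43], but the corresponding code is not shown in the paper; the sketch below (placeholder data, since the observed times are not reproduced here) illustrates one way such Weibull maximum likelihood estimates could be computed for one of the four groups.

```r
# Sketch of classical Weibull maximum likelihood fitting in R for one group;
# 't11' is placeholder data standing in for the observed product-1/analyst-1 times.
library(MASS)
set.seed(1)
t11 <- rweibull(500, shape = 6.74, scale = 0.80)   # placeholder data only
fitdistr(t11, densfun = "weibull")                 # returns shape and scale MLEs with standard errors
```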
