L. D. Edmonds 12/30/16
Pass/Fail Statistics Compared to Parametric Statistics

Abstract

The case considered is that in which the quality of some manufactured unit, such as an electronic part, is measured by some parameter that will be called the characterization parameter. Another parameter, called a specification limit, determines whether the unit is satisfactory. The unit is unsatisfactory if the characterization parameter exceeds the specification limit. A vendor of these units performs a qualification test by randomly selecting m units and measuring the characterization parameter of each. The vendor’s qualification requirement is satisfied if all units were found to be satisfactory. A user of these units has a different qualification requirement. The user specifies a confidence level and a percentile and uses these together with the mean and standard deviation of the characterization parameter of the test sample to calculate a number that is compared to the specification limit to determine if the user’s requirement is satisfied. The vendor did not report the measured values of the characterization parameter to the user. The vendor reported only that the test sample passed the vendor’s qualification test. The question considered is whether m is large enough so that the user can conclude, without knowing the measured values of the characterization parameter, that passing the vendor’s test is an acceptable alternative to performing a user’s test, when “acceptable alternative” is defined to mean that the probability that a randomly selected sample will pass the vendor’s test is less than or equal to the probability that an independent random sample will pass the user’s test.
1. Introduction

As a prototype example, consider the case in which electronic parts are tested for susceptibility to ionizing dose produced by some radiation environment. All devices considered are sufficiently similar in construction so that any difference in their responses to dose is entirely stochastic. The dose level of the radiation environment might have some uncertainty but the usual practice is to compensate for this uncertainty by including some conservatism to obtain a design dose level. The design dose level is regarded as given. The vendor of these parts (or any other organization but the name is not important so we will call it the vendor) performs a qualification test by randomly selecting m (often greater than 20) parts and exposing all of them to the design dose level. Some part characterization parameter (e.g., a change in some electrical property of the device) of interest is selected and an individual part fails the test if the applied dose caused this parameter to exceed some selected specification limit. Otherwise the individual part passed the test. The vendor’s qualification test is passed if and only if all m parts passed the test. This is a pass/fail test because the only information recorded for each part is whether the part passed or failed.

A user of these parts has a different qualification test. The user’s qualification test randomly selects n (often less than 10) parts and exposes each to the design dose level (see footnote 1). The characterization parameter is measured for each part. The user specifies a confidence level and a percentile (explained in Section 3) and uses these together with the test data to calculate an upper confidence bound (also explained in Section 3). The user’s qualification test is passed if and only if this calculated upper bound is less than or equal to the specification limit. This is a parametric test because the characterization parameter for each part
Footnote 1: The theory given here also applies to a variation of this test procedure that measures dose-to-failure and is explained in Section 3.
is used in the data processing. However, if the vendor’s qualification test is sufficiently stringent (by making m sufficiently large) the user will be willing to accept the vendor’s qualification test result, so the user’s qualification test becomes unnecessary. Given the user’s qualification test specifications (confidence level, percentile, and n), the question to be answered is how large m must be in order to satisfy this condition.

“Sufficiently stringent” in the preceding paragraph requires a definition. It is assumed that the user is willing to accept the vendor’s qualification test result if the vendor’s qualification test is more difficult to pass, in the sense of having a smaller probability of being passed, than the user’s qualification test. Therefore, “sufficiently stringent” is defined here to mean that the probability of a randomly selected sample passing the vendor’s qualification test is less than or equal to the probability of a randomly selected sample passing the user’s qualification test. The goal now is to determine how large m must be in order to satisfy this condition.

Section 2 gives some additional information pertinent to the pass/fail test (the vendor’s qualification test) while Section 3 explains the analysis used to process the data for the parametric test (the user’s qualification test). Section 4 compares the probability of passing the vendor’s qualification test to the probability of passing the user’s qualification test and finds the value of m needed to make the probability of passing the vendor’s qualification test less than or equal to the probability of passing the user’s qualification test. Conclusions are in Section 5. Readers who are interested only in the conclusions and not in the derivations can skip Sections 2 through 4 and proceed to Section 5.

Interested readers might want to know that a companion paper titled “Implications of Pass/Fail Test Data Relevant to a Parametric Qualification Requirement” [1] also compares pass/fail qualification tests (a.k.a., vendor’s tests) to parametric qualification tests (a.k.a., user’s tests) but uses a different approach to answer a different question. The question in the present paper is whether passing the vendor’s test is an acceptable alternative to performing a user’s test, while the question in the companion paper is whether passing the vendor’s test implies that the same sample selected by the vendor will also pass the user’s test. There is some subjectivity in the present paper because there is some subjectivity as to what constitutes an acceptable risk. The subjectivity is in the definition of “acceptable alternative.” This is defined here to mean that the probability that a randomly selected sample will pass the vendor’s test is less than or equal to the probability that an independent random sample will pass the user’s test, so a statistical analysis is needed to answer the question in the present paper. In contrast, there is no subjectivity and no statistical analysis in the companion paper. However, the present paper allows (in fact, requires) the vendor and user to use the same specification limits. In contrast, an affirmative answer to the question in the companion paper is possible only if the user’s specification limits are more lenient than the vendor’s specification limits.
2. The Pass/Fail Test

The pass/fail test (a.k.a., the vendor’s qualification test) was explained in the previous section. The only number pertaining to this test that is relevant to the analysis in Section 4 is the number, m, of tested parts. However, depending on the given information, this number might not have been explicitly given. Instead, the given information might have been a specified confidence level corresponding to a specified confidence bound for the probability that a randomly selected part can tolerate the design dose level (see footnote 2). If so, then it is necessary to calculate
Footnote 2: The population that the parts were selected from is assumed to be large enough so that the removal of test parts from this population negligibly affects the statistical properties of the remainder of the population.
m from this information. The probability interpretation of a confidence level is implied by the definition of a confidence level and is explained in [2] for pass/fail tests. An analysis in [2] applied to this definition of a confidence level produces an algorithm for calculating m from the above information (see footnote 3). The algorithm is as follows. Let CVQT × 100% be the confidence level selected for the vendor’s qualification test, expressed as a percent so CVQT is the confidence level expressed as a fraction. Let pVQT be the confidence bound (expressed as a fraction instead of a percent) selected for the vendor’s qualification test. This is a lower confidence bound (explained in [2]), corresponding to the selected confidence level, for the probability that a randomly selected part can tolerate the design dose level. The value of m corresponding to these selected numbers is the smallest integer that satisfies

p_{VQT}^{m} \le 1 - C_{VQT}   (m = smallest integer).   (2.1)
Some example calculations of m for some selected confidence levels corresponding to some selected confidence bounds are in Table I.

TABLE I: VALUES OF SAMPLE SIZE (m) FOR SEVERAL COMBINATIONS OF CONFIDENCE BOUND AND CONFIDENCE LEVEL

  Confidence Bound (%)   Confidence Level (%)    m
  90                     90                      22
  90                     95                      29
  90                     99                      44
  95                     90                      45
  95                     95                      59
  95                     99                      90
  99                     90                      229
  99                     95                      298
  99                     99                      458
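The calculation of the smallest m satisfying (2.1) is easy to automate. The following minimal Octave sketch (in the style of the routines in the appendix) does so; the parameter values shown are simply the first row of Table I and the variable names are illustrative.

% Smallest m satisfying (2.1): pVQT^m <= 1 - CVQT.
pVQT=0.90;                        % confidence bound, expressed as a fraction
CVQT=0.90;                        % confidence level, expressed as a fraction
m=ceil(log(1-CVQT)/log(pVQT));    % both logarithms are negative, so the ratio is positive
printf("m = %d\n", m);            % prints m = 22 for these inputs, matching Table I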
Let p denote the actual probability that a randomly selected device can tolerate the design dose level. This probability is unknown because the only available information is pass/fail test data on a limited test sample consisting of m devices (if p were known, there would be no need for confidence levels) but it is still informative to express the probability, denoted PVQT, of passing the vendor’s qualification test in terms of p. It is a special case (m successes in m tries) of the binomial distribution given by

P_{VQT} = p^{m} .   (2.2)
Footnote 3: The analysis in [2] considers a more general case. The case considered here is obtained from this more general case by letting x = 0, y = 0, and n = 1, where x, y, and n are notations used in [2] (which are not the same as the notations used here). These special values simplify the algorithm in [2]. The simplified algorithm becomes the algorithm given here after changing notation so that the γ and v in [2] are denoted (respectively) CVQT and pVQT.
3. The Parametric Test

The theory in this report applies to either of two versions of the parametric test (a.k.a., the user’s qualification test). Both versions randomly select n parts.
A. First Version

The first version exposes all n parts to the design dose level and measures the part characterization parameter for each part. The characterization parameter is a random variable and there is one for each of the n tested parts. From the above set of random variables we construct another set of random variables, one for each of the n tested parts, denoted X1, …, Xn. It is denoted X when there is only one randomly selected part and is required to satisfy two conditions. One condition is imposed so that the theory of upper confidence bounds in [3] can be applied directly to the analysis given here. The condition is that greater radiation degradation produces a larger value of X. For example, if the characterization parameter is the threshold voltage shift in a field-effect transistor, the polarity (i.e., sign convention) must be selected so that this quantity increases with increasing radiation degradation. The second condition is that the probability distribution for X can be adequately approximated by some normal distribution. If the threshold voltage shift (for example) has a normal distribution, then this shift (with the above stated sign convention) can serve as the random variable X. If the threshold voltage shift does not have a normal distribution but the logarithm of it (with the above stated sign convention) does, then this logarithm can serve as the random variable X.

The analysis in Section 4 will not require that we identify the function of the characterization parameter that is best described by a normal distribution. The analysis will require only that there exists some function of the characterization parameter, which defines the random variable X, that increases with increasing radiation degradation and is adequately approximated by a normal distribution. This function evaluated at the specification limit for the characterization parameter produces a specification limit of X that is denoted xSL. The characterization parameter that is measured in a given experiment for each of the n tested parts is used to construct a set of measured values of X denoted x1, …, xn (see footnote 4). The sample mean, denoted x-bar, and the sample standard deviation, denoted s, are then calculated from

\bar{x} = \frac{1}{n} \sum_{k=1}^{n} x_k , \qquad s^2 = \frac{1}{n-1} \sum_{k=1}^{n} (x_k - \bar{x})^2 .   (3.1)
A percentile (also called a target value in [3]) and a confidence level are selected and used to calculate a number t using the method in [3] (see footnote 5), which is also explained in Section 4. Some example values of t are in Table II. The user’s qualification test is passed if and only if

\bar{x} + t\,s \le x_{SL} .   (3.2)
Footnote 4: We will use upper-case letters to denote random variables and lower-case letters to denote specific values of the random variable.

Footnote 5: We are using the notation in [3]. Another popular notation writes KTL instead of t.
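To make the first version of the test concrete, the following minimal Octave sketch applies (3.1) and (3.2) to a hypothetical data set. The measured values, the specification limit xSL, and the value t = 3.40 (taken from Table II for n = 5, the 95th percentile, and 90% confidence) are assumptions chosen only for illustration.

x=[1.2, 0.9, 1.5, 1.1, 1.3];   % hypothetical measured values x1, ..., xn of X
xSL=3.0;                       % assumed specification limit for X
t=3.40;                        % Table II entry for n = 5, 95th percentile, 90% confidence
xbar=mean(x);                  % sample mean, Eq. (3.1)
s=std(x);                      % sample standard deviation (n-1 in the denominator), Eq. (3.1)
passed=(xbar+t*s <= xSL)       % user's qualification criterion, Eq. (3.2); 1 means passed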
The probability, denoted PUQT, of passing the user’s qualification test is the probability of satisfying (3.2) when the specific measured values x-bar and s are replaced by the random variables X-bar and S that are expressed in terms of the random variables X1, …, Xn according to

\bar{X} = \frac{1}{n} \sum_{k=1}^{n} X_k , \qquad S^2 = \frac{1}{n-1} \sum_{k=1}^{n} (X_k - \bar{X})^2 .   (3.3)
This probability can be expressed as
P_{UQT} = P(\bar{X} + t\,S \le x_{SL}) = 1 - P(\bar{X} + t\,S > x_{SL}) , or

1 - P_{UQT} = P(\bar{X} + t\,S > x_{SL})   (3.4)
where P is the probability function. Note that a larger value of t (produced by selecting a larger confidence level and/or a larger percentile in Table II) produces a larger value for the right side of (3.4) and therefore a smaller value for PUQT. In other words, a larger value of t makes the user’s qualification test more stringent in the sense of being more difficult to pass.
B. Second Version

The second version of the parametric qualification test exposes each of the n parts to ionizing dose that is applied in increments with part functionality monitored after each increment. This is continued until the part fails (the characterization parameter exceeds the specification limit) and the dose-to-failure is recorded for each of the n parts. This version differs from the first version in that the random variable is dose-to-failure. The random variable denoted X is some function of dose-to-failure selected to satisfy two conditions. The first condition is that a smaller value of X is a more desirable value. Dose-to-failure does not satisfy this condition because a larger dose-to-failure is the more desirable part characterization. However, any decreasing linear function of dose-to-failure does satisfy this condition. Also, any positive constant divided by dose-to-failure also satisfies this condition. The second condition is that the distribution for X can be adequately approximated by a normal distribution. If dose-to-failure has a normal distribution, then so does any linear function of it, and a decreasing linear function will satisfy both conditions. If dose-to-failure does not have a normal distribution but the logarithm of dose-to-failure does, then the logarithm of a positive constant divided by dose-to-failure (which is a linear function of the logarithm of dose-to-failure) will have a normal distribution and will also satisfy the first condition.

The analysis in Section 4 will not require that we identify the function of dose-to-failure that is best described by a normal distribution. The analysis will require only that there exists some strictly decreasing function of dose-to-failure, which defines the random variable X, that is adequately approximated by a normal distribution. This function evaluated at the design dose level produces a design value of X that is denoted xDV. The only distinctions between the first and second versions of the parametric qualification tests are the interpretation of the random variable X and the use of a design value xDV versus a specification limit xSL. Otherwise the theory is identical for both versions. In
particular, all equations derived in this report can be applied to the second version by simply replacing xSL with xDV.

TABLE II: VALUES OF t FOR SEVERAL COMBINATIONS OF SAMPLE SIZE, TARGET NUMBER, AND CONFIDENCE LEVEL

  Number of Tested Devices   Target X (percentile)   Confidence Level (%)    t
  3                          90                      90                      4.26
  3                          90                      95                      6.16
  3                          90                      99                      14.0
  3                          95                      90                      5.31
  3                          95                      95                      7.66
  3                          95                      99                      17.4
  3                          99                      90                      7.34
  3                          99                      95                      10.6
  3                          99                      99                      23.9
  5                          90                      90                      2.74
  5                          90                      95                      3.41
  5                          90                      99                      5.36
  5                          95                      90                      3.40
  5                          95                      95                      4.20
  5                          95                      99                      6.58
  5                          99                      90                      4.67
  5                          99                      95                      5.74
  5                          99                      99                      8.94
  10                         90                      90                      2.07
  10                         90                      95                      2.35
  10                         90                      99                      3.05
  10                         95                      90                      2.57
  10                         95                      95                      2.91
  10                         95                      99                      3.74
  10                         99                      90                      3.53
  10                         99                      95                      3.98
  10                         99                      99                      5.07

Entries under t are rounded to the number of digits shown.
4. A Comparison between PVQT and PUQT
This section will use the vocabulary and notation used in the first version (Section 3A) of the parametric test. The analogous equations for the second version are obtained by replacing xSL with xDV as explained in Section 3B. Equations and conclusions that do not contain xSL apply to the second version by simply changing vocabulary so that “characterization parameter” is replaced by “dose-to-failure”.
A. Expressing PUQT in Terms of p
PVQT was expressed in terms of the (unknown) p via (2.2). If we can also express PUQT in terms of p then we can compare PVQT to PUQT. Therefore, the first step of this analysis is to express PUQT in terms of p. We start by defining some notation. The probability function P1 refers to a sample space in which only one part is tested. We continue to use P for the sample space in which n parts are tested. The cumulative distribution function for the standard normal distribution is denoted N and is given by

N(K) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{K} \exp\left(-\frac{1}{2} z^2\right) dz .   (4.1)
The function denoted Tn (see Eq. (A11) in [3]) is given by

T_n(t, K) = 1 - \sqrt{\frac{n}{2\pi}} \int_{0}^{\infty} X_{n-1}^{2}\!\left(\frac{(n-1)\,v^2}{t^2}\right) \exp\left(-\frac{n}{2}(v - K)^2\right) dv   (4.2)
where X_n^2 is the cumulative chi-squared distribution (the subscript is the number of degrees of freedom). There are various ways to express this function. One way (see Eq. (38) in [3]) is

X_n^2(\lambda) = \frac{1}{(2\pi)^{n/2}} \int \cdots \int_{\sum_{k=1}^{n} x_k^2 \le \lambda} \exp\left(-\frac{1}{2} \sum_{k=1}^{n} x_k^2\right) dx_1 \cdots dx_n .   (4.3)
A numerical routine for evaluating Tn(t, K) is given in the appendix. Having defined the notation, we can now use a conclusion derived in [3] (see Eqs. (43c) and (58) in [3]). This conclusion states that if a parameter K associated with some given number xT is selected to satisfy
P_1(X \le x_T) = N(K)   (4.4)

then

P(\bar{X} + t\,S > x_T) = T_n(t, K) .   (4.5)
The usual convention for obtaining confidence levels is to select xT in (4.4) and (4.5) to be a known percentile, having no relationship with the specification limit. A known percentile means that the left side of (4.4) is known, instead of xT being known. This allows K to be solved for via (4.4), and then (4.5) gives the left side of (4.5). The confidence level is defined to be the left side of (4.5). To be more specific, let CUQT × 100% be the confidence level selected for the user’s qualification test, expressed as a percent so CUQT is the confidence level expressed as a fraction. Let pUQT × 100% be the percentile selected for the user’s qualification test, so pUQT is the corresponding fraction. If we now let xT satisfy the condition that the left side of (4.4) is equal to pUQT then the left side of (4.5) becomes CUQT. This result can be written as
C_{UQT} = T_n\left(t, N^{-1}(p_{UQT})\right)   (4.6)
where N^{-1} is the inverse of the function N. This equation was used to solve for t and produced the values in Table II. A different application of (4.4) and (4.5) is one in which xT is taken to be xSL. For this case it is xT, instead of the left side of (4.4), that is the known quantity. However, the left side of (4.4) can be expressed in terms of the unknown p. If we let xT = xSL then the left side of (4.4) is the probability that a randomly selected device can tolerate the design level dose. This probability is p. Therefore this application of (4.4) and (4.5) can be written as
P(\bar{X} + t\,S > x_{SL}) = T_n\left(t, N^{-1}(p)\right)   (4.7)

and (3.4) now becomes

P_{UQT} = 1 - T_n\left(t, N^{-1}(p)\right) .   (4.8)
The t in (4.8) is obtained by solving (4.6) for t (or looking it up in Table II) so (4.8) expresses PUQT as a function of p.
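If a needed t is not in Table II, equation (4.6) can be solved numerically. The following minimal Octave sketch is one way to do it; it assumes the function file Tn.m from the appendix is on the path, and the fzero bracket is an arbitrary choice intended to cover values of t comparable to those in Table II.

n=5; pUQT=0.95; CUQT=0.90;      % example parameters (a Table II entry)
K=norminv(pUQT);                % K = N^{-1}(pUQT)
f=@(tt) Tn(tt,K,n)-CUQT;        % the root of f is the t defined by (4.6)
t=fzero(f,[0.1,50]);            % assumed bracket; widen it if fzero reports no sign change
printf("t = %.3f\n", t);        % approximately 3.4, consistent with the Table II entry of 3.40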
B. Comparing PVQT to PUQT
When PVQT and PUQT are calculated from (2.2) and (4.8), they are expressed as functions of p (an unknown). To emphasize this, we will write them as PVQT(p) and PUQT(p). It was stated in Section 1 that passing the vendor’s qualification test is considered to be an acceptable alternative to performing a user’s qualification test if PVQT ≤ PUQT, or PVQT/PUQT ≤ 1. One way to ensure that this condition is satisfied is by selecting m to satisfy PVQT(p)/PUQT(p) ≤ 1 for all p satisfying 0 < p < 1. We will call this the strong requirement for passing the vendor’s qualification test to be an acceptable alternative to performing a user’s qualification test. So the strong requirement is
P_{VQT}(p) / P_{UQT}(p) \le 1 when 0 < p < 1   (strong requirement).   (4.9)
However, a more lenient requirement, called the weak requirement, is also reasonable. This states that passing the vendor’s qualification test is an acceptable alternative to performing a user’s qualification test if either PVQT/PUQT ≤ 1 or p ≥ pUQT. This requires PVQT/PUQT ≤ 1 only for p satisfying 0 < p < pUQT because the second condition is satisfied for larger p. Therefore the weak requirement, for passing the vendor’s qualification test to be an acceptable alternative to performing a user’s qualification test, is
P_{VQT}(p) / P_{UQT}(p) \le 1 when 0 < p < p_{UQT}   (weak requirement).   (4.10)
Note that a necessary condition to satisfy either requirement is
P_{VQT}(p_{UQT}) / P_{UQT}(p_{UQT}) \le 1   (necessary for either requirement).

By combining (4.8) with (4.6) and then using (2.2), we can write this necessary condition as

p_{UQT}^{m} \le 1 - C_{UQT}   (necessary for either requirement).   (4.11)
The question that will now be considered is whether the necessary condition (4.11) is also a sufficient condition to satisfy the weak requirement (4.10). If (4.10) is satisfied when selecting m to be the smallest integer satisfying (4.11) (which produces an approximate equality in (4.11)) then (4.10) will also be satisfied for any larger m because a larger m produces a smaller value for PVQT(p). The question now becomes whether selecting m to be the smallest integer satisfying (4.11) is a sufficient condition to satisfy the weak requirement (4.10). We can avoid some difficult analysis by answering this question one example at a time. Consider the example given by n = 5, pUQT = 0.95 and CUQT = 0.90, so t = 3.40 (from Table II). The smallest m satisfying (4.11) for this example is the m satisfying (2.1) when pVQT = 0.95 and CVQT = 0.90 and is listed in Table I as m = 45. From this information we can calculate PVQT(p)/PUQT(p) using (2.2) and (4.8) to obtain the plot in Fig. 1 (a plotting routine is in the appendix). The horizontal axis is 1 ‒ p instead of p so that a logarithmic scale will place the most resolution where it is most needed. The dashed vertical line is the line at which p = pUQT and the dashed horizontal line is at unity on the vertical axis. The weak requirement (4.10) is satisfied if and only if the plotted curve is at or below the dashed horizontal line at all points to the right of the dashed vertical line. We see from Fig. 1 that this is true for this example. Repeating this plotting routine for other parameter combinations (number of tested devices, percentile, and confidence level) leads to the same conclusion for each parameter combination listed in Table II. We conclude that (4.11) is a sufficient condition to satisfy the weak requirement for each parameter combination in Table II. Unfortunately, (4.11) does not have universal applicability when considering parameter combinations not listed in Table II. For example, suppose n = 5, pUQT = 0.5 and CUQT = 0.90, so t = 0.686. The smallest integer m satisfying (4.11) is 4, but we see from Fig. 2 that the corresponding curve is not below the dashed horizontal line at all points to the right of the dashed vertical line. However, m = 5 will satisfy this condition for this example, as seen in Fig. 2. Therefore, m = 5 is a sufficient condition to satisfy the weak requirement for this example (it also satisfies the strong requirement) but m = 4 is not. This plotting test is recommended for parameter combinations that are not listed in Table II.
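The graphical check can also be supplemented numerically. The following minimal Octave sketch (again assuming the Tn.m routine from the appendix is available) evaluates the ratio in (4.10) on a grid of p values below pUQT for the Fig. 1 example; the grid range and spacing are arbitrary choices and can be widened or refined as needed.

n=5; pUQT=0.95; CUQT=0.90; t=3.40; m=45;   % the Fig. 1 example
p=linspace(0.5,pUQT,200);                  % part of the range 0 < p < pUQT in (4.10)
ratio=zeros(size(p));
for i=1:length(p)                          % Tn does not accept vector arguments
  PVQT=p(i)^m;                             % Eq. (2.2)
  PUQT=1-Tn(t,norminv(p(i)),n);            % Eq. (4.8)
  ratio(i)=PVQT/PUQT;
endfor
weak_ok=all(ratio<=1)                      % equals 1 on this grid for the Fig. 1 example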
[Figure 1: log-scale plot of PVQT/PUQT versus 1 − p, annotated with n = 5, pUQT = 0.95, CUQT = 0.90, t = 3.40, m = 45.]

Fig. 1: This example is given by n = 5, pUQT = 0.95, and CUQT = 0.90, so t = 3.40. The smallest integer m satisfying (4.11) is 45. This m satisfies the weak requirement because the curve is below the dashed horizontal line at all points to the right of the dashed vertical line.
[Figure 2: log-scale plot of PVQT/PUQT versus 1 − p, annotated with n = 5, pUQT = 0.50, CUQT = 0.90, t = 0.686, showing curves for m = 4 and m = 5.]

Fig. 2: This example is given by n = 5, pUQT = 0.50, and CUQT = 0.90, so t = 0.686. The smallest integer m satisfying (4.11) is 4. This m does not satisfy the weak requirement because the curve is above the dashed horizontal line at some points to the right of the dashed vertical line. However, m = 5 does satisfy the requirement.
5. Conclusions

Recall that the vendor’s qualification test (a pass/fail test) randomly selects m parts and exposes all of them to the design dose level. Some part characterization parameter (e.g., a change in some electrical property of the device, such as the threshold voltage shift in a field-effect transistor) of interest is selected and an individual part fails the test if the applied dose caused this parameter to exceed some selected specification limit. Otherwise the individual part passed the test. The qualification test is passed if and only if all m parts passed.

The user’s qualification test (a parametric test) randomly selects n parts. One version of this test exposes all parts to the design dose level and measures the characterization parameter for each. Another version measures the dose-to-failure for each part. For either version, the user specifies a confidence level, CUQT × 100%, and a percentile, pUQT × 100%, and uses these selected numbers to calculate a parameter t via (4.6). This t together with the test data are used to calculate the left side of (3.2). The first version of the user’s qualification test is passed if and only if the inequality in (3.2) is satisfied. The same condition applies to the second version by replacing xSL with xDV in (3.2). All discussions after this point apply to both versions.

The probability that a randomly selected part can survive the design level dose is denoted p. Confidence levels are relevant to the analysis because p is regarded as unknown and the only available information about it is from one or the other qualification tests. The weak requirement, for passing the vendor’s qualification test to be an acceptable alternative to performing a user’s qualification test, is defined to be a condition that m must satisfy in order to ensure that either the vendor’s qualification test is more stringent (has a smaller probability of being passed) than the user’s qualification test, or p ≥ pUQT. This is in contrast to the strong requirement, which requires the vendor’s qualification test be at least as stringent as the user’s qualification test regardless of the value of p. The weak requirement is much easier to satisfy than the strong requirement so the focus is on the weak requirement.

The conclusion applicable to all combinations of parameters (number of tested devices, percentile, and confidence level) listed in Table II is as follows. The weak requirement will be satisfied if the vendor’s choice for m was any integer satisfying

p_{UQT}^{m} \le 1 - C_{UQT} .   (5.1)
This applies to any integer m satisfying (5.1), including the smallest such integer. The smallest integer can be found in Table I (see footnote 6). In other words, if the pass/fail test used a number of parts corresponding to the same percentile and confidence level as the parametric test, the pass/fail test will be at least as stringent as the parametric test for p up to pUQT. Some parameter combinations that are not listed in Table II were found to be exceptions to the above discussion in the sense that selecting m to satisfy (5.1) does not ensure that the weak requirement is satisfied. A graphical method was provided for determining whether a particular parameter combination not listed in Table II is or is not an exception. This graphical method can also determine the value of m needed to satisfy either the weak requirement or the strong requirement for a general case, including those cases in which (5.1) might not apply.
Footnote 6: When using Table I for this purpose, the first column heading should be changed to pUQT × 100% and the second column heading should be changed to CUQT × 100%.
Appendix: Numerical and Plotting Routines

Routines for constructing plots, such as shown in Fig. 1, are given here. These routines run in the Octave (a GNU package that closely resembles Matlab) platform. The first listing below is a function file that calculates the integrand in the integral in (4.2). Because this is a function file, the file name must be Tn_integrand.m. The second listing is a function file that calculates the right side of (4.2), which is Tn(t, K). Because this is a function file, the file name must be Tn.m. This function file allows users to calculate numerical values for Tn(t, K) to be used in equations such as (4.6) and (4.8). The third listing is a “.m” file that can be given an arbitrary name and it produces plots such as shown in Fig. 1. It assigns values to n (denoted n in the file), pUQT (denoted perc in the file), t (denoted t in the file) and m (denoted m in the file). The current assignments produced Fig. 1. These assignments can be changed by editing the file to change the assignments within the file. Another option is to comment out one or more assignment statements and enter the assignments in the command window. Note that the line in this file that calls the function Tn is in a loop. This is necessary because the integration routine contained within Tn will not accept vectors as input arguments.
Tn_integrand.m

function rho=Tn_integrand(v,t,K,n);
  % Integrand of the integral in (4.2).
  argx2=(n-1).*v.*v./(t.*t);            % argument of the chi-squared CDF
  x2=chi2cdf(argx2,n-1);                % cumulative chi-squared with n-1 degrees of freedom
  rho=x2.*exp(-0.5.*n.*(v-K).*(v-K));   % multiply by the exponential factor in (4.2)
endfunction
Tn.m

function T=Tn(t,K,n);
  % Right side of (4.2): Tn(t, K) for sample size n.
  c=sqrt(n/(2*pi));
  f=@(v) Tn_integrand(v,t,K,n);
  [q,ier,nfun,err]=quad(f,0,Inf);   % numerical integration from 0 to infinity
  T=1-c.*q;
endfunction
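As a quick consistency check of this routine against (4.6) and Table II, one can evaluate, for example, the n = 5, 95th-percentile, 90%-confidence case:

Tn(3.40, norminv(0.95), 5)   % expected to return approximately 0.90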
% Plotting script (arbitrary file name). The current assignments produce Fig. 1.
n=5; perc=0.95; t=3.40; m=45;
% Build a grid of 150 p values with finer spacing as p approaches 1.
for i=1:50
  p(i)=i*0.018;
endfor
for i=1:50
  p(50+i)=0.9+i*0.0018;
endfor
for i=1:50
  p(100+i)=0.99+i*0.00018;
endfor
% Evaluate the ratio at each grid point (Tn will not accept vectors as input arguments).
for i=1:150
  K=norminv(p(i));
  PVQT(i)=p(i)^m;           % Eq. (2.2)
  PUQT(i)=1-Tn(t,K,n);      % Eq. (4.8)
  ratio(i)=PVQT(i)/PUQT(i);
endfor
% Dashed reference lines: vertical at 1-p = 1-perc and horizontal at unity.
Lin1x=[1-perc,1-perc]; Lin1y=[0,1];
Lin2x=[1-perc,1]; Lin2y=[1,1];
semilogx(1-p,ratio, "k-", "linewidth", 2);
hold on;
semilogx(Lin1x,Lin1y, "k--", "linewidth", 2);
semilogx(Lin2x,Lin2y, "k--", "linewidth", 2);
hold off;
ylabel("PVQT/PUQT");
xlabel("1-p");
axis([0.001,1,0,3]);
References

[1] L. D. Edmonds and L. Z. Scheick, “Implications of Pass/Fail Test Data Relevant to a Parametric Qualification Requirement,” technical report, December 30, 2016.

[2] L. D. Edmonds, “Confidence Level for y Defective Units Out of n Units Corresponding to an Observed Count of x Defective Units Out of m Units,” technical report, September 24, 2016 (online: https://www.researchgate.net/publication/308677839_Confidence_Level_for_y_Defective_Units_Out_of_n_Units_Corresponding_to_an_Observed_Count_of_x_Defective_Units_Out_of_m_Units).

[3] L. D. Edmonds, “An Elementary Derivation of Confidence Intervals and Outlier Probability,” technical report, May 27, 2015 (online: https://www.researchgate.net/publication/278785933_An_Elementary_Derivation_of_Confidence_Intervals_and_Outlier_Probability).