International Journal of Reliability, Quality and Safety Engineering, Vol. 17, No. 3 (2010) 245–260
© World Scientific Publishing Company. DOI: 10.1142/S0218539310003780
ON THE DEVELOPMENT OF UNIFIED SCHEME FOR DISCRETE SOFTWARE RELIABILITY GROWTH MODELING
P. K. KAPUR∗,‡, ANU G. AGGARWAL∗, OMAR SHATNAWI† and RAVI KUMAR∗
∗Department of Operational Research, University of Delhi, Delhi-110007, India
†Department of Computer Science, Al al-Bayt University, Mafraq 25113, Jordan
[email protected]
Received 18 December 2009
Revised 15 April 2010

The importance of software reliability growth models for controlling the testing process and for quantitative assessment of software reliability is a well-established fact. However, difficulties created by their underlying assumptions and questions about their relevance and validity in real testing environments have made the selection of an appropriate model an uphill task. Recently, new dimensions have been added to software reliability engineering with the development of unified modeling schemes. These schemes have proved seminal in the development of the general theory, partly because of their simplicity and mathematical tractability. In this paper, we propose a unified scheme for discrete software reliability growth modeling using the fault detection/correction rate function. Standard probability distributions have been used to model the fault correction and detection times. Initially, we formulate the unified scheme for the case where fault correction immediately follows failure observation, and later we extend it to the case where removal is a two-stage process, namely failure observation followed by fault detection/correction. The use of the fault detection/correction rate, also known as the hazard rate, to represent stage-wise removal of faults during testing highlights the importance of the proposed framework. Convolutions of probability distribution functions have been used to represent the stage-wise removal of faults, i.e. failure observation followed by fault detection/correction. A few discrete models have been derived using the proposed methodology, and parameter estimation has been carried out on two real software failure datasets. The results obtained are fairly accurate and quite encouraging.

Keywords: Software reliability; failure observation; fault correction; hazard rate function; discrete probability distribution functions.
1. Introduction

The role of software is expanding rapidly in many aspects of modern life, ranging from critical infrastructures to workplace automation, productivity enhancement, education, entertainment, etc. Given the potentially costly impact of software
failures for many of these applications, it is important to have sound methods of developing reliable software as well as accurate methods of quantitatively certifying software quality. Software reliability engineering attempts to provide quantitative measures of software reliability by predicting the number of faults remaining, the number of failures expected in a given time, how much time or effort is needed to find a specified number of faults, or the probability of operating without failure in a specified time interval. Software reliability analysis is performed at various stages of the software development process in an attempt to evaluate whether the software reliability requirements have been met. The analysis results not only provide feedback to the designers but also become a quantitative metric of software quality. There are two activities related to software reliability analysis: estimation and prediction. In either activity, statistical inference techniques and reliability models are applied to failure data collected during software testing or during its operation to measure software reliability.

The software reliability growth model (SRGM) is a tool used to evaluate the software quantitatively, track test and delivery status, and observe changes in reliability performance. Numerous SRGMs, which relate the number of faults corrected to execution time or the number of test cases run, have been discussed in the literature.11,13,17 These SRGMs have been developed under various sets of assumptions regarding the fault removal process, the skills of the testing team, the possibility of imperfect debugging and fault generation, the distinction between fault detection and correction processes, etc. But no SRGM can be claimed to be the best, as the physical interpretation of testing and debugging changes due to numerous factors, e.g. the design of test cases, defect density, efficiency of the testing team, availability of testing resources, etc. The plethora of SRGMs makes model selection a tedious task.

Of late, a paradigm shift has been observed in research in this field. Earlier research efforts concentrated on proposing new models along with comparisons to some existing models, but no model could be recommended as universally best. To overcome this difficulty, generalized modeling approaches have been proposed recently by many researchers. These schemes have proved very successful for obtaining several existing SRGMs with a single methodology and thus provide an insightful basis for the study of general models without making many assumptions. The work in this area started as early as the 1980s with Shanthikumar18 proposing a generalized birth process model. Miller2 proposed a generalized modeling framework based on order statistics and record values and showed how NHPP-based models can be represented by probability distribution functions of fault-detection times. Gokhale et al.4 used a testing coverage function to present a similar unified framework. Dohi et al.3 proposed a unification method for NHPP models describing test input and program path searching times stochastically by an infinite-server queuing theory. They assumed that test cases are executed according to a homogeneous Poisson process (HPP) with parameter λ. The fault correction process was taken to be the same as an M/G/∞ queuing process, i.e. the arrivals are assumed to be Poisson and the
distribution of service times is general with an infinite number of servers. A recent unification scheme (due to Kapur et al.7) is based on the cumulative distribution function of the detection/correction times and incorporates the concept of imperfect debugging in the generalized modeling framework.

However, the generalized modeling frameworks proposed earlier are restricted to continuous-time SRGMs. Though continuous-time models are mathematically amenable, the importance of discrete modeling cannot be overlooked. The failure datasets collected during the testing phase of the SDLC are mainly discrete in nature. Therefore, it is advisable that the model used for reliability assessment also be discrete in nature so as to obtain an accurate picture of the software reliability growth process. Discrete modeling also yields good results with a small amount of data.10,12,19 The initial effort towards generalization of discrete models is due to Huang et al.,5 who discussed a unified scheme of discrete NHPP models by applying the concepts of weighted arithmetic, weighted geometric and weighted harmonic means. Okamura et al.16 discussed a unified parameter estimation method based on the expectation-maximization (EM) principle and investigated the effectiveness of the estimation method based on the EM algorithm by comparing it with Newton's method.

In this paper, we discuss a unified framework for discrete software reliability growth models in which the software failure-occurrence or fault correction times follow a discrete-time probability distribution and the initial fault content of the software is assumed to be a Poisson random variable. Using our generalized framework, we are able to express the mean value function of the fault correction process (FCP) in terms of a discrete probability distribution. The fault correction rate per remaining fault (or hazard rate function) is written in terms of the fault correction probability mass function and the complementary distribution function. The proposed scheme is first developed for the case when fault correction is immediate upon failure observation, i.e. the FCP is a single-stage process. It is easy to handle and helps in deriving various models using the Geometric, Discrete Rayleigh (D-Rayleigh), D-Weibull, D-Logistic and D-Erlang k-type distributions. After that, we extend the proposed framework to the case when the FCP is a two-stage process: failure observation followed by fault correction. Two-fold discrete convolutions are used to represent the time differentiation between the failure observation and fault correction processes. In this case, the hazard rate function is also written in terms of the two-fold convolution of two discrete distributions. We have also performed data analysis using real-life software testing and debugging data for the FCP models derived for different discrete standard probability distribution functions. The parameter estimation has been done with statistical software packages using non-linear regression.

The rest of this paper is organized as follows. Section 2 lists the notations used. In Sec. 3, we discuss the unified scheme for discrete software reliability growth modeling for a one-stage fault correction process, followed by the derivation of several discrete NHPP-based SRGMs using different discrete probability distribution functions in Sec. 4. Section 5 includes numerical
examples for the proposed models using actual testing and debugging data sets. In Sec. 6, we present a unified modeling framework for the two-stage FCP. Finally, the paper concludes with a brief summary and directions for future research in Sec. 7.

2. Notations

m(n): mean value function (MVF), the expected number of faults corrected by n test cases.
a: expected number of initial faults lying dormant in the software when testing starts.
λ(n): intensity function of the fault correction process.
F(n), G(n): probability distribution functions of the number of test cases run during failure observation and fault correction, respectively.
f(n), g(n): probability mass functions of the number of test cases run during failure observation and fault correction, respectively.
3. Unified Scheme for Discrete Software Reliability Growth Modeling for a One-Stage Fault Correction Process

Here, we discuss a unified framework for discrete software reliability growth modeling in which the software failure-occurrence or fault correction times follow a discrete-time probability distribution. The unified scheme is based on the following assumptions.

3.1. Basic assumptions

(1) During testing, test cases are executed, and the results obtained are compared with the desired output. If there is a mismatch, a failure is said to have occurred.
(2) Each time a software failure is observed, correction effort starts immediately, the underlying fault is corrected perfectly, and no new faults are introduced in the software.
(3) There is no time delay between the observation of the failure and the correction of the underlying fault, i.e. as and when a failure is observed, the underlying fault is immediately corrected. Therefore, the fault correction process is a one-stage process.
(4) Fault correction times are i.i.d. random variables with probability distribution function G(n).
(5) The initial number of faults in the software system, X_0 at n = 0, is a Poisson random variable with mean a.

3.2. Model development

Let the counting process {X(n), n ≥ 0} represent the cumulative number of failures observed/faults corrected by n test cases, and let the testing begin
at n = 0. Then the distribution of X(n) is given by

$$\Pr\{X(n) = m\} = \sum_{j=0}^{\infty} \Pr\{X(n) = m \mid X_0 = j\}\,\Pr\{X_0 = j\} \qquad (1)$$

Here it may be noted that the conditional probability Pr{X(n) = m | X_0 = j} is zero for j < m. For j ≥ m it is given by

$$\Pr\{X(n) = m \mid X_0 = j\} = \binom{j}{m}\,(G(n))^{m}\,(1 - G(n))^{j-m} \qquad (2)$$

Here we have assumed that the initial fault content of the software, denoted by X_0, is a discrete random variable whose probability mass function is given by the Poisson distribution with mean a, i.e.

$$\Pr\{X_0 = j\} = \frac{a^{j}\exp(-a)}{j!} \qquad (3)$$

Therefore, we have

$$\Pr\{X(n) = m\} = \sum_{j=m}^{\infty} \binom{j}{m}\,(G(n))^{m}\,(1 - G(n))^{j-m}\,\frac{a^{j}\exp(-a)}{j!} = \frac{[aG(n)]^{m}\exp(-a)}{m!}\sum_{j=m}^{\infty} \frac{[a(1 - G(n))]^{j-m}}{(j-m)!}$$

Here it can be noted that

$$\sum_{j=m}^{\infty} \frac{[a(1 - G(n))]^{j-m}}{(j-m)!} = \exp(a(1 - G(n)))$$

From the above we obtain

$$\Pr\{X(n) = m\} = \frac{(aG(n))^{m}\exp(-aG(n))}{m!} \qquad (4)$$

From Eq. (4), we can conclude that the fault correction process (FCP) follows a Poisson process if the probability mass function of the initial fault content follows the Poisson distribution, and the mean value function (MVF) of the FCP is given by

$$m(n) = E[X(n)] = aG(n) \qquad (5)$$

This equation defines the correction process as a one-stage process in which no time is lost between the failure observation and its correction. By selecting a suitable probability distribution function, we can derive the MVF of several existing and new finite-failure-count discrete models. From Eq. (5), the instantaneous failure intensity function λ(n) is given by

$$\lambda(n) = a\,g(n)$$

where λ(n) = m(n+1) − m(n) and g(n) = ΔG(n) = G(n+1) − G(n).
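The one-stage scheme is straightforward to evaluate numerically. The following minimal Python sketch is illustrative only: the geometric choice of G(n) and the parameter values a = 100, b = 0.05 are assumptions for demonstration, not results from the paper. It computes m(n) = aG(n) from Eq. (5) and checks that the intensity λ(n) = m(n+1) − m(n) equals a·g(n).

```python
import numpy as np

def geometric_cdf(n, b):
    """Discrete CDF G(n) = 1 - (1 - b)^n (illustrative choice of distribution)."""
    return 1.0 - (1.0 - b) ** n

def mean_value(n, a, cdf, **params):
    """Unified one-stage MVF of Eq. (5): m(n) = a * G(n)."""
    return a * cdf(n, **params)

def intensity(n, a, cdf, **params):
    """Per-test-case intensity lambda(n) = m(n+1) - m(n)."""
    return mean_value(n + 1, a, cdf, **params) - mean_value(n, a, cdf, **params)

if __name__ == "__main__":
    a, b = 100.0, 0.05              # placeholder values, not fitted to any data set
    n = np.arange(0, 10)
    m = mean_value(n, a, geometric_cdf, b=b)
    lam = intensity(n, a, geometric_cdf, b=b)
    g = geometric_cdf(n + 1, b) - geometric_cdf(n, b)   # g(n) = G(n+1) - G(n)
    assert np.allclose(lam, a * g)  # lambda(n) = a * g(n), as stated above
    print(np.round(m, 2))
```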
Equivalently, the intensity can be written as

$$\lambda(n) = \frac{g(n)}{1 - G(n)}\,[a - m(n)] \qquad (6)$$

Let us define

$$s(n) = \frac{g(n)}{1 - G(n)} \qquad (7)$$
Here s(n) represents the hazard rate function, or the failure occurrence rate per remaining fault of the software, i.e. the rate at which individual faults manifest themselves as failures during testing. Expressing the hazard rate function s(n) in terms of the probability distribution function points the way towards incorporating the case of time differentiation between the stages of failure observation and fault correction. This case is discussed in detail in Sec. 6 of this paper under the title "Scope and Directions for Future Research".

4. Derivation of Existing and New Discrete SRGM

4.1. Probability distribution functions for modeling detection/correction times

In this paper, we use the following probability distribution functions for modeling the random number of test cases run during the fault detection/correction process.

Geometric distribution. This is the simplest and most widely used discrete distribution in reliability engineering modeling because it has a constant rate. It indicates a uniform distribution of faults in the software code, where each and every fault has the same probability of removal: the faults are easy to correct and the testing team is highly skilled. The Geometric distribution is the discrete analog of the exponential distribution. Though in most software testing projects, for the sake of simplicity, the corrections are assumed to follow the Geometric distribution, more flexible modeling of removal times can be achieved with the Discrete Rayleigh or Weibull distribution. Both of these distributions are generalizations of the Geometric distribution.

Rayleigh distribution. Contrary to the Geometric distribution, the Discrete Rayleigh distribution has a monotonically increasing failure rate. Applying the Discrete Rayleigh distribution to software fault correction means that the faults in the software are complex and the initial testing skill of the test-case designers is low; however, their testing skill improves as the testing progresses.

Weibull distribution. It can represent different types of curves depending on the value of its shape parameter and is hence extremely flexible. It is very appropriate for representing
October 26, S0218539310003780
2010 14:22 WSPC/S0218-5393
122-IJRQSE
On the Development of Unified Scheme for Discrete
251
processes with fluctuating rates, i.e. increasing, decreasing or constant rates. It can implicitly model both types of correction processes:

• when there is imperfect debugging and error generation (decreasing failure rate);
• when the skill of the testing team grows as testing progresses (increasing failure rate).

Logistic distribution. The Logistic distribution is one of the simplest distributions for estimating an S-shaped software reliability growth curve. It represents the learning of the team and has a monotonically increasing failure rate.

Erlang distribution. Discrete Erlang distributions are extensions of the Geometric distribution in which fault removal consists of multiple steps, e.g. generation of a failure report, its analysis and correction, followed by verification and validation. The Discrete Erlang distribution is simply the distribution of a sum of independently distributed geometric variates; it is more commonly known as the Negative Binomial distribution in probability theory.

By combining the above-mentioned distributions in our proposed approach, we can explain a number of existing SRGMs formulated for different testing and debugging scenarios. In the next section we discuss how to obtain the MVFs of various existing SRGMs and also propose a few new models.

4.2. MVF for various new and existing models

SRGM-1. The following Geometric distribution function can be used to model SRGM-1:

$$G(n) = 1 - (1 - b)^{n} \qquad (8)$$

Then,

$$s(n) = \frac{g(n)}{1 - G(n)} = b \qquad (9)$$

Substituting the value of G(n) from Eq. (8) into Eq. (5), we get

$$m(n) = a\,[1 - (1 - b)^{n}] \qquad (10)$$

which is the same as the Discrete Exponential model given by Yamada and Osaki.19

SRGM-2. Now we discuss the case when G(n) is the Discrete Rayleigh distribution, i.e.

$$G(n) = 1 - (1 - b)^{n^{2}} \qquad (11)$$
Then,

$$s(n) = 1 - (1 - b)^{2n+1} \qquad (12)$$

and the corresponding MVF is given by

$$m(n) = a\,[1 - (1 - b)^{n^{2}}] \qquad (13)$$

This is the Discrete Rayleigh model.

SRGM-3. This software reliability growth model represents the case when the correction process follows the Discrete Weibull distribution, i.e.

$$G(n) = 1 - (1 - b)^{n^{k}} \qquad (14)$$

Then,

$$s(n) = 1 - (1 - b)^{\sum_{j=0}^{k-1} \binom{k}{j} n^{j}} \qquad (15)$$

and the corresponding MVF is given by

$$m(n) = a\,[1 - (1 - b)^{n^{k}}] \qquad (16)$$

Here b and k are the scale and shape parameters. This is the same as the Discrete Weibull model.

SRGM-4. This software reliability growth model represents the case when the correction process follows the Discrete Logistic distribution, i.e.

$$G(n) = \frac{1 - (1 - b)^{n}}{1 + \beta(1 - b)^{n}} \qquad (17)$$

Then,

$$s(n) = \frac{b}{1 + \beta(1 - b)^{n+1}} \qquad (18)$$

and the corresponding MVF is given by

$$m(n) = a\,\frac{1 - (1 - b)^{n}}{1 + \beta(1 - b)^{n}} \qquad (19)$$

This is the same as the discrete flexible model due to Kapur et al.12 Here it may be observed that for β = 0 this model reduces to the Discrete Exponential model.
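Before moving to the numerical study, the four mean value functions of Eqs. (10), (13), (16) and (19) can be collected in a short Python sketch. This is illustrative only: the parameter values below are placeholders rather than estimates, and the hazard-rate helper simply evaluates Eqs. (6) and (7) from m(n).

```python
import numpy as np

def mvf_srgm1(n, a, b):
    """SRGM-1, Eq. (10): geometric correction times."""
    return a * (1.0 - (1.0 - b) ** n)

def mvf_srgm2(n, a, b):
    """SRGM-2, Eq. (13): discrete Rayleigh correction times."""
    return a * (1.0 - (1.0 - b) ** (n ** 2))

def mvf_srgm3(n, a, b, k):
    """SRGM-3, Eq. (16): discrete Weibull correction times with shape k."""
    return a * (1.0 - (1.0 - b) ** (n ** k))

def mvf_srgm4(n, a, b, beta):
    """SRGM-4, Eq. (19): discrete logistic correction times."""
    return a * (1.0 - (1.0 - b) ** n) / (1.0 + beta * (1.0 - b) ** n)

def hazard_rate(mvf, n, a, **params):
    """Eqs. (6)-(7): s(n) = [m(n+1) - m(n)] / [a - m(n)] = g(n) / (1 - G(n))."""
    return (mvf(n + 1, a, **params) - mvf(n, a, **params)) / (a - mvf(n, a, **params))

if __name__ == "__main__":
    n = np.arange(0, 20)
    s1 = hazard_rate(mvf_srgm1, n, a=100.0, b=0.05)   # placeholder parameters
    assert np.allclose(s1, 0.05)                       # Eq. (9): constant rate b
    print(np.round(mvf_srgm4(n, a=100.0, b=0.05, beta=5.0), 2))
```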
5. Model Validation, Comparison Criteria and Data Analysis

5.1. Model validation

SRGMs are used to assess software quality, and decisions such as release time and maintenance costs during field use are based on these assessments. A successful application of an SRGM requires accurate estimation of its parameters from an actual testing and debugging data set of the same project or of a similar software project. Non-linear regression based on the least squares method has been used for parameter estimation, and the MSE (mean squared error) and R² have been used as the performance comparison criteria. For faster and more accurate calculations, the statistical package SPSS has been used for this purpose. Goodness-of-fit curves have been drawn to illustrate graphically how well the models fit the data. To illustrate the estimation procedure and the application of the SRGMs, we have carried out data analysis on the following two real software data sets.

Data set 1 (DS-1). The first data set (DS-1) was collected during 35 months of testing of a radar system of size 124 KLOC, during which 1301 faults were detected. This data set is cited from Brooks and Motley.1

Data set 2 (DS-2). The second data set (DS-2) was collected during 19 weeks of testing of a real-time command and control system, during which 328 faults were detected. This data set is cited from Ohba.15

5.2. Comparison criteria for SRGM

The performance of SRGMs is judged by their ability to fit the past software fault data (goodness of fit) and to predict the future behavior of the fault process.

Goodness-of-fit criteria. The term goodness of fit is used in two different contexts. In one context, it denotes the question of whether a sample of data came from a population with a specific distribution. In the other, it denotes the question "How well does a mathematical model (for example, a linear regression model) fit the data?"

5.2.1. The mean squared error (MSE)

The model under comparison is used to simulate the fault data; the difference between the estimated values $\hat{m}(n_i)$ and the observed data $y_i$ is measured by the MSE as follows:

$$\mathrm{MSE} = \frac{\sum_{i=1}^{k} (\hat{m}(n_i) - y_i)^2}{k}$$
where k is the number of observations. A lower MSE indicates less fitting error and thus a better goodness of fit.

5.2.2. Coefficient of multiple determination (R²)

We define this coefficient as one minus the ratio of the residual sum of squares of the trend model to the corrected sum of squares of the constant model, i.e.

$$R^{2} = 1 - \frac{\text{residual SS}}{\text{corrected SS}}$$
R² measures the percentage of the total variation about the mean accounted for by the fitted curve. It ranges in value from 0 to 1. Small values indicate that the model does not fit the data well; the larger R² is, the better the model explains the variation in the data.11

5.3. Data analyses

The SRGMs with mean value functions m(n) given by Eqs. (10), (13), (16) and (19) were fitted to estimate their unknown parameters.

For DS-1. The parameter estimation and comparison criteria results for DS-1 for all the models under consideration can be viewed in Table 1. The fitting of the models to DS-1 is graphically illustrated in Fig. 1. SRGM-1 shows a poor fit to the actual values of the real-time data set, while SRGM-4 fits the data extremely well.

For DS-2. The parameter estimation and comparison criteria results for DS-2 for all the models under consideration can be viewed in Table 2. The fitting of the models to DS-2 is graphically illustrated in Fig. 2. SRGM-2 shows a poor fit to the actual values of the real-time data set, while SRGM-3 and SRGM-4 fit the data quite well.

5.4. Model parameter estimation results

Table 1. Parameter estimation results for DS-1.

Model    a       b          k/β       R²       MSE
SRGM-1   10588   0.004218   −         0.9581   8923.86
SRGM-2   1373    0.003      −         0.996    837.89
SRGM-3   1326    0.00161    2.227     0.997    584.17
SRGM-4   1331    0.18175    20.163    0.999    203.79
Table 2. Parameter estimation results for DS-2.

Model    a      b         k/β      R²      MSE
SRGM-1   761    0.0318    −        0.986   139.82
SRGM-2   327    0.01149   −        0.969   320.97
SRGM-3   428    0.0355    1.284    0.990   102.12
SRGM-4   382    0.1637    2.8864   0.992   82.70
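The estimates in Tables 1 and 2 were obtained with SPSS, as noted in Sec. 5.1. As a rough illustration of how an analogous non-linear least-squares fit and the comparison criteria of Sec. 5.2 could be reproduced with open-source tools, the following Python/SciPy sketch fits SRGM-1 to a made-up cumulative fault-count series (not DS-1 or DS-2) and reports the MSE and R².

```python
import numpy as np
from scipy.optimize import curve_fit

def mvf_srgm1(n, a, b):
    """SRGM-1, Eq. (10): m(n) = a * [1 - (1 - b)^n]."""
    return a * (1.0 - (1.0 - b) ** n)

# Placeholder cumulative fault counts (illustrative only, NOT DS-1/DS-2).
n_obs = np.arange(1, 13)
y_obs = np.array([12, 25, 36, 45, 53, 60, 65, 70, 73, 76, 78, 80], dtype=float)

# Non-linear least squares, analogous to the SPSS-based estimation of Sec. 5.1.
popt, _ = curve_fit(mvf_srgm1, n_obs, y_obs, p0=[100.0, 0.1], maxfev=10000)
a_hat, b_hat = popt
y_hat = mvf_srgm1(n_obs, a_hat, b_hat)

# Comparison criteria of Sec. 5.2.
mse = np.mean((y_hat - y_obs) ** 2)
r2 = 1.0 - np.sum((y_obs - y_hat) ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
print(f"a = {a_hat:.1f}, b = {b_hat:.4f}, MSE = {mse:.2f}, R^2 = {r2:.4f}")
```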
5.5. Goodness-of-fit curves

[Fig. 1. Goodness-of-fit curves for DS-1: cumulative number of faults m(t) observed over time together with the fitted curves of SRGM-1 to SRGM-4.]

[Fig. 2. Goodness-of-fit curves for DS-2: cumulative number of faults m(t) observed over time together with the fitted curves of SRGM-1 to SRGM-4.]
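The original figures cannot be reproduced here. For completeness, a minimal sketch of how such goodness-of-fit curves could be drawn is given below; the observed series and parameter values are placeholders, not DS-1 or DS-2.

```python
import numpy as np
import matplotlib.pyplot as plt

def mvf_srgm1(t, a, b):
    return a * (1.0 - (1.0 - b) ** t)          # Eq. (10)

def mvf_srgm4(t, a, b, beta):
    return a * (1.0 - (1.0 - b) ** t) / (1.0 + beta * (1.0 - b) ** t)   # Eq. (19)

# Placeholder observed cumulative faults (illustrative only).
t = np.arange(1, 20)
observed = 300 * (1 - (1 - 0.25) ** t) + np.random.default_rng(0).normal(0, 5, t.size)

plt.plot(t, observed, "ko", label="m(t) observed")
plt.plot(t, mvf_srgm1(t, 320, 0.2), label="SRGM-1")        # placeholder parameters
plt.plot(t, mvf_srgm4(t, 310, 0.3, 4.0), label="SRGM-4")   # placeholder parameters
plt.xlabel("Time")
plt.ylabel("Number of Faults")
plt.legend()
plt.savefig("goodness_of_fit.png", dpi=150)
```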
6. Scope and Directions for Future Research: Unified Scheme for Discrete Software Reliability Growth Modeling for a Two-Stage Fault Correction Process

6.1. Model development

Most of the SRGMs reported in the last few decades assume that the failure observation process is followed by immediate fault correction. In reality, however, each observed
failure is reported, and the underlying fault is detected, corrected and then verified. Therefore, the time from failure observation to fault detection and finally its correction should not be neglected in the software testing process. When the correction is done in two stages, the fault correction process is given by the following difference equation:

$$\Delta m(n) = m(n+1) - m(n) = \frac{\sum_{k=0}^{n-1} g(n-k)\,\Delta F(k)}{1 - \sum_{k=0}^{n-1} G(n-k)\,\Delta F(k)}\,[a - m(n)] \qquad (20)$$

Let us define

$$s(n) = \frac{\sum_{k=0}^{n-1} g(n-k)\,\Delta F(k)}{1 - \sum_{k=0}^{n-1} G(n-k)\,\Delta F(k)} \qquad (21)$$

Here s(n) is the fault observation-correction rate per fault, or the hazard rate function. It is given purely in terms of the two-fold discrete convolution of the failure observation and fault correction distribution functions. Solving the above, we have

$$m(n) = a \sum_{k=0}^{n-1} G(n-k)\,\Delta F(k) \qquad (22)$$
This equation represents two-stage fault correction under perfect debugging conditions.

6.2. Derivation of existing and new discrete SRGM

By assuming different probability distribution functions for the number of test cases executed during the failure observation and fault correction processes, we can derive a few SRGMs as follows.

SRGM-5. This SRGM is based on the assumption that the numbers of test cases run during failure observation as well as fault correction follow the Geometric distribution, i.e.

$$F(n) = 1 - (1 - b)^{n} \quad \text{and} \quad G(n) = 1 - (1 - b)^{n} \qquad (23)$$

Then,

$$s(n) = \frac{b^{2}(n+1)}{1 + bn} \qquad (24)$$

It may be observed that s(n) is an increasing function of n and the limiting value of s(n) is b. It reflects the learning of the testing team during the correction process. Using Eq. (23) in Eq. (22), we get

$$m(n) = a\,[1 - (1 + bn)(1 - b)^{n}] \qquad (25)$$

which is the same as the discrete delayed S-shaped model due to Kapur et al.9
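Equation (22) and the SRGM-5 special case can be checked numerically. The Python sketch below (with illustrative parameter values only) evaluates the two-fold convolution of Eq. (22) with geometric F(n) and G(n) and confirms that it reproduces the closed form of Eq. (25).

```python
import numpy as np

def geometric_cdf(n, b):
    return 1.0 - (1.0 - b) ** n

def two_stage_mvf(n_max, a, b):
    """Eq. (22): m(n) = a * sum_{k=0}^{n-1} G(n-k) * dF(k), with geometric F and G."""
    n_vals = np.arange(0, n_max + 1)
    dF = geometric_cdf(n_vals + 1, b) - geometric_cdf(n_vals, b)   # dF(k) = F(k+1) - F(k)
    m = np.zeros(n_max + 1)
    for n in range(1, n_max + 1):
        k = np.arange(0, n)
        m[n] = a * np.sum(geometric_cdf(n - k, b) * dF[k])
    return m

if __name__ == "__main__":
    a, b = 100.0, 0.05                      # placeholder values only
    n = np.arange(0, 31)
    m_conv = two_stage_mvf(30, a, b)
    m_closed = a * (1.0 - (1.0 + b * n) * (1.0 - b) ** n)   # Eq. (25)
    assert np.allclose(m_conv, m_closed)    # convolution matches the closed form
    print(np.round(m_conv[:6], 3))
```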
SRGM-6. This SRGM is based on the assumption that there is a greater time lag between failure observation and fault correction than in SRGM-5. Here it is assumed that the number of test cases run during failure observation follows the Geometric distribution while the number of test cases run during fault correction is Erlang-2 type, i.e.

$$F(n) = 1 - (1 - b)^{n} \quad \text{and} \quad G(n) = 1 - (1 + bn)(1 - b)^{n} \qquad (26)$$

Then,

$$s(n) = \frac{b^{3}(n+1)(n+2)/2}{1 + bn + b^{2}n(n+1)/2} \qquad (27)$$

Like SRGM-5, here also the corresponding s(n) is an increasing function of n with b as its limiting value. In this case the faults in the software are more severe and require more time for correction. Using Eq. (26) in Eq. (22), we get

$$m(n) = a\,[1 - (1 + bn + b^{2}n(n+1)/2)(1 - b)^{n}] \qquad (28)$$

which is the same as the discrete Erlang 3-stage model due to Kapur and Younes.10

These are just a few illustrations of the models that may be derived with the proposed unified modeling scheme for a two-stage fault correction process. Presently we are working on their parameter estimation and goodness-of-fit/predictive analysis. More discrete distributions, apart from those discussed here, may be incorporated in this framework and many new models may be worked out. Their feasibility and workability will be discussed in our future research papers.

7. Conclusion

In this paper, a unified framework for discrete software reliability growth models has been presented. This approach considers two cases for the modeling of the fault correction process:

Case 1: the FCP is a one-stage process, i.e. the fault is corrected immediately after failure observation.

Case 2: the FCP is a two-stage process with a time delay between failure observation and its correction.

The framework presented here proves to be excellent for deriving a wide variety of discrete models by using different probability distribution functions. The technique is simple and presents a unique methodology for developing many new as well as existing models for different design environments. In this paper we have used standard distributions, e.g. Geometric, Erlang 2-type, Discrete Weibull, Rayleigh and Logistic. Their validity and accuracy have been assessed on two real software failure datasets. The results obtained are quite encouraging, as can be seen from the numerical illustrations shown in the tables obtained after the parameter
estimation. Future research under the unified modeling approach includes working out other techniques using infinite-server queuing theory or a delayed fault detection-correction process. Convolutions of discrete distributions have been used to represent the two-stage correction process, and this can be studied further to characterize the complexity of the faults in the software. In future work, the possibility of including change points or of modeling using stochastic differential equations can be explored. The concept of unification provides an interesting area of study which can ease the problem of model selection for the software developer and thus make these techniques more accessible and applicable.
Acknowledgments The research work presented in this paper is partially supported by grants to the first author from Defence Research & Development Organization (DRDO), India.
References

1. W. D. Brooks and R. W. Motley, Analysis of discrete software reliability models, Technical Report RADC-TR-80-84, Rome Air Development Center (New York, 1980).
2. D. R. Miller, Exponential order statistic models of software reliability growth, IEEE Trans. Software Eng. 12(1) (1986) 12–24.
3. T. Dohi, S. Osaki and K. S. Trivedi, An infinite server queuing approach for describing software reliability growth — unified modeling and estimation framework, in Proceedings of the 11th Asia-Pacific Software Engineering Conference (APSEC'04) (2004), pp. 110–119.
4. S. S. Gokhale, T. Philip, P. N. Marinos and K. S. Trivedi, Unification of finite failure non-homogeneous Poisson process models through test coverage, in Proc. Int. Symposium on Software Reliability Engineering (ISSRE 96), White Plains, NY (1996), pp. 289–299.
5. C. Y. Huang, M. R. Lyu and S. Y. Kuo, A unified scheme of some non-homogeneous Poisson process models for software reliability estimation, IEEE Trans. Software Eng. 29 (2003) 261–269.
6. S. Inoue and S. Yamada, Discrete software reliability assessment with discretized NHPP models, Computers & Mathematics with Applications 51(2) (2006) 161–170.
7. P. K. Kapur, H. Pham, S. Anand and K. Yadav, A unified approach for developing software reliability growth models in the presence of imperfect debugging and error generation, under revision, IEEE Transactions on Reliability (2009).
8. P. K. Kapur and R. B. Garg, A software reliability growth model for a fault removal phenomenon, Software Engineering Journal 7 (1992) 291–294.
9. P. K. Kapur, M. Bai and S. Bhushan, Some stochastic models in software reliability based on NHPP, in Contributions to Stochastics, N. Venugopal (Ed.), Wiley Eastern Limited, New Delhi (1992).
10. P. K. Kapur and S. Younes, A general discrete software reliability growth model, in Operations Research-Theory and Practice, Spaniel Publishers, New Delhi (1995).
11. P. K. Kapur, R. B. Garg and S. Kumar, Contributions to Hardware and Software Reliability, World Scientific Publishing Co. Ltd., Singapore (1999).
12. P. K. Kapur, Amit Gupta, Anu Gupta and A. Kumar, Discrete software reliability growth modeling, in Quality, Reliability and IT (Trends & Future Directions), P. K. Kapur and A. K. Verma (Eds.), Narora Publications Pvt. Ltd., New Delhi (2005).
13. J. D. Musa, A. Iannino and K. Okumoto, Software Reliability: Measurement, Prediction, Applications (McGraw Hill, New York, 1987).
14. N. Langberg and N. D. Singpurwalla, A unification of some software reliability models, SIAM J. Scientific and Statistical Computing 6(3) (1985) 781–790.
15. M. Ohba, Software reliability analysis models, IBM Journal of Research and Development 28 (1984) 428–443.
16. H. Okamura, A. Murayama and T. Dohi, EM algorithm for discrete software reliability models: A unified parameter estimation method, in Proc. 8th IEEE Int. Symp. (2004), pp. 219–228.
17. H. Pham, System Software Reliability, Reliability Engineering Series (Springer, 2006).
18. J. G. Shanthikumar, A general software reliability model for performance prediction, Microelectronics Reliability 21 (1981) 671–682.
19. S. Yamada and S. Osaki, Discrete software reliability growth models, Applied Stochastic Models and Data Analysis 1 (1985) 65–77.
About the Authors

P. K. Kapur is Professor and Head of the Department of Operational Research and Dean of the Faculty of Mathematical Sciences, University of Delhi. He is currently the president of the Society for Reliability Engineering, Quality and Operations Management (Regd.) (since 2000) and a former president of the Operational Research Society of India. He obtained his Ph.D. degree in Reliability Theory (Operational Research) from the University of Delhi in 1977. He has published extensively in Indian journals and abroad in the areas of marketing, hardware reliability, optimization, queueing theory, maintenance and software reliability (more than 200 papers). He has edited three volumes and is currently editing a fourth volume on Quality, Reliability and IT. He has also edited three volumes on Operations Research and co-authored two books: "Contributions to Hardware and Software Reliability" (1999), published by World Scientific, Singapore, and "Software Reliability Assessment with O.R. Applications", Springer (to be published in 2010). He has edited special issues of the International Journal of Reliability, Quality and Safety Engineering (2004), OPSEARCH, India (2005) and the International Journal of Performability Engineering (IJPE, India, 2006).

Anu G. Aggarwal received her Ph.D. degree in Software Reliability from the University of Delhi, Delhi. She is presently on the faculty of the Department of Operational Research, University of Delhi. She has published several papers in the area of software reliability. Her current research interests include soft computing. She is a member of the Society for Reliability Engineering, Quality and Operations Management (SREQOM).

Omar Shatnawi received his Ph.D. in Computer Science and his M.Sc. in Operational Research from the University of Delhi in 2004 and 1999, respectively. Currently,
he is head of the Department of Information Systems at Al al-Bayt University. His research interests are in software engineering, with an emphasis on improving software reliability and dependability.

Ravi Kumar received his M.Sc. and M.Phil. degrees in Operational Research from the University of Delhi, Delhi. He is pursuing his Ph.D. in the Department of Operational Research, University of Delhi, Delhi. He has published several papers in the area of software reliability. He is a member of the Society for Reliability Engineering, Quality and Operations Management (SREQOM).