An Empirical Study of Product Metrics in Software Testing

YOGESH SINGH 1, ARVINDER KAUR 2, BHARTI SURI 3
1. Professor, University School of Information Technology, G.G.S. Indraprastha University, Delhi, India, [email protected]
2. Reader, University School of Information Technology, G.G.S. Indraprastha University, Delhi, India
3. Lecturer, University School of Information Technology, G.G.S. Indraprastha University, Delhi, India, [email protected]

Abstract— Metrics are an essential part of any software development organization seeking to improve the quality of its software. Measurement of the software process is a pre-requisite for planning and monitoring a cost-effective test strategy. The analysis of trends shown by the metrics gives a way to take appropriate action for process improvement and provides confidence in the software. Though various metrics exist for each phase of the software life cycle, substantial work is still needed in the testing phase in particular. This paper surveys and classifies various testing metrics. It also proposes some new software product testing metrics. The proposed metrics are analyzed over project data provided by NASA [1].
I. INTRODUCTION

Measurement helps us in understanding, controlling and improving our processes and products [2]. According to Kaner, "Measurement is the empirical, objective assignment of numbers, according to a rule derived from a model or theory, to attributes of objects or events with the intent of describing them" [3]. A software metric [4,5,6] measures a specific attribute of a software product or a software development process. Software metrics can be defined as "the continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products" [7]. Software metrics help in making many kinds of predictions, assessments and trade-offs during the software life cycle [8,9,10,11,12,13,14,15]. They help to minimize risk, as they provide quality input for decision making at various points in the software life cycle. Software testing is a dominant part of the software development process. Since software is constantly growing in size, power and complexity, extensive testing and evaluation of metrics may result in a high level of confidence in software reliability. Software testing metrics [16,17] are used to improve software production and quality by quantifying testing activities or the test process. They also provide insight into the status of the software, as they are computed at the end of software
development. If the results of test metrics are adequately used, they can help in improving the quality of the software. Also, for new projects, previous trends help in comparison and improvement of the process. The results from test metrics are used to understand the current status and help in prioritizing various activities in a project to reduce the risk of schedule overrun on software releases. Due to new additions or testing done at the client end, the next release of software is often modified and regression testing is done [18,19]. Data from previous testing helps in deriving test metrics, which further help in regression testing. A strategy chosen after analyzing the results of test metrics for one software product may not be effective for some other software. Test metrics help in measuring critical aspects of software and in estimating an accurate schedule and cost, and hence improve quality and productivity. They help in controlling the test process by providing input to software management. Metrics can be classified as product metrics, process metrics and project metrics. Product metrics describe the characteristics of the product, e.g. size, performance and efficiency. Process metrics describe the effectiveness and quality of the process that produces the software product, e.g. effort, time and number of defects found during testing. Project metrics describe the project's characteristics and execution, e.g. number of software developers, cost and schedule. In this paper we concentrate on product metrics.
The paper is organized into the following sections. Section 2 overviews the research conducted so far in the area of product metrics in testing. Section 3 calculates some already existing product metrics in testing. Section 4 discusses the proposed product metrics in testing, with their observation and calculation over the MDP data repository from NASA. Conclusions drawn from the literature survey, classification and analysis are highlighted in Section 5 of this paper.
II. RELATED WORK

A number of useful product metrics have been reported in the literature. Stark et al. [20] evaluated a set of five metrics during the testing phase of a NASA project: software size, software reliability, test session efficiency, test focus and software maturity. They examined trends for each metric and suggested that trends are more important than individual data points, since through trend analysis and correlation corrective action can be taken earlier in the process. They also identified three more test metrics: subprogram complexity, test coverage and computer resource utilization. Their usage of the metric set throughout the testing effort led to risks and problems being identified early in the test process, minimizing the impact of problems. Further, by having a metric set with good coverage, managers were provided with more insight into the causes of problems, improving the effectiveness of the response. According to them, by maintaining the data and analysis, a project can serve as a benchmark which can be used for future project planning. Chen et al. [21] applied a set of test metrics to the results of an IBM Electronic Commerce Development (ECD) project. They attribute the low usage of metrics among organizations to the facts that an organization may not have control over measurement of results, that data collection is difficult and time consuming, and that no single metric is sufficient. They recommend quality metrics which compute quality of code, quality of product, test improvement and test effectiveness; time-to-market metrics such as test time and test time over development time; and cost-to-market metrics such as test cost normalized to product size, test cost as a ratio of development cost, and cost per weighted defect unit. They also modified and reused existing metrics to guide further improvement, such as test improvement in product quality, test time needed normalized to size of product, and cost per weighted defect unit. One new metric is also suggested: test effectiveness for driving out defects in each phase. They emphasize that metrics should be computed regularly, defect arrival patterns should be studied and the process should be improved accordingly. A quality metric in the test phase has been used in [22], which analyzed the relation between the change volume of results and detected faults. They proposed three metrics: procedures per module, conditions per module and loops per module, as shown in Table 1. The change volume of regularly measured results together with fault information helps in improving the accuracy and timeliness of quality management of testing. The quality metric can also be used to predict the contents of change of a program and to grasp the stability tendency in the test phase. Lewis [23] states that test metrics aim to make the testing process more effective. He lists metrics such as defect analysis, test effectiveness, development effectiveness, test automation, test cost, test status and user involvement. A test metric should be measurable, independent, accountable and precise. All the product metrics proposed and empirically calculated by various researchers are shown in Table 1, where we discuss their purpose, measurement methodology and effectiveness.

III. CALCULATION OF SOME PRODUCT METRICS

We have calculated some product testing metrics already given in the literature and mentioned in Table 1 on the data
available on the website of the NASA Metrics Data Program (MDP) repository [1], where data for certain projects is available. The data for each project comprises files classifying the data with respect to the various modules and their attributes, together with the attributes of the defects associated with those modules. From the Metrics Data Repository, three projects, i.e. KC1, KC3 and KC4, are taken for the calculation of some already existing metrics: the quality of code (QC) metric, the quality of product (QP) metric, and test effectiveness for driving out defects in each test phase. The test effectiveness (TE) metric is also computed, with the change that instead of weighted defects we have taken the number of defects. Of the already proposed metrics, those are computed for which the corresponding data was available. The results for these metrics are given in Table 2. For quality of code, the expected result states that the lower the value of QC, indicating fewer defects found, the higher the quality of the code delivered. In the QC values calculated for the three projects, project KC3 has the maximum code quality, as shown in Fig. 1.
Fig. 1: Quality of code (QC) for projects KC1, KC3 and KC4 (0.069, 0.281 and 0.177 respectively).
For QP, the lower the value, indicating fewer defects found, the higher the quality of the code delivered. Project KC1 has the maximum product quality, as shown in Fig. 2.
Fig. 2: Quality of product (QP) for projects KC1, KC3 and KC4 (0.013, 0.18 and 0.022 respectively).
Similarly, test effectiveness in driving out defects in each test phase is shown in Fig. 3. The higher this number, indicating that a higher ratio of defects of the appropriate problem type was closed, the higher the effectiveness of this phase in driving out defects.

Fig. 3: Test effectiveness for driving out defects in each test phase, per problem type, for the three projects (values are listed in Table 2).
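A rough sketch of how this per-problem-type effectiveness could be computed from a set of defect reports is shown below; the record fields (problem_type, closed_with_fix) are assumptions about how the reports are tagged, not the exact MDP schema.

```python
from collections import defaultdict

# Sketch: test effectiveness for driving out defects of each problem type,
# expressed as the percentage of reports of that type closed with a fix.
# Field names are illustrative assumptions, not the MDP repository schema.

def effectiveness_by_type(reports):
    totals = defaultdict(int)
    closed = defaultdict(int)
    for r in reports:
        totals[r["problem_type"]] += 1
        if r["closed_with_fix"]:
            closed[r["problem_type"]] += 1
    return {t: 100.0 * closed[t] / totals[t] for t in totals}

if __name__ == "__main__":
    reports = [
        {"problem_type": "Source Code", "closed_with_fix": True},
        {"problem_type": "Source Code", "closed_with_fix": False},
        {"problem_type": "Documentation", "closed_with_fix": True},
    ]
    print(effectiveness_by_type(reports))  # {'Source Code': 50.0, 'Documentation': 100.0}
```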
Figure 4 shows the results of the test effectiveness (TE) metric, which is calculated as the ratio of defects closed to the total number of defects. As testing proceeds it should approach unity in the ideal case; it is maximum for KC1.

Figure 4: Test effectiveness for the three projects (KC1: 0.3, KC3: 0.633, KC4: 0.367).

From these metrics we conclude that project KC3 has the best quality among the projects compared.

IV. PROPOSED METRICS

We have proposed the following two metrics, which give further insight into the software product and provide some more information about error reports and test session efficiency at each stage. The first metric is error focus, which shows the relation between the error reports that are closed with a fix and the total number of error reports in the project:

ErrorFocus = Fixed_Closure_Error_Report / Total_Error_Report

Error reports are the reports that are generated for each type of error found in the system. Reasons for closure of error reports can be: fix, duplicate, obsolete and reject. Fixed_Closure_Error_Report denotes the total number of error reports for the whole project which are closed with a fix, and Total_Error_Report denotes the total number of error reports generated. The purpose of the error focus metric is to provide information about the effectiveness of testing in fixing defects. We further computed this metric on the data available from the NASA MDP database; Fig. 5 demonstrates the results.

Fig. 5: Error focus for the three projects (KC1: 0.797, KC3: 0.673, KC4: 0.893).

The second metric is test session efficiency (TSE) for each stage, which captures the relation between the number of weighted defects found at each stage and the number of added or changed source lines of code:

TSE = Z_stage / CASLOC

where Z_stage denotes the total number of weighted defects found at that stage and CASLOC denotes the number of changed or added source lines of code. The stage which yields the highest value of test session efficiency contributes most in finding defects. The purpose of test session efficiency is to compare the effectiveness of the different stages of testing. TSE for the MDP data is computed for the three projects and is shown in Fig. 6.

Fig. 6: Test session efficiency for each stage (Planned Test, Analysis, Engineering Test, Inspection, Regression Test, Release_I&T, Customer Use, etc.) for the three projects; values are listed in Table 3.

In project KC1, the planned test stage contributes most in finding defects. In KC3, the customer use stage finds the most defects, and in KC4 the planned test stage again has the maximum TSE value.
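As a rough illustration of how the two proposed metrics could be computed from such data, the sketch below implements error focus and per-stage test session efficiency in Python; the closure codes, stage names and field layout are assumptions made for illustration and are not the exact structure of the MDP files.

```python
# Sketch of the two proposed metrics. Closure codes, stage names and field
# names are illustrative assumptions, not the layout of the NASA MDP files.

def error_focus(error_reports):
    """ErrorFocus = reports closed with a fix / total error reports."""
    if not error_reports:
        return 0.0
    fixed = sum(1 for r in error_reports if r["closure"] == "fix")
    return fixed / len(error_reports)

def test_session_efficiency(weighted_defects_by_stage, casloc):
    """TSE per stage = weighted defects found in that stage / changed or added SLOC."""
    return {stage: z / casloc for stage, z in weighted_defects_by_stage.items()}

if __name__ == "__main__":
    reports = [{"closure": "fix"}, {"closure": "duplicate"}, {"closure": "fix"}]
    print(error_focus(reports))  # 0.666...
    print(test_session_efficiency({"Planned Test": 18, "Inspection": 6}, casloc=1000))
```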
V. CONCLUSION
This paper discusses the existing metrics for software testing, focusing on product metrics in testing. A carefully selected testing metric suite is beneficial in measurement and provides useful information for decision making and future improvements. Hence, concentration should not be placed on
the specific concerns of an individual metric, as one metric is not sufficient to measure testing process efficiency. The paper summarizes existing product metrics in testing and uncovers gaps in existing research. A set of software testing metrics at the product level has been derived/proposed intuitively, together with their calculation and graphical representation. In this work we have calculated the values of the proposed metrics, but the same metrics can be calculated on other established projects and are open to further research.
TABLE 1: COMPARISON OF SOFTWARE TESTING PRODUCT METRICS

Year and Size of Project | Test Metric | Definition / Purpose | Formula | Effect
2004, IBM ECD Project | Quality of code (QC) | It captures the relation between the number of weighted defects and the size of the product release. Purpose is to deliver a high quality product. | (WTP+WF)/KCSI | The lower the value of QC, indicating fewer defects or less serious defects found, the higher the quality of code delivered.
2004, IBM ECD Project | Quality of the product (QP) | It shows the relation between the number of weighted defects shipped to customers and the size of the product release. Purpose is to deliver a high quality product. | WF/KCSI | A low value of QP indicates that fewer defects, or less serious defects, were shipped, implying a higher quality of the code delivered by the test teams.
2004, IBM ECD Project | Test improvement (TI) | It shows the relation between the number of weighted defects detected by the test team during testing and the size of the product release. Purpose is to deliver a high quality product. | WTTP/KCSI | The higher the number, indicating that more defects or more important defects were detected, the higher the improvement to the quality of the product which can be attributed to the test teams.
2004, IBM ECD Project | Test effectiveness (TE) | It shows the relation between the number of weighted defects detected during testing and the total number of weighted defects in the product. Purpose is to deliver a high quality product. | WT/(WTP+WF)*100% | The higher the TE, indicating that a higher ratio of defects or important defects was detected before release, the higher the effectiveness of the test organization in driving out defects.
2004, IBM ECD Project | Test time (TT) | It shows the relation between time spent on testing and the size of the product release. Purpose is to decrease time-to-market. | TT/KCSI | The lower this number, the less time required by the test teams to test the product.
2004, IBM ECD Project | Test time over development time (TD) | It shows the relation between time spent on testing and the time spent on developing. Purpose is to decrease time-to-market. | TT/TD*100% | The lower this number, the lower the amount of time required by the test teams to test the product compared to the development team.
2004, IBM ECD Project | Test cost normalized to product size (TCS) | It shows the relation between resources or money spent on testing and the size of the product release. Purpose is to decrease cost-to-market. | CT/KCSI | The lower this number, the lower the cost required to test each thousand lines of code.
2004, IBM ECD Project | Test cost as a ratio of development cost (TCD) | It shows the relation between the testing cost and the development cost of the product. Purpose is to decrease cost-to-market. | CT/CD*100% | The lower this number, the lower the cost required by the test teams to test the product compared to the development team.
2004, IBM ECD Project | Cost per weighted defect unit (CWD) | It shows the relation between money spent by the test team and the number of weighted defects detected during testing. Purpose is to decrease cost-to-market. | CT/WT | The lower this number, the lower the cost of finding one defect unit, and the more cost-effective the test process.
2004, IBM ECD Project | Test improvement in product quality | It shows the relation between the number of weighted defects detected and the size of the product release. | WP/KCSI | The higher this number, the higher the improvement of the quality of the product contributed during this test phase.
2004, IBM ECD Project | Test time needed normalized to size of product | It shows the relation between time spent on testing and the size of the product release. | TTP/KCSI | The lower this number, the less time required for the test phase, relatively.
2004, IBM ECD Project | Test cost normalized to size of product | It shows the relation between resources or money spent on the test phase and the size of the product release. | CTP/KCSI | The lower this number, the lower the cost required to test each thousand lines of code in the test phase.
2004, IBM ECD Project | Cost per weighted defect unit | It shows the relation between money spent on the test phase and the number of weighted defects detected. | CTP/WT | The lower this number, the lower the cost of finding one defect unit in the test phase, and the more cost-effective this test phase.
2004, IBM ECD Project | Test effectiveness for driving out defects in each test phase | It shows the relation between the number of one type of defect detected in one specific test phase and the total number of this type of defect in the product. | WD/(WD+WN)*100% | The higher this number, indicating that a higher ratio of defects or important defects was detected in the "appropriate" test phase, the higher the effectiveness of this test phase in driving out its target type of defects.
815,362 SLOC of NASA's MCCU project, 1992 | Software size | Measured by a count of source lines of code (SLOC). Goal is to show the risks to the system over time. | - | Large increases in software size late in the development cycle often result in increased testing and maintenance activities.
815,362 SLOC of NASA's MCCU project, 1992 | Software reliability | It is the probability of failure-free operation of a computer program for a specified time in a specified environment. | Z(t) = h * exp(-ht/N) | The decreasing failure rate represents the growth in the reliability of the software.
815,362 SLOC of NASA's MCCU project, 1992 | Test session efficiency | Goal of the test session efficiency metric is to identify trends in the effectiveness of the scheduled test time. | SYSE = Active test time / Scheduled test time; TE = Total no. of good runs / Total runs | Both should be greater than 80%.
815,362 SLOC of NASA's MCCU project, 1992 | Test focus | Goal is to identify the amount of effort spent finding and fixing real faults versus the effort spent either eliminating false defects or waiting for a hardware fix. | TF = No. of DRs closed with a software fix / Total no. of DRs | In the ideal case, test focus approaches unity as testing proceeds.
815,362 SLOC of NASA's MCCU project, 1992 | Software maturity | Goals are (i) to quantify the relative stabilization of a software subsystem and (ii) to identify any possible over-testing or testing bottlenecks by examining the fault density of the subsystem over time. Its three components are T, O and H. | T = Total no. of DRs charged to a subsystem / 1000 SLOC; O = No. of currently open subsystem DRs / 1000 SLOC; H = Active test hours per subsystem / 1000 SLOC | The graph of T versus H should begin with a near-infinite slope and approach a zero slope; otherwise a low quality subsystem is indicated and should be investigated. The graph of O versus H is an indication of how rapidly faults are being fixed; it should begin with a positive slope and then, as debuggers begin to correct the faults, the slope should become negative.
[20] | Subprogram complexity | Goal is to identify the complexity of each function and to track the progress of functions with a relatively high complexity, as they represent the highest risk. | % of functions with a complexity greater than a recommended threshold | A positive trend indicates the need to re-evaluate the software change philosophy, possibly resulting in some re-design. A negative trend indicates that the changed function has been redesigned to reduce complexity and increase maintainability.
[20] | Test coverage | Goal of the metric is to examine the efficiency of testing over time. | % of code branches that have been executed during testing | -
[20] | Computer resource utilization | Goal is to estimate the utilized capacity of the system prior to operations to ensure that sufficient resources exist. | - | -
[22] | Procedures/module | - | No. of procedures per module | Using the results together with the fault information, the progress situation can be grasped quickly and accurately.
[22] | Conditions/module | - | No. of conditions per module | As above.
[22] | Loops/module | - | No. of loops per module | As above.
[23] | Defect analysis | Distribution of defect causes; number of defects by cause over time; number of defects by how found over time; distribution of defects by module; distribution of defects by priority (critical, high, medium, low); distribution of defects by functional area; distribution of defects by environment (platform); distribution of defects by type (architecture, connectivity, consistency, database integrity, documentation, GUI, installation, memory, performance, security, standards and conventions, stress, usability, bad fixes); distribution of defects by who detected (external customer, internal customer, development, QA, other); distribution by how detected (technical review, walkthroughs, JAD, prototyping, inspection, test execution); distribution of defects by severity (high, medium, low). | Histograms, Pareto charts and multi-line graphs of the respective distributions | Every defect is analyzed to answer questions about root cause, how detected, when detected and who detected. Defects are analyzed with respect to cause, time, module, priority, functional area, environment, type, the persons involved, the phase of detection and severity.
[23] | Development effectiveness | Average time for development to repair a defect. | Total repair time ÷ number of repaired defects | It gives information about how well development is fixing the defects.
[23] | Test automation | Percent of manual vs. automated testing. | Cost of manual test effort ÷ total test cost | To find the effort expended on test automation.
[23] | Test cost | Distribution of cost by cause; distribution of cost by application; percent of costs for testing; total costs of testing over time; average cost of locating a defect; anticipated costs of testing vs. actual cost; average cost of locating a requirements defect with requirements reviews; average cost of locating a design defect with design reviews; average cost of locating a code defect with code reviews; average cost of locating a defect with test execution. | Histogram, Pareto; histogram, Pareto; total testing cost ÷ total system cost; line graph; total cost of testing ÷ number of defects detected; comparison of anticipated and actual costs; requirements review costs ÷ number of defects uncovered during requirements reviews; design review costs ÷ number of defects uncovered during design reviews; code review costs ÷ number of defects uncovered during code reviews; test execution costs ÷ number of defects uncovered during test execution | To find the resources and time spent on testing; to compare testing costs; to analyze the difference between actual and estimated costs; to analyze the cost of finding requirements defects using requirements reviews; to analyze the cost of finding design defects using design reviews; to analyze the cost of finding code defects using code reviews; to analyze the cost of finding execution defects using test execution.
[23] | Test effectiveness | Number of testing resources over time; percentage of defects discovered during maintenance; percentage of defects uncovered due to testing; average effectiveness of a test; value returned while reviewing requirements; value returned while reviewing design; value returned while reviewing programs; value returned during test execution; effect of testing changes; people's assessment of the effectiveness of testing; average time for QA to verify a fix. | Line plot; number of defects discovered during maintenance ÷ number of defects uncovered; number of detected errors through testing ÷ total system defects; number of tests ÷ total system defects; number of defects uncovered during requirements review ÷ requirements test costs; number of defects uncovered during design review ÷ design test costs; number of defects uncovered during program review ÷ program test costs; number of defects uncovered during testing ÷ test costs; number of tested changes ÷ problems attributable to the changes; subjective scaling (1–10); total QA verification time ÷ total number of defects to verify | To find how well testing is doing; to find how well testing is done; to find which test cases are more effective; to analyze the effectiveness of requirements reviews, design reviews, program reviews and test execution; to analyze the effectiveness of testing with respect to modifications; to analyze people's perspective.
[23] | Test extent | Number of defects over time; cumulative number of defects over time; number of application defects over time; percentage of statements executed; percentage of logical paths executed; percentage of acceptance criteria tested; number of requirements tested over time; number of statements executed over time; number of data elements exercised over time; number of decision statements executed over time. | Line graph; line graph; multi-line graph; number of statements executed ÷ total statements; number of logical paths ÷ total number of paths; acceptance criteria tested ÷ total acceptance criteria; line plots | To analyze defects with respect to time; to find the percentage of testing done so far; to analyze coverage with respect to code, logical paths, data elements and decision nodes over time; to analyze acceptance testing; to analyze how many requirements are covered by testing.
[23] | Test status | Number of tests ready to run over time; number of tests run over time; number of tests run without defects uncovered; number of defects corrected over time. | Line plots | To find where we are in the testing process.
[23] | User involvement | Percentage of user testing. | User testing time ÷ total test time | To find how much the user is involved in testing.

Abbreviations: WTP – no. of weighted defects found in the product under test (before official release); WF – no. of weighted defects found in the product after release; KCSI – no. of new or changed source lines of code, in thousands; WTTP – no. of weighted defects found by the test team in the test cycle of the product; WT – no. of weighted defects found by the test team during the product cycle; TT – no. of business days used for product testing; TD – no. of business days used for product development; CT – total cost of testing the product, in dollars; CD – total cost of developing the product, in dollars; WP – no. of weighted defects found in one specific test phase; TTP – no. of business days used for a specific test phase; CTP – total cost of a specific test phase, in dollars; WD – no. of weighted defects of this defect type that are detected after the test phase; WN – no. of weighted defects of this defect type (any particular type) that remain uncovered after the test phase (missed defects); Z(t) – instantaneous failure rate; h – failure rate prior to the start of testing; N – no. of faults inherent in the program prior to the start of testing; SYSE – system efficiency; TE – tester efficiency; DR – discrepancy report; TF – test focus; T – total density; O – open density; H – test hours.
TABLE 2: COMPARISON OF CALCULATED RESULTS ON EXISTING METRICS

S.No. | Name of Test Metric | Project KC1 | Project KC3 | Project KC4
1. | Quality of code (QC) | 0.069 | 0.281 | 0.177
2. | Quality of product (QP) | 0.013 | 0.180 | 0.022
3. | Test effectiveness for driving out defects in each test phase, by problem type: | | |
 | Source Code | 99.55 | 76.129 | 94.35
 | Design | 100 | 50 | 100
 | Configuration | 100 | 73.076 | 100
 | Documentation, Not a Bug, Procedure Scripts, Test Data, COTS, Unreproducible, Build, Prob. w/o fix | 100 100 100 100 -100 100 100 100 | 100 48.148 100 100 100 100 100 -100 | -100 --------
 | Hardware | 100 | -- | --
 | Unknown, Process, Blank, Hardware, Unknown, Process, Blank | 100 100 100 100 100 100 100 | --0 ---0 | --------
4. | Test effectiveness (TE) | 0.3 | 0.633 | 0.367
TABLE 3: COMPARISON OF CALCULATED RESULTS ON PROPOSED METRICS

S.No. | Name of Test Metric | Project KC1 | Project KC3 | Project KC4
1. | Error focus | 0.797 | 0.673 | 0.893
2. | Test session efficiency for each stage: | | |
 | Planned Test | 0.018 | 0.089 | 0.058
 | Analysis | 0.005 | 0.088 | 0.018
 | Engineering Test | 0.015 | 0.023 | 0.035
 | Inspection | 0.006 | 0.002 | 0.023
 | Regression Test | 0.005 | 0.026 | 0.014
 | Release_I&T | 0.001 | 0.008 | 0.022
 | Customer Use | 0.013 | 0.180 | 0.004
 | Demo | 1.4 x 10^-4 | - | -
 | Mission Success | 7.2 x 10^-5 | - | -
 | Mission Essential | 7.2 x 10^-5 | - | -
 | Mission Critical | 0.0002 | - | -
 | Acceptance Test | 0.002 | - | -
REFERENCES

[1] http://mdp.ivv.nasa.gov/repository.html
[2] Fenton, N.F. and Pfleeger, S.L., "Software Metrics: A Rigorous and Practical Approach", second edition, Thomson Asia Pvt. Ltd., 2002.
[3] Kaner, C. and Bond, W.P., "Software Engineering Metrics: What Do They Measure and How Do We Know?", 10th International Software Metrics Symposium, 2004.
[4] Aggarwal, K.K., Singh, Y., Kaur, A., Malhotra, R., "Empirical Study of Object-Oriented Metrics", Journal of Object Technology, published by the Swiss Federal Institute of Technology, Switzerland, vol. 5, no. 8, Nov.-Dec. 2006.
[5] Aggarwal, K.K., Singh, Y., Kaur, A., Malhotra, R., "Software Design Metrics for Object Oriented Software", Journal of Object Technology, published by the Swiss Federal Institute of Technology, Switzerland, vol. 6, no. 1, Jan.-Feb. 2007.
[6] Grady, R.B., "Successfully Applying Software Metrics", IEEE Computer, pp. 18-25, September 1994.
[7] Goodman, P., "Practical Implementation of Software Metrics", McGraw Hill Book Company, UK, 1993.
[8] Fenton, E.N. and Neil, M., "Software Metrics: Roadmap", Proceedings of the Conference on the Future of Software Engineering, pp. 357-370, June 4-11, 2000, Limerick, Ireland.
[9] Aggarwal, K.K. and Singh, Y., "Software Engineering Programs Documentation Operating Procedures (Second Edition)", New Age International Publishers, 2005.
[10] Fenton, N., "Software Measurement: A Necessary Scientific Basis", IEEE Transactions on Software Engineering, vol. 20, no. 3, pp. 199-206, 1994.
[11] Fenton, E.N. and Neil, M., "Software Metrics and Risk", Proceedings of the 2nd European Software Measurement Conference, Amsterdam, ISBN 90-76019-07-X, pp. 39-55, 1999.
[12] Mills, E.E., "Software Metrics", Curriculum Module SEI-CM-12-1.1, Software Engineering Institute, Carnegie Mellon University, 1998.
[13] Pavur, R., Jayakumar, M. and Clayton, H., "Software Testing Metrics: Do They Have Merit?", Industrial Management & Data Systems, vol. 99, no. 1, pp. 5-10, 1999.
[14] Wiegers, K.E., "A Software Metrics Primer", Software Development Magazine, July 1999.
[15] Schneidewind, N.F., "Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics", IEEE Transactions on Software Engineering, vol. 25, no. 6, pp. 769-781, Nov.-Dec. 1999.
[16] Pusala, R., "Operational Excellence through Efficient Software Testing Metrics", Infosys Technologies Limited, 2006.
[17] Bradshaw, S., "Test Metrics: A Practical Approach to Tracking and Interpretation", Questcon Technologies, a division of Howard Systems International, Inc., 2004.
[18] Aggarwal, K.K., Singh, Y., Kaur, A., "Code Coverage Based Technique for Prioritizing Test Cases for Regression Testing", ACM SIGSOFT, vol. 29, issue 5, 2004.
[19] Singh, Y., Kaur, A., Suri, B., "A New Technique for Version-Specific Test Case Selection and Prioritization for Regression Testing", Journal of the Computer Society of India, vol. 36, no. 4, pp. 23-32, 2006.
[20] Stark, G.E., Durst, R.C. and Pelnik, T.M., "An Evaluation of Software Testing Metrics for NASA's Mission Control Center", Software Quality Journal, vol. 1, no. 2, pp. 115-132.
[21] Chen, Y., Probert, R.L. and Robeson, K., "Effective Test Metrics for Test Strategy Evolution", Proceedings of the 2004 Conference of the Centre for Advanced Studies on Collaborative Research, pp. 111-123, October 4-7, 2004, Markham, Ontario, Canada.
[22] Ogasawara, H., Yamada, A., Kojo, M., "Experiences of Software Quality Management Using Metrics through the Life-Cycle", IEEE Proceedings of ICSE, 1996.
[23] Lewis, W.E., "Software Testing and Continuous Quality Improvement", second edition, Auerbach Publications, CRC Press.