Decision Support for Using Software Inspections

Ioana Rus, Fraunhofer Center Maryland, [email protected]
Forrest Shull, Fraunhofer Center Maryland, [email protected]
Paolo Donzelli, University of Maryland, Computer Science Department, [email protected]
Abstract

In support of decision-making for planning the effort to be allocated to inspections in different software development phases, we propose combining empirical studies with process modeling and simulation. We present the simulator developed for answering questions and running "what-if" scenarios specific to NASA software development projects.
1. Introduction

To achieve the goal of zero defects in delivered software, NASA projects can employ a range of verification and validation (V&V) methods and techniques. Industrial data has shown that inspections are among the most effective of all V&V activities, measured by the percentage of defects typically removed from a document when they are applied [1]. However, partly because inspections rely on the effort of experienced personnel as well as on having calendar time available for their application, they have not been applied as widely or as formally among NASA projects as their benefits warrant [2]. Since project managers must decide which combination of V&V techniques, including testing, would be most efficient for a given project, we believe that a reliable model showing the effect of inspections on the whole project lifecycle would address this limited adoption.

Software Inspections are a set of technical reviews that provide a detailed examination of a product on a step-by-step or line-of-code basis, with the objective of increasing the quality and reducing the cost of software development by detecting and correcting errors early. A primary feature of inspections is the removal of engineering errors before they amplify into larger and more costly problems downstream in the development process. Some early engineering products that typically undergo Software Inspection are requirements, designs, test plans, and code. Inspection teams are composed of peers from development, test, user groups, and quality assurance. Interviews with NASA
personnel have confirmed that the benefits grow as a higher level of process formality is applied [2]. Since inspections can be applied to any document created to support software development as soon as it has been authored, their contribution to system quality comes long before any executable code has been implemented. Hence their most dramatic benefits are seen when they are applied in early lifecycle phases, catching defects at their source and reducing their propagation to later stages of the software development process. From a project manager's point of view, the primary benefit is that fewer defects propagate from one development phase to another, so less testing and rework is necessary late in the lifecycle [1]. In this way, software products of the desired quality can be produced with less effort.

Unfortunately, inspections can be difficult to implement on projects. Most directly, this is because implementing inspections requires spending upfront effort (e.g. extra effort in the design phase to do inspections) for downstream gain (better quality software entering the testing phase). As a result, we have seen that project managers may not be motivated to include sufficient time and effort for performing inspections in early project phases. Another obstacle is the learning curve involved: it takes time for developers to learn how to effectively find defects in software work products on their own. And, for team leads who are trying to set up inspection processes and get people to take part, there is little guidance on what types of people to involve and how much direction to provide.

We find that project managers need some convincing data concerning the accepted benefits of inspections to catch their attention, get them interested in introducing the practice, and help them continue the practice through the difficult early phases. To provide this we have the benefit of many experience reports and controlled studies in the literature. However, these empirical studies, while useful for understanding the relative benefits of various ways of conducting inspections, do not generally report results in a way meaningful
to project managers. Most empirical studies of inspection practices describe results by measuring the percentage of defects detected and removed from a product in a given lifecycle phase (e.g. [3], [4], [5]). However, this was not sufficient information for the projects we have been working with, which are not concerned with the quality of intermediate artifacts in themselves, but only with how that quality affects the effort required and the quality produced for the project as a whole. Based on input from projects at NASA's Goddard Space Flight Center, we have identified specific concerns relevant for inspections, such as: How much effort should the team spend performing inspections? Where are inspections more effective, at the end of each phase or only at the end of some phases?

To answer such questions we first considered an empirical studies approach. However, it is not clear that the relationship between inspection effort and test effort is amenable to experimentation or case studies: because of the potentially long time between the application of an inspection and its measurable benefits with respect to testing, it is not feasible to have multiple similar projects where some apply early inspections and others do not, with all other differences over the lifetime of the projects minimized. Simply too many opportunities for variation exist over a project's lifetime! Most of the empirical studies run to date have, in contrast, been successful at showing the benefits of inspections within a lifecycle phase. For example, [1] summarizes several studies which agree that inspections of a requirements document will catch up to 60% of the defects that would otherwise have been passed on to the next phase.

Alternately, the relationship could be studied by examining historical data and looking across different projects to detect trends. In this case, studying the linkage between early inspections and the testing phase would require data collection in the following areas:
- The number of defects likely to be committed in each phase, i.e. an estimate of the defect injection rate of a project. This number should be broken down into estimates for major and minor defects, given the differing impact of each type of defect on overall system quality. A "major" defect is one that would prevent attainment of the mission goal; a "minor" defect is one that would cause difficulties if not corrected, but would not necessarily prevent the overall system goal.
- The percentage of defects in each phase that could be caught by inspections (if they are used) and removed from the lifecycle, along with the amount of additional effort that
would be spent in-phase by applying the inspections.
- The effort required during the testing phase to find and fix defects that have slipped through earlier lifecycle phases and have to be removed late in the lifecycle.

However, not enough projects collect data in sufficient detail across all phases to permit meaningful comparisons of the above relationships. This is due to the difficulty and cost of running experiments while controlling a domain as wide as a software development project, with its many intertwined factors, and, above all, the contingent necessity of delivering a product within schedule and budget constraints. Rather, project data and empirical studies have examined parts of this overall relationship in greater detail. For example:
- Historical data from NASA projects describes their inspection use in great detail: the number of participants, effort spent, and total number of major and minor defects found. However, since the objective of data collection was to monitor the inspection process only, the other project activities were not followed in sufficient detail to evaluate how many defects slipped through inspection.
- Empirical studies run by the University of Maryland and other institutions to investigate the effectiveness of inspection variants have been collected as a resource for researchers by the NSF Center for Empirically Based Software Engineering (CeBASE), a project in which the authors of this paper also participate. By using benchmark documents for the inspection with a known number of seeded defects, these studies provide greater confidence concerning the effectiveness of inspections, but they do not follow slipped defects forward through other lifecycle phases.
- Case studies run by the University of Southern California and also collected in a CeBASE repository provide more detail over the project lifetime. They provide more information about the effectiveness of inspections for catching defects from previous lifecycle phases and also provide measures of overall testing effort.
- "Expert heuristics" were also collected by CeBASE as a way of abstracting actionable heuristics from more specific data. Several expert practitioners in the field of software defect reduction were asked to discuss and refine statements about defects and their behavior, based on their own experiences and any data they had collected. Although less confidence should be assigned to such
heuristics, since they cannot always be traced back to verifiable data, when expert consensus is achieved there is reason to invest the resulting heuristics with some validity. The heuristics produced by CeBASE and reused in this work concerned topics such as the defect removal rate of inspections and the relative cost of finding and fixing defects in early lifecycle phases versus during testing.

Therefore we concluded that what was needed was a way to link the information known about these pieces of the problem into a coherent framework that supported reasoning about the relationship of inspections to test effort. Because the data came from various environments, this had to be done by looking for commonalities across the various data sets and using these generic relations to build a model that illustrates these effects. Of course, as more specific data from an environment becomes available (e.g. a project can estimate specific defect injection rates based on history), the model should be updateable to produce more specific results. For this we need to turn to process simulation modeling solutions.

Most of the process models used by the software community are analytical models, usually parametric ones. Examples are the COCOMO model [6], which estimates schedule and effort for a given software product, or the Rayleigh model [7], which forecasts the staffing profile during a project. Although fairly easy to apply, these models provide only a static representation of the software process, normally cover a limited subset of the process attributes (e.g., effort, schedule, defect density), separate the effects of such attributes (e.g. the development effort from product defect density), and do not permit analysis of the process behavior in a perturbed environment (e.g. changes in product requirements or staff reductions). More detailed and dynamic predictions of process behavior instead require more sophisticated models, able to represent the different aspects of a software development process, e.g. the different activities, the various artifacts, and the employed resources, together with their dynamic relationships. Due to their complexity, the construction and solution of such models requires simulation techniques.

The resulting simulation models are executable models of a software development process. By providing data about the desired process characteristics (e.g., effort, schedule, and staffing distribution for the whole process or for a specific activity, and number of defects discovered during V&V activities), for different inputs (e.g., project size and changed requirements), under different scenarios (e.g., available resources and personnel and the effort dedicated to inspections in different phases), they can help predict the effect of changes
on product and project parameters; plan, track, replan, and train; run what-if scenarios and sensitivity analyses; examine different alternatives and select the one that best suits the project at hand; and identify the changes that have the most impact, either positive or negative, and could lead to success or failure. In other terms, software simulation models can be applied to assess and analyze the as-is process, to support process improvement activities, and to design the to-be process that best fits specific project needs.

Compared with the higher precision and reliability of the data provided by empirical studies, it is generally argued that simulation solutions are unlikely to give exact representations of the real process behavior. Nevertheless, they give projections as to how the process would behave under given assumptions about external and internal factors, stimulate debate, and provide a way to learn about, and to improve, the software process. Empirical studies and simulation therefore benefit from each other's support. Their combination can provide process analysts and software managers an efficient and effective tool to understand, analyze, optimize, and finally apply a specific technology in a defined operational context. While experimentation allows a better understanding of the capability of a specific technique and the tuning of such capability to the specific context needs (i.e. studying the process in the small), simulation provides the way to evaluate the impact of such a technique on the overall process (i.e., studying the process at large).

There is an increasing number of publications presenting work in software process simulation. Many of these papers were published in the proceedings of the 1998-2003 Software Process Simulation Modeling workshops (ProSim, http://web.pdx.edu/~prosim/), and a subset of them can also be found in the April 1999 and December 2001 special issues of the Journal of Systems and Software. These models capture various levels of software development (activity, project, multi-project), have different goals (prediction, training, or decision support), and address multiple problems (cost reduction, lead time reduction, quality improvement) related to software development. The decisions supported by these simulators are very diverse; for example, they concern aspects of software design [8], requirements change [9], software acquisition [10], or inspections [11], [12], [13]. The simulation modeling techniques employed by these models vary from continuous (system dynamics based) [14], [9], [15], to discrete [13], to a hybrid combination of the two [16], [8], [17].

Although models and simulators capturing inspections have been developed before, when we considered the possibility of using any of them, we
realized that they had either a different scope, goal, assumptions, and level of detail, or did not address the specific questions of our study. For this reason we decided to develop our own simulator, based on concepts previously used and on knowledge acquired by the authors in building other process simulators.

2. Simulator description

The goal of our model is to help project managers or process engineers make decisions regarding planning for inspections and testing. Adopting the goal-question-metric (GQM) format [18], the goal can be stated as "Analyze the software development process with respect to the allocation of effort in inspections and testing, from the point of view of the project manager or process engineer, in the context of NASA projects".
[Figures 1-3 appear here.]

Figure 1. Variation of the V&V and rework effort distribution per phase when more effort is allocated to requirements inspections (plots of V&V effort and rework effort, annotated "increased effort allocated for inspections in requirements")

Figure 2. Variation of the effort distribution per phase when more effort is allocated to requirements inspections (annotated with total V&V and rework effort values of 1250 and 1100 staff hours and delivery times of 790 and 435 time units)

Figure 3. Variation of defect detection with increasing effort allocation for requirements inspections (two histograms of the number of major and minor defects detected per activity: requirements, design, coding, and testing)
According to GQM, the goal was then refined into the questions we need to answer in order to achieve it: "Given that the project ending date is fixed and the delivered software must
be free of defects, how will different effort allocations for inspections affect the effort that needs to be spent in testing?", "What is the optimal allocation of effort and staff to inspections in different phases vs. system testing?", "If we train the staff to be more efficient in inspecting artifacts, for example by introducing a new reading technique, how will that affect the overall effort distribution and project cost?". Our simulator therefore needs to provide the data (i.e., the metrics, in GQM terms) to answer such questions.

The scope of the model is a project for development of new software, covering the requirements specification, design, coding, and testing phases. The inputs to the model are: a) project data, such as the estimated software size, staff, and effort allocated for inspections (for each phase), and b) calibration parameters that characterize a development environment, such as a company or a team. Examples of calibration parameters are: density of defects produced per artifact, by severity (minor/major); effort needed to find and to fix a defect of a given severity, per phase; rate (productivity) for inspecting artifacts (items/hour); and effectiveness of inspections for detecting defects, per type and per phase. This data can be obtained by analogy with historical data or estimated from personal experience.

The main output of the simulator is the distribution of effort for each phase. Other parameters of interest that can be captured and displayed are: effort distribution per activity, for each phase and for the entire development, and the number of defects (detected and remaining) at the end of each phase, per type (requirements, design, and coding defects) and per severity (major or minor). The outputs can be either graphically displayed using plotters, or sent to external files such as Excel spreadsheets, which allows post-processing of these data. Figures 1 and 2 show examples of graphical output, and Figure 3 shows histogram charts derived from the number-of-defects values exported to an Excel spreadsheet.

An example of how the simulator can be used is described by the following what-if scenario: "What if the effort allocated to inspections in different phases varies?"
- For a given environment (that is, given and fixed calibration parameter values corresponding to quality and productivity characteristics), and
- For a given project (that is, given values of software size and staff):
- Case Ei: allocate ERi effort to requirements inspections, EDi effort to design inspections, and ECi effort to code inspections (where i = 1, ..., n indexes the simulator runs).
- Execute the simulator for Case 1, Case 2, ..., Case n, each time varying the effort allocated to inspections in the different phases (requirements, design, and coding).
- Examine the output effort distribution over phases for each case and compare them, concluding how inspection effort allocation affects testing effort (a minimal sketch of this loop is given below).
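To make the scenario concrete, here is a minimal Python sketch of such a driver. It is an illustration only: run_simulation, its toy internal behavior, and every parameter name and value below are assumptions of ours, not the interface or data of the actual Extend model.

```python
# Hypothetical driver for the what-if scenario described above: vary the effort
# allocated to inspections per phase and compare the resulting effort figures.
# run_simulation() is a stand-in for executing the Extend model; its interface
# and toy behavior are assumptions, not the real tool.

def run_simulation(calibration, project, inspection_effort):
    # Toy stand-in: every inspection hour saves some testing hours, down to
    # zero testing effort at most (purely illustrative behavior).
    saved_per_hour = calibration["test_hours_saved_per_inspection_hour"]
    insp = sum(inspection_effort.values())
    testing = max(0.0, calibration["baseline_test_effort"] - saved_per_hour * insp)
    return {"inspections": insp, "testing": testing}

calibration = {                                  # environment characteristics (assumed)
    "baseline_test_effort": 2000.0,              # staff hours if nothing is inspected
    "test_hours_saved_per_inspection_hour": 3.0,
}
project = {"size": 1000, "staff": 8}             # project data (unused by the toy stand-in)

cases = [                                        # Case E1 ... En: (ERi, EDi, ECi)
    {"requirements": 0.0,   "design": 0.0,   "coding": 0.0},
    {"requirements": 100.0, "design": 100.0, "coding": 100.0},
    {"requirements": 200.0, "design": 150.0, "coding": 100.0},
]

for i, effort in enumerate(cases, start=1):
    result = run_simulation(calibration, project, effort)
    total_vv = result["inspections"] + result["testing"]
    print(f"Case E{i}: inspection effort {result['inspections']:.0f} h, "
          f"testing effort {result['testing']:.0f} h, total V&V {total_vv:.0f} h")
```

In practice each case would be one execution of the simulator, and the comparison would look at the full per-phase effort distribution rather than a single total.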
[Figure 4 appears here: an influence diagram whose nodes include, per production phase, artifact size, defect injection rate, defects injected, inspection defect detection rate, defects detected, effort to fix a defect, total inspection effort, total rework effort, defect removal effort, and production effort; and, for the testing phase, defects remaining at test, effort to find & fix a defect in test, and actual test effort. The actual pre-test effort and the actual test effort make up the actual overall effort.]

Figure 4. Influence diagram

The outputs for this usage scenario are shown in Figures 1 and 2. In Figure 1, "V&V" means verification and validation activities, which are inspections in the requirements, design, and coding phases, and testing in the system testing phase.
Rework means fixing the defects that are detected by the V&V activities. Figure 1 shows two different executions of the simulator for the same project, run in the same environment. The right graph corresponds to an increase of effort allocated to
inspections in the requirements phase. This graph shows not only an increase in the V&V and rework effort in the requirements phase, but also a decrease of both the V&V and rework effort in testing, compared to the case on the left. Figure 2 shows a decrease of the overall effort (that is, the sum of the V&V and rework effort) for the second case, and even an earlier delivery date.

This is actually the expected behavior of the simulator, based on the relationships between the process parameters in Figure 4. The total development effort is the sum of the testing effort and the pre-testing effort. If inspections are performed, the more effort is allocated to them, the more the pre-testing effort increases. The more effort is allocated to inspections, the more defects will be detected and fixed. This increases the effort to remove the detected defects in the phase where they were created, which also increases the pre-testing effort. However, the more defects are detected before testing, the fewer defects remain to be detected in the testing phase, and therefore the less effort is needed to detect and fix those remaining defects. This leads to a decrease in the test effort and therefore a decrease in the overall effort.

However, it might not always be the case that inspections pay off for every project and in every phase. That would be the case, for example, if the developers are highly skilled and experienced and produce artifacts with very few, and only minor, defects (the defect injection rate in Figure 4), or if testing is highly automated and takes little time and effort compared to inspections, or if the effort to fix defects during testing is not much greater than in previous phases, or if inspections are not very efficient and people spend a lot of time without finding many defects. Therefore, by giving the simulator the values characteristic of a project, the effects of different effort allocations can be analyzed for that specific project.

The relations in Figure 4 are captured in the model that we developed. This model is composed of several similar modules, each corresponding to a software development phase (requirements, design, and coding). By connecting these modules in different ways, we can capture different life-cycle models, such as waterfall or incremental. The structure of a module is presented in Figure 5.
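The trade-off just described can be illustrated with a small back-of-the-envelope calculation over the Figure 4 relations. All numbers below are assumed purely for illustration and are not calibration data from any project:

```python
# Back-of-the-envelope walk through the Figure 4 relations.
# Every parameter value here is assumed for illustration only.

def efforts(inspection_effort, detection_rate):
    injected = 400          # defects injected before testing (assumed)
    fix_pre_test = 2.0      # staff hours to fix a defect found by inspection
    find_fix_in_test = 15.0 # staff hours to find & fix a defect during testing
    production = 3000.0     # staff hours spent producing the artifacts

    detected = injected * detection_rate      # defects caught by inspections
    rework = detected * fix_pre_test          # in-phase defect removal effort
    pre_test = production + inspection_effort + rework
    remaining = injected - detected           # defects slipping into testing
    test = remaining * find_fix_in_test       # actual test effort
    return pre_test, test, pre_test + test    # overall = pre-test + test

for label, effort, rate in [("little inspection", 50, 0.10),
                            ("more inspection", 300, 0.60)]:
    pre, test, total = efforts(effort, rate)
    print(f"{label}: pre-test {pre:.0f} h, test {test:.0f} h, overall {total:.0f} h")
```

With these assumed numbers the pre-test effort grows but the test effort shrinks by more, reproducing the pattern of Figures 1 and 2; with a low defect injection rate or cheap testing, the same arithmetic can tip the other way, which is exactly why the simulator has to be fed values characteristic of the project at hand.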
[Figure 5 appears here: a module's inputs (left) are staff, size, defect injection density (i), effort allocated for inspections, effort for detecting a defect (i), effort for fixing a defect (i), and defects from the previous phase. Inside the module, a Production activity produces artifacts, injected defects, and production effort & duration; an Inspection activity separates detected from undetected defects and yields inspection effort & duration; a Rework activity yields remaining defects and rework effort & duration (right). Note: (i) denotes different defect attributes and values, e.g., severity.]
Figure 5. Structure of a simulator module

Each phase has a Production activity, in which artifacts are produced, an Inspection activity for defect detection, and a Rework activity for fixing the detected defects. On the left are the inputs to the phase, and on the right, the outputs to the next phase. We consider that not all artifacts have the same size and defect distribution, and also that defects differ depending on where they were introduced (that is, during which phase) and on their severity (major or minor). To capture these attributes of the modeled entities, we decided to use the discrete event modeling approach. We also incorporate basic principles of system dynamics
modeling, such as structural (and behavioral) quantitative modeling and feedback loops. Using discrete event modeling allows capturing more details of the attributes of the modeled items (e.g., the type and severity of defects in our case), thus differentiating between items; this is a more realistic approach for our purpose than continuous modeling, where all items are considered identical. The simulation environment that we used is Extend [19]. A more detailed description of this model is outside the scope of this paper.
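To give a feel for the module structure, the following Python sketch mimics one phase module from Figure 5 in an aggregated way. It is not the Extend implementation: the names, the simplified (non event-by-event) computation, and all numeric values are assumptions made for illustration.

```python
from dataclasses import dataclass

SEVERITIES = ("major", "minor")

@dataclass
class PhaseResult:
    artifacts: int
    detected: dict           # defects found by the inspection, per severity
    undetected: dict         # defects slipping to the next phase, per severity
    production_effort: float
    inspection_effort: float
    rework_effort: float

def run_phase(size, staff, injection_density, detection_rate,
              effort_to_fix, inspection_effort, incoming_defects):
    """One simulator module (Figure 5), aggregated instead of event-by-event.
    injection_density, detection_rate, effort_to_fix are dicts keyed by severity."""
    # Production: create artifacts, inject new defects, and inherit defects
    # that slipped through the previous phase.
    injected = {s: size * injection_density[s] + incoming_defects.get(s, 0.0)
                for s in SEVERITIES}
    production_effort = size / staff            # toy productivity assumption

    # Inspection: detect a fraction of the defects present, using the effort
    # allocated for inspections in this phase.
    detected = {s: injected[s] * detection_rate[s] for s in SEVERITIES}
    undetected = {s: injected[s] - detected[s] for s in SEVERITIES}

    # Rework: fix what the inspection found.
    rework_effort = sum(detected[s] * effort_to_fix[s] for s in SEVERITIES)

    return PhaseResult(size, detected, undetected,
                       production_effort, inspection_effort, rework_effort)

# Chaining modules in sequence approximates a waterfall life cycle:
reqs = run_phase(200, 5, {"major": 0.2, "minor": 0.8},
                 {"major": 0.6, "minor": 0.5},
                 {"major": 3.0, "minor": 1.0}, 80.0, {})
design = run_phase(400, 5, {"major": 0.1, "minor": 0.5},
                   {"major": 0.55, "minor": 0.45},
                   {"major": 4.0, "minor": 1.5}, 120.0, reqs.undetected)
print(design.undetected)   # defects passed on toward coding and testing
```

The real model additionally carries each defect as a discrete entity with its own attributes, which is what the discrete event approach buys over this aggregated approximation.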
3. Using Empirical Data to Develop and Evolve a Dynamic Model

Some ideas about how empirical studies and process simulation can complement and support each other and be combined to assist decision making are presented in [20] and [21]. We used those concepts and also followed the process for developing a software process simulator presented in [22]. As shown in Figure 6, the first step was to define the goals and the intended usage of the simulator. As already described in Section 2, the main goal (expressed in GQM format) was to "Analyze the software development process with respect to the allocation of effort in inspections and testing, from the point of view of the project manager or process engineer, in the context of NASA projects". This goal was then refined into the questions we needed to answer to achieve it, and, finally, into the data that the simulator had to provide to enable us to answer them.

The description of the software development process adopted by our target organization (that is, NASA) was used to decide on the main structure of the simulation model. The main components represent the different activities and sub-activities relevant for our analysis (Figure 5), the corresponding attributes that we needed to investigate and their relationships (Figure 4), as well as the data to collect to provide the information required by and from the simulator. As pointed out in Section 1, existing empirical data sets provide detailed descriptions of the behavior of specific parts and aspects of the software development process. Although derived from different projects (in different contexts), these data allowed us to define the behavior of the different simulator components. Finally, the validation of the simulator was performed by obtaining a good level of confidence in the representativeness of the simulation model against real-life situations. The result is a generic simulator (Sim Generic) for a class of projects that share the same process description.

The model will be deployed and used as shown in Figure 7. First, it has to be calibrated for a specific software development environment; then it can be executed for a specific project, following usage scenarios like the one described in Section 2. The model is calibrated by assigning to the simulator parameters values that are representative of the target environment, obtainable from empirical data. In the absence of such calibration data, the user of the model can, at an initial stage, try an estimate; the closer this estimate is to reality, the more accurate the results of the simulation. By examining the results of the simulation, the user can then identify where more accurate input data are needed, and therefore what type of data needs to be collected from projects. In this way, the simulation results can provide valuable guidance for the data collection program and for possible future empirical experimentation within the target organization. The output of the simulator will help its user make decisions. Also, if the user observes inaccuracies or unexpected behavior that cannot be explained, he or she should notify the developer of the model, who might need to tune or refine either the version of the simulator specific to the project under consideration, or even the generic simulator. Thus, the simulator can be the subject of an iterative process of evolution and improvement.
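One simple way simulation output can steer data collection is a sensitivity check: run the simulator with the low and high ends of a roughly estimated calibration parameter and see how much the result moves. The sketch below does this with a deliberately crude, self-contained effort model; the parameters, their ranges, and all other values are assumptions for illustration only.

```python
# Crude sensitivity check: which roughly estimated calibration parameter is
# worth measuring first? The toy effort model and every value are assumptions.

def total_effort(detection_rate=0.5, hours_per_test_defect=15.0):
    injected, inspection_hours = 400, 300              # assumed project figures
    detected = injected * detection_rate
    rework = detected * 2.0                            # hours to fix a defect pre-test
    testing = (injected - detected) * hours_per_test_defect
    return 3000.0 + inspection_hours + rework + testing

ranges = {                                             # rough low/high estimates
    "detection_rate": (0.4, 0.7),
    "hours_per_test_defect": (10.0, 25.0),
}
for name, (low, high) in ranges.items():
    spread = abs(total_effort(**{name: high}) - total_effort(**{name: low}))
    print(f"{name}: total effort moves by {spread:.0f} staff hours across the estimate range")
```

Parameters whose plausible range swings the output the most are the ones where replacing an estimate with measured project data pays off soonest.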
4. Conclusions

The model constructed in this work provides a mechanism for combining data describing various aspects of the impact of inspections into a coherent illustration of the overall effect of inspections on the software project lifecycle. It has therefore demonstrated how empirical study and simulation benefit from each other's support. Their combination can provide process analysts and software managers an efficient and effective tool to understand, analyze, optimize, and finally apply a specific technology in a defined operational context. While experimentation allows for a better understanding of the capability of a specific technique and for the tuning of that capability to specific context needs (i.e. studying the process in the small), simulation provides a way to evaluate the impact of such a technique on the overall process (i.e., studying the process at large).

As future work we need to deploy the model at NASA, run the specified scenarios for specific projects, and refine the model. The model will be delivered first to the Goddard Center, together with a set of measures that are needed to calibrate and execute the simulator. We will also add more detail to the module capturing inspections, so that we can model, for example, the introduction of reading techniques to enhance individual artifact reading productivity. In the long term we will extend our model beyond inspections, to capture other methods and techniques.
5. Acknowledgements

This work was conducted as part of the "State of the Art Software Inspections at NASA" Initiative, funded by the NASA Office of Safety and Mission Assurance (OSMA) Software Assurance Research Program. Many thanks to Mike Stark for his important help and to our colleagues at the Fraunhofer Center Maryland for their comments, especially Dr. Vic Basili.
[Figure 6 appears here: simulation goals & usage, the process description, process parameter relations, and empirical data sets feed into Sim Generic, which undergoes simulator refinement.]

Figure 6. Developing a process simulator

[Figure 7 appears here: Sim Generic plus calibration data for a specific project go through simulator calibration to yield Sim Specific; together with input data for the specific project, simulator execution produces simulation results, which feed data collection and decision making and drive simulator evolution.]

Figure 7. Deploying, using, and evolving a process simulator
6. References

[1] F. Shull, V.R. Basili, B. Boehm, A.W. Brown, P. Costa, M. Lindvall, D. Port, I. Rus, R. Tesoriero, and M.V. Zelkowitz, "What We Have Learned About Fighting Defects", in Proceedings of the 8th International Software Metrics Symposium, 2002, pp. 249-258. Available: http://fc-md.umd.edu/fcmd/Papers/shull_defects.ps
[2] F. Shull, J. Bachman, J. Van Voorhis, and P. Larsen, "Lessons Learned Report for the 'State-of-the-Art Software Inspections and Reading' Initiative", Deliverable to the NASA OSMA SARP, 2001. Available by email from Forrest Shull ([email protected]).
[3] V.R. Basili, S. Green, O. Laitenberger, F. Lanubile, F. Shull, S. Sorumgaard, and M.V. Zelkowitz, "The Empirical Investigation of Perspective-Based Reading", Empirical Software Engineering: An International Journal, Kluwer Academic Publishers, Boston, Massachusetts, 1996, pp. 133-164.
[4] M. Ciolkowski, C. Differding, O. Laitenberger, and J. Muench, "Empirical Investigation of Perspective-based Reading: A Replicated Experiment", Technical Report ISERN-97-13, International Software Engineering Research Network, April 1997.
[5] O. Laitenberger, C. Atkinson, M. Schlich, and K. El Emam, "An Experimental Comparison of Reading Techniques for Defect Detection in UML Design Documents", Journal of Systems and Software, Elsevier, New York, 2000, pp. 183-204.
[6] B. Boehm, E. Horowitz, R. Madachy, D. Reifer, B. Clark, B. Steece, A.W. Brown, S. Chulani, and C. Abts, Software Cost Estimation with COCOMO II, Prentice Hall, New Jersey, 2000.
[7] L.H. Putnam and W. Myers, Measures for Excellence: Reliable Software on Time, within Budget, Prentice-Hall, New Jersey, 1992.
[8] P. Lakey, "A Hybrid Software Process Simulation Model for Project Management", in Proceedings of the Software Process Simulation Modeling Workshop (ProSim 2003), 2003.
[9] S. Ferreira, J. Collofello, and D. Shunk, "Utilization of Process Modeling and Simulation in Understanding the Effects of Requirements Volatility in Software Development", in Proceedings of the Software Process Simulation Modeling Workshop (ProSim 2003), 2003.
[10] T. Haberlein, "A Framework for System Dynamic Models of Software Acquisition Projects", in Proceedings of the Software Process Simulation Modeling Workshop (ProSim 2003), 2003.
[11] R. Madachy, A Software Project Dynamics Model for Process Cost, Schedule, and Risk Assessment, Ph.D. Dissertation, University of Southern California, 1994.
[12] J. Tvedt, An Extensible Model for Evaluating the Impact of Process Improvement on Software Development Cycle Time, Ph.D. Dissertation, Arizona State University, 1996.
[13] H. Neu, T. Hanne, J. Muench, S. Nickel, and A. Wirsen, "Creating a Code Inspection Model for Simulation-based Decision Support", in Proceedings of the Software Process Simulation Modeling Workshop (ProSim 2003), 2003.
[14] T.K. Abdel-Hamid and S.E. Madnick, Software Project Dynamics: An Integrated Approach, Prentice-Hall, New Jersey, 1991.
[15] I. Rus, J. Collofello, and P. Lakey, "Software Process Simulation for Reliability Management", Journal of Systems and Software, Elsevier, New York, 1999, pp. 173-182.
[16] P. Donzelli and G. Iazeolla, "A Dynamic Simulator of Software Processes to Test Process Assumptions", Journal of Systems and Software, Elsevier, New York, 2001, pp. 81-90.
[17] R.H. Martin and D. Raffo, "A Model of the Software Development Process Using Both Continuous and Discrete Models", International Journal of Software Process Improvement and Practice, 2000, pp. 147-157.
[18] V.R. Basili, G. Caldiera, and D.H. Rombach, "The Goal Question Metric Approach", Encyclopedia of Software Engineering, Wiley, New York, 1994, pp. 528-532.
[19] Extend: Simulation Software for the New Millennium, User's Manual, Imagine That, Inc., 1997.
[20] J. Muench, D. Rombach, and I. Rus, "Creating an Advanced Software Engineering Laboratory by Combining Empirical Studies with Process Simulation", in Proceedings of the Software Process Simulation Modeling Workshop (ProSim 2003), 2003.
[21] I. Rus, M. Halling, and S. Biffl, "Supporting Decision-Making in Software Engineering with Process Simulation and Empirical Studies", Journal of Software Engineering and Knowledge Engineering, World Scientific, New Jersey, 2003, pp. 531-545.
[22] I. Rus, H. Neu, and J. Muench, "A Systematic Methodology for Developing Discrete Event Simulation Models of Software Development Processes", presented at the Software Process Simulation Modeling Workshop (ProSim 2003), Portland, Oregon, 2003.