Improving Software Development Tracking and Estimation Inside the Cone of Uncertainty

Pongtip Aroonvatanaporn, Thanida Hongsongkiat, and Barry Boehm
Center for Systems and Software Engineering
University of Southern California
Los Angeles, CA, USA

{aroonvat, thongson, boehm}@usc.edu

ABSTRACT

Software cost and schedule estimations are fundamental to software development projects, as they determine the scope and the resources required. With accurate estimates, the project's goals can be assured within the available resources. However, developing accurate and realistic estimates requires a high level of experience, expertise, and historical data. Oftentimes, once the resources have been estimated, little is done to reduce the uncertainties in the estimates as the project progresses through its life cycle. To address this issue, we have developed the COTIPMO tool, an implementation of the COnstructive Team Improvement Process MOdel framework, to automate the recalibration and estimation improvement processes. The tool allows software development teams to effectively track their development progress, assess the team's performance, and adjust the project estimates based on the assessment results. The COTIPMO tool has been used by 13 software engineering projects, and the results are presented in this paper.

Categories and Subject Descriptors
D.2 [Software Engineering]: Management—cost estimation, life cycle, time estimation, software process models

General Terms
Management, Measurement, Human Factors, Economics

Keywords
Cost Estimation, COCOMO II, Uncertainty, Continuous Assessment, Project Planning

1. INTRODUCTION

Good team performance and accurate software cost and schedule estimations are essential in determining the quality and timely delivery of the final product. The 2009 Standish Report found that of the 9,000 projects surveyed, 32% were delivered with full capability within budget and schedule, 24% were cancelled, and 44% were either over budget, over schedule, or undelivered [22]. This shows that nearly half of the projects suffered from issues related to cost and schedule estimation. With more accurate estimates and fewer uncertainties within the team, the number of failed or over-budget projects could be reduced significantly. Yet producing accurate and realistic estimates requires a high level of expertise and experience as well as good historical data. This is a luxury that software development teams often lack, as such data and resources are not always readily available.

Although the initial estimates for cost and schedule are important in determining the time, resources, and budget required for project completion, the ability to adapt the estimates to changing environments, requirements, and performance allows teams to better control quality and helps ensure timely delivery of the product.

In this paper, we introduce the COTIPMO tool, an implementation of the COnstructive Team Improvement Process MOdel developed in [2]. The tool allows software development teams to quickly track their development progress, assess the team's performance, and adjust their estimates based on the team's status. With better tracking and estimation mechanisms, the uncertainties in team performance and estimation can be reduced as the project progresses through its life cycle. This allows the team to continuously monitor its ability to complete the project within the available resources, budget, and time. We have deployed the tool at the University of Southern California (USC) and experimented with 13 software engineering projects. The results of the tool's application are analyzed and discussed in this paper.

The rest of this paper offers an approach for dealing with problems related to uncertainties in project tracking and estimation. We discuss in detail the common problems that occur due to these uncertainties and the motivation behind developing this tool. We then show that the use of the COTIPMO tool significantly improves team performance and reduces estimation uncertainties within the project. Finally, we conclude with a discussion of our plans for future work.

1.1 Terms and Definitions

Development project refers to the type of project in which the product must be developed from scratch. The development team must write the majority of the source code to implement the end-user functionalities.

NDI-intensive project refers to the type of project that aims at integrating and/or tailoring one or a set of non-developmental items (NDI) or commercial off-the-shelf (COTS) products. As defined in [14], this is when 30-90% of the end-user features and capabilities are provided by the NDI or COTS products.

Figure 1: The cone of uncertainty in software cost and size estimation

2. PROBLEMS AND MOTIVATION

The main motivation behind this research is the well-known "cone of uncertainty" defined in [5] and calibrated to completed projects in [6]. Figure 1 shows that until the project is completed and delivered, there is a wide range of products that the project can result in due to the various levels of uncertainty. In certain development paradigms where cost or schedule is fixed, the project scope must be adjusted to compensate for any changes in environment, resources, or uncertainties in the project. These paradigms include Schedule As an Independent Variable (SAIV) and Cost As an Independent Variable (CAIV), covered in [8]. Because uncertainties cannot be avoided, projects must be able to adapt to these changing environments; otherwise, schedules can slip, costs and budgets may overrun, and product quality may suffer significantly.

Menzies et al. [18] discussed two approaches to effort estimation: model-based and expert-based methods. While model-based methods require past data to make predictions for future projects, expert-based methods utilize the experience and expertise of the estimators to develop meaningful estimates. Our research framework and tool combine the two methodologies to improve the project estimates during the project's life cycle.

2.1 Inaccurate Project Estimations

Software development teams often do not have sufficient data and information to develop accurate cost and schedule estimates for the project to be developed. Without the necessary data, it is nearly impossible for teams to make proper predictions with respect to project scope, complexity, and resources required. These data include aspects and attributes that are specified in the COCOMO II model in [6]. Typically, projects progress through their life cycle based on these inaccurate estimates. This means that regardless of how well or poorly the projects progress, the estimates remain constant.

Once the project proceeds through its life cycle, the status and progress of the project are often not properly assessed by the team in order to analyze the accuracy of the estimates. There is a significant amount of uncertainty at the beginning of a project, as requirements are unstable and the project can proceed in many directions. This is clearly shown in the well-known "cone of uncertainty". With these levels of uncertainty, project estimates are typically neither realistic nor accurate.
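To make the cone concrete, the range multipliers below sketch how wide an early point estimate can be. The milestone names and factors follow the commonly cited calibration from Boehm's work; they are illustrative reference values, not data from this paper's projects.

```python
# Classic "cone of uncertainty" range multipliers (illustrative values
# in the spirit of Boehm's calibration, not this paper's own data).
CONE = [
    ("Feasibility",          0.25, 4.0),
    ("Concept of operation", 0.50, 2.0),
    ("Requirements spec",    0.67, 1.5),
    ("Product design spec",  0.80, 1.25),
    ("Detailed design spec", 0.90, 1.10),
]

def estimate_range(point_estimate_pm: float, milestone: str):
    """Return the (low, high) effort range the cone implies
    for a point estimate made at the given milestone."""
    for name, lo, hi in CONE:
        if name == milestone:
            return point_estimate_pm * lo, point_estimate_pm * hi
    raise ValueError(f"unknown milestone: {milestone}")

lo, hi = estimate_range(100, "Feasibility")
print(lo, hi)  # 25.0 400.0 -- a 16x spread at project start
```

A 100 person-month estimate made at feasibility time can thus plausibly correspond to anything from 25 to 400 person-months, which is exactly the uncertainty the tool aims to narrow as the project progresses.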

2.2 Lack of Effective Tracking and Assessment Tools

To date, there have been no tools or data that monitor the evolution of a project's progression through the "cone of uncertainty". In order to collect enough information for useful assessment, teams are required to perform various surveys and reviews. Because assessing project status and performance is tedious and complex, teams are discouraged from performing these processes regularly [15]. Furthermore, in traditional processes, to accurately report the progress of software development projects, teams are usually required to carefully count the source lines of code (SLOC) developed, analyze the logical lines of code, and compare them to the potentially inaccurate estimates discussed in Section 2.1. These tasks require a significant amount of effort to perform manually, which further discourages the teams. The more discouraging the processes are, the less they get done effectively.

Without proper tools or mechanisms to assess the team's and project's status and performance, the project progresses through its life cycle with high levels of uncertainty. As the cone of uncertainty remains wide, the project estimates remain uncertain and inaccurate, while project performance remains unimproved.
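To illustrate why manual logical-SLOC counting is tedious enough to automate, the sketch below counts logical statements in C-like source by stripping blanks and comments. The actual counting rules of tools like UCC are far more elaborate; this only conveys the idea.

```python
def logical_sloc(source: str) -> int:
    """Very rough logical-statement count for C-like code: skip blank
    lines and comments, then count statement terminators (';') and
    block openers ('{'). Real counters such as UCC apply much more
    detailed language-specific rules; this is only an illustration."""
    count = 0
    in_comment = False
    for raw in source.splitlines():
        line = raw.strip()
        if in_comment:                      # inside a /* ... */ block
            if "*/" not in line:
                continue
            in_comment = False
            line = line.split("*/", 1)[1].strip()
        if not line or line.startswith("//"):
            continue
        if line.startswith("/*"):
            if "*/" not in line:
                in_comment = True
                continue
            line = line.split("*/", 1)[1].strip()
            if not line:
                continue
        count += line.count(";") + line.count("{")
    return count

code = """
int main() {          // entry point
    int x = 1;
    /* a comment */
    return x;
}
"""
print(logical_sloc(code))  # 3 logical statements
```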

2.3 Limitations in Software Cost Estimation Models

There is little that software cost estimation models can do to compensate when software projects lack the necessary information and knowledge at the time cost and schedule are estimated. Moreover, most estimation models require a thorough understanding of the model parameters as well as a certain level of expertise to use them effectively. Without this understanding, teams may end up overstating the capability of the team's personnel or understating the complexity of the project. These misrepresentations lead to the inaccurate and unrealistic estimates discussed in Section 2.1.

Additionally, software projects are prone to changes in requirements, designs, and specifications as they progress through the life cycle; therefore, the uncertainties in resource estimations are constantly changing. This may be more apparent in agile projects or when clients are overly enthusiastic. Software estimation models cannot automatically adapt to these varying environments.

3. BACKGROUND AND RELATED WORK

To date, there are various techniques used for project tracking and assessment as well as for software sizing and estimation. All of the methods that we describe in this section are commonly used in the industry and have certain levels of tool support.

3.1 Project Tracking and Assessment Tools

Presented in [24], Program Evaluation and Review Technique (PERT) network charts enable projects to effectively manage uncertainties by providing mechanisms to identify critical paths and dependencies between tasks and activities. The tasks and their corresponding paths are updated so the progress of the project is visible to the developers and other stakeholders. However, even with proper tool support, the number of tasks and dependencies can grow very large fairly quickly, especially when tasks and dependencies are not properly defined. As the charts grow, they require high overhead to maintain and may be disregarded by the development teams due to their complexity.

The Goal-Question-Metric (GQM) approach in [4] is another popular method for progress tracking and measurement, with the ability to capture progress at the conceptual, operational, and quantitative levels. This allows the assessment process to align with the organizational environment as well as the project context. Various tools have been developed to leverage the GQM method, since GQM plans can grow very large and complex. The "GQM tool" developed in [17] automates the GQM process to help manage GQM plans, while the GQM-PLAN tool developed in [1] integrates the use of GQM into each stage and phase of the project life cycle. However, the GQM approach is only useful and effective when used correctly, by specifying the appropriate goals, questions, and measurements to be monitored.

Furthermore, the Earned-Value Management (EVM), burn-up, and burn-down charts in [9] are good for capturing project progress based on the team's velocity and completed features. However, these approaches are not effective at responding to major changes during each iteration. EVM requires high overhead to accurately report progress and to map it to the requirements for the earned-value analysis.
When projects constantly change, the development teams may end up spending more time updating the earned values than on the actual development activities. On the other hand, in burn-up and burn-down charts, when major changes occur, the charts may end up showing no progress, as the shift in requirement priorities may prevent certain features from being completed. This prevents the team from accurately determining the actual progress and the productivity rates of the developers.
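For reference, the standard earned-value indicators mentioned above reduce to a few arithmetic identities over three quantities: earned value (EV), planned value (PV), and actual cost (AC). A minimal sketch:

```python
def evm_metrics(ev: float, pv: float, ac: float) -> dict:
    """Standard earned-value indicators.
    EV = budgeted cost of work performed,
    PV = budgeted cost of work scheduled,
    AC = actual cost of work performed."""
    return {
        "schedule_variance": ev - pv,  # negative: behind schedule
        "cost_variance": ev - ac,      # negative: over budget
        "spi": ev / pv,                # schedule performance index
        "cpi": ev / ac,                # cost performance index
    }

m = evm_metrics(ev=40, pv=50, ac=45)
print(m["schedule_variance"], m["spi"])  # -10 0.8 -- behind schedule
```

The overhead criticized in the text comes not from these formulas but from accurately producing EV, PV, and AC for every requirement as the plan keeps changing.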

3.2 Software Sizing and Estimation Tools

Story points in [10] are commonly used in agile development processes. The method allows the team to analyze its velocity, or productivity, based on the story points completed, and to use those data for planning future iterations. Planning Poker in [12] is the tool often used for estimating the complexity of the stories. However, the method requires expert opinions and analogies in order to estimate accurately. Furthermore, in most cases, re-estimation only takes place when story sizes change, not when the development progress changes.

The PERT sizing method in [20] focuses on sizing individual components based on the size distributions (optimistic, most likely, and pessimistic) provided by the developers. Many tools, such as SEER-SEM [11], COCOMO 81 [5], and PRICE-S [19], utilize PERT for sizing and estimation. The method reduces the bias towards overestimation and underestimation, although people tend to choose "most likely" estimates towards the lower limit, while the actual product sizes cluster towards the upper limit. Based on [5], this underestimation bias is due to the following reasons:

• People are optimistic and tend to have the desire to please.
• People tend not to have a complete recall of past experiences.
• People are generally not familiar with the entire software job.

COCOMO-U, developed in [25], extends the COCOMO II model to allow creating estimates even when some of the parameters are unknown or uncertain. It utilizes a Bayesian Belief Network (BBN) to handle these uncertainties. However, in order for the tool to compute correctly, the users must be knowledgeable and have expertise in specifying the uncertainty of the unknown cost drivers.
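The PERT three-point combination referenced above is a simple weighted mean, shown here as a sketch. The (1, 4, 1)/6 weights are the standard beta-distribution approximation; the sample sizes are made up for illustration.

```python
def pert_size(optimistic: float, most_likely: float, pessimistic: float):
    """PERT three-point size estimate: expected value and standard
    deviation under the usual beta-distribution approximation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical component: 2-4 KSLOC range, most likely 2.5 KSLOC.
e, s = pert_size(2.0, 2.5, 4.0)
print(e, s)  # expected ~2.67 KSLOC, std dev ~0.33 KSLOC
```

Note how the asymmetric range pulls the expected value above the "most likely" guess, which partially counteracts the underestimation bias listed above.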

4. THE COTIPMO TOOL

The COTIPMO tool is an implementation of the COTIPMO framework in [2]. Having an effective tool is essential to realizing the potential of the framework. We focused on the ease of use and usability of the tool, since a tedious tool would discourage development teams from using it.

4.1 The Process Model Framework

The COTIPMO framework relies heavily on the COCOMO II estimation model developed in [6] and the Unified CodeCount (UCC) tool in [23] for effective software tracking and estimation. In addition, it uses concepts from IBM Self-Check in [15] and [16] for team retrospectives as well as quick assessments of the team's status and performance. As mentioned in Section 2, the COTIPMO framework bridges the gap between model-based and expert-based estimation methods. The framework collects data as the project progresses through its life cycle, assesses and analyzes them, and automatically suggests adjustments to the COCOMO II estimation parameters for improved expert judgments. Moreover, the framework utilizes the pre-calibrated COCOMO II model, which further contributes to the model-based aspect of the framework. The model consists of three main parts:

• Project progress tracking
• Continuous team assessment
• COCOMO II estimation adjustments

In the framework model, project tracking and assessments are to be done consistently as the project progresses. The framework is expected to be used on a per-project basis, so it does not require data from past projects; instead, it uses data continuously collected from the beginning of the project and analyzes them to improve team performance and estimates. This means that regardless of how inexperienced the team may be in estimating project cost, or how poor and inaccurate the initial cost and schedule estimates were, the framework enables these estimates to improve throughout the project's life cycle. Details of the COTIPMO process framework can be found in [3] and [2].
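The three parts of the framework can be sketched as a single per-iteration cycle. All names, thresholds, and the productivity figure below are illustrative assumptions, not the tool's actual API or calibration.

```python
import statistics

# Schematic of one COTIPMO iteration: track progress, assess the team,
# suggest estimate adjustments. Names and thresholds are hypothetical.
DEVIATION_THRESHOLD = 1.0  # flag survey questions above this stddev

def run_iteration(sloc_counted, nominal_productivity, survey_answers):
    """survey_answers maps question -> list of per-member ratings."""
    # 1. Project progress tracking: counted SLOC -> effort spent
    effort_spent_pm = sloc_counted / nominal_productivity

    # 2. Continuous team assessment: high deviation = disagreement
    flagged = [q for q, ratings in survey_answers.items()
               if statistics.stdev(ratings) > DEVIATION_THRESHOLD]

    # 3. Estimation adjustment: a low average rating hints that the
    #    related COCOMO II driver was rated too optimistically
    suggestions = {q: "revisit driver rating"
                   for q, ratings in survey_answers.items()
                   if statistics.mean(ratings) < 3}
    return effort_spent_pm, flagged, suggestions

effort, flagged, _ = run_iteration(
    sloc_counted=6000,
    nominal_productivity=2000,  # SLOC per person-month (assumed)
    survey_answers={
        "Team understands the requirements": [4, 4, 5, 4],
        "Architecture is stable":            [1, 5, 2, 4],
    },
)
print(effort)   # 3.0 person-months spent
print(flagged)  # ['Architecture is stable']
```

The second question is flagged because the members' answers disagree sharply, which is precisely the signal the framework surfaces for team discussion.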

4.2 Powered by Jazz

We have chosen IBM Jazz [13] as the supporting platform for the COTIPMO tool for its scalability and strong support for highly collaborative environments. Jazz provides the following foundational capabilities that are currently utilized by COTIPMO:

• User management
• Project and team management
• Resource and repository management
• RESTful web services architecture

Moreover, Jazz has a highly extensible architecture that allows collaboration and integration with other life cycle applications and tools working under a single logical server. This means that the COTIPMO tool can be extended to be used as part of existing project management tools such as the Rational Team Concert [21]. The potential extensions to the COTIPMO tool will be discussed later in this paper.

4.3 Use with Development Projects

The COTIPMO tool was developed to provide substantial benefits for development projects due to the integration of the UCC tool. The automatic code counting capability removes the complexity of the size and progress reporting process, enabling the development teams to quickly assess their progress and productivity. The tool uses the COCOMO II estimation model to convert SLOC to effort using the formula shown in equations 1 and 2. Instead of estimating the Size variable, we use the SLOC of developed source code reported by the UCC tool to compute the equivalent effort. This allows the tool to calculate the amount of effort spent on development and use that as a basis to estimate the effort required to complete the project.

PM = A × Size^E × Π_{i=1}^{17} EM_i        (1)

E = 0.91 + 0.01 × Σ_{j=1}^{5} SF_j        (2)

Where:

• A = 2.94 (a constant derived from historical project data)
• PM is the effort in person-months
• Size is in KSLOC
• EM_i is the effort multiplier for the ith cost driver
• SF_j is the jth scale factor, used to compensate for the economies or diseconomies of scale

Figure 2 shows the main screen of the COTIPMO tool for a development project. The screen consists of three main sections: 1) the "Initial Project Estimates", 2) the "Iteration List", and 3) the "Project Progress". The "Initial Project Estimates" section allows the team to enter the initial estimates for the project. Based on the information known at the beginning of the project, the development team specifies the modules planned for development and all the corresponding COCOMO II information for each module.

Figure 2: The iteration list for development projects

As the project progresses, the team periodically adds iterations to the tool, which show up in the "Iteration List" section. For each iteration, if the development of source code has not started, the team can update their estimates based on the team's status and knowledge. Otherwise, they can upload the source code files, and the COTIPMO tool automatically counts the number of logical lines of code in each file. The tool accumulates the lines of code for all modules and, using the COCOMO II model, computes the equivalent effort. The developers then enter the percentage developed, tested, and integrated for each module. All of these data are used to compute the new estimated effort required to complete the project. The progression of the project with respect to the team's accumulated effort spent and estimated effort is shown graphically in the "Project Progress" section, which allows the teams to see their development progress as well as the improvements to their estimates. The detailed implementation of the source code tracking framework is discussed in [3].

For every iteration created, the COTIPMO tool automatically generates a corresponding survey for each team member to complete. Figure 3 shows the survey ballot to be submitted by each member. As mentioned in Section 4.1, the concept of the survey assessment is based largely on the IBM Self-Check methodology [16]. Each team member fills out and submits the survey individually, without knowing the others' answers, in order to reduce bias.

Figure 3: Survey ballot

Figure 4 displays the result of the survey, showing the answers given by each team member. The standard deviation is computed to detect any inconsistencies between the answers for each question. A high deviation in the answers means that there are differences in opinions or understandings within the team; thus, a flag is raised for the team to discuss that specific question. The team then identifies actions to take in order to resolve or prevent those issues in the upcoming iterations.

Figure 5 shows the list of all the actions and the corresponding iteration in which they were identified. The team is able to mark each action as "resolved" once they have successfully addressed the originating issue and completed that specific action. This allows the team to effectively keep track of the tasks they need to perform as well as any outstanding problems within the team and project.

Figure 5: List of identified actions.

Finally, based on the survey results, the COTIPMO tool analyzes the survey data and automatically computes the adjustments that should be made to the COCOMO II parameters. These adjustments are suggested to the development team as shown in figures 6 and 7. The suggestions reflect the answers given by all the team members. Since each survey question has a different level of impact on each of the COCOMO II parameters, these suggestions are calculated based on the model discussed in Section 4.1. The number of arrows (1, 2, or 3) represents the level of adjustment that should be made to the corresponding COCOMO II parameter. One arrow represents a minor increase or decrease in the rating, while three arrows suggest that a major change is required. The developers must then use their judgment to make any necessary adjustments to the COCOMO II ratings based on these recommendations.
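The SLOC-to-effort conversion of equations 1 and 2 is straightforward to compute. The sketch below uses the published COCOMO II.2000 constants (A = 2.94, B = 0.91); the sample scale-factor values are illustrative, not calibrated to any real project.

```python
import math

def cocomo2_effort(ksloc, scale_factors, effort_multipliers,
                   a=2.94, b=0.91):
    """COCOMO II Post-Architecture effort (equations 1 and 2).
    scale_factors: the 5 SF ratings; effort_multipliers: the 17 EM
    ratings. A = 2.94 and B = 0.91 are the COCOMO II.2000 constants."""
    e = b + 0.01 * sum(scale_factors)              # equation 2
    return a * ksloc ** e * math.prod(effort_multipliers)  # equation 1

# All-nominal example: nominal EMs are 1.0; the SF values below are
# illustrative sample ratings, not calibrated data.
pm = cocomo2_effort(10, [3.72, 3.04, 4.24, 3.29, 2.68], [1.0] * 17)
print(round(pm, 1))  # roughly 35 person-months for 10 KSLOC
```

With SLOC reported automatically by UCC, this computation is re-run each iteration, which is how the tool turns counted code into effort spent and effort remaining.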

Figure 4: Survey result - answers

4.4 Use with NDI-intensive Projects

The COTIPMO tool also provides strong support for NDI-intensive projects. It utilizes the COCOMO II Application Point estimation model in [6] for effort estimation and tracking. The process of reporting progress in the Application Point model is much less complex than in the regular COCOMO II model. Instead of reporting the number of SLOC written, the development team reports the number of screens, reports, and third-generation language (3GL) components developed, customized, and configured. These are called application points. Additionally, the Application Point model uses the developers' capability and experience and the integrated computer-aided software engineering (ICASE) maturity and experience levels to calculate the productivity rate as well as the capability level of the team. The effort spent on development and the estimated effort required to complete the project are computed based on this information.

Figure 6: Survey Result - COCOMO II scale factors adjustment suggestions

Figure 7: Survey Result - COCOMO II cost drivers adjustment suggestions

Similar to a development project, the team continuously adds iterations as the project progresses. Figure 8 shows the main screen listing the iterations for NDI-intensive projects. The team reports the number of application points completed up to each iteration as well as the percentage developed and tested for each application point. The tool uses the COCOMO II Application Point model to compute the New Application Points (NAP), converts them into equivalent effort, and computes the new estimates based on these data. For every iteration, the team members are also required to complete the survey assessments individually. However, instead of suggesting adjustments to the scale factors and cost drivers, the COTIPMO tool analyzes the assessment data and suggests changes to the developers' capability and experience and the ICASE maturity and experience levels, which are the two dynamic parameters that affect the productivity rate of the team. Figure 9 shows the suggestions computed by the COTIPMO tool for NDI-intensive projects.

5. OBTAINING THE DATA

The experimentation with the tool was done in a classroom environment using data obtained from the graduate software engineering course at USC. In the two-semester, team-project-based course sequence CSCI577ab, students learn to use best software engineering practices to develop software systems from the Exploration phase to the Operations phase, adopting the Incremental Commitment Spiral Model (ICSM) [7] as the development process. Each team consists of five or six on-campus students, generally with less than 2 years of working experience, and one or two off-campus students who are full-time professionals with at least 5 years of industry experience. Typically, the on-campus students act as operational concept engineers, requirements engineers, software architects, UML modelers, coders, life cycle planners, and feasibility analysts, while the off-campus students take on the roles of Integrated Independent Verification and Validation (IIV&V) personnel, quality assurance personnel, and testers. The course includes both development projects and NDI-intensive projects, which are completed within either a 12-week (1-semester) or 24-week (2-semester) schedule depending on their scope and complexity [14].

The COTIPMO tool was deployed at USC during the Fall 2011 semester. The semester consisted of 79 graduate students making up 13 project teams, of which 5 were development projects and 8 were NDI-intensive projects. The teams started using the COTIPMO tool immediately after the requirements had been gathered and continued to use it weekly to report development progress and recalibrate their project estimates throughout the semester. By the end of the semester, 4 projects were completed with products fully delivered to the clients, while the remaining projects continued into the next semester. Throughout the life cycle of the projects, we collected data on various aspects, including the following:

• COCOMO II estimation data (i.e., scale drivers, cost drivers, application points)
• Project issues and defects
• Individual and team effort spent on project activities
• Client satisfaction

6. ANALYSIS AND DISCUSSION

We analyzed the data obtained from the software engineering projects and compared them with those from previous years to observe the differences in project performance.

6.1 Improved Software Estimation Accuracies

First, we focused on the correctness of the COCOMO II estimation parameters. We analyzed the parameter ratings provided by each team, keeping track of the mistakes they made. A rating is considered incorrect when it is not appropriate to the team or project status. For example, if a team rated its programmers' experience (APEX, PLEX, and LTEX) as high, but the team consisted of members with less than 2 years of industry experience, we counted this as a mistake.

Figure 8: The estimation list for NDI-intensive projects

Figure 9: Survey Result - Developer’s capability and experience level and ICASE maturity and experience level adjustment suggestions

Figure 10 shows the average number of mistakes in the COCOMO II scale factor and cost driver ratings across all the teams. Since each project had a different number of modules, we took the average of the cost driver mistakes across all modules for each project. Both the Fall 2009 and Fall 2010 semesters showed a consistent number of rating errors and no improvement as the projects progressed. However, in Fall 2011, after the COTIPMO tool was introduced, the projects made fewer mistakes in their estimates. More importantly, the projects improved over time, with better accuracy towards the end of the semester during the Foundations phase. To ensure the validity of the data, we selected sample sets from each year and had them evaluated and analyzed by an independent party. The mistakes identified were consistent with our initial analysis.

Figure 10: Average mistakes in the COCOMO II ratings for scale factors and cost drivers.

For all three years, the projects were carefully reviewed by the stakeholders at every major milestone. The review process includes analyzing the accuracy, correctness, and appropriateness of the project estimates (i.e., the COCOMO II parameter ratings). These results show that even though the projects' estimates were periodically evaluated by the stakeholders to point out any errors, without proper guidance and direction for corrections, the estimates and COCOMO II parameter ratings did not effectively improve.

6.2 Improved Project Issues and Defects Detection

Figure 11: Average defects and issues filed by the team each year

Figure 12: Average defects and issues filed by the team during each phase

Throughout the project life cycle, the IIV&V personnel independently review the projects and their artifacts to detect any inconsistencies and defects in the documentation as well as in the team's understanding in general. We analyzed the issues and defects that were filed and resolved by the teams, which were categorized into the following severities: 1) blocker, 2) critical, 3) major, 4) normal, 5) minor, 6) trivial, and 7) enhancement. Since these are project-related issues, we normalized the data and re-categorized them into normal and critical severities in order to make the observations more visible. The blocker, critical, and major issues were considered critical, which included any inconsistencies, misunderstandings, or errors that impact the project at the conceptual level. The rest of the data were categorized as normal, which included insignificant defects such as grammatical, typographical, and formatting errors.
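The re-categorization described above amounts to a simple mapping, sketched here directly from the severity names in the text:

```python
# Collapse the seven filed severities into the two analysis categories
# used above: "critical" (conceptual-level impact) vs "normal".
CRITICAL = {"blocker", "critical", "major"}

def recategorize(severity: str) -> str:
    return "critical" if severity.lower() in CRITICAL else "normal"

print(recategorize("Blocker"))      # critical
print(recategorize("enhancement"))  # normal
```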

schedule instead. The project immediately proceeded into the construction phase and the product was delivered to the client with 100% of end user functionalities implemented. In the previous years, when projects had to switch from a 24week to 12-week schedule, they required major re-scoping of features and capabilities in order to meet the new deadlines.

Figure 11 shows that the average number of critical defects and issues had decreased during the Fall 2011 semester even though the total number filed remained consistent with the previous years. We looked further into the behavior of the issues and defects by observing the rates that they were filed during each project phase. Figure 12 shows that in Fall 2011, the average number of critical defects and issues remained less than the previous years through all the phases. It is especially interesting to observe the significant reduction during the Valuation phase. Since the requirements were negotiated and gathered during this phase, the number of uncertainties and inconsistencies were expected to be fairly high. However, with the use of the COTIPMO tool, the number of issues and defects recorded were significantly reduced. This is possibly due to the fact that the assessment mechanisms of the tool helped detect any inconsistencies and potential problems that existed in the team early before they turned into critical issues.

6.3

Improved Project Tracking

With better progress tracking mechanism of the COTIPMO tool, 2 projects were able to deliver before the anticipated deadline during the Fall 2011 semester. The first project was initially planned for a 24-week schedule, but based on the progress tracking and re-estimations reported by the tool, they were able to determine that the project only required half the resources and could be completed within a 12-week

In addition, another project had to be re-scoped due to the client's shift in goals and needs. The project was initially planned to deliver a fully developed system on a 24-week schedule; however, the client changed the project scope and asked instead for a prototype of the system with only a subset of the end-user functionalities. The team updated their project sizes and estimates in the COTIPMO tool. Based on the project progress and the prototypes developed at that time, the tool reported that the team had enough resources to complete the newly scoped project within a 12-week timeframe. The project was completed with the prototyped functionalities delivered to the client.
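Re-estimations like these rest on the COCOMO II Post-Architecture model [6], in which effort grows nonlinearly with size, so halving scope more than halves effort. A minimal sketch of the calculation (the constants are the published COCOMO II.2000 calibration; the function names are ours):

```python
import math

# COCOMO II.2000 calibration constants [6]
A, B = 2.94, 0.91   # effort coefficient and baseline scale exponent
C, D = 3.67, 0.28   # schedule coefficient and baseline schedule exponent

def effort_pm(ksloc, scale_factors, effort_multipliers):
    """Estimated effort in person-months for a given size in KSLOC.

    scale_factors: the five SF ratings (PREC, FLEX, RESL, TEAM, PMAT).
    effort_multipliers: the EM cost-driver ratings (1.0 = nominal).
    """
    E = B + 0.01 * sum(scale_factors)
    return A * ksloc ** E * math.prod(effort_multipliers)

def schedule_months(pm, scale_factors):
    """Estimated calendar schedule in months from effort."""
    E = B + 0.01 * sum(scale_factors)
    F = D + 0.2 * (E - B)
    return C * pm ** F

# All-nominal scale factors sum to 18.97, giving E of about 1.10, so
# doubling size slightly more than doubles the estimated effort.
nominal_sf = [3.72, 3.04, 4.24, 3.29, 4.68]
```

As a team's assessments adjust the scale factor and effort multiplier ratings, the same size input yields updated effort and schedule estimates, which is the recalibration the tool automates.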

6.4 Reduced Project Effort

The use of the COTIPMO tool also showed benefits in other aspects of the software development process. Figure 13 shows a clear reduction in the effort spent on the projects during the Fall 2011 semester, when the COTIPMO tool was introduced to the projects. We looked into the details of the effort spent on the projects by breaking it down into the categories of project activities shown in Figure 14. Since the development process and the projects' scopes, sizes, and complexities were similar for all 3 years, the effort required for the various activities was expected to be similar as well. However, the Fall 2011 semester showed a significant reduction in the effort spent on communication and team synchronization. The continuous use of the COTIPMO tool allowed the teams to properly assess their progress and performance so that issues and inconsistencies could be detected early, before they turned into critical problems. The earlier these are detected, the less effort is required to resolve them and to synchronize team understanding.

Figure 13: Average effort spent in hours by individuals on the project

Figure 14: Average effort spent in hours by individuals for each project activity

7. THREATS TO VALIDITY

Representativeness of projects. Most projects were small e-services projects, which may not represent the industry at a larger scale. Nonetheless, the projects were done for real clients with real fixed schedules and costs. Also, all projects followed the same incremental development process and project activities that are used in industry.

Representativeness of personnel. The student teams and their members may not be representative of industry personnel, since the majority of them had less than 2 years of industry experience. However, even though the on-campus students may be less experienced, the off-campus students and the clients were working professionals.

The unavoidable Hawthorne effect. The student teams of Fall 2011 were aware that experiments were being conducted with the COTIPMO tool. This may have had some impact on the way the teams developed their project estimates using the COCOMO II model built into COTIPMO and caused the classic Hawthorne effect in the experiment. However, for all three years, the student teams were evaluated and graded based on their performance, which means that all the teams were equally motivated to produce correct and accurate estimation results. Furthermore, we were more interested in the improvements to the COCOMO II parameter ratings that occurred over time during the Fall 2011 semester, whereas the Fall 2009 and 2010 semesters showed no signs of improvement.

8. CONCLUSION AND FUTURE WORK

We have developed the COTIPMO tool, an implementation of the COTIPMO framework developed in [2], to aid software development teams in tracking their development progress, assessing their team's performance, and improving their project estimations throughout the project life cycle. The tool provides strong support for both development projects and NDI-intensive projects. For development projects, the team can benefit substantially from the integration of the UCC tool for automated sizing of the developed software and the use of the COCOMO II model for SLOC-effort conversion. For NDI-intensive projects, the tool utilizes the COCOMO II Application Point model, where the team reports the number of screens, reports, and 3GL components developed and customized.

As teams continuously assess themselves with the COTIPMO tool, issues and inconsistencies can be detected early, helping them reduce the number of uncertainties as they progress. The tool analyzes the assessment data to suggest adjustments to either the COCOMO II or the COCOMO II Application Point estimation parameters. This creates more realistic and accurate estimations that reflect the team's status, instead of the potential "guess" work done by the teams.

The COTIPMO tool has been deployed at USC and was used by 13 software engineering projects. The teams showed significant improvements in project estimations and performance. Their estimates became more accurate and realistic over time, reflecting the teams' actual performance, while the team members were better synchronized and stabilized because potential problems could be detected early, before they became critical issues or defects. Furthermore, some projects were able to deliver earlier than planned, as they were able to effectively track their development progress, recalibrate their resource estimations, and realize the ability to complete within a shorter timeframe.

Our plan for future work is to integrate the COTIPMO tool with configuration management tools such as Subversion, CVS, or Rational Team Concert. This would make progress tracking more automated, as the tool could constantly monitor the source code checked in to the version control system. The tool would also be able to utilize the differencing function of UCC, allowing it to count the number of SLOC added, modified, and deleted since the previous version in addition to the total logical SLOC. This would make the tracking of progress even more accurate and realistic. Another target for our future work is to experiment with the COTIPMO tool in industry to verify and validate the framework and the tool when used on industry projects at a larger scale. To observe its effectiveness, we will also apply the tool to projects of different sizes and domains, as the majority of the projects we have experimented with were only small e-services projects.
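As a concrete illustration of the Application Point sizing that the tool relies on for NDI-intensive projects, here is a minimal sketch following the procedure in [6]: each reported screen, report, and 3GL component is weighted by complexity, adjusted for reuse, and divided by a productivity rate. The weights and productivity rates below are the published COCOMO II values; the function names are ours.

```python
# Application Point weights from COCOMO II [6]: each counted element is
# classified as simple, medium, or difficult.
WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl_component": {"difficult": 10},  # 3GL components are always weighted 10
}

# Productivity in new application points per person-month, indexed by the
# developer-experience / tool-maturity rating.
PROD = {"very_low": 4, "low": 7, "nominal": 13, "high": 25, "very_high": 50}

def application_points(elements):
    """Sum weighted counts; elements = [(kind, complexity, count), ...]."""
    return sum(WEIGHTS[kind][complexity] * count
               for kind, complexity, count in elements)

def effort_from_points(elements, reuse_pct=0.0, rating="nominal"):
    """Effort in person-months: new application points / productivity rate."""
    nap = application_points(elements) * (100.0 - reuse_pct) / 100.0
    return nap / PROD[rating]
```

When a team updates its reported screens, reports, and components during an assessment, recomputing this estimate is immediate, which is what makes the continuous recalibration practical.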

9. ACKNOWLEDGMENTS

The authors would like to thank Bhargav Rajagopalan and Sergio Romulo Salazar for their effort in helping develop the COTIPMO tool.

10. REFERENCES

[1] J. C. Abib and T. G. Kirner. A GQM-based tool to support the development of software quality measurement plans. SIGSOFT Softw. Eng. Notes, 24:75–80, July 1999.
[2] P. Aroonvatanaporn, S. Koolmanojwong, and B. Boehm. COTIPMO: A COnstructive Team Improvement Process MOdel. In Proc. of the 2012 Int. Conf. on Software and Systems Process (ICSSP'12), Zurich, Switzerland, 2012.
[3] P. Aroonvatanaporn, C. Sinthop, and B. Boehm. Reducing estimation uncertainty with continuous assessment: tracking the "cone of uncertainty". In Proc. of the IEEE/ACM Int. Conf. on Automated Software Engineering (ASE'10), pages 337–340, Antwerp, Belgium, 2010.
[4] V. R. Basili. Applying the Goal/Question/Metric paradigm in the experience factory. In N. Fenton, R. Whitty, and Y. Lizuka, editors, Software Quality Assurance and Measurement: Worldwide Perspective, pages 21–44. International Thomson Computer Press, 1995.
[5] B. Boehm. Software Engineering Economics. Prentice-Hall, 1981.
[6] B. Boehm, C. Abts, A. W. Brown, S. Chulani, E. Horowitz, R. Madachy, D. J. Reifer, and B. Steece. Software Cost Estimation with COCOMO II. Prentice-Hall, 2000.
[7] B. Boehm and J. A. Lane. Using the Incremental Commitment Model to integrate system acquisition, systems engineering, and software engineering, 2006.
[8] B. Boehm, D. Port, L.-G. Huang, and W. Brown. Using the Spiral Model and MBASE to generate new acquisition process models: SAIV, CAIV, and SCQAIV. CrossTalk, pages 20–25, January 2002.
[9] A. Cockburn. Earned-value and burn charts (burn up and burn down). In Crystal Clear. Addison-Wesley, 2004.
[10] M. Cohn. Agile Estimating and Planning. Prentice-Hall, 2006.
[11] D. D. Galorath and M. W. Evans. Software Sizing, Estimation, and Risk Management. Auerbach Publications, 1st edition, March 2006.
[12] J. Grenning. Planning poker. http://renaissancesoftware.net/files/articles/PlanningPoker-v1.1.pdf, April 2002.
[13] Jazz Foundation. https://jazz.net/projects/jazz-foundation/.
[14] S. Koolmanojwong and B. Boehm. The Incremental Commitment Model process patterns for rapid-fielding projects. In Proc. of the 2010 Int. Conf. on New Modeling Concepts for Today's Software Processes: Software Process (ICSP'10), pages 150–162, Paderborn, Germany, 2010.
[15] W. Krebs, P. Kroll, and E. Richard. Un-assessments: reflections by the team, for the team. In Proc. of Agile 2008, pages 384–389, Washington, DC, USA, August 2008.
[16] P. Kroll and W. Krebs. Introducing IBM Rational Self Check for software teams. http://www.ibm.com/developerworks/rational/library/edge/08/may08/kroll_krebs, May 3, 2008. [Online; accessed 20-January-2012].
[17] L. Lavazza. Providing automated support for the GQM measurement process. IEEE Software, 17(3):56–62, May/June 2000.
[18] T. Menzies, Z. Chen, J. Hihn, and K. Lum. Selecting best practices for effort estimation. IEEE Transactions on Software Engineering, 32(11):883–895, November 2006.
[19] PRICE Systems. Your guide to PRICE-S: Estimating cost and schedule of software development and support, 1998.
[20] L. Putnam and A. Fitzsimmons. Estimating software costs. Datamation, pages 189–198, September 1979.
[21] Rational Team Concert. https://jazz.net/projects/rational-team-concert/.
[22] Standish Group. Chaos summary 2009. http://standishgroup.com, 2009.
[23] Unified Code Counter. http://sunset.usc.edu/research/CODECOUNT/.
[24] J. D. Wiest and F. K. Levy. A Management Guide to PERT/CPM. Prentice-Hall, 1977.
[25] D. Yang, Y. Wan, Z. Tang, S. Wu, M. He, and M. Li. COCOMO-U: An extension of COCOMO II for cost estimation with uncertainty. In Q. Wang, D. Pfahl, D. Raffo, and P. Wernick, editors, Software Process Change, volume 3966 of Lecture Notes in Computer Science, pages 132–141. Springer Berlin / Heidelberg, 2006.
