IMPLEMENTING SOFTWARE METRICS IN AN AGILE ORGANIZATION: A CASE STUDY

Rodrigo A. Bartels
CIS Graduate Program, University of Costa Rica, San Pedro, Costa Rica 2060
(506) 2511-4020, [email protected]

José Rodríguez
CIS Graduate Program, University of Costa Rica, San Pedro, Costa Rica 2060
(506) 2511-4020, [email protected]

Marcelo Jenkins
Department of Computer Science, University of Costa Rica, San Pedro, Costa Rica 2060
(506) 2511-4020, [email protected]

Abstract

In this paper we describe the experience of a small agile software development organization in Costa Rica in starting a metrics program. We provide a description of the program design and the results of the implementation effort. We detail how the ISO/IEC 15939 standard for software measurement was used as a guideline in an organization that uses Scrum. The contents of this paper might interest agile software development organizations starting a measurement program.

Keywords

Software metrics, agile software development, ISO/IEC 15939, Scrum

1. INTRODUCTION

For any software organization, the ability to accurately estimate its project attributes, such as time, effort, and cost, is a priceless advantage in a highly competitive global market. To attain useful estimates of these attributes, the organization must implement a set of software metrics. For this reason, the appropriate use of quality and productivity metrics is one of the most important quality management tools any software organization can have. On the other hand, over the last decade organizations of all types have replaced their traditional software development life cycles with a new set of so-called agile methodologies, whose main principle is to return to the origins of software engineering, when development tasks were mostly empirical and carried out without much planning or documentation.

Almost a decade after the publication of the Agile Manifesto [6] and much experimentation, agile methods have gained acceptance as effective software development methodologies and are now widely used in many different types of organizations. For these organizations, however, the problem of successfully implementing software metrics remains.

From the software measurement standpoint, there is nothing in agile methodologies opposed to measuring quality and performance. On the contrary, the short-iteration, rapid-change-response, rolling-planning, test-driven nature of these methodologies requires constant measurement of projects, processes, and products to ascertain progress. If "working software is the primary measure of progress", as stated in the Agile Manifesto [6], then agile organizations need a method to measure the completion of their products and determine progress. Measurement is a primary tool for managing software life cycle tasks, including planning, controlling, and monitoring of project plans. This is especially true in agile organizations, where these are everyday activities. However, not all metrics processes and standards developed over the years for traditional "non-light" life cycle models can be implemented straightforwardly by agile organizations without some adaptation. They need to follow a different approach to implementing a successful metrics program, particularly if they are small organizations (fewer than 100 people).

In this paper we describe the experience of a small agile software development organization in starting a metrics program. We provide a description of the program design and the results of the implementation effort. We detail how the ISO/IEC 15939 standard for software measurement was used as a guideline in an organization that uses Scrum. The contents of this paper might interest agile organizations starting a measurement program.

2. SOFTWARE METRICS STANDARDS

There are many standards available to help organizations start a measurement program. In our case, the ISO/IEC 15939 standard [15] for establishing software measurement programs served as our main guideline. This standard, based on the Practical Software Measurement (PSM) methodology developed by McGarry et al. [17], describes a four-phase process for implementing a measurement program. Table 1 shows the four phases described in clause 5 of the standard and their main outcomes. The process starts by establishing management commitment and assigning roles and responsibilities to organizational units and individuals. Then, the metrics program is planned by deriving the set of metrics from business goals. Next, in step three, the program is implemented and the first results are obtained. Finally, in step four, the program is evaluated by validating the implemented metrics and making changes accordingly.

We followed a streamlined version of this process to implement our project, since the 15 steps proposed in the standard are too burdensome for small organizations. Besides, the organization was not specifically interested in complying with the standard. In addition, for activity 5.2, Plan the measurement process, we used the Goal/Question/Metric (GQM) methodology originally developed by Basili and Weiss [5], since it has been widely used for more than two decades, is simple to use, and is very effective.

Clause number and name                              | Main outcome
5.1: Establish and sustain measurement commitment   | Management commitment and responsibility, team roles
5.2: Plan the measurement process                   | Selected metrics definition
5.3: Perform the measurement process                | Metrics program implementation results
5.4: Evaluate measurement                           | Metrics analysis and validation

Table 1. Phases described in the ISO/IEC 15939 standard

We also considered the IEEE Standard for a Software Quality Metrics Methodology [13], which describes a process very similar to the one defined in the ISO/IEC 15939 standard, except that it consists of five phases instead of four. Finally, we used the IEEE Std 1045-1992 Standard for Software Productivity Metrics [14] as a guideline to identify some productivity metrics.

3. RELATED WORK

The academic literature is abundant in case studies about implementing metrics programs in software organizations. However, for the purpose of this paper, our particular interest focused on recent work on applying metrics in agile organizations. Many works have been published on this issue. Most recently, [12] proposes a quality model and a set of metrics for agile software development organizations, and [10] discusses the problem of trying to implement conventional metrics models in agile organizations. In [7] the authors present a similar experience, but using object-oriented metrics as part of their agile methodology. In a similar line of work, [19] presents an exercise in validating three different sets of object-oriented metrics in an agile process. Similar studies were published in [20] and [4]. In [8] role playing is used to derive metrics in an academic environment. In addition, [9], [18], [16], and [11] are just a few examples of case studies on implementing agile metrics and processes in traditional software organizations. Additionally, [3] explains the experience of incorporating software metrics practices into an iterative commercial process. An experience in automating the management of the tools for metrics collection is described in [22], and a case study on setting up an agile project management office is explained in [24].

Finally, [23] and [1] contain interesting cases on using earned value management with agile methodologies.

4. THE ORGANIZATION

RoundBox Global (RBX) originated in 2003 in Atlanta, Georgia, USA. Since 2006, the company has had a 90-person development team in San Jose, Costa Rica. The organization develops mainly Java web applications for corporate education and training programs. The development teams are supported by a project management infrastructure along with a quality assurance group.

RBX uses Scrum [21], one of the best-known agile methodologies. Scrum proposes a process model comprising a set of practices and roles that can be used as a guideline to define a software development process. There are three main roles: the ScrumMaster maintains the process and acts as project manager, the Product Owner represents the stakeholders, and the Team includes all the participating developers. In each sprint, which lasts between 15 and 30 calendar days, the team develops a deliverable software increment. The Product Backlog, a prioritized list of high-level software requirements, defines the functionality to be implemented in each increment. For each sprint, during the Sprint Planning meeting, the Product Owner presents requirements from this list to the team, whose members determine how much work can be developed during the sprint. During a sprint, the Sprint Backlog cannot be changed, so a requirements baseline can be defined.

Agile organizations that use Scrum generally have a matrix structure in which each project team has a mix of people from various departments with the skills required for each sprint (e.g., project manager, developer, programmer, quality assurance engineer). Each individual in a team belongs to a different department but works under a single ScrumMaster for a particular project.

The organizational process, depicted in Figure 1, is based on Scrum. The process consists of four main phases:

1. Project Inception: elicits the software requirements and determines the project's goals and scope. Sprints for executing iterations of the following three phases are defined based on this information.
2. UxD Design: designs the application's human-computer interface.
3. Software Engineering: implements the application interface and backend (business and data management layers). This phase is also called development or implementation.
4. Quality Assurance: performs regression and acceptance testing before customer delivery.

[Figure 1 depicts the process flow: Project Inception (Scope Definition, Initial Requirements Gathering), UxD Design (User Experience Design, Client Signoff), Software Engineering (Interactive Engineering, Backend Development), and Quality Assurance (Regression Tests, Release Acceptance).]

Figure 1. The organization's development process.

The last three phases are iteratively repeated in each sprint until the final product is built and delivered to the customer. Each sprint takes between 2 and 4 weeks.

5. METRICS PROGRAM DESIGN

Team members are, of course, responsible for collecting the data. The ScrumMaster is responsible for summarizing the metrics data and making it visible to all team members. Team members are the most important stakeholders of the metrics program; thus, the metrics results should be presented to them in the same way as the user stories (requirements) are discussed and analyzed by the team.

From the metrics standpoint, Management identified two main organizational objectives:

1. Control the number of defects delivered to the customer.
2. Control project time and cost.

Based on these two goals, we used the GQM methodology [5] to build the Goal/Question/Metric trees and derive the software metrics from the organizational objectives. Figure 2 shows the ensuing metrics for the defects goal.

[Figure 2 shows the GQM tree. Goal: Control the number of defects delivered to the customer. Questions and metrics: Where is the origin of defects? - Pareto distribution of defects per origin; How effective is the testing? - # defects / # bugs; How many defects are injected? - # of defects injected per sprint; What types of defects are injected? - Pareto distribution of defects per type. All target values: TBD.]

Figure 2. The GQM diagram for the defects goal.
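To illustrate how such a GQM tree can be captured for later tooling, the following minimal Python sketch models the defects goal with plain data structures. The class and field names are our own illustrative choices and are not part of ISO/IEC 15939, GQM, or the organization's toolset.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Metric:
        name: str
        target: str  # "TBD" until historical data allow setting a target

    @dataclass
    class Question:
        text: str
        metric: Metric

    @dataclass
    class Goal:
        text: str
        questions: List[Question]

    defects_goal = Goal(
        text="Control the number of defects delivered to the customer",
        questions=[
            Question("Where is the origin of defects?",
                     Metric("Pareto distribution of defects per origin", "TBD")),
            Question("How effective is the testing?",
                     Metric("# defects / # bugs", "TBD")),
            Question("How many defects are injected?",
                     Metric("# of defects injected per sprint", "TBD")),
            Question("What types of defects are injected?",
                     Metric("Pareto distribution of defects per type", "TBD")),
        ],
    )

    for q in defects_goal.questions:
        print(f"{q.text} -> {q.metric.name} (target: {q.metric.target})")

A structure like this makes it straightforward to report, for each goal, which questions are answered and which target values are still pending.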

Four metrics were formulated, one for each question posed. In this case, defects are pre-release errors found during development (mostly during testing), and bugs are errors found post-release (by the customer). The organization is mostly interested in knowing the origin and type of defects, and in controlling how many are injected and detected. The organizational target values for the four metrics are yet to be determined in the absence of historical data.

Figure 3 shows the metrics for the time and cost goal. In this case, three metrics were derived. The target value for the SPI metric was determined by Management to be 1.0 plus or minus 10%; that is, they would consider a 10% deviation from the plan an acceptable project variation.

[Figure 3 shows the GQM tree. Goal: Control project time and cost. Questions and metrics: Where is the effort spent? - % distribution of effort per project phase (target: N/A); Are the tasks finished on time? / How accurate was the time estimate? - Schedule Performance Index (SPI) (target: 1.0 +/- 10%); What are the causes of delay? - Cause-effect analysis (target: N/A).]

Figure 3. The GQM diagram for the time and cost goal.

Finding the causes of a project delay is not really a metric but a root-cause analysis task. We included it here because Management wanted that question to be answered.

6. THE IMPLEMENTATION

We performed a pilot implementation of the seven metrics. Several sample projects were selected and data were collected. Figure 4 shows the Pareto chart for the origin of defects. The first two phases (front-end and back-end development) account for 83% of the defects found during development. Figure 5 shows the XmR chart for the number of defects resolved per sprint during a 22-sprint project. The CL, or center line (calculated as the sample average), is 35 defects per sprint. The upper control limit is calculated as the average plus three times the standard deviation. The moving range plotted in the same chart clearly shows high variation from one sprint to another, making the process basically unstable.

Figure 4. Pareto analysis of defects by origin.
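The computation behind a Pareto chart like this one is a frequency count sorted in descending order with a running cumulative percentage. The sketch below assumes defect records tagged with an origin label; the sample values are hypothetical.

    from collections import Counter

    # Hypothetical origin labels, one per defect record
    defect_origins = ["front end", "back end", "front end", "requirements",
                      "back end", "front end", "UxD design"]

    counts = Counter(defect_origins)
    total = sum(counts.values())
    cumulative = 0.0
    for origin, count in counts.most_common():   # sorted by frequency, descending
        cumulative += 100.0 * count / total
        print(f"{origin:15s} {count:3d}  {cumulative:6.2f}% cumulative")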

[Figure 5: XmR chart plotting the resolved defects per sprint, the moving range, the center line (CL), and the upper control limit (UCL) over the 22 sprints.]

Figure 5. Number of defects resolved per sprint for one project.
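A minimal sketch of the control limits used in Figure 5, following the description above (center line = sample average, upper control limit = average plus three standard deviations) together with the moving range; the per-sprint defect counts are invented for illustration.

    import statistics

    defects_per_sprint = [28, 41, 35, 52, 19, 33, 47, 30]  # hypothetical counts

    cl = statistics.mean(defects_per_sprint)        # center line (sample average)
    sigma = statistics.pstdev(defects_per_sprint)   # standard deviation of the sample
    ucl = cl + 3 * sigma                            # upper control limit
    lcl = max(cl - 3 * sigma, 0)                    # lower limit, floored at zero defects

    # Moving range: absolute difference between consecutive sprints
    moving_range = [abs(b - a) for a, b in zip(defects_per_sprint, defects_per_sprint[1:])]

    print(f"CL={cl:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}")
    print("Moving range:", moving_range)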

This unpredictability means the number of resolved defects per sprint cannot be accurately estimated; hence, product quality cannot be precisely estimated either.

Figure 6 shows the effort distribution per phase for three sample projects. In this organization, the quality assurance effort (mainly testing) invested in these three projects is remarkably low. Given that testing is the only independent product verification task performed before customer delivery, the organization should consider investing more effort in up-front quality assurance activities, such as reviews and inspections, to enhance its early defect detection process. As [26] states, poor effort allocation is among the major root causes of rework and poor project performance.

[Figure 6: stacked bars showing the percentage distribution of effort per phase (Requirements, Project Management, QA, UI Design, UI Development, Business Logic) for Projects 1, 2, and 3.]

Figure 6. Effort distribution for three sample projects.
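The per-project percentages plotted in Figure 6 amount to normalizing the person-hours charged to each phase by the project total. A sketch with hypothetical numbers:

    effort_hours = {  # hypothetical person-hours per phase for one project
        "Requirements": 80, "UI Design": 120, "UI Development": 300,
        "Business Logic": 350, "QA": 60, "Project Management": 90,
    }

    total = sum(effort_hours.values())
    for phase, hours in effort_hours.items():
        print(f"{phase:20s} {100.0 * hours / total:5.1f}%")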

On the other hand, average project management effort accounts for about 10% of total effort, an acceptable amount for an agile organization.

Figure 7 shows the Pareto distribution of defect types for one sample project. The first three types of defects (functional, visual, and not reproduced) account for more than 80% of all defects. The other categories are almost negligible. Given that other projects reflect a similar pattern of defect type distribution, the organization can now focus its process improvement initiative on removing the root causes of these categories of defects. Management now has a quantitative assessment of where to invest its quality assurance resources.

Figure 7. Pareto analysis of defects by type.

Figure 8 shows the XmR (average and moving range) chart for the Schedule Performance Index (SPI) for one project with 22 sprints. In this case, SPI variation is relatively low (0.18 on average), and the average value above 1.0 means the project's sprints were mostly delivered ahead of the planned schedule.

[Figure 8: XmR chart plotting the SPI per sprint with its CL, UCL, and LCL, together with the moving range, its CL, and its UCL, over the 22 sprints.]

Figure 8. XmR chart for the Schedule Performance Index (SPI) for one project.

We calculate the SPI metric as the earned value divided by the planned cost, both measured in person-hours. The target value is therefore 1.0, meaning the earned value equals the planned value. Values below this target denote delays. The SPI is computed at the end of each sprint based on the tasks planned for it. The center line (sample average) at 1.1 indicates that, on average, the 22 sprints in the sample ran about 10% ahead of plan. The 22-point sample is almost balanced with respect to the average (about half the points are above and the other half below the center line), and there are no points outside the control range, calculated as the average plus or minus three standard deviations. Additionally, the moving range curve (shown at the bottom of the same chart) shows relatively low variation in SPI from one sprint to the next. The moving range is calculated as the absolute difference between each pair of consecutive points in the sample.
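A sketch of the SPI and moving-range calculation just described, with hypothetical earned and planned person-hours per sprint; the control limits for the resulting series can be derived exactly as in the Figure 5 sketch.

    earned_hours  = [110, 95, 130, 102, 88, 120]   # hypothetical earned value per sprint
    planned_hours = [100, 100, 115, 100, 95, 110]  # hypothetical planned value per sprint

    # SPI per sprint: earned value divided by planned value
    spi = [e / p for e, p in zip(earned_hours, planned_hours)]

    # Moving range: absolute difference between consecutive SPI values
    moving_range = [abs(b - a) for a, b in zip(spi, spi[1:])]

    # Management's acceptable band of 1.0 plus or minus 10%
    within_target = [0.9 <= s <= 1.1 for s in spi]

    print(["{:.2f}".format(s) for s in spi])
    print("Moving range:", ["{:.2f}".format(r) for r in moving_range])
    print("Sprints within target band:", sum(within_target), "of", len(spi))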

7. CONCLUSIONS

This project represents the organization's first attempt at implementing a software metrics program; hence, the metrics implemented in this project are basic but very effective for an agile organization. Minimal overhead was introduced into the Scrum process, and project managers now have a quantitative tool to assess and report project progress and product quality. Such data are now starting to be used for estimation purposes. As a result, the organization is considering measuring software size directly to further enhance the estimation process. Prior to this project, the organization was capturing data about its projects but had no process for analyzing them. Now the organization knows its performance and can undertake improvement initiatives based on that information.

Agile organizations need metrics to control their processes, just as all other types of software organizations do. The issue is to follow a proven process to design and implement them, and to derive a suitable set of metrics that can be accommodated straightforwardly into the agile process with very little overhead. In developing this first metrics project we followed a simplified version of the process described in the ISO/IEC 15939 software measurement standard. Even though our experience shows the suitability of the standard for immature organizations interested in starting a metrics program, the proposed process is too burdensome for small organizations in which most development teams have fewer than ten people. Therefore, some adaptation and simplification are necessary for small organizations, particularly agile ones.

The next step in this metrics program initiative focuses on acquiring better metrics tools to further automate the collection, calculation, and analysis processes. This is imperative in an agile organization, given that only minimal overhead is allowed in the process.

ACKNOWLEDGMENTS

Our thanks to RoundBox Global for allowing us to work with them on this project.

REFERENCES

[1] Alleman, G. B., Henderson, M., and Seggelke, R. 2003. Making Agile Development Work in a Government Contracting Environment - Measuring Velocity with Earned Value. In Proceedings of the Conference on Agile Development (June 25-28, 2003). ADC. IEEE Computer Society, Washington, DC, 114.

[2] Anderson, David J. 2003. Agile Management for Software Development. Prentice Hall Professional Technical Reference.

[3] Anderson, David J. 2005. Stretching Agile to fit CMMI Level 3 - the story of creating MSF for CMMI® Process Improvement at Microsoft Corporation. In Proceedings of the Agile Development Conference (July 24-29, 2005). ADC. IEEE Computer Society, Washington, DC, 193-201. DOI= http://dx.doi.org/10.1109/ADC.2005.42.

[4] Alshayeb, M. and Li, W. 2003. An Empirical Validation of Object-Oriented Metrics in Two Different Iterative Software Processes. IEEE Trans. Softw. Eng. 29, 11 (Nov. 2003), 1043-1049. DOI= http://dx.doi.org/10.1109/TSE.2003.1245305.

[5] Basili, V. R. and Weiss, D. M. 1984. A Methodology for Collecting Valid Software Engineering Data. IEEE Transactions on Software Engineering, vol. SE-10, no. 3, November 1984, pp. 728-738.

[6] Beck, K. et al. 2001. The Agile Manifesto. Downloaded March 6th 2009 from http://www.agilemanifesto.org.

[7] Concas, G., Francesco, M., Marchesi, M., Quaresima, R., and Pinna, S. 2008. Study of the Evolution of an Agile Project Featuring a Web Application Using Software Metrics. In Proceedings of the 9th International Conference on Product-Focused Software Process Improvement (Monte Porzio Catone, Italy, June 23-25, 2008). A. Jedlitschka and O. Salo, Eds. Lecture Notes in Computer Science, vol. 5089. Springer-Verlag, Berlin, Heidelberg, 386-399. DOI= http://dx.doi.org/10.1007/978-3-540-69566-0_31.

[8] Dubinsky, Y. and Hazzan, O. 2006. Using a role scheme to derive software project metrics. J. Syst. Archit. 52, 11 (Nov. 2006), 693-699. DOI= http://dx.doi.org/10.1016/j.sysarc.2006.06.013.

[9] Dubinsky, Y., Talby, D., Hazzan, O., and Keren, A. 2005. Agile Metrics at the Israeli Air Force. In Proceedings of the Agile Development Conference (July 24-29, 2005). ADC. IEEE Computer Society, Washington, DC, 12-19. DOI= http://dx.doi.org/10.1109/ADC.2005.8.

[10] Hartmann, D. and Dymond, R. 2006. Appropriate Agile Measurement: Using Metrics and Diagnostics to Deliver Business Value. In Proceedings of the Conference on AGILE 2006 (July 23-28, 2006). AGILE. IEEE Computer Society, Washington, DC, 126-134. DOI= http://dx.doi.org/10.1109/AGILE.2006.17.

[11] Hirsch, M. 2005. Moving from a plan driven culture to agile development. In Proceedings of the 27th International Conference on Software Engineering (St. Louis, MO, USA, May 15-21, 2005). ICSE '05. ACM, New York, NY, 38-38. DOI= http://doi.acm.org/10.1145/1062455.1062472.

[12] Kunz, M., Dumke, R. R., and Zenker, N. 2008. Software Metrics for Agile Software Development. In Proceedings of the 19th Australian Conference on Software Engineering (March 26-28, 2008). ASWEC. IEEE Computer Society, Washington, DC, 673-678.

[13] IEEE 1999. IEEE Std 1061-1998 Standard for a Software Quality Metrics Methodology. IEEE Inc.

[14] IEEE 1999. IEEE Std 1045-1992 Standard for Software Productivity Metrics. IEEE Inc.

[15] ISO 2002. International Standard ISO/IEC 15939:2002 Software engineering — Software measurement process. First Edition.

[16] Layman, L., Williams, L., and Cunningham, L. 2004. Motivations and measurements in an agile case study. In Proceedings of the 2004 Workshop on Quantitative Techniques for Software Agile Process (Newport Beach, California, November 05, 2004). QUTE-SWAP '04. ACM, New York, NY, 14-24. DOI= http://doi.acm.org/10.1145/1151433.1151436.

[17] McGarry, John, et al. 2002. Practical Software Measurement. Addison Wesley.

[18] Mencke, R. 2008. A Product Manager's Guide to Surviving the Big Bang Approach to Agile Transitions. In Proceedings of Agile 2008 (August 04-08, 2008). AGILE. IEEE Computer Society, Washington, DC, 407-412. DOI= http://dx.doi.org/10.1109/Agile.2008.65.

[19] Olague, H. M., Gholston, S., and Quattlebaum, S. 2007. Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes. IEEE Trans. Softw. Eng. 33, 6 (Jun. 2007), 402-419. DOI= http://dx.doi.org/10.1109/TSE.2007.1015.

[20] Roden, P. L., Virani, S., Etzkorn, L. H., and Messimer, S. 2007. An Empirical Study of the Relationship of Stability Metrics and the QMOOD Quality Models Over Software Developed Using Highly Iterative or Agile Software Processes. In Proceedings of the Seventh IEEE International Working Conference on Source Code Analysis and Manipulation (September 30 - October 01, 2007). SCAM. IEEE Computer Society, Washington, DC, 171-179. DOI= http://dx.doi.org/10.1109/SCAM.2007.2.

[21] Schwaber, K. and Beedle, M. 2001. Agile Software Development with Scrum. Prentice Hall.

[22] Sillitti, A., Russo, B., Zuliani, P., and Succi, G. 2004. Deploying, updating, and managing tools for collecting software metrics. In Proceedings of the 2004 Workshop on Quantitative Techniques for Software Agile Process (Newport Beach, California, November 05, 2004). QUTE-SWAP '04. ACM, New York, NY, 1-4. DOI= http://doi.acm.org/10.1145/1151433.1151434.

[23] Sulaiman, T., Barton, B., and Blackburn, T. 2006. AgileEVM - Earned Value Management in Scrum Projects. In Proceedings of the Conference on AGILE 2006 (July 23-28, 2006). AGILE. IEEE Computer Society, Washington, DC, 7-16. DOI= http://dx.doi.org/10.1109/AGILE.2006.15.

[24] Tengshe, A. and Noble, S. 2007. Establishing the Agile PMO: Managing Variability across Projects and Portfolios. In Proceedings of AGILE 2007 (August 13-17, 2007). AGILE. IEEE Computer Society, Washington, DC, 188-193. DOI= http://dx.doi.org/10.1109/AGILE.2007.24.

[25] Tudor, D. and Walter, G. A. 2006. Using an Agile Approach in a Large, Traditional Organization. In Proceedings of the Conference on AGILE 2006 (July 23-28, 2006). AGILE. IEEE Computer Society, Washington, DC, 367-373. DOI= http://dx.doi.org/10.1109/AGILE.2006.60.

[26] Yang, Y., He, M., Li, M., Wang, Q., and Boehm, B. 2008. Phase distribution of software development effort. In Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (Kaiserslautern, Germany, October 09-10, 2008). ESEM '08. ACM, New York, NY, 61-69. DOI= http://doi.acm.org/10.1145/1414004.1414016.
