Designing and Implementing a Measurement Program for Scrum Teams: What do agile developers really need and want?

Oualid Ktata
Ghislain Lévesque
University of Québec at Montréal
201, avenue du Président-Kennedy
Montréal, Québec, Canada
[email protected]
[email protected]
ABSTRACT
Agile developers are generally reluctant to adopt non-agile practices. Promoted by senior software practitioners, agile methods were intended to avoid traditional engineering practices and focus instead on delivering working software as quickly as possible. Thus, the only measure in Scrum, a well-known framework for managing agile projects, is velocity, whose main purpose is to demonstrate progress in delivering working software. In software engineering (SE), measurement programs have deeper purposes: they allow teams and individuals to improve their development process while providing better product quality and control over the project. This paper describes the experience and the approach used in an agile SE company to design and initiate a measurement program that takes into account the specificities of its agile environment, principles and values. The lessons learned after five months of investigation are twofold. The first shows how agile teams, compared to traditional teams, have different needs when trying to establish a measurement program. The second confirms that agile teams, like many other groups of workers, are reluctant and resistant to change. Finally, the preliminary results show that agile people are more interested in value delivery, technical debt and multiple aspects of team dynamics, and will cooperate in the collection of data as soon as their tools can do it for them. It is believed that this research could suggest new guidelines for elaborating specific measurement programs in other agile environments.
Categories and Subject Descriptors
K.6.3 [Management of computing and information systems]: Software Management – Software development, Software process.
General Terms
Management, Measurement and Economics.
Keywords Agile software process, agile metrics, measurement program, Scrum, business value, goal-question-metric.
1. INTRODUCTION
The research work presented here aimed at designing and implementing a measurement program in a purely agile environment that had no systematic measurement program yet. The company hosting the program is a leading SE company in agile software development and consulting. It has fully embraced the agile paradigm since its creation in 2002 and adopted Scrum as its project management technique. Furthermore, the company has a flat organizational structure and applies the agile paradigm to all its internal managerial activities. Like many other SE firms, the company was experiencing major problems with the estimation of software projects and had no valuable reference based on completed projects. It was therefore clear that the company could not maintain its leading edge as an agile SE company for long if nothing was done to solve these problems. A collaborative research project involving the university and a doctoral candidate was initiated with the company to introduce a measurement program compatible with Scrum, with the aim of monitoring and improving software process performance and eventually building a database of completed projects as a reference to support estimation and learning. As researchers, our mandate was to model the development process as executed in day-to-day operations by the different development teams in the organization, to initiate a measurement program and to test it in a pilot project. This paper begins with an overview of the Scrum framework and the context of this investigation. Grounding considerations are then stated, followed by a presentation of the methodological approach. Finally, an intermediate set of results is presented and discussed. In conclusion, lessons learned from this investigation are presented as guidelines for similar experiences.
2. OVERVIEW
In this section, an introduction to the main agile principles and Scrum activities is presented first, followed by a brief description of the organizational environment in which this work took place.
2.1 Introduction to agility and Scrum
For a decade now, agile software development has been winning adherents in the software community and is becoming a real alternative to traditional software development. Agile development has outlived the software crisis 'by legalizing what was forbidden' by traditional plan-driven development approaches. Instead of avoiding change, it defines itself as a response to it. Instead of trying to understand users' needs as completely as possible up front, it combines feature planning and prioritization with progressive iterative cycles. Instead of over-optimizing, the 'just enough' principle is enacted. Instead of involving the customer through documentation, face-to-face collaboration is promoted. Instead of analyzing risk and uncertainty thoroughly, tackling them empirically by prototyping is the rule [1]. Scrum identifies three distinct roles [2]: (1) the Product Owner (P.O.), the development leader and the single role, held by the customer, accountable for the success of the project; (2) the development team, a self-organized and cross-functional group of developers; and (3) the Scrum Master (S.M.), a facilitator responsible for the team's adherence to the Scrum process. Agile methods reinvent software development by encapsulating the development effort in the team self-organization principle. In fact, requirements engineering, project management (planning, estimating), coding and testing are now performed by a self-organized team with no hierarchical constraints. The code is collectively owned and maintained. User stories are collectively identified and estimated. The 'just enough' principle eliminates unnecessary effort spent on over-optimization. Code refactoring is the key practice for keeping the code clean. Continuous delivery of working software shows development progress. Tacit knowledge transfer is the main vehicle of communication. Whereas pre-planned development seeks to avoid risks, evolutionary development harnesses nature and confronts risks [3]. The Scrum Master is responsible for managing the Scrum process so that it fits within an organization's culture and still delivers the expected benefits, and for ensuring that everyone follows Scrum rules and practices. The Scrum Master is also responsible for resolving impediments encountered during the Sprint to ensure the smooth running of the development process [4]. At the end of the Sprint, a Sprint review meeting is held at which the team presents the Sprint results to the Product Owner. After the Sprint review and prior to the next Sprint planning meeting, the Scrum Master also holds a Sprint retrospective meeting to ensure continuous improvement [3].
2.2 Context of investigation
Here are some interesting facts about the agile environment under study:
• All company developers are Scrum Master certified.
• The company provides certified Scrum Master and Product Owner trainings and counts three certified Scrum trainers among its staff.
• Most of the development is done according to Test-Driven Development (TDD).
Five projects were under analysis for approximately five months. As observers, we participated in some of their activities, such as daily scrums, retrospectives and sprint planning sessions. The average team size is four developers with one Product Owner (P.O.) and one Scrum Master (S.M.). When observations started, one of the five projects was in iteration 0 and another was in its closing step. The other three were in progress: one is a client-oriented development, and the two others are in-house commercial product developments.
3. GROUNDING CONSIDERATIONS
Dave Nicolette warns that poorly designed metrics lead to poor outcomes [5]. Before going further with this investigation, it was decided to carefully analyse several aspects before committing to any intermediate outcome. Fully understanding the measurement program environment was the first priority. As a second priority, the Goal-Question-Metric approach was used to take into account the specific needs of the company. The agile heuristics presented in section 3.2 were very helpful in adapting the approach and creating a highly dependable measurement program. Finally, the general pitfalls that any measurement program implementation can face were considered and dealt with appropriately.
3.1 Contextual considerations
Here are some of the decisions that were made to accommodate the specificities of the company environment.
D1: It was decided to observe and represent the Scrum process used in the company as is, instead of focusing on the Scrum framework as described in the literature. The reason is that some influential stakeholders consider the company mature enough to use Scrum efficiently, with no need for a measurement program to replace common sense and intuitive decision making. However, the first observations showed that the company was not as mature as was thought. Thus, analysing Scrum as practised by the company avoids many wrong assumptions and could make stakeholders aware of the necessity of a consistent measurement program.
D2: BPMN (Business Process Modeling Notation) was used as the visual notation to represent the processes. BPMN is a standard notation maintained by the O.M.G. [6]. This traditional way of representing the process facilitates in-depth analysis. To improve readability, a CMMI-like representation was used as a complement to the visual representation (a CMMI representation is a tabular representation describing, sequentially, all software activities involved in a process, a sub-process or an activity).
D3: All members of the company (developers, P.O.s and high management (H.M.)) were invited to express their needs for the indicators required to do their jobs. The first reason behind this decision was that, in the first meetings, developers seemed afraid both of the overhead work they might have to deal with and of a misuse of metrics that could create a hostile working environment. Accordingly, any measure introduced would mostly be seen as waste if it did not benefit them directly. Another reason was that the few valuable metrics found in the literature, such as Earned Value Measure, Running Tested Features, etc., had not been adopted internally. The company still relies solely on the burn-down chart to monitor progress and uses intuition as its primary decision tool.
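For reference, the burn-down chart mentioned above reduces to a very small computation: the story points still open at the end of each day, compared against a straight-line ideal. The following minimal sketch uses hypothetical sprint numbers, not the company's data:

```python
# Minimal burn-down computation (hypothetical sprint data, for illustration only).
# A burn-down chart plots the story points still open at the end of each day
# against an ideal straight-line trend toward zero.

sprint_days = 10
committed_points = 40
# Points actually completed on each day of the sprint (hypothetical figures).
completed_per_day = [0, 3, 5, 2, 6, 4, 0, 7, 5, 4]

remaining = committed_points
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    ideal = committed_points * (1 - day / sprint_days)  # straight-line reference
    print(f"day {day:2d}: remaining={remaining:2d}, ideal={ideal:4.1f}")
```

A gap between the `remaining` series and the `ideal` series is the only signal such a chart provides, which is precisely why it cannot, by itself, support the deeper analyses discussed in this paper.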
3.2 Agility considerations
It is easy to acknowledge that identifying metrics for software process improvement purposes is tedious, particularly in the context of agile environments. In fact, as Hartmann [7] and Nicolette [5] suggest, agile metrics should consider not only technical aspects but also human aspects. The heuristics for good agile measurement given by Deborah Hartmann and Robin Dymond [7] and by Levison [8] are summarized here:
• Don't create false measurement goals
• Acquire implicit quality models from the team
• Consider context
• Derive appropriate metrics
• Stay focused on goals when analyzing data
• Let the data be interpreted by the people involved
• Integrate the measurement activities with regular project activities
• Do not use measurement for other purposes
• Secure management commitment to support measurement results
A good agile metric:
• Affirms and reinforces Lean and Agile principles
• Measures outcome, not output
• Follows trends, not numbers
• Belongs to a small set of metrics and diagnostics
• Is easy to collect
• Reveals, rather than conceals, its context and significant variables
• Provides fuel for meaningful conversation
• May measure Value (Product) or Process
• Encourages "good-enough" quality
Furthermore, Nicolette calls attention to the fact that practitioners of agile must be mindful of its values and principles and choose techniques appropriate to their organization and situation in the short term, while remaining alert to opportunities for long-term improvement [5]. Finally, Hartmann and Dymond remind us that when designing a metric, it is important to consider not only when to use it, but also when to stop using it and how it can be gamed [7].
3.3 GQM considerations
In the literature, the implementation of software measurement programs has already been discussed thoroughly. Most implementations use the Goal Question Metric (GQM) approach or an adaptation of it, such as GQ(I)M, where the I stands for Indicator [9]. To accommodate the company's needs and context, it was decided to stick to the specific considerations of GQM presented here [10]:
• Get the right people (at all levels of development) involved in the GQM process
• Set explicit measurement goals and state them explicitly
• Thoroughly plan the measurement program and document it (explicit and operational definitions)
• Establish an infrastructure to support the measurement program
• Ensure that measurement is viewed as a tool, not the end goal
• Get training in GQM before going forward
As can be seen from these three sets of considerations (Company, Agile and GQM), many considerations overlap. This implies that they must be handled carefully in order to succeed in designing and implementing a dependable measurement program in such environments.

3.4 Measurement program approach
As mentioned before, the adopted approach aims at identifying indicators (goals) that are relevant to the people involved in the measurement program. The ultimate goal is to create a dependable program that can be fully supported not only by high management but above all by the developers themselves, since they are the providers of the low-level data. Getting buy-in from developers is one of the main success factors established with the high management of the company. The main planned steps are described below:
1. Model the development process.
2. Analyze the process and identify general areas of improvement.
3. Provide recommendations to support the measurement model.
4. Apply GQM phases to establish the measurement program:
   4.1 Step 1: identify indicators.
   4.2 Step 2: choose the most relevant indicator, the one that needs to be answered first.
   4.3 Step 3: create a cause-effect diagram to distinguish between a real problem and a symptom.
   4.4 Step 4: propose a plan to address the root cause using specific metrics.
5. Identify changes to implement in order to support the fulfillment of the chosen indicator.
6. Develop mechanisms for data collection and, iteratively, make sure to keep this step as smooth as possible.
7. Collect, validate and analyze the data in real time to provide feedback to projects for corrective action.
8. In a post-mortem phase, analyse the results, show value and identify improvements.
This plan enables a stepwise introduction of some metrics that can be incorporated into the Scrum method seamlessly, without affecting the agility of the development process. Once the pilot project confirms the new metrics and the team overcomes the cultural change, new indicators can be implemented.

4. RESULTS AND DISCUSSIONS
At this point of the research, the intermediate results are divided into three categories: the results after the modeling milestone (steps 1 to 3), the results after the indicator survey milestone (steps 4.1 and 4.2), and the results after the cause-effect diagram milestone (steps 4.3 and 4.4).

4.1 Observations from the modeling of the day-to-day processes
Following the modeling steps, several issues were identified that could directly compromise the data collection step (step 7). Here are some examples of these issues, which need to be solved first:
• No agreed definition of work items and their types: the agile management is too flexible, causing terminology problems;
• No discipline or constraint in entering estimated time and real time for tasks;
• Inconsistent task creation: tasks are often not created in the system; furthermore, when created, some tasks are not linked to their respective user story;
• Some bad practices, such as re-opening a user story, are observed and common. Such re-opening makes the data inconsistent and worthless.
Other obstacles influence the overall success of the measurement program:
• Visibility of technical debt: the S.M. and the rest of the team have an urgent need to quantify the debt accumulated over time in order to convince the P.O. that it is time to pay back some of this debt. Currently, the team is ill-equipped to document and justify such technical debt.
• Failure to learn: knowing the real size of a user story at completion would be extremely useful for the next stories. Currently, the size data is not updated during a sprint and no improvement of the estimation capacity is observable. As a result, new estimations are made based on old estimations and never on updated data.
The main recommendations resulting from this modeling activity consist of:
• Creating a consistent terminology, along with appropriate relationships between work items and rules to follow (see the sketch after this list);
• Convincing people that more discipline is required in day-to-day activities, for the benefit of all developers;
• Accepting the use of metrics as a tool to support decision making and make the currently used empirical process less chaotic;
• Initiating a learning process in a more efficient way, rather than relying entirely on the team's own knowledge or intuition.
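As a rough illustration of the first recommendation, a minimal work-item schema could enforce that every task exists only in relation to a user story and carries both estimated and actual figures. The item types, fields and rule below are assumptions for illustration, not the company's actual vocabulary or tooling:

```python
# Sketch of a consistent work-item vocabulary with an enforced relationship rule.
# Types, fields and the linking rule are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    title: str
    estimated_hours: float
    actual_hours: Optional[float] = None  # to be filled in at completion

@dataclass
class UserStory:
    title: str
    estimated_points: int
    tasks: List[Task] = field(default_factory=list)

def add_task(story: UserStory, task: Task) -> None:
    """Rule: a task may only exist attached to a user story."""
    story.tasks.append(task)

story = UserStory("Export report as PDF", estimated_points=5)
add_task(story, Task("Design export dialog", estimated_hours=4.0))
print(story)
```

Such a rule, however it is implemented in the actual tracking tool, would directly address the orphan-task and missing-actuals problems observed above.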
4.2 Looking for the managing indicators needed
The next step, after identifying major impediments and possible improvements, is the identification of a pipeline of indicator needs, or improvement areas, that all stakeholders agree upon. The aim of this step is to prioritize these indicator needs and to choose the most valuable ones for in-depth analysis, while ensuring shared ownership of the program. Initially, the approach consisted of individual semi-structured interviews based on the GQ(I)M template for indicator gathering [11] shown in table 1. Unfortunately, most of the interviewees had trouble following the template. As a countermeasure, a more familiar user-story-like template was proposed. With this change, it was noticed that interviewees enjoyed the exercise and the gathering work went faster.

Table 1: Template used initially to gather indicator needs
  Object:      The product or process under study; e.g., testing phase or a subsystem of the end product
  Purpose:     Motivation behind the goal (why); e.g., better understanding, better guidance, control, prediction, improvement
  Focus:       The quality attribute of the object under study (what); e.g., reliability, effort, error slippage
  Viewpoint:   Perspective of the goal (whose viewpoint); e.g., project manager, developer, customer, project team
  Environment: Context or scope of the measurement program; e.g., project X or division B

Here is an example of a user story used in gathering indicator needs:
'As a developer I would like to know, for each user story, the number of tasks forgotten during the initial planning session in order to improve my ability to decompose user stories into tasks.'
Clearly, this can easily be brought back to the standard GQ(I)M form, as presented in table 2:

Table 2: User story decomposition ability indicator
  Object:      User story decomposition activity, which belongs to the User Story Cycle
  Purpose:     Better understanding, better estimation, prediction
  Focus:       Developer and team ability to handle requirements
  Viewpoint:   Developer
  Environment: Project level
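The translation from the user-story form back to the GQ(I)M template is mechanical once the five fields are identified. A minimal sketch of how such an indicator need could be captured as a structured record follows; the class itself is an illustrative assumption, while the field values are those of table 2:

```python
# GQ(I)M indicator need captured as a structured record (illustrative sketch).
# Field values taken from table 2; 'object_' avoids shadowing the builtin name.
from dataclasses import dataclass

@dataclass
class IndicatorNeed:
    object_: str      # product or process under study
    purpose: str      # motivation behind the goal
    focus: str        # quality attribute of interest
    viewpoint: str    # whose perspective
    environment: str  # scope of the measurement

decomposition_ability = IndicatorNeed(
    object_="User story decomposition activity (User Story Cycle)",
    purpose="better understanding, better estimation, prediction",
    focus="developer and team ability to handle requirements",
    viewpoint="developer",
    environment="project level",
)
print(decomposition_ability)
```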
At this point, it is worth mentioning that the level of understanding of which indicators should be used, or are needed, varies from one interviewee to another. This variation led to some ambiguity between the different levels of abstraction in the GQM method. The adaptation of the GQM approach was done according to the well-defined levels described in the literature: indicators, metrics and measures [12]. To deal with these ambiguities, indicators and metrics were treated equally, since at this point of the experience the goal is to identify the most valuable starting point for an in-depth analysis. It is also worth mentioning that every stakeholder was interviewed separately, without prior knowledge of what other stakeholders had already identified. The data collected as a result of this survey is reported below, beginning with table 3.

Table 3: Total stakeholders and indicators by role
  Voters                               Total
  Total Stakeholders by real role      12
  Total Distinct Indicators by role    30
  Total Vote Indicators by role        41

The first set of indicators is related to team dynamics (table 4). The second set is related to process and project management issues (table 5). The next set of indicators, as detailed in table 6, concerns customer involvement in software development. The final set deals with internal quality aspects of code (table 7).

Table 4: Team dynamics indicators
  Indicator need or improvement area                        Votes
  Team dynamics indicators (category total)                 16
  Visibility of Debt (technical)                            4
  Transparency on collaboration issues                      1
  Team efficiency (in taking the right decisions)           4
  Individual delivery performance: contribution to value    3
  Team and individual motivational level variations         1
  Team performance: evaluate learnability                   2
  Maturity: on transitions to agility                       1

Table 5: Process and project related indicators
  Indicator need or improvement area                        Votes
  Process and project related indicators (category total)   12
  Estimation of user stories in terms of size and time      5
  Estimation of tasks decomposition                         1
  Adherence to the process and good practices               2
  User story cycle time (identification to completion)      1
  Work in progress: variability in time                     1
  Project governance: risk management: indicator about risky user stories  2

Table 6: Customer related improvement indicators
  Indicator need or improvement area                        Votes
  Customer related improvements (category total)            11
  Visibility of Business Value                              7
  Visibility on support performance                         1
  Financial aspects of projects, ROI                        2
  Customer satisfaction indicator                           1

Table 7: Internal quality indicators
  Indicator need or improvement area                        Votes
  Internal quality aspects (category total)                 2
  Test coverage: number of tests in each level of test and area of the software  2
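Tallying the survey is straightforward; the sketch below recomputes the category totals of tables 4 to 7 from the individual rows (data transcribed from the tables above, with some labels abbreviated):

```python
# Recompute the category totals from the indicator votes in tables 4-7.
votes = {
    "Team dynamics": {"Visibility of Debt (technical)": 4,
                      "Transparency on collaboration issues": 1,
                      "Team efficiency": 4,
                      "Individual contribution to value delivery": 3,
                      "Motivational level variations": 1,
                      "Evaluate learnability": 2,
                      "Maturity on transitions to agility": 1},
    "Process and project": {"Estimation of user stories (size, time)": 5,
                            "Estimation of task decomposition": 1,
                            "Adherence to process and good practices": 2,
                            "User story cycle time": 1,
                            "Work-in-progress variability": 1,
                            "Risky user stories": 2},
    "Customer related": {"Visibility of Business Value": 7,
                         "Visibility on support performance": 1,
                         "Financial aspects, ROI": 2,
                         "Customer satisfaction": 1},
    "Internal quality": {"Test coverage": 2},
}
for category, items in votes.items():
    print(f"{category}: {sum(items.values())} votes")  # 16, 12, 11, 2
```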
In conclusion, the need for indicators is concentrated in the team dynamics and process and project management categories, which are key areas for software developers. However, the single indicator receiving the most votes is a customer-oriented one, namely visibility of business value. The next is estimation of user stories in terms of size and decomposition into tasks. The third is related to visibility of the technical debt. The least important category of indicators concerns the internal quality of code.
4.3 Searching for what was behind the concept of technical debt
Most problems in organizations are systemic: the 'system' (the organization) has a glitch that needs to be fixed, and until the source of the glitch is found, most attempts to fix the problem will be futile or even counterproductive [13][14]. The primary value of diagrams is in the discussion that happens while diagramming; the aim of modeling is to have a conversation [13]. With this intent, a cause-effect diagram was used to trigger conversations that would lead the team to share their concerns and issues and come to a shared view. Since many problems were identified in step 4.1, a cause-effect diagram was necessary to dig further into the dynamics surrounding these problems. The technical debt problem was selected as a starting point, for a very simple reason: developers were burdened by the heaviness of the process due to technical debt, teams were directly impacted by it, and the P.O. did not seem to understand the developers' dilemma.
4.3.1 About the technical debt dilemma
As defined by David Draper, at its broadest, technical debt is any aspect of the current system that is considered sub-optimal from a technical perspective; the debt reflects the cost of ownership of that trait. For example, an overly complex and untidy method is sub-optimal: it incurs cost each time it needs to be re-visited and re-understood, due either to a defect in the logic or to a new requirement, and it obviously incurs more cost if it is re-visited more often [15]. Two kinds of technical debt coexist. The first category is a direct result of doing the simplest thing that could work and resisting the temptation to predict the future: a programmer develops simple software that is easily changed through effective use of tests, and the code being written is well written. This healthy technical debt is the kind of work that the P.O. agrees to delay in order to get value sooner. As the price of this debt, the P.O. agrees to pay interest when the time comes to liquidate the debt, usually before shipping a suitable version. Very often, the trigger of such behaviour is a business opportunity. The second category is unhealthy technical debt. It is usually a trick used by developers to solve a problem at the expense of quality. In fact, in most cases, such untidy or poorly designed software incurs cost both in the short and in the long term: in the short term, defects delay the release of the software, and in the long term the software is difficult to maintain and rigid in the face of changing business needs. Figure 1 presents a cause-effect diagram that explains the unhealthy technical debt problem.
Figure 1: The cause-effect diagram of the unhealthy technical debt. The bold lines indicate the root-cause path, while the dotted ones show vicious cycles. The star flag indicates the root causes that need to be addressed to solve the unhealthy technical debt.
What is the final outcome of this cause-effect diagram? The problem identified is unhealthy technical debt. The diagnosis: underestimation leads to a false impression of productivity; coupled with a commitment to deliver software by the end of the sprint, developers use the estimation as a target and try to cut corners, thereby creating more debt. The proposed solution is therefore to increase the team's estimation ability, which will provide better size estimates and consequently generate less debt.
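Figure 1 itself is not reproduced here, but its root-cause reading can be mimicked with a toy directed graph: root causes are the nodes that no other node points to. The edges below are a simplified, hypothetical subset of the diagram, not a transcription of it:

```python
# Toy cause-effect diagram as a directed graph: edge (a, b) means "a causes b".
# Edges are a simplified, hypothetical subset of figure 1.
edges = [
    ("optimistic estimation", "false impression of productivity"),
    ("false impression of productivity", "corner cutting under sprint pressure"),
    ("corner cutting under sprint pressure", "unhealthy technical debt"),
    ("unhealthy technical debt", "slower delivery"),
    ("slower delivery", "corner cutting under sprint pressure"),  # vicious cycle
]
causes = {a for a, _ in edges}
effects = {b for _, b in edges}
root_causes = causes - effects  # nodes that never appear as an effect
print(root_causes)  # {'optimistic estimation'}
```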
The next section looks at this problem from the developers' perspective.

4.3.2 About estimation issues
Estimation is a wicked problem in agile environments. Scrum pioneers believe that planning poker estimations give results with a mean error rate of 20%, which is considered acceptable; they believe there is no need for extra effort to do better than that, and that the 'just enough' principle should apply instead. All the Scrum literature admits this [16]. However, it should also be admitted that this is highly dependent on the maturity level of the developers involved. In this investigation, it was noticed that, most of the time, developers estimating user stories never achieved such rates (an MRE of 20%). Worse, they use the estimates as targets (Parkinson's Law [17]) and the development effort becomes stressful, since everyone is committed to achieving the goal. In the end, technical debt accumulates over sprints at incredible rates and the overall motivation of the team suffers. One argument advanced by some sceptics in the organisation is that at the end of a cycle, sprint or release, good estimations and bad estimations balance out in the final results. This is visible in the way the velocity of the team is calculated: if points are not credited in one sprint because the team did not deliver the required user story, this shows up in the next sprint, when the velocity rises. Technically speaking, this is correct. However, in terms of lessons learned, teams lose opportunities to learn from their mistakes. It is believed that recording size and time estimates and comparing them with the real figures may lead to a 20% variation on estimations, which is far more interesting than making a 150% error one way and another 150% the other way, without ever knowing why and where.
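The point about offsetting errors is easy to make concrete. With hypothetical per-story estimates and actuals, the aggregate totals can look perfectly on target while the magnitude of relative error per story, MRE = |actual − estimate| / actual, stays very large:

```python
# Per-story MRE versus aggregate totals (hypothetical estimates and actuals).
# MRE = |actual - estimate| / actual; MMRE is its mean over the stories.
stories = [  # (name, estimated points, actual points)
    ("A", 8, 3), ("B", 3, 8), ("C", 5, 5), ("D", 2, 4), ("E", 6, 4),
]

total_est = sum(e for _, e, _ in stories)
total_act = sum(a for _, _, a in stories)
mmre = sum(abs(a - e) / a for _, e, a in stories) / len(stories)

print(f"aggregate: estimated={total_est}, actual={total_act}")  # 24 vs 24: looks fine
print(f"mean MRE per story: {mmre:.0%}")  # about 66%, despite the perfect aggregate
```

A velocity-style aggregate hides exactly this kind of variation, which is why per-story comparison of estimates and actuals is proposed here as the learning instrument.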
5. CONCLUSION
The idea in this paper was to show how difficult and challenging it is to design and implement an appropriate measurement program in what can be called a hostile environment. The agile environment hosting this investigation was studied thoroughly, and impediments were clearly identified and dealt with openly. The risks of failure in implementing such programs come primarily from resistance to change and from the fear of increased overhead activities to support data collection. A second risk comes from choosing an inappropriate set of indicators. These two risk factors were addressed by integrating all interested stakeholders in shaping the measurement program. To go beyond merely involving them, it is important to understand their most pressing problems and make them visible, so as to reach a shared solution. This creates a feeling of ownership that motivates teams and makes the necessary but undesired overhead work seamless. These observations and intermediate results show that unhealthy technical debt, business value visibility and estimation issues are the most urgent needs that all stakeholders agreed upon. From the developers' perspective, technical debt seems to have the worst impact on team productivity, collaboration efficiency and transparency, and ultimately a severe impact on product quality. After digging further using a cause-effect diagram, technical debt appears to be a symptom of the real problem. The true root cause lies in the optimistic estimations provided by the team, which are themselves caused by a lack of real data with which to understand what explains the variations in estimations and to learn from them. Thus, the next challenge of this project is to tackle the estimation issue, which will give the team more knowledge of their real productivity and induce a positive impact on their commitment to the P.O. Ultimately, they will have more time to do things right without accumulating unhealthy technical debt. It is hoped that the presented approach will be helpful for further research in the area of Scrum-based development processes.
6. ACKNOWLEDGMENTS
This work is supported by MITACS Inc., a Canadian research network under the Networks of Centres of Excellence (NCE) program, by FQRNT and by an industrial partner. All opinions, findings, conclusions and recommendations of this work are those of the authors and do not reflect the views of these partners.

7. REFERENCES
[1] Ktata, O. and Lévesque, G. 2009. Agile development: issues and avenues requiring a substantial enhancement of the business perspective in large projects. In Proceedings of the 2nd Canadian Conference on Computer Science and Software Engineering (Montreal, Quebec, Canada, May 19-21, 2009). C3S2E '09. ACM.
[2] Schwaber, K. The Enterprise and Scrum. Microsoft Press, 2007.
[3] Shore, J. The Art of Agile Development. O'Reilly, 2008.
[4] Mahnic, V. and Vrana, I. Using stakeholder-driven process performance measurement for monitoring the performance of a Scrum-based software development process. Electrotechnical Review, Ljubljana, Vol. 74, No. 5, 2007, pp. 241-247.
[5] Nicolette, D. Agile metrics. Presentation, Agile Conference 2009. http://davenicolette.wikispaces.com/Agile+Metrics
[6] Fischer, L. (editor). 2008 BPM & Workflow Handbook - Spotlight on Human-Centric BPM. May 5, 2008.
[7] Hartmann, D. and Dymond, R. 2006. Appropriate agile measurement: using metrics and diagnostics to deliver business value. In Proceedings of the Conference on AGILE 2006 (July 23-28, 2006). IEEE Computer Society.
[8] Levison, M. What is a good agile metric? InfoQ, November 2009.
[9] Van Solingen, R., Basili, V., Caldiera, G. and Rombach, D. H. Goal Question Metric (GQM) approach. In Encyclopedia of Software Engineering (Marciniak, J. J., ed.), online version at Wiley Interscience, John Wiley & Sons, 2002.
[10] Ragland, B. Measure, metric, or indicator: what's the difference? Software Technology Support Center, http://www.stsc.hill.af.mil/crosstalk/1995/03/Measure.asp, last visited January 2009.
[11] Van Solingen, R. and Berghout, E. The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development. McGraw-Hill, 1999.
[12] Basili, V. R. Software modeling and measurement: the Goal Question Metric paradigm. Computer Science Technical Report Series, CS-TR-2956 (UMIACS-TR-92-96), University of Maryland, College Park, MD, September 1992.
[13] Ackoff, R. L. Systems Thinking for Curious Managers. Triarchy Press, 2010.
[14] Kniberg, H. Cause-effect diagrams. http://www.crisp.se/henrik.kniberg/cause-effect-diagrams.pdf, last viewed January 2009.
[15] Draper, D. Technical debt revisited. http://www.agiledesign.co.uk/technical/technical-debt-revisited/, last visited January 2010.
[16] Boehm, B. and Turner, R. 2003. Balancing Agility and Discipline: A Guide for the Perplexed. Addison-Wesley Longman Publishing Co., Inc.
[17] McConnell, S. Software Estimation: Demystifying the Black Art. Redmond, WA: Microsoft Press, 2006.