NEWSLETTER

ISSN 1325-2070 Volume 16, Issue 2 February 2004

From the chairman - Capability Maturity Model Integration

Aiming to help small-to-medium software developers lift their game, the Victorian Government is providing $1 million over the next 3.5 years for obtaining Capability Maturity Model Integration (CMMI) Level 2. While applauding the good intentions (and of course the funding), I have personal reservations about how this is being carried out. To be blunt, I think it is a big mistake to push CMMI certification as a good approach, let alone the only approach, for small-to-medium developers.

I am not against CMMI per se; in fact I am currently trying to organise a CMMI course on similar lines to the SPICE upgrade course we ran last month. But we have to bear in mind that the CMM and CMMI models were designed to show that major suppliers are capable of meeting US DoD requirements. These organisations need CMMI to qualify as DoD suppliers. For other organisations, especially small-to-medium developers, I think there are other equally effective approaches which require a fraction of the time and effort, both to implement and to maintain. What about ISO 9001 and related software industry guidance? That is accepted as at least equivalent to CMMI Level 2 and comes at a fraction of the CMMI cost.

The Victorian Government fact sheet (see below) gives the impression that the grant, capped at $10,000, represents 50% of the cost (although the Multimedia Victoria contact advises this is probably a mistake). It is not clear what the expected costs are, but you can bet they will be a lot more than $20,000. One CMM study suggests it takes about two years per level to move between CMM Level 1 and CMM Level 3, at a cost ranging from $500 to $2,000 per person per year (see Process Assessment: Considered Wasteful below). Looking at return on investment (ROI) rather than cost does not put the CMMI in a favourable light; see http://davidfrico.com/roi-metrics-f.htm.

From the early years there have been arguments about the CMM, and the criticisms mostly still apply to CMMI: no formal theoretical basis, only vague empirical support, overly prescriptive, misdirects process improvement efforts, flawed assessment processes. For more detail, check out The Immaturity of CMM (see below), and also Can You Trust Software Capability Evaluations? at http://csdl.computer.org/comp/mags/co/2000/02/r2028abs.htm. For a more positive view, check out CMM vs. CMMI: From Conventional to Modern Software Management at http://www.therationaledge.com/content/feb_02/f_conventionalToModern_wr.html

Another argument against pushing CMMI is the recent software industry backlash against old-style methodologies. Can we ignore the Agile Alliance (eXtreme Programming, et al)? Apart from that, the old process-based approaches are now being questioned by new standards. The traditional models assume that the best we can do is to evaluate the processes because it is not feasible to evaluate the product. We now have software product evaluation standards (ISO/IEC 9126 and 14598) which challenge that assumption.
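To put the numbers above into perspective, here is a minimal sketch (using the quoted $500 to $2,000 per person per year and roughly two years per level, with assumed team sizes) of what a one-level move might cost against a grant of 50% of cost capped at $10,000. The headcounts and resulting figures are illustrative assumptions only, not program data.

```python
# Illustrative only: rough cost of moving up one CMM/CMMI level for a small
# developer, using the quoted range of $500-$2,000 per person per year and
# about two years per level, compared with a grant of 50% of cost capped at
# $10,000. Headcounts are assumptions, not figures from the program.

YEARS_PER_LEVEL = 2
COST_PER_PERSON_PER_YEAR = (500, 2000)   # low and high estimates from the study
GRANT_RATE = 0.50
GRANT_CAP = 10_000

def level_move_cost(headcount: int) -> tuple[float, float]:
    """Estimated low/high cost of moving up one maturity level."""
    low, high = COST_PER_PERSON_PER_YEAR
    return (headcount * low * YEARS_PER_LEVEL,
            headcount * high * YEARS_PER_LEVEL)

def grant(cost: float) -> float:
    """Grant payable: 50% of cost, capped at $10,000."""
    return min(GRANT_RATE * cost, GRANT_CAP)

for headcount in (10, 25, 50):           # assumed team sizes
    low, high = level_move_cost(headcount)
    print(f"{headcount:>3} staff: cost ${low:,.0f}-${high:,.0f}, "
          f"grant ${grant(low):,.0f}-${grant(high):,.0f}, "
          f"out of pocket ${low - grant(low):,.0f}-${high - grant(high):,.0f}")
```

On these assumed figures the cap starts to bite as soon as the total cost passes $20,000, which even a ten-person team reaches at the upper end of the quoted range.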

Ted Smillie

In this Issue
SQA (NSW) Executive Committee 2003/2004 .......................... 2
SQA National Contacts ............................................ 2
$1M Support for Victorian Software Developers .................... 2
Is CMMI Appropriate for SME? ..................................... 3
Process Assessment: Considered Wasteful .......................... 5

SQA (NSW) aims to provide a forum for the free exchange of ideas. Expressions of position or opinion appearing herein are solely those of the author and are not by fact of publication necessarily those of SQA (NSW).

Executive Committee 2003/2004
Chairman: Ted Smillie, Montrose Computer Services, 9212 0300, [email protected]
Vice Chairman: Helen Aitken, 0410 608 670, [email protected]
SQA Administration: Jeff Lock, TAFE, 9244 5223, fax: 9905 2579, [email protected]
Newsletter Editor: James Leo, Vision IT, 0438 126 118, [email protected]
Committee Members: Michael Adamietz, Keycorp, 9414 5431; John Purssey, ProQual, 0409 874 871; Rudolf Wirawan, Wirasoft Consulting, 9628 3379; Bernard Wong, UTS, 9514 1825; Daniel Wong, SQM, 9418 4728

SQA National Contacts
National Chairman: Ted Smillie, Montrose Computer Services, Tel: (Bus) (02) 9223-0300
Victoria: Bruce Hill, Australia Post, Tel: (Bus) (03) 9204 8374, http://www.acsvic.com/html/itquality.html
ACT: Mike Bowern, [email protected]
NT: Lou Martini, DMR, Tel: (Bus) (089) 81-7366
Queensland: Mike Carson, Technology One, Tel: (Bus) (07) 3349-8122
SA: Josh Jones, Aspect, Tel: (Bus) (08) 8379-0466
Tasmania: Ian Shield, Deloitte Touche Tohmatsu, Tel: (03) 6237 7074

Address: C/O 4 Hillcrest Place, North Manly, NSW 2100
Web page: http://www.ozemail.com.au/~sqain
To subscribe to the SQA mail group send an e-mail to: [email protected]
ABN: 86 541 712 427

$1M Support for Victorian Software Developers
(Source: http://www.mmv.vic.gov.au/newsbriefs, 11 Dec 2003)

In December 2003 the Victorian Government announced a $1 million assistance program to help Victoria's small and medium-sized software developers attain CMMI (Level 2) accreditation. Victorian Information and Communication Technology Minister, Marsha Thomson, said the program would give developers access to otherwise cost-prohibitive international accreditation, ensuring they are well placed to compete in the global marketplace. This is the first time an Australian State Government has financially supported accreditation for software developers.

Fact Sheet - Software Accreditation Assistance Program

The Victorian Government will provide $1 million in funding over the next 3.5 years to assist Victorian software development businesses gain global accreditation. The grants program is open to Victorian small and medium-sized software development businesses to assist them in obtaining a rating of CMMI Level 2. The Victorian Government will pay up to 50% of the costs associated with achieving a CMMI Level 2 rating. The grant will be capped at $10,000 for each organisation. The grants will commence from 1 February 2004. Organisations will be able to apply for inclusion in the program from this date.

Those eligible to apply for assistance in gaining accreditation are Victorian software development businesses that:
- are headquartered in Victoria, or their principal place of software development is located in Victoria;
- have fewer than 200 employees; and
- have four projects successfully completed in the last 18 months (a CMMI requirement).

The grant will be made payable to businesses once they provide evidence of achieving the CMMI Level 2 rating.

Gaining Accreditation

Companies will require certification that "according to a Class A or Class B appraisal they have been rated as a CMMI Level 2 organisation".

Class A, B and C appraisals relate to the amount of rigour applied to determine an organisation's capability. These appraisals can range from a single source, usually just an interview (Class C), to a much more detailed analysis by a team of appraisers (Class A). Class A appraisals can take up to two weeks of investigation. All companies or individuals that conduct appraisals must follow a recognised Class A or Class B appraisal process and be formally recognised as being eligible to conduct assessments. A list of certified appraisers will be available on the Multimedia Victoria website from 1 February 2004.


*********************************************

Is CMMI Appropriate for SME?
By James Leo

Since Fred Brooks' (1975) description of the state of the software industry in the 1970s as a "tar pit", there have been numerous attempts to promote the understanding and engineering of software in the IT industry. In the 1990s the Software Engineering Institute (SEI, 1994) at Carnegie Mellon University introduced the Capability Maturity Model (CMM), which provides a standard model and benchmark for assessing an organisation's process capability in enterprise IT system engineering. The CMM standard, as depicted in Figure 1, promotes a set of best practices grouped under "key practice areas" within a five-stage model of organisational process maturity, describing an organisation's progression in its capability to effectively utilise IT and conduct IT system development work. These stages are the 'initial', 'repeatable', 'defined', 'managed', and 'optimising' levels.

The SEI is heavily funded by the US Department of Defense, and the use of the CMM has therefore gained a lot of publicity and found support in the big defence industry sectors and large government agencies. This is especially so where it is a stipulated requirement in the conditions of award of some big-budget contracts. The proliferation of the CMM and its adoption outside the software discipline has seen the emergence of numerous CMM-like standards. These include, but are not limited to, the following variants:
- Software Acquisition CMM
- System Engineering CMM
- System Security Engineering CMM
- FAA iCMM
- Integrated Product Development CMM
- People CMM

The CMMI (Capability Maturity Model Integration) standard is an attempt by the SEI to bring these variants together, and it also introduced changes in an attempt to counter some of the criticisms laid against the use of the CMM. Chief among these is the criticism of the underlying staged model's default assumption of a traditional waterfall SDLC process and management practice. Like all other standards, the CMM and now the CMMI have their supporters as well as their detractors. The following are some of the pros and cons in the adoption and use of CMM/CMMI.

Fig 1: Capability Maturity Model (Source: SEI, 1992, p. 18)

Table 1 presents a summary of the characteristics of each maturity level and the focus of measurements applicable at each stage of the CMM:

Table 1: Characteristics of CMM Maturity Levels

Level          | Characteristics                             | Focus of Measurements
1. Initial     | Ad hoc, chaotic                             | Establishing baselines for planning and estimating
2. Repeatable  | Processes depend on individuals             | Project tracking and control
3. Defined     | Processes are defined and institutionalised | Definition and quantification of intermediate products and processes
4. Managed     | Processes are measured                      | Definition, quantification, and control of sub-processes and elements
5. Optimising  | Improvements are fed back to processes      | Dynamic optimisation and improvement across projects

#1 CMM/CMMI is ... but one of the many available standards and practices on SDLC

As professional IT practitioners, we need to be aware that there are "many ways to skin a cat". In fact the world of SDLC is so full of competing standards, models and practices that it resembles a quagmire, as depicted in Figure 2.


Fig 2: SDLC Framework Quagmire (Sheard, 1997)

#2 Pros
- Best-known, widely discussed and highly promoted standard.
- Experienced practitioners are available, and there is a record of company case studies.
- Defines a standard activity-based model for measurement and control of progress.
- Encodes most conventional best practices, which are especially relevant in the traditional waterfall SDLC approach. These include "Freeze requirements before design"; "Maintain detailed traceability among all artefacts"; "Assess quality with an independent team"; "Inspect everything"; "Plan everything early with high fidelity"; "Control source code baseline rigorously" (Royce 2002).
- Supports some, but not all, best practices in the modern iterative SDLC approach. These include "Establish a change management environment"; "Instrument the process for objective quality control"; "Establish a scalable, controllable process"; and CMMI supports the modern practice of "Attack risks early with an iterative lifecycle" (Royce 2002).

#3 Cons
- Implicit assumptions of a neo-classical management model and a hierarchical organisation, with practices and a culture that favour a heavy process and document-centric view of managing software development activities.
- No emphasis on architecture, design and deployment issues, which are critical in ensuring the success of software development innovation (Royce 2002).
- Reveres institutionalisation of process. Tends to decry people as unreliable and assumes that defined processes can somehow render individual excellence less important (Bach 1994; Jones 1994).

- Assumes a defined, "canned" process as the key to software development. Over-emphasis on process and rigid adherence to hierarchical management control can run contrary to many practices in organisational creativity, problem solving, entrepreneurship, and innovation management.
- A prescriptive, default linear view of an SDLC activity-based process model that does not cater for other best practices in result-based, non-linear innovation activities (Royce 2002). Preoccupied with predictability, the CMM is profoundly ignorant of the dynamics of innovation (Bach 1994).
- Most implementations of the CMM drive organisations to produce more documents, more checkpoints, more artefacts, more traceability, more reviews, and more plans. Furthermore, thicker documents, more detailed information, and longer meetings are considered to be better. This flies in the face of the primary technique for improving software economics: reducing complexity and the volume of human-generated "stuff" (Royce 2002).
- May encourage displacement of goals from the true mission of improving the process to the artificial mission of achieving a higher maturity level (Bach 1994), and therefore a possible misdirection of effort into pursuing the means as the end.
- High cost of adoption, training, royalties, tailoring and deployment relative to other international standards such as ISO 9001.

Conclusions

From the above discussion we can see there are many complex issues and pros-and-cons factors in the adoption of standards and processes. Remember that the end goal is a better quality, more effective and lower cost way of doing business to develop and deliver the much needed product or service. It is not about any "religious war" over methodologies; they are in essence just part of a large variety of available means to an end. To date the use of CMM and CMMI is not common among Small and Medium sized Enterprises (SME) and other "shrink-wrapped" software product organisations. An SDLC standard should not be viewed as something that lives only during the certification period and then lies dormant on a bookshelf, unused most of the time.

In conclusion, I propose that organisations, including SMEs, should focus on the critical factors that determine whether they should adopt and use a specific standard. Please consider the following factors when evaluating whether CMMI or any other specific SDLC standard is appropriate for your organisation (a rough way of tabulating them is sketched after this list):
a) What are the benefits, when and where?
b) How do we quantify and measure the benefits?
c) Who are the beneficiaries?
d) Fit of management conceptual model and practices?
e) Fit of organisational culture?
f) Ease of initial adoption and "ramp up" cost? Can the IT industry make it more affordable?
g) Range and depth of skills needed?
h) Availability of relevant expertise and resources?
i) On-going maintenance cost, and the organisation structure and management activity to support it?
j) Compliance cost? Is future cost likely to be higher or lower?
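As a minimal illustrative sketch, the factors above could be tabulated as a weighted score for each candidate standard. The factor weights and scores below are invented placeholders; each organisation would substitute its own assessment.

```python
# Hypothetical example: score candidate SDLC standards against the factors
# a)-j) above. Weights (importance to the organisation) and scores (fit,
# 0-5) are invented for illustration; each organisation would supply its own.

FACTORS = {                          # factor: weight
    "benefits (what/when/where)": 3,
    "measurable benefits": 2,
    "beneficiaries": 1,
    "management model fit": 2,
    "cultural fit": 2,
    "adoption/ramp-up cost": 3,
    "skills range and depth": 2,
    "expertise availability": 2,
    "maintenance cost/structure": 2,
    "compliance cost trend": 1,
}

def total_score(scores: dict[str, int]) -> int:
    """Weighted sum of per-factor scores (0-5) for one candidate standard."""
    return sum(FACTORS[f] * s for f, s in scores.items())

# Invented scores for two candidates, purely to show the mechanics.
cmmi_l2 = dict.fromkeys(FACTORS, 3)
iso_9001 = dict.fromkeys(FACTORS, 3) | {"adoption/ramp-up cost": 4}

print("CMMI Level 2:", total_score(cmmi_l2))
print("ISO 9001:    ", total_score(iso_9001))
```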

References

Bach, J. (1994) "The Immaturity of CMM", American Programmer.
Brooks, F.P. (1975) The Mythical Man-Month, Addison-Wesley.
Jones, C. (1994) Assessment and Control of Software Risks, Prentice-Hall.
Royce, W. (2002) "CMM vs. CMMI: From Conventional to Modern Software Management", The Rational Edge, February 2002.
SEI (1992) Software Measurement for DoD Systems: Recommendations for Initial Core Measures, CMU/SEI-92-TR-19, ESC-TR-92-019, September 1992.
SEI (1994) Systems Engineering Capability Maturity Model, SECMM-94-04, CMU/SEI-96-HB-004.
Sheard, S.A. (1997) "The Frameworks Quagmire, a Brief Look", INCOSE Conference Proceedings, 1997.

**********************************************

Process Assessment: Considered Wasteful
From Communications of the ACM, November 1997, Volume 40, Number 11
Mohamed E. Fayad and Mauri Laitinen

The received view of assessment is that it must be the first step in any software process improvement program. The assessment is supposed to give an organization a sense of where it stands in terms of software production skills. While we believe assessments can be valuable, and in some cases necessary, there are many instances in which assessment is at best wasteful and at worst counterproductive.

Assessment Background

In most assessment models, the organization evaluates its development capability against a set of "best practices" that are supposed to be found in effective organizations. The number of practices, their mastery, and their level of integration into the development organization determine the organization's assessment score. There are a large number of process improvement initiatives, of which the best known are the SEI's CMM, SPICE, the United States Department of Defense SDCE, ISO 9000, and ISO/IEC 12207. Some programs allow self-assessment while others require outside certification.

The Software Engineering Institute's Capability Maturity Model (CMM) is one of the best known and most widely discussed software process improvement models. It defines five levels of organizational maturity, from initial or chaotic to optimizing. Each increasing maturity level, starting at level 2, has associated with it a set of key process areas. For example, level 2 includes, among other things, requirements management and project planning as key areas. Level 3 includes training and peer reviews. Levels 4 and 5 include software quality management and defect prevention respectively. Each level also includes the process areas of its lower levels. The SEI also specifies the methods by which organizations can assess themselves against the CMM or use an outside agency to perform the assessment.
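As an illustrative aside, the cumulative level structure just described can be sketched in a few lines of code. The key process areas listed per level are only the examples named in the text above, not the full CMM set.

```python
# Sketch of the cumulative CMM level structure described above. Only the
# example key process areas mentioned in the text are listed; the real CMM
# defines many more per level.

EXAMPLE_KPAS = {
    2: ["requirements management", "project planning"],
    3: ["training", "peer reviews"],
    4: ["software quality management"],
    5: ["defect prevention"],
}

def kpas_up_to(level: int) -> list[str]:
    """A level inherits the key process areas of all lower levels."""
    return [kpa for lvl in sorted(EXAMPLE_KPAS) if lvl <= level
            for kpa in EXAMPLE_KPAS[lvl]]

print(kpas_up_to(3))
# ['requirements management', 'project planning', 'training', 'peer reviews']
```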

Problems with Assessment

While all of the above-mentioned process improvement initiatives are valuable, and we can hardly argue with the need for process-oriented software improvement, we feel there are a number of problems with assessments that limit or misdirect attention. We use the CMM as our prime example, although many of the points apply as much to other assessment schemes.

1. The most obvious reason to question the value of assessment is that for organizations just starting, it can be a waste of money and time. Few level 1 software organizations believe they are at level 3 or above. Assessments are very expensive, and in an immature organization the results are likely to be meaningless. For example, if a department does not use configuration management to keep track of product versions, there is no process to assess. Until configuration management is put in place, and used for a while, assessment of this facet of development will have significant cost but will not even serve as a baseline for further measurement. Time and effort could be better spent in acquiring and implementing a configuration management system.

2. All assessment models are artificially derived, and while all describe generally agreed-upon practices, the list itself is artificial. When an organization compares itself to an assessment model, it is comparing a real-world organization's practices against an idealized list. The first quibble with the idealized list is that it is a one-size-fits-all representation. A ten-person software startup uses the same criteria as a giant defense contractor that has been developing large systems for decades. More importantly, the assessment is the same for the huge corporation and its subcontractors. This is only reasonable if the effort in assessment does not exceed the development effort.

3. A more serious objection is that the idealized list of practices has not been proven to work. Although individual practices within the assessment model are of unquestionable value, the models' lists of practices and their order of adoption have not been shown to be either necessary or sufficient to produce a marked organizational improvement. Moreover, an organization experienced in software development may have a set of practices that help it produce good software but do not map well into the model processes. For example, in SEI's CMM level 3, project performance is supposed to be tracked by key activity. If tracking is done instead by person or sub-group, it may be effective but doesn't match the assessment criteria. Even more interesting is the possibility that other factors may be involved. Candidate factors include improvements in software tools, the 'startup effect' in which new initiatives get much more highly qualified and motivated people than standard projects, and the idea of 'heroic efforts' [1].

4. The CMM levels are unevenly weighted in terms of membership, cost of achievement, and value. For example, level 3 seems to be the first acceptable plateau for a software development organization to aspire to. Next year, the United States Department of Defense will require CMM level 3 certification to qualify for contracts. This suggests that level 2 is nothing more than an "at least we're not level 1" sort of designation. Regardless of the SEI's original intent, levels 1 and 2 are now negative designations. By contrast, moving from level 4 to level 5 is a newsmaking event since it happens so rarely. In the decade since the CMM was first published, most development organizations are still at level 1, the bottom level, and only a handful of organizations in the world are accepted to be at level 5. Most of the world's best-known commercial software is produced by organizations at or below level 3. These facts suggest that level 1 needs more differentiation and that level 5 has little relevance to most software development. A group of experienced software developers who work with few formal processes are lumped together with monkeys pounding at terminals in level 1. While such a characterization may please process purists, it is not especially useful for assessing the capabilities of the majority of development groups in the world. At the other end of the scale, what is the value of attaining CMM level 5? If getting there is so difficult and rare, does it have meaning for most development organizations? Perhaps it is similar to the difference between joggers and Olympians. The jogger runs for pleasure and exercise. While it might be enjoyable to fantasize about winning a world-class event, the talent, time, and dedication it takes to run at that level make the comparison meaningless. This is not to belittle the Olympic contender: competitive running is just a different activity, available to and of interest to the very few who value its particular payoff. Assessing joggers against Olympians, then, is a meaningless activity. So should CMM level 5 be a meaningful goal for most organizations, or should it be the particular goal of those few groups who value whatever reward it brings? If it is limited, then why include it in the assessment? And if level 3 is all that is needed to get contracts, is the incentive there to move beyond it? If we extend this question of rewards, perhaps there should be different assessments for different groups. Software groups that work on life-critical systems might find value in the more stringent requirements of the upper levels. Groups working on the more common and less critical systems might benefit from a greater differentiation in the lower assessment scales.

5. Moving to levels 4 and 5 sounds worthwhile, but there is little empirical evidence to justify the move. As stated before, while some of the assessment practices correlate with more effective development, there is no evidence that all the processes or the order of their acquisition is right. In fact, for the CMM there is some evidence, both direct and indirect, to the contrary. As reported by El Emam, many organizations find the early adoption of certain processes, such as change control and code reviews, more effective than adopting them in the recommended sequence [2]. Bamberger reports that when she works with clients, she helps them look more at the essence of the CMM's ideas, rather than the explicit maturity levels involved, so they can get control of their software projects [3]. At lower levels, then, getting a start in controlling projects is more important than orthodox progression through levels of maturity. The indirect data comes from Herbsleb, et al., who report that approximately 44% of survey respondents had little or limited success addressing the CMM assessment recommendations. While not conclusive, it is clear that adopting the CMM recommended practices is not especially easy, and for some it may be completely impractical (although we can't tell from the data). If the recommendations are hard to adopt, then it may be that the current CMM assessment has not identified the optimal SPI characteristics. And if it is difficult to progress at the lower levels, it may help explain why progress to the upper levels is so rare. If the SPI practices recommended aren't optimal, then they should not be cast in the assessment as the important activities. It makes them harder to change as more is learned. To make an analogy, with modern skiing instruction and equipment, most people start skiing in one day. Twenty years ago, the number of people who skied after one day was much smaller, and the number of people who adopted the sport was likewise diminished. If the techniques and equipment had been fixed as the way to ski, far less progress would have been made.

6. According to Herbsleb, et al., it takes about two years per level to move between CMM 1 and CMM 3, and the cost of software process improvement ranges from almost $500 to $2,000 per person per year [4]. There is no meaningful data for the higher levels, but given the ambitiousness of the goals and the fact that each level includes all the activities of the previous levels, it seems unlikely that moving to levels 4 and 5, even if possible for all organizations, will take much less time or money.

7. We think there are also questionable assumptions about the ability of people to master the continually increasing levels and quantities of new technology. For an organization that is truly chaotic and has little exposure to newer technology, the learning curve is steep indeed. A group getting its first exposure to object-oriented development or framework technology will have trouble simultaneously assimilating much of the background of software process improvement. Additionally, training adds to the cost of each new technology acquired. New group members have a higher learning curve and consequently higher training cost at each new level. Moreover, regardless of the rosy picture suggested in the CMM level 5 actions, technology transfer means changing technology, and change means reduced effectiveness while learning occurs.

8. Most SPI studies are based on the analysis of one or a few data points. Herbsleb points out that we have only a few data points about the actual cost of SPI programs [4]. The Herbsleb article's representation of improvements such as productivity gain per year, time to market, post-release defects (reduction per year) and business value ratio are each based on so few data points that the positive results can only be thought of as tantalizing possibilities rather than established facts.

9. Small and medium sized companies have been put off by the presentation of software assessment and SPI. There is no question that assessment can be costly, and for a small, lean organization, the overhead involved in meeting and verifying CMM or ISO 9000 criteria can be prohibitive. ISO 9000, for example, lists dozens of conditions and types of documentation required for certification. For these groups, if ISO 9000 certification is required, they may elect to buy an off-the-shelf solution that meets the letter but not the spirit of the requirements. As Bamberger notes, the way the CMM is presented often makes smaller organizations feel it offers them no value [3]. Moreover, smaller organizations cannot afford the two to three year duration it normally takes to reach CMM level 3. If CMM certification becomes required for contract awards to subcontractors, then rather than fostering improvement, it will more likely become a meaningless regulation.

Conclusions

Process improvement, then, must be tailored to the organization and aimed at the goal of improving systems; doing assessment to measure against CMM standards distracts from such a goal. The best part of assessment with respect to various standards is that a smart organization can use the assessment as a framework to evaluate how projects are done. And then, by conscious analysis rather than slavish adherence, the organization can plan and take steps that will improve its operation. We are also concerned that the focus on the existence of processes, at least in the way that many people apply CMM and ISO 9000 assessments, tends to over-value the technique at the expense of the goal. Processes are tools that help with solutions rather than solutions themselves [5].

References
1. Bach, J. Enough About Process: What We Need Are Heroes. IEEE Software, Vol. 12, No. 2, February 1995, pp. 96-98.
2. El Emam, Khaled and Briand, Lionel, "Costs and Benefits of Software Process Improvement," in Better Software Practice for Business Benefit: Principles and Experience, IEEE Computer Society Press, Washington, DC, 1997.
3. Bamberger [will be provided]
4. Herbsleb, J. et al. Software Quality and the Capability Maturity Model. Communications of the ACM, Vol. 40, No. 6, June 1997.
5. Fayad, M.E., Laitinen, M. Transition to Object-Oriented Software Development, Wiley, New York, 1997, to appear.