Regular Papers

Interpreting Continuous-View Capability Models for Higher Levels of Maturity

Sarah A. Sheard1,* and Garry J. Roedler2

1 Software Productivity Consortium, 2214 Rock Hill Road, Herndon, VA 20170
2 Lockheed Martin, P.O. Box 8048, Bldg. C, Room 25C44, Philadelphia, PA 19101

Received February 19, 1999; Accepted March 9, 1999

ABSTRACT

When assessing an organization’s process maturity using a continuous-view model like the three systems engineering capability models, the capability of each process area is rated separately. This view can be very helpful to lower-maturity organizations because it provides an improvement path that can address one area at a time according to business needs. However, rating process areas separately makes less sense for more mature organizations because their processes map less cleanly to process areas and because process improvement is best applied across an organization’s processes to address strategic business needs. This paper first makes the case that, for organizations at Level 3 or 4 maturity, the business processes themselves are what should be rated, rather than the process areas of the model. The rating process consists of two steps: verifying that the processes cover the base practices, and then rating the maturity of the processes. The paper then provides an interpretation of how the term “Level 4” or “Level 5” should be applied to an organization as a whole. © 1999 John Wiley & Sons, Inc. Syst Eng 2: 15–31, 1999

1. INTRODUCTION

Capability models have been around since the Software Engineering Institute (SEI) of Carnegie Mellon University released Version 1.0 of the Capability Maturity Model®1 for Software (SW-CMM) in 1991. When capability models were first created, it was known that organizations must adopt process discipline in a step-by-step manner. A five-step, “staged” architecture (or staged view) was introduced. For quite a few years, however, the number of organizations assessed that have reached the top two levels has been small (less than 4% as of 1998).

Systems engineering (SE) capability models are more recent [Sheard and Lake, 1998], with the Systems Engineering Capability Maturity Model (SE-CMM) first appearing in 1995. These models used a different architecture, the “continuous” view. The number of organizations publishing that they have achieved one of the top two ratings in a capability model with the continuous architecture is minuscule. As a result, confirmation of higher-level concepts as they apply to continuous models has been a long time coming.

The Software Productivity Consortium (the Consortium) has worked with a handful of organizations achieving Level 4 or Level 5 in the SW-CMM and thus has learned some lessons about how these levels are actually implemented when using a staged model. The Consortium is just beginning to work with organizations pursuing levels above Level 3 using continuous models. Implementation questions arise that are not answered by the model directly, so interpretations must be made. This paper outlines two of these interpretations in the hope of contributing to industry agreement on questions that many organizations are now beginning to ask:

• How do we rate the organization’s maturity against the model’s process areas2 when our implemented processes do not map cleanly to process areas?
• If we want to consider ourselves a Level 4 or Level 5 organization, do we have to apply Level 4 or 5 requirements to all process areas, independent of the business value of doing so?

This paper makes recommendations on both issues. To do so, the paper must first establish an understanding of several terms and concepts. This paper is organized as follows:

• Section 2 describes the purposes of capability models.
• Section 3 provides a brief history of capability models.
• Section 4 describes and compares staged and continuous views.
• Section 5 describes what happens in practice as capability models are implemented, first in lower-maturity organizations and later in higher-maturity organizations.
• Section 6 provides recommendations that answer the questions posed above.
• Appendix A describes the concept of “staging” continuous models and compares the various literature recommendations that have been made to date.
• Appendix B demonstrates that the interpretations that are the subject of this paper also apply to EIA/IS 731, a newer systems engineering capability model, which has a more involved version of a continuous architecture.

* Author to whom all correspondence should be addressed.

1 CMM and Capability Maturity Model are registered in the U.S. Patent and Trademark Office.

2 Focus Areas (FAs) of EIA/IS 731 and Key Focus Areas of the Systems Engineering Capability Assessment Model (SECAM) are equivalent to “process areas” of the SE-CMM. The latter term is used in this paper because most Consortium experience to date has been with the SE-CMM, and because “process areas” is the term used in the Capability Maturity Model Integration (CMMI) effort.

2. THE PURPOSES OF CAPABILITY MODELS

2.1. Two Purposes

Capability models have been used for several purposes. Internally, a company can use the guidance of the model to improve its processes and obtain gains in productivity, early defect detection, product quality, and time-to-market [Herbsleb et al., 1994]. Externally, the U.S. Department of Defense (DoD) and some other government agencies have specified achievement of Level 2 or Level 3 in the SW-CMM as a required qualification for bidders on some contracts. In other competitions, which did not specify a maturity level as a requirement, the bidder with the highest achieved maturity rating was still considered to have a significant competitive advantage.

2.2. Conflict in Purposes

Ironically, these two purposes conflict with each other in some ways, as shown in Table I. It also should be noted that using assessment results as a source selection criterion tends to create incentives for “gaming” the results. This makes the model much less useful for encouraging true process improvement.

Table I. Conflicts in Capability Model Purposes

2.3. Changes in Use of SW-CMM for Source Selection

Enthusiasm toward requiring a SW-CMM maturity level for source selection peaked in about 1995, with promises from parts of the DoD that contractors would be ineligible to bid at all without at least a “strong Level 3” [Mosemann, 1995]. After that point, an opinion grew that some contractors were achieving results that their processes did not warrant by showing auditors an unrealistically positive view of the organization. There also was a concern that some competitors might be able to “purchase” a more favorable rating by choosing less strict assessors. Concerns about reproducibility of ratings led to increased third-party auditing of various bidders, using the Software Capability Evaluation (SCE) method based on the SW-CMM, leading to protests by defense contractors that repeated SCEs were becoming prohibitively expensive. For these reasons and others, the government is rethinking its use of SW-CMM level numbers and SCEs in source selection.

2.4. Assumed Purpose of Systems Engineering Capability Models

The SE capability models all discourage the use of assessment results for source selection purposes. “Systems engineering” has a wide range of definitions and applications [Sheard, 1996]. It is hard to imagine standardizing a small set of things that all SE efforts must do, and an associated order in which to improve them, with any real meaning. The focus of SE capability models, therefore, remains process improvement. This paper assumes that the purpose of SE capability models is to guide internal process improvement; in other words, flexibility takes precedence over rigid standardization.

3. A BRIEF HISTORY OF CAPABILITY MODELS

3.1. History of the SW-CMM: A Staged Capability Model

Capability models began with the SW-CMM, created by the SEI of Carnegie Mellon University. Version 1.0 was released in 1991 [Sheard, 1997a, 1997b]. This was in response to problems in the software development industry [Gibbs, 1994] of late, poor-quality, and overrun software projects. The SW-CMM established a ladder of process maturity for organizations to climb. Figure 1 shows the SW-CMM architecture, demonstrating this stepwise approach. An organization improves a few key aspects of its processes [key process areas (KPAs)] at a time, receives a new “level” number for its efforts, and then improves additional areas required for the next level. This is called a “staged” view.

Figure 1 Software CMM architecture.

3.2. SE-CMM History

The SE-CMM [Bate et al., 1995] was first developed in 1994, based on adaptation of the SEI’s SW-CMM. This model uses an architecture called the “continuous” view, derived from a draft of the international Software Process Improvement Capability determination (SPICE) guidelines. The model is divided into areas for improvement called process areas. In this model, the process content (what the organization should do) is separated from the process maturity (how well it is performed). Process content is evaluated by assessing compliance with a set of base practices that constitute a process area. Process maturity is evaluated by applying a generic scale of levels to each process area separately.

Processes must comply with all process content elements (base practices) for the process area to be evaluated at the first capability level. For example, an operational concept is determined, in some manner; no two people need to do it the same way, and it may require heroic efforts by bright people, but it does get done. Each subsequent level is achieved by incorporating the process maturity elements (generic practices) associated with the level. Essentially, generic practices address in what manner the process content is performed. For example, eventually there is a plan and a process for defining operational concepts, and people are trained to define operational concepts in the manner that the organization has measured to be most effective.

This two-axis model, shown in Figure 2, allows the organization to determine the capability (or maturity; the words are used interchangeably in this article) of each process area separately and does not prescribe an order in which the process areas should be implemented. [See Appendix A for a discussion of some recommended orders of addressing process areas, as published in International Council on Systems Engineering (INCOSE) literature and elsewhere.] An assessment result for the first seven SE-CMM process areas might look like Figure 3.

Figure 2 Continuous model architecture.

Figure 3 Continuous model assessment result.

3.3. SECAM History

INCOSE developed its own capability assessment model for systems engineering, the SECAM [INCOSE, 1996], from material used by four companies to assess their own internal SE capabilities. Version 1.5, the last version, was released in July 1996. This also used a continuous-view architecture, with additional features such as increasing technical expectations as the level increases (i.e., the process content is not limited to the first capability level) and questions about product quality and process effectiveness.

3.4. EIA/IS 731 History

The Electronic Industries Alliance (EIA) facilitated a merger of the SECAM and SE-CMM, and released the merged model as an interim standard (IS) in January 1999 [EIA, 1999]. Called Systems Engineering Capability, EIA/IS 731 includes both a capability model (EIA/IS 731-1) and an appraisal method (EIA/IS 731-2). The architecture of the model combines features of both the SE-CMM and the SECAM, and includes both process maturity (as quantified by generic practices) and process content (through the use of specific technical practices for each capability level). This paper uses the SE-CMM for most examples because the EIA/IS 731 architecture is more complex, but the principles can be applied to all continuous-view models. See Appendix B for more detailed information about how these principles are applied to the EIA/IS 731 model architecture.

3.5. The CMMI Project

The CMMI project [SEI, 1999] currently is integrating models of both architectures, namely, the staged-view SW-CMM and the continuous-view EIA/IS 731. CMMI has stated that it will provide a choice of either staged or continuous views for several choices of content. This is feasible because the architectures share basic concepts about what the levels mean, as shown in the next section.

4. COMPARISON OF STAGED AND CONTINUOUS-VIEW ARCHITECTURES

This section shows that the staged- and continuous-view architectures are implemented differently, but there are similarities in the concepts behind the levels.

4.1. Staged-View Architecture

Staged models such as the SW-CMM [Paulk et al., 1993] look at the organization’s maturity in a single collective view, shown in Figure 1. Compliance with a KPA in a staged model is binary (the organization’s processes comply or they do not). The organization increases its maturity by complying with additional KPAs; for example, six for Level 2 and an additional seven, two, and three for Levels 3, 4, and 5, respectively. The order of complying with KPAs is fixed by the model.

The initial level in the staged model is Level 1, which indicates that the organization does not fully comply with the Level 2 KPAs. The basic idea behind Level 2 is achieving basic management discipline at the project level. Are plans made? Are resources allocated? Is progress tracked? Project processes need not be related to each other.3

At Level 3, organizations have a standard process, which the projects tailor. The organization as a whole understands the importance of having and adhering to processes, and the staff is trained in all applicable processes. Management is better integrated across the organization. Measurements that are collected can be meaningfully compared across projects.

At Level 4, organizations begin to manage key processes in a quantitative manner. They know what results their processes are expected to have, and they take action when the results show variations.

At Level 5, organizations continuously improve their processes. This includes seeking and eliminating root causes of product and process defects, as well as incorporating new technology as appropriate.

3 It should be noted that in order to focus on key concepts at Level 2, staged models delay description of some project-related concepts (such as software engineering) until Level 3.
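For illustration only, the staged rating rule just described can be expressed as a short computation. The sketch below is in Python; the KPA identifiers are placeholders introduced here for the example, and only the grouping counts (six, seven, two, and three KPAs for Levels 2 through 5) follow the description above. It is not the SW-CMM appraisal method.

# Placeholder KPA identifiers; only the counts per level follow the SW-CMM description above.
KPAS_BY_LEVEL = {
    2: ["KPA-2a", "KPA-2b", "KPA-2c", "KPA-2d", "KPA-2e", "KPA-2f"],            # 6 KPAs
    3: ["KPA-3a", "KPA-3b", "KPA-3c", "KPA-3d", "KPA-3e", "KPA-3f", "KPA-3g"],  # 7 KPAs
    4: ["KPA-4a", "KPA-4b"],                                                     # 2 KPAs
    5: ["KPA-5a", "KPA-5b", "KPA-5c"],                                           # 3 KPAs
}

def staged_maturity_level(satisfied_kpas):
    """Return the highest level whose KPAs, and all lower levels' KPAs, are satisfied."""
    level = 1  # Level 1 is the initial (default) level in a staged model.
    for target in (2, 3, 4, 5):
        required = [kpa for lvl in range(2, target + 1) for kpa in KPAS_BY_LEVEL[lvl]]
        if all(kpa in satisfied_kpas for kpa in required):
            level = target
        else:
            break
    return level

# An organization satisfying all Level 2 and Level 3 KPAs, but no Level 4 KPAs, rates as Level 3.
print(staged_maturity_level(set(KPAS_BY_LEVEL[2] + KPAS_BY_LEVEL[3])))  # prints 3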

Table II. Process Areas of the SE-CMM

4.2. Continuous-View Architecture

In the SE-CMM, a continuous model, an organization first achieves Level 1 in a process area (see Table II) by performing all of the process area’s base practices in any manner. Not performing the base practices will earn the organization a Level 0, the initial level in continuous-view models, in that process area. To improve the capability of a process area beyond Level 1, an organization complies with additional generic practices (see Table III) in the way it performs the base practices. For each level of the SE-CMM, these generic practices are based on the same principles as the corresponding level in the SW-CMM.

SE-CMM Level 2 generic practices, for example, address project management discipline, as applied to a process area. This means that an organization can achieve Level 2 in Verify and Validate System if its projects perform all the process area’s base practices and

• allocate adequate resources to perform the validation and verification activities,
• assign responsibilities for preparing for and performing the necessary tests and analyses,
• document the project’s test and other verification processes,

and so on through the Level 2 generic practices. If these project processes are tailored from an organizational standard set of test processes, using organizationally approved tailoring guidelines, and if the organization uses well-defined data and performs appropriate reviews including peer reviews where needed (per the Level 3 generic practices), then the organization achieves Level 3 in Verify and Validate System.

Table III. Generic Practices of the SE-CMM
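As an illustration of the continuous-view rating logic just described, the following sketch rates a single process area: Level 1 requires performing all of the area’s base practices, and each higher level additionally requires the generic practices assigned to that level and to every level below it. The sketch is in Python, and the practice identifiers are placeholders rather than actual SE-CMM practice names.

def rate_process_area(base_practices, generic_by_level, performed_base, satisfied_generic):
    """Rate one process area on the 0-5 continuous scale (illustrative sketch only)."""
    if not all(bp in performed_base for bp in base_practices):
        return 0                      # Level 0: not all base practices are performed.
    rating = 1                        # Level 1: all base practices performed, in any manner.
    for level in (2, 3, 4, 5):
        if all(gp in satisfied_generic for gp in generic_by_level[level]):
            rating = level            # Generic practices of this level (and all below) are met.
        else:
            break
    return rating

# Placeholder practices for a single process area such as Verify and Validate System.
base = ["BP.1", "BP.2", "BP.3"]
generic = {2: ["GP.2.1", "GP.2.2"], 3: ["GP.3.1"], 4: ["GP.4.1"], 5: ["GP.5.1"]}

# A project performing all base practices per the Level 2 and Level 3 generic practices rates Level 3.
print(rate_process_area(base, generic,
                        performed_base={"BP.1", "BP.2", "BP.3"},
                        satisfied_generic={"GP.2.1", "GP.2.2", "GP.3.1"}))  # prints 3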

Level 4 is achieved by collecting and analyzing enough measures about the processes and their products that the organization understands what normal process behavior looks like and can take action when data indicate that something abnormal is happening. Level 5 is achieved by continuous improvement of the organization’s standard processes according to lessons learned through causal analysis of defects. Clearly, Levels 4 and 5 in the continuous models apply the same principles as the corresponding levels in the staged models.

In this manner, the organization earns a level between 0 and 5 in each of the 18 process areas it appraises. It should be noted that there are additional (more rigorous) challenges for reaching Level 4 or Level 5 with a continuous-view model. Each process area is still assessed independently through compliance with the generic practices, rather than by looking at the implementation of generic practices across the organization. Because of this, an organization attempting to receive Level 4 ratings for all process areas may be forced into performing Level 4 or Level 5 practices 18 times. This concern and its resolution are discussed in Section 6.2.

4.3. Comparison of Continuous and Staged Views

Each of these architectures has advantages and disadvantages (see Table IV), and each certainly has vocal supporters and detractors.

Table IV. Comparison of Architectures

4.4. Staging for Continuous-View Models

“Staging” is a concept by which a continuous-view model can be made to behave like a staged-view model. A staging concept defines an order in which process areas of a continuous model are to be addressed and capability levels to be achieved. Achieving those capability level ratings in that subset of process areas and associated generic practices earns the organization an overall Maturity Level rating (1–5 or 2–5). Appendix A compares four different SE-CMM-related stagings published in the literature.

Observations indicate that organizations actually adopt an implicit staging concept, regardless of this staging literature. In most cases, the organization develops a goal of reaching Level 2 or Level 3 in all process areas and then decides to call itself a Level 2 or Level 3 organization upon goal achievement (see Fig. 4). To achieve these levels, the organization examines its process architecture and identifies gaps to fill, as a whole, and deploys processes across projects. In doing so, these organizations behave very much like organizations achieving Level 2 or 3 in a staged model, although the particulars of the processes described will be different.

Figure 4 Assumed staging.

Where this paper discusses Level 1 through Level 3 organizations in conjunction with continuous models, what is meant is that the organization has met these levels on many process areas and is implementing their principles.

5. APPLICATION OF CAPABILITY MODELS AT VARIOUS ORGANIZATIONAL MATURITY LEVELS

5.1. Recommendation on Organizational Processes and Process Areas

The Consortium does not recommend that an organization create one organizational process for each process area; rather, the organization should document its own processes, according to its business needs, and map these processes to process areas and base practices of the model. Figure 5 shows processes that do not map to process areas in a 1:1 fashion but still cover the practices of all process areas. If the mapping discovers any missing base practices, then the missing practices should be added to the appropriate organizational processes. If there is a sound business reason to not perform the practice or to perform some other practice in its place, the practice is tailored out. The rationale for the omission or replacement of the practice should be documented for future needs in process improvement and assessments.

Isolating process architectures from the model architecture keeps the focus of the effort on the organization’s needs. This also isolates the organization from changes due to updates in the models themselves. For example, the SE-CMM included defining derived requirements as a base practice in PA 03, Evolve System Architecture. EIA/IS 731 moved this practice to focus area (FA) 1.2, Define Technical Problem. Table V shows the general process area-to-focus area mapping, but some practices were moved separately to other focus areas. Should an organization that put defining derived requirements into its architecture process now be forced to rewrite both that process and the requirements process because the models changed? Changing the mapping instead is much smarter.

Despite this general recommendation, in many cases beginning with the structure of the model makes sense, as shown next.
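A mapping of the kind recommended above can be maintained as simple data. The following sketch, in Python with process and practice names invented for illustration, records which organizational processes cover which base practices of the model, records practices tailored out with a documented business rationale, and reports any remaining gaps. When a model update moves a practice between process areas, only the mapping table changes; the organization’s processes do not.

# Base practices the model calls for (placeholder identifiers, not actual SE-CMM practice names).
MODEL_BASE_PRACTICES = {"understand_needs", "define_requirements",
                        "define_derived_requirements", "evolve_architecture"}

# Organizational processes mapped to the base practices they cover (illustrative only).
PROCESS_MAP = {
    "Customer Needs Process": {"understand_needs"},
    "Requirements and Architecture Process": {"define_requirements", "evolve_architecture"},
}

# Practices deliberately not performed, with the business rationale recorded for assessors.
TAILORED_OUT = {
    "define_derived_requirements": "Derived requirements are produced by subcontractors (illustrative rationale).",
}

def gap_analysis():
    """Return base practices that are neither covered by a process nor tailored out with rationale."""
    covered = set().union(*PROCESS_MAP.values())
    return MODEL_BASE_PRACTICES - covered - set(TAILORED_OUT)

print(gap_analysis())  # prints set(): every practice is covered or tailored out with a rationale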

Figure 5 Processes mapped to process areas.

Table V. General Mapping between SE-CMM Process Areas and the EIA/IS 731 Focus Areas

5.2. Process Structuring at Lower Maturity Levels

Based on Consortium experience working with organizations striving for various process area ratings on the SE-CMM (or other SE models), a typical progression through the levels has been determined. When an organization first decides to adopt the SE-CMM, it typically has performed some investigation of the model. The organization seeks some kind of preliminary assessment, help in selecting process areas to address first, and/or help in documenting processes. Typically there is also a request for a sanity check on plans to achieve Level 2 or Level 3 (in some or all PAs) by a given date.

5.2.1. Process Definition

Lower-maturity organizations often do not know exactly what their own processes are. In this case, the SE-CMM (or other SE capability model) provides a usable definition of what constitutes good systems engineering, according to industry best practices. For these reasons, process definitions for Level 1 and Level 2 organizations tend to be structured according to the process areas of the chosen capability model. An organization might write a process for defining customer needs (resulting in an operational concept), a second process for turning those needs into well-formulated requirements at the system and subsystem level, and a third process for designing a system that meets those needs. These three processes correspond closely to PA 06 (Understand Customer Needs and Expectations), PA 02 (Develop and Allocate Requirements), and PA 03 (Evolve System Architecture), respectively. Each process defines how a project performs all the base practices of the associated process area description.

5.2.2. Achieving Level 1

In theory, to achieve Level 1 in each of these process areas, the organization must train its people in the organization’s processes that map to the model’s process areas, and then perform an assessment showing that all these requirements are met. In practice, however, an organization that is process oriented enough to plan a structured process improvement effort is probably already practicing most of the base practices of most of the process areas in some manner. Thus many organizations receive a “1” in most process areas without changing any business practices.

5.2.3. Achieving Level 2

Level 2 is a bit harder to achieve, although organizations that have experience complying with the management practices required by most government contracts tend to have a great deal of project management practice already in place. When seeking Level 2 ratings, the organization performs a gap analysis to verify that actual project practices meet the requirements of the Level 2 generic practices and of the base practices of all relevant SE-CMM process areas. Gap analyses tend to show that current project processes need documentation and refinement. Other processes may need to be created. Training in these new or improved processes and tools may consume the greatest amount of process improvement funds because all practitioners must know and follow the processes.

5.3. Process Organization at Higher Maturity Levels

5.3.1. Defining Processes at Level 3

Organizations working toward Level 3 or Level 4 less often structure systems engineering processes according to a capability model. By the time they reach these levels, organizations usually have a much better definition of their business goals, and find they want processes that reflect these business goals rather than simply mirror a model. The organization’s process improvement effort may have one of several histories. Some organizations have defined processes in pursuit of ISO 9000 registration [ISO, 1991] or a Baldrige [NIST, 1999] award. Their processes may be documented according to the structure of these frameworks. Some organizations have been using the SE-CMM long enough to know which of their documented processes are most useful, and they may restructure their process architectures to take advantage of this knowledge. If the organization already has defined its software processes in pursuit of Level 2 or higher against the SW-CMM, it only needs to expand or refine many of those processes to satisfy the systems engineering model, rather than rewriting, restructuring, or duplicating them. These organizations understand that the best approach is to inventory their software process assets, develop an organizational process architecture that shows how currently defined processes fit together, and then write systems engineering processes where needed. A sample organizational process architecture is shown in Figure 6. Below this level of detail would be another level showing, for example, the exact relationship between software configuration management and system configuration and data management.

Figure 6 Process architecture spanning systems engineering and software development.

To complete the organization’s process architecture, there is often a need to define a significant number of organization-specific processes. The organization then will map these to the SE-CMM or other model, analyze gaps, and add or change practices as needed. For example, a system integrator, which does not perform detailed tasks such as software coding or avionics unit design but instead works with contractors building the system elements, may have an integrated requirements and architecture process, which maps to PA 02 and PA 03. Many organizations would have several different types of test and validation processes, all of which map to PA 07. These variations from a one-to-one mapping are shown in Figure 5. This, of course, is a simplified case. An example of a more typical process structure is shown in Table VI.

Table VI. SE-CMM and the Process Structure

6. TWO RECOMMENDED INTERPRETATIONS OF CONTINUOUS MODELS

6.1. Processes, Not Process Areas

An assessment for an organization whose processes are structured as in Figure 6 or Table VI typically begins with a documentation tree that shows the structure of the set of organizational processes and a map to the practices of the SE model. As long as base practices are all accounted for (and the previous gap analysis process virtually guarantees this), the assessment then can concentrate on evaluating the application of generic practices to each process area. Ideally the assessment profile would look like Figure 7.

Figure 7 Rating process areas is possible, but...

Note how in several ways the assessment can now get a bit tricky. When a process covers two process areas fully, how can an assessment team give the process areas anything but the same rating? If three processes all map to the same process area, how does one rate the process area if one of these processes is organizationally standardized, one is not, and the third meets the requirements of Level 4? If the mapping is yet more complex, so are the issues associated with the rating of process areas. One can make recommendations for how to handle this complex mapping, and in practice assessment teams manage to determine sensible answers, but there is a better way than rating process areas, as described below.

6.1.1. Higher Maturity Ratings Recommendation

When the organization behaves in a unified manner, maturing its well-defined processes, the idea of rating process areas separately should be discarded. The first of this paper’s two recommendations is that it is the processes themselves that should be rated against the generic practices, as shown in Figure 8. This is easy to do with the SE-CMM because of the separation of base practices (what is done) from generic practices (how maturely it is done) in the continuous architecture. It is also feasible for EIA/IS 731, with some additional steps, which are described in Appendix B.

Figure 8 Better to rate processes.

6.1.2. Two Steps to Rating

Rating processes instead of process areas requires a two-step rating process.

Step 1: First, the organization’s processes are checked to ensure that all base practices of the model appear in appropriate places in the processes. This involves confirming the mapping of processes to process areas to account for all Level 1 requirements. By performing this accounting of base practices, the assessment team can be sure that the technical intent of the model is preserved.

Step 2: Once the processes are verified to cover the base practices (and therefore all process areas are at least Level 1), then the performance of the organization’s processes themselves is checked against the generic practices of Levels 2–5. The assessment team should tailor the assessment questionnaires to the processes of the organization to reflect the terminology and scope of those processes. The general conduct of the rest of the assessment proceeds as described in the assessment method but focuses on how well the organization’s processes meet the requirements of the generic practices. This will yield a profile of ratings between 1 and 5 for each of the organization’s processes.

The objective of the assessment is to demonstrate how well the organization is performing the essential practices, as tailored for business needs, rather than showing how well the organization’s processes mirror the model’s process areas.
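The two steps above can be read as a small checking procedure. The following sketch, in Python with invented process and practice identifiers, first verifies that every base practice of the model is covered by some organizational process (Step 1) and only then rates each organizational process, rather than each process area, against the generic practices (Step 2). It paraphrases the recommendation; it is not an assessment method.

def step1_coverage_ok(model_base_practices, process_map):
    """Step 1: confirm every base practice appears somewhere in the organization's processes."""
    covered = set().union(*process_map.values())
    missing = set(model_base_practices) - covered
    return len(missing) == 0, missing

def step2_rate_processes(process_generic_compliance, generic_by_level):
    """Step 2: rate each organizational process against the generic practices of Levels 2-5."""
    ratings = {}
    for name, satisfied in process_generic_compliance.items():
        rating = 1  # Step 1 already established that Level 1 (base practice coverage) is met.
        for level in (2, 3, 4, 5):
            if all(gp in satisfied for gp in generic_by_level[level]):
                rating = level
            else:
                break
        ratings[name] = rating
    return ratings

# Illustrative inputs only.
process_map = {"Requirements Process": {"BP.a", "BP.b"}, "Test Process": {"BP.c"}}
generic = {2: ["GP.2.1"], 3: ["GP.3.1"], 4: ["GP.4.1"], 5: ["GP.5.1"]}

ok, missing = step1_coverage_ok({"BP.a", "BP.b", "BP.c"}, process_map)
if ok:  # Rate maturity only once Step 1 shows all base practices are accounted for.
    print(step2_rate_processes({"Requirements Process": {"GP.2.1", "GP.3.1"},
                                "Test Process": {"GP.2.1"}}, generic))
    # prints {'Requirements Process': 3, 'Test Process': 2}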

6.2. A Level 4 Paradox

The second of this paper’s two recommendations concerns how to rate an organization as Level 4 (or Level 5) using a continuous model. This is best illustrated by a discussion of selecting key metrics [Roedler, Sheard, and Gantzer, 1999].

First, for simplicity, suppose that organizational processes do map 1:1 to SE-CMM process areas. Now, further suppose that one of the process areas applies only marginally to the organization and never causes major headaches. The organization clearly will benefit little from measuring the work products or activities associated with this process. Can this process (or its associated process area) be assessed at Level 4 while making a business decision not to measure it? And if a particular process area cannot be given a Level 4 rating, does this mean the organization cannot consider itself a Level 4 organization, even if the organization as a whole performs quantitative management of key processes?

This is actually a critical question to many sponsors of organizational process improvement efforts because improvement efforts often are tied to incentives. There must be a clear goal and a clear way of measuring whether the goal is achieved.



Yet no continuous model is clear about how Level 4, as an organizational maturity number, is defined. The most common assumption is that “Organizational Level N” means all process areas are evaluated at Capability Level N or higher. This assumption has a distinct downside at Level 4 and up. Using this definition prevents an organization from deciding, for business reasons, to keep some processes at Level 3 and mature others to Level 4, without jeopardizing the overall Level 4 rating. Hence decisions may be made that run contrary to business needs, for example: “Measure everything.”

In a staged model, this issue would not arise. At Level 4, an organization determines its most important issues based on business goals, implements data collection for metrics to provide insight into these key issues, and then manages the applicable processes using appropriate metrics from the data collected. Nowhere is the distribution of these key metrics across process areas predetermined. There could be just one or two key metrics that address the organization-level needs, and the organization can achieve Level 4.

6.2.1. What Should Happen at Level 4

Implicit or explicit stagings that define Level N for an organization as a whole allow executives to commit to simple process goals, such as achievement of a certain level by a certain date. Although there are various interpretations in the literature about what process areas should be included (see Appendix A), even these do not address whether Level 4 can be reached in a process area if none of the key metrics of the organization relate to that process area.

The right thing is to explicitly offer a staged-like interpretation of continuous models at higher levels. The following interpretation has been verbally endorsed as appropriate by at least one architect of a continuous-view model. An organization should consider itself a Level 4 organization when it implements a quantitative management policy based on organization-wide business needs. This means specific or base practices up to Level 4, and generic practices up to Level 3, are met on all assessed process areas, and the Level 4 generic practices are met for processes defined as key by the organization. Figure 9 illustrates this recommendation.

Figure 9 A Level 4 organization. All processes considered and critical measurements included in quantitative measurement program.

At this level, there should be neither a process-by-process nor a process-area-by-process-area approach to measurement. Instead, a few critical measures must be determined for the organization. Some processes will be working well or may be noncritical support, while others represent the main business and its main opportunities for improving product quality, cycle time, and cost. It is the latter set of processes that must be measured and managed according to the data, at least initially. A dynamic measurement process itself will make obvious the decisions as to which measurements to collect and analyze. A Level 4 organization will be able to show that it has prioritized its issues against its business goals, identified and implemented the key metrics needed to manage the processes requiring measurement insight, and has procedures in place to identify when metrics related to other processes need to be initiated.

Similarly, Level 5 should demand an organization-wide view. Processes must be improved, piloted, added, and deleted according to business needs, not in a piecemeal manner. Figure 10 shows a Level 5 organizational profile.

Figure 10 A Level 5 organization. All processes considered; critical measurements included in quantitative measurement program; processes added and improved as necessary.

Note in Figure 9 that four processes were selected as key and were matured to Level 4 using the Level 4 generic practices. In Figure 10, the process on the left has been modified because of lessons learned in the organization, and processes on the right have been added. In addition, more processes are being measured using Level 4 generic practices, presumably because a need was discovered during root cause analysis. Finally, the most critical processes were matured to Level 5 using the Level 5 generic practices.
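Under the interpretation above, the organizational rating can be stated as a simple rule: all assessed processes reach Level 3, and the processes the organization has designated as key also meet the Level 4 (or, for Level 5, the Level 5) generic practices. The sketch below, in Python with invented process names, states that rule. It reflects the interpretation proposed in this paper, simplified to process ratings alone, and is not the text of any published model.

def organizational_level(process_ratings, key_processes):
    """Organizational maturity under the interpretation proposed above (illustrative sketch).

    process_ratings: capability rating (1-5) of each organizational process.
    key_processes: processes selected as key from organization-wide business needs.
    """
    if not all(rating >= 3 for rating in process_ratings.values()):
        return None  # Below the Level 3 threshold; lower levels are not modeled in this sketch.
    key_ratings = [process_ratings[name] for name in key_processes]
    if all(rating >= 5 for rating in key_ratings):
        return 5     # Key processes are continuously improved per the Level 5 generic practices.
    if all(rating >= 4 for rating in key_ratings):
        return 4     # Key processes are quantitatively managed per the Level 4 generic practices.
    return 3         # All processes are defined (Level 3), but key processes are not yet at Level 4.

ratings = {"Requirements": 4, "Design": 3, "Integration and Test": 4, "Support": 3}
print(organizational_level(ratings, key_processes=["Requirements", "Integration and Test"]))  # prints 4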

7. CONCLUSION

These two recommendations provide a baseline interpretation and method for applying continuous models at Maturity Levels 4 and 5. Organizations at higher maturity levels write processes that accurately reflect their business practices and serve their business needs. These processes map to the process areas in the SE-CMM or other systems engineering models, but not in a simple manner. Above Level 3, it makes the most sense to rate the maturity of the organization’s own processes rather than that of the model’s process areas. This rating consists of verifying that the processes meet all the specific practices of the model, up to the target level, as a first step. The maturity of the processes themselves then is checked against the model’s generic practices (and generic attributes, as appropriate).

An interpretation of the meaning of Level 4 and Level 5 for organizations should be explicitly included in future continuous-view models in order to encourage appropriate, business-needs-based organizational behavior at Levels 4 and 5. If these process models are going to serve the true process improvement needs of the organizations that use them, and thereby benefit their customers in the long run, the models must be used in a way that supports business goals. The proper interpretation of Level 4 is that it is sufficient and appropriate for the organization to show it has selected and implemented key metrics to address issues related to the business goals and that a procedure is in place to identify when other metrics are needed.

8. ADDITIONAL WORK

The subject is not closed. One remaining question is whether there will be community acceptance of this two-step process of assessing compliance of the processes to the process areas first and then rating the maturity of the processes themselves rather than the PAs.

A second question relates to broader verification of the assertion above: that organizations tend to improve all processes together, so that even when using a continuous-view model, they tend to achieve a mostly-2 rating, then an almost-all-3 rating, then an all-4 rating, and so forth. The authors have seen this happen with a small number of organizations but wonder if it is true for a broader range of organizations, including, for example, those organizations whose prime motive is achieving a number to put on a coffee mug rather than the indirect benefits of improved processes.

A third question is how continuous models should treat Level 4 and 5 organizational views. While interpreting continuous models in a way that supports organizational decisions for best business value is the right way to go at present, will this interpretation be incorporated into continuous models to come? Will the international SPICE effort, which was the original creator of the continuous view, recognize and implement this as well?

APPENDIX A. CONTINUOUS MODEL STAGING SCHEMES IN THE LITERATURE

The SW-CMM has organizational levels or stages, and several authors [POSE4; Cusick, 1998b; Ibrahim, 1997; SEI, 1999] have proposed stagings for systems engineering process improvement. Typically these staging methods entail fulfilling the requirements of a defined subset of the model’s process areas, according to generic practices up to the desired level. For example, Level 2 means the organization performs all the method’s designated Level 2 process areas according to the Level 1 and 2 generic practices of the systems engineering model. In practice, all of these staging methods require inclusion of most of the process areas by the time an organization reaches Level 3. The difference between Level 3, Level 4, and Level 5 in these staging methods tends to concentrate on how maturely all applicable process areas are practiced.

The staging mentioned by the SE-CMM [Bate et al., 1995] is similar to that in [Cusick, 1998b]: The idea is to first perform systems engineering via the engineering process areas at Level 1, then add the project process areas to support bringing engineering PAs up to Level 2, and so forth. Another staging concept, as in the SW-CMM, concentrates first on project management and then adds development engineering as an organization approaches Level 3 maturity. The project management and process documentation activities called for at Level 2 are important to accomplish before Level 3 because Level 2 makes people believe in process discipline before insisting they change what they are doing, which is Level 3.

4 Process Oriented Systems Engineering Staged Model, quoted in Johnson and Dindo, 1998.


It should be noted that achieving organizational maturity Levels 2–4 using any of these stagings entails fewer requirements than the “assumed staging” shown in Figure 4. Instead of bringing all process areas to Level 2 capability, a staging method requires only some of the process areas to reach Level 2 capability. Figure 11 is comparable to Figure 4, except that the staging scheme of the CMMI (Column e in Table VII) is used. This table of stagings first appeared in Sheard [1999].

Figure 11 Staging using CMMI scheme.

Table VII. Systems Engineering Model Stagings

Table VII shows that all proposed systems engineering stagings are different; therefore, organizations should consider choosing whatever staging best suits them.

APPENDIX B. APPLICABILITY TO THE EIA/IS 731 ARCHITECTURE

The continuous view, as embodied in the SE-CMM, is augmented in EIA/IS 731 with some attributes and concepts from the INCOSE SECAM. Implementation of the SE-CMM showed that some of the “base” practices in the SE-CMM logically mapped to generic practices at Level 2 or higher and to concepts at Level 2 or higher in staged models. To address this, EIA/IS 731 assigned a level to each practice and called them “specific practices” to distinguish them from SE-CMM base practices, which appear only at Level 1. To reach Level 2, an organization must perform all the specific practices assigned to Levels 1 and 2 and must meet Level 2 generic practices as in an SE-CMM appraisal.

The concepts shown in Figure 8 still apply, but the first of the two steps to rating has some modifications. Since there are now specific practices designated at Level 1, Level 2, Level 3, and so forth, the first step to rating must include a substep of identifying all specific practices to be accomplished to receive a desired rating. This is simply the sum of the specific practices in each assessed focus area that are assigned to the level desired or any lower level. Mapping to organizational processes is then performed, and Step 2 can proceed as before.

EIA/IS 731 also includes concepts of process effectiveness and product value, called generic attributes. These separately receive ratings between 0 and 5. As the level increases, the worth of the product and process also increases. The effectiveness of the process activities and the value of the products are evaluated in subjective terms (marginal, adequate, significant, measurably significant, or optimal) using engineering judgment. A fairly complicated algorithm combines the specific practices, generic practices, and generic attributes to assess capability. These attributes can be considered in the same manner as generic practices for the purpose of implementing this paper’s proposals. They would be evaluated during Step 2 of the two-step rating process.
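The substep described above (identifying every specific practice assigned to the target level or any lower level in each assessed focus area) is a simple accumulation, as the following sketch shows. It is written in Python; except for FA 1.2, Define Technical Problem (mentioned in Section 5.1), the focus-area and practice identifiers are placeholders, and the standard’s full rating algorithm combining generic practices and generic attributes is not reproduced here.

# Placeholder specific practices, keyed by focus area and by the level each practice is assigned to.
SPECIFIC_PRACTICES = {
    "FA 1.2 Define Technical Problem": {1: ["SP-1a", "SP-1b"], 2: ["SP-2a"], 3: ["SP-3a"]},
    "FA x.y (second assessed focus area)": {1: ["SP-1c"], 2: ["SP-2b"], 3: []},
}

def specific_practices_required(target_level, assessed_focus_areas=SPECIFIC_PRACTICES):
    """Collect, per focus area, the specific practices assigned to the target level or any lower level."""
    required = {}
    for focus_area, by_level in assessed_focus_areas.items():
        required[focus_area] = [sp for level in sorted(by_level) if level <= target_level
                                for sp in by_level[level]]
    return required

# For a Level 2 rating, the Level 1 and Level 2 specific practices must all be performed;
# the Level 2 generic practices must also be met, as in an SE-CMM appraisal (Step 2).
print(specific_practices_required(2))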


REFERENCES

R. Bate et al., A Systems Engineering Capability Maturity Model, Version 1.1, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 1995.

B. Curtis, W.E. Hefley, and S. Miller, People Capability Maturity Model, CMU/SEI-95-MM-02, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, September 1995.

K. Cusick, Engineering management pocket guides: The “Triptik” approach to capability maturity models, Proc INCOSE, Vancouver, BC, Canada, 1998a, pp. 267–273.

K. Cusick, Improvement stages, CrossTalk 11(10) (1998b), 22–24, available at http://www.stsc.hill.af.mil/CrossTalk/1998/oct/cusick.html.

Electronic Industries Alliance (EIA), Systems Engineering Capability Model, EIA/IS 731, available at http://www.geia.org/eoc/G47/eiag47.htm, 1999.

W.W. Gibbs, Software’s chronic crisis, Sci Am 271(3) (1994), 86–95.

J. Herbsleb, A. Carleton, J. Rozum, and D. Zubrow, Benefits of CMM-based software process improvement: Initial results, CMU/SEI-94-TR-13, ESC-TR-94-013, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 1994.

L. Ibrahim et al., Federal Aviation Administration Integrated Capability Maturity Model, Version 1.0, available at http://www.faa.gov/ait/ait5/FAA-iCMM.htm, Washington, DC, November 1997.

International Council on Systems Engineering (INCOSE), Systems Engineering Capability Assessment Model (Version 1.50), Seattle, WA, June 1996.

International Organization for Standardization (ISO), ISO 9000 International Standards for quality management, Geneva, Switzerland, 1991.

K.A. Johnson and J. Dindo, Expanding the focus of software process improvement to include systems engineering, CrossTalk 11(10) (1998), 13–18.

L.K. Mosemann II, Keynote speech, Software Productivity Consortium Member Forum, Herndon, VA, October 1995.

National Institute of Standards and Technology (NIST), Malcolm Baldrige National Quality Award, Gaithersburg, MD, available at http://www.quality.nist.gov/, 1999.

M.C. Paulk, B. Curtis, M.B. Chrissis, and C.V. Weber, Capability Maturity Model for Software, Version 1.1, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, February 1993.

G.J. Roedler, S.A. Sheard, and D.J. Gantzer, Measurement, standards, and models, INSIGHT (INCOSE) 1(4) (Winter 1999), 5–9.

S.A. Sheard, Twelve systems engineering roles, Proc INCOSE, Boston, 1996, pp. 481–488, available at http://www.software.org/pub/papers/12ROLES.html.

S.A. Sheard, The frameworks quagmire, CrossTalk 10(9) (1997a), 17–22, available at http://www.software.org/Quagmire/frampapr/.

S.A. Sheard, The frameworks quagmire, a brief look, Proc INCOSE, Los Angeles, 1997b, pp. 159–166.

S.A. Sheard and J.G. Lake, Systems engineering standards and models compared, Proc INCOSE, Vancouver, BC, Canada, 1998, pp. 586–596, available at http://www.software.org/pub/papers/9804-2.html.

S.A. Sheard, H. Lykins, and J.R. Armstrong, Overcoming barriers to systems engineering process improvement, Proc INCOSE, Brighton, UK, 1999.

Software Engineering Institute (SEI), Software Acquisition Capability Maturity Model, CMU/SEI-96-TR-020, Carnegie Mellon University, Pittsburgh, PA, December 1996.

Software Engineering Institute (SEI), Capability Maturity Model® Integration (CMMI) project, available at http://www.sei.cmu.edu/cmm/cmms/cmms.integration.html, Carnegie Mellon University, Pittsburgh, PA, 1999.

SSE-CMM Author Group, Systems Security Engineering Capability Maturity Model (personal communication).

Sarah Sheard has been with the Software Productivity Consortium for four years, where she helps member companies improve and assess their processes, and develops systems engineering-related tools, methods, assessment materials, and courses. Ms. Sheard is the original author of the Frameworks Quagmire, which describes the complexity existing in the community of process and quality models and standards. She chairs the International Council on Systems Engineering’s Measurement Technical Committee, which encompasses the Measurement and the Capability Assessment working groups. She has participated in the Measurement Working group and is active on the Practical Systems Measurement project. INCOSE positions previously held by Ms. Sheard include chair of the Communications Committee and Program Chair for the Washington Metropolitan Area chapter. She also serves as an associate editor of Systems Engineering. Prior to joining the Consortium, Ms. Sheard practiced systems engineering for fifteen years: on satellites for twelve years, and on software systems for the last three years. She holds bachelor’s and master’s degrees in chemistry from the University of Rochester and the California Institute of Technology, respectively.


Garry Roedler is a Principal Systems Engineer with Lockheed Martin Management and Data Systems. He has 18 years of experience with systems and software engineering processes, analysis, measurement, and teaching. Currently, he chairs the Systems Integration Process Review Board, focusing on systems engineering and integration process improvement. Mr. Roedler was an integral part of the Systems Integration programs’ process improvement effort, through which Systems Integration became the first major organization to formally achieve Levels 3 and 4 against the engineering and management process areas of the SE-CMM. Within Lockheed Martin, he is on the corporate Software and Systems Engineering Councils, where he has co-authored several corporate process standards and guidebooks for software and systems engineering. Mr. Roedler holds degrees in mathematics education and mechanical engineering from Temple University and has completed extensive graduate work in computer science and business. He is currently active in many professional and standards organizations, including INCOSE and IEEE. In INCOSE, Mr. Roedler chairs the Measurement Working Group and is co-author of the INCOSE Systems Engineering Measurement Primer. He is also very active in the Practical Software Measurement initiative, on which he is a member of the Technical Steering Group and the project leader of the Practical Systems Measurement project. He also represents Lockheed Martin in the U.S. Technical Advisory Group for ISO/IEC JTC1 SC7, in which he is working on the development of ISO/IEC 15939, Software Measurement Process Framework.
