Assessment of Measurement Indicators in Software Process Improvement Frameworks

Luigi Buglione 1,2, Alain Abran 1

1 École de Technologie Supérieure (ETS), 1100 Notre-Dame Ouest, Montréal, Canada H3C 1K3
2 SchlumbergerSema, Via Riccardo Morandi 32, I-00050 Rome, Italy
[email protected], [email protected]

Abstract: Measurement is progressively becoming a mainstream management tool to help ICT organizations plan, monitor and control. However, measurement itself is not yet a mature domain of knowledge in software engineering. This paper investigates the assessment of the measurement indicators proposed in software process improvement (SPI) models, and proposes a methodology for the design of a measurement indicator assessment grid. A case study on the use of this assessment grid is presented and its results discussed.

Key words: Process and product indicators, ISO 9001:2000, SPI frameworks, CMM, SPICE, SMART criteria

1 Introduction

Measurement is progressively becoming a mainstream management tool to help Information and Communications Technology (ICT) organizations plan, monitor and control. Measurement is also receiving greater international recognition with the new ISO 9001:2000 standard, which stresses, in clause 8, the role of measurement and analysis in helping to enable continuous improvement. Furthermore, ISO 9001:2000 clearly distinguishes two types of entities to be monitored and measured: process (clause 8.2.3) and product (clause 8.2.4). In addition to pursuing this ISO certification, a number of ICT organizations are striving to implement ISO 9001-based Quality Management Systems (QMS). They usually attempt to do so by implementing software process improvement (SPI) models, such as the Sw-CMM, CMMI and SPICE. Mappings of SPI models against the ISO model have been reported in the literature. For instance, [PAUL94] documents which Key Process Areas (KPAs) map to ISO clauses and which do not, and identifies which ISO clauses do not correspond at all. 'Process' is the specific concept to take into account when carrying out such a mapping.


The assessment of proposed measurement indicators in these models is investigated in this paper. Do the SPI models propose measures which are adequate to meet the ISO 9001 measurement requirements for both process and product entities? Are processes monitored and controlled through proper indicators? Are product indicators used as substitutes for process evaluation? How should an ICT company design and verify process indicators within its own measurement system? The objective of this paper is to provide insights into how the assessment of measurement indicators can help respond to these questions in current SPI models. Sections 2 and 3 present highlights on how measurement is addressed in the main SPI frameworks. Section 4 focuses on the differences between process and product indicators, using the STAR taxonomy and ISO 9001:2000 definitions. Section 5 presents a methodology for the assessment of measurement indicators for process entities, and a case study is presented in Section 6. Finally, Section 7 presents some conclusions and suggestions for further research.

2 Measurement in SPI frameworks

Five software entity types to be measured in a comprehensive measurement program have been identified by [BUGL02]:

• the organization, which manages
• projects, each based on the classical production schema, which includes:
• inputs (resources)
• processes (processes)
• outputs (products)

Figure 1: The five software entity types measurable in a Software Intensive Organization (SIO) [BUGL02]


One of the eight quality management principles in the ISO 9000:2000 family of standards is the "factual approach to decision making", based on the measurement and analysis (clause 8) of both process and product (clauses 8.2.3 and 8.2.4 respectively):
• 8.2.3 -> apply suitable methods for monitoring and, when applicable, measurement of QMS processes, demonstrating the ability of such processes to achieve planned results;
• 8.2.4 -> verify that product requirements have been met, during the appropriate stages of the product realization process.
The product is the output of a process and should be reproducible and of consistent quality, while a process is a function of the resources used in it. Management of each of the five entity types requires the setting up of measurable goals, with interrelationships across entity types. For instance, an organization entity in the ICT world can be measured using a Software Process Improvement (SPI) model, such as the Sw-CMM, ISO 15504-2, etc.

The staged version of these models contains a Common Feature dedicated to measurement and control. In Sw-CMM v1.1, it is labeled "Measurement and Analysis" (MA) and is replicated for each KPA of the model; it "describes the need to measure the process and analyze the measurement. MA typically includes examples of measurements that could be taken to determine the status and effectiveness of the Activities performed" [PAUL93, p. 0-28]. Some examples of indicators which need to be assessed are presented below. In the Software Quality Management (SQM) KPA at Level 4, for example, two of the suggested measures are:
• the cost of poor quality (CONQ – Cost Of Non Quality);
• the cost of achieving the quality goals.
However, a reduction in such costs is not necessarily linked to increased quality. For instance, when a global ICT company moves some of its activities to a subsidiary in a country where costs are lower, these costs are reduced, but has quality improved? Can SQM activities therefore be adequately evaluated only on the basis of their cost?

The second example is from the newer CMMI [SEI02a-b]: it now has four Common Features, and MA has become a Process Area at Level 2 (MEA), while the Directing Implementing (DI) common feature suggests process measures under the "DI3 – Monitor and Control the Process" label. For instance, the Technical Solution (TS) process area at Level 3 lists four proposed measures in its DI3 section, including the "size and complexity of the product, product components, interfaces and documentation" (the purpose of Technical Solution being to design, develop and implement solutions to requirements). However, while these proposed measures of intermediate products are of interest, they are not direct process measures.

As a third example, ISO 15504-2 offers a specific process area, coded ORG.5 (Measurement), and addresses measurement criteria in the capability dimension through the process attributes PA4.1 (Measurement attribute) and PA4.2 (Process Control attribute), which must be satisfied to demonstrate the achievement of Capability Level 4 (CL4 – Predictable process). Models such as ISO 15504-2:1998 [ISO98] do not provide a list of specific measures, but only process verification criteria.

3 Measurement assessment frameworks

The basic maturity model was derived from the initial QMMG (Quality Management Maturity Grid) by Phil Crosby [CROS79], revisited by [RADI85] and used for evaluating contractors by [HUMP87]. It was subsequently applied to the software production process in the Software Production Maturity Model [HUMP88], the Sw-CMM v1.0 [PAUL91] and v1.1 [PAUL93], SPICE [ISO98] and the CMMI [SEI02a-b], as illustrated in Figure 2. Do the SPI models propose adequate measures to quantitatively manage the five distinct entity types? What is the current status of measurement within these SPI models? At the top of Figure 2 are the three studies that have specifically addressed the measurement issue for software-related SPI models:

Figure 2: Capability Maturity Models and Measurement CMMs

• Daskalantonakis, Basili & Yacobellis [DASK90] formalized a maturity path based on the well-known five levels and ten themes (formalization of the development process, formalization of the measurement process, scope of measurement, implementation support, measurement evolution, measurement support for management control, project involvement, product involvement, process involvement, predictability). The model is based on the following maturity sequence: project (ML2) → product (ML3) → process (ML4). The rating mechanism is the same as in Sw-CMM v1.x: an assessed organization is at Level X if the answers to at least 80% of all Level X questions (from the maturity questionnaire) are "Yes"; otherwise, the organization is at Level X-1.
• Budlong & Peterson [BUDL95] observed that "metrics maturity is one dimension of overall process maturity" and that "sometimes organizations that rate well in terms of overall process maturity have weak metrics programs." Their Software Metrics Capability Maturity (SMCM) framework, derived from [DASK90], has three maturity levels and six themes (formalization of the development process, formalization of the metrics process, scope of metrics, implementation support, metrics evolution, metrics support for management control). As in the Daskalantonakis study, the maturity sequence is similar: project (ML1) → product (ML2) → process (ML3). In addition, two questionnaires were designed, one for acquisition organizations and one for companies dedicated to software development or maintenance, complemented by a list of organizational information for deriving a complete profile report for measurement improvement.
• Niessink & van Vliet [NIES98] reported on studies about "measurement" maturity, evaluating their relative strengths and weaknesses, and then proposed their own model, referred to as the Measurement Capability Maturity Model (M-CMM). The M-CMM consists of eleven KPAs spread across the five maturity levels:

Maturity Level 5: Measurement Change Management; Technology Selection
Maturity Level 4: Measurement Cost Management
Maturity Level 3: Training Program; Organization Measurement Database; Organization Measurement Design; Organization Measurement Focus
Maturity Level 2: Measurement Feedback; Measurement Analysis; Measurement Collection; Measurement Design

Table 1: Key Process Areas (KPAs) by Maturity Level

It can be observed that the scope of these specialized assessment models/methods is the measurement of the 'organization' entity, and that they do not address the other types of measurable entities. So, what about the maturity level of process/product measures used in an organization?

4 Process-Product Indicators and ISO 9001:2000


Do SPI models provide adequate measures for each type of entity? Can a process be properly evaluated through a product measure? Are ICT companies able to design a proper list of measurement indicators for their processes? Do they have guiding criteria for choosing the right ones and validating them?

While there is a clear relationship between a process and a product, they are still quite distinct concepts, and the related measurement indicators must be clearly distinct. For example:

A) A product indicator must:
• Directly evaluate the product
• Provide insights for new products
• Monitor the stability of the quality of the product over time

B) A process indicator must:
• Evaluate the process
• Find weaknesses in the environment in which the process is being applied
• Provide insights into process improvement
• Help tailor and develop the process over time

ISO 9000:2000 defines validation as the "confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled" (clause 3.8.5). In a process notation such as IDEF0 [FIPS93] (Figure 3a), each process includes four elements, the so-called ICOM (input-control-output-mechanism). The "control" element of a process can be represented by norms, internal standards to be respected and, last but not least, measures. It is also possible to model control as a process, as proposed in the IEEE SESC Software Process Model (Figure 3b): "A software engineering process consists of related activities performed to produce a software engineering product. Resources are used to perform a process. To manage or improve a process, a person needs the following: measurement, control and action. Measurement is quantitative evidence regarding the state of the process. Measurement can be made on three fundamental conditions: conditions inside the process, products of the process, and satisfaction of downstream users of the product. Control is the decision-making mechanism using measurement. Goals and constraints are taken into account when formulating action. Action is the response of control to influence the process in the desired direction. These can be depicted as a closed feedback loop" [SESC03].
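To make the SESC closed feedback loop concrete, the short Python sketch below models one iteration of measurement, control and action. It is an illustration only: the indicator name, goal and decision rule are hypothetical and are not taken from the standard or from the paper.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    indicator: str   # name of a process indicator (hypothetical)
    value: float     # quantitative evidence about the state of the process

def control(m: Measurement, goal: float) -> str:
    """Decision-making mechanism: compare the measurement against a goal."""
    # The returned action is the response used to influence the process.
    return "no action" if m.value >= goal else "corrective action on the process"

# One pass through the closed feedback loop: measure, decide, act.
m = Measurement(indicator="review_coverage", value=0.82)
print(control(m, goal=0.70))   # prints: no action
```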


Figure 3: (a) IDEF0 generic process notation; (b) IEEE SESC Software Process Model

A challenging issue is therefore the validation of such control mechanisms, through an assessment of their measurement indicators.

5 Assessment of the measurement indicators for the process entity types

An assessment of the measurement indicators for the process entity types must include verification of the nature of the indicators, that is, of the entity to which they refer (in this case, the process). For the design of the methodology for this type of assessment, we used an instantiation of the Measurement Process Model of Jacquet and Abran [JACQ97]. This model includes four major concepts: the design of the measurement method, the measurement method application, the analysis of the measurement results and, finally, the exploitation of these measurement results in quantitative models. Table 2 presents the proposed list of assessment activities on the basis of this model. These activities are then described individually.

Jacquet-Abran Measurement Process Model -> Assessment activities

1. Design of the measurement method
   a. Definition of the objectives -> Define the objective of the evaluation of the measurement indicators in SPI frameworks
   b. Design/selection of the metamodel -> Select the evaluation criteria (SMART) in a grid format
   c. Characterization of the concept to be measured -> Define the criteria for each cell in the Indicator Assessment Grid (IAG)
   d. Definition of the numerical assignment rules -> Define the rules for the rating in each cell
2. Measurement method application
   a. (Software) documentation gathering -> Collect feedback on the indicators to be evaluated
   b. Construction of the (software) model -> Apply the evaluation criteria
   c. Application of the numerical assignment rules -> Assign the ratings
3. Measurement result analysis
   a. Result -> Document results, as in Figure 4
   b. Audit -> Audit the results against defined thresholds (one by one; per SMART criterion; per process group; etc.)
4. Exploitation of the result -> Design improvement actions according to the flow chart in Figure 5; position all process indicators in the QMS system

Table 2: Mapping the Jacquet-Abran model to the process indicator validation process

1a. Definition of the objectives: Measurement indicators are, by design, part of the control process for all QMS-defined processes. To verify that such indicators are indeed efficient, it is important to be able to assess them. The objective is therefore to assess these indicators in this context. To do so, assessment criteria must be designed.

1b. Design/selection of the meta-model: A set of criteria to assess the measurement indicators is proposed in Table 3, on the basis of the 5W's+H rule (What, Who, Where, When, Why + How); we refer to them as the SMART criteria (Specific, Measurable, Add value and actionable, Realistic and relevant, Timely 1).

1 Sometimes the letters A, R and T in the acronym have different interpretations, i.e. A = Attainable, Action-Oriented; R = Reasonable; T = Tangible.

SMART criterion -> 5W's+H element
S – Specific -> What
M – Measurable -> How
A – Add Value & Actionable -> Why
R – Realistic -> Who, Where
T – Timely -> When

Table 3: Mapping between SMART criteria and the "5W's+H" rule

1c. Characterization of the concept to be measured: Once the assessment criteria have been selected, a proper description is provided to ensure unambiguous interpretation in distinct assessment contexts. The selected definitions are provided in the second column of Table 4.

1d. Definition of the numerical assignment rules: Often in software engineering, only checklists are available for evaluating a criterion, using yes/no logic. We prefer the use of numerical values with the ordinal rating scales proposed in ISO 14598-5:1998 [ISO98a]. However, to ensure repeatable classification within each of these ordinal ratings, further description must be provided, as illustrated in Table 4, where each cell represents the assessment sub-criteria for each criterion. From now on, this grid is referred to as the Indicator Assessment Grid (IAG).

2a. Documentation gathering: This step refers to the collection of the feedback (informal and formal) that will be used for the assessment. If done explicitly, it may be based on feedback questionnaires from the QMS processes.

2b. Construction of the model: This step refers to the application of the IAG in the specific context of the assessment of the selected indicators. Note that the thresholds for each process indicator should usually be defined and documented.

2c. Application of the numerical assignment rules: The evaluation process using the IAG provides a numerical value for each of the criteria, leading to profiles for each of the SMART criteria. Table 5 illustrates how a single process indicator can be assessed with the IAG, and Table 6 how this is done for a set of indicators.

S – Specific. Indicators must be specific and targeted to the area intended to be measured.
  0 (Poor/Absent): Not focused on the area intended to be measured.
  1 (Fair): Informally addresses and covers the area intended to be measured.
  2 (Good): Addresses and covers the area intended to be measured.
  3 (Excellent): Properly addresses and covers the area intended to be measured.

M – Measurable. Indicators must permit collection of accurate and complete data.
  0 (Poor/Absent): Incomplete or bad definition of the elements needed for calculating the indicators.
  1 (Fair): Formal definition of the elements needed for calculating the indicators, without examples and tools.
  2 (Good): Definition of the elements needed for calculating the indicators, with fair examples but with no suggestions for tools.
  3 (Excellent): Complete and exhaustive definition of the elements for calculating the indicators, with valuable examples and suggestions for tools.

A – Add Value & Actionable. Indicators must be easy to understand, showing, over time, which performance direction is "good" and which direction is "bad", so that it is known when to take action.
  0 (Poor/Absent): No evidence on the "bad" and "good" directions in trend analysis.
  1 (Fair): Basic evidence on the "bad" and "good" directions in trend analysis.
  2 (Good): Intermediate evidence about the "bad" and "good" directions in trend analysis.
  3 (Excellent): Clearly shows the "bad" and "good" directions in trend analysis.

R – Realistic and Relevant. Indicators to be taken into account should only be those really relevant and important to the business.
  0 (Poor/Absent): Provides no useful suggestions for improvements to the related process.
  1 (Fair): Provides minor suggestions for improvements to the related process.
  2 (Good): Provides intermediate suggestions for improvements to the related process.
  3 (Excellent): Provides critical suggestions for improvements to the related process.

T – Timely. Indicators are useful if related information is used in a timely manner.
  0 (Poor/Absent): The timing defined for data collection does not permit execution of the reporting and data analysis tasks as forecasted.
  1 (Fair): The timing defined for data collection permits execution of minimal reporting and data analysis.
  2 (Good): The timing defined for data collection permits execution of reporting and data analysis in a profitable way.
  3 (Excellent): The timing defined for data gathering easily permits execution of reporting and data analysis.

Table 4: The Indicator Assessment Grid (IAG)
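As an illustration of how the grid can be used, the sketch below (an assumption, not tooling from the paper) encodes abbreviated versions of the Table 4 descriptors as data and builds a Table 5-style profile for one hypothetical indicator; the indicator name and ratings are made up for the example.

```python
# Abbreviated descriptors for the 0-3 ordinal scale of each SMART criterion (see Table 4).
IAG = {
    "S": ["Not focused on the area", "Informally addresses the area",
          "Addresses and covers the area", "Properly addresses and covers the area"],
    "M": ["Incomplete/bad definition of elements", "Formal definition, no examples or tools",
          "Definition with fair examples, no tools", "Complete definition, examples and tools"],
    "A": ["No evidence on good/bad trend directions", "Basic evidence",
          "Intermediate evidence", "Clearly shows good/bad trend directions"],
    "R": ["No useful improvement suggestions", "Minor suggestions",
          "Intermediate suggestions", "Critical suggestions"],
    "T": ["Timing prevents reporting/analysis", "Timing allows minimal reporting",
          "Timing allows profitable reporting", "Timing easily allows reporting/analysis"],
}

def rate_indicator(title: str, ratings: dict) -> dict:
    """Build a Table 5-style profile: one 0-3 rating per SMART criterion."""
    assert set(ratings) == set(IAG), "one rating per SMART criterion is required"
    assert all(r in (0, 1, 2, 3) for r in ratings.values()), "ratings are ordinal, 0-3"
    return {"indicator": title,
            "profile": {c: (r, IAG[c][r]) for c, r in ratings.items()}}

# Hypothetical example: a schedule-variance indicator rated against the grid.
print(rate_indicator("Schedule variance", {"S": 3, "M": 2, "A": 1, "R": 2, "T": 2}))
```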

[Table 5 layout: a profile sheet with fields for the Indicator Title, the Related Process and the Indicator Objective; for each SMART criterion (S, M, A, R, T) a rating on the 0-3 scale; and a field for Weaknesses / Possible Improvements.]

Table 5: Formal profile for evaluating a process indicator using the IAG grid

When reporting assessment results for a set of indicators, both the assessment values and the thresholds for the acceptability of a given indicator can be presented in a table format, as in Table 6, where the assessed values of each indicator are shown in bold and the threshold values in grey.

[Table 6 layout: one row per indicator (I1, I2, ..., IN); for each SMART criterion (S, M, A, R, T), the 0-3 rating scale is repeated, with the assessed value marked in bold and the threshold value marked in grey.]

Table 6: KPI thresholds using the IAG checklist

Using a spreadsheet, the results from the assessment can be summarized at two levels, automatically highlighting which criteria and indicators are above their threshold acceptability values (Figure 4).

3a. Results: Once step 2c has been performed for all criteria of all indicators, reports and related documentation are prepared to present the output of the assessment, as illustrated in Figure 4.

Figure 4: Indicator Assessment Results (partial list)
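A minimal sketch of such a two-level summary is given below. The threshold of 2 and the indicator names (FWR, SPI, PSCA) come from the audit discussion that follows; the individual ratings themselves are purely illustrative, since the paper does not publish them.

```python
THRESHOLD = 2   # acceptability threshold used in the case study discussion

assessments = {                      # indicator -> rating per SMART criterion (illustrative)
    "FWR":  {"S": 3, "M": 1, "A": 1, "R": 2, "T": 3},
    "SPI":  {"S": 2, "M": 1, "A": 1, "R": 3, "T": 2},
    "PSCA": {"S": 2, "M": 2, "A": 1, "R": 1, "T": 2},
}

# Level 1: per indicator, which criteria fall below the threshold?
for name, ratings in assessments.items():
    weak = [c for c, r in ratings.items() if r < THRESHOLD]
    print(f"{name}: {len(weak)} criteria under threshold -> {weak}")

# Level 2: per criterion, how many indicators fall below the threshold?
for criterion in "SMART":
    low = sum(1 for ratings in assessments.values() if ratings[criterion] < THRESHOLD)
    print(f"{criterion}: {low}/{len(assessments)} indicators under threshold")
```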


3b. Audit: An analysis of the partial list presented in Figure 4 suggests that a more in-depth analysis is appropriate for some indicators in the set: in particular, FWR, SPI and PSCA have two out of five values under the established threshold, which is set at 2. The same holds for one criterion (A): within the considered set, five out of ten indicators have an "A" rating under the established threshold. The second-level analysis should focus on:
• analyzing individual indicator assessments in order to improve the respective measurement designs and definitions of the indicators;
• analyzing a possible common weakness in one criterion (A – Actionable in the example), looking at the distribution of the rating frequencies across the 0-3 rating scale.
Information derived from this grid analysis can be used for gap analysis between:
• planned and assessed values, to verify that the definition of a process indicator is correct, or, if the indicator definition is quite good and the process stable, between:
• two subsequent assessments, in order to improve those criteria that make the data gathering tasks easier (M criterion).

4. Exploitation of results: One of the objectives of Process Indicators (PIs), as mentioned previously, is to provide information to improve the related processes. It is fundamental to maintain alignment between a process and its indicators; misalignments should lead to a redesign/revision of the indicators of that process. PIs must represent one (control) element of the (level of performance of the) process, necessarily based on the process tasks performed. Therefore, if a process is redefined, this should lead to a review of the related PIs. The flow chart in Figure 5 illustrates the corresponding revision path in this context.

The next step is to analyze, with a matrix, the relationship between KPI performance (lower or higher than its established threshold level) and the level of achievement of the goal(s) of each related process, according to an SPI rating technique. For instance, in ISO 15504 this corresponds to the PA 1.1 (Process Performance) process attribute: "The process performance attribute is a measure of the extent to which the process purpose is achieved. As a result of full achievement of this attribute: a) the process achieves its defined outcomes". A performance-level matrix is illustrated in Table 7, using the four process attribute rating values of Part 2 of the SPICE model: N (Not Achieved), P (Partially Achieved), L (Largely Achieved) and F (Fully Achieved). In that case, it is possible to aggregate the N/P ratings within the "-" cells and the L/F ratings within the "+" cells.

Process goal achievement level (rows) vs. KPI value (columns):

                                    KPI value (-): < threshold    KPI value (+): ≥ threshold
Goal achievement level (+)          D                             A
Goal achievement level (-)          C                             B

Table 7: Classification quadrants for performance and maturity level assessments

• Quadrants A and C correspond to an alignment of results from the two dimensions. Recommended actions: in the A cell, none; in the C cell, a redesign of the process and, whenever required, of its KPIs.
• Quadrants B and D correspond to a misalignment of results from the two dimensions. Recommended actions: in the B cell, most of the failure can be assigned to the process; this requires an in-depth analysis of the other PAs in the whole process rating, as well as Root-Cause Analysis [ISHI86]. In the D cell, a rating lower than the threshold suggests that the failure might be caused by an inadequate KPI; such an indicator must be analyzed, and redesigned if required.
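The classification and the recommended actions can be summarized in a small decision function. The sketch below is an assumed rendering of Table 7 in Python, not code from the paper; the KPI values and threshold in the usage lines are hypothetical.

```python
def quadrant(goal_rating: str, kpi_value: float, threshold: float) -> str:
    """Classify a process: goal_rating uses the SPICE scale N/P/L/F,
    aggregated as '-' (N, P) or '+' (L, F); the KPI is compared with its threshold."""
    goal_ok = goal_rating in ("L", "F")
    kpi_ok = kpi_value >= threshold
    if goal_ok and kpi_ok:
        return "A: aligned - no action"
    if not goal_ok and not kpi_ok:
        return "C: aligned - redesign the process (and its KPIs if required)"
    if not goal_ok and kpi_ok:
        return "B: misaligned - analyse the process (other PAs, root-cause analysis)"
    return "D: misaligned - the KPI may be inadequate and should be redesigned"

print(quadrant("F", kpi_value=0.9, threshold=0.8))   # quadrant A
print(quadrant("P", kpi_value=0.9, threshold=0.8))   # quadrant B
```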


Figure 5: Revision path of a PI when the process is revised or changed

Figure 6 also indicates where the further actions for the four quadrants are to be performed. In summary, all modifications of a process (Quadrants B and C) must be based on the whole assessment, including all the assessment criteria and not just the one shown in the matrix. Referring to the SPICE model, this means that the analysis has to be performed according to the nine process attributes spread across the five capability levels used to rate the process.

6 A Case Study at SchlumbergerSema

Here, we present a case study of the application of this indicator assessment approach in a multinational ICT organization, SchlumbergerSema, the IT business segment of Schlumberger Limited. This case study provides some quantitative data on the trial use of the Indicator Assessment Grid (IAG) proposed to help improve the control of QMS processes in its Italian organization. SchlumbergerSema initiated its quality management system (QMS) in December 2002 and revised its list of processes and related indicators in April 2003 (Rev.0). The IAG approach was then used to revisit this list, and the result is referred to as Rev.1. Readers are reminded that, according to ISO 9001:2000, each defined process has an "owner", who also defines all its elements, including indicators. For this trial, the self-evaluation therefore addresses only the indicators "owned" by the Quality Management office (i.e. 32% of the total number of defined process indicators in Rev.0 and 32% in Rev.1). Table 8 presents the information on the two reviews of the indicators by process group: the number of processes in a group, the number of indicators for that process group, and the ratio of indicators per process group (I/P).


Figure 6: Process and KPI: flow chart

Process Group | QMS Rev.0 (#Proc / #Indic / I/P) | QMS Rev.1 (#Proc / #Indic / I/P) | ±Δ (#Proc / #Indic) | Comments
PG01 | 3 / 1 / 33.33% | 3 / 1 / 33.33% | 0 / 0 |
PG02 | 4 / 7 / 175.00% | 4 / 4 / 100.00% | 0 / -3 | Deleted low-value indicators
PG03 | 3 / 6 / 200.00% | 3 / 5 / 166.67% | 0 / -1 |
PG04 | 4 / 0 / 0.00% | 4 / 7 / 175.00% | 0 / +7 | First definition of indicators
PG05 | 3 / 4 / 133.33% | 3 / 4 / 133.33% | 0 / 0 |
PG06 | 7 / 17 / 242.86% | 9 / 18 / 200.00% | +2 / +1 | Two new processes
PG07 | 4 / 6 / 150.00% | 4 / 6 / 150.00% | 0 / 0 |
PG08 | 5 / 12 / 240.00% | 5 / 12 / 240.00% | 0 / 0 |
Total | 33 / 53 | 35 / 57 | +2 / +4 |

Table 8: Process groups, processes and indicators across revisions

The three right-hand columns compare the two reviews: two processes were added to the PG06 group, 4 indicators were dropped and 8 were added, 7 of them in the PG04 group, where none had previously been defined. Further work to establish acceptability thresholds for each indicator will be carried out during 2003. ISO 9001:2000 internal audits will also be carried out on the basis of two audit criteria: conformity to the ISO standard (and to internal QMS procedures), and process capability level assessment (with a rating mechanism compatible with ISO 15504-2:2002).

Table 9 presents the results of the two reviews of the indicators using the SMART criteria and the assessment methodology presented in Section 5. Its rows give the percentage distribution of ratings for each SMART criterion, showing how the indicators have been perceived and evaluated across the two subsequent revisions. Table 10 shows the differences, in percentage values, between the two revisions; the positive differences are in bold. For instance, for the "S" criterion a +3.52% increase in the level-3 ratings was observed: according to the IAG, there is therefore general satisfaction with the definition of the indicators with respect to their specific usage. These differences were caused by the improvement actions implemented between Rev.0 and Rev.1: corrections to the "S", "M" and "A" criteria and to the indicator definitions in Rev.1. For the "R" and "T" criteria, the average values decreased slightly. For the "R" criterion, greater attention is required from the internal process owners 2 in choosing process indicators that provide useful suggestions for process improvement (ref. Quadrant D of the performance-maturity level matrix in Table 7).

Percentage of processes, by rating level – QMS Rev.0:
                              S        M        A        R        T
% rated Poor/Absent (0)     0.00%   11.32%    0.00%    0.00%    0.00%
% rated Fair (1)           13.21%   26.42%   32.08%    3.77%   22.64%
% rated Good (2)           16.98%   58.49%   35.85%   37.74%   22.64%
% rated Excellent (3)      69.81%    3.77%   32.08%   58.49%   54.72%
Total                     100.00%  100.00%  100.00%  100.00%  100.00%

Percentage of processes, by rating level – QMS Rev.1:
                              S        M        A        R        T
% rated Poor/Absent (0)     0.00%    6.67%    0.00%    0.00%    0.00%
% rated Fair (1)           10.00%   18.33%   25.00%    5.00%   18.33%
% rated Good (2)           16.67%   68.33%   45.00%   48.33%   33.33%
% rated Excellent (3)      73.33%    6.67%   30.00%   46.67%   48.33%
Total                     100.00%  100.00%  100.00%  100.00%  100.00%

Table 9: IAG assessment across revisions

±Δ (Rev.1 - Rev.0), in percentage points:
                              S        M        A        R        T
% rated Poor/Absent (0)     0.00    -4.65     0.00     0.00     0.00
% rated Fair (1)           -3.21    -8.08    -7.08    +1.23    -4.31
% rated Good (2)           -0.31    +9.84    +9.15   +10.60   +10.69
% rated Excellent (3)      +3.52    +2.89    -2.08   -11.82    -6.38
Total                       0.00     0.00     0.00     0.00     0.00

Table 10: IAG assessment results across revisions: differences

2 Each process is assigned to a "process owner", and any modification is driven by the owner's comments/decisions.
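As a worked check, the sketch below recomputes two columns of Table 10 from the Table 9 distributions (Rev.1 minus Rev.0); the columns shown are the reconstructed "S" and "T" distributions, and the small mismatch on the last T value (-6.39 vs. the -6.38 reported in Table 10) is only a rounding effect.

```python
# Percentage of indicators rated 0, 1, 2, 3 ("S" and "T" columns of Table 9).
rev0 = {"S": [0.00, 13.21, 16.98, 69.81], "T": [0.00, 22.64, 22.64, 54.72]}
rev1 = {"S": [0.00, 10.00, 16.67, 73.33], "T": [0.00, 18.33, 33.33, 48.33]}

for criterion in rev0:
    delta = [round(v1 - v0, 2) for v0, v1 in zip(rev0[criterion], rev1[criterion])]
    print(criterion, delta)
# prints: S [0.0, -3.21, -0.31, 3.52]  and  T [0.0, -4.31, 10.69, -6.39]
```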


Figure 7: IAG value distribution across SMART criteria

For the "T" criterion, an informative action must be taken by the process owners on process measurement and on the usefulness of gathering and analyzing data for improvement actions within shorter time frames. Part of the identified solution is the implementation of automated collection tools to reduce the effort expended on these issues, which is almost entirely manual (an aspect already captured by the "M" criterion).

From the percentage figures in the lower part of Table 10, graphs can be obtained for each of the five SMART criteria. On these graphs, the percentage scales for both Rev.0 and Rev.1 are on the left-hand side, while the scale for the differences between the revisions is on the right-hand side (this scale varies for each criterion). For the "S" criterion, the decrease in rating levels 1 and 2 was offset by an increase in rating level 3, with a high average (2.71 out of 3). The "M" criterion shows the greatest average increase (+13.47% in absolute values) and a convergence of the two curves around Rating Level 2 (RL2); the next frontier for this criterion would be a proper and efficient definition of automated tools, in order to simplify and speed up all the related processes. The "A" criterion follows the same trend as the "M" criterion, but with only a slight increase from Rev.0 in absolute and percentage terms; a first-level signal was, in fact, the achievement of the established acceptability threshold. The "R" criterion graph indicates probably the most relevant improvement area in the dataset examined, stressing in particular the decrease at RL3 (-11.82%); the indicators recently introduced in process group PG04 probably need to be better balanced towards a process perspective, instead of their current product perspective. For the "T" criterion, there has been a strong increase at RL2 (+10.69%) and a reduction at RL3 (-6.38%); here, as already discussed, the improvement action must focus on the consistent adoption of tools across the organization.

Further actions in the near future will look at balancing the number of indicators across the different PGs (some PGs currently use several measures, while others use only one) and at reinforcing the improvement areas noted in this section.
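Graphs of the kind described above can be reproduced with a standard plotting library. The sketch below assumes matplotlib and uses the reconstructed "S" column of Tables 9 and 10; it is an illustration of the two-scale layout (distributions on the left axis, deltas on the right axis), not the authors' original charting.

```python
import matplotlib.pyplot as plt

levels = ["0", "1", "2", "3"]                # rating levels
rev0 = [0.00, 13.21, 16.98, 69.81]           # "S" criterion, % per rating level (Rev.0)
rev1 = [0.00, 10.00, 16.67, 73.33]           # "S" criterion, % per rating level (Rev.1)
delta = [b - a for a, b in zip(rev0, rev1)]  # differences, plotted on the right-hand scale

x = range(len(levels))
fig, ax = plt.subplots()
ax.bar([i - 0.2 for i in x], rev0, width=0.4, label="Rev.0")
ax.bar([i + 0.2 for i in x], rev1, width=0.4, label="Rev.1")
ax.set_xticks(list(x))
ax.set_xticklabels(levels)
ax.set_xlabel("rating level")
ax.set_ylabel("% of indicators (left scale)")
ax.legend(loc="upper left")

ax2 = ax.twinx()                             # separate right-hand scale for the deltas
ax2.plot(list(x), delta, marker="o", color="black")
ax2.set_ylabel("Rev.1 - Rev.0, percentage points (right scale)")

fig.suptitle("S criterion: IAG value distribution across revisions")
plt.show()
```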

7 Conclusions and Prospects

The revised versions of software process assessment and improvement models have given greater importance to measurement. We have also illustrated, with the IEEE SESC Software Process Model, that measurement is itself part of the control process. However, measurement as a discipline is not yet mature in software engineering and is itself in need of improvement. Measurement indicators must therefore be analyzed, assessed and, whenever required, improved. In particular, the following measurement issues must be addressed:
• An assessment methodology for the design of the measurement system of a company must be developed, in which the cause-effect relations across software development processes are addressed;
• Processes must be constantly monitored to identify candidate process re-engineering actions which are aligned with the mission-vision-values of the organization;
• Consistency between the goal to be verified and its related candidate measures must be verified.

The Measurement Process Model by Jacquet and Abran was used as the reference model for developing a methodology to assess process indicators. This included the definition of the SMART assessment criteria using the ISO 14598-5:1998 rating scale, and the design of an Indicator Assessment Grid (IAG). The IAG helps in evaluating process indicators and in distinguishing them from indicators that address distinct objectives (i.e. product or resource indicators), and IAG assessment results can be used in process improvement initiatives. In addition, a matrix for the analysis of candidate courses of action was presented and discussed. Finally, a case study reported on an initial application of the IAG concepts in a multinational ICT organization, including a discussion of the strengths and weaknesses observed.

Future work will be needed to investigate balancing the distribution of the number of indicators across the different process groups defined in the Quality Management System (QMS). Similarly, further work will be required to improve weak processes and indicators, using the full IAG approach described in this paper.

References

[BUDL95] BUDLONG F.C. & PETERSON J.A., Software Metrics Capability Evaluation Guide, Version 2.0, STSC, Hill Air Force Base, Utah, October 1995, URL: http://www.stsc.hill.af.mil/resources/tech_docs/MASES200.DOC
[BUGL02] BUGLIONE L. & ABRAN A., ICEBERG: a different look at Software Project Management, in "Software Measurement and Estimation", Proceedings of the 12th International Workshop on Software Measurement (IWSM2002), October 7-9, 2002, Magdeburg (Germany), Shaker Verlag, ISBN 3-8322-0765-1, pp. 153-167
[CROS79] CROSBY P.B., Quality is Free, McGraw-Hill, 1979, ISBN 0-451-62585-4
[DASK90] DASKALANTONAKIS M., BASILI V.R. & YACOBELLIS R., A Method for Assessing Software Measurement Technology, Quality Engineering 3(1), pp. 27-40, 1990-91, URL: http://www.cs.umd.edu/projects/SoftEng/ESEG/papers/82.38.pdf
[FIPS93] FIPS, Integration Definition for Function Modeling (IDEF0), Federal Information Processing Standard, Publication 183, December 21, 1993, URL: http://www.idef.com/Downloads/pdf/idef0.pdf
[HUMP87] HUMPHREY W.S. & SWEET W.L., A Method for Assessing the Software Engineering Capability of Contractors, SEI Technical Report, SEI-87-TR-23, September 1987
[HUMP88] HUMPHREY W.S., Characterizing the Software Process: A Maturity Framework, IEEE Software, IEEE Computer Society, Vol. 5, No. 3, March 1988, pp. 73-79
[ISHI86] ISHIKAWA K., Guide to Quality Control, Productivity Inc., 1986, ASIN: 9283310357
[ISO98a] ISO/IEC JTC1/SC7/WG6, Information technology – Software product evaluation – Part 5: Process for evaluators, International Organization for Standardization, Geneva, 27/02/1998
[ISO98b] ISO/IEC JTC1/SC7/WG10, TR 15504-5, Software Process Assessment – Part 5: An Assessment Model and Indicators Guidance, v.3.03, 1998
[ISO02] ISO, IS 19011:2002 – Guidelines for quality and/or environmental management systems auditing, International Organization for Standardization, Geneva, 30/10/2002
[JACQ97] JACQUET J.P. & ABRAN A., From Software Metrics to Software Measurement Methods: A Process Model, 3rd International Software Engineering Standards Symposium (ISESS'97), Conference Proceedings, Walnut Creek, CA (USA), June 2-6, 1997, IEEE Computer Society, pp. 128-137
[NIES98] NIESSINK F. & VAN VLIET H., A Pastry Cook's View on Software Measurement, in: Software Measurement – Current Trends in Research and Practice, R. Dumke and A. Abran (eds.), Deutscher Universitäts Verlag, Wiesbaden, Germany, 1999, pp. 109-125, URL: http://www.cs.vu.nl/~hans/publications/y1999/IWSM98.pdf
[PAUL91] PAULK M.C., CURTIS B., CHRISSIS M.B. ET AL., Capability Maturity Model for Software, Software Engineering Institute, CMU/SEI-91-TR-24, ADA240603, August 1991
[PAUL93] PAULK M.C., WEBER C.V., GARCIA S.M., CHRISSIS M.B. & BUSH M., Key Practices of the Capability Maturity Model, Version 1.1, Software Engineering Institute/Carnegie Mellon University, CMU/SEI-93-TR-25, February 1993
[PAUL93a] PAULK M.C., CURTIS B., CHRISSIS M.B. & WEBER C.V., The Capability Maturity Model for Software, Version 1.1, Software Engineering Institute/Carnegie Mellon University, CMU/SEI-93-TR-24, February 1993, URL: http://www.sei.cmu.edu/pub/documents/93.reports/pdf/tr24.93.pdf
[PAUL93b] PAULK M.C., WEBER C.V., GARCIA S.M., CHRISSIS M.B. & BUSH M., Key Practices of the Capability Maturity Model, Version 1.1, Software Engineering Institute/Carnegie Mellon University, CMU/SEI-93-TR-25, February 1993, URL: http://www.sei.cmu.edu/pub/documents/93.reports/pdf/tr25.93.pdf
[PAUL94] PAULK M., A Comparison of ISO 9001 and the Capability Maturity Model for Software, Technical Report, CMU/SEI-94-TR-12, July 1994, URL: http://www.sei.cmu.edu/pub/documents/94.reports/pdf/tr12.94.pdf
[RADI85] RADICE R.A., HARDING J.T., MUNNIS P.E. & PHILLIPS R.W., A Programming Process Architecture, IBM Systems Journal, IBM Corp., Vol. 24, No. 2, 1985, pp. 79-90, URL: http://www.research.ibm.com/journal/sj/382/radice.pdf
[SEI02a] CMMI PRODUCT TEAM, CMMI for Software Engineering, Version 1.1, CMMI-SE/SW/IPPD/SS v1.1, Continuous Representation, Technical Report CMU/SEI-2002-TR-011, Software Engineering Institute, March 2002, URL: http://www.sei.cmu.edu/pub/documents/02.reports/pdf/02tr011.pdf
[SEI02b] CMMI PRODUCT TEAM, CMMI for Software Engineering, Version 1.1, CMMI-SE/SW/IPPD/SS v1.1, Staged Representation, Technical Report CMU/SEI-2002-TR-012, Software Engineering Institute, March 2002, URL: http://www.sei.cmu.edu/pub/documents/02.reports/pdf/02tr012.pdf
[SESC03] SESC, FP-04 Software Process Model, IEEE Software Engineering Standards Committee, February 2003, URL: http://standards.computer.org/sesc/sesc_pols/FP-04_Software_Process_Model.htm

Appendix A – List of Acronyms

CL – Capability Level
CMM – Capability Maturity Model
CMMI – Capability Maturity Models Integration
CONQ – Cost Of Non Quality
DI – Directing Implementing
GAL – Goal Achievement Level
IAG – Indicator Assessment Grid
ICT – Information & Communication Technology
IEEE – Institute of Electrical and Electronics Engineers, Inc.
ISO – International Organization for Standardization
KPA – Key Process Area
KPI – Key Performance Indicator; Key Process Indicator
MA – Measurement & Analysis
M-CMM – Measurement CMM
MEA – Measurement & Analysis
ML – Maturity Level
PA – Process Area; Process Attribute
QMMG – Quality Management Maturity Grid
QMS – Quality Management System
RL – Rating Level
SESC – Software Engineering Standards Committee
SMART – Specific, Measurable, Add Value & Actionable, Realistic & Relevant, Timely
SMCM – Software Metrics Capability Maturity
SPI – Software Process Improvement
SPICE – Software Process Improvement & Capability dEtermination
SQM – Software Quality Management
Sw – Software
TS – Technical Solution