The Journal of Systems and Software 59 (2001) 135–142

www.elsevier.com/locate/jss

A framework for evaluation and prediction of software process improvement success

David N. Wilson a,*, Tracy Hall b,1, Nathan Baddoo b

a Faculty of Information Technology, University of Technology, Sydney, P.O. Box 123, Broadway, NSW 2007, Australia
b Department of Computer Science, University of Hertfordshire, Hatfield, AL10 9AB, UK

Received 21 August 2000; received in revised form 4 January 2001; accepted 5 February 2001

Abstract

The literature shows that software process improvement (SPI) is a current popular approach to software quality and that many companies are undertaking formal or informal SPI programs. However, the anticipated improvements to software quality through SPI have not, as yet, been fully realised. Many companies are neither ready nor equipped to implement a successful SPI program: how do companies evaluate and validate the necessary organisational requirements for the establishment of a successful SPI program? This paper examines the outcomes of a UK study of a sample of SPI programs and compares these programs with an evaluation framework we have developed. The validated framework will help companies conduct a self-assessment of their readiness to undertake SPI. © 2001 Elsevier Science Inc. All rights reserved.

1. Introduction

The issue of software quality has become increasingly important to the software engineering community and the emphasis on quality improvement has become more prevalent in the 1990s (Glass, 1998). Many frameworks, such as TQM, quality standards, and quality award guidelines, have been used to apply the quality approach to software development (Fox and Frakes, 1997). Software process improvement (SPI), improving the software product by concentrating on the processes used during software development (Humphrey, 1989), "has had enormous impact on bridging the gap between quality paradigm elements and software engineering practice" (Fox and Frakes, 1997, p. 27) and has become the most popular, if not effective, means of achieving software quality (Krasner, 1997). However, the anticipated improvements to software quality through SPI have not, as yet, been fully observed (Gray and Smith, 1998) and 70% of CMM assessments in the US do not lead to the formulation of a process improvement plan,

* Corresponding author. Tel.: +61-2-9514-1832; fax: +61-2-9514-1807.
E-mail addresses: [email protected] (D.N. Wilson), [email protected] (T. Hall), [email protected] (N. Baddoo).
1 Tel.: +44-1707-2847821.

i.e., companies test where they are on the CMM but do not undertake process improvement thereafter (ESPI, 1999). Many companies are neither ready nor equipped to implement a successful SPI program (Wilson et al., 2000): how do companies evaluate and validate the necessary organisational requirements for the establishment of a successful SPI program? This paper proposes a framework to help companies conduct such an evaluation and validates that framework through in-depth group interviews with seven companies. The aim of the framework is to enable companies to self-assess their readiness to implement an SPI program. Saiedian and Chennupati (1999, p. 139) have published an evaluative framework for SPI models, but this framework enables companies to evaluate the different SPI models, in order to "help an organization choose the most suitable model for its SPI program", rather than, as is the intention here, enabling a company to assess its own readiness to undertake an SPI program.

In this paper, the organisational requirements of an SPI program are described in Section 2 before Section 3 presents an evaluation framework based on these requirements. Section 4 then discusses the results of the study conducted to evaluate this framework and Section 5 presents our conclusions.

0164-1212/01/$ - see front matter © 2001 Elsevier Science Inc. All rights reserved. PII: S0164-1212(01)00057-7


2. Organisational requirements

Software metrics and SPI have much in common: both have an extensive literature, the benefits of each are well documented but published empirical research into implemented programs is rare, anecdotal reports of success and failure abound in equal measure and, more importantly, the organisational critical success factors appear to be similar. Jeffery and Berry (1993) classified four perspectives for the organisational requirements of a metrics program, based on the work of Fenton (1991) and Grady and Caswell (1987). We have adapted these perspectives for SPI:

1. Context – the environment in which the SPI program is developed and operated:
   · have clearly stated objectives and goals,
   · have realistic assessments of the pay-back period.
2. Inputs – factors or resources that are applied to the SPI program:
   · resource the program properly,
   · allocate training resources to motivate and sustain interest.
3. Process – the method used to develop, implement and maintain the program:
   · let the objectives determine the improvements,
   · have an independent SPI team,
   · do not use the program to assess individuals.
4. Products – the processes improved, reports produced and other output of the program:
   · facilitate actions to be taken to improve processes.

Other authors support these perspectives and provide additional insights. For example, Goodman (1995) suggests a context perspective when he cites lack of management commitment and staff inexperience as reasons for process improvement initiatives failing. He adds a process perspective when he suggests ineffective implementation and the absence of change management as additional reasons for failure. Hadden (1999, p. 10) reinforces both the context and process perspectives when she writes that highly effective SPI projects require that "the SPI foundation must be firm, the SPI message must be consistent, the SPI goals must be valuable, the SPI plan must be realistic, and the changes must be manageable".
Hayes and Zubrow (1995) emphasise the input perspective when listing staff time and resources dedicated to process improvement, and SPI staffed by highly respected people, as two characteristics of highly successful SPI efforts. The study by Herbsleb et al. (1997, p. 38) provides further support for the context, process and input perspectives while introducing a product perspective by identifying that "obtaining observable results, backed up with data if possible, is important early on to keep the effort visible, and to motivate and sustain interest". Finally, Hammock (1999) proposes four

critical success factors with a context and product perspective: senior management support, internal involvement, treating SPI as a project, and demonstrating the benefits of SPI.

The objective of this research is to validate these organisational requirements as part of a broader study into software process improvement programs. The study comprises two data-gathering strategies: group sessions with volunteer companies, using focus groups and the repertory grid technique (Fransella and Bannister, 1977); and a questionnaire-based survey. This paper reports results from the group interviews.

3. Evaluation framework for SPI

Jeffery and Berry (1993) further proposed a framework of questions for the evaluation of a metrics program under the four perspectives referenced above. Again, we have adapted these questions for SPI:

Context
C1. Were the goals of the SPI program congruent with the goals of the business?
C2. Could technical staff participate in the development and improvement of processes?
C3. Had a quality environment been established?
C4. Were the processes adequately defined at an appropriate level of detail?
C5. Was the SPI program tailored to the needs of the company?
C6. Was senior management commitment available?
C7. Were the objectives and goals clearly stated?
C8. Were there realistic assessments of the pay-back period?

Inputs
I1. Was the SPI program resourced properly?
I2. Was the SPI program staffed by highly respected people?
I3. Were an appropriate number of people assigned to the SPI program?
I4. Were resources allocated to training?
I5. Was research done?

Process
(a) Process motivation and objectives
PM1. Was the program promoted through the publication of success stories and encouraging the exchange of ideas?
PM2. Was a firm implementation plan published?
PM3. Was the program used to assess individuals? (demotivating)
(b) Process responsibility and SPI team
PR1. Was the SPI team independent of the software developers?
PR2. Were clear responsibilities assigned?
PR3. Was the process improvement program adequately sold to practitioners initially?


(c) Process improvement
PI1. Were the important initial processes to be improved defined?
PI2. Was there a mechanism for changing the SPI program in an orderly way?
PI3. Were capabilities provided for users to explain events and phenomena associated with the program?
PI4. Did the objectives determine the processes to be improved?
(d) Process training and awareness
PT1. Was adequate training in SPI carried out?
PT2. Did everyone know what processes were being improved and why?

Products
P1. Were the processes identified for improvement clearly defined and were they of obvious benefit?
P2. Did the end result provide clear benefits to the management process at the chosen management audience levels?
P3. Was feedback on results provided to staff?
P4. Was the process improvement program flexible enough to allow for changes in strategy and the addition of new techniques?

4. Validating the evaluation framework for SPI

4.1. Method of study

The data-gathering stage involved in-depth group interviews with representatives from seven volunteer companies and a questionnaire completed by each company's SPI coordinator. The interviews sought to establish perspectives of SPI from the operational, tactical and strategic levels within the companies, as SPI is likely to affect each level differently. The group interviews required that groups of between four and six people who normally work together meet to talk to each other about SPI. For each company, three groups were convened: developers, supervisors/team leaders and senior managers. Each group met in a separate meeting room so that only the group members and the researchers were present. The three meetings were conducted at different times on the same day and each meeting lasted for an hour to an hour and a half. We acted as facilitators, observers and recorders of the conversation (using a tape recorder once permission had been granted by all group members).
As facilitators, we provided a general introduction to get the conversation started and thereafter let the conversation flow, only interrupting to keep the group on track (if the conversation started to go off the subject), to bring particular members into the discussion (if one or two members started to dominate the conversation), and


to re-focus the group (if the conversation waned or started to become repetitive). As observers (and in analysing the recorded conversations later), we identified a range of positive and negative attributions about people, methods, processes, etc. related to SPI. The aim is to develop a view of the perspectives of SPI at three levels in each company in order to identify and propose ways of making SPI programs successful. Negative attributions are identified so that we can: offer coping strategies for the real problems that each group faces; and detect less realistic prejudices so that material can be designed to undermine them. Positive attributions are identified so that we can: emphasise the real advantages that are most appreciated by each group; and detect helpful attitudes or behaviours so that they can be encouraged. In addition, for each company, the SPI coordinator completed a detailed questionnaire on the company's SPI activities.

4.2. Profile of respondents

Of the seven companies we report here, six were multinationals and one was UK-based. The companies were either computer/software companies or companies manufacturing products with a significant software component. The classification of respondent companies is shown in Table 1, while Table 2 shows the size of the respondent companies.

Table 1
Classification of respondent companies (multiple responses possible for business areas)

                              Hardware   Software   Services   Companies
Computer/software company        0          4          1        4 (57%)
Other                            3          3          2        3 (43%)

Table 2
Size of respondent companies

1999 total UK employment
  Less than 10       0   (0%)
  11–100             1   (14%)
  101–500            2   (29%)
  501–2000           1   (14%)
  More than 2000     3   (43%)

1999 UK IT employment
  Less than 10       1   (14%)
  11–25              0   (0%)
  26–100             1   (14%)
  101–500            2   (29%)
  501–2000           2   (29%)
  More than 2000     1   (14%)


4.3. Study results

The validation of the evaluation framework is based on:
· Answers provided by the SPI coordinator to specific questions in the questionnaire. The relevant questions in the questionnaire require either a yes/no response or a rating of 1–5 on a Likert scale. The validation of the evaluation framework reported below will present the percentage of positive (yes) responses or the average of the Likert scale scores.
· Observation of the group interviews and analysis of the interview recordings – we separately provided a subjective criteria rating based on our individual perceptions. Following the method adopted by Jeffery and Berry (1993) (where they used subjective criteria scoring based on their involvement in the group sessions, their analysis of the session recordings and the analysis of the questionnaires), we rated each question of the evaluation framework from 0 to 3: 0 = did not meet any of the requirement, 1 = met some of the requirement, 2 = met most of the requirement, 3 = fully met the requirement. The three scores were summed and then averaged over the number of companies to produce the ratings reported here.

Because the literature provides no indication of the relative importance of any criteria, equal weighting was applied to each in the analysis. Some elements of the evaluation framework are not addressed directly in the questionnaire and, for these elements, only the subjective criteria rating is used.

Several confirming questions were posed directly in the questionnaire (these are shown in Table 3). For all

but one of the questions (PT1, Question 4.9), the respondents agreed 100% that each factor was important. With respect to process improvement training, one company felt that SPI should be an integral part of everyone's job description and that specific SPI training was not necessary; the other six (85.7%) agreed that specific SPI training was important. This provides an initial validation of the appropriateness of the framework – further, more detailed analysis follows.

In response to 'Did the end result provide clear benefits to the management process at the chosen management audience levels?' (P2, Question 5.3), four of the seven companies answered 'yes' while the remaining three answered 'no' or 'don't know'. Our own assessments (based on our involvement in the interviews) for P2, Question 5.3 were {0 0 0} {1 1 1} {1 1 0}, giving an average of 1.66 for the companies answering 'no' or 'don't know', and {2 3 3} {2 1 2} {2 0 1} {2 1 2}, giving an average of 5.25 for the companies answering 'yes'. Based on the combination of these two responses, the four companies that responded 'yes' are deemed to be 'successful' whilst the three companies that responded 'no' or 'don't know' are deemed to be 'unsuccessful'.

The study results are shown in Table 4. For example, in response to Question 2.11 (C6: Was senior management commitment available?) the SPI coordinators of the three unsuccessful companies provided Likert scores of 1, 3 and 3, giving an average of 2.33, while those from the successful companies provided Likert scores of 3, 5, 4 and 4, giving an average of 4.0. For C6, Question 2.11, we provided subjective ratings of {1 1 1} {2 2 2} {2 2 1} for the unsuccessful companies, giving an average of 4.66, and {2 2 3} {3 2 2} {3 1 2} {3 3 3} for the

Table 3
Confirming questions

C1 (Question 2.6). Do you think it is important that the goals of the process improvement program are congruent with the goals of the company?
C2 (Question 4.1). Do you think it is important that practitioners are involved in developing the process improvement program?
C5 (Question 2.9). Do you think it is important to tailor the process improvement program to the needs of the company?
C6 (Question 2.11). Do you think it is important for senior management to be committed to the process improvement program?
C8 (Question 2.10). Do you think it is important to make a realistic assessment of the pay-back period of the process improvement program?
I1 (Question 4.14). Do you think it is important that the process improvement program is properly resourced?
I2 (Question 4.15). Do you think it is important that the people dedicated to the process improvement program are highly respected by other staff?
I5 (Question 2.8). Do you think it is important to research these models or other approaches before developing a process improvement program?
PR3 (Question 4.5). Do you think it is important that the process improvement program be 'sold' to practitioners initially?
PT1 (Question 4.9). Do you think it is important that process improvement training is provided for the process improvement program?


Table 4
Study results ('Coord.' = SPI coordinator response; 'Subj.' = subjective criteria rating)

                                      Unsuccessful companies    Successful companies
                                      Coord.       Subj.        Coord.       Subj.
Context
C1  Goals                             2.67         4.3          3.87         5.8
C2  Participation                     3.33         4.7          3.75         5.5
C3  Quality environment               100%         6.0          100%         6.0
C4  Process definition                –            4.0          –            5.3
C5  Needs driven                      3.33         4.3          4.62         6.0
C6  Management commitment             2.33         4.7          4.00         7.3
C7  Stated objectives                 67%          3.0          75%          5.0
C8  Pay-back period                   2.00         2.7          3.00         4.8

Inputs
I1  Resourcing                        2.67         3.7          4.00         6.0
I2  Respect                           3.00         3.0          3.50         6.8
I3  Team size                         –            6.0          –            5.3
I4  Training                          33%          2.7          50%          4.0
I5  Research                          3.00         5.7          3.25         6.3

Process
(a) Motivation
PM1 Promotion                         67%          2.7          75%          3.5
PM2 Implementation plan               33%          2.3          75%          5.8
PM3 Individuals                       0%           –            0%           –

(b) Responsibility
PR1 Independence                      33%          2.7          50%          4.8
PR2 Responsibilities                  67%          2.7          75%          5.0
PR3 SPI sold                          100%         4.3          0%           5.0

(c) Process improvement
PI1 Initial process definition        –            3.7          –            6.5
PI2 Change mechanism                  –            3.7          –            4.8
PI3 Explanations                      –            1.3          –            4.5
PI4 Objectives driven                 –            3.3          –            5.3

(d) Training and awareness
PT1 Training                          3.00         5.0          3.83         5.5
PT2 Awareness                         67%          3.3          25%          4.5

Products
P1  Obvious benefit                   –            3.0          –            5.0
P2  Clear benefits                    0%           1.7          100%         5.3
P3  Feedback                          67%          1.7          100%         4.5
P4  Application                       –            –            –            –

successful companies, giving an average of 7.25. We recognise that, in a small-sample survey such as this, one low score, for example 0 4 4, can distort the average. However, we considered that the combination of the SPI coordinators' questionnaire responses and our own subjective ratings provided sufficient validation of the framework. The objective of the analysis of the results is to validate that the evaluative framework is in fact a useful predictor of success in implementing an SPI program.
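The aggregation just described – three raters each score a company from 0 to 3, the per-company sums are then averaged over the companies in each group – can be sketched as follows. The function name and data layout are ours, reconstructed from the worked P2 example in the text, not the authors' code:

```python
def framework_rating(scores_per_company):
    """Sum the three raters' 0-3 scores for each company, then
    average those sums over the number of companies in the group."""
    totals = [sum(raters) for raters in scores_per_company]
    return sum(totals) / len(scores_per_company)

# P2 (Question 5.3) subjective ratings quoted in Section 4.3:
unsuccessful = [(0, 0, 0), (1, 1, 1), (1, 1, 0)]            # 'no'/'don't know'
successful = [(2, 3, 3), (2, 1, 2), (2, 0, 1), (2, 1, 2)]   # 'yes'

print(framework_rating(unsuccessful))  # 5/3, reported in the text as 1.66
print(framework_rating(successful))    # 21/4 = 5.25
```

This reproduces the averages reported for P2 (1.66 for the unsuccessful group, 5.25 for the successful group) and, with the C6 ratings substituted, the 4.66 and 7.25 figures quoted above.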

4.4. Analysis of the study results

The sample sizes are too small for serious formal statistical analysis, and the conclusions are better used as a source of hypotheses for future studies. However, there is fairly strong cumulative evidence that the successful companies are measured as better on the scales used under each of the four perspectives (context, inputs, process and products). Comparing the ratings of the successful and unsuccessful companies presented in


Table 4 shows 23 results in favour of the successful companies, three results equal or mixed, and two results in favour of the unsuccessful companies. Applying a sign test to these outcomes gives a p-value
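The reported p-value is cut off in this copy of the text. For reference only, an exact sign test on these counts (23 comparisons favouring the successful companies, 2 favouring the unsuccessful, with the 3 equal or mixed results excluded as ties) can be computed as below. This is our sketch under the usual sign-test convention, not the authors' calculation, and the paper does not state whether a one- or two-sided test was used:

```python
from math import comb

def sign_test_p(wins, losses):
    """Exact two-sided sign test: ties are excluded beforehand and the
    remaining n = wins + losses outcomes are treated as Binomial(n, 0.5)."""
    n = wins + losses
    k = max(wins, losses)
    one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

# 23 of the Table 4 comparisons favour the successful companies and
# 2 favour the unsuccessful ones; the 3 ties are dropped, so n = 25.
p = sign_test_p(23, 2)
```

On these counts the resulting p-value is far below conventional significance thresholds, consistent with the paper's claim of fairly strong cumulative evidence.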
