Software Qual J DOI 10.1007/s11219-007-9033-4

A measurement framework for assessing the maturity of requirements engineering process

Mahmood Niazi · Karl Cox · June Verner

© Springer Science+Business Media, LLC 2007

Abstract Because requirements engineering (RE) problems are widely acknowledged as having a major impact on the effectiveness of the software development process, Sommerville et al. have developed a requirements maturity model. However, research has shown that the measurement process within Sommerville's model is ambiguous, and implementation of their requirements maturity model leads to confusion. Hence, the objective of our research is to propose a new RE maturity measurement framework (REMMF) based on Sommerville's model and to provide initial validation of REMMF. The main purpose of proposing REMMF is to allow us to more effectively measure the maturity of the RE processes being used within organisations and to assist practitioners in measuring the maturity of their RE processes. In order to evaluate REMMF, two organisations implemented the measurement framework within their IT divisions, provided us with an assessment of their requirements process and gave feedback on the REMMF measurement process. The results show that our measurement framework is clear, easy to use and provides an entry point through which practitioners can effectively judge the strengths and weaknesses of their RE processes. When an organisation knows where it is, it can more effectively plan for improvement.

Keywords Process maturity · Process improvement · Requirements engineering

M. Niazi (&)
School of Computing and Mathematics, Keele University, Staffordshire, UK
e-mail: [email protected]

K. Cox · J. Verner
Empirical Software Engineering, National ICT Australia, Sydney, Australia

K. Cox
e-mail: [email protected]

J. Verner
e-mail: [email protected]


1 Introduction

Requirements engineering (RE) is concerned with describing a client's problem domain (the context), determining the desired effects the client wants to exert upon that domain (the requirements) and specifying the proposed IT (the specification) to (a) help enable those desired effects to occur and (b) give designers a specification to help them build the proposed IT. The RE process thus has a huge impact on the effectiveness of the software development process (El Emam and Madhavji 1995). When RE processes are ad hoc, poorly defined or poorly executed, the end product is typically unsatisfactory (Standish-Group 1999). The Standish group reported that, on average, the percentage of software projects completed on-time and within budget improved from 16% in 1995 (Standish-Group 1995) to 34% in 2003 (Standish-Group 2003). However, nearly two-thirds of the projects examined in the 2003 report (Standish-Group 2003) were 'challenged' (i.e. only partially successful), with the authors observing that one of the main reasons for project failure is unstable requirements caused by poor management of RE processes.

Several other studies have also identified problems with the RE process (Beecham et al. 2003b; El Emam and Madhavji 1995; Hall et al. 2002; Kamsties et al. 1998; Niazi 2005a; Nikula et al. 2000; Nuseibeh and Easterbrook 2000; Siddiqi and Chandra 1996). A UK study found that of 268 documented development problems, requirements problems accounted for 48% (Hall et al. 2002). Another survey, of 150 organisations in the US, showed that for the majority the requirements modelling technique of choice was "none" (Neill and Laplante 2003). A remarkable observation, in light of this lack of an effective RE process, is that 40–50% of software development effort is dedicated to rework: locating and correcting defects found by testing. Rework increases as the project continues through its phases, and by the final integration and test phase it normally consumes 66% of the total effort (Boehm 1987). Thus, more investment upfront might well lead to less rework. This accords with the majority finding in a recent industrial survey: employees wanted their companies to invest more upfront in RE (Neill and Laplante 2003).

The actual effort spent on RE is not very large. Alexander and Stevens (2002) recommend that about 5% of project effort go into requirements work (elicitation, analysis, verification, validation, testing), not including specification. This might be about 25% of calendar time (or no more than 3 months, depending on project size). They state that system specification might also take 20–25% of calendar time. Hoffmann and Lehner (2001) examined 15 projects and found they expended on average 16% of project effort on RE activities. Chatzoglou and Macaulay (1996) surveyed 107 projects and found requirements capture and analysis took over 15% of total elapsed time. In a study of 16 software projects, MacDonell and Shepperd (2003) found so much variance in the effort spent on the Project Planning and Requirements Specification phases, both in absolute terms and relative to overall project effort, that no patterns could be drawn from it, except that without a requirements phase, or with a poor requirements phase, the project was not successful. When requirements are poorly defined and RE processes are poor, the end result is nearly always a poorer product or a cancelled project (Standish-Group 1999).
An industry survey in the UK reported that only 16% of software projects could be considered truly successful: "Projects are often poorly defined, codes of practice are frequently ignored and there is a woeful inability to learn from past experience" (Jobserve.com). The evidence is clear: problems in the requirements phase have a wide impact on the success of software development projects (Hall et al. 2002; Sommerville 1996) and this is a lesson that


continues not to be learned despite the evidence and the low amount of effort needed to have a reasonable requirements process. Many software projects have failed because they contained a poor set of requirements (El Emam and Madhavji 1995), and the state of the industry indicates that only about 60% of organisations keep a record of requirements in a single repository, a highly significant factor in the success or otherwise of IT projects (Verner et al. 2005; Verner and Evanco 2005). No software process can keep delivery times, costs and product quality under control if the requirements are poorly defined and managed (Sommerville et al. 1998). Yet despite the regularly documented and recognised importance of RE, little work has been done on developing ways to improve the requirements process.

Sommerville et al. (Sawyer et al. 1997; Sommerville and Ransom 2005; Sommerville et al. 1998) developed a maturity model derived from existing standards. Their maturity model has three maturity levels: Level 1—Initial, Level 2—Repeatable and Level 3—Defined. The model can be used to assess a current RE process and provides a template for RE practice assessment. In this model, Sommerville et al. propose a number of practices that lead to RE process improvement and that ultimately should lead to business benefits (Sommerville and Ransom 2005). In order to investigate Sommerville et al.'s maturity model in previous work, we conducted an empirical study with Australian practitioners (Niazi et al. 2005a; Niazi and Shastry 2003). During this study we observed that although all the practices of Sommerville et al.'s maturity model are very well defined, the measurement process designed for these practices was very confusing and could lead organisations to undesirable results. This is caused by an ambiguous measurement process, without a strategic and systematic approach being used to decide the different scores for the various practices. The practitioners we interviewed (Niazi et al. 2005a; Niazi and Shastry 2003) wanted a more formal and structured measurement process in order to better assess the maturity of their RE processes. Sommerville et al. have noted: "this assessment scheme is not a precise instrument and is not intended for formal accreditation" (Sawyer et al. 1997, p. 29).

As research has shown that effective RE practices provide multiple benefits, including help in keeping delivery times and product quality under control (Sommerville and Sawyer 1997; Wiegers 2003), it is very important for organisations to systematically discover which RE practices are weak or strong. Sommerville et al.'s model does not provide this detailed practice-based assessment; it provides only an indication of the RE maturity level, i.e. Level 1, Level 2 etc., which on its own conveys little useful information. Their model does not provide any indicators with which to measure each RE practice, and it also evaluates the RE process using a single dimension (Marjo and Sari 2001).

Beecham et al. (2003a) have designed a requirements capability maturity model (R-CMM) that adheres to the characteristics of the Software Engineering Institute's Software Capability Maturity Model (SW-CMM). The R-CMM guides users towards a view of RE that is based on goals and is problem driven. The R-CMM has five maturity levels and is based on 20 RE practices. Practices are given a structure by coupling them to the SW-CMM at incremental levels of process maturity.
The R-CMM is based on the SW-CMM, and researchers and practitioners are aware that the SW-CMM was not a perfect model of Software Process Improvement (SPI). The SPI literature highlights some of the SW-CMM's flaws. One of its fundamental design flaws is the weak link between process improvement goals and customer expectations. In addition, the SW-CMM (Hall et al. 2002; Ngwenyama and Nielsen 2003; Sommerville and Sawyer 1997):


• Is complex and hard to implement.
• Is hard for small and medium enterprises to use.
• Is based mainly on the experience of large government contractors.
• Ignores people.

The R-CMM model (Beecham et al. 2003a) is based on only 20 RE practices and is still in the development and evaluation stage. Rather than developing all five maturity levels of the R-CMM simultaneously, Beecham has thus far focused only on level 2 process maturity. The R-CMM has only been evaluated through experts' opinions, and its validity in the real world is still questionable. The experts' opinions suggest that the R-CMM is unlikely to appeal to all practitioners and researchers and that it cannot reflect all the various kinds of RE development processes (Beecham and Hall 2003). Gorscheck et al. (2003) have developed a five-level maturity model aimed more at SMEs. This work was created as part of a master's thesis at Blekinge Institute of Technology, Ronneby, Sweden. The model has been designed for project rather than organisational assessment. Like Sommerville et al.'s model, this model does not provide a detailed practice-based assessment beyond an indication of the RE maturity level, i.e. Level 1, Level 2 etc. In addition, creating solutions that build on previous work and frameworks may help to progress software improvement (Humphery 2002). Once the structural and practical limits of previously developed models and frameworks have been reached, one should turn to improved models. Our paper has two objectives:

i. To present an RE maturity measurement framework (REMMF), designed to be used with Sommerville et al.'s model in order to effectively measure the maturity of the RE process. This measurement framework can also be used to evaluate the strength or otherwise of RE practices.
ii. To test the REMMF process in two industry case studies, where our measurement framework was implemented independently by practitioners, not by researchers.

Some parts of our measurement framework have been previously published (Niazi 2005b). To achieve these objectives, we address the following research questions, which are based on the Technology Acceptance Model (TAM) (Davis 1989; Davis et al. 1989):

• RQ1. Does REMMF help practitioners measure the maturity of RE processes? In other words, what is the perceived usefulness of REMMF?
• RQ2. Is REMMF easy to use? i.e. perceived ease of use.

The major contributions of this paper are:

• To provide a more complete picture of the REMMF;
• To present the results of two case studies implementing and independently evaluating REMMF. An independent evaluation by practitioners is extremely rare in RE research.

Our paper is organised as follows. Section 2 provides the background to the research. Section 3 describes how our measurement framework was designed. In Sect. 4 the REMMF is explained in detail. In Sect. 5 we present and discuss both case studies. Section 6 presents conclusions and future work.


2 Background to Sommerville's maturity model

In order to improve the RE process, Sommerville et al. (Sawyer et al. 1997; Sommerville et al. 1998) suggested a requirements maturity model. The model is derived from existing standards, has three levels of maturity: Level 1—Initial, Level 2—Repeatable and Level 3—Defined, and is based upon 66 good requirements practices, classified as basic, intermediate and advanced. The 36 basic practices are concerned with fundamental activities required to gain control of the RE process. The 21 intermediate practices are mostly concerned with the use of methodical approaches and tools. The 9 advanced practices are concerned with methods such as formal specification, used typically for critical systems development. The 66 practices categorized in (Sawyer et al. 1997; Sommerville et al. 1998) are grouped into eight major categories:

• Requirements documentation: practices relating to structuring and organizing the requirements documents.
• Requirements elicitation: practices to help discover the requirements from stakeholders, the application domain and organisational environments.
• Requirements analysis and negotiation: practices to help identify and resolve incompatibilities and missing-information problems.
• Describing requirements: practices for effectively writing requirements.
• System modelling: practices for the development of models in order to better understand requirements.
• Requirements validation: practices to help establish formal validation procedures relating to incompleteness, inconsistency or incompatibility problems.
• Requirements management: practices for requirements management.
• Requirements for critical systems: practices particularly useful for critical systems. (Note that we do not assess this category because neither of the organisations involved in this study deals with critical systems.)

Four types of assessment are made against each practice:

• Three points are scored for standardized practice,
• Two for normal use,
• One for discretionary use and
• Zero for practices that are never used.

Organisations with fewer than 55 points from the basic practices are classified as Level 1—Initial; organisations with more than 55 points in the basic practices and fewer than 40 points in the intermediate and advanced practices are classified as Level 2—Repeatable; and organisations with more than 85 points in the basic practices and more than 40 points in the intermediate and advanced practices are classified as Level 3—Defined (Sawyer et al. 1997; Sommerville et al. 1998).
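To make the point-based scheme above concrete, the following is a minimal sketch in Python of how such a classification could be computed. It is purely our illustration, not code from Sommerville et al.; the function name and example data are assumptions.

```python
# Illustrative sketch of Sommerville et al.'s assessment scheme: each practice
# is scored 3 (standardized), 2 (normal use), 1 (discretionary use) or
# 0 (never used); the basic total and the intermediate+advanced total
# then decide the maturity level.

def re_maturity_level(basic_scores, intermediate_scores, advanced_scores):
    """Return the RE process maturity level (1, 2 or 3)."""
    basic = sum(basic_scores)
    higher = sum(intermediate_scores) + sum(advanced_scores)
    if basic > 85 and higher > 40:
        return 3  # Defined
    if basic > 55 and higher < 40:
        return 2  # Repeatable
    return 1      # Initial

# Example: scoring 2 on all 36 basic practices (72 points) and 1 on the
# 21 intermediate practices (21 points) gives Level 2 -- Repeatable.
print(re_maturity_level([2] * 36, [1] * 21, [0] * 9))  # -> 2
```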


Thus far, little research has considered validation of the requirements maturity model. Niazi and Shastry (2003) conducted an empirical study of requirements problems identified by 22 requirements practitioners. Their research was a two-step process: first, each organisation's requirements process maturity level was assessed using the Sommerville et al. model (Sommerville and Sawyer 1997) and, second, the types and number of problems faced by different practitioners during their software projects were documented. The results indicated that there were no significant differences in the numbers of problems faced by organisations with mature and immature RE processes. The authors found that 75% of problems were common between the two data sets (respectively, organisations with mature and immature processes) and 25% of problems were only cited in one data set. However, it was found that while the problems cited only by mature organisations were related to organisational issues, e.g. lack of training, complexity of application and communications etc., the problems cited within immature organisations were related to technical aspects of the RE process, e.g. an undefined requirements process. During interviews the practitioners suggested that although all the practices in Sommerville et al.'s maturity model are very well defined, the measurement process designed for these practices was very confusing and could lead organisations to incorrect results.

More recently, in order to evaluate the requirements maturity model (Sawyer et al. 1997) and to assess whether requirements process improvement leads to business improvement, Sommerville and Ransom (2005) conducted an empirical study with nine organisations. They concluded that the "RE process maturity model is useful in supporting maturity assessment and in identifying process improvements and there is some evidence to suggest that process improvement leads to business benefits. However, whether these business benefits were a consequence of the changes to the RE process or whether these benefits resulted from side-effects of the study, such as greater self-awareness of business processes, remain an open question" (Sommerville and Ransom 2005).

In this paper a measurement framework is reported for assessing the effectiveness of the RE practices defined in Sommerville et al.'s requirements maturity model (Sawyer et al. 1997; Sommerville et al. 1998). This measurement framework should provide requirements practitioners with some insight into designing appropriate RE processes in order to achieve better results. The next section describes how REMMF was conceived and developed.

3 How REMMF was designed

An examination of the RE literature, together with previous empirical studies (Niazi et al. 2005a; Niazi and Shastry 2003), highlights the need to develop a process to effectively measure the maturity of RE processes. REMMF was initiated by creating its success criteria. Objectives were set to clarify the purpose of the process and to outline what the process was expected to describe. These criteria guided development and were later used to help evaluate REMMF. REMMF building activities involved abstracting characteristics from four sources:

• RE literature (Beecham et al. 2003a; Daskalantonakis 1994; Hall et al. 2002; Niazi et al. 2005b, c).
• Interview data (Niazi et al. 2005a; Niazi and Shastry 2003).
• Sommerville et al.'s maturity model (Sawyer et al. 1997; Sommerville and Ransom 2005; Sommerville and Sawyer 1997; Sommerville et al. 1998).
• The authors' industry and research experience.

Figure 1 outlines the stages involved in designing the REMMF. The first stage in the development of REMMF was to set criteria for its success. The motivation for setting these criteria comes from empirical research with Australian software development organisations (Niazi et al. 2005a; Niazi and Shastry 2003) and from a consideration of the Technology Acceptance Model (Davis 1989; Davis et al. 1989). The following criteria were used.


Fig. 1 Stages involved in designing REMMF: (1) specify criteria for framework development; (2) research question; (3) data from literature review; (4) rationalization and structuring of Sommerville et al.'s model; (5) development of REMMF; (6) evaluation through a case study

• User satisfaction: end users need to be satisfied with the results of the measurement framework. End users should be able to use the measurement framework to achieve specified goals according to their needs and expectations without confusion or ambiguity.
• Ease of use: complex models and standards are unlikely to be adopted by organisations as they require resources, training and effort. REMMF has different levels of decomposition, starting at the highest level and gradually leading the user from a descriptive framework towards a more prescriptive solution. The structure of REMMF was designed to be flexible and easy to follow.

In order to address these desired criteria, a research question (see Sect. 1) was developed (Stage 2). In Stage 3 we extensively reviewed the process improvement literature. In Stage 4 we undertook rationalisation and structuring of Sommerville et al.'s model. Then in Stage 5 REMMF was designed. In the final stage an evaluation of REMMF was performed using two case studies.

4 RE maturity measurement framework

A measurement instrument was developed at Motorola in order to assess the organisation's current software process status for initial benchmarking before software process improvement initiatives (Daskalantonakis 1994). Diaz and Sligo (1997) describe the use of this instrument: "at Motorola Government Electronics Divisions, each project performs a quarterly SEI self-assessment. The project evaluates each key process area activity as a score between 1 and 10, which is then rolled into an average score for each key process area. Any key process area average score that falls below seven is considered a weakness" (Diaz and Sligo 1997, p. 76). Motorola's assessment instrument has three evaluation dimensions (Daskalantonakis 1994):

• Approach: the organisation's commitment and management support for the practice, as well as the organisation's ability to implement the practice.
• Deployment: the breadth and consistency of practice implementation across project areas.
• Results: the breadth and consistency of positive results over time and across project areas.

Motorola's instrument successfully assisted in producing high quality software, reducing cost and time, and increasing productivity, according to Diaz and Sligo (1997). In order to effectively measure the maturity of the RE process we have adapted Motorola's instrument (Daskalantonakis 1994) and used the work of Sommerville et al.


and Niazi et al. (2005c) to design a measurement framework. There are many compelling reasons for adapting Motorola's instrument:

• It is a normative instrument designed to be adapted;
• It has been successfully tried and tested at Motorola;
• It has a limited set of activities.

To ensure that the Motorola instrument can be used effectively in the domain of RE, we have tailored it and made some very minor changes to the different evaluation activities, based on the literature and the study reported earlier; the tailored instrument is shown in Table 1. The structure of REMMF is shown in Fig. 2. The 66 good practices designed by Sommerville et al. can be divided into eight categories: requirements documents, requirements elicitation, requirements analysis and negotiation, describing requirements, system modelling, requirements validation, requirements management and requirements for critical systems. These categories are shown as 'requirements process category' in Fig. 2. Figure 2 shows that the requirements process category maturity indicates how mature the requirements process is (Sommerville et al. 1998). These requirements process categories contain the different good practices designed for RE processes (Sommerville and Sawyer 1997; Sommerville et al. 1998). For each good practice we have designed a process in order to effectively measure its maturity. The following steps are used in REMMF (Beecham et al. 2003a; Daskalantonakis 1994; Niazi et al. 2005c) to measure the maturity of RE processes (a more detailed example, measuring the capability of the 'describing requirements' category, is shown in Appendix A):

Step 1: For each practice, a key participant who is involved in the RE process assessment calculates a 3-dimensional score for each RE practice using the tailored Motorola instrument shown in Table 1.
Step 2: The 3-dimensional scores for each practice are added together, divided by 3 and rounded up. A score for each practice is placed in the evaluation sheet (see example in Table 2).
Step 3: This procedure is repeated for each practice. The scores for the practices are then summed and an average is used to gain an overall score for each 'requirements process category'.
Step 4: A score of 7 or higher for a 'requirements process category' indicates that the maturity of that category has been successfully achieved (Daskalantonakis 1994; Niazi et al. 2005c). A category score that falls below seven is considered a weakness (Daskalantonakis 1994; Niazi et al. 2005c).
Step 5: It is possible that some practices may not be appropriate for an organisation and need never be implemented. For such practices a 'Not applicable' (NA) option is selected. NA practices are ignored when calculating the average score of a 'requirements process category'.

At the end of this measurement process it will be very clear to an organisation which requirements process categories are weak and need further consideration.
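The following minimal Python sketch, written by us purely as an illustration (the function names and the weakness-threshold constant are our own assumptions), shows how Steps 2–5 combine into a category score; the example numbers reproduce the DR1 calculation from Appendix A and the requirements document scores of Table 2.

```python
import math

WEAKNESS_THRESHOLD = 7  # a category scoring below 7 is considered a weakness
NA = None               # practices marked 'Not applicable' are ignored

def practice_score(approach, deployment, results):
    """Step 2: add the three dimensional scores, divide by 3 and round up."""
    return math.ceil((approach + deployment + results) / 3)

def category_score(practice_scores):
    """Steps 3 and 5: average the applicable practice scores of a category."""
    applicable = [s for s in practice_scores if s is not NA]
    return sum(applicable) / len(applicable)

# Worked example from Appendix A: DR1 is rated Weak (2) on approach,
# Fair (4) on deployment and Marginally qualified (6) on results.
dr1 = practice_score(2, 4, 6)                                          # -> 4
# The requirements document scores of Table 2 average 32/8 = 4,
# so the category is flagged as weak (Step 4).
weak = category_score([5, 5, 1, 3, 5, 5, 5, 3]) < WEAKNESS_THRESHOLD  # -> True
```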

5 Evaluation of REMMF via case studies

The case study method is used because it is a powerful evaluation tool and can provide useful real world information (Yin 1993). A case study also provides valuable insights for


Table 1 Motorola's instrument (Daskalantonakis 1994). Each key activity is evaluated on three dimensions: approach, deployment and results.

Poor (0)
Approach: No management recognition of need; no organisational ability; no organisational commitment; practice not evident; higher management is not aware of the investment required and long-term benefits of this practice.
Deployment: No part of the organisation uses the practice; no part of the organisation shows interest.
Results: Ineffective.

Weak (2)
Approach: Management begins to recognize need; support items for the practice start to be created; a few parts of the organisation are able to implement the practice; management begins to be aware of the investment required and long-term benefits of this practice.
Deployment: Fragmented use; inconsistent use; deployed in some parts of the organisation; limited monitoring/verification of use.
Results: Spotty results; inconsistent results; some evidence of effectiveness for some parts of the organisation.

Fair (4)
Approach: Wide but not complete commitment by management; road map for practice implementation defined; several supporting items for the practice in place; management has some awareness of the investment required and long-term benefits of this practice.
Deployment: Less fragmented use; some consistency in use; deployed in some major parts of the organisation; monitoring/verification of use for several parts of the organisation; no mechanism to distribute the lessons learned to the relevant staff members.
Results: Consistent and positive results for several parts of the organisation; inconsistent results for other parts of the organisation.

Marginally qualified (6)
Approach: Some management commitment; some management becomes proactive; practice implementation well under way across parts of the organisation; supporting items in place; management has wide but not complete awareness of the investment required and long-term benefits of this practice.
Deployment: Deployed in many parts of the organisation; mostly consistent use across many parts of the organisation; monitoring/verification of use for many parts of the organisation; a mechanism has been established, and used in some parts of the organisation, to distribute the lessons learned to the relevant staff members.
Results: Positive measurable results in most parts of the organisation; consistently positive results over time across many parts of the organisation.

Qualified (8)
Approach: Total management commitment; majority of management is proactive; practice established as an integral part of the process; supporting items encourage and facilitate the use of the practice; a mechanism has been established to use and monitor this practice on a continuing basis; management has wide and complete awareness of the investment required and long-term benefits of this practice.
Deployment: Deployed in almost all parts of the organisation; consistent use across almost all parts of the organisation; monitoring/verification of use for almost all parts of the organisation; a mechanism has been established, and used in all parts of the organisation, to distribute the lessons learned to the relevant staff members.
Results: Positive measurable results in almost all parts of the organisation; consistently positive results over time across almost all parts of the organisation.

Outstanding (10)
Approach: Management provides zealous leadership and commitment; organisational excellence in the practice recognized even outside the organisation.
Deployment: Pervasive and consistent deployment across all parts of the organisation; consistent use over time across all parts of the organisation; monitoring/verification for all parts of the organisation.
Results: Requirements exceeded; consistently world-class results; counsel sought by others.


Fig. 2 The structure of REMMF. [Diagram: Sommerville et al.'s model contains good RE practices, which are organized into requirements process categories; requirements process category maturity indicates requirements process maturity. The measurement framework, informed by Motorola's instrument, Niazi et al.'s model, the literature, and research and industry experience, describes the activities used to measure each practice.]

Table 2 The 3-dimensional scores of requirements document category (example evaluation sheet; each practice is scored 0–10 or marked NA)

ID   Type   Requirements documents practices        Score
RD1  Basic  Define a standard document structure    5
RD2  Basic  Explain how to use the document         5
RD3  Basic  Include a summary of the requirements   1
RD4  Basic  Make a business case for the system     3
RD5  Basic  Define specialized terms                5
RD6  Basic  Make document layout readable           5
RD7  Basic  Help readers find information           5
RD8  Basic  Make the document easy to change        3

The 3-dimensional overall score: (5 + 5 + 1 + 3 + 5 + 5 + 5 + 3)/number of practices = 32/8 = 4

problem solving, evaluation and strategy (Cooper and Schindler 2001). Since REMMF is applicable to a real software industry environment, the case study research method is believed to be an appropriate method for testing it. A real life case study is necessary because it can show:

• Whether REMMF is suitable for use in a real world environment;
• Areas where REMMF requires improvement;
• The practicality of REMMF in use.

Initially we talked to participants from two organisations (i.e. organisation A and organisation B), explained what the case study was about, and provided them with a hard copy of REMMF. The participants also asked, through email, for more information about the use of REMMF. The participants involved in the case studies were senior software


development professionals. For the case studies, participants used REMMF to assess the RE process maturity of their organisations independently, without any suggestion or help from the authors.

Organisation A is a well established information technology solution provider with 1,400 professionals. The main purpose of this organisation is to provide business process re-engineering/improvement services and to develop management information systems. Our respondent was a senior systems analyst who considered that he had very expert knowledge of RE. Organisation A has used ISO 9001 and a home grown methodology for process improvement activities in the past. Organisation B, employing 2,000 professionals, is a long established international organisation providing consultancy and information technology services to both the private and public sectors. The main purpose of the organisation is to enhance the efficiency and effectiveness of information systems in the public and private sectors by applying relevant state-of-the-art technologies related to computer software, hardware and data communication. Our respondent was a senior software developer who considered that he had expert knowledge of RE. Company B has also used ISO standards for process improvement in the past.

At completion of the case studies, a review session was conducted with each participant in order to obtain feedback about REMMF. A questionnaire (Niazi 2004) (available from the authors) was used to structure the review session. This questionnaire is divided into three parts: company details, ease of learning and user satisfaction. In order to evaluate REMMF, the criteria described in Sect. 3 were used. The primary evaluation criteria emanate directly from the criteria designed for the development of REMMF (Sect. 3) and TAM (Davis 1989; Davis et al. 1989), i.e.

• Ease of use: how easily RE practitioners can interpret, use and understand REMMF.
• User satisfaction: the level of user satisfaction with the results of REMMF.

5.1 Implementation of REMMF

The assessment results for seven of Sommerville's RE categories are summarised in Appendix B. We did not evaluate one category, requirements for critical systems, as neither of our organisations developed this type of system.

5.1.1 Results of implementation at organisation A

It is clear from Appendix B that organisation A's requirements documentation category is weak (i.e. the overall 3-dimensional score is four, which is < 7). All practices in this category are weak, thus organisation A needs to improve its requirements documentation process. What is surprising is that organisation A does not follow practice RD4, i.e. make a business case for a project. Often, the finance department and/or CIO will insist that a project will only be funded if a business case is made, i.e. the system must meet an organisational business goal. We consider this activity should be required for every project, as senior sponsor support and customer/user participation have been shown to be important for software project success (Verner and Evanco 2005).


Although the state of three requirements elicitation practices (i.e. RE1, RE2 and RE3) was found to be fair at best, overall this category is weak. Organisation A does not follow four important requirements elicitation practices, i.e. ‘record requirements rationale’, ‘collect requirements from multiple viewpoints’, ‘prototype poorly understood requirements’, and ‘use scenarios to elicit requirements’. No practices in the ‘requirements analysis and negotiation’ category have been designed and implemented at organisation A. For ‘describing requirements’ organisation A has made some efforts to use diagrams appropriately (DR3) and supplement natural language with other descriptions of requirements (DR4). However these practices are still weak and need improvement. Organisation A does not have standard templates for describing requirements. The systems modelling category of organisation A is also weak. Organisation A is neither using a data dictionary (SM5) nor documenting the links between stakeholders’ requirements and system models (SM6). Although organisation A writes a draft user manual for each piece of software (RV6), other requirements validation practices are not designed and implemented at organisation A. It is surprising that organisation A has not made any effort to design practices relating to requirements management. All the practices in the area of requirements management receive a zero score in this assessment. All the RE categories of organisation A are scored as low. REMMF shows that overall the RE processes of organisation A are weak. It is also clear from Appendix B that some practices do not require a great deal of effort to be improved (i.e. practices with five or six scores) while other practices need more effort to be improved (i.e. practices with zero score). The weak practices, identified by REMMF, can be improved using the guidelines provided by Sommerville et al.

5.1.2 Results of implementation at organisation B

Overall, organisation B performed well except for three categories: systems modelling, requirements validation and requirements management; these categories are shown to be weak, i.e. their overall 3-dimensional scores are < 7. Organisation B needs only a little effort to improve its systems modelling and requirements management. However, more effort is required to improve the requirements validation process, as most practices in this category are weak except 'write a draft user manual' and 'propose requirements test cases'. A score of five for four of the practices within the requirements validation section, and six for two of the other practices in this section, indicates that management needs increased awareness of the value of requirements validation. Though management may have a basic awareness of requirements validation, they still need to be convinced that such a process is really useful. Another practice that requires attention in organisation B is the recording of requirements sources. Although this practice is part of the requirements elicitation process, it can also be important for requirements validation.

Two requirements categories—requirements documents and describing requirements—are strong enough at organisation B; each achieved a score of eight. Use of REMMF shows that overall the RE process of organisation B is relatively strong. However, a small amount of effort is required to improve the three weak requirements categories. REMMF shows very clearly where organisation B should apply RE process improvement activities.


5.2 The case study validity

In the case of SCAMPI (2001), the lead assessor assesses the current status of an organisation's software development processes and sends the assessment results to the Software Engineering Institute (SEI). The SEI's quality department checks the authenticity of the assessment results. This is done in order to reduce any bias in the assessment. Although our case study results were not checked by a quality department, we validated them via a post-case-study questionnaire. Each participant was asked to fill out the questionnaire to give us feedback on the case study results (i.e. their organisation's current RE status, ease of use, user satisfaction). However, two types of threats to case study validity are relevant to this study: construct validity and external validity (Briand et al. 2001). Construct validity is concerned with whether or not the measurement scales represent the attributes being measured. The attributes are taken from a substantial body of previous research (Daskalantonakis 1994; Sawyer et al. 1997) and further studies conducted by one of the authors (Niazi et al. 2005c). The responses from the post-case-study questionnaire show that all the attributes considered were relevant to the participants' workplace. Also, both participants agreed with the assessment results. External validity is concerned with the generalization of the results to environments other than the one in which the initial study was conducted (Regnell et al. 2000). Since only two case studies were conducted, it is hard to justify external validity at this stage. However, in the lessons learned via the post-case-study questionnaire, it was observed that REMMF is general enough and can probably be applied to most organisations.

5.3 The case study lessons learned

In assessment it is often hard to control the subjective interpretation of assessors (Kauppinen et al. 2002). In order to reduce the subjective interpretation of assessors it is important to create a systematic approach that is based on the sound judgment of those who are using the process. For the REMMF evaluation, individuals were chosen who represent the entire project/organisation/requirements phase. The people who used and evaluated REMMF were experts in their fields and had enough knowledge and understanding of their organisation's current RE processes. The lessons learned via the post-case-study questionnaires are summarised as follows:

• REMMF is capable of determining the current state of the RE process.
• Rather than just evaluating RE practices in a one-dimensional way (Sommerville et al.'s model), REMMF suggests a 3-dimensional evaluation. This 3-dimensional approach provides a systematic way in which management commitment, magnitude of deployment, and results of deployment are evaluated for each RE practice.
• REMMF provides indicators for management commitment, magnitude of deployment, and results of deployment, in order to evaluate each RE practice (Sommerville et al.'s model does not provide any indicators to evaluate each RE practice).
• REMMF can be effectively used to identify weak or strong RE practices (Sommerville et al.'s model does not identify weak or strong RE practices; instead it provides RE maturity levels).


• REMMF can identify weak or strong RE categories, i.e. requirements elicitation etc. (Sommerville et al.'s model does not identify weak or strong RE categories).
• REMMF is clear and easy to use.
• Participants were able to successfully use REMMF, without any confusion or ambiguity, in order to measure their RE process maturity.
• REMMF is general enough and can be applied to most organisations.
• REMMF provides an entry point through which participants can judge the effectiveness of their different RE practices.
• Components of REMMF are self-explanatory and require no further explanation to be used effectively.

In summary, our participants were satisfied with both the use of REMMF and its structure. However, they suggested that it was difficult to use such a measurement process without tool support. This is because many small calculations need to be performed during the process. A support tool that can perform these calculations and generate different assessment reports is needed. This tool does not have to be particularly complicated, and with a small amount of programming a spreadsheet-based solution could be developed. This would allow for further testing of the model. The participants also suggested that the provision of guidelines regarding who should do an assessment in an organisation would be useful.
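As a rough indication of how little programming such tool support would require, the sketch below computes category summaries and flags weak practices. It is entirely our own illustration under assumed data structures, not a tool described by the participants or the authors; a spreadsheet with equivalent formulas would serve equally well.

```python
def assessment_report(categories, threshold=7):
    """Return a per-category summary, flagging weak categories and practices.

    `categories` maps a category name to a dict of practice id -> score
    (0-10), with None standing for 'Not applicable'.
    """
    lines = []
    for name, practices in categories.items():
        scores = {p: s for p, s in practices.items() if s is not None}
        average = sum(scores.values()) / len(scores)
        status = "weak" if average < threshold else "achieved"
        weak = [p for p, s in scores.items() if s < threshold]
        lines.append(f"{name}: {average:.1f} ({status}); practices below "
                     f"{threshold}: {', '.join(weak) if weak else 'none'}")
    return "\n".join(lines)

# Hypothetical run using organisation A's requirements documents scores
# from Appendix B.
print(assessment_report({
    "Requirements documents": {"RD1": 5, "RD2": 5, "RD3": 0, "RD4": 3,
                               "RD5": 5, "RD6": 5, "RD7": 5, "RD8": 3},
}))
```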

6 Conclusion and future research

In this paper REMMF is presented and it is shown that this measurement framework has the potential to help an organisation assess its RE process maturity. Developing REMMF included abstracting characteristics from four sources, i.e. RE literature, interview data, Sommerville et al.'s maturity model and the authors' industry and research experience. The main purpose in designing REMMF is to develop a better way to assist practitioners in effectively measuring the maturity of their organisation's RE process. In order to design REMMF, some criteria were necessary for it to be successfully deployed, i.e. ease of use and user satisfaction. Using these success criteria, two research questions were proposed in Sect. 1. In order to address these questions and to evaluate the application of REMMF in practice, a practical evaluation was undertaken, i.e. two case studies. The results of these two studies showed that REMMF is not only based on appropriate research literature but is also useful in a real world environment. It was suggested by the participants in the case studies that REMMF is clear, easy to use and capable of measuring the maturity of the RE process. The case study participants agreed with the assessment results and that REMMF identified the weak and strong RE practices. However, they suggested a need for tool support in order to perform calculations and to generate different assessment reports.

REMMF is a dynamic process that will be extended and evolved based on feedback and input from the software industry. However, because only two case studies were used to evaluate ease of use and user satisfaction, further studies are required in order to evaluate and improve REMMF. Thus, it is recommended that researchers and organisations take the opportunity to trial this measurement framework and provide feedback in order to further evaluate its effectiveness in the domain of RE process improvement. After further evaluation, consideration will be given to the development of tool support for REMMF.


Appendix A: An example of requirements category assessment

The following example shows how REMMF measures the capability of the 'describing requirements' category. The practices listed in Table 3 define the describing requirements category. Practice "DR1" is highlighted as an example. Three elements of each RE practice are measured: the approach, the deployment and the results. The objective is to assess the strength of an individual RE practice as well as the RE process category.

The first of the three measurement elements is based on the participant's understanding of the organisation's approach to the RE practices, i.e. the organisation's commitment and management support for the practice as well as the organisation's ability to implement the practice. Table 4 gives an example of how a participant might respond. The RE practice is as follows.

DR1: Define standard templates for describing requirements

The respondent should tick one of the options in the 'Score' column. Using their expert understanding and by collecting relevant information from different sources, imagine the respondent selecting Weak (2) (i.e. management begins to recognize need).

The second element assesses how a practice is deployed in the organisation, i.e. the breadth and consistency of practice implementation across project areas. Table 5 gives an example of how a participant might respond. The RE practice is as follows.

DR1: Define standard templates for describing requirements

The respondent needs to tick one of the options in the 'Score' column. Using their expert understanding and by collecting relevant information from different sources, imagine the respondent selects Fair (4) (i.e. less fragmented use).

The last element assesses the breadth and consistency of positive results over time and across project areas (using that particular practice). Table 6 gives an example of how a participant might respond. The RE practice is as follows.

DR1: Define standard templates for describing requirements

The respondent should tick one of the options in the 'Score' column. Using their understanding and by collecting relevant information from different sources, imagine the respondent selects Marginally qualified (6) (i.e. positive measurable results in most parts of the organisation).

Table 3 Measuring capability example

ID   Type          Practice
DR1  Basic         Define standard templates for describing requirements
DR2  Basic         Use languages simply and concisely
DR3  Basic         Use diagrams appropriately
DR4  Basic         Supplement natural language with other description of requirement
DR5  Intermediate  Specify requirements quantitatively

Table 4 Approach

Poor (0): No management recognition of need; no organisational ability; no organisational commitment; practice not evident; higher management is not aware of the investment required and long-term benefits of this practice.

Weak (2): Management begins to recognize need; support items for the practice start to be created; a few parts of the organisation are able to implement the practice; management begins to be aware of the investment required and long-term benefits of this practice.

Fair (4): Wide but not complete commitment by management; road map for practice implementation defined; several supporting items for the practice in place; management has some awareness of the investment required and long-term benefits of this practice.

Marginally qualified (6): Some management commitment; some management becomes proactive; practice implementation well under way across parts of the organisation; supporting items in place; management has wide but not complete awareness of the investment required and long-term benefits of this practice.

Qualified (8): Total management commitment; majority of management is proactive; practice established as an integral part of the process; supporting items encourage and facilitate the use of the practice; a mechanism has been established to use and monitor this practice on a continuing basis; management has wide and complete awareness of the investment required and long-term benefits of this practice.

Outstanding (10): Management provides zealous leadership and commitment; organisational excellence in the practice recognized even outside the organisation.

Now the score of the three elements is (2 + 4 + 6)/3 = 4. So we can say that the DR1 practice is not strong (i.e. < 7) and can be considered fair. The above three measurements are made for all the RE practices in any particular requirements category. This procedure is repeated for each practice. The score for each practice is summed and an average is used to gain an overall score for each 'requirements process category'.


Table 5 Deployment

Poor (0): No part of the organisation uses the practice; no part of the organisation shows interest.

Weak (2): Fragmented use; inconsistent use; deployed in some parts of the organisation; limited monitoring/verification of use.

Fair (4): Less fragmented use; some consistency in use; deployed in some major parts of the organisation; monitoring/verification of use for several parts of the organisation; no mechanism to distribute the lessons learned to the relevant staff members.

Marginally qualified (6): Deployed in many parts of the organisation; mostly consistent use across many parts of the organisation; monitoring/verification of use for many parts of the organisation; a mechanism has been established, and used in some parts of the organisation, to distribute the lessons learned to the relevant staff members.

Qualified (8): Deployed in almost all parts of the organisation; consistent use across almost all parts of the organisation; monitoring/verification of use for almost all parts of the organisation; a mechanism has been established, and used in all parts of the organisation, to distribute the lessons learned to the relevant staff members.

Outstanding (10): Pervasive and consistent deployment across all parts of the organisation; consistent use over time across all parts of the organisation; monitoring/verification for all parts of the organisation.

Table 6 Results

Poor (0): Ineffective.

Weak (2): Spotty results; inconsistent results; some evidence of effectiveness for some parts of the organisation.

Fair (4): Consistent and positive results for several parts of the organisation; inconsistent results for other parts of the organisation.

Marginally qualified (6): Positive measurable results in most parts of the organisation; consistently positive results over time across many parts of the organisation.

Qualified (8): Positive measurable results in almost all parts of the organisation; consistently positive results over time across almost all parts of the organisation.

Outstanding (10): Requirements exceeded; consistently world-class results; counsel sought by others.


Appendix B: Assessment summaries of all RE categories

Scores are shown as Organisation A / Organisation B.

The 3-dimensional scores of requirements documents practices
RD1 (Basic) Define a standard document structure: 5 / 8
RD2 (Basic) Explain how to use the document: 5 / 8
RD3 (Basic) Include a summary of the requirements: 0 / 9
RD4 (Basic) Make a business case for the system: 3 / 9
RD5 (Basic) Define specialized terms: 5 / 7
RD6 (Basic) Make document layout readable: 5 / 9
RD7 (Basic) Help readers find information: 5 / 8
RD8 (Basic) Make the document easy to change: 3 / 6
The 3-dimensional overall score of the requirements document category: 3.8 / 8

The 3-dimensional scores of requirements elicitation practices
RE1 (Basic) Assess system feasibility: 7 / 6
RE2 (Basic) Be sensitive to organisational and political considerations: 7 / 7
RE3 (Basic) Identify and consult system stakeholders: 7 / 7
RE4 (Basic) Record requirements sources: 5 / 5
RE5 (Basic) Define the system's operating environment: 6 / 8
RE6 (Basic) Use business concerns to drive requirements elicitation: 6 / 8
RE7 (Intermediate) Look for domain constraints: 6 / 8
RE8 (Intermediate) Record requirements rationale: 0 / 7
RE9 (Intermediate) Collect requirements from multiple viewpoints: 0 / 6
RE10 (Intermediate) Prototype poorly understood requirements: 0 / 7
RE11 (Intermediate) Use scenarios to elicit requirements: 0 / 8
RE12 (Intermediate) Define operational processes: 4 / 6
RE13 (Advanced) Reuse requirements: 0 / 8
The 3-dimensional overall score of the requirements elicitation category: 3.6 / 7

The 3-dimensional scores of requirements analysis and negotiation practices
RA1 (Basic) Define system boundaries: 1 / 8
RA2 (Basic) Use checklists for requirements analysis: 0 / 6
RA3 (Basic) Provide software to support negotiations: 0 / 6
RA4 (Basic) Plan for conflicts and conflict resolution: 0 / 8
RA5 (Basic) Prioritise requirements: 0 / 9
RA6 (Intermediate) Classify requirements using a multi-dimensional approach: 0 / 7
RA7 (Intermediate) Use interaction matrices to find conflicts and overlaps: 0 / 6
RA8 (Advanced) Assess requirements risks: 0 / 7
The 3-dimensional overall score of the requirements analysis and negotiation category: 0 / 7

The 3-dimensional scores of describing requirements practices
DR1 (Basic) Define standard templates for describing requirements: 1 / 9
DR2 (Basic) Use languages simply and concisely: 3 / 9
DR3 (Basic) Use diagrams appropriately: 5 / 7
DR4 (Basic) Supplement natural language with other description of requirement: 5 / 9
DR5 (Intermediate) Specify requirements quantitatively: 0 / 7
The 3-dimensional overall score of the describing requirements category: 2.6 / 8

The 3-dimensional scores of systems modelling practices
SM1 (Basic) Develop complementary system models: 6 / 6
SM2 (Basic) Model the system's environment: 6 / 7
SM3 (Basic) Model the system architecture: 6 / 7
SM4 (Intermediate) Use structured methods for system modelling: 2 / 6
SM5 (Intermediate) Use a data dictionary: 0 / 8
SM6 (Intermediate) Document the links between stakeholder requirements and system models: 0 / 7
The 3-dimensional overall score of the systems modelling category: 3.3 / 6.8

The 3-dimensional scores of requirements validation practices
RV1 (Basic) Check that the requirements document meets your standards: 1 / 5
RV2 (Basic) Organise formal requirements inspections: 0 / 5
RV3 (Basic) Use multi-disciplinary teams to review requirements: 0 / 5
RV4 (Basic) Define validation checklists: 0 / 6
RV5 (Intermediate) Use prototyping to animate requirements: 0 / 5
RV6 (Intermediate) Write a draft user manual: 8 / 7
RV7 (Intermediate) Propose requirements test cases: 0 / 7
RV8 (Advanced) Paraphrase system models: 1 / 6
The 3-dimensional overall score of the requirements validation category: 1.2 / 5.7

The 3-dimensional scores of requirements management practices
RM1 (Basic) Uniquely identify each requirement: 0 / 8
RM2 (Basic) Define policies for requirements management: 0 / 7
RM3 (Basic) Define traceability policies: 0 / 6
RM4 (Basic) Maintain a traceability manual: 0 / 6
RM5 (Intermediate) Use a database to manage requirements: 0 / 7
RM6 (Intermediate) Define change management policies: 0 / 8
RM7 (Intermediate) Identify global system requirements: 0 / 7
RM8 (Advanced) Identify volatile requirements: 0 / 7
RM9 (Advanced) Record rejected requirements: 0 / 6
The 3-dimensional overall score of the requirements management category: 0 / 6.7


References

Alexander, I., & Stevens, R. (2002). Writing better requirements. Addison-Wesley.
Beecham, S., & Hall, T. (2003). Expert panel questionnaire: Validating a requirements process improvement model, http://www.homepages.feis.herts.ac.uk/*pppgroup/requirements_cmm.htm, Site visited May 2003.
Beecham, S., Hall, T., & Rainer, A. (2003a). Building a requirements process improvement model. Department of Computer Science, University of Hertfordshire, Technical report No: 378.
Beecham, S., Hall, T., & Rainer, A. (2003b). Software process problems in twelve software companies: An empirical analysis. Empirical Software Engineering, 8, 7–42.
Boehm, B. W. (1987). Improving software productivity. IEEE Computer, 20(9), 43–57.
Briand, L., Wust, J., & Lounis, H. (2001). Replicated case studies for investigating quality factors in object oriented designs. Empirical Software Engineering, 6(1), 11–58.
Chatzoglou, P., & Macaulay, L. (1996). Requirements capture and analysis: A survey of current practice. Requirements Engineering Journal, 1, 75–87.
Cooper, D., & Schindler, P. (2001). Business research methods (7th ed.). McGraw-Hill.
Daskalantonakis, M. K. (1994). Achieving higher SEI levels. IEEE Software, 11(4), 17–24.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35, 982–1003.
Diaz, M., & Sligo, J. (1997). How software process improvement helped Motorola. IEEE Software, 14(5), 75–81.
El Emam, K., & Madhavji, H. N. (1995). A field study of requirements engineering practices in information systems development. In Second International Symposium on Requirements Engineering (pp. 68–80).
Gorscheck, T., Svahnberg, M., & Kaarina, T. (2003). Introduction and application of a lightweight requirements engineering process evaluation method. In Proceedings of the Requirements Engineering Foundations for Software Quality (REFSQ'03) (pp. 83–92). Klagenfurt/Velden, Austria.
Hall, T., Beecham, S., & Rainer, A. (2002). Requirements problems in twelve software companies: An empirical analysis. IEE Proceedings—Software, 149(5), 153–160.
Hoffmann, H., & Lehner, F. (2001). Requirements engineering as a success factor in software projects. IEEE Software, 18(4), 58–66.
Humphery, W. S. (2002). Three process perspectives: Organizations, teams, and people. Annals of Software Engineering, 14, 39–72.
Jobserve.com. UK Wasting Billions on IT Projects, http://www.jobserve.com/news/NewsStory.asp?e=e&SID=SID2598, 21/4/2004.
Kamsties, E., Hormann, K., & Schlich, M. (1998). Requirements engineering in small and medium enterprises. Requirements Engineering, 3(2), 84–90.
Kauppinen, M., Aaltio, T., & Kujala, S. (2002). Applying the requirements engineering good practice guide for process improvement. In Proceedings of the Seventh European Conference on Software Quality (QC2002) (pp. 45–55).
MacDonell, S., & Shepperd, M. (2003). Using prior-phase effort records for re-estimation during software projects. In 9th International Symposium on Software Metrics (pp. 73–86). 3–5 Sept., Sydney, Australia.
Marjo, K., & Sari, K. (2001). Assessing requirements engineering processes with the REAIMS model: Lessons learned. In Proceedings of the Eleventh Annual International Symposium of the International Council on Systems Engineering (INCOSE2001).
Neill, C. J., & Laplante, P. A. (2003). Requirements engineering: State of the practice. IEEE Software, 18(4), 40–45.
Ngwenyama, O., & Nielsen, P. A. (2003). Competing values in software process improvement: An assumption analysis of CMM from an organizational culture perspective. IEEE Transactions on Software Engineering, 50, 100–112.
Niazi, M. (2004). A framework for assisting the design of effective software process improvement implementation strategies. Ph.D. thesis, University of Technology Sydney.
Niazi, M. (2005a). An empirical study for the improvement of requirements engineering process. In The 17th International Conference on Software Engineering and Knowledge Engineering (pp. 396–399). July 14 to 16, 2005, Taipei, Taiwan, Republic of China.
Niazi, M. (2005b). An instrument for measuring the maturity of requirements engineering process. In The 6th International Conference on Product Focused Software Process Improvement (pp. 574–585). LNCS, Oulu, Finland, June 13–16.


Niazi, M., Cox, K., & Verner, J. (2005a). An empirical study identifying high perceived value requirements engineering practices. In Fourteenth International Conference on Information Systems Development (ISD'2005). Karlstad University, Sweden, August 15–17.
Niazi, M., & Shastry, S. (2003). Role of requirements engineering in software development process: An empirical study. In IEEE International Multi-Topic Conference (INMIC03) (pp. 402–407).
Niazi, M., Wilson, D., & Zowghi, D. (2005b). A framework for assisting the design of effective software process improvement implementation strategies. Journal of Systems and Software, 78(2), 204–222.
Niazi, M., Wilson, D., & Zowghi, D. (2005c). A maturity model for the implementation of software process improvement: An empirical study. Journal of Systems and Software, 74(2), 155–172.
Nikula, U., Sajaniemi, J., & Kälviäinen, H. (2000). Management view on current requirements engineering practices in small and medium enterprises. In Fifth Australian Workshop on Requirements Engineering (pp. 81–89).
Nuseibeh, B., & Easterbrook, S. (2000). Requirements engineering: A roadmap. In 22nd International Conference on Software Engineering (pp. 35–46).
Regnell, B., Runeson, P., & Thelin, T. (2000). Are the perspectives really different? Further experimentation on scenario-based reading of requirements. Empirical Software Engineering, 5(4), 331–356.
Sawyer, P., Sommerville, I., & Viller, S. (1997). Requirements process improvement through the phased introduction of good practice. Software Process—Improvement and Practice, 3, 19–34.
SCAMPI. (2001). Standard CMMI® Appraisal Method for Process Improvement (SCAMPI), Version 1.1: Method Definition Document. SEI, CMU/SEI-2001-HB-001.
Siddiqi, J., & Chandra, S. (1996). Requirements engineering: The emerging wisdom. IEEE Software, 13(2), 15–19.
Sommerville, I. (1996). Software engineering (5th ed.). Addison-Wesley.
Sommerville, I., & Ransom, J. (2005). An empirical study of industrial requirements engineering process assessment and improvement. ACM Transactions on Software Engineering and Methodology, 14(1), 85–117.
Sommerville, I., & Sawyer, P. (1997). Requirements engineering—A good practice guide. Wiley.
Sommerville, I., Sawyer, P., & Viller, S. (1998). Improving the requirements process. In Fourth International Workshop on Requirements Engineering: Foundation of Software Quality (pp. 71–84).
Standish-Group. (1995). Chaos—The state of the software industry. Standish Group International Technical Report, pp. 1–11.
Standish-Group. (1999). Chaos: A recipe for success. Standish Group International.
Standish-Group. (2003). Chaos—The state of the software industry.
Verner, J., Cox, K., Bleistein, S., & Cerpa, N. (2005). Requirements engineering and software project success: An industrial survey in Australia and the US. Australian Journal of Information Systems (to appear Sept 2005).
Verner, J., & Evanco, W. M. (2005). In-house software development: What software project management practices lead to success? IEEE Software, 22(1), 86–93.
Wiegers, K. E. (2003). Software requirements (2nd ed.). Redmond, WA: Microsoft Press.
Yin, R. K. (1993). Applications of case study research. Sage Publications.

Author Biographies

Dr. Mahmood Niazi is a lecturer in the School of Computing and Mathematics at Keele University and an active researcher in the field of software engineering. He has spent more than a decade with leading technology firms and universities as a process analyst, senior systems analyst, project manager, research scientist and lecturer, and has participated in and managed several software development and research projects. He holds a Ph.D. from the Faculty of IT, University of Technology Sydney. His research interests are software process improvement, requirements engineering, empirical software engineering, software measurement and global software development. Previously Dr. Niazi worked at National ICT Australia, the University of Technology Sydney, the University of Sydney and the University of Manchester.



Dr. Karl Cox

Prior to joining NICTA, Dr. Cox was a Research Fellow in the School of Computer Science and Engineering at the University of New South Wales (UNSW), Sydney, Australia, and Lecturer and Leader of the Software Development Methodology Unit for the Master of Science in Computing (Software Engineering) at Bournemouth University in the UK. Dr. Cox was awarded a Master's degree in Software Engineering with Distinction from Bournemouth University in 1998, and a Ph.D. in Computer Science from Bournemouth University in 2002. Dr. Cox's research interests are centred on RE, specifically: the Problem Frames approach as a means of providing a framework for understanding the problem context of business needs; goal modelling, combined with problem frames, as a means of describing business goals, strategies and objectives that are aligned to software; process modelling, which captures the details of the processes that businesses implement to carry out their daily work; and use cases, with a focus on improving the comprehensibility of use case descriptions and on addressing the misunderstanding and misuse of use cases that often occur.

Professor June Verner

Prior to joining NICTA in 2003 as Principal Research Scientist of the ESE research program, Professor Verner was Professor of IS at the College of Information Science and Technology at Drexel University, Philadelphia, PA, USA. In 1977, Professor Verner was awarded a Bachelor of Science in Mathematics. She completed a Graduate Diploma in Social Science (Computer Science) in 1979. She was awarded her Master of Business Studies in Management Information Systems with First Class Honours in 1982, and a Ph.D. in Software Engineering in 1989, with a thesis entitled "A Generic Software Size Estimation Model Based on Component Partitioning". All qualifications were completed at Massey University, New Zealand. Professor Verner's research interests include software project management and its effect on project success, software risk, software measurement, RE, and the definition of software project success.

