An Instrument for Measuring the Maturity of Requirements Engineering Process Mahmood Niazi National ICT Australia, Sydney, NSW 1430, Australia
[email protected]
Abstract. Requirements problems are widely acknowledged to have an impact on the effectiveness of the software development process. In order to improve the requirements engineering (RE) process and to reduce requirements problems, Sommerville et al. [1] have developed a requirements maturity model. The literature shows that the measurement process designed in this model is confusing and can lead organizations to incorrect results, because the measurement process is ambiguous and no strategic, systematic approach is used to assign scores to the various RE practices. The objective of this paper is to propose a measurement instrument for Sommerville et al.'s model that effectively measures the maturity of the RE process. The main purpose of proposing this measurement instrument is to develop better ways to assist practitioners in effectively measuring the maturity of the RE process; the instrument provides a very practical structure with which to do so. I have tested this instrument in one case study, in which a single category of the RE process, 'requirements elicitation', was used as an exemplar. The case study results show that the measurement instrument has the potential to assist practitioners in effectively measuring the maturity of the 'requirements elicitation' category of the RE process. I therefore recommend that organizations trial this instrument on the other categories of the RE process in order to further evaluate its effectiveness in the RE domain.
F. Bomarius and S. Komi-Sirviö (Eds.): PROFES 2005, LNCS 3547, pp. 574–585, 2005. © Springer-Verlag Berlin Heidelberg 2005

1 Introduction
Requirements problems are widely acknowledged to have an impact on the effectiveness of the software development process [2; 3]. Many software projects have failed because they contained a poor set of requirements [4]. No software process can keep delivery times, costs and product quality under control if the requirements are poorly defined and managed [5]. In order to produce software that closely matches the needs of an organisation and its stakeholders, great attention needs to be paid to improvement of the RE process [5; 6]. In order to improve the RE process and to reduce requirements problems, some researchers have published their work [1; 5]. Sommerville et al.'s model is derived from existing standards and has three levels of maturity: Level 1-Initial, Level 2-Repeatable and Level 3-Defined. The model can be used to assess the current RE
process and it provides a template for requirements engineering practice assessment. It is based upon 66 good requirements practices, which are classified into Basic, Intermediate and Advanced. Against each practice, four types of assessment are made: 3 points are scored for a standardized practice, 2 for normal use, 1 for discretionary use and 0 for practices that are never used. The model classifies organizations with less than 55 points in the basic guidelines as Level 1-Initial; organizations with above 55 points in the basic guidelines and less than 40 points in the intermediate and advanced guidelines as Level 2-Repeatable; and organizations with more than 85 points in the basic guidelines and more than 40 points in the intermediate and advanced guidelines as Level 3-Defined. In earlier work, in order to use the Sommerville et al. maturity model, the author conducted an empirical study with Australian practitioners [7]. This study was a two-fold process: firstly, requirements process maturity was assessed using the requirements maturity model developed by Sommerville et al.; secondly, the types and number of problems faced by different practitioners during their software projects were documented. In this study, interviews were conducted with twenty-two Australian practitioners. During the interviews the practitioners expressed the opinion that, although all the practices in the Sommerville et al. maturity model are very well defined, the measurement process designed for these practices was confusing and could lead organizations to incorrect results. This is because the measurement process is ambiguous and no strategic, systematic approach is used to assign scores to the various practices. The practitioners in this study were very happy with the list of practices for the RE process, but they wanted a more formal and structured measurement instrument with which to assess the maturity of the RE process. Sommerville et al.
have also indicated this: "this assessment scheme is not a precise instrument and is not intended for formal accreditation" [1:p29]. I have developed an instrument to effectively measure the maturity of the RE process. The objective of this paper is to describe this measurement instrument and to test it in one case study. To focus this study, I investigated the following research question:
RQ. Does the proposed measurement instrument help practitioners measure the maturity of their RE processes?
The main reason for addressing this research question is to develop better ways to assist practitioners in effectively measuring the maturity of the RE process.
This paper is organised as follows. Section 2 provides background. Section 3 describes the development process of the assessment instrument. In Section 4 the assessment instrument is described. In Section 5 a case study is presented. Finally, the paper presents a summary of conclusions and recommendations for practitioners as concluding remarks.
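As a concrete illustration, the level-classification rule of Sommerville et al.'s model summarised above can be sketched as a small decision function. This is only a sketch: the published description does not specify the boundary cases (e.g. exactly 55 basic points, or 55-85 basic points with more than 40 intermediate/advanced points), so the comparisons below are an assumption.

```python
def re_maturity_level(basic_points, intermediate_advanced_points):
    """Sketch of the level thresholds in Sommerville et al.'s model [1].

    basic_points: total points scored against the basic guidelines.
    intermediate_advanced_points: combined points against the intermediate
    and advanced guidelines. Boundary handling is an assumption, since the
    published thresholds leave the exact cut-off values unspecified.
    """
    if basic_points > 85 and intermediate_advanced_points > 40:
        return "Level 3 - Defined"
    if basic_points > 55 and intermediate_advanced_points < 40:
        return "Level 2 - Repeatable"
    return "Level 1 - Initial"
```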
2 Background
RE is often considered an important process of the software life-cycle, because the software industry and research communities have realised the difficulties of producing high-quality requirements [8]. It has been observed in the RE literature that better quality requirements can be achieved if the RE process is properly defined
[4; 5]. Despite the importance of RE processes, little research has been conducted to reduce requirements problems and to improve requirements quality. Several studies have been conducted to identify the problems with the RE process [3; 4; 9-13]. To highlight a few of these:
• El Emam and Madhavji [4] described a field study indicating that there are seven key issues that must be addressed in a successful RE process improvement effort: package consideration, managing the level of detail of functional process models, examining the current system, user participation, managing uncertainty, benefits of CASE tools and project management capability.
• Hall et al. [3] discussed the requirements process problems in twelve software companies. Their main findings show that the requirements process is a major source of problems in the software development process.
• Nikula et al. [10] analysed requirements engineering practices in a survey of twelve small and medium enterprises in order to understand current requirements engineering practices and the desire to improve them. Their results show that the problem is not a lack of desire for improvement among practitioners, but that management does not know that many requirements engineering issues can be solved with standard practices that are well documented in the literature.
• Finally, Siddiqi and Chandra [12] noted a gap between current research and practice and, in order to reduce this gap, suggested a continuous discussion between researchers and practitioners.
Identification of requirements problems alone is not sufficient; a holistic approach is required in order to reduce these problems and to achieve quality requirements [6; 7]. Very limited work has been done on developing ways to reduce requirements problems by improving the requirements process. One such effort is the REAIMS (Requirements Engineering Adaptation and IMprovement for Safety and dependability) project, partially funded by the European Commission and started at the University of Lancaster, UK [1]. The objective of REAIMS was to provide a framework for improving the RE process for dependable and safety-related systems and to assess this in a real industrial context [1]. In this project a requirements maturity model was developed, derived from the Capability Maturity Model [14], and a set of guidelines was proposed for it. The aim of the present study is to design a measurement instrument to measure the effectiveness of each practice in the Sommerville et al. requirements maturity model [1]. This measurement instrument should provide requirements practitioners with some insight into designing appropriate RE processes in order to achieve better results.
3 A Requirements Process Maturity Assessment Instrument Development Process
An examination of the RE literature, together with the previously conducted empirical study [7], highlights the need to develop an instrument to effectively measure the
maturity of the RE process. The requirements process maturity assessment instrument (RPMAI) development was initiated by creating its success criteria. Objectives were set to clarify the purpose of the instrument and to outline what the instrument is expected to describe. These criteria guide development and are later used to help evaluate the instrument. The RPMAI building activities involved abstracting characteristics from three sources:
• RE literature data [3; 15-18]
• Interview data [7]
• Sommerville et al.'s maturity model [1; 5; 19]
Figure 1 outlines the stages involved in creating the RPMAI.

Fig. 1. Activities involved in building the requirements process maturity assessment instrument: (1) specify criteria for instrument development; (2) formulate the research question; (3) rationalization and structuring of the Sommerville et al. model; (4) development of the RPMAI; (5) evaluation through a case study.
The first stage in RPMAI development is to set the criteria for its success. The motivation for these criteria comes from empirical research with eleven Australian software development companies [7]. In order to design the RPMAI the following criteria were decided upon.
• Consistency
The RPMAI features need to be consistent and complete. The language used across different components of the RPMAI will be consistent. The RPMAI will include all necessary information, and there will be no conflicts between different parts of the instrument. The structures of the different components of the RPMAI will have consistent granularity.
• User satisfaction
The end users need to be satisfied with the results of the measurement instrument. The end users will be able to use the measurement instrument and achieve specified goals according to their needs and expectations without any confusion or ambiguity.
• Ease of use
Complex models and standards are unlikely to be adopted by organizations, as they require a lot of resources, training and effort. The RPMAI will have different levels of decomposition, starting with the highest level, in order to gradually lead the
user through from a descriptive framework towards a more prescriptive solution. The structure of the RPMAI will be flexible and easy to follow.
In order to address these criteria, a research question (see Section 1) was developed in stage 2. Stage 3 is where the author undertook a rationalisation and structuring of the Sommerville et al. [1] model. Stage 4 is where the RPMAI was developed. The final stage is where an evaluation of the RPMAI was performed using a case study.
4 A Requirements Process Maturity Assessment Instrument
At Motorola, an instrument has been developed to assess the organization's current status relative to software process improvement and to identify weak areas that need attention and improvement [18]. Diaz and Sligo [20] describe the use of this instrument: "at Motorola Government Electronics Divisions, each project performs a quarterly SEI self-assessment. The project evaluates each KPA activity as a score between 1 and 10, which is then rolled into an average score for each key process area. Any key process area average score that falls below 7 is considered a weakness" [20:p76]. Motorola's assessment instrument has the following three evaluation dimensions [18]:
• Approach: the criteria here are the organization's commitment and management support for the practice, as well as the organization's ability to implement the practice.
• Deployment: the key criteria here are the breadth and consistency of practice implementation across project areas.
• Results: the criteria here are the breadth and consistency of positive results over time and across project areas.
Motorola's instrument successfully assisted in producing high-quality software, reducing cost and time, and increasing productivity [20]. In this paper, I have adapted Motorola's instrument [18] and, based on the work of Sommerville et al. [1] and Niazi et al. [16], developed an instrument to effectively measure the maturity of the RE process. There are several compelling reasons for adapting Motorola's instrument:
• it is a normative instrument designed to be adapted;
• it has been successfully tried and tested at Motorola; and
• it has a limited set of activities.
In order for it to be used effectively in the domain of RE, I have tailored the Motorola instrument, making some minor changes to the evaluation activities of its three dimensions using the literature and the empirical study (as shown in Appendix A). The structure of the RPMAI is shown in Figure 2. The 66 best practices designed by Sommerville et al. can be divided into eight categories: requirements documents, requirements elicitation, requirements analysis and negotiation, describing requirements, system modelling, requirements validation, requirements management and requirements for critical systems. These categories are shown as 'requirements process category' in Figure 2. Figure 2 shows that
requirements process category maturity indicates requirements process maturity [5]. These requirements process categories contain the different best practices designed for the RE process [5]. For each best practice, an instrument has been developed in this paper to effectively measure its maturity. In order to effectively measure the maturity of the RE process, the following steps have been adapted in the RPMAI [15; 16; 18] (an example is provided in Table 1):
• Step 1: For each practice, a key participant who is involved in the RE process assessment calculates the three dimension scores of the assessment instrument.
• Step 2: The three dimension scores for each practice are added together, divided by 3 and rounded up. The resulting score for each practice is ticked in the evaluation sheet.
• Step 3: Repeat this procedure for each practice. Add together the scores of all practices and average them to gain an overall score for each requirements process category.
• Step 4: A score of 7 or higher for a requirements process category indicates that the maturity of that category has been successfully achieved [16; 18]. Any category with an average score that falls below 7 is considered a weakness [16; 18].
• Step 5: Some practices may not be appropriate for an organization and need never be implemented. For such practices the 'Not applicable' (NA) option is selected. NA practices are not used in calculating the average score of the requirements process category.
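The five steps above can be sketched in code as follows. The practice names and dimension scores below are hypothetical examples, not data from the case study; the per-practice rounding follows Step 2 (rounded up) and NA practices are excluded from the category average as in Step 5.

```python
import math

# Hypothetical (Approach, Deployment, Results) scores on the 0-10 scale of
# the adapted Motorola instrument; None marks a 'Not applicable' practice.
practice_scores = {
    "Prototype poorly understood requirements": (2, 2, 2),
    "Use scenarios to elicit requirements": (10, 10, 9),
    "Reuse requirements": None,  # NA: excluded from the category average
}

def practice_score(dimensions):
    # Step 2: add the three dimension scores, divide by 3 and round up.
    return math.ceil(sum(dimensions) / 3)

def category_score(scores):
    # Steps 3 and 5: average the per-practice scores, skipping NA practices.
    rated = [practice_score(d) for d in scores.values() if d is not None]
    return sum(rated) / len(rated)

average = category_score(practice_scores)
is_weak = average < 7  # Step 4: an average below 7 indicates a weak category
```

With these illustrative inputs the two rated practices score 2 and 10, giving a category average of 6, which Step 4 flags as a weakness.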
Fig. 2. The structure of RPMAI [1; 5; 16; 19]. Sommerville et al.'s model informs the requirements process categories; each category contains best practices, which are described by activities and organized into measurements. Motorola's instrument and Niazi et al.'s model inform how each practice is measured. Requirements process category maturity, in turn, indicates overall requirements process maturity.
At the end of this measurement process it will be very clear to companies which requirements process categories are weak and need further consideration.
5 The Case Study
The case study method was used because it is said to be powerful for evaluation and can provide sufficient information in a real software industry environment [21]. A case study also provides valuable insights for problem solving, evaluation and strategy [22]. Since the RPMAI is most applicable in a real software industry environment, the case study research method was believed to be the most appropriate for this situation. A real-life case study was necessary because it:
• showed that the RPMAI is suitable for, and will fit in, a real-world environment;
• highlighted areas where the RPMAI needs improvement; and
• showed the practicality of the RPMAI in use.
In the case study, a participant from Company A used the RPMAI and measured the RE process maturity of his/her company independently, without any suggestion or help from the researcher. Company A is a multinational company that provides software services, employing 101-500 professionals worldwide and fewer than 10 in Australia. The main business of the company is to develop warehouse management systems, manufacturing execution systems and ERP integration. A process improvement programme was initiated in Company A 3-5 years ago, in order to shorten software development cycle times, reduce time-to-market, improve the quality of the software developed and reduce maintenance. At the end of the case study, a feedback session was conducted with the participant in order to obtain feedback about the RPMAI. The questionnaire (available from the author) used in the study of Niazi et al. [17] was adapted to structure this feedback session. In order to evaluate the RPMAI the following criterion was decided upon; the primary motivation for this criterion emanates from the criteria designed for the development of the RPMAI (Section 3).
• Usability of the RPMAI
Usability is formally defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" [23]. The goal of a usability analysis is to pinpoint areas of confusion and ambiguity for the user which, when improved, will increase the efficiency and quality of the user's experience [23]. The usability of the RPMAI can be linked with 'ease of use' and 'user satisfaction'. If one can successfully use the instrument and achieve specified goals according to user needs and expectations without any confusion or ambiguity, then this fulfils the usability criterion of the RPMAI [23].
5.1 Implementation of RPMAI
In this case study, in order to evaluate the RPMAI, the requirements elicitation category was used as an exemplar. A participant from Company A carried out the requirements elicitation category assessment using the RPMAI (Section 4). I have summarised the assessment results in Table 1. The key points of this assessment are as follows:
• It is clear from Table 1 that the requirements elicitation process of Company A is weak (i.e., its score is < 7).
• In order to achieve a mature requirements elicitation process, Company A needs to improve all the individual practices with a score of less than seven, as shown in Table 1.
• Table 1 shows that Company A is not developing prototypes for poorly understood requirements.
• In Company A, stakeholders are not identified and consulted.
• Requirements rationale is not recorded.
Table 1. Practice evaluation sheet for the Requirements Elicitation category (6+6+4+8+6+6+6+4+6+2+10+8+6 = 78; 78/13 = 6; average score = 6)

Requirements Elicitation practice — score (0-10, or NA):
• Assess system feasibility — 6
• Be sensitive to organizational and political considerations — 6
• Identify and consult system stakeholders — 4
• Record requirements sources — 8
• Define the system's operating environment — 6
• Use business concerns to drive requirements elicitation — 6
• Look for domain constraints — 6
• Record requirements rationale — 4
• Collect requirements from multiple viewpoints — 6
• Prototype poorly understood requirements — 2
• Use scenarios to elicit requirements — 10
• Define operational processes — 8
• Reuse requirements — 6
• Scenarios are strongly used to elicit requirements.
• Requirements sources are recorded.
• Operational processes are well defined.
Out of the thirteen requirements elicitation practices, ten are identified as weak. The results suggest that Company A should focus on these weak practices in order to improve its requirements elicitation process and to reduce the problems associated with requirements elicitation. Only three requirements elicitation practices are well established in Company A. The weak requirements elicitation practices identified by the RPMAI can be improved using the guidelines provided by Sommerville et al. [19].
5.2 The Case Study Lessons Learned
The case study was conducted in order to test and evaluate the RPMAI in a real-world environment. The lessons learned from this case study are summarised as follows:
• The participant agreed that the RPMAI is clear and easy to use.
• The participant was able to use the RPMAI successfully, without any confusion or ambiguity, to measure RE process maturity.
• The participant agreed that the RPMAI is general enough to be applied to most companies.
• The RPMAI provides an entry point through which the participant can effectively judge the effectiveness of different requirements elicitation practices.
• The participant who used the RPMAI fully agreed with, and was satisfied by, the assessment results.
• The participant agreed that the use of the RPMAI would improve RE process maturity.
• All the components of the RPMAI are self-explanatory and require no further explanation to be used effectively.
• The RPMAI is capable of determining the current state of the 'requirements elicitation' category of the RE process.
In summary, the participant was very satisfied with both the use and the structure of the RPMAI. The participant suggested extending the RPMAI to include all phases of the RE process. He also suggested providing some guidelines about who should carry out this assessment in a company.
6 Conclusion and Future Work
This research has met its aim and objectives with respect to providing an instrument to assist practitioners in measuring the maturity of the RE process. The main purpose of proposing this measurement instrument was to develop better ways to assist practitioners in effectively measuring the maturity of the RE process. In
order to design this instrument, some criteria were decided upon for its success. The research question was: Does the proposed measurement instrument help practitioners measure the maturity of their RE processes? The results of the case study are very positive. However, only one category, 'Requirements Elicitation', was used as an exemplar in the case study. Thus, I recommend that organizations trial this instrument on the other categories of the RE process in order to further evaluate its effectiveness in the RE domain. Larger case studies using the other RE categories will further evaluate the effectiveness of the RPMAI. In particular, one aspect the participant considered important is to provide guidelines about who should use the RPMAI in a company. This is a very positive and valuable suggestion; the RPMAI is a dynamic instrument that will be extended and evolved based on feedback and input from the software industry, and this suggestion will be considered in future work by developing guidelines for its use. It is hoped that further evaluation of the RPMAI can be continued through multiple case studies in order to improve it and to make it more complete.
Acknowledgements
I am thankful to Professor Ross Jeffery (Program Leader, Empirical Software Engineering, NICTA) and Dr. Mark Staples (Researcher, NICTA) for their comments on this paper.
References
1. Sommerville, I., Sawyer, P. and Viller, S.: Requirements Process Improvement Through the Phased Introduction of Good Practice, Software Process - Improvement and Practice (3). (1997) 19-34.
2. Sommerville, I.: Software Engineering, Fifth Edition. Addison-Wesley (1996).
3. Hall, T., Beecham, S. and Rainer, A.: Requirements Problems in Twelve Software Companies: An Empirical Analysis, IEE Proceedings - Software (August). (2002) 153-160.
4. El Emam, K. and Madhavji, N. H.: A Field Study of Requirements Engineering Practices in Information Systems Development. Second International Symposium on Requirements Engineering. (1995) 68-80.
5. Sommerville, I., Sawyer, P. and Viller, S.: Improving the Requirements Process. Fourth International Workshop on Requirements Engineering: Foundation of Software Quality. (1998) 71-84.
6. Niazi, M.: Improving the Requirements Engineering Process: A Process Oriented Approach. Thirteenth Australasian Conference on Information Systems (ACIS02). (2002) 885-898.
7. Niazi, M. and Shastry, S.: Role of Requirements Engineering in Software Development Process: An Empirical Study. IEEE International Multi-Topic Conference (INMIC03). (2003) 402-407.
8. van Lamsweerde, A.: Requirements Engineering in the Year 00: A Research Perspective. 22nd International Conference on Software Engineering (ICSE). (2000) 5-19.
9. Kamsties, E., Hormann, K. and Schlich, M.: Requirements Engineering in Small and Medium Enterprises, Requirements Engineering 3 (2). (1998) 84-90.
10. Nikula, U., Sajaniemi, J. and Kalviainen, H.: Management View on Current Requirements Engineering Practices in Small and Medium Enterprises. Fifth Australian Workshop on Requirements Engineering. (2000) 81-89.
11. Nuseibeh, B. and Easterbrook, S.: Requirements Engineering: A Roadmap. 22nd International Conference on Software Engineering. (2000) 35-46.
12. Siddiqi, J. and Chandra, S.: Requirements Engineering: The Emerging Wisdom, IEEE Software 13 (2). (1996) 15-19.
13. Niazi, M. and Shastry, S.: Critical Success Factors for the Improvement of Requirements Engineering Process. International Conference on Software Engineering Research and Practice. (2003) 433-439.
14. Paulk, M., Weber, C., Curtis, B. and Chrissis, M.: A High Maturity Example: Space Shuttle Onboard Software, in The Capability Maturity Model: Guidelines for Improving the Software Process. Addison-Wesley (1994).
15. Beecham, S., Hall, T. and Rainer, A.: Building a Requirements Process Improvement Model. Department of Computer Science, University of Hertfordshire, Technical Report No. 378 (2003).
16. Niazi, M., Wilson, D. and Zowghi, D.: A Maturity Model for the Implementation of Software Process Improvement: An Empirical Study, Journal of Systems and Software 74 (2). (2005) 155-172.
17. Niazi, M., Wilson, D. and Zowghi, D.: A Framework for Assisting the Design of Effective Software Process Improvement Implementation Strategies. Accepted for publication: Journal of Systems and Software (2005).
18. Daskalantonakis, M. K.: Achieving Higher SEI Levels, IEEE Software 11 (4). (1994) 17-24.
19. Sommerville, I. and Sawyer, P.: Requirements Engineering - A Good Practice Guide. Wiley (1997).
20. Diaz, M. and Sligo, J.: How Software Process Improvement Helped Motorola, IEEE Software 14 (5). (1997) 75-81.
21. Yin, R. K.: Applications of Case Study Research. Sage Publications (1993).
22. Cooper, D. and Schindler, P.: Business Research Methods, Seventh Edition. McGraw-Hill (2001).
23. ISO/DIS 9241-11: International Standards Organization, Guidance on Usability. (1994).
Appendix A. Measurement Instrument [18]

Score key for the three activity evaluation dimensions (Approach, Deployment, Results):

Poor (0)
Approach: No management recognition of need. No organizational ability. No organizational commitment. Practice not evident. Higher management is not aware of the investment required and the long-term benefits of this practice.
Deployment: No part of the organization uses the practice. No part of the organization shows interest.
Results: Ineffective.

Weak (2)
Approach: Management begins to recognize the need. Support items for the practice start to be created. A few parts of the organization are able to implement the practice. Management begins to be aware of the investment required and the long-term benefits of this practice.
Deployment: Fragmented use. Inconsistent use. Deployed in some parts of the organization. Limited monitoring/verification of use.
Results: Spotty results. Inconsistent results. Some evidence of effectiveness for some parts of the organization.

Fair (4)
Approach: Wide but not complete commitment by management. Road map for practice implementation defined. Several supporting items for the practice in place. Management has some awareness of the investment required and the long-term benefits of this practice.
Deployment: Less fragmented use. Some consistency in use. Deployed in some major parts of the organization. Monitoring/verification of use for several parts of the organization. No mechanism to distribute the lessons learned to the relevant staff members.
Results: Consistent and positive results for several parts of the organization. Inconsistent results for other parts of the organization.

Marginally qualified (6)
Approach: Some management commitment; some management becomes proactive. Practice implementation well under way across parts of the organization. Supporting items in place. Management has wide but not complete awareness of the investment required and the long-term benefits of this practice.
Deployment: Deployed in some parts of the organization. Mostly consistent use across many parts of the organization. Monitoring/verification of use for many parts of the organization. A mechanism has been established, and is used in some parts of the organization, to distribute the lessons learned to the relevant staff members.
Results: Positive measurable results in most parts of the organization. Consistently positive results over time across many parts of the organization.

Qualified (8)
Approach: Total management commitment. The majority of management is proactive. Practice established as an integral part of the process. Supporting items encourage and facilitate the use of the practice. A mechanism has been established to use and monitor this practice on a continuing basis. Management has wide and complete awareness of the investment required and the long-term benefits of this practice.
Deployment: Deployed in almost all parts of the organization. Consistent use across almost all parts of the organization. Monitoring/verification of use for almost all parts of the organization. A mechanism has been established, and is used in all parts of the organization, to distribute the lessons learned to the relevant staff members.
Results: Positive measurable results in almost all parts of the organization. Consistently positive results over time across almost all parts of the organization.

Outstanding (10)
Approach: Management provides zealous leadership and commitment. Organizational excellence in the practice is recognized even outside the company.
Deployment: Pervasive and consistent deployment across all parts of the organization. Consistent use over time across all parts of the organization. Monitoring/verification for all parts of the organization.
Results: Requirements exceeded. Consistently world-class results. Counsel sought by others.