Software Process Improvement Model for a Small Organization: An Experience Report

Amr Kamel, Sundari Voruganti, H. James Hoover and Paul G. Sorenson
Dept. of Computing Science, 615 GSB, University of Alberta, Edmonton, Canada T6G 2H1
{amr,sundari,hoover,[email protected]
Phone: 1 403 492 4589
Fax: 1 403 492 1071
Abstract
Up until the 1990s, software quality was determined mostly by functionality. But market competition and contractual obligations have forced today's software suppliers to enlarge their notion of quality beyond functionality, and to include such factors as user support, product evolution and quality of implementation. There are some up-front costs to adding quality to a software product that, in theory, should be recovered later through reduced development costs and increased product sales to satisfied customers. Unfortunately, these critical quality costs can be prohibitive for small companies, who must carefully consider whether to introduce a quality system and to what extent. In this paper, we explore an approach for introducing process improvement into a small project that is effective, affordable and scalable. Specifically, we describe the current baseline practices of the SEAF project, identify improvement opportunities based on the assessment, and develop an initial version of a quality model for small companies.
Keywords: Small organization, process improvement, light-weight process, SPICE.

Amr Kamel is a Ph.D. student. His research interests include software requirements, processes, quality and metrics. Sundari Voruganti is a Masters student working in the area of quality issues for small companies and their light-weight solutions. H. James Hoover is an Associate Professor. His research interests include building OO application frameworks and the application of proof-theoretic techniques to the development of correct programs. Paul G. Sorenson is Professor and Chair. His research interests include software process and quality, software engineering environments and software frameworks.
1 Introduction

In this paper, we explore an approach for introducing process improvement into a small project that is effective, affordable and scalable. Specifically, we describe our experiences in developing a quality process for an industrial project, as well as a quality process model for small organizations and a method for process improvement abstracted from that experience.

What kind of quality process, if any, should a small project use? Both academia and industry recognize that the quality of a product depends greatly on the kind of process used to produce that product. Several methods have been proposed and used to assess and improve the quality of the software process. The Capability Maturity Model (CMM) [1], Software Process Improvement and Capability dEtermination (SPICE) [2] and ISO 9001 [4] are three of the most common international standards that have been developed to assess process quality. But progress from assessment to a quality process requires guidance (as in [12]) and tool support (EssentialSET, a trademark of the Software Productivity Centre, is one of many). Process improvement guidelines and tools are mostly motivated by and geared towards large software development organizations. They are simply too heavy-weight for a small organization. Our light-weight process addresses the resource limitations faced by small to medium organizations. Since a rigorous quality control process might be beyond the needs of a small project, our light-weight approach allows partial enaction of the quality process based on a cost-benefit analysis.
1.1 Overview of the Project

An application framework for sizing engineering products, the Size Engineering Application Framework (henceforth SEAF), is currently being developed as a collaborative effort between the Software Engineering Lab at the University of Alberta and an industrial partner. ISO 9001 certification was a quality goal of the SEAF project. This initiated our interest in small-team software process. SEAF is divided into a set of six parallel sub-projects; among these, the development of a Small Enterprise Quality Process is the focus of this paper. The outcome of this sub-project is a general, adaptable process model for applying software engineering techniques and technologies in small to medium system development.

The SEAF project philosophy is to introduce just enough process and technology just in time. The quality process is rapidly prototyped in the same manner as development. For example, using standard off-the-shelf internet and web tools (JavaScript, Netscape, Perl, ftp, etc.) we assembled a basic distributed development environment called Cafe Seaf. The environment centers around a master repository. Participants have a cloned copy of the master repository. Configuration management is maintained through a check-in/check-out system. The tool also has a notification mechanism that updates SEAF project participants with the most recent repository status and changes.
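The check-in/check-out and notification scheme of Cafe Seaf can be sketched as follows. This is an illustrative model only, under our own assumptions; the class and method names (Repository, check_out, check_in) are ours and do not come from the actual tool:

```python
# Sketch of the Cafe Seaf idea: a master repository that locks artifacts
# on check-out and notifies all subscribed participants on check-in.
# All names here are illustrative, not the actual Cafe Seaf implementation.

class Repository:
    def __init__(self):
        self.files = {}          # name -> latest content
        self.locks = {}          # name -> participant holding the lock
        self.subscribers = []    # callbacks notified of repository changes

    def subscribe(self, participant):
        self.subscribers.append(participant)

    def check_out(self, name, participant):
        if self.locks.get(name):
            raise RuntimeError(f"{name} is locked by {self.locks[name]}")
        self.locks[name] = participant       # exclusive write access
        return self.files.get(name, "")

    def check_in(self, name, participant, content):
        if self.locks.get(name) != participant:
            raise RuntimeError(f"{participant} does not hold the lock on {name}")
        self.files[name] = content
        del self.locks[name]
        for notify in self.subscribers:      # the notification mechanism
            notify(f"{participant} updated {name}")

repo = Repository()
messages = []
repo.subscribe(messages.append)
repo.check_out("sizing.py", "amr")
repo.check_in("sizing.py", "amr", "def size(): ...")
```

The point of the sketch is the combination of pessimistic locking (configuration management) with broadcast notification, which is what keeps a small distributed team's clones in sync.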
[Figure 1: Framework for the Quality Activities. Pools of generic development and quality assurance processes are instantiated as the actual project processes, which interact through work products, project deliverables, measurement data, problems, enhancements and external reports, with a feedback path for process improvement.]
2 A Small Enterprise Quality Process

We define a small organization as:

    A small organization is one that is a primary producer of relatively new software and which has between four and ten employees.
In our approach, at the start of a project, a process is selected from the pool of generic processes and customized to suit the needs of the new project (see Figure 1). These processes consist of both development and quality assurance activities. When the selected processes are enacted, they become the actual project processes. Development processes are those processes that directly specify, implement or maintain a product. Quality assurance processes are the processes employed during a project life cycle to reduce defects, measure process efficiency, estimate product quality, and perform other tasks to ensure the quality of the product. The actual processes are customized instantiations from a pool of available (possibly generic) processes. These processes interact through a set of repositories containing the following object classes:
Work-Products are the various outcomes of the development processes. Work-products include both project deliverables and internal items. Each work product should be fed into at least one other actual process.

Problems are reported as a result of failures uncovered in the quality assurance process, including external reports from customers.

Enhancements are recorded as external reports that focus on things such as suggested enhancements and requirement changes, and do not include specific malfunctions in the software.

Measurement Data include product- and process-related measurements. These data are used as a basis for process improvement.
Along a different dimension, the actual processes are divided into compulsory and optional processes. Compulsory processes must be performed as specified, while optional processes can be partially enacted, depending on the needs and requirements of the project at various points in the cycle. In a small organization, the set of compulsory processes typically includes the basic development activities. In such an environment, most of the quality assurance activities are small and will initially be viewed as optional. For a light-weight quality system, any process or activity can be compulsory or optional depending on the project context. For example, "develop detailed design" should be a compulsory process for large projects, but it can be optional for small projects. Similarly, thorough walkthroughs may be omitted during the alpha phase, but instantiated during beta. To complete the definition of the quality system, the benefits and additional costs of moving a process to the compulsory side, as well as the risks of moving it to the optional side, must be addressed.
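The selection of processes from a generic pool, and the compulsory/optional tagging just described, can be made concrete with a small sketch. The pool contents and function names below are our own illustration, not part of any SPICE or SEAF artifact:

```python
# Illustrative model of instantiating project processes from a generic
# pool and marking each one compulsory or optional for a given project.

GENERIC_POOL = {                           # process name -> category
    "implement code":          "development",
    "develop detailed design": "development",
    "perform reviews":         "quality assurance",
    "revision control":        "quality assurance",
}

def instantiate(pool, compulsory):
    """Customize the generic pool for a project: every selected process is
    tagged compulsory (must be performed as specified) or optional (may be
    partially enacted after a cost-benefit analysis)."""
    return {name: {"category": cat,
                   "mode": "compulsory" if name in compulsory else "optional"}
            for name, cat in pool.items()}

# A small project: detailed design and reviews stay optional.
small = instantiate(GENERIC_POOL,
                    compulsory={"implement code", "revision control"})
```

The same pool instantiated for a large project would simply pass a larger compulsory set, which is the sense in which the quality system scales with project context.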
3 Process Improvement Model

Implementing a quality system that satisfies the dynamic nature of a small project requires a quality model that can be tuned as rapidly as the development model. This model must be evolutionary, to allow the introduction of new quality processes, and should address the risks of introducing new processes. The Spiral Model of the software life cycle [8] meets all the above requirements for software development. It is generally accepted as one of the most realistic approaches to developing complex software systems [10]. It has also been successfully applied to the software process domain as a process model generator [11]. In this section we show how the traditional phases of the spiral model paradigm (refine objectives, specify constraints and identify alternatives; identify and resolve risks; incorporate improvements; evaluate improvements and plan for the next spiral) can be adopted and extended to fit process improvement requirements.
3.1 Quality Spiral 0: Base-lining the Process

During this round of the quality spiral:

- The current process of the organization is modeled, formally or informally, to understand the type and nature of activities taking place in the organization.
- The process is then assessed formally to quantify the starting point for the process improvement program, and to establish the maturity level of the current process.
- Areas of improvement are then identified to furnish the starting point for the next spiral.
3.2 Quality Spiral 1 Onward: Process Evolution Cycle

The first phase includes:

- Determine objectives for the current improvement cycle;
- Determine alternatives for implementing and enacting target improvements;
- Identify constraints imposed on each alternative.
The activity of determining objectives and identifying constraints addresses the requirements for the target improvements, along with schedule and budget constraints. The activity of determining alternatives addresses the process architecture, the identification of best industrial practices to apply, and plans for enactment of the required improvements.

The objective of the Second Phase of the spiral is to identify and resolve risks. As the traditional paradigm suggests, this phase includes:
- Identify areas of uncertainty and sources of risk;
- Evaluate alternatives relative to objectives, constraints and risks;
- Resolve major risk issues.
During this phase, the risk of process participants resisting the full enaction of the improvements has to be addressed. Joint reviews and the recruitment of a client representative for the development team are typical ways to resolve these risks. The Third Phase incorporates improvements in the current process. This phase includes:
- Tailoring the current processes, and/or developing new ones, in order to incorporate the improvements in the current process model (designing the new process);
- Modifying the existing process model and testing the changes (the implementation step);
- Enacting the new process.
Depending on the chosen alternatives, this phase might take many forms, such as changing process standards, process programming, buying off-the-shelf software, or introducing templates and forms.

The Fourth and final phase of the spiral is to plan for the next spiral. This phase includes:
- Collecting feedback from the development team;
- Formally assessing the new process after it has stabilized.
Assessment results are used to quantify the current state of the process, to evaluate the current spiral's improvement efforts, and to suggest areas of improvement for the next spiral.
3.3 The Formal Assessment

The formal assessment of the SEAF project in spiral 0 was done using SPICE, an international collaborative project under the auspices of ISO/IEC JTC1/SC7 (the International Organization for Standardization/International Electrotechnical Commission Joint Technical Committee 1, responsible for information technology, Sub-Committee 7, responsible for software engineering), through its software process assessment group. The project, established in 1993, provides a framework for software process assessment. It also embodies a sophisticated model of software process management drawn from the world-wide experience of major software process method suppliers, such as the SEI (Software Engineering Institute) and the European Bootstrap Consortium, as well as the experience of software developers and suppliers, such as Bell Canada and British Telecommunications. The SPICE document set, divided into nine parts, provides among other things:
- a framework for determining key process strengths and weaknesses;
- a framework for improving the software process and for measuring such improvements;
- a framework for determining the risks for a business considering the development of a new software product or service.
The embedded reference model is based on the principle of examining the practices used to implement a process in order to determine the capability of that process. The practices are grouped in two categories: Base Practices, which are specific to the process, and Generic Practices, which apply to all processes. Base Practices provide an indication of achievement for the process; Generic Practices indicate how well the process is managed. The model is two-dimensional. In the process dimension, it defines 29 different processes grouped in five categories: Customer-Supplier (5), Engineering (7), Supporting (8), Management (4) and Organizational (5). On the other dimension, the capability dimension, the model defines 9 different capability attributes, grouped into 6 Capability Levels: Incomplete, Performed, Managed, Established, Predictable and Optimizing.
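The two-dimensional structure of the reference model can be captured in a few lines. The category and level names are the ones quoted in the text; the rate helper and the example process name are our own illustrative conveniences, not part of the SPICE document set:

```python
# The two dimensions of the SPICE reference model as data: process
# categories (with the per-category process counts given in the text)
# and the six named capability levels.

PROCESS_CATEGORIES = {       # category -> number of processes
    "Customer-Supplier": 5,
    "Engineering": 7,
    "Supporting": 8,
    "Management": 4,
    "Organizational": 5,
}

CAPABILITY_LEVELS = [        # index = capability level 0..5
    "Incomplete", "Performed", "Managed",
    "Established", "Predictable", "Optimizing",
]

def rate(process, level):
    """An assessment rating pairs a process with a capability level
    (an illustrative helper, not a SPICE-defined structure)."""
    return {"process": process, "level": level,
            "name": CAPABILITY_LEVELS[level]}

# The five category counts sum to the model's 29 processes.
total_processes = sum(PROCESS_CATEGORIES.values())
example = rate("SUP.3 Perform Quality Assurance", 0)
```

Viewing an assessment as a set of such (process, level) pairs is exactly how the findings in Section 4.1 are reported.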
4 Process Improvement Efforts on the SEAF Project

The process improvement efforts presented here provide sufficient information to understand, at a general level, our process improvement model. We also present improvement details to reflect our view of the essential activities for building the quality process.
4.1 Base-lining the Process

The SPICE reference model was used as a basis for SEAF's process model by simply including all applicable reference processes. Goals and expected results were defined for each process, and then the SEAF project underwent a formal SPICE assessment. The key findings were:
- 4% of the assessed processes are at Capability Level 3 (Established), 14% at Level 2 (Managed), 36% at Level 1 (Performed) and 45% at Level 0 (Incomplete).
- All applicable Engineering Processes are at Capability Level 1 (Performed). These findings indicate the adequacy of the process for developing a specific product.
- All applicable Customer-Supplier processes are at Capability Level 3 (Established), the best overall capability level. These findings apply to an in-house customer only.
- All applicable Supporting Processes have the lowest capability levels: 75% of the processes are at Level 0 and 25% at Level 1. Because of their low capability levels, if the supporting processes do not evolve with the project, the later stages of the project carry high risk.
- The assessment participants are very enthusiastic about process improvement, which bodes well for any process improvement program.
To complete the process model, all processes that were already enacted by the process participants were defined as compulsory processes; all other applicable processes were defined as optional. For example, the team was already performing revision control, so it was defined as a compulsory process, whereas the team was not performing reviews, so reviews were defined as an optional process. The assessment results also identified areas for suggested improvements. Areas of immediate improvement were identified based on the SEAF project's needs, forecasted challenges and risks for the near future. In this paper we limit our discussion to the area of performing and managing quality assurance.

To assure quality, each work product has to properly reflect the requirements behind its creation and suit its intended use. We suggest traceability analysis, reviews and inspections as tools to verify work products. Analyzing the traceability network is useful to ensure that all requirements have been incorporated into the software, to trace defects to their roots, and to better understand the intended use of work products. Finally, a testing suite for code and design has to be identified and deployed. This may require that unit, integration, regression, and acceptance tests be defined and performed.
4.2 First Evolution Cycle (Initial Improvement Opportunities)

The improvements are mapped to (but not limited to) the SPICE processes SUP.2 (Perform Configuration Management), SUP.3 (Perform Quality Assurance), SUP.4 (Perform Work Product Verification), SUP.5 (Perform Work Product Validation) and MAN.2 (Manage Quality). The three major issues that we concentrate on in the quality assurance processes are:
1. Metrics
2. Reviews
3. Traceability
Metrics. To better manage product and process quality, the SEAF project needs to deploy a metrics program to establish its quality standards and to assure that they are maintained. The suggested suite has to be easy to collect and analyse. Metrics that provide information that helps in scheduling (time taken for any task) and process improvement (number of defects in each phase and during reviews) must be collected. In general, it is important that metrics collection be semi-automated, since the developers have little time to record extensive logs. As a result, the recommended metrics are: number of defects detected during reviews and testing, number of class interface changes, time taken to do any task, and severity of problems found during reviews along with the reasons for the problems. Collecting metrics and using them for process improvement is a criterion of SPICE Level 4 (Predictable). Therefore, if the proposed metrics are implemented, the related processes can move to Level 4 in the areas of metrics usage.
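A minimal sketch of such a semi-automated metrics log follows, assuming a simple CSV record format of our own design (the paper prescribes only which quantities to record, not a format or field names):

```python
# Sketch: aggregate a lightweight metrics log into the two signals the
# text asks for: time per task (scheduling) and defects per phase
# (process improvement). The CSV layout is our own assumption.

import csv
import io
from collections import Counter

def summarize(log_csv):
    """Roll the raw per-event log up into per-task time and per-phase
    defect counts, the inputs to scheduling and process improvement."""
    time_per_task, defects_per_phase = Counter(), Counter()
    for row in csv.DictReader(io.StringIO(log_csv)):
        time_per_task[row["task"]] += float(row["time_hours"])
        defects_per_phase[row["phase"]] += int(row["defects_found"])
    return time_per_task, defects_per_phase

log = """task,time_hours,phase,defects_found,severity
review sizing module,1.5,design,3,minor
unit test sizing module,2.0,code,2,major
"""
time_per_task, defects_per_phase = summarize(log)
```

Keeping the raw log this small (one line per task, filled in as the work happens) is what makes the scheme viable for developers who have no time for extensive logging; the aggregation is fully automatic.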
Reviews. The SEAF project needs to perform reviews for early defect removal and to foster understanding among the team. Currently, the SEAF project is not performing any reviews. Reviews [13] have to be performed at each phase of the project: requirements, design, code and test. A checklist will be provided for each type of review to aid in the review process. Since a formal review is beyond the capability of a small project, we suggest that reviews be performed off-line. That is, the reviewers are notified of the availability of the work-product that is to be reviewed, and the reviews can be performed at any time by the reviewers [9]. A form will be provided to record the results of the review. Once each reviewer is done, their comments are posted to a newsgroup. The author can then read these messages and make changes accordingly. Performing reviews will help the SEAF project verify their work products and, if properly carried out, will place the related processes at SPICE Level 2 (Managed).

Traceability. As a first step towards a useful quality assurance program, traceability among
work-products has to be established. Different work-products should be traceable back to requirements. In other words, a view of all work products, with their connections and relations to each other, has to be established. The traceability network is also effective in change management, as the work products affected by a specific change can easily be identified. When reviews are conducted for a work-product, the related work-products are specified, thereby establishing forward traceability. Defects discovered during testing are also traced back to the code, design and requirements, thereby establishing backward traceability. Since this information is stored in a database, reports of all the modules and their related documentation can be produced. This is similar to a verification matrix [7]. Analyzing the traceability network helps in tracing defects to their root causes. This information aids in process improvement, where more attention can be given to the phase that produced the most defects. The traceability information is collected and placed in a database. At present, the team has to evaluate the information manually, since automatic traceability analysis would be expensive to support. Therefore, reports will be provided, but the analysis is manual. Performing traceability will place the related processes at SPICE Level 2 (Managed).
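The traceability database described above can be sketched as a small graph structure. The class and method names here are our own illustration; the actual SEAF database schema is not specified in this paper:

```python
# Sketch of a traceability network: links from each work product back to
# the artifacts it derives from, queryable in both directions so that a
# change (forward) or a defect (backward) can be traced.

from collections import defaultdict

class TraceabilityNetwork:
    def __init__(self):
        self.derives_from = defaultdict(set)   # product -> its sources
        self.used_by = defaultdict(set)        # source -> dependent products

    def link(self, product, source):
        self.derives_from[product].add(source)
        self.used_by[source].add(product)

    def trace_back(self, product):
        """Backward traceability: all transitive sources of a work
        product, e.g. from a failing test back to the requirement."""
        seen, stack = set(), [product]
        while stack:
            for src in self.derives_from[stack.pop()]:
                if src not in seen:
                    seen.add(src)
                    stack.append(src)
        return seen

net = TraceabilityNetwork()
net.link("design: sizing module", "REQ-7")
net.link("code: sizing.py", "design: sizing module")
net.link("test: test_sizing.py", "code: sizing.py")
sources = net.trace_back("test: test_sizing.py")
```

The used_by map gives the forward direction for change management (which products a requirement change touches), while trace_back supports root-cause analysis of defects; a report generator over these two maps yields something close to the verification matrix of [7].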
5 Conclusions and Future Developments

This paper has described an approach for introducing process improvement in a small software development project. The improvement strategy is based on a spiral model supported by an ongoing SPICE assessment activity. We have applied this approach to establish the base-line of the existing quality process (Spiral 0) for the project and have recommended a set of improvement opportunities that are now being implemented. To assist in the implementation of these opportunities we are currently developing some automated support for the Quality Process that can be integrated with the Cafe Seaf development environment. Like Cafe Seaf, the quality support environment will run over the internet using web technology.

Our long-term objective is to produce a general quality process model that is effective, affordable and scalable. The model must also support clearly defined interfaces between the development and quality processes. With these capabilities, it will be possible to provide services that enable small companies to out-source their quality assessment and improvement activities, leaving them free to focus more on product development.
References

[1] Mark C. Paulk, Bill Curtis, Mary Beth Chrissis and Charles V. Weber, "Capability Maturity Model for Software, Version 1.1" (CMU/SEI-93-TR-24, ADA 263403), Pittsburgh, PA: Software Engineering Institute, February 1993.
[2] A. Dorling, "SPICE: Software Process Improvement and Capability dEtermination", Software Quality Journal, Vol. 2, pp. 209-224, 1993.
[3] Mark C. Paulk, "How ISO 9001 Compares with the CMM", IEEE Software, January 1995, pp. 74-83.
[4] "Lloyd's Register TickIT Auditor's Course", Issue 1.4, Lloyd's Register, March 1994.
[5] Hossein Saiedian and Richard Kuzara, "SEI Capability Maturity Model's Impact on Contractors", IEEE Computer, January 1995, pp. 16-26.
[6] Judith G. Brodman and Donna L. Johnson, "What Small Businesses and Small Organizations Say About the CMM", Proceedings of the 16th International Conference on Software Engineering (ICSE 16), IEEE Computer Society, Sorrento, Italy, 1994.
[7] Darrel Ince, "An Introduction to Software Quality Assurance and its Implementation", McGraw-Hill, 1994.
[8] B. Boehm, "A Spiral Model of Software Development and Enhancement", IEEE Computer, Vol. 21, pp. 61-72, May 1988.
[9] D. Wheeler, B. Brykczynski and R. Meeson Jr., "Peer Review Processes Similar to Inspection", in Software Inspection: An Industry Best Practice, pp. 228-236, 1996.
[10] Roger S. Pressman, "Software Engineering: A Practitioner's Approach", McGraw-Hill, 1992.
[11] Barry Boehm and Frank Belz, "Applying Process Programming to the Spiral Model", Proceedings of the Fourth International Software Process Workshop, IEEE/ACM, May 1988.
[12] Software Engineering Institute, "The Capability Maturity Model: Guidelines for Improving the Software Process", Addison-Wesley, 1995.
[13] David B. Bisant and James R. Lyle, "A Two-Person Inspection Method to Improve Programming Productivity", IEEE Transactions on Software Engineering, Vol. 15, No. 10, October 1989, pp. 1294-1304.