SOFTWARE QUALITY ASSURANCE IN A PROJECT BASED ON RAPID EVOLUTIONARY PROTOTYPING METHODOLOGY JOZO DUJMOVIC San Francisco State University 1600 Holloway Avenue San Francisco, CA 94132, USA
ARÍSTIDES DASSO, ANA FUNES Universidad Nacional de San Luis Departamento Informática Ejército de los Andes 950 (5700) San Luis - Argentina
EDUARDO PETROLO Ministerio de Educación de la Nación Avenida Santa Fé 1548 piso 14 (1060) Buenos Aires – Argentina
DANIEL RIESCO, GERMÁN MONTEJANO Universidad Nacional de San Luis Universidad Nacional de Río Cuarto Enlace Ruta 6 y 36 km 603 (5800) Río Cuarto - Argentina
ROBERTO UZAL (contact with AoM / IAoM) Universidad Nacional de San Luis Departamento Informática Ejército de los Andes 950 (5700) San Luis - Argentina e-mail
[email protected] Fax: (54) (11) 4811-2363
ADRIANA ECHEVERRÍA Universidad de Buenos Aires Facultad de Ingenieria Paseo Colon 860 (1062) Buenos Aires – Argentina
Abstract

In this paper we discuss Software Quality Assurance (SQA) issues in the environment of an information system project based on rapid evolutionary prototyping. The information system project provides specialized software support for a sophisticated human resources management system in educational departments of Argentine provinces. This paper presents the general structure of this project and the benefits of a specific software development model. In this context we focus on SQA issues. The selection of organizational concepts, design methods, and related tools was primarily based on satisfying SQA goals. In particular, the rapid evolutionary prototyping approach helped in substantially reducing costly errors in early stages of system design and software development.
Key words: Rapid Evolutionary Development; Human Resources Information System; Software Quality Assurance.

1. The project environment

1.1 Administrative Reform Programs and the Educational Human Resources Management Project

Several administrative reform programs have recently been introduced in order to solve various structural and functional problems of government in Argentina. Most of these programs are supported by international financial institutions. Projects currently included in the Administrative Reform Program for the Ministries of Education of Argentine Provinces, and financially supported by the World Bank, are:

- Administrative processes reengineering [1].
- Human resources management improvement, including information technology support [2].
- Program oriented budgeting techniques.
- Executive information system.

This paper deals specifically with the human resources (HR) management improvement project. The project is managed by the Federal Ministry of Culture and Education of Argentina (FMCEA). The main goal of this project is to reform and improve the management of educational human resources for the participating provincial governments of Argentina. This improvement process is based on information technology support and is aimed at substantially reducing the current operational costs while increasing the spectrum and level of services offered to teachers, schools, and government.
In order to prepare administrative management for the use of modern information technology it was first necessary to develop new administrative and managerial solutions through a process of reengineering. These results are presented in [2] and show the high level of complexity of the HR management system. The complexity is caused by legal issues, various social protection issues, and a high level of interaction with the health protection and financial management systems. Reengineering of administrative processes is based on our methodology presented in [2], and supported by a specialized tool (Optima! by Micrografx, Inc.) which generates the process diagrams used to describe 32 basic administrative processes related to HR activities. For each administrative process the process diagram shows the activities of administrative departments necessary to complete the process, including all logical conditions and corresponding options.

The resulting HR information system is an effort of approximately 400 person-months. The first stages of the HR information system project were performed by an FMCEA project team. In the final stages of system development and implementation the project team is assisted by an independent software house. The selection of the software house was based on a standard World Bank evaluation procedure which includes a detailed specification of requirements, and a process of formal evaluation and selection of the most appropriate provider. This process was carried out in parallel with the work of the project team.

In this paper we primarily focus on information technology support for educational human resources management, and address the system development model issues and the strategy of SQA. However, our approach (based on a specific evolutionary development model) is rather general and applicable in many cases where a complex information system can be decomposed into a set of related subsystems and where these subsystems have to be implemented for a set of users.
1.2 Basic project requirements

The educational HR management project implies the satisfaction of the following requirements:

• Computer support to record and use CV information about thousands of teachers. This includes components like:
  − Personal and family data.
  − Data about education, degrees, professional skills, and special training.
  − Positions and performance evaluation (teaching career record).
  − Management of teacher applications in order to move from one career plan to another.
• Information technology support to manage information corresponding to hundreds of educational institutions.
• Computer facilities to budget HR needs to be able to cover educational requirements like:
  − Number of students at each educational level.
  − Geographical distribution of the student population.
  − Geographical distribution of the educational institutions.
  − Current educational programs.
  − Budget and control of the HR financing.
  − Improvement of the performance of the HR management systems (better serviceability).
  − Management of diverse teaching career plans (to manage transition situations).
  − Performing payroll processes with different criteria (also to manage transition situations).
  − Assigning teachers to educational institutions taking into account several optimizing criteria.
• Health management in the educational area.
• Computer aided tracking of documents related to human resources management.
• Information about infrastructure aspects (e.g. space management, buildings, etc.).
• Providing statistical information to improve the decision making process of governmental bodies.
• Support for managerial tasks of the educational institution authorities.
An initial effort of the FMCEA project team was to analyze and evaluate a spectrum of commercially available HR management systems to find out whether they could satisfy the specific needs of our users. Since these systems can only partially support the specific needs of the educational HR system, it was necessary to develop a specific solution, based on process reengineering and the use of modern information technology methods and tools.
1.3 The Rapid Evolutionary Development Approach

There are many models of software development. Compared with other information system development approaches, rapid evolutionary development [3,4] provides the following advantages:

• Achieving very effective communication between the parties involved in the project.
• Minimizing uncertainty and the risk associated with it.
• Increasing the ability to offer a permanent feedback process during system development.
• Incorporating a learning process scheme into the development process.
• Increasing the chance of discovering opportunities along the development process.
• Gaining competitive advantages through an 80/20 Pareto scheme (80% of results with 20% of resources spent).
• Reducing defects through continuous testing and system functionality evaluation.
• Increasing the users' ownership feeling and commitment.
• Easing the acceptance of the final product (information system).
To achieve these advantages the whole project must first be decomposed into subsystems. We identified 7 subsystems of the educational HR system:

1. Personal data (CV for all employees)
2. Payroll
3. Health management
4. Document tracking (current status of a process)
5. Budgeted positions
6. Schools and facilities
7. System administration (used by system administrators for management of the above 6 subsystems)
Subsystems #1, 2, 3, 5, and 6 include 32 administrative processes. In addition, each of them includes a "generic process", a free-format general purpose process used to handle special nonstandard requests. Subsystem #4 shows the current status of each of the 32 administrative processes (the user specifies a document identifier, and the document tracking subsystem shows the department that currently processes the document, the activity that is being performed, and the individuals involved in the current activity). In addition, it generates a spectrum of statistical indicators used for management decision making. Initially, each of the subsystems includes a subsystem administration module that provides a spectrum of services designed for the subsystem administrator. Some of these services are similar for all subsystems and deal with the same data base. At the end of the subsystem design process all nonredundant system administration services are integrated in the system administration subsystem #7.

Our approach to evolutionary development is based on a rapid prototyping model and contains the following main steps:

• Design efficient administrative processes using a tool-based reengineering approach.
• Develop a high-level "horizontal prototype" containing unified prototypes of all subsystems.
• Verify all high-level functionality with end users.
• Select a subset of subsystems and for them develop detailed "vertical prototypes" using a process of stepwise refinement under constant verification by a selected pilot user.
• Implement vertical prototypes in the pilot user environment and continue evolutionary refinements in close interaction with the pilot user(s).
• After successful verification of the previous steps, design and implement the final system.
Of course, after the iterative prototyping process, the evolution of the system moves in the right direction and the last step (design and implementation of the final system) is only a definitive incremental refinement of the "final prototype". Consequently, this step is well prepared and rather easy, with minimum risk involved. A more detailed (and more formal) presentation of our project development model is given in Section 2.
1.4 The current state of the project

At the time of writing this paper, the project is entering its final phase. Reengineering of all administrative processes, horizontal prototyping, and final vertical prototyping of four subsystems (#1, 2, 3, and 4) are completed, and the final prototypes of these subsystems are implemented in five provinces where they operate with real data and serve real users (of course, this assumed preparation of the communication infrastructure, installation of hardware, preparation of personnel, loading of the data base, data validation, and many other parallel activities).

A two-tier client-server architecture was used in our initial implementation, with two mirrored two-processor servers hosting MS SQL Server and the data base (extended relational data base model). Clients include GUI presentation software (implemented in Visual Basic) and the process logic that communicates with the data base server using the ODBC (Open DB Connectivity) communication infrastructure under the NT operating system.

System design benefited from the use of tools. ERwin/ERX 2.5 for Visual Basic from Logic Works, Inc. has been used to generate the suggested conceptual data base scheme. ERwin/ERX is a CASE tool based on the IDEF1X (Zachman) methodology [5]. This CASE tool supports concepts like client-server architecture, data base scheme generation, and business rules implementing cardinality, referential integrity, stored procedures, and triggers. ERwin/ERX 2.5 for Visual Basic improved the project team performance in requirement specification tasks and contributed to the communication with potential software providers and to project documentation quality assurance.

At this time the GUI prototype is complete, prototypes of the principal applications are working in the real implementation environment, and the selection of the service provider has been successfully completed. SQA tools from Rational Software [6,7,8] are currently used as a part of system testing and other SQA activities.
2. Software development by combined horizontal and vertical stepwise refinement of prototypes

The selected project development model substantially affects SQA issues. Our project development model is a specific version of rapid evolutionary prototyping designed to minimize both the delivery time and the risk of design errors, quickly yielding reliable operational software. The approach is general and applicable wherever a complex system has M subsystems and K users (M≥1, K≥1). It has a high level of parallelism consistent with object-oriented life cycles described in [4].

We first develop a high-level horizontal prototype which properly specifies the basic functionality of all M subsystems. This develops mutual understanding between designers and users as well as confidence in the requirements specification. The horizontal prototype specifies functionality, GUI screens, interfacing of modules, and order of execution, and works with sample data. Once the horizontal prototype is accepted we select some pivot subsystems and develop them through a stepwise refinement of prototypes. The subsystems are immediately implemented at selected user sites in order to provide verification and feedback. Vertical prototypes work with real data. Since the number of pivot subsystems is small (sometimes it can be only one), and the development process contains a high level of parallelism, the development time is minimized. Following is a formal description of our software development model.
    Process Reengineering {preparation for information system design};
    Repeat {Horizontal prototype design by stepwise refinement}
        Requirements specification;
        Data design;
        Functional design;
        GUI design
    Until (User verification successful [3 to 4 times]);
    for subsystem = 1 to 7 do {System design by vertical stepwise refinement of selected subsystem prototypes}
        parallel_begin
            Select the subsystem prototype and subschema;
            Prepare a test database containing real data;
            Repeat {Evolutionary development of the selected subsystem}
                Data design update;
                Module design;
                GUI refinement;
                Coding;
                Testing (functions and performance)
            Until (User verification successful [7 to 8 times]);
            for province = 1 to all_provinces do {Selective implementation}
                if province in set_of_participating_provinces then
                    parallel_begin {Implementation in a selected province}
                        Hardware installation (first time only);
                        Subsystem software installation;
                        Loading data base (first time only);
                        Training of system consultants (responsible for training operational personnel);
                        Training of operational personnel;
                        Acceptance testing;
                        System update and adjustment
                    parallel_end
        parallel_end
    Operation and maintenance

The block of operations between parallel_begin and parallel_end is initiated, and the sequential realization of the specified operations is performed in parallel with the sequential realization of other selected subsystems (i.e. the subsystem loop can proceed without waiting for completion of all operations inside the loop). Similarly, the implementation in a selected province can be performed in parallel with implementations in other provinces. This yields a substantial level of parallelism in the processes of subsystem design and implementation.

Development speed and permanent control of software quality are the primary reasons for selecting the above approach. It is well known that the costs of correcting errors made in the initial phases of the software development cycle (requirements specification, data design, and functional design) can be two orders of magnitude larger than the cost of correcting errors made in the final stages of software development. The costly errors in initial phases are usually caused by differences between user expectations and specifications based on the designer's understanding of user requirements. Our approach, which includes prototyping and evolutionary loops with permanent (low granularity) user verification, practically eliminates these types of errors. However, this approach requires both responsive users and designers that are especially prepared and motivated to go through many repetitions of the same procedure.
Numbers in brackets show average values of the necessary number of repetitions and illustrate a very intensive user-designer interaction, which eliminates design errors before their dimension becomes significant. In addition, various aspects of testing are distributed throughout the whole process. They are presented in more detail in the next section.
3. Testing and measurement

Theoreticians claim that software testing can only be used to prove the presence of errors, not their absence. The proposed alternative is to provide correctness proofs which derive programs similarly to mathematical formulas, so that each step of the derivation comes with a corresponding correctness proof. Correctness proofs are formal mathematical verification that a product is correct. They differ from testing because no code execution is involved. Unfortunately, correctness proofs are regularly much longer than the programs whose correctness should be proved, and such techniques don't have the necessary industrial strength to be consistently used in the industrial production of thousands of lines of code. Therefore, the quality of software must still be validated through a process of systematic and (as much as possible) automated testing. This approach cannot provide a proof of correctness, but if expertly applied it can dramatically reduce the probability of errors in the final software product. In other words, quality testing increases the reliability of software products to levels that can satisfy rigorous industrial standards. A traditional software testing program includes the following components:

• unit testing
• module testing
• integration testing
• functional testing
• performance testing
• acceptance testing
3.1 Basic testing procedures

Unit testing is the individual testing of each elementary program unit, such as a function or a procedure. A traditional method of unit testing is based on the concept of cyclomatic complexity. The cyclomatic complexity of a program can be defined as the number of predicates in the program plus 1. Predicates are defined as logic conditions (relations that, depending on values of variables, can be either true or false). For many programming languages there are tools for reliable measurement of cyclomatic complexity. Cyclomatic complexity is the number of independent paths in a flow chart; this is also the number of tests that must be performed to make sure that all program statements are executed at least once. This type of testing is also called exhaustive testing. It is a kind of white-box testing, because it needs a detailed knowledge of the internal program logic.

Module testing. Several software units are used to design a software module. Module testing is primarily oriented towards testing the quality of unit integration and module functionality. Functionality testing is a form of black-box testing which can be partially or completely automated using special tools. However, module testing is a destructive process which cannot be done by the software engineers who built the module. It is one of the tasks of the SQA group. According to B. Beizer, "bugs lurk in corners and congregate at boundaries". Similarly to unit testing, module testing is assumed to focus on "corners and boundaries", by focusing on values for which predicates change their logical values, and verifying that the consequences of these changes are correct.

Integration testing. All software design methods solve the fundamental problem of how to systematically and efficiently deal with complexity. Therefore, all systems consist of subsystems and it is important to properly evaluate the assembling of applications from the set of components that were developed and tested separately.
Integration testing involves testing collections of modules and it is done incrementally, on progressively larger sets of modules, from small subsystems until the entire system is built. The focus of integration testing is on the compatibility of the components being integrated (compatibility of the type and number of objects transferred from component to component, and from level to level). The emphasis is on testing the correctness of the transfer of data objects between each module and its environment.

Functional testing is the most important final stage of testing a software product for conformity with the initial requests. It is assumed to be realized using automated testing tools which automatically (according to a script) perform an exhaustive set of black-box tests and generate the corresponding reports. This process supports verification and validation of software. Verification is an assessment of whether the system performs correctly with respect to the stated requirements. Validation refers to an assessment of how well the product responds to the needs of the customer.

Functional testing is sometimes expanded through "alpha testing", which is the testing of software on running applications under realistic conditions (i.e. actual use) but within the developing organization. During alpha testing the underlying assumption is that we deal with understanding and forgiving users, who tolerate and promptly report all detected problems. In our case the rapid prototyping team enjoyed the full support of the government in its province (San Luis) and that was the place where we performed complete alpha testing. The next phase is usually a "beta test", which is testing with actual pilot users. It is performed with a selected group of real customers prior to the official release of the software. The purpose is to make a controlled experiment where the feedback from the selected users is used to determine whether changes are necessary prior to the official release.
Beta testing was performed using vertical prototypes in 5 provinces. Rapid prototyping as a software development method has the property that both alpha testing and beta testing become a natural component of the development process and not dramatic, stressful, and uncertain exams. In addition to standard testing we indirectly contributed to SQA by a redundant hardware organization and by mirrored data operations that promote fault tolerance and operational data security.

3.2 Performance testing

Performance testing is organized to make sure that the analyzed system satisfies several important conditions:

• specified minimum level of performance, expressed as the maximum acceptable value of the average response time
• load testing conditions (conditions related to measuring the average response time as a function of an increasing number N of simultaneous users); these conditions can be expressed through a desired response time curve, or through parameters defining asymptotes of the response time curve
• specified minimum value of the critical number of users Ncrit (the number of users that can be supported by multiple clients and a server; it is one of the parameters derived through load testing)
• passing the stability test; the stability test consists of simulating a large number of users over a substantial time period, in order to detect system crashes and similar malfunctions caused by memory leakage and similar cumulative accidental phenomena

In addition to satisfying the above conditions, performance testing can be used to develop the following:

• performance tuning recommendations
• capacity planning recommendations
• error checking and recovery recommendations
To define performance testing specifications we must first define basic performance indicators of the analyzed client-server system. Let Z denote the average user think time. The think time includes the user's analysis of results presented on the screen and the manual input of a new request. When the request is submitted it will first wait in the server's queue; then it will receive the necessary service and generate a new screen of results. Let the average service time be S. At the point when the system becomes saturated serving the critical number Ncrit of users, the service of Ncrit − 1 requests must be completed during the average think time Z. From (Ncrit − 1)S = Z we compute the critical number of users: Ncrit = Z/S + 1. Obviously, Ncrit depends on both the think time (a characteristic of a specific type of user) and the service time (a characteristic of the selected hardware/software system). One of the goals of performance testing is to determine the critical number of users, Ncrit. This parameter can also be determined as the knee of the response time characteristic R(N,S,Z), where N denotes the current number of users:

    R(N,S,Z) ≅ S,                            N < Ncrit
    R(N,S,Z) ≅ S(N − Ncrit + 1) = SN − Z,    N > Ncrit
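The saturation model above can be sketched in a few lines of Python. The numeric values of Z and S below are purely illustrative, not measurements from this project:

```python
def critical_users(Z: float, S: float) -> float:
    # Ncrit = Z/S + 1: the saturation point of the client-server system
    return Z / S + 1

def response_time(N: int, S: float, Z: float) -> float:
    # Piecewise model: R ~ S below saturation, R ~ S*N - Z above it
    Ncrit = critical_users(Z, S)
    if N < Ncrit:
        return S
    return S * (N - Ncrit + 1)  # equals S*N - Z

Z, S = 30.0, 2.0                 # think time 30 s, service time 2 s (illustrative)
print(critical_users(Z, S))      # 16.0
print(response_time(10, S, Z))   # 2.0 (below saturation)
print(response_time(20, S, Z))   # 10.0 = S*20 - Z
```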
Therefore, the specification of performance testing must be based on giving the values of all necessary parameters. Here are examples of suggested typical performance testing requirements:

• Load test: measure the response time curve R(N,S,Z) for a specific range of N and for a specific value of Z (e.g. N = 0 to 20).
• Stability test: show that the system can run Ndays days with Ntest simulated users, without interrupts and without any registered error (e.g. Ndays = 5 days, Ntest = 20 users).
• In all measurements use a slow modem connection (e.g. 28.8 Kbd) between the client and the server system.
• Provide performance tuning recommendations.
• Provide capacity planning recommendations.
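Before measuring the real system, the expected shape of the load-test curve can be explored with a small closed-system simulation. This is a hedged sketch (a single FIFO server with deterministic service and think times), not a substitute for an automated load-testing tool:

```python
import random
import statistics

def simulate_load(n_users, S, Z, horizon=10_000.0):
    """Closed system: n_users repeatedly think for Z, then queue for a
    single FIFO server with service time S. Returns the average response
    time, which should follow R ~ S below saturation and R ~ S*N - Z above."""
    submit = [random.uniform(0, Z) for _ in range(n_users)]  # next submit times
    server_free = 0.0
    responses = []
    t = 0.0
    while t < horizon:
        u = min(range(n_users), key=submit.__getitem__)  # earliest submitter
        t = submit[u]
        start = max(t, server_free)          # wait in the server queue
        server_free = start + S              # receive service
        responses.append(server_free - t)    # response = wait + service
        submit[u] = server_free + Z          # think, then submit again
    return statistics.mean(responses)

random.seed(1)
print(simulate_load(2, S=1.0, Z=9.0))   # ~1.0: N below Ncrit = Z/S + 1 = 10
print(simulate_load(20, S=1.0, Z=9.0))  # ~11.0: N above Ncrit, R ~ S*N - Z
```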
Obviously, the presented measurements cannot be manually performed. It is necessary to use specialized automated performance testing tools. The traditional client-server model (Fig. 1) assumes a desktop system which includes data input/presentation and application logic, and a server system providing efficient data storage and data access capabilities, usually a DBMS. When the user submits an input request, X0, the input and presentation subsystem generates input(s) for the application logic, and the application logic then generates input request(s) X1 for the data access subsystem. The output from the data access subsystem is denoted Y1. Then the presentation subsystem generates the final output Y0. In the case of using advanced performance analysis tools, a Record and Replay Tool (RRT) creates a log file containing a sequence of inputs X1, outputs Y1, and corresponding server response times.
[Figure 1: USER issues X0 to the CLIENT (data input and presentation; application logic), which sends X1 to the SERVER (data access/DBMS; data base); Y1 returns to the client and Y0 to the user; the RRT records X1, Y1 and the times Tc, Tcs, Ts into a log file.]
Figure 1. Client-server system

The user's average response time (from X0 to Y0) is R = Tc + Tcs + Ts, where Tc denotes the fraction of R spent in client processing, Tcs is the time for data transfer between the client and the server, and Ts is the average time the server needs to process a client request. Furthermore, R = Tc(user) + Tc(system) + Tc(wait) = Ts(user) + Ts(system) + Ts(wait), where user times denote the processor activity for the user's program, and system times denote the processor activity working for the operating system (as a part of user requests). The wait time is the time the processor was either idle or performing operating system overhead operations. In the case of the client the service time Tc(user) + Tc(system) corresponds to a single user, while in the case of the server the service time Ts(user) + Ts(system) corresponds to serving all active users. The most important performance indicators of this system are the response time R, its components Tc, Tcs, Ts, the utilization of the client Uc = (Tc(user) + Tc(system))/R, and the utilization of the server Us = (Ts(user) + Ts(system))/R. In addition, the ratios Pc = Tc/R, Pcs = Tcs/R, and Ps = Ts/R can be used as indicators of subsystem utilization, where the highest value of P denotes the bottleneck component of the system. These performance indicators can be easily interpreted, but their measurement is not easy. Our approach to measuring the performance indicators includes a combination of three techniques:
• Measurements inside application programs.
• External measurements by standard performance monitors.
• External measurements using advanced tools.
Measurements inside application programs are based on instrumenting the application software with instructions which record various event times and the size of free memory. The recorded data are used during software development as a part of the SQA program, and we found these results very useful for early detection of some functional and performance problems. Unfortunately, in many cases user programs only initiate operations that are then carried out by the operating system asynchronously, without the user's control. Such operations can start any time after they are initiated, and such events cannot be correctly recorded by instrumenting user programs. This reduces the accuracy of measured performance indicators and creates a need for special measurement tools.

The most frequent external measurement tool is a performance monitor, which is used for measuring the utilization of system resources and for capacity planning. These indicators are useful for detecting excessive use of resources by inefficient programs and for comparing the behavior of software executed in different hardware environments. However, software monitors cannot measure response times and discover saturation points, detect nonlinear phenomena related to caching and optimization algorithms, or automatically study the behavior of the system under artificially generated
variable load. These experiments can only be performed using advanced performance analysis tools (e.g. [8]). The concept and the role of such tools are shown in Figures 1 and 2.
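A minimal sketch of the first technique, in-program instrumentation: event timestamps are recorded at a few fixed points and the indicators R, Ts, Tc, Ps, Pc defined above are derived from them. The event labels are our own illustrative convention, and, as noted above, asynchronous operating system activity remains invisible to this kind of measurement:

```python
import time

class Instrument:
    """Record labeled event times inside the application and derive
    simple performance indicators (a sketch; labels are illustrative)."""
    def __init__(self):
        self.events = []

    def mark(self, label, ts=None):
        # ts is normally omitted (live measurement); passing it explicitly
        # allows offline analysis of previously recorded timestamps
        self.events.append((label, time.perf_counter() if ts is None else ts))

    def indicators(self):
        t = dict(self.events)
        R = t["reply_shown"] - t["request_submitted"]   # user response time
        Ts = t["server_done"] - t["server_start"]       # server service part
        Tc = R - Ts         # client processing + transfer, lumped together
        return {"R": R, "Ts": Ts, "Tc": Tc, "Ps": Ts / R, "Pc": Tc / R}

m = Instrument()
m.mark("request_submitted", 0.0)   # synthetic timestamps for illustration
m.mark("server_start", 0.2)
m.mark("server_done", 0.7)
m.mark("reply_shown", 1.0)
print(m.indicators())  # R=1.0, Ts=0.5, Tc=0.5 -> Ps = Pc = 0.5
```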
[Figure 2: the ANALYST CONSOLE controls the RRT, which replays transactions X1 from the LOG FILE against the SERVER (data access/DBMS; data base) and records the outputs Y1 and response times R1.]

Figure 2. Benchmarking the server subsystem
Using the RRT, the benchmarking of the server subsystem consists of two phases: (1) the record phase, and (2) the replay phase. The record phase (Fig. 1) consists of creating a log file containing a sequence of actual transactions and recorded response times: X1[i], Y1[i], R1[i], i = 1,...,n. The replay phase consists of benchmarking the server using various flows of transactions generated by the RRT. A specific type of transactions can be selected from an analyst console (Fig. 2). The RRT can replay the transactions X1[1],...,X1[n], measure the response times R1[1],...,R1[n], and compute the average response time R1 = (R1[1]+...+R1[n])/n. This measurement can be done systematically under different rates of submitting the transactions X1[1],...,X1[n]. This is the way to detect nonlinear phenomena and saturation points, and to measure the critical number of clients a server can support. These analyses are very important for capacity planning.

In order to enforce SQA results for all parties involved in this project, our procedures include a serious acceptance test which is scheduled as a part of the system implementation process. The acceptance test includes the final verification of functionality and fault tolerance, as well as system stability, performance, and load tests.
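The replay phase can be sketched as follows; here `send` stands for whatever function submits one logged transaction X1[i] and blocks until its reply Y1[i] arrives (an assumption of this sketch, not the interface of any particular RRT product):

```python
import time

def replay(log, send, rate):
    """Replay logged transactions at a given submission rate (transactions
    per second) and return the average response time R1 = mean of R1[i]."""
    response_times = []
    for x1 in log:
        start = time.perf_counter()
        send(x1)                                   # replayed request X1[i]
        r1 = time.perf_counter() - start           # measured R1[i]
        response_times.append(r1)
        time.sleep(max(0.0, 1.0 / rate - r1))      # pace the submissions
    return sum(response_times) / len(response_times)

# Sweeping the rate reveals the knee of the response time curve, e.g.:
# for rate in (1, 2, 5, 10, 20):
#     print(rate, replay(logged_transactions, send, rate))
```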
4. Tools for evolutionary software development

Information system projects of this complexity cannot be developed without extensive use of appropriate tools, and without advanced operating system support. Following is a list of supporting tools and environmental components:

• Process reengineering tool
• Data design tool
• GUI design tool/environment
• Advanced testing tool
• Performance testing and analysis tool
• Version control tool
• Data communication support
• Data base management system
• Operating system supporting advanced process management
The tools impose strict design rules and directly or indirectly promote SQA. For process reengineering we used our methodology [2], supported by Optima! (Micrografx, Inc.). Data design was supported by ERwin/ERX 2.5. We used MS Visual Basic for GUI design, ODBC for communication support, and MS SQL Server for database support, all under the NT operating system. Advanced testing tools support systematic testing of program components and provide automated technology that covers SQA issues during the entire life cycle. In particular, we need automated record/playback technology and facilities
for comprehensive testing of Windows objects (including custom objects) and components. Rational SQA Suite [6] has been selected to support testing from planning and design through software development and deployment.

In the area of performance testing we need a tool that supports recording (automatically generating scripts ready to handle multiple transactions), scheduling (automatically creating multiuser workloads with appropriate synchronization), and playback (automatically making virtual user activity similar to real human operator activity and computing performance indicators for load tests, capacity planning, and other analyses). For performance testing we have selected Rational Performance Studio [8].

Configuration and change management is a serious challenge in any evolutionary prototyping project. In this environment software development is a dynamic and increasingly complex process. In our case this complexity takes many forms, including an external software provider team, the geographical distribution of the development sites, and the great number of product releases generated by refinement of prototypes and by changing conditions in the implementation environments. We needed a tool that supports project coordination, version control of every software module, disaster recovery, software maintenance, display of version evolution history, version directories, subdirectories and all file system objects, easy access to all version objects, version selection based on rules, and reconstruction of past configurations. To support configuration and change management, we selected ClearCase from Rational Software [7].
5. The use of the Process Maturity Framework

An organizational framework specifically designed to promote software quality is the Capability Maturity Model (CMM) [9,3], developed by the Software Engineering Institute (SEI). SEI defines five levels of software process maturity:

1. Initial: the software development process is ad hoc and close to chaotic. Success depends on individual effort.
2. Repeatable (based on strict project management): cost tracking, schedule, and functionality definitions are established.
3. Defined (based on methods and tools): the software development process is documented, standardized, and integrated into a standard software process.
4. Managed (based on the use of standard quality metrics): detailed measures of the software process and quality assurance are collected. The product and the process are quantitatively understood and controlled.
5. Optimized (based on continuous quality improvement): continuous process improvement is enabled using measurable feedback. Innovative ideas and new technologies are included in the software process.
CMM comes with a set of goals which have to be attained at each of the above levels [9]. These goals can be used to develop a questionnaire for the assessment of current software development practice [3]. Both the goals and the assessment procedure can be used as guidelines for the development of techniques that yield increased software quality. Following is the list of items we adopted following the CMM guidelines:

• A life cycle model for the project has been defined.
• Project goals, intermediate goals, and policies have been defined.
• The project schedule and budget are under strict control.
• A project auditor is engaged to investigate SQA and performance issues.
• Standards have been adopted for the project documentation.
• An independent SQA group has been organized.
• Detailed measures of the software development process and of intermediate software product quality are collected continuously.
• Continuous development improvement is enabled by quantitative feedback. The main source of this feedback is the operation of the subsystems' evolutionary prototypes by the real end users.
• Substantial time and resources are committed to permanent training of the project team, particularly in the use of modern software tools and related design techniques.
6. Conclusions

In this paper we presented SQA as a comprehensive program that includes a spectrum of techniques distributed over the whole life cycle of a software product. The first contribution to SQA is the reengineering of information processes in a way that is efficient and consistent with the current state of technology. We found that process reengineering, followed by software development using combined horizontal and vertical stepwise refinement of prototypes (the rapid evolutionary prototyping methodology), significantly contributes to the elimination of expensive design errors. Through rapid prototyping it is possible to keep communication channels between software designers and end users constantly open, and to make a continuous transition from a high-level prototype to a reliable finalized system.

Horizontal and vertical prototyping eliminates major requirements and design errors, but does not eliminate errors in detailed software design. Therefore, it is necessary to properly implement all components of traditional testing (unit, module, integration, and functionality). Functionality and performance of the final product are verified using an acceptance test. Our SQA standards and procedures are derived from CMM guidelines.

Performance is considered a key component of software quality, and we discussed all relevant aspects of performance measurement and analysis. That includes tests of load, capacity, stability, and performance level. It also includes permanent measurement and monitoring of selected performance indicators (response times, resource utilizations, throughputs, etc.). Both the software design and SQA are based on extensive use of specialized design, development, and performance measurement tools. Additional support comes from procedures that provide operational data security and fault tolerance.

In this project SQA standards have an additional function.
In the final stages of system development the original project team is assisted by an independent software house. Therefore, SQA standards help guide both the contractor and the SQA group in keeping track of the quality of the developed software and in guaranteeing that the final product will meet the same quality standards across all its implementations in every participating province. Our project is currently in its final stage. Our experiences with the presented evolutionary prototyping development method and with distributed SQA activities have been very positive: we managed to keep errors and rework at a minimum, while satisfying end users at the level of very positive, and sometimes enthusiastic, acceptance.
References

[1] Petrolo, E., Uzal, R. et al., "Rapid Evolutionary Prototyping of Data Base Applications." Proceedings of the IASTED Software Engineering Conference, Las Vegas, 1998.
[2] Petrolo, E. et al., "Optimization of the educational administrative management in thirteen Argentine Provinces using reengineering of process." IASTED CATE'98 Proceedings.
[3] Arthur, Lowell Jay, Rapid Evolutionary Development. Wiley, 1992.
[4] Connell, J. and L. Shafer, Object-Oriented Rapid Prototyping. Yourdon Press / Prentice Hall, 1995.
[5] Bruce, Thomas, Designing Quality Databases with IDEF1X Information Models. Dorset House, 1990.
[6] Rational Software, "SQA Suite." http://www.rational.com/products/sqa/prodinfo/index.jtmpl
[7] Rational Software, "ClearCase." http://www.rational.com/products/clearcase/prodinfo/index.jtmpl
[8] Rational Software, "Performance Studio." http://www.rational.com/products/pstudio/prodinfo/index.jtmpl
[9] Paulk, M.C. et al., "Capability Maturity Model for Software, Version 1.1." Software Engineering Institute, CMU/SEI-93-TR-024, ESC-TR-93-177, February 1993.