An experimental approach to O-O Software Process and Product Measurements

Sita Ramakrishnan
Department of Software Development, Monash University
PO Box 197, Caulfield East, Australia 3145
email: [email protected]

May 16, 1995

Abstract

This paper reports on an ongoing software measurement experiment which has been set up to monitor and evaluate team-based student projects on a progressive basis. Student Assessment - Tutorial Ongoing Profile Sheets (SA-TOPS) have been used to provide more objective feedback to improve students' learning and to measure their understanding for grading purposes. The objective of the measurement is to improve the quality of the software process and product by measuring the activities of a scenario-based development of an Object-Oriented system by various student teams.

1 Introduction

Software process improvement using a top-down process such as the Capability Maturity Model (CMM) and a bottom-up process such as the Quality Improvement Paradigm (QIP) [16] has been promoted by the Software Engineering Institute and the NASA Software Engineering Laboratory respectively. These improvement concepts are important in academic settings as well [1]. Integrating a well organised software process with software measurement, with the aim of improving the quality of the process and product, is a valid experiment [2] for objectively assessing students' projects [7]. Software measurement is a hot topic of research, as is evident from the world-wide web entries (1994) on O-O metrics by Whitty of South Bank University. I also share the view expressed by the panel members of OOIS'94 that proper mechanisms for educating new entrants to the O-O paradigm are vitally important for this new wave of development to become mainstream. Teaching a measurement-based quality improvement approach for developing O-O projects is a step in the right direction. This paper outlines the motivation behind the experiment in Section 2. Section 3 discusses an experimental design for assessing O-O processes and products using a measurement experiment; this section also covers the inspection process in detail. Section 4 deals with the details of the project monitoring, evaluation and feedback mechanism in an educational setting, although this should apply to real-world situations as well. Finally, some initial observations and conclusions are given.

2 Motivation

I have been teaching a third-year, one-semester undergraduate unit called Object-Oriented Programming Systems (OOPS) at Monash University since 1991. Students doing the OOPS unit have in the past been asked to work on a set of small BANK exercises to master O-O terminology and apply it to implement the BANK exercises. The set of exercises was given as tutorial laboratory tasks after the topics had been covered in lectures. This was followed by two major assignments: in the first, the analysis and high-level design were given to the students, who were required to work in groups of three to produce a working system in seven (elapsed) weeks; the second was to produce design diagrams and documentation for a given problem. Although students were working from the same high-level design, which had been produced by "experts", the quality of the implemented product varied dramatically between student groups. This resulted either from team deliberations going on too long without any detailed monitoring/controlling mechanism, or from non-committed or weak members in a group. A detailed design presentation of the project made by student teams in week 7 and the final submission in week 13 were the only deliverables that were assessed. The organisation of the software process was ad hoc and at the initial process level of the SEI CMM [10]. I introduced project management processes to elevate the project to a repeatable experiment. I decided to test my hypothesis using a controlled experiment and evaluate the relationship between a development process and product quality [13]. This experiment has been set up to test the hypothesis that a scenario-based development using the Business Object Notation (BON) methodology [17], undertaken with a formal inspection process for tracking defects and with testing, can improve the process and the product.

3 Measurement and experimentation

3.1 Experimental Design

An empirical framework which includes the statistical principles of formal measurement in a software engineering setting [3] has been used for the software measurement experiment. This framework requires the experiment to follow statistical and software development principles. As a number of student teams will be developing the same project, the experiment is an example of a replicated project, where the basic experiment is repeated by various teams, and it therefore satisfies the principles of experimental design. The scope of the experiment has been narrowed by providing the students with a high-level design for the assignment; a wide scope may dilute the results of the initial experiment [13]. As testing and inspection are empirical activities, they are used as valid areas for software measurement and experimentation [18]. The software process and products are assessed in this experiment by applying various techniques to the metrics data collected. Software is measured to check its quality at various stages of its life-cycle. Process assessment aims to analyse the process for process improvements. Product assessment is meant to address software quality based on Boehm's quality tree criteria [4]. Software development would benefit from engineering software systems using quality tractable models of requirements, processes and products [3]. The experiment in this study involves seventeen student teams (in groups of three) following a prescribed process for an incremental, iterative development to implement a Conference Management System. They are required to use the same high-level design, produce the low-level design, hold organised inspection meetings to review the team members' work, and document the defects using the given classification codes [15]. This is followed by coding and inspection, and, if required, changes are fed back to the design phase.
The test plan covers black-box or specification testing, program testing, and white-box and integration testing [11]. The inspection process is used to detect, record and correct the errors captured at various phases of the software life-cycle. This process is also used to gather statistics about the time taken to detect and record the errors. Testing captures the errors that went undetected during inspection. The emphasis of the process model prescribed for this measurement experiment is on producing Gantt charts for project management information, creating test plans that include class (program) and integration (system) testing, and tracking defects found during inspection. These activities will give students insight into a scientific and engineering approach to building software systems [18]. It is meaningless to collect these data if they are not properly analysed and used to refine or improve the quality of the process and product. Review sessions are crucial for maintaining a mutual, shared understanding of the state of the project [6]. The inspection process is used to track defects and can be used to arrest any common patterns of errors that may arise from misunderstanding of concepts. An important difference between traditional software systems development and Object-Oriented systems development is in the area of project management. The recommended incremental, iterative, cluster-based development for O-O software systems forces the development team to move through the various phases of the software life-cycle during the development of each cluster. Students are required to estimate the time required to complete these activities and track the actual time taken, so that they can measure the resources spent by the various members of a group and also gain an appreciation of the difficulty of project cost and time estimation [18]. These statistics are also used to compare various student groups' performance and help in arriving at a more objective measure for their grades.
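The estimate-versus-actual bookkeeping described above could be recorded along the following lines. This is only an illustrative sketch, not part of the original experiment: the task names, numbers and the `effort_report` helper are invented for the example.

```python
# Hypothetical sketch of estimate-vs-actual effort tracking for one team.
# Task names and hour figures are invented; only the shape of the
# measurement (per-task estimation error) reflects the text above.

def effort_report(tasks):
    """For each (name, estimated, actual) task, compute the relative
    estimation error: positive values indicate an overrun."""
    report = []
    for name, estimated, actual in tasks:
        error = (actual - estimated) / estimated
        report.append((name, estimated, actual, round(error, 2)))
    return report

tasks = [
    ("low-level design", 10.0, 14.0),
    ("inspection",        3.0,  2.5),
    ("coding",           12.0, 18.0),
]
for name, est, act, err in effort_report(tasks):
    print(f"{name:18s} est={est:5.1f}h  act={act:5.1f}h  error={err:+.0%}")
```

Summaries like this make it straightforward to compare groups' estimation accuracy across scenarios, which is the feedback the text describes.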

3.2 Software Inspection Process

One of the important software engineering principles is that all the work products of a software development, such as requirements models, high-level design, low-level design, code, test plans and others, must be reviewed using a formal procedure. Michael Fagan instituted this formal walkthrough at IBM in 1972 to improve the inspection process and increase the productivity of programmers. The inspection process also aimed to improve the quality of the product: software quality. Software inspection is a static testing method and can be applied to all phases of development. The process is used to verify that the output requirements (exit criteria) of a stage have been met. The project Gantt chart shows these work products as milestones and can be used as an objective measure. The Gantt chart includes the estimates and schedules for the tasks and subtasks in the planning, design, implementation, inspection, testing and documentation phases. The exit criteria must be measurable quantitatively: for example, any of the quality criteria specified [12] for the work product must be verified as being met as part of the inspection process, and any defects found must be reported according to the given classification [15]. The defect classification types help the developers/inspectors to identify and classify the defects found during inspection. Inspections are conducted as a peer-group formal review of work products at scheduled times. The inspection process must be repeatable and must collect various measurement data, so that the process can be used to monitor and provide feedback to the teams and encourage best quality practice. Previously, defects were discovered mainly during the testing phase. Inspections have proven to detect more defects in the product at lower cost than machine testing, and require fewer resources for rework.
The developers must adhere to the inspection process that has been spelt out for a project, but may fine-tune it to improve the process. Any change to the process must be approved by the project manager (the lecturer in our case). All the inspection process documentation (textual, graphs and so on) is produced in Word 6 for Windows and Excel, as these are available on the university network. An inspection method is followed during the process of software development by checking for defects at each stage of the product life-cycle. The products are verified during their development with team members to ensure that they conform to the design and are correct. The results of the inspection are analysed to feed back into and control the production process, and to improve the process by reducing the defect rate. The inspection process has checkpoints which spell out when a product is ready to enter and exit the inspection procedure. Inspection meetings are formal, with the participants coming prepared to identify the defects and all three inspectors of the team (author, moderator and recorder) present at the meetings. The author has to fix the defects, and the verification of the "fix" is conducted and controlled by the inspection process. The moderator describes the work product for inspection; the recorder acts as the scribe for the inspection and records the defects on the inspection defect list. The moderator is the leader of the inspection process and should ensure that the time is spent efficiently, with the effort focused on finding defects. The inspectors should use the meeting to read the work product and focus on error detection, not on working out solutions. The author's revisions are formally verified in a follow-up inspection, where the rework/changes are verified by checking that the defects identified in the earlier inspection have been corrected and that the work product meets the inspection exit criteria [8]. An inspection meeting is arranged through email, inviting team members to meet at a certain time and place for a certain duration. An inspection defect list is prepared by the recorder to classify and document the problems found during the meeting, and by the author during rework. An inspection defect summary is produced by the moderator at the end of the meeting to report the number of defects per classification. An inspection report is produced for the tutor/lecturer to certify that inspection is complete. The O-O project tracking graphs produced as part of the experiment to track deliverables are: classes against time, number of methods against time, Non-Commented Source Code size (class size) against time, number of defects per hour of inspection time, and number of defects per hour of testing time [9].
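As an illustration of the inspection bookkeeping described above, a defect list entry, the moderator's per-classification summary and the defects-per-hour rate could be sketched as follows. The classification codes, class names and numbers below are hypothetical placeholders, not the actual codes of [15].

```python
from collections import Counter
from dataclasses import dataclass

# Sketch of an inspection defect list, the moderator's defect summary and
# the defects-per-hour rate used in the tracking graphs. All codes and
# example data are hypothetical, not taken from the paper or from [15].

@dataclass
class Defect:
    work_product: str    # e.g. "low-level design", "code"
    location: str        # where in the work product the defect was found
    classification: str  # hypothetical code, e.g. "LOGIC", "INTERFACE"
    severity: str        # e.g. "major" or "minor"

def defect_summary(defects):
    """Number of defects per classification, as reported by the moderator
    at the end of an inspection meeting."""
    return Counter(d.classification for d in defects)

def defects_per_hour(defects, hours):
    """Defect detection rate for one activity (inspection or testing)."""
    if hours <= 0:
        raise ValueError("hours must be positive")
    return len(defects) / hours

defects = [
    Defect("code", "class Paper, method accept", "LOGIC", "major"),
    Defect("code", "class Session, precondition", "INTERFACE", "minor"),
    Defect("design", "cluster diagram", "LOGIC", "minor"),
]
print(defect_summary(defects))         # Counter({'LOGIC': 2, 'INTERFACE': 1})
print(defects_per_hour(defects, 1.5))  # 2.0
```

Comparing `defects_per_hour` for inspection against the same rate for testing is one way to plot the last two graphs listed above.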

4 Project Monitoring, Evaluation and Feedback Mechanism

4.1 Educational Objectives

The subject has evolved over the years, and this semester I have introduced a detailed software process to monitor the various activities of a team-based project. This software process experiment on measurement pays due regard to the educational objectives discussed by Bloom [5]: mastery of knowledge, comprehension, application, analysis, synthesis and evaluation. These levels of mastery of a topic, ranging from simple object-oriented terminology to the ability to synthesise and evaluate a project, are being measured using Student Assessment - Tutorial Ongoing Profile Sheets (SA-TOPS). With the introduction of SA-TOPS, students have been progressively assessed from week 3 of the course as per Bloom's educational objectives. The assessment "carrot" has become the invisible stick, and students have been assessed using a prescribed process, leading to more controlled progressive monitoring of their project (assignment) work.

4.2 The Experiment

The experimental method has been adopted from [2, 3, 9, 10, 11, 13, 18]. Students are given an assignment about a conference management system, based on the first case study given in [17]. They are required to use the high-level design given in that text for their system development and are monitored using project management techniques. For one scenario at a time, they follow a prescribed process: list tasks and estimates in a Gantt chart; seek approval from the lecturer for the contract showing each member's obligations; and record defects found in the various phases during inspection and testing. Templates of inspection documents based on [8, 15] are given to the students. They are also required to consider external software quality criteria such as functionality, usability, maintainability, reliability and performance in building the system, and to incorporate internal software quality criteria such as modularity, documentation, assertions and reuse to improve the product quality [12, 4]. They demonstrate their work in the labs, and this, together with their documentation folder, forms the basis of their evaluation and of suggestions for improvement.

4.3 Process and Product Measures

A combination of process [9] and product assessment [2] measurements has been used in our experiment. The project management metrics collected during low-level design, inspection and testing of classes and methods are analysed using graphs. The software products are assessed by analysing the defects recorded according to various classification types and levels [15] during inspection and testing [2], and by using the quality criteria of Boehm [4] and Meyer [12]. The objective of this experiment is to use the "software by refinement" approach within an O-O development approach to introduce procedures for inspection, defect tracking and testing of one scenario at a time, and to provide project monitoring, evaluation and feedback. The experiment may produce flawed results when the developers are new to an approach, as their learning process affects the results produced [13]. This aspect has been taken into account, as the students are new to the O-O development approach. It must be pointed out that our students would already be well versed in setting up test plans for conducting program and system testing. The measurements of the software process and its products for the first scenario are used only for feedback to students, rather than for assessment purposes; the subsequent scenarios form part of the measurement requirements. This helps students to perform the activities of the later scenarios with a better understanding, thereby achieving an improvement in the quality of the process and the product.

4.4 Scenario based incremental development

The scenario-based development followed in our O-O system development means that the team members work on one scenario at a time, follow the prescribed process, are assessed, and have their process and products compared against those of the other teams. The problem definition and a high-level design for a "Conference Management System" are given in Business Object Notation [17]. Students are given a plan of activities to follow so that product quality improvements can be measured in the context of a clear process model [9]. They also receive procedures to be followed in an inspection process, with template documents [8] for inspection meeting notices, defect analysis, summaries and management reports [15]. Discussions are held on how to work in teams and about the testing activity [11]. Student teams make a presentation in week 6 of the course, give an estimate of their time/activities as a Gantt chart for fully implementing scenario 1, and sign a contract declaring the roles and responsibilities of the team members for scenario 1. Once it has been approved by the lecturer, they are required to implement this scenario in two elapsed weeks, following the prescribed process. Regular inspection meetings are held by the teams, where defects found are recorded under various categories [15], and graphs are produced showing time spent vs defects found during inspection and during testing. Graphs showing class size vs time and number of methods developed vs time are also produced. A review/evaluation of scenario 1 is led by the lecturer in the lecture so that students can use this feedback mechanism to improve their process from scenario 2 onwards. The incremental iterative model of developing O-O software has been exploited to integrate the process maturity level [14] into metrics collection.
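The tracking graphs just described plot cumulative counts (classes, methods, class size) against time. A minimal sketch of the underlying data and a per-week growth helper might look as follows; all numbers are invented for illustration and are not measurements from the experiment.

```python
# Illustrative weekly tracking data for one hypothetical team: cumulative
# class count, method count, and non-commented source size (NCSS).
# Only the shape of the measurement reflects the text; values are invented.

weeks   = [1, 2, 3, 4]
classes = [2, 5, 8, 9]
methods = [6, 18, 30, 37]
ncss    = [120, 410, 760, 930]

def weekly_growth(series):
    """Per-week increase of a cumulative measure, for spotting plateaus
    or bursts in a team's progress."""
    return [later - earlier for earlier, later in zip(series, series[1:])]

print(weekly_growth(classes))  # [3, 3, 1]
print(weekly_growth(ncss))     # [290, 350, 170]
```

Plotting such series for each team side by side is one simple way to produce the visual comparisons of teams mentioned in the text.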
Zweben [18] gives examples of areas within the undergraduate computing curricula which would benefit from software measurement and experimentation. His model fits with our curriculum objective of teaching the O-O paradigm and applying quality improvement measures, preparing students for the new incremental, iterative development method of Object-Oriented system development.

4.5 Initial Observations

As the semester is still in progress, it is too early to provide a complete interpretation of the experimental data. Even at this stage, it has been possible to detect some trends. The process figures show outliers, which have been used to discuss ways to improve the low-level design and code. Of course, the whole experiment rests on the students' cooperation. One group, who have not had regular inspection meetings owing to timetable clashes and other constraints of the team members, have relied on testing alone to detect errors. Their work so far has been of lower quality and they have used more development time. The scenario-based approach is an incremental, iterative approach to system development, and in a collaborative team-based project it requires the team members to have a shared understanding of the system. This tends to suggest that a team-based object-oriented development will definitely benefit from an organised inspection process and will consume fewer resources in developing the system. The Appendix gives a sample layout of SA-TOPS, inspection documentation and three graphs produced as part of this experiment.

5 Conclusion

A repeatable process experiment has been set up to assess students on their O-O project. It requires student teams to work on one scenario at a time in an incremental, iterative style of development. The students collect data about their development process and the product quality using the templates given for inspection and testing. One of the important benefits of introducing a repeatable process in an educational context has been to make the process aspects visible and available for comparison across student teams more objectively than has been possible in the past. The data across the student teams are being compared using graphs to show differences visually, and are used in class to discuss bottlenecks and outliers. A number of these controlled experiments will have to be conducted before it can be validated that scenario-based development, undertaken with a formal inspection process and testing, improves the process and product; the data collected so far are too few to make such claims. Having established an organised process within an O-O method, the ongoing experiment will consider tools for storing the metrics data and for data analysis. This will ensure that meaningful metrics can be produced by this controlled software process and product improvement experiment.

Acknowledgment

Thanks to Monique Spence from our department for helping with data collection by setting up the templates for inspection and for keeping records on SA-TOPS. I would also like to acknowledge the contributions of the SFT3021 students of semester 1, 1995 towards this experiment.

References

[1] Ambriola, V., Meglio, R. D., Gervasi, V., and Mercurio, B. Applying a metric framework to the software process: an experiment. In Proceedings of the 3rd European Workshop on Software Process Technology (EWSPT 94), B. C. Warboys (Ed.), Springer-Verlag (1994), pp. 207-226.
[2] Bache, R., and Bazzana, G. Software Metrics for Product Assessment. McGraw-Hill, 1994.
[3] Basili, V. R. The experimental paradigm in software engineering. In Experimental Software Engineering Issues: Critical Assessment and Future Directions, International Workshop, Germany, H. D. Rombach, V. R. Basili and R. W. Selby (Eds.), LNCS 706, Springer-Verlag (1992), pp. 3-12.
[4] Basson, H., Haton, M. C., and Derniame, J. C. Use of quality characteristics graphs for a knowledge-based assistance in software quality management. In Proceedings of Software Quality Management, Elsevier Science and CMP, Southampton (1993), pp. 807-818.
[5] Bloom, B. Taxonomy of Educational Objectives: Handbook I: Cognitive Domain. David McKay, New York, 1956.
[6] Bruegge, B., and Coyne, R. F. Teaching iterative and collaborative design: Lessons and directions. In Software Engineering Education, 7th SEI CSEE Conference, San Antonio, Texas, USA, J. L. Diaz-Herrera (Ed.), LNCS 750, Springer-Verlag (January 1994), pp. 411-428.
[7] Fenton, N. E. Software Metrics. Chapman and Hall, London, 1991.

[8] Gilb, T., and Graham, D. Software Inspection. Addison-Wesley, Wokingham, England, 1993.
[9] Grady, R. B. Practical Software Metrics for Project Management and Process Improvement. Prentice-Hall, Englewood Cliffs, 1992.
[10] Humphrey, W. Managing the Software Process. Addison-Wesley, Reading, Mass., 1989.
[11] Jacobson, I., Christerson, M., Jonsson, P., and Overgaard, G. Object-Oriented Software Engineering. Addison-Wesley, Reading, Mass., 1992.
[12] Meyer, B. Object-oriented Software Construction. Prentice-Hall, Hemel Hempstead, 1988.
[13] Mohamed, W. E., Sadler, C. J., and Law, D. Experimentation in software engineering: A new framework. In Proceedings of Software Quality Management, Elsevier Science and CMP, Southampton (1993), pp. 417-430.
[14] Pfleeger, S. L., and McGowan, C. Software metrics in a process maturity framework. Journal of Systems and Software 12 (July 1990), 255-261.
[15] Strauss, S. H., and Ebenau, R. G. Software Inspection Process. McGraw-Hill, 1994.
[16] Thomas, M., and McGarry, F. Top-down vs. bottom-up process improvement. IEEE Software 11, 4 (July 1994), 2.
[17] Walden, K., and Nerson, J.-M. Seamless Object-Oriented Software Architecture. Prentice-Hall, Englewood Cliffs, New Jersey, 1995.
[18] Zweben, S. Effective use of measurement and experimentation in computing curricula. In Experimental Software Engineering Issues, International Workshop, Rombach, Basili and Selby (Eds.), LNCS 706, Springer-Verlag (1992), pp. 247-251.

Appendix