An Evaluation of Applying Use Cases to Construct Design versus Validate Design

Erik Syversen¹, Bente Anda² and Dag I.K. Sjøberg²

¹ Department of Informatics, University of Oslo, P.O. Box 1080 Blindern, NO–0316 Oslo, Norway
[email protected]

² Simula Research Laboratory, P.O. Box 134, NO–1325 Lysaker, Norway
Tel. +47 67828200, Fax +47 67828201
{bentea,dagsj}@simula.no

Abstract

Use case models capture and describe the functional requirements of a software system. A use case driven development process, where a use case model is the principal basis for constructing an object-oriented design, is recommended when applying UML. There are, however, some problems with use case driven development processes, and alternative ways of applying a use case model have been proposed. One alternative is to apply the use case model in a responsibility-driven process as a means to validate the design model. We wish to study how a use case model can best be applied in an object-oriented development process, and have conducted a pilot experiment with 26 students as subjects to compare a use case driven process against a responsibility-driven process in which a use case model is applied to validate the design model. Each subject was given detailed guidelines on one of the two processes, and used those to construct design models consisting of class and sequence diagrams. The resulting class diagrams were evaluated with regard to realism, that is, how well they satisfied the requirements, as well as size and number of errors. The results show that the validation process produced more realistic class diagrams, but with a larger variation in the number of classes. This indicates that the use case driven process gave more, but not always more appropriate, guidance on how to construct a class diagram. The experiences from this pilot experiment were also used to improve the experimental design, and the design of a follow-up experiment is presented.

1. Introduction

The authors of the Unified Modeling Language (UML) recommend a use case driven process for developing object-oriented software with UML [4,8,9,15]. In a use case driven development process, a use case model, possibly in combination with a domain model, serves as the basis for deriving a design model. Use case driven development processes have, however, been criticized for not providing a sufficient basis for the construction of a design model. For example, it is claimed that such a development process leads to:
• too wide a gap between the use case model and the class diagram [13],
• missing classes, as the use case model is insufficient for deriving all necessary classes, and
• the developers mistaking requirements for design [15].
An alternative to a use case driven process is to use another development process, for example, a responsibility driven process [12], and subsequently apply a use case model to validate the design. In the following, the term validation process is used to denote such development processes.
We are interested in investigating how a use case model can best be applied in an object-oriented design process, and have conducted a pilot experiment to investigate differences imposed by the use case driven process and the validation process on the resulting design models. This may influence the choice of development process, and also how to teach object-oriented design, even though the choice of development process in a development project is typically determined by characteristics of that project, for example, the experience and skill of the developers, the problem domain and the existing architecture.
The pilot experiment had 26 undergraduate students as subjects, and the task was to construct a design model, consisting of class and sequence diagrams, for a library system. The resulting design models were evaluated relative to how well they implemented the requirements and to what extent they were specified at an appropriate level of syntactic granularity. We also investigated differences in the number of classes and the number of errors between the class diagrams constructed with the two processes.
The results show that the validation process resulted in design models that better described the requirements. The subjects following the use case driven process mostly mapped the steps of the use case descriptions directly onto methods in the class diagram, while those following the validation process were more successful in deriving appropriate methods from the written requirements document. The results also show that the validation process led to class diagrams with larger variation in the number of classes, while the use case driven process led to more errors in the class diagrams. This indicates that the use case driven process provides more, but not necessarily better, guidance. In our opinion, the results support the claims that a use case model is insufficient for deriving all necessary classes and may lead the developers to mistake requirements for design. We will use the experiences from the pilot experiment to improve the experimental design, and the design of a follow-up experiment is presented.
The remainder of this paper is organized as follows. Section 2 describes the two processes investigated in this experiment. Section 3 discusses evaluation of the processes. Section 4 presents the measurement framework used in the experiment. Section 5 describes the experimental design, results and analysis. Section 6 describes an improved experimental design. Section 7 concludes and suggests future work.

2. Two Processes for Applying Use Case Models in Object-Oriented Software Development

The UML meta-model defines a use case as a subclass of the meta-model element Classifier [16]. This implies that a use case contains state and behaviour from which classes, attributes, and methods can be derived. A use case driven development process prescribes a stepwise and iterative transitioning of a use case model to a design model [3,9]. The use case model, possibly together with a domain model, serves as a basis for deriving the classes necessary for implementing the system. Sequence diagrams are used for identifying the system classes necessary to realize the use cases and for allocating responsibilities to the classes. The result is a complete class diagram for the system. A design model is thus a refinement of the analysis model. In our opinion, a use case driven development process implicitly assumes the definition of a use case as a classifier. The steps of a use case driven process are outlined in Figure 1 and described in Table 1.
In practice, use cases often do not describe a state that can be represented by classes. Hence, an alternative definition of use case has been proposed, where a use case is a behavioural feature of a system and thus only describes entities of behaviour of that system. Several problems with use case driven development processes have also been identified, as described in the previous section. A proposed solution to these problems is to apply the use case model in combination with another process, for example, a responsibility driven process [12]. A responsibility driven process starts by identifying responsibilities in a textual requirements specification. The responsibilities of the system are the high-level operations that the system must be able to perform. An initial class diagram with system classes, to which the responsibilities can be allocated, is thus derived in parallel with the development of a use case model. A use case model can then be used to validate the class diagram. A set of reading techniques for inspecting the quality and consistency of diagrams and textual descriptions, and for validating them against the requirements specification, is presented in [14]. In our opinion, the validation process supports the definition of a use case as a behavioural feature of a system. Figure 2 shows the steps of a responsibility driven process where a use case model is applied in validating the design, and the steps are described in Table 2.
In our experience, few organisations apply a use case model in a completely systematic way in their development process. Therefore, the processes described and evaluated in this paper represent recommended practice more than actual practice, but we believe that recommended practice should be subject to evaluation before becoming actual practice. The evaluation of a recommended software development process will give indications about its strengths and weaknesses, and about when the process is particularly suitable, something which may facilitate its transfer into actual practice.


Figure 1. The use case driven process

Figure 2. The validation process

Table 1. The steps in the use case driven process

1. Identify the use cases of system behaviour.
2. Construct a domain model showing real-world objects in the application domain.
3. Describe each use case in detail, for example, using a template format.
4. Define a scenario for each “interesting path” through the use case, and draw a sequence diagram for each scenario. Use objects from the domain model, and add new objects where necessary.
5. Identify the methods needed in every scenario, that is, all the methods needed for the realization of the use cases.
6. Transfer the objects and methods from the sequence diagrams to a class diagram.
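To make steps 4–6 concrete, the following minimal sketch (in Python rather than UML) shows how one scenario of a “Check out an item” use case, for the library system used later in this paper, might map onto objects and methods. The class and method names are our own illustration, not taken from the experimental material.

```python
# Hypothetical sketch of steps 4-6 for one scenario of a "Check out an
# item" use case in a library system; class and method names are our own.

class Item:
    """Domain object identified in step 2."""
    def __init__(self, item_id: str) -> None:
        self.item_id = item_id
        self.checked_out = False

    def mark_checked_out(self) -> None:
        # Method identified in step 5 from a scenario action.
        self.checked_out = True

class Loan:
    """Domain object linking an item to a borrower."""
    def __init__(self, item: Item, borrower: str) -> None:
        self.item = item
        self.borrower = borrower

class LoanService:
    """Receives the initial message of the scenario's sequence diagram."""
    def check_out(self, item: Item, borrower: str) -> Loan:
        item.mark_checked_out()      # a method call drawn in the sequence diagram
        return Loan(item, borrower)

loan = LoanService().check_out(Item("book-42"), borrower="Ada")
```

In step 6, each class, method and message of the scenario would be transferred to the class diagram.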

Table 2. The steps in the validation process


1. Identify the classes in the system using abstractions or noun phrases in the requirements documents.
2. Make a list of the responsibilities of the objects of each class.
3. Draw a class diagram to show the classes and their responsibilities, that is, their methods.
4. Identify the use cases of system behaviour.
5. Describe each use case in detail, for example, using a template format.
6. Define a scenario for each “interesting path” through the use case, and draw a sequence diagram for each scenario. Use objects from the class diagram.
7. Identify the methods needed in every scenario, that is, all the methods needed for the realization of the use cases.
8. Use the sequence diagrams to validate that the class diagram contains all the necessary methods. Add or rearrange methods if necessary.
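Step 8 amounts to a coverage check between the two kinds of diagrams. A minimal sketch of that check, over a toy representation of the diagrams (the class and method names are illustrative, not from the paper):

```python
# Hypothetical sketch of step 8: verify that every method called in the
# sequence diagrams exists in the class diagram. Names are illustrative.

class_diagram = {                     # class name -> methods in the diagram
    "LoanService": {"check_out", "check_in"},
    "Item": {"mark_checked_out"},
}
sequence_calls = [                    # (receiver class, method) per message
    ("LoanService", "check_out"),
    ("Item", "mark_checked_out"),
    ("LoanService", "check_status"),  # overlooked in the first class diagram
]

for cls, method in sequence_calls:
    if method not in class_diagram.get(cls, set()):
        print(f"Add or rearrange: {cls} lacks {method}()")
```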

3. Comparing the Processes

The two design processes can be evaluated in terms of quality attributes of the resulting design models, and in terms of direct quality attributes of the processes themselves. The following quality attributes of design models were used in our pilot experiment:
• The realism of the class diagrams, that is, to what extent the processes are successful in constructing design models that implement the requirements.
• The correspondence between the class and the sequence diagrams.
• The level of syntactic granularity in the class diagrams.
• The size of the class diagrams, that is, the number of classes, attributes, methods and associations.
• The number of false classes, attributes, methods and associations in the class diagrams.
• The number of superfluous classes, attributes, methods and associations in the class diagrams.
The obvious direct process quality attribute is
• the time spent on creating the design models.
To compare the two development processes, we tested the following null hypotheses:
H1₀: There is no difference in the realism of the design models.
H2₀: There is no difference in the correspondence between the class and sequence diagrams.
H3₀: There is no difference in the level of detail in the class diagrams.
H4₀: There is no difference in the size of the class diagrams.
H5₀: There is no difference in the number of false elements in the class diagrams.
H6₀: There is no difference in the number of superfluous elements in the class diagrams.
H7₀: There is no difference in the time spent creating the design models.

4. Measurement Framework

This section describes both qualitative and quantitative metrics that can be used to measure the quality attributes introduced in Section 3. The qualitative metrics are used to test hypotheses H1–H3; the quantitative metrics are used to test hypotheses H4–H7.

4.1. Qualitative Metrics

Our qualitative metrics are based on the marking scheme for evaluating quality properties of a use case description presented in [1].
The realism of the class diagrams, that is, to what extent the processes are successful in constructing design models that implement the requirements, is measured along three dimensions:
• Realism in class abstractions – to what extent the class diagrams contain the necessary classes.
• Realism in class attributes – to what extent the necessary attributes are identified, and whether they are specified as attributes of the correct classes.
• Realism in class methods – to what extent the necessary methods are identified, and whether they are specified as methods of the correct classes.
The realism of sequence diagrams is measured similarly to the realism of class methods. The advantage of measuring sequence diagrams is that it is easier to follow the flow of successive method calls and the flow of parameters than it is in class diagrams.
Correspondence between class and sequence diagrams is measured by verifying that:
• For the use case driven process, the objects used in the sequence diagrams are derived from the domain model, and the class diagram contains exactly the methods found in the sequence diagrams.
• For the validation process, the objects used in the sequence diagrams are derived from the class diagram, and the class diagram contains complementary methods to those found in the sequence diagrams.
• For both processes, the direction of method calls between objects in the sequence diagrams is consistent with the way the methods are defined in the class diagrams.
The level of syntactic granularity is measured relative to the syntactic elements used in the resulting class diagrams. The class diagrams should show the:
• visibility of attributes and methods,
• type and names of attributes, methods and parameters, and
• navigation and cardinality of associations.
Table 3 summarizes the qualitative metrics described above. The score for each metric is assigned according to a checklist¹ with yes/no questions for each property. The marking scales match the number of questions (between 6 and 9) in the checklists.

¹ The checklist and the other experimental material can be found at http://www.ifi.uio.no/~isu/forskerbasen.
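As an aside, the checklist-based marking reduces to counting “yes” answers. A small sketch, where the questions shown are invented stand-ins for the real checklist (which is in the experimental material referenced above):

```python
# Sketch of the checklist-based marking: a property's mark is the number
# of "yes" answers to its yes/no questions. The questions shown are
# invented stand-ins; the real checklist is in the experimental material.

level_of_detail_checklist = {
    "visibility of attributes shown": True,
    "visibility of methods shown": True,
    "types and names of attributes given": False,
    "types and names of method parameters given": True,
    "navigation of associations shown": False,
    "cardinality of associations shown": True,
}
mark = sum(level_of_detail_checklist.values())  # 0-6 scale, matching Table 3
print(f"Level of detail in class diagram: {mark}/6")
```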


Table 3. Qualitative metrics

Hypothesis  Property                                   Mark  Comment
H1          Realism in class abstractions              0-6   0 = all wrong, 6 = all correct
H1          Realism in class attributes                0-7   0 = all wrong, 7 = all correct
H1          Realism in class methods                   0-8   0 = all wrong, 8 = all correct
H1          Realism in sequence diagrams               0-8   0 = all wrong, 8 = all correct
H2          Class and sequence diagram correspondence  0-9   0 = all wrong, 9 = all correct
H3          Level of detail in class diagram           0-6   0 = all wrong, 6 = all correct

4.2. Quantitative Metrics

Many design metrics for object-oriented code have been proposed, but only a few are applicable to high-level design [11]. The metrics suggested in [6,7] are adapted to UML class diagrams and have been empirically validated [7]. To examine the size of the class diagrams, we use a subset of those metrics: the total number of classes, attributes, methods and associations. Faults in class diagrams are measured in terms of the number of classes, attributes, methods and associations that are false relative to the problem domain, and those that are superfluous, that is, those that do not contribute to the implementation of the requirements. Time measures the total time spent on constructing the design model. The quantitative metrics are summarized in Table 4.

Table 4. Quantitative metrics

Hypothesis  Property  Description
H4          NC        Total Number of Classes
H4          NA        Total Number of Attributes
H4          NM        Total Number of Methods
H4          NAssoc    Total Number of Associations
H5          NFC       Total Number of False Classes
H5          NFA       Total Number of False Attributes
H5          NFM       Total Number of False Methods
H5          NFAssoc   Total Number of False Associations
H6          NSC       Total Number of Superfluous Classes
H6          NSA       Total Number of Superfluous Attributes
H6          NSM       Total Number of Superfluous Methods
H6          NSAssoc   Total Number of Superfluous Associations
H7          Time      Time used to develop a design model
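The size metrics are straightforward counts over a class diagram. A minimal sketch, computed over a toy in-memory representation of a class diagram (the representation and the model content are ours, not from the experiment):

```python
# Minimal sketch of the size metrics NC, NA, NM and NAssoc computed over a
# toy in-memory representation of a class diagram; the representation and
# the model content are ours, not from the experiment.

classes = {
    "Item":        {"attributes": ["item_id", "status"], "methods": ["mark_checked_out"]},
    "Loan":        {"attributes": ["due_date"],          "methods": []},
    "LoanService": {"attributes": [],                    "methods": ["check_out", "check_in"]},
}
associations = [("LoanService", "Item"), ("Loan", "Item")]

NC = len(classes)
NA = sum(len(c["attributes"]) for c in classes.values())
NM = sum(len(c["methods"]) for c in classes.values())
NAssoc = len(associations)
print(f"NC={NC}, NA={NA}, NM={NM}, NAssoc={NAssoc}")  # NC=3, NA=3, NM=3, NAssoc=2
```

The false and superfluous counts (NFC, NSA, and so on) would be obtained the same way, by counting the elements marked false or superfluous during the manual assessment.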

5. An Experiment to Evaluate the Two Processes

This section describes the design, results and analysis of the pilot experiment conducted to evaluate the two processes. To the authors’ knowledge, no empirical studies have been conducted to compare alternative ways of applying use case models in a development process. This study is therefore explorative; the goal of the evaluation is to obtain indications of differences between the two processes.

5.1. Experimental Design

5.1.1. Subjects. The subjects were 26 undergraduate students following a course in software engineering. Half of the subjects received guidelines for the use case driven process; the other half received guidelines for the validation process. The guidelines were assigned at random. Participation in the experiment was voluntary, and the subjects were paid for taking part. The subjects had learned the basics of UML and had constructed an object-oriented design as part of a compulsory assignment in the course.

5.1.2. Procedure of the Experiment. The experiment lasted for three hours. The subjects used pen and paper. They wrote down the exact time they started and finished each exercise. The experiment consisted of two parts. The first part contained three exercises guiding the subjects in developing a design model with a class diagram and three sequence diagrams, modelling three functional services of a library system. The second part was not included in our analysis, but was part of the experiment to make sure that all the subjects had enough to do for three hours. The subjects had no training in either of the two alternative development processes, so detailed guidelines were given.

5.1.3. Experimental Material. The task of the experiment was to construct a design model for three functional services of a library system. This case is described in many books on UML, for example [12,15]. It was chosen because it is a well-known domain and simple enough for students just introduced to UML. The subjects were given a use case model with the following use cases:
• Checking out an item.
• Checking in an item.
• Checking the status of an item.
The use cases were described using a template format based on those given in [5]. Those following the validation process also received a textual requirements document for the system.
The guidelines were based on the descriptions presented in Tables 1 and 2. In order to provide complete design processes, some additions were made to the original description of the validation process regarding how to identify classes and responsibilities, and how to validate the class diagram against the sequence diagrams. These additions were based on the reading techniques for object-oriented design described in [14]. Because of the time constraint on the experiment, some steps of the original descriptions were also removed: the subjects were given use cases instead of being asked to construct them themselves, and in the validation approach the listing of responsibilities for each class was excluded. Table 5 shows the detailed guidelines used in the experiment.

5.2. Results and Analysis

The design models were analysed according to the measurement framework presented in Section 4. The Kruskal-Wallis statistical test was performed on the results. This non-parametric test was chosen because the data distributions were non-normal. A p-value of 0.1 was chosen as the level of significance for all the tests due to the explorative nature of the experiment. Some of the subjects did not complete a design model and were therefore excluded from the analysis: 10 subjects following the use case driven process and 11 following the validation process remained. Table 6 summarises the results of the statistical tests.

5.2.1. Assessment of realism, Hypothesis H1. The test on realism in class methods shows a significant difference in favour of the validation process. We believe that the reason for this result is that the subjects mostly mapped the steps in the use case descriptions directly onto methods when creating sequence diagrams. This led to problems both with inappropriate method names and with flaws in the order of the method calls. In the use case driven process, the sequence diagrams are used to identify all the methods needed in the class diagram, while in the validation process the sequence diagrams are only used to detect any necessary functionality overlooked in the first attempt at a class diagram. Therefore, the problems concerning the direct mapping of methods had a much larger impact on the design models constructed with the use case driven process. The tests on realism in class abstractions, class attributes, and sequence diagrams showed no difference between the two processes.

5.2.2. Assessment of correspondence, Hypothesis H2. The test on correspondence showed no significant difference, but there is a difference in the medians in favour of the use case driven process. We expected a difference in favour of the use case driven process, since a claimed strength of this process is that it assures traceability between the diagrams in a design model.

5.2.3. Assessment of level of syntactic granularity, Hypothesis H3. We neither expected nor found a difference in the level of syntactic granularity of the class diagrams.
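As an aside, the statistical procedure used for each hypothesis above is a two-group Kruskal-Wallis test. A minimal sketch with invented scores (not the experiment’s data):

```python
# Illustrative only: a Kruskal-Wallis comparison of two groups of scores,
# as used for each hypothesis above. The numbers are invented and are not
# the experiment's data.
from scipy.stats import kruskal

use_case_driven = [3, 4, 4, 5, 4, 3, 5, 4, 4, 3]     # n = 10 subjects
validation      = [6, 5, 7, 6, 6, 5, 7, 6, 5, 6, 6]  # n = 11 subjects

statistic, p_value = kruskal(use_case_driven, validation)
print(f"H = {statistic:.2f}, p = {p_value:.3f}")  # reject H0 if p < 0.1
```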


Table 5. The exercise guidelines

Guidelines for the use case driven process

Exercise 1: Domain model
1. Underline each noun phrase in the use case descriptions. Decide for each noun phrase whether it is a concept that should be represented by a class candidate in the domain model.
2. For the noun phrases that do not represent class candidates, decide whether these concepts should be represented as attributes in the domain model instead. (Not all attributes are necessarily found this way.)

Exercise 2: Sequence diagrams
1. Create one sequence diagram for each use case.
2. Study each use case description carefully, and underline the verbs or sentences describing an action. Decide for each action whether it should be represented by one or more methods in the sequence diagrams.
3. The sequence diagrams should contain only the methods derived from the use case descriptions, and the objects from the domain model from exercise 1. (Note: not all the methods needed are necessarily identified this way.)

Exercise 3: Class diagram
1. Transfer the domain model from exercise 1 into a class diagram.
2. For each method in the sequence diagrams:
   a. If an object of class A receives a method call M, class A should contain the method M in the class diagram.
   b. If an object of class A calls a method of class B, there should be an association between classes A and B.

Guidelines for the validation process

Exercise 1: Class diagram
1. Underline all noun phrases in the requirements document. Decide for each noun phrase whether it is a concept that should be represented by a class in the class diagram.
2. For the noun phrases that do not represent classes, decide whether these concepts should be represented as attributes in the class diagram instead. (Not all attributes are necessarily found this way.)
3. Find the verbs or other sentences that represent actions performed by the system or the system classes. Decide whether these actions should be represented by one or more methods in the class diagram. (Not all the methods needed are necessarily identified this way.)

Exercise 2: Sequence diagrams
1. Create one sequence diagram for each use case.
2. Study each use case description carefully, and underline the verbs or sentences describing an action. Decide for each action whether it should be represented by one or more methods in the sequence diagrams.
3. The sequence diagrams should contain only the methods derived from the use case descriptions, and the objects from the class diagram from exercise 1.

Exercise 3: Validation of the class diagram
1. Draw a circle around each method in the sequence diagrams. If several methods together form a system service, treat them as one service.
2. For each circled method or service:
   a. Validate that the class that receives the method call contains the same or matching functionality.
   b. If an object of class A calls a method of class B, there should be an association between classes A and B in the class diagram. If the class diagram contains any hierarchies, remember that it may be necessary to trace the hierarchy upwards when validating it.
3. If the validation in the previous steps failed, make the necessary updates in the class diagram.
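To illustrate exercise 1 of the validation guidelines, here is a hypothetical sketch of the noun-phrase triage for one requirements sentence; both the sentence and the decisions are ours, not from the experimental material:

```python
# Hypothetical sketch of exercise 1 of the validation guidelines: noun
# phrases underlined in one requirements sentence and a manual triage into
# classes vs. attributes. Both the sentence and the decisions are ours.

sentence = "A borrower may check out an item if the status of the item is available."
underlined = ["borrower", "item", "status"]

triage = {
    "borrower": "class",              # step 1: concept worth a class
    "item":     "class",              # step 1
    "status":   "attribute of Item",  # step 2: not a class, an attribute
}
for noun_phrase in underlined:
    print(f"{noun_phrase}: {triage[noun_phrase]}")
```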


Table 6. Statistical results

Hypothesis                   Median (use case driven)  Median (validation)  P-value  Reject
H1₀  Realism                 4.0                       6.0                  0.03     Yes
H2₀  Correspondence          7.0                       6.0                  0.12     No
H3₀  Syntactic granularity   5.0                       4.0                  0.40     No
H4₀  Size                    6.0                       6.0                  0.68     No
H5₀  False                   1.0                       0.0                  0.06     Yes
H6₀  Superfluous             0.0                       0.0                  0.43     No
H7₀  Time (minutes)          113.5                     125.0                0.16     No

5.2.4. Assessment of size, Hypothesis H4. We expected a difference in the size of the class diagrams, with larger class diagrams produced by the validation process, because the use case driven process provides stricter guidance on how to identify classes, attributes, methods and associations. However, the tests regarding size did not show any difference. There is, however, a larger variance in the number of classes obtained using the validation process than using the use case driven process. In our opinion, this indicates that the use case driven process gives stricter guidance on how to identify classes. No difference was found in the number of attributes, methods or associations.

5.2.5. Assessment of false elements, Hypothesis H5. The test showed a significant difference in favour of the validation process, that is, the use case driven process led to more false classes. Combined with the assessment of hypothesis H4, this indicates that the use case driven process gave stricter, but not better, guidance on how to identify classes. No difference was found in the number of false attributes, methods or associations.

5.2.6. Assessment of superfluous elements, Hypothesis H6. We expected a difference in the number of superfluous elements in the class diagrams created by the two processes, for the same reason as we expected a difference in size. However, none of the tests on superfluous elements showed any difference.

5.2.7. Assessment of time spent, Hypothesis H7. A difference in the time spent creating the design models was expected, as exercises 1 and 3 (Table 5) were more comprehensive for the validation process. Those following the validation process did spend more time than those following the use case driven process, but the difference was not significant.

6. Improving the Experimental Design

This pilot experiment was exploratory: a first study aimed at investigating the strengths and weaknesses of different ways of applying a use case model in an object-oriented design process. The threats to the validity of the results from the pilot experiment and the design of an improved follow-up experiment are discussed below.

6.1. Subjects

A threat to the validity of our results is that the subjects were novices at modelling with UML. More experienced subjects, or training in the processes in advance, might have led to different results. Our follow-up experiment will be conducted with twice as many students as subjects. These subjects will be more experienced with UML, and when we have gained sufficient experience from conducting the experiment with students, we will replicate it with professional software developers as subjects.

6.2. Procedure of the Experiment

Another threat to the validity of our results is that the procedure of the experiment differed in several ways from how software developers actually work when designing with UML:
• The experiment was conducted with pen and paper, instead of with a UML case tool.
• The subjects worked individually, while design is typically done in teams.
• The guidelines led the subjects to work linearly and construct the diagrams one after another, while design is usually done iteratively, developing the diagrams in parallel.
• There was a time constraint on the experiment, so the subjects were not permitted to spend as much time as they wanted. Actual software development projects also have time limits, but since almost 20% of the subjects did not finish, the task may have been too comprehensive for the three hours that were allocated.
Some of these problems will be remedied in the follow-up experiment. The subjects will work on a computer using a UML case tool with which they are familiar, and an experiment support environment, described in [2], will be used for the data collection. The use of a case tool will make the procedure of the experiment more realistic, and it will permit the subjects to work in a more iterative manner. In later experiments we will let some subjects work in teams, and also let them spend as much time as they need on the experiment.

6.3. Experimental Material

Other threats to the validity of our results are caused by the experimental material:
• The problem domain was very small because of the time limit on the experiment. This resulted in small design models, leaving little room for variations in structure.
• The guidelines were quite detailed and restricted the subjects in several ways. This was done to ensure that the subjects actually followed the process descriptions. One example of a restriction was that only the subjects following the validation process received the textual requirements document and performed a validation step; another was that the subjects were asked to make only one sequence diagram for each use case.
• We provided the subjects with a use case model instead of letting them construct one themselves. The analysis showed that the use case descriptions were important in the use case driven process because the subjects often assumed a direct mapping from the steps in the use case descriptions to methods in the class diagram.
We will attempt to remedy some of these problems in the follow-up study:
• the problem domain will be increased,
• the guidelines will be less strict,
• all the subjects will receive the same material,
• the use case driven process will also include a validation step, and
• the subjects will decide for themselves how many sequence diagrams to make for each use case.
To ensure process conformance even with less strict guidance, we will log the subjects’ actions. At regular time intervals, that is, every 15 minutes, a small screen will pop up and ask the subjects to write down exactly what they are doing. This approach was used in a previous experiment described in [10]. Logging the subjects’ actions will give us an idea of how the subjects actually worked while solving the task, and the pop-up screen will also remind the subjects of what they should be working on at that point in the experiment. We will, however, still provide the subjects with use case models in the follow-up study, since the format used is a typical one and there will be time constraints. In later experiments we may ask the subjects to construct the use case model themselves.

6.4. Measurements

In the pilot experiment, we were not able to measure all the quality attributes of the design models, or of the process, that we had wanted. In the follow-up experiment, a larger problem will make it possible to measure the complexity of the design models in terms of coupling and cohesion, in addition to size. Logging what the subjects are doing at regular intervals will let us assess process conformance better than was possible in the pilot experiment.

7. Conclusions and Future Work

We conducted a pilot experiment to evaluate two different approaches to applying use case models in an object-oriented design process. One approach was use case driven; in such processes, a use case model serves as the basis for constructing a design model. The other approach was a validation process, in which a use case model was applied in validating the design. The aim of the experiment was to investigate differences between the two approaches with regard to the quality of the resulting design models, defined in terms of realism as well as the size of and number of errors in the class diagrams.
The results show that the validation process led to design models with greater realism in the method composition of the class diagrams. No significant difference regarding size was found, but there was a larger variance in the number of classes in the class diagrams constructed with the validation process than in those constructed with the use case driven process. The use case driven process led to more classes that were erroneous relative to the problem domain than did the validation process. This indicates that the use case driven process gives more, but not necessarily better, guidance on how to identify classes and their attributes and methods. In our opinion, the results support the claims that a use case model is insufficient for deriving all necessary classes and may lead the developers to mistake requirements for design [15]. The results also indicate that it may be more appropriate to consider a use case as a behavioural feature of the system against which class diagrams can be validated, rather than as having a state and methods from which the design can be derived.
This study was exploratory. One experiment can only provide insight into how the alternative processes perform in a limited context. To gain knowledge that is transferable to actual software development practice, it is therefore necessary to conduct a series of experiments in different environments. An experiment permits the in-depth study of some aspects of a development process, but an experimental context will necessarily differ from actual work practice, so experiments should be combined with other types of studies, for example, case studies. We plan to conduct further studies investigating how a use case model can best be applied in an object-oriented design process. A follow-up experiment is planned which will incorporate the modifications described in Section 6, and we also plan to conduct an experiment with professional software developers.

Acknowledgement

We acknowledge all the students at the University of Oslo who took part in the experiment. We thank Ray Welland and the anonymous referees for their constructive comments on this paper.

References

1. Anda, B., Sjøberg, D. and Jørgensen, M. Quality and Understandability in Use Case Models. 13th European Conference on Object-Oriented Programming (ECOOP 2001), June 18-22, Budapest, Hungary, LNCS 2072, Springer Verlag, pp. 402-428, 2001.
2. Arisholm, E., Sjøberg, D., Carelius, G.J. and Lindsjørn, Y. SESE – an Experiment Support Environment for Evaluating Software Engineering Technologies. NWPER’2002 (Tenth Nordic Workshop on Programming and Software Development Tools and Techniques), Copenhagen, Denmark, 18-20 August, pp. 81-98, 2002.
3. Arlow, J. and Neustadt, I. UML and the Unified Process: Practical Object-Oriented Analysis and Design. Addison-Wesley, 2002.
4. Booch, G., Rumbaugh, J. and Jacobson, I. The Unified Modeling Language User Guide. Addison-Wesley, 1999.
5. Cockburn, A. Writing Effective Use Cases. Addison-Wesley, 2000.
6. Genero, M., Piattini, M. and Calero, C. Early Measures for UML Class Diagrams. L’Objet, Vol. 6, No. 4, Hermes Science Publications, pp. 489-515, 2000.
7. Genero, M. and Piattini, M. Empirical Validation of Measures for Class Diagram Structural Complexity through Controlled Experiments. 5th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering, June 2001.
8. Jacobson, I., Christerson, M., Jonsson, P. and Overgaard, G. Object-Oriented Software Engineering: A Use Case Driven Approach. Addison-Wesley, 1992.
9. Jacobson, I., Booch, G. and Rumbaugh, J. The Unified Software Development Process. Addison-Wesley, 1999.
10. Karahasanovic, A. and Sjøberg, D. Visualizing Impacts of Database Schema Changes – A Controlled Experiment. In 2001 IEEE Symposium on Visual/Multimedia Approaches to Programming and Software Engineering, Stresa, Italy, September 5-7, 2001, pp. 358-365, IEEE Computer Society.
11. Reissing, R. Towards a Model for Object-Oriented Design Measurement. 5th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering, June 2001.
12. Richter, C. Designing Flexible Object-Oriented Systems with UML. Macmillan Technical Publishing, 1999.
13. Rosenberg, D. and Kendall, S. Applying Use Case Driven Object Modeling with UML: An Annotated E-commerce Example. Addison-Wesley, 2001.
14. Shull, F., Travassos, G., Carver, J. and Basili, V. Evolving a Set of Techniques for OO Inspections. University of Maryland Technical Report CS-TR-4070, October 1999.
15. Stevens, P. and Pooley, R. Using UML: Software Engineering with Objects and Components. Addison-Wesley, 2000.
16. The UML meta-model, version 1.3. www.omg.org, 2001.