Sizing Use Cases: How to Create a Standard Metrical Approach

B. Henderson-Sellers, D. Zowghi, T. Klemola and S. Parasuram
University of Technology, Sydney, P.O. Box 123, Broadway 2007, NSW, Australia
{brian,didar}@it.uts.edu.au, {tklemola,sparasuram}@yahoo.com
Abstract. Use-case modelling provides a means of specifying external features of a system during requirements elicitation. In principle, use cases can be used to size the system about to be built but, for that, a standard format for their documentation is required. Furthermore, gathering use-case metrics requires a software development process that produces complete use-case descriptions in a repeatable way. Here, we set out the requirements for such a standardization so that use cases can be metricated. Once accomplished, it is possible to evaluate the important research questions of whether use-case attributes such as size and complexity can be controlled and whether use-case metrics are sufficiently rigorous for estimating effort. Finally, we note that this added rigour applied to use cases should improve the consistency and quality of communication between client and developer, helping to ensure that the right system is built.
1. Introduction

The success of a software development project is measured by many factors. Among them are how closely the final product meets the requirements of the client, how close the delivery date is to the scheduled date and how close the final cost is to the original budget forecasts. Improving any of these project success attributes involves changing something about the existing software development process. Getting the correct user requirements is known to be challenging [1], yet improvements made in the early stages of a software development process can have significant benefits compared to changes in later stages of the process [2]. In this paper, we examine the use of use cases for requirements engineering and propose a framework for developing use-case metrics appropriate to the requirements phase of the software development life cycle. In order to undertake a quantitative comparison of use cases between projects, or possibly to use them as a basis for a future prognostic model for effort, it is necessary to ensure that use cases are constructed in such a way that they are consistent and conform to a standardized format, both within a project and from project to project.

Use cases are an old technique manifested in many guises, most recently in Ivar Jacobson's work in OOSE [3]. What is today called use-case modelling was first used in telecommunications software development. As its power for describing large systems has become known, it has become a popular technique for specifying object-oriented systems in general, especially user interfaces [4]. It is widely used for describing systems and subsystems where user-initiated events drive the application. However, use cases possess no specifically object-oriented characteristics. Indeed, they can be used most effectively as a precursor either to an object-oriented analysis/design or, via direct decomposition, to a functionally-oriented design [5].

A use-case-driven approach facilitates communication between the requirements engineer and the system sponsors during requirements elicitation. The concept of a use case can be described very simply to system users and hence can reduce the well-known communication gap that exists between the problem-solving and problem-owning communities. To the extent that use cases explore the user interfaces directly, early feedback can be obtained from the users on this crucial and volatile aspect of system specification [6]. In addition to capturing the requirements, use cases are meant to be of value during the analysis and design activities and can play a significant role in the testing process. However, use cases cannot capture non-functional requirements and, it has been argued, cannot in their present form replace systems requirements specifications entirely [7]; as Cockburn [8] puts it: "they really are requirements but they are not all of the requirements".

The task of defining measures for the size of use cases requires consideration of many issues. We commence this paper with a discussion of the UML standard for use cases followed, in Section 3, by a survey of popular practices in the construction of use cases, to ensure that any chosen definition is both theoretically sound and also fits with common practice. This background then gives us the foundation for our proposals for appropriate metrics for use cases (Section 4) before concluding in Section 5.
2. Defining Use Cases

A use-case model describes an external view of the system. It consists of actors and their interactions with a software system, described in terms of use cases, which aim to capture the functional requirements. Jacobson [9] argues that system-level responsibilities and use cases are related, although such a formal linkage is not included in the current version of the UML [10]. Use-case modelling is a technique for eliciting, understanding and defining functional system requirements in a way that both developers and users can understand [11]. The "story" of how the system behaves from the user's perspective can be told with the use case and its associated scenarios. It can be used to reach an agreement between the people who order the system and the people who develop the system. Jacobson [12] argues that the use-case model is used to ensure that the right system is built. It should also be noted that use cases capture only functional requirements, specifically those that involve an interaction with an actor. They cannot capture non-functional requirements or the actual internal processing of a function. Hence their size can only be used as an estimate for the effort needed to develop the functional requirements that involve an interaction with an actor.
In the OO world, use cases have been incorporated as one of the notations that comprise the notational suite known as the Unified Modeling Language, or UML [10]. According to this OMG standard, the definition of a use case is:

"The use case construct is used to define the behavior of a system or other semantic entity without revealing the entity's internal structure. Each use case specifies a sequence of actions, including variants, that the entity can perform, interacting with actors of the entity." [10, p2-137]

"Use case diagrams show actors and use cases together with their relationships. The use cases represent functionality of a system or a classifier, like a subsystem or a class, as manifested to external interactors with the system or the classifier." [10, p3-94]

Notwithstanding these definitions, there remain serious ambiguities, or at least ambiguous yet valid interpretations, of the concept of "use case" as seen in the series of versions of the UML. This has led to the industry joke that if you gather together n experts in use cases in one room, there will be at least n different definitions (both Graham and Cockburn have validated this empirically [13, p244]). Cockburn [8] also identifies "failures" in the current usage of use cases. He notes that the original ideas of Jacobson have been changed as a consequence of the drawing-tool influence of the OMG's committees, thus losing the predominantly textual nature of the original use case. Since a use case is text, he argues that nothing in the UML standard (which discusses only use-case diagrams) describes the essence of the use case itself. Indeed, many developers think that a use case is the picture you can draw as a use-case diagram whereas, as Cockburn [8, pxxi] points out, a use case fits inside one of the ellipses on a use-case diagram. Consequently, any metrics suite for use cases needs to discriminate between these two elements in the use (and misuse) of use cases versus use-case diagrams.
While use cases were intended originally to help in eliciting user requirements in an investigative rather than a documentary manner, prior to design, recent modifications are seen [14] as adding unnecessary complications. This leads to experienced developers having multiple conflicts of interpretation. In particular, users confuse the "for instance" and the "prototypical" interpretations: the former (usually in analysis) elicits sample user interactions while the latter (usually in design) specifies typical courses of all interactions of this type. Refinements made as use cases became part of the UML led to the formalization of two relationships between use cases, which have been generally misunderstood and, throughout the various versions of the UML, significantly altered in their definitions. The current (Version 1.4) stereotyped relationships of interest are named «include» and «extend», with the latter probably causing the greater confusion of the two; it is generally understood to offer support for exceptional cases. On the other hand, if the comment [15] that "The semantics of «extend» cannot handle exceptions" is correct, then this represents further disorder in what the «extend» stereotype is meant to represent. These hard-hitting comments on use cases, together with the warnings in [16,17], underline the potential difficulty of creating a standard use-case metric for a far-from-unambiguous concept. Use cases are defined in the context of a software system, whereas requirements engineering makes no such assumption. Consideration of contextual factors that may
influence the time needed to implement the system segment specified by a use case is needed. Given a standard unit of measurement for use cases, rules for the consistent construction of use cases and the gathering of historical effort data, it becomes possible to investigate linking use cases with effort using use-case metrics.
3. Use-Case Models

Although use cases are included in the OMG's UML standard, as noted above, they appear to be ambiguously defined, in the sense that different authors interpret and use them very differently. In order to identify appropriate metrics, it is therefore necessary to evaluate some of the major use-case variants. Here we evaluate the use-case models proposed in [8,18,19].

Cockburn template
1. Use-case Name
2. Context of use
3. Scope
4. Level
5. Primary Actor
6. Stakeholders & Interests
7. Precondition
8. Minimal Guarantees
9. Success Guarantees
10. Trigger
11. Main Success Scenario
12. Extensions
13. Technology and Data Variations List
14. Related Information

Fig. 1. Use-case template proposed in [8]

3.1 Cockburn's Approach

Cockburn [20] focusses on the goals involved in use cases and identifies 18 use-case types along the four dimensions of purpose, context, multiplicity and structure, noting that the most common involves semi-formal structure. Use cases can also have different scope (e.g. system, organization); all of which tends to leave users lost in multi-dimensional space [21]. From these earlier ideas, Cockburn [8] has constructed a standard template for use cases with 14 major constituent elements. Figure 1 shows one of several alternatives he offers, another favourite being named the "fully dressed" form, which is much the same in content but laid out slightly differently.

3.2 Rational Unified Process (RUP)

Figure 2 depicts the six main fields in the RUP use case [18] which, although similar to those in Figure 1, are more skeletal and do not include context, scope etc., focussing instead on the technical descriptors (cf. items 7, 9 and 11-13 in Figure 1). RUP's use-case template focusses on the flow of events, which describes a sequence of actions within the use case, written in natural language. All possible alternative flows are mandated to be grouped within the single related use case, thus defining a "use-case class" or, more commonly, simply a "use case". Instances of the use case are scenarios, which emphasize one particular sequence of actions.

RUP Template
1. Use-case Name
   1.1 Brief Description
   1.2 Actions
   1.3 Triggers
2. Flow of Events
   2.1 Basic Flow
   2.2 Alternative Flows
       2.2.1 <First Alternative Flow>
       2.2.2 <Second Alternative Flow>
3. Special Requirements
   3.1 <First special requirement>
4. Pre-Conditions
   4.1 <Pre-condition One>
5. Post-Conditions
   5.1 <Post-condition One>
6. Extension Points
   6.1 <Extension Point 1>

Fig. 2. Use-case template as used in RUP [18]

3.3 Regnell's Model

Regnell's use-case model [19] hides the complexity of the use case in three levels:
• Environment level
• Structure level
• Event level

Each level can have corresponding metrics for the size of the use case. For example, at the environment level, only the actors, stakeholders and their goals are considered, together with the services offered to the actors/stakeholders to meet their goals. The structure level is mainly concerned with the normal flow and the alternative flows, indicating the breadth and the depth of the use case. At the event level, the individual actions are captured and thus, at this level, the concept of atomic actions and the number of actions as a measure of size seems appropriate. Regnell [19] also proposes a use-case algebra to express the actions at the event level. This proposed algebra attempts to add formality to what is, in essence, an informal approach [20].

3.4 A Brief Comparison

The RUP model concentrates on the structure and event levels and does not include environment-level details such as stakeholders and goals. The structure and event levels are described in detail, but the event level is expressed in unstructured natural language, leading to ambiguities and problems in the interpretation of the use-case details. Thus, the RUP model for use cases appears to have serious limitations when the overall goals and the stakeholders' interests are not taken into account explicitly in the use case. In that sense, Cockburn's method seems more relevant, since he adds the environment-level details that the RUP model lacks. Cockburn's model expresses all three levels of Regnell's use-case model but, again, uses natural language to express event-level details. This is a limitation of the method, since it introduces ambiguities in expressing the event-level actions.

3.5 Writing Textual Use Cases

Whichever model of use cases is used, we need finally to write the use cases in natural language or in some semi-formal or formal language that expresses the customer's requirements. This is the core part of the use case, since many of the errors in requirements that are not identified until later phases are due to the informal way in which this part of the use-case construction is handled.
Use-case descriptions are often written in a natural language such as English. This introduces ambiguities into the expressions and frequently fails to translate the real-world problem into precise requirements. Thus, a standard or template is required that constrains the author to write in a specific way. Some progress has been made in the CREWS-SAVRE project [22], which suggests an incrementally guided process of use-case specification. Its authors discuss the linguistic patterns and structures that help in the writing of use-case specifications and relate the use-case model to natural language. Cockburn [8] offers style guidelines to assist in writing the narrative prose, together with contents guidelines to advise the author on the expected contents of the prose. Other guidelines are given in [23].
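To make the three levels of Regnell's model (Section 3.3) concrete, they can be sketched as a minimal data structure. This is purely an illustration: the class and field names below are hypothetical and are not part of Regnell's formal use-case algebra.

```python
# Sketch of Regnell's three-level use-case model as a data structure.
# Class and field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AtomicAction:
    """Event level: an action that cannot be decomposed without leaving the domain."""
    description: str


@dataclass
class Flow:
    """Structure level: a named sequence of atomic actions (main or alternative)."""
    name: str
    actions: List[AtomicAction] = field(default_factory=list)


@dataclass
class UseCase:
    """Environment level: actors, stakeholders and goals, wrapping the flows."""
    name: str
    actors: List[str] = field(default_factory=list)
    stakeholders: List[str] = field(default_factory=list)
    goals: List[str] = field(default_factory=list)
    main_flow: Optional[Flow] = None
    alternative_flows: List[Flow] = field(default_factory=list)


# Minimal usage: one use case with a two-step main flow and no alternatives.
example = UseCase(
    name="Demo",
    actors=["A01-User"],
    stakeholders=["S01-User"],
    goals=["achieve the service offered"],
    main_flow=Flow("main", [AtomicAction("user requests service"),
                            AtomicAction("system responds")]),
)
print(len(example.main_flow.actions))  # 2
```

Representing each level explicitly in this way is what makes level-specific size metrics (counts of actors, flows or atomic actions) straightforward to gather.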
4. Metrics for Requirements

In engineering in general, it is important to be able to estimate the resources needed to construct a product. The size attribute is an important input to the process of resource estimation [24]. In software engineering, the estimation of the scale and scope of a project is difficult to do accurately at an early stage of development. At the same time, early estimates can be very important to the success of a project, since resource allocations are often made on their basis (e.g. [25]). Metrics that can be gathered from requirements include:
• requirements size (pages of specification, number of requirements, function points etc.)
• requirements completeness (everything the software is to do is included; all responses of the software to realizable classes of input data in all realizable situations are included; all pages, figures, tables etc. are numbered and referenced and all terms and units are provided; no sections are marked TBD (to be determined)) [26]
• requirements defect density (from inspection)
• traceability between requirements (ensures the origin of each requirement is clear; facilitates the referencing of each requirement in future enhancements)

Requirements quality can be measured in terms of volatility, traceability, consistency and completeness. Volatility is the degree to which requirements change over a finite time period (e.g. [27]). Traceability can be from requirement to requirement, from requirement to design and from requirement to test [28]. Normally a requirements traceability matrix is used. Requirements completeness metrics are used to assess when a requirement is too complex, at the wrong level, or too superficial [25]. Cockburn [8] offers as measures the categories of small, medium, large and huge. With only four categories of size, the range of the size of a project within a category is significant. The ability to produce precise metrics depends on the available historical data and on how closely past activities resemble present ones. A similarly rough quantitative approach [29] quotes work of Karner based on a function point approach.
It is suggested that useful metrics can be derived from counts of:
• the number of actors, weighted by their complexity (weights of 1, 2 or 3)
• the number of use cases, weighted by the number of transactions that they contain (weights of 5, 10 or 15) or
• the number of analysis classes used to implement each use case, weighted by their complexity (weights of 5, 10 or 15)

Note that, as well as the complexity factors being arbitrary and coarse-grained, use cases (for requirements engineering) are at a high abstraction level. They do not, therefore, contain information on either transactions or classes, these being too fine-grained. In addition, there is a many-to-many, not a one-to-one (as implied above), relationship between classes and use cases. From these numbers, the authors simply create a cumulative sum called UUCP (unadjusted use-case points), which is then subject to further subjective (and complicated) weightings to obtain a final count of use-case points (UCP); these latter calculations bear a strong resemblance to the weighting calculations of function points. Effort is then calculated as x · UCP, where x is a figure representing effort in person-hours per UCP, a value that varies widely.

A very different approach is preferred by Constantine and Lockwood [30]. They suggest five metrics, linked to GUI and essential use cases. The three relating to essential use cases are at a higher abstraction level than our focus here, so they are not discussed further in this paper.

Marchesi [31] suggests three primary metrics:

UC1 = total number of use cases
UC2 = overall number of communications between actors and use cases
UC3 = total number of communications between actors and use cases, neglecting include and extend structures

Marchesi then argues that an estimate of the global complexity of the system is given by

UC4 = K1 · UC1² + UC3 + K2 · [smm(C) - smm(E)]

where the coefficients K1 and K2 are computed empirically, smm(M) is the sum of all elements of a matrix M, C is the matrix of communications between use cases and actors, and E is derived from C (for details see the original paper). The squaring of UC1 is said to be "for homogeneity reasons"! UC4 is said to be proportional to system complexity, although no coefficient of proportionality is discussed and no justification for the statement is given. There is stated to be significant subjectivity in the calculation of the metric.

Having chosen a use-case model and then using some guidelines for constructing the use cases, the use cases can be specified in a form ready to be measured. Here, we use primarily the hierarchical use-case model of Regnell [19] together with the CREWS-SAVRE approach for writing the descriptions of the use case. Actions can be either atomic actions or a compound of atomic actions called a flow of actions. An atomic unit that is common to all representations of use cases would give any measures based on it the widest potential application. Since actions are common to all use-case representations, we propose that atomic actions be used as a unit of measurement for use cases, where an atomic action is one that cannot be further decomposed without leaving the domain [13, p219]. Once we have something we can count, we can define metrics based on it. For example, we propose the following size measures:

1. Number of atomic actions in the main flow
2. Number of atomic actions in each alternative flow
3. The longest path from the first atomic action of the use case to the final atomic action of the use case
4. Number of alternative flows (representing the breadth of the use case [32])

Alternative flows are measured from the start of the use case to its termination.
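Given flows recorded as lists of atomic actions, measures 1-4 can be computed mechanically. The sketch below assumes this simple representation (the function and field names are illustrative, not part of any standard); the longest path is approximated here as the length of the longest single flow, each alternative flow being measured from the start of the use case to its termination.

```python
# Sketch: size measures 1-4, assuming each flow is recorded as a list of
# atomic action descriptions. Names are illustrative only.

def size_measures(main_flow, alternative_flows):
    """main_flow: list of atomic actions.
    alternative_flows: dict mapping flow name -> list of atomic actions,
    each measured from the start of the use case to its termination."""
    return {
        # Measure 1: number of atomic actions in the main flow.
        "actions_in_main_flow": len(main_flow),
        # Measure 2: number of atomic actions in each alternative flow.
        "actions_per_alternative_flow": {
            name: len(flow) for name, flow in alternative_flows.items()
        },
        # Measure 3 (simplified): longest path from first to final atomic action,
        # taken here as the length of the longest complete flow.
        "longest_path": max([len(main_flow)] +
                            [len(flow) for flow in alternative_flows.values()]),
        # Measure 4: number of alternative flows (breadth of the use case).
        "number_of_alternative_flows": len(alternative_flows),
    }


m = size_measures(["step 1", "step 2", "step 3"], {"alt1": ["step a", "step b"]})
print(m["actions_in_main_flow"], m["longest_path"])  # 3 3
```

A use case with a three-action main flow and one two-action alternative thus scores 3 on measures 1 and 3, and 1 on measure 4.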
Apart from the above metrics, we need to consider the environment-level factors [19] that will contribute to the complexity of the use case independently of the size measures. We suggest the following measures to account for environment factors:

5. Number of stakeholders
6. Number of actors
7. Total number of goals

We consider these to be complexity factors in that use cases with the same number of atomic actions can have different values for each of the environment factors. When those latter values change, it will, intuitively, alter the effort associated with the use
case. For instance, when more goals must be met, the use case must be reviewed for each one and conflicts must be resolved. The following composite metrics can be derived from the above measures:

8. Total number of atomic actions in the alternative flows
9. Total number of atomic actions in all flows
10. Number of atomic actions per actor
11. Number of atomic actions per goal
12. Number of goals per stakeholder

As an example, we analyze the use case "Identify Assets" from the small business loans system [33, pp191-193]. This is given in Table 1. Some use-case actions are expressed as compound statements to avoid repeating similar statements; hence, a dimension factor is included to indicate how many times the action is required. The values of the calculated use-case metrics are given in Table 2. It can be seen that there are no extreme values, suggesting a reasonable balance across the main and alternative flows. What is now required is for these metrics to be collected from industry projects and related to external characteristics [34] such as maintainability, effort of implementation etc.

Table 1. Use-case example, modified from use case: Identify Asset [33]

Use Case Thumbnail
UC100-IdentifyAsset (version 0.9)

Use Case Description
This use case describes the process of identifying a single asset of a borrower, so that it can be used by the securities and assets officer of the bank in order to assess whether it can be used as collateral for the purpose of granting a loan to the borrower.

Pre-Conditions
Asset details (as a single piece of free-form text) must be provided by the customer (a potential borrower at this stage) before any classification and verification process can start. The A07-Securities&AssetsOfficer should already have such data in hard copy (the actual loan application form) and/or soft copy at hand.

Post-Conditions
None
Actors
A03-SmallBusinessBorrower
A07-Securities&AssetsOfficer
A09-Database

Stakeholders
S01=A03-SmallBusinessBorrower
S02=A07-Securities&AssetsOfficer
S03 Business Manager
S04 Board of Directors
Goals
A03-SmallBusinessBorrower – to provide information related to their asset; to assist in the calculation of the net asset value available as collateral
A07-Securities&AssetsOfficer – to calculate the net assets of the borrower; to calculate the collateral by balancing the net assets against the loan amount requested; to store all details in the database
A09-Database – to store the asset details

Assumptions
1. Number of Asset Details = 4 (used in main and alternative flow)

Use Case Text
1. A03-SmallBusinessBorrower provides name of asset to be offered as collateral
2. A03-SmallBusinessBorrower provides details of asset to be offered as collateral (dimension = 4)
3. A07-Securities&AssetsOfficer records name of the asset
4. A07-Securities&AssetsOfficer records details of the asset (dimension = 4)
5. A07-Securities&AssetsOfficer queries the system to see if the asset already exists in the system (Alternative 1)
6. System prompts for asset name
7. A07-Securities&AssetsOfficer records name of the asset
8. System prompts for asset details (dimension = 4)
9. A07-Securities&AssetsOfficer records details of the asset (dimension = 4)
10. Asset name is sent to the A09-Database
11. Asset details are sent to the A09-Database (dimension = 4)

Alternative Courses
Alternative 1: Asset already exists in the system (as made available by the borrower as an existing customer of the bank)
1. System provides asset details (dimension = 4)
2. A07-Securities&AssetsOfficer verifies the details of the asset with A03-SmallBusinessBorrower (dimension = 4)

Constraints
None
Table 2. Results of metrics for the use-case example of Table 1.

Metric                                               Value of metric
Number of atomic actions in main flow                26
Number of atomic actions in alternative flows        Alternative 1: 8
Longest path                                         26
Number of alternative flows                          1
Number of stakeholders                               4
Number of actors                                     3
Total number of goals                                3

Derivative metrics
Total number of atomic actions in alternative flows  8
Total number of atomic actions in all flows          34
Number of atomic actions per actor                   11.33
Number of atomic actions per goal                    11.33
Number of goals per stakeholder                      0.75
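As a cross-check, the counts in Table 2 can be reproduced with a short script. This is only a sketch: the dimension factors are transcribed from the steps of Table 1, and goals are counted per actor (giving 3), matching the counting convention used in Table 2.

```python
# Reproducing the Table 2 counts from Table 1. Each list entry is the
# dimension factor of one numbered step; a dimension of 4 reflects the
# Table 1 assumption that there are four asset details.
main_flow_dims = [1, 4, 1, 4, 1, 1, 1, 4, 4, 1, 4]  # main flow, steps 1-11
alt1_dims = [4, 4]                                   # Alternative 1, steps 1-2

actors, stakeholders, goals = 3, 4, 3  # goals counted per actor, as in Table 2

main = sum(main_flow_dims)       # atomic actions in the main flow
alt_total = sum(alt1_dims)       # atomic actions in the alternative flow
all_flows = main + alt_total     # atomic actions in all flows

print(main, alt_total, all_flows)          # 26 8 34
print(round(all_flows / actors, 2))        # 11.33 atomic actions per actor
print(round(all_flows / goals, 2))         # 11.33 atomic actions per goal
print(goals / stakeholders)                # 0.75 goals per stakeholder
```

The script yields exactly the main-flow count of 26, the alternative-flow count of 8 and the derivative values shown in Table 2.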
5. Conclusion

Use-case modelling, whilst commonly used for documenting requirements, needs to be standardized before reliable and repeatable metrics can be obtained. We have set out the requirements for such a standardization and have also proposed the basic metrics likely to be useful. These are based primarily on the use-case models of [8,19] and offer a first attempt to create a size measure for use cases. The next step is to evaluate the important research questions of whether use-case attributes such as size (and complexity) can be controlled and whether such use-case metrics can be sufficiently rigorous for estimating effort. Finally, we note that this added rigour applied to use cases should improve the consistency and quality of communication between client and developer, helping to ensure that the right system is built.
Acknowledgements

This is Contribution number 02/13 of the Centre for Object Technology Applications and Research of the University of Technology, Sydney. We wish to thank the Australian Research Council for financial support through the ATN Grant scheme.
References

1. Nuseibeh, B. and Easterbrook, S., 2000, Requirements engineering: a roadmap, in Future of Software Engineering (22nd IEEE Int. Conf. on Software Engineering), 35-46
2. Davis, A.M., Jordan, K. and Nakajima, T., 1997, Elements underlying specifications of requirements, Annals of Software Engineering, 3, 63-100
3. Jacobson, I., Christerson, M., Jonsson, P. and Övergaard, G., 1992, Object-Oriented Software Engineering: A Use Case Driven Approach, Addison-Wesley
4. Constantine, L.L., 1997, The case for essential use cases, Object Magazine, 7(3), 72-70
5. Korson, T., 1998, The misuse of use cases (managing requirements), Object Magazine, 8(3), 18-20
6. Leffingwell, D. and Widrig, D., 2000, Managing Software Requirements: A Unified Approach, Addison-Wesley
7. Kosters, G. et al., 2001, Coupling use cases and class models as a means for validation and verification of requirements specification, Req. Eng. Journal, 6(1), 3-17
8. Cockburn, A., 2000, Writing Effective Use Cases, Addison-Wesley
9. Jacobson, I., 1994a, Basic use-case modeling, ROAD, 1(2), 15-19
10. OMG, 2001, OMG Unified Modeling Language Specification, Version 1.4, September 2001, OMG document formal/01-09-67 [Online]. Available: http://www.omg.org
11. Jacobson, I., 1994c, Use cases and object, ROAD, 1(4), 8-10
12. Jacobson, I., 1994b, Basic use-case modeling (continued), ROAD, 1(3), 7-9
13. Graham, I., 1998, Requirements Engineering and Rapid Development: An Object-Oriented Approach, Addison-Wesley
14. Simons, A.J.H. and Graham, I., 1998, 37 things that don't work in object-oriented modelling with UML, BCS Obj.-Oriented Prog. Sys. Newsletter, 35 (eds. S. Kent and R. Mitchell)
15. Simons, A.J.H. and Graham, I., 1999, 30 things that go wrong in object modelling with UML 1.3, chapter 16 in: Behavioral Specifications of Businesses and Systems (eds. H. Kilov, B. Rumpe and I. Simmonds), Kluwer Academic Publishers, 237-257
16. Simons, A.J.H., 1999, Use cases considered harmful, Procs. TOOLS 29 (eds. R. Mitchell, A.C. Wills, J. Bosch and B. Meyer), IEEE Computer Society, 94-203
17. Lilly, S., 1999, Use case pitfalls: top 10 problems from real projects using use cases, Procs. TOOLS 30 (eds. D. Firesmith, R. Riehle, G. Pour and B. Meyer), IEEE Computer Society Press, 174-183
18. Kruchten, Ph., 2000, The Rational Unified Process: An Introduction, Second edition, Addison-Wesley
19. Regnell, B., 1996, Hierarchical use case modelling for requirements engineering, Report Number 120, Doctoral Thesis, Dept Communication Systems, Lund University, Sweden
20. Cockburn, A., 1997a, Goals and use cases, J. Obj.-Oriented Progr., 10(5), 35-40
21. Cockburn, A., 1997b, Using goal-based use cases, J. Obj.-Oriented Progr., 10(7), 56-62
22. Achour, C.B., Rolland, C., Maiden, N.A.M. and Souveyet, C., 1999, Guiding use case authoring: results of an empirical study, Procs. Fourth IEEE International Symposium on Requirements Engineering (RE99), University of Limerick, Ireland
23. Firesmith, D.G., 1999, Use case modeling guidelines, Procs. TOOLS 30 (eds. D. Firesmith, R. Riehle, G. Pour and B. Meyer), IEEE Computer Society Press, 184-193
24. Verner, J.M. and Tate, G., 1992, A software size model, IEEE Trans. Soft. Eng., 18(4), 265-278
25. Costello, R.J. and Liu, D.-B., 1995, Metrics for requirements engineering, Journal of Systems Software, 29, 39-63
26. Davis, A., 1993, Software Requirements Analysis and Specification (2nd ed.), Prentice Hall
27. Zowghi, D., Offen, R. and Nurmuliani, 2000, The impact of requirements volatility on the software development lifecycle, Procs. Int. Conf. on Software, Theory and Practice (ICS2000), IFIP World Computer Conference, Beijing, China, August 2000
28. Gotel, O.C.Z. and Finkelstein, A.C.W., 1994, An analysis of the requirements traceability problem, Procs. First Int. Conf. on Requirements Engineering (ICRE94), 94-101
29. Schneider, G. and Winters, J.P., 1998, Applying Use Cases: A Practical Guide, Addison-Wesley
30. Constantine, L.L. and Lockwood, L.A.D., 1999, Software for Use, Addison-Wesley
31. Marchesi, M., 1998, OOA metrics for the Unified Modeling Language, Euromicro Conference on Software Maintenance and Reengineering
32. Whitmire, S.A., 1997, Object Oriented Design Measurement, John Wiley & Sons
33. Henderson-Sellers, B. and Unhelkar, B., 2000, OPEN Modeling with UML, Addison-Wesley
34. Fenton, N.E., 1994, Software measurement: a necessary scientific basis, IEEE Trans. Soft. Eng., 20, 199-206