Using a Functional Size Measurement Procedure to Evaluate the Quality of Models in MDD Environments

BEATRIZ MARÍN, Universidad Diego Portales
GIOVANNI GIACHETTI, Universidad Politécnica de Valencia
OSCAR PASTOR, Universidad Politécnica de Valencia
TANJA E. J. VOS, Universidad Politécnica de Valencia
ALAIN ABRAN, Université du Québec

Models are key artifacts in Model-Driven Development (MDD) methods. To produce high-quality software by using MDD methods, quality assurance of models is of paramount importance. To evaluate the quality of models, defect detection is considered a suitable approach and is usually applied using reading techniques. However, these reading techniques have limitations and constraints, and new techniques are required to improve the efficiency of finding as many defects as possible. This paper presents a case study that has been carried out to evaluate the use of a Functional Size Measurement (FSM) procedure in the detection of defects in models of an MDD environment. To do this, we compare the defects and the defect types found by an inspection group with the defects and the defect types found by the FSM procedure. The results indicate that the FSM procedure is useful since it finds all the defects related to a specific defect type, it finds different defect types than an inspection group, and it finds defects related to the correctness and the consistency of the models.

Categories and Subject Descriptors: D.2.4 [Software Engineering]: Software/Program Verification – correctness proof, validation

General Terms: Measurement, Design, Experimentation, Verification

Additional Key Words and Phrases: Case Study, Defect Detection, Functional Size, Model-Driven Development

ACM Reference Format:
Marín, B., Giachetti, G., Pastor, O., Vos, T. E. J., and Abran, A. 2012. Using a Functional Size Measurement Procedure to Evaluate the Quality of Models in MDD Environments. ACM Trans. Softw. Eng. Methodol.
DOI = 10.1145/0000000.0000000 http://doi.acm.org/10.1145/0000000.0000000

1.

INTRODUCTION

Software development methods are being continuously improved by researchers aiming at producing software at lower costs, in a faster way, and with a higher level of quality by reusing resources. Model-Driven Development (MDD) methods are targeting the same objectives [Hailpern and Tarr 2006]. MDD methods separate the

Authors’ addresses: B. Marín, Escuela de Ingeniería Informática, Facultad de Ingeniería, Universidad Diego Portales, Chile, E-mail: [email protected]; G. Giachetti, Escuela de Informática, Facultad de Ingeniería, Universidad Andrés Bello, Chile. E-mail: [email protected]; O. Pastor, T. E. J. Vos, Centro de Investigación en Métodos de Producción de Software, Universidad Politécnica de Valencia, Spain, E-mail: {opastor, tvos}@pros.upv.es; A. Abran, Department of Software Engineering and Information Technology, École de Technologie Supérieure, Université du Québec, Canada, E-mail: [email protected]

This paper is a revised and extended version of a short paper that was presented at ESEM 2010 in Bolzano-Bozen, Italy.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or [email protected].

© 2012 ACM 1049-331X/2012/03-ART39 $10.00 DOI 10.1145/0000000.0000000 http://doi.acm.org/10.1145/0000000.0000000


business logic from the platform technologies in order to allow automatic generation of software through well-defined model transformations [Selic 2003]. To do this, MDD methods combine Domain-Specific Modeling Languages (DSMLs) [Selic 2007] and tools for model transformations and code generation to express domain concepts effectively and to alleviate the complexity of implementation platforms [Schmidt 2006].

In order to produce high-quality software by using MDD methods, quality assurance techniques must be developed [Mohagheghi and Aagedal 2007] for the three main components of an MDD method: the models, the transformation engines, and the code generators. Of these components, we believe that the quality assurance of the models is the most important one since it directly affects the quality of the model transformations and the code generation.

To evaluate the quality of models used in MDD methods, various approaches have been proposed using different techniques [Moody 2005], such as consensus-based techniques, theory-based techniques, experience-based techniques, and observation-based techniques:

— Consensus-based techniques require consensus among experts in the field of conceptual modeling as well as expert practitioners. This approach has the broadest support for developing a framework to evaluate the quality of models because it rests on a broad agreement among experts from academia and industry. Therefore, it is not necessary to convince practitioners to use the approach, since it represents their collective wisdom. However, reaching consensus is an arduous and long process that needs the coordination of several experts. An example of a consensus-based quality framework is the ISO 9126 standard [ISO/IEC 2001].

— Theory-based techniques are strongly grounded in theory, which usually emerges from academia, and there is no a-priori reason why these techniques are likely to be effective in practice. It is necessary to perform empirical studies with real models to demonstrate their effectiveness in practice. An example of a theory-based technique is the quality framework defined by [Lindland et al. 1994].

— Experience-based techniques involve one or two expert practitioners who generalize and codify, from their practical experience and tacit knowledge, what they consider a good model (for instance, [Davenport and Prusak 1998]). The limitation of this approach is that the resulting quality frameworks represent the subjective views of a few people and have limited applicability to other contexts.

— Observation-based techniques consist in identifying defects in models and then deriving the quality characteristics that the identified defects reveal. In contrast to theory-based or experience-based approaches, which develop quality frameworks in a top-down manner from theory or experience, observation-based approaches build descriptive quality frameworks in a bottom-up manner based on the defects identified.

Thus, we consider an observation-based technique (also called a defect detection technique) to be the most suitable technique to evaluate the quality of models, since it provides results with a high level of empirical validity, which comes from the variety of models that are observed. However, if the models are not representative enough, the combination of different techniques would provide a suitable approach.

Approaches to detect defects in models are usually applied using reading techniques or rules (heuristics) defined from the experience of the researchers. However, the use of a single approach to find defects does not guarantee that all the defects will be found [Trudel and Abran 2008]. Thus, the use of several approaches is recommended in order to find as many defects as possible.

A Functional Size Measurement (FSM) method defines a set of rules to measure the size of software by quantifying the Functional User Requirements [ISO 1998]. To apply an FSM method to models, FSM procedures must be defined to provide a detailed description of the application of the FSM method [ISO 2004]. We advocate that an FSM procedure can be used to identify defects in the conceptual models of an MDD approach because it systematically analyzes all the conceptual constructs that participate in the system functionality. Thus, an FSM procedure constitutes a new approach to detect defects in conceptual models, which can be combined with other defect detection approaches in order to improve the quality of models. For instance, the FSM procedure can be used to identify defects related to the correctness and consistency of models, and then inspections of the models can be performed to identify semantic defects. Therefore, the arduous task of identifying defects by inspectors is alleviated by using the FSM procedure.

In this paper, we present a case study that aims to evaluate the use of an FSM procedure to detect defects in models of an MDD approach. To do this, the study compares the defects detected by an inspection group and the defects detected by an FSM procedure. Moreover, we determine the types of the defects found and the implications that these defects have on the quality of the models.

In the literature, there is no consensus about the definition of a case study [Runeson and Host 2009]. Several proposals use the term case study to refer to well-organized studies in the field or even to refer to small toy examples. There are also many software engineering proposals that use the case study methodology following the guidelines of the social sciences [Robson 2002] [Yin 2003] or the guidelines of information systems [Benbasat et al. 1987].
In this work, we use the definition provided for software engineering case studies by [Runeson and Host 2009], which states that a case study is an empirical method aimed at investigating contemporary phenomena in their context. Thus, in this case, the phenomena correspond to the defects of conceptual models used to generate final applications, and the context corresponds to the industrial MDD approach with the corresponding industrial modeling tool that allows the generation of final applications from the conceptual models.

We plan, conduct, and report the case study following the guidelines presented by Runeson and Host [Runeson and Host 2009]. These guidelines indicate that to plan a case study it is necessary to define the objective (what to achieve?), the case (what is studied?), the theory (or frame of reference), the research questions (what to know?), the methods (how to collect data?), and the selection strategy (where to seek data?). These guidelines also indicate that to conduct a case study it is necessary to have the design of the case study, prepare the data collection procedures, collect evidence by executing the studied case, analyze the collected data, and report the study. Finally, these guidelines indicate that to report a case study it is necessary to report the related work (context and early studies), the design of the case study (which includes the research questions, case and subjects selection, data collection procedures, analysis procedures, and validity procedures), the results of the case study (which includes case and subjects description, covering execution, analysis, and interpretation issues), and the conclusions and limitations of the study. We have rigorously followed these guidelines, presenting the context and early studies in Sections 2 and 3, the design of the case study in Section 4, the results of the study in Section 5, and our conclusions and further work in Section 6.

2.

RELATED WORKS

This section presents related works on some of the relevant approaches proposed to detect defects in conceptual models; it includes brief descriptions of the MDD approach and the FSM procedure selected for this study.

2.1 Defect Detection Approaches

Currently, there are several proposals to detect defects in models that can be used in an MDD context. These approaches usually apply reading techniques or rules derived from the experience of the researchers involved. We performed a systematic literature review according to [Kitchenham 2007] of the existing proposals that can be used to detect defects in conceptual models. Thus, the following research question was formulated:

RQ: What defect detection approaches are applied to object-oriented conceptual models?

The research question has been structured following the PICOC (Population, Intervention, Comparison, Outcome, and Context) criteria [Petticrew and Roberts 2005]. The population in this study is the domain of object-oriented conceptual models used to develop software. The intervention includes different defect detection approaches that identify different defect types in conceptual models. The comparison intervention is not applicable in our case, as our research question is not aimed at making a comparison. The outcomes of interest are the different types of defects that can be identified in conceptual models, the classifications of these defect types, and the quality characteristics that can be evaluated using these defect types. In terms of context and experimental design, we did not enforce any restrictions.

The search terms used in our systematic review were constructed using the following strategy:
• Derive major terms from the question by identifying the population, intervention, and outcome.
• Identify alternative spellings and synonyms for the major terms. This is done to minimize the effect of differences in terminologies.
• Use the Boolean OR to incorporate alternative spellings and synonyms.
• Use the Boolean AND to link the major terms.

Whenever a database did not allow the use of complex Boolean search strings, we designed different search strings for each of these databases.
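The search-string construction strategy described above can be sketched as a small script. This is an illustration only; the term lists below are abbreviated samples of the full lists reported in this section, and the helper names are our own:

```python
# Sketch of the Boolean search-string strategy: synonyms joined with OR,
# major facets (population, intervention, outcome) linked with AND.
# Term lists are abbreviated examples, not the complete review vocabulary.
population = ["object-oriented", "conceptual model", "diagram", "schema"]
intervention = ["defect detection", "fault detection", "defect inspection"]
outcomes = ["type", "classification", "quality characteristic"]

def or_group(terms):
    """Join alternative spellings/synonyms with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_query(*facets):
    """Link the major facets with AND."""
    return " AND ".join(or_group(f) for f in facets)

query = build_query(population, intervention, outcomes)
print(query)
```

For databases that reject complex Boolean strings, the same term lists can be re-combined into several simpler per-facet queries.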
We used the following search terms to create the search strings:
• Population: object-oriented, object oriented, object-orientation, object orientation, conceptual model, model, modeling, modelling, diagram, schema, schemata.
• Intervention: defect detection, fault detection, failure detection, error detection, defect identification, fault identification, failure identification, error identification, defect inspection, fault inspection, failure inspection, error inspection.
• Outcomes: type, classification, categorization, quality characteristic.

From the resulting approaches, we consider the following features relevant: the technique used to identify defects, the type of object-oriented conceptual models, the types of defects identified, the quality characteristics related to the defects identified, and the automation of defect detection. The search process was performed in relevant electronic databases. Candidate studies were obtained after revision of title and abstract. These candidate studies were analyzed by the reviewers applying the selection procedure and the inclusion/exclusion criteria to finally obtain 10 primary studies. Details of the systematic literature review are out of the scope of this paper.

The findings from our literature survey are summarized in Table I according to the inspection techniques used to find defects, the nature of the technique (consensus-based, theory-based, experience-based, observation-based), the models inspected, and the manual or automated tools used. The analyzed proposals present defect types that are related to the consistency amongst diagrams and to the correctness of the syntax of a particular diagram. Consistency is defined in the IEEE 610 standard [IEEE 1990] as the degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a system or component. Correctness is defined by the same standard [IEEE 1990] as the degree to which a system or component is free from faults in its specification, design, and implementation.

Table I shows that all the proposals for defect detection in conceptual models focus on UML models. However, it is well known that UML diagrams [OMG 2010] do not have enough semantic precision to allow the specification of unambiguous software applications [Berkenkötter 2008] [France et al. 2006] [Opdahl and Henderson-Sellers 2005], which is clearly observed in the semantic extension points that are defined in the UML specification [OMG 2010]. In addition, it is also well known that UML does not allow the functionality of an application to be completely specified [France, Ghosh, Dinh-Trong and Solberg 2006] (for instance, it is not possible to specify the presentation view of a system). For this reason, many methodologies (such as the Fusion Method [Coleman et al. 1994], the WebML Method [Moreno et al. 2007], and the OO-Method [Pastor and Molina 2007]) have selected a subset of UML diagrams and conceptual constructs and have added the needed semantic expressiveness in order to be able to completely specify the final applications in the conceptual model, making the implementation of the MDD technology a reality.

Table I. Approaches to detect defects in models

Author(s) | Inspection Technique | Nature of Technique | Models | Application
Guilherme Travassos et al. [Travassos et al. 1999] | Reading technique | Observation-based | UML Class, UML State-transition, and UML Sequence | Manual
Laitenberger et al. [Laitenberger et al. 2000] | Checklist-based and perspective-based reading techniques | Observation-based | UML Class, UML Collaboration, and Fusion Operation Model [Atkinson 1998] | Manual
Conradi et al. [Conradi et al. 2003] | Reading technique | Observation-based | UML Class, UML Sequence, UML State-transition, and UML Use-Case | Manual
Gomaa et al. [Gomaa and Wijesekera 2003] | Rules that were defined from their own expertise | Experience-based | UML Use-case, UML Sequence, UML Class, and COMET State transition [Gomaa 2000] | Manual
Kuzniarz et al. [Kuzniarz 2003] | Rules that were defined from their own expertise | Experience-based | UML Use-Case and UML Sequence | Manual
Berenbach [Berenbach 2004] | Rules that were defined from their own expertise | Experience-based | UML Use-case, UML Class, and UML Business | DesignAdvisor tool
Lange et al. [Lange and Chaudron 2004] | Rules that were defined from their own expertise | Experience-based | UML Use-case, UML Sequence, and UML Class | MetricView tool [Lange et al. 2007]
Leung et al. [Leung and Bolloju 2005] | Rules that were defined from their own expertise | Experience-based | UML Class | Manual
Bellur et al. [Bellur and Vallieswaran 2006] | Rules that were defined from their own expertise | Experience-based | UML Use-Case, UML Sequence, UML Class, UML State transition, UML Component, and UML Deployment | A consistency checking tool
Egyed [Egyed 2006] | Rules that were defined from their own expertise | Experience-based | UML Sequence, UML Class, and UML State transition | UML/Analyzer tool

Some of the proposals use reading techniques, which require instructing the inspection team on what to look for and how to scrutinize software documents in a systematic manner [Laitenberger, Atkinson, Schlich and Emam 2000]. Empirical studies have indicated that these techniques are effective for finding defects. However, reading techniques also have limitations; for instance, the manual inspection of models takes a lot of time, which increases the costs and delays the delivery date of software products. The remaining proposals for defect detection are based on rules defined by the researchers from their own expertise. However, since these works do not explicitly present the rationale for the definition of these rules for the specific diagrams analyzed, it is difficult to define new rules for these diagrams or for other kinds of diagrams.

In this paper, these limitations are tackled by using an FSM procedure as the technique to find defects in the conceptual models of a specific MDD approach that has been successfully applied in industry. This approach explicitly presents the rationale for the definition of rules to detect defects. Both the industrial MDD approach, called OO-Method, and the FSM procedure, called OOmCFP, are briefly presented below. The industrial approach (OO-Method) has a formal basis [Pastor et al. 1992] that allows the subset of UML conceptual constructs that are selected as building units of conceptual models to be characterized; i.e., the formal basis provides semantic precision to the OO-Method approach. This indicates how to proceed when projecting these ideas onto other similar MDD-based approaches. With such a formal basis, the experience reported in this work can be projected onto any other UML-based MDD method (such as WebML [Moreno, Fraternali and Vallecillo 2007], the COMET method [Gomaa 2000], and the VBAC-MDA approach [Fink et al. 2006]), provided that their conceptual constructs also have a precise and formal counterpart.

2.2 The OO-Method MDD Approach

The OO-Method approach is an object-oriented method that puts the MDD technology into practice [Pastor and Molina 2007] by separating the business logic from the platform technology, which allows the automatic generation of final applications by means of model transformations. The generated applications belong to the domain of Management Information Systems (MIS). The OO-Method software production process comprises four models: the Requirements Model, the Conceptual Model, the Logical Execution Model, and the Implementation Model (see Figure 1). These models have a direct correspondence with the models of the MDA architecture: the Computation-Independent Model (CIM), the Platform-Independent Model (PIM), the Platform-Specific Model (PSM), and the Implementation Model (IM).

Fig. 1. The software production process of the OO-Method MDD approach (Requirements Model → Conceptual Model → Logical Execution Model → Implementation Model, via model-to-model and model-to-text transformations).

In the Requirements Model [Insfrán et al. 2002] [España et al. 2009] [Pastor and Giachetti 2010], the systems analyst describes the requirements of the system using one or several of the following techniques: a mission statement, a function refinement tree, use-case diagrams, an i* model, etc. From the Requirements Model, the basis of the Conceptual Model can be semi-automatically generated. The Conceptual Model has four views to completely specify

the software system in an abstract way: a structural view, a functional view, a dynamic view, and a presentation view. The structural view must be defined by means of a class model. Once this view is properly defined, the analysts must define the functional view to indicate how the services that are specified in the structural view change the values of the attributes of the classes. Then, the dynamic view, which specifies the valid lives of objects, is defined by means of a state transition diagram and an object interaction diagram. Finally, the presentation view is defined to specify the graphical user interface and the interaction aspects of the intended system for the final user. With these four views, the Conceptual Model has all the details that are needed for the generation of the corresponding software application. More details on the conceptual constructs used by the OO-Method Conceptual Model can be found in [Pastor and Molina 2007].

From the OO-Method Conceptual Model, the corresponding software product can be generated by building the Logical Execution Model [Gómez et al. 1998]. The Logical Execution Model allows the transformation of the conceptual constructs of the Conceptual Model (which are defined in a formal language named OASIS [Pastor, Hayes and Bear 1992]) into their associated software representations. In fact, a Conceptual Model Compiler is responsible for executing that task. The OASIS language is also used to describe some particular conceptual constructs of the OO-Method Conceptual Model, such as the preconditions, the derivation formulas for the derived attributes, the body of the services, the integrity constraints, the transitions between states, the filter formulas, and the initialization of arguments. The specification of these conceptual constructs is essential to allow the generation of fully working applications from the OO-Method conceptual models.
Finally, the OO-Method Model Compiler automatically generates the Implementation Model, which corresponds to the final software applications. The OO-Method applications are generated with a three-tier architecture for several technological platforms, for instance: JSP, ASP, C#, EJB, VB, SQL, ORACLE, DB2. The OO-Method approach has been successfully applied in the software industry by means of an MDD tool created by the enterprise CARE-Technologies: OlivaNova The Programming Machine [CARE-Technologies 2011].

2.3 The OOmCFP FSM Procedure

The OOmCFP (OO-Method COSMIC Function Points) procedure has been developed to measure the functional size of the applications generated with the OO-Method MDD approach from their conceptual models [Marín et al. 2008b]. The OOmCFP measurement procedure was defined in accordance with the COSMIC measurement manual version 3.0 [Abran et al. 2007], meaning that a mapping between the concepts used in COSMIC and the concepts used in the OO-Method Conceptual Model has been defined [Marín et al. 2008a]. OOmCFP is structured in three phases: a strategy phase, a mapping phase, and a measurement phase.

In the strategy phase, the strategy to perform the measurement must be specified; i.e., the purpose, the scope, the granularity level, the layers, the boundaries, and the functional users of an OO-Method application must be defined. The purpose of the measurement in OOmCFP is defined as measuring the accurate functional size of the OO-Method applications generated in an MDD environment from the involved conceptual models. The scope of OOmCFP comprises all the functionality of an OO-Method application, which is completely specified in the OO-Method Conceptual Model. The granularity level is low since all the details in the OO-Method Conceptual Model are needed to generate final applications. The OO-Method applications are generated in a three-tiered architecture; therefore, the layers are the client tier, the server tier, and the database tier. In each layer of an OO-Method application, there is a piece of software that can interchange data with the pieces of software of the other

layers. Thus, three pieces of software are differentiated in an OO-Method application: the client piece of software, the server piece of software, and the database piece of software (see Figure 2). A boundary is a conceptual interface between the functional user and the piece of software that will be measured. The functional users in the OO-Method applications are the human users (represented by the agents of the class model), the Client piece of software, the Server piece of software, and the Legacy piece of software. These users interact (send or receive data) with the layers of an application and are also separated by a boundary from each layer of an OO-Method application (see Figure 2).
Fig. 2. Functional users and data movements in OO-Method applications.

In the mapping phase of OOmCFP, the significant elements of the conceptual models that contribute to the functional size of the OO-Method applications must be identified. The mapping phase includes the identification of functional processes, data groups, and data attributes. For this purpose, OOmCFP has 16 rules designed to reduce misinterpretation of the generic concepts of COSMIC and to facilitate the measurement of the functional size of OO-Method applications from their conceptual models. Some COSMIC concepts are: functional process (a set of Functional User Requirements comprising a unique, cohesive, and independently executable set of data movements), data group (a set of different attributes that describe an object of interest), and data attribute (the smallest piece of information of a data group) [ISO/IEC 2003]. Thus, some example rules are: each PIU, SIU, or MDIU of the presentation view is identified as a functional process; each class that is used in a functional process is identified as a data group; each attribute contained in a class that corresponds to a data group is considered a data attribute; etc. PIU is the OO-Method acronym for “Population Interaction Unit”; a PIU represents an entry-point for the application through the presentation of a set of instances of a class, where an instance can be selected and the corresponding set of actions and/or navigations specified in the Presentation Model are offered to the user. SIU is the OO-Method acronym for “Service Interaction Unit”; a SIU represents an entry-point for the application through the presentation of the arguments of a service of a class. MDIU is the OO-Method acronym for “Master Detail Interaction Unit”; an MDIU represents an entry-point for the application for the master part of the MDIU and an exit-point for the detail part of the MDIU.

In the measurement phase of OOmCFP, the functional size of the OO-Method applications is calculated from their conceptual models. In this phase, the identification of the data movements and the aggregation of the data movements must be performed. The data movements correspond to the movements of data groups between the users and the functional processes. A data movement can be an Entry (E), an Exit (X), a Read (R), or a Write (W). Each functional process has two or more data movements, and each data movement moves a single data group. OOmCFP has 74 rules to correctly identify all the data movements that can occur in the OO-Method applications (see Figure 2). Each rule is structured with a concept of the COSMIC measurement method, a concept of the OO-Method approach, and the cardinalities that associate these concepts. These rules have been validated by conducting empirical studies that evaluate the precision of the measurement results obtained by OOmCFP [Marín 2011]. To aggregate the data movements, OOmCFP has 5 measurement rules that allow the quantification of the functional size according to the unit defined for data movements in the COSMIC standard [ISO/IEC 2003]: 1 CFP (COSMIC Function Point) for each data movement. The measurement rules allow the measurement of the functional size of: each functional process from its data movements in the client layer; each functional process from its data movements in the server layer; each layer from its functional processes; and the whole OO-Method application from its layers. A complete description of OOmCFP can be found in its measurement guide [Marín 2010].

The OOmCFP procedure has been designed to obtain accurate measures of the applications that are generated from the OO-Method conceptual model [Marín et al. 2010b] and has been automated to provide measurement results in a few minutes using minimal resources [Marín et al. 2008c]. In this paper, we use the OOmCFP procedure to evaluate the quality of the conceptual model of a Photography Agency system by means of defect detection.

3.

DETECTING DEFECTS BY USING OOMCFP

The OOmCFP measurement procedure has rules that allow the measurement of all the conceptual constructs that contribute to the functionality of applications generated in an MDD environment. To do this, the OOmCFP procedure analyzes all the conceptual constructs and their relationships that are specified in a software model. In this process, it is possible to detect defects when the rules cannot be applied. These defects may impede the compilation of the involved conceptual models or may cause faults in the generated application.

The main concepts of the models that comprise the OO-Method conceptual model are well known because they are the same as those used in UML diagrams. Nevertheless, the OO-Method conceptual model has some slightly different conceptual constructs, which are essential to automate the generation of final applications from the corresponding conceptual models. These conceptual constructs are briefly described in the following paragraphs.

The class model of the OO-Method approach describes the static part of the system. This model allows the specification of classes, attributes, derived attributes, events, transactions, operations, preconditions, integrity constraints, agents, and relationships between classes. In this model, the agents are active classes that can access specific attributes of the classes of the model and that can execute specific services of the classes of the model. The functional model of the OO-Method approach allows the specification of the effects that the execution of an event has on the values of the attributes of the class that owns the event, by means of a valuation formula. The dynamic model of the OO-Method approach does not differ from the UML specification for state machines and collaboration diagrams. The presentation model has a set of abstract presentation patterns that are organized hierarchically in three levels: access structure, interaction units, and auxiliary patterns.
The first level allows the specification of the system access structure. Based on the menu-like view provided by the first level, the second level allows the specification of the interaction units of the system. The interaction units are groups of functionality that allow the users of the application to interact with the system. Thus, the interaction units of the interaction model represent entry points for the application, and they can be: a Service Interaction Unit (SIU), a Population Interaction Unit (PIU), a Master/Detail Interaction Unit (MDIU), or an Instance Interaction Unit (IIU), which represents the interaction with a single object of the system.

The third level of the presentation model allows the specification of the auxiliary patterns that characterize lower-level details about the behavior of the interaction units. These auxiliary patterns are: entry, selection list, arguments grouping, masks, filters, actions, navigations, order criteria, and display set. The display set pattern is used to specify which attributes of a class or its related classes will be shown to the user in a PIU or an IIU.

In order to determine the defect types of MDD conceptual models, the OOmCFP procedure was applied to three conceptual models of different functional sizes (small, medium, and large). These models correspond to a Publishing application (a small model), a Photography application (a medium model), and an Expense Report application (a large model). Appendix I presents some characteristics of these models regarding number of classes, number of attributes, number of services, number of relationships, number of interaction units, size of the XML file that represents the model in KB, and COSMIC functional size. In these models, 39 different defects were identified, which we grouped into 24 defect types (see [Marín et al. 2009] for details). Table II presents a set of rules of the OOmCFP measurement procedure that are related to the mapping between COSMIC and OO-Method, and the defect types that can be found using these rules.

Table II. Mapping rules of OOmCFP

COSMIC: Functional User
OOmCFP Rule 1: Identify one functional user for each agent in the OO-Method object model.
Defect 1: An object model without a specification of an agent class.

COSMIC: Functional Process
OOmCFP Rule 5: Identify one functional process for each interaction unit that can be directly accessed in the menu of the OO-Method presentation model.
Defect 2: An OO-Method conceptual model without a definition of the presentation model.
Defect 3: A presentation model without the specification of one or more interaction units.

COSMIC: Data Group
OOmCFP Rule 6: Identify one data group for each class defined in the OO-Method object model which does not participate in an inheritance hierarchy.
Defect 4: An object model without the specification of one or more classes.
Defect 5: A class without a name.
Defect 6: Classes with a repeated name.

COSMIC: Attributes
OOmCFP Rule 9: Identify the set of attributes of the classes defined in the OO-Method object model.
Defect 7: A class without the definition of one or more attributes.
Defect 8: A class with attributes with repeated names.
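The rules in Table II double as consistency checks: whenever a rule cannot find the construct it expects, a defect is recorded. The following Python sketch illustrates this idea for Defects 1 and 4-8; the dictionary-based model representation and the function name are hypothetical simplifications, not the actual OOmCFP implementation.

```python
def check_object_model(classes):
    """Simplified analogues of the checks behind Defects 1 and 4-8 (Table II)."""
    defects = []
    if not classes:
        defects.append("Defect 4: object model without classes")
    if not any(c.get("is_agent") for c in classes):
        defects.append("Defect 1: no agent class specified in the object model")
    names = [c.get("name") for c in classes]
    for c in classes:
        if not c.get("name"):
            defects.append("Defect 5: class without a name")
        elif names.count(c["name"]) > 1:
            defects.append(f"Defect 6: repeated class name '{c['name']}'")
        attrs = c.get("attributes", [])
        if not attrs:
            defects.append(f"Defect 7: class '{c.get('name')}' without attributes")
        elif len(attrs) != len(set(attrs)):
            defects.append(f"Defect 8: repeated attribute names in '{c.get('name')}'")
    return defects

# Toy model: no agent class, a duplicated attribute, and an attribute-less class.
model = [
    {"name": "Photographer", "is_agent": False, "attributes": ["id", "level", "id"]},
    {"name": "Report", "is_agent": False, "attributes": []},
]
for d in check_object_model(model):
    print(d)  # reports Defects 1, 8, and 7 for this toy model
```

Note that, as in OOmCFP, the checks are a by-product of traversing the model: no separate defect taxonomy is consulted at detection time.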

Table III presents a set of rules of the OOmCFP measurement procedure that are related to the identification of data movements that occur in display patterns and filter patterns. This table also presents the defect types that can be found using these OOmCFP rules.

Table III. OOmCFP rules to identify the data movements that occur in the display and filter patterns of the OO-Method conceptual model

OOmCFP Rules
Rule 10: Identify one X data movement for the client piece of software for each display pattern in the interaction units that participate in a functional process.
Rule 11: Identify one E data movement for the client piece of software, and one X and one R data movement for the server piece of software, for each different class that contributes attributes to the display pattern.
Rule 13: Identify one R data movement for the server piece of software for each different class that is used in the derivation formula of the derived attributes that appear in the display pattern.
Rule 16: Identify one R data movement for the server piece of software for each different class that is used in the filter formula of the filter patterns of the interaction units that participate in a functional process.

Defects
Defect 9: An instance interaction unit without a display pattern.
Defect 10: A population interaction unit without a display pattern.
Defect 11: A display pattern without attributes.
Defect 12: Derived attributes without a derivation formula.
Defect 13: A filter without a filter formula.
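As with Table II, the presentation-level rules can be read as executable checks. A sketch of the checks behind Defects 9-13, again using hypothetical data structures rather than OOmCFP's real code:

```python
def check_presentation(interaction_units):
    """Simplified analogues of the checks behind Defects 9-13 (Table III)."""
    defects = []
    for iu in interaction_units:
        display = iu.get("display")
        if iu["kind"] in ("IIU", "PIU") and display is None:
            # Defects 9-10: an IIU or PIU must have a display pattern
            defects.append(f"{iu['name']}: {iu['kind']} without display pattern")
        elif display is not None and not display.get("attributes"):
            # Defect 11: display pattern without attributes
            defects.append(f"{iu['name']}: display pattern without attributes")
        for attr in (display or {}).get("attributes", []):
            if attr.get("derived") and not attr.get("derivation_formula"):
                # Defect 12: derived attribute without a derivation formula
                defects.append(f"{iu['name']}.{attr['name']}: no derivation formula")
        for flt in iu.get("filters", []):
            if not flt.get("formula"):
                # Defect 13: filter without a filter formula
                defects.append(f"{iu['name']}.{flt['name']}: filter without formula")
    return defects

# Toy example: a PIU with no display pattern and an empty filter formula.
units = [
    {"name": "PhotographerList", "kind": "PIU", "display": None,
     "filters": [{"name": "ByLevel", "formula": ""}]},
]
print(check_presentation(units))  # two defects: missing display, empty filter formula
```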

Table IV presents a set of rules of the OOmCFP measurement procedure that are related to the identification of data movements that occur in services, and the defect types that can be found using these OOmCFP rules.

Table IV. OOmCFP rules to identify the data movements that occur in the services of the OO-Method conceptual model

OOmCFP Rules
Rule 20: Identify one R data movement for the server piece of software for each different class that is used in the effect of the valuation formula of the events that participate in the interaction units contained in a functional process.
Rule 21: Identify one W data movement for the server piece of software for each create event, destroy event, or event that has valuations (represented by the class that contains the service) that participates in the interaction units contained in a functional process.
Rule 22: Identify one R data movement for the server piece of software for each different class that is used in the service formula of the transactions, operations, or global services that participate in the interaction units contained in a functional process.
Rule 23: Identify one E data movement and one X data movement for the client piece of software, and one E data movement for the server piece of software, for the set of data-valued arguments of the services (represented by the class that contains the service) that participate in the interaction units contained in a functional process.
Rule 24: Identify one E data movement and one X data movement for the client piece of software, and one E data movement for the server piece of software, for each different object-valued argument of the services that participate in the interaction units contained in a functional process.
Rule 31: Identify one R data movement for the server piece of software for each different class that is used in the precondition formulae of the services that participate in the interaction units contained in a functional process.
Rule 32: Identify one X data movement for the client piece of software for all error messages of the precondition formulae of the services that participate in the interaction units contained in a functional process.
Rule 34: Identify one R data movement for the server piece of software for each different class that is used in the integrity constraint formulae of the class that contains each service that participates in the interaction units contained in a functional process.
Rule 35: Identify one X data movement for the client piece of software for all error messages of the integrity constraint formulae of the class that contains each service that participates in the interaction units contained in a functional process.

Defects
Defect 14: An event of a class of the object diagram without valuations.
Defect 15: A class without a creation event.
Defect 16: Transactions without a specification of a sequence of services (service formula).
Defect 17: Operations without a specification of a sequence of services (service formula).
Defect 18: Global services without a specification of a sequence of services (service formula).
Defect 19: A service without arguments.
Defect 20: A service with arguments with repeated names.
Defect 21: A precondition without the specification of the precondition formula.
Defect 22: A precondition without an error message.
Defect 23: An integrity constraint without the specification of the integrity formula.
Defect 24: An integrity constraint without an error message.
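Several of the service-related checks behind Table IV can likewise be expressed as straightforward model traversals. A sketch covering a subset of Defects 14-22 (hypothetical structures and names; the real OOmCFP rules operate on the full OO-Method metamodel):

```python
def check_services(classes):
    """Simplified analogues of some checks behind Defects 14-22 (Table IV)."""
    defects = []
    for c in classes:
        if not any(s.get("kind") == "create" for s in c["services"]):
            defects.append(f"{c['name']}: class without a creation event")  # Defect 15
        for s in c["services"]:
            qname = f"{c['name']}.{s['name']}"
            if s.get("kind") == "event" and not s.get("valuations"):
                defects.append(f"{qname}: event without valuations")        # Defect 14
            if s.get("kind") in ("transaction", "operation", "global") and not s.get("formula"):
                defects.append(f"{qname}: no service formula")              # Defects 16-18
            if not s.get("arguments"):
                defects.append(f"{qname}: service without arguments")       # Defect 19
            for p in s.get("preconditions", []):
                if not p.get("formula"):
                    defects.append(f"{qname}: precondition without formula")        # Defect 21
                if not p.get("error_message"):
                    defects.append(f"{qname}: precondition without error message")  # Defect 22
    return defects

# Toy class: no create event, an event without valuations, and a
# precondition whose error message is missing.
photographer = {
    "name": "Photographer",
    "services": [
        {"name": "edit_instance", "kind": "event", "valuations": [],
         "arguments": ["p_level"],
         "preconditions": [{"formula": "level > 0", "error_message": ""}]},
    ],
}
for d in check_services([photographer]):
    print(d)  # three defects for this toy class
```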

Based on the Conradi et al. proposal [Conradi, Mohagheghi, Arif, Hegde, Bunde and Pedersen 2003], we classify the defect types into:
• Omission: missing item.
• Extraneous information: information that should not be in the model.
• Incorrect fact: misrepresentation of a fact.
• Ambiguity: unclear concept.
• Inconsistency: disagreement between representations of a concept.
Thus, Defects 1, 2, 3, 4, 7, 9, 10, 15, 19, 22, and 24 correspond to omissions; Defects 5, 11, 12, 13, 14, 16, 17, 18, 21, and 23 correspond to incorrect facts; and Defects 6, 8, and 20 correspond to ambiguities. Therefore, we can state that the OOmCFP measurement procedure helps in the identification of defect types that are related to omissions, incorrect facts, and ambiguities in conceptual models. In particular, these defect types are related to structural models and interaction models.

Regarding the severity of defects, the Standard Classification of Software Anomalies (IEEE 1044) [IEEE 2009] assigns five values to this attribute: blocking (testing is inhibited or suspended pending correction or identification of a suitable workaround), critical (essential operations are unavoidably disrupted, safety is jeopardized, or security is compromised), major (essential operations are affected but can proceed), minor (nonessential operations are disrupted), and inconsequential (no significant impact on operations). According to these values, Defects 1, 2, and 4 are blocking; Defects 3, 5, 9, 10, 14, and 15 are critical; Defects 6, 7, 8, 11, 12, 13, 16, 17, 18, 20, 21, 22, 23, and 24 are major; and Defect 19 is minor. Thus, we can state that OOmCFP helps in the identification of blocking, critical, major, and minor defects.

To automatically detect defects in conceptual models by using OOmCFP, the OOmCFP tool has been updated (see [Marín et al. 2010a]). This tool has been tested to assure the validity of the defects found (more details can be found in [Marín 2011]). The process of the OOmCFP tool is the following:
1. An XML file that represents an OO-Method model is loaded into the tool. Then, the OOmCFP tool identifies the functional users and the functional processes by applying rules 1-10 of the OOmCFP measurement procedure. If one or more problems arise during the application of these rules, one defect is stored for each detected problem. The defects that may arise in this stage correspond to defect types 1-3 of Table II.
2. The OOmCFP tool identifies the elements that contribute to the functionality of the final system. One defect is stored for each problem produced during the application of OOmCFP rules 11-16. The defects that can be found in this stage correspond to defect types 4-8 of Table II.
3. All 74 rules of OOmCFP are applied to identify the data movements. If these rules cannot be applied, the tool stores the defects related to the involved modeling elements in order to present detailed information in the final report. These defects correspond to defect types 9-24 in Tables III and IV.
4. The tool aggregates the results obtained by analyzing the stored defects. If the model does not present any defect (there are no stored defects), the tool instead applies the measurement rules to calculate the functional size of the application that will be generated from this model.
5. Finally, the tool generates an XML file with the result obtained. This file contains either (1) the list of defects identified, with information about the involved modeling elements and the corresponding defects, or (2) the COSMIC functional size obtained by the application of the OOmCFP measurement procedure. This XML file is transformed into a user-friendly format: an HTML page and an Excel sheet.
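The five steps above can be sketched as a driver that short-circuits measurement whenever any defect is stored. All function names, the model representation, and the placeholder size computation below are hypothetical stand-ins for the real rule groups, assumed only for illustration.

```python
# Hypothetical stand-ins for the OOmCFP rule groups described in steps 1-3.
def identify_users_and_processes(model):        # step 1: rules 1-10
    return [] if model.get("presentation_model") else ["Defect 2: no presentation model"]

def identify_contributing_elements(model):      # step 2: rules 11-16
    return [] if model.get("classes") else ["Defect 4: no classes in the object model"]

def identify_data_movements(model):             # step 3: the remaining rules
    return []

def measure_functional_size(model):
    # Placeholder only: the real rules count E, X, R, and W data movements
    # (1 CFP each) per functional process.
    return 4 * len(model.get("classes", []))

def oomcfp_run(model):
    defects = (identify_users_and_processes(model)
               + identify_contributing_elements(model)
               + identify_data_movements(model))
    if defects:                                 # step 4: aggregate the stored defects...
        return {"defects": defects}
    return {"cfp": measure_functional_size(model)}  # ...or measure a clean model
    # step 5 (omitted): serialize the result to XML, HTML, and Excel

print(oomcfp_run({"presentation_model": True, "classes": ["Photographer", "Report"]}))  # → {'cfp': 8}
print(oomcfp_run({"classes": []}))              # reports Defects 2 and 4 instead of a size
```

The design choice mirrored here is that measurement and defect detection share one traversal: a model is only measured if every rule could be applied.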

4. CASE STUDY DESIGN

This section presents the design of the case study, which includes the objective of the study, the research questions, the case and subject selection, the data collection procedures, the analysis procedures, and the validity procedures. The case study corresponds to an exploratory study of the defects and the defect types that are found in the models of the OO-Method MDD approach by a group of reviewers and by the OOmCFP FSM procedure. The objective of this study is to evaluate the usefulness of the OOmCFP FSM procedure for detecting defects, by comparing the defects and defect types it finds with those found by the reviewers, and by determining the quality characteristics that the models fail to achieve due to these defects. Thus, we have formulated the following research questions:
• Q1: Are the defect types found by the inspection group the same as the defect types found by the OOmCFP FSM procedure?
• Q2: Are the quality characteristics related to the defects found by the inspection group the same as the quality characteristics related to the defects found by the OOmCFP tool?
• Q3: Is the OOmCFP FSM procedure efficient at finding defects related to a defect type?
• Q4: Is the OOmCFP FSM procedure useful for finding defects in models of an MDD environment?
The main idea behind the case study is to compare two techniques for finding defects in conceptual models: reading techniques and OOmCFP. Q1 is formulated precisely to obtain knowledge about the defect types found using reading techniques and the defect types found using the OOmCFP approach. Even though OOmCFP finds the defects it has been designed for, it is also interesting to explore the quality characteristics related to the defects found using both reading techniques and OOmCFP; for this reason, we have formulated Q2. Q3 is focused on determining the efficiency of applying a functional size measurement procedure to find defects. From our point of view, these questions are necessary to achieve the goal of the case study. Finally, Q4 was defined to obtain knowledge about the usefulness of OOmCFP in practice. In this case study, we are not evaluating the subjects' skills at finding defects; rather, we are evaluating the defect types that real conceptual models may have. For this reason, questions about the subjects' defect-finding skills were not formulated.

4.1 Case and Subject Selection

The case of the study is a software development project of a Management Information System that has been developed in an MDD environment. To ensure that the case study is performed in a real-world context, we selected a software project from an industrial partner that is using a tool that implements the OO-Method approach. We are aware that it is possible to define different model versions for the intended software system depending on the vision of different analysts. Consequently, in order to find defects that represent real-world defects introduced by analysts in their daily work, we selected a software project that has several model versions developed by different analysts. Therefore, the defects that the models may have are not biased by the researchers. In other words, we selected a software project that has several versions of its design models, which neither correspond to consecutive models nor to models of evolving software. Considering these selection criteria, we selected the Photography Agency project. This project was modeled by several groups of analysts in order to find the best way to represent a solution to the problem. Then, final applications were automatically generated from the models in the OO-Method MDD environment, so that the Photography Agency could choose the system that best fits its organization. From this project, we selected five versions of the conceptual model, each of which was developed by a different group of three novice analysts. Therefore, the defects in these models were not introduced on purpose. Nevertheless, since these five versions of the conceptual model correspond to the same project, they are similar to each other. For ethical considerations, the names of the analysts are kept confidential to avoid repercussions within the enterprise. Also, as part of the consent agreement, the versions of the

conceptual model that were selected for the case study do not correspond to the final versions of the model used to generate the Photography Agency application.

In order to understand the defects found, a brief description of the operation of the Photography Agency is presented below. The photography agency is dedicated to the management of photo reports and their distribution to publishing houses. This agency operates with freelance photographers, who must present a request to the production department of the photography agency. This request contains the photographer's personal information, a description of the equipment owned, a brief curriculum vitae, and a book showing the photographic projects performed by the photographer. An accepted photographer is classified in one of three possible levels, for each of which minimum photography equipment is required. For this, the technical department creates a new record for the photographer and saves it in the photographer's file. A new record with a sequential code is created for each photo project presented by a photographer. This record has the price that the publishing houses must pay to the agency, which is established according to the number of photos and the level of the photographer. This record also contains a descriptive annotation about the content of the project. Depending on the level of the photographer, the sales department establishes the price that will be paid to the photographer and the price that will be charged to the publishing house for each photo.

The subjects of the study are people with knowledge of the OO-Method MDD approach and the OO-Method modeling tool, with different levels of expertise: expert, intermediate, and novice. Expert analysts can correctly model and generate the final software system using the OO-Method approach; intermediate analysts can produce a complete model, but the model is not correct for software generation; and novice analysts cannot produce a complete model of a system and, hence, cannot generate the final application. The subjects were selected from the PROS Research Center. This group of subjects was made up of 16 researchers with different levels of expertise in the OO-Method approach and its modeling tool: 5 were expert, 5 were intermediate, and 6 were novice. In order to maintain the confidentiality of the subjects who participated in the study, we assigned the codes E1, E2, E3, E4, and E5 to the experts; I1, I2, I3, I4, and I5 to the intermediates; and N1, N2, N3, N4, N5, and N6 to the novices. This group of subjects does not have expertise in using the OOmCFP measurement procedure. This expertise was not required since the OOmCFP procedure is applied by the researchers using the OOmCFP tool. Regarding the inspection technique used, the subjects have some experience in reading design models and requirements specifications.

4.2 Data Collection Procedure

The data collection procedure was defined taking into account the triangulation that will be used to analyze the results obtained in the study. Triangulation means taking different angles towards the studied object in order to provide a broader picture [Runeson and Host 2009]. In this study, two types of triangulation were considered: data triangulation and theory triangulation. Data triangulation refers to using more than one data source or collecting the same data on different occasions. In this study, data triangulation is taken into account since there are five versions of the Photography Agency conceptual model that are reviewed. Theory triangulation refers to using alternative theories in the study. This type of triangulation is taken into account since the models will be reviewed using two different techniques: analysis of the models performed by the selected subjects and analysis of the models performed by the tool that automates the

application of the OOmCFP FSM procedure. In summary, the following common steps were defined to collect data for the five models:
1. Each model was reviewed for 30 minutes by an 8-person inspection group of analysts with different levels of expertise (three expert, three intermediate, and two novice). Work diaries with the defects found were completed by each subject for each model reviewed.
2. Each model was loaded into the OOmCFP tool by means of its XML representation, generated by the tool that implements the OO-Method approach. The OOmCFP tool delivered an Excel sheet with the defects found for each model, the corresponding defect types, and the time used in its analysis.

4.3 Analysis Procedure

Since the Photography Agency model versions have been developed by different analysts, the defects in the models are not known in advance. Thus, it is necessary to define a procedure to analyze the reported defects and distinguish between valid defects and invalid defects. Therefore, in the Photography Agency case study, a qualitative analysis procedure [Seaman 1999] was conducted in several steps. In the first step, the defects found by both the subjects and the OOmCFP tool were investigated in order to identify the valid defects. Then, the valid defects were classified into defect types. The defect types were coded using an editing approach (i.e., starting from a set of a priori codes that were extended and modified during the analysis). Each code was composed of the view of the conceptual model where the defect was found and the conceptual construct involved in the defect. If a defect involves several views, the code is composed of all the views related to that defect. Afterwards, the defects (valid or not) and the defect types with their corresponding codes constituted a body of knowledge that was used to answer the research questions formulated in this study. In addition, two independent variables and seven quantitative dependent variables were considered to investigate the research questions.

4.3.1. Independent variables.

• Industrial models with their intrinsic complexity.
• The techniques used to review the models:
(1) A horizontal reading technique and a vertical reading technique [Travassos, Shull, Fredericks and Basili 1999], which were used by the reviewers on the models selected for the study. The horizontal reading technique refers to reading software artifacts that are built in the same software lifecycle phase. The vertical reading technique refers to reading software artifacts that are built in different software lifecycle phases. In our case, the software artifacts correspond to design models. Thus, the horizontal review corresponds to the following: class models with respect to state transition diagrams, class models with respect to functional models, and class models with respect to presentation models. The vertical review corresponds to class models with respect to the textual requirements specification.
(2) An automated measuring system called OOmCFP that implements an ISO standard for Functional Size Measurement (i.e., COSMIC) as well as an FSM procedure defined to apply the standard to the conceptual model of an MDD approach. This automated measuring system also has a feature to find and display defects to the users.

4.3.2. Dependent variables.

a) Number of defects found by the inspection groups in each model, which corresponds to the total number of issues detected and classified as defects by the inspection groups.
b) Number of invalid defects found by the inspection groups in each model.
c) Number of defect types found by the inspection groups in each model, which corresponds to the classifiers that group similar defects according to the conceptual constructs.
d) Number of defects found by the OOmCFP tool in each model, which corresponds to the total number of defects detected by the OOmCFP tool.
e) Number of defect types found by the OOmCFP tool in each model, which corresponds to the classifiers of the defects found.
f) Time used by the inspection groups to find defects in each model, measured in minutes.
g) Time used by the OOmCFP tool to find defects in each model, measured in minutes.

4.4 Validity Procedure

The validity of a study denotes the trustworthiness of the results, i.e., to what extent the results are true and not biased by the researchers' point of view [Runeson and Host 2009]. There are three main aspects that describe the validity of a study: construct validity, internal validity, and external validity.

The construct validity reflects to what extent the variables that are studied really represent what the researchers have in mind and what is investigated according to the research questions. The following threats to the construct validity were identified in the case study:
• The defects found by people depend on the experience that each person has with the MDD approach and the OO-Method modeling tool. Expert people look for defects in complex conceptual constructs, in contrast to novice people, who look for defects in the most frequently used and basic conceptual constructs. To mitigate this threat, we selected people with different levels of experience (expert, intermediate, and novice).
• The defects found may not correspond to real defects in the model. To mitigate this risk, we investigated the defects found and the number of invalid defects detected in dependent variables a and b (see Section 4.3.2).

The internal validity expresses the extent to which the design and analysis may have been compromised by the existence of confounding variables and other unexpected sources of bias. The following threats to the internal validity were identified in the case study:
• The experience of people with the OO-Method approach and its modeling tool. Experienced people can find real defects in the models more quickly than people who lack experience. These are two things that are measured and that can be influenced by the experience of people, and not only by changing the independent variables. To mitigate this risk, we selected people with at least a basic knowledge of OO-Method and its modeling tool.
• The experience of people with the reading techniques. People with experience performing horizontal and vertical readings of OO-Method models can focus on common valid defect types. Thus, the experience of people regarding reading techniques can influence the number of invalid defect types identified. Training the subjects in the reading techniques can help to diminish this threat. Nevertheless, it is important to mention that the subjects were not trained to apply reading techniques in this case study.
• The models selected for the case study do not represent all the conceptual constructs of the OO-Method MDD approach since they do not have a presentation view. Thus, other conceptual constructs may have different related defect types.
• The model of a system represents only one way to model the system. To mitigate this risk, we selected alternative models of the system produced by different groups of analysts in order to take into account different modeling possibilities.
• The learning effect on the selected subjects during the inspections of the models. Several persons work on several versions of design models for the same system. Thus, the first time that they analyze the models, they need to understand the selected case, the concepts involved, and how the model satisfies the requirements of the system. Hence, the time that they take to find defects is expected to be longer for the first model analyzed. To mitigate this threat, we selected models that were developed by different groups of analysts; since different analysts take different decisions to better represent the system, the people involved need to understand how each model satisfies the system's requirements. In any case, if there was indeed a learning effect, it was in favour of the inspection, not OOmCFP; therefore, the comparison results are conservative.

The external validity is concerned with to what extent it is possible to generalize the findings, and to what extent the findings are of interest to other people outside the case study. Taking these aspects into account, the following threats to the external validity were identified in the case study:
• The representativeness of the selected case study models, since all the models correspond to a specific MDD approach. This can cause the findings to be valid only in this industrial context. Repeating the case study with other MDD approaches can give more information about the generalization of the results.
• The representativeness of the selected subject population, since the wrong people could be involved in the case study. To mitigate this threat, we recruited people with different expertise levels in the MDD approach. Therefore, the results can be generalized to other people.
• The representativeness of the inspection technique. We chose the inspection techniques that best fit the goal of the case study. However, repeating the case study with other inspection techniques might generate different results.

5. RESULTS

This section presents the execution of the case study and the analysis and interpretation of the collected data.

At this point, it is important to mention that the MDD modeling tool already detects some defects according to the OO-Method metamodel. These defects are related to the structural relationships among the conceptual constructs of the OO-Method metamodel; for instance, it is not possible to specify a precondition of an attribute because preconditions can only be related to services in the OO-Method metamodel. Since the OO-Method approach has a well-defined metamodel, the tool that implements OO-Method prevents analysts from performing some actions that infringe the structural properties of the metamodel when they are designing a model. Therefore, the modeling tool alleviates the difficult task of detecting defects in the models. However, defects related to the semantics of the conceptual constructs used in the models, or defects related to the values that are assigned (or not) to these conceptual constructs, are not addressed by the MDD tool. Precisely these defects are found by the inspectors and the OOmCFP tool, respectively; they correspond to defects that are not detected by the OO-Method modeling tool. Therefore, if the conceptual models have defects related to structural relationships, the MDD tool will detect them before the generation of the final application; i.e., the model compilation is not performed. Nevertheless, if the conceptual models have defects related to semantic or syntactical correctness, the MDD tool will generate a final application that has faults. To prevent these faults, inspections and OOmCFP are used to detect this kind of defect.

5.1 Execution Description

The five selected models were inspected both by the inspection group using reading techniques and by the tool that implements the OOmCFP FSM procedure. For the reading technique, each model was inspected by a group of eight subjects (three expert, three intermediate, and two novice). Table V shows how the 16 subjects were distributed across the 8-person inspection groups for each of the five models. To perform the inspections, the group was located in a room that had one computer for each inspector. Each inspection group received the models they were assigned to find defects in. Also, each person received a set of instructions to perform the inspections and a template that had to be completed during the inspection with the defects found for each inspected model and the time that had passed.

Table V. Distribution of subjects in the inspection groups for the five models
Model1: E1, E3, E4, I1, I3, I4, N1, N3
Model2: E2, E3, E4, I1, I4, I5, N2, N3
Model3: E2, E3, E4, I1, I2, I3, N2, N5
Model4: E2, E3, E4, I1, I2, I3, N2, N6
Model5: E2, E4, E5, I2, I3, I4, N3, N4
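The balanced composition of the groups in Table V (three experts, three intermediates, and two novices per model) can be verified mechanically from the subject codes:

```python
# Inspection groups as listed in Table V.
groups = {
    "Model1": ["E1", "E3", "E4", "I1", "I3", "I4", "N1", "N3"],
    "Model2": ["E2", "E3", "E4", "I1", "I4", "I5", "N2", "N3"],
    "Model3": ["E2", "E3", "E4", "I1", "I2", "I3", "N2", "N5"],
    "Model4": ["E2", "E3", "E4", "I1", "I2", "I3", "N2", "N6"],
    "Model5": ["E2", "E4", "E5", "I2", "I3", "I4", "N3", "N4"],
}
for model, members in groups.items():
    # The first letter of each code encodes the expertise level (E/I/N).
    counts = {level: sum(m[0] == level for m in members) for level in "EIN"}
    assert counts == {"E": 3, "I": 3, "N": 2}, (model, counts)
print("All five groups have 3 experts, 3 intermediates, and 2 novices.")
```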

For the tool that implements the OOmCFP measurement procedure, each model was loaded into the tool, which delivers an Excel sheet with the defects that have been detected in each model, the related defect types, and the time used to find these defects.

5.2 Analysis and Interpretation Issues

In the first step, the defects detected by the inspection groups in the five models were analyzed. To do this, the research team put all the defects found by the inspection groups for each model in an Excel sheet. If a defect was detected by more than one person in the inspection group of a model, it was counted only once for the analysis, since we are interested in the defects found and not in the skills of the inspectors. Figure 3 shows the number of distinct defects found by the inspection groups in the five models. For instance, in Model2, the defect "service edit_instance of class Photographer does not have a valuation4" was found by E3, E4, I4, N2, and N3. Each diagram of Figure 3 presents the number of defects detected by each person in the inspection group; for instance, in Model2, inspector E2 found 5 defects out of the 50 defects reported in total. However, some defects were detected by more than one inspector (defects located in the intersections among the circles). For instance, inspectors I4 and I5 found the same defect, so we counted it only once for our dependent variable: number of defects found by the inspection groups in each model. Therefore, there are only 36

4 Valuations are formulas used to assign values to the attributes of a class, written in a formal language called OASIS.

distinct defects found by the inspection group in Model2. The same rationale was applied to the defects found in the remaining four models.

[Figure 3 consists of five Venn diagrams, one per model, showing the number of defects found by each inspector and the overlaps among inspectors. The distinct-defect totals given in the diagrams are: Model1 = 36, Model2 = 36, Model3 = 36, Model4 = 52, and Model5 = 38.]

Fig. 3. Defects found by each inspector in Model1, Model2, Model3, Model4, and Model5.

Figure 3 also shows that novice inspectors found more defects than expert inspectors; e.g., in Model2, the novice inspectors (N2 and N3) detected a total of 24 defects, whereas the expert inspectors (E2, E3, and E4) detected a total of 17 defects. This occurs because the novice inspectors reported several invalid defects (10 defects) in contrast to the expert inspectors. The reason is that novice inspectors do not have enough knowledge of the modeling approach, which may prevent them from properly identifying when a modeling specification is incorrect or missing. Once the defects were reorganized so that each one was counted only once per model, it could be observed that N3 detected 13 defects that no other inspector reported. However, 10 of these defects were misinterpretations by the novice inspector N3. For this reason, the research team analyzed the distinct defects found in each model in order to discard the issues reported by the inspection groups that are invalid defects arising from misinterpretations. For instance, in Model4, the reported issue "the derivation formula of the attribute total of class DeliveryNoteEx is missing" is not a defect, since this formula is actually specified for this attribute. The inspection group reported a total of 52 issues in Model4; however, 13 of them do not correspond to real defects. Thus, Table VI presents the real defects of Model4 (39 defects) and the issues that do not correspond to defects (13 invalid defects). The same situation occurs in all five models (for details, see Appendix II). To illustrate the defects that were found by the inspectors in Model4, we labeled them D1, D2, D3, etc.
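The counting described above (taking the union of the inspectors' reports so that shared defects are counted once, then removing the issues judged to be misinterpretations) can be sketched in a few lines of Python. The inspector labels follow the paper's naming scheme, but the defect IDs and their assignment to inspectors are invented for illustration:

```python
from itertools import chain

# Hypothetical inspection reports for one model: inspector -> defect IDs found.
# The IDs and assignments are illustrative, not the study's actual data.
reports = {
    "E2": {"D1", "D2", "D3"},
    "I4": {"D2", "D4"},
    "I5": {"D2", "D4"},   # I4 and I5 reported the same defects
    "N3": {"D5", "D6"},
}

# A defect found by several inspectors is counted only once: take the union.
distinct = set(chain.from_iterable(reports.values()))

# Issues later judged to be misinterpretations are removed before analysis.
invalid = {"D6"}
valid = distinct - invalid

print(len(distinct), len(valid))  # 6 5
```

The same union-then-filter step was applied per model, yielding the valid/invalid splits reported in Tables VI and X to XIII.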

Table VI. Analysis of Defects found in Model4
Subjects | Number of Issues Detected | Valid Defects | Invalid Defects
E2, N6, I3 | 1 | D1 | -
E2 | 3 | D2, D7 | D3
E2, E4, I2 | 1 | D4 | -
E2, I2 | 1 | D5 | -
E2, I1 | 2 | D6 | D42
E3 | 2 | - | D8, D9
E4, E2 | 12 | D10, D11, D12, D13, D14, D15, D16, D17, D18, D19, D20, D21 | -
E4 | 12 | D22, D23, D24, D25, D26, D27, D28, D29, D30, D31, D32, D33 | -
E4, N2 | 4 | D34, D35, D36, D37 | -
I1 | 4 | - | D38, D39, D40, D41
I2 | 8 | D43, D44, D45, D46 | D48, D49, D50, D51
I3 | 5 | D47 | -
N2 | 1 | - | D52
Total | 52 | 39 Valid Defects | 13 Invalid Defects

Novice and intermediate inspectors often reported invalid defects. However, Table VI shows that even expert inspectors reported invalid defects (for instance, E3). In this case, the expert inspector E3 assumed that, to fulfill the requirements, it was necessary to use complex derivation formulas for the price that the publishing houses must pay to the photography agency. Since this inspector did not find this specification in Model4, E3 reported defects D8 and D9. Nevertheless, these issues were not considered defects by the research team, since the requirements were fulfilled with simple derivation formulas in Model4. This situation also occurs in Model2, Model3, and Model5 (for details, see Appendix II). Next, the research team analyzed each defect and assigned a code to classify the defect types found by the inspection groups for later analysis. For instance, in Model1 the following defects corresponded to the same defect type, DM_SReach (a state of the STD of a class that is not reachable): the state signed of the DeliveryNote class is not reachable in its STD (found by E4, I3, N1, N3); the state charged of the DeliveryNote class is not reachable in its STD (found by E4, I3); and the state charged of the Exclusive class is not reachable in its STD (found by I3). Figure 4 shows the codes assigned to the defect types of the defects found in Model1.

[Figure 4 is a bar chart with the defect types on the x-axis and the number of defects (0 to 3.5) on the y-axis.]

Fig. 4. Defect Types found by the inspection group in Model1.
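The per-type counts behind Figure 4 amount to grouping the classified defects by their type code. A minimal sketch follows; the defect IDs are invented, but the three DM_SReach defects correspond to the unreachable-state examples given above:

```python
from collections import Counter

# Defect records for Model1: (defect ID, assigned defect-type code).
# IDs are illustrative; the three DM_SReach entries mirror the text's example.
classified = [
    ("D1", "DM_SReach"),  # state signed of DeliveryNote not reachable
    ("D2", "DM_SReach"),  # state charged of DeliveryNote not reachable
    ("D3", "DM_SReach"),  # state charged of Exclusive not reachable
    ("D4", "FM_NV"),
    ("D5", "CM_SAgent"),
]

per_type = Counter(code for _, code in classified)
print(per_type.most_common(1))  # [('DM_SReach', 3)]
```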

Once the defect types were assigned, the research team noticed that the expert inspectors mainly focused on the specifications of the relationships in the models (such as agent relationships, dynamic relationships, and inheritance relationships). These are more complex constructs than the ones that the novice inspectors mainly focused on (such as attributes and valuations). For instance, Figure 5 shows that in Model1 some defect types were detected only by N1 (e.g., DM_SWC, CM_MR, and CM_WInh), while other defect types were detected only by E4 (e.g., CM_SAgent). This situation occurred in all five models of our study; i.e., the experts focused on the complex constructs and left out the analysis of the basic constructs (see Appendix III). Thus, we conclude that an inspection group must not be comprised only of expert inspectors because, in a limited period of time, they focus only on the complex constructs. For that reason, it is very important to have an inspection group made up of inspectors with different levels of expertise in the MDD approach and its modeling tool.

[Figure 5 is a bar chart showing, for each inspector group in Model1, the number of defects found per defect type: PM_NM, DM_SWC, DM_SReach, FM_NV, CM_SAgent, CM_MR, CM_WInh, CM_FDer, CM_AN, and CM_AS.]

Fig. 5. Defect Types found by the inspectors in Model1.

In summary, 24 defect type codes were assigned to the defects found by the inspection groups in the five models of the study (19 for the structural view, 1 for the functional view, 2 for the dynamic view, and 2 for the presentation view). These defects were related to the completeness of the models with regard to the requirements, the consistency among the views of the models, and the syntactical correctness of each view of the models. Table VII shows the description of the 24 defect types found by the inspection groups and the related quality characteristics. Figure 6 shows the defect types found by the inspection groups in the five models of the study. This figure shows that Model5 had 7 defects related to the CM_SEditC defect type, Model2 had only 1 defect related to the same defect type, and the remaining models did not have defects related to this defect type. By analyzing the number of occurrences of the detected defect types, it could be possible to use this information to better train novice analysts.

In contrast to the defects found by the inspection groups, each defect found by the OOmCFP tool was reported only once. In addition, all defects detected by the OOmCFP tool are valid; that is, the tool does not make misinterpretations in the identification of defects, since it applies specific rules that analyze, in a systematic way, the entire functionality of the system represented in the model. Also, for each defect, the OOmCFP tool reports the corresponding defect type. It is important to mention that the OO-Method model compiler detects some defects when it transforms the conceptual model into an execution model and then generates the corresponding final application. These defects are related to the structure of the XML

representation of the conceptual model. Nevertheless, the OO-Method model compiler does not analyze the functionality specified with the conceptual constructs. OOmCFP analyzes the specification of the functionality of the final application in the conceptual constructs instead of in its XML representation. Hence, the defects found by OOmCFP are not found by the model compiler. Therefore, it is very important to rely on a technique that can find defects that the model compiler does not detect.

Table VII. Description of Defect types found by the inspection groups
Defect Type | Description | Quality Characteristic
CM_Id | A class with more than one identifier. | Completeness
CM_AS | Attribute of string data type with small size for the requirements. | Completeness
CM_AN | Attribute that allows a null value with a default value specified. | Completeness
CM_FDer | Wrong derivation formula for the requirements. | Completeness
CM_WArg | Wrong argument in a service. | Completeness
CM_MArg | Missing argument in a service. | Completeness
CM_DCServ | Duplicated creation event in a class. | Completeness
CM_SEditC | An edit service in a class that only has constant and derived attributes. | Completeness
CM_InhLib | Inheritance hierarchy between two classes without a liberator event. | Completeness
CM_InhId | An identifier defined in a child of an inheritance hierarchy. | Completeness
CM_WInh | Wrong definition of an inheritance hierarchy between two classes for the requirements. | Completeness
CM_InhServ | Duplicated service in the children of an inheritance hierarchy. | Completeness
CM_MR | Missing relationship in the class model. | Completeness
CM_WCard | Wrong cardinality in a relationship. | Completeness
CM_RServ | Missing specification of dynamic services in a dynamic relationship. | Correctness
CM_FIC | Wrong integrity constraint formula. | Completeness
CM_SAgent | A service without a defined agent. | Correctness
CM_AAgent | An attribute without a defined agent. | Correctness
CM_RAgent | A role without a defined agent. | Correctness
FM_NV | A service without the specification of a valuation. | Consistency
DM_SReach | A state of the STD of a class that is not reachable. | Correctness
DM_SWC | A state of the STD is reachable without the creation of an object. | Correctness
PM_NM | A model without the specification of a presentation view. | Consistency
PM_IP | Empty introduction pattern. | Correctness

[Figure 6 is a bar chart showing, for each of the five models, the number of defects (0 to 14) found per defect type.]

Fig. 6. Defect Types found by the inspection groups.

Even though the OOmCFP tool can detect defects related to 24 defect types in OO-Method conceptual models [Marín, Giachetti, Pastor and Vos 2010a], the defects found in the five models of the study were related to only three defect types (1 for the structural view, 1 for the functional view, and 1 for the presentation view). This occurs because several defect types detected by the OOmCFP tool are related to the presentation view of a model, and the selected models do not have this view specified. Table VIII shows the description of the 3 defect types found and the related quality characteristics.

Table VIII. Description of Defect types found by the OOmCFP tool
Defect Type | Description | Quality Characteristic
CM_NA | A class without the definition of one or more attributes. | Correctness
FM_NV | A service without the specification of a valuation. | Consistency
PM_NM | A model without the specification of a presentation view. | Consistency

The difference between the defect types found by the inspection groups and those found by the OOmCFP tool is not surprising for the following reasons:
(1) Some defect types found by the inspection groups are related to the semantics of the model, i.e., CM_Id, CM_AS, CM_AN, CM_FDer, CM_WArg, CM_MArg, CM_DCServ, CM_SEditC, CM_InhLib, CM_InhId, CM_WInh, CM_InhServ, CM_MR, CM_WCard, and CM_FIC. These 15 defect types are detected by the inspection groups because they have the requirements specification of the Photography Agency system. Since the OOmCFP tool applies the OOmCFP FSM procedure to analyze the functionality of design models in a systematic way, there is no knowledge process that the tool can perform in order to check the semantics of a model against the requirements.
(2) Another defect type found by the inspection groups is related to the correctness of conceptual constructs that do not contribute to the functionality of the applications, i.e., PM_IP. This is not surprising, since the OOmCFP FSM procedure takes into account the conceptual constructs that contribute to the functionality of the applications, leaving out aesthetic aspects that do not represent data movements.
(3) Other defect types found by the inspection groups are related to the relationships between classes in the structural view of the model (i.e., CM_RServ, CM_SAgent, CM_AAgent, and CM_RAgent) and the transitions among states in the dynamic view of the model (i.e., DM_SReach and DM_SWC). One limitation of the OOmCFP FSM procedure is that it does not analyze the relationships and the cardinalities of the conceptual constructs because they do not represent data movements.
(4) The remaining defect types (FM_NV and PM_NM) correspond to defect types also found by the tool. In addition, the OOmCFP tool identified a defect type (i.e., CM_NA) that was not identified by the inspection groups.
Figure 7 shows the defect types found by the OOmCFP tool in the models.
This figure shows that each of the five models had one defect related to the defect type PM_NM, Model4 had four defects related to defect type CM_NA, Model2 had eight defects related to defect type FM_NV, etc. Table IX presents the number of defects, defect types, and times collected for the dependent variables from the inspections performed by the inspection groups and by the OOmCFP tool. Based on the dependent variables, we provide answers to the underlying research questions.
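To illustrate the kind of systematic, rule-based analysis that yields these three defect types, the following sketch checks a toy model for CM_NA, FM_NV, and PM_NM defects. The dictionary-based model representation and the rule logic are simplifications invented for this example, not the actual OOmCFP implementation:

```python
# A minimal sketch of rule-based model checks in the spirit of the three
# defect types reported for OOmCFP (CM_NA, FM_NV, PM_NM). Class and service
# names are illustrative.
model = {
    "classes": [
        {"name": "Photographer", "attributes": ["name"], "services": ["edit_instance"]},
        {"name": "Report", "attributes": [], "services": []},   # no attributes: CM_NA
    ],
    "valuations": {},          # service -> valuation formula (none given: FM_NV)
    "presentation_view": None, # not specified: PM_NM
}

def check(model):
    defects = []
    for c in model["classes"]:
        if not c["attributes"]:                       # CM_NA: class without attributes
            defects.append(("CM_NA", c["name"]))
        for s in c["services"]:
            if s not in model["valuations"]:          # FM_NV: service without valuation
                defects.append(("FM_NV", f'{c["name"]}.{s}'))
    if model["presentation_view"] is None:            # PM_NM: no presentation view
        defects.append(("PM_NM", "model"))
    return defects

print(check(model))
# [('FM_NV', 'Photographer.edit_instance'), ('CM_NA', 'Report'), ('PM_NM', 'model')]
```

Because the rules traverse the whole model mechanically, every violation is reported exactly once, which is why the tool produces no duplicate or invalid defect reports.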

[Figure 7 is a bar chart showing, for each of the five models, the number of defects (0 to 9) found per defect type: CM_NA, FM_NV, and PM_NM.]

Fig. 7. Defect Types found by the OOmCFP tool.

Table IX. Defects found by the inspection groups (IG) and OOmCFP
Model | Number of defects by IG | Number of invalid defects by IG | Number of valid defects by IG | Number of defect types by IG | Number of defects by OOmCFP | Number of defect types by OOmCFP | Time IG (minutes) | Time OOmCFP (minutes)
Model1 | 36 | 18 | 18 | 10 | 7 | 3 | 229 | 0.31
Model2 | 36 | 16 | 20 | 10 | 9 | 2 | 233 | 0.43
Model3 | 39 | 17 | 22 | 9 | 4 | 2 | 198 | 0.95
Model4 | 52 | 13 | 39 | 12 | 12 | 3 | 214 | 0.76
Model5 | 38 | 21 | 17 | 7 | 8 | 2 | 249 | 0.38
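The defect-type coverage compared in the research questions below reduces to set operations over the type codes of Tables VII and VIII, which can be sketched as:

```python
# Defect-type codes found by the inspection groups (Table VII)
# and by the OOmCFP tool (Table VIII).
ig_types = {
    "CM_Id", "CM_AS", "CM_AN", "CM_FDer", "CM_WArg", "CM_MArg", "CM_DCServ",
    "CM_SEditC", "CM_InhLib", "CM_InhId", "CM_WInh", "CM_InhServ", "CM_MR",
    "CM_WCard", "CM_RServ", "CM_FIC", "CM_SAgent", "CM_AAgent", "CM_RAgent",
    "FM_NV", "DM_SReach", "DM_SWC", "PM_NM", "PM_IP",
}
tool_types = {"CM_NA", "FM_NV", "PM_NM"}

shared = ig_types & tool_types       # types found by both techniques
only_tool = tool_types - ig_types    # types found only by OOmCFP
print(sorted(shared), sorted(only_tool))
# ['FM_NV', 'PM_NM'] ['CM_NA']
```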

Are the defect types found by the inspection group the same as the defect types found by the OOmCFP FSM procedure?
The inspection groups found defects related to 24 defect types in the five models of the study (see Table VII). These defect types are related to the four views (structural, functional, dynamic, and presentation) of the OO-Method conceptual model. In contrast, the OOmCFP tool found defects related to only 3 defect types in the five models (see Table VIII), which are related to the structural view, the functional view, and the presentation view of the OO-Method conceptual model. This does not mean that the tool cannot find defects related to the dynamic view of a model; however, the tool does not identify defects related to the transitions between the states of the state-transition diagram. Two of the defect types found by OOmCFP were also found by the inspection groups (i.e., FM_NV and PM_NM). However, it is important to note that defect type CM_NA was not detected by the inspection groups. Thus, some of the defect types found by the inspection groups are the same as the defect types found by the OOmCFP tool; but there are also defect types found by the inspection groups that were not found by the tool, and vice versa.

Are the quality characteristics related to the defects found by the inspection group the same as the quality characteristics related to the defects found by the OOmCFP tool?
Focusing on the defect types found by the inspection groups in the five models, the great majority of the defect types are related to the semantics of a model. Since the inspectors had the requirements specification, they were able to inspect the models against the requirements; thus, they detected defects related to the completeness of the models. The remaining defect types found by the inspectors were related to the consistency among the views of a model (e.g., FM_NV) and the syntactical correctness of a model (e.g., CM_SAgent). In the same models, the defect types found by the OOmCFP tool were related to the consistency among the views of a model (e.g., FM_NV) and the syntactical correctness of the model (e.g., CM_NA). Understanding the completeness of a model as the model containing all the statements required by the requirements, this quality characteristic cannot be evaluated by the OOmCFP tool, since the tool does not have the requirements specification of the system. In summary, the quality characteristics related to the defects found by the inspection groups are completeness, consistency, and correctness, while the quality characteristics related to the defects found by the OOmCFP tool are consistency and correctness.

Is the OOmCFP FSM procedure efficient at finding defects related to a defect type?
Focusing on the defect types found in Model5, the inspection group reported 38 issues. However, 21 were misinterpretations of the group and did not correspond to real defects. Therefore, only 17 defects were found in Model5 by the inspection group, related to 7 defect types. In the same model, 8 defects were found by the OOmCFP tool, related to 2 defect types (FM_NV and PM_NM). It is important to note that, for defect type FM_NV, the inspection group found 1 defect while the OOmCFP tool found 7 defects. This occurs because the OOmCFP tool analyzes the complete model in a systematic way by applying the OOmCFP rules, whereas the people in the inspection group often forget to inspect some parts of the model. Also, the OOmCFP tool took 23 seconds (0.38 minutes) to completely analyze Model5, in contrast to the inspection group, which took 249 minutes (about four hours) to partially analyze the same model. In the remaining models (i.e., Model1, Model2, Model3, and Model4), the situation was very similar to Model5. Taking these results into account, it is clear that OOmCFP is more efficient than an inspection group at finding defects related to a defect type.

Is the OOmCFP FSM procedure useful at finding defects in models of an MDD environment?
Taking into account all the results obtained for the dependent variables, the OOmCFP FSM procedure detected fewer defects than traditional inspections, but it also has clear advantages: it found all the defects related to a given defect type; it found defect types that the inspection groups did not find (i.e., CM_NA); and it found defects using fewer resources (time, effort, etc.). Thus, we consider that the OOmCFP FSM procedure is useful for finding defects in models of the OO-Method MDD approach.
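As a rough check of the efficiency claim for Model5, the defect-finding rates can be computed directly from the figures reported above (17 valid defects in 249 minutes for the inspection group versus 8 defects in 0.38 minutes for the tool):

```python
# Model5 figures reported in the text.
ig_defects, ig_minutes = 17, 249
tool_defects, tool_minutes = 8, 0.38

ig_rate = ig_defects / ig_minutes        # about 0.07 valid defects per minute
tool_rate = tool_defects / tool_minutes  # about 21 defects per minute

print(tool_rate > ig_rate)  # True
```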

6. CONCLUSIONS AND FUTURE WORK

In this paper, we have reported a case study that was conducted to evaluate the usefulness of an FSM procedure for detecting defects in the conceptual models of an MDD environment. The results indicate that the FSM procedure is useful, since it found all the defects related to a specific defect type and also found different defect types than an inspection group. However, the inspection group is also necessary to find defects that the FSM procedure cannot find. Thus, the combination of these two techniques can be an interesting approach to evaluate the quality of conceptual models. There are successful studies of the combination of FSM methods and inspections to find defects manually in textual representations of requirements [Trudel and Abran 2008; Trudel and Abran 2010]. However, to the best of our knowledge, this study is the first study of the usefulness of an FSM procedure to detect defects in conceptual models of MDD environments. Thus, further empirical research is necessary to establish greater external validity for these results. Other researchers are invited to replicate our study in other MDD contexts.

In spite of the fact that the OOmCFP rules are oriented to a specific MDD environment (OO-Method), many conceptual constructs of the OO-Method conceptual model can be found in other object-oriented methods. Moreover, the main modeling constructs used by OO-Method are basic constructs that have UML representation support. For instance, the OO-Method object model has a direct correspondence with the UML class model. Thus, the concepts and rationale involved in the definition of the OOmCFP rules can be generalized to other object-oriented methods; i.e., OOmCFP can be used as a reference for different MDD approaches to implement their own automated verification mechanisms. To do this, we have developed a proposal for automating the interoperability of different modeling approaches [Giachetti et al. 2009] [Giachetti et al. 2010], which is based on the identification of equivalent concepts among the modeling approaches that will interoperate; then, a systematic process guides the definition of the necessary pivot models, model transformation rules, and metamodel extensions. One limitation of this study is that none of the analyzed models had a presentation view specified; still, OOmCFP finds defect types that are related to conceptual constructs of the presentation view. As future work, we plan to replicate our study considering models with a presentation view specification.

APPENDIX I

Model | Number of Classes | Number of Attributes | Number of Services | Number of Relationships | Number of Interaction Units | Size (KB) | Total Functional Size (COSMIC)
Publishing | 5 | 19 | 25 | 3 | 6 | 476 | 375
Photography | 17 | 66 | 100 | 18 | 25 | 995 | 835
Expense Report | 23 | 56 | 74 | 14 | 40 | 1049 | 970

APPENDIX II

Table X. Analysis of Defects found in Model1
Subjects | Number of Issues Detected | Valid Defects | Invalid Defects
E1 | 2 | D1, D2 | -
E1, E3 | 1 | D3 | -
E3 | 2 | D4, D5 | -
E3, I3 | 2 | D6, D7 | -
E4, N3 | 1 | D8 | -
E4, I3, N1, N3 | 1 | D9 | -
E4, I3 | 1 | D10 | -
E4 | 3 | D11, D12, D13 | -
I1 | 0 | - | -
I3 | 6 | D14, D19 | D15, D16, D17, D18
I4 | 3 | - | D20, D21, D22
N1 | 7 | D23, D24, D26 | D25, D27, D28, D29
N3 | 7 | - | D30, D31, D32, D33, D34, D35, D36
Total | 36 | 18 Valid Defects | 18 Invalid Defects

Table XI. Analysis of Defects found in Model2
Subjects | Number of Issues Detected | Valid Defects | Invalid Defects
E2, E4 | 1 | D1 | -
E2 | 3 | D4 | D2, D3
E2, E4, N3 | 1 | D5 | -
E3, E4, I4, N2, N3 | 1 | D6 | -
E3 | 1 | D7 | -
E4, N2, N3 | 2 | D8, D9 | -
E4 | 5 | D10, D11, D12, D13, D14 | -
I1 | 3 | D16 | D15, D17
I4 | 3 | D18 | D20, D21
I4, I5 | 1 | D19 | -
N2, N3 | 2 | D22, D23 | -
N3 | 13 | D30, D34, D35 | D24, D25, D26, D27, D28, D29, D31, D32, D33, D36
Total | 36 | 20 Valid Defects | 16 Invalid Defects

Table XII. Analysis of Defects found in Model3
Subjects | Number of Issues Detected | Valid Defects | Invalid Defects
E2, E4, I2 | 1 | D1 | -
E2, E4, N2 | 3 | D2, D3, D4 | -
E2, I1 | 2 | - | D5, D6
E3 | 2 | D7 | D8
E4 | 15 | D9, D10, D11, D12, D13, D14, D15, D16, D17, D18, D19, D20, D21, D22, D23 | -
I3 | 9 | D28 | D24, D25, D26, D27, D29, D30, D31, D32
N2 | 1 | D33 | -
N4 | 6 | - | D34, D35, D36, D37, D38, D39
Total | 39 | 22 Valid Defects | 17 Invalid Defects

Table XIII. Analysis of Defects found in Model5
Subjects: E2 | E2, E4, E5, I2, N3 | E2, E4 | E4, I4, N3 | E4 | E4, I4 | E5 | E5, I4 | I3, I4, N3 | I3 | I4 | N3 | N4 (0 issues)
Invalid Defects include: D1, D2, D3, D4, D21, D23, D24, D25, D27, D28, D29, D30, D31, D32, D33, D34, D35, D36, D37, D38
Total | 38 | 17 Valid Defects | 21 Invalid Defects

APPENDIX III

[Figure 8 is a bar chart showing, for each inspector group in Model2 (E2, E4; E2; E2, E4, N3; E3, E4, I4, N2, N3; E3; E4, N2, N3; E4; I1; I4; I4, I5; N2, N3; N3), the number of defects found per defect type: PM_NM, FM_NV, CM_RAgent, CM_SAgent, CM_FIC, CM_InhId, CM_InhLib, CM_SEditC, CM_MArg, and CM_WArg.]

Fig. 8. Defect Types found by the inspectors in Model2.

[Figure 9 is a bar chart showing, for each inspector group in Model3 (E2, E4, I2; E2, E4, N2; E2, I1; E3; E4; I3; N2; N4), the number of defects found per defect type: PM_NM, CM_RAgent, CM_AAgent, CM_SAgent, CM_WCard, CM_CServ, and CM_Id.]

Fig. 9. Defect Types found by the inspectors in Model3.

[Figure 10 is a bar chart showing, for each inspector group in Model4 (E2, N6, I3; E2; E2, E4, I2; E2, I2; E2, I1; E3; E4, E2; E4; E4, N2; I1; I2; I3; N2), the number of defects found per defect type: PM_IP, PM_NM, FM_NV, CM_RAgent, CM_AAgent, CM_SAgent, CM_InhLib, CM_InhServ, and CM_DCServ.]

Fig. 10. Defect Types found by the inspectors in Model4.

[Figure 11 is a bar chart showing, for each inspector group in Model5 (E2; E2, E4, E5, I2, N3; E2, E4; E4, I4, N3; E4; E4, I4; E5; E5, I4; I3, I4, N3; I3; I4; N3; N4), the number of defects found per defect type: PM_NM, FM_NV, CM_SAgent, CM_RServ, CM_InhId, CM_SEditC, and CM_WArg.]

Fig. 11. Defect Types found by the inspectors in Model5.

ACKNOWLEDGMENTS

We would like to thank PROS Research Center and CARE Technologies for their participation in the study. This work has been developed with the support of the Spanish Government under the projects PROS-REQ TIN2010-19130-C02-02 and GVA ORCA PROMETEO/2009/015.

REFERENCES ABRAN, A., DESHARNAIS, J., LESTERHUIS, A., LONDEIX, B., MELI, R., MORRIS, P., OLIGNY, S., O’NEIL, M., ROLLO, T., RULE, G., SANTILLO, L., SYMONS, C. AND TOIVONEN, H. 2007. The COSMIC Functional Size Measurement Method - Version 3.0. GELOG web site www.gelog.etsmtl.ca ATKINSON, C. 1998. Adapting the fusion process to support the unified modeling language. Object Magazine, 32-39. BELLUR, U. AND VALLIESWARAN, V. 2006. On OO Design Consistency in Iterative Development. In Proceedings of the 3rd Int. Conf. on Information Technology: New Generations (ITNG), April 10-12 2006 IEEE, 46-51. BENBASAT, I., GOLDSTEIN, D. AND MEAD, M. 1987. The case research strategy in studies of information systems. MIS Q 11, 369–386. BERENBACH, B. 2004. The Evaluation of Large, Complex UML Analysis and Design Models. In Proceedings of the 26th ICSE, May 23-28 2004 IEEE Computer Society, 232-241. BERKENKÖTTER, K. 2008. Reliable UML Models and Profiles. Electronic Notes in Theoretical Computer Science 217, 203-220. CARE-TECHNOLOGIES 2011. Web site. http://www.care-t.com/ COLEMAN, D., ARNOLD, P., GODOFF, S., DOLLIN, C., GILCHRIST, H., HAYES, F. AND JEREMAES, P. 1994. Object-Oriented Development: The Fusion Method. Prentice-Hall, Englewood Cliffs, NJ CONRADI, R., MOHAGHEGHI, P., ARIF, T., HEGDE, L.C., BUNDE, G.A. AND PEDERSEN, A. 2003. Object-Oriented Reading Techniques for Inspection of UML Models – An Industrial Experiment. In Proceedings of the 17th ECOOP, July 2003 2003 Springer, 483-501. DAVENPORT, T.H. AND PRUSAK, L. 1998. Working Knowledge: How Organisations Manage What They Know. Business School Press, Boston, Massachusetts EGYED, A. 2006. Instant Consistency Checking for the UML. In Proceedings of the 28th ICSE, Shangai, China, May 20-28 2006 ACM, 381-390. ESPAÑA, S., GONZÁLEZ, A. AND PASTOR, O. 2009. Communication Analysis: A Requirements Engineering Method for Information Systems. 
In Proceedings of the 21st International Conference on Advanced Information Systems Engineering (CAiSE 2009), Amsterdam, The Netherlands2009, P. van Eck, J. Gordijn AND R. Wieringa Eds. Springer, 530-545. FINK, T., KOCH, M. AND PAULS, K. 2006. An MDA approach to Access Control Specifications Using MOF and UML Profiles. Electronic Notes in Theoretical Computer Science 142, 161-179. FRANCE, R.B., GHOSH, S., DINH-TRONG, T. AND SOLBERG, A. 2006. Model-driven development using uml 2.0: Promises and pitfalls. IEEE Computer 39, 59–66. GIACHETTI, G., ALBERT, M., MARÍN, B. AND PASTOR, O. 2010. Linking UML and MDD through UML Profiles: a Practical Approach based on the UML Association. The Journal of Universal Computer Science (JUCS) 16, 2353-2373 GIACHETTI, G., MARÍN, B. AND PASTOR, O. 2009. Using UML as a Domain-Specific Modeling Language: A Proposal for Automatic Generation of UML Profiles. In Proceedings of the 21st International Conference on Advanced Information Systems Engineering (CAiSE 2009), Amsterdam, The Netherlands2009, P. van Eck, J. Gordijn AND R. Wieringa Eds. Springer, 110-124.

GOMAA, H. 2000. Designing Concurrent, Distributed, and Real-Time Applications with UML. Addison-Wesley. ISBN 0-201-65793-7.
GOMAA, H. AND WIJESEKERA, D. 2003. Consistency in Multiple-View UML Models: A Case Study. In Proceedings of the Workshop on Consistency Problems in UML-based Software Development II, San Francisco, USA, October 20, 2003. IEEE, 1-8.
GÓMEZ, J., INSFRÁN, E., PELECHANO, V. AND PASTOR, O. 1998. The Execution Model: a component-based architecture to generate software components from conceptual models. In Proceedings of the Workshop on Component-based Information Systems Engineering, 1998.
HAILPERN, B. AND TARR, P. 2006. Model-driven development: The good, the bad, and the ugly. IBM Systems Journal 45, 451-461.
IEEE 1990. IEEE 610 Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries.
IEEE 2009. IEEE 1044 Standard Classification for Software Anomalies.
INSFRÁN, E., PASTOR, O. AND WIERINGA, R. 2002. Requirements Engineering-Based Conceptual Modelling. Requirements Engineering Journal, 61-72.
ISO 1998. ISO/IEC 14143-1 – Information Technology – Software Measurement – Functional Size Measurement – Part 1: Definition of Concepts.
ISO 2004. International Vocabulary of Basic and General Terms in Metrology (VIM). International Organization for Standardization, Geneva, Switzerland.
ISO/IEC 2001. ISO/IEC 9126-1, Software Engineering – Product Quality – Part 1: Quality Model.
ISO/IEC 2003. ISO/IEC 19761, Software Engineering – COSMIC-FFP – A Functional Size Measurement Method.
KITCHENHAM, B. 2007. Guidelines for Performing Systematic Literature Reviews in Software Engineering. Technical Report, Keele University, UK.
KUZNIARZ, L. 2003. Inconsistencies in Student Designs. In Proceedings of the Workshop on Consistency Problems in UML-based Software Development II, San Francisco, USA, October 20, 2003. IEEE, 9-17.
LAITENBERGER, O., ATKINSON, C., SCHLICH, M. AND EMAM, K.E. 2000. An experimental comparison of reading techniques for defect detection in UML design documents. Journal of Systems & Software 53, 183-204.
LANGE, C. AND CHAUDRON, M. 2004. An Empirical Assessment of Completeness in UML Designs. In Proceedings of the 8th Conference on Empirical Assessment in Software Engineering (EASE), May 2004. IEEE, 111-121.
LANGE, C., WIJINS, M. AND CHAUDRON, M. 2007. Metric View Evolution: UML-based Views for Monitoring Model Evolution and Quality. In Proceedings of the 11th European Conference on Software Maintenance and Reengineering (CSMR'07), Amsterdam, The Netherlands, March 2007. IEEE, 327-328.
LEUNG, F. AND BOLLOJU, N. 2005. Analyzing the Quality of Domain Models Developed by Novice Systems Analysts. In Proceedings of the 38th Hawaii International Conference on System Sciences, 2005. IEEE, 1-7.
LINDLAND, O.I., SINDRE, G. AND SOLVBERG, A. 1994. Understanding Quality in Conceptual Modeling. IEEE Software 11, 42-49.
MARÍN, B. 2010. Technical Report DSIC-II/05/10: OOmCFP Measurement Guide. Universidad Politécnica de Valencia, Valencia, Spain.
MARÍN, B. 2011. Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach. Departamento de Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Valencia, Spain, 333.
MARÍN, B., CONDORI-FERNÁNDEZ, N. AND PASTOR, O. 2008a. Design of a Functional Size Measurement Procedure for a Model-Driven Software Development Method. In Proceedings of the 3rd Workshop on Quality in Modeling (QiM) of MODELS, Toulouse, France, 2008, J.-L. Sourrouille, M. Staron, L. Kuzniarz, P. Mohagheghi AND L. Pareto Eds., 1-15.
MARÍN, B., CONDORI-FERNÁNDEZ, N., PASTOR, O. AND ABRAN, A. 2008b. Measuring the Functional Size of Conceptual Models in an MDA Environment. In Proceedings of the Forum at the CAiSE'08 Conference, Montpellier, France, June 18-20, 2008, Z. Bellahsene, C. Woo, E. Hunt, X. Franch AND R. Coletta Eds., 33-36.
MARÍN, B., GIACHETTI, G. AND PASTOR, O. 2008c. Automating the Measurement of Functional Size of Conceptual Models in an MDA Environment. In Proceedings of the Product-Focused Software Process Improvement Conference (PROFES), 2008. Springer, 215-229.
MARÍN, B., GIACHETTI, G. AND PASTOR, O. 2009. Applying a Functional Size Measurement Procedure for Defect Detection in MDD Environments. In Proceedings of the 16th European Conference EUROSPI 2009, Alcalá (Madrid), Spain, September 2-4, 2009, R.V. O'Connor Ed. Springer-Verlag, 57-68.
MARÍN, B., GIACHETTI, G., PASTOR, O. AND VOS, T.E.J. 2010a. A Tool for Automatic Defect Detection in Models Used in Model-Driven Engineering. In Proceedings of the 7th International Conference on the Quality of Information and Communications Technology (QUATIC), Oporto, Portugal, 2010. IEEE, 242-247.
MARÍN, B., PASTOR, O. AND ABRAN, A. 2010b. Towards an accurate functional size measurement procedure for conceptual models in an MDA environment. Data & Knowledge Engineering 69, 472-490.
MOHAGHEGHI, P. AND AAGEDAL, J. 2007. Evaluating Quality in Model-Driven Engineering. In Proceedings of the International Workshop on Modeling in Software Engineering (MISE'07), 2007. IEEE Computer Society.
MOODY, D.L. 2005. Theoretical and practical issues in evaluating the quality of conceptual models: current state and future directions. Data & Knowledge Engineering 55, 243-276.
MORENO, N., FRATERNALI, P. AND VALLECILLO, A. 2007. WebML Modeling in UML. IET Software 1, 67-80.
OMG 2010. UML 2.3 Superstructure Specification.
OPDAHL, A.L. AND HENDERSON-SELLERS, B. 2005. A Unified Modelling Language without referential redundancy. Data & Knowledge Engineering 55, 277-300.
PASTOR, O. AND GIACHETTI, G. 2010. Linking Goal-Oriented Requirements and Model-Driven Development. In Intentional Perspectives on Information Systems Engineering. Springer-Verlag, 255-274.
PASTOR, O., HAYES, F. AND BEAR, S. 1992. OASIS: An Object-Oriented Specification Language. In Proceedings of the International Conference on Advanced Information Systems Engineering (CAiSE), Manchester, UK, 1992, 348-363.
PASTOR, O. AND MOLINA, J.C. 2007. Model-Driven Architecture in Practice: A Software Production Environment Based on Conceptual Modeling. Springer, New York. ISBN 978-3-540-71867-3.
PETTICREW, M. AND ROBERTS, H. 2005. Systematic Reviews in the Social Sciences: A Practical Guide.
ROBSON, C. 2002. Real World Research: A Resource for Social Scientists and Practitioner-Researchers. Blackwell (2nd Edition).
RUNESON, P. AND HOST, M. 2009. Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering Journal 14, 131-164.
SCHMIDT, D. 2006. Model Driven Engineering. IEEE Computer 39, 25-31.
SEAMAN, C. 1999. Qualitative methods in empirical studies of software engineering. IEEE Transactions on Software Engineering 25, 557-572.
SELIC, B. 2003. The Pragmatics of Model-Driven Development. IEEE Software 20, 19-25.
SELIC, B. 2007. A Systematic Approach to Domain-Specific Language Design Using UML. In Proceedings of the 10th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), 2007, 2-9.
TRAVASSOS, G., SHULL, F., FREDERICKS, M. AND BASILI, V. 1999. Detecting Defects in Object-Oriented Designs: Using Reading Techniques to Increase Software Quality. In Proceedings of OOPSLA'99, Denver, CO, USA, 1999, 47-56.
TRUDEL, S. AND ABRAN, A. 2008. Improving Quality of Functional Requirements by Measuring Their Functional Size. In Proceedings of IWSM/Metrikon/Mensura, Munich, Germany, 2008, R. Dumke et al. Eds. Springer, 287-231.
TRUDEL, S. AND ABRAN, A. 2010. Functional Requirements Improvements through Size Measurement: A Case Study with Inexperienced Measurers. In Proceedings of the 8th ACIS International Conference on Software Engineering Research, Management and Applications (SERA 2010), Montreal, May 24-26, 2010. IEEE-CS Press, 181-189.
YIN, R. 2003. Case Study Research: Design and Methods. Sage (3rd Edition), London.