DOMAIN-SPECIFIC MODELING ENVIRONMENT BASED ON UML PROFILES*

Darius Silingas 1, Ruslanas Vitiutinas 2, Andrius Armonas 3, Lina Nemuraite 3

1 No Magic Europe, Training Department, Savanoriu av. 363, Kaunas, Lithuania, [email protected]
2 Vytautas Magnus University, Faculty of Informatics, Vileikos st. 8, Kaunas, Lithuania, [email protected]
3 Kaunas University of Technology, Department of Information Systems, Studentu st. 50-308, Kaunas, Lithuania, andrius.armonas@ktu.lt, [email protected]

* The work is supported by the Lithuanian State Science and Studies Foundation according to the High Technology Development Program Project "VeTIS" (Reg. No. B-07042).

Abstract. Domain-Specific Modeling Languages (DSML) play a key role in model-driven development, and there are many approaches to creating them. This paper discusses recent trends in domain-specific modeling languages and the issues of creating and using UML profiles. We then present a novel approach for defining a full-featured DSML based on a UML profile and its customization, instead of heavyweight metamodeling. This approach has been implemented in the MagicDraw UML tool and has already been successfully adopted by its users; the MagicDraw developers themselves applied it to create the SysML, DoDAF, and UPDM modeling environments. The main benefit of this approach is that it allows a DSML modeler to reuse the powerful features of already existing tools. We propose a seven-step DSML development process and illustrate it with an example demonstrating the creation of a DSML for modeling organization structures. We also discuss the benefits of the presented approach and some ideas for future enhancements based on user feedback.

Keywords: DSML, UML, UML profile, MagicDraw UML.
1 Introduction
There were many modeling languages and notations 15-20 years ago. Most of them, like OOSE [15], OMT [26], Booch [2], CRC cards [32], and ER [5], were designed for certain domains. Then the UML [23] language was created as a reconciliation of those (and other) methods, resulting in a general-purpose software modeling language. Starting from version 2.0, UML was targeted not only at software systems, but also at embedded, business, real-time, system modeling, and other domains. However, our practice shows that UML without modifications can be used efficiently only for modeling software systems. Therefore, new approaches that make it easy to define specialized modeling languages are under active research. Some parallels with the programming world can be observed here: in recent years, languages such as Ruby [29] and XML [28] became very popular because of their ability to introduce domain-specific concepts and define new languages. In the modeling world, correspondingly, new modeling approaches that also allow the definition of domain-specific languages are gaining popularity.

A domain-specific language (DSL) is usually a formal, interpretable language for describing a specific domain (or part of that domain) for which a system is being designed. In this paper, we mostly focus on the creation of graphical DSLs designed for modeling purposes, which we call domain-specific modeling languages (DSML). During the last decade, a number of techniques and approaches evolved in which DSMLs play the key role, namely MDE (Model Driven Engineering), MDD (Model Driven Development), MDSD (Model Driven Software Development), MDA (Model Driven Architecture) [20], MDI (Model Driven Integration) [7], DDD (Domain Driven Design) [11], and Microsoft Software Factories [13]. Recently, OMG issued a number of domain-specific modeling languages: the SysML [22] language for systems modeling, the MARTE [24] profile for real-time and embedded systems modeling, and the UML Profile for DoDAF/MODAF (UPDM) [25] for modeling defense systems. Besides software development, domain-specific modeling languages can support many emerging fields of activity, particularly enterprise knowledge modeling [14], creation of rule repositories [17], etc.

For implementing a DSML, there are two fundamental approaches: the heavyweight one, creating a new language from scratch by using metamodeling techniques, and the lightweight one, reusing and extending existing environments by creating UML profiles. Both have their advantages and drawbacks. In this paper, we present a novel approach in which a new DSML is created on top of a UML profile, combining features from both extending UML and creating a new language from the ground up. It allows quickly designing and easily maintaining a new DSML that is based on a UML profile but does not have features irrelevant to its domain-specific purposes. Moreover, support for model transformations, model
comparison and merge, validation, code generation, and other features supported by a UML tool can be reused without modifications in DSML environments created using our approach. This approach has already been implemented in the MagicDraw UML tool and has been tested and praised by its users. Among other applications, it was used by the MagicDraw tool developers themselves for implementing the SysML, UPDM, and DoDAF modeling environments, which are successfully used by our customers in practice.

The remainder of this paper is structured as follows: in Section 2 we review related work in the field, mostly other DSML tools and the techniques they are based on; in Section 3 we briefly present the supporting DSML framework implemented in MagicDraw UML; in Section 4, a workflow for defining a new DSML is illustrated by a case study on organization structure modeling; in Section 5, user feedback is analyzed; in Section 6 we summarize the results of our research and draw conclusions.
2 Related Work
The development of a DSML involves definition of the syntax, semantics, validation, and transformations to models or code. We can distinguish two approaches used in the field of domain-specific modeling language development: creating a DSML as an instance of a certain metamodel, or creating a DSML by extending an existing modeling language. While the first approach deals with the composition of languages, the second is related to their extension. Currently, many research papers focus on improving the metamodeling-based development of DSMLs; however, not much research has been done on combining the two approaches.

Today's metamodeling environments are usually based on metalanguages close to MOF: GME (Generic Modeling Environment) supports MetaGME [19]; MetaEdit+ supports GOPPRR (Graph-Object-Property-Port-Role-Relationship) [30]; Eclipse EMF and oAW support Ecore [31]; XMF-Mosaic supports Xcore [6]; and AMMA (ATLAS Model Management Architecture) supports KM3 [16]. Microsoft DSL Tools promote the concept of Software Factories [13] and use a proprietary language and notation. Such a variety of metamodeling languages causes problems related to the interoperability of tools and the reuse of produced artifacts.

One of the first successful commercial tools in this area was MetaCase MetaEdit+ [30]. The tool allows creating domain-specific languages as separate metamodels. It also provides means to specify a graphical representation for the DSML constructs, to define semantics on the created metamodel, which can later be used for validation purposes, and to specify a code generator that transforms the user model into code. A more popular framework, EMF, is available in the Eclipse environment. Together with the GMF and GMT projects [9], it can be used to achieve results analogous to those of MetaEdit+. Eclipse offers the Ecore metamodel, which is essentially a light version of MOF [21], for creating DSMLs; the GMF framework for creating graphical representations of DSL elements; and the GMT framework tools for model transformations. Microsoft DSL Tools is an alternative to the tools mentioned above in that it offers a heavyweight approach to DSL creation.

Abouzahra et al. [1] have noticed the problems related to the volume of artifacts produced by using metalanguages as well as by creating UML profiles. They emphasize the need for interoperability between these approaches and propose to use a special weaving metamodel in their AMW tool for external linking of models. An interesting approach for interconnecting different DSLs by using ontologies is suggested in [3]: the authors created a common upper ontology for software modeling languages and employed it for integrating DSMLs. The Moses framework [10] has the ability to embed other languages, which addresses the important problem of combining or relating one DSML to another; in this case an internal language is employed to do certain subtasks. Bridging the gap between creating ordinary (textual) DSLs and DSMLs is also an important task. Some ideas on this are proposed in [12]: the author proposes to formulate languages as relational schemas, very similarly to what is done with DSMLs, thus eliminating the need for a parser as such. Such implementations gain features like refactoring, versioning, or merging more easily, because a unified repository is available at the DSML developer's hands.
Chen et al. [4] see the greatest problem in the lack of a practical, effective method for the formal specification of DSML semantics. They propose a formal methodology based on Abstract State Machines, with a supporting GME toolset, to anchor the semantics of DSMLs to precisely defined and validated "semantic units"; this could lead toward an infrastructure for DSML design that integrates formal verification and automatic generation of code translators with practical engineering tools.

The authors of [27] and [18] propose the use of UML profiles. However, this approach does not allow creating a proper DSML: after applying stereotypes to UML elements, these elements are still UML elements and preserve the semantics associated with them. UML profiles are designed to extend the language, not to restrict it or convert it to another language. They are best applicable to situations where the DSML does not deviate greatly from the standard UML metamodel.

In contrast to both approaches described in this section, we propose a lightweight approach combining features from both worlds: from DSMLs based on metamodels and DSMLs based on UML profiles. By saying "lightweight" we mean that DSMLs built using our approach heavily reuse the UML infrastructure both at the language level and at the tool level, and the creators of those DSMLs usually do not have to do any programming at all. The proposed approach is implemented in the MagicDraw UML tool. We describe it in detail in the next sections.
3 MagicDraw UML Framework for Creating DSML
The key idea of our approach is to build a "customized UML profile", which means that we do not use only plain profiles to define a DSML. By definition, a stereotype only extends an element of the UML language by adding additional properties; the semantics and appearance of the element remain unchanged. If we apply a stereotype to a class, the stereotyped element is still a class, which merely has additional properties defined by the applied stereotype. We propose to add one more layer to profile modeling, which we call a "customization module". It contains customization classes that customize stereotypes by virtually transforming them into completely new metamodel elements. As seen in Figure 1, the customization is two-fold: on one hand, it transforms stereotypes into new metamodel elements by defining customization classes for the stereotypes; on the other hand, it restricts the UML metamodel by hiding unnecessary language parts and enforcing certain rules for relating UML elements to the new domain-specific elements.

Figure 1. DSML definition framework in the context of the OMG meta-layer architecture (M3: MOF; M2: UML, extended by the UML profile, which the customizations transform into the DSML definition metamodel while restricting UML; M1: user model; M0: real world)
Standard UML provides stereotypes, constraints, and profiles for extending the UML language. We propose some enhancements of the standard UML profiling mechanisms enabling profile validation and customization. For validation, we add a stereotype for Object Constraint Language (OCL) constraints that should be used for validating the correctness and completeness of the domain-specific user model. In addition to the OCL constraint specification, this stereotype defines the following custom properties (tags):

− severity – importance of the validation error (debug, info, warning, error, fatal);
− error message – a detailed explanation displayed to the modeler if a model element does not conform to the constraint specification;
− abbreviation – a shorter reference name for identifying the validation error type.

Model validation should be based on a collection of validation rules. For this purpose, we propose a «validationSuite» stereotype, which is applicable to Package metaclass instances. This enables grouping validation rules into suites. In case a modeler needs to define a selection of validation rules from multiple packages, the validation suite should use import relationships to the selected validation rule constraints. We also propose to allow groups of tag definitions for structuring purposes, and a custom dialog-based relationship style definition for creating an on-the-fly scalable icon image for stereotyped relationships. While these are minor implementation-specific enhancements, we have found that they are highly demanded in industrial practice.

Although stereotypes allow specialization of UML metaclasses, they do not hide the fact that the first-class modeling elements are UML concepts, e.g. Class, Actor, Package, Dependency, with their properties. Since this is confusing and distracting for domain users, we propose to use a special mechanism that virtually converts stereotypes to metamodel elements, i.e. stereotyped elements are treated as instances of new metaclasses in the modeling environment. We propose a «Customization» stereotype, which contains tags for customization purposes, see Table 1.
Table 1. The main tags of the «Customization» stereotype

customizationTarget – The stereotype for which the customization applies
representationText – Alternative name to be used in the modeling environment instead of the UML element + stereotype names
usedUMLProperties – Standard properties of the UML element to be used by the customized element
allowedRelationships, disallowedRelationships – Relationships allowed/disallowed to connect to the customized element
typesForSource, typesForTarget – Elements allowed to be used as source/target for the customized relationship
suggestedOwnedDiagrams, suggestedOwnedTypes – Diagrams/model elements that should be suggested for creation in the context of the customized element
possibleOwners – Possible owners of the customized element
When designing a DSML, we should create customizations for each stereotype. We suggest storing the customizations in a separate module in order to separate the two parts of the DSML (profile and customization). When the customization module is loaded into the modeling project, the modeling environment should find all model elements stereotyped by «Customization» and reflect these settings appropriately in user interface components such as model element specification dialogs, the model repository browser, contextual menus, the report scripting language, etc. The specified restrictions for ownership and relationships should be enforced while creating model elements and drawing diagrams.
4 DSML Development Process
In this chapter, we propose a simple seven-step process for creating a DSML environment based on a customized UML profile, described in the activity diagram presented in Figure 2. In the following subsections we give a more detailed description of each task presented in Figure 2, together with simplified examples of building and using a DSML for organization structure modeling. The tasks and the artifacts they produce are:

1. Identify Domain Concepts and Relations – Domain Metamodel
2. Prepare UML Profile – UML Profile for DSML
3. Specify Validation Rules and 4. Specify Stereotype Customizations – Customization Module for DSML
5. Define Custom Diagram – Configuration of Modeling Environment
6. Implement Code Generator – Template/Plugin for Code Generation
7. Model Sample for Testing DSML – DSML Sample Model

Figure 2. A process for creating a DSML using a customized UML profile
4.1 Identifying Domain Concepts and Relations

First, we have to identify domain concepts, their properties, and relations. For creating the domain metamodel, it is possible to use a UML class diagram limited to classes, their properties, and associations (this subset of the UML infrastructure is reused in both MOF and the UML superstructure). In Figure 3 we present a sample metamodel for defining an organization structure; it relates an Organization and its nested OrganizationUnits to Employees, Projects, Skills, and Roles (with OrganizationRole and ProjectRole specializations). The properties of classes and association ends are hidden in order to reduce the complexity of the diagram.

Figure 3. Conceptual metamodel for defining organization structures
4.2 Preparing a UML Profile

Since we are building the DSML based on UML profiles, we need to map the metamodel to UML metaclasses and the necessary extensions. The mapping of the sample organization structure metamodel to a UML profile is presented in Figure 4. In this profile, Employee and Skill are stereotypes of Class; Role, OrganizationRole, and ProjectRole are stereotypes of Actor; Organization, OrganizationUnit, and Project are stereotypes of Package; and Supervise and RoleAssignment (with start, end, and workload tags) are stereotypes of Dependency.

Figure 4. UML profile for defining organization structure
The mapping is a non-trivial task, because deep knowledge of how to apply the UML language is needed. Most of the concepts, e.g. Organization, will map to stereotypes on a selected metaclass, e.g. the Organization stereotype on the Package metaclass, with tags defining additional properties that are missing in the UML metamodel, e.g. ID, VAT code, type. Some of the domain concept properties will map straightforwardly to UML metaclass properties, for example, the name property. Some of the relations defined in the metamodel will be mapped to standard UML metamodel relations, e.g. the contains relation from Organization to OrganizationUnit is mapped to the standard UML ability to contain a package inside another package. The other relations should be mapped to stereotypes, e.g. the relations between Employee and OrganizationRole or ProjectRole should be mapped to the RoleAssignment stereotype on the Dependency metaclass, with additional tags defining the start and end dates and the workload ratio. We also define icons for most of the stereotypes, including custom line styles and line ends for path stereotypes. This allows the modeler to use intuitive symbols instead of UML shapes that might be unacceptable for the specific domain needs.
4.3 Defining Validation Rules

Some of the properties defined in the metamodel will be difficult to map to stereotypes and their properties. These should be mapped to UML constraints specified in OCL. Current state-of-the-art modeling tools have integrations with OCL execution environments like the Dresden OCL2 toolkit [8]. These constraints can be used as validation rules to ensure the correctness and completeness of DSML user models. The metamodel rule that an employee can be supervised by at most one supervisor can be expressed in the following OCL constraint, defined in the context of the Supervise stereotype:

    context Supervise inv singleSupervisor:
      Supervise::allInstances()->excluding(self)->
        forAll(s | not (s.supplier = self.supplier))
In Figure 5, we present a screenshot from the MagicDraw UML modeling environment displaying the result of validating a model that does not conform to this validation rule. We should define multiple validation rules covering all the aspects important for model correctness and completeness. Most of the validation rules will be defined in the context of stereotypes. We should apply the «validationSuite» stereotype to the profile itself to enable using all its validation rules for validating the DSML user model.

Figure 5. A screenshot indicating a validation error caused by double supervising
4.4 Customizing the Language

To visualize the possibilities of customization, we present a sample customization element and the resulting specification dialog for a model element to which the customized stereotype was applied, see Figure 6. We should define such customizations for every stereotype in the organization structure profile.

Figure 6. a) Diagram symbol representing a user-defined DSML element (employee); b) DSML customization element; c) a custom specification dialog for editing the employee's properties
4.5 Defining Custom Diagrams

For easy usage of a DSML, it is recommended to create a custom diagram type that is limited to the domain modeling concepts. MagicDraw UML provides a wizard for this purpose, which takes the user through the steps of specifying the diagram name, type, and icon; selecting the profile(s) and toolbars to use; constructing specific toolbars from DSML elements; and specifying symbol properties and rules for smart manipulators (quick-access relationships and elements). A sample screenshot of the resulting custom diagram was presented in Figure 5. Note that the diagram contains a customized organization structure modeling toolbar with selection buttons for the defined DSML concepts. Together with the stereotype customizations, the custom diagram provides a simple and intuitive DSML environment that hides the complexity of UML but reuses the powerful features of the MagicDraw modeling environment. The created diagram type is portable: its setup is saved as XML, so it can be exported and imported in the same tool across different user machines.
4.6 Implementing Code Generators

It is a common practice to use UML models as an input for code generators and MDA tools to generate implementation-level artifacts such as source code, XML or database schemas. In most tools, code generation is based on textual templates: a template engine parses the templates and fills in values from the model. The DSML environment should provide the ability to use domain-specific types and their properties in templates in the same way as standard UML classes and properties are used. We have adopted the Velocity Template Language (VTL) for creating the model report engine and implemented the ability to explicitly reference DSML elements and their properties in VTL specifications (as opposed to accessing their information through UML metaclasses and stereotype properties). This makes it possible to write code generators that use only domain concepts.

As an example of using the proposed template engine, we generate an organization structure web site from the user model produced in the previous sections. This task can be performed by creating an HTML page filled with dummy data in any HTML editor and then replacing the dummy data with references to model elements and properties. Figure 7 shows a code fragment for representing an employee's information in an HTML template (note that domain-specific concepts are explicitly used for code generation), and Figure 8 shows the generated HTML page in a web browser. In the VTL scripts, the employee's personal information is accessed using the firstName, lastName, and email properties of Employee. The employee's organization unit is obtained from the standard UML property owner, as Organization Unit is a customized UML Package element. A standard Velocity #foreach directive is used for iterating over the employee's skills referenced from the skills property of Employee. Using this report engine, we may generate a full-featured web site representing the complete organization structure.
Figure 7. A screenshot of an HTML template with VTL scripts for generating employee info pages
Figure 8. A screenshot displaying how the generated HTML code looks in a browser
In the same way, we can create templates for generating relational database or XML schemas from the organization structure metamodel. More sophisticated code generation features are available in MDA tools, which typically use UML models as input. Since we use a DSML built on top of UML profiles, those tools can easily be reused with organization structure models as well.
4.7 Testing DSML and User Models

It is very important to set the rules for validation of user models. For example, for modeling an organization structure we can set the following rules:

− The top-level element should be an Organization;
− An Organization Unit should be defined inside the Organization and can possibly be nested under a higher-level Organization Unit;
− Inside each Organization Unit we should have packages Projects and Roles that are used for storing the appropriate model elements;
− Skills should be defined in a separate package structure outside of the Organization;
− An Employee should be defined inside the Organization Unit for which he primarily works.

To test these rules, the user should create a sample model containing all DSML elements in valid and invalid modeling situations, exercising each validation rule. Although we present this as the last step, in practice the sample might be created and developed iteratively. Many modelers will prefer a test-driven approach, in which a small model sample is added before each validation rule is defined. A fragment of the organization structure model repository is shown in Figure 9. It is also a recommended practice to define a sample model project that includes the profile and defines an empty model structure that can be used as a template for starting a new model.
Figure 9. A screenshot displaying a fragment of the organization structure model repository and a DSML diagram specifying project roles and assigning them to organization employees
5 User Feedback and Future Work
As members of the MagicDraw UML R&D team, we have received a lot of positive feedback from our users after implementing and releasing the DSML engine in our tool. Our request database also shows 163 requests for improvements proposed by our users. We cannot analyze all of these requests extensively due to the space restrictions of this paper, but we can clearly observe general trends where the suggested DSML approach needs to be improved. We briefly discuss the main issues reported by our users that we are currently working on.

The first and biggest group of issues concerns the lack of functionality to change the appearance of standard UML symbols when designing a new DSML. It is currently possible to change the icon of a symbol, add or remove compartments, hide existing subsymbols such as names and tags, specify which symbols can be connected to a given symbol, etc. However, this is insufficient when a completely new symbol is needed that has nothing in common with existing UML symbols. OMG is currently working on the Diagram Definition metamodel, which allows specifying graphical symbols. After implementing this metamodel in our tool, users will be able to design their own symbols that are completely different from UML symbols.

Another issue is that the current DSML engine does not allow changing default UML metamodel values. In practice this means, for example, that it is impossible to specify that in a concrete DSML the attribute visibility should be "public" by default. Another frequently reported issue is the inability to inherit DSML customizations: users tend to inherit stereotypes and expect the inherited stereotypes to be treated as new metamodel elements, because the super-stereotype already has a customization class. Other requests that will be implemented in the nearest release include: derived tag support (the ability to have derived properties just like UML derived properties); the ability to set default symbol sizes in the customization class; and restricting custom specification tables by element type (it should be possible to specify the types of elements in the collections displayed in custom specification tables).
6 Conclusions
We have discussed the recent trends in domain-specific languages and emphasized the need for a lightweight method for building domain-specific modeling languages (DSML) based on UML profiles. A UML profile is preferable in many situations since it allows getting results faster. However, the UML profiling mechanism alone is insufficient for creating a DSML. The implemented additional customization layer for virtually transforming stereotypes into new metaclasses enables creating a full-featured DSML supporting validation of user models. The main benefit of our approach is that it allows defining and using a DSML in a UML tool, which leverages many standard modeling environment features: diagramming, managing the model repository, analyzing model element dependencies, relationship matrices, comparing and merging models, working in a team on a single model, calculation of model metrics, support for patterns, libraries and templates, automated layout functions, model refactoring and transformations, report and code generation, a plug-in platform, and using DSML elements in the context of UML diagrams. For achieving efficient DSML transformations, including reports and code generation, one can reuse existing approaches like the Velocity Template Language.
However, the creators of a DSML need deep knowledge of the UML metamodel for mapping domain concepts to UML concepts, and of OCL syntax for creating validation rules. They also need to invest a considerable amount of time into defining customizations. We understand the limitations of the current implementation, which the user feedback has confirmed. The nearest release will include the following enhancements: functionality for creating completely new graphical symbols, setting default values for UML elements, inheriting customizations, using derived tags, and others.
7 Acknowledgements
We would like to thank Nerijus Jankevicius, who was the primary author of the idea to customize stereotypes, Tomas Juknevicius, who proposed the validation framework, and the other members of the MagicDraw UML R&D team who contributed to the presented approach for creating DSMLs based on UML profiles.
References

[1] Abouzahra, A., Bézivin, J., Fabro, M.D.D., Jouault, F. A practical approach to bridging domain specific languages with UML profiles. In: Proceedings of the Best Practices for Model Driven Software Development at OOPSLA'05, San Diego, 2005.
[2] Booch, G. Object Oriented Design with Applications. Benjamin Cummings, California, 1991.
[3] Bräuer, M., Lochmann, H. Towards Semantic Integration of Multiple Domain-Specific Languages Using Ontological Foundations. In: 4th International Workshop on (Software) Language Engineering (ATEM'07), Nashville, 2007.
[4] Chen, K., Sztipanovits, J., Neema, S., Emerson, M., Abdelwahed, S. Toward a Semantic Anchoring Infrastructure for Domain-Specific Modeling Languages. In: The Fifth ACM International Conference on Embedded Software (EMSOFT'05), Jersey City, pp. 35-43, 2005.
[5] Chen, P. The Entity-Relationship Model – Toward a Unified View of Data. ACM Transactions on Database Systems, Vol. 1, No. 1, pp. 9-36, 1976.
[6] Clark, T., Evans, A., Sammut, P., Willans, J. Applied Metamodelling: A Foundation for Language Driven Development. Version 0.1, http://www.vanguard-technologies.com/Services/AppliedMetamodellingV01.pdf, 2004.
[7] Denno, P. Model-Driven Integration Using Existing Models. IEEE Software, Vol. 20, No. 5, pp. 59-63, 2003.
[8] Dresden OCL2 Toolkit, http://dresden-ocl.sourceforge.net/.
[9] Eclipse Modeling Framework, http://www.eclipse.org/emf/.
[10] Esser, R., Janneck, J.W. A Framework for Defining Domain-Specific Visual Languages. In: Workshop on Domain Specific Visual Languages, in conjunction with OOPSLA 2001, Tampa Bay, 2001.
[11] Evans, E. Domain-Driven Design: Tackling Complexity in the Heart of Software. Addison-Wesley, 2003.
[12] Feilkas, M. How to Represent Models, Languages and Transformations? In: Gray, J., Tolvanen, J.-P., Sprinkle, J. (eds.) 6th OOPSLA Workshop on Domain-Specific Modeling (DSM'06), Computer Science and Information System Reports, TR-37, Jyväskylä, 2006.
[13] Greenfield, J., Short, K., Cook, S., Kent, S., Crupi, J. Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools. Wiley, 2004.
[14] Gudas, S., Skersys, T., Lopata, A. Approach to Enterprise Modelling for Information Systems Engineering. Informatica, Vol. 16, No. 2, pp. 175-192, 2005.
[15] Jacobson, I., Christerson, M., Jonsson, P., Overgaard, G. Object-Oriented Software Engineering – A Use Case Driven Approach. ACM Press, Addison-Wesley, 1992.
[16] Jouault, F., Bézivin, J. KM3: a DSL for Metamodel Specification. In: Gorrieri, R., Wehrheim, H. (eds.) Formal Methods for Open Object-Based Distributed Systems, LNCS, Vol. 4037, Springer, Berlin/Heidelberg, pp. 171-185, 2006.
[17] Kapocius, K., Butleris, R. Repository for Business Rules Based IS Requirements. Informatica, Vol. 17, No. 4, pp. 503-518, 2006.
[18] Lagarde, F., Espinoza, H., Terrier, F., Gerard, S. Improving UML Profile Design Practices by Leveraging Conceptual Domain Models. In: 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE 2007), Atlanta, 2007.
[19] Ledeczi, A., Maroti, M., Bakay, A., Karsai, G., Garrett, J., Thomason, C., Nordstrom, G., Sprinkle, J., Volgyesi, P. The Generic Modeling Environment. In: Workshop on Intelligent Signal Processing, Budapest, 2001.
[20] OMG: MDA Guide Version 1.0.1, http://www.omg.org/docs/omg/03-06-01.pdf, 2003.
[21] OMG: Meta Object Facility (MOF) Core Specification, Version 2.0, http://www.omg.org/docs/formal/06-01-01.pdf, 2006.
[22] OMG: OMG Systems Modeling Language (OMG SysML), V1.0, http://www.omg.org/cgi-bin/doc?formal/2007-09-01, 2007.
[23] OMG: OMG Unified Modeling Language (OMG UML), Superstructure, V2.1.2, http://www.omg.org/spec/UML/2.1.2/Superstructure/PDF, 2007.
[24] OMG: UML Profile for MARTE, Beta 1, http://www.omg.org/cgi-bin/doc?ptc/2007-08-04, 2007.
[25] OMG: UML Profile for the Department of Defense Architecture Framework (DoDAF) and the Ministry of Defence Architecture Framework (MODAF), Beta 1, http://www.omg.org/docs/dtc/07-08-02.pdf, 2007.
[26] Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F., Lorensen, W. Object-Oriented Modeling and Design. Prentice Hall, New Jersey, 1991.
[27] Selic, B. A Systematic Approach to Domain-Specific Language Design Using UML. In: 10th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC'07), pp. 2-9, 2007.
[28] The Extensible Markup Language specification, http://www.w3.org/TR/REC-xml/.
[29] The Ruby language website, http://www.ruby-lang.org/.
[30] Tolvanen, J., Pohjonen, R., Kelly, S. Advanced Tooling for Domain-Specific Modeling: MetaEdit+. In: Sprinkle, J., Gray, J., Rossi, M., Tolvanen, J.-P. (eds.) The 7th OOPSLA Workshop on Domain-Specific Modeling, Finland, 2007.
[31] White, J., Schmidt, D.C., Mulligan, S. The Generic Eclipse Modeling System. In: Model-Driven Development Tool Implementers Forum, TOOLS '07, Zurich, 2007.
[32] Wirfs-Brock, R., Wilkerson, B., Wiener, L. Designing Object-Oriented Software. Prentice Hall, New Jersey, 1990.
MATCHING DSP ALGORITHM TRANSFORMATIONS FOR POWER, PERFORMANCE AND MEMORY TRADE-OFFS

Vytautas Stuikys 1, Robertas Damasevicius 1, Jevgenijus Toldinas 2, Giedrius Ziberkas 1

1 Kaunas University of Technology, Software Engineering Department, Studentų 50-405, Kaunas, Lithuania, [email protected], [email protected], [email protected]
2 Kaunas University of Technology, Computer Department, Studentų 50-209, Kaunas, Lithuania, [email protected]

Abstract. The traditional development of complex digital signal processing (DSP) applications is now reaching its limits due to intense pressure on the design cycle and strict performance constraints. A newer approach, called Algorithm-Architecture Matching, aims to leverage the design flow by a simultaneous study of both algorithmic and architectural issues, taking into account multiple design constraints as well as algorithm and architecture optimizations. The aim of this paper is to consider the matching between a family (variants) of given representative DSP algorithms and a given set of prescribed constraints (performance, energy consumption, accuracy, memory). We propose to use Feature Diagrams for the representation of the algorithm variability features per se as well as for the matching representation, which is based on algorithm feature and constraint trade-offs. Experimental results for the cosine look-up table, Fast Fourier Transform (FFT) and sparse matrix multiplication algorithms are presented.

Keywords: DSP algorithms, power consumption, energy awareness, embedded applications, pocket PC.
1 Introduction
The traditional development of complex digital signal processing (DSP) applications, which is based on a consecutive design flow (a theoretical study of the algorithms, a study of the target architecture, and finally the implementation), is now reaching its limits due to intense pressure on the design cycle and strict performance constraints. The Algorithm-Architecture Matching approach aims to leverage the design flow by a simultaneous study of both algorithmic and architectural issues, taking into account multiple design constraints as well as algorithm and architecture optimizations. The matching problem, if we want to consider it systematically, requires a thorough study of the domain as a whole. The first step towards dealing with the problem systematically is the identification of the best strategy that has already been proven in related areas such as software engineering.

Current approaches for the architectural design of systems or components in software engineering (including embedded software) predominantly use the product line (PL) concept as a strategic goal. A software PL is a set of software systems that share a common, managed set of features satisfying specific needs and are developed from a common set of core assets in a prescribed way [1]. The PL concept, if applied systematically, allows for a dramatic increase in software design quality and productivity, provides a capability for mass customization, and leads to 'industrial' software design [2]. We suggest applying and adapting the PL concept, focusing on the analysis of the DSP algorithm domain, the analysis of constraints, and the identification of adequate high-level models, in order to be able to build a matching between these domains at the implementation phase.

The key to implementing the PL concept is the use of domain analysis and domain modelling methods with a subsequent explicit representation of the results. The basic concept we exploit in the analysis is the variability of the domain (or, more precisely, the commonality-variability relationships [3]). In general, variability has many dimensions (e.g., variability of DSP algorithms even for the same task, variability of ever-growing technology capabilities, variability of constraints inspired by market demands, technology itself, user requirements, and new appliances such as ambient intelligence, mobile computing, etc.). Consequently, one can predict complex relationships between the various kinds of variability dimensions in the DSP domain. For example, variability of algorithms can be considered not only as an optimization problem (e.g., inventing more efficient algorithms) but also as a specialization problem (e.g., one based on the extraction of useful data, algorithmic, or application properties with respect to the given constraints). Thus, the boundaries of matching between DSP algorithms, constraints, and architecture are becoming extremely wide, especially given such an important constraint as energy awareness at the application level (e.g., in the context of mobile computing).

The aim of this paper is to consider the matching between a family (variants) of given DSP algorithms and a given set of prescribed constraints (e.g., performance, energy consumption awareness, accuracy, memory, and various trade-offs amongst the constraints). We suppose that, given this matching, a designer will be able to justify the selection of the appropriate architecture at the implementation phase. On the other hand, a designer has two possibilities for implementing algorithms: to use a hardware architectural solution, or to implement the given algorithm as a part of the application software. In this paper we propose to use Feature Diagrams [4] for the representation of the variability features of the analysed domains per se as well as for the matching representation, which is based on algorithmic properties and constraint trade-offs identified either analytically or empirically.

The paper is organized as follows. Section 2 discusses the related works. Section 3 presents a general framework for the analysis of the algorithm-architecture matching task. Section 4 describes the representation of narrow DSP sub-domains using Feature Diagrams. Section 5 considers the algorithm and data specialization problem for the DSP domain. Section 6 presents the experimental results with representative algorithms. Finally, Section 7 presents the conclusions.
2 Related Work
The related works can be categorized into two research streams: 1) analysis of architectures and design techniques for power, execution performance and/or calculation accuracy trade-offs, and 2) transformation-based optimization of embedded software for high performance or low power.

1) Power and performance trade-offs. A relationship between the execution time and power consumption of different instructions for the i960 RISC (Reduced Instruction Set Computer) processor is analyzed in [5]. Since no significant statistical variation between the two criteria was established, minimizing execution time also means minimizing energy consumption. Therefore, for DSP applications, instruction reordering, instruction packing, operand reordering, register allocation, and memory assignment can result in power savings. Aspects of DSP code generation for performance, code size, power, and retargetability requirements are discussed in [6]. Graph-based solutions to embedded DSP software synthesis trade-offs are analyzed in [7]. Different processor configuration parameters for the performance/energy optimization of DSP transforms are analyzed in [8]. An energy-aware source compilation framework for DSP applications based on multiple energy-saving criteria (cache hits, processing units, anticipated scheduling factor, central processing unit (CPU) bus activity, binary code size) is described in [9].

2) Transformation-based software optimization. In [10], energy consumption is optimized using two well-known transformation methods: loop unrolling, which aims at reducing the number of processor cycles by eliminating loop overheads, and loop blocking (tiling), which breaks large arrays into several pieces and reuses each one without self-interference. Compiler optimizations such as linear loop transformations, tiling, unrolling, fusion, fission, and scalar expansion are also considered in [11]; however, only loop unrolling has been shown to decrease the consumed energy. Software pipelining and recursion elimination for software-level energy optimization are considered in [12]. Various source code transformations for software power optimization are discussed in [13, 14]. Different search strategies in the optimization space for finding good (in terms of performance) source-level transformation sequences for typical embedded programs written in C are considered in [15].
3 General Framework for the Analysis of the Matching Task
A general framework of the matching task is depicted in Figure 1. At the core of the framework is the Y-diagram that links together the three basic constituents of the task: algorithms, constraints (criteria), and architectures. The framework also outlines the context of the task (e.g., possible methods of the solution domain such as transformation, optimization, etc.; possible implementation technologies, e.g. software or hardware; and applications) and identifies the sub-domain spaces for each constituent. The intersection of the constituents identifies the space for the matching task, represented by a circle in Figure 1.

Figure 1. DSP sub-domain spaces
The space of the DSP algorithms is to be considered for each class of algorithms dedicated to a specific DSP task. If we look at the DSP algorithm taxonomies and the role a specific algorithm plays in the DSP domain, one can identify representative algorithms of the domain. We call algorithms representative if 1) they are most sensitive with respect to achieving the relevant criteria/constraints and 2) applications use them intensively. Examples are cosine calculation, sparse matrix multiplication, the Fast Fourier Transform (FFT), etc.

The space of basic constraints is as follows: performance (time), energy, accuracy, memory, and various trade-offs of the previously mentioned items. These constraints (criteria) are usually common to the entire DSP algorithm domain; however, some specific criteria may not be relevant to a specific algorithm (e.g., accuracy for a matrix multiplication algorithm). Each representative algorithm is to be matched to the set of constraints relevant to a given application. On the other hand, a representative algorithm does not have a unique realization, but rather a family of possible solutions. This family is composed with such aspects in mind as functional variants of algorithms (e.g., the same functionality can be implemented in different ways) and the properties of data in a domain. The other source of variability is the variants of the selected constraints or their trade-offs. As the scope of variability can be very large, we need to manage it in some well-established way. In the next section we present a systematic approach to dealing with the problem using Feature Diagrams.
4 DSP Sub-Domain Representation Using Feature Diagrams
Here, by the DSP sub-domain we mean the representative DSP algorithms and the various constraint/criteria trade-offs induced by a given application. Figure 2 outlines a model of the sub-domain, represented using a feature diagram. In general, a feature diagram is a tree-like notation or directed acyclic graph (DAG) that consists of a set of nodes, a set of directed edges, a set of edge decorations, and relationships and constraints among features. A feature is understood as an externally visible characteristic of an item (i.e., concept, entity, algorithm, system, or domain per se). The root represents the top-level feature. The intermediate nodes represent compound features, and the leaves represent atomic features that are non-decomposable into smaller ones in a given context. The edges are used to progressively decompose a compound feature into more detailed features; they also denote relationships or dependencies between features. One can learn more about the notation from [16, 17].

In Figure 2, mandatory features are denoted by black circles above a feature box, while optional and alternative features are represented by white circles. Mandatory features express common aspects of the concept, whereas optional and alternative features express variability. All basic features may appear either as solitary features or in groups. If all mandatory features in a group are derived from the same parent in the parent-child relationship, we can speak about the and-relationship among those features. An optional feature is one which may be included or not if its parent is included in the feature model. Alternative features, when they appear in groups derived from the same parent, may have relationships such as or, xor, case, etc. The xor-relationship can also be treated as a constraint; usually this happens when the related features are derived from different parents. A compact data-structure rendering of this vocabulary is sketched after Figure 2.
Figure 2. DSP sub-domain expressed through essential features and their 'parent-child' relationships (P – performance, A – accuracy, M – memory, E – energy)
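The following minimal C# sketch (our illustration; the type and member names are not from the paper) renders the feature-diagram vocabulary described above as a data structure:

    using System.Collections.Generic;

    // Feature-diagram vocabulary as a data structure (illustrative sketch).
    enum FeatureKind { Mandatory, Optional }   // black vs. white circle decoration
    enum GroupRelation { And, Or, Xor }        // relationship within a group of children

    sealed class FeatureGroup
    {
        public GroupRelation Relation;                         // and / or / xor among the children
        public List<Feature> Children = new List<Feature>();   // features derived from the same parent
    }

    sealed class Feature
    {
        public string Name;                                    // externally visible characteristic
        public FeatureKind Kind = FeatureKind.Mandatory;       // solitary feature decoration
        public List<FeatureGroup> Groups = new List<FeatureGroup>();  // progressive decomposition
    }

Cross-parent constraints (such as a xor between features under different parents) would be stored separately, since they are not part of the decomposition tree itself.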
The presented model focuses on the specialization of the DSP algorithms. We discuss the specialization problem in more detail in Section 5. We admit that for each representative algorithm there is a set of specialized algorithms, identified as variants 1 to n (see Figure 2). What is needed to solve the matching problem is to build the relationships between variants of constraints and variants of specialized algorithms. The relationships are summarized in Table 1.

Table 1. Possible relationships (+) and limitations (–) among constraints and specialized DSP representative algorithms

Constraints/criteria trade-offs     Cosine (Taylor series,  Cosine (look-up  Multiplication of  FFT
                                    Horner scheme)          tables)          sparse matrices
Energy-Performance                         +                      +                 +            –
Energy-Accuracy                            +                      +                 –            –
Energy-Memory                              –                      +                 +            +
Energy-Performance-Accuracy                +                      +                 –            –
Energy-Performance-Memory                  –                      +                 +            +
Performance-Accuracy                       +                      +                 –            –
5 Algorithm and Data Specialization Problem for the DSP Domain
Algorithms and data structures of an application are two interrelated attributes that have a direct impact on the characteristics of the implementation. For a specific task, both an algorithm and its data structure (if relevant to the given task) can be represented in two different modes: either as a generic or as a specialized representation. The first gives a general solution of the problem under consideration, but in many cases this solution may be either not optimal or not satisfying the given constraints. The second representation is a reconstruction of a generic algorithm (perhaps together with its data structure) aiming at achieving some balance between optimality and the given constraints. We call it the specialization of an algorithm. The specialized version of an algorithm is derived from the generic one through some reconstruction process.

In general, the problem of algorithm specialization is a generalization of the well-known program specialization problem [18]: a specifically annotated program (metaprogram), together with some known portion of its input (parameter value), is transformed into a specialized version obtained by pre-computing the parts of the program that depend only on this input. What is common to these problems is that both 1) aim to improve some characteristics, usually the same ones from the implementation viewpoint, and 2) are based on the analysis and elicitation of some properties of the generic solution in order to realize the specialization. What is different is that the first task is analyzed at a higher abstraction level with the elicitation of algorithmic features (such as operations and data structures), while the second task considers program-related features (e.g., loops, variables, etc.). We define program specialization more formally as a technique for separating a computation process into two parts, so that given f(x, y), it is rewritten as f_x(y), where x constitutes the static (early, constant) part of the computation and y the dynamic (late, variable) part of it:

    T_S : f(x, y) → f_x(y),  x = const        (1)
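A minimal C# illustration of (1) (our example, not from [18]): for a generic power function, the exponent is the static part x, and fixing it to 3 lets the loop be pre-computed away:

    using System;

    static class SpecializationDemo
    {
        // Generic f(x, y): b is the dynamic part y, n is the static part x.
        static double Power(double b, int n)
        {
            double r = 1.0;
            for (int i = 0; i < n; i++)   // this loop depends only on n
                r *= b;
            return r;
        }

        // Specialized f_3(y): the loop over n = 3 was unrolled at
        // specialization time, so only dynamic computation remains.
        static double Power3(double b)
        {
            return b * b * b;
        }

        static void Main()
        {
            Console.WriteLine(Power(2.0, 3));   // 8
            Console.WriteLine(Power3(2.0));     // 8, without the loop overhead
        }
    }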
A separate case of specialization is data specialization [19]. This method aims at encoding the results of early computations in data structures. The execution of a program is divided into two stages: first, computations on known input values are performed and their results are saved in a data structure (a specialized cache or look-up table (LUT)); then the algorithm is modified to use the specialized data structure instead of a performance-costly function. Of course, caching an expression is beneficial only if its execution cost exceeds the cost of a cache reference, i.e. it is recommended only for such performance-costly functions as sine, cosine, logarithm, etc.

As specialization is based on properties of algorithms and data structures (e.g., the level of sparseness and the distribution of zeroes for sparse matrix multiplication), different values of these properties may imply different specialized algorithms [20]. Bearing in mind also other features (e.g., different generic algorithms even for the same task, such as FFT, or different extrapolation schemes for cosine calculation from look-up tables), which increase the number of possible specializations, the variability of the algorithm space can be very large in terms of possible relationships (see Figure 2; the internal algorithmic features are omitted there). Some kinds of relationships (e.g., between performance and the type of a specialized algorithm) can be derived through the use of analytic methods. However, the vast majority of evaluations of energy consumption awareness and algorithm characteristics require empirical investigation. With the advent of mobile computing, this kind of evaluation is becoming more crucial than ever before, though the other characteristics and constraints remain as important as ever.
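By analogy with (1), data specialization can be sketched in the same notation (this formalization is ours, not from [19]): a costly function is replaced by a table of its pre-computed values together with an index mapping $q$ from arguments to table entries,

$$T_D : f(y) \rightarrow \mathrm{LUT}_f[\,q(y)\,], \qquad \mathrm{LUT}_f[i] = f(y_i),$$

where $\mathrm{LUT}_f$ is filled during the first (static) stage, so that only the indexed access remains in the second (dynamic) stage.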
6 Experimental Results with Representative Algorithms
6.1 Cosine Calculation Using a Look-Up Table (LUT)

Cosine calculation has been chosen as a representative algorithm of calculation-intensive applications. We analyse three variants of the representative algorithm in C#: the standard C# library Math.Cos() function, a cosine LUT, and a cosine LUT with linear interpolation. The LUT-based cosine implementations consist of an automatically generated LUT of a predefined size and a wrapper function. The trade-off here is that the accuracy of the result depends upon the size of the LUT (Figure 3). Here, accuracy is expressed in terms of the maximum error of the cosine LUT with respect to the standard Math.Cos() function on a mobile device. In a simple LUT without interpolation of function values, the value of a function argument is rounded to the nearest value for which a function value exists in the LUT; thus the accuracy of this approach is coarse. A more complex approach uses a LUT with linear interpolation of the function values for those arguments that are not available in the LUT.

Figure 3. Accuracy of cosine calculation using look-up tables (LUT)
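A minimal C# sketch of the two LUT variants described above (the table size, names, and rounding details are our assumptions; the paper's auto-generated wrapper code is not shown in the text):

    using System;

    static class CosineLut
    {
        const int Size = 1024;                      // predefined LUT size
        const double Step = 2.0 * Math.PI / Size;   // argument step between table entries
        static readonly double[] Table = new double[Size];

        static CosineLut()
        {
            for (int i = 0; i < Size; i++)          // static stage: pre-compute cos at grid points
                Table[i] = Math.Cos(i * Step);
        }

        // Variant 1: no interpolation - the argument is rounded
        // to the nearest grid point, so accuracy is coarse.
        public static double Cos(double x)
        {
            int i = (int)Math.Round(x / Step) % Size;
            if (i < 0) i += Size;                   // wrap arguments into one period
            return Table[i];
        }

        // Variant 2: linear interpolation between the two
        // neighbouring grid points - slower, but more accurate.
        public static double CosInterp(double x)
        {
            double pos = x / Step;
            int i0 = (int)Math.Floor(pos);
            double frac = pos - i0;                 // fractional distance to the next entry
            i0 = ((i0 % Size) + Size) % Size;
            int i1 = (i0 + 1) % Size;
            return Table[i0] + (Table[i1] - Table[i0]) * frac;
        }
    }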
The complexity of the LUT-based method without interpolation is constant: it requires only one multiplication for the calculation of the LUT index and does not depend upon the size of the LUT. The complexity of the LUT with linear interpolation is also constant: it requires 2 multiplications and 4 additions, and does not depend upon the size of the LUT.

The experiments were performed on a Compaq iPAQ H3900 (Pocket PC platform, Intel PXA250 400 MHz CPU, 32 MB RAM, Windows CE 3.0 OS). The calls to the standard Math.Cos() function, the cosine LUT, and the cosine LUT with linear interpolation were put into a loop and repeated 1E8 times. The results of the execution time and battery voltage drop measurements are expressed as percentages of the corresponding measurement results of the standard Math.Cos() function (i.e., we assume that the execution time and voltage drop of Math.Cos() equal 100%). For the LUT-based cosine implementations with and without interpolation, the execution time and voltage drop values are flat (see Figure 4).

Figure 4. Cosine calculation using look-up tables: a) execution time, and b) power consumption in relation to a standard library function
The LUT without interpolation has the lowest energy consumption (about 5-6% of the Math.Cos() energy consumption) and the best performance (about 4% of the Math.Cos() execution time). The LUT with linear interpolation has worse energy consumption (about 12-25% of the Math.Cos() energy consumption) and performance (about 9% of the Math.Cos() execution time), but higher accuracy. However, there is a significant data memory usage overhead for the LUTs, which is a relevant concern for mobile devices. Therefore, here we have a Performance-Accuracy-Energy-Memory trade-off between the standard cosine function and the two LUT-based implementations of cosine. More details about this experiment can be found in [20].
Sparse Matrix Multiplication Sparse matrices often appear in various scientific and engineering applications such as finite element computation. When sparse matrices are stored and processed on a computer, it is beneficial to use specialized algorithms and data structures that take advantage of the matrix sparsity. Standard matrix structures and algorithms are slow and consume large amounts of memory when applied to large sparse matrices. Sparse data is by nature easily compressed and this compression almost always results in less memory usage, higher processing speed and lower power consumption. An important problem for the engineers is to find a sparsity limit, from which the application of the matrix compression pays off in terms of memory usage, power consumption and multiplication time. Here we compress sparse matrices using “Storage-by-columns” algorithm [21], where a matrix is substituted with 3 vectors, which store the nonzero elements of the matrix and their position. Such specialized matrix data structure is used by a matrix multiplication algorithm. Both algorithms for uncompressed and compressed matrix multiplication were implemented in C#.
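To illustrate the storage-by-columns idea, the following sketch (written in Java here; the paper's implementation is in C#) replaces a dense matrix with three vectors and multiplies it by a dense vector touching only the stored nonzeros. The field and method names are illustrative and are not taken from the ESSL reference [21]:

    // Compressed column storage: values, row indices and column start offsets.
    public final class SparseMatrixCsc {
        final int rows, cols;
        final double[] values;   // nonzero elements, stored column by column
        final int[] rowIndex;    // row index of each nonzero element
        final int[] colStart;    // colStart[j] .. colStart[j+1]-1 index column j

        SparseMatrixCsc(double[][] dense) {
            rows = dense.length;
            cols = dense[0].length;
            colStart = new int[cols + 1];
            int nnz = 0;
            for (double[] row : dense) for (double v : row) if (v != 0.0) nnz++;
            values = new double[nnz];
            rowIndex = new int[nnz];
            int k = 0;
            for (int j = 0; j < cols; j++) {
                colStart[j] = k;
                for (int i = 0; i < rows; i++) {
                    if (dense[i][j] != 0.0) { values[k] = dense[i][j]; rowIndex[k] = i; k++; }
                }
            }
            colStart[cols] = k;
        }

        // y = A * x: the cost is proportional to the number of nonzeros only.
        double[] multiply(double[] x) {
            double[] y = new double[rows];
            for (int j = 0; j < cols; j++)
                for (int k = colStart[j]; k < colStart[j + 1]; k++)
                    y[rowIndex[k]] += values[k] * x[j];
            return y;
        }
    }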
Figure 5. Sparse matrix multiplication results: a) data memory usage, b) multiplication time and c) energy consumption expressed via voltage drop
The experiments were performed on a Compaq iPAQ H3900 (Pocket PC platform, Intel PXA250 400 MHz CPU, 32 MB RAM, Windows CE 3.0 OS) with 100x100 randomly generated matrices for both the uncompressed and the compressed case. The multiplication operation was performed 1500 times in both cases. The results of the experiments are presented in Figure 5. From the results, we can see that compression is memory-efficient when the sparsity of matrices is above 67%, multiplication time is lower when sparsity is above 63%, and power consumption is lower when sparsity is above 63%.
6.3 FFT Specialization
FFT (Fast Fourier Transform) is widely used in various DSP applications and image processing. Here we specialize an FFT algorithm [22] with respect to the number of points (data block size). FFT loops were unrolled and calls to the performance-costly sine and cosine functions were replaced with the pre-computed function values. The annotations for loop unrolling and value insertion were coded in a metaprogram using a metalanguage, and the generation of the specialized FFT algorithm (in C#) is performed automatically. The experiments were performed on an ASUS P750 (Pocket PC platform, Intel PXA270 520 MHz CPU, 64 MB RAM, Windows Mobile 6 Professional OS). The results for the original FFT algorithm and an FFT algorithm specialized with respect to the data block size using the program specialization technique are presented in Figure 6. Each FFT operation was repeated 10E6 times. We can see that FFT specialization can improve performance and power characteristics by 60% (see Figure 6, b and c). However, the specialized FFT uses much more program memory than the original version of the FFT algorithm. While the size of the executable file of the original FFT algorithm remains constant, the size of the executable file of the specialized FFT algorithm grows exponentially (see Figure 6, a; note that the plot axes are logarithmic).
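The metaprogram-based generation can be conveyed by a toy sketch: a generator program pre-computes the sine/cosine twiddle factors at generation time and emits them as source-code literals, so the specialized code contains no runtime trigonometric calls. This Java sketch illustrates the concept only; it is not the metalanguage used in the paper, and the emitted array names are assumptions:

    // Emit pre-computed twiddle factors for a fixed block size n as source text.
    static String generateTwiddleTable(int n) {
        StringBuilder src = new StringBuilder();
        src.append("static final double[] W_RE = {");
        for (int k = 0; k < n / 2; k++)
            src.append(Math.cos(-2 * Math.PI * k / n)).append(k < n / 2 - 1 ? ", " : "");
        src.append("};\n");
        src.append("static final double[] W_IM = {");
        for (int k = 0; k < n / 2; k++)
            src.append(Math.sin(-2 * Math.PI * k / n)).append(k < n / 2 - 1 ? ", " : "");
        src.append("};\n");
        return src.toString(); // the generated code needs no runtime sin/cos calls
    }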
Figure 6. FFT specialization results: a) program memory usage, b) time and c) energy consumption expressed via voltage drop
6.4 Evaluation of Results
The matching between different variants of the DSP algorithms and a given set of prescribed constraints (e.g., performance, energy consumption, accuracy, program and data memory, and various trade-offs amongst the constraints) can help an embedded software developer to choose the most suitable implementation for his design task. E.g., for the sparse matrix multiplication task, the experimentally established trade-off point between using uncompressed and compressed matrices lies at about 63-67% sparsity (see Figure 5). A specialized program is almost always more efficient (in terms of execution time or energy consumption) than the general program, but the general program tends to be easier to write, debug and maintain, and it does not need to be rewritten for each new case. On the other hand, when the size of the specialization problem is large, code explosion can occur (see Figure 6, a). For example, this situation can occur when a loop needs to be unrolled and the number of iterations is high. It may degrade the execution time of the specialized program because of instruction cache misses. In the case of data specialization, where calculation-intensive computations are substituted with memory access operations, the performance and energy consumption values are usually only a small fraction of the values shown by the calculation-intensive function (see Figure 4). However, the primary concern here is low accuracy, which can be improved 1) by using some additional computation such as linear or quadratic interpolation of function values, or 2) by increasing the size of the data look-up table. The latter approach is, however, limited by the small random access memory (RAM) and cache sizes of mobile devices.
7 Conclusions
We have presented the Algorithm-Architecture Matching framework of the methodology for managing embedded DSP software development trade-offs (speed, energy, accuracy, memory), which uses feature diagrams for describing trade-off models at a high level of abstraction. The proposed framework considers the analysis of the problem and solution domains at program construction time. Analysis of the problem domain consists of identification of the influential and awareness factors, the type of the problem, the performance, accuracy, memory and energy criteria, and the related trade-offs and relationships among the criteria. Analysis of the solution domain consists of identification of calculation-intensive algorithms for embedded applications, selection of representative algorithms, and specialization of the representative algorithms for different criteria. For specialization, we use two methods: algorithm specialization for known input values and data structure specialization. Our methodology allows the developer to represent embedded program design trade-offs in the energy-speed and energy-accuracy dimensions, and to select a program implementation that best matches system requirements and/or constraints. Future work will focus on extending the proposed methodology to other types of algorithms, such as communication-intensive algorithms.
References
[1] Clements P., Northrop L. Software Product Lines: Practices and Patterns. Boston, MA: Addison-Wesley, 2002.
[2] MacGregor J. Requirements Engineering in Industrial Product Lines. Proc. of Int. Workshop on Requirements Engineering for Product Lines REPL’02, Essen, Germany, 2002, pp. 5-11.
[3] Lee S.-B., Kim J.-W., Song C.-Y., Baik D.-K. An Approach to Analyzing Commonality and Variability of Features using Ontology in a Software Product Line Engineering. Proc. of the 5th ACIS Int. Conf. on Software Engineering Research, Management & Applications SERA 2007, 20-22 August 2007, Busan, Korea, pp. 727-734.
[4] Kang K., Cohen S. G., Hess J. A., Novak W. E., Peterson A. S. Feature-Oriented Domain Analysis (FODA) Feasibility Study. CMU/SEI-90-TR-21, Carnegie Mellon University, Pittsburgh, PA, November 1990.
[5] Russell J., Jacome M. Software Power Estimation and Optimization for High Performance, 32-bit Embedded Processors. Proc. of the Int. Conf. on Computer Design ICCD’98, 5-7 October 1998, Austin, Texas, USA, pp. 328-333.
[6] Gebotys C. H., Gebotys R. J. Complexities in DSP Software Compilation: Performance, Code Size, Power, Retargetability. Proc. of the 31st Annual Hawaii Int. Conf. on System Sciences, Kohala Coast, Hawaii, USA, 6-9 January 1998, pp. 150-156.
[7] Bhattacharyya S. S. Optimization trade-offs in the synthesis of software for embedded DSP. Proc. of the Int. Workshop on Compiler and Architecture Support for Embedded Systems, Washington, D.C., USA, 1999, pp. 97-102.
[8] D’Alberto P., Puschel M., Franchetti F. Performance/Energy Optimization of DSP Transforms on the XScale Processor. In K. De Bosschere et al. (Eds.), 2nd Int. Conf. on High Performance Embedded Architectures and Compilers HiPEAC 2007, Ghent, Belgium, 28-30 January 2007. Springer LNCS 4367, pp. 201-214.
[9] Azeemi N. Z. Compiler Directed Battery-Aware Implementation of Mobile Applications. Proc. of the 2nd IEEE Int. Conf. on Emerging Technologies 2006, Peshawar, Pakistan, 2006, pp. 151-156.
[10] Chung E.-Y., Benini L., De Micheli G. Source code transformation based on software cost analysis. Proc. of the 14th Int. Symposium on Systems Synthesis (ISSS ’01), New York, NY, USA, 2001, pp. 153-158.
[11] Kandemir M., Vijaykrishnan N., Irwin M., Ye W. Influence of Compiler Optimizations on System Power. IEEE Trans. on Very Large Scale Integration (VLSI) Systems, 9(6): 801-804, 2001.
[12] Mehta H., Owens R. M., Irwin M. J., Chen R., Ghosh D. Techniques for Low Energy Software. Int. Symposium on Low Power Electronics and Design ISLPED 1997, 18-20 August 1997, Monterey, CA, USA, pp. 72-75.
[13] Benini L., De Micheli G. System-level power optimization: techniques and tools. ACM Trans. Des. Autom. Electron. Syst., 5(2): 115-192, April 2000.
[14] Wong P. Y. H. An Investigation in Energy Consumption Analyses and Application-Level Prediction Techniques. MSc Thesis, University of Warwick, UK, 2006.
[15] Franke B., O'Boyle M., Thomson J., Fursin G. Probabilistic source-level optimisation of embedded programs. Proc. of the Conf. on Languages, Compilers, and Tools for Embedded Systems (LCTES'05), Chicago, Illinois, USA, 15-17 June 2005, pp. 78-86.
[16] Kang K., Lee J., Donohoe P. Feature-Oriented Product Line Engineering. IEEE Software, 19(4): 58-65, 2002.
[17] Schobbens P.-Y., Heymans P., Trigaux J.-Ch., Bontemps Y. Feature Diagrams: A Survey and a Formal Semantics. 14th IEEE Int. Requirements Engineering Conference (RE'06), 11-15 September 2006, Minneapolis/St. Paul, Minnesota, USA, pp. 136-145.
[18] Jones N. D., Gomard C. K., Sestoft P. Partial Evaluation and Automatic Program Generation. Prentice Hall Int., 1993.
[19] Knoblock T. B., Ruf E. Data Specialization. ACM SIGPLAN Notices, 31(5): 215-225, 1996.
[20] Damaševičius R., Štuikys V., Toldinas E. Embedded program specialization for multiple criteria trade-offs. Electronics and Electrical Engineering, 8(88), pp. 9-14, 2008.
[21] IBM. ESSL Guide and Reference: Sparse Matrices. http://www.navo.hpc.mil/usersupport/IBM/ESSL/essl148.html
[22] Chirokoff S., Consel C., Marlet R. Combining Program and Data Specialization. Higher-Order and Symbolic Computation, 12(4): 309-335, Kluwer Academic Publishers, 1999.
[23] Štuikys V., Damaševičius R. Metaprogramming Techniques for Designing Embedded Components for Ambient Intelligence. In T. Basten, M. Geilen, H. de Groot (eds.), Ambient Intelligence: Impact on Embedded System Design, Kluwer Academic Publishers, 2003, pp. 229-250.
SOFTWARE SYSTEM TESTING METHOD BASED ON STATE TRANSITIONS
Dominykas Barisas, Eduardas Bareisa
Kaunas University of Technology, Software Engineering Department, Studentų st. 50, LT-51368 Kaunas, Lithuania
[email protected], [email protected]
Abstract. In most cases, software is delivered together with many detailed specifications and models. System state machine models can be used for state-based testing techniques and help to understand the behavior of the application. The proposed approach presents a program testing framework which uses behavioral models and identifies state transitions. State transition path identification is based on system behavioral models as well as the source code, which helps to construct more detailed tests corresponding to the real system and reduces maintenance effort. The aim is to gather specifications from code and construct a model which is useful in terms of testing. The document defines a framework which identifies states and automatically generates tests from the model.
Keywords. Integration testing, model based testing, automated test generation.
1 Introduction
Model-based approaches have become popular not only in software design and development, but are widely used for testing as well. There are a number of advantages, but also difficulties and shortcomings of various model-based approaches. Many object-oriented techniques have been used as solutions to address the increasing demand for assuring software quality. A lot of investigation has been done to ensure correct object functionality; however, a large number of software bugs are introduced by object integration. Errors appearing in interactions between objects are hard to find. Many different UML models have been used for different kinds of testing, including state machine, sequence and communication diagrams [1, 2, 3]. Object-oriented systems are based on their object interactions, and incorrect behaviors are observed during integration. These can be missing functions or different kinds of conflicts between objects. The presented technique improves object-oriented software integration testing by taking into account all class states interacting in a communication diagram. The UML model includes state machine and communication diagrams. Much research has been done on testing system state-based behavior using state machines or UML interaction diagrams for the interacting behavior [4, 5, 6, 7]. In this paper, system object interactions in all possible states are modeled using state machine and communication diagrams, while state transitions are retrieved from the source code.
2 The Testing Technique Proposal
The proposed testing model is created by generating communication and state machine diagrams and by covering all paths. The main focus is on testing all possible state interactions between objects in the model [8]. The proposed approach can be used to test object integration; therefore it should be applied during the class integration phase.
2.1 Recognizing object states
In the proposed technique the state of each object has to be obtained. Much investigation has been made regarding object states [14, 15, 16]; however, there are a number of different ways of obtaining or recognizing object states from the code. One of them is provided in this section. A system is composed of a set of states, where each state describes the system at a particular point in its execution [17]. The behavior of the system is divided into a set of state transitions, where each transition changes one state to another. It is important to reverse-engineer state transitions and identify each transition with its respective source code. For example, a state transition method f() changes the state State1→State2; it consists of the source code that is executed between the entry and exit points of the method f(). A state transition is represented by the set of paths through the program that cause the system to change the state. The goal is to identify these paths.
The state can be denoted as follows: State = (V, PCondition, PCounter), where V represents the values of program variables, PCondition is a condition, and PCounter is a number pointing to the current statement of the program. First of all, state transition points in terms of the source code syntax (for example, method calls, exceptions) should be identified. This is straightforward and can be fully automated. As an example, in UML state charts transitions are triggered by calls to methods, as in this case: the method is responsible for the state change in the system. The next step is constructing the tree, selecting all execution states that correspond to transition points, and identifying state transitions between the states by detecting marked transition points in the tree. Every state corresponds to the execution of a statement in the source code. Such an implementation is first responsible for instrumenting the program to make it symbolically executable and inserting annotations at transition points. The other part is a listener class that monitors the Java PathFinder search process and identifies the state transitions when they occur. Below is shown a small example of a Money balance class (Figure 1) and two states of the class (Figure 2). It is presumed that we want to find out how the system behaves in terms of the Empty and Positive balance states.
Figure 1. Money balance class
Figure 2. States for simple money balance
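Figures 1 and 2 appear as images in the original. A minimal Java sketch of such a Money class, consistent with the fault discussion at the end of this section (a guard on a possibly uninitialized amount), might look as follows; the exact fields and signatures are assumptions:

    public class Money {
        private Integer amount;                  // one constructor may leave this null

        public Money() { }                       // does not initialize amount
        public Money(int amount) { this.amount = amount; }

        public void add(Money other) {
            if (other.amount != null) {          // guard discussed below
                amount = (amount == null ? 0 : amount) + other.amount;
            }
        }

        public void subtract(Money other) {
            if (other.amount != null) {
                amount = (amount == null ? 0 : amount) - other.amount;
            }
        }

        public boolean lessEqual(Money other) {
            return amount != null && other.amount != null && amount <= other.amount;
        }

        public String toString() { return String.valueOf(amount); }

        // Encodes the two abstract states of Figure 2.
        public String getState() {
            return (amount == null || amount == 0) ? "Empty" : "Positive balance";
        }
    }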
The object states are encoded in the getState function. Every public method can be responsible for a state transition. The program is executed so that the state transitions can be identified. This involves executing every possible set of calls to Money.add, Money.subtract, Money.lessEqual and Money.toString, calling each combination a limited number of times. This can be done with the model checker [18], returning the application to the initial state after each combination. Transition functions are determined by the set of statements that are executed between a pair of state transition points. Below is a small example of a conditioned program:

    if (k1 == k2) {
        if (k1 == k3) {
            // state1
        } else {
            // state2
        }
    } else {
        if (k1 == k3) {
            // state2
        } else if (k2 == k3) {
            // state2
        } else {
            // state3
        }
    }
Figure 3 illustrates the execution tree of this snippet of code.
Figure 3. Execution tree of the conditioned program.
To get a better understanding, the following steps define the process of obtaining a set of state transitions from the source code:
- Identify state transition points (method calls, exceptions, etc.).
- Construct the execution tree, identifying states that correspond to transition points.
- Bind transition points with states.
- Identify state transitions using the execution tree.
The process of getting state transitions from an implementation is shown in Figure 4.
Figure 4. State transition identification (arrows indicate transition points).
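The listener part mentioned in this section might be sketched as follows. Java PathFinder API details vary across versions, so the class and method names here should be treated as illustrative:

    import gov.nasa.jpf.ListenerAdapter;
    import gov.nasa.jpf.search.Search;

    // Observes the JPF search and records transitions between execution states,
    // which would then be bound to the marked transition points.
    public class StateTransitionListener extends ListenerAdapter {
        private int previousState = -1;

        @Override
        public void stateAdvanced(Search search) {
            int current = search.getStateId();
            if (previousState != -1 && search.isNewState()) {
                System.out.println("transition: " + previousState + " -> " + current);
            }
            previousState = current;
        }
    }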
Having information about where a transition can start and end, we add a state for every transition point in the execution tree. The interval between a pair of transition points corresponds to a state transition function. When using this approach, suppose tests show that a fault in the code results in the state change Empty→Empty instead of Empty→Positive balance when calling the method Money.add(); what is the mistake in the source code? The reverse-engineered state transitions did not match the transitions that were predicted initially (Figure 2): from the Empty state, the add() state transition remained in the same state, but it was supposed to change. Looking at the transition function of add() makes it clear why this happens. There is a guard condition ensuring that amount is not null; if amount is null, the method does not add anything. There might be several constructors in the Money class, and the reason for this error could be that one of them does not initialize this value. The issue may not arise until the other constructor is used. Therefore the problem is fixed by initializing the value if it is null. This is a very simple example, but it shows that the approach helps to improve the software system.
2.2 The Testing Process Concept
The process can be separated into these activities [19]:
- Construction of a UML communication diagram.
- Definition of state transitions.
- Test path generation based on the all-paths coverage criterion [9] and test execution. Paths from the test model are executed, and object states before and after the execution of each message in a test path are stored [10]. States of the objects are defined using state invariants.
- Comparison of the stored results with the expected object states from the model. The test case is considered to have failed in case an object is not in the expected state [11].
Figure 5 illustrates the described approach in more detail.
Figure 5. A chart representing a concept of the proposed testing process
There is a need to specify guard conditions (state invariants), which is done by using OCL [12]. Each object in the communication diagram corresponds to an instance of a class and should have a corresponding state machine diagram. Connections in the proposed model can emerge between objects in the communication diagram and between states in the state machine diagram. Connections in the communication diagram should have a unique number, an operation, and receiver and sender objects. Connections between states include a unique number, an operation, an accepting state and a sending state.
2.3 Building UML Test Model
An example of the described model is provided in this section. A simulation of an Automated Teller Machine application will be modeled [13]. The system is started up when the operator turns the operator switch to the "on" position. A session is started when a customer inserts an ATM card into the card reader slot of the machine. The ATM pulls the card into the machine and reads it. (If the reader cannot read the card due to an improper insertion or a damaged stripe, the card is ejected, an error screen is displayed, and the session is aborted.) The customer is asked to enter his/her PIN, and is then allowed to perform one or more transactions, choosing from a menu of possible transaction types in each case. When the customer is through performing transactions, the card is ejected from the machine and the session ends. A transaction is aborted due to too many invalid PIN entries or a user cancel request. A transaction is started within a session when a customer chooses a transaction type from a menu of options. If the PIN is valid, any steps needed to complete the transaction will be performed. If the bank reports that the customer's PIN is invalid, then an attempt will be made to continue the transaction. If the customer's card is retained due to too many invalid PINs, the transaction will be aborted, and the customer will not be offered the option of doing another one.
If a transaction is cancelled by the customer, or fails for any reason other than repeated entries of an invalid PIN, a screen will display information for the customer about the reason for the transaction failure. The customer may cancel a transaction by pressing the Cancel key as described for each individual type of transaction. A corresponding communication diagram is shown in Figure 6.
Figure 6. Simplified ATM application communication diagram
The communication diagram models the use case execution by calling system-level operations. The object state machine diagram contains the object's states and the messages the object can receive in those states. There is a set of messages with state information for each object in the communication diagrams. The model's goal is to create a graph combining communication and state machine diagrams. A number of vertices are created for the classes; they represent the different states in which a message can be received. Vertices belonging to the same class are grouped in a box. In this way the graph is built combining communication and state machine diagrams; an example is shown in Figure 7. All objects in the graph have information about the class name and state. Connections between communication diagram objects have unique numbers identifying them.
Figure 7. Test model graph
2.4 Covering Paths in the Graph
The generated test paths test communication between classes, each of them having states. A testing path starts from the first graph node and has a set of messages of the communication. When constructing a path, only those state machine diagram connections are selected which are valid for the corresponding connection in the communication diagram. Every generated path is stored as a string containing a chain of connections between objects in appropriate states. These test paths can be presented as a set of strings. Every connection contains detailed information about the message; therefore messages in test paths are identified by numbers and names. Each message contains information regarding the test path, the condition (needed for conditional messages only), message and class names, state, guard and result state, and they are combined in the following way:
Sequence_nr:[iteration][Condition]msg_name@class_name@state_id->[Guard]result_state
The whole test path is composed of a set of such messages combined with each other. In order to completely test the integration of an application using the proposed approach, each state connection of the diagram has to be executed at least once. The generated test paths are then parsed in order to identify objects and states. The result of one test path generation would look as follows:
1. start$DisplayManager@IDLE->[activeSession=0]ServingCustomer
2. [transaction=true]initSession$Session@Reading Card->[cardRead=false]Ejecting Card
3. [cardEntered=true]initTransaction()$Transaction@Getting Specifics->[specificsCorrect=true]Sending to Bank
Each test path execution requires test data; these data have to be provided by the user or generated automatically. In practice, with an arbitrary software system, the number of state transitions can grow exponentially. In order to reduce the complexity of the approach, a different path coverage technique needs to be investigated.
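The parsing of generated test paths could, for instance, be done with a regular expression mirroring the example paths above (note that the examples separate the message and class names with '$'). The pattern and class below are assumptions for illustration:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TestPathMessage {
        // seq . [condition]? message $ class @ state -> [guard] resultState
        private static final Pattern FORMAT = Pattern.compile(
            "(\\d+)\\.\\s*(?:\\[(.*?)\\])?([\\w()]+)\\$(\\w+)@([\\w ]+)->\\[(.*?)\\]([\\w ]+)");

        public static void parse(String line) {
            Matcher m = FORMAT.matcher(line);
            if (m.matches()) {
                System.out.println("message:      " + m.group(3));
                System.out.println("class:        " + m.group(4));
                System.out.println("source state: " + m.group(5));
                System.out.println("guard:        " + m.group(6));
                System.out.println("result state: " + m.group(7));
            }
        }
    }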
2.5 Test Execution and Test Result Assessment
The provided technique defines a way to test the system by automatically constructing a model from communication and state machine diagrams. Test paths are then generated and executed using the provided test data, and the results are evaluated and saved in a file. First of all, the model has to be created using UML communication and state machine diagrams. This UML model has to be presented in XMI format, so that it can be parsed and the needed objects identified. Each message is retrieved from the communication diagram. Then all possible sending object states in which a message can be received are stored. The next action is test path generation, for which the UML model and the all-paths coverage algorithm are needed. In order to be able to execute test paths, test data have to be generated manually using the state invariants. These test data include initial message parameter values and the class variables needed to set states for the objects in the communication diagram. The user can add test values manually by picking random values from the state invariants and saving them in a data file. Test data are provided for the methods called in test paths; then the application is tested and the results stored in a result file. This test result file contains object states before and after each test path message. Afterwards, the test results are compared with the expected results: the object states before and after each message in the test path compose the expected result. The test path is successfully passed if all object states are equal. Test results with message names are saved in a file as well.
3 Conclusions and Future Work
This paper presented a formal technique for the testing process based on a UML model consisting of communication diagrams and state transitions retrieved from the implementation. Using this approach the object graph is generated and the system is tested by checking communication between objects in different states. The proposed technique is suitable for systems where the functionality of one object depends on the state of another object. Furthermore, this paper presented a technique for obtaining object state transitions from the source code. It uses program conditioning to identify branches of the source code that are responsible for the execution of a particular transition. The accomplished tasks are:
- UML communication and state machine diagram construction.
- Object state identification and mapping in the source code.
- Test graph generation and path coverage.
- Result storage and assessment.
The testing approach presented in this document is rather complex and needs to be simplified in order to be widely used for integration testing. In real software systems the number of state transitions can grow exponentially, and the testing process may become difficult and time-consuming. One future investigation could be finding a solution to identify a set of optimal paths which has the highest probability of detecting faults in the system. Another improvement of this technique would be finding an optimal way to automate test data generation.
References
[1] Abdurazik A., Offutt J. Using UML Collaboration Diagrams for Static Checking and Test Generation. Third International Conference on the Unified Modelling Language, York, UK, 2000.
[2] Briand L. C., Di Penta M., Labiche Y. Assessing and Improving State-Based Class Testing: A Series of Experiments. IEEE Transactions on Software Engineering, Vol. 30, 2004.
[3] Briand L., Labiche Y. A UML-Based Approach to System Testing. Fourth International Conference on the Unified Modelling Language, 2001.
[4] Kim S. K., Wildman L., Duke R. A UML Approach to the Generation of Test Sequences for Java-based Concurrent Systems. Australian Software Engineering Conference, IEEE, 2005.
[5] Jiong Y., Ji W., Houwang C. Deriving Software Statistical Testing Model from UML Model. Third International Conference on Quality Software, China, IEEE, 2003.
[6] Rayadurgam S., Heimdahl M. P. E. Test-Sequence Generation from Formal Requirement Models. 6th IEEE International Symposium on High Assurance Systems Engineering, 2001.
[7] Chevalley P., Thévenod-Fosse P. Automated Generation of Statistical Test Cases from UML State Diagrams. 25th Annual International Computer Software and Applications Conference, IEEE, 2002.
[8] Badri M., Badri L., Naha M. A Use Case Driven Testing Process: Towards a Formal Approach Based on UML Collaboration Diagrams. University of Quebec at Trois-Rivieres, Canada, 2004.
[9] Offutt J., Abdurazik A. Generating Tests from UML Specifications. George Mason University, 2000.
[10] Frohlich P., Link J. Automated Test Case Generation from Dynamic Models. ABB Corporate Research Centre, Germany, 2000.
[11] Ali S., Briand L. C., Rehman M. J., Asghar H., Iqbal M. Z., Nadeem A. A State-based Approach to Integration Testing based on UML Models. Carleton Technical Report SCE-05-02, 2006.
[12] Benattou M., Bruel J. M., Hameurlain N. Generating Test Data from OCL Specification. Universite de Pau et des Pays de l'Adour, France, 2003.
[13] Bjork R. C. ATM Simulation. Gordon College, Wenham, MA, 2004.
[14] Lee D., Yannakakis M. Principles and methods of testing finite state machines – A survey. Proceedings of the IEEE, Vol. 84, 1996.
[15] Borger E. Abstract state machines and high-level system design and analysis. Theoretical Computer Science, 2005.
[16] Stark R. F. Abstract State Machines: A Method for High-Level System Design and Analysis. Computer Science Department, ETH Zurich, 2004.
[17] Walkinshaw N., Bogdanov K., Ali S., Holcombe M. Automated discovery of state transitions and their functions in source code. Software Testing, Verification and Reliability, Wiley InterScience, 2007.
[18] Visser W., Pasareanu C., Khurshid S. Test input generation with Java PathFinder. Proceedings of the International Symposium on Software Testing and Analysis (ISSTA'04), Boston, USA, 2004.
[19] Barisas D., Bareiša E. Formal Approach to Software Testing Process based on UML Models. Proceedings of the 14th International Conference on Information and Software Technologies – Information Technologies' 2008, ISSN 2029-0063, Kaunas, 2008.
AGENT-BASED FRAMEWORK FOR EMBEDDED SYSTEMS DEVELOPMENT IN SMART ENVIRONMENTS
Egidijus Kazanavicius, Vygintas Kazanavicius, Laura Ostaseviciute
Kaunas University of Technology, Faculty of Informatics, Real-Time Computing Centre, Studentu g. 50, LT-51368 Kaunas, Lithuania
[email protected], [email protected], [email protected]
Abstract. This paper presents a new approach for embedded real-time systems design based on the multi-agent paradigm. The essence of our work lies in the smart environment domain. Jade, a Java-based middleware, is selected as the platform facilitating the development of the multi-agent embedded systems framework. The generic system architecture, the framework prototype implementation and, consequently, a smart home temperature control application are demonstrated, reasoning that the proposed method is effective in dealing with the challenges raised by contemporary embedded system development issues.
Keywords: Embedded systems, multi-agent paradigm, Jade, smart home.
1 Introduction
Permanent advances and innovations in various technological fields, such as wireless communication technologies, computing devices, sensor networking and artificial intelligence techniques, enable the possibility of shifting computational services out of conventional desktop computers into surrounding environments, and of enhancing human-computer interaction as well as communication between things and devices. This might consequently lead to the "Internet of things" – a paradigm related to the future Internet, its infrastructure, communication and processing technologies, putting the focus on object identity, functionality and seamless integration [5, 15]. To be more precise, today's vision of the Internet of things denotes that "uniquely addressable interconnected objects will be operating in smart spaces using intelligent interfaces to communicate within social, environmental and user contexts" [16]. Major enablers for this vision to become a reality are concerned with embedded systems, communication protocols and standards, energy storage and generation, device intelligence, technological interoperability and other issues. However, most of the above-mentioned subjects require advanced theoretical investigations and practical innovations. The application domain for the Internet of things is widespread, covering retail, logistics, medicine, ubiquitous intelligent devices, smart home, transportation and other areas. As mentioned before, the embedded system domain is an important integral part of the future Internet vision as well as of our everyday life. Nowadays, embedded systems are everywhere, from telecommunication systems to home appliances and medical applications, but in the future they will spread even more and might be integrated into every device surrounding us. Embedded systems are mainly designed to extend system functionality, assuring higher reliability and accuracy, and lower price and power consumption [7]. Traditional embedded system design methods require prior knowledge about the system – predefined system goals, components and interaction mechanisms. However, the problems emerging today are getting more and more complex, requiring systems to work in dynamic, ever-changing environments. Thus, new requirements for the systems arise: to act autonomously, make decisions, adapt to changes – in other words, to behave intelligently. As denoted in [8], intelligent solutions for embedded systems are beneficial in several aspects: they can assure the self-properties, such as self-management, self-healing and self-configuration; provide more autonomy; and ease the modeling of the system, shortening design time as well as lowering design and maintenance costs. This paper presents a new approach for embedded real-time systems design based on the multi-agent paradigm. The focus lies on the domain of smart environments, which are highly dynamic and perfectly suit potential applications of the Internet of things. The proposed agent-based software framework and a smart home temperature control application validate the feasibility of the proposed method. The paper is organized as follows: section 2 provides the statement of the problem domain; section 3 discusses the agent-based approach as a potential solution for the problem, as well as the motivation for choosing the Jade platform; section 4 proposes the design method, revealing the proposed framework architecture and the structure of service agents. The framework prototype implementation and an application scenario are presented in section 5, and conclusions in section 6.
2 Problem statement
Although research in the smart environments domain has been an ongoing process in the academic community for the last decade [1, 2, 9, 13, 14], it hasn't realized its full potential yet. There is still space for exploration of theoretical methods and technologies suitable for this domain until smart home and smart appliance applications shift from research mode to commercial mass production. Therefore, the essence of our work lies in embedded systems design for smart environments, which is the basis for enhancing the functionality of domestic appliances and other household devices, thus assuring safety, comfort and economy services for their users [11, 12]. Looking at the smart home and all its components as a part of the Internet of things from the engineering point of view, first of all we face the requirements emerging from the openness and dynamics of this networking infrastructure. The Internet of things will be comprised of an indefinite and varying number of connected subsystems, interacting and exchanging information about their context. As the context might be constantly changing, they will be sharing the same environment and seeking their goals in a cooperative or competitive manner; thus overall system behavior might become unpredictable. An application, being a part of such an open system as the Internet, is supposed to fulfill certain requirements, such as robustness, easy maintenance and easy modification, and to have the before-mentioned self-properties. Similar requirements arise from the designer's perspective: as there will be plenty of such systems in the future, most of them might share similar characteristics or be devoted to performing the same functions: sense the environment; collect, store and update data; process information; react to changes; predict user preferences; control devices; assure the communication between system components, external components and the user; and many others. Thus comes the need for methods and techniques for developing such applications more easily and faster, speeding up the realization of new systems through reusable components and extending existing systems by adding new components without modifying them. Linking various implementations into higher-level abstractions could hide the complexity of system components, allowing specialists without specific scientific knowledge to develop their own applications under domain requirements. System flexibility and fault tolerance are also desirable. Finally, from the user perspective, smart home systems are expected to provide assistance and act autonomously, but unobtrusively. To tackle most of the above-mentioned requirements, we propose to develop smart environment applications as autonomous, flexible, reactive software entities capable of decision making, called agents.
3 Multi-agent paradigm as a potential solution
While the research on agent technologies initially concentrated on providing methodologies and tools for building agent-based systems, today it is more concentrated on understanding the features that the agent approach can bring to the development of conventional software [4].
3.1 Software agents as software components
As the emphasis of today's system engineering is put on easier system development and faster time-to-market, one of the primary characteristics provided by agent systems is a higher level of abstraction. Software components, such as JavaBeans, Corba and .NET components, which are extensions of objects, also serve the component-oriented software engineering approach. This approach separates the details of component implementation from its usage, making it easier to replace one component with another or to add new components to the system, while these operations depend only on the component's interface, not on its implementation. Moreover, by hiding realization details, designers do not have to deal with the complexity associated with the implementation, making their development tasks easier [6]. From the above perspective, agents can also be treated as software components. An agent-component conception has already been discussed in [10]. The differences and similarities between software agents and software components are explicitly elaborated in [6], reasoning that the agent paradigm is appropriate not only for applications requiring specific agent characteristics such as autonomy and adaptation ability, but can also serve as an alternative to other solid technologies, providing the developer with higher-level abstractions than any other technology available today and having concrete advantages over software components in terms of reusability.
3.2 Platform selection
There are many available platforms providing the environment for agents' existence and operation as well as supporting the development of agent-based applications [17]. One of the most efficient platforms is Jade [19], a Java-based middleware designed to ease the development of multi-agent systems, providing functionalities which are independent of the specific application and which simplify the realization of distributed applications. Jade can operate both in wired and wireless environments; furthermore, it complies with FIPA (Foundation for Intelligent Physical Agents) [18] standards and enables interoperability with other platforms based on this standard. The Jade platform is composed of one main container and several containers that can be launched on one or more hosts distributed across the network. Each container can run zero, one or multiple agents. The following agents and services must be present at the main container at all times: the AMS (Agent Management System), the DF (Directory Facilitator) and the MTS (Message Transport System), which is also referred to as the ACC (Agent Communication Channel). For more detailed information refer to [3]. Jade is beneficial to our work under several criteria:
- It can be executed on every type of Java Virtual Machine, except for Java Card. It also supports embedded Java versions, so applications can be easily ported among embedded systems.
- Jade provides the mobility service.
- Jade provides a lightweight communication protocol which is simpler than RMI.
- Jade is continuously developed, improved and maintained (in contrast to some other agent platforms), as well as open-source and easily accessible (in contrast to commercial platforms).
- Jade achieves fault tolerance through combining two features – main container replication and DF persistence.
- Jade supports the .NET platform.
4 Embedded systems design method
This section presents an embedded systems design method based on the Jade agent platform. Every sensor and actuator is modeled as a generic software agent. Later, these agents are used for creating service agents, which perform application-specific functions.
4.1 Proposed system architecture for embedded system design
A four-layered system architecture, built on the basis of the Jade platform, is proposed in Figure 1. Actual hardware sensor and actuator devices are situated in the lowest, hardware level. A hardware abstraction layer (HAL) is proposed in order to hide the complexity of the hardware level. Device drivers, presented in this layer, are implemented according to a unified device driver interface. The framework layer consists of generic agents and helper classes that are used to construct application-specific services in the application layer.
Figure 1. System architecture
We propose to map each hardware sensor and actuator to a sensor/actuator agent. This implies that a generic sensor or actuator device agent can be created. Our framework presents generic device agents that can be specialized at runtime by attaching different hardware drivers to them via configuration. This way we eliminate the need to develop specific device agents, and developers can focus on the development of services instead of infrastructure. The Jade platform provides Directory Facilitator (DF), Agent Management (AMS) and Message Transport (MTS) services [3]. Our framework utilizes the Jade DF service by publishing information about every single agent and its capabilities – events and actions. The agents reside on the framework and application layers. The framework provides generic agent templates that are consequently used by the developer to construct application-specific agents by inheritance.
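As an illustration of how a generic device agent might publish its capabilities in the Jade DF, consider the following sketch; the DFService calls are standard Jade API, while the service type and name values and the driver-attachment comment are assumptions:

    import jade.core.Agent;
    import jade.domain.DFService;
    import jade.domain.FIPAException;
    import jade.domain.FIPAAgentManagement.DFAgentDescription;
    import jade.domain.FIPAAgentManagement.ServiceDescription;

    public class DeviceAgent extends Agent {
        protected void setup() {
            // The concrete hardware driver would be attached here via configuration.
            DFAgentDescription dfd = new DFAgentDescription();
            dfd.setName(getAID());
            ServiceDescription sd = new ServiceDescription();
            sd.setType("sensor");          // or "actuator"
            sd.setName("temp1");           // device identity from configuration
            dfd.addServices(sd);
            try {
                DFService.register(this, dfd);   // become discoverable to service agents
            } catch (FIPAException e) {
                e.printStackTrace();
            }
        }
    }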
There are six generic types of agents in the system, both static and dynamic. Static agents are the platform management and configuration agents, while all service agents, as well as sensor/actuator agents, are dynamic – created upon request and deleted after accomplishing their goal. When the system is started or restarted, the platform management agent is the first agent to be created. It then initiates the creation of the configuration agent, which has a connection to the database and helps in detecting the host configuration, according to which the service agents and appropriate sensor/actuator agents are created and deployed on the system. All system agents register with the Directory Facilitator. Service agents are responsible for accomplishing some specific function in the smart home, such as temperature control, security or home appliance management.
4.2 Service agent structure
A service agent has its own state and behavior. The generic service agent structure is depicted in Figure 2. A service agent can report its state changes via published events. Other agents can trigger various behaviors of the service agent by sending action messages. In order to hide protocol complexity from the developer, our framework provides helper classes and behaviors that help in defining and triggering service agent actions.

Basic Service Agent structure     | Temperature Control Agent structure
----------------------------------+-------------------------------------
State                             | State: targetTemperature
Code                              | Code
Publish events                    | Publish OnTargetTempChange event
Publish actions                   | Publish SetTargetTemp action
Publish service                   | Publish service
Search and connect to devices     | Connect to temp1 and onOff1 devices
Add service logic via behaviors   | Subscribe to temp1 OnSample event;
                                  | implement temperature control behavior

Figure 2. Basic service agent structure and temperature control example

When a service agent declares its events and actions, it publishes itself in the Jade Directory Facilitator and becomes discoverable to other agents. In order to function properly, the service agent needs to discover and establish connectivity to sensor/actuator devices. After that, the service agent can add service logic related to internal triggers or connected devices.
5 Prototype implementation
We have developed a prototype implementation (Figure 3) of the proposed framework as a proof of concept. We then used this framework prototype to develop a simple house temperature control application in order to validate and demonstrate the framework and its usability in real-world scenarios. The framework prototype consists of a generic DeviceAgent class which implements the loading of the device driver and its communication functionality. Each device driver has to implement the DeviceDriverInterface, which is used in the framework as a bridge to hardware. ActuatorAgent and SensorAgent are implementations of generic devices. The ActuatorSupport and SensorSupport classes have been developed in order to facilitate communication and device discovery. The framework provides a generic ServiceAgent class which is responsible only for providing methods facilitating the development of real-world services. The ServiceAction class is a behavior which can be added to services and allows the developer to easily implement action logic without dealing with communication protocol specifics. Finally, the ServiceSupport class can be used by other services and agents in order to easily subscribe to the developed service's events and trigger actions.
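The unified device driver interface could, for example, look like the following sketch; the method names are assumptions based on the class roles described above, not the prototype's actual signatures:

    // Hypothetical bridge between generic device agents and concrete hardware.
    public interface DeviceDriverInterface {
        void initialize(java.util.Properties config); // attach to concrete hardware
        double readValue();                           // sensors: sample the current value
        void writeValue(double value);                // actuators: apply a command
        void shutdown();                              // release the hardware resource
    }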
Figure 3. Proposed framework prototype diagram
5.1 Real-world example – temperature control in a smart house
The smart house temperature control application developed on the basis of our framework is presented to further illustrate the proposed platform. Consider a house environment outfitted with temperature sensors and heating switches in different rooms. A system integrator could use our framework with the provided components to develop a temperature control application. The system architecture and its deployment scenario are depicted in Figure 4.
Figure 4. Temperature control application deployment architecture
The smart house system is deployed on two hosts. One of the hosts is a master host which is responsible for platform and configuration management. The master host also runs agents responsible for interaction with the end-user (interface agents). Several non-controlling service agents, such as the user preference, statistical and heat counter agents, are deployed on the master host as well. The temperature control host is a small embedded device running an embedded Java Virtual Machine. This host has hardware temperature sensors and on/off switches connected; thus the device agents have to reside on this host. The Temperature Control service agent is deployed on this host to ensure fault tolerance and performance, although this service could run on the master or any other host as well. The temperature control agent is a service agent which has established connections to a temperature sensor and an on/off switch. This agent periodically measures the temperature and adjusts the heating switch state according to the control algorithm. The implementation of TemperatureControlAgent in the context of our proposed framework is depicted in the figure below.
Figure 5. Temperature control agent class diagram in framework context
The temperature control agent has a target temperature set. In order to set a new target temperature, the agent implements the SetTargetTemp action. Each time the target temperature is changed, the OnTargetTempChange event is fired in order to notify all subscribed agents about the new target. The temperature control agent implements a temperature sensor data sampling event which triggers the temperature control behavior. The structure of the agent and its implementation steps are shown in Figure 5.
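A possible shape of the control logic, using Jade's standard TickerBehaviour, is sketched below; the sampling period, the message content, and the way the sensor value is obtained are assumptions for illustration, not the prototype's actual code:

    import jade.core.AID;
    import jade.core.behaviours.TickerBehaviour;
    import jade.lang.acl.ACLMessage;

    // Periodically compares the sampled temperature with the target and
    // commands the on/off switch agent accordingly.
    public class TemperatureControlBehaviour extends TickerBehaviour {
        private final double targetTemperature;

        public TemperatureControlBehaviour(jade.core.Agent a, long periodMs, double target) {
            super(a, periodMs);
            this.targetTemperature = target;
        }

        protected void onTick() {
            double current = readSensor();   // last value delivered by the temp1 agent
            ACLMessage msg = new ACLMessage(ACLMessage.REQUEST);
            msg.addReceiver(new AID("onOff1", AID.ISLOCALNAME));
            msg.setContent(current < targetTemperature ? "on" : "off");
            myAgent.send(msg);
        }

        private double readSensor() { /* obtain the last sampled value */ return 0.0; }
    }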
5.2 Results
The smart house temperature control application presented above illustrates the advantages of the proposed method. The proposed framework provides functionality for platform management and configuration, service access, and the control of sensors and actuators, so the developer only needs to focus on the service logic implementation. This guarantees faster and easier application development. The temperature control implementation was very easy – it took only 75 lines of Java code. The temperature control application together with the proposed framework overhead takes about 25 KB.

Figure 6. Results obtained from the temperature control application (top: room temperature against the 20 °C threshold and the on/off switch state over time in samples; bottom: agent communication from the Jade sniffer)

From the architectural viewpoint, the multi-agent paradigm is rather similar to the component-based approach. In multi-agent systems, the components are replaced by agents, which have wider characteristics, such as autonomy, reactivity, proactivity, ontologies and others. Most agent-based platforms are dedicated to realizing enterprise solutions satisfying soft real-time requirements, and Jade is not an exception. Our aim is to take advantage of the features offered by this platform and adapt it to designing real-time embedded components and their systems, taking the hardware part into account. The proposed sensor and actuator abstractions allow easy integration of any physical device into the embedded system. Moreover, the Jade platform allows monitoring of real-time events – the communication between agent components, as depicted in Figure 6. The target temperature, the dynamics of the room temperature and the on/off switch are shown in the top area of the figure. Agent communication results obtained from the Jade sniffer are shown at the bottom of the picture. The temperature sensor agent Temp1 periodically sends INFORM messages to the temperature control agent Control1. These messages trigger the temperature control behavior. Each time the room temperature reaches the threshold level, a message is sent to the on/off switch agent OnOff1.
6 Conclusions
State-of-the-art agent platforms such as Jade provide a good background facilitating the development of smart agent-based systems. However, they are not adjusted to designing real-time embedded systems. Consequently, we have developed a framework facilitating embedded system design through a set of generic reusable components that hide the implementation complexity. The developer, not necessarily an embedded system specialist, can construct a system by only adding application-specific functionality. Such a system development method clearly distinguishes between two engineering roles: the low-level system developer, who creates drivers, and the smart system developer, who creates applications. Moreover, real-time monitoring of events allows observing the interaction of system components. Future work focuses on complementing the framework with additional components, e.g., components for assuring several non-functional requirements, such as fault tolerance, and components for interacting with external systems.
References
[1] Abowd G. A., Bobick I., Essa E., Mynatt W. The aware home: Developing technologies for successful aging. Proceedings of the AAAI Workshop on Automation as a Care Giver, July 2002.
[2] Augusto J. C., McCullagh P. J. Ambient Intelligence: Concepts and Applications. Journal of Computer Science and Information Systems, 4(1): 1-27, 2007.
[3] Bellifemine F., Caire G., Greenwood D. Developing Multi-Agent Systems with JADE. Wiley Series in Agent Technology, ISBN 978-0-470-05747-6, February 2007.
[4] Bergenti F., Gleizes M.-P., Zambonelli F. Methodologies and Software Engineering for Agent Systems. The Agent-Oriented Software Engineering Handbook, Kluwer, 2004.
[5] Buckley J. From RFID to the Internet of Things. Conference on Pervasive Networked Systems, final report. ftp://ftp.cordis.europa.eu/pub/ist/docs/ka4/au_conf670306_buckley_en.pdf
[6] Deugo D., Oppacher F., Ashfield B., Weiss M. Communication as a Means to Differentiate Objects, Components and Agents. Technology of Object-Oriented Languages & Systems, IEEE, pp. 376-386, 1999.
[7] Edwards S., Lavagno L., Lee E. A., Sangiovanni-Vincentelli A. Design of embedded systems: formal models, validation, and synthesis. Proceedings of the IEEE, 85(3): 366-390, March 1997.
[8] Elmenreich W. Intelligent methods for embedded systems. Proceedings of the 1st Workshop on Intelligent Solutions in Embedded Systems (WISES03), Vienna, June 2003.
[9] Helal A., Mann W., Elzabadani H., King J., Kaddourah Y., Jansen E. Gator Tech Smart House: A programmable pervasive space. IEEE Computer Magazine, pp. 64-74, March 2005.
[10] Krutisch R., Meier P., Wirsing M. The AgentComponent Approach, Combining Agents and Components. In Proceedings of MATES-03, Springer Lecture Notes on Artificial Intelligence, 2003.
[11] Ostaseviciute L., Kazanavicius E. The design of agent-based smart fridge system. In Proceedings of the International Conference on Information Technologies 2008, Kaunas, Lithuania, April 2008.
[12] Ostaseviciute L., Kazanavicius E. Agent-component design of smart appliances. Solid State Phenomena, Vols. 147-149, January 2009.
[13] Rodríguez M., Favela J. A Framework for Supporting Autonomous Agents in Ubiquitous Computing Environments. Proceedings of the System Support for Ubiquitous Computing Workshop at the Fifth Annual Conference on Ubiquitous Computing (UbiComp 2003), Seattle, Washington, 2003.
[14] Sadeh N., Chan T., Van L., Kwon O., Takizawa K. Creating an Open Agent Environment for Context-aware M-Commerce. In Agentcities: Challenges in Open Agent Environments, LNAI, Springer Verlag (Ed. by Burg, Dale, Finin, Nakashima, Padgham, Sierra, and Willmott), pp. 152-158, 2003.
[15] Early Challenges regarding the "Internet of Things". http://www.iotvisitthefuture.eu/fileadmin/documents/earlychallengesIOT.pdf
[16] Internet of Things in 2020. A Roadmap to the Future. http://www.smart-systems-integration.org/public/internet-of-things
[17] Publicly available agent platform implementations. http://www.fipa.org/resources/livesystems.html#ADK
[18] Foundation for Intelligent Physical Agents. http://www.fipa.org/
[19] Java Agent Development Framework. http://jade.tilab.com
UNIVERSAL UNIT TESTS GENERATOR BASED ON SOFTWARE MODELS Sarunas Packevicius, Andrej Usaniov, Eduardas Bareisa Kaunas University of Technology, Department of Software Engineering, Studentų 50, Kaunas, Lithuania,
[email protected],
[email protected],
[email protected] Abstract. Unit tests are viewed as a coding result of software developers. Unit tests are usually created by developers and implemented directly in a specific language using a specific unit testing framework. Unit test generation tools usually do the same thing – they generate tests for a specific language using a specific unit testing framework, so such a generator is suitable for only one situation. Another drawback of these generators is that they mainly use software code as the source for generation. In this paper we present a tests generator model which is able to generate unit tests for any language using any unit testing framework. It is able to use not only the code of the software under test, but also other artifacts: models and specifications. Keywords: unit testing, tests generator, model based testing
1 Introduction
Software testing automation is seen as a means of reducing software construction costs by eliminating or reducing the manual software testing phase. Software tests generators are used for software testing automation. Software tests generators have to fulfill the following goals: 1. Create repeatable tests. 2. Create tests for bug detection. 3. Create self-checking tests. Current unit tests generators (for example, Parasoft JTest) fulfill only some of these goals. They usually generate repeatable tests, but a tester has to specify the tests oracle (a generator generates test inputs, but provides no way to verify if test execution outcomes are correct) – the tests are not self-checking. Tests generators usually target only one programming language and are able to generate tests using only one specific unit testing framework. These generators usually use the source code of the software under test as an input for tests generation. For example, the Parasoft JTest tests generator generates tests only for software implemented in Java, and the generated unit tests use the JUnit testing framework only. In this paper, we propose a modified unit tests generator model which allows generating tests in any programming language using any unit testing framework and is able to use software under test implemented or modeled in any language. The remaining part of this paper is organized as follows: related works are discussed in Section 2; the tests generator architecture is presented in Section 3; a generator example is presented in Section 4; finally, conclusions and future work are given in Section 5.
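To make the self-checking goal concrete, below is a minimal JUnit 4 sketch contrasting a generated test without an oracle with a self-checking one; the Account class is a hypothetical stand-in for generated code, not part of any tool discussed here.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test, standing in for arbitrary SUT code.
class Account {
    private int balance;
    void deposit(int amount) { balance += amount; }
    int getBalance() { return balance; }
}

public class AccountTest {
    // Without an oracle a generator can only produce inputs and check
    // that no exception is thrown; the outcome is never verified.
    @Test
    public void generatedInputOnly() {
        new Account().deposit(100);
    }

    // A self-checking test adds an assertion that acts as the oracle.
    @Test
    public void selfChecking() {
        Account account = new Account();
        account.deposit(100);
        assertEquals(100, account.getBalance());
    }
}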
2 Related works
Usually software is modelled first, and its code is written only in a later phase. If the developers have chosen to use agile software development methodologies [1, 2], they have to overcome another obstacle. Many agile methodologies state that software tests have to be prepared before writing software code, but the majority of tests generators use software code as an input for test generation. These generators create tests directly in the same programming language as the software implementation. Thus, if the implementation language is not yet chosen during the software design phase, tests cannot be created. Tests also cannot be generated when the software implementation is not yet available. Due to the fact that test code is tightly coupled with software implementation code (it uses the same programming language and usually resides in the same compilation unit), it is almost impossible to reuse tests between several different projects. The fact that a tests generator is only able to create tests for one programming language also limits the developers' ability to select the best available tests generator: they have to choose the tool based not on the quality of the generated tests but on the programming language supported by that tool. A. Kraas et al. [3] proposed a way to generate tests from UML models. They used UML models as an input for the tests generator and generated tests which are stored as XML documents. These XML documents can later be transformed into unit test code using a specific unit testing framework (for example, JUnit [4] or TTCN-3). The drawback of this approach is that the tests generator abstracts the generated tests from test frameworks only, and is able to generate tests using only UML models as an input.
Other authors have proposed generators which take UML models directly [5, 6] or the program code in a specific programming language [7, 8] and generate the testing code directly for a specific unit testing framework. Thus their generators can produce tests for one specific unit testing framework only.
3 Platform Independent Unit Tests Generator
We are proposing a model of the unit tests generator which is able to generate tests using any unit testing framework and takes as input not only the code of the software under test, but also other artifacts, such as UML models, OCL constraints, and Business Rules. The generator's model is based on Model Driven Engineering (MDE) ideas [9].
3.1 Tests Generation Using MDE Ideas
The MDE idea is based on the fact that a developer does not write the software code; he or she only models it. Models are later transformed into a software implementation using any chosen programming language, platform and/or framework. MDE targets implementation code development, but its principles can be used for tests generation as well. Tests can be created not only as code using a selected unit testing framework; they can be created as tests models and later transformed into any selected programming language and unit testing framework. For example, we could model tests as UML diagrams and OCL constraints and later transform the modeled tests into test code which uses the JUnit testing framework, with the code generated in the Java programming language. Such a tests generator is roughly depicted in Figure 1.
Figure 1. Tests generator model
The tests generator takes the software under test (SUT in Figure 1) as an input and generates tests which are represented as a model; for example, a UML diagram using Testing Profile stereotypes could be such a model. After that, the tests model is transformed into a platform specific test model. The platform specific test model also represents tests as a model, but this model uses platform specific elements: for example, test classes extend the required unit testing framework base classes and implement the required methods, and sequence diagrams show calls to specific unit testing framework methods. The final phase is to transform the platform specific model into a tests implementation (test code). This transformation need be no different from the ones used today for code generation from a model (such transformations are commonly available in UML diagramming tools). The benefit of this generator is that tests can be transformed into any selected unit testing framework and implementation language: we just have to select a different transformation if we want to generate tests for another programming language and/or unit testing framework.
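As a rough illustration of this final phase, the following sketch emits JUnit 4 source code from a deliberately simplified platform independent test description; the TestCaseModel class and its fields are hypothetical simplifications, not the actual meta-models presented below.

// A minimal sketch of the model-to-text step, assuming a much simplified
// tests model: one test case = method under test + argument + expected value.
public class JUnitEmitter {

    // Hypothetical, heavily simplified test case model.
    static class TestCaseModel {
        String className;   // SUT class, e.g. "Calculator"
        String methodName;  // SUT method, e.g. "square"
        String argument;    // generated input value, already as source text
        String expected;    // expected value acting as the oracle
        TestCaseModel(String c, String m, String a, String e) {
            className = c; methodName = m; argument = a; expected = e;
        }
    }

    // Transforms the platform independent test case into JUnit 4 source code.
    static String emit(TestCaseModel t) {
        return "import static org.junit.Assert.assertEquals;\n"
             + "import org.junit.Test;\n\n"
             + "public class " + t.className + "GeneratedTest {\n"
             + "    @Test\n"
             + "    public void " + t.methodName + "Test() {\n"
             + "        " + t.className + " sut = new " + t.className + "();\n"
             + "        assertEquals(" + t.expected + ", sut."
             + t.methodName + "(" + t.argument + "));\n"
             + "    }\n}\n";
    }

    public static void main(String[] args) {
        System.out.println(emit(new TestCaseModel("Calculator", "square", "4", "16")));
    }
}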
3.2 Software Under Test Meta-Model
The tests generator does not take the software under test code or model directly as tests generation input. It first transforms the SUT into a software model (Figure 2). That model represents the software independently of its implementation language or modeling language. For example, if the SUT is implemented in the Java programming language, the generator performs reverse code engineering; if the program is modeled using the OMT notation, its model is converted into our model (similar to UML).
Figure 2. SUT Meta-model
Figure 2 presents a meta-model of the software under test. This model is similar to the UML static structure meta-model; we have extended the UML meta-model by adding the element “Constraints”. This addition allows us to store the software under test model, code, OCL constraints, or any combination of them in one generic model. The SUT meta-model holds all the information about the software under test needed to generate tests: its classes, methods, and fields. If the software was modeled using the OCL language, the meta-model links OCL constraints to the associated classes, methods, fields, and method parameters.
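A minimal Java sketch of what such meta-model data structures could look like is given below; the class names are hypothetical simplifications of the meta-model in Figure 2.

import java.util.ArrayList;
import java.util.List;

// Language-independent representation of the software under test.
class SutClass {
    String name;
    List<SutMethod> methods = new ArrayList<>();
    List<SutField> fields = new ArrayList<>();
    List<Constraint> constraints = new ArrayList<>(); // class invariants
}

class SutMethod {
    String name;
    String returnType;
    List<SutParameter> parameters = new ArrayList<>();
    List<Constraint> constraints = new ArrayList<>(); // pre/postconditions
}

class SutField { String name; String type; }
class SutParameter { String name; String type; }

// An OCL constraint attached to a class, method, field, or parameter.
class Constraint {
    String oclExpression; // e.g. "self.balance >= 0"
}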
3.3 Tests Meta-Model
The tests meta-model represents the generated tests in a form abstracted from the implementation. This meta-model is presented in Figure 3.
Figure 3. Tests Meta-model
This model stores unit tests. The meta-model has classes to store the generated input values; it references classes, methods, and fields from the software under test meta-model. It carries the expected test execution values and/or references to OCL constraints which are used as a test oracle [10]. When implementing our unit tests generator, we can store the generated tests in a database; the required tests can then be transformed from the database into test code in the selected programming language and unit testing framework. Transformations into test code can be performed using the usual code generation techniques.
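For illustration only, a heavily simplified sketch of such tests meta-model structures is shown below; the names are hypothetical, and the real meta-model in Figure 3 is considerably richer.

import java.util.ArrayList;
import java.util.List;

// A test case: generated input values plus an oracle, referencing SUT
// meta-model elements (here by name, for brevity).
class TestCase {
    String classUnderTest;      // reference into the SUT meta-model
    String methodUnderTest;
    List<String> generatedInputs = new ArrayList<>(); // values as source text
    String expectedValue;       // expected execution value, if known
    String oracleOclConstraint; // OCL constraint used as the oracle [10]
}

// The whole generated suite; such objects could be persisted to a database
// and transformed into test code on demand.
class TestsModel {
    List<TestCase> testCases = new ArrayList<>();
}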
4 Generator Example
We have an implementation of the unit tests generator based on our idea. Its source code is available at http://atf.sourceforge.net/. The generator's structure is represented in Figure 4.
Figure 4. Tests generator structure (Business Rules are transformed into UML and OCL models (step 1); UML models, OCL constraints, and software code in C++, Java, etc. are transformed or reversed into the SUT meta-model (step 2); tests are generated into the tests meta-model (step 3); the tests meta-model is transformed into platform specific tests models such as C#/NUnit, Java/JUnit, and C++/CppUnit (step 4); and test code in C#, Java, or C++ is generated from these models (step 5))
Our generator is able to generate tests for software which is modeled in UML and/or has OCL constraints, or whose requirements are expressed as Business Rules (we have presented how to transform Business Rules into a UML model and OCL constraints for this generator in [11]). The generator is composed of three main parts: 1. The meta-model extractor. 2. The tests generator. 3. The tests transformer. It also has an additional part: the tests runner. The model extractor extracts the software under test meta-model from various sources – implementation code and/or UML and OCL models – and stores it in its meta-model data structures. The tests generator takes the data stored in the meta-model structures and generates test cases. Test cases are stored in an abstract test meta-model. The tests transformer takes the tests stored in the test meta-model and transforms them into actual test code for a specified programming language and unit testing framework. The tests runner is then able to execute the generated tests. The generator is able to extract the software model from Java code and store it in its internal meta-model. It can also extract the model from UML diagrams and/or OCL constraints and store it in its internal meta-model. The model extractor is designed using the factory design pattern [12], which makes it easy to integrate an additional source from which a model can be extracted. The generator also uses the factory design pattern for the test data generation algorithm: it defines the interface which has to be implemented in order to use the selected test data generation algorithm. The tests stored in the tests model data structures are transformed into actual test code by the tests transformer component. This component is also implemented using the factory design pattern. The tests transformer defines an interface which takes tests stored in the tests model data structures and generates unit tests code stored in code files. Using the defined interface, a test code transformer can be implemented for creating tests in any programming language using any unit testing framework. The current implementation is able to transform tests from the tests model into test code implemented in the Java programming language; the transformed unit tests code uses the JUnit 4.0 [4] testing framework. The generated test code can finally be executed by the tests runner and the actual unit testing can be
performed. The tests executor module invokes a unit testing framework (the JUnit testing framework) for generated unit tests execution: it invokes a Java compiler to compile the test classes into binary code and, after the compilation, invokes the JUnit tests runner to execute the generated tests. The tests runner is implemented using the factory design pattern; the class implementing this interface has to execute all test files stored in the specified folder. The tests runner transfers execution to the JUnit testing framework main class (org.junit.TestsRunner); the JUnit testing framework itself enumerates the generated tests and executes them. The tests generator tool has been implemented as an Eclipse [13] plug-in. Being an Eclipse plug-in, it can be easily integrated into various integrated development environments which are based on the Eclipse platform, such as IBM Rational Software Architect [14] and the Oracle WebLogic development environment. When this tool is integrated into IBM Rational Software Architect, it can access the software under test model using the libraries provided by the integrated development environment; in this case, exporting and parsing an XMI file can be omitted. The tests generator seamlessly integrates into an Eclipse development application, and tests can be generated by a single mouse click. When the software under test is available only as a software model, the test generator performs the following steps:
1. Transform the software model into the SUT model (step 2).
2. Generate tests using the SUT model and store the tests as a tests model (step 3).
3. Generate the test code for the selected unit testing framework using the tests model (steps 4, 5).
If the software under test is represented as code, the test generating procedure is:
1. Reverse the code into the SUT model (step 2).
2. Generate tests using the SUT model and store the tests as a tests model (step 3).
3. Generate the test code for the selected unit testing framework using the tests model (steps 4, 5).
If we have software specified as Business Rules, the tests generation procedure is:
1. Transform the Business Rules into UML and OCL models (step 1).
2. Transform the software model into the SUT model (step 2).
3. Generate tests using the SUT model and store the tests as a tests model (step 3).
4. Generate the test code for the selected testing framework using the tests model (steps 4, 5).
The generator always has to generate tests using the same type of data and has to produce the same type of results (step 3). All we have to do is implement different transformations if we want to target another unit testing framework, or if we want to generate tests for programs which are implemented in other programming languages or modeled using other modeling languages.
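To make the factory-based extension point concrete, here is a minimal sketch of how a test code transformer interface and its factory might look; all names are hypothetical, and a stub of the tests meta-model is included to keep the sketch self-contained, so the actual interfaces of the tool may differ.

import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the tests meta-model sketched in Section 3.3.
class GeneratedTests {
    List<String> testCaseDescriptions = new ArrayList<>();
}

// Extension point: one implementation per target language/framework.
interface TestCodeTransformer {
    String transform(GeneratedTests tests); // returns generated source text
}

class JUnitJavaTransformer implements TestCodeTransformer {
    public String transform(GeneratedTests tests) {
        StringBuilder code = new StringBuilder();
        // Emit one JUnit test method per test case (compare Section 3.1).
        for (String testCase : tests.testCaseDescriptions) {
            code.append("// generated from: ").append(testCase).append('\n');
        }
        return code.toString();
    }
}

class TransformerFactory {
    // New targets are integrated by adding branches here, mirroring the
    // factory design pattern [12] used throughout the tool.
    static TestCodeTransformer create(String target) {
        if ("java-junit".equals(target)) {
            return new JUnitJavaTransformer();
        }
        throw new IllegalArgumentException("Unknown target: " + target);
    }
}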
5 Conclusion
The suggested abstract unit tests generator is able to generate tests for software implemented in any programming language and/or modeled using any modeling language. It can generate test code in any programming language using any unit testing framework. The tests generation mechanism is independent of the unit testing framework and of the implementation or modeling language of the software under test. This generator model can also be used for benchmarking tests generators when benchmarks are implemented in various programming languages.
References
[1] Highsmith J., Cockburn A. Agile software development: the business of innovation. Computer, 2001, 34(9), p. 120.
[2] Meszaros G. Agile regression testing using record & playback. In Companion of the 18th annual ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications. ACM, Anaheim, CA, USA, 2003.
[3] Kraas A. et al. A generic toolchain for model-based test generation and selection. In TESTCOM / FATES 2007, Tallinn, Estonia, 2007.
[4] Louridas P. JUnit: unit testing and coding in tandem. IEEE Software, 2005, 22(4), pp. 12-15.
[5] Kim S.K., Wildman L., Duke R. A UML approach to the generation of test sequences for Java-based concurrent systems. In 2005 Australian Software Engineering Conference (ASWEC'05), 2005.
[6] Kim Y.G. et al. Test cases generation from UML state diagrams. IEE Proceedings on Software Engineering, 1999, 146(4), pp. 187-192.
[7] Boyapati C., Khurshid S., Marinov D. Korat: automated testing based on Java predicates. In Proceedings of the 2002 ACM SIGSOFT international symposium on Software testing and analysis. ACM Press, Roma, Italy, 2002.
[8] Visser W., Pasareanu C.S., Khurshid S. Test input generation with Java PathFinder. In Proceedings of the 2004 ACM SIGSOFT international symposium on Software testing and analysis. ACM Press, Boston, Massachusetts, USA, 2004.
[9] Schmidt D.C. Guest Editor's Introduction: Model-Driven Engineering. Computer, 2006, 39(2), pp. 25-31.
[10] Packevičius Š., Ušaniov A., Bareiša E. Using Models Constraints as Imprecise Software Test Oracles. Information Technology and Control, 2007.
[11] Packevičius Š., Ušaniov A., Bareiša E. Creating unit tests using business rules. In Information Technologies' 2008: 14th International Conference on Information and Software Technologies, Kaunas, Lithuania, 2008.
[12] Gamma E. et al. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley professional computing series, Boston, 2003, 395 p.
[13] Carlson D. Eclipse Distilled. Addison-Wesley Professional, 2005.
[14] Quatrani T., Palistrant J. Visual Modeling with IBM Rational Software Architect and UML. IBM Press, 2006.
AN APPROACH FOR MODELING TECHNIQUE SELECTION CRITERIONS * Gytenis Mikulenas, Rimantas Butleris Kaunas University of Technology, Department of Information Systems, Studentu str. 50, Kaunas, Lithuania,
[email protected],
[email protected] Abstract. There are various information system modeling techniques such as UML, BPMN, IDEF, ORM, Petri Nets, ERD, DFD, CRC, etc. A very interesting question is how to effectively select an appropriate technique for a specific task. The paper introduces an approach for the identification of the selection criterions that can be applied during modeling technique selection. We analyzed different modeling perspectives, views and roadmaps and present 8 criterions: requirements artifacts, modeling perspectives, project member roles, end users of models, enterprise size, software development method, mastering modeling techniques, and project environment constraints. Possible values of the 8 criterions are also presented. We conclude with a discussion of the future framework research. Keywords: Modeling method, information system, modeling technique selection, software development, selection criterions.
1 Introduction
There are various information system (IS) modeling techniques, methods, languages and roadmaps. They include UML, BPMN, IDEF, ORM, Petri Nets, ERD, DFD, CRC and others [5]. The abundance of techniques and their evolution should be proof of software development maturity and project success, but it is not so. The famous CHAOS Report published by the Standish Group shows that still only about one third of software projects can be called successful, i.e. they reach their goals within the planned budget and on time [24, 31]. So the wealth of approaches does not fix existing problems; it brings other problems, namely selection and choice problems. The more options there are, the harder the choice. One of the pretexts of this research was the problem of UML complexity and its adaptation. UML is one of the most popular modeling languages [16, 34], extended with various profiles. Some authors propose that UML is the evolutionary, general-purpose [3], broadly applicable, tool-supported, and industry-standardized modeling language for specifying, visualizing, constructing, and documenting the artifacts of a system-intensive process [26]. UML 2.1.1 has 13 main modeling diagrams, more than 100 metaclasses, profiles, and the Object Constraint Language (OCL) extension [29]. This amount of possibilities makes UML difficult to adopt efficiently for personal needs [14, 18]. There are different UML roadmaps, but they do not fix all the problems [17]. Inspired by this problem, we analyzed different modeling roadmaps. The paper aims to analyze what impact factors affect the choice of modeling technique, method or language; we prefer calling them criterions. Though it may seem that modeling technique selection issues do not change much, the evolution of UML and Agile-based approaches, as well as the newest surveys, determines the need for new approaches. We evaluated those modeling technique selection criterions that could be used in real-life competitive project environments, because every effort must create value. During the research we identified 8 criterions, which will be at the heart of the future framework research.
2 Objectives of the study and research approach
Our primary research objective was to identify criterions affecting modeling method, technique or language selection. Another objective was to establish a foundation for further research on building a modeling technique selection framework. The paper also includes the background of the problem, the identification of criterions affecting IS modeling technique selection, a criterions classification approach, and directions for further research.
3 Problem background
Modeling is the process of creating abstract, conceptual, graphical and/or mathematical models. Modeling is performed to support other activities, such as review, analysis, specification and elicitation [19]. There are a lot of various modeling methods, techniques and languages. So the key question for the research presented in this paper is: why in some cases do we choose one modeling technique over the others? * This work is supported by Lithuanian State Science and Studies Foundation according to High Technology Development Program Project "VeTIS" (Reg. No. B-07042)
According to Alhir S. S. [3], there are no ‘silver bullets’ in systems development. This means that every technique has its own application domain where it should be used for best results. The question of what and when to choose could be answered if we address an even more fundamental question: what is the value of a process or methodology, independent of its weight, and why would one process or methodology be more suitable than another? Various tactics attempting to reconcile or debate various approaches miss the opportunity to address this fundamental question regarding the value of a process or methodology. So, only by analyzing the criterions affecting the value of modeling can we understand why one modeling technique in some situations brings more value than another. There is a Methodology Engineering branch, first introduced by Kumar and Welke [22] as a discipline aimed at constructing methodologies to match given organizational settings or specific development projects. They proposed an ISD Personal Value Questionnaire which can be assessed within three different groups, addressing technical values, such as timeliness of information; economical values, such as ISD costs; or sociopolitical-psychological values, such as system responsiveness to people. But another approach will be taken in this paper.
4 Research boundaries
During the research we focused on the identification of modeling technique selection criterions for use in our future modeling technique selection framework. But criterions identification touches upon other research objects. Due to this, we define some boundaries for the analysis in this paper:
Dependence. Some modeling techniques bring models, and some model elements could be detailed using other techniques. For example, a UML use case element could be detailed with a sequence diagram. We do not analyze these dependencies and this kind of selection criterions.
Overlapping. Some modeling techniques propose models similar to those offered by other techniques. We do not analyze such overlapping and this kind of selection criterions.
Ambiguity. Some of our proposed selection criterions or their values may seem to belong to other criterions. We do not perform a detailed analysis of criterions in order to eliminate ambiguous criterions.
Criterions prioritization. Some criterions are more important than others, but in this paper we limit our research to their identification only. Prioritization will be included in the future framework research because it is a part of the selection process model.
In this paper we take a multidimensional, independent, unambiguous, non-overlapping view of modeling technique selection criterions, because our goal is to establish a foundation for further research. This multidimensional approach is transformed into the modeling technique selection criterions.
5 Modeling technique selection criterions
5.1 Environment Artifacts
Various artifacts from the environment are captured into models during the modeling process at various project phases and stages. Modeling is used for capturing and visualizing these artifacts, so every model consists of elements, and every element is coherent with the corresponding artifact which is captured in the model. In a paper based on analysis results, Silingas D. discusses how various domain and requirements analysis elements – a semantic map of business concepts, lifecycles of business objects, business processes, business rules, system context diagrams, use cases and their scenarios, constraints, and user interface prototypes – can be modeled using UML [34]. Some of the most popular requirements artifacts are shown in Table 1.
Table 1. Some of the most popular requirements artifacts [34]: business concept, business concept relationship, business object lifecycle, business goal, business role, business process, business task, business rule, business fact, document (information) sample, information flow, user group, user task, usage scenario, non-functional requirement, time constraint, GUI navigation schema, GUI prototype.
These requirements analysis elements are common to many modeling techniques, not only UML, and can be found in most software development projects. Aurum A. and Wohlin C. [8] state that the goal of requirements engineering is to transform potentially inconsistent, incomplete and conflicting stakeholder goals into a complete set of high quality requirements. But the requirement artifact is not the only type of artifact captured during the whole project lifecycle. We propose the term environment artifact, meaning anything valuable that could be captured from the environment into a model or model element. Modeling technique selection depends on what environment artifacts we need to capture in models. So, the question of what modeling method, technique or language to select leads to another question: what environment artifacts do we need to model?
5.2 Modeling Perspectives
A little different from, but similar to, environment artifacts is the modeling perspectives approach. It is the modeling perspective which defines the context and boundaries of modeling. We analyzed several modeling perspectives (Table 2) that in the literature are sometimes called modeling approaches or views:
Table 2. Modeling approaches
Zachman J. A. [37]: scope modeling; business modeling; system modeling; technology modeling; component modeling; operation modeling.
OMG MDA [28]: computation independent modeling (CIM); platform independent modeling (PIM); platform specific modeling (PSM).
Ambler S. W. [7]: code distribution modeling; data storage modeling; data transmission modeling; deployment modeling; function/logic/services modeling; events modeling; hardware modeling; network modeling; system interface modeling; user interface modeling; usage modeling.
Ambler S. W. [6]: usage modeling; user interface development; supplementary requirements modeling; conceptual domain modeling; process modeling; architectural modeling; dynamic object modeling; detailed structural modeling.
Wiegers K. E. [36]: business requirements modeling; user requirements modeling; system requirements modeling; business rules modeling; quality attributes modeling; external interfaces modeling; constraints modeling.
Alhir S. S. [3]: use case (user) modeling; structural (static) modeling; behavioral (dynamic) modeling; component (implementation) modeling; deployment (environment) modeling; constraint (OCL) modeling.
Some perspectives, for example behavioral, use case and structural modeling, are well-known, while others are not. Though modeling perspectives and environment artifacts in some cases may seem to be the same thing, we separate them according to a classification rule we identified: a modeling perspective may include several environment artifacts, while an environment artifact may not include several modeling perspectives and must belong to one perspective only (although we do not exclude exceptions). There is a need for an in-depth comparative analysis and investigation of other possible modeling perspectives, but the goal of our paper is not to compare these perspectives; our task is to identify what criterions affect modeling technique selection. So, the question of what modeling method, technique or language to select leads to another question: what modeling perspective is the most suitable for modeling the required environment artifacts?
5.3 Project Member Roles
It is not enough to know what environment artifacts we need to model. IS researchers define stakeholders as those participants in the development process together with any other individuals, groups or organizations whose actions can influence or be influenced by the development and use of the system, whether directly or indirectly [4, 30]. Typical stakeholders are product managers, various types of users and administrators from the client side, and software team members from the software development side. Ambler S. W. [11] assumes that a project stakeholder is anyone who is a direct or indirect user, manager, senior manager, operations staff member, the project owner who funds the project, support staff member, auditor, program/portfolio manager, software developer or maintenance professional potentially affected by the development and/or deployment of a software project. In addition there are software architects and quality assurance engineers [33]. Theoretically, modeling could be done by anyone who is involved in the project software development, but in practice not all project stakeholders do modeling. For example, clients usually do not do modeling. People usually do modeling because they have a task to capture software requirements into specifications. Silingas D. [33] proposed a schema of project member role applicability to modeling tasks.
Table 3. Project member role applicability to modeling tasks [33]
Business Analysts: capture business processes; model organizational structures; perform analysis of domain concepts; prepare use case-driven requirements model; ...
Software Architects: relate different architectural views; transition from business to system models; define top-level component and package structures; manage modeling teamwork; ...
Developers: prepare detailed class models; model interactions for use cases; introspect existing systems by reversing and visualizing code; design OO, relational, and XML data; transform class structures to database or XML schemas; ...
Quality Assurance Engineers: analyze workflows for use cases; prepare action flows for test cases; model test data; model interactions for unit and system testing; ...
Cockburn A. proposes a similar role mapping to modeling [12]:
Table 4. Project member role mapping to modeling [12]
Business expert: produce Actor-Goal List; produce Use Cases and Requirements File; produce User Role Model.
Lead designer: produce Architecture Description.
Designer-programmer: produce Screen Drafts; produce Common Domain Model; produce Design Sketches and Notes.
Writer: produce User Help.
Depending on the role of the modeler, different environment artifacts are captured into the models. So, the question of what modeling method, technique or language to select leads to another question: who will do the modeling?
5.4 End Users of Models
In modeling technique selection it is also important to know who will use the model created using the selected technique. For example, if software enterprise analysts or developers use models for communication or negotiation with client representatives, these models must be understandable to people with no IT qualification [6]. This means that analysts or developers will probably not choose UML sequence or collaboration diagrams, because these are sometimes hard to understand even for IT specialists. Client representatives rarely have enough modeling technique skills to understand complex modeling notations. There are modeling roadmaps that propose client-centered modeling. One such example is use case modeling, because the use case notation is quite simple for client representatives to understand. Many Agile methods and best practices claim that modeling should be done in the simplest way [5, 12], because in most cases it is too time consuming and difficult to use for communication with the client. So if the end users of the model are the client representatives, the selection of modeling technique will be narrowed to those techniques that offer the simplest notations. On the other hand, there are projects of a certain type, called environment analysis projects. These projects are initiated for the inception, analysis and elicitation of requirements that must be met when building a future software product. The company that creates the requirements specification must capture all possible requirements in the format most suitable for later use, which partly consists of models. The user of that requirements specification could be another software development company, whose analysts, quality assurance specialists and developers will use the specification to build the software product. In this case the specialists of the first company will be encouraged to use comprehensive modeling. So, the question of what modeling method, technique or language to select leads to another question: who will use the model?
5.5 Enterprise Size: Quality vs. Efforts
Some Agile software development methods, such as XP [9] or Scrum [32], and their practitioners propose using those methods for small projects only [1]. For example, Cockburn A. even classifies the use of the Crystal family of methods according to the number of active project members [11]. This shows how project size can impact the selection of a software development method. In 2005 the European Commission published “The New SME Definition: User Guide and Model Declaration” [15], where the following three levels of small to medium-sized enterprises (SME) were defined: small to medium – enterprises which “employ fewer than 250 persons and which have an annual turnover not exceeding 50 million Euro, and/or an annual balance sheet total not exceeding 43 million Euro”; small – “which employ fewer than 50 persons, and whose annual turnover or annual balance sheet total does not exceed 10 million Euro”; and micro – “which employ fewer than 10 persons”. In 2008 there was a survey of 392 participants from 28 countries worldwide [23], whose results show that 36% of enterprises worldwide are micro and 22% are small size (Figure 1, left part). There are large software companies that build large and complex software systems. Development of some parts is outsourced by large companies to micro and small companies [27], so micro and small companies are a part of a large software development process. Research identified that most of those enterprises do not use any quality or maturity standards such as CMMI, SPICE, or ISO 12207. Because of that, any bug or mistake made by a micro or small company might be the reason for the failure of the entire large system and might cost millions of dollars. Some research identified that the key reasons for not using standards were the shortage of time, lack of support and lack of resources (Figure 1, right part), because micro and small companies have small budgets and often do not have enough circulating assets.
Figure 1. Survey results [23]
If company size affects the use of standards, it could also affect modeling technique selection, because the most complex modeling techniques require training, which might be too time- and money-consuming. Low effort on training can have a negative impact on modeling quality (Figure 2).
Figure 2. Quality dependency on efforts
Because of the software company's size and budget, some modeling techniques might be unavailable due to the lack of training, resources, time-span and encouragement from management. So, the question of what modeling method, technique or language to select leads to another question: what is the size of your enterprise or project?
5.6 Software Development Method: Lightweight or Heavyweight
The selection of the modeling technique is itself a process step of creating a model, and this step fits into the project lifecycle. There are various software development methods that define various project lifecycle models. Each of them defines certain modeling techniques that should be used during the appropriate process steps. A lot of IT practitioners discuss the strong and weak sides of various methods, what software development process and technique should be used, etc. Kroll P. and MacIsaac B. [21] distinguish a dipole between heavyweight and lightweight software development methods, while Kroll P. and Kruchten P. [20] identify two converse dipoles – iteration and ceremony weight (Figure 3).
Figure 3. Process map of software development [20]
Every software development method has its own features, and they can be evaluated according to the dipole metrics. Low ceremony methods (Figure 3) require less documentation than high ceremony ones. For example, Agile methods such as XP [9], Scrum [32] and Agile Modeling [5, 6] propose a minimal amount of modeling, such as CRC cards, user stories, and product and sprint backlogs. Although theoretically the Agile philosophy [2] does not prohibit the use of other requirements modeling and gathering techniques, most Agile methods prefer less documentation, which means less modeling. The more heavyweight the process, the more heavyweight the modeling ought to be. This is obvious evidence of how existing software project development methods impact the use of modeling techniques. Another important aspect of modeling is that software development methods use software lifecycle models which consist of phases and steps. Modeling technique selection depends on when the modeler must do the modeling. An interesting misunderstanding about Agile methods is that most people think they restrict any modeling or specification to sketches, CRC cards or user stories. That is not true. Agile practitioners are against Big Requirements Up Front (BRUF) at the early project phases [5]. They authorize only the kind of modeling that brings value to the software development product, not to its specification. So if there is a need to capture and specify some business logic, rules or software, then this should be done during the later phases of the project, when no or only small requirement changes may occur. So, the question of what modeling method, technique or language to select leads to another question: what software development method do we use and at what step do we need to model?
5.7 Mastering Modeling Techniques
A very primitive but humane feature of modeling technique selection is the fact that the choice is made depending on skills and knowledge about the options. Some authors blame UML 2 for its adaptation and use complexity [14, 18]. That means practitioners do not use all UML modeling techniques because of their complexity. A worldwide survey [23] showed that micro and small enterprises do not use standards because it is too time consuming and demands too much training and too many resources. The reasons for not using some modeling techniques could be the same as those for not using standards. The IT market is very dynamic and competitive. Because of the lack of circulating assets, micro and small companies cannot allocate appropriate training and materials for mastering appropriate modeling techniques. This means that for a specific modeling task the modeler might choose a modeling technique that he knows better than the others. Alhir S. S. [3] notes that every project team is a unique object bringing its own flavor to the software development process. This means that even if two project teams use the same modeling techniques, they might produce slightly different results. So, the question of what modeling method, technique or language to select leads to another question: what modeling techniques do we master?
5.8 Project Environment Constraints
There is one more group of criterions affecting modeling selection: various standards that are implemented in the project environment. These include the CMMI, ISO 12207, ISO 9001 and SPICE maturity standards that help companies grow in terms of quality. But these models do not prohibit companies from adding an individual flavor to the standard installation. So, in some cases companies that installed CMMI [13] or ISO 9001 [25] just formalized and specified their know-how and started using their own methods “legally” [35]. However, the key point is that standards create restrictions in the project environment. For example, a company may implement the ISO 9001 standard for its IS development process. As a result, software development activities, project member roles, techniques and even specification document templates may be defined. This will undoubtedly have an impact on what, how and when to model. So, the question of what modeling method, technique or language to select leads to another question: what constraints exist in the project environment?
6 Research Results: Criterions Affecting Modeling Selection
Different approaches and roadmaps were analyzed, and a combined approach was offered. It is clear that modeling technique selection is multidimensional and cannot be done in a simple way. Table 5 shows a summary of the gathered, identified and analyzed criterions:
Table 5. Classification of criterions
Environment Artifacts (EA) – What environment artifacts do we need to model?
Modeling Perspectives (MP) – What modeling perspective is the most suitable for modeling required environment artifacts?
Project Member Roles (PMR) – Who will do the modeling?
End User of Models (EUM) – Who will use the model?
Enterprise Size (ES) – What size is the enterprise or project?
Software Development Method (SDM) – What software development method do we use and at what step do we need to model?
Mastering Modeling Techniques (MMT) – What modeling techniques do we master?
Project Environment Constraints (PEC) – What constraints exist in the project environment?
Figure 4. Criterions affecting modeling technique selection (the modeling task is at the center, surrounded by the eight criterion questions: what environment artifacts do we need to model; what modeling perspective is the most suitable; who will do the modeling; who will use the model; what size is the enterprise or project; what software development method do we use and at what step do we need to model; what modeling techniques do we master; what project environment constraints exist)
Figure 4 shows a conceptual view of the modeling task and displays several different views of solving both this modeling task and the technique selection problem.
Figure 5. Example of modeling technique selection by using criterions
Let us go through a visualized modeling technique selection process example. Assume that we have a task to create some model. At the very beginning it seems there are a lot of possible modeling techniques to select from (Figure 5, part a). Not all modeling techniques can capture the desired environment artifacts, however, so after answering the first “Environment Artifacts” criterion (C1) question, “What environment artifacts do we need to model?”, fewer possible options remain (Figure 5, part b). They are marked as medium grey circles. Those possibilities that do not meet the criterion requirements are crossed out. As mentioned in the research boundaries section, we do not analyze overlapping, dependency and ambiguity; these possibilities are marked as light grey circles. After considering the other seven criterions we have a few possibilities left (Figure 5, part c). This example demonstrates how the proposed 8 criterions might be used in practice; a simple sketch of this filtering process is given below. During our research, we also distinguished prioritization as an efficient way to rate criterions according to their importance. Prioritization is widely used in software requirements engineering, software project management, software development methods, etc. [10, 36]. Because some criterions are more important than others, there is a need for the evaluation of criterion prioritization; however, this is a subject of the future framework research. The proposed criterion-based approach for the selection of the modeling technique helps gain a better understanding of how choices are made. In real-life competitive project environments, every selection must bring value. But if the choice is made without proper knowledge, it is very hard to achieve this value. Some authors suggest that inappropriate selection could be the reason for project failure [5]. Sometimes too many resources are spent on modeling with techniques that require more effort than the value they create. If so, the proposed criterion-based approach should help understand why one modeling technique is more preferable than another and make the selection accordingly.
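The selection process just described amounts to successively filtering the candidate set by each criterion. The following minimal Java sketch illustrates the idea; the Technique record, its attributes, and the two predicates are hypothetical illustrations, not part of the proposed framework.

import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical illustration: each criterion is a predicate over candidate
// modeling techniques, and selection is successive filtering (Figure 5).
public class TechniqueSelection {
    record Technique(String name, boolean capturesBusinessProcesses,
                     boolean simpleNotation) {}

    public static void main(String[] args) {
        List<Technique> candidates = List.of(
            new Technique("BPMN", true, true),
            new Technique("UML sequence diagram", false, false),
            new Technique("Use case model", false, true));

        // C1: environment artifacts - we need to capture business processes.
        Predicate<Technique> c1 = Technique::capturesBusinessProcesses;
        // C4: end users of models - client representatives need simple notation.
        Predicate<Technique> c4 = Technique::simpleNotation;

        List<Technique> remaining = candidates.stream()
            .filter(c1.and(c4))            // apply the criteria in turn
            .collect(Collectors.toList());
        System.out.println(remaining);     // [BPMN] in this toy example
    }
}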
7 Conclusions
Our research showed that the modeling technique selection mechanism is complex and multidimensional. During the presented research, 8 criterions were identified, each affecting modeling technique selection. But this number may change, as further research could result in the discovery of new criterions. The research presented in this paper was kept within certain boundaries: such issues as criterion dependency, overlapping and ambiguity were not addressed. Every proposal has its exceptions, so exceptions are possible here as well. But the goal was to establish an approach for evaluating modeling value by identifying its selection criterions. The resulting methodology can be used for answering such questions as what modeling technique to choose. There are no ‘silver bullets’, not only for systems development, but also for modeling. Various tactics attempting to reconcile or debate various approaches miss the opportunity to address the fundamental question of value. Only by understanding the criterions affecting the value of modeling can we understand why one modeling technique in some situations brings more value than another. Only then can we make our selection effectively. This paper does not address modeling technique selection in environments other than real-life competitive project environments; the presented research was based on the proposition that in such environments every effort must create value. Sometimes there is an illusion that a wide range of modeling technique possibilities is available, but under closer inspection we discover that for every situation only a few options are worth considering.
8 Future work
One of the goals of the paper was to establish a foundation for the future research. To make further progress, the following research activities should be addressed:
• Search for additional possible criterions;
• Detailed analysis of dependency, overlapping and ambiguity of every criterion;
• Identification of possible exceptions to the rule that a modeling perspective may include several environment artifacts, while an environment artifact may not include several modeling perspectives and must belong to one only;
• Criterions prioritization analysis;
• Building of the modeling technique selection framework.
References
[1] Abrahamsson P., Salo O., Ronkainen J., Warsta J. Agile Software Development Methods: Review and Analysis. VTT Publications, 2002.
[2] Agile Alliance. Principles behind the Agile Manifesto. http://agilemanifesto.org/principles.html
[3] Alhir S. S. Guide to Applying the UML. Springer-Verlag, 2002.
[4] Ambler S. W. Active Stakeholder Participation: An Agile Best Practice. http://www.agilemodeling.com/essays/activeStakeholderParticipation.htm
[5] Ambler S. W. Agile Modeling: Effective Practices for eXtreme Programming and the Unified Process. John Wiley & Sons, 2002.
[6] Ambler S. W. The Object Primer, Third Edition. Cambridge University Press, 2004.
[7] Ambler S. W. UML Data Modeling Profile. http://www.agilemodeling.com/essays/agileArchitecture.htm
[8] Aurum A., Wohlin C. Requirements Engineering: Setting the Context. In: Aurum A., Wohlin C. (Eds.): Engineering and Managing Software Requirements, Springer-Verlag, 2005, pp. 1-16.
[9] Beck K. Extreme Programming Explained: Embrace Change, Second Edition. Addison Wesley Professional, 2004.
[10] Berander P., Andrews A. Requirements Prioritization. In: Aurum A., Wohlin C. (Eds.): Engineering and Managing Software Requirements, Springer-Verlag, 2005, pp. 69-94.
[11] Cockburn A. Agile Software Development: The Cooperative Game, Second Edition. Addison Wesley Professional, 2006.
[12] Cockburn A. Crystal Clear: A Human-Powered Methodology for Small Teams. Addison Wesley Professional, 2004.
[13] Dayan R., Evans S. KM your way to CMMI. Journal of Knowledge Management, Emerald Group Publishing Limited, 2006, Vol. 10, No. 1, pp. 69-80.
[14] Dobing B., Parsons J. Dimensions of UML Diagram Use: A Survey of Practitioners. Journal of Database Management, IGI Publishing, 2008, Vol. 19, Issue 1, pp. 1-18.
[15] European Commission. The New SME Definition: User Guide and Model Declaration, 2005. http://europa.eu.int/comm/enterprise/enterprise_policy/sme_definition/sme_user_guide.pdf
[16] Evans G. K. Agile RUP: Taming the Rational Unified Process. In: Roussev B., Liu L. (Eds.): Management of the Object-Oriented Development Process, Idea Group Publishing, 2006, pp. 231-246.
[17] Fowler M. UML Distilled: A Brief Guide to the Standard Object Modeling Language, Third Edition. Addison Wesley, 2003.
[18] Grossman M., Aronson J., McCarthy R. Does UML make the grade? Insights from the software development community. Information and Software Technology, 2005, Vol. 47, Issue 6, pp. 383-397.
[19] Hood C., Wiedemann S., Fichtinger S., Pautz U. Requirements Management: The Interface Between Requirements Development and All Other Systems Engineering Processes. Springer-Verlag, 2008.
[20] Kroll P., Kruchten P. The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP. Addison Wesley, 2003.
[21] Kroll P., MacIsaac B. Agility and Discipline Made Easy: Practices from OpenUP and RUP. Addison Wesley Professional, 2006.
[22] Kumar K., Welke R. J. Methodology Engineering: A Proposal for Situation-Specific Methodology Construction. In: Challenges and Strategies for Research in Systems Development, John Wiley & Sons, 1992, pp. 257-269.
[23] Laporte C. Y., Alexandre S., O'Connor R. V. A Software Engineering Lifecycle Standard for Very Small Enterprises. In: O'Connor R. V., Baddoo N., Smolander K., Messnarz R. (Eds.): Software Process Improvement: 15th European Conference, Springer, 2008, pp. 129-141.
[24] Lopata A., Gudas S. Enterprise model based specification of functional requirements. In: Information Technologies' 2008: proceedings of the 14th International Conference on Information and Software Technologies, IT 2008, Kaunas University of Technology, 2008, pp. 189-194.
[25] McMichael B., Lombardi M. ISO 9001 and Agile Development. AGILE 2007 Conference, 2007, pp. 262-265.
[26] Nemuraite L., Ceponiene L., Vedrickas G. Representation of business rules in UML&OCL models for developing information systems. In: Stirna J., Persson A. (Eds.): The Practice of Enterprise Modeling: First IFIP WG 8.1 Working Conference, PoEM 2008, Springer Berlin Heidelberg, 2008, Vol. 15, pp. 182-196.
[27] Oktaba H., Piattini M. Software Process Improvement for Small and Medium Enterprises: Techniques and Case Studies. Information Science Reference, 2008.
[28] OMG. MDA Guide Version 1.0.1, 2003.
[29] OMG. Unified Modeling Language: Superstructure, Version 2.1.1, 2007.
[30] Pouloudi A., Whitley E. Stakeholder identification in inter-organizational systems: Gaining insights for drug use management systems. European Journal of Information Systems, Palgrave Macmillan, 1997, Vol. 6, No. 1, pp. 1-14.
[31] Rubinstein D. Standish Group Report: There's Less Development Chaos Today. SD Times, March 1, 2007.
[32] Schwaber K. Agile Project Management with Scrum. Microsoft Press, 2004.
[33] Silingas D. Best Practices for Applying UML, Part I. http://www.magicdraw.com/files/whitepapers/Best_Practices_for_Applying_UML_Part1.pdf
[34] Silingas D., Butleris R. UML-intensive framework for modeling software requirements. In: Information Technologies' 2008: proceedings of the 14th International Conference on Information and Software Technologies, IT 2008, Kaunas University of Technology, 2008, pp. 334-342.
[35] Tutkute L., Butleris R., Skersys T. An Approach for the Formation of Leverage Coefficients-based Recommendations in Social Network. Information Technology and Control, Kaunas University of Technology, 2008, Vol. 37, No. 3, pp. 245-254.
[36] Wiegers K. E. Software Requirements, Second Edition. Microsoft Press, 2003.
[37] Zachman J. A. Zachman Enterprise Framework. http://www.zachmaninternational.com/index.php/homearticle/13#maincol