An Evaluation Framework for MAS Modeling Languages based on Metamodel Metrics

Iván García-Magariño, Jorge J. Gómez-Sanz, and Rubén Fuentes
Dept. Software Engineering and Artificial Intelligence, Facultad de Informática, Universidad Complutense de Madrid, Spain
[email protected], [email protected], [email protected]
Abstract. Using meta-modeling techniques for defining the abstract syntax of MAS modeling languages (MLs) is a common practice today. This paper presents a framework for evaluating the abstract syntax of MLs. This framework is made of a set of metrics for measuring metamodels. These metrics help in quantifying three features of the language: specificity, availability, and expressiveness. The specificity and availability metrics are focused on the measurement of the abstract syntax of the final ML products. On the other hand, the expressiveness metric is focused on the measurement of the abstract syntax of the MLs in their creation and evolution phases. The application of the specificity and availability metrics is exemplified with a study on six agent-oriented MLs. The application of the expressiveness metric is exemplified with the measurement of certain evolution phases of a particular agent-oriented ML. In conclusion, among other things, the presented evaluation framework can be used to compare agent-oriented metamodels and to evaluate the progress of an agent-oriented metamodel over time. Key words: Metric, metamodel, modeling language, multi-agent system, evaluation.
1 Introduction
There are several Multi-agent System (MAS) methodologies, each one with its own modeling language (ML), like AAII/BDI[31], Tropos[5], Vowels[13], MAS-CommonKADS[17], INGENIAS[30], MASSIVE[21], GAIA[41], MaSE[11], AALAADIN[14], Agile PASSI[8], PASSI and ADELFE[3]. On the other hand, AUML[2] is an agent-oriented ML which is not associated with any particular agent-oriented methodology. The mentioned MLs are not completely static. In fact, some of them keep evolving, introducing new concepts and improving the proposed abstract syntax. Given this diversity and evolution of MAS methodologies and MLs, their evaluation is an increasing demand. For this reason, some comparison and evaluation frameworks for agent methodologies and MLs[10, 35, 36, 33, 38, 18] are provided in order to satisfy this demand. The MLs are usually composed of the abstract syntax (defined by a metamodel),
the semantics and probably, but not necessarily, the concrete syntax, known as notation. This paper focuses only on the evaluation of the abstract syntax of MLs by measuring the corresponding metamodels. The difference between the mentioned evaluation frameworks and the presented framework is the following. The existing frameworks qualify the MAS MLs; however, the presented framework quantifies the MLs with numeric values. The existing evaluation frameworks define quality features using a discrete scale with few values, fewer than ten. In some cases, this evaluation may not be enough. For instance, for tracking small improvements in the progress of a ML, precise metrics may be necessary to quantify these improvements. The presented framework provides this kind of metrics. Furthermore, the presented framework can not only track the improvements of a ML but also compare the existing MAS MLs in a quantitative way. Looking for guidelines, works in the meta-modeling area provide clues about relevant metrics. For instance, Kargl[19] defines a metric for measuring the explicitness of modeling concepts in metamodels. In addition, Vépa[39] presents several metrics on KM3 metamodels. Inspired by those works, this paper introduces a framework made of three new metrics which have been applied to agent MLs: availability, specificity and expressiveness. This framework can assist the MAS designer in different ways. One is selecting the appropriate MAS ML for a specific domain according to the measurements of these metrics. Another use consists in using the measurement to track the progress made along the development lifecycle of the ML. In particular, the availability and specificity metrics focus on the comparison of the different existing MLs, and the expressiveness metric focuses on tracking the improvements of a ML. The structure of the remainder of this paper is the following. Section 2 describes the presented evaluation framework made of certain metamodel metrics. Section 3 measures several MAS MLs with the availability and specificity metrics. Then, Section 4 analyses the improvements of a certain ML with the expressiveness metric. Section 5 discusses the related work. Finally, Section 6 mentions the conclusions and the future work.
2 Evaluation Framework
The evaluation framework presented here uses measurements of metamodels to produce the evaluation of the ML abstract syntax. The main advantage of using metamodels is the existing software support for handling them. So, instead of manually inspecting a metamodel, it is possible to write programs that perform the measurement for us. The metrics of the presented framework are called availability, specificity and expressiveness. The availability and specificity metrics measure how appropriate the concepts of a ML are for a particular problem domain. The expressiveness metric measures the amount of instantiated elements needed for representing the problem domain models. These metrics are described in this section.
2.1 Availability Metric
The goal of the availability metric is to measure how appropriate a ML is for modeling a particular problem domain. For the experiments of this work, Multi-agent Systems (MAS) are used. The higher the value, the better the ML covers the needs of the problem domain. A low value indicates that the agent ML lacks concepts that the developer considers necessary. The idea behind this metric is that, for a particular problem domain, some concepts are necessary and others are not. The availability metric measures the percentage of these necessary concepts that are contained in the ML. Therefore, the availability metric is defined by the ratio indicated in Equation 1. In this equation, nc indicates the number of necessary concepts (metamodel elements) in the modeling of the particular problem domain, and ncmm indicates the number of these necessary concepts that are actually contained in the metamodel. The considered concepts can be entities, relationships or any other kind of metamodel element.

availability = ncmm / nc    (1)

The set of necessary concepts for a particular problem domain must be adapted according to the ML, i.e., the user must stick to what the metamodel contains. So, the user must decide which of the metamodel elements are necessary for solving the particular problem. This is the number of necessary metamodel elements (ncmm). Once this set is selected, the user must detect whether any concept is missing for solving the problem. This is the number of missing concepts (mc). Then, the number of necessary concepts (nc) is the sum of ncmm and the number of missing concepts (mc). Thus, Equation 1 can be converted into Equation 2, which can also be used to calculate the availability measurement. The best result is obtained when there are no missing concepts (mc = 0). In this case, the availability measurement is the unity (100%).

availability = ncmm / (ncmm + mc)    (2)
This metric depends on the expertise of the user about the utility of the concepts of the ML for a particular problem domain, which introduces variations among evaluators. In order to reduce these variations, the presented framework strongly recommends the two following precautions. Firstly, for comparing certain MLs, the same person must evaluate the different MLs. Secondly, a large number of users should evaluate the collection of MLs to reduce the mentioned variations statistically. This statistical assumption is already applied in well-known empirical cost estimation methods such as COCOMO[4].
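As an illustration of how Equations 1 and 2 could be automated, the following minimal Python sketch computes the availability from evaluator-provided counts. The function name and the example counts are illustrative assumptions, not part of the original framework.

```python
def availability(ncmm: int, mc: int) -> float:
    """Availability (Equation 2): necessary concepts found in the metamodel
    (ncmm) divided by all necessary concepts (ncmm + mc)."""
    nc = ncmm + mc  # Equation 1 denominator: all necessary concepts
    if nc == 0:
        raise ValueError("at least one necessary concept is required")
    return ncmm / nc


# Hypothetical counts for some problem domain: 12 necessary concepts are
# present in the metamodel and 4 are missing.
print(f"availability = {availability(ncmm=12, mc=4):.1%}")  # 75.0%
```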
2.2 Specificity Metric
The goal of the specificity metric is to measure the percentage of the modeling concepts that are actually used for modeling a particular problem domain. If
the value of this metric is low, it means that many ML concepts are not used for modeling the problem domain. Then, the scope of the ML is probably more general than necessary. On the contrary, if the value of the specificity metric is high, the scope of the ML is specific to the problem domain.

specificity = ncmm / cmm    (3)
The specificity metric is defined by Equation 3. This equation introduces a new term, cmm, which represents the number of all the concepts in the metamodel. The specificity metric measures the ratio of the metamodel concepts necessary for a problem domain, divided by the whole number of metamodel elements. In the specificity metric, the best result is obtained when all the concepts of the metamodel are necessary (ncmm = cmm) for the corresponding problem domain. In this case, the specificity value is the unity (100%). As a final remark, multi-perspective MLs[20] generally use more modeling concepts than other MLs. Multi-perspective MLs get low results for this metric only if some of the perspectives are not necessary. On the contrary, they get high results if all the perspectives are necessary. In particular, the use of multi-perspective MLs is necessary for MAS modeling. For instance, CommonKADS[20] is a multi-perspective MAS ML which gets high results for the specificity metric because its modeling perspectives are necessary. This remark about multi-perspective MLs also applies to the other metrics presented in this paper.
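In the same spirit, a minimal sketch of Equation 3, again with hypothetical counts that are not taken from any real metamodel:

```python
def specificity(ncmm: int, cmm: int) -> float:
    """Specificity (Equation 3): necessary metamodel concepts (ncmm)
    divided by all the concepts in the metamodel (cmm)."""
    if cmm == 0:
        raise ValueError("the metamodel must contain at least one concept")
    return ncmm / cmm


# Hypothetical metamodel with 40 concepts, of which 12 are necessary
# for the problem domain at hand.
print(f"specificity = {specificity(ncmm=12, cmm=40):.1%}")  # 30.0%
```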
2.3 Maximising the Availability and Specificity
A model designer should try to get high values in both the availability and specificity metrics. Nevertheless, a high availability is commonly preferable to a high specificity. High specificity implies that the language is very domain specific. High availability implies having all the elements needed to solve the problem, which is preferable to not using all the concepts of the ML. The way availability and specificity affect the problem-domain modeling is different. For example, an industrial company may need to work with a specific platform and a certain ML. They want to construct a code generation engine that converts the models expressed with the mentioned ML into programming code suited for their specific platform. Then, it is recommended to select a subset of the mentioned ML with the following mechanism. First of all, a high availability value of the subset is necessary to ensure that the new code generator can solve the company's domain problem. On the other hand, a high specificity value is necessary to reduce costs: implementing the smallest necessary subset of the ML reduces the costs of implementing the new code generator. Theoretically, availability and specificity would get the highest scores when ncmm = nc and mc = 0. So, a designer trying to select a ML subset with the maximum availability and specificity should do the following: given an agent ML, create a subset of the ML with those elements needed in the concrete problem,
and only those elements. This way, all the problems obtain the maximum in availability, which is the metric with the highest priority.
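A minimal sketch of this selection mechanism, assuming the metamodel and the needed concepts are available as plain sets of concept names; the concept names below are illustrative and not taken from any real agent ML.

```python
# Illustrative concept sets (hypothetical names, not from any real agent ML).
metamodel = {"Agent", "Goal", "Task", "Interaction", "Resource", "Organization"}
needed = {"Agent", "Goal", "Task", "Interaction"}

# Subset selection: keep exactly those ML concepts needed for the problem.
subset = metamodel & needed

ncmm = len(needed & subset)        # needed concepts that the subset contains
mc = len(needed - subset)          # needed concepts the subset cannot offer
availability = ncmm / (ncmm + mc)  # Equation 2; 1.0 whenever mc == 0
specificity = ncmm / len(subset)   # Equation 3; 1.0 by construction of the subset

print(availability, specificity)   # 1.0 1.0
```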
2.4 Expressiveness Metric
The goal of the expressiveness metric is to measure the amount of model elements necessary for modeling a system of a problem domain. This metric is related to the metamodel instantiation process. The fewer model elements are necessary, the more expressive the ML is. It is important to highlight that this metric uses model elements. The model elements are the elements used in the model specification or model design of a system. The model elements are different from metamodel elements. The metamodel elements represent ML concepts; the model elements are instances of the metamodel elements.

expressiveness(ml, system) = size(system) / nme    (4)

The expressiveness of a ML (denoted as ml) for a system (denoted as system) is defined by Equation 4. The nme term denotes the number of model elements necessary for modeling the system of the problem domain. The size of a system[40] is not free from ambiguity, since there is no accepted unified metric for system size. Thus, this metric for measuring a single ML is not as relevant as the previous ones. However, this metric is useful for comparing two MLs applied to the same problem. The same system is modeled with the two MLs. Then, the system size is the same, because the system size only depends on the system and does not depend on the way it is modeled. The ratio of the expressiveness of two MLs (denoted as ml1 and ml2) is calculated with Equation 5. In this equation, the nme1 and nme2 terms denote the numbers of model elements for modeling the system with the ml1 and ml2 MLs respectively.

expressiveness(ml1) / expressiveness(ml2) = nme2 / nme1    (5)
In conclusion, the expressiveness metric can be used for comparing the expressiveness of two MLs for solving the same problem. The numbers of elements necessary for modeling the same problem with the two MLs are considered. The more expressive ML is the one which needs fewer elements for representing the model that solves the same problem. The expressiveness metric is especially useful for evaluating improvements in the expressiveness of a ML. Therefore, if some changes are made to a ML, then the new ML expressiveness can be compared with the old ML expressiveness. If the ratio is greater than one, the expressiveness of the ML has improved. For example, a ML that includes the one-to-many interaction is more expressive than another ML that needs several one-to-one interactions to express the one-to-many interaction, because the first ML needs fewer elements for representing the same models. As a remark, the quality of the modeling can influence the number of modeling elements. For this reason, the presented framework strongly recommends that the same designer is asked to model the system in both MLs. In this manner, the quality of modeling is assumed to be similar in the two mentioned MLs. In order to improve the quality of the measurement, it is also recommended to use as many model designers as possible to model the system in both MLs. A battery of multi-agent systems of several problem domains can be established for the comparison of MAS MLs. This battery can be considered a benchmark. Establishing this benchmark is left for future work. With this benchmark, a ML can be selected as the unit: the expressiveness of this ML is considered to be one. Then, the expressiveness of any ML can be measured by comparison with the mentioned unit ML.
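A minimal sketch of the comparison in Equation 5, assuming that the number of model elements for each ML has already been counted from two models of the same system; the counts below are hypothetical.

```python
def expressiveness_ratio(nme_other: int, nme_this: int) -> float:
    """Equation 5: expressiveness(this ML) / expressiveness(other ML)
    = nme_other / nme_this, since the system size cancels out when the
    same system is modeled with both MLs."""
    return nme_other / nme_this


# Hypothetical counts: ml1 needs 40 model elements, ml2 needs 55,
# for the same system. A ratio above one means ml1 is more expressive.
print(f"ratio = {expressiveness_ratio(nme_other=55, nme_this=40):.2f}")  # 1.38
```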
3 Measuring Several MAS MLs with the Availability and Specificity Metrics
This section introduces an example of application of the availability and specificity metrics to compare results across different MLs. The comparison uses case studies included in the INGENIAS Development Kit (IDK)[1] tool distribution and in the training section of the Grasia site (http://grasia.fdi.ucm.es/UK/index.php). These case studies are completely specified and implemented, so they are a reliable source of information. Table 1 presents the measurement values, obtained with the aid of the metamodels published for different agent-oriented MLs. Readers should remember that these metrics are based exclusively on the metamodel elements and that they do not require a complete modeling, just the appreciation of the usefulness of a concrete concept in a concrete problem domain. In this case, the problem domain is determined by the leftmost column and each row shows the measurement for the different MLs. Further details of the measurement presented in this section are available at http://grasia.fdi.ucm.es/gschool. The concrete studied MLs are: Tropos, PASSI, Agile PASSI, Prometheus, MaSE, and INGENIAS. The Tropos metamodel is presented in [37]. The PASSI and Agile PASSI metamodels[7, 9] are already defined. The Prometheus methodology is introduced in [29]. MaSE is described in [11, 12]. The INGENIAS metamodel is available at [1]. The battery of examples used as problem domains for this measurement is described next. The Cinema and Request examples are distributed with IDK 2.7[1]. The first one provides support for users wanting to get a ticket to see a movie. The system implements an interface agent, a buyer agent that looks for suitable cinemas, and the cinemas themselves. The second example is a demonstration of how to construct a GUI connected to an agent. The Delphi example[16] is a demonstration of how to reach consensus among experts for evaluating the relevance of a document. Finally, the Crisis Management example solves the crisis management case study[26, 32]. This example uses a MAS to coordinate people on the ground to help each other.
Table 1. Measuring Availability and Specificity of several MAS MLs

                     Tropos                    PASSI                     Agile-PASSI
                     availability specificity  availability specificity  availability specificity
Cinema               75.0%        75.0%        88.2%        75.0%        68.7%        73.3%
Request example      91.7%        45.8%        90.1%        50.0%        80.0%        53.3%
Delphi               72.7%        66.7%        88.9%        80.0%        70.6%        80.0%
Crisis Management    63.6%        58.3%        85.7%        90.0%        75.0%        100.0%
Total Average        75.8%        61.5%        88.4%        73.7%        73.6%        76.7%

                     Prometheus                MaSE                      INGENIAS
                     availability specificity  availability specificity  availability specificity
Cinema               85.7%        94.7%        77.7%        60.9%        100.0%       25.7%
Request example      90.0%        47.4%        100.0%       52.2%        100.0%       14.9%
Delphi               84.2%        84.2%        82.4%        60.9%        97.3%        24.3%
Crisis Management    72.7%        84.2%        77.7%        60.9%        97.3%        25.0%
Total Average        83.1%        77.6%        84.5%        58.7%        98.7%        22.5%
Inspecting Table 1, it may be observed that the specificity measurements of the Request example are low. The main reason for this fact is the simplicity of the example, which only needs a few concepts. Prometheus is especially useful for BDI-like agents. Thus, the scores obtained in the measurements of Prometheus depend on whether the examples are BDI-like. This explains the lower scores obtained in the Request example and the higher values in more complex developments, like Delphi or Cinema. Agile-PASSI is a subset of PASSI. Agile-PASSI only uses the most necessary concepts, according to their authors. This explains why the specificity of Agile-PASSI is greater than that of PASSI: the number of Agile-PASSI concepts is smaller. However, the Agile-PASSI availability decreases. The reason is simple as well: some MASs need some concepts contained in PASSI, but not contained in Agile-PASSI.
4 Analysing the Improvements of a ML with the Expressiveness Metric
This section describes a use case of the presented evaluation framework. This use case measures the improvements of a particular ML. In particular, the INGENIAS ML is used for the example presented in this section, comparing two versions (2.6 and 2.7) of the mentioned ML. These versions are supported respectively by the versions 2.6 and 2.7 of the IDK tool. For this comparison, the expressiveness metric scores indicate a relevant improvement in the new version. The new version supports one-to-many and many-to-many interactions. These interactions were not supported in the old version (2.6). A MAS with one-to-many or many-to-many interactions can be represented directly with the new version. However, in the old version, these kinds of interactions cannot be represented directly. Instead, several one-to-one interactions are needed to represent just one interaction of the new kinds. Thus, the new ML version needs fewer elements than the old ML version to represent the same MAS model. Therefore, the 2.7 version is more expressive than the 2.6 version. The formal comparison uses the Request Example MAS. In this MAS, an agent, called PersonalAssistant, calls for proposals to other agents, called providers. There is a one-to-many interaction from the PersonalAssistant agent to the provider agents. The distributed RequestExample MAS has three providers. However, several numbers of providers are considered for this comparison, because the expressiveness ratio between both versions increases when considering a higher number of providers.

Table 2. Comparison of Expressiveness between the 2.7 and 2.6 versions, with the RequestExample MAS. The expressiveness ratio is calculated as the ratio of the numbers of elements. The number of elements is the sum of entities and relationships.

# Providers   ML Version   # Entities   # Relations   # Elements   Expressiveness Ratio
3             2.7          52           45            97           1.19 (+19%)
              2.6          58           57            115
10            2.7          101          94            195          1.41 (+41%)
              2.6          128          148           276
100           2.7          731          724           1455         1.61 (+61%)
              2.6          1028         1318          2346
∞             2.7          52 + 7n      45 + 7n       97 + 14n     1.64 (+64%)
              2.6          58 + 10n     57 + 13n      115 + 23n
As indicated in Table 2, the 2.7 version is 19% more expressive than the 2.6 version for the selected example. The mentioned increment considers the MAS with three providers. However, this increment is greater when considering higher numbers of providers. The increments are 41%, 61% and 64% for ten, one hundred and an infinite number of providers, respectively. For the infinite-number measurement, n represents the number of additional providers beyond the three initial providers.
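The ratios in Table 2 can be checked directly from the element counts; the following short sketch reproduces that arithmetic, including the limit for an unbounded number of providers (only the counts from Table 2 are used).

```python
# Element counts taken from Table 2 (elements = entities + relationships).
counts = {3: (97, 115), 10: (195, 276), 100: (1455, 2346)}  # providers -> (v2.7, v2.6)

for providers, (new, old) in counts.items():
    # Equation 5: expressiveness(2.7) / expressiveness(2.6) = elements(2.6) / elements(2.7).
    print(f"{providers} providers: ratio = {old / new:.3f}")

# For an unbounded number of providers, (115 + 23n) / (97 + 14n) tends to 23/14.
print(f"limit: {23 / 14:.3f}")
# The printed values (1.186, 1.415, 1.612 and 1.643) correspond to the
# 1.19, 1.41, 1.61 and 1.64 reported in Table 2.
```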
5 Related Work
There are several evaluation frameworks[10, 35, 36, 33, 38, 18] for Multi-agent Systems (MAS). The innovation of our evaluation framework over those works is the quantitative measurement of the MAS MLs, instead of qualitative evaluations. Another difference is the following: those frameworks can evaluate only MAS MLs, whereas the presented framework can evaluate several kinds of MLs, not only MAS MLs. However, the presented framework can only measure the metamodel part, not the notation or the ontology. Some works are related to the presented availability metric. In fact, several works also test the MLs for some specific domain by consulting whether several
concepts are present in the ML. For instance, Mulyar[25] measures clinical computer-interpretable guideline MLs. The measured value is the degree of support of 43 control-flow patterns. The applications of the presented availability metric are similar to those works. The difference is the following: the presented work defines the metric over the metamodel for the general case, whereas those works can be considered applications of the availability metric to particular domains. The expressiveness of MLs is considered a positive property in the MAS literature. For instance, Shehory[33] enumerates the desirable agent-oriented software engineering properties, and expressiveness is included. In addition, Sturm[35] also considers expressiveness for evaluating agent-oriented MLs. Nevertheless, those authors do not provide any metric for measuring or comparing expressiveness. On the contrary, this work provides a metric for measuring and comparing expressiveness. Furthermore, there are several works related to the evaluation of MLs. For instance, Opdahl[28] uses the Bunge-Wand-Weber (BWW) model to analyse and evaluate the Unified Modeling Language (UML) for representing concrete problem domains, with an ontological approach. That work considers the Wand and Weber ontological discrepancies, such as construct overload, redundancy, excess or deficit. In the same line, the OPEN Modeling Language (OML)[27] is also evaluated in terms of the BWW model. On the other hand, the presented evaluation framework considers the concepts clearly stated in the MAS literature in order to evaluate the appropriateness of a ML for a problem domain. The presented evaluation framework can be complemented with Opdahl's work in some cases. Other works related to ML evaluation are the following. For instance, Cahill[6] measures the four most relevant metamodels for processes, from an industry perspective, to evaluate and compare the corresponding MLs. In addition, List[22] evaluates conceptual Business Process Modelling Languages (BPMLs). The evaluation of the BPMLs is carried out by comparison with a generic BPML metamodel. Lastly, Strahonja[34] provides some attributes of workflow metamodels, such as traceability, applicability, generality and maturity. Finally, there are works focused on the evaluation of metamodels. For instance, Ma[24] uses object-oriented metrics for measuring several versions of the UML metamodel. Then, Ma[23] evaluates the UML metamodel using taxonomic patterns on meta-classes. In addition, Fuentes[15] found 450 errors in the UML 1.5 metamodel defined by the OMG. The number of errors can be used as a measure of metamodels. Kargl[19], in turn, defines a metric for measuring the explicitness of modeling concepts in metamodels.
6 Conclusions and Future Work
In conclusion, an evaluation framework for MLs has been presented. The goal of this framework is to satisfy several needs arising from the variety of Multi-agent System (MAS) MLs. These needs are the following. Firstly, the framework provides a mechanism for selecting the appropriate ML for a particular problem domain. Secondly,
the framework assists the designer in selecting a suitable subset of a ML for a specific kind of problem domain. Finally, the framework helps to measure the improvements in any ML. Some issues are left for future work. At this moment, the measurement of the metamodels with the presented metrics is not computerised yet. This computerisation is left for future work. Furthermore, in the future, a complete benchmark can be defined for the availability and specificity metrics. This benchmark can be a battery of MASs. In addition, a benchmark can be defined for the expressiveness metric for MAS. A ML can be considered as the unit ML; the expressiveness of this ML would be the unit. Then, the expressiveness of a MAS ML can be measured by comparison with the unit ML. Finally, a further evaluation of the presented framework is left for future work. A comparison of the presented measurement values with human evaluations can provide a realistic evaluation of the presented metrics.

Acknowledgements. This work has been supported by the project "Methods and tools for agent-based modelling" supported by the Spanish Council for Science and Technology with grant TIN2005-08501-C03-03, and by the grant for Research Group 910494 by the Region of Madrid (Comunidad de Madrid) and the Universidad Complutense de Madrid.
References

1. INGENIAS Development Kit. http://ingenias.sourceforge.net/, available on March 6, 2008.
2. B. Bauer, J.P. Müller, and J. Odell. Agent UML: A Formalism for Specifying Multiagent Interaction. Agent-Oriented Software Engineering, 1957:91–103, 2001.
3. C. Bernon, M.P. Gleizes, S. Peyruqueou, and G. Picard. ADELFE: A Methodology for Adaptive Multi-agent Systems Engineering. Engineering Societies in the Agents World III: Third International Workshop, ESAW 2002, Madrid, Spain, September 16-17, 2002: Revised Papers, 2003.
4. B. Boehm, B. Clark, E. Horowitz, C. Westland, R. Madachy, and R. Selby. Cost models for future software life cycle processes: COCOMO 2.0. Annals of Software Engineering, 1(1):57–94, 1995.
5. P. Bresciani, A. Perini, P. Giorgini, F. Giunchiglia, and J. Mylopoulos. Tropos: An Agent-Oriented Software Development Methodology. Autonomous Agents and Multi-Agent Systems, 8(3):203–236, 2004.
6. B. Cahill, D. Carrington, B. Song, and P. Strooper. An Industry-Based Evaluation of Process Modeling Techniques. Lecture Notes in Computer Science, 4257:110, 2006.
7. A. Chella, M. Cossentino, L. Sabatucci, and V. Seidita. The PASSI and Agile PASSI MAS meta-models.
8. A. Chella, M. Cossentino, L. Sabatucci, and V. Seidita. Agile PASSI: An Agile Process for Designing Agents. International Journal of Computer Systems Science & Engineering. Special issue on "Software Engineering for Multi-Agent Systems". May, 2006.
9. M. Cossentino, S. Gaglio, L. Sabatucci, and V. Seidita. The PASSI and Agile PASSI MAS Meta-models Compared with a Unifying Proposal. Multi-agent Systems and Applications IV: 4th International Central and Eastern European Conference on Multi-Agent Systems, CEEMAS 2005, Budapest, Hungary, September 15-17, 2005: Proceedings, 2005.
10. K.H. Dam and M. Winikoff. Comparing Agent-Oriented Methodologies. Agent-Oriented Information Systems: 5th International Bi-conference Workshop, AOIS 2003, Melbourne, Australia, July 14, 2003 and Chicago, IL, USA, October 13, 2003: Revised Selected Papers, 2004.
11. S. DeLoach, M.F. Wood, and C.H. Sparkman. Multiagent Systems Engineering. International Journal of Software Engineering and Knowledge Engineering, 11(3):231–258, 2001.
12. S.A. DeLoach. Multiagent systems engineering of organization-based multiagent systems. ACM SIGSOFT Software Engineering Notes, 30(4):1–7, 2005.
13. Y. Demazeau, M. Occello, and C. Baeijs. Systems Development as Societies of Agents. Knowledge Engineering and Agent Technology. IOS Press, Amsterdam, 2000.
14. J. Ferber and O. Gutknecht. Aalaadin: a meta-model for the analysis and design of organizations in multi-agent systems. In Y. Demazeau, editor, Third International Conference on Multi-Agent Systems, Paris, 1998.
15. J.M. Fuentes, V. Quintana, J. Llorens, G. Génova, and R. Prieto-Díaz. Errors in the UML Metamodel? ACM SIGSOFT Software Engineering Notes, 28(6), November 2003.
16. Iván García-Magariño, José R. Pérez Agüera, and Jorge J. Gómez-Sanz. Reaching Consensus in a Multi-agent System. In 6th International Workshop on Practical Applications on Agents and Multi-agent Systems, IWPAAMS'07, pages 349–358, 2007. November 12-13, 2007, Salamanca, Spain.
17. C.A. Iglesias, M. Garijo, J.C. Gonzalez, and J.R. Velasco. Analysis and Design of Multiagent Systems Using MAS-CommonKADS. Intelligent Agents IV: Agent Theories, Architectures, and Languages: 4th International Workshop, ATAL'97, Providence, Rhode Island, USA, July 24-26, 1997: Proceedings, 1998.
18. Q.N.N. Tran, G. Low, and M.A. Williams. A Preliminary Comparative Feature Analysis of Multi-agent Systems Development Methodologies. Agent-Oriented Information Systems II: 6th International Bi-conference Workshop, AOIS 2004, Riga, Latvia, June 8, 2004 and New York, NY, USA, July 20, 2004: Revised Selected Papers, 2005.
19. H. Kargl, M. Strommer, and M. Wimmer. Measuring the Explicitness of Modeling Concepts in Metamodels. ACM/IEEE 9th International Conference on Model Driven Engineering Languages and Systems (MoDELS/UML 2006), Workshop on Model Size Metrics, Genova, Italy, October 2006.
20. J. Kingston and A. Macintosh. Knowledge management through multi-perspective modelling: representing and distributing organizational memory. Knowledge-Based Systems, 13(2-3):121–131, 2000.
21. J. Lind. Iterative Software Engineering for Multiagent Systems: The Massive Method. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2001.
22. B. List and B. Korherr. An evaluation of conceptual business process modelling languages. Proceedings of the 2006 ACM Symposium on Applied Computing, pages 1532–1539, 2006.
23. H. Ma, Z. Ji, W. Shao, and L. Zhang. Towards the UML Evaluation Using Taxonomic Patterns on Meta-Classes. Proceedings of the Fifth International Conference on Quality Software, pages 37–44, 2005.
24. H. Ma, W. Shao, and L. Zhang. Applying OO Metrics to Assess the UML Meta-Models. Proc. of the 7th International Conference of the UML (UML'04), Springer-Verlag, LNCS, 3273:12–26, 2004.
25. N. Mulyar, W.M.P. van der Aalst, and M. Peleg. A Pattern-based Analysis of Clinical Computer-interpretable Guideline Modeling Languages. Journal of the American Medical Informatics Association, 14(6):781, 2007.
26. A. Oomes. Organization awareness in crisis management. Proc. ISCRAM2004, pages 63–68, 2004.
27. A.L. Opdahl and B. Henderson-Sellers. Grounding the OML metamodel in ontology. The Journal of Systems & Software, 57(2):119–143, 2001.
28. A.L. Opdahl and B. Henderson-Sellers. Ontological Evaluation of the UML Using the Bunge–Wand–Weber Model. Software and Systems Modeling, 1(1):43–67, 2002.
29. L. Padgham and M. Winikoff. Prometheus: A Methodology for Developing Intelligent Agents. Proceedings of the Third International Workshop on Agent-Oriented Software Engineering, at AAMAS, 2002.
30. J. Pavón and J. Gómez-Sanz. Agent Oriented Software Engineering with INGENIAS. Multi-Agent Systems and Applications III, 2691:394–403, 2003.
31. A.S. Rao and M.P. Georgeff. BDI Agents: From Theory to Practice. Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), pages 312–319, 1995.
32. J. Schraagen, A. Eikelboom, and G. te Brake. Experimental evaluation of a critical thinking tool to support decision making in crisis situations. Proceedings of the 2nd International Conference on Information Systems for Crisis Response and Management, Brussels, Belgium, 18–20 April 2005, 2005.
33. O. Shehory and A. Sturm. Evaluation of modeling techniques for agent-based systems. Proceedings of the Fifth International Conference on Autonomous Agents, pages 624–631, 2001.
34. V. Strahonja. The Evaluation Criteria of Workflow Metamodels. Information Technology Interfaces, 2007. ITI 2007. 29th International Conference on, pages 553–558, 2007.
35. A. Sturm and O. Shehory. A Framework for Evaluating Agent-Oriented Methodologies. Agent-Oriented Information Systems: 5th International Bi-conference Workshop, AOIS 2003, Melbourne, Australia, July 14, 2003 and Chicago, IL, USA, October 13, 2003: Revised Selected Papers, 2004.
36. J. Sudeikat, L. Braubach, A. Pokahr, and W. Lamersdorf. Evaluation of Agent-Oriented Software Methodologies - Examination of the Gap Between Modeling and Platform. Agent-Oriented Software Engineering V: 5th International Workshop, AOSE 2004, New York, NY, USA, July 19, 2004: Revised Selected Papers, 2005.
37. A. Susi, A. Perini, and J. Mylopoulos. The Tropos Metamodel and its Use. Informatica, 29(4):401–408, 2005.
38. Q.N.N. Tran and G.C. Low. Comparison of Ten Agent-Oriented Methodologies. Agent-Oriented Methodologies, 2005.
39. É. Vépa, J. Bézivin, H. Brunelière, and F. Jouault. Measuring Model Repositories.
40. F. Weil and A. Neczwid. Summary of the 2006 Model Size Metrics Workshop. Lecture Notes in Computer Science, 4364:205, 2007.
41. M. Wooldridge, N.R. Jennings, and D. Kinny. The Gaia Methodology for Agent-Oriented Analysis and Design. Autonomous Agents and Multi-Agent Systems, 3(3):285–312, 2000.