Towards an AOSE: Game Development Methodology

Rula Al-Azawi1, Aladdin Ayesh1, Ian Kenny1, and Khalfan Abdullah AL-Masruri2

1 DMU, UK
[email protected], [email protected], [email protected]
2 Higher College of Technology, Muscat, Oman
[email protected]

Abstract. Over the last decade, many methodologies for developing agent-based systems have been developed; however, no complete evaluation frameworks have been provided. Agent Oriented Software Engineering (AOSE) methodologies enhance the ability of software engineering to develop complex applications such as games, yet it can be difficult for researchers to select an AOSE methodology suitable for a specific application. In this paper a new framework for evaluating AOSE methodologies, combining qualitative and quantitative evaluations, is introduced. The framework assists researchers in selecting a preferable AOSE methodology to use in a game development methodology. Furthermore, the results from this evaluation framework can be used to determine the existing gaps in each methodology.

1 Introduction

Within the last few years, with the increase in complexity of projects associated with software engineering, many AOSE methodologies have been proposed [1]. Nowadays, intelligent agent-based systems are applied to many domains including robotics, networks, security, traffic control, and commerce. This paper focuses on the application of AOSE to the game domain.

In the evaluation framework literature, the majority of researchers [2][3] have used qualitative evaluation, which depends on the author's viewpoint or on questionnaires. Furthermore, some of those evaluations were presented by the same author as the methodology in question, which makes them subjective. The framework presented in this paper is based on two kinds of criteria: qualitative criteria adopted from existing frameworks, and new quantitative evaluation criteria. The results derived from this framework guided us in selecting the most suitable methodology for the game development domain.

The paper is structured in the following manner: Section 2 presents methodologies in AOSE and explains why we selected MaSE and Tropos for evaluation in our framework. Section 3 presents an overview of game development. Section 4 presents common evaluation frameworks introduced by other authors.


Section 5 presents our own framework, in which both the qualitative and quantitative evaluations are explained in detail. In Section 6, a critical evaluation of our framework is presented and compared with the results of other authors, together with an analysis of the weaknesses in AOSE methodologies. Section 7 contains the conclusion, followed by future work in Section 8.

2 Choosing AOSE Methodologies

There are several methodologies in AOSE, each with its own life cycle. However, some of them are precise only for analysis and design, such as Gaia, while others cover the complete life cycle, such as Tropos, MaSE, and Prometheus. In this paper we compare these well-known methodologies. They were selected based on the following criteria: a) they are described in the most detail and cover the complete life cycle (most existing AOSE methodologies focus only on analysis and design, whilst MaSE, Prometheus, and Tropos have a full life cycle); b) they have software engineering roots; c) they are perceived as significant by the agent community [4]. Based on these criteria, we decided to take into account only the MaSE [5][6] and Tropos [7][8] methodologies (recent references are available for more details) for evaluation in our own evaluation framework.

3 Overview of Game Development

Game development has evolved into large projects employing hundreds of people, with development times measured in years. Unlike most other software application domains, game development presents unique challenges that stem from the multiple disciplines that contribute to games. Even now, some game development companies still use the waterfall methodology, albeit with modifications, and a major criticism leveled against the game development industry is that most companies adopt a poor methodology for game creation [9]. The relationship between games and AOSE is clear, given that software agents or intelligent agents are used as virtual players or actors in many computer games and simulations, and the agent development process is very close to the game development process [10]. The goal of evaluating AOSE methodologies is to find the most convincing methodology that could be adopted, with modifications, for game development.

4 Common Evaluation Frameworks

The majority of the existing work found in the literature is based on qualitative evaluations using techniques such as feature-based analysis, surveys, case studies, and field experiments. The qualitative evaluation in [11] deals with four main criteria: concepts and properties; modeling techniques; process; and pragmatics. Its evaluation technique is based on feature-based analysis: each criterion contains different attributes, and Yes or No is used to rate each methodology on them. Criteria with the same definitions have also been used in [12]. Another interesting evaluation of methodologies is provided by [13], which attempts to cover ten of the most important methodologies using different criteria; instead of Yes or No as with the previous authors, 'H' is used for high, 'M' for medium, and 'L' for low.

Some authors, such as [14], have presented a quantitative evaluation, which evaluated the complexity of the diagrams of MESSAGE and Prometheus using case study methods to measure magnitude and diversity and to determine the final complexity; increasing magnitude and diversity increases the complexity of the model. Another effort, [2], measures the complexity of a methodology using different attributes. Lower software complexity provides advantages such as lower development and maintenance time and cost, fewer functional errors, and increased re-usability; this is why complexity metrics are commonly used in software metrics research for predicting software qualities.

Studying the evaluation frameworks proposed so far suggests that: a) there is no fully appropriate framework for evaluating methodologies; b) existing frameworks are mostly based on feature-based analysis or simple case study methods; and c) only a small number of frameworks are based on a quantitative approach. Therefore, a framework containing both quantitative and qualitative methods and using feature-based analysis, survey, and case study methods could be really helpful.

5 Game Development Methodology Framework (GDMF)

The quantitative evaluation is an important part of the evaluation process because it produces fixed results for comparison purposes. The majority of previous research has focused on the qualitative approach to comparing methodologies. Two difficulties were encountered during the literature review regarding quantitative evaluation: a) the majority of evaluations, such as [14][15], compared two methodologies using a specific case study to determine their results; b) no standard attributes or metrics have been used for evaluation. Our framework uses the following criteria:

• Select the common criteria. For the qualitative evaluation, we adopted [11], because its evaluation precisely covers the qualitative criteria and uses feature-based analysis methods, and [13], since its evaluation is based on survey methods. The quantitative evaluation criteria are divided into three sub-criteria: first, transferring the existing qualitative attribute values into quantitative numbers; second, applying meta-model metrics and use case evaluation methods; and finally, converting existing diagrams, such as use case diagrams, into numerical results.


• Transfer the qualitative attributes into quantitative values, then convert those values using a proposed common scale for each metric, as shown in the following:
  – Yes/No maps to 0/1, and to 0/10 on the common scale.
  – None/Low/Medium/High maps to 0/1/2/3, and to 0/3/7/10 on the common scale.
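The sketch below illustrates this conversion step; it is our own illustration in Python, with hypothetical criterion names, not tooling from the framework:

```python
# Illustrative sketch of the GDMF conversion step: qualitative ratings are
# mapped to ordinal numbers, then onto a common 0-10 scale, so that Yes/No
# criteria (as in [11]) and None/Low/Medium/High criteria (as in [13])
# become directly comparable. The criterion names below are hypothetical.

# rating -> (ordinal value, common-scale value)
YES_NO = {"No": (0, 0), "Yes": (1, 10)}
NLMH = {"None": (0, 0), "Low": (1, 3), "Medium": (2, 7), "High": (3, 10)}

def to_common_scale(rating: str) -> int:
    """Return the 0-10 common-scale score for a qualitative rating."""
    for table in (YES_NO, NLMH):
        if rating in table:
            return table[rating][1]
    raise ValueError(f"unknown rating: {rating!r}")

# Example: scoring one methodology against a handful of criteria.
ratings = {"Autonomy": "Yes", "Tools Available": "No", "Usability": "Medium"}
scores = {criterion: to_common_scale(r) for criterion, r in ratings.items()}
print(scores)                              # {'Autonomy': 10, 'Tools Available': 0, 'Usability': 7}
print(sum(scores.values()) / len(scores))  # mean on the common scale
```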

5.1 Converting the Qualitative Results to Quantitative Evaluations

In this section, we adopted [11] and [13] by converting their criteria to numerical results to facilitate the comparison between MaSE and Tropos.

Table 1 Comparison of MaSE and Tropos using criteria adopted from [11] (* marks a score whose comment suggests an enhancement)

Criteria               | Sub-criteria         | Tropos | MaSE | Comment
-----------------------|----------------------|--------|------|-------------------------------
Concept and Properties | Autonomy             | 10     | 10   | achieve goals and soft goals
                       | Mental Mechanism     | 10     | 10   |
                       | Reactivity           | 10     | 10   |
                       | Pro-activeness       | 10     | 10   |
                       | Adaptation           | 10     | 0*   | Needs to add iteration
                       | Concurrency          | 10     | 10   |
                       | Agent interaction    | 10     | 0*   | Needs to add agent interaction
                       | Collaboration        | 10     | 10   |
                       | Teamwork             | 10     | 10   |
                       | Agent-oriented       | 10     | 10   |
Modeling techniques    | Expressiveness       | 10     | 10   |
                       | Modularity           | 10     | 0    |
                       | Refinement           | 10     | 10   |
                       | Traceability         | 10     | 10   |
                       | Accessibility        | 10     | 10   |
Process                | Life-cycle Coverage  | 10     | 10   |
                       | Architecture Design  | 10     | 10   |
                       | Implementation Tools | 10     | 10   |
                       | Deployment           | 0      | 10   |
                       | Management           | 0      | 0*   | Needs to add management
                       | Requirement capture  | 10     | 10   |
Pragmatics             | Tools Available      | 0      | 10   |
                       | Modeling Suitability | 0      | 10   |
                       | Domain Applicability | 10     | 10   |
                       | Scalability          | 10     | 10   |
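The Table 1 means quoted in Section 6 can be reproduced from this table. A minimal sketch follows, assuming (our reading; the paper does not state it explicitly) that each 0-10 common-scale cell is mapped back to 0/1 before averaging:

```python
# Sketch reproducing the Table 1 means quoted in Section 6, under our
# reconstruction of the table: Yes/No cells on the 0-10 common scale are
# mapped back to 0/1 and then averaged over the 25 sub-criteria.

tropos_col = [10]*10 + [10]*5 + [10, 10, 10, 0, 0, 10] + [0, 0, 10, 10]
mase_col = ([10, 10, 10, 10, 0, 10, 0, 10, 10, 10]   # Concept and Properties
            + [10, 0, 10, 10, 10]                    # Modeling techniques
            + [10, 10, 10, 10, 0, 10]                # Process
            + [10, 10, 10, 10])                      # Pragmatics

def mean01(col):
    """Mean after mapping the 0-10 common scale back to 0/1."""
    return sum(v / 10 for v in col) / len(col)

print(round(mean01(tropos_col), 2))  # 0.84
print(round(mean01(mase_col), 2))    # 0.84 (0.96 after the three starred fixes)
```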


Table 2 Comparison regarding steps and usability of Tropos and MaSE, adopted from [13]

Step                                            | Tropos | MaSE
------------------------------------------------|--------|-----
Identify system goal                            | 10     | 10
Identify system tasks/behaviors                 | 10     | 10
Specify use case scenario                       | 10     | 0
Identify roles                                  | 10     | 0
Identify agent classes                          | 10     | 10
Model domain conceptualization                  | 0      | 0
Specify acquaintance between agent classes      | 10     | 7
Define interaction protocol                     | 10     | 10
Define content of exchange message              | 7      | 7
Specify agent architecture                      | 7      | 0
Define agent mental                             | 0      | 7
Define agent behavior interface                 | 0      | 0
Specify system architecture                     | 0      | 0
Specify organizational structure                | 0      | 10
Model MAS environment                           | 0      | 10
Specify agent environment interaction mechanism | 0      | 0
Specify agent inheritance                       | 0      | 0
Instantiate agent classes                       | 10     | 0
Specify instance agent deployment               | 10     | 0

Table 1 presents the summarized lists provided by [11] for each of the four main criteria (i.e. concept and properties, modeling techniques, process, and pragmatics). From the criteria defined in [13], we selected the criteria of steps and usability; around twenty steps were used to compare Tropos and MaSE, as shown in Table 2.

5.2 Evaluating the Methodologies by Meta-Model Metrics

This part of our framework deals with meta-model diagrams; using meta-modeling techniques to define the abstract syntax of MAS modeling languages (MLs) is common practice today. We used a set of metrics to measure the meta-models. These metrics quantify two features of a language: availability and specificity.

• The availability metric, shown in equation (1), measures how appropriate an ML is for modeling a particular problem domain; a higher value is better for the problem domain. Here ncmm indicates the number of necessary concepts covered by the meta-model and mc indicates the number of necessary concepts missing from it, so the total number of necessary concepts is ncmm + mc.

  Availability = ncmm / (ncmm + mc)    (1)

• The specificity metric, shown in equation (2), measures the percentage of the modeling concepts that are actually used for modeling a particular problem domain. If the value of this metric is low, it means there are many ML concepts which are not being used for modeling the problem domain [16]. The term cmm represents the number of all the concepts in the meta-model.

  Specificity = ncmm / cmm    (2)
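To make the two metrics concrete, here is a minimal sketch of computing them for a single case study; the concept sets and names below are invented placeholders, not data from [16]:

```python
# Sketch of the availability and specificity metrics of equations (1) and
# (2), computed from concept sets. The domain and meta-model concept sets
# are hypothetical examples, not taken from [16].

def availability(ncmm: int, mc: int) -> float:
    """Eq. (1): covered concepts / (covered + missing), as a percentage."""
    return 100.0 * ncmm / (ncmm + mc)

def specificity(ncmm: int, cmm: int) -> float:
    """Eq. (2): covered concepts / all meta-model concepts, as a percentage."""
    return 100.0 * ncmm / cmm

# Hypothetical case study: the domain needs 9 concepts; the ML's
# meta-model defines 12 concepts, 6 of which match the domain's needs.
needed = {"goal", "role", "agent", "plan", "belief", "task",
          "protocol", "resource", "scene"}
metamodel = {"goal", "role", "agent", "plan", "task", "protocol",
             "capability", "event", "service", "norm", "organisation",
             "message"}

covered = needed & metamodel   # ncmm: necessary concepts the ML covers
missing = needed - metamodel   # mc: necessary concepts the ML lacks
print(availability(len(covered), len(missing)))   # 6/(6+3) -> 66.7
print(specificity(len(covered), len(metamodel)))  # 6/12    -> 50.0
```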

Table 3 was created by [16], who used four case studies and selected six methodologies to find the availability and specificity for the purposes of comparison.

Table 3 Availability and specificity of Tropos and MaSE, adopted from [16]

Case study        | Tropos Availability | Tropos Specificity | MaSE Availability | MaSE Specificity
------------------|---------------------|--------------------|-------------------|------------------
Cinema            | 75.0                | 75.0               | 77.7              | 60.9
Request example   | 91.7                | 45.8               | 100.0             | 52.2
Delphi            | 72.2                | 66.7               | 82.4              | 60.9
Crisis Management | 63.6                | 58.3               | 77.7              | 60.9
Average           | 75.8                | 61.5               | 84.5              | 58.7

5.3 Evaluation of the Methodologies by Diagrams

The majority of AOSE methodologies deliver their phase outputs as diagrams or tables, especially during the analysis and design phases, for example UML diagrams and agent diagrams. An important point to consider in this evaluation is that we worked at an abstract level of the methodologies, which made it difficult to find artifacts that can be quantified. It is also based on case studies, so the results depend on the case study itself [17]; in some case studies MaSE may obtain higher results than Tropos, and vice versa. According to the results of Basseda et al. [17], who compared the dependency complexity of the models of three AOSE methodologies by means of their diagrams, MaSE has greater dependency complexity than MESSAGE and Prometheus. This means that MaSE has more dependencies between its models; thus, using MaSE needs more time and effort. Nevertheless, MaSE is suggested for critical systems, in which detailed designs are not considered overhead but are essential and worthwhile.
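As a toy illustration of the kind of dependency counting behind such complexity comparisons (our simplification, not the actual metric of Basseda et al. [17]), a methodology's work products can be modeled as a directed dependency graph whose transitive dependencies are counted:

```python
# Illustrative sketch: models produced by a methodology are nodes in a
# directed graph (model -> models whose output it consumes), and a crude
# complexity score counts how many models each one depends on, directly
# or indirectly. The model names loosely follow MaSE's analysis and
# design models; the edges are a simplified assumption, not MaSE's
# actual structure.

deps = {
    "goal_hierarchy": [],
    "use_cases": [],
    "sequence_diagrams": ["use_cases"],
    "roles_model": ["goal_hierarchy", "sequence_diagrams"],
    "agent_classes": ["roles_model"],
    "conversations": ["agent_classes", "sequence_diagrams"],
    "deployment": ["agent_classes", "conversations"],
}

def transitive_deps(model, deps, seen=None):
    """All models (direct and indirect) that `model` depends on."""
    seen = set() if seen is None else seen
    for d in deps[model]:
        if d not in seen:
            seen.add(d)
            transitive_deps(d, deps, seen)
    return seen

# Total dependency count: the higher it is, the more rework a change to
# an early model can trigger in later models.
score = sum(len(transitive_deps(m, deps)) for m in deps)
print(score)  # 19 for this toy graph
```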

6 Critical Evaluation

In this section, the proposed framework is applied to MaSE and Tropos as two case studies to demonstrate how the framework can be used to evaluate methodologies.

Towards an AOSE: Game Development Methodology

499

There are only slight differences between the comparison results. Both [18] and [4] evaluated MaSE as better than Tropos; moreover, making the changes we suggest for MaSE would increase its efficacy further. In [19], the author compared two popular reference works, [4] and [18], using profile analysis, a multivariate statistical method; the majority of those results were similar to our own evaluation results.

From Table 1 in Section 5.1, we calculated a mean for Tropos of 0.84 and obtained the same result for MaSE; if the enhancements suggested in the comment column of Table 1 were applied, the mean for MaSE would rise to 0.96. From Table 2, the mean for MaSE is 1.7894, which is greater than that for Tropos at 1.2631. As Table 3 shows, MaSE obtained a higher percentage for availability than Tropos, but Tropos had a higher percentage for specificity. Availability is the more important parameter for game development, because game development implements modules according to their priority. When we calculated the combined percentage over both specificity and availability, MaSE obtained 71.6 and Tropos 68.65; thus, MaSE was considered better than Tropos with regard to the final results of these metrics.

From the previous measurements, we found that on the majority of the measurements used in our framework, across feature-based analysis, survey, and case study evaluation methods, MaSE obtained higher scores than Tropos.
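The combined totals quoted above can be reproduced with a small sketch; the input numbers are the Table 3 averages, and the equal weighting of availability and specificity is our assumption about how the totals were formed:

```python
# Sketch reproducing the combined availability/specificity totals quoted
# in this section from the Table 3 averages. Equal weighting of the two
# metrics is assumed; the paper does not state the weighting explicitly.

table3_avg = {
    "Tropos": {"availability": 75.8, "specificity": 61.5},
    "MaSE":   {"availability": 84.5, "specificity": 58.7},
}

for method, m in table3_avg.items():
    total = (m["availability"] + m["specificity"]) / 2
    print(method, round(total, 2))
# Tropos -> 68.65, MaSE -> 71.6, matching the totals quoted in the text.
```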

We noticed the following weaknesses in most AOSE methodologies:

• All AOSE methodologies lacked industrial-strength tools and standards. In addition, they did not seem to deal with teamwork, since project management and time planning were not considered in the AOSE methodologies found in the literature.

• There was a weakness during the implementation phase.

• There were no standard metrics for use in each phase to evaluate the phase output, the complete system, and the most effective methodology for an application.

7 Conclusion

This study focused on comparing different AOSE methodologies from the point of view of the game development domain. The results of the evaluation led us to select MaSE as the methodology to be adopted in a game development methodology, as future work, for three reasons. Firstly, in the preceding evaluation MaSE performed better than Tropos. Secondly, several references, such as [20], have used MaSE to create methodologies for robotics, which is similar to this particular area of research. Finally, MaSE defines the goals in its first stage, and every goal has to be associated with a role, which is an important feature in game development.


8 Future Works

An interesting piece of future work would be to use MaSE together with a software engineering methodology such as agile development in a game development methodology, after the following improvements to enhance MaSE. Firstly, it would be necessary to add iterations to the methodology, in order to solve problems carried over from the previous stage, to add new goals or requirements to the system, and to obtain module prototypes. Secondly, it would be necessary to utilize project management from software engineering. Finally, it would be necessary to pay more attention to the implementation and testing phases.

References

1. Akbari, O.: A survey of agent-oriented software engineering paradigm: Towards its industrial acceptance. Journal of Computer Engineering Research 1, 14–28 (2010)
2. Basseda, R., Alinaghi, T., Ghoroghi, C.: A dependency based framework for the evaluation of agent oriented methodologies. In: IEEE International Conference on System of Systems Engineering (SoSE 2009), pp. 1–9 (June 2009)
3. Dam, K.: Evaluating and comparing agent-oriented software engineering methodologies. PhD thesis, School of Computer Science and Information Technology, RMIT University, Australia (2003)
4. Dam, K.H., Winikoff, M.: Comparing agent-oriented methodologies. In: Giorgini, P., Henderson-Sellers, B., Winikoff, M. (eds.) AOIS 2003. LNCS (LNAI), vol. 3030, pp. 78–93. Springer, Heidelberg (2004)
5. DeLoach, S.: Multiagent systems engineering of organization-based multiagent systems. In: SELMAS 2005: Proceedings of the 4th International Workshop on Software Engineering for Large-Scale Multi-Agent Systems, pp. 1–7. ACM, New York (2005)
6. DeLoach, S.: Analysis and design using MaSE and agentTool. In: Midwest Artificial Intelligence and Cognitive Science, pp. 1–7. Miami University Press (2001)
7. Bresciani, P., Perini, A., Giorgini, P., Giunchiglia, F., Mylopoulos, J.: Tropos: An agent-oriented software development methodology. Autonomous Agents and Multi-Agent Systems 8, 203–236 (2004)
8. Mouratidis, H.: Secure Tropos: An agent oriented software engineering methodology for the development of health and social care information systems. International Journal of Computer Science and Security 3(3), 241–271 (2009)
9. Kanode, C., Haddad, H.: Software engineering challenges in game development. In: Sixth International Conference on Information Technology: New Generations, pp. 260–265. IEEE Computer Society (2009)
10. Gomez-Rodriguez, A., Gonzalez-Moreno, J.C., Ramos-Valcarcel, D., Vazquez-Lopez, L.: Modeling serious games using AOSE methodologies. In: 11th International Conference on Intelligent Systems Design and Applications (ISDA), pp. 53–58 (2011)
11. Lin, C.-E., Kavi, K.M., Sheldon, F.T., Potok, T.E.: A methodology to evaluate agent oriented software engineering techniques. In: 40th Annual Hawaii International Conference on System Sciences (HICSS 2007), Island of Hawaii, USA, pp. 1–20. IEEE Computer Society (2007)
12. Akbari, O., Faraahi, A.: Evaluation framework for agent-oriented methodologies. In: Proceedings of World Academy of Science, Engineering and Technology (WCSET), Paris, France, vol. 35, pp. 419–424 (2008)


13. Tran, Q., Graham, C.: Comparison of ten agent-oriented methodologies, ch. XI, p. 341. Idea Group Inc. (2005)
14. Saremi, A., Esmaeili, M., Rahnama, M.: Evaluation complexity problem in agent based software development methodology. In: Second International Conference on Industrial and Information Systems (ICIIS 2007), pp. 577–584 (August 2007)
15. Cernuzzi, L.: On the evaluation of agent oriented modeling methods. In: The OOPSLA Workshop on Agent-Oriented Methodologies, Seattle (2002)
16. García-Magariño, I., Gómez-Sanz, J.J., Fuentes-Fernández, R.: An evaluation framework for MAS modeling languages based on metamodel metrics. In: Luck, M., Gomez-Sanz, J.J. (eds.) AOSE 2008. LNCS, vol. 5386, pp. 101–115. Springer, Heidelberg (2009)
17. Basseda, R., Taghiyareh, F., Alinaghi, T., Ghoroghi, C., Moallem, A.: A framework for estimation of complexity in agent oriented methodologies. In: IEEE/ACS International Conference on Computer Systems and Applications (AICCSA 2009), pp. 645–652 (May 2009)
18. Sturm, A., Shehory, O.: A framework for evaluating agent-oriented methodologies. In: Giorgini, P., Henderson-Sellers, B., Winikoff, M. (eds.) AOIS 2003. LNCS (LNAI), vol. 3030, pp. 94–109. Springer, Heidelberg (2004)
19. Cernuzzi, L.: Profile based comparative analysis for AOSE methodologies evaluation. In: SAC 2008: Proceedings of the 2008 ACM Symposium on Applied Computing, pp. 60–65 (2008)
20. DeLoach, S., Matson, E.T., Li, Y.: Applying agent oriented software engineering to cooperative robotics. In: The 15th International FLAIRS Conference (FLAIRS 2002), Pensacola, Florida, pp. 391–396 (May 2002)