
Testing in Agent Oriented Methodologies

Mailyn Moreno¹, Juan Pavón², and Alejandro Rosete¹

¹ Departamento de Inteligencia Artificial e Infraestructura de Sistemas Informáticos, Instituto Superior Politécnico José Antonio Echeverría, Marianao, 19390, Havana, Cuba
{my,rosete}@ceis.cujae.edu.cu
² Departamento de Ingeniería del Software e Inteligencia Artificial, Universidad Complutense de Madrid, Ciudad Universitaria, 28040, Madrid, Spain
[email protected]

Abstract. Testing is an important activity in software development in order to assure the correctness of software. However, testing is often disregarded in most agent oriented methodologies, mainly because they focus on analysis and design activities, and consider that implementation and testing issues can be addressed with traditional techniques. But the implementation of multi-agent systems has features that make it distinctive from traditional software. This paper presents an overview of testing in agent orientation based on the V-Model, in order to establish the role of testing activities in an agent oriented development lifecycle. It also identifies how different types of testing are covered by previous work and the directions for further work.

Keywords: Test, Testing process in agent orientation, V-Model.

1 Introduction

Testing is an important step in software development in order to assure the correctness of software. Although there is some work on agent oriented testing [1], [2], [3], [4], this activity is often disregarded in most agent oriented methodologies. One reason may be that these methodologies mainly focus on analysis and design, as they consider that implementation and testing can be performed using well established techniques, mainly from object-oriented software engineering. However, relevant features of the agent paradigm, such as autonomy, proactivity, and agent interactions, are not yet covered by those more traditional techniques.

This paper presents an overview of testing in agent orientation based on the V-Model [5], in order to establish the role of testing activities in an agent oriented development lifecycle. The V-Model facilitates the identification of different testing activities and techniques, and provides a framework to review previous work and to identify the directions in which further work is needed.

The paper is structured as follows. Section 2 introduces the general concepts of software testing and the V-Model. Section 3 describes different proposals for testing activities in agent oriented software engineering. Section 4 presents a framework for agent oriented testing, which is used to identify the need for new lines of work, presented in Section 5.

S. Omatu et al. (Eds.): IWANN 2009, Part II, LNCS 5518, pp. 138–145, 2009.
© Springer-Verlag Berlin Heidelberg 2009


2 Software Testing

A classical definition [6] states that "testing is the process of executing a program with the intent of finding errors". Over the years, this view of testing has evolved: software testing is now seen as a complete process that supports the development and maintenance activities. Tests can be derived from requirements and specifications, from design artefacts, or from the source code. Depending on the activities of the software lifecycle, different types of tests can be defined. This is captured by the V-Model [5], shown in Figure 1. The left branch of the V represents the specification flow, and the right branch represents the testing flow, where the software product is tested at different abstraction levels.

Fig. 1. Software development activities and testing levels in the “V-Model” [7]

Information for each test level is usually derived from the related development activity. Indeed, an important piece of advice is to design the tests simultaneously with each development activity, even though the software will not be in an executable form until the implementation phase [8].

The purpose of acceptance testing is to determine whether the final software satisfies the system requirements. System testing is intended to determine whether the assembled system meets its specifications. Integration testing assesses whether the interfaces between modules in a given subsystem have consistent assumptions and communicate correctly; it assumes that the modules themselves work correctly. A program unit is one or more contiguous program statements, with a name that other parts of the software use to call it [9]. A module is a collection of related units that are assembled in a file [9]. The purpose of module testing is to assess individual modules in isolation, including how their component units interact with each other and with their associated data structures. At the lowest level, unit testing assesses the units produced by the implementation phase.
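The distinction between units and modules can be made concrete with a small sketch in Python. All names here are hypothetical and only illustrate the definitions above: a unit is a single named function, a module is a collection of related units with shared data structures.

```python
# A "unit": one named piece of code that other parts of the software call.
def parse_price(text):
    """Convert a price string like '3.50' into an integer number of cents."""
    return round(float(text) * 100)

# A "module": related units assembled together, with associated data structures.
class Order:
    def __init__(self):
        self.items = []

    def add(self, price_text):
        # This unit depends on parse_price, another unit in the same module.
        self.items.append(parse_price(price_text))

    def total(self):
        return sum(self.items)

# Unit test: exercise a single unit in isolation.
assert parse_price("3.50") == 350

# Module test: exercise how the component units interact
# with each other and with the module's data structures.
order = Order()
order.add("3.50")
order.add("1.25")
assert order.total() == 475
```

A unit test of `parse_price` can fail independently of `Order`, while the module test would also catch mismatched assumptions between `add` and `total`.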


2.1 Test Process

This subsection presents the test activities integrated with development, where testing activities begin as soon as development activities begin and are carried out in parallel with the development stages [6], [7].

In the requirements analysis, the main objective of testing is to evaluate the requirements themselves. The test activities in this stage are: write the Testing Criteria for the software system; describe the Support Software needed for testing at each stage; establish the high-level Test Plan that delineates the testing approach; and perform the Requirements Test.

The main testing aim in the architectural design is to validate the mapping between the requirements specification and the design. The test activities in this stage are: Validate the Design; Design the System Test; and prepare for unit and integration testing by choosing the Development Coverage Criteria and designing the Acceptance Test Plan.

In the intermediate and detailed design, the main test objective is to avoid mismatches between interfaces and to make sure that all test materials are ready for testing when the modules are written. The test activities in this stage are: Specify the System Test Cases; Design the Integration and Unit Test Plans; Create the Unit Test Cases; and build the Test Specifications for integration testing.

The main test objective in the implementation is to perform effective and efficient Unit Tests. In the test stage, the activities that have been prepared throughout the development process are performed: Module Test, Integration Test, System Test and Acceptance Test.

3 Testing in Agent-Oriented Methodologies

Agent-oriented methodologies, as they have been proposed so far, mainly focus on the analysis, design and implementation of Multi-Agent Systems (MAS) [10]. Little attention has been paid to how MAS can be tested or debugged [10], [11]. However, many of the tools that support each methodology include features that are relevant to testing: interaction debugging, MAS behaviour debugging, other debugging tools, unit testing frameworks, and other testing frameworks. The methodologies are now analysed against this set of features.

PASSI only includes a simple unit testing framework [12]. This framework assists developers in building a test suite naturally, in a reasonable and incremental way. The consistency of the approach and the binary representation of the results help developers to create test cases and interpret the information. The framework allows the developer to test the agents during development. In particular, as changes are made to the system and the new functionality is tested, previously tested functionality has to be re-tested to ensure that the modifications have not corrupted the system.

The Prometheus Design Tool (PDT) supports interaction debugging [11] by offering a debug agent that monitors the exchanges of messages between agents and checks them against the interaction protocols [11]. Violations of the interaction protocols, such as a failure to receive an expected message or the receipt of an unexpected message, can then be automatically detected and precisely explained. Besides, the PDT has
been extended to incorporate a unit testing framework [13] that performs model based unit testing. The framework provides an overview of the testing process, mechanisms for identifying the order in which the units are to be tested, and mechanisms for generating the input that forms the test cases [13].

The ZEUS toolkit [3] provides a suite of visualization and debugging tools to support MAS development. The ZEUS debugging tools shift the burden of inference from the user to the visualizer. These tools are: a society tool, a report tool, a micro tool, a control tool, a statistics tool and a video tool. Despite all these tools, the ZEUS toolkit cannot solve the following problem: since information is gathered through message requests, it is not possible to determine whether an agent is behaving as expected, because the agent may not respond with information about its state. Hence, the society tool cannot provide information about a specific agent within the organization.

INGENIAS provides basic interaction debugging support through the INGENIAS Agent Framework (IAF) [14]. This support has recently been extended with the incorporation of the ACLAnalyser, whose purpose is to support the analysis of interactions during design [15]. In recent work, INGENIAS also includes MAS behaviour debugging, applying a Knowledge Discovery in Databases (KDD) process oriented towards the analysis of MAS execution [16]. The KDD process has three phases: extraction, pre-processing, and data mining. The outcome of these phases is illustrated using a MAS that simulates a pizza market. The final conclusion is that, thanks to this infrastructure for forensic analysis of the MAS, it is possible to draw conclusions about the behaviour of the MAS, detecting emerging structures or anomalous behaviours.

The MaSE methodology proposes interaction debugging based on model checking to support the automatic verification of multiagent conversations [17].
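The protocol-checking idea behind a debug agent such as PDT's can be sketched as a small state machine that replays observed messages against the allowed transitions of an interaction protocol. The protocol, states, and performative names below are invented for illustration and are not taken from any of the tools above.

```python
# Hypothetical request/response protocol: allowed transitions between
# conversation states, keyed by the observed message performative.
PROTOCOL = {
    ("start", "request"): "waiting",
    ("waiting", "agree"): "working",
    ("waiting", "refuse"): "done",
    ("working", "inform"): "done",
}

def check_conversation(messages):
    """Replay a message sequence against the protocol and explain violations."""
    state = "start"
    for msg in messages:
        nxt = PROTOCOL.get((state, msg))
        if nxt is None:
            # Unexpected message: detected and precisely explained.
            return f"violation: unexpected '{msg}' in state '{state}'"
        state = nxt
    if state != "done":
        # Expected message never arrived.
        return f"violation: conversation ended early in state '{state}'"
    return "ok"

assert check_conversation(["request", "agree", "inform"]) == "ok"
assert "unexpected 'inform'" in check_conversation(["request", "inform"])
assert "ended early" in check_conversation(["request"])
```

Both failure modes mentioned in the text, an unexpected message and a missing expected message, map to the two violation branches.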
The Tropos methodology has an agent testing framework called eCAT [18]. eCAT is a tool that supports deriving test cases semi-automatically. Four test case generation techniques are included in eCAT:
- The goal-oriented test case generation technique generates test case skeletons from goal analysis diagrams. eCAT takes these artefacts as inputs to generate test case skeletons that are aimed at testing goal fulfilment [19].
- With the ontology-based test case generation technique, eCAT can take advantage of agent interaction ontologies in order to automatically produce both valid and invalid test inputs, to provide guidance in the exploration of the input space, and to obtain a test oracle to validate the test outputs [19].
- With the random test case generation technique, eCAT generates random test cases by selecting the communication protocol and randomly generating messages [18].
- The evolutionary mutation technique generates test cases automatically. Intuitively, it uses the mutation score as a fitness measure to guide evolution, under the premise that test cases that are better at killing mutants are also likely to be better at revealing real faults [18].

Other interesting works are presented in [1], [2], where testing agents are proposed following a unit test approach, including a testing framework.

It is not clear that all these frameworks allow testing the proactive and autonomous behaviour of agents, because they focus on testing the actions induced by interactions. This approach ignores the importance of the environment and of assigned goals for testing this kind of behaviour. This is a general limitation of all analysed
proposals. Although [4] explicitly states that agent testing must consider messages, the environment, and learning, the proposed testing approaches do not consider all of these factors. The relationship between internal goals and proactive behaviour is not explicitly considered as a fundamental aspect of agent testing. This is important because proactivity and autonomy are two of the main characteristics of the agent paradigm.
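The random test case generation technique, choosing a protocol and randomly generating messages, can be sketched as follows. The message schema, performatives, and field names are invented for illustration; eCAT's actual generator works over real agent communication protocols.

```python
import random

# Hypothetical message schema for a simple negotiation protocol.
PERFORMATIVES = ["cfp", "propose", "accept", "reject"]

def random_message(rng):
    """Generate one random message conforming to the chosen protocol schema."""
    return {
        "performative": rng.choice(PERFORMATIVES),
        "price": rng.randint(1, 100),
    }

def generate_test_case(rng, length=4):
    """A test case is a random sequence of messages sent to the agent
    under test; a separate oracle must then judge the agent's responses."""
    return [random_message(rng) for _ in range(length)]

rng = random.Random(42)  # fixed seed so the generated suite is reproducible
case = generate_test_case(rng, length=6)
assert len(case) == 6
assert all(m["performative"] in PERFORMATIVES for m in case)
assert all(1 <= m["price"] <= 100 for m in case)
```

Random generation explores the input space cheaply, but, as the text notes, it says nothing about whether the agent's proactive behaviour is being exercised; that requires goal- or environment-driven inputs.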

4 Testing in Agent Oriented Software Development Process

There are many differences between object orientation and agent orientation; for example, test activities have different objectives in some cases. Nevertheless, the V-Model can also be useful for testing in an agent oriented methodology. This section proposes a framework for agent oriented testing. The framework extends the V-Model with characteristics of the agent oriented approach. Some of the proposed activities are not yet developed in agent oriented methodologies, so they arise as open issues that are discussed in the conclusions. Figure 2 illustrates the proposed activities in the context of the testing process in Agent Oriented Software Engineering [6], [7], [8].

4.1 Testing Activities in Requirements Analysis

Evaluating the requirements is the principal test activity associated with Requirements Analysis. The goal-oriented (GO) test case generation technique proposed by Tropos [19] can be adopted as an activity in this stage for Requirements Testing. This technique generates test cases that are focused on testing goal fulfilment. Few agent oriented methodologies model requirements as goals.

The main objective of the test activities is to prepare for the Acceptance Test and, to a lesser extent, for the System Test. The following activities serve this objective: Testing Criteria, Support Software and Testing Plan. These activities are not supported by any agent oriented methodology, but this is not a major problem: they can be adopted easily, because they do not need tool support. The artifacts obtained in these activities help human testers in the testing process.

4.2 Testing Activities in Architectural Design

The main test activity in architectural design is similar to object orientation: to validate the mapping between the requirements specification and the design. The most important test activity in this stage is Validate Design.
In the Validate Design activity it is important to check the correlation between the system goals and the capabilities and roles of the agents. This is a difference with respect to object orientation [8], [20]. The main purpose of these test activities is to prepare for the System Test and, with less emphasis, for the Acceptance Test and the Unit and Integration Tests. The following activities fulfil this purpose: Design System Test, Develop Coverage Criteria and Design Acceptance Test Plan. Agent oriented methodologies do not include these testing activities. However, Design Acceptance Test Plan and Develop Coverage Criteria are activities that can be
performed manually and can be easily adopted. The artifacts obtained in these activities help human testers in the testing process. The organizational structure is important for the Design System Test; this is a difference with respect to object orientation [20]. Design System Test and Validate Design need a support tool, and they require a detailed study before they can be adopted in an agent oriented methodology.

4.3 Testing Activities in the Intermediate and Detailed Design

The main test activities at this level are: to check for mismatches between interfaces and to make sure that all test materials are ready when the modules are written. The main purpose of these test activities is to prepare for the following activities: Module Test, Integration Test, and, with less emphasis, System Test. The following activities fulfil this purpose: Specify System Test Cases, Design Integration and Module Test Plans, Create Module Test Cases and Build Test Specifications for Integration.

The activity Design Integration and Module Test Plans may be easily adopted, since it can be carried out manually. The artifacts obtained in these activities help human testers in the testing process. We must emphasize that two types of entities may be considered as modules: agents, or organizations of agents that work together to fulfil a goal.

Specify System Test Cases can be supported by Evolutionary Mutation (EM), a test case generation technique of Tropos [18], which generates test cases automatically. Create Module Test Cases is not proposed by any agent oriented methodology, and this activity needs tool support. Some frameworks, such as those of Prometheus [13], PASSI [12], ZEUS [3] and [1], [2], include tools that partially support this objective. Build Test Specifications for Integration is another activity that can be developed using the Ontology-Based (OB) test case generation [21] and Random (R) test case generation [18] techniques of Tropos.
Both techniques are based on the communication between agents, which is an important aspect of the Integration Test.

4.4 Testing Activities in Implementation

The main test objective here is similar to object orientation: to perform effective and efficient Unit Tests. It is important to emphasize that agents are comparable to modules, not to units. Inside an agent there are program units, and these units need Unit Tests similar to those in object orientation.

4.5 Testing Activities in Test

The activities in this stage have been prepared throughout the development process. Module Testing can be supported by the unit test frameworks of PASSI [12] and Prometheus [13], or by the frameworks presented in [1], [2]. These frameworks need to be extended to test the proactivity and autonomy of the agents. Integration Testing can be supported by a few agent oriented methodologies. This activity is designed to assess the communication among agents. In Prometheus, the ZEUS toolkit and INGENIAS, the exchanges of messages between agents may be debugged. This is helpful, although it does not fulfil the whole purpose of this activity. Tropos can support this activity through two types of test case generation: Ontology-Based (OB) test case generation [21] and Random (R) test case generation [18].
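The mock-agent approach of [1] suggests how a unit inside an agent can be tested: the agent under test is exercised against a scripted mock peer instead of a real agent, so the test is deterministic. All names below are hypothetical, intended only to illustrate the idea, not the actual framework of [1].

```python
class MockSeller:
    """Mock agent: replies to a known message with a scripted, fixed quote,
    so tests do not depend on a real seller agent's behaviour."""
    def handle(self, message):
        if message == "request-quote":
            return 42
        return None  # ignore anything outside the scripted conversation

# The unit under test inside a hypothetical buyer agent:
# decide whether a quoted price is acceptable given a budget.
def accept_quote(quote, budget):
    return quote is not None and quote <= budget

# Unit tests driven by the mock agent.
mock = MockSeller()
quote = mock.handle("request-quote")
assert quote == 42
assert accept_quote(quote, budget=50)        # within budget: accept
assert not accept_quote(quote, budget=30)    # too expensive: reject
assert not accept_quote(mock.handle("ping"), budget=50)  # no quote received
```

Note that this only exercises reactions to messages; as argued above, testing proactivity would additionally require driving the agent from its goals and environment.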


Fig. 2. Testing process in the Agent Oriented Software Engineering

The System Test activity can be performed using artifacts obtained during the development process, derived from the following activities: Testing Plan, Design System Test and Specify System Test Cases. The Acceptance Testing activity can also use artifacts obtained during development: its test cases can be derived from the Acceptance Test Plans.

5 Conclusions

The review of testing activities from an agent oriented perspective, based on the V-Model, allows the classification of relevant methods and tools for testing in agent oriented software engineering. It also raises open issues concerning agent characteristics that are not yet covered by existing testing techniques. In particular, future work must address design activities and tools that can support the design validation of MAS, improve integration testing, and test important characteristics of agents such as proactivity and autonomy.

Acknowledgements

This work has been done thanks to a fellowship for M. Moreno from the Spanish Agency for International Cooperation (AECI), and within the project Agent-based Modelling and Simulation of Complex Social Systems (SiCoSSys), supported by the Spanish Council for Science and Innovation, with grant TIN2008-06464-C03-01.

References

1. Coelho, R., Kulesza, U., von Staa, A., Lucena, C.: Unit Testing in Multi-Agent Systems Using Mock Agents and Aspects. In: International Workshop on Software Engineering for Large-Scale Multi-Agent Systems, pp. 83–90. ACM, Shanghai (2006)


2. Tiryaki, A.M., Öztuna, S., Dikenelli, O., Erdur, R.C.: SUNIT: A Unit Testing Framework for Test Driven Development of Multi-Agent Systems. In: Padgham, L., Zambonelli, F. (eds.) AOSE VII / AOSE 2006. LNCS, vol. 4405, pp. 156–173. Springer, Heidelberg (2007)
3. Nwana, H., Ndumu, D., Lee, L., Collis, J.: ZEUS: A Toolkit for Building Distributed Multi-Agent Systems. Applied Artificial Intelligence 13, 129–185 (1999)
4. Rouff, C.: A Test Agent for Testing Agents and Their Communities. In: Aerospace Conference Proceedings, vol. 5, pp. 2633–2638 (2002)
5. The V-Model: The Development Standards for IT Systems of the Federal Republic of Germany (2005), http://www.v-modell-xt.de (cited December 2008)
6. Myers, G.J.: The Art of Software Testing. John Wiley & Sons, New Jersey (2004)
7. Ammann, P., Offutt, J.: Introduction to Software Testing. Cambridge University Press, Cambridge (2008)
8. Jacobson, I., Booch, G., Rumbaugh, J.: The Unified Software Development Process. Addison-Wesley, Reading (1999)
9. IEEE Standard Glossary of Software Engineering Terminology. IEEE (1990)
10. Tran, Q.N., Low, G.C.: Comparison of Ten Agent-Oriented Methodologies. In: Agent-Oriented Methodologies. Idea Group Inc., London (2005)
11. Padgham, L., Winikoff, M., Poutakidis, D.: Adding Debugging Support to the Prometheus Methodology. Engineering Applications of Artificial Intelligence 18, 173–190 (2005)
12. Caire, G., Cossentino, M., Negri, A., Poggi, A., Turci, P.: Multi-Agent Systems Implementation and Testing. In: 4th Int. Symp. AT2AI, Vienna (2004)
13. Zhang, Z., Thangarajah, J., Padgham, L.: Automated Unit Testing Intelligent Agents in PDT. In: 7th AAMAS 2008, pp. 1673–1674 (2008)
14. Gómez-Sanz, J.: INGENIAS Agent Framework. Development Guide version 1.0. Grupo de Agentes de Software: Ingeniería y Aplicaciones, UCM (2007)
15. Botía, J.A., Gómez-Sanz, J.J., Pavón, J.: Intelligent Data Analysis for the Verification of Multi-Agent Systems Interactions. In: Corchado, E., Yin, H., Botti, V., Fyfe, C. (eds.) IDEAL 2006. LNCS, vol. 4224, pp. 1207–1214. Springer, Heidelberg (2006)
16. Serrano, E., Gómez-Sanz, J.J., Botía, J., Pavón, J.: Intelligent Data Analysis Applied to Debug Complex Software Systems. Neurocomputing (to appear) (2008)
17. Lacey, T., DeLoach, S.: Automatic Verification of Multiagent Conversations. In: Eleventh Annual Midwest Artificial Intelligence and Cognitive Science Conference, pp. 93–100. AAAI Press, Arkansas (2000)
18. Nguyen, C.D., Perini, A., Tonella, P.: eCAT: A Tool for Automating Test Cases Generation and Execution in Testing Multi-Agent Systems (Demo Paper). In: AAMAS 2008, pp. 1669–1670 (2008)
19. Nguyen, D.C., Perini, A., Tonella, P.: A Goal-Oriented Software Testing Methodology. In: Luck, M., Padgham, L. (eds.) Agent-Oriented Software Engineering VIII. LNCS, vol. 4951, pp. 58–72. Springer, Heidelberg (2008)
20. Wooldridge, M.: An Introduction to MultiAgent Systems. John Wiley & Sons, Chichester (2002)
21. Nguyen, C.D., Perini, A., Tonella, P.: Ontology-Based Test Generation for Multi-Agent Systems (Short Paper). In: AAMAS 2008, pp. 1315–1318 (2008)
