Transforming Learning through Agent Augmented Virtual World

Han Yu, Zhiqi Shen, Chor Ping Low, Chunyan Miao
Nanyang Technological University
[email protected], [email protected], [email protected], [email protected]
Abstract

Recent years have seen new forms of immersive learning, such as interactive storytelling in virtual learning environments. Intelligent software agents can provide a more engaging narrative and involve users more deeply in the unfolding of the story line in the virtual world. However, designing and implementing a software agent or a multi-agent system (MAS) for this purpose requires the developer to be highly skilled in agent oriented software engineering (AOSE) and demands a large investment of time and effort. This raises the barrier to developing non-trivial agent augmented interactive e-learning applications. Our research proposes a framework, the Multi-Agent Development Environment (MADE), to address this problem by abstracting the agent implementation into a reusable runtime environment, enabling less technically inclined people (e.g. teachers) and developers with limited knowledge of AOSE to rapidly prototype a multi-agent system and incorporate it into interactive learning applications.
1. Introduction
The continued advancement of interactive digital media technologies has added new dimensions to the landscape of e-learning. Gone are the days when e-learning mainly revolved around the management of learning materials and the provision of flexible, personalized curricula. Nowadays, the emphasis is on the learning experience, ushered in by the concept of the virtual learning environment (VLE), which differs drastically from the traditional rote learning approach. The importance of the VLE is rooted in the fact that it not only provides a richer and more interactive learning experience, but also enables learners to acquire analytical skills through participation in the learning activities [1]. Apart from providing accurate visualization of knowledge, realistic social interaction is also paramount to the success of a VLE [1]. Interactions, both among the learners and between the learners and the virtual objects in the learning environment, can promote active learning and therefore enhance learning results.

There has been increasing interest in both computer science and the learning sciences in how agents might foster more effective and situated learning experiences in virtual environments [2]. Agents as learning technologies can take different forms, such as "learning buddies", pedagogical agents, virtual avatars/characters, and so on. Preliminary research shows that such agents may help create more meaningful learning experiences and, in turn, positively influence learning outcomes [2]. To give human users the perception of intelligence [1] and to accommodate user intervention [3] in a complex VLE application, autonomous software agents are widely used. Incorporating perceived intelligence such as emotions and personality can draw a significantly higher level of attention from the human participants [1, 11]. However, designing and implementing a Multi-Agent System (MAS) and plugging it into a VLE application requires a large investment of time and effort, even for a team of programmers well versed in the Agent Oriented Software Engineering (AOSE) paradigm.

This paper presents a goal-oriented framework, consisting of the Goal Net Designer [10] and the Multi-Agent Development Environment (MADE) [8] runtime system, which aims to simplify the process of designing and implementing an agent mediated interactive storytelling (IS) application, a popular form of VLE. In the following sections, we discuss in detail the methodology behind the framework, the design of the framework, and how to build an agent and add it into an IS application using the tools provided by this framework.
2. Related Work
While most research attention has focused on studying the impact of MAS on IS applications (e.g. Riedl et al. [4]) or on monitoring and mediating students' learning processes and outcomes (Anane et al. [5]), there are some research projects that try to make building an IS application easier and more flexible.
The approach proposed by Spierling et al. [6] offers a modular way of authoring an IS application. It divides an IS application into four levels, each with a certain degree of autonomy. Extensive effort has been devoted to designing the avatars of the IS system. In addition, the project avoids letting users write program code where possible, in the belief that this simplifies the system design process. However, this choice may reduce the flexibility of the system and limit it to handling only certain categories of interactive storytelling applications (e.g. fairy tales). Moreover, since its main focus is designing an IS application itself rather than plugging a MAS into an existing IS application, it does not make adequate provision for decoupling the agents from the IS application. Other tools, such as the Prometheus Design Tool (PDT) proposed by Padgham et al. [7], place more emphasis on designing and implementing a MAS. However, none of them makes provision for integrating multi-agent systems into existing interactive digital media platforms in general, or VLE systems in particular. They also tend to provide extensive programming support so that designers can fine tune the multi-agent systems under development, which raises the technical skill level required to use them productively. Our research assumes that users have a basic understanding of multi-agent systems, but not the ability to program under the AOSE paradigm. We strive to provide a balanced mixture of high level goal oriented design and low level VE specific programming in our proposed framework, letting users treat the detailed workings of a MAS as a black box in their overall design of a multi-agent mediated interactive virtual learning application.
3. Methodological Basis

The proposed framework is based on the Goal Net methodology [8]. A goal net is a composite goal hierarchy composed of goals (also known as states) and transitions; these two kinds of entities are collectively referred to as nodes in the rest of this paper. Goals represent the internal states an agent needs to go through in order to achieve its final goal. Transitions connect goals and specify the relationships between the goals they join. Each transition must have at least one input goal and at least one output goal, and is associated with a task list that defines the possible tasks the agent may perform in order to transit from the input goal to the output goal. A complex system can be recursively decomposed into sub-goals and sub-goal-nets, and a composite goal takes precedence over a simple goal during execution. In this manner, the system can be easily modeled and simplified. A Goal Net model defines the behaviors of the agent that executes it. Each agent has at least one goal net, and a complex goal net can be split into many goal nets, so a multi-agent system can be formed in either a top-down or a bottom-up fashion. The goals of an agent, or of a story line represented by a MAS at a higher level, are analogous to the goals we, as human beings, set for ourselves in everyday life. The goal net concept is kept as simple as possible so that non-technically savvy people can grasp it in a short period of time.
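To make the structure above concrete, the listing below sketches one possible in-memory representation of a goal net. The type names (Task, Goal, Transition, GoalNet) and fields are our own illustrative assumptions, not the actual MADE data model.

// Minimal sketch of a goal net representation (hypothetical names, not the
// actual MADE data model). A transition joins input goals to output goals and
// carries a task list executed when the transition fires.
#include <functional>
#include <string>
#include <vector>

struct Task {
    std::string name;                  // e.g. "walk", "talk"
    std::function<void()> run;         // wraps a user-supplied DLL function
};

struct Goal {
    int id;
    std::string name;                  // e.g. "Pos-1"
    bool composite = false;            // composite goals hold a sub-goal-net
    std::vector<int> subGoalNetIds;    // filled only when composite == true
};

struct Transition {
    std::vector<int> inputGoalIds;     // at least one input goal
    std::vector<int> outputGoalIds;    // at least one output goal
    std::vector<Task> taskList;        // tasks performed when this transition fires
};

struct GoalNet {
    std::vector<Goal> goals;
    std::vector<Transition> transitions;
    int rootGoalId;                    // the agent's final (overall) goal
};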
4. Framework Architecture
Figure 1. System Architecture for the MADE Framework
As illustrated in Figure 1, the MADE framework consists of two major parts: 1) the Goal Net Designer, which provides a graphical user interface (GUI) for users to make high level designs, from the overall multi-agent system all the way down to the mental states of each individual agent, and to set up the linkage with an existing VLE; and 2) the MADE runtime system, which interprets the users' goal net designs and manages the detailed operations of the multi-agent system. The two parts share a central database which stores the goal net designs in a format that can be easily loaded and interpreted at runtime.
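The contract between the two parts could be summarized as in the sketch below, assuming a hypothetical DesignStore interface and GoalNetRecord type; the paper does not specify the actual storage schema.

// Sketch of the designer/runtime contract around the shared database
// (hypothetical interface; not the real MADE storage schema).
#include <string>
#include <vector>

struct GoalNetRecord {
    int goalNetId;
    std::string serializedDesign;   // goal/transition/arc data saved by the Designer
};

class DesignStore {
public:
    // Written by the Goal Net Designer at design time.
    virtual void saveDesign(const GoalNetRecord& record) = 0;
    // Read by the MADE runtime when agents are created.
    virtual std::vector<GoalNetRecord> loadAllDesigns() = 0;
    virtual ~DesignStore() = default;
};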
Figure 2. Goal Net Designer

The Goal Net Designer is implemented with a client-server architecture. This makes provision for distributed design efforts and collaborative development among team members spanning a large geographic region. The framework leaves it open for users to provide routines that connect to the virtual environment (essentially a 2D/3D virtual world) of their choice. The functions called at each node are concerned with the lower level details of how an agent should behave at that node. In the MADE framework, these functions are provided by the users as C/C++ functions organized into dynamic link library (DLL) files; alternatively, they can be implemented as web services. One should note that these functions handle the logic and detailed manipulation of the avatar represented by the agent, or of the virtual environment (VE) itself, in accordance with the application programming interface (API) provided by the VE. They are not concerned with the architecture of the MAS, so users do not need to master the AOSE paradigm in order to write them. The MADE runtime system treats these functions as black boxes. If users want to port an agent to a different virtual environment, all they have to do is modify the underlying implementation of the functions to change the system specific parts. The goal net design for that agent can therefore be reused across multiple virtual environments.
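As an illustration of such user-supplied functions, the listing below sketches how the walk and talk behaviors used later in this paper might be packaged in a DLL. The file name, the ve_* helpers, and their bodies are assumptions made for illustration; a real implementation would call the API of the chosen virtual environment instead of printing.

// avatar_actions.cpp: minimal sketch of user-supplied behavior functions
// compiled into a DLL for the MADE runtime (hypothetical file and names).
#include <cstdio>

// Placeholder for the target VE's SDK call that drives the avatar.
static void ve_move_avatar(int direction, int distance)
{
    std::printf("avatar moves in direction %d for %d units\n", direction, distance);
}

// Placeholder for the VE SDK call that makes the avatar speak.
static void ve_avatar_say(const char* utterance)
{
    std::printf("avatar says: %s\n", utterance);
}

extern "C" {

// Exported with C linkage so the runtime can locate them by name.
__declspec(dllexport) void move(int dir, int dist) { ve_move_avatar(dir, dist); }

__declspec(dllexport) void say(const char* text) { ve_avatar_say(text); }

} // extern "C"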
The MADE runtime is responsible for loading the goal net designs from the database, creating agents based on the designs, and providing agent management. It runs on a Windows machine. At runtime, it accesses the goal net information from the central database remotely and invokes web services or DLL functions locally. The DLLs must be stored in a local or shared folder (or a shortcut to one), and this folder must be made known to MADE at design time.
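The paper does not detail how the runtime binds to these DLLs; on Windows this is typically done with LoadLibrary and GetProcAddress, as in the sketch below. The DLL name and the move/say exports follow the earlier hypothetical example.

// Sketch of how a Windows runtime could bind to a user-supplied DLL and call
// the exported functions by name (standard Win32 dynamic loading).
#include <windows.h>
#include <cstdio>

typedef void (*MoveFn)(int dir, int dist);
typedef void (*SayFn)(const char* text);

int main()
{
    // Folder/file made known to MADE at design time (assumed name).
    HMODULE dll = LoadLibraryA("avatar_actions.dll");
    if (!dll) {
        std::fprintf(stderr, "failed to load avatar_actions.dll\n");
        return 1;
    }

    MoveFn move = reinterpret_cast<MoveFn>(GetProcAddress(dll, "move"));
    SayFn  say  = reinterpret_cast<SayFn>(GetProcAddress(dll, "say"));

    if (move && say) {
        move(1, 10);                                          // e.g. walk towards Pos-2
        say("This patient shows symptoms of dengue fever.");  // then give the introduction
    }

    FreeLibrary(dll);
    return 0;
}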
5. Building an Agent from the Ground Up

To build an agent using our proposed framework, the user needs to accomplish two tasks: 1) design the goal net which dictates what internal states the agent has and how it moves from one state to the next; and 2) write the functions the agent needs to execute in each state or transition defined in the design, and compile them into standard DLL files or publish them as web services. In this section, we discuss how to design an agent to control an avatar in the SRC project. The SRC project constructs an online IS application called Virtual Singapura in which lower secondary school students learn science topics through exploration [9]. In this experiment, we design an agent, using the MADE framework, to control a nurse figure in the storyline who provides students with introductions and hints about the case they need to investigate (a dengue outbreak).
The design of a goal net is done on the canvas highlighted as Area 1 of Figure 2. In order to draw up a goal net representing an agent's various states and their relationships, the user needs a clear goal for the agent to achieve in the end. This can be embodied as an overall composite state, as shown in Figure 2. On a lower level, the user can then break it down into smaller, more manageable goals for the agent to achieve in certain sequences to accomplish the overall goal. These smaller goals can be either simple states or composite ones which may be broken down further. In our example, we would like the nurse to walk around the hospital ward and introduce the condition of each patient. Therefore, we added one state for each patient in the scene (5 in total) and made each state represent the nurse at a different location. We then defined the action to be performed by the nurse avatar in order to transit from one state to the next. This is accomplished using the transition entities. A transition links one or more input states to one or more output states and contains the functions which need to be performed to go through the transition. In the sample design, we make the nurse avatar wait at a patient's sick bed until the student has understood the introduction and prompts for further action. Therefore, a transition which loops back to each state is linked with that state, and a function which makes the avatar check for user actions is added to the loop-back transition. To add it to a transition, we only need to select the appropriate function from the Function List highlighted as Area 2 in Figure 2 and drag-and-drop it onto the desired transition in the canvas. To go from location 1 to location 2, for example, the nurse avatar needs to move in a certain direction for a certain distance, stop, and start speaking. Therefore, two atomic actions, walk and talk, need to be performed in the transition linking state Pos-1 with Pos-2. A transition entity can accommodate multiple functions, which are grouped into a Task and executed in their order of appearance in the task. Therefore, we add two functions, move(int dir, int dist) and say(char* string), with appropriate arguments, to the desired transitions. Multiple tasks can also be added to a transition, together with a task selection algorithm [12] that decides which of them to execute. This enables an agent to use various human-like reasoning mechanisms for task selection, such as rule based reasoning, case based reasoning,
probabilistic reasoning and fuzzy cognitive reasoning [14, 15]. The linkage of states and transitions is accomplished by Arc entities (directed arrows). Although they do not perform functions at runtime, they offer a simple and efficient way of specifying the relationships (e.g. 1-to-1, 1-to-n, n-to-m) among states and transitions. All basic entities for drawing up a goal net are listed on the left-hand toolbar for the designers to use. Since the MADE runtime interprets the design data on the fly, the behaviors of the agents can be changed even after the IS application is released to customers; for example, more patients may be included in the scene. In this way, one release of the IS application can be customized to suit different customers' needs without substantially rewriting the program code.
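As a sketch of what the Pos-1 to Pos-2 transition amounts to at runtime, the fragment below composes the move and say functions into a task and uses a trivial rule to choose between two candidate tasks. The Task type, task names, arguments, and the selection rule are our own illustrative assumptions, not MADE internals.

// Sketch of populating and firing the Pos-1 -> Pos-2 transition (illustrative).
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Stubs standing in for the user-supplied DLL exports.
void move(int dir, int dist) { std::printf("walk: dir=%d dist=%d\n", dir, dist); }
void say(const char* text)   { std::printf("talk: %s\n", text); }

struct Task {
    std::string name;
    std::vector<std::function<void()>> actions;   // executed in order of appearance
};

int main()
{
    // Task performed when moving from Pos-1 to Pos-2: walk, then talk.
    Task introduceNextPatient{
        "introduce-next-patient",
        { [] { move(/*dir=*/90, /*dist=*/5); },
          [] { say("This patient was admitted with a high fever and joint pain."); } }
    };

    // A second candidate task, e.g. if the student asked for a hint instead.
    Task giveHint{
        "give-hint",
        { [] { say("Have you checked for mosquito breeding sites nearby?"); } }
    };

    // Trivial rule based selection between the two tasks (placeholder for the
    // task selection algorithms described in [12]).
    bool studentAskedForHint = false;
    const Task& selected = studentAskedForHint ? giveHint : introduceNextPatient;

    for (const auto& action : selected.actions) action();
    return 0;
}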
6. Putting an Agent into a Virtual World
Figure 3. The Agent Controlled Nurse Avatar Built with the MADE Framework

The Active World system, on which Virtual Singapura is based, allows users to write scripts in C/C++ that manipulate the actions of its avatars over the Internet. We therefore wrote our transition functions with this in mind, incorporating the appropriate remote login procedures from the Active World API into our function code. We then started an instance of the MADE runtime and let it load the goal net design from the shared database. It goes through the states from the start state, following the transitions; everything after this point is automated by the MADE runtime system. Figure 3 shows a screenshot of the agent controlled nurse explaining the situation to a student [13]. The agent provides information to the student and steps in to help whenever the student loses track of the clues. The MADE framework is flexible enough to be plugged into different VEs as it leaves the details of
setting up the connection open. It is up to the designers to write procedures in accordance with the requirements of the target VE. The functions which make the avatar move and talk use calls to APIs of the specific VE (Active World in our case). Thus, although the designers need to write program code when using the MADE framework to build agents, the code itself only depends on the VE used and is completely decoupled from the agent or MAS.
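To illustrate this decoupling, the sketch below reimplements the exported move function against a different, hypothetical virtual environment; only the function body changes, while the goal net design and the exported interface stay the same. Every name other than move is an assumption for illustration.

// Porting sketch: the same exported interface backed by a different VE.
// Only this implementation file changes; the goal net design is reused as-is.
#include <cstdio>

// Placeholder for the new target VE's avatar-control call (hypothetical).
static void otherVeWalkAvatar(int heading, int metres)
{
    std::printf("[other VE] walk heading=%d metres=%d\n", heading, metres);
}

extern "C" {

// Same name and signature as before, so the MADE runtime and the goal net
// design remain unchanged when the agent is ported to the new VE.
__declspec(dllexport) void move(int dir, int dist)
{
    otherVeWalkAvatar(dir, dist);
}

} // extern "C"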
7. Conclusions and Future Work
The MADE framework separates agent mental state modeling from task design in order to incorporate intelligent and autonomous behaviors into agents. It has been used by final year computer engineering undergraduates from Nanyang Technological University and teachers from the National Institute of Education, Singapore, during the development of the SRC project. Although the students had no prior knowledge of AOSE and only received basic training on how to use the Goal Net Designer tool, they were able to develop prototype agent controlled avatars (excluding art work) and put together a workable interactive storytelling application within a short period of time, because they only had to familiarize themselves with the Active World API and not the AOSE paradigm. In future development of the framework, we will include a more sophisticated agent environment model to simulate more complex interactions between the agents and the virtual world settings. In addition, to expand the realm of application of the framework, a trust model will be incorporated to cater for the higher security requirements of deployment in large scale, open online communities.
8. References

[1] Z.G. Pan, A.D. Cheok, H.W. Yang, J.J. Zhu, J.Y. Shi, "Virtual reality and mixed reality for virtual learning environments", Computers & Graphics, vol. 30, 2006, pp. 20-28.
[2] D.J. Ketelhut, C. Dede, J. Clarke, B. Nelson, C. Bowman, "Studying Situated Learning in a Multi-user Virtual Environment", Learning Online Info, 2006.
[3] M. Cavazza, F. Charles, S.J. Mead, "Developing Reusable Interactive Storytelling Technologies", Building the Information Society, 2004.
[4] M. Riedl, C.J. Saretto, R.M. Young, "Managing Interaction between Users and Agents in a Multi-agent Storytelling Environment", Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, 2003, pp. 741-748.
[5] R. Anane, K.M. Chao, R.J. Hendley, M. Younas, "An agent-mediated approach to eLearning", Seventh IASTED International Conference on Internet and Multimedia Systems and Applications, Honolulu, USA, 13-15 Aug. 2003, pp. 104-108.
[6] U. Spierling, D. Grasbon, N. Braun, I. Iurgel, "Setting the scene: playing digital director in interactive storytelling and creation", Computers & Graphics, vol. 26, 2002, pp. 31-44.
[7] L. Padgham, J. Thangarajah, M. Winikoff, "Tool support for agent development using the Prometheus methodology", Fifth International Conference on Quality Software (QSIC 2005), 2005.
[8] Z.Q. Shen, C.Y. Miao, R. Gay, "Goal-oriented Methodology for Agent-oriented Software Engineering", IEICE Transactions on Information and Systems, Special Issue on Knowledge-based Software Engineering, vol. E89-D, no. 4, 2006.
[9] Y.D. Cai, Z.Q. Shen, C.Y. Miao, "G-MADE: A Hybrid Interactive Storytelling Architecture", The 2007 AAAI Fall Symposium on Intelligent Narrative Technologies, 2007.
[10] H. Yu, Z.Q. Shen, C.Y. Miao, "Intelligent Software Agent Design Tool Using Goal Net Methodology", The 2007 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology (IAT 2007), 2007.
[11] Z.Q. Shen, C.Y. Miao, Y.D. Cai, "Agent Augmented Game Development", The 2nd Annual Microsoft Academic Days Conference on Game Development, 2007.
[12] Z.Q. Shen, C.Y. Miao, Y. Miao, X.H. Tao, R. Gay, "A Goal-oriented Approach to Goal Selection and Action Selection", IEEE World Congress on Computational Intelligence (WCCI), 2006.
[13] M. Ashoori, C.Y. Miao, Y.D. Cai, "Socializing Pedagogical Agents for Personalization in Virtual Learning Environments", Web Intelligence and Intelligent Agent Technology Workshops, 2007.
[14] C.Y. Miao, A. Goh, Z.H. Yang, "Agent that models, reasons and makes decisions", Knowledge-Based Systems (KBS), vol. 15, no. 3, 2002.
[15] C.Y. Miao, A. Goh, Y. Miao, Z.H. Yang, "A Dynamic Inference Model for Intelligent Agent", International Journal of Software Engineering & Knowledge Engineering (IJSEKE), vol. 11, no. 5, 2001, pp. 509-528.