Learning in One's Own Imaginary World

Xin Bai, John B. Black, Lance Vikaros, Jonathan Vitale, Daoquan Li, Qing Xia, Sungbong Kim, Seokmin Kang
Teachers College, Columbia University
Box 8, 525 W 120th Street
New York, New York 10027 U.S.A.

ABSTRACT

This paper details the implementation of an agent-based educational gaming environment built on the REAL (Reflective Agent Learning Environment) cognitive framework. REAL provides an interactive learning environment that allows students to 1) construct their own imaginary world; 2) reflect upon the quality of their understanding; and 3) test that understanding in dynamically generated simulation games. The REAL framework stresses reflection as the critical component of thinking processes. The reflective agent's actions are monitored by an expert agent, a pedagogical agent, and a communication agent. With a rule-based reasoning engine and a game engine embedded, REAL makes it easier for developers to model domain knowledge and develop simulation games that would otherwise be time-consuming to build. Our studies show that this kind of learning environment engages students in learning and encourages collaboration among researchers in different areas.

1. INTRODUCTION

Some research on intelligent agents to support education has benefited from the development of Intelligent Tutoring Systems (ITS), which use computers as an alternative to human tutors. These ITSs can be regarded as pedagogical agents, which have a set of normative teaching goals, plans for achieving those goals (e.g., teaching strategies), and associated resources in the learning environment (Thalmann, Noser, & Huang, 1997). Researchers have designed intelligent agents in the roles of domain expert, e.g., MYCIN (Shortliffe, 1976; Davis, Buchanan, & Shortliffe, 1977), learning companion (Chan & Chou, 1997), collaborator (Blandford, 1994; Dillenbourg & Self, 1992), and teachable agent (Biswas, Schwartz, Bransford, & The Teachable Agents Group at Vanderbilt, 2001), to name a few. Such intelligent agents should be capable of making decisions and generating behaviors that are as close as possible to those of their human counterparts. The resulting learner-centered environment allows researchers to control what, when, and how to assist learning, and thus to gain better insight into human cognition.

Advances in technology also allow students to explore the real world through educational simulation games. Simulation games let users satisfy their curiosity without exposure to harmful environments, make mistakes without real consequences, and express new ideas anonymously without feeling embarrassed. The entities in such a world can be living things, such as a tiger, grass, or a person, or static objects, such as rock and water. They are represented as intelligent agents. Agents are autonomous, computational entities that can be viewed as perceiving their environment through sensors and acting upon their environment through effectors (Weiss, 1999). By observing these automated agents interacting with each other and understanding the consequences of these events, students can get a real-life-like experience of how a system works.

In a traditional, teacher-centered model of instruction the teacher is responsible for organizing and delivering information to the students. Thus, the teacher must reflect upon his or her own understanding of the topic and create ways to relate the topic to the student, such as an appropriate analogy. This process may be seen as the development of mental models that incorporate the various propositions, procedures, and images associated with a concept (Black, 1992; 2006). As the teacher develops teaching methods, his or her mental model becomes richer and increasingly linked with related knowledge stored in memory. Within this framework, however, a student's learning depends on his or her ability to understand the concept as the teacher does. If there is a deficiency in the teacher's own mental model or communication ability, the student may have difficulty assimilating the information. One alternative to the traditional model of instruction is learning by teaching.

In the preparatory, active, and reflective stages of instruction, teachers are engaged in organizing knowledge, clarifying their interpretation of a concept, and integrating feedback (Biswas et al., 2001). This process promotes constant evaluation and modification of mental models, increasing their clarity and thoroughness. Thus, allowing students to take on the role of a teacher should let them reap the benefits of mental model construction, evaluation, and improvement. The effectiveness of instructor preparation was shown in a study by Bargh and Schul (1980), in which students who prepared to teach the material to other students actually learned it better than students who prepared only to take a quiz on it themselves. However, the value of learning by teaching may fluctuate with the quality of feedback given by the tutored student. If the tutored student readily understands the material, or has difficulty formulating good questions, the teacher may not feel the need to improve his or her mental model of the concept. The development of virtual teachable agents is an attempt to standardize and maximize the benefits of this experience.

Teachable agent software allows a user to play the role of teacher to a virtual character that responds to instruction by engaging in some activity in a virtual world and providing feedback about its performance (Biswas et al., 2005). When the user recognizes discrepancies between what he or she believes and how the agent performs, the user is motivated to revise his or her mental model of the concept. These revisions in understanding are then reflected in the updated instructions that the user provides to the agent.

The initial work by the Teachable Agents Group at Vanderbilt showed the great potential of teachable agents in education (Viswanath et al., 2004). The study of Betty's Brain revealed a significant advantage for the learning-by-teaching paradigm. In this study, software was provided to two groups of fifth graders, either with or without a teachable agent. Both groups were given resources to learn about the subject matter, query the system, and construct and evaluate concept maps. In addition, however, the teachable agent group could observe the performance of the agent as it acted in the virtual world according to their concept maps. During the study the teachable agent group showed a greater degree of interaction with the material. Furthermore, after a delay period, the teachable agent group showed significantly greater memory retention and transfer ability. The opportunity to construct, evaluate, and modify one's mental model of a concept is an inherent component of teaching, regardless of whether a virtual agent or a human student is receiving the instruction.

2. MOTIVATION

The REAL cognitive framework informs an effective model for using technology to facilitate student-centered learning while addressing the historic shortcomings of intelligent tutoring systems (ITS). Some early ITS and ICAI systems from the 1970s and 1980s failed to engage the interest of learners.

They failed to take into consideration the learners' needs inherent in motivating their engaged attention, their personal ownership of the experience, and their formation of robust mental models. A few notable examples, such as Inquiry Island, seem to support this assertion. Inquiry Island (ref. missing, check with Lance), an expert system based on Cognitive Apprenticeship, while dry, did succeed in providing coaching and modeling of skill-based knowledge and, more importantly, of meta-cognitive strategies for connecting ideas to prior knowledge and for putting such ideas into practice. Inquiry Island combined adaptive feedback, like that found in early ICAI systems, with a more constructivist approach to learning. However, it did not take advantage of the digital medium to motivate students' interest. The interface was predominantly a text-based educational tool that integrated artificially intelligent tutoring with productivity tools for managing and sharing observational empirical data. We feel that this illustrates the right cognitive approach applied without full regard for affective motivation or the medium's potential to reduce cognitive load through visualizations.

Some may argue that computers HAVE been able to pass the Turing Test. The problem is that such computers hold convincing conversations by avoiding the kinds of open dialog needed to promote learning.

For example, computers like Eliza have convincingly imitated dialog patterns typical of people suffering from extreme paranoia (quickly becoming defensive, making paranoid accusations, and then refusing to engage in give-and-take communication) or self-absorption (failing to listen or respond to the other party and instead monopolizing the conversation, acknowledging only topics they want to talk about). That is to say, the computers that have successfully been mistaken for humans have done so in a very limited manner, strategically avoiding, rather than engaging in, the types of open discourse conducive to learning. Affectively, this kind of design is very insensitive to the learner. Efforts were made in REAL to improve upon this approach by offering feedback that is exciting, interactive, and multi-modal, and that does not disrupt the flow of ongoing learning activities with explicit assessments.

The idea of REAL is not to replace a teacher, but to replace the "hypotheticality" of a lesson about complex systems. That is to say, teachers and textbooks often describe, hypothetically, concepts that defy comprehension. Studies by Chi (2005) show that people have difficulty understanding concepts that exhibit complex and multivariate behaviors. Chi describes such elusive concepts as "complex systems," although the use of the word "system" seems to incorrectly imply a mechanical, deterministic sum of properties that all belong to a single contained entity. These concepts are similar in that they are very hard to imagine, let alone describe hypothetically in a classroom. The key aspect of the REAL interface expected to address such knowledge acquisition and transfer issues is intelligently adaptive visualization of particular aspects of complex systems from multiple perspectives.

As Shneiderman (2004) pointed out when summarizing the findings of related research on computer-aided visualization, "information visualization can be defined as the use of interactive visual representations of abstract data to amplify cognition." Computer-based science simulation games are great media for putting a user's imagination into action. Gee (2003) claims that games are "not only pushing the creative boundaries of interactive digital media but also suggesting powerful models of next-generation interactive learning environments." Aguilera and Méndiz (2003) also report that "in addition to stimulating motivation, video games are considered very useful in acquiring practical skills, as well as increasing perception and stimulation and developing skills in problem-solving, strategy assessment, media and tools organization and obtaining intelligent answers. Of all the games available, simulations stand out for their enormous educational potential."

However, empirical studies have also shown that, although educational games are usually highly engaging, they often do not trigger the constructive reasoning necessary for learning (Conati & Zhao, 2004; Klawe, 1998). It is often possible to learn how to play a game effectively without reasoning about the underlying domain knowledge. In a learning process, students need positive feedback when they are frustrated, domain-related guidance when they have questions that are beyond them, and support when they want to give up. This can be improved by embedding domain expert agents and pedagogical agents in the simulation. An expert agent aims to provide expert-like solutions to problems in a specific domain. Expert modeling is often time-consuming; expert system shells like JESS can dramatically reduce the development time spent constructing domain knowledge in a local context. We also need to define pedagogical strategies that decide when feedback can be given to a student at a specific performance level and at an appropriate time. These guidelines are embedded within the pedagogical agent, enabling it to observe students' interactions closely and make decisions accordingly.

For the system to gain a better understanding of students, and for students to have enough freedom to express their beliefs, we adopted knowledge representations in the form of propositional networks, production systems, mental images, and mental models (Black, 1992). Based on how this knowledge is represented, the agents must infer what the student is thinking and believing. The REAL project is founded on the assumption that the advantage of using artificial intelligence in educational tools is its ability to 1) be at least somewhat responsive to the affective, personal, and intellectual needs of a student, drawing from many HCI and narrative strategies used to convey the principles of effective instruction that Jim Gee argues are missing from classrooms but present in "successful" video games; and 2) model complex phenomena simply enough for students to grasp, yet experientially enough for students to interactively build richer mental models that incorporate relational and procedural systems knowledge. We hope that, with the goal of winning a game, students will actively review instructional materials, revise game rules, and reflect upon just-in-time feedback, thereby mastering the required cognitive skills. We also hope that, in the process of designing REAL variants in different domains, we can draw interest and discussion from researchers in the fields of computer science, instructional design, and human cognition. Lessons learned can be applied to the next iteration of the design.

3. THE REAL FRAMEWORK

Cognitive architectures serve as a bridge between human cognition and computer applications. They are computational systems that try to characterize how the different aspects of the human system are integrated to achieve coherent thought (Anderson & Lebiere, 1998). The ACT* system of J. R. Anderson (1983) and the SOAR system of Newell (1990) and his colleagues (Laird et al., 1987) are among the promising implementations of the theory; they are also the theoretical basis of the REAL cognitive framework. One of the main components of a cognitive architecture is knowledge representation, which provides a formalization of the factual knowledge, procedural skills, and mental images (Black, 1992) available to the person. The representations correspond to the objects of thought, which include percepts, memories, goals, beliefs, desires, etc. REAL uses the following components to help users develop meaningful knowledge constructions, and to reflect upon and develop the users' cognitive mental states.

Reflective agent – This agent is the student model, which stores specific information about individual learners. It models a student's level of competence and contains information to be used by the pedagogical agent. Like the expert agent, it contains declarative and procedural knowledge as well as imagery representations of the world. Once constructed, the reflective agent can be released into the simulation gaming environment for the students to observe. The reflective agent's behaviors represent those that run in the student's imaginary world. What the reflective agent can do in the simulation is constrained by the system mechanisms implemented in the expert agent. For instance, a store owner cannot purchase store products if doing so exceeds his purchasing power.

Expert agent – This agent contains expert knowledge of a particular subject domain. For instance, in REAL Planet the domain knowledge is about an ecological system; it consists of propositional networks as chunks of facts and procedural rules in the form of if-then clauses. The agent knows what the ecological system is like and how such a system works. In REAL Business, the expert agent is able to interpret statistical formulas, understand how to make the system (a business store) work, and explain why a specific student-designed system does not work.

Pedagogical agent – By comparing the knowledge of the expert agent with that of the reflective agent, the pedagogical agent selects teaching strategies. Based upon these strategies, the system decides which topic to present, what problem to generate, when to give feedback, and how to provide scaffolding. The feedback could take the form of positive or negative responses, thinking bubbles attached to entities, game scores, or statistical reports such as charts, line diagrams, decision trees, or a business report. Different gaming scenarios are generated for different students according to their level of understanding, and different types of feedback are used to motivate users and facilitate learning. For instance, during the game, the pedagogical agent can decide in real time to 1) encourage users (e.g., showing business profits in a line diagram); 2) congratulate users, using text like "Congratulations"; 3) give reasons, showing in a thought bubble why certain formulas are not correct; 4) challenge users (timer, game level, performance scores); or 5) provide hints, questions, and suggestions (e.g., the agent may point out that purchases exceeding the maximum amount are not allowed).

Communication agent – Human-computer interactions are handled by this agent. It observes the user's mouse clicks and provides help with using the tools, clarifies learning goals, gives navigation orientation, troubleshoots the user's local system configuration, and so on. For instance, in the Design Mode, students may get help from the communication agent on how to move entities around, modify properties, and save their profiles.
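To make the division of labor among these four agents concrete, the following is a minimal, hypothetical Java sketch; the class names, method names, and rule strings are illustrative assumptions, not the actual REAL code.

    // Hypothetical sketch of the four REAL agent roles; all names and rule strings
    // are illustrative, not the actual REAL classes. The pedagogical agent compares
    // the rules the student has taught to the reflective agent against the expert
    // rules, while the communication agent handles interface-level help.
    import java.util.List;

    interface KnowledgeBase {
        List<String> rules();                        // production rules (if-then clauses)
    }

    class ReflectiveAgent implements KnowledgeBase { // what the student has taught so far
        private final List<String> taughtRules;
        ReflectiveAgent(List<String> taughtRules) { this.taughtRules = taughtRules; }
        public List<String> rules() { return taughtRules; }
    }

    class ExpertAgent implements KnowledgeBase {     // domain knowledge
        public List<String> rules() {
            return List.of("if demand > supply then increase purchase",
                           "if purchase > purchasingPower then reject purchase");
        }
    }

    class PedagogicalAgent {
        // Decide on feedback by diffing the student's rules against the expert's.
        String feedbackFor(ReflectiveAgent student, ExpertAgent expert) {
            for (String rule : student.rules())
                if (!expert.rules().contains(rule))
                    return "Hint: \"" + rule + "\" is inconsistent with the expert module";
            return "Encouragement: show the profit chart";
        }
    }

    class CommunicationAgent {
        // Interface-level help (tool use, navigation), independent of the domain.
        String helpFor(String uiEvent) {
            return "drag-entity".equals(uiEvent) ? "Drop the entity onto the canvas"
                                                 : "Open the tutorial";
        }
    }

In REAL itself the comparison is carried out by the JESS rule engine over structured facts rather than plain strings; the string form above is used only to keep the sketch short.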

Figure 1. REAL System Architecture

4. THE REAL IMPLEMENTATION

We designed REAL variants in the domains of ecology and probability. We chose these two domains because they require two different kinds of skills to be taught: an ecological system is a typical complex emergent system that is better predicted through simulation, while problems in probability can be modeled in a relatively straightforward deterministic system.

REAL Planet (Bai & Black, 2004; 2005; 2006) was developed in the domain of ecology. To generate their own simulation games, students teach an alien how to design an ecological system on an alien planet whose environmental conditions are similar to those of the earth. The agents in an ecosystem interact locally with each other and with their environment. The global behavior of the entire ecosystem is hard to predict beforehand; it is an emergent outcome of the interactions among agents. We designed the agent properties at two levels: 1) the cognitive level, on which agents base their reasoning and behavior, including such properties as predator-prey (energy flow) relations, category (herbivore, carnivore, producer, or decomposer), goals, and behaviors; and 2) the biological level, whose properties are controlled by a thread built into each agent and adjusted on each bio-clock tick, including energy level, age, growth rate, reproduction rate, etc. Each agent perceives what is happening in the environment (sense), makes decisions (reason), and changes the environment by interacting with other agents (act). A rough sketch of this two-level design follows.
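In this sketch (the class, field, and threshold values are assumptions, not the actual REAL Planet code), the biological level is a background bio-clock thread while the cognitive level runs a sense-reason-act cycle against neighboring entities:

    // Hypothetical sketch of a REAL Planet entity; names and numeric values are
    // illustrative. Biological properties tick on a background thread; the cognitive
    // level senses, reasons, and acts on neighboring entities.
    import java.util.List;

    class Creature implements Runnable {
        // Cognitive level: used for reasoning (predator-prey relations, category, goals).
        final String category;                        // e.g. "carnivore", "herbivore", "producer"
        // Biological level: adjusted on every bio-clock tick.
        volatile double energy = 100.0;
        volatile int age = 0;

        Creature(String category) { this.category = category; }

        public void run() {                           // the bio-clock thread
            while (energy > 0) {
                age++;
                energy -= 1.0;                        // metabolic cost per tick
                try { Thread.sleep(500); } catch (InterruptedException e) { return; }
            }
        }

        void liveOneCycle(List<Creature> neighbors) {
            List<Creature> seen = sense(neighbors);   // perceive the environment
            Creature prey = reason(seen);             // decide what to do
            if (prey != null) act(prey);              // change the environment
        }

        private List<Creature> sense(List<Creature> neighbors) { return neighbors; }

        private Creature reason(List<Creature> seen) {
            if (energy > 50) return null;             // not hungry enough to hunt
            return seen.stream()
                       .filter(c -> !c.category.equals("carnivore"))
                       .findFirst().orElse(null);
        }

        private void act(Creature prey) {             // energy flows from prey to predator
            energy += prey.energy * 0.1;              // only a fraction of the prey's energy transfers
            prey.energy = 0;
        }
    }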

REAL Business is the second REAL implementation, developed in the domain of probability. Students help a store owner design business strategies to run an ice cream store. They teach a reflective agent by observing the event network, the probabilities of multiple events, and data from prior sales, and then by designing production rules. The outcome is observed in the simulation game. Data are collected, reorganized, and displayed so that students can justify and evaluate their predictions based on the simulation and the data. We chose this domain because, most of the time, students learn probability through highly artificial classroom activities involving tools such as marbles and coins. They rarely have the opportunity to see how probability is manifested in society and to collect relevant data, in the process learning to decide what data to collect and how to analyze and represent those data with appropriate statistical tools.

4.1 DESIGN MODE

In the REAL Design Mode, students design knowledge contents and structures in the form of propositional networks, which represent declarative knowledge as entities and the relations among those entities. Students also define production rules that become the basis for the reflective agent's reasoning and behavior. Images of entities are stored in a container at the bottom of the screen; these represent the mental images users may have and can be dropped onto the working canvas at the user's request. Best practices of graphical user interface (GUI) design are followed; for instance, mouse clicks and drag-and-drop are used wherever keyboard typing can be avoided (Figure 2). A loose illustration of these Design Mode structures follows.
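In this illustration (hypothetical names, not the REAL data model), declarative knowledge is held as entity-relation triples and procedural knowledge as condition-action rules over those triples:

    // Hypothetical sketch of Design Mode knowledge structures; names are illustrative.
    import java.util.List;
    import java.util.function.Predicate;

    // A propositional-network edge, e.g. (tiger, eats, deer).
    record Proposition(String subject, String relation, String object) {}

    class ProductionRule {
        final Predicate<List<Proposition>> condition; // if-part: a pattern over the network
        final Runnable action;                        // then-part: a behavior for the reflective agent
        ProductionRule(Predicate<List<Proposition>> condition, Runnable action) {
            this.condition = condition;
            this.action = action;
        }
        void fireIfMatched(List<Proposition> network) {
            if (condition.test(network)) action.run();
        }
    }

    class DesignModeDemo {
        public static void main(String[] args) {
            List<Proposition> network = List.of(new Proposition("tiger", "eats", "deer"));
            ProductionRule hunt = new ProductionRule(
                net -> net.contains(new Proposition("tiger", "eats", "deer")),
                () -> System.out.println("reflective agent: tiger hunts deer"));
            hunt.fireIfMatched(network);              // prints the action when the pattern matches
        }
    }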

Figure 2. Design Modes in REAL Planet (left) and REAL Business (right)

4.2 GAME MODE

The game is generated by applying the knowledge taught by students in the Design Mode. Students evaluate how well they have taught the reflective agent by observing feedback from the simulation. In REAL Planet (Figure 3), students can observe how the entities they designed interact with other entities. One of the main relations among these entities is energy flow: when a prey is eaten by a predator, energy flows from the prey to the predator; likewise, when an herbivore eats grass, energy flows from the grass to the herbivore. The goal is to maintain a balanced ecological system in which the maximum number of entities can co-exist. In REAL Business, students observe how a store owner decides the amount of ice cream to purchase each day. The store owner uses the strategies taught by the users, and he has to maintain a demand-supply balance to maximize his gross profit. If the store owner does well with his first store, he is able to open new stores in different communities, and the same rules are applied to those stores (Figure 4).

4.3 REFLECTION MODE

Students debug their own thinking and reasoning processes by evaluating the knowledge contents and structures from different perspectives. The Reflection Mode in REAL Planet (Figure 3, right) lists some of the perspectives illustrating the student's understanding. It shows the distribution of bio-energy across the four major categories according to the student's design. The learning objective is to have the student identify that only 10 percent of the total bio-energy flows to the next category on the left; accordingly, there must be enough energy in each category to ensure a balanced energy flow, and hence a balanced ecological system. In REAL Business, Reflection Mode is displayed together with the Game Mode (Figure 4) in the form of a store report showing gross profits, product inventories, system networks, and production rules. The learning objective here is to have students make meaningful connections between these various sources of data and the actions in the world, thus gaining a better understanding of the system mechanism. If a student's agent is failing within the game world, the reflection tools may provide clues for improving the agent. As students' familiarity with the system grows, we expect them to integrate the various representations of knowledge into a meaningful whole.
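The energy-balance objective can be stated compactly. Writing E_n for the total bio-energy available at the n-th category (with producers at the base, n = 0), the relation students are meant to discover is approximately

    E_{n+1} \approx 0.1 \, E_n \quad\Longrightarrow\quad E_n \approx (0.1)^{n} E_0 ,

so each successive category can sustain roughly an order of magnitude less biomass, and a balanced design has to allocate most of the bio-energy to the producers.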

Figure 3. REAL Planet Game Mode (left) and Reflection Mode (right)

Figure 4. REAL Business Game Mode and Reflection Mode

4.4 JESS - REASONING ENGINE IN A RULE-BASED SYSTEM

Rule-based programming paradigms emerged as a way to implement systems that appear to think and reason like human beings. Expert systems are a kind of rule-based system that can be developed to answer the types of questions generally directed to a professional, such as a doctor, computer technician, or teacher. A rule-based system consists of facts, rules, and an engine that acts on them. Facts represent declarative knowledge (propositional networks in our case); for instance, in REAL Business they represent an agent's goals, beliefs, desires, physical conditions, etc. Rules represent procedural knowledge, specifying how an agent should act under certain conditions (patterns). A rule-based system operates by applying rules to facts, asserting new facts during the execution of those rules, and allowing other rules to fire as a result. Rule-based systems (RBS) are simple to implement and understand, which makes them easy to extend and customize. JESS has become increasingly popular as an expert system shell due to its stable performance and its native support for the Java programming language. Using JESS, we can efficiently model users in terms of working memory, goals, activation level, and procedural skills.

Figure 5 illustrates an example of how knowledge is represented in a reflective agent versus a human user in REAL Planet. In the game, each entity is represented as a Java object that maintains its own status, such as age, energy level, or attack power, through a Java thread. These objects are observable in JESS in the form of shadow facts. The JESS reasoning engine monitors the agents' status as declarative facts in a virtually parallel fashion and triggers events whenever it finds matches against the patterns in the predefined rules. The rules reside in an expert module and a student module, allowing an agent to sense, reason, and act. An agent can generate goals and attempt to achieve them. For instance, in REAL, if a tiger's energy level falls below the minimum threshold, it takes the initiative of generating a goal of hunting for food. Sometimes the sub-goal could be to get a larger prey, if it has the option to choose. But the tiger is not the only living creature in this simulated world; it has to take other factors into account, such as temperature, the size of the prey, and its own predators.

Cognitive Knowledge: Propositional Networks (Reflective Agent) / Declarative Knowledge (Human)
  Reflective Agent - Facts:
    (Tiger (category carnivore) (food carnivore, herbivore, omnivore) (AttackPower 90))
    (Wolf (category carnivore) (food carnivore, herbivore, omnivore) (AttackPower 60))
  Human: Tiger is an animal. Tiger is a carnivore that eats other omnivores, carnivores, or herbivores. Wolf is an animal. Wolf is a carnivore that eats other omnivores, carnivores, or herbivores.

Cognitive Knowledge: Production Systems (Reflective Agent) / Procedural Skills (Human)
  Reflective Agent - Rules:
    (perceive (?entity1 ?entity2))
    (?entity1 (hungry true) (AttackPower ?p1))
    (?entity2 (hungry true) (AttackPower ?p2))
    => (if (?p1 > ?p2) (Attack ?entity1 ?entity2) else (Attack ?entity2 ?entity1))
  Human: If two hungry carnivores meet, the stronger will attack the weaker.

Cognitive Knowledge: Mental Images
  Reflective Agent: Tiger.gif, Wolf.gif

Cognitive Knowledge: Mental Model / Mental Simulation
  Human: The tiger and the wolf both look for food because they are hungry. When they encounter each other, each estimates whether it can get its food without losing its life: although the wolf is also hungry, it may be killed if it tries to attack the tiger, so it runs away. As a result, the tiger chases while the wolf escapes.

Figure 5: REAL Knowledge Representation
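To make the firing cycle concrete, here is a hypothetical Java rendering of the encounter rule in Figure 5; REAL itself expresses this as a JESS rule matched against shadow facts, so the names below are illustrative only.

    // Hypothetical Java rendering of the Figure 5 encounter rule; in REAL this logic
    // lives in a JESS rule over shadow facts, so all names here are illustrative.
    class Animal {
        final String name;
        boolean hungry;
        int attackPower;
        Animal(String name, boolean hungry, int attackPower) {
            this.name = name; this.hungry = hungry; this.attackPower = attackPower;
        }
    }

    class EncounterRule {
        // If two hungry carnivores perceive each other, the stronger attacks the weaker,
        // and the weaker runs away.
        static void fire(Animal a, Animal b) {
            if (!(a.hungry && b.hungry)) return;       // the rule's pattern does not match
            Animal attacker = a.attackPower > b.attackPower ? a : b;
            Animal target   = (attacker == a) ? b : a;
            System.out.println(attacker.name + " attacks, " + target.name + " runs away");
        }

        public static void main(String[] args) {
            Animal tiger = new Animal("tiger", true, 90);
            Animal wolf  = new Animal("wolf",  true, 60);
            fire(tiger, wolf);                         // prints: tiger attacks, wolf runs away
        }
    }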

5. STUDY METHOD

We evaluated the REAL application using both internal and external evaluation. The internal evaluation, focusing on internal system performance, addresses the question: "What is the relationship between the REAL cognitive framework and the performance of the REAL application?" More specifically, the internal evaluation answers the following questions: What do the reflective agent, expert agent, and pedagogical agent know? How do these agents do what they do? What should these agents do? This is an iterative design process in which we evaluate the advantages and disadvantages of a given set of technologies. We looked at ways to standardize our design so that the framework can be developed and deployed in a local context easily, and we evaluated several reasoning engines and game engines and tested them in our system. The external evaluation of REAL aims at the usefulness of the REAL system to the user, such as its ability to foster learning, motivate learners, and encourage scientific exploration. It addresses the questions: "What is the educational impact of REAL?", "Can REAL increase users' interest in the related domain?", and "Do users want to learn more about the related domain after playing with REAL?" The results of these two evaluations reveal the educational impact and value of REAL and therefore influence the direction of future research and development.

Procedure

We conducted the study in two sessions. Eight students participated in the first session; they worked individually on REAL Business. Four students participated in the second session, where they worked in groups of two. We made this change because we wanted to capture their thinking processes as they exchanged ideas with partners. All participants were students from Teachers College, Columbia University. Students first read the learning material on the targeted domain for 5 minutes. They then took a pre-test that evaluated their level of understanding. A reflective agent on the computer screen informed students of the goal of the game, and our researchers gave a brief orientation on REAL, showing them the different tools they could use in the different modes of the application. Students then designed and ran the simulation on their own or with their partners. Finally, students completed a post-test, and some participated in a brief interview.

Data source

Data were collected through interviews with users and researchers. Records of user activity were saved on each user's local computer and retrieved as a data source after the game. We also captured user interactions with video cameras or voice recorders.

Results

In both sessions, participants mentioned that the simulation was attractive and the game interface engaging. They said they paid more attention to the Game Mode in the beginning; for instance, they paid a lot of attention to the thought bubbles in the first several trials, and then spent more time observing the charts in Reflection Mode, such as the money monitor and the ice cream monitor. Later on, they usually adjusted the speed to the maximum, which meant they just wanted the general report about a given ice cream store. Students generally had no problem using the tools in Design Mode to teach the agent, although some spent time clicking around nodes and relations to figure out where to start.

In the first session, two students got up to game level II; the others were not able to pass game level I. The successful participants progressed in order from constant numbers to variables, to simple expressions, and then to complex expressions combining constants and variables, such as S/(V+S+C)*12; the unsuccessful participants did not make apparent progress. Some of them were unable to make the conceptual jump from using (concrete) numbers to (abstract) variables; their strategy proved insufficient because variables can be bound to prior sales data dynamically. In the second session, both groups were able to reach game level II, with hints from instructors on using variables instead of constant numbers. One group used constant numbers the first time and tried complex expressions using variables afterwards. They started to feel frustrated when they could not figure out the right formulas; tutors intervened at that moment and supplied hints so that they could continue to work on the simulation and finally figure out the solution. In the other group, after trying out different constant numbers, the participants realized that in order to run the second store successfully they would have to use design strategies that considered the previous sales dynamically on a daily basis (i.e., variables). They changed the formulas several times, realizing that they also needed to take the maximum purchasing power into account.
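For illustration only: a student formula such as S/(V+S+C)*12 might be evaluated roughly as below, with the variables bound dynamically to the previous day's sales and the result capped by the owner's purchasing power. The reading of S, V, and C as three flavors' prior sales, the 12-unit scale, and the cap are assumptions made for the sketch, not the exact REAL Business rules.

    // Illustrative only: evaluating a student-designed purchase formula such as
    // S/(V+S+C)*12. The meaning of S, V, C (prior sales of three flavors) and the
    // purchasing-power cap are assumptions, not the exact REAL Business rules.
    class PurchaseFormulaDemo {
        static double purchaseQuantity(double s, double v, double c, double maxPurchasingPower) {
            double planned = s / (v + s + c) * 12;        // the student's formula
            return Math.min(planned, maxPurchasingPower); // expert-agent constraint
        }

        public static void main(String[] args) {
            // Bind the variables to yesterday's sales instead of fixed constants.
            double yesterdayS = 30, yesterdayV = 20, yesterdayC = 10;
            System.out.println(purchaseQuantity(yesterdayS, yesterdayV, yesterdayC, 8.0));
            // planned = 30 / 60 * 12 = 6.0, which is under the cap of 8.0
        }
    }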

In some cases, participants did not finish the simulation even after they had entered the correct formulas, because they were not sure the current production system would yield the most profit. For example, when running the simulation, several customers might fail to get ice cream in their favorite flavors because it was sold out even though the correct formulas had been entered; the reason for this was that the sample size they were watching was too small.

The survey results show student evaluations of REAL on a scale of 1-5. Yes/No was used as the scale in the first study session and converted as shown below. As the first session was short (1 hour), only some students were able to finish the survey.

QUESTION (MEAN):
Does playing REAL increase your interest in probability? (4.0)
Does REAL give you greater confidence that you can understand and use probabilities? (3.7)
Do you want to learn more about probability? (3.9)
SCALE: 1 = strongly disagree; 2 = disagree; 3 = neutral; 4 = agree; 5 = strongly agree (Yes converted to 4.5, No to 1.5). N = 20.

Figure 6: Survey results on REAL Business in the post-test.

6. DISCUSSION

The REAL framework involved researchers from the areas of computer science, instructional design, and human cognition. It inevitably invited heated discussion and tension as we worked to balance design ideas from those researchers. We originally treated our work as design-based research; as the platform matured, we started to think about applying the lessons learned to the next iteration of the system design and following up with more empirical studies. The first prototype, REAL Planet, involved a complex, emergent artificial-ecosystem simulation. It was a good opportunity for the developers to consider technologies and design a system architecture that would be scalable and extensible. REAL Planet was designed so that future REAL-based applications in other subject domains could be developed easily by programmers.

While REAL is intended to be a reusable framework for building simulation-based proxies for participatory experiences otherwise unobtainable in classrooms, it is not presented as a panacea for teaching all educational subject matter. Even when fully implemented, the REAL framework will be a constructionist tool for building educational simulations into which educators would need to invest serious amounts of time and programming expertise in order to use it effectively. This cost of production makes the REAL framework impractical for subject areas that learners can already experience or conceptualize effectively. Thus the REAL framework will be best suited to building simulations of deterministic complex systems that learners 1) typically fail to understand even with instruction, and 2) otherwise have little opportunity to witness, replicate, or experience for themselves. This is not to insinuate that REAL would not greatly simplify and accelerate the production of simulations, but merely that it will not be a framework that can be repurposed by non-programmers. Despite the technical challenges one might face in using the REAL framework to develop one's own simulations, REAL simulations themselves, once produced, are, by design, constructionist learning environments that allow non-technically inclined learners to transparently see, customize, and even build the underlying system behaviors while the simulation is running.

We carried out internal and external evaluations of REAL Business. Because the system architecture was designed with generality and scalability in mind, we finished the prototyping of REAL Business within a couple of weeks and spent the rest of the time on instructional design. We now address the questions asked earlier: What do the reflective agent, expert agent, and pedagogical agent know? How do these agents do what they do? What should these agents do? In REAL Business, the reflective agent knows about the learning processes a user undergoes. For instance, it knows how often users switch from Design Mode to Game Mode, what kinds of changes have been made at each step, and what the user's understanding of the domain is. Later, users will be able to choose among different versions of the reflective agents they have built and launch them in the game. The expert agent is able to interpret the free-form formulas users provide to teach the reflective agent. It knows how the system functions in a specific community, and it can give hints when it judges that certain rules are not valid, preventing users from advancing to the next game level. The pedagogical agent knows whether and when it should give positive or negative feedback. It traces the rules designed by the user and compares them with the rules defined in the expert module. If the rules are not consistent with those in the expert module, the pedagogical agent points out the errors, highlighting them in red and instructing users to modify the rules or referring them to more reading material. REAL is embedded in a rule-based system that maintains a working memory describing the details of the simulation world. As each intelligent agent (customer) walks into the ice cream store, its purchasing preferences are obtained and its behaviors are observed by the cognitive agents. Pattern matches trigger the rules defined in those cognitive agents to fire, which enables the agents to take action accordingly.

What is the educational impact of REAL? Does playing REAL increase users' interest in the related domain? Do users want to learn more about the related domain? Most students did well on the pre-test and post-test, suggesting that they already had some basic knowledge of the domain. Participants in our studies showed great interest in our reflective game. Most of them indicated that the game was very fun and believed that children would love it; some expressed interest in continuing to play with it after the study. According to the survey results in Figure 6, most participants thought REAL had a positive impact on their learning and would be interested in learning more in this domain in the future.

Three factors may have contributed to participants' failure to win a game in REAL Business. 1) The game instructions were not delivered clearly, due to the time limit and the wording of the instructions. For instance, the variables S, V, and C were not used as examples in the orientation of the game, so many of the students ignored that information in the first few trials, and the wording of some terms was confusing for some people. 2) There was not enough just-in-time feedback to help students make progress when they got lost or when they needed positive confirmation. The simulation indeed helps them construct a "how the store works" mental model when they begin to play the game, but after they become familiar with the game they focus mainly on the charts and look forward to the results of the simulation rather than the simulation per se. Two problems arose at this point: they had to wait some time to see the simulation results, and they sometimes noticed a discrepancy between their beliefs and the agent's actions, causing them to abandon their observation while the simulation was still playing. 3) As in a traditional classroom, many subjects showed a weak association between the concepts embedded in the problem sets and the same concepts in the simulation. For instance, many students did well on probability problems about sample sizes and formulas, but rather than focusing on the relevant charts and graphs that provided this information in the simulation, a few users focused on features of the individual characters (e.g., thought bubbles). They failed to recognize the rule governing the system: probability applies to a large sample over time. Schank and Neaman (2001) claimed that "the opportunity to make mistakes and experience failures is essential to learning." This is the benefit we think educational simulation games can bring to the traditional classroom. After experiencing successive failures, we expect to see students begin to parse relevant information from less pertinent information.
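The rule the students missed is, in effect, the law of large numbers, stated here in its standard form rather than as anything specific to REAL: if each simulated customer independently requests a given flavor with probability p, and X_i is 1 when the i-th customer does so, then the observed fraction over n customers satisfies

    \hat{p}_n = \frac{1}{n}\sum_{i=1}^{n} X_i \;\longrightarrow\; p \quad \text{as } n \to \infty ,

so short runs of the simulation can look quite different from the designed probabilities, which is why individual thought bubbles are a poor guide compared with the accumulated charts.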

7. CONCLUSION

From the studies of the REAL applications, we found the following values of REAL as a cognitive framework. 1) The learning environment is motivational. Participants in our studies showed great interest in our reflective game; most of them indicated that the game was very fun and believed that children would love it, and some expressed interest in continuing to play with it after the study. 2) REAL encourages reflection. Most students mentioned that they ran the game at high speed once they were familiar with the game environment, paying more attention to the statistical reports in the Reflection Mode. They actively switched back and forth between Design Mode and Game Mode and made great efforts to improve the knowledge base for the reflective agent. 3) The REAL framework is reusable and extensible. We had a quick turn-around in developing an application in a new domain, for instance REAL Business on probability. The modules were well defined, and developers could easily fit in and work on a specific area while collaborating efficiently with others. 4) The REAL platform encourages collaboration among researchers with interests in such areas as artificial intelligence, intelligent computer-aided instruction, human-computer interaction, human cognition, and educational games, to name a few. It can serve as a generic platform for those researchers to embed their ideas and test their hypotheses.

The broad area REAL covers also exposes its limitations. Most of the time, instructional strategies are domain-specific; it is therefore hard to define a generic way to specify when and how just-in-time feedback should be delivered to users when they need it. The feedback could be negative, positive, direct, indirect, immediate, delayed, etc., and the timing and type of feedback will determine the quality of learning. In a future version, researchers may need to observe and collect the coaching hints and pedagogical information supplied by human tutors during previous studies and build them into the knowledge base of a pedagogical agent. For example, when students want to close the simulation prematurely, the pedagogical agent should appear and tell them that some phenomenon (e.g., a local minimum in a graph) is normal, and encourage them to finish the simulation and see the final results. Nevertheless, the REAL framework is designed in a way that allows developers to add feedback with little effort.

This study builds a framework for developing intelligent agents in simulation games to encourage active reflection and engaged learning. REAL gives students opportunities to take an inner look at their thinking processes, evaluate their knowledge, and recognize their misconceptions. This research benefits greatly from the field of Intelligent Tutoring Systems in an effort to offset some of the limitations of traditional games (i.e., entertaining rather than educational). As a result, the generation of a game simulation is based upon an understanding of the user, a pre-defined expert agent, and carefully designed pedagogical strategies. We believe that our design and implementation of REAL can contribute to the much-needed ongoing efforts to develop intelligent agents for simulation in educational games.

REFERENCES

Aguilera, M., & Méndiz, A. (2003). Video games in education. Computers in Entertainment, 1, 1-14.

Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Erlbaum.

Anderson, J. R. (1983). A general learning theory and its application to the acquisition of proof skills in geometry. Palo Alto, CA: Tioga Publishing.

Bai, X., & Black, J. (2004). TALE: A teachable agent embedded in an intelligent tutoring system. In G. Richards (Ed.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2004 (pp. 1070-1072). Chesapeake, VA: AACE.

Bai, X., & Black, J. B. (2005). REAL: A generic intelligent tutoring system framework. In C. Crawford et al. (Eds.), Proceedings of Society for Information Technology and Teacher Education International Conference 2005 (pp. 1279-1283). Chesapeake, VA: AACE.

Bai, X., & Black, J. B. (2006). REAL: An agent-based learning environment. Paper presented at the Agent-Based Systems for Human Learning Conference, Hakodate, Japan.

Bargh, J. A., & Schul, Y. (1980). On the cognitive benefits of teaching. Journal of Educational Psychology, 72, 593-604.

Baylor, A. L., & Ritchie, D. R. (2002). What factors facilitate teacher skill, teacher morale, and perceived student learning in technology-using classrooms? (reference details missing)

Biswas, G., Schwartz, D. L., Bransford, J. D., & The Teachable Agents Group at Vanderbilt. (2001). Technology support for complex problem solving: From SAD environments to AI. Menlo Park, CA: AAAI/MIT Press.

Biswas, G., Schwartz, D. L., Leelawong, K., Vye, N., & TAG-V. (2005). Learning by teaching: A new paradigm for educational software. Applied Artificial Intelligence, 19(3).

Black, J. B. (1992). Types of knowledge representation. CCTE Report. New York: Teachers College, Columbia University.

Black, J. B. (2006). Games with explanatory transparency for better learning and understanding. Paper presented at the Game Developers Conference (Serious Games Summit), San Jose, CA.

Blandford, A. E. (1994). Teaching through collaborative problem solving. Journal of Artificial Intelligence in Education, 5, 51-84.

Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13, 4-16.

Chan, T. W., & Chou, C. Y. (1997). Exploring the design of computer supports for reciprocal tutoring. International Journal of Artificial Intelligence in Education, 8, 1-29.

Chi, M. (2005). Commonsense conceptions of emergent processes: Why some misconceptions are robust. The Journal of the Learning Sciences, 14, 161-199.

Conati, C., & Zhao, X. (2004). Building and evaluating an intelligent pedagogical agent to improve the effectiveness of an educational game. Proceedings of IUI '04, International Conference on Intelligent User Interfaces (pp. 6-13). Island of Madeira, Portugal.

Craik, K. (1943). The nature of explanation. Cambridge: Cambridge University Press.

Davis, R., Buchanan, B., & Shortliffe, E. (1977). Production rules as a representation for a knowledge-based consultation program. Artificial Intelligence, 8(1).

Dillenbourg, P., & Self, J. A. (1992). A computational approach to socially distributed cognition. European Journal of Psychology of Education, 3, 353-372.

Gee, J. P. (2003). What video games have to teach us about learning. New York: Palgrave.

Jacobson, M. J., & Kozma, R. B. (2000). Innovations in science and mathematics education: Advanced designs for technologies of learning. Mahwah, NJ: Erlbaum.

Klawe, M. (1998). When do the use of computer games and other interactive multimedia software help students learn mathematics? NCTM Standards 2000 Technology Conference, Arlington, VA.

Laird, J., Newell, A., & Rosenbloom, P. (1987). SOAR: An architecture for general intelligence. Artificial Intelligence, 33, 1-64.

Leelawong, K., Wang, Y., Biswas, G., Vye, N., Bransford, J. D., & Schwartz, D. L. (2001). Qualitative reasoning techniques to support learning by teaching: The teachable agents project. In G. Biswas (Ed.), AAAI Qualitative Reasoning Workshop (pp. 73-81). San Antonio, TX.

Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.

Schank, R., & Neaman, A. (2001). Motivation and failure in educational simulation design. In K. D. Forbus & P. J. Feltovich (Eds.), Smart machines in education: The coming revolution in educational technology (pp. 37-69). Cambridge, MA: MIT Press.

Sheingold, K. (1991). Restructuring for learning with technology: The potential for synergy. Phi Delta Kappan, September, 17-27.

Shneiderman, B., & Plaisant, C. (2004). Designing the user interface: Strategies for effective human-computer interaction. New York: Addison Wesley (p. 580).

Shortliffe, E. H. (1976). Computer-based medical consultations: MYCIN. New York, NY: American Elsevier.

Thalmann, D., Noser, H., & Huang, Z. (1997). Autonomous virtual actors based on virtual sensors. In R. Trappl & P. Petta (Eds.), Creating personalities (pp. 25-42). Lecture Notes in Computer Science. Springer Verlag.

Viswanath, K., Adebiyi, B., Leelawong, K., & Biswas, G. (2004). A multi-agent architecture implementation of learning by teaching systems. Proceedings of the 4th IEEE International Conference on Advanced Learning Technologies (pp. 61-65). Joensuu, Finland.

Weiss, G. (1999). Multiagent systems. Cambridge, MA: The MIT Press.

Wooldridge, M., Müller, J. P., & Tambe, M. (1995). Intelligent Agents II. IJCAI'95 Workshop (ATAL), Montreal, Canada. Springer.