The International Journal of Virtual Reality, 2008, 7(1):1-8


Embodied Tutors for Interaction Skills Simulation Training

Robert Hubal
RTI International, Research Triangle Park, NC 27709, USA

Abstract—This paper describes intelligent virtual tutors for interaction skills training who can serve the roles of demonstrator, coach, trainer, mentor, and observer. These roles meet the needs demanded of tutors across stages of skill learning, stages that were derived from simulation-based individual training designed to meet international distribution standards. Lessons learned regarding elicitation of instructional knowledge, virtual tutor behavior modeling, and performance measurement are discussed.

Index Terms—Coaching, distributed training, interaction skills, intelligent tutoring, mentoring, remediation.

I. FAPV AND TUTORING

Manuscript received on 16 July 2007. E-Mail: [email protected].

The series of stages through which students advance in mastering skills has been described as familiarization, acquisition, practice, and validation (FAPV) [1]. Becoming familiarized with to-be-mastered skills implies gaining knowledge about components, events, or procedures. Acquiring a skill is learning "school solution" or best-practice techniques and procedures, often in lock-step (as opposed to free-play) fashion. During practice, students internalize techniques and procedures and learn strategic knowledge about their application, experiencing multiple scenarios with more free play under different 'setup' or 'fault' conditions. Validation tests students on their performance of skills to established standards within set conditions.

Meanwhile, the literature suggests that tutoring greatly benefits students. This is true for one-on-one instruction, which results in highly significant improvement in learning [2], and also for automated, technology-assisted instruction [3]. Both assessment of student performance and remediation provided to students are crucial components of tutoring. Assessment requires comparison of oral or written knowledge test scores against established competency grades, and monitoring of student performance on 'standardized' skills within validation conditions. Tutorial remediation is based on test score comparisons or behavioral observations, along with the instructional knowledge to help the student learn.

Tutors' roles change as students advance through stages of learning. Initially, tutors transmit important information and demonstrate appropriate techniques, familiarizing relatively passive students. As students pass knowledge tests, they can begin directing their own learning, and tutors act as facilitators or coaches, yielding control but reacting to differences between

student actions and performance criteria [4, 5]. Typically, student actions are observed by tutors both explicitly (e.g., through tests) and implicitly (e.g., through the students' activities; in a simulation-based learning environment [6] these activities include tracking navigation, keystrokes, mouse movement, menu selection, and elapsed time). Eventually, students have acquired the skills and the know-how to apply them; tutors then validate skills and act as mentors for proficient students, prepared to provide guidance or feedback but refraining from interjecting absent an obvious error. Types of learning support provided include direct support (help functions, mentoring), encouragement to reflect so as to realize gaps in knowledge, and internal support (reducing task complexity, focusing attention).

II. INTELLIGENT TUTORING SYSTEMS

Early computer-assisted instruction systems were the equivalent of "automated flash cards" [7], having very little adaptation to students or their responses and no modeling of students' cognitive abilities. Later systems did incorporate student models, but considered student knowledge as "buggy" expert knowledge [8], not knowledge that is represented very differently from expert knowledge. These systems were thus confined to specific domains where expert knowledge and student mistakes were well understood. Intelligent tutoring systems today are more flexible in domain applications as well as in student modeling and in instructional strategies [9, 10, 11, 12].

Tutoring systems that are merged with simulation-based training follow several principles. They provide scaffolding [13, 14], gradually guiding students by providing decreasing support as knowledge and skills are gained in a planned fashion through case-based scenarios [15]. They provide repeated practice on isolated skills with feedback linked to learning objectives, and collect critical-skill performance data based on student actions as evidence of competency [16]. They train up skills, increasing factors like volatility, ambiguity, uncertainty, and complexity. They identify trainable moments [17], that is, opportunistic events (based on student actions) where just the right support can enhance learning. They target multiple levels of learning, such as the behavioral, conceptual, and metacognitive levels [18], as well as specific sub-populations [19]. They include after-action reports detailing customer-defined performance measures for the critical tasks, showing session histories relating student actions against specific



performance criteria [16]. And they embed training within naturalistic environments [1].

Fig. 1. Representative after-action review.

To meet these principles, tutoring systems typically comprise three components [11, 20]:

• A domain model contains familiarization content, qualitative expert reasoning models, and meta-knowledge of typical student mistakes, misunderstandings, and misconceptions in naturalistic settings. More broadly, for simulation training, the domain model encompasses the virtual environments and objects and their behaviors that make each simulation run appear realistic.

• A tutor model includes assessment and remediation strategies, representational knowledge, understanding of how to adapt to students, and performance measures. When instantiated as a virtual tutor (see below, where five roles are described that a tutor might take), the model provides appropriate semantic and emotional reactions to the student's inputs, following a flexible script. Subject-matter experts provide the basic inputs for the scripts, and also review the scripts to ensure appropriate and consistent results. In simulation training systems, the smart presentation of guidance and feedback, enabling textual, graphic, auditory, and discourse interaction with an embodied intelligent tutor [21], is included in the tutor model.

• A student model captures students' ongoing mastery and deficiencies as well as learning motivators. For simulation training, a key element of this model is ensuring that a student has succeeded in achieving the overall goals of a given session. Another key element is to track whether or not the student has obtained the information necessary to make appropriate decisions. Yet another element is to design the model as an adaptive system that learns from continued use [22].

III. REAL-WORLD SIMULATION TRAINING REQUIREMENTS

Increasingly, simulation-based training is distributed and modular. For instance, the U.S. Army has led a push towards "lifelong learning centers" through which soldiers can download and run simulation-based training packages from anywhere in the world [17]. As a result, the Army requires that work performed under its primary simulation-based training contracting mechanisms meet SCORM standards that "enable interoperability, accessibility and reusability of web-based learning content" [23]. Specific training requirements include:

• Web-downloadable simulations that run standalone as asynchronous training (i.e., with no live instructor).

• Simulations that provide multiple levels of scaffolding, defined by SCORM learning activity trees, including the potential to interrupt to provide real-time feedback.

• Simulations that provide repeated practice on isolated skills with feedback, linked to learning objectives defined by SCORM learning activity trees. A typical skill may have a single acquire and several practice and validate lessons.

• Simulations that collect critical-skill performance data based on student actions as evidence of competency. Learning objectives and measures of performance on critical skills are defined by subject-matter experts.

• Simulations that enable students to demonstrate complex-skill competency, either within a single simulation session or across multiple sessions. SCORM sequencing and navigation rules define how to roll up performance data.

• After-action reports that detail pass/fail on performance measures for the critical tasks [24] (see Fig. 1), showing session histories relating student actions against specific performance criteria.
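As a sketch of how such pass/fail after-action reports might be assembled, the following Python fragment rolls session scores up against expert-defined performance measures. The measure names, the 0-to-1 scoring scale, and the thresholds are illustrative assumptions, not details from the systems described here.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceMeasure:
    """One expert-defined measure of performance on a critical skill."""
    name: str
    threshold: float  # minimum passing score on an assumed 0..1 scale

@dataclass
class SessionHistory:
    """Observed scores from one training session, keyed by measure name."""
    scores: dict = field(default_factory=dict)

def after_action_report(measures, history):
    """Relate the session history to each performance criterion as pass/fail."""
    lines = []
    for m in measures:
        score = history.scores.get(m.name, 0.0)
        verdict = "PASS" if score >= m.threshold else "FAIL"
        lines.append(f"{m.name}: {score:.2f} ({verdict}, threshold {m.threshold:.2f})")
    return lines

# Hypothetical measures for an interaction-skills session
measures = [PerformanceMeasure("politeness", 0.8),
            PerformanceMeasure("obtained_consent", 1.0)]
history = SessionHistory(scores={"politeness": 0.9, "obtained_consent": 0.0})
report = after_action_report(measures, history)
```

In a SCORM-compliant system, sequencing and navigation rules would govern how such per-session results roll up across lessons; here the rollup is reduced to a flat list.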

If a virtual tutor is embedded in this form of simulation-based training, then it might take one of five roles along a continuum of intrusiveness [25] (see TABLE 1). As a demonstrator, the tutor demonstrates best practices and step-by-step techniques. As a coach, the tutor actively prompts and assists as the students perform, suggesting actions to guide students while providing remedial feedback after student actions. As a trainer, the tutor provides content-relevant help, with students largely in control, while frequently assessing knowledge to keep learning on track. As a mentor, the tutor monitors student actions and only offers context-sensitive help or remediation or critique when necessary or requested. As an observer, the tutor observes and records and conducts after-action review involving playback and reflection.

TABLE 1: TUTOR ROLES. Each row gives role, learning stage, and presentation.

Demonstrator (Familiarization): Text, graphics, video, animations, simulation (but Tutor merely demonstrates, is not interactive).

Coach (Acquisition): Simulation augmented with highlighting, pointers, backup capabilities. Student interacts with simulated environment under careful guidance by the Tutor.

Trainer (Acquisition/practice): Student interacts with simulated environment. Tutor may chime in upon egregious errors or when it perceives student has strayed from appropriate path.

Mentor (Practice): Level of intervention is variable; how many mistakes student can make, how much time student has, how much remediation to give when student errs, all can be adjusted.

Observer (Validate): Student performs task with no aid from Tutor. After-action review given after completion of a task.
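One way to read TABLE 1 is as an intervention policy keyed on the tutor's role. The sketch below encodes that continuum; the numeric severity thresholds are invented placeholders, since the paper leaves the intervention level as an adjustable parameter.

```python
# Roles ordered along the continuum of intrusiveness, most intrusive first.
ROLES = ["demonstrator", "coach", "trainer", "mentor", "observer"]

def should_intervene(role, error_severity, student_requested_help=False):
    """Decide whether the tutor speaks up now, given its role.

    error_severity: 0.0 (no error) .. 1.0 (egregious error). The 0.3 and
    0.7 thresholds are illustrative assumptions, not from the paper.
    """
    if role in ("demonstrator", "coach"):
        return True  # tutor leads the presentation or actively prompts
    if role == "trainer":
        return student_requested_help or error_severity >= 0.3
    if role == "mentor":
        return student_requested_help or error_severity >= 0.7
    if role == "observer":
        return False  # feedback is deferred to the after-action review
    raise ValueError(f"unknown role: {role}")
```

For example, a trainer would react to a moderate error that a mentor would let pass, and an observer stays silent even on an egregious one.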

IV. LEARNING OBJECTIVES

Learning objectives are measurable activities that a student must exhibit after learning has taken place [26]. Learning objectives that are well-defined, structured, and well-understood, such as demonstrating acquired skills at troubleshooting equipment or at inspection, can be met using structured, ordered materials with content-relevant help, provided by trainers. Learning objectives for these skills are known to be well-defined because there are usually manuals or standard procedures that students can access to ensure that they perform the skills appropriately.


In contrast, objectives that are less easily defined, not as obviously structured, or less well understood, such as acquiring interaction skills, can better be met using more free-form instructional materials with context-sensitive help, provided by coaches and mentors. Learning objectives for these skills are not typically well-defined because there are rarely manuals or standard procedures for ensuring appropriate performance of the skills. As an example, there are some protocols to follow when interacting with trauma patients or with pediatric patients in a clinic, but no set order nor defined dialogs to ensure that these interactions are successful [27, 28].

The focus for, and indeed the majority of, simulation-based training is on procedural skills where there are, typically, best practices and known correct and incorrect student actions. So there have been some applications of assessment/remediation systems to simulation-based learning that support 'learning by doing' [16], [29, 30, 31]. Many of the systems focus on problem-solving, where student activity is goal-driven and the designer can communicate the goal structure underlying the problem-solving task. In these systems, the units of procedural knowledge are rules called productions [32], and the student's knowledge is represented as a production set. (Some advanced systems represent student, and expert, knowledge as "fuzzy" rules [33].) These systems are fine for certain applications while insufficient for others: monitoring and remediation are more easily accomplished when the learned skill is easily specified; they are harder when they must not ignore the gaining of communication or strategic skills and reflection on learning, which many researchers contend is critical for truly constructing (rather than passively acquiring) knowledge [34].

V. INTERACTION SKILLS TRAINING

Most of the rest of this paper focuses not on simulation-based procedural skills training but instead on simulation-based interaction skills training, such as interviewing, negotiating, and providing topical understanding. Traditionally, interaction skills training has relied on peer-to-peer role playing and passive learning (e.g., through videos), limiting the practice time and variety of scenarios that students encounter. By enabling simulation-based training, students can instead practice skills in a safe, repeatable setting. It is exactly this practice that promises improved learning [35].

For students to practice interaction skills requires them to have interaction partners. Responsive virtual human technology has been used by many application developers for this purpose [27], [36, 37, 38]. The technology defines, through various behavior models, how a virtual human should react in response to a student's actions. Though the implementation details differ across applications (e.g., in the level of realism, the types of behavior models used, and the means of interacting with the virtual humans), students become nearly universally engaged with the virtual humans in the context of learning [39].

For both standalone and distributed interaction skills training, virtual human tutors are quite appropriate to lead to competency and mastery of skills [40]. The realism of interacting with an emotive, responsive virtual human engages students, while immediate guidance and appropriate feedback (through good student and instructor models) leads to effective



acquisition and greater retention [4]. Enabling students to query virtual tutors, either during or after an interaction, allows strategic and reflective thinking that together produce stronger learning [29][34][41]. Further, interaction skills training increasingly employs virtual humans as interactive partners, hence virtual human tutors fit naturally with this design.

Fig. 2. Virtual tutors and environments. See Color Plate 1.

To embody a virtual tutor (see images in Fig. 2) involves use of what are now industry-standard tools and techniques, having 3D interactive human models (full-body or head-only) rendered by a gaming graphics engine. One decision demanded of a developer (see TABLE 2) is how to represent the virtual tutor: what choices of gender, age, body type, clothing style, and ethnicity to create [25]. Another decision is what animations to generate that have instructional merit, such as breathing, blinking, gesturing, and posture [42]. Dynamic facial expression, and lip-synching to text-to-speech or to prerecorded speech, adds to the realism of the virtual tutor.

For interaction skills, the tutor is given domain knowledge to track student actions on lexical, syntactic, semantic, and pragmatic aspects of interactive dialog. For instance, certain skills require politeness, responsiveness to a virtual human's questions, and empathy for the virtual human's feelings [20][29][35]. Information on these aspects of dialog is extracted through context-specific linguistic analysis of the student's verbal responses to the situation. The tutor also tracks, as appropriate, negative student habits, such as the use of gratuitous profanity, impoliteness, and overuse of technical terminology and jargon. Domain-specific language models allow the virtual tutor to manage simulation flow by controlling how virtual humans respond with answers, denials, objections, and challenges to the student's requests, questions, and commands [43]. Student input is analyzed to generate (for text-to-speech) or select (for prerecorded speech) the most appropriate response, based on the tutor's current state.

Of possible benefit to the students, at least for certain tutor roles, is that they be granted control over some virtual tutor parameters (see TABLE 2). Controls are available for such parameters as level of support (a sliding scale rather than discrete choices), need for proactive guidance or reactive feedback, extent of review after each iteration of an interaction has completed, and even personality style. With virtual humans

as tutors, these parameters are immediately accessible.

TABLE 2: SAMPLE TUTOR CONTROL PARAMETERS.

Appearance: Gender; age; ethnicity; realism
Personality: Humor; politeness; volatility
Role: Level of support; record/playback the interaction
Application flow: Timeout; scenario difficulty; minimum errors allowed; natural language reliability
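A minimal sketch of how these student-adjustable controls might be held follows; the field names, the 0-to-1 scales, and the defaults are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TutorParameters:
    """Student-adjustable tutor controls (hypothetical field names)."""
    support_level: float = 0.5   # sliding scale: 0.0 observer-like .. 1.0 demonstrator-like
    proactive: bool = False      # proactive guidance vs. reactive feedback
    review_extent: float = 0.5   # how much after-action review per iteration
    humor: float = 0.2           # personality style knobs
    politeness: float = 0.9
    volatility: float = 0.1

    def clamped(self):
        """Keep every sliding scale within [0, 1] after user adjustment."""
        for name in ("support_level", "review_extent",
                     "humor", "politeness", "volatility"):
            setattr(self, name, min(1.0, max(0.0, getattr(self, name))))
        return self
```

Modeling the support level as a continuous scale rather than discrete role choices matches the sliding-scale control described above.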

VI. INTELLIGENT ASSESSMENT AND REMEDIATION

Developing intelligent tutoring systems for interaction skills requires modeling expert, student, virtual human, and virtual tutor behaviors, and placing those models within a simulation system. Some of the issues involved are presented here.



• The delineation and aggregation of student performance measures requires careful assessment of protocols and standards in the literature, if they even exist for what are often poorly-defined interaction skills learning objectives.

• Having realistic-enough scenarios and tutoring requires rigorous expert or instructor input and review. These individuals can say whether or not the activities that students, virtual humans, and virtual tutors exhibit within the simulation accurately reflect the actions of participants and practitioners in live environments.

• Though the tutor behavior models may be implemented using a number of different architectures, conceptually they all work by having the tutoring component continuously monitor the student's actions and other events (e.g., passage of time). For instance, one architecture employs a transition network to control what actions or inputs are expected from the student at a point in time during the simulation run, and how any virtual humans should react to those actions or inputs or any other events [43]. Briefly, this design maintains state, so that a given action or event causes the simulation to step (or "transition") from the current state to a new state. Before, during, and after each transition from state to state, the network may specify behaviors to be performed, such as having the virtual human gesture, or causing a change in, say, the lighting of the virtual environment. The specified behaviors often depend on the exact action or event which causes the transition; a different action or event could cause different behaviors to be performed.

In this design, a virtual tutor's behaviors can be conceived as a separate network that may be enmeshed with the main network. Every transition between states in the main network would signify a tutor state, in which the tutor may choose to perform a behavior. For an expected transition where the student's action or input causes a "good" change in state, the tutor may compose a congratulatory message. For an expected transition where the student's action or input causes a "bad" change in state, the tutor may instead compose a hint or an explanatory message, since this transition represents a trainable moment, where the right tutorial message might positively impact learning.

Whether or not the tutor expresses the message depends on other components of the tutor's state, such as its role. As a coach the tutor would certainly express the message, as an observer the tutor would certainly not express the message, and as a trainer or mentor the tutor may or may not, depending on other state considerations. Furthermore, how the tutor expresses a message also depends on the tutor's state. Though as a virtual human the tutor can maintain unlimited patience, the tutor may be programmed to become more frustrated as the student repeats unacceptable actions, and the tutor's message may be coded so as to portray that increased impatience. To code for having the same tutor message expressed in a more or less impatient manner, and more generally to enable the tutor to piece together different messages into a coherent single message, requires a means to parameterize exactly what gets expressed. Again, there are a number of architectures capable of dynamically determining the message produced [16][29][43]. They may work differently if the tutorial message is expressed through after-action review or text-to-speech or prerecorded speech, but the general idea is the same across these architectures.
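The transition-network design with an enmeshed tutor network can be sketched as below. The states, student actions, behaviors, and message texts are hypothetical examples, not drawn from any of the cited systems; only the overall shape follows the description above.

```python
class TransitionNetwork:
    """Main network: maintains state; student actions cause transitions."""
    def __init__(self, start):
        self.state = start
        self.transitions = {}  # (state, action) -> (next_state, quality, behaviors)

    def add(self, state, action, next_state, quality, behaviors=()):
        self.transitions[(state, action)] = (next_state, quality, behaviors)

    def step(self, action, tutor):
        """Transition on a student action; every transition also yields a
        tutor state in which the enmeshed tutor may act."""
        next_state, quality, behaviors = self.transitions[(self.state, action)]
        self.state = next_state
        tutor.observe(quality)
        return behaviors

class Tutor:
    """Enmeshed tutor network: composes a message on each transition,
    expressing it or not according to its role."""
    def __init__(self, role="coach"):
        self.role = role
        self.messages = []

    def observe(self, quality):
        if self.role == "observer":
            return  # record silently; feedback waits for after-action review
        if quality == "good":
            self.messages.append("Well done.")
        elif quality == "bad":  # a trainable moment
            self.messages.append("Hint: consider greeting the patient first.")

# Hypothetical fragment of a patient-interview network
net = TransitionNetwork("greeting")
net.add("greeting", "introduce_self", "rapport", "good",
        behaviors=("virtual_human_smiles",))
net.add("greeting", "demand_answers", "defensive", "bad",
        behaviors=("virtual_human_frowns",))

tutor = Tutor(role="coach")
behaviors = net.step("demand_answers", tutor)  # coach reacts to the "bad" transition
```

Parameterizing the message (for instance, an impatience level that rises as the same unacceptable action repeats) would hang naturally off the `Tutor` state in this design.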

• Diagnostic assessment measures competencies before, during, and after training. Performance is measured against a criterion; therefore, results should focus training on what students need to know and how students would actually perform skills. For interaction skills, as indicated, the idea is to engage students in dialogs with virtual humans, within a virtual environment that mimics the live environment, and have a virtual tutor present to monitor and comment on the dialogs. Easier and harder scenarios, focusing on more or less basic interaction skills (e.g., working up from learning how to greet a person or introduce oneself [36], to learning how to handle an encounter with a relatively calm person who has a relatively minor concern to be addressed [39], to learning how to manage a potentially explosive situation with potentially multiple persons involved [43]), require both virtual humans to act as conversational partners who understand what the current training objective is and virtual tutors who can help facilitate the learning.
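The easier-to-harder scenario progression just described can be sketched as a competency-gated ladder. The scenario names echo the examples above, while the unlock thresholds are invented for illustration.

```python
# Hypothetical scenario ladder, ordered from basic to hardest; the second
# element is the competency floor (0..1) that unlocks each scenario.
SCENARIOS = [
    ("greet_and_introduce", 0.0),    # basic: greeting and introducing oneself
    ("calm_minor_concern", 0.6),     # encounter with a calm person, minor concern
    ("volatile_multi_person", 0.85), # managing a potentially explosive situation
]

def next_scenario(competency):
    """Pick the hardest scenario the student's measured competency unlocks."""
    eligible = [name for name, floor in SCENARIOS if competency >= floor]
    return eligible[-1] if eligible else SCENARIOS[0][0]
```

The `competency` value would come from the diagnostic assessment itself, measured against the criterion in force for the current learning objective.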






• When the student errs, there are a number of different types of errors. These include errors of omission (of key steps), errors of commission (extraneous steps are included, some inconsequential, some critical), sequencing errors (sometimes order matters, sometimes it does not), and timing errors. The tutor needs to respond at the appropriate level given the specific types of errors and the tutor's own role. When possible the tutor might specify the familiarization source that the student might review to understand the error. However, since interaction skills often do not have formalized best practices, the domain knowledge provided to the tutor must be flexible enough to allow the tutor to evaluate the deviance by the student from expected interaction paths.

• The simulation itself should support a number of commands, shown in TABLE 3, available to the virtual tutor. Which commands are chosen as a means of remediation depends on the role that the tutor takes. For instance, as a demonstrator, presentation of information may be in multiple modalities, including textual, spoken word, line drawings, pictures, video, and simulated demonstrations. The presentation can be "marked up" so that the student may follow links to gain better understanding of particular concepts, if desired. Particular classes of past student errors might bias the presentation. Thus, if the student has had problems locating or describing objects that are referred to in the dialog, a video or an auditory cue or a demonstration might be helpful. In contrast, if the student has had problems identifying objects, pictures or voiceovers might be more helpful, whereas if the student has had problems with sequences of events, then highlighting or display of flowcharts might help.
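The error types above (other than timing, which would require timestamps) might be detected by comparing the student's observed steps against an expected sequence, as in this sketch; the consent-interview step names are hypothetical.

```python
def classify_errors(expected, observed):
    """Compare a student's observed steps against the expected sequence,
    returning omission, commission, and sequencing errors."""
    errors = {"omission": [], "commission": [], "sequencing": []}
    exp_set, obs_set = set(expected), set(observed)
    # Omission: an expected key step never performed
    errors["omission"] = [s for s in expected if s not in obs_set]
    # Commission: an extraneous step that was performed
    errors["commission"] = [s for s in observed if s not in exp_set]
    # Sequencing: shared steps performed in the wrong relative order
    common = [s for s in observed if s in exp_set]
    expected_order = [s for s in expected if s in obs_set]
    if common != expected_order:
        errors["sequencing"] = common
    return errors

# Hypothetical consent-interview steps
expected = ["greet", "explain_purpose", "ask_consent", "begin_survey"]
observed = ["greet", "ask_consent", "explain_purpose", "chat_about_weather"]
errors = classify_errors(expected, observed)
```

Whether a given sequencing error matters (sometimes order does not) would be a per-step annotation in the domain knowledge, not something this comparison can decide on its own.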

TABLE 3: COMMANDS FOR VIRTUAL TUTORING.

Commands issued by the tutor during a simulation run: Go to named location; highlight object; perform action as student; reset internal state to specified point; report student actions; respond to state query; pause/resume; initialize; terminate.

Commands that are generally pre-scripted (though the content may be dynamically determined): Display text; remove text; play a voiceover; wait for a student action; wait for a set time; display a quiz.
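A sketch of how a simulation might expose a subset of TABLE 3's commands to the tutor follows; the handlers are reduced to logging stubs, and the role-dependent remediation choices are invented for illustration.

```python
class Simulation:
    """Stub simulation exposing a few TABLE 3 commands to the tutor."""
    def __init__(self):
        self.log = []

    # Commands issued by the tutor during a simulation run
    def go_to(self, location):
        self.log.append(f"go_to:{location}")
    def highlight(self, obj):
        self.log.append(f"highlight:{obj}")
    def pause(self):
        self.log.append("pause")
    def resume(self):
        self.log.append("resume")

    # Generally pre-scripted commands (content may be dynamic)
    def display_text(self, text):
        self.log.append(f"text:{text}")
    def play_voiceover(self, clip):
        self.log.append(f"voiceover:{clip}")

def remediate(sim, role):
    """Choose commands by tutor role: a demonstrator presents,
    a coach pauses the run and points at the problem."""
    if role == "demonstrator":
        sim.display_text("Watch how the greeting is performed.")
        sim.play_voiceover("greeting_demo")
    elif role == "coach":
        sim.pause()
        sim.highlight("patient")
        sim.display_text("Address the patient's question before continuing.")
        sim.resume()

sim = Simulation()
remediate(sim, "coach")
```

The same dispatch point is where past-error bias would enter: a history of sequencing errors might swap `display_text` for a flowchart display, per the discussion above.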

Similarly, for the virtual tutor as a coach or trainer, the emphasis is on process, so that the student should be guided through the process of performing the skill. Here, the remedial presentation is of the “message” generated by the tutor while monitoring the student. The modes of presentation to the student may be visual (in flowcharts and diagrams), auditory, or through the student’s direct interaction within the simulation. Regardless, student mistakes would be quickly corrected and remedial messages given immediately. Remediation can occur across several levels. If the student model indicates merely a small error, the tutor could simply point it out to the student, backtrack, and have the student try again. Otherwise information relevant to that task could be presented, to a varying degree based on student knowledge, and the tutor could back up the simulation and allow the student to try again.



With the virtual tutor as a mentor, the student is in control, with the tutor only providing input at the request of the student or if some variable error threshold is reached. At any step along the process, the student may query the tutor about the current step. The query can indicate how much instruction the tutor should provide. That is, a query control box might have buttons labeled Minimal, Moderate, and Maximum. Each of these buttons would invoke the appropriate tutoring role for that goal. Minimal instruction might be a simple text string describing the task the student should complete. Moderate instruction would provide more detail, giving the “how” and the “why” of the step. Maximum instruction would involve a tutor-student dialog, or would present information similar to that given in familiarization lessons, with detailed theory.
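The Minimal/Moderate/Maximum query control might map onto canned instruction levels like this; the step name and the guidance texts are illustrative stand-ins.

```python
# Hypothetical instruction texts for one step, keyed by detail level.
INSTRUCTION = {
    "obtain_consent": {
        "Minimal": "Obtain the respondent's consent to participate.",
        "Moderate": ("Obtain consent: explain the survey's purpose and "
                     "confidentiality (the how), so the respondent can "
                     "agree in an informed way (the why)."),
        "Maximum": ("Begin a tutor-student dialog covering the theory of "
                    "informed consent, as in the familiarization lesson."),
    }
}

def query_tutor(step, level="Minimal"):
    """Return guidance for the current step at the requested detail level."""
    return INSTRUCTION[step][level]
```

In a full system, the Maximum level would open an actual tutor-student dialog rather than return a string, but the lookup structure is the same.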



VII. SOME LESSONS LEARNED

Researchers at the author's institution have developed several interaction skills training systems with virtual human tutors. In one pair of applications, students learn to obtain consent from a respondent to participate in a survey [36, 37]. The tutors were given knowledge of correct/incorrect consent procedures and strategies for obtaining consent. In another pair of applications, students learn to conduct a patient medical history to determine underlying causes of skin or respiratory problems [27][44]. The tutoring systems were given knowledge of types and causes of these problems and understanding of sound questioning strategies.

Lessons learned from developing these interaction skills training systems and from other researchers' training systems, as well as procedural-skills, distributed, SCORM-compliant training systems [24][45], include:





• Students are engaged with virtual human tutors and understand how tutors can take on the different roles. Embodying tutors increases their salience. However, embodiment is not absolutely necessary for tutoring and remediation; the training content determines if it makes sense. For interaction skills training already using virtual humans, virtual human tutors generally make sense. Results of studies of applications that employ virtual tutors suggest that they motivate students to learn [46, 47], that their different roles (like those listed in TABLE 1) influence how and what students learn [48], and that in general virtual tutors are effective supplements to the simulation training [49, 50].

• Performance must be measured against defined criteria, and tutoring should focus students on need-to-know competencies, providing links when possible to prescriptive training. Scored, interpreted assessments identify students' strengths and weaknesses on declarative and procedural content.

• Implementation of scored assessment tests can range from the simple to the complex. Simpler assessment, such as use of established test items (e.g., pulled from existing course curricula) as measures of knowledge, makes interpretation of test results easy, but only for familiarization. If these tests are well constructed, for instance by using validated test items, then the tutor can be assured that correct/incorrect responses indicate



areas of strength/weakness. These forms of assessment lend themselves to use of a virtual tutor mainly in a post-assessment dialog [4][29][51].

• More complex assessment by observing student activity, which demands careful definition of performance criteria, lends itself to more extensive use of a virtual tutor. Interaction skills often fail to have best-practice criteria, so designer and expert decisions drive assessment and tutoring. Simulations can collect an enormous amount of data about student actions, but that data needs to be processed to yield meaningful information about specific student competencies.

• Tutoring in the form of after-action review [16] provides pass/fail on the performance measures defined by subject-matter experts for the critical tasks that students must be able to perform. The reviews also provide feedback to the student, showing in the history of a session exactly which actions completed required tasks or how they caused the student to fail a performance measure.

• Learning objectives that are commonly viewed as procedural performance measures associated with practice of skills may be extended to allow conceptual performance measures. These measures may be associated with variable criterion settings, rather than static ones, so that "good enough to be successful" practices (in the absence of established best-practice criteria) can be assessed. A virtual tutor may be given learning objectives that are tagged with links to remediation modules within familiarization and acquisition. (If the training is designed modularly, then remediation of or within specific lessons can occur, without students needing to repeat lessons for assessments that they passed.) These additions improve student modeling by taking into account not only student actions but also the reasoning behind those actions, and the theory behind the reasoning, to drive remediation.

REFERENCES

[1] G. A. Frank, R. F. Helms and D. J. Voor. Determining the Right Mix of Live, Virtual, and Constructive Training, in proceedings of the Interservice/Industry Training, Simulation and Education Conference, pp. 1268-1277. Arlington, VA: National Training Systems Association, 2000.
[2] B. S. Bloom. The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-one Tutoring, Educational Researcher, vol. 13, pp. 3-15, 1984.
[3] J. D. Fletcher. Evidence for Learning from Technology-assisted Instruction, in H. F. O'Neil Jr. and R. Perez (Eds.), Technology Applications in Education: A Learning View, pp. 79-99. Hillsdale, NJ: Lawrence Erlbaum Associates, 2003.
[4] A. C. Graesser, K. Wiemer-Hastings, P. Wiemer-Hastings and R. Kreuz. AutoTutor: A Simulation of a Human Tutor, Journal of Cognitive Systems Research, vol. 1, pp. 35-51, 1999.
[5] J. C. Lester, B. A. Stone and G. D. Stelling. Life-like Pedagogical Agents for Mixed-initiative Problem Solving in Constructivist Learning Environments, User Modeling and User-Adapted Interaction, vol. 9, pp. 1-44, 1999.
[6] G. Frank. A Taxonomy to Aid Acquisition of Simulation-based Learning Systems, in proceedings of the Interservice/Industry Training, Simulation and Education Conference, pp. 1425-1435. Arlington, VA: National Training Systems Association, 2007.
[7] M. Urban-Lurain. Intelligent Tutoring Systems: A Historic Review in the Context of the Development of Artificial Intelligence and Educational Psychology. [Online: available at http://www.cse.msu.edu/rgroups/cse101/ITS/its.htm].
[8] D. Sleeman and J. S. Brown (Eds.). Intelligent Tutoring Systems, London: Academic Press, 1982.
[9] E. Wenger. Artificial Intelligence and Tutoring Systems. Los Altos, CA: Morgan Kaufmann, 1987.
[10] P. L. Brusilovsky. The Construction and Application of Student Models in Intelligent Tutoring Systems, Journal of Computer and Systems Sciences International, vol. 32, pp. 70-89, 1994.
[11] A. T. Corbett, K. R. Koedinger and J. R. Anderson. Intelligent Tutoring Systems, in M. G. Helander, T. K. Landauer and P. V. Prabhu (Eds.), Handbook of Human-computer Interaction, pp. 849-874. Amsterdam: Elsevier, 1997.
[12] J. Self. The Defining Characteristics of Intelligent Tutoring Systems Research: ITSs Care, Precisely, International Journal of Artificial Intelligence in Education, vol. 10, pp. 350-364.
[13] S. L. Jackson, J. Krajcik and E. Soloway. The Design of Guided Student-adaptable Scaffolding in Interactive Learning Environments, in proceedings of the Conference on Human Factors in Computing Systems, pp. 187-194. Los Angeles, CA: ACM Press.
[14] M. Ringenberg and K. VanLehn. Scaffolding Problem Solving with Annotated, Worked-out Examples to Promote Deep Learning, in K. Ashley and M. Ikeda (Eds.), Intelligent Tutoring Systems, pp. 625-634. Amsterdam: IOS Press.
[15] S. Luperfoy, E. Domeshek, E. Holman, D. Struck, B. Glidewell and R. Houlette. Machine and Human Analogical Reasoning for a Case-method Intelligent Tutoring System, in proceedings of the Interservice/Industry Training, Simulation and Education Conference, pp. 1241-1247. Arlington, VA: National Training Systems Association.
[16] G. Frank, B. Whiteford, R. Hubal, P. Sonker, K. Perkins, P. Arnold, T. Presley, R. Jones and H. Meeds. Performance Assessment for Distributed Learning Using After Action Review Reports Generated by Simulations, in proceedings of the Interservice/Industry Training, Simulation and Education Conference, pp. 808-817. Arlington, VA: National Training Systems Association, 2004.
[17] W. R. Wilson and R. F. Helms. A Business Model for Lifelong Learning, in proceedings of the Interservice/Industry Training Systems and Education Conference, pp. 1089-1098. Arlington, VA: National Training Systems Association, 2003.
[18] M. Hannafin, S. Land and K. Oliver. Open Learning Environments: Foundations, Methods, and Models, in C. M. Reigeluth (Ed.), Instructional Design Theories and Models, vol. 2, pp. 115-140. Mahwah, NJ: Erlbaum, 1999.
[19] C. S. Lányi, Z. Geiszt, P. Károlyi, T. Tilinger and V. Magyar. Virtual Reality in Special Needs Early Education, The International Journal of Virtual Reality, vol. 5, no. 4, pp. 55-68, 2006.
[20] S. Göbel, I. A. Iurgel, M. Rössler, F. Hülsken and C. Eckes. Design and Narrative Structure for the Virtual Human Scenarios, The International Journal of Virtual Reality, vol. 6, no. 4, pp. 1-10, 2007.
[21] C. Elliott, J. Rickel and J. Lester. Lifelike Pedagogical Agents and Affective Computing: An Exploratory Synthesis, in M. Wooldridge and M. Veloso (Eds.), Artificial Intelligence Today, pp. 195-212. Berlin: Springer-Verlag, 1999.
[22] T. M. Barnes, D. L. Bitzer and M. A. Vouk. Experimental Analysis of the Q-Matrix Method in Automated Knowledge Assessment, in proceedings of the Conference on Computers and Advanced Technology in Education. Calgary, AB: Acta Press, 2005.
[23] Sharable Content Object Reference Model. Alexandria, VA: ADL Co-laboratory Hub.
[24] G. Frank, B. Whiteford, R. Brown, G. Cooper, K. Merino and N. Evens. Web-delivered Simulations for Lifelong Learning, in proceedings of the Interservice/Industry Training, Simulation and Education Conference, pp. 170-179. Arlington, VA: National Training Systems Association.
[25] R. C. Hubal and C. I. Guinn. A Mixed-initiative Intelligent Tutoring Agent for Interaction Training, poster presented at the Intelligent User Interface Conference, Santa Fe, NM, January 15, 2001.
[26] R. F. Mager. Preparing Instructional Objectives: A Critical Tool in the Development of Effective Instruction, Atlanta, GA: CEP Press, 1997.
[27] P. N. Kizakevich, L. Lux, S. Duncan, C. Guinn and M. L. McCartney. Virtual Simulated Patients for Bioterrorism Preparedness Training, in J. D. Westwood, H. M. Hoffman, G. T. Mogel, R. Phillips, R. A. Robb and D. Stredney (Eds.), NextMed: Health Horizon, pp. 165-167. Amsterdam: IOS Press, 2003.
[28] R. Deterding, C. Milliron and R. Hubal. The Virtual Pediatric Standardized Patient Application: Formative Evaluation Findings, in J. D. Westwood, R. S. Haluck, H. M. Hoffman, G. T. Mogel, R. Phillips, R. A. Robb and K. G. Vosburgh (Eds.), The Magical Next Becomes the Medical Now, pp. 105-107. Amsterdam: IOS Press, 2005.
[29] J. Rickel and W. L. Johnson. Animated Agents for Procedural Training in Virtual Reality: Perception, Cognition, and Motor Control, Applied Artificial Intelligence, vol. 13, pp. 343-382, 1999.
[30] P. Bello and S. Bringsjord. HILBERT & PATRIC: Hybrid Intelligent Agent Technology for Teaching Context-Independent Reasoning, Educational Technology & Society, vol. 6, pp. 30-42, 2003.
[31] W. J. Clancey. Simulating Activities: Relating Motives, Deliberation, and Attentive Coordination, Cognitive Systems Research, vol. 3, pp. 471-499, 2002.
[32] J. R. Anderson. The Architecture of Cognition, Cambridge, MA: Harvard University Press, 1983.
[33] R. Stathacopoulou, G. D. Magoulas, M. Grigoriadou and M. Samarakou. Neuro-fuzzy Knowledge Processing in Intelligent Learning Environments for Improved Student Diagnosis, Information Sciences, vol. 170, pp. 273-307, 2005.
[34] G. M. Johnson. Constructivist Remediation: Correction in Context, International Journal of Special Education, vol. 19, 2004.
[35] R. Hubal and C. Guinn. Interactive Soft Skills Training Using Responsive Virtual Human Technology, in proceedings of the Interactive Technologies Conference. Warrenton, VA: Society for Applied Learning Technologies, 2003.
[36] M. W. Link, P. P. Armsby, R. C. Hubal and C. I. Guinn. Accessibility and Acceptance of Responsive Virtual Human Technology as a Survey Interviewer Training Tool, Computers in Human Behavior, vol. 22, pp. 412-426, 2006.
[37] R. C. Hubal and R. S. Day. Informed Consent Procedures: An Experimental Test Using a Virtual Character in a Dialog Systems Training Application, Journal of Biomedical Informatics, vol. 39, pp. 532-540, 2006.
[38] A. Stevens, J. Hernandez, K. Johnsen, R. Dickerson, A. Raij, C. Harrison, M. DiPietro, B. Allen, R. Ferdig, S. Foti, J. Jackson, M. Shin, J. Cendan, R. Watson, M. Duerson, B. Lok, M. Cohen, P. Wagner and D. S. Lind. The Use of Virtual Patients to Teach Medical Students History Taking and Communication Skills, American Journal of Surgery, vol. 191, pp. 806-811, 2006.
[39] C. Guinn, R. Hubal, G. Frank, H. Schwetzke, J. Zimmer, S. Backus, R. Deterding, M. Link, P. Armsby, R. Caspar, L. Flicker, W. Visscher, A. Meehan and H. Zelon. Usability and Acceptability Studies of Conversational Virtual Human Technology, in proceedings of the SIGdial Workshop on Discourse and Dialogue, pp. 1-8. East Stroudsburg, PA: Association for Computational Linguistics, 2004.
[40] J. D. Fletcher. What Do Sharable Instructional Objects Have to Do with Intelligent Tutoring Systems, and Vice Versa? International Journal of Cognitive Ergonomics, vol. 5, no. 1, pp. 317-333, 2001.
[41] V. J. Shute. SMART: Student Modeling Approach for Responsive Tutoring, User Modeling and User-Adapted Interaction, vol. 5, pp. 1-44, 1995.
[42] N. Magnenat-Thalmann and A. Egges. Interactive Virtual Humans in Real-Time Virtual Environments, The International Journal of Virtual Reality, vol. 5, no. 2, pp. 15-24, 2006.
[43] R. C. Hubal, G. A. Frank and C. I. Guinn. Lessons Learned in Modeling Schizophrenic and Depressed Responsive Virtual Humans for Training, in proceedings of the Intelligent User Interface Conference, pp. 85-92. New York, NY: ACM Press, 2003.
[44] S. Supinski, U. Obeysekare, N. Johnson and R. Wisher. Development of an Immersive Learning Environment for U.S. Northern Command (USNORTHCOM), in proceedings of the Interservice/Industry Training, Simulation and Education Conference, pp. 169-178. Arlington, VA: National Training Systems Association, 2005.
[45] R. C. Hubal, P. N. Kizakevich, C. I. Guinn, K. D. Merino and S. L. West. The Virtual Standardized Patient: Simulated Patient-practitioner Dialogue for Patient Interview Training, in J. D. Westwood, H. M. Hoffman, G. T. Mogel, R. A. Robb and D. Stredney (Eds.), Envisioning Healing: Interactive Technology and the Patient-practitioner Dialogue, pp. 133-138. Amsterdam: IOS Press, 2000.
[46] G. Cooper, B. Whiteford, G. Frank, K. Perkins and J. Lizama. Embedded Distributed Training: Combining Simulations, IETMs, and Operational Code, in proceedings of the Interservice/Industry Training, Simulation and Education Conference, pp. 205-213. Arlington, VA: National Training Systems Association, 2004.


[47] W. L. Johnson, J. W. Rickel and J. C. Lester. Animated Pedagogical Agents: Face-to-face Interaction in Interactive Learning Environments, International Journal of Artificial Intelligence in Education, vol. 11, pp. 47-78, 2000.
[48] C. Conati and X. Zhao. Building and Evaluating an Intelligent Pedagogical Agent to Improve the Effectiveness of an Educational Game, in proceedings of the Intelligent User Interface Conference, pp. 6-13. New York, NY: ACM Press, 2004.
[49] A. L. Baylor and Y. Kim. Simulating Instructional Roles through Pedagogical Agents, International Journal of Artificial Intelligence in Education, vol. 15, pp. 95-115, 2005.
[50] L. Ieronutti and L. Chittaro. Employing Virtual Humans for Education and Training in X3D/VRML Worlds, Computers & Education, vol. 49, pp. 93-109, 2007.
[51] J. Q. Yu, D. J. Brown and E. Billet. Design of Virtual Tutoring Agents for a Virtual Biology Experiment, European Journal of Open, Distance and E-Learning, vol. 11, 2007.
[52] N. P. Person and A. C. Graesser. Pedagogical Agents and Tutors, in J. W. Guthrie (Ed.), Encyclopedia of Education, pp. 1169-1172. New York: Macmillan, 2006.

Robert Hubal is a senior research engineer in the Digital Solutions Group at RTI International. His research focuses on the development, presentation, and evaluation of learning materials and on identifying approaches to improve learning and training effectiveness. He has developed behavioral software that enables virtual humans to act and behave realistically in controlled learning contexts for interaction skills training and assessment. Applications include assessing medical practitioners in history taking for both asthmatic and pediatric patients, training civilian police officers in how to handle mentally disturbed individuals, and training telephone and field interview staff in obtaining respondent participation. Dr. Hubal holds advanced degrees in Computer Science and Cognitive Psychology.