Didactic Galactic: Types of Knowledge Learned in a Serious Game

Carol Forsyth¹, Arthur Graesser¹, Breya Walker¹, Keith Millis², Philip I. Pavlik, Jr.¹, and Diane Halpern³

¹ The University of Memphis, Institute for Intelligent Systems, Memphis, TN
{cmfrsyth,graesser,bswlker2,ppavlik}@memphis.edu
² Northern Illinois University, Psychology Department, DeKalb, IL
[email protected]
³ Claremont McKenna College, Psychology Department, Claremont, CA
[email protected]

Abstract. Operation ARA is a serious game that teaches scientific inquiry through natural language conversations. Within the game, students completed up to two distinct training modules that teach either didactic or applied conceptual information about research methodology (e.g., validity of dependent variables, the need for control groups). An experiment using a four-condition, between-subjects, pretest-interaction-posttest design was conducted in which 81 undergraduate college students interacted with varying modules of Operation ARA. The four conditions were designed to test the impact of the two distinct modules on different types of learning, measured by multiple-choice, short-answer, and case-based assessment questions. Results revealed significant effects of training condition on learning gains for two of the three question types.

Keywords: Intelligent Tutoring Systems, reasoning, serious games, learning.

1   Introduction

Cognitive scientists often draw distinctions regarding how knowledge is acquired, stored, and used. Different types of training and assessment may be required for a learner to fully understand a new topic, depending in part on whether the knowledge is shallow or deep. The current study examined the learning of didactic, factual information versus conceptually applied knowledge about research methodology through interaction with a serious game.

1.1   Types of Knowledge Acquisition and Test Questions

Previous research suggests that basic didactic information, including vocabulary, facts, and simple procedures, may be learned through iterative presentation and practice over an extended period of time rather than in a single session [1,2]. However, understanding didactic information in research methodology does not directly translate to the learner being able to apply the knowledge within a case-based reasoning framework [3,4]. To obtain this deeper-level understanding, students may need to complete tasks that require making fine-grained discriminations among alternatives [3-5], constructing explanations, or generating questions about difficult conceptualizations [6]. These two distinct types of knowledge acquisition (didactic factual recall versus conceptual application) may be reflected in performance on different types of test questions. Specifically, a continuum from shallow to deep-level questions may start with recognition-oriented questions, exemplified by most multiple-choice questions, move on to recall-oriented questions that elicit words or sentences in an answer [6,7], and progress to a deep level captured by performance on case-based test questions in which students apply their knowledge to concrete practical problems. The current study uses three different types of questions to capture knowledge on a continuum from shallow to deep, in the context of a serious game called Operation ARA.

K. Yacef et al. (Eds.): AIED 2013, LNAI 7926, pp. 832–835, 2013. © Springer-Verlag Berlin Heidelberg 2013

1.2   Operation ARA: A Serious Game

Operation ARA is a serious game that teaches research methodology through a number of pedagogical components, including natural language conversation [8]. It trains both didactic and applied knowledge of 21 core concepts of research methodology. The two types of instruction (i.e., didactic and applied) are delivered across three separate modules of the game (i.e., Cadet Training, Proving Ground, and Active Duty); however, this paper focuses on the first two modules only. In the Cadet Training module, students learn didactic knowledge by focusing primarily on the definition and importance of the concepts in research methodology. They read an E-Text, answer multiple-choice questions, and hold natural language tutorial conversations with pedagogical artificial agents. In the Proving Ground module, students apply their knowledge by analyzing summaries of research cases and identifying flaws that are aligned with the core concepts of research methodology. This paper explores the relationship between (a) learning procedures that emphasize either didactic knowledge (Cadet Training) or application (Proving Ground) and (b) measures that emphasize relatively low-level didactic information (multiple-choice questions), intermediate knowledge (short-answer questions), or higher-level conceptual knowledge (case-study analysis).

2   Methods

The participants were 81 undergraduate students enrolled in an Introduction to Psychology course who completed the study over the course of a semester. Participants received course credit for completing the study, regardless of performance. They participated in a four-condition, between-subjects, pretest-training-posttest study. The conditions were (1) interaction with Cadet Training only, (2) interaction with Proving Ground only, (3) interaction with both Cadet Training and Proving Ground, and (4) a control condition with no interaction.


Participants were randomly assigned to one of the four conditions. After completing a pretest, students in the experimental conditions interacted with ARA and subsequently completed the posttest. Participants in the control condition had no interaction with the game. Two versions of the test were created and counterbalanced over pretest and posttest. Each test had a total of 50 questions: 21 multiple-choice questions, 21 short-answer questions, and 8 questions that required deep application. Learning gains were computed by subtracting the proportional pretest scores from the proportional posttest scores for the multiple-choice, short-answer, and case-based questions, respectively.

2.1   Planned Comparisons

The hypotheses were tested with planned comparisons. The first hypothesis was that conditions including the Cadet Training module would show greater learning gains on the multiple-choice (MC) questions than the other conditions. Using contrast coefficients, the Cadet Training Only and Cadet Training with Proving Ground conditions were compared against the Proving Ground Only and Control conditions. The mean MC learning gain was .08 for conditions with the Cadet Training module and -.01 for conditions without it. The contrast was statistically significant (t(77) = 2.64, p
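To make the gain-score and contrast computations concrete, the following is a minimal illustrative sketch, not the authors' analysis code: the data values and variable names are hypothetical. It computes proportional learning gains (posttest proportion minus pretest proportion) and a planned-contrast t statistic with coefficients (+1, +1, −1, −1), pooling the error variance across all four groups as in a one-way ANOVA:

```python
import numpy as np

def proportional_gains(pretest_correct, posttest_correct, n_items):
    """Per-participant gain: posttest proportion minus pretest proportion."""
    return np.asarray(posttest_correct) / n_items - np.asarray(pretest_correct) / n_items

def planned_contrast(groups, coeffs):
    """Planned-contrast t test over independent groups.

    groups: list of 1-D arrays of per-participant learning gains
    coeffs: one contrast coefficient per group (must sum to zero)
    Returns (t, df) with the error term pooled across all groups (df = N - k).
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    coeffs = np.asarray(coeffs, dtype=float)
    assert abs(coeffs.sum()) < 1e-9, "contrast coefficients must sum to zero"
    ns = np.array([g.size for g in groups])
    means = np.array([g.mean() for g in groups])
    # Pooled within-group error variance (the MSE from a one-way ANOVA)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df = int(ns.sum()) - len(groups)
    mse = ss_within / df
    psi = float(coeffs @ means)                   # contrast estimate
    se = np.sqrt(mse * np.sum(coeffs ** 2 / ns))  # standard error of the contrast
    return psi / se, df

# Hypothetical MC gain scores for the four conditions
cadet_only = [0.15, 0.05, 0.10, 0.02]
cadet_plus_pg = [0.12, 0.08, 0.04, 0.09]
pg_only = [0.01, -0.03, 0.02, -0.02]
control = [-0.02, 0.00, -0.01, 0.01]

# Cadet Training conditions (+1, +1) versus the others (-1, -1)
t, df = planned_contrast([cadet_only, cadet_plus_pg, pg_only, control], [1, 1, -1, -1])
```

With the paper's 81 participants spread over four conditions, the pooled degrees of freedom are 81 − 4 = 77, consistent with the reported t(77).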
