Guided Skill Practice as an Adaptive Scaffolding Strategy in Open-Ended Learning Environments

James R. Segedy, Gautam Biswas, Emily Feitl Blackstock, & Akailah Jenkins
{james.segedy, gautam.biswas, emily.a.feitl, akailah.t.jenkins}@vanderbilt.edu
Institute for Software Integrated Systems, Department of Electrical Engineering and Computer Science, Vanderbilt University, 1025 16th Avenue South, Nashville, TN 37212, U.S.A.

Abstract. While open-ended learning environments (OELEs) offer powerful learning opportunities, many students struggle to learn in them. Without proper support, these learners use system tools incorrectly and adopt suboptimal learning strategies. Typically, OELEs support students by providing hints: suggestions for how to proceed combined with information relevant to the learner’s situation. However, students often ignore or fail to understand such hints. To address this problem, we present an alternative approach to supporting students in OELEs that combines suggestions and assertions with guided skill practice. We demonstrate the feasibility of our approach through an experimental study that compares students who receive suggestions, assertions, and guided skill practice to students who receive no such support. Findings indicate that learners who received the scaffolds approached their tasks more systematically.

Keywords: Open-ended learning environment, scaffolds, guided practice

1 Introduction

Advances in technology have given learning technology researchers the means to design computer-based learning environments that provide students with opportunities to take part in authentic, complex problem-solving tasks. These environments, generally called open-ended learning environments (OELEs) [1-2], are learner-centered: they provide students with a learning context and a set of tools for exploring, hypothesizing, and building solutions to problems. Examples include hypermedia environments and modeling and simulation learning environments [3-4].

OELEs place high cognitive demands on learners [2]. To be successful, learners must understand how to execute: (1) cognitive processes for accessing and interpreting information, constructing problem solutions, and assessing constructed solutions; and (2) metacognitive processes for coordinating the use of cognitive processes and reflecting on the outcomes of solution assessments. This presents significant challenges to novice learners; they may have neither the proficiency for using the system's tools nor the experience and understanding necessary for explicitly regulating their learning behaviors. Not surprisingly, research has shown that novices often struggle to succeed in OELEs (e.g., [2], [5]). Without adaptive scaffolds, these learners tend to use tools incorrectly and adopt suboptimal learning strategies [6-7]. For the purposes of this article, adaptive scaffolds in OELEs refer to actions taken by the learning environment, based on the learner's interactions, that are intended to support the learner in successfully completing the task [8].

While several OELEs have been developed and used with learners, relatively few provide adaptive scaffolds. Instead, these systems include non-adaptive scaffolded tools (e.g., lists of guiding questions) designed to support learners who choose to use them. Systems that do provide adaptive scaffolds usually do so in the form of hints: suggestions for how to proceed combined with information relevant to the learner's situation. However, researchers have found that learners, perhaps due to misunderstandings or incomplete knowledge, often ignore such hints [1], [9-10] and instead continue to employ suboptimal learning behaviors.

In this paper, we present an alternative approach to adaptive scaffolding in OELEs that combines suggestions and assertions with guided skill practice. We demonstrate the feasibility of our approach through an experimental study that compares students who receive suggestions, assertions, and guided skill practice to students who receive no such support.

2 Background

The importance of adaptive scaffolding in intelligent computer-based learning environments is well-recognized, and several computer-based learning environments incorporate adaptive scaffolds by providing suggestions and making assertions. VanLehn [11], for example, discusses a point, teach, and bottom-out strategy for scaffolding in intelligent tutoring systems (ITSs). Pointing hints direct attention to specific problem features, suggesting that students consider those features; teaching hints assert knowledge components and how to apply them; and bottom-out hints assert how to solve the current problem step.

Several OELEs also utilize suggestions and assertions to scaffold students. Ecolab [12], for example, is an OELE in which students learn about ecology by building and executing simulations of food chains and food webs. The learning task is broken down into activities of different difficulty levels, and learners are allowed to choose from among these activities. When learners select an activity that the system judges too easy or too difficult for them, the system suggests a more appropriate activity. It also asserts information in order to help students who incorrectly construct food chains and food webs (e.g., "Caterpillars do not eat thistles"). TheoryBuilder [4], an OELE for learning through model building, helps learners plan their learning activities by providing a set of guiding questions. The system recognizes specific suboptimal behaviors, such as choosing not to create a plan for how to construct a model, and responds by suggesting alternative approaches (e.g., creating a plan before embarking on the task).

Suggestions and assertions provide learners with information that may allow them to overcome the challenges associated with learning in OELEs. However, research with OELEs has found that students often ignore suggestions and assertions provided by the learning environment. For example, Segedy, Kinnebrew, & Biswas [10] analyzed video data from students using Betty's Brain and found that students ignored 77% of the suggestions and assertions delivered by the system. Similarly, Clarebout & Elen [1] found that students working in an OELE followed the system's suggestions only 20% of the time. Finally, work with PrimeClimb [9] used eye tracking to measure how long students spent reading system-delivered suggestions and assertions; students fixated on the content for far less than the expected reading time calculated for those hints.

One challenge in relying solely on suggestions and assertions for scaffolding is that it presupposes students' ability to understand and take advantage of the information provided in the scaffolds. This is particularly problematic when a scaffold encourages the use of a cognitive skill that the learner is unfamiliar with or unable to perform correctly. For example, an OELE for modeling and simulation may encourage students to compare their simulation of a science process to a written description of that process. This suggestion poses a problem for low-ability readers, and their difficulty in reading may lead to frustration.

Such a problem can be dealt with in multiple ways depending on the learning goals for which the system is designed. The goal of most ITSs is to help students develop declarative and procedural understanding of how to solve specific classes of problems. Thus, when students reach an impasse, ITSs use bottom-out hints in order to "essentially [convert] a too-challenging problem step into an annotated example" [13]. This strategy is effective; it provides students with opportunities to study the example and infer the procedural information required for solving future problems. OELEs, on the other hand, expect students to learn by exploring, testing, and developing abilities for explicitly setting goals, establishing plans for achieving goals, monitoring progress toward achieving goals, and using the evaluation of that progress to regulate and improve their approach to completing tasks. Additionally, activities within an OELE often focus on learning a particular process or topic (e.g., climate change), and students are expected to learn about that process in addition to learning how to solve complex problems. This last aspect of OELEs makes the bottom-out hint strategy difficult to implement, as it could compromise the system's learning goals by giving away aspects of the domain content.

A more effective scaffolding strategy for OELEs may involve dynamically modifying the learning task when learners demonstrate that they are unable to succeed. These modification scaffolds, unlike suggestions and assertions, do not operate by communicating information to the learner; rather, they alter aspects of the learning task itself. In doing so, they seek to maintain the learner's engagement by adapting the task to their needs and abilities. A good example of a computer-based learning environment that employs modification scaffolds is AutoTutor [14], which teaches science topics by posing questions and then holding natural language dialogues with learners as they attempt to answer those questions. When students are unable to answer one of AutoTutor's questions, the system modifies the learning task: it breaks down the larger question into a series of smaller questions. To illustrate this process, consider the example AutoTutor-learner dialogue from [14], in which AutoTutor asks a learner the following question: "The sun exerts a gravitational force on the earth as the earth moves in its orbit around the sun. Does the earth pull equally on the sun? Explain why." In the example, the learner indicates that she doesn't know the answer, prompting AutoTutor to alter the learning task by asking a simpler question: "How does Newton's third law of motion apply to this situation?" Again, the learner cannot answer, prompting AutoTutor to ask an even simpler question: "Newton's third law refers to the forces exerted by one body on another _____?" When the learner successfully responds with "body," AutoTutor continues by posing another question, and this dialogue continues until the learner and AutoTutor co-construct an answer to the original question, with AutoTutor continuing to adjust the learning task based on the needs of the learner.

Few (if any) OELEs employ modifications to scaffold students, and to the best of our knowledge, no empirical studies have examined the effect of modification scaffolds on students' learning activities in OELEs. In this paper, we investigate a specific type of modification scaffold: guided practice. Our approach recognizes when students repeatedly fail to take advantage of system hints, and then temporarily modifies the learning task by requiring students to practice the skills targeted by those hints. This paper presents an experiment designed to test the effectiveness of guided practice scaffolds using Betty's Brain [15], an OELE for science learning.

3 Overview of Betty's Brain

The Betty's Brain learning environment [15] presents students with the task of teaching a virtual agent, Betty, about science topics by constructing a causal map: a representation of relevant science phenomena as a set of entities connected by directed links, which represent causal relations. Once taught, Betty can use the map to answer causal questions and explain those answers. The goal for students using Betty's Brain is to teach Betty a causal map that matches a hidden, expert model of the domain.

The students' learning and teaching tasks are organized around three activities: (1) reading the hypertext resources, (2) building the map, and (3) assessing the correctness of the map. The hypertext resources describe the science topic under study (e.g., climate change) by breaking it down into a set of sub-topics. Each sub-topic describes a system or a process (e.g., the greenhouse effect) in terms of entities (e.g., absorbed heat energy) and causal relations among those entities (e.g., absorbed heat energy increases the average global temperature). As students read, they need to identify causal relations and then explicitly teach those relations to Betty by constructing a causal map.

Learners can assess the quality of their constructed map in two ways. First, they can ask Betty to answer a question. After Betty answers, learners can ask Mr. Davis, another pedagogical agent who serves as a mentor, to evaluate her answer. If the portion of the map that Betty uses to answer the question matches the expert model, then Betty's answer is correct. Second, learners can have Betty take a quiz on one or all of the sub-topics in the resources. Quiz questions are selected dynamically by comparing Betty's current causal map to the expert map. Since the quiz is designed to reflect the current state of the student's map, a set of questions is chosen (in proportion to the completeness of the map) for which Betty will generate correct answers; the rest of the quiz questions produce either incorrect or incomplete answers. These answers can be used to infer which causal links are correct and which may need to be revised or removed from the map.

Should learners be unsure of how to proceed in their learning task, they can ask Mr. Davis for help. Mr. Davis responds by asking the learner what they are trying to do, and he provides information and examples based on the learner's responses.
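To make the quiz mechanism concrete, the sketch below shows one way the selection scheme described above could work. The data structures and function names are our own illustration, not the actual Betty's Brain implementation; in particular, we assume a question is answerable correctly only when every expert link on its reasoning chain appears in the student's map.

```python
from dataclasses import dataclass

# A causal link is a (source entity, effect, target entity) triple,
# e.g., ("absorbed heat energy", "+", "average global temperature").

@dataclass(frozen=True)
class Question:
    text: str
    chain: frozenset  # expert links needed to derive the correct answer

def completeness(student_links: frozenset, expert_links: frozenset) -> float:
    """Fraction of the expert map the student has taught correctly."""
    return len(student_links & expert_links) / len(expert_links)

def select_quiz(questions, student_links, expert_links, quiz_size=5):
    """Pick a quiz in which the share of questions Betty answers correctly
    is proportional to the completeness of her current map."""
    n_correct = round(completeness(student_links, expert_links) * quiz_size)
    answerable = [q for q in questions if q.chain <= student_links]
    not_answerable = [q for q in questions if not q.chain <= student_links]
    return answerable[:n_correct] + not_answerable[:quiz_size - n_correct]
```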

4 Method

The present experimental study tested the effectiveness of incorporating a guided practice scaffold into Betty’s Brain. The guided practice scaffold was used in conjunction with suggestions and assertions in a knowledge construction (KC) support module, which scaffolded students’ understanding of how to construct causal maps by identifying causal relations in the resources. Participants were divided into two treatment groups. The experimental group used a version of Betty’s Brain that included the KC support module and a causal link discovery tutorial (Figure 1) that they could access at any time. The tutorial allowed students to practice identifying causal relations in text passages and provided correctness feedback after each solution attempt. The control group used a version of Betty’s Brain that included neither the support module nor the tutorial. Our hypothesis was that students who worked with the KC support module would gain a better understanding of the skills related to knowledge construction. Thus, we predicted that they would: (1) be more accurate in editing their causal maps, and (2) more often edit their maps based on recent reading activities.

Fig. 1. Causal Link Discovery Tutorial

For students in the experimental group, the KC module activated when three out of a student's last five map edits were incorrect, at which point Mr. Davis informed students that they seemed to be having trouble and offered some suggestions for improving. In addition, Mr. Davis monitored students' activities, offering suggestions and assertions when they performed uninformed or shortcut edits. Uninformed edits are causal map edits that are not connected to recent reading activities, and shortcut edits add a link between two concepts that, in the expert map, are connected by a chain of links.

Should students continue to make several incorrect map edits despite the suggestions and assertions from Mr. Davis, the KC module activated a second tier of support: guided practice. During guided practice, students were moved to the causal link tutorial and were not permitted to access any other portion of the program. Students completed the tutorial session once they solved five problems correctly on the first try. At this point, Mr. Davis brought them to the resources activity, highlighted a paragraph, and asked them to identify a causal link from that paragraph. This last step was designed to illustrate the connection between the skill practice and the overall task of teaching Betty. Once students successfully identified a link, the KC support module was deactivated and they were once again allowed to navigate the program freely.
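As a rough illustration of this two-tier logic, the sketch below tracks edit correctness and escalates support. The thresholds (three of the last five edits incorrect; five first-try-correct tutorial problems) come from the description above, while the class structure and the exact escalation rule for the second tier are our assumptions.

```python
from collections import deque

class KCSupportModule:
    """Hypothetical sketch of the two-tier knowledge construction support.
    Tier 1: suggestions/assertions from Mr. Davis; tier 2: guided practice."""

    def __init__(self):
        self.recent_edits = deque(maxlen=5)  # True = edit improved the map
        self.tier = 0                        # 0 = inactive, 1 = hints, 2 = practice
        self.first_try_correct = 0

    def record_map_edit(self, correct: bool) -> None:
        self.recent_edits.append(correct)
        if list(self.recent_edits).count(False) >= 3:
            # First trouble activates hints; continued trouble escalates to
            # guided practice (the precise escalation rule is our assumption).
            self.tier = min(self.tier + 1, 2)
            self.recent_edits.clear()

    def record_tutorial_attempt(self, solved_first_try: bool) -> None:
        if solved_first_try:
            self.first_try_correct += 1
        if self.first_try_correct >= 5:
            # Tutorial complete: hand control back to the student.
            self.tier = 0
            self.first_try_correct = 0
```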

4.1 Participants

Forty-one seventh-grade students from four middle Tennessee science classrooms, taught by the same teacher, participated in the study. Because use of Betty's Brain relies on students' ability to independently read and understand the resources, the system is not suited to students with limited English proficiency or cognitive-behavioral problems. Therefore, while all students were encouraged to participate, data from ESL and special education students were not analyzed. We also excluded data from students who missed more than two class periods. The final sample included 20 students in the experimental group and 15 students in the control group.

4.2 Topic Unit and Text Resources

Students used Betty's Brain to learn about climate change. The expert map (Figure 2) contained 22 concepts and 25 links representing the greenhouse effect, human activities affecting the global climate, and impacts on climate. The resources were organized into one introductory page, three pages covering the greenhouse effect, four pages covering human activities, and two pages covering impacts on climate. Additionally, a dictionary section defined key terms contained in the resources. The text was 4,188 words with a Flesch-Kincaid reading grade level of 8.4.
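For readers who want to sanity-check a readability figure like this one, the Flesch-Kincaid grade formula is standard: grade = 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The sketch below uses a heuristic syllable counter, so it will only approximate the score produced by whatever tool the authors used (which the paper does not specify).

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level with a rough vowel-group syllable counter."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Count runs of vowels as syllables; crude, but adequate for a sanity check.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```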

4.3 Learning and Performance Assessments

Learning was assessed using a pre-post test design. Each test consisted of five questions that asked students to consider a given scenario and explain its causal impact on climate change (e.g., explain how an increase in carpooling would affect the amount of carbon dioxide in the air). Scoring was based on the causal relations students used to explain their answers; these relations were compared to the causal relations that would be used to derive the answer from the expert map. For each expert causal link, learners received 0 points (if they did not use the link), 1 point (if they did use the link), or half a point (if they used a link related to the expert link; e.g., fossil fuel use increases pollution instead of carbon dioxide). The maximum combined score for the five questions was 16. Two coders independently scored a subset of the written tests with at least 85% agreement, at which point they split and individually scored the remainder of the tests.
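A minimal sketch of this rubric, assuming causal links can be compared as (cause, effect, target) triples and that a hand-built lookup maps each expert link to its partially-credited alternatives (both assumptions are ours):

```python
def score_answer(answer_links: set, expert_links: set, related: dict) -> float:
    """Score one question: 1 point per expert link used, 0.5 for a related
    link, 0 otherwise. `related` maps an expert link to the set of links
    earning partial credit (a hypothetical, hand-coded lookup)."""
    score = 0.0
    for link in expert_links:
        if link in answer_links:
            score += 1.0
        elif related.get(link, set()) & answer_links:
            score += 0.5
    return score
```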

Fig. 2. Climate Change Expert Map

Performance was assessed by analyzing the knowledge construction activities students employed while using Betty's Brain. For each student, we calculated one measure of map edit effectiveness and four measures of map edit support. Map edit effectiveness was calculated as the percentage of causal link additions, removals, and modifications that improved the quality of Betty's causal map, where causal map quality is defined as the number of correct links minus the number of incorrect links in the map. Map edit support was defined as the percentage of causal map edits that were supported by previous resource accesses. An edit was "supported" if the student had previously accessed pages in the resources that discuss the concepts connected by the manipulated link, with one further constraint: an action could only support another action if both actions occurred within the same time window. We calculated support in relation to four time windows: 10, 5, 3, and 2 minutes.
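The two measures can be sketched as follows. The event records and the simplification that a single page must mention both linked concepts are our reading of the description above, not the authors' published code.

```python
from dataclasses import dataclass

@dataclass
class MapEdit:
    time: float            # seconds since session start
    concepts: frozenset    # the two concepts joined by the edited link
    quality_before: int    # correct links minus incorrect links
    quality_after: int

@dataclass
class PageRead:
    time: float
    concepts: frozenset    # concepts discussed on the page

def edit_effectiveness(edits) -> float:
    """Percentage of link edits that improved causal map quality."""
    improving = sum(1 for e in edits if e.quality_after > e.quality_before)
    return 100.0 * improving / len(edits)

def edit_support(edits, reads, window_minutes: float) -> float:
    """Percentage of edits preceded, within the time window, by a read of a
    page discussing the concepts on the edited link."""
    window = window_minutes * 60
    supported = sum(
        1 for e in edits
        if any(0 <= e.time - r.time <= window and e.concepts <= r.concepts
               for r in reads)
    )
    return 100.0 * supported / len(edits)
```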

4.4 Procedure

The study lasted nine school days. During the first 60-minute class period, students completed the pre-test. During the second and third class periods, researchers introduced students to causal modeling, reasoning with causal models, and identifying causal relations in text passages. During the fourth class period, students were introduced to the system. Students in each treatment group then spent four class periods using their respective versions of Betty's Brain with minimal intervention by the teachers and the researchers. On the ninth day, students completed the post-test.

5 Results

Results of the pre-post tests are displayed in Table 1. A repeated measures ANOVA performed on the data revealed a significant effect of time (F = 9.541, p < 0.01). However, the analysis failed to reveal a significant interaction between time and treatment, indicating that while all students learned as a result of using the system, the experimental manipulation was not associated with differences in pre-post gains. This may be partially attributed to the small sample size and the large variation in performance within groups. However, one positive aspect of this finding is that although students in the experimental group spent 17% of their time in guided practice, they seemed to learn just as much as the control group students.

Table 1. Means (and standard deviations) of pre-post test scores

                Pre-test Score   Post-test Score   Gain Score
Control Group   5.07 (2.03)      6.10 (2.64)       1.03 (1.99)
Exp Group       3.85 (2.54)      5.13 (3.37)       1.28 (2.33)

Results of the effectiveness and support calculations for both groups are shown in Figure 3. Students in the experimental group exhibited higher map edit effectiveness (51.9% vs. 45.7% for the control group). However, an ANOVA performed on the data revealed only a marginal trend toward an effect of condition on map edit effectiveness (F = 3.074, p = .089).

Fig. 3. Means (and standard deviations) of effectiveness and support measures

Students in the experimental group also performed a higher proportion of map edits that were supported by recent reading activities. In all four time windows used to calculate support, the experimental group achieved a higher average level of support. ANOVAs performed on the data revealed significant effects of condition on map edit support with a window of ten minutes (F = 7.787, p = .009), five minutes (F = 7.824, p = .009), three minutes (F = 6.639, p = .015), and two minutes (F = 5.140, p = .030).
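With two groups, the condition effect reported above amounts to a one-way ANOVA per time window (equivalently, an independent-samples t-test, since F = t²). A sketch using SciPy, with made-up placeholder values standing in for the study's per-student support percentages, which are not published here:

```python
from scipy import stats

# Placeholder data only; substitute each student's percentage of
# supported edits for a given time window.
exp_support = [62.0, 55.5, 71.2, 48.9, 66.3]    # experimental group
ctrl_support = [41.0, 50.2, 38.7, 45.5, 52.1]   # control group

f_stat, p_value = stats.f_oneway(exp_support, ctrl_support)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```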

6 Discussion and Conclusions

Open-ended learning environments provide opportunities for learners to take part in authentic, complex problem-solving tasks. However, the complexity of such tasks places high cognitive demands on learners, and the success of such environments may depend on the adaptive scaffolds that the system provides. In this paper, we have presented preliminary data supporting the inclusion of guided practice modification scaffolds as part of effective scaffolding strategies in OELEs. Our approach recognizes when students repeatedly fail to take advantage of hints and then intervenes with a guided practice tutorial. While in the tutorial, students must practice skills related to identifying causal relations in reading materials.

The results of our experimental study showed that students who received scaffolding consisting of both hints and guided practice were more effective in constructing their causal maps; their causal map edits were both more likely to be correct and more likely to be related to recently accessed resource pages. The results suggest that students in the experimental condition may have gained a better understanding of how to find causal links in the resources. Moreover, these students may have learned the importance of connecting their information-seeking activities (i.e., reading) to the construction of their causal maps.

However, the results presented in this paper are not conclusive. Students in the experimental group did not show larger learning gains when compared to the control group. Additionally, while students in the experimental group were more accurate in their map edits, this difference did not reach statistical significance. Moreover, the experiment tested the effectiveness of the entire scaffolding module, including both hints and guided practice; future studies will need to separately test the effects of these scaffolds in OELEs. Finally, the data analysis was performed at a relatively coarse-grained level; the metrics used to compare the groups evaluated students' overall use of the system. Future analyses will need to examine the immediate effects of scaffolds in OELEs.

As we continue this line of research, we will develop and improve upon the skill tutorials in Betty's Brain. We will also combine these tutorials with techniques we are developing for the online measurement of students' use of cognitive and metacognitive processes as they work in OELEs. Ideally, these measurements will allow us to better detect when students need support and in relation to which cognitive or metacognitive process.

Acknowledgements. This work has been supported by Institute of Education Sciences CASL Grant #R305A120186 and the National Science Foundation's IIS Award #0904387.

References

1. Clarebout, G., Elen, J.: Advice on tool use in open learning environments. Journal of Educational Multimedia and Hypermedia, 17, 81-97 (2008)
2. Land, S.M.: Cognitive requirements for learning with open-ended learning environments. Educational Technology Research and Development, 48, 61-78 (2000)
3. Azevedo, R., Landis, R.S., Feyzi-Behnagh, R., Duffy, M., Trevors, G., Harley, J.M., Bouchet, F., Burlison, J., Taub, M., Pacampara, N., Yeasin, M., Rahman, A.K.H.M., Tanveer, M.I., Hossain, G.: The effectiveness of pedagogical agents' prompting and feedback in facilitating co-adapted learning with MetaTutor. In: Cerri, S.A., Clancey, W.J., Papadourakis, G., Panourgia, K. (eds.) ITS 2012. LNCS, vol. 7315, pp. 212-221. Springer, Heidelberg (2012)
4. Jackson, S.L., Krajcik, J., Soloway, E.: The design of guided learner-adaptable scaffolding in interactive learning environments. In: Karat, C., Lund, A., Coutaz, J., Karat, J. (eds.) Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '98), pp. 187-194. ACM Press, New York (1998)
5. Mayer, R.E.: Should there be a three-strikes rule against pure discovery learning? American Psychologist, 59, 14-19 (2004)
6. Azevedo, R., Hadwin, A.: Scaffolding self-regulated learning and metacognition: Implications for the design of computer-based scaffolds. Instructional Science, 33, 367-379 (2005)
7. Kinnebrew, J.S., Biswas, G.: Identifying learning behaviors by contextualizing differential sequence mining with action features and performance evolution. In: Proceedings of the 5th International Conference on Educational Data Mining (2012)
8. Puntambekar, S., Hübscher, R.: Tools for scaffolding students in a complex learning environment: What have we gained and what have we missed? Educational Psychologist, 40, 1-12 (2005)
9. Muir, M., Conati, C.: An analysis of attention to student-adaptive hints in an educational game. In: Cerri, S.A., Clancey, W.J., Papadourakis, G., Panourgia, K. (eds.) ITS 2012. LNCS, vol. 7315, pp. 112-122. Springer, Heidelberg (2012)
10. Segedy, J.R., Kinnebrew, J.S., Biswas, G.: Supporting student learning using conversational agents in a teachable agent environment. In: van Aalst, J., Thompson, K., Jacobson, M.J., Reimann, P. (eds.) The Future of Learning: Proceedings of the 10th International Conference of the Learning Sciences (ICLS 2012), Vol. 2: Short Papers, Symposia, and Abstracts, pp. 251-255. ISLS (2012)
11. VanLehn, K.: The behavior of tutoring systems. International Journal of Artificial Intelligence in Education, 16, 227-265 (2006)
12. Luckin, R., Hammerton, L.: Getting to know me: Helping learners understand their own learning needs through metacognitive scaffolding. In: Cerri, S.A., Gouardères, G., Paraguaçu, F. (eds.) ITS 2002. LNCS, vol. 2363, pp. 759-771. Springer, Heidelberg (2002)
13. Roll, I., Aleven, V., McLaren, B.M., Koedinger, K.R.: Improving students' help-seeking skills using metacognitive feedback in an intelligent tutoring system. Learning and Instruction, 21, 267-280 (2011)
14. Graesser, A.C., McNamara, D.: Self-regulated learning in learning environments with pedagogical agents that interact in natural language. Educational Psychologist, 45, 234-244 (2010)
15. Leelawong, K., Biswas, G.: Designing learning by teaching agents: The Betty's Brain system. International Journal of Artificial Intelligence in Education, 18, 181-208 (2008)
