Learning and Instruction 19 (2009) 158–170
The effects of cooperative learning and feedback on e-learning in statistics

Ulrike-Marie Krause a,*, Robin Stark a, Heinz Mandl b

a Institute of Education, Saarland University, P.O. Box 151150, 66041 Saarbrücken, Germany
b Department of Psychology, Ludwig Maximilian University, Leopoldstraße 13, 80802 Munich, Germany
Received 2 August 2007; revised 30 December 2007; accepted 7 March 2008
Abstract

This study examined whether cooperative learning and feedback facilitate situated, example-based e-learning in the field of statistics. The factors "social context" (individual vs. cooperative) and "feedback intervention" (available vs. not available) were varied; participants were 137 university students. Results showed that the feedback intervention clearly supported learning. Feedback proved especially beneficial for students with little prior knowledge. Cooperation did not promote learning outcomes; however, group performance in the learning phase was superior to individual performance. Also, cooperative learning enhanced perceived performance and perceived competence. Probably, collective efficacy had a halo effect on self-efficacy.
© 2008 Elsevier Ltd. All rights reserved.

Keywords: Cooperative learning; Feedback; E-learning; Statistics education; Worked examples; Perceived performance; Perceived competence
1. Introduction

Many students of social sciences have difficulties understanding and applying statistical concepts and procedures (Broers & Imbos, 2005; Stark & Mandl, 2000). Yet, large numbers of students make individual tutoring difficult. In addition, many students lack motivation concerning mathematical issues, and some suffer from mathematics anxiety (Onwuegbuzie, 2004). Starting from this problem, the e-learning environment Koralle (Tyroller, 2005; see also Krause, 2007) on correlation analysis was developed.

In the present study, we tested two interventions that were expected to increase effectiveness of the e-learning environment: cooperative learning and feedback. We investigated effects of these interventions on objective and subjective learning outcomes (Stark, Gruber, Renkl, & Mandl, 1998). Concerning objective outcomes, we focussed on students' ability to solve realistic problems. As regards subjective outcomes, we examined perceived performance and perceived competence, which are relevant for students' self-efficacy (Bandura, 1997) and thus for their motivation (Deci & Ryan, 2000; Pajares, 1997). Subjective outcomes are especially important in the field of statistics, where many students lack confidence in their abilities and therefore avoid the subject.

The following section outlines the conception of the learning environment and the theoretical and empirical background of the two additional interventions. Then, the study is described and main findings are discussed.

* Corresponding author. Tel.: +49 681 302 3391; fax: +49 681 302 4708.
E-mail addresses: [email protected] (U.-M. Krause), [email protected] (R. Stark), [email protected] (H. Mandl).

doi:10.1016/j.learninstruc.2008.03.003
1.1. The e-learning environment Koralle

To facilitate learning in statistics, a number of instructional interventions have been tried (Lan, 1998; Stark & Mandl, 2000). In this respect, e-learning has gained importance in recent years, as it enables students to learn in a self-regulated manner and is considered to be a way to assist learners individually even under adverse learning conditions (Mandl & Krause, 2003).

In the present study, the e-learning environment Koralle on correlation analysis was implemented. Correlation is a central concept in statistics and, at the same time, a topic that consistently creates problems for students. The e-learning environment was developed for statistics education in the social sciences. It is geared towards university students who have already attended introductory statistics courses (and who should possess basic mathematical knowledge). Koralle addresses often neglected descriptive aspects of correlation, such as linearity and the effects of outliers on the correlation coefficient. As the e-learning environment is meant to enhance the acquisition of applicable knowledge, it was designed according to principles of situated learning (Cognition and Technology Group at Vanderbilt, 1997): subject matter is embedded in a realistic context, and learners have to deal with complex problems that are relevant to students of social sciences (e.g., analyzing data for a thesis).

Koralle is based on worked examples; this approach has repeatedly proved efficient in well-structured fields (Sweller & Cooper, 1985; Van Gog, Paas, & Van Merriënboer, 2006). Worked examples demonstrate problem solving in a step-by-step manner and therefore facilitate the acquisition of appropriate solution schemas for structurally similar problems (Sweller & Cooper, 1985). The effectiveness of example-based learning is often explained by cognitive load theory (Sweller, 1988): studying examples requires hardly any mnemonic search processes, so instruction-based demands on working memory (extraneous load) are low. Capacity can thus be used more thoroughly for productive learning activities (germane load), such as self-explanations (Renkl, 2005). However, when students are not explicitly activated, they often read rather passively instead of actively self-explaining and mindfully processing the example information (Stark, Mandl, Gruber, & Renkl, 2002). Therefore, in Koralle worked examples are systematically combined with problem-solving tasks (Stark, Gruber, Renkl, & Mandl, 2000): students first actively solve a problem, and afterwards a worked example is presented that demonstrates the correct solution procedure. Students can compare their own solutions to the example information; the examples, therefore, function as feedback. This approach should be effective when students have prior knowledge that can be activated by problem solving (Krause & Stark, 2006).

In an experimental study, Koralle facilitated knowledge acquisition (Tyroller, 2005). Students who had learned with Koralle scored higher in a test on correlation analysis than their peers who had merely attended a lecture on correlation. However, further analysis of students' answers revealed deficits in their knowledge structure; many answers indicated that even students who had worked with Koralle partly lacked deeper understanding. Especially participants with little prior knowledge did rather poorly. These results might be due to special properties of e-learning and of learning with worked examples.
Unlike learning in the classroom, e-learning is generally a solitary process. However, students need communication with others in order to externalize their own ideas, to elaborate the presented information, to get feedback, to identify their own knowledge gaps as well as their misconceptions (see, e.g., Resnick, Levine, & Teasley, 1991), and, of course, in order to experience relatedness (Deci & Ryan, 2000). The lack of social interaction in e-learning might therefore have been a reason for the suboptimal results. Furthermore, example-based learning requires active processing of example information. Probably, many students, especially those with little prior knowledge, did not effectively compare their own solutions with the worked examples; it is likely that they could (or would) not use the (standardized) feedback. Especially weaker or less motivated students might need specific feedback that refers to individual errors and knowledge gaps. Without social interaction and specific feedback, students easily develop misconceptions and illusions of understanding (Krause, 2007; Kruger & Dunning, 1999). Therefore, in the present study, cooperative learning and a feedback intervention were implemented. Both interventions should lead to greater effectiveness of the example-based e-learning approach (Krause, 2007; Krause, Stark, & Mandl, 2004).

1.2. Cooperative learning

Cooperative learning is increasingly regarded as an effective means to facilitate learning and higher order thinking (Cohen, 1994). There are several definitions of cooperation and cooperative learning that stress different aspects of it, for example the goal structure or the nature of the task (Slavin, 1983). Based on Cohen (1994) and Slavin (1983), we define cooperative learning as a setting where people learn together in a group that is small enough to allow active participation of each group member.¹

1 Learning together is also referred to as collaborative learning; some authors distinguish between cooperative learning (working on the same task but dividing it up) and collaborative learning (actually working together; see, e.g., Dillenbourg, Baker, Blaye, & O'Malley, 1996). We use the term "cooperative learning" with the above-mentioned broad definition.
Empirical findings show that cooperative learning can promote knowledge acquisition (Lou, Abrami, & d'Apollonia, 2001), elaboration of subject matter (Krol, Janssen, Veenman, & Van der Linden, 2004), mindfulness (Lambiotte et al., 1988; see also Salomon & Globerson, 1989), as well as social and motivational processes (Gupta, 2004; Johnson & Johnson, 1989). However, for cooperative learning to be effective, adequate cooperation design is necessary (Cohen, 1994; Lou et al., 2001; Slavin, 1983). For individual learning of every group member, active participation of each student is crucial (Cohen, 1994; O'Donnell & Dansereau, 1992); cooperation design must, therefore, prevent detrimental group phenomena such as diffusion of responsibility or social loafing (Latané, Williams, & Harkins, 1979; Salomon & Globerson, 1989). Large groups are quite susceptible to these phenomena (Ingham, Levinger, Graves, & Peckham, 1974). So, in well-structured fields, where cooperation aims at elaboration of the material rather than at extensive discussion of multiple perspectives, small groups are advisable. This especially applies to cooperative learning in front of the computer, where dyads are most efficient (Lou et al., 2001).

Referring to Vygotsky's (1978) zone of proximal development, some authors emphasize the benefits of heterogeneous groups, where high-ability students learn by externalizing and elaborating their own knowledge and low-ability students benefit from peer explanations and help (Hogan & Tudge, 1999). However, in heterogeneous groups, high-ability members tend to dominate the group process; these status effects may lead to passivity of weaker learners and thus impair their learning (Dembo & McAuliffe, 1987). This is especially true for a field like statistics, where many students lack self-confidence. In this study, homogeneous dyads were formed in order to facilitate active participation of every student (see also Mulryan, 1992; Webb & Farivar, 1999). Besides, students were watched via monitors, and they were informed that their participation and interaction would be observed by the researchers. This should motivate them to contribute actively to group work. Cooperative learning should thus enhance knowledge acquisition of every group member.

An indicator of active participation and effective use of the presented information is group success: when students participate effectively, groups can benefit from greater resources. Research on information processing in groups suggests that groups are more effective in decision making and complex problem solving than individuals (Hinsz, Tindale, & Vollrath, 1997). So, group performance should be superior to individual performance. Successful group work might also enhance subjective learning outcomes (Hänze & Berger, 2007) through collective self-efficacy (Bandura, 1997).

1.3. Feedback

Feedback differs in form and degree of elaboration and hence in the information that has to be processed by the learner (Kulhavy & Stock, 1989). The impact of feedback on learning depends not only on the kind of feedback provided, but also on how the learner deals with the feedback information (Bangert-Drowns, Kulik, Kulik, & Morgan, 1991; Hattie & Timperley, 2007). Two main factors can thus be distinguished that determine feedback effectiveness: feedback design and feedback reception (Krause, 2007).
Feedback is generally subdivided into many categories, three of which are the following: knowledge of results, knowledge of correct response, and elaborated feedback (Dempsey, Driscoll, & Swindell, 1993; Kulhavy & Stock, 1989). Empirical findings show that elaborated feedback is more effective than mere knowledge of results or knowledge of correct response (Moreno, 2004). By highlighting mistakes and offering explanations or other additional information, elaborated feedback helps students to reflect on the presented information and on their own knowledge and should thereby facilitate elaboration of the material, correction of misconceptions, and filling of knowledge gaps. In example-based learning, as in the e-learning environment Koralle, feedback should prevent errors that are caused by superficial reception of the worked example.

In e-learning, feedback can either be provided by tutors (via e-mail or discussion boards) or automatically by the programme. Automatic feedback is much more economical and allows immediate feedback to every student; this form of feedback provision is therefore recommendable when student numbers are high and resources are scarce.
Automatic feedback in e-learning environments can be standardized (every student receives the same feedback, e.g., knowledge of correct response) or adaptive (feedback information is adapted to students' answers; cf. Sales, 1993). As mentioned above, especially students with little prior knowledge need feedback that refers to their respective mistakes. Automatic adaptive feedback requires a testing mode, such as multiple choice, that permits automatic answer analysis. In well-structured fields (like correlation analysis), where there is a clear "right" or "wrong" response, this form of testing and feedback provision is easy to implement. So, in this study, multiple-choice tests with immediate, adaptive feedback were employed. Like the rest of the learning environment, the multiple-choice tests referred to realistic and relevant problems. As feedback was meant to facilitate elaboration and to fill knowledge gaps, elaborated feedback was provided, which consisted of knowledge of results, knowledge of correct response, and explanations of why the student's answer was correct or not.

Bangert-Drowns et al. (1991) stated that feedback information must be used mindfully by the recipient so that its impact on learning is positive (see also Hattie & Timperley, 2007). However, empirical results suggest that students often lack mindfulness in feedback reception or do not use feedback in the intended way (Hancock, Thurman, & Hubbard, 1995; Krause, 2002). Here, a cooperative learning setting might be advantageous. Findings on information processing in groups suggest that groups use feedback information more effectively than individuals (Hinsz et al., 1997). Thus, it seems likely that group feedback (feedback that is given to a group; Nadler, 1979) is beneficial for learning.

Positive (i.e., confirming) feedback that is perceived as supportive and informational promotes feelings of competence and thus fosters students' motivation to deal with the subject matter (Deci & Ryan, 2000; see also Kunter, Baumert, & Köller, 2007; Narciss & Huth, 2006). The multiple-choice tests that were part of the feedback intervention in this study were only moderately demanding; therefore, the subsequent elaborated feedback should be largely confirming and thus enhance perceived performance and perceived competence.

1.4. Research questions – hypotheses

The following research questions were addressed in the study:

(1) Do cooperative learning and the feedback intervention enhance individual learning with Koralle? We expected both types of interventions to promote objective learning outcomes, that is, performance on a posttest (Hypothesis 1a). Moreover, we expected an interaction of feedback intervention with social context (i.e., individual vs. cooperative): the feedback intervention should be especially beneficial for students who work in groups (Hypothesis 1b; group-feedback hypothesis). There should also be an interaction of feedback intervention with students' level of prior knowledge: feedback should be especially beneficial for students with little prior knowledge (Hypothesis 1c).

(2) Do groups outperform individuals in problem solving during the learning phase? We expected that groups would be more successful in problem solving than individuals (Hypothesis 2).

(3) Do cooperative learning and the feedback intervention promote subjective learning outcomes? We expected that both interventions would enhance students' perceived performance and perceived competence (Hypothesis 3).

2. Method

2.1. Sample
Participants were 137 students (105 females; age: M = 23.82 years, SD = 5.08) of the University of Munich (semester: M = 3.02, SD = 2.60), who studied social sciences (Education: 84, Psychology: 53). Students participated on a voluntary basis. As basic prior knowledge in statistics was required, students were asked via e-mail to rate their background knowledge about statistics and report their previous grades in statistics.

2.2. Design – procedure

In a 2 × 2 factorial laboratory experiment with a pre-/posttest design, the factors Social Context (individual vs. cooperative) and Feedback Intervention (available vs. not available) were varied. Fig. 1 outlines the experimental procedure.
Fig. 1. The design and procedure of the study.

Phase                                                                        Time span (approximately)
1. Pretest and questionnaire on motivation                                   20 min
2. Instructions for the learning phase                                       3 min
3. Learning phase; experimental conditions:                                  Maximum: 200 min
   - Individual learning without feedback intervention (n = 17)
   - Individual learning with feedback intervention (n = 18)
   - Cooperative learning without feedback intervention (n = 50; 25 dyads)
   - Cooperative learning with feedback intervention (n = 52; 26 dyads)
4. Posttest and questionnaire on subjective learning outcomes and on
   sociodemographic aspects                                                  30 min
The students were randomly assigned to the four experimental conditions.² For the cooperative conditions, homogeneous dyads were composed (based on the students' information about their own knowledge and grades); full randomization was, therefore, not possible. The participants first took the pretest and filled in the questionnaire on motivational learning prerequisites. Then, individuals and dyads worked with Koralle. In the cooperative conditions, dyads shared one computer (i.e., they cooperated in a face-to-face manner). The e-learning environment first provided information on how to proceed; students in dyads were instructed to discuss the statistical issues and solve the tasks interactively. Afterwards, all participants had to deal with six problem-solving tasks on correlation analysis (see Section 2.3). Solutions were to be written into text-entry fields. Upon completion of each task, students received a worked example that they could compare with their own solution. Thus, in all four experimental conditions some feedback was available. The additional feedback intervention consisted of six multiple-choice tests with adaptive, elaborated feedback (see Section 2.3). Each test followed one of the six worked examples and addressed the same topic as the respective example (e.g., linearity). As dyads took the multiple-choice tests together, the feedback referred to the dyad's performance and can, therefore, be regarded as group feedback. The dyads' solutions in the learning phase displayed the group's performance, whereas the posttest results demonstrated individual learning outcomes.

After the learning phase, the posttest was administered; perceived performance, perceived competence, and sociodemographic variables were also recorded. During the whole session, students were supervised via monitors. To ensure ecological validity of the study, time on task was only minimally restricted: a limit of 200 min was not to be exceeded.

2.3. Learning tasks

The e-learning environment Koralle consists of six problem-solving tasks and six corresponding worked examples. For example, students are presented with a scatter plot that displays a positive correlation and have to solve the following problem (slightly shortened): "You analyze the relationship between logical thinking and arithmetical thinking. You first look at the scatter plot. What can you say about the variables' correlation?" The following three guiding questions scaffold students' problem solving: "(1) Is it a linear correlation? (2) Is it a positive or a negative correlation? (3) How strong is the correlation?" These questions are faded out in the learning process (see also Atkinson, Renkl, & Merrill, 2003). Students type their solutions into a text-entry field. Afterwards, they are presented with the following worked example (slightly shortened): "(1) The correlation between logical thinking and arithmetical thinking is linear, as the data in the scatter plot can approximately be represented by a straight line. (2) The correlation is positive, as high values in logical thinking are associated with high values in arithmetical thinking, and low values in logical thinking are associated with low values in arithmetical thinking. (3) The data points are close to the regression line. At the same time, the logical thinking data scatter over a wide range. Therefore, the correlation is strong." Students can get additional help by consulting a glossary that explains central concepts of the learning environment.
2 To obtain about the same number of individual and group products, an equal number of individual and cooperative settings was aimed at. This led to a greater number of students in the cooperative conditions. About 10% of the students who were interested in the study did not participate (mainly because of exams, as it was the end of the semester), which also contributed to disparate allocation to the four conditions.
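For orientation (this formula is not part of the original learning material, but it is the quantity the guiding questions refer to), the Pearson product-moment correlation of two variables x and y observed on n cases is

    r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
             {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} .

Its sign indicates the direction of the linear relationship, its absolute value (between 0 and 1) indicates its strength, and a value of |r| close to 1 presupposes an approximately linear point cloud, which is why the worked example checks linearity first.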
In the feedback conditions of the present study, each worked example was followed by a multiple-choice test. Feedback on the response was automatically presented on the screen. Multiple-Choice Test 1, for example, shows a scatter plot that displays a strong positive correlation between verbal comprehension and verbal fluency. Students are asked to rate 12 statements as either "true" or "false"; for example, "The correlation is positive as the data in the scatter plot can approximately be represented by a straight line with a positive slope" or "The two variables correlate only weakly; therefore, the correlation coefficient is smaller than 0.5". After having clicked on the button "continue", students receive feedback on their answers. The feedback reports the respective results, for example, "8 out of 12 answers were correct". Moreover, the correct responses are indicated and elaborated; for example, "Higher values in verbal comprehension tend to go with higher values in verbal fluency, and lower values in verbal comprehension tend to go with lower values in verbal fluency; therefore, the correlation is positive". And finally, corrective information concerning errors is provided, for example, "The correlation is strong as the data are close to the regression line; the correlation coefficient is about r = 0.9".
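Purely as an illustration, and not the actual Koralle implementation, the following Python sketch shows one way such adaptive, elaborated feedback for a true/false test could be assembled; all item texts, names, and explanations are invented.

```python
# Sketch only: adaptive, elaborated feedback for a true/false test.
# Items and explanations are invented placeholders, not Koralle content.
from dataclasses import dataclass

@dataclass
class Item:
    statement: str
    correct: bool
    explanation_if_right: str   # elaborates the correct response
    explanation_if_wrong: str   # corrective information for an error

ITEMS = [
    Item("The correlation is positive.", True,
         "Higher values on one variable tend to go with higher values on the other.",
         "Check the slope of the point cloud: it rises, so the correlation is positive."),
    Item("The correlation coefficient is smaller than 0.5.", False,
         "The points lie close to the regression line, so the correlation is strong.",
         "The data are close to the regression line; the coefficient is clearly above 0.5."),
    # ... further items would follow here
]

def elaborated_feedback(responses: list[bool]) -> str:
    """Return knowledge of results, knowledge of correct response, and
    error-specific corrective explanations for a list of true/false answers."""
    lines = []
    n_correct = sum(r == item.correct for r, item in zip(responses, ITEMS))
    lines.append(f"{n_correct} out of {len(ITEMS)} answers were correct.")
    for r, item in zip(responses, ITEMS):
        if r == item.correct:
            lines.append(f"Correct: {item.explanation_if_right}")
        else:
            lines.append(f"Not correct: {item.explanation_if_wrong}")
    return "\n".join(lines)

print(elaborated_feedback([True, True]))
```

The three components mirror the description above: knowledge of results (the count of correct answers), knowledge of correct response (the elaborations for correct answers), and corrective information adapted to the errors actually made.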
2.4. Measures

2.4.1. Prior knowledge and objective learning outcomes

Prior knowledge and objective learning outcomes were assessed with a pretest and a posttest, respectively, which had proven reliable in a previous study with Koralle (Tyroller, 2005). Both tests consisted of realistic, open-ended problems. The pretest was composed of three tasks with a maximum performance score of 12. An example problem is the following: "In a study with 16 participants the variables intelligence and study success correlated weakly, r = 0.12. The researcher discovers that the low correlation was caused by an outlier. What does that mean?" The posttest consisted of five tasks with a maximum score of 20. An example problem is the following: "A friend asks your advice: He investigated the correlation of statistics anxiety and statistics performance in a sample of 20 psychology students. Statistics anxiety was recorded by means of a well-validated questionnaire. Moreover, intelligence was assessed by an established intelligence test. The grade in a statistics test served as a performance indicator. The correlation of anxiety and performance was r = 0.18; your friend had expected a correlation of about r = 0.60. (a) What could he do to find the reason for the low correlation? (b) What factors might explain the low correlation?" Students' answers were evaluated by means of standardized assessment schemes.³ Internal consistencies of the pre- and the posttest were quite low, Cronbach's α = 0.60 for both tests. This is probably due to the heterogeneous task requirements: the tasks demanded different kinds of knowledge, namely declarative, procedural, and conditional knowledge.

2.4.2. Performance on the learning tasks

For the comparison of group and individual performance, the dyads' and individuals' solutions to the six problem-solving tasks of the e-learning environment were analyzed by means of standardized assessment schemes (the maximum score was 93). Cronbach's α of the six problem-solving tasks was 0.77. Furthermore, students' scores in the six multiple-choice tests of the feedback conditions were examined. Cronbach's α of the multiple-choice tests was only 0.51; therefore, the tests were scored separately. As each test consisted of 12 items that had to be rated as "true" or "false" by the students, the maximum score for each test was 12.

2.4.3. Motivational prerequisites

Motivational prerequisites were assessed by self-rating on a six-point response scale that ranged from 1 (strongly disagree) to 6 (strongly agree). Topic interest was measured with six items, for example, "I am interested in correlation analysis"; internal consistency was Cronbach's α = 0.81. Also, task orientation and work avoidance responses were gathered based on scales by Nicholls, Patashnick, and Nolen (1985). Task orientation was assessed with seven items, for example, "I feel most successful when I understand a complicated idea"; internal consistency was Cronbach's α = 0.76. Work avoidance was measured with three items, for example, "I feel most successful when I don't have to work hard"; internal consistency was Cronbach's α = 0.78.

3 All assessment tasks and scoring schemes can be requested from the first author. The responses were scored by the first author. In ambiguous cases the final score was given after consultation with the second author.
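For reference (a standard definition, not quoted from the article), the internal consistency coefficient reported above is Cronbach's α, which for a scale of k items with item variances σ_j² and total-score variance σ_X² is

    \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{j=1}^{k} \sigma_j^{2}}{\sigma_X^{2}} \right) .

Values near 1 indicate that the items measure a homogeneous construct; the moderate values reported here fit the authors' remark that the tasks tap heterogeneous kinds of knowledge.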
Table 1
Means (and SD) of pretest performance and motivational prerequisites (self-rated interest, task orientation, and work avoidance) as a function of experimental condition

Experimental condition                                 Pretest        Interest       Task orientation   Work avoidance
                                                       M (SD)         M (SD)         M (SD)             M (SD)
Individual learning without feedback intervention      4.28 (2.13)    3.88 (0.90)    5.13 (0.65)        3.84 (1.03)
Individual learning with feedback intervention         4.24 (2.33)    3.69 (0.76)    5.22 (0.47)        3.91 (1.11)
Cooperative learning without feedback intervention     4.71 (1.90)    3.47 (0.80)    5.30 (0.54)        3.99 (1.14)
Cooperative learning with feedback intervention        5.07 (1.54)    3.80 (0.91)    5.30 (0.45)        3.92 (1.21)
2.4.4. Subjective learning outcomes

To assess subjective learning outcomes, students were asked to evaluate their own performance by responding to the single item "Please assess the quality of your freely formulated answers within the learning environment Koralle on a scale from 1 (very good) to 6 (very bad)". They were also asked to evaluate their own competence by responding to the single item "Please assess your knowledge about correlation analysis on a scale from 1 (very good) to 6 (none)".

2.4.5. Time on task

Time on task was automatically registered by the e-learning application.

3. Results

3.1. Learning prerequisites

The students in the four experimental conditions were comparable as regards their prior knowledge and motivational prerequisites (see Table 1). Specifically, one-way ANOVAs for the pretest and for each of the motivational prerequisites revealed no effect of condition in any case: for pretest performance, F(3, 133) = 1.32, ns; for topic interest, F(3, 133) = 1.67, ns; for task orientation, F(3, 133) < 1, ns; and for work avoidance, F(3, 133) < 1, ns.

3.2. Objective learning outcomes

The mean posttest scores in all experimental conditions were located in the middle of the scale (see Table 2). The overall mean was 12.31 (SD = 3.02), which corresponds to 61.55% of the maximum score (20 points). Students who were presented with the feedback intervention scored higher in the posttest than students who only had worked-example feedback. A 2 (social context) × 2 (feedback intervention) ANOVA with posttest score as dependent variable showed that the main effect of feedback intervention was significant, F(1, 133) = 32.91, p < 0.001, partial η² = 0.20.⁴ The main effect of social context, however, was not significant, F(1, 133) < 1, ns. Thus, Hypothesis 1a was only partially confirmed.

The group-feedback hypothesis (Hypothesis 1b) was not confirmed, as shown by the significant interaction of social context with feedback intervention,⁵ F(1, 133) = 5.03, p < 0.05, partial η² = 0.04. With the feedback intervention, students who had worked alone scored higher in the posttest than students who had worked in dyads; without the feedback intervention, students who had worked in dyads scored higher. So, contrary to Hypothesis 1b, the feedback intervention was especially beneficial in individual learning, not in cooperative learning.

To test Hypothesis 1c, a 2 (prior knowledge) × 2 (social context) × 2 (feedback intervention) ANOVA was applied. For this analysis, prior knowledge was dichotomized at the median into high (>4.75) vs. low (≤4.75). Table 3 shows that students with low prior knowledge scored higher when the feedback intervention was available. There was no interaction between social context and prior knowledge, F(1, 129) < 1, ns, but the interaction between feedback intervention and prior knowledge was significant, F(1, 129) = 5.94, p < 0.05, partial η² = 0.04. Therefore, Hypothesis 1c was confirmed.

4 All effect sizes were classified according to Cohen (1988) as small (η² ≤ 0.05), medium (0.06 ≤ η² ≤ 0.13), or large (η² ≥ 0.14).
5 Because of the disparate sample sizes in the four experimental conditions, U-tests were calculated, which confirmed the significant effect of the feedback intervention and the significant interaction.
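As a minimal sketch of the analysis just described (not the authors' code; column names and the synthetic scores are invented), a 2 × 2 between-subjects ANOVA of this kind could be computed as follows:

```python
# Hypothetical sketch of the 2 x 2 ANOVA from Section 3.2; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 137  # number of participants in the study; the scores below are invented
df = pd.DataFrame({
    "social_context": rng.choice(["individual", "cooperative"], size=n),
    "feedback": rng.choice(["without", "with"], size=n),
    "posttest": np.clip(rng.normal(12.3, 3.0, size=n), 0, 20),  # 0-20 scale
})

# Factorial model with interaction term; Type II sums of squares
model = ols("posttest ~ C(social_context) * C(feedback)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The three-way analysis for Hypothesis 1c would simply add a dichotomized prior-knowledge factor (e.g., a hypothetical C(prior) term based on the median split) to the model formula.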
Table 2
Means (and SD) of posttest performance as a function of experimental condition

Experimental condition                                 Posttest M (SD)
Individual learning without feedback intervention      10.25 (3.41)
Individual learning with feedback intervention         14.51 (2.15)
Cooperative learning without feedback intervention     11.31 (3.06)
Cooperative learning with feedback intervention        13.18 (2.28)
Table 3
Means (and SD) of posttest performance as a function of experimental condition and level of prior knowledge

Experimental condition                                 Posttest M (SD)
Individual learning without feedback intervention
  Low prior knowledge (n = 10)                          8.85 (3.63)
  High prior knowledge (n = 7)                          12.25 (1.81)
Individual learning with feedback intervention
  Low prior knowledge (n = 10)                          14.25 (1.92)
  High prior knowledge (n = 8)                          14.84 (2.50)
Cooperative learning without feedback intervention
  Low prior knowledge (n = 29)                          10.04 (2.77)
  High prior knowledge (n = 21)                         13.06 (2.59)
Cooperative learning with feedback intervention
  Low prior knowledge (n = 23)                          12.64 (2.55)
  High prior knowledge (n = 29)                         13.60 (1.98)

Note. Low (≤4.75) and high (>4.75) indicate two levels of prior knowledge based on a median split.
3.3. Time on task

Cooperative learning and the feedback intervention prolonged time on task (see Table 4). In a 2 (social context) × 2 (feedback intervention) ANOVA with time on task as dependent variable, the main effects of both factors were significant: for social context, F(1, 133) = 8.82, p < 0.01, partial η² = 0.06, and for feedback intervention, F(1, 133) = 44.34, p < 0.001, partial η² = 0.25. Moreover, time on task was significantly correlated with learning outcomes, r = 0.28, p < 0.01. Therefore, time on task was used as a covariate in an ANCOVA with social context and feedback intervention as independent variables and the posttest score as dependent variable. The main effect of feedback intervention, F(1, 132) = 19.92, p < 0.001, partial η² = 0.13, and the interaction of social context with feedback intervention, F(1, 132) = 4.95, p < 0.05, partial η² = 0.04, remained significant.

Table 4
Means (and SD) of time on task (in minutes) as a function of experimental condition

Experimental condition                                 Time on task M (SD)
Individual learning without feedback intervention      70.06 (24.05)
Individual learning with feedback intervention         104.11 (23.72)
Cooperative learning without feedback intervention     85.80 (29.77)
Cooperative learning with feedback intervention        117.85 (21.27)
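A corresponding sketch for the ANCOVA reported in this section (again with hypothetical column names and synthetic data, not the authors' code) only adds the covariate to the model formula:

```python
# Hypothetical sketch of the ANCOVA from Section 3.3; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 137
df = pd.DataFrame({
    "social_context": rng.choice(["individual", "cooperative"], size=n),
    "feedback": rng.choice(["without", "with"], size=n),
    "time_on_task": rng.normal(95.0, 28.0, size=n),  # minutes, invented
    "posttest": np.clip(rng.normal(12.3, 3.0, size=n), 0, 20),
})

# Time on task enters as a covariate before the factorial terms
ancova = ols("posttest ~ time_on_task + C(social_context) * C(feedback)",
             data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```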
Table 5
Means (and SD) of performance in the learning phase as a function of experimental condition

Open-ended problem-solving tasks, M (SD):
  Individual learning without feedback intervention     21.47 (9.62)
  Individual learning with feedback intervention        25.06 (8.89)
  Cooperative learning without feedback intervention    29.06 (8.95)
  Cooperative learning with feedback intervention       27.19 (7.32)

Multiple-choice tests (feedback conditions only), M (SD):
                                                   Test 1         Test 2         Test 3        Test 4        Test 5        Test 6
  Individual learning with feedback intervention   9.39 (1.46)    9.94 (1.35)    8.72 (1.13)   8.83 (1.76)   8.33 (1.37)   9.50 (1.47)
  Cooperative learning with feedback intervention  10.69 (0.97)   10.92 (1.16)   8.88 (0.99)   9.73 (1.46)   8.85 (1.97)   9.92 (1.38)
3.4. Performance on the learning tasks

Table 5 displays the mean scores on the tasks of the e-learning environment. Dyads outperformed individuals both in the open-ended problem-solving tasks and in the multiple-choice tests. A 2 (social context) × 2 (feedback intervention) ANOVA with the overall score on the six problem-solving tasks as dependent variable showed that the main effect of social context was significant, F(1, 82) = 6.61, p < 0.05, partial η² = 0.08, as Hypothesis 2 predicted. The main effect of feedback intervention, F(1, 82) < 1, ns, and the interaction of social context with feedback intervention, F(1, 82) = 2.08, ns, were not significant.

For the multiple-choice tests, which were part of the feedback intervention, a one-way MANOVA was applied with the six multiple-choice tests as dependent variables. In this case, there was no factor Feedback Intervention; only Social Context was included (N = 44, that is, 18 individuals and 26 dyads). The MANOVA revealed a significant main effect of social context, Pillai's trace = 0.33, F(6, 37) = 3.07, p < 0.05, partial η² = 0.33. Univariate analyses showed significant effects of social context for the first two multiple-choice tests: for Multiple-Choice Test 1, F(1, 42) = 12.69, p < 0.01, partial η² = 0.23, and for Multiple-Choice Test 2, F(1, 42) = 6.60, p < 0.05, partial η² = 0.14. The univariate F tests for the other multiple-choice tests were not significant: for Tests 3, 5, and 6, F(1, 42) < 1, ns, and for Test 4, F(1, 42) = 3.41, ns.

3.5. Subjective learning outcomes

The mean self-ratings of performance and competence were located in the middle of the scale (see Table 6). Two participants did not answer the competence question (i.e., N = 135 for perceived competence). A 2 (social context) × 2 (feedback intervention) MANOVA with the two subjective learning outcomes as dependent variables revealed a significant main effect of social context, Pillai's trace = 0.10, F(2, 130) = 7.43, p < 0.01, partial η² = 0.10. However, there was no main effect of feedback, Pillai's trace = 0.02, F(2, 130) = 1.37, ns, and the interaction of social context with feedback intervention was also nonsignificant, Pillai's trace = 0.01, F(2, 130) < 1, ns. The univariate F tests displayed a significant effect of social context on perceived performance, F(1, 133) = 13.85, p < 0.001, partial η² = 0.09, but the effect of feedback intervention, F(1, 133) < 1, ns, and the interaction, F(1, 133) = 1.07, ns, were nonsignificant. Social context also had a significant main effect on perceived competence, F(1, 131) = 4.80, p < 0.05, partial η² = 0.04, but, again, there was no main effect of feedback intervention, F(1, 131) = 1.69, ns, and the interaction was again nonsignificant, F(1, 131) < 1, ns. Thus, Hypothesis 3 was partially confirmed.

Table 6
Means (and SD) of perceived performance and perceived competence as a function of experimental condition

Experimental condition                                 Performance    Competence
                                                       M (SD)         M (SD)
Individual learning without feedback intervention      3.53 (0.87)    3.35 (1.17)
Individual learning with feedback intervention         3.44 (1.20)    3.12 (1.05)
Cooperative learning without feedback intervention     2.64 (0.72)    2.96 (0.71)
Cooperative learning with feedback intervention        2.94 (1.07)    2.73 (0.91)

Note. Lower values indicate more favourable self-ratings (response scales from 1 = very good to 6 = very bad or none, respectively).
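To make the multivariate analysis concrete, here is a hedged sketch (hypothetical column names, synthetic data, not the authors' code) of a 2 × 2 MANOVA on the two subjective outcomes, reporting Pillai's trace:

```python
# Hypothetical sketch of the MANOVA from Section 3.5; data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
n = 135  # two participants skipped the competence item
df = pd.DataFrame({
    "social_context": rng.choice(["individual", "cooperative"], size=n),
    "feedback": rng.choice(["without", "with"], size=n),
    "perceived_performance": np.clip(rng.normal(3.1, 1.0, size=n), 1, 6),
    "perceived_competence": np.clip(rng.normal(3.0, 1.0, size=n), 1, 6),
})

maov = MANOVA.from_formula(
    "perceived_performance + perceived_competence ~ "
    "C(social_context) * C(feedback)",
    data=df,
)
print(maov.mv_test())  # includes Pillai's trace for each model term
```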
4. Discussion

The feedback intervention clearly promoted objective learning outcomes and compensated for knowledge deficits. The substantial feedback effect remained significant when time on task was statistically controlled. Apparently, the feedback design was appropriate and allowed every student easy and mindful feedback reception. As all participants were presented with worked examples and, therefore, some feedback was available in all experimental conditions, the considerable effect of the additional feedback intervention is not trivial. The positive impact of feedback on learning outcomes is consistent with findings of previous feedback research (Bangert-Drowns et al., 1991; Hattie & Timperley, 2007; Moreno, 2004). The interpretation of our findings has to acknowledge, however, that the feedback intervention included multiple-choice testing, which allowed additional practice and thus might have supported learning as well. Further feedback studies should investigate the processes that are responsible for feedback effectiveness, for example by means of thinking-aloud protocols. Probably, feedback promotes elaborative and reflective activities by highlighting mistakes and offering explanations and thus fosters deeper understanding. Process analyses might also reveal motivational and emotional reactions to feedback, which could likewise be relevant for feedback effects.

The feedback intervention did not influence perceived performance and perceived competence. Thus, although students clearly benefited from additional feedback, the intervention did not promote subjective learning outcomes. Discrepancies between objective and subjective outcomes have been found before (Stark et al., 1998). In the present study, students received feedback repeatedly in the learning process. Depending on the respective feedback information (confirming correct responses vs. correcting errors), feedback might lead either to positive or to negative self-assessments. Possibly, each feedback event was perceived differently, which led to inconsistent evaluations throughout the learning process. Here, too, process studies might be informative, especially if they involve repeated measurement of feedback perception and of self-assessment.

Cooperative learning did not affect objective learning outcomes. This was not due to a lack of participation and interaction: observation via monitors showed that active participation and interaction were present in all dyads. However, these observations were not systematic as regards the sequence and content of interaction. In any case, at least the groups' superior performance in the learning phase demonstrates that social loafing or other detrimental group phenomena did not (or did not extensively) occur. Still, participation did not lead to better individual learning. Either cooperation did not induce special elaborative and reflective activities, or these activities did not result in immediate learning gains. To examine this, micro-analyses of the cooperation process are needed that provide information about the quality of interaction (see, e.g., Arvaja, Salovaara, Häkkinen, & Järvelä, 2007; Meier, Spada, & Rummel, 2007).
These analyses should be especially informative when individual and group-level perspectives are combined (Arvaja et al., 2007) and when cognitive, motivational, and affective aspects of social interaction are integrated: different aspects of negotiation (reciprocity, search for consensus, etc.; Meier et al., 2007), commitment (Hänze & Berger, 2007), and indicators of emotions and their regulation (Järvenoja & Järvelä, 2005).

All the same, cooperation did not impair learning. From a pedagogical point of view, this is an acceptable finding: cooperative learning is often implemented for several reasons, not only in order to promote knowledge acquisition. Possible objectives are, for instance, facilitating relatedness, enhancing social skills, or reducing anxiety. In e-learning, moreover, small numbers of computers in classrooms often require student cooperation. Furthermore, effects of cooperation on performance in the learning phase and on subjective learning outcomes display a specific potential of this method.

Group performance was better than individual performance. Obviously, as expected, dyads benefited from the greater knowledge base and from collective information processing. In the multiple-choice tests, effects were only significant for Tests 1 and 2; this might be explained by increasing fatigue, as the experimental condition that involved both cooperative learning and the feedback intervention was very demanding. Hence, whereas individual learning was not supported by cooperation, group performance in the learning phase was better. This shows that effective group problem solving is by no means to be equated with successful learning of group members (see also Lou et al., 2001; Slavin, 1983).
However, perhaps elaborative and reflective processes in cooperation promote long-term retention and, thus, the sustainability of learning. Findings on subjective learning outcomes provide a clear hint that cooperative learning has the potential to increase sustainability. Cooperation enhanced perceived performance and perceived competence; favourable self-assessments should be beneficial for self-efficacy beliefs and future motivation to deal with the respective field (Deci & Ryan, 2000; Pajares, 1997). Although students who had worked in dyads were actually not superior in the (individually taken) posttest, they evaluated their competence more positively than students who had worked alone. Probably, there was some sort of "halo effect" of perceived collective efficacy on perceived self-efficacy: the perception of effective problem solving might have enhanced individual feelings of competence. Also, as dyads gave better answers in the problem-solving tasks, their solutions corresponded better to the worked examples, and as they were superior in the multiple-choice tests, the feedback they received was more positive. Probably, students assessed their individual competence and performance on the basis of group feedback. Confirming peer feedback might also have played a role. And, maybe, externalization made students aware of their own knowledge, thus leading to greater feelings of competence and preventing illusions of incompetence (Krause, 2007).

The group-feedback hypothesis, however, was not confirmed; apparently, individuals used feedback more effectively than groups. Possibly, dyads did not feel the need for instructional feedback as much as individuals did, because peer feedback was available. Also, collective efficacy was quite high, so the need for confirmation or correction by feedback might have been less intense. Still, cooperative learning with adaptive group feedback was more effective than cooperative learning with standardized feedback. Given the paucity of research concerning group feedback in learning settings, further studies are needed here.

As far as statistics education is concerned, the first study on Koralle (Tyroller, 2005) suggested that the e-learning environment is a useful supplement to regular statistics education. The present study demonstrated that cooperative learning with Koralle does not impair learning (which is practically relevant when social learning is aimed at or when computers are scarce) and that learning outcomes can be considerably enhanced by adaptive elaborated feedback.
References

Arvaja, M., Salovaara, H., Häkkinen, P., & Järvelä, S. (2007). Combining individual and group level perspectives for studying collaborative knowledge construction in context. Learning and Instruction, 17, 448–459.
Atkinson, R. K., Renkl, A., & Merrill, M. M. (2003). Transitioning from studying examples to solving problems: combining fading with prompting fosters learning. Journal of Educational Psychology, 95, 774–783.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. T. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61, 213–238.
Broers, N. J., & Imbos, T. (2005). Charting and manipulating propositions as methods to promote self-explanations in the study of statistics. Learning and Instruction, 15, 517–538.
Cognition and Technology Group at Vanderbilt. (1997). The Jasper project: Lessons in curriculum, instruction, assessment, and professional development. Mahwah, NJ: Erlbaum.
Cohen, E. G. (1994). Restructuring the classroom: conditions for productive small groups. Review of Educational Research, 64, 1–35.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
Deci, E. L., & Ryan, R. M. (2000). The "what" and "why" of goal pursuits: human needs and the self-determination of behavior. Psychological Inquiry, 11, 227–268.
Dembo, M. H., & McAuliffe, T. J. (1987). Effects of perceived ability and grade status on social interaction and influence in cooperative groups. Journal of Educational Psychology, 79, 415–423.
Dempsey, J. V., Driscoll, M. P., & Swindell, L. K. (1993). Text-based feedback. In J. V. Dempsey, & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 21–54). Englewood Cliffs, NJ: Educational Technology Publications.
Dillenbourg, P., Baker, M., Blaye, A., & O'Malley, C. (1996). The evolution of research on collaborative learning. In E. Spada, & P. Reimann (Eds.), Learning in humans and machines: Towards an interdisciplinary learning science (pp. 189–211). Oxford, England: Elsevier.
Gupta, M. L. (2004). Enhancing student performance through cooperative learning in physical sciences. Assessment and Evaluation in Higher Education, 29, 63–73.
Hancock, T. E., Thurman, R. A., & Hubbard, D. C. (1995). An expanded control model for the use of instructional feedback. Contemporary Educational Psychology, 20, 410–425.
Hänze, M., & Berger, R. (2007). Cooperative learning, motivational effects, and student characteristics: an experimental study comparing cooperative learning and direct instruction in 12th grade physics classes. Learning and Instruction, 17, 29–41.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.
Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of groups as information processors. Psychological Bulletin, 121, 43–64.
Hogan, D. M., & Tudge, J. R. H. (1999). Implications of Vygotsky's theory for peer learning. In A. M. O'Donnell, & A. King (Eds.), Cognitive perspectives on peer learning (pp. 39–65). Mahwah, NJ: Erlbaum.
Ingham, A. G., Levinger, G., Graves, J., & Peckham, V. (1974). The Ringelmann effect: studies of group size and group performance. Journal of Experimental Social Psychology, 10, 371–384.
Järvenoja, H., & Järvelä, S. (2005). How students describe the sources of their emotional and motivational experiences during the learning process: a qualitative approach. Learning and Instruction, 15, 465–480.
Johnson, D. W., & Johnson, R. T. (1989). Cooperation and competition: Theory and research. Edina, MN: Interaction Book Company.
Krause, U.-M. (2002, January). Elaborated group feedback in virtual learning environments. Paper presented at the doctoral consortium of the Computer Support for Collaborative Learning Conference, Boulder, Colorado, USA.
Krause, U.-M. (2007). Feedback und kooperatives Lernen [Feedback and cooperative learning]. Münster, Germany: Waxmann.
Krause, U.-M., & Stark, R. (2006). Vorwissen aktivieren [Activating prior knowledge]. In H. Mandl, & H. F. Friedrich (Eds.), Handbuch Lernstrategien (pp. 38–49). Göttingen, Germany: Hogrefe.
Krause, U.-M., Stark, R., & Mandl, H. (2004). Förderung des computerbasierten Wissenserwerbs durch kooperatives Lernen und eine Feedbackmaßnahme [Fostering computer-based knowledge acquisition by cooperative learning and a feedback intervention]. Zeitschrift für Pädagogische Psychologie, 18, 125–136.
Krol, K., Janssen, J., Veenman, S., & Van der Linden, J. (2004). Effects of a cooperative learning program on the elaborations of students working in dyads. Educational Research and Evaluation, 10, 205–237.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing one's incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121–1134.
Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: the place of response certitude. Educational Psychology Review, 1, 279–308.
Kunter, M., Baumert, J., & Köller, O. (2007). Effective classroom management and the development of subject-related interest. Learning and Instruction, 17, 494–509.
Lambiotte, J. G., Dansereau, D. F., O'Donnell, A. M., Young, M. D., Skaggs, L. P., & Hall, R. H. (1988). Effects of cooperative script manipulations on initial learning and transfer. Cognition and Instruction, 5, 103–121.
Lan, W. Y. (1998). Teaching self-monitoring in statistics. In D. H. Schunk, & B. J. Zimmermann (Eds.), Self-regulated learning: From teaching to self-reflective practice (pp. 86–105). New York: Guilford.
Latané, B., Williams, K., & Harkins, S. (1979). Many hands make light the work: causes and consequences of social loafing. Journal of Personality and Social Psychology, 37, 822–832.
Lou, Y., Abrami, P. C., & d'Apollonia, S. (2001). Small group and individual learning with technology: a meta-analysis. Review of Educational Research, 71, 449–521.
Mandl, H., & Krause, U.-M. (2003). Learning competence for the knowledge society. In N. Nistor, S. English, S. Wheeler, & M. Jalobeanu (Eds.), Toward the virtual university: International online perspectives (pp. 65–86). Greenwich, CT: Information Age Publishing.
Meier, A., Spada, H., & Rummel, N. (2007). A rating scheme for assessing the quality of computer-supported collaboration processes. International Journal of Computer-Supported Collaborative Learning, 2, 63–86.
Moreno, R. (2004). Decreasing cognitive load for novice students: effects of explanatory versus corrective feedback in discovery-based multimedia. Instructional Science, 32, 99–113.
Mulryan, C. M. (1992). Student passivity during cooperative small groups in mathematics. Journal of Educational Research, 85, 261–273.
Nadler, D. A. (1979). The effects of feedback on task group behavior: a review of the experimental research. Organizational Behavior and Human Performance, 23, 309–338.
Narciss, S., & Huth, K. (2006). Fostering achievement and motivation with bug-related tutoring feedback in a computer-based training for written subtraction. Learning and Instruction, 16, 310–322.
Nicholls, J. G., Patashnick, M., & Nolen, S. B. (1985). Adolescents' theories of education. Journal of Educational Psychology, 77, 683–692.
O'Donnell, A. M., & Dansereau, D. F. (1992). Scripted cooperation in student dyads: a method for analyzing and enhancing academic learning and performance. In R. Hertz-Lazarowitz, & N. Miller (Eds.), Interaction in cooperative groups: The theoretical anatomy of group learning (pp. 120–144). Cambridge, UK: Cambridge University Press.
Onwuegbuzie, A. J. (2004). Academic procrastination and statistics anxiety. Assessment & Evaluation in Higher Education, 29, 3–19.
Pajares, F. (1997). Current directions in self-efficacy research. In M. L. Maehr, & P. R. Pintrich (Eds.), Advances in motivation and achievement, Vol. 10 (pp. 1–49). Greenwich, CT: JAI.
Renkl, A. (2005). The worked-out examples principle in multimedia learning. In R. Mayer (Ed.), Cambridge handbook of multimedia learning (pp. 229–246). Cambridge, UK: Cambridge University Press.
Resnick, L. B., Levine, J. M., & Teasley, S. D. (Eds.). (1991). Perspectives on socially shared cognition. Washington, DC: American Psychological Association.
Sales, G. (1993). Adapted and adaptive feedback in technology-based instruction. In J. V. Dempsey, & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 159–175). Englewood Cliffs, NJ: Educational Technology Publications.
Salomon, G., & Globerson, T. (1989). When teams do not function the way they ought to. International Journal of Educational Research, 13, 89–99.
Slavin, R. E. (1983). Cooperative learning. New York: Longman.
Stark, R., Gruber, H., Renkl, A., & Mandl, H. (1998). Instructional effects in complex learning: do objective and subjective learning outcomes converge? Learning and Instruction, 8, 117–129.
Stark, R., Gruber, H., Renkl, A., & Mandl, H. (2000). Instruktionale Effekte einer kombinierten Lernmethode: Zahlt sich die Kombination von Lösungsbeispielen und Problemlöseaufgaben aus? [Does the combination of worked-out examples and problem-solving tasks pay off?] Zeitschrift für Pädagogische Psychologie, 14, 206–218.
Stark, R., & Mandl, H. (2000). Training in empirical research methods: analysis of problems and intervention from a motivational perspective. In J. Heckhausen (Ed.), Motivational psychology of human development (pp. 165–183). Amsterdam: Elsevier.
Stark, R., Mandl, H., Gruber, H., & Renkl, A. (2002). Conditions and effects of example elaboration. Learning and Instruction, 12, 39–60.
Sweller, J. (1988). Cognitive load during problem solving: effects on learning. Cognitive Science, 12, 257–285.
Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2, 59–89.
Tyroller, M. (2005). Effekte metakognitiver Prompts beim computerbasierten Statistiklernen [Effects of metacognitive prompting on computer-based learning of statistics]. Doctoral dissertation, University of Munich, Germany. Accessed 24.07.07.
Van Gog, T., Paas, F., & Van Merriënboer, J. J. G. (2006). Effects of process-oriented worked examples on troubleshooting transfer performance. Learning and Instruction, 16, 154–164.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Webb, N. M., & Farivar, S. (1999). Developing productive group interaction in middle school mathematics. In A. M. O'Donnell, & A. King (Eds.), Cognitive perspectives on peer learning (pp. 117–149). Mahwah, NJ: Erlbaum.