Computers & Education 58 (2012) 12–20
E-tutorial support for collaborative online learning: An explorative study on experienced and inexperienced e-tutors

Birgitta Kopp a,*, Maria Cristina Matteucci b,1, Carlo Tomasetto b,2

a Department of Psychology, Ludwig-Maximilians-University, Leopoldstraße 13, 80802 Munich, Germany
b Department of Education, Alma Mater Studiorum, University of Bologna, Piazza Aldo Moro 90, I-47023 Cesena, Italy
Abstract
Article history: Received 14 March 2011; Received in revised form 12 July 2011; Accepted 10 August 2011
The e-tutor plays a major role in supporting virtual collaborative learning, as he/she supervises learners in collaboratively solving tasks, acquiring new skills, and applying new knowledge. This study is aimed at gaining further insights into the daily support practices of e-tutors. Seventy-six e-tutors from 17 different European countries were invited to fill in an online questionnaire to evaluate collaborative activities, and to answer yes/no-questions regarding their intervention to support these collaborative activities. A cluster analysis identified two profiles of e-tutors according to the importance ascribed to collaborative activities, and to the number of times they intervened to foster such activities. The cluster validation revealed a difference between experienced and inexperienced European e-tutors in their support of online collaboration: e-tutors with experience considered specific cognitive activities to be more important for effective online collaboration, and they seemed to be more familiar in detecting and adequately intervening to avoid dysfunctional social phenomena. Thus, experience in supporting online collaboration seems to be a useful precondition for successfully intervening to stimulate necessary learning activities and to avoid dysfunctional collaborative activities. © 2011 Elsevier Ltd. All rights reserved.
Keywords: Cooperative/collaborative learning; E-tutoring; Experience of e-tutors
1. Introduction

Virtual collaborative learning is being used increasingly in different contexts: in schools, in universities, and in higher or further education. This is due to the fact that collaborative learning has several benefits, e.g., fostering individual knowledge acquisition (Lou et al., 1996), supporting knowledge application (De Corte, 2003), and encouraging the acquisition of social competencies (Cohen, 1994). But collaborative learning per se is not successful without adequate support (Kirschner, Sweller, & Clark, 2006). Furthermore, virtual collaborative learning is even more demanding for learners, as the virtual context involves new ways of communication and collaboration. For example, since this context lacks non-verbal cues (O'Connaill, Whittaker, & Wilbur, 1993), learners often do not know how to collaborate adequately (Kopp & Mandl, 2011). Therefore, in collaborative learning situations, it is necessary to provide support.

Support for virtual collaboration and collaborative learning is often realized by the e-tutor. There is a wide range of different names for e-tutors that are used interchangeably, e.g., tele-tutor, online coach, e-moderator, tele-teacher, online facilitator, or e-trainer (Rautenstrauch, 2001). All these names describe the same phenomenon, namely the support of e-learners, even though the range of tasks may differ somewhat depending on the respective name. For the purpose of this article, e-tutors are defined according to their main function, which is to supervise and to support learners (Kopp, Germ, & Mandl, 2010). According to this perspective, e-tutoring comprises all the activities of a teacher that support a learner in constructively and actively handling the learning environment (Kopp et al., 2010). In this context, the focus is on the importance of socially shared activities, which are essential for individual cognitive development and for learning (Larreamendy-Joerns & Leinhardt, 2006).
Thus, the interaction between learners and between e-tutors and learners is important for promoting the activities necessary for the acquisition and application of knowledge.
* Corresponding author. Tel.: +49 89 2180 3779; fax: +49 89 2180 99 3779.
E-mail addresses: [email protected] (B. Kopp), [email protected] (M.C. Matteucci), [email protected] (C. Tomasetto).
1 Tel.: +39 0537 339838; fax: +39 0547 339819.
2 Tel.: +39 0547 339847; fax: +39 0547 339819.
doi:10.1016/j.compedu.2011.08.019
The major collaborative activities in virtual learning environments which should be supported by e-tutors are (a) content-specific cognitive activities, (b) social activities (Kopp & Mandl, 2011), and (c) meta-cognitive activities.

a) Cognitive activities include knowledge exchange (dissemination of shared and unshared knowledge between group members), online discussion (discussing different points of view in depth), argumentation,3 collaborative problem solving, and considering different perspectives (actively thinking about the opinions of collaborating partners).

b) Social activities which need to be supported by e-tutors include the motivation of the group members, interpersonal interaction, social influence processes, and information processing. As to group motivation, achievement goals should be taken into account – i.e., whether learners are more interested in their performance or in mastery (Elliot & McGregor, 2001) – as well as competition, which results from different competing motives of the group members (Deutsch, 1949). With respect to interpersonal interaction, conflicts, balanced participation, and lack of responsibility are of importance. Social influence processes include activities such as ignoring minorities (Nemeth, 1986) or imposing conformity upon other group members. Information processing includes superficial discussion, which may prevent conflicts but simultaneously deprives the group members of intensive cognitive processing (De Lisi & Goldbeck, 1999); it also includes addressing the e-tutor rather than the group members.

c) Meta-cognitive activities are essential for self-guided collaborative learning. In this context, planning/organizing, monitoring, and regulating collaborative learning are the main meta-cognitive strategies that support the guidance of group work.
Planning and organizing collaborative activities includes choosing specific strategies and often takes place at the beginning of collaboration, before the main learning process starts (Schiefele & Pekrun, 1996). In contrast, monitoring and regulating collaborative activities are essential during the collaboration process. With the help of a target-performance comparison, the group's learning success can be evaluated (Schreblowski & Hasselhorn, 2006). Based on the results of this evaluation process, learners may regulate their activities when necessary.

Even though specific cognitive activities, social activities, and meta-cognitive activities are essential for effective collaborative learning, the question is how e-tutors support such activities in practice. On a theoretical basis, there are two methods of supporting online collaboration: providing specific structures in the virtual learning environment (such as scripts) or directly intervening during the collaborative learning process (Kopp & Mandl, 2011).

1.1. Providing feedback in collaborative learning

In e-tutoring, it is highly important to utilize direct intervention with feedback. Feedback "is conceptualized as information provided by an agent (e.g., teacher, peer, book, parent, self, experience) regarding aspects of one's performance or understanding" (Hattie & Timperley, 2007, p. 81). Feedback is very helpful for preventing the student's sense of being totally alone and unguided (Schweizer, Paechter, & Weidenmann, 2001), and it specifically focuses on the problems of the learners. The objective of giving feedback is to reduce the discrepancies between the student's current understanding and the desired goal. There are four main kinds of feedback: feedback (1) on the task level, (2) on the process level, (3) on the self-regulation level, and (4) on the self level. Most often, feedback is given on the task and on the process level.
Feedback on the task level concerns how well tasks are understood or performed (Hattie & Timperley, 2007). This corrective feedback is related to the concrete accomplishment of the task; meta-analyses show that task feedback is strongly related to improved performance (e.g., Tenenbaum & Goldring, 1989; Walberg, 1982). Feedback about the processing of the task refers to the processes that take place during task-solving and specifically involves aspects of deep understanding (Hattie & Timperley, 2007). In this context, it is important to provide strategies for error detection that give information on how to improve and change activities in order to solve a task correctly, or to utilize cueing mechanisms that provide information about the processes underlying the task. Providing learners with such feedback is one of the main tasks of e-tutors (Kopp et al., 2010).

1.2. E-tutoring

Since e-tutoring is defined as all activities that support learners in their learning process, it seems necessary for e-tutors themselves to be equipped with an appropriate set of skills and attributes in addition to subject matter expertise (McPherson & Nunes, 2004). E-tutoring differs from face-to-face tutoring in a number of ways. For example, e-tutoring uses written messages more often and with a more formal tone, promotes multiple conversations and communication between several learners, and requires teachers to assess the worth of online contributions and to develop new ways of encouraging participation (McPherson & Nunes, 2004). Therefore, it seems necessary for e-tutors to be experienced in fostering online learning. In addition to discipline expertise, it is necessary for e-tutors to possess pedagogical, communicational, and technological competencies (Denis, Watland, Pirotte, & Verday, 2004). In previous research, a number of different preconditions have been linked to specific competencies.
In summary, e-tutors essentially need three kinds of knowledge (Lepper, Drake, & O'Donnell-Johnson, 1997; Salmon, 2000; Schmidt & Moust, 1995): (a) content-specific knowledge to support cognitive activities; (b) pedagogical knowledge to initiate and sustain adequate learning processes on a motivational and meta-cognitive level, and to adequately cope with learners' difficulties on a social level (social activities); and (c) technical knowledge about the functioning of the Internet, technical skills, and knowledge of net-based communication. This kind of knowledge reflects the "e" in the term "e-tutor". Furthermore, the e-tutor needs a wide range of tutoring skills in order to assume different roles, such as content facilitator, metacognition facilitator, process facilitator, advisor/counselor, assessor, technologist, and resource provider (e.g., Denis et al., 2004; McPherson & Nunes, 2004). Because of this diverse range of competencies, skills, and roles, e-tutors must prepare adequately for their work in facilitating online collaboration (Banks, Denis, Fors, & Pirotte, 2004).
3 Argumentation is “a verbal and social activity of reason aimed at increasing (or decreasing) the acceptability of a controversial standpoint for the listener or reader, by putting forward a constellation of propositions intended to justify (or refute) the standpoint before a rational judge” (Van Eemeren, Grootendorst, & Henkemans, 1996, p. 5).
1.3. E-tutors' expertise

When taking a closer look at the connection between e-tutors' expertise and competencies, the question arises whether e-tutors with expertise differ from e-tutors without expertise in their specific daily practices. Research on expertise in other domains shows that experts differ from novices in the way they deal with problems (Jacobson, 2001). Beyond that, studies on the e-tutor's role have shown that the e-tutors' experience seems to be important to how they handle their online courses. With respect to course organization, e-tutors with experience need less time for their online courses and have better time management than e-tutors without experience (Hemsing, 2008). Regarding interaction, experienced e-tutors post more contributions in their courses than novice instructors, ranging from an average of 19 postings per course for novice instructors to an average of 193 postings per course for experienced instructors (Morris, Xu, & Finnegan, 2005). This finding was closely mirrored in the study of Goold, Coldwell, and Craig (2010), who showed that an experienced e-tutor had 129 and 215 postings in two different modules, compared to 109 postings when the module was moderated by a novice e-tutor. Looking at the kinds of postings in more detail, further differences between experienced and novice e-tutors were identified: experienced e-tutors give more direct instructions and feedback, and their postings more often involve pedagogical knowledge than the postings of novice tutors (Maor, 2008). Furthermore, experienced e-tutors mainly encouraged the students to engage with the learning resources and activities beyond what was specified, in order to promote deep learning (Goold et al., 2010). Therefore, experienced e-tutors appeared to be able to provide learners with the scaffolding they needed to understand the content, while novice e-tutors were not.
Based on this research, we define the experience of e-tutors on the basis of their competencies in fostering learning activities with specific support methods. In the existing research, the expertise of e-tutors seems to make a difference in their support behavior, but there are no data on how experienced e-tutors differ from novice e-tutors in handling problems when fostering online collaboration. Therefore, in this study, we investigated the differences between experienced and inexperienced e-tutors when supporting online collaboration in practice.

2. Research question

Support for collaborative online learning is essential for effective learning. E-tutors may intervene to promote cognitive, social, and meta-cognitive activities in order to help learners in their learning efforts. The literature suggests the importance of e-tutors' past experiences for the development of their competence in adequately supporting learners. The question is thus whether experienced e-tutors differ from inexperienced e-tutors in their support behaviors. Therefore, we addressed the following research question: Do experienced and inexperienced e-tutors differ in how they support online collaboration? We expect to find two clusters: one cluster with more experienced e-tutors and one cluster with novice e-tutors. We assume that these two clusters will differ in support behavior, with experienced e-tutors supporting their learners more often and in more varied ways than e-tutors without experience.

3. Method

3.1. Participants

The sample comprised a total of 78 online courses from 17 different European countries described by e-tutors (2 from Austria, 3 from Belgium, 15 from Finland, 16 from France, 13 from Germany, 15 from Italy, 4 from Switzerland, and 1 each from Denmark, Greece, Hungary, Ireland, Latvia, the Netherlands, Poland, Romania, Spain, and the United Kingdom).
Courses at the higher education or university level comprised 74.4% of the sample, while lifelong learning courses comprised 19.2%. Specifically, 58 online courses (74.4%) were offered at universities for one semester, 14 courses (17.9%) were offered in vocational training as a single learning opportunity, and 6 (7.7%) were offered at institutes as part of their curriculum.

Inclusion criteria were as follows: (a) the respondent is a teacher, an instructor, or a tutor of an e-learning course (or the individual knows the experience sufficiently to provide details about it); (b) the course is ongoing or has been delivered recently; (c) the course includes online social interaction activities/collaborative learning activities (such as discussions, cooperation, collaboration, etc.); (d) collaboration in the course, as well as the e-tutors' interventions, occurs in full-distance modality.

Participants were asked to complete one questionnaire for each e-learning course, permitting more than one questionnaire to be answered by an individual. Only two e-tutors completed the questionnaire for two different e-learning experiences. Therefore, in total, 76 e-tutors took part in the study.

Fig. 1. Example of the online questionnaire for e-tutors.

Table 1
Results of e-tutors' evaluation of cognitive activities.

                                                                           Cluster 1 (n = 51)        Cluster 2 (n = 24)
How important was it for you ...                                           M     (SD)     z          M     (SD)     z
... to have participants exchange their knowledge?                         5.41  (.90)    .14        4.83  (1.76)   -.29
... to have participants involved in content-related online discussion?*   5.47  (.90)    .20        4.58  (1.95)   -.44
... to have participants involved in argumentation?**                      5.33  (1.07)   .26        4.08  (2.00)   -.54
... to have participants work together online on problems and cases?       4.90  (1.58)   .10        4.13  (2.27)   -.31
... to have participants integrate their different perspectives?           5.25  (1.00)   .13        4.63  (2.28)   -.27

*t(73) = 2.70, p = .009; **t(73) = 3.53, p = .001.

3.2. Procedure

In the framework of a co-financed European project (Socrates Minerva Programme), a research team including researchers from five European countries (Italy, Germany, Finland, Switzerland, and France) contacted colleagues who were involved in e-learning experiences via e-mail or telephone, and invited them to complete a questionnaire on their e-learning experiences (see below). A chain-referral sampling method (Bogdan & Biklen, 1992) was used in order to involve this small and hard-to-reach segment of the general population involved in e-learning experiences. Through a snowball technique, each contact was asked to propagate the questionnaire to other online instructors who might meet the above-mentioned inclusion criteria. To reduce sampling bias, we included e-tutors from different countries and with different languages. Furthermore, the five investigation coordinators were researchers from different scientific fields (psychology, pedagogy and education science, engineering, communication sciences), thus improving the effectiveness of the recruitment and increasing the possibility of recruiting experiences from different populations. The survey took place from June 15th, 2007 to October 1st, 2007. In each of the five countries, one coordinator was responsible for data collection. These five coordinators sent a letter to colleagues asking them to participate in a survey for e-tutors. Participants had four weeks to answer the questionnaire. All data were centrally saved on the server of one coordinator for the whole research team.
This individual was also responsible for the delivery of the questionnaire.

3.3. Instrument

An online questionnaire was created in order to gain further insights into how e-tutors support online collaboration (see Fig. 1). In the questionnaire, we first collected general information about the e-tutors and the e-learning experiences. In this context, we specifically asked about the e-tutors' experience in designing and/or realizing online courses (using a yes/no question, e.g., "Do you have experience in designing and/or realizing e-learning experiences?"). The second part surveyed collaborative activities, especially items concerning content-specific cognitive aspects (5 items), social aspects (11 items), and meta-cognitive aspects of collaboration (2 items), as well as giving feedback (3 items), including evaluation criteria (7 criteria to choose from) and evaluation procedures (5 procedures to choose from).

Regarding the content-specific cognitive aspects of collaboration, the questionnaire included five items concerning the following activities: knowledge exchange, online discussion, argumentation, collaborative problem solving, and considering different perspectives. For each item, e-tutors were first asked to evaluate certain aspects of collaborative online learning on a six-point Likert scale (ranging from 1, not important, to 6, very important). In a second step, e-tutors were asked whether they intervened to foster the specific collaborative activity. In a third step, they were asked through open-format questions either how they intervened or why they did not intervene.

The questionnaire also asked e-tutors to evaluate social activities, namely the motivation of the group members, interpersonal interaction, social influence processes, and information processing. Regarding motivational aspects, two dimensions were investigated, namely different group goals (2 items) and dysfunctional competition (1 item). Interpersonal interaction included items concerning phenomena such as dysfunctional interpersonal conflicts (1 item), balanced participation (1 item), and diffusion/lack of responsibility, for a total of 5 items. Social influencing factors were ignoring minorities (1 item) and putting pressure on group members (1 item). Information processing included the sub-dimensions of superficial discussion to avoid conflicts (1 item) and addressing the e-tutor rather than group members (2 items). For each item, e-tutors were first asked whether they intervened to prevent dysfunctional phenomena. If they answered yes, they were asked how they intervened; if they answered no, they were asked why they did not intervene.
Table 2
E-tutors' intervention rates for cognitive activities.

                                                        Cluster 1 (n = 51)    Cluster 2 (n = 24)
Did you intervene to promote ...                        Yes     No            Yes     No
... the exchange of knowledge/information?              46      5             10      14
... content-related online discussion?*                 48      3             14      10
... argumentation?*                                     46      5             13      11
... collaboration in problem solving?*                  48      3             14      10
... the integration of different perspectives?*         40      11            8       16

*p < .05 according to Chi-Square Test.
Table 3
E-tutors' intervention rates for social activities.

                                                                          Cluster 1 (n = 51)    Cluster 2 (n = 24)
Did you intervene to prevent such a phenomenon ...                        Yes     No            Yes     No
... individual goals (different group goals I)*                           21      30            1       23
... outcome than process (different group goals II)*                      23      28            0       24
... dysfunctional competition*                                            16      35            2       22
... dysfunctional conflicts                                               21      30            5       19
... balanced participation*                                               30      21            7       17
... lack of responsibility*                                               27      24            2       21
... ignoring minorities                                                   20      31            6       18
... putting pressure on group members*                                    21      30            1       23
... superficial discussion (in order to preserve positive relationships)* 24      27            5       19
... content-related questions to elicit a response (addressing to the
    e-tutor I)*                                                           35      16            3       21
... turning to e-tutor (addressing to the e-tutor II)*                    38      13            4       20

*p < .05 according to Chi-Square Test.
Meta-cognitive activities included the planning (1 item) and organization (1 item) of group work. In this context, e-tutors were first asked how important they considered these activities for collaboration, and secondly, whether they intervened to support these activities or not. If they answered yes, they were asked how they intervened; if no, they were asked why they did not intervene.

In line with the theoretical introduction on providing feedback, feedback was differentiated between the task level and the process level. Feedback on the task level concerned the final product of the collaborative work (1 item), while feedback on the process included two questions: one on content-related feedback, and one on feedback about group activities. Again, e-tutors were asked how they evaluated the respective feedback and whether they provided such feedback in their e-learning course. Furthermore, e-tutors were asked how they evaluated the final product. Seven criteria were used: knowledge gain, knowledge application, understanding of the content, creativity, ability to collaborate, mastery/skillfulness, and effort. In addition, e-tutors were asked which procedures they used for evaluation, namely tests, essays, collection of documents, quality of online participation, and observation of collaboration.

3.4. Data analyses

Cluster analysis is useful for identifying homogeneous groups of individuals, or clusters, whose members are similar to each other but different from individuals in other groups. Cluster analysis methods organize the observations into smaller numbers of groups that account for most of the variance, based on the clustering algorithm chosen (Meece & Holt, 1993). In this way, rather than grouping similar variables, as in factor analysis, cluster analysis groups similar people. Cluster analysis is well suited for the present study, as we seek to profile e-tutors' characteristics.
Specifically, we used the TwoStep cluster methodology in order to identify groups of e-tutors who responded similarly in terms of the practices used when tutoring/supporting learners. This methodology is convenient for dealing with a mix of quantitative and qualitative variables and provides a classification model which is optimized through the use of an information criterion such as Schwarz's Bayesian Information Criterion (BIC). The SPSS 17.0 two-step clustering model was used, based on an algorithm that uses the decrease in the log-likelihood from combining clusters as the distance measure. The algorithm selects the optimal number of clusters based on either the Schwarz Bayesian Criterion or the Akaike Information Criterion. If the sample size is small, one proficient approach for validating the cluster solution is external validation, that is, evaluating the results with respect to a pre-specified structure. This procedure is based on measuring the extent to which clusters match external variable(s) not used to form the clusters (Aldenderfer & Blashfield, 1985). In this study, clusters were built on the basis of several variables of the questionnaire, namely the evaluation of cognitive activities and the intervention rates regarding cognitive, social, and meta-cognitive activities (see Tables 1 to 4). Once clusters were established, the following items of the questionnaire were used to validate the analyses: the e-tutors' experience, the scale on providing feedback, the criteria by which the final product was evaluated, and the procedures used for evaluation (see Tables 5 to 7). Separate t-tests and chi-square tests were used to test the validity of the cluster solution, comparing the e-tutor intervention clusters in terms of the e-tutors' past experience and their feedback and evaluation practices.
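SPSS's TwoStep procedure itself is proprietary, but its overall pipeline (z-standardizing all variables, then letting an information criterion choose the number of clusters) can be approximated with open-source tools. The following is a minimal sketch using scikit-learn's Gaussian mixture models with BIC-based model selection; the data are synthetic stand-ins (the study's questionnaire responses are not public), and unlike TwoStep this sketch handles continuous variables only:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the questionnaire ratings: 51 tutors with high,
# tight importance ratings and 24 with lower, more dispersed ratings
# on five cognitive-activity items.
rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal(loc=5.3, scale=0.8, size=(51, 5)),
    rng.normal(loc=4.3, scale=1.6, size=(24, 5)),
])

# Standardize to z-scores (mean 0, SD 1), as done before clustering.
X_z = StandardScaler().fit_transform(X)

# TwoStep selects the cluster count via an information criterion (BIC/AIC);
# here we fit mixtures for k = 1..5 and keep the one with the lowest BIC.
models = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X_z)
          for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(X_z))
labels = models[best_k].predict(X_z)
```

The `labels` array then assigns each respondent to a cluster, analogous to the cluster membership used in the analyses below.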
Before the cluster analysis was performed, all variables were standardized using z-scores (mean of 0, standard deviation of 1).

4. Results

4.1. Identification of e-tutors' profiles

Based on the research on support methods summarized above, we expected that the cluster analysis would identify theoretically meaningful profiles of e-tutors based on their reports of support methods.

Table 4
E-tutors' intervention rates for meta-cognitive activities.

                                                                               Cluster 1 (n = 51)    Cluster 2 (n = 24)
Clustering items                                                               Yes     No            Yes     No
Did you intervene to promote long-term planning?*                              35      16            3       21
Did you intervene to support participants in the organization of group work?   34      17            11      13

*p < .05 according to Chi-Square Test.
Table 5
Cross-tabulation of cluster allocation and experience of e-tutors.

Cluster          Experienced     Inexperienced     χ²           φ
1 (n = 51)       44 (3.4)        7 (-3.4)          11.35***     .39
2 (n = 24)       12 (-3.4)       12 (3.4)

Note. ***p < .001. Adjusted standardized residuals appear in parentheses next to group frequencies.
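Because Table 5 reports the full cell counts, the chi-square statistic and the φ effect size can be recomputed directly from the cross-tabulation; a sketch with SciPy (using the uncorrected test, which matches the reported values):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Cross-tabulation from Table 5: rows are clusters 1 and 2,
# columns are experienced vs. inexperienced e-tutors.
observed = np.array([[44, 7],
                     [12, 12]])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
phi = (chi2 / observed.sum()) ** 0.5   # effect size for a 2x2 table

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.2f}")
# Reproduces the reported chi2 = 11.35 (p < .001) and phi = .39
```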
The automatic clustering suggested two clusters; the solution remained stable for 96.2% of the sample (3.8% of cases were excluded). Using a theoretically meaningful distinction, we report the data on content-specific cognitive activities, on social activities, and on meta-cognitive activities. Cluster 1 (n = 51; 65.4%) comprised e-tutors whose responses were generally in the highest part of the rating scales regarding the importance of fostering collaborative activities relevant for the acquisition and application of knowledge (content-specific cognitive activities), and who affirmed having intervened in most cases to foster content-specific cognitive, social, and meta-cognitive activities. Cluster 2 (n = 24; 30.8%) included e-tutors whose responses were not concentrated around a specific point of the rating scales, but ranged considerably around the means of responses concerning the importance of fostering relevant collaborative activities. In this cluster, most e-tutors reported that they did not intervene to support the cognitive, social, and meta-cognitive activities under investigation.

Regarding cognitive activities, e-tutors in cluster 1 differed significantly from e-tutors in cluster 2 in how they evaluated the importance of online discussion and argumentation; both activities received higher ratings in cluster 1 (see Table 1). Regarding the intervention rate for cognitive activities, e-tutors in cluster 1 reported that they intervened significantly more often in four out of five activities than e-tutors in cluster 2: online discussion, argumentation, problem solving, and integrating different perspectives (see Table 2). For social activities, e-tutors in cluster 1 declared that they intervened significantly more often in 9 out of 11 dimensions than e-tutors in cluster 2 (see Table 3).
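Since Table 1 reports group means, standard deviations, and group sizes, the t-tests in its footnote can be checked from the summary statistics alone. A sketch with SciPy for the argumentation item, using the rounded values as printed (so agreement is up to rounding):

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics for the "argumentation" item from Table 1
# (cluster 1: M = 5.33, SD = 2.00? no -- SD = 1.07, n = 51;
#  cluster 2: M = 4.08, SD = 2.00, n = 24).
res = ttest_ind_from_stats(mean1=5.33, std1=1.07, nobs1=51,
                           mean2=4.08, std2=2.00, nobs2=24,
                           equal_var=True)  # pooled-variance t-test, df = 73

print(f"t(73) = {res.statistic:.2f}, p = {res.pvalue:.3f}")
# Matches the reported t(73) = 3.53, p = .001
```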
In the context of meta-cognitive activities, e-tutors in cluster 1 indicated that they intervened significantly more often in long-term planning than e-tutors in cluster 2 (see Table 4).

4.2. Cluster validation

The validity of the two-cluster solution was evaluated by testing group differences on variables that were theoretically or empirically related to each cluster but were not used to form the clusters. In particular, we used the following items: (1) one item concerning the past experience of the e-tutors in designing and/or realizing online courses, (2) items pertaining to providing feedback, and (3) items regarding the evaluation criteria and the evaluation procedures. We expected e-tutors with experience to be overrepresented in cluster 1, which was confirmed by a chi-square analysis (see Table 5). The great majority of e-tutors with past experience in designing and/or realizing e-learning courses belonged to cluster 1, while only a few inexperienced e-tutors belonged to this cluster. Supplementary analyses were conducted for the 7 cases in cluster 1 (experienced e-tutors) whose e-tutors claimed to be inexperienced, and the 12 cases in cluster 2 (inexperienced e-tutors) whose e-tutors perceived themselves as experienced. The findings revealed that e-tutors of cluster 1 who evaluated themselves as inexperienced behaved fairly similarly in their daily practice to the self-described experienced e-tutors of cluster 1, both with respect to evaluating the importance of fostering collaborative activities for the acquisition and application of knowledge and with respect to their intervention rate in fostering content-specific cognitive, social, and meta-cognitive activities. A similar pattern emerged for the 12 e-tutors of cluster 2 who perceived themselves as experienced: they behaved similarly to the other e-tutors of their cluster.
Furthermore, we examined the e-tutors' feedback by looking at the feedback rate (see Table 6), at the final product used to evaluate the collaborative work of the participants, and at the procedure used for evaluation, using t-test analyses. E-tutors classified in cluster 1 reported providing significantly more feedback on the process and on the task solution than e-tutors in cluster 2. Regarding the criteria and procedure for giving feedback, there were significant differences between the two clusters with respect to four dependent variables (see Table 7). As expected, e-tutors in cluster 1 regarded participants' contributions during collaboration as a key factor in evaluation. Moreover, compared to e-tutors in cluster 2, they agreed significantly more often with evaluation criteria based on the mastery or skillfulness of learners, and on focusing effort on learning activities.

Table 6
Results of e-tutors' evaluation of intervention rate for feedback activities.

Item                                                                          Cluster 1 (n = 51)   Cluster 2 (n = 24)
                                                                              Yes    No            Yes    No
Did you evaluate or rate the ongoing activities of the collaborative work?*    35     16             2     22
Did you give feedback related to group activities?*                            40     11             8     16
Did you evaluate or rate the final product of the collaborative work?*         42      9            13     11
Did you give content-related feedback to your participants?*                   48      3            19      5

*p < .05 according to chi-square test.

Table 7
Differences between clusters regarding evaluation criteria and evaluation procedures.

Item                                  Cluster 1      Cluster 2      t      df
Evaluation criteria
  Knowledge gain                      4.93 (1.40)    4.54 (1.56)    0.85   53
  Knowledge application               5.67 (0.75)    5.46 (0.66)    0.88   53
  Understanding of the content        5.45 (0.77)    4.62 (1.50)    1.93   14
  Creativity                          4.86 (1.26)    3.69 (2.25)    1.78   14
  Ability to collaborate              4.48 (1.31)    3.69 (1.75)    1.73   53
  Mastery/skillfulness                5.26 (1.08)    4.38 (1.66)    2.23   53*
  Effort                              4.95 (1.15)    3.54 (1.98)    2.44   14*
Evaluation procedure
  Tests                               3.57 (2.44)    3.85 (2.58)    0.35   53
  Essays                              5.38 (1.34)    5.00 (1.91)    0.80   53
  Collection of documents             4.62 (1.90)    4.69 (2.06)    0.12   53
  Quality of online participation     4.69 (1.46)    3.62 (2.02)    2.11   53*
  Observation of collaboration        4.55 (1.48)    2.92 (2.14)    3.09   53*

Note. *p < .05. Standard deviations appear in parentheses.

5. Discussion

On the basis of the cluster analysis, it was possible to build two clusters which differ on various dimensions of the daily practice of the e-tutors interviewed. E-tutors in cluster 1 are those who regard specific collaborative activities and feedback in online learning as
extremely important. These tutors also report intervening more frequently to promote specific activities and to avoid dysfunctional group phenomena, and report giving feedback on the collaborative process and on the task solutions. The cluster validation procedure revealed that e-tutors in cluster 1 are for the most part those who claimed past experience in designing and/or realizing e-learning experiences. Thus, according to the data analysis, it seems that e-tutors who perceive themselves as experts are more sensitive to the problems and pitfalls of virtual collaboration than e-tutors without experience.
With respect to content-specific cognitive activities, e-tutors in cluster 1 (which includes the majority of e-tutors with experience) evaluated these core activities as more important, and they reported supporting these activities more frequently. Even though the intervention rate for social activities is below 50% overall, and thus much lower than for cognitive activities, e-tutors in cluster 1 intervene much more often than e-tutors in cluster 2. This may be because more experienced e-tutors are better able to detect dysfunctional social phenomena in online collaboration. As such phenomena are crucial for effective interaction in computer-supported learning environments (e.g., Mandl, Ertl, & Kopp, 2006), it seems necessary that e-tutors themselves receive more support in detecting problems in group work, especially with regard to social activities.
Meta-cognitive activities, namely the organization and long-term planning of group work before the main collaboration process, are rated as more relevant by e-tutors in cluster 1 than by e-tutors in cluster 2, as evidenced by the higher intervention rate. Again, it seems that experience in supporting online collaboration makes e-tutors sensitive to fostering meta-cognitive activities, as such planning largely determines the subsequent procedures and strategies used by the groups.
These activities of planning and organizing collaboration are much more demanding in virtual learning environments. E-tutors need to work to prevent social loafing, or a lack or diffusion of responsibility, because virtual learning is more anonymous than face-to-face learning. A key predictor of high group performance and deep understanding is providing feedback not only on the task level (e.g., task solutions and performance of the groups), but also on the process level (e.g., the collaboration process) (Hattie & Timperley, 2007; Tenenbaum & Goldring, 1989). As collaborative learning in virtual environments is more demanding than in conventional environments (e.g., because of a different kind of communication, caused by the absence of non-verbal cues, with which learners are not familiar), continuous feedback is even more important for learners' success. E-tutors in cluster 1, composed predominantly of experienced e-tutors, provided more feedback on the task and on the process level to continuously improve learners' collaboration processes and performance. This is in line with a previous study showing that experienced e-tutors are able to provide their learners with the scaffolds necessary to foster their understanding of the content (Goold et al., 2010). Based on the e-tutors' self-reports, we may presume that e-tutors with experience have more knowledge about collaboration processes, problems, and pitfalls. Therefore, e-tutors in cluster 1 rated specific collaboration activities as more important and intervened more frequently in order to foster desired activities and outcomes, and to avoid difficulties. To guarantee adequate support and supervision in online collaboration, it seems necessary that all e-tutors be trained in the specific competencies, skills, and roles required of them.
Previous authors likewise suggest training tutors in online facilitation, because there is a need to prepare teachers for the online environment (Goold et al., 2010; see also Harris & Sandor, 2007; McDonald & Reushle, 2002; Wilson & Stacey, 2004). Finally, the analysis of the outlier cases included in both clusters revealed that, in a minority of cases, the perceived experience of e-tutors is not fundamental to determining their practice in fostering collaboration strategies; in those cases, they behave similarly to the rest of their allocated cluster. This suggests the need for further investigation of e-tutors' competencies and of the role of specific training as a prerequisite for fostering online collaboration activities. It also suggests that the perceived experience of e-tutors has no impact on their expectations of their role as e-tutors.

6. Conclusions

This study showed differences in e-tutors' support behavior based on their previous experience. These results indicate that experience in designing and/or realizing e-learning courses is a critical precondition for giving the proper weight and importance to collaborative activities, and for intervening adequately to foster the interaction between group members. Thus, experience has an impact on the way e-tutors support virtual online collaboration. This is in line with previous studies that investigated e-tutors' course organization, their time
management, and the number of posted contributions that provide more direct instructions and feedback (Goold et al., 2010; Hemsing, 2008; Maor, 2008; Morris et al., 2005). Even though this study gives initial indications of the importance of e-tutors' experience for supporting online collaboration, and is a starting point for gaining further insights into their daily practice, it has several limitations. First of all, the data sources are limited, as we rely solely on the e-tutors' subjective perceptions of their past experience and of how they support online collaboration. Future studies should investigate how e-tutors intervene in their actual practice using different and more objective measures of e-tutors' support behavior (such as measures based on log files or on content analysis of the contributions in the forums). A related limitation concerns the sampling technique. The snowball technique, like other forms of respondent-driven sampling (RDS), is a network-based technique used especially with hard-to-reach populations (Salganik & Heckathorn, 2004). In recent years, RDS has been used in more than 120 studies in more than 20 countries, but despite its widespread use and growing popularity, this sampling methodology is substantially less accurate than standard probability sampling (Goel & Salganik, 2010). The accuracy of RDS estimates is affected by the structure of the underlying social network, the distribution of traits within the network, and the recruitment dynamics, thus opening the possibility of under-coverage, non-response, or voluntary-response bias.
Second, based on this and previous studies, we still do not know whether experienced e-tutors support online collaboration merely intuitively, based on naïve beliefs about the functioning of virtual collaboration in their daily practice, or whether they possess theoretical and empirical knowledge on which to act in a reflective and well-grounded manner. The latter possibility could explain the presence in cluster 1 of e-tutors without experience who nevertheless behaved like experienced e-tutors, and vice versa. Third, it would be of interest to examine the effects of such behavior on the learning process and learning performance of the students. Regarding the practical implications of the study, it seems necessary to prepare tutors for their tasks in online learning environments with adequate training that sensitizes them to the specific phenomena involved in collaboration (e.g., Harris & Sandor, 2007; McDonald & Reushle, 2002; Wilson & Stacey, 2004).

Acknowledgment

This project (“Social networks and knowledge construction promotion in e-learning contexts”, http://minerva.ing2.unibo.it) has been funded with support from the European Commission. This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein. The authors acknowledge the project partners and the EACEA.

References

Aldenderfer, M. S., & Blashfield, R. K. (1985). Cluster analysis. Sage University Paper series on Quantitative Applications in the Social Sciences. Newbury Park, CA: Sage Publications.
Banks, S., Denis, B., Fors, U., & Pirotte, S. (2004). Staff development and e-tutors training. In Proceedings of networked learning conference. [Available under: http://www.networkedlearningconference.org.uk/past/nlc2004/proceedings/symposia/symposium6/banks_et_al.htm].
Bogdan, R., & Biklen, S. K. (1992). Qualitative research for education. Boston: Allyn and Bacon.
Cohen, E. G. (1994).
Restructuring the classroom: conditions for productive small groups. Review of Educational Research, 64(1), 1–35.
De Corte, E. (2003). Designing learning environments. In L. Verschaffel, N. Entwistle, & J. J. G. van Merriënboer (Eds.), Powerful learning environments: Unravelling basic components and dimensions (pp. 21–33). Amsterdam: Pergamon.
De Lisi, R., & Goldbeck, S. L. (1999). Implications of Piagetian theory for peer learning. In A. M. O’Donnell, & A. King (Eds.), Cognitive perspectives on peer learning (pp. 3–37). Mahwah, NJ: Lawrence Erlbaum Associates.
Denis, B., Watland, Ph., Pirotte, S., & Verday, N. (2004). Roles and competencies of the e-tutor. In Proceedings of networked learning conference. [Available under: http://www.networkedlearningconference.org.uk/past/nlc2004/proceedings/symposia/symposium6/denis_et_al.htm].
Deutsch, M. (1949). A theory of co-operation and competition. Human Relations, 2, 129–152.
Elliot, A. J., & McGregor, H. (2001). A 2 × 2 achievement goal framework. Journal of Personality and Social Psychology, 80(3), 501–519.
Goel, S., & Salganik, M. (2010). Assessing respondent-driven sampling. PNAS, 107(15), 6743–6747.
Goold, A., Coldwell, J., & Craig, A. (2010). An examination of the role of the e-tutor. Australasian Journal of Educational Technology, 26(5), 704–716.
Harris, N., & Sandor, M. (2007). Developing online discussion forums as student centred peer e-learning environments. In Hello! Where are you in the landscape of educational technology? Proceedings ascilite Melbourne 2008. [Available under: http://www.ascilite.org.au/conferences/melbourne08/procs/harris.pdf].
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Hemsing, S. (2008). Online-Seminare in der Weiterbildung [Online seminars in further education]. Berlin: mbv.
Jacobson, M. J. (2001). Problem solving, cognition, and complex systems: differences between experts and novices. Complexity, 6(3), 41–49.
Kirschner, P., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.
Kopp, B., Germ, M., & Mandl, H. (2010). Supporting virtual learning through e-tutoring. In B. Ertl (Ed.), E-collaborative knowledge construction: Learning from computer-supported and virtual environments (pp. 213–230). Hershey, NY: IGI Global.
Kopp, B., & Mandl, H. (2011). Supporting virtual collaborative learning using collaboration scripts and content schemes. In F. Pozzi, & D. Persico (Eds.), Techniques for fostering collaboration in online learning communities: Theoretical and practical perspectives (pp. 15–32). Hershey, NY: IGI Global.
Larreamendy-Joerns, J., & Leinhardt, G. (2006). Going the distance with online education. Review of Educational Research, 76(4), 567–605.
Lepper, M. R., Drake, M. F., & O’Donnell-Johnson, T. (1997). Scaffolding techniques of expert human tutors. In K. Hogan, & M. Pressley (Eds.), Scaffolding student learning: Instructional approaches and issues (pp. 108–144). Cambridge: Brookline Books.
Lou, Y., Abrami, P. C., Spence, J. C., Poulsen, C., Chambers, B., & d’Apollonia, S. (1996). Within-class grouping: a meta-analysis. Review of Educational Research, 66(4), 423–458.
Mandl, H., Ertl, B., & Kopp, B. (2006). Computer support for collaborative learning environments. In P. Dochy, L. Verschaffel, M. Boekaerts, & S. Vosniadou (Eds.), Past, present, and future trends: Sixteen essays in honour of Erik De Corte (pp. 223–237). Oxford: Elsevier.
Maor, D. (2008). Changing relationship: who is the learner and who is the teacher in the online educational landscape? Australasian Journal of Educational Technology, 24(5), 627–638. [Available under: http://www.ascilite.org.au/ajet/ajet24/maor.html].
McDonald, J., & Reushle, S. (2002).
Charting the role of the online teacher in higher education: Winds of change. In Winds of change in the sea of learning: Proceedings ascilite Auckland 2002. [Available under: http://www.ascilite.org.au/conferences/auckland02/porceedings/papers/095.pdf].
McPherson, M., & Nunes, M. B. (2004). The role of tutors as an integral part of online learning support. [Available under: http://eprints.whiterose.ac.uk/999/1/Maggie_MsP.html].
Meece, J. L., & Holt, K. (1993). A pattern analysis of students’ achievement goals. Journal of Educational Psychology, 85(4), 582–590.
Morris, L., Xu, H., & Finnegan, C. (2005). Roles of faculty in teaching asynchronous undergraduate courses. Journal of Asynchronous Learning Networks, 9(1), 65–82.
Nemeth, C. J. (1986). Differential contributions of majority and minority influence. Psychological Review, 93(1), 23–32.
O’Connaill, B., Whittaker, S., & Wilbur, S. (1993). Conversations over video conferences: an evaluation of the spoken aspects of video-mediated communication. Human Computer Interaction, 8, 389–428.
Rautenstrauch, Ch. (2001). Tele-tutoring. Zur Didaktik des kommunikativen Handelns im virtuellen Lernraum [Tele-tutoring: On the didactics of communicative action in the virtual learning space]. Berlin: ISKO.
Salganik, M. J., & Heckathorn, D. D. (2004). Sampling and estimation in hidden populations using respondent-driven sampling. Sociological Methodology, 34, 193–239.
Salmon, G. (2000). E-moderating – The key to teaching and learning online. London: Taylor and Francis.
Schiefele, U., & Pekrun, R. (1996). Psychologische Modelle des fremdgesteuerten und selbstgesteuerten Lernens [Psychological models of externally regulated and self-regulated learning]. In F. E. Weinert, & H. Mandl (Eds.), Enzyklopädie der Psychologie: Themenbereich D Praxisgebiete. Psychologie des Lernens und der Instruktion, Band 2, Serie I Pädagogische Psychologie (pp. 249–278). Göttingen: Hogrefe.
Schmidt, H. G., & Moust, J. H. C. (1995). What makes a tutor effective? A structural-equations modeling approach to learning in problem-based curricula. Academic Medicine, 70, 708–714.
Schreblowski, S., & Hasselhorn, M. (2006). Selbstkontrollstrategien: Planen, Überwachen, Bewerten [Self-control strategies: planning, monitoring, evaluating]. In H. Mandl, & H. F. Friedrich (Eds.), Handbuch Lernstrategien (pp. 151–161). Göttingen: Hogrefe.
Schweizer, K., Paechter, M., & Weidenmann, B. (2001). A field study on distance education and communication: experiences of a virtual tutor. Journal of Computer Mediated Communication, 6(2), 1–14.
Tenenbaum, G., & Goldring, E. (1989). A meta-analysis of the effect of enhanced instructions: cues, participation, reinforcement and feedback and correctives on motor skill learning. Journal of Research and Development in Education, 22, 53–64.
Van Eemeren, F. H., Grootendorst, R., & Henkemans, F. S. (Eds.). (1996). Fundamentals of argumentation theory – A handbook of historical backgrounds and contemporary developments. Mahwah, NJ: Erlbaum.
Walberg, H. J. (1982). What makes schooling effective? Contemporary Education Review, 1, 1–34.
Wilson, G., & Stacey, E. (2004). Online interaction impacts on learning: teaching the teachers to teach online. Australasian Journal of Educational Technology, 20(1), 33–48. [Available under: http://www.ascilite.org.au/ajet/ajet20/wilson.html].