Jl. of Computers in Mathematics and Science Teaching (2012) 31(4), 433-465

Peer Assessment Among Secondary School Students: Introducing a Peer Feedback Tool in the Context of a Computer Supported Inquiry Learning Environment in Science

Olia Tsivitanidou, Zacharias C. Zacharia, Tasos Hovardas, and Aphrodite Nicolaou
University of Cyprus, Cyprus

In this study we introduced a peer feedback tool to secondary school students with the aim of investigating whether this tool leads to a feedback dialogue when using a computer supported inquiry learning environment in science. Moreover, we aimed at examining what type of feedback students ask for and receive and whether the students use the feedback they receive to improve their science related work. The participants of the study were 38 eighth graders, who used a web-based learning platform, namely the SCY-Lab platform along with its SCYFeedback tool, as well as one of its learning missions, titled the “Healthy pizza” mission. In doing so, students were assigned to create a healthy pizza while considering the nutritional value of the ingredients, diet-related health issues and the human digestive system, and daily exercise. The findings of the study revealed that whenever students requested feedback from peers, they were very likely to receive it. Additionally, significant correlations were found between the changes students requested for their learner products and the changes their peers proposed for revision. Overall, the beginnings of a fruitful feedback dialogue were present, but they do not appear to have been enough to sustain a thorough dialogue throughout the intervention that could lead students to revise their work.

Introduction

Despite the increasing view of learning as a participative activity (e.g., Barab, Hay, Barnett, & Squire, 2001; Kollar & Fischer, 2010), educators have been slow to react to the emergence of this new participatory culture (Jenkins, Clinton, Purushotma, Robison, & Weigel, 2006). In a participatory culture of learning, students are expected to be actively engaged in the learning experience (Bosco, 2009), as well as in the assessment/feedback process (Tsivitanidou, Zacharia, & Hovardas, 2011). Kollar and Fischer (2010) have characterized peer assessment/feedback as an important component of this participatory culture of learning and an important component in the design of learning environments, such as computer supported inquiry learning environments, that implement this contemporary culture of learning. In our work on peer feedback (the peer assessment outcome) we are interested in whether students interact and give each other feedback in participatory environments (e.g., computer supported inquiry learning environments) when provided the opportunity.

In this study, we introduced a peer feedback tool, namely the SCYFeedback tool, to secondary school students, aiming at investigating whether this tool leads to a feedback dialogue in the context of a SCY (Science Created by You) mission. In SCY (de Jong et al., 2010) students are offered a participatory learning environment (Barab et al., 2000), SCY-Lab, which is populated with resources and tools required to carry out a SCY Mission. During a SCY Mission learners address general socio-scientific problems (e.g., How do we create a healthy pizza?) through collaborative and inquiry learning. They engage in processes of active learning, based on inquiry, knowledge building, and learning by design. Through these activities students learn by creating and exchanging what we refer to as ELOs (Emerging Learning Objects; Chen, 2004; Hoppe et al., 2005). SCY ELOs include models (e.g., system dynamics models), concept maps, designed artifacts, data sets, hypotheses, tables, summaries, reports, and other types of artifacts (for details see de Jong et al., 2010). The ELOs are the vehicles through which a student can ask for feedback during the learning process, and from which the teacher can gain an understanding of the general science skills, social and presentation skills, and domain concepts the student has developed. Ronen and Langley (2004) pointed to the benefits of peer assessment when students are provided the opportunity to learn from artifacts (ELOs) created by their peers, and Falchikov (2003) showed how peer assessment assists students to create higher quality artifacts. Thus, in our approaches to assessment (Wasson, Vold, & de Jong, 2012) these ELOs are central. In particular, formative assessment is given during the Mission in the form of peer feedback on ELOs.

The purpose of this study was to investigate whether the SCYFeedback tool leads to a feedback dialogue among students when working in the context of the “Healthy pizza” SCY mission, which requires students to create a healthy pizza while considering the nutritional value of the ingredients, diet-related health issues and the human digestive system, and daily exercise. The overall idea was to examine whether students engage, through a peer feedback tool, in a process of giving and receiving feedback for improving their own work and that of their peers.

Theoretical Background

Peer Assessment

Peer assessment concerns the involvement of learners in making judgments about their peers' learning products by using grades and written or oral feedback (Topping, 1998; Sung, Chang, Chiou, & Hou, 2004). In other words, peer assessment can be either summative, concentrating on judging learning results as correct or incorrect or on assigning a quantitative grade, or formative, concentrating on in-depth qualitative assessment of different kinds of learning results (Topping, 2003). Formative peer assessment refers to any type of assessment conducted by a student which provides instructive feedback to be used by the student in order to enhance his or her learning (Black & Wiliam, 1998; Xiao & Lucking, 2008). Formative peer assessment focuses on cognitive, social, affective and meta-cognitive aspects of learning, and it often applies a multi-method approach that leads to a more holistic profile instead of a single score (Strijbos & Sluijsmans, 2009). It aims at constructing a comprehensive picture of learners' competencies, it is an integral part of the learning process, and it takes place several times during a course rather than only at the end of it, as in summative peer assessment (Xiao & Lucking, 2008). The outcome of formative peer assessment is peer feedback, which is given during the learning process and aims at impacting the learning process as it develops (Sluijsmans, Brand-Gruwel, & van Merriënboer, 2002; Van Gennip, Segers, & Tillema, 2010; Xiao & Lucking, 2008). Peer feedback could be, for example, an opinion, a suggestion for improvement, or an idea. Giving and receiving feedback helps students to realize not only what they have achieved, but also how their work could be further developed (Tsivitanidou et al., 2011); thus, the assessment becomes part of the learning process (Frost & Turner, 2005). Frost and Turner (2005) explained that peer feedback is valuable because feedback is provided in ‘student-speak’ rather than ‘teacher-speak’ or ‘science-speak,’ and students may be more open to accepting it from peers.

Peer assessment and feedback have been investigated at the primary (Harlen, 2007; Harrison & Harlen, 2006; Lindsay & Clarke, 2001), secondary (e.g., Noonan & Duncan, 2005; Tsai, Lin, & Yuan, 2002; Tsivitanidou et al., 2011) and higher education levels (e.g., Crane & Winterbottom, 2008; Davies, 2006; Gehringer, 2001; Lindblom-Ylanne, Pihlajamaki, & Kotkas, 2006; Lorenzo & Ittelson, 2005; Purchase, 2000; Van Dyke, 2008). Researchers have shown that peer assessment and feedback enhance students' learning across all these educational levels (Black & Wiliam, 1998; Kennedy, Chan, Fok, & Yu, 2008; Pellegrino, Chudowsky, & Glaser, 2001; Dysthe, Lillejord, Wasson, & Vines, 2009; Hattie & Timperley, 2007; Shute, 2008; Sluijsmans et al., 2002; Van Gennip et al., 2010; Xiao & Lucking, 2008). The primary reason behind the positive effects of peer assessment is the fact that students are positioned in a participatory culture of learning during the assessment process (Dysthe, 2004; Swan, Shen, & Hiltz, 2006), in which students reflect not only on what they have achieved, but also on how it compares to the work of their peers, as well as on how their work could be further developed. Moreover, according to Falchikov (1995), such a participatory learning experience could enable students to develop meta-cognitive awareness that leads to the development of skills required for professional responsibility, judgment and autonomy. Hence, it can reasonably be understood why several researchers and educators are in favour of such a practice.

However, it should be noted that peer assessment is not an easy procedure to implement. It requires exposing students to substantial training and practice (Birenbaum, 1996; Fallows & Chandramohan, 2001; Hanrahan & Isaacs, 2001; Sluijsmans, 2002; Van Steendam, Rijlaarsdam, Sercu, & Van den Bergh, 2010). This complexity comes as a result of the complex nature of assessment in general, which requires understanding the content of the material to be assessed, the assessment criteria to be used, and the most effective way of providing suggestions for improving one's work without giving him/her ready-made answers/solutions.

Peer Assessment and Science Education

Peer assessment and its effects on students' learning have also been examined in various science education studies, mainly at the university level (Crane & Winterbottom, 2008).

For example, researchers found that peer assessment had a positive effect on undergraduate students' learning and critical thinking skills (Tsai, Liu, Lin, & Yuan, 2001), as well as on their willingness to revise their science related work (Prins, Sluijsmans, Kirschner, & Strijbos, 2005; Tsai et al., 2002). Positive influences of peer assessment on student learning have also been reported in secondary education (e.g., Black & Harrison, 2001). Tsivitanidou et al. (2011) found that secondary school science students have the beginnings of the skills necessary for enacting peer assessment, even in the absence of support. More specifically, they found that secondary school science students were able to provide written comments (positive or negative comments and suggested changes) in their feedback. Many studies, across several subject domains, have shown that providing such comments to peers promotes one's learning (Chen, Wei, Wu, & Uden, 2009; Paré & Joordens, 2008; Ploegh, Tillema, & Segers, 2009; Sluijsmans et al., 2002; Tseng & Tsai, 2007). In particular, it was found that the provision of reinforcing peer feedback (positive feedback on a peer's work) was the factor that most positively impacted the quality of students' work (Tseng & Tsai, 2007). On the other hand, research findings have indicated that peer feedback is constructive if it includes structural components such as suggestions for improvements and positive and negative judgments (Chen et al., 2009; Paré & Joordens, 2008; Ploegh et al., 2009; Sluijsmans et al., 2002; Tseng & Tsai, 2007).

Types of Peer Assessment

Peer assessment can be one-sided or reciprocal. In one-sided peer assessment the student undertakes either the role of the assessor or the role of the assessee, whereas in the case of reciprocal peer assessment the student undertakes both roles. In the context of this study we used reciprocal peer assessment because we consider it a better context for students to learn in, since they undertake both the role of the assessor and the assessee (for an example of a successful implementation of reciprocal peer assessment see Tsivitanidou et al., 2011). In this way students get the opportunity to reap the benefits of both roles.

Reciprocal Peer Assessment

Reciprocal peer assessment is the only type of peer assessment in which students undertake both the role of the assessor and the assessee.

The role of the assessor requires students to assess their peers' work/products and, thus, to produce feedback that often includes qualitative comments in addition to, or instead of, grades. After all participants have acted as assessors, the next stage involves the review of peer feedback and the revision of learner products. When participants switch roles and become assessees, they need to critically review the peer feedback they have just received, decide which revisions are necessary for the improvement of their work and proceed with making the corresponding changes. Of course, learning benefits can arise when receiving feedback from peers, but also during the phase of giving feedback to peers, since students could be introduced to alternative examples and approaches (Gielen, Peeters, Dochy, Onghena, & Struyven, 2010).

Reciprocal peer assessment, and peer assessment in general, can be enacted both in a paper-and-pencil and in a computer supported (e.g., web-based) context. However, numerous studies have shown that the computer supported context is the most effective (Davies, 2000; Sung et al., 2004; Tsai & Liang, 2009; Tsai et al., 2002). First, the timeliness needed in formative assessment can be enhanced significantly by online technology (Davies, 2000; Sung et al., 2004; Tsai & Liang, 2009; Tsai et al., 2002). Second, computer supported peer assessment systems can offer support/scaffolds during the assessment process. For instance, they can provide the students with pre-determined rubrics containing assessment criteria whenever needed. Third, computer supported peer assessment can also ensure the anonymity of participants, facilitate willingness to critique peer work (Wen & Tsai, 2006; Lu & Bol, 2007; Tseng & Tsai, 2007; Xiao & Lucking, 2008), and promote the effectiveness of peer assessment (Kollar & Fischer, 2010). Fourth, such systems can offer immense storage space, high processing speed, multimedia appeal, learner control, instant and personalized feedback, and multiple-branching capabilities (Heinich, Molenda, Russell, & Smaldino, 2002). Fifth, they can be provided to the students through the Internet and thus take advantage of the benefits the Internet carries (e.g., worldwide connectivity and collaboration due to the absence of time and place restrictions, communicating and sharing work with other students) (Yu, Liu, & Chan, 2005). Sixth, computer supported peer assessment systems can offer students better possibilities for organizing, searching and accessing information (Zhang, Cooley, & Ni, 2001).

This study

Reciprocal peer assessment is usually enacted within formal constraints, meaning that there are specified points in a learning activity sequence at which students are prompted to implement peer assessment. In the context of this study we decided to remove these constraints and provide students with the freedom to enact peer assessment whenever they wished. In this context, students are able to give and receive feedback from other students about their work, without any intervention from the teacher, while they are working on the same exercise or a different one. In this respect, peer assessment is reciprocal in nature, but it does not necessarily require giving and receiving feedback from the same peer or group of peers. Students can give and get ideas from different peers, and this feedback could be about the content, organization, appearance, grammar and other aspects of a learner's product. Needless to say, students are free to discuss anything related to their work. This way, they can improve their work and learn from each other. Finally, in such a context, neither assessment criteria are given to the students, nor are time restrictions placed on when to exchange feedback. Students are free to request feedback from peers and to give feedback whenever they see fit. In other words, the peer assessment implemented in this study was unsupported (e.g., no assessment criteria were provided), unstructured and unspecified time-wise (students could initiate a “feedback dialogue” with their peers, for learning purposes, for the learning products that they wished to and whenever they felt the need to do so). The rationale behind this mode of peer assessment implementation was to identify what secondary school students really could do on their own when enacting an unsupported and unstructured peer assessment. More specifically, the present study aimed at answering the following questions:

1. Does an unsupported and unstructured peer assessment, provided through the use of the SCYFeedback tool, lead to a feedback dialogue among students? When and how?
2. What type of feedback do the students ask for and receive?
3. Do students use the feedback they receive to improve their science related work (ELOs, e.g., reports, concept maps, models, tables)?

These questions were examined in the context of a science investigation. Specifically, we asked our participants to use the “Healthy pizza” SCY mission, which requires students to create a healthy pizza while considering the nutritional value of the ingredients, diet-related health issues and the human digestive system, and daily exercise.

To provide peer feedback, students were introduced to the SCYFeedback tool (for details on the SCYFeedback tool see the Methods section). Such research is significant for the science education community in several ways. It could shed light on what kind of support students request from their peers and what feedback their peers are capable of offering them in a science learning context. Moreover, it provides insight into when students' assessment capabilities fall short and, thus, into the instances in which the teacher could intervene. Finally, it provides insights to web-platform developers as to when students are in need of scaffolding from the web platform.

Methods

Participants

The participants were 38 eighth graders (14 year-olds), coming from two classes of a public school (Gymnasium) in Nicosia, Cyprus. The sample of the study was intentionally kept relatively small, because of the type and amount of data we needed to collect for answering the research questions. Participation in the study was anonymous and did not contribute to students' final grade. Researchers strongly argue that such an arrangement is important in order to allow students to express themselves freely about their peers' work (Topping, 2009). However, in order to avoid a possible lack of motivation, since this study was disassociated from students' final grade for methodological purposes, students who sincerely completed all the requirements of the study were promised a book of their preference and participation in an excursion. None of the students had experience with peer assessment before the implementation of this study. Finally, the mode of work was primarily collaborative in nature. Students worked individually when ELOs that involved information of a personal nature were requested (e.g., the Daily calorie intake ELO, the Health passport ELO). The study's curriculum defined the instances when students had to switch to an individual mode of work (see Appendix A).

Material

Throughout the course, students used learning material developed for the SCY (Science Created by You) project (de Jong et al., 2010). In SCY students are offered a participatory learning environment, SCY-Lab, which is populated with resources and tools required to carry out a SCY Mission. During a SCY Mission, learners address general socio-scientific problems through collaborative and inquiry learning. Through these activities students learn by creating and exchanging what we refer to as Emerging Learning Objects (ELOs) (Chen, 2004; Hoppe et al., 2005). SCY ELOs include models (e.g., system dynamics models), concept maps, designed artifacts, data sets, hypotheses, tables, summaries, reports, and other types of artifacts. The ELOs are the vehicles through which a student can ask for feedback during the learning process, and from which the teacher can gain an understanding of the general science skills, social and presentation skills, and domain concepts the student has developed. Thus, ELOs are central in our approaches to assessment (Wasson et al., 2012).

Under the “Healthy pizza” mission, the learning material carried by SCY-Lab required students to create 31 ELOs throughout the teaching intervention (for a list of all activities and ELOs produced during the “Healthy pizza” mission see Appendix A). The mission aimed at actively engaging students in the right choice of food products offered by their school's canteen or cafeteria. In doing so, students were assigned to create a healthy pizza while considering the nutritional value of the ingredients, diet-related health issues and the human digestive system, and daily exercise. All ELOs were created using SCY-Lab's tools (e.g., students used SCYMapper for creating concept maps) and were stored in the SCY-Lab platform. The ELO feedback was given using the SCYFeedback tool. The activity sequence of the learning material required students to pass through several steps. The purpose of this mission was to create a healthy pizza.

The mission was carried out by two science teachers, one per class, who previously attended preparatory meetings designed for the purposes of this study. These meetings focused on familiarizing the teachers with the content of the study, the procedures and methods, and the tools of the web-based platform. The meetings ran throughout the teaching intervention. Specifically, the teachers attended two three-hour meetings prior to this study and a one-hour meeting prior to each classroom meeting.

The SCYFeedback Tool

The SCYFeedback tool is a peer feedback tool, whose design and development was inspired by both theoretical and empirical underpinnings (for details see Wasson & Vold, in press). Using this feedback tool, students can (a) ask for feedback on their own ELO, (b) receive feedback on their own ELO, (c) browse a gallery of ELOs submitted for feedback, and (d) provide feedback on any ELO in the ELO gallery. While working on an ELO in SCY-Lab, a student can ask a question related to the ELO directly in the tool with which they are creating the ELO and submit it for feedback. Figure 1 shows the SCY-Lab environment where the student is creating a note-taking table with personal information regarding his/her daily calorie intake during the Healthy Pizza Mission.

Figure 1. Requesting feedback for a specific ELO.

Once the ELO has received feedback, the student receives notice and he/she can view the formative peer feedback. Similarly, students receive notice when another student has asked for feedback on an ELO. The student can then open the SCYFeedback tool, find the ELO in the ELO Gallery, and provide feedback. All feedback questions asked in SCY-Lab result in the ELO and its question being added to the SCYFeedback tool's ELO Gallery (see Figure 2) of the most recently posted ELOs (i.e., those that are awaiting feedback). Figure 3 shows the ELO Feedback screen, where students can give or receive feedback on an ELO (for more details see Wasson & Vold, in press).

Figure 2. Logging in to the SCYFeedback tool to view peers' ELOs in the ELO Gallery.

Figure 3. Giving feedback for a specific ELO of a peer.
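
The feedback cycle just described (a student submits an ELO together with a question, the ELO appears in the gallery of items awaiting feedback, and peers attach feedback about which the owner is then notified) can be summarized in a minimal data-model sketch. The sketch below is illustrative only; all class and field names are hypothetical and do not correspond to the actual SCY-Lab/SCYFeedback implementation.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class FeedbackEntry:
    author_id: str          # peer who gives the feedback
    text: str               # the feedback itself (an answer or a question)
    created: datetime = field(default_factory=datetime.utcnow)

@dataclass
class FeedbackRequest:
    elo_id: str             # the ELO submitted for feedback (e.g., a concept map)
    owner_id: str           # student who asked for feedback
    question: str           # what the owner wants to know about the ELO
    responses: List[FeedbackEntry] = field(default_factory=list)

class FeedbackGallery:
    """Holds the most recently posted ELOs that are awaiting feedback."""

    def __init__(self) -> None:
        self._requests: List[FeedbackRequest] = []

    def submit(self, request: FeedbackRequest) -> None:
        # Asking a question in SCY-Lab makes the ELO visible to peers.
        self._requests.append(request)

    def awaiting_feedback(self) -> List[FeedbackRequest]:
        return [r for r in self._requests if not r.responses]

    def give_feedback(self, elo_id: str, entry: FeedbackEntry) -> Optional[FeedbackRequest]:
        # A peer browses the gallery and attaches feedback to a chosen ELO;
        # the owner would then be notified and could view the feedback.
        for request in self._requests:
            if request.elo_id == elo_id:
                request.responses.append(entry)
                return request
        return None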

Procedure

Before starting the study's intervention, an introductory lesson about the SCY-Lab environment and its tools, including the SCYFeedback tool, was implemented. Right after that, each student used the SCY-Lab platform on a computer in order to access the learning material, follow the activity sequence and complete the accompanying assignments/tasks.

The mission unfolded in several steps. At the beginning the students were informed about the purpose of the mission and created their favorite pizza by selecting the ingredients of their choice. Next, they were introduced to the concept of nutrients (carbohydrates, fats, proteins, vitamins, minerals, and water) and learned how to read and interpret the nutritional value labels found on most food products. They familiarized themselves with versions of the food pyramid and compared their own diet with the daily nutritional needs of the human body. They then focused on exercise behaviour and the balance between the amount of calories ingested and the amount of energy spent, based on the activities carried out throughout the day, in order to put together a health passport based on their own eating and exercise habits. Further on, they learnt about the digestive system and its function, and looked at the consequences of an unhealthy diet. Students used an optimization strategy in order to select the healthiest pizza ingredients and created several (virtual) pizzas, so as to find the healthiest option. Finally, they compared their pizzas with those of their peers and wrote a letter of advice to their school canteen or cafeteria providing a healthier alternative. Throughout this procedure students were free to engage in an unstructured and unsupported peer assessment and participate in any form of feedback dialogue with their peers via the SCYFeedback tool.

The duration of the mission was approximately 18 hours. During the mission, students alternated individual and collaborative activities in SCY-Lab with whole-class discussions and authentic hands-on experiences. During the individual mode of work students were working on ELOs that required data of a personal nature, whereas during the collaborative mode of work students worked in dyads and on ELOs that required synthesis of the personal information coming from the two members of each group. The members of a dyad were sitting next to each other, but working on separate computers.

Data Collection

The data collection process involved two sources, namely screen and video captured data and interviews. The interviews involved 30 of the 38 study participants.

The screen and video captured data were collected through computer screen capture plus video-audio software (River Past Screen Recorder Pro) throughout the study. Screen recording allowed the collection of a rich record of actual computer work activity (e.g., actions, sounds, movements that took place on the computer monitor) in its natural work setting, portraying the user's mobility among the various parts of the web-based material. With the assistance of a microphone and a camera, the software also allowed videotaping the students in conjunction with what was taking place on the screen. Screen captured data along with video data were collected for all class meetings.

The interviews involved 30 participants (from 10 different groups). Each participant was interviewed separately through the use of a structured protocol (see Appendix B) which consisted of nine open-ended questions. Questions other than those of the protocol were used only for clarification purposes. The purpose of the interview was to examine what students thought of their experience with the use of the SCYFeedback tool in SCY-Lab.

Data Analysis

For the purposes of answering the first and second research questions we used data derived from the interviews and the screen and audio recordings. In particular, we isolated the data about whether students used the SCYFeedback tool, when and how. From the screen and video captured data we isolated the episodes during which the students were requesting feedback and giving feedback. In the case of the requesting feedback episodes, we coded for the number of students who requested feedback, the ELOs for which feedback was requested, and the type of feedback requested. The latter resulted in the following categories: question about science content, request for changes, request for clarification, request for help, and request for an opinion. Additionally, in an effort to understand the characteristics of the ELOs for which the students requested feedback and to check for any possible commonalities across these ELOs, we coded data concerning the type of an ELO (text, table, etc.), the learning activities that were required for an ELO to be produced, and the time needed to produce each ELO (see Appendix A for information on all of the mission's ELOs).

In the case of giving feedback we coded for the number of students who provided feedback, the ELOs for which feedback was provided, and the type of feedback provided. The latter resulted in two main categories: feedback in the form of an answer and feedback in the form of a question.

The latter category was not that prevalent (n=5). In contrast, the category of answers was the most popular type of response (n=47). The category of answers was further coded into sub-categories: positive judgments1, negative judgments2, changes proposed3, neutral comments and clarification/information. It should be mentioned that in both cases (requesting and giving feedback) the categories were not mutually exclusive (e.g., a specific piece of feedback could include both positive and negative judgments).

These data were treated quantitatively by using non-parametric Kendall's tau b correlations. For the purposes of the Kendall's tau b correlation tests we used the following variables: requesting feedback, giving feedback, the average time spent on each ELO's production, along with its corresponding standard deviation, the time needed for the production of each ELO for each peer group, the number of ELOs for which feedback was requested, the number of feedback requests, the number of students who requested feedback, the various categories that emerged from the analysis of the ELO content, the number of ELOs for which feedback was finally given, the number of feedback texts received, and the various categories that emerged from the analysis of the given feedback's content.

From the interviews we used the data collected through the questions that focused on: whether students used the SCYFeedback tool (if not, we also asked why; if yes, what their impressions of it were), how they used the tool, whether they were satisfied with their experience with the tool, what improvements they would recommend, if any, whether there was a specific feature that they liked in the SCYFeedback tool, how they would like a feedback tool to be, the type of feedback the students actually asked for and received from peers, and students' expectations on what type of feedback they would like to ask for and receive from peers. All interviews were transcribed and then analyzed qualitatively. We used open coding to analyse the interview transcripts. After coding the data of each question of the protocol, we used axial coding to create categories.

1 Positive judgments concern encouraging remarks and proper or correct handling of aspects related to the work/products (e.g., ELO) produced by a group of students (e.g., inclusion of proper material, inclusion of scientifically accurate information) (for more details see Chen et al., 2009; Tseng & Tsai, 2007).
2 Negative judgments concern incorrect or incomplete handling of aspects related to a student group's work/products and discouraging remarks (for more details see Chen et al., 2009; Tseng & Tsai, 2007).
3 Changes proposed to assessee groups concern assessors' comments about the revision of assessees' artifacts.

For the third research question we used data derived from the interviews and the screen and audio recordings. In particular, we isolated the data about whether the students responded to the feedback they received and what the content of this response was, whether they revised their ELOs based on the received feedback and, if so, what kind of changes were actually made. From the screen and video captured data we isolated the episodes during which the students were receiving feedback. More specifically, we coded for whether students revised their ELOs based on the feedback received, whether students sent comments or requests to their peers concerning the feedback received, and what kind of interaction they had after these comments or requests. From the interview data, we used the data derived from the questions that focused both on students who received feedback and on students who had not received any feedback. In the case that they had received feedback from peers, we used the data coming from a question checking whether they used the feedback and whether it helped them to improve their work. In the case that they did not receive any feedback, we used the data coming from the questions focusing on whether they would have liked to receive feedback and whether they believed it would have helped them. These data were also analyzed qualitatively through open and axial coding.

Internal-reliability Data Analysis

Internal-reliability data were collected for each coding process separately. Specifically, a second rater (the fourth author) scored about 40% of the study's data (a random sample), independently from the first author who coded all of the study's data, and then a Cohen's Kappa was calculated for the two raters for each coding process separately. The second rater did not have access to the study's data or coding process until she was called to get involved in the inter-rater reliability process. Cohen's Kappa was found to be above 0.89 in all cases.
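
As an illustration of how such an agreement check can be computed, the following sketch uses scikit-learn's implementation of Cohen's Kappa on two raters' category codes for the same items. The labels shown are hypothetical placeholders, not the study's actual coded data.

from sklearn.metrics import cohen_kappa_score

# Hypothetical category codes assigned to the same six feedback texts by two raters.
rater_1 = ["positive", "negative", "changes", "positive", "neutral", "positive"]
rater_2 = ["positive", "negative", "changes", "positive", "positive", "positive"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's Kappa = {kappa:.2f}")  # values above roughly 0.8 are usually read as strong agreement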

Results

Does an unsupported and unstructured peer assessment, provided through the use of the SCYFeedback tool, lead to a feedback dialogue among students? When and how?

Students engaged in a feedback dialogue in relation to 11 out of the 31 ELOs of the “Healthy pizza” mission (Table 1). This indicates a rather confined use of the SCYFeedback tool. Figure 4 also reveals that the frequency of requested feedback decreased as time progressed.

Table 1
Number of feedback requests per ELO

ELO a                                  Frequency of requested feedback
My favorite pizza                      9
Notes on unhealthy diet                6
Questions about pizza benefits         2
Nutrient and energy calculations       2
Questions about the food pyramid       9
Daily calorie intake                   5
Energy fact sheet                      3
Estimated Energy Requirements          1
Map of the digestive system            2
Fact sheet of one organ                2
My first healthy pizza                 1
Total number of feedback requests      42

a The ELOs are presented in the series in which they appear in the learning material.

Figure 4. Number of feedback requests per ELO.

In an effort to understand the characteristics of the ELOs for which the students requested feedback and to check for any possible commonalities across these ELOs, we coded data concerning the type of an ELO (text, table, etc.), the learning activities that were required for an ELO to be produced, and the time needed to produce each ELO (see Table 2; see also Appendix A for information on all of the mission's ELOs), and then proceeded to contrast the data using non-parametric Kendall's tau b correlations.

Table 2
Description, mean time needed to produce, and frequency of feedback requested for Emerging Learning Objects (ELOs)

ELOs with their serial number              ELO description       Activities needed to produce ELOs   Mean time needed to produce ELOs a   Frequency of feedback requested
ELO1 (My favorite pizza)                   Animation              Record preselected choices          12.27 (5.82)                         9
ELO2 (Notes on unhealthy diet)             Text                   Watch a video, note taking          34.85 (15.66)                        6
ELO3 (Questions about pizza benefits)      Text                   Read an article, note taking        65.23 (34.12)                        2
ELO6 (Nutrient and energy calculations)    Table                  Web quest, note taking              18.53 (0.04)                         2
ELO8 (Questions about the food pyramid)    Text                   Watch a video, note taking          29.49 (17.90)                        9
ELO10 (Daily calorie intake)               Table                  Note taking                         63.68 (16.63)                        5
ELO12 (Energy fact sheet)                  Cognitive map          Watch a video, note taking          24.30 (15.04)                        3
ELO14 (Estimated Energy Requirements)      Table                  Mathematical calculations           36.45 (10.05)                        1
ELO15 (Map of the digestive system)        Correspondence task    Read guidelines                     11.55 (10.61)                        2
ELO16 (Fact sheet of one organ)            Fact sheet             Web quest, note taking              86.55 (28.50)                        2
ELO21 (My first healthy pizza)             Animation              Record preselected choices          7.12 (2.73)                          1

a Mean time needed to produce ELOs is given in minutes; standard deviation is given in parentheses.

For the purposes of the Kendall's tau b correlation test we used the following variables: requesting feedback, giving feedback, the average time spent on each ELO's production, and the standard deviation of the time for each ELO's production. Kendall's tau b correlations (see Table 3) revealed that requesting feedback was positively correlated to giving feedback across ELOs (Kendall's tau_b = 0.600; p < 0.05), which implies that whenever students requested feedback from peers, they were very likely to receive it. Moreover, the longer the average time for the preparation of an ELO, the larger the standard deviation (Kendall's tau_b = 0.667; p < 0.05). This means that there was heterogeneity among students in the time needed to produce ELOs; that is to say, some students needed much more time than others to produce the same ELO.

Table 3
Kendall's tau b correlations among the variables requesting and giving feedback and the time needed for ELO production

                              Requesting feedback   Giving feedback   Average time
Giving feedback               0.60*
Average time                  ns                    ns
Standard deviation of time    ns                    ns                0.67*

Note: ns = not significant; * = p < 0.05

It should be noted that 21 students out of the 38 participants requested feedback, while there were in total 42 requests for feedback across all ELOs (i.e., the sum of the last column in Table 1). In response to these feedback requests, students provided 52 different feedback texts. The students who responded to their peers' requests numbered 16 (out of the 38 participants), and each one of them sent approximately three different feedback texts through the SCYFeedback tool. However, it should be clarified that not all of the 21 peers who requested feedback received it. Eight of the 21 students who requested feedback did not receive any feedback from their peers. Therefore, only 13 students received feedback based on their initial requests. Six of them received feedback concerning all their requests, whereas seven received feedback for part of their requests. Ten students out of the 38 participants both requested and gave feedback.
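
A minimal sketch of how such a Kendall's tau-b coefficient can be computed with SciPy is given below. The per-ELO request counts come from Table 1; the counts of feedback given per ELO are hypothetical placeholders, since only the resulting correlation is reported here, not the raw "giving feedback" column.

from scipy.stats import kendalltau

# Feedback requests per ELO (Table 1, in the order the ELOs appear).
requests_per_elo = [9, 6, 2, 2, 9, 5, 3, 1, 2, 2, 1]
# Hypothetical counts of feedback given per ELO (placeholder values for illustration only).
given_per_elo = [8, 5, 2, 1, 7, 4, 3, 1, 1, 2, 1]

tau, p_value = kendalltau(requests_per_elo, given_per_elo)  # tau-b, which handles tied ranks
print(f"Kendall's tau_b = {tau:.3f}, p = {p_value:.3f}")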

Interview data revealed that 22 students (out of the 30 interviewees) used the tool, while eight of them did not use it. Students who did not use the SCYFeedback tool mentioned that they did not have much time because they had to finish their own work first (N=2), and that they did not like the tool and did not find it useful (N=2). One said that he thought it was not important to use it, and another one said that he did not need it because he had help from the teacher and his group mates. Students who used the SCYFeedback tool were asked about their impressions of the tool. Three replied that it seemed a good experience and that they liked it, while another three mentioned that they liked it because they could talk with others and get feedback for their work. Two students mentioned that the SCYFeedback tool was helpful for improving one's work. Two students mentioned that the tool allowed them to interact with peers and avoid waiting for the teacher to come. The following quotations are particularly revealing:

“It's nice because you can interact with others and get your peers' opinion. This way you don't need so often teacher's opinion about your work.” (Interviewee S3G9)

“I liked it because we could see our peers' answers and also we could send our work simultaneously to all. You could see other's work, get some ideas and then you could fill your work with something new, also you could communicate with your team.” (Interviewee S2G10)

What type of feedback do students ask for and receive?

The most frequent types of feedback requested were help (mostly about technical issues) and a peer's opinion about an ELO (see Table 4). In 13 cases students asked for clarification/information from their peers. Only six students requested suggestions for changes from their peers.

Table 4
Type of feedback requested from peers

Type of requested feedback                     Frequency   Example (actual quotes)
Requesting help                                39          “How can I save my ELO?”
Requesting an opinion                          26          “What do you think of my table?”
Asking for clarification/information           13          “What is the tofu?”
Asking for possible changes for improvement    6           “What can I add or correct to make it better?”

In the case of feedback provided to peers, the two main categories of feedback were answers/statements (n=47) and questions (n=5). Answers/statements were further coded into the following sub-categories: positive judgments, negative judgments, changes proposed, neutral comments and clarification/information. As shown in Table 5, positive judgments prevailed. In eight cases, the students proposed changes to their peers, and in five cases students provided neutral comments. Finally, in five out of the 52 cases, students replied to the initial requests giving clarifications and additional information to peers. It should be mentioned that the categories were not mutually exclusive (e.g., a specific piece of feedback could include positive and negative judgments: “Your answer is very good, you included all information needed but without many details. You also forgot to mention that in the new pyramid we eat from all the food groups”).

Table 5
Type of feedback received from peers

Type of feedback received a       Frequency   Example (actual quotes)
Answers/Statements                47
  Positive judgments              29          “Your response is complete! Excellent!”
  Negative judgments              8           “I think that you are missing some information which I neither could find”
  Neutral comments                5           “It is your choice to select the ingredients for your pizza”
  Changes to be made              8           “Your work is good but you could also describe the experiment with the nuggets that we watched in the video”
  Clarification/information       5           “Pepperoni is a spicy sausage”
Questions                         5           “What are you talking about here?”

a Feedback categories were not mutually exclusive.

Given the type of feedback that students requested and offered, we ran a non-parametric Kendall's tau b correlation test to investigate possible correlations among the aforementioned variables. Kendall's tau b correlations among the types of feedback requested and offered revealed that changes requested were significantly correlated with changes proposed (Kendall's tau_b = 0.708; p < 0.05) and that clarification/information requested was significantly correlated with clarification/information provided (Kendall's tau_b = 0.637; p < 0.05) (Table 6). That is to say, whenever students asked for proposed changes or additional information, their peers were willing to provide the requested type of feedback.

Table 6
Kendall's tau_b correlations among structural components of requested and given feedback

                                                      Requesting   Requesting   Requesting   Requesting    Requesting
                                                      help         changes      opinion      information   question
Requesting information                                ns           ns           ns           1.00          0.630*
Requesting help                                       1.00         ns           0.505*       0.698**       0.936***
Requesting opinion                                    0.505*       0.587*       1.00         ns            0.547*
Given positive comment                                ns           0.566*       ns           ns            ns
Given information                                     0.636*       ns           ns           0.637*        ns
Feedback given in the form of an answer/statement     ns           ns           ns           ns            0.565*
Given (proposed) changes                              ns           0.708*       0.653*       ns            0.584*

Note: ns = not significant; * p < 0.05; ** p < 0.01; *** p < 0.001

For triangulation purposes, we also posed questions to students during the interviews about the kind of feedback that they gave to peers, in order to complement the quantitative analysis. The interview data analysis revealed that 16 out of the 30 students stated that they did not give feedback, four of them mentioned that they gave positive comments, while four others mentioned that they gave positive feedback and some tips for work improvement. Furthermore, three students mentioned that they gave positive and negative comments and proposed changes for the improvement of their peers' ELOs. Two other students stated that they offered their peers ideas that could improve their work; another stated that she offered only general comments to her peers, and finally one student mentioned that he did not remember if he had given feedback.

Similarly, students were asked what kind of feedback they received. Nineteen students mentioned that they did not receive any feedback at all, four students mentioned that they received only positive comments and no proposed changes, three students reported that they did not look at the SCYFeedback tool to see whether they had received any feedback from their peers, two students mentioned that they had received just a neutral comment, one student mentioned that he received only negative comments and no proposed changes, and finally one student mentioned that her peers suggested some changes concerning the improvement of her ELOs.

Apart from what kind of feedback the students actually asked for and received from peers, they were further asked during the interviews what kind of feedback they would like to ask for and receive from peers. The analysis revealed a wide range of responses. There were students who: (a) wanted to know if their work was scientifically accurate (N=14), (b) wanted to know what they could add/change in their ELO in order to make it better (N=12), (c) wanted to ask for clarifications when a task was not understandable (N=8), (d) wanted to ask for new ideas to be included in their ELO (N=5), and (e) wanted to ask for an assessment of a completed ELO (N=6). Only one student did not have an opinion regarding this issue. When the students were asked what kind of feedback they would like to receive, a wide range of categories of responses emerged through the analysis. In particular, the students wanted: (a) to receive both positive and negative comments (N=12), (b) to be informed about the scientific accuracy of their ELO (N=8), (c) to receive specific comments that point to changes to certain aspects of their ELOs (N=6), (d) to get honest answers to their initial requests/questions posed through the SCYFeedback tool (N=4), (e) to get feedback related to the content of their ELO (N=3), (f) to get a score/grade for their work (N=3), (g) to receive comments on what their peers consider incorrect in their work (N=2), (h) to receive positive comments (N=2), and (i) to get new ideas to improve their work (N=2).

The following quotation from interviewee S1G11 is particularly revealing regarding these categories:

“I would like to receive feedback with positive comments about my work, but I would also like to receive some advice of how I could improve it. It would be nice if I would also get a grade from my classmates. This way, I would know whether it is correct and complete and whether it needs to get better and how much my classmate liked my work” (Interviewee S1G11)

Do students use the feedback they receive to improve their science related work?

Interestingly, none of the students who received feedback proceeded with revising their ELOs based on the suggestions included in the feedback received. To investigate the reasons that discouraged students from proceeding to any revision after receiving peer feedback, we turned to our interview data. Most of the students who received feedback stated that they felt that the feedback was not of a high standard, while they highlighted that their point of reference for this judgment was the quality of the ELO they themselves had produced during the mission. In other words, they felt more confident about the quality of their ELO than about the quality of the feedback. Moreover, they mentioned that peer feedback was not as detailed as the feedback one usually gets from a teacher. Along these lines, they mentioned that general comments, or comments without critical remarks, negative judgments or suggestions for changes, were not helpful and therefore not worth considering for any kind of revision. The following quotation is particularly revealing:

“I did not change my ELO because the feedback was not pointing to something that was problematic with my ELO. For example, I got a comment that the way I organized my information in the Energy fact sheet (ELO) was not good. I could not understand what was ‘not good’. It looked fine to me.” (Interviewee S1G3)

Discussion

The objective of this study was to investigate whether the SCYFeedback tool leads to a feedback dialogue among students in the context of a science investigation. In particular, we aimed at examining what type of feedback students ask for and receive and whether the students use the feedback they receive to improve their science related work.

Regarding when and how students use unstructured and unsupported peer assessment in the context of the “Healthy pizza” mission (first research question), the results showed that students engaged in a peer assessment process when they felt that they needed help, an opinion about something they had created, or clarifications or information about science content. Furthermore, our findings showed that whenever students requested feedback from peers, they were very likely to receive it. However, student engagement in peer assessment was found to be conditional, since students felt the need for feedback only for certain ELOs. This calls for further research in order to identify the reasons students felt this need for certain ELOs and not others. Could it be that these ELOs have some commonalities that cause this need? On a surface level, we checked the type of these ELOs (e.g., text, table) and found that they were of different types, which means that ELO type was not the factor we were looking for. Needless to say, more in-depth analysis is needed to reach solid conclusions. On the other hand, it appears that the beginnings of a fruitful feedback dialogue were there, but they were not enough to support a thorough dialogue that could lead students to revise their ELOs. Obviously, the type of support needed to reach sustainable dialogues remains to be investigated.

Concerning the type of feedback requested and provided (second research question), positive comments were much more numerous than negative comments in peer feedback, which confirms analogous findings of previous studies (Cho & MacArthur, 2010; Cho, Schunn, & Charney, 2006). This result appears to indicate that the students wanted to encourage their peers. However, the presence of positive judgments does not seem to have promoted the revision of ELOs. In the study's interviews students stated that positive comments would not assist them in revising their ELOs. In fact, in one of our previous studies we found that positive judgments might act as a barrier and prevent assessees from revising their work (Tsivitanidou et al., 2011). Indeed, it seems that students would get involved in critically reviewing peer feedback only in the case of receiving negative comments and/or suggestions for changes. The quantitative analysis in the present study revealed significant correlations between changes requested and changes proposed.

What is left now is to encourage students to increase the number of negative/critical comments and/or suggestions for changes in their feedback in order to stimulate their peers' interest to engage in peer assessment and revise their ELOs, which is in line with previous research (Davies, 2006; Tseng & Tsai, 2007; Tsivitanidou et al., 2011).

Another noteworthy finding was that students' use of peer assessment declined as time progressed. Since students did not find peer comments justifiable enough to support their work, they might have felt that there was no need to request more of this feedback. This issue of appreciation could also explain why our participants did not proceed with revising their ELOs after receiving peer feedback, which relates to our final research question. We believe that appreciation could help overcome students' hesitancy to accept peers as legitimate or capable assessors. Prior studies revealed that students tend to regard expert feedback as more valuable than peer feedback (Bryant & Carless, 2010; Peterson & Irving, 2008). Hence, we need to create the right circumstances in which students can create feedback of good quality (e.g., scientifically accurate) and appreciate their peers as legitimate and trustworthy assessors. In any case, the fact that students did not make any changes to their ELOs does not imply that students did not benefit from the whole procedure. Indeed, students emphasised that through the use of the SCYFeedback tool they could interact with their classmates and get new ideas from them, either when receiving peers' feedback or when reviewing peers' ELOs. It could be that students improved their ELOs when examining the work of other students and producing feedback for them, or it could be that the ideas gained from receiving or providing feedback will be used by students in future learning endeavours (Strijbos & Sluijsmans, 2009). Both of these conjectures sound reasonable, but further research is definitely needed in order to reach solid conclusions.

Since high-quality peer feedback processes are not likely to show up spontaneously, earlier studies have emphasized training as a crucial prerequisite of an effective peer assessment procedure (Gielen et al., 2010; Van Zundert, Sluijsmans, & Van Merriënboer, 2010; Xiao & Lucking, 2008). Dochy, Segers and Sluijsmans (1999) highlighted the need to develop a shared understanding of the assessment/feedback procedure. Training is said to support the role of the peer who produces feedback, in improving the quality of peer feedback (Van Steendam et al., 2010), as well as the role of the peer who receives feedback, in appreciating peer feedback (Gielen et al., 2010). To improve the quality of peer feedback, a training session prior to any implementation of unsupported and unstructured peer assessment is needed, in which students should be prompted to both request and provide feedback that includes (a) structural components that correlate with revisions and improvements of ELOs (e.g., detailed negative/critical comments), (b) scientifically accurate content, and (c) proper justification as to why something needs to be changed/revised.

Training should also focus on the sustainability of the unsupported and unstructured peer feedback dialogue and on having students accept their peers as legitimate or capable assessors. Prior studies have already pointed towards the need to address these two issues, but no framework has been provided yet as to what needs to be done (Brindley & Scoffield, 1998; Orsmond, Merry, & Reiling, 1996; Smith, Cooper, & Lancaster, 2002; Van Gennip et al., 2010; Walker, 2001).

Another interesting research direction would be how computer technology could further support unsupported and unstructured peer assessment. Of course, by providing support the unsupported aspect of peer assessment is “violated.” However, we could have a fading mechanism that initially provides support and later on starts fading out until we reach a completely unsupported stage (if unsupported peer assessment is the aim). The idea is to have the students first understand the added value of providing and receiving feedback and then leave them on their own. In this context, the students need to understand when to provide feedback, why to provide it, what to include in it, how to present it, and what to do with the feedback received (e.g., examine the quality of the feedback received). Gielen et al. (2010) argued that the provision of scaffolding could help in this respect. For instance, computer-based scaffolding could (a) prompt students to provide more thorough, well documented critical comments, (b) encourage students to communicate for clarifications, (c) encourage students to collaborate, (d) encourage students to communicate when the feedback dialogue begins to fade out, and (e) bring together students who could offer valuable feedback to each other according to the development and quality of their work. All of these, in the context of a web-based platform, could become feasible through the use of data mining (e.g., the use of agents).

Overall, further research on defining how to support students in enacting peer assessment in a science-related, computer-based learning environment is definitely needed, specifically in an unstructured and unsupported peer assessment context such as the one in this study. Given the benefits that peer assessment brings to a science learning environment, such research becomes an essential need.

References

Barab, S. A., Hay, K. E., Barnett, M., & Squire, K. (2001). Constructing Virtual Worlds: Tracing the Historical Development of Learner Practices. Cognition and Instruction, 19 (1), 47–94.
Barab, S. A., Hay, K. E., Squire, K., Barnett, M., Schmidt, R., Karrigan, K., et al. (2000). Virtual Solar System Project: Learning Through a Technology-Rich, Inquiry-Based, Participatory Learning Environment. Journal of Science Education and Technology, 9 (1), 7–25.
Birenbaum, M. (1996). Assessment 2000: Towards a pluralistic approach to assessment. In M. Birenbaum & F. Dochy (Eds.), Alternatives in assessment of achievements, learning processes and prior knowledge (pp. 3-29). Boston, MA: Kluwer.
Black, P., & Harrison, C. (2001). Feedback in questioning and marking: the science teacher's role in formative assessment. School Science Review, 82 (301), 55-61.
Black, P. J., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5, 7-74.
Bosco, J. (2009). Participatory Culture and Schools: Can We Get There from Here? Threshold, 2009, 12–15.
Brindley, C., & Scoffield, S. (1998). Peer assessment in undergraduate programs. Teaching in Higher Education, 3 (1), 79-89.
Bryant, D. A., & Carless, D. R. (2010). Peer assessment in a test-dominated setting: empowering, boring or facilitating examination preparation? Educational Research in Policy and Practice, 9 (1), 3-15.
Chen, W. (2004). Reuse of collaborative knowledge in discussion forums. Lecture Notes in Computer Science, Vol. 3220 (pp. 800–802). Berlin/Heidelberg: Springer-Verlag.
Chen, N.-S., Wei, C.-W., Wu, K.-T., & Uden, L. (2009). Effects of high level prompts and peer assessment on online learners' reflection levels. Computers and Education, 52, 283-291.
Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20 (4), 328-338.
Cho, K., Schunn, C. D., & Charney, D. (2006). Commenting on writing: typology and perceived helpfulness of comments from novice peer reviewers and subject matter experts. Written Communication, 23, 260-294.
Crane, L., & Winterbottom, M. (2008). Plants and photosynthesis: peer assessment to help students learn. Educational Research, 42 (4), 150-156.
Davies, P. (2000). Computerized peer assessment. Innovations in Education and Training International, 37, 346-355.
Davies, P. (2006). Peer-assessment: judging the quality of students' work by comments rather than marks. Innovations in Education and Teaching International, 43 (1), 69-82.

De Jong, T., Van Joolingen, W., Giemza, A., Girault, I., Hoppe, U., Kindermann, J., Kluge, A., Lazonder, A., Vold, V., Weinberger, A., Weinbrenner, S., Wichmann, A., Anjewierden, A., Bodin, M., Bollen, L., d'Ham, C., Dolonen, J., Engler, J., Geraedts, C., Grosskreutz, H., Hovardas, T., Julien, R., Lechner, J., Ludvigsen, S., Matteman, Y., Meistadt, Ø., Næss, B., Ney, M., Pedaste, M., Perritano, A., Rinket, M., von Schlanbusch, H., Sarapuu, T., Schulz, F., Sikken, J., Slotta, J., Toussaint, J., Verkade, A., Wajeman, C., Wasson, B., Zacharia, Z., & van der Zanden, M. (2010). Learning by creating and exchanging objects: The SCY experience. British Journal of Educational Technology, 41 (6), 909-921.
Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: a review. Studies in Higher Education, 24, 331-350.
Dysthe, O. (2004). The challenges of assessment in a new learning culture. The 32nd International NERA/NFPF Conference, Reykjavik, Iceland.
Dysthe, O., Lillejord, S., Wasson, B., & Vines, A. (2009). Productive e-feedback in higher education: Two models and some critical issues. In S. Ludvigsen & R. Saljo (Eds.), Learning Across Sites. Oxon: Routledge.
Fadel, C., Honey, M., & Pasnik, S. (2007). Assessment in the Age of Innovation. Education Week. Retrieved 15 May 2012 from http://www.edweek.org/login.html
Falchikov, N. (1995). Peer feedback marking: Developing peer assessment. Innovations in Education and Training International, 32, 175–187.
Falchikov, N. (2003). Involving students in assessment. Psychology Learning and Teaching, 3, 102–108.
Fallows, S., & Chandramohan, B. (2001). Multiple approaches to assessment: reflections on use of tutor, peer and self-assessment. Teaching in Higher Education, 6 (2), 229-246.
Frost, J., & Turner, T. (2005). Learning to Teach Science in the Secondary School (2nd ed.). London: RoutledgeFalmer.
Gehringer, E. F. (2001). Electronic Peer Review and Peer Grading in Computer Science Courses. SIGCSE, 139-143.
Gielen, S., Peeters, E., Dochy, F., Onghena, P., & Struyven, K. (2010). Improving the effectiveness of peer feedback for learning. Learning and Instruction, 20, 304-315.
Hanrahan, S. J., & Isaacs, G. (2001). Assessing self- and peer-assessment: The students' views. Higher Education Research and Development, 20, 53–70.
Harlen, W. (2007). Holding up a mirror to classroom practice. Primary Science Review, 100, 29–31.
Harrison, C., & Harlen, W. (2006). Children's self– and peer–assessment. In W. Harlen (Ed.), ASE Guide to Primary Science Education (pp. 183-190). Hatfield: Association for Science Education.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81–112.

Heinich, R., Molenda, M., Russell, J. D., & Smaldino, S. E. (2002). Instructional media and technologies for learning (7th ed.). Upper Saddle River, NJ: Merrill.
Hoppe, H. U., Pinkwart, N., Oelinger, M., Zeini, S., Verdejo, F., Barros, B., et al. (2005). Building bridges within learning communities through ontologies and "thematic objects". Proceedings of the 2005 Conference on Computer Support for Collaborative Learning (pp. 211–220). Mahwah, NJ: Lawrence Erlbaum.
Jenkins, H., Clinton, K., Purushotma, R., Robison, A. J., & Weigel, M. (2006). Confronting the Challenges of Participatory Culture: Media Education of the 21st Century. Chicago: The MacArthur Foundation.
Kennedy, K. J., Chan, J. K. S., Fok, P. K., & Yu, W. M. (2008). Forms of assessment and their potential for enhancing learning: conceptual and cultural issues. Educational Research for Policy and Practice, 7, 197-207.
Kollar, I., & Fischer, F. (2010). Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction, 20 (4), 344-348.
Lindblom-Ylanne, S., Pihlajamaki, H., & Kotkas, T. (2006). Self-, peer- and teacher-assessment of student essays. Learning in Higher Education, 7 (1), 51-62.
Lindsay, C., & Clarke, S. (2001). Enhancing primary science through self– and paired–assessment. Primary Science Review, 68, 15–18.
Lorenzo, G., & Ittelson, J. (2005). Demonstrating and assessing student learning with E-portfolios. Educause Learning Initiative. Retrieved April 10, 2011, from http://net.educause.edu/ir/library/pdf/ELI3003.pdf
Lu, R., & Bol, L. (2007). A comparison of anonymous versus identifiable e-peer review on college student writing performance and the extent of critical feedback. Journal of Interactive Online Learning, 6 (2), 100-115.
Noonan, B., & Duncan, R. (2005). Peer and self-assessment in high schools. Practical Assessment, Research & Evaluation, 10 (17).
Orsmond, P., Merry, S., & Reiling, K. (1996). The importance of marking criteria in the use of peer assessment. Assessment and Evaluation in Higher Education, 21, 239-249.
Paré, D. E., & Joordens, S. (2008). Peering into large lectures: examining peer and expert mark agreement using peerScholar, an online peer assessment tool. Journal of Computer Assisted Learning, 24, 526–540.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
Peterson, E. R., & Irving, S. E. (2008). Secondary school students' conceptions of assessment and feedback. Learning and Instruction, 18, 238-250.
Ploegh, K., Tillema, H. H., & Segers, M. S. R. (2009). In search of quality criteria in peer assessment practices. Studies in Educational Evaluation, 5, 102-109.
Prins, F. J., Sluijsmans, D. M. A., Kirschner, P. A., & Strijbos, J. W. (2005). Formative peer assessment in a CSCL environment: a case study. Assessment & Evaluation in Higher Education, 30, 417-444.

Purchase, H. C. (2000). Learning about Interface through Peer Assessment. Assessment & Evaluation in Higher Education, 25 (4), 341-352.
Ronen, M., & Langley, D. (2004). Scaffolding complex tasks by open online submission: Emerging patterns and profiles. Journal of Asynchronous Learning Networks, 8, 39–61.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78, 153–189.
Sluijsmans, D. (2002). Establishing learning effects with integrated peer assessment tasks. The Higher Education Academy. Retrieved 15 May 2011 from http://www.palatine.ac.uk/files/930.pdf
Sluijsmans, D., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2002). Peer assessment training in teacher education: effects on performance and perceptions. Assessment and Evaluation in Higher Education, 27 (5), 443–454.
Smith, H., Cooper, A., & Lancaster, L. (2002). Improving the quality of undergraduate peer assessment: A case for student and staff development. Innovations in Education and Teaching International, 39, 71-81.
Strijbos, J. W., & Sluijsmans, D. (2009). Unravelling peer assessment: Methodological, functional and conceptual developments. Learning and Instruction, 20, 265-269.
Sung, Y. T., Chang, K. E., Chiou, S. K., & Hou, H. T. (2004). The design and application of a web-based self- and peer-assessment system. Computers & Education, 45, 187-202.
Swan, K., Shen, J., & Hiltz, S. (2006). Assessment and collaboration in online learning. Journal of Asynchronous Learning Networks, 10 (1), 44–61.
Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68, 249-276.
Topping, K. (2003). Self and Peer Assessment in School and University: Reliability, Validity and Utility. In M. Segers, F. Dochy, & E. Cascaller (Eds.), Optimising new modes of assessment: In search of qualities and standards (pp. 55-87). The Netherlands: Kluwer Academic Publishers.
Topping, K. (2009). Peer assessment. Theory Into Practice, 48, 20-27.
Tsai, C. C., & Liang, J. C. (2009). The development of science activities via online peer assessment: the role of scientific epistemological views. Instructional Science, 37, 293–310.
Tsai, C.-C., Lin, S. S. J., & Yuan, S.-M. (2002). Developing science activities through a network peer assessment system. Computers & Education, 38 (1-3), 241-252.
Tsai, C. C., Liu, E. Z. F., Lin, S. S. J., & Yuan, S. M. (2001). A networked peer assessment system based on a vee heuristic. Innovations in Education and Teaching International, 38, 220-230.
Tseng, S. C., & Tsai, C. C. (2007). On-line peer assessment and the role of the peer feedback: A study of high school computer course. Computers & Education, 49, 1161–1174.

Tsivitanidou, O. E., Zacharia, Z. C., & Hovardas, T. (2011). Investigating secondary school students' unmediated peer assessment skills. Learning and Instruction, 21 (4), 506-519.
Van Dyke, N. (2008). Self- and Peer-Assessment Disparities in University Ranking Schemes. Higher Education in Europe, 33 (2 & 3), 285-293.
Van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H. (2010). Peer assessment as a collaborative learning activity: The role of interpersonal variables and conceptions. Learning and Instruction, 20 (4), 280-290.
Van Steendam, E., Rijlaarsdam, G., Sercu, L., & Van den Bergh, H. (2010). The effect of instruction type and dyadic or individual emulation on the quality of higher-order peer feedback in EFL. Learning and Instruction, 20 (4), 316-327.
Van Zundert, M., Sluijsmans, D. M. A., & Van Merriënboer, J. J. G. (2010). Effective peer assessment processes: research findings and future directions. Learning and Instruction, 20, 270-279.
Walker, A. (2001). British psychology students' perceptions of group-work and peer assessment. Psychology Learning and Teaching, 1, 28-36.
Wasson, B., & Vold, V. (in press). Leveraging New Media Skills for Peer Feedback. The Internet and Higher Education. http://dx.doi.org/10.1016/j.bbr.2011.03.031
Wasson, B., Vold, V., & de Jong, T. (2012). Orchestrating Assessment: Assessing Emerging Learning Objects. In K. Littleton, E. Scanlon, & M. Sharples (Eds.), Orchestrating inquiry learning: contemporary perspectives on supporting scientific inquiry learning (pp. 175–192). London: Routledge.
Wen, M. L., & Tsai, C. C. (2006). University students' perceptions of and attitudes toward (online) peer assessment. Higher Education, 51, 27-44.
Xiao, Y., & Lucking, R. (2008). The impact of two types of peer assessment on students' performance and satisfaction within a Wiki environment. Internet and Higher Education, 11, 186-193.
Yu, F., Liu, Y., & Chan, T. (2005). A web-based learning system for question posing and peer assessment. Innovations in Education and Teaching International, 42 (4), 337-348.
Zhang, J., Cooley, D. H., & Ni, Y. (2001). NetTest: an integrated web-based test tool. International Journal of Educational Telecommunications, 7 (1), 33–35.

Appendix A
The activity sequence of the "Healthy pizza" mission and the corresponding ELOs and mode of work.

Number | Activity | ELO | Mode of work
1 | Design a virtual artefact | My Favourite Pizza (Pizza 1) | Individual
2 | Watch video | Notes On Unhealthy Diet | Group
3 | Read article | Questions About Pizza Benefits | Group
4 | Organize data | Food And Exercise Diary | Individual
5 | Define | Nutrition Table | Individual
6 | Give examples | Nutrient And Energy Calculations | Individual
7 | Give examples | Pizza Ingredient Table | Group
8 | Identify relevant concepts and criteria | Questions About The Food Pyramid | Group
9 | Build a model | Construction Of The Food Pyramid | Individual
10 | Organize data | Daily Calorie Intake | Individual
11 | Interpret data | Evaluate Diet (Food Pyramid) | Individual
12 | Identify relevant concepts and criteria & define | Energy Fact Sheet | Individual
13 | Organize and interpret data | Basal Metabolic Rate (Health Passport) | Individual
14 | Organize and interpret data | Estimated Energy Requirements (Health Passport) | Individual
15 | Identify relevant concepts | Map Of The Digestive System | Individual
16 | Define | Fact Sheet Of One Organ | Individual
17 | Identify relevant concepts | Personal Comments | Group
18 | Draw conclusions | Body Mass Index (Health Passport) | Individual
19 | Draw conclusions | Heart Rate (Health Passport) | Individual
20 | Organize and interpret data | Health Passport | Individual
21 | Design a virtual artefact | My First Healthy Pizza (Pizza 2) | Individual
22 | Reflect on group processes | Methodology Steps | Group
23 | Draw conclusions | Reflection On Importance Of Criteria | Group
24 | Reflect on individual processes | Criteria Table | Individual
25 | Identify (prior) knowledge | Criteria Weight Table | Individual
26 | Identify (prior) knowledge | Criteria Final Table | Individual
27 | Design a virtual artefact | My Optimized Healthy Pizza (Pizza 3) | Individual
28 | Evaluate processes & ELO | Individual Report | Group
29 | Evaluate ELO | Group Report | Group
30 | Build an artefact | Taste Scores | Group
31 | Evaluate processes and ELO | Letter To School Canteen | Individual

Appendix B
The interview protocol

1. a. Did you use the SCYFeedback tool in the SCY-Lab environment?
   b. If not, why didn't you use it?
   c. If yes, what were your impressions after using the SCYFeedback tool?
2. How did you use the SCYFeedback tool?
3. a. If you have given feedback, what kind of feedback did you give?
   b. If you have received feedback, what kind of feedback did you receive?
   c. If you have received feedback, did it help you to improve your work?
   d. If you have not received any feedback, would you have liked to receive some? Do you think that receiving feedback from your classmates would have helped you improve your work? If not, why not; if so, how?
4. a. Given your experience with SCY-Lab, were you satisfied with the SCYFeedback tool? If not, how would you prefer the SCYFeedback tool to be?
   b. What improvements/changes would you recommend for the SCYFeedback tool?
5. Is there something that you liked in the SCYFeedback tool? If so, what is it?
6. What kind of feedback would you ask for through this tool when working in SCY-Lab?
7. What kind of feedback would you like to receive from your classmates through the SCYFeedback tool?
8. Have you received feedback when working in the SCY-Lab environment? If yes, what do you think of the quality of this feedback? Have you used it in any way? If yes, how? If no, why not?
9. What kind of feedback would you prefer to receive?