European Educational Research Journal Volume 13 Number 5 2014 www.wwwords.eu/EERJ http://dx.doi.org/10.2304/eerj.2014.13.5.584

Electronic Rubrics to Assess Competences in ICT Subjects

MANUELA RAPOSO RIVAS, Faculty of Education Sciences, University of Vigo, Spain
MANUEL CEBRIÁN DE LA SERNA, Department of Didactics and School Organisation, University of Málaga, Spain
ESTHER MARTÍNEZ-FIGUEIRA, Faculty of Sports and Education Sciences, University of Vigo, Spain

ABSTRACT Helping students to acquire specific competences is nowadays one of the basic pillars of university teaching; the evaluation and accreditation of these competences is therefore of key importance. In recent years, rubrics and in particular electronic rubrics (e-rubrics) have become an important resource to assess competences and guide students in their learning processes. In this contribution, the authors present a quasi-experimental study that was conducted to explore and evaluate the use of e-rubrics in subjects related to information and communication technology (ICT). Data were collected on how students viewed their use of e-rubrics in self-evaluation and peer evaluation. The data show that the experimental groups using e-rubrics were better evaluated by their professors than the control groups who did not engage in self-evaluation using e-rubrics. The students see e-rubrics as a positive resource because they feel e-rubrics allow for a truly objective evaluation. Additionally, e-rubrics are considered to be helpful in improving learning and self-regulation.

Introduction

According to Jonsson and Svingby (2007), the changes that higher education institutions are experiencing today require new forms of evaluation. Biggs (1996) even argues that evaluation procedures are of key importance for students’ learning and possibly more important than curriculum objectives and teaching methods. In this respect we consider using electronic rubrics (e-rubrics) a valid strategy for guiding and following up on students’ work. E-rubrics can be used on their own as assessment tools, but their results can also feed into other resources like the portfolio presented by Raposo and Martínez (2011). E-rubrics are therefore of importance in the evaluation of students’ work (see also Cebrián Robles et al, 2014).

E-rubrics describe the specific characteristics of a learning outcome (a product, a project or a task) and the possible performance levels at which this may be achieved (Raposo & Martínez, 2011). Before students tackle the task, e-rubrics inform them about what is expected of them and how their performance will be assessed; after completion of the task, they provide students with feedback (Mertler, 2001; Andrade, 2005). E-rubrics can be used for formative as well as summative assessment, and a distinction can be made between scoring rubrics and instructional rubrics (H. Andrade, 2005; M.S. Andrade, 2014). In order to make rubrics really instructional, students must become involved in the learning process and in processes of self-assessment and peer assessment, as well as co-assessment with the teacher. It is desirable that students even participate in the design of rubrics, as suggested by Fallas (2005). A good rubric may familiarise students with the concept of quality. When rubrics are carefully designed, if possible with the collaboration of students, they will serve as useful evaluating guidelines (Andrade & Du, 2005; Kohn, 2006). At the same time, there is an added value of e-rubrics vis-à-vis traditional rubrics.


As Cebrián de la Serna (2007) points out, e-rubrics provide for more interaction and they help students to become more autonomous in evaluating their competences. They also provide teachers with detailed information and thus enable them to identify those competences which are difficult to acquire. Finally, they allow for more immediacy in the process of teacher–student communication.

In recent years, a number of studies have been published focusing on the use of rubrics to assess students’ work (Campbell, 2005; Crotwell et al, 2011) and on how this tool supports the teaching and learning of competences (Andrade & Du, 2005; Raposo & Martínez, 2011). Assessing students’ competences requires profound changes in the fields of teaching, learning and assessment. We have to take a much closer look at how students learn (Cochran-Smith, 2003). In this context, self-regulation of learning has become increasingly important (Carneiro et al, 2007; Carneiro et al, 2011). Students need to develop competences of self-regulation, and e-rubrics are technology-based tools that can enhance this process (Steffens & Underwood, 2008).

Research on self-regulated learning was much influenced by Zimmerman and Schunk’s work (see Bartolomé & Steffens, 2011). Self-regulated learning, or learning to learn, is becoming of crucial importance in Europe; in fact, learning to learn was one of the eight competences listed in the recommendations on Key Competences for Lifelong Learning which were adopted by the European Parliament and the Council in December 2006 (European Parliament, 2006; see also Mooij et al, 2014). Educational research suggests that students who are actively involved in their learning processes, and who are monitoring and regulating their progress towards the envisioned learning outcomes, outperform their peers who are less capable of self-regulating their learning (Rosario et al, 2004; Rosario et al, 2005). According to Zimmerman (2000), self-regulation consists of cycles of (1) forethought, (2) performance or volitional control, and (3) self-reflection. There also seems to be agreement that self-regulation involves ‘cognitive, affective, motivational and behavioural components that provide the individual with the capacity to adjust his or her actions and goals to achieve the desired results in light of changing environmental conditions’ (Zeidner et al, 2000, p. 751).

Most students who enter university are likely to have acquired some strategies for self-regulating their learning, but there is no doubt that these can be improved. The development of self-regulation competences can be fostered through a scaffolding and fading approach, where support is first provided by the teacher and then gradually decreased while the learners’ autonomy increases (Azevedo & Hadwin, 2005; Van de Pol et al, 2010; Mooij, 2014). E-rubrics are of particular value in this process of acquiring and improving self-regulated learning competence because they provide each student with individualised, frequent and detailed feedback, thus allowing students to reflect on and improve their learning and – on a meta-level – the self-regulation of their learning. While the competence of self-regulated learning is largely a domain-general competence, there are always domain-specific competences that need to be acquired in the course of different university programmes.
However, students highly competent in self-regulated learning will find it easier to acquire and improve the domain-specific competences characteristic of their fields of study than those students who are less competent in self-regulated learning.

In summary, this article is based on the idea that rubrics are a valid resource for formative and summative assessment. Using e-rubrics provides an added value. They offer students more interactions with teachers and peers and they provide them with frequent and detailed feedback. They require students to be active participants in the learning process, thus fostering their autonomy (i.e. their learning to learn), or, in other words, their processes of self-regulated learning.

Our study is part of the research project ‘Federated eRubric Service to Assess University Learning’ [1], whose principal objective is to develop, explore and evaluate the educational scope of e-rubrics in different contexts of university teaching with varying subjects, types of attendance, ways of teaching and areas of knowledge (see also Cebrián Robles et al, 2014). In our contribution, we focus on subjects related to information and communication technology (ICT) at education science colleges in three of the participant universities – those of Granada, Málaga and Vigo.


Methodology

Subjects related to ICT in education have a theoretical and practical nature which aims at helping students acquire and improve their digital literacy skills and to use ICT to enhance their learning. Students are also expected to understand the educational implications of using ICT. In teaching our students, we use a didactical approach oriented towards their future teaching practices, so as to relate their learning not only to the degree they intend to acquire, but also to the demands of the labour market.

As the teachers participating in our study based the design and content of their teaching on the same textbook (Cebrián de la Serna & Gallego, 2011), it was easy to select learning content that was used by all teachers. They grounded their methodology on group work characterised by ‘activities’ during the theoretical sessions and on ‘projects’ which had to be presented orally during the practical sessions.

We adopted an exploratory quasi-experimental design (Campbell & Stanley, 1995). While the experimental group used e-rubrics for skills assessment during the course, the control group did not. We also analysed the thoughts and arguments students expressed with respect to self-evaluation and peer evaluation. Finally, we asked students about their use of and satisfaction with e-rubrics.

In our research, we used three types of e-rubrics which targeted different aspects of students’ study activities:
• Content that was specific to the subjects taught in different degree programmes: kindergarten, primary education, social education, industrial engineering and pedagogy.
• Team work, which was common to all the subjects that were part of the research; this competence was considered to be transversal.
• Presentation of projects, to examine oral and written expression, competence in a second language, use of different rhetorical techniques and ability to self-evaluate.

These rubrics were used in the academic year 2011-12 by university staff teaching in pre-primary and primary education degree programmes at the universities of Granada, Málaga and Vigo (Cebrián de la Serna et al, 2011). They were also used by students in the experimental groups for self-evaluation and peer evaluation. The data presented in this paper were obtained solely with content e-rubrics applied to four activities associated with theoretical aspects of the subject areas.

The procedure was as follows: in each session, the teacher demonstrated a specific activity and then proposed a similar one to be completed in teams of five people. After having completed the activity, each team uploaded its results to the university’s e-learning platform. In addition, students in the experimental groups did self-evaluations and evaluated their peers’ responses. Before the end of the session, the teacher provided solutions to the activity and left time for discussion. Students in the control groups used Google Drive to document information concerning their thoughts about their activities, as well as their views on the content they developed and on team work. At the end of each month, each group of students was evaluated by its teachers using e-rubrics. At the end of the course, all students received final marks.

As well as analysing the role of e-rubrics in the assessment of competences and self-regulated learning, we studied the students’ satisfaction concerning their experiences.
Data were analysed in a twofold way:
• Quantitative analyses (frequencies, percentages and measures of central tendency) were carried out on the data from the e-rubrics and on the responses to the questionnaire using the computer program SPSS for Windows.
• Interpretative and content analysis was performed on the qualitative information (Bardin, 1986; Miles & Huberman, 1994).

Results

The data collection, analysis procedure and results were similar in the three universities involved in the study. We will present results of the evaluation conducted at the University of Vigo concerning four activities carried out in class sessions focused on theoretical aspects of the subject area. These were:
• Activity 1: innovative teaching with technology (in September).
• Activity 2: didactic analysis and critical reading of an advertising spot (in October).
• Activity 3: selection and classification of didactic videos (in November).
• Activity 4: assessment of technical and educational multimedia for integration in compulsory education (in December).

Data come from 60 students of the subject ‘New Technologies Applied to Education’, which is part of the pre-primary education degree. Participants were divided into one experimental group (n = 31 students, divided into 7 teams) and one control group (n = 29 students, divided into 6 teams). For each activity, each group’s performance level was assessed by the professors on a six-point scale where 0% = no evidence; 20% = insufficient; 40% = can be improved; 60% = fair; 80% = significant; and 100% = excellent. The maximum score for each activity was 2.5 points, yielding a maximum total score of 10 points (see Figure 1). This score accounted for 10% of the student’s final mark in the subject.
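To make the scoring arithmetic concrete, the following minimal Python sketch (not part of the original study; the level labels follow the scale above, while the example team and its ratings are hypothetical) maps the six performance levels onto the 2.5-point maximum per activity and sums four activity scores into the total out of 10.

```python
# Minimal sketch of the scoring scheme described above (illustrative only):
# each activity is rated on the six-point scale and weighted to a 2.5-point maximum.

# Performance levels expressed as fractions of the activity maximum.
LEVELS = {
    "no evidence": 0.00,
    "insufficient": 0.20,
    "can be improved": 0.40,
    "fair": 0.60,
    "significant": 0.80,
    "excellent": 1.00,
}

MAX_PER_ACTIVITY = 2.5  # four activities -> maximum total of 10 points


def activity_score(level: str) -> float:
    """Convert a performance level into points for one activity."""
    return LEVELS[level] * MAX_PER_ACTIVITY


def total_score(levels_per_activity: list) -> float:
    """Sum the activity scores into the total out of 10."""
    return sum(activity_score(level) for level in levels_per_activity)


if __name__ == "__main__":
    # Hypothetical team rated across the four activities.
    team_levels = ["fair", "significant", "significant", "excellent"]
    print(total_score(team_levels))  # 1.5 + 2.0 + 2.0 + 2.5 = 8.0
```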

Figure 1. Total scores achieved in the four activities.

Table I. Differences between control group and experimental group students.

          Activity 1     Activity 2     Activity 3     Activity 4     Total
Group     CONTR   EXP    CONTR   EXP    CONTR   EXP    CONTR   EXP    CONTR   EXP
n         29      31     29      31     29      31     29      31     29      31
M         1.86    2.39   2.38    2.71   2.52    2.87   2.69    2.87   8.59    9.26
SD        .69     .76    .73     .46    .51     .34    .47     .34    1.59    .89
t             2.78**         2.08*          3.14**         1.69           2.00*

** = p < .01; * = p < .05.
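The t values in Table I can be approximately reproduced from the reported means, standard deviations and group sizes. The short Python sketch below does this with SciPy's independent-samples t-test computed from summary statistics; it is an illustrative check under the assumption of equal variances, not the authors' original SPSS analysis, and small rounding differences are to be expected.

```python
# Illustrative re-computation of the t values in Table I from summary statistics
# (not the original SPSS analysis); a standard independent-samples t-test with
# equal variances assumed.
from scipy.stats import ttest_ind_from_stats

# (label, control mean, control SD, experimental mean, experimental SD)
rows = [
    ("Activity 1", 1.86, 0.69, 2.39, 0.76),
    ("Activity 2", 2.38, 0.73, 2.71, 0.46),
    ("Activity 3", 2.52, 0.51, 2.87, 0.34),
    ("Activity 4", 2.69, 0.47, 2.87, 0.34),
    ("Total",      8.59, 1.59, 9.26, 0.89),
]

n_control, n_experimental = 29, 31

for label, mean_ctrl, sd_ctrl, mean_exp, sd_exp in rows:
    t, p = ttest_ind_from_stats(mean_exp, sd_exp, n_experimental,
                                mean_ctrl, sd_ctrl, n_control,
                                equal_var=True)
    print(f"{label}: t = {t:.2f}, p = {p:.3f}")
```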

Figure 1 shows total scores summed over the four activities for the 13 groups (g1 to g13) assigned to be either experimental (EXP) or control (CONTR) groups. As can be seen, there is a strong tendency for the experimental groups to be better evaluated than the control groups. Three of the experimental groups were given the maximum score, while none of the control groups achieved this. We can also see that even the minimum scores are higher in the experimental groups. The mean evaluation score across the experimental groups was 9.26, compared with 8.59 for the control group (see Table I).

This difference is statistically significant: teacher evaluations for students who used e-rubrics tend to be higher than for those students who did not use e-rubrics. We also looked at teacher evaluations for each of the four activities separately (see Figure 2).

Figure 2. Average group scores achieved in each of the four activities.

Both groups improved from activity 1 to activity 4, but the experimental group outperformed the control group in all four activities. Table I furthermore shows that the differences between the control and experimental groups are statistically significant for three of the four activities as well as for the total score; only for activity 4 is the difference not significant. We could also talk here about practical significance, because students in the experimental groups achieved better final marks in the course than students in the control groups.

Moreover, we compared self-assessments of students in the experimental groups with the assessments provided by their teachers (see Figures 3 to 6). Students and teachers used the same e-rubric relating to the four activities mentioned above. Across the experimental groups, except for activity 1, teachers’ assessment scores were higher than students’ self-assessment scores, although the differences between the two reduced from activity 1 to activity 4. However, only in activities 2 and 3 are the differences between self-evaluation and evaluation by teachers statistically significant (see Table II).

Table II and Figures 3-6 seem to suggest that, as progress was made in completing the activities, there was more agreement in scores; differences between students’ self-assessments and teachers’ assessments became smaller. The extent of these differences varied between student groups, however. In group 8 (G8), for instance, differences between teachers’ assessments and students’ self-assessments were relatively small for all four activities. It might be argued that the students of group 8 were the ones who knew their work and possibilities best, because their self-assessments were very similar to the ones provided by the teachers (Raposo et al, 2012). This may also show that in the course of the trimester, by assessing their activities using e-rubrics, students became increasingly aware of the importance of the indicators for their learning.



Figure 3. Self-evaluation and the teacher’s evaluation on activity 1.

Figure 4. Self-evaluation and the teacher’s evaluation on activity 2.

Figure 5. Self-evaluation and the teacher’s evaluation on activity 3.



Figure 6. Self-evaluation and the teacher’s evaluation on activity 4.

Table II. Differences between experimental group students and teachers.

          Activity 1     Activity 2     Activity 3     Activity 4     Total
Agent     STUD   TEACH   STUD   TEACH   STUD   TEACH   STUD   TEACH   STUD   TEACH
n         31     7       31     7       31     7       31     7       31     7
M         2.22   2.00    1.92   2.36    2.23   2.43    2.28   2.43    8.66   9.21
SD        .25    .65     .21    .24     .25    .19     .19    .19     .57    .91
t             .90            4.82**         2.34*          1.87           1.54

** = p < .01; * = p < .05.
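The pattern of convergence discussed above can be read directly off the means in Table II. The small sketch below (purely descriptive and illustrative, using only the reported means) computes the gap between teachers' and students' mean scores for each activity.

```python
# Gap between teachers' and students' mean scores per activity (Table II);
# a purely descriptive, illustrative computation from the reported means.
student_means = {"Activity 1": 2.22, "Activity 2": 1.92,
                 "Activity 3": 2.23, "Activity 4": 2.28}
teacher_means = {"Activity 1": 2.00, "Activity 2": 2.36,
                 "Activity 3": 2.43, "Activity 4": 2.43}

for activity, student_mean in student_means.items():
    gap = teacher_means[activity] - student_mean
    print(f"{activity}: teacher - student = {gap:+.2f}")

# Output: -0.22, +0.44, +0.20, +0.15 -- students rated themselves higher only in
# activity 1; from activity 2 onwards the gap is positive and shrinks.
```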

Discussion

The main purpose of our study was to explore a methodology based on assessment by e-rubrics of both general and specific competences to be acquired in ICT-related subjects. By explicitly defining criteria for assessment, e-rubrics allow students to become aware of these criteria and to internalise them. Students are thus enabled to assess their own activities and to reflect upon their execution and on whether or not they are successful in achieving the desired competences (Kan, 2007). This is an important aspect of self-regulated learning. In our study, students in the experimental groups seemed to become more aware than the control students of the indicators of the competences which were assessed.

Moreover, according to Raposo et al (2012) and Martínez et al (2012), students need to appreciate assessment as an integral part of learning. Assessment thus becomes a participatory process, which seems to be facilitated by using e-rubrics, particularly because the indicators and criteria of the competences to be assessed are explicitly defined. In the long run, students have to become autonomous in using assessment instruments such as e-rubrics, which will help them to regulate their learning process. Our results thus confirm findings by Stevens and Levi (2005), who concluded that the use of e-rubrics allows teachers to assist students to achieve a greater understanding of, and autonomy in, the self-regulation of their learning processes.

In our study, we explored the usability of e-rubrics as a tool for assessing general competences exhibited in individual and group work, as well as subject-specific competences. The outcomes support our belief that working with e-rubrics does help students to improve their competence for self-regulation as well as domain-specific competences. We are aware that not all differences in these competences between those students who used e-rubrics during the course (experimental groups) and those who did not (control groups) were statistically significant.

This means that our exploratory results have to be viewed with some caution and that further studies are necessary to clearly demonstrate the effects which we believe will occur with the use of e-rubrics for self-assessment and peer assessment. However, our results are also in line with those of Biggs (2005), who argues that self- and co-assessment not only enhance the acquisition of knowledge but also provide students with the opportunity to improve their meta-cognitive processes. Here we should bear in mind that self-assessment may also serve to help students to learn how to work in groups (Bryan, 2006).

As for students’ views on the use of e-rubrics, we already presented some results in previous articles (Bartolomé et al, 2012; Raposo & Gallego, 2012). Our main findings were that students were positive concerning the use of e-rubrics. They considered assessment using e-rubrics to be ‘fairer’, more ‘objective’ and ‘more reflective’ than traditional forms of assessment. In addition to the impact their experiences with the tool had on their personal and pre-professional learning, we observed some more specific aspects:
• In self-assessment, characteristics such as ease of use, simplicity and objectivity are appreciated, as well as the potential of these characteristics to improve learning and self-education, to increase self-esteem and to strengthen attitudes like responsibility.
• In peer assessment, learning ‘from’ and ‘with’ others is fostered. It is also expected that by engaging in peer assessment, participation in class will improve.

Another outcome which we clearly observed, and about which we are very pleased, was that our study facilitated the creation of a collaborative group of ICT teachers who are willing to open their classroom doors, take part in curricular decisions and the development of methodological designs, create didactic materials and try out new ways of competence evaluation, thereby overcoming barriers of space and attitude.

Notes

[1] The project ‘Federated eRubric Service to Assess University Learning’ (2010-13) was funded within the ‘Plan Nacional I + D + i 2010-2013’ (project EDU2010-15432), Resolution of 30 December 2009 (BOE [Boletín Oficial del Estado – Spanish official state bulletin] of 31 December 2009). The universities of Málaga (coordinator), Barcelona, Granada, the Basque Country and Vigo, and the Polytechnic University of Madrid take part in it. For further details, see http://erubrica.org

References

Andrade, H. (2005) Teaching with Rubrics: the good, the bad, and the ugly, College Teaching, 53(1), 27-30. https://webmail.csuchico.edu/vpaa/assessment/documents/AndradeTeachingWithRubrics.pdf
Andrade, H. & Du, Y. (2005) Student Perspectives on Rubric-referenced Assessment, Practical Assessment, Research & Evaluation, 10(3), 1-11. http://pareonline.net/pdf/v10n3.pdf
Andrade, M.S. (2014) Dialogue and Structure: enabling learner self-regulation in technology-enhanced learning environments, European Educational Research Journal, 13(5), 563-574. http://dx.doi.org/10.2304/eerj.2014.13.5.563
Azevedo, R. & Hadwin, A.F. (2005) Scaffolding Self-regulated Learning and Meta-cognition: implications for the design of computer-based scaffolds, Instructional Science, 33(5-6), 367-379. http://dx.doi.org/10.1007/s11251-005-1272-9
Bardin, L. (1986) El análisis de contenido [Content Analysis]. Madrid: Akal.
Bartolomé, A., Martínez, M.E. & Tellado, F. (2012) Análisis comparativo de metodologías de evaluación formativa: diarios personales mediante blogs y autoevaluación mediante rúbricas [Comparison of Formative Assessment Methods: diaries in blogs and self-assessment with rubrics], in C. Leite & M. Zabalza (Eds) Ensino Superior: Inovação e qualidade na docência, pp. 417-429. Porto: Centro de Investigação e Intervenção Educativas (CIIE).
Bartolomé, A. & Steffens, K. (2011) Technologies for Self-regulated Learning, in R. Carneiro, P. Lefrere, K. Steffens & J. Underwood (Eds) Self-regulated Learning in Technology Enhanced Learning Environments: a European perspective, pp. 21-32. Rotterdam: Sense Publishers.


Biggs, J.B. (1996) Assessing Learning Quality: reconciling institutional, staff and educational demands, Assessment and Evaluation in Higher Education, 21(1), 5-15. http://dx.doi.org/10.1080/0260293960210101
Biggs, J.B. (2005) Calidad del aprendizaje universitario [Quality of University Learning]. Madrid: Narcea.
Bryan, C. (2006) Developing Group Learning through Assessment, in C. Bryan & K. Clegg (Eds) Innovative Assessment in Higher Education. New York: Routledge.
Campbell, A. (2005) Application of ICT and Rubrics to the Assessment Process Where Professional Judgment is Involved: the features of an e-marking tool, Assessment and Evaluation in Higher Education, 30(5), 529-537. http://dx.doi.org/10.1080/02602930500187055
Campbell, D. & Stanley, J. (1995) Diseños experimentales y cuasiexperimentales en la investigación social [Experimental and Quasi-experimental Designs in Social Research], 7th edn. Buenos Aires: Amorrortu Editores.
Carneiro, R., Lefrere, P. & Steffens, K. (2007) Self-regulated Learning in Technology Enhanced Learning Environments: a European review. http://hal.archives-ouvertes.fr/docs/00/19/72/08/PDF/STEFFENSKARL-2007.pdf
Carneiro, R., Lefrere, P., Steffens, K. & Underwood, J. (Eds) (2011) Self-regulated Learning in Technology Enhanced Learning Environments: a European perspective. Rotterdam: Sense Publishers. http://dx.doi.org/10.1007/978-94-6091-654-0
Cebrián de la Serna, M. (2007) Buenas prácticas en el uso del e-portafolio y e-rúbrica [Best Practices in the Use of e-Portfolio and e-Rubric], in A. Cid, M. Raposo & A. Pérez (Eds) El practicum: buenas prácticas en el Espacio Europeo de Educación Superior [The Practicum: good practice in the European Higher Education Area]. Santiago de Compostela: Tórculo.
Cebrián de la Serna, M. & Gallego, M.J. (2011) Procesos educativos con TIC en la sociedad del conocimiento [Educational Processes with ICT in the Knowledge Society]. Madrid: Pirámide.
Cebrián de la Serna, M., Martínez, M.E., Gallego, M.J. & Raposo, M. (2011) E-rúbrica para la evaluación: una experiencia de colaboración interuniversitaria en materia TIC [e-Rubric for Evaluation: an experience of interuniversity collaboration in ICT]. Paper presented at 2nd Congreso Internacional de Uso y Buenas Prácticas con TIC, Málaga, Spain, 14-16 December. http://erubrica.uma.es/wp-content/uploads/2011/06/Comunicaci%C3%B3n.pdf
Cebrián Robles, D., Serrano Angulo, J. & Cebrián de la Serna, M. (2014) Federated eRubric Service to Facilitate Self-regulated Learning in the European University Model, European Educational Research Journal, 13(5), 575-583. http://dx.doi.org/10.2304/eerj.2014.13.5.575
Cochran-Smith, M. (2003) Teaching Quality Matters, Journal of Teacher Education, 54(2), 95-98. http://dx.doi.org/10.1177/0022487102250283
Crotwell, B., Strickland, D., Johnson, R. & Payne, J. (2011) Development of a ‘Universal’ Rubric for Assessing Undergraduates’ Scientific Reasoning Skills using Scientific Writing, Assessment and Evaluation in Higher Education, 36(5), 509-547. http://dx.doi.org/10.1080/02602930903540991
European Parliament (2006) Recommendation of the European Parliament and of the Council of 18 December 2006 on Key Competences for Lifelong Learning. http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:394:0010:0018:en:PDF
Fallas, I.V. (2005) El uso de rúbricas para la evaluación de cursos en línea [The Use of Rubrics for Evaluating Online Courses]. Paper presented at Conferencia Internacional de Educación a Distancia, Puerto Rico, 4-6 August. http://www.uned.ac.cr/educacio/documents/Articulo_de_Rubricas.pdf
Jonsson, A. & Svingby, G. (2007) The Use of Scoring Rubrics: reliability, validity and educational consequences, Educational Research Review, 2, 130-144. http://dx.doi.org/10.1016/j.edurev.2007.05.002
Kan, A. (2007) An Alternative Method in the New Educational Program from the Point of Performance-based Assessment: rubric scoring scales, Educational Sciences: Theory & Practice, 7(1), 144-152.
Kohn, A. (2006) The Trouble with Rubrics, English Journal, 95(4), 12-15. http://dx.doi.org/10.2307/30047080
Martínez, M.E., Tellado, F., Raposo, M. & Doval, M.I. (2012) Evaluación de los aprendizajes y del trabajo en grupo utilizando rúbricas: una experiencia innovadora intercampus [Assessing learning and group work using rubrics: an innovative intercampus experience], in AA.VV. (Ed.) Xornada de Innovación Educativa 2012. Vigo: University of Vigo. http://webs.uvigo.es/xie2012/index_es.html
Mertler, C.A. (2001) Designing Scoring Rubrics for Your Classroom, Practical Assessment, Research & Evaluation, 7(25). http://PAREonline.net/getvn.asp?v=7&n=25
Miles, M. & Huberman, A.M. (1994) Data Management and Analysis Methods, in N. Denzin & Y. Lincoln (Eds) Handbook of Qualitative Research. London: Sage.


Mooij, T. (2014) Towards Optimal Education Including Self-regulated Learning in Technology-enhanced Preschools and Primary Schools, European Educational Research Journal, 13(5), 529-552. http://dx.doi.org/10.2304/eerj.2014.13.5.529
Mooij, T., Steffens, K. & Andrade, M.S. (2014) Self-regulated and Technology-enhanced Learning: a European perspective, European Educational Research Journal, 13(5), 519-528. http://dx.doi.org/10.2304/eerj.2014.13.5.519
Raposo, M. & Gallego, M.J. (2012) Evaluación entre pares y autoevaluación basadas en rúbricas [Peer Assessment and Self-assessment Based on Rubrics], in C. Leite & M.A. Zabalza (Eds) Ensino Superior: Inovação e qualidade na docência. Porto: Centro de Investigação e Intervenção Educativas (CIIE).
Raposo, M. & Martínez, M.E. (2011) La rúbrica en la enseñanza universitaria: un recurso para la tutoría de grupos de estudiantes [The Rubric in University Education: a resource for mentoring groups of students], Revista Formación Universitaria, 4(4), 19-28. http://www.scielo.cl/scielo.php?pid=S0718-50062011000400004&script=sci_arttext
Raposo, M., Martínez, M.E., Tellado, F. & Doval, M.I. (2012) La evaluación de la mejora del aprendizaje y del trabajo en grupo mediante rúbricas [Assessing the improvement of learning and group work through rubrics], in C. Leite & M.A. Zabalza (Eds) Ensino Superior: Inovação e qualidade na docência. Porto: Centro de Investigação e Intervenção Educativas (CIIE).
Rosario, P., Mourao, R., Trigo, J., Nuñez, J.C. & Gonzalez-Pienda, J.A. (2005) SRL Enhancing Narratives: Tests’ (Mis)adventures, Academic Exchange Quarterly, 9(4), 73-77.
Rosario, P., Núñez, J. & González-Pienda, J. (2004) Stories that Show How to Study and How to Learn: an experience in the Portuguese school system, Electronic Journal of Research in Educational Psychology, 2(1), 131-144.
Steffens, K. & Underwood, J. (2008) Self-regulated Learning in a Digital World, Technology, Pedagogy and Education, 17(3), 167-170. http://dx.doi.org/10.1080/14759390802383736
Stevens, D.D. & Levi, A.J. (2005) Introduction to Rubrics. Sterling, VA: Stylus Publishing.
Van de Pol, J., Volman, M. & Beishuizen, J. (2010) Scaffolding in Teacher–Student Interaction: a decade of research, Educational Psychology Review, 22, 271-296. http://dx.doi.org/10.1007/s10648-010-9127-6
Zeidner, M., Boekaerts, M. & Pintrich, P. (2000) Self-regulation: directions and challenges for future research, in M. Boekaerts, P. Pintrich & M. Zeidner (Eds) Handbook of Self-regulation. New York: Academic Press. http://dx.doi.org/10.1016/B978-012109890-2/50052-4
Zimmerman, B.J. (2000) Attaining Self-regulation: a social cognitive perspective, in M. Boekaerts, P. Pintrich & M. Zeidner (Eds) Handbook of Self-regulation, pp. 749-768. New York: Academic Press.

MANUELA RAPOSO RIVAS* is a lecturer at the University of Vigo, Spain. She has a PhD in science education (University of Vigo) and a university degree in pedagogy (University of Santiago de Compostela). Her research and teaching are related to the didactic use of information and communication technologies, practice in pre-professional teacher education, and both quality and innovation in education. She participates in different research projects and research networks at national and regional level. She was a member of the Working Group on Teacher Training and Higher European Education in the Galician University Quality System Agency (ACSUG) (2004-2006). She belongs to the committees of different indexed journals in the field of higher education and ICT. Correspondence: [email protected]

MANUEL CEBRIÁN DE LA SERNA is a professor in education at Málaga University, Spain. He is the director of the research group Gtea (Globalization, technology, education and learning) (http://gtea.uma.es). He was the director of the Institute of Science Education from 1994 to 1998 and of the Institute of Educational Innovation from 1998 to 2000. From 2000 to 2003 he was the director of virtual education at the University of Málaga. Furthermore, he was National Assessment Evaluator for Research and Development for ANEP (Agencia Nacional de Evaluación y Prospectiva [National Evaluation and Foresight Agency]) and other agencies in Spain. He has received the award ‘Order of Puerto Ayacucho’, granted by the City Council of Puerto Ayacucho, the capital of Amazonas State. He is a promoter of educational innovation with the Iberoamerican Network of Higher Education. Moreover, he was the manager of a major production company involved with audiovisual and multimedia production and with technological developments known as ‘Open Federated Environment’.

ESTHER MARTÍNEZ-FIGUEIRA is a temporary lecturer with a PhD in didactics, school organisation and research methods, working in the Department of Didactics, School Organisation and Research Methods at the Faculty of Sports and Education Sciences of the University of Vigo (Pontevedra campus, Spain). Her research, teaching and lecturing deal with inclusive education, the practicum and educational technology. She has published numerous research articles and books and is a member of national networks and research groups such as the CIES Network (Collaborative Network for Educational and Social Inclusion) at the University of Vigo; the e-rubric group at the University of Málaga; the e-portfolio group at the Universitat Oberta de Catalunya; RINEFCISOC (Research Network on Education and Training for Citizenship and Knowledge Society); and GIE (Group of Educational Research).

*Contact author
