Vol. 12 (1), April 2014, 81-98. ISSN: 1887-4592. Date received: 30-12-2013. Acceptance date: 02-04-2014.
Evolution in the Design and Functionality of Rubrics: from “Square” Rubrics to “Federated” Rubrics
Manuel Cebrián de la Serna,
Juan José Monedero Moya
University of Malaga, Spain

Abstract

The assessment of learning remains one of the most controversial and challenging tasks for teachers. Among recent solutions, methods and techniques such as eRubrics have emerged to help address this situation. Since teaching contexts differ and no single solution fits all cases, specific measures must be adapted to each context, where teachers draw on institutional support and communities of practice. This paper presents the evolution of an eRubric service [1] which started from a first experience with paper rubrics and, over time and after several educational innovation and R&D&I [2] projects, has evolved thanks to the support of a community of practice [3] and the exchange of experiences between teachers and
researchers. This paper presents the results and functionalities of the eRubric service achieved up to the date of publication.
Key words: Performance-based Assessment, Scoring Rubrics, Rubric Design, Formative Assessment, Evaluation Methods, Assessment Tools, Federated Systems, Reliability, Higher Education.
Introduction

A number of studies report a positive relationship between assessment and the improvement of learning (Falchikov and Boud, 1989; Falchikov and Goldfinch, 2000; Brown and Glaser, 2003; Falchikov, 2005; López Pastor, 2009; Blanco, 2009; Sánchez González, 2010), especially when the formative assessment approach relies on “a model of collaboration” in which teachers communicate closely with their students in order to share criteria and a common understanding of indicators and evidence of learning. Both teachers and students share the responsibility of selecting and applying criteria (Falchikov, 1986). Here, educational practice focuses more on how learning occurs than on teaching objectives and achievements. Likewise, the focus is on interpreting and understanding the assessment of learning, in addition to raising the level of results. In his well-known book on innovative teachers, Bain (2007: 169) stresses: “Extraordinary teachers use scores to help students learn, not only to classify and prioritise their efforts”. Clearly, closer and more constant communication between teachers and students about learning (based on the indicators, evidence and criteria applied to tasks) leads to higher achievement than if teachers only looked at test results at the end of the learning process. In either case, “the validity of a learning assessment will depend on the extent to which the interpretation and use of such assessment reflects learning itself” (Hargreaves, 2007).

This approach may be difficult to apply in certain educational contexts, due to the high number and heterogeneity of students per group. However, rubrics have proven to partially mitigate these issues, while offering a very practical and successful methodology for self-assessment (Overveld and Verhoeff, 2013; Panadero and Alonso-Tapia, 2013; Martínez-Figueira, Tellado-González and Raposo-Rivas, 2013), as well as for peer assessment and collaborative and interdisciplinary work (Serrano, Hernández, Pérez and Biel, 2013; Raposo, Cebrián and Martínez, 2014). Rubrics are also used successfully in distance learning programmes involving technologies, and are an essential component of working with ePortfolios (Moril, Ballester and Martínez, 2012; Cebrián, 2011a; 2011b). Their benefit lies in gathering evidence for students’ ePortfolios and conducting further analysis and evaluation with teachers, thus improving teacher-student communication.

Traditionally, rubrics have been tools and techniques for evaluation, not necessarily based on competences. Today, the rubric-based assessment approach is
widespread. As will be discussed below, rubrics mainly consist of weighted indicators and evidence to which criteria are assigned. As a methodology, rubrics are applied for many purposes, educational levels and forms of teaching (distance learning, formative assessment, collaboration in evaluation, etc.). There is now extensive literature on educational research in this area (Andrade, 2005; Jonsson and Svingby, 2007; Luxton-Reilly, 2009; Panadero and Jonsson, 2013), but given its long history, it is worth stressing that educational contexts and practices have changed, especially since the incorporation of new technologies that allow for greater interactivity between users and resources, better socialisation of learning (e.g. the Internet, social networks, etc.), increased user mobility (e.g. mlearning) and, overall, new opportunities and pedagogical models.

However, educational innovation has not always paralleled technological innovation, as the two often evolve at different speeds. Sometimes it is educational innovation that raises technological needs and results in innovative resources and tools; at other times it is technological innovation that leads to new ways of communicating in class and new models of teaching and learning. The speed of technological innovation does not allow much time for experimentation and research: by the time evaluation results come in, teachers are already using newer technological solutions. Thus, in order to establish a stable and fruitful balance between the two kinds of innovation, social practice requires permanent changes in the use of technological innovation, as well as support from online communities of practice (Vásquez, 2011). Currently, an educational tool with no community of practice to test, evaluate and guide its functionality will fail both pedagogically and technologically. Innovation and improvement should foster patterns of communication and exchange between technology and education, however far apart their areas of knowledge may be. Both researchers and teachers must adopt an interdisciplinary approach in their daily practice.

Electronic Rubrics

There are already digital rubrics (eRubrics) on the market which reproduce the design of traditional paper rubrics. eRubrics have undoubtedly allowed for greater user interactivity and communication, but they emerged from the same pedagogical approach as traditional or “square” rubrics: both designs are based on tables or grids. The most important advantages of eRubrics and ePortfolios can be summarised as follows (Cebrián, 2011a; 2011b):

• More autonomy for students, who can see at any time which competences they have acquired and which remain to be acquired.
• A more objective definition of criteria, with which students become familiar from the beginning of the academic year.
• Better-informed teachers, able to spot which competences are difficult to acquire, for groups or individuals (e.g. they can check which competence students struggle with the most, or with which competence a particular student struggles).
• Faster republishing and changing of content in eRubrics by teachers.
• More immediacy in communication and student-teacher assessment.
• More opportunities for teachers to collaborate on the same eRubric or course, without time or space restrictions.
• A faster and more automated evaluation.
• A gradual, cumulative and constructive organisational structure that allows students to progress at their own pace.

A “square” eRubric can start from the design of one or more tasks, or from the conception of one or more competences. Either way, it usually contains a set of elements related to a learning objective, as shown in Image 1 (a schematic sketch of this grid structure follows the figure). The first column usually shows task categories or competence indicators. Each competence is assigned a number of levels of performance and achievements, with a range of criteria under which evidence is shown. Likewise, learning evidence appears in the description of the specific responses (e.g. behaviours, products, thoughts, cognitive processes, etc.) a student gives when carrying out the programme.
Source: created by the authors of this research.
Image 1. Example of a square eRubric in the “Agora Virtual” web tool.
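To make the grid structure concrete, here is a minimal sketch in Python (our own illustration, not the Agora Virtual or Gtea implementation; all names and levels are invented). It shows the defining constraint of the square design: every indicator must share the same fixed set of performance levels, and scoring is ordinal.

```python
from dataclasses import dataclass

@dataclass
class SquareRubric:
    indicators: list[str]          # rows: task categories or competence indicators
    levels: list[str]              # columns: the same performance levels for every row
    descriptors: list[list[str]]   # descriptors[row][col]: evidence expected in each cell

    def score(self, achieved_level: dict[str, int]) -> int:
        """Ordinal scoring: each indicator contributes the index of the level
        reached, which silently assumes all lower levels were also met."""
        return sum(achieved_level[i] for i in self.indicators)

rubric = SquareRubric(
    indicators=["Planning", "Data collection"],
    levels=["Novice", "Competent", "Proficient"],
    descriptors=[
        ["No plan", "Partial plan", "Complete, justified plan"],
        ["No data", "Incomplete data", "Complete, documented data"],
    ],
)
print(rubric.score({"Planning": 2, "Data collection": 1}))  # -> 3
```

The `score` method makes visible the limitation discussed in the following sections: reaching level 2 silently asserts that levels 0 and 1 were achieved, and every row must have exactly as many cells as there are columns.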
Despite the indisputable advantages of digital rubrics, they have not yet incorporated the ongoing improvements that arise from teaching practice when facing the limitations of different educational contexts. Next, we discuss the main limitations encountered by the authors of this paper when trying to improve the eRubric service in a large user community.
Reasons for Changing the Design of Gtea Rubrics

Since 1997, we have worked to improve externships through educational innovation
projects, within a consolidated research group, Gtea [4], where we applied different methodologies. We started with portfolios and then moved to rubrics, as a way to apply the same criteria and evidence agreed upon by the authors of the Practicum (Cebrián and Monedero, 2009). The first rubrics had a square design on paper, then in Excel and in databases similar to those used by other authors (Campbell, 2008), until we ultimately created an ePortfolio with an eRubric, called AgoraVirtual (Image 1). Here we realised the advantages of a digital format, i.e. the possibility of integrating rubrics into a digital platform; but we could also see the limitations of a square design in responding to developments and changes in the pedagogical model (Cebrián, Raposo and Accino, 2008; Cebrián and Accino, 2009). From these early experiences, we accumulated over the years a number of reasons for opting for a re-designed eRubric, one that is more flexible in teaching practice and better supported by federation technology. These two aspects are considered separately below, although we focus on the former, as it is the main objective of the present work.

The Reality of Teaching Practice Demands More Flexible and Personalised Evidence

In different teaching contexts, evidence of learning is acquired by students at different paces (depending on their learning style, interests, opportunities, etc.) and not necessarily at the pace established by the teacher in the square eRubric. In other words, we soon observed the need for greater flexibility in the collection and presentation of learning evidence by students, in order to personalise learning. The gradual, step-by-step design of learning evidence was unrealistic and unreliable.

Different Value and Criteria of Evidence

In square eRubrics, each piece of evidence may have a different value and weight, yet all are forced to follow an order according to this value, on an ordinal scale in a grid. This would not be a problem if the presentation of learning evidence were not so closely tied to learning criteria; in reality, each piece of evidence is acquired by each student according to different success criteria. When a level of evidence is scored as valid, this implies that all previous evidence has been successfully acquired, when we know this is neither true nor possible, as each piece of evidence is usually demonstrated through different achievements and, therefore, under multiple criteria.

Limitations on the Number of Indicators and Evidence for Each and Every Competence

The same may be said about the weighted value of criteria and the number of boxes assigned to each competence and each indicator. For instance, when creating a square rubric, we are required to choose a number of evidence items and/or indicators from the start, forcing all the other competences and/or indicators to have the same number (see Image 1 as an example).

“Banking” Education versus the Constructivist Model of Learning

Learning occurs when there is a change of perspective, beliefs or understanding and an improvement in the interpretive capacity of a student, in a situation prompted by a teacher using the context and resources available. In a teaching context where
eRubrics and evidence are used as a technique to achieve greater objectivity in assessing learning achievement, considering competences to be met by simply adding up evidence in a grid is like conceiving of learning as a “bank” balance of evidence. In other words, learning processes are not the quantitative, ordered sum of evidence. This is why, when an eRubric is poorly designed, students are tempted to add far more evidence than the previous boxes required.

Different Paces, Preferences and Interests when Presenting Evidence

The above limitations of square eRubrics, including the impossibility of weighting evidence with different values and criteria, require all students to follow the same orderly process when presenting their evidence. This is problematic, as each teaching context contains particular aspects that either prevent or facilitate students’ achievement of evidence, not to mention the individuality of each learning process, the flexibility of the learning pace and the emotional journey each student embarks on when facing a problem, task, exercise, project, teaching method, etc. We cannot guess the exact order in which learning evolves, let alone the preferred pace, interests and learning styles of students.

Reasons from Evaluators

Breaking evidence up into minimal units, rather than tying it to indicators or competences, allows such evidence to be distributed among teachers and experts for evaluation. Thus, each piece of evidence can easily be assessed by a teacher, something highly complicated in a square eRubric, where the fixed order of evidence prevents teachers from assessing it independently.

The Required Numerical Proportion of Evidence

Square eRubrics sometimes falsely start from 0. If students do not present anything, they should not be given a 0 but should rather be marked as “not submitted”. And even if they present something, it can hardly be worth nothing, as the effort to do the job should at least be considered, unless we want to use this number as a punishment rather than as information to assist learning. In any case, when the smallest value is assigned to the first box (e.g. from 1 up to the maximum value assigned to the last box), the resulting proportion in most square eRubrics necessarily uses similar numerical intervals. What seems here to be a “mathematical virtue” of assigning equal intervals is yet another limitation when assigning values to individual evidence. The values do not need to be equivalent; rather, they should at least be weighted. Nor is there a continuous numeric scale, but rather a categorical, ordinal one: in square rubrics, students move from one category to another without the option of intermediate values between two adjacent categories (a sketch pulling these scoring arguments together appears at the end of this section).

Technological Attributes for Each Piece of Evidence

Some programmes and teaching contexts require technology able to assign attributes to evidence. For instance, it is possible to assign a different geolocation to each piece of evidence, indicator or competence separately when we are trying to establish a learning process in an mlearning environment. As shown in Image 4, an eRubric has been designed with
different indicators and evidence for learning through a walk in the park. On this occasion, students are required to collect evidence from the field in a flexible way, rather than according to a predetermined square rubric. The setting essentially conditions the collection of evidence: it depends on the geographical location (e.g. a sunny or rainy day, the season, etc., which can make a real difference), and each piece of evidence may require a different amount of time than expected or planned. The same thing happens with learning, where students do not always achieve evidence in the same order, time and pace. A rigidly structured and ordered planning of evidence is, therefore, an unrealistic design.

Reasons due to Differences between the Theoretical and Practical Dimensions, as in Professional Contexts

University students aim to acquire theoretical knowledge and practical skills. However, skills usually emerge in professional contexts, and so a number of subjects have been designed for this purpose under the name “Practicum” and, more recently, “externship”. We can simulate professional contexts at university. In particular, some of these processes or specific elements (e.g. getting to know principles, understanding and approaching theories, developing calculation processes, language acquisition, mastery of vocabulary, getting to know values and appropriate attitudes, legislation, researching information for externships, etc.) will later help students acquire competences in professional environments. However, the two areas are very different, and so eRubrics must be different as well. The reality of professional contexts is so unpredictable, unique and distinctive that starting from a square eRubric design is, to say the least, absurd.

To conclude the arguments about the limitations of square eRubrics, it seems clear that, except in teaching situations where tasks follow a necessarily orderly process (and even in such cases), it will always be more interesting to have flexible teaching tools with which to manage evidence according to each case or reason (pedagogical, psychological, emotional, of opportunity, of unforeseen circumstances, etc.). Undoubtedly, students will find it easier to address certain evidence in a certain order, and will be more likely to succeed with evidence that represents a minor or major challenge, etc. According to the current literature on self-regulation (Carneiro et al., 2011; Cebrián, Serrano and Cebrián, 2014), managing resources and challenges plays an important role in learning through eRubrics (Panadero, Alonso-Tapia and Reche, 2013). Students must learn to manage their learning process independently, to be masters of their own learning and to commit themselves to their education, establishing self-learning priorities and strategies (Carneiro, Lefrere, Steffens and Underwood, 2011; Panadero and Alonso-Tapia, 2011). Likewise, teachers need flexible tools to design their teaching in very different, unpredictable and specific contexts. When it comes to assessing learning, the differences are even greater, as assessment combines specific elements of particular contexts, available resources and the heterogeneity of student learning styles.
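Before turning to federation, a short sketch (again our own illustration, not the Gtea code) pulls together the scoring arguments above: each piece of evidence carries its own weight and its own criteria scale, no ordering between pieces of evidence is assumed, and “not submitted” is kept distinct from a score of 0.

```python
NOT_SUBMITTED = None  # absence of evidence, deliberately distinct from a 0 score

def weighted_score(evidence):
    """evidence maps a name to (weight, score or NOT_SUBMITTED).
    Only submitted evidence enters the weighted mean; order is irrelevant."""
    submitted = [(w, s) for w, s in evidence.values() if s is not None]
    if not submitted:
        return None  # nothing graded yet: report "not submitted", not 0
    total_weight = sum(w for w, _ in submitted)
    return sum(w * s for w, s in submitted) / total_weight

print(weighted_score({
    "field notes":  (3.0, 7.5),            # weight 3, scored 7.5 on its own scale
    "peer review":  (1.0, NOT_SUBMITTED),  # pending: ignored, not penalised
    "final report": (2.0, 9.0),
}))  # -> 8.1
```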
Why Federate eRubrics?

We should not confuse the two aspects and dimensions behind the expression “federated eRubric”. The latter term is of a pedagogical nature: the eRubric design (why square, and why not?), which has been addressed above. The former aspect is
technological: why federation technology? Here we give three brief reasons, so as not to dwell on what is not the focus of this paper; there is, however, literature on federated tools applied to education (Accino and Cebrián, 2009; Cebrián and Cebrián, 2013; Cebrián, Serrano and Cebrián, 2014). A federation is a technological system within which partner institutions trust one another and share information on user identity, in order to provide authentication for the different services associated with it. This benefits users, who only have to identify themselves once to access the tools and services offered by federated institutions (a toy sketch of this trust model follows the arguments below). This technology gives federated eRubrics functionality and benefits that go beyond their design (square versus non-square). The following three arguments and use cases illustrate these advantages:

• Argument 1. The emergence of the European Higher Education Area and, more recently, the Common Space of Higher Education for Latin America and the Caribbean represents a whole new scenario for exchanging information and data and for user mobility.

- Use case: Students take part in national mobility programmes (SICUE, mobility within Spanish universities) as well as international ones (Erasmus, for graduate and postgraduate students). They are therefore required to use services outside their home institutions, where they are not registered. When a teacher uses a Gtea eRubric, students can access all the services of this tool, whether they are registered in this teacher/administrator’s institution or in a different one, through the RedIRIS SIR [5]. Foreign students do so through the eduGAIN identity service [6]. Without federation, students would need multiple access accounts (and would have to distribute their personal data) among the various institutions. Users belonging to Latin American institutions, for example in Mexico, can access through other identity services, such as SINED (National System of Distance Education) [7], which has its own eRubric service, so eRubric contents can be exported between both services (SINED and Gtea).

• Argument 2. Internationalisation is unquestionably an important indicator in calls for papers and projects. The world is becoming increasingly globalised and digitised, facilitating collaboration and promoting the exchange of goods and services. University institutions feel the need to share projects, whether academic, administrative or research in nature, with other institutions inside and outside their home countries, in order to facilitate the flow of information and data between national and international researchers.

- Use case: The number of academic projects between different institutions is growing, such as the recent MOOC platforms, where students can access massive courses to complement their education. During their externships, the federation of these platforms spares students many identity authentication problems and grants them access to resources, repositories and MOOCs. These platforms are also becoming more flexible and interactive with other tools, such as the Gtea eRubric, video annotations, etc. (see the annotation tools) [8]. Moreover, eRubrics have been integrated into the annotation editor within the edX MOOC platform [9]. Additionally, the eRubric is a useful application for self-assessment and peer assessment by MOOC users when evaluating materials, activities and exercises.
- Use case: When several teachers from different institutions wish to share their eRubric contents in order to collaborate and exchange experiences, good practice, student projects, etc., federated rubrics make this sharing possible within academic and research collaboration projects.

• Argument 3. The new degree programmes give greater weight (in terms of credits) to externships, which are carried out in institutions outside universities, with different technological systems and tools. This can pose a technological barrier when we try to deepen the quality of university-industry collaboration.

- Use case: If it is important for our students to be fully integrated and to carry out their externships like any other professional employee of their company or institution, they must be registered in the institutions or companies where they conduct their externship. Likewise, if we require more fluid and interactive communication with company tutors and externship centres, these tutors and centres should also register with university platforms, ePortfolios and eRubrics. Within federated systems between universities and companies, tutors could access eRubrics with their own credentials, and our students could likewise access company services with their own institutional keys.
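As a purely conceptual illustration of the trust model described above (real federations such as RedIRIS SIR and eduGAIN rely on standards like SAML with certificate-based trust, not on code like this; the shared secrets below merely stand in for that trust), the following sketch shows why a federated service never needs to register visiting users:

```python
import hashlib
import hmac

# Shared secrets stand in for the certificate-based trust of a real federation.
TRUSTED_IDPS = {"uma.es": b"secret-uma", "sined.mx": b"secret-sined"}

def idp_issue_assertion(idp, user):
    """The user's *home* institution authenticates them and signs an assertion."""
    sig = hmac.new(TRUSTED_IDPS[idp], f"{idp}:{user}".encode(), hashlib.sha256).hexdigest()
    return (idp, user, sig)

def service_accepts(assertion):
    """Any federated service verifies the assertion against its own trust list,
    so the visiting user needs no local account or password."""
    idp, user, sig = assertion
    if idp not in TRUSTED_IDPS:
        return False
    expected = hmac.new(TRUSTED_IDPS[idp], f"{idp}:{user}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# A visiting student authenticated by another institution uses the service directly:
print(service_accepts(idp_issue_assertion("sined.mx", "student42")))  # True
```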
Design and Functionality of Federated eRubrics
Source: created by the authors of this research.
Image 2. Transformation of a square eRubric into a federated eRubric.
As the image above shows, federated eRubrics define a competence as a set of indicators, which students demonstrate through evidence and criteria ranked on a scale. Given that a competence is generic in nature, defining it as a set of indicators allows for a greater level of detail, uniqueness and relation to the learning object. Within each indicator, we can establish the evidence that will allow us to know objectively whether the learning objective has been met and at which level of assessment. Learning objectives in a rubric, whether holistic or analytic, harbour a limited number of activities (tasks, exercises, etc.). While it is nearly impossible for this
number to reflect the entire achievement of a competence, it can at least reflect the dimension of the indicators of the learning objective. The challenge therefore lies in selecting the set of indicators that, through evidence, best defines the competence. The result is a new rubric (Image 3), the transformation of the previous square eRubric into the new federated eRubric (sketched in code after the figure).
Source: created by the authors of this research.
Image 3. Transformation of the Agora Virtual eRubric into the Gtea federated eRubric.
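Here is a sketch of this federated design as a data structure (our own modelling for illustration, not the actual Gtea schema): a competence is a set of indicators, each indicator holds its own number of evidence items, every node can carry its own weight, and evidence can carry extra attributes such as a geolocation for the mlearning designs discussed earlier.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str
    weight: float
    criteria: list[str]  # each piece of evidence has its own criteria/scale
    geolocation: tuple | None = None  # optional (lat, lon) attribute for mlearning

@dataclass
class Indicator:
    name: str
    weight: float
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Competence:
    name: str
    weight: float
    indicators: list[Indicator] = field(default_factory=list)

# Unlike the square grid, indicators are free to differ in size:
comp = Competence("Field observation", 1.0, [
    Indicator("Collects data", 2.0, [
        Evidence("Photograph of specimen", 1.0, ["in focus", "labelled"],
                 geolocation=(36.7213, -4.4214)),  # a spot in the park
        Evidence("Field notes", 2.0, ["dated", "complete"]),
    ]),
    Indicator("Interprets data", 1.0, [
        Evidence("Written interpretation", 1.0, ["cites the collected evidence"]),
    ]),
])
print(len(comp.indicators[0].evidence), len(comp.indicators[1].evidence))  # 2 1
```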
Salient Features of the Gtea Federated eRubric

As with all tools, there are implicit models that users can operate incorrectly. Here we list the basic and most important features, leaving readers the opportunity to test and explore the possibilities described in the PDF and video manuals [10].

• Each competence has its own number of indicators, and each indicator its own number of evidence items.
• Each competence, indicator and piece of evidence may have a different weight.
• Each piece of evidence is based on qualitative or quantitative criteria, thus extending the range of definition and precision.
• Each student acquires evidence at their own pace and in different contexts.
• Assessors and assessed can share notes during formative assessment, attaching rich content (text annotations, online links, images, etc.) to each competence, indicator and piece of evidence. This facilitates user communication through different multimedia codes, and also allows the application of criteria to be explained, evidence to be clarified, etc.
• It is interoperable with other institutional systems and platforms (Ilias, Sakai, Moodle, etc.).
• It can be accessed from any of the 434 partner institutions of RedIRIS through the SIR identity service. Likewise, foreign institutions worldwide can access it through eduGAIN.
• It allows for an mlearning design in which evidence can be distributed geographically. See Image 4 as an example: in this case, the evidence to be collected is spread over a park. For a teacher of architecture, the evidence could be buildings distributed around a city, etc.
Source: created by the authors of this research.
Image 4. eRubric for mlearning in a park.
• Assessed students can follow the progress of their own learning, i.e. the competences, indicators and evidence still to be achieved, those already acquired, etc. Likewise, assessors can get a quick overview of the evidence that students struggle with the most, either within the class group (see Image 5, where evidence in red marks a group mean score not yet achieved at a given time during the programme) or for an individual student.
Source: created by the authors of this research.
Image 5. Overview of eRubric with two competences and the achievement of group mean scores.
• eRubrics can be shared with other users in a community, and at the same time can be publicly assessed.
• Their design can be exported to similar eRubrics, and also to PDF format for printing purposes.
• Exporting data in Excel format allows for statistical analysis (a sketch of such an export follows this list).
• Different models of formative assessment can be carried out (anonymous or named peer assessment, team and group assessment, self-assessment, etc.).
• Different assessors or teachers can evaluate the same student, or several competences, indicators and related evidence.
• eRubrics can be embedded in a blog and shared on social networks such as Twitter, Facebook, etc.
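As an illustration of the export feature mentioned in the list above (the field names are hypothetical; the actual Gtea export format is not documented here), a minimal sketch that flattens assessment records into a CSV file that Excel or any statistics package can open:

```python
import csv

# (student, competence, indicator, evidence, weight, score or None)
records = [
    ("s01", "Field observation", "Collects data", "Field notes", 2.0, 8.0),
    ("s01", "Field observation", "Interprets data", "Interpretation", 1.0, None),
    ("s02", "Field observation", "Collects data", "Field notes", 2.0, 6.5),
]

with open("erubric_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["student", "competence", "indicator", "evidence", "weight", "score"])
    for student, comp, ind, ev, weight, score in records:
        # keep "not submitted" visibly distinct from a 0 score in the export
        writer.writerow([student, comp, ind, ev, weight, "" if score is None else score])
```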
Conclusion

The assessment of learning has always been a focus of research and educational innovation. By asking an institution how it evaluates, we can easily discover its conception of learning and teaching. These conceptions have changed over time, partly due to advances in research and the innovative practice of many teachers and institutions and partly, of course, due to technological advances. These changes have led to an increasingly broad view of the teaching process, in which students are encouraged to become more involved in education in general and in their own learning in particular. However, there is still much to investigate and experience. From a pedagogical point of view, we can conclude with the following statements:

• We can innovate with eRubrics and yet change nothing in the assessment process.
• The transition from traditional evaluation to competence assessment is challenging for some teachers and students at the beginning.
• Collecting, describing and interpreting evidence requires practice and usually takes more time than mastering the technical aspects of the tool.
• This method demands that students be more responsible and committed to the teaching and learning process.
• The effectiveness of formative assessment with ePortfolios and eRubrics depends on the group size and the chosen methodology.

The pedagogical design and tool of the federated Gtea eRubrics are constantly evolving. Here we have discussed the reasons behind their recent changes and transformation, as well as the latest features available. All these changes have occurred in the last three years, thanks to a very dynamic user community of practice. We hope that this work will continue and that the remaining studies will be completed, such as integrating eRubrics into massive open online courses (MOOCs), with efforts towards greater interactivity with Web 3.0 technologies, etc.
[1] http://gteavirtual.org/rubric
[2] Research projects on eRubrics: a) R&D&I project EDU2010-15432: eRubric federated service for assessing university learning, http://erubrica.uma.es/?page_id=434; b) Centre for the Design of eRubrics, National Distance Education System (SINED), Mexico, http://erubrica.uma.es/?page_id=389
[3] Community of practice: http://erubrica.org
[4] GTEA: Research Group on Globalisation, Technology, Education and Learning. Regional Government of Andalusia, SEJ-462. http://gtea.uma.es
[5] RedIRIS SIR is the identity service that allows users to access affiliated RedIRIS institutions. Users access all services offered by the different universities with their own institutional identity, as is the case with the eRubric service. https://www.rediris.es/sir/
[6] eduGAIN is an identity service connecting users from affiliated universities and institutions throughout Europe and worldwide.
[7] National Distance Education System (SINED), Mexico: http://www.sined.mx. Centre for eRubric Design: http://www.sined.mx/rubrica.html, where users can find, among other services, micro-seminars with educational content on how to introduce eRubrics in different contexts.
[8] http://openvideoannotation.org/
[9] The latest developments and use models of eRubrics and multimedia annotation tools were presented as “ePortfolios of Evidence” at the 3rd International Workshop on MOOC creation with multimedia annotation, held at the University of Malaga on 5-7 March 2014. http://gtea.uma.es/congresos
[10] eRubric manuals: http://gtea.uma.es/multimedia/?page_id=272
References

Accino Domínguez, J. A. & Cebrián de la Serna, M. (2009). Entornos de colaboración con tecnologías de federación: una experiencia en el espacio Iberoamericano de educación superior. Revista RedIRIS, Centro de Comunicaciones CSIC, 88-89, pp. 180-192.
Andrade, H. G. (2005). Teaching with rubrics: the good, the bad, and the ugly. College Teaching, 53(1), 27-31.
Bain, K. (2007). Lo que hacen los mejores profesores de Universidad. Valencia: Universidad de Valencia.
Blanco, A. (2009). Desarrollo y evaluación de competencias en educación superior. Madrid: Narcea.
Brown, S. & Glaser, A. (2003). Evaluar en la universidad. Problemas y nuevos enfoques. Madrid: Narcea.
Campbell, A. (2008). Application of ICT and rubrics to the assessment process where professional judgment is involved: the features of an e-marking tool. Assessment & Evaluation in Higher Education, 30(5), pp. 529-537.
Carneiro, R., Lefrere, P., Steffens, K. & Underwood, J. (2011). Self-regulated Learning in Technology Enhanced Learning Environments: A European Perspective. Sense Publishers, Vol. 5. https://www.sensepublishers.com/media/933-self-regulated-learning-in-technology-enhanced-learning-environments.pdf
Cebrián de la Serna, M., Raposo Rivas, M. & Accino Domínguez, J. A. (2008). Eportafolios en el Practicum: un modelo de rúbrica. Revista Comunicación y Pedagogía, 218, pp. 8-13.
Cebrián de la Serna, M. & Accino Domínguez, J. A. (2009). Del ePortafolios a las tecnologías de federación: la experiencia de Ágora Virtual. Jornadas Internacionales sobre docencia, investigación e innovación en la universidad: trabajar con (e)portafolios, Santiago de Compostela, November 2009.
Cebrián de la Serna, M. & Monedero Moya, J. J. (2009). El e-portafolio y la e-rúbrica en la supervisión del practicum. In Raposo, M., Martínez, M. E., Lodeiro, L., Fernández, C. J. & Pérez, A. (coords.), El practicum más allá del empleo: formación vs training, pp. 369-380. Santiago de Compostela: Imprenta universitaria. Available at: http://redaberta.usc.es/poio/documentos/actas/actas_poio_2009.pdf
Cebrián de la Serna, M. (2011a). Supervisión con ePortafolios y su impacto en las reflexiones de los estudiantes en el Practicum. Estudio de caso. Revista de Educación, 354, pp. 183-208.
Cebrián de la Serna, M. (2011b). Los ePortafolios en la supervisión del Practicum: modelos pedagógicos y soportes tecnológicos. Revista de Curriculum y Formación del Profesorado, 15(1), pp. 91-107. http://www.ugr.es/~recfpro/rev151ART6.pdf
Cebrián de la Serna, M. & Cebrián Robles, D. (2013). Gteavirtual: Federated Environment for Open Learning. ECER/EERA 2013, Istanbul, Turkey.
Cebrián de la Serna, M., Serrano Angulo, J. & Cebrián Robles, D. (2014). Federated eRubric service to facilitate self-regulated learning in the European university model. European Educational Research Journal. In press.
Falchikov, N. (1986). Product comparisons and process benefits of collaborative, peer group and self assessments. Assessment and Evaluation in Higher Education, 11, 146-165.
Falchikov, N. & Boud, D. (1989). Student self-assessment in higher education: a meta-analysis. Review of Educational Research, 59(4), pp. 395-430.
Falchikov, N. & Goldfinch, J. (2000). Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), pp. 287-322.
Falchikov, N. (2005). Improving Assessment through Student Involvement. New York: Routledge.
Hargreaves, E. (2007). The validity of collaborative assessment for learning. Assessment in Education, 14(2), pp. 185-199. http://dx.doi.org/10.1080/0950069022000038268
Jonsson, A. & Svingby, G. (2007). The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review, 2, pp. 130-144.
Luxton-Reilly, A. (2009). A systematic review of tools that support peer assessment. Computer Science Education, 19(4), pp. 209-232.
López Pastor, V. (2009). Evaluación formativa y compartida en educación superior. Propuestas, técnicas, instrumentos y experiencias. Madrid: Narcea.
Martínez-Figueira, E., Tellado-González, F. & Raposo-Rivas, M. (2013). La rúbrica como instrumento para la autoevaluación: un estudio piloto. Revista de Docencia Universitaria, 11(2), 373-390.
Moril, R., Ballester, L. & Martínez, J. (2012). Introducción de las matrices de valoración analítica en el proceso de evaluación del Practicum de los Grados de Infantil y de Primaria. Revista de Docencia Universitaria, 10(2), 251-271.
Overveld, K. & Verhoeff, T. (2013). Self-consistent peer ranking for assessing student work dealing with large populations. CSEDU 2013, 5th International Conference on Computer Supported Education, 6-8 May, Aachen, Germany.
Panadero, E. & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: a review. Educational Research Review, 9, pp. 129-144.
Panadero, E., Alonso-Tapia, J. & Reche, E. (2013). Rubrics vs. self-assessment scripts: effect on self-regulation, performance and self-efficacy in pre-service teachers. Studies in Educational Evaluation, 39(3), pp. 125-132.
Panadero, E. & Alonso-Tapia, J. (2013). Autoevaluación: Connotaciones Teóricas y Prácticas. Cuándo Ocurre, Cómo se Adquiere y qué Hacer para Potenciarla en
nuestro Alumnado. Electronic Journal of Research in Educational Psychology, 11(2), no. 30, 551-576. http://dx.doi.org/10.14204/ejrep.30.12200
Panadero, E. & Alonso-Tapia, J. (2011). El papel de las rúbricas en la autoevaluación y autorregulación del aprendizaje. In Bujan, K., Rekalde, I. & Aramendi, P., La evaluación de competencias en la educación superior. Sevilla: MAD.
Raposo Rivas, M., Cebrián de la Serna, M. & Martínez-Figueira, M. E. (2014). The electronic rubric to value skills on ICT subjects. European Educational Research Journal. In press.
Sánchez González, M. P. (2010). Técnicas docentes y sistemas de evaluación en Educación Superior. Madrid: Narcea.
Serrano, A., Hernández, M., Pérez, E. & Biel, P. (2013). Trabajo por módulos: un modelo de aprendizaje interdisciplinar y colaborativo en el Grado en Ingeniería en Diseño Industrial y Desarrollo de Producto. Revista de Docencia Universitaria, 11, 197-220.
Vásquez, S. (2011). Comunidades de práctica. Educar, 47(1), 51-68.
Article completed on December 29, 2013.

Cebrián de la Serna, M. & Monedero Moya, J. J. (2014). Evolución en el diseño y funcionalidad de las rúbricas: desde las rúbricas “cuadradas” a las erúbricas federadas. REDU: Revista de Docencia Universitaria, special issue on Evaluación Formativa mediante Erúbricas, 12(1), pp. 81-98. Published at http://www.red-u.net
Manuel Cebrián de la Serna
University of Malaga
Department of Didactics and School Management
Mail:
[email protected]

Full university professor. Doctor in Educational Technology and Bachelor of Science in Education. Research areas: a) educational innovation vs technological innovation; b) university education; and c) federation technologies applied to education. He has directed PhD and master's degree programmes on educational innovation and new technologies applied to education. Director of teacher training services for 10 years (ICE, Educational Innovation and Virtual Learning Service). Advisor to the National Distance Education System (SINED), Mexico. Director of the SEJ-462 Research Group on Globalisation, Technology, Education and Learning (GTEA).
Juan José Monedero Moya
University of Malaga
Department of Didactics and School Management
Mail:
[email protected]

Professor at the University of Malaga. His research focuses on three areas: 1) educational ethnography; 2) evaluation of educational materials and assessment of e-learning; 3) ePortfolios and eRubrics. He has managed third-cycle (doctorate) studies and participated in several PhD programmes at the University of Malaga, taught in Malaga, Buenos Aires, Santiago de Chile, Chiclayo (Peru) and Guadalajara (Mexico). First National Award for Educational Research and Innovation 1992, ex aequo in the Educational Research category. Gtea group member since its inception.