eLearning Papers 40

January 2015


Assessment, certification, and quality assurance in open learning

Editorial
Assessment, certification, and quality assurance in open learning

In-depth
Quality Assurance for OER: Current State of the Art and the TIPS Framework

http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_In-Depth_40_1

Students as evaluators of open educational resources

http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_In-Depth_40_2

Student’s Quality perception and learning outcomes when using an open accessible eLearning-resource

http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_In-Depth_40_3

From the field
An Assessment-Recognition Matrix for Analysing Institutional Practices in the Recognition of Open Learning

http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_1

Peer-review Platform for Astronomy Education Activities

http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_2

Seven features of smart learning analytics - lessons learned from four years of research with learning analytics

http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_3

Quality assurance in online learning: The contribution of computational linguistics analysis to criterion referenced assessment

http://www.openeducationeuropa.eu/en/article/Assessment-certification-and-quality-assurance-in-open-learning_From-field_40_4

eLearning Papers is a digital publication on eLearning by openeducationeuropa.eu, a portal created by the European Commission to promote the use of ICT in education and training. Edited by P.A.U. Education, S.L. E-mail: editorialteam[at]openeducationeuropa[dot]eu. ISSN 1887-1542

The texts published in this journal, unless otherwise indicated, are subject to a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported licence. They may be copied, distributed and broadcast provided that the author and the e-journal that publishes them, eLearning Papers, are cited. Commercial use and derivative works are not permitted. The full licence can be consulted at http://creativecommons.org/licenses/by-nc-nd/3.0/

Editorial

Open Learning and its Future of Assessment, Certification and Quality Assurance

Open learning is scaling up again after the ODL peak of the 1990s. Thanks to ongoing changes in societies and working life, and to the technology-enabled globalization of education, open education has more potential users today than ever before. The major question is how to implement it so as to achieve the best learning results.

Many initiatives have taken place in recent years. The ongoing debate on Open Educational Resources led to the influential 2012 Paris OER Declaration by UNESCO. In 2013, the European Commission published its "Opening up Education" communication, demanding "Innovative teaching and learning for all through new Technologies and Open Educational Resources". And this year, the "Declaration of Crete", calling for "Re-establishing Openness as Default", was approved in a common workshop of the International Community for Open Research and Education (ICORE) and the Open Education Consortium (OEC) at the international LINQ conference 2014.

Today open learning is being introduced in many educational systems and sectors throughout Europe, thanks to major flagship initiatives like Open Discovery Space and Inspiring Science Education, which involve all 28 EU member states and beyond. The proof of concept and the potential benefits will be demonstrated and evaluated over the next years, requiring a strong focus on assessment, on certification and, in particular, on the key dimension of quality in open learning. Currently the vision of open learning is being applied and amended to open up education, combining innovation and quality (Stracke, 2014).

This issue of eLearning Papers presents a collection of in-depth articles and reports from the field on "Assessment, certification, and quality assurance in open learning". These papers provide a comprehensive glimpse of what is taking place in open learning today.


Christian M. Stracke, Chair of TELIT Research Institute
Tapio Koskinen, Director of the Editorial Board, eLearning Papers


In-depth

Quality Assurance for OER: Current State of the Art and the TIPS Framework

Authors
Paul Kawachi FRSA
[email protected]
Open Education Network, United Kingdom

We present validated quality assurance criteria as guidelines for creating and improving open educational resources (OER). We reviewed all 45 known related frameworks in the literature and built an open collation of 205 criteria. Through several rounds of international workshops, questionnaires, surveys and referrals, these were examined by more than 200 OER experts and teachers around the world to produce a practical framework consisting of 38 key criteria. Through a grounded theory approach, these are distributed among four groups: the teaching aspects, the information content aspects, the presentation aspects, and the system technical aspects, giving us the acronym TIPS. The result is a highly validated framework (content validity index > 0.80 according to Lawshe) offering guidelines for determining and improving the quality of OER. These key criteria can be helpful to creators of OER, and can easily be applied as a rubric by reusers to assess or improve existing OER. All the methods and data are in the free-of-cost open-access domain.

Tags: quality assurance, criteria, validation, OER

1. Introduction

Open educational resources (OER) offer an unprecedented opportunity to develop learning materials for the developing world. The present study focuses on the creation and improvement of OER by teachers, through Guidelines for quality assurance. The rationale for developing these Guidelines for teachers as creators of their own OER is essentially to broaden the author base by involving teachers as reflective practitioners.

Good quality OER can widen informal access to education through independent study, and widen formal access through the recognition of prior learning. Good quality OER can also prevent dropout from formal education by offering remedial study resources. They therefore provide indirect cost benefits to the institution, the community and governments. Moreover, creating OER can empower teachers as authors, raise their self-esteem and social status, and help raise the profile of the school. Dhanarajan & Abeywardena (2013, pp. 9-10) found that teachers' lack of skills was a leading barrier to creating OER, and their inability to locate quality OER a leading barrier to reusing OER. In order to expand the OER author base, guidelines offering suggestions to teachers as potential authors may therefore be helpful.

The current project deals with developing an instrument consisting of a checklist of criteria, as suggestions to be considered by teachers when designing OER. The resulting criteria present ideas to teachers as prospective creators of OER, offering points they can reflect upon in order to develop a culture of quality within their own respective local communities of practice. We also expect institutions supporting the development and use of OER to adopt these Guidelines in their internal quality assurance practices. By developing and offering these Guidelines,


we are interested in nurturing the idea of quality as a culture. Developing a culture of quality through teachers' continuous professional reflection may be a better way forward than simply aiming to store an individual teacher's own lesson materials digitally, more or less permanently.

Defining quality in absolute terms is elusive, because it depends upon whose perspective we choose to adopt. However, quality has been fairly well defined by Harvey & Green (1993) along five dimensions, with Fitness for Purpose as the dimension most relevant to the quality of open educational resources (OER); Cost Efficiency and Transformative Learning are also relevant, while the other two dimensions are not concerned with education. These are given in Box 1 below. The key dimension for the quality of OER is thus Fitness for Purpose, which indicates that the purpose needs to be defined, and this in turn depends on whose perspective we adopt.

Box 1: Dimensions of Quality

(i) Achieving Exceptional Excellence: surpassing some preset criterion-referenced standard

(ii) Achieving Perfection: focusing on first making a machine that is successful 100% of the time, rather than on trial-and-error or on envisaging improving it later

(iii) Achieving Fitness for Purpose: satisfying the aims or reasons for producing the item, according to the judgements of the various stakeholders, particularly the consumers

(iv) Achieving Value for Money: focusing on relative efficiency, and on effectiveness in terms of immediate output, mid-term outcome and long-term impact

(v) Achieving Transformation: enhancing and empowering the consumer, e.g. equipping the student with 21st-century knowledge-creation skills

Regarding the third dimension, quality as Fitness for Purpose, we are grappling here with the issue of whose purpose, and we therefore suggest a practical way forward to accommodate the different perspectives. The challenge is illustrated by, for example, an OER highly rated as excellent quality by students in their remedial learning, but which teachers elsewhere find very difficult to adapt, translate, and relocalise to another culture and context. On one level (let us call this the basic or ground level, with students in class) the OER is high quality, but on another, higher level (that of the teachers as reusers and translators) the same OER is low quality and unusable. The global institutions and OER experts (at the highest level) would rate this OER even more critically because of the difficulty of remixing it. To simplify


the challenge, we draw three levels of localisation, each with its own specific quality criteria: (i) the upper-most level-1 of the repository, containing the internationalised OER that have been standardised by OER experts and, like a textbook, are almost context-free; (ii) the intermediate level-2 of readily adaptable OER; and (iii) the ground level-3 of the fully localised OER used by actual students. Briefly, the upper-most level-1 is the most restrictive interpretation of quality, by OER experts and institutions; the intermediate level-2 is complex, involving the ease of adapting OER through re-contextualisation by teachers; and the ground level-3 is quality in the hearts and minds of the students learning with the fully localised OER version. Very few if any studies have yet gathered feedback from students on whether they achieve improved learning using OER, and while we are waiting for our impact studies to be completed, the present study reports on quality as perceived at the other two levels: at level-1 of the OER experts, and at level-2 of the teachers. The three levels are shown in Figure 1 as a pyramid-like structure.

Figure 1. The OER localisation processes

These three levels were originally designed to visualise the processes of localisation and internationalisation according to the level of the reusers: whether they were the intended end-users (notably the students learning), the intermediate users (the providers, teachers, or translators), or the storekeeper users (the repositories, portals and institutions). Here these three levels are employed to illustrate their three respective views on quality. Figure 1 shows more OER at the base of the pyramid structure, representing the reality that there are many versions, e.g. one version in each context, while at the intermediate level-2 there are fewer, and at the highest level-1 fewer still. An example would be a national curriculum textbook at the repository level-1, lesson plans at the teacher level-2, and individualised interpretations for each student in his or her native language at level-3. The teacher enjoys some autonomy


within the four walls of the classroom, and can use humour, exaggeration, gestures and so on to convey the teaching points. But if a student asks to record the lesson for later revision study, the teacher would be advised to use clearer language, without idiomatic or local slang (this copy would be at level-2, for sharing with others). And when the teacher writes all this up into a publishable textbook, the product would be up at level-1.

2. Methods

To date, a total of 45 quality assurance frameworks relevant to this project have been found in the literature. Of these, 19 are useful and have been examined in detail, to harvest points that could help formulate criteria for our OER Guidelines. These 19 frameworks are by Baya'a, Shehade & Baya'a (2009), Latchem (2012), Sreedher (2009), Ehlers (2012), Achieve (2011), Camilleri & Tannhäuser (2012), The Quality Matters Program (2011), Bakken & Bridges (2011), McGill (2012), Khanna & Basak (2013), Kwak (2009), Frydenberg (2002), The SREB - Southern Regional Education Board (2001), Merisotis & Phipps (2000), Sloan (2013), Jung & Lee (2014), Mhlanga (2010), Williams, Kear & Rosewell (2012), and Leacock & Nesbit (2007). They are covered in the discussion later. From these, a total of 205 criteria were suggested for OER quality assurance (given in full by Kawachi, 2014a). Initially we hoped that these would suggest common categories across all the frameworks, but their ad hoc designs meant that no alignment was possible. Instead, the five Domains of Learning template was adopted (see Kawachi, 2014b), onto which suggestions could be positioned in a collated manner, to avoid duplication and to facilitate our review and collection processes.

The comprehensive list of 205 criteria for OER quality was collated onto the Domains of Learning scaffold and discussed at length with OER experts and teachers globally. Specifically, a Regional Consultation Meeting was held in Hyderabad, India, on 13-15th March 2013 at Maulana Azad National Urdu University, and an International Workshop was held in Islamabad, Pakistan, on 1st October 2013 at Allama Iqbal Open University. Other face-to-face and online discussions were held at other universities around the world. These various consultations and feedback discussions resulted in the 205 criteria being reduced to 65 (given in Kawachi, 2013). Many of these criteria were in technical or complex English, which teachers in developing countries might find inaccessible. Feedback conversations also asked for the Domains of Learning scaffold to be simplified and re-categorised for ease of use by teachers. The outcome was four groups of criteria, which through a grounded theory approach were subsequently labelled as (T) Teaching and Learning Processes, (I) Information and Material Content, (P) Presentation, Product and Format, and (S) System Technical and Technology, giving us the acronym TIPS. These four groups are presented in Figure 2 as layers of quality concerns.


Figure 2. The four layers of the TIPS Framework
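To make concrete how the four layers might be applied as a simple rubric by a reuser, here is a minimal sketch in Python. It is illustrative only: the criteria named and the 0-2 rating scale are invented for the example and are not the 38 validated criteria of the Framework.

```python
# Hypothetical sketch: scoring one OER against the four TIPS layers.
# The criteria below are invented examples, not the Framework's own.
from statistics import mean

tips_rubric = {
    "T - Teaching and Learning Processes": ["clear learning objectives", "activities for learners"],
    "I - Information and Material Content": ["accurate content", "sources attributed"],
    "P - Presentation, Product and Format": ["readable layout", "accessible language"],
    "S - System Technical and Technology": ["open file format", "works offline"],
}

def score_oer(ratings):
    """Average the 0-2 ratings per layer, and overall, for one OER."""
    layer_scores = {layer: mean(ratings[c] for c in criteria)
                    for layer, criteria in tips_rubric.items()}
    return layer_scores, mean(layer_scores.values())

# Example: one reviewer's ratings for a single resource.
ratings = {"clear learning objectives": 2, "activities for learners": 1,
           "accurate content": 2, "sources attributed": 2,
           "readable layout": 1, "accessible language": 2,
           "open file format": 2, "works offline": 0}

layers, overall = score_oer(ratings)
for layer, s in layers.items():
    print(f"{layer}: {s:.1f} / 2")
print(f"Overall: {overall:.1f} / 2")
```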

These four layers comprising the TIPS Framework are presented in easily accessible English in a pamphlet (available at http://www.open-ed.net/oer-quality/tips.pdf) for teachers to use in the field. The Framework has also been translated into other local languages. After publishing this initial version-1.0 (Kawachi, 2013), further studies were undertaken, in both field tests and surveys, to improve utility, confidence and reliability, involving specifically Content Validation according to Lawshe (1975). We also included Wave Analysis according to Leslie (1972). Wave Analysis is a method to increase confidence that survey data are complete and comprehensive: where successive waves of responses show similar distributions of ratings, confidence is increased.
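As a minimal sketch of that wave-analysis check, the following compares the rating distributions of two response waves. The three rating categories anticipate the Essential / Not essential but useful / Not necessary rankings introduced below, and the 0.10 tolerance is an invented illustration, not a value from Leslie (1972).

```python
# Minimal sketch of Wave Analysis: compare rating distributions across
# successive response waves. The 0.10 tolerance is an invented example.
from collections import Counter

CATEGORIES = ("essential", "useful", "not necessary")

def distribution(responses):
    """Proportion of each rating category within one wave of responses."""
    counts = Counter(responses)
    return {cat: counts[cat] / len(responses) for cat in CATEGORIES}

def waves_agree(wave_a, wave_b, tolerance=0.10):
    """True if the two waves' rating proportions differ by at most `tolerance`."""
    da, db = distribution(wave_a), distribution(wave_b)
    return all(abs(da[cat] - db[cat]) <= tolerance for cat in CATEGORIES)

early = ["essential"] * 14 + ["useful"] * 4 + ["not necessary"] * 2
late = ["essential"] * 13 + ["useful"] * 5 + ["not necessary"] * 2

print(waves_agree(early, late))  # True: similar distributions raise confidence
```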


Content Validity is a term with an imprecise meaning. According to Fitzpatrick (1983), it can refer to (i) how well the items cover the whole field, (ii) how well the user's interpretations of or responses to the items cover the whole field, (iii) the overall relevance of all the items, (iv) the overall relevance of the user's interpretations, (v) the clarity of the content domain definitions, and/or (vi) the technical quality of each and all of the items. The first two concern the adequacy of the sampling, and come under Construct Validity. Notwithstanding that Content Validity is an imprecise term, it can be measured quantitatively by asking content experts to rank each item as (i) Essential, (ii) Not essential but useful, or (iii) Not necessary, according to Lawshe (1975). Items ranked as not necessary are likely to be discarded. Among a large number N of experts, the number who rank an item as essential, N_E, is used to calculate the Content Validity Ratio for that item, as shown in Box 2 below. This formula gives a Ratio of zero if exactly half the experts rank the item as essential, and a positive Ratio between zero and one if more than half do.

BOX 2: The Content Validity Ratio CVR (from Lawshe, 1975)

CVR = (N_E - N/2) / (N/2)

where N is the total number of experts and N_E the number who rank the item as essential.

Table 1. The Minimum Averaged Value of CVR for a Criterion to be Retained
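A short sketch of this calculation follows, assuming each expert's ranking is recorded per item; the ten-expert panel and the two candidate criteria are invented for illustration.

```python
# Sketch of Lawshe's Content Validity Ratio: CVR = (N_E - N/2) / (N/2),
# where N is the number of experts and N_E the number ranking the item
# "essential". The panel data below are invented for illustration.

def content_validity_ratio(rankings):
    """CVR for one item, given each expert's ranking string."""
    n = len(rankings)
    n_essential = sum(1 for r in rankings if r == "essential")
    return (n_essential - n / 2) / (n / 2)

def content_validity_index(items):
    """Average CVR over the items retained in the instrument."""
    ratios = [content_validity_ratio(r) for r in items.values()]
    return sum(ratios) / len(ratios)

# Ten experts rate two candidate criteria.
items = {
    "clear learning objectives": ["essential"] * 9 + ["useful"],
    "includes multimedia": ["essential"] * 6 + ["useful"] * 3 + ["not necessary"],
}

for name, rankings in items.items():
    print(f"{name}: CVR = {content_validity_ratio(rankings):+.2f}")
print(f"Content validity index = {content_validity_index(items):.2f}")
```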

To determine the valid criteria of quality as Fitness for Purpose, we surveyed individual OER experts separately from individual teacher-practitioners, to discover their respective sets of criteria. In each case the individual was invited by personal email. Of the three arbitrary levels (see Figure 3 below), we thus surveyed level-1, of the OER experts, and level-2, of the teachers.

For relatively small groups of experts, the average Ratio for each item retained in the instrument should be close to one to decide that the specific item has content validity, and the averaged CVR across the retained items should exceed 0.80 for instrument validity at p < .05 (see Table 1).