MANUAL FOR THE QUALITY ASSESSMENT OF DIGITAL EDUCATIONAL MATERIAL

The QuADEM project has been funded with support from the European Commission. This document reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

CONTENTS

INTRODUCTION
OPERATING INSTRUCTIONS
   QUADEM: THE BASICS
   QUADEM TOOLS
   ASSESSMENT PROCEDURE
BLENDED LEARNING
   THEORETICAL BACKGROUND
   BLENDED LEARNING CONTEXT
   ASSESSMENT OF BLENDED LEARNING
ASSESSMENT UNITS
   LEARNING OBJECTIVES
   CONTENT
   STYLE & LANGUAGE
   INTERCULTURAL ASPECTS
   USABILITY
   LEARNING STYLES
   WRITING STYLES
   TESTING
   EXAMPLES
   MULTIMEDIA
   QUESTIONNAIRES
SCORING MANUAL
   GENERAL PRINCIPLES
   STEPS OF SCORING
   EXAMPLE
METHODOLOGICAL COMPENDIUM
   CARD SORTING
   FOCUS GROUP
   INTERVIEW
   PLUS/MINUS
   TASK ANALYSIS
   VERBAL PROTOCOL
   QUESTIONNAIRES
   TEST ANALYSIS
   USABILITY TESTS
GLOSSARY
BIBLIOGRAPHY


This handbook has been produced with the hard work of a large team, including: Liesbeth Opdenacker, Sarah Vaes, Gerd Bräuer, Piotr Cap, Els De Meyer, Bart Deygers, Vera Janssens, Geert Jacobs, Carel Jansen, Joep Jaspers, Mariëlle Leijten, Kennet Lindquist, Joanna Nijakowska, Ingrid Stassen, Luuk Van Waes, Arthur Van der Graaf, Daphne Van Weijen, Steef Verheij.

ISBN: 9789057282294 EAN: 9789057282294

QuADEM: manual for the quality assessment of digital educational material by Opdenacker L., Vaes S., Van Waes L., Jacobs G., et al. is licensed under a Creative Commons Attribution-Non-Commercial-No Derivative Works 2.0 Belgium License.


INTRODUCTION

[Introduction text needs to be inserted here.]


OPERATING INSTRUCTIONS

QUADEM: THE BASICS

The next section is devoted to four basic questions: What can QuADEM be used for? Who can use QuADEM? What is QuADEM about? How can QuADEM be used? This should give a clear picture of what QuADEM is, what it is not and - most importantly - what it can mean for you.

WHAT FOR? ASSESSING DIGITAL LEARNING MODULES…

The QuADEM project's main ambition is to contribute to the quality control of digital educational material in the field of academic/professional writing. We have substantiated this goal by developing a method for the quality assessment of digital learning modules. In QuADEM terminology a 'digital learning module' refers to a digital application that presents educational material devoted to the study of one specific topic in a systematically structured way. In many cases a digital learning module will be part of a 'digital learning environment': a broader digital framework that offers its users a whole set of applications allowing different parts of the teaching and learning experience to happen through a digital medium.

A digital learning module will always address a well-defined and demarcated topic, for example 'How to write a CV?'1, and it will offer its users information, documentation, examples and exercises to help them achieve the learning objective, in this case writing a good CV. This learning module might be available in a digital learning environment that, next to other learning modules, features tools such as course statistics, a drop box for assignments, a student forum, a notice board and student administration.

While the evaluation of a digital learning module will unavoidably touch upon some aspects of the broader digital learning environment, the QuADEM method does not offer a complete evaluation scheme for the review of the overall digital learning environment. It does provide you with a practical guide for the creation, elaboration and assessment of the digital learning modules that are part of it.

…IN THE FIELD OF ACADEMIC/PROFESSIONAL WRITING

The QuADEM method was developed for digital educational material in the field of academic/professional writing. Although some parts of the QuADEM method can be used to assess digital educational materials in other domains as well (i.e. other types of writing, or even domains that are not related to writing or communication at all), it should be noted that QuADEM is essentially devoted to topics related to the teaching of academic/professional writing.

1 For an online example of a digital learning module, take a look at the Calliope module 'Curriculum Vitae'.

FOR WHOM? A SPECIFIC TARGET AUDIENCE…

The QuADEM method is designed for instructors who develop and/or use digital educational materials. The initiative for a QuADEM quality assessment can be taken by the content developer of a new digital learning module during the development phase, or by the instructor who wishes to evaluate an existing digital learning module. The content developer or instructor/user of a digital learning module could fill the role of assessor and evaluate his/her own material, but in ideal circumstances the content developer and the assessor are not the same person. At any stage of the assessment the initiator can decide to delegate (parts of) the assessment process to an external expert, because of the need for specific expertise or to avoid bias. If, when and to what extent an external expert should be involved in the assessment process is a question that can only be answered on a case-by-case basis. It depends strongly on the overall objective and scope of the assessment.

Anyone who initiates or performs a quality assessment of digital educational material - whether content developer, initiator or assessor - can use this handbook as a practical guide. It will lead you through the entire assessment process, all the way from the conceptual and preparatory phase of the evaluation to the processing and interpretation of the final results.

…BUT NO BACKGROUND KNOWLEDGE REQUIRED

Since the use of digital educational material has long surpassed the stage of being reserved for experts only, a successful quality assessment method needs to be equally comprehensible and accessible to a wider audience. To meet this condition, the QuADEM method and handbook have been developed in such a way that both the experienced and the novice assessor can use them effectively and efficiently. In the QuADEM handbook the practical information on how to assess a specific aspect of a digital learning module can be accessed directly. At the same time, additional theoretical sections, a glossary and a methodological compendium are available for anyone who wishes to gain more in-depth knowledge of a specific topic, terminology or methodology. This minimizes the need for specific background knowledge and makes the QuADEM method suitable for a wide audience, ranging from the experienced IT professional to the motivated tutor.

WHAT ABOUT? ASSESSING QUALITY…

Quality assessment of e-learning applications traditionally focuses on user-friendliness or, in plain words, the ease with which users are able to manipulate the educational software (Ardito et al., 2006).

The QuADEM method, however, applies a more comprehensive approach towards 'quality' by incorporating other facets, such as the extent to which the digital educational material actually supports the learning objectives and processes of its users. All in all, eleven different components that - according to QuADEM - determine the quality of digital educational material are taken into consideration.

The QuADEM view on what constitutes 'good' educational material is strongly influenced by the theories of cognitive constructivism and social constructivism. Cognitive psychology (e.g. Gardner, 1993) has shown that the development of knowledge needs to be personally relevant in order to be long-lasting. This personally relevant knowledge is best acquired by active learners in instructional settings that stimulate individual documentation, analysis, selection and assessment of information. While cognitive constructivism defines learning as a cognitive process of constructing meaning within each individual learner, social constructivism (e.g. Vygotsky, 1978; Rogoff, 1990) understands learning first and foremost as a socially embedded phenomenon. Here, learning is understood as active, self-empowering and, at the same time, peer-oriented: both the result and a requirement of student-centred instruction, organized by educators who define themselves no longer as mere teachers but as facilitators of learners and their individual learning processes.

The QuADEM method is especially well suited to support quality assessments with a broad scope, going beyond a focus on 'ease of use' to include an assessment of the accessibility and didactic effectiveness of the digital educational material under review.

… IN A BLENDED LEARNING CONTEXT

While there have been attempts to replace the traditional classroom with new media, the limitations of learning and teaching that occur solely in digital environments have also become clearer. The QuADEM method therefore strongly advocates a blended learning approach, in which face-to-face and computer-mediated instruction are combined in order to serve learners with various learning styles and to optimize learning. The QuADEM method can be used to assess digital educational materials in different e-learning settings, but it is especially well suited to the evaluation of digital learning modules used in a blended learning context, and it encourages the assessor to look at a digital learning module from a blended learning perspective.

HOW? USING QUALITATIVE RESEARCH…

According to QuADEM, a good quality assessment needs to take into account the interaction between the digital educational material and its users. The best way to achieve this is by actually involving the end-users of a digital learning module in the assessment process. A set of - mostly - qualitative research methods is used to find out what opinions the different end-users and other stakeholders hold about the digital learning module under review. The QuADEM method assists the assessor in basing his/her own expert opinion on the concerted opinions of different groups of end-users and/or stakeholders by guiding him/her in the selection of respondents and research methods.

WITH A FLEXIBLE FOCUS…

Since the QuADEM method employs a broad interpretation of quality, it runs the risk of becoming laborious and cumbersome. To maintain the purposefulness and efficiency of this quality assessment scheme, the QuADEM method is devised in such a way that the user can pick and choose exactly what he/she needs. The flexible focus of the QuADEM method makes it possible for the user to shape the assessment procedure according to his/her needs. As a result, the QuADEM method supports quality assessments of varying scale, ranging from focused evaluations aimed at very specific points of concern to extensive investigations covering all the different components of quality.

QUADEM TOOLS

The QuADEM handbook provides you with four tools for a quality assessment: a set of assessment units, a scoring manual, a methodological compendium and a glossary. This short overview will give you all the information you need to decide when and how to consult which QuADEM tool.

ASSESSMENT UNITS

The first tool is a set of assessment units that provide the user with focused, well-structured and hands-on information on how to review a digital learning module. The starting point of the QuADEM project has been the identification of twelve components that can determine the quality of digital learning modules. The selected components are: 1) blended learning, 2) learning objectives, 3) content, 4) style and language, 5) intercultural aspects, 6) usability, 7) learning styles, 8) writing styles, 9) testing, 10) examples, 11) multimedia and 12) questionnaires. For each of these components an assessment unit has been developed. Such a unit contains all the information you need to assess that specific component of the digital learning module. For example, if you want to judge the quality of a digital learning module as far as its content is concerned, you will find everything you need in the 'Content Unit'.

The core ingredient of an assessment unit is the checklist: a non-exhaustive but representative list of criteria that determine one component of what constitutes a successful, high-quality learning module. To find out how a digital learning module scores on the checklist criteria, the assessor can apply different research methods and involve different stakeholders. The specifics on how to go about this are all mentioned in the assessment unit, and this information is always structured in the same way (see box 1).

Box 1: Unit structure

Summary: Offers a definition of the component the unit refers to and specifies the scope of the unit (i.e. what it covers and what it does not), the research methods to be used and the respondents to contact.

Preconditions: Sums up a number of controlling conditions which need to be fulfilled before proceeding to the actual assessment itself.

Checklist: Shows a list of criteria dealing with one component of the digital learning module. The assessor can rate the digital learning module for each criterion on a scale ranging from -2 to 2, or he/she can judge the criterion not applicable (N/A).

Script: Suggests a scenario for the assessment of the unit's component and guides the assessor through the process of completing the checklist step by step.

Manual: Provides the QuADEM user with additional information on the meaning and scope of the different checklist criteria - often by describing how the digital learning module should be developed in order to meet the terms of a certain criterion - and contains advice on how you might solve certain problems.

Score: A box where the assessor can rate the module (using a letter score from A to E) and explain his/her mark.

SCORING MANUAL

The second tool provided by this handbook is the scoring manual. This manual explains step by step how the assessor can calculate how well a digital learning module scores on a specific component. An important part of a QuADEM quality assessment is the rating of each of the checklist criteria. The next step is the translation of the different numeric scores into one overall score that summarizes the assessor's verdict about the digital learning module in relation to a specific component of quality. Although the QuADEM handbook is complemented by an online scoring tool that can calculate this overall score automatically, it can also be done manually. The scoring manual explains how.
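To give a feel for the kind of calculation involved, the sketch below averages the numeric ratings of one checklist and maps the result onto a letter score. It is a minimal, hypothetical illustration: the treatment of N/A items and the A-E band boundaries are assumptions made for this example, not the official QuADEM scoring rules, which are set out in the scoring manual itself.

```python
# Hypothetical sketch of a checklist-to-letter-score calculation.
# The N/A handling and the A-E band boundaries below are illustrative
# assumptions, not the official QuADEM scoring rules.

def overall_score(ratings):
    """Average the criterion ratings (-2..+2), ignoring N/A items."""
    scored = [r for r in ratings if r is not None]  # None stands for N/A
    if not scored:
        raise ValueError("Every criterion was rated N/A; nothing to score.")
    return sum(scored) / len(scored)

def letter_score(mean):
    """Map a mean rating in [-2, 2] onto a letter from A to E (assumed bands)."""
    bands = [(1.2, "A"), (0.4, "B"), (-0.4, "C"), (-1.2, "D")]
    for threshold, letter in bands:
        if mean >= threshold:
            return letter
    return "E"

# Example: a unit with five criteria, one of which was judged N/A.
ratings = [2, 1, 0, None, 1]
mean = overall_score(ratings)      # (2 + 1 + 0 + 1) / 4 = 1.0
print(mean, letter_score(mean))    # prints: 1.0 B
```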

METHODOLOGICAL COMPENDIUM

The QuADEM handbook also contains a comprehensive methodological compendium. In the compendium every research method and technique that can be used during a quality assessment is explained, described and demonstrated. This section will be especially interesting for the novice assessor who is not familiar with qualitative research methods. It provides clear answers to questions such as: What kind of method is it? When do I use it? How do I use it? How can I solve problems? What do the results of the method look like?

GLOSSARY

The fourth QuADEM tool is a glossary. In this detailed, alphabetical list you will find definitions for all QuADEM terminology, as well as clear-cut explanations and additional theoretical background on all field-specific concepts and terms used in the handbook.

ASSESSMENT PROCEDURE


This section, completing the instructions on how to use QuADEM, gives an overview of the QuADEM assessment procedure. In order to perform a quality assessment from start to finish, the assessor will have to go through this step-by-step plan.

STEP 1: STARTING SITUATION

The starting situation for each QuADEM assessment is characterized by two main conditions. First, a new e-learning tool or digital learning module will be or has been developed, or an existing module will be revised. Second, the developer wants to assess whether the digital learning module is consistent with the criteria QuADEM has determined, or to establish what criteria the new digital module has to meet.

STEP 2: DETERMINE THE SCOPE OF THE ASSESSMENT

The QuADEM assessment method supports assessments with a varying scope. Now is the time to decide how to use QuADEM (see box 2).

Box 2: Different uses of QuADEM

Broad use: In this case you use the QuADEM assessment method from the very inception of the learning module. You assess the learning module continuously and iteratively, applying all (or a selection of) the assessment units in full and correcting flaws until the desired standard has been reached. The assessment units and their subsections (i.e. "Script") have been designed with broad use in mind.

Narrow use: In this case you use the QuADEM assessment method to perform a selective and partial assessment of one or several components of an existing digital learning module. You use the assessment units as your point of departure and adapt them to your needs by selecting the criteria, research methods and number of iterations that suit your objectives best.

Reference use: In this case the checklist criteria, assessment units and the handbook as a whole are used as a reference document. In this capacity the assessment units can provide clear answers to concrete questions or serve as a useful aid in the conceptualization and design phase of digital learning materials.

STEP 3: ASSESS THE BLENDED LEARNING CONTEXT

In line with the strong belief in blended learning that dominates QuADEM's view on e-learning, we highly recommend that each quality assessment start with a thorough description and assessment of the blended learning context in which the digital learning module is being used.


In order to stimulate a proper examination of the blended learning context, the assessment unit on blended learning is offered to you as a separate section (see the chapter 'Blended Learning'). Once you have gained better insight into the blended learning context, you can continue with the selection of the other assessment units.

STEP 4: SELECT THE RELEVANT EVALUATION UNITS

Select the assessment units you wish to use. Each of these units holds all the information necessary to assess one of the components that determine the overall quality of the digital learning module (see box 3). The selection of the assessment units will depend on the scope and focus of the assessment, but also on the nature of the digital educational material under review. Not all learning modules benefit from every assessment unit available, since different digital learning modules host different content using different operating systems. We recommend selecting a number of relevant units while leaving others aside. Some units may also complement each other; you should consider using them simultaneously to maximize their potential. This is especially the case for the Learning Styles Unit, the Writing Styles Unit and the Testing Unit.

Box 3: QuADEM evaluation units

Learning Objectives Unit: To assess the learning objectives of the digital learning module.
Content Unit: To assess the textual, visual and aural input that makes up the digital learning module.
Style and Language Unit: To assess the choices in the use of language (formulation, terminology, style…) of the digital learning module.
Intercultural Aspects Unit: To assess the intercultural transferability of the digital learning module.
Usability Unit: To assess the efficiency and effectiveness of the digital learning module.
Learning Styles Unit: To assess to what extent the digital learning module respects and supports different learning styles.
Writing Styles Unit: To assess to what extent the digital learning module respects and supports different writing styles.
Testing Unit: To assess the methods the digital learning module uses to evaluate learner performance.
Examples Unit: To assess the use of examples in the digital learning module.
Multimedia Unit: To assess the use of audio, video, pictures, animation and interactive elements in the digital learning module.
Questionnaires Unit: To assess the use of questionnaires in the digital learning module.

Additional information about the scope of each assessment unit can be found in the section ‘summary’ at the beginning of each unit.

STEP 5: CHECK THE PRECONDITIONS


Before starting the actual assessment you should check whether all preconditions are fulfilled. The preconditions are listed at the beginning of each evaluation unit. If not all boxes can be ticked, there are two possibilities: either there will not be much use in applying this evaluation unit to the digital learning module at hand, or some aspects of the learning module will need revision before proceeding. If one of the preconditions cannot be ticked, this matter should be resolved before filling in the checklist itself. Once all the preconditions are fulfilled, the assessor can continue the evaluation procedure.

STEP 6: SELECT THE RESEARCH METHODS

To be able to complete the checklist – which is the core of each evaluation unit – you need information on every checklist criterion that you have decided to include in the assessment. To elicit the relevant information, you need to select the right research method(s). The sections 'summary' and 'checklist' at the beginning of each evaluation unit provide you with a shortlist of the research methods that can be used to evaluate the unit's component. Examples of such research methods are card sorting, interviews, focus group discussions and keystroke logging. The methodological compendium provides an explanation of each of these research methods. It is up to the assessor to determine the appropriate combination of methods, depending on the focus and scope of the assessment. Often this decision will also be influenced by limitations the assessor experiences in the selection of the respondents (see step 7).

STEP 7: SELECT THE RESPONDENTS

Once you have decided which research methods to use, you have to determine who you will use them on. In other words: who will be your respondents? In the section 'summary' at the beginning of each assessment unit, you will find a shortlist of all stakeholders that can provide you with relevant information on the unit's component. Those are your candidate respondents. Depending on the focus and scope of the assessment, and in accordance with the research methods you have chosen, you will have to make a selection. There are two main groups of respondents:

Representative end users: actors that actually (will) have to use the digital learning module, such as students, tutors and teachers.

Experts: persons that can be involved in the assessment because of their expertise on the specific component under review, such as experts in the field of multimedia, cultural differences, language or pedagogy.

It is up to the assessor to decide which respondents, and how many, to address. It should also be mentioned that different types of respondent may need a different methodological approach.

STEP 8: CUSTOMIZE THE SELECTED RESEARCH METHODS

At this stage you have a clear view of what the assessment is about, you have decided what research methods you will use and you have selected your respondents. To be successful you need to customize your research methods to the specific context you will be using them in. This means you should adjust each research method to the respondents and to the topics you are interested in. Ask yourself the following questions:

Does the respondent need any specific background information in order to be able to comment on a certain topic or to answer certain questions? If so, can you provide the respondent with this information without distorting the research results?

Will this research method, as it is planned, prompt the respondent to give information on the specific topic you are interested in? If not, how can you steer the respondent in a more relevant direction without distorting the research results?

Does this research method overlap with one of the other research methods selected? If so, can you combine them and avoid repetition?

STEP 9: DESIGN YOUR SCRIPT

At this stage you know which components and which checklist criteria you are including in the assessment. You have decided which research methods you will be using and how. You know who your respondents will be. You have all the pieces of the puzzle, but how do they fit together? The next step is to design a detailed, chronological description of the course of your assessment sessions. This means you need to think about how (in what order) you will combine the different topics, research methods and respondents in order to make the research go as smoothly as possible.

STEP 10: COMPLETE THE CHECKLISTS

Once all the assessment sessions are concluded, the assessor needs to complete the checklists, using a 5-point Likert scale. This means the assessor will have to translate the research results obtained through qualitative research into numeric scores. There are two possible scenarios for this transition, each with its own merits and drawbacks:

Scenario 1: The assessor interprets and summarizes the research results and completes the checklist on his/her own. In this case the translation of the respondents' opinions into scores is based entirely on the assessor's judgement, which means the risk of bias is prominent. To avoid misrepresentation, the assessor should establish and define his/her standards beforehand as much as possible.

Scenario 2: The assessor completes the checklist in cooperation with each respondent by discussing each of the checklist criteria, taking into account the respondent's responses during previous research tasks, and by asking the respondent to summarize these into a numeric score (on a scale from -2 to +2). In this case different respondents may interpret the checklist criteria differently, which would also distort the research results. The assessor should therefore determine beforehand how to interpret and explain the different checklist criteria. Once all the respondents have assisted in filling out the checklist, the assessor can use the average scores as his/her final checklist scores.
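Where scenario 2 is followed, the per-criterion averaging can be written down in a few lines. The sketch below is purely illustrative: the data layout (one list of scores per respondent, with None standing for N/A) is an assumption made for this example and is not prescribed by QuADEM.

```python
# Illustrative sketch of scenario 2: averaging respondent scores per criterion.
# Each inner list holds one respondent's ratings (-2..+2); None marks N/A.

respondent_scores = [
    [2, 1, -1, 0, 1],     # respondent 1
    [1, 1, 0, None, 2],   # respondent 2 rated criterion 4 as N/A
    [2, 0, -1, 1, 1],     # respondent 3
]

final_scores = []
for criterion in zip(*respondent_scores):   # walk the criteria column-wise
    rated = [s for s in criterion if s is not None]
    final_scores.append(sum(rated) / len(rated) if rated else None)

print([round(s, 2) for s in final_scores])  # [1.67, 0.67, -0.67, 0.5, 1.33]
```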

STEP 11: USE THE RESULTS, SOLVE PROBLEMS

The checklist scores are summarized in a single letter score per unit. In combination with the observations made during the qualitative research phase, this should give you a clear picture of the strengths and weaknesses of the digital learning module under review.


The final step is to use this information to solve problems and carry out improvements. Some of the most common problems or shortcomings are discussed in the ‘manual’ at the end of each unit. In the manual you will also find suggestions on how to solve problems or how to revise and improve the digital learning module.


BLENDED LEARNING

QuADEM strongly advocates a blended learning approach. In line with this strong belief in the benefits of blended learning, we highly recommend that each quality assessment start with a thorough description and assessment of the blended learning context in which the digital learning module is being used.

THEORETICAL BACKGROUND

Despite the fact that 'blended learning' is becoming increasingly popular in contemporary educational practice, there seems to be very little consensus regarding what 'blended learning' actually is. In the QuADEM framework, 'blended learning' refers to a combination of face-to-face and computer-mediated instruction that optimizes learning by applying a number of learning technologies to match various learning and writing styles (Graham, 2006).

The so-called Information Age, mostly driven by digital media, has made it both possible and necessary to use these digital media not only as tools for acquiring and processing information but also as a means of (de)constructing knowledge (Kerres and De Witt, 2003). Computers and the Internet have become the technical basis for digital learning environments that challenge the traditional classroom in many ways. While there have been attempts, past and present, to replace the traditional classroom with new media, the limitations of learning and instruction occurring solely in digital environments have also become clearer. Blended learning brings together the most useful elements of both the traditional classroom and digital learning environments in the context of a specific educational setting (Rovai and Jordan, 2004).

The traditional classroom is limited in time, location, use of materials and peer contact. Digital learning environments largely lack such limits, and this allows the conditions of learning to be optimized at an individual level. The introduction of a digital learning module into a learning process entails more freedom, more control and more responsibility for the student, since it allows the learner to decide what learning approach to take, how to use the digital educational material and how much time to invest. Such a learning environment encourages learners to discover and develop the personal relevance of the knowledge they acquire and stimulates them to adopt a more active learning attitude. With that kind of potential impact on the unfolding of each learner's individual way of acquiring knowledge, blended learning can be seen as a truly holistic approach to education (Singh, 2003).

The art of blended learning is to find a smooth and powerful combination of online and offline components and instruction. Consequently, the quality of a digital learning module does not depend only on what is offered within the digital module itself. The combination of components (online and/or offline) needs to be a perfect match for the module's learning objectives, and the potential of the interaction between the components needs to be fully exploited. For example, for complex learning activities that benefit from face-to-face open discussion in larger groups, a traditional classroom context might be better suited than an online discussion forum. Or, while a teacher may have instructed the students to examine and process the theory on a certain topic on their own through the use of a digital learning module, he may decide to deal with the feedback on the accompanying assignments in class, because it will allow the students to learn from each other's mistakes and give him the opportunity to review sections of the theory that have proven to be problematic.

Because it is so important to have the right blend between the different components, a QuADEM quality assessment should explicitly take into account both the online and offline components of the planned use of the digital learning module. This means the assessor's first step should be to look beyond the module itself and assess the way in which the digital learning module is integrated into the overall learning process.

BLENDED LEARNING CONTEXT

You can use the three themes formulated below as guidelines to examine and describe the blended learning context in which the digital learning module is being used.

DESCRIBE COURSE SETTINGS

Find out whether the digital learning module is part of a bigger course. If so, describe 1) the overall course objectives and 2) the place of the digital learning module in relation to the other components of the course (max. 150 words).

DESCRIBE THE DIFFERENT COMPONENTS

Identify and describe the different online and offline components that are part of the use of the digital learning module under review. The online components are the different sections and applications of the digital learning module. They can include sections on theory, practice and cases, a blog the students have to maintain, a discussion forum in which they have to participate, or the self-tests they can fill out. The offline components are all the offline activities in support of the module's learning objectives, such as ex cathedra classroom teaching, class discussions, class trips, group work, presentations, etc. (max. 150 words).

DESCRIBE THE BALANCE BETWEEN THE COMPONENTS

Describe the balance between the different (online and offline) components that are part of the use of the digital learning module. Take into account criteria such as 1) the time available for each component and 2) the variation between the components (max. 150 words).

ASSESSMENT OF BLENDED LEARNING

Once you have gathered all the necessary information to give a complete description of the blended learning context in which a digital learning module is being used, you can use this description to assess whether this blended learning context lives up to a certain standard.

SUMMARY

Definition: Blended learning combines face-to-face and computer-mediated instruction in order to optimize learning by applying a number of learning technologies to match various learning and writing styles. The blended learning context refers to the combination of and the interaction between the different offline and online components that the planned use of the digital learning module entails.

Scope: The unit can be used to evaluate the blended learning context in which a digital learning module is being used.

Methods: Think aloud protocol; Interview

Respondents: Tutors; Pedagogical experts


CHECKLIST

To assess the blended learning context of the digital learning module, you can use the criteria listed in the checklist below. Use the think aloud protocol and interviews as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then check the boxes with the scores that reflect the respondents' opinions best.

2 = agree entirely; 1 = tend to agree; 0 = neither agree nor disagree; -1 = tend to disagree; -2 = disagree entirely; N/A = not applicable

Blended learning context (score each criterion from -2 to 2, or N/A)

1. The digital learning module can be completed within the available timeframe.
2. The total time spent on the module is within reasonable boundaries.
3. The proportion of time spent on each of the components is adjusted to the importance of the corresponding learning objectives.
4. The combination of components is sufficiently varied.
5. Each of the learning objectives can be pursued through the most appropriate components.

Additional comments:

SCRIPT

Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session, summing up the different possible steps you will have to take as well as offering suggestions on how you can smoothly incorporate the different research methods into one session.


INTRODUCTION

Start by briefly informing the respondents about the set-up of the assessment session and the different research procedures they will participate in. Then provide the respondents with a detailed description of the overall blended learning context of the module they are about to review (based on your descriptions under 'Blended learning context' above).

THINK ALOUD

Ask the participants to carry out a limited number of specific tasks while reading the web text and thinking aloud. You can record the audio and/or video of the session as a back-up. This method allows you to get a picture of the participants' mental mapping process. Some examples:

Browse through the digital learning module for 5 minutes and comment on the overall set-up.

Read (this part of) the text and comment on it. Which parts of the text will need more explanation in class? Would you change, add or delete some components?

INTERVIEW

After the think aloud protocol, you can interview the respondents about the problems they encountered and the comments they made. You should also address the topics in the checklist, taking into consideration the information provided by the description of the blended learning context. After all respondents have taken part in your assessment, you will have to translate their combined judgement into the scores on the unit's checklist.

MANUAL

This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to remedy certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Blended learning

Criterion 1: The digital learning module can be completed within the available timeframe.
Explanation: Lack of time will cause unnecessary stress and frustration among the users, which will reflect negatively on user satisfaction.

Criterion 2: The total time spent on the module is within reasonable boundaries.
Explanation: The module's workload should be in proportion to its importance and to the time spent on other modules within the course.

Criterion 3: The proportion of time spent on each of the components is adjusted to the importance of the corresponding learning objectives.
Explanation: Some learning objectives may be more important or more difficult to achieve, and should therefore be given more time and attention. For example, if the main learning objective is to be able to give a good oral presentation, overcoming stage fright and acquiring practical presenting skills (e.g. through class presentations) will be more important (and often more difficult) than learning the theory on presenting by heart. Most of the time and effort should go to the components that are devoted to these practical skills. But in a module that aims to teach you everything about human physiology, most time should be reserved for components that can help you to learn the names and positions of all bones and muscles.

Criterion 5: Each of the learning objectives can be pursued through the most appropriate components.
Explanation: The overall learning objective should be broken down into subordinate learning objectives. For each of these smaller, more concrete learning objectives, the most appropriate instruction mode needs to be selected. The end result of this selection process should be a combination of online and offline components that allows the users to realize the overall learning objectives in the most efficient way.

SCORE

After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor's comments:


ASSESSMENT UNITS

LEARNING OBJECTIVES

A 'learning objective' is a statement that describes, in specific and measurable terms, which knowledge, skills and attitudes a learner should be able to exhibit as a result of completing specific learning material or a specific instructional activity.

SUMMARY

Definition: A learning objective is a statement that describes, in specific and measurable terms, which knowledge, skills and attitudes a learner should be able to exhibit as a result of completing a specific instructional activity.

Scope: The unit can be used to evaluate the learning objectives that are set out in the digital learning module.

Methods: Plus/Minus method; Think aloud protocol

Respondents: Tutors; Pedagogical experts

PRECONDITIONS

Before proceeding to the actual assessment, check whether all preconditions are fulfilled.

1. The learning objectives are communicated to the users.

CHECKLIST

To assess the learning objectives of the digital learning module, you can use the criteria listed in the checklist below. Use the plus/minus method and a think aloud protocol as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then check the boxes with the scores that reflect the respondents' opinions best.


2 = agree entirely; 1 = tend to agree; 0 = neither agree nor disagree; -1 = tend to disagree; -2 = disagree entirely; N/A = not applicable

Learning objectives (score each criterion from -2 to 2, or N/A)

1. The learning objectives of the digital learning module support the overall course objectives.
2. The main learning objective of the digital learning module is made operational by subordinate learning objectives.
3. The learning objectives are student-centred.
4. Each learning objective targets one specific aspect of the expected learning performance.
5. The learning objectives are measurable.
6. The learning objectives can be achieved effectively by going through the digital learning module.
7. The learning objectives can be achieved efficiently by going through the digital learning module.

Additional comments:

SCRIPT

Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session, summing up the different possible steps you will have to take as well as offering suggestions on how you can smoothly incorporate the different research methods into one session.


INTRODUCTION

Start by briefly informing the respondents about the set-up of the assessment session and the different research procedures they will participate in.

PLUS/MINUS METHOD

Ask the respondents to attentively read (a part of) the (printed) text of the website; in this case this will most probably be the section that describes the learning objectives. At parts of the text they value negatively, they should note down a minus (-); at parts they value positively, they should note down a plus (+). Through the plus/minus method, respondents are invited to give very specific comments on the learning objectives as they are. For example, respondents could note down a minus next to one of the learning objectives that, according to them, is badly formulated or too ambitious. Or they could note down a plus next to a section in which the main learning objective is broken down into smaller, subordinate learning objectives, because they like the way it makes the whole look more feasible. The assessor can - alone or together with the respondent - generalize these comments to give an overall view of the quality of the formulated learning objectives.

THINK ALOUD

Ask the respondents to carry out a limited number of specific tasks while reading the web text and thinking aloud. You can record the audio and/or video of the session as a back-up. This method allows you to get a picture of the respondents' mental mapping process. You can consider combining the think aloud protocol with the plus/minus method: this would mean you instruct the respondents to annotate the web text with pluses and minuses while reading and thinking aloud. You can also point out the learning objectives of the digital learning module to the respondents and subsequently ask them to locate the different sections/components of the learning module that can help to achieve each of them. After all respondents have taken part in your assessment, you will have to translate their combined judgement into the scores on the unit's checklist.

MANUAL


This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to remedy certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Learning objectives

Criterion 1: The learning objectives of the digital learning module support the overall course objectives.
Explanation: A course objective is a statement of the intended general outcome of an instructional unit or program and describes a more global learning outcome. A learning objective is a statement of one or several specific performances which contribute to reaching that global goal.

Criterion 2: The main learning objective is made operational by subordinate learning objectives.
Explanation: The overall learning objective should be broken down into smaller but more specific, more concrete - and less overwhelming - learning objectives. Example: the main learning objective of a digital learning module on the topic 'Curriculum Vitae' could be: 'At the end of the module you will be able to create a good CV'. This learning objective can be broken down into subordinate learning objectives such as: You will be able to distinguish between different CV types; be able to use the different CV types appropriately; be able to write a CV appropriate to your personal history and experience; be able to adapt a CV to the requirements of a specific job.

Criterion 3: The learning objectives are student-centred.
Explanation: A learning objective should be written for the user, not the instructor. An effective learning objective will explain expectations of student behaviour, performance, or understanding. To ensure that learning objectives are student-centred, a good objective should appropriately complete the statement "The student will..." or "You will…".

Criterion 4: Each learning objective targets one specific aspect of the expected learning performance.
Explanation: Learning objectives should be specific, target one expectation or aspect of understanding, and highlight the conditions under which the user is expected to perform the task.

Criterion 5: The learning objectives are measurable.
Explanation: To ensure that learning objectives are measurable, avoid using verbs that are vague or cannot be objectively assessed. Use active verbs that describe what a user will be able to do once learning has occurred.

Criterion 6: The learning objectives can be achieved effectively by going through the digital learning module.
Explanation: It has to be possible for the target audience to achieve all learning objectives by using the digital learning module. If the learning objectives cannot be achieved through the use of the module's components, either the module or the learning objectives need to be revised.

Criterion 7: The learning objectives can be achieved efficiently by going through the digital learning module.
Explanation: It has to be possible for the audience to achieve all learning objectives within a reasonable timeframe and with reasonable effort by using the digital learning module. If the learning objectives cannot be achieved efficiently through the use of the module's components, either the module or the learning objectives need to be revised.

SCORE

After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor's comments:


CONTENT

In plain words, 'content' is the substance that makes up a digital learning module: the textual, visual or aural material that the user encounters when browsing through the digital learning module. Although it may include images, sound, video and animation, when talking about web content we are referring in the first place to content of a textual nature. 'Content' in QuADEM terminology is therefore the textual information the digital learning module provides.

In optimal conditions the choice of content is finely attuned to the learning objectives of the educational material, the needs of the target audience and the requirements and possibilities of the digital medium. This means that the selection of the content is determined, for one, by the competences the digital learning module wishes to develop. A second crucial factor determining what content should be treated, and how, are the characteristics of the target audience. These may include the prior knowledge of the target audience as well as their skills, attitudes, experiences and learning styles (Lindeboom, 1994). The audience's learning styles in particular deserve attention. A digital learning module that seeks to cater to users with different learning styles needs to integrate different types of content: a logically structured and clear explanation of the theory, but also links to practical uses and applications of certain ideas or theories, examples demonstrating the theory, as well as exercises or cases that invite the user to learn by doing (De Galan, 2003). Finally, it is important to realize that the content should be in line with the digital medium that is being used. Too often digital learning material is a digital reproduction of paper-based educational material. For a successful digital learning module, the content should be adapted to the limits and opportunities of the digital medium.

The presentation of the selected content is the next important aspect. The content should be presented as much as possible in reader-based prose. Examples, tasks and cases should be geared to the audience's perception of their environment in order to increase their meaningfulness and to help motivate the learner. Text should be organized around specific themes, with major themes clearly distinguished from minor themes, the relationships between themes made explicit and conclusions highlighted. Cues can be a valuable tool in making the organization of the text vivid and clear to the audience (Flower, 1989).

To assess to what extent a digital learning module succeeds in presenting the right content in the right way - taking into account the selection, organisation, relevance, accuracy and presentation of the content - the QuADEM Content Unit was developed.

SUMMARY

Definition: In general terms, web content refers to the textual, visual or aural input that makes up the digital learning material. In the QuADEM framework, content is the textual information the digital learning module provides.

Scope: The unit can be used to assess textual web content. For the assessment of other content types such as audio and video, consult the multimedia unit.

Methods: Card sorting; Think aloud protocol; Plus/Minus method; Interview; Focus group

Respondents: Representative end users (students, tutors, teachers); Content/knowledge experts

PRECONDITIONS

No specific preconditions need to be taken into account for this unit.

CHECKLIST

To assess the content of the digital learning module, you can use the criteria listed in the checklist below. Use the plus/minus method, a think aloud protocol, card sorting, interviews and focus group discussions as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then check the boxes with the scores that reflect the respondents' opinions best.

2 = agree entirely; 1 = tend to agree; 0 = neither agree nor disagree; -1 = tend to disagree; -2 = disagree entirely; N/A = not applicable

Content (score each criterion from -2 to 2, or N/A)

1. The content is adapted to the learning objectives.
2. The content is adapted to the target audience.
3. The content is objective.
4. The content is up to date.
5. The content is well-organized.
6. The content is correct and accurate.
7. The content is complete.
8. The content is easy to understand.
9. The content is focused and specific.
10. The content is credible.

Additional comments:

SCRIPT

Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session, summing up the different possible steps you will have to take as well as offering suggestions on how you can smoothly incorporate the different research methods into one session.

INTRODUCTION Start by briefly informing the respondents about the aim and the set-up of the assessment session and the different research procedures they will participate in.

CARD SORTING Give the participants a number of cards. Most of the cards should be labelled - each referring to a specific content item in the tree structure of the digital learning module - while some should be left blank for the participants to fill in as they choose. Instruct the respondents to logically arrange the cards, thus showing how they would structure the content of the digital learning module. This method is especially suitable to determine whether the respondents feel the provided information is well organized (criterion 5).

PLUS/MINUS METHOD Ask the participants to attentively read (a part of) the (printed) text of the website. At parts of the text they value negatively, they should note down a minus (-); at parts they value positively, a plus (+). Respondents might, for example, note down a minus when they disagree with the provided information or when they feel something is poorly explained. They could note down a plus when they like the way the information is presented or when they come across a particularly good explanation.


Through the plus/minus method, respondents will comment on very specific parts of the provided information. The assessor can - alone or together with the respondent during an interview - generalize these comments to get an overall view of the quality of the content.
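Once the annotations have been transcribed, a simple tally per text section quickly shows where the negative judgements cluster. The sketch below is only an illustration of such a tally; the (section, mark) format is an assumption, not a prescribed QuADEM procedure.

    from collections import Counter

    # Hypothetical transcribed annotations: (section title, '+' or '-').
    annotations = [
        ("Theory: argumentation", "+"),
        ("Theory: argumentation", "-"),
        ("Exercise 2", "-"),
        ("Exercise 2", "-"),
    ]

    balance = Counter()
    for section, mark in annotations:
        balance[section] += 1 if mark == "+" else -1

    # Sections with the most negative balance deserve attention first.
    for section, score in sorted(balance.items(), key=lambda item: item[1]):
        print(f"{score:+d}  {section}")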

INTERVIEW When the respondents have finished marking the text with plusses and minuses, you can conduct an interview. During the interview you can discuss the marked text, the respondent's expectations for the content of the module, and the different checklist criteria.

THINK ALOUD Ask the participants to carry out a limited number of specific tasks while reading the web text and thinking aloud. You can record the audio and/or video of the session as a back-up. This method allows you to get a picture of the mental mapping process of the participants. Some examples:
- Browse through the digital learning module for 5 minutes.
- Read (this part of) the text and comment on it. Which parts of the text will need more explanation in class? Would you change, add or delete some parts?
- Complete one of the exercises.
- Read (one of) the case(s) and evaluate whether the theory is sufficient and clear enough to deal with this case.

FOCUS GROUP After the individual interviews and/or after the think aloud protocol you can organize a small group discussion about the content of the learning module. You should moderate the discussion and introduce new topics (use the checklist criteria as your guideline). You can also ask the respondents to discuss whether their expectations for the content of the learning module were met. After all respondents have partaken in your assessment, you will have to translate their combined judgement into the scores on the unit's checklist.

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Content

Criteria and explanation

1. The content is adapted to the learning objectives.
The content of a digital learning module should entirely support the learning objectives of the module. The choice of content should answer the questions: what competences need to be developed, and to what level?

2. The content is adapted to the target audience.
The content of a digital learning module should relate to its target audience. This means a thorough analysis of the target audience - its prior knowledge, interests, learning styles and attitudes - should underlie the choice of content.

3. The content is objective.
The content of a digital learning module (or any educational material for that matter) needs to be objective in order to be credible. There must be a clear distinction between opinions and facts. When certain opinions are expressed, it must be clearly stated by whom they are expressed.

4. The content is up to date.
It should be clearly stated when the web content was produced, when it was updated, whether it is still valid, and whether it is updated regularly. A regular update of links is also a must.

5. The content is well-organized.
The information provided by a digital learning module should be logically organized and well-structured. The way the information is organized and structured should be clearly visible to the module's users, with titles, subtitles, paragraphs, links between text parts, and connections to other relevant sections within the module.

6. The content is correct and accurate.
The provided information should be precise and exact. It should also take into account the user's prior knowledge of the subject.

7. The content is complete.
All the (theoretical) information necessary to achieve the learning objectives of the digital learning module should be present within the module.

8. The content is easy to understand.
The target audience should be able to understand the (theoretical) information provided. This means the prior knowledge of the target audience needs to be taken into account when compiling the content of the learning module.

9. The content is focused and specific.
The information offered should be detailed and focused. This can be promoted by including a summary of the highlights of the theoretical knowledge necessary to achieve the learning objectives. While the given information should address a well-specified subject, it should also include the background knowledge needed for contextualisation.

10. The content is credible.
The information should be trustworthy and convincing. This means the provided information should be based on reliable sources and fitted with complete references.

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor's comments:

STYLE & LANGUAGE 'Style and Language' refers to the many ways (grammatical, structural, stylistic, graphic, etc.) in which the presentation of the information included in digital learning modules corresponds to the characteristics and the needs of a digital learning environment. The importance of using appropriate style and language in a digital learning environment lies in their potential to ensure clear and comprehensible instructions as well as approachable and digestible content. In this sense the language applied in a digital module has the potential of reducing the cognitive effort of the module's user, thus making the processing of the information faster and more efficient (Morkes & Nielsen, 1997; McCracken et al., 2003).

The beneficial potential of style and language features can be realized by employing a style which encourages scanning the content for key information and which steers clear of the risk that the information appears subjective. The use of a style that is too 'promotional' or that puts the instructor's credibility into question should be avoided (McCracken et al., 2003). With regard to grammar, the language used in digital modules is expected to adhere to the general principles of coherence.

The structure used in the presentation of the information is another important factor. Well-structured instruction sentences 'look backward' as well as 'forward'. Each sentence begins by linking itself firmly to the sentence that comes before. If a link between sentences does not seem firm enough, an introductory clause or phrase is used to connect one idea to the other. Furthermore, putting the old information at the beginning of the sentence and the new information at the end follows the principle of moving from old to new and accomplishes two important things. First, the reader is on solid ground - he or she moves from the familiar to the unknown. Second, because of a general tendency to give emphasis to what comes at the end of a sentence, the reader rightfully perceives that the new information is more important than the old. In addition, the use of a reasonable amount of repetition or reiteration creates a sense of unity. Repeating key words and/or reiterating phrases at appropriate moments helps to guide the reader through the instruction, while using clear transition markers makes the reader aware of a turn in the argument, a shift in emphasis, etc. (Morkes & Nielsen, 1997; McCracken et al., 2003).

The assessment of a digital learning module must evidently account for the issues of style and language, because they substantially add to the quality of the content. The Style and Language Unit therefore aims to safeguard the style and language standards described above. It is valid for the assessment of instructive style in all digital learning/writing modules, designed for native as well as non-native users of the instruction language.

SUMMARY

Definition: Style and language involve the particular choices (grammatical, structural, stylistic, graphic, etc.) made by individuals and social groups in their use of language.

Scope: The unit is valid for the assessment of style and language in all digital educational materials.

Methods: Expert review, plus/minus method.

Respondents: Representative end users; tutors; language experts.

PRECONDITIONS No specific preconditions need to be taken into account for this unit.

CHECKLIST To assess the style and language of the digital learning module, you can use the criteria listed in the checklist below. Use the plus/minus method as a research method to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that reflect the respondents' opinions best.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Style and Language
Scores: -2 / -1 / 0 / 1 / 2 / N/A

1. The style and language make the information appear sufficiently objective.
2. The style and language are appropriate to the audience.
3. The style and language are accurate and correct.
4. The style and language are consistent.
5. The style and language are attractive.

Additional comments:

SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session, summing up the different possible steps you will have to take as well as offering suggestions on how you can smoothly incorporate the different research methods into one session.

INTRODUCTION Start by briefly informing the respondents about the aim and set-up of the assessment session. Explain the research procedure they will be participating in.

PLUS/MINUS METHOD Ask the participants to attentively read (a part of) the (printed) text of the website. At parts of the text they value negatively, they note down a minus (-); at parts they value positively, a plus (+). After all respondents have partaken in your assessment, you will have to translate their combined judgement into the scores on the unit's checklist. Additionally you can use the respondents' comments to draft an itemized overview of the necessary changes.

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Style and Language

Criteria and explanation

1. The style and language make the information appear sufficiently objective.
A style that is too 'promotional' or that puts the instructor's credibility into question makes the digital learning module look amateurish and unreliable. Opt for a more professional style, but remember that this is not the same as a stiff or uninviting style.

2. The style and language are appropriate to the audience.
It is important that the formulation and presentation of the information fit in with the prior knowledge of the audience. Academic/professional jargon that might not be familiar to the target audience should be avoided. Different audiences need to be addressed differently. For example, one could apply a more formal style when addressing teachers and a more informal style when addressing students. However, it is always best to address the audience directly and to use active formulations.

3. The style and language are accurate and correct.
A formulation is accurate when it enables the reader to get a clear and precise image of the author's intention. The formulation should therefore be:
- precise in terminology;
- precise in references;
- precise in indication words;
- consistent in terminology.
It is also important that there are no language errors in the text: no punctuation, spelling or grammar mistakes.

4. The style and language are consistent.
Inconsistent style and/or terminology will cause confusion on the part of the reader.

5. The style and language are attractive.
A text is easier to read when the formulation has a certain liveliness to it. This can be achieved by:
- variation in sentence and word formulation;
- variation in sentence structure (a constant repetition of subject-verb-object sentences will create a dull text);
- the use of metaphors;
- concretizing.

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor's comments:


INTERCULTURAL ASPECTS In the QuADEM handbook the term 'intercultural aspects' refers to culture-specific points within a digital learning module. Intercultural aspects can pertain to non-specialist but especially to specialist fields. In the former, intercultural aspects in the ethnolinguistic/social sense determine the applicability of the digital educational material in a cross-national and international context. In the latter, they make up the cultural core of language for specific purposes (Gotti, 2003), underlying the legal, commercial, political and institutional discourse used in particular workplaces. In this case they determine the transferability of the digital educational material to a different subject field or to a different organisational (or corporate) culture.

Culture-specific characteristics, assumptions and values underlie all of our thinking, feeling and behaviour patterns. These culturally preprogrammed patterns of thinking, feeling and acting predispose and partly predetermine our perception of power distance and gender roles, our tolerance towards ambiguity and diversity (Hofstede & Hofstede, 2004), our communication styles, etc. However, the different users of a digital learning module may not share the same cultural framework. Therefore, in order to avoid misunderstandings and communication failure, content designers and assessors of digital educational materials need to display a considerable amount of intercultural awareness or cultural sensitivity.

In order to cater to a culturally diverse audience, aspects that relate to - mainly - national cultures should be scrutinized. In particular, it is crucial to make explicit the dangers of national stereotyping and to always provide sufficient culture-related background information for the performance of particular tasks. Digital learning modules related to writing should also include specific examples of writing conventions that differ from country to country. As far as instructions are concerned, equal clarity for learners representing various degrees of proficiency in the target language constitutes a major demand. This involves making explicit the fact that the target language may include fixed forms (e.g. honorifics) which determine the level of formality in a given writing task. Moreover, it is strongly recommended that the task content of the educational material acknowledges the social norms of the learner's native language and naturally draws the learner's attention to cross-national differences in prestige forms and formality levels.

The Intercultural Aspects Unit is valid for the assessment of the cross-cultural applicability of digital learning modules, in particular those related to writing development. It aspires to uncover the cross-cultural variations and traditions behind the writing styles and procedures in different countries, sectors, or organizations.

SUMMARY

Definition: Intercultural aspects are culture-specific points which can impede or allow for the implementation of a module in a cross-national context. Intercultural aspects can also determine the transferability of subject specialist material.

Scope: The unit can be used to evaluate the cross-cultural applicability of digital educational material (whether or not in the context of blended learning). When used to assess a digital learning module in the field of writing development, the unit can be used more specifically:
- to uncover the intercultural variations and traditions behind the writing styles and procedures in different countries;
- to make the learner aware of the cultural limitations of certain strategic and genre recommendations in a module;
- to relate certain communication practices to the characteristics of a specific corporate culture and to explain cultural diversity and limitations.

Methods: Expert review, plus/minus method, interview.

Respondents: Representative end users (teachers, tutors, students in different countries); experts in cultural studies, globalization and the world media; content experts.

PRECONDITIONS Before proceeding to the actual assessment, check whether all preconditions are fulfilled.

1. The target audience of the digital learning module is well-defined and clearly stated.

CHECKLIST To assess the intercultural aspects of the digital learning module, you can use the criteria listed in the checklist below. Use the plus/minus method and interviews as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that reflect the respondents' opinions best.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Intercultural aspects
Scores: -2 / -1 / 0 / 1 / 2 / N/A

1. The digital learning module avoids national stereotyping.
2. The digital learning module draws attention to communication conventions that differ depending on the country.
3. The instructions for the particular tasks include sufficient culture-related background information.
4. The digital learning module includes contextualization that refers to the cultural limits of its recommendations.
5. The learning materials draw the learner's attention to cross-organisational differences.

Additional comments:

SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session, summing up the different possible steps you will have to take as well as offering suggestions on how you can smoothly incorporate the different research methods into one session.

INTRODUCTION Start by briefly informing the participants about the set-up of the assessment session and the different research procedures they will participate in.

PLUS/MINUS METHOD Ask the respondents to attentively read (a part of) the text on the website. Instruct them to pay particular attention to the recommendations and examples. At each part of the text they value negatively, they note down a minus (-); at each part they value positively, a plus (+).

INTERVIEW When the respondents have finished marking the text with plusses and minuses, you can do an interview. By discussing the marked text and especially by revealing the reasons for the respondent’s likes and dislikes, you may discover intercultural aspects within the module.


After all respondents have partaken in your assessment, you will have to translate their combined judgement into the scores on the unit's checklist. Additionally you can use the respondents' comments to draft an itemized overview of the necessary changes.

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Intercultural aspects

Criteria and explanation

1. The digital learning module avoids national stereotyping.
Avoid presenting professional habits and personality traits as nation-specific.

2. The digital learning module draws attention to communication conventions that differ depending on the country.
(Communication) conventions may differ from one cultural setting to another. Such differences can relate, for example, to the directness and indirectness of communication, to honorifics, or to formality levels. A digital learning module could discuss a few country-specific conventions in order to offer a cross-national selection of examples.

5. The learning materials draw the learner's attention to cross-organisational differences.
If possible, include tasks to practice the organisation- or company-relevant skills.

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor's comments:

USABILITY In the QuADEM framework, 'usability' refers to the extent to which a digital learning module can be used by its target audience to achieve specified learning objectives with efficiency, effectiveness and satisfaction (ISO, 1998).

Two criteria determine the overall usability of a digital learning module. On the one hand, the module's objective qualities are decisive. A functional layout, a user-friendly interface and an efficient navigation system are prerequisites for a 'usable' learning module. Practical details, such as functioning hyperlinks, a search function or printable pages, also matter. On the other hand, the users' subjective perception of the module's usage is equally important. Put simply, learners judge a digital learning environment usable when it meets their initial expectations and when tasks can be performed easily. Hence, a good digital learning module is finely attuned to the needs, expectations and motivations of its target audience (Ardito et al., 2005; Granić, 2008).

In order to cover these two dimensions of usability, a usability assessment needs to be based on objective measures of effectiveness and efficiency - which translate into user performance - as well as on users' subjective judgement of the module's usage - which translates into user satisfaction (Granić, 2008). Consequently, usability testing is a multi-method process that often incorporates a range of research methods, from task analysis and time logging (to measure user performance) to interviews and focus group discussions (to assess user satisfaction).

In order to support and facilitate the usability assessment of digital educational material, the Usability Unit was developed. It will assess the accessibility, efficiency and clarity of the module, based on objective criteria as well as on users' subjective judgement. Usability tests are at their most effective when they are used in a cyclical manner: one test will most likely improve the product, but it is advisable to redo the test to be sure of the quality of the improvements.

SUMMARY

Definition: Usability refers to the efficiency and effectiveness of a digital learning environment and to the user satisfaction it creates. In digital learning, 'usability' can be defined as the meeting point between the individual needs, expectations and motivations of learners and learning facilitators on the one hand and the digital material on the other.

Scope: The unit can be used to create an optimal user experience in terms of navigation, accessibility, clarity and comprehension.

Methods: Card sorting, task analysis, interview, focus group.

Respondents: Representative end users; tutors; usability experts; content experts.

PRECONDITIONS No specific preconditions need to be taken into account for this unit.

CHECKLIST To assess the usability of the digital learning module, you can use the criteria listed in the checklist below. Use card sorting, a think aloud protocol, interviews and focus groups as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that reflect the respondents' opinions best.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Usability
Scores: -2 / -1 / 0 / 1 / 2 / N/A

1. The layout of the digital learning module is appealing.
2. The interface of the digital learning module functions efficiently.
3. First-time users find it easy to use the digital learning module.
4. The learning module allows the user to find specific information quickly.
5. The digital learning module meets the expectations of the user.
6. All multimedia (video, audio, hyperlinks, pdf) within the digital learning module functions as it should.

Additional comments:

SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session, summing up the different possible steps you will have to take as well as offering suggestions on how you can smoothly incorporate the different research methods into one assessment session.

INTRODUCTION Start by briefly informing the participants about the set-up of the assessment session and the different research procedures they will participate in.

CARD SORTING Give the participants a number of cards. Most of the cards should be labelled – each referring to a specific content item of the tree structure of the digital learning module – while some should be left blank for the participants to fill in as they choose. Instruct the respondents to logically arrange the cards, thus showing how they would structure the content of the digital learning module. This method will tell you how the respondents expect the digital learning module to be organized and structured (criterion 5).
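One way to analyse the card sorts of several participants is to count, for every pair of cards, how often the two end up in the same pile. The Python sketch below merely illustrates this co-occurrence analysis; the data format (one list of piles per participant, with hypothetical card labels) is an assumption, not a QuADEM requirement.

    from collections import Counter
    from itertools import combinations

    # Hypothetical sorts: one list of piles per participant.
    sorts = [
        [["theory", "examples"], ["exercises", "case"]],
        [["theory"], ["examples", "exercises", "case"]],
    ]

    together = Counter()
    for piles in sorts:
        for pile in piles:
            for a, b in combinations(sorted(pile), 2):
                together[(a, b)] += 1

    # Card pairs grouped together by most participants come first.
    for (a, b), count in together.most_common():
        print(f"{count}/{len(sorts)}  {a} + {b}")

Pairs with high co-occurrence suggest content items the audience expects to find together in the module's structure.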

INTERVIEW You can continue with a brief discussion of the module's structure as devised by the participant, asking the participant to clarify certain choices or to elaborate on certain comments he/she made during the card sorting. After you have briefly introduced the participants to the digital learning module - but before they have had the chance to access the digital material themselves - you can ask the participants about their general expectations of the module.

THINK ALOUD PROTOCOL Ask the participants to carry out a limited number of specific tasks while thinking aloud and reading the web text aloud. You can record the audio and/or video of the session as a back-up. Some examples:
- Explore the module for 5 minutes and comment on it.
- Go to exercise x and try to solve it. You can use any part of the module to help you.
- Locate the theory on topic x.

This method allows you to get a picture of the mental mapping process of the participants. It will give you a better insight into how the (first-time) user experiences the module and how easy the module actually is to use.

INTERVIEW When the participants have finished browsing through the digital learning module, you can interview them about the problems they encountered, the comments they made, and the other topics addressed in the checklist. Together you can also review the participants' expectations and ask them to what extent these were met.

FOCUS GROUP After the individual interviews you can organize a small group discussion. You should moderate the discussion and introduce new topics. All criteria of the checklist should be addressed in the discussion. After all respondents have partaken in your assessment, you will have to translate their combined judgement into the scores on the unit’s checklist.

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Usability

Criteria and explanation

1. The layout of the digital learning module is appealing.
If users do not find the module visually appealing, this will negatively influence their perception of the module. To make the module more appealing:
- Use a limited number of colours.
- Use bright colours sparingly.
- Do not overuse pictures.
- Do not avert attention from the main content.
The idea of what is appealing is subjective and is ideally verified in a focus group.

2. The interface of the digital learning module functions efficiently.
A think aloud protocol will reveal how the users interact with the interface. An inefficient interface may cause information or functionalities to go unnoticed. Just as important as the interaction itself is the user's appreciation of it: focus groups and interviews will reveal the user's subjective judgement about a learning module's efficiency. If learners feel the interface does not function well, they will be less motivated to use it. Separate components within the digital learning module (e.g. theory and exercises) belong in a horizontal menu at the top of the page. The menu used for navigation within the site or within a component goes on the left-hand side of the screen and takes up 150-200 pixels. The navigation tree (in the left column) is a very important aid for users to build a mental model of the topics that are dealt with. The order and hierarchical structure of these navigation links should reveal the logical structure underlying the module.

3. First-time users find it easy to use the digital learning module.
For first-time users to be able to use the digital learning module efficiently, it should be intuitive and self-explanatory. A think aloud protocol will reveal how novice users interact with the learning module. Focus groups and interviews will reveal more about the user's subjective judgement of a learning module's efficiency.

4. The learning module allows the user to find specific information quickly.
Users should be able to find the information they need as quickly as possible and with a minimum number of clicks. Making the sitemap available to users improves searchability, as does the integration of a search button. In the development phase, the development team should keep track of the number of clicks necessary to find specific information. A think aloud protocol will show how long and how many clicks it takes the users to find the information they are looking for (a minimal sketch of such a count follows after this table). Interviews and focus groups will reveal how long they feel the search took.

5. The digital learning module meets the expectations of the user.
Whether the users' experiences match their expectations will play a vital role in the overall learning experience. The actual experience should match or surpass the expectations. During the development phase, the expectations of the target audience should be checked. A focus group (or an individual interview) is a good tool to determine whether the original expectations are met.

6. All multimedia (video, audio, hyperlinks, pdf) within the digital learning module functions as it should.
Technical flawlessness is a prerequisite for a usable environment. All technical applications should function and no dead links should occur. If the user is expected to download data, an indication should be given of the file size and the time the download is expected to take. A think aloud protocol will reveal potential technical flaws. The development team should check the multimedia for technical flaws beforehand. The better this works, the better the (subjective) user experience will be.
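Where clicks and search times are logged during the sessions (criterion 4 above), deriving the two measures is straightforward. The sketch below assumes a hypothetical event log of (seconds, event) pairs per search task; it is an illustration only, not a QuADEM tool.

    # Hypothetical event log for one search task:
    # (seconds since task start, event).
    events = [
        (0.0, "start"),
        (4.2, "click"),
        (7.9, "click"),
        (12.5, "found"),
    ]

    clicks = sum(1 for _, event in events if event == "click")
    duration = events[-1][0] - events[0][0]
    print(f"{clicks} clicks, {duration:.1f} s to find the information")

Comparing these objective figures with how long the respondents felt the search took (from the interviews) covers both dimensions of usability discussed above.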

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor's comments:

LEARNING STYLES The concept of learning styles builds on the idea that all individuals have a different way of learning. The prevalent ways in which individuals gather, process, retain and use information are identified as 'learning styles'. Many models mapping the different learning styles - some more controversial than others - have been developed. One of the most influential models was developed by Kolb (1984). Depending on which learning modes a learner prefers for taking in information (concrete experience or abstract conceptualization) and for processing information (active experimentation or reflective observation), he distinguished four learning styles: a converging, diverging, assimilating or accommodating learning style. Inspired by Kolb, Honey and Mumford (1992) described four types of learners: the pragmatist, the reflector, the theorist and the activist. Others developed a wholly different classification scheme, based on the sensory channel learners prefer for gathering and processing information (an auditory, visual or kinesthetic learning style) or on personality traits rather than learning as such (the Myers-Briggs Type Indicator) (Felder & Brent, 2005).

The pedagogical implications of learning styles could be far-reaching. Studies (Felder & Brent, 2005) have shown that a more profound learning effect may occur when teaching styles match learning styles. This does not necessarily require assessing the students' learning style preferences. Since classes always consist of students with a (wide) variety of learning preferences, it suffices to select a model of learning styles and attempt to address all of its categories. The optimal instruction style is balanced, meaning that a wide range of learning styles is acknowledged and facilitated. If this is not the case, some students will always feel out of place, which will minimize learning, while others will not be challenged in their current learning habits.

The conceptual acknowledgement and practical appreciation of different learning styles is a key element in the QuADEM view of a successful digital learning module. The Learning Styles Unit can be used to assess to what extent a digital learning module recognizes and facilitates the use of different learning styles.

SUMMARY

Definition: Learning styles refer to the different ways in which individuals gather, process, retain, and use information.

Scope: The unit is valid for different learning styles and different learning style models.

Methods: Task analysis, retrospective interview, keystroke or web logging.

Respondents: Representative end users; pedagogical experts.

PRECONDITIONS No specific preconditions need to be taken into account for this unit.

CHECKLIST To assess the learning style approach of the digital learning module, you can use the criteria listed in the checklist below. Use a think aloud protocol, interviews and logging as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that reflect the respondents' opinions best.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Learning styles
Scores: -2 / -1 / 0 / 1 / 2 / N/A

1. The digital learning module is designed to allow learners to determine their own learning path.
2. The tasks/assignments cater to multiple learning styles, allowing learners to choose their preferred styles.
3. The tasks/assignments encourage learners to explore alternative learning styles.

Additional comments:

SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session, summing up the different possible steps you will have to take as well as offering suggestions on how you can smoothly incorporate the different research methods into one session.

INTRODUCTION Start by briefly informing the respondents about the set-up and aim of the assessment session and the different research procedures they will participate in. Either a think aloud protocol (in combination with a retrospective interview) or an analysis on the basis of logging can be organized. It is recommended to test at least 5 respondents. You should consider determining the learning style of your respondents beforehand. Depending on which model of learning styles you are using, this can be done with a short questionnaire. If you are applying Kolb's model, tests to determine the respondents' learning styles are also available online.

THINK ALOUD Let the respondents carry out a limited number of specific tasks while thinking and reading aloud. It is always useful to record (audio/video) these sessions. This will help you get a picture of the mental mapping process of the participants.

INTERVIEW After the think aloud session you can organize a short retrospective interview (or a small group discussion). All criteria of the checklist should be addressed.

KEYSTROKE OR WEB LOGGING In order to reconstruct the navigation paths the participants have followed, the computer actions of the user (keystrokes, mouse clicks and the web page addresses) are logged in combination with a time stamp. This can be done with a keystroke logger (for example, Inputlog: www.inputlog.net) or with a URL logger (for example, Surf logger: www.browsertools.net). These recordings allow the assessor to analyse the navigation path (together with the time needed to visit the different pages). After all respondents have partaken in your assessment, you will have to translate their combined judgement into the scores on the unit's checklist.
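As an illustration of what such an analysis can look like, the sketch below turns a hypothetical URL log of (seconds, page) pairs into a navigation path with a dwell time per page; the resulting order of visits (e.g. theory first versus case first) hints at the learning path a participant prefers. The log format is assumed, not prescribed by any particular logging tool.

    # Hypothetical URL log, in chronological order:
    # (seconds since session start, page visited).
    log = [
        (0, "module/theory"),
        (95, "module/exercise-1"),
        (260, "module/case"),
        (300, "session end"),
    ]

    # Dwell time on a page = timestamp of the next event minus its own.
    for (t, page), (t_next, _) in zip(log, log[1:]):
        print(f"{t_next - t:>4d} s  {page}")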

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Learning styles

Criteria and explanation

1. The digital learning module is designed to allow learners to determine their own learning path.
Educational materials should accommodate different learning styles. Instead of forcing the learner to examine the module in a pre-defined way, the module's design should allow all learners to approach the learning objectives according to their preferred learning style. For instance, Kolb's 'accommodator' prefers to learn through experience, active experimenting and trial-and-error, and needs little structure. In a module which offers theory, exercises and a case, this learner will probably choose the case-oriented approach and will not leave this approach unless he needs extra information. An 'assimilator', on the other hand, has a more abstract, reflective learning style. These learners like theory, rules and structure; theoretical models give them something to hold on to. They will probably prefer a different flow, starting with the theory, then going through the exercises and finishing with the case.

2. The tasks/assignments cater to multiple learning styles, allowing learners to choose their preferred styles.
Write the tasks and assignments in such a way that it is easy for the learner to relate the problems they encounter while completing a case or exercise to the theory. In every step of the task completion, explicit links should be provided to the corresponding theoretical part of the module. This enables more practically oriented learners to use a more problem-solving approach to their learning.

3. The tasks/assignments encourage learners to explore alternative learning styles.
Provide enough incentives for the students to actively question their own learning approach and offer alternative routes to explore the content. Giving students explicit options, for instance to either pursue or deviate from a more chronological navigation, can encourage the students' explorative behaviour.

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor's comments:

WRITING STYLES Different people organize their writing activities differently. The process of text production can be subdivided into three main sub processes: planning, formulating and revising. The prevalent ways in which writers orchestrate and prioritize these sub processes are termed 'writing styles' or 'writing profiles'. Some writers depend heavily on preliminary planning whereas others prefer to start writing straight away. Some writers may revise a text extensively in the course of writing, while others postpone the revision and correct the text as a whole at the end. Some may write sequentially, starting with the beginning of the text and working their way to the end, while others start with the easiest part or re-sequence the text afterwards.

Based on these and many other criteria, different models classifying writing profiles have been developed. A first distinction was made between 'Mozartians' and 'Beethovians'. 'Mozartians' are extensive planners who formulate and revise their texts sentence by sentence, whereas 'Beethovians' write a first draft of their text rather quickly with minimal revision, postponing the main revision until a later stage. This dichotomy was later used as the starting point for a more elaborate, five-part model distinguishing between 'initial planners', 'first draft writers', 'second draft writers', 'non-stop writers' and 'average writers' (Van Waes & Schellens, 2003). Others describe writers as 'architects', 'bricklayers', 'oil painters', 'watercolourists' or 'sketchers', depending on their writing strategy (Sharples, 1998).

Although different models identify different writing profiles, the pedagogical implications remain the same for all of them. For an optimal learning experience, the teaching of writing should acknowledge and facilitate the different writing styles. The Writing Styles Unit is the ideal tool to assess whether a digital learning module in the field of writing takes into account and supports different writing profiles.

SUMMARY

Definition: Writing styles can be seen as a direct expression of how writers orchestrate and prioritize the different sub processes of writing during the production of a text: planning, formulating and revising.

Scope: The unit is valid for different writing styles.

Methods: Task analysis, retrospective interview, keystroke logging.

Respondents: Representative end users; pedagogical experts.

PRECONDITIONS No specific preconditions need to be taken into account for this unit.

CHECKLIST To assess the writing style approach of the digital learning module, you can use the criteria listed in the checklist below. Use a think aloud protocol, interviews and logging as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that reflect the respondents' opinions best.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Writing styles
Scores: -2 / -1 / 0 / 1 / 2 / N/A

1. The writing assignment is designed in such a way that writers who prefer a more non-linear approach to writing can also easily complete the assignment.
2. The writing assignment encourages students to explore different approaches to complete a writing task.
3. The writing assignment is presented in such a way that writers are encouraged to optimize their writing fluency (overcoming writing blocks, non-linear text development).
4. The module is designed to let students reflect on their own writing style (effectiveness and efficiency).
5. The writing assignment is designed in such a way that planning activities can be spread over the writing process.
6. The writing assignment in the case is designed in such a way that students are encouraged to plan at different rhetorical and text levels.
7. The writing assignment is presented in such a way that revision activities can be spread over the writing process.
8. The writing assignment is presented in such a way that writers are stimulated to revise their text at different levels.

Additional comments:

SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session, summing up the different possible steps you will have to take as well as offering suggestions on how you can smoothly incorporate the different research methods into one session.

INTRODUCTION Start by briefly informing the respondents about the set-up of the assessment session and the different research procedures they will participate in. Either a think aloud protocol (in combination with a retrospective interview) or an analysis on the basis of logging data can be organized. It is recommended to test at least 5 respondents.

THINK ALOUD Let the respondents carry out a limited number of specific tasks while thinking and reading aloud. In this case you could instruct the respondents to carry out (one of) the writing assignments. You can observe the respondents while they complete the task. It is always useful to record (audio/video) these sessions. This will help you get a picture of the mental mapping process of the participants.

INTERVIEW After the think aloud session you can organize a short interview (or a small group discussion) to discuss the writing experience with the respondent. A retrospective interview can also be an option when it is not possible to observe the respondents while they are performing a writing task. During the interview all criteria of the checklist should be addressed.

KEYSTROKE LOGGING In order to reconstruct the navigation paths the respondents have followed and to document how they constructed their final text, the computer actions of the user (keystrokes, mouse clicks and the web page addresses) are logged in combination with a time stamp. This can be done with a keystroke logger (for example, Inputlog: www.inputlog.net) or with a URL logger (for example, Surf logger: www.browsertools.net). These recordings allow the assessor to describe and analyse the participant's writing actions and profiles from different perspectives. After all respondents have partaken in your assessment, you will have to translate their combined judgement into the scores on the unit's checklist.
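From such keystroke data, crude process indicators can already be derived with a few lines of code, for instance the share of deletions (a rough proxy for in-text revision) and the mean pause between keystrokes (longer pauses often point to planning). The sketch below assumes a simplified log of (seconds, key) pairs; it only illustrates the idea and is no substitute for a dedicated tool such as Inputlog.

    # Hypothetical keystroke log: (seconds since start, key pressed).
    log = [(0.0, "D"), (0.3, "e"), (0.6, "a"), (4.1, "BACKSPACE"),
           (4.4, "a"), (4.7, "r")]

    deletions = sum(1 for _, key in log if key == "BACKSPACE")
    pauses = [t2 - t1 for (t1, _), (t2, _) in zip(log, log[1:])]

    print(f"revision proxy: {deletions / len(log):.0%} of events are deletions")
    print(f"mean inter-key pause: {sum(pauses) / len(pauses):.2f} s")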

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Writing styles

Criteria and explanation

1. The writing assignment is designed in such a way that writers who prefer a more non-linear approach to writing can also easily complete the assignment.
Make sure that the assignment is set up so that students can complete a task either in a more linear or in a more non-linear way. Therefore, it is important that after every step in the assignment, students are prompted to decide for themselves how to proceed with the writing task (e.g. finding more information or revising the text produced so far).

2. The writing assignment encourages students to explore different approaches to complete a writing task.
Make sure that the flow of the assignment, representing the different steps, is visualized adequately. This is not only important for the students to build a better understanding of the assignment; it also enables them to consider different approaches to the writing task.

3. The writing assignment is presented in such a way that writers are encouraged to optimize their writing fluency (overcoming writing blocks, non-linear text development).
Although it is not possible to provide general or generic writing instructions in every module, adding tips for a better organization of the writing process could improve students' writing proficiency.

4. The module is designed to let students reflect on their own writing style (effectiveness and efficiency).
Often students are not aware that it is possible to develop a writing assignment in different ways and that it is important to reflect on the different possibilities to organize their writing process. By offering writers several paths to complete the task at different stages in the development, they will become more aware of the importance of making strategic choices that relate to writing styles. Also, offering them materials that show how other writers have tackled comparable writing subtasks or problems will encourage them to reflect on their own writing style (cf. observational learning).

5. The writing assignment is designed in such a way that planning activities can be spread over the writing process.
Planning is a very important sub process in writing, and the amount of planning activity correlates highly with text quality. However, research has shown that planning takes place at different stages in the development of a text and that different types of writers spread their planning activities differently. Some concentrate their planning quite explicitly at the beginning of the writing assignment (e.g. by drafting a text scheme); others prefer to divide their attention to planning and take time to plan at different stages in completing a writing assignment.

6. The writing assignment in the case is designed in such a way that students are encouraged to plan at different rhetorical and text levels.
Planning can take place at different text levels and can be oriented at different goals: (a) content goals; (b) organization goals; (c) style-oriented goals; (d) rhetorical goals. Therefore, the materials in the assignment should encourage different types of planning.

7. The writing assignment is presented in such a way that revision activities can be spread over the writing process.
Like planning, revision is a cyclical sub process for most writers. The materials presented in the assignment and the accompanying instructions should encourage the students to revise their texts at different stages in the writing process. However, writers who prefer to postpone reviewing their text until the final stage should not feel forced to do so at an earlier stage.

8. The writing assignment is presented in such a way that writers are stimulated to revise their text at different levels.
Poor writers often limit revision to the word level or surface level (correctness). More experienced writers also revise at higher levels and make more structural and content-related changes to their text. The materials in the assignment should therefore encourage revision from different perspectives, for instance consistency and correspondence (rhetorical).

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor's comments:


TESTING Although the terms 'testing' and 'assessment' are often used interchangeably, they do not mean the same thing. 'Assessment' refers to "a wide range of methods for evaluating [learner] performance and attainment" (Gipps, 1994:vii). While it evidently generates information about the entire teaching-learning process of a learning activity or program, it is mostly aimed at improving student learning. 'Testing' is simply one of the methods or procedures that can be used to assess student performance. Tests are always designed with a very specific objective and aim to gather information about a specific ability.

It is important to see assessment - and testing, as part of assessment - not merely as a scoring tool. Most literature agrees that a distinguishing feature of assessment is its pedagogical component: assessment is used to (1) document the learning process, (2) compare the progress to an educational norm and (3) use the outcome of the comparison to guide learning (Falchikov, 2005). QuADEM subscribes to the viewpoint that assessment is a crucial constructive element in the learning process and stresses the key role of assessment in orientating and actively involving the learner in the learning process.

We can identify various kinds of assessment (Alderson & Bachman, 2000-2005), depending on who is judging the learner's performance and what the judgement is based on. A first distinction is based on who initiates or controls an assessment: the teacher or the learner(s). This determines whether we speak of self-assessment, peer-assessment, co-assessment or tutor-assessment. A second subdivision is based on timing. An assessment can take place at preset points in time (fixed-point assessment) or it can go on throughout the learning process (continuous assessment). We can also break down assessment according to what is being assessed: the achievement as a whole (holistic assessment), the different components separately (analytic assessment), the ability of the learner to apply the theory in practice (performance assessment) or simply the learner's knowledge (knowledge assessment).

Next to different types of assessment, we can also distinguish between different assessment goals (Mousavi, 2002). Assessment can check whether the learner has attained the preset objectives (product assessment) or it can verify to what extent the goals have been pursued rather than attained (process assessment). Assessment can evaluate the overall performance after a longer period of time and be primarily aimed at grading (summative assessment), it can evaluate the learning progress regularly to give feedback and adjust when necessary (formative assessment), or it can allow students to monitor and measure their progress by comparing their score at the beginning to their score after completion (step-in/step-out assessment).

The Testing Unit offers a tool to assess the quality of the tests in a digital learning module. Some of the main criteria are the clarity of the instructions, the relevance of their content and the quality of the feedback.

SUMMARY

Definition: A test is defined as any (collection of) task(s) that is aimed at determining somebody's abilities. This definition encompasses anything from highly formalized high-stakes tests to low-stakes self-tests.

Scope: This unit can be used to evaluate both formative and summative tests as components of a digital learning module or a blended learning course. Questionnaires can also be used for testing. Because testing is not the only possible purpose of questionnaires and because the assessment of questionnaires demands a more methodological approach, we have added a separate unit to assess the quality of questionnaires in digital learning modules (see unit 11).

Methods: Descriptive statistics, interviews, think-aloud protocol.

Respondents: Test developer; tutor; representative end user; content expert (if LSP test).

PRECONDITIONS No specific preconditions need to be taken into account for this unit.

CHECKLIST To assess the tests in the digital learning module, you can use the criteria listed in the checklist below. Use the think-aloud protocol, interviews and descriptive statistics as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that reflect the respondents' opinions best.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Assessment
Scores: -2 / -1 / 0 / 1 / 2 / N/A

1. The purpose of the test is clear.
2. The evaluation criteria are clear.
3. The input materials are adequate.
4. The test tasks are relevant.
5. The test tasks are authentic.
6. The test is practically feasible.
7. The test is reliable.
8. The level of difficulty of the test is appropriate.
9. The test is valid.
10. The feedback is given in a non-threatening way.
11. The test motivates learning.
12. It is clear what the test results will be used for.

Additional comments:

SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session: it sums up the steps you may have to take and suggests how to incorporate the different research methods smoothly into a single session.

INTRODUCTION Start by briefly informing the respondents about the set-up of the assessment session and the different research procedures they will participate in.

THINK-ALOUD PROTOCOL Ask the participants to carry out a limited number of specific tasks while thinking aloud and reading the on-screen text aloud. You can record the audio and/or video of the session as a back-up. This method allows you to get a picture of the mental mapping process of the participants.

INTERVIEW In the interview you should cover all the marked items from the checklist. You can add specific questions depending on the respondent’s profile:

Test developer
− What is the goal of the test?
− What are the perceived strengths and weaknesses of the test?

Test facilitator
− What are the test facilitator’s experiences with administering the test?
− What are the perceived strengths and weaknesses of the test?

Representative end-user
− Does each test task appear useful/representative/clear?

Subject expert
− Does each test task appear useful/representative/clear?

QUANTITATIVE ANALYSIS To check whether a test is reliable you can determine the discrimination index: statistical programs can calculate to what extent a test item differentiates between advanced students and beginners. Split-half reliability is a quantitative method that allows you to determine the internal consistency of a test. For more information, see the methodological compendium. After all respondents have taken part in your assessment, you will have to translate their combined judgement into the scores on the unit’s checklist.
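To illustrate what such an analysis can look like, the sketch below computes the facility value, a discrimination index and a Spearman-Brown corrected split-half coefficient in Python. This is a minimal sketch, not part of the QuADEM method itself: it assumes dichotomous item scores (1 = correct, 0 = incorrect), and any statistics package offers equivalent, more robust functions. The mean and standard deviation also serve the descriptive statistics mentioned under criterion 8 below.

    # Minimal illustrative sketch. Assumption: dichotomous item scores
    # (1 = correct, 0 = incorrect), one list of item scores per test taker.

    def mean(values):
        return sum(values) / len(values)

    def standard_deviation(values):
        m = mean(values)
        return (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5

    def facility_value(item_scores):
        """Proportion of test takers who answered the item correctly."""
        return mean(item_scores)

    def discrimination_index(all_takers, item, fraction=0.27):
        """Facility of an item in the top scoring group minus its facility in
        the bottom group; values near zero or below flag items that fail to
        differentiate between advanced students and beginners."""
        ranked = sorted(all_takers, key=sum, reverse=True)
        n = max(1, round(len(ranked) * fraction))
        top = [taker[item] for taker in ranked[:n]]
        bottom = [taker[item] for taker in ranked[-n:]]
        return facility_value(top) - facility_value(bottom)

    def pearson(x, y):
        """Pearson correlation coefficient of two equally long score lists."""
        mx, my = mean(x), mean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    def split_half_reliability(all_takers):
        """Correlate odd- and even-item half scores, Spearman-Brown corrected."""
        odd = [sum(taker[0::2]) for taker in all_takers]
        even = [sum(taker[1::2]) for taker in all_takers]
        r = pearson(odd, even)
        return 2 * r / (1 + r)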

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Assessment criteria and explanations

1. The purpose of the test is clear.
If the goal of the test is unclear, this will negatively influence performance and motivation.

2. The evaluation criteria of the test are clear.
Unclear evaluation criteria have been shown to cause frustration among test takers. If users get a hand in deciding on the evaluation criteria, this increases their feeling of autonomy and motivation. Whoever rates a test should be clearly aware of the evaluation criteria. Ideally the rater has been involved in outlining the criteria; alternatively, he or she should be sufficiently trained to rate task performances accurately. Like all raters, students (peers) have been shown to be able to rate and give peer feedback after some training. Peer rating is generally perceived as less daunting than tutor rating.

3. The input materials of the test are adequate.
The input material is the material the test taker is expected to respond to. Input can be the outline of a situation, a picture, a graph etc. Since the user’s output depends on the task’s input, its clarity is vital. Indeed, if the input is unclear, a user might underperform not because of a lack of abilities but because of input flaws. The input should provide the user with all the necessary information to successfully complete the task.

4. The test tasks are relevant.
For a task to be motivating it needs to be relevant within the rest of the content. There should be a clear link between the test and the module/course content. Interviews with test takers after the test/pilot has been completed can help determine the perceived relevance.

5. The test tasks are authentic.
Authenticity is a rather subjective concept: being “real” does not automatically make a task authentic. Tasks should be as situationally authentic as possible, which means that the situation referred to in the task should have meaning or should seem authentic to the individual user. E.g. in a digital module that aims to teach its users how to write a good CV, one of the original tasks was to write a CV for a fictitious person, in this case an older male, married and father of two. Students did not respond well to the task because they could not identify with this person and because they were not motivated to write a CV for somebody else. In the revised version of the module, the students were instructed to write a CV for themselves. Tasks should also be as interactionally authentic as possible, which means that the interaction should reflect a real-life interaction as closely as possible. E.g. in the same CV module, one of the tasks was writing your own CV. But, since a good CV is always adapted to the specific job requirements, students were instructed to first select a job ad on which to base their CV. And, since you always send your CV with an accompanying letter, they also had to write a cover letter to go with it.

6. The test is practically feasible.
Feasibility refers to the practical side of the test, i.e. the timing, the technical equipment, the speed of rating, etc. The feasibility of a test can be determined during piloting.

7. The test is reliable.
Reliable scores reflect a user’s ability: on a reliable test, a proficient student will consistently outperform an intermediate one. An efficient way to check a test’s reliability is to perform an item reliability analysis. This statistical application indicates the discriminating potential of a test item; in other words, it checks to what degree able users get a hard item right while less able users do not.

8. The level of difficulty of the test is appropriate.
Interviews with end users, tutors and subject specialists will show how stakeholders perceive the level of difficulty. Descriptive statistics will yield factual data concerning the mean, the standard deviation and the facility value.

9. The test is valid.
Validity is the extent to which scores on a test allow inferences to be made which are appropriate, meaningful and useful, given the purpose of the test (i.e. does the test measure what it intends to measure?). There are various subclassifications of validity. Three important types are:
− Construct validity: scores reflect a theory about a construct. It could be predicted, for example, that two valid tests of listening comprehension would rank learners in the same way, but both would have a weaker relationship with scores on a test of grammatical competence.
− Content validity: the items or tasks of which a test is made up constitute a representative sample of items or tasks for the area of knowledge or ability to be tested.
− Face validity: the extent to which representative end users judge a test to be an acceptable measure of the ability they wish to measure. This is a subjective judgement rather than one based on any objective analysis of the test.
Interviews with representative end users, test takers or subject specialists can help to determine a test’s validity.

10. The feedback is given in a non-threatening way.
If feedback is limited to a numerical score, it might be perceived as threatening or impersonal. Personalised feedback and suggestions for improvement can make feedback less threatening and more personal. After some training, students (peers) have been shown to be able to rate and give peer feedback. Peer rating is generally perceived as less daunting than tutor rating.

11. The test motivates learning.
Tests can have a positive or negative influence on user motivation. A key element in this is the degree of user autonomy: if users can help to determine the learning goals and the evaluation criteria, tests will be perceived as less daunting and perhaps even as a tool for learning. Since tests are often a cause of student anxiety, it is important to realise that tests can and should be dealt with positively. They should be considered an opportunity for learning, rather than a threat to one’s self-confidence.

12. It is clear what the test results will be used for.
There are many possible uses for tests. They can be used to track student progress or to test a student’s abilities at the end of a course. They can be used to verify one’s own ability or they can be organized centrally. It is important to clarify the use of a test at the beginning.

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor’s comments:

EXAMPLES

In general terms an example can be defined as “a specific instantiation of a general principle, chosen in order to illustrate or explore that principle” (Chick, 2007). The use of examples in educational material is intended to help the learner to understand, connect, apply and generalize the different items of the learning material.

Examples can be presented in different forms: as a particular single item, fact, incident or aspect that is representative of a group or type; as a parallel or closely similar case that constitutes a precedent or model; or as an instance (as in a problem to be solved) serving to illustrate a rule or precept or to act as an exercise in the application of a rule (Webster’s Ninth New Collegiate Dictionary, 1990).

The different ways in which teachers use examples have been summarized in several classification schemes. Rissland-Michener (1978) distinguishes four, possibly overlapping, categories: (a) start-up examples that draw attention to the principle, (b) reference examples that are standard instances frequently referred to in the general theory, (c) model examples that show the typicality of a situation, and (d) counterexamples that show conditions under which the general principle might not apply.

In view of the obvious importance of good examples in enhancing a learning experience, the QuADEM method was equipped with a unit especially devoted to the assessment of examples. The Examples Unit will assist you in optimizing the accuracy, relevance, comprehensibility and presentation of the examples.

SUMMARY

Definition
Examples are designed to help users understand, connect, apply and generalize the different items of the learning material.

Scope
This unit can be used for all kinds of examples in digital educational materials.

Methods
Interview
Think Aloud Protocol

Respondents
Representative end user
Pedagogical expert

PRECONDITIONS No specific preconditions need to be taken into account for this unit.

CHECKLIST To assess the examples in the digital learning module, you can use the criteria listed in the checklist below. Use the think aloud protocol and interviews as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that best reflect the respondents’ opinions.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Examples (scores: -2 / -1 / 0 / 1 / 2 / N/A)

1. The examples are accurate.
2. The examples are clear.
3. The examples are interesting.
4. The examples are contextualized.
5. The examples are attractively and clearly formatted.

Additional comments:


SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session: it sums up the steps you may have to take and suggests how to incorporate the different research methods smoothly into a single session.

INTRODUCTION Start by briefly informing the respondents about the set-up of the assessment session and the different research procedures they will participate in.

THINK ALOUD Let the participants carry out a limited number of specific tasks while thinking and reading aloud. Ask the participants to pay special attention to the examples used in the learning module. It is always useful to record these sessions (audio and/or video). This will help you to get a picture of the mental mapping process of the participants.

INTERVIEW After the think aloud session you can organize a short retrospective and structured interview, based on the checklist. All criteria of the checklist should be addressed. Score the different criteria together with the respondent. After all respondents have taken part in your assessment, you can calculate the average scores and use these to score the unit’s checklist.

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Examples criteria and explanations (partly based on http://www2.gsu.edu/~wwwesl/issue1/extitle.htm)

1. The examples are accurate.
The overall accuracy of an example can depend on:
− Pedagogical accuracy: the example accurately reflects the theory being studied.
− Linguistic accuracy: the example uses authentic language.
− Content accuracy: the information in the example is correct.
− Cultural accuracy: the example is sensitive to the cultural backgrounds of the users and avoids offensive content.

2. The examples are clear.
Clear examples are:
− free of jargon that is not involved in the particular point being illustrated;
− free of difficult or rare vocabulary;
− free of irrelevant irregularities;
− unambiguous;
− self-explanatory;
− as concrete as possible.

3. The examples are interesting.
Interesting examples are:
− based on the background and the educational, career or job plans of the users;
− credible and realistic;
− novel in content and presentation, within appropriate cultural boundaries;
− careful in their use of humour, with sensitivity to cultural differences.

4. The examples are contextualized.
Provide commentary or instructional contextualization to point out the concept behind the given example. Avoid free-standing lists of words and sentences.

5. The examples are attractively and clearly formatted.
Use a layout that easily distinguishes the examples from the other instructional material. Use tables and charts where appropriate. Divide examples into appropriate chunks of material.

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor’s comments:

MULTIMEDIA

In ICT the term ‘multimedia’ refers to the combined use of all or a selection of the following within the same digital learning module: audio, video, pictures, animation and interactive elements (e.g. games).

When audio is integrated in a digital learning module, parts of the content of the learning module are presented as audio files. The learner can use the electronic equipment to play the sound and access the information. Audio can be used to further clarify concepts or to offer background information through interviews or radio items. For audio to have a positive effect on learning it needs to complement the written text or diagram.

When pictures are integrated in a digital learning module, photographs or drawings are used to graphically clarify, visualize or illustrate the text or a concept from the text. Most of the research suggests that pictures have a positive influence on understanding and learning, with text being remembered and understood better when the graphic additions support or clarify what is written. It is vital, however, that diagrams, pictures and graphics which support the text are unambiguous (Crisp et al., 2006). Ideally each picture is labelled to allow for easier contextualization. The overuse of pictures “can in fact be counterproductive. What teachers should do is to select the critical points in a course or course unit in which the efforts required for multimedia are best placed to illustrate learning progress and the acquisition of knowledge” (Peters, 2000).

The Multimedia Unit offers a tool to evaluate the relevance, the quality and the appropriate and correct use of multimedia.

SUMMARY

Definition
The term multimedia refers to the (combined) use of all or a selection of the following within the same digital learning module:
− audio
− video
− pictures
− animation
− interactive elements (e.g. games)


Scope
This unit is suitable for digital learning modules that use visual or audiovisual multimedia content. Even though animation and interactive elements can be constructive elements in a digital learning module, both are beyond the scope of this unit. For text-related queries, please consult the assessment units on Style & Language and on Content. Note that the technical information in this unit does not necessarily apply to users with dial-up internet access (i.e. internet access via a telephone modem with a maximum theoretical speed of 56 kbit/s). Technical issues dealing with streaming or other processes requiring a fair amount of bandwidth largely apply only to digital learning modules whose users have broadband internet access.

Methods
Interview
Think Aloud Protocol

Respondents
Representative end user

PRECONDITIONS No specific preconditions need to be taken into account for this unit.

CHECKLIST To assess the multimedia in the digital learning module, you can use the criteria listed in the checklist below. Use the think aloud protocol and interviews as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that best reflect the respondents’ opinions.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Multimedia (scores: -2 / -1 / 0 / 1 / 2 / N/A)

1. The link between images and the text is clear.
2. The images offer an added value.
3. The images are clear.
4. The images are of an acceptable size.
5. The images are copyright free.
6. The link between audio and text is clear.
7. The audio files offer an added value.
8. The audio has a clear sound.
9. The streaming quality of the audio is reasonable.
10. The download times of the audio are reasonable.
11. The audio files are copyright free.
12. The link between video and text is clear.
13. The videos offer an added value.
14. Image and sound in the videos are synchronized.
15. The picture quality of the video is reasonable.
16. The streaming quality of the video is reasonable.
17. The download times of the video are reasonable.
18. The video files are copyright free.

Additional comments:

SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session: it sums up the steps you may have to take and suggests how to incorporate the different research methods smoothly into a single session.

INTRODUCTION Start by briefly informing the respondents about the set-up of the assessment session and the different research procedures they will participate in.

THINK ALOUD Let the participants carry out a limited number of specific tasks while thinking and reading aloud. Ask the participants to pay special attention to the images, audio and video used in the learning module. It is always useful to record these sessions (audio and/or video). This will help you to get a picture of the mental mapping process of the participants.

INTERVIEW After the think aloud session you can organize a short retrospective and structured interview, based on the checklist. All criteria of the checklist should be addressed. Score the different criteria together with the respondent. After all respondents have taken part in your assessment, you can calculate the average scores and use these to score the unit’s checklist.

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Multimedia criteria and explanations

Images

1. The link between images and the text is clear.
Images can serve as an explanation, elaboration or contextualization of the learning material. The link between an image and the rest of the material should be stated explicitly. If the image causes confusion (because it is intrinsically confusing or because the link with the context is unclear), it should be explained, contextualized or replaced.

2. The images offer an added value.
An image should constitute a valuable addition (informative, aesthetic…) to the learning material. If this is not the case, it should probably be deleted. Purely aesthetic additions can make a text more pleasant to read and can increase learning pleasure; these images should however remain in the background. Graphs can add meaning to the text or can stress otherwise subtle connections. It is a prerequisite, however, that any graph is easily understandable.

3. The images are clear.
If the image is in the uncompressed RAW format, it is easy to enhance contrast, white balance and clarity. Other formats (such as .gif and .jpeg) also allow for contrast and brightness enhancement, although the possibilities for change in these formats are not as extensive as those in RAW. Photoshop is the leading photo editing software and allows for a myriad of applications. A freeware counterpart of Photoshop is Picasa (picasa.google.com), which offers quite an extensive editing package.

4. The images are of an acceptable size.
Various sizes suit various purposes. Thumbnails are small images of anything from 80x80 to 200x200 pixels; they are ideally suited as previews of larger images. For other images, anything from 500 pixels² (medium sized) to 1024 pixels² (large) will do. The traditional 1024x768 (786k pixels) resolution for images is still used quite often for websites, but as screen sizes and processing speeds increase, the 1920x1200 (2304k pixels) resolution is gaining momentum. However, such images will cause unacceptably long download times for users with a slow connection.

5. The images are copyright free.
If an image is not copyright free, the best option, next to asking for permission to publish the image, is to link to other websites which host the image without violating copyright laws.

Audio

6. The link between audio and text is clear.
Audio files can serve as an explanation, elaboration or contextualization of the learning material. The link between an audio file and the rest of the material should be stated explicitly.

7. The audio files offer an added value.
An audio file should constitute a valuable addition (informative, aesthetic…) to the learning module. If this is not the case, it should be deleted. If audio files serve as background material, this should also be stated.

8. The audio has a clear sound.
When the sound is not clear, boosting the volume or changing the balance might help. It will, however, be hard to fully restore an unclear recording, so it might be better to replace the audio file entirely. Note: free audio editing software is available at http://audacity.sourceforge.net/. Adobe Audition offers good tools for audio recovery, but it is not freeware.

9. The streaming quality of the audio is reasonable.
The playback stream should not break up. If the audio contains significant details that are lost because of streaming, downloading will most likely be a better solution for the file. Streaming files that are longer than 10 minutes should either be avoided or be made available for downloading.

10. The download times of the audio are reasonable.
The larger the file, the more download time it requires. Especially users with a dial-up internet connection might find the download time of large files unacceptably long. One minute of sound takes up roughly 1 megabyte of disk space, which implies a download time of about 2 to 3 minutes over a dial-up connection; the same file would take a broadband connection about 20 seconds to download. This leads to the conclusion that some applications are not suitable for dial-up connections. (An illustrative calculation follows after this table.)

11. The audio files are copyright free.
International copyright law concerning the use of existing audio material on websites is rather blurry. The best option, next to asking for permission, is to link to other websites which host the audio file without violating copyright laws. When you offer copyright-protected audio files for educational purposes, you can “quote” from them if you do not make the files publicly downloadable. For copyright-protected audio files, streaming is the best option (if the copyrights are cleared), since it makes downloading less easy. If the audio files are of considerable length, however, the listener might prefer to listen to the file as a podcast and save it to disk, in which case you should probably not use them. When making home-made podcasts public, make sure that the soundtrack does not contain any copyright-protected music. It is wrong to assume that using copyrighted material for non-commercial purposes will automatically be considered fair use.

Video

12. The link between video and text is clear.
Video files can serve as an explanation, elaboration or contextualization of the learning material. If the link with the rest of the material is unclear, it should be stated explicitly.

13. The videos offer an added value.
The video file should constitute a valuable addition (informative, aesthetic…) to the learning module. If this is not the case, it should be deleted. If video files serve as background material, this should also be stated.

14. Image and sound in the videos are synchronized.
If image and sound are not synchronized, this might be the result of over-editing. The easiest solution is to go back to the editing software and to reproduce the video; this does not imply re-editing.

15. The picture quality of the video is reasonable.
To save on bandwidth or download time, most online videos are in a 320x240 pixel resolution. This image size does not allow for full-screen viewing, which means that details will be lost. Subtitles too might become hard to read. A possible solution for the subtitle issue is to oversize the subtitles in the original file or to provide a full transcription of the video.

16. The streaming quality of the video is reasonable.
The playback stream should not break up. If the video contains significant details that are lost because of streaming, downloading will most likely be a better solution for the file. Streaming files that are longer than 5 minutes should either be avoided or be made available for downloading.

17. The download times of the video are reasonable.
Six seconds of video take up roughly one megabyte. Downloading one minute of video therefore requires about thirty minutes on a fast dial-up connection (i.e. a speed of 56 kbit/s); see the illustrative calculation after this table. The larger the file, the more download time it requires. Use MPEG-2 as a standard compressed format.

18. The video files are copyright free.
Offering news broadcasts, films or other authentic audiovisual material is inadvisable. There are plenty of other websites (alluc, peekvid…) that offer copyright-protected material. Providing links to these sites is not really illegal, since you are not hosting the copyright-protected material; having said that, it is not really legal either. Using home-made video avoids the copyright issue, unless you are quoting from copyrighted texts or using copyrighted music. When making home-made videos public, clear the copyrights of any copyrighted material (texts, music, images). It is wrong to assume that using copyrighted material for non-commercial purposes will automatically be considered fair use.

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor’s comments:


QUESTIONNAIRES

In general terms a ‘questionnaire’ is a research instrument consisting of a list of questions that a number of people are asked so that information can be collected about something. Most often this method of data collection is used to gain statistical data that can serve as the basis for scientific research. But questionnaires can also serve a pedagogical purpose.

Used in digital learning modules, questionnaires can contribute to an optimal learning experience in different ways. A questionnaire can be used to introduce a certain topic, to arouse the learner’s interest in the subject and to create awareness of the learner’s own views on the issue at hand. Questionnaires can also be used to assess the learner’s (prior) knowledge about a certain topic, allowing the learner to discover his/her strengths and weaknesses or to assess his/her progress. This, in turn, can motivate the learner or can help him/her to reorient learning efforts. In sum, questionnaires can increase the interactive qualities of a digital learning module: by asking the learner specific questions about a certain topic (and by providing feedback) they can make the module’s users more motivated, conscious and goal-oriented learners.

Because of their underrated pedagogical potential, a QuADEM assessment unit has been devoted to the use of questionnaires in digital learning modules. The Questionnaires Unit provides you with a tool to assess the methodological thoroughness of the questionnaires used in a digital learning module.

SUMMARY

Definition
A questionnaire is a research instrument consisting of a list of questions that a number of people are asked so that information can be collected about something.

Scope
This unit can be used for all kinds of questionnaires.

Methods
Plus/Minus Method
Think Aloud Protocol
Interview

Respondents
Representative end user
Methodological experts

PRECONDITIONS No specific preconditions need to be taken into account for this unit.

CHECKLIST To assess the questionnaires in the digital learning module, you can use the criteria listed in the checklist below. Use the plus/minus method, a think aloud protocol and interviews as research methods to determine to what extent, according to the respondents, each statement applies to the digital learning module at hand. Then continue by checking the boxes with the scores that best reflect the respondents’ opinions.
2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
N/A = not applicable

Questionnaires (scores: -2 / -1 / 0 / 1 / 2 / N/A)

1. The objective of the questionnaire is clearly stated.
2. The target audience of the questionnaire is well-defined.
3. The questionnaire is adapted to the target audience.
4. The questionnaire is useful within the learning process.
5. The questionnaire has a clear and meaningful title.
6. The questionnaire has a clear layout.
7. The question sequence is optimal.
8. The questionnaire is internally consistent.
9. The questionnaire has a reasonable length.
10. The questionnaire respects respondents’ privacy.
11. Questions are not double-barreled.
12. The answer categories of closed questions are mutually exclusive.
13. The answer categories of closed questions are exhaustive.
14. Questions are written in clear terms (no jargon or technical terminology).
15. Questions don’t contain “red flag” terminology.
16. Questions are not prestige biased.
17. Questions are not leading.
18. Questions are consistent with the provided responses.

Additional comments:

SCRIPT Next you will have to design a script for your assessment sessions. The overview below describes the possible course of an assessment session: it sums up the steps you may have to take and suggests how to incorporate the different research methods smoothly into a single session.

INTRODUCTION Start by briefly informing the respondents about the set-up of the assessment session and the different research procedures they will participate in.

THINK ALOUD Ask the participants to carry out a limited number of specific tasks while thinking aloud and reading the on-screen text aloud. In this case you should instruct them to fill out the questionnaire(s) available in the digital learning module. You can record the audio and/or video of the session as a back-up. This method allows you to get a picture of the mental mapping process of the participants.

PLUS/MINUS METHOD Ask the participants to go attentively through the questionnaire(s) available in the digital learning module. At each question or part of the questionnaire they value negatively, they should note down a minus (-); at each question or part they value positively, they should note down a plus (+).

INTERVIEW When the participants have finished marking the text with pluses and minuses, you can discuss the marked text. You can continue with a short retrospective and structured interview, based on the checklist. All criteria of the checklist should be addressed. Score the different criteria together with the respondent. After all respondents have taken part in your assessment, you can calculate the average scores and use these to score the unit’s checklist.

MANUAL This manual provides you with additional information on the meaning and the scope of specific criteria and can also offer advice on how to improve certain shortcomings within the digital learning module. You can consult it throughout the entire assessment procedure.

Questionnaires criteria and explanations

1. The objective of the questionnaire is clearly stated.
Developing a questionnaire without a clear objective or purpose in mind will probably result in including useless questions and omitting important issues, wasting both your time and your participants’ efforts. Moreover, a clearly stated objective will increase the participants’ motivation to answer the questions and allow for quicker processing of the results.

3. The questionnaire is adapted to the target audience.
In order to increase response rates, it is vital to make sure that both the questions in the survey and the overall style and language are adapted to the target audience.

4. The questionnaire is useful within the learning process.
For a questionnaire to be truly successful, it should be integrated in the learning process. Think about the timing, method, goal, type of questions and consequences of the questionnaire, and about how the results will be used to enhance learning.

5. The questionnaire has a clear and meaningful title.
A questionnaire with a title is generally perceived to be more credible than one without, and adding a goal is useful for motivation.

6. The questionnaire has a clear layout.
Make sure that the presentation of the questions and the use of white space, colours, pictures, charts or other graphics do not diminish your participants’ interest or distract from the questions.

7. The question sequence is optimal.
Make sure the answer to a question is not influenced by previous questions. Group questions together logically and insert clear headings. Make sure questions flow:
− logically from one to the next;
− from the more general to the more specific;
− from the least sensitive to the most sensitive;
− from factual and behavioural questions to attitudinal and opinion questions;
− from unaided to aided questions.

8. The questionnaire is internally consistent.
Internal consistency is the extent to which tests or procedures assess the same characteristic, skill or quality. It is a measure of the precision between the observers or the measuring instruments used in a study. This type of reliability often helps researchers interpret data and predict the value of scores and the limits of the relationship among variables. For example, a researcher designs a questionnaire to find out about college students’ dissatisfaction with a particular textbook. Analyzing the internal consistency of the survey items dealing with dissatisfaction will reveal the extent to which the items on the questionnaire focus on the notion of dissatisfaction. (Writing Guides, Colorado State University: http://writing.colostate.edu/guides/research/relval/com2a4.cfm) One common statistic for this is illustrated after this table.

9. The questionnaire has a reasonable length.
There is no universal agreement about the optimal length of questionnaires; it probably depends on the type of participants. As a general rule, short simple questionnaires usually attract higher response rates than long complex ones. However, some studies have shown that the length of a questionnaire does not necessarily affect response. More important than length is question content: participants are more likely to respond if they are involved and interested in the question topic.

10. The questionnaire respects respondents’ privacy.
Inform participants about confidentiality issues, the status of the survey (voluntary or mandatory) and any existing data-sharing agreements with other organizations.

11. Questions are not double-barreled.
Double-barreled questions combine two or more issues in a single question. They are difficult to answer and ambiguous to interpret. If double-barreled questions occur in the questionnaire, rephrase them to separate the issues. E.g. “Do you think taxes should be lowered and schooling should be free?” should become “Do you think taxes should be lowered?” and “Do you think schooling should be free?”

12. The answer categories of closed questions are mutually exclusive.
A respondent’s answer should fit in only one category. E.g. the categories 0-10, 10-20 and 20-30 are not mutually exclusive; they should become: less than 10, 10-19, 20-29, more than 29.

13. The answer categories of closed questions are exhaustive.
All possible responses should be provided. Add more answer categories, or an “other, please specify”, “don’t know” or refusal category. A bad example: “Marital status: single, married.” A good example: “Marital status: married, widowed, divorced, separated, living together, single, other (please specify).”

14. Questions are written in clear terms (no jargon or technical terminology).
Bad example: “Do you believe that the UK should have a bicameral parliament?” Good example: “Do you believe that the UK should have upper and lower houses of parliament?”

15. Questions don’t contain “red flag” terminology.
Words with emotional connotations or that coincide with strongly held values evoke emotional responses and may skew results; this should be avoided. Bad example: “Do you think it is fair to murder innocent whales in the Pacific?” Good example: “How do you feel about whale hunting in the Pacific?”

16. Questions are not prestige biased.
Respondents may answer on the basis of their feelings toward the prestigious person or group rather than addressing the issue. Bad example: “According to a recent ACM CHI poll, 80 percent of the people oppose copyright protection for the look and feel of user interfaces. What is your opinion on this issue?” Good example: “What is your opinion on copyright protection for the look and feel of user interfaces?”

17. Questions are not leading.
Leading questions are actually statements disguised as questions, and make respondents feel that only one response is legitimate. Bad examples: “Don’t you agree that the look and feel of user interfaces should not fall under copyright protection?” or “Do you agree with the majority of people that the health service is failing?” Good examples: “What is your opinion on copyright protection for the look and feel of user interfaces?” or “What is your opinion on the health service?”

18. Questions are consistent with the provided responses.
Bad example: Q: “How often do you…?” A: “yesterday, last week,” etc. Good example: Q: “How often do you…?” A: “once a day, once a week, once a month.”
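As announced under criterion 8, one common statistic for quantifying the internal consistency of a set of questionnaire items is Cronbach’s alpha. The sketch below is an illustration only, not part of the QuADEM method; it assumes numeric (e.g. 5-point Likert) responses, one row per respondent and one column per item.

    # Illustrative Cronbach's alpha calculation. Assumption: numeric item
    # responses (e.g. Likert scores). Values above roughly 0.7 are
    # conventionally read as acceptable internal consistency.

    def variance(values):
        """Sample variance (n - 1 in the denominator)."""
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    def cronbach_alpha(responses):
        """responses: a list of respondents, each a list of item scores."""
        k = len(responses[0])
        item_vars = [variance([r[i] for r in responses]) for i in range(k)]
        total_var = variance([sum(r) for r in responses])
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    # Example: four respondents answering three 5-point Likert items.
    print(cronbach_alpha([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]))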

SCORE After completing the checklist, please rate the module on a scale from A to E and explain your mark. For more information on the scoring procedure, please consult the scoring manual.

Overall judgement

Score: A / B / C / D / E

Assessor’s comments:


SCORING MANUAL Although each QuADEM assessment unit can be valuable as a reference document in its own right, its main purpose is an evaluative one. To facilitate, streamline and objectify the process of casting a verdict on the quality of a digital learning module, a scoring system is integrated in each of the units. This manual will provide a step-by-step explanation of how to operate this scoring system.

GENERAL PRINCIPLES

ALL SCORING HAPPENS AT THE UNIT LEVEL Since each assessment may target different components and, consequently, may use a different combination of assessment units, the scoring is done at the unit level. This means you will not end up with one general score for the entire digital learning module, but with a set of scores, one for each of the components/units you have decided to include in the quality assessment.

THE SCORING HAPPENS IN TWO PHASES The actual scoring happens in two phases: first you score each criterion in the unit’s checklist and then you calculate the overall score of the unit. The calculation of the overall unit score will be done automatically when you are using the QuADEM Webtool, but it can also be done manually.

QUALITATIVE INFORMATION SHOULD NOT BE WASTED The scoring process plays an important part in objectifying and summarizing your research results, but it is also restricting: the specific information, tips and comments you have gathered throughout the contact with different respondents can’t be represented in one single score. As assessor you should be aware of the value of this qualitative research output. Include it in the comment boxes at the end of each unit and incorporate it in your report’s findings and recommendations.

STEPS OF SCORING

SCORING THE CHECKLIST CRITERIA The first step in the scoring process is the attribution of a numeric score to the criteria in the unit’s checklist. For the scoring, a 5-point scale is used, ranging from 2 (agree entirely) to -2 (disagree entirely). It is also possible to check a sixth option, N/A (not applicable); if this option is checked, the variable will be ignored when calculating the final score. The scoring of the criteria listed in the unit’s checklist is best done immediately after the assessment sessions with all the respondents have been concluded. The assessor should also remember to add additional comments that may explain, complement or illustrate the scores.


SCORING THE UNIT Since each assessment may involve a different combination of assessment units, an overall judgement is passed on the unit level. This overall judgement consists of a letter score ranging from A (excellent) to E (fail). The meaning of each of the possible letter scores is explained below (box 4). It should be kept in mind that the letter score and its message only summarize how the digital learning module scored for one specific component of quality. For example, when the assessor scores the content unit with an ‘A’, he judges the module to be excellent and in no need of any significant improvements when it comes to its content. This obviously does not mean the module is perfect in all fields. It might score only a ‘D’ for usability or an ‘E’ for intercultural aspects. Box 4: letter scores

A (Excellent): Barring a few minor flaws listed in the assessor’s comments, the digital learning module under review can remain unchanged.

B (Good): The digital learning module is of more than acceptable quality, but it is advisable to have a closer look at the criteria that received low scores and to give the assessor’s remarks serious consideration.

C (Pass): Even though the digital learning module receives a ‘pass’ rate, it is inadvisable to launch it as it is. Clearly, a number of improvements should be made.

D (Narrow fail): The digital learning module cannot be used as it is. A number of criteria and assessor suggestions have to be attended to before the module can be reviewed again.

E (Fail): Major adjustments have to be made before the digital learning module can be reviewed again.

To calculate the letter score for a specific unit, sum the scores of all applicable criteria. The total is then expressed as a percentage of the full score range (from the minimum to the maximum possible sum, as the worked example below illustrates) and converted as follows:

LETTER SCORE   % OF MAXIMUM
A              ≥ 80%
B              60% – 80%
C              50% – 60%
D              30% – 50%
E              ≤ 30%
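When scoring manually, this conversion can be written out as a small routine. The sketch below is illustrative only (the QuADEM Webtool performs the calculation automatically); in line with the worked example below, it assumes that the percentage is taken over the full range from the minimum to the maximum possible sum, and it resolves boundary cases in favour of the higher letter.

    # Illustrative unit-level letter score calculation. Assumptions: the
    # percentage is taken over the full score range, as in the worked example
    # below; N/A criteria are simply left out of the input list.

    def letter_score(criterion_scores):
        """criterion_scores: the -2..2 scores of all applicable criteria."""
        n = len(criterion_scores)
        low, high = -2 * n, 2 * n
        pct = (sum(criterion_scores) - low) / (high - low)  # position in range
        if pct >= 0.80:
            return "A"  # excellent
        if pct >= 0.60:
            return "B"  # good
        if pct >= 0.50:
            return "C"  # pass
        if pct > 0.30:
            return "D"  # narrow fail
        return "E"      # fail

    # Ten applicable criteria summing to 10 (75% of the -20..20 range) give a
    # 'B', matching the worked example below.
    print(letter_score([2, 2, 1, 1, 1, 1, 1, 1, 0, 0]))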

Next to the letter score attributed to the digital learning module, the overall judgement of a unit also consists of a written comment from the assessor, explaining the score and offering feedback, along with possible suggestions for improvement.

EXAMPLE In our example, an assessment of component ‘content’ using the Content Unit has resulted in the following checklist scores:

Content (scores: -2 / -1 / 0 / 1 / 2 / N/A)

1. The content is adapted to the learning objectives.
2. The content is adapted to the target audience.
3. The content is objective.
4. The content is up to date.
5. The content is well-organized.
6. The content is correct and accurate.
7. The content is complete.
8. The content is easy to understand.
9. The content is focused and specific.
10. The content is credible.

This unit’s checklist includes 10 criteria, so, if all of the criteria are applicable (which is the case here), the maximum score is 20 and the minimum score is -20. Based on this, one can calculate the corresponding scores for this specific unit.

LETTER SCORE   % OF MAXIMUM   CORRESPONDING SCORES
A              ≥ 80%          ≥ 12
B              60% – 80%      4 to 12
C              50% – 60%      0 to 4
D              30% – 50%      -8 to 0
E              ≤ 30%          ≤ -8

In our case the sum of the checklist scores is 10, which corresponds with a ‘B’.

Overall judgement

Score: B

Assessor’s comments: The content of the materials is overall quite impressive. Sometimes, however, it is necessary to make the information more specific and to the point. Once the material becomes less vague, the score could quickly rise to an A.


METHODOLOGICAL COMPENDIUM This section gives an alphabetical overview of the assessment methods and techniques used in combination with the QuADEM checklists and manuals. By explaining, describing and demonstrating the different research methods, this compendium constitutes a very useful tool for those researchers who wish to employ the methods suggested in the units to gauge the quality of a digital learning module under review.

CARD SORTING Card sorting serves to determine the ideal structure for a digital module or to evaluate an existing structure.

CLARIFICATION Card sorting is a research method used in website development. Rather than asking users what they think about the structural logic behind a prototype, this method asks users to logically arrange the site’s content themselves before looking at the site. In card sorting the respondents literally sort a number of cards. Each card represents a content unit. By combining content units, the users show which structures are the most intuitive. In some card sorting sessions, researchers may also ask the respondents to label the different groups they have arranged the cards into. Ideally, card sorting helps website developers to create an intuitive sitemap which will not need a lot of adjustments afterwards.

CARD SORTING MAY VARY IN TERMS OF STRUCTURE…

Open: Respondents arrange the cards into groups and afterwards label each group. This type of card sorting is ideal when developers need to know how users intuitively group and label content units.

Closed: In a closed card sort, participants sort items into categories that have been predefined. This type of card sorting reveals how users intuitively arrange content blocks within a larger content category.

Combined: Combining both kinds of card sorting offers a complete view of the users’ intuitive content structure. The closed sorting takes place after the open sorting; by following this order, it is possible to observe the effect of imposing predefined categories on the structure.

… OR IN TERMS OF OBSERVER-PARTICIPANT INTERACTION…

One on one: The researcher is present while the respondent is sorting the cards. In this way the researcher will be able to give the respondent full attention, reducing the risk of losing data.

Simultaneous: When multiple card-sorting sessions take place at the same time, the researcher takes on the role of facilitator rather than active observer. The main advantage of this method is its efficiency. However, having multiple sessions going on at the same time implies that the observer will not be able to pay full attention to each user, which will make the data harder to interpret afterwards.

… AND IN TERMS OF CHANNEL.

Face to face: The card sorting session uses a “real setting” and real cards. Being face to face with the respondent makes it easier for the researcher to interpret actions and thought processes. However, this method does require the respondents or the researcher to travel, which may be expensive in terms of means or time.

Online: WebCAT is a free card sorting tool, created by the National Institute of Standards and Technology (NIST) and available for downloading. Online card sorting greatly decreases travel time and cost, but it also reduces the observer-participant interaction.

USE Card sorting is ideally combined with verbal protocols (i.e. the respondent voicing his/her thoughts and reasons for action). Card sorting sessions generally follow the chronology listed below.

Before
− The various content blocks are defined and written on large separate cards.
− Each card gets a number on the back and in the front bottom right corner (for classification purposes).
− The total number of cards should not exceed 50.
− A number of blank cards should be available for users who want to add topics.
− If multiple sessions are going on simultaneously, there should be as many sets of cards as there are participants.
− If different user profiles have been defined, each user group should be represented by 4 to 6 participants.

During
Ideally, an open card sorting session follows this structure:
− Each participant is given a number of cards; each card represents a block of content.
− Each participant groups the cards in the order that makes most sense to them. Users are allowed to create a hierarchical structure.
− Once the cards have been grouped, the users label each group.
Ideally, a closed card sorting session follows this structure:
− Each participant is given a number of cards; each card represents a block of content.
− Each participant assigns the cards to groups that have been predefined. Users are allowed to create a hierarchical structure.
If the users have been asked to think aloud while sorting the cards, the researcher takes note of the thinking process.

After
Immediately after the session the researcher writes down the order in which the user has arranged the cards. The numbers assigned to the cards allow the researcher to quickly record the chosen structure.

TROUBLESHOOTER

The respondent disagrees with the chosen topics.
Respondents who wish to alter, remove or add topics might have useful insights. Provide blank cards so users can add their suggestions.

All respondents sort the cards differently.
If no two card layouts at least roughly correspond, there might be different user groups that have not been defined yet. Alternatively, there may be something wrong with or unclear about the basic idea behind the concept.

The labels are unclear.
Some labels may be inherently unclear without context (e.g. “introduction”) while others require subject-specific knowledge, making them obscure for some respondents. To avoid this problem, the labels may be accompanied by a brief contextualising note.

EXAMPLE The results below show what the outcome of a card sorting session may look like. The subject of the open card sorting session was a digital learning module about the curriculum vitae. Each respondent’s structure shows a hierarchy in the same way a website menu might, and some categories were added by the respondents themselves.

Respondent 1 (13’36”) and Respondent 4 (17’03”) each arranged the same content cards into a different hierarchical structure. The cards covered topics such as: purpose, types (chronological, functional and performance CV), sections, layout, formats, introduction, personal information, education, experience, skills, languages, professional goal, profile, activities, references, interests & pastimes, and extra info.

FOCUS GROUP A focus group is a small group discussion of 6 to 12 people, led by a trained facilitator who moderates the conversation and introduces new topics.

CLARIFICATION Usually focus groups are used to get user feedback after a task has been completed. The main assets of focus groups are efficiency and group interaction; it is this interaction that might trigger memories or feelings that would go unmentioned during a one-on-one interview.

FOCUS GROUPS MAY VARY IN TERMS OF RESEARCHER ROLE…

Standard: The facilitator moderates the discussion, ensuring that all respondents have a chance to respond and that all topics are covered.

Dual: The focus group is led by two moderators, one of whom leads the discussion while the other takes care of technicalities (such as recording the session and making sure that all topics are covered).

Duelling: In duelling focus groups two moderators purposefully take opposing sides during the discussion in order to elicit more extreme reactions from the respondents.

… RESPONDENT ROLE…

Two-way: One focus group watches another focus group and discusses the observed interactions and conclusions.

Respondent moderator: One or more respondents are asked to act as a temporary moderator.

Client participant: A member of the product development team takes part in the focus group.

… SIZE…

Mini: 4 or 5 participants.

Normal: 6 to 12 participants.

… AND CHANNEL.

Face to face: All of the participants are in the same physical space.

Phone: Interaction occurs via a telephone network (audio only), which may obstruct interaction and intelligibility.

Online: Interaction occurs via the internet (audio and possibly video), which may obstruct interaction and possibly intelligibility.


USE Focus groups can be used as part of a usability test (see below). When used as a stand-alone method, they are a valuable way of gathering user feedback. A focus group is led by two researchers, one of whom acts as moderator while the other takes notes and takes care of the technical equipment (recording devices). Ideally, focus groups are recorded and transcribed, allowing for post factum interpretation. During the focus group, however, one researcher should take notes in real time so as not to miss any salient information. A focus group might generally follow the chronology listed below:

Before
− The researcher determines which topics need to be addressed.
− For a 60- to 90-minute session, 5 or 6 questions (not including sub-questions) are prepared.
− Practical arrangements are made (the room is set up so that all participants can see and hear each other, audiovisual recording devices are installed…).

During
− The researcher(s) introduce themselves.
− The researcher welcomes the participants and explains the aim and background of the focus group.
− The researcher introduces the first topic that should be addressed, inviting participants to answer and ensuring that all participants have the chance to speak.
− After each topic has been discussed, the researcher summarizes the main points, asks for comments and moves on to the next topic.
− After the final topic has been addressed, the researcher asks for final comments and thanks the participants for their time.

After
− As soon as possible, the researchers write a report based on what has been said and observed during the focus group session. Prior to writing the report it can be useful to transcribe the session; the example below shows such a transcription.

TROUBLESHOOTER Only one focus group is conducted.

As one focus group might yield unreliable results due to the profile of the participants or because of circumstances outside of the researcher’s control, it is advisable to conduct at least two.

The focus group is affected by “group think”.

Sometimes participants voice the group’s consensus rather than their own opinion, which can lead to unreliable results. An efficient technique to avoid groupthink is to include a “devil’s advocate” among the respondents. It is this participant’s role to highlight different opinions and to rephrase questions.

EXAMPLE Mini Focus group 1/3


Meta info:

Date: 4 December 2008
Researchers: John Carpenter (moderator) and Maya Tillutson (facilitator)
Summary: Mini focus group. User experience with digital learning module.
Location: Library of Portuguese literature, Ghent University
Participants: 4 female students of multilingual business communication

Transcription

M: What was your general impression?

S1: Quite a clear layout, I think. I liked the theory especially. The exercises were quite long. The case. Long too. I liked the theory especially.

M: Catherine. How did you experience the learning environment?

S2: What did I say again? [laughs] That it was terribly boring, that it only dealt with the introduction and that it could have used some more info on the entire dissertation. That I got the impression you’re supposed to spend half your life writing an introduction. Which you’re not.

S4: I…

M: Yes, Sarah?

S4: …I agree with that. I thought it was quite boring and quite limited as well. But the things it does deal with – they aren’t many – it deals with clearly.

S3: I think it was very clear. I didn’t really like the exercises much. I’d never have done them at home. But I think the site is useful for freshmen.

S2: Mhmm …

INTERVIEW Interviews are a way of gathering information from informants via face-to-face contact.

CLARIFICATION THERE ARE VARIOUS KINDS OF INTERVIEW, DEPENDING ON THE INPUT… No input

The respondent discusses only those aspects which appear relevant to him/her. The researcher can decide to elaborate on certain topics, which were however initiated by the respondent.

Thematic

The researcher will initiate a number of preset themes or questions, which can be done in a structured, unstructured or semi-structured way.

Stimulated recall The input for this interview depends largely on the actions performed by a user in a task. Together with the researcher the respondent watches a recording of his/her own performance and comments on any striking or salient points.

…ON THE STRUCTURE… Unstructured

The success of this kind of interview depends on the interaction between the researcher and the respondent. There is no fixed interview schedule, but rather a number of themes that are to be addressed.

Semi-structured

The researcher follows a preset schedule. It is possible however to deviate from this when interesting issues arise.

Structured

The interviewer goes through a fixed series of written questions without deviation. This type of interview closely resembles a questionnaire.

…AND DEPENDING ON THE NUMBER OF INFORMANTS INTERVIEWED AT THE SAME TIME. One-on-one

This kind of interview allows the researcher to zoom in on the views of individual respondents.

Group

The advantage of interviewing larger numbers at once is that group interactions might spark observations that would otherwise have gone unnoticed.

USE Interviews can offer a deep insight into the thoughts, actions, etc. of respondents. An advantage of interviews as opposed to questionnaires is the opportunity they offer for probing for subtleties or clarifications. Interviews are not a reliable method for determining the actual behaviour of users; in that case, a task analysis using a verbal protocol is the preferable method. Since interviews require preparation, execution and data processing, they are rather time-consuming. For this reason, usually no more than 15 (but no fewer than 5) respondents for each user group are required. An interview might follow the chronology listed below:

Before
- The researcher prepares the interview by drawing up an interview script, which can be anything from highly rigid (structured) to quite open (unstructured).

During
- The researcher follows the script. The session is recorded. Depending on the interview style, it may not be advisable for the researcher to take extensive notes (i.e. if the interview is to simulate a conversation).

After
- The researcher completes his/her notes and draws up an interview report, which may contain a transcription.

TROUBLESHOOTING The interviewer-interviewee interaction fails.

Even though piloting an interview is important and helpful, a lot depends on the interviewer–respondent interaction, which cannot be controlled. If there is no rapport between interviewer and interviewee, the interview will run less than smoothly.

The question contains “red flag” words.

Avoid words with emotional, religious or political connotations. Bad example: Do you think it is fair to murder innocent whales? Good example: How do you feel about whale hunting in the Pacific?

The question is prestige biased.

Because some respondents may be influenced by their feelings towards a prestigious person or group, it is important to ask neutral questions and to omit references to prestige groups or persons.

The question is leading.

Leading questions are actually statements disguised as questions, and make respondents feel that only one response is legitimate. These questions should be avoided. Bad example: Do you agree with the majority of people that the health service is failing? Good example: What is your opinion about the health service?

EXAMPLE Example of a preparation sheet for a semi-structured interview

INTERVIEW SCRIPT

Interviewer name: ……………………………………………
Respondent #: ……………………………………………
Date: ……………………………………………
Time: ……………………………………………

INTRODUCTION The interviewer explains what the project is about and what the goal of the interview is (i.e. to review the electronic learning environment (ELO), not to evaluate the respondent). The interviewer now asks the respondent to read the introduction to the ELO. The interviewer leaves the room and returns after 35 minutes.

QUESTIONS: EXPECTATIONS
Could you tell me what the website is about?
On first sight, what do you think the website has to offer?
What kind of information do you expect to get?
How would you like to get this information (i.e. text, video…)?

QUESTIONS: PREDICTED USE
How would you navigate through this site?
How would you study the material?
How would you deal with the theory and the exercises?

ORIENTATION The interviewer asks the respondent to click through the ELO at his/her own pace as he/she would do at home. The interviewer leaves the room and returns after 15 minutes.

QUESTIONS: OPINION
How do you feel about the variation in the available material?
How would you deal with this website’s content?
Does the website allow you to work as you’d like to work?
Do you think the material is relevant?

PLUS/MINUS The plus/minus method is a way of gathering information concerning the respondents’ opinion.

CLARIFICATION This method allows researchers to get an insight into stakeholders’ opinions. Respondents give their opinion by going through the material and writing a “+” or a “-” next to a section they like or dislike. Ideally, the respondents will briefly explain their opinion.

THE METHOD MAY DIFFER IN TERMS OF MEDIUM… Paper-based

The easiest medium to use for the respondents is paper. Commenting on a print-out is generally less troublesome and faster than commenting on a PDF or Word-based screenshot.

Digital

A digital version can be e-mailed and saved to disk, which is more convenient for the researcher. However, marking digital materials can be a challenge. The easiest way is to use comments in PDF or Word, or, even better, use specialized usability software such as InFocus.

Audiovisual

The plus/minus method can be combined with a think-aloud protocol. In that case, the respondent can click through the learning environment and explain the rationale behind the pluses and minuses. Software such as Camtasia or Morae will record the screen and the user’s comments.


… IN TERMS OF DETAIL OF JUDGEMENT… Dichotomous

The respondent can give two marks: “+” and “-”.

Scale

In this case, a more fine-grained scoring tool is at the researcher’s disposal: “++”, “+”, “-” and “- -”.

… AND IN TERMS OF RESEARCHER – RESPONDENT INTERACTION. Separate

The respondent can be asked to comment and send his/her finished document to the researcher.

Face to face

The researcher can choose to be present as the respondent is commenting. This method is less time efficient but it does offer the possibility of asking clarifying questions.

USE The plus/minus method can be an efficient way of gathering informant judgements. It is not very suitable as a tool for gaining a nuanced insight into the respondents’ thoughts and opinions. Interviews, for instance, are more useful for that purpose. The plus/minus method can, however, also be a prologue to an interview, in the same way a think-aloud protocol can. When used as a stand-alone method, the following steps should be taken:
1. Decide whether to use a paper-based or a digital format. A digital format is advisable for large groups of respondents and/or when the respondents live far away. If the respondent groups are not large and live in the vicinity, an audiovisual format can be considered.
2. Send the digital files or printouts to the respondents, giving them clear instructions on how to mark and whether or not to write comments.
3. Analyse the results and, if necessary, ask a number of follow-up questions.

TROUBLESHOOTER The reason for the respondent’s judgement is unclear.

It is important to instruct the respondents to clarify their +/- judgements. If they have neglected to do so, the researchers might consider inviting these respondents to a follow-up interview.

The method did not yield enough usable data.

Because of the conciseness inherent in the +/- method, researchers will sometimes receive less usable data than they expected. That is why it is advisable not to use the plus/minus method as a stand-alone tool, but to complement it with other methods, such as a follow-up interview, a think-aloud protocol, etcetera.

EXAMPLE

TASK ANALYSIS Task analysis is a method that verifies how users go about achieving the goals they set out to reach.

CLARIFICATION Task analyses check which goals users wish to accomplish. By analysing how users interact with the learning environment and by tracking their thinking patterns, it examines how these goals are accomplished and what influence skills and background knowledge have on the way users accomplish tasks. The purpose of task analyses is to determine which tools/applications a website should support and whether the website structure/layout could/should be improved.

TASK ANALYSES MAY DIFFER IN TERMS OF OBSERVATION METHOD… Camera

Videotaping the session is the most basic way of performing a task analysis. Even though the data is kept, the image quality can be quite poor. Being videotaped can also influence the behaviour of the respondent.


Additional screen

The researcher can track the actions of the respondent in real time by connecting two screens to the same computer. He/she should make notes while viewing, since the session will not be saved.

Screen capture

Software such as Camtasia Studio records the screen and the user’s voice. The Morae software also records the user’s facial expression and tracks system-internal data such as mouse clicks and keystrokes, allowing for a more detailed analysis.

… IN TERMS OF DETAIL OF ANALYSIS... Manual

Taking notes of the respondent’s actions can be an adequate tool.

Software

Software can track and analyse the user’s interaction with the computer. The image below shows one such analysis.

… AND IN TERMS OF ADDITIONAL METHODS. Stand-alone tool

A task analysis can be used as a stand-alone method. In this case the focus of the research will be on technical aspects such as click paths and keystroke analyses.

Think-aloud protocol

When combined with a think-aloud protocol, the technical data will be supplemented by more subjective elements, such as thinking patterns.


USE A task analysis can be combined with the use of scenarios, personae and a verbal protocol. It can be useful to conduct an interview or a focus group after the task analysis so as to further interpret the actions taken by the respondents during the task. A task analysis might follow the chronology listed below:

Before
- The researcher determines the goals of users or user groups, possibly after conducting a user analysis (i.e. through interviews or focus groups).
- The researcher draws up a number of scenarios or tasks for the respondents to perform.
- The respondents are contacted and the technical equipment is set up.

During
- The researcher meets the users and explains what a task analysis is and how to go about it.
- The user performs the tasks/scenarios. The researcher records the session and takes notes.

After
- After the session the researcher processes the data.

TROUBLESHOOTING The goals from the task analysis do not correspond to the real user goals.

If the real goals do not correspond to the goals set in the task analysis, any data resulting from this task analysis will be unreliable. By analysing the target audience (through focus groups, questionnaires or interviews) the research team will be able to get a realistic view of the goals of each group of users and prevent a mismatch of user goals.

The task goal cannot be reached.

It is always a good idea to trial or pilot a task. This will reveal most – if not all – technical flaws that may obstruct the task analysis.

EXAMPLE The example below shows notes taken by a researcher during a task analysis. The notes include time, a brief description of the action and often a brief quote.


VERBAL PROTOCOL

A verbal protocol is a way of collecting qualitative data which offers an insight into the thought processes of informants. (In some literature, ‘verbal protocol’ is used synonymously with ‘verbal report’.)

CLARIFICATION Verbal protocols are a widely used research method, employed in settings ranging from software design to social sciences. A verbal protocol combines the following variables.

TALK ALOUD OR THINK ALOUD. Talk aloud:

Informants voice their thoughts and actions as they conduct tasks. They do not interpret them, nor do they voice their feelings. In talk-aloud protocols there should be no room for subjective information, as only factual data should be uttered.

Think aloud:

Informants voice their thoughts and actions as well as their interpretations and feelings, so that subjective information complements the factual data.

CONCURRENT OR IN RETROSPECT. Concurrent:

The verbal report is given in real time.

Retrospective:

The verbal report is given afterwards.



THE ROLE OF THE RESEARCHER. Mediated:

The researcher occasionally intervenes

Non-mediated:

The researcher does not intervene

USE Verbal protocols can be used as a component in usability tests or as a tool in card-sorting. They can also be used independently to determine how users interact with a product (talk aloud and think aloud) as well as the logic behind the interaction (think aloud). Verbal protocols are usually (video) recorded, so researchers can go back to what has been said or done during any given session. As the session takes place, observers usually take notes of what happens, without interpreting actions or words. After the session, a report is written, possibly supplemented by a full transcription.

Concurrent verbal protocols contain the following phases:
1. The researcher draws up a number of tasks the users should perform. If there are different groups of target users, the use of personae (see 2.3) will make it easier to draw up different scenarios for different groups.
2. For each target group, four to six users are contacted. If the target audience is homogeneous, this number of respondents will suffice.
3. The researcher meets the users and explains what a verbal protocol is and how to go about it. Each user partakes in a short try-out session, after which the researcher gives feedback, offering suggestions for improvement.
4. The user performs the tasks while voicing the required information. The researcher records the session and takes notes.
5. After the session the researcher processes the information as quickly as possible.

Retrospective verbal protocols contain the following phases:
1. The researcher draws up a number of tasks the users should perform. If there are different groups of target users, the use of personae (see 2.3) will make it easier to draw up different scenarios for different groups.
2. For each target group, four to six users are contacted. If the target audience is homogeneous, this number of respondents will suffice.
3. The researcher meets the users and explains what a verbal protocol is and how to go about it. Each user partakes in a short try-out session, after which the researcher gives feedback, offering suggestions for improvement.
4. The user performs the tasks without saying anything he/she would not mention in reality. The researcher records the session and takes notes.
5. Immediately after the session, the researcher interviews the user. Ideally, the recording of the session can offer cues or trigger memories about the session.
6. After the session the researcher processes the information as quickly as possible.


TROUBLESHOOTER The respondents act differently in an artificial setting.

People may act differently in a verbal protocol setting than they would in private. Trialling the verbal protocols with the respondent will help to diminish this effect. After a while the respondent will stop feeling uncomfortable being observed.

The report is hard to interpret.

Reports of verbal protocols that make sense to the initial researchers alone have limited use. Only complete reports, which also list seemingly minor details such as setting and room layout, can serve as full future references.

Researchers do not notice all salient details in real time.

Recording the verbal protocol enables researchers to reinterpret the material afterwards. Programs such as Camtasia or Morae are ideally suited for this purpose.

There is too much data to process.

Working with initial hypotheses facilitates looking for information. Flagging important instances during the verbal protocol will help researchers to retrieve specific moments.

There is doubt about which kind of protocol to use.

Concurrent reports tend to slow down the thinking process, whereas their retrospective counterparts may cause users to forget why they took certain actions.

There is doubt about which language to use.

It is important to determine whether the informant is able to voice his/her thoughts accurately in an L2. Most likely, respondents will be better able to express fine shades of meaning in their L1.

EXAMPLE The example below shows the notes taken during a task analysis with a think-aloud verbal protocol. The notes taken in real time are a substantial help when interpreting the data afterwards.

Researcher: B.D.
Respondent: T.D.S.
Date & time: 15/12/08 – 11.25AM

0:30   Introduction
09:55  Session begins
09:57  Action    Reads out loud “European recruiter…”
10:03  Comment   This is promotional writing
10:25  Comment   Looks like it’s in the field of H.R.
10:30  Opinion   Does not appeal to respondent
10:35  Action    Selects different ad: “Junior visual communication specialist”

QUESTIONNAIRES A questionnaire is a research instrument consisting of a series of questions and other prompts for the purpose of gathering information from respondents.

CLARIFICATION Questionnaires have advantages over some other types of surveys in that they are cheap, do not require as much effort from the researcher as verbal or telephone surveys, and often have standardized answers that make it simple to compile data. There are various question types at the researcher’s disposal:

Closed questions

A question is closed when the answers are predefined. This enables standardized responses and fast and easy coding and scoring. However, closed questions might push or omit certain answers and might not have the same meaning to all the respondents.

Open-ended questions A question is open when the respondents can answer without being constrained by a fixed set of possible responses. Open questions avoid imposing any restrictions on the respondent. However, there are many different ways respondents may choose to answer a question. Moreover, no matter how carefully we word the question, open questions may leave room for misinterpretation and provision of irrelevant or confusing answers. Thus, open questions can be difficult to code and analyse.

Contingency questions Contingency questions are answered only if the respondent gives a particular response to a previous question. This avoids asking questions of people that do not apply to them (for example, asking men if they have ever been pregnant). In questionnaires, certain questions will clearly be relevant only to some of the respondents and irrelevant to others. The proper use of contingency questions can facilitate the respondent’s task in completing the questionnaire.

Scaled responses

A popular format for scaled responses is a 5-point Likert scale: strongly agree | agree | neutral | disagree | strongly disagree. Likert scale responses are easy to code and fast to answer, but don’t limit your questionnaire to Likert agree-disagree questions, since this encourages an acquiescence response set (acquiescence response set is negatively related to education). There is some disagreement in the social science community about whether to include neutral or "don't know" responses. Some researchers feel that such choices allow respondents to avoid answering a question. On the other hand, it may be counter-productive to force people to answer questions they don't want to, or to force them to make a choice about which they feel ambivalent. However, your decision about using a neutral category must depend on the particular requirements of the survey. Another type of scale, a rating scale, asks respondents to rate some item or quality on a specific scale, e.g. 1 (worst) to 10 (best).

Matrix questions

Matrix questions assign identical response categories to multiple questions. The questions are placed one under the other, forming a matrix with response categories along the top and a list of questions down the side. This is an efficient use of page space and respondents’ time. Quite often, researchers will want to ask several questions that have the same set of answer categories. In such a case, it is often possible and desirable to construct a matrix of items and answers. The matrix format has a number of advantages. Respondents will probably find it faster to complete a set of questions presented in this fashion. In addition, this format may increase the comparability of responses.

USE


Questionnaires are an effective tool when gathering information from large groups of respondents. Because of their relatively closed nature, questionnaires will only yield information that has been specifically asked for. Once a questionnaire has been distributed, it is virtually impossible to refine the questions or to ask follow-up questions. For that reason, a questionnaire should be drawn up carefully, while keeping in mind the main research topics. The development cycle of a questionnaire could follow this order:
1. The purpose of the questionnaire is clearly defined.
2. The developer/development team draws up a first draft.
3. Feedback is gathered, aimed at eliminating flawed, unnecessary or unclear questions.
4. Based on this feedback, a second draft is composed.
5. A pilot (a trial run of the questionnaire with a representative audience) helps to fine-tune the questionnaire before its administration.

TROUBLESHOOTING The question is double-barreled.

Double-barreled questions are difficult to answer and ambiguous for interpretation. A solution might be to reformulate the question as separate questions. Bad example: Do you think taxes should be lowered and schooling should be free? Good example: Do you think taxes should be lowered? Do you think schooling should be free?

The answer categories of a closed question are not mutually exclusive.

A respondent’s answer should fit in only one category. If this is not the case, either the question or the answer categories are flawed and should be rewritten.

The answer categories of a closed question are not exhaustive.

All possible responses should be provided. If this is not the case, more categories should be added to the list until it is complete.

The question contains jargon or technical terms.

In most cases – especially if the questionnaire is to be distributed among a large population – it is inadvisable to include jargon in the questionnaire. Bad example: Do you believe that the UK should have a bicameral parliament? Good example: Do you believe that the UK should have upper and lower houses of parliament?


The question contains “red flag” or loaded words.

Words with emotional connotations or that coincide with strongly-held values evoke emotional responses and may skew results. Bad example: Do you think it is fair to murder innocent whales in the Pacific? Good example: How do you feel about whale hunting in the Pacific?

The question is prestige biased.

Respondents may answer on the basis of their feelings toward the prestigious person or group rather than addressing the issue.

The question is leading.

Leading questions are actually statements disguised as questions, and make respondents feel that only one response is legitimate.

The question is unnecessary.

All questions should contribute to the goal(s) of the research. If this is not the case, the question should be omitted from the list.

TEST ANALYSIS The methods discussed below offer a way of analysing the reliability of language tests and tasks. Even though it is possible to employ each method separately, their combined use does offer significant advantages, especially when supplemented with the user perspective, which can be determined through qualitative research methods such as focus groups or interviews.

DESCRIPTIVE STATISTICS CLARIFICATION Descriptive statistics are intended to offer a general idea about the test scores and usually contain the following data:

N

N indicates the number of tests reviewed. In the example below, 62 tests have been entered into the system.

Min

The lowest score given on this test/task. In the example below, the listening test yielded the lowest score, i.e. 1.

Max

The highest score given on this test/task, which – in the example below – is 46/50 in the writing task.


Average

The arithmetical mean of all scores. The higher the mean, the better the students performed on the test overall. In the example below, the mean of all partial scores combined is 52,6/100. The mean score should be supplemented with the standard deviation in order to be meaningfully interpretable.

Std. Dev.

Standard deviation (SD) measures how much the values typically deviate from their arithmetic mean. Assuming a roughly normal distribution, about 68% of all results will fall within one SD of the mean. In the example below, this means that about 68% of the respondents will have got a total score ranging from 37,8 (i.e. the mean score minus the SD) to 67,4 (i.e. the mean score plus the SD). A small SD implies that in general the scores do not deviate much from the mean, implying that the scores of the most and the least able student are not far apart. On tests with a high discrimination, the SD is quite large (> 30% of total). In placement tests it is quite normal to have larger SDs, since people with varying backgrounds and skills will take the test.

USE Statistical programmes such as SPSS offer a full analysis of the descriptive statistics at a mouse click.

It is also possible to use more consumer-oriented programmes such as MS Excel, which offers quite a few statistical applications.
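For assessors who prefer a script to a spreadsheet, the same figures can be computed in a few lines of Python using only the standard library. The sketch below is merely an illustration; the scores are invented and not taken from the example table.

    # Descriptive statistics for one test/task, computed over a list of scores.
    import statistics

    scores = [12.5, 29.0, 46.0, 33.5, 27.0, 38.0]   # hypothetical writing scores out of 50

    n = len(scores)                  # N: number of tests reviewed
    minimum = min(scores)            # lowest score given on this test/task
    maximum = max(scores)            # highest score given on this test/task
    mean = statistics.mean(scores)   # arithmetic mean of all scores
    sd = statistics.stdev(scores)    # sample standard deviation

    print(f"N={n}  Min={minimum}  Max={maximum}  Mean={mean:.2f}  SD={sd:.2f}")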


EXAMPLE

            N     Minimum   Maximum   Mean        SD
WRIT_/50    62    12,5      46        29,24828    10,00063
LIST_/10    62    1         6,5       3,395517    1,23713
ORAL_/20    62    6         14        9,344828    2,60872
TOT_/100    62    24,5      74,5      52,60345    14,82901

CORRELATIONS CLARIFICATION Correlations indicate the strength and direction of a linear relationship between two variables. In other words: the stronger the correlation between two data sets, the more they correspond. Correlations are always situated on the -1 to 1 spectrum. The closer a correlation is to either end of the spectrum, the stronger the relationship. That is why the two identical data sets below offer a correlation of 1.


USE

Correlations can be used in test analysis to determine whether two tests are equally difficult (provided they have been taken by an identical group of respondents under identical circumstances) or whether two groups of students are equally able (provided they have taken an identical test under identical circumstances). MS Excel offers a tool to calculate the correlation between two data sets. In this case, data set B was exactly the opposite of data set A, yielding the correlation of -1.
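For readers who want to see the calculation itself, the sketch below implements the Pearson correlation coefficient in plain Python; the two data sets are invented to mirror the identical and opposite cases described above.

    # Pearson correlation: the covariance of the two data sets divided by the
    # product of their standard deviations; the result always lies between -1 and 1.
    import math

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    a = [10, 12, 14, 16, 18]   # hypothetical scores on test A
    b = [18, 16, 14, 12, 10]   # exactly the opposite pattern
    print(pearson_r(a, a))     # 1.0: identical data sets
    print(pearson_r(a, b))     # -1.0: perfectly opposed data sets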

ITEM RELIABILITY ANALYSIS

CLARIFICATION An item reliability analysis indicates whether or not a test item differentiates between the most able and the least able test taker.

USE The example below shows an SPSS table. The second column from the right shows “Corrected Item-Total Correlations” (CITC). As in standard correlations, a very reliable item (with a highly discriminatory capacity) would score close to -1 or 1. Items are considered unreliable if they score between -.3 and .3. The fourth column (Cronbach’s Alpha) indicates what the reliability of the test would be if the item were deleted. For example, item VAR_2.3 has a CITC of .018. If it were deleted, the relative reliability of the scale would become .89.

Based on the Item Reliability Analysis, the following items are considered unreliable: VAR_6a.14, VAR_6b.3, VAR_6b.5, VAR_LI.1, VAR_LI.2, VAR_LI.3, VAR_LI.4a, VAR_LI.4b, VAR_LI.5, VAR_LI.6.
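Neither statistic requires SPSS. The sketch below (Python 3.10 or later, which provides statistics.correlation) computes Cronbach’s alpha and the CITC for a small data set; the items and scores are hypothetical and not related to the variables listed above.

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    # CITC: correlation of an item with the total of all *other* items.
    import statistics

    def cronbach_alpha(items):
        # items: one list of scores per test item, all over the same respondents
        k = len(items)
        totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
        sum_item_var = sum(statistics.variance(item) for item in items)
        return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

    def citc(items, i):
        # total score per respondent, excluding item i
        rest = [sum(scores) - scores[i] for scores in zip(*items)]
        return statistics.correlation(items[i], rest)

    # Hypothetical scores of five respondents on three items:
    items = [[2, 4, 3, 5, 1],
             [1, 5, 2, 4, 2],
             [3, 4, 3, 5, 2]]

    print(f"alpha = {cronbach_alpha(items):.2f}")
    for i in range(len(items)):
        print(f"item {i}: CITC = {citc(items, i):.2f}")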

USABILITY TESTS CLARIFICATION Usability testing refers to the large, multi-method process of determining and analysing the user-product interaction. Since a usability test is a large process, it may incorporate any combination of the abovementioned methods, such as focus groups, card sorting, task analysis etc. The aims of a usability test will include identifying potential product flaws, measuring the user’s performance and determining the user’s satisfaction. Often, but not always, these products are web applications. Typically, but not necessarily, usability tests occur during product development, but they can also be used to review existing products.


USE Usability tests are used to analyse the user-product interaction in detail. They may refer to small two-method analyses but also to large processes which encompass a myriad of quantitative and qualitative research methods. Usability tests may adhere to the following chronology:

1. CREATING A USABILITY PLAN WHICH PAYS ATTENTION TO: Scope

The usability plan should clearly determine what lies within the test’s scope and what aspects are beyond it.

Purpose

The goals, concerns and research questions should be clearly stated from the beginning.

Time & place

Before the tests begin, the research team should have arranged the practical aspects.

Participants

The total number of participants should be determined. If necessary subgroups should be defined and personae should be created. For reliable results, eight to sixteen users should partake in each test. If the testing population is homogenous, four users may also suffice. It is vital for the testing population to resemble the target population (subgroup) as closely as possible.

Scenarios

The scenarios for the users to partake in should be written, based on the personae.

Questions

If interviews or focus groups will be conducted, these questions should be prepared.

Data

The research team should think about the kind of data they will receive and how to process this data.

Set up

The room layout should be determined. All technical equipment should be collected and checked for reliability.

Roles

Each member of the research team should have a clearly outlined task.

2. CONDUCTING A PILOT FOR EACH METHOD USED: Since a pilot is the dress rehearsal for the real usability test, it should resemble the actual setting as closely as possible. It is advisable to use a novice respondent. A pilot mainly serves the following purposes:
- Checking if the technical equipment (from audiovisual material to pens and paper) is functioning properly and if more or different equipment is necessary;
- Trying out tasks and scenarios;
- Training the facilitators. In a pilot, the facilitators practice their role and behaviour.

3. CONDUCTING THE LIVE USABILITY TEST, ACCORDING TO A FIXED CHRONOLOGY: For example:


Beginning

The facilitator welcomes the respondent and explains the outline and goal of the session, stressing the fact that he/she is not being tested, but rather the product. The participant is invited to think aloud. Ideally, the verbal protocol is tested by using a short and simple warming-up scenario. If there are no further questions from the user, the actual usability test begins.

Middle

The respondent works with the task/scenario while conducting a verbal protocol.

Ending

The facilitator thanks the participant for his/her cooperation. Possibly, this session is followed by an interview or a focus group.

4. IMPLEMENTING THE TEST RESULTS AND RETESTING: Usability tests are at their most effective when they are used in a cyclical manner. Conducting one test will most likely improve the product. Still, in order to be sure of the quality of the improvements, it is advisable to redo the usability test.

TROUBLESHOOTER Depending on the methods used, different pitfalls will occur. The problems discussed below apply to usability tests in general.

The test goals are not measurable.

Test goals should always be measurable. Time, accuracy, success and satisfaction are examples of such measurable goals.

The test goals are unclear.

If test goals are unclear, the data resulting from the test will be unreliable. In a pilot, the clarity of the goals can be checked. It should also be very clear to the respondents that the goal of the usability test is not to test their abilities, but the usability of the product.

The facilitator is biased.

If the facilitator takes a positive or negative attitude towards the material that is being tested, he/she might influence the respondent. For the usability test to be reliable, the facilitator should voice neutral questions and observations.

The facilitator intervenes.

For the usability test to be a true depiction of reality, the facilitators should not help too quickly. Users who are struggling to accomplish a task might yield useful data that would be lost if the facilitator had helped.


GLOSSARY

A

Accommodating learner/learning style

The ‘accommodating learning style’ is one of the four learning styles identified by Kolb (1984). Accommodating learners (accommodators) emphasise concrete experience and active experimentation. They like doing things, carrying out plans and getting involved in new experiences; they are good at adapting to changing circumstances; they solve problems in an intuitive, trial-and-error manner; they are at ease with people but are sometimes seen as impatient and 'pushy'. They like applying course material in new situations to solve real problems. Their characteristic question is 'what if'. See ‘Kolb’s learning styles’.

Activist

‘Activist’ refers to one of the four learning profiles defined by Honey and Mumford (1992). It corresponds with Kolb’s accommodating learning style. See ‘Kolb’s learning styles’ and ‘Honey and Mumford’.

Analytic assessment

In ‘analytic assessment’ the different components of a learning achievement are assessed separately (cf. ‘holistic assessment’).

Architect

The term ‘architect’ is used by Chandler and Willie (Sharples, 1998) to describe one of five different writing profiles. ‘Architects’ use a rather common ‘plan, compose, revise’ strategy. They determine the structure of the writing, often writing down headings as guidelines. They can write sequentially, but can also start with the easiest sections. They tend not to correct mistakes immediately, but they edit the text as a whole at the end. Revisions are usually elaborate: particularly making sentence-level revisions (spelling and grammar). They are very conscious of their writing strategy. They can find the screen too restrictive, meaning that they might print out a hard copy and revise with pen and paper. The other profiles in this model are: bricklayers, oil painters, sketchers, and water colourists.

Assessor

In this handbook the term ‘assessor’ refers to the person who performs the QuADEM quality assessment. It is possible that the content developer of the digital educational material takes the part of the assessor, evaluating his/her own material. In ideal circumstances, however, the content developer and the assessor are not the same person. At any stage of the assessment the initiator can decide to delegate (parts of) the assessment process to an external expert, because of the need for specific expertise or to avoid bias. If, when and to what degree an external expert should be involved in the assessment process is a question that can only be answered on a case-by-case basis. It depends strongly on the overall objective and scope of the assessment.

Assessment

‘Assessment’ refers to “a wide range of methods for evaluating [student] performance and attainment” (Gipps, 1994: vii). Most literature agrees that a distinguishing feature of assessment is its pedagogical component. It is used to (1) document the learning process, (2) compare the progress to an educational norm and (3) use the outcome of the comparison to guide learning (Falchikov, 2005). Assessment should therefore be recognised as a constructive element in the learning process instead of merely as a scoring tool. In this QuADEM handbook, ‘assessment’ will also be used to refer to the evaluation of the digital educational material through the use of the QuADEM method.

Assessment procedure

In QuADEM terminology the term ‘assessment procedure’ refers to the entire QuADEM evaluation process, starting with the decision to undertake a quality evaluation and ending with the presentation of the results and recommendations. In the best case scenario it will also include the implementation of the recommendations. See ‘quality assessment’.

Assessment session

An ‘assessment session’, in the context of a QuADEM quality assessment, refers to a period of time arranged for a respondent to participate in the research phase of the QuADEM assessment procedure. One assessment session may involve several different research methods (e.g. a think-aloud protocol followed by an interview) or several respondents (e.g. several individual think-aloud protocols followed by a focus group discussion), as long as they are all integrated into one meeting.

Assessment unit

An ‘assessment unit’ is one of the chapters that make up part 2 of the QuADEM handbook. Each assessment unit contains all the information necessary to evaluate one component of what constitutes a successful digital learning module. E.g. if you want to decide on the quality of digital educational material as far as its content is concerned, you will find everything you need in the ‘Content Unit’. See ‘component’.

Assimilating learner/learning style

The ‘assimilating learning style’ is one of the four learning styles identified by Kolb (1984). Assimilating learners (assimilators) prefer abstract conceptualisation and reflective observation. They like to reason inductively and to create theoretical models; they are more concerned with ideas and abstract concepts than with people; they think it is more important that ideas be logically sound than practical. They respond to having opportunities to work actively on well-defined tasks and to learn by trial-and-error in an environment that allows them to fail safely. Their characteristic question is 'how'. See ‘Kolb’s learning styles’.

Auditory learner

An ‘auditory learner’ learns best through listening and talking things through (cf. ‘visual learner’ and ‘kinesthetic learner’).

Authenticity

Authenticity refers to materials that are representative of tasks that might be encountered by language learners in a target language use situation. Authenticity is invariably held up as a desirable characteristic for language materials. However, there has been considerable debate over the nature and degree of authenticity required in language teaching materials if they are to represent target language use. Traditional appeals to ‘real-life’ have recently been shown to be inadequate (Green 2009).

Average writer

The term ‘average writer’ is used by Van Waes and Schellens (2003) to describe one of five different writing profiles. Average writers combine characteristics of the other writing profiles and don’t have a clear profile. The other profiles in this model are: initial planners, first draft writers, second draft writers, and non-stop writers.

B

Beethovian writer

The term ‘beethovians’ refers to individuals with a certain writing style. Beethovians write a first draft of their text rather quickly with minimal revision, postponing the main revision until a later stage (cf. ‘mozartian’).

Blended learning

Despite the fact that ‘blended learning’ is becoming increasingly popular in educational practice, there seems to be very little consensus regarding what ‘blended learning’ actually is. Apparently, it constitutes different things to different people. According to QuADEM, ‘blended learning’ combines face-to-face and computer-mediated instruction in order to optimize learning by applying a number of learning technologies to match various learning and writing styles. Taking into account the blended learning context of a digital learning module means to look beyond the module itself and include the different ways the module is being integrated with classroom teaching, student homework, self study and group work.

Bricklayer

The term ‘bricklayer’ is used by Chandler and Willie (Sharples, 1998) to describe one of five different writing profiles. ‘Bricklayers’ tend to work sentence by sentence, paragraph by paragraph. They sometimes write sequentially, rarely starting with the easiest part. They frequently revise; mainly spelling and grammar on a sentence level, but also re-sequencing of material. They tend not to be conscious strategists. Bricklayers frequently lose the overview of the text, especially when working on a computer. The other profiles in this model are: architects, oil painters, sketchers, and water colourists.

Broadband Internet access

‘Broadband Internet access’ refers to high-speed internet access (minimum 256 kbit/s) that transfers data via phone lines (i.e. ADSL) or cable.

C

Case

A ‘case’ is a large task that requires the learner to apply multiple types of theoretical knowledge from different fields. Whereas a task can be completed in one or two steps, a case is a long-term process consisting of different phases.

Closed question

A ‘closed question’ contains predefined answers. This enables standardized responses and fast and easy coding and scoring. However, closed questions might push or omit certain answers and might not have the same meaning to all the respondents.

Co-assessment

In ‘co-assessment’ the learner defines the learning goals together with the tutor. Both the learner and the tutor judge the achievement. Depending on who is in control of an assessment, we distinguish between four types of assessment, ranging from teacher controlled to learner controlled: traditional testing, co-assessment, peer assessment and self assessment.

Contingency question

A ‘contingency question’ only gets answered if the respondent gives a particular response to a previous question. This avoids asking questions of people that do not apply to them (for example, asking men if they have ever been pregnant). In questionnaires, certain questions will clearly be relevant only to some of the respondents and irrelevant to others. The proper use of contingency questions can facilitate the respondent’s task in completing the questionnaire.

Component

In QuADEM terminology a ‘component’ refers to one of eleven aspects or facets of digital educational material that together determine the quality of that material. The QuADEM components are: pedagogy, content, intercultural aspects, learning styles, writing styles, style and language, usability, assessment, examples, multimedia and questionnaires. A ‘component’ can also refer to the online and offline components that make up the blended learning context of a digital learning module. The online components are the different sections and applications in a digital learning module. This can be the sections on theory, practice, and case, or a blog the students have to maintain, or a discussion forum on which they have to participate. The offline components are all the offline activities in support of the module’s learning objectives, such as ex cathedra teaching, class discussion, class trips, group work, homework, class presentation…

Content

In plain words, ‘content’ is the substance that makes up a digital learning module. It is the textual, visual or aural material that the user encounters when browsing through the digital learning module. Although it may include images, sound, video and animation, when talking about web content we are referring in the first place to content of a textual nature. ‘Content’ in QuADEM terminology is therefore the textual information the digital learning module provides.

Content developer

In the QuADEM terminology the ‘content developer’ is the person who develops the digital educational material. The process of content development may include researching, gathering, editing and writing information for publication.

Content management system (CMS)

A ‘(web) content management system’ is a software application used to manage the design and the content of a webpage. It facilitates the collaborative creation, use, editing and updating of electronic text and other digital features available on the website. For example, a CMS makes it possible for someone with no ICT expertise to post a message on a forum, to edit an online text or to input information to an online administration tool.

Continuous assessment

A ‘continuous assessment’ goes on throughout the entire learning process (cf. ‘fixed-point assessment’).

Converging learner/learning style

The ‘converging learning style’ is one of the four learning styles identified by Kolb (1984). Converging learners (convergers) rely primarily on abstract conceptualisation and active experimentation. They are good at problem solving, decision making and the practical application of ideas. They do best in situations like conventional intelligence tests. They are controlled in the expression of emotion and prefer dealing with technical problems rather than interpersonal issues. They respond well to explanations of how course material relates to their experience, interests, and future careers. Their characteristic question is 'why'. See ‘Kolb’s learning styles’.

Corporate culture

See ‘organisational culture’.

Course

A ‘course’ is a prescribed section of teaching and study which is separately examined and to which a course-unit value has been assigned. E.g. written business communication.

D

Dial-up internet connection

A ‘dial-up internet connection’ provides internet access via telephone line. It differs from its broadband counterpart in terms of comfort (dial-up disrupts the regular telephone line) and speed (a dial-up connection has a maximum theoretical speed of 56 kbit/s). In short, it refers to internet access via telephone line with a maximum theoretical speed of 56 kbit/s.

Digital educational material


‘Digital educational material’ is educational material designed for presentation and use through the digital medium. Digital educational material may come in different shapes and sizes, but it is commonly presented in the form of digital learning modules.

Digital learning environment

A ‘digital learning environment’ can be envisioned as a digital framework that offers its users (students and teachers) a set of tools (such as learning modules, file sharing, discussion forum, drop boxes, quizzes, calendars, course statistics…) making it possible for different parts of the teaching and learning experience (such as instruction, gathering and processing of information, student collaboration, student-teacher interaction, exercise, testing and student administration) to occur using some kind of computer-mediated communication. A digital learning environment aspires to facilitate an active, participative and social learning process. A digital learning environment can also be referred to as an ‘electronic learning environment’ or even an ‘online learning environment’, although the latter would imply a live connection with the internet.

Digital learning module

A ‘digital learning module’ refers to a digital educational application that is designed to present the educational material devoted to the study of one specific topic in a systematically structured way. Using digital technology, a digital learning module offers its users information, documentation, examples, exercises, etc. that may help them to achieve certain learning objectives. For example, a digital learning module on ‘How to write a CV?’ may be part of a course on business communication. Or a digital learning module devoted to human physiology may be part of a biology course. Such digital learning modules may contain theoretical sections, illustrations, interactive exercises, tests, etc. A digital learning module can also be referred to as an ‘electronic learning module’ or even an ‘online learning module’, although the latter would imply a live connection with the internet.

Direct assessment

In ‘direct assessment’ the assessment is formalized and well-defined in terms of time and setting (cf. ‘indirect assessment’).


Distance learning

‘Distance learning’ can be characterised by an extensive use of the internet to support the learning process. The bulk of the content transfer and communication will happen online, and possibly the entire learning process will take place without any face-to-face meetings. While there have been attempts to replace the traditional classroom with new media, the limitations of learning and teaching solely in electronic environments have also become clearer. Also see ‘e-learning’.

Diverger/ diverging learning style

The ‘diverging learning style’ is one of the four learning styles identified by Kolb (1984). Diverging learners (divergers) emphasise concrete experience and reflective observation. They are imaginative and aware of meanings and values; they view concrete situations from many perspectives; they adapt by observation rather than by action; they are interested in people and tend to be feeling-oriented. They respond to information presented in an organized, logical fashion and benefit if they are given time for reflection. Their characteristic question is 'what'. See ‘Kolb’s learning styles’.

Double-barreled question

A ‘double-barreled’ question combines two or more issues in a single question. Such questions are difficult to answer and ambiguous to interpret.

E

E-learning (or electronic learning)

‘E-learning’ in the broad sense refers to all types of technology-enhanced learning (TEL), where technology is used to support the learning process. Often this involves the use of a computer network. Depending on the degrees of technology/internet involvement, we can distinguish between different types of e-learning, ranging from supplementary learning to distance online learning.

Evaluation

‘Evaluation’ refers to the continuous process of gathering and interpreting information about a learning activity or program to determine whether the activity or program meets its goals and how it can be improved.

Evaluation criteria

‘Evaluation criteria’, in the context of student evaluation, consist of a clear definition of the construct (i.e. which skill is being tested), clear assessor criteria for correctness and full rating procedures.

Example

An ‘example’ is “a specific instantiation of a general principle, chosen in order to illustrate or explore that principle” (Chick, 2007). The use of examples in educational material is intended to help the learner to understand, connect, apply and generalize the different items of the learning material. Examples can take different forms and can have different functions.

F

Fair use

‘Fair use’ is a doctrine in United States copyright law that allows for limited use of copyrighted material without requiring permission from the rights holder. It provides for the legal, non-licensed citation or incorporation of copyrighted material in another author’s work.

Fixed-point assessment

A ‘fixed-point assessment’ takes place at preset points in time (cf. ‘continuous assessment’).

First draft writers

The term ‘first draft writer’ is used by Van Waes and Schellens (2003) to describe one of five different writing profiles. First draft writers tend to focus quite explicitly on the first draft of their text. They start writing their text almost immediately, and devote little time to initial planning. During the development of the first draft a lot of revision takes place. Their writing process is highly fragmented and characterized by a high degree of recursion. The other writing profiles in this model are: initial planners, second draft writers, non-stop writers and average writers.


Formative assessment

‘Formative assessment’ is meant to check the learning process regularly and give feedback to adjust the learning process when necessary (cf. ‘summative assessment’). In the formative assessment process, constructive feedback is essential. It stimulates reflection on learning and communication about the learning process.

H

Honey & Mumford - learning styles

Honey and Mumford (1992:1) define a learning style as “a description of the attitudes and behaviour which determine an individual’s preferred way of learning”. They adapted their theory from Kolb's learning style and focus mainly on business. The four learning styles are described as those of activists, reflectors, theorists and pragmatists and correspond with Kolb’s accommodating, diverging, assimilating and converging learning style.

React positively to:

Activists: action learning; business game simulations; job rotation; discussion in small groups; role playing; training others; outdoor activities.

Reflectors: e-learning; learning reviews; listening to lectures or presentations; observing role plays; reading; self-study / self-directed learning.

Theorists: analytical reviewing; exercises with a right answer; listening to lectures or presentations; observing role plays; solo exercises; watching 'talking head' videos.

Pragmatists: action learning; discussion about work problems in the organisation; discussion in small groups; problem-solving workshops; group work with tasks where learning is applied; project work.

Hotspot task

In ‘hotspot tasks’, students are asked to select a specific area on a drawing or a picture.

I

Indirect assessment


In ‘indirect assessment’ the assessment goes on during the learning process without explicit attention being paid to it (cf. ‘direct assessment’).

Initial planner

The term ‘initial planner’ is used by Van Waes and Schellens (2003) to describe one of five different writing profiles. Initial planners tend to make relatively few revisions, especially not during the second writing phase (after having completed a first draft). They devote quite some time to initial planning. The other profiles in this model are: first draft writers, second draft writers, non-stop writers, and average writers.

Intercultural aspects

In QuADEM terminology, the term ‘intercultural aspects’ refers to culture-specific points within a digital learning module. Intercultural aspects can pertain to non-specialist as well as specialist fields. In the former the intercultural aspects relate to culture in the ethnolinguistic/social sense, determining the applicability of the digital educational material in a cross-national and international context. In the latter they make up the cultural core of ‘language for specific purposes’ (e.g. Gotti, 2003), underlying legal, commercial, political and institutional discourse used in particular workplaces. In this case they determine the transferability of the digital educational material to a different subject field or to a different institutional/organisational culture.

Internal consistency

‘Internal consistency’ is the extent to which all questions or items of a questionnaire assess the same characteristic, skill, or quality.

K

Kinesthetic learner

A ‘kinesthetic learner’ learns best by doing and touching (cf. ‘auditory learner’ and ‘visual learner’).

Knowledge assessment


In ‘knowledge assessment’ the focus is on what the learner knows, i.e. the cognitive skills (cf. ‘performance assessment’).

Kolb’s learning styles

According to Kolb, new knowledge, skills or attitudes are achieved through confrontation among four modes of learning: “[Learners] must be able to involve themselves fully, openly, and without bias in new experiences (concrete experience). They must be able to reflect on and observe their experiences from many perspectives (reflective observation). They must be able to create concepts that integrate their observations into logically sound theories (abstract conceptualization), and they must be able to use these theories to make decisions and solve problems (active experimentation)” (Kolb, 1984: 30). Depending on (a) which learning modes learners preferably use to take in information (concrete experience or abstract conceptualization) and (b) which learning modes learners preferably use to process information (active experimentation or reflective observation), four different learning styles can be distinguished (Kolb, 1984; Felder & Brent, 2005): a converging, a diverging, an assimilating and an accommodating learning style.

L

Learning objective

A ‘learning objective’ is a statement, in specific and measurable terms, of the knowledge, skills and attitudes a learner should be able to exhibit as a result of completing specific learning material.

Learning process

In the QuADEM handbook, the term ‘learning process’ most often refers to the entire process a learner goes through in order to achieve a specific learning objective. It can also refer to the act of learning in general.

Learning styles

‘Learning styles’ refer to the different ways in which individuals gather, retain, and use information. Many different models and classification schemes summarizing the prevalent learning styles have been developed. One of the most influential models was developed by Kolb (1984). See ‘Kolb’s learning styles’.


The pedagogical implications of learning styles could be far-reaching. Studies (Felder & Brent, 2005) have shown that a more profound learning effect may occur when teaching styles match learning styles. Learning styles remain a controversial topic, however.

Likert scale

The ‘Likert scale’ is the most widely used scale in survey research. When responding to a Likert questionnaire item, respondents specify their level of agreement with a statement. In this case a 5-point Likert scale was used:

2 = agree entirely
1 = tend to agree
0 = neither agree nor disagree
-1 = tend to disagree
-2 = disagree entirely
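
As an illustration, the sketch below converts the verbal labels of this scale into the numerical scores listed above and averages the responses to one statement; the labels and data are hypothetical examples (Python):

    # Map 5-point Likert labels to numerical scores and average them.
    LIKERT = {
        "agree entirely": 2,
        "tend to agree": 1,
        "neither agree nor disagree": 0,
        "tend to disagree": -1,
        "disagree entirely": -2,
    }

    responses = ["agree entirely", "tend to agree", "neither agree nor disagree"]
    scores = [LIKERT[r] for r in responses]
    print(sum(scores) / len(scores))  # 1.0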

M

Matching

‘Matching’ refers to a kind of multiple choice question in which the test taker is asked to combine two or more items that belong together.

Module

A ‘module’ is a unit of education (online, face-to-face or through self study) in which a single topic or a small section of a broad topic is studied for a given period of time. All the modules combined make up the course.

Mozartian writer

The term ‘Mozartians’ refers to individuals with a particular writing style. ‘Mozartians’ are extensive planners who formulate and revise their texts sentence by sentence (cf. ‘Beethovians’).

Multimedia

In ICT the term ‘multimedia’ refers to the (combined) use of all or a selection of the following within the same online environment: text, audio, video, pictures, animation and interactive elements (e.g. games).


Multiple choice

‘Multiple choice’ is a general term for any task type in which the appropriate answer is picked from a list of alternatives. These task types are ideal for quick knowledge-based tests: they can be used to probe topical knowledge, vocabulary, specific grammar use (e.g. tenses, gerunds), etc.
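
By way of illustration, a multiple choice item can be represented and scored as in the sketch below; the question, options and answer key are hypothetical examples (Python):

    # Represent and score one multiple choice item.
    item = {
        "question": "Which tense is used in 'She has left'?",
        "options": ["simple past", "present perfect", "past perfect"],
        "key": 1,                        # index of the correct option
    }

    answer = 1                           # the test taker picks option 1
    print("correct" if answer == item["key"] else "incorrect")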

Myers-Briggs Type Indicator (MBTI)

The ‘Myers-Briggs Type Indicator’ (Myers & McCaulley, 1985) is a personality inventory based on Jungian psychological principles that helps individuals to identify their learning preferences, teaching styles, and personality characteristics (Mamchur, 1996). It uses four different scales to identify an individual’s personality type: extraversion versus introversion, sensing versus intuition, thinking versus feeling and judging versus perceiving (Rushton et al., 2007).

N

Non-stop writer

The term ‘non-stop writer’ is used by Van Waes and Schellens (2003) to describe one of five different writing profiles. Non-stop writers revise very little. The proportion of words to number of revisions is correspondingly high in the final text. They also make relatively few revisions above the level of the word. Non-stop writers hardly ever pause while writing. They tend to spend little time on initial planning and complete their writing task more quickly than others. The other profiles in this model are: initial planners, first draft writers, second draft writers, and average writers.

O

Objective assessment

In ‘objective assessment’ external measures are used as a guideline for the assessment procedure (cf. ‘subjective assessment’).

Oil painter

The term ‘oil painter’ is used by Chandler and Willie (Sharples, 1998) to describe one of five different writing profiles.


‘Oil painters’ rarely plan, but write down ideas as they occur to them and later work them into the text during the revision process. They often start with the easiest part. Oil painters revise extensively, in particular meaning and sequencing. Oil painters occasionally find the screen restrictive, but they are most likely to use a word processor for their writing; they do tend to print out a hard copy for revision. The other profiles in this model are: architects, bricklayers, sketchers, and water colourists.

Online learning environment (OLE)

See ‘digital learning environment’.

Online learning module (OLM)

See ‘digital learning module’.

Open-ended question

An ‘open-ended question’ allows the respondent to answer without being constrained by a fixed set of possible responses. Open questions avoid imposing restrictions on the respondent, but this also means that respondents may answer in many different ways. Moreover, no matter how carefully the question is worded, open questions leave room for misinterpretation and for irrelevant or confusing answers. Open questions can therefore be difficult to code and analyse.

Organizational culture

‘Organizational culture’ comprises the attitudes, experiences, beliefs and values of an organization. It is a specific collection of values and norms that are shared by people and groups in an organization. It can also be referred to as ‘corporate culture’.

P

Peer-assessment

In ‘peer-assessment’ a group of learners define the learning goals and judge each other’s achievements. Depending on who is in control of an assessment, we distinguish between four types of assessment, ranging from teacher-controlled to learner-controlled:

Teacher controlled → Learner controlled
Traditional testing | Co-assessment | Peer assessment | Self assessment

Performance assessment

‘Performance assessment’ focuses on the ability to apply theory in practice (cf. ‘knowledge assessment’).

Pragmatist

‘Pragmatist’ refers to one of the four learning profiles defined by Honey and Mumford (1992). It corresponds with Kolb’s converging learning style. See ‘Kolb’s learning styles’ and ‘Honey and Mumford’.

Principles of coherence

‘Coherence’ refers to the way in which, at both micro- and macro-level, authors help readers make sense of a text by linking new and old information within sentences and paragraphs (Le, 2009).

Process assessment

‘Process assessment’ occurs throughout a course and checks the how rather than the what. This kind of assessment is more qualitative in its approach and examines the studying process: it verifies to what extent the goals have been pursued rather than attained (cf. ‘product assessment’).

Product assessment

‘Product assessment’ typically occurs at fixed points in time (e.g. annually, monthly) or at the end of a course. Product assessment checks whether the learner has reached the preset goals (cf. ‘process assessment’).

Q

Quality assessment


In QuADEM terminology, the term ‘quality assessment’ refers to the process of assessing or judging the quality of digital educational material to make sure it reaches a certain standard. See also ‘assessment procedure’.

Questionnaire

A ‘questionnaire’ is a research instrument consisting of a series of questions and other prompts for the purpose of gathering information from respondents. Although they are often designed for statistical analysis of the responses, this is not always the case.

R

Reflector

‘Reflector’ refers to one of the four learning profiles defined by Honey and Mumford (1992). It corresponds with Kolb’s diverging learning style. See ‘Kolb’s learning styles’ and ‘Honey and Mumford’.

Respondent

A ‘respondent’ is a person who is invited by the assessor to participate in research.

Response rate

The ‘response rate’ is the proportion of subjects in a statistical study who respond to a researcher's questionnaire.

S

Scaled response questions

‘Scaled response questions’ require respondents to rate an item using a provided scale, such as a 5-point Likert scale: strongly agree | agree | neutral | disagree | strongly disagree. Another type, the rating scale, asks respondents to rate some item or quality on a numerical scale, e.g. from 1 (worst) to 10 (best).

Script

In the QuADEM handbook the term ‘script’ refers to a detailed, chronological description of the planned course of an assessment session. It should guide the assessor and the respondent through a succession of research methods and tasks in the most efficient manner.

Second draft writer

The term ‘second draft writer’ is used by Van Waes and Schellens (2003) to describe one of five different writing profiles. Second draft writers postpone most of their revisions to the stage in which they are rereading/reviewing their first draft, i.e. the second writing phase. Many of these revisions are made at a level above the word, and the number of revisions is high in relation to the total number of words in the final text. Second draft writers spend quite some time on initial planning, but once they start writing, they pause relatively infrequently. However, any pauses they do make are relatively long. There is only a slight degree of recursion. The other profiles in this model are: initial planners, first draft writers, non-stop writers, and average writers.

Self-assessment

In ‘self-assessment’ the learner defines his or her learning goals and judges his or her own achievements. Depending on who is in control of an assessment, we distinguish between four types of assessment, ranging from teacher-controlled to learner-controlled:

Teacher controlled → Learner controlled
Traditional testing | Co-assessment | Peer assessment | Self assessment

Sketcher

The term ‘sketcher’ is used by Chandler and Willie (Sharples, 1998) to describe one of five different writing profiles. ‘Sketchers’ tend to form a rough plan at the beginning, but revise this plan later. They occasionally start with the easiest part. Sketchers revise extensively, both at sentence level and in terms of re-sequencing and changes in meaning. They are quite conscious of their strategy use. Sketchers occasionally find the screen restrictive, but word processing usually helps them to understand their meaning. Students and academics often use this strategy. The other profiles in this model are: architects, bricklayers, oil painters, and water colourists.

Split-half reliability

By correlating the scores on two halves of a test it is possible to determine the test's level of consistency. Put the even-numbered items in one data set and the odd-numbered items in the other, then calculate the correlation between the two halves using a statistical program or Microsoft Excel. The resulting coefficient lies between -1 and 1: the closer it is to 1, the stronger the consistency.
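
A minimal sketch of this procedure, assuming the scores are stored as one row of item scores per respondent (Python 3.10+ for statistics.correlation; the data are hypothetical):

    # Split-half reliability: correlate per-respondent sums of the
    # odd-numbered and even-numbered items (Pearson's r).
    from statistics import correlation

    def split_half(scores):
        odd_half = [sum(row[0::2]) for row in scores]   # items 1, 3, 5, ...
        even_half = [sum(row[1::2]) for row in scores]  # items 2, 4, 6, ...
        return correlation(odd_half, even_half)

    scores = [                       # four respondents, six items each
        [1, 0, 1, 1, 0, 1],
        [1, 1, 1, 1, 1, 1],
        [0, 0, 1, 0, 0, 0],
        [1, 0, 0, 1, 0, 1],
    ]
    print(round(split_half(scores), 2))

In practice the half-test correlation is often stepped up with the Spearman-Brown formula to estimate the reliability of the full-length test.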

Stakeholder

A ‘stakeholder’ in a QuADEM quality assessment is a person who is involved with the digital learning module under revision, and therefore has responsibilities towards it and/or an interest in its success, e.g. an instructor, tutor, student or developer.

Step in / step out assessment

‘Step in/step out assessment’ allows learners to monitor and measure their progress by comparing their score at the beginning of a course to their score after course completion.

Subjective assessment

In ‘subjective assessment’ there are no external measures guiding the assessment procedure (cf. ‘objective assessment’).

Summative assessment

In ‘summative assessment’, learners are assessed after a longer period of time, e.g. at the end of a semester (cf. ‘formative assessment’).

Supplementary learning

In ‘supplementary learning’, traditional classroom teaching is ‘spiced up’ with technological elements such as online testing or virtual labs. The number of face-to-face meetings remains the same. See ‘e-learning’.


Survey population

The ‘survey population’ of a questionnaire is the population from which information can actually be obtained in the survey (cf. ‘target population’).

T

Target population

The ‘target population’ of a questionnaire is the population, outlined in the survey objectives, about which information is to be sought (cf. ‘survey population’).

Test

A ‘test’ is defined as any (collection of) task(s) that is aimed at determining somebody’s abilities. This definition encompasses anything from highly formalized high-stakes tests to low-stakes self-tests.

Text

‘Text’ is the raw material of a website: it gives meaning to the other multimedia elements, or at least introduces them. Research (Chapelle, 2003; Sakar et al., 2005) shows that hypertext (clicking on a word yields its explanation or an elaboration of its concept) and hypermedia (clicking on a word opens a multimedia file) increase retention. If users are given the option, they prefer audiovisual or graphical explanations or elaborations.

Theorist

‘Theorist’ refers to one of the learning profiles defined by Honey and Mumford (1992). It corresponds to Kolb’s assimilating learning style. See ‘Kolb’s learning styles’ and ‘Honey and Mumford’.

Thumbnails

‘Thumbnails’ are small images of anywhere from 80x80 to 200x200 pixels. They are ideally suited as previews of larger images.
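
A thumbnail can, for instance, be generated with the Pillow imaging library, as in the sketch below; the file names are placeholders (Python):

    # Create a thumbnail that fits within 200x200 pixels.
    # Image.thumbnail() resizes in place and preserves the aspect ratio.
    from PIL import Image

    image = Image.open("figure.png")     # placeholder input file
    image.thumbnail((200, 200))
    image.save("figure_thumb.png")       # placeholder output file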


Tutor-assessment

In ‘tutor-assessment’ the tutor decides on the learning goals and judges achievement. Depending on who is in control of an assessment, we distinguish between four types of assessment, ranging from teacher-controlled to learner-controlled. Tutor-assessment corresponds to traditional testing.

Teacher controlled → Learner controlled
Traditional testing | Co-assessment | Peer assessment | Self assessment

U

Usability

In QuADEM terminology, ‘usability’ refers to the extent to which a digital learning module can be used by its target audience to achieve specified learning objectives with efficiency, effectiveness and satisfaction.

User-friendliness

In the field of e-learning, user-friendliness refers to the ease with which users are able to manipulate the educational (interactive) software.

V

Video

‘Video’ refers to (digitally) capturing or showing a sequence of still images to create the illusion of movement.

Visual learner

A ‘visual learner’ learns best by seeing and will need visual displays (diagrams, illustrations, video …) (cf. ‘auditory learner’ and ‘kinesthetic learner’).

W

Water colourist


The term ‘water colourist’ is used by Chandler and Willie (Sharples, 1998) to describe one of five different writing profiles. ‘Water colourists’ usually write a single draft which needs little revision. Where water colourists do revise, this usually concerns meaning and sequencing. They may make mental plans, but rarely write these down. Water colourists always write sequentially, and rarely start with the easiest part. They are not usually conscious strategists. Water colourists do not find the screen restrictive, and only rarely lose the general sense of their text. The other profiles in this model are: architects, bricklayers, oil painters, and sketchers.

Writing styles

Different people organize their writing activities differently. The process of text production can be subdivided into three main subprocesses: planning, formulating and revising. The prevalent ways in which writers orchestrate and prioritize these subprocesses are termed ‘writing styles’ or ‘writing profiles’.


BIBLIOGRAPHY

Adams, J., Blenkharn, A., Briggs, G., Burley, D., Elcock, K., Hughes, G. et al. (2006). Report to the Learning, Teaching and Assessment Committee of the Blended Learning Task and Finish Group. London: Thames Valley University. Retrieved February 21, 2007, from http://www.blended.tvu.ac.uk/bl/Docs/Blended_learning_T_F_Report.doc
Alderson, C., & Bachman, L. F. (Eds.) (2000-2005). The Cambridge Language Assessment Series. Cambridge: Cambridge University Press.
Alvarez, S. (2005). Blended learning solutions. In B. Hoffman (Ed.), Encyclopedia of Educational Technology. San Diego: San Diego State University. Retrieved February 28, 2007, from http://coe.sdsu.edu/eet/articles/blendedlearning/start.htm
Anson, C., & Beach, R. (1995). Journals in the Classroom: Writing to Learn. Norwood, MA: Christopher-Gordon Publishers.
Applebee, A. N. (1996). Curriculum as Conversation: Transforming Traditions of Teaching and Learning. Chicago: University of Chicago Press.
Ardito, C., Costabile, M. F., De Marsico, M., Lanzilotti, R., Levialdi, S., Roselli, T. et al. (2006). An approach to usability evaluation of e-learning applications. Universal Access in the Information Society, 4(3), 270-283.
Ayersman, D. J., & von Minden, A. (1995). Individual differences, computers, and instruction. Computers in Human Behavior, 11(3), 371-390.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.
Bachman, L. F. (2000). Modern Language Testing at the Turn of the Century: assuring that what we count counts. Language Testing, 17(1), 1-42.
Banerjee, J. (2003). Section D: Qualitative Analysis Methods. In Relating Language Examinations to the Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Strasbourg: Council of Europe, Language Policy Division.
Bazerman, C. (1988). Shaping Written Knowledge. Madison, WI: University of Wisconsin Press.
Bean, J. C. (2001). Engaging Ideas. The Professor’s Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom. San Francisco, CA: Jossey-Bass.
Bereiter, C., & Scardamalia, M. (1987). The psychology of written composition. Hillsdale, NJ: Lawrence Erlbaum.
Bersin & Associates (2003). Blended learning: What works? An industry study of the strategy, implementation, and impact of blended learning. Oakland, CA: Bersin & Associates.


Blank, G. D., Roy, S., Sahasrabudhe, S., Pottenger, W. M., & Kessler, G. D. (2003). Adapting multimedia for diverse student learning styles. Journal of Computing Sciences in Colleges, 18(3), 45-58.
Bohlen, G. A., & Ferratt, T. W. (1993). The effect of learning style and method of instruction on the achievement, efficiency and satisfaction of end-users learning computer software. New York: ACM Press.
Borton, T. (1970). Reach, Touch, and Teach. New York: McGraw-Hill.
Bräuer, G. (2003). Schreiben als reflexive Praxis. Tagebuch, Arbeitsjournal, Portfolio (2nd edition). Freiburg im Breisgau: Fillibach.
Bräuer, G., & Sanders, C. (Eds.) (2004). New Visions in Foreign and Second Language Learning. San Diego: LARC Press.
Britton, J., Bruce, K., & Pellegrini, A. D. (Eds.) (1990). Narrative Thoughts and Narrative Language. Hillsdale, NJ: Erlbaum.
Britton, J., Burgess, T., Martin, N., McLeod, A., & Rosen, H. (1975). The development of writing abilities (11-18). London: Macmillan.
Broadfoot, P., & Black, P. (2004). Redefining Assessment? The First Ten Years of Assessment in Education. Assessment in Education, 11(1), 7-26.
Brookfield, S. D. (1987). Developing Critical Thinkers: Challenging Adults to Explore Alternative Ways of Thinking and Acting. San Francisco: Jossey-Bass.
Brooks, J. (1991). Minimalist Tutoring: Making the student do all the work. Writing Lab Newsletter, 15(6), 1-4.
Bruce, S., & Rafoth, B. (2004). ESL Writers. A Guide for Writing Center Tutors. Portsmouth, NH: Boynton/Cook, Heinemann.
Bruffee, K. (1984). Collaborative Learning and the ‘Conversation of Mankind’. College English, 46(7), 635-652.
Bruner, J. S. (1986). Actual Minds, Possible Worlds. Cambridge, MA: Harvard University Press.
Bruner, J. S. (1990). Acts of Meaning. Cambridge, MA: Harvard University Press.
Bull, S., & Shurville, S. (1999). Cooperative Writer Modelling: Facilitating Reader-Based Writing with Scrawl. In R. Morales, H. Pain, S. Bull & J. Kay (Eds.), Proceedings of Workshop on Open, Interactive and Other Overt Approaches to Learner Modelling (pp. 1-8). Le Mans, France: International AIED Society.
Burgess, G. A., & Hanshaw, C. (2006). Application of learning styles and approaches in computing sciences classes. Journal of Computing Sciences in Colleges, 21(3), 60-68.
Candlin, C. (2007). Intercultural Aspects of Specialized Communication. Frankfurt am Main: Peter Lang.
Carkhuff, R. R., & Truax, C. B. (1965). Lay Mental Health Counseling: The Effects of Lay Group Counseling. Journal of Counseling Psychology, 29(5), 26-31.


Chamillard, A. T., & Karolick, D. (1999). Using Learning Style Data in an Introductory Computer Science Course. In Proceedings of the Thirtieth SIGCSE Technical Symposium on Computer Science Education in New Orleans (pp. 291-295). New York: ACM.
Chamillard, A. T., & Sward, R. E. (2006). Learning styles across the curriculum. ACM SIGCSE Bulletin, 38(3), 275-279.
Chapelle, C. A. (2003). English language learning and technology: Lectures on applied linguistics in the age of information and communication technology. Amsterdam: John Benjamins.
Cheng, W., & Warren, M. (2005). Peer assessment of language proficiency. Language Testing, 22(1), 93-121.
Chick, H. L. (2007). Teaching and learning by example. In J. Watson & K. Beswick (Eds.), Mathematics: Essential research, essential practice (Proceedings of the 30th annual conference of the Mathematics Education Research Group of Australasia, pp. 3-21). Sydney: MERGA.
Coogan, D. (1999). Electronic Writing Centers: Computing the Field of Composition. Stamford, CT: Ablex.
Crisp, V., & Sweiry, E. (2006). Can a picture ruin a thousand words? The effects of visual resources in exam questions. Educational Research, 48(2), 139-154.
Csikszentmihalyi, M. (1990). Flow. The Psychology of Optimal Experience. New York: Harper & Row.
De Galan, K. (2003). Trainen. Een praktijkgids. Amsterdam: Pearson Education Benelux.
Dewey, J. (1916). Democracy and Education. New York: Macmillan.
Dochy, F., Heylen, L., & Van de Mosselaer, H. (Eds.). Assessment in Onderwijs. Nieuwe Toetsvormen en Examinering in Studiegericht Onderwijs en Competentiegericht Onderwijs. Utrecht: Uitgeverij Lemma.
Douglas, D. (2000). Assessing Languages for Specific Purposes. Cambridge: Cambridge University Press.
Douglas, D. (2001). Language for Specific Purposes assessment criteria: where do they come from? Language Testing, 18(2), 171-185.
Driscoll, M. (2002). Blended Learning: Let's Get Beyond the Hype. Learning and Training Innovations Newsline. Retrieved March 1, 2007, from http://www-8.ibm.com/services/pdf/blended_learning.pdf
eLearningGuild (2003). The Blended Learning Best Practices Survey. Retrieved March 7, 2007, from http://www.elearningguild.com/pdf/1/Blended_Learning_Best_Practices_Survey.pdf
Ellem, G. K., & McLaughlin, E. A. (2005). Tales from the coalface: From tragedy to triumph in a blended learning approach to the teaching of 1st year biology. In UniServe Science, Proceedings of the Blended Learning in Science Teaching and Learning Symposium (pp. 43-49). Sydney: University of Sydney. Retrieved February 27, 2007, from http://science.uniserve.edu.au/workshop/2005/index.html
Emig, J. (1977). Writing as a mode of learning. College Composition and Communication, 28, 122-128.
Ender, S. C., & Newton, F. B. (2000). Students Helping Students. A Guide for Peer Educators on College Campuses. San Francisco, CA: Jossey-Bass.


Falchikov, N. (2005). Improving Assessment through Student Involvement: Practical Solutions for Higher and Further Education Teaching and Learning. London: Routledge Falmer.
Felder, R. M. (1993). Reaching the Second Tier: Learning and Teaching Styles in College Science Education. College Science Teaching, 23(5), 286-290.
Felder, R. M. (1996). Matters of Style. ASEE Prism, 6(4), 18-23.
Felder, R., & Brent, R. (2005). Understanding Student Differences. Journal of Engineering Education, 94(1), 57-72.
Felder, R., & Henriques, E. (1995). Learning and teaching styles in foreign and second language education. Foreign Language Annals, 28(1), 21-31.
Felder, R. M., & Silverman, L. K. (1988). Learning and Teaching Styles in Engineering Education. Engineering Education, 78(7), 674-681.
Felder, R. M., & Soloman, B. A. (2000). Learning styles and strategies. Raleigh, NC: North Carolina State University. Retrieved March 8, 2007, from http://www2.ncsu.edu/felder/public/ILSdir/styles.html
Felder, R. M., & Soloman, B. A. (2003). Index of Learning Styles Questionnaire. Raleigh, NC: North Carolina State University. Retrieved March 7, 2007, from http://www.ncsu.edu/felderpublic/ILSdir/ilsweb.html
Felder, R., & Spurlin, J. (2005). Applications, Reliability, and Validity of the Index of Learning Styles. International Journal of Engineering Education, 21(1), 103-112.
Field, R. (2005). Favourable conditions for effective and efficient learning in a blended face-to-face/online method. In H. Goss (Ed.), Balance, fidelity, mobility: Maintaining the momentum? (Proceedings of 22nd Annual ASCILITE Conference, pp. 205-214). Brisbane: Queensland University of Technology. Retrieved February 26, 2007, from http://www.ascilite.org.au/conferences/brisbane05/blogs/proceedings/23_Field.pdf
Flower, L. (1989). Problem Solving Strategies for Writing. Orlando: Harcourt Brace Jovanovich.
Ford, N., & Chen, S. Y. (2001). Matching/mismatching revisited: an empirical study of learning and teaching styles. British Journal of Educational Technology, 32(1), 5-22.
Fulwiler, T., & Young, A. (1990). Programs That Work: Models and Methods for Writing Across the Curriculum. Portsmouth, NH: Heinemann Boynton/Cook.
Gardner, H. (1993). Frames of Mind. The Theory of Multiple Intelligences. New York: Basic Books.
Gillespie, P., & Lerner, N. (2000). The Allyn and Bacon Guide to Peer Tutoring. Boston: Allyn and Bacon.
Gipps, C. V. (1994). Beyond testing: towards a theory of educational assessment. London: Routledge Falmer.
Gotti, M. (2003). Specialized Discourse: Linguistic Features and Changing Conventions. Frankfurt am Main: Peter Lang.


Graham, C. R. (2006). Blended learning systems: definition, current trends, and future directions. In C. J. Bonk & C. R. Graham (Eds.), Handbook of blended learning: Global Perspectives, local designs (pp. 3-21). San Francisco, CA: Pfeiffer Publishing.
Granić, A. (2008). Experience with usability evaluation of e-learning systems. Universal Access in the Information Society, 7(4), 209-221.
Green, T. (2009). The quest for authenticity in language assessment. Paper presented at the 3rd Language Teaching Symposium, Ghent University.
Harris, M. (1986). Teaching One-to-One: The Writing Conference. Urbana, IL: NCTE.
Hartley, J., & Tynjälä, P. (2001). New technology, writing and learning. In P. Tynjälä, L. Mason, & K. Lonka (Eds.), Writing as a Learning Tool (pp. 161-182). Amsterdam: Kluwer Academic Publishers.
Hayes, J. R., & Flower, L. (1980). Identifying the organization of writing processes. In L. W. Gregg & E. R. Steinberg (Eds.), Cognitive Processes in Writing (pp. 3-30). Hillsdale, NJ: Erlbaum.
Hofstede, G., & Hofstede, G. J. (2004). Cultures and Organizations. Software of the Mind. Intercultural Cooperation and Its Importance for Survival. New York: McGraw-Hill.
Honey, P., & Mumford, A. (1992). The Manual of Learning Styles. Berkshire: Honey Ardingly House.
Horst, M., Cobb, T., & Nicolae, I. (2005). Expanding Academic Vocabulary with an Interactive On-line Database. Language Learning & Technology, 9(2), 90-110.
Howard, R. A., Carver, C. A., & Lane, W. D. (1996). Felder's learning styles, Bloom's taxonomy, and the Kolb learning cycle: Tying it all together in the CS2 course. SIGCSE Bulletin, 28(1), 227-231.
International Organisation for Standardisation (1998). ISO 9241: Software Ergonomics Requirements for office work with visual display terminals (VDTs). Geneva: ISO.
Jacobs, G., Opdenacker, L., & Van Waes, L. (2005). A multilanguage online writing center for professional communication: Development and testing. Business Communication Quarterly, 68(1), 8-22.
Johnson, M., & Green, S. (2004). On-line assessment: the impact of mode on students’ strategies, perceptions and behaviours. Paper presented at the British Educational Research Association Annual Conference, Manchester.
Kellogg, R. T. (1994). The psychology of writing. New York: Oxford University Press.
Kerres, M., & de Witt, C. (2003). A didactical framework for the design of blended learning arrangements. Journal of Educational Media, 28(2-3), 102-113.
Khan, B. H. (2003). A framework for e-Learning. Retrieved February 26, 2007, from http://bookstoread.com/framework
Khan, B. H. (2004). People, process and product continuum in e-learning: The e-learning P3 model. Educational Technology, 44(5), 33-40.


Kieft, M. H. (2006). The effects of adapting writing instruction to students' writing strategies (Doctoral dissertation, University of Amsterdam, 2007). Retrieved August 25, 2009, from http://dare.uva.nl/document/29423
Kolb, D. A. (1984). Experiential Learning: experience as the source of learning and development. New Jersey: Prentice-Hall.
Layman, L., Cornwell, T., & Williams, L. (2006). Personality Types, Learning Styles, and an Agile Approach to Software Engineering Education. SIGCSE Bulletin, 38(1), 428-432.
Le, E. (2009). Implicit and explicit coherence relations. In J. Renkema (Ed.), Discourse, of course (pp. 113-126). Amsterdam/Philadelphia: John Benjamins.
Leahy, W., Chandler, P., & Sweller, J. (2003). When Auditory Presentations Should and Should not be a Component of Multimedia Instruction. Applied Cognitive Psychology, 17, 401-418.
Lewandowski, G., & Morehead, A. (1998). Computer science through the eyes of dead monkeys: learning styles and interaction in CS1. Paper presented at the Twenty-ninth Technical Symposium on Computer Science Education, Atlanta.
Lilley, M., & Barker, T. (2007). Students’ Perceived Usefulness of Formative Feedback for a Computer-adaptive Test. Electronic Journal of e-Learning, 5(1), 31-38.
Lindeboom, M., & Peters, J. J. (1994). Didactiek voor opleiders in organisaties. Deventer: Van Loghum Slaterus.
Lonka, K., Maury, S., & Heikkilä, A. (1997, August). How do students’ thoughts of their writing process relate to their conceptions of learning and knowledge? Paper presented at the 7th EARLI Conference, Athens.
Lynch, B. K. (2003). Language Assessment and Program Evaluation. Edinburgh: Edinburgh University Press.
Mamchur, C. (1996). A teacher’s guide to cognitive type theory and learning style. Alexandria, VA: ASCD.
Mandl, H., Gruber, H., & Renkl, A. (1996). Community of practice toward expertise: Social foundation of university instruction. In P. B. Baltes & U. M. Staudinger (Eds.), Interactive minds. Life-span perspectives on the social foundation of cognition (pp. 394-412). Cambridge: Cambridge University Press.
Martin, N., D’Arcy, P., Newton, B., & Parker, R. (1976). Writing and learning across the curriculum 11-16. Montclair, NJ: Boynton/Cook.
Massy, J. (2006). The Integration of Learning Technologies into Europe's Education and Training Systems. In C. J. Bonk & C. R. Graham (Eds.), Handbook of blended learning: Global Perspectives, local designs (pp. 419-431). San Francisco, CA: Pfeiffer Publishing.
McAndrew, D. A., & Reigstad, T. J. (2001). Tutoring Writing: A Practical Guide for Conferences. Portsmouth, NH: Boynton/Cook, Heinemann.
McCracken, D., Wolfe, R., & Spool, J. M. (2003). User-Centered Website Development: A Human-Computer Interaction Approach. New York: Prentice Hall.
McFarlane, A. (2003). Editorial. Assessment for the Digital Age. Assessment in Education, 10(3), 261-266.

Merrill, M. D. (2000). Instructional strategies and learning styles: which takes precedence? In R. Reiser & J. Dempsey (Eds.), Trends and issues in instructional technology (pp. 83-99). New York: Prentice Hall.
Mezirow, J. (1990). Fostering Critical Reflection in Adulthood. A Guide to Transformative and Emancipatory Learning. San Francisco, CA: Jossey-Bass.
Morkes, J., & Nielsen, J. (1997). Concise, Scannable, and Objective: How to Write for the Web. Retrieved August 24, 2009, from http://www.useit.com/papers/webwriting/writing.html
Mousavi, A. (2002). An Encyclopedic Dictionary of Language Testing (3rd ed.). Taiwan: Tung Hua Book Co.
Myers, I. B., & McCaulley, M. H. (1985). Manual: A guide to the development and use of the Myers-Briggs Type Indicator. Palo Alto: Consulting Psychologists Press.
Myers-Breslin, L. (Ed.) (1999). Administrative Problem-Solving for Writing Programs and Writing Centers. Urbana, IL: NCTE.
Neilforoshan, M. R. (2002). An integrative approach to the accommodation of various learning styles. Journal of Computing Sciences in Colleges, 17(3), 67-72.
Nelson, C. (Ed.) (1989). Narratives from the Crib. Cambridge, MA: Harvard University Press.
Neuwirth, C. M., & Wojahn, P. G. (1996). Learning to write. Computer support for a cooperative process. In T. Koschman (Ed.), CSCL: Theory and practice of an emerging paradigm (pp. 147-170). Mahwah, NJ: Erlbaum.
Oliver, M., & Trigwell, K. (2005). Can 'blended learning' be redeemed? E-Learning, 2(1), 17-26.
Parvez, S. M., & Blank, G. D. (2007). A pedagogical framework to integrate learning style into intelligent tutoring systems. Journal of Computing Sciences in Colleges, 22(3), 183-189.
Paul, R. W. (1987). Dialogical Thinking: Critical Thought Essential to the Acquisition of Rational Knowledge and Passions. In J. B. Baron & R. J. Sternberg (Eds.), Teaching Thinking Skills: Theory and Practice (pp. 127-148). New York: Freeman.
Peters, O. (2000). Digital Learning Environments: New Possibilities and Opportunities. International Review of Research in Open and Distance Learning, 1(1), 1-19.
Rafoth, B. (Ed.) (2000). A Tutor’s Guide: Helping Students One to One. Portsmouth, NH: Boynton/Cook, Heinemann.
Raikes, N., Greatorex, J., & Shaw, S. (2004). From Paper to Screen: some issues on the way. Paper presented at the 30th Annual IAEA Conference ‘Assessment in the Service of Learning’, Philadelphia, USA.
Reigstad, T. J., & McAndrew, D. A. (1984). Training Tutors for Writing Conferences. Urbana, IL: NCTE.
Rissland-Michener, E. (1978). Understanding understanding mathematics. Cognitive Science, 2(4), 361-383.
Roever, C. (2001). Web-Based Language Testing. Language Learning & Technology, 5(2), 84-94.

Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context. Oxford: Oxford University Press.
Rovai, A. P., & Jordan, H. M. (2004). Blended Learning and Sense of Community: A Comparative Analysis with Traditional and Fully Online Graduate Courses. International Review of Research in Open and Distance Learning, 5(2), 1-13.
Ruf, U., & Gallin, P. (1998). Dialogisches Lernen in Sprache und Mathematik. Seelze-Velber: Kallmeyer.
Rushton, S., Morgan, J., & Richard, M. (2007). Teachers' Myers-Briggs personality profiles: Identifying effective teacher personality traits. Teaching and Teacher Education, 23, 432-441.
Ryan, L., & Zimmerelli, L. (2006). The Bedford Guide for Writing Tutors (4th edition). Boston, New York: Bedford/St. Martin’s.
Scardamalia, M., & Bereiter, C. (1996). Computer support for knowledge-building communities. In T. Koschman (Ed.), CSCL: Theory and practice of an emerging paradigm (pp. 249-268). Mahwah, NJ: Erlbaum.
Schipper, R. A., & Krist, P. S. (2002). Consideration of learning style, field orientation, format, citizen status, and time in online internet instruction. Journal of Computing Sciences in Colleges, 17(3), 73-83.
Schwartz, M. (1983a). Revision profiles: patterns and implications. College English, 45, 549-558.
Schwartz, M. (1983b). Two journeys through the writing process. College Composition and Communication, 34, 188-201.
Shang, Y., Shi, H., & Chen, S. (2001). An intelligent distributed environment for active learning. Journal of Educational Resources in Computing, 1(2), 1-17.
Sharples, M. (1998). How we write. Writing as creative design. London: Routledge.
Singh, H. (2003). Building effective blended learning programs. Educational Technology, 43(6), 51-54.
Singh, H., & Reed, C. (2001). A white paper: Achieving success with blended learning. Retrieved March 5, 2007, from http://www.centra.com/download/whitepapers/blendedlearning.pdf
Slater, J. (2004). Quality assurance in open & distance learning: A national perspective. Retrieved February 26, 2007, from http://distlearn.man.ac.uk/events/abstract/slater.php
Stash, N., Cristea, A., & De Bra, P. (2004). Authoring of Learning Styles in Adaptive Hypermedia: Problems and Solutions. Paper presented at the World Wide Web Conference, New York, USA.
Sutliff, R., & Baldwin, V. (2001). Learning Styles: Teaching Technology Subjects Can Be More Effective. The Journal of Technology Studies, 27(1), 22-27.
Tinio, V. L. (2003). ICT in Education. e-Primers Series. Bangkok: UNDP. Retrieved February 26, 2007, from http://www.apdip.net/publications/iespprimers/eprimer-edu.pdf
Tynjälä, P., Mason, L., & Lonka, K. (Eds.) (2001). Writing as a Learning Tool. Integrating Theory and Practice. Amsterdam: Kluwer Academic Press.


Valiathan, P. (2002). Blended learning models. ASTD Learning Circuits. Retrieved February 26, 2007, from http://www.learningcircuits.org/2002/aug2002/valiathan.html
Van Waes, L., & Schellens, P. J. (2003). Writing profiles: The effect of the writing mode on pausing and revision patterns of experienced writers. Journal of Pragmatics, 35, 829-853.
Victori, M., & Lockhart, W. (1995). Enhancing metacognition in self-directed language learning. System, 23(2), 223-234.
Voos, R. (2003). Blended learning: What is it and where might it take us? Sloan-C View, 2(1), 2-5. Retrieved February 28, 2007, from http://www.sloan-c.org/publications/view/v2n1/pdf/v2n1.pdf
Vygotsky, L. S. (1969). Thought and Language. Cambridge, MA: Harvard University Press.
Vygotsky, L. S. (1978). Mind in Society. The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
Young, A. (2006). Teaching Writing Across the Curriculum (4th edition). Upper Saddle River, NJ: Pearson and Prentice Hall.
Zunker, V. G., & Brown, W. F. (1966). Comparative Effectiveness of Student and Professional Counsellors. Personnel and Guidance Journal, 44(7), 738-743.
