Computers & Education 64 (2013) 116–126


A blended learning lecture delivery model for large and diverse undergraduate cohorts

Wendy A. McKenzie a,*, Eloise Perini a, Vanessa Rohlf a, Samia Toukhsati a, Russell Conduit a, Gordon Sanson b

a School of Psychology & Psychiatry, Monash University, Clayton 3800, Australia
b eEducation Centre, Monash University, Australia


Article history: Received 28 June 2012; received in revised form 21 January 2013; accepted 21 January 2013.

A blended learning model was developed to enhance lecture delivery in a large, diverse introductory psychology unit, introducing an online, personalized learning system for lecture preparation and using lecture time to extend students' understanding. Changes to the assessment included diagnostic, formative and summative online quizzes. Using hierarchical multiple regression to account for the variance associated with prior achievement and background knowledge, the results show that students who completed the online formative assessments had significantly higher scores on summative assessment tasks, and that scores were even higher for students who used these resources repeatedly. Changes to future implementations of the model are suggested to improve student engagement in formative assessment, and to facilitate lecturers' use of reports on student progress to focus and improve the quality of discussion in the face-to-face lectures.
© 2013 Elsevier Ltd. All rights reserved.

Keywords: Blended learning; Formative assessment; Personalized learning; Interactive learning; Large lectures; Psychology

1. Introduction

1.1. Background and context

Blended learning is increasingly being adopted in course delivery, and most university courses now have some online component. The rapid growth of blended learning models in higher education reflects the advantages of integrating technology with traditional face-to-face teaching methods to meet economic challenges and student demand for flexibility (Davidson, 2011; Twigg, 2003). Another major advantage of blended learning is its potential to address some of the difficulties posed by delivering courses to large and diverse student cohorts (Dziuban, Hartman, & Moskal, 2004; Sharpe, Benfield, Roberts, & Francis, 2006; Vaughan, 2007). Online delivery is particularly suited to content dissemination, as it allows students to work through materials at their own pace and at a time that is most convenient to them. The power of face-to-face learning, by contrast, derives primarily from the unique opportunities it provides to build on prior knowledge and benefit from discourse with an experienced practitioner in the discipline. Consequently, it seems logical to use online learning to explicitly prepare all students for the valuable face-to-face opportunity. However, the successful integration of such different teaching and learning modalities is a challenge.

The Monash University School of Psychology and Psychiatry teaches two first year psychology units with large student enrolments (N > 1500), across multiple local and international campuses and by off-campus learning. The educational backgrounds of this cohort vary widely: students are completing a range of different degrees (Arts, Science, Engineering, Business, IT, and Law), and some have studied psychology before while others are completely new to the discipline. Prior to 2011, delivery of the first year psychology program had been based on a conventional lecture plus laboratory model. The lectures focused on knowledge-based learning objectives, while laboratory classes targeted skills development. Traditionally, on-campus students attended three 1 h lectures per week and a 2 h laboratory

* Corresponding author. Tel.: +61 3 9905 1706; fax: +61 3 6605 3948. E-mail address: [email protected] (W.A. McKenzie).
0360-1315/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.compedu.2013.01.009


workshop each fortnight. Off-campus students were provided with a hard-copy book of lecture summaries and attended a weekend school to complete the laboratory work. All lectures were recorded and made available online using the Echo360 recording system (http://echo360.com/). To ensure all students were accommodated, no prior knowledge was assumed, so lecture time was primarily used to cover the fundamental concepts using a traditional, didactic approach. Within this model, content that was prepared for face-to-face lectures was replicated in different modalities to provide flexibility of access.

This practice is typical of a low-end "enabling" blended learning system (Graham, 2006). In this type of approach the learning management system is used to provide access to lecture slides prior to the lecture, audio recordings post-lecture, perhaps text-based lecture summaries, and links to other online resources such as multimedia learning aids and interactive practice tests (often through textbook publisher websites). One of the major limitations of this approach is that, apart from the lecture schedule and a set of prescribed readings, there is little guidance for students in how to make the most of the learning opportunities afforded by these diverse resources. At best, this scenario describes the use of online resources to "enhance" the classroom experience, with technology progressively added onto the traditional lecture without changing the pedagogical approach (Garrison & Kanuka, 2004; Graham, 2006).

Increasingly, curriculum change aims to incorporate blended learning into the core of the program, in the belief that this strategy can improve student engagement by facilitating active learning through increased opportunities for interaction and feedback in both face-to-face and online environments (McGee & Reis, 2012). These transformational blended learning systems (Graham, 2006) demand a "thoughtful integration of classroom face-to-face learning experiences with online learning experiences" (Garrison & Kanuka, 2004, p. 96), and consider the use of technology in a way that dramatically changes the learning and teaching experience (Sharpe et al., 2006).

In 2011, we began the process of curriculum renewal towards a more sustainable and effective model of learning and teaching that would ensure appropriate experiences were available to meet the needs of all students. A major part of this process required rethinking the role of lectures, which were essentially based on a traditional, didactic information dissemination model of teaching. Although lectures have long been used as an efficient mode of content delivery, they have been criticized as an ineffective means to encourage deep thinking, change attitudes and teach new skills (Bligh, 2000). With increasing student numbers, it becomes even more difficult to rely on lectures as a way of facilitating these kinds of learning outcomes. As summarized by Walker, Cotner, Baepler, and Decker (2008), the difficulties of lecturing to large groups include poor attendance rates, low levels of engagement by unprepared students, and little opportunity for feedback or active learning.
Adopting a blended learning approach for large cohorts can lead to improved learning outcomes and reduced costs, by replacing some of the traditional, didactic lectures with online activities that encourage active learning, such as quizzes, online discussion, tutorials, and simulations (Heterick & Twigg, 2003, cited in Garrison & Kanuka, 2004; Vaughan, 2007).

1.2. A revised lecture delivery model

The revised lecture delivery model developed for first year psychology units meets the criteria of a transformational blended learning system in that some lecture time was replaced with the use of online learning resources as a core component of the curriculum, and the approach to lectures was altered to integrate the online and face-to-face learning experiences. The integration was managed as part of a learning cycle: first the completion of a diagnostic test with immediate access to formative (pre-class) online activities, then a face-to-face lecture, followed by an online summative assessment task, with feedback on class performance to address misconceptions in a second face-to-face lecture. This cycle was undertaken over a fortnightly lecture block for each of the six lecture topics, as shown in Fig. 1. Over the semester, the number of face-to-face lectures was thereby reduced from 36 h (three 1 h lectures per week) to 24 h (one 2 h lecture per week).

Fig. 1. Overview of the blended learning lecture delivery model used in first year psychology showing the two week cycle of lecture preparation using MyPsychLab to obtain formative feedback prior to attending face-to-face lectures and completing a summative assessment task (in Moodle) (Reprinted from McKenzie & Perini, 2011).


Expecting students to use online learning resources in lieu of lectures has implications for both instructors and students. With fewer lecture hours available, lecturers needed to revise the amount of content covered in lectures, as well as the style of presentation. Using the online resources as pre-class activities means that students come to lectures prepared, with a basic level of understanding of the topic. If effective, then all students, regardless of their prior knowledge of psychology, can come together in the lecture with a similar level of understanding of basic concepts. The lecturer is then free to spend more time engaging students in discussion, illustration, and exploration, rather than disseminating information about fundamental concepts, which for many students is essentially revision of the secondary school psychology curriculum, with which approximately one-third of the class are familiar. Similar "learn before the lecture" strategies have been successful in other large, introductory courses (e.g., Moravec, Williams, Aguilar-Roca, & O'Dowd, 2010).

Completion of pre-class activities, however, requires students to operate effectively as independent learners. The design of the online teaching and learning environment is therefore a major issue in developing students' self-regulated learning skills as they transition to university (Krause & Coates, 2008; Winters, Greene, & Costich, 2008). It has been argued that the provision of regular opportunities for formative assessment is one means of developing self-regulated learning strategies (see Clark, 2012 for a review).

Our revised lecture model introduces formative (pre-class) learning activities into the curriculum using the MyPsychLab (MPL) platform, an online, personalized learning system published by Pearson. Previous research has demonstrated the construct validity of this system for measuring mastery of knowledge in a large introductory psychology course, where scores in MPL were significantly correlated with performance across a range of measures (Cramer, Ross, Orr, & Marcoccia, 2012). For each lecture topic a module was created in MPL, customized according to the learning goals for that topic. The first activity in the module was a pre-test, which provided a diagnostic assessment of the students' prior knowledge. Based on the outcomes of the pre-test, MPL creates a personalized study plan for the student, acknowledging student diversity by personalizing the online learning experience according to the student's demonstrated level of mastery. Specifically, the study plan provides links to study resources on topics that address the gaps in the student's knowledge, without requiring review of content they already appear to know well. Students can also view the correct answers for pre-test questions they answered incorrectly, so they can learn from their mistakes. After working through the personalized study materials, students can complete a post-test to get a revised estimate of their level of mastery. Feedback statements in the post-test review explain why the selected alternative was correct or incorrect, including specific reference to the text where available. Students can complete the post-test multiple times until they achieve the desired level of competency. MPL also produces reports on students' progress on the study plans, which may assist instructors in their preparation of face-to-face lectures.
Overall, the structure of these pre-class learning activities incorporates most of the design principles for using formative assessment to facilitate the development of self-regulated learning skills identified by Nicol and Macfarlane-Dick (2006): clarify goals; facilitate self-assessment; provide high-quality feedback; encourage dialogue; improve motivation; close gaps between current and desired goals; and inform teachers how to adapt teaching.

There are numerous studies across a range of disciplines that demonstrate the positive impact of regular online testing on learning outcomes (e.g., Angus & Watson, 2009; Dobson, 2008; Gikandi, Morrow, & Davis, 2011; Peat & Franklin, 2002; Rayner, 2008; Stull, Varnum, Ducette, Schiller, & Bernacki, 2011a,b). However, it can be difficult to balance assessment that is truly formative (for the purpose of providing feedback to students and teachers on level of mastery; Stull et al., 2011a) against the reality of assessment-driven behaviour by students who choose to complete only summative assessment tasks. Interestingly, Kibble (2007, 2011) noted that the relationship between completing online quizzes and exam scores breaks down if the quizzes are moved from being voluntary to being counted for course credit.

Given these considerations, changes were made to the assessment structure of the unit to incorporate the regular online assessments; previously, the only assessment of learning outcomes associated with the lecture topics was the end-of-semester exam. In the revised curriculum, at the end of the first week of each lecture topic students completed an online lecture topic quiz. Lecture quizzes counted as summative assessment tasks, with the best 4 out of 5 contributing 10% of the final grade. The use of regular online quizzes in this way is quite common, and straddles both formative and summative purposes, recognizing the need to award some credit to improve participation while also providing opportunities for frequent and rapid feedback (Dobson, 2008). In the second week of each lecture topic cycle, the lecturer provided feedback to students about the class outcomes of the lecture quiz, allowing opportunities for discussion of common misunderstandings.

The cycle of formative assessment (MPL study plans), lecture, summative lecture topic quizzes, and follow-up lecture satisfies many of the principles of best practice suggested by Nicol (2007) for using multiple-choice questions to support self-regulated learning skills. Our model has similar elements to case studies described by Nicol which embody the principles of 'self-assessment'; immediate 'feedback'; sustaining 'motivation' by spacing out the quizzes; 'adapting teaching' by addressing feedback on the quiz results in lectures; and 'closing the gap' by allowing multiple attempts on the post-tests.

1.3. Aims

The aim of this paper is to investigate the impact of the new blended learning approach to lectures on the learning outcomes and student experience in our large enrolment introductory psychology unit. Replacing one-third of the lecture time with independent learning using an online personalized system (MyPsychLab) requires confidence that the learning cycle of formative assessment, lectures, and summative assessment tasks is an effective strategy.
To this end, we collected data on students' use of MPL to determine whether use of the online formative assessment tasks would predict improved performance on the summative assessments (lecture topic quizzes and end-of-semester exam). Further, as students' perceptions of the teaching and learning environment also affect the quality of their learning experience (Diseth, Pallesen, Brunborg, & Larsen, 2010; Ramsden, 1992), a survey was conducted at the end of semester to explore a range of factors that impact on the student experience, such as perceptions about feedback, workload, lectures and resources.

Students who used MPL were expected to show improvement in their understanding of core concepts, demonstrated by increases in scores from the pre-test to post-test for each lecture topic study plan. More importantly, it was expected that students who completed these formative assessments in MPL would have higher scores on the summative assessment tasks: the lecture topic quizzes completed during the semester, and the end-of-semester exam. This approach follows Angus and Watson's (2009) lead in investigating the effect of merely engaging with the diagnostic and formative assessment tasks on subsequent performance in related summative assessments, rather than focusing on the correlation between performance levels in formative and summative assessments per se.


In other words, we were interested in whether the benefits of using the diagnostic and formative tests available in MPL led to better learning outcomes. Furthermore, it was expected that these differences would still be reliable when tertiary entrance score and prior completion of psychology units in secondary school were used as measures of variability attributed to prior ability. Tertiary entrance data are considered to have high face validity as a measure of secondary school academic achievement (James, Bexley, & Shearer, 2009). However, success at school is also likely to be influenced by psychosocial factors, such as motivation (good study habits, homework completion), self-regulation (well behaved in class), and a family environment that values education (Casillas et al., 2012).

Whether the advantage for students using MPL depends on the timing of the formative assessments, during semester and/or at end-of-semester exam revision, was also of interest. The effects of repeated and distributed testing on long-term retention (Carrillo-de-la-Peña & Pérez, 2012; Roediger & Butler, 2011) predict a greater advantage for students who complete the MPL study plans both before the lecture topic quiz during semester and prior to the exam for revision, compared to students who complete the study plans only prior to the relevant lecture topic quiz or only just prior to the exam.

2. Method

2.1. Participants

All students (N = 1710) enrolled in PSY1011 Psychology 1A at Monash University in Semester 1, 2011 were invited to participate in the evaluation. The unit was offered at three Melbourne campuses (Clayton, Caulfield, Peninsula), in South Africa and Malaysia, and by off-campus learning. In accordance with procedures approved by the Monash University Human Research Ethics Committee, student permission was obtained to access data on their use and results in the online learning environments (Moodle and MyPsychLab) and university entrance records. Data from 1648 students were accessed and de-identified prior to analysis. As shown in Table 1, the majority of the students studied on-campus in Australia, were aged in their late teens to early twenties, and were female. This age distribution is typical of most first year undergraduate students (Department of Education, Employment and Workplace Relations, 2011), and it is common for females to be over-represented in most psychology units (U.S. Department of Education, 2006; Willyard, 2001). Of the entire cohort, 718 students completed the anonymous evaluation survey.

2.2. Curriculum materials and assessment

The unit is one of two first year psychology units designed to introduce students to a broad range of areas within the scientific discipline of psychology. This first semester unit focuses on six core areas, with a different topic covered each fortnight over the 12 week semester: 1) history and science of psychology; 2) personality; 3) learning; 4) biological psychology; 5) developmental psychology; and 6) sensation and perception. One 2 h lecture per week was presented during the fortnightly period associated with a given topic area. Lecture slides were made available to students prior to the lecture, and audio recordings of the lectures delivered at the local campus were available to all students using the Echo360 lecture capture software.
A set of curriculum materials was created for each topic, including a set of learning objectives/keywords; prescribed reading (mainly based on the textbook Psychology: From Inquiry to Understanding by Lilienfeld, Lynn, Namy, & Woolf, 2011); and customized study plans created in MyPsychLab (Pearson). Assessment of learning outcomes included lecture topic quizzes during the semester and an end-of-semester examination, both using multiple-choice questions. The unit was delivered using the Moodle learning management system.

2.2.1. MyPsychLab (MPL) study plans

Customized learning modules were created using MPL (Pearson) to address the specific learning objectives/keywords for each lecture topic. Each study plan began with a pre-test of multiple-choice questions, designed as a diagnostic assessment tool for students. Based on the pre-test score, the software indicates whether or not the student has met the pre-determined criterion (set at a minimum score of 70%) for each learning objective. For each objective where the criterion has not been met, a personalized set of study materials is generated, consisting of a set of links to the relevant pages of the prescribed reading (via the ebook), as well as any video or multimedia resources that have been loaded into the module. Students then have the opportunity to make multiple attempts at a post-test, comprising a selection of new multiple-choice questions drawn from a larger pool, to monitor improvement in their understanding of the specified learning objectives. Both pre- and post-tests provide students with a review of the questions at the end of each quiz. Feedback on the pre-test provided the correct answers, whereas feedback on the post-test appeared in the form of statements to assist students in understanding why their answer was incorrect, including specific references to relevant areas of the prescribed reading where appropriate. There were 30 questions per quiz, on average. A study plan was created as preparation for the first lecture in each topic and the associated lecture topic quiz.
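The per-objective mastery logic described above is straightforward to express in code. The Python sketch below illustrates the general idea only: the function, data structures, and example objectives are hypothetical, as the paper describes MyPsychLab's behaviour but not its implementation beyond the 70% criterion.

```python
# A minimal sketch of the mastery criterion described above, assuming pre-test
# answers are grouped by learning objective. Names are hypothetical; this is
# not MyPsychLab's actual (proprietary) implementation.
MASTERY_CRITERION = 0.70  # minimum proportion correct per learning objective

def build_study_plan(pretest_results: dict[str, list[bool]]) -> list[str]:
    """Return the learning objectives whose pre-test score falls below the
    criterion; these are the objectives a personalized study plan would target."""
    plan = []
    for objective, answers in pretest_results.items():
        score = sum(answers) / len(answers)
        if score < MASTERY_CRITERION:
            plan.append(objective)  # would link reading/multimedia resources here
    return plan

# Example: mastery demonstrated for one objective but not the other.
results = {
    "classical_conditioning": [True, True, True, False],    # 75%: criterion met
    "observational_learning": [True, False, False, False],  # 25%: needs study
}
print(build_study_plan(results))  # -> ['observational_learning']
```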

Table 1
Mean age and gender distribution by learning mode and location.

Unit location             Age M    SD      Female          Male           Total
On-campus learning
  Australia (a)           19.66    3.18    829 (72.3%)     318 (27.7%)    1147
  Malaysia                20.12    2.40    113 (68.1%)     53 (31.9%)     166
  South Africa            20.86    2.58    146 (69.2%)     65 (30.8%)     211
Off-campus learning       31.84    9.01    101 (81.5%)     23 (18.5%)     124
Total                     20.78    4.96    1189 (72.1%)    459 (27.9%)    1648

(a) Includes students from three local campuses.


2.2.2. Lecture topic quizzes

Five lecture topic quizzes were created in Moodle, for the Personality, Learning, Biological, Developmental, and Sensation and Perception topics. (There was no lecture topic quiz on the History and Science of Psychology topic in Week 1, as there is often substantial variability in enrolments at the beginning of semester.) Each student received a random selection of 10 questions from a larger pool, and was given 15 min to complete the quiz. The deadline for submission was close of business at the end of the first week in each fortnightly lecture topic cycle. The lecture quizzes consisted of a new batch of multiple-choice questions that differed from the questions included in the study plans, but which assessed knowledge of the same concepts. The best 4 out of 5 quiz scores contributed 10% of the final semester mark.

2.2.3. End of semester exam

Students completed the 2 h, closed-book examination in hardcopy as part of the centrally managed and invigilated examinations process. There were 100 multiple-choice questions: 10 questions on History and Science of Psychology and 18 questions on each of the remaining five lecture topics.

2.3. Evaluation

2.3.1. Student experience

At the end of the semester students were invited to provide feedback on their learning experience in first year psychology by completing an online survey. There were 40 items, including questions about: 1) enrolment (degree, campus, study mode); 2) the unit (e.g., interest, clarity of objectives, workload, deadlines, feedback, organization); 3) MPL (e.g., usefulness for preparation of lectures and assessment tasks); 4) lectures (e.g., attendance, interest, engagement, coverage); 5) laboratory classes (not discussed here); and 6) learning resources (i.e., rating how helpful each resource was in supporting the student's studies in psychology). Each item was rated on a 5-point scale (1 = Strongly Disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Strongly Agree).

2.3.2. Student background

Student identification numbers were used to obtain university records on tertiary entrance data and VCE (Victorian Certificate of Education, in the state of Victoria, Australia) psychology scores for those students who completed units in psychology as part of their secondary education certificate (N = 609). Tertiary entrance rank (TER) data are based on the Australian Tertiary Admissions Rank, calculated for universities for the purposes of comparing performance of students across secondary school subjects (range = 0–99.95).

2.3.3. Learning outcomes

All pre- and post-test scores for study plans completed in MPL, lecture topic quiz scores obtained from Moodle, and examination scores were collated for analysis. Note: the History and Science of Psychology topic was not included in the evaluation of learning outcomes because there was no lecture quiz associated with this topic, and the study plan in MPL was used as a "practice" and introduction for students in the first week of semester.

3. Results

3.1. Use of MyPsychLab (MPL)

MPL generates reports on each student's time of submission and score on the pre-test and all post-test attempts for each of the lecture topic study plans. Participation in MPL was counted if there was at least one submission recorded in either the pre- or post-test for a study plan. These reports were used to categorize students into groups according to "MPL use", with four levels: 1) did not use MPL; 2) used MPL before the exam only; 3) used MPL prior to the lecture topic quiz only; and 4) used MPL prior to the lecture topic quiz and the exam. These groups were identified independently for each of the five lecture topics, as the pattern of completion across the MPL study plans differed for individual students.
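For readers interested in how such usage groups might be derived from submission logs, the following Python sketch illustrates one way to implement the categorization just described. The function and its inputs are hypothetical; the paper does not describe the actual processing of the MPL reports.

```python
# A sketch of the four-level "MPL use" categorization, assuming per-student,
# per-topic submission timestamps extracted from the MPL reports (hypothetical).
from datetime import datetime

def mpl_use_group(submissions: list[datetime],
                  quiz_deadline: datetime,
                  exam_date: datetime) -> str:
    """Classify one student's study plan use for one lecture topic. Any pre- or
    post-test submission counts as participation; timing relative to the quiz
    deadline and the exam date determines the group."""
    before_quiz = any(t <= quiz_deadline for t in submissions)
    before_exam = any(quiz_deadline < t <= exam_date for t in submissions)
    if before_quiz and before_exam:
        return "used MPL before quiz and exam"
    if before_quiz:
        return "used MPL before quiz only"
    if before_exam:
        return "used MPL before exam only"
    return "did not use MPL"
```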
An overview of MPL use across the semester for the different student cohorts is shown in Table 2. The use of MPL declined throughout the semester, mainly reflecting a drop off in use by students who recorded submissions both during and at the end of semester. Use of MPL prior to each lecture topic quiz was quite stable across the semester. Relatively few students used MPL only for exam revision. Comparing scores on the pre-test and post-test quizzes in MPL illustrates benefits to student learning across each of the lecture topic study plans: Fig. 2 clearly shows that pre-test performance is lower and more variable compared to post-test scores.

3.2. Learning outcomes

3.2.1. Lecture quiz scores

The means and standard deviations of scores on the lecture quizzes were calculated for each of the five topics as a function of whether or not students used the MPL study plan during semester, prior to completing the lecture topic quiz (Table 3). As can be seen in Table 3, students who participated in MPL prior to completing the lecture topic quiz outperformed those who did not. To determine whether test performance could be predicted by participation in MPL after controlling for prior ability and knowledge, a series of hierarchical multiple regressions was used, whereby tertiary entrance rank (TER) and completion of psychology as a VCE subject were entered in Block 1, followed by participation in MPL in Block 2 (Table 4). For all students, participation in MPL predicted lecture quiz scores, even after controlling for the effects of student background (TER and completion of psychology units).
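The block structure of these analyses follows the standard hierarchical (sequential) regression procedure. As an illustration only, the Python/statsmodels sketch below fits the two blocks and computes the R² change and its F test; the column names are hypothetical, and the paper does not state which statistical software was actually used.

```python
# Illustrative two-block hierarchical regression with statsmodels, assuming a
# pandas DataFrame `df` with hypothetical columns: quiz_score, ter (tertiary
# entrance rank), vce_psych (0/1 dummy) and mpl (0/1 dummy).
import pandas as pd
import statsmodels.formula.api as smf

def hierarchical_regression(df: pd.DataFrame) -> None:
    # Block 1: student background (TER and VCE psychology completion).
    block1 = smf.ols("quiz_score ~ ter + vce_psych", data=df).fit()
    # Block 2: add participation in MPL before the quiz.
    block2 = smf.ols("quiz_score ~ ter + vce_psych + mpl", data=df).fit()

    # R-squared change and its F test for the added predictor.
    r2_change = block2.rsquared - block1.rsquared
    df_change = block1.df_resid - block2.df_resid  # one predictor added
    f_change = (r2_change / df_change) / ((1 - block2.rsquared) / block2.df_resid)

    print(f"Block 1: R2 = {block1.rsquared:.3f}, "
          f"F({block1.df_model:.0f}, {block1.df_resid:.0f}) = {block1.fvalue:.2f}")
    print(f"Block 2: R2 = {block2.rsquared:.3f}, R2 change = {r2_change:.3f}, "
          f"F change({df_change:.0f}, {block2.df_resid:.0f}) = {f_change:.2f}")
```

The same pattern extends to the exam-score models reported below, where Block 2 is either the number of study plans completed or the dummy-coded four-level MPL participation grouping.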



Table 2
Percentage of student use of MyPsychLab (MPL) during semester and before the exam (N = 1648).

Lecture topic   Week   Did not use MPL   MPL before exam only   MPL before quiz only   MPL before quiz and exam
Personality       3         46.7                16.6                   13.7                    23.1
Learning          5         52.1                 9.8                   21.5                    16.7
Biological        7         54.4                10.2                   17.2                    18.2
Development       9         61.6                 6.9                   18.4                    13.1
Perception       11         60.0                 8.9                   18.4                    12.7
There was a significant improvement in the prediction of lecture quiz scores on the basis of participation in MPL, with the percentage of variance explained ranging from 1% to 3.7%.

3.2.2. Exam scores

The level of engagement with MPL was indexed using the number of study plans attempted by each student across the semester (range 0–5). Table 5 shows the improvement in mean exam scores as a function of the number of study plans completed. There was a moderate, positive relationship between the exam score and the number of MPL study plans completed (r = .44, p < .001). Students who did not use MPL at all achieved a mean score of only 52%, compared to a mean score of 66% for students who completed all of the associated MPL study plans (46.78/90 and 59.45/90, respectively; Table 5).

Hierarchical multiple regression was used to predict total exam score across all five lecture topics. Results of the analyses indicated that TER and completion of VCE Psychology, entered in Block 1, were significantly related to exam score (F(2, 918) = 163.69, p < .001) and explained 26.3% of the variance. The addition of level of engagement in Block 2, as measured by the number of MPL study plans completed, significantly improved the model, with an R² change of 7.2% (F(1, 917) = 99.59, p < .001). The final model explained 33.5% of the variance in exam score (F(3, 917) = 154.04, p < .001).

The means and standard deviations of scores on the end of semester exam were calculated for each of the five lecture topics. This time, participation in MPL was categorized into four groups: 1) did not use MPL; 2) used MPL prior to the exam only; 3) used MPL prior to the lecture topic quiz only; and 4) used MPL prior to the lecture quiz and exam. The results are summarized in Fig. 3. Across all topics, exam performance was highest for students who used MPL both for exam revision and during the semester, whereas the lowest exam scores were recorded by students who did not use MPL at all. Using MPL only before the lecture quiz, or only before the exam, contributed to a higher exam score than for non-users, but did not confer the same advantage as using MPL both during and at the end of semester. Hierarchical multiple regression analysis was used to predict overall exam performance on the basis of participation in MPL. The regression analysis revealed that tertiary entrance rank (TER) and completion of psychology as a VCE subject, entered in Block 1, made a significant contribution to the prediction of exam score (F(2, 915) = 172.78, p < .001), explaining 27.4% of the variance. The addition of participation in MPL (before exam; before quiz; before exam and quiz) in Block 2 also made a significant contribution, with an R² change of 8.5% (F(3, 912) = 40.19, p < .001). The final model explained 35.9% of the variance (F(5, 912) = 102.11, p < .001).

3.3. Student experience

Students' responses to the end of semester survey are summarized in Table 6. Where available, the change in the mean rating compared to a similar survey conducted for the same unit in the previous year is noted. Overall, students evaluated the unit positively, with improvements across all items, in particular those related to feedback and workload.

Fig. 2. Box plots presenting pre-test and post-test MPL quiz scores.


Table 3
Lecture topic quiz scores (out of 2.5%) as a function of participation in MPL.

                    No MPL                      MPL before quiz
                    n      M      SD            n      M      SD
Personality         918    1.55   0.54          567    1.81   0.50
Learning            863    1.84   0.49          608    2.03   0.41
Biological          870    1.94   0.48          556    2.12   0.39
Development         856    1.92   0.53          473    2.16   0.40
Perception          829    1.95   0.48          450    2.13   0.38

Note: Independent-samples t tests reveal these differences to be significant (p < .001).

Approximately 60% of students self-reported attending either every lecture or most lectures. This estimate matches attendance data recorded during the semester via a weekly headcount at the local campuses: on average, lecture attendance ranged from 82% in Week 1 to 36% in Week 12. Student perceptions of the lectures were not as positive as their evaluation of the unit overall, indicating moderate satisfaction with the clarity, coverage and interest of the lecture material. The evaluation of MPL was more positive, rating most highly in the areas of feedback on knowledge of the lectures and identifying concepts needing further study. Overall, students rated the use of MPL and the lecture quizzes as more useful to their learning than either attending lectures or listening to the lecture recordings.

An analysis of the students' open-ended comments about the lectures indicated that the most common concerns about the lecture content related to inconsistencies in the approach of different lecturers across topics, and the need for further clarity about the relationship between lecture content, reading, and the assessment tasks. For example:

"It was disconcerting that whilst some lecturers covered the same material as readings, others covered only extraneous information or bits of the reading. I have no problem with any of those lecture styles but the inconsistency made it hard for me to develop a consistent approach."

"Lecture topic quizzes included a lot of questions, concepts and information that was not discussed in lectures."

4. Discussion

Transforming lecture delivery in a large first year psychology course using an integrated blended learning approach has improved learning outcomes and the student experience. Changing our approach to lectures was achieved by introducing a personalized, online learning system, MyPsychLab (MPL), as a diagnostic and formative assessment tool to test and provide feedback on students' understanding of fundamental concepts. The purpose of the lectures was to build on the concepts covered in the MPL study plans, extending students' understanding through the exploration of research, theory and application. Learning outcomes were assessed using online quizzes for each lecture topic during the semester, and on the end-of-semester exam.

4.1. Learning outcomes

The results show that students who completed the formative assessments had significantly better learning outcomes, as measured by performance on the during-semester online lecture topic quizzes and the end-of-semester exam. This result may simply reflect the increased motivation of students who are likely to engage in formative assessment tasks. Self-selection bias is a common problem in educational research, as participants cannot be randomly allocated to experimental conditions. However, it is encouraging that use of MPL predicted improved learning outcomes even when TER and prior study of psychology (i.e., in secondary school) were accounted for in the model. As noted earlier, a high TER is indicative of the level of secondary school academic achievement as well as other

Table 4
Hierarchical multiple regression analyses predicting lecture quiz scores as a function of participation in MyPsychLab (MPL).

                               Personality   Learning    Biology     Development   Perception
Block 1: Student background
  Tertiary entrance rank (TER) .314***       .286***     .231***     .251***       .263***
  Psychology VCE unit (a)      .083**        .291***     .198***     .068*         .187***
  R²                           .105          .164        .091        .067          .103
  Adj R²                       .103          .163        .089        .065          .101
  F                            53.71***      90.21***    44.48***    29.21***      44.55***
  df                           2, 917        2, 917      2, 888      2, 811        2, 777
Block 2: Participation in MPL
  MPL (b)                      .197***       .104***     .101**      .167***       .132***
  R²                           .142          .175        .101        .094          .120
  Adj R²                       .140          .172        .098        .091          .116
  F                            50.67***      64.66***    33.16***    27.98***      35.12***
  df                           3, 916        3, 916      3, 887      3, 810        3, 776
  R² change                    .037          .010        .010        .027          .017
  F change                     40.03***      11.49***    9.64**      23.88***      14.68***
  df change                    1, 916        1, 916      1, 887      1, 810        1, 776

Notes: *p < .05, **p < .01, ***p < .001.
(a) Dummy coded: 0 = student did not complete a psychology VCE subject; 1 = student completed at least one psychology VCE unit.
(b) Dummy coded: 0 = student did not complete any MPL before the quiz; 1 = student completed at least the MPL pre-test before the quiz.


Table 5
Means and standard deviations of total exam score (a) (out of 90) for the five lecture topics, as a function of the number of MyPsychLab (MPL) study plans (b) attempted.

Number of study plans      M       SD      n
0                          46.78   11.51   526
1                          47.18   10.60    88
2                          48.61   11.91    65
3                          51.28    9.93    68
4                          56.08   11.25   157
5                          59.45   11.91   416

(a) Exam score does not include performance on the History and Science of Psychology questions (n = 10).
(b) Study plans refer to lecture one in each topic.

factors, such as motivation, self-regulation and family environment (Casillas et al., 2012). Although the percentage of variance predicted by use of MPL is statistically small, the average advantage on the end of semester exam was 14% for students who completed all MPL study plans compared to students who did not use MPL. However, given the limitations of the design (non-randomized, with self-selection by students choosing to use MPL), it is not possible to rule out the contribution of other factors. Future research could extend the investigation of this relationship to include variables that predict academic achievement at university, such as self-efficacy and employment (McKenzie & Schweitzer, 2001).

Using online quizzes as formative assessment to improve student learning in this first year psychology unit replicates findings that have been regularly demonstrated across a range of disciplines (e.g., Angus & Watson, 2009; Dobson, 2008; Gikandi et al., 2011; Peat & Franklin, 2002; Rayner, 2008; Stull et al., 2011a,b). The benefit of using an online personalized learning system, such as MPL, is that it provides students and teachers with a dynamic model of formative assessment (Stull et al., 2011b). After completing the pre-test as a diagnostic tool, the system provides a personalized study plan for the student to follow to improve their understanding of content related to the learning objectives for which they had not achieved the set achievement criterion (70%). The post-test provides a second opportunity to test their level of understanding. The results show that students' scores on the post-tests were higher and less variable than scores on the pre-test for each of the study plans.

Incorporating the use of the MPL study plans into the learning cycle of pre-class activities, face-to-face lecture, summative lecture topic quiz, followed by feedback in the next lecture, models many of the practices outlined by Nicol (2007) on how multiple-choice questions can be used to foster self-regulated strategies. On average, students who consistently engaged with the online and face-to-face learning activities by following this process benefited in the way intended by the model. Other students chose to make use of the MPL resources for revision prior to the end-of-semester exam; for these students the benefits are likely to be smaller, and possibly a result of retrieval practice rather than long-term learning strategies. Providing opportunities for retrieval practice through multiple testing has been shown to improve long-term retention more than the equivalent time spent studying (Carrillo-de-la-Peña & Pérez, 2012; Roediger & Butler, 2011). Students who used MPL both for exam revision and during semester before the lecture topic quizzes showed better learning outcomes than students who used MPL only before the lecture quiz or only before the exam. Multiple retrieval attempts may benefit retention by increasing the availability of retrieval routes and/or increasing retrieval effort; the advantages should be greater with feedback (MPL) than without feedback (Moodle lecture topic quizzes), and when the retrieval attempts are spaced apart (Roediger & Butler, 2011). Interestingly, for topics that occurred earlier in the semester, there appeared to be an advantage for students who used MPL before the lecture quiz only, compared to before the exam only.
Although this pattern supports the benefits of spacing retrieval attempts, further investigation is necessary to verify this finding. Future research should consider the frequency, consistency, and conditional use of online learning tools in order to learn more about how these resources benefit learners in response to specific learning tasks (Lust, Juarez Collazo, Elen, & Clarebout, 2012).

Fig. 3. Means and standard deviations of exam scores by lecture topic, as a function of participation in MyPsychLab (MPL).


Table 6
Means and standard deviations of student ratings of the psychology unit, lectures, and learning resources (1 = Strongly Disagree to 5 = Strongly Agree).

Student survey item                                                      M      SD     Change (a)
The material covered in this unit is interesting.                        4.08   0.71   0.08
The learning objectives in this unit were made clear to me.              3.99   0.79   0.11
The workload was reasonable.                                             3.77   0.83   0.28
The time allowed to complete the lecture quizzes was adequate. (b)       3.93   1.09   0.18
Constructive feedback was provided on my work. (b)                       3.90   0.93   0.28
Overall, the unit has been well organized.                               4.03   0.84   0.17
MPL provided useful feedback on my knowledge of the lecture topics.      3.90   0.92
Using MPL helped me to identify concepts I needed to study further.      3.96   0.88
MPL helped me to prepare for lectures.                                   3.61   0.96
MPL helped me to prepare for assessment tasks.                           3.74   1.02
The (lecture) learning objectives were explicit and clear.               3.89   0.76
The lecture material was clearly presented.                              3.74   0.81
The lecture material was interesting.                                    3.73   0.84
The lectures encouraged me to actively engage with the topic.            3.42   0.86
The lectures provided adequate coverage of the topic.                    3.65   0.83
The lectures inspired me to learn more about this topic.                 3.43   0.89
Questions about this topic were answered adequately.                     3.71   0.74
Usefulness of resource for student learning:
  Attending lectures.                                                    3.69   1.02
  Listening to the online lecture recordings.                            3.79   1.04
  Completing the study plans in MPL.                                     3.95   1.06
  Completing the lecture topic quizzes.                                  4.20   0.96
  Studying the prescribed text.                                          4.38   0.84

(a) Change in mean rating from the 2010 to the 2011 end of semester student survey.
(b) Slightly different wording of item from previous year.

4.2. Student engagement

As MPL was being used as formative assessment, it is not surprising that participation rates were not 100%. On average, approximately one-third of students regularly completed the study plans prior to the lecture topic quiz, and around half of these students also used the study plans for exam revision. A small sub-group (12%) used MPL only for exam revision. Of greater interest is that the majority of students did not engage with MPL at all. In contrast, almost all students completed the online lecture topic quizzes that contributed up to 10% of their final grade. This pattern is typical of students who are largely assessment-driven, and suggests that motivation to use the online personalized learning system would be enhanced if some combination of the pre- and post-test quiz scores contributed to the summative assessment of the unit. Contrary to this position is the finding that performance on online quizzes is a better predictor of student learning outcomes if they are completed voluntarily as part of formative, not summative, assessment (Kibble, 2007). Past research suggests two possible explanations of this finding: first, that participation in formative assessment tasks may be an indicator of more well-developed independent learning skills (cf. Clark, 2012; Nicol, 2007); and second, that making the quizzes part of summative assessment may increase the use of strategies to ensure high scores, rather than treating them as an opportunity to learn from feedback (Kibble, 2007).

It would be useful to explore other means of improving student motivation to complete formative assessments. One suggestion is to make more salient the message that using online learning systems, such as MPL, leads to better learning outcomes. To this end, the findings of the current study have been presented to subsequent student cohorts to illustrate the benefits of engaging in the formative assessment tasks. We are currently investigating the impact of providing personalized progress reports on students' performance, in relation to their peers, on formative and summative assessment tasks in the blended learning lecture model.

4.3. Student experience

Making substantial changes to the teaching and learning environment should impact the quality of the student experience. It was encouraging to note the overall improvement in student perceptions of the unit in the current sample. There was no negative impact on students' perception of the workload, despite the increase in the number of assessments. Although it is likely that the online quizzes contributed to satisfaction with feedback, changes to the assessment of the laboratory program may also have contributed to this outcome. Not surprisingly, student ratings of the usefulness of the learning resources reflect an assessment-driven strategy: studying the prescribed text and completing the lecture topic quizzes were rated as most useful. MPL was rated as the next most useful resource, and was seen as helpful in providing feedback and identifying concepts needing further study. We also noted that lecture attendance rates declined over the course of the semester, a problem that is becoming a significant issue across many institutions (Kelly, 2012).
In the current context it is difficult to fully assess the impact of the blended learning approach to lectures, because we do not have the infrastructure required to collect and match data on students who attend lectures (and/or listen to recorded lectures) with their use of the online learning resources and their subsequent academic performance on assessment tasks. As lecture attendance is related to motivation and academic outcomes (Kelly, 2012), future research needs to collect data on behaviour in both the face-to-face and online environments to fully explore the impact of blended learning approaches.

Students' perceptions of the lectures, communicated through open-ended comments on the evaluation survey, highlighted two key ways we can improve the face-to-face lecture experience: addressing the inconsistency of approach across lecturers for different topics, and making more explicit the connections between the lecture and online content. It is worth noting that the implementation of the blended lecture delivery model was supported by a project officer responsible for developing the online resources for the unit, working closely with the lecturers on the customization of the study plans and online quizzes. While this support was essential in alleviating the workload burden, it did mean the lecturers had varied engagement with the technology at a personal level, which may have been one barrier to the lecturers making better use of the reporting mechanisms in MPL to inform their lecture preparation.


There was also variability in how lecturers approached the task of lecture delivery in the face-to-face classes. As the revised lecture model assumes students develop prior knowledge of the content by completing the online formative assessment tasks, lecturers were encouraged to engage with students through discussion and hands-on activities as best suited their particular content area. The diversity of topic areas and teaching styles (there were five lecturers covering the six different topics) meant lecturers responded to this learning environment differently. One approach was to reduce coverage of definitions and basic concepts in favour of spending more time discussing research studies and applications; another was to re-structure the content into an 'overview' lecture plus an 'advanced issues' lecture, encouraging interaction with activities in the lecture and online discussion of case examples. Some lecturers, however, tended to follow the more traditional, information-delivery lecture approach. One barrier to achieving the full potential of the revised lecture model may be the degree of confidence lecturers had in the effectiveness of student engagement with the online resources, which would allow them to shift more completely away from didactic teaching of foundational concepts towards engaging students in advanced discussion. This caution would seem to be vindicated given the lack of engagement with the formative assessments as lecture preparation by the majority of students. These observations serve as a useful reminder that the aim of a transformational blended learning model is to integrate the online and face-to-face activities in a way that fundamentally changes the learning and teaching experience, which impacts both students' and teachers' conceptions and roles (Gerbic, 2011).

4.4. Limitations

Few studies are able to fully evaluate the implementation of a blended learning model, which requires a large-scale analysis of student engagement across the complete range of learning activities available in both the online and face-to-face environments (Lust et al., 2012). Such an analysis was beyond the scope of this study given the absence of individual lecture attendance data and the paucity of information available from student logs in the online environments. Future implementations need to recognize the value of building accessible and information-rich software interfaces to provide ongoing feedback to users (students and teachers). The use of learning analytics is an essential component underpinning both the implementation and evaluation of blended learning environments.

4.5. Conclusions

The new curriculum has already been successful in transforming our approach to lecturing from a traditional, information transmission model to one that uses lectures to engage students in discussion based on their learning in the online environment. The first implementation of the blended learning lecture model has had a significant impact on both student learning and satisfaction. Taking advantage of the formative feedback and retrieval practice opportunities offered by the online quizzes in MPL improved student scores on summative assessments.
These gains have been achieved alongside a reduction in face-to-face lecture hours, the organizational impact of which has been to improve the efficiency of lecture time. Further refinements to the model will rely on building platforms that support the inclusion of sophisticated learning analytics on student use and progress in the online environment. This level of information is required to provide personalized feedback to students to improve engagement, and to enable lecturers to use reports on online learning activities to focus and improve the quality of the face-to-face teaching and learning experience. Advances in the use of learning analytics at this level will also support future research into the effectiveness of the blended learning experience for different student cohorts.

Acknowledgements

The authors acknowledge the support of Pearson for providing student access to MyPsychLab, and would like to thank the staff for their assistance with the implementation. The authors also acknowledge the funding from the eEducation Centre, and the Faculty of Medicine, Nursing & Health Sciences at Monash University, to support the development of resources and implementation of the blended learning lecture delivery model in first year psychology units.

References

Angus, S. D., & Watson, J. (2009). Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set. British Journal of Educational Technology, 40(2), 255–272.
Bligh, D. A. (2000). What's the use of lectures? San Francisco: Jossey-Bass.
Carrillo-de-la-Peña, M. T., & Pérez, J. (2012). Continuous assessment improved academic achievement and satisfaction of psychology students in Spain. Teaching of Psychology, 39(1), 45–47. http://dx.doi.org/10.1177/0098628311430312.
Casillas, A., Robbins, S., Allen, J., Kuo, Y., Hanson, M., & Schmeiser, C. (2012). Predicting early academic failure in high school from prior academic achievement, psychosocial characteristics, and behaviour. Journal of Educational Psychology, 104(2), 407–420. http://dx.doi.org/10.1037/a0027180.
Clark, I. (2012). Formative assessment: assessment is for self-regulated learning. Educational Psychology Review, 24, 205–249. http://dx.doi.org/10.1007/s10648-011-9191-6.
Cramer, K. M., Ross, C., Orr, E. S., & Marcoccia, A. (2012). Making the grade: evaluating the construct validity of MyPsychLab as a measure of psychology mastery. Creative Education, 3(3), 293–295. http://dx.doi.org/10.4236/ce.2012.33046.
Davidson, L. K. (2011). A 3-year experience implementing blended TBL: active instructional methods can shift student attitudes to learning. Medical Teacher, 33(9), 750–753. http://dx.doi.org/10.3109/0142159X.2011.558948.
Department of Education, Employment and Workplace Relations. (2011). Students: Selected higher education statistics. Commencing students. Retrieved 24 May 2012, from http://www.deewr.gov.au/HigherEducation/Publications/HEStatistics/Publications/Pages/2010StudentFullYear.aspx.
Diseth, A., Pallesen, S., Brunborg, G. S., & Larsen, S. (2010). Academic achievement among first semester undergraduate psychology students: the role of course experience, effort, motives and learning strategies. Higher Education, 59, 335–352. http://dx.doi.org/10.1007/s10734-009-9251-8.
Dobson, J. L. (2008). The use of formative online quizzes to enhance class preparation and scores on summative exams. Advances in Physiology Education, 32(4), 297–302.
Dziuban, C. D., Hartman, J. L., & Moskal, P. D. (2004, March 30). Blended learning. Educause Center for Applied Research: Research Bulletin, 7, 1–12. Retrieved from http://net.educause.edu/ir/library/pdf/ERB0407.pdf.
Garrison, D. R., & Kanuka, H. (2004). Blended learning: uncovering its transformative potential in higher education. Internet and Higher Education, 7(2), 95–105.
Gerbic, P. (2011). Teaching using a blended approach – what does the literature tell us? Educational Media International, 48(3), 221–224.


Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: a review of the literature. Computers & Education, 57(4), 2333–2351. http://dx.doi.org/10.1016/j.compedu.2011.06.004.
Graham, C. R. (2006). Blended learning systems: definition, current trends, and future directions. In C. J. Bonk, & C. R. Graham (Eds.), Handbook of blended learning: Global perspectives, local designs. San Francisco, CA: Pfeiffer.
James, R., Bexley, E., & Shearer, M. (2009). Improving selection for tertiary education places in Victoria. Paper prepared for the Joint Policy Unit on Youth Transition. Centre for the Study of Higher Education, The University of Melbourne. Retrieved from https://www.skills.vic.gov.au/__data/assets/pdf_file/0015/124107/Tertiary-Selection-in-Victoria.pdf.
Kelly, G. E. (2012). Lecture attendance rates at university and related factors. Journal of Further and Higher Education, 36(1), 17–40. http://dx.doi.org/10.1080/0309877X.2011.596196.
Kibble, J. D. (2007). Use of unsupervised online quizzes as formative assessment in a medical physiology course: effects of incentives on student participation and performance. Advances in Physiology Education, 31(3), 253–260. http://dx.doi.org/10.1152/advan.00027.2007.
Kibble, J. D. (2011). Voluntary participation in online formative quizzes is a sensitive predictor of student success. Advances in Physiology Education, 35(1), 95–96. http://dx.doi.org/10.1152/advan.00053.2010.
Krause, K. L., & Coates, H. (2008). Students' engagement in first-year university. Assessment & Evaluation in Higher Education, 33(5), 493–505. http://dx.doi.org/10.1080/02602930701698892.
Lilienfeld, S. O., Lynn, S. J., Namy, L. L., & Woolf, N. J. (2011). Psychology: From inquiry to understanding. Boston: Pearson.
Lust, G., Juarez Collazo, N. A., Elen, J., & Clarebout, G. (2012). Content management systems: enriched learning opportunities for all? Computers in Human Behaviour, 28, 795–808.
McGee, P., & Reis, A. (2012). Blended course design: a synthesis of best practices. Journal of Asynchronous Learning Networks, 16(4), 7–22.
McKenzie, W. A., & Perini, E. (2011). Using online personalized study plans to address diversity and facilitate independent learning. Poster presented at the 14th Pacific Rim First Year Higher Education Conference, Fremantle, Western Australia, 28 June–1 July 2011.
McKenzie, K., & Schweitzer, R. (2001). Who succeeds at university? Factors predicting academic performance in first year Australian university students. Higher Education Research & Development, 20(1), 21–33. http://dx.doi.org/10.1080/07924360120043621.
Moravec, M., Williams, A., Aguilar-Roca, N., & O'Dowd, D. K. (2010). Learn before lecture: a strategy that improves learning outcomes in a large introductory biology class. CBE-Life Sciences Education, 9(4), 473–481.
Nicol, D. (2007). E-assessment by design: using multiple-choice tests to good effect. Journal of Further and Higher Education, 31(1), 53–64. http://dx.doi.org/10.1080/03098770601167922.
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 198–218.
Peat, M., & Franklin, S. (2002). Supporting student learning: the use of computer-based formative assessment modules. British Journal of Educational Technology, 33(5), 515–523. http://dx.doi.org/10.1111/1467-8535.00288.
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.
Rayner, G. M. (2008, 20–21 November). Using 'Mastering Biology' to formatively improve student engagement and learning in first year biology. Paper presented at the ATN Assessment Conference 2008: Engaging students in assessment, Adelaide, SA, Australia.
Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20–27.
Sharpe, R., Benfield, G., Roberts, G., & Francis, R. (2006, October). The undergraduate experience of blended e-learning: a review of UK literature and practice. The Higher Education Academy. Retrieved from http://www.heacademy.ac.uk/assets/documents/research/literature_reviews/blended_elearning_exec_summary_1.pdf.
Stull, J. C., Varnum, S. J., Ducette, J., Schiller, J., & Bernacki, M. (2011a). The effects of formative assessment pre-lecture online chapter quizzes and student-initiated inquiries to the instructor on academic achievement. Educational Research and Evaluation, 17(4), 253–262. http://dx.doi.org/10.1080/13803611.2011.6211756.
Stull, J. C., Varnum, S. J., Ducette, J., Schiller, J., & Bernacki, M. (2011b). The many faces of formative assessment. International Journal of Teaching and Learning in Higher Education, 23(1), 30–39.
Twigg, C. A. (2003). Improving learning and reducing cost: Lessons learned from Round 1 of the Pew Grant Program in Course Redesign. Retrieved from http://www.colorado.edu/physics/ScienceLearningCenter/TwiggImprovingLearning.pdf.
U.S. Department of Education. (2006). 2005–06 Integrated Postsecondary Education Data System (IPEDS). Retrieved from http://nces.ed.gov/programs/digest/d07/tables/dt07_275.asp.
Vaughan, N. (2007). Perspectives on blended learning in higher education. International Journal on E-Learning, 6(1), 81–94.
Walker, J. D., Cotner, S. H., Baepler, P. M., & Decker, M. D. (2008). A delicate balance: integrating active learning into a large lecture course. CBE-Life Sciences Education, 7, 361–367. http://dx.doi.org/10.1187/cbe.08-02-0004.
Willyard, C. (2001). Men: a growing minority? gradPSYCH. Retrieved from http://www.apa.org/gradpsych/2011/01/cover-men.aspx.
Winters, F. I., Greene, J. A., & Costich, C. (2008). Self-regulation of learning with computer-based learning environments: a critical analysis. Educational Psychology Review, 20, 429–444. http://dx.doi.org/10.1007/s10648-008-9080-9.
