The Generalist’s Corner
Using Quizzing to Assist Student Learning in the Classroom: The Good, the Bad, and the Ugly
Teaching of Psychology 2015, Vol. 42(1) 87-92. © The Author(s) 2014. Reprints and permission: sagepub.com/journalsPermissions.nav. DOI: 10.1177/0098628314562685. top.sagepub.com
Khuyen Nguyen1 and Mark A. McDaniel1
Abstract Recently, the testing effect has received a great deal of attention from researchers and educators who are intrigued by its potential to enhance student learning in the classroom. However, effective incorporation of testing (as a learning tool) merits a close examination of the conditions under which testing can enhance student learning in an authentic classroom setting, where a number of factors may deviate from the laboratory. Based on existing evidence, we highlight several studies that encompass a few of the complexities of using testing to enhance course performance, including situations in which quizzing is beneficial for summative test performance, contexts in which quizzing does not appear to be as beneficial, and instances in which quizzing may actually hamper final test performance.

Keywords: testing effect, laboratory, classroom, education
In educational settings, testing has typically been used as an instrument to evaluate learning objectives. Cognitive psychologists, however, have viewed testing as a technique for enhancing learning and retention. The "testing effect" refers to the robust finding that individuals remember tested material better than material that they have merely restudied (see Roediger & Karpicke, 2006a, for a review). Recent enthusiasm for the testing effect has led researchers to advocate testing as a potent tool to enhance learning in the classroom (McDaniel, Roediger, & McDermott, 2007; Pashler et al., 2007; Roediger, Agarwal, McDaniel, & McDermott, 2011). However, effective incorporation of testing (as a learning tool) into the classroom merits a close examination of the conditions under which testing can enhance student learning in an authentic classroom setting, where a number of factors may deviate from the laboratory.
Laboratory Versus Classroom Early laboratory studies on the testing effect used basic verbal learning materials (e.g., word lists and paired associates), which conveniently allowed for the initial tests and final tests to be identical. Although recent laboratory studies have shifted toward the use of more educationally relevant materials (e.g., texts), most studies have continued to use final test questions that are identical (or very similar) to the initial quiz questions (Agarwal, Karpicke, Kang, Roediger, & McDermott, 2008; Butler & Roediger, 2007, 2008). Most instructors, however, may not favor this method of testing in the classroom. Wooldridge, Bugg, McDaniel, and Liu (2014) conducted an online survey of
introductory psychology instructors who implemented quizzing in their courses. Wooldridge et al. found that the majority of instructors reported that they were unlikely to use final test questions that were identical to initial quiz questions. This finding suggests a discrepancy between laboratory research on the testing effect and how instructors typically implement quizzing in the classroom. Indeed, most instructors surveyed favored using different final test questions that probe the quizzed information. Specifically, instructors indicated that they commonly include final test questions that cover similar concepts but are framed in different contexts. Accordingly, it is important to examine whether the testing effect can be obtained when final test questions are not identical to initial quiz questions. Subsequently, we highlight some of the lessons from the handful of classroom experiments examining this issue. Additionally, laboratory studies allow for more control over study time and over the delay between studying and testing. This has typically resulted in experimenter-paced study phases in order to equate exposure to the study materials between the testing and control conditions. Furthermore, the retention interval is normally no longer than a week. In realistic educational settings, however, students can study whenever they want and
1 Washington University, St. Louis, MO, USA

Corresponding Author: Khuyen Nguyen, Department of Psychology, Washington University, 1 Brookings Drive, St. Louis, MO 63130, USA. Email: [email protected]
for as long as they want. Accordingly, students often cram right before the exam, and laboratory research has shown that testing produces null or negative memory effects when participants take the final test immediately after studying (Roediger & Karpicke, 2006b). Finally, another factor that must be considered when examining the effectiveness of testing in the classroom is test anxiety. Most students do not enjoy taking tests, and research has shown that test anxiety can lead to decrements in performance (Chapell et al., 2005; Zeidner, 1998). Furthermore, certain groups of students may be more prone to test anxiety than others (e.g., Steele & Aronson, 1995). Thus, the introduction of frequent quizzing may raise anxiety levels in students and perhaps hinder performance (although with frequent quizzing, students might become more relaxed with tests and less anxious; see Agarwal et al., 2014, for findings in middle school classrooms). For these reasons, it is imperative to demonstrate that testing effects can occur in authentic classrooms where there is variability on many factors that are typically controlled in laboratory studies. In this column, based on available evidence, we will highlight the good—situations in which quizzing is beneficial for summative test performance, the bad—contexts in which quizzing does not appear to be as beneficial, and the ugly—instances in which quizzing may actually hamper final test performance.
The Good As noted earlier, most of the laboratory research on testing has used final test items that are identical to the quiz items. Some classroom investigations of the testing effect have followed this methodology and yielded similar positive results. For instance, in a brain and behavior course, students who took unsupervised online quizzes showed better exam performance for identical questions than students who only received reexposure to the material (McDaniel, Wildman, & Anderson, 2012; see also Daniel & Broida, 2004). Test-enhanced performance on identical questions has also been observed in middle school classrooms (Carpenter, Pashler, & Cepeda, 2009; Roediger et al., 2011). Lyle and Crawford (2011) found that the testing effect persisted when the question format changed between quizzes and a subsequent exam. Students enrolled in a psychology statistics course took short-answer quizzes on lecture material at the end of each class period. These questions were later rephrased into multiple-choice questions on an exam. Thus, the wording on the quiz and exam items was not identical but was closely paraphrased. Lyle and Crawford found that students answered more questions correctly in the section of the exam that included paraphrased quiz questions. Less research has examined whether testing can promote performance for transfer questions (see Carpenter, 2012, for a review). That is, can testing confer benefits for final test questions that are related, but not identical, to the quiz questions? The answer likely depends on the nature of the similarity between the quiz and exam questions.
One type of similarity is straightforward associative similarity, in which the identical information (e.g., an A ↔ B association, such as a particular kind of axon paired with a particular neurotransmitter) is tested in one associative direction on the quiz (A → ?) and in the reverse direction (B → ?) on the final test. Several classroom experiments have shown a testing effect when quiz and exam questions reflect such similarity. For instance, in a brain and behavior course, students quizzed with the question, "all preganglionic axons, whether sympathetic or parasympathetic, release _________ as a neurotransmitter" performed better on a test item, "all _________ axons, whether sympathetic or parasympathetic, release acetylcholine as a neurotransmitter," than students who did not answer this quiz item (McDaniel, Anderson, Derbish, & Morrisette, 2007; see McDaniel, Thomas, Agarwal, McDermott, & Roediger, 2013, for similar findings in a middle school science class). This finding is sensible, given that basic laboratory research indicates that testing improves learning and retention of associations relative to additional study of the material (e.g., Carpenter, Pashler, & Vul, 2006; Karpicke & Roediger, 2008; Rohrer, Taylor, & Sholar, 2010). Although many psychology teachers want their students to acquire basic facts, concept terms, and definitions, teachers also often want their students to understand the instantiation of these concepts in a variety of contexts. This learning objective spawns another type of quiz-test item similarity: A concept can be quizzed in one application and then tested within the context of a different application. The available classroom evidence suggests that quizzing can also enhance summative test performance under these circumstances.
In a psychology course on memory, for example, Glass (2009) found that when the quiz questions (on three different quizzes) varied the particular illustration of the concept, test performance improved relative to when the quiz questions repeatedly used the same illustration. For instance, "semantic priming" was illustrated differently across the following three multiple-choice quizzes: In Quiz 1, the stem read, "The word 'cow' will be read fastest when preceded by the word . . ." ("mammal" was the correct answer); in Quiz 2, the stem read, "The word 'tulip' will be read fastest if preceded by the word . . ." ("flower" was the answer); and in Quiz 3, the stem read, "The word 'cobra' will be read fastest if it is preceded by the word . . ." ("snake" was the answer). The final exam question reflected yet a different illustration of semantic priming: "Consider a block of three trials in a word-nonword discrimination task. In each case, on the third trial, the observer sees DOG. At the end of which block will DOG be recognized the fastest?" This classroom study, however, did not include a no-quiz control condition. An experiment in a middle school science course that did include a no-quiz control found that quizzing concepts with one application promoted better performance on a test of the concepts using a different application, relative to the concept not being quizzed (McDaniel et al., 2013). Taken together, these findings suggest that quizzing in the classroom can produce positive benefits on summative exams. More specifically, quizzing can effectively help students learn fundamental terms and concepts in a given course.
Furthermore, providing application quiz questions (i.e., illustrating how concepts are instantiated) can help students understand the concepts and apply them to novel contexts on later tests.
The Bad In the previous section, we highlighted two types of quiz-test similarities that can produce positive benefits in the classroom. However, there is another type of similarity that does not confer the same testing benefits. This type of similarity, which is not uncommon with ancillaries from quiz and test banks provided by textbook companies, is one in which the quiz and exam items are related topically but do not focus on the same concept or fact. For instance, this can occur when the quiz and test questions overlap in terms of covering material from the same topic headings in the book (but not the same concept or theory). Wooldridge et al. (2014) investigated this situation in a laboratory experiment. Participants studied a section of an introductory biology textbook chapter, took two quizzes, and then returned 48 hr later to take a final test that consisted of some questions that were related topically to the quiz questions (e.g., convergent evolution) but focused on different concepts (e.g., the quiz item "Convergent evolution can occur only when two species ______" [Answer: evolve under similar selective forces] was followed by the exam question, "Penguins and dolphins have flippers but do not have a common ancestor. Their flippers are ______" [Answer: analogous structures]). No testing effect emerged here. This experiment, however, may have limited generalizability to the classroom, in part because, as mentioned earlier in this article, participants were not allowed to restudy the material (especially after taking the quizzes). Accordingly, in a second experiment designed to better approximate the classroom, participants had an opportunity to restudy the text after taking the quiz. Once again, though, no influence of testing (quizzing) emerged. Students apparently restricted their restudy to the particular concepts targeted by the quiz. A similar quasi-experiment by Mayer et al.
(2009) in an educational psychology class also showed no testing effect when the quizzes (which were not accompanied by extensive discussion) and exam questions targeted the same domain (i.e., theories of transfer) but different constructs (i.e., different theories of transfer). Importantly, a testing effect occurred when the instructor included discussion, integrated with interactive-response-system quizzes, about why a particular alternative was correct and other potential answers were incorrect. Thus, when quizzing alone is implemented to promote learning (i.e., to improve exam performance), it appears that instructors need to engineer the quiz and test questions to target the same concepts, as is the case in the laboratory-based literature. This potential limitation suggests that instructors should not expect that willy-nilly use of quizzing programs provided by textbook companies will necessarily be an effective tool for enhancing learning in the classroom. We expand on this point next.
The Ugly In the experiments that we have described thus far, the experimenters or instructors usually developed the quizzes and tests (and in many of the classroom experiments, research assistants provided the instructors with help). Many instructors, however, find it convenient and time efficient to utilize the quizzing ancillaries and the test banks provided by textbook companies. These supplemental ancillaries and test banks are not necessarily coordinated to ensure the types of similarity of quiz and exam items that afford test-enhanced learning. As a result, the similarity between quiz items and exam items may be haphazard. Some of the quiz items and exam items may be closely related, but many may share similarity only in terms of the general domain targeted (as described earlier). The implication is that haphazard deployment of quizzing ancillaries and test bank questions may not produce the intended benefit of quizzing (i.e., improved exam performance). To examine this possibility, we (Nguyen and McDaniel) conducted a laboratory experiment using published course materials (textbook chapter, online quizzing ancillary, and test-bank questions); we found no net gain on a final exam for students who took the quizzing program compared to students who were instructed to highlight while studying. In examining the haphazard relation between quiz and exam items, we observed that for the quiz and exam items that targeted the same concept, there was the expected advantage of quizzing. However, for another subset of items, the test items were examples of concepts taken directly from the text (text-verbatim questions), whereas the quiz items illustrated a new example of the concept. For this arrangement of quiz and exam items, the quizzed students performed worse than the highlighting students.
We are unsure why this negative testing effect occurred (or even whether it is reliable), but the tentative suggestion is that there may be situations in which quizzing interferes with information presented in the text. More generally, the take-home message is that when quiz and test items are haphazardly sampled, teachers must be cautious in assuming that testing will confer benefits for exam performance. (Of course, students still learn more from taking the quizzes than from not taking the quizzes, which is clearly an outcome all teachers would welcome.) Little, Storm, and Bjork (2011) examined another situation in which testing may hamper performance for related information. Participants read two short passages and then took a quiz consisting of fill-in-the-blank questions for one of the passages. After a short delay, the students took a final test that consisted of some identical questions, some related questions, and some questions about the passage that was not initially tested. Little et al. observed a testing effect for the identical questions. However, quizzing had a negative effect on performance for the related questions, which is consistent with the literature on retrieval-induced forgetting (Anderson, Bjork, & Bjork, 1994). These results suggest that quizzing may strengthen memory for some information at the expense of related information.
Taken together, these findings suggest that there are instances in which quizzing might actually dampen subsequent performance (see also Roediger & Marsh, 2005). Nevertheless, teachers can likely avoid these negative effects of quizzing by carefully considering the type of information they include on the quizzes and on the subsequent exams (see Chan, McDermott, & Roediger, 2006; Little, Bjork, Bjork, & Angello, 2012, for details).
Some Final Observations When translating the impressive benefits of testing reported from laboratory experiments to possible classroom use, another issue merits attention. In the classroom, many instructors implement an array of pedagogical devices and assign study activities that are not present in laboratory control conditions (e.g., other study activities and rereading; but see Karpicke & Blunt, 2011, for an interesting laboratory exception). Does testing promote additional learning beyond the effective teaching methods already implemented by instructors? Saville, Pope, Lovaas, and Williams (2012) examined whether the addition of quizzes to interteaching, an effective teaching method, would produce a testing effect. Interteaching requires students to complete a preparation guide before class and discuss their answers with another student in class, followed by a brief clarifying lecture from the instructor (see Boyce & Hineline, 2002, for a more detailed discussion of interteaching). Saville et al. found that the addition of postdiscussion quizzes to this interteaching method did not enhance exam performance relative to interteaching alone. Their results suggest that in situations where effective teaching methods are already implemented, quizzing may not confer the robust benefits that are typically found in the laboratory. When students already engage in learning strategies that draw on the same underlying processes activated by testing (generation, retrieval, and application), testing may be redundant with the existing instructional context. From our point of view, however, testing can be an effective technique for stimulating active engagement with the course material while requiring little disruption of, or revision to, the instructors' preferred teaching agendas and techniques.
Indeed, giving students quiz items that require application, synthesis, and evaluation may promote the kind of comprehensive learning that many teachers have as an objective in their courses. For instance, in an introductory biology course, a section in which all quizzes and exam questions required application, evaluation, or synthesis resulted in significantly higher final exam scores for both high-level (application and evaluation) and low-level (remembering and understanding) questions than a section in which all quizzes and exam questions targeted only remembering (Jensen, McDaniel, Woodard, & Kummer, 2014). Further, another benefit of quizzing that deserves mention is the indirect benefit of getting students to complete their assigned readings. Sikorski et al. (2002) conducted a large-scale survey of undergraduates at large southeastern
universities enrolled in introductory psychology courses and found that only 31% of the students actually bought the required textbook. Of those who bought the textbook, 80% reported spending less than 3 hr per week reading their texts, and 60% of the students reported not reading the texts until 3 days prior to the exam. This is consistent with Burchfield and Sappington's (2000) finding that reading compliance has declined: Over a 16-year period, they measured reading compliance by giving surprise quizzes on material covered in the assigned readings and found that performance on these quizzes progressively deteriorated. In situations like these, quizzing may prove to be especially effective for helping students learn the important concepts covered in the class (Lyle & Crawford, 2011). Furthermore, it may have an indirect effect of getting students to do their assigned readings and engage in spaced practice rather than cramming in the few days before an exam (Clump, Bauer, & Bradley, 2004; Johnson & Kiviniemi, 2009; Leeming, 2002; Narloch, Garbin, & Turnage, 2006).
Conclusion The available evidence suggests that quizzing, when used appropriately, can be a potent strategy for assisting student learning in the classroom. Quizzing clearly can assist students in remembering important terms or concepts that are targeted by the quiz. Quizzing also appears to promote flexible use of that information (on summative exams) when quiz items focus on having students apply, evaluate, and synthesize concepts and findings. One cautionary note, however, is that haphazardly selecting quiz questions from a test bank can have less impact on exam performance than intended. Nevertheless, we believe that the benefits of quizzing, both direct and indirect, outweigh any potential negative consequences. Declaration of Conflicting Interests The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: The Nguyen and McDaniel unpublished experiment described herein used materials related to an e-textbook project with Worth Publishers on which McDaniel has been consulting.
Funding The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The first author was supported by funding from the National Science Foundation Graduate Fellowship Program.
References Agarwal, P. K., D’Antonio, L., Roediger, H. L., III, McDermott, K. B., & McDaniel, M. A. (2014). Classroom-based programs of retrieval practice reduce middle school and high school students’ test anxiety. Journal of Applied Research in Memory and Cognition, 3, 131–139. Agarwal, P. K., Karpicke, J. D., Kang, S. H. K., Roediger, H. L., III, & McDermott, K. B. (2008). Examining the testing effect with
open- and closed-book tests. Applied Cognitive Psychology, 22, 861–876. Anderson, M. C., Bjork, R. A., & Bjork, E. L. (1994). Remembering can cause forgetting: Retrieval dynamics in long-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1063–1087. Boyce, T. E., & Hineline, P. N. (2002). Interteaching: A strategy for enhancing the user-friendliness of behavioral arrangements in the college classroom. The Behavior Analyst, 25, 215–226. Burchfield, C. M., & Sappington, J. (2000). Compliance with required reading assignments. Teaching of Psychology, 27, 56–60. Butler, A. C., & Roediger, H. L., III. (2007). Testing improves long-term retention in a simulated classroom setting. European Journal of Cognitive Psychology, 19, 514–527. Butler, A. C., & Roediger, H. L., III. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36, 604–616. Carpenter, S. K. (2012). Testing enhances the transfer of learning. Current Directions in Psychological Science, 21, 279–283. Carpenter, S. K., Pashler, H., & Cepeda, N. J. (2009). Using tests to enhance 8th grade students’ retention of U.S. history facts. Applied Cognitive Psychology, 23, 760–771. Carpenter, S. K., Pashler, H., & Vul, E. (2006). What types of learning are enhanced by a cued recall test? Psychonomic Bulletin & Review, 13, 826–830. Chan, J. C. K., McDermott, K. B., & Roediger, H. L., III. (2006). Retrieval-induced facilitation: Initially nontested material can benefit from prior testing of related material. Journal of Experimental Psychology: General, 135, 553–571. Chapell, M. S., Blanding, Z. B., Silverstein, M. E., Takahashi, M., Newman, B., Gubi, A., & McCann, N. (2005). Test anxiety and academic performance in undergraduate and graduate students. Journal of Educational Psychology, 97, 268–274. Clump, M. A., Bauer, H., & Bradley, C. (2004).
The extent to which psychology students read textbooks: A multiple class analysis of reading across the psychology curriculum. Journal of Instructional Psychology, 31, 227–232. Daniel, D. B., & Broida, J. (2004). Using web-based quizzing to improve exam performance: Lessons learned. Teaching of Psychology, 31, 207–208. Glass, A. L. (2009). The effect of distributed questioning with varied examples on exam performance on inference questions. Educational Psychology, 29, 831–848. Jensen, J. L., McDaniel, M. A., Woodard, S. M., & Kummer, T. A. (2014). Teaching to the test . . . or testing to teach: Exams requiring higher order thinking skills encourage greater conceptual understanding. Educational Psychology Review, 26, 307–329. Johnson, B. C., & Kiviniemi, M. T. (2009). The effect of online chapter quizzes on exam performance in an undergraduate social psychology course. Teaching of Psychology, 36, 33–37. Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331, 772–775. Karpicke, J. D., & Roediger, H. L., III. (2008). The critical importance of retrieval for learning. Science, 319, 966–968.
Leeming, F. C. (2002). The exam-a-day procedure improves performance in psychology classes. Teaching of Psychology, 29, 210–212. Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23, 1337–1344. Little, J. L., Storm, B. C., & Bjork, E. L. (2011). The costs and benefits of testing text materials. Memory, 19, 346–359. Lyle, K. B., & Crawford, N. A. (2011). Retrieving essential material at the end of lectures improves performance on statistics exams. Teaching of Psychology, 38, 94–97. Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., . . . Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34, 51–57. McDaniel, M. A., Anderson, J. L., Derbish, M. H., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19, 494–513. McDaniel, M. A., Roediger, H. L., III, & McDermott, K. B. (2007). Generalizing test-enhanced learning from the laboratory to the classroom. Psychonomic Bulletin & Review, 14, 200–206. McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L., III. (2013). Quizzing in middle-school science: Successful transfer performance on classroom exams. Applied Cognitive Psychology, 27, 360–372. McDaniel, M. A., Wildman, K., & Anderson, J. L. (2012). Using quizzes to enhance summative-assessment performance in a web-based class: An experimental study. Journal of Applied Research in Memory and Cognition, 1, 18–26. Narloch, R., Garbin, C. P., & Turnage, K. D. (2006). Benefits of prelecture quizzes. Teaching of Psychology, 33, 109–112. Pashler, H., Bain, P. M., Bottge, B. A., Graesser, A., Koedinger, K., McDaniel, M., & Metcalfe, J. (2007).
Organizing instruction and study to improve student learning (NCER 2007-2004). Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education. Roediger, H. L., III, Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382–395. Roediger, H. L., III, & Karpicke, J. D. (2006a). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181–210. Roediger, H. L., III, & Karpicke, J. D. (2006b). Test-enhanced learning: Taking tests improves long-term retention. Psychological Science, 17, 249–255. Roediger, H. L., III, & Marsh, E. J. (2005). The positive and negative consequences of multiple-choice testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1155–1159. Rohrer, D., Taylor, K., & Sholar, B. (2010). Tests enhance the transfer of learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 233–239. Saville, B. K., Pope, D., Lovaas, P., & Williams, J. (2012). Interteaching and the testing effect: A systematic replication. Teaching of Psychology, 39, 280–283.
Sikorski, J. F., Rich, K., Saville, B. K., Buskist, W., Drogan, O., & Davis, S. F. (2002). Student use of introductory texts: Comparative survey findings from two universities. Teaching of Psychology, 29, 312–313. Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69, 797–811.
Wooldridge, C. L., Bugg, J. M., McDaniel, M. A., & Liu, Y. (2014). The testing effect with authentic educational materials: A cautionary note. Journal of Applied Research in Memory and Cognition, 3, 214–221. Zeidner, M. (1998). Test anxiety: The state of the art. New York, NY: Plenum.