Technology and Education
How Important Is Computer Testing Experience in Preparing Students to Take the PANCE?
Richard Dehn, MPA, PA-C, Feature Editor
One parameter of physician assistant program evaluation is the performance of graduates on the Physician Assistant National Certifying Exam (PANCE). Programs have an interest in how their students perform on the PANCE for several reasons: first, their students must pass the PANCE to enter practice; second, success on the PANCE indicates that a program is admitting and educating individuals who demonstrate a minimum level of competency in basic medical knowledge as assessed by a multiple-choice exam; and third, a program’s success on the PANCE affects its ability to attract future applicants. And while PANCE performance is only one parameter of program self-evaluation, because PANCE data are “objective” by their very nature, programs attach a great deal of importance to changes in the PANCE pass rate. A program’s PANCE record is routinely scrutinized during accreditation reviews, and schools with high pass rates feature those rates in their advertising and recruitment materials. Because we lack other objective measures of program strength, graduate performance on the PANCE has, in the opinion of some, become a de facto way to compare programs.1

In a 2001 survey by Blessing et al aimed at determining the factors most frequently considered important in ranking programs, program directors agreed most strongly that faculty-to-student ratio, graduation rate, student attrition rate, and PANCE scores were important program ranking characteristics.2 Many PA authors have opined on this issue, generally in editorials, and PANCE pass rates are invariably included in the discussions.3-5

Given that the PANCE exerts a powerful influence on PA education, most programs have traditionally dedicated some degree of curricular activity toward preparing their students for the exam. This has typically meant administering examinations, particularly in the clinical year, in multiple-choice formats similar to the PANCE. In the era of the paper-and-pencil PANCE, such preparation mostly meant creating paper-and-pencil multiple-choice examinations, a process well within the range of technical skills possessed by PA faculty. However, it has been several years
since the PANCE was converted from a paper-and-pencil exam to a computer-administered process, and providing students with an experience similar to the current PANCE is technically more difficult and likely more expensive. Given these increased challenges, the question becomes: How important is it to provide our students with computer-based testing experience as a means of preparation for the PANCE?

Why not provide computer-based testing experience for PANCE preparation? The two major reasons are cost and the technical skills required of faculty. Paper-and-pencil multiple-choice testing is cheap and requires few technical skills beyond typing or word processing. Computer-based testing, on the other hand, requires test authoring and delivery software, a large number of computers for students, and support from individuals with the technical skills to manage a computer network. Moving from a paper-and-pencil testing format to a computer-based system is therefore expensive, both financially and in personnel time, at least during start-up.
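For readers wondering what “test authoring and delivery software” amounts to at its very simplest, the sketch below is a purely hypothetical illustration; the item, the names, and the console-based delivery are assumptions made for the example, not features of any actual testing product. It shows the minimum such software must do: store an item bank and run a delivery and scoring loop. The distance between this toy and a secure, networked, proctored system with item banking, timing, and reporting is where the financial and personnel costs described above arise.

```python
import random

# Hypothetical item format (illustrative only): a stem, answer options,
# and the index of the keyed (correct) option.
ITEM_BANK = [
    {
        "stem": "Which electrolyte abnormality classically produces peaked T waves?",
        "options": ["Hypokalemia", "Hyperkalemia", "Hyponatremia", "Hypercalcemia"],
        "key": 1,
    },
    # In practice, items would be authored and stored in a file or database.
]

LETTERS = "ABCD"


def deliver_exam(items, shuffle=True):
    """Present each item at the console and return the number answered correctly."""
    if shuffle:
        items = random.sample(items, len(items))  # randomize item order
    correct = 0
    for number, item in enumerate(items, start=1):
        print(f"\nQuestion {number}: {item['stem']}")
        for letter, option in zip(LETTERS, item["options"]):
            print(f"  {letter}. {option}")
        response = input("Your answer (A-D): ").strip().upper()
        # Blank or invalid responses are simply scored as incorrect.
        chosen = LETTERS.find(response[:1]) if response else -1
        if chosen == item["key"]:
            correct += 1
    return correct


if __name__ == "__main__":
    score = deliver_exam(ITEM_BANK)
    print(f"\nScore: {score} of {len(ITEM_BANK)} items correct.")
```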
So, the essential question that may give us guidance is whether giving our students experience with computer-based testing will improve their performance on the PANCE. In other words, does having a computer-based testing process in your program contribute to the success of graduates on the PANCE? Or are there other alterable factors that are more important in helping our students?

Early research on the reliability of computer-based testing investigated whether test performance differed between questions administered by paper and pencil and the same questions administered by computer (the question of equivalence). Wise et al reported that small decrements in performance on computer-based testing compared with traditional testing were of concern because they were likely caused by poorer performance in a small group of individuals; a small overall difference in group scores may conceal individuals who perform substantially differently under the two systems.6 This was thought to be most likely due to computer anxiety or inexperience with computers; however, subsequent research showed that neither individuals made anxious by computers nor those lacking computer experience performed any more poorly on computer-based exams than with traditional testing methods.6

Differences between computer testing and traditional testing were most notable when the characteristics of the testing experiences were fundamentally different: for example, when students taking a computer-based test could not leave questions unanswered to return to later, could not review items already answered, and could not change their answers after initially answering them, all of which are possible on a paper-and-pencil test. In a study comparing traditional testing with computer-based testing in which those three actions were either allowed or not allowed, students performed similarly on the two formats when the computer-based testing most closely resembled the traditional format; when skipping questions, reviewing questions, and changing answers were not allowed, performance on computer-based testing was lower.7 However, this is not really a factor in today’s computer-based testing systems, as most very closely emulate the traditional paper-and-pencil structure and are widely acknowledged to assess knowledge as accurately as paper-and-pencil tests.8
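To make that design difference concrete, the sketch below is a hypothetical illustration (not code from the PANCE, ePACKRAT, or any commercial engine, and assuming items in the same format as the earlier sketch) of a delivery loop that reproduces the three paper-and-pencil behaviors at issue: leaving an item unanswered to return to later, reviewing items already presented, and changing an answer at any time before submission. The navigation model, not the computer itself, is what determines how closely the experience resembles a traditional test, which is the sense in which today’s systems emulate the paper-and-pencil structure.

```python
def administer(items):
    """Console sketch of an exam session; returns {item index: chosen letter or None}."""
    answers = {i: None for i in range(len(items))}
    current = 0
    while True:
        item = items[current]
        marked = answers[current] or "unanswered"
        print(f"\nItem {current + 1} of {len(items)} (current answer: {marked})")
        print(item["stem"])
        for letter, option in zip("ABCD", item["options"]):
            print(f"  {letter}. {option}")
        command = input("A-D answer, N next, P previous, G go to item, S submit: ").strip().upper()
        if command in ("A", "B", "C", "D"):
            answers[current] = command            # answer, or change a previous answer
        elif command == "N":
            current = (current + 1) % len(items)  # move on, leaving the item blank if unanswered
        elif command == "P":
            current = (current - 1) % len(items)  # go back and review an earlier item
        elif command == "G":
            target = input(f"Go to item (1-{len(items)}): ").strip()
            if target.isdigit() and 1 <= int(target) <= len(items):
                current = int(target) - 1
        elif command == "S":
            return answers                        # submit; unanswered items remain None
```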
Common sense would tell us that having our students practice on computer-based systems would likely help them perform better on the PANCE. A long-accepted principle of test preparation, that we need to expose students to the environment and conditions they are likely to encounter as a sort of desensitization process, is often heard among educators. In fact, some private for-profit test preparation programs make familiarizing the individual with the likely testing experience a major selling point. If we don’t give our students experience in computer-based testing, is it possible that students who fail the PANCE would consider us liable for the cost of their failure? And even though most of our students now arrive with better computer skills than some of their faculty, there are occasional computer-phobic and computer-illiterate students who would likely be identified by program-based computer testing.

However, it probably doesn’t matter whether computer testing measures knowledge differently than traditional testing, because computer administration of high-stakes tests like the PANCE is here to stay. Fortunately, a computer-based testing experience is currently available for your students in the form of ePACKRAT, which is now administered over a Web-based computer-testing engine. Additionally, institutions increasingly support course management software such as WebCT and Blackboard, both of which contain test bank and administration programs for Web-based testing. Several proprietary test administration programs are also available, both over a secure network and via the Internet. As these products evolve, they require less technical expertise to operate, and it becomes more reasonable for PA programs to consider them as part of their testing mix.

Since our high-stakes testing processes such as the PANCE have converted to a computer-based format, questions as to whether paper-and-pencil testing and computer-based testing are equivalent are now moot. Today’s students will not feel prepared to take the PANCE if their PA education does not provide them with computer-based testing experience, regardless of what the research says. We need to make sure we admit computer-literate students to our programs; otherwise, we must be prepared to teach computer skills. With the increasing availability of products and systems that can deliver computer-based testing, all programs should consider providing some form of computer-based testing experience to their students.
References
1. Cawley JF. Physician assistant education and PANCE performance: a passing controversy? Perspective on Physician Assistant Education. 2002;13:79.
2. Blessing JD, Hooker R, Jones PE, Rahr RR. An investigation of potential criteria for ranking physician assistant programs. Perspective on Physician Assistant Education. 2001;12:160-166.
3. Dehn R. Ranking schools. The Clinical Advisor. 1999;2:98.
4. Blessing JD. Who is #1?! (editorial). Physician Assistant. 1998;22:18, 20.
5. Pedersen D. Program ranking. AAPA News. April 15, 1998.
6. Wise SL, Barnes LB, Harvey A, Plake BS. The effects of computer anxiety and computer experience on the computer-based achievement test performance of college students. Applied Measurement in Education. 1989;2:235-241.
7. Harvey AL. Differences in Response Behavior for High and Low Scorers as a Function of Item Presentation on a Computer-Assisted Test. Unpublished doctoral dissertation, University of Nebraska, Lincoln; 1987.
8. Bugbee AC Jr. The equivalence of paper-and-pencil and computer-based testing. J Res Computing Education. 1996;28:282-299.