Electronic Rating of Objective Structured Clinical Examinations: Mobile Digital Forms Beat Paper and Pencil Checklists in a Comparative Study

Felix M. Schmitz 1, Philippe G. Zimmermann 1, Kevin Gaunt 2, Markus Stolze 2, and Sissel Guttormsen Schär 1

1 Institute of Medical Education, University of Bern, Konsumstrasse 13, 3010 Bern, Switzerland
{felix.schmitz,philippe.zimmermann,sissel.guttormsen}@iml.unibe.ch
2 Institute for Software, University of Applied Sciences, Oberseestrasse 10, 8640 Rapperswil, Switzerland
{kevin.gaunt,markus.stolze}@hsr.ch

Abstract. During a two-day objective structured clinical examination (OSCE), we compared two types of checklists for student performance ratings: paper and pencil checklists vs. digital checklists on iPads. Several subjective and objective measures from 10 examiners were collected and analyzed. The data showed that the digital checklists were perceived as significantly more usable and less exertive, and they were also preferred in overall ratings. Assessments completed with digital checklists contained no missing items, while assessments completed with paper checklists contained more than 8 blank items on average. Finally, checklist type did not influence assessment scores, even though more item-choice changes were made when using the digital checklists.

Keywords: electronic assessments, OSCE, iPad, checklists, data quality, effort, perceived usability, preference.

1 Introduction

In 1975, Harden and his colleagues introduced the Objective Structured Clinical Examination (OSCE), in which medical students rotate through a series of stations in the hospital ward (Fig. 1) [1]. At these stations, the candidates are asked to perform a procedure with a standardized patient (e.g. taking an anamnesis) or with an anatomical model (e.g. a clinical examination). The student's performance is observed and scored by an examiner using a standardized checklist, and the students' final scores are calculated from the score sheets handed in by every examiner. Since all students are assessed on identical content by the same examiners using predetermined guidelines, the OSCE can be an objective, valid and reliable method of assessment [1-3]. Today the OSCE is an established tool in the repertoire of clinical assessment methods at many medical schools around the world [2].


Fig. 1. An OSCE consists of a circuit of stations, usually staffed with an examiner and a standardized patient. Students move from station to station, where they perform medical procedures.

The main drawback of the OSCE is "the increased preparation required" ([1], p. 451): case-based tasks and the corresponding checklists must be created (iteratively), standardized patients and examiners must be recruited, selected and carefully trained, students must be randomly assigned to rotation groups, and so on. Furthermore, carrying out the exam is time and resource consuming. The packed schedule requires the examiners' full attention and time, during which they are absent from the clinic. Additionally, the process of extracting the examiners' evaluations is very time consuming: up to 60% of the paper checklists must be manually verified in order to ensure that the assessments contain no missing data. The checklists then have to be transformed into digital representations by scanning. This error-prone transformation from paper-based information to digital data is the standard procedure used today. Summing up, the OSCE is a time- and resource-intensive form of examination (e.g. [2], [4]). This unsatisfactory process motivated us to start a project titled Electronic Registration of Objective Structured Clinical Examination (e-OSCE), which aims for a more efficient and entirely digital preparation, execution and analysis process for OSCEs (Fig. 2) [5], [6].

Train Actors & Examinors

Schedule Exams

Execute Exam Print Eval Forms

Register Eval‘s

Sign Eval. Form

Publish Exam Results Scan Form

Correct Form

Analyze Results

Time: 26 Days (Average)

OSCE Examination Mangement: Optimized Process (No Media Break) Prepare Exam Create Exam Material

Train Actors & Examinors

Schedule Exams

Execute Exam Register Eval‘s

Publish Exam Results Analyze Results

Sign Eval. Form

Time: 12 Days (Average)

Fig. 2. The e-OSCE project aims to optimize the preparation and analysis process and to facilitate scoring of the candidates' performances by introducing digital checklists running on iPads.


Our literature review revealed a few previous attempts to perform OSCE evaluations digitally [7-9]. These projects were promising with regard to effectiveness, efficiency, and user satisfaction. However, none of the investigated systems has been established as a tool for real exams so far, as these projects used devices (i.e. PDAs) that are outdated by today's standards. The current generation of tablet PCs is thin, light and cheap, which convinced us that it was worthwhile to revisit this challenge in our project.

The project went through several stages. The first stage included the development of an exam client interface for Apple iPhones. Our usability tests and in-depth interviews soon revealed that the display size was too small for the intended content. The second stage addressed this issue by developing a Windows tablet PC client. First experiences in a field test showed that this hardware form factor was better suited to displaying the medical checklists. However, the power consumption of the hardware available at that time (2010) was insufficient, and the test subjects deemed the devices too uncomfortable (due to weight and heat emission) to use for the intended time period. Hence, in the third and current stage of the e-OSCE project we switched to Apple iPads as client devices. We developed an exam client named OSCE-Eval, a Java-based server component for the delivery and reception of examination data, and a web-based exam management component. All of these components have been undergoing various laboratory and field tests [10].

Besides the technical maturity of the system concerning stability, speed and security, several subjective and objective characteristics of the client application are a major focus of the user tests mentioned above. Due to the examiners' high cognitive load during the exam and the long examination time, the system has to be usable with minimal effort and has to support the user throughout the examination process. Furthermore, a major goal of the project is to improve the unsatisfactory data quality of the paper-based evaluations. Hence, the digital checklists have to reduce or eliminate input errors by preventing missing, unreadable or ambiguous data input. Since the change from paper checklists to a digital evaluation process is challenging for the OSCE examiners, we are interested in how well they accept the new system in comparison to the paper version. In the following chapters we present the results of our study investigating subjective and objective characteristics of OSCE-Eval in comparison to paper-based checklists. The study was conducted during a real OSCE.
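To give a concrete picture of the kind of completeness check such a digital checklist enables, the following minimal sketch illustrates the underlying validation rule. It is written in Python purely for illustration; the actual OSCE-Eval client is an iPad application, and the class and field names used here are our own assumptions rather than part of the real system.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ChecklistItem:
    text: str
    choices: Tuple[str, ...]          # e.g. ("yes", "partial", "no")
    answer: Optional[str] = None      # None means the item is still blank

@dataclass
class Checklist:
    items: List[ChecklistItem] = field(default_factory=list)

    def blank_items(self) -> List[int]:
        """Indices of items that still need an answer (these would be highlighted)."""
        return [i for i, item in enumerate(self.items) if item.answer is None]

    def can_submit(self) -> bool:
        """A form may only be handed in once every item has been answered."""
        return not self.blank_items()

# Example: one item left blank -> submission is blocked, item 1 is flagged
form = Checklist([
    ChecklistItem("Asks about onset of pain", ("yes", "partial", "no"), "yes"),
    ChecklistItem("Auscultates the lungs", ("yes", "partial", "no")),
    ChecklistItem("Explains the working diagnosis", ("yes", "partial", "no"), "partial"),
])
print(form.blank_items())   # [1]
print(form.can_submit())    # False
```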

2 Hypotheses

As previously stated, the usability of OSCE-Eval is crucial for two reasons: to verify that the examiners are able to operate the application adequately and to minimize the cognitive load induced on them. Compared to the existing paper and pencil checklists (PCL), OSCE-Eval offers more guidance and support through on-screen messages, time tracking with audio-visual feedback, graphical accentuation of blank items, indication of already processed and upcoming forms, etc. (for an overview of the features, see [10]). Moreover, the application is built for touch screen devices (i.e. Apple iPads), which are "the most natural of all input devices" ([11], p. 387; see also [12]). In other words, touch screens are intuitive, but PCLs are self-explanatory instruments, too. We expect the benefits associated with OSCE-Eval to outweigh those of PCL with respect to perceived usability. This leads us to our first hypothesis (H1).

H1: Compared to PCL, the perceived usability of OSCE-Eval is more satisfying.

The examiners' tasks result in high mental effort during an OSCE day: they continuously observe and evaluate students' performances and therefore need to stay attentive for long time spans. Consequently, it is important to assist them as much as possible in order to keep them focused. The evaluation of a student's performance during an OSCE is similar to a repetitive non-sequential selection task, because candidates usually do not proceed in the same order as the checklist would indicate. OSCE-PCLs used at the medical faculty of the University of Bern are typically two to four pages long. Thus, finding the right item is a search task that regularly involves turning pages (PCL) or scrolling (OSCE-Eval). In this context, scrolling seems more convenient, because one does not have to search through a stack of several pages. Moreover, OSCE-Eval uses popovers (also known as callouts; see Fig. 3) to present the available choices for each checklist item (e.g. yes, partial and no). For non-sequential selection tasks, popovers have been found to deliver better performance and evoke more user confidence than inline radio buttons in electronic forms [13]. Obviously, popovers cannot be used in PCLs, leaving a checkmark design as the only alternative. Therefore, scrolling and the use of popovers are unique to OSCE-Eval and could result in lower mental effort. On the other hand, computer anxiety could be evoked when using OSCE-Eval; computer anxiety increases stress during the usage of digital devices (e.g. [14], [15]). However, following [16] we expect iPads to be inviting (playful) tools for the examiners and thus to decrease the likelihood of computer anxiety occurring. For this reason we postulate that the examiners will experience less mental effort when using OSCE-Eval.

H2: The experienced mental effort is lower when using OSCE-Eval than when using PCL.

In regard to data quality, missing data (i.e. blank items) are unlikely when using OSCE-Eval, because the system enables input validation and visual feedback for unprocessed checklist items. With PCL, this level of support is of course not possible. This leads us to our third hypothesis (H3).

H3: Using OSCE-Eval results in less missing data than using PCL.

The evaluation process must not be influenced by checklist type (objectivity). Nevertheless, although both checklist types consist of the identical set of items, we expect the checklist type to have an impact on scoring results, because the two systems obviously feel different and could therefore influence the examiners' scoring behavior (see H4).

H4: Evaluation results differ between checklist types.

Finally, we are interested in which checklist type (OSCE-Eval vs. PCL) the end users (i.e. the examiners) will prefer. We postulate that examiners prefer one of the two checklist types over the other (see H5).

H5: Preference for the two checklist types differs.

3 Method

3.1 Sample and Design

Participants were 10 physicians (2 women) from the medical faculty of the University of Bern acting as examiners in a two-day OSCE (5 participants per examination day). The examiners' ages ranged from 34 to 50 years, with a mean of 40.20 years (SD = 5.87). All examiners were fluent in German. To test our hypotheses, we conducted a within-subject study with the factor "checklist type" comprising two levels: (1) OSCE-Eval (running on an iPad) and (2) paper and pencil checklist (PCL). In order to avoid sequence effects, we counterbalanced the order of checklist types (see Procedure). Overall, the examiners completed a total of 240 evaluations.

3.2 Material

The checklists used in this study consisted of 36 items that had been used in previous OSCEs. Both checklist types featured the same items in the same order according to the same evaluation dimensions: anamnesis (15 items), exploration (10 items), diagnosis (5 items), communication (5 items), and overall impression (1 item). The checklist items were typeset in Helvetica 9 pt. for PCL (printed at 300 dpi) and Helvetica 13 pt. for OSCE-Eval (on a 132 ppi screen). The PCL was three pages long, and the available choices were presented as checkbox lists (Fig. 3). Depending on the iPad's orientation, OSCE-Eval displayed either 789 characters (portrait) or 607 characters (landscape) on screen. The available choices per item were presented using popovers (Fig. 3). All examiners operated OSCE-Eval on identical first-generation Apple iPad (2010) devices. To familiarize the examiners with the usage of OSCE-Eval (see Procedure), we prepared a 12-minute introductory video, available online at [10] (in German). As training material for working with the checklists, we also filmed two 5-minute videos showing either a "bad" or a "good" performance by a student (played by an actor).


Fig. 3. Schematic comparison of checklist types: PCL uses a checkbox design (top) and OSCE-Eval uses a popover design (bottom). The items were identical for OSCE-Eval and PCL.

3.3 Measurements

Subjective Usability. To test which checklist type was perceived to be more usable (see H1), the examiners were presented with an adapted version of the Post-Study System Usability Questionnaire (PSSUQ) [17]. The original PSSUQ is a 19-item instrument for assessing user satisfaction with system usability; the reliability of its overall scale has been found to be excellent (Cronbach's alpha = .97) [17]. We selected a subset of 16 PSSUQ items suitable for both OSCE-Eval and PCL and translated them into German. All items were 7-point Likert scales ranging from 1 = strongly disagree to 7 = strongly agree. An example item is: Overall, I am satisfied with how easy it is to use OSCE-Eval [PCL].

Experienced Mental Effort. To test how intensely the examiners experienced the effort of their task (observing and evaluating candidates' performances) (see H2), we used the German version of the Rating Scale Mental Effort (RSME) [18]. The German RSME is a 220 mm long one-item analogue graphic scale anchored at its end points with the values 0 and 220.

Between the end points, 7 verbal labels are displayed to enrich the scale semantically, ranging from barely exertive (at 20) to extremely exertive (at 205). Examiners were asked: Please indicate how exertive your task (observing and evaluating) was by marking a point on the continuum with a cross. We evaluated the examiners' input by measuring, with a ruler, the offset of the cross from 0 in millimeters. According to [19], "The Rating Scale Mental Effort proved to be a reliable and valid instrument measuring psychological costs" (p. 139).

Missing Data. To test which checklist type contained less missing data (see H3), we manually counted unanswered evaluation items for PCL and entered the results into SPSS. For OSCE-Eval, the application recorded every one of the examiners' choices and stored them locally in a database; before transferring the data to SPSS, we queried this database for unanswered items.

Scoring Results. To test whether the checklist type influenced scoring results (see H4), we analyzed the scores of 5 of the checklist items used in the forms. All of these items were global scales of the evaluation dimensions anamnesis, exploration, diagnosis, communication, and overall impression, rated on 5-point scales ranging from 1 = very bad to 5 = very good. In order to compare data from both checklist types, we scanned the PCLs and, from the resulting Microsoft Excel file, computed the mean scores for all 5 global ratings per examiner and entered them into SPSS. OSCE-Eval stored all of the examiners' choices in XML files for each candidate. To extract the mean ratings from OSCE-Eval, it was necessary to parse the XML forms and sort them by examiner; a script was developed for this purpose (a sketch of such a script is shown at the end of this section).

Preference Ratings. After using both OSCE-Eval and PCL, examiners were asked with which checklist type they would prefer to work in the future (see H5). The available choices were OSCE-Eval and paper and pencil checklist.

Subjects. During the examiners' training session, which took place two days before the exam (see Procedure), we handed out a questionnaire. It contained questions regarding gender, age and handedness as well as experience with iPads, iPhones, and touch screens in general. We also asked whether the examiners had prior experience as OSCE examiners (all on 7-point rating scales from 1 = not experienced at all to 7 = very experienced). Finally, we were interested in how comprehensible and effective the instruction video demonstrating the usage of OSCE-Eval was: (1) The training video is comprehensible and (2) I think I am able to use OSCE-Eval without problems because of the training video. Both items were 7-point Likert scales from 1 = strongly disagree to 7 = strongly agree.
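As a concrete illustration of the scoring-results extraction described above, the following sketch shows how such a script might parse the per-candidate XML files and compute mean global ratings per examiner. It is a sketch under assumptions: the element and attribute names (examiner, item, id, value) and the item identifiers are hypothetical, since the actual OSCE-Eval XML schema is not reproduced here.

```python
import glob
import xml.etree.ElementTree as ET
from collections import defaultdict
from statistics import mean

# Hypothetical identifiers of the five global rating items
GLOBAL_ITEMS = {"anamnesis", "exploration", "diagnosis", "communication", "overall_impression"}

def mean_global_ratings(xml_dir):
    """Average the global ratings per examiner across all candidate XML files."""
    ratings = defaultdict(lambda: defaultdict(list))   # examiner -> item id -> [values]
    for path in glob.glob(f"{xml_dir}/*.xml"):
        root = ET.parse(path).getroot()
        examiner = root.get("examiner")                # assumed attribute on the root element
        for item in root.iter("item"):                 # assumed element name
            item_id = item.get("id")
            if item_id in GLOBAL_ITEMS:
                ratings[examiner][item_id].append(int(item.get("value")))
    return {examiner: {item_id: mean(values) for item_id, values in items.items()}
            for examiner, items in ratings.items()}

# Usage with an assumed export directory:
# print(mean_global_ratings("osce_eval_exports"))
```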

3.4 Procedure

OSCE Training. Two days before the examination, all 10 examiners were introduced to their tasks by communicating the aims, procedures and rules of the OSCE. The examiners were then given a sample paper and pencil checklist consisting of 5 items from the evaluation dimension communication. The investigator made sure that every examiner understood the scope of each item. The examiners then watched the video of a student performing badly at an examination station and had to evaluate that student's performance by rating the 5 items on their checklist. This was followed by a discussion moderated by the investigator, which finished when the examiners reached a consensual evaluation result. A similar private evaluation and validating discussion followed after the second video was shown, this time showing the student performing well. After this introduction, the examiners watched the 12-minute OSCE-Eval training video. We then handed out the questionnaire collecting data on the examiners' demographics, ICT and OSCE experience, and their perception of the training video. After all examiners had completed their questionnaires, we answered open questions about the usage of OSCE-Eval. Once the examiners felt reasonably comfortable, two hands-on trainings took place in two different rooms. Half of the sample (n = 5) operated OSCE-Eval while observing a re-enacted scene played by two actors, one in the student's and the other in the patient's role. The scene was based on a case-based example matching the items of the form. The other half of the sample used PCL while observing a comparable re-enacted scene. After finishing the evaluation with either OSCE-Eval or PCL, the two groups switched checklist types and completed the same tasks with a comparable scene. Finally, the examiners were informed about the dates of their OSCE and the type of checklist they would be using first during the examination.

OSCE Examination. The examination spanned two days, with the procedure on either day being identical. 30 minutes prior to the start of the examination, 5 examiners were seated at their stations and introduced to the standardized patient acting out a patient case. Then we handed out either an iPad running OSCE-Eval or the PCL. Each examiner started by evaluating the performance of 12 candidates with the assigned checklist type. After the evaluation of these 12 students, examiners completed the questionnaires on usability satisfaction and mental effort. After lunch, the checklist type was exchanged, so each examiner evaluated another 12 students with the other system (see Table 1). They then filled out the questionnaires on usability satisfaction and mental effort a second time. Additionally, the examiners had to state whether they would prefer OSCE-Eval or PCL as an instrument in future evaluations. Each evaluation lasted 15 minutes, including a 2-minute break to allow the students to change stations and the examiners to complete the evaluation. The schedule was structured into rotations, each consisting of 4 evaluations.


Table 1. Schedule for examiners either starting with the OSCE-Eval application or the PCL

Day of run | Examiner (ID) | Checklist type used for first 12 evaluations | Checklist type used for last 12 evaluations
1 | 1 | OSCE-Eval | PCL
1 | 2 | PCL | OSCE-Eval
1 | 3 | OSCE-Eval | PCL
1 | 4 | PCL | OSCE-Eval
1 | 5 | OSCE-Eval | PCL
2 | 6 | PCL | OSCE-Eval
2 | 7 | OSCE-Eval | PCL
2 | 8 | PCL | OSCE-Eval
2 | 9 | OSCE-Eval | PCL
2 | 10 | PCL | OSCE-Eval

After each rotation, the examiners were granted a 15-minute break. After the first 3 rotations, the examiners had a 30-minute lunch break. After lunch, the checklist type was interchanged and the exam continued for another 3 rotations. In total, an examination day spanned approximately 8 hours, including the time for answering the questionnaires.
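For illustration, the counterbalanced assignment shown in Table 1 can be expressed as a small routine. This is only a sketch of the assignment logic, not a tool that was used in the study.

```python
def counterbalanced_schedule(examiner_ids, per_day=5):
    """Alternate the starting checklist type across examiners (cf. Table 1)."""
    schedule = []
    for idx, examiner in enumerate(examiner_ids):
        first = "OSCE-Eval" if idx % 2 == 0 else "PCL"
        last = "PCL" if first == "OSCE-Eval" else "OSCE-Eval"
        day = idx // per_day + 1
        schedule.append((day, examiner, first, last))
    return schedule

for row in counterbalanced_schedule(range(1, 11)):
    print(row)   # (1, 1, 'OSCE-Eval', 'PCL'), (1, 2, 'PCL', 'OSCE-Eval'), ...
```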

4 Results

All examiners were either right-handed or ambidextrous. Before participating in the study, examiners had little experience using iPads; they were more experienced using iPhones and other touch screen-based devices (see Table 2). Furthermore, most participants were novices with regard to acting as examiners within an OSCE (Table 2). The OSCE-Eval instruction video was perceived as comprehensible and effective (Table 2).

Table 2. Overview of the examiners' ICT experience, OSCE experience and OSCE-Eval training video evaluation

Variable | N | Min | Max | M | SD | Mode | Mdn
iPad usage experience i | 10 | 1 | 7 | 3.00 | 2.36 | 1 | 2
iPhone usage experience i | 10 | 1 | 7 | 5.20 | 2.39 | 7 | 6
Touch-screen usage experience i | 10 | 4 | 7 | 5.80 | 1.14 | 5; 7 | 5.5
OSCE-examiner experience i | 10 | 1 | 7 | 2.50 | 2.17 | 1 | 1
Comprehensibility of training video ii | 10 | 4 | 7 | 6.30 | .95 | 7 | 6.5
Ability to use OSCE-Eval due to video ii | 10 | 3 | 7 | 5.40 | 1.43 | 7 | 5.5

i Scale from 1 = not experienced at all to 7 = totally experienced. ii Scale from 1 = strongly disagree to 7 = strongly agree.


To test our first 4 hypotheses, we computed Wilcoxon signed-rank tests for paired samples (the non-parametric equivalent of the paired-samples t-test). Satisfaction with system usability (overall scale) was higher for OSCE-Eval (M = 6.36, SD = .54) than for PCL (M = 5.60, SD = .88). This difference is statistically significant (z = -1.94, p(1-tailed) = .027); therefore, we accept our first hypothesis (H1). The experienced mental effort was lower when using OSCE-Eval (M = 71.30, SD = 32.96) than when using PCL (M = 89.20, SD = 38.85). This difference is also significant (z = -1.84, p(1-tailed) = .033), which is why we accept our second hypothesis (H2) as well. As expected, examiners did not leave any items blank when using OSCE-Eval. In contrast, when using PCL, examiners left between 1 and 27 items blank (M = 8.40, SD = 7.58). The Wilcoxon test showed that this difference is highly significant (z = -2.81, p(1-tailed) = .0025); we consequently accept our third hypothesis (H3). The data further showed that evaluation results did not significantly differ between checklist types for any of the 5 evaluation dimensions (Table 3). Thus, we reject our fourth hypothesis (H4).

Table 3. Comparison of scoring results

Score dimension | N | M1 (SD1) i | M2 (SD2) ii | SS+ | SS- | z | p(2-tailed)
Anamnesis | 10 | 3.52 (.23) | 3.45 (.53) | 31 | 24 | -.357 | .721
Exploration | 10 | 3.41 (.33) | 3.30 (.35) | 39 | 16 | -1.173 | .241
Diagnosis | 10 | 3.87 (.40) | 4.02 (.45) | 13 | 42 | -1.480 | .139
Communication | 10 | 3.95 (.31) | 3.96 (.37) | 24 | 21 | -.178 | .859
Overall impression | 10 | 3.67 (.27) | 3.61 (.48) | 24 | 21 | -.178 | .859

i M1 and SD1 are based on scores obtained with OSCE-Eval. ii M2 and SD2 are based on scores obtained with PCL.

In order to test which checklist type was preferred by the examiners (see H5), we conducted a chi-square test. Our data show that 8 of the 10 examiners preferred OSCE-Eval as an evaluation instrument; 1 examiner abstained from voting and 1 preferred using PCL. The number of observations per cell thus differs significantly from the expected number per cell (χ²(1, N = 9) = 5.44, p = .02). For this reason we accept our fifth and final hypothesis (H5).
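For readers who wish to reproduce this type of analysis, the following sketch shows the corresponding tests with SciPy. The usability values below are placeholders rather than our raw data (the reported statistics were computed in SPSS); only the preference counts (8 vs. 1) are taken from the text above.

```python
from scipy.stats import chisquare, wilcoxon

# Paired per-examiner usability ratings (placeholder values, not the raw study data)
pssuq_eval = [6.4, 6.9, 5.8, 6.1, 6.6, 6.8, 6.2, 6.5, 6.0, 6.3]
pssuq_pcl  = [5.9, 6.1, 4.2, 5.5, 6.3, 5.8, 5.4, 6.0, 5.1, 5.7]

# Wilcoxon signed-rank test for the paired ratings (cf. H1/H2/H3);
# halving the two-sided p-value gives the one-tailed p for a directional hypothesis
stat, p_two_sided = wilcoxon(pssuq_eval, pssuq_pcl)
print(stat, p_two_sided / 2)

# Chi-square goodness-of-fit test for the preference counts (H5):
# 8 examiners preferred OSCE-Eval, 1 preferred PCL (the abstention is excluded)
chi2, p = chisquare([8, 1])
print(chi2, p)   # approx. 5.44 and .02, matching the values reported above
```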

5 Discussion

In this study, we investigated whether commonly used paper and pencil checklists (PCL) or item-identical digital checklists running on Apple iPads (OSCE-Eval) are the more suitable instrument within the medical assessment framework of the Objective Structured Clinical Examination (OSCE). Our comparative study provides the following five results regarding our initial hypotheses. First, the experienced usability differed between checklist types: in particular, examiners perceived the usability of OSCE-Eval as significantly more satisfying than that of PCL.

Second, the usage of OSCE-Eval decreased the examiners' perceived mental effort significantly in comparison with the usage of PCL. Third, examiners did not leave any blank items when using OSCE-Eval, whereas with PCL they sometimes failed to provide an input for every checklist item; this difference is highly significant. Fourth, checklist type did not have an impact on scoring results (H4 rejected), which is why we consider OSCE-Eval a non-distracting evaluation instrument. Fifth, a statistically significant difference in the examiners' preference in favor of OSCE-Eval was found.

Examiners justified their clear preference by underlining the simplicity of the touch screen, the ability to observe performances from multiple viewpoints by walking around the station while using the iPad for scoring, and the support and feedback provided by OSCE-Eval (e.g. highlighting blank items). Examiners also mentioned that with OSCE-Eval it is easier to correct previously processed checklist items, simply by tapping the corresponding item again and selecting another choice. With PCL, already processed items must be crossed out before ticking off another choice, so corrections are more tedious. In this respect, we also extracted data from log files as well as from the paper forms and found a significantly higher occurrence of evaluation changes when examiners used OSCE-Eval (M = 35.80, SD = 17.68) than when they worked with PCL (M = 9.50, SD = 5.52) (z = -2.703, p(2-tailed) = .007). Apparently, examiners were more conservative in selecting choices when using PCL; we assume this is attributable to the higher effort of modifying an answer on paper and to the limited number of changes possible before the form becomes unreadable.

The main limitation of the present study is its small sample size (N = 10). As already pointed out, the OSCE is a costly form of examination for medical faculties, which is why we were not able to invite more physicians to join as examiners. However, we regard these preliminary results as promising and therefore intend to replicate the present study in a larger exam with more subjects.

In conclusion, the digital checklist OSCE-Eval running on an iPad outperformed the commonly used paper and pencil checklist with respect to all subjective dimensions surveyed in this study, i.e. perceived usability, experienced effort and preference. Furthermore, when examiners used OSCE-Eval, data quality was much better than with the paper version of the form (i.e. no blank items). Finally, OSCE-Eval did not influence the examiners' scoring behavior, so we regard this type of checklist as a non-distracting evaluation instrument. Based on the results of this study, we look forward to applying OSCE-Eval to productive OSCEs in the near future and are excited to expand our findings.

Acknowledgments. The e-OSCE project is funded by SWITCH through the AAA (E-Infrastructure for E-Science) research program as well as by the University of Bern and the University of Applied Sciences Rapperswil (HSR).


References

1. Harden, R.M., Stevenson, M., Downie, W.W., Wilson, G.M.: Assessment of clinical competence using objective structured examination. Br. Med. J. (Clin. Res. Ed.) 1, 447–451 (1975)
2. Barman, A.: Critiques on the objective structured clinical examination. Ann. Acad. Med. 34, 478–482 (2005)
3. Harden, R.M., Gleeson, F.A.: Assessment of clinical competence using an objective structured clinical examination (OSCE). Med. Educ. 13, 41–54 (1979)
4. Gupta, P., Dewan, P., Singh, T.: Objective Structured Clinical Examination (OSCE) Revisited. Indian Pediatr. 47, 911–920 (2010)
5. Serving Swiss Universities (SWITCH), http://www.switch.ch/aaa/projects/detail/FHO.2
6. Serving Swiss Universities (SWITCH), http://www.switch.ch/aaa/projects/detail/UNIBE.4
7. Schmidts, M.B., http://m3e.meduniwien.ac.at/resources/e_osce.pdf
8. Treadwell, I.: The usability of personal digital assistants (PDAs) for assessment of practical performance. Med. Educ. 9, 855–861 (2006)
9. Hatfield, C.L., Bragg, H.H.: Utilizing Electronic Objective Structured Clinical Exam (eOSCE) Stations for College-Wide Assessment Purposes. In: 109th Annual Meeting of the American Association of Colleges of Pharmacy, Chicago, Illinois, p. 81 (2008)
10. E-OSCE official Website, http://www.e-osce.ch
11. Holzinger, A.: Finger Instead of Mouse: Touch Screens as a Means of Enhancing Universal Access. In: Carbonell, N., Stephanidis, C. (eds.) UI4ALL 2002. LNCS, vol. 2615, pp. 387–397. Springer, Heidelberg (2003)
12. Holzinger, A., Hoeller, M., Schedlbauer, M., Urlesberger, B.: An Investigation of Finger versus Stylus Input in Medical Scenarios. In: ITI 30th International Conference on Information Technology Interfaces, pp. 433–438. IEEE Press, New York (2008)
13. Gaunt, K., Schmitz, F.M., Stolze, M.: Choose Popovers over Buttons for iPad Questionnaires. In: Campos, P., Graham, N., Jorge, J., Nunes, N., Palanque, P., Winckler, M. (eds.) INTERACT 2011, Part II. LNCS, vol. 6947, pp. 533–540. Springer, Heidelberg (2011)
14. Hudiburg, R.A.: Psychology of Computer Use: VII. Measuring Technostress: Computer-Related Stress. Psychol. Rep. 64, 767–772 (1989)
15. Anderson, A.A.: Predictors of computer anxiety and performance in information systems. Computers in Human Behavior 12, 61–77 (1996)
16. Webster, J., Martocchio, J.J.: Microcomputer Playfulness: Development of a Measure with Workplace Implications. MIS Quart. 16, 201–226 (1992)
17. Lewis, J.R.: IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use. Int. J. Hum.-Comput. Interact. 7, 57–78 (1995)
18. Eilers, K., Nachreiner, F., Hänecke, K.: Entwicklung und Überprüfung einer Skala zur Erfassung subjektiv erlebter Anstrengung (Development and evaluation of a scale for the assessment of subjectively experienced effort). Zeitschrift für Arbeitswissenschaft 40, 215–224 (1986)
19. Zijlstra, F.R.H.: Efficiency in Work Behaviour: A Design Approach for Modern Tools. Delft University Press, Delft (1993)
