Computer-based Training for Deception Detection: What Users Want?

Jinwei Cao, Ming Lin, Amit Deokar, Judee K. Burgoon, Janna M. Crews, Mark Adkins

Center for the Management of Information, University of Arizona, Tucson, AZ 85721, USA
{jcao, mlin, adeokar, jburgoon, jcrews, madkins}@cmi.arizona.edu
Abstract. Training humans to detect deception is as difficult and important a problem as deception detection itself. A computer-based deception detection training system, Agent99 Trainer, was built with the goal of training humans to understand deception and to detect it more accurately. Based on previous studies, a newer version of this system was designed and implemented not only to overcome the limitations of the earlier system, but also to enhance it with additional useful features. In this paper, we present a usability study that tests the design of this system from the users' perspective. The findings of this study, based on both quantitative and qualitative data, demonstrate good usability of the training system and provide a better understanding of what users want from a deception detection training system.
1 Introduction
In the past two years, a curriculum for deception detection training and two prototypes of a computer-based deception detection training system called Agent99 Trainer have been developed at a southwestern research university for US Air Force officers. As widely reported in the literature, human deception detection accuracy is only slightly better than chance (around 50%) [3], [14], [16]. The goal of the Agent99 Trainer, therefore, is to train people (such as Air Force security officers) to understand deception and enable them to detect it more accurately, while providing the advantages of computer-based training such as ease of access and low cost. The first prototype of Agent99 Trainer was designed as a Web-based multimedia training system, and it was shown to be effective in deception detection training [4]. It was also demonstrated that users had a good experience with this first prototype [5]. However, several limitations of the first prototype were discovered when it was evaluated [5]. These limitations, including security concerns with the Web-based implementation and insufficient functionality, indicated the need for a new design of Agent99 Trainer. Hence, based on the experience of developing the first prototype, we designed a new one in which the major design of the first prototype was retained, but the Web-based implementation was changed to a CD-ROM
implementation, and advanced features such as search functionality and an assessment tool were added. A recent experiment showed that the new prototype also effectively improved people's deception detection accuracy [13]. However, in order to judge the success of the system, we also need to evaluate its usability, which can be measured through attributes such as system effectiveness, user satisfaction, and ease of use [17]. In this paper, we focus on evaluating the usability of the new prototype of Agent99 Trainer and study which system features affect its usability. More broadly, we want to investigate what users really want in a computer-based training system for deception detection. The remainder of the paper is organized as follows. We first review the deception detection training literature briefly, followed by the design of the new prototype of Agent99 Trainer. We then describe the usability measures and evaluation instrument, and present the analysis, results, and lessons learned. Finally, we conclude the paper with a list of suggestions for future development of the system.
2 Agent99 Trainer – A Computer-based Training System for Deception Detection

Many previous research studies have shown that properly designed training can help people better detect deception [20], [10], [11], [12]. Our previous research identified three critical components of effective deception detection training: explicit instruction on cues of deception, practice judging the veracity of real communication, and immediate feedback on the judgment [15], [6]. Although these three components can easily be implemented in a traditional classroom training environment, the shortage of qualified deception detection instructors and the consequent high cost of traditional classroom training call for a more efficient and cost-effective alternative. As a result, a computer-based training system, Agent99 Trainer, was developed for deception detection training. The first prototype of Agent99 Trainer was designed and implemented as a Web-based multimedia training system. The three critical components of deception detection training were implemented in two modules in this prototype: Watch Lecture and View Example with Analysis. The Watch Lecture module presented a deception detection lecture through a combination of synchronized media, including expert video of the lecture, PowerPoint slides, and lecture notes. The lecture was segmented into different topics, and a pull-down menu was provided so that users could jump to any topic during the training. The View Example with Analysis module linked practice examples to the deception cues taught in the lecture. Users could view those examples and click on a link to receive analytical feedback on the deception cues illustrated. Experiments showed that this first prototype was effective in deception detection training: users' detection accuracy was significantly improved [6]. However, some problems with this prototype were also pointed out [5], such as insufficient interaction, lack of assessment capability, and bandwidth restrictions. In addition, implementing the Web-based prototype at the Air Force training site proved troublesome because of networking security concerns from the
military. Therefore, the second prototype of the Agent99 Trainer was developed to address these problems by changing the delivery method and adding new functionality. The new prototype was delivered on CD-ROMs, which enabled us to carry high-quality audio/video without networking bandwidth restrictions. The major design of the Watch Lecture and View Example modules in the old prototype was retained in this new prototype, but the two previously separate modules were combined into a single Virtual Classroom module (Fig. 1). The lecture video, slides, and notes were structured much as in the old prototype, and they were still synchronized. Examples were now embedded in the lecture video as subtopics under specific concepts or cues of deception detection. An outline-type navigation menu (upper right in Fig. 1) replaced the pull-down menu of the old prototype, in order to give users a clear framework of the lecture throughout their training. Users were able to choose any particular lecture topic or example they wanted to watch from the navigation menu.
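To make the synchronization concrete, the sketch below shows one plausible way to represent the outline-to-media mapping. The data structure and all names are our own illustrative assumptions, not the actual Agent99 Trainer implementation:

```python
from dataclasses import dataclass

@dataclass
class OutlineEntry:
    """One topic or embedded example in the lecture outline (hypothetical)."""
    title: str
    start_sec: float  # where this topic begins in the lecture video
    end_sec: float    # where it ends
    slide_id: str     # slide kept in sync while this segment plays
    notes: str        # lecture notes kept in sync with the same segment

outline = [
    OutlineEntry("What is deception?", 0.0, 95.0, "slide-01",
                 "Deception defined as a knowingly transmitted false message."),
    OutlineEntry("Example: job interview", 95.0, 170.0, "slide-02",
                 "Watch for vocal and verbal cues in this clip."),
]

def jump_to_topic(outline, title):
    """Simulates clicking an outline item: returns the video seek position
    plus the slide and notes that must be displayed at that position."""
    for entry in outline:
        if entry.title == title:
            return entry.start_sec, entry.slide_id, entry.notes
    raise KeyError(title)

print(jump_to_topic(outline, "Example: job interview"))
```

A single time-indexed structure of this kind would let both outline navigation and media synchronization follow from one source of truth.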
Fig. 1. The Virtual Classroom

Besides the Virtual Classroom, two new modules were added to the new prototype: Search Tools and Assessment Tool. The Search Tools module (Fig. 2) was designed to better support just-in-time training. It allowed users to look for specific information using two methods: Keyword Search and Ask A Question (AAQ). AAQ allowed the user to ask a question about the subject matter in everyday English (e.g., "What is deception detection?"). Search results returned by the system were a list of the relevant video segments in the lecture video. The Assessment Tool was designed to pop up quizzes at certain breakpoints in the lecture, to test users' understanding of the lecture up to that point (Fig. 3). Each time users answered a question, the correct answer was given to them as feedback.
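The breakpoint-quiz behavior can be sketched as follows; the breakpoint times, question, and data layout are hypothetical, since the paper does not describe the tool's internals:

```python
# Hypothetical quiz breakpoints keyed by position in the lecture video (seconds).
QUIZZES = {
    300: ("Which cue discussed so far is associated with deception?",
          ["Steady eye contact", "Speech errors", "Relaxed posture"], 1),
}

def on_breakpoint(position_sec, answer):
    """Pose the quiz at this breakpoint and give the correct answer as
    immediate feedback, as the Assessment Tool is described to do."""
    quiz = QUIZZES.get(position_sec)
    if quiz is None:
        return
    question, choices, correct = quiz
    print(question)
    for i, choice in enumerate(choices, start=1):
        print(f"  {i}. {choice}")
    if answer - 1 == correct:
        print("Correct!")
    else:
        print(f"Incorrect. The correct answer is: {choices[correct]}")

on_breakpoint(300, answer=2)  # the participant picks option 2
```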
Fig. 2. Search Tools: a regular keyword search, and the Ask A Question interface, which lets the user pose a natural language question and responds with a list of video clips
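As a rough sketch of how the Keyword Search half might rank transcript-indexed video segments, consider simple term overlap. The real system's retrieval method (especially AAQ's natural language matching) is not specified in the paper, so this is only an assumed baseline:

```python
import re

def tokens(text):
    """Lowercased word tokens, ignoring punctuation such as '?'."""
    return set(re.findall(r"[a-z']+", text.lower()))

def search_segments(query, segments):
    """Rank lecture video segments by term overlap with the query and
    return the matches, mimicking the described list-of-clips output."""
    query_terms = tokens(query)
    scored = [(len(query_terms & tokens(seg["transcript"])), seg)
              for seg in segments]
    return [seg for score, seg in sorted(scored, key=lambda s: -s[0]) if score]

segments = [
    {"title": "Defining deception", "start_sec": 0,
     "transcript": "deception is a message knowingly transmitted to create a false belief"},
    {"title": "Vocal cues", "start_sec": 210,
     "transcript": "vocal pitch and speech errors can be cues to deception"},
]
print([s["title"] for s in search_segments("What is deception?", segments)])
```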
Fig. 3. Assessment Tool

3 Study Design
Since the objective of this study is to evaluate the usability of the new prototype of Agent99 Trainer, a usability questionnaire was given to the users during an experiment conducted at an Air Force training center. For convenience, the responses to this questionnaire were collected together with the learning performance data in the same experiment. However, only the usability data are analyzed and interpreted in this study.
3.1 Usability Measures and the Questionnaire
There are many measurable usability attributes described in the literature, but three of them are the most common and important ones used to measure any given system's success [17]. They are:
• Ease of Use – The degree to which a person believes that using a particular system would be free of effort [9].
• Satisfaction – Users are subjectively satisfied when using the system [17].
• Effectiveness/Usefulness – The degree to which a person believes that using a particular system would enhance his or her task performance [9], [1].
In this study, a questionnaire was developed to test these three usability attributes from many different perspectives. For example, in terms of user satisfaction, questions asked whether users found the overall training content interesting; whether they liked the major features of the system, such as the self-paced control and the structured, synchronized multimedia; whether new functions such as Ask A Question (AAQ) or the popup quizzes made them feel more satisfied; how they felt about the quality of audio and video in the system; and whether they would consider using the system again. Following Weisberg's guidelines [18] on developing questionnaires that avoid invalid questions and improve reliability, we included both closed-ended and open-ended questions, so that users could both choose from given alternatives and answer in their own words. Negatively worded items were also included to avoid bias caused by wording. The development of the questionnaire was an incremental process. Starting from a previous usability questionnaire used for testing a general Web-based training tool [19], the questionnaire was iteratively revised in each experiment in which an earlier Agent99 Trainer prototype was tested. The final version used in this study contained items taken directly from an existing validated measure, the System Usability Scale (SUS) [2]; items adapted from other existing validated measures, such as the Questionnaire for User Interface Satisfaction (QUIS) [7] and Perceived Usefulness and Ease of Use [9]; as well as items we developed ourselves for specific system features. There were twenty-nine closed-ended questions rated on a 7-point Likert-type scale (1 = Completely Disagree; 2 = Mostly Disagree; 3 = Slightly Disagree; 4 = Neither Agree nor Disagree (Neutral); 5 = Slightly Agree; 6 = Mostly Agree; 7 = Completely Agree). However, seven of these questions concerned system functions that were only available in certain treatment groups (see Table 2); therefore, an eighth choice, "N/A" (not applicable), was added. Six open-ended questions were given to users after the closed-ended questions: 1) Please describe any problems that you experienced using this system; 2) What do you like about the system? 3) What do you dislike about the system? 4) How can the system be improved to better help you learn? 5) In what learning situations do you think you would like to use this type of learning system? 6) Other comments. The questionnaire was evaluated by several graduate students prior to the study to avoid any possible misunderstanding.
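To illustrate the scoring conventions just described (7-point scale, reverse-coded negative items, and the "N/A" choice), here is a minimal sketch with hypothetical item names; it is not the instrument itself:

```python
import numpy as np
import pandas as pd

# Hypothetical responses: 1-7 Likert ratings, with "N/A" where a feature
# was unavailable in the respondent's treatment group.
responses = pd.DataFrame({
    "easy_to_use":         [6, 7, 5],
    "hard_to_concentrate": [2, 3, 1],      # negatively worded item
    "keyword_search_help": [5, "N/A", 4],  # only in some conditions
})

# Treat "N/A" as missing so it does not distort the item means.
scored = responses.replace("N/A", np.nan).astype(float)

# Reverse-code negatively worded items on a 7-point scale (x -> 8 - x),
# so that larger numbers always indicate a more positive response.
scored["hard_to_concentrate"] = 8 - scored["hard_to_concentrate"]

print(scored.mean())  # per-item means, skipping missing values
```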
3.2 Experiment
The usability questionnaire was given in an experiment conducted at an Air Force training center. The major purpose of this experiment was to test the effectiveness of Agent99 Trainer by measuring users' learning performance. For this purpose, pre- and post-tests on deception knowledge and deception judgment accuracy were given before and after training, respectively. The usability questionnaire was deployed after the post-test. To understand the relationship between different combinations of system functions and system usability, we compared the Agent99 Trainer prototype with full functions to versions with partial functions. The experimental conditions are described in section 3.2.2.

3.2.1 Participants

The participants were 180 Air Force officers receiving general training at a large US Air Force training center. Deception detection training was considered part of their training program, and they were given credit for participating in the experiment. The participants were randomly assigned to 5 treatment groups and scheduled into several different sessions; however, the number of participants per group varied due to technical difficulties encountered during the experiment. For example, in the first few sessions, some users had to be moved to a different treatment group because of a computer problem.
3.2.2 Experimental Conditions and Procedure

The experimental conditions, ordered from fewest to most functions, were as follows:
Video Only. Users watched a video of an expert talking about deception, with PowerPoint slides and examples cut into the video. The presentation order in this video was predetermined by the instructor.
Linear A99. The lecture outline, expert video, PowerPoint slides, and lecture notes were shown on the Agent99 Trainer interface. The PowerPoint slides and lecture notes were synchronized with the lecture video. However, users could not click on the lecture outline to change topics; they still had to follow the predetermined instruction pace and sequence.
A99 + AAQ. This condition had almost all functions of Agent99 Trainer except the popup quizzes. Users could control their own learning pace by clicking on the lecture outline. They could also look for specific topics using keyword search or AAQ.
A99 + AAQ + Content. This condition was almost the same as the previous one, except that links to more examples of deception detection were provided at the bottom of the lecture outline.
A99 + AAQ + Content + Quizzes. This condition deployed the complete functional implementation of Agent99 Trainer, including the popup quizzes, and likewise provided the additional examples of deception detection.
The procedure of the experiment is summarized in Table 1. There were two training sessions in this experiment. The first session was an introductory lecture on basic deception detection knowledge, and the second was a lecture teaching the specific cues of deception. Every user completed both sessions under the same experimental condition. Since the introductory lecture had less content than the cues lecture, the usability questionnaire was given after the post-test at the end of the first session, to prevent participants from becoming fatigued. The two sessions had about the same duration: one and a half hours.

Table 1. Experiment design
Session | Procedure (identical across all five treatment conditions: Video Only, Linear A99, A99 + AAQ, A99 + AAQ + content, A99 + AAQ + content + quizzes)
Intro | Pre-tests; Instruction; Post-tests; Usability Questionnaire
Cues | Pre-tests; Instruction; Post-tests
In the Intro session, where the usability questionnaire was given, every participant in all groups received a CD-ROM containing the entire session. Participants were instructed to start the Agent99 Trainer program from the CD-ROM at the same time, and the entire procedure was then controlled by the program. Participants first watched a short video introducing the Agent99 Trainer and the procedure of the experiment. They were then given the pre-tests. The Agent99 Trainer kept time for users and forced them into the training step if they did not finish a test on time. After one hour of training, participants were given the post-tests. The one-hour training time was designed to be slightly longer than the basic lecture video, so that participants in the last three conditions had time to explore additional content and try out more system functions. However, the training time and the time for completing pre- and post-tests were the same across conditions, in order to control variance among treatment conditions. Finally, all participants were asked to complete the usability questionnaire.
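The program-controlled pacing described above amounts to a sequence of timed steps with forced advancement. A minimal sketch follows; the step names and ordering mirror the procedure, while the control loop itself is our assumption:

```python
import time

def run_step(name, minutes, is_finished):
    """Run one step until the participant finishes or the time expires,
    then force advancement to the next step."""
    deadline = time.monotonic() + minutes * 60
    while time.monotonic() < deadline and not is_finished():
        time.sleep(0.1)  # in the real program, the UI would run here
    print(f"{name}: advancing to the next step")

# Durations shortened here for illustration; the experiment used timed
# pre-tests, one hour of training, and timed post-tests.
for name, minutes in [("pre-tests", 0.01), ("training", 0.02),
                      ("post-tests", 0.01)]:
    run_step(name, minutes, is_finished=lambda: False)
```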
4 Results

4.1 Factor Loading and Responses to Specific Items
Factor analysis was conducted to reduce the twenty-two closed-ended items applicable to all five conditions into a smaller number of composite variables representing the major usability attributes. Using principal component analysis extraction with varimax rotation and Kaiser normalization, five factors were extracted:
• Perceived Learning Effectiveness (alpha = .8400): captures users' perception of how effectively the system helped them learn about deception detection;
• General Satisfaction (alpha = .7803): captures users' overall satisfaction with the system;
• Audio/Video Quality Satisfaction (alpha = .7722): captures users' satisfaction with the quality of the audio and video used in the system;
• Ease of Use/Learning to Use (alpha = .7804): captures users' perception of how easily they could learn to use the system and how easily they actually used it;
• Comparison With Traditional Classroom Learning: captures users' views of their experience with Agent99 compared to the traditional classroom. This factor contains only one item, so it needs to be refined in the future to improve its reliability.
These five factors correspond well to the three usability measures proposed in section 3.1, since we consider the audio/video quality and the comparison to classroom training to be further perspectives on user satisfaction.
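For readers who wish to reproduce this style of analysis, the sketch below shows one self-contained way to compute principal-component loadings, a varimax rotation, and Cronbach's alpha. It assumes a respondents-by-items matrix of Likert ratings, omits Kaiser row normalization for brevity, and is not the authors' original analysis script:

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def principal_loadings(X, n_factors):
    """Unrotated loadings from a principal component analysis of the item
    correlation matrix (eigenvectors scaled by sqrt of the eigenvalues)."""
    corr = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    top = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, top] * np.sqrt(eigvals[top])

def varimax(L, max_iter=100, tol=1e-6):
    """Varimax rotation of an (n_items x n_factors) loading matrix."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        B = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (B ** 3 - B @ np.diag((B ** 2).sum(axis=0)) / p))
        R = u @ vt
        d_new = s.sum()
        if d and d_new / d < 1 + tol:
            break
        d = d_new
    return L @ R

# Usage sketch: X is a (180 x 22) matrix of Likert responses.
# loadings = varimax(principal_loadings(X, n_factors=5))
# alpha = cronbach_alpha(X[:, items_on_factor_1])  # reliability per factor
```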
The mean and standard deviation of each factor, in each experimental condition and in total, are shown in Table 4. In brief, the usability attributes were rated positively across all conditions, with most means close to or above 5 (Slightly Agree). This indicates that participants agreed that the Agent99 Trainer system had good usability even with only partial functions. Among the five extracted factors, only the last one, comparison with traditional classroom learning, received neutral responses on average. This result is consistent with other distance learning research findings [8]; it indicates that online training is better seen as a complement to classroom training than as a complete replacement. As described in section 3.1, seven closed-ended items applied only to certain conditions; the responses to these items are shown in Table 2.
Table 2. Responses to the items related to specific features of Agent99 Trainer¹

Item | N | Mean (Std.) | Applicable Conditions
The synchronous display of video, slides and notes helped me understand the subject matter. | 162 | 5.32 (1.61) | 2, 3, 4, 5
It was hard for me to concentrate with so much information (ask a question, keyword search, video, slides and notes) on the screen. † | 157 | 5.03 (1.62) | 2, 3, 4, 5
I liked being able to select any part of the lecture or examples at any time. | 122 | 5.42 (1.36) | 3, 4, 5
The "Keyword Search" helped me find specific information easily. | 52 | 3.75 (1.57) | 3, 4, 5
The "Ask a Question" helped me find specific information easily. | 46 | 3.52 (1.46) | 3, 4, 5
The natural language based "ask a question" was better than the "keyword search" function when I wanted to find specific information. | 44 | 3.89 (1.04) | 3, 4, 5
The Popup Quiz helped me learn deception detection concepts. | 40 | 5.60 (1.41) | 5
As shown in Table 2, most of the items related to specific features of Agent99 Trainer received positive responses. For example, the synchronized multimedia presentation and the self-paced learning method were both viewed as effective in helping users learn to detect deception. Although some students experienced information overload caused by the multiple multimedia displays, most did not view it as a significant issue. The newly added popup quizzes were also rated as very helpful in supporting learning. However, for the three items measuring users' attitudes toward the search tools, almost all users selected "N/A". After carefully analyzing the responses to the open-ended questions, we realized that most users had no time to use these new functions, both because of the time constraint on the lecture and because they were not forced to use them (unlike the popup quizzes, which were mandatory during the training). In fact, the following comment was very representative of this situation: "I didn't use the keyword or AAQ feature because I barely had the time". This indicates that although the training time had been designed to be slightly longer than the training video, actual individual learning time could be even longer than the designed time, because the self-paced control in conditions 3, 4, and 5 allowed users to replay video segments as many times as they wanted. Therefore, the responses to these three items cannot represent the real usability of the search tools, and we need to re-test the search tools in the future.
¹ The responses to the negatively worded item (labeled with †) were reverse-coded; therefore, for all the items listed, larger numbers indicate a more positive response.
4.2 Comparison Among Different Conditions
To understand the relationship between the different combinations of system functions and system usability, we conducted a planned contrast analysis. We noted that time constraints restricted users from using the search tools, and hence the actual usage of the system in conditions 2, 3, and 4 was similar to some extent. Therefore, we planned the contrasts as shown in Table 3.

Table 3. Contrast coefficients
Contrast | Video Only | Linear A99 | A99 + AAQ | A99 + AAQ + content | A99 + AAQ + content + quizzes
1 | -4 | 1 | 1 | 1 | 1
2 | 0 | -1 | -1 | -1 | 3
3 | 0 | -2 | 1 | 1 | 0
4 | 0 | 0 | -1 | 1 | 0
Contrast 1 compares the Video Only condition with all the other conditions; the hypothesis is that a partial or complete implementation of Agent99 Trainer (A99) is better than a simple linear lecture video. Contrast 2 compares the three effectively similar A99 groups with the group having full functionality; we predicted that the popup quizzes would produce more positive usability ratings. Contrast 3 compares Linear A99 with the non-linear A99 groups (in which users could control their own learning pace); we hypothesized that non-linear A99 would show more positive usability. Finally, contrast 4 tests whether the addition of more examples in A99 improves usability. ANOVA tests with the planned contrasts above were conducted for each of the five factors extracted in the factor analysis. The mean and standard deviation of each factor are shown in Table 4.
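A planned contrast of this kind can be computed from the per-group scores with standard ANOVA contrast formulas, sketched below; this mirrors the textbook computation rather than the authors' exact software:

```python
import numpy as np
from scipy import stats

def planned_contrast(groups, weights):
    """One-tailed t-test for a planned contrast over independent groups,
    using the pooled within-group variance (ANOVA MSE) as the error term."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    w = np.asarray(weights, dtype=float)
    assert abs(w.sum()) < 1e-9, "contrast weights must sum to zero"
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_error = int(ns.sum()) - len(groups)
    mse = ss_within / df_error
    t = (w * means).sum() / np.sqrt(mse * (w ** 2 / ns).sum())
    return t, stats.t.sf(t, df_error)  # t statistic, one-tailed p-value

# Contrast 1 from Table 3: Video Only versus all Agent99 conditions.
# groups = [video_only, linear_a99, a99_aaq, a99_aaq_content, a99_full]
# t, p = planned_contrast(groups, [-4, 1, 1, 1, 1])
```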
Table 4. Means and standard deviations of the five factors

Condition | N | Learning mean (std) | Satisfaction mean (std) | AV mean (std) | Ease of Use mean (std) | Online/Class mean (std)
Video Only | 30 | 4.96 (1.03) | 4.83 (.99) | 4.75 (1.41) | 5.90 (.74) | 3.67 (2.11)
Linear A99 | 37 | 5.26 (1.21) | 5.08 (.96) | 5.05 (1.12) | 5.96 (.79) | 3.50 (1.74)
A99 + AAQ | 30 | 5.44 (1.09) | 5.20 (1.28) | 4.88 (1.35) | 6.18 (.75) | 4.10 (2.11)
A99 + AAQ + content | 43 | 5.03 (.99) | 4.76 (1.04) | 4.96 (1.10) | 5.87 (.80) | 4.07 (1.78)
A99 + AAQ + content + quizzes | 40 | 5.60 (1.10) | 5.49 (1.18) | 5.03 (1.22) | 6.10 (1.01) | 3.97 (1.87)
Total | 180 | 5.26 (1.10) | 5.07 (1.11) | 4.95 (1.22) | 6.00 (.83) | 3.90 (1.89)
The results of these contrast analyses, along with explanations supported by the responses to the open-ended items, are discussed below for each factor.
4.2.1 Perceived Learning Effectiveness

The planned contrasts were significant for both contrast 1 (t = 1.697, p = .046, one-tailed) and contrast 2 (t = 1.730, p = .043, one-tailed). This indicates that the Agent99 Trainer (with full or partial functions) was better than a simple linear lecture video, and that the Agent99 Trainer with full functions was better than the versions with partial functions. By analyzing the responses to the open-ended items, we found that the following features of the system could explain these contrast results.
• Structured and synchronized multimedia lecture: Except for condition 1 (Video Only), the following response was representative of all other four conditions: "The video, slide and notes/commentary combine to give you the visual as well as audio aspects that reinforces the learning material being presented very effectively". The clear difference between the comments in condition 1 and the other four conditions indicates that the synchronized multimedia display did improve users' learning experience, as we predicted.
• User self-paced control: The following response was representative of conditions 1 (Video Only) and 2 (Linear A99): "I was unsure how to go back so I couldn't relearn what I had missed". In the remaining conditions, by contrast, users reacted with positive comments such as "liked the ability to go to different sections at any time by using the top right screen", "allows one to backtrack and see sections again", and "it allowed you to learn at your own pace". Therefore, user self-paced control played a key role in improving users' learning experience.
• Popup quizzes: In the condition where the popup quizzes were used, i.e., condition 5 (A99 + AAQ + content + quizzes), users gave comments such as: "I liked the pop up questions most because of the feedback it provided". Noting that in other conditions users complained about the lack of immediate feedback, in comments like "there was no immediate feedback", we conclude that the immediate assessment and feedback provided by the popup quizzes made the difference between condition 5 and the other Agent99 Trainer conditions.
Besides these features that explain the differences among conditions, we found that users' overall perceived learning effectiveness with Agent99 Trainer was positive, as shown in the following representative comment: "I imagine coursework taught through Agent99 would be easier to understand and lead to better results than other types of media." A key contributor to this positive learning effectiveness across all conditions was the use of video/audio/text examples for practice. Not only did the closed-ended item for this feature receive the highest mean response (mean = 5.90), but there were also many positive user comments about it in all five conditions, such as "the examples were very helpful" and "Examples of what you are trying to teach are good".

4.2.2 General Satisfaction

Contrast 2 was significant (t = 2.366, p = .010, one-tailed). Similar to the results for Perceived Learning Effectiveness, the Agent99 Trainer with full functions was better than the versions with partial functions. Again, this could be due to the addition of popup quizzes, which provided assessment and immediate feedback. Although contrast 4 was not statistically significant, from the qualitative data it
appears that the Agent99 Trainer with more content had worse usability than the one with less content. One possible explanation for this anomaly is cognitive overload. Users in condition 4 found the training session very lengthy, as suggested by comments such as "reduce the amount of information in the program; I realize there is a lot to be delivered, but take it in phases" and "it was kind of long and stressful". Possibly, users in this condition experienced information overload within the given time frame. Although users in condition 5 were also given more examples, as in condition 4, their comments showed that most did not take advantage of the additional content because of the time spent answering the popup quizzes. One representative comment from condition 5 is: "I could not take benefit of the complete information provided because of the time constraints". Therefore, the users in condition 5 did not experience the information overload problem seen in condition 4. Thus, lengthiness and cognitive load affect perceived satisfaction with the system: the higher the cognitive load, the lower the perceived satisfaction. Although no statistical significance was found between the first condition and the rest, the user comments indicated a preference for the synchronized multimedia lecture and the user self-paced control, such as "liked the combination of audio and video presentation along with slideshow and text" and "liked the ability to go to different sections at any time by using the top right screen". Consistent with the findings for Perceived Learning Effectiveness, we observed that perceived user satisfaction was positively related to these two features.

4.2.3 Audio/Video Quality Satisfaction

No planned contrast was significant for this factor. The satisfaction level for audio/video quality was not very high (mean = 4.99) across all groups. The following comments expressed the users' concerns about audio and video quality and suggest possible reasons for the low score: "the video window is too small to pick enough details on the examples", "audio levels wasn't consistent across all examples", "some of the audio/video was cutoff at the end of clip before completely finishing", and "sound quality can be improved". Each of these representative comments addresses a different issue in the quality of the audio and video presented. These issues definitely need to be considered in the improved version of the system.

4.2.4 Ease of Use/Learning to Use

Again, we did not observe any statistically significant differences between the conditions. This is a positive result, because it indicates that the addition of system functions did not make the system too complex to use. The perceived ease of use of the Agent99 Trainer was the highest among all five usability factors (Mtotal = 6.00, Mcondition5 = 6.10). Positive comments were representative across all five conditions, such as: "It was easy to use and very informative", "Simple to understand and easy to follow along", and "Extremely easy to use…"

4.2.5 Comparison With Traditional Classroom Learning

There were no significant differences among the conditions in users' opinions comparing the Agent99 training system with the traditional classroom. The majority of the
users liked Agent99 Trainer compared to traditional classroom learning. This was evident from comments such as: "Agent99 Trainer would be good in any academic setting", and "can be used in a at home-based environment where I can take my time with it, stop and replay info as needed". Some comments indicated a preference for traditional learning over the Agent99 Trainer system and suggested improvements; these were very helpful, as seen in comments like "I think this is an excellent way to conduct distance learning courses; however there should be an option to be able to actually communicate with an instructor for more specific questions". Still others disliked the system compared to classroom learning, mainly because of the lack of interaction and feedback. Example comments include "certain topics like the one we went over require discussion and personal explanation to sometimes see complex details that aren't obvious the first time you look at it". Interestingly, such responses came mainly from users in conditions 1 through 4, but not from condition 5. This further supports the effectiveness of the feedback provided by the popup quizzes in condition 5.
5 Conclusion: A Checklist for Future Developers
Based on both the quantitative and qualitative data collected in this exploratory study, the findings demonstrate that the Agent99 Trainer has good usability. More importantly, these findings give future developers a better understanding of what users want in a deception detection training system, summarized in the following list:
• Synchronized multimedia lecture. A special feature of the Agent99 Trainer, the synchronized multimedia lecture, was well received by most users. The combination of video, slides, and text gives users multiple channels of learning, so that an individual can choose his or her favorite channels to focus on. This presentation also gives users a vivid simulation of the typical classroom instruction they are familiar with. However, some users may face information overload with the multiple displays. A user-configurable GUI that allows users to choose their own combination of displays could be a solution to this problem.
• Ample real-world examples. For a training program about deception detection, real-world examples are the most effective way to help users learn different deception scenarios and the respective detection strategies. This is not only preferred by users, but also demonstrated in many previous research studies.
• Assessment and immediate feedback. Users need assessments to test their understanding of the subject matter. Without immediate feedback, however, they cannot know how well they have learned and hence cannot effectively adjust their learning process. Therefore, any assessment provided in the training system needs to be accompanied by immediate feedback to the users. If the assessments are used as experimental measures, users need to be informed before the experiment that they will not receive feedback on these tests; otherwise, satisfaction with the system could decrease.
• User self-paced control. Users most likely want to control the training program so that they can learn at their own pace. At the least, they should be given an opportunity to go back to a specific topic they do not understand. The navigable lecture outline provided in the Agent99 Trainer is a good example of the implementation of user self-paced control.
• High quality audio/video. For any training system that relies heavily on video or audio presentations, the quality of the audio/video clips can greatly affect the system's usability. Professional production of the lecture video and the example video or audio clips is highly desirable.
• Sufficient training time. Finally, the training time for such a system should not be restricted. As clearly shown in this study, too much training content with insufficient training time leads to cognitive overload and hence decreases system usability. Ideally, users want to use such a training system as a supplementary tool for just-in-time learning.
This list provides us with a guideline for future development. It can also guide any developers who want to design an information system for deception detection training with good usability. Based on this list of what users want, we will improve our current prototype by providing a configurable user interface, improving audio/video quality, providing more immediate feedback, and providing more real-world examples. Since unlimited training time could make it difficult to control variance in a controlled experiment, we plan to conduct a field study to test the learning effectiveness and user experience of the system without a time limit.
References

1. Bevan, N., Macleod, M.: Usability measurement in context. Behaviour and Information Technology, Vol. 13 (1994) 132-145
2. Brooke, J.: SUS: A "quick and dirty" usability scale. In: Usability Evaluation in Industry. Taylor and Francis (1996)
3. Buller, D. B., Burgoon, J. K.: Interpersonal Deception Theory. Communication Theory, Vol. 6 (1996) 203-242
4. Cao, J., Crews, J., Lin, M., Burgoon, J. K., Nunamaker, F. J.: Designing Agent99 Trainer: A Learner-Centered, Web-Based Training System for Deception Detection. NSF/NIJ Symposium on "Intelligence and Security Informatics", Tucson, AZ (2003) 358-365
5. Cao, J., Crews, J., Burgoon, J. K., Nunamaker, F. J., Lin, M.: User Experience with Agent99 Trainer: A Usability Study. 37th Annual Hawaii International Conference on System Sciences (HICSS 2004)
6. Cao, J., Crews, J., Lin, M., Burgoon, J. K., Nunamaker, F. J.: Can People Be Trained to Detect Deceptions? Americas Conference on Information Systems (AMCIS 2003), Tampa, Florida
7. Chin, J. P., Diehl, V. A., Norman, K. L.: Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface. Proceedings of the CHI '88 Conference on Human Factors in Computing Systems (1988)
8. Croson, D. C., Westerman, G.: Distance Learning over a Short Distance. Working paper (2004)
9. Davis, F. D.: Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, Vol. 13 (1989) 319-340
10. DeTurck, M. A., Harszlak, J. J., Bodhorn, D., Texter, L.: The Effects of Training Social Perceivers to Detect Deception from Behavioral Cues. Communication Quarterly, Vol. 38 (1990) 1-11
11. Fiedler, K., Walka, I.: Training Lie Detectors to Use Nonverbal Cues Instead of Global Heuristics. Human Communication Research, Vol. 20 (1993) 199-223
12. Frank, M. G., Feeley, T. H.: To Catch a Liar: Challenges for Research in Lie Detection Training. Journal of Applied Communication Research (2002)
13. George, J. F., Biros, D. P., Adkins, M., Burgoon, J. K., Nunamaker, F. J.: Testing Various Modes of Computer-Based Training for Deception Detection. Submitted to the 2nd NSF/NIJ Symposium on "Intelligence and Security Informatics", Tucson, AZ (2004)
14. Kraut, R.: Humans as Lie Detectors. Journal of Communication, Vol. 30 (1980) 209-216
15. Lin, M., Cao, J., Crews, M. J., Nunamaker, F. J., Burgoon, J. K.: Agent99 Trainer: Design and Implement a Web-based Multimedia Training System for Deception Detection Knowledge Transfer. Americas Conference on Information Systems (AMCIS 2003), Tampa, Florida
16. Miller, G. R., Stiff, J. B.: Deceptive Communication. Sage Publications, Inc. (1993)
17. Nielsen, J.: Usability Engineering. Academic Press, Boston (1993)
18. Weisberg, H. F., Krosnick, J. A., Bowen, B. D.: An Introduction to Survey Research, Polling, and Data Analysis. Sage, Thousand Oaks, CA (1996)
19. Zhang, D. S.: Virtual Mentor and Media Structuralization Theory. PhD dissertation, MIS Department, University of Arizona, Tucson, AZ (2002)
20. Zuckerman, M., Koestner, R., Alton, O. A.: Learning to Detect Deception. Journal of Personality and Social Psychology, Vol. 46 (1984) 519-528