Proceedings of the 37th Hawaii International Conference on System Sciences - 2004

TRAINING TO DETECT DECEPTION: AN EXPERIMENTAL INVESTIGATION

Joey F. George, Kent Marett
Florida State University

Judee K. Burgoon, Janna Crews, Jinwei Cao, Ming Lin
University of Arizona

Lt. Col. David P. Biros
USAF Chief Information Office

Abstract

Humans are not very good at detecting deception in normal communication. One possible remedy for improving detection accuracy is to educate people about various indicators of deception and then train them to spot these indicators in normal communication. This paper reports on one such training effort involving over 100 military officers. Participants received training on deception detection generally, on specific indicators, and on heuristics. They completed pre- and post-tests on their knowledge in these areas and on their ability to detect deception. Detection accuracy was measured by asking participants to judge whether behavior in a video, on an audiotape, or in a text passage was deceptive or honest. Trained individuals outperformed those who did not receive training on the knowledge tests, but there were no differences between the groups in detection accuracy. In addition, individuals who received training using specially developed software did as well as individuals who were trained by lecture, or by lecture and software in combination, on both knowledge and detection accuracy.

1 Introduction

Deception is pervasive and comes in many forms, and it can be a serious threat to individuals, society, the economy, and security. Although people are aware of the dramatic threats and deceptions spotlighted in the press, such as computer viruses, national traitors, corrupt politicians, or identity theft, most probably do not realize that simple deception can also be a serious threat that directly and detrimentally affects them.

One very common, but seemingly innocuous, form of deception is the e-mail hoax, which does everything from claiming that the recipient will receive money for forwarding the message to convincing recipients to delete "virus files" on their computers that are really system files. Although they may not seem overtly malicious or costly, hoaxes waste time, money, and system resources and may cause stress to the unsuspecting recipient. Hoaxes are so prolific that there are extensive, highly trafficked websites dedicated to providing information on the prevailing e-mail scams, like http://urbanlegends.about.com/ or http://www.scambusters.org/.

Social engineering is another common form of deception used to breach security. In social engineering, a person poses as someone else to obtain access to off-limits information or to persuade the deceived to comply with some other type of request. The infamous hacker Kevin Mitnick employed this technique often and with great success [22]. Effective in many circumstances, social engineering is often used to get the unsuspecting to open spam or pornographic e-mail or executable virus files, or to download malicious software that can be used to remotely compromise an infected machine and launch denial-of-service attacks or worse.

Therefore, the ability to reliably detect deception is important. How can this knowledge be used to improve deception detection? Training seems like the


logical conclusion. The goal is to improve accuracy by training people to consistently and reliably detect deception; however, previous research on the effectiveness of such training has found mixed results [19]. The research question, therefore, is whether training people to identify cues to deception can significantly improve their ability to detect it.

The study described herein was an attempt to develop and test a deception detection training program for rank-and-file military members. The training curriculum provided a basic understanding of deception and specific information about cues and heuristics for detecting it. The study also served as a test of the use of instructional technology, in the form of a training system called Agent99 Trainer, for training individuals in deception and its detection.

Following a brief review of the deception training literature, we present the study design. This is followed by a discussion of the Agent99 Trainer, its development, and its experimental validation. The paper ends with a presentation of the experimental procedures, the development of the measures used, and the findings on the effectiveness of the training curriculum and delivery method in improving accuracy in detecting deception.

2 Literature Review

Deception and its successful detection have been studied for decades. As Ekman [13] and others have implied, everyone lies to some extent, and lies can occur in any social situation and modality. Although there are many types of deception, for the purposes of this study we use the Buller and Burgoon [3] definition of deception: "a message knowingly transmitted by a sender to foster a false belief or conclusion by the receiver" (p. 205). This definition rules out honest mistakes, which are also common occurrences.

Most prior deception research has centered on the behavior of the liars (also referred to as "senders"), the detection abilities of the people who receive the lies, and the perceptions of those who merely observe the exchange [5, 23]. Detecting lies is a difficult task for most people, who are right only about half the time [15, 23], even though they may believe they are proficient. To try to improve these odds, researchers have worked to uncover reliable verbal and nonverbal indicators of deception [2, 9, 26]. These include increased blinking, higher voice pitch, increased self-grooming, more passive statements, more negative statements, and more distancing of the storyteller from the story told. Other cues that have emerged more recently include more pauses, longer response latencies, less eye contact, and shorter message duration [11, 25]. Obviously, not all the indicators are applicable to all modalities, but from a detection standpoint many of the cues do apply to multiple media, and instilling a working knowledge of these indicators in decision-makers is a worthy goal. As Fiedler and Walka [17] point out, without knowledge of deceptive cues, humans tend to rely on individual heuristics for their judgments of honesty, and these heuristics may be based on faulty logic to begin with and are thus less reliable. Logic dictates that if people can be trained to successfully identify these cues, then their detection accuracy should improve.

As stated before, past attempts at training for lie detection have produced mixed results, and when training does work, it may come with a price. For example, Biros and colleagues [1] found that training may induce individuals to issue more false alarms, that is, to mistakenly judge information as deceptive. Nevertheless, increased sensitivity toward the presence of deception, even to the point of becoming suspicious, has been found to produce better detection rates, and this is one of the desired products of detection training.

A training program designed to instruct communicators in the aforementioned indicators and other behavior displayed by liars should be developed with three components in mind: explicit instruction, practice, and feedback. Prior studies on detection training have shown that trainees perform best when explicit instruction on cues is coupled with repetitive exercises that allow trainees to evaluate examples of honesty and deception, and when they are afterwards given feedback on their performance [24]. Prior experiments that involved instructing human detectors in lie recognition are similar in that the curriculum typically includes the indicators uncovered by prior research and summarized in the aforementioned meta-analyses [9, 26]. In terms of practice, deTurck and Miller [12] support offering trainees chances to apply their instruction in a series of exercises testing their detection abilities, although they warn that too much practice may result in a downturn in performance due to fatigue or boredom. Finally, concerning


feedback, Zuckerman et al. [27] report that providing feedback benefits trainees because it reinforces the reliability of certain indicators associated with deception. It is especially useful for informing subjects of their actual abilities to detect lies, which are usually lower than their perceived capacities [14]. These three basic components were included in the training program we developed for this study.

The prior training literature has paid little attention to the delivery systems that might be used to develop trainees' deception detection abilities. Instructional technology has been used in many other curricula, but to the knowledge of the authors, it has not been used in training people to detect deception. Given the number of people who stand to gain from such training, instructional technology could be a practical substitute for a more costly expert lecturer. Training people to interpret behavioral cues pointing to deception should improve their detection abilities, and training delivered via instructional technology should be at least as effective as traditional lecturing. Discovering whether these assumptions are reasonable is the purpose of this research.

3 Research Questions

Two research questions guided our efforts. The first is whether it is possible to improve individual ability to detect deception through training. As stated previously, past research on the effectiveness of training to detect deception has been mixed. The second is whether the delivery mode used for training makes any difference in deception detection performance. If instructional technology works as well as a traditional lecture, it has the potential to be more cost effective, and it has the added benefit that it can be adapted so that it can be accessed from anywhere at any time through the Internet.

4 Study Design

The study was conducted in fall 2002 at a large USAF facility located in the United States. A total of 125 officers participated as subjects, although the number participating per session varied. The subjects were already assigned to "blocks," or classes, made up of sixteen officers, so the blocks were randomly assigned to either the control group or one of three treatment groups. For the first treatment, the lecture-only groups, traditional lectures were used for all three training sessions. For the second treatment, the computer-based training groups, lectures were used for the first and third sessions, but in the second session a specially created system called Agent99 Trainer was used exclusively. For the third treatment, the "combination" groups, lectures were used for the first and third sessions, but for the second session a combination of lecture and Agent99 Trainer was used. All lectures in all treatments were supported with the same PowerPoint presentations and interview examples. The control group received no training, but its members completed the same measurement instruments as the experimental subjects. A pre-session was used to collect baseline data on all subjects in all four groups. A sketch of this block-level assignment appears below.

The instructors were two USAF officers completing their master's degrees at the Air Force Institute of Technology and one doctoral student from a US business school. To avoid any potential instructor effect on performance, the instructors did not train the same blocks of subjects more than once, switching to another treatment with each new session.
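For concreteness, here is a minimal Python sketch of the block-level random assignment described above. It is purely illustrative: the block identifiers and the seed are hypothetical, and the authors describe no such script.

import random

CONDITIONS = ["control", "lecture-only", "computer-based", "combination"]

def assign_blocks(blocks, seed=0):
    # Randomly assign whole blocks (not individual officers) to conditions,
    # dealing the shuffled blocks round-robin across the four conditions.
    rng = random.Random(seed)
    shuffled = list(blocks)
    rng.shuffle(shuffled)
    return {b: CONDITIONS[i % len(CONDITIONS)] for i, b in enumerate(shuffled)}

blocks = ["block_%d" % i for i in range(1, 9)]  # hypothetical block IDs
print(assign_blocks(blocks))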

4.1 Agent99 Trainer

Agent99 Trainer was designed to provide well-structured instruction with supportive materials, practice through viewing real-life examples and scenarios, expert analysis as feedback, and access to self-paced, self-directed, anytime-anywhere learning. A Web-based, multimedia, learner-centered training system, Agent99 Trainer was developed as an adaptation and extension of the LBA (Learn by Asking) system [8] to specifically address the requirements of deception detection training. As a Web-based system, Agent99 Trainer offers anytime-anywhere training to the learner. As a learner-centered training system, Agent99 Trainer strives to engage learners' active participation in knowledge construction by developing their mental schemas while providing a self-paced, self-directed learning environment. Learner-centered training is particularly appropriate for ill-defined problems (Ertmer & Newby, 1993), such as reliably detecting deception [21, 27]. In addition, the research of Frank and Feeley [18] on deception detection training identifies three instructional components essential to a successful training curriculum: 1) explicit instruction, 2) practice, and 3) feedback. Agent99 Trainer was designed to provide these three components with the Watch Lecture


module and the View Examples with Analysis module. For this study, Agent99 Trainer was used as a delivery mode for the computer-based training and combination groups in Session 2 only. The functionality of the version tested was limited to two modules: 1) Watch Lecture and 2) View Examples with Analysis (Figure 1).

The Watch Lecture module contained a one-hour, multimedia expert lecture entitled Cues of Deception, thereby providing explicit instruction. Expert lecture video, PowerPoint presentation slides, and a lecture transcript are synchronized on an integrated, Web-based interface. One drop-down menu provides an outline of the topics in the lecture and allows users to select and access those topics on demand. Another drop-down menu provides a listing of, and access to, the communication examples for each lecture topic in the View Examples with Analysis module. Although Agent99 Trainer allowed users to pace themselves and direct their learning, the system was populated with exactly the same lecture materials and PowerPoint presentation as those used in the traditional lecture treatment, and all trainees were constrained to the same training time, regardless of treatment.

The View Examples with Analysis module encourages users to practice the knowledge learned in the Watch Lecture module by analyzing the veracity of interviewees in communication examples. To help users construct a broader and deeper understanding of deception under varying task and communication conditions, the interview examples were presented in three media (text, audio, and video) and two modalities (face-to-face conversation and NetMeeting™ chat). The examples were organized by the deception cues demonstrated therein and could be accessed sequentially or in a tree fashion directly from the Watch Lecture topics. Specifically, the module contained 23 total examples, including 15 interview segments from a series of studies on deceptive communication [4, 6, 7] and 9 military-centric enacted scenarios. For each example, an expert analysis of the veracity of the interviewee and the tell-tale cues present provides users with immediate response and explanatory feedback, thereby assisting them in constructing mental schemas for deception detection. The same examples were presented in the traditional lecture treatment; in fact, great care was taken to ensure that participants in all three treatments had access to exactly the same lecture content and examples for the cues training. Thus, Agent99 Trainer integrates three essential components of deception detection training: explicit instruction, practice, and feedback.

Figure 1: Interfaces of the (a) Watch Lecture and (b) View Examples with Analysis modules
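To make the structure of this content concrete, the following Python sketch models how lecture topics might link to examples carrying ground-truth veracity and expert feedback. All class and field names are our own illustrative assumptions; the paper does not publish Agent99 Trainer's internal schema.

from dataclasses import dataclass, field

@dataclass
class Example:
    medium: str           # "text", "audio", or "video" (assumed field names)
    deceptive: bool       # ground truth for the interviewee's statements
    expert_analysis: str  # feedback shown after the trainee's judgment

@dataclass
class LectureTopic:
    title: str
    examples: list = field(default_factory=list)

cues_lecture = [
    LectureTopic("Response latency", [
        Example("video", True, "Long pauses precede answers to direct questions."),
    ]),
    LectureTopic("Distancing language", [
        Example("text", True, "Few first-person pronouns; passive constructions."),
    ]),
]

# Examples can be reached sequentially or directly from their topic:
for topic in cues_lecture:
    for ex in topic.examples:
        print(topic.title, "->", ex.medium, "deceptive" if ex.deceptive else "truthful")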

4.2 Procedures

The training curriculum was developed jointly by the authors and their respective research teams. The basis for the curriculum was a set of three PowerPoint presentations, each on a different topic: an introduction to deception detection, cues used to detect deception, and heuristics for decision making that are susceptible to deception. Each presentation was designed to last one hour. The second lecture, on cues, also included deceptive communication examples used by the instructors to illustrate the cues. These examples were either text only, audio only, or video with audio. Most examples came from past studies of deception detection, consisting of experimental subjects trying to deceive their interviewers.


Other examples were specifically created and recorded for this study. The lectures, delivered by the training instructors, were also videotaped. The instructors pilot-tested all training materials, including Agent99 Trainer, weeks before the study began. In designing each session, an emphasis was placed on integrating instruction, practice, and feedback.

The basic procedures for the training sessions were as follows. Subjects reported to their regular block classroom at the USAF facility. They began by completing a battery of instruments, including a knowledge pre-test and a deception detection accuracy pre-test, both dedicated to determining how much of the subject matter the trainees understood entering the session. Subjects were then trained for approximately 45 minutes, slightly longer than in previous training studies [10]; control subjects were instead given a one-hour break in the interim. This study also differed from other training experiments in that a series of three training sessions was offered rather than a single lecture. The only differences in the training presentations occurred in Session 2 (the cues lecture), in which a third of the trained subjects received a traditional lecture, another third received training via the Agent99 Trainer software, and the remaining third received a combination of the lecture and the software. Afterwards, all subjects completed a knowledge post-test, made up of the same questions as the pre-test but in a different order, and a deception detection accuracy post-test, similar to the pre-test but consisting of different examples. At this point, participants in all conditions received feedback on the correct responses to the deception detection pre- and post-tests. Subjects then completed additional instruments and were dismissed.

4.3 Measures

Two different measures of performance were developed for this study: knowledge tests and judgment, or detection accuracy, tests. Each knowledge test was composed of 12 multiple-choice questions taken directly from the content covered in the training curriculum. Since three training sessions were scheduled, three knowledge tests were created, each based on the content of that day's session. The knowledge pre-test and post-test for each session were identical, except for the ordering of the questions and the ordering of the choices for each question. Each subject's knowledge was measured as proficiency on each knowledge test, that is, the number of correctly answered questions from 0 to 12, with 12 being a perfect score.

Six detection accuracy tests were developed. A common measure in deception detection studies, the judgment tests were designed to test the ability of participants to judge the veracity (truth or untruth) of statements made by an interviewee in a short interview. Each test consisted of six short interviews in three different media (2 text, 2 audio, and 2 video with audio), culled from twenty real interviews in three separate research studies on deception. Furthermore, the interviews in each test were half truthful and half deceptive and spanned a combination of difficulty levels. The interviews were randomly ordered within each test based on media, veracity, and difficulty. Within each pre-test/post-test set (12 interviews), each interviewee was unique, to avoid results driven by communicator (interviewee) specific cues. Subject performance on a judgment task was the number of correct responses, ranging from 0 to 6, with 6 being a perfect score.
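Both scoring rules reduce to counting matches against an answer key. The short Python sketch below illustrates this; the keys and responses shown are hypothetical, not the study's actual items.

def score(responses, answer_key):
    # Number of responses matching the key; used for both test types.
    return sum(r == k for r, k in zip(responses, answer_key))

knowledge_key = ["b", "d", "a", "c", "b", "a", "d", "c", "a", "b", "d", "c"]
judgment_key = ["truthful", "deceptive", "deceptive",
                "truthful", "deceptive", "truthful"]

print(score(["b", "d", "a", "c", "a", "a", "d", "c", "a", "b", "d", "a"],
            knowledge_key))                    # 10 of 12
print(score(["truthful"] * 6, judgment_key))   # 3 of 6, i.e., chance level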

4.4 Pilot Tests

The difficulty and equivalency of the six veracity judgment tests were analyzed in two pilot experiments, with 124 upper-division management information systems (MIS) undergraduate students in Introduction to Business Information Systems as participants. The purpose of the experiments was to test the difficulty of the individual items and the equivalency of the six compiled tests. The test forms needed to be of equal average difficulty because the tests were to be used to measure changes in deception detection accuracy.

In the first pilot experiment (PE1), 96 students completed one of the six veracity judgment test forms, with an average of 16 students completing each form. Participants took approximately fifteen minutes to complete a test form. The students had no previous training in deception detection and thus were expected to achieve approximately 50 percent accuracy. Because the PE1 results indicated that the six test forms were not of equivalent difficulty, items were switched among four of the forms based on the average item scores in PE1. The second pilot experiment (PE2) was then conducted to collect data on the four revised test forms.


The PE2 participants were students who had not taken part in PE1. Each student completed two test forms, with an unrelated task in between; an average of 14 students completed each form. The PE2 data were combined with the PE1 data for the two test forms that had not been revised and re-analyzed. The analysis indicated that the accuracy rates achieved on the six test forms were statistically equivalent (p = .825). Table 1 presents the data for the revised veracity judgment test forms.

Table 1: Pilot Experiment 2 – Data from Revised Veracity Judgment Tests

Test Form    N    Mean (Std Dev)   95% CI Lower   95% CI Upper   Min   Max
1            15   3.47 (0.83)      3.00           3.93           2     5
2            15   3.67 (1.18)      3.02           4.32           2     5
3            12   3.83 (0.94)      3.24           4.43           3     6
4            16   3.31 (0.79)      2.89           3.74           2     4
5            13   3.62 (1.39)      2.78           4.45           1     6
6            15   3.60 (0.91)      3.10           4.10           1     4
Total        86   3.57 (1.00)      3.36           3.78           1     6

ANOVA p-value for the difference among the mean scores on the six test forms: 0.825
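The equivalency check in Table 1 is a one-way ANOVA across the six test forms. The Python sketch below shows the shape of that analysis with fabricated stand-in scores (the study's raw per-student data are not reported here); a large p-value indicates the forms are statistically indistinguishable in difficulty.

from scipy import stats

# Hypothetical per-student scores (0-6) on each test form.
scores_by_form = {
    1: [3, 4, 3, 4, 2, 5, 3, 4, 3, 4, 3, 3, 4, 4, 3],
    2: [4, 3, 5, 2, 4, 3, 4, 5, 3, 4, 3, 4, 3, 4, 4],
    3: [4, 3, 4, 5, 3, 4, 3, 4, 4, 3, 5, 4],
}

f_stat, p_value = stats.f_oneway(*scores_by_form.values())
print("F = %.2f, p = %.3f" % (f_stat, p_value))  # p > .05 suggests equivalent forms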

4.5 Findings

Table 2 provides the results of the knowledge tests for the control group and the combined treatment groups for all three sessions in the primary experimental study. Each knowledge test had 12 questions, and the results reported indicate the number of questions answered correctly. Table 3 provides the results of the deception detection accuracy tests for the control group and the combined treatment groups for all three sessions. Each accuracy test had six examples, and the results reflect the number of examples evaluated correctly.

The first research question asked whether individual performance in deception detection could be improved through training. The knowledge tests can be used as a manipulation check, comparing the control group to the treatment groups, to determine whether training was effective in imparting information about deception and its detection. Performance on the knowledge tests was measured by taking the difference between pre-test and post-test scores within each session. Independent t-tests showed that the treatment groups differed from the control group for all three sessions (introduction: t(113) = -8.921, p < .001; cues: t(113) = -4.54, p < .001; heuristics: t(113) = -7.536, p < .001). For each session, the control group did not improve, while the training groups did. Trained individuals, then, appear to have learned about deception and its detection through the training program.

Table 2. Means and standard deviations (in parentheses) for knowledge pre-tests and post-tests

              Control (N = 29)          Treatments (N = 86)
              Pre-test     Post-test    Pre-test     Post-test
Introduction  5.07 (1.60)  5.14 (1.64)  5.55 (1.69)  8.81 (1.64)
Cues          4.07 (1.49)  4.38 (1.59)  5.57 (1.63)  7.73 (2.30)
Heuristics    5.41 (2.23)  4.93 (2.30)  6.01 (1.97)  8.74 (2.07)

Table 3. Means and standard deviations (in parentheses) for accuracy pre-tests and post-tests

              Control (N = 29)          Treatments (N = 85)
              Pre-test     Post-test    Pre-test     Post-test
Introduction  2.89 (1.11)  3.72 (1.25)  3.11 (1.29)  3.67 (1.01)
Cues          4.10 (1.23)  2.97 (0.78)  4.39 (1.11)  3.65 (0.86)
Heuristics    3.86 (1.22)  3.38 (1.37)  3.72 (1.14)  3.39 (1.02)
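The manipulation check above is an independent-samples t-test on per-subject gain scores (post-test minus pre-test), control versus combined treatments. A minimal Python sketch, with hypothetical gain scores standing in for the study's data:

from scipy import stats

control_gain = [0, 1, -1, 0, 1, 0, 0, -1, 1, 0]   # hypothetical post - pre
treatment_gain = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]

t_stat, p_value = stats.ttest_ind(control_gain, treatment_gain)
print("t = %.2f, p = %.4f" % (t_stat, p_value))
# The paper reports, e.g., t(113) = -8.921, p < .001 for the introduction session.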

0-7695-2056-1/04 $17.00 (C) 2004 IEEE

6

Proceedings of the 37th Hawaii International Conference on System Sciences - 2004

Were trained individuals able to use what they had learned through training to improve their ability to detect deception? This issue is at the heart of the first research question and can be investigated by looking at performance on the judgment tests. Performance on the judgment tests was also measured by taking the difference between pre-test and post-test scores within each session. There were no statistically significant differences between the treatment groups and the control group on deception detection accuracy. Deception detection accuracy did not improve for trained subjects when their pre-test and post-test scores were compared for each of the three training sessions.

However, there was an overall improvement in deception detection accuracy across the entire set of training sessions. The general trend, although not statistically significant within groups, is toward an improvement in deception detection accuracy for the control group (an improvement of 0.48 between the pre-test of the first session and the post-test of the last session) and for the combined treatment groups (an improvement of 0.28 for the same comparison). If all subjects are combined, the average improvement in deception detection performance between the pre-test of the first session and the post-test of the last session is 0.333, which is statistically significant (t(113) = 2.048, p = .043) (Figure 2).

Figure 2: Deception detection accuracy scores (number of correct answers) on the pre- and post-tests of the three sessions (9/17, 10/1, and 10/15), for the control group, the combined treatment groups, and the overall sample
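The overall-improvement result is a paired comparison of each subject's first pre-test against their last post-test. A Python sketch of that test, again with hypothetical scores:

from scipy import stats

first_pre = [3, 2, 4, 3, 3, 4, 2, 3, 4, 3]   # hypothetical 0-6 scores
last_post = [3, 3, 4, 4, 3, 4, 3, 4, 4, 3]

t_stat, p_value = stats.ttest_rel(last_post, first_pre)
print("t = %.2f, p = %.3f" % (t_stat, p_value))
# The paper reports a mean gain of 0.333, t(113) = 2.048, p = .043.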

Table 4: Means and standard deviations (in parentheses) for judgment and knowledge pre-tests and post-tests for Session 2, comparing the Agent99 Trainer group to the combined lecture and combination groups

            Agent99 Trainer (N = 26)    Lecture and combo (N = 59)
            Pre-test     Post-test      Pre-test     Post-test
Judgment    4.42 (1.24)  3.42 (0.81)    4.37 (1.07)  3.75 (0.86)

            Agent99 Trainer (N = 26)    Lecture and combo (N = 60)
            Pre-test     Post-test      Pre-test     Post-test
Knowledge   5.42 (1.60)  7.31 (2.70)    5.63 (1.66)  7.92 (2.11)


The second research question dealt with differences in deception detection performance based on delivery mode. Table 4 presents the performance breakdown on both the knowledge and judgment tests for the treatment group that used the Agent99 Trainer software and for the other two treatment groups; data for the lecture-only and the combination groups have been combined for this analysis. There were no statistically significant differences between Agent99 Trainer and the other delivery modes on the deception knowledge tests (F(1,84) = 0.65, n.s.). There were also no statistically significant differences between Agent99 Trainer and the other delivery modes on deception detection accuracy (F(1,83) = 1.32, n.s.). Agent99 Trainer by itself seems capable of delivering the same material as a lecture or a combination of lecture and software. In addition, training through Agent99 Trainer alone seems to lead to the same level of performance in detecting deception as training through the other delivery methods.

5 Discussion

The groups that received training improved their understanding of deception, vis-à-vis the control group, as shown by their knowledge test performance. However, there were no differences in deception detection accuracy between the treatment groups and the control group, indicating that while the training improved factual knowledge, it may not have improved detection ability compared to receiving no training at all. Obviously this is not the desired result, but we believe that the limited number of examples provided in the Agent99 Trainer training, and the constrained training time, are possible contributing factors. These factors may have limited the effectiveness of Agent99 Trainer, since a strong advantage of a Web-based training system is its capability to provide self-paced, repeatable training with unlimited access time.

Although there were no differences between groups, participants in the study did significantly improve their detection accuracy between the first and last veracity judgment tests overall. It may be that mere exposure to the accuracy tests improved performance, possibly by heightening subject lie bias [16]. Although the control group did not receive explicit training per se, its members did complete the judgment tests and received response feedback immediately afterward, like the treatment groups. This could be viewed as a form of training, in that it mimics the practice with feedback provided by the View Examples with Analysis module. If so, the positive result may substantiate the benefits of that module for deception detection training, especially since the effect was obtained with a very limited number of examples.

In addition, although the veracity judgment tests were piloted, the results on one test may have been adversely biased by the training. Exit comments indicated that the answers for two items on the post-test for Session 2 seemed to contradict the Session 2 training, although inspection uncovered other tell-tale cues not covered in the training. Thus, further testing and revision of the measure may be indicated.

The lack of differences among treatment conditions at the very least suggests that computer-based tools can deliver relevant training material without a human instructor delivering all of the lecture content, with the added benefit of being self-paced and repeatable with time and place independence. The subtleties exposed by comparisons made within and between groups, and on both knowledge and judgment tests, illustrate the value of the training design employed in this investigation. Studies that lack pre- to post-test comparisons, control groups, or both types of knowledge gains (cognitive and judgmental) may fail to adequately discern what a given training curriculum and tool provide. Nevertheless, the judgment accuracy rates of the training program developed and tested in this study compare favorably with other training studies, including those focusing primarily on content analysis rather than cue-based training [20]. It would be speculative to suggest that one type of training is superior to another without a direct comparison.

6 Conclusions

The study reported here presents two main conclusions about training people to detect lies when confronted with them. First, over time, training does make a difference in lie detection performance. Continued exposure to events that require a person to judge the honesty of others, and a general understanding of the concepts involved in interpersonal deception, are important parts of the training. Second, instruction by electronic means seems to work as well as traditional lecture-based delivery. Electronic delivery of training may therefore be an acceptable manner to teach deception


detection, allowing for a cheaper and more time-flexible means of training.

Acknowledgment

Portions of this research were supported by funding from the U. S. Air Force Office of Scientific Research under the U. S. Department of Defense University Research Initiative (Grant #F49620-01-1-0394). The views, opinions, and/or findings in this report are those of the authors and should not be construed as an official Department of Defense position, policy, or decision.

References

[1] Biros, D., George, J., and Zmud, R., "Inducing sensitivity to deception in order to improve decision making performance: A field study," MIS Quarterly, vol. 26, pp. 119-144, 2002.

[2] Buller, D. and Burgoon, J., "Deception: Strategic and non-strategic communication," in Strategic Interpersonal Communication, J. A. Daly and J. M. Wiemann, Eds. Hillsdale, NJ: Erlbaum, 1994, pp. 191-223.

[3] Buller, D. and Burgoon, J., "Interpersonal deception theory," Communication Theory, vol. 6, pp. 203-242, 1996.

[4] Burgoon, J. and Buller, D., "Interpersonal deception: III. Effects of deceit on perceived communication and nonverbal behavior dynamics," Journal of Nonverbal Behavior, vol. 18, pp. 155-184, 1994.

[5] Burgoon, J., Buller, D., Floyd, K., and Grandpre, J., "Deceptive realities: Sender, receiver, and observer perspectives in deceptive conversations," Communication Research, vol. 23, pp. 724-748, 1996.

[6] Burgoon, J., Buller, D., White, C., Afifi, W., and Buslig, A., "The role of conversational involvement in deceptive interpersonal interactions," Personality and Social Psychology Bulletin, vol. 25, pp. 669-685, 1999.

[7] Burgoon, J., Stoner, G., Bonito, J., and Dunbar, N., "Trust and deception in mediated communication," presented at the 36th Hawaii International Conference on System Sciences, 2003.

[8] Cao, J., Crews, J., Lin, M., Burgoon, J., and Nunamaker, J., "Designing Agent99 Trainer: A learner-centered, web-based training system for deception detection," presented at the NSF/NIJ Symposium on Intelligence and Security Informatics, Tucson, AZ, 2003.

[9] DePaulo, B., Lindsay, J., Malone, B., Muhlenbruck, L., Charlton, K., and Cooper, H., "Cues to deception," Psychological Bulletin, vol. 129, pp. 74-118, 2003.

[10] deTurck, M., Harszlak, J., Bodhorn, D., and Texter, L., "The effects of training social perceivers to detect deception from behavioral cues," Communication Quarterly, vol. 38, pp. 189-199, 1990.

[11] deTurck, M. and Miller, G., "Deception and arousal: Isolating the behavioral correlates of deception," Human Communication Research, vol. 12, pp. 181-201, 1985.

[12] deTurck, M. and Miller, G., "Training observers to detect deception: Effects of self-monitoring and rehearsal," Human Communication Research, vol. 16, pp. 603-620, 1990.

[13] Ekman, P., Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage, 2nd ed. New York: W. W. Norton and Company, 1992.

[14] Elaad, E., "Effects of feedback on the overestimated capacity to detect lies and the underestimated ability to tell lies," Applied Cognitive Psychology, vol. 17, pp. 349-363, 2003.

[15] Feeley, T. and deTurck, M., "Global cue usage in behavioral lie detection," Communication Quarterly, vol. 43, pp. 420-430, 1995.

[16] Feeley, T. and Young, M., "Humans as lie detectors: Some more second thoughts," Communication Quarterly, vol. 46, 1998.

[17] Fiedler, K. and Walka, I., "Training lie detectors to use nonverbal cues instead of global heuristics," Human Communication Research, vol. 20, pp. 199-223, 1993.

[18] Frank, M. and Feeley, T., "To catch a liar: Challenges for research in lie detection training," Journal of Applied Communication Research, in press.

[19] Kassin, S. and Fong, C., "'I'm innocent!': Effects of training on judgments of truth and deception in the interrogation room," Law and Human Behavior, vol. 23, pp. 499-516, 1999.

[20] Landry, K. and Brigham, J., "The effect of training in criteria-based content analysis on the ability to detect deception in adults," Law and Human Behavior, vol. 16, pp. 663-676, 1992.

[21] Levine, T., Park, H. S., and McCornack, S., "Accuracy in detecting truths and lies: Documenting the 'veracity effect'," Communication Monographs, vol. 66, pp. 125-144, 1999.

[22] Littman, J., The Fugitive Game. Boston: Little, Brown, and Co., 1996.

[23] Miller, G. and Stiff, J., Deceptive Communication. Newbury Park, CA: Sage Publications, 1993.

[24] Vrij, A., "The impact of information and setting on detection of deception by police detectives," Journal of Nonverbal Behavior, vol. 18, pp. 117-136, 1994.

[25] Vrij, A., Edward, K., Roberts, K., and Bull, R., "Detecting deceit via analysis of verbal and nonverbal behavior," Journal of Nonverbal Behavior, vol. 24, pp. 239-263, 2000.

[26] Zuckerman, M. and Driver, R., "Telling lies: Verbal and nonverbal correlates of deception," in Nonverbal Communication: An Integrated Perspective, A. W. Siegman and S. Feldstein, Eds. Hillsdale, NJ: Erlbaum, 1985, pp. 129-147.

[27] Zuckerman, M., Koestner, R., and Alton, A., "Learning to detect deception," Journal of Personality and Social Psychology, vol. 46, pp. 519-528, 1984.