PREFACE
Group Decision and Negotiation 13: 107–110, 2004. © 2004 Kluwer Academic Publishers. Printed in the Netherlands.
Introduction: Computer-Aided Support of the Detection of Deception SPECIAL ISSUE EDITORS – JUDEE K. BURGOON AND JAY F. NUNAMAKER, JR. University of Arizona
Collaborative distributed work is founded on principles of trust and good will. Yet, like all forms of human interaction, distributed interactions are as vulnerable (if not more so) as face-to-face communication to vested interests, hidden agendas, malicious and fraudulent messages, misrepresentations, concealment of adverse information, equivocations, and other forms of deception. Understanding how deception is perpetrated and can be detected when using computer-mediated communication and collaboration tools is a most timely topic, in light of the increasing ubiquity of social computing. Toward such understanding, we have assembled special issues on “Computer-Aided Support of the Detection of Deception” that bring together the expertise and research traditions of scholars from such fields as management information systems, communication, psychology, and criminal justice and run the gamut from theoretical précis to laboratory and field experiments to application development and testing. Papers received in response to the call for these special issues underwent three blind review cycles and reviews by a minimum of four reviewers. We are therefore confident that the resultant compilations offer highly valuable insights into the world of deception in computer-mediated environments and potential means of assisting detection with the aid of computers.

In the preceding issue, we had five papers on the detection of deception. The first paper, by Carlson, George, Burgoon, Adkins and White, entitled “Deception in Computer-Mediated Communication,” seeks to combine theories of interpersonal communication with theories of media use, such as channel expansion theory, to arrive at a set of empirically testable propositions regarding the enactment and detection of deception in computer-mediated communication. The paper integrates a broad range of literature on deception and electronic communication from which the resultant integrated model is derived.
The second paper by Marett and George, entitled “Deception in the Case of One Sender and Multiple Receivers,” offers a unique perspective on deception in group settings. It seeks to extend prior literature, most of which has examined the case of a single deceiver and single receiver, by presenting initial thoughts on deceptive communication when a deceiver has multiple receivers. This innovative paper considers the numerous complications and changes in strategy that are likely to arise when deceivers must juggle more than one audience. The third paper, by Frank, Feeley, Paolantonio and Servoss, entitled “Individual and Small Group Accuracy in Judging Truthful and Deceptive Communication,” serves as an excellent complement to the Marett and George paper in that it tests experimentally how deception detection in groups compares to deception detection by individuals. Using a jury
form, the reported study examined how well students could distinguish truths from lies told by videotaped students who offered their opinions on the death penalty or smoking in public. The fourth paper, by Vrij and Mann, entitled “Detecting Deception: The Benefit of Looking at a Combination of Behavioral, Auditory and Speech Content Related Cues in a Systematic Manner,” makes the case for relying on a combination of verbal and nonverbal indicators to achieve the greatest success in detecting deceit. To make their case, the authors review the ways in which people communicate at a distance and the verbal and nonverbal information that is available under different modalities. The fifth paper, by Zhou, Burgoon, Nunamaker, and Twitchell, entitled “Automating Linguistic-Based Cues (LBC) for Detecting Deception in Text-based Asynchronous Computer-Mediated Communication (TA-CMC),” also takes a multi-cue approach but focuses exclusively on verbal indicators that may distinguish truth tellers from deceivers under one specific form of distributed communication: asynchronous text-based communication. The paper reviews 27 potential language-based indicators that were clustered into nine linguistic constructs: quantity, diversity, complexity, specificity, expressivity, informality, affect, uncertainty, and nonimmediacy.

In this second special issue of GDN, we continue to navigate the relatively uncharted waters of how computer-mediated communication creates vulnerabilities to deception or information manipulation and how computer-aided tools can be enlisted to reduce such vulnerabilities. We present here five papers that test new tools that automate the identification of criminals, provide tactical decision support under stressful conditions, consider factors that separate successful from unsuccessful detectors, explain why and how task load affects use of a system, and investigate how media selection influences the detection of deception.
The sixth paper in the special issues on deception detection, by Gang Wang, Hsinchun Chen, and Homa Atabakhsh, entitled “Criminal Identity Deception and Deception Detection in Law Enforcement,” considers the very knotty issue of criminal identification when criminals intentionally falsify their identities. Currently, police database systems contain little information helpful in uncovering deceptive identities. The problem is compounded when criminals have multiple, non-matching identities in various databases. Law enforcement agencies therefore rely largely on investigations that are neither efficient nor fully effective. The objective of the Wang et al. work was therefore to propose an automated solution for matching and verifying identities. Drawing upon various theories of deception, expert opinion from a police detective, and a case study, the authors advance a definition of criminal identity deception. They then present a taxonomy of patterns used by criminals to falsify their identities that can be automated to improve identity detection by law enforcement. The paper offers a very sensible, feasible, and original approach to automating what is otherwise a very laborious and costly undertaking. The seventh paper, by C.A.P. Smith, Joan Johnston, and Carol Paris, entitled “Decision Support for Air Warfare: Detection of Deceptive Threats,” examines the effectiveness of a decision support system intended to aid detection of deceptive threats by offsetting the cognitive limitations of decision-making under stress. Various models of cognitive information processing and situational awareness are discussed for their implications in detecting deceptive information during a stressful and time-pressured task such as air warfare. An experiment is presented in which six-member teams had to perform decision-making under
different threat detection scenarios. Teams that had the aid of the Tactical Decision-Making Under Stress decision support system had fewer false alarms than those that did not use the system. Features of the system are discussed in terms of six design prescriptions intended to mitigate cognitive information processing limitations and biases. These design features have broad implications for the development of other computer aids to deception detection. The eighth paper, by Stefano Grazioli, entitled “Where Did They Go Wrong? An Analysis of the Failure of IT-Knowledgeable Internet Consumers to Detect Deception over the Internet,” considers what separates successful from unsuccessful detectors of intentional, malicious deception over the internet. Guided by the Theory of Deception, the investigation compares the information processing behavior of IT-savvy subjects who detected the deception, those who missed it, those who correctly identified the real site as non-deceptive, and those who incorrectly believed that the real site was deceptive. Hypotheses are generated for each of the four steps of the guiding information-processing model: activation of attention, deception hypothesis generation, hypothesis evaluation, and global assessment. Subjects visited either a real commercial site or a deceptive copycat (“page-jacking”) site that contained several deceptive manipulations. Results indicated that elevating subjects’ suspicions via priming affected detection success, that competence in evaluating the hypothesis of deception strongly differentiated successful from unsuccessful detectors, and that successful and unsuccessful detectors differed in their reliance on “assurance” as opposed to “trust” cues.
Results are discussed in terms of their implications for adopting strong server security authentication, obstacles to detection for naïve users, recommendations for a novel Deception Detection Support System implemented as a browser add-on, and pointers for practitioners in developing educational policies related to online information retrieval. The ninth paper, by David Biros, Mark Daly, and Gregory Gunsch, entitled “The Influence of Task Load and Automation Trust on Deception Detection,” considers how the task load users are experiencing affects the relationship between their trust in, and subsequent use of, a system’s automation. Decision-makers in many contexts rely on automated information systems to make tactical judgments and decisions. In situations of information uncertainty, such as military information warfare environments, decision-makers must remain aware of information reliability issues and temper their use of system automation in the face of potentially deceptive or inaccurate information. An individual’s degree of task load may alter how he or she utilizes and trusts such automated information. An experiment is reported in which users in a simulated command and control situation were subjected to different degrees of task load and their trust in the information system was manipulated. The findings concur with previous research showing a positive relationship between automation trust and automation use, but they also show that high task load has a negative impact on that relationship, especially in terms of over-reliance on automated information systems in decision-making activities. Such over-reliance can create vulnerabilities to deception and suggests the need for automated deception detection capabilities. The tenth paper, by Carlson and George, entitled “Media Appropriateness in the Conduct and Discovery of Deceptive Communication: The Relative Influence of Richness and
Synchronicity,” investigates how two properties of new media – synchronicity and richness – affect media selection for the conduct and detection of deception. Drawing upon previous models of mediated communication and channel expansion theory, two survey-based studies were conducted from the vantage points of the deceiver and the receiver. Study 1 (the deceiver) asked respondents to consider several different scenarios and the medium they would select to accomplish a specific deceptive act under each. Study 2 (the receiver) asked respondents how confident they would be in their ability to detect deception under differing degrees of synchronicity, media richness, media familiarity, and co-participant (deceiver) familiarity. Results indicated that deceivers prefer synchronous, non-reprocessable media, whereas receivers prefer rich media (regardless of synchronicity) and co-participants with whom they have more experience and familiarity. Implications of the results are considered in terms of building warnings into computer-mediated communication as well as combating receivers’ overconfidence in their detection abilities with synchronous and rich media. We commend these five articles, the five preceding ones, and those to follow in a final special issue for their originality and diverse insights into a topic that will doubtless continue to garner significant attention in both academic and public spheres.