Impact of Anti-Phishing Tool Performance on Attack Success Rates

Ahmed Abbasi
Information Technology, University of Virginia, Charlottesville, Virginia, USA
[email protected]

Fatemeh “Mariam” Zahedi and Yan Chen
Information Technology Management, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
[email protected], [email protected]
Abstract—Phishing website-based attacks continue to present significant problems for individual and enterprise-level security, including identity theft, malware, and viruses. While the performance of anti-phishing tools has improved considerably, it is unclear how effective such tools are at protecting users. In this study, an experiment involving over 400 participants was used to evaluate the impact of anti-phishing tools’ accuracy on users’ ability to avoid phishing threats. Each of the participants was given either a high accuracy (90%) or low accuracy (60%) tool and asked to make various decisions about several legitimate and phishing websites. Experiment results revealed that participants using the high accuracy anti-phishing tool significantly outperformed those using the less accurate tool in their ability to: (1) differentiate legitimate websites from phish; (2) avoid visiting phishing websites; and (3) avoid transacting with phishing websites. However, even users of the high accuracy tool often disregarded its correct recommendations, resulting in users’ phish detection rates that were approximately 15% lower than those of the anti-phishing tool used. Consequently, on average, participants visited between 74% and 83% of the phishing websites and were willing to transact with as many as 25% of the phishing websites. Anti-phishing tools were also less effective against one particular type of threat. The results suggest that while the accuracy of anti-phishing tools is a critical factor, reducing the success rates of phishing attacks requires other considerations such as improving tool interface/warning design and enhancing users’ knowledge of phishing. Given the prevalence of phishing-based web fraud, the findings have important implications for individual and enterprise security.

Keywords - phishing; Internet fraud; online security; enterprise security; fake websites; anti-phishing tools; security usability.

I. INTRODUCTION

Phishing website-based attacks continue to generate billions of dollars in fraudulent revenue at the expense of individual users and organizations [23]. In addition to monetary losses, phishing attacks pose additional enterprise-level security threats, including malware and viruses [9, 10, 20]. Given the dynamic, adversarial, and challenging nature of the problem, organizations are investing more money than ever in protective tools designed to combat phishing [28]. Two common types of phishing websites are spoof and concocted websites [1]. Spoof sites are imitations of existing commercial websites [8, 9]. Commonly spoofed websites include eBay, PayPal, various banking and escrow service providers [2], and e-tailers. Spoof websites attempt to steal unsuspecting users’ identities: account logins, personal information, credit card numbers, etc. [9]. Online fraud prevention databases such as PhishTank maintain URLs for millions of verified spoof websites used in phishing attacks intended to mimic thousands of legitimate entities. Concocted websites deceive users by attempting to appear as unique, legitimate commercial entities such as shipping companies, escrow services, investment banks, and online pharmacies [2, 3, 4]. The objective of concocted websites is failure-to-ship fraud: taking customers’ money without providing the agreed-upon goods or services [27]. Both spoof and concocted websites are also commonly used to disseminate malware and viruses [26].

Anti-phishing tools are a type of protective technology [10] designed to protect users against phishing attacks that rely on spoof or concocted websites. Unfortunately, these tools’ poor detection performance has hindered their adoption and perceived usefulness; users are not very trusting of their recommendations, even when they are correct [3, 21]. This has resulted in the “cry-wolf” effect, a behavioral response to the inadequate accuracy of a warning system [29]. Consequently, considerable recent work has focused on improving the detection capabilities of anti-phishing tools [1, 3, 13, 19]. However, reducing the impact of phishing in both personal and organizational settings is largely dependent on individual web users’ security behaviors [16]. With users often being the weak link in the security loop [17, 20], it is unclear whether more accurate anti-phishing tools would translate into higher phishing identification rates for users.

Accordingly, in this study we investigate the impact of anti-phishing tool performance on users’ ability to differentiate legitimate websites from phish. A controlled experiment involving over 400 participants was used to examine the impact of tool accuracy and threat type (i.e., spoof or concocted websites) on users’ ability to detect phishing websites and to avoid visiting and transacting with such websites. The remainder of the paper is organized as follows. Section II presents a review of related work. Section III discusses research gaps and questions, while Section IV describes the experiment design. Section V presents the experiment results. Conclusions are discussed in Section VI.
II. RELATED WORK

Prior work has focused on three important aspects of phishing: (1) the design and development of novel anti-phishing tools capable of providing enhanced detection capabilities; (2) benchmarking the performance of existing tools; and (3) evaluating users’ ability to detect phishing websites with or without the aid of an anti-phishing tool.

Existing anti-phishing tools use fraud cues and blacklists to determine whether a particular website is legitimate or a phish [2]. Fraud cues are website content, linkage, and design elements that can serve as reliable indicators regarding the legitimacy of a website [1, 3]. These cues, which are generally derived from the body text, URL tokens, source code, images, links, and domain registration information of known legitimate and phishing websites, are then input into machine learning classification algorithms [1, 2, 4, 6, 13, 19, 24]. Blacklists are databases of URLs for known phishing websites, developed and maintained by online communities such as PhishTank. Blacklists are used by lookup-based tools, including anti-phishing security toolbars found in web browsers such as Internet Explorer and Firefox [3, 21, 23].
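To make these two detection strategies concrete, the following minimal sketch illustrates a blacklist lookup and a fraud-cue classifier. The blacklist entries, cue set, and choice of logistic regression are illustrative assumptions of ours; no benchmarked tool is implemented exactly this way.

```python
# A simplified sketch (ours, not any benchmarked tool) of the two strategies.
from urllib.parse import urlparse
from sklearn.linear_model import LogisticRegression

# Lookup-based detection: match the URL's host against a database of known
# phishing sites (e.g., community-maintained data such as PhishTank's).
BLACKLIST = {"badpharma-example.com", "paypa1-login-example.net"}  # hypothetical

def blacklist_verdict(url: str) -> bool:
    """True if the URL's host appears on the blacklist (a known phish)."""
    return urlparse(url).hostname in BLACKLIST

# Fraud-cue detection: derive simple indicators from the URL and page content,
# then let a trained classifier decide. Real tools use far richer cues (body
# text, source code, images, linkage, registration data) and stronger learners.
def extract_cues(url: str, body_text: str) -> list:
    host = urlparse(url).hostname or ""
    return [
        float(len(url)),                          # unusually long URLs
        float(host.count("-")),                   # hyphen-laden hostnames
        float(any(ch.isdigit() for ch in host)),  # digits in the hostname
        float("escrow" in body_text.lower()),     # suspicious content keyword
    ]

def train_cue_classifier(urls, bodies, labels):
    """Fit a classifier on labeled examples (label 1 = phish, 0 = legitimate)."""
    X = [extract_cues(u, b) for u, b in zip(urls, bodies)]
    return LogisticRegression().fit(X, labels)
```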
Recent benchmarking studies have revealed that the performance of existing anti-phishing tools varies considerably with respect to detection rates on phishing websites [23]. Moreover, performance is also impacted by whether the website is concocted or a spoof [1]. Accuracies for commonly used and state-of-the-art anti-phishing tools fall between 55% and 92% [3]. With respect to users’ ability to detect phish, prior studies have found that users have difficulty correctly identifying phishing websites. In an experiment involving experienced Internet shoppers who regularly made online purchases, more than 82% of the participants purchased products from a phishing website [14]. In a more recent study, as many as 72% of the participants provided personal information to a spoof website they believed to be legitimate [15]. In the case of spoof websites, this is often attributable to the close resemblance between the phishing website and the legitimate website it attempts to mimic. Similarly, the authentic appearance of concocted websites makes their correct detection difficult. Moreover, using anti-phishing tools has not always improved results, since users often disregard or explain away tool recommendations [8, 18, 21].

III. RESEARCH GAPS AND QUESTIONS

Recent work on improving anti-phishing algorithms has resulted in significant improvements in detection capabilities, with accuracies of 90% or slightly better [1, 3, 13, 19]. However, given that phishing is a form of semantic attack that targets people as opposed to software or hardware vulnerabilities [20], anti-phishing tools are only effective if users heed their warnings. Consequently, in semantic attacks, user behavior (which can often be unpredictable) plays a critical role [17]. Prior studies have observed that even while using anti-phishing security toolbars, users were often ineffective in detecting phishing websites. For instance, Dhamija et al. [8] found that when using the Firephish anti-phishing security toolbar (which comes with the Firefox web browser), users were unable to accurately differentiate legitimate websites from spoof phishing websites 40% of the time. However, based on the results from benchmarking studies of anti-phishing tools, popular tools such as the IE security toolbar (used with Internet Explorer) and Firephish were only found to have overall accuracies of approximately 55% [1, 3]. These tools’ poor results have hindered their adoption and perceived usefulness; users are not very trusting of their recommendations [3, 21]. It is unclear whether more accurate anti-phishing tools would facilitate enhanced detection capabilities for users.

When evaluating users’ performance against phishing attacks, important considerations include the ability to correctly detect phishing websites, to avoid visiting phishing websites, and to avoid transacting with phishing websites. Furthermore, it is also important to consider the impact of different types of phishing threats on users’ ability to make these three decisions. The ability to detect phishing websites is important for obvious reasons: users are highly susceptible to phishing attacks because they do not realize that the websites are illegitimate [8, 9, 15, 21]. Analysis of visitation is important since, by visiting phishing websites, users become susceptible to malware [9]. In one recent phishing attack, millions of users were infected with malware from concocted anti-virus software websites [26]. Consequently, most anti-phishing tools are designed to preemptively warn users about potential phish before they visit the website (i.e., between the time that a URL is entered or clicked and the time it is displayed). Popular web browser security toolbars generally display a warning web page. In order to continue to the suspected phishing website, users must explicitly disregard the warning by clicking on a link at the bottom of the page.

Analyzing users’ intent to transact with phishing websites is important since providing personal information makes users susceptible to identity theft and failure-to-ship fraud [27]. In a recent phishing attack, 43 million credit card numbers were stolen using concocted websites [26]. Prior studies have also found that users are often too willing to transact with phishing websites, readily providing credit card numbers and passwords [14, 15]. Phishing attacks can employ different tactics, depending on the type of threat [1, 22]. Spoofs mimic top-level pages of legitimate websites (e.g., homepages and login pages) with the intention of identity theft. Concocted websites attempt to appear as unique providers of goods and/or services with the intention of failure-to-ship fraud [27]. Prior studies have found that the strategies utilized by different spoof websites can result in diverse fraud cues (i.e., potentially noticeable indicators of illegitimacy) and, consequently, varying user detection rates [8, 21]. However, it is unclear what impact concocted versus spoof website-based attacks have on users’ ability to detect and avoid phishing.

Based on the aforementioned research gaps, in this study we address the following research questions:
•	What impact does anti-phishing tool accuracy have on users’ ability to detect phishing websites?
•	How does anti-phishing tool accuracy affect users’ ability to avoid visiting and transacting with phishing websites?
•	What impact does the type of phishing threat (i.e., spoof or concocted) have on users’ performance while using anti-phishing tools?
IV. EXPERIMENT DESIGN

In light of the proposed research questions, a controlled lab experiment was conducted. In the experiment, participants were asked to purchase a product from a list of 5 legitimate and 5 phishing websites (10 URLs in total). Both categories of phishing websites were incorporated: concocted and spoof [1, 3, 9]. An anti-phishing tool was used to provide warnings. The tool had either 60% or 90% accuracy; these two levels were chosen since prior benchmarking studies have revealed that most anti-phishing tools’ performance falls within this range [1, 3, 23]. The experiment used a factorial design, where each participant was given either legitimate and concocted websites or legitimate and spoof websites, and either a 60% accurate tool or a 90% accurate one, resulting in 4 possible combinations. Fig. 1 shows an illustration of the experiment design. The participants were given the following task: purchase a particular over-the-counter drug from an online pharmacy. Each participant was asked to purchase the same product. This task was chosen since online medical and health-related websites are becoming increasingly important due to the rise of Health 2.0; users are increasingly turning to the Internet as a source for products and information [11, 12]. However, medical phishing is also becoming increasingly pervasive, making the selected application domain for the experiment highly relevant [4, 5, 11].

Figure 1. Experiment Design. Participants were given a task and a list of 10 URLs: 5 legitimate and 5 concocted or spoof. For each URL clicked, the anti-phishing tool evaluated the website and either redirected the web browser to a warning page or displayed the requested web page. The anti-phishing tool was either 60% or 90% accurate in its recommendations. Participants could ignore the warning and continue on to visit the website by clicking on a URL at the bottom of the warning page.

Each participant was given 5 legitimate and 5 phishing URLs. The 5 legitimate websites’ URLs were randomly selected from a pool of 15 known legitimate websites taken from the National Association of Boards of Pharmacy (www.nabp.net). The legitimate websites were roughly balanced between those with very high levels of web traffic, those with average levels, and those with below average traffic (relative to other online pharmacy websites on the Internet). Each participant was also given URLs for either 5 spoof or 5 concocted websites (always one of the two categories). These URLs were also selected randomly from a set of 15 spoof and 15 concocted websites. The spoof site URLs, which were replicas of the 15 legitimate websites included in the experiment, were taken from the popular phishing database PhishTank (www.phishtank.com). The concocted URLs were derived from LegitScript (www.legitscript.com), an independent organization that certifies online pharmacies and maintains a database of concocted ones. The 10 URLs provided to each participant were displayed in random order.

The participants were required to click on each URL, in any order, to search for the product. Clicking on a URL triggered the anti-phishing tool, which evaluated the website and made a recommendation. If the tool considered the website to be a phish, the participant’s web browser was redirected to a warning page. The standard Microsoft Internet Explorer warning page was used since it is similar to those used by other popular browsers such as Mozilla Firefox and Google Chrome. When presented with a warning, participants had the option of either heeding the warning and returning to the URL list without visiting the site, or ignoring the warning and continuing on to the website (by clicking on a URL on the warning page). If the tool considered the website legitimate, the URL’s page was displayed in the web browser. In line with prior benchmarking studies [1, 23], the anti-phishing tool was either 60% or 90% accurate in its predictions. For example, the 90% tool was always incorrect in exactly one of its 10 predictions for a given participant, either failing to warn against a phishing website (a false negative) or displaying a warning for a legitimate website (a false positive).
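This interposition sequence reduces to a small decision flow. The sketch below abstracts it under our own naming (the experiment software itself is not reproduced here); it captures only the branching logic, not the actual browser redirection.

```python
# Abstract sketch of the experiment's warning flow (function and return
# values are our own names, not the experiment software's).
def handle_url_click(url: str, tool_flags_phish: bool, user_ignores_warning) -> str:
    """Determine which page a participant sees after clicking a URL."""
    if not tool_flags_phish:
        return f"display:{url}"      # tool deems the site legitimate
    # Tool warns: the browser is first redirected to the warning page.
    if user_ignores_warning(url):
        return f"display:{url}"      # participant clicks through the warning
    return "return-to-url-list"      # participant heeds the warning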
Prior research has shown that certain legitimate and phishing websites are more difficult to correctly classify than others [2]. Instance-level analysis of false positive and false negative classifications across various anti-phishing tools has revealed consistent patterns and correlations [1]; for instance, prominent legitimate websites are rarely misclassified by anti-phishing tools. Therefore, it would have been unrealistic to simulate the tool’s performance for the 60% and 90% settings with uniformly random errors. Accordingly, we evaluated each of the 45 websites in our test bed (15 legitimate, 15 concocted, and 15 spoof) using five anti-phishing tools: AZProtect, the IE Security Toolbar, Firephish, Netcraft, and SpoofGuard [1, 17, 21, 23, 30]. These five were chosen since they include the most commonly used tools as well as ones that have been shown to provide good performance [3]. For each of the 45 websites in the test bed, the average misclassification rate across the five tools was computed. For example, a website incorrectly classified by two of the five tools had an average misclassification rate of 0.4. These rates were used as the probability of a given website being misclassified during the experiment. Hence, the 90% accurate tool misclassified one of the ten selected sites, and the probability of a given site being the one misclassified was proportional to its average misclassification rate (computed using the five actual tools).
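One plausible reading of this procedure is weighted sampling without replacement over a participant’s ten sites, with each site’s weight equal to its average misclassification rate. The sketch below implements that reading; the assumption that the 60% tool errs on four of ten sites follows from its stated accuracy but is not spelled out in the text.

```python
import random

def pick_misclassified(site_rates: dict, n_errors: int) -> list:
    """Choose which of a participant's sites the simulated tool gets wrong.

    site_rates maps each site to its average misclassification rate across
    the five real tools (e.g., 0.4 if two of five tools misclassified it).
    n_errors is 1 for the 90% accurate tool and, presumably, 4 for the 60%
    tool (out of 10 sites). Selection is proportional to the rates.
    """
    chosen = []
    pool = dict(site_rates)
    for _ in range(n_errors):
        total = sum(pool.values())
        r = random.uniform(0.0, total)
        for site, rate in pool.items():
            r -= rate
            if r <= 0.0:
                chosen.append(site)
                del pool[site]  # each site is misclassified at most once
                break
    return chosen
```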
For each website, after being exposed to the anti-phishing tool’s recommendation, participants had to decide whether they would visit the website, explore the website to find the product (assuming it was visited), and buy the product (assuming it was found). The participants answered three questions: whether the website was legitimate or a phish, whether the website had the product (if visited), and whether they would buy the product from the website (if they visited the site and found the product). Participants were scored based on the decisions they made and the costs they incurred during the experiment. More specifically, performance was evaluated based on participants’ ability to differentiate legitimate websites from phish [8, 21], their decisions to visit or avoid websites, and their willingness to transact with phishing sites [14]. The experiment design was intended to mimic real conditions with respect to time limitations and potential losses. The participants had 20 minutes to make all their decisions about their 10 assigned websites. This time constraint was chosen after pre-testing and pilot testing revealed that it represented an appropriate and reasonable amount of time for performing the task.

The experiment participants were 437 students and staff from a large university in the mid-western United States. Each participant was randomly assigned to one of the four experiment settings (i.e., high or low accuracy and spoof or concocted phishing websites). Overall, each of the four settings had approximately the same number of participants (around 109). Prior to the experiment, participants were given instructions regarding the aforementioned experiment task (i.e., to purchase an over-the-counter drug) and were also explicitly made aware of the accuracy of their particular anti-phishing tool.

V. RESULTS

Table I shows the experiment results for users’ ability to differentiate legitimate websites from phish. The top two rows show performance results for participants using the 60% and 90% accurate tools to differentiate legitimate from concocted websites (top half of Table I). The bottom two rows show the performance for participants differentiating legitimate websites from spoofs. The three columns indicate the average overall accuracy, legit recall, and phish recall across users for each experiment setting (i.e., results are stratified by each of the four accuracy-threat type combinations). These evaluation metrics were incorporated since they are commonly used to assess the effectiveness of anti-phishing strategies [1, 2, 3, 4, 8, 21, 23]. Overall accuracy is a participant’s ability to correctly classify websites as legitimate or phishing. Legit recall is the percentage of legitimate websites correctly identified by users, while phish recall is the percentage of concocted or spoof websites correctly identified by users.
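For clarity, the three metrics can be stated directly in code. The sketch below assumes per-website records of (true label, user judgment), with 1 denoting phish and 0 legitimate; the record format is our own.

```python
def overall_accuracy(records) -> float:
    """Fraction of websites the participant classified correctly."""
    return sum(truth == judged for truth, judged in records) / len(records)

def recall(records, target: int) -> float:
    """Fraction of websites with true label `target` judged correctly.

    target=0 yields legit recall; target=1 yields phish recall.
    """
    relevant = [(t, j) for t, j in records if t == target]
    return sum(t == j for t, j in relevant) / len(relevant)
```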
TABLE I. PREDICTIONS ON CONCOCTED AND SPOOF SETTINGS

                 Concocted Website Setting
Tool Accuracy   Overall Accuracy   Legit Recall   Phish Recall
60% detector    62.28              65.54          59.01
90% detector    78.88              82.06          75.70

                 Spoof Website Setting
Tool Accuracy   Overall Accuracy   Legit Recall   Phish Recall
60% detector    65.05              67.92          62.18
90% detector    75.05              76.58          73.51

Based on the results presented in Table I, it is evident that using the more accurate anti-phishing tool resulted in enhanced overall accuracy, legit recall, and phish recall. Pair-wise t-tests comparing the performance of the 90% tool against the 60% tool were statistically significant, with all six p-values less than 0.01. However, participants using the high accuracy tool had considerably lower overall accuracies and legit/phish recall rates than the anti-phishing tool itself, with 11% to 15% lower overall accuracy. Hence, users seem to at least partially disregard the recommendations of the 90% accurate anti-phishing tool. Moreover, on average, participants attained 3% to 6% higher legit recall rates than phish recall rates. Consistent with prior work [8, 21], this suggests that even when using more accurate anti-phishing tools, users tend to be overly trusting, considering many phishing websites to be legitimate (i.e., higher false negative rates). Comparing performance on the concocted versus spoof settings, participants performed significantly better when using the 90% accurate tool on concocted websites (with t-test p-values for accuracy and legit/phish recall all significant at alpha = 0.01). Conversely, they performed significantly better on the spoof websites when using the 60% accurate tool (all p-values also less than 0.01).
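Since the 60% and 90% groups contain different participants, the natural reading of these pair-wise t-tests is an independent-samples test on per-participant scores. A sketch of that comparison follows (the score arrays are hypothetical; the raw data are not reproduced here).

```python
from scipy import stats

def compare_tool_groups(scores_90, scores_60) -> float:
    """Two-sample t-test on per-participant scores; returns the p-value."""
    t_stat, p_value = stats.ttest_ind(scores_90, scores_60)
    return p_value
```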
Fig. 2 shows the percentage of user disagreement with the anti-phishing tools’ recommendations for the different experiment settings. In other words, the values depicted are the average percentage of times that users decided a website was legitimate when the tool warned that it was a phish, or vice versa. The first group of bars shows overall percentage disagreement across legitimate and phishing websites. The second group shows the percentage disagreement when the website was actually legitimate, while the third group shows the level of disagreement when the website was actually a phish. Based on the figure, it is evident that users disagreed quite often with the tool recommendations. While disagreements were higher when using the less accurate tool (over 30% for both the concocted and spoof settings), even with the 90% accurate tool users went against the tool’s recommendations 21% to 25% of the time. Moreover, these disagreements were more prevalent on phishing websites.

Figure 2. Percentage User Disagreement with Anti-Phishing Tool Recommendations for Different Experiment Settings. Values are grouped across all websites (Overall), legitimate websites, and phishing websites.

Comparing the concocted and spoof results in Fig. 2, users were more likely to disregard tool warnings on spoofs. The findings suggest that when encountering spoofs (as compared to concocted websites), users were more likely to rely on their own judgment, thereby disregarding tool recommendations. Prior work has suggested that this is largely attributable to users’ familiarity with the websites being spoofed, which causes them to disregard tool warnings [21]. This is precisely the type of human vulnerability that spoof website-based phishing attacks successfully exploit [8, 9].
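The disagreement measure plotted in Fig. 2 is straightforward to compute. A minimal sketch, assuming per-decision records of (true label, tool verdict, user judgment) with 1 = phish and 0 = legitimate (a record format of our own choosing):

```python
def disagreement_rate(records, true_label=None) -> float:
    """Percentage of decisions where the user's judgment opposed the tool's.

    true_label=None gives the overall rate; 0 restricts to legitimate sites,
    1 to phishing sites (the three bar groups in Fig. 2).
    """
    rows = [r for r in records if true_label is None or r[0] == true_label]
    return 100.0 * sum(tool != user for _, tool, user in rows) / len(rows)
```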
In the context of the current experiment, this user inclination to disregard tool recommendations, and to do so more prolifically in the spoof setting, was responsible for the performance differences between the spoof and concocted settings (across accuracy levels). Fig. 3 shows the average percentage accuracy or recall for the users and the anti-phishing tool in situations where users disagreed with the tool’s recommendations. The figure displays the overall accuracy (OA), legit recall (LR), and phish recall (PR) on instances where there was user-tool disagreement. The results are stratified across the four experiment categories, resulting in 12 values along the x-axis. From the figure, it is apparent that in situations where users disregarded the more accurate tool’s warnings (Spoof90 and Concocted90, second half of Fig. 3), user performance was 25% to 68% lower in terms of legit and phish recall. This caused the lower user performance, relative to the tool being used, in the high accuracy tool settings (see Table I). However, user disagreements in the low accuracy settings improved performance over the tool recommendations. Consequently, users’ increased self-reliance in the spoof setting resulted in enhanced performance (both over the tool being used and as compared to the concocted setting) for the low accuracy setting, but was a hindrance when using the high accuracy tool.

Figure 3. Average Overall Accuracy and Legit/Phish Recall for Instances where Users Disagreed with the Anti-Phishing Tool’s Recommendations. Results are stratified across the four experiment settings, resulting in 12 values on the x-axis.

Table II shows the experiment results for users’ decisions regarding visiting legitimate and phishing websites. The phish recall column shows users’ decisions to correctly avoid visiting phishing websites (i.e., higher values indicate better avoidance of phishing websites). Users were better able to avoid phishing websites when using the more accurate tool (in both the concocted and spoof settings). Pair-wise t-test results were significant, with all p-values less than 0.01. However, the phish recall rates in general were very low. Consequently, despite the anti-phishing tools’ warnings, on average, participants visited between 74% and 83% of the phishing websites. With respect to the concocted versus spoof settings, users were more likely to visit spoofs than concocted websites. This finding is consistent with the user prediction results presented earlier. Putting together the findings pertaining to user predictions and user visitation behavior (Tables I and II as well as Figs. 2 and 3), it can be surmised that when encountering phishing websites, users felt the need to formulate their own opinions, and generally ended up coming to an incorrect conclusion.

TABLE II. VISITATION ON CONCOCTED AND SPOOF SETTINGS

                 Concocted Website Setting
Tool Accuracy   Overall Accuracy   Legit Recall   Phish Recall
60% detector    53.66              89.11          18.22
90% detector    60.28              94.77          25.79

                 Spoof Website Setting
Tool Accuracy   Overall Accuracy   Legit Recall   Phish Recall
60% detector    51.78              85.94          17.62
90% detector    57.57              92.43          22.70

Table III shows the experiment results for users’ decisions regarding their intention to transact with legitimate and phishing websites. Users tended to be more selective about whether they would transact with a particular website. Here, phish recall shows the percentage of phishing websites users would not transact with (i.e., a larger number indicates better avoidance of phish), while legit recall indicates the percentage of legitimate websites users would transact with. Using the high accuracy tool did reduce users’ intention to transact with phishing websites; however, once again the performance improvements were more pronounced on the concocted websites. Pair-wise t-test results for the impact of accuracy on intent to transact were significant across settings, with all p-values less than 0.01. In any case, users were willing to transact with 21% to 26% of the spoof websites and 14% to 24% of the concocted websites. This result suggests that even when using a more accurate tool, users are still fairly susceptible to identity theft.
TABLE III. PURCHASE INTENT ON CONCOCTED AND SPOOF SETTINGS

                 Concocted Website Setting
Tool Accuracy   Overall Accuracy   Legit Recall   Phish Recall
60% detector    64.65              52.87          76.44
90% detector    74.77              63.74          85.79

                 Spoof Website Setting
Tool Accuracy   Overall Accuracy   Legit Recall   Phish Recall
60% detector    62.28              50.30          74.26
90% detector    68.20              57.84          78.56

VI. CONCLUSIONS
The study has important implications for individual security behavior and enterprise-level security. The results indicate that using more accurate anti-phishing tools can significantly improve users’ ability to identify phishing websites and to better avoid visiting and transacting with phish. A follow-up experiment on online banks revealed similar results, suggesting that the findings are generalizable beyond the online pharmacy domain. The results indicate that future work geared towards further improving the accuracy of anti-phishing tools is warranted. However, it is also imperative to improve methods for conveying tool warnings such that users are less likely to disregard recommendations. This is particularly important in the case of spoof websites, where perceived familiarity often trumps tool warnings. Future work focusing on warning and interface design issues is essential [7]. Education is also necessary; unaware users continue to rely on their own perceived expertise and judgment, which often result in undesirable outcomes [17].
ACKNOWLEDGMENT

This research has been supported in part by the following U.S. National Science Foundation grant: CNS-1049497, “A User-Centric Approach to the Design of Intelligent Fake Website Detection Systems,” October 2010 – September 2012.

REFERENCES

[1] A. Abbasi and H. Chen, “A Comparison of Tools for Detecting Fake Websites,” IEEE Computer, vol. 42, pp. 78-86, October 2009.
[2] A. Abbasi and H. Chen, “A Comparison of Fraud Cues and Classification Methods for Fake Escrow Website Detection,” Information Technology and Management, vol. 10(2), pp. 83-101, 2009.
[3] A. Abbasi, Z. Zhang, D. Zimbra, H. Chen, and J. F. Nunamaker Jr., “Detecting Fake Websites: The Contribution of Statistical Learning Theory,” MIS Quarterly, vol. 34(3), pp. 435-461, 2010.
[4] A. Abbasi, F. M. Zahedi, and S. Kaza, “Detecting Fake Medical Websites using Recursive Trust Labeling,” ACM Trans. Information Systems, in press.
[5] G. Bansal, F. M. Zahedi, and D. Gefen, “The Impact of Personal Dispositions on Information Sensitivity, Privacy Concern and Trust in Disclosing Health Information Online,” Decision Support Systems, vol. 49(2), pp. 138-150, 2010.
[6] K. Chen, C. Huang, C. Chen, and J. Chen, “Fighting Phishing with Discriminative Keypoint Features,” IEEE Internet Computing, vol. 13(3), pp. 56-63, 2009.
[7] Y. Chen, F. M. Zahedi, and A. Abbasi, “Interface Design Elements for Anti-phishing Systems,” In Proc. Intl. Conf. Design Science Research in Information Systems and Technology, pp. 253-265, 2011.
[8] R. Dhamija, J. D. Tygar, and M. Hearst, “Why Phishing Works,” In Proc. ACM Intl. Conf. Computer Human Interaction, pp. 581-590, 2006.
[9] T. Dinev, “Why Spoofing is Serious Internet Fraud,” Comm. of the ACM, vol. 49, pp. 76-82, October 2006.
[10] T. Dinev and Q. Hu, “The Centrality of Awareness in the Formation of User Behavioral Intention toward Protective Information Technologies,” J. of AIS, vol. 8, pp. 386-408, 2007.
[11] G. Easton, “Clicking for Pills,” British Medical J., vol. 334, pp. 14-15, January 6, 2007.
[12] G. Eysenbach, “Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness,” J. Medical Internet Research, vol. 10(3), e23, 2008.
[13] A. Y. Fu, W. Liu, and X. Deng, “Detecting Phishing Web Pages with Visual Similarity Assessment Based on Earth Mover’s Distance (EMD),” IEEE Trans. Dependable and Secure Computing, vol. 3(4), pp. 301-311, 2006.
[14] S. Grazioli and S. L. Jarvenpaa, “Perils of Internet Fraud: An Empirical Investigation of Deception and Trust with Experienced Internet Consumers,” IEEE Trans. Systems, Man, and Cybernetics Part A, vol. 20(4), pp. 395-410, 2000.
[15] T. N. Jagatic, N. A. Johnson, M. Jakobsson, and F. Menczer, “Social Phishing,” Comm. of the ACM, vol. 50, pp. 94-100, October 2007.
[16] A. C. Johnston and M. Warkentin, “Fear Appeals and Information Security Behaviors: An Empirical Study,” MIS Quarterly, vol. 34(3), pp. 549-566, 2010.
[17] P. Kumaraguru, S. Sheng, A. Acquisti, L. F. Cranor, and J. Hong, “Teaching Johnny Not to Fall for Phish,” ACM Trans. Internet Technology, vol. 10(2), no. 7, 2010.
[18] L. Li and M. Helenius, “Usability Evaluation of Anti-Phishing Toolbars,” J. Computer Virology, vol. 3(2), pp. 163-184, 2007.
[19] W. Liu, X. Deng, G. Huang, and A. Y. Fu, “An Antiphishing Strategy Based on Visual Similarity Assessment,” IEEE Internet Computing, vol. 10, pp. 58-65, February 2006.
[20] B. Schneier, “Semantic Network Attacks,” Comm. of the ACM, vol. 43, p. 168, December 2000.
[21] M. Wu, R. C. Miller, and S. L. Garfinkel, “Do Security Toolbars Actually Prevent Phishing Attacks?,” In Proc. Conf. on Human Factors in Computing Systems, pp. 601-610, 2006.
[22] B. Xiao and I. Benbasat, “Product-related Deception in E-Commerce: A Theoretical Perspective,” MIS Quarterly, vol. 35(1), pp. 169-196, 2011.
[23] Y. Zhang, S. Egelman, L. F. Cranor, and J. Hong, “Phinding Phish: Evaluating Anti-phishing Tools,” In Proc. 14th Annual Network and Distributed System Security Symposium (NDSS), 2007.
[24] Y. Zhang, J. Hong, and L. F. Cranor, “CANTINA: A Content-based Approach to Detecting Phishing Web Sites,” In Proc. Intl. World Wide Web Conference, pp. 639-648, 2007.
[25] F. M. Zahedi and J. Song, “Dynamics of Trust Revision: Using Health Infomediaries,” J. Management Information Systems, vol. 24(4), pp. 225-248, 2008.
[26] P. Willis, “Fake anti-virus software catches 43 million users’ credit cards,” Digital J., www.digitaljournal.com/article/280746, Oct. 20, 2009.
[27] C. E. H. Chua and J. Wareham, “Fighting Internet Auction Fraud: An Assessment and Proposal,” IEEE Computer, vol. 37(10), pp. 31-37, 2004.
[28] Gartner, “Magic Quadrant for Web Fraud Detection,” Gartner Research, 2011.
[29] J. Edworthy, “Cognitive Compatibility and Warning Design,” International Journal of Cognitive Ergonomics, vol. 1(3), pp. 193-209, 1997.
[30] N. Chou, R. Ledesma, Y. Teraguchi, D. Boneh, and J. C. Mitchell, “Client-side Defense Against Web-based Identity Theft,” In Proc. Network and Distributed System Security Symposium, San Diego, CA, 2004.