2009 33rd Annual IEEE International Computer Software and Applications Conference
Automatic Testing of Program Security Vulnerabilities
Hossain Shahriar and Mohammad Zulkernine
School of Computing, Queen's University, Kingston, Canada
{shahriar, mzulker}@cs.queensu.ca
Abstract— Vulnerabilities in applications and their widespread exploitation through successful attacks are common these days. Testing applications to prevent vulnerabilities is an important step in addressing this issue. In recent years, a number of security testing approaches have been proposed. However, there is no comparative study of these works that might help security practitioners select an appropriate approach for their needs. Moreover, there is no comparison with respect to the automation capabilities of these approaches. In this work, we identify seven criteria to analyze program security testing works. These are vulnerability coverage, source of test cases, test generation method, level of testing, granularity of test cases, testing automation, and target applications. We compare and contrast prominent security testing approaches available in the literature based on these criteria. In particular, we focus on works that address the four most common and dangerous vulnerabilities, namely buffer overflow, SQL injection, format string bug, and cross site scripting. Moreover, we investigate the automation features available in these works across the security testing process. We believe that our findings will provide practical information for security practitioners in choosing the most appropriate tools.

Keywords: Security testing, Vulnerabilities, Buffer overflow, SQL injection, Format string bug, Cross site scripting.
I. INTRODUCTION
Today's applications (or programs) are complex in nature and accessible to almost everyone. These programs are developed using implementation languages (e.g., ANSI C), library functions (e.g., the ANSI C library, the Java API), and processors (e.g., SQL query engines, HTML parsers, JavaScript engines) that often suffer from inherent vulnerabilities such as buffer overflow [4], SQL injection [5], format string bug [6], and cross site scripting (XSS) [7]. Moreover, these applications are not always used by legitimate users in a legitimate manner. As a result, exploitations of these known vulnerabilities through successful attacks are very common.

The practice of developing secure applications has been established for more than a decade. Several complementary techniques have been developed to detect and prevent vulnerabilities. These include static analysis tools [26-30] to identify vulnerable code, combined static analysis and runtime monitoring approaches [31, 32], automatic fixing of vulnerable code [33, 34], etc. Despite the use of such techniques, we still find numerous exploitation reports in publicly available databases such as Common Vulnerabilities and Exposures (CVE) [1] and the Open Source Vulnerability Database (OSVDB) [2]. A practical approach to deal with this situation is to apply appropriate application security testing techniques to prevent vulnerabilities and attacks before deployment.

In recent years, many program security testing methods have been proposed and applied in practice [10-25, 35-38]. Each work is valuable from a certain perspective, such as automatic test case generation, test case execution, or coverage of particular vulnerabilities. However, there is no extensive comparative study of these works that might guide testing practitioners in choosing tools to perform security testing. Moreover, there is no comparative analysis in the current literature with respect to the test automation of security testing. As a result, it is difficult to identify the costs incurred due to manual steps in the security testing process.

In this work, we identify seven criteria to analyze program security testing techniques. These are vulnerability coverage, source of test cases, test generation method, level of testing, granularity of test cases, tool automation, and target applications. We compare and contrast 20 program security testing techniques based on these criteria. We choose these works as they claim to be superior to other contemporary tools in terms of both detecting vulnerabilities effectively and identifying previously unknown vulnerabilities. We focus on security testing works that address four widely known vulnerabilities: buffer overflow (BOF) [4], SQL injection (SQLI) [5], format string bug (FSB) [6], and cross site scripting (XSS) [7]. These are the worst vulnerabilities found in today's applications [3]. Moreover, we perform a comparative analysis of the testing automation supported by these works with respect to three identified criteria: test case generation, oracle generation, and test case execution. Our initial findings indicate that most of the available tools are geared towards web-based vulnerabilities such as SQLI and XSS [13-16, 18-24, 36, 38]. While some tools provide testing of BOF, their automation support is poor [10, 11, 12, 17, 35]. Moreover, very few works test FSB vulnerabilities [25, 37].

The paper is organized as follows: Section II provides background information on the four major vulnerabilities and the security testing process. Section III discusses the seven criteria for the classification of the existing security testing works and categorizes the works based on these criteria. In Section IV, we compare the automation aspects of different testing works. Finally, Section V draws the conclusions and discusses current open issues.
II. BACKGROUND
A. Vulnerabilities
Vulnerabilities are flaws in applications that allow attackers to do something malicious, i.e., unauthorized access, modification, or destruction of information [8]. Attacks are successful exploitations of vulnerabilities. Although there are many types of vulnerabilities [3], this work addresses four major ones: BOF, SQLI, FSB, and XSS. The primary cause of these vulnerabilities is the lack of input validation in applications. For example, BOF vulnerabilities exist in applications if inputs coming from users or environments are copied into data buffers beyond their capacities. The consequences of such overflows vary from application crashes to the launching of remote root shells [4]. An application is said to have SQLI vulnerabilities when SQL queries are generated using an implementation language (e.g., Java Server Pages or JSP) and user-supplied inputs become part of the query generation process without proper validation. As a result, the execution of these queries might cause unexpected results such as authentication bypassing and leaking of private information. Format string bugs (FSBs) exploit format functions (e.g., the format functions of the ANSI C standard library) through malicious format strings. XSS vulnerabilities (XSSVs) imply the generation of dynamic Hyper Text Markup Language (HTML) content (i.e., attributes of tags) with unvalidated inputs. XSS attacks exploit these vulnerabilities through inputs that might contain HTML tags, JavaScript code, and so on. These inputs are interpreted by browsers while rendering web pages. As a result, the expected behavior of the generated web pages is altered through visible (e.g., creation of pop-up windows) or invisible (e.g., cookie stealing) symptoms.
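To make the SQLI and XSS cases concrete, the following Java fragment is a minimal illustrative sketch; it is not taken from any of the surveyed works, and the class, method, and parameter names are hypothetical. Concatenating the user-supplied values into the query string creates an SQLI vulnerability, and echoing the unvalidated value into the generated HTML creates an XSS vulnerability. (BOF and FSB arise analogously in C code that passes unvalidated input to functions such as strcpy or printf.)

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical login handler used only to illustrate SQLI and XSS.
public class VulnerableLogin {

    // SQLI: the user-supplied values become part of the query text,
    // so an input such as  ' OR '1'='1  can bypass authentication.
    static boolean authenticate(Connection db, String user, String pass) throws SQLException {
        String query = "SELECT * FROM users WHERE name = '" + user
                     + "' AND password = '" + pass + "'";
        try (Statement st = db.createStatement();
             ResultSet rs = st.executeQuery(query)) {
            return rs.next();
        }
    }

    // XSS: the parameter is copied into the generated HTML without
    // validation, so a <script>...</script> payload is interpreted by the browser.
    static String welcomePage(String user) {
        return "<html><body>Welcome, " + user + "!</body></html>";
    }
}
```

An input such as ' OR '1'='1 supplied as the password turns the WHERE clause into a tautology, while a script payload supplied as the user name is executed by the browser that renders the welcome page.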
B. Security Testing
Like traditional software testing [9], security testing has three major steps: identifying testing requirements and determining test coverage, generating test cases, and executing test cases. During the first step, appropriate security requirements are identified based on functional requirements. Sometimes security requirements are expressed explicitly (e.g., access control policies, secured communication protocols, etc.). Our study does not include these requirements. We consider security breaches that occur through the implementation language (e.g., ANSI C), APIs (e.g., the ANSI C library, the Java library), environment variables (e.g., network data units used in applications), and processors (e.g., SQL database engines, HTML parsers, JavaScript interpreters) used by applications. Vulnerabilities resulting from their limitations are often ignored and rarely expressed explicitly in security requirements. In traditional testing, test coverage indicates whether the generated test cases cover a particular objective related to an application artifact. For example, an application can be tested so that all branches present in the source code are exercised, or a finite state machine can be used to generate test cases so that all transition pairs are covered. Similarly, security testing approaches often set such goals in advance; for example, an application should be tested for all BOF and SQLI vulnerabilities. In the second step, test cases are generated in a systematic way from application artifacts (e.g., source code, or implementation units treated as black boxes) and interacting environments (e.g., network protocol data units or PDUs). A subsequent issue that needs to be addressed by testers is defining the oracle for each test case. Unlike traditional software testing, the final computational results produced by applications rarely play any role in determining the oracle (or a successful attack) in program security testing. In most cases, application states and response messages are used to identify the presence or absence of attacks. In the final stage, test cases are run against implementations, and applications are assessed based on the predefined oracles to identify vulnerabilities (i.e., whether a test case exposes a vulnerability through the application's response). Overall, the security testing process is analogous to the traditional software testing process, with slightly different objectives in each of the stages. While software testing in general strives to remove "all" software faults, software security testing focuses on the removal of "exploitable software faults (bug classes)" [39].
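The three steps can be pictured as the following Java skeleton. It is only a structural sketch under the assumption of a web-facing application under test; the type and method names are ours, not those of any surveyed tool.

```java
import java.util.List;

// Skeleton of the three-step security testing process described above;
// concrete tools fill in each step differently (see Sections III and IV).
interface SecurityTestProcess<T> {

    // Steps 1 and 2: derive attack test cases from application artifacts
    // (source code, black-box units, protocol data units, ...).
    List<T> generateTestCases();

    // Step 3: run one test case against the implementation under test.
    ApplicationResponse execute(T testCase);

    // Oracle: unlike functional testing, the verdict is usually based on
    // application state or response messages, not on computed results.
    boolean exposesVulnerability(T testCase, ApplicationResponse response);

    // Placeholder for whatever observable output the application produces.
    record ApplicationResponse(int status, String body) {}
}
```

Sections III and IV compare how the surveyed works instantiate each of these steps and how much of each step is automated.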
III. COMPARISON CRITERIA OF SECURITY TESTING WORK
We propose seven criteria to compare security testing works. These are vulnerability coverage, source of test cases, test case generation method, testing level, granularity of test cases, tool automation, and target applications. Table I summarizes the classification, where the first column shows the works (or tools) and the subsequent columns represent each of the criteria mentioned above. We provide a detailed description of each criterion and analyze the testing works at the same time.

A. Vulnerability coverage
A common approach for choosing a tool is based on which particular vulnerabilities it can test. As we focus on four vulnerabilities (BOF, SQLI, FSB, and XSS), we identify tools that test these only. The second column of Table I shows the vulnerabilities tested by different tools. We notice that both SQLI and XSS have been addressed by most of the tools [13-24, 36, 38]. However, very few tools test FSB [25, 37]. Moreover, very few tools are capable of testing a wide range of vulnerabilities such as BOF, SQLI, and XSS [13].
B. Source of test cases
This criterion identifies which artifacts of an application or its environment are used for generating test cases. The third column of Table I shows that security-related test cases are generated from a variety of artifacts. These include the source code of applications and vulnerable APIs (e.g., ANSI C library functions) [10-12, 35-38], protocol syntax [13], an application's behavior model [14], response pages returned by applications (i.e., downloaded HTML pages for given URLs) [15, 18, 21], HTTP requests generated by applications [19], user session data (i.e., the sequence of inputs provided to perform functionalities) [22], compiled code [17], and known attack signatures [25].
TABLE I. COMPARISON SUMMARY OF PROGRAM SECURITY TESTING WORK

Tool / Work | Vulnerability covered | Source of test cases | Test case generation method | Test level | Test case granularity | Tool automation | Target applications
Splat [10] | BOF | Source code and APIs | Solve path constraints of applications | Unit and integrated | String or complex data type containing string | Yes | Utilities
Vilela et al. [11] | BOF | Source code | Analyze mutants | Unit | String or complex data type containing string | Yes | Utilities
Tal et al. [12] | BOF | Protocol syntax | Inject faults | Black box | PDU | Yes | Network daemons
Tappenden et al. [13] | BOF, SQLI, and XSS | Source code | Crawl web pages and add attack test cases in input forms | Unit | URL | Yes | Web applications
Salas et al. [14] | SQLI | Incomplete or under-specified model | Solve constraints expressed in Object Constraint Language (OCL) | Black box | String or complex data type containing string | Yes | Web applications
WAVES [15] | SQLI and XSS | Response pages | Crawl and add attack test cases in input forms | Black box | URL | Yes | Web applications
SecuBat [16] | SQLI and XSS | Source code | Inject attack cases in non-malicious inputs | Black box | URL | Yes | Web applications
Breech et al. [17] | BOF | Compiled program | Modify program instructions through dynamic compiler | Black box | String or complex data type containing string | Yes | Network daemons
Huang et al. [18] | SQLI and XSS | Response pages | Crawl web pages and fill input forms with attack inputs | Black box | URL | Yes | Web applications
Sania [19] | SQLI | HTTP request | Append leaf nodes of parse trees with attack inputs | Unit | URL | Yes | Web applications
ARDILLA [20] | SQLI and XSS | Source code | Solve path constraints in applications and replace non-malicious test cases with attack test cases | Hybrid | URL | Yes | Web applications
Offutt et al. [21] | SQLI and XSS | Response pages | Bypass input constraints in response pages | Black box | URL | Yes | Web applications
McAllister et al. [22] | XSS | User session | Replace non-malicious test cases with attack test cases | Hybrid | Sequence of URLs | Yes | Web applications
Fong et al. [23] | SQLI and XSS | N/A | N/A | Black box | URL or sequence of URLs | Yes | Web scanners
Fonseca et al. [24] | SQLI and XSS | N/A | N/A | Black box | URL or sequence of URLs | Yes | Web scanners
Vigna et al. [25] | BOF, FSB | Attack templates | Inject faults in application and network layer | Black box | Sequence of network data packets | Yes | Intrusion detection systems
MUBOT [35] | BOF | Source code | Analyze mutants | Unit | String or complex data type containing string | Yes | Utilities
MUSIC [36] | SQLI | Source code | Analyze mutants | Hybrid | URL | Yes | Web applications
MUFORMAT [37] | FSB | Source code | Analyze mutants | Unit | String or complex data type containing format string | Yes | Utilities
MUTEC [38] | XSS | Source code | Analyze mutants | Hybrid | URL or sequence of URLs | Yes | Web applications
C. Test case generation method
This criterion indicates how the source of test cases is converted into a set of test cases. The fourth column of Table I highlights the different test case generation methods that are applied in security testing. It is interesting to note that most traditional software testing techniques have been used or leveraged for security test case generation. These include fault injection [13, 25], mutation analysis [11, 35-38], and constraint solving [10, 14, 20]. However, many recent techniques employ the replacement of non-malicious test cases with attack test cases [13, 15, 16, 18, 19, 20, 22] and the modification of application instructions with a dynamic compiler [17] to generate test cases.
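A minimal sketch of the "replace non-malicious test cases with attack test cases" style of generation is shown below, assuming a manually curated payload repository and a benign input vector (e.g., a form filled with valid values). The payloads and field names are hypothetical, and the snippet does not reproduce the generation algorithm of any particular surveyed tool.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Turns one benign input vector (e.g., an HTML form filled with valid values)
// into many attack test cases by replacing one field at a time with a payload.
public class AttackTestCaseGenerator {

    // Manually curated payload repository (hypothetical examples).
    static final List<String> PAYLOADS = List.of(
        "' OR '1'='1",                      // SQLI probe
        "<script>alert('xss')</script>");   // XSS probe

    static List<Map<String, String>> generate(Map<String, String> benignInput) {
        List<Map<String, String>> testCases = new ArrayList<>();
        for (String field : benignInput.keySet()) {
            for (String payload : PAYLOADS) {
                Map<String, String> tc = new LinkedHashMap<>(benignInput);
                tc.put(field, payload);     // replace one benign value with an attack value
                testCases.add(tc);
            }
        }
        return testCases;
    }

    public static void main(String[] args) {
        Map<String, String> benign = new LinkedHashMap<>();
        benign.put("user", "alice");
        benign.put("comment", "hello");
        generate(benign).forEach(System.out::println);
    }
}
```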
D. Testing level
This criterion indicates whether security testing of an application is performed in a white box or a black box manner. White box testing techniques are indicated as either unit or integration level in Table I. The fifth column of Table I indicates that very few approaches explore white box testing mechanisms for different vulnerabilities at the unit and integration levels [10, 11, 13, 19, 35, 37]. Most of the popular testing approaches operate at the black box level [12, 14, 15, 16, 17, 18, 21, 23, 24, 25]. Moreover, several works use hybrid testing (i.e., a combination of black box and white box), where artifacts (i.e., source code) are used to generate test cases, and the test cases are run and analyzed in a black box manner [20, 22, 36, 38].
E. Test case granularity
This criterion indicates what constitutes a test case in program security testing. The sixth column of Table I shows that test case granularity varies not only with the data received by an application and its surrounding environment, but also with the vulnerability being tested. For example, exploiting BOF vulnerabilities often involves generating strings of particular lengths, or complex data types containing strings [10, 11, 35]. Similarly, test cases for exposing FSB vulnerabilities require strings (or complex data types) containing format specifiers [37]. In contrast, exploiting SQLI and XSS vulnerabilities often requires URLs with appropriate parameters and values [13, 15, 16, 18, 19, 20, 21, 22, 36, 38]. Moreover, a sequence of URLs [22, 38] or PDUs [12, 25] might form just one test case, since all of them must be applied to an application to exploit a vulnerability. For example, in stored XSS attacks (a variation of XSS), at least two URLs are required to form one test case: one for storing a malicious script and the other for downloading a page containing that script.
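The stored XSS example above can be sketched as follows: one logical test case consists of two HTTP requests, the first storing the malicious script and the second downloading the page that should render it. The endpoints and the success check are hypothetical placeholders, not taken from any surveyed tool.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// One stored-XSS test case = a sequence of two URLs applied in order.
public class StoredXssTestCase {

    static final String PAYLOAD = "<script>alert('xss')</script>";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Request 1: store the malicious script (e.g., as a guestbook comment).
        String form = "comment=" + URLEncoder.encode(PAYLOAD, StandardCharsets.UTF_8);
        HttpRequest store = HttpRequest.newBuilder(URI.create("http://localhost:8080/guestbook"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        client.send(store, HttpResponse.BodyHandlers.ofString());

        // Request 2: download the page that renders stored comments.
        HttpRequest view = HttpRequest.newBuilder(URI.create("http://localhost:8080/guestbook/view"))
                .GET().build();
        HttpResponse<String> page = client.send(view, HttpResponse.BodyHandlers.ofString());

        // The test case exposes the vulnerability only if the script
        // survives into the generated page unescaped.
        System.out.println(page.body().contains(PAYLOAD) ? "stored XSS exposed" : "no evidence");
    }
}
```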
F. Tool automation
One of the most important criteria for choosing a tool is how much automation it supports. Although most security testing tools claim to be automatic (as shown in the seventh column of Table I), we find that some tasks still need to be done manually. We discuss the details of the automation aspects in Section IV.
G. Target applications
This criterion indicates the type of application under security test. From the eighth column of Table I, we note that each testing technique primarily focuses on a particular application domain. These include utility applications (e.g., ftp clients, compression tools) [10, 11, 35, 37], network daemons (e.g., router daemons, web servers) [12, 17], web applications [13-16, 18-22, 36, 38], web scanner tools (e.g., commercial or free tools such as Acunetix and Paros) [23, 24], and intrusion detection systems (IDS) [25]. The testing of web scanner tools and IDSs is primarily intended to assess the vulnerability detection capabilities of these web-based and network-based tools, respectively. Research has been done to establish state-of-the-art benchmark applications containing vulnerabilities [23] that require different kinds of attack test cases to exploit successfully. Moreover, considerable work [24] has been done to inject faults into applications in a systematic way to emulate naturally occurring vulnerabilities and to identify scanners that can expose them.
IV. TOOL AUTOMATION COMPARISON
We compare the automation aspects of security testing works based on three testing tasks, namely test case generation, oracle generation, and test case execution. Table II summarizes a number of works based on these criteria.
A. Test case generation
This criterion identifies whether test cases are generated automatically or not. The second column of Table II shows that most of the testing methods automate test case generation [10, 12, 14, 15, 20]. However, a few approaches are manual, such as mutation-based analysis [11, 35-38]. For such approaches, test case generation is not the main interest; rather, the primary concern is assessing the capability of the test cases to discover vulnerabilities. Some works are classified as semi-automatic [16-19, 21, 22] in the sense that the auxiliary tasks performed before generating the actual test cases are automatic. For example, testing SQLI vulnerabilities requires the identification of HTML input forms, which can be extracted automatically by web crawlers [15, 18]. However, after identifying them, attack test cases are injected from repositories that are developed manually from previous attack incidents and reports. Moreover, some approaches [22] use third-party scanners to crawl applications with non-attack inputs and then fuzz these inputs with attack inputs. These approaches enhance testing coverage in terms of application depth (through the crawler) and breadth (through the fuzzer).
B. Oracle generation
Oracles are the expected outputs obtained when running test cases. Unlike traditional testing, it is difficult to compare computed results with expected results. Existing methods of oracle generation include automatic (or manual) source code instrumentation based on adopted security policies and the checking of application states [10, 11, 12, 17, 35, 37], and the scanning of application response pages [13, 14, 15, 16, 17, 18, 36, 38]. From the third column of Table II, we observe that for BOF and FSB vulnerabilities, oracles are generated through code instrumentation [10, 17, 35, 37]. However, for web-based vulnerabilities such as SQLI and XSS, application response pages are scanned for text patterns that confirm attack successes or failures [13, 15, 16, 18, 20, 21, 22, 36, 38]. Often, parse trees of application data (e.g., SQL query parse trees) obtained from attack and non-attack test cases are compared [19]. The third column of Table II also shows that most of the works use automatic or semi-automatic oracle generation. However, for BOF and FSB vulnerabilities, oracle generation is manual or semi-automatic.
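As a rough sketch of the structural comparison idea used for SQLI oracles [19], the snippet below compares a crude "skeleton" of the query generated from a benign input with that generated from a suspicious input. A real implementation would compare actual SQL parse trees; the literal-stripping heuristic here is only a simplified stand-in, and the query-building method is hypothetical.

```java
import java.util.Arrays;
import java.util.List;

// Oracle idea: an input is treated as an SQLI attack if it changes the
// *structure* of the generated query, not merely the data values inside it.
public class QueryStructureOracle {

    static String buildQuery(String user) {            // query construction under test
        return "SELECT * FROM users WHERE name = '" + user + "'";
    }

    // Rough structural signature: strip quoted literals, keep the keyword/operator skeleton.
    static List<String> structure(String sql) {
        String withoutLiterals = sql.replaceAll("'[^']*'", "?");
        return Arrays.asList(withoutLiterals.trim().split("\\s+"));
    }

    static boolean structureChanged(String benignInput, String suspiciousInput) {
        return !structure(buildQuery(benignInput)).equals(structure(buildQuery(suspiciousInput)));
    }

    public static void main(String[] args) {
        System.out.println(structureChanged("alice", "bob"));          // false: same skeleton
        System.out.println(structureChanged("alice", "' OR '1'='1"));  // true: an extra OR clause appears
    }
}
```

If the skeletons differ, the input has altered the query structure, which is taken as evidence of a successful injection.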
C. Test case execution
Test case execution is one of the core steps of automatic testing. It involves supplying test cases to the application under test, monitoring responses, and deciding whether the test cases expose vulnerabilities. Moreover, it should help repeat the testing process by bringing the application back to its initial state and launching the next set of test cases. The fourth column of Table II shows the test case execution activities in terms of manual, automatic, and semi-automatic processes. We note that most of the works are automatic [12, 15, 16, 17, 18, 19, 20, 22]. In the manual approaches, test cases need to be launched by testers [10, 14, 21]. We also note that bypass testing of web applications [21] is manual, as the automatically generated test cases need to be applied by testers. The mutation-based analysis process [11, 35-38] is semi-automatic in the sense that the mutant generation step is automatic; however, applying test cases to the mutants and the actual programs needs to be done manually by testers. Several works do not address automatic test case execution [10, 14, 21] as they are mainly interested in generating test cases. Tappenden et al. [13] develop API support to define test sessions, launch applications with test inputs, and analyze the test outputs. We classify it as semi-automatic in the sense that test execution incurs programming tasks.
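A minimal sketch of an automated execution harness along these lines is given below: it launches each test case, applies the oracle to the response, and brings the application back to a known initial state before the next test case. The reset endpoint is a hypothetical placeholder; real tools typically restore a database snapshot or restart the application instead.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.function.Predicate;

// Automated test case execution: run, judge against the oracle, reset, repeat.
public class ExecutionHarness {

    private final HttpClient client = HttpClient.newHttpClient();

    void runAll(List<String> testCaseUrls, Predicate<String> oracle) throws Exception {
        for (String url : testCaseUrls) {
            HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> " + (oracle.test(resp.body()) ? "vulnerable" : "no evidence"));
            resetApplicationState();   // bring the application back to its initial state
        }
    }

    // Placeholder: assumes the test deployment exposes an administrative reset hook.
    void resetApplicationState() throws Exception {
        client.send(HttpRequest.newBuilder(URI.create("http://localhost:8080/test/reset")).GET().build(),
                    HttpResponse.BodyHandlers.discarding());
    }

    public static void main(String[] args) throws Exception {
        new ExecutionHarness().runAll(
            List.of("http://localhost:8080/search?q=%3Cscript%3Ealert(1)%3C%2Fscript%3E"),
            body -> body.contains("<script>alert(1)</script>"));
    }
}
```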
TABLE II. COMPARISON OF THE AUTOMATION ASPECT OF THE PROGRAM SECURITY TESTING WORK

Work | Test case generation | Oracle generation | Test case execution
Splat [10] | Automatic | Automatic | Manual
Vilela et al. [11] | Manual | Manual | Semi-automatic
Tal et al. [12] | Automatic | Semi-automatic | Automatic
Tappenden et al. [13] | Manual | Manual | Semi-automatic
Salas et al. [14] | Automatic | Manual | Automatic
WAVES [15] | Automatic | Automatic | Automatic
SecuBat [16] | Semi-automatic | Automatic | Automatic
Breech et al. [17] | Semi-automatic | Semi-automatic | Automatic
Huang et al. [18] | Semi-automatic | Manual | Automatic
Sania [19] | Semi-automatic | Automatic | Automatic
ARDILLA [20] | Automatic | Automatic | Automatic
Offutt et al. [21] | Semi-automatic | Manual | Manual
McAllister et al. [22] | Semi-automatic | Semi-automatic | Automatic
MUBOT [35] | Manual | Automatic | Semi-automatic
MUSIC [36] | Manual | Automatic | Semi-automatic
MUFORMAT [37] | Manual | Automatic | Semi-automatic
MUTEC [38] | Manual | Automatic | Semi-automatic
V. CONCLUSION AND CURRENT OPEN ISSUES
Testing an application for vulnerabilities is important to prevent exploitation and damage after the application is deployed. A number of works have addressed the testing of applications against four major vulnerabilities, namely BOF, SQLI, FSB, and XSS. In this work, we propose seven criteria to perform a comparative analysis of 20 such works. The criteria are vulnerability coverage, source of test cases, test generation method, level of testing, granularity of test cases, tool automation, and target applications. Moreover, we perform an in-depth comparison of the automation support of these works based on three identified criteria: test case generation, oracle generation, and test case execution. We believe that the analysis and findings are helpful for practitioners in choosing tools that satisfy their interests.

From the comparative study, it is obvious that every approach has a very narrow perspective of testing vulnerabilities. We notice that while the majority of the testing works follow black box approaches, few works consider combining black box and white box approaches to obtain more coverage. One interesting direction would be to explore such combined approaches for testing security vulnerabilities. It is difficult to relate existing test coverage criteria to vulnerability testing coverage. For example, we cannot conclude that testing all paths of an application ensures testing of all BOF vulnerabilities, or that generating test inputs by solving all path constraints detects all BOF vulnerabilities present in an application. One interesting future research direction will be to identify a testing process that relates vulnerabilities with test coverage information. We also observe that few works address the generation of test cases based on software design languages such as UML that can reveal security vulnerabilities. Finally, very few tools offer full automation support. Therefore, more work needs to be done to automate the program security testing process.
ACKNOWLEDGMENT
This work is partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).

REFERENCES
[1] Common Vulnerabilities and Exposures, http://cve.mitre.org
[2] Open Source Vulnerability Database, http://osvdb.org
[3] The Open Web Application Security Project (OWASP), http://www.owasp.org/index.php/Top_10_2007
[4] Aleph One, "Smashing the Stack for Fun and Profit", Phrack Magazine, Volume 7, Issue 49, Nov 1996, http://www.phrack.org/archives/49/P49-14
[5] W. G. Halfond, J. Viegas, and A. Orso, "A Classification of SQL-Injection Attacks and Countermeasures", Proceedings of the International Symposium on Secure Software Engineering (ISSSE 2006), March 2006.
[6] Scut/team teso, "Exploiting Format String Vulnerabilities", 2001, http://doc.bughunter.net/format-string/exploit-fs.html
[7] G. Zuchlinski, "The Anatomy of Cross Site Scripting", November 2003.
[8] M. Dowd, J. McDonald, and J. Schuh, The Art of Software Security Assessment, Addison-Wesley, 2007.
[9] A. Mathur, Foundations of Software Testing, First edition, Pearson Education, 2008.
[10] R. Xu, P. Godefroid, and R. Majumdar, "Testing for Buffer Overflows with Length Abstraction", Proceedings of the International Symposium on Software Testing and Analysis, Seattle, WA, July 2008, pp. 27-38.
[11] P. Vilela, M. Machado, and E. Wong, "Testing for Security Vulnerabilities in Software", Proceedings of Software Engineering and Applications (SEA 2002), Cambridge, USA, November 2002.
[12] O. Tal, S. Knight, and T. Dean, "Syntax-based Vulnerabilities Testing of Frame-based Network Protocols", Proceedings of the 2nd Annual Conference on Privacy, Security and Trust, Fredericton, October 2004, pp. 155-160.
[13] A. Tappenden, P. Beatty, J. Miller, A. Geras, and M. Smith, "Agile Security Testing of Web-based Systems via HTTPUnit", Proceedings of the Agile Development Conference (ADC), Denver, Colorado, July 2005, pp. 29-38.
[14] P. Pari Salas, P. Krishnan, and K. J. Ross, "Model-Based Security Vulnerability Testing", Proceedings of the Australian Software Engineering Conference, Australia, 2007, pp. 284-296.
[15] Y. Huang, C. Tsai, D. T. Lee, and S. Kuo, "Non-Detrimental Web Application Security Scanning", Proceedings of the 15th International Symposium on Software Reliability Engineering, France, November 2004, pp. 219-230.
[16] S. Kals, E. Kirda, C. Kruegel, and N. Jovanovic, "SecuBat: A Web Vulnerability Scanner", Proceedings of the 15th International Conference on World Wide Web, Edinburgh, Scotland, May 2006, pp. 247-256.
[17] B. Breech and L. Pollock, "A Framework for Testing Security Mechanisms for Program-based Attacks", Proceedings of the 2005 Workshop on Software Engineering for Secure Systems (Building Trustworthy Applications), St. Louis, Missouri, pp. 1-7.
[18] Y. Huang, C. Tsai, D. T. Lee, and S. Kuo, "Non-Detrimental Web Application Security Scanning", Proceedings of the 15th International Symposium on Software Reliability Engineering, France, November 2004, pp. 219-230.
[19] Y. Kosuga, K. Kono, M. Hanaoka, M. Hishiyama, and Y. Takahama, "Sania: Syntactic and Semantic Analysis for Automated Testing against SQL Injection", Proceedings of the 23rd Annual Computer Security Applications Conference, Miami, December 2007, pp. 107-117.
[20] A. Kieżun, P. J. Guo, K. Jayaraman, and M. D. Ernst, "Automatic creation of SQL injection and cross-site scripting attacks", MIT Computer Science and Artificial Intelligence Laboratory Technical Report MIT-CSAIL-TR-2008-054, Cambridge, MA, September 2008.
[21] J. Offutt, Y. Wu, X. Du, and H. Huang, "Bypass Testing of Web Applications", Proceedings of the 15th International Symposium on Software Reliability Engineering, France, November 2004, pp. 187-197.
[22] S. McAllister, E. Kirda, and C. Kruegel, "Leveraging User Interactions for In-Depth Testing of Web Applications", Proceedings of the 11th International Symposium on Recent Advances in Intrusion Detection, Massachusetts, USA, 2008, pp. 191-210.
[23] E. Fong, R. Gaucher, V. Okun, and P. Black, "Building a Test Suite for Web Application Scanners", Proceedings of the 41st Hawaii International Conference on System Sciences (HICSS'08), Hawaii, January 2008, pp. 478-485.
[24] J. Fonseca, M. Vieira, and H. Madeira, "Testing and Comparing Web Vulnerability Scanning Tools for SQL Injection and XSS Attacks", Proceedings of the 13th Pacific Rim International Symposium on Dependable Computing, Australia, December 2007, pp. 365-372.
[25] G. Vigna, W. Robertson, and D. Balzarotti, "Testing Network-based Intrusion Detection Signatures Using Mutant Exploits", Proceedings of the ACM Conference on Computer and Communications Security (ACM CCS), Washington DC, October 2004, pp. 21-30.
[26] FlawFinder, http://www.dwheeler.com/flawfinder/
[27] D. Evans and D. Larochelle, "Improving Security Using Extensible Lightweight Static Analysis", IEEE Software, 19(1):42-51, 2002.
[28] U. Shankar, K. Talwar, J. Foster, and D. Wagner, "Detecting Format String Vulnerabilities with Type Qualifiers", Proceedings of the 10th USENIX Security Symposium, Washington, August 2001, pp. 201-218.
[29] G. Wassermann and Z. Su, "Static Detection of Cross-site Scripting Vulnerabilities", Proceedings of the 30th International Conference on Software Engineering (ICSE), Leipzig, Germany, May 2008, pp. 171-180.
[30] V. B. Livshits and M. S. Lam, "Finding Security Vulnerabilities in Java Applications with Static Analysis", Proceedings of the 14th USENIX Security Symposium, Baltimore, MD, August 2005.
[31] W. Halfond and A. Orso, "AMNESIA: Analysis and Monitoring for NEutralizing SQL-Injection Attacks", Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering (ASE 2005), Long Beach, CA, USA, November 2005, pp. 174-183.
[32] G. A. Di Lucca, A. R. Fasolino, M. Mastoianni, and P. Tramontana, "Identifying Cross Site Scripting Vulnerabilities in Web Applications", Proceedings of the Sixth International Workshop on Web Site Evolution (WSE 2004), Chicago, September 2004, pp. 71-80.
[33] S. Thomas and L. Williams, "Using Automated Fix Generation to Secure SQL Statements", Third International Workshop on Software Engineering for Secure Systems (SESS'07), Minneapolis, 2007, pp. 9-14.
[34] J. Lin and J. Chen, "An Automatic Revised Tool for Anti-Malicious Injection", Proceedings of the 6th International Conference on Computer and Information Technology (CIT 2006), Seoul, Korea, September 2006, pp. 164-169.
[35] H. Shahriar and M. Zulkernine, "Mutation-based Testing of Buffer Overflow Vulnerabilities", Proceedings of the Second International Workshop on Security in Software Engineering (IWSSE 2008), IEEE CS Press, Turku, Finland, July 2008, pp. 979-984.
[36] H. Shahriar and M. Zulkernine, "MUSIC: Mutation-based SQL Injection Vulnerability Checking", Proceedings of the Eighth International Conference on Quality Software (QSIC 2008), IEEE CS Press, London, August 2008, pp. 77-86.
[37] H. Shahriar and M. Zulkernine, "Mutation-based Testing of Format String Bugs", Proceedings of the 11th High Assurance Systems Engineering Symposium (HASE 2008), IEEE CS Press, Nanjing, China, December 2008, pp. 229-238.
[38] H. Shahriar and M. Zulkernine, "MUTEC: Mutation-based Testing of Cross Site Scripting", Proceedings of the 5th International Workshop on Software Engineering for Secure Systems (SESS), Vancouver, Canada, May 2009.
[39] F. "FX" Lindner, "Software Security is Software Reliability", Communications of the ACM, Volume 49, Issue 6, June 2006, pp. 57-61.