2012 IEEE Sixth International Conference on Software Security and Reliability Companion
Towards a Model–Based Security Testing Approach of Cloud Computing Environments
Philipp Zech, Michael Felderer, Ruth Breu
Institute of Computer Science, University of Innsbruck, Innsbruck, Austria
When talking about security in Cloud computing environments, providers most of the time rely on the mechanisms of virtualization security, preventing attacks like virtual machine escape or communication via covert channels between virtual machine instances. Certainly, login authentication and also policies in terms of SLAs play a major role in Cloud computing; however, as already mentioned, from our point of view this does not suffice. What we especially want to point out in this paper is that security in the context of Cloud computing environments cannot be assured once and for all: assuring the security of the complete Cloud is a task to be performed during the complete lifespan of the Cloud itself. This is motivated by the fact that Clouds are subjected to daily changes in terms of newly deployed applications and offered services. Considering the various layers of the Cloud, on the lowest, hardware-oriented layers (IaaS and PaaS), classical security mechanisms like access policies, monitoring or intrusion detection suffice. However, the more complex the levels and the offered services become in terms of running software, the more complex it is to assure their secure operation. Considering SaaS, security becomes of major relevance. Basically, there are two motivational reasons why especially the SaaS layer should be considered in terms of security and why the call for profound security is reinforced:
Abstract—In recent years Cloud computing has become one of the most aggressively emerging computing paradigms, resulting in a growing rate of adoption in the area of IT outsourcing. However, as recent studies have shown, security is most of the time the one requirement that is neglected. Yet, especially because of the way Cloud computing is used, security is indispensable. Unfortunately, assuring the security of a Cloud computing environment is not a one-time task; it is a task to be performed during the complete lifespan of the Cloud. This is motivated by the fact that Clouds undergo daily changes in terms of newly deployed applications and offered services. Based on this assumption, in this paper we propose a novel model–based, change–driven approach, employing risk analysis, to test the security of a Cloud computing environment across all layers. As the main intrusion point, our approach exploits the public service interfaces, as they are a major source of newly introduced vulnerabilities, possibly leading to severe security incidents.
Index Terms—Software Penetration; Risk Analysis; Model–Based Testing; Security Testing; Cloud Computing; Cloud Security; Fuzzing
I. INTRODUCTION
Cloud computing is a new paradigm fostering the idea of computing as a service. It is well on the way to transforming a large part of the IT industry, unlocking novel uses of software and even shaping the way IT hardware is designed and purchased. Cloud computing refers to both the applications delivered as services over the Internet and the hardware and software systems in the data centers that provide those services [1]. Hence, cloud services can be shared on several layers, referred to as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). On the one side, companies can save time and money by outsourcing IT landscapes and business processes into clouds. On the other side, moving possibly sensitive data and business processes into the cloud demands a high degree of information security. In the course of outsourcing, along with applications and platforms, cloud consumers also move parts of or even their complete data sets into the Cloud. It is quite obvious that data stored online in such a multi-user environment needs to be secured and protected from illicit access. However, against all hunches, a recently performed study by the Ponemon Institute [2] shows that currently the majority of Cloud computing providers either does not really take security into account or neglects it completely. Yet, for sure, such statistics always need to be considered very carefully, as in fact they only take into consideration the efforts of the Cloud providers themselves. Fortunately, cloud-enabling technologies are already equipped with a profound set of security features; however, from our point of view this is insufficient, as security should be taken into account at all layers.
1) Changing the public interfaces of the Cloud by means of newly deployed applications introduces new, possibly exploitable vulnerabilities in terms of offered services.
2) The services publicly offered by the Cloud are the main point of intrusion for malicious agents attempting to compromise the Cloud and abuse its data or computational power.
Also in [3] the authors clearly state that the SaaS layer is of special importance regarding Cloud security. Now, pursuing this thought of especially considering the SaaS layer and imagining oneself as a malicious user, what one attempts is to compromise the Cloud in one way or the other (i.e., Denial–of–Service or repudiation) by feeding it malicious input as parameters to offered services. For sure, such can be (and is) done on all layers; however, SaaS is of special importance due to its permanent changes and exposedness. In pursuing his malicious intentions, as already stated earlier, a malicious agent feeds the Cloud with malicious data and tries to track and analyze the outcomes in order to find an exploitable flaw. Put another way, a malicious agent tests the Cloud's hardness in terms of security. In doing so, one might effectively reveal
II. POSITIONING WITH RESPECT TO RELATED WORK
flaws in a deployed application offering a service and hence compromise the Cloud or parts of it. However, moving back to a more complete view of the Cloud, incorporating all layers, it should now be quite obvious that security depends on a lot of factors, the most important among them being the dedicated security of the offered services. This is basically due to the circumstance that maliciously fed data from a service travels down through every virtualized and also physical layer until it is eventually run on a physical machine (hypervisor), where, in the end, it can actually do real harm. Hence, to put it another way, only securing a hypervisor does not suffice, as there is a broad range of vulnerabilities introduced by the upper layer services, invisible to the hypervisor, yet leading to possible incidents. Consequently, Cloud computing environments demand sophisticated security testing at all layers, to assure that no running service allows compromising the Cloud in any way.
The topic of cloud security has been identified as important from a research point of view [7] and from an application point of view [8], as clouds are open and data is stored in the cloud. In a current IDC survey on cloud computing, cloud users rate security of the cloud as the most important topic, even before availability and performance. Cloud security is an evolving sub-domain of information security but still a mostly neglected topic in the area of cloud computing [9]. Security testing for cloud applications to control compliance with the cloud's security requirements has not been considered so far. Riungu et al. [10] highlight several aspects of testing in the cloud: Testing can be provided as a service to cloud consumers, which is typically called Testing-as-a-Service (TaaS), Test-Support-as-a-Service (TSaaS) or Software-Testing-as-a-Service (STaaS). The cloud provider can test the cloud environment itself when the system is developed or maintained. Also, the customer can use cloud services on the infrastructure, platform or service level, and test their process integration. Riungu et al. highlight security testing in the cloud as an important research issue, as it is challenging but also of high importance for users. The approaches presented by King et al. [11], Ciortea et al. [12], and Yu et al. [13] follow the idea of providing test as a service to cloud consumers for enhanced application testing. Nevertheless, as these approaches attempt testing from inside the cloud environment, they are not applicable for security testing, which also focuses on the examination of risks posed from outside the cloud. An approach to cloud infrastructure testing is proposed by Rings et al. [14], focusing on cloud interoperability testing. Yet, this approach does not take security aspects of any kind into consideration at all. Security testing is often fundamentally different from traditional software testing because it emphasizes what an application should not do rather than what it should do. This fact was also pointed out by Alexander in [15], where the author distinguishes between positive and negative requirements modeled as use cases and misuse cases. For testing positive security requirements, i.e., functional security properties that are defined in the requirements specification, classical functional testing techniques can be applied ([16] provides a detailed listing of functional testing techniques for testing positive security requirements, e.g., equivalence testing or decision tables). Testing positive security requirements can be performed by the traditional test organization [17]. Negative security requirements express what a system should not do, respectively what should not happen. The set of negative requirements is therefore infinite in principle, and this makes it impossible to achieve complete test coverage. A promising way to overcome this problem is the derivation of tests based on a risk analysis [16]. Due to this fact, risk-based testing (RBT) techniques [18] are highly relevant for security testing [19]. Based on a threat model, or based on abuse cases [20], vulnerabilities can be identified and prioritized relying on a risk analysis. Our approach supports testing positive and negative security requirements but focuses on negative security requirements, which have so far not been investigated in detail. In practice, penetration tests are frequently used for testing negative security requirements.
Penetration testing approaches attempt to compromise the security of a system [21] by acting like an attacker trying to penetrate the system and exploit
In this paper we present a novel approach for the risk–driven, model–based negative security testing of Cloud computing environments. The idea is to employ malicious users' techniques by initially analyzing the public interfaces of the system, thereby identifying possible intrusion points, feeding them with malicious input and, ideally, exploiting them. In doing so, we start by performing a model–to–model transformation–based risk analysis on the system model of the Cloud Under Test (CUT), yielding a risk model. This very model contains a formalized description of the possible risks the CUT is exposed to. As a next step, the risk model is employed in another model–to–model transformation, allowing the derivation of security test cases with the main intent of breaking the CUT, put another way, negative security test cases. This very model is called the misuse case model. Both the risk analysis and the derivation of test cases are supported by a provided Vulnerability Knowledge Database. As a next step, we skip the generation of executable test code and instead immediately interpret the misuse case model by means of runtime model execution. While interpreting the misuse case model, our test engine employs a generating fuzzer for the inline generation of security test data. Prior to execution we allow the misuse case model to be modified and assure its validity by means of model checking. Besides, our approach employs the idea of Living Models [4], allowing us to implement our idea of a change–driven testing approach, capable of reacting on any changes in the CUT's system model at any time, and hence allowing to assure security over the complete lifespan of the Cloud in a sophisticated way. Model–based risk analysis allows identifying design flaws early and managing the infinite number of test cases for negative requirements by prioritizing according to the threat level. As modeling language, UML [5], [6] has been chosen, as it is a common, industry–wide accepted standard. The remainder of this paper is structured as follows. Section II gives an overview of the area of security testing in relation to our work. Hereinafter, Sections III, IV, V and VI introduce and give a detailed discussion of our novel security testing approach. Finally, prior to concluding our contribution in Section VIII, Section VII sketches an evaluation scenario as intended to be used for our approach.
its vulnerabilities. Besides penetration testing, another well-known approach to negative security testing is fuzzing [22]. Fuzzers have initially been developed for testing protocol implementations for possible security flaws due to improper handling of malicious input. Recently, the idea of fuzzing has been combined with the concept of model-based testing, which allows for systematic and automated testing of software applications [23]. Yet, what makes fuzzers difficult to use is the fact that a fuzzer by design cannot be general-purpose. In our approach we incorporate both penetration testing and fuzzing, which we systematize compared to prior approaches for effective testing of negative security aspects. On a more abstract level, model-based security testing has been applied for testing access control policies in several approaches, e.g., [24]–[28]. Approaches similar to our model–based security testing methodology are presented by Weider et al. [29], Felderer et al. [30] and Jürjens [31]. In the first quoted paper, an approach to sound security testing of web services, based on fault models, is presented by Weider et al. Nevertheless, their work currently considers neither test identification, test generation nor test automation at all. The work presented by Felderer et al. attempts to test the security of a service centric system (SCS) based on predefined security requirements contained in the model of the SUT. Yet, this approach only incorporates manually defined positive requirements and does not incorporate misuse cases derived from risk analysis. The idea presented by Jürjens employs fault injection on the model level to properly test security requirements of software. However, the presented work does not have that much in common with classical model–driven testing techniques (i.e., test code generation from models); it more or less focuses on checking the soundness and completeness of the model of the SUT under various mutations caused by fault injection.
[Figure 1 depicts the overall workflow: System Model Design (yielding the CUT System Model), Misuse Case Model Generation (Risk Analysis producing the Risk Model, Misuse Case Generation producing the Misuse Case Model, Model Reviewing and Consistency Checking yielding the Fixture and Validated Misuse Case Models, supported by the Vulnerability Knowledge Database), System Implementation (CUT Implementation, WSDL/WADL files) and Test Execution (Stub Generation, Test Data Fuzzing, Test Execution, Test Feedback Generation, producing the Testlog).]
Fig. 1. Risk–Driven Security Testing Methodology
[6] modeling language; however, it only allows a restricted set of UML2 notions to be used. As in the context of Cloud computing one only deals with interface descriptions in terms of WSDL or WADL, we decided to restrict the System Model to the concepts of interfaces, operations, operation parameters and primitive as well as complex type definitions. From our point of view, such a restriction is justifiable as it simplifies the System Model and avoids ambiguities, yet remains powerful enough to be used for security testing. Besides, as already mentioned earlier, in imitating a malicious agent's techniques it only makes sense to use the same kind of information as he/she does. In case no valid UML2 System Model exists, we provide tools (model–to–model transformations) allowing to transform WSDL or WADL service descriptions into valid, equivalent UML2 models (a minimal sketch of this extraction follows below). The lower left part of Figure 1 depicts the task named System Implementation. However, as the implementation of the Cloud is not part of the tester's duty, it will not be considered further in this paper.
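To give an impression of such a derivation, the following minimal sketch (an illustration under assumptions, not the actual transformation, which targets UML2/EMF models) extracts interface and operation names from a plain WSDL 1.1 description using only Python's standard XML parser; the file name is hypothetical:

import xml.etree.ElementTree as ET

# Simplified extraction of interfaces (portTypes) and their operations
# from a WSDL 1.1 description.
WSDL_NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

def extract_interfaces(wsdl_path):
    """Return a dict mapping interface (portType) name -> list of operation names."""
    tree = ET.parse(wsdl_path)
    model = {}
    for port_type in tree.getroot().findall("wsdl:portType", WSDL_NS):
        ops = [op.get("name")
               for op in port_type.findall("wsdl:operation", WSDL_NS)]
        model[port_type.get("name")] = ops
    return model

# Example (hypothetical file name):
# print(extract_interfaces("vcloud.wsdl"))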
III. A FRAMEWORK FOR MODEL–BASED SECURITY TESTING
The basic idea of our security testing framework is to address two, from our point of view, major challenges in the area of security testing, viz.
1) How to achieve maximum coverage of the security aspects in terms of testing?
2) How to reflect and properly address changes in the running system, allowing iterative verification of its security?
Keeping those two guidelines in mind, we developed our approach as depicted in Figure 1. To achieve maximum coverage, we decided to use a risk analysis, allowing us to reveal nearly any potential flaw in the system and hence to achieve maximum coverage in terms of tested requirements, both positive and negative. The second question, concerning the reflection of changes, is, as already mentioned in Section I, addressed by implementing the Living Models approach [4], enabling the idea of change–driven development by making it possible to react on model changes in terms of model change events. In the following we give a cumulative discussion of the basic tasks of our approach; a detailed discussion of each task follows in the subsequent sections.
B. Misuse Case Model Generation
The Misuse Case Model Generation is one of the central tasks of our security testing approach. As can be seen in Figure 1, it consists of various subtasks, viz. Risk Analysis, Misuse Case Generation, Model Reviewing and Consistency Checking. As already mentioned earlier, the Risk Analysis is the first subtask performed by our approach. Realized as a model–to–model transformation, its basic mechanism is to analyze the CUT System Model by applying attack patterns [32]. If successful, based on the applied attack pattern, possible exploits and outcomes are derived in conjunction with the Vulnerability Knowledge Database (see Section IV-A for a detailed discussion). Finally, risk quantification is performed by calculating risk related values like impact, threat level and probability of occurrence. This calculation is necessary insofar as it allows defining a first, loose structuring and categorization of the identified risks. This idea of loose structuring and categorization is reflected inside the resulting model by means of packages and features and is vital for defining valid test suites. As a result, the model–to–model
A. System Model Design and System Implementation
The System Model of the CUT acts as the main input to our security testing approach. It is based on the UML2 [5],
transformation finally generates the Risk Model (see Section IV-C for a detailed discussion), which contains a detailed, formalized description of all identified risks. The Risk Model itself is based on its own meta–model, developed on top of the Eclipse Modeling Framework [33]. As a next step, the Risk Model is employed in another model–to–model transformation, yielding the Misuse Case Model (see Section IV-D for a detailed discussion). This very model is retrieved by again incorporating the Vulnerability Knowledge Database; however, now the purpose is to derive formalized descriptions of concrete attacks. Put another way, each risk contained in the Risk Model is processed and, according to its contents, equivalent Misuse Cases are generated. A distinct Misuse Case is also considered a concrete test case to be executed against the CUT. Hence, the Misuse Case Model can also be seen as the central test model of our approach. The aforementioned loose structuring and categorization is again reflected inside the Misuse Case Model by means of packages and priorities assigned to the various misuse cases. Like the Risk Model, the Misuse Case Model is also based on its own meta–model on top of the Eclipse Modeling Framework [33]. Although the basic idea of our approach is to work completely autonomously in terms of test generation and execution, we allow the Misuse Case Model to be manually adapted by modifying, adding or deleting distinct misuse cases. Besides, this possibility of manual modification is essential for the definition of security test suites reflecting different attack scenarios. The definition of test suites and test scenarios is hereby fostered by the aforementioned priorities and by the execution count assigned to each misuse case. Changing the respective values allows selecting or de-selecting misuse cases for execution or setting their number of executions, respectively, and hence adapting the workflow of the test execution (a small sketch illustrating this selection follows at the end of this subsection). Modifying the Misuse Case Model yields the Fixture Misuse Case Model (see Section IV-E for a detailed discussion). Finally, prior to executing the Fixture Misuse Case Model, consistency and validity checks are performed on it, to assure that it contains sound and complete descriptions of misuse cases and, besides, also valid test suite definitions. Sound and complete in the context of a formalized test case, and especially in our case, means nothing more than that it has to contain all necessary information like runtime configuration, assertions and test data specification. The task of model checking is depicted in Figure 1 by the activity named Consistency Checking (also see Section IV-E for a detailed discussion). As checking language, OCL [34] is used.
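As an illustration of how priority and execution count steer test execution, consider the following sketch; the attribute names follow Figure 4, while the additional priority levels and the selection policy itself are assumptions made only for this example:

# Sketch: select misuse cases for a test run based on priority and execution count.
def select_for_run(misuse_cases, minimum_priority):
    order = {"PRIORITY_LOW": 0, "PRIORITY_NORMAL": 1, "PRIORITY_URGENT": 2}  # assumed levels
    selected = []
    for mc in misuse_cases:
        # an execution count of 0 de-selects a misuse case entirely
        if mc["executions"] > 0 and order[mc["priority"]] >= order[minimum_priority]:
            selected.append(mc)
    return selected

mcs = [
    {"ID": "Testcase38_SQL_INJECTION", "priority": "PRIORITY_URGENT", "executions": 10},
    {"ID": "Testcase12_XSS", "priority": "PRIORITY_LOW", "executions": 0},  # de-selected
]
print([mc["ID"] for mc in select_for_run(mcs, "PRIORITY_NORMAL")])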
As can be seen in the lower right part of Figure 1, the Test Execution itself again consists of various subtasks, namely Stub Generation, Test Data Fuzzing, Test Execution and Test Feedback Generation. The Stub Generation can be seen as an auxiliary task, allowing the automatic generation of adapters for the CUT out of WSDL or WADL service descriptions. These adapters are necessary insofar as they enable the transparent and seamless communication with the various services of the CUT, necessary for performing the calls contained in misuse case descriptions. Test Data Fuzzing is a task performed inline with Test Execution. During the Test Execution, the Misuse Cases contained in the Misuse Case Model are processed one after another by parsing their contents and creating in-memory test cases, which are then executed against the CUT. During the parsing of each Misuse Case, the test data fuzzer is invoked for generating the necessary test data needed by the concrete service call encapsulated inside the misuse case. After executing a test case against the CUT, put another way, invoking a service, the outcomes of the call are evaluated against a predefined assertion, judging a test case's failure or success. At this point, a detailed discussion of the fuzzer is skipped and postponed to Section V. Finally, as a last subtask during Test Execution, test feedback is generated during Test Feedback Generation. The generation of test feedback in our case promotes the following activities:
• Generation of a detailed test log, containing all relevant information concerning a test case (outcome, execution time, used data).
• Playing test outcomes back into the used models, viz. the System and the Risk Model, by annotating risks in the Risk Model and operations as well as interfaces in the System Model, respectively.
• Analyzing the outcomes of test cases to improve the contents of the Vulnerability Knowledge Database.
The above mentioned process of playing back feedback into the various models resembles a part of our idea of change–driven model evolution. The underlying idea is to attach a state machine to each involved model, viz. the System, the Risk and the Misuse Case Model. The state machine's purpose is to react on changes inside the model and propagate them down– or upwards to the next model in line (i.e., from the System Model to the Risk Model and further up to the Misuse Case Model, or, conversely, from the Misuse Case Model to the Risk Model and further down to the System Model). Doing so allows reacting on any changes in the underlying CUT in terms of a changing System Model, triggering an updated Misuse Case Model, or, as already mentioned, the other way round annotating the models with test feedback, also resulting in an updated Misuse Case Model. A minimal sketch of this propagation idea follows below.
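The following sketch only illustrates the propagation idea; the class and event names are assumptions and do not reflect the state machines actually attached to the EMF models:

# Sketch of change-driven propagation: a change event in one model notifies
# listeners that regenerate the dependent model (or annotate it with feedback).
class Model:
    def __init__(self, name):
        self.name = name
        self.listeners = []  # downstream/upstream models to notify

    def notify(self, event):
        for listener in self.listeners:
            listener.on_change(self.name, event)

class RegeneratingListener:
    def __init__(self, regenerate):
        self.regenerate = regenerate  # callable that rebuilds the dependent model

    def on_change(self, source, event):
        print(f"change in {source}: {event} -> regenerating dependent model")
        self.regenerate(event)

# Wiring (names are illustrative, not the actual tool API):
system_model = Model("SystemModel")
system_model.listeners.append(
    RegeneratingListener(lambda e: print("re-running risk analysis")))
system_model.notify("operation 'login' signature changed")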
C. Test Execution
The Test Execution is the second central task of our approach: as the name already indicates, the execution of tests by means of runtime model execution. Put another way, we skip the generation of executable test code in any target language and instead directly execute the Misuse Case Model against the CUT. This comes with three advantages, namely
1) Our approach is target platform– and target language–independent.
2) Directly executing the model allows providing online feedback inside the Misuse Case Model during execution.
3) Directly executing the model gives full traceability support for the complete test execution.
In the subsequent sections, the various tasks and subtasks depicted in Figure 1 are discussed in detail.
IV. SECURITY TEST DESIGN
In this section, we discuss the system and security test modeling activities of our methodology in detail. We first present the vulnerability knowledge database, and then the system, risk and misuse case modeling. Finally, we show how the defined models are reviewed and checked.
However, before we delve into those topics, we first give a brief overview of the most important threats to cloud computing to set the scene for cloud-tailored security testing. Although the idea of cloud computing and its core technologies are nothing new, the concept of how they are reused in the cloud computing context is new and different. This basically also applies to the security aspects of cloud computing; however, the context in which and how cloud environments are used poses new and severe security threats to those very environments. According to the Cloud Security Alliance (CSA), there currently exist seven major threats to cloud computing [35], most of them related to attacks against a cloud environment for further abuse of it. In addition to those threats, in the Security Guidance for Critical Areas of Focus in Cloud Computing [36], the CSA states that security issues for cloud computing can be partitioned into two categories, namely Governing in the Cloud and Operating in the Cloud, whereas the former mainly deals with non-testable, legal aspects and the latter with testable, technical aspects. The domains of most concern hereby mainly address either confidentiality, integrity and availability issues or application-specific security issues. Putting it all together, cloud computing security can be seen as a mixture of classic computer security and contemporary information security. Cloud security testing is tailored to the intersection of these two domains.
vCloudAPI
+login(username: String, password: String): void
+logout(): void
+setProxy(proxyhost: String, port: int, scheme: String): void
+setProxyCredentials(username: String, password: String): void

Fig. 2. Excerpt of the API of VMWare's vCloud Director
C. Risk Modeling
As already mentioned earlier, the risk analysis represents the entry point of our security testing approach. The idea is to retrieve a set of blacklisted operations, potentially vulnerable to various exploits. The risk analysis initially starts by scanning the CUT's interfaces and their contained operations. During this process of scanning, attack pattern matching [32] is performed to effectively identify the relevant operations. This process of matching is crucial, as it is responsible for successfully identifying all potentially vulnerable operations to be examined later on during testing. More precisely, during this process of matching, the signatures of the various operations contained in the interfaces of the CUT are analyzed while at the same time trying to apply an attack pattern. If a pattern fits, the operation becomes blacklisted for further evaluation during later phases of the risk analysis. With the given set of blacklisted operations, our risk analysis enters its next stage, where it creates the risk's specific threat profile. The threat profile's purpose is to contain a detailed description of all possible exploits together with the possible outcomes of each exploit, respectively. Besides, the threat profile also has a reference to the examined operation, including all its relevant information (signature and return type). The threat profile and its validity are crucial insofar as they are the base for generating test cases later on. Finally, after the threat profile has been created, in a last phase the risk analysis calculates various risk related values, viz.
• impact: The impact of an attack on the system in terms of possible losses, i.e., loss of data.
• probability of occurrence: The chance that an identified exploit may happen.
• threat level: A cumulative value, calculated out of the impact and the probability of occurrence, defining the severity of an attack coming along with the described risk.
The calculation of the above mentioned values again occurs by analyzing the operation signatures in combination with the generated threat profile. In doing so, the risk analysis starts by calculating a complexity factor c for each operation op(p1 : t1 , . . . , pn : tn ), depending on the input parameter types t1 , . . . , tn . The more complex the set of parameter types is, the higher the overall complexity c. The automatically determined complexity of a parameter type ti considers whether a type is primitive or structured and is denoted by c(ti ); the common dependence factor between t1 , . . . , tn is denoted by d. Hereby, determining the complexity c for a structured type requires analyzing its internal structure in terms of accessible attributes and the inheritance structure. Especially, we also consider the size in bytes of the types ti , as this defines the size of a possible attack payload. The overall complexity c of an operation op with the operation signature op(p1 : t1 , . . . , pn : tn ) is then the sum of the parameter type complexities c(ti ) and the dependence factor d(t1 , . . . , tn ), i.e.,
A. Vulnerability Knowledge Database
The idea of the Vulnerability Knowledge Database, as we use it in our testing approach, is to provide a source of formalized knowledge on system-specific vulnerabilities and exploits. Yet, in doing so it does not follow the classical concept of a tuple-based relational database but instead exploits QVT's [37] concept of libraries. The concept behind such QVT libraries is to offer a set of predefined, imperative operations for reusability. More exactly, instead of providing queryable tables, we exploit QVT's concept of libraries to define stored knowledge inside maps, accessible during the model–to–model transformations (which are themselves also based on QVT [37]) by using their related key-value mechanisms. Employing the same framework, i.e., QVT [37], for both the model–to–model transformations and the Vulnerability Knowledge Database results in a seamless integration of our system-specific insecurity knowledge into our security testing approach. Additionally, avoiding to use another language than QVT for storing our knowledge also fosters the querying process, as no mappings between runtime data types and relational data types are necessary.
B. System Modeling
As mentioned before, the System Model is based on a restricted subset of UML2. Figure 2 shows a shortened excerpt of the interface of VMware's [38] vCloud Director [39] API as we use it throughout this paper. As is quite obvious from the depicted interface in Figure 2, the System Model is very simple in its nature, only depicting relevant public services and their according parameters, yet it exactly suffices our purposes, as it allows seeing a Cloud environment from a malicious agent's perspective when testing against it. However, we skip any further discussion of the System Model at this point and instead refer to the vCloud API Programming Guide for an in-detail discussion [40].
c(op) = Σ_{i=1..n} c(ti) + d(t1 , . . . , tn )
Now, with the calculated complexity factor c and the number of parameters, the risk analysis is finally ready to calculate the aforementioned values. First, it calculates the impact from the number of parameters in combination with their complexity. The reason for doing so is that the number of parameters in combination with their complexity allows estimating the probable impact an attack may have. For sure, in calculating the impact, our risk analysis is not aware of any financial or social values which may be affected by an attack. Yet, such a problem can easily be overcome by annotating the System Model with the relevant information. As a next step, the risk analysis calculates the probability of occurrence. Again, we incorporate the number of parameters in combination with the complexity factor. We motivate this by the fact that the more parameters an operation offers, the higher the risk of an attack, as the attack surface [32] is broader in terms of parameters. Yet, we need to consider the complexity factor, as with growing complexity of the parameters the chance of successfully performing an attack decreases, since it requires more sophisticated knowledge of the system specifics and how to exploit them. Finally, as a last step, the threat level of the described risk is calculated. At this point we only consider the aforementioned calculated impact and probability of occurrence. The result of this calculation is a cumulative value stating how concrete the threat of an attack exploiting one of the identified vulnerabilities actually is. Put another way, it states how threatened the CUT currently is. Table I shows the internal table we currently use to calculate the threat level of a risk, i.e., if the impact is LOW and the probability MEDIUM, the resulting threat level will be LOW (a small computational sketch follows after Figure 3). Figure 3 shows a sample risk as generated out of the System Model depicted in Figure 2 during our risk analysis. The top of the figure shows the Risk itself, whereas the lower part shows the remaining generated artifacts, viz. the ThreatProfile and the Operation, completing the formal description of a risk. As can be seen, the Risk itself only contains an associated ID plus the aforementioned calculated risk related values, viz. impact, probability of occurrence and threat level. The ThreatProfile contains the possible identified exploits, each with a list of possible outcomes, i.e., XML Injection resulting in Denial–of–Service or SQL Injection resulting in Information Disclosure. Finally, the ThreatProfile has an associated reference to the operation in question, in this case the login(...) operation of the vCloudAPI interface (see Figure 2). Based on this formal notation, we later on, as described in the following section, generate the negative security test cases, in other words, the Misuse Cases.
TABLE I
LOOK-UP TABLE USED TO CALCULATE THE Threat Level

                         Impact
Probability     LOW      MEDIUM     HIGH
LOW             LOW      LOW        MEDIUM
MEDIUM          LOW      MEDIUM     HIGH
HIGH            HIGH     HIGH       HIGH
: Risk
ID = "Risk_login"
impact = "MEDIUM"
probability = "HIGH"
threat_level = "HIGH"

: ThreatProfile
DIRECTORY_TRAVERSAL = "[TAMPERING_WITH_DATA, INFORMATION_DISCLOSURE, ELEVATION_OF_PRIVILEGES]"
FORMAT_STRING = "[INFORMATION_DISCLOSURE, ELEVATION_OF_PRIVILEGES]"
HTTP_HEADER_INJECTION = "[DENIAL_OF_SERVICE, SPOOFING_IDENTITY, REPUDIATION]"
OVERFLOW = "[DENIAL_OF_SERVICE, INFORMATION_DISCLOSURE, ELEVATION_OF_PRIVILEGES]"
SQL_INJECTION = "[DENIAL_OF_SERVICE, TAMPERING_WITH_DATA, INFORMATION_DISCLOSURE, SPOOFING_IDENTITY, ELEVATION_OF_PRIVILEGES]"
XML_INJECTION = "[DENIAL_OF_SERVICE, TAMPERING_WITH_DATA]"
XSS = "[REPUDIATION, SPOOFING_IDENTITY, ELEVATION_OF_PRIVILEGES]"

: Operation
name = "login"
parameter1 = "username"
parameter2 = "password"
returnType = "void"

Fig. 3. Sample Risk for the vCloudAPI's (see Figure 2) login(...) Operation
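To make the risk quantification tangible, the following sketch re-computes the values of Figure 3 under assumed, simplified weights; the concrete complexity weights and the impact/probability scoring of our risk analysis are not reproduced here, only the Table I look-up is taken directly from the paper:

# Sketch only: additive complexity measure and threat-level look-up (Table I).
PRIMITIVE_COMPLEXITY = {"String": 1, "int": 1, "boolean": 1}  # assumed weights

THREAT_TABLE = {  # (probability, impact) -> threat level, as in Table I
    ("LOW", "LOW"): "LOW",    ("LOW", "MEDIUM"): "LOW",       ("LOW", "HIGH"): "MEDIUM",
    ("MEDIUM", "LOW"): "LOW", ("MEDIUM", "MEDIUM"): "MEDIUM", ("MEDIUM", "HIGH"): "HIGH",
    ("HIGH", "LOW"): "HIGH",  ("HIGH", "MEDIUM"): "HIGH",     ("HIGH", "HIGH"): "HIGH",
}

def complexity(param_types, dependence=0):
    """c(op) = sum of c(t_i) plus a dependence factor d(t_1, ..., t_n)."""
    return sum(PRIMITIVE_COMPLEXITY.get(t, 3) for t in param_types) + dependence

def threat_level(impact, probability):
    return THREAT_TABLE[(probability, impact)]

# login(username: String, password: String) from Figure 2:
print(complexity(["String", "String"]))   # -> 2 under the assumed weights
print(threat_level("MEDIUM", "HIGH"))     # -> HIGH, as in Figure 3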
D. Misuse Case Modeling
With the Risk Model ready, as a next step the Misuse Case Model is generated in the course of another model–to–model transformation out of the Risk Model. The Misuse Case Model itself is treated as the actual test model of our approach, as it, as already mentioned earlier, contains all test case descriptions and is later on executed against the CUT. As a first step during the concrete Misuse Case Generation (see Figure 1, right upper part), the underlying package structure of the Misuse Case Model is generated. This package structure is essential insofar as it provides the base for defining and modifying test suites and custom test runs. The idea hereby is that, based on the blacklisted operations and their according interfaces, packages in the Misuse Case Model are generated, finally holding the relevant test cases. However, we do not only group test cases based on interfaces and operations; we additionally also consider the identified exploits as a third indicator among which we group test cases. Put another way, our Misuse Case Model has the following inherent package structure:
• TestSuite — root package, containing all other packages and the test cases in the lowest subpackage
• TestSet — subpackage of the TestSuite, generated for each identified interface
• TestCaseSet — subpackage of a TestSet, generated for each blacklisted operation; besides containing instances of TestCaseGroups (see below), the TestCaseSet also holds the necessary reference to the operation in question
• TestCaseGroup — subpackage of a TestCaseSet, generated for each identified exploit, to finally hold and group the concrete test cases (Misuse Cases)
With the package structure generated and ready to be filled, the generation of the distinct Misuse Cases starts. In doing so, each risk contained in the Risk Model is processed and transformed into a valid Misuse Case, which is stored in the relevant package for later execution. The generation of a Misuse Case again employs the Vulnerability Knowledge Database, yet the idea is to derive possible attacks based on a given exploit and its intentional goal (see Figure 3 for pairs of exploit and goal). Talking about a concrete attack in this
: Testcase
assertion1 = " = DATA"
assertion2 = " = EXCEPTION"
executions = "10"
goal = "INFORMATION_DISCLOSURE"
ID = "Testcase38_SQL_INJECTION"
priority = "PRIORITY_URGENT"

: Threaded
instances = "0"
threaded = "false"

: AttackContext
fuzzingContext = "DATA"

: FuzzingSpecification
feature = "ESCAPE"
language = "SQL"

Fig. 4. Sample Misuse Case describing a SQL Injection, generated from the Risk depicted in Figure 3

Algorithm 1: Misuse Case Generation Algorithm
Input: Risk Model rm
Input: Vulnerability Knowledge Database vdb
Output: Misuse Case Model mcm
1 begin
2   foreach risk ∈ rm do
3     exploits ← risk.exploits;
4     foreach exploit ∈ exploits do
5       goals ← exploit.goals;
6       foreach goal ∈ goals do
7         asserts ← vdb.getManifestations(goal);
8         ac ← vdb.createAC(exploit, goal);
9         mc ← createMC(exploit, goal, asserts, ac);
10        mcm.addMC(mc);
11      end
12    end
13  end
14  return mcm;
15 end
context shall be understood as feeding malicious data to the CUT, as the exploit actually already states how to attack; the success of the attack is mainly driven by the employed malicious data. Figure 4 shows one of the Misuse Cases generated out of the Risk depicted in Figure 3, describing an attack of type SQL Injection with the potential goal of Information Disclosure. The left upper corner depicts the TestCase itself, with all the relevant runtime configuration in terms of assertions, executions and its priority. The lower left part of Figure 4 shows the Threaded element, which describes the Misuse Case in terms of concurrent execution. Such a configuration is necessary in case of, e.g., performing distributed Denial–of–Service attacks. The right part of Figure 4 shows the AttackContext and the FuzzingSpecification, providing the relevant configuration for the fuzzer in terms of which kind of data to generate for this specific Misuse Case. Their role is discussed in detail in Section V. At this point, the Vulnerability Knowledge Database again plays a major role, as it allows deriving the relevant required configuration for the malicious test data. Algorithm 1 shows how we generate the Misuse Case Model and the Misuse Cases, like the one depicted in Figure 4. We basically start by iterating over each Risk contained in the Risk Model. For each Risk, based on its threat profile, we access the associated exploits and their goals, respectively. Now, with the exploit and the goal given, as a next step in line 7 we derive assertions, followed by creating the Attack Context in line 8. In doing so, both of these tasks employ the Vulnerability Knowledge Database. Next, in line 9, the concrete Misuse Case is created. As a last step, the newly generated Misuse Case is added to the Misuse Case Model in line 10. After iterating over all Risks and generating all Misuse Cases, the algorithm finally returns the new test model in line 14. The algorithm creates a distinct Misuse Case for each possible combination of exploit and goal within the scope of a Risk. Finally, we want to remark that the above algorithm only considers how a concrete Misuse Case is generated and added to the Misuse Case Model; the generation of packages is skipped for reasons of space limitation.
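Since Algorithm 1 omits the package handling, the following sketch illustrates how generated misuse cases could be grouped along the TestSuite/TestSet/TestCaseSet/TestCaseGroup hierarchy; the dictionary-based representation is an assumption made for illustration, not the tool's actual meta-model:

from collections import defaultdict

# Group misuse cases: interface (TestSet) -> operation (TestCaseSet) -> exploit (TestCaseGroup).
def build_suite(misuse_cases):
    suite = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
    for mc in misuse_cases:
        suite[mc["interface"]][mc["operation"]][mc["exploit"]].append(mc)
    return suite

mcs = [
    {"interface": "vCloudAPI", "operation": "login", "exploit": "SQL_INJECTION",
     "goal": "INFORMATION_DISCLOSURE"},
    {"interface": "vCloudAPI", "operation": "login", "exploit": "XML_INJECTION",
     "goal": "DENIAL_OF_SERVICE"},
]
suite = build_suite(mcs)
print(len(suite["vCloudAPI"]["login"]["SQL_INJECTION"]))  # -> 1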
scenarios. To overcome this drawback we, as already stated, allow the Misuse Case Model to be manually modified by a test designer. These modifications may comprise adding, deleting or changing distinct Misuse Cases. Besides, also changing Misuse Case-specific properties like priority or execution count (see Figure 4) allows altering test execution in a very fine-grained way, put another way, describing test scenarios. As a consequence of allowing such model modifications, we need to provide the necessary model checks to verify, prior to execution, that the Fixture Misuse Case Model is sound and complete in terms of Misuse Case descriptions. In doing so, we offer a variety of predefined OCL queries, allowing verification of each necessary aspect of the Fixture Misuse Case Model. Listing 1 shows such an example OCL constraint, used to check whether each Misuse Case has its execution count feature set.

context TestCase :
  self.executionCount <> null and self.executionCount >= 0
Listing 1. Sample OCL query validating the execution counts feature of each test case
V. FUZZING MALICIOUS DATA
The main reason why we decided to employ fuzzing techniques in the context of our security testing approach is that fuzzing techniques proved their power in searching for potential, hidden security bugs based on malformed input quite early on [41]. However, in contrast to the classic functioning of a fuzzer, consisting of both the provisioning of data and the execution of the tests itself (passing the data to the SUT as input), our fuzzer is solely used for the provisioning of data; the execution of the tests itself is left to the Test Engine (see Section VI). Besides, our fuzzer is a generating fuzzer, crafting malicious data completely from scratch, in contrast to a mutating fuzzer, which, as the name already indicates, works on preexisting
E. Model Reviewing and Checking
As already mentioned earlier in Section III-B, we allow the Misuse Case Model to be manually modified prior to test execution. In its current form, our approach supports the automated unit testing of CUTs in terms of security by trying to achieve maximum coverage in terms of negative requirements. Yet, what our approach does not support (mainly for reasons of infeasibility) is the automated generation of custom test
data (i.e., prerecorded or manually predefined) by mutating it. This decision is due to the fact that a mutating fuzzer simply would not scale in terms of the required set of preexisting data to mutate on, as it would require too much manual labor to retrieve such a set. For providing security test data, our fuzzer makes use of two input artifacts, namely WSDL and WADL service descriptions and the AttackContext in combination with its encapsulated FuzzingSpecification (see the right part of Figure 4), thereby generating test data in three different shapes, viz.:
• Fuzzed Parameters: Provide instances of simple or complex data types to be directly passed as arguments to a service call, e.g., if performing overflow-based or format string attacks.
• Fuzzed Message Contexts: As potential flaws may not only reside in service-specific code, but also inside code of an enabling software component, fuzzed message contexts are necessary when performing attacks like XML or XPath Injections.
• Fuzzed HTTP Requests: Finally, as every kind of service centric system mainly relies on transmitting messages via HTTP, our fuzzer also allows generating complete, fuzzed HTTP requests when performing attacks like Cross–Site–Scripting or HTTP Header Injection.
Coming back to the aforementioned artifacts, the former, WSDL or WADL service descriptions, are used to derive the necessary data types and, if required, the structure of a message context. The latter, the AttackContext, plays a more important role, as it, in combination with the FuzzingSpecification, actually drives the operation of the fuzzer. As can be seen from Figure 4, the AttackContext and the FuzzingSpecification hold the necessary information to configure the fuzzer in terms of what kind of data to generate (i.e., language and feature) and in which shape to generate it (i.e., fuzzingContext). In the concrete case of the Misuse Case depicted in Figure 4, this means that the fuzzer is advised to generate malicious SQL statements based on SQL escape characters, yet only in the form of simple data, as those statements are intended to be used as input parameters for the login(...) call, i.e.,
• admin' #,
• ' or 1=1/* or
• ') or ('1'='1--.
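The following sketch illustrates the idea of a purely generating fuzzer for the SQL case of Figure 4; the payload templates are common textbook injection strings and the table name is assumed, i.e., none of this reflects the actual contents of the Vulnerability Knowledge Database:

import random

# Minimal generating fuzzer sketch: crafts malicious SQL fragments from scratch,
# driven by a FuzzingSpecification-like configuration (language, feature, context).
SQL_ESCAPE_TEMPLATES = [
    "admin' #",
    "' or 1=1/*",
    "') or ('1'='1--",
    "'; DROP TABLE {table};--",
]

def fuzz(spec, rng=random.Random(42)):
    """Return one malicious value for the given specification (sketch only)."""
    if spec.get("language") == "SQL" and spec.get("feature") == "ESCAPE":
        template = rng.choice(SQL_ESCAPE_TEMPLATES)
        return template.format(table="users")  # 'users' is an assumed table name
    raise NotImplementedError("only the SQL/ESCAPE case is sketched here")

spec = {"language": "SQL", "feature": "ESCAPE", "fuzzingContext": "DATA"}
print(fuzz(spec))  # value to be passed, e.g., as the username of login(...)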
domain–oriented concepts (i.e., test cases or risks) have to be broken down and mapped onto concepts of the target language, resulting in a loss of transparency and understandability between testing artifacts and test execution.

Algorithm 2: Model Execution Algorithm
Input: Fixture Misuse Case Model m
Output: Test Log log
1 begin
2   if m is valid then
3     foreach Misuse Case mc ∈ m do
4       if mc.threaded = true then
5         fork (mc.instances);
6         begin
7           for i ← 1 to mc.executions do
8             data ← fuzz (mc.context);
9             result ← invoke (mc.op, data);
10            evaluate (result, mc.assert);
11          end
12        end
13        yield ();
14      else
15        for i ← 1 to mc.executions do
16          data ← fuzz (mc.context);
17          result ← invoke (mc.op, data);
18          evaluate (result, mc.assert);
19        end
20      end
21    end
22    log ← createTestlog ();
23    return log
24  else
25    return log
26  end
27 end

Algorithm 2 describes how we execute the Fixture Misuse Case Model by means of runtime model execution. As input, the algorithm takes the Fixture Misuse Case Model (later on referred to as m) and returns the Test Log (later on referred to as log). Initially, in line 2, a model validity check is performed. Although we already assure the model's validity by the checks described in Section IV-E, this check is inevitable, as it is required by the internal model parsing framework [33]. The actual interpretation starts with the foreach loop in line 3, where the algorithm iterates over each contained Misuse Case mci of the input model m. Next, in line 4, the algorithm checks whether mci requires to be executed in parallel or sequentially. If parallel execution is required, the algorithm continues at line 5, where it forks n instances of mci to be executed in parallel, whereas n depends on the number of concurrent instances as defined in mci. After all instances of mci have been forked, execution of the test logic of mci starts. First, test data is retrieved by invoking the fuzzer, followed by actually invoking the service on the CUT in line 9. Finally, as part of the test logic execution, the retrieved result is evaluated against the defined assertions in line 10. During this evaluation, the online feedback is also played back into m, as mentioned earlier in this section. Finally, as a last step of parallel test execution, each instance of mci waits for the other instances to finish, before
VI. MODEL EXECUTION
In contrast to the traditional idea of model–based testing, where the test model is used to generate executable test code out of it [42], we directly interpret the Fixture Misuse Case Model by means of runtime model execution. This is, as already stated in Section III-C, due to the following three advantages:
• Target platform and target language independence
• Online test feedback inside the Misuse Case Model during execution
• Full traceability during test execution.
Besides the above mentioned reasons for not generating any executable code, we also think that avoiding to do so allows us to define a more domain-oriented testing approach compared to one that generates test code. We simply believe this because currently, when generating executable test code, an existing general-purpose language has to be used as a target language (i.e., Java). However, in doing so, highly abstract
continuing with the next mci+1. In case of sequential execution of mci, the algorithm continues at line 15 instead of line 5. However, the concrete execution of the test logic itself does not differ from what happens during parallel execution between lines 7–10; hence, it will not be explained again in detail. As a last step of Model Execution, the log is generated and returned. During this process of creating the log, the algorithm also analyzes the traces of interpreted Misuse Cases, thereby trying to improve the Vulnerability Knowledge Database in terms of system-specific exploitation knowledge, and additionally also plays the relevant feedback back into the relevant models, viz. the Risk and the System Model. In case m is not valid or a runtime exception is thrown as a result of model interpretation, the algorithm still returns a log, yet containing error messages, and no test feedback is generated. Also, the Vulnerability Knowledge Database remains untouched. For the sample Misuse Case depicted in Figure 4 this means that the algorithm would, after the initial parallelism check in line 4, continue at line 15 (as the threaded flag is set to false, the check returns false and the else branch is taken). The for loop beginning at line 15 would be taken 10 times, based on the defined number of executions, which in this example is set to 10. After each invocation inside the loop, the returned result is checked against the defined assertions, viz. a check whether data was returned (in case the SQL injection led the database to disclose its content) or an exception was thrown (in case the SQL injection led to some unwanted, insecure behavior of the system). The assertions themselves are checked in the order defined; however, as soon as one of the assertions is fulfilled, the checking stops and returns the verdict. If no assertion is fulfilled, the test is considered to have failed.
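A minimal sketch of this verdict logic is given below; the function and assertion names are assumptions made for illustration and do not reflect the actual test engine:

# Assertions are checked in the defined order; the first fulfilled assertion
# determines the verdict, otherwise the test counts as failed.
def evaluate(result, assertions):
    for name, predicate in assertions:
        if predicate(result):
            return name  # e.g. "DATA" or "EXCEPTION", as in Figure 4
    return "FAILED"

# Assertions of the misuse case in Figure 4: data was returned by the CUT,
# or an exception was raised.
assertions = [
    ("DATA", lambda r: bool(r.get("rows"))),
    ("EXCEPTION", lambda r: r.get("exception") is not None),
]
print(evaluate({"rows": ["admin"], "exception": None}, assertions))  # -> DATA
print(evaluate({"rows": [], "exception": None}, assertions))         # -> FAILED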
• The Test Evaluator generates test reports and visualizes test results within the models. It corresponds to the activity Test Feedback Generation.
• The Fuzzer generates test data and oracles, if applicable. It corresponds to the activity Test Data Fuzzing.
• The Model Manipulator generates the models (and hence test artifacts), viz. the risk and the misuse case models. It performs the activities Risk Analysis and Misuse Case Generation.
In its modularity, the tool is only bound to the Eclipse framework. The various components can be exchanged for more custom-tailored extensions, as the tool follows established practices (using XMI as a representation and exchange format).
B. Cloud Under Test
As already mentioned earlier, as a CUT for this paper we used VMWare's vCloud Director to build up our Cloud environment. Doing so allows us to provide Software–, Platform– and also Infrastructure–as–a–Service. Yet, vCloud Director itself only runs on top of our complete cloud setup; the following list gives a brief discussion of the basic software components enabling our Cloud (references for further information are given):
• ESX: ESX [44] is VMWare's hypervisor software, implemented as a stand-alone server. It enables the virtualized environment necessary for instantiating and deploying virtual images.
• VMWare vSphere: VMWare's vSphere [45] allows setting up virtualized datacenters by clustering and managing physical ESX hypervisors.
• VMWare vShield: VMWare's vShield [46] is attachable to a single vSphere and provides Cloud-relevant security features for building a trusted environment.
• VMWare vCloud Director: VMWare's vCloud Director [39] finally runs on top of the aforementioned software stack and enables the provisioning of a managed Cloud computing environment.
However, as in the case of the System Model discussed in Section IV-B, we skip a further detailed discussion of the CUT, as it would both exceed the scope of this paper and the above provided references point to detailed discussions of the mentioned software components. In the course of evaluating our approach, our setup requires nothing more than a computer running our testing software (Testengine), a running Cloud deployment (CUT) and finally, most crucial of all, an online connection between the CUT and the Testengine. With these prerequisites fulfilled, based on a valid System Model, testing can begin.
VII. EVALUATION SCENARIO
In this section, we show how our model–based security testing approach can be applied to concrete cloud computing environments. We first sketch the tool implementation for our approach and then a possible CUT.
A. Test Tool Implementation
Developed as a set of Eclipse [43] plug-ins, the tool consists of various components setting up the whole environment, in order to keep a high level of modularity. The main components correspond to the activities of our testing methodology depicted in Fig. 1 and are as follows:
• The Modeling Environment is used for designing the system model and adapting the intermediary models, viz. the risk and the misuse case model. It processes the activities CUT System Model Design and Model Reviewing.
• The Model Evaluator uses OCL as constraint language. It processes the activity Consistency Checking.
• The Web Service Stubs are used by the test controller to invoke services on the CUT. They can be created manually or generated automatically depending on the service technology. Stubs correspond to the activity Stub Generation.
• The Test Controller executes the tests against the SUT. It processes the activity Test Execution.
VIII. CONCLUSION
In this paper we have presented a novel approach for the risk–driven, model–based security testing of Cloud computing environments, applicable at all layers of services. We believe that the presented approach is beneficial in various respects, viz.:
• It allows defining target platform– and target language–independent test suites.
• It is both expandable and generic in terms of adapting or exchanging the contents of the Vulnerability Knowledge Database.
• It addresses the concept of security testing from a negative perspective, put another way, it does not assure a system's
ACKNOWLEDGMENT
This research was funded by the Austrian Science Fund (FWF): P17380.

REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica et al., “A view of cloud computing,” Communications of the ACM, vol. 53, no. 4, pp. 50–58, 2010.
[2] Ponemon Institute, LLC, “Security of Cloud Computing Providers Study,” 2011, http://www.ca.com/~/media/Files/IndustryResearch/security-of-cloud-computing-providers-final-april-2011.pdf [accessed: Dec. 18, 2011].
[3] S. Subashini and V. Kavitha, “A Survey on Security Issues in Service Delivery Models of Cloud Computing,” Journal of Network and Computer Applications, 2010.
[4] R. Breu, “Ten Principles for Living Models - A Manifesto of Change-Driven Software Engineering,” in CISIS. IEEE Computer Society, 2010, pp. 1–8.
[5] OMG, OMG Unified Modeling Language (OMG UML), Infrastructure, V2.3, 2010. [Online]. Available: http://www.omg.org/docs/formal/07-11-04.pdf
[6] ——, OMG Unified Modeling Language (OMG UML), Superstructure, V2.3, 2010. [Online]. Available: http://www.omg.org/docs/formal/07-11-02.pdf
[7] I. Sriram and A. Khajeh-Hosseini, “Research agenda in cloud technologies,” Arxiv preprint arXiv:1001.3259, 2010.
[8] Ponemon Institute, “Security of Cloud Computing Providers Study,” 2011, http://www.ca.com/~/media/Files/IndustryResearch/security-of-cloud-computing-providers-final-april-2011.pdf [accessed: Dec. 29, 2011].
[9] F. Gens, R. Mahowald, R. L. Villars, D. Bradshaw, and C. Morris, “Cloud Computing 2010: An IDC Update,” 2009, http://www.slideshare.net/JorFigOr/cloud-computing-2010-an-idc-update [accessed: Dec. 29, 2011].
[10] L. Riungu, O. Taipale, and K. Smolander, “Research issues for software testing in the cloud,” in Cloud Computing Technology and Science (CloudCom), 2010 IEEE Second International Conference on. IEEE, 2010, pp. 557–564.
[11] T. M. King and A. S. Ganti, “Migrating Autonomic Self-Testing to the Cloud,” Software Testing Verification and Validation Workshop, IEEE International Conference on, pp. 438–443, 2010.
[12] L. Ciortea, C. Zamfir, S. Bucur, V. Chipounov, and G. Candea, “Cloud9: A software testing service,” ACM SIGOPS Operating Systems Review, vol. 43, no. 4, pp. 5–10, 2010.
[13] L. Yu, W. Tsai, X. Chen, L. Liu, Y. Zhao, L. Tang, and W. Zhao, “Testing as a service over cloud,” in Service Oriented System Engineering (SOSE), 2010 Fifth IEEE International Symposium on. IEEE, 2010, pp. 181–188.
[14] T. Rings, J. Grabowski, and S. Schulz, “On the Standardization of a Testing Framework for Application Deployment on Grid and Cloud Infrastructures,” in The Second International Conference on Advances in System Testing and Validation Lifecycle. IEEE, 2010, pp. 99–107.
[15] I. Alexander, “Misuse cases: Use cases with hostile intent,” IEEE Software, vol. 20, no. 1, pp. 58–66, 2003.
[16] C. C. Michael and W. Radosevich, “Risk-Based and Functional Security Testing,” Cigital, Tech. Rep., 2009, https://buildsecurityin.us-cert.gov/bsi/articles/best-practices/testing/255-BSI.pdf [accessed: May 8, 2010].
[17] B. Potter and G. McGraw, “Software Security Testing,” IEEE Security & Privacy, pp. 81–85, 2004.
[18] S. Amland, “Risk-based testing: Risk analysis fundamentals and metrics for software testing including a financial application case study,” Journal of Systems and Software, vol. 53, no. 3, pp. 287–295, 2000.
[19] C. Wysopal, L. Nelson, D. D. Zovi, and E. Dustin, The Art of Software Security Testing. Addison-Wesley, 2006.
[20] D. Firesmith, “Security use cases,” Journal of Object Technology, vol. 2, no. 1, pp. 53–64, 2003.
[21] M. Bishop, “About Penetration Testing,” IEEE Security & Privacy, vol. 5, no. 6, 2007.
[22] A. Takanen, J. DeMott, and C. Miller, Fuzzing for Software Security Testing and Quality Assurance. Artech House Publishers, 2008.
[23] Y. Yang, H. Zhang, M. Pan, J. Yang, F. He, and Z. Li, “A model-based fuzz framework to the security testing of TCG software stack implementations,” in Multimedia Information Networking and Security, 2009. MINES ’09. International Conference on, vol. 1. IEEE, 2009, pp. 149–152.
[24] R. DeMillo, R. Lipton, and F. Sayward, “Hints on Test Data Selection: Help for the Practicing Programmer,” Tutorial: Software Quality Assurance: A Practical Approach, 1985.
[25] Y. L. Traon, T. Mouelhi, and B. Baudry, “Testing Security Policies: Going Beyond Functional Testing,” in The 18th IEEE International Symposium on Software Reliability, 2007, pp. 93–102.
[26] J. Julliand, P.-A. Masson, and R. Tissot, “Generating Security Tests in Addition to Functional Tests,” in AST ’08: Proceedings of the 3rd International Workshop on Automation of Software Test. ACM, 2008.
[27] A. Pretschner, T. Mouelhi, and Y. Le Traon, “Model-Based Tests for Access Control Policies,” in Proceedings of the 2008 International Conference on Software Testing, Verification, and Validation, 2008.
[28] P.-A. Masson, M.-L. Potet, J. Julliand, R. Tissot, G. Debois, B. Legeard, B. Chetali, F. Bouquet, E. Jaffuel, L. Van Aertrick, J. Andronick, and A. Haddad, “An access control model based testing approach for smart card applications: Results of the POSÉ project,” JIAS, Journal of Information Assurance and Security, vol. 5, no. 1, pp. 335–351, 2010.
[29] W. D. Yu, P. Supthaweesuk, and D. Aravind, “Trustworthy web services based on testing,” Service-Oriented System Engineering, IEEE International Workshop on, pp. 167–177, 2005.
[30] M. Felderer, B. Agreiter, and R. Breu, “Security Testing by Telling TestStories,” in Modellierung 2010. Gesellschaft für Informatik, 2010, pp. 195–202.
[31] J. Jürjens, “Model-based security testing using UMLsec,” Electron. Notes Theor. Comput. Sci., pp. 93–104, 2008.
[32] G. Hoglund and G. McGraw, Exploiting Software: How to Break Code. Pearson Higher Education, 2004.
[33] “Eclipse Modeling Framework,” http://www.eclipse.org/modeling/emf/ [accessed: Oct. 30, 2011].
[34] OMG, Object Constraint Language, V2.2, 2010. [Online]. Available: http://www.omg.org/spec/OCL/2.2/PDF
[35] Cloud Security Alliance, Cloud Security Alliance (CSA) - Top Threats to Cloud Computing, V1.0, 2010. [Online]. Available: https://cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf
[36] ——, Cloud Security Alliance (CSA) - Security Guidance for Critical Areas of Focus in Cloud Computing, V3.0, 2011. [Online]. Available: http://www.cloudsecurityalliance.org/guidance/csaguide.v3.0.pdf
[37] OMG, Meta Object Facility (MOF) 2.0 Query/View/Transformation Specification, V1, 2008. [Online]. Available: http://www.omg.org/spec/QVT/1.0/PDF/
[38] “VMWare Virtualization Software,” http://www.vmware.com [accessed: Dec. 25, 2011].
[39] “VMWare vCloud Director,” http://www.vmware.com/products/vcloud-director/overview.html [accessed: Dec. 25, 2011].
[40] “vCloud API Programming Guide,” http://pubs.vmware.com/vcloud-api-1-5/wwhelp/wwhimpl/js/html/wwhelp.htm#href=welcome/welcome.html [accessed: Dec. 25, 2011].
[41] B. Miller, L. Fredriksen, and B. So, “An Empirical Study of the Reliability of UNIX Utilities,” Communications of the ACM, pp. 32–44, 1990.
[42] M. Utting and B. Legeard, Practical Model-Based Testing: A Tools Approach. Morgan Kaufmann, 2007.
[43] “Eclipse Framework,” http://www.eclipse.org/ [accessed: Oct. 30, 2011].
[44] “VMWare ESX,” http://www.vmware.com/products/vsphere/esxi-and-esx/index.html [accessed: Dec. 25, 2011].
[45] “VMWare vSphere Enterprise,” http://www.vmware.com/products/vsphere/mid-size-and-enterprise-business/overview.html [accessed: Dec. 25, 2011].
[46] “VMWare vShield,” http://www.vmware.com/products/vshield/overview.html [accessed: Dec. 25, 2011].