Cloud computing incident monitoring and reporting

A survey of current tools and techniques

Balasubramaniyan Sundaresan, Technical University of Darmstadt, [email protected]
Dinesh Kandavel, Technical University of Darmstadt, [email protected]

ABSTRACT

Incident monitoring and reporting is one of the most vital parts of security in a cloud computing environment. In this paper we survey the essentials of security incident monitoring and reporting in different kinds of cloud environments and services. The paper gives a brief introduction to the usage and security needs of cloud computing in recent years, starting with basic security incident handling procedures, incident classification and the impact levels of incidents. The methods and tools used for monitoring in current systems are vital to measure the impact and threat level of each incident. Likewise, post-incident functionality such as incident analysis helps to eradicate incidents or prevent them from recurring. Strong support for SLA-based cloud services is needed, where SLAs specify ownership, mitigation and prevention of incidents.

Keywords: Cloud Computing, Incident Management

1. INTRODUCTION

In this era of Big Data[13], we need computing techniques equipped for intensive computation as well as flexibility. Cloud computing is one such model, as it is based on two main concepts: pay-as-you-go and on-demand service. In cloud computing, the term "cloud" is a metaphor for the Internet, so it essentially means Internet-based computing; formally, cloud computing is defined as distributed computing or the delivery of computing services over the Internet. These days almost all critical services are delivered with the help of the cloud, and hence it is very important to safeguard cloud services: since the Internet is an open layer that cannot be trusted, there comes the liability of protecting them from attacks and failures. The components of cloud computing are broadly seen as client systems, distributed servers and data centers, and each component needs different strategies for safeguarding it from threats and attacks.

The service models of cloud computing are Software as a Service (SaaS)[6], Platform as a Service (PaaS)[4] and Infrastructure as a Service (IaaS)[4], as shown in figure 1. In SaaS, the service provider supplies the user with the required application software along with the underlying infrastructure such as the operating system, networking and storage systems. In PaaS, the operating system, network and storage infrastructure is provided for developing and delivering web applications. In IaaS, IT infrastructure such as virtualized servers, storage and network systems is delivered to the end user with high elasticity and scalability, while the massive underlying implementation is abstracted away from the end user.

Figure 1: Service models in the cloud (SaaS, PaaS and IaaS across private, public and hybrid deployments)

Each deployment model needs its own security and incident monitoring strategy; there is no single solution that fits all deployments, as the functionality of each model differs. The deployment models of cloud computing are the private cloud, the public cloud and the hybrid cloud. A private cloud[16] can be seen as a data center delivered only via an intranet, which provides autonomy and control while still providing cloud-style benefits. A public cloud provides services that are accessed externally over the Internet while still maintaining intranet data-center benefits such as a secure and robust infrastructure. A hybrid cloud[16], as the name suggests, is a combination of internal and external clouds to get the best of both worlds. Since the end users of each type of deployment are different, different strategies are needed to safeguard and secure each: for example, a private cloud used only by employees of an organisation with intranet access does not need as stringent a security model as a public cloud accessed by anyone over the Internet.

This paper deals with the different techniques needed for each service and deployment model to make cloud computing a safe and trustworthy experience. We also need incident monitoring and reporting strategies in place for cloud deployments, as we cannot afford to make the cloud unavailable for even a very short time; this can have devastating effects on a business. The estimated cost of data loss and downtime was $1.7 trillion over the last 12 months[1]. Hence we have to take precautionary steps to safeguard cloud deployments, and the approach should be proactive rather than reactive, which is what incident monitoring and reporting strategies provide. Section 1 gives a brief introduction to the usage and security needs of cloud computing in recent years. Section 2 starts with basic security incident handling procedures, incident classification and the impact levels of incidents; it then discusses incident monitoring and the tools used today in detail. The last part of section 2 describes post-incident functionality such as analysis, eradication and preparation, along with the need for SLA management and survey data. Finally, we conclude with our opinion on current incident handling methods and future needs.

2. INCIDENT MONITORING AND REPORTING

Security incident handling in cloud environments has been on the rise in recent years. An incident handling and reporting system is critical to maintain and analyse incidents and take precautions for the future. The basic incident handling process[9] followed across many organisations is:

1. Monitoring and detection: Threat detection and monitoring are done using many live monitoring tools. Reporting tools are sometimes built into or linked with the monitoring tools.

2. Reporting parameters and format: Certain attributes must be mentioned in any incident report to understand the impact and the damage caused or about to be caused.

3. Containment: The incident has to be contained before it spreads across the network, and also between processes within a system.

4. Eradication and recovery: Necessary steps have to be taken to eradicate the incident and progress towards recovery as soon as possible without affecting the functionality of dependent systems.

5. Preparation and precautions: Proper preparation has to be performed based on lessons learnt from previous incidents, and precautions taken against the likelihood of future incidents.
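As a rough sketch, the five phases above can be modelled as an ordered workflow that cycles back into monitoring once preparation is done; all type and function names here are our own, purely for illustration:

```python
from enum import Enum


class Phase(Enum):
    """Phases of the incident handling process described above."""
    MONITORING_AND_DETECTION = 1
    REPORTING = 2
    CONTAINMENT = 3
    ERADICATION_AND_RECOVERY = 4
    PREPARATION = 5


def next_phase(current: Phase) -> Phase:
    """Advance an incident to the next handling phase; the last phase
    (preparation) feeds back into monitoring for the next incident."""
    if current is Phase.PREPARATION:
        return Phase.MONITORING_AND_DETECTION
    return Phase(current.value + 1)
```

The cyclic step mirrors the continuous-improvement character of the process: preparation informs the next round of monitoring.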

2.1 Incident classification

Incidents can be of different types, with different importance and different impact across an organisation. Categorising incidents into types helps us analyse them in later stages and mitigate them to prevent the same incident from recurring; it also helps to identify the root cause with much more clarity. A common practice of dividing incidents into common types should be followed universally, as it makes it easier to defend against such security threats in the future. A list of general incident types reported across the world by various organisations using cloud computing is collected below:

1. Outage: Incidents reported for total outage of a cloud service, which cause operational time loss, data loss and interruption in the midst of transactions, can be very critical. Sequences of intermittent outages are reported as a single incident with all the intermittent occurrences noted.

2. Vulnerability: These are often reported for site-specific vulnerabilities and online service provider websites.

3. Auto-fail: Reported when automatic updates to operating systems or applications break the core functionality of running services, or when virus-definition updates flag competing products as trojans.

4. Data-loss: Reported when data is lost during a backup process or when a recovery procedure fails.

5. Hack: These incidents mainly refer to high-profile breaches of cloud providers or online services. When a hack event breaks the integrity and confidentiality of customer data, it is classified as a high-priority Hack incident, not under the Data-loss category.

Although the listed types are very generic, each type may have many subdivisions; this list can be considered an abstract level of dividing incidents into categories. Some incident types, such as outage, vulnerability and auto-fail, are reported more often by clients to the cloud providers, whereas data-loss and hack incidents are mainly alert notifications to clients about data or identity theft, largely to help organisations report fraudulent credit card and bank transactions.
The impact level is an attribute assigned to each incident depending on the damage it does to the system. Based on the survey data provided by ENISA[7], we can divide impact levels roughly into 5 categories. The impact level of an incident generalises how widely the security incident is felt: an outage that stalls the work of a whole organisation would be stated as impact level 4, whereas a defect in minor functionality, such as an unexpected change of access control on unimportant files, may be treated as impact level 1 or 0.

• Impact 0: Something went wrong in an exercise or a test. No impact on users.
• Impact 1: Incident had impact on assets, but no direct impact on customers.
• Impact 2: Incident had impact on assets, but only minor impact on customers.
• Impact 3: Incident had impact on customers.
• Impact 4: Incident had major impact on customers.
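A toy encoding of these impact levels, with an illustrative assignment function; the numeric customer thresholds are our own invention and are not part of the ENISA scheme:

```python
from enum import IntEnum


class Impact(IntEnum):
    """ENISA-style impact levels as described above."""
    TEST = 0        # exercise or test, no impact on users
    ASSETS = 1      # assets affected, no direct customer impact
    MINOR = 2       # assets affected, minor customer impact
    CUSTOMERS = 3   # customers affected
    MAJOR = 4       # major customer impact


def impact_level(customers_affected: int, assets_affected: bool,
                 is_test: bool = False) -> Impact:
    """Toy impact assignment; the thresholds (10, 1000) are illustrative."""
    if is_test:
        return Impact.TEST
    if customers_affected == 0:
        return Impact.ASSETS if assets_affected else Impact.TEST
    if customers_affected < 10:
        return Impact.MINOR
    if customers_affected < 1000:
        return Impact.CUSTOMERS
    return Impact.MAJOR
```

In practice the mapping would be negotiated per organisation, since "minor" and "major" depend on the risk profile discussed later in the SLA survey.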

Figure 2: Histogram of incident impact levels (X axis: impact damage measured in percentage). Impact 4: 100%; Impact 3: 92%; Impact 2: 49.2%; Impact 1: 32.5%; Impact 0: 10.5%.

Table 1: Types of monitoring in each service
  IaaS: virtual machine monitoring covers the status of internal resources and of every VM; client-side monitoring covers costs per instance, consumed time and VM status.
  PaaS: virtual machine monitoring covers simultaneous connections and hosting space used; client-side monitoring covers usage of platform resources.
  SaaS: virtual machine monitoring covers resource sharing among applications and usage patterns; client-side monitoring covers service costs, resource usage, status of applications and access history.

Table 2: Security monitoring in each service (data sources monitored across IaaS, PaaS and SaaS: security monitor data, access data from the proxy/gateway, application logs, infrastructure data, change and hypervisor logs, client-side and browser-based data, local host traffic, host/endpoint activity and platform logs, error logs, network monitors and other logs).

2.2 Monitoring classification

Monitoring certain aspects and logs of the system is the most vital functionality for detecting vulnerabilities or loopholes. As cloud services are divided into IaaS, PaaS and SaaS, we divide the monitoring classification along the same lines. Each cloud service has its own aspects to monitor, as well as common aspects such as network monitoring, which all three services share. Table 1 describes the different aspects of monitoring in each service. IaaS has to maintain the status of internal resources and of every virtual machine, since internal resource damage would lead to a data-loss incident. PaaS hosts simultaneous connections whose failure would lead to outage incidents, as clients would not be able to access the platform; a usage counter and monitor at the client side should be state of the art, as failure here would lead to wrong programming decisions and wrong billing. SaaS should always track the status of resource sharing among applications, as this pattern is very crucial for the client side; its failure leads to a lower-priority outage, since the main purpose of the SaaS is not defeated.
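A minimal sketch of the kind of virtual machine status check an IaaS monitor might run; the state names and the `check_vm_pool` helper are assumptions for illustration, not any real cloud API:

```python
def check_vm_pool(statuses: dict[str, str]) -> list[str]:
    """Return outage alerts for VMs that are not running.

    `statuses` maps VM id -> reported state; the state strings used
    here ("running", "stopped", ...) are illustrative only."""
    return [f"OUTAGE: {vm} is {state}"
            for vm, state in sorted(statuses.items())
            if state != "running"]
```

A real monitor would poll these states periodically and feed any alert into the reporting pipeline of section 2.3.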

Table 2 describes the critical security monitoring across all cloud services. IaaS and PaaS critically monitor access data logs from the network gateway; any breach detected would trigger an alert for an incident in the Hack category. Application logs are used most by IaaS and PaaS, while in SaaS the client-side data is monitored most. Infrastructure data logs report the status of hardware resource usage; major changes, such as a change of hard disk format, are very important to notify. Hypervisor logs store the data logs generated by virtual machines, and any irregular change has to be analysed, since there is a chance of a data breach from one virtual machine to another through a guest operating system installed on the same hardware. Network monitoring is the most important and most critical aspect of monitoring; it has to be logged from the lowest layer of the stack possible for hack alert notifications. Especially in private cloud deployments, all layers of the stack have to be monitored, as there are no internal restrictions on access.
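As a hedged illustration of gateway access-log monitoring feeding Hack-category alerts, the sketch below counts failed accesses per source address; the log format (a list of source/success pairs) and the threshold are our own assumptions, as real gateway logs are far richer:

```python
from collections import Counter


def hack_alerts(access_log: list[tuple[str, bool]],
                threshold: int = 5) -> set[str]:
    """Flag source addresses with `threshold` or more failed accesses.

    Each log entry is (source, success_flag); the default threshold
    of 5 is illustrative only."""
    failures = Counter(src for src, ok in access_log if not ok)
    return {src for src, n in failures.items() if n >= threshold}
```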

2.3 Reporting parameters and format of incident reporting

In this section we describe the format and classification[8] of the incidents to be reported. The template below is a general template followed by many cloud providers, which most commonly requires these parameters to be filled out as incident documentation. The 13 attributes are very important and should be provided very accurately. However, in a survey[8] of data collected from Cloutage[2], many incidents have inaccurate or missing values among these parameters. Table 3 lists the percentage of records with each attribute missing in the collected incident data. We can observe a high percentage of missing data in the last two attributes, the OSVDB ID and the Data-loss ID. This tells us that a cross-reference with the online databases is not performed most of the time; there is also a chance that incidents are simply not reported to the online incident databases. These two factors account for the missing data in almost all of the incidents.

Table 3: Missing attributes
  Number affected: 96.3% missing
  Occurred date: 27.4% missing
  Incident duration: 62.9% missing
  OSVDB ID: 99.1% missing
  Dataloss ID: 99.4% missing

The format for describing an incident in general practice is:

• Incident ID: Unique identification key.
• Incident type: Falls under a particular category of incident (Outage, Vulnerability, Auto-fail, Data-loss or Hack).
• Summary: Headline description.
• Detail: Detailed description stating the cause, if identified, and the steps taken to resolve the issue.
• Number of affected users: Approximate number of service subscribers facing the impact.
• Impact level: Depends on the number of service subscribers affected; for the scale, refer to figure 2.
• Affected service(s): List of one or more services affected by the incident.
• Reported date: Date on which the incident was reported.
• Incident duration: Duration in days, hours and minutes.
• Organisation: Affected service provider.
• OSVDB ID: Unique identification key of Hack incidents cross-reported in the Open Source Vulnerability Database[12]. The goal of OSVDB is to provide accurate, detailed, current and unbiased technical information on security vulnerabilities. It aims to promote greater, more open collaboration between companies and individuals, eliminate redundant work, and reduce the expenses inherent in developing and maintaining in-house vulnerability databases.
• DataLossDB ID: Unique identification key of Data-loss and Outage incidents cross-reported in the Open Security Foundation's DataLossDB[3]. Curators and volunteers search news feeds, blogs and other websites for data breaches, to provide unbiased, high-quality data on data loss and promote research.

2.4 Classification of incident monitoring tools

As the number of cloud vendors for different types of cloud environments grows, there are many monitoring tools supporting each kind of environment. Here we list some of the most common tools based on the data collected[14]. We broadly classify incident monitoring tools into the categories below; research-oriented tools are used mainly for complex environments that serve as test beds for testing several parameters, and many of them appear under security monitoring.

Table 4: Incident monitoring tools
  Research oriented: QoS-MONaasS, LoM2HiS, CASVID
  SLA oriented: SLA@SOI, Sandpiper, CloudCompass NMS, Site27*7
  Security oriented: Trustvisor, VMwatcher, SIM, Secvisor, SecMon, RKprofiler, Revirt, PoKer, Overshadow, NICKLE, MISURE, MAVMM, Lycosid, Livewire, Lares, KVMSec, K-Trace, HyperWall, CloudWatcher, CloudSec, Aftersight
  Open source: Snorby, FBcrypt
  Commercial: CipherCloud, CloudForge, CloudPassage, MARS, SPAE, SplunkStorm, ThreatStack

More general open-source monitoring tools are available in a wide range; some of them are Collectl, CloudCmp, Ganglia, SIGAR, Nagios, Hyperic HQ, Zabbix, etc. The rest of the most common tools are listed in table 4.

2.5 Incident analysis

Figure 3: Incident response life cycle (detection, analysis, recovery, preparation and continuous improvement)

The incident response life cycle shown in figure 3 is a method of continuous improvement towards secure cloud computing. Once detections from the monitoring tools are pulled out and the data is documented in the right place, analysis can start. During the analysis, as mentioned earlier, insufficient data and poor documentation directly affect the quality of the analysis. With the analysis done, the right path for recovery is chosen. Recovery methods have to be documented as well for future reference, which is quite useful for speeding up recovery in the future; this documentation is the key to the preparation phase of the incident response life cycle. Precautionary measures have to be taken after the recovery process has finished, and the experience of historical incidents is the key to these measures. This phase is one of continuous learning and improvement.

After an incident has occurred, it is very important to analyse it, trace back to its root cause and take the necessary precautions for the future. A thorough understanding of an incident that has occurred or is about to happen is crucial in any organisation, and that understanding depends on the nature of the incident. Some incidents are easy to understand, with all their evidence traceable, for instance when a website has been defaced; others are hard, as hackers may wipe out the data footprint left behind in the system, causing a deficiency in understanding the root cause. To understand the complete root cause of any security incident, precise knowledge of the affected network, systems and applications must be reported. When dealing with incidents, clients' input of their own knowledge would contribute to a better understanding of the root cause, but clients generally lack much of the knowledge necessary to determine the scope of the incident.

Timely detection of security incidents and correct documentation lead to successful analysis; analysis depends purely on the quality of the data reported during the detection phase. Detection and documentation are two major challenges faced by today's cloud computing providers. Accurate data is very difficult to obtain, as cloud computing environments are very dynamic in nature; secondly, the client cannot be sure that the data provided for analysis is fully utilised, due to the non-transparency of hardware and software maintenance between cloud vendors and clients. Although cloud vendors face many technical challenges in analysis, these are put aside because there are no enforced legal standards in today's cloud computing technology. Until standards and enforced methods for analysis are introduced, it is difficult for incident management to keep up with incident detection and post-analysis. Cloud customers should press cloud vendors to make sure they have access to incident and analysis data; this would always be a positive step towards incident management in the cloud.
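Since analysis quality hinges on complete documentation, a minimal sketch that flags incident records with missing attributes and computes missing-rate statistics in the spirit of Table 3; the snake_case field names are our own rendering of the reporting template in section 2.3:

```python
# Our own rendering of the reporting template's attributes.
REQUIRED_FIELDS = [
    "incident_id", "incident_type", "summary", "detail",
    "number_affected", "impact_level", "affected_services",
    "reported_date", "occurred_date", "incident_duration",
    "organisation", "osvdb_id", "dataloss_id",
]


def missing_fields(record: dict) -> list[str]:
    """List the attributes that are absent or empty in a record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]


def missing_rate(records: list[dict], field: str) -> float:
    """Percentage of records missing a given attribute, as in Table 3."""
    if not records:
        return 0.0
    return 100.0 * sum(1 for r in records if not r.get(field)) / len(records)
```

Running such a check at reporting time would catch incomplete records before they reach the analysis phase.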

2.6 Resolution and Recovery from Incident

Once incident analysis has been performed and provides information about the affected services, the incident should be contained so that it does not spread to other areas; basically, we should be able to reduce the attack surface as soon as the incident occurs, so that its impact is smaller. One example of incident containment is cutting off the network connectivity of a compromised server so that the attack surface is confined to that server alone. Another example is freezing the affected virtualisation[11] image so that the attacker is unable to carry out further activities. If there is a vulnerability in the software, it has to be patched with security fixes so that it cannot be used as an attack vector. Sometimes, however, it is not possible to apply security patches immediately, since fixing the security issue takes time and the application needs to be taken offline. In such scenarios, fine-grained access control can act as a containment strategy, so that only authorised users use the resources in the cloud. A web application firewall can also be used to contain and prevent incidents, as it can be configured with intrusion detection and prevention mechanisms.

As businesses are heavily dependent on cloud computing infrastructure, a failed system leads to huge revenue loss due to the unavailability of data. Hence there is a need for business continuity solutions that take care of the resolution and recovery of cloud infrastructure without downtime or outage. Disaster recovery (DR)[17] can be termed recovery from a catastrophic failure of the physical data center. This is possible only if we maintain a backup site that is an exact replica of the primary data center. Usually this DR site is geographically separate, so that if there is an outage in the main data center due to a natural calamity or a fire, the DR site is still available to avoid downtime or a cloud outage; there are implementations where the DR site is brought up and running in under one minute. Even when some hosts and virtual machines fail due to internal issues or a network failure, so that end users experience difficulty using the cloud service, the DR site comes into use. Business continuity solutions depend on concepts such as the SQL refresh rate and the server refresh rate. The SQL refresh rate determines the database recovery strategy: how often the data is synchronised between the primary data center and the disaster recovery site. The server refresh rate determines how often system changes are allowed in the data center and propagated to the disaster recovery site, so that all application and system software stays at the same versions. A system change can be as big as a software upgrade or as small as a parameter change, and the changes need to be applied and tested in a test environment before being implemented in the data centers.
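The synchronisation between the primary data center and the DR site can be illustrated with a small recovery-point check; the RPO (recovery point objective) framing and the function names are our own, a sketch rather than any vendor's mechanism:

```python
from datetime import datetime, timedelta


def replication_lag(primary_commit: datetime,
                    dr_commit: datetime) -> timedelta:
    """How far the DR site trails the primary data center,
    measured by the timestamps of the last replicated commits."""
    return primary_commit - dr_commit


def within_rpo(primary_commit: datetime, dr_commit: datetime,
               rpo: timedelta) -> bool:
    """True if a failover now would lose no more data than the
    agreed recovery point objective (the paper's SQL refresh rate
    effectively bounds this lag)."""
    return replication_lag(primary_commit, dr_commit) <= rpo
```

A business continuity monitor would run such a check continuously and raise an alert whenever the lag exceeds the contracted objective.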

2.7 Precautionary steps to mitigate incidents

It is necessary to take precautionary measures to build a resilient cloud computing infrastructure. A well-designed cloud infrastructure provides many benefits, such as scalability and resilience, and is easy to evolve to a different deployment model; for example, a private cloud can be a stepping stone towards a hybrid or public cloud, since it lays the groundwork needed for their implementation.

Figure 4: Incident reporting flow defined by the European Commission (users, SMEs and critical infrastructure report to cloud providers and national competent authorities, which share information through EU networks such as ENISA)

A multi-tier architecture provides separation of concerns, so that virtual machines belonging to different security zones need not be hosted on the same virtual server. Internet-facing software such as a web server can sit in the web tier, usually a DMZ firewall zone[10], whereas application middleware components can sit in a different security zone such as the intranet. It is highly unlikely that everyone needs access to all resources; hence the configuration of access control is very important, so that users get only what they need and are authorised to access. Patches for security bugs should be applied, and the patch level of software should be kept current to guard against known bugs. For example, cloud computing relies on virtualisation, and there was a known hypervisor vulnerability that allowed a guest operating system to run processes on other hosts or guests. A private cloud may be seen as safer than a public cloud, but not many recognise that insider attacks are stealthier and more powerful, and can have devastating effects on the infrastructure. Encryption and network-level authentication should be in place, so that information on the network is hard to crack and virtual machines connect only to other virtual machines they are authorised to reach. The host OS should be free from malware and should be installed and configured with defence mechanisms such as anti-malware and anti-spyware software. The host OS is advised not to have direct Internet connectivity, and any software updates or patches should come from a trusted source that has the opportunity to scan all updates or patches before they are applied to the operating systems.
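The access-control configuration described above can be sketched as a simple deny-by-default role-to-resource allowlist; the roles, resource names and the `authorised` helper are illustrative assumptions, not a real policy engine:

```python
# Role -> resources that role is authorised to reach (illustrative data
# echoing the multi-tier layout: web tier, middleware, hypervisor).
ACL: dict[str, set[str]] = {
    "web_tier": {"http_frontend"},
    "app_tier": {"http_frontend", "middleware"},
    "admin": {"http_frontend", "middleware", "hypervisor"},
}


def authorised(role: str, resource: str) -> bool:
    """Deny by default: a request is allowed only if the role's ACL
    explicitly lists the resource."""
    return resource in ACL.get(role, set())
```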

2.8 Reporting flow

The European Union Agency for Network and Information Security (ENISA) has summarised a reporting model for the control of security across the European Union, described in figure 4. The reporting flow is summarised as follows:

1. The report from cloud providers to clients must be customer-specific and should include all the technical details about the incident: its duration, affected areas, remediation time, root cause and mitigation steps, along with a contact point.

2. The report from critical infrastructure to the National Competent Authority (NCA) should contain the incident reports from cloud providers plus additional information, such as the client's estimate of the areas, users and cloud services affected by the incident.

3. The report sent by a cloud provider directly to the NCA contains incident reports of high impact, that is, with a huge number of users and systems affected; the provider sends this report, including this information, to the competent authority.

4. When needed, the national authority shares important information with all countries across the EU.

2.9 Survey of Service Level Agreements

In a report published by ENISA[7], the European Union Agency for Network and Information Security presents a vast survey of cloud providers and cloud customers across the European Union, and many statistics were unearthed. Some of the most interesting facts found about the Service Level Agreements (SLAs) provided by cloud vendors were:

• 75% of the contracts define availability requirements.
• 50% of the contracts stipulate that availability be measured regularly.
• In 78% of the cases the provider is obliged to report service outages.
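A small sketch of how a customer might check measured availability against a contracted target, in the spirit of the availability clauses above; the 99.9% default target is an illustrative figure, not taken from the survey:

```python
def availability_pct(total_minutes: int, downtime_minutes: int) -> float:
    """Measured availability over a reporting period, in percent."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes


def sla_breached(total_minutes: int, downtime_minutes: int,
                 target_pct: float = 99.9) -> bool:
    """Compare measured availability against the contracted target.
    The default target of 99.9% is our own example value."""
    return availability_pct(total_minutes, downtime_minutes) < target_pct
```

For a 30-day month (43,200 minutes), 432 minutes of downtime gives 99.0% availability and would breach a 99.9% target.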

ENISA[7] has also defined some of the parameters to be regulated in any SLA between cloud vendors and cloud customers. These parameters should be taken into account:

1. Parameter definition: A definition of exactly what is being measured. For example, not just availability, but a detailed definition of what availability means in terms of basic functions and their expected operation (e.g. how long it takes to send an email).

2. Monitoring methodology: The methodology for measuring real-time security parameters should be clearly understood before the contract is established. This includes techniques for obtaining objective measurements, sub-indicators, etc. For example, for availability, a technique might be the use of active probes; a sub-indicator might be the number of customer calls about availability issues.

3. Independent testing: Wherever technically and economically feasible, independent testing of the SLA parameters should be carried out. Some monitoring may be easily and economically carried out by the customers themselves, while other monitoring can only be run system-wide by the service provider and cannot be carried out by a single customer (or is too expensive). Examples of parameters that can be tested independently include availability and business continuity.

4. Incident/alerting thresholds: The contracting parties should define the ranges of parameters that trigger ad-hoc alerts, incident response or remediation. For example, for resource provisioning, a typical trigger point would be the inability to provision extra resources of more than 10% of existing resources per day.

5. Regular reporting: Regular Service Level Reports (SLRs) and their contents should be defined. SLRs typically include, for example, incidents, event logs and change reports.

6. Risk profile considerations: Response thresholds should be determined according to the risk profile of an organisation. For example, a low-cost service for batch-processing non-personal data does not need high confidentiality requirements. SLAs, and in particular incident reporting, alerting and penalty triggers, should be adapted to an organisation's risk profile.

7. Penalties and enforcement: Depending on the setting, parameter thresholds can be linked to financial penalties, to incentivise compliance with contractual requirements or compensate for certain losses.

In addition, the survey also checked whether cloud vendors follow any particular model for incident management. As figure 5 shows, the standards[5] used by most respondents are ISO270x[5] and ITIL[15]; note that about 37% do not use any such standard. The survey also pointed out some very interesting facts about which SLAs actually contain security requirements, described in the pie chart of figure 6. The classification of incidents has to be stated very clearly in the SLA; the pie chart in figure 7 shows the data collected in the survey.

Figure 5: Governance frameworks and security standards used (number of respondents): A=ISO270x, B=COBIT, C=ITIL, D=TOGAF, E=Custom

Figure 6: Percentage of SLAs[7] found to contain security requirements: yes 65%, no 22%, don't know 13%

Figure 7: Inclusion of the classification of incidents in the SLA (yes, in brief: 32%; don't know: 13%; unspecified and others: 55%)

Figure 8: Availability reports sent to cloud customers by cloud vendors: unspecified 80%, yes 15%, no 5%

Incident reports have to be sent to cloud customers by their vendors; the pie chart in figure 8 shows that 80% of cloud customers do not get any reports, or rather that 80% of the cloud vendors did not specify.

3. CONCLUSIONS

From the data collected for this paper to analyse the current situation of incident monitoring and reporting among cloud vendors and cloud customers, we have found that one of the most important things to enforce is the set of parameters mentioned in the SLA. Many clients do not challenge cloud vendors on incident reporting, as they think the resources they use via the cloud are abstracted away from them. In the cloud computing paradigm, clients tend to think that reports on resources, incidents and the maintenance of hardware and software are unnecessary for them; they have to realise that their data is stored in the cloud and that they need to know how safe it is and how it is managed. Recent reports by ENISA in the EU have produced survey results that show negligence from clients as well as vendors. Cut-throat competition in the cloud vendor business has been forcing vendors not to share any incident threat information. More common forums have to be established to publicise incident reports, their analysis and mitigation; if all cloud vendors publicised their reports to such forums, security incidents would be reduced by a large percentage. As every aspect of IT systems moves to the cloud, strict law enforcement and SLAs should be put in place, and more transparency between clients and cloud vendors is required.

References

[1] Business loss in downtime.
[2] Cloutage: Online cloud incident report database.
[3] Open Security Foundation's DataLossDB.
[4] N. Antonopoulos and L. Gillam. Cloud Computing: Principles, Systems and Applications. Springer, 2010.
[5] M. Beims. IT-Service-Management mit ITIL: ITIL Edition 2011, ISO 20000:2011 und PRINCE2 in der Praxis. Carl Hanser Verlag GmbH Co KG, 2012.
[6] P. Buxmann, T. Hess, and S. Lehmann. Software as a service. Wirtschaftsinformatik, 50(6):500–503, 2008.
[7] M. Dekker and G. Hogben. Survey and analysis of security parameters in cloud SLAs across the European public sector. ENISA, 2011.
[8] L. Fiondella, S. S. Gokhale, and V. B. Mendiratta. Cloud incident data: An empirical analysis. In Cloud Engineering (IC2E), 2013 IEEE International Conference on, pages 241–249. IEEE, 2013.
[9] B. Grobauer and T. Schreck. Towards incident handling in the cloud: challenges and approaches. In Proceedings of the 2010 ACM Workshop on Cloud Computing Security, pages 77–86. ACM, 2010.
[10] W. Huang and J. Yang. New network security based on cloud computing. In 2010 Second International Workshop on Education Technology and Computer Science, volume 3, pages 604–609, 2010.
[11] F. Lombardi and R. Di Pietro. Secure virtualization for cloud computing. Journal of Network and Computer Applications, 34(4):1113–1122, 2011.
[12] B. Martin, C. Sullo, and J. Kouns. OSVDB: Open Source Vulnerability Database.
[13] S. Nepal and A. Bouguettaya. Big data and cloud. In Web Information Systems Engineering – WISE 2011 and 2012 Workshops, pages 237–237. Springer, 2013.
[14] D. Petcu. A taxonomy for SLA-based monitoring of cloud security. In Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual, pages 640–641, 2014.
[15] J. Repschläger, K. Erek, R. Zarnekow, et al. IT-Servicemanagement im Cloud Computing. HMD Praxis der Wirtschaftsinformatik, 49(6):6–14, 2012.
[16] S. Subashini and V. Kavitha. A survey on security issues in service delivery models of cloud computing. Journal of Network and Computer Applications, 34(1):1–11, 2011.
[17] T. Wood, E. Cecchet, K. Ramakrishnan, P. Shenoy, J. Van Der Merwe, and A. Venkataramani. Disaster recovery as a cloud service: Economic benefits & deployment challenges. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, pages 8–8. USENIX Association, 2010.