2015 International Conference on Developments of E-Systems Engineering

Detecting Intrusions in Federated Cloud Environments Using Security as a Service Áine MacDermott, Qi Shi, and Kashif Kifayat School of Computer Science, Liverpool John Moores University, Liverpool, L3 3AF, UK [email protected] and {q.shi, k.kifayat}@ljmu.ac.uk

Abstract—Securing applications and services provided in the Cloud against cyber attacks is hard to achieve, due to the complexity, heterogeneity, and dynamic nature of such systems. Federated Cloud environments and Cloud Service Providers will benefit significantly from a comprehensive intrusion detection methodology that evolves with their requirements, i.e. one that is non-intrusive and resilient. Sharing knowledge of both known and unknown threats among peers within enterprise networks, or with other providers, will contribute to better incident detection and prevention, enhancing Cloud security and enabling faster and more effective incident response. Inter-Cloud communication allows on-demand reassignment of Cloud resources, including compute, storage, and network, and the transfer of workload through the interworking of multi-national and multi-operator Cloud systems. The flow of information across these Cloud systems could be used to help trace the source of an attack, identify its location, and limit attack vectors.

Keywords—Critical Infrastructure; Cloud Computing; Cloud Federation; intrusion detection; Dempster-Shafer; collaboration.

I. INTRODUCTION

Cloud Federation is the deployment and management of multiple external and internal Cloud Computing services. It is the practice of interconnecting the Cloud Computing environments of two or more service providers for the purpose of load balancing traffic and accommodating spikes in demand. Cloud Federation models are designed for static environments where a priori agreements among parties are needed to establish federations [1]. Security vendors tend not to exchange information (e.g. malware reported by their customers) with other vendors, due to privacy issues and the need to maintain their reputation. From the customers' perspective, if diverse security vendors collaborated with each other, by providing feedback regarding the legitimacy of suspicious files, they could achieve even better accuracy of threat detection [2]. It is with this in mind that collaboration among security vendors, in our case Cloud Service Providers (CSPs), could benefit all parties involved.

Within a Cloud environment, each interface can present vulnerabilities that can be exploited by malicious entities (i.e. users, service instances, and CSPs) to perform cyber attacks. The interface between a service instance and a user can be considered a client-to-server interface, which is vulnerable to all types of attacks that are possible in common client-server architectures, including SQL injection, buffer overflow, privilege escalation, SSL certificate spoofing, phishing attacks, and flooding attacks [3]. The interface between a service instance and a CSP is vulnerable to all attacks that a service instance can run against its hosting Cloud systems, such as Distributed Denial of Service (DDoS) and Cloud malware injection. In the same way, a malicious CSP of the Cloud Federation may perform several attacks towards service instances running on it.

Cloud environments, compared to traditional service hosting infrastructure, involve more complex monitoring systems, which have to be scalable and robust. Monitoring systems and intrusion detection systems (IDSs) must be refined and adapted to the different situations arising in Cloud environments, and must cope with scalability and resilience requirements. To embrace this challenge, we propose a methodology for a robust collaborative IDS in a federated Cloud environment. Our approach offers a proactive collaborative model for intrusion detection based on the distribution of responsibilities.

The structure of this paper is as follows: Section II provides background on the research problem we have identified, namely the progression of Critical Infrastructure towards Cloud Computing, and the associated benefits and vulnerabilities of Cloud Federations. Section III reviews related work. In Section IV we outline our collaborative intrusion detection methodology for federated Cloud environments, "Security as a Service". In Section V we discuss the benefits and implications of such an approach, and we present our conclusions and future work in Section VI.

978-1-5090-1861-1/15 $31.00 © 2015 IEEE  DOI 10.1109/DeSE.2015.28

II. BACKGROUND

As more sectors utilise Cloud-based services in their computing environments, Cloud Computing is also being adopted by Critical Infrastructure. Utilising Cloud Computing within an environment that historically has had no Internet connectivity might appear trivial to some; however, research has shown that Cloud Computing will reach the Information and Communication Technology (ICT) services that operate Critical Infrastructure [4, 5, 6]. This is reflected in a white paper produced by the European Network and Information Security Agency (ENISA) in 2011 [5], which provides specific guidelines in this area, highlighting the technical, policy, and legal implications of this progression.

Cloud Computing hides resource availability issues, making this infrastructure appealing to users with varying computational requirements: from storage applications to intensive computing tasks, such as large-scale parallel simulations that require computational time on high performance computing machines and clusters. In a Cloud Computing environment, resources are shared among multiple users, and the number and nature of the workloads presented by these users can vary over time.

Operators of Critical Infrastructures, in particular the ICT that supports gas and electricity utilities and government services, are considering using the Cloud to provision their high assurance services. Many operators do not have the infrastructure to support the growing need for accurate predictive and historical simulations imposed by the adoption of renewable energy sources and the ongoing development of smart grids. Cloud Computing allows these operators to reduce or avoid over-investment in hardware resources and their associated maintenance [6]. Deploying high assurance services in the Cloud, however, increases cyber security concerns, as successful attacks could lead to outages of key services that our society depends on, and to disclosure of sensitive personal information. This exposes these infrastructures to cyber risks and results in a demand for protection against cyber attacks that exceeds that of traditional systems. Critical security issues include data integrity, user confidentiality, and trust among providers, individual users, and user groups. Additionally, availability issues and their real-world implications are the main concern for providers of Critical Infrastructure, depending upon the operations or services they are hosting [7].

A. Cloud Federations and Cyber Attacks
As Cloud Computing grows in popularity, new models are deployed to further the associated benefits via the deployment of a Cloud Federation. Cloud Federations allow users to access resources and services transparently, distributed among several independent CSPs [8]. Federated Clouds are an alternative for companies that are reluctant to move their data out of house to a service provider due to security and confidentiality concerns. By operating in geographically distributed data centres, companies can still benefit from the advantages of Cloud Computing by running smaller Clouds in house and federating them into a larger Cloud [8]. While users focus on optimising the performance of a single application or workflow, Cloud providers aim to obtain the best system throughput, resource efficiency, or lowest energy consumption. Brokering policies try to satisfy the user requirements and the Clouds' global performance at the same time [9]. Thereby, Cloud Federation introduces new avenues of research into brokering policies, such as techniques based on ensuring a required QoS level or those aiming to optimise energy efficiency [9].

DDoS attacks are a real threat in a Cloud environment due to the potential to access virtually unlimited resources. The Infrastructure as a Service (IaaS) platform can facilitate DDoS attacks, as it tries to balance the traffic load, effectively supporting the attacker. When the Cloud system observes the high workload on the flooded service, it is likely that the Cloud Federation will start providing more computational power in order to cope with it. In the same way, a malicious CSP of the Federation may perform several attacks towards service instances running on it [10]. Our previous work [4] has highlighted this possibility, conveying how this type of attack could affect interdependent services and CSPs. Additionally, there is a new variation of DDoS affecting CSPs, the Economic Denial of Sustainability (EDoS), which shares similar characteristics with DDoS but focuses more on financial implications [5]. Resource management also has a very important security function in preventing DDoS attacks. If resource management is not in place, a compromised virtual machine could allow an attacker to starve all of the other virtual machines within that Cloud of their needed resources. If the Cloud system notices the lack of availability, it could move the affected service instances to other servers of the Cloud Federation. This results in additional workload for those servers, and thus the flooding attack can propagate and spread throughout the whole Cloud Federation [9]. As more enterprise workloads have moved into Cloud environments, some traditional on-premises threats have followed them. Cyber attacks on Cloud environments have almost reached the same level as attacks on traditional ICT infrastructures, with increased adoption of Cloud-based services by Critical Infrastructure and organisations due to the intricate data and services being hosted.

B. Intrusion Detection and Intrusion Prevention Methods
Intrusion detection and intrusion prevention methods are increasingly used to monitor network or system activities for malicious actions or policy violations, and to produce reports to a management station. An Intrusion Detection System (IDS) typically performs extensive logging of data related to detected events. This data can be used to confirm the validity of alerts, investigate incidents, and correlate events between the IDS and other logging sources [11]. IDSs can be classified as host based or network based, depending on their data sources. Host based IDSs provide local intrusion detection by monitoring user behaviour over an application layer protocol, such as a client-server protocol. Network based IDSs provide global intrusion detection, monitoring the traffic flowing through the network and detecting intrusions based on the behaviour of nodes over the network. An Intrusion Prevention System (IPS) performs intrusion detection and additionally attempts to prevent detected possible incidents. An IPS is a device or software application that has all the capabilities of an IDS, but can also attempt to stop certain incidents or attacks. IPSs provide security at all system levels, from the OS kernel to network data packets.

Both IDSs and IPSs use one of two detection methods: anomaly based or signature based. Anomaly based IDSs detect abnormal patterns that deviate from what is considered to be normal behaviour [12]. Anomaly detection does not require prior knowledge of intrusions and can detect new intrusions; however, it may not be able to describe what an attack is, and may have a high false positive rate. Signature based IDSs use known patterns of unauthorised behaviour to predict and detect subsequent similar attacks, and can accurately and efficiently detect instances of known attacks, but they cannot detect zero-day attacks. Signature databases must be updated constantly, and IDSs must be able to compare and match activities against large collections of attack signatures.
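The contrast between the two detection methods can be illustrated with a minimal sketch. The event format, signature list, and thresholds below are invented for illustration and are not part of the paper's system: a signature-based check matches events against a known blacklist of patterns, while an anomaly-based check flags deviations from a learned baseline.

```python
import statistics

# Hypothetical attack-signature list (signature-based detection).
SIGNATURES = ["' OR 1=1", "../../etc/passwd", "<script>"]

def signature_match(event: str) -> bool:
    """Flag an event if it contains any known attack pattern."""
    return any(sig in event for sig in SIGNATURES)

def anomaly_score(value: float, baseline: list[float]) -> float:
    """Z-score of a new measurement against a learned baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    return abs(value - mean) / stdev

# Signature detection: accurate for known attacks, blind to zero-days.
assert signature_match("GET /?q=' OR 1=1") is True
assert signature_match("GET /index.html") is False

# Anomaly detection: catches unseen behaviour, but prone to false positives.
baseline = [100.0, 110.0, 95.0, 105.0]   # e.g. requests per minute
print(anomaly_score(500.0, baseline) > 3.0)  # a large deviation raises an alarm
```

This also makes the trade-off in the text concrete: the signature matcher says nothing about traffic it has never seen, while the anomaly scorer flags any sufficiently unusual measurement, malicious or not.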

Most IDSs/IPSs employ a centralised architecture and detect intrusions that occur in a single monitored system or network. However, centralised processors are not able to process the data collected from massive networks or distributed attacks, and more and more attacks now use a distributed architecture [13]. In centralised IDSs, the analysis of data is performed at a fixed number of locations, whereas in distributed IDSs the analysis of data is performed at a number of locations proportionate to the number of available systems in the network. Table I compares centralised and distributed IDS characteristics.

TABLE I. COMPARISON BETWEEN CENTRALISED AND DISTRIBUTED IDS CHARACTERISTICS

Run continually:
  Centralised: A relatively small number of components need to be kept running.
  Distributed: Harder, because a larger number of components need to be kept running.

Fault tolerant:
  Centralised: The state of the intrusion detection system is centrally stored, making it easier to recover after a crash.
  Distributed: The state of the intrusion detection system is distributed, making it more difficult to store in a consistent and recoverable manner.

Resist subversion:
  Centralised: A smaller number of components need to be monitored. However, these components are larger and more complex, making them more difficult to monitor.
  Distributed: A larger number of components need to be monitored. However, because of the larger number, components can cross-check each other, and the components are usually smaller and less complex.

Minimal overhead:
  Centralised: Imposes little or no overhead on the systems, except for the ones where the analysis components run, on which a large load is imposed; those hosts may need to be dedicated to the analysis task.
  Distributed: Imposes little overhead on each system because the components running on it are smaller; however, this load is imposed on most of the systems being monitored.

Configurable:
  Centralised: Easier to configure globally because of the smaller number of components, but it may be difficult to tune for the specific characteristics of the different hosts being monitored.
  Distributed: Each component may be localised to the set of hosts it monitors, and may be easier to tune to its specific tasks or characteristics.

Adaptable:
  Centralised: By having all the information in fewer locations, it is easier to detect changes in global behaviour; local behaviour is more difficult to analyse.
  Distributed: Data are distributed, which may make it more difficult to adjust to global changes in behaviour; local changes are easier to detect.

Scalable:
  Centralised: The size of the intrusion detection system is limited by its fixed number of components. As the number of monitored hosts grows, the analysis components need more computing and storage resources to keep up with the load.
  Distributed: Can scale to a larger number of hosts by adding components as needed. Scalability may be limited by the need to communicate between the components, and by the existence of central coordination components.

Graceful degradation of service:
  Centralised: If one of the analysis components stops working, most likely the whole intrusion detection system stops working; each component is a single point of failure.
  Distributed: If one analysis component stops working, part of the network may stop being monitored, but the rest of the intrusion detection system can continue working.

Dynamic reconfiguration:
  Centralised: A small number of components analyse all the data; reconfiguring them likely requires the intrusion detection system to be restarted.
  Distributed: Individual components may be reconfigured and restarted without affecting the rest of the intrusion detection system.
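The graceful degradation property compared in Table I can be illustrated with a small sketch. The component names and event format below are invented for illustration: each per-host analyser processes only its own host's events, so a failed analyser blinds the system only to that host, whereas a single central analyser would be a single point of failure.

```python
# Toy sketch of graceful degradation in a distributed IDS.
events = {
    "host-a": ["login ok", "login fail", "login fail"],
    "host-b": ["scan detected"],
    "host-c": ["login ok"],
}

def analyse(host: str, evts: list[str]) -> list[str]:
    """Hypothetical per-host analyser: flag suspicious events."""
    if host == "host-b":              # simulate one crashed component
        raise RuntimeError("analyser down")
    return [e for e in evts if "fail" in e or "scan" in e]

alerts, unmonitored = [], []
for host, evts in events.items():
    try:
        alerts += analyse(host, evts)
    except RuntimeError:
        unmonitored.append(host)       # degrade gracefully and keep going

print(alerts)        # alerts from the healthy analysers
print(unmonitored)   # only the crashed analyser's host loses coverage
```

In the centralised arrangement the equivalent failure would abort the single analysis loop entirely; here the remaining components continue producing alerts.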

Key challenges with IDSs include the reduction of false positives and avoidance of false negatives, handling encrypted traffic, algorithms for effective content matching, deep packet inspection at wire speed, performance improvement, latency reduction, behaviour-based detection, and environment awareness [14].

III. RELATED WORK

The scale of the data found in Cloud Computing environments makes the application of existing techniques unfeasible, as they are unable to carry out analysis within reasonable temporal constraints. Additionally, current IDS techniques do not take into consideration the threat exposures in the network while detecting intrusions, resulting in alerts for all types of events, many of which may not be relevant to the operating environment [13]. In the context of dynamic network environments, this approach may lead to a huge number of unnecessary alerts. Depending on the frequency of changes to the network environment, this may in turn affect the efficacy of the IDS.

Related work in the area of Cloud security approaches the problem in a multiplicity of ways. Some work applies the principles of network security to the Cloud environment without taking into consideration the sheer size and dynamic nature of that environment. Lo et al. [15] present a cooperative intrusion detection system framework for Cloud Computing networks. An IDS is deployed in each Cloud region, and the entities communicate with each other through the exchange of alerts to reduce the impact of DDoS attacks. A cooperative agent receives alerts from other IDSs, and these are analysed using a majority vote in order to determine the accuracy of the results; if an alert is deemed legitimate, a blocking rule is implemented. This is an interesting approach, but the monitoring architecture does not currently meet the needs of the scale of the Cloud environment, and a majority vote can often skew results.

Calheiros et al. [16] propose the InterCloud project. Through the use of agents called "Cloud Coordinators", they aim to tackle issues related to the performance, reliability, and scalability of elastic applications. Their architecture could be applied to the intrusion detection domain, whereby Cloud Coordinators would have to be present in each data centre that wants to interact with InterCloud parties. The Cloud Coordinator is also used by users and brokers that want to acquire resources via InterCloud and do not own resources to negotiate in the market.

Montes et al. [17] implemented GMonE, a Cloud monitoring tool able to adapt to differing resources, services, and monitoring parameters. GMonE performs the monitoring of each element by means of GMonEMon, which abstracts the type of resource, both virtual and physical. The GMonEMon service monitors the required parameters and then communicates automatically with the monitoring manager (GMonEDB) to send it the monitored data.

Chen et al. [18] aim to develop a new collaborative system that integrates Unified Threat Management (UTM) via the Collaborative Network Security Management System (CNSMS). This distributed security overlay network, coordinated with a centralised security centre, leverages a peer-to-peer communication protocol in the UTMs' collaborative module and connects them virtually to exchange network events and security rules. The CNSMS also produces a huge output from operational practice, e.g. traffic data collected by multiple sources from different vantage points, and operating reports and security events generated by the different collaborative UTMs. This vast amount of data is not easy to analyse in real time, but it is also archived for forensic analysis.

Dhage et al. [19] propose an architecture in which mini IDS instances are deployed between each user of the Cloud and the CSP. As a result, the load on each IDS instance is less than that on a single IDS, and for this reason each small IDS instance can do its work more efficiently; for example, fewer packets are dropped due to the lighter load on each instance. By proposing a model in which each IDS instance has to monitor only a single user, an effort has been made to create a coordinated design that gathers appropriate information about the user, enabling it to classify intrusions in a better way.

Current IDSs are not devised to handle dynamic network environments, as they predominantly use a predefined set of signatures and anomaly detection thresholds to detect intrusions. They can also be ignorant of changes to the operating environment that may eliminate or introduce vulnerabilities and threat exposures. Due to this weakness, they may miss critical attacks and detect intrusions that are no longer relevant after changes to the environment. A threat-awareness capability provides a key opportunity for an IDS to improve its detection rate [13]. As such, there are no unified models or unified detection and prevention approaches for detecting intrusions in the Cloud environment, nor is there a globally accepted metric or standard to evaluate against [10].

IV. SECURITY AS A SERVICE

Monitoring is a core function of any integrated network and service management platform. Cloud Computing makes monitoring a more complex support function, as it includes multiple physical and virtualised resources that span several layers of the Cloud software stack, from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and Software as a Service (SaaS). Attacks and failures are inevitable; therefore, it is important to develop approaches to understand the Cloud environment under attack and to plan for resilience. The current lack of collaboration among different components within a CSP, or among different providers, for the detection or prevention of attacks is a complex and difficult challenge to address. Our current work focuses on the deployment of such a solution for CSP collaboration: Security as a Service. The collaboration of CSPs could help trace the source of an attack, identify its location, and limit attack vectors. Cloud defence strategies need to be distributed so that they can detect and prevent attacks that originate within the Cloud itself, as well as from users in differing geographical locations via the Internet. The measurements required to obtain a comprehensive view of Cloud security lead to the generation of a vast volume of data coming from multiple distributed locations [4]. In the following subsections we outline the components of our Security as a Service solution and detail our collaborative intrusion detection methodology.

A. Components
Conceptually, we structure our Security as a Service architecture into the Cloud Broker, the Command and Control server (C2), Super Nodes, and Monitoring Nodes, each of which is discussed below [4]:

Cloud Broker: Within our infrastructure, the Cloud Broker is the Cloud security provider. The Broker possesses the knowledge base of Cloud attack signatures and the behavioural profiles used to identify threshold violations, and propagates this throughout the Cloud Federation. This information is sent to the Command and Control server (C2) in each CSP domain. The Broker invokes a global poll procedure when a decision cannot be made: it queries the C2s in adjacent domains and asks them to generate their own beliefs. They check their local grey lists to see if they have encountered the suspect actions previously; a grey list is a function mapping signatures to beliefs. Each C2 generates its own belief, and the Broker uses Dempster-Shafer theory to fuse the different beliefs into one decision. This in turn can improve resilience to attack. The predefined black lists contain attack signatures, and the Monitoring Nodes can analyse anomalous actions and threshold violations [14].

Command and Control server (C2): The C2 is effectively a domain management node. When a threat is detected in its domain, a belief is formed that an attack is underway. The C2 queries the Broker about the generated belief, to see whether it is legitimate or not. C2s possess black lists comprising attack signatures, and local grey lists provided by the Super Nodes and Monitoring Nodes, which contain ambiguous observations.

Super Node: The Super Node communicates upstream with the C2 to query any suspicious actions. The hierarchy of communication keeps network latency low, and communication occurs only when essential or when thresholds are violated. The Super Node observes the alarms generated by the monitoring nodes in its subset; these alarms are counted, and when the pre-alarm count is greater than or equal to a threshold based on the number of monitoring nodes, a belief is formed that there is an attack. The Super Node then sends this belief to the C2, which queries the Broker [4].

Monitoring Nodes: Monitoring Nodes contain a black list determined by the Broker, and a local grey list containing ambiguous observations. Monitoring Nodes trigger a pre-alarm when a predefined threshold is violated. In particular, an alarm is signalled when the accumulated volume of measurements above some traffic threshold exceeds an aggregate volume threshold. This approach is beneficial, but for the scale of the Cloud environment we have added the role of the Super Node, which queries the Broker when local thresholds are violated, based on the count of observed violations. Without this intermediate role there would be an increase in network overhead, and a basic requirement of a monitoring entity is to be non-intrusive [14].

B. Dempster-Shafer Theory of Evidence
For collaborative detection, we use the Dempster-Shafer (DS) theory of evidence. DS theory is a probabilistic approach that implements belief functions based on degrees of belief or trust; probability values are assigned to sets of possibilities rather than to single events [14]. The main advantage of this approach is that no a priori knowledge of the system is required, making it suitable for anomaly detection of previously unseen information [19]. Collaborative intrusion detection has been considered in several contributions in which data provided by heterogeneous intrusion detection monitors is fused. In the decision-making process, the uncertainty existing in the network often leads to the failure of intrusion detection or to a low detection rate. The basic concepts of Dempster-Shafer [21] include:

Frame of Discernment: the complete set describing all the sets in the hypothesis space, generally denoted θ, which is similar to a state space in probability. The elements of the frame must be mutually exclusive. If the number of elements is n, the hypothesis space contains 2^n subsets.

Basic Probability Assignment (BPA): a positive number between 0 and 1, in the form of a probability. The value of the BPA denotes the degree of support for (or refutation of) a hypothesis, and is denoted m(A).

Plausibility Function: for A ∈ 2^θ, Pl(A) = 1 − Bel(Ā) = Σ_{B ∩ A ≠ ∅} m(B) describes the belief not refuting the hypothesis. From these definitions, the belief and plausibility functions satisfy Bel(A) ≤ Pl(A), and [Bel(A), Pl(A)] is called the belief range.

Combination Rule: DS uses the orthogonal sum to combine evidence. Given belief functions Bel₁(A) and Bel₂(A) describing the belief in a hypothesis A, the combined belief function is defined as:

Bel(A) = Bel₁(A) ⊕ Bel₂(A)

The mass function after the combination can be described as:

m(A) = K · Σ_{Aᵢ ∩ Bⱼ = A} m₁(Aᵢ) m₂(Bⱼ)

where K is called the orthogonal coefficient, and is defined as:

K = [1 − Σ_{Aᵢ ∩ Bⱼ = ∅} m₁(Aᵢ) m₂(Bⱼ)]⁻¹

DS combines the beliefs expressed by the monitors, producing a single combined belief that is finally compared with the accumulated sum of the beliefs q. Within our methodology, the monitors produce a single belief for each focal element, as below:

b(A): the belief that there is an attack;
b(¬A): the belief that there is not an attack (normal);
b(A, ¬A): the belief expressing an ambiguity: attack or no attack.

Evidence is fused to determine the current state of the network. If the combined belief is greater than q, an alarm is raised [22].

C. Dempster-Shafer Weighted Beliefs
According to Chen and Aickelin [22], DS has two major problems when applied to detecting intrusions: the associated computational complexity, and the management of conflicting beliefs. The computational complexity of DS increases exponentially with the number of elements in the frame of discernment θ: if there are n elements in θ, there will be up to 2^n − 1 focal elements for the mass function, and the combination of two mass functions requires the computation of up to 2^n intersections. The authors note that treating pieces of evidence equally, without differentiating them and considering priorities, can be problematic: using a non-weighted combination rule implies that we trust all evidence equally. For the purposes of our work, we believe associative rule combinations should suffice, as this procedure does not impact the result.

V. DISCUSSION

The adoption of Cloud technologies allows Critical Infrastructure to benefit from dynamic resource allocation for managing unpredictable load peaks. Given the importance of Critical Infrastructure, the consensus is that these systems are to be built to function in a secure manner, and if embracing the Cloud paradigm, should do so in a security-oriented manner. Appropriate security measures have to be selected when developing such systems, and documented accordingly. Most existing technologies and methodologies for developing secure applications only explore security requirements in either Critical Infrastructure or Cloud Computing; individual methodologies, techniques, or standards may support only a subset of specific Critical Infrastructure requirements. Requirement-based security can be quite different for these applications and for common ICT Cloud applications, but both need to be considered in combination for the given context [23]. The links between Critical Infrastructure services or Cloud Federations can be either dependent or interdependent. Dependency is a unidirectional relationship, whereas interdependency is a bidirectional relationship in which the capabilities of one infrastructure influence the state of another [24]. In the case of Cloud Federations, it could be argued that both of these relationships are evident, but for the purpose of our work we focus on bidirectional interdependency. Ensuring

each CSP is robust and resilient to malicious attacks or unplanned faults can keep the effects of cascading failure in an interdependent network to a minimum. Individual security requirements are often considered in depth for traditional applications, but there is often an oversight in how security requirements are affected when software systems are composed. How the interruption of, or a fault in, one system affects another is often not planned for, and the associated risk is not investigated. Risk models and methodologies are increasingly being developed to improve the planning and policies needed for interconnected Critical Infrastructure, but they neglect the interconnections to outside networks and, increasingly, how services belonging to a Cloud Federation could impact them. Cloud monitoring systems should be aware of logical and physical groups of resources, and should organise monitored resources according to certain criteria so as to separate and localise the monitoring functions.

VI. CONCLUSION

The increasing migration of Critical Infrastructure services and sensitive data to the IT paradigm raises great concern for the protection of these services and information. The more interconnected and available they become, the more threats they are exposed to. Traditional intrusion detection mechanisms are not sufficient for protecting infrastructure services in the Cloud environment, as many solutions lack the economy of scale and are incapable of processing data of such volume. The distributed and open structure of Cloud Computing and its services is an attractive target for potential cyber attacks by intruders. We have identified a consensus in existing work on the need for improved approaches for protecting infrastructure services in the Cloud environment, and additionally in a federated Cloud environment. This is a challenging and complex problem, as the confidentiality, integrity, and availability of data can have high socio-economic implications, with more and more infrastructure services migrating to this environment. Collaborative security between CSPs in a Cloud Federation can offer holistic security to those participating in the scheme. Information sharing in this approach is automated, which we conceive to be an important aspect of our approach.

REFERENCES

[1] A. Celesti, F. Tusa, M. Villari, and A. Puliafito, "How to Enhance Cloud Architectures to Enable Cross-Federation," in 2010 IEEE 3rd International Conference on Cloud Computing, 2010, pp. 337–345.
[2] C. J. Fung, D. Y. Lam, and R. Boutaba, "A Decision Making Model for Collaborative Malware Detection Networks," 2013.
[3] N. Gruschka and M. Jensen, "Attack Surfaces: A Taxonomy for Attacks on Cloud Services," in 2010 IEEE 3rd International Conference on Cloud Computing, 2010, pp. 276–279.
[4] Á. MacDermott, Q. Shi, M. Merabti, and K. Kifayat, "Security as a Service for a Cloud Federation," in The 15th Post Graduate Symposium on the Convergence of Telecommunications, Networking and Broadcasting (PGNet 2014), 2014, pp. 77–82.
[5] ENISA, "Cloud Computing: Benefits, risks and recommendations for information security," Heraklion, Greece.
[6] M. T. Khorshed, A. B. M. S. Ali, and S. A. Wasimi, "A survey on gaps, threat remediation challenges and some thoughts for proactive attack detection in cloud computing," Futur. Gener. Comput. Syst., vol. 28, no. 6, pp. 833–851, Jun. 2012.
[7] K. Hwang, S. Kulkareni, and Y. Hu, "Cloud Security with Virtualized Defense and Reputation-Based Trust Management," in 2009 Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing, 2009, pp. 717–722.
[8] M. Rak, M. Ficco, J. Luna, H. Ghani, N. Suri, S. Panica, and D. Petcu, "Security Issues in Cloud Federations," in Achieving Federated and Self-Manageable Cloud Infrastructures: Theory and Practice, 2012, pp. 176–194.
[9] D. Villegas, N. Bobroff, I. Rodero, J. Delgado, Y. Liu, A. Devarakonda, L. Fong, S. Masoud Sadjadi, and M. Parashar, "Cloud federation in a layered service model," J. Comput. Syst. Sci., vol. 78, no. 5, pp. 1330–1344, Sep. 2012.
[10] Á. MacDermott, Q. Shi, and K. Kifayat, "Collaborative Intrusion Detection in Federated Cloud Environments," J. Comput. Sci. Appl. Big Data Anal. Intell. Syst., 2015.
[11] F. Sabahi and A. Movaghar, "Intrusion Detection: A Survey," in 2008 Third International Conference on Systems and Networks Communications, 2008, pp. 23–26.
[12] A. Patel, Q. Qassim, and C. Wills, "A survey of intrusion detection and prevention systems," Inf. Manag. Comput. Secur., vol. 18, no. 4, pp. 277–290, 2010.
[13] F. Sabahi and A. Movaghar, "Intrusion Detection: A Survey," in 2008 Third International Conference on Systems and Networks Communications, 2008, pp. 23–26.
[14] S. Neelakantan and S. Rao, "A Threat-Aware Hybrid Intrusion-Detection Architecture for Dynamic Network Environments," CSI J. Comput., vol. 1, no. 3, 2012.
[15] C.-C. Lo, C.-C. Huang, and J. Ku, "A Cooperative Intrusion Detection System Framework for Cloud Computing Networks," in 2010 39th International Conference on Parallel Processing Workshops, 2010, pp. 280–284.
[16] R. N. Calheiros, A. N. Toosi, C. Vecchiola, and R. Buyya, "A coordinator for scaling elastic applications across multiple clouds," Futur. Gener. Comput. Syst., vol. 28, no. 8, pp. 1350–1362, 2012.
[17] J. Montes, A. Sánchez, B. Memishi, M. S. Pérez, and G. Antoniu, "GMonE: A complete approach to cloud monitoring," Futur. Gener. Comput. Syst., vol. 29, no. 8, pp. 2026–2040, 2013.
[18] Z. Chen, F. Han, J. Cao, X. Jiang, and S. Chen, "Cloud computing-based forensic analysis for collaborative network security management system," Tsinghua Sci. Technol., vol. 18, no. 1, pp. 40–50, 2013.
[19] S. N. Dhage and B. B. Meshram, "Intrusion detection system in cloud computing environment," Int. J. Cloud Comput., vol. 1, no. 2/3, p. 261, 2012.
[20] Á. MacDermott, Q. Shi, and K. Kifayat, "Collaborative intrusion detection in a federated cloud environment using the Dempster-Shafer theory of evidence," in 14th European Conference on Cyber Warfare and Security (ECCWS), 2015.
[21] W. H. Jianhua Li and Q. Gao, "Intrusion Detection Engine Based on Dempster-Shafer's Theory of Evidence," in 2006 International Conference on Communications, Circuits and Systems Proceedings, 2006, vol. 2, pp. 1627–1631.
[22] Q. Chen and U. Aickelin, "Anomaly Detection Using the Dempster-Shafer Method," in DMIN, 2006, pp. 232–240.
[23] S. Paudel and M. Tauber, "Security Standards Taxonomy for Cloud Applications in Critical Infrastructure IT," in 8th International Conference for Internet Technology and Secured Transactions (ICITST), 2013, pp. 645–646.
[24] W. Hurst and Á. MacDermott, "Evaluating the Effects of Cascading Failures in a Network of Critical Infrastructures," Inderscience Int. J. Syst. Syst. Eng., vol. 6, no. 3, pp. 1–12, 2015.
