Efficient and secure software-defined mobile cloud computing infrastructure

Lo'ai Tawalbeh*
Computer Engineering Department, Umm-AlQura University, Makka, Saudi Arabia
and
Faculty of Computer and Information Technology, Jordan University of Science and Technology, Irbid, Jordan
Email: [email protected]
Email: [email protected]
*Corresponding author

Yousef Haddad and Omar Khamis
Princess Sumaya University for Technology, Amman, Jordan
Email: [email protected]
Email: [email protected]

Elhadj Benkhelifa
Cloud Computing and Application Research Lab, Staffordshire University, Beaconside, Stafford ST18 0AD, UK
Email: [email protected]

Yaser Jararweh
Faculty of Computer and Information Technology, Jordan University of Science and Technology, Irbid, Jordan
Email: [email protected]

Fahd AlDosari
Computer Engineering Department, Umm-AlQura University, Makka, Saudi Arabia
Email: [email protected]

Abstract: This paper proposes an efficient software-based data possession framework for mobile cloud computing. The proposed design combines the characteristics of two frameworks: the first is a provable data possession design built for resource-constrained mobile devices that takes advantage of trusted computing technology, and the second is a lightweight, resilient storage outsourcing design for mobile cloud computing systems. The proposed solution exploits the strengths of each framework in order to gain better performance while maintaining adequate and comparable security. The proposed framework is a software-defined system, encompassing software-defined storage (SDStore) and software-defined security (SDSec). The evaluation and comparison results show that the proposed system has better flexibility and efficiency than each of the related frameworks individually.

Keywords: mobile cloud computing; MCC; software-defined systems; SDSs; software-defined infrastructure; trusted computing; cloud management; security; performance.

Reference to this paper should be made as follows: Tawalbeh, L., Haddad, Y., Khamis, O., Benkhelifa, E., Jararweh, Y. and AlDosari, F. (2016) 'Efficient and secure software-defined mobile cloud computing infrastructure', Int. J. High Performance Computing and Networking, Vol. 9, No. 4, pp.328–341.
Copyright © 2016 Inderscience Enterprises Ltd.
Biographical notes: Lo'ai Tawalbeh received his MSc and PhD degrees in Computer Engineering from Oregon State University, USA in 2002 and 2004, respectively. He is a Tenured Associate Professor at the Computer Engineering Department at Jordan University of Science and Technology (JUST), Jordan, and the Director of the Cryptographic Hardware and Information Security (CHiS) Lab at JUST. He is now a Visiting Professor at Umm AlQura University, Makkah, KSA. From 2005 until 2012, he worked at several universities, including the New York Institute of Technology (NYIT) and DePaul University. He is a co-founding chair of many IEEE workshops/conferences on cloud security and mobile cloud computing.

Yousef Haddad is a Computer Engineer and Network and System Administrator. He holds a Master's degree in Information Systems Security and Digital Criminology from Princess Sumaya University for Technology (PSUT) in Amman, Jordan (2013). He received his Bachelor's in Computer Engineering from Jordan University of Science and Technology (JUST), Irbid, Jordan in 2009. He has co-authored conference and journal papers in his research areas, which include cloud computing, data security, and cryptographic functions.

Omar Khamis has over 20 years of experience in the area of computer science. He worked as an Assistant Professor at the King Hussein Faculty of Computing Sciences at Princess Sumaya University for Technology, Amman, Jordan. He has taught many courses on different computer science and engineering topics and has successfully supervised many Master's students' theses. He has many publications in international refereed conferences and indexed journals.

Elhadj Benkhelifa is an Associate Professor (Reader) at Staffordshire University, UK. He is the Faculty Director of the Mobile Fusion Applied Research Centre (45 PhD students and 15+ staff). During his academic career, he has built a rich portfolio of successful national and international collaborations. Over the past three years, he has secured external funding in excess of $1.5 million USD. He is the Founding Head of the Cloud Computing and Applications Research Group, leading a team of ten PhD students. He is a co-founding chair of several conferences/workshops: IEEE CCSNA, IEEE BDSNA, IEEE SNAMS, IEEE SDS and IEEE IOTSMS.

Yaser Jararweh received his PhD in Computer Engineering from the University of Arizona in 2010. He is currently an Assistant Professor of Computer Science at Jordan University of Science and Technology, Jordan. He has co-authored about 70 technical papers in established journals and conferences in fields related to cloud computing, HPC, SDN and big data. He was one of the TPC co-chairs of the IEEE Globecom 2013 International Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA). He was the general co-chair of the IEEE International Workshop on Software Defined Systems SDS-2014 and SDS-2015. He also chairs many IEEE events such as ICICS, SNAMS, BDSN and IoTSMS.

Fahd AlDosari received his PhD degree in 2010 from Bradford University, Bradford, UK. He is currently the Dean of the College of Computer and Information Systems, Umm Al-Qura University, Makkah, Saudi Arabia. He has many publications in the areas of cloud computing, quality of service routing, simulation and modelling, and ad hoc networking. He is also a founder of expert houses in computing and higher education at Umm AlQura University. He is an IEEE member.
This paper is a revised and expanded version of a paper entitled ‘Efficient software-based mobile cloud computing framework’ presented at the 2015 IEEE International Conference on Cloud Engineering (IC2E), Arizona, USA, 9–12 March 2015.
1 Introduction
It could be argued that the emergence of software-defined systems (SDSs) was an inevitable result of the paradigm shift from traditional computing models to utility-based cloud computing. Cloud computing providers typically rely on virtualisation (an abstraction of computing resources through software) to manage the underlying hardware in their data centres effectively and efficiently. Virtualisation provides the ability to logically divide physical resources, which allows secure, efficient multi-tenancy on single machines. It also enables virtualised resources to be aggregated across multiple hosts, providing redundancy through resource migration and elasticity through rapid resource provisioning and cloning. The ability to abstract large amounts of computing resources enabled environments that were highly dynamic and could respond rapidly to change. With this came complex virtual network paths of resources for individual workloads, and therefore the need to automate resource management so that resources can be provisioned precisely for individual applications. Software-defined networking (SDN) was the first software-defined resource (using this terminology in this context), allowing the management and control planes to be separated from the routing hardware and operated remotely by software, providing an increased ability to control,
provision and optimise networking according to the changing requirements of an individual workload (Breiter et al., 2014). Software-defined storage (SDStore) has followed suit by providing a similar level of control over virtualised storage resources, allowing storage to be managed dynamically according to application policies (Darabseh et al., 2015), whilst software-defined compute (SDC) is the final element of the trio, allowing computational resources to be managed. The combination of software-defined resources may be grouped as software-defined environments (SDEs) or the software-defined cloud (SDCloud) (Jararweh et al., 2015). SDCloud appears to be the future of cloud computing infrastructures.

Mobile devices are becoming an essential part of human communities in daily life. New technologies, applications, and uses for these smart devices appear every few months, which has made them very attractive to a wide range of users (Ling, 2004). Mobile cloud computing (MCC) (Dinh et al., 2011) can be defined as a service that allows mobile users to offload their intensive computational jobs and storage operations to cloud resources (Khan et al., 2013). Such a service can augment the performance and abilities of resource-constrained mobile devices. MCC is a promising technology and can offer many services, such as infrastructure as a service (IaaS) (Nathani et al., 2012), data storage as a service (DaaS), software as a service (SaaS) (Rindos et al., 2014), and platform as a service (PaaS) (Mell and Grance, 2011). On the other hand, MCC has many pitfalls, some related to communication, including bandwidth, availability, and heterogeneity, and others related to offloading, security, integrity, authenticity, and data access (Tawalbeh et al., 2015b; Cicotti et al., 2015; Cuomo et al., 2015).

This paper proposes a framework for a more efficient and secure MCC infrastructure. The proposed framework is an SDS, encompassing SDStore and software-defined security (SDSec) (Al-Ayyoub et al., 2015). Two existing frameworks are first analysed and then combined to propose a more resilient and secure software-defined MCC infrastructure. The first, proposed in Yang et al. (2011), is a framework for provable data possession (PDP) for resource-constrained mobile devices in cloud computing. The second, proposed in Ren et al. (2011), is a framework for lightweight and compromise resilient storage outsourcing with distributed secure accessibility in MCC. The proposed framework integrates the advantages and abilities of both frameworks to overcome the performance and security pitfalls present in each of them. The study presented herein investigates how the PDP scheme, integrated with the lightweight and compromise resilient schemes based on XOR operations, can improve the system's performance, design, and security. The paper also discusses the differences between the resulting combined scheme and the original PDP scheme, providing a detailed analysis of their limitations. Other light calculations are also explored instead of heavy-load encryption.
The rest of this paper is organised as follows. Section 2 provides a background review on SDStore and SDSec and presents the two above-mentioned frameworks, on which the proposed system was developed. Section 3 describes the proposed framework, in which the proposed model takes the coding-based scheme (CoS) and the sharing-based scheme (ShS) and combines them with the PDP model, so that encryption is no longer used and other light operations are instead employed to achieve confidentiality, integrity, and authenticity; for each framework, the three phases (setup phase, integrity verification phase, and file retrieval phase) are discussed in detail. Section 4 provides an evaluation of the proposed model, focusing mainly on performance and security attributes. The performance evaluations (Section 4.1) cover both the coding-based PDP and sharing-based PDP frameworks (Sections 4.1.1 and 4.1.2, respectively). The security evaluation of the proposed frameworks (Section 4.2) covers a number of aspects, including privacy, integrity, non-repudiation and accountability, availability, confidentiality, and authenticity. The paper finishes with conclusions in Section 5.
2 Background and related work
This paper proposes a more efficient solution for a resilient and secure software-defined MCC infrastructure. The software-defined aspect of the proposed solution encompasses SDStore and SDSec; therefore, it is worthwhile providing a brief background on both. It is also important to review the main areas of related work on which the proposed solution was developed. These include PDP for resource-constrained mobile devices in cloud computing (Yang et al., 2011; Fortiş et al., 2015; Cuomo et al., 2015) and lightweight and compromise resilient storage outsourcing with distributed secure accessibility in MCC (Ren et al., 2011; Tawalbeh et al., 2015a).
2.1 Software-defined storage

SDStore is emerging as one of the most important subsystems in SDSys. It takes responsibility for managing huge amounts of data in storage systems by isolating the data control layer from the data storage layer. The control layer refers to the software component that manages and controls the storage resources, whereas the data layer refers to the underlying infrastructure of the storage assets. This isolation is meant to reduce management complexity in the new system architecture design. Moreover, it reduces the cost of the infrastructure by creating a single central control unit that manages the different elements of the system regardless of their vendors, rather than installing control software on each element. Due to the novelty of the area, little work within the field may be found in the academic literature, whilst industry appears to have a greater grasp of the subject. IBM Storwize, EMC ViPR, Atlantis USX, IOFlow, Maxta, HITACHI, Acore,
CloudBytes and IBM SmartCloud are the most cited examples of SDStore solutions and supporting architectures (Breiter et al., 2014; Jararweh et al., 2015; Ouyang et al., 2014; Seshadri et al., 2014; Thereska et al., 2013; Zhou et al., 2014; Crump, 2013; EMC, 2015; Atlantis USX, http://www.atlantiscomputing.com/products/atlantis-usx). Traditional cloud environments do offer a degree of storage resource abstraction through virtualisation; however, in order to have a more powerful environment in which these resources can be autonomously managed, SDStore solutions are needed. These solutions enable dynamic policies for storage requirements to be easily enforced, mitigating the complexity administrators face in managing multiple resource paths through multiple layers. SDStore systems have some characteristics that distinguish them from other systems, and specifically from traditional storage systems, as illustrated in Figure 1 and explained below (a small illustrative sketch follows this list):

a Scale-out architecture: Resources in an SDStore system can be inserted and removed dynamically to increase or decrease the capacity of the SDStore in a scale up/down fashion.

b Commodity hardware: SDStore uses available resources to build the infrastructure, whether storage hardware or network communication resources. If any change occurs, new hardware can be added easily without affecting performance.

c Resource pooling: This feature allows the allocation or de-allocation of resources dynamically and on demand, all centrally controlled.

d Abstraction: In contrast to traditional storage, SDStore gathers all the underlying hardware under a single centralised unit, so that if any event occurs in any part of the system, it can be recognised and easily handled.

e Automation: The operations on an SDStore system are all performed automatically, both to respond to users' requirements and for routine monitoring and updates.

f Programmability: A number of APIs are available to provide visible control over the resources, which allows high levels of programmability to accommodate any changes.

g Policy driven: The SDStore control is separated into two layers:
  • a user layer, where availability, reliability, and latency requirements are specified
  • a control layer, which maintains a high QoS level, including the handling of recovery from failures and resource migration (Fortiş et al., 2015).
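To make the separation between the policy-driven user layer and the control layer concrete, the following Python sketch shows a hypothetical SDStore controller that matches a user-specified policy (replica count, latency target) against a pool of commodity nodes. The class, attribute, and function names are illustrative assumptions, not the API of any SDStore product named above.

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    # User-layer intent: what the application needs, not where it lives.
    min_replicas: int       # availability / reliability requirement
    max_latency_ms: float   # latency target

@dataclass
class StorageNode:
    # Control-layer view of pooled commodity hardware.
    name: str
    latency_ms: float
    free_gb: int

class SDStoreController:
    """Toy control layer: picks pooled nodes that satisfy a policy."""

    def __init__(self, pool):
        self.pool = list(pool)

    def provision(self, volume_gb, policy):
        candidates = [n for n in self.pool
                      if n.latency_ms <= policy.max_latency_ms
                      and n.free_gb >= volume_gb]
        if len(candidates) < policy.min_replicas:
            raise RuntimeError("policy cannot be satisfied by the current pool")
        chosen = sorted(candidates, key=lambda n: n.latency_ms)[:policy.min_replicas]
        for node in chosen:
            node.free_gb -= volume_gb   # allocation tracked centrally, per vendor-agnostic pooling
        return [n.name for n in chosen]

# Example: a 20 GB volume, 2 replicas, under a 5 ms latency target.
pool = [StorageNode("ssd-1", 2.0, 500), StorageNode("ssd-2", 4.0, 200),
        StorageNode("hdd-1", 9.0, 2000)]
controller = SDStoreController(pool)
print(controller.provision(20, StoragePolicy(min_replicas=2, max_latency_ms=5.0)))
```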
2.2 Software-defined security

Traditional security mechanisms are considered unsuitable to deal with virtualised environments. The design of traditional security devices is unable to protect the
components of virtualised environments due to their dependency on physical network devices. The changes brought by virtualisation, ranging from new virtual network topologies and threats related to hypervisors to the elimination of roles in virtualised management, demonstrate the need for virtualised security solutions. Such virtualisation reduces the complexity and cost of security operations. It also facilitates the deployment of security policies that are superior to classical ones by making them more accurate, seamless, and context-aware. In addition, by virtualising security, almost all security activities can be automated. To realise this concept, the Cloud Security Alliance (CSA) launched the software-defined perimeter (SDP) project as a new security architecture intended to keep systems secure against network attacks (Yang et al., 2011). SDP was designed to complement SDN in order to reduce attacks on network applications by disconnecting them until the users and devices are authenticated. This approach was later coined SDSec.

Figure 1 A design comparison between the traditional storage model and the SDStorage model (see online version for colours)

Figure 2 The main components of SDSecurity and its integration to protect the network (see online version for colours)
Since then, a number of other initiatives have been reported to implement different SDSec solutions, including Catbird (Catbird Networks Inc., 2014), vShield (Walker, 2013; VMware Inc., 2010), and OneControl (VMware Inc., 2013). The latest release of Catbird, formerly called vSecurity, is the first solution to integrate multiple firewalls and hypervisors. Catbird supports the Microsoft and VMware hypervisors as well as the VMware vCloud networking and security firewall applications and the Cisco Virtual Security Gateway (VSG). vShield, part of the vCloud suite, addresses the bottlenecks of the physical security approach and provides a single comprehensive virtual security framework. OneControl was introduced by NetCitadel to eliminate the need for manual reconfiguration and response actions when an event or any change occurs in the network. vArmour was developed to fully exploit the benefits of virtualised environments (NetCitadel Inc., 2012); it addresses the scalability, flexibility, and cost bottlenecks that face traditional security techniques in virtualised environments. Below is a summary of some features and attributes that distinguish the SDSec approach from traditional security approaches (Catbird Networks Inc., 2014; Ca, 2013).

a Abstraction: SDSec abstracts the security policies from the hardware layer and runs them in an independent software layer. This way, the deployment of a new policy control is not affected by the location of assets or VMs, which simplifies the network.

b Automation: In SDSec, the creation of a new VM or device in the system and placing it in a specific trust zone is done automatically. The process of detecting any violation or vulnerability is also automatic, with appropriate alerts triggered in response when an event occurs.

c Elasticity: Being entirely software-based and unrestricted by hardware, SDSec is easy to scale up or down and to adapt to new changes.

d Concurrency control: In SDSec systems, a number of security controls can work concurrently. The abstraction feature of SDSec gives it the ability to apply different security controls regardless of the underlying language of the asset appliances.

e Portability: The hardware-independence of SDSec facilitates deployment even if devices move from one location to another.
2.3 PDP of resource-constrained mobile devices in cloud computing

Cloud computing introduced a cloud storage infrastructure that solves the storage problem at a convenient cost. It also provides security countermeasures and storage verification methods. PDP schemes are used for these purposes: they comprise storage and computing resources used to publicly verify the integrity of data in the cloud and to update data dynamically. However, in the case of MCC, the limited resources of mobile devices make storage and verification operations more difficult. Using trusted computing technology, Yang et al. (2011) proposed a model (depicted in Figure 3) that utilises PDP operations effectively by incorporating a trusted third-party agent (TPA) to calculate and process most of the operations instead of the mobile device. Bilinear signatures and a Merkle hash tree (MHT) are used to process signatures and provide verification and data updating. End-users are responsible for generating some keys and random numbers using a trusted platform module (TPM) chip. The TPM is a specification and standard for a secure crypto-processor that stores, maintains, and generates keys and other sensitive data.

The structure of this resource-constrained public PDP framework contains three main entities:

1 The mobile end-user (client), which needs to store its data in the cloud and to ensure that its data is safe and secure. This client has a TPM chip that is responsible for generating random keys and numbers and protecting them.

2 The TPA, which is responsible for the intensive computations (encryption, authentication, and verification). The TPA works as a service agent between the mobile access point and the gateway of the IP network. The TPA has a limited storage ability to save data about current sessions and clients.

3 The cloud service provider (CSP), which is responsible for managing the cloud environment and providing clients with storage and proofs of data possession when queried through the internet. The CSP in this framework is assumed untrustworthy, so data could be under threat if not encrypted.

Figure 3 PDP MCC framework architecture (see online version for colours)
Source: Yang et al. (2011)
The framework assumes all communication channels to be secure, authenticated, and reliable. The design objectives of this framework relate to four main concepts, explained briefly below:

1 Public PDP enables any verifier to check data integrity, so that this process is not restricted to mobile end clients.

2 Trusted computing technology aims to establish a trusted, authenticated channel between the client and the TPA and to use it to exchange the relevant data, so that the TPA can perform the intensive computation operations in place of the mobile device.

3 Stateless verification ensures that proofs of storage are computed using random data generated by the verifier, not static data maintained by some entity.

4 Support for the mobile device environment is achieved by making the user's mobile device responsible for generating and computing only a small amount of work (e.g., generating keys) using the TPM chip, and allowing the TPA to perform most of the workload due to the resource limitations of mobile devices.
2.4 Lightweight and compromise resilient storage outsourcing with distributed secure accessibility in MCC

Ren et al. (2011) proposed a framework for lightweight storage outsourcing. The framework consists of two entities: the mobile device and the CSP. The framework treats the cloud storage servers as totally distrusted nodes and so aims to preserve the integrity and confidentiality of the stored data. The framework also treats mobile devices as distrusted nodes with respect to storage, because mobile devices are exposed to loss, which means that storage information and credentials can also be lost; the framework is therefore compromise resilient in this case. The device is considered trusted with respect to computation, on the basis that the execution environment can be secured using anti-malware and other tools to counter malicious attempts and activities (Khan et al., 2013). The framework provides an accessibility scheme for the data owner and any other parties the owner wants to share data with. The sharing of storage credentials is semi-automated, since the owner needs to provide the other parties with the credentials via e-mail or other methods, which is not considered an automated scheme. The framework consists of three schemes:

1 Encryption-based scheme (EnS): uses encryption algorithms to provide confidentiality for the data stored on a single cloud storage server.

2 Coding-based scheme (CoS): uses secrecy codes (SC) and linear coding for each block (share) of the data to provide confidentiality on multiple storage servers with less computational overhead than encryption algorithms, since matrix multiplication is used instead of encryption functions.

3 Sharing-based scheme (ShS): produces the least computational overhead by using exclusive-or (XOR) operations to provide confidentiality for data stored on multiple distrusted cloud storage servers.

Integrity and authenticity verification is provided using message authentication code (MAC) functions in all three schemes. The communication channel between the device and the cloud servers is assumed secure. All computations and processes are done on the mobile device.
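As a minimal sketch of the MAC-based integrity and authenticity check shared by the three schemes, the snippet below uses HMAC-SHA256 as one concrete MAC instantiation; the original framework only requires a MAC function and does not fix this choice, and the key material shown is purely illustrative.

```python
import hmac
import hashlib

def tag_block(key: bytes, block: bytes) -> bytes:
    """Compute a MAC tag to be stored alongside an outsourced block."""
    return hmac.new(key, block, hashlib.sha256).digest()

def verify_block(key: bytes, block: bytes, tag: bytes) -> bool:
    """Recompute the MAC on retrieval and compare in constant time."""
    return hmac.compare_digest(tag_block(key, block), tag)

key = b"shared-storage-credential"      # illustrative key, held by owner and authorised readers
block = b"outsourced data block"
tag = tag_block(key, block)
assert verify_block(key, block, tag)                    # unmodified block verifies
assert not verify_block(key, block + b"x", tag)         # any tampering is detected
```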
3 The proposed model
The proposed model takes the CoS and the ShS and combines them with the PDP model, so encryption is no longer used; instead, other light operations are employed to achieve confidentiality, integrity, and authenticity. The proposed model consists of two parts: first, a coding-based PDP framework that combines the CoS with PDP; second, a sharing-based PDP framework that combines the ShS with PDP. Note that the EnS is not used because PDP already relies on encryption functions to encrypt data at the TTP and send it to the cloud. Let e: G1 × G1 → G2 be a bilinear map with a large prime order p, let g be the generator of G1, and let H: {0, 1}* → G1 be a hash function.
3.1 Coding-based PDP framework

This framework is mainly based on the PDP framework combined with the CoS, which replaces the encryption operations and related aspects. The framework consists of three phases (the same as PDP): setup phase, integrity verification phase, and file retrieval phase.
3.1.1 Setup phase

1 The mobile device and the third party use the Diffie-Hellman protocol to share a secret key (g^αβ).

2 The mobile device prompts the user to enter a password (PWD).

3 PWD, the file, and the file name are encrypted by the mobile device using the shared secret key (g^αβ). The file size and a copy of the file name are stored in the TPM chip in a local table T.

4 The encrypted data is sent to the third party.

5 The third party decrypts the received data and then divides the file into d parts. Each part consists of t chunks and each chunk consists of n bits. Note that the file data is intended to be sent to d cloud servers.

6 The TPA generates the coding vector θ = [θ_1, θ_2, …, θ_t] using recursive hash functions: θ_i = H^i(PWD || FN || FS), where 1 ≤ i ≤ t, H^1(x) = H(x), and H^i(x) = H(H^{i−1}(x)) for 2 ≤ i ≤ t.

7 Using θ, the TPA produces the secrecy codes F'[j] to obtain confidentiality in the cloud by coding each part with the following equation (a short illustrative sketch follows this subsection):
   F'[j] = Σ_{i=1}^{t} θ_i · F[i][j], where 1 ≤ j ≤ d.
8 The TPA builds the MHT and calculates H(R), the hash value of the root node of the tree. The leaf nodes of the tree represent the hash values of F'[j]. The TPA then sends H(R) to the end-user, which saves it in its TPM chip. The TPA deletes PWD.

9 The end-user signs H(R) (Sig_sk(H(R)) = (H(R))^α) and sends the signature to the TPA. The TPA computes the signature collection of each data block, Φ = {σ_j} where 1 ≤ j ≤ d and σ_j = [H(F'[j]) · u^{F'[j]}]^β, with u an element of G1 chosen randomly by the TPA. The TPA then sends {Sig_sk(H(R)), F' = {F'[j] || H(FN + j)} for all 1 ≤ j ≤ d, Φ} to the cloud provider, which distributes the values of F'[j] to the corresponding cloud storage servers CS_j. The TPA sends t back to the end-user and deletes θ and FN.

Figure 4 Data flow diagram of setup phase in coding-based PDP framework
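The coding vector of step 6 and the secrecy codes of step 7 can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: SHA-256 stands in for H, chunks are already encoded as integers, and the modulus P is an arbitrary illustrative prime rather than the scheme's real group order.

```python
import hashlib

P = 2**127 - 1   # illustrative prime modulus (toy parameter, not the scheme's group order)

def coding_vector(pwd: bytes, fn: bytes, fs: bytes, t: int):
    """theta_i = H^i(PWD || FN || FS), i.e. H applied i times recursively."""
    theta = []
    digest = pwd + fn + fs
    for _ in range(t):
        digest = hashlib.sha256(digest).digest()          # H^i(x) = H(H^{i-1}(x))
        theta.append(int.from_bytes(digest, "big") % P)
    return theta

def secrecy_codes(chunks, theta):
    """F'[j] = sum_{i=1..t} theta_i * F[i][j] (mod P) for each of the d parts.

    chunks[i][j] is the j-th part's i-th chunk, represented as an integer.
    """
    t, d = len(theta), len(chunks[0])
    return [sum(theta[i] * chunks[i][j] for i in range(t)) % P for j in range(d)]

# Toy example: t = 3 chunks per part, d = 2 parts (two cloud servers).
theta = coding_vector(b"pwd", b"report.pdf", b"1024", t=3)
parts = [[11, 22], [33, 44], [55, 66]]     # F[i][j], small integers standing in for n-bit chunks
print(secrecy_codes(parts, theta))         # the two secrecy codes sent to the cloud
```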
3.1.2 Integrity verification phase 1
1 The TPA generates a challenge message by choosing c random values from the set [1, d] to build the subset I, where each element i ∈ I is linked to a random value v_i ∈ Z_p generated by the TPA.

2 The TPA sends the challenge to the CSP.

3 The CSP receives the challenge and generates the hash value H(F'[i]) of each F'[i], i ∈ I, together with the additional information Ω_i needed to rebuild the hash value of the root of the MHT, H(R).

4 The CSP also computes the following two values:
   μ = Σ_{i=1}^{c} v_i · F'[i] ∈ Z_p
   ω = ∏_{i=1}^{c} σ_i^{v_i} ∈ G_1

5 The CSP sends the proof, consisting of μ, ω, {H(F'[i]), Ω_i}, and Sig_sk(H(R)), to the TPA.

6 The TPA checks the proof against the following two conditions. If both hold, data integrity is verified and the TPA sends a 'true' message to the end-user; if one or both fail, data integrity is refuted and the TPA sends a 'false' message to the end-user.
   e(Sig_sk(H(R)), g) = e(H(R), g^α)
   e(ω, g^α) = e(∏_{i=1}^{c} H(F'[i])^{v_i} · u^μ, g^{αβ})

Figure 5 DFD of integrity verification phase in coding-based PDP framework
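Both the setup and verification phases rely on H(R), the root of an MHT whose leaves are the hashes of the F'[j] values: the TPA computes it at setup and rebuilds it from {H(F'[i]), Ω_i} during verification. A minimal sketch of the root computation follows; SHA-1 is assumed only because the evaluation section mentions it for MHTs, and the duplicate-last-node rule for odd levels is an illustrative convention, not one fixed by the framework (the bilinear pairing checks themselves are omitted here).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()   # SHA-1 assumed per the MHT discussion; any collision-resistant hash works

def merkle_root(leaves):
    """Compute H(R) from the leaf values F'[j], pairing nodes level by level."""
    level = [h(leaf) for leaf in leaves]          # leaf nodes are H(F'[j])
    while len(level) > 1:
        if len(level) % 2:                        # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

shares = [b"F'[1]", b"F'[2]", b"F'[3]", b"F'[4]"]
print(merkle_root(shares).hex())   # this 160-bit digest is what the client keeps in its TPM chip
```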
3.1.3 File retrieval phase

1 The device prompts the user to enter PWD.

2 The TPA and end-user exchange data using the Diffie-Hellman protocol to obtain a symmetric session key (Ks).

3 PWD, FN, FS, and t are encrypted using Ks and sent to the TPA.

4 The TPA requests the stored data from the cloud provider by sending it the value of each H(FN + j), where 1 ≤ j ≤ d.

5 The CSP sends the values of F'[j] back to the TPA.

6 The TPA calculates θ_i = H^i(PWD || FN || FS), where 1 ≤ i ≤ t.

7 The file is decrypted using the following equation: F[i][j] = θ^{−1}[i] · F'[j].

8 The TPA sends the file (F) to the end-user using the secure channel and deletes the values of θ, PWD, FN, FS, and t.

Figure 6 DFD of file retrieval phase in coding-based PDP framework
3.2 Sharing-based PDP framework

In this case, the framework combines the ShS with PDP. As with the coding-based framework, it is divided into three phases: setup phase, integrity verification phase, and file retrieval phase.

3.2.1 Setup phase

1 The mobile device and the third party use the Diffie-Hellman protocol to share a secret key (g^αβ).

2 The file and FN are encrypted by the mobile device using the secret key g^αβ and sent to the TPA. A copy of the file name is stored in the TPM chip in a local table T.

3 Suppose there are multiple cloud servers CS_1, …, CS_d that will be used for storing the data. The TPA generates d−1 random files (shares), denoted by F'[j] where 1 ≤ j ≤ d−1, with the size of each share equal to the size of F (the real file to be stored in the cloud), |F'[j]| = |F|.

4 The TPA calculates the accumulative result over the F'[j] files, AR = ⊕_{j=1}^{d−1} F'[j], and then computes F'[d] = AR ⊕ F (a short illustrative sketch of steps 3–4 follows Figure 8 below).

5 The TPA builds the MHT and calculates H(R), the hash value of the root node of the tree. The leaf nodes of the tree represent the hash values of F'[j]. The TPA then sends H(R) to the end-user, which saves it in its TPM chip.

6 The end-user signs H(R) (Sig_sk(H(R)) = (H(R))^α) and sends the signature to the TPA. The TPA computes the signature collection of each data block, Φ = {σ_j} where 1 ≤ j ≤ d and σ_j = [H(F'[j]) · u^{F'[j]}]^β, with u an element of G1 chosen randomly by the TPA. The TPA then sends {Sig_sk(H(R)), F' = {F'[j] || H(FN + j)} for all 1 ≤ j ≤ d, Φ} to the cloud provider, which distributes the values of F'[j] to the corresponding cloud storage servers CS_j. The TPA deletes FN.

Figure 7 Data flow diagram of setup phase in sharing-based PDP framework

3.2.2 Integrity verification phase

1 The TPA generates a challenge message by choosing c random values from the set [1, d] to build the subset I, where each element i ∈ I is linked to a random value v_i ∈ Z_p generated by the TPA.

2 The TPA sends the challenge to the CSP.

3 The CSP receives the challenge and generates the hash value H(F'[i]) of each F'[i], i ∈ I, together with the additional information Ω_i needed to rebuild the hash value of the root of the MHT, H(R).

4 The CSP also computes the following two values:
   μ = Σ_{i=1}^{c} v_i · F'[i] ∈ Z_p
   ω = ∏_{i=1}^{c} σ_i^{v_i} ∈ G_1

5 The CSP sends the proof, consisting of μ, ω, {H(F'[i]), Ω_i}, and Sig_sk(H(R)), to the TPA.

6 The TPA checks the proof against the following two conditions. If both hold, data integrity is verified and the TPA sends a 'true' message to the end-user; if one or both fail, data integrity is refuted and the TPA sends a 'false' message to the end-user.
   e(Sig_sk(H(R)), g) = e(H(R), g^α)
   e(ω, g^α) = e(∏_{i=1}^{c} H(F'[i])^{v_i} · u^μ, g^{αβ})

Figure 8 DFD of integrity verification phase in sharing-based PDP framework
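The XOR-based share generation of setup steps 3–4, and the corresponding recovery F = ⊕_{j=1}^{d} F'[j] used in the file retrieval phase below, can be sketched as follows. This is a toy illustration: os.urandom stands in for the TPA's random share generation, and the file and d are small example values.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(f: bytes, d: int):
    """Setup: d-1 random shares, then F'[d] = (F'[1] xor ... xor F'[d-1]) xor F."""
    shares = [os.urandom(len(f)) for _ in range(d - 1)]   # F'[1..d-1]
    ar = reduce(xor_bytes, shares)                        # accumulative result AR
    shares.append(xor_bytes(ar, f))                       # F'[d]
    return shares

def retrieve(shares):
    """Retrieval: F = F'[1] xor ... xor F'[d]."""
    return reduce(xor_bytes, shares)

f = b"file to be outsourced"
shares = make_shares(f, d=4)      # one share per distrusted storage server
assert retrieve(shares) == f      # XOR of all d shares recovers the file
```

Because every share except the last is uniformly random, any subset of fewer than d shares reveals nothing about F, which is the property the distributed, distrusted storage relies on.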
3.2.3 File retrieval phase

1 The TPA and end-user exchange data using the Diffie-Hellman protocol to obtain a symmetric session key (Ks).

2 FN is encrypted using Ks and sent to the TPA.

3 The TPA requests the stored data from the cloud provider by sending it the value of each H(FN + j), where 1 ≤ j ≤ d.

4 The CSP sends the values of F'[j] back to the TPA.

5 The TPA retrieves F using the following equation: F = ⊕_{j=1}^{d} F'[j].

6 The TPA sends the file (F) to the end-user using the secure channel and deletes FN.
Figure 9 DFD of file retrieval phase in sharing-based PDP framework

4 Evaluation of the proposed model

To recap, the main contribution of the proposed model to the PDP framework is the use of the coding-based and sharing-based schemes to address the main concern of the PDP framework: the performance degradation that results as the number of mobile clients offloading their jobs to the TTP increases. This issue was dealt with by replacing the heavy, computation-intensive encryption operations with lighter and simpler computational mechanisms. Section 4.1 presents a detailed evaluation of the performance of the proposed model, which proved superior. Section 4.2, on the other hand, evaluates the security of the proposed model, ensuring that the main security goals are still achieved with the new techniques in the PDP framework. The security evaluation does not aim to prove that the new techniques are more secure, but to show that security is not compromised by the proposed model, which provides strong enough, or at least comparable, measures.

4.1 Performance evaluation of the proposed model

In this section, a performance evaluation is conducted on both the coding-based PDP framework and the sharing-based PDP framework. The setup phase and the file retrieval phase are both analysed since they include the contributions of the proposed model. The integrity verification phase is not modified in the proposed model except for some notations.

4.1.1 Performance evaluation of coding-based PDP framework

Table 1 shows the main changes in the steps of the setup phase between the standard PDP framework and the new proposed model, while Table 2 shows the differences between the standard model and the proposed one in the file retrieval phase. Table 1 and Table 2 also explain the changes of the load on both client and TTP that occur in the proposed model with respect to the standard PDP framework.

Table 1 Comparison of coding-based PDP and standard PDP at setup phase

| Standard PDP | Proposed model | Load |
| - | User is prompted to enter PWD on mobile device | Increase computation load on client |
| F encryption on mobile device and decryption on TPA | PWD, F, and FN encryption on mobile device and decryption on TPA | Increase computation load on client and TPA |
| TPA generates a pair of asymmetric keys (ek, dk) | TPA divides the file into d parts, each part containing t symmetric chunks | Decrease computation load on TPA |
| TPA encrypts the file (F) using ek | TPA generates the coding vector θ using recursive hash functions | Decrease computation load on TPA |
| TPA partitions the encrypted file (F') into N data blocks and encodes each block with erasure codes | TPA produces secrecy codes using the vector product F'[j] = Σ_{i=1}^{t} θ_i · F[i][j] | Decrease computation load on TPA |
| H(R) and dk are encrypted and sent to client | H(R) is sent to client | Decrease computation load on TPA and client |
| dk is stored on TPM | H(R), FS, FN stored on TPM | Decrease storage load on client |
| TPA sends F' = {m_i} to cloud provider | TPA computes and concatenates F'[j] || H(FN + j) and sends them to cloud provider | Increase computation load on TPA |
Note that FN contains a string of letters representing the name of the file, each letter represented by an 8-bit ASCII code. FS represents the file size, a number giving the number of bytes in the file; this number can be converted to binary to give a small string of bits. H(R) is the hash root value of the MHT. Typically, the MHT uses the SHA-1 hash function, which produces a 160-bit digest, so the size of H(R) is 160 bits. dk in common asymmetric encryption schemes such as RSA and ElGamal is at least 1,024 bits long, which is larger than FS, FN, and H(R) combined, thus decreasing the storage load in the proposed model.

1 The MHT, the Diffie-Hellman key exchange protocol, the H(R) signature, and the signature collection Φ operations are used similarly in both the standard and proposed models.

2 There is a decrease of storage load on the client in the proposed model: FS, FN, and H(R) are stored on the TPM, while in the standard model dk is stored.

3 The encryption operation on the mobile device and decryption on the TPA is performed on three values, F, FN, and PWD, in the proposed model, which increases the size of data to be encrypted and decrypted, while in the standard model it is performed on F alone. This increases the computational cost on the mobile device and TPA. Comparing F with {F, FN, PWD}, however, F can be considered the only large value to be encrypted/decrypted, while FN and PWD are small, hence the encryption/decryption cost in the proposed model is only slightly higher than in the standard model.

4 The key generation for the encryption function is replaced by the recursive hash functions used to create the coding vector θ. The encryption function is replaced by a vector product over the d parts of F. Erasure-code encoding and file division into N parts is replaced by file division into d parts, each containing t symmetric chunks of data. These three aspects show that the computational cost on the TPA is reduced in the proposed model.

5 There is no need to encrypt H(R) in the proposed model since there is no dk to be encrypted alongside it. H(R) can be sent in plaintext because it is only used to verify the integrity of the encrypted file in the cloud. In fact, the distrusted cloud itself sends Ω_i to the TPA in the integrity verification phase to rebuild H(R). This reduces the computational overhead on the TPA and client.

6 The TPA computes and concatenates F'[j] || H(FN + j) and sends them to the cloud provider instead of sending F = {m_i}, which increases the computational cost on the TPA.
Table 2 Comparison of coding-based PDP and PDP at file retrieval phase

| Standard PDP | Proposed model | Load |
| - | User is prompted to enter PWD on mobile device | Increase computation load on client |
| dk (decryption key) is encrypted using Ks and sent to the TPA that decrypts it | PWD, FN, FS, and t are encrypted using Ks on mobile device and decrypted on TPA | Decrease computation load on client and TPA |
| TPA sends a request to the CSP to extract the encrypted file stored in its storage servers (F') | TPA requests the cloud provider for stored data by sending the value of each H(FN + j) where 1 ≤ j ≤ d | Increase computation load on TPA |
| Decryption of F' using dk | TPA produces coding vector θ using recursive hash functions and decrypts F' using F[i][j] = θ^{−1}[i] · F'[j] | Decrease computation load on TPA |
The Diffie-Hellman key exchange exists in both models. In the standard model, dk is encrypted/decrypted using Ks, which is a high-cost operation compared to the proposed model, since the size of dk is at least 1,024 bits while the sizes of FN, FS, PWD, and t together are far smaller than dk. The PWD entry operation is an addition to the standard model procedure. The TPA in the proposed model has to calculate H(FN + j) d times to send a file retrieval request to the CSP, while in the standard model it just sends a request; this increases the computational cost for this step. On the other hand, file decryption in the standard model is done with an asymmetric decryption function using dk, while in the proposed model the file is decrypted by producing the coding vector θ and then performing matrix multiplication, treating θ as a t × 1 matrix and the F'[j] values as a 1 × j matrix. This is expressed by the equation F[i][j] = θ^{−1}[i] · F'[j]; note that the value of θ is inverted for each multiplication. This modification, even including the H(FN + j) calculations, decreases the computational cost of the decryption operation in the new proposed model. The previous tables and explanations show that the proposed coding-based PDP model is, performance-wise, less costly than the standard PDP model.
4.1.2 Performance evaluation of sharing-based PDP framework

The ShS is based on XOR operations and simple random number generation, which makes it lighter and simpler than the CoS. Table 3 compares the standard PDP model and the new proposed sharing-based PDP model in terms of the change in computation and storage load, with reference to the new proposed model. Table 3 shows that, in general, the new proposed model is lighter and more performance-efficient than the standard one. The Diffie-Hellman key exchange, MHT, H(R) signature, and signature collection Φ operations are all used in both models. The mobile device in the proposed model encrypts F and FN while the TPA decrypts them; in the standard model, encryption and decryption are done only on F. Thus, the proposed model carries slightly more computational cost here, but FN is relatively small (each letter = 8 bits), so the difference in performance is slight. The main point of comparison lies in the encryption stage. In the standard model, it follows the usual scheme (key generation, encryption, partitioning and encoding), while in the proposed model it amounts to generating d−1 random shares that will be stored on d−1 storage servers, XORing the d−1 shares together, and XORing the result with the file's string of bits to obtain F'[d]. Thus, the proposed model is much lighter and more efficient than the standard model. Also, in the standard model both H(R) and dk are encrypted and sent to the client, while in the proposed model H(R) is sent to the client in plaintext, which reduces the computational cost on the TPA and client. The storage load in the proposed model is decreased on the client, since it only needs to store the value of FN in its TPM chip, while in the standard model the client needs to store dk, which is much larger than FN, in its TPM chip. Before sending the shares to the storage servers, the TPA concatenates and calculates the value of F'[j] || H(FN + j) for each intended storage server, which slightly increases the computational cost in the proposed model; in the standard model, the encrypted part is sent alone.

Table 3 Comparison of sharing-based PDP and standard PDP at setup phase
| Standard PDP | Proposed model | Load |
| F encryption on mobile device and decryption on TPA | F and FN encryption on mobile device and decryption on TPA | Increase computation load on client and TPA |
| TPA generates a pair of asymmetric keys (ek, dk) | TPA generates d−1 random files (shares) F'[j] where 1 ≤ j ≤ d−1 and size of F'[j] equals size of F | Decrease computation load on TPA |
| TPA encrypts the file (F) using ek | AR = ⊕_{j=1}^{d−1} F'[j] | Decrease computation load on TPA |
| TPA partitions the encrypted file (F') into N data blocks and encodes each block with erasure codes | F'[d] = AR ⊕ F | Decrease computation load on TPA |
| H(R) and dk are encrypted and sent to client | H(R) is sent to client | Decrease computation load on TPA and client |
| dk is stored on TPM | FN stored on TPM | Decrease storage load on client |
| TPA sends F' = {m_i} to cloud provider | TPA computes and concatenates F'[j] || H(FN + j) where 1 ≤ j ≤ d and sends them to cloud provider | Increase computation load on TPA |

Table 4 Comparison of sharing-based PDP and PDP at file retrieval phase

| Standard model | Proposed model | Load |
| dk (decryption key) is encrypted using Ks and sent to the TPA that decrypts it | FN is encrypted using Ks by mobile device and decrypted on TPA | Decrease computation load on client and TPA |
| TPA sends a request to the CSP to extract the encrypted file stored in its storage servers (F') | TPA requests the cloud provider for extracting stored data by sending it the value of each H(FN + j) where 1 ≤ j ≤ d | Increase computation load on TPA |
| Decryption of F' using dk | TPA retrieves F using F = ⊕_{j=1}^{d} F'[j] | Decrease computation load on TPA |
Table 4 presents a comparison between the standard and proposed models at the file retrieval phase and shows that the proposed model is more efficient than the standard model. The encryption and decryption of dk versus FN using the Diffie-Hellman exchanged key (Ks) results in a lighter approach in the proposed model, since dk is a large value (at least 1,024 bits) while FN is relatively small. The standard model sends a request to extract the encrypted file from the storage servers, while in the proposed model the request is made by sending the value of each H(FN + j) to each relevant server, so the proposed model incurs a slight increase in computational load at the TPA. The main aspect of the file retrieval scheme is the decryption operation using dk; it is replaced by a simple d-times XOR operation on the retrieved encrypted shares to obtain the file, which is a much lighter operation than the decryption function.

4.2 Security evaluation of the proposed system

In this section, the security of the proposed system is evaluated considering a number of aspects: availability, authenticity, integrity, confidentiality, privacy, non-repudiation, and accountability.
• Availability: The main contribution of the coding-based PDP and sharing-based PDP schemes is the enhancement of TPA performance as the number of requests and operations increases, that is, making the TPA system highly scalable. Thus, the proposed system provides higher availability compared to the standard PDP system. Moreover, the availability of the CSP depends on the reputation and efficiency of the provider; a secure, efficient, inexpensive CSP must be chosen in order to help the TPA provide scalable and efficient services to the client. The TPA, cloud provider, and client's device must adopt anti-DoS technologies and techniques to ensure availability.

• Integrity: In the proposed system, the integrity of data is verified using MHTs. These trees help provide integrity verification services with minimum storage and communication overhead. Moreover, the verification process is stateless: the proofs of storage are created randomly by the verifier, unlike some other systems where proofs of storage are created using persistent data held by some party.

• Privacy: The privacy of data used by the TPA and CSP must be maintained. Legal agreements, privacy policies, access controls, and regulations and laws are some of the methods used to achieve transparency and data privacy.

• Non-repudiation and accountability: After the remote authentication between the client and the TPA, a secure channel connecting them is established. Therefore, all of the client's actions involving the TPA can be traced. This ensures accountability and non-repudiation temporarily, as long as the session is open, because there is no logging service in the TPA. The integrity verification phase attempts to provide accountability and non-repudiation for data integrity.

• Authenticity: Authenticity is applied in the remote identification process between the client and the TPA (before the Diffie-Hellman key exchange). On one hand, using trusted computing standards, the TPA is considered trusted by the client; on the other hand, the client's device is authenticated by answering a challenge produced by the TPA with a message signed by the attestation identity key (AIK). The AIK is created and managed in the TPM chip of the mobile device. The PWD entry in the coding-based PDP scheme is an application of authenticity on mobile devices. In addition, password entry into the TPM chip is considered an application of the authenticity principle in the proposed model.

• Confidentiality: Data confidentiality is achieved using linear coding in the coding-based PDP model, and XOR operations alongside random number generation in the sharing-based PDP model. Cloud servers are assumed to be distrusted, which is why the data is protected using the previously mentioned methods. The mobile device is assumed trusted in terms of computation and storage; a user can apply the needed countermeasures to secure the device's execution environment. All sensitive keys and data are stored in the TPM chip, which is provided with security measures and an authentication process that make it secure enough; it is supplied with security procedures that counter such attacks, for example a limited number of trials within a specified period of time. The TPA is trusted using trusted computing standards, and data exchange takes place over a secure communication channel. Symmetric encryption functions are used to encrypt the data transferred between the two parties, with the Diffie-Hellman key exchange protocol used to negotiate a common key. Trusted computing technology helps the two parties counter man-in-the-middle attacks during the Diffie-Hellman key exchange.

In the coding-based PDP model, the confidentiality and integrity of the file, as discussed in Ren et al. (2011), rely on the coding vector (θ). Without θ, there are no computational methods that can help retrieve F from F'[j]. To obtain θ, the value of θ_1 = H(PWD || FN || FS) must be computed, and to compute it, the values of FN, PWD, and FS must be known. To obtain FN and FS, the storage in the TPM must be exposed, which is a difficult process. If this data were somehow revealed, the next step would be to connect the values of H(FN + j) to FN; this can be done using two methods:

a First, the local table (T) is sent to the cloud servers to link the values of FN with H(FN + j) using brute-force searching.

b Second, malicious code searches the local table (T) to find a matching FN for each H(FN + j).

The first method requires large transmission traffic, while the second requires intensive processing. Both can be detected by intrusion detection systems installed on the mobile devices. Another approach could be based on randomly guessing FN, FS, PWD, and H(PWD || FN || FS), which can be written as H(o). To obtain FN from H(FN + j), the probability of success is Pr1 = 1 / 2^|FN|. To guess the value of θ, the probability of success is Pr2 = 1 / 2^min(|H(o)|, |PWD|+|FS|). Thus, to reveal F from F'[j], the total probability (Pr = Pr1 * Pr2) is a very small number when the size of H(o), FN, or PWD + FS is large enough.

Ren et al. (2011) also discussed that in the sharing-based PDP model, confidentiality and integrity are based on XOR operations over random numbers and the file F. If the cloud servers collude to obtain F, then a connection between FN and the value they hold (H(FN + j)) needs to be made, either through malicious attempts to send the values of the local table (T) to the cloud servers or through brute-force searching in table (T). Both are difficult to achieve because the TPM chip is difficult to expose, and both methods induce either large transmission traffic or intensive processing, which can be detected by intrusion detection systems installed on the mobile devices. Another approach is random guessing of FN from H(FN + j), which succeeds with probability Pr = 1 / 2^|FN|, which is very small when |FN| is large enough.

It could be argued that an adversary server could collect all the shares of F' and XOR them together to obtain F. This would be a serious threat to the confidentiality and integrity of the data, but it depends on the way the TPA sends the shares to the cloud. If the TPA sends each file's (or each client's) shares together, one file at a time, then an adversary can collect them all. However, if the TPA sends shares from different files or different clients in a mixed way, such that a share of file F1 is sent to the cloud and the next share sent is a share of file F2, then the adversary server faces the FN dilemma discussed above, making this type of attack infeasible.
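The guessing probabilities above can be made concrete with a short calculation. The bit lengths used below are illustrative assumptions (a 12-character file name, an 8-character password, a 32-bit file size field, and a 256-bit digest for H(o)); they are not values fixed by the scheme.

```python
# Illustrative bit lengths (assumptions, not scheme parameters).
FN_bits, PWD_bits, FS_bits, Ho_bits = 12 * 8, 8 * 8, 32, 256

Pr1 = 2.0 ** -FN_bits                           # guess FN from H(FN + j)
Pr2 = 2.0 ** -min(Ho_bits, PWD_bits + FS_bits)  # guess theta via H(o) or PWD||FS
Pr = Pr1 * Pr2                                  # reveal F from F'[j]

print(f"Pr1 = 2^-{FN_bits}, Pr2 = 2^-{min(Ho_bits, PWD_bits + FS_bits)}, Pr = {Pr:.3e}")
```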
5 Conclusions

In this paper, we integrated two MCC frameworks in order to obtain a novel framework that is more efficient in terms of performance and security. The PDP framework presented by Yang et al. (2011) and the lightweight compromise-resilient framework presented by Ren et al. (2011) are combined in order to solve the performance degradation bottleneck of the PDP framework using the lightweight encryption alternatives of the Ren et al. (2011) framework. Through this integration, we achieved a more efficient and relatively secure software-defined MCC infrastructure. The performance and security of the new
framework were evaluated and discussed, and the framework proves more efficient for the TPA when linear coding and XOR operations replace the standard encryption function. The security evaluation found that the two new models meet the requirements for the standard information security concepts of confidentiality, integrity, availability, privacy, non-repudiation, accountability, and authenticity. In conclusion, the CoS PDP framework proved lighter than the standard PDP framework in terms of performance, while the sharing-based PDP framework proved lighter than both the CoS PDP framework and the standard PDP framework. Both the CoS and sharing-based PDP frameworks are relatively secure and meet the requirements of the information security concepts. The ShS is less secure than both the CoS PDP and the standard PDP, but in exchange offers better performance and lower computational overhead.
Acknowledgements

This work is funded by grant number (13-ELE2527-10) from the Long-Term National Science Technology and Innovation Plan (LT-NSTIP), the King Abdul-Aziz City for Science and Technology (KACST), Kingdom of Saudi Arabia. We thank the Science and Technology Unit at Umm Al-Qura University for their continuous logistics support.
References Al-Ayyoub, M., Jararweh, Y., Benkhelifa, E., Vouk, M. and Rindos, A. (2015) ‘Sdsecurity: a software defined security experimental framework’, 2015 IEEE International Conference on Communication Workshop (ICCW), IEEE, pp.1871–1876. Atlantis USX [online] http://www.atlantiscomputing.com/ products/atlantis-usx (accessed October 2014). Breiter, G., Behrendt, M., Gupta, M., Moser, S.D., Schulze, R., Sippli, I. and Spatzier, T. (2014) ‘Software defined environments based on TOSCA in IBM cloud implementations’, IBM Journal of Research and Development, Vol. 58, Nos. 2/3, pp.1–10. Ca, S.V. (2013) ‘Catbird announces support for vmware nsx network virtualization platform’ [online] http://www.catbird. com/company/catbird-vsecurity-with-vmware-nsx#.Uz9J9KSzHT (accessed October 2014). Catbird Networks Inc. (2014) Private Cloud Security, a Catbird White Paper, White Paper. Cicotti, G., Coppolino, L., D’Antonio, S. and Romano, L. (2015) ‘How to monitor QoS in cloud infrastructures: the QoSMONaaS approach’, Int. J. Comput. Sci. Eng., August, Vol. 11, No. 1, pp.29–45 [online] http://dx.doi.org/10.1504/ IJCSE.2015.071359 (accessed 8 September 2015). Crump, G. (2013) Storage Switzerland: Software Defined Storage Needs a Platform, Technical Report TSL03137USEN, IBM, Inc.
Cuomo, A., Rak, M. and Villano, U. (2015) ‘Performance prediction of cloud applications through benchmarking and simulation’, Int. J. Comput. Sci. Eng., August, Vol. 11, No. 1, pp.46–55 [online] http://dx.doi.org/10.1504/IJCSE. 2015.071362 (accessed 13 September 2015). Darabseh, A., Al-Ayyoub, M., Jararweh, Y., Benkhelifa, E., Vouk, M. and Rindos, A. (2015) ‘Sdstorage: a software defined storage experimental framework’, in 2015 IEEE International Conference on Cloud Engineering (IC2E), IEEE, pp.341–346. Dinh, H.T., Lee, C., Niyato, D. and Wang, P. (2011) ‘A survey of mobile cloud computing: architecture, applications, and approaches’, Wireless Communications and Mobile Computing, Vol. 13, No. 18, pp.1587–1611. EMC (2015) Transform Your Storage for the Software Defined Data Center with EMC ViPR Controller, White Paper H11749.4, EMC Corporation. Fortiş, T-F., Munteanu, V.I. and Negru, V. (2015) ‘A taxonomic view of cloud computing services’, Int. J. Comput. Sci. Eng., August, Vol. 11, No. 1, pp.17–28 [online] http://dx.doi.org/ 10.1504/IJCSE.2015.071360 (accessed 17 September 2015). Jararweh, Y., Al-Ayyoub, M., Benkhelifa, E., Vouk, M. and Rindos, A. (2015) ‘Software defined cloud: Survey, system and evaluation’, Future Generation Computer Systems, May 2016, Vol. 58, pp.56–74 [online] http://www.sciencedirect. com/science/article/pii/S0167739X15003283. Khan, A.N., Kiah, M.L.M., Khan, S.U. and Madani, S.A. (2013) ‘Towards secure mobile cloud computing: a survey’, Future Generation Computer Systems, Vol. 29, No. 5, pp.1278–1299. Ling, R. (2004) The Mobile Connection: The Cell Phone’s Impact on Society, Morgan Kaufmann, San Francisco. Mell, P. and Grance, T. (2011) The NIST Definition of Cloud Computing, Draft, NIST Special Publication 800:145. Nathani, A., Chaudhary, S. and Somani, G. (2012) ‘Policy based resource allocation in IaaS cloud’, Future Generation Computer Systems, Vol. 28, No. 1, pp.94–103. NetCitadel Inc. (2012) Netcitadels One Control Platform the Key to Intelligent, Adaptive Network Security, White Paper. Ouyang, J., Lin, S., Jiang, S., Hou, Z., Wang, Y. and Wang, Y. (2014) ‘SDF: software-defined flash for web-scale internet storage systems’, SIGARCH Comput. Archit. News, Vol. 49, No. 4, pp.471–484. Ren, W., Yu, L., Gao, R. and Xiong, F. (2011) ‘Lightweight and compromise resilient storage outsourcing with distributed secure accessibility in mobile cloud computing’, Tsinghua Science & Technology, Vol. 16, No. 5, pp.520–528. Rindos, A., Vouk, M. and Jararweh, Y. (2014) ‘The virtual computing lab (VCL): an open source cloud computing solution designed specifically for education and research’, International Journal of Service Science, Management, Engineering, and Technology (IJSSMET), Vol. 5, No. 2, pp.51–63. Seshadri, S., Muench, P.H., Chiu, L., Koltsidas, I., Ioannou, N., Haas, R., Liu, Y., Mei, M. and Blinick, S. (2014) ‘Software defined just-in-time caching in an enterprise storage system’, IBM Journal of Research and Development, Vol. 58, Nos. 2/3, pp.1–13. Tawalbeh, L.A., Haddad, Y., Khamis, O., Aldosari, F. and Benkhelifa, E. (2015a) ‘Efficient software-based mobile cloud computing framework’, in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp.317–322, IEEE, March.
Tawalbeh, L.A., Jararweh, Y., Ababneh, F. and Dosari, F. (2015b) 'Large scale cloudlets deployment for efficient mobile cloud computing', Journal of Networks, February, Vol. 10, No. 1, pp.70–76. Thereska, E., Ballani, H., O'Shea, G., Karagiannis, T., Rowstron, A., Talpey, T., Black, R. and Zhu, T. (2013) 'IOFlow: a software-defined storage architecture', in Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, Farminton, Pennsylvania, pp.182–196. VMware Inc. (2010) Vmware Vshield Virtualization-Aware Security for the Cloud, White Paper. VMware Inc. (2013) VMware vCloud Networking and Security Overview, White Paper.
Walker, K. (2013) ‘Cloud security alliance announces software defined perimeter (SDP) initiative’ [online] https://cloudsecurityalliance.org/media/news/csa-announcessoftware-defined-perimeter-sdp-initiative/ (accessed October 2014). Yang, J., Wang, H., Wang, J., Tan, C. and Yu, D. (2011) ‘Provable data possession of resource-constrained mobile devices in cloud computing’, Journal of Networks, Vol. 6, No. 7, pp.1033–1040. Zhou, R., Sivathanu, S., Kim, J., Tsai, B. and Li, T. (2014) ‘An end-to-end analysis of file system features on sparse virtual disks’, Proceedings of the 28th ACM International Conference on Supercomputing, Munich, Germany, pp.231–240.