Efficient Software-Based Mobile Cloud Computing Framework

2015 IEEE International Conference on Cloud Engineering

Lo’ai Tawalbeh∗†, Yousef Haddad‡, Omar Khamis‡, Fahd AlDosari∗ and Elhadj Benkhelifa§
∗ Computer Engineering Department, Umm Al-Qura University, Makka, Saudi Arabia. Email: [email protected]
† Computer Engineering Department, Jordan University of Science and Technology, Irbid, Jordan. Email: [email protected]
‡ Princess Sumaya University, Amman, Jordan
§ Staffordshire University, United Kingdom

978-1-4799-8218-9/15 $31.00 © 2015 IEEE. DOI 10.1109/IC2E.2015.48

Abstract—This paper proposes an efficient software based data possession mobile cloud computing framework. The proposed design utilizes the characteristics of two frameworks: the first is a provable data possession design built for resource-constrained mobile devices that takes advantage of trusted computing technology, and the second is a lightweight, resilient storage outsourcing design for mobile cloud computing systems. Our software based framework combines the strengths of both frameworks to gain better performance and security. The evaluation and comparison results show that our design has better flexibility and efficiency than other related frameworks.

Keywords—Mobile Cloud Computing, Software Defined Systems, Trusted Cloud Computing, Software Defined Storage, Security.

I. INTRODUCTION

Mobile devices are becoming an essential part of daily life in human communities, and new technologies, applications, and uses for these smart devices appear every few months, which has made them very attractive to people all over the world [1].

Mobile Cloud Computing (MCC) [2] can be defined as a service that allows mobile users to offload their computation-intensive jobs and persistent storage operations to cloud resources [3]. Such a service can improve the performance and capabilities of resource-constrained mobile devices. MCC is a promising technology and can provide many services, such as Infrastructure as a Service (IaaS) [4], Data storage as a Service (DaaS), Software as a Service (SaaS) [5], and Platform as a Service (PaaS) [6]. On the other hand, MCC has many pitfalls related to communication, including bandwidth, availability, and heterogeneity, and issues related to computing, including offloading, security, integrity, authenticity, and data access.

This research focuses on providing more efficiency and security to MCC frameworks, viewed as Software Defined Systems that include Software Defined Storage. Two MCC frameworks are investigated to propose a new efficient software based framework. The first, in [7], is a provable data possession (PDP) scheme for resource-constrained mobile devices in cloud computing, and the second, in [8], is a lightweight and compromise resilient storage outsourcing scheme with distributed secure accessibility in mobile cloud computing. In [7], the verification of the data stored in the cloud is done using bilinear signatures and a Merkle Hash Tree (MHT). This framework verifies data efficiently, but the scalability of the TTP agent is a problem: the increasing number of users and the heavy calculations might lead to a bottleneck. In [8], a secure lightweight distributed access scheme for storage resources in the cloud is presented.

The proposed software based framework integrates the advantages and abilities of both frameworks to overcome their pitfalls in performance and security. The rest of this paper is organized as follows: the next section gives background on software defined systems and storage, in addition to related work. Section 3 introduces the base frameworks, while Section 4 presents the proposed model. The evaluation of the new model is presented in Section 5, followed by the conclusions in Section 6.

II. SOFTWARE-DEFINED SYSTEMS

It could be argued that the emergence of Software-Defined Systems was an inevitable result of the paradigm shift from traditional computing models to utility-based Cloud computing. Cloud computing providers typically rely on virtualization (an abstraction of computing resources through software) to effectively and efficiently manage the underlying hardware within their data centres. Virtualization provides the ability to logically divide physical resources, which allows secure, efficient multi-tenancy on single machines. It also enables the aggregation of virtualized resources across multiple hosts, redundancy through resource migration, and elasticity through rapid resource provision and cloning.

The ability to abstract a large amount of computing resources enabled environments that were highly dynamic and could rapidly respond to change. With this came complex virtual network paths of resources for individual workloads, and therefore also the need to automate resource management to enable precise resource provisioning for individual applications. Software-Defined Networking (SDN) was the first software defined resource (utilising this terminology within this context), allowing the management and control planes to be separated from routing hardware and operated by software remotely, providing an increased ability to control, provision, and optimise networking according to the changing requirements of an individual workload [9]. Software-Defined Storage (SDS) has followed suit by providing a similar level of control over virtualized storage resources, allowing storage to be dynamically managed according to application policies [10]. Software Defined Compute (SDC) is the final element of the trio, allowing computational resources to be managed. The combination of software defined resources may be grouped as Software Defined Environments (SDEs), a necessity for fulfilling the requirements of Software Defined Datacentres (SDDs) [10]. SDEs appear to be the future of Cloud computing infrastructures.

A. Software Defined Storage - State of the Art

Due to the novelty of the area, little work within the field may be found in the academic literature, whilst industry appears to have a greater grasp on the subject. IBM may be seen as a leader through the literature, as well as VMWare and Microsoft [9][11][13][14][15]. Traditional cloud environments do offer a degree of storage resource abstraction through virtualization; however, in order to have a more powerful environment in which these resources are autonomously managed, SDS solutions are needed. These solutions enable dynamic policies for storage requirements to be easily enforced, mitigating the complexity administrators face in managing multiple resource paths through multiple layers. The authors in [14] introduce their SDS architecture, IOFlow. They explain that when an input/output request is made to storage within a cloud environment, the request (Ethernet based) must traverse multiple layers (an example given is 18 layers for a Windows system). This layer traversal creates issues when attempting to implement policies for certain applications, as each layer may need to treat individual packets differently; enforcing these policies therefore involves a complex procedure in which each layer must be separately configured and managed. The IOFlow SDS architecture enables end-to-end management of these input/output flows, between a hypervisor and a storage controller, via a centralised controller which manages a queue of policy requests. The centralised concept is influenced by the success of those seen in SDN. However, they cite that a key issue was managing the queue of requests: network devices inherently manage queues for networking efficiently, but this is not provided for storage. They state that the current implementation only supports small to medium data centres and cite scalability as a potential reason, which may be due, in part, to the centralised design, known for its inability to scale efficiently. They also only provide implementations for Windows based machines through multiple device drivers; given the heterogeneous nature and requirements of SDS, this would need to be improved to take advantage of multiple storage types.

In [15] the authors do not present another implementation but instead focus on identifying potential issues in applying conventional storage formats to SDS based solutions. They analyse the performance of sparse disks (a dynamic virtualization format aimed at reducing spare capacity) when enabling various VM guest file system features such as Copy-on-Write and auto-defragmenting. Through experimental results they show that utilising sparse disk formats creates performance issues when these additional features are enabled. However, by tuning the storage requests and taking into account the cross-layer input/output, their benchmarks show that performance gains are feasible. The authors also stress the need for optimising input/output for SSDs. Overall, the work highlights the necessity of benchmarking SDS systems to understand the exact effect upon implementations consisting of different feature sets. As with the previous work, they also suggest that further work is required.

Other work concentrates on developing an SDS solution that optimises a single problem, in contrast to a unified storage solution. In [13] the authors introduce their SDS solution for just-in-time caching, aimed at optimising the use of Solid State Drives (SSDs) in server environments. Flash storage offers huge performance gains over traditional storage drives but carries a considerable financial cost; it therefore tends to be more suited to caching. However, the authors argue that these performance gains may be ineffective due to bottlenecks in networking, and that enabling intelligent caching through a software defined solution can solve these problems. They name their solution Software-Defined Cooperative Cache (SDCC), which provides a block-level API, allowing any operating system, file system, etc. to take advantage of the system. The architecture follows a similar centralised design to the previous one, where clients connect to the controller which manages the policies; however, in this system there are multiple controllers which manage each storage server, and the clients (hypervisors) are clustered into groups with a storage server. Evaluation of their system showed good results, with a latency reduction of 69% and a throughput increase of 5.4x, and it is already implemented in a number of IBM products.

SSDs are again the focus of the work presented in [12]. The authors propose their system, Software-Defined Flash (SDF), citing under-utilisation as their motivation, as well as the speed requirements mentioned in other work. They explain that in the system they studied, a 50% bandwidth loss was occurring due to the lack of optimisation at higher levels to take SSDs into account. Their proposed solution exposes a software interface to the SSDs which uses a number of methods to mitigate under-utilisation. They match the block write size to the erasure block size to maintain performance gains in line with the parallel applications, maintaining maximum SSD bandwidth, and also force applications to erase blocks appropriately in order to maximise utilisation. The implementation itself involves a custom hardware board for interfacing with the SSDs and clustered servers which serve an interface to the clients. The system has shown considerable performance gains and is currently in use in a production data centre.

B. Related Work

The authors in [16] and [17] suggested using RSA-based hash functions to verify data integrity. However, this method is costly in terms of computation and data transfer. In [18], PDP is implemented and homomorphic tags are used to ensure integrity. The Proofs Of Retrievability (POR) service, which verifies the retrieved data using Reed-Solomon codes, is implemented in [19]. Shacham and Waters [20] designed a public data possession scheme that uses bilinear mapping. Problems with this scheme include that it is applicable only to static file storage, and that the number of authentication tokens is proportional to the number of data blocks. Wang, et al. [21] utilize the MHT to build verification tags stored at cloud servers. They treat the tree as a left-to-right sequence, so the location of an error can be perceived; the MHT is also used for dynamic data updates. Ateniese, et al. [22] improve their previous model by combining homomorphic linear authenticators with identification protocols to obtain proofs of storage. This framework supports public verification an infinite number of times, and also supports independence between communication complexity and file length.

An effective PDP scheme should enable the TPA to verify the integrity of data in the cloud without retrieving the whole data and without presenting new burdens and obstacles to the end-user. Wang, et al. [23] introduced a framework for data auditing using a third party that is independent from data encryption. Using random masking and homomorphic authenticators, a third party can preserve privacy by auditing concealed data. Chang and Xu [24] proposed a method for calculating the signature of a redacted message without knowing the private key. This opens the door for the verification of a message by a third party without obtaining the private key.

III. BASE FRAMEWORKS

A. Provable Data Possession of Resource-Constrained Mobile Devices in Cloud Computing

The size of the data that users need is increasing, which creates the problem of storing it and verifying its integrity. Cloud computing introduced a cloud storage infrastructure that solves the storage problem at a convenient cost; it also provides security countermeasures and storage verification methods. Provable Data Possession (PDP) schemes are storage- and computation-efficient schemes used to publicly verify the integrity of data in the cloud an unlimited number of times and to update cloud data dynamically. The integration of cloud computing and mobile device technology raises a new dilemma: the resource limitations of mobile devices make storage and verification operations impractical. Using trusted computing technology, Yang, et al. [7] proposed a model that performs PDP operations effectively by incorporating a Trusted Third-Party Agent (TPA) to calculate and process most of the operations instead of the mobile device, using bilinear signatures and a Merkle Hash Tree (MHT). End-users are responsible for generating some keys and random numbers using a Trusted Platform Module (TPM) [25]. The structure of this resource-constrained software based public PDP framework contains three main entities. First, the mobile end-user (client), which needs to store its data in the cloud securely. Second, the trusted third-party agent (TPA), a software defined system (SDS) responsible for intensive software computations (encryption, authentication, and verification); the TPA works as a service agent between the mobile access point and the gateway of the IP network. Finally, the Cloud Service Provider (CSP), which manages the cloud environment and provides clients with storage and proofs of data possession when queried through the Internet. The framework assumes all communication channels to be secure, authenticated, and reliable.

The design objectives of this framework are correlated to four main concepts: public provable data possession, trusted computing technology, stateless verification, and support of mobile device environments. Trusted computing technology aims to establish a trustful authenticated channel between client and TPA and to use it to exchange the relevant data, so that the TPA can perform the intensive computation operations in place of the mobile device. Stateless verification is the idea that proofs of storage are computed using random data generated by the verifier rather than static data maintained by some entity. Figure 1 shows this architecture.

Figure 1: PDP MCC Framework Architecture [7].

B. Lightweight and Compromise Resilient Storage Outsourcing with Distributed Secure Accessibility in Mobile Cloud Computing

Ren, et al. [8] proposed a framework for lightweight storage outsourcing. The framework is constructed of two entities: the mobile device and the cloud service provider. It treats the cloud storage servers as totally distrusted nodes in order to preserve the integrity and confidentiality of the stored data. The framework also treats mobile devices as distrusted nodes with respect to storage, because mobile devices are exposed to device loss, which means that stored information and credentials can be lost or stolen; the framework is therefore compromise resilient in this case. The device is considered trusted for computation, based on the idea that its execution environment can be secured using anti-malware and other tools to counter malicious attempts and activities [3].

The framework provides an accessibility scheme for the data owner and any other parties the owner wants to share data with. It consists of three schemes: the Encryption based Scheme (EnS), the Coding based Scheme (CoS), and the Sharing based Scheme (ShS). EnS uses encryption algorithms to provide confidentiality for data stored on a single cloud storage server. CoS uses secrecy codes and linear coding for each block (share) of the data to provide confidentiality across multiple storage servers, with less computational overhead than encryption algorithms since matrix multiplication is used instead of encryption functions. ShS produces the least computational overhead by using exclusive-or (XOR) operations to provide confidentiality for data stored on multiple distrusted cloud storage servers. Integrity and authenticity verification is provided using Message Authentication Code (MAC) functions in all three schemes. The communication channel between the device and the cloud servers is assumed secure. All computations and processes are done on the mobile device.
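The ShS idea above can be illustrated with a minimal XOR-based sharing sketch. The paper does not give the exact construction, so the share layout below (n−1 random pads plus one masked share, one share per distrusted server) is an illustrative assumption:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(data: bytes, n: int) -> list:
    """Split data into n shares for n distrusted servers:
    n-1 uniformly random pads, plus data XOR (all pads).
    Fewer than n shares reveal nothing about the data."""
    pads = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    masked = reduce(xor_bytes, pads, data)
    return pads + [masked]

def recover(shares: list) -> bytes:
    """The owner XORs all shares together to recover the data."""
    return reduce(xor_bytes, shares)
```

As the paper notes, this costs only XOR operations, far cheaper than encryption, at the price of requiring all n servers to respond.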

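All three schemes verify integrity and authenticity with MAC functions; the paper does not name a specific MAC, so the HMAC-SHA-1 choice below is an assumption made for illustration:

```python
import hashlib
import hmac

def tag(key: bytes, block: bytes) -> bytes:
    """Integrity/authenticity tag for one stored block
    (HMAC-SHA-1 assumed; the paper only says 'MAC functions')."""
    return hmac.new(key, block, hashlib.sha1).digest()

def verify(key: bytes, block: bytes, t: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag(key, block), t)
```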
IV. PROPOSED MODEL

The proposed model combines the advantages of both discussed schemes. It obtains the characteristics of the lightweight and compromise resilient schemes presented in [8], and utilizes the aspects of the provable data possession model [7] to overcome the performance degradation problem of the PDP model. The proposed software based model combines the Coding based Scheme and the Sharing based Scheme with the PDP model, so encryption is no longer used; instead, light operations achieve confidentiality, integrity, and authenticity. The proposed model consists of a coding-based provable data possession entity that combines the CoS scheme with PDP. Note that EnS is not used, because PDP already relies on encryption functions to encrypt data at the TTP and send it to the cloud.

Now let e: G1 × G1 → G2 be a bilinear map with a big prime order p, let g be the generator of G1, and let H: {0, 1}* → G1 be a hash function. The implementation of the proposed model is divided into two different frameworks.

The Coding-based Provable Data Possession Framework: mainly based on the PDP framework, with the Coding based Scheme replacing encryption. The framework consists of three phases: a setup phase, an integrity verification phase, and a file retrieval phase.

Setup Phase: The mobile device and the third party use the Diffie-Hellman protocol to share a secret key (g^αβ). The mobile device then prompts the user to enter a password (PWD). The PWD, the file, and the file name (FN) are all encrypted by the mobile device using the shared secret key (g^αβ). The file size (FS) and a copy of the file name are stored in the TPM chip in a local table T. The encrypted data is then sent to the third party. The third party decrypts the received data and divides the file into d parts; each part consists of t chunks, and each chunk consists of n bits. Note that the file data is intended to be sent to d cloud servers. Next, the TPA generates the coding vector θ = [θ1, θ2, ..., θt] using recursive hash functions:

θi = H^i(PWD || FN || FS), where 1 ≤ i ≤ t, with H^1(x) = H(x) and H^i(x) = H(H^{i-1}(x)) for 2 ≤ i ≤ t.

Using θ, the TPA produces the Secrecy Codes (SC) F'[j] to obtain confidentiality in the cloud, coding each part with the following equation:

F'[j] = Σ_{i=1}^{t} θi · F[i][j], where 1 ≤ j ≤ d.

The TPA builds the MHT and calculates H(R), the hash value of the root node of the tree; the leaf nodes of the tree are the hash values of the F'[j]. The TPA then sends H(R) to the end-user, which saves it in its TPM chip, and the TPA deletes PWD. The end-user then signs H(R) (Sig_sk(H(R)) = (H(R))^α) and sends the signature to the TPA. The TPA computes the signature collection of the data blocks, φ = {σj}, where 1 ≤ j ≤ d and σj = [H(F'[j]) · u^{F'[j]}]^β, with u an element of G1 chosen randomly by the TPA. The TPA then sends Sig_sk(H(R)), F' = {F'[j] || H(FN + j), for all 1 ≤ j ≤ d}, and φ to the cloud provider, which distributes the values F'[j] to the corresponding cloud storage servers CSj. The TPA sends t back to the end-user and deletes θ and FN.

Figure 2: Setup Phase in Coding-based PDP Framework.

Integrity Verification Phase: The TPA generates a challenge message by choosing c random values from the set [1, d] to build the subset I, where each element i ∈ I is linked to a random value vi ∈ Zp generated by the TPA. The TPA then sends the challenge to the cloud service provider. The CSP receives the challenge and generates the hash value H(F'[i]) for each i ∈ I, together with the additional information Ωi needed to rebuild the hash value of the root of the MHT, H(R). The CSP also computes the following two values:

μ = Σ_{i=1}^{c} vi · F'[i] ∈ Zp,    ω = Π_{i=1}^{c} σi^{vi} ∈ G1

The CSP sends the proof, consisting of the values μ, ω, H(F'[i]), Ωi, and Sig_sk(H(R)), to the TPA, and the TPA checks the proof against the following two conditions. If both conditions hold, data integrity is verified and the TPA sends a True message to the end-user; if one or both conditions fail, data integrity is refuted and the TPA sends a False message to the end-user.

e(Sig_sk(H(R)), g) = e(H(R), g^α)    (1)
e(ω, g^α) = e(Π_{i=1}^{c} H(F'[i])^{vi} · u^μ, g^{αβ})    (2)

Figure 3: Integrity Verification Phase in Coding-based PDP.

File Retrieval Phase: The device prompts the user to enter PWD. Then, the TPA and the end-user exchange data using the Diffie-Hellman protocol to get a symmetric session key (Ks). After that, PWD, FN, FS, and t are encrypted using Ks and sent to the TPA. Next, the TPA requests the stored data from the cloud provider by sending it the value of each H(FN + j), where 1 ≤ j ≤ d. The CSP sends the values F'[j] back to the TPA. The TPA calculates θi = H^i(PWD || FN || FS) for 1 ≤ i ≤ t, and the file is then decoded using the following equation: F[i][j] = θ^{-1}[i] · F'[j]. Finally, the TPA sends the file F to the end-user over the secure channel and deletes the values of θ, PWD, FN, FS, and t.
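The setup-phase coding vector θ and secrecy codes F'[j] described above can be sketched as follows. SHA-1 is used because the paper's MHT discussion assumes it; interpreting digests as integers and representing chunks as integers are illustrative choices, since the paper leaves these encodings abstract:

```python
import hashlib

def coding_vector(pwd: str, fn: str, fs: int, t: int) -> list:
    """theta_i = H^i(PWD || FN || FS): a recursive hash chain,
    H^1(x) = H(x), H^i(x) = H(H^{i-1}(x))."""
    digest = hashlib.sha1((pwd + fn + str(fs)).encode()).digest()  # H^1(x)
    theta = [int.from_bytes(digest, "big")]
    for _ in range(2, t + 1):                                      # H^i = H(H^{i-1}(x))
        digest = hashlib.sha1(digest).digest()
        theta.append(int.from_bytes(digest, "big"))
    return theta

def secrecy_codes(parts, theta):
    """F'[j] = sum_{i=1}^{t} theta_i * F[i][j] for each part j,
    where parts[i][j] is chunk i of part j (t chunks, d parts)."""
    d = len(parts[0])
    return [sum(th * row[j] for th, row in zip(theta, parts))
            for j in range(d)]
```

Because θ is derived deterministically from PWD, FN, and FS, the TPA can delete it after setup and regenerate it at retrieval time, which is what allows the framework to avoid storing a decryption key.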

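The challenge generation and the scalar part μ of the integrity proof can be sketched with plain integers standing in for Zp elements; the group element ω and the pairing checks (1) and (2) require a bilinear-pairing library and are omitted. The function names here are illustrative, not from the paper:

```python
import random

def make_challenge(d: int, c: int, p: int) -> dict:
    """TPA side: choose c random block indices I from {1..d},
    each paired with a random coefficient v_i in Z_p."""
    indices = random.sample(range(1, d + 1), c)
    return {i: random.randrange(1, p) for i in indices}

def compute_mu(challenge: dict, f_coded: list, p: int) -> int:
    """CSP side: mu = sum_{i in I} v_i * F'[i]  (mod p),
    one component of the proof returned to the TPA."""
    return sum(v * f_coded[i - 1] for i, v in challenge.items()) % p
```

Since the v_i are fresh random values chosen by the verifier for every challenge, the CSP cannot precompute or replay proofs, which is the stateless-verification property stated in the design objectives.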
Figure 4: File Retrieval Phase in Coding-based PDP.

V. EVALUATION

As mentioned before, the main contribution of the proposed model to the PDP framework is the use of coding-based schemes to solve the performance degradation problem of the PDP framework that results from the increasing number of mobile clients offloading their jobs onto the TTP. This issue is solved by replacing the heavy, computation-intensive encryption operation with lighter and simpler computational mechanisms, resulting in better performance. The main changes in the setup phase between the standard PDP framework and the new proposed model can be summarized as follows:

1) The Merkle hash tree, the Diffie-Hellman key exchange protocol, the H(R) signature, and the signature collection φ operations are used similarly in both the standard and proposed models.
2) The storage load on the client decreases in the proposed model: FS, FN, and H(R) are stored on the TPM, while in the standard model dk is stored.
3) The size of the data to be encrypted and decrypted increases in the new model (F, FN, and PWD instead of only the file F), plus the new operation of entering PWD on the mobile device. These issues augment the computational cost on the mobile device and the TPA.
4) Key generation for the encryption function is replaced by the recursive hash functions used to create the coding vector θ; the encryption function is replaced by a vector product on the d parts of F; and erasure-code encoding with division of the file into N parts is replaced by division into d parts, each containing t symmetric chunks of data. These three changes show that the computational cost on the TPA is reduced in the proposed model.
5) There is no need to encrypt H(R) in the proposed model, since there is no dk to be encrypted alongside it. H(R) can be sent in plaintext because it is only used to verify the integrity of the encrypted file in the cloud; indeed, the distrusted cloud itself sends Ωi to the TPA in the integrity verification phase to rebuild H(R). This point reduces the computational overhead on the TPA and the client.
6) The TPA computes and concatenates F'[j] || H(FN + j) and sends them to the cloud provider instead of sending F = {mi}, which increases the computational cost on the TPA only negligibly.

The decrease of the storage load on the client is a relevant issue, and most of the time it is a true proposition. FN is a string of letters presenting the name of the file, each letter represented by an 8-bit ASCII code. FS presents the file size, a number giving the number of bytes in the file, which can be converted to binary as a small string of bits. H(R) is the hash value of the root of the Merkle hash tree; the MHT mostly uses the SHA-1 hash function, which produces a 160-bit digest, so the size of H(R) is 160 bits. In contrast, dk in usual asymmetric encryption functions like RSA and ElGamal is at least 1024 bits long, which is larger than the number of bits for FS, FN, and H(R), thus decreasing the storage load in the proposed model. The encryption operation on the mobile device and the decryption on the TPA operate on three values, F, FN, and PWD, in the proposed model, while in the standard model they operate on F alone, which increases the computational cost on the mobile device and the TPA. Still, comparing F with F, FN, and PWD, F is the only large value to be encrypted/decrypted, while FN and PWD contain small values, so the encryption/decryption cost in the proposed model is only slightly bigger than in the standard model.

For the file retrieval phase, the Diffie-Hellman key exchange exists in both the standard and proposed models. In the standard model, dk is encrypted/decrypted using Ks, which is a high-cost operation compared to the proposed model, since the size of dk can be at least 1024 bits, while the sizes of FN, FS, PWD, and t altogether are far less than dk. The PWD entry operation is an addition to the standard model's procedure.

The TPA in the proposed model has to calculate H(FN + j) for each j in order to send a file retrieval request to the cloud service provider, while in the standard model it just sends a request; this increases the computation cost for this step. On the other hand, file decryption in the standard model is done using an asymmetric decryption function with dk, while in the proposed model the file is decoded by producing the coding vector θ and then performing a matrix multiplication, considering θ as a t×1 matrix and the F'[j] values as a 1×j matrix, as in the equation F[i][j] = θ^{-1}[i] · F'[j]; note that the value of θ is inverted for each multiplication. This modification, including the H(FN + j) calculations, decreases the computational cost of the decryption operation in the new proposed model. From the above evaluation, we can say that the proposed coding-based PDP model has better performance and lower cost than the standard PDP model.

VI. CONCLUSIONS

Whilst current progress in the area of SDS is sparse at best, there is potential for work in this field in a number of areas. Initial work tends to focus on the development of software-defined storage solutions. These tend to take a large amount of inspiration from SDN implementations, such as the centralised management system. However, this may create issues when attempting to scale the software to larger clouds, and might even imply that the software will have difficulty in varying dynamic environments. A recurring theme throughout the work is the need for optimisation of SSDs, which currently offer huge performance gains over conventional magnetic drives. However, without proper optimisation, which is feasible with SDS, these benefits will not be realised and the capacity will remain underutilised.

The novelty of the area means that the security and privacy issues arising from software defined storage are yet to be properly identified and formally quantified. However, a number of issues can be inferred by examining problems inherited from the underlying technologies and their environments. Additional issues may be identified via an examination of similar systems, the integration of SDS within SDEs, and an analysis of the software's complexity. The privacy issues relate to those already inherent within cloud environments, but also concern the ways in which these may be catalysed or mitigated through the introduction of software defined systems.

In this paper we introduced an efficient software based MCC framework. The original PDP framework uses the concepts of the MHT, the Diffie-Hellman key exchange protocol, and standard encryption functions. In our proposed software model, which is based on the Coding based Scheme (CoS), we replace the major data encryptions performed by the TPA with less computationally intensive operations to enhance the performance of the TPA; the CoS scheme relies on linear coding and matrix multiplication, which are relatively lighter than standard asymmetric encryption functions.

VII. ACKNOWLEDGMENT

This work is funded by grant number (13-ELE252710) from the Long-Term National Science, Technology and Innovation Plan (LT-NSTIP), the King Abdul-Aziz City for Science and Technology (KACST), Kingdom of Saudi Arabia. We thank the Science and Technology Unit at Umm Al-Qura University for their continued logistics support.

REFERENCES

[1] Ling R (2004) The Mobile Connection: The Cell Phone's Impact on Society. Morgan Kaufmann, San Francisco.
[2] Dinh HT, Lee C, Niyato D, Wang P (2011) A survey of mobile cloud computing: architecture, applications, and approaches. Wireless Communications and Mobile Computing.
[3] Khan AN, Kiah MLM, Khan SU, Madani SA (2013) Towards secure mobile cloud computing: A survey. Future Generation Computer Systems 29:1278–1299.
[4] Nathani A, Chaudhary S, Somani G (2012) Policy based resource allocation in IaaS cloud. Future Generation Computer Systems 28:94–103.
[5] Hamdaqa M, Livogiannis T, Tahvildari L (2011) A Reference Model for Developing Cloud Applications. In: Leymann F, Ivanov I, van Sinderen M, Shishkov B (eds) CLOSER. SciTePress, pp 98–103.
[6] Mell P, Grance T (2011) The NIST definition of cloud computing (draft). NIST Special Publication 800-145.
[7] Yang J, Wang H, Wang J, Tan C, Yu D (2011) Provable data possession of resource-constrained mobile devices in cloud computing. Journal of Networks 6:1033–1040.
[8] Ren W, Yu L, Gao R, Xiong F (2011) Lightweight and compromise resilient storage outsourcing with distributed secure accessibility in mobile cloud computing. Tsinghua Science and Technology 16:520–528.
[9] Breiter G, Behrendt M, Gupta M, Moser SD, Schulze R, Sippli I, Spatzier T (2014) Software defined environments based on TOSCA in IBM cloud implementations. IBM Journal of Research and Development 58:1–10.
[10] Carlson M, Yoder A, Schoeb L, Deel D, Pratt C (2014) Software defined storage. SNIA.
[11] Li C, Brech BL, Crowder S, Dias DM, Franke H, Hogstrom M, Lindquist D, Pacifici G, Pappe S, Rajaraman B, Rao J, Ratnaparkhi RP, Smith RA, Williams MD (2014) Software defined environments: An introduction. IBM Journal of Research and Development 58:1–11.
[12] Ouyang J, Lin S, Jiang S, Hou Z, Wang Y, Wang Y (2014) SDF: Software-defined Flash for Web-scale Internet Storage Systems. SIGARCH Comput. Archit. News 42:471–484.
[13] Seshadri S, Muench PH, Chiu L, Koltsidas I, Ioannou N, Haas R, Liu Y, Mei M, Blinick S (2014) Software defined just-in-time caching in an enterprise storage system. IBM Journal of Research and Development 58:1–13.
[14] Thereska E, Ballani H, O'Shea G, Karagiannis T, Rowstron A, Talpey T, Black R, Zhu T (2013) IOFlow: A software-defined storage architecture. In: Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, Farminton, Pennsylvania, pp 182–196.
[15] Zhou R, Sivathanu S, Kim J, Tsai B, Li T (2014) An end-to-end analysis of file system features on sparse virtual disks. In: Proceedings of the 28th ACM International Conference on Supercomputing, Munich, Germany, pp 231–240.
[16] Deswarte Y, Quisquater J-J, Saidane A (2004) Remote Integrity Checking. In: Integrity and Internal Control in Information Systems VI, IFIP TC11/WG11.5 Sixth Working Conference on Integrity and Internal Control in Information Systems (IICIS), 13–14 November 2003, Lausanne, Switzerland, 140:1–11.
[17] Gazzoni Filho DL, Barreto PSLM (2006) Demonstrating data possession and uncheatable data transfer.
[18] Ateniese G, Burns R, Curtmola R, Herring J, Kissner L, Peterson Z, Song D (2007) Provable data possession at untrusted stores. In: Proceedings of the 14th ACM Conference on Computer and Communications Security. ACM, pp 598–609.
[19] Juels A, Kaliski Jr BS (2007) PORs: Proofs of retrievability for large files. In: Proceedings of the 14th ACM Conference on Computer and Communications Security. ACM, pp 584–597.
[20] Shacham H, Waters B (2008) Compact Proofs of Retrievability. In: Advances in Cryptology – ASIACRYPT 2008, Melbourne, Australia, December 7–11, 2008. Springer, pp 90–107.
[21] Wang Q, Wang C, Li J, Ren K, Lou W (2009) Enabling public verifiability and data dynamics for storage security in cloud computing. In: Backes M, Ning P (eds) Computer Security – ESORICS 2009, 14th European Symposium on Research in Computer Security, Saint-Malo, France, September 21–23, 2009. Springer, pp 355–370.
[22] Ateniese G, Kamara S, Katz J (2009) Proofs of storage from homomorphic identification protocols. In: Matsui M (ed) Advances in Cryptology – ASIACRYPT 2009, 15th International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan, December 6–10, 2009. Springer, pp 319–333.
[23] Wang C, Wang Q, Ren K, Lou W (2010) Privacy-preserving public auditing for data storage security in cloud computing. In: INFOCOM, 2010 Proceedings IEEE. IEEE, pp 1–9.
[24] Chang E-C, Xu J (2008) Remote Integrity Check with Dishonest Storage Server. In: Computer Security – ESORICS 2008, 13th European Symposium on Research in Computer Security, Málaga, Spain, October 6–8, 2008. Springer, pp 223–237.
[25] Kinney SL (2006) Trusted Platform Module Basics: Using TPM in Embedded Systems. Newnes.
