SPECIAL SECTION ON TRUST MANAGEMENT IN PERVASIVE SOCIAL NETWORKING (TRUPSN)

Received November 27, 2016, accepted December 23, 2016, date of publication January 17, 2017, date of current version March 13, 2017. Digital Object Identifier 10.1109/ACCESS.2017.2654378

Towards a Trust Prediction Framework for Cloud Services Based on PSO-Driven Neural Network

CHENGYING MAO1, RONGRU LIN1, CHANGFU XU1, AND QIANG HE2

1 School of Software and Communication Engineering, Jiangxi University of Finance and Economics, Nanchang 330013, China
2 Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne, VIC 3122, Australia

Corresponding author: C. Mao ([email protected]) This work was supported in part by the National Natural Science Foundation of China under Grant 61462030, in part by the Natural Science Foundation of Jiangxi Province under Grant 20162BCB23036 and Grant 20151BAB207018, and in part by the Science Foundation of Jiangxi Educational Committee under Grant GJJ150465. This paper was presented at the 4th International Conference on Advanced Cloud and Big Data (CBD’16) [41].

ABSTRACT Trustworthiness is an important indicator for service selection and recommendation in the cloud environment. However, predicting the trust rate of a cloud service based on its multifaceted quality of service (QoS) is not an easy task due to the complicated and non-linear relations between a service's QoS values and its final trust rate. According to existing studies, the adoption of intelligent techniques is a rational way to attack this problem. Neural network (NN) has been validated as an effective way to predict the trust rate of a service. However, the parameter setting of NN, which plays an important role in its prediction performance, has not been properly addressed yet. In this paper, particle swarm optimization (PSO) is introduced to enhance NN by optimizing its initial settings. In the proposed hybrid prediction algorithm, named PSO-NN, PSO is used to search for appropriate parameters for NN so as to realize accurate trust prediction of cloud services. In order to investigate the effectiveness of PSO-NN, extensive experiments are performed on a public QoS data set, together with in-depth comparison analysis. The results show that our proposed approach performs better than basic classification methods in most cases, and significantly outperforms the basic NN in terms of prediction precision. In addition, PSO-NN demonstrates better stability than the basic NN.

INDEX TERMS Cloud services, quality of service (QoS), trust prediction, neural network, particle swarm optimization (PSO).

I. INTRODUCTION

In the era of cloud computing, software systems are usually built and delivered based on the pay-as-you-go model. In the cloud environment, traditional software functions are usually encapsulated as services. Software systems are constructed by combining services hosted on cloud platforms, and delivered as software as a service (SaaS) [1] over the Internet. In this way, the coupling between the logical modules of a software system is significantly reduced, and heterogeneous and distributed services can be integrated flexibly. Therefore, SaaS provides a more flexible, elastic and cost-effective way to construct software systems in the cloud environment. Once the business logic (also known as the workflow [2]) of an application is determined, the main task of building the system is to discover and select an appropriate concrete service for each functional point (also called an abstract service) in the workflow. For a given abstract service, the functionality is

the primary issue that must be taken into consideration when selecting concrete services. However, for each abstract service, there might be multiple concrete services that can satisfy the required functionality. These services offer the same functionality, usually with different quality-of-service (QoS) values. When selecting concrete services, system engineers must pay attention to the QoS values and the comprehensive trustworthiness of those services. In order to build highly reliable service systems, system engineers must properly predict the trustworthiness of the services [3] during the process of service selection. In general, the trustworthiness of a cloud service is reflected by several QoS attributes from different perspectives, including availability, reliability, response time, throughput and so on. The QoS values of a service can be collected not only from the corresponding service level agreement (SLA) published by the service provider, but also by monitoring services on the service requester's side.


Subsequently, to build a highly reliable SaaS, system engineers need to predict the trust of each candidate service according to its QoS attributes, and then select the appropriate ones [4]. Due to this multi-attribute nature, trust prediction can be viewed as a multi-attribute decision making (MADM) problem. Accordingly, some traditional decision-making methods, such as AHP [5], ANP [6], PROMETHEE [7] and VIKOR [8], have been employed to facilitate service selection based on service trust. On the other hand, ranking methods have also been applied to help system engineers select reliable concrete services [9]–[11]. Very often, system engineers need to assign each candidate service to a specific trust level (or rate) before service selection. However, the evaluation of services' trust rates is not an easy task. In most cases, manual interaction is needed to obtain results with high confidence. When the number of candidate services reaches a certain quantity, it is not practical to evaluate the trust rates of services manually one by one. In real-world application scenarios, the evaluation of some services is performed manually first. Then, based on those evaluated services, intelligent data analysis methods are adopted to predict the trust rates of new services. In recent years, how to utilize intelligent techniques to predict the trust rates of services has been widely studied. Classic prediction models such as regression analysis [12] have been adopted to attack this problem. Bayesian network, as a popular probability reasoning model, has also been used for predicting the trust and reputation of services [13]–[15]. In addition, some well-known decision-making methods such as fuzzy technology [12], [16], game theoretical models [17] and evidential reasoning [18] have been used to handle the uncertainty during the process of trust prediction. However, the accuracy of these methods is not ideal. According to the comprehensive comparison reported in [19], neural network (NN) achieves better results than classic classification algorithms. The underlying reason is that a neural network can model the non-linear and complicated relations between the QoS values of a service and its trust rate. However, the performance of a neural network is largely affected by its initial parameter settings. Moreover, the parameter configuration space of a neural network is usually very large. As a result, finding an appropriate parameter setting for such a neural network is a critical task. Previous studies have reported that evolutionary algorithms such as the genetic algorithm can effectively assist in configuring a neural network to achieve high performance [20]–[22]. In this paper, we employ particle swarm optimization (PSO) [23] to search for an appropriate parameter setting for a neural network to predict the trust rates of cloud services. Comprehensive experimental analysis is performed on a public data set to evaluate the proposed approach, named PSO-NN. The experimental results demonstrate that the prediction accuracy of the improved neural network (PSO-NN) is significantly better than that of the traditional neural network for the service trust prediction problem. The rest of the paper is organized as follows. Section II defines the trust rate problem for cloud services and briefly

introduces the neural network and particle swarm optimization used in our study. The trust rate prediction framework and the related techniques are introduced in Section III. Section IV presents the experimental results and compares our approach with other prevalent methods. The threats to validity and related work are discussed in Section V. Finally, Section VI concludes this study.

II. PRELIMINARIES

A. QoS TRUST RATE PREDICTION PROBLEM

The trustworthiness of services, which is a comprehensive quality measure of services, is an important issue that needs to be carefully addressed to support service selection [24]. In existing studies, it is usually evaluated or predicted in the following two ways: QoS monitoring and user feedback [25], [26]. In practice, it is difficult to obtain reliable and comprehensive user feedback. Thus, in this research, we predict the trust rate of a service based on its monitored QoS values. For a cloud service cs_i (1 ≤ i ≤ n), assume that its QoS is reflected by m attributes; its QoS attributes can then be represented by a tuple qos(cs_i) = (q_{1i}, q_{2i}, ..., q_{mi}). The trust rate is a measure that scales the trustworthiness of a service for the reference of service selection. It is usually specified as one of several discrete numbers (or levels) for distinction and identification. The trust rate is a comprehensive indicator of the performance of the m different QoS attributes. Here, we denote the trust rate of cs_i as rate(cs_i) = r_i (1 ≤ i ≤ n). Thus, the complete record with the QoS values and trust rate of cs_i can be expressed as rec(cs_i) = (qos(cs_i), rate(cs_i)) = (q_{1i}, q_{2i}, ..., q_{mi}, r_i). For a set of QoS and trust rate records S = {rec(cs_i)} (i = 1, ..., n), suppose that each element in a subset S_train ⊂ S has a complete record, that is, the trust rate has been evaluated for each service in S_train. Conversely, the trust rates in the remaining service records (i.e., S − S_train) have not been determined. In the remainder of this paper, this subset of services with empty trust rates is denoted as S_test. Thus, the prediction problem of QoS trust rate for cloud services can be defined as follows. Given a training set S_train, in which the QoS values and trust rate of each service are completely available, the task is to give a trust rate for each service in the test set S_test according to its QoS values and the experience from S_train. In other words, the problem is to predict the trust rate of a given service through knowledge learned from the existing QoS and trust records.
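For illustration, the records and the S_train/S_test split can be represented as follows. This is a minimal Python sketch; the class and function names are ours and are not part of the paper's implementation.

```python
# Illustrative sketch: one way to represent rec(cs_i) = (q_1i, ..., q_mi, r_i)
# and the S_train / S_test split described above.
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QoSRecord:
    qos: List[float]            # (q_1i, ..., q_mi): the m monitored QoS attribute values
    rate: Optional[int] = None  # r_i: the evaluated trust rate, or None if still unknown

def split_records(records: List[QoSRecord], train_ratio: float, seed: int = 0):
    """Split the full record set S into S_train (rated) and S_test (to be predicted)."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]   # S_train, S_test
```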

B. BRIEF REVIEW ON ARTIFICIAL NEURAL NETWORK

In the fields of machine learning and cognitive science, an artificial neural network (ANN, also known as a neural network) is a computational model inspired by biological nervous systems such as animal brains [27]. It is an abstraction and simulation of the structure and features of biological neural networks. In an artificial neural network, the fundamental processing elements are artificial nodes,


known as ‘‘neurons’’. They are connected together to form a network to mimic the working mechanism of a biological neural network. Basically, a neuron receives inputs from other neurons, combines them in a certain way, performs a nonlinear operation, and then outputs the result. Neural networks are usually organized in layers. In general, the neurons in a neural network are classified into three types of layers: input layer, hidden layer(s) and output layer. The connections between the neurons in two adjacent layers are designed to bridge the neurons and exchange messages. the input layer is mainly responsible for receiving input data or patterns, and communicates to one or more hidden layers where the actual processing is done via the connections. The hidden layer then links to the output layer to output the final results. In order to simulate the intelligence of biological brain, the transformation from the input (hidden) layer to the hidden (output) layer is modeled as a non-linear or linear transfer function. Meanwhile, in order to enhance the adaption and learning ability of a neural network, the connection weights between neurons are dynamically adjusted during the learning process. Neural network has several advantages. One of most recognized merits is the fact that it can learn and pick up rules and patterns from the training data set. It is considered as a typical non-linear statistical data processing model, with which the complex relationships between inputs and outputs can be properly represented. For this reason, it has been widely applied for solving real-world problems, such as prediction, clustering and classification. As reported in [19], neural network is also useful for predicting the trustworthiness of cloud services. In this study, we expand from the model introduced in [19] and attempt to employ the particle swarm optimization (PSO) technique to improve the prediction performance.

C. PARTICLE SWARM OPTIMIZATION

In recent years, meta-heuristic search algorithms have been widely adopted for solving optimization problems. PSO is a population-based meta-heuristic optimization approach inspired by the social behavior of birds within a flock [23]. In the PSO algorithm, each potential solution to the problem is called a particle and the population of solutions is called a swarm. During the iteration process, each particle maintains its own current position, present velocity and personal best position. The iterative process leads to a stochastic manipulation of velocities and flying directions according to the best experiences of the swarm to search for the global optimum in the solution space. The velocity and position of the jth particle at generation t can be updated by the following two formulas:

v_j(t) = w \cdot v_j(t-1) + c_1 \cdot r_{1j} \cdot (pbest_j - x_j(t-1)) + c_2 \cdot r_{2j} \cdot (gbest - x_j(t-1))    (1)

x_j(t) = x_j(t-1) + v_j(t)    (2)

where x_j is the position of the jth particle and v_j represents the velocity of particle j. The inertia weight w controls the impact of the previous history on the new velocity. In general, it gradually reduces from the maximum value (w_max) to the minimum (w_min). Parameters c_1 and c_2 are the acceleration constants reflecting the weighting of the stochastic acceleration terms that pull each particle j toward the pbest_j and gbest positions, respectively. Here, pbest_j represents the personal best solution (position) found by the jth particle and gbest is the global best solution (position). Parameters r_{1j} and r_{2j} are two random numbers in the range [0, 1].

FIGURE 1. PSO-driven NN for predicting the trust rate of cloud services.
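For illustration, one update step of Eqs. (1) and (2) for a single particle can be sketched as follows; the function and parameter names are ours.

```python
# Sketch of the per-particle velocity/position update in Eqs. (1)-(2).
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    """x, v, pbest, gbest: NumPy vectors of equal length; w: current inertia weight."""
    rng = rng if rng is not None else np.random.default_rng()
    r1, r2 = rng.random(), rng.random()    # r_1j, r_2j drawn uniformly from [0, 1]
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1)
    return x + v_new, v_new                                         # Eq. (2)
```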

III. TRUST RATE PREDICTION FRAMEWORK

In this section, the overall framework of the PSO-based trust rate prediction model is introduced first, followed by its supporting mechanisms.

A. THE OVERALL FRAMEWORK

The trust rate prediction framework is illustrated in Figure 1, and consists of 11 steps. At the preparation stage, the input vector and output vector of the trust rate prediction problem are analyzed. Accordingly, the structure of the neural network is constructed. Here, the structure includes the number of layers in the neural network and the number of neurons in each layer. Apart from the topology structure, the weights of the connections between neurons and the thresholds on the hidden (output) nodes are also important features of a neural network. Based on the above analysis, the key parameters of the neural network are encoded as the position vector of a particle in the PSO algorithm. In other words, each particle in a population (called a swarm) represents an initial setting of the neural network for the trust prediction problem. As reported in existing work, neural network has a strong ability to model complex data in a non-linear manner. However, its performance highly depends on the initial setting of network parameters. In this study, PSO,


a well-known swarm intelligence algorithm, is employed to find appropriate initial parameters of the neural network. Based on these settings, the neural network can be further trained with classic strategies such as back-propagation (BP) to generate the final prediction model. At the optimization stage, each particle in the swarm (i.e., a population of solutions) is decoded into a parameter setting of the neural network, and the network is then further trained with the training data. At the same time, the prediction error of each particle is evaluated to determine its pbest. Consequently, the global best solution (gbest) is updated accordingly. Finally, when the termination condition is fulfilled, gbest is taken as the best initial setting of the neural network for the trust rate prediction on the test data.

B. ALGORITHM DESCRIPTION

Back-propagation (BP) learning is a popular and effective approach for training a neural network, and has been used to solve a wide variety of problems [27], such as pattern recognition, data processing, time series prediction and so on. Although it has the abilities of self-learning, self-adaptation and generalization, it still has inherent disadvantages: slow convergence and the risk of being trapped in a local optimum, which may lead to inaccurate results. In general, the initial weights of the connections between neurons in a neural network are assigned random values. Thus, the network may produce poor prediction results due to this local optimum trap. In order to improve the precision of trust rate prediction for cloud services, we employ PSO to search for suitable connection weights for the initial structure of the neural network. The details of our solution are described by the algorithm named ''PSO-NN for Trust Rate Prediction'' (hereafter referred to as PSO-NN). As presented in Algorithm 1, PSO-NN consists of three parts. The first part (Line 1) is the preparation stage, at which the connection weights and node thresholds are combined into the position vector of a particle. The second part (Lines 2-17) optimizes the initial parameters (including weights and thresholds) of the given structure of the neural network. Firstly, for each particle in the swarm, its initial position and velocity are randomly generated (Line 2). During the optimization process, the neural networks represented by the particles are trained with the basic neural network and the corresponding errors are treated as fitness values (Lines 5-7). Accordingly, the pbest of each particle is updated (Lines 8-10). Subsequently, the particle with the best fitness is selected as gbest (Line 12). Further, the position and velocity of each particle are updated with its pbest value and the global gbest (Lines 13-16). The last part (Lines 18-22) is for the further application of the best initial settings of the neural network. After the PSO-based search, the neural network with the best initial configuration is selected (Line 18). Then, the selected neural network is further trained with the BP learning algorithm (Line 19). Finally, the neural network is used to predict the trust rates of the given services according to their QoS values (Lines 20-22).

Algorithm 1 PSO-NN for Trust Rate Prediction
Input: (1) the population size of swarm: s; (2) the QoS records of cloud services, S = S_train ∪ S_test; (3) the basic parameters of PSO and BPNN.
Output: trust rates of the given services.
1.  encode connection weights and thresholds on hidden nodes to position vector of particle;
2.  randomly generate s position vectors as the initial solutions in PSO;
3.  repeat
4.      for each particle j (1 ≤ j ≤ s) in swarm do
5.          decode its position vector to the initial settings of neural network;
6.          call the basic NN to train network on training data set S_train;
7.          calculate prediction error on S_train as the fitness of this particle;
8.          if its fitness is better than that of pbest_j then
9.              set the current particle as the new pbest_j;
10.         end if
11.     end for
12.     set the particle with the best fitness as gbest;
13.     for each particle in swarm do
14.         update particle's velocity according to formula (1);
15.         update particle's position according to formula (2);
16.     end for
17. until the termination condition is satisfied
18. select gbest as the best initial setting of neural network;
19. use BP method to further train the network on S_train;
20. for each service cs_i in S_test do
21.     predict rate(cs_i) with the trained network;
22. end for
23. return {rate(cs_i) | cs_i ∈ S_test}.
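A simplified structural sketch of Algorithm 1 is given below; it is not the paper's MATLAB implementation. For brevity, the fitness of a particle here is the raw mean squared error of the decoded network on S_train, without the inner BP training of Line 6 or the final BP fine-tuning of Line 19, and the hyper-parameter values (c1, c2, w_max, w_min, swarm size) are placeholders rather than the settings of Table 1.

```python
# Structural sketch of the PSO search over the NN's initial weights and thresholds.
import numpy as np

rng = np.random.default_rng(0)
M, H = 9, 10                       # QoS attributes (input nodes), hidden nodes (assumed sizes)
DIM = M * H + H + H * 1 + 1        # w_ih, hidden thresholds, w_ho, output threshold

def decode(p):
    """Split a position vector into the weight matrices and thresholds of the NN."""
    i = 0
    w_ih = p[i:i + M * H].reshape(M, H); i += M * H
    b_h  = p[i:i + H];                   i += H
    w_ho = p[i:i + H].reshape(H, 1);     i += H
    b_o  = p[i:i + 1]
    return w_ih, b_h, w_ho, b_o

def predict(p, X):
    w_ih, b_h, w_ho, b_o = decode(p)
    hidden = 1.0 / (1.0 + np.exp(-(X @ w_ih + b_h)))
    return (hidden @ w_ho + b_o).ravel()

def fitness(p, X, y):                  # Eq. (5): mean squared prediction error on S_train
    return np.mean((y - predict(p, X)) ** 2)

def pso_nn(X, y, swarm_size=30, g_max=250, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    pos = rng.uniform(-1, 1, (swarm_size, DIM))
    vel = np.zeros((swarm_size, DIM))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for t in range(g_max):
        w = (g_max - t) / g_max * (w_max - w_min) + w_min      # Eq. (6): linear inertia schedule
        for j in range(swarm_size):
            r1, r2 = rng.random(), rng.random()
            vel[j] = w * vel[j] + c1 * r1 * (pbest[j] - pos[j]) + c2 * r2 * (gbest - pos[j])
            pos[j] = pos[j] + vel[j]
            f = fitness(pos[j], X, y)
            if f < pbest_fit[j]:
                pbest[j], pbest_fit[j] = pos[j].copy(), f
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest    # best initial setting; Algorithm 1 then fine-tunes this network with BP
```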

C. TECHNICAL DETAILS

This section presents the supporting mechanisms for Algorithm 1.

(1) The encoding for particle and neural network. In order to find appropriate connection weights and thresholds (also known as biases) for the hidden and output nodes, the position vector of a particle is needed to store the parameter information of the neural network. Take the three-layer neural network in Figure 2(a) as an example. The numbers of nodes in the input layer, hidden layer and output layer are 3, 4 and 2, respectively. The connection weights can be classified into two types: 1) {w_ih(i, j) | 1 ≤ i ≤ 3, 1 ≤ j ≤ 4}, which represents the weights from input nodes to hidden nodes, and 2) {w_ho(j, k) | 1 ≤ j ≤ 4, 1 ≤ k ≤ 2}, which is the weight set for the connections between the hidden layer and the output layer. Meanwhile, the thresholds (or biases) on the hidden nodes and output nodes, such as {θ_{1,j} | 1 ≤ j ≤ 4} and {θ_{2,k} | 1 ≤ k ≤ 2}, are also important parameters of a neural network. Accordingly, as shown in Figure 2(b), the position vector of a particle consists of the following four parts: {w_ih(i, j)}, {w_ho(j, k)}, {θ_{1,j}} and {θ_{2,k}}.

FIGURE 2. An illustration of encoding the parameters of NN into the position vector of a particle. (a) An example NN. (b) The corresponding coding of the above NN.

Apart from the mapping between the neural network and the particle position, the QoS record rec(cs_i) (1 ≤ i ≤ n) needs to be decomposed into the input and output of the neural network. For the QoS data of a service, i.e., qos(cs_i) = (q_{1i}, q_{2i}, ..., q_{mi}), it is firstly converted into (q'_{1i}, q'_{2i}, ..., q'_{mi}) by means of min-max normalization (Eq. (3)), which is then mapped to the nodes in the input layer of the neural network: x_1 = q'_{1i}, x_2 = q'_{2i}, ..., x_m = q'_{mi}.

q'_{ki} = \frac{q_{ki} - q_{min}}{q_{max} - q_{min}}    (3)

where q_min and q_max are the minimum and maximum values of q_{ki} (1 ≤ k ≤ m). Accordingly, the trust rate of the given service, i.e., rate(cs_i), is represented by an integer number (e.g., from 1 to 4). For this reason, there is only one node in the output layer of the neural network. However, the output node of the neural network will produce a decimal number rather than an integer in most cases. Therefore, an additional step is necessary to transform the decimal output into a trust rate (i.e., the ordinal data). Suppose that the trust rate (r_i) of a cloud service (cs_i) varies from 1 to l, i.e., 1 ≤ r_i ≤ l; the real output value of the neural network for the service, i.e., r_i^{real}, needs to be adjusted to an integer between 1 and l following the rule shown in Eq. (4). The transformed output is then taken as the predicted trust rate of the given service.

r'_i = \begin{cases} 1, & \text{if } r_i^{real} < 1 \\ \mathrm{round}(r_i^{real}), & \text{if } 1 \le r_i^{real} \le l \\ l, & \text{otherwise} \end{cases}    (4)

where the round() function is used to round a number with a fractional part to an integer.

(2) Fitness function. As mentioned earlier, a particle in PSO stands for an initial parameter setting of the neural network. During the particle swarm optimization, once a neural network is initialized with the parameters of a particle, it can be further trained with the BP strategy. For each particle p_j (1 ≤ j ≤ s), its fitness is modeled as the prediction error, on the training data set, of the trained neural network whose initial setting is represented by p_j. For each service cs_i in S_train, the prediction error is calculated according to the difference between the real output of the neural network and the expected trust rate, as given by Eq. (5).

fitness(p_j) = \frac{1}{|S_{train}|} \sum_{i=1}^{|S_{train}|} (r_i - r_i^{real})^2,    (5)

where S_train is the training data set of QoS trust rate records, r_i^{real} is the real output of the neural network for the ith QoS record, and r_i represents the expected trust rate of cs_i. According to the definition in Eq. (5), a particle with a lower fitness provides a better setting for the neural network.

(3) Configuration of search process. As shown in Algorithm 1, the process of particle swarm optimization is controlled by the termination condition. In general, this condition can be expressed in the following two ways: the first is to set the largest number (G_max) of iterations to limit the updating of particles. The second is to observe the changes of the global best fitness over all particles; if gbest does not change over the last several steps, the optimization process of PSO has stabilized and the iteration process can terminate. In this paper, we adopt the first method to control the execution of particle swarm optimization for demonstration purposes. On the other hand, the parameter w (i.e., the inertia weight) plays an important role in the diversity of the solutions (or particles) and the convergence speed of the search process. At the initial stage, the search algorithm must ensure the diversity of the solutions, that is, the particles must have a high global search ability, so a relatively high value of w is needed. At the late stage of the search process, a relatively low value of w is necessary to enhance the local search ability of the particles. Based on the above analysis, we adopt a linear adjustment method to control the inertia weight [28], [29].

w(t) = \frac{G_{max} - t}{G_{max}} (w_{max} - w_{min}) + w_{min},    (6)

where t is the current generation of the PSO search process and G_max is the predefined maximum generation. w_max and w_min are the initial value and final value of the inertia weight, respectively.
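For illustration, the input/output mapping of Eqs. (3) and (4) can be sketched as follows. Per-attribute (column-wise) normalization is our assumption, since Eq. (3) does not state explicitly over which index the minimum and maximum are taken.

```python
# Sketch of min-max normalization (Eq. (3)) and of the conversion of the
# network's real-valued output to an ordinal trust rate in [1, l] (Eq. (4)).
import numpy as np

def min_max_normalize(Q):
    """Q: (n, m) matrix of raw QoS values; each attribute (column) is scaled to [0, 1]."""
    q_min, q_max = Q.min(axis=0), Q.max(axis=0)
    return (Q - q_min) / (q_max - q_min)

def to_trust_rate(r_real, l=4):
    """Round the real-valued NN output and clip it to an integer trust rate between 1 and l."""
    return int(np.clip(np.round(r_real), 1, l))
```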

IV. EXPERIMENTAL ANALYSIS

A. EXPERIMENT SETUP

In this section, we validate the effectiveness of PSO-NN through experiments on a public QoS data set, QWS.1

1 http://www.uoguelph.ca/~qmahmoud/qws/index.html

This data set has been extensively used in studies on the quality of Web services. It contains 365 real Web services that exist on public Web sites. The QoS records of these services were collected by the tool Web Service Crawler Engine (WSCE)


developed by Al-Masri and Mahmoud [30]. Each service was tested over a ten-minute period for three consecutive days. Each QoS record contains nine attributes, such as response time, availability, reliability and so on. Al-Masri and Mahmoud also evaluated the trust rate of each service by a comprehensive quality score, WsRF (Web service relevancy function). The generated rates have four levels: platinum (high quality), gold, silver and bronze (low quality), which are represented by the numbers from 1 to 4, respectively. In our experiments, several classic classification algorithms, such as BayesNet, CART and J48, were used as reference algorithms for the comparative analysis. We used their implementations in WEKA2 to predict the trust rates of services. With regard to the fuzzy linear regression algorithm (FLRA), we directly used the results reported in [12] for comparison. The work in this paper mainly aims to improve the performance of BPNN for the trust rate prediction problem. Hence, it is necessary to carry out a detailed comparison between our PSO-NN algorithm and the traditional BPNN. In the experiments, we implemented PSO-NN in MATLAB3 and used the standard implementation of BPNN in the neural network toolbox of MATLAB. The parameter settings of PSO and the neural network are shown in Table 1.

TABLE 1. The parameter settings in the comparative experiment.

2 http://www.cs.waikato.ac.nz/ml/weka/index.html
3 http://www.mathworks.com

The experiments were performed on a machine running 64-bit Windows 7. When comparing PSO-NN and BPNN, we repeated each experiment 1000 times and took the average of each algorithm's results. To perform quantitative comparison, precision was used to measure the performance of each prediction algorithm.

precision = \frac{1}{|S_{test}|} \sum_{i=1}^{|S_{test}|} match(r_i, r'_i) \times 100\%,    (7)

where S_test is the data set for testing and match() is a binary function: if the trust rate (r'_i) predicted by PSO-NN or BPNN equals the expected trust rate (r_i), match(r_i, r'_i) returns 1, and otherwise 0.
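For illustration, the precision measure of Eq. (7) amounts to the following computation (a sketch, not the original MATLAB code):

```python
# Sketch of Eq. (7): the fraction of test services whose predicted trust rate
# matches the expected one, expressed as a percentage.
def precision(expected, predicted):
    """expected, predicted: equal-length sequences of integer trust rates for S_test."""
    matches = sum(1 for r, r_hat in zip(expected, predicted) if r == r_hat)
    return 100.0 * matches / len(expected)

# Example: precision([1, 2, 3, 4], [1, 2, 2, 4]) -> 75.0
```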


B. PSO-NN vs. BASIC ALGORITHMS

The traditional classification algorithms are, in principle, suitable for solving the trust rate prediction problem. In order to compare PSO-NN with them, four classic classification algorithms, namely BayesNet, CART, J48 and SVM, were selected. In addition, the result of the FLRA method [12] is also included here for comparison.

TABLE 2. Comparison of prediction precision (%) between PSO-NN and some basic algorithms.
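For illustration, the comparison protocol behind Table 2 can be sketched as follows. The paper uses the WEKA implementations of these classifiers; the scikit-learn estimators below are only rough stand-ins (DecisionTreeClassifier for CART, GaussianNB for BayesNet; J48 has no direct scikit-learn counterpart), so this sketch illustrates the protocol rather than reproducing the reported numbers.

```python
# Hedged sketch: train each baseline on a given fraction of the records and
# measure precision (%) on the remaining services.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

def compare_baselines(X, y, train_ratio):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=train_ratio, random_state=0)
    models = {
        "BayesNet-like": GaussianNB(),
        "CART-like": DecisionTreeClassifier(),
        "SVM": SVC(),
    }
    results = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        results[name] = 100.0 * (model.predict(X_te) == np.asarray(y_te)).mean()
    return results   # precision (%) per baseline at this training ratio
```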

As shown in Table 2, with a few exceptions, the prediction precisions of the four typical classification algorithms and PSO-NN increase with the growth of the ratio of the training data set. For BayesNet, CART and J48, the results of the PSO-NN algorithm are always better than theirs regardless of the size of the training data set. When the ratio of the training set is 40%, the precisions of these three algorithms are all lower than 60%, whereas PSO-NN reaches about 86.5%. When the ratio increases to 90%, these three algorithms' results are still below 70%, whereas the precision of PSO-NN exceeds 88%. SVM, a well-known classification algorithm, has been widely applied to category prediction problems [34]. For the problem in this paper, its precision is higher than 70% for all training ratios. In particular, its precision is above 80% when the training ratio exceeds 70%. Although the precision of SVM is better than that of BayesNet, CART and J48, PSO-NN always achieves better performance than SVM in terms of prediction precision. Since 10-fold cross validation is used for the experimental analysis in [12], we can only compare PSO-NN with FLRA in the case of ratio = 90%. In this case, PSO-NN outperforms FLRA by about 9% in terms of prediction precision. The experimental results show that PSO-NN significantly outperforms those classification algorithms in terms of prediction precision.

C. PSO-NN vs. BPNN

Our work in this study mainly lies in the improvement of the classic back-propagation neural network. Thus, it is important to perform a detailed comparison between PSO-NN and the classic BPNN. According to the results in Table 3, PSO-NN always achieves a higher prediction precision than BPNN regardless of the ratio of the training data set. The difference in prediction precision between the two methods is about


TABLE 3. Comparison on prediction precision (%) between PSO-NN and BPNN.

four to five percent. In order to confirm the advantage of PSO-NN, an analysis of variance (ANOVA) test is used to evaluate the difference between the two algorithms. As shown in the fifth column of Table 3, the p-values of the statistical test on the two algorithms' results are far less than 0.01 for all training set ratios. Therefore, we can conclude that our PSO-NN is significantly better than the traditional BPNN for the problem of trust rate prediction for cloud services. The above analysis only provides a coarse comparison between PSO-NN and BPNN in terms of general statistics such as the mean value. A more in-depth analysis that considers the precision distribution of each algorithm's results is necessary. The corresponding results of PSO-NN and BPNN with six different ratios of the training data set are illustrated in Figure 3. When the ratio of the training data set increases from 40% to 80%, i.e., in Figures 3(a)-(e), the

distribution range of PSO-NN's precisions in 1000 trials is always smaller than that of BPNN. Meanwhile, the mean value of PSO-NN's precisions is also significantly higher than BPNN's. Thus, PSO-NN demonstrates better performance and higher stability than BPNN. In the case of ratio = 90% (Figure 3(f)), although the best result of PSO-NN's precisions is comparable to that of BPNN, PSO-NN's distribution range of prediction precisions is much smaller than that of BPNN, and its average precision is significantly higher than that of BPNN. Furthermore, the impact on prediction precision caused by different ratios of the training set is also studied in this section. The changing trends of the two algorithms with respect to different ratios are illustrated in Figure 4. Regardless of the population size, the prediction precision of the PSO-NN algorithm gradually increases with the growth of the training data ratio. When the ratio increases from 40% to 90%, the prediction precision increases by about one percent. On the other hand, according to Figures 4(a)-4(d), the difference between PSO-NN and BPNN gradually enlarges with the growth of the population size of particles in PSO-NN. Based on the above comparison, PSO-NN always achieves a higher precision than BPNN regardless of the ratio of the training data set. From the perspective of computational efficiency, due to the additional step of PSO-based searching, PSO-NN takes a longer time than BPNN to train a network for prediction. Fortunately, the training of a neural network is usually an off-line task and thus does not affect the efficiency of online trust prediction.
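For illustration, the significance check over repeated runs can be sketched with a standard one-way ANOVA via SciPy's f_oneway; the input arrays are placeholders for the 1000 collected precision values.

```python
# Sketch of the ANOVA comparison between the two algorithms' precision distributions.
from scipy.stats import f_oneway

def compare_runs(psonn_precisions, bpnn_precisions, alpha=0.01):
    """Each argument: list of precision values (%) from repeated trials (e.g. 1000 runs)."""
    stat, p_value = f_oneway(psonn_precisions, bpnn_precisions)
    return p_value, p_value < alpha   # True => the difference is significant at the 1% level
```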

FIGURE 3. Comparison on the precision distributions of two algorithms in different training set sizes. (a) ratio of training set = 40%. (b) ratio of training set = 50%. (c) ratio of training set = 60%. (d) ratio of training set = 70%. (e) ratio of training set = 80%. (f) ratio of training set = 90%.


FIGURE 4. The changing trends of PSO-NN and BPNN with respect to prediction precision. (a) Population size = 20. (b) Population size = 40. (c) Population size = 60. (d) Population size = 80.

D. IMPACT OF POPULATION SIZE

In general, the population size of the particles is an important parameter that influences the performance of PSO. In order to verify its impact on PSO-NN, we varied the size from 20 to 80 and compared the corresponding results.

FIGURE 5. The impact on precision caused by population size in the PSO-NN algorithm. (a) Ratio of training set = 40%, 60% or 80%. (b) Ratio of training set = 50%, 70% or 90%.

As shown in Figure 5, the prediction precision of PSO-NN increases quickly with the population size regardless of the ratio of the training data set. Specifically, when the population size increases from 20 to 80, the prediction precision of PSO-NN increases by about two percent. In the experiments, we adopted a relatively small population size (i.e., s = 30) for PSO-NN to predict trust rates. The experimental results show that our PSO-NN is still significantly better than BPNN. Thus, the advantage of PSO-NN over BPNN will be even more significant with larger population sizes, e.g., 50 or 80. In addition, as shown in Figure 5(a), the increase in prediction precision from the case of ratio = 40% to the case of ratio = 60% is more obvious than that from ratio = 60% to ratio = 80%. Similarly, the difference is relatively significant when the ratio of the training set varies from 50% to 70% in Figure 5(b). These observations show that the prediction precision is more sensitive to lower training set ratios.

E. IMPACT OF THE MAX GENERATION

The maximum number of iterations (or generations, G_max) of PSO usually plays an important role in both the effectiveness and the efficiency of the search process. Hence, it is necessary to investigate the impact of the maximum generation on PSO-NN. In the experiments, we take two ratios of the training data set as examples for demonstration and analysis purposes. For each case, the maximum generation of PSO varies from 50 to 400, in steps of 50. The experimental results are shown in Figure 6. According to the results in both cases, PSO-NN's mean prediction precision increases slowly with the growth of the maximum generation. In Figure 6(a), the average prediction precision is 85.21% when G_max = 50, and 88.45% when G_max = 400. In the other case, shown in Figure 6(b), the average precision is 85.62% when G_max = 50, and increases to 89.09% when G_max increases to 400. It is easy to see that the growth of the average precision is not very significant. This means that the parameter G_max in PSO-NN does not cause a big variance in the average prediction precision. In terms of the distribution of prediction precisions, the interval of precisions in the low G_max cases is relatively wide, but becomes narrower as the maximum generation increases. The experimental observations are consistent in both cases. Therefore, we conclude that parameter G_max has a slight impact on the stability of PSO-NN's prediction precision. However, when G_max reaches about 250, PSO-NN's performance becomes stable.

F. IMPACT OF THE NUMBER OF HIDDEN NODES

For applications of neural networks, topology structure design is one of the most important issues that need to be carefully addressed. In most scenarios, a three-layer structure is considered sufficient for designing a neural network. In our experiments, the numbers of nodes in the input and output layers are determined by the features of the public QoS data set. The default number of nodes in the hidden layer was set to 10. In order to verify whether the number of hidden nodes impacts the performance of our PSO-NN algorithm, we varied this parameter for PSO-NN in the experiments to evaluate the corresponding impact. Meanwhile, a BPNN with the same topology structure was also designed and used in the experiments so as to observe the difference between it and PSO-NN. As shown in Figure 7, the number of nodes in the hidden layer has no significant impact on the prediction precision of PSO-NN. When the ratio of the training set is 60% (Figure 7(a)), the precisions of PSO-NN with different numbers of hidden nodes are always about 87.5%. In the other case, shown in Figure 7(b), the precisions are approximately 88%. On the contrary, the number of hidden nodes


FIGURE 6. The impact on precision caused by the max generation in PSO-NN. (a) Ratio of training set = 60%. (b) Ratio of training set = 80%.

FIGURE 7. The impact on precision caused by the node number in hidden layer. (a) Ratio of training set = 60%. (b) Ratio of training set = 80%.

has a significant impact on the performance of BPNN. The average precision of BPNN increases slowly with the increase in the number of nodes in the hidden layer. More importantly, the distribution range of the precisions becomes smaller as the number of hidden nodes increases. Based on the above observations, we can conclude that PSO-NN is more stable than BPNN with respect to the number of hidden nodes. To fully compare the difference between PSO-NN and BPNN, we have also performed an ANOVA test on the experimental results obtained by these algorithms with different numbers of hidden nodes. The statistical results are presented in Table 4. It is not hard to see that the prediction precision of PSO-NN is always significantly better than that of BPNN. Although the difference between the two algorithms' average precisions gradually decreases with the increase in the number of hidden nodes, the difference at the maximum number of nodes (i.e., 15) is still

very significant, with a p-value far less than 0.01. Therefore, PSO-NN has better stability than BPNN, and it always outperforms BPNN significantly as the number of hidden nodes increases from 5 to 15. The reason behind the difference in stability between the two algorithms is that PSO-NN can better configure a neural network through PSO-based searching, while the initial parameters of BPNN are assigned in a random manner. Thus, the stability of BPNN's performance cannot be guaranteed.

G. IMPACT OF THE TRAINING TIMES IN NN

The training times parameter refers to the number of iterations used to optimize the configuration of a neural network during the training process. In general, this parameter can affect the network's performance. In order to investigate its impact on the prediction precision, we have performed experiments on PSO-NN and BPNN at different levels of training times.
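For illustration, the kind of sweep performed in this subsection can be sketched as follows. MLPClassifier and its max_iter parameter are scikit-learn stand-ins for the MATLAB BPNN and its training times, so the sketch only mirrors the protocol, not the reported results.

```python
# Hedged sketch: train a small MLP repeatedly per training-iteration budget
# and collect the resulting precision distribution.
import numpy as np
from sklearn.neural_network import MLPClassifier

def sweep_training_times(X_tr, y_tr, X_te, y_te, budgets=(200, 400, 600, 800), repeats=30):
    results = {}
    for max_iter in budgets:
        precisions = []
        for seed in range(repeats):
            clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=max_iter, random_state=seed)
            clf.fit(X_tr, y_tr)
            precisions.append(100.0 * (clf.predict(X_te) == np.asarray(y_te)).mean())
        results[max_iter] = precisions   # distribution of precision (%) for this budget
    return results
```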


TABLE 4. Prediction precision change caused by hidden node number and the corresponding comparison between PSO-NN and BPNN.

FIGURE 8. The impact on precision caused by the training times in NN. (a) Ratio of training set = 60%. (b) Ratio of training set = 80%.

Similarly, we have also employed two training set ratios (i.e., 60% and 80%) in the experiments. As shown in Figure 8, the average precision of PSO-NN increases very slowly when the number of training times increases from 200 to 800. At the same time, the distribution range of PSO-NN’s precisions in the case of small training times is slightly wider than that in the case of larger training times. With regard to the performance of BPNN, its average precision gradually increases with the growth of the training times, and the increase at the level of small training times

is faster than that at large training times. On the other hand, the precision distribution in the case of small training times is much wider than that in the case of large training times. This indicates that the average precision and stability of BPNN are heavily affected by the training times. Similarly, we have also performed an ANOVA test on the prediction results of both algorithms for each setting of training times. According to the statistical results shown in Table 5, PSO-NN outperforms BPNN at all training times. The difference is more significant for relatively small training times.

TABLE 5. Prediction precision change caused by NN’s training times and the corresponding comparison between PSO-NN and BPNN.


Even when the number of training times increases to 800, the p-values for the difference between PSO-NN and BPNN are still far less than 0.01 for the two training set ratios (i.e., 60% and 80%). Based on the above experimental analysis, we can conclude that the training times have no obvious effect on the performance of PSO-NN, but impact BPNN significantly. Meanwhile, PSO-NN's prediction precision is significantly better than that of BPNN in all cases of different training times.

V. DISCUSSIONS

A. THREATS TO VALIDITY

Our study may face some threats to validity. Internal validity refers to whether or not there were mistakes in the experiments. During the experimental analysis, the classification methods were implemented using the APIs of WEKA, a well-known and stable tool for machine learning. For the basic neural network (i.e., BPNN), we used the standard implementation available in MATLAB. These well-known tools help reduce the internal threats to the validity of our study. Threats to external validity are concerned with whether the findings in our experiments can be generalized. Although we only used one data set to validate the effectiveness of PSO-NN, the QWS data set has been widely used in many other studies on service trust prediction and evaluation. Thus, the use of this data set is appropriate for performing a comparative analysis with existing methods. Moreover, extensive experiments have been conducted with varying key parameters of PSO-NN. The results have confirmed that PSO-NN's performance is always significantly better than that of BPNN. More importantly, this experimental conclusion is drawn based on 1000 repeated experiments and statistical tests. In future work, in addition to BPNN and the four representative classification algorithms, we will compare PSO-NN with other representative classification algorithms. Meanwhile, we will also employ more data sets of cloud or Web services to further evaluate PSO-NN.

How to evaluate or predict the trustworthiness of Web and cloud services according to their multiple QoS attributes is an important topic in the field of services computing. Although some methods based on feedback [31], [32] or matching between the advertised and delivered service qualities [33] have been proposed, intelligent techniques have been proven as a more effective way to attack the rate prediction problem. Bayesian network is a basic intelligent model for classification and prediction problems. In the past, this model has been widely studied for trust-based service selection and trustworthiness evaluation. Fu et al. [13] presented a method to evaluate the trust of grid services. However, the measurement of trustworthiness is simply based on the degree of consistency between QoS information and QoS provision. VOLUME 5, 2017

Bayesian network has also been used to evaluate the trust of Web services by combining different trust sources [14], or to compute QoS-based trust scores for composite services [15]. In the paper, we have compared this model with our own method named PSO-NN for trust prediction problem. The experimental results show the advantage of PSO-NN over methods based on Bayesian network. Mohanty et al. [19]investigated the effect of applying several well-known classification models for the classification of service trust. Based on the comparative analysis, they find that BPNN is a relatively effective method for predicting trust rate according to Web service’s quality values. The performance of BPNN is usually better than most other intelligent techniques, such as J48, CART and SVM. Expanding from their studies, we use BPNN as a baseline method and conduct the further research. Through integrating PSO and BPNN, PSO-NN demonstrates better performance than the original BPNN. Furthermore, several other classification methods are also implemented and used in our experimental analysis. The results confirm that PSO-NN also outperforms those prediction methods significantly. Although the quality values of Web and cloud services are qualitative by nature, some of them are still in the ordinal form due to the subjective judgment from service consumers. In order to give a rational trust rate according to both subjective and qualitative QoS data set, the fuzziness technology has been taken into consideration by some researchers. Mashinchi et al. [12] introduced a new approach based on fuzzy linear regression analysis (FLRA) to predict the trust rate of Web services. Although FLRA outperforms several basic algorithms such as BayesNet, CART and J48, its prediction precision is lower than that of PSO-NN, as demonstrated by our experimental study. As mentioned earlier, the performance of neural network is easily affected by its initial settings. In order to overcome its instability, it is necessary to search for an appropriate parameter settings before performing further training and prediction. In recent years, evolution algorithms have been successfully used for optimizing the design and the parameters of NNs [20], [21]. Typically, genetic algorithm (GA) [22], particle swarm optimization (PSO) and gravitational search algorithm (GSA) [37] are widely used for such a task. In our work, we adopt PSO as the searching algorithm for NN due to its widely acknowledged features of easy implementation and high performance. Although NN combined with evolution algorithms has been used to solve quite a few prediction, classification or clustering problems [38], to the best of our knowledge, PSO-driven NN has not been applied for the trust rate prediction of Web services. Here, we explore and study this topic and achieve excellent outcomes, as demonstrated by the experimental results. In the past, bio-inspired algorithms like PSO have been widely used to settle the problems of service selection, composition or concretization [35], [36]. Different from these studies, we mainly utilize PSO to search for an appropriate parameter setting for a neural network which is used to predict 2197


the trust rate of cloud services. Although the combination of evolutionary algorithms and neural networks, as well as its applications, has been widely studied [20], [21], [38], to the best of our knowledge, PSO-driven NN has not been well exploited and applied to the problem of predicting cloud services' trust rates. Meanwhile, the experimental results demonstrate that our solution can achieve better performance than existing trust rate prediction methods. Recently, Luo et al. [39] combined fuzzy neural networks and adaptive dynamic programming (ADP) to settle the problem of QoS value prediction. In their approach, fuzzy rules are extracted from QoS data through fuzzy neural networks. Then, the ADP method is employed for the parameter learning of these fuzzy rules. Our work mainly aims to predict the trust rates of cloud services rather than the basic QoS values. More importantly, we adopt PSO to directly optimize the weights and thresholds (or biases) of neural networks, and then select the neural network with the best performance to perform trust prediction. It is not hard to see that the way the two kinds of intelligent techniques are combined in their work is different from ours.

VI. CONCLUSIONS

In order to construct reliable and dependable service-based systems, trustworthiness is viewed as a prominent indicator for consumers to make decisions about service selection. Since the trust of a service is usually determined by both subjective and qualitative QoS values, it is difficult to predict or evaluate its trust rate by considering only its own QoS attributes. Previous studies have confirmed that neural network is an effective intelligent technique for solving this problem, with a higher prediction precision than basic classification methods such as Bayesian network or decision tree. However, the performance of a neural network is heavily dependent on its parameter settings. For this reason, we adopted PSO to find an appropriate setting for NN to obtain better performance for trust rate prediction. In addition, comparative experiments were performed on a public QoS data set of Web services. According to the experimental results, our method, named PSO-NN, has a significant advantage over both basic classification methods and the traditional neural network BPNN. Although PSO-NN has exhibited promising effectiveness and stability for solving the trust prediction problem of Web and cloud services, there are still some issues that need to be further explored in our future work. The current method needs to train a considerable number of basic neural networks multiple times. As a result, its efficiency is not ideal. We are going to study strategies for speeding up the training process in our future work. Inspired by the concept of deep learning [40], we are also going to adopt a neural network with a more complex hidden-layer structure to build a more advanced trust prediction model.

REFERENCES

[1] W.-T. Tsai, X. Bai, and Y. Huang, ''Software-as-a-service (SaaS): Perspectives and challenges,'' Sci. Chin. Inf. Sci., vol. 57, no. 5, pp. 1–15, 2014.

[2] X. Liu et al., The Design of Cloud Workflow Systems. New York, NY, USA: Springer-Verlag, 2012, pp. 1–97.
[3] O. A. Wahab, J. Bentahar, H. Otrok, and A. Mourad, ''A survey on trust and reputation models for Web services: Single, composite, and communities,'' Decision Support Syst., vol. 74, pp. 121–134, Jun. 2015.
[4] L. Sun, H. Dong, F. K. Hussain, O. K. Hussain, and E. Chang, ''Cloud service selection: State-of-the-art and future research directions,'' J. Netw. Comput. Appl., vol. 45, pp. 134–150, Oct. 2014.
[5] M. Zuo, S. Wang, and B. Wu, ''Research on Web services selection model based on AHP,'' in Proc. IEEE Int. Conf. Service Oper. Logistics, Inform. (SOLI), Oct. 2008, pp. 2763–2768.
[6] M. Godse, R. Sonar, and S. Mulik, ''Web service selection based on analytical network process approach,'' in Proc. IEEE Asia–Pacific Services Comput. Conf. (APSCC), Dec. 2008, pp. 1103–1108.
[7] R. Karim, C. Ding, and C.-H. Chi, ''An enhanced PROMETHEE model for QoS-based Web service selection,'' in Proc. IEEE Int. Conf. Services Comput. (SCC), Jul. 2011, pp. 536–543.
[8] M. Khezrian, A. Jahan, W. M. N. W. Kadir, and S. Ibrahim, ''An approach for Web service selection based on confidence level of decision maker,'' PLoS ONE, vol. 9, no. 6, p. e97831, 2014.
[9] S. S. Yau and Y. Yin, ''QoS-based service ranking and selection for service-based systems,'' in Proc. IEEE Int. Conf. Web Services (ICWS), Jul. 2011, pp. 56–63.
[10] R. Hu, W. Dou, X. F. Liu, and J. Liu, ''WSRank: A method for Web service ranking in cloud environment,'' in Proc. 9th IEEE Int. Conf. Dependable, Auto. Secure Comput. (DASC), Dec. 2011, pp. 585–592.
[11] C. Mao, J. Chen, D. Towey, J. Chen, and X. Xie, ''Search-based QoS ranking prediction for Web services in cloud environments,'' Future Generat. Comput. Syst., vol. 50, pp. 111–126, Sep. 2015.
[12] M. H. Mashinchi, L. Li, M. A. Orgun, and Y. Wang, ''The prediction of trust rating based on the quality of services using fuzzy linear regression,'' in Proc. IEEE Int. Conf. Fuzzy Syst. (FUZZ), Jun. 2011, pp. 1953–1959.
[13] Y. Fu, Z. Hu, and Q. Zhang, ''Bayesian network based QoS trustworthiness evaluation method in service oriented grid,'' in Proc. 9th Int. Conf. Young Comput. Sci. (ICYCS), Nov. 2008, pp. 293–298.
[14] H. T. Nguyen, W. Zhao, and J. Yang, ''A trust and reputation model based on Bayesian network for Web services,'' in Proc. IEEE Int. Conf. Web Services (ICWS), Jul. 2010, pp. 251–258.
[15] M. Mehdi, N. Bouguila, and J. Bentahar, ''A QoS-based trust approach for service selection and composition via Bayesian networks,'' in Proc. 20th IEEE Int. Conf. Web Services (ICWS), Jul. 2013, pp. 211–218.
[16] Z. Saoud, N. Faci, Z. Maamar, and D. Benslimane, ''A fuzzy-based credibility model to assess Web services trust under uncertainty,'' J. Syst. Softw., vol. 122, pp. 496–506, Dec. 2016.
[17] H. Yahyaoui, ''A trust-based game theoretical model for Web services collaboration,'' Knowl.-Based Syst., vol. 27, pp. 162–169, Mar. 2012.
[18] S. Ding, X.-J. Ma, and S.-L. Yang, ''A software trustworthiness evaluation model using objective weight based evidential reasoning approach,'' Knowl. Inf. Syst., vol. 33, no. 1, pp. 171–189, 2012.
[19] R. Mohanty, V. Ravi, and M. R. Patra, ''Web-services classification using intelligent techniques,'' Expert Syst. Appl., vol. 37, no. 7, pp. 5484–5490, 2010.
[20] X. Yao, ''A review of evolutionary artificial neural networks,'' Int. J. Intell. Syst., vol. 8, no. 4, pp. 539–567, 2007.
[21] S. Ding, H. Li, C. Su, J. Yu, and F. Jin, ''Evolutionary artificial neural networks: A review,'' Artif. Intell. Rev., vol. 39, no. 3, pp. 251–260, 2013.
[22] S. Ding, C. Su, and J. Yu, ''An optimizing BP neural network algorithm based on genetic algorithm,'' Artif. Intell. Rev., vol. 36, no. 2, pp. 153–162, 2011.
[23] Y. Shi and R. Eberhart, ''A modified particle swarm optimizer,'' in Proc. IEEE Int. Conf. Evol. Comput. (ICEC), May 1998, pp. 69–73.
[24] J. Li, X. L. Zheng, D. R. Chen, and W. W. Song, ''Trust based service selection in service oriented environment,'' Int. J. Web Services Res., vol. 9, no. 3, pp. 23–42, 2012.
[25] N. Limam and R. Boutaba, ''Assessing software service quality and trustworthiness at selection time,'' IEEE Trans. Softw. Eng., vol. 36, no. 4, pp. 559–574, Apr. 2010.
[26] M. Tang, X. Dai, J. Liu, and J. Chen, ''Towards a trust evaluation middleware for cloud service selection,'' Future Generat. Comput. Syst., pp. 1–11, Jan. 2016. [Online]. Available: http://dx.doi.org/10.1016/j.future.2016.01.009
[27] A. Basheer and M. Hajmeer, ''Artificial neural networks: Fundamentals, computing, design, and application,'' J. Microbiol. Methods, vol. 43, no. 1, pp. 3–31, 2000.


[28] P. N. Suganthan, ''Particle swarm optimiser with neighbourhood operator,'' in Proc. Congr. Evol. Comput. (CEC), vol. 3, 1999, p. 1962.
[29] M. Taherkhani and R. Safabakhsh, ''A novel stability-based adaptive inertia weight for particle swarm optimization,'' Appl. Soft Comput., vol. 38, pp. 281–295, Jan. 2016.
[30] E. Al-Masri and Q. H. Mahmoud, ''Investigating Web services on the World Wide Web,'' in Proc. 17th Int. Conf. World Wide Web (WWW), 2008, pp. 795–804.
[31] G. Zacharia and P. Maes, ''Trust management through reputation mechanisms,'' Appl. Artif. Intell., vol. 14, no. 9, pp. 881–907, 2000.
[32] Z. Malik and A. Bouguettaya, ''RATEWeb: Reputation assessment for trust establishment among Web services,'' Int. J. Very Large Data Bases, vol. 18, no. 4, pp. 885–911, 2009.
[33] L.-H. Vu, M. Hauswirth, and K. Aberer, ''QoS-based service selection and ranking with trust and reputation management,'' in Proc. Int. Conf. Cooperat. Inf. Syst. (CoopIS), 2005, pp. 466–483.
[34] L. Wang, Support Vector Machines: Theory and Applications. Berlin, Germany: Springer-Verlag, 2005, pp. 1–431.
[35] C. Jatoth, G. R. Gangadharan, and R. Buyya, ''Computational intelligence based QoS-aware Web service composition: A systematic literature review,'' IEEE Trans. Services Comput., pp. 1–18, Aug. 2015. [Online]. Available: http://dx.doi.org/10.1109/TSC.2015.2473840
[36] L. Wang and J. Shen, ''A systematic review of bio-inspired service concretization,'' IEEE Trans. Services Comput., pp. 1–14, Nov. 2015. [Online]. Available: http://dx.doi.org/10.1109/TSC.2015.2501300
[37] S. Mirjalili, S. Z. M. Hashim, and H. M. Sardroudi, ''Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm,'' Appl. Math. Comput., vol. 218, no. 22, pp. 11125–11137, 2012.
[38] C. Jin and S. W. Jin, ''Prediction approach of software fault-proneness based on hybrid artificial neural network and quantum particle swarm optimization,'' Appl. Soft Comput., vol. 35, pp. 717–725, Oct. 2015.
[39] X. Luo, Y. Lv, R. Li, and Y. Chen, ''Web service QoS prediction based on adaptive dynamic programming using fuzzy neural networks for cloud services,'' IEEE Access, vol. 3, pp. 2260–2269, 2015.
[40] Y. LeCun, Y. Bengio, and G. Hinton, ''Deep learning,'' Nature, vol. 521, pp. 436–444, May 2015.
[41] C. Mao and R. Lin, ''QoS trust rate prediction for Web services using PSO-based neural network,'' in Proc. 4th Int. Conf. Adv. Cloud Big Data (CBD), Aug. 2016, pp. 68–74.

CHENGYING MAO received the B.E. degree in computer science and technology from Central South University, China, in 2001, and the Ph.D. degree in computer software and theory from the Huazhong University of Science and Technology, China, in 2006. He held a postdoctoral position with the College of Management, Huazhong University of Science and Technology, from 2006 to 2008. He is currently a Full Professor with the School of Software and Communication Engineering, Jiangxi University of Finance and Economics, China. His research interests include service computing and software engineering.


RONGRU LIN is currently pursuing the master’s degree with the School of Software and Communication Engineering, Jiangxi University of Finance and Economics, China. Her research focuses on cloud computing.

CHANGFU XU is currently pursuing the master’s degree with the School of Software and Communication Engineering, Jiangxi University of Finance and Economics, China. His research interests include service computing and social network analysis.

QIANG HE received the Ph.D. degree from the Swinburne University of Technology (SUT), Australia, in 2009, and the Ph.D. degree in computer science and engineering from the Huazhong University of Science and Technology, China, in 2010. He is currently a Lecturer with the Swinburne University of Technology. His research interests include software engineering, cloud computing, services computing, big data analytics, and green computing.
