Combining QoS-based Service Selection with Performance Prediction

Zhengdong Gao, Gengfeng Wu
School of Computer Engineering and Science, Shanghai University, Shanghai, 200072, P. R. China
[email protected] [email protected]

Abstract

Currently, much effort is being focused on dynamic, personalized QoS-based service selection; however, current QoS models are generally composed of static QoS parameters and do not take the dynamic nature of service performance into consideration. In our framework, we extend the existing QoS model by adding new attributes that reflect the performance of services, and rely on an Artificial Neural Network (ANN) to provide clients with dynamic, on-demand service performance prediction. In this way, a client is better able to find the best service based both on his/her preferences and on the service performance estimate.

Keywords: service selection, Neural Network, QoS, performance prediction

1. Introduction

Web services [2] promise the dynamic creation of loosely coupled information systems. However, current approaches are logically centralized and lack key functionality, especially the ability to locate, select, and bind services that meet given quality criteria. As more and more services appear on the web/Grid, it is inevitable that several services will have syntactically identical functions while each offers a different quality of service (QoS): how, then, can we pick out, from those candidate services, the one that best meets our demands?

Liu, Ngu and Zeng introduced a dynamic selection process for web services based on QoS computation and policing. They developed an "open, fair, and dynamic QoS computation model for web services selection" by means of a central QoS registry, describing an "extensible QoS model" that defines a number of quality criteria, including execution price, execution duration, and reputation, to denote both generic and domain-specific QoS criteria [4]. Julian Day developed a system using RDF [7], JESS, and OWL [8] to augment web service clients. The clients can collect, report, and analyze data about their experiences with the QoS of web services, and are able to parse and use this information to dynamically select the best service for their needs. In his system, the QoS parameters are only generic ones: availability, reliability, and execution time [9]. E. Michael Maximilien addresses dynamic service selection via an agent framework coupled with a QoS ontology. With this approach, participants can collaborate to determine each other's service quality and trustworthiness. The middle ontology for web service QoS in his system covers availability, capacity, economics, interoperability, performance, reliability, etc. [1][5].

From the three systems above, we can see that none of their QoS models takes the constantly changing nature of service performance into consideration: the QoS parameters in these models are generally static, and the models lack mechanisms for dynamic QoS evaluation (such as service performance prediction). As a result, the service brought back to a client may not be the one that best satisfies the client's demand at all. In this paper, we design a new QoS-based service selection framework built on a Neural Network (NN) [3] to meet this challenge. Our approach adds new attributes denoting dynamic QoS information to an existing QoS ontology, and makes use of an NN to generate dynamic, on-demand QoS estimates.

This paper is organized as follows: we give the details of the principle of our framework in Section 2 and its implementation in Section 3. Experiments as well as an analysis of the results are presented in Section 4. Finally, we offer some discussion and conclusions on our framework in Section 5.

2. Combining QoS-based service selection with performance prediction

In order to predict a service's performance, we make use of other users' experiences with that service. Given a set of past QoS records associated with a particular service and a new request, how can we predict the QoS that the service would provide for the new user?

Proceedings of the 2005 IEEE International Conference on e-Business Engineering (ICEBE'05) 0-7695-2430-3/05 $20.00 © 2005 IEEE

Artificial Neural Networks (ANNs) offer a possible solution to this question.

2.1. Back-Propagated Neural Network

Artificial Neural Networks (ANNs) are widely considered among the most powerful and universal predictors, and have proved highly effective at solving complex problems. Here we narrow our discussion to the back-propagation neural network (BPNN). Unlike single-layer networks, a typical 3-layer BPNN consists of an input layer (units x1, ..., xd plus a bias unit x0), a hidden layer (units z1, ..., zm plus a bias unit z0), and an output layer (units y1, ..., yc), as shown in Figure 1.

[Figure 1. 3-layer BPNN structure]

Running the network consists of a two-pass cycle: (1) Forward pass: the outputs are calculated and the error at the output units is calculated; (2) Backward pass: the output-unit error is used to alter the weights on the output units; then the error at the hidden nodes is calculated (by back-propagating the error at the output units through the weights), and the weights on the hidden nodes are altered using these values. For each data pair to be learned, one forward pass and one backward pass are performed. This procedure is repeated over and over again until the error reaches a low enough level.

2.2. QoS prediction

As discussed above, an ANN works in two phases: training and working. Given a set of cases, the NN is first trained to fit all the cases, storing the knowledge in its network structure. Then, when facing a new input, it acts as a "black box" and automatically generates a predicted output for the user. Similarly, after an NN is trained on a set of historical QoS records for a particular service, it can automatically generate a QoS prediction when facing a new QoS prediction request. A QoS prediction model schema (an example is given in Section 4, Table 1) is used to initialize the NN of a QoS Predict Service: each sub-element of the element QoS_prediction_input denotes an input unit at the input layer of the NN, and correspondingly, each sub-element of the element QoS_prediction_output denotes an output unit at its output layer.

Once a service's performance prediction is generated and combined with the other QoS parameters in the QoS ontology, the summarized QoS of this service can be calculated according to the client's personalized preferences [4]; in the end, the service with the highest QoS is selected for the client.

3. Implementation

The architecture of the QoS-prediction based service selection framework is shown in Figure 2. The framework consists of three main parts: Service Management Center (SMC), QoS Prediction Broker (QoS PB), and QoS Predict Service (QoS PS); the function of each is described in detail in the following sections. Note that each dashed pane in Figure 2 contains a pair of entries, a QoS PS and a service: a particular QoS PS is always associated with a particular service, that is, a service requires a QoS PS to fulfill its QoS prediction. Services 1 to N denote the candidate services for a particular user.
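The training-and-prediction cycle described in Section 2 can be sketched in code. The following is a minimal illustrative reconstruction, not the paper's implementation: the class name, weight initialization range, and sigmoid activation are our own assumptions.

```python
# Hypothetical minimal 3-layer BPNN with the two-pass (forward/backward)
# training procedure of Section 2.1. Sigmoid units; one bias per layer.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNN:
    def __init__(self, n_in, n_hidden, n_out, learning_rate=0.02, seed=0):
        rng = random.Random(seed)
        self.k = learning_rate
        # Each weight row carries one extra entry (w[-1]) for the bias unit.
        self.w_hidden = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                         for _ in range(n_hidden)]
        self.w_out = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
                      for _ in range(n_out)]

    def forward(self, x):
        # Forward pass: hidden activations z, then outputs y.
        z = [sigmoid(sum(w[i] * xi for i, xi in enumerate(x)) + w[-1])
             for w in self.w_hidden]
        y = [sigmoid(sum(w[j] * zj for j, zj in enumerate(z)) + w[-1])
             for w in self.w_out]
        return z, y

    def train_pair(self, x, target):
        z, y = self.forward(x)
        # Output-unit errors (sigmoid derivative: o * (1 - o)).
        delta_out = [(t - o) * o * (1 - o) for t, o in zip(target, y)]
        # Backward pass: propagate output errors through the weights.
        delta_hidden = [zj * (1 - zj) *
                        sum(d * w[j] for d, w in zip(delta_out, self.w_out))
                        for j, zj in enumerate(z)]
        for d, w in zip(delta_out, self.w_out):
            for j, zj in enumerate(z):
                w[j] += self.k * d * zj
            w[-1] += self.k * d          # bias weight
        for d, w in zip(delta_hidden, self.w_hidden):
            for i, xi in enumerate(x):
                w[i] += self.k * d * xi
            w[-1] += self.k * d

    def predict(self, x):
        return self.forward(x)[1]
```

Training on a set of historical QoS cases then amounts to calling `train_pair` repeatedly over the case library until the error is low enough, exactly as the two-pass procedure describes.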


Figure 2. Architecture of QoS-prediction based service selection framework

3.1. Service Management Center (SMC)

The SMC is implemented as a web service. It contains a map file which maintains the mapping between services and QoS PSs. Each record in the map file contains three columns: service URL, QoS PS URL, and current service state. We define three states for a particular service: ACTIVE, SUSPEND, and DEAD. ACTIVE means the service is normal and available; SUSPEND means the service cannot be accessed for the moment, probably because it is being updated, reconfigured, etc.; DEAD means the service is permanently inaccessible. Another key function of the SMC is monitoring the state of services. Once a new service is to be added, the SMC initiates a QoS PS instance for it. The initiation steps include parsing the service schema provided, choosing a proper NN structure, registering the information in the map file, etc. If a service is to be suspended or removed, the SMC simply changes the service's state to SUSPEND or DEAD as needed.
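The map-file records described above can be sketched as follows. This is an illustrative reconstruction under our own assumptions (the class name, the in-memory dictionary, and the example URLs are hypothetical; the paper's map file is presumably persisted, and adding a service also triggers QoS PS initiation).

```python
# Hypothetical sketch of the SMC map file (Section 3.1): each record maps
# a service URL to its QoS PS URL plus the service's current state.
ACTIVE, SUSPEND, DEAD = "ACTIVE", "SUSPEND", "DEAD"

class ServiceManagementCenter:
    def __init__(self):
        # service URL -> {"qos_ps": QoS PS URL, "state": service state}
        self.map_file = {}

    def add_service(self, service_url, qos_ps_url):
        # In the paper, adding a service also initiates a QoS PS instance
        # (parse the service schema, choose an NN structure, register it).
        self.map_file[service_url] = {"qos_ps": qos_ps_url, "state": ACTIVE}

    def set_state(self, service_url, state):
        # Used when a service is suspended, removed, or reactivated.
        self.map_file[service_url]["state"] = state

    def lookup(self, service_url):
        return self.map_file.get(service_url)
```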

3.2. QoS Predict Service (QoS PS)

The QoS PS is implemented as a web service and plays a key role in our framework. Its main use is to fulfill QoS prediction when facing a new input. Each QoS PS possesses a set of historical usage cases of its related service (as defined in the map file of the SMC) and an implementation of an NN. After receiving input from a QoS PB, it parses the input, organizes the necessary parameters in the proper format, and feeds them to the NN. The QoS prediction is then promptly generated, passed to the QoS PB, and ultimately delivered to the end user. If one of the candidate services is finally selected for execution by a user, the QoS PB brings back feedback information gathered after the real-time execution of the service to the QoS PS. In that case, the QoS PS adds a new usage-case record to its service usage-case library and retrains the NN using the newly updated usage cases.
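The predict-then-retrain behavior of a QoS PS can be sketched as below. All names here are our own illustrative assumptions; the `model` object stands in for the NN and is assumed to expose `fit(cases)` and `predict(x)` (the `MeanModel` stub is purely for demonstration, not part of the paper).

```python
# Hypothetical sketch of a QoS Predict Service (Section 3.2): it keeps a
# usage-case library for one service, answers prediction requests from the
# QoS PB, and retrains its model when execution feedback arrives.
class QoSPredictService:
    def __init__(self, model, usage_cases=None):
        self.model = model
        self.usage_cases = list(usage_cases or [])
        if self.usage_cases:
            self.model.fit(self.usage_cases)   # initial training

    def predict(self, qos_input):
        # Called by the QoS PB with the user's input parameters.
        return self.model.predict(qos_input)

    def add_feedback(self, qos_input, observed_qos):
        # Real-time execution results brought back by the QoS PB become a
        # new usage case; the model is retrained on the updated library.
        self.usage_cases.append((qos_input, observed_qos))
        self.model.fit(self.usage_cases)

class MeanModel:
    """Toy stand-in for the NN: predicts the mean of observed outputs."""
    def fit(self, cases):
        outputs = [out for _, out in cases]
        self.mean = sum(outputs) / len(outputs)

    def predict(self, qos_input):
        return self.mean
```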

3.3. QoS Prediction Broker (QoS PB)

The QoS PB acts as an agent of the client. It holds the client's request and, after a series of interactions with the SMC and the QoS PSs, brings back the QoS prediction values of the services for the client to make a further decision. The interactions the QoS PB performs are as follows:
1. Visit the map file of the SMC, get the service state, and locate the QoS PS.
2. Get the service prediction model from the QoS PS, and expose the interface so that the client can input special parameters as needed.
3. Transport the user-specified input to the QoS PS and invoke the NN.
4. Bring the generated QoS prediction value back to the client.
5. Gather the service's real-time execution information (provided the service has been selected and executed), and inform the QoS PS to update the service's usage-case library and retrain the NN.

In our framework, every service is associated with a particular QoS PS, so the first thing to do is to establish a QoS PS for each service. This task falls to the system administrator. When a service is requested to be added into the system, provided its service schema is also available, the administrator creates a QoS PS instance and configures an NN according to the service's QoS prediction model. Here each sub-element of the element QoS_prediction_input corresponds to an input node of the NN, and each sub-element of the element QoS_prediction_output corresponds to an output node of the NN. After the QoS PSs have been initiated, the system is ready to work. When a user queries UDDI and gets a set of syntactically identical services, they are passed to the user's QoS PB automatically. The QoS PB contacts each candidate service's QoS PS via the map file of the SMC and exposes the interface of each QoS PS (remember, only services whose state is ACTIVE are considered); the user is then asked to fill in some user-specific parameters reflecting his or her preferences. This information is gathered by the QoS PB and afterwards passed to every QoS PS, which invokes its NN to fulfill the QoS prediction. As soon as the predictions are done, the QoS PB collects the prediction values and delivers them to the user, who can then choose the service with the best QoS value, computed from his/her preferences, and perform the task on it. The execution progress is monitored by the QoS PB. Once the execution is finished, the QoS PB collects some real-time execution information (such as execution time, delay, result, etc.) and writes it back to the service usage-case library as needed; whether to retrain the NN or not depends on the case-update policy set by the client or the service provider.

4. Experimentation

In processing distributed data queries on the grid, a key function is allocating and selecting data resources that meet personal requirements or preferences from a large quantity of grid resources; we implemented several query services to fulfill this task. It is obvious that these services have syntactically identical functions. The query services surely provide different QoS to a user at any given time, since each of them is deployed on a different computer and potentially serves a different number of users. The question, then, is: "which query service should we submit our query statement to?"

To simplify the selection process, we suppose all services share the same QoS prediction model, defined below. The element QoS_prediction_input has four properties: Availability, Reliability, Bandwidth, and Request_time. Availability measures whether or not the client can connect to the web service; it takes a value of 0 (cannot connect) or 1 (able to connect). Reliability refers to whether the operation the client wishes to perform can be performed; it takes a value of 0 (unable to perform the operation) or 1 otherwise. If a service is not reachable, the reliability is assumed to be 0 for that interaction. Bandwidth is used to measure the network condition, and Request_time is the moment a user requests a particular service. As for the QoS output parameters, we mainly consider two QoS criteria: execution_duration and transaction_state. The former measures the expected delay between the moment a request is sent and the moment the result is brought back; the latter reflects whether the transaction process was performed properly, where 1 means the transaction was performed well and 0 means otherwise. The service's QoS prediction model is given below:

Table 1. An example of a QoS prediction model

  QoS prediction input            | QoS prediction output
  Avail  Relia  Bandw  RTime      | EDu  TranS

  Avail: availability          Relia: reliability
  Bandw: bandwidth             RTime: request_time
  EDu: execution_duration      TranS: transaction_state

According to this QoS prediction model schema, we choose a 4-6-2 BPNN with a constant learning rate k = 0.02 (0 < k < 1).
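The model of Table 1 can be turned into input and output vectors for a 4-6-2 BPNN as sketched below. The normalization choices here (the bandwidth scale, the hour-of-day encoding of Request_time, the maximum duration used to rescale execution_duration, and the 0.5 threshold on transaction_state) are our own assumptions for illustration; the paper does not specify them.

```python
# Hypothetical encoding of Table 1's QoS prediction model as the 4-unit
# input and 2-unit output vectors of the 4-6-2 BPNN. All scaling constants
# are illustrative assumptions.
def encode_input(availability, reliability, bandwidth_kbps, request_hour,
                 max_bandwidth_kbps=10000.0):
    return [
        float(availability),                  # 0 (no connection) or 1
        float(reliability),                   # 0 (operation failed) or 1
        bandwidth_kbps / max_bandwidth_kbps,  # network condition in [0, 1]
        request_hour / 24.0,                  # time of request in [0, 1]
    ]

def decode_output(y, max_duration_s=60.0):
    execution_duration = y[0] * max_duration_s   # undo [0, 1] scaling
    transaction_state = 1 if y[1] >= 0.5 else 0  # 1 = performed properly
    return execution_duration, transaction_state
```

With this encoding, each historical interaction yields one training pair for the network, and a new request is answered by encoding its parameters, running a forward pass, and decoding the two outputs.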
