IJCA, Vol. 18, No. 1, March 2011


A Context and Service-Oriented Architecture with Adaptive Quality of Service Support Dina Hafez*, Sherif G. Aly*, and Ahmed Sameh† The American University in Cairo, Egypt

Abstract

In this paper we extend an existing software architecture, the Context Oriented Architecture, to include support for quality of service. The Context Oriented Architecture is a responsive service-oriented infrastructure that transparently monitors application context and allows for custom responses designed by service developers and triggered by conditions in the monitored context. We augment the architecture with a QoS-Broker, which supports QoS representation, discovery, matchmaking, monitoring and self-healing based on Web service standards. Service providers can specify various categories of Web services that differ in their QoS support. Clients are able to dynamically state their QoS requirements. To support standardization, QoS requirements and offers are described using the OWL-Q ontology. Our QoS-Broker matches a group of customers with a group of service offers by converting the problem into a constraint satisfaction problem and solving it with a matchmaking search algorithm. As a proof of concept, the QoS-Broker monitors the invocation process and takes corrective action if a Web service cannot meet the QoS level it claims to support. We verified the feasibility and performance of the QoS-Broker with our prototype implementation and performance measurements. In addition, we showed that group serving has less overhead than individual serving and that our matching logic conforms to the wisdom of the crowd.

Key Words: QoS, context awareness, negotiation, constraint satisfaction, architecture.

1 Introduction

The term 'pervasive computing', first introduced by Weiser in 1991 [28], refers to the seamless integration of mobile devices into users' everyday life, and the availability of services anywhere and anytime. Pervasive computing is currently considering Web service technologies as the

* Department of Computer Science and Engineering. E-mail: [email protected] and [email protected].
† Affiliated with the American University in Cairo at the time this research was conducted. Department of Computer Science and Engineering. E-mail: [email protected]

standardized, widespread, evolvable and low-maintenance infrastructure for building pervasive systems.

In pervasive applications, a common target is to satisfy highly nomadic users who expect applications to behave effectively and efficiently, taking into consideration changes in the context of applications, users and environments. To achieve this, applications should be able to sense the surrounding environment (context) and adapt their behavior accordingly. By context, we mean "any information that can be used to characterize the situation of entities (i.e., whether a person, place or object) that are considered relevant to the interaction between a user and an application, including the user and the application themselves" [6]. Examples of context attributes include: personal profile, preferences, the user's current activities, and application and device context attributes.

Unfortunately, existing context-sensitive architectures suffer from two main problems [8]. First, they lack openness and scalability; most infrastructures hide the heterogeneity of devices and the way they monitor user activities. Besides, their data and discovery models hinder the addition of new functionality, thus preventing such architectures from being scalable. Secondly, most of these architectures are provided either as complete platforms that start from the operating system upwards or as context-sensitive middle layers that do not specify how sensors are added on the server or client sides.

Elsafty et al. [8] have overcome such challenges by proposing the Context Oriented Architecture, which enables application designers to build context-sensitive web-based systems using Web services as their building components. Figure 1 demonstrates the Context Oriented Architecture [8]. The Context Oriented Architecture is a responsive infrastructure that transparently monitors the context of both the client and server.
The architecture allows for custom responses designed by service developers and triggered by conditions in the monitored context. In order to preserve the privacy of clients, the Context Oriented Architecture allows clients to choose the context types to be revealed to the application, as well as the sensor and response components to be used for monitoring and adapting to each context type. This is achieved through a negotiation phase between the client and the service provider at the beginning of a new service invocation [1]. For the sake of using standardized ontologies, Elsafty et al. have extended the OWL-S ontology to incorporate context representation [8].
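As an illustration, the negotiation phase could be reduced to intersecting the context types the service monitors with those the client agrees to reveal; this sketch is ours, and every name in it (context types, sensor and response components) is hypothetical:

```python
# An illustrative reduction of the negotiation phase: the client reveals only
# the context types it permits and supplies its own sensor/response components.
# Every name here is hypothetical; this is not the architecture's protocol code.
def negotiate(service_context_types, client_permitted, client_components):
    """Return the agreed context types mapped to the client's chosen components."""
    return {ctype: client_components[ctype]
            for ctype in service_context_types
            if ctype in client_permitted}

agreed = negotiate(
    ["location", "activity", "device"],
    {"location", "device"},                          # client keeps 'activity' private
    {"location": ("gps_sensor", "map_response"),
     "device": ("battery_sensor", "degrade_ui")},
)
print(sorted(agreed))   # ['device', 'location']
```

The service never learns about context types the client withholds, which matches the privacy goal described above.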

ISCA Copyright© 2011


Figure 1: The context oriented architecture

In this work, and after an extensive survey of state-of-the-art literature that studied the requirements that should be supported in a context-aware architecture [2, 3, 5, 7, 9-10, 14-15, 19-20, 22-25], the researchers concluded that QoS support is one of the most pressing features that should be supported in any architecture of this kind. We ultimately contributed to the specification and implementation of the Context Oriented Architecture by augmenting it with support for quality of service, the work of which is presented in this paper.

2 Related Work

2.1 QoS Definition and Representation

QoS for the Web service domain has been defined by Hafez, et al. [11]. In that work, we showed that, since there is no standard methodology for the representation of QoS in the Web service domain, three different approaches were proposed: i) using a QoS specification language, ii) expanding the WS-Policy framework, and iii) developing a QoS ontology. The latter is the most mature of the three approaches. We previously compared different QoS ontologies developed to support the representation of QoS for Web services and concluded that OWL-Q [17] is the most well-developed ontology [11]. OWL-Q provides a detailed specification for modeling QoS information, especially information related to units, value types and relationships among QoS properties. Its modular approach enables OWL-Q to support the addition of ontologies for applications, such as time or currency ontologies. OWL-Q supports specifying relationships between QoS properties such as independence or correlation. The OWL-Q ontology extends the OWL-S standard ontology by adding a QoSAttribute class, which is a subclass of the OWL-S ServiceParameter and references a ServiceElement. QoSAttributes can be static or dynamic and are measured by one or more static or dynamic QoSMetrics [17].

2.2 QoS-Aware, Context-Aware Web Service Architectures

ConQo is, so far, the only developed architecture that is both QoS-aware and context-aware [4]. However, this architecture has five pitfalls. The major drawback of ConQo is that it bases its QoS and context representation on the WSMO ontology, which is not a standardized ontology, quite apart from the disadvantages of the WSMO-QoS ontology [27] discussed in Hafez et al. [11]. Secondly, the matching algorithm of the ConQo architecture does not take into account servicing a group of customers together. Instead, it serves clients on a "first-come-first-served" basis. Thirdly, the architecture does not react when a Web service does not meet the QoS level it declares to offer. Moreover, ConQo gives no description of how context sensors are incorporated on the client and server sides. Finally, there is no support for customer classification or prioritization. A survey of pervasive architectures that discusses the incorporation of QoS in various architectures of this kind is given in Hamza, et al. [13].

3 Supporting QoS in the Context Oriented Architecture

3.1 QoS-Aware Architecture Overview

Our proposed architecture aims at the management of QoS by providing differentiated services based on clients' profiles. Our system is composed of three main components: 1) Web service providers, 2) Web service clients, and 3) the QoS-Broker, which is a middle layer between the client application and the Web services.

We assume that service providers offer different classes of the same Web services. This means that classes of one Web service share the same functionality (i.e., the WSDL description and tModel interface), as well as the same context they are sensitive to, but differ in their QoS support. Obviously, different classes of services may vary in their QoS specifications and their prices. To support standardization, the Web service functional description is specified using OWL-S. The specific context that Web services are sensitive to is written using the OWL-S extension developed by Elsafty, et al. [8]. As for the representation of QoS parameters, service offers are represented using the OWL-Q ontology [16]. Context and QoS offers are attached to the WSDL description file using two separate files. We assume that each service provider specifies the capacity of the Web service it offers; i.e., each provider declares, for each QoS level, the maximum number of requests it can handle concurrently.

Each client, upon his/her first interaction with the architecture, is asked to state his/her QoS preferences using a GUI; these are then converted into an OWL-Q representation and kept on the client side. The user also specifies the preferred self-healing action in case the Web service violates


its QoS agreement, as will be described later in this section. To request a Web service, the client specifies its interface name (tModel name) and asks the QoS-Broker to find the most suitable Web service offer according to his/her QoS preferences. The client has the ability to change these QoS preferences at any time.

The QoS-Broker is responsible for receiving incoming requests and returning the most suitable Web service offers that match these requests. It queries the UDDI to get a list of all services that implement such an interface and at the same time declare themselves to be context-aware and QoS-aware. After that, it matches the client's QoS requirements against the Web services' QoS offers to choose the best Web service to be used.

However, sometimes we can have problematic scenarios that require specific handling. For example, assume that we have gold and platinum Web service (WS) offers, and assume that each Web service can only handle one request at a time. Assume we have two clients, A and B (A came before B); client A requests a platinum WS but would accept a gold WS, while client B also requests a platinum WS but would not accept any degradation in the quality level. If we were to serve client A with the platinum WS, we would not find a matching offer for client B. For this reason, our architecture does not serve clients on a "first-come-first-served" basis. Instead, it considers a group of customers requesting Web services over a specific window of time and tries to satisfy all of them concurrently. The QoS-Broker converts this matching problem into a constraint satisfaction optimization problem (CSP) and solves it to find the best solution. Each client, at the end, receives the URL of the WS to be invoked.

Moreover, some Web service providers may not be able to maintain the QoS properties they claimed to offer, while Web service consumers expect to be served with their required QoS level, especially if money is involved.
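The gold/platinum scenario can be made concrete with a small sketch; this is our own illustration (the client names, offer capacities and the brute-force search are assumptions, not the paper's implementation):

```python
# Sketch (our illustration, not the paper's implementation) of why
# first-come-first-served can fail where group matching succeeds, using the
# gold/platinum scenario above. Offer names and capacities are assumptions.

OFFERS = {"gold": 1, "platinum": 1}      # offer -> capacity (requests at a time)
CLIENTS = {"A": ["platinum", "gold"],    # A prefers platinum but accepts gold
           "B": ["platinum"]}            # B accepts only platinum

def fcfs(clients, offers):
    """Serve clients in arrival order, greedily, with no lookahead."""
    cap, result = dict(offers), {}
    for name, acceptable in clients.items():
        chosen = next((o for o in acceptable if cap.get(o, 0) > 0), None)
        if chosen is not None:
            cap[chosen] -= 1
        result[name] = chosen
    return result

def group_match(clients, offers):
    """Consider all clients in the window together; maximize satisfied clients."""
    names = list(clients)
    best = {"plan": {}, "count": -1}

    def search(i, cap, plan):
        if i == len(names):
            count = sum(v is not None for v in plan.values())
            if count > best["count"]:
                best["plan"], best["count"] = dict(plan), count
            return
        for option in clients[names[i]] + [None]:    # None = leave unserved
            if option is None or cap.get(option, 0) > 0:
                if option is not None:
                    cap[option] -= 1
                plan[names[i]] = option
                search(i + 1, cap, plan)
                if option is not None:
                    cap[option] += 1                 # backtrack

    search(0, dict(offers), {})
    return best["plan"]

print(fcfs(CLIENTS, OFFERS))         # A grabs platinum, so B goes unserved
print(group_match(CLIENTS, OFFERS))  # A -> gold and B -> platinum: all served
```

The greedy first-come-first-served assignment strands client B, while searching over the whole window finds an assignment that serves both clients.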
As a proof of concept, our architecture supports primitive monitoring and self-healing mechanisms. Self-healing is defined as an approach to detect improper operation of software applications and transactions, and to initiate corrective action without disrupting users [26]. Basically, we monitor the invocation process and keep track of whether the invoked WS is obeying its agreement. If a Web service breaks the agreement, a corrective action should be taken. Examples of corrective actions are: tolerate, halt, or swap the WS without losing the connection. These actions are specified by the client before the beginning of the invocation process.

3.2 Requirements of the QoS-Aware Architecture

In order to build a profound QoS-aware architecture, a set of requirements should be satisfied. These requirements are classified into functional and non-functional requirements.

3.2.1 Functional Requirements

a. Dynamic Web Service Discovery: The QoS-Broker should dynamically search the UDDI for available Web service offers. It should not depend on previously found Web services since Web services may fail, or they may


even change their QoS offers over time.

b. Effective Matchmaking Algorithm: The matchmaking algorithm should effectively evaluate the conformance between QoS offers and the QoS requirements stated by the user. In order for an offer to conform to a demand, it should offer exact or better solutions for the QoS metrics required. For positively monotonic metrics, in which the higher the value the better (e.g., throughput), 'better' translates to 'greater than', while for negatively monotonic metrics (e.g., response time) it translates to 'less than'. Moreover, the algorithm should take into account the capacity of each Web service offer, which is the maximum number of users to whom the Web service can offer such a level of QoS concurrently.

c. Selection of the Minimum Exact Matching QoS Offer: The matchmaking process results in a list of all Web service offers that match a specific demand D. The question now is which Web service should be chosen: the one with the highest QoS offer (even better than what the user requests), or the offer that exactly matches the user's preferences? Selecting Web services with better QoS offers than what the user requires is not always welcome. For example, considering a metric that measures refresh time, the user's device may not be able to process the output of a very fast Web service. Moreover, we can have a situation in which the first customer asks for a minimum QoS level, while the second customer requires a high-level offer. If we served the first one with the high offer, we might not find a suitable offer for the second client. For these reasons, we match each demand with the minimum matching offer. In situations where we cannot find an offer that exactly or better matches the client's request, we select an offer that partially meets the client's requirements. By partially we mean offers that satisfy some but not all of the QoS metrics of the demand.
We do not want to end up with a no-match message sent to the user, since this would decrease the usefulness of the architecture.

d. Matching a Group of Customer Demands with a Group of Service Offers: In real life, clients can come and go at any time. These clients share resources, i.e., Web service offers, between each other. Each client request depends on other requests coming at the same time. Therefore, while selecting the most suitable offer for each client, we take into consideration other clients who request Web services at the same time. We take the best group decision that satisfies the maximum number of clients. Our matchmaking algorithm solves a constraint satisfaction optimization problem (CSP) to match a group of customer requests with a group of service offers. By this, we assign to each user the best offer available, taking into consideration other clients' preferences and the capacity of each Web service. We select a Web service offer that exactly matches the user's requirements and at the same time does not conflict with the preferences of


other customers.

e. Primitive Monitoring and Self-Healing Mechanisms: Since we obtain QoS offers from the providers themselves, without contacting a third party that profiles or categorizes these services, and given the fact that some Web services may claim to offer higher QoS values than they can afford, the architecture should support monitoring mechanisms. It should monitor the invocation process and take corrective actions when a Web service breaks its claimed QoS offer [13].
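The conformance and capacity checks of requirement (b) might look as follows; this is a hedged sketch in which the metric names and the dictionary layout are our assumptions, not the architecture's code:

```python
# A hedged sketch of the conformance and capacity checks in requirement (b);
# metric names and the data layout are our assumptions.
POSITIVE = {"throughput"}        # positively monotonic: higher is better
NEGATIVE = {"response_time"}     # negatively monotonic: lower is better

def conforms(offer, demand):
    """True if the offer is exact or better on every demanded metric."""
    for metric, required in demand.items():
        offered = offer.get(metric)
        if offered is None:
            return False                     # demanded metric not offered
        if metric in POSITIVE and offered < required:
            return False                     # 'better' means greater here
        if metric in NEGATIVE and offered > required:
            return False                     # 'better' means smaller here
    return True

def admissible(offer, demand, assigned, capacity):
    """Conformance plus the capacity constraint on concurrent users."""
    return conforms(offer, demand) and assigned < capacity

demand = {"throughput": 30, "response_time": 0.5}
offer  = {"throughput": 40, "response_time": 0.3}
print(conforms(offer, demand))                            # True
print(admissible(offer, demand, assigned=5, capacity=5))  # False: at capacity
```

Note how the direction of the comparison flips with the monotonicity of the metric, exactly as requirement (b) describes.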

3.2.2 Non-Functional Requirements

a. Usable GUI for Customers and Providers: One of the key requirements for any Web service application is to have a user-friendly graphical user interface. In our design of the GUI for clients and providers, we tried to make it as usable as possible. The client interface is just for demonstration purposes. The only thing required from the user after registration with the QoS-Broker is to specify the tModel interface of the Web service that he/she wants to invoke.

b. Respecting Customers with Stringent QoS Requirements: Since we serve a group of customers together, we take all requests coming in a specific window frame. Waiting for the window frame, even for a short period of time, could be a hassle for some clients, especially those with stringent response times. Therefore, we design our architecture to support both clients with strict requirements and others. The first group of users is served instantaneously, while the others wait for a window frame that collects all incoming requests and solves them concurrently.

c. Encouraging Interoperability and Scalability: To support interoperability, we design our QoS-Broker as a Web service that can be called from any platform. Our client interface is nothing more than a Web page that can be opened from any Web browser. Our architecture is scalable as it supports the addition of more functionality.

d. Support for Standardization: While designing our system, we made sure to comply with the standards of the Web service domain. Although there is no standardized ontology for the representation of QoS metrics, our choice to use OWL-Q came basically from the fact that this ontology extends OWL-S, which is used for the description of the functional requirements of Web services.

3.3 The QoS-Aware Architecture

3.3.1 Overview. This section provides an overview of the main components of our QoS-Aware architecture, illustrated in Figure 2. The client and server sides are kept the same as in the Context Oriented Architecture [8], but with the addition of a QoS Specification Manager component on both sides. In addition, we have a cluster of servers that offer various categories of the same Web services that differ in their QoS support. The heart of our architecture lies in the addition of a middle layer, the QoS-Broker, between the server and the client sides. The QoS-Broker is responsible for discovering Web services published on UDDI, selecting the best matching Web service by performing a constraint satisfaction search, monitoring the invocation process, and taking corrective actions if the agreement is broken. It is worth mentioning that our QoS-Broker is designed as a separate Web service that can be called from any platform.

Figure 2: The proposed architecture

3.3.2 The QoS Specification Manager. The QoS Specification Manager component is responsible for converting QoS requirements and offers into OWL-Q XML format and saving them in files on the client and server sides, respectively. When the QoS-Broker asks for these QoS metrics during the matchmaking process, the QoS Specification Manager component converts these OWL-Q files into arrays of data structures that are understood within the architecture.

On the client side, a GUI is used for specifying the customer's QoS requirements and preferred self-healing action. Each QoS parameter has a value and a weight factor that indicates the importance of this parameter to the client. The weight can take values between 0 and 1 inclusive, with 1 being the maximum, or the value 2 to indicate a very important metric that must not be compromised. We chose to mark ultra-important QoS metrics with a weight of 2 to distinguish them from the finer differences between 0 and 1, yet without making the value so high that it negates the importance of the other requested QoS metrics. On the other hand, the GUI on the server side is used by service providers to define their QoS offers. It enables them to specify the name, response time, throughput, capacity and cost of the offer. They have the ability to specify a range of values to represent their offer for QoS parameters; but for the capacity and cost, they have to give exact values.

3.3.3 WS-Discovery. WS-Discovery is the component responsible for requesting services that implement a specific interface from UDDI registries. It takes as input the name of the tModel port and binding interface and returns a list with all


WS that abide by these tModels. These services are represented as an array of UDDIServices. A UDDIService keeps track of its name, description, service key and access points.

3.3.4 UDDI. The Universal Description, Discovery and Integration (UDDI) registry is responsible for keeping track of all published Web services. It provides dynamic service selection. When the UDDI registry is asked for services that implement a specific functional interface, it replies with a list of all services that implement these functionalities. We extended the UDDI by adding a category bag to group all Web services that declare themselves to be context-aware and QoS-aware. The created categoryBag is called "Context and QoS Awareness" and has two category values: "Context-aware" and "QoS-aware".

3.3.5 QoS Requirements Matchmaker. This component is responsible for performing a syntactic matching between QoS offers and QoS demands. For each QoS demand, it loops over all service offers to first check whether the QoS requirements match the offer. If so, it then checks the capacity constraint and makes sure that assigning such an offer to this demand will not exceed the capacity of the Web service.

3.3.6 CSP-Solver. This component is responsible for converting the problem into a constraint satisfaction optimization problem (CSP) and solving it using a backtracking search algorithm. While processing, it consults the QoS Requirements Matchmaker to see which offers can be assigned to which demands. It returns the solution matrix to the WS-Dispatcher component.

3.3.7 WS-Dispatcher. The WS-Dispatcher is responsible for sending the address of the most suitable Web service back to the client. Typically, this single most suitable Web service is chosen from among a pool of services satisfying both the functional and non-functional requirements of the client.

3.3.8 Monitoring and Self-Healing Component.
The monitoring component is responsible for monitoring the invocation process initiated between the client and the Web service provider. It starts after the client connects to the Web service. As the client sends a request to the Web service, a new thread is created for each required QoS metric to monitor the behavior of the Web service with respect to that metric. For each QoS metric, there is a specific methodology to monitor it. For example, to monitor the response time of the Web service, we start a timer once the client sends its request to the Web service. If the service does not reply within the time it claims in its offer, we wait for an extra time interval, and if no reply is received, we take a corrective action. The reason behind waiting for an extra interval is to accommodate overloaded Web services, especially those that claim to offer a low response time. However, if the WS is completely down, we cut the connection and swap it with another one. To better support the process of self-healing, we send the same request to a free Web service that offers the same functionality. If we receive a reply from it earlier, we display its result to the user with a notification message.
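The response-time monitoring described above might be sketched as follows; the thread-per-metric structure follows the text, while the function names and the grace interval are illustrative assumptions:

```python
# Sketch of the response-time monitoring thread described above (names and
# the grace interval are illustrative, not the architecture's code). One
# thread watches each monitored metric; if no reply arrives within the
# offered time plus a grace interval, a corrective action fires.
import threading

def monitor_response_time(reply_event, offered_seconds, grace_seconds, on_violation):
    """Wait up to offered + grace; invoke the corrective action on timeout."""
    def watch():
        if not reply_event.wait(offered_seconds + grace_seconds):
            on_violation()               # tolerate, halt, or swap the service
    t = threading.Thread(target=watch, daemon=True)
    t.start()
    return t

# Usage: the invoking code would set `reply` when the Web service answers.
violations = []
reply = threading.Event()
t = monitor_response_time(reply, 0.05, 0.05, lambda: violations.append("swap"))
t.join()                                  # no reply ever arrives -> violation
print(violations)                         # ['swap']
```

The grace interval mirrors the "extra time interval" in the text: it gives an overloaded but live service a chance to answer before a corrective action is taken.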


Self-healing actions range over: 1) Tolerate: ignore the claimed QoS offer and continue processing. 2) Halt: stop the invocation of the currently violating Web service. 3) Swap: cut the connection with the invoked service, solve a new CSP and create a new connection with another WS. To be able to swap Web services, two conditions must be satisfied: the Web services should be functionally similar and their respective interfaces should be conciliated [26]. In our architecture, both conditions hold since we assume that Web services that implement specific functionalities must have the same interface (tModel) in order to be discovered.

3.3.9 The Matchmaking Algorithm. Our matchmaking algorithm is a modified version of the one described by Kritikos and Plexousakis [18]. Although the most prominent QoS-based WS discovery algorithm that expresses the problem as a CSP was introduced by Martin-Diaz, et al. [21], its solving algorithm divides WS offers into only two categories, either completely satisfying or not satisfying the QoS request. In its analysis, it depends on the concept of conformance, which excludes from the matching offers those providing better solutions than what has been demanded. Kritikos and Plexousakis [18] provide a more solid definition of matching a Web service offer to a client demand. They state that an offer Oi matches a client demand D when the solution set for the QoS parameters PO of the offer is either contained in the solution set of the demand's QoS parameters PD or is better than the demand's solutions. Moreover, they add that the matchmaking algorithm should not depend on the best values for the QoS metrics required by the client. It should match an offer PO with a demand PD if its worst values for the QoS metrics have a preference greater than or equal to the preference of the worst solution of the demand. The algorithm orders Web service offers as: exactly matching, then better matching, and then partially matching offers.
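The ordering just described (exact, then better, then partial) can be sketched as a small ranking function; the metric names, their directions, and the numeric rank encoding are our assumptions:

```python
# A sketch of the offer ordering above (exact, then better, then partial).
# This is our illustration: metric names, directions, and the numeric rank
# encoding are assumptions, not the algorithm's actual code.
POSITIVE = {"throughput"}        # positively monotonic: higher is better
NEGATIVE = {"response_time"}     # negatively monotonic: lower is better

def classify(offer, demand):
    """Return 0 for an exact match, 1 for a better match, 2 for partial."""
    exact = True
    for metric, required in demand.items():
        offered = offer.get(metric)
        if offered is None:
            return 2                         # metric not offered -> partial
        if offered != required:
            exact = False
            improves = (offered > required) if metric in POSITIVE \
                       else (offered < required)
            if not improves:
                return 2                     # worse on some metric -> partial
    return 0 if exact else 1

demand = {"throughput": 30, "response_time": 0.5}
offers = [{"throughput": 20, "response_time": 0.5},   # partial
          {"throughput": 40, "response_time": 0.3},   # better
          {"throughput": 30, "response_time": 0.5}]   # exact
ranked = sorted(offers, key=lambda o: classify(o, demand))
print([classify(o, demand) for o in ranked])          # [0, 1, 2]
```

Sorting by this rank reproduces the exact-better-partial order of the algorithm.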
For partially matching offers, Kritikos and Plexousakis only take into account the weight of the QoS metric assigned by the customer; thus their algorithm penalizes all offers that partially match the customer's request with the same penalty. We modified this algorithm to make it assign different penalty values to offers that partially match the QoS demand, and we re-arrange the order of partially matching offers accordingly. We add a larger penalty when a QoS parameter is not offered at all by the Web service offer; otherwise, we take the difference between the offered and requested values and multiply it by the weight, as will be explained later.

For example, suppose that the client's requirement for metric X is 0.95 ≤ X ≤ 0.99 and the provider's offer is X ≥ 0.94. If we compared X ≥ 0.94 of the offer with X ≥ 0.95 of the demand, we would find that the solution set of the offer is not contained in the solution set of the demand, and the offer Oi is


not a solution for demand D. However, if we compared X ≥ 0.94 of the offer with X ≤ 0.99 of the demand, we would find that offer Oi is a better solution for demand D, and thus we could assign this offer to that demand. Obviously, we do not prefer the latter conclusion: how can we serve a user who requires X ≥ 0.95 with an offer in which X can be less than 0.95? (The provider only ensures that X ≥ 0.94, so X can fall in the interval [0.94, 0.95).) Thus, for positively monotonic metrics X, in which higher values are preferable, as in the example, we are interested in constraints of the form a ≤ X. Similarly, for negatively monotonic metrics X, we are interested in constraints of the form X ≤ b.

We now describe the matchmaking algorithm, which forms the requirement constraints of our CSP. Each QoS parameter, whether in a service offer or a demand, is written in one of these two forms:

X : [X = v]  or  [X op1 a; X op2 b],

where v, a, b ∈ domain(X), a < b, op1 ∈ {>, ≥}, and op2 ∈ {<, ≤}.

or ≥ the value, and finally the cost is < or ≤ the value. Each parameter also has a weight factor. We conducted this survey on seven subjects with relevant backgrounds, after explaining the architecture and how it works to them, without elaborating on how the matchmaking process takes place. We left it to them to decide which Web service they would expect each client to be served with. We compared the wisdom of the crowd against the results returned from running these requests on our prototype implementation. In order to analyze the survey results, for each client request we marked the Web service offer most expected by the subjects as the first choice, and the second most expected offer as the second choice. We compared the result received from running our architecture against the first and second choices. After collecting and analyzing the survey, we found that if we considered the first choices only, we got 86 percent correct. However, if we considered the union of the first and second choices together, we got 96 percent correct. This means that our prototype implementation is 96 percent compatible with the first two choices of the wisdom of the crowd. It is worth mentioning that most of the wrong answers, which form 4 percent, are for client requests that do not have exact or better matching offers. So the 4 percent inaccuracy of our prototype falls mainly in the part that selects the closest partially matching offers.

5.3 Advantage of Solving a Group of Customers Simultaneously

In this experiment, we investigated the advantage behind serving a group of customers simultaneously.

Table 1: Web service offers

QoS Parameter    Web Service A    Web Service B
Response Time    X ≤ 0.3          X < 0.5
Throughput       X > 15           34
Cost             X = 0.6          X = 0.2



Figure 6: Average index of deviation with varying window size

Figure 7: Average weighted index of deviation with varying window size

same as the number of clients, which were 12. So here, we fixed the total number of clients to 12 and examined the effect on the total service selection time for serving the same number of clients as the window frame varied over 1, 2, 4, 6, 8, 10 and 12. We varied the number of offers published on UDDI from 3, 5, 7 and 10 to 15 offers and observed how this affected the service selection time of our QoS-Broker. Each test was conducted 100 times and we took the average service selection time of each.

Figure 8: Service selection time while varying window size and fixing the number of clients

From Figure 8 we notice that all lines have approximately the same pattern; they start with a very high service selection time, when the window size is 1, and then move downwards as the window size increases. Moreover, all the lines have steep slopes in their first parts, which start to flatten as the window size becomes six or more. The service selection time of the QoS-Broker reaches its minimum when the window size is equal to 12, which is the number of client requests received in a specific window of time. We also notice that as the number of Web service offers published on UDDI increases, the service selection time increases; however, it shows the same behavior while varying the window sizes.

To analyze this situation, first we have to define what we mean by the term actual processing. By actual processing, we mean searching for Web service offers on UDDI, contacting each Web service to get its QoS offer, and backtracking to solve the CSP. So here we divide the window sizes into three categories: those that result in performing the actual processing more than twice, those that result in performing it exactly twice, and those that result in performing it only once.

Considering the first category, when the window size is set to 1, 2 and 4, the actual processing is performed 12, 6 and 3 times, respectively. For example, if the window size is 2 and we have 12 clients, then the actual processing will be performed 12/2 = 6 times. Since the process of searching the UDDI and contacting each service to get its QoS offers is time consuming, as will be shown in Experiment 5.4.3, increasing the number of times the actual processing is performed increases the total time taken to serve the clients. This is why we have high values for the service selection time at the first part of the lines on the graph.

As for the second category, which includes window sizes 6, 8 and 10, the actual processing is performed twice. For example, when the window size is set to 8, the QoS-Broker performs its first actual processing run on 8 clients, and then waits for a delta time to receive another 8 clients. Since it only receives 4 clients, as the total number of clients is 12, it starts its second actual processing run after the delta time passes. The same holds when the window size is set to 6 or 10, as the actual processing is performed twice in all three cases. Therefore, the execution times are close to each other in these three cases, resulting in flatter slopes.

The final category includes only the window size equal to 12, which is equal to the total number of clients. Here, the actual processing is performed only once, resulting in the minimum execution time.

5.4.2 Varying Number of Clients and Window Size. This experiment shows the difference in the time taken by our QoS-Broker to serve a group of clients individually and concurrently. Here, we fixed the number of Web service offers


published on UDDI to 7 offers, which is the average expected number of published offers in real life, and we varied the number of clients served and the window sizes. For each number of clients, we performed three scenarios. In the first scenario, each client was served individually, so the window size was set to 1. In the second scenario, the window size was set to half the number of clients. In the third scenario, each group of clients was served concurrently, meaning the window size was equal to the total number of clients. For example, with 6 clients, the window size is set to 1 in the first scenario, to 3 in the second scenario, and to 6 in the last scenario. Each of these tests was conducted 100 times and we took the average service selection time.

Figure 9 shows the effect on the service selection time of our QoS-Broker when varying the number of clients and the window size. We have three series: one that shows the effect on the service selection time when the window size is set to 1, a second that shows the effect when the window size is equal to half the number of clients, and a last that shows the effect when the window size is equal to the total number of clients.
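The relationship between window size and the number of actual processing runs that drives these three scenarios can be sketched as follows. This is a minimal illustrative model, not part of the QoS-Broker implementation; the function name is ours:

```python
import math

def processing_runs(num_clients: int, window_size: int) -> int:
    """Number of times the broker performs an 'actual processing' run
    (UDDI search, QoS-offer retrieval, CSP solving) when num_clients
    requests are batched into windows of window_size."""
    return math.ceil(num_clients / window_size)

# The three scenarios from the experiment, for 12 clients:
clients = 12
for window in (1, clients // 2, clients):
    print(window, processing_runs(clients, window))
# → 1 12
# → 6 2
# → 12 1
```

Since each run repeats the expensive UDDI search and per-service QoS retrieval, the total selection time grows roughly in proportion to this run count, which is why the concurrent scenario (one run) dominates the individual scenario (one run per client).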

Figure 9: Service selection time while varying the clients and fixing the number of offers

We notice that the time taken to serve a group of clients concurrently is much less than the time taken to serve the same number of clients individually. This comes from the fact that in individual serving, the actual processing is done for each single request. Besides consuming much time, it also consumes part of the QoS-Broker's processing power, making it incapable of processing many requests at the same time. Moreover, the QoS-Broker holds information about the available offers of different service providers in RAM, resulting in lower performance as this information grows. On the other hand, when clients are served concurrently, the actual processing is performed only once, while the rest of the requests wait for the result without consuming any processing power. Secondly, we see that the service selection time when the window size is equal to one half the number of clients is slightly higher than when the whole group of clients is


served concurrently. Yet it is not midway between the service times taken to serve the clients individually and concurrently. This is because when the window size is equal to half the number of clients, the actual processing is performed twice, which exceeds the scenario in which all clients are served at once by only one actual processing round. In the case when clients are served individually, however, the actual processing is performed as many times as there are clients.

Finally, we notice that each of the three series has an increasing slope, which means that as the number of clients increases, the time taken by the QoS-Broker to find the best suitable Web service offer increases; this is expected, as it involves more processing to solve the CSP. However, we can see varying slopes across the three scenarios. This comes from the fact that when the window size is equal to, or one half of, the number of clients, the actual processing is performed once or twice, respectively, while for serving clients individually, the actual processing is performed once per client. For example, with 8 clients, the actual processing is performed 8 times when the window size is set to 1, twice when the window size is 4, and only once when the window size is 8. In conclusion, group serving has less overhead than individual serving.

5.4.3 Varying Number of Clients and Number of Offers. Based on the results attained from the previous two experiments, we concluded that the minimum service selection time occurs when the window size is equal to the number of client requests received over a specific window of time. In this experiment we examine the effect on the service selection time while fixing the window size equal to the number of clients and varying the number of clients and the number of offers. Figure 10 shows the results of running such an experiment.
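Each actual processing run culminates in solving the matchmaking CSP over the gathered client demands and service offers. The sketch below is a minimal, illustrative Python stand-in for that step: the QoS attribute names, the constraints (an offer's response time must not exceed the client's limit and its availability must meet the client's minimum), and the objective (minimize total response time across the group) are our own simplifications, and the exhaustive search replaces the broker's backtracking algorithm for brevity.

```python
from itertools import product

# Hypothetical QoS offers published by service providers
offers = {
    "S1": {"response_ms": 120, "availability": 0.99},
    "S2": {"response_ms": 80,  "availability": 0.95},
    "S3": {"response_ms": 200, "availability": 0.999},
}

# Hypothetical client QoS demands
clients = {
    "C1": {"max_response_ms": 150, "min_availability": 0.98},
    "C2": {"max_response_ms": 100, "min_availability": 0.90},
}

def satisfies(offer, demand):
    """A client-offer pair is consistent if the offer meets both constraints."""
    return (offer["response_ms"] <= demand["max_response_ms"]
            and offer["availability"] >= demand["min_availability"])

def match(clients, offers):
    """Search all assignments of one offer per client (the CSP variables and
    domain) and return the feasible assignment with the lowest total
    response time (the optimization function)."""
    names = list(clients)
    best, best_cost = None, float("inf")
    for combo in product(offers, repeat=len(names)):
        if all(satisfies(offers[o], clients[c]) for c, o in zip(names, combo)):
            cost = sum(offers[o]["response_ms"] for o in combo)
            if cost < best_cost:
                best, best_cost = dict(zip(names, combo)), cost
    return best

print(match(clients, offers))  # → {'C1': 'S1', 'C2': 'S2'}
```

The exponential search illustrates why selection time grows with both the number of clients (CSP variables) and the number of offers (domain size), matching the trends observed in this experiment.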
As the number of offers increases, the service selection time of our QoS-Broker increases. The service selection time of our QoS-Broker is composed of the time taken by the QoS-Broker to fetch the published Web services from the UDDI, contact each Web service to get its QoS offer, and solve the CSP problem. As the number of Web service offers increases, the QoS-Broker contacts more Web services to get their QoS offer,

Figure 10: Service selection time with different offers and number of clients



thus increasing the service selection time. In addition, increasing the domain of our CSP, i.e., the number of Web service offers, results in an increase in the time taken to reach a solution. Moreover, we can see that the number of clients also affects the service selection time; as it increases, the service selection time also increases. This is because increasing the number of clients means increasing the number of variables of our CSP, which results in an increase in the time taken to reach a solution. Thus, we can conclude that our QoS-Broker is scalable with regard to the total number of offers and the total number of clients.

6 Conclusions

In this paper we augmented an existing architecture, namely the Context Oriented Architecture, with support for quality of service. The Context Oriented Architecture is an adaptive service oriented architecture that allows applications built according to its specification to adapt to varying contextual changes. In this work, and to support quality of service, both service providers and service requesters specify their quality of service offers and demands, respectively, using the OWL-Q ontology. We added a Web service broker (QoS-Broker) into the architecture, which is responsible for selecting and matchmaking Web service offers to client demands. We specified a matchmaking process and treated it as a constraint satisfaction problem (CSP) with an optimization function to return the most suitable Web service offer based on client requirements. We also allowed for some primitive self-healing support in the case where service providers fail to abide by their stated quality of service claims. We ultimately verified the feasibility and performance of the QoS support in the architecture with our prototype implementation as we varied the offers, clients, and matchmaking process variables.
We demonstrated that serving clients in groups has less overhead than individual serving and that our matching logic conforms to the wisdom of the crowd. One major future extension to this piece of work would be integrating our Web service selection broker with a Web service profiling process, like the one introduced by Hamza and Aly [12]. Instead of relying on QoS offers claimed by the Web service provider, our system would then dynamically acquire QoS-relevant measurements from deployed Web services according to predetermined policies.

References

[1] Sherif G. Aly, A. Sameh, and A. Elsafty, "Negotiation Subsystem and Protocol for Sensitivity and Reactivity in the Context-Oriented Architecture," 3rd IET International Conference on Intelligent Environments (IE 07), Ulm, Germany, pp. 346-353, Sept. 2007.
[2] M. Baldauf, S. Dustdar, and F. Rosenberg, "A Survey on Context-Aware Systems," International Journal of Ad Hoc and Ubiquitous Computing, 2:263-277, 2007.
[3] M. Bonett, "Personalization of Web Services: Opportunities and Challenges," Ariadne, 28:26-10, 2001.
[4] I. Braun, A. Strunk, G. Stoyanova, and B. Buder, "ConQo – A Context- and QoS-Aware Service Discovery," IADIS International Conference WWW/Internet, Freiburg, Germany, October 2008.
[5] A. Brown, S. Johnston, and K. Kelly, "Using Service-Oriented Architecture and Component-Based Development to Build Web Service Applications," Rational Software Corporation, 2002, http://www.ibm.com/developerworks/rational/library/510.html.
[6] A. K. Dey and G. D. Abowd, "Towards a Better Understanding of Context and Context Awareness," Workshop on the What, Who, Where, When and How of Context-Awareness, ACM Press, New York, USA, pp. 304-307, 2000.
[7] A. K. Dey, G. D. Abowd, and D. Salber, "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications," Human-Computer Interaction, 16:97-166, 2001.
[8] A. Elsafty, S. G. Aly, and A. Sameh, "The Context Oriented Architecture: Integrating Context into Semantic Web Services," First International Workshop on Semantic Media Adaptation and Personalization (SMAP'06), Athens, pp. 74-79, 2006.
[9] A. Finkelstein and A. Savigni, "A Framework for Requirements Engineering for Context-Aware Services," First International Workshop from Software Requirements to Architecture (STRAW 01), Toronto, Canada, 2001.
[10] R. Guerraoui and A. Schiper, "Fault-Tolerance by Replication in Distributed Systems," Proceedings of the Conference on Reliable Software Technologies, Springer-Verlag, pp. 38-57, 1996.
[11] D. Hafez, A. Sameh, and Sherif G. Aly, "Augmenting QoS Support in the Context Oriented Architecture," 2009 International Conference on Semantic Web and Web Services (SWWS'09), Las Vegas, USA, 2009.
[12] A. M. Hamza and Sherif G. Aly, "Web Service Quality-Based Profiling and Selection," 2009 International Conference on Semantic Web and Web Services (SWWS'09), Las Vegas, USA, 2009.
[13] M. Hamza and Sherif G. Aly, "A Study and Categorization of Pervasive Systems Architectures: Towards Specifying a Software Product Line," International Conference on Software Engineering Research and Practice, Las Vegas, Nevada, 2010.
[14] J. I. Hong, "The Context Fabric: An Infrastructure for Context-Aware Computing," Proc. ACM CHI '02 Extended Abstracts on Human Factors in Computing Systems, ACM, Minneapolis, Minnesota, USA, pp. 554-555, 2002.
[15] P. Korpipää and J. Mäntyjärvi, "An Ontology for Mobile Device Sensor-Based Context Awareness," Proceedings of CONTEXT, Lecture Notes in Computer Science, pp. 451-458, 2003.
[16] K. Kritikos and D. Plexousakis, "OWL-Q for Semantic QoS-Based Web Service Description and Discovery," First International Joint Workshop on Service Matchmaking and Resource Retrieval in the Semantic Web, Busan, South Korea, Nov. 2007.
[17] K. Kritikos and D. Plexousakis, "Semantic QoS Metric Matching," European Conference on Web Services (ECOWS'06), Zurich, pp. 265-274, 2006.
[18] K. Kritikos and D. Plexousakis, "Semantic QoS-Based Web Service Discovery Algorithms," Fifth European Conference on Web Services (ECOWS '07), Halle, Germany, pp. 181-190, 2007.
[19] K. C. Lee, J. H. Jeon, W. S. Lee, S. Jeong, and S. Park, "QoS for Web Services: Requirements and Possible Approaches," W3C Working Group Note 25, 2003.
[20] A. Mani and A. Nagarajan, "Understanding Quality of Service for Web Services," IBM developerWorks, 2004, http://www-106.ibm.com/developerworks/library/ws-quality.html.
[21] O. Martin-Diaz, A. R. Cortes, D. Benavides, A. Duran, and M. Toro, "A Quality-Aware Approach to Web Services Procurement," 4th International VLDB Workshop on Technologies for E-Services (TES'03), 2003.
[22] N. Milanovic and M. Malek, "Current Solutions for Web Service Composition," IEEE Internet Computing, 8:51-59, 2004.
[23] M. P. Papazoglou, P. Traverso, S. Dustdar, and F. Leymann, "Service-Oriented Computing: State of the Art and Research Challenges," Computer, 40:38-45, 2007.
[24] K. Sheikh, M. Wegdam, and M. Van Sinderen, "Quality-of-Context and Its Use for Protecting Privacy in Context-Aware Systems," Journal of Software, 3:83, 2008.
[25] T. Strang and C. Linnhoff-Popien, "A Context Modeling Survey," UbiComp 1st International Workshop on Advanced Context Modeling, Reasoning and Management, Nottingham, 2004.
[26] Y. Taher, D. Benslimane, M. C. Fauvet, and Z. Maamar, "Towards an Approach for Web Services Substitution," 10th International Database Engineering and Applications Symposium, IEEE CS Press, pp. 166-173, 2006.
[27] X. Wang, T. Vitvar, M. Kerrigan, and I. Toma, "A QoS-Aware Selection Model for Semantic Web Services," Lecture Notes in Computer Science, 4294:390, 2006.
[28] M. Weiser, "The Computer for the 21st Century," IEEE Pervasive Computing, 1:19-25, 2002.

Dina Hafez received her B.Sc. and M.Sc. degrees in Computer Science from the American University in Cairo, Egypt, in 2006 and 2009, respectively. Her Master's-level research involved the incorporation of quality of service support into an adaptive architecture that supports context awareness in pervasive systems. She is currently a Ph.D. candidate at the Computer Science Department of Duke University, USA. Her research interests include artificial intelligence, bioinformatics, and pervasive systems.


Sherif Aly is an Associate Professor with tenure, and Associate Chairman at the Department of Computer Science and Engineering of The American University in Cairo. Dr. Aly received his Bachelor of Science from AUC, and subsequently his Master of Science and Doctor of Science degrees from George Washington University. Prior to joining the American University in Cairo, Dr. Aly worked as a Senior Member of Technical Staff for General Dynamics Network Systems in Egypt, as a Research Scientist at the Internet Architecture Research Lab of Telcordia Technologies in New Jersey, and as a Guest Researcher for the Advanced Network Technologies Division of the National Institute of Standards and Technology in Maryland. Dr. Aly is a recipient of the renowned Egypt State Prize in 2006 for his scholarly activities. During his doctoral studies at George Washington University, he was nominated for the Trachtenberg teaching award for his current scholarship and scholarly debate. From 2005-2009, Dr. Aly coached students from the American University in Cairo to participate in the world-renowned ACM Intercollegiate Programming Contest where, on multiple occasions, AUC teams won first position at both the national and regional levels, and among approximately one percent of worldwide contest participants, competed twice in the World Finals of the contest. Dr. Aly’s main research interests involve Mobile and Pervasive Computing, Software Engineering, and Programming Languages. Dr. Aly has a significant number of journal, conference, and book chapter publications in his domains of interest. Besides his regular academic responsibilities, Dr. Aly participates in consultation activities in his domain, and is a member of the Egyptian Council for Foreign Affairs.

Ahmad Sameh received his Ph.D. from the University of Alberta in 1989. Dr. Sameh was a Professor at the American University in Cairo, where his prime areas of teaching and research included mobile computing, wireless communications, computer architecture, and artificial intelligence. Dr. Sameh has a large number of publications in his domains of interest and is a recipient of many awards. Dr. Sameh also previously worked for Kuwait University and George Washington University.
