QoS-based Discovery and Ranking of Web Services

Eyhab Al-Masri, Qusay H. Mahmoud
Department of Computing and Information Science
University of Guelph, Guelph, Ontario, N1G 2W1 Canada
{ealmasri, qmahmoud}@uoguelph.ca

Abstract— Discovering Web services using the keyword-based search techniques offered by existing UDDI APIs (i.e. the Inquiry API) may not yield results that are tailored to clients' needs. When discovering Web services, clients look for those that meet their requirements, primarily the overall functionality and Quality of Service (QoS). Standards such as UDDI, WSDL, and SOAP have the potential to provide QoS-aware discovery; however, there are technical challenges associated with existing standards, such as the client's ability to control and manage the discovery of Web services across accessible service registries. This paper proposes a solution to this problem and introduces the Web Service Relevancy Function (WsRF), used for measuring the relevancy ranking of a particular Web service based on client preferences and QoS metrics. We present experimental validation, results, and analysis of the presented ideas.

Keywords—UDDI, Service Registries, Web Services, Quality of Service, QoS, Web Service Broker, Discovery of Web Services, Ranking of Web Services
I. INTRODUCTION

Standards such as UDDI have enabled service providers and requestors to publish and find Web services of interest through UDDI Business Registries (UBRs). However, UBRs may not be adequate for enabling clients to search for relevant Web services, for a variety of reasons. One of the main reasons hindering the efficient discovery of Web services is that existing search APIs (i.e. the UDDI Inquiry API) only exploit keyword-based search techniques, which may not be suitable for Web services, particularly when differentiating between those that share similar functionalities.
Due to the fact that much of the information provided by Web services is technical, there is a need for a mechanism that can distinguish between Web services using well-defined criteria, such as Quality of Service (QoS) attributes. Differentiating between Web services that share similar functionalities is achieved largely by examining non-functional Web service attributes such as response time, throughput, availability, usability, performance, and integrity, among others. It would be desirable if the existing standards for publishing, discovering, and using Web services could incorporate QoS parameters as part of the registration (i.e. publishing) process, while continuously regulating or monitoring revisions to QoS information as the related Web services are updated.

There are several approaches that address how to deal with QoS for Web services. However, many of them rely on service providers to supply their QoS metrics; storing this type of information either in the UDDI or at the service provider's site may not be the best solution, because features such as response time and throughput would be advertised by the service provider and may be subject to manipulation. In addition, the supply of QoS metrics by the service provider raises concerns about the integrity and reliability of the supplied values. It would be ideal if a trusted service broker managed the supply of QoS information for Web services in a transparent manner such that: (1) service providers supply, through an interface, only the QoS information that must be provided directly (i.e. cost per invocation or price plans); and (2) QoS metrics that need not be supplied by service providers (i.e. response time, availability, reliability, penalty rate, among others) are computed autonomously.

Furthermore, many software vendors are promoting products that enable businesses and organizations to create their own UBRs (i.e. IBM WebSphere, Microsoft Enterprise Windows Server 2003, and others). In this case, businesses and organizations may prefer to deploy their own internal UBRs for intranet or extranet use, which will cause a significant increase in the number of discrete UBRs over the Web. This adds to the already existing complexity of finding relevant Web services of interest, in the sense that there needs to exist an automated mechanism that can explore all accessible UBRs, mainly a Web services discovery engine.

To address the above issues, this paper introduces a mechanism that extends our Web Services Repository Builder (WSRB) architecture [1] by offering quality-driven discovery of Web services, using a combination of Web service attributes as constraints when searching for relevant Web services. Our solution has been tested, and the results show high success rates in placing the correct or most relevant Web service of interest within the top results. The results also demonstrate the effectiveness of using QoS attributes as constraints when performing search requests and as elements when presenting results. Incorporating QoS properties when finding Web services of interest provides service requestors with adequate information about service guarantees and gives them some confidence as to the quality of the Web services they are about to invoke.
The rest of this paper is organized as follows. Section II discusses the related work. Our Web service ranking mechanism is presented in Section III. Experiments and results are discussed in Section IV, and finally the conclusion and future work are discussed in Section V.
II. RELATED WORK

Several Web services may share similar functionalities but possess different non-functional properties. When discovering Web services, it is essential to take both functional and non-functional properties into consideration in order to render an effective and reliable selection process. Unfortunately, the current UDDI specification V3 [3] does not include QoS as part of its Publication or Inquiry APIs. To address this problem, some work has enhanced the UBR's inquiry operations by embedding QoS information within the message. One example is UDDIe [4], which provides an API that can associate QoS information through a bag of user-defined properties (called propertyBag), while search queries are executed based on these properties. Such properties may provide QoS support; however, UDDIe is mainly used for the G-QoSM framework, which targets Grid computing, and provides a very limited level of support for QoS details.

Another approach that attempts to enhance the discovery of Web services using QoS information is the Quality Requirements Language (QRL) [5]. This general XML-based language can be used for Web services as well as other distributed systems such as video on demand. However, QRL does not clearly define how it can be used with, or associated with, Web service interfacing standards such as WSDL. In addition, QRL does not provide sufficient information on when and how to control and manage any specified QoS information.

In [6], an approach for certifying QoS information through a Web service QoS certifier is proposed. In this approach, the Web service provider has to provide QoS information at the time of registration. However, this approach does not provide a reliable mechanism or an adequate solution for supporting QoS properties for Web services. It proposes an extension to the current UDDI design, which might not be feasible; it concentrates on verifying QoS properties only at the time of registration; it does not provide any guarantees of having up-to-date QoS information (i.e. in case of Web service updates or changes); and it requires service providers to go through additional steps during the publishing process, forcing them to devise ways to measure the QoS of their own Web services. In addition, it does not differentiate between QoS properties directly supplied by service providers [7] (i.e. cost), nor does it specify how the QoS certifier handles such parameters when issuing a certification.

Other approaches have focused on using QoS computation and policing for improving the selection of Web services [8,9], developing middleware that enhances Web service composition by monitoring QoS metrics for Web services [10], using agents based on distributed reputation metric models [11], and using conceptual models based on Web service reputation [12]. Many of these approaches do not provide guarantees as to the accuracy of QoS values over time or the availability of up-to-date QoS information. In addition, the reputation-based models do not prevent false ratings; false information may therefore be collected and, as a result, may significantly impact the overall ratings of service providers.
III. WEB SERVICE RELEVANCY FUNCTION (WsRF)
Although many of the existing QoS-enabled discovery mechanisms discussed in Section II provide ways for service providers to publish QoS information, several challenges must be taken into consideration. These include: (1) automating, administering, and maintaining updated QoS information in UBRs; (2) ensuring the validity of QoS information supplied by service providers; (3) conducting QoS measurements in an open and transparent manner; (4) controlling and varying the time periods over which QoS parameters are evaluated; (5) managing the format of QoS information results; and (6) enabling UDDI to support QoS information with the existing version, without any modifications or extensions to the specifications. In order to provide a quality-driven ranking of Web services, it is important to collect QoS information about Web services. The WSRB framework [1] uses a crawler targeted at Web services, called the Web Service Crawler Engine (WSCE) [14], which actively crawls accessible UBRs as shown in Figure 1 (a schematic refresh loop is sketched after the figure).
Figure 1. WSRB Framework: Quality-Driven Ranking of Web Services
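The crawling and refreshing behaviour described above can be pictured as a simple polling loop over the accessible registries. The sketch below is purely illustrative and assumes hypothetical helpers (discover_services, probe_qos) and a plain dictionary as storage; it is not the WSCE/WS-QoSMan implementation and uses no real UDDI API.

```python
import time
from datetime import datetime, timezone

# Illustrative sketch only: crawl the accessible registries at a fixed update
# interval and store a timestamped QoS snapshot per service, so that rankings
# can be served from recent measurements instead of probing at query time.
# `discover_services` and `probe_qos` are hypothetical stand-ins, NOT real
# WSCE/WS-QoSMan or UDDI APIs.
def refresh_qos(registries, discover_services, probe_qos, storage,
                interval_seconds=3600, cycles=1):
    for _ in range(cycles):
        for registry in registries:
            for endpoint in discover_services(registry):
                snapshot = probe_qos(endpoint)
                snapshot["measured_at"] = datetime.now(timezone.utc).isoformat()
                storage.setdefault(endpoint, []).append(snapshot)
        time.sleep(interval_seconds)

# Example with stubbed helpers (hypothetical URLs):
# storage = {}
# refresh_qos(["http://ubr.example.org"],
#             lambda ubr: ["http://example.com/ValidateEmail"],
#             lambda url: {"response_time_ms": 720.0},
#             storage, interval_seconds=0, cycles=1)
```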
A Web Service QoS Manager (WS-QoSMan) module [15] within the WSRB framework is responsible for measuring QoS information for the collected Web services, and this information is stored in the Web Service Storage (WSS) as shown in Figure 1. WSRB enables clients to selectively choose and manage their search criteria through a graphical user interface. Once clients submit their search requests, a Web Service Relevancy Function (WsRF) is used to measure the relevancy ranking of a particular Web service wsi. QoS parameters help determine which of the available Web services best meets the client's requirements. Because of their significance, we selected the following QoS parameters, based on earlier research studies [5,12], for computing WsRF values (a minimal measurement sketch follows the list):
1. Response Time (RT): the time taken to send a request and receive a response (unit: milliseconds).
2. Throughput (TP): the maximum number of requests handled in a given unit of time (unit: requests/min).
3. Availability (AV): the ratio of the time period during which a Web service is available (unit: %/3-day period).
4. Accessibility (AC): the probability that the system is operating normally and can process requests without any delay (unit: %/3-day period).
5. Interoperability Analysis (IA): a measure indicating whether a Web service is in compliance with a given set of standards. WSRB uses SOAPScope's [13] Analysis feature for measuring IA (unit: % of errors and warnings reported).
6. Cost of Service (C): the cost per Web service request or invocation (unit: cents per service request).
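WS-QoSMan's exact probing procedure is not given here, so the following is only a minimal sketch of how metrics of the kind listed above (response time and availability) could be sampled. The endpoint URL and helper names are hypothetical, and faithful measurements would require real SOAP invocations rather than plain HTTP requests.

```python
import time
import urllib.request

def probe_response_time(endpoint_url, timeout=10.0):
    """Single probe: time one request/response round trip in milliseconds.
    Returns None if the service could not be reached (which counts against
    availability in the aggregate)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(endpoint_url, timeout=timeout):
            pass
        return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

def summarize_probes(samples):
    """Aggregate repeated probes into Table-I-style metrics:
    average response time (ms) and availability (% of probes answered)."""
    answered = [s for s in samples if s is not None]
    availability = 100.0 * len(answered) / len(samples) if samples else 0.0
    avg_rt = sum(answered) / len(answered) if answered else None
    return {"RT_ms": avg_rt, "AV_percent": availability}

# Example (hypothetical endpoint URL):
# samples = [probe_response_time("http://example.com/ValidateEmail?wsdl")
#            for _ in range(20)]
# print(summarize_probes(samples))
```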
Clients can submit their requests (i.e. via a GUI) to the WSRB framework, which processes them and computes WsRF values for all matching services. It is assumed that WsRF values are computed for Web services that are in the same domain. The Web service with the highest calculated WsRF value is considered to be the most desirable and relevant to a client based on his/her preferences. Assume that there is a set of Web services that share the same functionality, WS = {ws1, ws2, ws3, ..., wsi}, and a set of QoS attributes, P = {p1, p2, p3, ..., pj}; a QoS-based computational algorithm determines which wsi is relevant based on the QoS constraints provided by the client. Using j criteria for evaluating a given Web service, we obtain the following WsRF matrix, in which each row represents a single Web service wsi, while each column represents a single QoS parameter Pj:

E = \begin{bmatrix} q_{1,1} & q_{1,2} & \cdots & q_{1,j} \\ q_{2,1} & q_{2,2} & \cdots & q_{2,j} \\ \vdots & \vdots & \ddots & \vdots \\ q_{i,1} & q_{i,2} & \cdots & q_{i,j} \end{bmatrix}    (1)

Due to the fact that QoS parameters vary in units and magnitude, the E(q_{i,j}) values must be normalized before WsRF computations and QoS-based ranking can be performed. Normalization provides a more uniform distribution of QoS measurements that have different units. In addition, normalization allows weights or thresholds to be associated with QoS parameters and provides clients with effective ways to fine-tune their QoS search criteria.

In order to calculate WsRF(wsi), we need the maximum normalized value for each Pj column. Let N be an array where N = {n_1, n_2, n_3, ..., n_m} with 1 ≤ m ≤ i, such that:

N(j) = \sum_{m=1}^{i} q_{m,j}    (2)

where q_{m,j} represents the actual value from the WsRF matrix in (1). Each element in the WsRF matrix is compared against the maximum QoS value in the corresponding column based on the following equation:

h_{i,j} = \frac{q_{i,j}}{\max(N(j))}    (3)

where h_{i,j} measures the difference of q_{i,j} from the maximum normalized value in the corresponding QoS property group, or j column.

In order to allow for different circumstances, there is an apparent need to weight each factor relative to the importance or magnitude that it contributes to ranking Web services based on QoS parameters. Therefore, we define an array w = {w1, w2, w3, ..., wj} that represents the weight contribution for each Pj. Each weight in this array represents the degree of importance, or weight factor, associated with a specific QoS property. The weights are fractions in the sense that they range from 0 to 1, and all weights must add up to 1. Each weight is proportional to the importance of a particular QoS parameter to the overall Web service relevancy ranking: the larger the weight of a specific parameter, the more important that parameter is to the client, and vice versa. The weights are obtained from the client via a client interface. Introducing these weights into (3) results in the following equation:

h_{i,j} = w_j \cdot \frac{q_{i,j}}{\max(N(j))}    (4)

Applying (4), we get the weighted matrix shown below:

E' = \begin{bmatrix} h_{1,1} & h_{1,2} & \cdots & h_{1,j} \\ h_{2,1} & h_{2,2} & \cdots & h_{2,j} \\ \vdots & \vdots & \ddots & \vdots \\ h_{i,1} & h_{i,2} & \cdots & h_{i,j} \end{bmatrix}    (5)

Once each Web service QoS value has been compared with its corresponding set of QoS values in the same group, we can calculate the WsRF for each Web service as shown below:

WsRF(ws_i) = \sum_{j=1}^{N} h_{i,j}    (6)

where N represents the number of QoS parameters considered for the given set of Web services.

To demonstrate how WsRF works, consider a simple example in which a client assigns weights to the QoS properties discussed earlier as follows: w1 = 0, w2 = 0, w3 = 0, w4 = 0, w5 = 0, and w6 = 1. From the assigned weights, it is clear that the last weight, w6 (cost), represents the most important QoS property to this client. The importance level assigned to each QoS parameter varies, since QoS properties vary in units. Because each QoS property chosen by a client has an associated unit that differs from the other properties (i.e. response time is represented in milliseconds while cost is represented in cents), each weight represents a different degree of significance which must be optimized. For example, a client that sets all weights to zero except for cost indicates that WsRF should minimize cost, since it represents 100% of the significance to the client. Therefore, WsRF will eventually return the cheapest Web service in this case.
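To make the computation above concrete, the following is a minimal sketch of equations (1)-(6) in Python, using the Table I values from Section IV as sample input. Two points are assumptions rather than statements from the paper: the prose says WsRF should minimize response time and cost, but the printed formulas do not spell out how "lower is better" parameters are handled, so the sketch simply inverts them as 1 - q/max; consequently, the numbers it produces are not expected to reproduce the published WsRF values exactly.

```python
# Sketch of the WsRF computation in equations (1)-(6). Column-maximum
# normalization follows (3)-(4); inverting RT and C (lower is better) is an
# assumption taken from the prose, not from the printed formulas.
PARAMS = ["RT", "TP", "AV", "AC", "IA", "C"]
LOWER_IS_BETTER = {"RT", "C"}  # assumed inversion for these parameters

def wsrf_scores(E, weights):
    """E: list of rows (one per Web service), each a list of raw QoS values
    q_{i,j} in PARAMS order. weights: list w_j summing to 1.
    Returns one WsRF value per service (sum over j of h_{i,j})."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must add up to 1"
    col_max = [max(row[j] for row in E) for j in range(len(PARAMS))]
    scores = []
    for row in E:
        score = 0.0
        for j, q in enumerate(row):
            normalized = q / col_max[j] if col_max[j] else 0.0
            if PARAMS[j] in LOWER_IS_BETTER:
                normalized = 1.0 - normalized   # assumed handling of RT and C
            score += weights[j] * normalized    # equation (4)
        scores.append(score)                    # equation (6), summed over j
    return scores

if __name__ == "__main__":
    # Worked check for the all-weight-on-cost example in the text:
    # the free service (cost 0, ws6 in Table I) should come out on top.
    E = [
        [720, 6.00, 85, 87, 80, 1.2],    # ws1
        [1100, 1.74, 81, 79, 100, 1],    # ws2
        [710, 12.00, 98, 96, 100, 1],    # ws3
        [912, 10.00, 96, 94, 100, 7],    # ws4
        [910, 11.00, 90, 91, 70, 2],     # ws5
        [1232, 4.00, 87, 83, 90, 0],     # ws6
        [391, 9.00, 99, 99, 90, 5],      # ws7
    ]
    cost_only = [0, 0, 0, 0, 0, 1]
    scores = wsrf_scores(E, cost_only)
    best = max(range(len(scores)), key=lambda i: scores[i])
    print("WsRF:", [round(s, 3) for s in scores], "best = ws%d" % (best + 1))
```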
IV. EXPERIMENTS AND RESULTS

The data used in this paper are based on actual Web service implementations that are available over the Web and are listed at XMethods.net, XMLLogic, and StrikeIron.com. Seven Web services were chosen from the same domain; they all share the same functionality of validating an email address. The QoS parameters discussed in Section III were used to evaluate QoS metrics for all seven Web services. Table I shows the average QoS metrics for these Web services over four test trials run on two different networks. The QoS parameter cost is represented in cents and was provided by the service provider. In addition, availability and accessibility values were measured over a three-day period.
TABLE I. QOS METRICS FOR VARIOUS AVAILABLE EMAIL VERIFICATION WEB SERVICES

ID  Service Provider & Name                 RT (ms)  TP (req/min)  AV (%)  AC (%)  IA (%)  C (cents/invoke)
1   XMLLogic ValidateEmail                  720      6.00          85      87      80      1.2
2   XWebservices XWebEmailValidation        1100     1.74          81      79      100     1
3   StrikeIron Email Verification           710      12.00         98      96      100     1
4   StrikeIron Email Address Validator      912      10.00         96      94      100     7
5   CDYNE Email Verifier                    910      11.00         90      91      70      2
6   Webservicex ValidateEmail               1232     4.00          87      83      90      0
7   ServiceObjects DOTS Email Validation    391      9.00          99      99      90      5

Table I shows the QoS values that were measured by WS-QoSMan for the seven Web services. In order to find the most relevant Web service, there needs to exist a set of Web services that share the same functionality and whose QoS metrics are measured against the same QoS criteria. Once WSRB has successfully generated the necessary QoS metrics, these values are used as inputs to WsRF and the matrix in (1) is established. It is important to note that WSRB does not necessarily have to perform a QoS metrics check when a client issues a request; rather, it measures these metrics at an update interval. This provides WSRB with up-to-date QoS information that is ready and available upon client requests in real-time scenarios.

The values shown in Table I represent the QoS values measured for the seven different Web services. In order to find the most suitable Web service, it is important to optimize the values for each QoS parameter. For instance, a Web service with a higher accessibility percentage is preferable to one with a lower accessibility percentage; in this case, WsRF will maximize accessibility. However, for some other QoS parameters, such as cost, WsRF will minimize them.

Applying WsRF in (6) without any associated weights, the results in Table II are obtained.

TABLE II. RESULTS OF WSRF WITHOUT WEIGHTS

ID  Service Provider & Name                 WsRF
1   XMLLogic ValidateEmail                  3.6638
2   XWebservices XWebEmailValidation        3.2166
3   StrikeIron Email Verification           4.6103
4   StrikeIron Email Address Validator      4.1955
5   CDYNE Email Verifier                    3.9246
6   Webservicex ValidateEmail               4.2679
7   ServiceObjects DOTS Email Validation    4.6700

From Table II, the Web service with the highest WsRF value is the one with the best QoS metrics. For this example, WsRF determines that the best Web service, without any dependency on a specific QoS parameter (i.e. a keyword-based search), is Web service number seven. Figure 2 shows the results of computing WsRF values for all Web services listed in Table I using a keyword-based search technique versus a client-controlled search in which cost represents the most important QoS parameter (i.e. running WsRF heavily dependent on cost).

Figure 2. Results from running WsRF heavily dependent on cost vs. a keyword-based search (client-controlled vs. keyword-based; y-axis: WsRF(wsi); x-axis: Web Service Number; series: "QoS Specified" and "Generic Search (No QoS)")
Based on the QoS values in Table I, Web service number six has the lowest cost (zero implies that it is offered at no cost), which complies with the results obtained from running WsRF heavily dependent on cost, as Figure 2 demonstrates (i.e. Web service number six has the highest WsRF value). When analyzing the results in Figure 2, it is important to note that associating at least one QoS parameter yields WsRF values ranging from 0 to 1, while a broad search that is not QoS-specific (i.e. WsRF without weights) produces WsRF values ranging from 3.22 to 4.67. Smaller values mean a smaller standard deviation and, therefore, faster convergence of WsRF to a solution. The smaller standard deviation obtained using WsRF outperforms the keyword-based search technique in the sense that it provides a very good estimate of the true or optimal value while producing precise and accurate results. To demonstrate the effectiveness of our ranking technique and how it outperforms discovery methods that merely depend on keyword-based techniques, we will consider six test scenarios, each representing a different combination of QoS requirements (the sketch below illustrates how such weight combinations translate into rankings).
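The scenario weights that are stated explicitly in the text (75%/25% on RT/TP, 80%/20% on RT/C, and an equal 1/6 split) can be plugged into the ranking directly. The sketch below is illustrative only: it reuses the same simplified WsRF computation as before, including the assumed inversion of the "lower is better" parameters (RT and C), so its rankings and values are not expected to match the published figures exactly.

```python
# Illustrative only: rank the Table I services under the explicitly stated
# scenario weight vectors. The inversion of RT and C (lower is better) is an
# assumption taken from the prose, not from the printed formulas.
PARAMS = ["RT", "TP", "AV", "AC", "IA", "C"]
LOWER_IS_BETTER = {"RT", "C"}

TABLE_I = [  # rows follow Table I: ws1 .. ws7
    [720, 6.00, 85, 87, 80, 1.2],
    [1100, 1.74, 81, 79, 100, 1],
    [710, 12.00, 98, 96, 100, 1],
    [912, 10.00, 96, 94, 100, 7],
    [910, 11.00, 90, 91, 70, 2],
    [1232, 4.00, 87, 83, 90, 0],
    [391, 9.00, 99, 99, 90, 5],
]

SCENARIOS = {  # only the weight vectors given explicitly in the text
    "RT 75% / TP 25%": [0.75, 0.25, 0, 0, 0, 0],
    "RT 80% / C 20%": [0.80, 0, 0, 0, 0, 0.20],
    "equal weights (1/6)": [1 / 6] * 6,
}

def wsrf(row, col_max, weights):
    score = 0.0
    for j, q in enumerate(row):
        normalized = q / col_max[j]
        if PARAMS[j] in LOWER_IS_BETTER:   # assumed inversion
            normalized = 1.0 - normalized
        score += weights[j] * normalized
    return score

col_max = [max(r[j] for r in TABLE_I) for j in range(len(PARAMS))]
for name, weights in SCENARIOS.items():
    scores = [wsrf(r, col_max, weights) for r in TABLE_I]
    ranking = sorted(range(7), key=lambda i: scores[i], reverse=True)
    print(name, "->", ["ws%d" % (i + 1) for i in ranking])
```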
A. Test Scenario 1

Figure 3 shows a test run of WsRF with response time as the most important QoS parameter. The results in Figure 3 demonstrate that Web service number seven has the highest WsRF value, i.e. it is the one with the fastest response time (0.391 seconds), which conforms to the data in Table I.

Figure 3. QoS ranking heavily dependent on RT (WsRF(wsi) vs. Web Service Number)

B. Test Scenario 2

Figure 4 shows another test, running WsRF with more emphasis on the maximum throughput (TP). The results in Figure 4 demonstrate that Web service number three has the highest WsRF value, which is consistent with the data in Table I, where this Web service has the highest throughput of 12 requests per minute.

Figure 4. QoS ranking heavily dependent on TP (WsRF(wsi) vs. Web Service Number)
C. Test Scenario 3

Another test was conducted by varying the weights for response time (RT) and throughput (TP). In this test, response time has more weight than throughput, which yields the results presented in Figure 5. The results in Figure 5 demonstrate that Web service number seven has the highest WsRF value, while Web service number three has the second-highest WsRF value. By comparing the results in Figure 5 to those in Figures 3 and 4, it is apparent that the Web service with the highest WsRF value in Figure 3 dominates the ranking in this test, while the one in Figure 4 (Web service three) has a lower WsRF value, because the requirements for this test place more emphasis (weight) on RT than on TP.
Figure 5. QoS ranking with more emphasis on RT (75%) than TP (25%)

D. Test Scenario 4

Another test was conducted by varying the weights for response time and cost. Figure 6 shows the results of running this test, in which the QoS ranking is heavily dependent on RT (80%) but also takes cost into account (20%). Figure 6 shows that Web service number seven has the highest WsRF value (0.8040), followed by the third Web service (WsRF value of 0.4606).

Figure 6. QoS ranking dependent on RT (80%) and C (20%) (data labels from the figure: 0.3044, 0.3458, 0.3537, 0.4511, 0.4539, 0.4606, 0.8040)

The results shown in Figure 6 can be compared to those in Test Scenario 1 in the sense that, in both of these scenarios, response time is the dominant QoS parameter. However, the WsRF values change slightly in Test Scenario 4: response time remains the dominant QoS parameter, but cost is also taken into consideration. Table III shows the ranks for both scenarios and the ranking variation for each Web service (a small check of the ∆ Rank columns follows the table).

TABLE III. RANKING DEVIATION FOR TEST SCENARIOS 1 AND 4

       Test Scenario 1     Test Scenario 4
wsi    WsRF    Rank        WsRF    Rank        ∆ Rank    ∆ Rank %
ws1    0.54    3           0.45    4           -1        -14.29
ws2    0.36    6           0.30    7           -1        -14.29
ws3    0.55    2           0.46    2            0          0.00
ws4    0.43    5           0.35    6           -1        -14.29
ws5    0.43    4           0.35    5           -1        -14.29
ws6    0.32    7           0.45    3           +4         57.14
ws7    0.99    1           0.80    1            0          0.00
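As a quick check of the ∆ Rank columns, the rank deltas and their percentages can be recomputed from the two rank orderings in Table III; a positive delta means the service moved up, and the percentage expresses the move relative to the seven available positions. This is plain arithmetic on the table values, not part of the WsRF definition.

```python
# Recompute the Delta Rank columns of Table III from the two rank orderings.
# Delta Rank = rank in Scenario 1 minus rank in Scenario 4 (positive = moved up);
# Delta Rank % expresses the move relative to the 7 available positions.
ranks_scenario_1 = {"ws1": 3, "ws2": 6, "ws3": 2, "ws4": 5, "ws5": 4, "ws6": 7, "ws7": 1}
ranks_scenario_4 = {"ws1": 4, "ws2": 7, "ws3": 2, "ws4": 6, "ws5": 5, "ws6": 3, "ws7": 1}

total = 0
for ws in sorted(ranks_scenario_1):
    delta = ranks_scenario_1[ws] - ranks_scenario_4[ws]
    total += delta
    delta_pct = 100.0 * delta / len(ranks_scenario_1)
    print(f"{ws}: delta rank = {delta:+d}, delta rank % = {delta_pct:.2f}")
print("sum of delta ranks =", total)  # rank gains and losses cancel out
```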
Table III demonstrates the ranking variations between Test Scenarios 1 and 4. Although the dimensionality of the QoS parameters changed from Test Scenario 1 to Test Scenario 4, in the sense that an additional QoS parameter (cost) was introduced into the ranking, Web service number seven has the highest WsRF value in both tests. The major change is reflected in Web service number six, whose rank improved significantly by four positions, or 57.1% in terms of percentage ranking difference: Web service number six gained, or moved up, four ranking levels (+4) when the cost parameter was introduced into the ranking technique. Because Web service number six has the lowest cost, its WsRF value and ranking were affected considerably. It is important to note that the total of the changes that occurred from Test Scenario 1 to this test scenario (the total ∆ Rank) adds up to zero: rank gains are offset by rank losses, since the changed WsRF values result from compromising one QoS parameter against another. It is also important to note that WsRF takes into consideration an acceptable error range that can be assigned by clients, which allows the computed WsRF values to be more accurate.

Varying the QoS parameters and their associated degrees of importance (i.e. weights) significantly affects the ranking results, or WsRF values. When varying these weights, WsRF values may potentially overlap with each other, which we refer to as the QoS point of reflection: a point that determines the changes required in the QoS parameters of two or more Web services such that their WsRF values become approximately equal. For example, in this test scenario, varying the weights for response time to 59.8% and cost to 41.2% produces WsRF values for Web services number 6 and 7 that are approximately equal.

E. Test Scenario 5

Another test was conducted in which the weights are distributed equally across all QoS parameters (w = 0.1667), yielding the results shown in Figure 7.
Figure 7. QoS ranking with an equal distribution of weights (w = 0.1667); WsRF(wsi) vs. Web Service Number (data labels from the figure: 0.5361, 0.6106, 0.6541, 0.6993, 0.7113, 0.7684, 0.7783)

The results in Figure 7 show a small standard deviation compared to the generic-search results in Figure 2. For instance, the standard deviation of the WsRF values without any weights is 0.5197, while with equally distributed weights across all QoS parameters it is reduced significantly to 0.0866 (a short computation of both figures appears below). This shows that associating weights with WsRF significantly improves the accuracy of the ranking and enables WSRB to converge to a solution. Therefore, the more client preferences are specified, the narrower the results and the higher the performance of WsRF.
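The two standard-deviation figures quoted above can be reproduced directly as sample standard deviations: the 0.5197 value from the Table II (unweighted) WsRF results, and the 0.0866 value from the WsRF values read off Figure 7's data labels. A small check using only the Python standard library:

```python
import statistics

# WsRF values from Table II (no weights) and from Figure 7 (equal weights).
wsrf_no_weights = [3.6638, 3.2166, 4.6103, 4.1955, 3.9246, 4.2679, 4.6700]
wsrf_equal_weights = [0.5361, 0.6106, 0.6541, 0.6993, 0.7113, 0.7684, 0.7783]

# statistics.stdev computes the sample standard deviation (n - 1 denominator),
# which matches the figures reported in the text.
print(round(statistics.stdev(wsrf_no_weights), 4))     # ~0.5197
print(round(statistics.stdev(wsrf_equal_weights), 4))  # ~0.0866
```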
V. CONCLUSION

This paper has presented a Web service relevancy ranking function based on QoS parameters for finding the best available Web service during the Web service discovery process, given a set of client QoS preferences. The use of non-functional properties of Web services significantly improves the probability of producing relevant results. The proposed solution has shown the usefulness and effectiveness of incorporating QoS parameters as part of the search criteria and of distinguishing Web services from one another during the discovery process. The ability to discriminate when selecting appropriate Web services relies on the client's ability to identify appropriate QoS parameters. The proposed solution provides an effective Web service relevancy function for ranking and finding the most relevant Web services. For future work, we plan to extend the QoS parameters to include information such as reputation, penalty rates, reliability, and fault rates.
REFERENCES
[1] Al-Masri, E., and Mahmoud, Q. H., "A Framework for Efficient Discovery of Web Services across Heterogeneous Registries", Proc. of the IEEE Consumer Communications and Networking Conference, 2007, pp. 415-419.
[2] UDDI Version 3.0.2 Specifications, October 2004, http://uddi.org/pubs/uddi_v3.htm.
[3] Ali, A., Rana, O., Al-Ali, R., and Walker, D., "UDDIe: An Extended Registry for Web Services", Proc. of the 2003 Symposium on Applications and the Internet Workshops, 2003, pp. 85-89.
[4] Martin-Diaz, O., Ruiz-Cortes, A., Corchuelo, R., and Toro, M., "A Framework for Classifying and Comparing Web Services Procurement Platforms", Proc. of the 1st Int'l Web Services Quality Workshop, Italy, 2003, pp. 37-46.
[5] Ran, S., "A Model for Web Services Discovery with QoS", ACM SIGecom Exchanges, 4(1), 2003, pp. 1-10.
[6] Kumar, A., El-Geniedy, A., and Agrawal, S., "A Generalized Framework for Providing QoS Based Registry in Service-Oriented Architecture", Proc. of the IEEE International Conference on Services Computing, 2005, pp. 295-301.
[7] Liu, Y., Ngu, A., and Zeng, L., "QoS Computation and Policing in Dynamic Web Service Selection", Proc. of the 13th International World Wide Web Conference, 2004.
[8] Zeng, L., Benatallah, B., Dumas, M., Kalagnanam, J., and Sheng, Q. Z., "Quality Driven Web Services Composition", Proc. of the 12th International World Wide Web Conference, 2003, pp. 411-421.
[9] Sheth, A., Cardoso, J., Miller, J., and Koch, K., "Web Services and Grid Computing", Proc. of the Conference on Systemics, Cybernetics and Informatics, Florida, 2002.
[10] Sreenath, R., and Singh, M. P., "Agent-based Service Selection", Journal of Web Semantics, 1(3), 2004.
[11] Larkey, L., "Automatic Essay Grading Using Text Classification Techniques", ACM SIGIR, 1998.
[12] Menascé, D., "QoS Issues in Web Services", IEEE Internet Computing, 6(6), 2002, pp. 72-75.
[13] Mindreef SOAPScope, http://home.mindreef.com (last accessed May 2007).
[14] Al-Masri, E., and Mahmoud, Q. H., "Crawling Multiple UDDI Business Registries", Proc. of the 16th International World Wide Web Conference, 2007, pp. 1255-1256.
[15] Al-Masri, E., and Mahmoud, Q. H., "Discovering the Best Web Service", Proc. of the 16th International World Wide Web Conference, 2007, pp. 1257-1258.