This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TMC.2014.2370648, IEEE Transactions on Mobile Computing
Exploiting Client-Side Collected Measurements to Perform QoS Assessment of IaaS

Ammar Kamel, Ala Al-Fuqaha and Mohsen Guizani†
Department of Computer Science, Western Michigan University, Kalamazoo, MI, USA
†Qatar University, Doha, Qatar
{ammar.m.kamel, ala.al-fuqaha}@wmich.edu, †[email protected]
Abstract— Delivering reliable service offerings to clients remains a challenging aspect of today’s cloud infrastructure. A broad number of research studies have undertaken the service evaluation process from one side only; that is, the infrastructure’s perspective. Conversely, clients’ assessment of the service has been mostly neglected. In this paper, we propose a client-side service evaluation approach that relies mainly on the clients’ assessment of the infrastructure’s service offerings. The proposed approach utilizes the strength of Social Network Analysis (SNA) principles in conjunction with Extreme Value Theory (EVT) to converge to a precise Quality of Service (QoS) model. Our goal in this research is to build precise QoS models to predict the performance of clients that exhibit similar behaviors. Thus, we develop a novel SNA-based clustering algorithm that analyzes the strength of the interconnection links between clients and clusters related clients into communities of similar behavior. The proposed approach is effective in providing Infrastructure as a Service (IaaS) providers with a better assessment tool to evaluate and improve their service offerings. The experimental results of the proposed approach on GENI’s SEATTLE platform demonstrate its ability to enhance the prediction of the performance of IaaS service offerings.

Index Terms— IaaS, QoS Service Assessment, Social Network Analysis, Extreme Value Theorem, Generalized Pareto Distribution, Peak over Threshold, Integer Linear Programming.
I. INTRODUCTION
Delivering assured performance to mobile clients of cloud-hosted services is a challenging problem. Nowadays, researchers and practitioners are paying much attention to delivering performance-optimized cloud services to mobile users. Towards this end, service analytics provides the insight that enables IaaS providers to assess the performance of their offerings and take actions to increase customer satisfaction. Currently, most QoS techniques evaluate IaaS performance using service measurements collected by the network elements (i.e., network-side monitoring). However, this process does not take into account the service performance from the clients' perspective and may conflict with the Service Level Agreement (SLA). Table 1 provides a comparison between the client-side and IaaS-side approaches.
Table 1: Comparison of the client-side and server-side QoS assessment approaches.

Client-Side Approach:
- Provides important input that can be used by service providers to tune the performance of their service offerings.
- Empowers customers by giving them quantitative evidence that can be used to compare the performance of similar services offered by different IaaS providers.
- It is a cooperative approach that provides a compromise service plan for both the IaaS provider and the clients to achieve better network service performance.

Server-Side Approach:
- Unpredictable service degradations and failures may result in violations of the SLA.
- The service assessment process relies only on the service provider’s perspective.
- The collected data does not fully reflect the customer’s experience with the requested services.
In order to overcome the limitations of server-side QoS monitoring, much research has been conducted to present alternative architectures and algorithms for client-side QoS service assessment in telecommunications networks. Furthermore, effective tools, such as SNA and EVT, have been utilized in the recent literature to mitigate the potential negative impacts of network service degradations and failures [23]. In computer networking and telecommunication applications, SNA has great potential to enhance the performance of the network infrastructure through analyzing the network measurements that have been collected during the monitoring process [5, 17, 21]. It can be applied to discover the relationships among network nodes, which in turn can lead to better prediction of network behavior. On the other hand, EVT models can serve as an efficient technique to extract the collected extreme measurements and build suitable distribution models that can be used to predict anomalous
1536-1233 (c) 2013 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
network service degradations through the service monitoring process [4,15,19].

The main impetus of this research is to combine these two techniques to build a precise QoS model based on a client-side service monitoring and assessment process. Precisely, EVT provides an opportunity to throttle the amount of data delivered from the clients to the broker, since the focus is on extreme measurements that are unusually high beyond a preset threshold. SNA is then utilized to cluster the extreme measurements in order to exclude outliers or malevolent measurements delivered to the broker. Our experimental results demonstrate that the proposed joint approach (i.e., utilizing EVT and SNA jointly) leads to a more precise service prediction model than the baseline case of using EVT alone, without SNA. The proposed approach can help IaaS providers build more accurate performance prediction models for their services. IaaS providers also benefit by delegating the monitoring task to the clients while ensuring that outlier and biased measurements, potentially generated by malevolent clients, are excluded before building the prediction model.

The rest of the paper is organized as follows. Section II reviews the statistical techniques and tools that form the core of the proposed research, along with studies that adopt a client-side QoS approach. The client-based QoS monitoring architecture is presented in Section III. The proposed QoS assessment approach and an Integer Linear Programming (ILP) formulation of the problem are presented in Section IV. Section V provides a detailed analysis and discussion of our experiments and results to illustrate the efficiency of the proposed approach. Finally, Section VI concludes our study and presents future research directions.
II. BACKGROUND AND RELATED WORK

A. Background
We consider readings as extremes when they exceed a predefined threshold. Extreme readings are generally represented in the tail of a probability distribution function. Extreme Value Theory (EVT) is a powerful statistical tool for modeling the tail of a statistical distribution [15, 16]. It provides the ability to model the stochastic behavior of an event at different time scales. The Generalized Pareto Distribution (GPD) and the Generalized Extreme Value (GEV) distribution are the two main families of EVT models. EVT, of both the GEV and GPD types, has been utilized widely by network researchers to overcome the limitations of the Central Limit Theorem (CLT), where the focus is on the mean of the measurements.

The GPD provides a useful tool to evaluate and model short-term extreme observations (e.g., hourly and daily extreme events). Moreover, it utilizes threshold excess techniques to quantify the extremes that exceed predefined thresholds; these threshold selection techniques are called Peak over Threshold (POT). Alternatively, the GEV models long-term extreme observations using the Block Maxima (BM) technique. Adopting the GPD approach has been widely documented in the literature to model network traffic [2]. The GPD distribution function is given by:

F(x) = 1 − (1 + ξx/σ)^(−1/ξ),   ξ ≠ 0
F(x) = 1 − e^(−x/σ),            ξ = 0          (1)

where x is an extreme value, σ > 0 is the scale parameter, and ξ is the shape parameter, which takes ξ > 0 for the ordinary Pareto distribution and ξ < 0 for the Pareto II-type distribution; the case ξ = 0 is known as the exponential distribution. It is important to mention that the shape parameter is invariant to the data block size, unlike that of the GEV. That is, choosing a large block size would affect the GEV’s parameter values, but not the GPD’s parameters, especially the shape parameter [6,19]. Figure 1 shows the GPD family.

Fig. 1: The Generalized Pareto Distribution (GPD) family (probability density versus data).

To select an appropriate GPD model that fits the collected extremes, a suitable threshold should be carefully chosen. Selecting a threshold that is too high would generate few samples, leading to high variance, whereas selecting a threshold that is too low would bias the distribution model. The objective of threshold selection is to pick a threshold that makes the model a reasonable approximation of the distribution of the extreme event. Several techniques can be used to find an appropriate threshold. The Mean Residual Life plot (also referred to as the Mean Excess Plot) applies the GPD to a range of thresholds in order to evaluate the stability of the parameters and to determine a good threshold selection [6].

• Mean Residual Life Plot (MRL): Assume that x1,…,xn is a sequence of collected measurements, and x(1),…,x(k) is the subset of data points (extreme events) that exceed a certain threshold u, i.e., {x(i) : x(i) > u}. Define the threshold excesses by y(j) = x(j) − u for j = 1,…,k. The
following set of points defines the Mean Residual Life plot (MRL):

{ (u, (1/n_u) Σ_{i=1}^{n_u} (x(i) − u)) : u < x_max }

where n_u is the number of exceedances over u and x_max is the largest of the extreme measurements. The Mean Residual Life plot should exhibit approximate linearity above the selected threshold u for u to be considered a suitable threshold [6].
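As a concrete illustration, the mean-excess computation behind the MRL plot can be sketched as follows (a minimal Python sketch; the sample RTT values and candidate thresholds are illustrative, not taken from our experiments):

```python
import statistics

def mean_residual_life(samples, thresholds):
    """For each candidate threshold u, return (u, mean excess), where the
    mean excess is the average of (x - u) over all samples x exceeding u."""
    curve = []
    for u in thresholds:
        excesses = [x - u for x in samples if x > u]
        if excesses:  # skip thresholds that leave no exceedances
            curve.append((u, statistics.mean(excesses)))
    return curve

# Illustrative RTT-like measurements (seconds)
rtts = [0.21, 0.25, 0.3, 0.42, 0.55, 0.61, 0.7, 0.82, 0.95, 1.4]
for u, me in mean_residual_life(rtts, [0.2, 0.4, 0.6, 0.8]):
    print(f"u={u:.1f}  mean excess={me:.3f}")
```

Plotting this curve and looking for the region where it becomes approximately linear gives the threshold selection described above.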
• Parameters Estimation against Thresholds (PET): The idea behind this approach is to fit the GPD model to the data repeatedly using a range of different thresholds. The GPD’s shape and scale parameters are extracted each time, and their stability is then checked. The best threshold is one at which the shape and scale parameters become stable. However, it is recommended to adopt the lowest such threshold so that the GPD model provides a reasonable approximation of the distribution of the underlying extreme event [6].
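A PET-style stability table can be sketched as follows, assuming a simple method-of-moments GPD estimator (an assumption made for illustration; the estimator is not fixed by the description above, and maximum-likelihood fitting is a common alternative):

```python
import statistics

def gpd_mom(excesses):
    """Method-of-moments GPD estimates from threshold excesses:
    shape xi = (1 - m^2/v) / 2 and scale sigma = m * (1 - xi),
    where m, v are the sample mean and variance (valid for xi < 1/2)."""
    m = statistics.mean(excesses)
    v = statistics.variance(excesses)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

def pet_table(samples, thresholds):
    """Fit the GPD to the excesses over each candidate threshold and report
    (u, xi, sigma); a roughly constant xi across u suggests a good threshold."""
    rows = []
    for u in thresholds:
        excesses = [x - u for x in samples if x > u]
        if len(excesses) >= 2:  # need at least two points for a variance
            xi, sigma = gpd_mom(excesses)
            rows.append((u, xi, sigma))
    return rows
```

The lowest threshold beyond which the reported shape and scale stop drifting is then adopted, per the recommendation above.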
Figures 2 and 3 illustrate an example of selecting the best threshold u after applying the MRL and PET techniques, respectively. Adopting new research ideas in network behavioral analysis has been the focus of diverse studies that deal with critical issues in the networked-systems field. These studies investigate the possibility of undertaking a new dimension of network behavioral analysis by utilizing Social Network Analysis (SNA) techniques to study the network as an active community and distinguish the structure and characteristics among the network’s nodes. SNA has taken a step forward towards differentiating network topologies by exploring the strength of the network’s underlying connections.
Fig. 3: Parameters Estimation against Thresholds (PET) plot (selected threshold u = 0.58).

The simplest SNA model consists of nodes (actors) and links (relationships) that represent the flow among nodes. Furthermore, SNA provides dedicated centrality measures, such as degree, betweenness, closeness, and eigenvector centrality, that can be used to analyze and divide network nodes into subgroups such that each subgroup or cluster reflects a specific relation pattern among certain nodes. Indeed, finding such patterns can provide valuable information that leads to a better understanding of the nodes’ relationships and the network’s structure [20]. From the SNA perspective, there are two types of network clustering techniques; namely, community-structure and social-positions based clustering [5]. Community-structure clustering is determined based on graph topology. That is, “clustered nodes are those densely intra-connected in the graph structure while some loosely inter-connected nodes locate between clusters” [17], [10]. On the other hand, the social-positions clustering approach classifies nodes based on the similarity of their connections as well as specific common patterns such as similar neighbors [5]. In our research, we adopt social-positions clustering to group the clients.

B. Related Work
In [12], Lahyani et al. utilized EVT to analyze QoS measurements by developing a monitoring module that detects QoS degradations in publish/subscribe (P/S) systems. The proposed approach aims to discover immediate link failures between brokers by utilizing the Gumbel and Gaussian distributions. However, this approach builds the monitoring module in the network elements instead of the application, which limits the mobility of the monitoring process. Thio et al. [18] presented a client-side QoS performance analysis framework for web services (WS) that utilizes two
Fig. 2: Mean Residual Life (MRL) plot (selected threshold u = 0.58).
processes that are based on service clients' experiences. The on-going analysis process summarizes and creates WS profiles and client profiles, while the on-demand recommendation process utilizes these profiles to evaluate the WS clients’ experiences. However, the proposed framework does not provide a mechanism to verify or enhance the accuracy of the collected performance data.

Our previous work [4] investigated the possibility of applying the client-based approach to measuring service performance in the cloud computing environment. We presented a technique for early detection of cloud service degradations that utilizes the GPD. The proposed technique employs the GPD to construct an accurate QoS model by fitting the network measurements (extremes) collected by the reserved Virtual Machines (VMs). Furthermore, by applying a data aggregation process, the proposed approach is capable of providing multi-level service performance assessment through analyzing the extreme measurements collected from VMs, zones and datacenters.

Oberortner et al. [8] proposed an Architectural Design Decision Model (ADDM) that measures, stores and assesses the performance-related QoS agreements of service-based systems. This model is used to build a QoS monitoring architecture that evaluates the QoS properties against the negotiated SLAs. The proposed model can be reconfigured to reside either on the service or the client components. The model was applied to different case studies to investigate the SLA violations of multimedia services when clients access live streaming services. The results demonstrate the importance of clients’ feedback in improving the offered services. Another study was conducted by Chauhan et al. [13] to perform a measurement-based analysis of the impact of running Video-On-Demand (VOD) servers in parallel in a virtualized environment.
The study focused on the QoS measurements collected by clients to evaluate the VMs that suffer service degradation in the presence of overwhelming VOD server requests. This study also argued that the client side’s view is important in the sense that misbehaving servers impact the reliability of the service delivered to the clients. In [14], Wei et al. proposed a framework, eQoS, that monitors and controls the client-perceived QoS in web servers. The eQoS framework deals with the inherent process delay in servers’ resource allocation by utilizing a two-level self-tuning fuzzy controller (STFC). The proposed framework was tested under different workloads on the PlanetLab testbed to collect QoS measurements from heavily loaded web servers.

In the literature, several researchers have conducted studies to investigate the network’s structure and analyze network communities’ behavior by employing SNA techniques. Barzinpour et al. [10] proposed an algorithm to compute closeness centrality and detect communities in complex networks. The proposed algorithm aims to partition a
complex network by embedding individual node attributes into a D-dimensional space using eigenvectors of the Laplacian matrix. The Euclidean distance and k-means clustering techniques are then used in the eigenvector space to generate the network’s partitions. However, determining the number of communities is a prerequisite stage that should be performed before generating network partitions, which can be done using spectral clustering. The results show that the proposed algorithm detects inter-cluster closeness centrality and the number of clusters. Another study was conducted by C. Li et al. [5] to capture the topological and semantic properties of heterogeneous social networks. Their effort resulted in a knowledge discovery framework with three models: a tensor-based relational adjacency model; contribution-, diversity- and similarity-based centrality; and a role-based clustering schema. The role-based clustering scheme adopts social positions instead of community-structure clustering, and clusters network nodes depending on their higher-order relational connections in the network. Xu et al. [23] presented a structural clustering algorithm for networks (SCAN) to identify clusters, hubs, and outliers in networks. The proposed algorithm employs the vertex neighborhood as the clustering measure instead of the direct connections among nodes; nodes that share more neighbors can be grouped into a cluster of the same community.

III. CLIENT-BASED QOS MONITORING ARCHITECTURE

Our previous study [3] addressed the tradeoffs between network-side and client-side QoS monitoring and presented a client-based architecture for the evaluation and prediction of service degradations. Our proposed architecture, illustrated in Figure 4, delegates the role of QoS measurement collection and reporting (measurement extremes) to the clients.
In this study, the only QoS metric that we collect from the mobile clients and report to the broker is the RTT delay extremes. The reported data is provided to the Broker Manager (BM), which utilizes EVT models to predict potential service degradations and provide IaaS providers with global information about the QoS level throughout their network. The EVT-based model underlies the strength of our approach, as it helps to recognize and model service performance fluctuations throughout the network. While our proposed approach is generic and the measurements can be collected by fixed or mobile clients, we believe that the use cases that involve mobile clients are more interesting given the ubiquity of mobile devices.

In this study, we assume semi-dynamic EVT and SNA models. This means that the EVT and SNA models are built based on extreme measurements collected from mobile clients during a predetermined period of time (e.g., time of the day). While the experimental results presented in this paper show an instance of the constructed EVT and SNA models, the same process can be utilized to re-construct the models every predetermined period of time in order for the models to evolve based on the physical mobility of the underlying nodes. Further, an ensemble of models can be used to enhance the confidence of the obtained results. We focus on a single metric (i.e., RTT delay extremes) because we believe that the RTT delay extremes play a vital role in determining the QoS offered by the wireless access and wired infrastructure to the mobile clients, but other metrics can be used in conjunction with it (e.g., bandwidth, error rate, signal strength, etc.).

The following provides details of the different modules of the proposed architecture:

1. Mobile Clients (MCs): Mobile clients seek to use available services provided through different IaaS providers. The MCs gather and aggregate parameterized data that pertain to the monitored services. Collected data is then submitted to the Broker Manager module for further analysis.

2. QoS Broker: The role of the QoS broker is to collect data from the MCs and disseminate it to the relevant IaaS providers. This module also provides the clients with thresholds that oblige them to report QoS measurements whenever these thresholds are exceeded. The QoS broker can control the values of these thresholds to strike an intelligent balance between the amount of data collected from the clients and the accuracy of the EVT model. The QoS Broker comprises the following components:
• Broker Manager (BM) Module: interacts with the MCs to analyze the collected data. This module is also responsible for handling IaaS queries that seek to predict the behavior of the services over time.
• Historical Service Measurements Database: archives all service measurements submitted to the QoS broker over time.
• Service Measurement and Modeling (SMM) Database: hosts the service measurement and modeling tables. The service measurement table is used to track information about the MCs, including delay, bandwidth, number of clients and physical distance from the various IaaS providers.
• EVT Model Builder: utilizes the GEV distributions (Fréchet, Weibull and Gumbel) and GPD distribution models to fit the collected extreme measurements stored in the SMM database and tunes the models’ parameters according to the collected data. It also exploits SNA techniques to analyze the extremes’ behavior and extract the relationships among the collected data.
• Database Manager: responsible for updating the SMM database. It also categorizes the parameters of the collected data into different levels (e.g., low, medium and high).

3. IaaS Providers: IaaS providers offer dedicated services to the MCs through negotiated SLAs with the service providers and provide certain levels of QoS. Furthermore, IaaS providers utilize the QoS broker to find efficient and cost-effective alternatives to enhance their current service offerings.

In this paper, we utilize SNA techniques to discover the meaning of the relationships that exist among clients. These relationships are discovered by applying the Kendall’s Tau statistic to the measurements (extreme values) collected from the clients and quantifying the association among the clients’ measurements. Moreover, we propose a novel clustering technique that uses the calculated Kendall’s Tau statistic to group clients and construct precise QoS models for the clusters generated by the SNA clustering technique. GPD models can then be applied to predict service degradation and significantly improve the service evaluation process of IaaS providers.
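The Kendall’s Tau association between two clients’ extreme-measurement vectors can be sketched as follows (a tie-aware tau-b implementation; the RTT vectors are illustrative):

```python
from math import sqrt
from itertools import combinations

def sgn(z):
    """Sign function: 1 if z > 0, 0 if z = 0, -1 if z < 0."""
    return (z > 0) - (z < 0)

def kendall_tau_b(mck, mcl):
    """Tie-aware Kendall correlation (tau-b) between two equal-length
    vectors of extreme measurements from two mobile clients."""
    n = len(mck)
    num = sum(sgn(mck[i] - mck[j]) * sgn(mcl[i] - mcl[j])
              for i, j in combinations(range(n), 2))
    n0 = n * (n - 1) // 2
    def tie_correction(v):
        counts = {}
        for x in v:
            counts[x] = counts.get(x, 0) + 1
        return sum(t * (t - 1) // 2 for t in counts.values())
    n1, n2 = tie_correction(mck), tie_correction(mcl)
    return num / sqrt((n0 - n1) * (n0 - n2))

# Illustrative RTT-extreme vectors from two hypothetical clients
print(kendall_tau_b([0.5, 0.7, 0.9], [0.4, 0.6, 0.8]))  # perfectly concordant -> 1.0
```

A value near 1 marks a strong tie between the pair; values at or below 0 are discarded as described in Section IV.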
Fig. 4: Client-Based Service Monitoring Architecture.

IV. THE PROPOSED APPROACH

Our proposed approach exploits the attributes of SNA and presents an efficient strategy that can be employed to improve the infrastructure monitoring process and lower its overhead. It is a step towards building a service performance prediction model that combines SNA and the GPD. More specifically, our approach is based on collecting MCs’ measurements while they interact with the offered services. The collected measurements can then be used as an effective indicator of MCs’ behavior if we consider certain conditions, such as the MCs’ locations and service types.

The first stage of the proposed approach explores the correlation among active nodes (MCs). This process enables us to identify and differentiate the MCs based on their behaviors. More specifically, we measure the correlation between each pair of MCs by applying the Kendall Tau statistic to the collected node measurements. Each link that connects a randomly elected pair is ranked by a weight based on the Kendall Tau computation. Thus, the rank
value reflects the strength of the ties between the elected pair. Since the Kendall Tau correlation coefficient falls in [−1, 1], we only consider values greater than or equal to zero: 1 represents a strong tie, 0 the absence of a tie, and −1 a negative tie. We neglect negative correlations between nodes since we assume that there are no negative behaviors among the nodes. The more positive the Kendall Tau value, the closer the behaviors of the elected pair. The Kendall Tau statistic is defined as follows [1, 7]:

τ_b = ( Σ_{i<j} sgn(MCK_i − MCK_j) · sgn(MCL_i − MCL_j) ) / √( (n0 − n1)(n0 − n2) ),   1 ≤ i < j ≤ n

where

sgn(z) = 1 if z > 0; 0 if z = 0; −1 if z < 0
n0 = n(n−1)/2 = total number of possible pairs, with n = |MCK| = |MCL|
n1 = Σ_i t_i(t_i−1)/2, where t_i is the number of tied MCK values in the i-th group of tied MCK values
n2 = Σ_j u_j(u_j−1)/2, where u_j is the number of tied MCL values in the j-th group of tied MCL values
MCK, MCL = two randomly selected vectors of extremes collected from two different mobile clients

In the second stage of our approach, we cluster all pairs that achieved high Kendall Tau values (strong ties) in the first stage into communities (groups of similar behavior). Towards this end, we propose an efficient clustering algorithm that generates distinct clusters without the need to specify the number of clusters in advance. The clustering algorithm dynamically examines the social network connections and removes the weak links that have low correlation weights. This organizes the social network into clusters that exhibit similar behaviors. In the following two sub-sections, we present an ILP formulation of the nodes’ clustering problem, followed by a heuristic algorithm inspired by SNA techniques.

A. Problem Description and Formulation
In this section, we focus on formulating the proposed nodes’ behavior clustering problem using integer linear programming (ILP). The proposed formulation can be applied to small-scale social networks to construct optimal clusters that contain nodes with similar behavior based on the Kendall Tau statistic. Our objective is to
maximize the cluster size such that the nodes with strong connections can be grouped and isolated from those with weak connections. The proposed ILP model has constants, variables, constraints, and an objective function, described as follows:

• Constants:
N : number of nodes
τ : N × N input matrix whose element τij represents the Kendall Tau statistic calculated from the extreme delay measurements collected between node i and node j, with 0 ≤ |τij| ≤ 1

• Variables:
X : N × N output matrix, such that xik = 1 if node i is assigned to cluster k, and xik = 0 otherwise

• Constraints:
Σ_{k=1}^{N} xik = 1, ∀ 1 ≤ i ≤ N   (each node belongs to exactly one cluster)
xik ∈ {0, 1}, ∀ 1 ≤ i, k ≤ N

• Objective function:
max Σ_{k=1}^{N} Σ_{j=1}^{N} Σ_{i=1}^{N} [ τij · y_ijk − τij · (2 − xik − xjk) ]

where y_ijk is a binary ancillary variable that linearizes the product xik · xjk, such that
y_ijk ≤ xik
y_ijk ≤ xjk
y_ijk ≥ xik + xjk − 1
0 ≤ y_ijk ≤ 1
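The role of the ancillary variable can be checked directly: for binary x, the three linearization constraints admit exactly one feasible y_ijk, equal to the product xik · xjk, which is what keeps the objective linear. A minimal sketch:

```python
from itertools import product

def feasible_y(x_ik, x_jk):
    """Return the y values in {0, 1} satisfying the linearization
    constraints y <= x_ik, y <= x_jk, y >= x_ik + x_jk - 1."""
    return [y for y in (0, 1)
            if y <= x_ik and y <= x_jk and y >= x_ik + x_jk - 1]

# The constraints admit exactly one y, equal to the product x_ik * x_jk
for x_ik, x_jk in product((0, 1), repeat=2):
    assert feasible_y(x_ik, x_jk) == [x_ik * x_jk]
print("y_ijk = x_ik * x_jk for all binary combinations")
```

This is the standard device for expressing a product of two binary variables in an ILP without introducing a nonlinear term.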
B. Extreme Social Bond Clustering Heuristic (ESBCH)
Once the weights are determined by calculating the Kendall Tau coefficients for the network’s nodes, our proposed ESBCH can be applied. The Node Bond Factor (NBF) metric can be formalized as follows:

NBF(ni) = ( Σ_{l_ni ∈ L} w_{l_ni} ) / D_ni

where ni is any selected node in the network, D_ni is the degree centrality of node ni, l_ni is a link from ni to any other node in the network, and w_{l_ni} is the Kendall Tau weight of that link. Figure 8 illustrates the clustering process used in the ESBCH. The following provides the detailed pseudo-code of our proposed ESBCH approach:
Extreme Social Bond Clustering Heuristic (ESBCH)
Input: G = weighted network
Output: network with clustered nodes
Step 1: Compute the Node Bond Factor (NBF) of every node in G:
    NR = NBF(G)
Step 2: Choose the two nodes that have the highest NBF scores:
    NMax1 = Max(NR); NMax2 = Max(NR − NMax1)
Step 3: Find the shortest path between NMax1 and NMax2:
    Pshort = ShortPath(NMax1, NMax2)
    If Pshort == 0 then go to Step 4
    Else remove the weakest link on Pshort; go to Step 1
Step 4: End of algorithm

Figure 8 illustrates a scenario in which the ESBCH has assigned different NBFs (weights) to each link and for every path from node A to node B. The link e8 is the weakest link compared with the other links on the paths from A to B; since it has the lowest weight of 0.1, it is removed in the clustering process. Thus, by removing e8, the ESBCH produces two distinct clusters that hold only the nodes with the strongest ties.

C. Immediate Service Performance Assessment Algorithm (iSPA)
The clusters’ extremes that result from the ESBCH have to be evaluated in order to analyze and predict the service performance for potential degradations. We propose the Immediate Service Performance Assessment Algorithm to construct GPD models that fit the collected clusters’ extremes. In this algorithm, we apply the MRL and PET to compute the best threshold, which is used to identify the most influential extremes. The GPD model can then be fitted to the extremes that fall over the calculated threshold. It should be emphasized that the constructed GPD models serve as the core of the service evaluation and prediction process.
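Under our reading of the ESBCH pseudo-code above, the clustering loop can be sketched as follows (the six-node weighted graph mirrors the two-cluster scenario of Figure 8, with a weak 0.1 bridge playing the role of e8; all node names and weights are illustrative):

```python
from collections import deque

def nbf(graph, node):
    """Node Bond Factor: sum of incident link weights over degree."""
    links = graph[node]
    return sum(links.values()) / len(links) if links else 0.0

def shortest_path(graph, src, dst):
    """Unweighted BFS shortest path from src to dst; None if disconnected."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in sorted(graph[u]):
            if v not in seen:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    return None

def esbch(graph):
    """Repeatedly remove the weakest link on the shortest path between
    the two highest-NBF nodes until no such path remains (Steps 1-3)."""
    while True:
        ranked = sorted(graph, key=lambda n: nbf(graph, n), reverse=True)
        a, b = ranked[0], ranked[1]
        path = shortest_path(graph, a, b)
        if path is None:
            return graph
        # remove the weakest link on the path, in both directions
        u, v = min(zip(path, path[1:]), key=lambda e: graph[e[0]][e[1]])
        del graph[u][v]
        del graph[v][u]

# Two tightly correlated triangles joined by one weak 0.1 bridge (C-D)
clients = {
    'A': {'B': 0.9, 'C': 0.9},
    'B': {'A': 0.9, 'C': 0.8},
    'C': {'A': 0.9, 'B': 0.8, 'D': 0.1},
    'D': {'C': 0.1, 'E': 0.9, 'F': 0.95},
    'E': {'D': 0.9, 'F': 0.8},
    'F': {'D': 0.95, 'E': 0.8},
}
esbch(clients)
print('D' in clients['C'])  # False: the weak bridge was removed
```

On this graph the heuristic removes only the C-D bridge, leaving the two strongly tied triangles as separate clusters, in line with the Figure 8 scenario.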
The following provides the detailed pseudo-code of our proposed iSPA approach:

Immediate Service Performance Assessment Algorithm (iSPA)
Input: Extreme : cluster’s extremes
Output: ξ, σ : GPD model parameters
Step 1: Calculate the threshold u using MRL and PET such that it satisfies the best GPD approximation:
    u = min [ MRL(Extreme), PET(Extreme) ]
Step 2: Retrieve all extremes that are above u, and extract a suitable GPD model by estimating the shape and scale parameters:
    E_POT = { x ∈ Extreme : x > u }
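Step 2 can be sketched with a simple method-of-moments estimate of the GPD parameters from the peaks over threshold (the choice of estimator is an assumption made for illustration; maximum-likelihood fitting is the common alternative):

```python
import statistics

def fit_gpd_pot(samples, u):
    """iSPA Step 2 sketch: keep the peaks over threshold u and estimate
    the GPD shape/scale from the excesses by method of moments,
    xi = (1 - m^2/v) / 2 and sigma = m * (1 - xi), where m, v are the
    sample mean and variance of the excesses (valid for xi < 1/2)."""
    e_pot = [x - u for x in samples if x > u]  # E_POT = {x : x > u}
    m = statistics.mean(e_pot)
    v = statistics.variance(e_pot)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

# Illustrative RTT extremes (seconds), with u assumed chosen via MRL/PET
xi, sigma = fit_gpd_pot([0.3, 0.45, 0.62, 0.7, 0.9, 1.1, 1.6], u=0.58)
```

The fitted (ξ, σ) pair is the output of iSPA and feeds the service evaluation and prediction process described above.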