
International Journal of Services Computing (ISSN 2330-4472)

Vol. 3, No.4, July-September, 2015

DCA-SERVICES: A DISTRIBUTED AND COLLABORATIVE ARCHITECTURE FOR CONDUCTING EXPERIMENTS IN SERVICE ORIENTED SYSTEMS

Luiz H. Nunes¹, Júlio C. Estrella¹, Carlos H. G. Ferreira¹, Luis H. V. Nakamura¹, Rafael M. Libardi¹, Edvard M. de Oliveira¹, Bruno T. Kuehne¹, Paulo S. L. Souza¹, Regina H. C. Santana¹, Marcos J. Santana¹, Stephan Reiff-Marganiec²

¹ University of São Paulo, Institute of Mathematics and Computer Science (ICMC), São Carlos - SP, Brazil
² University of Leicester, University Road, Leicester, LE1 7RH - UK

{lhnunes, jcezar, chgferreira, nakamura, mira, edvard, btkuehne, pssouza, rcs, mjs}@icmc.usp.br
[email protected]

Abstract

Current distributed computing environments, such as Cloud Computing, Grid Computing and the Internet of Things, are typically complex and present dynamic scenarios, which makes the execution of experiments, tests and performance evaluations challenging. Performing large-scale experiments in Service-Oriented Computing (SOC) environments can be a difficult and complex task. In this paper, we propose a Distributed and Collaborative Architecture for Conducting Experiments in Service Oriented Systems (DCA-SERVICES). DCA-SERVICES is a client-server architecture that provides a real environment to execute experiments in systems based on the SOC paradigm. Using DCA-SERVICES and our tool named Planning and Execution of Experiments in Service Oriented Systems (PEESOS), we were able to execute experiments and tests and to analyze a target system environment quickly and efficiently.

Keywords: Web Services, Service Oriented Architecture, Quality of Service, Performance Evaluation, Capacity Planning, Functional Tests, Grid Computing, Cloud Computing, Distributed Systems

1. INTRODUCTION

Service-Oriented Computing (SOC) is a computing paradigm that uses individual services and their compositions as fundamental elements to deliver computational solutions in a manner similar to public utilities such as water, electricity, and telephony (Papazoglou et al., 2003; Buyya, 2010). In these environments, several organizations share resources that are geographically distributed and interconnected by wide-area networks or the Internet (Buyya, 2008). Grid computing, P2P, Cloud computing and, more recently, the Internet of Things are some of the existing distributed computing paradigms that allow sharing resources such as applications, databases and applications as a service (Coulouris et al., 2011).

The diversity and flexibility provided by distributed systems, along with workload variation, dynamic infrastructures and heterogeneous services, are the main challenges to provide and deliver application services (Buyya et al., 2010). In these dynamic distributed computing environments, resource management and scheduling challenges have been studied in several domains, such as resource and policy heterogeneity, fault tolerance, and dynamic resource management to achieve services with reasonable QoS (Quality of Service) levels. Moreover, it is extremely hard and complex to perform a controlled and repeatable study of resource management policies in distributed environments, due to the fact that resources and users are dispersed across multiple geographical locations with different access constraint policies (Buyya, 2008).

Thus, to overcome the limitations and abstract the complexity found in these environments, simulation tools such as CloudSim and GridSim have been used as a feasible technique for policy analysis and resource allocation prediction in highly distributed environments. Although these tools help to understand the scalability and behavior of a policy or system through the simulation of a specific environment, the conclusions drawn from the collected results may be inaccurate. Dynamic factors present in real environments that are not considered during simulation can influence the expected results and impact the system behavior. As an example, these tools are unable to predict the geographic distribution of user requests, since the load distribution property is very dynamic and service QoS levels change according to load variations (Buyya et al., 2010).

Measuring the performance of a service-oriented system in depth, for different application and service models under dynamic conditions such as workload variations, available services and available providers, is still a problem to tackle (Calheiros et al., 2011). In this paper, we propose a Distributed and Collaborative Architecture for Conducting Experiments in Service Oriented Systems (DCA-SERVICES). DCA-SERVICES is a client-server architecture that provides a real environment to execute experiments in systems based on the SOC paradigm. The main contributions of this architecture can be summarized as follows: 1) it enables a real collaborative environment to execute distributed experiments; 2) it allows the execution of controlled and repeatable experiments; 3) it orchestrates geographically distributed workload requests; and 4) it makes it possible to find bottlenecks and behaviors that cannot be detected with simulation tools.

The paper is organized as follows: Section 2 describes the proposed architecture and its working flow. Section 3 presents the PEESOS tool, and Section 4 describes a scenario where the architecture was deployed. The results collected in this case study are discussed in Section 5. Section 6 presents a literature review of existing approaches and SOA testing tools. Finally, conclusions and directions for future work are presented in Section 7.

2. DCA-SERVICES

DCA-SERVICES is an architecture that aims to provide a real environment to execute experiments, in a collaborative way, in systems that use the SOC paradigm. The architecture is composed of a client and a server domain, as shown in Figure 1. Both the server and the client are described as three-layer architectures. Briefly:

Figure 1. DCA-Services architecture

• API Layer: describes all interfaces used for communication between the client and server modules and the target system. This layer abstracts the implementation details of the architecture and standardizes the access between the stakeholders.
• Transport Layer: provides the mechanisms to perform the communication that receives and transfers files and data among the stakeholders. Any communication technology can be used to implement these mechanisms, such as Remote Procedure Call, Web Services or Sockets.
• Core Layer: holds the main functions of the module. It is responsible for processing all data that comes to the application and for orchestrating the tasks among the distributed applications. In addition, other modules can be added inside this layer to provide extra functionality to the architecture.

The client-server architecture centralizes all actions of DCA-SERVICES in the server module, which is responsible for managing all functionality and operations offered by the architecture. The main function of this module is to coordinate the experiment execution through different distributed clients. Inside the core layer, other modules can be added to improve the server functionality, such as workload distribution and monitors of the target system and of the available clients. This module must also be in direct contact with the target system to set up the files required by the experiment.

The client module performs the requests to the target system. It receives all the information required to make a request, such as data and files, from the server module. Like the server module, the client can have other modules added inside its core layer to improve its functionality, such as file management.
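To make the layering concrete, the sketch below expresses the three layers as Java interfaces. It is purely illustrative: none of these type or method names come from the DCA-SERVICES implementation, and the operations shown are only assumptions about what such interfaces could expose.

    import java.io.Serializable;
    import java.util.Map;

    // API layer: the contract exposed to the client modules and to the target system.
    interface ExperimentApi {
        String submitExperiment(Map<String, String> parameters);
        void deployArtifacts(String experimentId);
        ExperimentReport collectResults(String experimentId);
    }

    // Transport layer: any technology (RPC, Web Services, sockets) could implement it.
    interface Transport {
        byte[] send(String endpoint, byte[] payload);
        void transferFile(String endpoint, String localPath);
    }

    // Core layer: orchestrates the experiment tasks among the distributed clients.
    interface ExperimentCoordinator {
        void distributeWorkload(String experimentId);
        void startExecution(String experimentId);
    }

    // Placeholder result type used only by this sketch.
    class ExperimentReport implements Serializable {
        Map<String, Double> qosMetrics;
    }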

Figure 2. DCA-SERVICES operation

Figure 2 shows the DCA-SERVICES operation. First, the user sets up the experiment that will be performed by the architecture (step 1). Then, if needed, files are transferred and services are deployed in the server environment to allow the experiment execution (step 2). The collaborative clients also receive the files, such as workload distribution files and the applications that will be used to perform the requests to the target system (step 3). Finally, these files are executed and the requests are made to the target system by the collaborative clients (step 4).
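A minimal sketch of a server-side routine driving these four steps, reusing the illustrative interfaces from the sketch above; it is hypothetical code, not the DCA-SERVICES implementation.

    import java.util.Map;

    class ServerModule {
        private final ExperimentApi api;
        private final ExperimentCoordinator coordinator;

        ServerModule(ExperimentApi api, ExperimentCoordinator coordinator) {
            this.api = api;
            this.coordinator = coordinator;
        }

        // Drives the four steps of Figure 2 for a single experiment.
        String runExperiment(Map<String, String> setup) {
            String id = api.submitExperiment(setup);  // step 1: the user sets up the experiment
            api.deployArtifacts(id);                  // step 2: files transferred, services deployed
            coordinator.distributeWorkload(id);       // step 3: collaborative clients receive workload files
            coordinator.startExecution(id);           // step 4: clients issue the requests to the target system
            return id;
        }
    }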

3. PEESOS

The Planning and Execution of Experiments in Service Oriented Systems (PEESOS) tool (Nunes et al., 2014) is a prototype based on DCA-SERVICES that aims to assist the design of experiments (DOE) and their execution in Service Oriented Architectures (SOA). The main purpose of PEESOS is to capture the results of multiple executions in order to predict the performance of the target system under different resource configurations and workloads. PEESOS complements existing simulation solutions and yields more accurate results through a real workload, which helps to detect bottlenecks present in the implementation.

The DOE support of this tool is based on a set of common SOA inputs, where a model is used to generate the test cases that describe the expected behavior of an experiment. Each input affects the response variable and is called a factor. The test cases generated by this model use a full factorial design, in which every possible combination of factors and levels is covered (Jain, 1991).
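As an illustration of such a full factorial design, the sketch below enumerates every combination of factor levels to produce the test cases. It is generic example code rather than PEESOS itself; the factor names and levels are borrowed from the experiment design of Section 4 only for concreteness.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class FullFactorialDesign {

        // Returns one test case per combination of factor levels (full factorial design).
        static List<Map<String, String>> testCases(Map<String, List<String>> factors) {
            List<Map<String, String>> cases = new ArrayList<>();
            cases.add(new LinkedHashMap<String, String>());
            for (Map.Entry<String, List<String>> factor : factors.entrySet()) {
                List<Map<String, String>> expanded = new ArrayList<>();
                for (Map<String, String> partial : cases) {
                    for (String level : factor.getValue()) {
                        Map<String, String> testCase = new LinkedHashMap<>(partial);
                        testCase.put(factor.getKey(), level);
                        expanded.add(testCase);
                    }
                }
                cases = expanded;
            }
            return cases;
        }

        public static void main(String[] args) {
            Map<String, List<String>> factors = new LinkedHashMap<>();
            factors.put("flow", Arrays.asList("a", "b"));
            factors.put("numberOfClients", Arrays.asList("5"));
            factors.put("workload", Arrays.asList("exponential, mean 1000 ms"));
            // 2 x 1 x 1 = 2 test cases, each one an experiment configuration to execute.
            for (Map<String, String> testCase : testCases(factors)) {
                System.out.println(testCase);
            }
        }
    }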

Figure 3. Workflow of PEESOS (Nunes et al., 2014)

PEESOS aims to reduce the complexity of performing capacity tests in SOA by removing the burden of service deployment and workload generation for the desired experiments. The PEESOS tool performs all the steps needed to prepare a real or simulated environment to execute an experiment, in which a collaborative synthetic workload is generated, allowing more accurate results to be obtained. Figure 3 shows the sequence of steps when using the PEESOS tool:

1) The stakeholder prepares the experiment parameters: number of hosts, number of clients, service, client application and workload;
2) Services are deployed in the selected hosts;
3) The client application is deployed to the selected clients;
4) The clients perform requests to the broker;
5) The broker chooses a host to attend the client request;
6) The broker forwards the client request to the selected provider;
7) The broker forwards the host response to the client;
8) The services are removed from the hosts.

4. PERFORMANCE EVALUATION

In this section, we present a performance evaluation of PEESOS in a functional case study related to SOA components and web services. The motivating example for our evaluation is a SOA prototype named WSARCH (Web Service Architecture) (Estrella et al., 2010) and a service composition architecture named AWSCS (Automatic Web Service Composition System). The main aim of the WSARCH architecture is to provide a basic infrastructure to detect and solve problems related to workload, service composition, fault tolerance, network, security of messages and components (Estrella et al., 2011), while the AWSCS architecture evaluates the performance of service composition (Kuehne et al., 2013).

4.1 Experiment Environment

The test environment for the experiments is prepared with a set of machines placed in an external network to perform requests. The target system environment is composed of the WSARCH (Estrella et al., 2010) and AWSCS (Kuehne et al., 2013) architectures, which are available to deploy applications, handle requests and make automatic web service compositions. Figure 4 shows the test environment arrangement. PEESOS manages external machines that make requests to the target environment, which is responsible for scheduling those requests among providers and then responding to them. PEESOS can also communicate with the target environment to deploy services in providers or UDDI (Universal Description, Discovery and Integration) registries into a repository.

The test environment is composed of one virtual machine to execute PEESOS, hosted in a physical machine, and fifteen hosts as available clients. Table I describes these machine configurations. The AWSCS architecture has five composition and execution modules, while the WSARCH architecture is composed of one broker, one LogServer, twelve virtual providers (four virtual machines per host) and twelve virtual UDDIs (four virtual machines per host). All information regarding access to the Providers, UDDIs, Broker and LogServer can be found at http://wsarch.lasdpc.icmc.usp.br/infrastructure.

Figure 4. Test Environment

Table I. Clients and PEESOS configuration

                    Clients / Hosts (physical machines)              PEESOS (virtual machine)
Processor           AMD Vishera 4.2 GHz                              2 virtual processors
Memory              32 GB RAM DDR3 Corsair Vengeance                 2 GB RAM
Hard Disk           2 TB Seagate SATA III 7200 RPM                   50 GB
Operating System    Linux Ubuntu Server 12.04 64-bit LTS,            Linux Ubuntu Server 12.04 64-bit LTS,
                    Java JDK 1.7                                     Java JDK 1.7
Applications        Apache Axis2 1.5, Apache Tomcat 7.0 (clients);   Apache Axis2 1.5, Apache Tomcat 7.0
                    Qemu/KVM (hosts)

4.1.1 Target Environment

The target environment is briefly presented here to allow a better understanding of the test system. The AWSCS architecture has two distinct modules: the automatic composition module and the composition execution module. The WSARCH architecture has five distinct modules: the client application, the providers, the Broker, the UDDI registry and the Log Server. Figure 5 shows the relationships between these components.

Figure 5. Web Service Architecture – WSARCH and AWSCS interaction

Clients perform requests with QoS parameters to the Automatic Composition Module (ACM). The ACM is responsible for finding the candidate services in the repository and for creating the description of an execution flow that is able to attend the request. Matchmaking and service selection algorithms are performed to choose the most suitable services for the Composition Execution Module (CEM). The CEM receives the flow generated by the ACM and makes the corresponding requests for web services to the Broker. The Broker is responsible for finding the specific service to meet each request, which will be available in one of the providers (Harshavardhanan et al., 2012). Providers are repositories of services and work closely with a core responsible for processing the request and response messages, Apache Axis2 (Estrella et al., 2011). The location and information of service providers are stored in a UDDI registry, which was modified to contain qualifications (QoS) and characteristics of the services. The target environment also has the Log Server, a database responsible for storing all data transactions between components. In addition, information about the quality of service offered by the providers is updated every second, collected by a Ganglia monitor (Massie et al., 2004) and transmitted from one module to another under Broker management. Table II describes the configuration of the environment components.
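As an illustration of the matchmaking and selection step, the sketch below filters candidate services by a client's QoS restrictions and ranks the remaining candidates by cost. It is only a sketch: the paper does not detail the actual ACM and Broker algorithms, and every class, field and ranking criterion here is hypothetical.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    class CandidateService {
        String endpoint;
        double responseTimeMs;  // advertised response time
        double costUsd;         // advertised cost
        double reputation;      // 1 (bronze) .. 3 (gold)

        CandidateService(String endpoint, double rt, double cost, double rep) {
            this.endpoint = endpoint;
            this.responseTimeMs = rt;
            this.costUsd = cost;
            this.reputation = rep;
        }
    }

    class NaiveMatchmaker {
        // Keeps only candidates that satisfy the client QoS restrictions, then picks the cheapest one.
        static Optional<CandidateService> select(List<CandidateService> candidates,
                                                 double maxResponseTimeMs,
                                                 double maxCostUsd,
                                                 double minReputation) {
            return candidates.stream()
                    .filter(s -> s.responseTimeMs <= maxResponseTimeMs
                              && s.costUsd <= maxCostUsd
                              && s.reputation >= minReputation)
                    .min(Comparator.comparingDouble(s -> s.costUsd));
        }
    }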


Table II. WSARCH architecture configuration

                    Hosts (physical machines)                        Virtual Machines (Providers and UDDIs),
                                                                     Broker and Log Server
Processor           AMD Vishera 4.2 GHz                              2 virtual processors
Memory              32 GB RAM DDR3 Corsair Vengeance                 2 GB RAM
Hard Disk           2 TB Seagate SATA III 7200 RPM                   50 GB
Operating System    Linux Ubuntu Server 12.04 64-bit LTS,            Linux Ubuntu Server 12.04 64-bit LTS,
                    Java JDK 1.7                                     Java JDK 1.7
Applications        Qemu/KVM                                         Apache Axis2 1.5, Apache Tomcat 7.0

4.2 Experiment Design

A composite web service that aims to plan a trip is used as a case study, in which two possible composition flows are available. Figures 6.a and 6.b show the flows available for the experiment, which are semantically equivalent. All services in a flow are executed at the same time and have response time (ms), cost (US$) and reputation as QoS attributes.

Figure 6. Possible composite service flows: (a) Composite Service 1; (b) Composite Service 2

Table III. Possible service QoS attribute values

Service         Response Time (ms)          Cost (US$)                      Reputation (stars)   Total
Hotel           3000, 5000, 7000, 9000      60, 88, 116, 144, 172, 200      1, 2, 3              72
Trip Bus        1250, 1900, 2500, 3000      30, 40                          1, 2, 3              24
City Bus        1250, 1900, 2500, 3000      3                               1, 2, 3              12
Event 1         500, 1500, 2200, 3000       10, 18, 26, 32, 41, 53          1, 2, 3              72
Event 2         2500, 3100, 3500, 4000      20, 22, 24, 26, 28, 30          1, 2, 3              72
Event 3         1000, 2000, 3000, 4000      20, 25, 32, 48, 60, 70          1, 2, 3              72
Night Event     1000, 2000, 3000, 4000      20, 25, 32, 48, 60, 70          1, 2, 3              72
Roost           2000, 4500, 7000, 10000     40, 68, 96, 124, 152            1, 2, 3              60
Flight          1400, 2100, 2800, 3300      100, 120                        1, 2, 3              24
Taxi            250, 900, 1500, 2000        30                              1, 2, 3              12
Event Package   2500, 6000, 12000, 18000    50, 80, 100, 140, 161, 203      1, 2, 3              72


Table III shows the possible QoS values for each service attribute. Each column value in this table was combined to describe a different web service. The PEESOS tool was used to perform the experiment execution. To issue the requests, an exponential workload was generated based on Equation 1 (Ross, 2009), where x, the mean of the distribution, has a value of 1,000 ms. Table IV describes all the factors and levels used in the experiment.
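Equation 1 describes the exponential distribution used for the workload; a standard way to draw exponentially distributed inter-arrival times with mean x is inverse-transform sampling, t = -x ln(1 - u), with u uniform on [0, 1) (Ross, 2009). The sketch below assumes that sampling scheme, rather than reproducing the actual PEESOS generator, and produces the 250 request arrival times of Table IV with x = 1,000 ms.

    import java.util.Random;

    public class ExponentialWorkload {
        public static void main(String[] args) {
            double meanMs = 1000.0;       // mean of the distribution (Table IV)
            int requests = 250;           // number of requests (Table IV)
            Random rng = new Random(42);  // fixed seed so a replication can be repeated

            double clockMs = 0.0;
            for (int i = 1; i <= requests; i++) {
                double u = rng.nextDouble();                          // u in [0, 1)
                double interArrivalMs = -meanMs * Math.log(1.0 - u);  // inverse-transform sampling
                clockMs += interArrivalMs;
                System.out.printf("request %d scheduled at %.1f ms%n", i, clockMs);
            }
        }
    }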

Table IV. Design of Experiment

Factor                      Level
Number of Clients           5
Number of Flows             a and b
Number of Requests          250
Number of Replications      10
Type of Distribution        Exponential
Mean of Distribution        1,000 ms

It is important to highlight that each client used in the experiment has different QoS restrictions. Table V shows the service restrictions and the desired total values for response time, cost and reputation imposed by each client during the experiment, for flows (a) and (b).

Table V. Clients QoS restrictions

Flow (a) - Trip 01          C1       C2       C3       C4       C5
Hotel                       bronze   silver   gold     gold     bronze
Citybus                     bronze   silver   gold     silver   bronze
Tripbus                     bronze   silver   gold     silver   gold
Event1                      bronze   silver   gold     gold     silver
Event2                      gold     silver   gold     silver   silver
Event3                      bronze   silver   bronze   bronze   silver
NightEvent                  bronze   silver   gold     gold     silver
Response Time (ms)          14000    12000    10000    10000    14000
Cost (US$)                  173      314.5    416      406      239.5
Reputation                  3        2        1        2        2

Flow (b) - Trip 02          C1       C2       C3       C4       C5
Roost                       bronze   silver   gold     gold     bronze
Taxi                        bronze   silver   gold     silver   bronze
FlightCompany               bronze   silver   gold     silver   gold
EventPacket                 silver   silver   gold     silver   silver
Response Time (ms)          16000    12000    10000    12000    14000
Cost (US$)                  260      437.5    505      418.5    296.5
Reputation                  1        2        3        2        2

Table VI. SLAs Agreement

                Response Time (ms)          Cost (US$)              Reputation (stars)
SLA             Gold    Silver  Bronze      Gold    Silver  Bronze  Gold  Silver  Bronze
Hotel           7000    10000   13000       200     130     60      3     2       1
TripBus         5250    6125    7000        40      35      30      3     2       1
CityBus         4250    5625    7000        3       3       3       3     2       1
Event 1         4500    5750    7000        53      31.5    10      3     2       1
Event 2         6500    7250    8000        30      25      20      3     2       1
Event 3         5000    6500    8000        70      45      20      3     2       1
Night Event     5000    6500    8000        70      45      20      3     2       1
Roost           6000    10000   14000       152     81      40      3     2       1
Flight          5400    6350    7300        120     110     100     3     2       1
Taxi            4250    5125    6000        30      30      30      3     2       1
Event Packet    6500    14250   22000       203     126.5   50      3     2       1


Table VI shows the QoS values that must be respected in each class of service. All services execute in parallel, so the composition response time equals the highest response time among the services in the composition. The cost is the sum of the costs of all services used, and the reputation is the mean of the reputations of all services used in the composition. Smaller values are better for response time and cost, while higher values are better for reputation.
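In symbols (a restatement of the aggregation rules above, where RT_i, c_i and r_i are the response time, cost and reputation of the i-th of the n services used in a composition):

\[
  RT_{\mathrm{comp}} = \max_{1 \le i \le n} RT_i, \qquad
  C_{\mathrm{comp}} = \sum_{i=1}^{n} c_i, \qquad
  R_{\mathrm{comp}} = \frac{1}{n} \sum_{i=1}^{n} r_i
\]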

5. RESULTS

In order to obtain results, PEESOS was used against AWSCS in two scenarios (flows (a) and (b)). The QoS results for the five clients that executed flows (a) and (b) are shown in Figures 7 and 8, respectively. To analyze the results, it is important to keep in mind the clients' restrictions (Table V) and the values of their SLA agreements (Table VI). It is then possible to determine whether the SLA was respected for each client in each flow. It is also relevant to remember that the composition flows are semantically equivalent.

Figure 7a shows the QoS results for Client 1 (C1). The Response Time attribute has a different value in each execution (the experiment was replicated 10 times), so we also calculated the 95% confidence interval. Cost and Reputation always have the same values in all executions, because modifications to their values are not performed often, so the experiment time is shorter than any Cost or Reputation update. The highest Response Time of the services in Figure 7a was 22.89 seconds, the total Cost was US$167 and the average Reputation was 1.28. Comparing these values with the SLA limits of Table VI, it is possible to notice that the SLA was broken in terms of Response Time and Reputation, because the maximum Response Time acceptable for Client 1 (C1) is 14 seconds and the required Reputation was 3, but the Reputation obtained in flow (a) was 1.28.

Figure 7b shows the QoS results for Client 2 (C2). The highest Response Time was 20.33 seconds, the total Cost was US$223 and the average Reputation was 2. Again, the SLA for Response Time was broken (20.33 > 12 seconds).

Figure 7c shows the QoS results for Client 3 (C3). The highest Response Time was 26.20 seconds, the total Cost was US$261 and the average Reputation was 2.71. The SLA for Response Time was broken (26.20 > 10 seconds); however, the Cost and Reputation were much better than the Client 3 (C3) restrictions.

Figure 7d shows the QoS results for Client 4 (C4). The highest Response Time was 19.39 seconds, the total Cost was US$247 and the average Reputation was 2.28. Again, the SLA for Response Time was broken (19.39 > 10 seconds), but the Cost was very good (247 < 406) and the Reputation was sufficient (2.28 > 2).

Figure 7e shows the QoS results for Client 5 (C5). The highest Response Time was 35.57 seconds, the total Cost was US$205 and the average Reputation was 1.85. The Response Time restriction was broken (35.57 > 14 seconds), the Cost (205 < 239.5) was sufficient, and the Reputation restriction was also broken (1.85 < 2).

In summary, for flow (a) we can conclude that the system is able to compose services, but in this experiment executed by PEESOS, using 250 requests with a distribution mean equal to 1,000 ms, the AWSCS algorithm could not comply with the SLA in terms of Response Time. In some cases (C1 and C5) the SLA for Reputation was also broken. However, all experiments were successfully conducted using the DCA-SERVICES architecture, and the results for flow (a) indicate that we need an improvement in the composition algorithm or a relaxation of the clients' QoS restrictions.

Figure 7. QoS results for clients 1 to 5 - flow (a) (panels a-e: C1-C5)

Regarding the results for flow (b) (Figure 8), Figure 8a shows the QoS results for Client 1 (C1). The highest Response Time was 24.53 seconds, the total Cost was US$270 and the average Reputation was 1.25. Again, the SLAs for Response Time (24.53 > 16 seconds) and Cost (270 > 260) were broken, but the Reputation was sufficient (1.25 > 1).

Figure 8b shows the QoS results for Client 2 (C2). The highest Response Time was 17.94 seconds, the total Cost was US$336 and the average Reputation was 2. The Response Time restriction was broken (17.94 > 12 seconds), while the Cost (336 < 437.5) and the Reputation (2 = 2) were sufficient.

Figure 8c shows the QoS results for Client 3 (C3). The highest Response Time was 10.37 seconds, the total Cost was US$476 and the average Reputation was 3. The Response Time restriction for Client 3 in flow (b) is 10 seconds, so considering the standard deviation of 1.22 we cannot say that the SLA was broken; the Cost (476 < 505) and the Reputation (3 = 3) were sufficient.

Figure 8d shows the QoS results for Client 4 (C4). The highest Response Time was 22.29 seconds, the total Cost was US$326 and the average Reputation was 2.25. The Response Time restriction was broken (22.29 > 12 seconds), while the Cost (326 < 418.5) and the Reputation (2.25 > 2) were sufficient.

Figure 8e shows the QoS results for Client 5 (C5). The highest Response Time was 26.65 seconds, the total Cost was US$280 and the average Reputation was 1.75. The Response Time restriction (26.65 > 14 seconds) and the Reputation (1.75 < 2) were broken, but the Cost (280 < 296.5) was sufficient.

In summary, for flow (b) almost all clients had an SLA violation in terms of Response Time, with Client 3 (C3) as a possible exception. In one case (C1) the SLA for Cost was also broken, and in another case (C5) the SLA for Reputation was broken. Similar to flow (a), all experiments for flow (b) were successfully conducted using the DCA-SERVICES architecture, and the results for flow (b) also indicate the need for adjustments in the composition algorithm or a relaxation of the clients' QoS restrictions.

Figure 8. QoS results for clients 1 to 5 - flow (b) (panels: C1-C5)

Figure 9. Mean QoS results for flows (a) and (b): (a) mean response time; (b) mean reputation; (c) mean cost

Figure 9 shows the mean values for Response Time, Cost and Reputation in both flows. Although the flows are semantically equivalent, each service has an SLA agreement value and each client has a set of QoS restrictions for a specific flow.

6. RELATED WORK

The evaluation of distributed systems is a challenge for computer science, and several research teams are putting effort into developing novel techniques for this purpose. Most of these efforts arise from the modelling and simulation area, which offers great opportunities for distributed systems performance evaluation. It started with network simulation: tools such as NS-2 (Issariyakul and Hossain, 2008) and OMNeT++ (Varga and Hornig, 2008) were created to analyze and model networks. GridSim was later presented as a simulation framework for distributed systems, which can model and simulate distributed resources and scheduling strategies for large-scale computer systems. GridSim was structured as a multi-layer simulator based on SimJava, a general-purpose discrete-event simulation package.

Also following the simulation path, an evolution of the GridSim framework, called CloudSim, is presented in (Calheiros et al., 2011). CloudSim is able to instantiate and execute cloud core entities (VMs, hosts, data centers, applications) during the simulation period and thus allows modelling and simulation of Cloud environments at very low cost, so that it is possible to simulate the behaviour of such systems prior to deploying them and to find and solve bottlenecks in a controlled environment. CloudSim was a milestone for the cloud community, and after it other studies were published extending its functions, such as (Long and Zhao, 2012), where it is stated that real environments are hard to build and maintain for cloud experiments, especially when QoS features are needed. That paper proposes an extension to CloudSim that allows file striping, data replica management, data layout and replica management strategies, turning it into a simulation platform for computing and storage.

Li et al. (2013) present a framework to provide an energy-aware network suited to green cloud experiments. It also makes network-aware live migration possible, to avoid network overheads. The paper needs further experiments to evaluate possible flaws of the model. It also lacks the implementation of protocol-level network simulation to select migration target policies that leverage network overheads and to support the simulation of specific migration policies, such as pre-copy migration.

The paper by Long et al. (2013) explains the difficulty of testing new mechanisms in real cloud computing environments, because researchers often cannot fully control them. CloudSim makes it simple to test cloud issues, and the framework introduced provides simulation power to manage services and model cloud infrastructure. Besides the interesting contributions, the paper lacks experiments showing the framework behavior in distinct scenarios.

Nunez et al. (2011) argue that a testing platform is needed to develop new proposals for the cloud (for example, datacenter management or resource provisioning). Therefore, the authors proposed another simulator of cloud systems, called iCanCloud, which simulates various types of instances. This tool uses more realistic scenarios, simulating the instance types provided by Amazon, including their cost models. The authors also point out that even if an automated tool exists, it will still be very difficult to produce time- and cost-effective performance evaluations, due to the great number of possible setups that a typical cloud infrastructure provides.

Another evaluation tool for service-oriented architectures is presented in (Barbierato et al., 2012), which uses Markov chains along with formal models by reinterpreting the BPEL specification, creating a performance evaluation specification called PerfBPEL that can be used to evaluate SOA systems. The Service-Oriented Performance Modeling framework (SOPM) (Brebner, 2009) and the Services Aware Simulation Framework (SASF) (Smit and Stroulia, 2012) also allow simulating service-oriented systems, services and client requests. Although the mean error of these tools was up to 15 percent in a functional test, they help to understand scalability and performance before service deployment in a real environment. However, despite all these efforts to simulate such systems in order to evaluate their performance, simulation has its boundaries.


The unpredictable evolution of services, variations in workload and the flexibility of the infrastructure make these environments highly dynamic (Kalamegam and Godandapani, 2012). The error added to simulations by the use of simplified models (Bogush, 1994), while small, can be an issue when evaluating some types of systems, such as critical embedded or high-availability systems, in which even a small error can lead to inconsistent results. At that point, simulation alone is not enough to bypass these limitations, and a more focused and real approach is usually needed to handle such cases.

In these scenarios, where simulation is not enough to evaluate the system, a real and dynamic evaluation tool is required. One of the biggest challenges that such a tool needs to address, beyond all the environment orchestration, is managing the workload generation. A workload can have distinct characteristics, such as its distribution, means, types of requests and several others. One attempt to study such workloads is found in Medernach (2005), which proposes a Markovian distribution model for request arrivals in a grid environment. In addition, Ciciani et al. (2012) proposed workload characterizations for cloud-based environments, together with the Radargun benchmark framework for cloud environments. However, both approaches are very context-specific, the first to grid environments and the second to transactional cloud environments, so they cannot be deployed in generic SOA.

The main contribution that PEESOS has to offer is thus a performance evaluation tool to evaluate real systems, with distinct types of workloads and experimental configurations. This tool also enables the evaluation of the implementation of any service-oriented distributed tool. By using PEESOS, it is possible to decrease the cost of evaluating a distributed system in a real environment. This is particularly useful when it is desired to evaluate the performance of a prototype, including its implementation and functionalities. This approach offers more realistic data, which can contribute to discovering bottlenecks not considered in simulation tools.

7. CONCLUSIONS

In this paper, DCA-SERVICES was presented. The main goal of this architecture is to provide a real collaborative environment to execute distributed experiments in a controlled and repeatable manner. This paper also presented the PEESOS tool, which is based on the proposed architecture and whose main goal is to facilitate the planning and execution of experiments in SOA. The experiments performed with PEESOS were capable of showing SLA violations in AWSCS, which must be analyzed by the stakeholders to avoid QoS degradation and the corresponding fines. As future work, we plan to replace the DCA-SERVICES client/server model with a peer-to-peer model and to focus on scalability and fault tolerance problems during experiment execution.

8. ACKNOWLEDGMENT

We thank the Coordination for the Improvement of Higher Education Personnel (CAPES) and the São Paulo Research Foundation (FAPESP, processes 11/09524-7, 13/26420-6, 11/12670-5, 09/16055-3) for supporting this research. We also thank ICMC-USP and LaSDPC for offering the equipment necessary for this study. Some of this work was conducted while Stephan Reiff-Marganiec was on study leave from the University of Leicester.

9. REFERENCES

[Barbierato et al., 2012] Barbierato, E., Iacono, M., and Marrone, S. (2012). PerfBPEL: A graph-based approach for the performance analysis of BPEL SOA applications. In Performance Evaluation Methodologies and Tools (VALUETOOLS), 2012 6th International Conference on, pages 64–73.

[Bogush, 1994] Bogush, A. (1994). Simulation error in the probability estimate of a random event. Cybernetics and Systems Analysis, 30(2):181–193.

[Brebner, 2009] Brebner, P. (2009). Service-oriented performance modeling the Mule enterprise service bus (ESB) loan broker application. Conference Proceedings of the EUROMICRO, pages 404–411.

[Buyya, 2008] Buyya, R. (2008). Service and utility oriented distributed computing systems: Challenges and opportunities for modeling and simulation communities. In Simulation Symposium, 2008. ANSS 2008. 41st Annual, pages 3–3.

[Buyya, 2010] Buyya, R. (2010). Cloud computing: The next revolution in information technology. In Parallel Distributed and Grid Computing (PDGC), 2010 1st International Conference on, pages 2–3.

[Buyya et al., 2010] Buyya, R., Ranjan, R., and Calheiros, R. N. (2010). InterCloud: Utility-oriented federation of cloud computing environments for scaling of application services. In Proceedings of the 10th International Conference on Algorithms and Architectures for Parallel Processing - Volume Part I, ICA3PP'10, pages 13–31, Berlin, Heidelberg. Springer-Verlag.

[Calheiros et al., 2011] Calheiros, R. N., Ranjan, R., Beloglazov, A., De Rose, C. A. F., and Buyya, R. (2011). CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Software: Practice and Experience, 41(1):23–50. John Wiley & Sons, Inc., New York, NY, USA.

[Ciciani et al., 2012] Ciciani, B., Didona, D., Di Sanzo, P., Palmieri, R., Peluso, S., Quaglia, F., and Romano, P. (2012). Automated workload characterization in cloud-based transactional data grids. In Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum, IPDPSW '12, pages 1525–1533, Washington, DC, USA. IEEE Computer Society.

[Coulouris et al., 2011] Coulouris, G., Dollimore, J., Kindberg, T., and Blair, G. (2011). Distributed Systems: Concepts and Design. Pearson Education.

[Estrella et al., 2010] Estrella, J., Toyohara, R., Kuehne, B., Tavares, T., Santana, R., Santana, M., and Bruschi, S. (2010). A performance evaluation for a QoS-aware service oriented architecture. In Services (SERVICES-1), 2010 6th World Congress on, pages 260–267.

[Estrella et al., 2011] Estrella, J. C., Santana, R. H. C., and Santana, M. J. (2011). WSARCH: An Architecture for Web Services Provisioning with QoS Support: Performance Challenges. VDM Verlag Dr. Müller.

[Harshavardhanan et al., 2012] Harshavardhanan, P., Akilandeswari, J., and Sarathkumar, R. (2012). Dynamic web services discovery and selection using QoS-broker architecture. In Computer Communication and Informatics (ICCCI), 2012 International Conference on, pages 1–5.

[Issariyakul and Hossain, 2008] Issariyakul, T. and Hossain, E. (2008). Introduction to Network Simulator NS2. Springer Publishing Company, Incorporated, 1st edition.

[Jain, 1991] Jain, R. (1991). The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. Wiley Professional Computing. Wiley.

[Kalamegam and Godandapani, 2012] Kalamegam, P. and Godandapani, Z. (2012). A survey on testing SOA built using web services. International Journal of Software Engineering and its Applications, 6(4):91–104.

[Li et al., 2013] Li, X., Jiang, X., Ye, K., and Huang, P. (2013). DartCSim+: Enhanced CloudSim with the power and network models integrated. In Cloud Computing (CLOUD), 2013 IEEE Sixth International Conference on, pages 644–651.

[Long and Zhao, 2012] Long, S. and Zhao, Y. (2012). A toolkit for modeling and simulating cloud data storage: An extension to CloudSim. In Control Engineering and Communication Technology (ICCECT), 2012 International Conference on, pages 597–600.

[Long et al., 2013] Long, W., Yuqing, L., and Qingxin, X. (2013). Using CloudSim to model and simulate cloud computing environment. In Computational Intelligence and Security (CIS), 2013 9th International Conference on, pages 323–328.

[Massie et al., 2004] Massie, M., Chun, B., and Culler, D. (2004). The Ganglia distributed monitoring system: design, implementation, and experience. Parallel Computing, 30(7):817–840.

[Medernach, 2005] Medernach, E. (2005). Workload analysis of a cluster in a grid environment. In Proceedings of the 11th International Conference on Job Scheduling Strategies for Parallel Processing, JSSPP'05, pages 36–61, Berlin, Heidelberg. Springer-Verlag.

[Nunes et al., 2014] Nunes, L., Nakamura, L., Kuehne, B., Libardi, R., Adami, L., Estrella, J., and Reiff-Marganiec, S. (2014). PEESOS: A web tool for planning and execution of experiments in service oriented systems. In 21st IEEE International Conference on Web Services, pages 606–613, Anchorage, AK, USA.

[Nunez et al., 2011] Nunez, A., Vazquez-Poletti, J. L., Caminero, A. C., Carretero, J., and Llorente, I. M. (2011). Design of a new cloud computing simulation platform. In Computational Science and Its Applications - ICCSA 2011, pages 582–593. Springer.

[Papazoglou, 2003] Papazoglou, M. (2003). Service-oriented computing: concepts, characteristics and directions. In Web Information Systems Engineering, 2003. WISE 2003. Proceedings of the Fourth International Conference on, pages 3–12.

[Ross, 2009] Ross, S. (2009). Introduction to Probability and Statistics for Engineers and Scientists. Elsevier Science.

[Smit and Stroulia, 2012] Smit, M. and Stroulia, E. (2012). Simulating service-oriented systems: A survey and the services-aware simulation framework. IEEE Transactions on Services Computing, 99(PrePrints).

[Tardiole Kuehne et al., 2013] Tardiole Kuehne, B., Carlucci Santana, R., Linnemann, V., and Santana, M. (2013). Performance evaluation of an automatic web service composition architecture. In High Performance Computing and Simulation (HPCS), 2013 International Conference on, pages 123–130.

[Varga and Hornig, 2008] Varga, A. and Hornig, R. (2008). An overview of the OMNeT++ simulation environment. In Proceedings of the 1st International Conference on Simulation Tools and Techniques for Communications, Networks and Systems & Workshops, Simutools '08, pages 60:1–60:10, Brussels, Belgium. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering).

Authors

Luiz H. Nunes graduated with a Bachelor's degree in Computer Science from Universidade Estadual Paulista Julio de Mesquita Filho (2011) and received a master's degree in Computer Science and Computational Mathematics from the University of São Paulo (2014). He is currently a doctoral student at the Institute of Mathematics and Computer Science (ICMC) at the University of São Paulo (USP). His main research topics are Internet of Things, Quality of Service, Service Level Agreement, Sensing as a Service, Cloud Computing and Wireless Sensor Networks.

Julio C. Estrella graduated in Computer Science from Universidade Estadual Paulista Julio de Mesquita Filho (2002), received a master's degree in Computer Science and Computational Mathematics from the University of São Paulo (2006) and a PhD in Computer Science and Computational Mathematics from the University of São Paulo (2010). He has experience in Computer Science with emphasis on Computer Systems Architecture, working on the following topics: Service Oriented Architectures, Web Services, Performance Evaluation, Distributed Systems, High Performance Computing and Computer Networks. He is currently Professor I at the Institute of Mathematics and Computer Science - ICMC/USP - São Carlos - SP.

Carlos H. G. Ferreira graduated in Information Systems from the Federal University of Viçosa - Campus Rio Paranaíba. He is currently a Master's degree candidate in Computer Science at the Institute of Mathematics and Computer Science, University of São Paulo, working on the following topics: Service Oriented Architectures, Web Services, Cloud Computing, and Computer System Performance Evaluation.

Luis H. V. Nakamura is a PhD candidate at the Institute of Mathematics and Computer Science (ICMC) at the University of São Paulo (USP). He received a B.S. from the Technology College (FATEC) in 2006 and an M.S. from the University of São Paulo (ICMC-USP) in 2012. His research interests are based on distributed systems, including Cloud Computing, Autonomic Computing and the Semantic Web. He has also investigated the implications of Web Services and the Performance Evaluation of computational systems.

Rafael M. de O. Libardi holds a BA in Informatics from the Institute of Mathematics and Computer Science (ICMC), USP, in São Carlos (2013) and is currently pursuing a Master's degree, also at ICMC-USP, addressing security in cloud environments.

Edvard M. de Oliveira is a PhD candidate at the Institute of Mathematics and Computer Science (ICMC), University of São Paulo (USP), São Carlos/SP (2013). He received his Master's degree in Computer Science from the University of São Paulo (ICMC-USP) and graduated in Computer Science from the Catholic University of Minas Gerais - PUC (2010). Areas of interest: Protein Folding, SOA, Cloud Computing, Web Services, Fault Tolerance.

Bruno Tardiolle Kuehne graduated in Computer Science from the Catholic University of Minas Gerais (2006) and received a master's degree in Computer Science and Computational Mathematics from the University of São Paulo (2009). He is currently a PhD student at the University of São Paulo and an assistant professor at the Federal University of Itajubá.

Paulo de Souza graduated in Data Processing from the State University of Ponta Grossa (1990), received a master's degree in Computer Science and Computational Mathematics from ICMC/USP in São Carlos (1996) and a PhD in Computational Physics from IFSC/USP (2000). He is currently an associate professor in the Department of Computer Systems of ICMC/USP, which he joined in 2005. He did a post-doctorate at the University of Southampton, UK (2010/2011) and received the title of Associate Professor in Concurrent Programming at ICMC/USP in 2014. He was a professor at the State University of Ponta Grossa for 14 years (1991-2005) and leader of the Department of Informatics (1992-1993). He has experience in computer science, working on the following topics: distributed systems, parallel computing, high performance application development, testing of concurrent programs and learning objects applied to the teaching of computing.

Regina H. C. Santana graduated in Electrical Engineering from the School of Electronic Engineering of São Carlos (1980), received a master's degree in Computer Science from the Institute of Mathematical Sciences of São Carlos (1985) and a PhD in Electronics and Computer Science from the University of Southampton (1989). She is currently an associate professor at the University of São Paulo. She has experience in computer science, with emphasis on performance evaluation, working on the following topics: performance evaluation, simulation, distributed simulation, process scheduling and parallel computing.

Marcos J. Santana graduated in Electrical/Electronic Engineering from the School of Engineering of São Carlos (1980), received a master's degree in Computer Science from the Institute of Mathematical Sciences of São Carlos (1985) and a PhD in Electronics and Computer Science from the University of Southampton (1989). He is currently an associate professor at the University of São Paulo. He has experience in computer science, with emphasis on performance evaluation, working on the following topics: performance evaluation, web services, grid computing, cloud computing, process scheduling, parallel computing, simulation and load balancing. He was Course Coordinator of Computer Engineering at ICMC from 2002 to 2011 and has been leader of the Computer Systems department since 2010.

Stephan Reiff-Marganiec is a Senior Lecturer in Computer Science at the University of Leicester. He has worked in the computer industry in Germany and Luxembourg and held research positions at the University of Glasgow (while simultaneously reading for a PhD). He was co-chair of the 10th International Conference on Feature Interactions in Telecommunications and Software Systems and was co-chair of three instances of YR-SOC. Stephan led work packages in the EU-funded projects Leg2Net, Sensoria and inContext, focusing on automatic service adaptation, context-aware service selection, workflows and rule-based service composition. Stephan is co-editor of the Handbook of Research on Service-Oriented Systems and Non-Functional Properties and has published in excess of 50 papers in international conferences and journals, as well as having served on a large number of programme committees. Stephan was appointed Guest Professor at the China University of Petroleum and was a visiting Professor at LAMSADE at the University of Paris-Dauphine. He was elected Fellow of the BCS (FBCS) in 2009 and is a member of both the ACM and IEEE.
