Performance Modelling Power Consumption and Carbon Emissions for Server Virtualization of Service Oriented Architectures (SOAs)

Paul Brebner, Liam O'Brien, Jon Gray
NICTA, Canberra Research Laboratory & Computer Sciences Laboratory, RSISE, ANU, Canberra, Australia
{Paul.Brebner, Liam.OBrien, Jon.Gray}@nicta.com.au

Abstract—Server Virtualization is driven by the goal of reducing the total number of physical servers in an organisation by consolidating multiple applications on shared servers. Expected benefits include more efficient server utilisation and a decrease in greenhouse gas emissions. However, Service Oriented Architectures combined with Server Virtualization may significantly increase risks such as saturation and Service Level Agreement (SLA) violations. Since 2006 National ICT Australia Ltd (NICTA) has been developing a technology for the performance modelling of large-scale heterogeneous Service Oriented Architectures (SOAs). The technology has been empirically trialled, refined and validated with collaborating Australian Government agencies to address critical performance risks. Many government SOAs are developed, tested and deployed on virtualized hardware, and we have developed the capability to model the performance of SOAs deployed on virtual servers. In this paper we provide an overview of NICTA's SOA performance modelling approach, and then explore a number of alternative deployment scenarios for an example SOA based on a synthetic carbon emission trading system. We show how our modelling approach provides insights into the relationship between workloads, services, and resource requirements, and can therefore be used to predict server power consumption and carbon emissions. We model and evaluate four different deployment options including planned and optimised resources, server virtualization, and computing-on-demand (cloud computing using Amazon EC2). We conclude with an overview of other potential problems and benefits of SOA virtualization.

Keywords—Performance Modelling; Service Oriented Architecture; SOA; Server Virtualization; Cloud Computing; Power Consumption; Carbon Emissions; Green ICT

I. INTRODUCTION: SOA MEETS VIRTUALIZATION: MONSTER OR MARVEL?

This paper explores the convergence of the two recent Enterprise technology trends of Service Oriented Architecture (SOA) and Server Virtualization, and the implications for the major environmental concern of carbon emissions. We do this through the application of a novel Service Oriented Performance Modelling technology. We observe that SOA decouples application use from implementation, while Server Virtualization decouples applications and platforms. This suggests an inevitable convergence of the two technologies, resulting in the decoupling of service use, service implementation, and service resources. Industry commentators have made contrasting observations about this convergence. Some see it as a Monster, "two simple technologies pairing to devolve into an enterprise Frankenstein of ultimately unmanageable complexity" [1], while others see it as a Marvel, a new wave of application creation composed by developers "from services they didn't author, and run in datacenters they don't own" [2]. Certainly there are obvious risks with the convergence, such as increased infrastructure complexity and risk of saturation [3], but it is not so obvious whether the risks outweigh the benefits, and whether the Virtualized SOA Frankenstein should be switched on or not. This paper applies NICTA's Service-Oriented Performance Modelling (SOPM) technology to the problem and shows how modelling can help answer questions about performance, scalability, server capacity and utilisation, power consumption and carbon emissions.

The paper is structured as follows: Section II introduces a synthetic SOA carbon-emission commodity trading example, and in Section III we show how a performance model is built for it. In Section IV we construct four different models to predict scalability and power consumption for four different deployment scenarios: planned resources, optimized resources, virtualized resources, and on-demand (EC2) cloud computing. Finally, Section V concludes with some observations about the benefits and risks of these approaches.

II. SOA CARBON-EMISSION COMMODITY TRADING EXAMPLE

Our synthetic carbon emission commodity trading Service Oriented Architecture (SOA) example is motivated by the Australian Climate Exchange (ACX) [4]. The ACX is a system designed for the online trading of carbon emission commodities such as emission offsets. Our example system includes three main sub-systems: users (e.g. sellers, brokers, buyers), the Carbon Emission Commodity Trading System (to provide the services to users), and a 3rd-party Carbon Emission Commodity Registry (to maintain ownership records for carbon commodities). The Trading system exposes three services to users as follows:

• S1, user management (e.g. registration, security, funds transfer)
• S2, emission commodity management (e.g. offers, discovery)
• S3, emission commodity trading (bids and transfer of ownership)

The 3rd-party provides one service which is consumed by the Trading system and acts as an emission commodity certificate registry. The system works in two phases - a pre-trading phase and a trading phase (this is a simplification, as the real system has three phases). The pre-trading phase workload consumes S1 and S2, and is assumed to peak at 10,000 requests per hour at 1-2pm. The trading workload consumes S1 and S3, and is assumed to peak at 20,000 requests per hour at 5-6pm. For simplicity we assume a constant arrival distribution (but we can also model probabilistic arrivals such as a Poisson distribution). Even though the peaks of each workload occur at different times, some activity can overlap. The workload distribution over 24 hours is shown in Figure 1.

Figure 1. 24 hour workload (transactions per hour vs. hours)
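To make the two arrival assumptions concrete, here is a minimal Python sketch (our own illustration; the paper does not describe the SOPM tool's internals) generating constant versus Poisson arrivals for a given hourly rate:

    import random

    def constant_arrivals(rate_per_hour, hours=1.0):
        """Evenly spaced arrival times (in hours) for a constant arrival process."""
        gap = 1.0 / rate_per_hour
        return [i * gap for i in range(int(rate_per_hour * hours))]

    def poisson_arrivals(rate_per_hour, hours=1.0, seed=42):
        """Arrival times for a Poisson process: exponential inter-arrival gaps."""
        rng = random.Random(seed)
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate_per_hour)  # mean gap = 1/rate
            if t >= hours:
                return times
            times.append(t)

    # Trading-phase peak: 20,000 requests in the 5-6pm hour.
    print(len(poisson_arrivals(20000)), "arrivals in the peak hour")

Under constant arrivals the instantaneous load never exceeds the mean rate; a Poisson process produces short bursts above it, which is why the choice matters for saturation analysis.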

Figure 2. Carbon Emission Commodity Trading SOA architecture.

III. SOA PERFORMANCE MODELLING

NICTA has developed a service-oriented performance modelling (SOPM) technology [7] that is applicable throughout the software development lifecycle to mitigate performance and scalability risks, so that expensive failures can be avoided and architectural changes can be made while they are still possible and affordable. Figure 3 shows the application of SOPM during the software development lifecycle.


Internally, the three exposed services are implemented as composite services. A composite service consumes other services, and is represented as a workflow with steps calling other (mostly internal) services. Related services are grouped into applications. The 3rd-party registry service is also consumed by the composite services. The groupings of internal services are: Client Services, Application Services, and Security Services. The services for each application are initially assumed to be hosted on separate servers. External Web services are hosted on the web server, and client, application and security services are hosted on servers of the corresponding name (Figure 2).

Figure 3. Service Oriented Performance Modelling lifecycle

The technology is model driven, and provides tool support for a method designed to develop performance models from available SOA artifacts. The tool supports a meta-model of Service-oriented performance, including concepts such as workloads, services (simple and composite), service deployment, and servers. The model is populated from architectural artifacts, and parameterised with measured sample performance data and expected workloads. An executable performance model is automatically generated which can then be run in a discrete event simulator to allow interactive experimentation and computation of performance metrics. A SOPM is typically derived from architectural artifacts including network diagrams, service maps, UML sequence diagrams, and service requirements documents. Performance data sampled from test, pre-production or production environments is used to parameterise the model.
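To make the meta-model concepts concrete, the sketch below (our own illustrative rendering in Python, not NICTA's actual tool or meta-model API) shows how workloads, simple and composite services, and servers might be represented, and how per-server utilisation is derived from them:

    from dataclasses import dataclass, field

    @dataclass
    class Service:
        name: str
        cpu_secs: float                             # measured CPU-seconds per invocation
        calls: list = field(default_factory=list)   # sub-services (composite if non-empty)

        def demand(self):
            """Total CPU-seconds per request, including all workflow steps."""
            return self.cpu_secs + sum(s.demand() for s in self.calls)

    @dataclass
    class Workload:
        peak_tph: float                             # peak transactions per hour
        entry: Service                              # composite service the workload consumes

    def utilisation(workload: Workload, cpus: int) -> float:
        """Average utilisation of a server hosting the workload's services at peak."""
        busy_cpus = workload.peak_tph / 3600.0 * workload.entry.demand()
        return busy_cpus / cpus

    # Hypothetical parameterisation: a trading request calls S1 then S3,
    # and S3 calls the 3rd-party registry.
    registry = Service("Registry", 0.05)
    s1 = Service("S1 user mgmt", 0.08)
    s3 = Service("S3 trading", 0.12, calls=[registry])
    trade = Service("Trade request", 0.0, calls=[s1, s3])
    print(f"{utilisation(Workload(20000, trade), cpus=4):.0%}")  # ~35% of a 4-CPU server

All CPU-second figures above are invented for illustration; in practice they are the measured sample performance data used to parameterise the model.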

Figure 4. SOPM applied to Carbon Trading SOA

Even though we are modelling a synthetic carbon trading system, it is actually based on a simplification of a real whole-of-government SOA that we previously modelled, which was validated as part of ongoing trials [18]. Figure 4 shows the modelling process and example input artifacts for the carbon emissions SOA. We now proceed to model four different deployment scenarios for this example.

IV. MODELLING CARBON-EMISSIONS FOR SOA HOSTING SCENARIOS

A. Scenario 1: Planned

The initial (planned) resourcing for the SOA was for the four groups of application services (Web, Client, Application and Security) to be deployed to four physical servers, with 8, 4, 8, and 2 CPUs respectively (a total of 22 CPUs). We ignore the resource requirements for the 3rd-party registry service (assuming it has infinite resources). We built a model to determine if these resources are sufficient and optimally allocated for the intended workloads, in order to answer questions such as "Are the resources sufficient for workload peaks?" and "What is the utilisation of each server?" Figure 5 shows a screenshot of the initial performance model.

Figure 5. SOPM model of Carbon Trading SOA

Running the simulation showed that there was sufficient capacity for the pre-trading workload, but insufficient capacity for the trading workload. The security server saturates at less than the required load, and the response time increases rapidly, as shown in Figure 6 (response time and throughput on the top graph, and server utilization on the bottom).

Figure 6. Initial performance results

Increasing the number of CPUs for the Security server from 2 to 4 relieves the bottleneck, providing sufficient capacity for the trading workload peak of 20,000 requests an hour. The revised server resources (24 CPUs, an increase of 2 from the original plan) are sufficient to cope with each peak individually.

Server capacity, utilization, and carbon emissions

Computers are a significant contributor to global carbon emissions. For the purposes of this paper we only take into account the contribution to emissions from CPU usage (ignoring other resource usage, such as networks and databases, computer construction, transport and decommissioning, and air conditioning). This is obviously a simplification, as air conditioning power consumption can be up to four times the server power consumption. We assume that computer CPU use contributes to carbon emissions due to both a base power consumption (a server turned on, but idle) and utilization resulting from a load on the system. We also assume that 100% of the power is produced from carbon-emitting power stations (Figure 7).

Figure 7. Relationship between servers and emissions

In order to explore the Green credentials of different deployment scenarios we need to compute the power consumption of the servers. We assume a base power consumption for an unloaded system of 100W per CPU, so 24 idle CPUs draw 2.4kW without doing any useful work. We assume that a fully utilised CPU consumes double this [5], so 24 fully utilised CPUs draw 4.8kW. The performance model predicts utilisation per server, including average utilisation (Figure 8).

Figure 8. Predicted average server utilisation (%) for the Web, Client, App and Security servers

Total power consumption is computed by taking into account the base consumption and the utilization according to the formula:

    Power consumption per day (kWh) = (base consumption in kW * 24) + sum per server of (CPUs * 0.1 * average utilization * 24)
    = (2.4 * 24) + (8 * 0.1 * 0.075 * 24) + (4 * 0.1 * 0.082 * 24) + (8 * 0.1 * 0.0311 * 24) + (4 * 0.1 * 0.0508 * 24)
    = 61kWh

The power consumption for this scenario is therefore 61kWh per day, the majority of which is base load power (Figure 9).

Figure 9. Predicted power consumption (base and load kWh/day) per server (Web, Client, App, Security) and in total

From this information the carbon emissions and offset costs can be computed. Using local Australian figures [6] (for coal-fired power stations) this scenario produces 23.6 tonnes of carbon/year (base 22.31 tonnes, load 1.28 tonnes), with offset costs of $1,113. The power consumption cost per year is $106,872 (at 20c/kWh), or, if sourced from green energy, $141,000 (an extra 6.5c/kWh, but no offset costs). 23.6 tonnes of carbon corresponds approximately to the emissions produced by 4.3 cars a year (assuming 5.5 tonnes per car).
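The calculation above generalises to any deployment. The following sketch reproduces it for Scenario 1; the utilisation figures are those predicted in Figure 8, and the emission factor (~1.06 kg CO2 per kWh) is back-calculated from the totals reported above, so treat both as illustrative assumptions rather than authoritative constants:

    # Per-server (CPUs, predicted average utilisation) for the revised plan.
    servers = {"Web": (8, 0.075), "Client": (4, 0.082),
               "App": (8, 0.0311), "Security": (4, 0.0508)}

    BASE_KW_PER_CPU = 0.1   # idle draw per CPU [5]
    LOAD_KW_PER_CPU = 0.1   # extra draw at 100% utilisation (double idle) [5]
    EMISSION_FACTOR = 1.06  # kg CO2 per kWh, assumed for Australian coal [6]

    base_kwh = sum(cpus for cpus, _ in servers.values()) * BASE_KW_PER_CPU * 24
    load_kwh = sum(cpus * LOAD_KW_PER_CPU * util * 24
                   for cpus, util in servers.values())
    daily_kwh = base_kwh + load_kwh
    tonnes_per_year = daily_kwh * 365 * EMISSION_FACTOR / 1000

    print(f"{daily_kwh:.0f} kWh/day ({base_kwh:.1f} base + {load_kwh:.1f} load)")
    print(f"{tonnes_per_year:.1f} tonnes CO2/year")  # ~61 kWh/day, ~23.6 t/year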

B. Scenario 2: Optimised

Looking at the initial results we see that some servers are over-resourced. Running the model again we find the minimum number of CPUs required to support the peak loads (Figure 10). In fact, the application server only needs 4 CPUs (a reduction of 4 from the 8 planned), so the total number of CPUs can be reduced by 4, from 24 (planned) to 20 (optimised).

Figure 10. Predicted performance (optimised resources)

The power consumption is reduced slightly to 51kWh per day, which produces 19.73 tonnes of carbon a year (base 18.65 tonnes, load 1.28 tonnes). The yearly offset cost is $930, for a power consumption bill of $90,000 (or $118,000 if sourced from green energy). This is a reduction in emissions of 15%. The power consumption and emissions are still high due to the base load, and there is no room for growth in workloads over time, or higher peak loads, as the servers have barely sufficient resources for the highest current peak.

C. Scenario 3: Virtualized

We now see if Server Virtualization can reduce the base power consumption. We model the change from one application deployed per physical server (potentially under-utilised) to one application per Virtual Server, with multiple virtual servers per physical server (Figure 11). Potential benefits include a decrease in the total number of servers (an increase in average utilisation), application isolation (so multiple applications can run on the same physical server), and better control over resource allocation (e.g. limits, such as the number of virtual CPUs or shares per instance; scalability, such as an increase in instances and/or CPUs per instance; and migration, such as replication and/or moving instances from one server to another).

Figure 11. Server Virtualization process

We assume an "idealised" virtualization configuration:

• CPUs of identical speed to the pre-consolidation servers;
• no overhead for virtualization (optimistic);
• perfect load balancing across Virtual Servers;
• one Virtual Server per application zone, with the services from each application zone (web, client, application, security) deployed to the same Virtual Server;
• each Virtual Server able to access an unlimited number of (up to the maximum) physical CPUs (a multi-processor configuration); and
• a single physical server pool (e.g. a blade server) shared among all the Virtual Servers.

Some of these assumptions may not be realistic in practice, or for particular Virtual Server technologies. A good overview of virtualization is provided in [20], with more details in [8-12]. We now predict how many servers are required for this deployment scenario. Figure 12 shows the performance model with services deployed to four virtual servers, sharing one physical server resource pool.

Figure 12. SOPM of Virtualized Servers

The simulation predicts that 14 CPUs in the resource pool are adequate to cope with each separate workload peak, a reduction of 10 CPUs from the 24 planned (and 6 from the 20 optimised). This reduces power consumption to 33.6kWh/day and emissions to 14 tonnes of carbon a year (base 12.8 tonnes, load 1.26 tonnes), with offsets costing $667 a year. Total power consumption cost is $64,000/year (or $85,000 if sourced from green energy). Virtualization achieves a saving of 40% compared to the original resource plan.
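A rough way to see why a shared pool needs fewer CPUs is that it must be sized for the largest peak rather than for each server's individual peak, since the two peaks occur at different times (Figure 1). The sketch below illustrates that head-room calculation with invented per-request CPU costs and a target utilisation of our own choosing (picked so the pooled figure lands near the 14 CPUs predicted above):

    import math

    # Hypothetical CPU-seconds per request for each workload's service mix.
    CPU_SECS = {"pre_trading": 1.8, "trading": 2.0}      # invented values
    PEAKS_TPH = {"pre_trading": 10000, "trading": 20000}
    TARGET_UTIL = 0.8   # keep head-room so response times stay acceptable

    def cpus_needed(tph, cpu_secs, target=TARGET_UTIL):
        demand = tph / 3600.0 * cpu_secs    # average busy CPUs at this rate
        return math.ceil(demand / target)

    # Dedicated servers must each be sized for their own workload's peak...
    dedicated = sum(cpus_needed(PEAKS_TPH[w], CPU_SECS[w]) for w in PEAKS_TPH)
    # ...whereas a shared pool only needs to cover the largest single peak.
    pooled = max(cpus_needed(PEAKS_TPH[w], CPU_SECS[w]) for w in PEAKS_TPH)
    print(f"dedicated: {dedicated} CPUs, pooled: {pooled} CPUs")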

D. Scenario 4: On-demand cloud computing (EC2)

The performance model predicts an average server resource pool utilisation of 9.8%, which is still relatively low, as the servers are substantially over-resourced for peaks. More applications or workloads could be deployed on the shared servers to increase the average utilisation, but with the increased risk of overload for simultaneous peaks. However, a more dynamic resourcing approach using on-demand cloud computing may be appropriate. We will use the model to predict the cost of using Amazon EC2 and the carbon emissions produced, after first exploring the technology.

Amazon Infrastructure Web Services

Amazon architected the scaling of their own applications using the Amazon Infrastructure Web Services, so we assume they are applicable for hosting our example SOA application. Amazon Web Services (AWS) consist of the following offerings: SimpleDB (database with indexing and querying); Simple Storage Service (S3) (object write, read and delete); CloudFront (distributed, location-optimised content delivery); Simple Queue Service (SQS) (hosted messaging queue service); and Elastic Compute Cloud (EC2) (on-demand CPU). Infrastructure-as-a-Service is the Holy Grail of Grid computing, and AWS overcomes the previous barriers to achieving it with three enablers: Web Services address management of resources, deployment of user applications, and interoperability across heterogeneous resources; Server Virtualization addresses security, privacy, isolation, and Service Level Agreements (SLAs); and the charging model (low cost, usage based) addresses resource contention, fairness of use, and scheduling. The use of Web Services and Virtualization to enable remote deployment of services in Grids was identified in [19]. EC2 works using virtual server instances (Xen) managed via Web Services, giving remote control of virtual machine instances which is scalable (dynamically increase/decrease instances) and isolated from other users. Web services and infrastructure can be deployed onto the virtual instances. This combination thus gives individual resource users control of Server Virtualization of SOA applications by SOA. Resource users only pay for what they use (number of instance hours), so EC2 is popular with technology start-ups to reduce the cost of ownership of servers and cope with increasing and large unforeseen loads. See [13-17] for more details.

Modelling EC2 deployment

Our model of EC2 hosting of the emissions trading SOA assumes that all services are deployed to each EC2 instance (which may not be feasible in practice), all instances are started/stopped as required (on hourly boundaries), only "small" instances are used (1 EC2 compute unit), and the speed of CPUs is the same as in the previous scenarios (in practice EC2 compute units are probably slower, as they use 2007 CPU technology). We use a feature of our simulation modelling tool which automatically increases and decreases the number of CPUs for servers, to compute the number of EC2 instances required over 24 hours. This information can be used to compute the cost of using EC2, and the power consumption/emissions. We assume that emissions that we don't produce (i.e. pay for) are S.E.P. (Somebody Else's Problem). Figure 13 shows the predicted number of EC2 instances required for the 24 hour workload.

Figure 13. Predicted number of EC2 instances (instances vs. time in hours)
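The hourly instance counts in Figure 13 amount to a ceiling function over CPU demand, floored at one always-on instance. A sketch of that computation and the resulting daily cost follows, using an invented hourly demand profile shaped like Figure 1 and the US$0.10/instance-hour small-instance price used in this scenario:

    import math

    # Invented hourly CPU demand (busy CPUs): overnight background load,
    # a 1-2pm pre-trading peak and a sharper 5-6pm trading peak.
    demand = [0.5]*10 + [1, 3.2, 5.5, 3.3, 0.9, 0.7, 1.8, 13.9, 1.6,
                         0.9, 0.6, 0.5, 0.5, 0.5]   # 24 hourly values

    PRICE_PER_INSTANCE_HOUR = 0.10   # US$, "small" instance (1 EC2 compute unit)

    # Instances are added/removed on hourly boundaries, so take a ceiling
    # per hour; at least one instance stays up for the background load.
    instances = [max(1, math.ceil(d)) for d in demand]

    instance_hours = sum(instances)
    print(f"peak instances: {max(instances)}")     # 14 at the trading peak
    print(f"instance hours/day: {instance_hours}") # ~50, as in this scenario
    print(f"cost: US${instance_hours * PRICE_PER_INSTANCE_HOUR:.2f}/day")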

A minimum of one EC2 instance is continuously required for the background load, and a maximum of 14 EC2 instances for the peak load. This works out to be 50 CPU hours/day at US10c/hr per instance, costing US$5/day, US$1,825/year, or A$2,800/year. We ignore in/out data costs. Daily power use is approximately 10kWh, with carbon emissions of 3.87 tonnes/year, costing $182.50/year to offset (which may be less, as 50% of USA power comes from no or low greenhouse-gas producing sources). It may be more cost effective to use larger EC2 instances for peaks (e.g. an "extra large" instance has 8 EC2 compute units).

E. Comparison of scenarios

Given these assumptions, EC2 deployment results in a considerable reduction in both cost and emissions compared to the previous scenarios, as it effectively removes the overhead of idle servers. The carbon emissions (tonnes/year) predicted for each scenario are shown in Figure 14.


Figure 14. Carbon emissions (tonnes/year)

Predicted emission offset costs (A$/year) are shown in Figure 15.

Figure 15. Emission offset costs (A$/year): Planned $1,113; Optimised $930; Virtualized $667; EC2 $183

The emission savings compared with the original resource plan are shown in Figure 16. The absolute saving is 19.73 tonnes/year, equivalent to the emissions produced by 3.5 cars per year.

Figure 16. Emission savings compared with original plan (saving %, by deployment: Optimised, Virtualized, EC2)

The total cost (power consumption plus offsets, excluding the EC2 scenario, which is just usage cost and offsets) is shown in Figure 17.

Figure 17. Total costs (power + offsets, per year): Planned $107,985; Optimised $90,930; Virtualised $64,667; EC2 $2,983

V. SOA AND VIRTUALIZATION: CONCLUSIONS

A. SOA and Virtualization: Modelling

NICTA's SOPM technology can model different SOA hosting scenarios (e.g. service deployment, servers, server capacities, and resource models such as fixed, virtualized, or cloud) and computes power consumption and emissions. Modelling can easily be applied to explore alternative deployment scenarios once performance data is obtained from an initial deployment. Different assumptions and trade-offs will give different results. For this round of modelling, we excluded server and software costs, air-conditioning, etc.

B. SOA and Virtualization: Performance

Virtualization typically incurs a performance penalty. The amount depends on factors such as the virtualization technology (type, vendor) and load. A 15% reduction in throughput has been reported [12], with a 20-60% increase in response times (depending on load). Worst-case performance overhead could be substantially higher (e.g. a 400% increase is reported in [10]). Performance overhead can easily be included in the model, assuming it can be measured (or modelled) in the first place [8]. A virtualized environment may also put proportionally more load on other resources such as the network (for diskless servers connected to a SAN). This can also be included in the model.
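To show how such a penalty might enter a model, the sketch below applies an assumed overhead to measured bare-metal figures; the 15% throughput and load-dependent 20-60% response-time values are those reported in [12], but the linear interpolation scheme is our own simplification:

    def virtualized_estimate(bare_throughput_tps, bare_response_ms, utilisation,
                             tput_penalty=0.15, rt_penalty_range=(0.20, 0.60)):
        """Adjust bare-metal measurements for an assumed virtualization overhead.

        Throughput drops by a fixed fraction; the response-time penalty is
        interpolated linearly between its low- and high-load bounds [12].
        This is a simplification, not a measured model.
        """
        lo, hi = rt_penalty_range
        rt_penalty = lo + (hi - lo) * min(max(utilisation, 0.0), 1.0)
        return (bare_throughput_tps * (1 - tput_penalty),
                bare_response_ms * (1 + rt_penalty))

    tps, rt = virtualized_estimate(100.0, 250.0, utilisation=0.7)
    print(f"virtualized: {tps:.0f} TPS, {rt:.0f} ms")   # 85 TPS, ~370 ms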

C. SOA and Virtualization: Scalability

Scalability with increasing numbers of virtual servers, physical servers and CPUs is complex. In practice a Virtual Machine can only access a fixed upper number of CPUs (cores). This is typically four; the limit may be due to licensing, but also to the optimal configuration for Virtual Machine scalability (one CPU per Virtual Machine scales best). Scalability is instead achieved by replicating Virtual Machines, so to achieve high scalability each application must be deployed on multiple Virtual Machines. The number of Virtual Machines can be increased statically (with the required number predicted by modelling) or dynamically (the Virtual Server infrastructure automatically replicates/deletes Virtual Machine instances as load changes). There is a limit to the number of Virtual Machines per host server, and overhead increases (and therefore scalability decreases) as the number of Virtual Machines grows. This can be included in the model, as it can be significant [10, 12]. Finally, scalability may depend on configuration (e.g. CPU affinity [10]).
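A back-of-envelope version of the static sizing decision, under the four-vCPU cap discussed above and a per-VM overhead assumption of our own choosing:

    import math

    MAX_VCPUS_PER_VM = 4      # typical per-VM CPU cap discussed above
    OVERHEAD_PER_VM = 0.05    # assumed extra CPU demand per added VM [10, 12]

    def vms_required(required_cpus):
        """VMs needed to supply required_cpus, inflating demand for per-VM overhead."""
        vms = math.ceil(required_cpus / MAX_VCPUS_PER_VM)
        while vms * MAX_VCPUS_PER_VM < required_cpus * (1 + OVERHEAD_PER_VM * vms):
            vms += 1          # overhead grows with each VM, so re-check the fit
        return vms

    print(vms_required(14))   # e.g. the 14-CPU pool from Scenario 3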

D. SOA and Virtualization: Isolation

In most situations virtual servers won't be allowed to compete equally against each other for spare resources. Various resource configuration options (such as virtual CPUs and shares) are offered by Virtual Server products and can be used to isolate applications running in different virtual server instances [11]. NICTA is conducting ongoing research into modelling virtual server configurations to guarantee SOA SLAs for potentially competing workloads consuming the same services.

E. SOA and Virtualization: Synergies

Apart from the modelled power and emissions savings, some other potential benefits of SOA and Virtualization include efficient sharing of pooled physical resources, and "autonomic" resources using replication and on-demand capacity to cope with load peaks, increase reliability, isolate service consumers, and guarantee service SLAs for user communities. The ability to use virtualization throughout the software development lifecycle may enable smoother transitions from development to testing to production, lower the total cost of deployment, enable maintenance of multiple versions of services, increase uptime, and allow better diagnosis of run-time problems (by copying the production environment for diagnosis).

F. SOA and Virtualization: Challenges

There are likely to be some outstanding risks and challenges with the convergence of SOA and Virtualization. There will be increasingly large numbers of Virtual Servers to manage, performance issues may be harder to understand and fix, and availability may be lower unless Virtual Servers are replicated on redundant hardware. Moreover, scalability may be difficult to achieve or limited, and not all applications may scale well on the EC2 architecture. Load balancing may be trickier (e.g. EC2 did not support hardware load balancing prior to 2009). Modelling may need to take into account a variety of real Virtual Server product performance characteristics, settings, and limitations, which are currently difficult to obtain due to a lack of relevant and transparent benchmarks. Finally, not everything can be virtualized: for example, legacy applications, applications dependent on custom hardware or modified operating systems, 3rd-party services, and applications that must be hosted on physically separate servers due to security or privacy considerations.


ACKNOWLEDGEMENT

NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.

REFERENCES

[1] J. McKendrick, "SOA meets virtualization: IT 'Frankenstein' in the making?", October 10, 2006, http://blogs.zdnet.com/service-oriented/?p=723
[2] S. Martin, "Convergence - SaaS, SOA, Virtualization, Modeling", July 25, 2008, http://blogs.msdn.com/stevemar/archive/2008/07/25/videoclip-on-convergence-saas-soa-virtualization-modeling.aspx
[3] "The Truth behind Virtual Infrastructure Performance", Sysload whitepaper, www.sysload.com
[4] Australian Climate Exchange (ACX), www.climateexchange.com.au
[5] Processor power consumption: http://www.extremetech.com/article2/0,2845,1937997,00.asp; http://www.silentpcreview.com/article694-page1.html; http://www.pcstats.com/articleview.cfm?articleid=2366&page=2
[6] Carbon emissions calculator, http://www.originenergy.com.au/carbon/?_qf_p2_display=true
[7] P. C. Brebner, "Performance modeling for service oriented architectures", in Companion of the 30th International Conference on Software Engineering (ICSE Companion '08), Leipzig, Germany, May 10-18, 2008, ACM, New York, NY, pp. 953-954. DOI: http://doi.acm.org/10.1145/1370175.1370204
[8] D. A. Menascé, "Virtualization: Concepts, Applications, and Performance Modeling", CMG 2005, http://cs.gmu.edu/~menasce/papers/menasce-cmg05-virt-slides.pdf
[9] M. Salsburg, "Beyond the Hypervisor Hype", CMG 2007, http://www.cmg.org/conference/cmg2007/awards/7180.pdf
[10] P. Padala, X. Zhu, Z. Wang, S. Singhal, and K. G. Shin, "Performance Evaluation of Virtualization Technologies for Server Consolidation", HP Labs technical report HPL-2007-59, http://www.hpl.hp.com/techreports/2007/HPL-2007-59.html
[11] J. N. Matthews, W. Hu, M. Hapuarachchi, T. Deshane, D. Dimatos, G. Hamilton, M. McCabe, and J. Owens, "Quantifying the performance isolation properties of virtualization systems", in Proceedings of the 2007 Workshop on Experimental Computer Science (ExpCS '07), San Diego, California, June 13-14, 2007, ACM, New York, NY. DOI: http://doi.acm.org/10.1145/1281700.1281706
[12] C. L. Merrill, "Load Testing SugarCRM in a Virtual Machine: Determining the CPU cost of virtualization with VMware ESX", Web Performance Inc, 2008, http://www.webperformanceinc.com/library/reports/Virtualization2/index.html
[13] EC2 Load balancing, http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1639&ref=featured
[14] EC2 Auto scaling (Scalr), http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1603&ref=featured
[15] Exploring Amazon EC2 for Scale-out Applications, http://d.scribd.com/docs/1j97a7qkrcystco5ou9p.pdf
[16] Using the Cloud to build highly-efficient systems (load balancing, automatic scaling), http://www.allthingsdistributed.com/2008/10/using_the_cloud_to_build_highl.html
[17] T. Killalea, "Building Scalable Web Services", ACM Queue, 2008, http://queue.acm.org/detail.cfm?id=1466447
[18] P. Brebner, L. O'Brien, and J. Gray, "Performance Modelling for e-Government Service Oriented Architectures (SOAs)", 19th Australian Software Engineering Conference (ASWEC 2008), Perth, Australia, 25-29 March 2008, Experience Report Proceedings, pp. 130-138.
[19] P. Brebner and W. Emmerich, "Deployment of Infrastructure and Services in the Open Grid Services Architecture (OGSA)", in Proceedings of the Third International Working Conference on Component Deployment (CD 2005), Grenoble, France, A. Dearle and S. Eisenbach (Eds.), LNCS 3798, Springer, pp. 181-195.
[20] R. Figueiredo, P. A. Dinda, and J. Fortes, "Guest editors' introduction: Resource Virtualization Renaissance", IEEE Computer, vol. 38, no. 5, May 2005, pp. 28-31. DOI: 10.1109/MC.2005.159
