Framework for Testing Cloud Platforms and Infrastructures

2011 International Conference on Cloud and Service Computing

William Jenkins, Sergiy Vilkomir, Puneet Sharma
Department of Computer Science, East Carolina University, Greenville, NC, USA

George Pirocanac
Google Inc., Mountain View, CA, USA

Abstract— Cloud computing, a relatively new approach to distributed computing, uses cloud infrastructures that automatically scale to support an application's hardware requirements, and therefore must be highly reliable in order to meet user expectations. To achieve a high level of quality and reliability, cloud platforms and infrastructures must be tested thoroughly. Typically, cloud providers have individual approaches to infrastructure testing, and no widely accepted methods of infrastructure testing are currently available. An important theoretical and practical problem in this area is the development of such methods, including methods for testing application programming interfaces (APIs), which are the direct links between client code and the infrastructure it runs upon. In this paper, an approach for testing cloud platforms and infrastructures is suggested. An intelligent framework is presented, which accelerates testing and provides for parallel development of test cases. This framework is a cloud application that contains plugins for testing the APIs of cloud platforms. A prototype framework for testing Google App Engine has been created to demonstrate the applicability of the suggested approach.

Keywords- Cloud computing; platform; infrastructure; testing; API; Google App Engine

I. INTRODUCTION

In recent years, cloud computing has taken on significance as a new approach to distributed computing. When users upload their applications, the infrastructure within 'the cloud' automatically scales to support the hardware requirements of any given application. Well-known examples of cloud platforms include Google's App Engine [1], Amazon's EC2 [2], Microsoft's Azure [3], IBM SmartCloud [4], and others. Users of these platforms need not concern themselves with issues such as hardware, scaling, and load balancing, since these tasks are handled by the providers, leaving users free to concentrate on their applications. The downside of this capability is that users increasingly depend on infrastructure that they cannot control and must implicitly rely upon. In these instances, application performance is determined by more than the application's own capabilities; it also depends on the supporting cloud software and hardware. For this reason, cloud computing infrastructure must have high reliability in order to meet user expectations. To achieve this level of reliability and quality, cloud platforms and infrastructures must undergo thorough and intensive testing.

The growing importance of cloud testing has received attention in recent years, reflected in two specialized research workshops on software testing in the cloud [5, 6]. Recent research has focused on general approaches to cloud testing [7, 8] as well as cloud testing for specific types of systems, such as distributed systems [9] and network management systems [10]. A review of the latest results in cloud testing is available in [11]. However, most research to date has focused on testing cloud applications, or on using testing tools in the cloud, rather than on testing cloud platforms and infrastructures themselves. Cloud providers generally have their own approaches to infrastructure testing, which are typically internal to the company and not necessarily revealed to the public. Often this information is not published as an official report but is instead available via the Internet through blogs, video presentations, etc. One such testing technique and framework, for Google App Engine, can be found in [12]. In the present literature, however, there are no widely accepted methods of cloud infrastructure testing, which is why the development of testing techniques has become an important theoretical and practical task. Specifically, investigating methods of testing application programming interfaces (APIs), which are the direct links between client code and the infrastructure it runs upon, is of special importance.

In this paper, a testing approach is presented based on a modular framework that has been developed as an application within the cloud itself. Section 2 covers the general approach of the testing technique. In Section 3, a prototype tool is presented for testing Google's App Engine. This prototype includes plugins for URLFetch, Blobstore, Multitenancy, and Test Generation, which allow different testing methodologies to be used, including Pair-wise [13] and Base Choice [14]. Other testing approaches, such as T-wise [15], MC/DC [16], and RC/DC [17], can easily be implemented and added to the prototype. Section 4 contains conclusions and directions for future work.

This research is supported by the "Google Faculty Research Award" from Google, Inc.

II. APPROACH FOR TESTING CLOUD INFRASTRUCTURE

The focus of any cloud infrastructure testing is the testing of those functions that provide services to client applications. APIs are not functioning programs in themselves, but are necessary for client code to function properly on the platform. Since APIs only run when requested by client code, it is difficult to test them without some testing application. To simplify the testing process and give testers maximum control over the tests that are executed, it is necessary to build a client shell that is as minimal as possible, so that the results reflect the APIs' behavior and not that of the shell application. Once the shell is created, it needs a way to reach the APIs being tested. For this purpose, plugins are loaded, which contain code that activates the APIs.

The idea of using plugins for modular functionality is not new. However, there are few if any documented cases of plugins being used as a method that gives a testing suite access to APIs and allows APIs to be added or changed without substantial rewrites. This approach allows the testing framework to be developed for many services using the same language or server. More importantly, plugins also allow coding to be divided across multiple developers, accelerating testing across a platform. In the development of a plugin, each author chooses the type of methodology that the plugin will use. There can be multiple plugins with multiple methodology types that test a single API. To increase the usefulness of a plugin, it can also load test input values from a file, allowing many tests to be loaded quickly without rewriting the plugin. Since plugins are independent, they can run as mini testing programs for anything (a single API, a specific method, or a class). A plugin could also, in principle, exercise many APIs at a given time and even test their interactions with each other.

III. TESTING GOOGLE APP ENGINE: CASE STUDY

A. Testing Google App Engine

Google App Engine [1] allows web applications to be built, deployed, and hosted on Google's infrastructure. Distributing tasks over various servers, Google's infrastructure provides automatic, on-demand traffic shaping and load balancing for a given application. As a result, many cloud computing features are made available to a range of users. Various APIs abstract the programs from this diverse infrastructure and also enable tracking resource usage over the platform. APIs provide access to a variety of services, including storage (datastore, blobstore), user management, offline processing (taskqueue, cron), and web requests (URLFetch). A software development kit (SDK), consisting of tools to develop applications locally on the user's Linux, Mac OS, or Windows client system, is also provided. Within the SDK, a development application server simulates the supported Google App Engine APIs and allows the developed applications to be run and debugged before being deployed. At the time of this investigation, Google App Engine supported APIs in two separate languages (Python and Java), with the Go language supported in an experimental form.

The focus of this investigation is to apply functional testing techniques rather than performance or scalability testing. Hence, the Java development application server was selected as the primary means to conduct both test development and test execution. The Java server provides unlimited run time for page requests, which allows plugins to run indefinitely while testing. However, run-time durations for plugins were also taken into consideration so that the App Engine application could be deployed to the cloud, where it could also test while still conforming to the 30-second request response deadline. This approach also allowed a comparison of testing results between the development platform and the cloud, to ensure that they are identical from a client application's perspective.

Fig. 1 provides a view of the created testing framework, ECU Test Suite, running on Google App Engine. The general structure of the framework is presented in Fig. 2. The testing framework in this investigation uses Google Web Toolkit (GWT) for creating the user interface and functional logic, such as selecting the types of plugins to run. However, the application does not test any classes of GWT, since GWT is a separate project from the App Engine platform. Each displayed plugin will be covered in a separate subsection.

Figure 1. Testing framework which has loaded several plugins.

Figure 2. General structure of the framework for testing App Engine APIs.

B. Default Plugin

The default plugin is used for testing our framework application rather than a specific API from App Engine. It performs no real function except as a sanity check for plugin loading, test running, and logging. Given its simplicity, the default plugin can be used to troubleshoot a plugin, the framework, or the App Engine platform. If there is some problem but the default

plugin loads and executes correctly, then the problem lies either with the other plugin or with the platform. Finding the source of the problem will require additional testing, but the framework itself can then be considered fairly trustworthy.

C. Plugin for URLFetch

The URLFetch plugin works by following the hierarchy of objects that must be created to complete a web request. From this hierarchy, an object is chosen from the start, the middle, and the end of the call chain. The objective was to locate any bugs while narrowing in on their position by testing objects across the hierarchy. This method also functions as a thorough check, rather than relying on the highest-level object simply returning the correct answer.

Four test sets were run. The first two sets were built into the URLFetch test plugin to verify that an active network connection was available before proceeding to the actual tests. This method is also closest to what a programmer would use to simply grab a webpage, so it helps guarantee that, at the highest levels of the API, data is returned as expected.

URLFetch is the door through which all request traffic must go at the bottom of the URLFetch API call stack. URLFetch is a simple class that only manages a few properties, but it is a single point of failure for the URLFetch API and thus worthy of scrutiny. For this reason, the following request-related options were chosen:

•	Allow/Disallow Truncate – Allow (or not) the silent truncation of data over the buffer size.

•	(doNot)FollowRedirects – Allow or disallow the automatic following of redirect responses.

•	setDeadline – Set the amount of time in seconds a request has to respond before it is aborted.

URLConnection is a high-level Java class that provides access to options whenever a URL object is used to retrieve data. The object's attributes are used to formulate the request that is eventually made, and the data is returned and made available through additional methods of the object. For the URLConnection object, properties were chosen that must be set before the request is made; these options become locked afterward. Therefore, the request is configured in different ways, rather than trying to read the data back in various formats.

For URLConnection, a text document was provided for constructing an array that instructed the test plugin on the options being used and the order in which they were presented. Table I contains the list of eight testing parameters (options) and their possible values. Six parameters have two different values, one parameter has three values, and one parameter has four values. For non-Boolean inputs such as setIfModifiedSince, a three-option system based on Past, Present, and Future was created; the test fabricates a time value such that it meets the comparison criteria and passes or fails as specified in the documentation. For SetRequestMethod, there are many HTTP methods other than GET and POST; however, only these two were used for testing. Each test case includes an expected result, derived from experience with web applications and the Java documentation. Each result is an expected HTTP response code, except in cases where a conflicting configuration would produce an error; in those cases, a Java exception is expected. Altogether, there are 2*2*2*3*2*2*4*2 = 768 different combinations of option values. Testing can be very time consuming, and it is unrealistic to test all these combinations. We focus on generating fewer, better test cases, and we use Pair-wise testing [13] instead of exhaustive testing.

TABLE I. THE LIST OF TEST PARAMETERS FOR URLCONNECTION.

#  Name                     Description                                                                  Values
1  setAllowUserInteraction  Prompt a user for credentials if required, using a GUI dialog                0, 1
2  setDoInput               Set this as a socket that will be receiving data                             0, 1
3  setDoOutput              Set this as a socket that will be sending data                               0, 1
4  SetIfModifiedSince       Choose a cached version if it is available and valid                         Past, Present, Future
5  SetUseCaches             Choose to cache documents that meet certain requirements for faster access   0, 1
6  SetRequestMethod         Choose the HTTP request method for sending form and URL-encoded data         GET, POST
7  Server Response          Force the server to respond using a specific response code                   200, 307, 403, 404
8  Download                 Force the server to send an attachment header for a file on the server       0, 1
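To make the mapping from a Table I row to actual API calls concrete, the sketch below (our illustration, not code from the paper's plugin) applies the first six URLConnection parameters using the standard java.net classes. The URL is a placeholder, and no network traffic occurs because connect() is never called; the last two parameters (Server Response, Download) are server-side behaviors and cannot be set on the client object.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class UrlConnectionOptionsDemo {
    public static void main(String[] args) throws Exception {
        // openConnection() only creates the object; nothing is sent until connect().
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection();

        // One illustrative test-case row, covering the six client-side Table I parameters.
        conn.setAllowUserInteraction(false); // 0
        conn.setDoInput(true);               // 1
        conn.setDoOutput(true);              // 1
        conn.setIfModifiedSince(0L);         // "Past": the 0 Unix timestamp
        conn.setUseCaches(false);            // 0
        conn.setRequestMethod("POST");       // GET or POST

        // Before the request is made, the options can be read back and checked.
        System.out.println(conn.getRequestMethod());        // POST
        System.out.println(conn.getAllowUserInteraction()); // false
    }
}
```

Once connect() has been called, these setters throw IllegalStateException, which is the "locked after the request" behavior described above.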

The requirement of the pair-wise testing approach is that, for any two testing parameters, all combinations of their values should be covered during testing. In other words, any pair of values of any two parameters should appear in at least one test case. The number of test cases required by the pair-wise approach is relatively small: for URLConnection, instead of 768 test cases for all combinations, we use only 18 pair-wise test cases (Table II). These test cases were generated using the TConfig [18] tool, which uses several covering-array construction algorithms that generate fewer test cases to achieve pair-wise coverage than other strategies [19].

For non-Boolean inputs such as setDeadline, each test case was allowed to specify a double value that was added to the current system time. In cases of conflicting configurations, such as a test case that states both 'allow' and 'disallow' truncate, the first value read from left to right was processed and the other ignored.

The tilde (~) is used to represent "do not care", i.e., it is irrelevant which value is chosen, and any option value can be used in that position of the test case.
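The pair-wise requirement stated above can be checked mechanically. The following sketch (our illustration, not part of the published framework) counts how many parameter-value pairs a candidate test set covers, treating ~ as matching any value. It is run here on a toy three-parameter example rather than the full URLConnection table.

```java
public class PairwiseCheck {
    /** True if test entry t satisfies required value v ("~" matches anything). */
    static boolean matches(String t, String v) {
        return t.equals("~") || t.equals(v);
    }

    /** Count how many (parameter pair, value pair) combinations the tests cover. */
    static int coveredPairs(String[][] domains, String[][] tests) {
        int covered = 0;
        for (int i = 0; i < domains.length; i++)
            for (int j = i + 1; j < domains.length; j++)
                for (String vi : domains[i])
                    for (String vj : domains[j]) {
                        boolean hit = false;
                        for (String[] t : tests)
                            if (matches(t[i], vi) && matches(t[j], vj)) { hit = true; break; }
                        if (hit) covered++;
                    }
        return covered;
    }

    public static void main(String[] args) {
        // Toy example: three parameters, two values each.
        // There are 3 parameter pairs * 4 value pairs = 12 pairs to cover.
        String[][] domains = { {"0", "1"}, {"0", "1"}, {"GET", "POST"} };
        String[][] tests = {
            {"0", "0", "GET"}, {"0", "1", "POST"}, {"1", "0", "POST"}, {"1", "1", "GET"},
        };
        System.out.println(coveredPairs(domains, tests) + " of 12 pairs covered"); // 12 of 12
    }
}
```

Exhaustive testing of this toy example would need 2*2*2 = 8 test cases; four suffice for pair-wise coverage, mirroring the 768-to-18 reduction for URLConnection.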


TABLE II. THE TEST SET FILE FOR URLCONNECTION.

         setAllowUser  setDo  setDo   SetIf          SetUse  SetRequest  Server    Download  Result
         Interaction   Input  Output  ModifiedSince  Caches  Method      Response
Test 1   0             0      0       Past           0       GET         200       0         error
Test 2   1             1      1       Past           1       POST        307       0         307
Test 3   ~             ~      ~       Past           ~       ~           403       0         403
Test 4   0             1      ~       Present        0       POST        403       0         403
Test 5   1             ~      0       Present        1       ~           200       ~         200
Test 6   ~             0      1       Present        ~       GET         307       ~         error
Test 7   0             ~      1       Future         0       ~           307       0         307
Test 8   1             0      ~       Future         1       GET         403       ~         error
Test 9   ~             1      0       Future         ~       POST        200       ~         200
Test 10  1             1      1       Future         ~       ~           403       1         403
Test 11  ~             ~      ~       Past           0       GET         200       1         200
Test 12  0             0      0       Present        1       POST        307       1         error
Test 13  ~             ~      ~       Present        1       POST        307       ~         307
Test 14  0             0      0       Future         ~       ~           403       ~         error
Test 15  1             1      1       Past           0       GET         200       ~         200
Test 16  0             0      0       Past           0       GET         404       0         error
Test 17  1             1      1       Present        1       POST        404       1         404
Test 18  ~             ~      ~       Future         ~       ~           404       ~         404

TABLE III. THE TEST SET FILE FOR URLFETCH.

         allow     Follow     set       Server    Download  Result
         Truncate  Redirects  Deadline  Response
Test 1   0         0          0         200       0         200
Test 2   0         1          1         307       0         307
Test 3   0         ~          100       403       0         403
Test 4   1         0          1         403       0         403
Test 5   1         1          100       200       ~         200
Test 6   1         ~          0         307       ~         307
Test 7   ~         0          100       307       0         307
Test 8   ~         1          0         403       ~         403
Test 9   ~         ~          1         200       ~         200
Test 10  ~         ~          100       403       1         403
Test 11  0         0          0         200       1         200
Test 12  1         1          1         307       1         307
Test 13  1         1          1         307       ~         307
Test 14  ~         ~          100       403       ~         403
Test 15  0         0          0         200       ~         200
Test 16  0         0          0         404       0         404
Test 17  1         1          1         404       1         404
Test 18  ~         ~          100       404       ~         404

All test configurations, regardless of generating methodology, are stored in a plain text file tabulated using whitespace. Each row of this file is a test case, except for rows marked as comments and the header row, which identifies the order of columns within the file.

The options provided for URLFetch are represented and loaded in the same way as for URLConnection. A total of 18 test cases were generated according to the pair-wise approach, as shown in Table III.

In the Result column, where a certain combination guarantees a Java exception, "error" is provided as a token signaling that an exception is the desired outcome. For Boolean inputs, 1 represents True and 0 represents False. The Past value used for setIfModifiedSince represents the 0 Unix timestamp, with Present representing the current system time and Future representing 10 minutes past the current system time. Ten minutes was chosen because it is not an exorbitantly large increase, yet is large enough to account for extremely long network responses.

Part of URLFetch's behavior depends on how the server responds, so, based on previous studies, simple server-side behavior was introduced as an available option for test cases. The server chosen was Apache running PHP. A simple script was constructed that took two parameters: the desired HTTP response code from the server, and whether the server should offer a content attachment via the response headers. The attachment is a simple zip file containing a text document whose contents can be verified for correctness to ensure an appropriate and complete transfer; additionally, the document's contents can be padded to raise the zip file to any desired size. Common HTTP response codes tested included 200, 307, 403, and 404.
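The plain-text test set format described above (whitespace-separated columns, a header row naming the columns, # comments, and ~ for "do not care") could be parsed along the following lines. This is our illustrative sketch, not the plugin's actual loader, and the column names in the sample input are only examples.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TestFileParser {
    /** Parse the whitespace-tabulated test file format: '#' starts a comment,
     *  and the first non-comment line is the header naming the columns. */
    static List<Map<String, String>> parse(List<String> lines) {
        List<Map<String, String>> cases = new ArrayList<>();
        String[] header = null;
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#")) continue; // skip comments/blanks
            String[] fields = trimmed.split("\\s+");
            if (header == null) { header = fields; continue; }         // header row: column order
            Map<String, String> testCase = new LinkedHashMap<>();
            for (int i = 0; i < header.length && i < fields.length; i++)
                testCase.put(header[i], fields[i]);                    // "~" is kept as "do not care"
            cases.add(testCase);
        }
        return cases;
    }

    public static void main(String[] args) {
        List<Map<String, String>> cases = parse(Arrays.asList(
                "# illustrative URLFetch-style test set",
                "allowTruncate followRedirects setDeadline Result",
                "0 0 0 200",
                "1 ~ 100 403"));
        System.out.println(cases.size());                    // 2
        System.out.println(cases.get(1).get("setDeadline")); // 100
    }
}
```

Keeping the header row inside the file, as the paper does, lets each plugin reorder or add columns without changing the loader.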

Other, more complicated cases, such as those that require the header to provide additional information, would entail a more complex implementation.

Test Set #1
httpRequest: Pass

Test Set #2
httpsRequest: Pass

Test Set #3
httpViaURLConnection1: Pass
httpViaURLConnection2: Pass
httpViaURLConnection3: Pass
httpViaURLConnection4: Pass
httpViaURLConnection5: Pass
httpViaURLConnection6: Pass
httpViaURLConnection7: Pass
httpViaURLConnection8: Pass
httpViaURLConnection9: Pass
httpViaURLConnection10: Pass
httpViaURLConnection11: Pass
httpViaURLConnection12: Pass
httpViaURLConnection13: Pass
httpViaURLConnection14: Pass
httpViaURLConnection15: Pass
httpViaURLConnection16: Pass
httpViaURLConnection17: Pass
httpViaURLConnection18: Pass

Test Set #4
urlFetch1: Pass
urlFetch2: Pass
urlFetch3: Pass
urlFetch4: Pass
urlFetch5: Pass
urlFetch6: Pass
urlFetch7: Pass
urlFetch8: Pass
urlFetch9: Pass
urlFetch10: Pass
urlFetch11: Pass
urlFetch12: Pass
urlFetch13: Pass
urlFetch14: Pass
urlFetch15: Pass
urlFetch16: Pass
urlFetch17: Pass
urlFetch18: Pass

[Test Results] Pass: 38 Fail: 0

Figure 3. The result output from the URLFetch plugin.

# Comments are prefaced with a pound sign
# Name - The name that should be tested for validity
# Result - "error" for should produce an error or "pass" for should pass as valid
Name         Result
Google       pass
.g0Gl3.      pass
-(weird)}    error
# Proceeding underscores may be reserved in the future
_underscore  pass
badChar#     error
valid_name   pass
valid-name   pass
__________   pass
..........   pass
1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890  pass
\\\\\\\\\\   error
"."."."."."  error

Start with default namespace: Passed
Change namespace: Passed
Checking namespace persistence: Passed
Trying: Google: Passed
Trying: .g0Gl3.: Passed
Trying: -(weird)}: Passed
Trying: _underscore: Passed
Trying: badChar#: Passed
Trying: valid_name: Passed
Trying: valid-name: Passed
Trying: __________: Passed
Trying: ..........: Passed
Trying: 1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890: Passed
Trying: \\\\\\\\\\: Passed
Trying: ".".".".".": Passed
[Test Results] Pass: 15 Fail: 0

Figure 4. Example of test set file contents for the multitenancy plugin and the results it generates.
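The validity rule exercised by the multitenancy test cases in the figure above is the pattern [0-9A-Za-z._-]{0,100} from Google's App Engine documentation (cited in the text). A minimal oracle for such cases might look like the following sketch (our illustration, not the plugin's code):

```java
import java.util.regex.Pattern;

public class NamespaceCheck {
    // Valid namespace pattern per the App Engine documentation: [0-9A-Za-z._-]{0,100}
    private static final Pattern VALID = Pattern.compile("[0-9A-Za-z._-]{0,100}");

    /** matches() anchors the whole string, so partial matches do not pass. */
    static boolean isValidNamespace(String name) {
        return VALID.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidNamespace("Google"));    // true
        System.out.println(isValidNamespace(".g0Gl3."));   // true
        System.out.println(isValidNamespace("-(weird)}")); // false: '(' ')' '}' not allowed
        System.out.println(isValidNamespace("badChar#"));  // false: '#' not allowed
        System.out.println(isValidNamespace("\\\\"));      // false: backslashes not allowed
    }
}
```

Note that {0,100} admits the empty string (the default namespace) and the 100-character boundary case included in the test file.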

Fig. 3 shows the plain textual output from the Test Suite App Engine application, specifically the URLFetch plugin. In this output, it can be observed that the Test Suite has loaded only the URLFetch test plugin and has run a total of four test sets. The first two are the connection tests mentioned earlier, and the remaining two are the URLConnection and URLFetch tests, respectively. According to our results, all test cases returned the values indicated by their oracles and are therefore considered to have "passed."

D. Plugin for Multitenancy

Multitenancy is better known as namespaces. By setting a namespace, supporting APIs will partition data such that even objects with identical primary keys are stored separately within the various data stores provided. Namespaces are accessed via the NamespaceManager object, which is responsible for setting, retrieving, and validating namespaces; this manager is the object we chose to test. Testing the conformance of a string to a regular expression is a difficult problem, so we chose to offload the input test cases to a file, in much the same way as for URLConnection and URLFetch, allowing many different methods of generating test cases to be tried. Fig. 4 presents a sample of the initial cases that were tested to ensure that the plugin itself was in working order. Test cases were manually generated to achieve a good representation of the various characteristics of the regular expression. The acceptable expression for input is defined within Google's App Engine documentation as [0-9A-Za-z._-]{0,100} [20].

Additionally, even though the plugin in Fig. 4 is designed to focus exclusively on multitenancy, the process necessarily involves testing other APIs as well. Given that the entire purpose of multitenancy is to affect how other APIs store their data, it is important to include the affected APIs in testing in order to ensure that multitenancy functions properly.

E. Plugin for Blobstore

In Blobstore testing, the previously established practice of separating the tests from the actual plugin code is followed. The first step is to set the conditions to which Blobstore should adhere: Blobstore should be able to take uploaded data and return it without alteration, and the ability to detect whether subsequent fetches varied from the originally uploaded file is the measurement of success. To that end, an alphabet of mutations was created, where a mutation is defined by the plugin as an operation that changes data but is also reversible; operations such as serialization or applying a lossless compression algorithm were used.

Included operations were:

•	Heap

•	Stack

•	Serialize

•	Bytebuffer

Here, Heap creates a new item and copies all the data over; Stack passes the value via a recursive call; Serialize converts the current data storage into a string using the bytes of the current state; and Bytebuffer converts the current state into a byte buffer. Due to time limitations, we restricted the process to simple operations; future implementations, however, could contain more complex compression algorithms or hashing. These operations are grouped into tests and stored in the form of tags, as seen in Fig. 5.

TEST
HEAP
STACK
STACK
STACK
STACK
STACK
STACK
STACK
TESTEND

Figure 5. A sample test case from the Blobstore plugin test file.

To test the Blobstore API, the Base Choice [14] approach was used. According to this approach, a set of default options (Stack) is selected as the base choice, and each subsequent combination changes a single value while keeping all the others. In other words, all values are tested for each parameter separately, while default options are used for the other parameters. A total of 25 test cases were used, where each test case is a sequence of eight operations. The example in Fig. 5 represents one test case derived according to the Base Choice approach: a recursive function allocating some memory and then passing it seven times. This process produces nonrigorous tests, which work as a proof of concept and help keep the plugin under the execution time limit when running on Google App Engine.

With each operation, the plugin stores a copy of the manipulated data, retrieves a new copy, and then tries to duplicate the operations performed, arriving at the same result. The failure of an operation to match the previous data at the same step indicates that the data was not retrieved properly. A checksum or hash would have been easier to implement; however, as the overall goal was to find bugs in any way possible, it makes sense to use additional methods that add complexity to the process while exercising different operations. The Blobstore plugin uses a feature of our framework called preforms: requests for additional information before the plugin executes. In this case, the plugin requests an additional file to be uploaded for it to operate on. This file can be of any type, as chosen by the user. We do not currently test the size of the file or whether it falls under any limits.

F. Plugin for Test Generation

The Test Generation plugin is specifically designed to analyze a given Java class, choose the best of a registered set of test development methodology modules, and provide the resulting test sets to the user. The test generation plugin is currently bundled with our test suite as a plugin, but for reasons discussed below, it will need to become a stand-alone application.

The plugin begins by creating a list of the class files that it detects in jar packages. These jar packages contain all the class files for the various APIs and represent their actual code base. The user is presented with this list via a preform, from which the user can select entire APIs or single class files to process. The plugin then extracts only the selected class files from the jar and loads the classes into the JVM via a custom class loader. Additionally, the plugin loads each test generation methodology as a module. It then goes through each method of each class using Java's reflection library and passes it to each test generation module. Each module computes the number of test cases it would produce for the particular method and returns this to the plugin, which then decides which module produces the fewest test cases. If a test generation methodology is unsuited to a particular method, due to its parameters or an extremely large number of test cases, it may opt to disqualify itself from consideration. The selected methodology is then run over the method again and allowed to generate test cases, which it returns to the plugin; the plugin formats the test sets as output and returns them to the GUI for the user. The user is then responsible for separating the various test cases and formatting them for other tools. It should be noted that test generation does not produce oracle values; these are left for the user to provide. The code cannot understand the intent of other code without additional markup, so it is not possible to provide oracles without pairing the class files with some other source of input that declares the programmer's intent in ways the plugin can process and verify.

This approach seems straightforward enough, but there are several reasons why the test generation plugin needs to become a stand-alone application in order to function properly. First, the API jar for App Engine is larger than the upload limit. Second, the JVM treats classes loaded by different class loaders as completely different, even if they come from the same class file and are identical. Because Test Generation is a plugin, it is loaded by our framework's class loader; when it then tries to load modules, it does so with a new class loader, which means it cannot use features of the original framework without some degree of difficulty. Third, even if this plugin were adapted to function properly on the live App Engine platform, it would fall under the 30-second runtime limit and might not be able to complete while processing large numbers of class files. Finally, results must currently be concatenated into a single log file to be returned to the user; it would be much better if the module could simply write the necessary files for the user to a local drive.

IV. CONCLUSIONS AND FUTURE WORK

An approach for testing cloud platforms and infrastructures has been suggested. An intelligent framework has been created that can significantly accelerate testing and provides for parallel test development. This framework is a cloud application containing plugins for testing cloud platform APIs. With plugins acting as mini-programs, they are empowered both by their minimalism and by their directness to the testing target. Because only objects and variables referenced by tests are included, the result is readable test plugins that are easy to maintain and adapt in the future. A prototype framework for testing Google App Engine has been created, demonstrating the applicability of the suggested approach. Future research areas include advanced test generation and the automatic detection of specific types of faults. Additionally, we plan to expand upon automatic oracle generation.

REFERENCES

[1] Google, "Google App Engine," code.google.com/appengine. Accessed July 24, 2011.
[2] Amazon, "Amazon EC2," aws.amazon.com/ec2/. Accessed July 24, 2011.
[3] Microsoft, "Microsoft Azure," www.microsoft.com/windowsazure/. Accessed July 24, 2011.
[4] IBM, "IBM SmartCloud," www.ibm.com/cloud-computing/us/en/. Accessed July 24, 2011.
[5] Scott Tilley, Mike McAllister, and Tauhida Parveen, "1st International Workshop on Software Testing in the Cloud (STITC 2009)," Proceedings of the 2009 Conference of the Center for Advanced Studies on Collaborative Research (CASCON '09), November 2-5, 2009, Toronto, Ontario, Canada, ACM, pp. 301-302.
[6] Proceedings of the 2nd International Workshop on Software Testing in the Cloud (STITC 2010), April 10, 2010, Paris, France.
[7] Tauhida Parveen and Scott Tilley, "When to Migrate Software Testing to the Cloud?" Proceedings of the Third International Conference on Software Testing, Verification, and Validation Workshops (ICSTW), Paris, France, April 6-10, 2010, IEEE, pp. 424-427.
[8] W. K. Chan, L. Mei, and Z. Zhang, "Modeling and Testing of Cloud Applications," Proceedings of the IEEE Asia-Pacific Services Computing Conference (APSCC '09), Singapore, December 7-11, 2009, pp. 111-118.
[9] Toshihiro Hanawa, Takayuki Banzai, Hitoshi Koizumi, Ryo Kanbayashi, Takayuki Imada, and Mitsuhisa Sato, "Large-Scale Software Testing Environment Using Cloud Computing Technology for Dependable Parallel and Distributed Systems," Proceedings of the Third International Conference on Software Testing, Verification, and Validation Workshops (ICSTW), Paris, France, April 6-10, 2010, IEEE, pp. 428-433.
[10] Z. Ganon and I. E. Zilbershtein, "Cloud-based Performance Testing of Network Management Systems," Proceedings of the IEEE 14th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD '09), June 12, 2009, Pisa, Italy.
[11] Sergiy Vilkomir, "Cloud Computing Infrastructures: Software Testing Aspects," Proceedings of the 1st International Workshop on Critical Infrastructure Safety and Security (CrISS-DESSERT '11), May 11-13, 2011, Kirovograd, Ukraine, Vol. 1, pp. 35-40.
[12] Max Ross, "Testing Techniques For Google App Engine," Google I/O 2010 conference, Moscone Center, San Francisco, May 19-20, 2010. Video: www.google.com/events/io/2010/sessions/testing-techniques-app-engine.html. Accessed July 24, 2011.
[13] David M. Cohen, Siddhartha R. Dalal, Jesse Parelius, and Gardner C. Patton, "The combinatorial design approach to automatic test generation," IEEE Software, September 1996, pp. 83-88.
[14] Paul Ammann and Jeff Offutt, "Using formal methods to derive test frames in category-partition testing," Proceedings of the Ninth Annual Conference on Computer Assurance (COMPASS '94), Gaithersburg, MD, June 1994, IEEE Computer Society Press, pp. 69-80.
[15] Alan W. Williams and Robert L. Probert, "A measure for component interaction test coverage," Proceedings of the ACS/IEEE International Conference on Computer Systems and Applications (AICCSA 2001), Beirut, Lebanon, June 2001, pp. 304-311.
[16] John J. Chilenski and Steven P. Miller, "Applicability of modified condition/decision coverage to software testing," Software Engineering Journal, 9(5), September 1994, pp. 193-200.
[17] Sergiy A. Vilkomir and Jonathan P. Bowen, "From MC/DC to RC/DC: Formalization and Analysis of Control-Flow Testing Criteria," Formal Aspects of Computing, Vol. 18, No. 1, March 2006, pp. 42-62.
[18] A. Williams, J. H. Lo, and A. Lareau, "TConfig," www.site.uottawa.ca/~awilliam/TConfig.jar. Accessed July 24, 2011.
[19] A. Williams, "Determination of Test Configurations for Pair-Wise Interaction Coverage," Proceedings of the 13th International Conference on Testing Communicating Systems: Tools and Techniques (TestCom '00), August 29 - September 1, 2000, Ottawa, Canada, pp. 59-74.
[20] Google, "Implementing multitenancy using namespaces," code.google.com/appengine/docs/java/multitenancy/multitenancy.html#Setting_the_Current_Namespace. Accessed July 24, 2011.
Lareau, “TConfig,” www.site.uottawa.ca/~awilliam/TConfig.jar Accessed on July 24, 2011. A. Williams, “Determination of Test Configurations for Pair-Wise Interaction Coverage,” Proceedings of the 13th International Conference on Testing Communicating Systems: Tools and Techniques (TestCom '00), August 29 - September 1, 2000, Ottawa, Canada, pp. 59-74. Google, “Implementing multitenancy using namespaces,”. code.google.com/appengine/docs/java/multitenancy/multitenancy.html# Setting_the_Current_Namespace Accessed on July 24, 2011.