SysCon 2010 – IEEE International Systems Conference San Diego, CA, April 5–10, 2010
Towards an Evaluation Framework for SOA Security Testing Tools

Nawwar Kabbani
Department of Computer Sciences, Florida Institute of Technology
[email protected]

Scott Tilley
Department of Computer Sciences, Florida Institute of Technology
[email protected]

Lewis Pearson
Government Communications Systems, Harris Corp.
[email protected]
Abstract – Service-Oriented Architecture (SOA) is a paradigm that organizes and uses distributed capabilities to bring together a technical solution to a business problem. Despite the enterprise's large and increasing dependency on SOA, testing SOA systems is still a nascent and immature field. In particular, testing SOA applications from a security perspective is an essential yet underserved activity. This paper presents preliminary work towards an evaluation framework for SOA security testing tools, in order to address the question "Which testing tool(s) provide(s) the best value in testing SOA security with respect to our needs and context?"

Keywords – software testing, Service-Oriented Architecture (SOA), Web services, security, tools, evaluation
I. INTRODUCTION
Service-Oriented Architecture (SOA) has its own specific characteristics and attributes, which raise unique challenges for SOA testing [12] and make the techniques used to test SOA applications somewhat different from traditional software testing techniques. Achieving a reasonable level of SOA testing requires tools: because SOA services are programmatic (i.e., they have no user interface), mediating tools are needed to let testers invoke services and capture and replay input/output messages.

As with testing any software system, the main purpose of SOA testing is to provide the stakeholders with quality-related information about the system under test. Therefore, one factor that makes one SOA testing tool better than another is how much it helps the tester reveal quality information about the service. Currently available SOA testing tools enable testers to obtain information about service quality using a variety of approaches and testing techniques. Depending on the objectives of the testing process, one tool might be more suitable than another, and some tools might have little value for a specific testing objective. Most commercial SOA testing tools tend to be expensive (up to several hundreds of thousands of dollars), while free ones seem to have far fewer capabilities than the commercial ones. Choosing the right tool, the one that provides the most value to the stakeholders, is therefore an important and challenging task.

SOA testing includes different testing objectives, such as functional, performance, interoperability, and security testing [13]. This paper focuses on security testing, since security is a vital component of SOA systems. By analyzing the SOA security elements, this paper investigates the testability of each of the security elements and analyzes the potential of
using tools to achieve the security testing purposes. As a result, this work proposes an evaluation framework that can be used to measure the security testing capabilities of a SOA testing tool. The framework is applied to selected SOA testing tools that are commonly used in the field.

In [7], a task-oriented approach is proposed to evaluate and select a testing tool based on the testing task; it evaluates the tool's role in each testing task, such as test design, execution, and evaluation. The framework proposed in this paper follows a similar approach in considering the different testing phases, but it is specifically tailored to security testing of SOA services. Furthermore, this framework is based on the systematic evaluation method called T-Check [11], developed at the Software Engineering Institute (SEI) at Carnegie Mellon University.

The next section of this paper provides background about software testing. Section III describes SOA security elements and common SOA risks. Section IV presents preliminary work towards an evaluation framework for SOA testing tools. Finally, Section V summarizes the paper and outlines current and future work in the area.
II. SOFTWARE TESTING
Software testing can be defined as "… an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test" [9], where quality is simply "value to some person," according to Weinberg [16]. Based on this definition, software testing is not only concerned with validating that the software provides the specified or expected functionality, but also with whether it meets other criteria called non-functional quality attributes. Many such attributes are defined in the software engineering literature, such as performance, security, availability, reliability, robustness, interoperability, extensibility, scalability, and maintainability. Each of these attributes makes a good subject for testing.

The software testing process typically consists of a sequence of activities similar to the ones listed below (adapted from [8] with some modifications):

1. Analyzing the situation to understand the software's inputs, outputs, constraints, risks, testing objectives, etc. This is mainly a human effort.
2. Choosing the technique most suitable to achieve the testing objectives.
3. Choosing an oracle: how do we know whether a test passes or fails, what to look at, what to compare to, etc.
4. Generating test cases: the values of the input variables, the precondition state, and the expected output.
5. Executing the test, including setting up the environment, executing the software (or the part of it that is under test), passing in the values from the test cases, and capturing the results.
6. Evaluating the results by looking at the software's behavior, including its output, state, speed, and, in the case of glass-box testing, even internal values such as internal variables, the database, or the file system.
7. Saving the test steps, inputs, and results for future regression testing (a minimal sketch of such a reusable test record appears at the end of this section).

After these steps are completed, the saved tests can be reused after each change of the software in order to detect newly injected bugs resulting from the changes. Testing tools are used to assist the tester in performing some of these tasks more efficiently when the task can be automated and executed by a machine. However, human participation is indispensable throughout the testing process, simply because some of these tasks cannot be done efficiently by a machine. Therefore, fully automated (non-trivial) testing is unrealistic.

Testing tools, like any software application, have their own quality attributes. James Bach proposed some attributes to consider when evaluating a testing tool [1]: capability (functionality or features), reliability, capacity, learnability, operability, performance, compatibility, and non-intrusiveness. Although all of these attributes are important and need to be considered when choosing a testing tool, this paper mainly considers the assessment of the capability of SOA testing tools.
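As a concrete illustration of activities 4 through 7, the following minimal sketch shows one shape a reusable test record might take. It is an illustrative assumption, not taken from any particular tool; the invoke_service callable is a hypothetical stand-in for whatever mechanism actually calls the system under test.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TestCase:
    name: str
    inputs: dict      # activity 4: values of the input variables
    expected: dict    # the oracle's reference output (activity 3)

def run(test: TestCase, invoke_service) -> bool:
    """Activity 5 (execution) and activity 6 (evaluation against the oracle)."""
    actual = invoke_service(**test.inputs)
    return actual == test.expected

def save_suite(tests, path: str) -> None:
    """Activity 7: persist the tests for future regression runs."""
    with open(path, "w") as f:
        json.dump([asdict(t) for t in tests], f, indent=2)
```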
III. SOA SECURITY
Information security is usually seen as a combination of three core principles: confidentiality, integrity, and availability (CIA). This set is often supplemented with accountability, authenticity, authorization, non-repudiation, and others. Many security mechanisms exist to support these security principles, such as Access Control Lists (ACLs), encryption, digital signatures, certificates, identity management, Intrusion Detection Systems (IDS), firewalls, audits, and so on. Testing these mechanisms can be a challenge because it is hard to enumerate all the ways in which one of them could fail. There exist, however, long lists of vulnerabilities and security risks commonly found throughout the history of software. Many of the known Web application vulnerabilities, exploits, and attacks are also applicable to SOA (e.g., buffer overflow, SQL injection, and session hijacking). However, some other Web vulnerabilities, such as Cross-Site Scripting (XSS), are generally not applicable to SOA, except when services are incorporated into Web applications (e.g., mash-ups). Some SOA vulnerabilities are published in the literature. For instance, [4] identifies 27 vulnerabilities and threats applicable to Web services. Several other vulnerabilities are described in [2].
A. Basic SOA Security Elements

This section presents a brief overview of some security elements affecting SOA systems and the main standards or technologies used to achieve these security objectives. A short analysis of the testing required for each item is provided as well.

Encryption is used to protect confidential or sensitive information from being understood or tampered with by an unauthorized party, whether the information is in transit or in storage. In SOA, encryption can be implemented at different levels: the network layer (e.g., IPSec, VPN), the transport layer (e.g., TLS, SSL), the message layer (XML-Encryption), and application-defined encryption methods. From a testing point of view, there is little point in testing the "breakability" of a proven encryption algorithm. However, the tester could be interested in verifying that the correct algorithm is used in the right place; for example, an encryption algorithm might be mistakenly used where hashing should be used. In all cases, a testing tool should support different encryption methods in order to properly establish communication with a secure service.

Authentication: This refers to the ability of a message receiver to verify the identity of the message originator. Authentication can be done through several methods, such as passwords, digital signatures, certificates, and security tokens. The Security Assertion Markup Language (SAML) [15] provides an XML-based standard for Web services authentication and authorization; it enables scenarios in which multiple services share one identity provider. Testing services that use SAML for authentication requires testing tools capable of generating SAML headers. However, testing the authentication mechanism can be very complex. The best way to do it is by creating a set of risk-based tests and complex authentication scenarios based on typical real-life use cases.

Digital signatures: This is a mechanism used to assure the integrity of data and the authenticity of its source. Digital signing is done by applying a hashing algorithm (e.g., SHA-1) to the contents of a document combined with a shared secret (e.g., a key), or by using a public-key encryption algorithm (e.g., RSA). XML Signature (XML-DSig) is a commonly used standard for implementing digital signatures in SOAP. A supporting standard called XML Canonicalization (XML-C14n) ensures that XML-DSig works correctly in spite of the different ways XML can be serialized. Testing tools should be able to generate and validate XML signatures. However, despite the standards, some service providers choose to implement custom signature methods. For example, Amazon Web Services (AWS: http://aws.amazon.com) defines its own signature method, applying a hash function to the service parameter values, including a time-stamp formatted in a specific way. Therefore, the ability to generate custom signatures, allowing services like AWS to be tested, is a good feature for a testing tool. This capability implies that the tool needs to provide some sort of scripting with support for hashing and public-key cryptography.
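As an illustration, the following sketch shows the general shape of such a custom signing scheme: the request parameters are canonicalized together with a timestamp and signed with a shared secret. This is a simplified assumption in the spirit of the AWS example, not the actual AWS algorithm, whose canonicalization rules differ.

```python
import base64, hashlib, hmac, time

def sign_request(params: dict, secret_key: bytes) -> dict:
    """Return params extended with a timestamp and an HMAC-based signature."""
    signed = dict(params)
    signed["Timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    # Canonicalize: sort parameters by name and join them into one string.
    canonical = "&".join(f"{k}={signed[k]}" for k in sorted(signed))
    digest = hmac.new(secret_key, canonical.encode("utf-8"), hashlib.sha256).digest()
    signed["Signature"] = base64.b64encode(digest).decode("ascii")
    return signed
```

A testing tool that exposes a scripting hook with primitives at this level (hashing, HMAC, Base64) would be able to reproduce most custom schemes of this kind.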
Access Control Lists: ACLs are used to enforce authorization and security policies defining who is authorized to do what. The XML Access Control Language (XACL) is a common method for implementing ACLs in XML documents [10].

B. Common Risks and Vulnerabilities

The following is a list of some of the common vulnerabilities and threats applicable to SOA. The list is representative, not exhaustive.

SQL Injection: This threat occurs when input variables are used to build SQL statements without properly escaping the input strings. The vulnerability can be exploited by inserting strings such as "*", "'OR '1'='1", "';DROP DATABASE x;", and so on, which might result in improper behavior such as authentication with invalid credentials or data tampering. Test case generation can, to a large extent, be done automatically based on common SQL injection strings, and the tester's knowledge of the internals of the service, such as the database schema, can enhance it. Choosing the best oracle for this test is usually difficult; white-box (glass-box) assertions on the database can effectively facilitate the detection of this vulnerability type. Once the test cases are generated, execution can be fully automated by the tool. For the evaluation, a testing tool can automatically detect many bugs by evaluating service error returns, time-out errors, and pre-defined assertions on the output. However, tester review of the results is imperative for finding other anomalies.
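A minimal sketch of what such mechanical generation could look like, assuming a hypothetical service invocation helper; the exploit strings and error signatures are illustrative, not exhaustive:

```python
# Illustrative database of common SQL injection strings and a crude oracle
# that flags responses containing database error text. Both lists would be
# much longer in practice.
INJECTION_STRINGS = ["*", "' OR '1'='1", "'; DROP DATABASE x; --"]
ERROR_SIGNATURES = ["SQL syntax", "SQLException", "ORA-", "sqlite3."]

def generate_cases(param_names):
    """Yield one test case per (parameter, exploit string) combination."""
    for name in param_names:
        for payload in INJECTION_STRINGS:
            yield {name: payload}

def suspicious(response_text: str) -> bool:
    """Automated part of the evaluation; a tester still reviews the rest."""
    return any(sig in response_text for sig in ERROR_SIGNATURES)
```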
Other "injections": Similar to SQL injection, a testing tool might support other injections, such as code injection, which means injecting programming code (e.g., PHP or Ruby). This vulnerability is usually found in scripting languages that support an "eval()" function that can execute dynamic code on the server. Another type is (shell) command injection, found when the server runs system commands (e.g., file move, copy, delete). Finally, there are newer kinds of injection specific to Web services, such as XML and XPath injection.

Buffer Overflow: This happens when a variable (usually a string or an array) takes on a length bigger than the size of the buffer assigned to store it. A tool that supports testing for this vulnerability should be able to generate overly large values for input parameters: excessively long strings, too many items in a list, and so on. The symptoms of a buffer overflow bug vary considerably; therefore, it is hard to define a general oracle for it. Some simplistic oracles, such as checking for a system crash or returned errors, can be used to detect some failures, in which case a testing tool can easily perform the evaluation.

Weak Input Validation: In addition to the injection and buffer overflow vulnerabilities, which usually result from weak input validation or processing, there are other kinds of input validation bugs that can potentially be exploited in a security attack. Test case generation can be automated to a large extent (a small value-generator sketch appears at the end of this subsection). The following is a list of sources of common input validation errors:
• Type check errors: such as inserting non-numerical data into numerical fields.
• Range check errors: numbers outside the valid range, strings violating minimum or maximum length, invalid dates, etc.
• Special value errors: null, empty strings, zero, negative numbers, Not a Number (NaN), extreme values (e.g., MaxInt), etc.
• Wrong number of parameters: omitting one or more parameters, duplicating one or more parameters, or adding extra parameters.
• Data format errors: such as incorrectly formatted dates, incorrect email formats, etc.
• XML validation errors: SOAP request messages that are not well formed, or not valid with respect to a DTD or XSD schema.

Information Leakage on Errors: When errors happen, sensitive information is sometimes leaked to the service consumer in the response to the service invocation or, less often, to public sources (such as Web pages, subsequent invocations of the same service, other services, public logs, or even email). An attacker might try all possible ways to cause an error in the service and then look in all possible places for information leaks. The leaked information could be technical, such as the server file system structure or the database schema, which the attacker can use for subsequent attacks. It is impossible for a test tool to generate all possible test cases.

Vulnerability to Denial of Service (DoS) Attacks: Can the test tool evaluate the ability of the service to withstand DoS attacks? It is sometimes possible to consume the server's resources with a few requests, or even a single one, if they are specially crafted in a way that makes the service perform an excessive amount of computation, I/O operations, network traffic, database operations, or memory consumption. In order to test a service's resistance to DoS attacks, the test tool should be able to generate a high volume of requests. Often this is limited by the test machine's performance and the connection speed. A bigger load can be generated if the testing tool can run in a distributed manner (DDoS), similar to the way a DoSnet (DoS network) is used by attackers.

Session Hijacking: This happens if an attacker manages to steal the security tokens used in a legitimate session between a service provider and a consumer, allowing the attacker to impersonate the consumer. The risk of session hijacking is reduced when encryption and digital signatures are used for the session. Since test cases for session hijacking are difficult to generate automatically, the tester has to analyze the protocol and the service design in order to come up with plausible testing scenarios. However, the execution and evaluation of these scenarios can be automated. The oracle in this case would check whether the attacker manages to execute unauthorized actions thereafter.
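The value-generator sketch promised above: for a numeric field, the type, range, and special-value checks in the list translate almost mechanically into probe values. The valid range used here is an illustrative assumption, not taken from any specification.

```python
def numeric_field_probes(min_ok: int = 0, max_ok: int = 100) -> list:
    """Probe values for a numeric input field with an assumed valid range."""
    return [
        "abc",                   # type check: non-numerical data
        min_ok - 1, max_ok + 1,  # range check: just outside the valid range
        None, "", 0, -1,         # special values: null, empty string, zero, negative
        float("nan"),            # Not a Number (NaN)
        2**31 - 1,               # extreme value (MaxInt)
    ]
```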
IV. THE EVALUATION FRAMEWORK
Based on the previous discussion of SOA security risks, we created a table (Table I) that summarizes our analysis of the possible capabilities of a SOA security testing tool. The table shows, for each of the aforementioned risks, how much a testing tool might help with each of the main testing tasks. (The table is still preliminary work and needs to be extended and revised.) For each testing task and each testing purpose we show, to the best of our estimation, the degree to which a testing tool can assist the testing activity. For example, for testing injection vulnerabilities, we think it is possible to automatically generate an extensive list of test cases from a database of common injection exploits, with little or no tester intervention. Evaluating the results, however, cannot be fully automated: the tester's intervention is indispensable for that task, although the test tool can check some predefined post-conditions, such as the absence of obvious errors (time-outs, error returns, etc.).

Table I shows the maximum automation possible for some testing purposes at different testing tasks. The highest level of automation (A) means the task can be fully or almost fully automated (no or very little human intervention). At the lowest level (M), the task is primarily manual and tools cannot provide much assistance. Between those two levels lies a wide grey range of semi-automation (S), in which the tool might provide a considerable amount of assistance but the tester still has a lot of work to do manually.

TABLE I. THE MAXIMUM POSSIBLE AUTOMATION FOR DIFFERENT TESTING PURPOSES AT DIFFERENT TESTING TASKS

Purpose                           Test case generation   Test execution   Test evaluation
Injections                        A                      A                S
Buffer overflow                   S                      A                M
Weak input validation             S                      A                S
DoS vulnerabilities               S                      A                A
XML validation (service input)    A                      A                S
XML validation (service output)   S                      A                A
Information leakage               S                      A                S
Session hijacking                 M                      S                A
A. T-Check

In order to evaluate the fitness of a SOA testing tool for security testing in a given context, we use an evaluation method called T-Check, which was introduced in [11] and described in other reports by the Software Engineering Institute at Carnegie Mellon University (CMU/SEI). This method suggests modeling the problem as a set of hypotheses, along with one or more criteria for each hypothesis. Information is then gathered, and the results are analyzed and compared to the usage context and goals to determine whether the technology or tool is a fit. To model the problem of choosing the best testing tool, a set of hypotheses and criteria needs to be created for that purpose.
Figure 1. Evaluation method, loosely based on T-Check [11]
Figure 1 shows an activity diagram loosely based on T-Check but tailored to the evaluation of SOA testing tools. The first step is to gather and analyze the security requirements of the SOA project or system; each system has its own characteristics and special requirements. This can be done with the help of the project stakeholders, especially the project manager, system analysts, system architects, developers, and testers. The second step is to specify the testing goals, which is also important for clearly understanding the context and scope of the testing process. After that, a limited set of services needs to be prepared: either actual services selected from the existing ones, if there are any, or prototypes created by developers. These services are used later to run the experiments that evaluate the tools against the hypotheses and criteria, so it is important that they be as representative as possible with respect to the security requirements. The next activity is the development of the hypotheses and criteria, as described in the T-Check process [11]. Then a set of testing scenarios should be created; with these scenarios, the testing tool is assessed against the criteria. The scenarios should be designed in a way that reveals the information related to the evaluation criteria. Finally, the test scenarios are implemented and a conclusion is reached about whether the tool meets the criteria. If the tool seems not to meet some criteria directly, this might be because those criteria were not well chosen; refining or modifying the criteria may give positive results. This step has to be done carefully to avoid subjective bias, but in some cases it is important to correct poorly chosen criteria.
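To make the shape of this model concrete, the following sketch represents hypotheses, criteria, and experiment outcomes as plain data. The entries anticipate the soapUI example in the next subsection and are purely illustrative, not part of the T-Check method itself.

```python
# Each hypothesis maps to its criteria; each criterion holds the outcome of
# the corresponding experiment (None until the experiment has been run).
evaluation = {
    "Supports secure services": {
        "Can encrypt/decrypt SOAP messages using XML-Encryption": None,
        "Can store and use X.509 certificates": None,
    },
    "Regression testing": {
        "Can save a test": None,
        "Can replay a test with minimum human interaction": None,
    },
}

def unmet_criteria(ev: dict) -> list:
    """Criteria whose experiments failed; candidates for review or refinement."""
    return [c for crits in ev.values() for c, ok in crits.items() if ok is False]
```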
B. Example Using soapUI

This section presents a hypothetical case study based on the free version of one SOA testing tool, for the purpose of analyzing some of its security testing capabilities and showing an example of using the proposed evaluation framework.

To better understand the framework, assume the following hypothetical context. Security testing is required for a service that has a database backend; for instance, a customer-lookup service. Suppose that the service can only be invoked using encrypted SOAP messages (XML-Encryption) and that the authentication mechanism depends on X.509 certificates. Finally, the testing objectives include exposing security vulnerabilities of the service and being able to repeat the tests easily in the future for regression testing.

From the above description and requirements, it is clear that the testing tool must be able to communicate with the secure service using the aforementioned mechanisms. Knowing that the service depends on a database raises security risks related to proper data validation and sanitization; therefore, domain testing and SQL injection testing are examples of important testing techniques for this case. Finally, regression testing is a stated testing objective, which means that the testing tool needs to be capable of saving and replaying saved tests. For regression testing to remain effective when the service undergoes changes, the testing tool should possess some flexibility in modifying saved tests, for example by allowing parameterization of test cases.

Based on this analysis of the security requirements and testing objectives, the next step is to choose a sample of services to run the experiments on. Since there is only one service in this example, that service is chosen. After that, a testing tool is selected for evaluation. Eviware soapUI [6] is a SOA testing tool that comes in two versions, one free/open source and one commercial. The vendor of soapUI claims it is one of the most popular SOA testing tools, and soapUI is frequently mentioned in the related academic literature, which is the main reason for conducting our study using it. There exist, however, several other commercial and free SOA testing tools; the appendix of this paper lists some of the most common ones.

The next step is to develop a set of hypotheses and criteria for the evaluation. Table 1 shows an example of some hypotheses and criteria for this evaluation process; ultimately, the results of the experiments are filled into the table as shown. The results are reached by evaluating each criterion using some testing scenario. For example, to reach a conclusion about the first criterion, whether the tool can encrypt and decrypt SOAP messages using XML-Encryption, a simple test case can be run with the tool against the target service using XML-Encryption. Some criteria, like this one, can also be answered by consulting the specification sheet of the testing tool.
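The regression-related criteria hinge on exactly this kind of parameterization. A minimal sketch follows, with an illustrative request template and a hypothetical send callable standing in for the tool's replay machinery; the field names are assumptions, not taken from any real service.

```python
from string import Template

# A saved request whose values are parameterized, so data-driven rows can be
# replayed unchanged after the service evolves.
saved_request = Template("<customerId>$cid</customerId><region>$region</region>")
data_rows = [{"cid": "1001", "region": "US"}, {"cid": "1002", "region": "EU"}]

def replay(rows, send):
    """Replay the saved test once per data row, with no human interaction."""
    for row in rows:
        send(saved_request.substitute(row))
```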
Finally, after running the experiments on all hypotheses, the suitability of the tool can be assessed, and comparisons between different tools using the same set of hypotheses and criteria can be made.

Table 1. Set of hypotheses, criteria, and the results of the evaluation process for soapUI

Hypothesis: Supports secure services
  Criterion: Can encrypt/decrypt SOAP messages using XML-Encryption. Result: Yes.
  Criterion: Can store and use X.509 certificates. Result: Yes.

Hypothesis: Can test database-related vulnerabilities
  Criterion: Can generate test cases with SQL injections and boundary tests. Result: No, but with scripting and/or data-driven testing capabilities it is possible to generate such tests.
  Criterion: Can execute SQL-injection test cases and boundary tests. Result: Yes.
  Criterion: Can evaluate the results. Result: Partially; soapUI can only help evaluate the SOAP response, and does not support database assertions, for example.

Hypothesis: Can test authentication vulnerabilities
  Criterion: Can generate authentication test cases. Result: No, but scripting might be useful.

Hypothesis: Regression testing
  Criterion: Can save a test. Result: Yes.
  Criterion: Can replay a test with minimum human interaction. Result: Yes.
  Criterion: Tests can be modified through parameterization or data-driven testing. Result: Yes.
C. Related Work

In [3], Brown and Wallnau present a methodology for evaluating a software product or technology within the context in which the software will be used. Their approach focuses on the differences (deltas) in features between technologies and consists of three phases: (1) modeling the domain and problem using semantic networks; (2) generating a set of hypotheses and experiments; and (3) conducting empirical experiments and studies.

Poston and Sexton present in [14] a process for the evaluation and selection of software testing tools. The article explains how to use standard data-collection forms to achieve four objectives required to evaluate testing tools: (1) identifying and quantifying user needs; (2) establishing tool-selection criteria; (3) finding available tools; and (4) selecting tools and estimating the return on investment.

There are also various IEEE and ISO standards, such as IEEE 1209, IEEE 1175, and ISO 14102, and many other standards and related papers relevant to the evaluation of software testing tools. They mainly provide recommendations and guidelines for the evaluation and selection of Computer-Aided Software Engineering (CASE) tools, which include software testing tools among others. For
example, the process described in IEEE 1209 [5] takes user needs and a list of criteria as input; from these two inputs, a list of criteria tailored to the user's needs is derived. The evaluation is then conducted using this list, together with a list of available tools and the objectives, assumptions, and constraints of the evaluation. The evaluation results are used for tool selection.
V. SUMMARY
This paper presented preliminary work towards a framework for evaluating SOA testing tools from a security perspective. This work is part of a larger research agenda that aims at establishing a basis for comparing SOA testing tools for any testing objective. Professional testers and academics should both benefit from this work: it provides practitioners with a model that helps them choose the best testing tool to achieve their objectives, and academics and researchers can use it to analyze their testing approaches and easily place their work in the space of testing capabilities.
ACKNOWLEDGEMENTS

The authors would like to thank Tauhida Parveen and George Frederick for their contributions to this paper. This work is supported in part by a grant from Harris Corp.
REFERENCES

[1] Bach, J. "Test Automation Snake Oil." Windows Tech Journal, pp. 40–44, 1996. Online at http://www.satisfice.com/articles/test_automation_snake_oil.pdf.
[2] Barbir, A., Hobbs, C., Bertino, E., Hirsch, F., and Martino, L. "Challenges of Testing Web Services and Security in SOA Implementations." Test and Analysis of Web Services, pp. 395–440, 2007.
[3] Brown, A. and Wallnau, K. "A Framework for Evaluating Software Technology." IEEE Software, 13(5):39–49, 1996.
[4] Demchenko, Y., Gommans, L., De Laat, C., and Oudenaarde, B. "Web Services and Grid Security Vulnerabilities and Threats Analysis and Model." Proceedings of the 6th IEEE/ACM International Workshop on Grid Computing, pp. 262–267. IEEE Computer Society, 2005.
[5] IEEE Std 1209-1992, Recommended Practice for the Evaluation and Selection of CASE Tools (ISO/IEC 14102, 1995). IEEE Press, 1992.
[6] Eviware soapUI, a SOA testing tool. http://www.soapui.org/
[7] Illes, T., Herrmann, A., Paech, B., and Rückert, J. "Criteria for Software Testing Tool Evaluation: A Task Oriented View." Proceedings of the 3rd World Congress for Software Quality, 2005.
[8] Kaner, C. "High Volume Test Automation." Keynote address [slides], International Conference on Software Testing Analysis & Review (STAREAST 2004: Orlando, FL; May 2004).
[9] Kaner, C. "Exploratory Testing." Keynote address [slides], Conference of the Association for Software Testing (Orlando, FL; November 2006).
[10] Kudo, M. and Hada, S. "XML Document Security Based on Provisional Authorization." Proceedings of the 7th ACM Conference on Computer and Communications Security, pp. 87–96, Athens, Greece. ACM, 2000.
[11] Lewis, G. and Wrage, L. "A Process for Context-Based Technology Evaluation." Software Engineering Institute (CMU/SEI-2005-TN-025), June 2005.
[12] O'Brien, L., Merson, P., and Bass, L. "Quality Attributes for Service-Oriented Architectures." Proceedings of the International Workshop on Systems Development in SOA Environments, p. 3. IEEE Computer Society, 2007.
[13] Parveen, T. and Tilley, S. "A Research Agenda for Testing SOA-Based Systems." Proceedings of the 2nd IEEE International Systems Conference (SysCon 2008: April 7–10, 2008; Montréal, Canada), pp. 355–360. Piscataway, NJ: IEEE, 2008.
[14] Poston, R. and Sexton, M. "Evaluating and Selecting Testing Tools." IEEE Software, 9(3):33–42, 1992.
[15] SAML, an XML-based security standard for exchanging authentication and authorization information. http://www.oasis-open.org/committees/security/
[16] Weinberg, G. Quality Software Management: Systems Thinking. Dorset House Publishing Co., Inc., 1991.
APPENDIX

TABLE II. COMMON SOA TESTING TOOLS

Tool              Vendor                                                               License type
SilkPerformer     Borland (http://www.borland.com/us/products/silk/silkperformer/)    Proprietary
BPELUnit          http://www.se.uni-hannover.de/forschung/soa/bpelunit/               Free/Open Source
SoapSonar         CrossCheck Networks (http://www.crosschecknet.com/)                 Proprietary
soapUI            Eviware (http://www.soapui.org/)                                    Free/Open Source
GH Tester         Green Hat (http://www.greenhat.com/)                                Proprietary
HP Service Test   HP (Mercury) (http://www.hp.com/)                                   Proprietary
Lisa              iTKO (http://www.itko.com/)                                         Proprietary
QEngine           ManageEngine (Zoho) (http://www.manageengine.com/products/qengine/) Proprietary
SOATest           Parasoft (http://www.parasoft.com/)                                 Proprietary
SOAPScope         Progress (Mindreef) (http://web.progress.com/index.html)            Proprietary
TestMaker         PushToTest (http://www.pushtotest.com/)                             Free/Open Source
WebInject         http://www.webinject.org/                                           Free/Open Source
SOAPBox           Vordel (http://www.vordel.com/products/soapbox/)                    Proprietary