Best Practices for Provisioning for Test and Development Databases
WHITE PAPER
This document contains Confidential, Proprietary and Trade Secret Information (“Confidential Information”) of Informatica Corporation and may not be copied, distributed, duplicated, or otherwise reproduced in any manner without the prior written consent of Informatica. While every attempt has been made to ensure that the information in this document is accurate and complete, some typographical errors or technical inaccuracies may exist. Informatica does not accept responsibility for any kind of loss resulting from the use of information contained in this document. The information contained in this document is subject to change without notice. The incorporation of the product attributes discussed in these materials into any release or upgrade of any Informatica software product—as well as the timing of any such release or upgrade—is at the sole discretion of Informatica. Protected by one or more of the following U.S. Patents: 6,032,158; 5,794,246; 6,014,670; 6,339,775; 6,044,374; 6,208,990; 6,850,947; 6,895,471; or by the following pending U.S. Patents: 09/644,280; 10/966,046; 10/727,700. This edition published June 2009
White Paper
Table of Contents

Executive Summary
Challenges with Provisioning Databases for Test and Development
  Large Footprints
  Time and System Performance
  Sensitive Data in Test and Development
Solution: Creating Secure, Space-Efficient Copies of Your Production Databases with Minimal Overhead
Best Practices
  Understanding the Application Data
  Clarifying What Constitutes Sensitive Information and How to Protect It
  Understanding Data Requirements for Test and Development
  Defining Data Selection Criteria and Data Masking Rules
  Defining Your Policies
  Understanding the Importance of Application Awareness
  Testing and Validation
  Auditing and Security
  Following a Time-Tested Methodology
Appendix A – Regulations and Industry Standards Protecting Sensitive Information
Appendix B – Lists of Capabilities That a Best-in-Class Test and Development Solution Should Provide
Executive Summary

Enterprises running packaged and custom applications face the challenge of maintaining a complex and rapidly growing data environment. The average size of a packaged application production database often exceeds 500 GB, with a growing number of systems falling into the terabyte range [1]. As these production systems grow, so does the complexity of maintaining an efficient support environment. Often, two or more nonproduction systems are required for each production system to facilitate testing, development, training, and other support purposes.

With the growth of production environments, creating nonproduction copies constitutes a significant overhead for information technology (IT) departments. Traditional system copy approaches, which have been the mainstay for data duplication, center on making a complete copy of the production system, including its entire data repository as well as transactional data. This process is very inefficient and expensive in terms of both time and resources. Because large enterprises often have tens to hundreds of servers running different applications, continuing to make complete system copies quickly escalates storage costs for nonproduction usage.

Nonproduction systems also represent a significant data security problem. In today’s heightened regulatory environment, auditors are beginning to expose the risks associated with using production data in test and development environments. As organizations gear up to comply with data privacy regulations, they are realizing that protecting sensitive information, such as bank account information and health data, across multiple applications and versions not only requires deep knowledge of all those applications but is also time consuming and resource intensive.

The goal of this white paper is to discuss best practices and new technologies for creating test and development databases.
By following the practices outlined in this paper, organizations can save time and resources and streamline data provisioning processes for creating, updating, and securing nonproduction environments to achieve significant cost savings and better regulatory compliance.
[1] Survey by Gamma Enterprise Technologies, September 2007.
Challenges with Provisioning Databases for Test and Development

Size, performance, and privacy are three major challenges in the creation of test and development databases.
Large Footprints

A primary challenge facing IT organizations is the storage footprint of their nonproduction environments. With a traditional approach, test and development systems are created by making a complete copy of the production system. This means that the disk-space footprint of each nonproduction system is identical to that of production. If the size of the production database is 500 GB, the storage footprint for maintaining eight nonproduction copies quickly escalates to 4 TB. See Figure 1 for the number of nonproduction databases created in an organization.

Figure 1: Survey Conducted by the ESG Group Showing the Number of Nonproduction Copies Created in an Organization. (Survey question: “On average, how many secondary database instances (copies or clones of the primary) does your organization create monthly for each of the following purposes?” Percent of respondents, N=110.)
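The footprint arithmetic above can be sketched in a few lines. The 10 percent subset ratio used for comparison is an assumption for illustration, not a figure from the text:

```python
# Back-of-the-envelope footprint math: with full system copies, every
# nonproduction environment is as large as production itself.
production_gb = 500
copies = 8  # e.g., development, QA, training, and support instances

full_copy_footprint_gb = production_gb * copies  # 4,000 GB, i.e., ~4 TB

# A subset-based approach (discussed later in this paper) shrinks this.
# The 10% ratio is an illustrative assumption, not a product claim.
subset_ratio = 0.10
subset_footprint_gb = production_gb * copies * subset_ratio
```

Even this rough model shows why subsetting dominates the storage discussion: the savings scale with the number of copies maintained.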
Time and System Performance

The increasing footprint of nonproduction systems also has a compounding impact on the time necessary to perform support activities. First, as the production database grows, so does the time required to make a complete system copy. However, this is not the sole source of wasted time in test and development environments. Second, by using full production copies for development and test purposes, project teams receive far more data than they require. This degrades system performance and slows down test and development processes. As a result, unit and regression tests of new functionality take longer than necessary, leading to longer patch and upgrade test cycles in which multiple iterations are required.
Sensitive Data in Test and Development

Data privacy is another challenge when provisioning nonproduction environments. Production systems contain a large amount of sensitive information such as credit card numbers and bank account data. In a production environment, this sensitive information is routinely protected via security policies and roles, in accordance with industry and government regulations as well as industry best practice. When copied to nonproduction environments, these same policies and roles are not applied, leaving sensitive data exposed to a variety of employees and contractors. See Appendix A for a description of some of the data privacy regulations and the sensitive data that needs to be protected.

Figure 2: Survey Showing the Percentage of Confidential Data Contained in Primary Databases. (Survey question: “What percentage of your organization’s primary database instances contains information that can be classified as confidential?” Percent of respondents, N=110.)
Solution: Creating Secure, Space-Efficient Copies of Your Production Databases with Minimal Overhead

To reduce the cost and risk of maintaining test and development databases, businesses need to implement procedures that address all the above-mentioned challenges. Effective system copy tools should be employed to shrink the footprint of nonproduction databases by taking only relevant subsets of production data without compromising functional integrity. In addition, sensitive data moved during the subset process should be automatically identified and protected using sophisticated data masking techniques. To further increase the quality of the data subset, any comprehensive solution should supply prepackaged logic, or “accelerators,” that define data structures, relationships, and business logic for the most popular enterprise applications, such as enterprise resource planning (ERP), customer relationship management (CRM), and human resources (HR). Finally, the solution should provide a robust framework for handling and adapting to custom enhancements as well as custom applications.
Best Practices

The best-practice steps for creating test and development databases are as follows:
1. Understanding the application data
2. Clarifying what constitutes sensitive information and how to protect it
3. Understanding data requirements for test and development
4. Defining data selection criteria and data masking rules
5. Defining your policies
6. Understanding the importance of application awareness
7. Testing and validation
8. Auditing and security
9. Following a time-tested methodology
Understanding the Application Data

Identifying High-Volume Modules and Tables

The goal of this step is to understand the data growth trends and the distribution of data in your application database. A best-in-class solution for creating and updating development and test databases must give you the ability to analyze data growth in an application and identify key modules and tables that take up the most space.
Figure 3: The Largest Modules in an Example of an Oracle Applications Environment
To effectively reduce the footprint of your nonproduction environments while retaining value and usability, all high-volume tables and modules must be examined to identify the data that is relevant for the test and development process and, by contrast, the data that can be excluded. Having this knowledge will enable you to have more meaningful discussions with the requesting teams and to understand how their requirements can be met while reducing the footprint of the requested database copy at the same time.
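The size analysis described above can be sketched as a simple aggregation. This is a minimal illustration assuming per-table sizes are available from the database catalog; the module and table names below are invented for the example:

```python
from collections import defaultdict

# Hypothetical per-table sizes in GB, as they might be read from the
# database catalog. Module and table names are invented for illustration.
table_sizes_gb = {
    ("GL", "GL_JE_LINES"): 120.0,
    ("GL", "GL_BALANCES"): 45.0,
    ("AR", "RA_CUSTOMER_TRX_LINES"): 80.0,
    ("HR", "PER_ALL_PEOPLE"): 5.0,
}

# Aggregate sizes by module so the highest-volume modules, which are
# the best candidates for subsetting, surface first.
module_sizes_gb = defaultdict(float)
for (module, _table), size in table_sizes_gb.items():
    module_sizes_gb[module] += size

ranked = sorted(module_sizes_gb.items(), key=lambda kv: kv[1], reverse=True)
```

A ranking like this is what makes the conversation with requesting teams concrete: the top one or two modules usually dominate the footprint.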
Clarifying What Constitutes Sensitive Information and How to Protect It

How to Identify Sensitive Data and Report Compliance

There are numerous state and federal regulations, industry standards, and international laws that require an organization to protect sensitive information. Developing a successful enterprise program to protect this data involves:
• Working with the compliance group and appropriate business managers to gain an understanding of the data privacy rules that apply to your organization and the sensitive data that needs to be protected
• Reviewing and understanding the types of reports that auditors look for to certify compliance with industry standards and privacy regulations
• Developing a mapping between the sensitive data and the application modules containing the sensitive data
• Identifying the best methodology to protect sensitive information without compromising application utility
Understanding Data Requirements for Test and Development

Who Needs Test and Development Copies and Why?

To develop a successful strategy for optimizing your test and development environment, you need to develop a keen understanding of the work being done by each project team. In many cases, each project team will need an independent copy so they don’t inadvertently affect the work of another team. For example, the development team may be designing a new feature for the HR module while, at the same time, the QA team is conducting initial unit tests of HR functionality. It is important, therefore, to ensure that data provisioning requests made by the development team do not interfere with the work being performed by the QA team.

While working with your users, it is also important to survey them for unmet needs. Are there requests for additional copies of production data that are not being fulfilled because of resource or time constraints? Have you surveyed the data warehouse group, marketing, business intelligence, and business partners for their needs?
Defining Data Selection Criteria and Data Masking Rules

Understanding the Data Needs and Priorities of Each Audience

The next step is to analyze the unique requirements of each data request to determine how to reduce the size of the requested copy without affecting the functionality or usability of the application. For example:
• Does the development team need a full copy of production, or will a subset with a limited set of general ledger (GL) transactions created during the past six months suffice?
• The QA team is requesting an application test environment. Will a database with a limited set of transactions created over the last year across several modules suffice?
• For a team testing a modification to the revenue recognition rules in Germany, will a database containing only German-based sales transactions suffice?
• How often should test and development databases be refreshed (data erased and reloaded) so that the team can restart testing from a baseline copy?
• Will the refresh overwrite ongoing work in test and development systems? How can this be avoided?
• The team testing accounts receivable (AR) functionality needs incremental AR data added every month from production. The team needs to test against current data. How can they be provided with data on demand without re-creating the test database they are already using?

Another issue to discuss with each project team is access to sensitive data. Sensitive data should always be masked, but the masking methodology may differ depending on how the data will be used.

While evaluating data requirements, it is important to:
• Prioritize requests according to business needs
• Categorize requests as regular versus one-off
• Determine the intervals at which each requested database needs to be refreshed
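The time-slice criterion raised in the first question above can be expressed very simply. This is an illustrative sketch with invented transaction data, approximating a month as 30 days:

```python
from datetime import date, timedelta

# Hypothetical GL transactions as (transaction_id, posting_date) pairs.
transactions = [
    (1, date(2009, 1, 15)),
    (2, date(2009, 3, 3)),
    (3, date(2007, 6, 20)),
]

def time_slice(rows, as_of, months):
    """Keep only rows whose posting date falls within the last `months`
    months (a 30-day month is an approximation for this sketch)."""
    cutoff = as_of - timedelta(days=30 * months)
    return [row for row in rows if row[1] >= cutoff]

subset = time_slice(transactions, as_of=date(2009, 6, 1), months=6)
# Transactions 1 and 2 fall inside the six-month window; 3 is excluded.
```

In practice the same predicate would be pushed down to the database as a WHERE clause on the posting date rather than evaluated row by row in application code.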
Figure 4: A Sample Spreadsheet Listing the Requirements

1. Requesting project team: QA
   Reason for request: Testing an upgrade
   Priority: Medium
   Data required: A copy of production with limited transactions created over the last year
   Data refresh interval: Monthly
   Incremental data needed from production: No
   Type of request: One-off

2. Requesting project team: Development
   Reason for request: Testing a customization to year-end payroll in Germany
   Priority: High
   Data required: Functional slice of all the data created in Germany
   Data refresh interval: N/A
   Incremental data needed from production: Yes, monthly
   Type of request: Regular

3. Requesting project team: Training
   Reason for request: Test and development environment for the trainees
   Priority: Medium
   Data required: A copy of production with limited transactions over the last six months
   Data refresh interval: Every six months
   Incremental data needed from production: No
   Type of request: Regular

4. Requesting project team: Help desk/support
   Reason for request: A nonproduction copy to troubleshoot production issues
   Priority: Medium
   Data required: A complete copy of production
   Data refresh interval: Every time the database is upgraded
   Incremental data needed from production: No
   Type of request: Regular
Defining Your Policies

When defining your policies for test and development, it is important to do the following:

1. Create a baseline policy that satisfies all the requirements listed above.
• After reviewing the requirements of each requesting team, create a default data privacy and subset policy that addresses how to fulfill the majority of test and development requests.
• Depending on the needs of your users, it is not uncommon to have more than one policy; however, having fewer policies improves simplicity and manageability. For example, the default subset policy for creating smaller nonproduction databases could be “all transactional data created over the last year” or “transactional data created over the last year in one high-volume module” (for instance, the GL module).
• A default data privacy policy would detail the specific masking algorithm to be used for each sensitive datatype.

2. Manage and reuse data privacy and subset policies.
• When data needs to be refreshed, it is efficient to reuse the same policies on each new production copy.
• The unique needs of a project team can be addressed by making minor tweaks to the default policies rather than starting from scratch.

3. Develop a basic set of questions that need to be answered for each new data request.
• Will the default data privacy and subset policy suffice?
• How can the footprint of the requested copy be further reduced while meeting requirements for test and development?
• Are there any specific needs with regard to protecting sensitive data?
• Does incremental data from production need to be brought into the requested test and development copy at regular intervals?
• How often should data in the requested copy be refreshed?
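A baseline policy that can be reused and tweaked per team, as described above, is easiest to manage when expressed as plain data. The field names and values below are assumptions for illustration, not a product schema:

```python
# A default provisioning policy captured as plain data so it can be stored,
# reused on every refresh, and tweaked per team. All field names and values
# here are illustrative assumptions.
default_policy = {
    "subset": {"criterion": "time_slice", "months": 12, "modules": ["GL"]},
    "masking": {"ssn": "format_preserving", "name": "substitution"},
    "refresh_interval": "monthly",
}

def derive_policy(base, **overrides):
    """Return a copy of the base policy with shallow per-team overrides,
    leaving the base policy itself untouched."""
    derived = {k: (dict(v) if isinstance(v, dict) else v) for k, v in base.items()}
    derived.update(overrides)
    return derived

# A team testing German payroll needs a functional slice instead of the
# default time slice; everything else is inherited from the baseline.
germany_policy = derive_policy(
    default_policy,
    subset={"criterion": "org_slice", "org": "DE"},
)
```

Deriving from one baseline, rather than authoring each team's policy from scratch, is what keeps the number of policies small and manageable.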
Understanding the Importance of Application Awareness

How to Mask Sensitive Data Without Compromising Application Usability

Data masking has two objectives: to protect sensitive data and to create realistic, usable data for testing and development purposes. Accomplishing both requires a complete understanding of the application context to ensure not only that all functionality remains intact but also that the resulting masked data is as usable as the original data. For example, male first names should be replaced by male first names rather than random strings, salutations must match, and Social Security numbers must keep the predefined nine-digit format so that business rules don’t fail. In addition, data must be identified and masked consistently across all tables in the application and across all modules.

Enterprise applications present a particular challenge because many of the functional relationships between data are defined only within the business logic of the application, not within the database. For this reason, masking algorithms alone, operating at the database level, cannot be expected to protect application data as a logical, business-driven unit. A privacy solution that incorporates masking must additionally account for all related data in the functional area it seeks to protect.
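The two constraints above, preserving the data format and masking the same value consistently across every table, can be illustrated with a deterministic hash-based substitution. This is a sketch only, not a certified format-preserving encryption scheme, and the key is a placeholder:

```python
import hashlib

def mask_ssn(ssn, secret="demo-key"):
    """Deterministically replace a Social Security number with another
    value in the same NNN-NN-NNNN format.

    Hashing makes the mapping consistent: the same input always yields
    the same masked value, so the number stays identical across every
    table that stores it. Illustrative sketch, not production crypto.
    """
    digits = ssn.replace("-", "")
    digest = hashlib.sha256((secret + digits).encode()).hexdigest()
    masked = str(int(digest, 16))[-9:].zfill(9)
    return "{}-{}-{}".format(masked[:3], masked[3:5], masked[5:])

first = mask_ssn("123-45-6789")
second = mask_ssn("123-45-6789")  # identical to `first`, by construction
```

Because the mapping is keyed and deterministic, referential consistency across tables comes for free; rotating the secret re-masks the entire dataset.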
The Importance of Prepackaged Masking Logic and Predefined Masking Rules to Protect Sensitive Data

Prepackaged masking logic for protecting sensitive data should define relationships between data objects, identify the locations of sensitive data, and suggest the best methods to protect it. A best-practice solution for creating test and development environments must also provide a comprehensive set of masking algorithms to protect sensitive data out of the box. Solutions that provide predefined rules for substituting sensitive information such as names, addresses, and credit card numbers give organizations more choices and a head start in implementing strategies that protect sensitive data. It should also be possible to customize predefined rules to accommodate unique business requirements.
How Do You Reduce the Size of Nonproduction Databases Without Compromising Functional Integrity?

For the reasons discussed above, using data selection criteria to create a lean nonproduction database without compromising functional integrity requires a deep understanding of the underlying application data model and its defining business logic. For example, creating a test database with all sales order entries created over the last year requires that the provisioning solution identify all the tables in which the sales order data and its dependencies are stored, as well as the relationships (both database constraints and application-level relationships) between those tables. If any of these relationships are not preserved, application integrity is compromised, calling into question the validity of the entire test and development effort.
The Importance of Using Prepackaged Business Logic and Rules for Creating Lean Nonproduction Systems

A best-in-class test and development provisioning solution should provide out-of-the-box prepackaged mechanisms that allow a user to copy a subset of production by the following commonly used criteria:
• Create a database using a subset of production data containing any high-volume module by date. For example, only GL data created over the last two years should be available in the new database.
• Create a database with a subset of production data containing all transactions across several modules by date.
• Create a database with a subset of production data selected by organization, business unit, or geographic location.

Prepackaged business rules and logic define how the application stores data and the relationships between complex data structures. It should be possible to customize the prepackaged rules to accommodate an organization’s unique needs.
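The referential-integrity requirement behind these criteria can be shown with a tiny in-memory sketch: after the driving criterion selects rows, child rows are pulled in and parent rows are retained so that every foreign key in the subset still resolves. All table and row values here are hypothetical:

```python
# Tiny in-memory stand-ins for application tables (all names and rows
# invented). The driving criterion selects orders; child and parent rows
# are then walked so every foreign key in the subset still resolves.
customers = {1: "Acme", 2: "Globex"}
orders = [  # (order_id, customer_id, year_created)
    (10, 1, 2009),
    (11, 2, 2007),
    (12, 1, 2009),
]
order_lines = [  # (line_id, order_id)
    (100, 10), (101, 10), (102, 11), (103, 12),
]

# 1. Driving criterion: orders created in 2009.
keep_orders = {oid for (oid, _cust, year) in orders if year == 2009}

# 2. Walk child tables: keep every line belonging to a kept order.
keep_lines = [(lid, oid) for (lid, oid) in order_lines if oid in keep_orders]

# 3. Walk parent tables: keep every customer a kept order references.
keep_customers = {cust for (oid, cust, _year) in orders if oid in keep_orders}
```

A real solution repeats steps 2 and 3 transitively across the whole schema, including application-level relationships the database does not declare; that is precisely what the prepackaged business logic encodes.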
The Ability to Handle Custom Applications

Organizations develop custom applications in-house to meet specific business needs and requirements. These custom applications need to be secured and copied to nonproduction environments just like packaged applications. A best-practice solution for creating test and development environments must provide an easy-to-use framework that can model the data in these applications, create relationships, and define data privacy and subset selection rules. The provisioning solution must be able to mine the data model, set up relationships, and provide intuitive interfaces for creating additional relationships. The solution should also provide a comprehensive set of data masking rules to protect sensitive data in the custom application.
Testing and Validation

Simulating Data Privacy Policies

Simulating a data privacy policy allows you to examine its effects without actually applying the policy to the data. Additionally, running the simulation on a small sample of the data instead of the entire dataset enables you to quickly assess the effectiveness of your data privacy policy. Testing and validation of a data privacy policy should be an iterative process allowing multiple revisions to determine the best method.
Simulating Subset Policies

Simulating a subset policy allows you to estimate the savings in disk space without actually downsizing the data. Simulating multiple policies enables you to find the policy that best matches the requirements of the requesting team while keeping the footprint of the test and development environment to a minimum.
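One way to simulate a subset policy without moving any data is to estimate the copy size from catalog statistics, such as row counts and average row widths. All figures and table names below are hypothetical:

```python
# Estimating a subset policy's disk savings from catalog statistics
# (row counts and average row widths) instead of actually copying data.
# All figures and names are hypothetical.
tables = {
    # table: (row_count, avg_row_bytes, fraction matching the criterion)
    "GL_JE_LINES": (200_000_000, 180, 0.25),
    "AR_TRX": (50_000_000, 240, 0.30),
}

def estimated_gb(rows, row_bytes, fraction=1.0):
    """Approximate on-disk size of the matching rows, in GB."""
    return rows * row_bytes * fraction / 1024**3

full_gb = sum(estimated_gb(r, b) for r, b, _f in tables.values())
subset_gb = sum(estimated_gb(r, b, f) for r, b, f in tables.values())
savings_pct = 100 * (1 - subset_gb / full_gb)
```

Running the same estimate for several candidate policies makes it cheap to compare them before committing to an actual copy.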
Benchmarking

The testing and validation step should ensure that the provisioning process is efficient. Once the policies have been executed and the results meet established requirements, the process can be measured and benchmarked to provide predictability and repeatability.
Validating the Test and Development Copy

Once the test and development copy is ready, the data should be tested by the requesting project team for validation and sign-off. The requesting team should verify that:
• The data in the copy is sufficient and addresses requirements
• Sensitive data is adequately protected
Communicating Your Execution Plan

Communication describing your plan to provision test and development databases should include:
1. A description of each test and development database, including:
   a. The data contained in the database
   b. The information that is secured
   c. The interval at which data is refreshed
   d. If incremental data is needed from production, the intervals at which this data is to be added
2. The process for making new requests
3. The lead time for fulfilling one-off requests
Auditing and Security

Just as production data has to be classified, secured, and associated with a retention policy, so must test and development data be appropriately cataloged, secured, and destroyed when its purpose is served. Auditors, who verify compliance with data privacy regulations (see Appendix A), should be able to access reports that demonstrate how and when sensitive information was secured. Only authorized technical users (e.g., DBAs) should be able to use solutions that directly provision or update databases. An audit trail should be kept of all actions performed by these users. A best-in-class solution should support segregation of duties between those who can create policies and those who can execute policies.

Deleting test and development copies when the data is no longer needed reduces the risk of sensitive information being compromised, saves resources in terms of disk space and time spent on backups, and ensures that information in test and development is managed in accordance with the organization’s information management policy.

Best practices dictate that any solution for creating test and development systems should capture the following information for every test and development system that is provisioned with data from production:
• The person who created the copy and why it was created
• The date and time it was created
• The sensitive data that was protected
• The criteria used to make a subset of the data
• The location of the copy
• The users who have access to the data
• The person who deleted a test and development copy and the date it was deleted
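The audit fields listed above map naturally onto a simple record type. This is a minimal sketch; the field names and example values are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class ProvisioningAuditRecord:
    """One audit entry per provisioned test/development copy, covering
    the fields recommended above. Field names are illustrative."""
    created_by: str
    reason: str
    created_at: datetime
    masked_data: List[str]
    subset_criteria: str
    location: str
    authorized_users: List[str]
    deleted_by: Optional[str] = None
    deleted_at: Optional[datetime] = None

audit_log = []
audit_log.append(ProvisioningAuditRecord(
    created_by="dba_alice",
    reason="QA upgrade test",
    created_at=datetime(2009, 6, 1, 9, 0),
    masked_data=["ssn", "credit_card_number"],
    subset_criteria="GL transactions, last 12 months",
    location="testdb01",
    authorized_users=["qa_team"],
))
```

Keeping the deletion fields on the same record ties each copy's destruction back to its creation, which is what a retention audit needs to demonstrate.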
Following a Time-Tested Methodology

No organization wants its implementation, no matter how customized, to be on the “bleeding edge” of experimentation. An organization needs to be sure that the vendor it works with has seen and addressed the types of challenges likely to arise during an implementation. Organizations should choose a vendor with proven experience in developing implementation methodologies. The methodologies should address complex environments, enable a reduction of data size and protection of sensitive data, meet the outlined business objectives, and have a track record of being successfully applied to many implementations.
Appendix A – Regulations and Industry Standards Protecting Sensitive Information

This appendix is not intended to be an exhaustive listing of all data privacy policies. Its goal is to give a brief overview of some of the most prominent data privacy and industry regulations that protect sensitive data.
PCI DSS
The Payment Card Industry Data Security Standard (PCI DSS) protects the privacy of personal credit card information. Information that needs to be protected includes card holder name, address, date of birth, Social Security number, credit card number, and PIN. PCI DSS is enforced by credit card issuers and applies to all organizations that hold credit card information.

Gramm-Leach-Bliley Act
The Gramm-Leach-Bliley Act of 1999 protects the privacy of personal financial information. Information that needs to be protected includes bank account number, credit card number, bank balance, name, address, date of birth, and Social Security number. Gramm-Leach-Bliley applies to banks, financial institutions, credit card companies, and other organizations that process personal financial information (e.g., property assessors and tax preparers) and is enforced by the FTC and state governments.

HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) protects the privacy of personal medical information, such as past or present health condition, payment for treatment, name, address, date of birth, and Social Security number. HIPAA applies to health plan companies, health care providers, and health care clearinghouses and is enforced by the Office for Civil Rights.

EU Privacy Directive of 1995
The EU Privacy Directive of 1995 protects personal private information such as name, address, birth date, credit card number, racial or ethnic origin, and political affiliation. The EU Privacy Directive of 1995 applies to all organizations doing business in the European Union.

FERPA
The Family Educational Rights and Privacy Act (FERPA) protects the privacy of student educational records, including student grades and transcripts. FERPA applies to all schools that receive funding from the Department of Education.

Privacy Act of 1974
The Privacy Act of 1974 protects personal private information such as name, address, birth date, education, and financial transactions. The Privacy Act of 1974 applies to information collected by and maintained in systems of records of all Federal agencies.
Appendix B – Lists of Capabilities That a Best-in-Class Test and Development Solution Should Provide

Capabilities for Creating Fully Functional Nonproduction Databases from Subsets of Production Data
Data analysis: The ability to analyze an application database and report on the modules and tables that take up the most space. This gives the basis team an overview of the policies that can be used to select subsets of production data and that will have the greatest impact in reducing the size of the nonproduction client.

Several intuitive methods to create fully functional copies of production data: (1) The ability to specify a time slice (e.g., create a copy with only the transactional data created over the last six months). (2) The ability to specify a functional slice (e.g., create a reduced system/client copy with the data from one or more company codes, business units, etc.).

Predefined business logic for popular ERP, CRM, and SRM applications: Prepackaged business logic and rules ensure that the underlying application data model (e.g., the SAP data model) is already defined. Because all the high-volume modules and tables are identified, all the user needs to do is specify the subset selection criteria.

Support for custom applications and enhancements: The provisioning solution should provide a robust framework that supports not only custom application enhancements but also custom rules to address unique business requirements.

Data shuttling capability: The ability to bring information from production into the nonproduction client.

Performance: Some client copy requests can be satisfied by a small subset of the production database. The data provisioning solution should provide methodologies to ensure that the copy is generated in a timely manner irrespective of the volume of data required in nonproduction.

Simulate subset policies: Because creating a subset copy can take an appreciable amount of time, it is important to be able to simulate a subset policy quickly, without actually downsizing the data, to get a report on the space saved.

Auditing: Detailed audit trails and reports should be kept that capture the person who executed the subset policy, the selection criteria, and policy creation and modification.

Authorization: Only privileged users should be authorized to create and execute a subset policy. Ideally, there should be a segregation of duties between those who can create subset policies and those who can execute them.
Capabilities for Creating Fully Functional Database Subsets with Masked Sensitive Data

Predefined accelerators for market-leading ERP, CRM, and HR applications: Accelerators ensure that sensitive information is protected out of the box.

Comprehensive methods to protect sensitive data: Masking methods available should include substitution, skewing, shuffling, randomization, nullifying, scrambling, encryption, and mathematical formulas. Prepackaged datasets for substituting names, addresses, and credit card numbers give organizations a head start in protecting sensitive data.

Support for custom applications and rules: The subset solution should provide a robust framework that supports not only custom applications but also custom rules to address unique business requirements.

Support for custom masking algorithms: The solution should support custom masking algorithms to accommodate the specific needs of an organization.

Simulation: Because securing an application database can take an appreciable amount of time, it is important to be able to simulate a data protection policy and compare the before and after values on a small sample of the data.

Auditing: Detailed audit trails should be kept that capture the person who executed the data protection policy, the policy criteria (i.e., the fields masked and the masking method), and policy creation and modification.

Security: Only privileged users should be able to create and execute a policy that protects sensitive data. Ideally, there should be a segregation of duties between those who can create such policies and those who can execute them.

Applications for which prebuilt accelerators are available: A best-in-class test and development solution should support several market-leading applications.
ABOUT INFORMATICA

Informatica enables organizations to operate more efficiently in today’s global information economy by empowering them to access, integrate, and trust all their information assets. As the independent data integration leader, Informatica has a proven track record of success helping the world’s leading companies leverage all their information assets to grow revenues, improve profitability, and increase customer loyalty.
Worldwide Headquarters, 100 Cardinal Way, Redwood City, CA 94063, USA phone: 650.385.5000 fax: 650.385.5500 toll-free in the US: 1.800.653.3871 www.informatica.com
Informatica Offices Around The Globe: Australia • Belgium • Canada • China • France • Germany • Japan • Korea • the Netherlands • Singapore • Switzerland • United Kingdom • USA © 2009 Informatica Corporation. All rights reserved. Printed in the U.S.A. Informatica, the Informatica logo, and The Data Integration Company are trademarks or registered trademarks of Informatica Corporation in the United States and in jurisdictions throughout the world. All other company and product names may be trade names or trademarks of their respective owners.
6992 (06/24/2009)