Using Reflection to Introduce Self-Tuning Technology into DBMSs

Patrick Martin, Wendy Powley
School of Computing, Queen's University
Kingston, Ontario, Canada K7L 3N6
{martin, wendy}@cs.queensu.ca

Darcy Benoit
Jodrey School of Computer Science, Acadia University
Wolfville, Nova Scotia, Canada B4P 2R6
[email protected]

Abstract

The increasing complexity of database management systems (DBMSs) and their workloads means that manually managing their performance has become a difficult and time-consuming task. Autonomic computing systems have emerged as a promising approach to dealing with this complexity. Current DBMSs have begun to move in the direction of autonomic computing with the introduction of parameters that can be dynamically adjusted. A logical next step is the introduction of self-tuning technology to diagnose performance problems and to select the dynamic parameters that must be adjusted. We introduce a method for automatically diagnosing performance problems in DBMSs and then describe how this method can be incorporated into current DBMSs using the concept of reflection. We demonstrate the feasibility of our approach with a proof-of-concept implementation for DB2 Universal Database.

1. Introduction

Database management systems (DBMSs) are a vital component of many mission-critical information systems and, as such, must provide high performance and high availability. These DBMSs are managed by expert Database Administrators (DBAs) who must be knowledgeable in areas such as capacity planning, physical database design, systems tuning and systems management. DBAs face increasingly difficult challenges brought about by the growing complexity of DBMSs, which stems from several sources:
• Increased emphasis on Quality of Service (QoS). A DBMS must provide service guarantees in order that the overall system can satisfy its end-to-end QoS requirements.
• Advances in database functionality, connectivity, availability, and heterogeneity. DBAs must grapple with complex decisions about hardware platforms, schema design, constraints and referential integrity, primary keys, indexes, materialized views, the allocation of tables to disks, and shared-nothing, shared-everything, or SMP-cluster topology.
• Ongoing maintenance. Once designed, databases require substantial human input to build, configure, test, tune, and operate.
• Burgeoning database size. Popular applications such as SAP typically create more than 20,000 tables and support thousands of users simultaneously.
• E-Service era. Web-based applications present DBMSs with a broader diversity of workloads in terms of type and intensity.

Autonomic computing systems have emerged as a promising approach to dealing with this complexity [7]. Autonomic computing systems are, among other things, self-tuning; that is, they have the ability to manage their own performance by automatically recognizing their current workload and environment and then reconfiguring their resource allocations accordingly. DBMS vendors have begun to move in the direction of self-tuning systems with the inclusion of features like dynamically adjustable configuration parameters and more sophisticated monitoring tools [1][9][14]. What is currently missing is an automated approach to bridging the gap between monitoring and parameter adjustment, that is, automated diagnosis of the possible sources of performance problems.

Our work makes two main contributions to the development of self-tuning DBMSs:
• We define a general approach to automatically diagnosing performance problems in a DBMS. In this model-based approach users define models of their system's resources and workload and provide a set of diagnosis rules. From this information we generate a diagnosis tree, which can be used to automatically locate potential sources of performance problems.
• We provide a method of implementing our approach with current DBMSs that support dynamically adjustable configuration parameters. Our method, which is based on principles from reflective programming [11] and uses standard DBMS features like triggers and user-defined functions, is applicable to any DBMS product with the appropriate features.

The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 describes our approach to diagnosis and presents experiments to demonstrate its effectiveness. Section 4 highlights our implementation of the approach and shows experimental results of a proof-of-concept implementation with DB2 Universal Database Version 8.1 (DB2 UDB) [8]. Section 5 presents our conclusions.

2. Related Work

DBMS vendors, as we indicated earlier, have accepted the concept of an autonomic DBMS as a viable approach to simplifying the task of managing their products and now include self-tuning features in several areas of their products. In query optimization, for example, self-tuning features include the automatic collection of new statistics and the construction of histograms [4][16], self-validation of the cardinality model [18], and dynamic adjustment of the query execution strategy. Examples of self-tuning features for configuration management include wizards to select indexes [10][13][14] and materialized views [1][15]. Examples of self-tuning features for maintenance and monitoring include tools, such as DB2 UDB's Health Center [8] and Oracle's Manager Console [14], that indicate potential performance problems and facilitate the collection and analysis of performance data, and tools that schedule necessary maintenance utilities. Previous work on automated tuning has identified important issues and features of the process. Chaudhuri and Weikum argue that the current approach to tuning resources in a DBMS is outdated and that a new DBMS architecture must be developed to support self-tuning approaches [5]. They advocate a RISC-style architecture in which interactions between components are strictly controlled so that tuning specific components has minimal side effects. Parekh et al. [17] propose a system based on control theory to monitor and dynamically adjust resource allocations.

Bigus et al. [3] propose a generic agent to support tuning systems where prior knowledge of system performance does not exist. The agent is a neural network that learns the system model and workload in order to determine appropriate control settings. Finally, Weikum et al. [20] identify several principles of self-tuning: the feedback control loop is an appropriate framework for self-tuning algorithms; mathematical models are a way to minimize the impact of oscillations within the tuning process; and maintaining statistics about workload properties and performance metrics is a key asset for prediction and tuning.

3. Approach to Automating Diagnosis

We now describe our approach to automating the diagnosis of performance problems in DBMSs. In this model-based approach, users define models of their system’s resources and workload and provide a set of diagnosis rules. From this information we generate a diagnosis tree, which is used to automatically locate potential sources of performance problems.

3.1. Diagnostic Method

An overview of our diagnostic method is shown in Figure 1. The ovals represent artefacts that are created once for a specific DBMS and workload and then used for each diagnosis. From the point of view of diagnosis, the DBMS execution can be viewed as a cycle of four steps, shown as rectangles in Figure 1: execute the DBMS, inspect the performance, diagnose the performance problems and adapt the DBMS's resource settings. Our diagnostic process uses several kinds of information that must be initially extracted from various sources. Information about the tuneable resources and the relationships among these resources is represented in a resource model. We identify two types of resources. Physical resources are hardware-oriented resources whose allocations have a direct effect on the physical hardware. Examples of physical resources are main memory and disk space. Allocations of physical resources are limited by the hardware the system makes available to the DBMS. Logical resources are provided by the DBMS. An example of a logical resource is the number of processes allocated to write data to disk. Logical resource allocations are limited by the DBMS. Logical resource allocations do have an effect on physical resources, as the DBMS must use memory and CPU to maintain these processes.

Figure 1: Overview of Diagnostic Method. [The figure shows the artefacts (resource model, diagnosis rules, workload model, diagnosis tree), derived from DBMS expertise and documentation and from the DBMS workload, feeding the cycle of execution, inspection, diagnosis and adaptation.]

In the resource model, a resource in the DBMS has the following attributes:
• Impact – The high, medium or low impact that this resource has on DBMS performance.
• Allowable range – The allowable (legal) range of values that the resource may be assigned.
• Default value – The default value assigned to the resource by the DBMS.
• Indicator values – Indicators are observed or calculated values that can be used to determine how the system is performing with respect to the resource in question. For example, the hit rate is an indicator of the performance of the buffer pool.
• Setting values – A list of setting values associated with the resource. A single resource may be associated with several tuning parameters. A setting value is a tuning parameter and its current value.

Information about the relationships among resources is also critical to tuning the DBMS, since adjusting the value of one resource may impact those resources that are closely related to it. The resource model represents resources as nodes in a directed graph; an edge from resource Ri to resource Rj implies that a change to the value of Ri should lead to a change in the value of Rj. Figure 2 shows a subgraph of the resource model constructed for the example presented later in the paper. We also see in Figure 2 that there may be cycles in the resource model graph. For example, changing the number of I/O Cleaners forces us to consider modifying the Changed Pages Threshold, and vice versa. We remove cycles during the actual tuning by designating a particular root, or starting point, in the graph for each instance of the tuning process, which ensures that tuning ends after a finite number of steps [2].

The workload model describes how the DBMS performs for a given workload. There is an element in the workload model for each resource that provides thresholds of acceptable performance for that resource's indicators. For example, for an OLTP workload and the I/O cleaner resource, a reasonable threshold is a lower bound of 95% on asynchronous writes. In other words, at least 95% of all disk writes should be asynchronous (performed by the I/O cleaners). Acceptable threshold values are determined by consulting the DBMS documentation and by observing similar workloads where the performance is known to be good.
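To make the resource model concrete, the following is a minimal Python sketch of the structure just described: resources as attribute-carrying nodes in a directed graph, with a cycle-safe traversal that mirrors the root-designation rule above. It is an illustration under our own assumptions (the names, edges and values are ours), not code from the tool.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    impact: str                 # "high", "medium" or "low"
    allowable_range: tuple      # (min, max) legal values
    default: int                # default value assigned by the DBMS
    indicators: dict = field(default_factory=dict)  # e.g. {"hit_rate": 0.84}
    settings: dict = field(default_factory=dict)    # tuning parameter -> value

# Directed edges: a change to the source resource should make us reconsider
# the target resource. Cycles are allowed, as between the I/O cleaners and
# the changed pages threshold in Figure 2.
edges = {
    "io_cleaners": ["changed_pages_threshold"],
    "changed_pages_threshold": ["io_cleaners"],
    "buffer_pool_size": ["io_cleaners", "changed_pages_threshold"],
}

def related_resources(root):
    """Collect resources reachable from a designated root, visiting each
    node at most once so that cycles cannot make tuning run forever [2]."""
    seen, order, stack = set(), [], [root]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            order.append(node)
            stack.extend(edges.get(node, []))
    return order

print(related_resources("buffer_pool_size"))
# ['buffer_pool_size', 'changed_pages_threshold', 'io_cleaners']
```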

Figure 2: Example Resource Model. [The subgraph contains the number of I/O cleaners, the changed pages threshold, the database heap size, the catalog cache size, the buffer pool size and the log buffer size, connected by directed edges.]

The diagnosis rules are extracted from expert knowledge and DBMS documentation. They embody the "rules of thumb" and common steps followed by DBAs when they diagnose performance problems. The diagnosis rules are used to organize the diagnosis tree, which is the core of the diagnosis process. The diagnosis tree combines the information from the workload model and the diagnosis rules. Specifically, the diagnosis tree is an ordering of the elements of the workload model based on the diagnosis rules. Starting at the root node of the tree, questions are posed about the performance of the DBMS. Depending on the values of particular performance indicators within the DBMS, a decision is made to traverse either the left or the right branch of a node. This continues until a leaf node in the diagnosis tree is reached. A leaf node contains a list of one or more resources that should be considered for tuning. Once a leaf node is reached and a list of resources has been acquired, we use the resource model to determine other related resources that may also have to be tuned. The diagnosis tree for an OLTP workload running on DB2 UDB is shown in Figure 3. Diagnosis results for this example will be presented later in the paper. Non-leaf nodes in the tree are decision nodes and are labeled Di, where i is a unique number for each node. Leaf nodes are tuning nodes and are labeled Ti. Details of the construction of this diagnosis tree are given elsewhere [2].
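The sketch below illustrates one plausible encoding of such a tree and its traversal; it covers only the D3/D4 fragment of Figure 3 and is our own rendering, with the 95% thresholds taken from the workload model, rather than the tool's actual code.

```python
class Decision:                      # internal node: test an indicator
    def __init__(self, test, yes, no):
        self.test, self.yes, self.no = test, yes, no

class Tuning:                        # leaf node: resources to consider tuning
    def __init__(self, resources):
        self.resources = resources

def diagnose(node, metrics):
    """Walk from the root until a tuning (leaf) node is reached."""
    while isinstance(node, Decision):
        node = node.yes if node.test(metrics) else node.no
    return node.resources

# D3: buffer pool hit rate above the 95% threshold? If not, tune the
# buffer pools (T9). D4: asynchronous writes above 95%? If not, adjust
# the number of I/O cleaners (T8). The "tuned" leaf stands in for the
# rest of the tree.
tree = Decision(lambda m: m["hit_rate"] > 0.95,
                yes=Decision(lambda m: m["async_writes"] > 0.95,
                             yes=Tuning(["tuned"]),            # T7 (simplified)
                             no=Tuning(["io_cleaners"])),      # T8
                no=Tuning(["buffer_pool_size"]))               # T9

print(diagnose(tree, {"hit_rate": 0.84, "async_writes": 0.99}))
# ['buffer_pool_size'] -- the resource model then adds related resources
```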

3.2. Example Diagnoses

We first implemented our approach as a stand-alone tool that can be used by a DBA to help diagnose performance problems. This first implementation does not exploit dynamically adjustable parameters, and the cycle of steps shown in Figure 1 involved manual operations. Our test environment consists of an IBM server running the Windows NT Server operating system and DB2 UDB Version 8.1 as the DBMS. The machine is equipped with two 1 GHz Pentium III processors and 2 GB of RAM. A 10 GB database is spread across 20 disks. The database workload is the DB2 Transaction Workload (DTW), which resembles the Transaction Processing Performance Council OLTP benchmark, TPC-C [19]. This workload simulates a typical OLTP business workload. Our results are in no way representative of typical TPC-C results for the given hardware and thus should not be interpreted as such.

[Figure 3 shows the diagnosis tree. Its decision nodes include D2 (locklist in use ~= locklist size? (90%)), D3 (buffer pool hit rate > 95%?), D4 (asynchronous writes > 95%?), D6 (log triggers low? (1000)), D7 (sort heap overflows > 10%?) and D8 (% CPT triggers > % victim writes?). Its tuning nodes include T1 (increase locklist size), T2 (maxlocks, locklist), T4 (deadlock check time, locktimeout), T5 (increase sortheap or sheapthresh), T6 (increase CPT), T7 (tuned), T8 (number of I/O cleaners) and T9 (tune the buffer pools).]

Figure 3: The diagnosis tree.

A set of typical results from a tuning session with our diagnosis algorithm is shown in Table 1. The resources considered during the tuning session are listed in the leftmost column of the table. The column labelled "Initial" shows the initial settings for the resources. These are the resources' worst possible settings according to their allowable ranges, so the initial throughput is essentially zero. Each of the remaining columns in the table corresponds to a step in the tuning session. Each step consists of several actions. We first run the workload for a 10-minute warm-up period. We then monitor the system for 5 minutes to collect performance data. We next run the diagnosis algorithm on the collected data to determine the resource, or set of resources, that should be tuned. We then adjust the parameters and re-run the workload for a 20-minute period to measure the resulting performance after tuning. The throughput is measured over the last 10 minutes, allowing for a 10-minute stabilization period. A number in the cell for a step/resource pair indicates that the diagnosis algorithm suggests that the resource be adjusted, and the number is the new setting for the resource. The final row shows the resulting performance (in transactions per second) under the new configuration.

In Step 1, our algorithm examines the monitoring data collected under the initial configuration and finds that the number of lock escalations is high and the percentage of the lock list in use is high. It suggests increasing the lock list size and adjusting the lock timeout parameter, since the latter is related to the locklist size in the resource model. Steps 2 to 7 of the tuning session continue to find that the main problem is the number of lock escalations and suggest tuning these resources further. After the locklist size reaches 144 (4K pages) and the lock timeout period is substantially reduced, the locking mechanisms appear to be functioning well and lock escalations no longer occur. In Step 8, the diagnosis algorithm determines that the hit rate is lower than our 95% threshold, so the buffer pool is adjusted. Since the buffer pool, changed pages threshold and asynchronous I/O cleaners are related resources in the resource model (see Figure 2), the algorithm also examines the key data for I/O resources (the percentage of asynchronous writes, the percentage of victim writes and the percentage of triggers due to the changed pages threshold value) and determines that the number of I/O cleaners and the changed pages threshold value should be increased. The throughput increases substantially as a result. In Step 9 of the tuning session the algorithm suggests changes to the deadlock check time based on the monitoring data, which shows that the average lock wait time is above the 1000 ms threshold. In the final two steps, the value of the changed pages threshold is increased based on a comparison of the percentage of victim writes and the number of times the changed pages threshold was reached. At this point, the diagnosis algorithm decides that the system is tuned and the tuning session is finished.
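Procedurally, every step of the session repeats the same loop. The schematic below captures only that control flow under stated assumptions: the helper functions are stubs of our own standing in for the real workload driver, monitor and diagnosis code.

```python
def run_workload(minutes):           # stub: drive the DTW workload
    pass

snapshots = iter([{"hit_rate": 0.84}, {"hit_rate": 0.96}])

def monitor(minutes):                # stub: 5-minute snapshot window
    return next(snapshots)

def diagnose(metrics):               # stub: the diagnosis-tree walk
    return [] if metrics["hit_rate"] > 0.95 else ["buffer_pool_size"]

def adjust(resources):               # stub: compute new parameter settings
    return {r: "increased" for r in resources}

def tuning_session():
    history = []                     # per-step settings, as in Table 1
    while True:
        run_workload(10)             # 10-minute warm-up
        metrics = monitor(5)         # 5-minute data collection
        resources = diagnose(metrics)
        if not resources:            # diagnosis reports the system as tuned
            break
        history.append(adjust(resources))
        run_workload(20)             # re-run; throughput from the last 10 min
    return history

print(tuning_session())              # [{'buffer_pool_size': 'increased'}]
```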

Resource                                Initial  Step 1  Step 2  Step 3  Step 4  Step 5  Step 6  Step 7  Step 8  Step 9  Step 10  Step 11
Deadlock Check Time (ms)                60000    -       -       -       -       -       -       -       -       30000   -        -
Locklist Length (4K pages)              4        24      44      64      84      104     124     144     -       -       -        -
I/O Cleaners                            0        -       -       -       -       -       -       -       10      -       -        -
Changed Pages Threshold (% of BP size)  5        -       -       -       -       -       -       -       25      -       45       65
Lock Timeout (ms)                       30000    15000   7500    3750    1875    937     468     -       -       -       -        -
Buffer Pool Size (4K pages)             25000    -       -       -       -       -       -       -       50000   -       -        -
Transactions Per Second                 0        .03     .1      .8      .9      .9      9       14      50      55      56       57

Table 1: Example Diagnosis (a dash indicates that the parameter was not adjusted at that step)

We see that, as a result of the tuning actions suggested by the diagnosis algorithm, the average performance of the system improves from 0 to 57 transactions per second. As a comparison, the same system configured with the resource settings suggested by the DB2 Configuration Wizard [8] has an average performance of 53 transactions per second.

We note that the same diagnosis is made more than once during the tuning session, for example in Steps 1 through 7. This is due to the naive tuning algorithms used in the experiments: we simply adjust each resource by a pre-set amount each time. We expect that intelligent tuning algorithms based on mathematical models of the resources, for example the buffer pool tuning algorithm described by Martin et al. [12], can be defined for each resource to determine the optimal value for each resource setting. We also observe that there is a possibility of oscillation in the system, that is, the algorithm could oscillate between different parameter settings. We can detect this problem by maintaining a history of the actions taken during a tuning session, much like the data in Table 1, and then take corrective action when the same set of parameter settings appears again.
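A minimal sketch of that history check, assuming each step's settings are recorded as a parameter-to-value dictionary (the function and data below are illustrative):

```python
def oscillating(history, new_settings):
    """Return True if this exact configuration was already tried."""
    seen = {tuple(sorted(step.items())) for step in history}
    return tuple(sorted(new_settings.items())) in seen

history = [{"locklist": 24, "locktimeout": 15000},
           {"locklist": 44, "locktimeout": 7500}]
print(oscillating(history, {"locklist": 24, "locktimeout": 15000}))  # True
print(oscillating(history, {"locklist": 64, "locktimeout": 3750}))   # False
```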

4. Implementation

We now describe a method of implementing our diagnostic method with DB2 UDB Version 8.1, which supports a limited number of dynamically adjustable configuration parameters. We explain our method and then provide a proof of concept.

4.1. Reflective DBMS

Our approach to automating DBMS tuning is based on principles of reflective programming. A reflective system is one that can inspect and adapt its internal behaviour in response to changing conditions. A reflective system maintains a model of self-representation, and changes to the self-representation are automatically reflected in the underlying system. Reflection enables inspection and adaptation of systems at run-time [11]. In our approach, the self-representation of the system embodies the current configuration settings and the statistics collected about system performance. The configurations are stored as a set of relations: one for the database configuration parameters (those specific to a particular database) and another for the DBMS (system-wide) parameters. The performance statistics are maintained in a data warehouse. Figure 4 illustrates the framework of our reflective DBMS implementation.

Figure 4: A Reflective DBMS Architecture

Although our implementation uses DB2 UDB, the architecture is DBMS-independent and can be implemented using any current DBMS that provides triggers, user-defined functions and a monitoring API. The system works as a feedback loop. A monitoring tool periodically takes snapshots of the DBMS performance and stores the collected data in the performance data warehouse. The data collected includes information about buffer pool usage, I/O activity, locking, sorting, etc. When a new set of performance data is inserted into the warehouse, a database trigger is fired that calls the diagnosis function. The diagnosis code implements the diagnosis algorithm as a user-defined function. This algorithm examines the current performance data and compares it to the current configuration settings and pre-set thresholds to determine whether or not a change in configuration is warranted. If one or more configuration parameters should be altered, a change is made to the self-representation. A change to the self-representation triggers a change to the underlying DBMS configuration parameters. This process continues cyclically, forming an automatic feedback loop of self-inspection, self-diagnosis and self-adaptation.

The monitor runs periodically, collecting the data required by the diagnosis algorithm. The monitoring frequency and length depend upon the nature of the workload and how much variation is expected. Our custom-built monitoring tool uses the DB2 UDB snapshot API to collect the data shown in Table 2 and to insert it into the performance data warehouse. A trigger defined on one of the warehouse tables fires after a new row is inserted into that table. This trigger calls the user-defined function that contains the diagnosis logic. Monitoring has an impact on application performance, so there is a trade-off between the overhead associated with the diagnosis procedure and the potential performance decrease due to undiagnosed problems. The exact nature of this trade-off is beyond the scope of the paper.

Resource   Data Collected
I/O        Hit rate, number of physical reads, number of logical reads, physical read time, number of asynchronous reads and writes, number of dirty page steals, number of log triggers, number of changed pages threshold triggers
Sorting    Sort overflows, total number of sorts
Locking    Number of lock waits, lock wait time, amount of lock list in use, deadlocks, and number of lock escalations

Table 2: Monitored Data

The diagnosis algorithm implements the logic outlined in Figure 3 (the diagnosis tree). As each node in the tree is visited, the monitoring data is examined and a decision is made as to which direction to follow. Each leaf node is a tuning node that indicates which resource (or set of resources) should be tuned. Consider a case where the monitoring data indicates that there were zero lock escalations and a buffer pool hit rate of 84% during the monitoring period. The algorithm first checks for lock escalations, finds that there are none, and so traverses the tree to the right. Next the buffer pool hit rate is examined and found to be below our threshold of 95%, indicating that the size of the buffer pool should be altered. The algorithm obtains the current size of the buffer pool from the DBMS self-representation, and a tuning algorithm determines an appropriate size for the buffer pool based on cost analysis. The self-representation is updated with the new buffer pool value. At this point, the actual size of the buffer pool has not yet been changed.

The two steps outlined thus far, system monitoring and problem diagnosis, constitute the self-inspection phase of the reflective DBMS. In a reflective system, changes to the self-representation are automatically reflected in changes to the underlying system itself. We implement this self-adaptation phase with a trigger defined on each of the attributes in the self-representation tables. Whenever one of these values is updated, a user-defined function is invoked to update the value of the corresponding tuning parameter in the database. To continue the example from above, when the value for the buffer pool is changed in the self-representation, the trigger defined on this attribute is fired and the value for the buffer pool size parameter is changed within the DBMS.
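The whole chain can be pictured with a small in-memory analogue, sketched below. Only the control flow is meaningful: in the prototype the two hooks are DB2 triggers firing user-defined functions over real tables, whereas here the class and method names are our own, and the 10,000-page increment anticipates the validation experiment of Section 4.2.

```python
class ReflectiveDBMS:
    def __init__(self):
        self.config = {"buffer_pool_size": 10000}  # the "real" DBMS parameter
        self.self_rep = dict(self.config)          # self-representation table
        self.warehouse = []                        # performance data warehouse

    def insert_snapshot(self, row):
        """The monitor inserts a row, which 'fires' the diagnosis hook."""
        self.warehouse.append(row)
        self.on_insert_diagnose(row)

    def on_insert_diagnose(self, row):
        """Diagnosis UDF: compare indicators to thresholds and update the
        self-representation if a configuration change is warranted."""
        if row["hit_rate"] < 0.95:
            new_size = self.self_rep["buffer_pool_size"] + 10000
            self.update_self_rep("buffer_pool_size", new_size)

    def update_self_rep(self, param, value):
        """A change to the self-representation 'fires' the adaptation hook."""
        self.self_rep[param] = value
        self.on_update_adapt(param, value)

    def on_update_adapt(self, param, value):
        """Adaptation UDF: reflect the change into the underlying DBMS."""
        self.config[param] = value

db = ReflectiveDBMS()
db.insert_snapshot({"hit_rate": 0.84})
print(db.config["buffer_pool_size"])               # 20000
```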

4.2. Implementation Validation

Of the parameters used in our diagnosis tree, only the buffer pool size, the deadlock check time and the size of the lock list can be changed dynamically in DB2 UDB Version 8.1 without disconnecting applications currently connected to the database. This restriction prevents complete validation of our entire diagnosis tree at this time. We can, however, provide a proof-of-concept with a restricted example. To test our implementation, we show a simplified example of dynamic tuning involving only the buffer pool. The buffer pool is initially set to 10,000 4K pages and incremented by 10,000 pages at each iteration. All other parameters are set to reasonable values so that the diagnosis algorithm only ever suggests tuning the buffer pool. In this example, we run the DTW workload until the diagnosis algorithm deems the system to be tuned. The system is monitored (for 5 minutes) every 10 minutes. The insertion of monitoring data into the warehouse triggers the diagnosis algorithm, which in turn triggers a change to the buffer pool size. The experiment was run a total of 10 times, and each time the diagnosis algorithm suggested a total of 6 changes to the buffer pool. Averages taken from the same time period (that is, while the buffer pool is 10K, 20K, 30K, 40K, 50K and 60K pages) in each of the 10 runs are reported for throughput (TPS), hit rate and the percentage of physical reads in Table 3. The results show that as the buffer pool size increases, throughput improves, the hit rate increases and the percentage of physical reads decreases. The system is determined to be tuned once the hit rate reaches our threshold of 95%.

5. Summary

Self-tuning database technology is a promising approach to dealing with the growing complexity and costs associated with managing today's DBMSs. Vendors have begun to move towards self-tuning systems with the inclusion of features like dynamically adjustable configuration parameters and more sophisticated monitoring tools. What is currently missing is an automated approach to bridging the gap between monitoring and parameter adjustment, that is, diagnosis of the possible sources of performance problems. The first main contribution of this paper is a general approach to automatically diagnosing performance problems in a DBMS. Users define models of their system's resources and workload and provide a set of diagnosis rules. From this information we generate a diagnosis tree, which can be used to automatically locate potential sources of performance problems.

Buffer Pool Size (4K pages)   Transactions Per Second (TPS)   Hit Rate (%)   Physical Reads (%)
10K                           29.8                            86.2           13.8
20K                           34.9                            89.3           10.7
30K                           39.8                            91.2           8.8
40K                           46.2                            92.4           7.6
50K                           48.2                            93.3           6.6
60K                           50.1                            95.1           5.8

Table 3: Automatic Diagnosis and Tuning of Buffer Pool

We demonstrate the effectiveness of our approach with an experiment that uses a tool based on the approach to tune an initially poorly configured system. The diagnosis tool suggests a series of adjustments that improves system throughput from essentially 0 to a respectable rate. We believe that, with further work, our approach can be the basis for an effective addition to the self-tuning capabilities of DBMSs. We expect that the diagnosis rules can be relatively generic, with perhaps different sets of rules for major classes of system configurations and workloads. These sets can be defined once and then tailored for specific situations as required. The resource and workload models, on the other hand, must be specified for each DBMS-workload instance. We can define generic templates for major classes of configurations and workloads and then provide a convenient means for users to modify and extend the templates to suit their needs. We can also provide a tool that, with guidance from the user, can generate the diagnosis tree from the diagnosis rules and the resource model.

Finally, we used a very naïve tuning method in the experiments presented in the paper. Whenever a resource had to be tuned, we simply adjusted the setting by a predetermined absolute value. This tuning method led to situations where one resource was diagnosed and adjusted for several consecutive tuning steps. A better method is to develop more intelligent tuning methods for each resource that determine the amount a resource needs to be adjusted based on predictive models [12][20].

The second main contribution of this paper is a method of implementing our approach with current DBMSs that support dynamically adjustable configuration parameters. Our method is based on principles from reflective programming and uses standard DBMS features like triggers. We describe a prototype implementation of our method with DB2 UDB and demonstrate the viability of the method with experiments that dynamically tune a small set of resources. Our implementation can be easily extended as vendors increase the number of dynamically adjustable parameters available. Adding another parameter involves adding the appropriate data to the configuration tables and then modifying the user-defined function to include the diagnostic logic for that parameter. We can use code-generation techniques to automatically create this function from the diagnosis tree.
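As a thumbnail of that code-generation idea (our illustration, not the prototype's generator), the sketch below emits a diagnosis function as nested if/else statements from a tree whose decision nodes carry their indicator tests as source text:

```python
def emit(node, depth=1):
    """Recursively turn a diagnosis (sub)tree into Python source."""
    pad = "    " * depth
    if "resources" in node:                         # leaf: a tuning node
        return f"{pad}return {node['resources']!r}\n"
    return (f"{pad}if {node['test']}:\n" + emit(node["yes"], depth + 1)
            + f"{pad}else:\n" + emit(node["no"], depth + 1))

# A two-node fragment of the tree; tests reference the metrics dict `m`.
tree = {"test": "m['hit_rate'] > 0.95",
        "yes": {"resources": ["tuned"]},
        "no": {"resources": ["buffer_pool_size"]}}

src = "def diagnose(m):\n" + emit(tree)
exec(src)                                           # defines diagnose(m)
print(diagnose({"hit_rate": 0.84}))                 # ['buffer_pool_size']
```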

Acknowledgements

We thank IBM Canada, the Natural Sciences and Engineering Research Council of Canada (NSERC) and Communications and Information Technology Ontario (CITO) for their support of this research.

References

[1] S. Agrawal, S. Chaudhuri, and V. Narasayya. Automated Selection of Materialized Views and Indexes for SQL Databases. Proc. of 27th VLDB Conference, pp. 20-31, Rome, Italy, 2001.
[2] D. Benoit. Automatic Diagnosis of DBMS Performance Problems. Ph.D. thesis, School of Computing, Queen's University, 2003.
[3] J. Bigus, J. Hellerstein, T. Jayram, and M. Squillante. AutoTune: A Generic Agent for Automated Performance Tuning. Practical Application of Intelligent Agents and Multi-Agent Technology, 2000.
[4] S. Chaudhuri and V. Narasayya. Automating Statistics Management for Query Optimizers. Proc. of 16th International Conference on Data Engineering, San Diego, USA, 2000.
[5] S. Chaudhuri and G. Weikum. Rethinking Database System Architecture: Towards a Self-Tuning RISC-Style Database System. Proc. of 26th International Conference on Very Large Databases (VLDB), pp. 1-10, Cairo, Egypt, 2000.
[6] A. Fox and D. Patterson. Self-Repairing Computers. Scientific American, June 2003. http://www.sciam.com.
[7] A.G. Ganek and T.A. Corbi. The Dawning of the Autonomic Computing Era. IBM Systems Journal 42(1), March 2003.
[8] IBM Corp. DB2 Universal Database Version 8.1 Administration Guide: Performance, 2003.
[9] S. Lightstone, B. Schiefer, D. Zilio, and J. Kleewein. Autonomic Computing for Relational Databases: The Ten-Year Vision. Proc. of Workshop on Autonomic Computing Principles and Architectures, Banff, Alberta, August 2003.
[10] G. Lohman, G. Valentin, D. Zilio, M. Zuliani, and A. Skelly. DB2 Advisor: An Optimizer Smart Enough to Recommend Its Own Indexes. Proc. of 16th IEEE Conference on Data Engineering, San Diego, CA, 2000.
[11] P. Maes. Computational Reflection. The Knowledge Engineering Review, pp. 1-19, Fall 1988.
[12] P. Martin, M. Zheng, H. Li, K. Romanufa, and W. Powley. Dynamic Reconfiguration: Dynamically Tuning Multiple Buffer Pools. Proc. of the International Conference on Database and Expert Systems Applications (DEXA 2000), pp. 92-101, September 2000.
[13] Microsoft Corp. Microsoft SQL Server 2000 Documentation, 2002.
[14] Oracle Corp. Oracle 9i Manageability Features. An Oracle White Paper, September 2001. http://www.oracle.com/ip/deploy/database/oracle9i/collateral/ma_bwp10.pdf
[15] Oracle Corp. Oracle 9i Materialized Views. An Oracle White Paper, May 2001. http://technet.oracle.com/products/oracle9i/pdf/o9i_mv.pdf
[16] Oracle Corp. Query Optimization in Oracle 9i. An Oracle White Paper, February 2002. http://technet.oracle.com/products/bi/pdf/o9i_optimization_twp.pdf
[17] S. Parekh, N. Gandhi, J. Hellerstein, D. Tilbury, T. Jayram, and J. Bigus. Using Control Theory to Achieve Service Level Objectives in Performance Management. Real-Time Systems 23(1), pp. 127-141, 2002.
[18] M. Stillger, G. M. Lohman, V. Markl, and M. Kandil. LEO - DB2's LEarning Optimizer. Proc. of 27th VLDB Conference, pp. 19-28, Rome, Italy, 2001.
[19] TPC-C Benchmark Specification. http://www.tpc.org
[20] G. Weikum, A. Mönkeberg, C. Hasse, and P. Zabback. Self-tuning Database Technology and Information Services: From Wishful Thinking to Viable Engineering. Proc. of 28th VLDB Conference, pp. 20-31, Hong Kong, China, 2002.
