A Decentralized Deadlock-free Concurrency Control Method for Multidatabase Transactions

Raj Kumar Batra and Marek Rusinkiewicz
Department of Computer Science
University of Houston
Houston, TX 77204-3475

Dimitrios Georgakopoulos
GTE Labs, Inc.
40 Sylvan Road
Waltham, MA 02254

This work was supported, in part, by grants from Bellcore and MCC.

Abstract

In many applications in a multidatabase environment, global serializability is needed to assure the correctness of concurrent execution of transactions. The serializability of all local schedules is, by itself, not sufficient to ensure global serializability, since the (relative) local serialization orders of the subtransactions of global transactions must be the same at all systems at which the global transactions execute. In this paper, we present a fully decentralized global concurrency control method in which the concurrency control decisions concerning global transactions can be made at each site, based on the information that is locally available. The method uses a top-down approach to enforce the same serialization order at all sites at which a global transaction executes. The proposed method uses forced local conflicts to prevent unacceptable local schedules while assuring deadlock-free execution.

1 Introduction

A Multidatabase System (MDBS) is a federation of autonomous and possibly heterogeneous database systems (called Local Database Systems, or LDBSs). Such a federation allows the coexistence of local and global transactions. Local transactions are submitted directly to the LDBSs and are executed outside the control of the MDBS. Global transactions, on the other hand, are submitted through the MDBS interface and are executed under the control of the MDBS. A global transaction is usually decomposed into a set of subtransactions, each of which can be executed at a single LDBS; therefore, it is frequently referred to as a multidatabase transaction. The transaction management in multidatabase systems is hierarchical.


Local Database Management Systems (LDBMSs) control the execution of local transactions at each site and assure local serializability. The subtransactions of global transactions are also executed under the control of LDBMSs. A Global Database Management System (GDBMS) controls the execution of the global transactions and is responsible for assuring the correctness of their concurrent execution. The objectives of multidatabase transaction management are to avoid inconsistent retrievals and to preserve global consistency in the presence of global and local updates. These objectives are more difficult to achieve in a multidatabase system than in a homogeneous, distributed database system, because an MDBS has to deal with the additional problems of autonomy and heterogeneity of the participating LDBSs. It has been argued that, in many cases, global serializability is needed to assure the correctness of concurrent execution of multidatabase transactions. The serializability of all local schedules is, by itself, not sufficient to ensure global serializability. In a tightly-coupled distributed database system, it is required that subtransactions of global transactions have the same relative serialization order at all sites at which they conflict directly. In a multidatabase system consisting of autonomous local systems, global serializability requires the (relative) local serialization orders of the subtransactions of global transactions to be the same at all systems at which the global transactions execute, even in the absence of direct conflicts among them. The main problem in multidatabase transaction management is the inability of the global transaction manager to determine the local serialization orders without violating local autonomy. This difficulty is compounded by the existence of local transactions whose behavior is not known to the multidatabase system. A practical solution to the problem of obtaining the local serialization order was reported recently in [GRS91]. Once the local serialization orders

are known at the global level, appropriate validation schemes can be designed to ensure that they are consistent at all local systems. Multidatabase transaction management mechanisms must also deal with the possibility of global deadlocks. Many solutions that have been proposed to detect and resolve deadlocks in homogeneous, distributed database systems are also applicable in a multidatabase environment. However, deadlock detection in a multidatabase environment is more complicated [BST89], because one has to take into consideration the possibility that a global deadlock may be caused by local transactions that are unknown at the global level. Therefore, it is highly desirable to incorporate methods assuring freedom from global deadlocks into global concurrency control mechanisms. In this paper, we present a global concurrency control method that satisfies the requirements formulated above. Its main advantage is that the global transaction management is fully decentralized. Most of the schemes discussed in the literature (with the possible exception of [WV90]) assume that the multidatabase transaction management is centralized. The information about the local serialization order of subtransactions is usually forwarded to a global transaction manager that uses it to perform global concurrency control. In contrast, in the method proposed below, the concurrency control decisions concerning global transactions can be made at each site, based on the information that is locally available. The rest of this paper is organized as follows. Section 2 is a brief survey of the related work. Section 3 discusses the multidatabase architecture that is assumed by the proposed method. In the next section we describe the Multidatabase Timestamp Mechanism (MTSM) for global concurrency control in multidatabases. In Section 5 we discuss the problem of global deadlocks and prove that the proposed mechanisms are deadlock-free. Finally, we conclude the paper with a short summary.

2 Related work

The problem of global concurrency control in multidatabases has been studied, among others, in [BS88, Pu88, EH88, BST89, ED90, GRS91]. In [BS88], Breitbart and Silberschatz have shown that global serializability is preserved in the presence of local transactions if global transactions have the same relative serialization order at all sites. In [Pu88], Pu has proposed a scheme in which LDBSs are modified to provide the local serialization order of the transactions. Based on the

serialization order of the subtransactions of global transactions at different LDBSs, global certification is done to ensure global serializability. The main limitation of this scheme is that it may not always be possible to modify the local DBMSs. Depending on how global concurrency control schemes ensure an equivalent serialization order at all sites, Elmagarmid and Du [ED90] propose to classify these schemes into two basic categories. In bottom-up schemes, a global transaction is submitted to the different LDBSs that it needs to access, and is allowed to proceed under their control. The global concurrency control mechanism obtains information about the relative serialization order of transactions at the different local systems, and detects and resolves incompatibilities among them. On the other hand, the schemes using a top-down approach decide on a global serialization order for global transactions before submitting them to local systems, and try to enforce this order at all the systems through different means. The top-down approach discussed in [ED90] shows how a predefined serialization order can be enforced in local systems using two-phase locking (2PL), timestamp ordering, and value-date mechanisms [LT88]. However, its applicability is quite limited because it effectively eliminates concurrency in the execution of global transactions in LDBSs based on 2PL and value-date schemes. In particular, for 2PL-based schemes, a new transaction is submitted to an LDBS only if all the previously submitted transactions have reached their lock points, which in strict 2PL [BHG87] happens at the end of the transaction. As a result, we can have at most one transaction in its lock-acquiring phase at any site. In the case of value-date based schedulers, a new transaction is submitted only if all the previously submitted transactions have reached their value-dates. Since value-dates have a total order among them and a transaction has completed all its data update operations when it reaches its value-date, we can have at most one active transaction performing update operations at any time. A different approach, called the Optimistic Ticket Method (OTM) [GRS91], uses tickets to enforce global serializability. A ticket is a regular data item maintained in each participating LDBS. At each local system, the local serialization order of global transactions is obtained by performing additional data manipulation operations on the local ticket. A global serialization graph is constructed by the MDBS to ensure that all subtransactions of each global transaction have the same relative serialization order. Although OTM enforces global serializability without violating the autonomy of LDBSs, it has some weaknesses:

- Since a global transaction is validated only when all its subtransactions reach their prepared-to-commit states, it may be aborted late in its life-cycle if the relative serialization orders of its subtransactions are different.

- Global transactions are subject to global deadlocks, which must be resolved by a separate mechanism (e.g., timeouts).

Both of the approaches discussed above, i.e., top-down and tickets, have some desirable properties. The main advantage of the top-down approach is that, once the global serialization order has been assigned by the global concurrency controller, it can be enforced at each site using only the information that is available locally. The main advantage of the ticket approach is that it makes it possible to obtain the local serialization orders without violating local autonomy, irrespective of the local concurrency control mechanisms. In the method described in the following sections, we attempt to combine the advantages of both methods while eliminating the shortcomings identified above.

3 MDBS architecture

In the following discussion we assume the system architecture shown in Figure 1. Most of the systems proposed earlier in the literature [Pu88, BST89, GRS91] assume the existence of a multidatabase transaction coordinator to which the global transactions are submitted. In contrast, our architecture is completely distributed. It allows global transactions to be submitted to any of the participating sites that is capable of coordinating their execution. The MDBS software on each site consists of a Global Transaction Manager (GTM) and a set of Servers. Global Transaction Managers are responsible for controlling the execution of global transactions. The GTM to which a global transaction is submitted becomes the coordinator for this transaction. Each global transaction is assigned a timestamp, and the timestamps generated at each site are in monotonically increasing order. It is assumed that timestamps are unique system-wide and define a total order among global transactions (one way to generate such a timestamp is to append the site-id to the local timestamp [BHG87]). If local clocks are used to generate timestamps, they should be kept approximately synchronized for better performance [Lam78].

However, it should be noted that the synchronization of clocks is not required for correctness of the mechanism. After assigning a timestamp to a global transaction, a coordinator GTM decomposes the global transaction into a set of subtransactions, each of which is assigned to a participating site where it can be executed. All the subtransactions of the global transaction carry its timestamp. Each subtransaction is then sent to the GTM of the corresponding site. When a GTM receives a subtransaction, it creates or allocates a server process that becomes responsible for the execution of the subtransaction. A Server is a process assigned to a subtransaction by a local GTM to act as an agent for the global transaction. A GTM may maintain a set of servers and allocate an idle one to a new subtransaction, or it may start a new instance of the server. A server is allocated to a subtransaction until the global transaction commits or aborts. A server submits the subtransaction to the LDBS, monitors its execution, and interacts with the local GTM.
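For illustration, the coordination step described above can be sketched in code. The following is a minimal sketch in Python; the names Subtransaction, CoordinatorGTM, and send_to_gtm are hypothetical, and the only elements taken from the paper are the timestamp construction (a monotonically increasing local counter with the site-id appended) and the decomposition of a global transaction into per-site subtransactions that all carry the same timestamp.

    import itertools
    from dataclasses import dataclass

    @dataclass
    class Subtransaction:
        gtid: tuple        # global timestamp (local counter, site-id): unique system-wide
        site: str          # LDBS at which this subtransaction executes
        operations: list   # operations to be submitted to the local DBMS

    def send_to_gtm(site, sub):
        # Placeholder transport: a real system would hand the subtransaction
        # to the GTM of the target site.
        print(f"dispatch {sub.gtid} to {site}: {sub.operations}")

    class CoordinatorGTM:
        def __init__(self, site_id):
            self.site_id = site_id
            self._counter = itertools.count(1)   # monotonically increasing at this site

        def new_timestamp(self):
            # Appending the site-id makes timestamps unique system-wide and
            # defines a total order among global transactions.
            return (next(self._counter), self.site_id)

        def submit_global_transaction(self, operations_by_site):
            ts = self.new_timestamp()
            for site, ops in operations_by_site.items():
                send_to_gtm(site, Subtransaction(gtid=ts, site=site, operations=ops))
            return ts

Note that, as in the paper, nothing forces the counters at different sites to advance at the same rate; the resulting order is total but need not reflect real time, which is why clock synchronization affects only performance, not correctness.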

4 Multidatabase Timestamp Mechanism

In this section we present a top-down method for global concurrency control in multidatabases, called the Multidatabase Timestamp Mechanism (MTSM), which ensures global serializability without violating local autonomy. We make the following three assumptions about the participating LDBSs:

1. Each LDBMS guarantees local serializability.

2. Each LDBMS provides a visible prepared-to-commit state for its transactions. A transaction is in a visible prepared-to-commit state if it has completed all its operations and will not be aborted unilaterally by the LDBMS for any reason [GRS91]. This implies that all local systems generate only cascadeless histories [BHG87].

3. All local schedules are deadlock-free, i.e., either deadlocks do not occur in the LDBMSs, or, if they occur, they are detected and resolved locally.

MTSM ensures that global transactions are serialized in timestamp order at all sites, irrespective of the mechanisms used by the local DBMSs for concurrency control. Note that, in general, a multidatabase commitment protocol is, by itself, not sufficient to ensure global serializability. To illustrate this, let us consider two local systems that use the strict two-phase locking (2PL) protocol.

[Figure 1: MDBS Architecture. Global transactions are submitted to the GTM at each site; each GTM allocates Servers that act on behalf of subtransactions and submit them to the local LDBMS, which also executes local transactions against the local data, at site 1 through site n.]

Under this protocol, a transaction must keep its write locks until it commits or aborts; read locks, however, can be released in accordance with the two-phase rule. Consider two local histories involving transactions T1 and T2 running at two different sites:

Ha: r1(a) w2(a)
Hb: r2(b) w1(b)

If we assume that read locks are released by each transaction immediately after the read operation is completed (this does not violate the 2PL rule, because the read operation is the last operation of the transaction at each site), both transactions T1 and T2 can become ready to commit (and be committed). The resulting global schedule is obviously non-serializable. In [BGRS90], it has been shown that global serializability can be achieved by controlling the commitment of transactions only if all local systems generate rigorous schedules. Therefore, additional measures must be taken to disallow globally non-serializable schedules.
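To make the non-serializability explicit, consider the conflict edges induced by the two local histories; the following short derivation uses the standard conflict-serializability argument:

    \begin{align*}
    H_a &: r_1(a)\, w_2(a) &&\Rightarrow\ T_1 \rightarrow T_2 \\
    H_b &: r_2(b)\, w_1(b) &&\Rightarrow\ T_2 \rightarrow T_1 \\
    &   &&\Rightarrow\ T_1 \rightarrow T_2 \rightarrow T_1 \quad \text{(a cycle in the global serialization graph)}
    \end{align*}

Each local history is serializable on its own, so neither local scheduler rejects anything; the cycle appears only when both sites are considered together, which is exactly the information the local schedulers do not have.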

4.1 Basic MTSM algorithm

To enforce the timestamp serialization order of global transactions at all sites, MTSM requires each subtransaction of a global transaction to perform additional data manipulation operations on a common

data item, called a ticket, stored in the local database. This technique introduces forced local conflicts between the subtransactions of global transactions at each LDBS [GRS91]. The ticket operations guarantee that either the local serialization order is equivalent to the order of the ticket operations, or the subtransaction is aborted by the local system. MTSM stores the timestamp of the last global transaction committed at each local system as the ticket value. The ticket, which will be referred to further as the Global Timestamp (GTS), is updated only when a global transaction is committed. Each global subtransaction accessing a local system performs a read and a write operation on the local GTS. Each server processes a subtransaction of a global transaction in accordance with the following algorithm:

    START_SUBTRANSACTION:
      read(GTS);
      IF (GTS > transaction timestamp)
        ABORT;                      {abort the transaction}
        send ABORT to the local GTM;
      ELSE
        write(GTS, transaction timestamp);
        submit operations of the transaction to the local DBMS;
        {transaction in prepared-to-commit state}
        send READY to the local GTM;
        wait for the global COMMIT/ABORT decision;
        IF (decision = COMMIT)
          COMMIT;                   {commit the transaction}
        ELSE
          ABORT;                    {abort the transaction}
        ENDIF;
      ENDIF;
    END_SUBTRANSACTION;
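The same control flow can be expressed more concretely. The following is a minimal Python sketch of the basic MTSM server; the ldbs and gtm objects and their methods (read, write, submit, commit, abort, send, wait_for_decision) are hypothetical placeholders, since the paper prescribes only the protocol, not an interface.

    def run_basic_subtransaction(sub, ldbs, gtm):
        gts = ldbs.read("GTS")                 # read the local ticket
        if gts > sub.timestamp:
            ldbs.abort()                       # a later global transaction already committed here
            gtm.send("ABORT", sub.gtid)
            return "aborted"

        ldbs.write("GTS", sub.timestamp)       # forced conflict: update the ticket first
        for op in sub.operations:
            ldbs.submit(op)                    # perform the subtransaction's own operations

        gtm.send("READY", sub.gtid)            # now in the (visible) prepared-to-commit state
        decision = gtm.wait_for_decision(sub.gtid)
        if decision == "COMMIT":
            ldbs.commit()
            return "committed"
        ldbs.abort()
        return "aborted"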

In the algorithm above, the server sends a READY message to the local GTM when all the operations on the LDBS have completed successfully, or an ABORT message otherwise. These messages are forwarded to the global coordinator in accordance with the two-phase commitment (2PC) protocol. If all the subtransactions report READY, the coordinator GTM commits the global transaction and sends a COMMIT message to all the servers. If any of the servers reports an ABORT, the global transaction is aborted, and an ABORT message is sent to the servers participating in the global transaction through their local GTMs. A subtransaction can be aborted before it reaches the prepared-to-commit state for various reasons. A server aborts the subtransaction if the value of GTS read from the local database is larger than the timestamp of the subtransaction. A local database management system may also decide to abort a subtransaction for local concurrency control reasons, for example if the subtransaction is involved in a local deadlock. In such cases an ABORT message is sent to the GTM and, as a result, the entire global transaction is aborted. The prepared-to-commit state is required for the case in which a global commit decision is taken by the coordinator GTM. In this case, once the decision to commit the global transaction is reached, no subtransaction should be aborted due to local reasons. Otherwise, the consistency of the global database cannot be ensured [Geo90].
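On the coordinator side, the commit rule is the standard 2PC decision. A minimal sketch follows, assuming a hypothetical collect_votes function that returns each participant's READY/ABORT message and a broadcast function that delivers the decision through the local GTMs:

    def coordinate_commit(gtid, participants, collect_votes, broadcast):
        votes = collect_votes(gtid, participants)        # e.g. {site: "READY" or "ABORT"}
        decision = "COMMIT" if all(v == "READY" for v in votes.values()) else "ABORT"
        broadcast(gtid, participants, decision)          # forwarded to the servers via their GTMs
        return decision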

Theorem 1 The basic MTSM guarantees global serializability.

Proof: To prove the correctness of basic MTSM we will show that:

1. Global subtransactions update GTS in the timestamp order at each local system.

2. The serialization order of global subtransactions at each site (and hence the global serialization order) is the same as their timestamp order.

To prove the first claim, let us consider the possible schedules involving the read and write operations on GTS performed by transactions T1 and T2, with timestamps ts1 and ts2, respectively. Without any loss of generality, let us also assume that ts1 < ts2. Since each subtransaction performs a read and a write operation on the local GTS, there are six possible schedules:

S1: r1(GTS) w1(GTS) r2(GTS) w2(GTS)
S2: r1(GTS) r2(GTS) w1(GTS) w2(GTS)
S3: r1(GTS) r2(GTS) w2(GTS) w1(GTS)
S4: r2(GTS) r1(GTS) w2(GTS) w1(GTS)
S5: r2(GTS) r1(GTS) w1(GTS) w2(GTS)
S6: r2(GTS) w2(GTS) r1(GTS) w1(GTS)

Schedules S2 through S5 are not serializable and, since we assume that the local systems guarantee serializability, they will not be allowed by the local schedulers. Schedule S6 will also be disallowed, because of the timestamp comparison performed by the transaction server. In this schedule, the server of T1 will read the GTS value and compare it with its timestamp. Since in this case the value of GTS is ts2 and ts2 > ts1, the server will abort T1. Hence, S1 is the only schedule allowed by basic MTSM, and in S1 the operations are serialized in the order T1 → T2.

To prove the second claim, we observe that the serialization order of two transactions is the same as the order in which they perform their conflicting operations. In our case, every global subtransaction conflicts with every other global subtransaction at its site (through the operations on GTS). Also, we have shown above that the subtransactions update GTS in timestamp order. Hence, the serialization order of global subtransactions at all sites is the same as their timestamp order. □

We have shown that the basic MTSM guarantees global serializability by enforcing a pre-specified serialization order at all local systems, irrespective of the local concurrency control mechanisms. However, it has serious disadvantages if the local DBMSs in a multidatabase system use a blocking concurrency control mechanism such as strict 2PL:

1. It allows only serial execution of subtransactions. Each subtransaction acquires a write lock on GTS at the beginning of its execution and keeps it until it commits or aborts. Therefore, no other subtransaction can perform any operation at this LDBS until the lock on GTS is released. This

leads to the same problem that we identified for the top-down method described in [ED90].

2. The basic MTSM may be subject to global deadlocks. Let us consider two global transactions T1 and T2, both accessing two local systems s1 and s2 that use locking for concurrency control. Let us assume that T1 acquires a write lock on GTS at s1 and T2 locks GTS at s2. In this case, T2 has to wait for T1 to release its lock on GTS at s1, and T1 must wait for T2 at s2. Thus, both transactions wait for each other, resulting in a global deadlock.

4.2 A non-blocking MTSM

In this section we propose a non-blocking variant of basic MTSM that allows more concurrency in the execution of global transactions. In addition, the proposed method is deadlock-free. To achieve these objectives we use an additional variable, called the Provisional Timestamp (PTS), at each blocking LDBMS site. Unlike GTS, PTS is maintained by the local GTM and is not stored in the local database. Servers send get and set requests to obtain and modify the value of PTS maintained by the local GTM. At any time, the value of PTS is the timestamp of the last committed transaction that successfully updated GTS. Using PTS to delay accessing GTS, a server executes each subtransaction as follows:

    START_SUBTRANSACTION:
      get(PTS);
      IF (PTS > transaction timestamp)
        send ABORT to the local GTM;
      ELSE
        submit operations of the transaction to the local DBMS;
        Begin-Critical-Section;
          get(PTS);
          IF (PTS > transaction timestamp)
            ABORT;                    {abort the transaction}
            send ABORT to the local GTM;
          ELSE
            write(GTS, transaction timestamp);
            {transaction in prepared-to-commit state}
            send READY to the local GTM;
            wait for the global COMMIT/ABORT decision;
            IF (decision = COMMIT)
              set(PTS, transaction timestamp);
              COMMIT;                 {commit the transaction}
            ELSE
              ABORT;                  {abort the transaction}
            ENDIF;
          ENDIF;
        End-Critical-Section;
      ENDIF;
    END_SUBTRANSACTION;
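As a companion to the pseudocode, a minimal Python sketch of the non-blocking server is given below; it reuses the hypothetical ldbs/gtm interfaces from the earlier sketch, adds gtm.get_pts() and gtm.set_pts(), and models the critical section with a per-site lock. Only the control flow is taken from the paper.

    import threading

    critical_section = threading.Lock()         # one per site; serializes access to GTS and PTS

    def run_nonblocking_subtransaction(sub, ldbs, gtm):
        if gtm.get_pts() > sub.timestamp:        # a later transaction already committed here
            gtm.send("ABORT", sub.gtid)
            return "aborted"

        for op in sub.operations:                # do the subtransaction's work before touching GTS
            ldbs.submit(op)

        with critical_section:                   # held until the global decision arrives
            if gtm.get_pts() > sub.timestamp:    # re-check: someone committed in the meantime
                ldbs.abort()
                gtm.send("ABORT", sub.gtid)
                return "aborted"
            ldbs.write("GTS", sub.timestamp)     # forced conflict on the ticket
            gtm.send("READY", sub.gtid)          # prepared-to-commit
            decision = gtm.wait_for_decision(sub.gtid)
            if decision == "COMMIT":
                gtm.set_pts(sub.timestamp)       # PTS records the newest committed timestamp
                ldbs.commit()
                return "committed"
            ldbs.abort()
            return "aborted"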

In non-blocking MTSM, a server checks the value of PTS before it starts the execution of the subtransaction. If this value is greater than the timestamp of the subtransaction, it sends an ABORT message to the local GTM, which eventually results in the abort of the global transaction. Otherwise, it submits the operations of the subtransaction to the local DBMS. On successful completion of all operations of the subtransaction, the server enters a critical section. The server remains in the critical section until the global transaction is committed or aborted. The server checks whether PTS has been updated by another transaction that was able to commit after the subtransaction started its execution. If not, the server updates GTS with its timestamp. Next, the server sends a READY message to its local GTM and waits for the global COMMIT/ABORT decision. If the decision is to commit, the server updates PTS with the timestamp of the subtransaction and commits the subtransaction. Otherwise, the subtransaction is aborted.

To eliminate the possibility of global deadlocks, the local GTM processes a READY message received from a subtransaction server as follows (a sketch of this processing is given after the list):

1. It sends an ABORT message to all local servers executing subtransactions with timestamps earlier than the timestamp of the subtransaction that sent the READY message. Any server that receives an ABORT message from its local GTM aborts its subtransaction.

2. It sends an ABORT message to the coordinator GTMs that are responsible for the global transactions whose subtransactions were aborted as a result of the procedure described above.

3. It forwards the READY message to the coordinator GTM of the global transaction whose subtransaction sent the READY message.
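The READY processing at the local GTM can be sketched as follows; active is a hypothetical collection of records for the subtransactions currently executing at this site, and send is a placeholder for the messaging between servers and GTMs.

    def handle_ready(site, ready_ts, ready_coordinator, active, send):
        # 1. Abort every local subtransaction with an earlier timestamp.
        for sub in list(active):
            if sub.timestamp < ready_ts:
                send(sub.server, "ABORT")                   # the server aborts its subtransaction
                # 2. Notify the coordinator of each global transaction aborted this way.
                send(sub.coordinator, ("ABORT", sub.gtid))
                active.remove(sub)
        # 3. Forward the READY vote to the coordinator of the announcing transaction.
        send(ready_coordinator, ("READY", ready_ts, site))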

5 Global deadlocks

As mentioned earlier, due to the presence of local transactions, deadlock detection and resolution is more expensive in a multidatabase environment than in a

tightly-coupled distributed system. A global deadlock occurs in a multidatabase system when two or more global transactions are waiting for each other. The wait-for relation may be direct, if it involves only the subtransactions of global transactions, or indirect, when local transactions are involved. In the discussion below we write gi,j to denote the subtransaction of a global transaction Gi that is executing at site j. We say that a subtransaction gi,S waits directly for another subtransaction gj,S at a local system S, iff gj,S has locked a data item and gi,S is waiting for gj,S to release its lock. A subtransaction gi,S waits indirectly for another subtransaction gj,S, iff there is a non-empty set of local transactions L1, ..., Ln at site S such that L1 is waiting directly for gj,S, Li+1 is waiting directly for Li (0 < i < n), and gi,S is waiting directly for Ln. A global deadlock exists if there is a set of global transactions G1, ..., Gn and a set of sites S1, ..., Sn (n > 1) involved in the following waiting situation:

1. G2 waits directly or indirectly on G1 at site S1, Gi+1 waits directly or indirectly on Gi at site Si (1 < i < n), and G1 waits on Gn at site Sn. We say that Gj waits on Gk at site Sl if gj,Sl waits on gk,Sl.

2. The subtransactions gi,Si (i = 1, ..., n) are in their local prepared-to-commit states but cannot be committed until the remaining subtransactions of their global transactions are ready to commit.

3. A subtransaction that is in its prepared-to-commit state does not release any of its resources until it is committed or aborted.

Global deadlocks involving only directly conflicting subtransactions of global transactions can be resolved in the same way as in tightly-coupled distributed database systems. On the other hand, global deadlocks involving subtransactions waiting indirectly are difficult to resolve, because the multidatabase system has no information about the operations of the local transactions. To illustrate such a global deadlock situation, let us consider two global transactions G1 and G2, each accessing two sites A and B, and a local transaction LB at site B. Let us assume that g1,A has finished its operations and g2,A is waiting for g1,A to release a lock. Similarly, at site B, g2,B has completed its operations, LB is waiting for g2,B to release a lock, and g1,B is, in turn, waiting for LB to unlock a data item. Thus, g2,A waits directly for g1,A at site A, and g1,B waits indirectly for g2,B at site B. Subtransaction g1,A cannot commit until g1,B finishes, and g2,B cannot commit until g2,A finishes its operations. Thus,

there is a global deadlock and neither G1 nor G2 can complete.
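For illustration only, the wait-for relation of this example can be written down and checked for a cycle; the sketch below mixes lock waits with the commit dependencies imposed by 2PC, and the point of the example is that the two edges through LB are invisible at the global level, so the MDBS alone cannot construct this graph.

    # Edges read as "X waits for Y". g1,A etc. are the subtransactions of the example.
    waits_for = {
        "g2,A": ["g1,A"],   # direct wait at site A (lock on a data item)
        "g1,B": ["LB"],     # g1,B waits for the local transaction LB ...
        "LB":   ["g2,B"],   # ... which waits for g2,B: an indirect wait of g1,B for g2,B
        "g1,A": ["g1,B"],   # commit dependency: G1 commits only when g1,B is also ready
        "g2,B": ["g2,A"],   # commit dependency: G2 commits only when g2,A is also ready
    }

    def has_cycle(graph):
        # Depth-first search with a recursion stack; a back edge means a cycle.
        visited, on_stack = set(), set()
        def dfs(node):
            visited.add(node)
            on_stack.add(node)
            for succ in graph.get(node, []):
                if succ in on_stack or (succ not in visited and dfs(succ)):
                    return True
            on_stack.discard(node)
            return False
        return any(dfs(n) for n in graph if n not in visited)

    print(has_cycle(waits_for))   # True: G1 and G2 wait for each other through LB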

Theorem 2 Non-blocking MTSM is free from global deadlocks.

Proof: By definition, a global deadlock may involve only transactions that have a subtransaction in its prepared-to-commit state. Therefore, to show that non-blocking MTSM is deadlock-free we need to show the following:

1. A global subtransaction that is in its prepared-to-commit state has the lowest timestamp among all the global subtransactions at its site. As explained in Section 4, a global subtransaction sends a READY message to its local GTM when it enters its prepared-to-commit state. Upon receiving the READY message, the local GTM sends an ABORT message to all subtransactions at its site that have earlier timestamps. Thus, after a subtransaction enters its prepared-to-commit state, there cannot be any other subtransaction with an earlier timestamp at that site.

2. A global transaction with the lowest timestamp cannot be involved in a global deadlock. Since we assume that each local DBMS resolves local deadlocks, there will be some subtransaction at each LDBMS that can reach its prepared-to-commit state. We have shown that such a subtransaction has a timestamp that is earlier than the timestamps of the other subtransactions that are active at this site. Thus, either all the subtransactions of the global transaction with the earliest timestamp will reach their prepared-to-commit states, or the global transaction will be aborted. The abort may occur if, at one of the sites accessed by the global transaction, a subtransaction of some other global transaction with a later timestamp is in its critical section. Once all the subtransactions of a global transaction reach their prepared-to-commit states, the global transaction will eventually be committed or aborted by the coordinator and, hence, cannot be involved in a global deadlock. Thus, the global transaction with the lowest timestamp will eventually terminate. Since each global transaction will at some point in time have the lowest timestamp, no global transaction can be in a wait state forever, and hence a global deadlock cannot occur. □

6 Summary

In this paper we have presented a global concurrency control mechanism for multidatabase systems that preserves the autonomy of LDBSs and is free from global deadlocks. The mechanism extends the notion of timestamps to the multidatabase environment and enforces the global serialization order through additional data operations on a data item stored in the local systems. Most systems proposed earlier in the literature assumed the existence of a global coordinator that maintained all the necessary information and was responsible for all the decisions concerning global transactions. The main advantage of the proposed mechanism is that it allows a fully distributed architecture, in which concurrency control decisions can be made based on locally available information. Since no centralized information is maintained by the mechanism, it provides a higher degree of fault-tolerance and allows incremental growth. The mechanism is easy to implement, as it involves only the creation of a server and a GTM for each type of LDBS.

References

[BGRS90] Y. Breitbart, D. Georgakopoulos, M. Rusinkiewicz, and A. Silberschatz. Rigorous scheduling in multidatabase systems. In Proceedings of the Workshop on Multidatabases and Semantic Interoperability, October 1990.

[BHG87] P. A. Bernstein, V. Hadzilacos, and N. Goodman. Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.

[BS88] Y. Breitbart and A. Silberschatz. Multidatabase update issues. In Proceedings of the ACM SIGMOD International Conference on Management of Data, June 1988.

[BST89] Y. Breitbart, A. Silberschatz, and G. Thompson. Reliable transaction management in a multidatabase system. In Proceedings of the ACM SIGMOD International Conference on Management of Data, June 1989.

[ED90] A. K. Elmagarmid and W. Du. A paradigm for concurrency control in heterogeneous distributed database systems. In Proceedings of the 6th IEEE International Conference on Data Engineering, 1990.

[EH88] A. K. Elmagarmid and A. A. Helal. Supporting updates in heterogeneous distributed database systems. In Proceedings of the 4th IEEE International Conference on Data Engineering, 1988.

[Geo90] D. Georgakopoulos. Transaction Management in Multidatabase Systems. PhD thesis, Department of Computer Science, University of Houston, 1990.

[GRS91] D. Georgakopoulos, M. Rusinkiewicz, and A. Sheth. On serializability of multidatabase transactions through forced local conflicts. In Proceedings of the 7th IEEE International Conference on Data Engineering, Kobe, Japan, April 1991.

[Lam78] L. Lamport. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7), July 1978.

[LT88] W. Litwin and H. Tirri. Flexible concurrency control using value dates. Technical Report 845, INRIA, May 1988.

[Pu88] C. Pu. Superdatabases for composition of heterogeneous databases. In Proceedings of the 4th IEEE International Conference on Data Engineering, 1988.

[WV90] A. Wolski and J. Veijalainen. 2PC agent method: Achieving serializability in presence of failures in a heterogeneous multidatabase. In Proceedings of the PARBASE-90 Conference, February 1990.