A Metaprotocol Outline for Database Replication Adaptability

M.I. Ruiz-Fuertes, R. de Juan-Marín, J. Pla-Civera, F. Castro-Company, and F.D. Muñoz-Escoí

Instituto Tecnológico de Informática, Universidad Politécnica de Valencia, Camino de Vera, s/n, 46022 Valencia (Spain)
{miruifue,rjuan,jpla,fmunyoz}@iti.upv.es, [email protected]

Abstract. Database replication tasks are accomplished with the aid of consistency protocols. Commonly, proposed solutions use a single replication protocol providing just one isolation level. The main drawback of this approach is its lack of flexibility in the face of changing scenarios –i.e. workloads, access patterns...– or heterogeneous client application requirements. This work proposes a metaprotocol for supporting several replication protocols which use different replication techniques or provide different isolation levels. With this metaprotocol, replication protocols can either work concurrently with the same data or be sequenced in order to adapt to changing environments. Along this line, a load monitor would enable the best choice for each transaction, selecting the most appropriate protocol according to the current system characteristics. This paper focuses on outlining the metaprotocol design, establishing the needed metadata set and the required interaction between the main database replication protocol families.

1 Introduction

Different replication protocols provide different features –consistency guarantees, scalability...– and obtain different performance results. For each concrete protocol, a scenario (workload, access patterns, isolation requirements, etc.) can be depicted in which that protocol is the most suitable choice. So, instead of proposing a general-purpose consistency protocol, we have designed a metaprotocol that allows a set of consistency protocols to work concurrently. With this metaprotocol, applications will be able to take advantage of the protocol that best suits their needs. Not only can the best protocol for the general case be selected for an application, but it can also be exchanged for another one if the application access pattern changes drastically or if the overall system performance changes due to specific load variations or to network or infrastructure migrations. These selections and changes can be forced by an administrator following theoretical and empirical studies, or they can be automatically triggered according to background performance analyses (response times, abort rates...) executed by a monitoring system.

This work has been partially supported by FEDER and the Spanish MEC under grants TIN2006-14738-C02-01 and BES-2007-17362.

R. Meersman, Z. Tari, P. Herrero et al. (Eds.): OTM 2007 Ws, Part II, LNCS 4806, pp. 1052–1061, 2007. © Springer-Verlag Berlin Heidelberg 2007


Also, it is possible that a given application knows for sure which protocol is more suitable for its needs, or that a given transaction access pattern is best served by a certain protocol. There exist different architectures and many replication protocols in the literature. As each architecture displays advantages and disadvantages in different scenarios, recent studies such as [1] and [2] compare protocol performance. Jiménez et al. [1] present the ROWAA approach as the most suitable for the general case, and Wiesmann's paper [2] focuses on total order broadcast techniques. Our paper follows the classification given in the latter. The rest of this work is structured as follows. Section 2 describes the assumed system model for our middleware metaprotocol support. Section 3 explains the metaprotocol and the interaction between data replication protocols belonging to different families in the classification proposed by [2]. Later, Section 4 discusses related work and, finally, Section 5 presents our conclusions.

2 System Model

We assume a partially synchronous distributed system –where clocks are not synchronized but the message transmission time is bounded– composed of N nodes, each of which holds a replica of a given database; i.e., the database is fully replicated in all system nodes. Each system node has a local DBMS that is used for locally managing transactions. On top of the DBMS, a middleware –called MADIS– is deployed in order to provide support for replication. More information about our MADIS middleware can be found in [3,4]. This middleware also has access to a group communication service (abbreviated as GCS in the sequel). A GCS provides a communication and a membership service supporting virtual synchrony [5]. The communication service features a total order multicast for message exchange among nodes through reliable channels. The GCS groups delivered messages in views [5]. The uniform reliable multicast facility [6] ensures that if a multicast message is delivered by a node (correct or not), then it will be delivered to all available nodes in that view. In this work, we use Spread [7] as our GCS.
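The total order guarantee that the metaprotocol relies on can be illustrated with a minimal sequencer-based sketch (a toy model of our own, not Spread's actual API): every broadcast receives a global sequence number, so all replicas deliver the same messages in the same order.

```python
# Toy model of total order broadcast (our own sketch; Spread implements this
# with far more machinery, including view changes and uniform delivery).
import itertools

class Sequencer:
    """Assigns a global, monotonically increasing sequence number per message."""
    def __init__(self):
        self._counter = itertools.count(1)

    def broadcast(self, msg, replicas):
        seq = next(self._counter)   # the total order position of this message
        for r in replicas:
            r.deliver(seq, msg)

class Replica:
    def __init__(self):
        self.delivered = []         # messages in delivery order

    def deliver(self, seq, msg):
        self.delivered.append((seq, msg))

r1, r2 = Replica(), Replica()
seq = Sequencer()
seq.broadcast("T1", [r1, r2])
seq.broadcast("T2", [r1, r2])
# Both replicas observe the same delivery sequence.
```

Real GCSs decentralize the sequencing, but the observable property is the same: identical delivery order at every replica, which is what lets the metaprotocol process its shared lists identically everywhere.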

3 Metaprotocol

As stated above, the main goal of our system is to use the replication protocol that behaves best with regard to the current requirements and environment of the replicated system. This implies that the system must change the working protocol when it detects that another one would fit better –a possibly common situation in changing environments. Moreover, our proposal performs this protocol exchange without stopping the system. Thus, when a protocol exchange must be done, transactions already started will end their execution using the protocol they started with, while new ones will use the new protocol. In this section, we present a basic element of this system: a metaprotocol which supports real concurrency among several replication protocols. Such concurrency can lead to inefficiency due to the natural differences in the behavior of the protocols,


which is why it should only be exploited to support the protocol exchange phase described above. Nevertheless, depending on the system requirements, it could be acceptable to use this concurrency without restrictions. For designing and developing this system, different aspects must be considered. As several protocols must work concurrently, it is necessary to know the metadata each supported protocol needs in order to work; on the other hand, it is mandatory to study and analyse important issues related to their interaction. To do so, we use the protocol families presented in [2]. As a first step, we only consider the families based on total order broadcast: active, certification-based and weak voting replication. All these families are update-everywhere [8], so they are decentralised replication protocols.

3.1 Targeted Replication Protocol Families

In active replication, a client request arrives at one server –the delegate–, which sends the transaction to all replica servers using a total order broadcast. Later, all server replicas –including the delegate– execute and commit the transaction in the order it has been delivered. This way, the total order broadcast ensures that all replicas commit transactions in the same order. This protocol ensures the 1-copy-serializability isolation level. In certification-based replication, the delegate server locally executes the transaction before broadcasting it. When the local transaction reaches its commit phase, the delegate broadcasts the transaction information to all server replicas. Once it is delivered, a certification phase starts in all replicas to determine whether such transaction can be committed or not. The total order established by the broadcast determines the certification result: in case of a pair of conflicting transactions, the newest one in the total order is aborted. The transactions that commit do so in the order established by the broadcast.
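As an illustration, the certification decision for snapshot isolation boils down to a writeset-intersection test against transactions that committed after the incoming transaction began. The names and data layout below are our own assumptions, not the paper's implementation:

```python
# Hypothetical sketch of snapshot-isolation certification.
# 'bot' is the begin-of-transaction logical timestamp (total order index at start);
# a transaction fails certification iff some transaction delivered after its bot
# wrote an object it also wrote.

def certify(txn, log):
    """txn: dict with 'bot' (start index) and 'writeset' (set of object ids).
    log: list of (toi, writeset) pairs of already-validated transactions,
    in total order."""
    for toi, ws in log:
        if toi > txn["bot"] and ws & txn["writeset"]:
            return False   # conflicting concurrent transaction -> abort
    return True            # no conflict -> commit in delivery order

log = [(1, {"x"}), (2, {"y"})]
t_ok = {"bot": 2, "writeset": {"z"}}   # started after toi 2, touches nothing new
t_bad = {"bot": 0, "writeset": {"x"}}  # concurrent with toi 1, both wrote x
```

For 1-copy-serializability the same test would additionally intersect the certified transaction's readset against prior writesets, which is why readsets must then be broadcast.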
The transaction information that must be broadcast depends on the guaranteed isolation level. The writeset (set of written objects) is enough for achieving snapshot isolation; for 1-copy-serializability, the readset (set of read objects) must also be broadcast. Some additional metadata is also needed in order to complete the certification; e.g., the transaction start logical timestamp when the snapshot isolation [9] level is used. The weak voting technique is very similar to certification-based replication in its first steps. A client request arrives at one server –the delegate–, which starts to process it locally. When the local transaction reaches its commit phase, the delegate broadcasts the transaction information to all replica servers. Upon delivering this message, the delegate can determine whether conflicting transactions have been committed before (i.e., whether their writesets have been delivered before its own). If so, the transaction being analysed must be aborted. Based on this information, the delegate server performs a second broadcast reporting the outcome of the transaction to the replicas. Notice that with this technique, broadcasting just the transaction writeset already allows the 1-copy-serializability isolation level, since delegate replicas are able to check for conflicts between their own transactions' readsets and the remote transactions' writesets.
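The delegate's voting decision just described can be sketched as follows (the function and parameter names are our own, hypothetical ones). Because only the delegate knows the local readset, it can vote for 1-copy-serializability without the readset ever being broadcast:

```python
# Sketch of the weak voting delegate's decision: after its own writeset is
# delivered, the delegate votes abort iff any remote writeset applied before
# it conflicts with what the local transaction read or wrote.

def delegate_vote(local_reads, local_writes, prior_remote_writesets):
    """Return the vote the delegate broadcasts in the second (reliable,
    non-totally-ordered) message."""
    accessed = local_reads | local_writes
    for ws in prior_remote_writesets:
        if ws & accessed:
            return "abort"    # a previously delivered transaction conflicts
    return "commit"
```

All other replicas apply the writeset when it reaches the head of the tocommit queue, then wait for this vote before committing or aborting.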


3.2 Metadata Set

Before considering the interaction among these protocols, it is important to determine the metadata needed by each supported protocol, because when two or more protocols work concurrently the metaprotocol must generate, for each transaction executed in the system, the metadata needed by each executing protocol. Therefore, if P1 and P2 are two protocols working concurrently, and MP1 and MP2 are their respective metadata when working alone, the metaprotocol must generate the MP1 ∪ MP2 metadata for transactions executed by P1 and P2. As active replication applies transactions sequentially in the order obtained by the total order delivery, it does not need any metadata. Certification-based replication explicitly uses the begin of transaction and the writeset as metadata, thus providing snapshot isolation [9]. The readset may be added to provide 1-copy-serializability [9]. The weak voting technique simply uses the writeset as metadata. Notice that the active and weak voting techniques do not need additional metadata because the total order broadcast provides them with enough information to work. Therefore, the transaction metadata needed when all protocols are working comprises: begin of transaction, writeset and readset. However, readsets are only necessary when the certification-based technique is used for providing 1-copy-serializability. Therefore, due to the high cost of managing readsets, it is preferable to use the certification-based technique only for providing snapshot isolation. Thus, when talking about certification-based transactions, only the writesets will be considered from now on.

3.3 Metaprotocol Outline

Once the protocols have been described in terms of behavior and metadata, the interaction between them can be detailed.
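The metadata-union rule can be sketched directly. The protocol names and metadata sets below follow the discussion above (readsets are omitted, since certification is restricted here to snapshot isolation):

```python
# Sketch of the metadata-union rule: when several protocols run concurrently,
# every transaction must carry the union of the metadata each enabled protocol
# would need when working alone.

METADATA = {
    "active": set(),                       # total order delivery alone suffices
    "certification": {"bot", "writeset"},  # begin-of-transaction + writeset
    "weak_voting": {"writeset"},
}

def required_metadata(enabled_protocols):
    """Union of metadata sets over the currently enabled protocols."""
    out = set()
    for p in enabled_protocols:
        out |= METADATA[p]
    return out
```

This is also why concurrency has a cost only during a protocol exchange: once a single protocol remains enabled, the required metadata collapses back to that protocol's own set.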
The meeting point where these replication techniques work concurrently is a pair of shared lists: the log list, with the history of all the system transactions (only needed by the certification technique, but including information from all the protocols), and the tocommit list, with the transactions pending to commit in the underlying database. These lists are maintained by the metaprotocol in each replica. The outline of this metaprotocol is depicted in Figure 1. A transaction is included in both lists when it is delivered by the GCS and, depending on the protocol class, it has not been rejected during the validation phase. Transactions are processed following the list order. Some dependencies may arise due to the behavior of each replication technique. For determining these dependencies, it is necessary to know how each protocol behaves:
– When a transaction managed by active replication reaches the first position of the tocommit list, it is applied in the database, obtaining its associated writeset –and readset when needed.
– Transactions using certification-based replication must be certified before being inserted in the lists. If they pass their certification without being rejected, they are appended to both lists but must wait to reach the first position of the tocommit list before being applied and committed in the database. In case of an unsuccessful


Initialization:
  1. ∀ available protocol Pq
     a. Pq.enabled := suitable(Pq)
        (suitable(Pq) is true if Pq is currently suitable. Pq.enabled can also be
        set later. When set to true, Pq is loaded. When set to false, Pq is
        unloaded if there are no local transactions already started with it.)
  2. tocommit := ∅; log := ∅
  3. L-TOI := 0; N-TOI := 1
     (TOI = Total Order Index. N-TOI = TOI for the next transaction to be
     delivered. L-TOI = TOI of the last committed transaction.)

I. New local request for a transaction in protocol Pq:
  1. if Pq.enabled
     a. Pq.locals++
     b. grant request
  2. else, deny request

II. A local Pq transaction terminates:
  1. Pq.locals--
  2. if Pq.enabled == false and Pq.locals == 0
     a. unload Pq

III. Pq asks to send message Mn = {t, p, r, c} related to Ti, where t is the
     message type, p the protocol id, r the replica id and c the message
     content (see message types in Figure 2):
  1. if t == WV-1 or C
     a. Ti.bot := L-TOI                                 [1]
  2. piggyback garbage collection info (L-TOI)          [2]
  3. broadcast Mn

IV. Upon delivery of Mn related to transaction Ti:
  1. if Mn.type == C
     a. call Ti.protocol for validate(Ti)
        (check conflicts with concurrent transactions)
     b. if validation == negative
        i. if Ti.replica == Rk (transaction is local)
           call Ti.protocol for rollback(Ti)
        ii. else discard Ti
     c. else
        i. if validation == positive (Ti has no conflicts and ∄ w-pending)
           Ti.log_entry_type := resolved
           Ti.committable := true
        ii. if validation == pending (Ti conflicts with a c-pending or ∃ w-pending)
           Ti.committable := false
           establish dependencies(Ti)
        iii. Ti.toi := N-TOI++
        iv. append Ti to log and tocommit
  2. if Mn.type == WV-1
     a. Ti.toi := N-TOI++
     b. Ti.log_entry_type := c-pending
     c. if Mn.replica == Rk
        i. Ti.committable := true
     d. else
        i. Ti.committable := false
     e. append Ti to log and tocommit
  3. if Mn.type == A
     a. Ti.toi := N-TOI++
     b. Ti.bot := Ti.toi − 1
     c. Ti.log_entry_type := w-pending
     d. Ti.committable := true
     e. append Ti to log and tocommit
  4. if Mn.type == WV-2
     a. if Mn.vote == commit
        i. Ti.committable := true
     b. if Mn.vote == abort
        i. delete Ti from log and tocommit
     c. resolve c-dependencies(Ti, Mn.vote)
  5. do garbage collection

V. Garbage collection:
  1. delete the oldest log entry if all nodes have applied it

VI. Committing thread:
  1. Ti := head(tocommit)
  2. if Ti.committable == true
     a. if Ti.replica ≠ Rk
        i. call Ti.protocol for apply(Ti)               [3]
     b. call Ti.protocol for commit(Ti)                 [4]
     c. L-TOI := Ti.toi
     d. if Ti.log_entry_type == w-pending
        i. resolve w-dependencies(Ti)                   [5]
     e. delete Ti from tocommit

VII. Metaprotocol procedures:
  1. establish dependencies(Ti):
       Ti.log_entry_type := c-pending
       ∀ concurrent w- or c-pending Tj in conflict with Ti
         add Ti to the dependent list of Tj
         Ti.dependencies++
  2. resolve w-dependencies(Tj):
       ∀ transaction Ti in Tj.dependent
         remove Ti from Tj.dependent
         if no conflicts between Ti and Tj
           Ti.dependencies--
           if Ti.dependencies == 0
             Ti.committable := true
             Ti.log_entry_type := resolved
             resolve c-dependencies(Ti, commit)
         else
           delete Ti from log and tocommit
           resolve c-dependencies(Ti, abort)
  3. resolve c-dependencies(Tj, termination):
       ∀ transaction Ti in Tj.dependent
         remove Ti from Tj.dependent
         if termination == commit
           delete Ti from log and tocommit
           resolve c-dependencies(Ti, abort)
         else
           Ti.dependencies--
           if Ti.dependencies == 0
             Ti.committable := true
             Ti.log_entry_type := resolved
             resolve c-dependencies(Ti, commit)

Fig. 1. Metaprotocol algorithm at replica Rk (see explanations for the bracketed [numbers] in Section 3.6)

TYPE   DESCRIPTION
WV-1   weak voting transaction writeset
C      the writeset (and readset if needed) of a transaction executed in a certification-based protocol
A      the whole transaction of an active protocol
WV-2   the voting message of a weak voting protocol

Fig. 2. Message types

TYPE       DESCRIPTION
resolved   a committable (or already committed) transaction with writeset (and readset if needed) info available
c-pending  a transaction with writeset (and readset if needed) info available but not yet committable (e.g., a weak voting transaction waiting for its voting message)
w-pending  a transaction with no writeset info available (e.g., an active transaction not yet committed)

Fig. 3. Log entry types

certification, such transactions are discarded in their remote replicas and aborted in their delegate one.
– A transaction Ti replicated with weak voting has to overcome multiple validations against all remote writesets previously delivered. To this end, each time one of such remote writesets is applied, the delegate replica checks whether it conflicts with the local transaction Ti. If a conflict arises, the local transaction is tagged as to-be-aborted, and a second broadcast (reliable, but without total order) is sent to all replicas telling them that the transaction should be aborted. Otherwise, the transaction writeset will eventually arrive at the head of the tocommit queue. When this happens, the second broadcast is also sent, but notifying in this case that the transaction should be committed. In all other replicas, the transaction writeset is applied as soon as it arrives at the head of the tocommit queue, but its terminating action (either commit or abort) is delayed until the second broadcast is delivered. In case of abortion, such transaction data must be removed from the log list.

3.4 Dependencies

Several dependencies may arise in the validation phase –in the case of the certification-based replication technique– due to these transaction behaviors. A certification-based transaction, in order to be certified, must know the writesets of all concurrent and previously delivered transactions that will eventually commit. But this cannot be immediately known, as there is some pending information in the entries of the log list (see Figure 3). First, the writesets of active transactions are not known until their commit time (we identify these active transactions as w-pending transactions). Second, the final termination (commit/abort) of weak voting transactions is not known until it is delivered in the voting phase (until then, these transactions are identified as c-pending transactions).
Third, certification-based transactions whose certification phase is waiting for the resolution of any of the previous cases are also treated as c-pending transactions. Note that a certification-based transaction Tk may be waiting for a c-pending transaction Tj, where Tj is also a certification-based transaction waiting for the resolution of, for example, a weak voting transaction Ti. Note that Tk and Ti may have no conflicts between them, but they are related through a dependency chain. The metaprotocol must


handle these dependency chains properly. Note also that a transaction may have several dependencies, and it will only be resolved when all its dependencies are resolved. Once a pending transaction is resolved, this resolution is propagated along the dependency chain, as detailed in Figure 1, procedures VII.2 and VII.3. As can be seen, these dependencies imply some delays with respect to each protocol's original behavior when working alone. These delays imply an extra cost that usually cannot be tolerated during long periods. Therefore, our metaprotocol is designed so as not to introduce any overhead when a protocol works alone, while allowing several replication protocols to work concurrently for the short time periods in which a protocol change must be performed without stopping the system. Future work will provide measures of the metaprotocol overhead. Regarding the protocol exchange possibilities, our metaprotocol offers the required functionality for another system component (such as a load monitor, an administrator, etc.) to enable and disable protocols in order to best fit the current system requirements.

3.5 An Easy Example

In order to ease the understanding of the metaprotocol pseudocode, let us consider an example situation. Assume that every protocol is currently enabled and working, so transactions from all of them are being delivered at a replica Rk. Suppose also that, at a given moment, the tocommit list is empty. Then, four new transactions are delivered to the replica. In our example, these four transactions will be enqueued in the tocommit list before the first one commits –in order to show how the dependencies among them are established–. The first delivered transaction, A, is an active one. Thus, A is directly appended to the tocommit list, and marked as committable and w-pending (its writeset is not yet available).
The commitment process of this transaction can start at any moment from now on but, as said before, it will not end until the four sample transactions are enqueued. The second delivered transaction, W, is a weak voting one. W is marked as c-pending and appended to the tocommit list. Suppose that Rk is not the delegate replica of W. This means that W will remain c-pending and non-committable until its voting message is delivered to Rk. The next two delivered transactions are certification-based. The first one, C1, ends its validation phase with a pending result, as there exist w-pending transactions in the tocommit list and, let us suppose, it presents conflicts with W. Therefore, C1 is marked as c-pending and non-committable, and we say that C1 is dependent on both A and W. Later, another certification-based transaction, C2, is delivered. C2 is also marked as c-pending and non-committable because of a conflict with C1 and the existence of A. In this scenario, as C2 depends on C1, and C1 depends on W, a dependency chain appears between C2 and W –note, however, that these two transactions do not conflict in our example–. At this moment, A finally commits and its writeset is collected. Now it is time to resolve the w-dependencies with C1 and C2. Assume that A does not conflict with any other transaction, so all the w-dependencies are just removed. Now, the voting message for W is delivered with a commit vote. So W is now committable and its c-dependencies can be resolved. As W presented conflicts with C1 and W is going to commit, C1 must abort. Due to the termination of C1, its


c-dependencies are resolved in cascade, thus removing all the dependencies presented by C2, which becomes committable. Any committable transaction is eventually committed when it arrives at the head position of the tocommit list.

3.6 Further Details

For completeness, we describe some details of the metaprotocol outline (marked as [number] in Figure 1).
1. Initialization of the begin-of-transaction logical timestamp metadata. This metadata is only needed by certification-based protocols. In the weak voting and certification-based replication techniques, the value is set at broadcast time. With a conflict detection mechanism [4], we are able to ensure that this logical timestamp is valid for representing the transaction start. In active replication, this value is set at commit time, since the transaction actually starts at that time.
2. Garbage collection. Once a writeset has been applied in all replicas, all the local conflicting transactions have been eliminated, so such writeset can be removed from the log list as it will not be necessary any more [2]. This garbage collection should include a heartbeat mechanism in order to guarantee its liveness but, for simplicity, this heartbeat is not included in the outline.
3. Possible conflicts with local transactions. At this point, our conflict detection mechanism eliminates local conflicting transactions, allowing the correct application of remote writesets.
4. Commitment of a transaction. At this time, a weak voting protocol will broadcast the outcome of the transaction. This outcome can be either commit or abort, being an abort when another conflicting transaction has committed, causing the rollback of this weak voting transaction.
5. After commitment, active protocol transactions are required to collect their writesets in order to resolve possible dependencies.
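The dependency propagation of the example in Section 3.5 can be replayed with a toy sketch. The data structures and function names below are our own simplification of procedures VII.1–VII.3 in Figure 1, not the actual implementation:

```python
# Toy replay of the Section 3.5 example: active A, weak voting W, and
# certification-based C1, C2, with C1 conflicting with W, and C2 with C1.

class Txn:
    def __init__(self, name):
        self.name = name
        self.dependent = []     # transactions waiting on this one
        self.dependencies = 0   # how many transactions this one waits on
        self.committable = False
        self.alive = True       # False once aborted

def depend(waiter, blocker):
    blocker.dependent.append(waiter)
    waiter.dependencies += 1

def resolve(blocker, outcome, conflicts):
    """Propagate blocker's resolution ('commit'/'abort') along the chain.
    conflicts: set of frozensets naming the conflicting pairs."""
    for t in blocker.dependent:
        if outcome == "commit" and frozenset({blocker.name, t.name}) in conflicts:
            t.alive = False                   # a conflicting committer: abort t
            resolve(t, "abort", conflicts)    # and cascade down the chain
        else:
            t.dependencies -= 1
            if t.dependencies == 0 and t.alive:
                t.committable = True
                resolve(t, "commit", conflicts)
    blocker.dependent = []

A, W, C1, C2 = Txn("A"), Txn("W"), Txn("C1"), Txn("C2")
conflicts = {frozenset({"W", "C1"}), frozenset({"C1", "C2"})}
depend(C1, A); depend(C1, W)    # C1 waits on w-pending A and c-pending W
depend(C2, A); depend(C2, C1)   # C2 waits on A and on C1

resolve(A, "commit", conflicts)  # A commits; no conflicts, dependencies removed
resolve(W, "commit", conflicts)  # W commits; C1 aborts; cascade frees C2
```

After the replay, C1 is aborted (it conflicted with the committed W) while C2, despite its conflict with C1, becomes committable because C1 never commits.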
Finally, if we want to add support in our metaprotocol for the primary copy replication technique [2], we need to know how it can interact with the other protocols. When working alone in our metaprotocol, it can work in its original way, but when it must work concurrently with one of the other protocols, some changes must be applied. The main problem relates to the fact that in primary copy replication only one node serves client requests, and therefore it is the only one which must perform conflict detection. Hence, when a primary copy technique works concurrently with another one where all replicas can serve client requests, the conflict detection performed by the primary would not consider possible conflicts with concurrent transactions that are in their local phase in other replicas. Thus, all replicas need to participate in the conflict checking process. To do so, a possible approach consists in modifying the primary copy technique so that it behaves like a certification-based protocol when working concurrently with other protocol families. In this case, it would be a certification-based technique where only one replica processes all client requests. Notice that this artificial behavior would not be necessary if all concurrent protocols were primary copy approaches sharing the same primary copy server.


4 Related Work

This paper is a continuation of the work started by our group with MADIS [3], a platform designed to give support to a wide range of replication protocols. MADIS was conceived to support different kinds of pluggable protocols, whose paradigms range from eager to lazy update propagation, from optimistic to pessimistic concurrency control, etc. Consequently, MADIS was designed to maintain a wide range of metadata in order to cover the requirements of the most common database replication protocols. In particular, it allowed switching from one consistency protocol to another, as needed, without recalculating metadata for the newly plugged-in protocol. The resulting MADIS architecture allowed the administrator to change the protocol in use by first stopping the old one and later starting the new one; thus, concurrent execution was not supported. An exchanging algorithm for database replication protocols was presented in [10]. In that work, the authors designed a protocol for supporting a closed set of database replication protocols. Communication protocols are another distributed systems research branch that has also studied techniques for dynamically changing protocols. In this case, the idea is to select each time the group communication system which best fits the network layer and the changing load profile of the replicated system; [11] and [12] are different approaches to providing this support. Another proposal for switching between total order broadcast communication protocols is presented in [13]. In this case, the switching system neither buffers messages nor blocks their propagation; instead, the system spreads each message using both protocols during the switching phase.
It must be noticed that this third switching mechanism is less aggressive than the other two, but it introduces some overhead in the network, whilst [11] and [12] can easily reach the required inactivity period because they do not take the existence of transactions into account. With regard to concurrently supporting multiple isolation levels in replicated databases –a future work line of this paper–, [14] presents a different approach. Its authors propose a general scheme for designing a middleware database replication protocol supporting multiple isolation levels. They base it on progressive simplifications of the validation rules used in the strictest isolation level being supported, and on local (to each replica) support for each isolation level in the underlying DBMS.

5 Conclusions

The aim of this paper is to design a metaprotocol that will be the key piece of an adaptable system allowing to dynamically change the replication protocol used in a replicated database. The idea is to select, at each moment, the replication technique that best fits the changing requirements and dynamic environment characteristics, trying always to provide the best achievable performance. As a first approach, the goal of the metaprotocol designed in this paper is to provide support for the concurrent execution of the most relevant database replication protocol families based on total order broadcast [2]: active, certification-based and weak voting replication.


We have presented a basic metaprotocol algorithm for concurrently supporting the targeted database replication protocols, based on the required interaction among them. In future work, we will provide a correctness proof and new metaprotocol revisions supporting other database replication techniques. Another line of future work consists in implementing this metaprotocol and studying possible optimizations in order to increase performance when several protocols work concurrently. Among these optimizations, the holes technique presented in [15] can be generalized and included in our metaprotocol algorithm.

References

1. Jiménez-Peris, R., Patiño-Martínez, M., Kemme, B., Alonso, G.: How to select a replication protocol according to scalability, availability, and communication overhead. In: SRDS, pp. 24–33. IEEE Computer Society, Los Alamitos (2001)
2. Wiesmann, M., Schiper, A.: Comparison of database replication techniques based on total order broadcast. IEEE Transactions on Knowledge and Data Engineering 17, 551–566 (2005)
3. Irún, L., Decker, H., de Juan, R., Castro, F., Armendáriz, J.E.: MADIS: a slim middleware for database replication. In: 11th Intnl. Euro-Par Conf., Monte de Caparica (Lisbon), Portugal, pp. 349–359 (2005)
4. Muñoz-Escoí, F.D., Pla-Civera, J., Ruiz-Fuertes, M.I., Irún-Briz, L., Decker, H., Armendáriz-Iñigo, J.E., de Mendivil, J.R.G.: Managing transaction conflicts in middleware-based database replication architectures. In: SRDS, pp. 401–410. IEEE Computer Society, Los Alamitos (2006)
5. Chockler, G.V., Keidar, I., Vitenberg, R.: Group communication specifications: A comprehensive study. ACM Computing Surveys 4, 1–43 (2001)
6. Hadzilacos, V., Toueg, S.: Fault-tolerant broadcasts and related problems. In: Mullender, S. (ed.) Distributed Systems, pp. 97–145. ACM Press, New York (1993)
7. Spread: The Spread communication toolkit (2007). Accessible at URL http://www.spread.org
8. Wiesmann, M., Schiper, A., Pedone, F., Kemme, B., Alonso, G.: Database replication techniques: A three parameter classification. In: SRDS, pp. 206–215 (2000)
9. Berenson, H., Bernstein, P.A., Gray, J., Melton, J., O'Neil, E.J., O'Neil, P.E.: A critique of ANSI SQL isolation levels. In: SIGMOD Conf., pp. 1–10. ACM Press, New York (1995)
10. Castro-Company, F., Muñoz-Escoí, F.D.: An exchanging algorithm for database replication protocols. Technical report, Instituto Tecnológico de Informática, Valencia, Spain (2007)
11. Miedes de Elías, E., Bañuls Polo, M.C., Galdámez Saiz, P.: Group communication protocol replacement for high availability and adaptiveness. Thomson Paraninfo, 271–276 (2005)
12. Liu, X., van Renesse, R., Bickford, M., Kreitz, C., Constable, R.: Protocol switching: Exploiting meta-properties. In: 21st International Conference on Distributed Computing Systems, p. 37. IEEE Computer Society, Washington, DC, USA (2001)
13. Mocito, J., Rodrigues, L.: Run-time switching between total order algorithms. In: Nagel, W.E., Walter, W.V., Lehner, W. (eds.) Euro-Par 2006. LNCS, vol. 4128, pp. 582–591. Springer, Heidelberg (2006)
14. Bernabé-Gisbert, J.M., Salinas-Monteagudo, R., Irún-Briz, L., Muñoz-Escoí, F.D.: Managing multiple isolation levels in middleware database replication protocols. In: Guo, M., Yang, L.T., Di Martino, B., Zima, H.P., Dongarra, J., Tang, F. (eds.) ISPA 2006. LNCS, vol. 4330, pp. 511–523. Springer, Heidelberg (2006)
15. Lin, Y., Kemme, B., Patiño-Martínez, M., Jiménez-Peris, R.: Middleware based data replication providing snapshot isolation. In: Özcan, F. (ed.) SIGMOD Conf., pp. 419–430. ACM, New York (2005)