Two-Stage Transaction Processing in Client-Server DBMSs*

Vinay Kanitkar and Alex Delis
Department of Computer and Information Science
Polytechnic University, Brooklyn, NY 11201
Abstract

In this paper, we show that there is scope for replication in data-shipping client-server DBMSs, offering opportunities for improved transaction response times. To support this replication, we describe a two-stage protocol for transaction processing (2STP). We extend the conventional client-server data-shipping mechanism by allowing clients to update and query cached objects that have replicas at multiple sites. We use the concept of acceptance criteria to provide a means for flexible handling of client updates. The effectiveness of the two-stage transaction processing mechanism is supported by means of queuing analysis and detailed simulation experiments comparing 2STP with a global lock-based data-shipping protocol. This improvement in transaction processing efficiency is achieved at the cost of longer downtimes for crash recovery.
1 Introduction
Contemporary database systems are designed to operate in a client-server environment [6, 13, 14, 22]. There is usually a centralized server and multiple clients connected to the server by a local area network. The two most common models for database processing in such systems are data-shipping and query-shipping [8]. In this paper, we deal with the data-shipping client-server model, where data is shipped from the server to the client and processed there. This model has become quite pervasive in database computing. However, most previous efforts on this model assume one or more of the following: (i) the clients maintain cached data and local log records in their main memory only [10, 14, 22], (ii) a data object can be updated at only one client at a time [20], and (iii) a client ships the log records for every transaction to the server as soon as the transaction commits [17]. Most of these requirements were imposed by an assumed strict relationship between a server and its clients.

Emerging trends such as widespread electronic monetary transfers and PC banking [1], the use of new-blend transaction and workflow systems [11, 18], and the broad use of interoperating databases [5, 9] dictate a more flexible protocol among cooperating elements in database architectures. In this paper, we propose a two-stage transaction processing protocol (2STP) that allows multi-client replication of data. By permitting time- and location-independent updates, we trade off the advantage of locality of data accesses against the need to maintain global data consistency. In particular, we exploit the facts that in many transaction-oriented systems there is no need for strict data consistency, and that some classes of transactions are inherently completed in two phases.

For example, consider a facility like PC banking. In the near future, customers of a bank could download their financial account data to their personal computers and initiate queries, money transfers, and other transactions. The cached data is updated to reflect these actions. However, the transactions are said to have completed only if they execute successfully on the data stored at the bank. For this reason, the transactions are shipped to the bank's computing system for re-execution. Note that, for a wide variety of customer services, these transactions need not be sent over immediately. Instead, the customer can complete all required transactions locally and ship them all to the bank together.

To allow such a two-stage mechanism to function correctly, we make use of the concept of acceptance criteria. For example, in a banking application, an acceptance criterion may be that the balance in an account cannot be negative. Therefore, a transaction that transfers $5,000 from X's account to Y's account must be aborted if X does not have $5,000 in the account. Acceptance criteria, therefore, specify conditions that allow transactions to complete. Our analysis shows that data replication and its two-stage processing are feasible under certain conditions. We show that object requests can be satisfied by our server in a shorter time (on average) than in a conventional lock-based system [7, 17, 20, 24, 25]. This, in turn, provides better transaction throughput at the clients.

The paper is organized as follows: Section 2 presents an overview of similar and related work. In Section 3, we present the two-stage transaction processing protocol and discuss recovery issues. Section 4 contains a queuing-analytical and experimental comparison of our protocol and a conventional lock-based system and, lastly, conclusions can be found in Section 5.

* This work was supported in part by the National Science Foundation under Grant NSF IIS-9733642 and the Center for Advanced Technology in Telecommunications, Brooklyn, NY. Copyright 1998 IEEE. Published in the Proceedings of HPDC-7 '98, 28-31 July 1998, Chicago, Illinois.
2 Related Work

In the proposed scheme, we use inter-transaction client caching in order to exploit locality of data accesses, avoid data contention, and reduce communication overheads. To improve transaction response times, we remove the restriction of global locking and allow replication of data at multiple sites. Several models dealing with asynchronous updates of replicated objects have been proposed in the literature.

[15] presents an extension to the Coda file system [21] specifically designed for disconnected operation. The system performs automatic read/write conflict detection based on certain serializability constraints. Results of client transactions with partitioned file access are not immediately committed to the server but are held at the client and used for future processing. Once the client reconnects with the server, these results are validated with respect to the current server state and committed only if the validation is successful.

An asynchronous update protocol was discussed at a conceptual level in [12]. Gray et al. proposed that mobile clients store a replica of the entire database and attempt to reconcile the results of local transactions at regular intervals. This is done by sending the transactions to the server for re-execution when connectivity with the server is restored. Walborn and Chrysanthis [23] examine the usefulness of semantic information for transaction processing in a mobile environment. Here, an application-independent semantic property such as commutativity of operations considerably reduces the problems of concurrent execution and recovery. If all operations of an object commute with one another in all states of the object, then the object can be cached and updated at multiple hosts asynchronously. Thus, the conventional notion of serializability can be relaxed in favor of weaker application-specific criteria, and data consistency requirements can be relaxed from strict to eventual consistency.

The Escrow transactional model described in [19] exploits the fact that aggregate data types are numeric values which can represent a quantity of interchangeable items. The database system allows transactions to hold "in escrow" required quantities of data values, in a manner similar to setting aside a portion of a bank account balance for a particular purpose. Data consistency is ensured by limiting the operations on the quantity to increments and decrements that commute. The effects of asynchronously committed transactions can be incorporated into the database in an arbitrary order.

2STP differs from the above protocols in the following aspects: (i) it is designed to provide better transaction processing performance in the traditional client-server environment, (ii) replication of data is kept to the minimum that is required, and (iii) acceptance criteria are used to provide a relaxed form of consistency control.

3 The Two-Stage Transaction Processing Protocol (2STP)
3.1 Definitions
A To-Lend version of an object is the last reconciled version stored in the server's database. This version is shipped to any client that requests the object. As soon as an object is cached for the first time, the server creates a copy of it (termed To-Save). The To-Save copy is generated so that all subsequent client updates can be re-performed on it while preserving the To-Lend copy for recovery purposes. Each object has a unique object identifier (ObjectID), a monotonically increasing state identifier called the Object Log Sequence Number (ObjectLSN), and space for storing another log sequence number (LSN) called the ServerLSN. At the server, the ObjectLSN and the ServerLSN for an object are identical.

Acceptance criteria for data objects are encapsulated within the objects. When an object is shipped to a client, the acceptance criteria are shipped with it. The acceptance criteria for database objects also contain conditions that cause the server to reconcile updates to all the copies in the system. These are called conditions for significant modification and are application-specific. Acceptance criteria for transactions are stated in the transaction scripts.

Every client maintains a structure called the Object Log Table (OLT) which contains an entry for each database object cached locally. For every such object, the OLT contains a Reference Counter (RC) and the Last Access Transaction (LATr). The RC counts how many local transactions are currently accessing the object, and the LATr records the ID of the last transaction that accessed the object. The OLT can be used to store any information specific to the requirements and object release policies of the application. Transactions that have completed are stored in a disk spooling area called the Committed Transaction Spool (CTS). The CTS is used to batch completed transactions until they can be shipped to the server for re-execution.
3.2 Protocol Operation
We classify the operation of the two-stage transaction processing protocol into three phases, based on the direction of migration of database objects or transaction scripts between clients and servers.
3.2.1 Object Lending

At a client: When a transaction requests an object, the client first looks for it locally. If it is not available, a request is dispatched to the server. The latter ships out the To-Lend version of the object along with its acceptance criteria. The client creates an entry for the incoming data object in its OLT and sets its RC to zero. Before a client transaction is allowed to access the object, it requests the appropriate lock from the local lock manager. When a transaction is granted access to an object, the object's OLT RC is incremented. In Figure 1, we show how, in a cluster of four clients and one server, the lending of objects takes place. An arrow from a client to the server indicates a request for an object, while an arrow from the server to a client is the shipment of an object to that client. Note that, when object S91 is checked out for the second time, the To-Save version is not created.

[Figure 1: Object Lending Phase time line. Clients A-D request objects S91, S4431 and S178 from the server; the server sends each object and creates its To-Save version on the first checkout only.]
At the server: The To-Lend copy of the object is sent following a client request. If the To-Save version of the object does not exist, then a replica of the To-Lend copy is created as the To-Save copy. If an additional request arrives for the same object, the To-Lend version is again checked out. A directory maintains the list of sites that have cached the To-Lend copy.

3.2.2 Transaction Processing

At the clients: During transaction processing, cached objects can be modified at any client independently, as long as the data and transaction acceptance criteria are met (otherwise, the transaction is aborted). The client updates the
[Figure 2: Object Release Phase time line. Object S91 is released after a significant modification and recalled from the other clients; the shipped transaction scripts are re-executed on the To-Save version, which then replaces the To-Lend version; object S4431 is released due to object replacement.]
ObjectLSN locally for its own logging and recovery purposes. Clients are not allowed to modify the ServerLSN of cached objects. When a transaction commits, then for each object accessed by the transaction, the OLT reference counter is decremented and the transaction ID is saved as the OLT LATr. The transaction script itself is appended to the client's CTS only if the transaction has updated any data.

At the server: Transaction processing is performed when a client has shipped a set of CTS-resident transaction scripts for re-execution on the primary version of the database. These scripts are stored on the server's disk and assigned new transaction IDs. These transactions are re-executed on the To-Save versions of the data objects. Transaction acceptance criteria are validated where they appear in the transaction script. Data acceptance criteria are verified when data values are modified. Transactions that cannot satisfy acceptance criteria are aborted. After an update takes place, the ObjectLSN and the ServerLSN of the modified object are replaced by the LSN of the log record corresponding to that update. When the server sends a successful completion acknowledgement for a transaction, the client can delete the transaction script from its CTS. If, however, a transaction fails during this phase, the responsible client is informed about the acceptance criteria violation that occurred. The user then has the choice of resubmitting the transaction in the light of a modified script and updated data.

During both stages of transaction processing, write-ahead logging and checkpointing are used to ensure that transaction rollback and crash recovery are possible.

3.2.3 Object Release

A client releases a cached object when: (i) the server explicitly asks the client to, (ii) the client has to create free space in
its cache for newly requested objects (the replacement policy is left to the application), (iii) the data object has been locally modified significantly, or (iv) a transaction that succeeded locally has failed at the server due to a violation of acceptance criteria. If an object has been released because of condition (iii), the server requests that all copies of the object be released. Any client can obtain a new copy of the object later if it needs to. It is possible that a particular client cannot release its copy of an object because the client (or network) has crashed. In this situation, the crashed client is ignored and the client's copy is marked invalid. The ServerLSN of the client's copy of the object now differs from that of the To-Lend version at the server. This is handled during crash recovery by the client.

If the object has not been modified at the client (its ObjectLSN and ServerLSN are the same), then the only action to take is to inform the server and delete the object from the local database. The server can then remove the client from its list of clients that cache that object. Obviously, objects that are still being used by executing transactions cannot be released immediately; for such objects, the OLT Reference Counter (RC) is non-zero.

Figure 2 shows how objects are released (e.g., client A releases object S91). If an object has been modified, then the client sends a message to the server specifying the object it wants to release. On acknowledgment from the server, the client ships the set of transaction scripts from its CTS up to the last transaction that accessed the object. The server re-executes the transactions shipped by the client on the To-Save version of the object (grayed segment). When there are no more copies of an object at the clients, the To-Lend version of the object is replaced by the To-Save version. At this point, the To-Save version is deleted (blacked segment).
3.3 Recovery Issues
Recovery from Client Crashes: When a client becomes operational after a crash, it may have had some transactions executing at the time of the crash. Therefore, the disk versions of some of the objects accessed by these transactions may not reflect all the updates applied to them. This is because the client's transaction manager does not have to write an updated object to disk at the time of transaction commit (no-force policy [16]). The updates are, however, present in the client's log. As the first step, the client ships its CTS to the server. Once this is done, the client is essentially ready to resume normal operation if it invalidates its entire cache (no undo operation is required). However, the client may have cached a considerable portion of the database, and re-fetching all this data on demand can be quite time-consuming and, for many data objects, unnecessary. Therefore, the client compiles a dirty-object list by examining its log. The client then sends the server a list of ObjectIDs along with their ServerLSNs, together with the list of dirty objects. The server responds with the subset of ObjectIDs whose To-Lend versions have been updated since they were cached by the client (i.e., the server's copies have greater ServerLSNs). These objects are re-shipped to the client, as are all the objects on the dirty-object list. The client is then ready to resume normal operation. All the transactions that were interrupted by the crash are classified as aborted.

Recovery from Server Crashes: Once the server becomes operational, it need not perform any recovery if it was not re-executing client transactions at the time of the crash. Otherwise, it uses the ARIES redo-undo recovery algorithm [16] to make the database consistent with the log. The transactions that were being re-executed at the time of the crash are identified and their effects undone. The server then processes the batched transactions, beginning with the undone transactions, until it reaches the end of the transaction buffer. Once this is done, the server informs the clients that it is ready to resume normal operation. The clients can continue to process transactions locally during a server crash as long as they do not require additional data from the server.
3.4 An Example
As an example, consider the case of a stockbroker in San Francisco trying to purchase shares of XYZ Inc. at the NYSE. If the broker's cached data reports a price of $55 per share, the broker can send a transaction to the stock exchange as "If the price is below $56 per share, then purchase 1,000 shares of XYZ Inc." The price-per-share criterion is necessary because the broker may want to purchase fewer or no shares if the price is above $56.

    TRANSACTION PurchaseShares( buyer, seller, company,
                                number_of_shares, price_ceiling )
    {
        /* Transaction Acceptance Criteria */
        Assert( PriceQuote( company ) < price_ceiling );

        /* Transfer payment from buyer to seller */
        Withdraw( buyer.account,
                  number_of_shares * PriceQuote( company ));
        Deposit( seller.account,
                 number_of_shares * PriceQuote( company ));

        /* Transfer shares from seller to buyer */
        TransferShares( seller.portfolio, buyer.portfolio,
                        number_of_shares );
        Commit();
    }
Figure 3: Transaction Script for the Purchase of Shares

At the server, the execution of this transaction is aborted if the price is greater than or equal to $56 per share. A possible transaction script for this transaction is shown in Figure 3. If the assertion fails at the server, that implies that the
data at the client is outdated. The server informs the client of the transaction’s failure and the client transaction is rolled back. The client can then request a new version of the object.
4 Analytical and Experimental Results
4.1 Queuing Analysis
In this subsection, we analyze the global lock-based data-shipping architecture and the 2STP architecture. We show that the enhanced cache efficiency of the replicated approach allows 2STP to redo the requisite update transactions during normal operation. We derive an expression for the maximum transaction-redo rate in 2STP such that clients' object requests are satisfied in an equal period of time in both systems (on the average). In Figure 4, the queuing model of a conventional lock-based client-server data-shipping architecture [20] is shown.

[Figure 4: The Lock-Based Client-Server Data-Shipping Queuing Model. Each of the n clients is an M/M/1 queue; with probability q a request goes to the server, where with probability 1-p it is re-queued.]

We have used M/M/1 queues to model the clients and the server [4]. There are n clients, and λ is the Poisson arrival rate for object requests generated at each client. μ is the exponential service rate at the client (the client checks whether or not it has the object cached locally), and q is the probability that the object is not already cached at the client. At the server, ν is the exponential service rate and p is the probability that no other client has a conflicting lock on the requested object. If another client does have a conflicting lock on an object, then the request has to be re-queued until the conflicting lock(s) have been released. The average response time for object requests that are satisfied locally is 1/(μ - λ). For requests that are sent to the server, the response time is 1/(μ - λ) + 1/(p(ν - nλq)). Since the probability that an object request will go to the server is q, the average response time for an object requested by a transaction executing at a client is

    T = 1/(μ - λ) + q / (p(ν - nλq))    (1)

Figure 5 shows the queuing model for the 2STP architecture. In this figure, q₂ is the probability that an object is not already cached at the client. Here the server allows replication of data in the system and therefore does not have the feedback loop (re-queuing of requests) of Figure 4. However, to keep the database consistent, the server has to redo the clients' update transactions. Assuming the (Poisson) rate of update-redo actions from each client to be δ, we can write the expression for the average response time as:

    T₂ = 1/(μ - λ) + q₂ / (ν - n(λq₂ + δ))    (2)

[Figure 5: The 2STP Queuing Model. As in Figure 4, but without the server-side feedback loop; each client additionally ships update-redo requests at rate δ.]

To formulate an expression for the additional transaction processing capability of 2STP, viz. δ, we equate the average object response times in the two systems to get:

    q / (p(ν - nλq)) = q₂ / (ν - n(λq₂ + δ))    (3)

We assume, for simplicity, that the server's request service rate is equal to the sum of the rates of object requests at the clients, i.e., ν = nλ. This is not unreasonable, as the server has to be able to service all object requests during the initial phase when no database objects are cached at the clients. In equation (3), letting q = kq₂ for some positive real value k, we get δ = (1 - p/k - (1 - p)q₂)λ. Recognizing that 0 < p, q₂ < 1, we can write the range for δ as:

    (p - p/k)λ < δ < (1 - p/k)λ    (4)

From this inequality, we can see that the variables p and k define the range for δ. δ is definitely negative when k < p, and definitely positive when k > 1. In terms of the two models, when k > 1, the cache-hit ratio at the clients in the conventional lock-based model (1 - q) is lower than that in the 2STP model (1 - q₂), and vice versa. However, it is very difficult to estimate realistic values for p, q and q₂ (and k) analytically. We derive estimates for these probabilities by means of experiments. This is described in the next subsection.
4.2 Experimental Results
We have simulated the conventional client-server request model and the 2STP model with packages written in C using the CSIM simulation library. Through our experiments, we show the following points: (i) the clients' cache effectiveness is greater in 2STP than in conventional lock-based data-shipping, (ii) in the conventional model, a significant percentage of the requests that go to the server are re-queued because the object cannot be granted to the client immediately, (iii) the experimentally obtained values for p, q and q₂ for a number of update workloads support the feasibility of 2STP in client-server environments, and (iv) the 2STP protocol offers improved client performance even in the presence of heavy updates and hot spots.

For the transactions at the clients, we have designed a database access pattern based on the object identifiers. In a database of size N, the object IDs range from 0 to N-1. The INT2 access pattern has been achieved by designing the access ranges such that the access range of client i intersects the access ranges of clients i-1 and i+1. This way there is a 2-way contention for most of each client's access range. Although this access pattern is not very realistic, it creates locality of accesses with a reasonable degree of contention. We have simulated the INT2 access pattern for 5%, 10% and 25% exclusive lock requests (ELRs). As the clients' cache sizes are much smaller than the actual database size, clients return objects to the server using the LRU algorithm when they need disk space to receive newly arrived objects.

For the first experiment, fifty clients, one server and a database consisting of 25,000 objects were used. The client caches were set to accommodate up to 10% of the server database size. For testing both models, the same sequences of object requests by clients were submitted. At each client, the object requests were uniformly distributed over the client's access range. The network delays in the transmission of data or requests were modeled as those in an Ethernet LAN [2]. We also assume that processing a transaction at the server is many times more time-consuming than at a client (for our experiments, ten times more expensive). This is to account for I/O overheads, internetworking delays, and other concurrent operations [3]. For all our experiments, we assumed that transactions in 2STP would fail during their second stage of processing with a certain probability (proportional to the percentage of updates in the system). Such transactions are reported back to their originating clients and re-submitted with a fixed probability.

Figure 6 shows the average cache-hit percentages achieved by both CS and 2STP while clients request fractions of their objects to be locked exclusively. The curves labeled 2STP-INT2 show the cache-hit percentage at the clients with 5%, 10% and 25% exclusive lock requests for our protocol. This cache-hit percentage, when expressed as a probability, is the variable q₂ from the model in Figure 5. The curves labeled CS-INT2 show the same for the conventional data-shipping model; similarly, this is the variable q from the queuing model in Figure 4.

[Figure 6: Cache-hit percentages over time for 2STP-INT2 and CS-INT2 at 5%, 10% and 25% ELRs.]
As the curves show, the cache-hit percentages level off as the experiments progress and the caches reach a steady state. In Figure 6, 2STP-INT2 begins to level off at around 81%, while the cache-hit percentage for CS-INT2 levels off at 64% for 10% updates. The percentage of requests arriving at the server that need to be re-queued in the CS-INT2 simulation (because the lock cannot be granted immediately) is 36.5% for 10% exclusive lock requests. This is the probability 1-p in Figure 4. Table 1 shows the values of p, q and q₂ for varying exclusive lock requests. Substituting these values into the expression for δ, we can see that the numerical coefficient of λ in the bound for δ increases as the percentage of exclusive lock requests increases. This represents the additional capacity available to 2STP for processing update-redo transactions.
[Table 1: Experimentally obtained values of p, q and q₂ for 5%, 10% and 25% exclusive lock requests.]
To see how the number of clients connected to the server affects the system, we increased the number of clients from 10 to 50 in steps of ten. Figure 7 shows the values at which the cache-hit percentages level off for different numbers of clients. As the graph shows, the cache-hit percentages are lower when there are fewer clients. This is because, as the number of clients decreases, each client has a larger object access range. This implies fewer cache hits and an increase in the replacement of cached objects with newer ones. Consequently, a larger percentage of requests are forwarded to the server, which is reflected in the response times plotted in Figure 8.

[Figure 7: Steady-state cache-hit percentages for 10 to 50 clients (2STP-INT2 and CS-INT2 at 5%, 10% and 25% ELRs).]

[Figure 8: Response times for 10 to 50 clients (2STP-INT2 and CS-INT2 at 5%, 10% and 25% ELRs).]

Next, we ran experiments simulating hot-spot accesses on the database. The INT2 access pattern was used for 90% of the database and, since there were 50 clients in the system, we allowed 50-way contention for the remaining 10%. The response times and cache-hit percentages for 2STP and the conventional method in this scenario are given in Table 2. The effectiveness of the cache is greater for 2STP in all our experiments. The significantly lower response times for 2STP suggest that the cost of re-executing updates at the server is not as high as that of waiting to acquire global locks.

Finally, we explore the length of recovery in the case of client crashes. Such crashes are simulated by having a client neither receive nor respond to messages for a random downtime. At the end of the crash period, the crashed client begins its recovery process. The results we present are in terms of the time required for a crashed client to become operational from the moment it comes back up. Figure 9 shows the times taken for recovery by both protocols on the y-axis; the x-axis represents the percentage of pages cached at the client that could have been modified (exclusively locked, in the case of lock-based CS). The 2STP protocol required approximately 1.5-2.5 times more time to recover from a crash and resume normal operation. This is because 2STP allows unrestricted updates on replicated objects.
[Figure 9: Time taken for client crash recovery versus the percentage of exclusively locked cached objects (5-25%), for CS-INT2 and 2STP-INT2.]