Performance Evaluation of Replica Control Algorithms in a Locally Distributed Database System

Chang S. Keum, Wan Choi
ETRI, Korea
[email protected]

Eui K. Hong
Seoul City University, Korea
[email protected]

Won Y. Kim, Kyu Y. Whang
KAIST, Korea
[email protected]

† This research was supported by the Korea Science and Engineering Foundation under grant No. 941-0900-051-1.
Abstract

Replication is a key factor for improving the availability and reliability of data in a distributed database system. A major restriction is that replicated copies should behave like a single copy. Many replica control algorithms have been proposed for solving this problem, but the performance of the various replica control algorithms has received relatively little attention. In this paper, the performance of five replica control algorithms is investigated using simulation in a locally distributed database system. The five alternatives studied are the read-one/write-all, primary copy, primary site, quorum consensus, and tree quorum algorithms.
1 Introduction
In a distributed system, data can be stored at several sites. Instead of keeping one copy of important data at a single site, multiple copies of the same data can be maintained at different sites. The benefits of data replication are the following [7, 16, 18]. First, data replication increases the data's availability: a single site failure does not make replicated data inaccessible, and the system can access the data in the presence of failures even though some of the redundant copies are not available. Second, replication increases the data's reliability: if a copy is accidentally destroyed, it can be reconstructed from another copy. Finally, replication enhances the performance of a read operation by allowing an authorized user to work on the copy that can be most easily accessed. However, the benefits of data replication must be balanced against the additional cost and complexity due to synchronization of the replicated data.

A major restriction of replication is that the replicated copies must behave like a single copy, i.e., mutual consistency of the replicated data must be preserved [16].
By mutual consistency, we mean that all copies converge to the same value and would be identical when all update activities cease. The inherent communication delays between the sites storing and maintaining copies of a replicated data item make it impossible to ensure that all copies are identical while updates are in progress. The principal goal of a synchronization mechanism for replicated data is to guarantee that all updates are applied to each copy in a way that assures mutual consistency.

Mutual consistency is not the only constraint that a distributed system must satisfy. In a system where several users access and update data concurrently, operations from different transactions may be interleaved and operate concurrently on the data for better system throughput. An uncontrolled interleaved execution of the read and write operations of transactions could produce incorrect results. Concurrency control is the activity of coordinating concurrent accesses to the database so as to provide the same effect as if each request were executed in a serial fashion. Concurrency control in a replicated database is more complicated than in a centralized system. To ensure correctness, the concurrent execution of transactions on replicated data must be equivalent to a serial execution on non-replicated data: multiple copies of a data item should appear as a single copy to the transaction. This requirement is known as one-copy serializability [4] and is enforced by the replica control algorithm.

Many replica control algorithms have been proposed. The simplest replica control algorithm is read-one/write-all (ROWA), in which a read operation is allowed to read any copy, and a write operation is required to write all copies of the data item. The available copy algorithm [3] is an enhanced version of ROWA in terms of the availability of write operations. In the primary copy method [2], one copy of an item is designated as the primary copy and, as such, is responsible for that item's activity.
The primary site algorithm is a simplified version of the primary copy algorithm. The majority consensus algorithm [18] was the first voting approach proposed. The quorum consensus algorithm [8] is a generalization of the majority consensus algorithm. Dynamic voting algorithms [11, 13] have been proposed to enhance the availability of the voting approach. Recently, some algorithms impose a logical structure, such as a tree or a grid, on the set of physical copies for efficiency of operations [1, 6].

Up to the present, research on replica control algorithms has mainly concentrated on increasing the availability of an operation in the case of site failures or network partitioning. However, assuming that site failures or network partitioning do not frequently occur, the performance of replica control algorithms in the normal case is a very important issue. In this paper, we examine the performance of five alternatives - read-one/write-all, primary copy, primary site, quorum consensus, and tree quorum - using simulation in a locally distributed database. We use a simulator based on a closed queuing model of a distributed database system for our performance studies. The performance results not only determine the relative efficiency of the five algorithms, but also help the reader obtain better insight into the trade-offs inherent in these algorithms.

The organization of the rest of the paper is as follows. Section 2 reviews the alternative replica control algorithms. Section 3 presents the detailed simulation model and simulation parameters. In Section 4, we describe several experiments conducted to investigate the performance of the five replica control alternatives and analyze the results. Finally, we conclude the paper in Section 5.
2 Replica Control Algorithms
In this section, we review the five basic replica control algorithms: read-one/write-all, primary copy, primary site, quorum consensus, and tree quorum.

2.1 Read-One/Write-All Algorithm

The simplest technique for maintaining replicated data is the read-one/write-all (ROWA) algorithm. In ROWA, a read operation is allowed to read any copy, and a write operation is required to write all copies of the data item. In this method, it suffices to set a lock on any copy of the data item for a read operation, while write locks are required on all copies for a write operation. A write operation is blocked until all copies to be updated have been successfully locked. All locks are held until the transaction has successfully committed or aborted. To support a write transaction, which includes a set of write operations, the centralized two-phase commit protocol [9] is employed. We handle deadlock via a deadlock prevention method based on the distributed wound-wait locking algorithm [14]. ROWA has the lowest read operation cost because only one copy is accessed by a read operation. A weakness of this method is the lower availability of write operations, because a write cannot be done upon the failure of any copy.
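To make the read/write asymmetry concrete, the following minimal sketch (our illustration, not the authors' CSIM simulator; the copies dictionary and exception name are assumptions) shows a ROWA read touching a single copy, while a write must reach every copy and cannot proceed if any copy is unavailable.

    # Minimal sketch of ROWA reads and writes (illustrative only; not the
    # authors' simulation model). "copies" maps site ids to the local value of
    # one replicated data object; locking and two-phase commit are omitted.

    class ReplicaUnavailable(Exception):
        pass

    def rowa_read(copies, up_sites):
        """Read any single available copy (here simply the lowest-numbered one)."""
        for site in sorted(copies):
            if site in up_sites:
                return copies[site]
        raise ReplicaUnavailable("no copy available for read")

    def rowa_write(copies, up_sites, new_value):
        """Write every copy; the write cannot proceed if any copy is unavailable."""
        if set(copies) - set(up_sites):
            raise ReplicaUnavailable("write blocked: some copy is down")
        for site in copies:
            copies[site] = new_value

    copies = {1: "v0", 2: "v0", 3: "v0"}
    rowa_write(copies, {1, 2, 3}, "v1")   # succeeds only because all sites are up
    print(rowa_read(copies, {2, 3}))      # any single copy suffices -> "v1"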
2.2 Primary Copy Algorithm

In the primary copy (PC) algorithm [2], each data object is associated with a known primary site, to which all updates in the system for the data object are first directed. Distributed INGRES [17] follows this approach. Different data objects may have different primary sites. In contrast, the primary site (PS) algorithm has only one primary site for every data object. All read operations for a data item must be performed at the primary site for that data item. Updates are propagated to all copies. In this method, the lock for a data item need be acquired only at the primary site for the data item. The actual operation may be performed on any copy once the lock has been granted. PC works well only if site failures are distinguishable from network failures. If the primary site for a data item fails, a new primary can be elected. When the network is partitioned, sites in different partitions could disagree on the identity of the primary copy.
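The difference between PC and PS is only in how primaries are assigned, as the small sketch below illustrates (the function and variable names are our assumptions, not from the paper): under PC each object can have its own primary site, while under PS a single site is the primary for every object, so every lock request converges on that one site.

    # Illustrative contrast between primary copy (PC) and primary site (PS).
    # Under PC, primaries are spread over the sites; under PS, one site is the
    # primary for every object. All names here are illustrative assumptions.

    def make_primary_map(objects, sites, single_primary=None):
        if single_primary is not None:                        # PS: one primary for all
            return {obj: single_primary for obj in objects}
        return {obj: sites[i % len(sites)]                    # PC: spread primaries
                for i, obj in enumerate(objects)}

    def write(obj, value, primary_of, copies):
        """The lock is needed only at the primary; the update reaches all copies."""
        primary = primary_of[obj]
        # acquire_lock(primary, obj)  -- locking itself is omitted in this sketch
        for site in copies[obj]:
            copies[obj][site] = value
        return primary

    objects, sites = ["x", "y", "z"], [1, 2, 3]
    copies = {obj: {s: None for s in sites} for obj in objects}
    pc_map = make_primary_map(objects, sites)                    # x->1, y->2, z->3
    ps_map = make_primary_map(objects, sites, single_primary=1)  # everything at site 1
    print(write("y", "v1", pc_map, copies), write("y", "v2", ps_map, copies))  # 2 1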
2.3 Quorum Consensus Algorithm

The first voting approach proposed was the majority consensus algorithm [18]. The quorum consensus (QC) scheme proposed by Gifford [8] is a generalization of the majority consensus algorithm. In this approach, every copy of a replicated item is assigned some number of votes. Every transaction must collect a read quorum of r votes to read an item, and a write quorum of w votes to write an item. Quorums must satisfy two constraints:
(1) r + w exceeds the total number of votes v assigned to the item, and
(2) w > v/2.

The first constraint ensures that there is a non-null intersection between every read quorum and every write quorum. Any read quorum is therefore guaranteed to contain a current copy of the item; version numbers are used to identify the most recent copy. The second constraint ensures that two writes cannot happen in parallel or, if the system is partitioned, that writes cannot occur in two different partitions on the same data item.

The QC algorithm works with any correct concurrency control algorithm. As long as the algorithm produces serializable executions, quorum consensus will ensure that the effect is just like an execution on a single-copy database. In summary, the QC approach ensures one-copy serializability and works well in the presence of a site crash or a network partition. Read availability can be given higher priority by choosing a small r. A weakness of this scheme is that writing a data item is fairly expensive, since a write quorum must be larger than a majority of the votes, i.e., w > v/2.
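The vote-counting mechanics can be sketched as follows (a hedged illustration under the paper's definitions of r, w, and v; the helper names and the one-vote-per-copy assignment are ours): the two constraints are checked once, and a read gathers any set of available copies whose votes reach r, returning the value carrying the highest version number.

    # Sketch of Gifford-style quorum consensus: vote assignment, the two quorum
    # constraints, and version numbers used to find the current copy.

    def quorums_valid(votes, r, w):
        """Check the constraints r + w > v and w > v/2."""
        v = sum(votes.values())
        return r + w > v and w > v / 2

    def collect_quorum(votes, up_sites, needed):
        """Gather available copies until their votes reach the required quorum."""
        chosen, gathered = [], 0
        for site in sorted(up_sites, key=lambda s: -votes[s]):
            chosen.append(site)
            gathered += votes[site]
            if gathered >= needed:
                return chosen
        return None                        # quorum not reachable

    def quorum_read(copies, versions, votes, up_sites, r):
        """Read a read quorum and return the value carrying the highest version."""
        quorum = collect_quorum(votes, up_sites, r)
        if quorum is None:
            raise RuntimeError("read quorum unavailable")
        newest = max(quorum, key=lambda s: versions[s])
        return copies[newest]

    votes = {1: 1, 2: 1, 3: 1}             # one vote per copy, so v = 3
    r, w = 2, 2                            # r + w = 4 > 3 and w = 2 > 3/2
    assert quorums_valid(votes, r, w)
    copies = {1: "old", 2: "new", 3: "new"}
    versions = {1: 1, 2: 2, 3: 2}
    print(quorum_read(copies, versions, votes, {1, 2}, r))   # every read quorum sees "new"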
2.4 Tree Quorum Algorithm

The tree quorum (TQ) algorithm [1] imposes a logical structure on the set of copies of a data item. In a failure-free environment, the protocol executes a read operation by reading one copy of the data item while guaranteeing fault tolerance of write operations. It also exhibits the property of graceful degradation, i.e., communication costs are minimal in a failure-free environment but may increase as failures occur.

For read operations, a read quorum contains the root or a majority of the children of the root. If any node in the majority of the children of the root fails, it can be replaced by a majority of its children, and so on recursively. In the best case, a read quorum contains only the root. For write operations, a write quorum contains the root, a majority of the children of the root, majorities of the children of the selected children of the root, and so on.

A weakness of TQ is that certain patterns of failures, even if the number of failed copies is small, would reduce the write availability of a data item [7]. For example, when the root fails, a write operation is suspended even if all other nodes are accessible.
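The recursive quorum construction can be sketched as follows (our simplified illustration of the rules just described, not the full protocol of [1]; the tree layout and helper names are assumptions): a read quorum degrades gracefully from the root alone to majorities of children, while a write quorum always needs the root plus, recursively, majorities below it.

    # Illustrative recursion for tree quorum (TQ) read and write quorums over a
    # logical tree of copies. A real protocol would also retry alternative
    # majorities; this sketch stops at the first one found.

    def read_quorum(node, up):
        """Root alone if it is up, else read quorums from a majority of its children."""
        if node["id"] in up:
            return [node["id"]]
        children = node.get("children", [])
        if not children:
            return None
        need = len(children) // 2 + 1
        collected, ok = [], 0
        for child in children:
            sub = read_quorum(child, up)
            if sub is not None:
                collected.extend(sub)
                ok += 1
                if ok == need:
                    return collected
        return None

    def write_quorum(node, up):
        """Root plus write quorums from a majority of children; fails if the root is down."""
        if node["id"] not in up:
            return None
        children = node.get("children", [])
        if not children:
            return [node["id"]]
        need = len(children) // 2 + 1
        collected, ok = [node["id"]], 0
        for child in children:
            sub = write_quorum(child, up)
            if sub is not None:
                collected.extend(sub)
                ok += 1
                if ok == need:
                    return collected
        return None

    # A 4-copy tree: root 1 with children 2, 3, 4.
    tree = {"id": 1, "children": [{"id": 2}, {"id": 3}, {"id": 4}]}
    print(read_quorum(tree, up={1, 2, 3, 4}))   # [1]      (root alone, failure-free case)
    print(read_quorum(tree, up={2, 3, 4}))      # [2, 3]   (majority of children)
    print(write_quorum(tree, up={1, 2, 3, 4}))  # [1, 2, 3]
    print(write_quorum(tree, up={2, 3, 4}))     # None     (root down -> write suspended)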
3 Simulation Environment

3.1 Background

For the simulation study, we make some assumptions. We consider these assumptions reasonable, since either they are used in real prototypes, their performance has been demonstrated to be good, or at the least they are simple to implement. A data object is an abstract unit of data; it might be a file, a page, or a record. Read and write are the two fundamental types of logical operations, implemented by corresponding physical operations on one or more copies of the data object. We do not consider local computation on the value of a data object. The assumptions we make are as follows.

(1) The smallest unit of data accessible by a user is the data object.
(2) Data objects are fully replicated in all sites.
(3) The two-phase locking algorithm [9] is used for concurrency control.
(4) The wound-wait locking algorithm [14] is used for deadlock prevention.
(5) The centralized two-phase commit protocol [9] is chosen.
(6) Object-level locking is selected.
(7) Site failures and network partitioning do not occur.

3.2 Simulation Model

The simulation model of the distributed database system consists of a set of database sites (DB sites) connected by a high-speed local area network. In this study, the local area network is assumed to be a broadcast network. Figure 1 shows the detailed model of a DB site in the distributed database system. This is a modified version of the models by Lu [12] and Hong [10]. Each DB site includes both terminals (users) and physical resources for storing and processing data and messages, such as the CPU, disks, and communication channels. Transactions originate from terminals, to which the results are returned. The DB site also includes several queues. The CPU serves requests from two queues:

(1) the operation queue, for read/write operations, concurrency control, and two-phase commit processing;
(2) the message queue, for message processing.

A transaction enters the operation queue whenever read/write operations, concurrency control, or two-phase commit processing need to consume CPU time. A transaction enters a disk queue in order to read/write a data object from/to disk, or to write two-phase commit log records to disk. If the transaction must be blocked as a result of a concurrency control request, it enters the blocked queue until it is able to proceed. In addition to using the communication lines, messages also consume some CPU time; hence all incoming and outgoing messages enter the message queue to receive CPU service. The outgoing messages then enter the send queue to receive service from the network.
[Figure 1: DB site model.]
The two queues connected to the CPU are assumed to be served alternately. Each disk has its own queue. Requests in the disk queues and the send queue of a DB site are served FCFS (First-Come, First-Served), whereas the two CPU queues are served in a round-robin fashion.

The simulation model of each replica control algorithm has been implemented using the concurrent simulation language CSIM [15] and runs on a Sun workstation. We conducted 25,000 transactions for each simulation run.
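The queue structure of Figure 1 can be summarized in a small skeleton (a structural sketch only, with no service times; the paper's model is written in CSIM, so the class and attribute names below are our assumptions): the two CPU queues are served in round-robin order, while each disk queue is served FCFS.

    # Structural sketch of the DB-site model of Figure 1 (queues only, no timing).
    from collections import deque

    class DBSite:
        def __init__(self, num_disks):
            self.operation_queue = deque()   # CPU work: read/write, CC, 2PC processing
            self.message_queue = deque()     # CPU work: incoming/outgoing messages
            self.disk_queues = [deque() for _ in range(num_disks)]  # FCFS per disk
            self.blocked_queue = deque()     # transactions blocked by concurrency control
            self.send_queue = deque()        # outgoing messages waiting for the network
            self._next_cpu = 0               # round-robin pointer over the two CPU queues

        def cpu_step(self):
            """Serve the two CPU queues in round-robin fashion, one request per step."""
            queues = [self.operation_queue, self.message_queue]
            for _ in range(2):
                q = queues[self._next_cpu]
                self._next_cpu = 1 - self._next_cpu
                if q:
                    return q.popleft()
            return None

        def disk_step(self, disk_id):
            """Serve one request FCFS from the given disk queue."""
            q = self.disk_queues[disk_id]
            return q.popleft() if q else None

    site = DBSite(num_disks=2)
    site.operation_queue.append("T1: read object")
    site.message_queue.append("msg: lock grant")
    print(site.cpu_step(), site.cpu_step(), site.cpu_step())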
3.3 Simulation Parameters
Table 1 lists the simulation parameters used in the model of this paper. We describe several simulation parameters that need some additional explanation. The parameter think-time is the mean of an exponentially distributed thinking period between the completion of one transaction and the submission of the next one at a terminal. The parameter rt-ratio is the ratio of the number of read transactions to the total number of read and write transactions.
The parameter access-objects is the mean number of data objects accessed; it also specifies the mean number of cycles through the CPU and data disks per transaction. The parameter cpu-data-time specifies the average amount of CPU time required for a transaction to process a data object when reading or writing it. The actual number of data objects accessed ranges uniformly between half and twice the average, and cpu-data-time is exponentially distributed. The parameter msg-cpu-time captures the cost of protocol processing for sending or receiving a message.

Table 2 shows the values of the simulation parameters in our experiments. Deciding on parameter values for evaluating performance is a difficult problem; we set the values based on references [5] and [10]. The system consists of a varying number of sites, from 3 to 15. The number of terminals at each site varies from 2 to 18. The read transaction ratio varies from 0 to 1. The number of data objects in a DB site is 10000. A transaction accesses an average of 10 data objects to commit execution. The data objects accessed by transactions are chosen uniformly from the database. It takes a transaction an average of 8 milliseconds to process each data object. Broadcast on the local area network has an effective transfer rate of 4M bits/sec; the nominal transfer rate is 24M bits/sec (3M bytes/sec).
Table 1: Simulation parameters.

    Name             Description
    num-sites        number of sites in the distributed database system
    num-terms        number of terminals in a DB site
    num-disks        number of disks in a DB site
    total-objects    number of data objects in a DB site
    think-time       mean think time for the terminals
    rt-ratio         read transaction ratio
    access-objects   number of objects accessed by a transaction
    read-quorum      read quorum
    write-quorum     write quorum
    cpu-data-time    CPU time needed for accessing a data object
    cc-time          CPU time needed for concurrency control
    io-data-time     I/O time needed for accessing a data object
    -                I/O time needed for writing a log
    msg-cpu-time     per-message cost, such as message packing and task switching
    msg-byte         per-byte cost needed for transferring one byte over the local area network
    short-msg-size   size of short messages in bytes
    long-msg-size    size of long messages in bytes

Table 2: Parameter settings for the simulation (recoverable entries).

    Name             Setting
    num-sites        3 - 15 sites
    num-terms        2 - 18 terminals
    total-objects    10000 objects
    rt-ratio         0 - 1
    access-objects   10 objects (mean)
    write-quorum     floor(num-sites/2) + 1
    cc-time          1.0 milliseconds
    cpu-data-time    8.0 milliseconds
    io-data-time     20.0 milliseconds
    long-msg-size    4096 bytes
3.4 Performance Measure

The primary performance measure used in this paper is the response time. Response time is the difference between the time when a terminal first submits a new transaction and the time when the transaction returns to the terminal having successfully completed; it is measured in seconds. Several secondary performance measures are used in analyzing the results of the experiments. A transaction might be aborted and restarted several times before it commits; thus the total number of restarts becomes a critical metric for the concurrency level of each algorithm. The success ratio is defined as the ratio of the total number of committed transactions to the total number of committed and aborted transactions. Queuing delays are also measured to analyze the bottleneck of each algorithm. The relative importance of several costs is also given in some cases.

4 Experiments and Results

In this section, we present the performance of the five replica control alternatives - read-one/write-all (ROWA), primary copy (PC), primary site (PS), quorum consensus (QC), and tree quorum (TQ) - under various simulation conditions. Three experiments are performed using the detailed simulation model, and their results are discussed and analyzed. Due to space limitations, we do not present all our results but have selected the graphs that best illustrate the differences in performance of the algorithms. Among the parameters in Table 2, num-sites, num-terms, and rt-ratio vary over some ranges, with the following default values: num-sites = 7, num-terms = 10, and rt-ratio = 0.75.

4.1 Experiment 1: Varying Number of Sites

The purpose of this experiment is to investigate the impact of the number of sites on the performance of the five replica control alternatives. The number of sites is varied between 3 and 15.
As the number of sites increases, the number of copies also increases, since we assume that data objects are fully replicated in all sites; thus we see how performance is impacted by data replication. Figures 2 and 3 show the response time results obtained for read and write transactions. For both read and write transactions, PC performs the best, followed by TQ, ROWA, QC, and finally PS. As the number of sites increases, the performance of PS degrades significantly, while the performance of PC and TQ is little affected by the increase in the number of sites.
[Figure 2: Response time of read transactions (seconds) vs. number of sites.]

[Figure 3: Response time of write transactions (seconds) vs. number of sites.]

[Table 3: Relative weights of component costs (CPU, disk, communication, queuing) for read transactions (num-sites = 15).]

[Table 4: Relative weights of component costs (CPU, disk, communication, queuing) for write transactions (num-sites = 15).]

[Table 5: Relative weights of various queuing delays for read transactions (num-sites = 15).]

[Table 6: Relative weights of various queuing delays for write transactions (num-sites = 15).]
Tables 3 and 4 give the relative weights of CPU cost, disk cost, communication cost, and queuing delay (accumulated over all queues) of read and write transactions when the number of sites is 15. In fact, queuing delay has the highest weight for both transaction types, and for all the replica control schemes. Tables 5 and 6 summarize the relative weight of each queuing delay (associated with the physical resources in Figure 1) of read and write transactions when the number of sites is 15. Read and write transactions have the highest delay in the disk queue. As anticipated, the queuing delay in the send queue turns out to be negligible for both read and write transactions. Since the replica control algorithms have different mechanisms for read/write operations, the increments of the queuing delays are also different. Figure 4 shows the queuing delay of write transactions in the disk queue. The queuing delay of PS in the disk queue is obtained from the primary site, because all read/write activities occur only at that site.
On the other hand, the curves of ROWA and PC for write transactions are obtained by dividing the total queuing delay in the disk queues over all sites by the number of sites. The other queuing delays represent the average values obtained from the related sites, even though the amount of resources used, and thus the amount of queuing delay incurred, may differ between the coordinator site and the apprentice site(s).

Figure 5 presents the success ratio of write transactions. In the figure, PC has the highest success ratio, and TQ the next highest. QC has the lowest success ratio by far, and PS the next lowest. ROWA has a higher success ratio when the number of sites is small, but a lower success ratio when the number of sites is large. Since QC locks a majority of the copies of a replicated item for a read or write operation, leading to many conflicts between read and write transactions, it results in the lowest success ratio. PC needs only one lock, on the primary copy, for a read or write operation, so it has the highest success ratio. TQ requires one lock for a read operation and a set of locks comprising a write quorum for a write operation. ROWA needs one lock for a read operation and all locks for a write operation; thus, as the number of sites increases, the number of locks for a write operation also increases. PS needs only one lock at the primary site, like PC, but read and write transactions must access the primary site to lock the associated data objects, leading to congestion at the primary site. Thus, write transactions are frequently aborted at the primary site and have a low success ratio.

As shown above, the performance difference among the replica control alternatives is mainly due to the different success ratios and queuing delays. PC has the best performance for read and write transactions due to the highest success ratio and the shortest queuing delay. It appears that ROWA should give the best performance for read transactions, since ROWA accesses only the local copy of the data object for a read operation.
However, ROWA shows in-between performance for read and write transactions. The reason is that write transactions need to access all remote sites to perform a write operation; thus, write transactions enter several queues and use several resources at all sites. Furthermore, write transactions in ROWA increase the queuing delays of read transactions in several queues. It is expected that QC would give bad performance for read transactions and good performance for write transactions, since write transactions in all algorithms except QC must access all remote sites in a locally distributed database system. But QC shows bad performance for write transactions as well, due to its lowest success ratio. In addition, QC is close to a serial execution compared with the other algorithms. TQ differs from QC only in the way of collecting a quorum, and this has a great influence on the performance difference between the two algorithms. TQ has good performance for read and write transactions. PS has the worst performance due to congestion at the primary site. In fact, PS is a simplified version of PC in which every data object has the same primary site; after all, this leads to congestion at the primary site.
[Figure 4: Queuing delay of write transactions in the disk queue.]

[Figure 5: Success ratio of write transactions.]
4.2 Experiment 2: Varying Number of Terminals

Experiment 2 is conducted to see how the increase of the load at each site affects the performance of the five replica control algorithms. The level of resource contention is controlled by varying the number of terminals, which is varied from 2 to 18. Figure 6 shows the response time of read transactions, and Figure 7 shows that of write transactions. The relative performance ordering of the algorithms is the same as in Experiment 1. As the number of terminals increases, the queuing delays also increase, since the increase in concurrent transactions results in contention for resources such as the CPU, disks, and the communication channel. As in Experiment 1, factors such as the queuing delays and the success ratio have a great influence on the relative performance ordering of the algorithms.

[Figure 6: Response time of read transactions.]

[Figure 7: Response time of write transactions.]
4.3 Experiment 3: Different Read Transaction Ratio

To investigate the effects of the read transaction ratio on performance, this experiment varies the read transaction ratio from 0 to 1. The read transaction ratio is 0 when no read transactions are generated; conversely, it is 1 when all generated transactions are read transactions. Figures 8 and 9 present the response time results of read and write transactions, respectively.

The ordering of the relative performance of the algorithms varies with the read transaction ratio. PC outperforms the other replica control schemes over the entire range of read transaction ratios. For a low read transaction ratio, ROWA has performance inferior to those of QC and TQ for read transactions; however, as the read transaction ratio increases, the performance ordering reverses, and ROWA provides better performance than QC or TQ. The improvement in the response time of ROWA is mainly attributed to the decrease of queuing delay in the disk queue as the read transaction ratio increases. That is, as the read transaction ratio increases, contention for the various resources weakens, since the number of write transactions, which consume resources heavily, decreases. ROWA for write transactions shows trends similar to those for read transactions, except that ROWA has the worst performance when the read transaction ratio is 0. The performance of QC is little affected by the increase of the read transaction ratio. This is related to the preference given to read and write operations in each algorithm: in QC, a write operation has the same preference as a read operation, whereas a read operation has higher preference than a write operation in the other algorithms, i.e., they trade off read availability against write availability. TQ shows better performance as the read transaction ratio decreases. This is mainly due to the fact that a read operation in TQ needs only one remote access, while a write operation needs to access all the remote sites included in a write quorum. PS has the worst performance over almost the entire range.

[Figure 8: Response time of read transactions.]

[Figure 9: Response time of write transactions.]
5 Conclusions
In this paper, we have investigated the performances of five replica control algorithms for a locally distributed database system using simulation. The five algorithms studied are the read-one/write-all, primary copy, primary site, quorum consensus, and tree quorum algorithms. For the simulation, we assume that data objects are fully replicated in all sites, and that site failures and network partitioning do not occur. This paper concentrates primarily on the performance of replica control algorithms in normal cases.
The simulation results indicate that the performance difference among the replica control algorithms is mainly due to the different queuing delays and success ratios. In terms of the relative performance of the algorithms, we have found that PC and TQ dominate ROWA, QC, and PS. QC and PS have poor performance under almost all simulation conditions due to frequent transaction restarts and higher queuing delays. The performance of ROWA changes sharply as the system load and the read transaction ratio are varied. It is difficult to make an absolute statement on the performance of the algorithms, because the system configuration and the user requirements of a specific application determine the relative efficiency of the algorithms. In general, PC is preferable when network or site failures seldom occur; when such failures occur frequently, TQ offers an acceptable solution. For future research, we need to consider site failures and network partitioning. Also, the simulation environment for a distributed database system needs to incorporate fault-tolerance issues.
References

[1] D. Agrawal and A. El Abbadi, "The Tree Quorum Protocol: An Efficient Approach for Managing Replicated Data," Proc. 16th Int'l Conf. on Very Large Data Bases, Australia, Aug. 1990, pp. 243-254.
[2] P. A. Alsberg and J. D. Day, "A Principle for Resilient Sharing of Distributed Resources," Proc. 2nd Int'l Conf. on Software Engineering, San Francisco, Calif., Oct. 1976, pp. 562-570.
[3] P. A. Bernstein and N. Goodman, "An Algorithm for Concurrency Control and Recovery in Replicated Distributed Databases," ACM Trans. on Database Systems 9(4), Dec. 1984, pp. 596-615.
[4] P. A. Bernstein et al., Concurrency Control and Recovery in Database Systems, Addison-Wesley, 1987.
[5] M. J. Carey and M. Livny, "Conflict Detection Tradeoffs for Replicated Data," ACM Trans. on Database Systems 16(4), Dec. 1991, pp. 703-746.
[6] S. Y. Cheung et al., "The Grid Protocol: A High Performance Scheme for Maintaining Replicated Data," Proc. 6th Int'l Conf. on Data Engineering, Jan. 1990, pp. 438-445.
[7] S. M. Chung, "Enhanced Tree Quorum Algorithm for Replicated Distributed Databases," Proc. 2nd Database Systems for Advanced Applications, Daejon, Korea, April 1993, pp. 83-89.
[8] D. Gifford, "Weighted Voting for Replicated Data," Proc. 7th ACM SIGOPS Symp. on Operating Systems Principles, Pacific Grove, Calif., Dec. 1979, pp. 150-162.
[9] J. Gray, "Notes on Data Base Operating Systems," in Operating Systems: An Advanced Course, Lecture Notes in Computer Science 60, edited by R. Bayer et al., Springer-Verlag, 1979.
[10] E. K. Hong, "Performance of Catalog Management Schemes for Running Access Modules in a Locally Distributed Database System," Proc. 19th Int'l Conf. on Very Large Data Bases, Ireland, Aug. 1993, pp. 194-205.
[11] S. Jajodia and D. Mutchler, "Dynamic Voting Algorithms for Maintaining the Consistency of a Replicated Database," ACM Trans. on Database Systems 15(2), June 1990, pp. 230-280.
[12] H. Lu, Distributed Query Processing with Load Balancing in Local Area Networks, Ph.D. Dissertation, Computer Sciences Technical Report #624, Computer Sciences Department, Univ. of Wisconsin-Madison, Dec. 1985.
[13] J. F. Paris and D. Long, "Efficient Dynamic Voting Algorithms," Proc. IEEE Int'l Conf. on Data Engineering, 1988, pp. 268-275.
[14] D. J. Rosenkrantz et al., "System Level Concurrency Control for Distributed Database Systems," ACM Trans. on Database Systems 3(2), June 1978, pp. 178-198.
[15] H. Schwetman, CSIM User's Guide, Microelectronics and Computer Technology Corporation, 1990.
[16] S. H. Son, "Replicated Data Management in Distributed Database Systems," ACM SIGMOD Record 17(4), Dec. 1988, pp. 62-69.
[17] M. Stonebraker and E. Neuhold, "A Distributed Data Base Version of INGRES," 1977 Berkeley Workshop on Distributed Data Management and Computer Networks, May 1977, pp. 19-36.
[18] R. H. Thomas, "A Majority Consensus Approach to Concurrency Control for Multiple Copy Databases," ACM Trans. on Database Systems 4(2), June 1979, pp. 180-209.