Database Computing on Clusters of Workstations Alex Delis
Department of Computer and Information Science Polytechnic University Brooklyn, NY 11201
Abstract

The cooperation between workstations and database servers has become the standard paradigm in database computing. Numerous workstations are usually clustered along with database servers in a LAN to offer high performance processing. There are various ways to organize the resources available in such aggregate architectures. The overall performance of the cluster can be significantly improved when system designers can delegate as much processing as possible to the clients of the cluster and exploit local resources such as disk units, client memory and CPUs. In this paper, we describe our work in this area and outline open problems.
1 Introduction

Corporations and government agencies are deploying database servers at an ever increasing rate in order to meet their basic business requirements. The demand for retrieval and manipulation of very large amounts of data is also increasing rapidly [1, 28]. This trend calls for high throughput database systems and scalable architectures that demonstrate excellent performance characteristics. Previous efforts in the realization of high performance database computing include database machines and multiprocessor databases [3]. Although these efforts offered solutions in the area of database performance, they came at excessive cost since they required specialized hardware and software. In recent years, the Client-Server computing paradigm [27] has become a popular vehicle for the development of modern database systems. It provides the necessary infrastructure that potentially
may satisfy both the high performance and the scalability requirements [11, 7]. In addition, it has been adopted as a standard by DBMS suppliers and used in commercial products [16, 14], as well as in a number of research prototypes [2, 25]. Technological advances coupled with reduced pricing in hardware have created a new reality in the database field [26]. More specifically, we have experienced the wide availability of inexpensive workstations and PCs, the introduction of large, fast and reliable disk units, as well as the appearance of fast local area networks [21]. PCs, workstations and local area networks now enjoy a large installed base in both corporations and organizations. Thus, there is movement towards a new era of intense data sharing and networked database computing [1, 26]. These advancements have paved the way for the introduction of Client-Server Database Systems (CS-DBMSs). The fundamental concept in CS-DBMSs is that a dedicated machine runs a DBMS and maintains the main centralized database (DBMS-Server). The users of the system access the database through either their workstations or PCs, via a local area network. The interaction between clients and server is achieved by the underlying operating systems and their interprocess communication abstraction mechanisms [18]. The database functionality delegated from the main server to the workstation (or client) may vary from the simple execution of the presentation manager all the way to running a simplified workstation database system that stores, accesses, processes and coordinates action with the system's server. Along these lines, a number of such aggregate database architectures have been proposed. In the simplest formulation, a cluster of workstations using the Standard
Client-Server (SCS) approach interacts with a database engine. The server tries to satisfy all client requests through light-weight processes [13]. Although the environment in SCS is distributed, the DBMS is centralized and therefore transaction handling is easier than in distributed databases. Database servers in such an architecture may become the system's bottleneck as soon as the number of clients attached per cluster increases significantly. Alternative CS-DBMS approaches advocate the delegation of database tasks to clients, made possible by the computing resources available at this level [11, 10, 6, 8]. The rationale behind these architectures is the off-loading of the processing that the database server has to perform. The goal of this paper is to present a range of such clustered database architectures, discuss briefly their performance characteristics, and outline future research directions in the area.
2 Clustered Architectures

The reference architecture for the clustered architectures is shown in Figure 1. In general, clients may interact with more than one server within the boundaries of a LAN.

[Figure 1: Database Resources on the Network. The figure shows full database engines (user interface, scheduler/transaction parser, transaction decomposition and planning, buffer manager, concurrency controller, resource managers, recovery module, log manager, access methods, server disks) together with client machines (application, client buffer, with or without a client disk), all attached to a LAN/WAN through network interfaces.]

The main question asked is how to attain the highest possible utilization of the available resources and derive high-performance database characteristics in this setting. In the following subsections,
we present a number of clustered architectures that try to answer this question.
2.1 Standard Client-Server (SCS)

The Standard Client-Server database architecture (SCS) is an extension of the traditional centralized database system model. Its origins can be found in engineering applications, where data are mostly processed in powerful clients, while centralized repositories with check-in and check-out protocols are predominantly used for maintaining data consistency. In an SCS database, each client runs an application on a workstation but performs database access through the server. The communication between server and clients is typically done through remote calls over a local area network (LAN) [27]. The processing of the applications is carried out at the client sites, leaving the server free to carry out database work only. A database process running on the server machine waits to be contacted by client processes. If no request is issued by the clients, the server process goes to sleep waiting for some request to occur. In general, clients may access database servers from various locations. However, in the remainder of this paper, we will consider issues in the context of a single "cluster." A cluster consists of one server and a varying number of attached clients. Such a cluster provides the basis for our discussion. A server request is processed by spawning a lightweight process for every received client request. While in progress, all server database processes interact in an interleaved manner with system resources such as the CPU, the disk unit, buffer spaces, etc. As soon as a server process completes its computation, it passes its results and/or messages through the open communication link to the appropriate client and terminates. The Standard Client-Server architecture off-loads CPU cycles from the server to the clients. The application programs, along with other interface utilities such as the DBMS presentation manager, are run on the clients without affecting the server.
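To make this interaction concrete, the following minimal sketch shows the shape of the SCS request loop. It is an illustration only; names such as handle_request and execute_on_server_db are our own placeholders and not part of any particular DBMS.

```python
# A minimal SCS sketch: the server sleeps until contacted, then spawns
# a lightweight thread per client request; all database work stays here.
import socket
import threading

HOST, PORT = "localhost", 54321          # hypothetical cluster addresses

def execute_on_server_db(query: str) -> str:
    # Placeholder for the centralized DBMS engine (parsing, locking, I/O).
    return f"results for: {query}"

def handle_request(conn: socket.socket) -> None:
    """Lightweight server process: run the query, return results, terminate."""
    with conn:
        query = conn.recv(4096).decode()
        result = execute_on_server_db(query)   # CPU + disk work at the server
        conn.sendall(result.encode())          # results flow back to the client

def server_loop() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        while True:                            # sleep until some request occurs
            conn, _addr = srv.accept()
            threading.Thread(target=handle_request, args=(conn,)).start()
```

Each accepted connection gets its own lightweight thread, mirroring the per-request lightweight processes described above, while the application itself runs on the client.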
2.2 Page-Server (PS)

This CS-DBMS organization takes advantage of the existing client buffer space to cache database pages pertinent to the processing of a client. The availability of diskless workstations was mainly responsible for the introduction of this configuration. The main objective of the Page-Server configuration is to improve response time by utilizing not only the available buffer space of the clients but also their processing capability. In this way, a number of database functions, such as presentation manager execution, transaction parsing, and query processing and optimization, can take place almost exclusively at the client level. The role of the server is to execute low level DBMS operations such as locking and page reads/writes. Therefore, the server has to maintain the lock and data managers. There is no special requirement on the type of jobs scheduled for execution at the server. The clients perform the query processing and determine the plan to be executed for every submitted request. A number of locking schemes have been proposed for this environment in [30, 29, 5]. As data pages are retrieved, they are sent to the diskless client's memory for carrying out the rest of the database processing. An early Page-Server design was described in [24] and termed the RAD-UNIFY architecture. Some experimental results were presented there in which mostly look-up operations perform much better than in the Standard Client-Server configuration. In [10], we have shown that under mixed workloads of various update selectivities the Page-Server configuration achieves better performance results than its SCS counterpart. The improvement depends on the size of the available client buffers set aside for database processing and the available bandwidth of the LAN used.
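A sketch of the client side of this organization follows, assuming a hypothetical server object that performs only locking and raw page reads; the point is that the page cache and query execution live at the client.

```python
# Illustrative Page-Server client: query parsing, optimization and
# execution happen here; the server is reduced to page I/O and locking.
class PageServerClient:
    def __init__(self, server, buffer_pages: int):
        self.server = server            # hypothetical page server stub
        self.capacity = buffer_pages    # client memory set aside for pages
        self.buffer = {}                # page_id -> page image (in-memory cache)

    def get_page(self, page_id):
        if page_id in self.buffer:      # buffer hit: no server interaction
            return self.buffer[page_id]
        page = self.server.read_page(page_id)        # server locks + reads page
        if len(self.buffer) >= self.capacity:        # simplistic FIFO eviction;
            self.buffer.pop(next(iter(self.buffer))) # a real client might use LRU
        self.buffer[page_id] = page
        return page

    def run_plan(self, page_ids):
        # The client executes its own query plan, pulling pages as needed.
        return [self.get_page(pid) for pid in page_ids]
```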
2.3 Persistent Page-Server (PPS)

The Page-Server architecture relieves most of the CPU load on the server but does little for the biggest bottleneck, the I/O data manager. The Persistent Page-Server architecture attempts to take advantage of the clients' disk managers. This architecture exploits the idea of caching server data pages onto the client disks. These data pages are of interest to the particular clients. The fundamental assumption is that clients define their own "operational space". In many instances, this space corresponds to a limited area of the server database. This data caching may happen during an initial phase in which clients download pages from the server. If there are no server updates, then after the end of this initial phase the clients can work from their local replicas of the server database.
Problems arise, however, when the server database is updated. The consistency of the cached data needs to be maintained at all times. We achieve that by using locking at the server site for all client initiated transactions. Before a client query starts materialization, the client has to dispatch a message to the server containing the description of the job to be performed. At this point, and based on the client-furnished timestamp, the server has to perform a number of tasks, namely:
- Lock all the appropriate server pages involved in the transaction.
- Decide which pages are obsolete at the client site.
- Read the obsolete pages from the disk and ship them over to the requesting client.
- Release all the acquired locks once the shipping is complete.
Obsolete client pages can be determined because the client furnishes a timestamp indicating the last time its pages were consistent with those of the server. As the server locks pages, it verifies their consistency with the client replicas. After the client receives all the needed pages from the network, it has to flush them to its local disk manager and overwrite the obsolete images. Subsequently, it can start materializing the query using the client's CPU and disk manager exclusively. Client initiated updates are shipped through the network to the server and treated as transactions demanding exclusive locks from the server lock manager. Once all pages have been updated, the server lock manager can release all the acquired write locks.
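The four server tasks listed above can be summarized in a short sketch; the lock-manager, page-metadata and shipping interfaces below are assumptions made for illustration, not the paper's actual implementation.

```python
# Sketch of the PPS server-side refresh protocol described above.
def serve_client_refresh(server, client, job):
    """Bring the client's cached pages up to date before it runs `job` locally."""
    pages = server.pages_for(job)                  # pages the job will touch
    for p in pages:
        server.lock_manager.lock_shared(p)         # 1. lock the server pages
    obsolete = [p for p in pages                   # 2. stale at the client?
                if server.last_modified(p) > client.timestamp]
    for p in obsolete:
        client.ship(server.read_page(p))           # 3. read and ship new images
    for p in pages:
        server.lock_manager.unlock(p)              # 4. release the acquired locks
    client.timestamp = server.current_time()       # client replicas now consistent
```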
2.4 Enhanced Client-Server (ECS)

The Enhanced Client-Server architecture (ECS) [8] offers relief to database servers by utilizing both the CPU cycles and the I/O capabilities of its clients. The functionality of the clients is "enhanced" to that of a simplified single-user DBMS that takes advantage of the often voluminous long-term memory spaces available at the client level. In this setting, clients may cache data of interest (i.e., query results, and not only pages) onto their local disk managers for future re-use. Enforcement of data consistency throughout the architecture is undertaken by the server, which is the primary site in the cluster. Hence, the server becomes
the caretaker for updates and their propagation to the pertinent clients. By caching query results over time, a client creates a server database subset on its own disk unit. A client database is a partial replica of the server database that is of interest to the client's application(s). Later on, a user can integrate into her/his client database private data not accessible to and of no interest to others. There are two major advantages in this kind of disk caching: firstly, repeated requests for the same server data are eliminated, and secondly, system performance is boosted as clients can access local data copies. Nonetheless, in the presence of updates the system needs to ensure proper propagation of new item values to the appropriate clients. Every time a client decides to cache the results of some query on the local disk, it creates a "new" local relation. This newly created entity is derived from the server tables and must be affiliated with these tables. Each such relationship between client cached data entities and server relations is designated by a client binding. Every such binding is described by three elements: the participating server relation(s), the applicable condition(s) on the relation(s), and a timestamp. The condition is essentially the filtering mechanism that decides which tuples qualify for a particular client. The timestamp indicates the last time that a client has seen the server updates that may affect its cached data. Bindings can be either stored in the server's catalog or maintained by the individual clients. Updates are directed for execution to the server, which is the primary site. Pages to be modified are read into main memory, updated, and flushed back to the server disk unit. Every server relation is associated with an update propagation log which consists of timestamped inserted tuples and timestamped qualifying conditions for deleted tuples. Only updated (committed) tuples are recorded in these logs. The number of bytes written to the log per update is generally much smaller than the size of the pages read into main memory. Client query processing against local data is preceded by a request for an incremental update of server data. The server is required to look up the portion(s) of the query-involved relation logs that carry timestamps greater than the one seen so far by the submitting client. This may be done once the binding information for the requesting client is available. The binding information enables the server to perform the correct tuple filtering. Only relevant fractions (i.e., increments) of the modifications (relation update logs) are propagated to the client's site. Incremental algorithms such as those described in [23] can be used to generate the needed differential files. As soon as the increments are received and flushed to the local disk manager, the client may initiate the materialization of the query.
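A binding and the server's incremental filtering step might look as follows; the log record layout and the example predicate are our own assumptions for illustration.

```python
# Sketch of an ECS client binding and the server-side incremental step.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Binding:
    relation: str                      # participating server relation
    predicate: Callable[[dict], bool]  # qualifying condition on tuples
    timestamp: int                     # last server update seen by the client

def increment_for(binding: Binding, update_log: list) -> list:
    """Return only the committed log records newer than the client's
    timestamp that satisfy the client's filtering condition."""
    return [rec for rec in update_log
            if rec["ts"] > binding.timestamp and binding.predicate(rec["tuple"])]

# Example: a client caches EMP tuples with salary > 50K and has seen
# all server updates up to timestamp 41.
emp_binding = Binding("EMP", lambda t: t.get("salary", 0) > 50_000, timestamp=41)
log = [{"ts": 40, "tuple": {"id": 1, "salary": 60_000}},
       {"ts": 42, "tuple": {"id": 2, "salary": 70_000}}]
assert increment_for(emp_binding, log) == [log[1]]  # only the newer record ships
```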
3 Performance Characteristics

In this section, we present simulation results on the performance of the above architectures under continuous client streams. Continuous client streams create the data access patterns and are made up of sequences of jobs. A sequence of jobs is created by mixing both queries and updates in a predefined proportion. In the two extreme cases, we can have query-only or update-only streams. Every client is assigned to execute such a stream. In the experiment presented here, each client submits a continuous stream of both queries and updates. Updates constitute 10% of all the jobs and are uniformly distributed over the queries. Queries are selected randomly from a predefined set of queries whose page selectivity ranges up to 10% of the server relations. Every update operation modifies one of the tables of the server database, and the page selectivity of each such operation ranges from 0% (pure query workload) to 8%, which is a rather high upper limit for updates. The page update selectivity remains the same throughout all the modifications of the same job stream. Our main objective is to measure the performance achieved by the cluster as the number of clients attached per server increases. The main performance criterion for these experiments is the overall average throughput measured in jobs per minute (JPM). Client think time is set to zero in order to examine the behavior of the configurations under stringent conditions.

[Figure 2: Results for the SCS and PS Configurations. Throughput (JPM) versus number of clients (5 to 250) for the 0%, 4% and 8% update selectivity curves of both configurations.]

[Figure 3: Results for the PPS and ECS Configurations (no updates). Throughput (JPM) versus number of clients (5 to 250) for the 0% PPS and 0% ECS curves.]

Figure 2 shows the results of the experiment obtained for the Standard and Page-Server configurations. The number of clients varies from 5 to 250. The 0% curve of this graph is essentially update free ("null" update jobs). In general, the PS curves indicate much better performance than their SCS counterparts. This is mainly attributed to the usage of the client buffer space. From 5 to 20 clients,
throughput increases as both SCS and PS configuration clients increase the utilization of their resources. Beyond 50 clients, all the curves level off. The SCS/PS server disk utilization is high for more than 60 clients (approximately 0.98) and is mainly responsible for this leveling. The 8% curves present the poorest performance in both groups of curves. This is due to the high number of aborted and restarted jobs as well as to a considerable amount of delays generated by large updates. Thus, in the high client space, the server disk becomes the bottleneck of both the SCS and PS configurations. Figures 3 and 4 show the results obtained by submitting the same continuous client streams to the Persistent Page-Server and the Enhanced Client-Server architectures. Figure 3 presents the performance results for query-only workloads. Caching query results or pages into the client disk units results in tremendous gains in terms of achieved throughput. However, the PPS demonstrates somewhat lower average throughput than the ECS, as the process of page consistency checking at the server becomes lengthy in the presence of many clients. Non-zero updating client streams are shown in Figure 4. The existence of a large number of updates makes the server the bottleneck of the configurations, as its disk unit experiences high utilization. The maintenance of logs at the server for the ECS requires additional log page writing and reading. The overhead of this book-keeping activity may become excessive on certain occasions. This is the reason why the PPS demonstrates slightly improved throughput rates for more than 100 clients on the 4% updating curve.
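For concreteness, a continuous client stream with the parameters used above could be generated along the following lines. The generator itself is our own illustration; only the 10% update fraction and the selectivity ranges are taken from the text.

```python
# Sketch of a continuous client job stream: 10% updates uniformly mixed
# with queries, query page selectivity up to 10%, and a fixed per-stream
# update selectivity between 0% and 8%.
import random

def job_stream(n_jobs: int, update_fraction: float = 0.10,
               update_selectivity: float = 0.04, seed: int = 7):
    rng = random.Random(seed)
    for _ in range(n_jobs):
        if rng.random() < update_fraction:          # updates: 10% of all jobs
            yield ("update", update_selectivity)    # fixed per stream (0%..8%)
        else:
            yield ("query", rng.uniform(0.0, 0.10)) # query selectivity up to 10%

# e.g. list(job_stream(5)) -> [('query', 0.03...), ('update', 0.04), ...]
```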
4 Technology Trends

Independently of the approach taken to organize a CS-DBMS cluster, the cost of accessing objects resident in a network of workstations will always be a function of a few key parameters. The dominant factor in the design of database systems is always the mean access time of the disk unit. In a contemporary SCSI disk unit, this time is approximately 12 ms. It is predicted that by the end of 1996 this time will further decrease to around 10 ms per block [12]. On the other hand, the CPU processing speed of workstations such as those used in desktop computing keeps increasing. The Sparc20-71, a low-end client, runs at 204.7 MIPS [17]. The newly released Sun Ultra1 and Ultra2 workstations are two to three times faster than their predecessor Sparc20s. The central idea behind the design and release of such extremely fast workstations is to run various software and application packages efficiently by making use of the available networked resources. In networked environments, one of the most dramatic changes we are witnessing is the move from a
standard Ethernet type of network to faster LANs with much higher available bandwidth. There are at least three alternatives to the Ethernet of the 80's, namely: Fast Ethernet, FDDI networks, and ATM switches. While traditional Ethernet delivers 10 Mbits/sec, Fast Ethernet and FDDI offer a bandwidth of 100 Mbits/sec, and an ATM switch can reach approximately 155 Mbits/sec. Let us consider the following question: how long does it take to transfer a page from a remote location to local memory? The answer depends on whether the page sought is already in the buffer space of another client or needs to be retrieved from a disk unit at a different location. For every main-memory resident page heading towards the network interface, we need to take into consideration memory copying delays and network processing before the page reaches the LAN. After the page is received by the requesting workstation, a similar process has to be followed so that the page reaches the machine's main memory from the network interface. The costs involved in transferring a page between two machines are summarized in Table 1.

                   Memory Copy   Net Overhead   Net Transfer   Disk Access Time   Total
  Ethernet         0.25 msec     0.40 msec      6.55 msec      12.95 msec         20.15 msec
  F-Ethernet/FDDI  0.25 msec     0.40 msec      0.65 msec      12.95 msec         14.25 msec
  ATM Switch       0.25 msec     0.40 msec      0.42 msec      12.95 msec         14.02 msec

Table 1: Various Costs Involved in the Transfer of a Page

[Figure 4: Results for the PPS and ECS Configurations. Throughput versus number of clients (up to 250) for non-zero updating client streams: the 4% and 8% update selectivity curves of both PPS and ECS.]
The overheads were computed using a Hewlett Packard 9000/735 workstation [15] and are comparable to those reported in [22]. Next-generation workstations will certainly reduce even further the overheads at the dispatching and receiving ends. The network-transfer penalties shown in Table 1 are rather optimistic, as the whole bandwidth of the network is assumed to be available for handling the data page. From Table 1, it is apparent that faster networks slash shipment expenses and give modern database systems the opportunity to finally adopt a "network-centric" approach. These figures also have further implications if the working elements of a database can be accommodated by the aggregation of all (or part of) the client buffer spaces. In this case, clients will be able to avoid additional disk reads/writes by accessing the buffer space of other workstations in the network directly. However, in the long term even such an option will be rather limited, as the size of the information available in single database relations is expected to exceed 100 MegaBytes per table [19]. Another development that is going to seriously affect the way Client-Server database systems are designed and built is the availability of RAID-based storage devices. Such devices can be created by utilizing existing SCSI technology and could offer amortized disk access times and the high reliability necessary for the environments in which databases operate.
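Returning to Table 1, its totals decompose into fixed per-page costs (memory copy, network overhead, disk access) plus a bandwidth-dependent transfer term. Assuming an 8 KByte page, which the table does not state explicitly but which reproduces its transfer column almost exactly, the figures can be recomputed as follows.

```python
# Table 1 decomposition: total = memory copy + net overhead
# + page_bits / bandwidth + disk access. The 8 KByte page size is our
# inference; results match Table 1 up to rounding.
PAGE_BITS = 8 * 1024 * 8            # 8 KByte page, in bits
FIXED_MS = 0.25 + 0.40 + 12.95      # memory copy + net overhead + disk access

for name, mbits in [("Ethernet", 10), ("F-Ethernet/FDDI", 100), ("ATM Switch", 155)]:
    transfer_ms = PAGE_BITS / (mbits * 1e6) * 1e3
    print(f"{name:16s} transfer={transfer_ms:5.2f} ms  total={FIXED_MS + transfer_ms:5.2f} ms")
# Ethernet         transfer= 6.55 ms  total=20.15 ms
# F-Ethernet/FDDI  transfer= 0.66 ms  total=14.26 ms
# ATM Switch       transfer= 0.42 ms  total=14.02 ms
```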
5 Issues in CS-DBMSs

The above trends will certainly affect the way databases are designed and built in the near future. Cluster-based database computing will have to deal with a number of issues, among them: total management of database objects in Client-Server environments, closer cooperation among clients, techniques for the propagation of updates, new query processing paradigms using parallelism within clusters, and finally inter-cluster coordination and data migration.
With the introduction of LAN solutions for databases, the number of objects available to the end-user can become very large [20], making their access a challenging task. The fundamental question here is how to organize the meta-data within a cluster so that the server does not suffer from over-exposure to the requests of its clients. It is also worthwhile exploring whether it makes sense for the clients to have "global knowledge" about the naming, location, size and access methods of various elements across the network. Directory organizations for networked database entities will be of prime concern, as they will themselves become the "hot spots" of such systems. Caching of data to either short or long term memory of the clients needs to be managed in a way that does not add great overheads in the presence of database updates. Client cooperation is an alternative way to achieve uniform load balancing among the nodes of a cluster by avoiding unnecessary messages and requests to the server. The client cooperation proposed so far is limited to the various main-memory buffers of the clients. However, the mixture of transactional and query workloads that we observe today in databases makes this type of cooperation only a partial success. For skewed query workloads, it would be more beneficial if client collaboration crossed the boundaries of the main-memory hierarchy and achieved cooperation of long-term memory resources. The main question is whether and how a client could undertake the responsibility for furnishing data from its own data manager on behalf of the server. In environments where data entities are frequently updated, propagation of updates may be of great importance, as financial and administrative decisions may depend on such changes. Therefore, the deployment of policies that create "fresh" instances of distributed and likely replicated data elements is of great significance. The trade-off between maintaining fully consistent data and tolerating quasi-copies of data needs to be reexamined in the light of fast networks. A number of studies have considered this question in the past [1]. Under limited network bandwidth, quasi-copies were found to be more beneficial for rudimentary Client-Server environments. We expect that this will not be the case in clusters connected with high-bandwidth communication media. In [9], we have studied a number of policies that propagate updates imposed on the ECS database server. If, however, collaboration among clients is achieved, primary copies of data may reside in non-server locations and the adaptation of such policies has to be re-examined.
Query processing in distributed settings has mostly been examined in the presence of slow communication media. Their small bandwidth made query processors put more emphasis on the transmission aspects of the optimization process and ignore local and possibly underutilized resources. Query processing within a cluster of workstations can be carried out either by delegating part of a client's query materialization plan to another, possibly idle, client or by having the server parallelize the request among a number of available clients. Query processing performed in this way will give emphasis to the local resources and to the state information of the various clients in the cluster. Independently of their internal organizational schemes, Client-Server database clusters could be used as a building block for organizing wide-area information systems. Applications with long-lived characteristics could make use of such systems. Consortia of clusters are an inexpensive alternative to traditional interoperable database environments [4]. Issues such as meta-data dissemination and usage, and interoperation scripting languages, are relevant to interoperable databases. However, this new generation of systems will have to deal with automatic data migration as users change their preferred location of work (as is the case in mobile systems), as well as with inter-cluster query processing, the exploitation of potential parallelism to improve performance, and cluster organizational aspects.
References

[1] R. Alonso, D. Barbara, and H. Garcia-Molina. Data Caching Issues in an Information Retrieval System. ACM Transactions on Database Systems, 15(3):359-384, September 1990.
[2] F. Bancilhon, C. Delobel, and P. Kanellakis, editors. Building an Object-Oriented Database System: The Story of O2. Morgan Kaufmann, San Mateo, CA, 1992.
[3] H. Boral and P. Faudemay, editors. Database Machines. Springer-Verlag, June 1989.
[4] A. Bouguettaya, R. King, and S. Milliner. Resource Location in Large Scale Heterogeneous and Autonomous Databases. Journal of Intelligent Information Systems, 5(4):147-173, 1995.
[5] M. Carey, M. Franklin, M. Livny, and E. Shekita. Data Caching Tradeoffs in Client-Server DBMS Architecture. In ACM SIGMOD Conference on the Management of Data, May 1991.
[6] M. Carey, M. Franklin, and M. Zaharioudakis. Fine-Grained Sharing in a Page Server OODBMS. In Proceedings of the ACM SIGMOD Conference, Minneapolis, MN, 1994.
[7] A. Delis and N. Roussopoulos. Performance and Scalability of Client-Server Database Architectures. In Proceedings of the 19th International Conference on Very Large Databases, Vancouver, BC, Canada, August 1992.
[8] A. Delis and N. Roussopoulos. Performance Comparison of Three Modern DBMS Architectures. IEEE Transactions on Software Engineering, 19(2):120-138, February 1993.
[9] A. Delis and N. Roussopoulos. Management of Updates in the Enhanced Client-Server DBMS. In Proceedings of the 14th IEEE International Conference on Distributed Computing Systems, Poznan, Poland, June 1994.
[10] A. Delis and N. Roussopoulos. The Page-Server versus the Enhanced Client-Server DBMS. In Proceedings of the International Symposium on Advanced Database Technologies and Their Integration, Nara, Japan, October 1994.
[11] D. DeWitt, D. Maier, P. Futtersack, and F. Velez. A Study of Three Alternative Workstation-Server Architectures for Object-Oriented Database Systems. In Proceedings of the 16th Very Large Data Bases Conference, pages 107-121, 1990.
[12] G. Gibson. Storage Technology: RAID and Beyond. Tutorial at the 1995 ACM SIGMOD Conference, May 1995.
[13] J. Gray and A. Reuter. Transaction Processing: Concepts and Techniques. Morgan Kaufmann, San Mateo, CA, 1992.
[14] K. Kuspert, P. Dadam, and J. Gunauer. Cooperative Object Buffer Management in the Advanced Information Management Prototype. In Proceedings of the 13th Very Large Data Bases Conference, Brighton, UK, 1987.
[15] R. Martin. HPAM: An Active Message Layer for a Network of HP Workstations. In Proceedings of the 1994 Hot Interconnects Conference, August 1994.
[16] D. McGovern and C.J. Date. A Guide to SYBASE and SQL Server. Addison-Wesley, Reading, MA, 1992.
[17] Sun Microsystems. Hardware Summary Guide. Technical and Pricing Memo, April 1995.
[18] S. Milliner and A. Delis. Networking Abstractions and Protocols Under Variable Length Messages. In Proceedings of the 1995 IEEE International Conference on Network Protocols (ICNP-95), Tokyo, Japan, November 1995.
[19] C. Mohan. A Survey of DBMS Research Issues in Supporting Very Large Tables. In Proceedings of the Fourth International Conference on Foundations of Data Organization and Algorithms, Chicago, IL, October 1993.
[20] J. Ordille and B. Miller. Nomenclator Descriptive Query Optimization for Large X.500 Environments. In Proceedings of the 1991 ACM SIGCOMM Conference, 1991.
[21] S. Ough and R. Sonnier. Spotlight on FDDI. Unix Review, 10(10):40-49, October 1992.
[22] J. Purtilo and P. Jalote. An Environment for Developing Fault-Tolerant Software. IEEE Transactions on Software Engineering, 17:153-159, February 1991.
[23] N. Roussopoulos. The Incremental Access Method of View Cache: Concept, Algorithms, and Cost Analysis. ACM Transactions on Database Systems, 16(3):535-563, September 1991.
[24] W. Rubenstein, M. Kubicar, and R. Cattell. Benchmarking Simple Database Operations. In ACM SIGMOD Conference on the Management of Data, pages 387-394, 1987.
[25] E. Shekita. High-Performance Implementation Techniques for Next-Generation Database Systems. PhD thesis, University of Wisconsin-Madison, Madison, WI, 1990. Technical Report 1026.
[26] A. Sinha. Client-Server Computing. Communications of the ACM, 35(7), July 1992.
[27] R. Stevens. Unix Network Programming. Prentice Hall, 1990.
[28] R. Vetter, C. Spell, and C. Ward. Mosaic and the World-Wide Web. IEEE Computer, 27(10), October 1994.
[29] Y. Wang and L. Rowe. Cache Consistency and Concurrency Control in a Client/Server DBMS Architecture. In Proceedings of the 1991 ACM SIGMOD International Conference, Denver, CO, May 1991.
[30] K. Wilkinson and M.A. Neimat. Maintaining Consistency of Client-Cached Data. In Proceedings of the International Conference on Very Large Data Bases, August 1990.