Efficient Distributed Algorithms for Dynamic Channel Assignment

Manhoi Choy and Ambuj K. Singh

Department of Computer Science, The Hong Kong University of Science and Technology
Department of Computer Science, University of California at Santa Barbara

This research is supported in part by NSF grants CDA-9421978, CCR-9223094, CCR-9505807, and CDA-9216202.
Abstract
The efficiency of channel assignment in a cellular telephone system is considered using the measures of successful channel assignment ratio as well as response time. Existing paradigms of mutual exclusion and dining philosophers from distributed systems are used to synthesize new algorithms that optimize both measures. The results are verified by extensive simulations.
1 Introduction
Cellular telephone systems are becoming increasingly important. The inherent bandwidth limitations of these systems and the possibility of interference among neighboring cells have prompted much research into effective assignment of available channels to requesting mobile units. The static channel assignment algorithm assigns a fixed group of channels to each cell, and all requests are satisfied locally. This algorithm performs well only under uniform request patterns. To cope with non-uniform request patterns, various dynamic channel allocation (DCA) strategies have been proposed. Here, channels are assigned to cells initially (as in fixed allocation), but can be borrowed by neighboring cells. When a cell tries to borrow a channel from its neighbors, its request may fail if one or more of its neighbors is using the channel and refuses to transfer it. On such a failure, a cell may simply reject the user's request, or it may attempt to borrow other channels. If a station is still unable to allocate a channel after repeated borrowing attempts, it rejects the request. In the simple borrowing strategy [5], the general rule is to borrow from the "richest" neighboring cell. In the hybrid assignment strategy [6], some channels are assigned in a fixed way while the remaining ones can be borrowed. The optimal ratio of the two allocations depends on the traffic load. Elnoubi et al. [4] proposed a variation of the hybrid strategy called borrowing with channel ordering, in which all the channels assigned to a cell are ordered. Local requests are satisfied from one end of the ordering while borrowing requests are satisfied from the other end. Once borrowed by a cell i, a channel c is prohibited from both use and borrowing within the interference distance of cell i. Borrowing with directional channel locking [10] is a similar scheme, except that the borrowed channel c remains available for borrowing (though not for use) in those directions of cell i that do not lead to interference. Yeung and Yum [9] define two kinds of compact channel allocation patterns (clockwise and anti-clockwise) and attempt to maintain these patterns under dynamic channel allocations. Simulation studies show this scheme to be better than borrowing with directional channel locking.
Jiang and Rappaport [?] proposed a method called channel borrowing without locking. Here the group of channels assigned to a cell is divided into subgroups, and each subgroup is correlated with a neighbor or a direction. A cell is allowed to borrow only those channels from a neighbor that are in its direction. The division into subgroups is achieved statically in a way that minimizes (or eliminates) conflicts due to borrowing.

The performance metric considered in the above papers is the blocking factor, or the average fraction of unsuccessful requests. As useful as this metric may be, another important consideration in a real system is response time. Any dynamic channel assignment algorithm involves the maintenance of non-local information (e.g., the channels being used by a neighbor), and serving a request may require the use of this information. As a result, we should also pay attention to how this information is maintained and used in order to satisfy requests. Response time measures the amount of time that a user waits until its request can be served. (Failed requests are ignored in measuring response time.) It takes into consideration factors such as transmission delay, message processing delay, and synchronization delay. Synchronization delays may arise for a number of reasons, for example when a cell attempts to check the status of a channel at a neighbor while the neighbor is trying to assign channels. In order to minimize response time, we use existing paradigms of mutual exclusion and its generalization to a graph structure, called the dining philosophers [3], from distributed systems. The application of these paradigms is based on the fact that channels are the limited resources and the cells are the contending processes desiring the resources. A number of efficient algorithms have already been proposed for these paradigms, and we consider using these algorithms to improve the response time of dynamic channel assignments.

In a distributed implementation, channels may be borrowed either in an optimistic or a pessimistic manner. In optimistic borrowing, the target channel is chosen first, and then a check is made to see if the borrowing leads to any interference [8]. This is repeated until a channel is successfully borrowed from all neighbors or until the base station has tried all possible channels. This scheme has two drawbacks under high contention. First, it requires a large number of messages (directly proportional to the number of cycles repeated). Second, because of its optimistic nature, it is provably less fair than pessimistic schemes. In pessimistic borrowing, the selection of channels for possible transfers and the requests for the actual transfers occur together with the help of exclusive locks on groups of channels. The size of these groups, called the granularity of the algorithm, is a chosen parameter. Once an exclusive lock is obtained on a group of channels, the corresponding channels are inspected, and a channel is transferred if possible. There is a trade-off on granularity: a large granularity increases the likelihood of successful borrowing; however, beyond a certain limit, large granularities lead to excessive blocking and
delays. We present the idea of pessimistic borrowing and our simulation experiments with granularities in this paper. The rest of this paper is organized as follows. In Section 2, we discuss the dining philosophers problem. An efficient solution to the problem is presented in Section 3. Based on this solution, algorithms for the dynamic channel allocation problem are designed in Section 4. In Section 5, we compare the performance of our algorithms with existing algorithms and present some simulation results. Finally, a discussion concludes the paper in Section 6.
2 The Dining Philosophers Problem
The channel allocation problem is closely related to resource allocation problems in the area of distributed operating systems. Resource allocation problems have been studied extensively and are commonly abstracted as the dining philosophers problem [3, 7]. In this problem, philosophers reside at the nodes of an underlying graph. Two philosophers who contend for a shared resource are connected by an edge; this is denoted by a fork that resides on each edge of the graph. A philosopher can be in one of three states: thinking, hungry, or eating. In the thinking state, the philosopher is not interested in any of the shared resources. A philosopher requests the shared resources by transiting to the hungry state. Once this request is satisfied, the philosopher moves to the eating state. Once it has completed using the resources, it transits back to the thinking state. Because there is a limited number of forks, neighboring philosophers share a common fork and need to synchronize. A solution to the dining philosophers problem is required to synchronize these philosophers so that the mutual exclusion and starvation freedom properties, as defined below, are satisfied.

mutual exclusion: No two neighboring philosophers eat at the same time.

no starvation: Every hungry philosopher eats eventually.

Since a philosopher may eat only if its neighbors agree to release the shared forks, this situation resembles the requirement that a base station can use a channel owned by a neighbor only if all the neighbors who own the channel agree to transfer it. The dining philosophers problem has been extensively studied in the literature and many efficient algorithms have been proposed for it. Therefore, instead of re-inventing the wheel, we use them to solve the dynamic channel allocation problem.
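To make the abstraction concrete, the following minimal Python sketch (our illustration, not code from the paper; all names are hypothetical) captures the three states and the mutual exclusion property as a check over a snapshot of the system. Starvation freedom is a liveness property over executions and cannot be checked on a single snapshot.

from enum import Enum

class State(Enum):
    THINKING = 0   # not interested in any shared resource
    HUNGRY = 1     # has requested the shared resources (forks)
    EATING = 2     # holds all shared forks and is using the resources

def mutual_exclusion_holds(states, neighbors):
    # states: philosopher id -> State; neighbors: id -> list of adjacent ids.
    # Mutual exclusion: no two neighboring philosophers eat at the same time.
    return not any(states[i] == State.EATING and states[j] == State.EATING
                   for i in neighbors for j in neighbors[i])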
3 An Efficient Solution for the Dining Philosophers Problem
Figure 1 shows a dining philosophers algorithm presented by us in [2]. Each philosopher (modeled as a process) is initially assigned a color in such a way that neighboring philosophers have different colors. Colors are represented by integers here, for ease of presentation. Given two neighboring philosophers i and j, we say that i is a high neighbor of j if the color of i is larger than the color of j. Similarly, j is called a low neighbor of i. A process is in one of five states: a thinking state, an eating state, and three waiting states corresponding to the hungry state. These three waiting states are wait1, wait2, and collect (collecting forks).
[Figure 1 is a state-transition diagram over the states thinking, wait1, wait2, collect, and eating, annotated with the waiting conditions and with the broadcasts of messages m1 (to high neighbors), m2 (to all neighbors), and m3 (to low neighbors) described in the text below, and with the subroutine Release-forks on leaving the eating state.]
Figure 1: State transitions of process Pi.
While transiting between the states, a process broadcasts three kinds of messages, m1, m2, and m3, to its neighbors. Message m1 is broadcast to all the high neighbors when a process transits from wait1 to wait2; message m2 is broadcast to all the neighbors when a process transits from wait2 to collect; message m3 is broadcast to all the low neighbors when a process transits from collect to eating. We assume that every process has an identity. If a process identity is i, the process is denoted by Pi. The set of the identities of the neighbors of a process Pi is denoted by Ni. Process Pi has two sets of local variables Lij and atij, where j ranges over the set Ni. Variable Lij stores the last message received from process Pj. Boolean atij is true iff the fork between processes Pi and Pj is at Pi. The color of process Pi is denoted by color(i).

A process becoming hungry transits from thinking to wait1. It stays at wait1 until none of its low neighbors has been at state wait2 continuously from the time the process arrived at state wait1. Since a process sends a message m1 to its high neighbors before transiting to state wait2, this waiting can be achieved by waiting until $L_{ij} \neq m_1$ independently for every $j \in N_i$ such that Pj is a low neighbor of Pi. At state wait2, a process transits to state collect if it knows that none of its high neighbors is at state collect at the moment. This is achieved by waiting until the condition $\langle \forall j : j \in N_i \wedge \mathit{color}(j) > \mathit{color}(i) : L_{ij} \neq m_2 \rangle$ is true. At state collect, a process waits until it has collected all the necessary forks and then transits to state eating. It follows from the waiting conditions at states wait1 and wait2 that messages m1 need to be sent only to high neighbors, whereas messages m3 need to be sent only to low neighbors.

The fork gathering scheme is not shown explicitly in the state transition diagram. In fact, for efficient use of messages, we use message m2 as the fork request message. Any process that receives message m2 from its neighbor knows that the neighbor is attempting to collect forks. The process can then decide whether it should release the fork or not based on its own state and color, and the color of the neighbor. The actions performed upon the receipt of a message and the subroutine Release-forks are shown in Figure 2. Variable S stores the set of identities of neighboring processes that are waiting for forks from process Pi. The identity of a process is inserted into set S if its fork request cannot be satisfied immediately, and the identity of a process is removed from set S when process Pi releases forks to its neighbors in the subroutine Release-forks. The correctness of the algorithm of Figure 2 is shown formally in [2] and is not repeated here. It is also shown there that the algorithm has an upper bound on response time that is a quadratic function of the upper bound on communication delay and the upper bound on the eating period.
Initially: Lij = m3; S = {}

On receiving message m_num from neighbor Pj:
    Lij := m_num;
    if (num = 2) then
        if (atij ∧ ¬(state = eating) ∧ ¬(state = collect ∧ color(i) < color(j))) then
            send fork to Pj
        else
            S := S ∪ {j}

Procedure Release-forks
    ⟨∀j : j ∈ S : if atij then send fork to Pj⟩;
    S := {}
end Release-forks
Figure 2: Message handler and subroutine Release-forks

Interested readers are referred to [2] for detailed proofs.
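As an illustration, the message handler and Release-forks of Figure 2 can be transcribed into the following minimal Python sketch. This is our own rendering, not the authors' implementation; the message transport and the surrounding state machine of Figure 1 are assumed to exist elsewhere, and send_fork is a hypothetical callback.

class DiningProcess:
    def __init__(self, pid, color, neighbor_colors, send_fork):
        self.pid = pid
        self.color = color                        # color(i); neighbors get different colors
        self.neighbor_colors = neighbor_colors    # j -> color(j) for every neighbor j
        self.send_fork = send_fork                # callback that hands the fork to neighbor j
        self.state = 'thinking'                   # thinking, wait1, wait2, collect, or eating
        self.L = {j: 'm3' for j in neighbor_colors}    # last message received from each neighbor
        self.at = {j: False for j in neighbor_colors}  # at[j]: the fork shared with j is here
        self.S = set()                            # neighbors whose fork requests are deferred

    def on_message(self, j, num):
        # "On receiving message m_num from neighbor Pj" (Figure 2).
        self.L[j] = 'm%d' % num
        if num == 2:                              # m2 doubles as the fork-request message
            ok = (self.at[j]
                  and self.state != 'eating'
                  and not (self.state == 'collect'
                           and self.color < self.neighbor_colors[j]))
            if ok:
                self.at[j] = False
                self.send_fork(j)
            else:
                self.S.add(j)                     # defer the request

    def release_forks(self):
        # Subroutine Release-forks: hand any held forks to deferred requesters.
        for j in list(self.S):
            if self.at[j]:
                self.at[j] = False
                self.send_fork(j)
        self.S.clear()

In the full algorithm, release_forks would presumably be invoked when the process leaves the eating state, as Figure 1 suggests.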
4 Two-Phase Algorithm
In this section, we show how the dynamic channel allocation problem can be solved on top of an underlying algorithm for the dining philosophers. Each base station is mapped to a philosopher. Two philosophers share a fork if the corresponding base stations are close enough to interfere with each other. Assume that the set of nc channels is divided into ng groups; i.e., the granularity of the scheme is nc/ng. Then ng instances of a dining philosophers algorithm are run independently, one for each group. Each group of channels can be locked in a read or a write mode; a read lock is used to inspect the set of channels and find an unused channel owned by the base station, whereas a write lock is used for transfer requests. The dining philosophers algorithm is used to lock a set of channels in write mode. The complete algorithm is shown in Figure 3.

/* Phase 1 */
For each group of channels g do
    if g is not held under a write lock then
        obtain a read lock on g;
        if there exists a locally owned unused channel c ∈ g then
            satisfy the request with c;
        release the read lock on g;
    if a channel was assigned successfully then exit;

/* Phase 2 */
Select a random group of channels g;
Obtain a write lock on g by the dining philosophers algorithm;
If there exists an unused channel c ∈ g then satisfy the request with c;
Satisfy other pending requests with unused channels in g;
Release the write lock on g;
Respond to the user with a successful assignment or a failure;
Figure 3: Two-phase algorithm

When a user request arrives at a base station, the base station checks if it can be satisfied with a local unused channel. This is carried out by obtaining read locks on groups of channels and inspecting their status. This is called Phase 1 of the algorithm. If a request cannot be satisfied locally in Phase 1, the second phase, Phase 2, is started. Here, a group g is selected at random, and a write lock is obtained on the group by executing the dining philosophers algorithm. (It is also possible to totally order all the groups and the channels within them, serving local requests from one end and transfer requests from the other, as in some of the existing channel assignment algorithms.) Once a write lock has been obtained, the set of channels in the group is examined for any possible transfers. Other pending requests are also served simultaneously, if possible. The mechanisms
of locking and channel transfer are implemented by messages, and are not shown for brevity.
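For illustration, the two phases map naturally onto code. The sketch below is our own Python rendering of Figure 3 under assumed helper names (read_lock, write_lock, find_local_unused, find_unused, assign, serve_pending); it is not the authors' implementation, and the locking and messaging machinery is left abstract.

import random

def handle_request(station, groups):
    # Phase 1: look for a locally owned, unused channel under read locks.
    for g in groups:
        if station.holds_write_lock(g):      # skip groups currently locked for transfers
            continue
        station.read_lock(g)                 # inspect the status of the group
        c = station.find_local_unused(g)
        if c is not None:
            station.assign(c)
        station.read_unlock(g)
        if c is not None:
            return c                         # request satisfied locally

    # Phase 2: write-lock one random group via the dining philosophers
    # algorithm and look for any unused channel there, possibly transferring it.
    g = random.choice(groups)
    station.write_lock(g)                    # obtained by the dining philosophers algorithm
    c = station.find_unused(g)
    if c is not None:
        station.assign(c)
    station.serve_pending(g)                 # serve other pending requests from g as well
    station.write_unlock(g)
    return c                                 # None means the request is rejected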
4.1 Fairness Comparison with Existing Algorithms
In any resource allocation problem such as the channel assignment problem, it is useful to make fairness or progress guarantees. For example, in the mutual exclusion problem or the dining philosophers problem, we guarantee that every request will eventually be serviced. So, what are the guarantees that we can make for channel assignments? First, note that requests for channels are not blocking, in the sense that if a request cannot be satisfied, the mobile user is informed of this fact. This is unlike the mutual exclusion problem or the dining philosophers problem, where a process blocks until serviced. Therefore, we should consider the condition of fairness under repeated requests by a mobile unit. Next, due to the asynchrony in the system, a mobile unit's request can always lose out to a competing neighbor. So, we should consider the scenario in which some channels are continuously usable (rather than usable infinitely often). Let usable(c, i) denote that a channel is usable, i.e., that there is no interference from neighbors if i uses c. Define fairness condition F1 as follows: If mobile unit i makes repeated requests until successful and $\langle \exists c :: \mathit{usable}(c, i) \rangle$ holds continuously, then some request of i will be serviced eventually.

The above condition seems to be a reasonable fairness guarantee that one can expect from any channel assignment algorithm. However, this condition is not satisfied by optimistic algorithms, and in particular by the algorithm of Prakash et al. [8], which is a distributed implementation of borrowing with directional channel locking [10]. This can be illustrated by the following scenario. Let there be three cells i, j, and k that form a clique, along with some other neighbors of k. Suppose that channel c is usable continuously by i from some point on and that channel c is owned by j. Processes i and k attempt to transfer the channel from j. The interleaving of events occurs such that the transfer request of i loses out to the transfer request of k, and moreover the transfer request of k also does not succeed due to interference from other neighbors. This sequence of events may repeat forever. Thus, it is possible that channel c is usable by i continuously, but repeated requests of i never succeed. The situation is similar with other optimistic algorithms. In fact, the above reasoning shows that these algorithms do not even satisfy the weaker fairness guarantee F2: If mobile unit i makes repeated requests and $\langle \exists c :: \mathit{usable}(c, i) \text{ holds continuously} \rangle$, then some request of i will be serviced eventually.

Let us now consider the fairness guarantees of our algorithms. If all channels are combined into a single group, then our algorithm satisfies F1 (and hence F2). When mobile unit i obtains a write lock on the channels, it is able to inspect all the channels and service its request with channel c (or some other usable channel). If the granularity is smaller, the random selection of a group in Phase 2 ensures that condition F1 will be satisfied with probability 1.
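For readers who prefer a symbolic statement, one possible temporal-logic reading of the two conditions is the following (our interpretation of the prose above, not notation from the paper), where $\Box$ means "continuously from some point on" and $\Diamond$ means "eventually":

F1: $\mathit{requests\_until\_success}(i) \;\wedge\; \Box\,\langle \exists c :: \mathit{usable}(c,i) \rangle \;\Rightarrow\; \Diamond\,\mathit{served}(i)$

F2: $\mathit{requests\_repeatedly}(i) \;\wedge\; \langle \exists c :: \Box\,\mathit{usable}(c,i) \rangle \;\Rightarrow\; \Diamond\,\mathit{served}(i)$

Under this reading, F2 is the weaker guarantee because its antecedent is stronger: a single channel that is continuously usable implies that some usable channel exists at every moment, but not conversely.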
5 Experimental Results
We performed simulation experiments to compare our algorithms with existing algorithms. The dining philosophers algorithm described in Section 3 is used in all our experiments.
Our experimental setup consists of a two-dimensional service area with 8 x 8 hexagonal cells. The common parameters for all our experiments are summarized in Figure 4. The frequency band allocated to the system is divided into 450 independent channels. The durations of calls are normally on the order of minutes. There is a small penalty in communication between stations and in the processing of messages at the stations; we assume both of these are 50 ms per event. We assume a large user pool, and that the arrival of requests at each station can be modeled as a Poisson process with 1/λ calls per minute (cpm). All our simulations were carried out for a simulation time of one hour using the CSIM simulation package [?].

We compare our algorithm to three others: a centralized algorithm in which all allocations are done by a central server, the static assignment algorithm, referred to as the preallocating algorithm, in which all channels are preallocated uniformly, and an optimistic algorithm due to Prakash et al. [8]. We chose the centralized algorithm since it defines the performance bound for the non-distributed algorithms. We chose the preallocating algorithm since it has the best response time of two network hops. We chose the optimistic algorithm due to Prakash et al. since it is the only other distributed algorithm, and it implements the idea of borrowing with directional channel locking [10], which is known to be an efficient algorithm.

Two sets of experiments are performed. The first assumes a uniform request arrival rate, i.e., the arrival rate is constant over the entire region. The second assumes that some cells are heavily loaded while others are lightly loaded. These are presented next.

Number of cells: 8 x 8
Number of channels (nc): 450
Mean msg delay between stations: 50 ms
Mean delay to process a msg: 50 ms
Mean length of calls: l (min)
Mean request arrival rate: 1/λ calls/min (cpm)

Figure 4: Common Parameters
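As a rough illustration of the arrival model only (our sketch, not the authors' CSIM setup; the exponential call-length distribution is an assumption, since the paper states only that calls last on the order of minutes), request arrivals at a single station can be generated as follows:

import random

def simulate_station(arrival_rate_cpm, mean_call_len_min, sim_minutes=60.0):
    # Poisson arrivals: inter-arrival times are exponential with the given rate.
    # Call holding times are drawn as exponential with mean mean_call_len_min
    # (an assumption for illustration).
    t, calls = 0.0, []
    while True:
        t += random.expovariate(arrival_rate_cpm)    # minutes until the next request
        if t > sim_minutes:
            break
        length = random.expovariate(1.0 / mean_call_len_min)
        calls.append((t, length))                    # (arrival time, holding time)
    return calls

# Example: one hour of requests at 60 cpm with a mean call length of 3 minutes.
requests = simulate_station(60.0, 3.0)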
5.1 Uniform Case
We first consider the case when all cells in the area have the same request arrival rate. In order to decide a suitable granularity for the algorithm, we ran simulations of the algorithm with a mean request arrival rate of 60 cpm and a mean call length of 3 minutes. The granularity g was varied from 90 to 15. The results are shown in Figure 5. As shown in the table, the response time improves as the granularity decreases. This can be attributed to a decrease in the amount of blocking. We choose g = 15 in our later experiments. Note that the blocking factor denotes the ratio of unsuccessful requests.

granularity g:        90    45    30    18    15
blocking factor:      0.22  0.21  0.20  0.20  0.20
response time (sec):  0.38  0.31  0.29  0.28  0.27

Figure 5: Effect of granularity on our pessimistic algorithm for uniform loads

Figure 6 shows the percentage of allocation and the response time of the algorithms under a range of request arrival rates. The mean length of each call is set to 3 minutes. The centralized algorithm performs badly because the central site becomes a bottleneck. Its response time is a few orders of magnitude higher than the other algorithms, and
its blocking factor is 3-5 times higher. We do not show these results in the figures. The other three algorithms have roughly the same percentage of allocation, with the pessimistic algorithm sandwiched between the other two. The response times of the optimistic algorithm and the pessimistic algorithm are slightly higher than that of the preallocating one. This can be explained by the fact that no communication is needed in the preallocating algorithm.

[Figure 6 plots the blocking factor and the response time (secs) against calls per minute (35 to 60) for the optimistic, pessimistic, and preallocating algorithms.]

Figure 6: Effect of increasing the number of calls per minute for uniform load
[Figure 7 plots the blocking factor and the response time (secs) against mean call length (108 to 180 secs) for the optimistic, pessimistic, and preallocating algorithms.]
Figure 7: Effect of increasing the mean length of calls for uniform load

Figure 7 shows the performance of the algorithms simulated with a fixed mean arrival rate of 60 cpm and with the mean length of calls varied from 108 seconds to 3 minutes in increments of 18 seconds. The results are similar to those of Figure 6. All algorithms besides the centralized algorithm have similar performance.
5.2 Non-uniform Case
To simulate the performance of the algorithms under a non-uniform call arrival distribution, we assume that one third of the cells are heavily loaded. Heavily loaded cells are evenly distributed over the region. Lightly loaded cells have call arrival rates 1/5 those of the heavily loaded cells. As in the case of a uniform arrival rate, we first select a suitable granularity. We vary the granularity from 90 to 15. The call arrival rate is kept fixed at 150 cpm for heavily loaded cells, and the mean length of calls is set at 3 minutes for all calls. The blocking factor increases slightly and the response time decreases with a decrease in granularity. We choose g = 15 in our later experiments.

We simulate all the algorithms and compare their performance as in the uniform case. Figure 8 shows the results of the simulation with the arrival rate for heavily loaded cells varying from 80 to 150 cpm. The mean length of each call is set to 3 minutes. The centralized algorithm suffers from a performance bottleneck as in the uniform case and is not shown here. The optimistic algorithm outperforms the preallocating algorithm (with respect to the number of channels allocated) by more than 25 percent at small arrival
rates. The improvement shrinks as requests arrive more frequently. Finally, at 150 cpm, the preallocating algorithm begins to allocate more channels than the optimistic strategy. This phenomenon can be explained as follows. At very low arrival rates (not considered in Figure 8), there is no need to borrow channels at all, and the optimistic algorithm should perform as well as the preallocating algorithm. As the arrival rate increases, the optimistic algorithm starts to allocate more channels by means of channel transfers. Thus, the optimistic algorithm performs significantly better than the preallocating one between 80 and 130 cpm. However, as the arrival rate increases further, the probability of successful borrowing decreases. Finally, at 150 or more cpm, excessive borrowing attempts lead to thrashing, and channel allocation suffers. The pessimistic algorithm performs similarly to the optimistic one at low cpm. However, there is no thrashing at high cpm; the gain achieved over the preallocating algorithm is now sustained at 140 cpm or higher. This is primarily because, in the pessimistic algorithm, more requests (up to g = 15) can be served per eating period, which saves a great deal of communication and processing overhead. The response time of the optimistic algorithm (9-107 seconds) becomes orders of magnitude higher than that of the other two algorithms (pessimistic and preallocating) at 100 cpm, and these points are not plotted in the figure.

[Figure 8 plots the blocking factor and the response time (secs) against calls per minute (80 to 150) for the optimistic, pessimistic, and preallocating algorithms.]

Figure 8: Effect of increasing the number of calls per minute for non-uniform load

Figure 9 shows the performance of the algorithms when the arrival rate (of heavily loaded cells) is fixed at 150 cpm and the call lengths are varied from 108 seconds to 3 minutes in increments of 18 seconds. In terms of allocation percentage, the optimistic approach performs better than the preallocating algorithm at short call lengths. However, it performs worse than the fixed allocation scheme at longer call lengths. On the other hand, the pessimistic approach performs much better than the other algorithms. In terms of response time, the optimistic approach is significantly worse than the others. Its response time varies from 20 seconds to 107 seconds, which is far too long for users to wait. (These points have not been plotted in the figures as they lie outside the range.) On the other hand, with the pessimistic approach, the response time remains in a tolerable range.

[Figure 9 plots the blocking factor and the response time (secs) against mean call length (108 to 180 secs) for the optimistic, pessimistic, and preallocating algorithms.]

Figure 9: Effect of increasing the mean length of calls for non-uniform load

6 Discussion
Overall, the optimistic schemes do well under light loads but do not perform as well under heavy loads. This is true for both uniform and non-uniform loads. On the other hand, the pessimistic schemes perform better than the optimistic schemes, and the improvements are more significant under non-uniform load conditions. When compared with preallocating schemes, the pessimistic schemes allocate a much higher percentage of channels in all the experiments. However, in order to use the pessimistic schemes, one must choose an appropriate granularity g. The optimum value of the granularity could vary under different load situations, and a dynamic adjustment of the granularity may be needed to cope with them. Further experimentation with other existing algorithms and their distributed implementations is planned for the future.
References
[1] Andrea Baiocchi, Francesco Delli Priscoli, Francesco Grilli, and Fabrizio Sestini. The geometric dynamic channel allocation as a practical strategy in mobile networks with bursty user mobility. IEEE Transactions on Vehicular Technology, 44(1):14-23, 1994.
[2] M. Choy and A. K. Singh. Efficient fault tolerant algorithms for distributed resource allocation. ACM Transactions on Programming Languages and Systems, 17(4):535-559, 1995.
[3] E. W. Dijkstra. Hierarchical ordering of sequential processes. Acta Informatica, 1:115-138, 1971.
[4] S. M. Elnoubi, R. Singh, and S. C. Gupta. A new frequency channel assignment algorithm in high capacity mobile communication systems. IEEE Transactions on Vehicular Technology, 31(3), 1982.
[5] J. S. Engel and M. M. Peritsky. Statistically-optimum dynamic server assignment in systems with interfering servers. IEEE Transactions on Vehicular Technology, 22(4), 1973.
[6] T. J. Kahwa and N. D. Georganas. A hybrid channel assignment scheme in large-scale, cellular-structured mobile communication systems. IEEE Transactions on Communications, 26(4), 1978.
[7] N. Lynch. Fast allocation of nearby resources in a distributed system. In Proceedings of the 12th Annual ACM Symposium on Theory of Computing, pages 70-81, 1980.
[8] Ravi Prakash, Niranjan G. Shivaratri, and Mukesh Singhal. Distributed dynamic channel allocation for mobile computing. In Proceedings of the 14th Annual ACM Symposium on Principles of Distributed Computing, pages 47-56, 1995.
[9] Kwan Lawrence Yeung and Tak-Shing Peter Yum. Compact pattern based dynamic channel assignment for cellular mobile systems. IEEE Transactions on Vehicular Technology, 43(4):892-896, 1994.
[10] Ming Zhang and Tak-Shing Peter Yum. Comparisons of channel-assignment strategies in cellular mobile telephone systems. IEEE Transactions on Vehicular Technology, 38(4):211-215, 1989.