Dynamic Hashing + Quorum = Efficient Location Management for Mobile Computing Systems¹

Ravi Prakash, Dept. of Computer Science, University of Rochester, Rochester, NY 14627-0226.
Mukesh Singhal, Dept. of Computer and Information Science, The Ohio State University, Columbus, Ohio 43210.

[email protected]

[email protected]

¹ Available as Technical Report 649, February 1997, University of Rochester, Computer Science Department.

Abstract

Location management is a fundamental problem in mobile computing systems. Existing industry standards employ centralized location management schemes. Centralized schemes are not highly scalable. This paper presents a new, distributed location management strategy for mobile systems. Its salient features are fast location update and query, load balancing among location servers, and scalability. The strategy employs dynamic hashing techniques and quorums to manage location update and query operations. Location information of a mobile host is replicated at a subset of location servers. The set of location servers associated with a mobile host changes with time, depending on the location of mobile hosts and load on the servers. This dynamism prevents situations of heavy load (location update and query messages) on some location servers when the mobile hosts are not uniformly distributed in space, or when some mobile hosts have their location updated or queried more often than others. New location servers can be added to the system as the number of mobile hosts and/or location update and query rates increase. Also, if the load diminishes, the number of location servers in the system can be reduced. Tries and quorum systems are used to expand and shrink the number of location servers. Unlike several existing schemes that progressively expand their region of search and may require multiple rounds of messages to locate a mobile host, the proposed scheme requires a single round of message multicasting for location update and query operations. The size of the multicast set is proportional to the square root of the number of location servers and each message has a small size. All multicast messages are restricted to the high bandwidth wired part of the mobile network. Hence, communication overhead and latency are low. The storage overheads imposed on the location servers are nominal.

Keywords: mobile computing, location management, dynamic hashing, tries, quorum systems.

1 Introduction

In mobile computing systems, the network configuration and topology change with time. The ability of mobile hosts (MHs) to autonomously move from one part of the network to another raises interesting issues in the management of location information of these nodes.

1. Should location information be maintained at a central site, or should it be distributed? As centralized solutions are neither robust nor scalable, we will concentrate on distributed solutions. We assume that there exists a set of location servers in the system that store the location of mobile hosts.

2. How to minimize the time to locate a mobile host?

3. How to minimize the communication overheads incurred in location query and update? Simultaneously achieving both 2 and 3 does not appear to be possible. So, appropriate trade-offs have to be made.

4. Replicating location information of a mobile host at all location servers is an expensive solution. So, what subset of location servers should be informed when a mobile host's location changes? Also, if a node wishes to locate a mobile host, which location servers should it probe?

5. Should the set of location servers associated with a mobile host be dependent solely on the mobile host, or also on the location of the mobile host whose information is being updated? Also, should the set of queried servers depend on the location of the node that is trying to locate the mobile host?

6. How to handle scalability issues, namely, addition of new mobile hosts to the system, departure of old mobile hosts from the system, and significant changes in the update and query rates of mobile hosts? It is to be noted that the location management strategy should be able to adapt to the arrival/departure of groups of mobile hosts and variations in their update/query rates without significantly affecting the time to locate or update the location information of other mobile hosts.
Also, while the system is adjusting to fluctuations in load offered by a group of mobile hosts, location services (update and query) for other nodes should not be suspended. It is extremely important that a location management scheme adequately address the issues mentioned above. Otherwise, high communication overheads and/or delays will be experienced. Location management operations can be broadly classified into location update, location query and paging operations. The location update operation stores information about the location of an MH. When an MH is to be reached, a location query is performed to locate the MH. The frequency of location updates may be dependent on a variety of performance concerns. Usually, it

is very expensive to update the information each time an MH crosses a cell boundary. So, it is possible that the location management system has a general idea of the region where the queried MH is present, but not exactly in which cell it is present. In such situations, paging is done to identify the cell. Existing standards for location management are IS-41 [6] used for Advanced Mobile Phone System (AMPS) in North America and GSM MAP [11] used for the Global System for Mobile Communications (GSM) in Europe. Both these schemes are centralized, and therefore not scalable. Several alternatives/enhancements to these standards have been proposed that employ either a centralized approach [8, 9] or a distributed approach [2, 16]. In this paper we present a distributed location management scheme in which fast location updates and queries can be performed, while incurring modest communication and storage overheads. Also, unlike previous location management schemes, great attention is paid to ensure that no single location server gets over-burdened while other servers are under-utilized.

2 System Model

We assume a cellular system that divides the geographical region served by it into smaller regions, called cells. Each cell has a base station, henceforth referred to as the mobile service station (MSS). Thus, a cell can be uniquely identified by referring to its base station. Figure 1 shows a logical view of a mobile computing system. The mobile service stations are connected to each other by a fixed wireline network. The fixed wireline network also contains various location servers and other fixed nodes. A mobile service station can be in wireless communication with the mobile hosts in its cell. The location of a mobile host can change with time. It may move from its present cell to a neighboring cell while participating in a communication session, or it may stop communicating with all nodes for a period of time and then pop up in another part of the network.

Figure 1: Logical view of a mobile computing system.

A mobile host can communicate with other units, mobile or static, only through the mobile service station of the cell in which it is present. If a node (static or mobile) wishes to communicate with an MH, first it has to determine the location of the MH (the cell in which the MH is currently residing). If a mobile host were to update its location information every time it crosses a cell boundary, location update operations would impose very high communication overheads. Hence, the cellular coverage area is divided into several registration areas (RAs). An RA can be as small as a cell, or as large as the entire coverage area. Typically, a cluster of neighboring cells constitutes an RA. A mobile host's location information is updated when it moves from one RA to another. This location information is stored at location servers. A location query operation returns the identity of the RA, henceforth referred to as RA_id, in which the queried MH is present. Once the RA_id of the MH has been determined, paging is done to determine the RA's cell in which the MH is present.

3 Previous Work

The IS-41 [6] and GSM MAP [11] standards use centralized location management schemes. For example, IS-41 associates a home location register (HLR) with each MH. When an MH moves from one RA to another, the MH registers its location with the visitor location register (VLR) of the RA it enters. If the old and new RAs are associated with the same VLR, no further action is required. Otherwise, the new VLR informs the HLR that it will now be serving the MH. The HLR deregisters the MH at the old VLR and associates the MH with the new VLR. For location query, the querying node can identify the HLR to be queried based on the identity of the MH it is trying to locate. On receiving the query, the HLR forwards the query to the VLR where the queried MH is known to be registered. Various improvements to the standards have been suggested.¹ For example, to reduce the signaling traffic due to location registration, forwarding [9] and local anchoring [8] have been proposed. Both these schemes reduce the location update overheads, but may incur extra delays during location queries. Depending on the call-to-mobility ratio, the two approaches lead to varying degrees of performance improvement. Distributed location management schemes [2, 15, 16] employ hierarchical databases such that each database need not store location information about every MH in the system. The basic idea is to have a tree structure with the mobile hosts as the leaves and the regional directories as the internal nodes and root node. A regional directory stores location information of leaves in its subtree. In [2], a hierarchy of distributed regional directories is maintained. The ith-level regional directory enables a node, static or mobile, to track any mobile host within a distance of 2^i from it.

¹ A thorough survey of recent work in location management can be found in [1].


Corresponding to each level i, read and write sets of directories are associated with nodes u, v such that read_i(u) ∩ write_i(v) ≠ ∅ for all u, v within 2^i distance of each other. The write set for a node is the set of directories where the location information of the node is stored. The read set for a target node is the set of directories that will be probed to find the location of the target node. If location information is not found at the ith-level regional directories, the region of search is expanded by probing higher-level directories. So, in [2], multiple rounds of probes may be required. If a mobile host is highly mobile, the time to locate it will be greater. Moreover, there has been a trend towards reduction of cell size, i.e., there is a shift from cellular systems to microcellular and nanocellular systems. If the distance is measured in terms of number of cells, reduction in cell size will lead to an increase in location update and query time. Broadcasting updates and queries to all the location servers is a fast, but expensive, solution. The locality of reference patterns is exploited in [15]. The notion of a working set for mobile hosts is introduced. Nodes in an MH's working set communicate with the MH more frequently than nodes that are not in the working set. A location management scheme has been described in [15] in which an MH can dynamically determine its working set depending on the call-to-mobility ratio between network node and MH pairs. Nodes in the working set are informed about the location update when an MH moves, while other nodes are made to search for the MH when they wish to communicate with the MH. In [3], some mobile service stations (MSSs) are designated as reporting centers (similar to location servers). Location update is done when an MH moves into the cells corresponding to the reporting centers. When an MH has to be located, it is searched for in the vicinity of the reporting center at which the last update was made.
However, an issue that needs to be addressed is how the number and location of reporting centers are determined. Also, what should be the size of the registration area (RA) or the working set [15]? This is important because a large RA would mean high paging costs when the location of an MH is to be determined. A simple approach is to determine the size of RAs a priori. However, such a solution does not account for different call-to-mobility ratios or mobility patterns of individual MHs. Updating the location directory each time an MH moves from one cell to another (i.e., having one cell per registration area) can be very expensive. Three alternatives, namely, time-based, number of movements-based, and distance-based strategies for MH location updates have been proposed in [4]. So, the size of an RA is not static. Instead, it depends on the mobility characteristics of the MH. In this paper we assume that a policy to determine the size and layout of registration areas (RAs) has already been decided. We proceed from that point and propose an algorithm for efficient location updates and queries. Also, once the RA for the queried MH has been determined, any

paging strategy can be employed to pinpoint the MH's cell without adversely affecting the intrinsic advantages of the proposed location update and query algorithm.

4 Motivation and Basic Idea

The centralized location management schemes lack scalability. Moreover, a centralized scheme does not provide complete and true mobility for the following reason: even though a mobile host is free to roam throughout the coverage area and still be accessible, its location information is, in a sense, immobile, as all location registrations have to be conveyed to the HLR, whose location is fixed. The distributed schemes suffer from increased latency in location queries as they may have to perform multiple directory look-ups in a sequential fashion, or follow a series of forwarding pointers. Hence, we propose a novel approach to location management which is motivated by the two shortcomings mentioned above. Its salient features are:

1. The concept of home location register (HLR) is discarded. The location of a mobile host is one of the parameters of a function that determines the set of location servers (directories) storing its location.

2. The increased latency of distributed schemes is due to the fact that they do not exploit the intrinsic concurrency in the query process. We probe a small set of location servers concurrently and can determine the registration area in which an MH is present in time equal to a single round-trip message delay. Similarly, location updates are sent concurrently to a set of location servers.

Using Quorums for Fast Update and Query

We use a new quorum-based strategy for location management which has the following characteristics: (i) location updates for a mobile host are multicast to a set of location servers (update set), (ii) queries for a mobile host are multicast to a set of servers (locate set), and (iii) the update and locate sets for a mobile host must intersect. Next, we identify the criteria for the composition of update and locate sets. Mobile hosts exhibit a spatial locality of reference: even though all nodes in the system can potentially communicate with a given node, the bulk of the references for the given node originate from only a subset of nodes (referred to as the working set in [15]). The nodes in the working set may be clustered in different parts of the network. So, to reduce query costs, it is advisable to have location servers for the MH in the vicinity of such clusters. Using only the MH's identity to determine its location servers fails to exploit the locality of reference characteristics. Regardless of where the MH is located in the

network, its location information will always be stored at the same servers. As mentioned above, such an approach cannot support true and complete mobility of information. Determining the location servers of an MH based solely on the cell in which that MH is present will lead to uneven distribution of responsibility. Quite often a significant fraction of MHs are concentrated in a very small area, while there is a very low density of MHs in the rest of the network. For example, most of the MHs may be situated on the highways and other major streets of a city during the morning and evening rush-hour traffic, and most of the MHs may be concentrated in the business districts of the city during the rest of the day. In such situations, the directory servers [2] and reporting centers [3] in the high-density regions will be overburdened, while the directory servers and reporting centers in other regions will be comparatively lightly loaded. Hence, it is desirable that the location servers storing the location information of an MH be a function of the identities of the MH as well as the cell in which the MH is present. Such a function can be represented as follows: h : MSS × MH → S_LS, where MSS denotes the mobile service station of the cell in which the MH is present, and S_LS denotes a set of location servers. The location servers corresponding to an MH will change as the MH moves in the network from one registration area (RA) to another. It is to be noted that an MH can enter an RA through different cells (corresponding to different MSSs). Hence, given an MH in a particular RA, its location information may be stored at different sets of location servers depending on the RA's cell in which the MH was present at the time of location update. Also, different MHs in the same cell need not have the same set of location servers. Nodes that wish to locate an MH should be able to access at least some of the location servers of the MH quickly, and in an inexpensive fashion.
Function h(), described above, can be employed to determine the set of location servers that should be queried when an MH is to be located. The set h(MSS, MH) can represent the set of location servers that a node, in the cell represented by MSS, should query when it wishes to locate a mobile host MH. Thus, function h() determines the write set for location updates when an MH moves, and the read set for querying the location of the MH. The read and write sets (quorums) for every (MSS, MH) pair intersect. So, the latest location information of an MH can be accessed in a single round of message exchange.
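The single-round guarantee follows directly from the read/write intersection. The toy Python sketch below (the quorum memberships, server ids, and RA names are invented for illustration and are not the paper's construction) shows why one multicast round suffices when every read quorum intersects every write quorum:

```python
# Six hypothetical location servers; each holds (RA_id, timestamp) or None.
servers = {s: None for s in range(6)}
write_quorum = {0, 1, 2, 3}    # update set chosen for the MH's current cell
read_quorum = {3, 4, 5}        # locate set chosen for the querying cell

# Location update: multicast the new (RA, timestamp) to the whole write quorum.
for s in write_quorum:
    servers[s] = ("RA-17", 42)

# Location query: multicast to the read quorum; the guaranteed common
# server (here server 3) ensures at least one non-NULL reply.
replies = [servers[s] for s in read_quorum if servers[s] is not None]
latest = max(replies, key=lambda r: r[1])   # pick the reply with latest timestamp
print(latest)                               # -> ('RA-17', 42)
```

Because the intersection is guaranteed by construction, the querier never needs a second, expanded round of probes.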

Hashing Based Quorum Selection

The function h() used to map (MSS, MH) pairs to read and write sets should be such as to avoid disparity in the load (location update and query operations) seen by different location servers. If the quorum system is such that each location server is contained in roughly the same number of quorums, then load balancing among location servers can be achieved by ensuring that the function h() maps onto each quorum with equal likelihood. This indicates that the function h() should be a uniform hashing function taking an (MSS, MH) pair as input and returning S_LS.

However, a uniform hashing function that ensures load balancing at a particular time cannot guarantee load balancing among location servers at other times. For example, a few MHs may get hot, i.e., experience location updates and/or queries at a much greater frequency than other MHs. As a consequence, even though the hash function h() maps an equal number of (MSS, MH) pairs to each quorum, some quorums may be receiving queries and updates more often than other quorums. Also, there is a possibility that a group of new MHs join the system. The geographical distribution of these new MHs and of nodes querying their location may be such that the function h() maps their location operations more frequently onto some quorums than others. In such a situation, even if no MH is hot, some of the location servers get overloaded while others are not so heavily loaded.

Load Balancing with Dynamic Hashing

A solution to the problem mentioned above is to employ a family of universal hash functions [5]. Periodically, the system can switch from one hash function in the family to another. This ensures that even if a particular hash function does not provide good load balancing under certain circumstances, over the long run load balancing among location servers is achieved. However, using a family of universal hash functions is not a feasible solution for the following reasons:

• Switching between hash functions may cause an (MSS, MH) pair to be hashed to a quorum that is different from the quorum selected by the previous hash function. This may require reorganizing location information about all the mobile hosts at all the location servers. Therefore, each switch can be very time consuming and communication intensive.

• Long-term load balancing may be acceptable from the point of view of the location servers. However, during the switch all applications that require location information may experience delays. This is because location updates and queries will have to be suspended for the duration of the switch. From the user's perspective, temporary outages in location service may not be acceptable.

A better solution to the problem is to employ dynamic hashing. So, we define h(MSS, MH) to be a hash function whose range of values can expand or contract, depending on the load in the system. Each value in the range of function h() corresponds to a quorum. When overall load increases on the location servers, new location servers are added, creating new quorums. So, location updates and queries are spread over a larger number of location servers. Hence, for the same number of location update and query operations, the load on individual location servers declines. Conversely, when the frequency of location updates and queries goes down and all location servers have low utilization, some location servers can become dormant, leading to fewer and/or smaller

quorums. Even though the load on each location server, hitherto underutilized, increases, response time for update messages and queries will be acceptable. Also, the servers that are no longer being used for location management can be used to support other services. Demand driven quorum addition and deletion is similar to the military's policy of having a limited number of active duty units and a number of reserve units. During peaceful times (low to moderate load) only the active duty units are deployed. However, during localized emergencies (uneven load distribution among quorums) or during wars (high load situation) the reserve units are deployed wherever they are needed to assist the active duty units. When normalcy is restored, the reserve units are released and can return to their regular tasks.
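The expand/contract behavior described above is the mechanism of classic extendible hashing. Below is a minimal, self-contained Python sketch of extendible hashing; it manages key/value buckets rather than quorums of location servers, and is intended only to illustrate how a trie-like directory doubles and buckets split on demand. All names and parameters here are ours, not the paper's.

```python
class ExtendibleHash:
    """Toy extendible hash table: the directory doubles and buckets split
    only when the load (bucket occupancy) demands it."""

    def __init__(self, bucket_cap=2):
        self.global_depth = 1
        self.bucket_cap = bucket_cap
        b0, b1 = {"depth": 1, "items": {}}, {"depth": 1, "items": {}}
        self.directory = [b0, b1]              # indexed by low-order hash bits

    def _bucket(self, key):
        return self.directory[hash(key) & ((1 << self.global_depth) - 1)]

    def insert(self, key, value):
        b = self._bucket(key)
        b["items"][key] = value
        while len(b["items"]) > self.bucket_cap:
            self._split(b)                     # expand only the hot bucket
            b = self._bucket(key)

    def _split(self, b):
        if b["depth"] == self.global_depth:    # directory must double first
            self.directory += self.directory
            self.global_depth += 1
        b["depth"] += 1
        new = {"depth": b["depth"], "items": {}}
        bit = 1 << (b["depth"] - 1)
        old_items, b["items"] = b["items"], {}
        for i, slot in enumerate(self.directory):
            if slot is b and (i & bit):        # half the slots adopt the new bucket
                self.directory[i] = new
        for k, v in old_items.items():         # rehash into old or new bucket
            self._bucket(k)["items"][k] = v

    def lookup(self, key):
        return self._bucket(key)["items"].get(key)

eh = ExtendibleHash()
for k in range(8):
    eh.insert(k, f"MH-{k}")
```

Only the overloaded bucket splits; the rest of the table, like the rest of the location servers in the proposed scheme, continues to operate undisturbed.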

5 Location Update and Query

As stated earlier, the function h() maps each (MSS, MH) pair to a quorum for location update as well as query operations. So, during each location operation, a message is multicast to the location servers belonging to the quorum.

5.1 Determination of Location Servers

MHs that are hot will have their location queried and/or updated more frequently than other MHs. Hence, the location servers corresponding to hot nodes will receive more messages than location servers that correspond to cold MHs. Also, nodes querying the locations of hot MHs

may be spread all over the network. Therefore, to provide load balancing among location servers, a greater number of location servers, spread throughout the network, should share responsibility for maintaining location information about hot MHs. Fewer location servers need to maintain location information about cold MHs. For this purpose, one or more aliases, referred to as virtual MH identities, are assigned to each MH. A hot MH is assigned multiple virtual identities, while a cold MH is assigned a single virtual identity. In essence, the location management scheme considers a hot MH to be equivalent to multiple cold MHs. Determination of location servers for an MH involves three steps:

1. Mapping the MH identity (MH_id) to a virtual MH identity (VMH_id): Let each MH_id correspond to a system-wide unique non-negative integer. Non-negative integers are also used to represent VMH_ids. Let there be an integer constant x which is a parameter of the algorithm, and is determined a priori. If an MH is cold, its VMH_id = MH_id + x. If an MH is hot, then it is assigned multiple virtual identities. Solely for the purpose of explanation in this paper, we assume that each hot MH is assigned at most two virtual identities: VMH_id1 = MH_id + x, and VMH_id2 = an integer between 0 and x − 1 (inclusive) that has not already been assigned as a virtual identity to some other hot MH.

When a hot MH turns cold, it relinquishes its second virtual identity, which is returned to the pool of available virtual identities and can be assigned to hot MHs in the future. It is to be noted that the first virtual identity of each node is unique, as the MH_id of each MH is assumed to be unique. Thus, up to x hot MHs can be efficiently handled, each being assigned its second virtual identity from the range [0, x − 1] of integers. If there are more than x hot MHs, then the surplus hot MHs cannot be assigned multiple virtual identities, and will have to be treated no differently from cold MHs. This is obviously a serious limitation of the virtual identity strategy, and inhibits scalability of the solution. In Section 6, we describe how to employ dynamic hashing to handle the problem. Basically, if a set of location servers is overloaded with update and query operations because a large number of mobile hosts served by the set are hot, the location management responsibilities for these mobile hosts are distributed among a larger set of location servers. Thus, load balancing is achieved. Right now, it will suffice to say that dynamic hashing is employed to ensure coarse-grain load balancing while virtual identities are used for fine-grain load balancing.

2. Given an MSS_id, denoting the cell in which the mobile host is present, and a VMH_id for that mobile host, we employ double hashing [5], as follows:

h(MSS_id, VMH_id) = (h'(MSS_id) + VMH_id · h''(MSS_id)) mod m

where [0, m − 1] is the range of the hash function h(), and h'() and h''() are auxiliary hash functions. Functions h'() and h''() are uniformly distributed over the range [0, m − 1], and h''(MSS_id) is relatively prime to m. Therefore, given an (MSS_id, VMH_id) pair, h(MSS_id, VMH_id) will be uniformly distributed over the range [0, m − 1], i.e., mapped uniformly over all the location server quorums in the system [5].

3. Corresponding to each i = h(MSS_id, VMH_id), there is a set S_i of location servers with the following properties:

(a) S_i ⊈ S_j, for 0 ≤ i, j ≤ m − 1, i ≠ j.
(b) S_i ∩ S_j ∩ S_k ≠ ∅, for 0 ≤ i, j, k ≤ m − 1.
(c) |S_0| = … = |S_{m−1}| = K.
(d) Any location server is contained in K of the S_i's, 0 ≤ i ≤ m − 1.

Properties (c) and (d) represent the equal effort and equal responsibility properties, respectively. Together they represent the symmetry property. Property (b) is more strict than what is usually specified for quorum systems. The motivation for this stronger constraint

will become apparent when we discuss the accuracy of the location management scheme in Section 5.4. So, if an MH with virtual identity VMH_id, in the cell corresponding to MSS_id, updates its location information, the update is done at all location servers in the set S_j such that j = h(MSS_id, VMH_id). The set S_j of location servers is the write set for the (MSS_id, VMH_id) pair. The value of j is evenly distributed over the range [0, m − 1]. Also, the quorums are symmetric. Hence, the responsibility for location management of all the virtual MHs (there are as many virtual MHs as the number of virtual identities assigned) is evenly distributed among the location servers.
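Steps 1 and 2 above can be sketched as follows. The constants m and x and the functions h1/h2 standing in for the auxiliary hashes h'() and h''() are illustrative choices of ours; m is taken prime so that any non-zero h2 value is automatically relatively prime to m:

```python
m = 13          # number of quorums; prime, so every value 1..m-1 is coprime to m
x = 100         # a priori constant x: second virtual ids come from [0, x-1]

def virtual_ids(mh_id, hot_id=None):
    """Step 1: a cold MH gets the single id MH_id + x; a hot MH is also
    assigned a second id from the pool [0, x-1]."""
    ids = [mh_id + x]
    if hot_id is not None:          # only hot MHs carry a second identity
        assert 0 <= hot_id < x
        ids.append(hot_id)
    return ids

def h1(mss_id):                     # stand-in for the auxiliary hash h'()
    return mss_id % m

def h2(mss_id):                     # stand-in for h''(); never 0, so coprime to prime m
    return 1 + (mss_id % (m - 1))

def quorum_index(mss_id, vmh_id):
    """Step 2: double hashing onto the quorum range [0, m-1]."""
    return (h1(mss_id) + vmh_id * h2(mss_id)) % m

# A hot MH's updates and queries are spread over (up to) two quorums per cell.
print([quorum_index(7, v) for v in virtual_ids(42, hot_id=5)])   # -> [12, 8]
```

Because the two virtual identities usually hash to different quorums, a hot MH's load is divided among roughly twice as many location servers as a cold MH's.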

5.2 Updating Mobile Host Location

Let old_MSS be the cell of the previous registration area where the MH was located when the MH initiated the previous location update. Let new_MSS be the cell of the new registration area in which the MH is present when it is about to initiate the latest location update. The following operations are performed to update the location information of the MH when it moves from the previous registration area to the new registration area:

1. Determine all virtual identities, VMH_id, for the MH with identity MH_id.

2. purge_set ← ∅; inform_set ← ∅.

3. for all VMH_id: { j ← h(old_MSS, VMH_id); purge_set ← purge_set ∪ S_j }.

4. for all VMH_id: { j ← h(new_MSS, VMH_id); inform_set ← inform_set ∪ S_j }.

5. Send a DELETE message, containing MH_id, to all location servers in purge_set − inform_set.

6. Send an ADD message, timestamped with the MH's local clock value and containing MH_id and new_MSS, to all location servers in inform_set − purge_set.

7. Send a REPLACE message, timestamped with the MH's local clock value and containing MH_id and new_MSS, to all location servers in inform_set ∩ purge_set.

When a location server receives a DELETE message it deletes location information about the MH_id carried in the message. On receiving an ADD message, a location server adds an entry for MH_id indicating new_MSS as its location at the time indicated by the timestamp. When a REPLACE message is received by a location server corresponding to an MH_id, the old MSS

and old timestamp value are replaced by the new MSS and new timestamp value in the MH's location entry.
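The update procedure above can be sketched with a small in-memory toy system. The quorum sets, the quorum() function standing in for h(), and the server table below are all invented for illustration:

```python
# Toy quorum sets S_j and an in-memory server table.
S = {0: {"A", "B", "C"}, 1: {"B", "C", "D"}, 2: {"C", "D", "E"}}
servers = {name: {} for name in "ABCDE"}       # server -> {MH_id: (MSS, timestamp)}

def quorum(mss_id, vmh_id):                    # illustrative stand-in for h()
    return (mss_id + vmh_id) % len(S)

def update_location(mh_id, vmh_ids, old_mss, new_mss, ts):
    # Steps 2-4: build the purge and inform sets over all virtual identities.
    purge_set, inform_set = set(), set()
    for v in vmh_ids:
        purge_set |= S[quorum(old_mss, v)]
        inform_set |= S[quorum(new_mss, v)]
    # Steps 5-7: DELETE, ADD, and REPLACE messages.
    for srv in purge_set - inform_set:         # DELETE the old entry
        servers[srv].pop(mh_id, None)
    for srv in inform_set - purge_set:         # ADD a new entry
        servers[srv][mh_id] = (new_mss, ts)
    for srv in inform_set & purge_set:         # REPLACE the entry in place
        servers[srv][mh_id] = (new_mss, ts)

# MH 42 (single virtual id 142) moves twice; timestamps from its local clock.
update_location(42, [142], old_mss=0, new_mss=1, ts=1)
update_location(42, [142], old_mss=1, new_mss=2, ts=2)
```

After the second move, exactly the servers of the new quorum hold the entry; servers that left the quorum have had it deleted, so no stale copy lingers outside the current write set.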

5.3 Locating a Mobile Host

When a mobile service station with identity MSS_id, or a node inside the cell corresponding to this MSS, wishes to locate an MH whose identity is MH_id, the following actions are taken by the MSS:

1. Determine all virtual identities, VMH_id, for the MH with identity MH_id.

2. query_set ← ∅.

3. Randomly select one VMH_id for the MH; compute j ← h(MSS_id, VMH_id); query_set ← S_j. If the MH is cold it will have only one VMH_id, which is always selected. Selection of VMH_id2 for a hot MH may require a global table lookup.

4. Send a QUERY to all location servers in query_set to locate the MH with identity MH_id. The query_set is similar to the read set, described in previous algorithms, for the (MSS_id, VMH_id) pair.

5. If a queried server contains location information about MH_id, it sends this information in its RESPONSE along with the associated timestamp value. Otherwise, the server sends a NULL response.

6. On receiving RESPONSE(s) containing location information, select those with the latest timestamp and extract the location information contained in them.

As S_i ∩ S_j ≠ ∅ for 0 ≤ i, j ≤ m − 1, the read and write sets for every pair of tuples (MSS1, VMH_id) and (MSS2, VMH_id), respectively, are bound to intersect. Therefore, every location query will return some location information about the queried MH. However, this location information may be a little outdated. This is because location information is updated only when the MH moves from one RA to another. If the location information is outdated, a good strategy is to start looking for the mobile host in the vicinity of the cell indicated by the outdated location information, and slowly expand the region of search. This is similar to the approach described in [3].
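The query side can be sketched the same way; the server contents and timestamps below are invented to show how NULL replies are discarded and the reply with the latest timestamp wins:

```python
# Toy server table: server -> {MH_id: (RA_id, timestamp)}.
servers = {
    "A": {42: ("RA-3", 5)},    # stale entry left by an earlier update
    "B": {},                   # knows nothing about MH 42: NULL reply
    "C": {42: ("RA-9", 8)},    # entry from the latest update
}
query_set = {"A", "B", "C"}    # S_j for j = h(MSS_id, VMH_id)

# Steps 4-6: multicast QUERY, discard NULL responses, keep latest timestamp.
replies = [servers[s].get(42) for s in query_set]
replies = [r for r in replies if r is not None]
ra_id, ts = max(replies, key=lambda r: r[1])
print(ra_id)                   # -> RA-9
```

One round of messages resolves the RA even though some quorum members hold stale or no information, which is exactly why the timestamp accompanies every update.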

5.4 Accuracy of Location Operations

We define accuracy of location operations as a qualitative measure for the collective performance of the two location operations: update and query. Accuracy indicates the likelihood of a query

operation for an MH returning the location information stored by the latest update operation corresponding to that MH. High accuracy of location operations is desirable. Due to the way quorums are constructed, the set of location servers probed by a query message (query_set) contains some location servers that also belong to the set of location servers that received the latest location update (inform_set). The query_set also has some servers in common with the set that received the previous location update (purge_set). From the construction of quorums, it is guaranteed that inform_set ∩ purge_set ∩ query_set ≠ ∅. With regard to the location servers belonging to the inform_set and purge_set, there are six possible situations for the arrival of a location query message:

1. The query reaches a location server belonging to purge_set − inform_set before the old information is DELETED during location update: the location server sends outdated location information.

2. The query reaches a server belonging to purge_set − inform_set after the old location information is DELETED: the location server sends a NULL reply.

3. The query reaches a server belonging to inform_set − purge_set before location information is ADDED there: the location server sends a NULL reply.

4. The query reaches a server belonging to inform_set − purge_set after location information is ADDED there: the latest location information is sent in the reply.

5. The query reaches a server belonging to inform_set ∩ purge_set before location information is REPLACED there: outdated location information is sent in the reply.

6. The query reaches a server belonging to inform_set ∩ purge_set after location information is REPLACED there: the latest location information is sent in the reply.

Thus, for each of the three sets: (i) purge_set − inform_set, (ii) inform_set − purge_set, and (iii) purge_set ∩ inform_set, there are two possibilities for the relative arrival times of a location update message and a location query message.
Only when situations 3 and 5 simultaneously occur, i.e., all location servers common to the inform set and query set receive the query message before the update message (ADD or REPLACE), does the query fail to return the latest location information. The location information returned is then the information stored during the previous location update. Otherwise, assuming that the local clocks of all nodes increase monotonically, some location servers may return outdated location information with an earlier timestamp, but at least one location server definitely returns the latest location information with a later timestamp. Therefore, the location information with the latest timestamp, which also happens to be the latest location information, is selected.

If, instead of specifying the stronger three-set non-empty intersection constraint between quorums, a weaker constraint S_i ∩ S_j ≠ ∅ had been specified, the following situation could arise: inform set ∩ purge set ∩ query set = ∅. The query could then reach the servers belonging to purge set − inform set after the old location information is deleted, and the servers belonging to inform set − purge set before the new location information is added. In such a situation, all the queried location servers send a NULL reply and not even old location information is retrievable. The query could then be re-sent to the query set after a short while. By then, at least one location server in inform set ∩ query set would have received the new location information.

In summary, a location query has a very small probability of failing to return the latest location information, and only during the time interval that a location update message is in transit from a cell to a location server in the inform set. As these messages travel along the high bandwidth fixed wireline network, we can safely assume that message propagation time is small. We can also assume that the time between successive location updates for a mobile host is much greater than a message's propagation time.² Therefore, most of the time location queries return the latest location information.
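The timestamp-based reply resolution described above can be sketched in a few lines. This is a sketch only: the (timestamp, MSS_id) encoding of a reply is our assumption, not the paper's message format.

```python
# Resolve a quorum query from the per-server replies. Each server returns
# either None (a NULL reply) or a (timestamp, mss_id) pair; the reply
# carrying the latest timestamp is the latest location information.

def resolve_query(replies):
    """Return the (timestamp, mss_id) pair with the latest timestamp,
    or None if every server sent a NULL reply (re-send after a delay)."""
    located = [r for r in replies if r is not None]
    if not located:
        return None
    return max(located)  # tuples compare by timestamp first

# One server still holds the old location (timestamp 4), another already
# holds the new one (timestamp 7): the newer entry wins.
assert resolve_query([None, (4, "MSS_17"), (7, "MSS_23")]) == (7, "MSS_23")
assert resolve_query([None, None]) is None
```

With the stronger three-set intersection constraint the all-NULL branch cannot occur; under the weaker pairwise constraint it corresponds to the re-send case discussed above.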

5.5 Load Balancing by Virtual Identity

Hot mobile hosts have at least one, and often two, quorums of location servers associated with them. A querying node probes location servers belonging to only one of the two possible quorums for the hot mobile host. There are two possible cases:

1. Let a location server belong to both the quorums (read sets) corresponding to a (MSS of querying node, hot MH) pair. In that case, to the location server the hot MH appears to be equivalent to two cold virtual MHs (on account of the two virtual identities of the hot MH). By the symmetry property of quorum construction, and the fact that the hashing function is uniformly distributed over all the quorums, this location server stores location information of fewer MHs than a location server that stores location information only for cold mobile hosts.

2. Let a location server belong to only one of the two read sets. In that case, as query messages are sent to only one arbitrarily selected read set, this location server will, on average, receive only half the location queries for the hot MH.

Thus, a location server storing location information about a hot MH either stores location information about fewer MHs, or receives only half the queries corresponding to the hot MH, or both.

² If the location update conditions proposed in [4] were employed, the interval between successive location updates for an MH would depend on the triggering threshold set for the time-based, number-of-movements-based, or distance-based policy (whichever is employed) and the mobility pattern of the MH.


Hence, double hashing along with virtual MH identities ensures that:

- As long as the number of hot mobile hosts is no greater than x, location information about mobile hosts is fairly distributed among all the location servers constituting the distributed location directory, irrespective of the geographical distribution of the mobile hosts.

- The number of location queries received by each participating location server is fairly distributed, regardless of whether it stores location information about only hot MHs, only cold MHs, or a combination of hot and cold MHs.

The performance can be further optimized if hot nodes are not restricted to having only two virtual identities. Instead, the number of virtual identities of an MH could vary from 1 to n, for an integer constant n. The more frequently an MH is queried, the greater the number of virtual identities assigned to it.

The determination of whether an MH is hot or cold is done by the MSS of the cell in which the MH is present. If at time t, the number of queries per unit time received by the MSS on behalf of the MH, from nodes outside the cell, averaged over the interval [t − T, t], exceeds a pre-determined threshold, the MH is hot. Otherwise, it is cold. T is a pre-selected constant. A small value of T makes transitions between hot and cold states very sensitive to fluctuations in query rates. A large value of T, on the other hand, reduces the sensitivity and avoids state changes due to short-lived aberrations in query patterns.
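The hot/cold test an MSS performs could look like the following sketch. The class and parameter names are ours; the paper only fixes the averaging window T and a query-rate threshold.

```python
# An MSS-side monitor: queries arriving from outside the cell are recorded,
# and an MH is declared hot when its query rate averaged over the sliding
# window [t - T, t] exceeds a pre-determined threshold.
from collections import deque

class HotColdMonitor:
    def __init__(self, T: float, threshold: float):
        self.T = T                    # averaging window length
        self.threshold = threshold    # queries per unit time
        self.query_times = deque()    # arrival times, oldest first

    def record_query(self, t: float) -> None:
        self.query_times.append(t)

    def is_hot(self, t: float) -> bool:
        # Drop queries that fall outside the window [t - T, t].
        while self.query_times and self.query_times[0] < t - self.T:
            self.query_times.popleft()
        return len(self.query_times) / self.T > self.threshold

mon = HotColdMonitor(T=10.0, threshold=0.5)   # hot above 5 queries/window
for t in range(8):
    mon.record_query(float(t))                # 8 queries in [0, 7]
assert mon.is_hot(8.0)                        # 0.8 queries/unit time: hot
assert not mon.is_hot(30.0)                   # all queries aged out: cold
```

A small T makes the classification twitchy and a large T makes it sluggish, mirroring the trade-off described above.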

6 Load Balancing by Dynamic Hashing

In Section 5.5 we showed how assigning multiple virtual identities to hot mobile hosts results in load balancing. However, the virtual identity approach is not scalable, for two reasons:

1. At most x hot mobile hosts can be assigned multiple virtual identities. The value of x is determined a priori.

2. The number of mobile hosts may increase significantly. Even if no mobile host is hot, the collective location update and query load of all mobile hosts can degrade the performance of the location servers.

Both problems can be handled by adding new location servers to the system when the load increases. The new location servers can be integrated with the existing location servers using tries, a dynamic hashing technique, as described in [7]. Basically, if a quorum of location servers is heavily loaded, a new quorum is added to the system. Location management responsibilities of the old quorum are then shared between the old quorum and the newly added quorum. Also, when the load decreases, some of the location servers can be released, and the location management responsibilities of two lightly loaded quorums can be reassigned to just one quorum. Quorum addition and deletion, and the integration of new location servers into the system, can be accomplished seamlessly. Also, the location information of only a small subset of mobile hosts has to be redistributed among the location servers during this process.

6.1 Dynamic Quorum System Approach

In Section 5.1 the hash function h() has been defined as:

    h(a, b) = (h'(a) + b · h''(a)) mod m

Let m be a power of 2. Then we can define an entire series of hash functions, H, of the form:

    h_i(a, b) = (h'(a) + b · h''(a)) mod 2^i,  for 0 ≤ i ≤ limit,

for some sufficiently large constant limit. Therefore,

    h_i(a, b) = h_{i−1}(a, b)  or  h_i(a, b) = h_{i−1}(a, b) + 2^{i−1}.

So, given MSS_id and VMH_id, the hash function h_i can be applied to map every (MSS_id, VMH_id) pair to an integer in the range 0 to 2^i − 1. Each integer corresponds to a quorum of

location servers. Thus, each quorum in the location management strategy is similar to a bucket for holding data items that are hashed to the same value.

Each value returned by the hash function is also associated with a non-negative integer variable referred to as its local depth. When hash function h_i is in use, the maximum permissible value of local depth is i. Initially, let hash function h_i be employed to map (MSS_id, VMH_id) pairs to values in the range 0 to 2^i − 1. Each of these values, v, has its local depth set to i, and there is a one-to-one mapping between a value and a quorum. Let us refer to the quorum of location servers corresponding to the value v as S_v. Let us assume that up to 2^limit distinct quorums of location servers can be formed.

Let the cumulative update and query rate (load) corresponding to S_v exceed a pre-specified threshold. This is similar to a bucket overflow in the context of hashing. In such a situation, the action taken depends on the local depth of S_v. There are two distinct cases with respect to local depth:

Case 1: Local depth equals i

Let local_depth(v) = i at the time when S_v is overloaded. Then the location management scheme switches from hash function h_i to the next higher hash function h_{i+1}. (The situation where local_depth(v) < i is discussed in Case 2 below.) As a result of the switch to a higher hash function, if h_i(MSS_id, VMH_id) = v, then:

    h_{i+1}(MSS_id, VMH_id) = v  or  h_{i+1}(MSS_id, VMH_id) = v + 2^i.

Also, local_depth(v) and local_depth(v + 2^i) are set to i + 1. For all values 0 ≤ w < 2^i with w ≠ v, the value of local_depth(w) remains unchanged. For all newly created values 2^i ≤ w < 2^{i+1} with w ≠ v + 2^i in the range of function h_{i+1}, local_depth(w) = local_depth(w − 2^i). Also, if value w was previously mapped to quorum S_w, then as a result of the split, hash values w and w + 2^i are mapped to quorums S_{w mod 2^{local_depth(w)}} and S_{(w+2^i) mod 2^{local_depth(w)}}, respectively.

Assertion 1 As a result of the hash function switch, the (MSS_id, VMH_id) pairs that were all previously mapped to the heavily loaded quorum S_v are now split between two distinct quorums, namely S_v and S_{v+2^i}.

Proof: After the split, local_depth(v) = local_depth(v + 2^i) = i + 1. Since 0 ≤ v < 2^i, we have 0 ≤ v + 2^i < 2^{i+1}. So, v mod 2^{i+1} = v, and (v + 2^i) mod 2^{i+1} = v + 2^i. Therefore, v mod 2^{i+1} ≠ (v + 2^i) mod 2^{i+1}. Also, both v and v + 2^i are less than 2^limit. Therefore, the corresponding quorums are distinct.

Assertion 2 After the hash function switch, all other (MSS_id, VMH_id) pairs, that were not mapped to the heavily loaded quorum, are mapped to the same quorum as before.

Proof: For (MSS_id, VMH_id) pairs besides the ones mentioned in Assertion 1, let h_i(MSS_id, VMH_id) = w, so that h_{i+1}(MSS_id, VMH_id) = w or w + 2^i. As the local depth of w and w + 2^i remains unchanged, it can be inferred that local_depth(w) = local_depth(w + 2^i) = d < i + 1. So, 2^i mod 2^d = 0, and w mod 2^d = (w + 2^i) mod 2^d. Therefore, S_{w mod 2^d} = S_{(w+2^i) mod 2^d}.

Inference: As a result of the hash function switch from h_i to h_{i+1}, a quorum of location servers that was heavily loaded now has its load split between two quorums: the previously heavily loaded quorum and a new quorum. Location information of only a fraction of the mobile hosts has to be shifted: from the location servers belonging to the heavily loaded quorum to the location servers belonging to the new quorum. Other mobile hosts and location servers are unaffected by the split.
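The refinement property on which Assertions 1 and 2 rest — h_i(a, b) equals either h_{i−1}(a, b) or h_{i−1}(a, b) + 2^{i−1} — can be checked with a small sketch. The auxiliary hash functions below are stand-ins of our own; the paper does not specify h' and h''.

```python
# Double-hashing family h_i(a, b) = (h'(a) + b * h''(a)) mod 2^i.
def h_prime(a: int) -> int:          # stand-in for h'
    return (a * 2654435761) & 0xFFFFFFFF

def h_double_prime(a: int) -> int:   # stand-in for h'' (kept odd)
    return ((a * 40503) | 1) & 0xFFFF

def h(i: int, mss_id: int, vmh_id: int) -> int:
    return (h_prime(mss_id) + vmh_id * h_double_prime(mss_id)) % (2 ** i)

# Because 2^{i-1} divides 2^i, reducing mod 2^i refines reducing mod 2^{i-1},
# whatever the auxiliary hash functions are:
for mss_id in range(40):
    for vmh_id in range(6):
        for i in range(1, 12):
            lo, hi = h(i - 1, mss_id, vmh_id), h(i, mss_id, vmh_id)
            assert hi == lo or hi == lo + 2 ** (i - 1)
```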

There is a possibility that all (MSS_id, VMH_id) pairs that were previously mapped to the heavily loaded quorum S_v are still mapped to only one quorum after the split (either quorum S_v or S_{v+2^i}). It is very unlikely that such a situation will arise. An easy remedy is to switch to the next higher hash function h_{i+2}, in the hope that this switch will lead to the desired load balancing. In database applications this solution is highly effective in situations of bucket overflow [7].

Case 2: Local depth less than i

Let the hash function in use be h_i, quorum S_v be overloaded, and local_depth(v) = k < i. By construction of the hashing scheme, this indicates that for all w with 0 ≤ w < 2^i and w = v + j · 2^k, where j is an integer, local_depth(w) = k. This indicates that all (MSS_id, VMH_id) pairs that are hashed to such values w are mapped to the same quorum S_v. In this situation, switching up from hash function h_i to h_{i+1} is not required. Instead, the following steps are executed:

1. local_depth(w) = k + 1, for all w such that 0 ≤ w < 2^i and w = v + j · 2^k, j an integer;

2. All (MSS_id, VMH_id) pairs such that h_i(MSS_id, VMH_id) = w are now mapped to quorum S_{w mod 2^{k+1}}.

As a result, all (MSS_id, VMH_id) pairs that were previously mapped to the heavily loaded quorum S_v are now distributed between quorums S_v and S_{v+2^k}. The mapping from other (MSS_id, VMH_id) pairs to quorums remains unchanged.
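Both cases can be captured in a small extendible-hashing style sketch. The directory representation and names are ours; quorum membership itself is not modeled.

```python
# Directory from hash values to quorums. Value v maps to quorum
# S_{v mod 2^{local_depth(v)}}; split() implements Cases 1 and 2.
class QuorumDirectory:
    def __init__(self, i: int):
        self.i = i                                   # h_i is in use
        self.local_depth = {v: i for v in range(2 ** i)}

    def quorum_of(self, v: int) -> int:
        return v % (2 ** self.local_depth[v])

    def split(self, v: int) -> None:
        """Split the overloaded quorum that hash value v maps to."""
        k = self.local_depth[v]
        if k == self.i:                              # Case 1: switch to h_{i+1}
            self.i += 1
            half = 2 ** (self.i - 1)
            for w in range(half, 2 * half):          # new values inherit depth
                self.local_depth[w] = self.local_depth[w - half]
        # Case 2 (and the tail of Case 1): every value congruent to v
        # modulo 2^k moves to local depth k + 1.
        for w in range(v % (2 ** k), 2 ** self.i, 2 ** k):
            self.local_depth[w] = k + 1

d = QuorumDirectory(i=2)        # values 0..3, one quorum each
d.split(1)                      # Case 1: global depth becomes 3
assert d.i == 3
assert d.quorum_of(1) == 1 and d.quorum_of(5) == 5   # load split (Assertion 1)
assert d.quorum_of(3) == d.quorum_of(7) == 3         # others unchanged (Assertion 2)
d.split(2)                      # Case 2: local depth 2 < global depth 3
assert d.quorum_of(2) == 2 and d.quorum_of(6) == 6
```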

6.2 Handling Quorum Switch

Let some (MSS_id, VMH_id) pairs, previously mapped onto quorum S_v, now be mapped onto a new quorum S_{v+2^i}, where i ≥ 0. Then location information has to be reconfigured for the mobile host whose virtual identity is VMH_id and whose latest location update was made while it was present in the cell represented by MSS_id. Reconfiguration of location information is similar to the location update described in Section 5.2, except for one difference: in location reconfiguration, the location information of the mobile host(s) does not change, but the set of location servers storing this information may change. All location servers in the set S_v − S_{v+2^i} are asked to DELETE location information about the mobile host MH. No action needs to be performed at location servers belonging to the set S_v ∩ S_{v+2^i}. Location servers in the set S_{v+2^i} − S_v are asked to ADD location information about MH. The timestamp associated with the MH's location information being added is the same as the timestamp associated with MH's location information at the location servers belonging to the set S_v − S_{v+2^i}.
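The reconfiguration step is plain set arithmetic over the two quorums; a minimal sketch follows (the server ids are made up for illustration).

```python
# Partition the servers of the old quorum S_v and the new quorum S_{v+2^i}
# into the three message groups used during reconfiguration.
def reconfigure(old_quorum: set, new_quorum: set):
    delete_set = old_quorum - new_quorum   # DELETE the MH's entry
    keep_set   = old_quorum & new_quorum   # no action needed
    add_set    = new_quorum - old_quorum   # ADD entry (same timestamp)
    return delete_set, keep_set, add_set

S_old = {0, 1, 2, 5, 8}                    # hypothetical server ids
S_new = {2, 3, 5, 9}
d, k, a = reconfigure(S_old, S_new)
assert d == {0, 1, 8} and k == {2, 5} and a == {3, 9}
```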

6.3 Quorum Addition and Deletion

Note that each time the load of a heavily loaded quorum is split between two quorums, we are in essence adding one new quorum to the quorum system. The addition of quorums can be handled in the following manner:

1. Given a set of location servers, construct a quorum system in advance consisting of 2^limit distinct quorums.

2. Initially, use only a subset of quorums from the quorum set for location management. Let us refer to the quorums in use as active quorums and the remaining quorums as reserve quorums.

3. As load increases, due to the addition of new mobile hosts and/or an increase in the location query and update activity of existing mobile hosts, add quorums from the reserve quorum set to the active quorum set.

Thus, initially not all the location servers may be in use for location management. The set of active quorums may not fully cover the set of location servers. With increasing load, as the number of active quorums increases, more and more location servers are pressed into service. This seems to be contrary to the desirable properties of equal responsibility and equal effort on the part of all location servers. So, we restate the properties as follows:

- All the location servers in use at any given point of time share equal responsibility and expend equal effort in location management.

- If the load (effort) of a non-empty subset of active location servers exceeds a pre-specified threshold, additional location servers are activated. The responsibility for location management is then shared among a larger number of location servers. As a result, the load on each server is reduced.

Just as the set of active quorums expands with increasing load, the set shrinks with decreasing load. When the rate of location update and query operations declines, some quorums can leave the active set and join the reserve set. For example, let a quorum S_v, corresponding to local depth i, and its buddy quorum S_{v+2^{i−1}} at the same local depth both be lightly loaded. Then quorum S_v takes responsibility for the location operations corresponding to (MSS_id, VMH_id) pairs mapped onto itself and onto its lightly loaded buddy quorum. The buddy quorum joins the reserve quorum set, and the local depth for quorum S_v is decreased to i − 1. As the number of active quorums decreases, some location servers may no longer belong to any active quorum.

Assuming general purpose servers are being used for location management, freeing up a location server when the load of location operations decreases implies that the freed server can be used for other tasks. Similarly, when the load imposed by location operations increases and the response time of location operations grows, some general purpose servers can be diverted from other services to location management.

6.4 Quorum Load Measurement

In the previous sections, it has been mentioned that quorums can move between the active and reserve quorum sets depending on the load on the active quorums. Therefore, it is important to have an efficient means to determine a quorum's load. In order to determine a quorum's load, three actions need to be performed:

1. Each time a location update or query message is sent to a location server, the identity of the quorum to which the message is being sent is piggybacked on the message.

2. A location server can belong to multiple active quorums. For each active quorum to which the server belongs, the server maintains an integer counter initialized to zero. When a server receives a location update or query message, it increments the counter corresponding to the quorum identity received in the message.

3. Periodically, all location servers are polled. The values of their counters are read to determine the number of location update and query messages generated for each quorum between the previous poll and the latest poll. Then all counter values are reset to zero.

If polling indicates that the number of location updates and queries corresponding to a quorum in the inter-poll interval exceeds a threshold value, the quorum is declared to be overloaded. It is important to select an appropriate threshold and interval between successive polls to maximize the benefits of dynamic hashing.
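The three steps above amount to per-quorum counters plus a periodic poll; a sketch with names of our own choosing:

```python
# Each server counts update/query messages per quorum (the quorum id is
# piggybacked on every message); a poll reads and resets the counters.
from collections import Counter

class LocationServer:
    def __init__(self):
        self.counts = Counter()            # quorum id -> messages since poll

    def on_message(self, quorum_id: str) -> None:
        self.counts[quorum_id] += 1

    def poll(self) -> Counter:
        snapshot, self.counts = self.counts, Counter()
        return snapshot

def overloaded_quorums(servers, threshold: int) -> set:
    total = Counter()
    for s in servers:                      # aggregate over all servers
        total.update(s.poll())
    return {q for q, n in total.items() if n > threshold}

s1, s2 = LocationServer(), LocationServer()
for _ in range(6):
    s1.on_message("S3")
for _ in range(5):
    s2.on_message("S3")
s2.on_message("S7")
assert overloaded_quorums([s1, s2], threshold=10) == {"S3"}   # 11 > 10
assert overloaded_quorums([s1, s2], threshold=0) == set()     # counters reset
```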

7 Quorum Construction

The performance of the location management scheme depends on the construction of the read and write sets (quorums) for location query and location update operations, respectively. Let the total number of location servers in the network be N. In order to minimize the communication overhead of the algorithm, it is necessary that the size of each quorum S_i be kept as small as possible. Also, to distribute the responsibility of storing the distributed location directory fairly among the location servers, the quorums should be symmetric. Several quorum construction strategies have been proposed [13, 14]. We briefly describe two simple quorum construction strategies that satisfy the constraint S_i ∩ S_j ≠ ∅ for any two quorums.

Iterative approach: As described in [12], initially the quorums are trivially constructed, each containing just over N/2 location servers whose identities form a contiguous sequence. An iterative reduction of the quorum size is then performed. In each iteration, the quorum (sequence of location servers) is partitioned into three partitions of roughly the same size, and the middle partition is discarded. The other two partitions are further reduced in the next iteration. A set of location servers is not partitioned any further if its size plus one is not divisible by three, or its size is less than seven. This leads to the generation of symmetric quorums of size 0.97·N^0.63.

Another approach to constructing quorums, similar to that proposed in [10] in the context of the mutual exclusion problem, is as follows:

Grid based scheme: Let N = l² for an integer l. An l × l grid is constructed. The grid points are numbered from 0 to N − 1. A quorum S_i consists of all the grid points on a row, and one grid point on every other row. Thus, the cardinality of each quorum S_i is equal to 2√N − 1. If N is not the square of an integer, a degenerate grid can be constructed with the outermost row/column reduced in size. In such a case, while constructing S_i, the partial row/column is complemented using grid points from another row/column. Therefore, the size of each read and write set is O(√N), where N is the number of location servers in the network.

However, as described in Section 5.4, when the quorum systems generated by the strategies mentioned above are employed, there is a very small possibility that a query may return a NULL response.
In order to preclude such a possibility, the following simple extension to the grid based scheme can be used:

Extended grid-based scheme: Let N = m³ for an integer m. An m × m × m grid is constructed, with the grid points corresponding to location servers. A quorum is formed by taking all the grid points that belong to three mutually perpendicular planes in the 3-dimensional grid. So, the size of each quorum is O(N^{2/3}). With such a construction, any three quorums will have at least one location server in common.
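A sketch of the basic l × l grid construction, with a brute-force check of the pairwise intersection property (the 3-dimensional extension is analogous, with planes in place of rows and columns):

```python
# Quorum S_i: the full row of grid point i, plus the grid point in
# column (i mod l) of every other row. |S_i| = 2*l - 1 = 2*sqrt(N) - 1.
def grid_quorum(i: int, l: int) -> set:
    row, col = divmod(i, l)
    full_row    = {row * l + c for c in range(l)}
    column_reps = {r * l + col for r in range(l)}  # one point per row
    return full_row | column_reps

N, l = 16, 4
quorums = [grid_quorum(i, l) for i in range(N)]
assert all(len(q) == 2 * l - 1 for q in quorums)
# Any two quorums intersect: the row of one crosses the column of the other.
assert all(qa & qb for qa in quorums for qb in quorums)
```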

8 Computation, Communication, and Storage Overheads

The location management algorithm is fast, and imposes low computation and communication overheads.

Time for location operations: The time to complete location update and query operations can be expressed in terms of round-trip message delays. A message round consists of multicasting a message and receiving a response/acknowledgment. In order to update the location information of a mobile host, at most three multicast messages (ADD, DELETE, and REPLACE) to three disjoint sets of location servers are needed. As there are no causal dependencies between the three multicast messages, they can be sent concurrently. While trying to locate a mobile host, the most recent location information about the host at any location server can be obtained, with very high probability, in a single round of message exchange. When a mobile host makes a transition between hot and cold states, or when a heavily loaded quorum's responsibility for location management is split between two quorums, a single multicast message can reconfigure the location information. Thus, all operations associated with location management can be completed in at most one round of message exchange.

In order to assign virtual identities to a hot mobile host, the assigned array of x elements has to be traversed at most once. Hence, the time complexity of this operation is O(x) comparisons. The auxiliary hash functions h'() and h''() usually employ a small, constant number of integer multiplication and division operations. Hence, determination of the sets of location servers (quorums) to which location ADD, DELETE, or REPLACE messages should be sent is computationally inexpensive. There are O(√N), O(N^0.63), or O(N^{2/3}) elements in each quorum, depending on the quorum construction strategy employed, where N is the total number of location servers in the system. Typically, N is a very small fraction of the total number of nodes in the network.

Communication overheads: During a transition between hot and cold states, ADD or DELETE messages are sent to at most one quorum of location servers. During a location update, ADD, DELETE, or REPLACE messages are sent to at most two quorums of location servers. When the location of a mobile host is to be determined, query messages are sent to only one quorum of location servers. Hence, the message complexity of each location operation is O(√N) if the simple grid-based scheme is used, O(N^0.63) if the iterative approach is employed, and O(N^{2/3}) if the extension of the grid-based scheme is used. The choice of the third scheme over the first two depends on whether we wish to guarantee that each query returns at least some location information (even if it is old information), or we do not mind resending the query if no location information is returned. Each message has a small size, consisting of at most two MSS identities, one MH identity, a quorum identity, and a flag indicating the nature of the message (query, ADD, REPLACE, etc.). Moreover, these messages are transmitted over the fixed wire network, which has a much higher bandwidth than the wireless MH-MSS links.

Storage overheads: Assuming that the value x used for assigning virtual identities is a constant and the number of mobile hosts in the network is M, each of the N participating location servers has to store location information about O(M/√N) mobile hosts for the simple grid-based approach. This is because the location information of each of the M mobile hosts is replicated O(√N) times in the location directory, and the entire directory is distributed uniformly over the N location servers. Each of these O(M/√N) pieces of information contains the MSS_id of the cell in which the mobile host is present. Assuming there are m MSSs in the system, this piece of information requires lg(m) bits. Besides this location information, each location server has to store an extra O(Q√N) entries, each of size lg(m) bits, indicating the memberships of all the quorums. Q is the total number of quorums in the quorum system (Q = 2^limit), and, as already mentioned, √N is the number of servers in each quorum. A similar analysis can be performed for the other quorum construction schemes, with √N replaced by N^0.63 or N^{2/3}. As N is usually a small number, the overheads are low.

9 Conclusion and Future Work

As mobile computing systems mature, the number of mobile hosts in a network is expected to increase at a rapid pace. In order to keep track of these hosts and to be able to interact with them, it is important to store and update their location information efficiently. In this paper, we presented a distributed, dynamic location management scheme. Dynamic hashing is used in conjunction with quorum systems to ensure fast update and retrieval of location information. At the same time, the responsibility for location management is shared among all the servers in a fair manner. Two levels of load balancing are supported: (i) fine-grain load balancing at the level of individual mobile hosts, through multiple virtual identities, and (ii) coarse-grain load balancing at the level of sets (quorums) of location servers, through dynamic hashing techniques like quorum split and quorum join. As a result, the location directory is fairly distributed throughout the network, and no single location server is overburdened with the responsibility of responding to location queries.

The number of location servers at which location information about a mobile host is replicated varies between O(√N) and O(N^{2/3}), depending on the quorum construction strategy employed. Here N is the total number of location servers in the system. So, not all location servers need to store the location of every mobile host. Mobile hosts that are queried more often than others have their location information stored at a greater number of location servers. Also, location servers that store location information of frequently queried mobile hosts store information about fewer hosts than location servers that only store location information of infrequently queried mobile hosts.

The set of servers that receive location update and location query messages for a mobile host changes dynamically as the host moves from one part of the network to another. This is because the hash function used to determine the quorum of location servers takes the identities of the mobile host and the mobile service station (from whose cell the update or query originates) as arguments. As a result, nonuniformity in the geographical distribution of mobile hosts does not lead to load imbalance among the location servers. If a subset of location servers collectively becomes overloaded with location management operations, their tasks are distributed among a larger set of location servers (quorum split) using dynamic hashing techniques. As a result, load balancing among location servers is ensured. Also, if a set of location servers is very lightly loaded, some of them are released to perform other tasks and the remaining servers handle all location operations (quorum join). Thus, resources for location management, in the form of location servers, are dynamically allocated based on demand. Location information of only a small subset of mobile hosts, affected by a quorum split/join, has to be moved from one set of servers to another. All other mobile hosts remain unaffected. Thus, the system can gracefully adapt to changes in load.

The distributed location management scheme imposes low computation, communication, and storage overheads. Moreover, mobile hosts and the wireless links do not incur any of these overheads, which is a desirable feature as they are usually resource poor. The overheads are borne by the location servers and the fixed wireline network, which are comparatively resource rich.

Future Work

The proposed location management scheme raises some new issues that will be the focus of our future research. First, new ways of quorum construction, as described in [13, 14], can be employed to construct smaller quorums. This will lead to a reduction in location update and query traffic over the wireline network. Also, smaller quorums imply replication of location information to a smaller degree and a reduction in the storage requirements at the location servers. Second, given a quorum system and a set of location servers, the mapping of quorums onto servers appears to be a very challenging optimization problem. The actual communication overhead in the wireline network is a function of the number of messages as well as the distance these messages have to travel. Hence, the hash function and quorum mapping should be selected so as to minimize the number of hops that multicast messages for location update and query have to make. Also, rather than sending separate but identical messages to each server in the multicast set, one message may be sent to an intermediate switch in the vicinity of several target servers. The switch can then replicate the message and forward it to each of the target servers a short distance away. The suitability of existing optimal routing algorithms for message combining needs to be evaluated.

References

[1] I. F. Akyildiz and J. S. M. Ho. On Location Management for Personal Communications Networks. IEEE Communications Magazine, pages 138-145, September 1996.

[2] B. Awerbuch and D. Peleg. Online Tracking of Mobile Users. Journal of the Association for Computing Machinery, 42(5):1021-1058, September 1995.

[3] A. Bar-Noy and I. Kessler. Tracking Mobile Users in Wireless Communication Networks. In Proceedings of IEEE INFOCOM, pages 1232-1239, 1993.

[4] A. Bar-Noy, I. Kessler, and M. Sidi. Mobile Users: To Update or not to Update? In Proceedings of IEEE INFOCOM, pages 570-576, 1994.

[5] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press / McGraw-Hill, 1990.

[6] EIA/TIA IS-41 Rev. C. Cellular Radio Telecommunications Intersystem Operations, PN-2991, November 1995.

[7] R. J. Enbody and H. C. Du. Dynamic Hashing Schemes. ACM Computing Surveys, 20(2):85-113, June 1988.

[8] J. S. M. Ho and I. F. Akyildiz. Local Anchor Scheme for Reducing Location Tracking Costs in PCNs. In Proceedings of ACM Mobicom, pages 181-193, 1995.

[9] R. Jain, Y.-B. Lin, and S. Mohan. A Forwarding Strategy to Reduce Network Impacts of PCS. In Proceedings of IEEE INFOCOM, pages 481-489, 1995.

[10] M. Maekawa. A √N Algorithm for Mutual Exclusion in Decentralized Systems. ACM Transactions on Computer Systems, pages 145-159, May 1985.

[11] M. Mouly and M. B. Pautet. The GSM System for Mobile Communications. Published by the authors, 49, rue Louise Bruneau, F-91120 Palaiseau, France, 1992. ISBN 2-9507190-0-7.

[12] W. K. Ng and C. V. Ravishankar. Coterie Templates: A New Quorum Construction Method. In Proceedings of the 15th International Conference on Distributed Computing Systems, pages 92-99, May 1995.

[13] D. Peleg and A. Wool. Crumbling Walls: A Class of Practical and Efficient Quorum Systems. In Proceedings of the 14th ACM Symposium on Principles of Distributed Computing, pages 120-129, Ottawa, August 1995.

[14] D. Peleg and A. Wool. How to be an Efficient Snoop, or the Probe Complexity of Quorum Systems. In Proceedings of the 15th ACM Symposium on Principles of Distributed Computing, pages 290-299, Philadelphia, May 1996.

[15] S. Rajagopalan and B. R. Badrinath. An Adaptive Location Management Strategy for Mobile IP. In Proceedings of First ACM Mobicom, November 1995.

[16] J. Z. Wang. A Fully Distributed Location Registration Strategy for Universal Personal Communication Systems. IEEE Journal on Selected Areas in Communications, 11:850-860, August 1993.

