Strategies for Cache Invalidation of Location Dependent Data in Mobile Environment

Ajey Kumar, Manoj Misra and A. K. Sarje

Abstract— Mobile computing, as compared to traditional computing paradigms, enables clients to have unrestricted mobility while maintaining network connections. Data management in this paradigm poses new challenging problems to the database community. Location Dependent Information Services (LDIS), where the information provided to users depends on their current locations, are an emergent application in this area. Data caching at mobile clients plays a key role in data management due to its ability to improve system performance and overcome availability limitations. Spatial data cached at a mobile client may become invalid because of the movement of the client. Cache invalidation schemes aim to keep the data in the client's cache consistent with the server. To maintain consistency of LDD in the cache, the valid scope of a data item is identified and stored along with it in the client's cache. In this paper, we study the procedure for selecting the best suitable candidate for the valid scope (i.e., the best suitable sub-polygon of a given polygon) and propose a generalized algorithm which selects the best suitable candidate for the valid scope. We compare its performance with the existing algorithms. Moreover, we also introduce a new algorithm, CEFAB, which tries to improve performance by considering the user movement pattern and speculating about future accesses.

Index Terms—Cache invalidation, location dependent data, mobile computing, valid scope.

I. INTRODUCTION

Recent advances in computer hardware technology and wireless communication networks have led to the emergence of mobile or nomadic computing systems. Mobility has opened up new classes of applications for mobile environments. Location Dependent Information Service (LDIS), where the information provided to users depends on their current locations, is one of these applications and is gaining increasing attention. The Advanced Traveler Information Systems (ATIS) project [3] and the GUIDE project [4] have explored this in depth. Location is an important piece of information for representing, storing, and querying location-dependent information. Location dependent data (LDD) is data whose value is determined by the geographical location of the mobile user where a query originates [15]. In a location-dependent query, a location needs to be specified

The authors are with the Department of Electronics and Computer Engineering, I.I.T. Roorkee, Roorkee, India (e-mail: {ajeykdec, manojfec, sarjefec}@iitr.ernet.in).

explicitly or implicitly. A location model depends heavily on the underlying location identification technique employed in the system. The available mechanisms for identifying locations can be categorized into two basic approaches: the Symbolic Model [8] and the Geometric Model [11]. In the former, the location space is divided into disjoint zones and each zone is identified by a unique name; examples are Cricket [5] and the cellular infrastructure [7]. In the latter, a location is specified as a 3-dimensional coordinate, e.g., GPS [6].

Mobile clients in wireless environments suffer from scarce bandwidth, low-quality communication, weak and intermittent connectivity, frequent network disconnections, and limited local resources [1][2]. Also, contacting the server for data is expensive in a wireless network and may be impossible if the client is disconnected. Data caching on mobile clients has been considered an effective solution to improve system performance and facilitate disconnected operation [1][2][7][9]. There are two common issues involved in client cache management: a cache invalidation scheme maintains data consistency between the client's cache and the server; a cache replacement policy determines which data item(s) should be deleted from the cache when the cache does not have enough free space to accommodate a new item.

Researchers in [8][9][10][11][12][13][15] have contributed towards cache management for LDD. In [11], three cache invalidation schemes were proposed based on the geometric model: the Polygonal Endpoints (PE) Scheme, the Approximate Circle (AC) Scheme and the Caching-Efficiency-Based (CEB) Method (detailed in Section IV). CEB is based on caching efficiency and balances the overhead and the precision of the valid scope that is sent to the mobile client along with the response data.

The rest of the paper is structured as follows. Section II formulates the problem to be addressed in this paper. Section III describes the mobile system model used in this paper.
Section IV describes the cache invalidation schemes based on the geometric model. The Generalized CEB (CEB_G) scheme is discussed in Section V. Section VI details the new performance metric FA and the integrated algorithm CEFAB. Section VII describes the simulation model used in this paper. Section VIII deals with performance evaluation and comparison. Section IX concludes the paper.

II. PROBLEM FORMULATION

Location Dependent Data (LDD) refers to data whose value depends on some reference location, which in most cases is the location of the mobile user who generates the query. A data item, in the context of this study, refers to one type of LDD (e.g., restaurants) and usually has different instances. Each data instance is only valid in some specific region. Thus, as mentioned earlier, the valid scope is introduced to represent the bounded area within which a data instance is valid. Valid scopes can be defined differently for different types of applications. Due to the client's mobility, the returned data should be checked in order to guarantee that the client is still within the valid scope of the answer. Consequently, the valid scope is valuable auxiliary information that should be attached to the retrieved data instance in order to facilitate validity checking at the user side. However, this auxiliary information comes with a cost: it takes longer for clients to download the extra information, it consumes more bandwidth, and it needs more storage space. Thus, the main issue addressed in this paper is how to represent the valid scope in order to balance the precision and overhead costs.

The concept of valid scope information was first proposed in [11], where it was used to construct a semantic cache in order to reuse the cached data. The work in [11] emphasizes cache-related technologies, such as replacement schemes and invalidation strategies. The effort is to maximize the cache hit ratio using limited cache space, by finding a precise representation of the valid scope which does not introduce too much overhead. In this paper we show that the algorithm given in [11] does not always give the best possible scope. We then present a generalized selection procedure for finding the best suitable candidate for the valid scope, so that the caching efficiency is higher. In the performance evaluation, cache hit ratio is employed as the primary performance metric. The cache hit ratio is defined as the ratio of the number of queries answered by the client's cache to the total number of queries generated by the client. Specifically, the higher the cache hit ratio, the higher the local data availability, the lower the uplink and downlink costs, and the lower the battery consumption.
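To make the client-side validity check concrete: with a polygonal valid scope attached to a cached item, the client can test whether its current position still falls inside that scope before reusing the cached value. The paper does not prescribe an implementation, so the sketch below is illustrative only; it uses the standard ray-casting point-in-polygon test, and the function name and coordinates are our own.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: return True if pt lies inside the polygon.

    poly is a list of (x, y) endpoints e1..en in order.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Consider only edges that straddle the horizontal ray through pt,
        # and toggle on crossings that lie strictly to the right of pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical square valid scope: a query from (2, 2) can be answered
# from cache; one from (5, 2) cannot.
scope = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
```

A cache hit requires both that the item is cached and that this test succeeds for the client's current location.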

III. SYSTEM MODEL

This section describes the system model adopted in this paper. We assume a cellular mobile network similar to that in [1][2] as the mobile computing infrastructure. A mobile client can move freely from one location to another while retaining its wireless connection. Seamless hand-off from one cell to another is assumed. The information system provides location dependent services to mobile clients. We refer to the geographical area covered by the system as the service area. A data item can show different values when it is queried by clients at different locations. Note that a data item value is different from a data item, i.e., an item value for a data item is an instance of the item valid for a certain geographical region. For example, "nearest restaurant" is an item, and the data values for this item vary when it is queried from different locations. In this paper, we assume a geometric location model, i.e., a location is specified as a two-dimensional coordinate. Mobile clients can identify their locations using systems such as GPS [6]. The valid scope of an item value is defined as the region within which the item value is valid. In a two-dimensional space, a valid scope v can be represented by a geometric polygon p(e1, …, en), where the ei's are the endpoints of the polygon. A mobile client can cache data values on its local disk or in any storage system that survives power-off. In this paper, data values are assumed to be of fixed size and read-only, so that we can omit the influence of data sizes and updates on cache performance and concentrate on the impact caused by the unique properties of location-dependent data.

IV. GEOMETRIC MODEL BASED CACHE INVALIDATION

B. Zheng et al. in [11] proposed three cache invalidation schemes based on the geometric model: the Polygonal Endpoints (PE) Scheme, the Approximate Circle (AC) Scheme and the Caching-Efficiency-Based (CEB) Method. The PE scheme records all the endpoints of the polygon representing the valid scope. However, when the number of endpoints is large the overall performance worsens, because this scheme consumes a large portion of the wireless bandwidth and of the client's limited cache space, effectively reducing the amount of space for caching the data itself. The advantage is complete knowledge of the valid scopes. An alternative to the PE scheme is to use an inscribed circle to approximate the polygon instead of recording the whole polygon. In other words, a valid scope can be approximated by the center of the inscribed circle and the radius value. The medial axis approach is used for generating the inscribed circle of a polygon [14]. When the shape of the polygon is thin and long, the imprecision introduced by the AC method is significant. This leads to a lower cache hit ratio, since the cache will incorrectly treat valid data as invalid if the query location is outside the inscribed circle but within the polygon. CEB is a generic method for balancing the overhead and the precision of valid scopes. It is based on caching efficiency. Suppose that the valid scope of a data value is v, and v'_i is a sub-region contained in v. Let D be the data size, A(v'_i) the area of v'_i, and O(v'_i) the overhead needed to record the scope v'_i. Then, the caching efficiency of the data value with respect to a scope v'_i is defined as follows:

E(v'_i) = \frac{A(v'_i)/A(v)}{(D + O(v'_i))/D} = \frac{A(v'_i)\,D}{A(v)\,(D + O(v'_i))}

The CEB scheme can be stated as follows: for a data item value with valid scope v, given a candidate valid scope set V' = {v'_1, v'_2, …, v'_k}, v'_i ⊆ v, 1 ≤ i ≤ k, choose the scope v'_i that maximizes the caching efficiency E(v'_i) as the valid scope to be attached to the data. Thus, CEB generates candidate valid scopes and then selects the best one. A greedy approach is used to generate the series of candidate polygons. Suppose the current candidate polygon is v'_i. CEB considers all polygons resulting from the deletion of one endpoint from v'_i and chooses as the next candidate, v'_{i+1}, the polygon which has the maximal area. The algorithm, which describes the generation of candidate valid scopes and the selection of the best valid scope, can be seen in [11].
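Numerically, E(v'_i) can be computed from polygon areas (via the shoelace formula) once an encoding overhead is fixed. In the sketch below we assume, consistent with the simulation model later in the paper, that each endpoint costs two floating-point numbers of FloatSize bytes; the function names are our own.

```python
def polygon_area(pts):
    """Area of a simple polygon via the shoelace formula."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def caching_efficiency(sub, full, data_size, float_size=4):
    """E(v'_i) = A(v'_i) * D / (A(v) * (D + O(v'_i))),
    assuming overhead O(v'_i) = two floats per endpoint."""
    overhead = 2 * float_size * len(sub)
    return polygon_area(sub) * data_size / (polygon_area(full) * (data_size + overhead))

# Hypothetical 4x4 square scope with D = 128 bytes:
v = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
e_full = caching_efficiency(v, v, data_size=128)  # 16*128 / (16*(128+32)) = 0.8
```

Note how the formula penalizes a sub-polygon both for lost area (smaller A(v'_i)) and, through O(v'_i), for each extra endpoint it keeps.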

V. GENERALIZED CEB (CEB_G) SCHEME

To look in detail at how the greedy approach works in CEB, consider an example polygon consisting of 7 sides. CEB sets the original polygon as the candidate polygon (size = 7). The first iteration considers all combinations of 6 points out of the 7 points of the original polygon and finds the best among the six-sided polygons. It then sets it as the candidate polygon (size = 6) for the next iteration. The second iteration considers all combinations of 5 points out of the 6 points of the candidate polygon (size = 6), finds the best among them and sets it as the candidate polygon (size = 5) for the next iteration. This goes on until the number of sides of the sub-polygon becomes 3. Although the complexity of the CEB algorithm is O(n^2), if the polygons are not regular CEB does not ensure that the final polygon selected is optimal. The optimal polygon may be among those polygons which CEB never considers: in each iteration CEB selects the best out of the polygons constructed during that iteration and then explores only the sub-cases of this best. We show with the help of a case study in Section V(A) that in some cases the best solution may not be found using CEB. This affects the overall performance of the system, resulting in a lower cache hit ratio. We propose a generalized method, CEB_G, for the generation of the candidate valid scope set. Our method explores all possible combinations of sub-polygons of the original polygon. The pseudocode of CEB_G is described in Algorithm A1, where the generation of candidate valid scopes and the selection of the best valid scope are integrated.
Algorithm A1: Selection of the Best Valid Scope for the CEB_G Method
Input: valid scope v = p(e1, …, en) of a data value;
Output: the attached valid scope v';
Procedure:
1:  v'_1 := the inscribed circle of p(e1, …, en)
2:  v' := v'_1; E_max := E(v'_1);
3:  v'_2 := p(e1, …, en);
4:  i := 2;
5:  while n − i ≥ 1 do
6:    // containing at least three end-points for a polygon
7:    if E(v'_i) > E_max then
8:      v' := v'_i; E_max := E(v'_i);
9:    end if
10:   if n − i > 1 then
11:     v'_{i+1} := the polygon having maximum area, consisting of ((n − 1) − i + 2) endpoints of v and being bounded by v;
12:   end if
13:   i := i + 1;
14: end while
15: output v'.
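The exhaustive exploration behind Algorithm A1 — every sub-polygon with at least three of the original endpoints, taken in original vertex order — can be sketched as follows. This is a simplified illustration, not the paper's exact procedure: the efficiency function is the area-based E of Section IV with an assumed overhead of two floats per endpoint, and the containment check ("bounded by v") is omitted since it holds automatically for convex scopes.

```python
from itertools import combinations

def shoelace_area(pts):
    """Area of a simple polygon (shoelace formula)."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n))) / 2.0

def efficiency(sub, full, data_size=128, float_size=4):
    overhead = 2 * float_size * len(sub)  # two coordinates per endpoint
    return shoelace_area(sub) * data_size / (shoelace_area(full) * (data_size + overhead))

def ceb_g_best(endpoints, data_size=128):
    """CEB_G-style search: evaluate every sub-polygon with >= 3 vertices,
    keeping the original vertex order, and return the one maximizing E."""
    best, best_e = list(endpoints), efficiency(endpoints, endpoints, data_size)
    for k in range(len(endpoints) - 1, 2, -1):  # sizes n-1 down to 3
        for idx in combinations(range(len(endpoints)), k):
            sub = [endpoints[i] for i in idx]
            e = efficiency(sub, endpoints, data_size)
            if e > best_e:
                best, best_e = sub, e
    return best, best_e
```

Because `combinations` preserves index order, each candidate keeps the polygon's vertex ordering; the exponential number of candidates is exactly why the paper argues CEB_G is only practical for polygons with few sides.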

Consider the same example as above, i.e., a polygon consisting of 7 sides. In Algorithm A1, the first iteration finds all combinations of 6-sided sub-polygons of the original polygon (size = 7) and finds the best among them. The second iteration finds all combinations of 5-sided sub-polygons of the original polygon (size = 7), not of the polygon selected as best in the first iteration, and then finds the best among them. This goes on until the number of sides of the sub-polygon becomes 3. This method has more combinations of valid scopes available for selecting the best in each iteration as compared to CEB. Although the complexity of CEB_G is exponential, the exponential factor matters only when the number of sides of the polygon is high. In an actual scenario the maximum number of sides may vary from 6 to 10; beyond 10 sides, the polygon resembles a circle more and more, as a circle is a polygon with infinitely many sides. So for polygons with few sides even exponential complexity may be acceptable. Moreover, the valid scope of each data value is nearly static, so calculating the best valid scope only once and storing it against each data value in the server database further reduces the overhead. CEB_G selects a more precise representation of the valid scope as compared to CEB, which improves the overall performance, resulting in a higher cache hit ratio than that of CEB.

A. Case Study

To show the importance of each endpoint in a polygon, we consider as our case study the polygon in Fig. 1(a), selected from the scope distributions generated in our simulation for 110 points. The endpoints are e1(1351.5, 3513.22), e2(1352.89, 3516.69), e3(1480.88, 3580.69), e4(1535.16, 3307.59), e5(1522.8, 3279.94), e6(1354.61, 3183.61) and e7(1351.5, 3187.75). CEB selects the polygon pCEB(e1, e3, e4, e7), shown in Fig. 1(b), as the best candidate for the valid scope to be sent to the client along with the data, whereas CEB_G selects the polygon pCEB_G(e1, e3, e4, e6), shown in Fig. 1(c), as the best candidate.

Fig. 1. Case Study: (a) original polygon; (b) best candidate for CEB, pCEB; (c) best candidate for CEB_G, pCEB_G.

TABLE I
Stepwise Execution for CEB and CEB_G

Iteration | CEB                           | CEB_G
0         | p(e1, e2, e3, e4, e5, e6, e7) | p(e1, e2, e3, e4, e5, e6, e7)
1         | p(e1, e3, e4, e5, e6, e7)     | p(e2, e3, e4, e5, e6, e7)
2         | p(e1, e3, e4, e5, e7)         | p(e1, e3, e4, e5, e6)
3         | p(e1, e3, e4, e7)             | p(e1, e3, e4, e6)
4         | p(e1, e4, e7)                 | p(e1, e4, e6)

The stepwise execution is shown in Table I. The entries in each column show the best polygon selected by CEB and by CEB_G in each iteration. The final best polygons selected by CEB and CEB_G, to be sent to the client along with the response data, are p(e1, e3, e4, e7) and p(e1, e3, e4, e6) respectively. Both polygons have size 4, which means that the overhead is the same. But if we compare the areas, the area of pCEB_G is greater than the area of pCEB, which means pCEB_G is a more precise representation than pCEB. This results in a higher cache hit ratio at the client side.
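For contrast, the greedy chain that produces the CEB column of Table I keeps only one candidate per size: it repeatedly deletes whichever endpoint leaves the maximum-area sub-polygon. A minimal sketch of this generation step (function names ours; the paper ranks the final candidates by E rather than by area alone):

```python
def shoelace_area(pts):
    """Area of a simple polygon (shoelace formula)."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n))) / 2.0

def ceb_candidates(endpoints):
    """Greedy chain of candidate polygons of sizes n, n-1, ..., 3 (CEB style)."""
    cands = [list(endpoints)]
    cur = list(endpoints)
    while len(cur) > 3:
        # Try deleting each endpoint; keep the deletion leaving maximum area.
        cur = max((cur[:i] + cur[i + 1:] for i in range(len(cur))), key=shoelace_area)
        cands.append(cur)
    return cands
```

Each iteration commits to a single survivor, which is precisely why endpoints dropped early (such as e6 in the case study) can never reappear in later candidates.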

B. Varying CEB to Achieve CEB_G

If we look again at the greedy approach of CEB, in each iteration only one best candidate valid scope is selected. By increasing the number of best candidate valid scopes kept in each iteration from one to two, we are able to catch hold of some of the endpoints that are otherwise left behind in each iteration.

TABLE II
Stepwise Execution with Best Two in CEB

Iteration | Best Two Candidate Valid Scopes
0         | p(e1, e2, e3, e4, e5, e6, e7)
1         | p(e1, e3, e4, e5, e6, e7), p(e2, e3, e4, e5, e6, e7)
2         | p(e1, e3, e4, e5, e7), p(e1, e3, e4, e5, e6)
3         | p(e1, e3, e4, e7), p(e1, e3, e4, e6)
4         | p(e1, e4, e7), p(e1, e4, e6)

The computing time only increases by a constant factor c, and the complexity remains the same, i.e., O(n^2). Consider again the original polygon in Fig. 1(a). Selecting the best two candidate valid scopes at each iteration helps to get hold of the left-over points. The stepwise execution is shown in Table II. CEB, using the best two in each iteration, selected the polygon p(e1, e3, e4, e6) as its best valid scope, the same as that selected by the CEB_G method.

VI. CACHING EFFICIENCY WITH FUTURE ACCESS BASED (CEFAB)

Accurately predicting the movement behavior of a client is a challenging task, and much research is going on in this area. With our setup (detailed in Section VII) we can also track the client's future movement, but only for a certain amount of time. Since the client movement is random, it is very difficult to track its entire path; however, we can make use of the Moving Interval to track the client's future path from the current query location. The Moving Interval is the duration within which the client's velocity and direction remain constant. A simple scenario is shown in Fig. 2.

Fig. 2. Future Movement Path (the client moves from SMI through TQ to EMI).

Let SMI be the start and EMI be the end of the Moving Interval. Let TQ be the time at which a query is executed by the client, where SMI ≤ TQ ≤ EMI. Also, let e_{TQ} and e_{EMI} be the points in the x-y plane at TQ and EMI respectively. We define the Future Movement Path (FMP) for the interval [TQ, EMI] as:

FMP_{TQ, EMI} = Line\_Segment(e_{TQ}, e_{EMI})

In this work we are more interested in the FMP with respect to the valid scope v. Consider the scenarios shown in Fig. 3.

Fig. 3. e_{EMI} with respect to the valid scope v: (a) e_{EMI} ∈ v; (b) e_{EMI} ∈ v; (c) e_{EMI} ∉ v.

The first and the second cases are trivial, but in the third case we have to consider the intersection point of Line_Segment(e_{TQ}, e_{EMI}) with the valid scope v; let it be e_{VI}. Because we select the best candidate valid scope within v, we need the line segment that is within the valid scope, not beyond it. So, redefining the FMP with respect to the valid scope v for the interval [TQ, EMI], we have

FMP_{TQ, EMI}(v) = \begin{cases} Line\_Segment(e_{TQ}, e_{EMI}) & \text{if } e_{EMI} \in v \\ Line\_Segment(e_{TQ}, e_{VI}) & \text{if } e_{EMI} \notin v \end{cases}

Our ultimate goal is to select a valid scope that increases the cache hit ratio of the client, which means the sub-polygon which retains the largest part of the FMP. Keeping this in mind, we define a new metric called Future Access (FA) for a valid scope v'_i in the interval [TQ, EMI], given by

FA_{TQ, EMI}(v'_i) = \frac{Length(FMP_{TQ, EMI}(v'_i))}{Length(FMP_{TQ, EMI}(v))}

where v'_i is a sub-region contained in v, v is the valid scope of a data value, and Length computes the length of the line segment between two given endpoints.

FA helps to find the best candidate polygon/sub-polygon with respect to its future validity in the client's cache, because it takes into account the future path to be traversed by the client from the current position. Owing to the limited bandwidth in a wireless environment, integrating FA with caching efficiency is advantageous, as caching efficiency balances the overhead and precision of the valid scope and FA adds future movement behavior to it. As a result, we get an integrated metric, called Caching Efficiency with Future Access (CEFA), for a valid scope v'_i in the interval [TQ, EMI], given by:

CEFA_{TQ, EMI}(v'_i) = E(v'_i) \times FA_{TQ, EMI}(v'_i)

The new metric takes into account the future movement behavior of the client. We propose a new cache invalidation algorithm called Caching Efficiency with Future Access Based (CEFAB) using the CEFA metric. The pseudocode of CEFAB is described in Algorithm A2.

Algorithm A2: Selection of the Best Valid Scope for the CEFAB Method
Input: valid scope v = p(e1, …, en) of a data value, TQ and EMI;
Output: the attached valid scope v';
Procedure:
1:  v'_1 := the inscribed circle of p(e1, …, en)
2:  v' := v'_1; CEFA_max := CEFA_{TQ, EMI}(v'_1);
3:  v'_2 := p(e1, …, en);
4:  i := 2;
5:  while n − i ≥ 1 do
6:    // containing at least three end-points for a polygon
7:    if CEFA_{TQ, EMI}(v'_i) > CEFA_max then
8:      v' := v'_i; CEFA_max := CEFA_{TQ, EMI}(v'_i);
9:    end if
10:   if n − i > 1 then
11:     v'_{i+1} := the polygon having maximum Length(FMP_{TQ, EMI}(v'_i)), consisting of ((n − 1) − i + 2) endpoints of v and being bounded by v;
12:   end if
13:   i := i + 1;
14: end while
15: output v'.
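Computing FA reduces to clipping the segment from e_{TQ} to e_{EMI} against a polygon and comparing lengths. The sketch below uses Cyrus-Beck clipping and, for simplicity, assumes convex, counter-clockwise valid scopes with e_{TQ} inside v; it is an illustration of the metric, not the paper's implementation, and all names are ours.

```python
import math

def clipped_length(p0, p1, poly):
    """Length of the part of segment p0 -> p1 inside a convex CCW polygon
    (Cyrus-Beck clipping)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    t_lo, t_hi = 0.0, 1.0
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        nx, ny = by - ay, -(bx - ax)          # outward normal of edge a -> b
        denom = dx * nx + dy * ny
        num = (ax - p0[0]) * nx + (ay - p0[1]) * ny
        if abs(denom) < 1e-12:
            if num < 0:                        # parallel and outside this edge
                return 0.0
            continue
        t = num / denom
        if denom > 0:
            t_hi = min(t_hi, t)                # segment is leaving the half-plane
        else:
            t_lo = max(t_lo, t)                # segment is entering the half-plane
    if t_lo > t_hi:
        return 0.0
    return (t_hi - t_lo) * math.hypot(dx, dy)

def future_access(sub_poly, full_poly, e_tq, e_emi):
    """FA = Length(FMP within v'_i) / Length(FMP within v)."""
    full = clipped_length(e_tq, e_emi, full_poly)
    return clipped_length(e_tq, e_emi, sub_poly) / full if full else 0.0
```

For example, if the FMP runs from (1, 1) toward (3, 3) inside a 4x4 square scope, a candidate sub-polygon cut off by the diagonal x + y = 4 retains exactly half of the path, giving FA = 0.5; CEFA then multiplies this by the candidate's caching efficiency E.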

VII. SIMULATION MODEL

This section describes the simulation model used to evaluate the performance of the proposed CEB_G and CEFAB. Our simulator is implemented in C++.

A. System Execution Model

Since seamless hand-off from one cell to another is assumed, the network can be considered a single, large service area within which the clients can move freely and obtain location dependent information services. In our simulation, the service area is represented by a rectangle with a fixed size of Size. We assume a "wrapped-around" model for the service area; in other words, when a client leaves one border of the service area, it enters the service area from the opposite border at the same velocity. The database contains ItemNum items. Every item may display ScopeNum different values for different client locations within the service area. Each data value has a size of DataSize. In the simulation, the scope distributions of the data items are generated based on Voronoi diagrams (VDs) [14]; a scope distribution contains 110 points randomly distributed in a square Euclidean space. The model assumes that two floating-point numbers are used to represent a two-dimensional coordinate and one floating-point number to represent a radius. The size of a floating-point number is FloatSize.

The wireless network is modeled by an uplink channel and a downlink channel. The uplink channel is used by clients to submit queries, and the downlink channel is used by the server to return query responses to target clients. The communication between the server and a client makes use of a point-to-point connection. It is assumed that the available bandwidth is UplinkBand for the uplink channel and DownlinkBand for the downlink channel. Table III summarizes the configuration parameters of the system model.

TABLE III
Configuration Parameters of the System Execution Model

Parameter    | Description
Size         | size of the rectangular service area
ItemNum      | number of data items in the database
ScopeNum     | number of different values at various locations for each item
DataSize     | size of a data value
UplinkBand   | bandwidth of the uplink channel
DownlinkBand | bandwidth of the downlink channel
FloatSize    | size of a floating-point number

B. Client Execution Model

The mobile client is modeled with two independent processes: a query process and a move process. The query process continuously generates location-dependent queries for different data items. After the current query is completed, the client waits for an exponentially distributed time period with a mean of QueryInterval before the next query is issued. The client access pattern over different items follows a Zipf distribution with skewness parameter θ. To answer a query, the client first checks its local cache. If the data value for the requested item with respect to the current location is available, the query is satisfied locally. Otherwise, the client submits the query and its current location uplink to the server and retrieves the data through the downlink channel. The move process controls the movement pattern of the client using the parameter MovingInterval. After the client keeps moving at a constant velocity for a time period of MovingInterval, it changes velocity in a random way: the next moving direction (represented by the angle relative to the x axis) is selected randomly between 0 and 360, and the next speed is selected randomly between MinSpeed and MaxSpeed. When the value of MovingInterval is small, the client's movement is rather random; when the value of MovingInterval is large, the movement of the client behaves more like a pre-defined trip which consists of long straight-line segments. The client is assumed to have a cache of fixed size, which is a CacheSizeRatio ratio of the database size. Table IV summarizes the configuration parameters of the client model.

TABLE IV
Configuration Parameters of the Client Execution Model

Parameter      | Description
QueryInterval  | average time interval between two consecutive queries
MovingInterval | time duration that the client keeps moving at a constant velocity
MinSpeed       | minimum moving speed of the client
MaxSpeed       | maximum moving speed of the client
CacheSizeRatio | ratio of the cache size to the database size
θ              | skewness parameter for the Zipf access distribution

C. Server Execution Model

The server is modeled by a single process that services the requests from clients. The requests are buffered at the server if necessary, and an infinite queue buffer is assumed. The FCFS service principle is assumed in the model. To answer a location-dependent query, the server locates the correct data value with respect to the specified location. Since the main concern of this paper is the cost of the wireless link, which is more expensive than the wired-link and disk I/O costs, the overheads of request processing and service scheduling at the server are assumed to be negligible in the model.

VIII. PERFORMANCE EVALUATION

In this section, the proposed CEB_G and CEFAB are evaluated using the simulation model described in the previous section. Table V shows the default parameter settings of the simulation model.

TABLE V
Default Parameter Settings for the Simulation Model

Parameter    | Setting   | Parameter      | Setting
Size         | 4000*4000 | QueryInterval  | 50.0 s
ItemNum      | 500       | MovingInterval | 100.0 s
ScopeNum     | 110       | MinSpeed       | 1 s-1
DataSize     | 128 bytes | MaxSpeed       | 2 s-1
UplinkBand   | 19.2 kbps | CacheSizeRatio | 10%
DownlinkBand | 144 kbps  | θ              | 0.5
FloatSize    | 4 bytes   |                |

For our evaluation, we assume that all data items follow the same scope distribution in a single set of experiments. The results are obtained once the system has reached a stable state, i.e., the client has issued at least 20,000 queries, so that the warm-up effect of the client cache is eliminated. The LRU cache replacement policy is employed for cache management. Fig. 4 compares the performance of CEB_G with the CEB scheme. We observe that the performance of CEB_G is better than that of CEB for small query intervals. Moreover, CEFAB further improves the performance over CEB_G.

Fig. 4. Cache hit ratio of the invalidation schemes (AC, CEB, CEB_G, CEFAB) vs. query interval (20-200 seconds).

IX. CONCLUSION

We proposed a generalized algorithm, CEB_G, which selects the best suitable candidate for the valid scope, maximizing the caching efficiency, and compared its performance with the existing CEB algorithm. Moreover, we showed that by varying CEB to keep more choices in each iteration, better results can be obtained. We also introduced a new metric, called Future Access, which takes into account the future movement behavior of the client, and proposed the CEFAB algorithm based on it. As future work we are extending our study to prefetching and cache replacement policies for location dependent data.

ACKNOWLEDGMENT

The authors would like to thank Intel PlanetLab, I.I.T. Roorkee for providing support for this work.

REFERENCES

[1] D. Barbara, "Mobile Computing and Databases: A Survey", IEEE Trans. on Knowledge and Data Engg., 11(1), 1999.
[2] D. Barbara and T. Imielinski, "Sleepers and Workaholics: Caching Strategies in Mobile Environments", In Proc. of SIGMOD, 1994.
[3] S. Shekhar, A. Fetterer, and D. R. Liu, "Genesis: An Approach to Data Dissemination in Advanced Traveler Information Systems", IEEE Data Engineering Bull., 19(3), 1996.
[4] K. Cheverst, N. Davies, K. Mitchell, and A. Friday, "Experiences of Developing and Deploying a Context-Aware Tourist Guide: The GUIDE Project", In Proc. of MOBICOM, 2000.
[5] N.B. Priyantha, A. Chakraborty, and H. Balakrishnan, "The Cricket Location-Support System", In Proc. of MOBICOM, 2000.
[6] I.A. Getting, "The Global Positioning System", IEEE Spectrum, 12(30), 1993.
[7] E. Pitoura and G. Samaras, "Locating Objects in Mobile Computing", IEEE Trans. on Knowledge and Data Engg., 13(4), 2001.
[8] J. Xu, X. Tang and D. L. Lee, "Performance Analysis of Location-Dependent Cache Invalidation Schemes for Mobile Environments", IEEE Trans. on Knowledge and Data Engg., 15(2), 2003.
[9] Q. Ren, M. H. Dunham and V. Kumar, "Semantic Caching and Query Processing", IEEE Trans. on Knowledge and Data Engg., 15(1), 2003.
[10] B. Zheng and D. L. Lee, "Semantic Caching in Location-Dependent Query Processing", In Proc. of SSTD, 2001.
[11] B. Zheng, J. Xu and D. L. Lee, "Cache Invalidation and Replacement Strategies for Location-Dependent Data in Mobile Environments", IEEE Trans. on Computers, 51(10), 2002.
[12] D.L. Lee, W.-C. Lee, J. Xu, and B. Zheng, "Data Management in Location-Dependent Information Services", IEEE Pervasive Computing, 1(3), 2002.
[13] H. Manica, M.S. Camargo, and R.R. Ciferri, "A New Model for Location-Dependent Semantic Cache Based on Pre-Defined Regions", CLEI, 2004.
[14] J. O'Rourke, Computational Geometry in C, chapter 5, Cambridge Univ. Press, 1998.
[15] V. Kumar and M.H. Dunham, "Defining Location Data Dependency, Transaction Mobility and Commitment", Technical Report 98-CSE-01, Southern Methodist Univ., TX, 1998.