
Mobile Networks and Applications 10, 475–486, 2005. © 2005 Springer Science + Business Media, Inc. Manufactured in The Netherlands.


Energy-Efficient Data Caching and Prefetching for Mobile Devices Based on Utility


Center for Research in Wireless Mobility and Networking (CReWMaN), Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019-0015


Abstract. In mobile computing environments, vital resources like battery power and wireless channel bandwidth impose significant challenges in ubiquitous information access. In this paper, we propose a novel energy and bandwidth efficient data caching mechanism, called GreedyDual Least Utility (GD-LU), that enhances dynamic data availability while maintaining consistency. The proposed utility-based caching mechanism considers several characteristics of mobile distributed systems, such as connection-disconnection, mobility handoff, data update and user request patterns to achieve significant energy savings in mobile devices. We develop an analytical model for energy consumption of mobile devices in a dynamic data environment. Based on the utility function derived from the analytical model, we propose algorithms for cache replacement and passive prefetching of data objects. Our comprehensive simulation experiments demonstrate that the proposed caching mechanism achieves more than 10% energy saving and near-optimal performance tradeoff between access latency and energy consumption.


Keywords: wireless networks, mobile devices, caching, prefetching, energy efficient


1. Introduction


A major challenge in the true realization of mobile computing systems is the provisioning of ubiquitous data access to mobile users. Users of mobile devices generally desire access to diverse data such as stock quotes, news headlines, email, traffic reports and video clips via wireless networks. However, the limited battery power of mobile devices and scarce wireless channel bandwidth are the two major challenges for ubiquitous information access in mobile computing environments. Caching is considered one of the important techniques to relieve the bandwidth constraint imposed on wireless mobile systems [3,20,24]. Copies of remote data can be kept in the local memory of mobile devices to substantially reduce data retrievals from the original server. This reduces not only the uplink and downlink bandwidth consumption but also the average latency of data access. In the majority of mobile devices, such as laptops, palmtops and cellular phones, wireless communication is one of the major sources of energy consumption that reduces battery life [10]. For example, the energy cost of transmitting 1 Kbit of information is approximately the same as the energy consumed executing 3 million instructions [16]. Thus, caching frequently accessed data in mobile devices can minimize communication and hence conserve battery power. Prefetching is a technique that can reduce access latency and improve the cache hit ratio: access to remote data is anticipated, so the data is fetched a priori. Most existing prefetching schemes [6,9,11] use uplink bandwidth and battery energy to improve the cache hit ratio and reduce access latency. Broadcast and multicast [1,2] have proven to be effective data dissemination techniques in mobile computing environments. A prefetching scheme that considers both


HUAPING SHEN, MOHAN KUMAR, SAJAL K. DAS and ZHIJUN WANG


power and bandwidth efficiency needs to be investigated for data dissemination environments. Due to frequent mobility and possible disconnection of mobile devices, the data in such devices' caches are prone to become stale. A novel scheme called the scalable asynchronous cache consistency scheme (SACCS) was proposed by Wang et al. [21,22] to maintain data consistency in mobile computing systems. SACCS relies on three key features: (1) use of flag bits at the server and in the mobile device's cache to maintain cache consistency; (2) use of an identifier for each entry in the mobile device's cache after its invalidation, in order to maximize broadcast bandwidth efficiency; and (3) rendering of all valid entries of a mobile device's cache to the uncertain state upon wake-up. These features make SACCS a highly scalable algorithm with minimal data management overhead in both single- and multi-cell environments. In this paper, we develop an analytical model for the energy consumption of mobile devices due to data access in a dynamic data environment, assuming the SACCS data consistency maintenance scheme. The analytical model considers the different events, such as data request, data update, connection-disconnection and mobility handoff, that occur at mobile devices and affect energy consumption due to data retrievals. Based on this model, we derive a utility function to evaluate the utility of each data item in terms of energy saving. A novel caching mechanism, called GreedyDual Least Utility (GD-LU), is proposed for cache management in mobile devices. GD-LU has two main components: the GD-LU cache replacement algorithm and the GD-LU passive prefetch algorithm. The GD-LU cache replacement algorithm selects the data items with the most utility to cache in local memory. In passive prefetch, mobile devices acquire data items from broadcast channels for users' future requests based on the data items'





relative utility values. Based on priority queue management, the GD-LU cache replacement and passive prefetching algorithms achieve a time complexity of O(log N), where N is the number of data items in the cache. Extensive experiments show that the GD-LU cache replacement algorithm achieves more than 10% more energy savings than existing approaches. The GD-LU passive prefetching algorithm achieves a near-optimal performance tradeoff between access latency and energy consumption under various cache sizes. A preliminary version of this paper is available in [18]. The rest of this paper is organized as follows. Section 2 describes related work. Section 3 briefly describes the SACCS algorithm. Section 4 derives a utility-based model for energy consumption in mobile devices. Section 5 describes the proposed caching and prefetching algorithms in detail. Section 6 presents the simulation model and comprehensive simulation results. Section 7 concludes the paper.


2. Related work


There are three main issues involved in the design of caches for mobile devices [24]: (1) replacement, the process of selecting a set of victim data items to discard from the cache in order to accommodate new incoming data; (2) prefetching, the process of fetching data into the cache in anticipation of future accesses; and (3) invalidation, the process of invalidating data items to maintain consistency between local caches and the original server. Several policies have been proposed in past research to carry out each of these three processes. However, previous works [7,11,24] have investigated these problems individually; none considered all three issues together to enhance data availability in mobile devices. Moreover, most existing approaches focused only on improving access latency, so energy and bandwidth efficiency in mobile cache management has received little attention. Nuggehalli et al. [15] addressed the problem of optimal cache placement in ad hoc wireless networks and proposed a greedy algorithm, called POACH, to minimize the weighted sum of energy expenditure and access delay, but the problem addressed in this paper is different from that in [15]. Replacement policy in web proxy caching has been studied widely. Recently, many deterministic replacement schemes, such as GD-Size [8], LNC-R-W3-U [19], LRV [17] and Hybrid [23], have been studied. Since poor connectivity, limited energy and limited memory capacity characterize today's mobile environments, these web proxy caching algorithms cannot be adopted directly to manage mobile caches. A cache replacement algorithm for wireless data dissemination was first proposed in the Broadcast Disk project [1]. Subsequently, many replacement algorithms have been proposed for data dissemination scenarios [1,20]. However, most of them need to work cooperatively with a specific broadcast scheduling algorithm to achieve good performance. Xu et al. [24] proposed an algorithm that considers access probability, update frequency, data size, retrieval delay, and cache invalidation cost to minimize the access latency of mobile devices. Since this approach needs an aging daemon to periodically update the estimated information of each data item, the computational complexity and energy consumption of the algorithm are too high for a mobile device.

Prefetching has been investigated to reduce web access latency in wireline networks. Existing works [9,11] investigate prefetch schemes involving a point-to-point session transmission model, which is different from the broadcast communication model in wireless mobile networks. In recent years, hybrids of prefetching and data invalidation have been studied for performance tradeoffs [6,25]. In [6], mobile devices calculate the prefetch access ratio (PAR), which is the number of prefetches divided by the number of accesses for each data item, and use a threshold PAR to determine whether to prefetch a data item. In [25], a value function is used instead of PAR to evaluate each data item. All of these works use prefetching to retrieve the desired data in a proactive fashion, in which mobile devices need to send an uplink message to the base station and then wait for a period of time to acquire the data from the downlink channel. Since mobile devices consume much more energy sending an uplink message than receiving a downlink message of the same size, due to channel contention and data retransmission [10], proactive prefetching schemes are not expected to be energy efficient in general.

Finally, frequent disconnections are quite common for mobile devices, so cache consistency is an important issue. There are two types of cache consistency maintenance approaches for wireless mobile environments: stateless and stateful. In the stateless approach [3,7,12], the server assumes no knowledge of the mobile devices' cache contents and simply sends them invalidation reports (IRs) periodically. At the mobile device, a data item request cannot be serviced until the next IR from the server is received. In the stateful approach [13], on the other hand, the base station maintains the state of the data items in each mobile cache and only broadcasts IRs for cached items. Even though stateless approaches employ simple database management, their scalability and ability to support disconnection are poor in general. In contrast, stateful approaches such as the asynchronous stateful scheme [13] are scalable; however, they incur significant overhead due to server database management. Wang et al. [21,22] proposed a scalable and efficient hybrid scheme that maintains cache consistency between the server and mobile devices; this hybrid scheme outperforms the others in both single-cell and multi-cell wireless environments.

3. Preliminaries

As depicted in figure 1, for a mobile device there are four kinds of events related to data management in the system: data request, data update (i.e., receiving a data invalidation report), disconnection, and mobility handoff.1 Data update,


1 We refer to handoff as the start of the handoff procedure in this paper.

disconnection, and handoff events may occur zero or more times between two consecutive request events. Each mobile device retrieves data from an original server through either a unicast or broadcast channel. Data consistency is maintained by both the mobile devices and the original server.

Figure 1. Events of mobile devices.

3.1. An overview of SACCS

In this paper, the data consistency between mobile devices and the server follows the approach proposed in the SACCS algorithm [21]. In SACCS, each data item in a server is associated with a flag bit. When an item is retrieved by a mobile device, the corresponding flag bit is set, indicating that a valid copy may be available in a mobile cache. When the data item is updated, the server immediately broadcasts its invalidation report (IR) to mobile devices and resets the flag bit. A reset flag bit implies that there is no valid copy in any mobile cache; hence, subsequent updates do not trigger an IR broadcast until the flag bit is set again. A device is either in an awake state (i.e., connected with the base station, BS) or in a sleep state (i.e., disconnected from the BS) at the time of an IR broadcast. If a mobile device is in the awake state, it deletes the valid copy and sets the entry to the invalidated state (i.e., the data item is deleted, but an ID is kept) after it receives the IR. If the device is in the sleep state, it misses the IR; upon wake-up, it sets all valid cache entries to an uncertain state. An uncertain entry must be refreshed or checked before its usage. Thus, cache consistency is guaranteed. All entries with data items in the uncertain or invalidated state can be used to identify useful broadcast messages for validating and triggering data item downloads.

In SACCS, each data item in the system is in one of three states: invalidated, certain and uncertain. The invalidated state is defined for data items that are not cached at the mobile device. The uncertain state is defined for data items that are cached at the mobile device but whose validity is not confirmed. The certain state is defined for data items whose validity is confirmed and which can be used to satisfy an application's data request. After receiving an IR, mobile devices discard the corresponding data items and keep the metadata information (including the ID) in their local caches; the data items identified in the IR change their status to the invalidated state. If mobile devices reconnect to the wireless network after a disconnection from the base station, or if they move to another cell, all data items in the cache are set to the uncertain state, since in both cases we do not know whether the cached data are valid. If an application requests an item that is in the uncertain state, an uncertain request message is initialized and sent to the base station. The base station responds with a confirmation message if the data item has not been updated since the mobile last acquired it; otherwise, the whole data item is sent to the mobile. After receiving the confirmation message from the base station, the uncertain cached data item is changed to the certain state.

4. Analytical model for utility

In this section, we start with the assumptions used in our analytical model. Then a utility model is developed to analyze the energy saved by using a cache at the mobile devices. Finally, a utility function is derived from the analysis. Since the Poisson process has been widely used to model independent arrivals, we assume that data requests, data updates at the server, mobility handoffs and disconnections of mobile devices follow Poisson processes. The following analysis assumes unicast communication between the base station and mobile devices. As mentioned earlier, the SACCS algorithm is used to maintain data consistency between mobile devices and the original server. The notations used in the analysis are listed below:

- $\lambda_i^a$: access request arrival rate of data item $d_i$;
- $\lambda_i^u$: update rate of data item $d_i$;
- $\lambda^{dc}$: disconnection rate of mobile devices;
- $\lambda^h$: handoff rate of mobile devices;
- $\lambda^s = \lambda^{dc} + \lambda^h$: uncertain rate of mobile devices;
- $\lambda^a$: user request arrival rate of mobile devices;
- $p_i = \lambda_i^a / \lambda^a$: average access probability of $d_i$;
- $s_i$: size of data item $d_i$;
- $s_{up}$: size of an uplink uncertain request or cache miss request;
- $s_{dw}$: size of a downlink confirmation message;
- $\varepsilon(s_{up})$: energy cost to send a request to a base station;
- $\varepsilon(s_{dw})$: energy cost to receive a confirmation message;
- $\varepsilon(s_i)$: energy cost to receive a data item of size $s_i$;
- $S_n^u$: set of uncertain data items in the cache after the $n$th request;
- $S_n^c$: set of certain data items in the cache after the $n$th request;
- $S_n = S_n^u \cup S_n^c$: set of data items in the cache after the $n$th request;
- $EV_i$: event $i$ occurs in a mobile device;
- $P_i$: the probability of event $EV_i$.
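For illustration only (the identifiers and example values below are ours, not the paper's), this notation maps naturally onto a small per-item metadata record plus a few system-wide rates:

```python
from dataclasses import dataclass

@dataclass
class ItemMeta:
    """Per-item parameters needed by the utility functions (hypothetical names)."""
    access_rate: float  # lambda_i^a: access request arrival rate of d_i
    update_rate: float  # lambda_i^u: update rate of d_i
    size: int           # s_i: size of d_i in bytes

# System-wide rates (assumed example values, per second)
disconnect_rate = 1.0 / 3600                      # lambda^dc
handoff_rate = 1.0 / 1800                         # lambda^h
uncertain_rate = disconnect_rate + handoff_rate   # lambda^s = lambda^dc + lambda^h
request_rate = 0.1                                # lambda^a

meta = ItemMeta(access_rate=0.02, update_rate=0.005, size=25_700)
p_i = meta.access_rate / request_rate             # p_i = lambda_i^a / lambda^a
```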

Figure 2 depicts the state transitions of each cached data item $d_i$ between the $n$th and $(n+1)$th data requests. Event $EV_1$ occurs when the mobile device does not receive any IR for $d_i$, which therefore remains in the uncertain state. Event $EV_2$ occurs when $d_i$ moves to the invalidated state because an IR for $d_i$ is received. Event $EV_3$ occurs if both of the following conditions are satisfied: the mobile device does not receive any IR for $d_i$, and no handoff or disconnection occurs. Event $EV_4$ accounts for $d_i$ changing from the certain to the uncertain state when a handoff or disconnection occurs. $EV_5$ stands for the event that $d_i$ changes from the certain to the invalidated state; it happens if at least one IR for $d_i$ is received by the mobile device.

Figure 2. State transitions of mobile data.

Since we assume that the arrivals of data request, data update, disconnection and handoff are all Poisson processes, the corresponding probability $P_i$ of each event $EV_i$ (for $1 \le i \le 5$) is given by equations (1)–(5):

$$P_1 = \int_0^\infty e^{-\lambda_i^u t}\,\lambda_i^a e^{-\lambda_i^a t}\,dt = \frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u} \quad (1)$$

$$P_2 = 1 - P_1 = \frac{\lambda_i^u}{\lambda_i^a + \lambda_i^u} \quad (2)$$

$$P_3 = \int_0^\infty e^{-\lambda_i^u t} e^{-\lambda^s t}\,\lambda_i^a e^{-\lambda_i^a t}\,dt = \frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u + \lambda^s} \quad (3)$$

$$P_4 = \int_0^\infty e^{-\lambda_i^u t}\bigl(1 - e^{-\lambda^s t}\bigr)\lambda_i^a e^{-\lambda_i^a t}\,dt = \frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u} - \frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u + \lambda^s} \quad (4)$$

$$P_5 = 1 - P_3 - P_4 = \frac{\lambda_i^u}{\lambda_i^a + \lambda_i^u} \quad (5)$$

The energy consumed by a mobile device for sending, receiving or discarding a message is given by the following linear equation [10]:

$$\varepsilon(s) = m \cdot s + h \quad (6)$$

where $s$ is the message size, $m$ denotes the incremental energy cost per unit of message size, and $h$ is the energy cost for the message overhead. The latter two parameters, $m$ and $h$, differ for sending and receiving.

When a mobile user requests an uncertain data item in the cache, two cases need to be considered to calculate the energy cost of satisfying the request. If the requested data item is still valid, the energy cost is $\varepsilon(s_{up}) + \varepsilon(s_{dw})$, i.e., the cost of sending the uplink uncertain request and receiving the downlink confirmation message. If the requested data item is invalid, the energy cost is $\varepsilon(s_{up}) + \varepsilon(s_i)$, the cost of sending the uplink uncertain request and receiving the data item. So, according to equations (1)–(5), the energy cost $E_i^u$ of a mobile device's request for an uncertain data item $d_i \in S_n^u$ is given by

$$E_i^u = p_i\Bigl[\bigl(\varepsilon(s_{up}) + \varepsilon(s_{dw})\bigr)\frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u} + \bigl(\varepsilon(s_{up}) + \varepsilon(s_i)\bigr)\frac{\lambda_i^u}{\lambda_i^a + \lambda_i^u}\Bigr]$$

The energy cost $E_i^c$ of a mobile device's request for a certain data item $d_i \in S_n^c$ is given by

$$E_i^c = p_i\Bigl[\bigl(\varepsilon(s_{up}) + \varepsilon(s_{dw})\bigr)\Bigl(\frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u} - \frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u + \lambda^s}\Bigr) + \bigl(\varepsilon(s_{up}) + \varepsilon(s_i)\bigr)\frac{\lambda_i^u}{\lambda_i^a + \lambda_i^u}\Bigr]$$

At the $(n+1)$th request, the cache mechanism needs to choose victim data sets from the certain and uncertain data sets in the cache to make space for an incoming data item $d_x$. Therefore,

$$S_{n+1} = S_n \cup \{d_x\} - V_n^u - V_n^c$$

where $V_n^u \subseteq S_n^u$ is the victim data set chosen from the uncertain data set and $V_n^c \subseteq S_n^c$ is the victim data set chosen from the certain set. So the energy cost after the $(n+1)$th request can be expressed recursively as follows:

$$e_{n+1} = e_n + E_x^c - p_x\bigl(\varepsilon(s_{up}) + \varepsilon(s_x)\bigr) + \sum_{i \in V_n^u}\bigl[p_i\bigl(\varepsilon(s_{up}) + \varepsilon(s_i)\bigr) - E_i^u\bigr] + \sum_{i \in V_n^c}\bigl[p_i\bigl(\varepsilon(s_{up}) + \varepsilon(s_i)\bigr) - E_i^c\bigr] \quad (7)$$

The objective of our replacement policy is to choose the optimal victim data set from the cache to minimize the energy cost after the $(n+1)$th request. The second term in equation (7) is the increased energy cost as $d_x$ is cached at the $(n+1)$th request, i.e., $d_x$ changes from the invalidated state to the certain state. The cache mechanism cannot change the first two terms of equation (7); however, the last two terms can be minimized as follows:

$$\sum_{i \in V_n^u}\bigl[p_i\bigl(\varepsilon(s_{up}) + \varepsilon(s_i)\bigr) - E_i^u\bigr] = \sum_{i \in V_n^u} p_i\bigl[\varepsilon(s_i) - \varepsilon(s_{dw})\bigr]\frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u}$$

$$\sum_{i \in V_n^c}\bigl[p_i\bigl(\varepsilon(s_{up}) + \varepsilon(s_i)\bigr) - E_i^c\bigr] = \sum_{i \in V_n^c} p_i\Bigl[\bigl(\varepsilon(s_i) - \varepsilon(s_{dw})\bigr)\frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u} + \bigl(\varepsilon(s_{up}) + \varepsilon(s_{dw})\bigr)\frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u + \lambda^s}\Bigr]$$
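As a numerical sanity check, the sketch below (our own code, with assumed message sizes and energy coefficients) transcribes equations (1)–(6) and the two expected-cost expressions, and confirms that a certain item never costs more per request than an uncertain one:

```python
def event_probs(la, lu, ls):
    """P1..P5 from equations (1)-(5); la = lambda_i^a, lu = lambda_i^u, ls = lambda^s."""
    p1 = la / (la + lu)
    p2 = 1 - p1
    p3 = la / (la + lu + ls)
    p4 = la / (la + lu) - la / (la + lu + ls)
    p5 = 1 - p3 - p4          # equals lu / (la + lu)
    return p1, p2, p3, p4, p5

def energy(size, m=1e-6, h=1e-4):
    """Linear message energy model of equation (6): eps(s) = m*s + h (assumed m, h)."""
    return m * size + h

def cost_uncertain(p_i, la, lu, s_i, s_up=20, s_dw=16):
    """Expected energy E_i^u of a request for an uncertain cached item."""
    p1, p2, *_ = event_probs(la, lu, 0.0)
    return p_i * ((energy(s_up) + energy(s_dw)) * p1
                  + (energy(s_up) + energy(s_i)) * p2)

def cost_certain(p_i, la, lu, ls, s_i, s_up=20, s_dw=16):
    """Expected energy E_i^c of a request for a certain cached item."""
    _, _, p3, p4, p5 = event_probs(la, lu, ls)
    return p_i * ((energy(s_up) + energy(s_dw)) * p4
                  + (energy(s_up) + energy(s_i)) * p5)
```

Since $P_4 \le P_1$ while $P_5 = P_2$, the certain-state cost is never larger, which matches the intuition that a certain item may be served locally for free.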


Therefore, the utility function to evaluate uncertain data items in a cache is given by

$$U_i^u = p_i\bigl[\varepsilon(s_i) - \varepsilon(s_{dw})\bigr]\frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u} \quad (8)$$

The utility function for valid (certain) data items is given by

$$U_i^c = p_i\bigl[\varepsilon(s_i) - \varepsilon(s_{dw})\bigr]\frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u} + p_i\bigl[\varepsilon(s_{up}) + \varepsilon(s_{dw})\bigr]\frac{\lambda_i^a}{\lambda_i^a + \lambda_i^u + \lambda^s} \quad (9)$$

In order to minimize the energy cost at mobile devices, at each replacement we choose the data items with the least utility values according to equations (8) and (9). An uncertain data item costs more energy because it is necessary to get confirmation from the base station. The utility of a data item $d_i$ in the certain state therefore exceeds its utility in the uncertain state by a factor of $p_i[\varepsilon(s_{up}) + \varepsilon(s_{dw})]\,\lambda_i^a/(\lambda_i^a + \lambda_i^u + \lambda^s)$.
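A direct transcription of equations (8) and (9) (our own sketch; the energy coefficients and parameter values are assumed) also confirms the stated gap between the certain-state and uncertain-state utilities:

```python
def eps(s, m=1e-6, h=1e-4):
    # Linear message energy model of equation (6); m and h are assumed values.
    return m * s + h

def utility_uncertain(p_i, la, lu, s_i, s_dw=16):
    # Equation (8): U_i^u
    return p_i * (eps(s_i) - eps(s_dw)) * la / (la + lu)

def utility_certain(p_i, la, lu, ls, s_i, s_up=20, s_dw=16):
    # Equation (9): U_i^c = U_i^u plus the confirmation-saving term
    return (utility_uncertain(p_i, la, lu, s_i, s_dw)
            + p_i * (eps(s_up) + eps(s_dw)) * la / (la + lu + ls))
```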


5. GreedyDual (GD) Least Utility (LU) caching mechanism


In this section, we propose the GD-LU caching mechanism, which is based on the utility functions derived in the previous section. We describe the GD-LU cache replacement algorithm, followed by the GD-LU passive prefetching algorithm. Implementation issues and the algorithmic complexity of the GD-LU caching mechanism are also discussed.


5.1. GD-LU cache replacement algorithm


The GreedyDual (GD) cache replacement is an efficient online optimal algorithm [26] devised for systems that exhibit heterogeneous data retrieval costs. GD in essence applies a bias to each item in the cache so as to give higher priority to items that incur higher retrieval costs. Revised versions of the GD algorithm have been deployed for web proxy caching [8] and multimedia stream caching [14]. Based on the GD concept, we propose a novel algorithm, called the GreedyDual Least Utility (GD-LU) cache replacement algorithm, for mobile devices. Since the utility functions of equations (8) and (9) are used in the GD-LU algorithm, their parameters must be available when a replacement occurs at the cache. We associate each data item with a metadata record that contains the necessary parameters of that item; parameter estimation is discussed in Section 5.3. Since each metadata record contains the history information of the corresponding data item, we use a queue to store the metadata of replaced data items. As shown in the algorithm described in figure 3, when an application requests a data item, the algorithm first checks the state of the item in the cache. If the data item is valid, it is returned and the corresponding metadata is updated. If the data item is in an uncertain state, an uncertain message is sent to the server to check whether the data is still valid since the last


Figure 3. GD-LU cache replacement algorithm.

retrieval. In the event of a cache miss, a request message is sent to the server to retrieve the data. When the mobile device receives the confirmation message, the data item is set to the certain state and returned to the application. If the whole data item is received, the GD-LU replacement algorithm chooses a victim data set to make space for the incoming data item.
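The eviction loop can be sketched with a lazily invalidated binary heap (a minimal sketch of our own under assumed interfaces, not the paper's figure 3 code; stale heap entries are skipped on pop, preserving the O(log N) amortized bound):

```python
import heapq

class GDLUCache:
    """Evicts least-utility items until the incoming item fits."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = {}   # data_id -> (size, utility)
        self.heap = []    # (utility, data_id) min-heap, lazily invalidated

    def insert(self, data_id, size, utility):
        """Admit an item, evicting least-utility victims as needed.

        Returns the list of evicted data ids (the victim set).
        """
        victims = []
        while self.used + size > self.capacity and self.heap:
            u, vid = heapq.heappop(self.heap)
            # Skip stale heap entries whose utility no longer matches.
            if vid in self.items and self.items[vid][1] == u:
                vsize, _ = self.items.pop(vid)
                self.used -= vsize
                victims.append(vid)
        if self.used + size <= self.capacity:
            self.items[data_id] = (size, utility)
            heapq.heappush(self.heap, (utility, data_id))
            self.used += size
        return victims
```

In a full implementation the evicted items' metadata would be appended to the metadata queue described above, so their history survives the eviction.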


5.2. GD-LU passive prefetching algorithm


The base station and the mobile devices can communicate through broadcast, multicast or unicast channels. In multicast or broadcast, the mobile devices in one group can cooperatively retrieve data from the server, so GD-LU replacement can be expected to achieve better performance in broadcast or multicast modes than in unicast. For broadcast and multicast communications, a data item requested by one mobile device is available to all other mobile devices on the data dissemination channel. Although a data item may not have been requested by a given mobile device, prefetching it into that device's cache may reduce future access latency. However, acquiring a data item from the dissemination channel still costs energy at the mobile device; therefore, blind prefetching of data that the user never requests may waste energy and displace recently accessed data, degrading cache performance.



Figure 4. GD-LU passive prefetch algorithm.
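A hypothetical rendering of the admission test that figure 4 and the relative-utility definition below imply (our own sketch; the handling of the degenerate equal-utilities case is our own choice, not the paper's):

```python
def relative_utility(u_item, cached_utilities):
    """RU of equation (10): item utility over the spread of cached utilities."""
    spread = max(cached_utilities) - min(cached_utilities)
    # If all cached utilities are equal, the spread is zero; we treat the
    # ratio as infinite so the item is always admitted (assumed behavior).
    return float('inf') if spread == 0 else u_item / spread

def admit_for_prefetch(u_item, cached_utilities, threshold):
    # Passive prefetch: take the item off the broadcast channel only if
    # its relative utility clears the cache's admission threshold TH.
    return relative_utility(u_item, cached_utilities) > threshold
```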

In order to avoid blind prefetching, we propose a passive prefetching scheme for cache management of mobile devices in multicast and broadcast communications. In passive prefetching, a threshold (TH) is set at the cache to admit data appearing on the broadcast or multicast channel. According to the GD-LU replacement algorithm, a data item whose metadata are kept in the metadata queue has a higher probability of being requested again by the mobile device, so such a data item is considered a valuable candidate for passive prefetching. To evaluate the future utility of an item at the mobile device, we define the relative utility (RU) as the ratio of the utility of a data item $d_i$ to the spread of the utilities of the cached data:

$$RU(d_i) = \frac{U_i^c(d_i)}{\max_{j \in C}(u_j) - \min_{j \in C}(u_j)} \quad (10)$$

where $C$ is the set of data items in the cache and $u_j$ is the utility value of cached data item $j$. If RU is greater than the threshold (TH) of the cache, the data item is admitted into the cache; otherwise the cache ignores it. Although the utility function of equation (9) is derived under a unicast assumption, as shown in the next section it adapts to broadcast communication very well; the derivation of a specific utility function for broadcast is part of our future work. The threshold is discussed in more detail in the performance evaluation section, where a median utility threshold scheme is proposed to achieve a near-optimal performance tradeoff between access latency and power consumption. The passive prefetching algorithm is shown in figure 4. In passive prefetching, a mobile device does not send an uplink request, thus reducing the burden on the server while improving cache performance at the mobile device and saving scarce wireless bandwidth.

5.3. Implementation issues and complexity

To implement the proposed replacement and passive prefetching schemes at the mobile devices, two issues must be addressed, namely, data structure management and parameter estimation. Like the GreedyDual algorithm [26], the GD-LU mechanism uses a priority queue for the data items, ordered by their utility values. Handling a cache hit takes $O(\log N)$ time, where $N$ is the number of data items in the cache. Handling an eviction also requires $O(\log N)$ time. For both a cache hit and an eviction, GD-LU replacement requires only one data item to be updated. Due to the priority queue management, the proposed passive prefetching algorithm also has a time complexity of $O(\log N)$.

Several parameters are involved in computing the utility value of a data item. The incremental energy coefficient $m$ and overhead $h$ in equation (6) are system parameters obtained from system specifications. The disconnection rate ($\lambda^{dc}$), handoff rate ($\lambda^h$) and request arrival rate ($\lambda^a$) can be estimated using the sliding average method [19]. The update rate ($\lambda_i^u$) and access rate ($\lambda_i^a$) of each data item can be estimated by exponential aging [19] or the sliding average method. We use metadata to keep the information (e.g., $\lambda_i^a$ and $\lambda_i^u$) of each data item. On each eviction, the metadata of the victim items are kept in the metadata queue. The space overhead of each metadata record is very small, and the total space overhead of the proposed cache mechanism is $s_{md} \cdot (N + Q)$, where $s_{md}$ is the size of one metadata record, $N$ is the total number of data items in the cache, and $Q$ is the maximum length of the metadata queue.

6. Performance evaluation

In this section, we evaluate the performance of the proposed GD-LU replacement and passive prefetching algorithms under various system settings and environments. Different performance metrics are used to evaluate the three main aspects of concern to applications: energy efficiency, access latency and data availability. The Min-SAUD [24], GreedyDual-Size (GD-Size) [8] and traditional Least Recently Used (LRU) algorithms are included for comparison in the performance evaluation study.

6.1. Simulation model and system parameters

For web data, the size distribution follows the lognormal model [4]; in our simulation, the lognormal model is likewise used for the size distribution of all data items in the system. We consider a total of 10,000 data items. The cutoff minimal size of a data item is assumed to be 256 bytes, the cutoff maximal size is 80 Kbytes, and the average size is 25.7 Kbytes. For each data item, the period between two updates follows an exponential distribution. In the original setting, the mean update period of each data item is chosen from the set {100, 200, 400, 800, 1600, 3200, 6400, 12800, 25600, 51200} seconds with equal probability. The arrival of requests at each mobile device is assumed to follow a Poisson process with a mean arrival rate of 1/10 sec−1. We consider a Zipf-like [5] distribution for the mobile device's access pattern, in which the access probability pi of data item di is proportional to its popularity rank, rank(i). That is, pi = c/rank(i)θ, where c is the normalization constant and


θ is a parameter between 0 and 1, whose default value is 0.9 in our simulations. A sleep-wakeup process [3] is used to model the connection and disconnection behavior of mobile devices. In this model, the rate at which a connected mobile device goes into the disconnected state is α = 1/((1−r)Ts), and the rate at which a disconnected mobile device returns to the connected state is β = 1/(r Ts), where Ts is the duration of a connected-disconnected cycle (default 3600 seconds) and r is the ratio of connected time to the connected-disconnected period (default 0.4). We use the linear power consumption model of an IEEE 802.11-based wireless LAN for mobile devices [10]. One broadcast channel of 2 Mbps bandwidth is shared by all mobile devices. In the simulation, there are 180 mobile devices, each having a cache of 2 Mbytes. The size of a request message (i.e., an uncertain-data confirmation request or a cache-miss request) is 20 bytes. The size of a confirmation or IR message is 16 bytes.
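A minimal sketch of the sleep-wakeup connectivity model, using the rates α and β exactly as given above (the function name and trace format are our own):

```python
import random

def connectivity_trace(Ts=3600.0, r=0.4, horizon=8 * 3600.0, seed=1):
    """Alternating exponential connected/disconnected dwell times:
    alpha = 1/((1-r)*Ts) governs leaving the connected state and
    beta = 1/(r*Ts) governs leaving the disconnected state.
    Returns a list of (start_time, connected?, dwell_time) tuples."""
    rng = random.Random(seed)
    alpha = 1.0 / ((1.0 - r) * Ts)
    beta = 1.0 / (r * Ts)
    t, connected, trace = 0.0, True, []
    while t < horizon:
        rate = alpha if connected else beta
        dwell = rng.expovariate(rate)
        trace.append((t, connected, dwell))
        t += dwell
        connected = not connected
    return trace
```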


6.2. Performance metrics


The performance metrics considered are the hit ratio (HR), power per query (PPQ), stretch (ST) [2] and energy stretch (EST) [25]. The metric PPQ is defined as the ratio of the total energy consumed by data requests to the number of requests in a given period. ST is defined as the ratio of the response time of a request to its service time, where service time is defined as the ratio of the requested item size to the bandwidth. EST is defined as the product of stretch and the energy consumption of a data request. Clearly, energy stretch reflects a performance tradeoff between energy consumption and access latency.
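These four metrics can be computed directly from a per-request log. The log format and field names below are illustrative, not part of the paper; the 2 Mbps bandwidth matches the simulation setup:

```python
BANDWIDTH = 2_000_000 / 8.0   # 2 Mbps broadcast channel, in bytes/s

def compute_metrics(requests):
    """requests: list of dicts with 'hit' (bool), 'energy' (uW.s),
    'response_time' (s) and 'size' (bytes) per request."""
    n = len(requests)
    hr = sum(r["hit"] for r in requests) / n          # hit ratio
    ppq = sum(r["energy"] for r in requests) / n      # power per query
    # Stretch: response time over service time (size / bandwidth).
    stretches = [r["response_time"] / (r["size"] / BANDWIDTH)
                 for r in requests]
    st = sum(stretches) / n
    # Energy stretch: stretch times energy, averaged over requests.
    est = sum(s * r["energy"] for s, r in zip(stretches, requests)) / n
    return {"HR": hr, "PPQ": ppq, "ST": st, "EST": est}
```

For example, two requests of 2500 bytes each have a service time of 0.01 s; response times of 0.02 s and 0.05 s give stretches of 2 and 5.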


6.3. Simulation results


In the experiments, we evaluate the performance of the GD-LU replacement algorithm and compare it with the LRU, GD-Size [8], and Min-SAUD [24] algorithms. For performance comparison, two versions of the GD-Size algorithm are used, namely GD-Size(1) and GD-Size(packets) [8]. The cost of each document is set to 1 for GD-Size(1), and to 2 + size/536 (the estimated number of network packets sent and received on a miss) for GD-Size(packets). As Min-SAUD was originally proposed with time stamp (TS) based cache consistency maintenance, for fair comparison we implement Min-SAUD under the SACCS consistency maintenance scheme. In Min-SAUD under SACCS, when calculating the gain function of Min-SAUD, we set the cache validation delay (v) to zero for valid data items and to the confirmation delay for uncertain data items. In the following figures, for brevity, SC stands for the SACCS cache consistency scheme, MIN stands for the Min-SAUD algorithm, and GD-SIZE(1) and GD-SIZE(PKT) stand for the two versions of the GD-Size algorithm.


GD-LU passive prefetching
In GD-LU passive prefetching, a large threshold (TH) value means that few data items are prefetched. As the threshold decreases, more and more data items can be prefetched; when the threshold is zero, all data items in the downlink channel are prefetched. Figures 5–8 show the PPQ, HR, ST and EST performance against the prefetch threshold under different cache sizes. In Table 1, the median relative utility value among all currently cached data items is used as the threshold.

Figure 5. Power consumption with passive prefetching. (PPQ versus prefetch threshold for cache sizes C = 2M, 3M, 4M and 5M.)

As shown in figure 5, energy consumption is high when the threshold (TH) value is small. For TH ≤ 10^-2, the energy consumption decreases significantly as the threshold increases, but beyond 10^-2 it is almost constant. This is because more data items are prefetched when the threshold is set lower, and some of them are not useful to users; when the threshold is large enough, most prefetched data items are useful, which leads to constant energy consumption. Figure 6 shows that the hit ratio improves as the threshold decreases. However, when the threshold is set very low (i.e., TH = 10^-3), the hit ratio does not improve although more data items are prefetched. This is called blind prefetching. A similar behavior

Figure 6. Hit ratio with passive prefetching. (Hit ratio versus prefetch threshold for cache sizes C = 2M, 3M, 4M and 5M.)

Table 1
Median utility prefetching performance.

Cache size (M)   MU PPQ (10^4 uw.s)   MU HR      MU ST     MU EST (10^4 uw.s)   Min EST (10^4 uw.s)
2                1.20368              0.256328   6.39952   7.70297              7.7069
3                1.2057               0.271676   5.45998   6.5831               6.57452
4                1.21802              0.281747   4.99865   6.0845               6.12083
5                1.23306              0.289208   4.75577   5.86415              5.87524

Figure 7. Stretch with passive prefetching. (Stretch versus prefetch threshold for cache sizes C = 2M, 3M, 4M and 5M.)

Figure 8. Energy stretch with passive prefetching. (Energy stretch versus prefetch threshold for cache sizes C = 2M, 3M, 4M and 5M.)

is shown in the stretch performance in figure 7. The stretch of an access drops with decreasing threshold, since prefetched data items help improve the hit ratio and reduce stretch. However, blind prefetching does not improve stretch performance: it consumes more energy without any improvement in cache performance. From figures 5–7 we also conclude that devices with more cache space consume less energy and enjoy a higher hit ratio and lower access latency than devices with less cache space. Since the energy consumption of passive prefetching is determined by the threshold value, the GD-LU passive prefetching scheme can dynamically adjust the threshold according to the current energy level of the device. When battery power is not a big concern, we set the threshold to a low value to reduce data access latency; when a mobile device is at a low energy level, we choose a high threshold to save battery power. The threshold setting should, however, not result in blind prefetching. As seen in figure 8, if we consider EST as the main performance metric for passive prefetching, we can obtain an optimal threshold value. Since energy stretch factors in both energy consumption and access latency, the optimal threshold can be considered the best tradeoff point between the two. As shown in Table 1, if we take the points with minimal energy stretch (Min EST) from figure 8 for comparison, the energy stretch of the median utility threshold setting (MU EST) achieves near-optimal performance under different cache sizes. This demonstrates that the median relative utility threshold is a good heuristic for achieving a near-optimal performance tradeoff between energy consumption and access latency.
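The energy-aware threshold adjustment described above can be sketched as follows. The median-of-cached-utilities base follows the paper's heuristic; the battery-level scaling rule is an illustrative assumption:

```python
import statistics

def prefetch_threshold(cached_relative_utilities, battery_level):
    """Median-utility heuristic with an energy-aware scaling: use the
    median relative utility of cached items as the base threshold, and
    raise it as the battery drains so fewer items are prefetched.
    battery_level is in (0, 1]; the scaling factor is illustrative."""
    base = statistics.median(cached_relative_utilities)
    # A full battery keeps the base threshold (low latency);
    # a nearly empty battery doubles it (energy saving).
    return base * (2.0 - battery_level)
```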

GD-LU replacement: Effect of cache size
In this experiment, we evaluate the performance of GD-LU replacement under different cache sizes, using PPQ, HR and ST as the performance metrics. To evaluate only the cache replacement algorithms, the passive prefetching scheme is not used in this experiment. As shown in figure 9, power consumption at the mobile devices decreases with increasing cache size.

Figure 9. Power consumption under different cache sizes. (PPQ versus cache size for SC-LRU, SC-GD-SIZE(1), SC-GD-SIZE(PKT), SC-GD-LU and SC-MIN.)

Figure 10. Hit ratio under different cache sizes.

Figure 11. Stretch under different cache sizes.

GD-LU achieves the best PPQ performance among all schemes, and GD-Size(1) the worst. Both Min-SAUD and GD-LU outperform the LRU algorithm in terms of energy consumption, since both significantly improve the hit ratio, as shown in figure 10. However, the energy saving improvement of GD-LU is about 15%, which is larger than that of Min-SAUD. Because the goal of Min-SAUD is to improve stretch performance, data items of small size are preferred, since smaller data items contribute more to the stretch performance. In contrast, GD-LU strives for optimal energy conservation, and therefore selects for caching the data items that would otherwise consume more energy. Thus, GD-LU outperforms Min-SAUD in terms of energy consumption. Furthermore, GD-LU outperforms both GD-Size(1) and GD-Size(packets), since GD-Size does not consider data updates in its cost function. The PPQ performance of GD-Size(1) is worse than that of LRU, since GD-Size(1) only tries to minimize the miss ratio by giving preference to small data items. In figure 10, LRU shows the worst hit ratio (HR) performance and GD-LU the best. Min-SAUD performs better than GD-Size(1) for small cache sizes, while it achieves a similar hit ratio to GD-Size(1) at large cache sizes. When the cache size is small, the hit ratio is very sensitive to the total number of data items in the cache; an algorithm favoring small data items can cache more items, so Min-SAUD achieves better HR performance than GD-Size(1). GD-Size(1) achieves a better hit ratio than GD-Size(packets), since GD-Size(1) tries to minimize the miss ratio while GD-Size(packets) tries to minimize network traffic. In figure 11, GD-LU and Min-SAUD achieve similar stretch performance, and both significantly improve on the stretch performance of GD-Size(1), LRU and GD-Size(packets).

GD-LU replacement: Effect of data update rate
As mentioned at the beginning of this section, the default data update rates have a uniform distribution, with a mean update period of 6855 seconds over all data items in the system. In this experiment, the mean update period is varied.

Figure 12. Power consumption under different data update rates. (PPQ versus mean update time for SC-LRU, SC-GD-SIZE(1), SC-GD-SIZE(PKT), SC-GD-LU and SC-MIN.)

As shown in figure 12, energy consumption decreases as the mean update period increases for all algorithms. There are two reasons for this drop in PPQ. First, SACCS broadcasts fewer IR messages as the mean update period increases, so less energy is used for IR listening at low data update rates. Second, as seen in figure 13, the hit ratio increases with the mean update period, and each scheme saves more energy by reducing cache misses. As shown in figure 12, GD-LU achieves the best PPQ performance among all schemes, but its improvement over the LRU scheme is small at high update rates. When the data update rate is high, each mobile device must listen to many IR messages, so IR listening dominates the energy consumption of the mobile devices; this leads to a small PPQ improvement of GD-LU over LRU at high update rates. However, the PPQ improvement of GD-LU over LRU is higher than that of Min-SAUD at all update rates, and GD-LU also outperforms both versions of the GD-Size algorithm.

Figure 13. Hit ratio under different data update rates. (Hit ratio versus mean update time for SC-LRU, SC-GD-SIZE(1), SC-GD-SIZE(PKT), SC-GD-LU and SC-MIN.)

Figure 15. Hit ratio under different connection/disconnection periods. (Hit ratio versus connection-disconnection period for the same five schemes.)


As shown in figure 13, all algorithms achieve better hit ratio performance in a lower update rate environment. The hit ratio of GD-LU is better than that of the other schemes at all update rates. Min-SAUD achieves a hit ratio similar to LRU at high update rates, because the gain function used in Min-SAUD approaches zero for each data item when data updates are frequent; Min-SAUD then degenerates to LRU, which evaluates only recency of access.

GD-LU replacement: Effect of disconnection frequency
This experiment evaluates the performance of GD-LU under different disconnection frequencies. The connection-disconnection duration (Ts) is varied to simulate different disconnection frequencies of mobile devices; a small value of Ts indicates a high disconnection frequency, and vice versa.

Figure 16. Stretch under different connection/disconnection periods. (Stretch versus connection-disconnection period for SC-LRU, SC-GD-SIZE(1), SC-GD-SIZE(PKT), SC-GD-LU and SC-MIN.)

Figure 14 shows that GD-LU significantly improves PPQ performance over LRU under different disconnection frequencies, and also outperforms Min-SAUD and GD-Size(packets); GD-Size(1) shows the worst PPQ performance under different disconnection frequencies. According to the SACCS algorithm, more data items in the cache are in the uncertain state as the disconnection frequency increases. Since the hit ratio is counted only for valid data, as shown in figure 15, LRU, GD-Size(1), GD-Size(packets) and Min-SAUD achieve similar hit ratio performance in high-disconnection scenarios, while GD-LU achieves the best hit ratio under all disconnection frequencies. This is because the utility function deployed in GD-LU considers the connection-disconnection characteristics of mobile devices. Figure 16 shows that GD-LU achieves the best stretch performance among all schemes under different connection-disconnection periods.

Figure 14. Power consumption under different connection/disconnection periods. (PPQ versus connection-disconnection period for SC-LRU, SC-GD-SIZE(1), SC-GD-SIZE(PKT), SC-GD-LU and SC-MIN.)

7. Conclusions

In this paper, we develop a utility model to investigate the energy conservation obtained by deploying a cache at mobile devices in a dynamic data environment. Based on the utility function derived from this model, we propose a novel caching mechanism, called GreedyDual Least Utility (GD-LU), that combines a cache replacement algorithm with a passive prefetching scheme to reduce the energy consumption and access latency of mobile applications. Simulation experiments demonstrate that the cache replacement algorithm achieves more than 10% energy saving, and that the passive prefetching algorithm achieves a near-optimal performance tradeoff between access latency and energy consumption. In future work, we will investigate GD-LU performance in multi-cell data dissemination environments through analysis. Along with a study of web cache consistency, the GD-LU cache replacement and passive prefetching schemes can be applied to web caching on various mobile devices.


Acknowledgment


This work is supported by a grant from Texas Advanced Research Program, TXARP Grant Number: 14-771032.


References


[1] S. Acharya, Broadcast disks: Dissemination-based data management for asymmetric communication environments, PhD Dissertation, Brown University (May 1998).
[2] S. Acharya and S. Muthukrishnan, Scheduling on-demand broadcasts: New metrics and algorithms, in: Proceedings of the 4th Annual ACM/IEEE MobiCom'98 (October 1998) pp. 43–54.
[3] D. Barbara and T. Imielinski, Sleepers and workaholics: Caching strategies for mobile environments, in: Proceedings of the ACM SIGMOD Conference on Management of Data (May 1994) pp. 1–12.
[4] P. Barford and M. Crovella, Generating representative web workloads for network and server performance evaluation, in: Proceedings of the ACM SIGMETRICS Conference (1998) pp. 151–160.
[5] L. Breslau, P. Cao, J. Fan, G. Phillips and S. Shenker, Web caching and Zipf-like distributions: Evidence and implications, in: IEEE Proceedings of INFOCOM (1999) pp. 126–134.
[6] G. Cao, Proactive power-aware cache management for mobile computing systems, IEEE Transactions on Computers 51(6) (2002) 608–621.
[7] G. Cao, A scalable low-latency cache invalidation strategy for mobile environments, in: ACM Intl. Conf. on Mobile Computing and Networking (MobiCom) (Aug 2001) pp. 200–209.
[8] P. Cao and S. Irani, Cost-aware WWW proxy caching algorithms, in: Proc. USENIX Symp. on Internet Technologies and Systems (Dec 1997) pp. 193–206.
[9] L. Fan, P. Cao, W. Lin and Q. Jacobson, Web prefetching between low-bandwidth clients and proxies: Potential and performance, in: Proceedings of the ACM SIGMETRICS Conference (June 1999) pp. 178–187.
[10] L. Feeney and M. Nilsson, Investigating the energy consumption of a wireless network interface in an ad hoc networking environment, in: IEEE Proceedings of INFOCOM (2001).
[11] S. Gitzenis and N. Bambos, Power-controlled data prefetching/caching in wireless packet networks, in: IEEE Proceedings of INFOCOM, New York (June 2002).
[12] J. Jing, A. Elmagarmid, A. Helal and R. Alonso, Bit-sequences: An adaptive cache invalidation method in mobile client/server environments, Mobile Networks and Applications 2(2) (1997) 115–127.
[13] A. Kahol, S. Khurana, S.K.S. Gupta and P.K. Srimani, A strategy to manage cache consistency in a distributed mobile wireless environment, IEEE Trans. on Parallel and Distributed Systems 12(7) (2001) 686–700.
[14] W.H.O. Lau, M. Kumar and S. Venkatesh, A cache-based mobility-aware scheme for real-time continuous media delivery in wireless networks, in: IEEE International Conference on Multimedia and Expo (2001).
[15] P. Nuggehalli, V. Srinivasan, C.F. Chiasserini and R.R. Rao, Energy efficient caching strategies in ad hoc wireless networks, in: ACM Proc. of MobiHoc'03 (2003) pp. 25–34.
[16] G.J. Pottie and W.J. Kaiser, Wireless integrated network sensors, Communications of the ACM 43(5) (2000) 551–558.
[17] L. Rizzo and L. Vicisano, Replacement policies for a proxy cache, IEEE/ACM Trans. on Networking 8 (May 2000) 158–170.
[18] H. Shen, M. Kumar, S.K. Das and Z. Wang, Energy-efficient caching and prefetching with data consistency in mobile distributed systems, in: IEEE International Parallel and Distributed Processing Symposium (IPDPS), Santa Fe, NM (April 2004).
[19] J. Shim, P. Scheuermann and R. Vingralek, Proxy cache design: Algorithms, implementation and performance, IEEE Trans. on Knowledge and Data Engineering (TKDE) 11(4) (1999) 549–562.
[20] C. Su and L. Tassiulas, Joint broadcast scheduling and user's cache management for efficient information delivery, ACM/Kluwer Wireless Networks 6(4) (2000) 279–288.
[21] Z. Wang, S.K. Das, H. Che and M. Kumar, SACCS: Scalable asynchronous cache consistency scheme, in: Proceedings of the ICDCS International Workshop on Mobile Wireless Networks (May 2003) pp. 797–802.
[22] Z. Wang, M. Kumar, S.K. Das and H. Shen, Investigation of cache maintenance strategies for multi-cell environments, in: International Conference on Mobile Data Management (MDM) (Jan 2003) pp. 29–44.
[23] R. Wooster and M. Abrams, Proxy caching that estimates page load delays, in: Proc. 6th Int. World Wide Web Conf., Santa Clara, CA (April 1997).
[24] J. Xu, W. Lee, Q. Hu and D.L. Lee, An optimal cache replacement policy for wireless data dissemination under cache consistency, in: Proc. 30th Int. Conf. on Parallel Processing (ICPP'01) (Sep 2001) pp. 267–277.
[25] L. Yin, G. Cao, C. Das and A. Ashraf, Power-aware prefetch in mobile environments, in: IEEE International Conference on Distributed Computing Systems (ICDCS) (July 2002) pp. 571–578.
[26] N.E. Young, The k-server dual and loose competitiveness for paging, Algorithmica 11(6) (1994) 525–541.


Huaping Shen received his M.S. and B.S. degrees in computer science from Fudan University, China, in 2001 and 1998, respectively. He is currently a Ph.D. student in the Department of Computer Science and Engineering at the University of Texas at Arlington. His research interests include data management in mobile networks, mobile computing, peer-to-peer networks, and pervasive computing. E-mail: [email protected]


Mohan Kumar is an Associate Professor in Computer Science and Engineering at the University of Texas at Arlington. His current research interests are in pervasive computing, wireless networks and mobility, active networks, mobile agents, and distributed computing. Recently, he has developed or co-developed algorithms for active-network-based routing and multicasting in wireless networks and for caching and prefetching in mobile distributed computing. He has published over 90 articles in refereed journals and conference proceedings and has supervised Masters and doctoral theses in the areas of pervasive computing, caching/prefetching, active networks, wireless networks and mobility, and scheduling in distributed systems. Kumar is on the editorial board of The Computer Journal and has guest edited special issues of several leading international journals, including MONET, WINET and the IEEE Transactions on Computers. He is a co-founder of the IEEE International Conference on pervasive computing and



communications (PerCom)—served as the program chair for PerCom 2003, and is the vice general chair for PerCom 2004. He has also served in the technical program committees of numerous international conferences/workshops. He is a senior member of the IEEE. Mohan Kumar obtained his PhD (1992) and MTech (1985) degrees from the Indian Institute of Science and the BE (1982) from Bangalore University in India. Prior to joining The University of Texas at Arlington in 2001, he held faculty positions at the Curtin University of Technology, Perth, Australia (1992–2000), The Indian Institute of Science (1986-1992), and Bangalore University (1985–1986). E-mail: [email protected]


Dr. Sajal K. Das is currently a Professor of Computer Science and Engineering and also the Founding Director of the Center for Research in Wireless Mobility and Networking (CReWMaN) at the University of Texas at Arlington (UTA). Prior to 1999, he was a professor of Computer Science at the University of North Texas (UNT), Denton where he founded the Center for Research in Wireless Computing (CReW) in 1997, and also served as the Director of the Center for Research in Parallel and Distributed Computing (CRPDC) during 1995–97. Dr. Das is a recipient of the UNT Student Association’s Honor Professor Award in 1991 and 1997 for best teaching and scholarly research; UNT’s Developing Scholars Award in 1996 for outstanding research; UTA’s Outstanding Faculty Research Award in Computer Science in 2001 and 2003; and the UTA College of Engineering Research Excellence Award in 2003. An internationally-known computer scientist, he has visited numerous universities, research organizations, government and industry labs worldwide for collaborative research and invited seminar talks. He is also frequently invited as a keynote speaker at international conferences and symposia. Dr. Das’ current research interests include resource and mobility management in wireless networks, mobile and pervasive computing, wireless multimedia and QoS provisioning, sensor networks, mobile internet architectures and protocols, parallel processing, grid computing, performance modeling

and simulation. He has published over 250 research papers in these areas, directed numerous industry and government funded projects, and holds four US patents in wireless mobile networks. He received Best Paper Awards at the 5th Annual ACM International Conference on Mobile Computing and Networking (MobiCom'99), the 16th International Conference on Information Networking (ICOIN-16), the 3rd ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM 2000), and the 11th ACM/IEEE International Workshop on Parallel and Distributed Simulation (PADS'97). Dr. Das serves on the editorial boards of IEEE Transactions on Mobile Computing, ACM/Kluwer Wireless Networks, Parallel Processing Letters, and Journal of Parallel Algorithms and Applications. He served as General Chair of IEEE PerCom 2004, MASCOTS'02 and ACM WoWMoM 2000–02; General Vice Chair of IEEE PerCom 2003, ACM MobiCom 2000 and IEEE HiPC 2000–01; Program Chair of IWDC 2002 and WoWMoM 1998–99; TPC Vice Chair of ICPADS 2002; and TPC member of numerous IEEE and ACM conferences. He is Vice Chair of the IEEE TCPP and TCCC Executive Committees and serves on the advisory boards of several cutting-edge companies. Dr. Sajal K. Das received his B.S. degree in 1983 from Calcutta University, M.S. degree in 1984 from the Indian Institute of Science, Bangalore, and Ph.D. degree in 1988 from the University of Central Florida, Orlando, all in Computer Science. E-mail: [email protected]


Zhijun Wang received the M.S. degree in Electrical Engineering from Pennsylvania State University, University Park, PA, in 2001. He is working toward the Ph.D. degree in the Computer Science and Engineering Department at the University of Texas at Arlington. His current research interests include data management in mobile and peer-to-peer networks, mobile computing, and network processors. E-mail: [email protected]

