
Message-Efficient Location Prediction for Mobile Objects in Wireless Sensor Networks Using a Maximum Likelihood Technique

Bing-Hong Liu, Min-Lun Chen, and Ming-Jer Tsai

Abstract—In a tracking system, a better prediction model can significantly reduce power consumption in a wireless sensor network because fewer redundant sensors will be activated to keep monitoring the object. The Gauss-Markov mobility model is one of the best mobility models to describe object trajectory because it captures the correlation of object velocity in time. Traditionally, the Gauss-Markov parameters are estimated using an autocorrelation technique or a recursive least square estimation technique; either of these techniques, however, requires a large amount of historical movement information of the mobile object, which is not suitable for tracking objects in a wireless sensor network because it demands a considerable amount of message communication overhead between wireless sensors, which are usually battery-powered. In this paper, we develop a Gauss-Markov parameter estimator for wireless sensor networks (GMPE MLH) using a maximum likelihood technique. The GMPE MLH model estimates the Gauss-Markov parameters with few requirements in terms of message communication overhead. Simulations demonstrate that the GMPE MLH model generates negligible differences between the actual and estimated values of the Gauss-Markov parameters and provides location prediction comparable to that of the Gauss-Markov parameter estimators using an autocorrelation technique or a recursive least square estimation.

Index Terms—Wireless sensor network, Gauss-Markov mobility model, Gauss-Markov parameter estimation, object tracking, message-efficient location prediction.


1 INTRODUCTION

A wireless sensor network is composed of multiple wireless sensors. Each sensor can collect, process, and store environmental information as well as communicate with other sensors via inter-sensor communication. The rapid development of wireless communication and embedded micro-sensing technologies has facilitated the use of wireless sensor networks in our daily lives; the study of wireless sensor networks has become one of the most important areas of research [1], [2], [10], [16], [24], [28], [29], [34], [36], [42]. A wide range of applications exist for wireless sensor networks, including environmental monitoring, battlefield surveillance, health care, nuclear, biological, and chemical (NBC) attack detection, intruder detection, and so on. Another application, and one of the most important areas of research, is object tracking, in which sensors monitor and report the locations of mobile objects [4], [7], [20], [25], [33], [38], [45]. In a wireless sensor network, sensors usually stay in the sleep state to save energy and prolong the network lifetime. The tracking system in a wireless sensor network usually includes three components: 1) a monitoring mechanism, 2) a prediction model, and 3) a recovery mechanism [14], [35], [39]. A monitoring mechanism activates selected sensors to monitor and collect the location information of the mobile object using acoustic signals [7], [8], [27] or images of objects [11], [15], [37]. Once the object moves away from the activated sensors, the primary sensor among the activated sensors uses a prediction model to predict the next location of the object and activates the appropriate sensors to continue monitoring the object. One of the activated sensors is notified that it has been assigned to be the next primary sensor. If the prediction fails to track the object, the recovery mechanism activates additional sensors in order to re-capture the lost object. Therefore, a better prediction model can significantly reduce power consumption because fewer redundant sensors will be activated.

Many methods for predicting object trajectory have been proposed. The methods in [22], [23] predict object trajectory using Kalman filters. In [19], [44], extended Kalman filters are proposed because Kalman filters have difficulty processing non-linear variations in non-trivial systems. In [12], [41], sequential Monte Carlo filters are adopted because the use of extended Kalman filters may lead to divergence due to the non-linear nature of the system. All of these filters, however, require storage of many parameters, which are updated by the measured location, velocity, and acceleration of the object, in order to predict the next location of the object. Therefore, these filters are not suitable for tracking objects in a wireless sensor network because multiple parameters (messages) must be transmitted between the primary sensors, placing a heavy power-consumption burden on wireless sensors, which are usually battery-powered.

• The authors are with the Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan 30013, ROC.
• E-mail: [email protected], [email protected], [email protected].


TABLE 1: Summary of Notations

µ: A Gauss-Markov parameter used to denote the mean velocity as t → ∞.
σ: A Gauss-Markov parameter used to denote the standard deviation of velocity as t → ∞.
α: A Gauss-Markov parameter used to vary the randomness of the Gauss-Markov equation.
µ̂_t: The value of µ estimated at time slot t.
σ̂_t: The value of σ estimated at time slot t.
α̂_t: The value of α estimated at time slot t.
v_t: The velocity of a mobile object at time slot t.
V_t: The random variable of v_t.
V_{t−1}: The random variable of v_{t−1}.
A: The random variable of α.
N(µ, σ²): A normal distribution having a mean equal to µ and a standard deviation equal to σ.
α̃_t: The most likely value of α at time slot t.
ᾱ̃_t: The mean of α̃_1, α̃_2, ..., and α̃_t.
α̃_t(v, x): The evaluated value of α̃_t, given that V_{t−1} = v and X_{t−1} = x.
ᾱ̃: The mean of α̃_t(v, x) for all possible values of v and x.
f_V(v): The probability density function of random variable V.
L_{V_t}(α): The likelihood function of α for sample v_t.

Some methods for predicting object trajectory produce little message communication overhead in a wireless sensor network. The instant prediction model [21], [39], [40] and the average prediction model [14], [26], [39] predict the subsequent velocity of the object using the current velocity and the mean of previous velocity measurements, respectively. The exponential average prediction model [39] predicts the subsequent velocity of the object using the current velocity and the last estimated velocity. Although the instant, average, and exponential average prediction models have little or no need for message transmission in a wireless sensor network, they do not predict object trajectory well in a more complicated mobility model, such as the Gauss-Markov mobility model [17]. The Gauss-Markov mobility model is one of the best mobility models to describe object trajectory because it captures the correlation of object velocity in time. Additionally, the Gauss-Markov mobility model, using different Gauss-Markov parameters, can duplicate the object mobility patterns generated by other popular mobility models, such as the random walk, the fluid flow, and the random waypoint mobility models [13], [17], [18], [43]. Estimation of the Gauss-Markov parameters is therefore critical to correctly predicting object trajectory. The Gauss-Markov parameters were estimated via an autocorrelation technique in [17], [18], [43], and via a recursive least square estimation in [13]. Since these Gauss-Markov parameter estimators require a large amount of historical movement information of mobile objects, they are not suitable for estimating the Gauss-Markov parameters of mobile objects in wireless sensor networks: they demand a considerable amount of message communication overhead, and wireless sensor networks are power-sensitive. To date, to the best of our knowledge, no existing methods can accurately estimate the Gauss-Markov parameters with few requirements in terms of message communication overhead in a wireless sensor network, thereby providing the motivation of this paper.

The remainder of this paper is organized as follows. Related works are introduced in Section 2. In Section 3, the GMPE MLH model is proposed. In Section 4, theoretical analysis for the GMPE MLH model is provided. Section 5 gives numerical results. Finally, we conclude this paper with a discussion of future research in Section 6.

2 RELATED WORKS

We demonstrate two Gauss-Markov parameter estimators: GMPE ACR [17], [18] and GMPE RLSE [13]. GMPE ACR and GMPE RLSE use the autocorrelation and recursive least square estimation techniques to estimate the Gauss-Markov parameters, respectively. TABLE 1 summarizes the notations used in this paper.

2.1 The GMPE ACR Model

In the GMPE ACR model, n Gauss-Markov equations are used to describe the movement of an object in n-dimensional space. In each dimension, the velocity of a mobile object at time slot t, v_t, is modeled by the following Gauss-Markov equation:

v_t = α v_{t−1} + (1 − α) µ + √(1 − α²) X_{t−1},   (1)

where X_{t−1} is the (t−1)-th random variable chosen from a normal distribution having a mean equal to zero and a standard deviation equal to σ, α (0 ≤ α ≤ 1) denotes the parameter used to vary the randomness of Eq. 1, µ denotes the parameter used to represent the mean velocity as t → ∞, and σ denotes the parameter used to represent the standard deviation of velocity as t → ∞. Additionally, µ, σ, and α are called Gauss-Markov parameters. A normal distribution having a mean equal to µ and a standard deviation equal to σ is denoted by N(µ, σ²).

In each dimension, given the previous w_est samples of the device velocity v'_1, v'_2, ..., v'_{w_est}, the estimated values of µ, σ, and α at time slot t, denoted by µ̂_t, σ̂_t, and α̂_t, respectively, are calculated by the following equations:

µ̂_t = (1 / w_est) Σ_{i=1}^{w_est} v'_i,   (2)

σ̂_t² = (1 / (w_est − 1)) Σ_{i=1}^{w_est} (v'_i − µ̂_t)²,   (3)

and

α̂_t = 1, if σ̂_t ≈ 0; max{0, σ̂'_t² / σ̂_t²}, otherwise,   (4)

where σ̂'_t² = (1 / (w_est − 1)) Σ_{i=1}^{w_est − 1} (v'_i − µ̂_t)(v'_{i+1} − µ̂_t).
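As a concrete illustration of Eqs. 1-4, the following Python sketch (helper names are ours, not from the paper) generates velocities from the Gauss-Markov equation and recovers the parameters with the autocorrelation estimates; it is a minimal sketch, assuming one dimension and a window containing all samples.

```python
import math
import random

def simulate_gauss_markov(mu, sigma, alpha, steps, v0=0.0, seed=1):
    """Generate velocities from Eq. 1:
    v_t = a*v_{t-1} + (1-a)*mu + sqrt(1-a^2)*X_{t-1}, X ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    v, out = v0, []
    for _ in range(steps):
        x = rng.gauss(0.0, sigma)
        v = alpha * v + (1.0 - alpha) * mu + math.sqrt(1.0 - alpha ** 2) * x
        out.append(v)
    return out

def gmpe_acr(samples):
    """Estimate (mu, sigma, alpha) from a window of velocities via Eqs. 2-4."""
    w = len(samples)
    mu_hat = sum(samples) / w                                    # Eq. 2
    var_hat = sum((v - mu_hat) ** 2 for v in samples) / (w - 1)  # Eq. 3
    if var_hat < 1e-12:          # sigma_hat ~ 0 case of Eq. 4
        return mu_hat, 0.0, 1.0
    # Lag-1 autocovariance divided by the variance, clamped at 0 (Eq. 4).
    acov = sum((samples[i] - mu_hat) * (samples[i + 1] - mu_hat)
               for i in range(w - 1)) / (w - 1)
    alpha_hat = max(0.0, acov / var_hat)
    return mu_hat, math.sqrt(var_hat), alpha_hat
```

With a long enough window, the estimates approach the true (µ, σ, α), since the stationary mean, standard deviation, and lag-1 autocorrelation of the process in Eq. 1 are exactly µ, σ, and α.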


In each dimension, the predicted location of the mobile device at time slot t + n, x̂_{t+n}, is calculated by the following equation:

x̂_{t+n} = x_t + ((1 − α̂_t^n) / (1 − α̂_t)) v_t + (n − (1 − α̂_t^n) / (1 − α̂_t)) µ̂_t,   (5)

where x_t denotes the actual location of the mobile device at time slot t.

2.2 The GMPE RLSE Model

In the GMPE RLSE model, the velocity and the moving direction with respect to the positive x-axis are used to describe the movement of an object in 2-dimensional space. The velocity and the moving direction of a mobile object at time slot t, denoted by v_t and θ_t, respectively, are modeled by the Gauss-Markov equation shown in Eq. 1. Given the previous w_est samples of the device velocity v'_1, v'_2, ..., v'_{w_est}, the estimated value of µ at time slot t, µ̂_t, is calculated by Eq. 2. The estimated value of α at time slot t, α̂_t, is calculated by the following recursive least square estimation:

γ̂_{w_est} = γ̂_{w_est−1} − K_{w_est} (H_{w_est} γ̂_{w_est−1} − y_{w_est}),   (6)

where γ̂_{w_est} = [α̂_t, 1 − α̂_t]^T, y_k = v'_k, H_k = [v'_{k−1}, µ̂_t], K_k = P_k H_k^T, P_k = (1/λ) (P_{k−1} − (P_{k−1} H_k^T H_k P_{k−1}) / (λ + H_k P_{k−1} H_k^T)), P_0 is an identity matrix, and λ is a tunable parameter. The estimated velocity of the mobile object at time slot t + 1, v̂_{t+1}, is calculated by the following equation:

v̂_{t+1} = α̂_t v_t + (1 − α̂_t) µ̂_t.   (7)

The estimated moving direction of the mobile object at time slot t + 1, θ̂_{t+1}, can be calculated in a manner analogous to that of v̂_{t+1}. The predicted location of the mobile object at time slot t + 1, (x̂_{t+1}, ŷ_{t+1}), is calculated by the following equations:

x̂_{t+1} = x_t + v̂_{t+1} cos θ̂_{t+1}   (8)

and

ŷ_{t+1} = y_t + v̂_{t+1} sin θ̂_{t+1},   (9)

where (x_t, y_t) denotes the actual location of the mobile object at time slot t.

3 THE GMPE MLH MODEL

Our system model and assumptions are first demonstrated. Then, we describe how the Gauss-Markov parameters µ, σ, and α are estimated in the GMPE MLH model. Finally, the predicted location of a mobile object using the estimated values of µ, σ, and α is given.

Fig. 1: The GMPE MLH model.

3.1 System Model and Assumptions

It is assumed that each sensor has a region of detection and is capable of measuring the velocity vectors of the target. In our system model, once an object moves into the sensing range of the wireless sensor network, a monitoring mechanism activates sensors to monitor and collect the location information of the object and selects one of the activated sensors to be the primary sensor. Once the object moves away from the activated sensors, the primary sensor uses the GMPE MLH model to predict the next location of the object, activates the appropriate sensors to continue monitoring the object, and designates the next primary sensor among the activated sensors.

In the Gauss-Markov mobility model, when an object moves, the future location is expected to be accurately predicted by the estimation of its Gauss-Markov parameters, µ, σ, and α, in n dimensions. Here, we only consider the estimation of µ, σ, and α in one of the n dimensions due to the similarities of the calculations. Fig. 1 illustrates the GMPE MLH model in one dimension. The primary sensor uses the parameter estimator to evaluate µ̂_t, σ̂_t, α̂_t, and ᾱ̃_t after it measures v_t and receives µ̂_{t−1}, σ̂_{t−1}, ᾱ̃_{t−1}, v_{t−1}, and t−1 from the previous sensor, where µ̂_t, σ̂_t, and α̂_t denote the estimated values of µ, σ, and α at time slot t, respectively, and ᾱ̃_t denotes the mean of α̃_1, α̃_2, ..., α̃_t, where α̃_t denotes the most likely value of α at time slot t, as discussed later. For each dimension, 4 messages of µ̂_t, σ̂_t, ᾱ̃_t, and v_t must be transmitted between the primary sensors. Therefore, a total of 9 messages are required to be transmitted between the primary sensors in 2-dimensional space.

3.2 Estimation of µ and σ

In the GMPE MLH model, the following recurrence exists for µ̂_t:

µ̂_1 = v_1;
µ̂_t = ((t − 1) / t) µ̂_{t−1} + (1 / t) v_t, if t ≥ 2;   (10)

and the following recurrence exists for σ̂_t:

σ̂_1² = 0;
σ̂_2² = ((v_1 − µ̂_2)² + (v_2 − µ̂_2)²) / 2;   (11)
σ̂_t² = ((t − 1) / t) σ̂_{t−1}² + (1 / (t − 1)) (v_t − µ̂_t)², if t ≥ 3.

In the GMPE MLH model, once the t-th primary sensor measures v_t and receives v_{t−1}, µ̂_{t−1}, and σ̂_{t−1} from the previous sensor, µ̂_t is first evaluated according to Eq. 10, and subsequently, σ̂_t is evaluated according to Eq. 11.
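The recurrences above can be sketched as a constant-state online estimator in Python (the function name is ours; a sketch under the assumption that one velocity sample arrives per time slot). Only the current t, µ̂_t, σ̂_t², and v_t need to be handed to the next primary sensor, which is what keeps the scheme message-efficient.

```python
def online_mu_sigma(velocities):
    """Track (mu_hat_t, sigma_hat_t^2) per time slot via Eqs. 10-11.

    Uses O(1) state per step; returns the estimate history so each
    slot's values can be inspected.
    """
    mu_hat, var_hat = 0.0, 0.0
    history = []
    for t, v in enumerate(velocities, start=1):
        if t == 1:
            mu_hat, var_hat = v, 0.0                     # base cases
        else:
            mu_hat = ((t - 1) / t) * mu_hat + v / t      # Eq. 10
            # Eq. 11 (the general recurrence also reproduces the t = 2 case).
            var_hat = ((t - 1) / t) * var_hat + (v - mu_hat) ** 2 / (t - 1)
        history.append((mu_hat, var_hat))
    return history
```

A useful sanity check is that the recurrence reproduces, at every t, the batch mean and the divide-by-t variance around that mean, so no historical velocities ever need to be retransmitted.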


Fig. 2: A plot of ᾱ̃ versus α (real curve and regression curve).

3.3 Estimation of α

We evaluate ᾱ̃_t using a maximum likelihood technique. Subsequently, we calculate α̂_t by showing the relationship between ᾱ̃ and α.

B.1 Evaluation of ᾱ̃_t

The following recurrence exists for ᾱ̃_t:

ᾱ̃_t = (1 / t) α̃_t + ((t − 1) / t) ᾱ̃_{t−1}.   (12)

This leads us to calculate α̃_t, the α value having the maximum probability that the velocity of the mobile object is changed from v_{t−1} to v_t at the time slot transition from t−1 to t, given v_{t−1}, v_t, µ, and σ. Our idea is to evaluate α̃_t using a maximum likelihood estimation for α. Let A denote the random variable of α. Assume v_t is a random sample of a random variable V_t. Let L_{V_t}(α) be the likelihood function of α for sample v_t. It follows that α̃_t, the maximum likelihood estimate for α, is the α (0 ≤ α ≤ 1) value which maximizes L_{V_t}(α). Given v_{t−1}, v_t, µ, and σ, L_{V_t}(α) is calculated as follows: substitution into Eq. 1 yields V_t = b + a X_{t−1}, where a = √(1 − α²) and b = α v_{t−1} + (1 − α) µ. Because X_{t−1} ∼ N(0, σ²), we have V_t ∼ N(b, a²σ²). Therefore, the conditional probability density function of V_t given A = α is f_{V_t|A}(v|α) = (1 / (aσ√(2π))) e^{−(v−b)² / (2a²σ²)}. Thus, L_{V_t}(α) is established in Eq. 13:

L_{V_t}(α) = f_{A|V_t}(α|v_t) = (1 / (aσ√(2π))) e^{−(v_t−b)² / (2a²σ²)}.   (13)

Here, α̃_t is calculated by a simple method as follows: the interval [0, 1] is split into a finite number of subintervals [α_0, α_1], [α_1, α_2], ..., [α_{m−1}, α_m] with α_0 = 0 < α_1 < ... < α_m = 1. Subsequently, α̃_t is set to (α_i + α_{i+1}) / 2 if ∫_{α_i}^{α_{i+1}} L_{V_t}(α) dα ≥ ∫_{α_j}^{α_{j+1}} L_{V_t}(α) dα for 0 ≤ j < m.

In the GMPE MLH model, once the t-th primary sensor has v_{t−1}, v_t, µ̂_t, σ̂_t, and ᾱ̃_{t−1}, the sensor first evaluates α̃_t as the α (0 ≤ α ≤ 1) value which maximizes

(1 / (√(1 − α²) σ̂_t √(2π))) e^{−(v_t − α v_{t−1} − (1 − α) µ̂_t)² / (2(1 − α²) σ̂_t²)},

and subsequently evaluates ᾱ̃_t according to Eq. 12.

Let V_{t−1} denote the random variable of v_{t−1}, f_{V_{t−1},X_{t−1}}(v, x) denote the joint probability density function of V_{t−1} and X_{t−1}, and α̃_t(v, x) denote the α (0 ≤ α ≤ 1) value which maximizes L_{V_t}(α) given that V_{t−1} = v and X_{t−1} = x. Then ᾱ̃, the mean of α̃_t(v, x) for all possible values of v and x, is equal to ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{V_{t−1},X_{t−1}}(v, x) α̃_t(v, x) dx dv. Theorem 1, in Section 4.1, shows that the following equation holds for ᾱ̃ of a mobile object having the Gauss-Markov parameter α = α_1:

ᾱ̃ = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (1 / 2π) e^{−(v² + x²) / 2} α̃_t(v, x) dx dv,   (14)

where α̃_t(v, x) is equal to the α (0 ≤ α ≤ 1) value which maximizes

(1 / √(2π(1 − α²))) e^{−(α_1 v + √(1 − α_1²) x − α v)² / (2(1 − α²))}.

B.2 Evaluation of α̂_t

We evaluate the relationship between ᾱ̃ and α using a regression method. First, ᾱ̃ is evaluated as α_1 (= α) varies from 0 to 1 in increments of 0.0001, as illustrated in Fig. 2. Let the regression model function be

G(x) = (0.2 / θ_1) ln(1 / (θ_2 x) − 1 / θ_2),

where θ_1 and θ_2 are model parameters. The model function with θ_1 = 53.151 and θ_2 = −0.43403 best fits the curve plotted by the samples with α_1 = 0, 0.0001, 0.0002, ..., 0.005, and the model function with θ_1 = 0.37492 and θ_2 = −0.55069 best fits the curve plotted by the samples with α_1 = 0.005, 0.0051, 0.0052, ..., 1. A function of the regression curve which best fits the plot in Fig. 2 is derived, as shown in the following equation:

α = α_1 =
  0, if ᾱ̃ ≤ 0.35338;
  (0.2 / 53.151) ln(1 / (−0.43403 ᾱ̃) − 1 / (−0.43403)), if 0.35338 < ᾱ̃ ≤ 0.44602;
  (0.2 / 0.37492) ln(1 / (−0.55069 ᾱ̃) − 1 / (−0.55069)), if 0.44602 < ᾱ̃ ≤ 1;
  1, if ᾱ̃ > 1.   (15)

Therefore, in the GMPE MLH model, once the t-th primary sensor has ᾱ̃_t, the sensor evaluates α̂_t by Eq. 16:

α̂_t =
  0, if ᾱ̃_t ≤ 0.35338;
  (0.2 / 53.151) ln(1 / (−0.43403 ᾱ̃_t) − 1 / (−0.43403)), if 0.35338 < ᾱ̃_t ≤ 0.44602;
  (0.2 / 0.37492) ln(1 / (−0.55069 ᾱ̃_t) − 1 / (−0.55069)), if 0.44602 < ᾱ̃_t ≤ 1;
  1, if ᾱ̃_t > 1.   (16)

3.4 Location Prediction

We calculate the predicted location of the mobile object at time slot t + 1, x̂_{t+1}, as x_t + E[v̂_{t+1}], where E[v̂_{t+1}] denotes the expected value of v̂_{t+1}. Because v̂_{t+1} = α̂_t v_t + (1 − α̂_t) µ̂_t + √(1 − α̂_t²) X_t by Eq. 1 and E[X_t] = 0 (because X_t ∼ N(0, σ²)), x̂_{t+1} can be calculated by the following equation:

x̂_{t+1} = x_t + α̂_t v_t + (1 − α̂_t) µ̂_t.   (17)
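To make the per-slot procedure concrete, the sketch below (helper names are ours) maximizes the likelihood of Eq. 13, with a plain grid search over midpoints standing in for the subinterval-integral method of Section 3.3, and forms the one-step prediction of Eq. 17. The mapping of Eq. 16 from ᾱ̃_t to α̂_t is omitted here, so `predict_next` simply takes whatever α̂_t the sensor currently holds.

```python
import math

def likelihood(alpha, v_t, v_prev, mu_hat, sigma_hat):
    """L_{V_t}(alpha) from Eq. 13 with (mu, sigma) replaced by estimates."""
    a2 = 1.0 - alpha * alpha
    if a2 <= 0.0 or sigma_hat <= 0.0:
        return 0.0
    b = alpha * v_prev + (1.0 - alpha) * mu_hat
    return math.exp(-(v_t - b) ** 2 / (2.0 * a2 * sigma_hat ** 2)) / (
        math.sqrt(a2) * sigma_hat * math.sqrt(2.0 * math.pi))

def most_likely_alpha(v_t, v_prev, mu_hat, sigma_hat, m=1000):
    """Grid-search stand-in for the subinterval method: return the
    midpoint of [0, 1] subinterval where the likelihood is largest."""
    best, best_val = 0.0, -1.0
    for i in range(m):
        mid = (i + 0.5) / m
        val = likelihood(mid, v_t, v_prev, mu_hat, sigma_hat)
        if val > best_val:
            best, best_val = mid, val
    return best

def predict_next(x_t, v_t, mu_hat, alpha_hat):
    """One-step location prediction, Eq. 17."""
    return x_t + alpha_hat * v_t + (1.0 - alpha_hat) * mu_hat
```

The grid search keeps the per-slot cost bounded and needs only the quantities the previous primary sensor already transmitted, in line with the message-efficiency goal of the model.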


Fig. 3: The differences between the actual and estimated values of parameters (a) µ, (b) σ, and (c) α for the GMPE MLH model.

4 ANALYSIS OF THE GMPE MLH MODEL

¯ described in Eq. 14 is first shown The evaluation of α ˜ to be correct. Subsequently, the convergence rates of the estimation of µ, σ, and α are studied. Finally, the accuracy of the estimation of µ, σ, and α is provided. In the paper, all lemmas and theorems are proved in the appendix. ¯ 4.1 Correctness of Evaluation of α ˜ Lemma 1 demonstrates the evaluation of fVt−1 (v), fXt−1 (x), and α ˜ t (v, x), where fVt−1 (v) and fXt−1 (x) denote the probability density functions of Vt−1 and Xt−1 , respectively. Lemma 1 is necessary for the proof of ¯ Lemma 2, in which it is shown that α ˜ is invariant with respect to µ and σ of the mobile object. Then, with the ¯ help of Lemma 2, α ˜ of the mobile object having the Gauss-Markov parameters µ, σ, and α = α1 can be evaluated by setting µ = 0 and σ = 1, as concluded in Theorem 1. Lemma 1: Let O1 be a mobile object having the Gauss-Markov parameters µ = µ1 , σ = σ1 , and α = α1 . For object O1 , fVt−1 (v) = 2 − x2 2σ1

√ 1 e 2πσ1

fXt−1 (x) = the α (0 ≤( α √

− 1 e 2π(1−α2 )σ1



− √ 1 e 2πσ1

(v−µ1 )2 2 2σ1

,

, and α ˜ t (v, x) is equal to 1) value which )maximizes √ 2 2

α1 v+(1−α1 )µ1 +

1−α1 x−αv−(1−α)µ1

2 2(1−α2 )σ1

.

Lemma 2: Let O1 and O2 be two mobile objects having the Gauss-Markov parameters µ = µ1 , σ = σ1 , and α = α1 and the Gauss-Markov parameters µ = µ2 , σ = σ2 , ¯ and α = α2 , respectively. If α1 = α2 , α ˜ of object O1 is ¯˜ of object O2 . equal to α Theorem 1: Let O1 be a mobile object having the ¯ Gauss-Markov parameter α = α1 . Then, α ˜ of object O1 is ∫ ∞ ∫ ∞ 1 − v2 +x2 2 equal to −∞ −∞ 2π e α ˜ t (v, x)dxdv, where α ˜ t (v, x) is equal to the α (0 ≤ α ≤ 1) value which maximizes )2 ( √ 2 √

− 1 e 2π(1−α2 )

α1 v+

1−α1 x−αv 2(1−α2 )

.

4.2 Convergence Rates of Estimation of µ, σ, and α

Theorem 2 shows that the convergence rates of µ̂_t and σ̂_t are 1/√n, where n denotes the number of samples, using an argument similar to that described in [32]. In addition, because α̂_t is evaluated from ᾱ̃_t by Eq. 16, we demonstrate that the convergence rate of ᾱ̃_t is 1/√n in Theorem 3. Let W_i denote a random variable α̃_t(V_{t−1}, X_{t−1}). To prove Theorem 3, it is sufficient to show that (1/n) Σ_{i=1}^n W_i ∼ N(ᾱ̃, (σ/√n)²) as n → ∞. Let X_1, X_2, ..., X_n denote a sequence of specific random variables, and let F_n(x) and Φ(x) be the cumulative distribution functions of the random variable Z = (T_n − nµ) / (σ√n) and of N(0, 1), respectively, where T_n = Σ_{i=1}^n X_i, and µ and σ² denote the mean and variance of X_1, respectively. Lemma 3, as proved in [32], states that the least upper bound of the absolute difference of F_n(x) and Φ(x) for x ∈ R is bounded by O(1/√n), implying that the distribution of T_n = Σ_{i=1}^n X_i approximates N(nµ, (σ√n)²) on the order of 1/√n. Because W_1, W_2, ..., W_n are a sequence of random variables as described in Lemma 3 and E[W_1] = ᾱ̃, Σ_{i=1}^n W_i approximates N(nᾱ̃, (σ√n)²) on the order of 1/√n. This implies that (1/n) Σ_{i=1}^n W_i ∼ N(ᾱ̃, (σ/√n)²) as n → ∞, and hence Theorem 3 follows.

Theorem 2: The convergence rates of µ̂_t and σ̂_t are 1/√n, where n denotes the number of samples.

Lemma 3: Let {X_n} be a sequence of independent and identically-distributed (i.i.d.) random variables with E[X_1] = µ and variance of X_1 equal to σ², and suppose that E[|X_1 − µ|^{2+δ}] = ν_{2+δ} < ∞ for some 0 < δ ≤ 1. Also let T_n = Σ_{i=1}^n X_i and F_n(x) = Pr{(T_n − nµ) / (σ√n) ≤ x}, x ∈ R. Then there exists a constant C such that

∆_n = sup_{x∈R} |F_n(x) − Φ(x)| ≤ C (ν_{2+δ} / σ^{2+δ}) n^{−δ/2},   (18)

where sup_{x∈R} g(x) denotes the least upper bound or supremum of g(x) for x ∈ R and Φ(x) is the cumulative distribution function of N(0, 1).

Theorem 3: The convergence rate of ᾱ̃_t is 1/√n, where n denotes the number of samples.
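The 1/√n behavior behind Theorem 2 and Lemma 3 can be checked empirically. The Monte Carlo sketch below (an illustration of ours, not an experiment from the paper) measures the root-mean-square error of a sample mean at window sizes n and 4n; quadrupling n should roughly halve the error.

```python
import math
import random

def rms_error_of_mean(n, trials=2000, seed=7):
    """RMS error of the sample mean of n N(0,1) draws, over many trials.

    For i.i.d. samples the true value is exactly 1/sqrt(n), so the ratio
    rms_error_of_mean(n) / rms_error_of_mean(4*n) should be close to 2.
    """
    rng = random.Random(seed)
    se = 0.0
    for _ in range(trials):
        m = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        se += m * m
    return math.sqrt(se / trials)
```

The same experiment applied to the running estimates µ̂_t, σ̂_t, and ᾱ̃_t (which average dependent Gauss-Markov samples rather than i.i.d. draws) would show the same order of decay, only with a different constant.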


Fig. 4: Convergence rates of the estimated values of (a) µ, (b) σ, and (c) α for the GMPE MLH model, the GMPE ACR model, and the GMPE RLSE model in the Gauss-Markov mobility model having parameters of Group A.

4.3 Accuracy of Estimation of µ, σ, and α

Theorem 4 shows that µ̂_t = µ and σ̂_t = σ as t → ∞, and Theorem 5 demonstrates the asymptotic confidence interval for α. As n → ∞, because (1/n) Σ_{i=1}^n W_i ∼ N(ᾱ̃, (σ/√n)²), we have ᾱ̃_t = ᾱ̃, where W_i denotes a random variable α̃_t(V_{t−1}, X_{t−1}). In addition, α̂_t = G(ᾱ̃_t). Therefore, as t → ∞, the difference between α and α̂_t can be obtained by evaluating the difference between the real curve and the regression curve in Fig. 2. Lemma 4, as proved in [31], provides a statistical analysis of the data of these two curves and helps us complete the proof of Theorem 5.

Theorem 4: lim_{t→∞} µ̂_t = µ and lim_{t→∞} σ̂_t = σ.

Lemma 4: Assume that y_i = f(x_i; θ) + ε_i and ŷ_i = f(x_i; θ̂) for given n samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), where θ = (θ_1, θ_2, ..., θ_p)^T is the true value of the parameter vector, the ε_i are independent and identically-distributed (i.i.d.) N(0, σ²), and θ̂ = (θ̂_1, θ̂_2, ..., θ̂_p)^T. The approximate 95% confidence interval for y at x = x_0 is given by

ŷ_0 ± t^k_{0.025} s √(1 + f_0^T (F^T F)^{−1} f_0),   (19)

where ŷ_0 = f(x_0; θ̂), t^k_{0.025} = 1.645 denotes the upper 0.025 critical value of the t-distribution with k degrees of freedom, s = √(Σ_{i=1}^n (y_i − ŷ_i)² / k), f_i = (∂f(x_i; θ)/∂θ_1, ∂f(x_i; θ)/∂θ_2, ..., ∂f(x_i; θ)/∂θ_p)^T for 0 ≤ i ≤ n, and F = (f_1, f_2, ..., f_n)^T.

Theorem 5: The approximate 95% confidence interval for α is within 0.00812 of α̂_t as t → ∞.

5 PERFORMANCE EVALUATION

To evaluate the performance of the GMPE MLH model, the absolute differences between the actual and estimated values of µ, σ, and α for mobile objects in the Gauss-Markov mobility model were investigated. In addition, the convergence rates of the estimated values of µ, σ, and α were studied for the GMPE MLH model, the GMPE ACR model, and the GMPE RLSE model.

Moreover, the root mean square error (RMSE) [44] of the GMPE MLH model, the GMPE ACR model, and the GMPE RLSE model was measured in the Gauss-Markov mobility model. The value of RMSE is used to indicate the differences between the actual and estimated object trajectories. We also measured the RMSE of the GMPE MLH model, the GMPE ACR model, and the GMPE RLSE model in various mobility models. Finally, the GMPE MLH Dynamic model, an extension of the GMPE MLH model, was compared with the GMPE ACR model and the GMPE RLSE model in the dynamic mobility models in terms of RMSE. In our simulations, 1000 mobile objects were generated having parameters randomly chosen from the parameter ranges of the mobility models. The plotted results were obtained by averaging the data of the 1000 mobile objects.

5.1 Accuracy of Estimated Parameters

In the simulation, 1000 objects move in the Gauss-Markov mobility model, having different values of µ, σ, and α randomly chosen from the intervals [−d, d], [0, e], and [0, f], respectively. To study the performance of the GMPE MLH model for groups of objects with different expected mean, variance, and randomness of velocities, we simulate objects having 8 groups of parameters (d, e, f): A) (100, 100, 1), B) (100, 100, 0.5), C) (100, 50, 1), D) (100, 50, 0.5), E) (50, 100, 1), F) (50, 100, 0.5), G) (50, 50, 1), and H) (50, 50, 0.5). As illustrated in Fig. 3, the larger the value of t, the smaller the average values observed for |µ − µ̂_t|, |σ − σ̂_t|, and |α − α̂_t|. That is, the GMPE MLH model provides better estimates of µ, σ, and α as t increases. This observation is reasonable because, as t increases, more information concerning object trajectory is used to estimate the values of µ, σ, and α. The value of d was noted to have a negligible effect on the average values of |µ − µ̂_t|, |σ − σ̂_t|, and |α − α̂_t|. Additionally, it was observed that the smaller the value of e, the smaller the average value of |µ − µ̂_t|. If an object has a smaller expected value of σ (as the value of e is smaller), the expected absolute difference between v_t and µ is smaller, making the absolute difference between µ and µ̂_t smaller. It was also observed that the smaller the value of f, the smaller the average value of |µ − µ̂_t|. If an object has a smaller expected value of α (as the value of f is smaller), according to Eq. 1 the expected absolute difference between v_t and µ is smaller. Moreover, as can be seen in Figs. 3a and 3b, for the 8 groups of parameters, the larger the average value of |µ − µ̂_t|, the larger the average value of |σ − σ̂_t|, because as t → ∞,

|σ̂_t − σ| = √(σ² + |µ̂_t − µ|²) − σ = |µ̂_t − µ|² / (√(σ² + |µ̂_t − µ|²) + σ)

(because σ² = (1/t) Σ_{i=1}^t (v_i − µ)² and µ = (1/t) Σ_{i=1}^t v_i as t → ∞). Nevertheless, it is believed that the smaller the expected value of α, the smaller the expected value of ᾱ̃. Because the slope of the curve in Fig. 2 is less steep for smaller values of ᾱ̃, the absolute difference between the actual and estimated values of α is larger when ᾱ̃_t is smaller. Therefore, the smaller the expected value of α (the smaller the value of f), the larger the average value of |α − α̂_t|, as illustrated in Fig. 3c.

Fig. 5: Measurement of RMSE for the GMPE MLH model, the GMPE ACR model, and the GMPE RLSE model in the Gauss-Markov mobility model. Parameters used in Group A: (a); Group B: (b); Group C: (c); and Group D: (d).

5.2 Convergence Rates of Estimated Parameters

Let ∆|µ − µ̂_t|, ∆|σ − σ̂_t|, and ∆|α − α̂_t| denote ||µ − µ̂_t| − |µ − µ̂_{t+100}||, ||σ − σ̂_t| − |σ − σ̂_{t+100}||, and ||α − α̂_t| − |α − α̂_{t+100}||, respectively, if t < 1000; otherwise, let ∆|µ − µ̂_t|, ∆|σ − σ̂_t|, and ∆|α − α̂_t| be equal to 0. In addition, let conv_tµ, conv_tσ, and conv_tα denote the smallest time slots t such that for all time slots t′ ≥ t, ∆|µ − µ̂_{t′}| ≤ εµ, ∆|σ − σ̂_{t′}| ≤ εσ, and ∆|α − α̂_{t′}| ≤ εα, respectively, where t and t′ are multiples of 100, and εµ, εσ, and εα are given constants. Smaller values of conv_tµ, conv_tσ, and conv_tα indicate greater convergence rates of the estimated values of µ, σ, and α, respectively.
In the simulation, the convergence rates of the estimated values of µ, σ, and α were not investigated in the Gauss-Markov mobility model having parameters of Groups B, C, D, E, F, G, and H, because the differences in the convergence rates among the 8 groups are not significant, as can be seen in Fig. 3. In addition, for the GMPE RLSE model, in the second dimension, µ, σ, and α are randomly chosen, respectively, from the intervals [0, 2π], [0, 2π], and [0, 1]. Moreover, the convergence rates of the estimated values of µ and σ were not studied for the GMPE RLSE model because the method of estimating µ used in the GMPE RLSE model is the same as that in the GMPE ACR model and because the value of σ is not estimated in the GMPE RLSE model.

In Fig. 4, w_est = 10, 100, and ∞ means that 10, 100, and all samples of history information concerning object trajectory are used to estimate the parameters, respectively. It can be seen that the GMPE ACR model had the smallest convergence rates of the estimated values of µ, σ, and α when w_est = 10. This observation is reasonable because the average values of |µ − µ̂_t|, |σ − σ̂_t|, and |α − α̂_t| each fluctuate as t increases, owing to the small amount of information concerning object trajectory used to estimate the values of µ, σ, and α. By contrast, in the GMPE ACR model with w_est = 100, the average values of |µ − µ̂_t|, |σ − σ̂_t|, and |α − α̂_t| each fluctuate only slightly as t increases because a constant and large amount of information concerning object trajectory is used to estimate the values of µ, σ, and α once t ≥ 100. In the GMPE ACR model with w_est = ∞ (which means that all the history information concerning object trajectory is used to estimate the parameters), or in the GMPE MLH model, the larger the value of t, the smaller the average values observed for |µ − µ̂_t|, |σ − σ̂_t|, and |α − α̂_t|, because all information concerning object trajectory is used to estimate the values of µ, σ, and α, resulting in smaller values of conv_tµ, conv_tσ, and conv_tα (that is, greater convergence rates) compared to the GMPE ACR model with w_est = 100. Nevertheless, compared to the GMPE RLSE model, the GMPE ACR model had a greater convergence rate of the estimated value of α in most cases.

5.3 Measurement of RMSE in the Gauss-Markov Mobility Model

Let x_i and x̂_i denote the actual and estimated values of the i-th x coordinate of the object, respectively, and let y_i and ŷ_i denote the actual and estimated values of the i-th y coordinate of the object, respectively. Also, let x̄ and ȳ denote the means of the actual values of the x and y coordinates of the object, respectively. Then, RMSE is


Fig. 6: The Markov chain.

illustrated in Eq. 20.

RMSE = \sqrt{\frac{1}{t} \sum_{i=1}^{t} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]}.   (20)
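Eq. 20 is straightforward to compute from matched trajectories; the function below is a minimal sketch (the list-of-(x, y)-pairs interface is our assumption):

```python
import math

def rmse(actual, predicted):
    """Eq. 20: root-mean-square error over t paired 2-D positions.
    actual / predicted: equal-length sequences of (x, y) tuples."""
    t = len(actual)
    assert t == len(predicted) and t > 0
    total = sum((x - xh) ** 2 + (y - yh) ** 2
                for (x, y), (xh, yh) in zip(actual, predicted))
    return math.sqrt(total / t)
```

A value near 0 indicates that the predicted track stays close to the actual one, matching the interpretation given in the text.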

A value of RMSE near 0 indicates a good fit. In the simulation, the RMSE was not measured in the Gauss-Markov mobility model having the parameters of Groups E, F, G, and H because the value of d has a negligible effect on the average values of |µ − µ̂_t|, |σ − σ̂_t|, and |α − α̂_t|, as discussed previously. In addition, for the GMPE RLSE model, in the second dimension, µ, σ, and α are randomly chosen from the intervals [0, 2π], [0, 2π], and [0, 1] in Group A; [0, 2π], [0, 2π], and [0, 0.5] in Group B; [0, 2π], [0, π], and [0, 1] in Group C; and [0, 2π], [0, π], and [0, 0.5] in Group D. As illustrated in Fig. 5, the larger the value of t, the better the fit in the values of RMSE in the GMPE MLH model. This results from the fact that the GMPE MLH model provides better estimates of the values of µ, σ, and α as t increases, as discussed previously. By contrast, the value of t has only a minor effect on the value of RMSE in the GMPE ACR model and the GMPE RLSE model. In addition, the larger the value of w_est, the better the fit in the values of RMSE for the GMPE ACR model and the GMPE RLSE model; this is because when w_est is larger, more information concerning the object trajectory is used for location prediction. Furthermore, it was observed that the GMPE RLSE model has the worst fit in the values of RMSE. The GMPE MLH model has a better fit in the values of RMSE than the GMPE ACR model when w_est = 10. Though the difference in the values of RMSE between the GMPE MLH model and the GMPE ACR model is negligible when w_est = 100 or ∞, the GMPE ACR model requires more messages to be transmitted between the primary sensors in a 2-dimensional space. In addition, as anticipated, the smaller the value of e (the smaller the expected value of σ), the better the fit in the values of RMSE. Moreover, the larger the value of f (the larger the expected value of α), the better the fit in the values of RMSE, because when the value of α is smaller, according to Eq. 1, v_t has greater randomness (resulting from the larger value of √(1 − α²) X_{t−1}), which makes it more difficult to predict the object trajectory using any of the prediction models.
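Under Eq. 1, the noise term √(1 − α²) X_{t−1} has zero mean, so the conditional mean of the next velocity given the current one is α v_{t−1} + (1 − α)µ; a natural one-step location predictor simply advances the position by that expected velocity. This is an illustrative sketch under a unit time step (the function interface is our assumption, not the paper's implementation):

```python
def predict_next(x, v, mu_hat, alpha_hat):
    """One-step prediction under the Gauss-Markov model (Eq. 1):
    E[v_t | v_{t-1} = v] = alpha*v + (1 - alpha)*mu, since the
    sqrt(1 - alpha^2)*X_{t-1} noise term has zero mean.
    Returns the predicted next (position, velocity) for a unit time step."""
    v_next = alpha_hat * v + (1 - alpha_hat) * mu_hat
    return x + v_next, v_next
```

With α̂ = 1 the object is predicted to keep its current velocity; with α̂ = 0 it reverts immediately to the mean velocity µ̂, consistent with the randomness discussion above.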

5.4 Measurement of RMSE in Various Mobility Models

In this experiment, six mobility models are examined:

1) Random walk mobility model [5]: A mobile object moves from the current location to the next location by randomly choosing its velocity and direction from the intervals [−100 m/s, 100 m/s] and [0, 2π), respectively.

2) Markovian random path mobility model [6]: A mobile object moves from the current location to the next location based on the Markov chain shown in Fig. 6, where the arrows represent transitions between states and the decimal on each arrow indicates the probability of the corresponding transition. In the Markov chain, three states exist for each dimension. The object moves 0 m or 50 m (the value of D) for each state transition.

3) Simple individual mobility model [6]: This model is similar to the Markovian random path mobility model. The difference between the two models is that the simple individual mobility model has a probability of 0.2 of a transition from state 0 to the same state and a probability of 0.4 of a transition from state 0 to state 1 or 2.

4) Random waypoint mobility model [5]: First, a mobile object at location (x, y) randomly chooses a destination (x + ∆x, y + ∆y), where −300 m ≤ ∆x, ∆y ≤ 300 m. Subsequently, the mobile object moves toward its destination at a velocity randomly chosen from the interval [0 m/s, 100 m/s].

5) ETSI vehicular mobility model [3]: A mobile object moves at a constant velocity, 120 km/h, and has a probability of 20% of changing its direction every 20 m. The changes of direction are randomly selected from [−π/4, π/4].

6) Smooth random mobility model [3]: At each time interval, the object has a probability of 4% that its target velocity will change and a probability of 2% that its target direction will change. The target velocity is changed by choosing a value from the interval [0 m/s, 13.9 m/s], in which 0 and 13.9 are each chosen with a probability of 30% and the other values are randomly chosen with a probability of 40%. The target direction is changed by randomly choosing a value from the interval [0, 2π). Once the target velocity changes, the object accelerates or decelerates from its current velocity to the target velocity, where the acceleration or deceleration is randomly chosen from the intervals (0, 2.5] and [−4, 0), respectively. In addition, if the target direction changes, the object achieves this target direction within a number of time intervals randomly chosen from [1, 10].

As illustrated in Fig. 7, the GMPE RLSE model was observed to provide poor prediction in all mobility models except the ETSI vehicular mobility model and the smooth random mobility model. In most mobility models, the GMPE MLH model provides better prediction than the GMPE ACR model when w_est = 10, and comparable prediction when w_est = 100 and ∞. In addition, each of the three prediction models provides a good


Fig. 7: Measurements of RMSE for the GMPE MLH model, the GMPE ACR model, and the GMPE RLSE model in (a) the random walk mobility model, (b) the Markovian random path mobility model, (c) the simple individual mobility model, (d) the random waypoint mobility model, (e) the ETSI vehicular mobility model, and (f) the smooth random mobility model.

performance of RMSE in the ETSI vehicular mobility model and the smooth random mobility model. This observation results from the fact that a mobile object has little opportunity to make a sharp turn in these two mobility models. By contrast, each prediction model provides poor RMSE performance in the random walk mobility model due to the fact that a mobile object has a great opportunity to make a sharp turn.

5.5 Measurement of RMSE in the Dynamic Gauss-Markov Mobility Model

In the simulation, each mobile object changed the Gauss-Markov parameters, which were randomly chosen from the parameter ranges of the mobility models, at k, 1 ≤ k ≤ 10, time slots randomly chosen from 5000 time slots. The GMPE MLH Dynamic model, which is equivalent to the GMPE MLH model except for the added functionality of indicating changes in the Gauss-Markov parameters, was compared with the GMPE ACR model and the GMPE RLSE model.

The GMPE MLH Dynamic model indicates a change in α based on the fact that the distribution of the random variable ᾱ̃_t approximates a normal distribution with mean equal to ᾱ̃, as demonstrated in Theorem 3. In time slot 5i + 5, where i is an integer in the interval [0, 999], the mean of the 5 values of α̃_t in time slots 5i + 1 through 5i + 5 is obtained as one sample datum of ᾱ̃_t. In time slot 25j + 25, where j is an integer in the interval [0, 199], the means and variances of the history samples and the recent samples are evaluated; the history samples comprise all sample data of ᾱ̃_t from the beginning of the prediction model, or from the last time slot in which the prediction model was reset, and the recent samples comprise the last 5 sample data of ᾱ̃_t. Then, if an F-test [9] with a significance level of 5% indicates equality (or inequality) of the variances of the history samples and the recent samples, a t-test [9] (or Welch's t-test [30]) with a significance level of 5% is used. If the t-test or Welch's t-test indicates inequality of the means of the history samples and the recent samples, the GMPE MLH Dynamic model indicates a change in α and resets the prediction model. The GMPE MLH Dynamic model requires 4 messages to indicate a change in α: 1 message is required to obtain one sample datum of ᾱ̃_t in time slot 5i + 5, and 3 messages are required to obtain the variance of the history samples and the mean and variance of the recent samples in time slot 25j + 25. Note that the mean of the history samples is obtained without an additional message because it is equal to ᾱ̃_t.

The GMPE MLH Dynamic model indicates changes in µ and σ based on the fact that the distribution of the random variable V_t is a normal distribution, in a manner analogous to that for indicating a change in α. The history samples comprise all v_t from the beginning of the prediction model, or from the last time slot in which the prediction model was reset, and the recent samples comprise the 25 values of v_t in time slots 25j + 1 through 25j + 25. In time slot 25j + 25, if an F-test (or t-test) with a significance level of 5% indicates inequality of the variances (or means) of the history samples and the recent samples, the GMPE MLH Dynamic model indicates a change in σ (or µ) and resets the prediction model. The GMPE MLH Dynamic model requires 2 messages to obtain the mean and variance of the recent samples. Because no message is required to obtain the mean and variance of the history samples, the GMPE MLH Dynamic model requires 2 messages to indicate changes in µ and σ. To sum up, the GMPE MLH Dynamic model requires a total of 6 messages to indicate changes in the Gauss-Markov parameters in each dimension. In a 2-dimensional space, the GMPE MLH Dynamic model requires 9 messages to estimate the Gauss-Markov parameters, and thus requires a total of 21 messages to be transmitted between the primary sensors.

Fig. 8: Measurement of RMSE for the GMPE MLH Dynamic model, the GMPE ACR model, and the GMPE RLSE model in the dynamic Gauss-Markov mobility model. Parameters used in Group A: (a); Group B: (b); Group C: (c); and Group D: (d).

As illustrated in Fig. 8, in each of the three prediction models, as anticipated, the larger the value of k, the worse the fit in the values of RMSE. Both the GMPE ACR model and the GMPE RLSE model have the worst fit in the values of RMSE when w_est = ∞. This is because when w_est = ∞, all the history information concerning the object trajectory is used for location prediction, resulting in inaccurate estimation of the updated Gauss-Markov parameters. In addition, the GMPE RLSE model has the worst fit in the values of RMSE, which is reasonable. The GMPE MLH Dynamic model has a better fit in the values of RMSE in almost all cases than the GMPE ACR model with w_est = 10. Though the GMPE MLH Dynamic model has a worse fit in the values of RMSE than the GMPE ACR model with w_est = 100, the GMPE MLH Dynamic model requires fewer messages to be transmitted between the primary sensors in a 2-dimensional space. Furthermore, as expected, the smaller the value of e (the smaller the expected value of σ) or the larger the value of f (the larger the expected value of α), the better the fit in the values of RMSE.

5.6 Measurement of RMSE in Various Dynamic Mobility Models

In the simulation, each mobile object changed the velocity; the interval from which the velocity, direction, acceleration, or deceleration was chosen; or the probability of changing state, velocity, or direction at k, 1 ≤ k ≤ 10, time slots randomly chosen from 5000 time slots. In the dynamic random walk mobility model, the velocity interval was the union of the intervals [−v_max m/s, −v_min m/s] and [v_min m/s, v_max m/s], where [v_min m/s, v_max m/s] was a subinterval randomly chosen from the interval [0 m/s, 200 m/s]. In the dynamic random waypoint mobility model and the dynamic smooth random mobility model, the velocity intervals were subintervals randomly chosen from the intervals [0 m/s, 200 m/s] and [0 m/s, 27.8 m/s], respectively. In the dynamic Markovian random path mobility model and the dynamic simple individual mobility model, the value of D and the probability of a transition from state 0, 1, or 2 to the same state were randomly chosen from the intervals [0 m, 100 m] and [0, 1], respectively. In the dynamic random waypoint mobility model, −ℓ m ≤ ∆x, ∆y ≤ ℓ m, where ℓ was randomly chosen from the interval [0, 600]. In the dynamic ETSI vehicular mobility model, the velocity and the probability of changing direction were randomly chosen from the intervals [100 km/h, 140 km/h] and [0, 40%], respectively, and the direction interval was a subinterval randomly chosen from the interval [−π/2, π/2]. In the dynamic smooth random mobility model, the probabilities of changing velocity and direction were randomly chosen from the intervals [0, 8%] and [0, 4%], respectively, and the acceleration and deceleration intervals were subintervals randomly chosen from the intervals (0, 5] and [−8, 0), respectively.

As illustrated in Fig. 9, in all dynamic mobility models, the larger the value of k, the worse the fit in the values of RMSE in each of the three prediction models. In all dynamic mobility models except the dynamic ETSI vehicular mobility model and the dynamic smooth random mobility model, the GMPE RLSE model provides poor prediction, and the GMPE MLH Dynamic model provides comparable prediction to the GMPE ACR model.
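The change-indication procedure of Section 5.5 boils down to two-sample tests between "history" and "recent" samples. The sketch below illustrates only the mean-comparison step with Welch's t statistic, using a fixed large-sample threshold of 1.96 in place of the exact 5% critical value from the t distribution; the threshold, function names, and interface are our simplifications, not the paper's implementation:

```python
import math

def welch_t(history, recent):
    """Welch's t statistic for two samples with (possibly) unequal variances."""
    na, nb = len(history), len(recent)
    ma, mb = sum(history) / na, sum(recent) / nb
    va = sum((x - ma) ** 2 for x in history) / (na - 1)  # unbiased variances
    vb = sum((x - mb) ** 2 for x in recent) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def mean_changed(history, recent, threshold=1.96):
    """Flag a parameter change when the recent-sample mean deviates
    significantly from the history-sample mean (approximate 5% level
    for large samples)."""
    return abs(welch_t(history, recent)) > threshold
```

In the actual model, an F-test first decides between the pooled t-test and Welch's t-test, and a significant difference resets the estimator.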
In both the dynamic ETSI vehicular mobility model and the dynamic smooth random mobility model, each of the three prediction models provides good RMSE performance. More specifically, the GMPE MLH Dynamic model provides worse


Fig. 9: Measurements of RMSE for the GMPE MLH Dynamic model, the GMPE ACR model, and the GMPE RLSE model in (a) the dynamic random walk mobility model, (b) the dynamic Markovian random path mobility model, (c) the dynamic simple individual mobility model, (d) the dynamic random waypoint mobility model, (e) the dynamic ETSI vehicular mobility model, and (f) the dynamic smooth random mobility model.

prediction than either the GMPE ACR model or the GMPE RLSE model in the dynamic ETSI vehicular mobility model and the dynamic smooth random mobility model. This stems from the observation that the history information concerning the object trajectory is useful for location prediction, because a mobile object has little opportunity to make a large change in either velocity or direction in these two dynamic mobility models.

5.7 Comparison of Prediction Models

As shown in TABLE 2, neither the GMPE ACR model nor the GMPE RLSE model provides analytical bounds for the estimation of the Gauss-Markov parameters. In addition, the time complexities of the GMPE MLH model, the GMPE ACR model, and the GMPE RLSE model are O(1), O(w_est), and O(w_est), respectively. The message overhead is ranked by the number of messages required to be transmitted. In a 2-dimensional space, the GMPE MLH model requires 9 messages to be transmitted. Either the GMPE ACR model or the GMPE RLSE model achieves prediction of the object trajectory comparable to the GMPE MLH model only when w_est is at least 100, which requires at least 200 messages to be transmitted. The location prediction is ranked by the simulation results for the measurement of RMSE in the Gauss-Markov mobility model. Because the GMPE MLH model has low time complexity, low message overhead, and good location prediction, it is suitable for location prediction in wireless sensor networks (WSNs), wireless personal communication service (PCS) networks, and mobile ad hoc networks (MANETs).

6 CONCLUSIONS

In this paper, we have proposed a method to predict object trajectory. Since the Gauss-Markov mobility model is one of the best mobility models for describing object trajectory, we developed a Gauss-Markov parameter estimator, GMPE MLH, using a maximum likelihood technique. The GMPE MLH model requires the transmission of only 9 messages between the primary sensors in a 2-dimensional space and generates negligible differences between the actual and estimated values of the Gauss-Markov parameters; therefore, we believe that it is a novel method for predicting object trajectory in a wireless sensor network. In addition, the method can be easily extended to a 3-dimensional space, in which 13 messages are transmitted between the primary sensors. The GMPE MLH model can also be used to predict object trajectory in wireless personal communication service networks and mobile ad hoc networks. Furthermore, an extension of the GMPE MLH model, termed GMPE MLH Dynamic, is proposed to predict object trajectory in the dynamic Gauss-Markov mobility model, in which the object may change its Gauss-Markov parameters over time. Using simulations, we compared the performance of the GMPE MLH model with the Gauss-Markov parameter estimators using an autocorrelation technique (GMPE ACR) and a recursive least square estimation technique (GMPE RLSE). Either the GMPE ACR model or the GMPE RLSE model requires a large amount of


TABLE 2: Comparison of Prediction Models

Performance Parameter                       | GMPE MLH | GMPE ACR | GMPE RLSE
--------------------------------------------|----------|----------|----------
Bound for Gauss-Markov Parameter Estimation | √        | ×        | ×
Time Complexity                             | O(1)     | O(w_est) | O(w_est)
Message Overhead                            | ⋆⋆⋆      | ⋆        | ⋆
Location Prediction                         | ⋆⋆⋆      | ⋆⋆⋆      | ⋆⋆
Usable in PCS networks                      | ⋆⋆⋆      | ⋆⋆⋆      | ⋆⋆
Usable in MANET                             | ⋆⋆⋆      | ⋆⋆⋆      | ⋆⋆
Usable in WSN                               | ⋆⋆⋆      | ⋆        | ⋆

a: √ ⇒ Yes; × ⇒ No.  b: ⋆⋆⋆ ⇒ Best; ⋆ ⇒ Worst.

historical trajectory information to provide good estimation of the Gauss-Markov parameters. Compared to the GMPE ACR model and the GMPE RLSE model, simulations demonstrate that the GMPE MLH model provides comparable prediction of the object trajectory while requiring much less message transmission overhead. Moreover, simulations show that the GMPE MLH Dynamic model can accurately predict object trajectory in the dynamic Gauss-Markov mobility model. Future research includes improving the performance of the GMPE MLH model and the GMPE MLH Dynamic model, and studying the estimation of the occurrence area of the object with a given probability. Another research direction is to study how to predict the trajectory of a group of objects having a mutual relationship.

APPENDIX

Proof of Lemma 1: V_{t−1} ∼ N(µ₁, σ₁²); therefore, f_{V_{t-1}}(v) = \frac{1}{\sqrt{2\pi}\sigma_1} e^{-(v-\mu_1)^2/(2\sigma_1^2)}. X_{t−1} ∼ N(0, σ₁²); therefore, f_{X_{t-1}}(x) = \frac{1}{\sqrt{2\pi}\sigma_1} e^{-x^2/(2\sigma_1^2)}. Given V_{t−1} = v and X_{t−1} = x, v_t = \alpha_1 v + (1-\alpha_1)\mu_1 + \sqrt{1-\alpha_1^2}\, x according to Eq. 1. Thus, according to Eq. 13,

L_{2,V_t}(\alpha) = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_1} \exp\left(-\frac{\left(\alpha_1 v + (1-\alpha_1)\mu_1 + \sqrt{1-\alpha_1^2}\, x - \alpha v - (1-\alpha)\mu_1\right)^2}{2(1-\alpha^2)\sigma_1^2}\right).

This implies that \tilde{\alpha}_t(v, x) is the α (0 ≤ α ≤ 1) value that maximizes L_{2,V_t}(\alpha).

Proof of Lemma 2: Because the random variables V_{t−1} and X_{t−1} are independent, \bar{\tilde{\alpha}} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{V_{t-1}}(v) f_{X_{t-1}}(x)\, \tilde{\alpha}_t(v, x)\, dx\, dv. So, \bar{\tilde{\alpha}} = \lim_{\ell\to\infty} \int_{\mu-\ell\sigma}^{\mu+\ell\sigma}\int_{-\ell\sigma}^{\ell\sigma} f_{V_{t-1}}(v) f_{X_{t-1}}(x)\, \tilde{\alpha}_t(v, x)\, dx\, dv. That is, the value of \bar{\tilde{\alpha}} of object O₁ is \lim_{\ell\to\infty} \int_{\mu_1-\ell\sigma_1}^{\mu_1+\ell\sigma_1}\int_{-\ell\sigma_1}^{\ell\sigma_1} f_{V_{t-1}}(v) f_{X_{t-1}}(x)\, \tilde{\alpha}_t(v, x)\, dx\, dv, and the value of \bar{\tilde{\alpha}} of object O₂ is \lim_{\ell\to\infty} \int_{\mu_2-\ell\sigma_2}^{\mu_2+\ell\sigma_2}\int_{-\ell\sigma_2}^{\ell\sigma_2} f_{V_{t-1}}(v) f_{X_{t-1}}(x)\, \tilde{\alpha}_t(v, x)\, dx\, dv. Let the interval [\mu_1-\ell\sigma_1-\frac{\Delta v_1}{2}, \mu_1+\ell\sigma_1+\frac{\Delta v_1}{2}] (or [-\ell\sigma_1-\frac{\Delta x_1}{2}, \ell\sigma_1+\frac{\Delta x_1}{2}]) be subdivided into m (or n) subintervals of length \Delta v_1 = \frac{(\mu_1+\ell\sigma_1)-(\mu_1-\ell\sigma_1)}{m-1} (or \Delta x_1 = \frac{(\ell\sigma_1)-(-\ell\sigma_1)}{n-1}) such that v_{1,1} = \mu_1-\ell\sigma_1 and v_{1,m} = \mu_1+\ell\sigma_1 (or x_{1,1} = -\ell\sigma_1 and x_{1,n} = \ell\sigma_1), as illustrated in Fig. 10a (or Fig. 10b). Consequently, the value of \bar{\tilde{\alpha}} of object O₁ is \lim_{m\to\infty}\lim_{n\to\infty} \sum_{i=1}^{m}\sum_{j=1}^{n} f_{V_{t-1}}(v_{1,i})\, f_{X_{t-1}}(x_{1,j})\, \tilde{\alpha}_t(v_{1,i}, x_{1,j})\, \Delta x_1 \Delta v_1. Similarly, let the interval [\mu_2-\ell\sigma_2-\frac{\Delta v_2}{2}, \mu_2+\ell\sigma_2+\frac{\Delta v_2}{2}] (or [-\ell\sigma_2-\frac{\Delta x_2}{2}, \ell\sigma_2+\frac{\Delta x_2}{2}]) be subdivided into m (or n) subintervals of length \Delta v_2 = \frac{(\mu_2+\ell\sigma_2)-(\mu_2-\ell\sigma_2)}{m-1} (or \Delta x_2 = \frac{(\ell\sigma_2)-(-\ell\sigma_2)}{n-1}), such that v_{2,1} = \mu_2-\ell\sigma_2 and v_{2,m} = \mu_2+\ell\sigma_2 (or x_{2,1} = -\ell\sigma_2 and x_{2,n} = \ell\sigma_2), as illustrated in Fig. 10c (or Fig. 10d). The value of \bar{\tilde{\alpha}} of object O₂ is then \lim_{m\to\infty}\lim_{n\to\infty} \sum_{i=1}^{m}\sum_{j=1}^{n} f_{V_{t-1}}(v_{2,i})\, f_{X_{t-1}}(x_{2,j})\, \tilde{\alpha}_t(v_{2,i}, x_{2,j})\, \Delta x_2 \Delta v_2. We claim that Eqs. 21 and 22 hold:

f_{V_{t-1}}(v_{1,i})\, f_{X_{t-1}}(x_{1,j})\, \Delta x_1 \Delta v_1 = f_{V_{t-1}}(v_{2,i})\, f_{X_{t-1}}(x_{2,j})\, \Delta x_2 \Delta v_2,   (21)

\tilde{\alpha}_t(v_{1,i}, x_{1,j}) = \tilde{\alpha}_t(v_{2,i}, x_{2,j}), \quad \text{if } \alpha_1 = \alpha_2.   (22)

We first note that n-1 = \frac{(\ell\sigma_1)-(-\ell\sigma_1)}{\Delta x_1} = \frac{(\ell\sigma_2)-(-\ell\sigma_2)}{\Delta x_2} and m-1 = \frac{(\mu_1+\ell\sigma_1)-(\mu_1-\ell\sigma_1)}{\Delta v_1} = \frac{(\mu_2+\ell\sigma_2)-(\mu_2-\ell\sigma_2)}{\Delta v_2}, thereby implying that \frac{\Delta v_1}{\Delta v_2} = \frac{\Delta x_1}{\Delta x_2} = k, where k = \frac{\sigma_1}{\sigma_2}. We also note that v_{1,i} = v_{1,1} + (i-1)\Delta v_1 = \mu_1 - \ell\sigma_1 + (i-1)\Delta v_1. Similarly, v_{2,i} = \mu_2 - \ell\sigma_2 + (i-1)\Delta v_2, x_{1,j} = -\ell\sigma_1 + (j-1)\Delta x_1, and x_{2,j} = -\ell\sigma_2 + (j-1)\Delta x_2. Therefore, we have \frac{(v_{1,i}-\mu_1)^2}{2\sigma_1^2} = \frac{(-\ell\sigma_1+(i-1)\Delta v_1)^2}{2\sigma_1^2} = \frac{(-\ell k\sigma_2+(i-1)k\Delta v_2)^2}{2(k\sigma_2)^2} = \frac{(-\ell\sigma_2+(i-1)\Delta v_2)^2}{2\sigma_2^2} = \frac{(v_{2,i}-\mu_2)^2}{2\sigma_2^2}, implying f_{V_{t-1}}(v_{1,i}) = \frac{1}{\sqrt{2\pi}\sigma_1} e^{-(v_{1,i}-\mu_1)^2/(2\sigma_1^2)} = \frac{\sigma_2}{\sigma_1}\cdot\frac{1}{\sqrt{2\pi}\sigma_2} e^{-(v_{2,i}-\mu_2)^2/(2\sigma_2^2)} = \frac{1}{k}\cdot f_{V_{t-1}}(v_{2,i}). Similarly, f_{X_{t-1}}(x_{1,j}) = \frac{1}{k}\cdot f_{X_{t-1}}(x_{2,j}). Because \Delta x_1 \Delta v_1 = k^2\, \Delta x_2 \Delta v_2, we conclude that f_{V_{t-1}}(v_{1,i})\, f_{X_{t-1}}(x_{1,j})\, \Delta x_1 \Delta v_1 = f_{V_{t-1}}(v_{2,i})\, f_{X_{t-1}}(x_{2,j})\, \Delta x_2 \Delta v_2. We now turn to the proof of Eq. 22. According to Lemma 1, \tilde{\alpha}_t(v_{1,i}, x_{1,j}) is the α (0 ≤ α ≤ 1) value which maximizes

Y = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_1} \exp\left(-\frac{\left(\alpha_1 v_{1,i} + (1-\alpha_1)\mu_1 + \sqrt{1-\alpha_1^2}\, x_{1,j} - \alpha v_{1,i} - (1-\alpha)\mu_1\right)^2}{2(1-\alpha^2)\sigma_1^2}\right),

and \tilde{\alpha}_t(v_{2,i}, x_{2,j}) is the α (0 ≤ α ≤ 1) value which maximizes

Z = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_2} \exp\left(-\frac{\left(\alpha_2 v_{2,i} + (1-\alpha_2)\mu_2 + \sqrt{1-\alpha_2^2}\, x_{2,j} - \alpha v_{2,i} - (1-\alpha)\mu_2\right)^2}{2(1-\alpha^2)\sigma_2^2}\right).

It suffices to show that \frac{Y}{Z} is a constant if \alpha_1 = \alpha_2. It is easy to verify that \alpha_1 v_{1,i} + (1-\alpha_1)\mu_1 + \sqrt{1-\alpha_1^2}\, x_{1,j} - \alpha v_{1,i} - (1-\alpha)\mu_1 = k\left(\alpha_2 v_{2,i} + (1-\alpha_2)\mu_2 + \sqrt{1-\alpha_2^2}\, x_{2,j} - \alpha v_{2,i} - (1-\alpha)\mu_2\right), if \alpha_1 = \alpha_2. So, \frac{Y}{Z} = \frac{\sigma_2}{\sigma_1}, if \alpha_1 = \alpha_2. Because \frac{\sigma_2}{\sigma_1} is a constant, the validity of Eq. 22 is established. According to Eqs. 21 and 22, \lim_{m\to\infty}\lim_{n\to\infty} \sum_{i=1}^{m}\sum_{j=1}^{n} f_{V_{t-1}}(v_{1,i})\, f_{X_{t-1}}(x_{1,j})\, \tilde{\alpha}_t(v_{1,i}, x_{1,j})\, \Delta x_1 \Delta v_1 = \lim_{m\to\infty}\lim_{n\to\infty} \sum_{i=1}^{m}\sum_{j=1}^{n} f_{V_{t-1}}(v_{2,i})\, f_{X_{t-1}}(x_{2,j})\, \tilde{\alpha}_t(v_{2,i}, x_{2,j})\, \Delta x_2 \Delta v_2, if \alpha_1 = \alpha_2.

Proof of Theorem 1: Because the random variables V_{t−1} and X_{t−1} are independent, \bar{\tilde{\alpha}} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\sigma_1} e^{-(v-\mu_1)^2/(2\sigma_1^2)} \cdot \frac{1}{\sqrt{2\pi}\sigma_1} e^{-x^2/(2\sigma_1^2)} \cdot \tilde{\alpha}_t(v, x)\, dx\, dv according to Lemma 1, where \tilde{\alpha}_t(v, x) is

Fig. 10: Subdivision of intervals. In (a), (b), (c), and (d), the intervals [µ₁ − ℓσ₁ − ∆v₁/2, µ₁ + ℓσ₁ + ∆v₁/2], [−ℓσ₁ − ∆x₁/2, ℓσ₁ + ∆x₁/2], [µ₂ − ℓσ₂ − ∆v₂/2, µ₂ + ℓσ₂ + ∆v₂/2], and [−ℓσ₂ − ∆x₂/2, ℓσ₂ + ∆x₂/2] are subdivided into subintervals of equal lengths ∆v₁, ∆x₁, ∆v₂, and ∆x₂, respectively.

equal to the α (0 ≤ α ≤ 1) value which maximizes \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_1} \exp\left(-\frac{\left(\alpha_1 v + (1-\alpha_1)\mu_1 + \sqrt{1-\alpha_1^2}\, x - \alpha v - (1-\alpha)\mu_1\right)^2}{2(1-\alpha^2)\sigma_1^2}\right). Because \bar{\tilde{\alpha}} is invariant with respect to the values of the Gauss-Markov parameters µ and σ of the mobile object according to Lemma 2, \bar{\tilde{\alpha}} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{2\pi} e^{-(v^2+x^2)/2}\, \tilde{\alpha}_t(v, x)\, dx\, dv, where \tilde{\alpha}_t(v, x) is equal to the α value which maximizes \frac{1}{\sqrt{2\pi(1-\alpha^2)}} \exp\left(-\frac{\left(\alpha_1 v + \sqrt{1-\alpha_1^2}\, x - \alpha v\right)^2}{2(1-\alpha^2)}\right), obtained by setting \mu_1 = 0 and \sigma_1 = 1, as desired.

Proof of Theorem 2: Let V_i, i ≥ 1, denote the random variable of v_i, \bar{V}_n = \frac{1}{n}\sum_{i=1}^{n} V_i, and S_n^2 = \frac{1}{n}\sum_{i=1}^{n} (V_i - \bar{V}_n)^2. The convergence rate of \hat{\mu}_t is 1/\sqrt{n} because \bar{V}_n \sim N(\mu, (\frac{\sigma}{\sqrt{n}})^2) as n \to \infty, as proved in [32]. In addition, it is shown in [32] that \frac{\sqrt{n}\left(\sqrt{\frac{n}{n-1}}\, S_n - \sigma\right)}{(2\sigma)^{-1}\gamma} \sim N(0, 1) as n \to \infty, where \gamma^2 = \mu_4 - \sigma^4 and \mu_4 = E[(V_1 - \mu)^4]. Furthermore, as n \to \infty, \sqrt{\frac{n}{n-1}} \to 1, implying \frac{\sqrt{n}(S_n - \sigma)}{(2\sigma)^{-1}\gamma} \sim N(0, 1), and thus S_n \sim N(\sigma, (\frac{\gamma}{2\sigma\sqrt{n}})^2). Therefore, the convergence rate of \hat{\sigma}_t is 1/\sqrt{n}.

Proof of Theorem 3: Let O₁ be a mobile object having the Gauss-Markov parameter α = α₁. Because \bar{\tilde{\alpha}} of O₁ is equal to E[\tilde{\alpha}_t(V_{t-1}, X_{t-1})] with random variables V_{t-1} \sim N(\mu_1, \sigma_1^2) and X_{t-1} \sim N(0, \sigma_1^2), it suffices to show \frac{1}{n}\sum_{i=1}^{n} W_i \sim N(\bar{\tilde{\alpha}}, (\frac{\sigma}{\sqrt{n}})^2) as n \to \infty, where W_i denotes a random variable \tilde{\alpha}_t(V_{t-1}, X_{t-1}). First note that W_i and W_j, 1 ≤ i ≠ j ≤ n, are identically distributed because Pr\{W_i ≤ x\} = Pr\{W_j ≤ x\} for all x. Because W_i and W_j are mutually independent, \{W_n\} is a sequence of i.i.d. random variables. In addition, E[|W_1 - E[W_1]|^3] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\sigma_1} e^{-(v-\mu_1)^2/(2\sigma_1^2)} \cdot \frac{1}{\sqrt{2\pi}\sigma_1} e^{-x^2/(2\sigma_1^2)} \cdot |\tilde{\alpha}_t(v, x) - E[W_1]|^3\, dx\, dv ≤ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\sigma_1} e^{-(v-\mu_1)^2/(2\sigma_1^2)} \cdot \frac{1}{\sqrt{2\pi}\sigma_1} e^{-x^2/(2\sigma_1^2)} \cdot 1^3\, dx\, dv = 1 < \infty because 0 ≤ \tilde{\alpha}_t(v, x) ≤ 1 and 0 ≤ E[W_1] ≤ 1. According to Lemma 3, there exists a constant C such that \Delta_n = \sup_{x\in\mathbb{R}} |F_n(x) - \Phi(x)| ≤ C \nu_3 n^{-1/2} \sigma^{-3}, where F_n(x) = Pr\left\{\frac{\sum_{i=1}^{n} W_i - n\mu}{\sigma\sqrt{n}} ≤ x\right\}, \Phi(x) is the cumulative distribution function of N(0, 1), \nu_3 = E[|W_1 - \mu|^3], \mu = E[W_1] = \bar{\tilde{\alpha}}, and \sigma is the standard deviation of W_1. Because C, \nu_3, and \sigma are constants, F_n(x) \to \Phi(x) on the order of 1/\sqrt{n}. As n \to \infty, we have \frac{\sum_{i=1}^{n} W_i - n\bar{\tilde{\alpha}}}{\sigma\sqrt{n}} \sim N(0, 1). Therefore, \sum_{i=1}^{n} W_i \sim N(n\bar{\tilde{\alpha}}, (\sigma\sqrt{n})^2) as n \to \infty. We then have \frac{1}{n}\sum_{i=1}^{n} W_i \sim N(\bar{\tilde{\alpha}}, (\frac{\sigma}{\sqrt{n}})^2) as n \to \infty.

Proof of Theorem 4: µ and σ denote the mean and the standard deviation of the velocity as t \to \infty, respectively. Therefore, it suffices to show that \hat{\mu}_t = \frac{1}{t}\sum_{i=1}^{t} v_i and \hat{\sigma}_t^2 = \frac{1}{t}\sum_{i=1}^{t} (v_i - \hat{\mu}_t)^2. It is easy to verify that \hat{\mu}_t = \frac{1}{t}\sum_{i=1}^{t} v_i. According to Eq. 11, \hat{\sigma}_1^2 = 0 and \hat{\sigma}_2^2 = \frac{(v_1-\hat{\mu}_2)^2 + (v_2-\hat{\mu}_2)^2}{2}; therefore, \hat{\sigma}_t^2 = \frac{1}{t}\sum_{i=1}^{t} (v_i - \hat{\mu}_t)^2 holds for both t = 1 and t = 2. We prove that \hat{\sigma}_t^2 = \frac{1}{t}\sum_{i=1}^{t} (v_i - \hat{\mu}_t)^2 holds for t ≥ 3 by induction on t. As an induction assumption, we take that \hat{\sigma}_t^2 = \frac{1}{t}\sum_{i=1}^{t} (v_i - \hat{\mu}_t)^2 holds for all t ≤ k−1. According to Eq. 11, \hat{\sigma}_k^2 = \frac{k-1}{k}\hat{\sigma}_{k-1}^2 + \frac{1}{k-1}(v_k - \hat{\mu}_k)^2. Then, by the induction hypothesis, \hat{\sigma}_{k-1}^2 = \frac{1}{k-1}\sum_{i=1}^{k-1} (v_i - \hat{\mu}_{k-1})^2; therefore, \hat{\sigma}_k^2 = \frac{1}{k}\left(\sum_{i=1}^{k-1} v_i^2 - 2\hat{\mu}_{k-1}\sum_{i=1}^{k-1} v_i + (k-1)\hat{\mu}_{k-1}^2\right) + \frac{1}{k-1}(v_k - \hat{\mu}_k)^2. So, \hat{\sigma}_k^2 = \frac{1}{k}\sum_{i=1}^{k-1} v_i^2 - \frac{k-1}{k}\hat{\mu}_{k-1}^2 + \frac{1}{k-1}(v_k - \hat{\mu}_k)^2 because \hat{\mu}_{k-1} = \frac{1}{k-1}\sum_{i=1}^{k-1} v_i. Because \hat{\mu}_{k-1} = \frac{k\hat{\mu}_k - v_k}{k-1}, \hat{\sigma}_k^2 = \frac{1}{k}\left(\sum_{i=1}^{k-1} v_i^2 - \frac{1}{k-1}(k\hat{\mu}_k - v_k)^2 + \frac{k}{k-1}(v_k - \hat{\mu}_k)^2\right). Therefore, \hat{\sigma}_k^2 = \frac{1}{k}\left(\sum_{i=1}^{k} v_i^2 - k\hat{\mu}_k^2\right) = \frac{1}{k}\sum_{i=1}^{k} (v_i - \hat{\mu}_k)^2 because \hat{\mu}_k = \frac{1}{k}\sum_{i=1}^{k} v_i. Using a similar argument, it can be shown that \hat{\sigma}_t^2 = \frac{1}{t}\sum_{i=1}^{t} (v_i - \hat{\mu}_t)^2 holds for t = 3; therefore a basis for the proof exists. This implies that \hat{\sigma}_t^2 = \frac{1}{t}\sum_{i=1}^{t} (v_i - \hat{\mu}_t)^2 holds for t ≥ 3.

Proof of Theorem 5: We first show that the approximate 95% confidence interval for α is within 0.00812 of G(\bar{\tilde{\alpha}}). According to Eq. 15, we need to show that the approximate 95% confidence interval for α is within 0.00812 of G(\bar{\tilde{\alpha}}) if C1) 0.35338 < \bar{\tilde{\alpha}} ≤ 0.44602 and C2) 0.44602 < \bar{\tilde{\alpha}} ≤ 1. The proof of C1 is omitted due to its similarity with the proof of C2. For C2, \hat{\theta} = (0.37492, −0.55069). According to Lemma 4, the approximate 95% confidence interval for α is given by G(\bar{\tilde{\alpha}}) \pm 1.645 \times 0.00493\sqrt{\left(2.58456 \times \ln((x - 0.2)/x) + 0.40848\right)^2 + 1.0001}. It is easy to verify that 1.645 \times 0.00493\sqrt{\left(2.58456 \times \ln((x - 0.2)/x) + 0.40848\right)^2 + 1.0001} is concave up on [0.44602, 1] and has maximum value 0.00812, implying that the approximate 95% confidence interval for α is given by G(\bar{\tilde{\alpha}}) \pm 0.00812. In addition, since \lim_{t\to\infty} \bar{\tilde{\alpha}}_t = \bar{\tilde{\alpha}} (because \frac{1}{n}\sum_{i=1}^{n} W_i \sim N(\bar{\tilde{\alpha}}, (\frac{\sigma}{\sqrt{n}})^2) as n \to \infty, where W_i denotes a random variable \tilde{\alpha}_t(V_{t-1}, X_{t-1}), as demonstrated in Theorem 3) and \hat{\alpha}_t = G(\bar{\tilde{\alpha}}_t), the approximate 95% confidence interval for α is within 0.00812 of \hat{\alpha}_t as t \to \infty.
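The recursive update of Eq. 11 used in the proof of Theorem 4 can also be checked numerically: running the recursion over a velocity sequence must reproduce the batch mean and the batch (population) variance. This is our own verification sketch, not code from the paper:

```python
def recursive_moments(vs):
    """Recursive updates matching Eq. 11:
    mu_k = ((k-1)*mu_{k-1} + v_k) / k,
    sigma2_k = ((k-1)/k)*sigma2_{k-1} + (v_k - mu_k)^2 / (k-1)  for k >= 2,
    with mu_1 = v_1 and sigma2_1 = 0."""
    mu, sigma2 = vs[0], 0.0
    for k, v in enumerate(vs[1:], start=2):
        mu = ((k - 1) * mu + v) / k
        sigma2 = ((k - 1) / k) * sigma2 + (v - mu) ** 2 / (k - 1)
    return mu, sigma2
```

For vs = [1, 2, 3, 4] the recursion returns mean 2.5 and variance 1.25, identical to the batch formulas µ̂_t = (1/t)Σv_i and σ̂_t² = (1/t)Σ(v_i − µ̂_t)², as Theorem 4 asserts.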


REFERENCES

[1] J. Ahn and B. Krishnamachari, “Scaling laws for data-centric storage and querying in wireless sensor networks,” IEEE/ACM Trans. on Networking, vol. 17, no. 4, pp. 1242–1255, 2009.
[2] P. Balister, Z. Zheng, S. Kumar, and P. Sinha, “Trap coverage: allowing coverage holes of bounded diameter in wireless sensor networks,” IEEE INFOCOM, pp. 136–144, 2009.
[3] C. Bettstetter, “Mobility modeling in wireless networks: categorization, smooth movement, and border effects,” ACM SIGMOBILE Mobile Computing and Communications Review, vol. 5, no. 3, pp. 55–66, 2001.
[4] P.K. Biswas and S. Phoha, “Self-organizing sensor networks for integrated target surveillance,” IEEE Trans. on Computers, vol. 55, no. 8, pp. 1033–1047, 2006.
[5] T. Camp, J. Boleng, and V. Davies, “A survey of mobility models for ad hoc network research,” Wireless Communications and Mobile Computing, vol. 2, no. 5, pp. 483–502, 2002.
[6] C. Campos, D. Otero, and L.D. Moraes, “Realistic individual mobility markovian models for mobile ad hoc networks,” IEEE WCNC, pp. 1980–1985, 2004.
[7] W.P. Chen, J.C. Hou, and L. Sha, “Dynamic clustering for acoustic target tracking in wireless sensor networks,” IEEE Trans. on Mobile Computing, vol. 3, no. 3, pp. 258–271, 2004.
[8] J.C. Chen, R.E. Hudson, and K. Yao, “Maximum-likelihood source localization and unknown sensor location estimation for wideband signals in the near-field,” IEEE Trans. on Signal Processing, vol. 50, no. 8, pp. 1843–1854, 2002.
[9] J.C. Davis, Statistics and Data Analysis in Geology, Wiley, New York, pp. 72–78, 2002.
[10] H.A.B.F. de Oliveira, A. Boukerche, E.F. Nakamura, and A.A.F. Loureiro, “An efficient directed localization recursion protocol for wireless sensor networks,” IEEE Trans. on Computers, vol. 58, no. 5, pp. 677–691, 2009.
[11] J.M.B. Dias and P.A.C. Marques, “Multiple moving target detection and trajectory estimation using a single SAR sensor,” IEEE Trans. on Aerospace and Electronic Systems, vol. 39, no. 2, pp. 604–624, 2003.
[12] B. Dong and X. Wang, “Adaptive mobile positioning in WCDMA networks,” EURASIP Journal on Wireless Communications and Networking, vol. 5, no. 3, pp. 343–353, 2005.
[13] K.T. Feng, C.H. Hsu, and T.E. Lu, “Velocity-assisted predictive mobility and location-aware routing protocols for mobile ad hoc networks,” IEEE Trans. on Vehicular Technology, vol. 57, no. 1, pp. 448–464, 2008.
[14] Z. Guo, M. Zhou, and L. Zakrevski, “Optimal tracking interval for predictive tracking in wireless sensor network,” IEEE Communications Letters, vol. 9, no. 9, pp. 805–807, 2005.
[15] B.H. Kim, D.K. Roh, J.M. Lee, M.H. Lee, K. Son, M.C. Lee, J.W. Choi, and S.H. Han, “Localization of a mobile robot using images of a moving target,” IEEE ICRA, pp. 253–258, 2001.
[16] S.P. Kuo, H.J. Kuo, and Y.C. Tseng, “The beacon movement detection problem in wireless sensor networks for localization applications,” IEEE Trans. on Mobile Computing, vol. 8, no. 10, pp. 1326–1338, 2009.
[17] B. Liang and Z.J. Haas, “Predictive distance-based mobility management for multidimensional PCS networks,” IEEE/ACM Trans. on Networking, vol. 11, no. 5, pp. 718–732, 2003.
[18] B. Liang and Z.J. Haas, “Predictive distance-based mobility management for PCS networks,” IEEE INFOCOM, pp. 1377–1384, 1999.
[19] T. Liu, P. Bahl, and I. Chlamtac, “Mobility modeling, location tracking, and trajectory prediction in wireless ATM networks,” IEEE Journal on Selected Areas in Communications, vol. 16, no. 6, pp. 922–936, 1998.
[20] B.H. Liu, W.C. Ke, C.H. Tsai, and M.J. Tsai, “Constructing a message-pruning tree with minimum cost for tracking moving objects in wireless sensor networks is NP-complete and an enhanced data aggregation structure,” IEEE Trans. on Computers, vol. 57, no. 6, pp. 849–863, 2008.
[21] X. Luo, T. Camp, and W. Navidi, “Predictive methods for location services in mobile ad hoc networks,” IEEE WMAN, pp. 246–252, 2005.
[22] D. McErlean and S. Narayanan, “Distributed detection and tracking in sensor networks,” IEEE ACSSC, pp. 1174–1178, 2002.

[23] M. McGuire and K.N. Plataniotis, “Dynamic model-based filtering for mobile terminal location estimation,” IEEE Trans. on Vehicular Technology, vol. 52, no. 4, pp. 1012–1031, 2003.
[24] Z. Merhi, M. Elgamel, and M. Bayoumi, “A lightweight collaborative fault tolerant target localization system for wireless sensor networks,” IEEE Trans. on Mobile Computing, vol. 8, no. 12, pp. 1690–1704, 2009.
[25] M.Y. Nam, M.Z. Al-Sabbagh, J.E. Kim, M.K. Yoon, C.G. Lee, and E.Y. Ha, “A real-time ubiquitous system for assisted living: combined scheduling of sensing and communication for real-time tracking,” IEEE Trans. on Computers, vol. 57, no. 6, pp. 795–808, 2008.
[26] W. Navidi and T. Camp, “Predicting node location in a PCS network,” IEEE IPCCC, pp. 165–170, 2004.
[27] M.M. Noel, P.P. Joshi, and T.C. Jannett, “Improved maximum likelihood estimation of target position in wireless sensor networks using particle swarm optimization,” IEEE ITNG, pp. 274–279, 2006.
[28] T. Park and K.G. Shin, “Soft tamper-proofing via program integrity verification in wireless sensor networks,” IEEE Trans. on Mobile Computing, vol. 4, no. 3, pp. 297–309, 2005.
[29] K.K. Rachuri and C. Murthy, “Energy efficient and scalable search in dense wireless sensor networks,” IEEE Trans. on Computers, vol. 58, no. 6, pp. 812–826, 2009.
[30] S.S. Sawilowsky, “Fermat, Schubert, Einstein, and Behrens-Fisher: the probable difference between two means when σ1 ≠ σ2,” Journal of Modern Applied Statistical Methods, vol. 1, no. 2, pp. 461–472, 2002.
[31] G.A.F. Seber and C.J. Wild, “Nonlinear regression,” Wiley, New York, 1989.
[32] P.K. Sen and J.M. Singer, “Large sample methods in statistics: an introduction with applications,” Chapman & Hall, New York, pp. 107–147, 1993.
[33] S.C. Tu, G.Y. Chang, J.P. Sheu, W. Li, and K.Y. Hsieh, “Scalable continuous object detection and tracking in sensor networks,” Journal of Parallel and Distributed Computing, vol. 70, no. 3, pp. 212–224, 2010.
[34] M.J. Tsai, H.Y. Yang, B.H. Liu, and W.Q. Huang, “Virtual-coordinate-based delivery-guaranteed routing protocol in wireless sensor networks,” IEEE/ACM Trans. on Networking, vol. 17, no. 4, pp. 1228–1241, 2009.
[35] Y.C. Tseng, S.P. Kuo, H.W. Lee, and C.F. Huang, “Location tracking in a wireless sensor network by mobile agents and its data fusion strategies,” Computer Journal, vol. 47, no. 4, pp. 448–460, 2004.
[36] Y.C. Wang, Y.Y. Hsieh, and Y.C. Tseng, “Multiresolution spatial and temporal coding in a wireless sensor network for long-term monitoring applications,” IEEE Trans. on Computers, vol. 58, no. 6, pp. 827–838, 2009.
[37] X. Wang and S. Wang, “Collaborative signal processing for target tracking in distributed wireless sensor networks,” Journal of Parallel and Distributed Computing, vol. 67, no. 5, pp. 501–515, 2007.
[38] J. Xu, X. Tang, and W.C. Lee, “A new storage scheme for approximate location queries in object-tracking sensor networks,” IEEE Trans. on Parallel and Distributed Systems, vol. 19, no. 2, pp. 262–275, 2008.
[39] Y. Xu, J. Winter, and W.C. Lee, “Prediction-based strategies for energy saving in object tracking sensor networks,” IEEE MDM, pp. 346–357, 2004.
[40] H. Yang and B. Sikdar, “A protocol for tracking mobile targets using sensor networks,” IEEE SNPA, pp. 71–81, 2003.
[41] Z. Yang and X. Wang, “Joint mobility tracking and handoff in cellular networks via sequential Monte Carlo filtering,” IEEE Trans. on Signal Processing, vol. 51, no. 1, pp. 269–281, 2003.
[42] Z. Ye, A.A. Abouzeid, and J. Ai, “Optimal stochastic policies for distributed data aggregation in wireless sensor networks,” IEEE/ACM Trans. on Networking, vol. 17, no. 5, pp. 1494–1507, 2009.
[43] W.L. Yeow, C.K. Tham, and W.C. Wong, “Energy efficient multiple target tracking in wireless sensor networks,” IEEE Trans. on Vehicular Technology, vol. 56, no. 2, pp. 918–928, 2007.
[44] Z.R. Zaidi and B.L. Mark, “Real-time mobility tracking algorithms for cellular networks based on Kalman filtering,” IEEE Trans. on Mobile Computing, vol. 4, no. 2, pp. 195–208, 2005.
[45] W. Zhang and G. Cao, “Optimizing tree reconfiguration for mobile target tracking in sensor networks,” IEEE INFOCOM, pp. 2434–2445, 2004.
