POWER-EFFICIENT DIMENSIONALITY REDUCTION FOR DISTRIBUTED CHANNEL-AWARE KALMAN TRACKING USING WIRELESS SENSOR NETWORKS

Hao Zhu, Ioannis D. Schizas and Georgios B. Giannakis
University of Minnesota, 200 Union Str. SE, Minneapolis, MN 55455, USA

ABSTRACT

Estimation and tracking of nonstationary dynamical processes is of paramount importance in various applications, including localization and navigation. The goal of this paper is to perform such tasks in a distributed fashion using data collected at power-limited sensors communicating with a fusion center (FC) over noisy links. For a prescribed power budget, linear dimensionality-reducing operators are derived per sensor to account for the sensor-FC channel and minimize the mean-square error (MSE) of Kalman filtered state estimates formed at the FC. Using these operators and state predictions fed back from the FC online, sensors compress their local innovation sequences and communicate them to the FC, where tracking estimates are corrected. Analysis and corroborating simulations confirm that the novel channel-aware distributed tracker outperforms competing alternatives.

Index Terms— Distributed tracking, Kalman filtering

1. INTRODUCTION

With the advent of power-limited wireless sensor networks (WSNs), distributed estimation and tracking of dynamical processes using sensor data processed at a fusion center (FC) has drawn considerable interest recently. To comply with stringent bandwidth limitations, existing approaches entail sensors communicating their analog or quantized compressed data to the FC, where a Kalman filter (KF) is implemented to track the dynamical process of interest [2, 3]. However, neither the sensor power budgets nor the sensor-FC channels are accounted for in these approaches. In this paper, power-efficient and channel-aware KF-based tracking is derived based on analog-amplitude reduced-dimensionality multi-sensor data.
Specifically, linear dimensionality-reducing operators (matrices) are derived to compress sensor data and minimize the MSE of state estimates at the FC, while adhering to power constraints prescribed at each sensor.

[Footnote: Prepared through collaborative participation in the Communication and Networks Consortium sponsored by the U. S. Army Research Lab under the Collaborative Technology Alliance Program, Cooperative Agreement DAAD19-01-2-0011. The U. S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon.]

An attractive feature of the present scheme is
the utilization of feedback from the FC to the sensors, which allows them to remove redundant information from their observations and thus gain in power efficiency; see also [4], where feedback was likewise advocated (without channel-aware dimensionality reduction) to track an auto-regressive process. The novel approach subsumes as a special case the results reported in [1] for distributed estimation of stationary random signals using reduced-dimensionality sensor observations.

After formulating the problem in Section 2, we develop our reduced-dimensionality KF scheme (Section 3). In Section 3.1 the MSE-optimal linear dimensionality-reducing operators are specified for a single-sensor setup; since a closed-form solution is impossible in the multi-sensor case, we develop a block coordinate descent algorithm (Section 3.2). Our theoretical results are corroborated by numerical examples in Section 4.

2. PROBLEM STATEMENT AND PRELIMINARIES

Consider the WSN in Fig. 1, which depicts $L$ sensors $\{S_\ell\}_{\ell=1}^{L}$ linked with an FC. Each sensor observes an $N_\ell \times 1$ vector $x_\ell(n)$ that is correlated with the $p \times 1$ state process of interest $s(n)$. The latter obeys the discrete-time vector recursion

$$s(n) = A(n)\,s(n-1) + w(n) \tag{1}$$

where $w(n)$ denotes zero-mean additive white Gaussian noise (AWGN) with covariance matrix $\Sigma_{ww}(n)$. Sensor $S_\ell$ observes

$$x_\ell(n) = H_\ell(n)\,s(n) + v_\ell(n), \qquad \ell = 1,\dots,L \tag{2}$$
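Before proceeding, the signal model (1)-(2) is easy to simulate. The sketch below uses hypothetical values for $A$, $\Sigma_{ww}$, $H_\ell$ and $\Sigma_{v_\ell v_\ell}$ (the state pair mirrors the zero-acceleration model used later in Section 4) and is an illustration only, not part of the proposed scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
p, L, N_l, T = 2, 3, 4, 50        # hypothetical sizes: state dim, sensors, obs dim, steps

A   = np.array([[1.0, 0.05], [0.0, 1.0]])                 # A(n), time-invariant here
Sww = np.array([[0.05**3 / 3, 0.05**2 / 2],
                [0.05**2 / 2, 0.05]])                     # Sigma_ww(n)
H   = [rng.standard_normal((N_l, p)) for _ in range(L)]   # H_l(n), hypothetical
Svv = [0.5 * np.eye(N_l) for _ in range(L)]               # Sigma_{v_l v_l}(n)

s, states, obs = np.zeros(p), [], []
for n in range(T):
    s = A @ s + rng.multivariate_normal(np.zeros(p), Sww)            # state recursion (1)
    x = [H[l] @ s + rng.multivariate_normal(np.zeros(N_l), Svv[l])   # sensor data (2)
         for l in range(L)]
    states.append(s)
    obs.append(x)
```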
where $v_\ell(n)$ is zero-mean, temporally uncorrelated Gaussian observation noise with (cross-)covariance matrix $\Sigma_{v_\ell v_m}(n)$. Using a $k_\ell \times N_\ell$ matrix $C_\ell(n)$ with $k_\ell \le N_\ell$, sensor $S_\ell$ forms the reduced-dimensionality vector $C_\ell(n)[x_\ell(n) - \hat{x}_\ell(n|n-1)]$, where $\hat{x}_\ell(n|n-1)$ is a vector subtracted from $x_\ell(n)$ to save transmission power. (It will turn out that $\hat{x}_\ell(n|n-1)$ is a predictor of $x_\ell(n)$ based on data to be specified soon.) Furthermore, we assume that:

(a1) The $S_\ell$-to-FC link comprises a $k_\ell \times k_\ell$ full-rank channel matrix $D_\ell$ along with zero-mean AWGN $z_\ell(n)$ at the FC with covariance $\Sigma_{z_\ell z_\ell}$, which is uncorrelated with $s(n)$ and $\{x_\ell(n)\}_{\ell=1}^{L}$, and across channels. Sensors transmit over orthogonal channels, so that the FC can receive separately the vectors

$$y_\ell(n) = D_\ell C_\ell(n)[x_\ell(n) - \hat{x}_\ell(n|n-1)] + z_\ell(n), \qquad \forall \ell.$$
(a2) The covariance matrices $\Sigma_{ww}(n)$ and $\{\Sigma_{v_\ell v_m}(n)\}_{\ell,m=1}^{L}$, as well as the matrices $A(n)$ and $\{H_\ell(n)\}_{\ell=1}^{L}$, are known at the FC.

Assumption (a1) is satisfied, e.g., for multicarrier transmissions where each entry of the compressed vector rides on a subcarrier with nonzero constant channel gain. Matrices $\{D_\ell\}_{\ell=1}^{L}$ can be acquired during a training phase, while the matrices in (a2) can be obtained from the physics of the problem or via sample averaging of training data. The FC concatenates $y_\ell(n)$, for $\ell = 1,\dots,L$, to form

$$y(n) = \mathrm{diag}(D_1 C_1(n),\dots,D_L C_L(n))\,[x(n) - \hat{x}(n|n-1)] + z(n) \tag{3}$$

where $x(n) := [x_1^T(n)\;\dots\;x_L^T(n)]^T$, and likewise for the aggregate predictor $\hat{x}(n|n-1)$ and the FC noise vector $z(n)$.

We are interested in tracking $s(n)$ using the received data $\{y(k)\}_{k=0}^{n}$. Specifically, we wish to find at the FC the MMSE-optimal state estimate $\hat{s}(n|n) = E[s(n)\,|\,y(0),\dots,y(n)]$ recursively (via a KF) based on the observations in (3). To this end, notice that the filtered estimate $\hat{s}(n|n)$ can be obtained from the predictor $\hat{s}(n|n-1) = A(n)\hat{s}(n-1|n-1)$ after correcting it using the innovation $\tilde{y}(n|n-1) := y(n) - \hat{y}(n|n-1)$, where $\hat{y}(n|n-1) = E[y(n)\,|\,y(0),\dots,y(n-1)]$; see, e.g., [5, Chapter 13]:

$$\hat{s}(n|n) = \hat{s}(n|n-1) + E[s(n)\,|\,\tilde{y}(n|n-1)]. \tag{4}$$

Equation (4) guides the selection of the local predictor $\hat{x}_\ell(n|n-1)$ at sensor $\ell$, which is summarized in the following lemma.

Lemma 1: If $\hat{x}_\ell(n|n-1) = E[x_\ell(n)\,|\,y(0),\dots,y(n-1)]$, then $\hat{y}_\ell(n|n-1) = 0$.

Proof: The predictor of $y_\ell(n)$ based on the past observations can be written as [cf. (a1)]

$$E\big[D_\ell C_\ell(n)[x_\ell(n) - \hat{x}_\ell(n|n-1)]\,\big|\,y(0),\dots,y(n-1)\big] = D_\ell C_\ell(n)[\hat{x}_\ell(n|n-1) - \hat{x}_\ell(n|n-1)] = 0. \;\; \text{Q.E.D.}$$

Lemma 1 implies that $\tilde{y}_\ell(n|n-1) = y_\ell(n)$. This choice of $\hat{x}_\ell(n|n-1)$ enables each sensor $S_\ell$ to save considerable power, because it transmits the non-redundant local innovation $x_\ell(n) - \hat{x}_\ell(n|n-1)$, which has smaller variance than its raw data $x_\ell(n)$. In addition, this innovation, received at the FC as $y_\ell(n)$, coincides with $\tilde{y}_\ell(n|n-1)$, which is all that the FC needs to implement the correction step (4) of the KF. Since from (2) it holds that $\hat{x}_\ell(n|n-1) = H_\ell(n)\hat{s}(n|n-1)$, for sensor $S_\ell$ to form $\hat{x}_\ell(n|n-1)$ the FC must feed back the predictor $\hat{s}(n|n-1)$.

To proceed with the dimensionality reduction task, note that Gaussianity and uncorrelatedness of $z_\ell(n)$ and $v_\ell(n)$ imply that $E[s(n)\,|\,\tilde{y}(n|n-1)]$ in (4) is a linear function of $\tilde{y}(n|n-1) = y(n)$, i.e., $E[s(n)\,|\,\tilde{y}(n|n-1)] = B(n)\tilde{y}(n|n-1)$. Furthermore, it follows from (3) and (4) that $\hat{s}(n|n)$ depends on both $C_\ell(n)$ and $B(n)$. Thus, for a prescribed transmit power $P_\ell$ per sensor $S_\ell$, the optimization problem is

$$\big(B^o(n), \{C_\ell^o(n)\}_{\ell=1}^{L}\big) = \arg\min\; E[\|s(n) - \hat{s}(n|n)\|^2] \quad \text{s.t.}\quad \mathrm{tr}\big(C_\ell(n)\Sigma_{\tilde{x}_\ell\tilde{x}_\ell}(n)C_\ell^T(n)\big) \le P_\ell,\;\; \ell = 1,\dots,L \tag{5}$$
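The power saving behind Lemma 1, namely transmitting the innovation $x_\ell(n) - \hat{x}_\ell(n|n-1)$ instead of the raw data, can be sanity-checked on a scalar AR(1) toy model. All numbers below are hypothetical, and the scalar Riccati recursion stands in for the vector KF.

```python
import numpy as np

# Hypothetical scalar AR(1) example: s(n) = a*s(n-1) + w(n), x(n) = s(n) + v(n).
a, q, sv2 = 0.9, 0.1, 0.5              # transition, state-noise and obs-noise variances
sigma_s2 = q / (1 - a**2)              # stationary variance of s(n)

# Iterate the scalar Riccati recursion to the steady-state prediction ECM M.
M = 1.0
for _ in range(200):
    M_corr = M - M**2 / (M + sv2)      # correction step
    M = a**2 * M_corr + q              # prediction step

raw_power   = sigma_s2 + sv2           # variance of the raw data x(n)
innov_power = M + sv2                  # variance of the innovation x(n) - x_hat(n|n-1)
print(innov_power < raw_power)         # True: the innovation is cheaper to transmit
```

Here the steady-state prediction variance is well below the stationary state variance, which is exactly why compressing the innovation rather than the raw observation saves transmit power.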
where $\Sigma_{\tilde{x}_\ell\tilde{x}_\ell}(n)$ is the covariance matrix of $\tilde{x}_\ell(n|n-1) := x_\ell(n) - \hat{x}_\ell(n|n-1)$, the vector that is compressed at sensor $\ell$.

Fig. 1. An FC-based wireless sensor network.

3. REDUCED-DIMENSIONALITY KF

In this section we first outline the steps of the reduced-dimensionality distributed tracker at the sensors and the FC, and then explain how the matrices $\{C_\ell(n)\}_{\ell=1}^{L}$ and $B(n)$ are selected. Supposing that all quantities at step $n-1$ are available, the FC relies on $\hat{s}(n-1|n-1)$ to obtain the state predictor

$$\hat{s}(n|n-1) = A(n)\,\hat{s}(n-1|n-1) \tag{6}$$
and on $M(n-1|n-1) := E[(s(n-1) - \hat{s}(n-1|n-1))(s(n-1) - \hat{s}(n-1|n-1))^T]$ to update $M(n|n-1)$, the covariance matrix of the state innovation $\tilde{s}(n|n-1) := s(n) - \hat{s}(n|n-1)$:

$$M(n|n-1) = A(n)\,M(n-1|n-1)\,A^T(n) + \Sigma_{ww}(n). \tag{7}$$

With the optimal $\{C_\ell(n)\}_{\ell=1}^{L}$ and $B(n)$ determined as described in the ensuing section, the FC feeds back to sensor $S_\ell$ the matrix $C_\ell(n)$ and the vector $\hat{s}(n|n-1)$. Note that the feedback channel can be assumed reliable, since the FC does not have stringent power and communication limitations.

Using the feedback and its local observation, sensor $S_\ell$ forms the innovation $\tilde{x}_\ell(n|n-1)$ and transmits to the FC the reduced-dimensionality vector $C_\ell(n)\tilde{x}_\ell(n|n-1)$. The FC receives each $y_\ell(n)$ separately and relies on $y(n)$ to obtain the filtered estimate (corrector)

$$\hat{s}(n|n) = \hat{s}(n|n-1) + \sum_{\ell=1}^{L} B_\ell(n)\,\tilde{y}_\ell(n|n-1) \tag{8}$$

where $B_\ell(n)$ comprises the columns of $B(n)$ with indices $\big(\sum_{m=1}^{\ell-1} k_m\big) + 1$ through $\sum_{m=1}^{\ell} k_m$, and $B(n) := [B_1(n)\;\dots\;B_L(n)]$. (Recall also that $\tilde{y}_\ell(n|n-1) = y_\ell(n)$.) Note that $B(n)$ is the gain of the KF based on the reduced-dimensionality data in $y(n)$. Further, as $B(n)$ is chosen so that $\hat{s}(n|n)$ is the LMMSE estimator, the orthogonality principle and the linearity of the expectation operator yield the update of the filtered error covariance matrix (ECM):

$$M(n|n) = E\Big[\Big(\tilde{s}(n|n-1) - \sum_{\ell=1}^{L} B_\ell(n)\tilde{y}_\ell(n|n-1)\Big)\,\tilde{s}^T(n|n-1)\Big] = \Big[I - \sum_{\ell=1}^{L} B_\ell(n) D_\ell C_\ell(n) H_\ell(n)\Big]\,M(n|n-1). \tag{9}$$
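For concreteness, one predictor/corrector cycle (6)-(9) at the FC can be sketched as follows. The compression matrices below are fixed random placeholders standing in for the optimal $C_\ell^o(n)$ derived next, all model matrices are hypothetical, and spatially uncorrelated observation noise is assumed when building the compressed-noise covariance.

```python
import numpy as np

rng = np.random.default_rng(3)
p, L, N_l, k_l = 2, 2, 4, 1
A   = np.array([[1.0, 0.05], [0.0, 1.0]])
Sww = np.array([[0.05**3 / 3, 0.05**2 / 2], [0.05**2 / 2, 0.05]])
H   = [rng.standard_normal((N_l, p)) for _ in range(L)]    # H_l(n), hypothetical
Sv  = [0.5 * np.eye(N_l) for _ in range(L)]                # observation noise covariances
C   = [rng.standard_normal((k_l, N_l)) for _ in range(L)]  # placeholder compressors
D   = [np.eye(k_l) for _ in range(L)]                      # channel matrices D_l
Sz  = [0.1 * np.eye(k_l) for _ in range(L)]                # FC noise covariances

def fc_cycle(s_hat, M, y):
    """One FC cycle, eqs. (6)-(9); y stacks the received y_l(n)."""
    s_pred = A @ s_hat                                     # predictor (6)
    M_pred = A @ M @ A.T + Sww                             # predicted ECM (7)
    F = np.vstack([D[l] @ C[l] @ H[l] for l in range(L)])  # maps state innovation to y
    R = np.zeros((L * k_l, L * k_l))                       # compressed-noise covariance
    for l in range(L):
        blk = D[l] @ C[l] @ Sv[l] @ C[l].T @ D[l].T + Sz[l]
        R[l*k_l:(l+1)*k_l, l*k_l:(l+1)*k_l] = blk
    B = M_pred @ F.T @ np.linalg.inv(F @ M_pred @ F.T + R) # LMMSE gain B(n)
    s_corr = s_pred + B @ y                                # corrector (8), y_l = ytilde_l
    M_corr = (np.eye(p) - B @ F) @ M_pred                  # ECM update (9)
    return s_corr, M_corr

s1, M1 = fc_cycle(np.zeros(p), np.eye(p), np.zeros(L * k_l))
```

Since the gain is the LMMSE one, the correction can only shrink the predicted ECM: $\mathrm{tr}\,M(n|n) \le \mathrm{tr}\,M(n|n-1)$.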
3.1. MSE-Optimal Dimensionality-Reducing Matrices

Consider the (cross-)covariance matrices of the state and observation innovations:

$$\Sigma_{\tilde{s}\tilde{x}_\ell}(n) := E\big[\tilde{s}(n|n-1)\,\tilde{x}_\ell^T(n|n-1)\big] = M(n|n-1)\,H_\ell^T(n) \tag{10}$$

$$\Sigma_{\tilde{x}_\ell\tilde{x}_m}(n) := E\big[\tilde{x}_\ell(n|n-1)\,\tilde{x}_m^T(n|n-1)\big] = H_\ell(n)\,M(n|n-1)\,H_m^T(n) + \Sigma_{v_\ell v_m}(n). \tag{11}$$

Focusing first on the solution of (5) for a single sensor ($L = 1$), the Lagrangian function is

$$J(B(n), C_1(n), \mu) = J_o(n) + \mathrm{tr}\big[(\Sigma_{\tilde{s}\tilde{x}_1}(n) - B(n)D_1C_1(n)\Sigma_{\tilde{x}_1\tilde{x}_1}(n))\,\Sigma_{\tilde{x}_1\tilde{x}_1}^{-1}(n)\,(\Sigma_{\tilde{x}_1\tilde{s}}(n) - \Sigma_{\tilde{x}_1\tilde{x}_1}(n)C_1^T(n)D_1^TB^T(n))\big] + \mathrm{tr}\big(B(n)\Sigma_{z_1z_1}B^T(n)\big) + \mu\big[\mathrm{tr}\big(C_1(n)\Sigma_{\tilde{x}_1\tilde{x}_1}(n)C_1^T(n)\big) - P_1\big] \tag{12}$$

where $\mu$ is the corresponding Lagrange multiplier and $J_o(n) := \mathrm{tr}\big(M(n|n-1) - \Sigma_{\tilde{s}\tilde{x}_1}(n)\Sigma_{\tilde{x}_1\tilde{x}_1}^{-1}(n)\Sigma_{\tilde{x}_1\tilde{s}}(n)\big)$ is the minimum MSE achieved when estimating $\tilde{s}(n|n-1)$ based on $\tilde{x}_1(n|n-1)$ without channel distortion and additive noise at the FC. Interestingly, the Lagrangian for determining $C_1^o(n)$ and $B^o(n)$ resembles the formulation in [1], which deals with distributed estimation via reduced-dimensionality observations of stationary signals; the important difference here is that we consider estimation of nonstationary processes.

To simplify (12), consider the SVD $\Sigma_{\tilde{s}\tilde{x}_1}(n) = U_{\tilde{s}\tilde{x}}(n)S_{\tilde{s}\tilde{x}}(n)V_{\tilde{s}\tilde{x}}^T(n)$ and the eigendecompositions $\Sigma_{\tilde{x}_1\tilde{x}_1}(n) = Q_{\tilde{x}}(n)\Lambda_{\tilde{x}}(n)Q_{\tilde{x}}^T(n)$ and $D_1^T\Sigma_{z_1z_1}^{-1}D_1 = Q_{zd}\Lambda_{zd}Q_{zd}^T$, where $\Lambda_{\tilde{x}}(n) = \mathrm{diag}(\lambda_{\tilde{x},1}(n),\dots,\lambda_{\tilde{x},N_1}(n))$ with $\lambda_{\tilde{x},1}(n) \ge \dots \ge \lambda_{\tilde{x},N_1}(n) > 0$, and $\Lambda_{zd} = \mathrm{diag}(\lambda_{zd,1},\dots,\lambda_{zd,k_1})$ with $\lambda_{zd,1} \ge \dots \ge \lambda_{zd,k_1} > 0$. Notice that $\lambda_{zd,i}$ captures the SNR of the $i$-th entry of the received signal vector at the FC. Further, define $G(n) := Q_{\tilde{x}}^T(n)V_{\tilde{s}\tilde{x}}(n)S_{\tilde{s}\tilde{x}}^T(n)S_{\tilde{s}\tilde{x}}(n)V_{\tilde{s}\tilde{x}}^T(n)Q_{\tilde{x}}(n)$ with $\rho_g := \mathrm{rank}(G(n)) = \mathrm{rank}(\Sigma_{\tilde{s}\tilde{x}_1}(n))$, and $G_{\tilde{x}}(n) := \Lambda_{\tilde{x}}^{-1/2}(n)G(n)\Lambda_{\tilde{x}}^{-1/2}(n)$ with eigendecomposition $G_{\tilde{x}}(n) = Q_{g\tilde{x}}(n)\Lambda_{g\tilde{x}}(n)Q_{g\tilde{x}}^T(n)$, where $\Lambda_{g\tilde{x}}(n) = \mathrm{diag}(\lambda_{g\tilde{x},1}(n),\dots,\lambda_{g\tilde{x},\rho_g}(n),0,\dots,0)$ and $\lambda_{g\tilde{x},1}(n) \ge \dots \ge \lambda_{g\tilde{x},\rho_g}(n) > 0$. Moreover, let $V_g(n) := \Lambda_{\tilde{x}}^{-1/2}(n)Q_{g\tilde{x}}(n)$ denote the invertible matrix that simultaneously diagonalizes $G(n)$ and $\Lambda_{\tilde{x}}(n)$. Since $Q_{zd}$, $Q_{\tilde{x}}(n)$, $V_g(n)$, $U_{\tilde{s}\tilde{x}}(n)$, $\Lambda_{zd}$, $D_1$ and $\Sigma_{z_1z_1}$ are all invertible, for every matrix $C_1(n)$ (or $B(n)$) we can clearly find a unique matrix $\Phi_C$ (correspondingly $\Phi_B$) that satisfies

$$C_1(n) = Q_{zd}\,\Phi_C\,V_g^T(n)\,Q_{\tilde{x}}^T(n), \qquad B(n) = U_{\tilde{s}\tilde{x}}(n)\,\Phi_B\,\Lambda_{zd}^{-1}\,Q_{zd}^T\,D_1^T\,\Sigma_{z_1z_1}^{-1} \tag{13}$$

where $\Phi_C := [\phi_{c,ij}]$ and $\Phi_B$ have sizes $k_1 \times N_1$ and $p \times k_1$, respectively. Using (13), the Lagrangian in (12) becomes

$$J(\Phi_C, \mu) = J_o(n) + \mathrm{tr}(\Lambda_{g\tilde{x}}(n)) + \mu\big(\mathrm{tr}(\Phi_C\Phi_C^T) - P_1\big) - \mathrm{tr}\big((\Lambda_{zd}^{-1} + \Phi_C\Phi_C^T)^{-1}\Phi_C\Lambda_{g\tilde{x}}\Phi_C^T\big). \tag{14}$$

Applying the well-known Karush-Kuhn-Tucker (KKT) conditions (see, e.g., [6, Ch. 5]) that must be satisfied at the minimum of (14), we obtain that the $\Phi_C^o$ minimizing (14) is diagonal with diagonal entries

$$\phi_{c,ii}^o = \bigg[\Big(\frac{\lambda_{g\tilde{x},i}(n)}{\mu^o\,\lambda_{zd,i}}\Big)^{1/2} - \lambda_{zd,i}^{-1}\bigg]^{1/2},\;\; i = 1,\dots,\kappa; \qquad \phi_{c,ii}^o = 0,\;\; i = \kappa+1,\dots,k_1 \tag{15}$$

where $\kappa$ is the maximum integer in $[1, k_1]$ for which $\{\phi_{c,ii}^o\}_{i=1}^{\kappa}$ are strictly positive, and $\mu^o$ is the Lagrange multiplier given by

$$\mu^o = \frac{\Big(\sum_{i=1}^{\kappa}\big(\lambda_{g\tilde{x},i}(n)\,\lambda_{zd,i}^{-1}\big)^{1/2}\Big)^2}{\Big(P_1 + \sum_{i=1}^{\kappa}\lambda_{zd,i}^{-1}\Big)^2}. \tag{16}$$

Summarizing, we can establish that:

Proposition 1: Under (a1), (a2) and for $L = 1$, the optimal matrices $C_1^o(n)$ and $B^o(n)$ that minimize (5) are

$$C_1^o(n) = Q_{zd}\,\Phi_C^o\,V_g^T(n)\,Q_{\tilde{x}}^T(n) \tag{17}$$

$$B^o(n) = \Sigma_{\tilde{s}\tilde{x}_1}(n)\,Q_{\tilde{x}}(n)\,V_g(n)\,\Phi_C^{oT}\big(\Phi_C^o\Phi_C^{oT} + \Lambda_{zd}^{-1}\big)^{-1}\Lambda_{zd}^{-1}\,Q_{zd}^T\,D_1^T\,\Sigma_{z_1z_1}^{-1}$$

where $\Phi_C^o$ is given by (15) and the corresponding Lagrange multiplier $\mu^o$ is specified by (16).

The optimal matrices $C_1^o(n)$ and $B^o(n)$ obtained in Proposition 1 are given in closed form as functions of the state and observation model parameters $A(n)$, $\Sigma_{ww}(n)$, $H_1(n)$ and $\Sigma_{v_1v_1}(n)$, as well as the channel matrix $D_1$, the received noise covariance $\Sigma_{z_1z_1}$ and the transmit power $P_1$. Intuitively, the optimal matrix $C_1^o(n)$ selects the entries of $\tilde{x}_1(n|n-1)$ in which $s(n)$ is strongest and the channel imperfections weakest, and allocates power among them in a water-filling-like manner. Matrix $B^o(n)$ ensures that $B^o(n)\tilde{y}_1(n|n-1)$ is the MMSE estimate of $\tilde{s}(n|n-1)$ based on the non-redundant innovation $\tilde{y}_1(n|n-1)$, and is a function of $C_1^o(n)$. It is also worth mentioning that (15) dictates a minimum power per sensor. Indeed, in order to ensure that $\mathrm{rank}(\Phi_C^o) = \kappa$, we must have $\mu^o < \lambda_{g\tilde{x},\kappa}(n)\lambda_{zd,\kappa}$, which implies that the power must satisfy

$$P_1 > \frac{\sum_{i=1}^{\kappa}\big(\lambda_{g\tilde{x},i}(n)\,\lambda_{zd,i}^{-1}\big)^{1/2}}{\sqrt{\lambda_{g\tilde{x},\kappa}(n)\,\lambda_{zd,\kappa}}} - \sum_{i=1}^{\kappa}\lambda_{zd,i}^{-1}. \tag{18}$$
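A numerical sketch of Proposition 1 follows; the model matrices are hypothetical placeholders, and the loop implements the water-filling rule (15)-(16) by searching for the largest valid $\kappa$. By construction, the power constraint in (5) is then met with equality.

```python
import numpy as np

rng = np.random.default_rng(0)
p, N1, k1, P1 = 2, 4, 2, 5.0                  # hypothetical sizes and power budget

M  = np.array([[2.0, 0.3], [0.3, 1.0]])       # M(n|n-1), hypothetical
H1 = rng.standard_normal((N1, p))             # H_1(n), hypothetical
Sv = 0.5 * np.eye(N1)                         # Sigma_{v1 v1}
D1 = np.diag([1.0, 0.6])                      # channel matrix D_1
Sz = 0.1 * np.eye(k1)                         # Sigma_{z1 z1}

Ssx = M @ H1.T                                # (10)
Sxx = H1 @ M @ H1.T + Sv                      # (11)

def eig_desc(S):
    lam, Q = np.linalg.eigh(S)                # eigh returns ascending order
    return lam[::-1], Q[:, ::-1]

lam_x,  Qx  = eig_desc(Sxx)
lam_zd, Qzd = eig_desc(D1.T @ np.linalg.inv(Sz) @ D1)

G  = Qx.T @ (Ssx.T @ Ssx) @ Qx                # G(n)
Gx = np.diag(lam_x**-0.5) @ G @ np.diag(lam_x**-0.5)
lam_g, Qg = eig_desc(Gx)
Vg = np.diag(lam_x**-0.5) @ Qg                # simultaneous diagonalizer V_g(n)

# Water-filling (15)-(16): largest kappa with strictly positive loadings
for kappa in range(k1, 0, -1):
    mu = (np.sum(np.sqrt(lam_g[:kappa] / lam_zd[:kappa]))**2
          / (P1 + np.sum(1.0 / lam_zd[:kappa]))**2)
    phi2 = np.sqrt(lam_g[:kappa] / (mu * lam_zd[:kappa])) - 1.0 / lam_zd[:kappa]
    if np.all(phi2 > 0):
        break
PhiC = np.zeros((k1, N1))
PhiC[:kappa, :kappa] = np.diag(np.sqrt(phi2))

C1 = Qzd @ PhiC @ Vg.T @ Qx.T                 # C_1^o(n) in (17)
B  = (Ssx @ Qx @ Vg @ PhiC.T
      @ np.linalg.inv(PhiC @ PhiC.T + np.diag(1.0 / lam_zd))
      @ np.diag(1.0 / lam_zd) @ Qzd.T @ D1.T @ np.linalg.inv(Sz))  # B^o(n)

print(np.trace(C1 @ Sxx @ C1.T))              # equals P1 = 5.0 (up to rounding)
```

The final check works because $V_g^T(n)Q_{\tilde{x}}^T(n)\Sigma_{\tilde{x}_1\tilde{x}_1}(n)Q_{\tilde{x}}(n)V_g(n) = I$, so the transmit power reduces to $\mathrm{tr}(\Phi_C^o\Phi_C^{oT}) = P_1$.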
3.2. Dimensionality-Reducing Matrices for Multiple Sensors

In the multi-sensor scenario it turns out that the minimization problem in (5) does not admit a closed-form solution and in general incurs complexity that grows exponentially with $L$; see, e.g., [1]. For this reason, we resort to a block coordinate descent algorithm, which converges at least to a stationary point of the cost in (5). Specifically, we temporarily suppose that the matrices $\{B_l(n)\}_{l=1,l\neq\ell}^{L}$ and $\{C_l(n)\}_{l=1,l\neq\ell}^{L}$ are fixed and satisfy the power constraints $\mathrm{tr}(C_l(n)\Sigma_{\tilde{x}_l\tilde{x}_l}(n)C_l^T(n)) = P_l$ for $l = 1,\dots,L$, $l \neq \ell$. Upon defining

$$\bar{s}_\ell(n) := s(n) - \hat{s}(n|n-1) - \sum_{l=1,l\neq\ell}^{L} B_l(n)\,\tilde{y}_l(n|n-1)$$

the cost in (5) can be written as

$$J(B_\ell(n), C_\ell(n)) = E\big[\|\bar{s}_\ell(n) - B_\ell(n)\tilde{y}_\ell(n|n-1)\|^2\big] \tag{19}$$

which is a function of $C_\ell(n)$ and $B_\ell(n)$. Interestingly, (19) falls under the realm of Proposition 1. This means that when $\{B_l(n)\}_{l=1,l\neq\ell}^{L}$ and $\{C_l(n)\}_{l=1,l\neq\ell}^{L}$ are given, the matrices $B_\ell(n)$ and $C_\ell(n)$ minimizing (19) under the power constraint $\mathrm{tr}(C_\ell(n)\Sigma_{\tilde{x}_\ell\tilde{x}_\ell}(n)C_\ell^T(n)) \le P_\ell$ can be obtained directly from (17), after setting $\tilde{s}(n|n-1) \equiv \bar{s}_\ell(n)$ and $\tilde{y}(n|n-1) = \tilde{y}_\ell(n|n-1)$ in Proposition 1.

In order to apply the corresponding matrix decompositions, we need to update the following covariance matrices:

$$\Sigma_{\bar{s}_\ell\bar{s}_\ell}(n) := E[\bar{s}_\ell(n)\bar{s}_\ell^T(n)] = M(n|n-1) - \sum_{l=1,l\neq\ell}^{L} B_l(n)D_lC_l(n)\Sigma_{\tilde{x}_l\tilde{s}}(n) - \sum_{l=1,l\neq\ell}^{L} \Sigma_{\tilde{s}\tilde{x}_l}(n)C_l^T(n)D_l^TB_l^T(n) + \sum_{l=1,l\neq\ell}^{L} B_l(n)\Sigma_{z_lz_l}B_l^T(n) + \sum_{l=1,l\neq\ell}^{L}\sum_{m=1,m\neq\ell}^{L} B_l(n)D_lC_l(n)\Sigma_{\tilde{x}_l\tilde{x}_m}(n)C_m^T(n)D_m^TB_m^T(n) \tag{20}$$

$$\Sigma_{\bar{s}_\ell\tilde{x}_\ell}(n) := E\big[\bar{s}_\ell(n)\tilde{x}_\ell^T(n|n-1)\big] = \Sigma_{\tilde{s}\tilde{x}_\ell}(n) - \sum_{l=1,l\neq\ell}^{L} B_l(n)D_lC_l(n)\Sigma_{\tilde{x}_l\tilde{x}_\ell}(n). \tag{21}$$

Based on this setup, we have the following proposition:

Proposition 2: Under (a1), (a2) and for given matrices $\{B_l(n)\}_{l=1,l\neq\ell}^{L}$ and $\{C_l(n)\}_{l=1,l\neq\ell}^{L}$ satisfying $\mathrm{tr}(C_l(n)\Sigma_{\tilde{x}_l\tilde{x}_l}(n)C_l^T(n)) = P_l$, the optimal matrices $B_\ell^o(n)$ and $C_\ell^o(n)$ minimizing (19) subject to $\mathrm{tr}(C_\ell(n)\Sigma_{\tilde{x}_\ell\tilde{x}_\ell}(n)C_\ell^T(n)) \le P_\ell$ are provided by Proposition 1, after setting $s(n) - \hat{s}(n|n-1) \equiv \bar{s}_\ell(n)$ and $\tilde{y}(n|n-1) = \tilde{y}_\ell(n|n-1)$, and applying the corresponding (cross-)covariance modifications in (20) and (21).

Based on Proposition 2, a block coordinate descent algorithm follows, in which the FC determines in an alternating fashion (successively for $\ell = 1,\dots,L$) the matrices $C_\ell(n)$ and $B_\ell(n)$; the iterates are guaranteed to attain at least a stationary point (local optimum) of (5).

Algorithm 1 Fusion Center: Solving for Optimal Matrices
  Initialize the matrices $\{C_\ell^{(0)}(n)\}_{\ell=1}^{L}$ and $\{B_\ell^{(0)}(n)\}_{\ell=1}^{L}$ randomly so that $\mathrm{tr}(C_\ell^{(0)}(n)\Sigma_{\tilde{x}_\ell\tilde{x}_\ell}(n)C_\ell^{(0)T}(n)) = P_\ell$.
  for $k = 1, 2, \dots$ do
    for $\ell = 1, \dots, L$ do
      Given $C_1^{(k)}(n), B_1^{(k)}(n), \dots, C_{\ell-1}^{(k)}(n), B_{\ell-1}^{(k)}(n), C_{\ell+1}^{(k-1)}(n), B_{\ell+1}^{(k-1)}(n), \dots, C_L^{(k-1)}(n), B_L^{(k-1)}(n)$, determine $C_\ell^{(k)}(n), B_\ell^{(k)}(n)$ using Proposition 2.
    end for
    if $\big|J(\{B_\ell^{(k)}(n)\}_{\ell=1}^{L}, \{C_\ell^{(k)}(n)\}_{\ell=1}^{L}) - J(\{B_\ell^{(k-1)}(n)\}_{\ell=1}^{L}, \{C_\ell^{(k-1)}(n)\}_{\ell=1}^{L})\big| < \epsilon$ for a prescribed $\epsilon$ then
      break
    end if
  end for

Remark: The novel distributed KF using reduced-dimensionality sensor data is channel- and power-aware. The MMSE-optimal dimensionality-reducing matrices in the single-sensor setup, as well as those in the multi-sensor case, select the most 'informative' entries of $\tilde{x}_\ell(n|n-1)$ and send them through the most reliable subchannels of $D_\ell$. In contrast, the channel-unaware approach in [3] is challenged by catastrophic error accumulation when AWGN is present at the FC. Further, the feedback link from the FC to the sensors enables forming and reducing the dimensionality of the innovation $\tilde{x}_\ell(n|n-1)$, which has smaller dynamic range than $x_\ell(n)$ and thus effects power savings. Finally, the novel reduced-dimensionality KF offers a neat generalization of the stationary results in [1] to nonstationary Markov processes.

The reduced-dimensionality KF scheme is summarized in Algorithm 2, which is run by the FC to track the dynamic process $s(n)$:

Algorithm 2 Fusion Center: Reception and Estimation
  Require: prior estimate $\hat{s}(0|0)$ and ECM $M(0|0)$
  for $n = 1, 2, \dots$ do
    Compute $\hat{s}(n|n-1)$ and $M(n|n-1)$ using (6) and (7)
    Compute the optimal matrices $\{B_\ell^o(n)\}_{\ell=1}^{L}$ and $\{C_\ell^o(n)\}_{\ell=1}^{L}$ using Algorithm 1
    FC transmits $C_\ell^o(n)$ and $\hat{s}(n|n-1)$ to sensor $S_\ell$, for $\ell = 1,\dots,L$
    FC receives $y(n)$ from the sensor transmissions
    FC computes $\hat{s}(n|n)$ and $M(n|n)$ using (8) and (9)
  end for
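The alternating pattern of Algorithm 1 can be illustrated on a toy least-squares surrogate; this is not the Proposition 2 update itself, merely the same sweep-and-converge skeleton with hypothetical data. Each block $B_l$ is minimized exactly with the others held fixed, and the sweeps repeat until the cost decrease drops below a prescribed $\epsilon$.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy surrogate: fit S ~ sum_l B_l Y_l over blocks B_1, ..., B_L by
# block coordinate descent, mirroring the sweep structure of Algorithm 1.
T, p, k, L = 200, 2, 3, 3
Y = [rng.standard_normal((k, T)) for _ in range(L)]   # per-sensor data (hypothetical)
S = rng.standard_normal((p, T))                       # target samples (hypothetical)
B = [np.zeros((p, k)) for _ in range(L)]

def cost():
    R = S - sum(B[l] @ Y[l] for l in range(L))
    return np.mean(np.sum(R**2, axis=0))

prev, eps = np.inf, 1e-10
while prev - cost() > eps:
    prev = cost()
    for l in range(L):                                # inner sweep, as in Algorithm 1
        R = S - sum(B[m] @ Y[m] for m in range(L) if m != l)
        B[l] = R @ Y[l].T @ np.linalg.inv(Y[l] @ Y[l].T)  # exact block minimizer
```

Because each block update solves its subproblem exactly, the cost is nonincreasing; in this unconstrained quadratic toy the iterates reach the global least-squares fit, whereas in the constrained problem (5) only convergence to a stationary point is guaranteed.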
4. SIMULATIONS

Here we test the performance of the reduced-dimensionality KF and compare it with [3]. Consider a WSN deployed, for instance, to measure room temperature. A common zero-acceleration state propagation model is adopted from [2]:

$$s(n) = \begin{pmatrix} T(n) \\ \dot{T}(n) \end{pmatrix} = \begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix} s(n-1) + w(n)$$

with $\alpha = 0.05$ and covariance matrix

$$\Sigma_{ww}(n) = \begin{pmatrix} \alpha^3/3 & \alpha^2/2 \\ \alpha^2/2 & \alpha \end{pmatrix}.$$

The WSN comprises $L = 3$ sensors, and sensor $S_\ell$ measures the temperature

$$x_\ell(n) = T(n) - \beta_\ell\,\dot{T}(n) + v_\ell(n)$$

with $\beta_1 = 0.1$, $\beta_2 = 0.2$, $\beta_3 = 0.3$ and $\sigma_{v_\ell}^2 = 2$ for $\ell = 1, 2, 3$. Each sensor acquires data vectors of size $N_\ell = 10$ and reduces them to $k_\ell = 1$. Further, the reception noise covariance is set to $\Sigma_{z_\ell z_\ell} = \sigma_z^2$ for $\ell = 1, 2, 3$, so that the SNR $:= 10\log_{10}(P_\ell/\sigma_z^2)$ equals 20 dB.

Figure 2 depicts the estimation MSE of the novel scheme along with that of [3], obtained using the trace of $M(n|n)$ (theoretical) as well as Monte Carlo simulations (empirical). Note that in the novel channel-aware approach the MSE reaches steady state, while in the channel-unaware approach [3] the MSE eventually diverges due to the accumulation of errors.

Fig. 2. Estimation MSE at the FC versus time $n$ (channel-aware vs. channel-unaware, theoretical and empirical).

The same conclusions can be drawn from Figure 3, which depicts the error in temperature $T(n) - \hat{T}(n|n)$ compared with the $\pm 3\sqrt{[M(n|n)]_{11}}$ curves for both the channel-aware and channel-unaware approaches. Notice that the estimation error in the channel-aware approach falls within the 3-$\sigma$ curve. However, the error associated with [3] does not comply with the 3-$\sigma$ curve, since that scheme does not account for the channel effects and noise.

Fig. 3. The error in the first component of the estimate, $T(n) - \hat{T}(n|n)$, compared with the $\pm 3\sqrt{[M(n|n)]_{11}}$ bounds (3-$\sigma$ curve), for the channel-aware (top) and channel-unaware (bottom) approaches.

5. CONCLUSIONS

We derived algorithms for tracking nonstationary dynamic processes with dimensionality-reduced data collected by power-limited wireless sensors linked with a fusion center. We dealt with non-ideal channel links characterized by multiplicative fading and additive noise. The linear dimensionality-reduction matrices were derived at the FC to account for the sensors' limited power and the noisy links, as well as to minimize the estimation MSE. In the single-sensor case, MSE-optimal schemes were developed in closed form, while an algorithm relying on block coordinate descent iterations was developed for multi-sensor tracking. By allowing the FC to feed back state predictions, we achieve power-efficient tracking.

6. REFERENCES

[1] I. D. Schizas, G. B. Giannakis, and Z.-Q. Luo, "Optimal Dimensionality Reduction for Multi-Sensor Fusion in the Presence of Fading and Noise," Proc. of ICASSP, vol. 4, pp. 869-872, Toulouse, France, May 2006.

[2] A. Ribeiro, G. B. Giannakis, and S. I. Roumeliotis, "SOI-KF: Distributed Kalman Filtering with Low-Cost Communications Using the Sign of Innovations," IEEE Trans. on Signal Processing, vol. 54, pp. 4782-4795, December 2006.

[3] H. Chen, K. Zhang, and X. R. Li, "Optimal Data Compression for Multisensor Target Tracking with Communication Constraints," Proc. of Conf. on Decision and Control, vol. 3, pp. 2650-2655, Bahamas, December 2004.

[4] M. Kisialiou and Z.-Q. Luo, "Reducing Power Consumption in a Sensor Network by Information Feedback," Proc. of European Signal Processing Conf., Florence, Italy, September 2006.

[5] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, 1993.

[6] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.