Special Issue
Consensus-based sparse signal reconstruction algorithm for wireless sensor networks
International Journal of Distributed Sensor Networks 2016, Vol. 12(9) © The Author(s) 2016 DOI: 10.1177/1550147716666290 ijdsn.sagepub.com
Bao Peng1, Zhi Zhao2, Guangjie Han3 and Jian Shen4
Abstract
This article presents a distributed Bayesian reconstruction algorithm for wireless sensor networks that reconstructs sparse signals based on variational sparse Bayesian learning and a consensus filter. The proposed approach is able to address wireless sensor network applications in a fusion-center-free scenario. In the proposed approach, each node calculates local information quantities using its local measurement matrix and measurements. A consensus filter is then used to diffuse the local information quantities to other nodes and approximate the global information at each node. The signals are then reconstructed by variational approximation with the resultant global information. Simulation results demonstrate that the proposed distributed approach converges to its centralized counterpart and has good recovery performance.

Keywords
Compressive sensing, sparse, variational Bayesian, consensus filter, wireless sensor networks
Date received: 31 May 2016; accepted: 29 July 2016

Academic Editor: Wei Yu
Introduction
The ongoing concerns about the environment and global warming pose more challenges in meeting the increasing demand for deployment of wireless communication networks.1,2 Green communication (GC) has become a new trend in the design and operation of wireless communication networks. In future communication systems, the Internet of Things (IoT) will play an important role. As a key enabling technology of the IoT, wireless sensor networks (WSNs) help the IoT to flourish by fusing sensing and wireless communication.3,4 Generally, a WSN consists of a large number of sensor nodes with low processing capability, limited power, low storage capacity, and unreliable communication over short-range radio links.5,6 Based on WSNs, the potential applications of the IoT in industrial automation, habitat monitoring, and smart cities are numerous and diverse.7–10 However, the energy demand of the IoT will increase dramatically in the near future considering its widespread interest and adoption by various organizations, which will lead to a higher carbon footprint and other environmental issues.

The recently developed compressive sensing (CS) theory11,12 is a new sampling paradigm that can acquire the information contained in large-scale data using far fewer samples than are required by the Nyquist sampling theorem.
1School of Electronic and Communication, Shenzhen Institute of Information Technology, Shenzhen, China
2School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China
3Department of Information and Communication Systems, Hohai University, Changzhou, China
4School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China

Corresponding author:
Guangjie Han, Department of Information and Communication Systems, Hohai University, No. 200, Jinling North Road, Changzhou 213022, China.
Email: [email protected]
Creative Commons CC-BY: This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (http://www.uk.sagepub.com/aboutus/openaccess.htm).
By exploiting sparsity, an inherent characteristic of many natural signals, CS enables a signal to be stored in few samples and subsequently recovered accurately. Moreover, CS has been extensively applied in WSNs, since the signals in many applications exhibit sparsity.

Sparse Bayesian learning (SBL) was introduced in Tipping13 and has become a popular method for sparse signal recovery in CS.14,15 In SBL, the sparse signal recovery problem is formulated from a Bayesian perspective, while the sparsity information is exploited by assuming a hierarchical sparse prior on the signal of interest. To recover the values of the model parameters, an inference algorithm is derived based on the well-known Type-II maximum likelihood, which assumes that the hyperpriors are uninformative.16 In contrast, a fully Bayesian treatment was introduced in Bishop and Tipping17 with the variational rendition of sparse Bayesian learning (VSBL), where both the model parameters and the hyperparameters can be estimated from their posterior distributions.16

On the other hand, due to its high fault tolerance and scalability, distributed processing is becoming increasingly popular in WSN applications. Different from centralized approaches that rely on a fusion center (FC), distributed processing requires no central coordinator and only single-hop communications among neighbors that aim to achieve consensus on local estimates. However, most sparse signal recovery algorithms operate in a centralized manner. Recently, distributed processing for CS applications has received considerable attention.18–21 Moreover, the VSBL algorithm is a centralized method and may not be used directly for sparse signal reconstruction in a distributed WSN.

Average-consensus algorithms have lately been investigated as a family of low-complexity iterative distributed algorithms, in which the sensors in a group communicate with each other to reach a consensus.22 In more detail, each sensor receives information from the others and adjusts its own information state with the goal of reaching an agreement in a scalable and fault-tolerant manner.23 Consensus was initially elaborated in Tsitsiklis et al.24 and has received considerable attention in many fields due to its wide range of applications, such as load balancing in parallel computation,25 coordination of autonomous agents, distributed control,26 and data fusion.27–29

In this article, we develop a distributed sparse signal reconstruction algorithm using probabilistic graphical models in the Bayesian framework. First, three global information quantities are specifically designed for distributed sparse Bayesian inference from the centralized update equations. Then, several average-consensus iterations are used to reach a consensus on the global information quantities in each local variational Bayesian (VB) step. In comparison with the centralized VSBL algorithm, the proposed algorithm allows each sensor to reconstruct the sparse signal in parallel with local information and moderate inter-node communication.

The rest of this article is structured as follows: section ‘‘Background’’ reviews the fundamentals of compressive sampling and introduces sparse signal recovery using SBL. The system model is described in section ‘‘Problem statement and system model.’’ The centralized variational Bayesian inference for the system model is developed in section ‘‘Variational approximation.’’ Then, the proposed distributed sparse signal reconstruction is presented in section ‘‘Distributed variational SBL algorithm.’’ Numerical results are provided in section ‘‘Simulations,’’ followed by conclusions in the final section.
Notation
Throughout this article, we use $b$, $\mathbf{B}$, and $\mathbf{b}$ for scalars, matrices, and column vectors, respectively. The superscripts $(\cdot)^T$ and $(\cdot)^{-1}$ denote the transpose and the inverse of a matrix, respectively. $E_{p(x)}(\cdot)$ denotes the expectation with respect to $p(x)$. $\mathcal{U}_{\mathrm{int}}[a, b]$ and $\mathcal{N}(m, \Sigma)$ denote the integer uniform distribution on the interval $[a, b]$ and the multivariate Gaussian distribution with mean $m$ and covariance $\Sigma$, respectively. $I_N$ and $\mathrm{tr}(\cdot)$ denote the $N \times N$ identity matrix and the trace of a matrix, respectively. $\|\cdot\|_0$, $\|\cdot\|_p$, and $\|\cdot\|_2$ denote the $\ell_0$-norm, $\ell_p$-norm, and $\ell_2$-norm, respectively.
Background
In the following, we briefly review the principles of CS and SBL,13,30 the latter being a centralized sparse signal recovery algorithm. Let $s \in \mathbb{R}^N$ be an original signal and consider the following noisy measurement model

$$y = \Phi s + v \tag{1}$$

where $y \in \mathbb{R}^M$ is the measurement vector, $\Phi \in \mathbb{R}^{M \times N}$ denotes the measurement matrix, and $v \in \mathbb{R}^M$ denotes the measurement noise. For CS signal reconstruction, $s$ is assumed to have a sparse representation $x \in \mathbb{R}^N$ on some basis $\Psi \in \mathbb{R}^{N \times N}$, that is, $s = \Psi x$. According to the characteristics of the signal, the basis has a predefined structure, for example, a wavelet basis or a Fourier basis. When $\|x\|_0 = s \ll N$, the signal can be considered sparse over the basis. Therefore, equation (1) can be rewritten as

$$y = Hx + v \tag{2}$$
where $H = \Phi\Psi \in \mathbb{R}^{M \times N}$ is also referred to as the equivalent measurement matrix. In order to recover $x$ from the noisy measurement $y$, the following optimization problem can be used
$$\min_x \|x\|_0 \quad \text{s.t.} \quad \|Hx - y\| \le \epsilon \tag{3}$$
where $\epsilon > 0$ is an estimate of the measurement noise level. Since the above optimization problem (3) is non-deterministic polynomial (NP)-hard and cannot be solved efficiently, conventional CS approaches generally resort to solving the following optimization problem

$$\min_x \frac{1}{2}\|Hx - y\|^2 + \lambda \|x\|_p \tag{4}$$

where $0 < p \le 1$ and $\lambda > 0$ are regularization parameters. When $p = 1$, the problem in equation (4) becomes convex, and the solution of equation (3) can be obtained with overwhelming probability. For the $p < 1$ case, equation (4) is non-convex but is a closer approximation to sparsity than the $p = 1$ case and shows superior performance.31,32
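For the convex case $p = 1$, problem (4) can be solved with simple first-order methods. The sketch below is not from the paper: it uses iterative soft-thresholding (ISTA) as one standard solver, and the regularization weight and iteration count are illustrative assumptions.

```python
import numpy as np

def ista(H, y, lam=0.05, n_iter=200):
    """Minimize 0.5*||H x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(H, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = x - H.T @ (H @ x - y) / L       # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

Each iteration performs a gradient step on the quadratic term followed by the proximal (soft-thresholding) operator of the $\ell_1$ penalty.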
From a Bayesian perspective, the CS problem can also be formulated by SBL, whose close relationship to the non-convex $\ell_p$-norm minimization problem is revealed in Wipf and colleagues.33,34 In the SBL framework, a zero-mean Gaussian prior distribution is considered

$$p(x|\Gamma) = \prod_{i=1}^{N} \mathcal{N}(x_i | 0, \gamma_i) \tag{5}$$

where $\Gamma \in \mathbb{R}^{N \times N}$ is a diagonal matrix composed of $N$ hyperparameters $\gamma_i\ (i = 1, \ldots, N)$ controlling the prior variance of each component. The rationale for using this prior has been elaborated in Tipping13 and Wipf and colleagues.30,34 With uniform hyperpriors $p(\gamma_i)$ and $p(\sigma^2)$, one can infer these hyperparameters by maximizing

$$\log p(\Gamma, \sigma^2 | y) \propto \log p(y | \Gamma, \sigma^2) = \log \int p(y | x, \sigma^2)\, p(x | \Gamma)\, dx \tag{6}$$

As per the analysis in Tipping,13 solving equation (6) is equivalent to minimizing the following cost function

$$\mathcal{L} = \log |\Sigma| + y^T \Sigma^{-1} y \tag{7}$$

where $\Sigma = \sigma^2 I_M + H \Gamma H^T$. In Tipping,13 the expectation maximization (EM) algorithm is employed to solve equation (7). Given these hyperparameters, $x$ can be obtained by maximizing the posterior distribution

$$\hat{x} = \arg\max_x p(x | y, \Gamma, \sigma^2) = \arg\max_x p(y | x, \sigma^2)\, p(x | \Gamma) = \Gamma H^T \Sigma^{-1} y \tag{8}$$

In Wipf and colleagues,30,34 the authors provide the theoretical justification for applying SBL to sparse signal recovery and demonstrate its superior performance compared with other algorithms.

Problem statement and system model
A network with $K$ nodes modeled by an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$ of order $K$ is considered in this study. In the considered system, the nodes in $\mathcal{V} = \{1, 2, \ldots, K\}$ represent the sensors, an edge $(k, l)$ in the set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ models that sensor $l$ can transmit information to sensor $k$, and $A = [a_{kl}]_{K \times K}$ is the adjacency matrix with non-negative elements $a_{kl}$. An edge of $\mathcal{G}$ is positive, that is, $a_{kl} > 0 \Leftrightarrow (k, l) \in \mathcal{E}$. Node $l$ is called a neighbor of node $k$ if $(k, l) \in \mathcal{E}$ and $l \ne k$. The neighbor set of node $k$ is denoted by $\mathcal{N}_k$. Each node is able to process the data that are stored locally and to exchange messages with its neighbors. An example graph is illustrated in Figure 1.

Figure 1. An example of network structure.

Assuming that each node observes a linear combination of the unknown sparse signal $x \in \mathbb{R}^N$, the measurement corrupted by some noise at node $k$ is as follows

$$y_k = H_k x + e_k \tag{9}$$

where $y_k \in \mathbb{R}^{m_k}$ is the local measurement of node $k$, $m_k$ is the number of simultaneous measurements made at node $k$, $H_k \in \mathbb{R}^{m_k \times N}$ is the local measurement matrix of node $k$, and $e_k \in \mathbb{R}^{m_k}$ is the measurement noise at node $k$. The measurements at all $K$ nodes are stacked as follows. Let $M$ be the total number of measurements at all nodes, that is, $M = \sum_{k=1}^{K} m_k$. The global measurement vector $Y \in \mathbb{R}^M$, the global observation matrix $H \in \mathbb{R}^{M \times N}$, and the global observation noise vector $e \in \mathbb{R}^M$ are defined as

$$Y = \begin{bmatrix} y_1 \\ \vdots \\ y_K \end{bmatrix}, \quad H = \begin{bmatrix} H_1 \\ \vdots \\ H_K \end{bmatrix}, \quad e = \begin{bmatrix} e_1 \\ \vdots \\ e_K \end{bmatrix} \tag{10}$$

Then the global observation model is given by

$$Y = Hx + e \tag{11}$$
where $e \sim \mathcal{N}(0, \beta^{-1} I_M)$. The elements of $H$ are drawn from a Gaussian distribution with zero mean and variance $1/M$. Note that this construction of $H$ satisfies the restricted isometry property (RIP) used in the design of CS schemes. From the measurement model (11) and the noise statistics, the measurement likelihood function is

$$p(Y | x, \beta) = \prod_{k=1}^{K} p(y_k | x, \beta) = \prod_{k=1}^{K} \frac{\beta^{m_k/2}}{(2\pi)^{m_k/2}} \exp\left(-\frac{\beta}{2} \|y_k - H_k x\|^2\right) \tag{12}$$
Moreover, an appropriate conjugate prior with respect to equation (12) is further placed on the parameter $\beta$ so as to complement the likelihood. The prior for the noise precision $\beta$ is selected to be a Gamma distribution with parameters $R$ and $d$, that is

$$p(\beta | R, d) = \mathcal{G}(\beta | R, d) = \frac{d^R \beta^{R-1} \exp[-d\beta]}{\Gamma(R)} \tag{13}$$
To reflect our knowledge about the sparsity of $x$, a hierarchically heavy-tailed prior is selected for the signal $x$ next. There are two levels of hierarchy in the model. At the first level, a Gaussian prior is attached to $x$, that is

$$p(x | \alpha) = \mathcal{N}(x | 0, A^{-1}) = \prod_{i=1}^{N} p(x_i | \alpha_i) = \prod_{i=1}^{N} (2\pi)^{-1/2} \alpha_i^{1/2} \exp\left(-\frac{1}{2} x_i^2 \alpha_i\right) \tag{14}$$

where $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_N]^T$ is a vector of precision parameters of the $x_i$s, $A = \mathrm{diag}(\alpha)$, and the $x_i$s have been assumed to be a priori independent. Next, at the second level, the precision parameters $\alpha_i$ are assigned a prior distribution which is assumed to follow a Gamma distribution, expressed as

$$p(\alpha_i | a_i, b_i) = \mathcal{G}(\alpha_i | a_i, b_i) = \frac{b_i^{a_i} \alpha_i^{a_i - 1} \exp[-b_i \alpha_i]}{\Gamma(a_i)} \tag{15}$$

So far, the system model has been developed. For the Bayesian model described above, we illustrate the directed acyclic graph (DAG) in Figure 2, where $a = [a_1, \ldots, a_N]^T$ and $b = [b_1, \ldots, b_N]^T$. The centralized variational approximation method for the developed hierarchical Bayesian model is presented in the next section.

Figure 2. DAG of the hierarchical Bayesian model.

Variational approximation
According to Bayes' theorem

$$p(u | Y, \phi) = \frac{p(Y | u)\, p(u | \phi)}{p(Y | \phi)} \tag{16}$$

where $u = [x_1, \ldots, x_N, \beta, \alpha_1, \ldots, \alpha_N]^T$ are the unknown parameters and hidden variables of the model, and $\phi = [a_1, \ldots, a_N, b_1, \ldots, b_N, d, R]^T$ are the hyperparameters of the imposed priors. In order to estimate the values of the hyperparameters $\phi$, the following log-likelihood is maximized

$$\log p(Y | \phi) = F(q(u)) + \mathrm{KL}(q(u) \,\|\, p(u | Y, \phi)) \tag{17}$$

where $F$ is the free energy given by the following expression

$$F(q(u), \phi) = \int q(u) \log \frac{p(Y | u)\, p(u | \phi)}{q(u)} \, du \tag{18}$$

and

$$\mathrm{KL}(q(u) \,\|\, p(u | Y, \phi)) = \int q(u) \log \frac{q(u)}{p(u | Y, \phi)} \, du \tag{19}$$

where KL is the Kullback–Leibler divergence between the true posterior $p(u | Y, \phi)$ and the variational distribution $q(u)$. Note that the KL divergence is greater than or equal to zero and is minimized when $q(u) = p(u | Y, \phi)$. Hence, $F(q(u), \phi)$ can be regarded as an evidence lower bound, and minimizing the KL divergence is equivalent to maximizing the evidence lower bound. From an optimization point of view, the parameters of $q(u)$ are chosen so that the lower bound is maximized. However, due to the complexity of the model, computing the posterior of interest directly is intractable. Thus, we resort to a simpler variational free form $q(x, \beta, \alpha)$ to approximate the posterior in equation (16). Based on mean-field theory from statistical physics, $q(x, \beta, \alpha)$ can be fully factorized into a family of $q$-distributions with respect to the parameters as follows
$$q(x, \beta, \alpha) = q(x)\, q(\beta)\, q(\alpha) = \prod_{i=1}^{N} q(x_i)\; q(\beta) \prod_{i=1}^{N} q(\alpha_i) \tag{20}$$

that is, all model parameters are assumed to be a posteriori independent. This fully factorized form of the distribution $q(x, \beta, \alpha)$ turns out to be computationally tractable. In fact, if $u_i$ denotes the $i$th component of the vector $u = [x_1, \ldots, x_N, \beta, \alpha_1, \ldots, \alpha_N]^T$ containing the parameters of the Bayesian hierarchical model, then $u_{-i}$ refers to all parameters after removing the $i$th component. Maximizing the free energy in equation (17) is realized by computing the functional derivative with respect to each of the $q(\cdot)$ distributions while fixing the other distributions and setting $\partial F(q)/\partial q(\cdot) = 0$. The solution of $\partial F(q)/\partial q(\cdot) = 0$ can be expressed as follows35

$$\log q(u_i) \propto E_{q(u_{-i})}[\log (p(Y | u)\, p(u | \phi))] \tag{21}$$
where $E_{q(u_{-i})}$ denotes the expectation with respect to $\prod_{j \ne i} q(u_j)$. Equation (21) cannot be solved directly, as all the factors $q(u_i)$ are interdependent. The coupled $q(u_i)$ are therefore obtained by an iterative optimization process, in which the factors are initialized appropriately and each one of them is updated in turn. Moreover, the lower bound is known to increase gradually until convergence in this process. The log joint distribution of $Y$ and $u$ is expressed as follows

$$\log(p(Y | u)\, p(u | \phi)) = \sum_{k=1}^{K} \log p(y_k | x, \beta) + \sum_{i=1}^{N} \log p(x_i | \alpha_i) + \sum_{i=1}^{N} \log p(\alpha_i | a_i, b_i) + \log p(\beta | d, R) \tag{22}$$

Due to the conjugacy properties of the chosen distributions, as mentioned previously, the general solution (21) can be derived analytically as follows

$$q(x) \propto \exp\left(E_{q(\alpha) q(\beta)}[\log(p(Y | u)\, p(u | \phi))]\right) \propto \exp\left(E_{q(\beta)}\left[\sum_{k=1}^{K} \log p(y_k | x, \beta)\right] + E_{q(\alpha)}\left[\sum_{i=1}^{N} \log p(x_i | \alpha_i)\right]\right) \propto \mathcal{N}(x | \mu, \Sigma) \tag{23}$$

$$q(\beta) \propto \exp\left(E_{q(x) q(\alpha)}[\log(p(Y | u)\, p(u | \phi))]\right) \propto \exp\left(E_{q(x)}\left[\sum_{k=1}^{K} \log p(y_k | x, \beta)\right] + \log p(\beta | d, R)\right) \propto \mathcal{G}(\beta | \tilde{R}, \tilde{d}) \tag{24}$$

$$q(\alpha) \propto \exp\left(E_{q(x) q(\beta)}[\log(p(Y | u)\, p(u | \phi))]\right) \propto \exp\left(E_{q(x)}\left[\sum_{i=1}^{N} \log p(x_i | \alpha_i)\right] + \sum_{i=1}^{N} \log p(\alpha_i | a_i, b_i)\right) \propto \prod_{i=1}^{N} \mathcal{G}(\alpha_i | \tilde{a}_i, \tilde{b}_i) \tag{25}$$

where

$$\Sigma = \left(\mathrm{diag}(E[\alpha_i]) + E[\beta] \sum_{k=1}^{K} H_k^T H_k\right)^{-1} \tag{26}$$

$$\mu = E[\beta]\, \Sigma \sum_{k=1}^{K} H_k^T y_k \tag{27}$$

$$\tilde{a}_i = a + \frac{1}{2} \tag{28}$$

$$\tilde{b}_i = b + \frac{E[x_i^2]}{2} \tag{29}$$

$$\tilde{R} = R + \frac{M}{2} \tag{30}$$

$$\tilde{d} = d + \frac{1}{2} \sum_{k=1}^{K} y_k^T y_k - E[x]^T \sum_{k=1}^{K} H_k^T y_k + \frac{1}{2} \mathrm{tr}\left(\sum_{k=1}^{K} H_k^T H_k \left(\Sigma + \mu \mu^T\right)\right) \tag{31}$$

The required moments can be easily evaluated using the following results

$$E[x] = \mu, \quad E[x_i^2] = \Sigma_{ii} + \mu_i^2, \quad E[\alpha_i] = \frac{\tilde{a}_i}{\tilde{b}_i}, \quad E[\beta] = \frac{\tilde{R}}{\tilde{d}} \tag{32}$$

From the above, it is noted that these formulas can be used to compute the parameters of the model in a centralized manner when all the measurements can be gathered at a FC. However, in the distributed scenario, there is no FC in the network and the formulas derived above cannot be implemented directly. In order to develop the distributed algorithm for VSBL, the formulas discussed so far will be reformulated in the following section such that VSBL can be used for distributed sparse learning in a WSN.
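Before moving to the distributed reformulation, the following is a minimal numerical sketch (not from the paper) of the centralized update cycle (26)–(32); the hyperparameter initializations of $10^{-6}$ (matching Algorithm 1 below) and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def centralized_vsbl(H_list, y_list, a=1e-6, b=1e-6, R=1e-6, d=1e-6, n_iter=50):
    """Centralized VSBL: cycles the variational updates (26)-(32)."""
    N = H_list[0].shape[1]
    M = sum(Hk.shape[0] for Hk in H_list)
    HtH = sum(Hk.T @ Hk for Hk in H_list)                  # sum_k H_k^T H_k
    Hty = sum(Hk.T @ yk for Hk, yk in zip(H_list, y_list)) # sum_k H_k^T y_k
    yty = sum(yk @ yk for yk in y_list)                    # sum_k y_k^T y_k
    E_alpha = np.ones(N)                                   # E[alpha_i]
    E_beta = 1.0                                           # E[beta]
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(E_alpha) + E_beta * HtH)   # (26)
        mu = E_beta * Sigma @ Hty                                # (27)
        Ex2 = np.diag(Sigma) + mu ** 2                           # (32)
        a_t = a + 0.5                                            # (28)
        b_t = b + Ex2 / 2.0                                      # (29)
        R_t = R + M / 2.0                                        # (30)
        d_t = (d + 0.5 * yty - mu @ Hty
               + 0.5 * np.trace(HtH @ (Sigma + np.outer(mu, mu))))  # (31)
        E_alpha = a_t / b_t                                      # (32)
        E_beta = R_t / d_t
    return mu, Sigma
```

Each pass is one round of the coordinate updates (23)–(25); in practice the loop can also be terminated once $\mu$ changes negligibly, since the free energy (17) increases monotonically.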
Distributed variational SBL algorithm
Since the measurements are divided among the K different nodes, the following global quantities can be defined by inspecting equations (26)–(31)
$$A = \sum_{k=1}^{K} A_k, \quad B = \sum_{k=1}^{K} B_k, \quad C = \sum_{k=1}^{K} C_k \tag{33}$$

where the local quantities $A_k$, $B_k$, and $C_k$ are defined as

$$A_k = H_k^T H_k, \quad B_k = H_k^T y_k, \quad C_k = y_k^T y_k \tag{34}$$
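As a quick sanity check (not from the paper), the sums (33) of the local quantities (34) coincide with the corresponding quantities computed from the stacked global model (10); the dimensions and random data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, m_k = 6, 32, 8
H_list = [rng.normal(size=(m_k, N)) for _ in range(K)]
y_list = [rng.normal(size=m_k) for _ in range(K)]

# Local quantities (34) summed into the global quantities (33)
A = sum(Hk.T @ Hk for Hk in H_list)
B = sum(Hk.T @ yk for Hk, yk in zip(H_list, y_list))
C = sum(yk @ yk for yk in y_list)

# The same quantities computed from the stacked model (10): Y = Hx + e
H = np.vstack(H_list)
Y = np.concatenate(y_list)
assert np.allclose(A, H.T @ H)
assert np.allclose(B, H.T @ Y)
assert np.isclose(C, Y @ Y)
```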
Hence, equations (26)–(31) can be reformulated as follows

$$\Sigma = (\mathrm{diag}(E[\alpha_i]) + E[\beta] A)^{-1} \tag{35}$$

$$\mu = E[\beta]\, \Sigma B \tag{36}$$

$$\tilde{a}_i = a + \frac{1}{2} \tag{37}$$

$$\tilde{b}_i = b + \frac{E[x_i^2]}{2} \tag{38}$$

$$\tilde{R} = R + \frac{M}{2} \tag{39}$$

$$\tilde{d} = d + \frac{1}{2} C - E[x]^T B + \frac{1}{2} \mathrm{tr}\left(A \left(\Sigma + \mu \mu^T\right)\right) \tag{40}$$

It should be noted that all the required parameters can be computed based on the variational approximation method using the global quantities. In the sense of distributed calculation, each sensor communicates with its neighbors and operates accordingly. In the distributed variational sparse Bayesian learning (DVSBL) algorithm, the global quantities $A$, $B$, and $C$ cannot be calculated locally. However, one can compute them by averaging the local quantities of all nodes in equation (34), because the global quantities in equation (34) can be redefined as follows

$$A = \frac{1}{K} \sum_{k=1}^{K} H_k^T H_k, \quad B = \frac{1}{K} \sum_{k=1}^{K} H_k^T y_k, \quad C = \frac{1}{K} \sum_{k=1}^{K} y_k^T y_k \tag{41}$$

It is easy to see that the above redefinition has no impact on the parameter approximations considered in equations (26)–(31). After inspecting the average formulas in equation (41), the average-consensus filter suggested in Kingston and Beard36 can be employed to approximate the global information quantities. In particular, the local information quantities of each sensor are exchanged with its neighbors, and each sensor's estimates of the global information quantities then change depending on the local information quantities received from the others through the consensus filter. Hence, the DVSBL algorithm can be developed by employing such an average-consensus filter. According to Kingston and Beard,36 a consensus filter can be formulated in the following compact form

$$\xi(t + 1) = (I - \epsilon L)\, \xi(t) \tag{42}$$

where $\epsilon > 0$ is a step size and $L$ is the Laplacian matrix of the graph, defined as

$$\ell_{ij} = \begin{cases} \sum_{k=1, k \ne i}^{K} a_{i,k}, & j = i \\ -a_{i,j}, & j \ne i \end{cases} \tag{43}$$

The discrete-time, per-node form of the consensus filter suggested in Kingston and Beard36 is as follows

$$\xi_k(t + 1) = \xi_k(t) + \sum_{l=1}^{K} b_{kl}(t)\, (\xi_l(t) - \xi_k(t)) \tag{44}$$

where $t$ denotes the iteration step of the discrete-time consensus filter and $b_{kl}(t)$ is the linear weight on $\xi_l$ at node $k$. Thus, the filtering algorithm can be carried out in a distributed manner if the averages given by equation (41) are obtained at every node. Here, $A_k$, $B_k$, and $C_k$ are treated as the input states of the consensus filter in equation (44), whose outputs $\hat{A}_k$, $\hat{B}_k$, and $\hat{C}_k$ asymptotically track the values of $A$, $B$, and $C$, respectively. Then, the hyperparameters at each node are approximated using its estimated global quantities. Thus

$$\Sigma_k^t = \left(\mathrm{diag}(E^t[\alpha_i]) + E^t[\beta]\, K \hat{A}_k\right)^{-1} \tag{45}$$

$$\mu_k^t = E^t[\beta]\, \Sigma_k^t\, K \hat{B}_k \tag{46}$$

$$\tilde{a}_i^t = a + \frac{1}{2} \tag{47}$$

$$\tilde{b}_i^t = b + \frac{E[x_i^2]}{2} \tag{48}$$

$$\tilde{R}^t = R + \frac{M}{2} \tag{49}$$

$$\tilde{d}^t = d + \frac{1}{2} K \hat{C}_k - E[x]^T K \hat{B}_k + \frac{1}{2} \mathrm{tr}\left(K \hat{A}_k \left(\Sigma_k^t + \mu_k^t (\mu_k^t)^T\right)\right) \tag{50}$$

It can be noted from the above equations that each node should communicate with its neighbors several times before implementing the variational approximation.
Algorithm 1. Distributed variational Bayesian sparse learning (DVSBL) algorithm.
1: Initialization: input measurement vectors $\{y_k\}$, measurement matrices $\{H_k\}$, and weight matrix $N$; initialize $\Sigma_k$ by the identity matrix, $\mu_k$ by $0$, and $a$, $b$, $R$, $d$ with $10^{-6}$.
2: for $t = 1, \ldots, L$ do
3: Compute $A_k = H_k^T H_k$, $B_k = H_k^T y_k$, $C_k = y_k^T y_k$ and send $\{A_k, B_k, C_k\}$ to each node $l \in \mathcal{N}_k$.
4: Receive $\{A_l, B_l, C_l\}$ from each node $l \in \mathcal{N}_k$ and fuse the local information quantities using the consensus filter (equation (44)).
5: Compute equations (45)–(50) at each node.
6: If the stop criterion is met, return $x_k\ (k = 1, \ldots, K)$; otherwise, continue with the next iteration.
7: end for
Figure 3. Algorithm architecture.
However, the iterative exchange of messages among the nodes inevitably consumes time and energy, and it is hard to know in advance how many iterations are necessary to achieve consensus in different application scenarios; these are the main limitations of the proposed distributed VSBL. Another concern is the design of the weight matrix $N = [b_{kl}(t)]_{K \times K}$ that leads to a fast convergence rate of the average consensus. Usually, the weight matrix $N$ must be subject to the constraints of algebraic connectivity and graph topology

$$\sum_{k=1}^{K} b_{kl}(t) = 1, \quad \sum_{l=1}^{K} b_{kl}(t) = 1, \quad b_{kl}(t) > 0 \tag{51}$$

Moreover, it has been shown that the second smallest eigenvalue of $N$ determines the convergence speed of the algorithm in the case of a fixed network topology.37 Similar results have been obtained for time-varying network topologies,38 and many algorithms have been proposed to accelerate the convergence in Aysal et al.39 and Sardellitti et al.40 In this article, the Metropolis weights are used in the simulations41

$$b_{kl}(t) = \begin{cases} \dfrac{1}{1 + \max\{d_k(t), d_l(t)\}}, & (k, l) \in \mathcal{E} \\ 1 - \sum_{l \in \mathcal{N}_k(t)} b_{kl}(t), & k = l \\ 0, & \text{otherwise} \end{cases} \tag{52}$$

where $d_k(t)$ denotes the degree of node $k$ at time $t$. The performance of the proposed algorithm is validated using simulations, and the results are presented in the next section. Moreover, the complete algorithm is given in Algorithm 1 (Figure 3).
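To illustrate the fusion step of Algorithm 1, the sketch below (not from the paper) builds the Metropolis weights (52) from a 0/1 adjacency matrix and runs the consensus iteration (44); the three-node topology and scalar states are illustrative assumptions.

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis weights (52) from a 0/1 adjacency matrix with zero diagonal."""
    K = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            if adj[k, l]:
                W[k, l] = 1.0 / (1.0 + max(deg[k], deg[l]))
        W[k, k] = 1.0 - W[k].sum()          # self-weight closes each row to 1
    return W

def consensus_step(W, states):
    """One iteration of the consensus filter (44) on per-node states."""
    K = len(states)
    return [states[k] + sum(W[k, l] * (states[l] - states[k])
                            for l in range(K) if l != k)
            for k in range(K)]

# Toy usage: per-node scalar states converge toward their network average
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
W = metropolis_weights(adj)
xi = [np.array(v) for v in (1.0, 4.0, 7.0)]
for _ in range(50):
    xi = consensus_step(W, xi)
print([float(v) for v in xi])               # all values close to 4.0
```

In the DVSBL algorithm, the same step is applied element-wise to the matrix- and vector-valued states $A_k$, $B_k$, and $C_k$ instead of scalars.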
Simulations
First, a sensor network with six nodes is used to verify the performance of the proposed algorithm for distributed WSNs (Figure 4). Without loss of generality, the considered six-node network is represented by an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$ with a set of nodes $\mathcal{V} = \{1, 2, 3, 4, 5, 6\}$, a set of edges $\mathcal{E} = \{(1, 1), (1, 2), (1, 3), (2, 3), (2, 4), (2, 5), (3, 3), (3, 5), (3, 6), (4, 4), (4, 5), (5, 5), (5, 6), (6, 6)\}$, and an adjacency matrix

$$A = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix} \tag{53}$$

Figure 4. Random topology.

Figure 5. Convergence for local information quantity Ck.

Figure 6. Normalized MSE versus iteration.

In the considered example, the signal $x \in \mathbb{R}^{256}$ is assumed to be sparse itself. There are altogether 10 nonzero elements $x_{\{i\}} \ne 0$ in the sparse signal, where $i$ is the index of the support and $x_{\{i\}}$ denotes the value of the support. Here, the index and value of the support are unknown and sampled as $i \sim \mathcal{U}_{\mathrm{int}}[1, 256]$ and $x_{\{i\}} \sim \mathcal{N}(0, 5^2)$, respectively. The measurement matrix $H_k \in \mathbb{R}^{12 \times 256}$ of sensor node $k$ is constructed with entries sampled from $\mathcal{N}(0, 1/72)$. Then, the local measurement $y_k \in \mathbb{R}^{12}$ is obtained using equation (9), and the global measurement matrix $H = \mathrm{col}\{H_1, \ldots, H_6\} \in \mathbb{R}^{72 \times 256}$ satisfies the RIP with overwhelming probability. The measurement noise is $e_k \sim \mathcal{N}(0, 10^{-4} I_{12})$. The normalized mean square error (MSE) and the average relative error are employed to evaluate the performance, which are defined as follows
$$\text{Normalized MSE} = \frac{\|x - \hat{x}\|_2^2}{\|x\|_2^2} \tag{54}$$

$$\text{Average relative error} = \frac{\sum_{k=1}^{K} \|\hat{x}_k - x\|_2^2}{K \|x\|_2^2} \tag{55}$$
Figure 7. Estimation of x.

Figure 8. Comparison of normalized MSE (24 nodes).

Figure 9. Comparison of normalized MSE (72 nodes).

Figure 10. ARE versus iteration.

Figure 11. The coefficients of DCT.

Figure 12. Reconstructed results of temperature signal.

The convergence rate of the proposed algorithm is shown in Figures 5 and 6. It can be seen from the figures that both the local information quantity $C_k$ and the normalized MSE reach consensus as the iterations progress. It can also be observed that the normalized MSE has the same convergence tendency as the local information quantity. The results confirm that the consensus iterations enhance observability through information exchange, which is in agreement with the analysis.

The estimated $x$ at the sensor nodes using the DVSBL algorithm is presented in Figure 7. It can be observed that the estimates of the actual sparse signal at all nodes are satisfactory.

In order to examine the scalability of the proposed distributed algorithm, we consider two networks of different sizes: one is an L-connected Harary graph formed by 24 nodes, and the other is an L-connected Harary graph formed by 72 nodes. The L-connection denotes the number of neighbors of each node; in the considered example, L is set to 3. The error performance is shown in Figures 8 and 9. It can be noted that, irrespective of size, both networks achieve reconstruction performance comparable to the centralized VSBL. The average relative error for 72 nodes is presented in Figure 10. It can be seen that the relative error after two iterations is negligible. This demonstrates that the proposed algorithm is scalable.

Finally, real temperature signals obtained from the Intel Berkeley Research laboratory are considered to test the effectiveness of the proposed algorithm. The considered temperature signals are represented using the discrete cosine transform (DCT), as shown in Figure 11. In this context, a three-connected Harary graph with 24 nodes is employed. The effectiveness of our distributed algorithm is demonstrated in Figure 12, where both the original signal and the reconstructed signal of one node are provided. It can be observed from Figure 12 that both the proposed distributed algorithm and the centralized VSBL are successful in reconstructing the temperature signals. In a word, all the results presented so far demonstrate the effectiveness of the distributed sparse signal reconstruction algorithm presented in this article.

Conclusion
In this article, a new distributed variational SBL algorithm for sparse signal reconstruction in WSNs is presented. By combining VSBL and an average-consensus algorithm, the DVSBL is consistent with the centralized VSBL, where all data are available at a FC. Experimental results demonstrate the superior recovery performance and convergence properties of the proposed distributed algorithm.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The work is supported by ‘‘Qing Lan Project’’ and ‘‘the National Natural Science Foundation of China under Grant 61572172’’; ‘‘the Fundamental Research Funds for the Central Universities, No. 2016B10714’’; ‘‘The Basic Research Plan in Shenzhen City under Grant (No. JCYJ20130401100512995, JCYJ20140418100633654)’’; ‘‘the National Natural Science Foundation of China (No. 61001125)’’; ‘‘PAPD’’; and ‘‘CICAEET.’’

References
1. Han GJ, Qian AH, Jiang JF, et al. A grid-based joint routing and charging algorithm for industrial wireless rechargeable sensor networks. Comput Netw 2016; 101: 19–28. 2. Han GJ, Dong YH, Guo H, et al. Cross-layer optimized routing in wireless sensor networks with duty cycle and energy harvesting. Wirel Commun Mob Com 2015; 15(16): 1957–1981. 3. Zhao Z, Feng J and Peng B. A green distributed signal reconstruction algorithm in wireless sensor networks. IEEE Access 2016; PP(99): 1. 4. Guo X, Chu L and Sun X. Accurate localization of multiple sources using semidefinite programming based on incomplete range matrix. IEEE Sens J 2016; 16: 5319–5324. 5. Xie SD and Wang YX. Construction of tree network with limited delivery latency in homogeneous wireless sensor networks. Wireless Pers Commun 2014; 78(1): 231–246. 6. Xia ZH, Wang XH, Sun XM, et al. Steganalysis of least significant bit matching using multi-order differences. Secur Comm Network 2014; 78: 1283–1291. 7. Han GJ, Jiang JF and Zhang CY. A survey on mobile anchors assisted localization in wireless sensor networks. IEEE Commun Surv Tutor 2016; 18(3): 2220–2243. 8. Han GJ, Wan LT, Shu L, et al. Two novel DOA estimation approaches for real-time assistant calibration systems in future vehicle industrial. IEEE Syst J 2015; PP: 1–12. 9. Shen J, Tan HW, Wang J, et al. A novel routing protocol providing good transmission reliability in underwater sensor networks. J Internet Technol 2015; 16(1): 171–178. 10. Guo P, Wang J, Li B, et al. A variable threshold-value authentication architecture for wireless mesh networks. J Internet Technol 2014; 15(6): 929–936.
12 11. Donoho DL. Compressed sensing. IEEE T Inform Theory 2006; 52(4): 1289–1306. 12. Baraniuk RG, Candes E, Nowak R, et al. Compressive sampling [from the guest editors]. IEEE Signal Proc Mag 2008; 25(2): 12–13. 13. Tipping ME. Sparse Bayesian learning and the relevance vector machine. J Mach Learn Res 2001; 1: 211–244. 14. Ji S, Xue Y and Carin L. Bayesian compressive sensing. IEEE T Signal Proces 2008; 56(6): 2346–2356. 15. Zai Y, Xie L and Zhang C. Off-grid direction of arrival estimation using sparse Bayesian inference. IEEE T Signal Proces 2013; 61(1): 38–43. 16. Themelis KE, Rontogiannis AA and Koutroumbas KD. A variational Bayes framework for sparse adaptive estimation. IEEE T Signal Proces 2014; 62(18): 4723–4736. 17. Bishop CM and Tipping ME. Variational relevance vector machines. In: Proceedings of the 16th conference on uncertainty in artificial intelligence, San Francisco, CA, 30 June–3 July 2000, pp.46–53. San Francisco, CA: Morgan Kaufmann Publishers, Inc. 18. Mateos G, Bazerque JA and Giannakis GB. Distributed sparse linear regression. IEEE T Signal Proces 2010; 58(10): 5262–5276. 19. Mota JFC, Xavier JMF, Aguiar PMQ, et al. Distributed basis pursuit. IEEE T Signal Proces 2012; 60(4): 1942–1956. 20. Chen W and Wassell IJ. A decentralized Bayesian algorithm for distributed compressive sensing in networked sensing systems. IEEE T Wirel Commun 2015; 15(2): 1282–1292. 21. Yu H, Liu Y and Wang W. Distributed sparse signal estimation in sensor networks using HN-consensus filtering. IEEE/CAA J Autom Sin 2014; 1(2): 149–154. 22. Olfati-Saber R and Shamma JS. Consensus filters for sensor networks and distributed sensor fusion. In: Proceedings of the 44th IEEE conference on decision and control, Seville, 12–15 December 2005, pp.6698–6703. New York: IEEE. 23. Olfati-Saber R and Murray RM. Consensus problems in networks of agents with switching topology and timedelays. IEEE T Automat Contr 2004; 49(9): 1520–1533. 24. Tsitsiklis J, Bertsekas D and Athans M. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE T Automat Contr 1986; 31(9): 803–812. 25. Amelina N, Fradkov A, Jiang Y, et al. Approximate consensus in stochastic networks with application to load balancing. IEEE T Inform Theory 2015; 61(4): 1739–1752. 26. Wang Y, Tan K, Peng X, et al. Coordinated control of distributed energy-storage systems for voltage regulation
International Journal of Distributed Sensor Networks
27.
28.
29.
30. 31.
32.
33.
34.
35.
36.
37. 38.
39.
40.
41.
in distribution networks. IEEE T Power Deliver 2015; 31(3): 1132–1141. Safarinejadian B and Estahbanati ME. Consensus filterbased distributed variational Bayesian algorithm for flow and speed density prediction with distributed traffic sensors. IEEE Syst J 2015; PP(99): 1–10. Li WL and Jia YM. Consensus-based distributed multiple model UKF for jump Markov nonlinear systems. IEEE T Automat Contr 2012; 7(1): 230–236. Safarinejadian B and Estakhri M. A novel distributed variational approximation method for density estimation in sensor networks. Measurement 2016; 89: 78–86. Wipf DP and Rao BD. Sparse Bayesian learning for basis selection. IEEE T Signal Proces 2004; 52(8): 2153–2164. Davies ME and Gribonval R. Restricted isometry constants where ‘p sparse recovery can fail for 0 p 1. IEEE T Inform Theory 2009; 55(5): 2203–2214. Wu R and Chen DR. The improved bounds of restricted isometry constant for recovery via ‘p-minimization. IEEE T Inform Theory 2013; 59(9): 6142–6147. Wipf D and Nagarajan S. Iterative reweighted ‘1 and ‘2 methods for finding sparse solutions. IEEE J Sel Top Signa 2010; 4(2): 317–329. Wipf DP, Rao BD and Nagarajan S. Latent variable Bayesian models for promoting sparsity. IEEE T Inform Theory 2011; 57(9): 6236–6255. Zhu H, Leung H and He Z. State estimation in unknown non-Gaussian measurement noise using variational Bayesian technique. IEEE T Aero Elec Sys 2013; 49(4): 2601–2614. Kingston D and Beard R. Discrete-time averageconsensus under switching network topologies. In: Proceedings of the 2006 American control conference, Minneapolis, MN, 14–16 June 2006, 6 pp. New York: IEEE. Xiao L and Boyd S. Fast linear iterations for distributed averaging. Syst Control Lett 2004; 53(1): 65–78. Kar S and Moura JMF. Sensor networks with random links: topology design for distributed consensus. IEEE T Signal Proces 2008; 56(7): 3315–3326. Aysal TC, Oreshkin BN and Coates MJ. Accelerated distributed average consensus via localized node state prediction. IEEE T Signal Proces 2009; 57(4): 1563–1576. Sardellitti S, Giona M and Barbarossa S. Fast distributed average consensus algorithms based on advectiondiffusion processes. IEEE T Signal Proces 2010; 58(2): 826–842. Xiao L, Boyd S and Lall S. A scheme for robust distributed sensor fusion based on average consensus. In: Proceedings of the 4th international symposium on information processing in sensor networks (IPSN 2005), Boise, ID, 15 April 2005, pp.63–70. New York: IEEE.