Compressed RF Tomography for Wireless Sensor Networks: Centralized and Decentralized Approaches

Mohammad A. Kanso and Michael G. Rabbat
Department of Electrical and Computer Engineering, McGill University, Montreal, Quebec, Canada
[email protected] [email protected]
Abstract. Radio Frequency (RF) tomography refers to the process of inferring information about an environment by capturing and analyzing RF signals transmitted between nodes in a wireless sensor network. When few measurements are available, the inference techniques applied in previous work may not be feasible. Under certain assumptions, compressed sensing techniques can accurately infer environment characteristics even from a small set of measurements. This paper introduces Compressed RF Tomography, an approach that combines RF tomography and compressed sensing for monitoring in a wireless sensor network. We also present decentralized techniques which allow monitoring and data analysis to be performed cooperatively by the nodes. The simplicity of our approach makes it attractive for sensor networks. Experiments with simulated and real data demonstrate the capabilities of the approach in both centralized and decentralized scenarios.
1 Introduction

Security and safety personnel need intelligent infrastructure to monitor environments for detecting and locating assets. Tracking assets includes being able to locate humans as well as obstructions. Imagine a situation where a disaster has occurred and obstructions have blocked certain paths to the safety exit. The ability to detect the locations of these objects in a timely and efficient manner allows a quick response from security personnel directing the evacuation. This paper provides a feasible and efficient approach to monitoring and surveillance using wireless sensor nodes, applying RF tomography to analyze the characteristics of the environment. RF tomography is the process of inferring characteristics about a medium by analyzing wireless RF signals that traverse that medium. A wireless signal propagating along an unobstructed path between a pair of sensors loses average power with distance according to [1]:

P̄(d) = P_t − P_0 − 10 n_p log10(d/d_0)  dBm,   (1)

where P̄(d) is the average received power at distance d from the transmitting sensor, P_t is the transmitted power, P_0 is the received power at a reference distance d_0, and n_p is
the path loss exponent, which controls how fast power is lost along a path. For instance, n_p ≈ 2 for free space propagation, and it varies with the environment. Received power on a wireless link between nodes i and j can generally be modeled as in [1]:

P_ij = P̄(d) − Z_ij,   (2)
Z_ij = X_ij + Y_ij,   (3)
where Z_ij is the fading loss, consisting of shadowing loss X_ij and non-shadowing loss Y_ij. Thus, the signal attenuation Z_ij on a link allows us to determine whether or not an obstruction lies on its path. RSS (received signal strength) measurements among links provide a means for reconstructing shadowing losses. Wireless signals traversing different obstructions undergo different levels of signal attenuation, depending on the obstruction's nature and composition (e.g., thick walls attenuate signals more than humans). As more measurement links are available, analyzing those links allows us to infer information about objects' locations and properties. As more links cross over the same object, more information is available to reach a solution. Essentially, this information will be used to reconstruct a map of power attenuation levels throughout the environment.

Patwari and Agrawal introduce the concept of RF tomography for sensor networks in [1]. They propose a centralized reconstruction method based on weighted least squares estimation. This paper introduces compressed RF tomography, leading to an ℓ1-penalized reconstruction criterion, and we propose decentralized schemes for simultaneously carrying out measurements and reconstruction. After a formal problem statement in Section 2, we introduce compressed RF tomography in Section 3. Experiments using simulated and real data are reported in Section 4. Section 5 describes two decentralized reconstruction approaches, which are then compared via simulation in Section 6, and we conclude in Section 7.
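To make the link model in (1)–(3) concrete, the short sketch below simulates the received power on a single link. It is only an illustration under assumed parameter values (the chosen P_t, P_0, n_p, distances, and loss terms are ours), not part of the measurement system described in this paper.

```python
import numpy as np

def mean_received_power(d, Pt=0.0, P0=40.0, np_exp=2.0, d0=1.0):
    """Average received power P̄(d) in dBm along an unobstructed path, eq. (1)."""
    return Pt - P0 - 10.0 * np_exp * np.log10(d / d0)

# Example link between nodes i and j, eqs. (2)-(3); all values are assumptions.
d = 12.0                              # link length in meters
X_ij = 3.0                            # shadowing loss (dB) caused by an obstruction
Y_ij = np.random.normal(scale=0.7)    # non-shadowing loss (dB)
Z_ij = X_ij + Y_ij                    # total fading loss, eq. (3)
P_ij = mean_received_power(d) - Z_ij  # received power on the link, eq. (2)
print(P_ij)
```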
2 Problem Formulation

Assume that sensor nodes are deployed according to Figure 1(a) around the perimeter of a region to be monitored. Each line in Figure 1(a) corresponds to a wireless link. The monitored region is divided into a grid of pixels p ∈ R^n. Each pixel's value reflects the amount of signal attenuation over its area. Once this information is available, it can be displayed in grayscale, where a darker intensity corresponds to more attenuation. We assume that each pixel has a constant attenuation loss over its region. We also let the shadowing losses over the links be denoted by v ∈ R^k. The total shadowing loss of link i, represented as v_i, is modeled as a weighted sum over the pixels crossed by this link, plus noise. The attenuation over each link in the network can be expressed in matrix form as

v = Ap + n,   (4)

where n is Gaussian noise (in dB) with variance σ_n^2, and the entries of A are defined by

A_ij = d^o_ij / √(d_i)  if link i traverses pixel j,  and  A_ij = 0  otherwise,   (5)
where d_i is the length of link i and d^o_ij is the overlap distance covered by link i through pixel j. The division by √(d_i) parallels the adopted shadowing model [2]. The number of rows in A is equal to the number of existing links, and the number of columns is the number of pixels.
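As an illustration of (5), the sketch below builds the matrix A for a set of links over a square pixel grid by sampling points along each link to approximate the overlap distance d^o_ij. The grid size, node coordinates, and the sampling approximation are assumptions made only for this example.

```python
import numpy as np

def build_A(links, grid_side, area_side, n_samples=200):
    """Approximate A_ij = d_ij^o / sqrt(d_i) per eq. (5).

    links: list of ((x1, y1), (x2, y2)) node-pair coordinates.
    grid_side: number of pixels per side (n = grid_side**2 pixels).
    area_side: physical side length of the monitored square area.
    """
    n_pixels = grid_side ** 2
    pixel_len = area_side / grid_side
    A = np.zeros((len(links), n_pixels))
    for i, ((x1, y1), (x2, y2)) in enumerate(links):
        d_i = np.hypot(x2 - x1, y2 - y1)              # link length
        t = np.linspace(0.0, 1.0, n_samples)
        xs, ys = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
        cols = np.clip((xs / pixel_len).astype(int), 0, grid_side - 1)
        rows = np.clip((ys / pixel_len).astype(int), 0, grid_side - 1)
        step = d_i / n_samples                        # length represented by each sample
        for r, c in zip(rows, cols):
            A[i, r * grid_side + c] += step           # accumulate overlap distance d_ij^o
        A[i, :] /= np.sqrt(d_i)                       # divide by sqrt(d_i), eq. (5)
    return A
```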
Fig. 1. Figure (a) shows a wireless sensor network in an RF tomographic surveillance scenario. Figure (b) displays a single link passing through a set of pixels.
To be able to monitor the environment, we must acquire a set of measurements and perform analysis on those measurements. A simple centralized algorithm, as proposed in [1], can now be described:

1. Nodes acquire signal strength measurements and forward them to the central server.
2. The server computes the power difference on each link, P̄_ij − P_ij, and stores the results in a vector v.
3. The server reconstructs the vector p̂ to find the attenuation level over each pixel.

The reconstruction approach described in [1] for recovering p from v involves solving a simple weighted least squares (WLS) estimator, which is efficient to implement on sensor nodes. However, least squares methods usually require an overdetermined system of equations to provide acceptable results. If p is sparse enough and only a few measurements are available, ℓ1 reconstruction techniques provide an attractive solution. Henceforth, we adopt this approach and investigate its performance next.
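The fragment below sketches the server side of this centralized pipeline: form v from the per-link power differences, then recover the pixel vector. A plain least-squares solve is shown here only as a stand-in baseline; the exact WLS estimator of [1] is not reproduced, and the function names are illustrative assumptions.

```python
import numpy as np

def centralized_reconstruction(A, P_bar, P_meas):
    """Baseline server-side pipeline: v = P̄ - P per link, then least squares."""
    v = P_bar - P_meas                        # step 2: per-link power differences (dB)
    # Step 3 (baseline stand-in): ordinary least-squares estimate of pixel attenuations.
    p_hat, *_ = np.linalg.lstsq(A, v, rcond=None)
    return p_hat
```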
3 Compressed RF Tomography

As mentioned earlier, the approach we propose in this paper involves compressed sensing of RSS measurements to discover characteristics of the medium. Compressed sensing (CS) [3, 4] is a novel approach for recovering signals with sparse representations from a small set of measurements. In conventional Shannon/Nyquist sampling
theory, a bandlimited signal needs to be sampled at twice its bandwidth for perfect reconstruction. Compressed sensing shows that undersampling a sparse signal at a rate well below its Nyquist rate may still allow perfect recovery of all signal components under certain conditions. Due to the assumption of few changes in our environment (i.e., sparse p), compressed sensing applies naturally. This assumption may hold, for example, in border monitoring or nighttime security monitoring at a bank. In these examples few changes are expected at any given time, which means that only a few pixels will contain significant attenuation levels. With this in mind, compressed sensing can be combined with RF tomography to enable monitoring with fewer required measurements. An m-sparse signal is a signal that contains at most m nonzero elements. A typical signal of length n with at most m nonzero components (m ≪ n) requires iterating over all of its elements to determine the few nonzero components. The challenging aspect is the recovery of the original signal from the set of measurements. In general, this type of recovery is possible under certain conditions on the measurement matrix [3]. This reconstruction occurs with a success probability which increases as the sparsity level of the signal increases. Prior knowledge of the sparsity of a vector p allows us to reconstruct it from another vector v of k measurements by solving an optimization problem,

p̂ = arg min_p ||p||_0  subject to  v = Ap,   (6)
where A is defined above and ||p||_0 is the number of nonzero elements in p. Unfortunately, equation (6) is a non-convex, NP-hard optimization problem and is computationally intractable; reconstructing the signal requires searching through all (n choose m) m-sparse subspaces [5]. Researchers [3, 4, 6] have shown that an easier and equivalent problem to (6) can be solved:

p̂ = arg min_p ||p||_1  subject to  v = Ap,   (7)
where ||p||_1 is now the ℓ1 norm of p, defined as ||p||_1 = Σ_{i=1}^n |p_i|. The optimization problem in (7) is convex, and there are numerous algorithms to compute its solution [3, 7]. Among the first solutions used was linear programming, also referred to as Basis Pursuit [3], which requires O(m log n) measurements to reconstruct an m-sparse signal. In practical applications, measured signals are perturbed by noise as in (4). In this situation, (7) becomes inappropriate for estimating p since the solution should take the perturbation into account. The Least Absolute Shrinkage and Selection Operator (LASSO) [8, 9] is a popular sparse estimation technique which solves

p̂ = arg min_p λ||p||_1 + (1/2)||v − Ap||_2^2,   (8)
where λ regulates the tradeoff between sparsity and signal intensity. Note that this method requires no prior knowledge of the noise power in the measurements. Alternatively, iterative greedy algorithms such as Orthogonal Matching Pursuit (OMP) exist [10]. OMP is known for being more practical to implement and faster
than ℓ1-minimization approaches. The tradeoff is that more measurements are needed and there is less robustness to noise in the data. A detailed description of the algorithm can be found in [10]. OMP is particularly attractive for sensor network applications since it is computationally simple to implement. However, LASSO can still be a feasible solution, especially when reconstruction happens only on a more powerful receiver. For this reason, we compare the performance of both centralized techniques in our simulations and show their tradeoffs.
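To make the two reconstruction options concrete, the sketch below shows one simple way to solve (8) with the iterative soft-thresholding algorithm (ISTA) and a basic OMP loop. This is an illustrative implementation under our own assumptions (step size, iteration counts, and function names are ours), not the exact solvers used in the experiments reported later.

```python
import numpy as np

def ista_lasso(A, v, lam, n_iter=500):
    """Solve (8): min_p lam*||p||_1 + 0.5*||v - A p||_2^2 via ISTA."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    p = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ p - v)               # gradient of the quadratic term
        z = p - grad / L                       # gradient step
        p = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return p

def omp(A, v, m):
    """Greedy OMP: recover an (at most) m-sparse p from v = A p + n."""
    residual, support = v.copy(), []
    p = np.zeros(A.shape[1])
    for _ in range(m):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], v, rcond=None)
        p = np.zeros(A.shape[1])
        p[support] = coef
        residual = v - A @ p
    return p
```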
4 Simulations and Results: Centralized Reconstruction

This section presents an evaluation of Compressed RF Tomography. We present results from computer simulations as well as some results from real sensor data. The primary focus is on the accuracy of results obtained from a compressed set of measurements. Accuracy in this case is measured in terms of the mean squared error of the reconstructed signal. For better visibility, the values p̂ recovered by the reconstruction technique are mapped onto a vector p̃ whose values lie in [0,1]. The mapping can be a simple linear transformation onto [0,1], or a nonlinear transformation as in [1] for better contrast. This allows an easy representation of p̃ in grayscale as in Figure 2.
Fig. 2. Simulated environment under surveillance showing the discovered obstructions: (a) a monitored area with few obstructions discovered (σ_n^2 = 0.01 dB^2); (b) a monitored area with few obstructions (σ_n^2 = 0.49 dB^2).
The area under simulation is a square region surrounded by 20 sensor nodes transmitting to each other. Each node exchanges information one way with 15 other nodes, as shown in Figure 1(a). This yields a total of 20×15/2 = 150 possible links. Figure 2 illustrates how our approach can monitor an environment with 30 links at different noise levels. The figure shows dark pixels at 4 different positions, each corresponding to an existing obstruction at its location.
Fig. 3. Performance Comparison of LASSO and OMP with 15 measurements (total=150)
Next we turn our attention to examining the effect of noisy measurements on the performance of the design. To gain better insight into the amount of error caused by noise, the accuracy of the system is compared to the very low noise (effectively noiseless) case. Accuracy is measured by the mean squared error (MSE). We try to monitor the same obstructions as in Figure 2, with noise added to the measurements to examine its effect on accuracy. Performance results are plotted in Figure 3. As the figure shows, the noise level is reflected in the accuracy of the reconstruction: high noise levels cause inaccuracies in measurements and hence higher MSEs. At lower noise levels, accurate monitoring is realized, even with few measurements. Note that at low noise levels, the MSE becomes more affected by the small number of measurements available (only 30 in this case). Comparing the performance of the LASSO and OMP techniques, LASSO clearly performs better, especially when noise levels are high. Results show that OMP and LASSO behave similarly at low noise, which favors the less complex OMP under such conditions. Even at high noise levels, one can still monitor some of the obstructions.

Compressed RF tomography is well-suited to the case where few measurements are available. This scenario can occur when some nodes are put to sleep to save battery power, when some of the sensors malfunction, or when links are dropped. To demonstrate the power of the reconstruction algorithm, we simulate the same obstruction scenario as in Figure 2 with a varying number of links used (out of the total 150 link measurements).
Fig. 4. Performance Comparison of LASSO and OMP with a varying number of measurements (noise variance at 0.16 dB^2 and 150 links in total)
Figure 4 shows how the MSE varies as more link measurements are added to the system at a fixed noise level. As the figure shows, LASSO performs better than OMP when few measurements are available. OMP and LASSO provide identical results once roughly 25% of the link measurements are available. Simulations show that OMP requires more measurements than LASSO to obtain the same level of accuracy. We also experimented with our approach on the data used in [11]. In Figure 5 below, we compare our reconstruction approach using a small subset of measurements to the approach that uses all measurements [1]. Figure 5 also demonstrates that compressed RF tomography can accurately monitor an environment if the sparsity assumption on the medium is satisfied. Observe that the ℓ1-minimization in 5(b) removed inaccuracies present in 5(a) due to noise and non-line-of-sight components.
Fig. 5. Testing compressed RF tomography on real sensor data: the least squares reconstruction approach with all links in (a) and our compressed approach with 15 links in (b).
5 Decentralized Reconstruction Techniques

Thus far, we have considered a centralized approach to the reconstruction problem: wireless nodes continuously transmit their data to a fusion center which handles data processing and analysis. In this section, we consider decentralized, in-network processing to achieve the same (or nearly the same) performance as the centralized scheme. While different tasks can be distributed in a sensor network, our focus in this work is to efficiently solve the following optimization problem:

p̂ = arg min_p ||Ap − v||_2^2 + λ||p||_1.   (9)
Distributed compressed sensing in sensor networks has been investigated in previous work [12, 13]. However, the distributed aspect there was the joint sparsity of the signals; our concern in this paper is a distributed reconstruction mechanism. In this section, we attempt to tailor certain optimization techniques to solving a compressed sensing problem cooperatively in a sensor network.

Solving optimization problems in a distributed fashion in sensor networks has been investigated due to its benefits over a centralized approach [14, 15]. A fusion center in a centralized system constitutes a single point of failure, and it is required to possess more powerful processing abilities than the sensor nodes in order to handle the signal measurements gathered from the network. Wireless link failures can also heavily degrade a centralized system's performance, as less information gets through to the fusion center. Some nodes may be distant from the server, which essentially requires them to spend more energy for communication (received power ∝ 1/distance^2), thus reducing the lifetime of the network. Distributed algorithms, on the other hand, do not suffer from these problems. Processing is performed cooperatively among the nodes, thus distributing the workload equally over all active nodes. Even if certain nodes malfunction, monitoring can continue with the remaining functional nodes.

In this work, we introduce CS reconstruction techniques using two different approaches: incremental subgradient methods and projection onto convex sets (POCS). We also differentiate between deterministic and randomized implementations of each method. A detailed discussion of how these methods apply in our case follows, along with performance results for comparison.

5.1 Incremental Subgradient Optimization

Gradient methods are well-known techniques for convex optimization problems. One of their advantages is their simplicity, a property well suited to a wireless sensor network. However, minimizing a convex function via a gradient method requires the function to be differentiable. Subgradient methods generalize standard gradient descent methods to non-differentiable functions and share many of their properties. For a convex, non-differentiable function f : R^n → R, the following inequality holds at any point p_0:

f(p) ≥ f(p_0) + (p − p_0)^T g   ∀p ∈ R^n,   (10)
where g ∈ R^n is a subgradient of f at p_0. The set of all subgradients of f at a point p is called the subdifferential of f at p, denoted ∂f(p). Note that when f is differentiable, ∂f(p) = {∇f(p)}, i.e., the gradient is the only possible subgradient. Incremental subgradient methods, originally introduced in [16], split the cost function into smaller component functions. The algorithm works iteratively over a set of constraints by sequentially taking steps along the subgradients of those component functions. In the special case of a sensor network, the incremental process iterates through the measurements acquired at each node so that all nodes converge to the solution. To distribute the optimization task among the sensor nodes, the cost function in (9) is split into smaller component functions. Assuming there are N sensor nodes in total in the network, with measurements that are not necessarily uniformly distributed among them, our problem can now be written as

p̂ = arg min_p ||Ap − v||_2^2 + λ||p||_1
  = arg min_p Σ_{i=1}^N [ Σ_{j∈M_i} ((Ap)_j − v_j)^2 + (λ/N)||p||_1 ] = arg min_p Σ_{i=1}^N f_i(p),   (11)
where M_i is the set of RSS measurements acquired by node i, and f_i(p) denotes the bracketed component function. In each cycle, all nodes iteratively change p in a sequence of subiterations. The update equation in a decentralized subgradient approach now becomes

p^(c+1) = p^(c) − µ g_i(p^(c)),   (12)

where µ is a step size, c is the iteration number, and g_i(p^(c)) is a subgradient of f_i(p) at p^(c) at node i. Rates of convergence have been analyzed in detail by Nedić and Bertsekas [17]. They show that under certain conditions the algorithm is guaranteed to converge to an optimal value. However, convergence results depend on how the step size µ is chosen as well as on whether iterations are performed deterministically (in round-robin fashion, for instance) or randomly. In a deterministic approach, nodes perform updates in a fixed cycle. In a randomized approach, on the other hand, the updating node is chosen uniformly at random, removing the need to implement a cycle. Assuming that each sensor i acquires a set of measurements v_i via its sensing matrix A_i, the subgradient that each node uses in its update equation can be expressed element-wise as

g_i(p)_w = (2A_i^T(A_i p − v_i))_w + (λ/N) sgn(p_w),  if p_w ≠ 0,
g_i(p)_w = (2A_i^T(A_i p − v_i))_w + λ/N,             if p_w = 0 and (2A_i^T(A_i p − v_i))_w < −λ/N,
g_i(p)_w = (2A_i^T(A_i p − v_i))_w − λ/N,             if p_w = 0 and (2A_i^T(A_i p − v_i))_w > λ/N,
g_i(p)_w = 0,                                         otherwise,   (13)

where sgn(·) is the sign function and (x)_w is element w of vector x.
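As an illustration, here is a minimal sketch of one decentralized cycle of the incremental subgradient method in (12)–(13). The constant step size, the variable names, and the way node measurements are partitioned are our own assumptions for the example.

```python
import numpy as np

def local_subgradient(A_i, v_i, p, lam, N):
    """Subgradient of f_i(p) = ||A_i p - v_i||^2 + (lam/N)||p||_1, per eq. (13)."""
    grad = 2.0 * A_i.T @ (A_i @ p - v_i)        # gradient of the quadratic part
    g = np.zeros_like(p)
    thr = lam / N
    for w in range(p.size):
        if p[w] != 0:
            g[w] = grad[w] + thr * np.sign(p[w])
        elif grad[w] < -thr:
            g[w] = grad[w] + thr
        elif grad[w] > thr:
            g[w] = grad[w] - thr
        else:
            g[w] = 0.0                           # 0 is a valid subgradient here
    return g

def incremental_cycle(A_blocks, v_blocks, p, lam, mu):
    """One deterministic cycle: each node updates p in turn and 'broadcasts' it."""
    N = len(A_blocks)
    for A_i, v_i in zip(A_blocks, v_blocks):
        p = p - mu * local_subgradient(A_i, v_i, p, lam, N)   # update (12)
    return p
```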
5.2 Projection onto Convex Sets (POCS) Method

In addition to the incremental subgradient algorithm discussed earlier, we propose a distributed POCS method suited to a sensor network environment. One important drawback of subgradient algorithms is that they might converge to local optima or saddle points, and they can suffer from slow convergence if step sizes are not properly set; the rate of convergence is the more relevant issue in our setup. As our simulations will demonstrate, POCS provides a feasible solution to this problem, at an additional price in complexity. The basic idea of POCS is that data is projected iteratively onto the constraint sets. Perhaps one interesting benefit of this method is that it allows adding more constraints to the optimization problem without significantly changing the algorithm. Furthermore, POCS is known to converge much faster than incremental subgradient algorithms. POCS has been used in the area of image processing [18]. In the area of compressed sensing, POCS methods have been employed for data reconstruction [6], but not in a distributed fashion. Let B be the ℓ1 ball

B = {p ∈ R^n : ||p||_1 ≤ ||p*||_1},   (14)

and let H be the hyperplane

H = {p ∈ R^n : Ap = v}.   (15)
Reconstructed data is required to explain the observations v and to possess sparse features; the sets H and B enforce these requirements. Since both sets are convex, the algorithm performs projections onto H and B in an alternating fashion. One of the challenges is projecting onto H, since it requires solving

arg min_p ||Ap − v||.   (16)
Fortunately, this is a simple optimization problem whose solution can be expressed in compact form via the pseudo-inverse (Moore–Penrose inverse). Since each sensor i acquires a set of measurements v_i via a sensing matrix A_i, the POCS algorithm can be run iteratively over the nodes. In other words, the constraint set H now becomes the intersection of the hyperplanes H_i = {p ∈ R^n : A_i p = v_i}. Each node performs an alternating projection onto B and H_i and broadcasts the result in the sensor network. The projection onto a hyperplane H_i can be expressed as

proj_{H_i}(x) = x + A_i^+ (v_i − A_i x),   (17)

where A_i^+ is the pseudo-inverse of A_i. Note that the hyperplane projection step in [6] involved the inverse (AA^T)^{-1} instead of a pseudo-inverse. However, the sensing matrices used throughout this work yield matrices AA^T that are not invertible, so we naturally use the pseudo-inverse. Finding the pseudo-inverse requires computing the singular value decomposition (SVD) of the matrix. The projection onto B is essentially a soft-thresholding step, applied element-wise as

proj_B(x) = x − λ,  if x > λ,
proj_B(x) = x + λ,  if x < −λ,
proj_B(x) = 0,      otherwise.   (18)
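The sketch below illustrates one decentralized POCS pass built from (17) and (18). It is only a schematic example; the function names, node ordering, and fixed number of cycles are our own assumptions.

```python
import numpy as np

def proj_hyperplane(x, A_i, v_i):
    """Projection onto H_i = {p : A_i p = v_i} via the pseudo-inverse, eq. (17)."""
    return x + np.linalg.pinv(A_i) @ (v_i - A_i @ x)

def soft_threshold(x, lam):
    """Element-wise soft-thresholding step of eq. (18)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pocs_decentralized(A_blocks, v_blocks, n_pixels, lam, n_cycles=10):
    """Each node alternately projects onto B and its own hyperplane H_i,
    then (conceptually) broadcasts the updated estimate to the other nodes."""
    p = np.zeros(n_pixels)
    for _ in range(n_cycles):
        for A_i, v_i in zip(A_blocks, v_blocks):   # deterministic node order
            p = soft_threshold(p, lam)             # projection onto B, eq. (18)
            p = proj_hyperplane(p, A_i, v_i)       # projection onto H_i, eq. (17)
    return p
```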
5.3 Centralized and Decentralized Tradeoffs

The formulation of our decentralized approach in (11) has the attractive property that it can be run in parallel among the nodes. Since the objective function is expressed as a sum of separate components, each node can independently work on a component. However, each node must have an updated value for p on each iteration. A decentralized implementation would involve a node performing an update on p, using the incremental subgradient or POCS technique, and then broadcasting this new value to all other nodes. Note that gathering RSS measurements and broadcasting p can be done at the same time, saving battery power. No other communication is required, since each node acquires its own measurements in v and has its own fixed entries in the matrix A. The communication overhead is therefore acceptable for a wireless sensor network.

In a network of N sensor nodes, a single iteration in a centralized scheme is equivalent to N (or an average of N in a randomized setting) iterations in a decentralized scheme. The centralized approach involves transmitting O(k) values, for k RSS measurements in the network. Compressed sensing theory indicates that k = O(m log n), where m is the number of nonzero elements in p, so centralized communication involves transmitting O(m log n) values per iteration. In a decentralized setting, a single iteration involves each node sending an updated version of p. At most O(Nn) values are transmitted, where n is the dimension of p (generally N < n). But since p is a sparse vector, basic data compression methods can decrease packet sizes to O(Nm). Also, since the application of RF tomography requires all nodes to communicate with each other, no extra routing cost is required to broadcast p throughout the network. Comparing O(m log n) to O(Nm), and observing that generally log n < N, shows that more communication is required in decentralized approaches. Interestingly, one can notice that if n is large enough (a large number of pixels), decentralized processing would require less communication than centralized processing. Nevertheless, nodes will still spend more battery power to perform iterative updates on p. These local computations consist of simple matrix operations as described earlier.

From an energy point of view, a centralized approach will generally provide less communication overhead, longer network lifetime, and faster processing, since all information is gathered at the beginning of the first iteration. However, for practical reasons, a decentralized scheme provides more robustness to server and link failures. An optimal approach would be a combination of centralized and decentralized techniques in a hybrid architecture, exploiting the advantages of each technique simultaneously.
6 Simulations and Discussion: The Decentralized Approach

Using the same environment as in Figure 2, we simulate our distributed algorithms for compressed RF tomographic imaging. Since there is no prior information about the monitored environment, the algorithms are initialized with zero data. One hidden advantage of the proposed algorithms is that they can be run in warm-start mode, continuing from the results of previous iterations. So if there is no significant motion in the environment, one can expect faster convergence rates.
Incremental subgradient and POCS methods are tested in both deterministic and randomized settings. In the randomized setting, on each iteration a random node updates its results and broadcasts them to the other nodes in the network. We simulate the environment with a noise level of 0.0025 dB^2 and 30 available links for 200 iterations. Moreover, we use a step size of 0.3 for the subgradient approach.
Fig. 6. Comparing our decentralized approaches by varying the number of iterations: (a) cost function versus number of iterations; (b) MSE versus number of iterations.
Figure 6 demonstrates that within 2 cycles (40 iterations) the reconstructed data becomes close to its optimal value. Notice that the POCS method performs considerably better than the incremental approach. This is mainly due to the constant step size assumed; ideally, an adaptive step size should be employed. Deterministic approaches perform better than the randomized approaches, especially during the first iterations. This is expected, since a deterministic approach guarantees that nodes perform iterations in a certain order, whereas in a randomized approach some nodes might perform updates to the solution more frequently than others.
7 Conclusions and Future Work

In this paper we have introduced the idea of compressed sensing into RF tomographic imaging and have proposed models for centralized and decentralized processing. The benefits of our approach have been explored, along with an overview of the theory and simulations. The combination of compressed sensing and RF tomography produces an energy-efficient approach for monitoring environments. RF tomography by itself is a cheap approach for monitoring, since it relies on simple RSS measurements and basic data analysis. Extending the lifetime of a wireless sensor network while maintaining reliable performance is a challenge in itself [19]. Network lifetime can be especially important in cases of unexpected power outages. Compressed RF Tomography targets efficiency and energy saving by minimizing the number of measurements and the number of active nodes in a network. Moreover, since few measurements can be as informative as many, the network gains some fault tolerance. Finally, the decentralized scheme allows nodes to cooperatively analyze data without the need for a bottleneck fusion center.

Simulations have supported the validity of the design and provided a comparison between greedy and ℓ1-minimization techniques on one hand, and centralized and decentralized techniques on the other. These techniques exhibit a tradeoff between performance and simplicity of implementation. Performance of the design was examined by investigating the effects of noise and of the number of available measurement links. Furthermore, the incremental subgradient and POCS methods have demonstrated their validity and tradeoffs through simulations.

Our future direction in this area involves investigating the benefits of exploiting prior information about the environment to choose an optimal set of measurements. We are also aiming to explore other optimization techniques that can be applied in a distributed fashion. Moreover, we hope to generalize our design to more complicated environments and sensor node deployments, in which an optimal positioning scheme is to be found.
Acknowledgements

We thank N. Patwari and J. Wilson from the University of Utah for sharing their sensor network measurements. We also gratefully acknowledge support from NSERC Discovery grant 341596-2007 and FQRNT Nouveaux Chercheurs grant NC-126057.
References

1. Patwari, N., Agrawal, P.: Effects of correlated shadowing: Connectivity, localization, and RF tomography. (April 2008) 82–93
2. Patwari, N., Agrawal, P.: NeSh: A joint shadowing model for links in a multi-hop network. (March 31–April 4, 2008) 2873–2876
3. Donoho, D.: Compressed sensing. IEEE Trans. on Information Theory 52(4) (April 2006) 1289–1306
4. Candes, E., Romberg, J., Tao, T.: Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. on Information Theory 52(2) (February 2006) 489–509
5. Candes, E., Tao, T.: Decoding by linear programming. IEEE Trans. on Information Theory 51(12) (Dec. 2005) 4203–4215
6. Candes, E.J., Tao, T.: Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. on Information Theory 52(12) (Dec. 2006) 5406–5425
7. Figueiredo, M., Nowak, R.D., Wright, S.: Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing 1(4) (Dec. 2007) 586–597
8. Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B 58 (1996) 267–288
9. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. Annals of Statistics 32(2) (2004) 407–499
10. Tropp, J., Gilbert, A.: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. on Information Theory 53(12) (Dec. 2007) 4655–4666
11. Wilson, J., Patwari, N.: Radio tomographic imaging with wireless networks. Technical report, University of Utah (2008)
12. Duarte, M., Sarvotham, S., Baron, D., Wakin, M., Baraniuk, R.: Distributed compressed sensing of jointly sparse signals. Thirty-Ninth Asilomar Conference on Signals, Systems and Computers (November 2005)
13. Haupt, J., Bajwa, W., Rabbat, M., Nowak, R.: Compressed sensing for networked data. IEEE Signal Processing Magazine 25(2) (March 2008) 92–101
14. Rabbat, M., Nowak, R.: Distributed optimization in sensor networks. Third International Symposium on Information Processing in Sensor Networks (IPSN) (April 2004) 20–27
15. Johansson, B.: On distributed optimization in networked systems. PhD thesis, Royal Institute of Technology (KTH) (2008)
16. Kibardin, V.M.: Decomposition into functions in the minimization problem. Automation and Remote Control 40(1) (1980) 109–138
17. Nedic, A., Bertsekas, D.: Convergence rate of incremental subgradient algorithms. In: Stochastic Optimization: Algorithms and Applications. Kluwer Academic Publishers (2000)
18. Gubin, L.G., Polyak, B.T., Raik, E.V.: The method of projections for finding the common point of convex sets. USSR Computational Mathematics and Mathematical Physics 7(6) (1967)
19. Akyildiz, I., Su, W., Sankarasubramaniam, Y., Cayirci, E.: A survey on sensor networks. IEEE Communications Magazine 40(8) (Aug 2002) 102–114