ITERATIVE ALGORITHMS FOR COMPRESSED SENSING WITH PARTIALLY KNOWN SUPPORT

Rafael E. Carrillo, Luisa F. Polania and Kenneth E. Barner
Department of Electrical and Computer Engineering, University of Delaware

ABSTRACT

Recent work in modified compressed sensing (CS) shows that reconstruction of sparse or compressible signals with partially known support yields better results than traditional CS. In this paper, we extend these ideas by modifying three iterative algorithms to incorporate the known support into the recovery process. The performance and effect of the prior information are studied through simulations. Results show that the modifications improve performance, requiring fewer samples to yield an approximate reconstruction.

Index Terms— Compressed sensing, sampling methods, signal reconstruction, estimation.
1. INTRODUCTION

Compressed sensing (CS) is a recently introduced framework that departs from the traditional data acquisition paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired through a low-rate acquisition process that projects the signal onto a small set of vectors incoherent with the sparsity basis [1]. Applications of this framework include data compression, channel coding, image and data acquisition, astronomy and geosciences, among others. Several reconstruction algorithms that yield perfect or approximate reconstruction have been proposed in the literature ([1, 2, 3, 4, 5] and references therein); for a review and comparison of the most relevant algorithms, see [3]. However, the number of samples (or measurements) needed to achieve perfect signal reconstruction is mainly dictated by the sparsity level (the number of nonzero elements in the representation) and the measurement matrix: sparser representations yield more accurate reconstructions from the same number of samples. Recent works have explored the idea of exploiting prior information about the signal to reduce the number of samples [6, 7]. Duarte et al. exploit a known model for the signal (e.g., a subspace model or a graphical model) and incorporate this model into the reconstruction process, achieving accurate reconstructions with fewer samples [6]. Vaswani et al. assume that part of the signal support is known a priori, and the problem is recast as finding the unknown support. The remainder of
978-1-4244-4296-6/10/$25.00 ©2010 IEEE
the signal (the unknown support) is sparser than the original signal, thereby needing fewer samples for an accurate reconstruction [7]. Motivated by the results of [7], we propose an extension of their ideas, modifying three iterative algorithms to incorporate the known support into the recovery process. The performance and effect of the prior information are studied through simulations. Results show that the modifications improve performance, requiring fewer samples to yield an approximate reconstruction.
2. BACKGROUND AND MOTIVATION

2.1. Compressed Sensing Review

Let x ∈ R^n be a signal that is either s-sparse or compressible in some orthogonal basis Ψ, and let Φ be an m × n sensing matrix, m < n. The measurement model is y = Φx + z, where z is zero-mean additive white noise. It has been shown that the convex program

min_{x ∈ R^n} ||x||_1   s.t.   ||y − Φx||_2 ≤ ε,    (1)
for some small ε > 0, can recover the original signal x from y [1]. If ||z||_2 ≤ ε and Φ satisfies a restricted isometry property (RIP), then the reconstructed signal x̂ is guaranteed to obey ||x − x̂||_2 ≤ Cε [1]. This convex program is known as Basis Pursuit Denoising (BPD), and its noiseless version (ε = 0) as Basis Pursuit (BP). A family of iterative greedy algorithms [2, 3] is shown to enjoy a similar approximate reconstruction property, generally with lower computational complexity; Matching Pursuit, Orthogonal Matching Pursuit (OMP) [2] and CoSaMP [3] are examples. However, these algorithms require more measurements for exact reconstruction than the L1-minimization approach. Recent works show that nonconvex problems can recover a sparse signal from fewer measurements than current geometric methods, while preserving the same reconstruction quality (see [4, 5] and references therein).
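As a concrete illustration of the measurement model, the following minimal numpy sketch generates an s-sparse signal and its noisy projections (the identity sparsity basis and all dimensions are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 256, 80, 8          # signal length, number of measurements, sparsity

# s-sparse signal: s nonzeros at random positions (identity sparsity basis assumed)
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Gaussian sensing matrix with unit-norm columns, a common choice in CS
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)

z = 0.01 * rng.standard_normal(m)   # small zero-mean additive white noise
y = Phi @ x + z                     # the measurement model y = Phi x + z

print(y.shape)                      # (80,)
```

Any recovery algorithm discussed below takes (Φ, y) as input and returns an estimate x̂ of x.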
ICASSP 2010
2.2. Compressed Sensing with Partially Known Support

Let x ∈ R^n be a sparse or compressible signal in some basis Ψ, and denote T = supp(x). In this setting, we assume that T is partially known, i.e., T = T0 ∪ Δ \ Δe. The set T0 ⊂ {1, . . . , n} is the a priori knowledge of the support of x, possibly corrupted by an error Δe ⊂ {1, . . . , n} outside the true support, and Δ ⊂ {1, . . . , n} is the unknown part of the support. This scenario is typical in many real signal processing applications, e.g., the lowest subband coefficients in a wavelet decomposition, which represent a low-frequency approximation of the signal, or the first coefficients of the DCT transform of an image with a constant background. Recent works show that modifying the CS framework to include prior knowledge of the support improves the reconstruction results using fewer measurements [7, 8]. Modified CS seeks a signal that explains the measurements and whose support contains the smallest number of new additions to T0. Vaswani et al. proposed in [7] to modify BP to find a sparse signal, assuming uncorrupted measurements. This technique is extended in [8] to the case of corrupted measurements and compressible signals, and a stability result is proven for this general case. The approach solves the following optimization program:

min_{x ∈ R^n} ||x_{T0^c}||_1   s.t.   ||y − Φx||_2 ≤ ε.    (2)
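For illustration, the noiseless case (ε = 0) of program (2) can be posed as a linear program by splitting the ℓ1 term into auxiliary variables. The sketch below uses scipy's linprog and is a prototype under that assumption, not the solver used in [7, 8]; all dimensions are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, s = 30, 15, 5

# s-sparse ground truth; T0 holds part of the true support, known a priori
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
T0 = support[:3]

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true

# Noiseless program (2): min sum_{i not in T0} |x_i|  s.t.  Phi x = y.
# Variables [x; t]; |x_i| <= t_i is enforced by x - t <= 0 and -x - t <= 0.
w = np.ones(n)
w[T0] = 0.0                                  # entries in T0 are not penalized
c = np.concatenate([np.zeros(n), w])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([Phi, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print(np.linalg.norm(Phi @ x_hat - y))       # ~0: x_hat explains the measurements
```

Since the true signal is feasible for this LP, the optimal objective can never exceed the ℓ1 norm of the true signal outside T0.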
Although the modified CS approach needs fewer samples to recover a signal, the computational cost of solving (2) can be high, and the program can be complicated to implement for those unfamiliar with convex optimization tools. Therefore, we propose to extend the ideas of modified CS to iterative approaches such as greedy algorithms and iterative reweighted least squares methods. These methods construct an estimate of the signal at each iteration, making it natural to incorporate T0 into the recursion as an initial condition or at each iteration.

3. ITERATIVE ALGORITHMS FOR CS WITH PARTIALLY KNOWN SUPPORT

In this section we describe the modifications of three iterative algorithms to incorporate the partially known support into the iterative process. The algorithms are OMP, CoSaMP and rwls-SL0.

3.1. Notation

Let x be a signal in R^n and r be a positive integer. We write x_r for the signal in R^n formed by restricting x to its r largest-magnitude components. We write |T| to denote the cardinality of T. If T is a subset of {1, 2, . . . , n}, then the restriction of the signal to
the set T is defined as

(x|T)_i = x_i if i ∈ T, and 0 otherwise.
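A small numpy illustration of this notation (the values are hypothetical, for clarity only):

```python
import numpy as np

x = np.array([0.5, -3.0, 0.2, 7.0, -1.5])

# x_r: keep the r largest-magnitude components, zero out the rest
r = 2
xr = np.zeros_like(x)
keep = np.argsort(np.abs(x))[-r:]   # indices of the r largest |x_i|
xr[keep] = x[keep]
print(xr)                            # [ 0. -3.  0.  7.  0.]

# x|T: restriction of x to an index set T
T = np.array([0, 3])
x_T = np.zeros_like(x)
x_T[T] = x[T]
print(x_T)                           # [0.5 0.  0.  7.  0. ]
```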
We write Φ_T for the column submatrix of Φ whose columns are listed in the set T, and Φ† for the pseudoinverse of a tall, full-rank matrix Φ.

3.2. OMP

OMP is an iterative greedy algorithm for sparse signal recovery [2]. At each iteration, the column of Φ most strongly correlated with the remaining part of the signal is chosen; its contribution to the measurement vector is then subtracted off, and the algorithm iterates on the residual. Since the algorithm must determine which columns of Φ participate in the measurement vector, it is natural to introduce partially known support ideas to enhance its recovery performance: the partially known support gives a priori information about some of the columns that should be selected. This information modifies the initialization of the algorithm, because the contribution of these columns to the measurement vector must be subtracted off before the iteration starts. Therefore, the residual is initialized as

r = y − Φ_{T0}(Φ†_{T0} y),    (3)
where T0 is the partially known support and the initial support of the signal at t = 0. The algorithm terminates when the L2 norm of the residual falls below a selected approximation error bound. All other steps of the iteration remain the same as in OMP.

3.3. CoSaMP

Compressive Sampling Matching Pursuit (CoSaMP) is also a greedy algorithm [3]. As in the OMP case, the partially known support can be incorporated, and the initialization must be modified in a similar way: the residual is calculated by subtracting the contribution of the first estimate, and the first estimate of the signal is obtained by solving a least squares problem using Φ_{T0}. In one step of the iteration, CoSaMP identifies the 2s largest components of the signal proxy. Since a subset of the support is already known, we only need to identify the 2(s − |T0|) largest components instead. CoSaMP prunes the signal to be s-sparse; to do so while including the a priori information, an approximation to the signal is formed at each iteration by selecting the largest coordinates together with those corresponding to the partially known support. The rest of the algorithm remains the same as CoSaMP. The entire algorithm is specified in Algorithm 1.
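As a complement to the pseudocode of Algorithm 1, the following is a minimal numpy sketch of this modified CoSaMP iteration. It is a prototype written from the description above, not the authors' implementation, and it assumes |T0| < s and noiseless least-squares steps:

```python
import numpy as np

def cosamp_pks(Phi, y, s, T0, n_iter=20):
    # CoSaMP with partially known support: T0 is always retained, and only
    # K = s - |T0| new components are identified per iteration.
    m, n = Phi.shape
    T0 = np.asarray(T0)
    K = s - len(T0)                    # assumes |T0| < s
    x_hat = np.zeros(n)
    x_hat[T0] = np.linalg.lstsq(Phi[:, T0], y, rcond=None)[0]
    r = y - Phi @ x_hat                # residual after the known-support fit
    for _ in range(n_iter):
        e = Phi.T @ r                               # signal proxy
        Omega = np.argsort(np.abs(e))[n - 2 * K:]   # 2K largest proxy entries
        T = np.union1d(Omega, np.flatnonzero(x_hat))
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]
        a = np.abs(b)
        a[T0] = 0.0                                 # rank only the new entries
        keep = np.union1d(T0, np.argsort(a)[n - K:])
        x_hat = np.zeros(n)
        x_hat[keep] = b[keep]                       # prune to an s-sparse estimate
        r = y - Phi @ x_hat
    return x_hat

# Small noiseless example with 3 of 5 support positions known a priori
rng = np.random.default_rng(0)
n, m, s = 40, 30, 5
x = np.zeros(n)
S = rng.choice(n, size=s, replace=False)
x[S] = 3.0 * np.sign(rng.standard_normal(s))
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = cosamp_pks(Phi, Phi @ x, s, T0=S[:3])
```

By construction, every estimate contains T0 and has at most s nonzero entries.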
Algorithm 1 CoSaMP with partially known support
Require: CS matrix Φ, measurements y, sparsity level s and partially known support T0.
1: Initialize x̂0|T0 = Φ†_{T0} y, x̂0|T0^c = 0, r = y − Φ_{T0} x̂0|T0, K = s − |T0| and i = 0.
2: while halting criterion false do
3:   i ← i + 1
4:   e ← Φ^T r
5:   Ω ← supp(e_{2K})
6:   T ← Ω ∪ supp(x̂_{i−1})
7:   b|T ← Φ†_T y, b|T^c ← 0
8:   A|T0^c ← b|T0^c, A|T0 ← 0
9:   x̂i ← b|(T0 ∪ supp(A_K))
10:  r ← y − Φ x̂i
11: end while
12: return x ← x̂i

3.4. rwls-SL0

As described in [5], the iterative reweighted least squares approach based on a smooth approximation of the L0 norm is an efficient method to reconstruct sparse signals. The following function, which converges pointwise to the L0 norm as σ → 0, was proposed in [5]:

F_σ(x) = Σ_{i=1}^{n} f_σ(x_i) = Σ_{i=1}^{n} |x_i| / (σ + |x_i|).    (4)
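As a quick numerical check (a small sketch, not from the paper), F_σ approaches the number of nonzero entries as σ decreases:

```python
import numpy as np

def F_sigma(x, sigma):
    # Smooth surrogate of the L0 norm from Eq. (4): sum |x_i| / (sigma + |x_i|)
    return np.sum(np.abs(x) / (sigma + np.abs(x)))

x = np.array([0.0, 2.0, 0.0, -0.5, 1.0])   # 3 nonzero entries

for sigma in (1.0, 0.1, 0.001):
    print(sigma, F_sigma(x, sigma))
# F_sigma(x, sigma) -> ||x||_0 = 3 as sigma -> 0
```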
To find the sparsest possible signal estimate whose support contains T0, we propose to solve the following problem:

min_{x ∈ R^n} Σ_{i ∉ T0} |x_i| / (σ + |x_i|)   s.t.   ||y − Φx||_2 ≤ ε.    (5)
To solve the resulting nonconvex optimization problem, an iterative reweighted least squares approach, whose purpose is to encourage sparse solutions by giving a large weight to small components, was proposed in [5]. Since the objective is not convex and can have several local minima on the feasible set, a convex problem is solved iteratively. We propose to write the solution of the problem at iteration t as x̂^{t+1} = W^t Φ^T (Φ W^t Φ^T + λI)^{−1} y, where λ is a small regularization parameter set as some predefined λ_min > 0, and W^t is the diagonal weighting matrix with diagonal elements W^t_{ii} = (σ^t + |x̂^t_i|)^2. To include the partially known support in the solution, the diagonal elements whose positions are in T0 should have a larger weight than the others; we set this value to 100 times the largest element of the diagonal.
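The iteration above can be sketched in numpy as follows. This is a prototype written from the description in this section, not the authors' code; in particular, the σ decay schedule and the fixed λ are illustrative assumptions the paper does not specify:

```python
import numpy as np

def rwls_sl0_pks(Phi, y, T0, n_iter=30, sigma=1.0, lam=1e-6):
    # Reweighted least squares for the smooth-L0 surrogate with PKS:
    #   x^{t+1} = W^t Phi^T (Phi W^t Phi^T + lam*I)^{-1} y,
    #   W_ii^t = (sigma^t + |x_i^t|)^2, with boosted weights on T0.
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        w = (sigma + np.abs(x)) ** 2
        w[T0] = 100.0 * w.max()        # entries in T0 get the largest weight
        G = (Phi * w) @ Phi.T          # Phi diag(w) Phi^T, computed column-wise
        x = w * (Phi.T @ np.linalg.solve(G + lam * np.eye(m), y))
        sigma *= 0.7                   # decreasing sigma sharpens the L0 surrogate
    return x

# Noiseless example: 2 of 4 support positions known a priori
rng = np.random.default_rng(1)
n, m = 64, 32
x_true = np.zeros(n)
S = rng.choice(n, size=4, replace=False)
x_true[S] = 2.0 * np.sign(rng.standard_normal(4))
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true
x_hat = rwls_sl0_pks(Phi, y, T0=S[:2])
```

Because the weights stay strictly positive, each update is a well-posed regularized least-squares step that keeps the estimate consistent with the measurements.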
4. EXPERIMENTAL RESULTS

This section illustrates the effectiveness of the partially known support algorithms by means of numerical examples. For the first experiment, we create synthetic sparse vectors, setting the length of the signal to n = 1000 and the sparsity level to 50. The nonzero coefficients are drawn from a Rademacher distribution with amplitudes {−10, 10}, and their positions are randomly chosen. The vectors are sampled using measurement matrices Φ with i.i.d. entries drawn from a standard normal distribution and normalized columns. Each experiment is averaged over 100 repetitions. We study the performance of Orthogonal Matching Pursuit with partially known support (OMP-PKS), CoSaMP with partially known support (CoSaMP-PKS), and the iterative reweighted least squares approach based on a smooth approximation of the L0 norm with partially known support (rwls-SL0-PKS). For these algorithms, we analyze the effect of the partially known support by increasing it in steps of 10% for different numbers of measurements. The comparison is shown in Fig. 1, with the reconstruction SNR (R-SNR) as the performance measure. As expected, the reconstruction is more accurate when the percentage of known support is larger. Notably, the number of measurements required by CoSaMP-PKS to achieve a prescribed R-SNR is smaller than that required by OMP-PKS.

As a final experiment, we illustrate the performance of the algorithms on real compressible signals. We choose ECG signals due to their structure, which is well suited for sparse decomposition. The experiment is carried out over 10-min long leads extracted from records 100, 101, 102, 103, 107, 109, 111, 115, 117, 118 and 119 of the MIT-BIH Arrhythmia Database (see [9] and references therein). To find a sparse representation of the signal, we use the nearly perfect reconstruction cosine modulated filter banks designed in [9].
The low-pass approximation of the first subband accumulates the majority of the energy of the signal, corresponding to the first 64 coefficients when processing 1024 samples of ECG data with the number of channels, M, set to 16. Therefore, the positions of these coefficients are selected as the partially known support of the signal. The sparsity level is set to 128. We compare the performance of OMP, CoSaMP, rwls-SL0 and their partially known support versions for ECG signals when varying the number of samples. For comparison purposes, Basis Pursuit (BP) and Basis Pursuit with partially known support (denoted BP-PKS), proposed in [7], are included. The results are shown in Fig. 3. For this example, rwls-SL0-PKS has the best performance for all numbers of samples, being slightly better than BP-PKS. Surprisingly, OMP-PKS performs better than CoSaMP-PKS for small numbers of measurements and yields reconstructions similar to those of the more computationally expensive BP-PKS and rwls-SL0-PKS.
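The synthetic setup of the first experiment and the performance measure can be sketched as follows. The paper does not state its exact R-SNR formula, so the standard definition 20 log10(||x||_2 / ||x − x̂||_2) is assumed here:

```python
import numpy as np

rng = np.random.default_rng(3)
n, s, m = 1000, 50, 300

# Sparse test vector: Rademacher signs scaled to amplitude 10,
# on a randomly chosen support (as in the first experiment)
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = 10.0 * rng.choice([-1.0, 1.0], size=s)

# Gaussian measurement matrix with normalized columns
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)
y = Phi @ x

def r_snr(x, x_hat):
    # Reconstruction SNR in dB (standard definition, assumed)
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))

print(r_snr(x, x + 1e-3 * rng.standard_normal(n)))  # large value: near-perfect estimate
```

Any of the PKS algorithms above can be plugged in to produce x̂ from (Φ, y) and scored with r_snr.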
Fig. 1. Comparison of the partially known support algorithms varying the number of measurements and the percentage of partially known support. Left: OMP-PKS, middle: CoSaMP-PKS, right: rwls-SL0-PKS.
Fig. 2. Decomposition of an ECG signal using CMFB, M = 16 (top: ECG signal; bottom: its representation in the wavelet domain).

Fig. 3. Comparison of BP, OMP, CoSaMP, rwls-SL0 and their partially known support versions for ECG signals.

5. CONCLUSIONS

In this paper, we modify three iterative algorithms to incorporate partially known support in the recovery process. The performance and effect of the prior information are studied through extensive simulations. Results show that the modification of iterative algorithms improves their performance, thereby needing fewer samples to yield an approximate reconstruction. Although there are not yet theoretical guarantees for these algorithms, they represent a promising direction of research. Future work includes analyzing whether the theoretical stability guarantees of CoSaMP and rwls-SL0 can be extended to their partially known support counterparts.

6. REFERENCES

[1] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, Mar. 2008.
[2] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[3] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301–321, 2009.
[4] R. Chartrand and V. Staneva, "Restricted isometry properties and nonconvex compressive sensing," Inverse Problems, vol. 24, no. 3, 2008.
[5] R. E. Carrillo and K. E. Barner, "Iteratively re-weighted least squares for sparse signal reconstruction from noisy measurements," in Proceedings, CISS 2009, March 2009.
[6] M. Duarte, C. Hegde, V. Cevher, and R. Baraniuk, "Recovery of compressible signals in unions of subspaces," in Proceedings, CISS 2009, March 2009.
[7] N. Vaswani and W. Lu, "Modified-CS: Modifying compressive sensing for problems with partially known support," in Proceedings, IEEE Int. Symp. Info. Theory, 2009.
[8] L. Jacques, "A short note on compressed sensing with partially known signal support," Technical Report, Université catholique de Louvain, Aug. 2009.
[9] M. Blanco-Velasco, F. Cruz-Roldán, E. Moreno-Martínez, J. Godino-Llorente, and K. E. Barner, "Embedded filter bank-based algorithm for ECG compression," Signal Processing, vol. 88, no. 6, pp. 1402–1412, 2008.