IEEE SIGNAL PROCESSING LETTERS, VOL. 19, NO. 7, JULY 2012

419

A New Reweighted Algorithm With Support Detection for Compressed Sensing Qin Li, Jianwei Ma, and Gordon Erlebacher

Abstract—We propose a new iterative reweighted algorithm with iterative support detection (referred to as RISD) to improve the decoding performance for compressed sensing (CS). The support detection from previous iterations can be interpreted as extracting “prior information” that allows for different reweighting strategies within and without the detected support. The proposed RISD method achieves better sparsity-measurement tradeoff than both classical algorithms and iteratively reweighted algorithms. Index Terms—Compressed sensing, imaging, reweighted algorithm, support detection.

I. INTRODUCTION

C

S theory [2], [4] states that a signal can be reconstructed from a small number of random linear measurements using sparsity-promoting (e.g., -norm regularization, ) optimal algorithms, if the two following fundamental conditions are satisfied: 1) the signals are compressible, i.e., they can be sparsely represented by a dictionary; 2) the linear measurement matrix is incoherent to the dictionary. The number of measurements is much smaller than that suggested by the Shannon-Nyquist sampling theorem. CS has led to many impact applications that revolved around cost reduction of the signal acquisition. The price to pay, of course, is more expensive signal reconstruction. This letter concentrates on the development of a decoding algorithm within the context of CS. Without loss of generality, consider a sparse unknown signal , which has a limited number of nonzero entires (i.e., ). To reconstruct from linear measurements, one solves the NP-hard optimization problem [4]: (1)

Manuscript received March 10, 2012; revised April 30, 2012; accepted May 01, 2012. Date of publication May 09, 2012; date of current version May 25, 2012. The work of J. Ma was supported by the Program for New Century Excellent Talents in University (NCET-11-0804). The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Petros T. Boufounos. Q. Li and G. Erlebacher are with the Department of Scientific Computing, Florida State University, Tallahassee, FL 32306-4120 USA (e-mail: [email protected]; [email protected]). J. Ma is with the Institute of Applied Mathematics, Harbin Institute of Technology, 150001 Harbin, China (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/LSP.2012.2198641

where , , . An alternative, and widely used, strategy is to relax the norm to a convex norm, which leads to the more tractable problem: (2) regularMany approaches have been proposed to solve the ized convex optimal problem in (2), e.g., interior-point and iterative shrinkage/thresholding (IST) algorithms [1]. Recently, (1) has also been approximated by replacing the norm with the nonconvex norm ( ). Approximate solutions to the resulting nonconvex optimization problem are obtained through a transformation of the original problem into a sequence of convex problems using iterative reweighting [13]. For certain types of signals, the reweighted algorithms achieve better sparsity-measurement tradeoff than traditional algorithms without reweighting (see e.g., [3], [11], [6]). An algorithm with better sparsity-measurement tradeoff achieves higher decoding quality for a given number of measurements, or equivalently, a lower number of measurements or sometimes fewer iterations for a given decoding quality. However, reweighted algorithms usually execute slower. The motivation of our letter is to speed up the reweighted algorithms by incorporating it with a technique of support detection. Whether reweighted algorithms achieve better sparsity-measurement tradeoff still depends on the underlying sparse signal. For instance, for recovering sparse Bernoulli signals from noiseless measurements, reweighted algorithms only gain slight improvements over convex minimization [3]. It is well known that the sparsity-measurements tradeoff of the existing CS recovery methods can be further improved if the algorithms incorporate prior information derived from the signal, such as support (i.e., the locations of the nonzero entries) or statistical information. For example, given the support of the signal, it is possible to reduce the number of measurements necessary to reconstruct the signal [9], [7], [15]. Let be the support of the signal with the subset of , which is the prior information that satisfies the condition (3) With such information, one may recover the signal from [9] (4) Model (4) does not penalize the nonzero terms whose locations are known, which differs from model (2) where all terms are

1070-9908/$31.00 © 2012 IEEE

420

IEEE SIGNAL PROCESSING LETTERS, VOL. 19, NO. 7, JULY 2012

treated equally. Using a structure similar to that of (4), one can formulate the norm-based problem [7] as

, Input: , . Initialize a set and the stopping criterion is not met, do 1) Update according to

. While

(5) in order to reduce the number of necessary measurements. In [6], the authors consider a nonuniform sparse model using prior information based on a partitioning of the signal entries according to its probability of being zero. Even in the absence of available prior information, one can still improve on the sparsity-measurement tradeoff by properly extracting some useful information implied by the current solution and incorporate it with the next iteration. One such approach is iterative support detection (ISD) proposed by Wang and Yin [10]. ISD iteratively solves problems in the same class as (4), where is detected automatically and varies from iteration to iteration. At iteration , the entries of are separated into and where the magtwo subsets, in the former set are greater than a given nitude of the entries in the latter set are smaller than . The ISD threshold and algorithms is more efficient than most iterative reweighted algorithms in terms of sparsity-measurement tradeoff and computational time. The idea of support detection was further generalized to edge-detection reconstruction [5] and partially known support [8] applied to image recovery. The novel contribution of this letter is a new and fast reweighted algorithm with iterative support detection (RISD) for CS decoding. Numerical results show good performance of the proposed method on synthetic 1-D and 2-D data, in comparison to several existing algorithms. II. PROPOSED ALGORITHM Before describing our method, we first review the iterative algorithm (IRL1) [3] and the ISD algorithm [10]. reweighted A. Review of IRL1 and ISD 1) IRL1: IRL1 [3] iteratively solves a sequence of weighted subproblems

(8) 2) Update the threshold according to the first jump rule. in ascending order and let be the magnia) Sort tude of the th largest entry of . b) Locate the smallest such that (9) with (10) with some constant , which will be discussed in the numerical experiment section. . c) Set , 3) Update the detected support . , the support If one assumes that at a certain iteration, of the underlying true signal , then the solution to (8) is simply . However, contains only a subset of the full signal sup. In this case, the iteration will produce an import, i.e., proved approximation to the underlying sparse signal and thus help to determine additional entries in . It is often the case that includes some falsely detected entries. However, numerical results strongly suggest that the algorithm has a self-correction capacity. B. RISD Algorithm Our RISD algorithm is outlined as follows. Input: , , ( ) , , , set . 1) Initialize 2) While the stopping criterion is not met, do a) Compute by solving a constrained optimization: (11)

(6) where the weights

are updated according to (7)

It has been shown that converges to a local [3]. The log-sum function is a minimum of good approximation to for small . The regularization parameter is introduced to prevent undefined weights. In practice, to avoid the convergence to local minima, we initialize the algorithm with a large and then gradually decrease its value [13], as was done in the Graduated Non-Convexity method [14]. A few other iterative reweighted algorithms (e.g., [13], [11], [6]) have also been proposed for CS decoding. Generally, these algorithms are slower than most classical -based algorithms because they require iterative computation for the “reweightings.” However, they yield a much better sparsity-measurement tradeoff. 2) ISD: The algorithmic framework for ISD [10] is described as follows.

or an unconstrained minimization problem (if the measurements contain noise): (12) b) Update according to the first jump rule. c) Update the detected support: (13) d) Update weights according to (14) . e) Another interpretation of the above algorithm is as an iterative reweighted algorithm with a nonuniform weighting scheme. The formula for updating weights adapts to each entry. Any

LI et al.: NEW REWEIGHTED ALGORITHM

has a higher probability of being included in the support. The weights of are chosen according to (7) with no regare positive, their ularization parameter. Since the entries in have a reciprocals are well defined without . The higher probability of being zero, so should have larger weights. Ultimately, as they tend to zero, the weight should go to . is imperfect, and the parameter Naturally, the detection of measures the reliability of . The smaller , the more reliand hence the larger the associated weights. able With the above trends, we choose for the weights the simple for the entries in . This choice satisfies the relationship heuristic analysis and also guarantees that the weights for those are smaller than the weights for entries in due entries in to (13). The advantages of our choice over strategies such as (7) are 1) no regularization parameter is needed and 2) the weights for all entries are adaptively changed. Also note that when is sufficiently small, the solution to (11) is equivalent to the solution of (8). Thus the proposed algorithm can also be viewed as a general version of ISD. III. NUMERICAL RESULTS We have performed several numerical tests to examine the performance of the proposed algorithm and compared our apalgorithm (without proach with ISD, IRL1, and a traditional support detection and reweight). We present three sets of experiments. In Test1, we compare the average successful recovery rate and computing time of the algorithms for Gaussian random sparse signals (i.e., the randomly located nonzero entries are i.i.d, sampled from a Gaussian distribution). In Test2, we measure the sensitivity of RISD and ISD to the parameter in (10) when applied to Gaussian sparse and uniform-distributed sparse signals. In Test3, we present an example of image reconstruction. A. Test 1 In this test, the signal length , the sparsity and the number of measurements varies from 230 to 400 orwith a step size of 5. The measurement matrix is a thogonalized Gaussian random matrix. Each signal is Gaussian random sparse. White noise with zero mean and variance is superimposed on the measurement vector . The maxis used for all algorithms. imum number of iterations alWe choose the YALL1 algorithm [12] as the traditional subproblem involved gorithm, and also use it to solve the in ISD, IRL1 and RISD, to provide a fair comparison of computing time. Each problem has its own stopping criterion. For , the relative ISD and RISD, iterations stop once ( ) and change in ( ). For IRL1, each iteration stops when . is the relative In the above, change in . When we apply IRL1, the weights are updated ac, cording to (7). The parameter which determines the first significant jump, is based that found in [10]. Some discussion of this setting is presented in test 2. In the presence of noise, an exact reconstruction of the signal is not possible. We view a recovery as a success if ( ), and as a failure otherwise. For each , we consider 50 runs. Fig. 1 presents the success rate (left plot) and CPU time (right plot) for the algorithms. Based on the rate of successful recoveries, RISD marginally outperforms ISD. They

421

Fig. 1. (Left) Comparison in successful recovery rate and (right) computing time of , ISD, RISD, and IRL1. Each point on the curve is the average over 50 runs; the error bars indicate the standard deviation.

Fig. 2. Illustration of average performance by ISD/RISD with different ; -axes: successful recovery rate. threshold settings, and IRL1. -axes: (Left) Gaussian random signals. (Right) uniform-distributed signals.

are both better than IRL1 and much better than the traditional algorithm. From an efficiency standpoint, RISD, ISD and have run in similar wall clock times and they are both are faster than IRL1. B. Test 2 , , and varies from 85 to 140 In this test, with a step size of 5. We set up three ways to determine (with the methods they are applied to): 1) the formula from test1 (ISD, (ISD-1, RISD-1), RISD), 2) a smaller jump (ISD-3, RISD-3). and 3) a larger jump We test these six algorithms and IRL1 for signals characterized by noise-free measurements. In additional to the Gaussian random sparse signals (GRSS), we also considered sparse Uniform Random Sparse Signals (URSS) with nonzeros entries uniat random locations. The left and formly distributed in right plots of Fig. 2 present the results for GRSS and URSS respectively. RISD and RISD-1 perform similarly for GRSS, and outperform the other algorithms. RISD-3 yields the worst results among all RISD-based algorithms. We also observe that ISD-based algorithms are more sensitive to the choice of first jump setting influences than the others. ISD is comparable to RISD and RISD-1, while ISD-1 is the worst among them. For URSS, all RISD-based algorithms produce similar results and are better or comparable to ISD and ISD-3. With the larger jump the detection of support becomes more conservative. Less entries are filtered to the support. In other words, it is more likely that a higher number of true nonzeros are so that contains more falsely detected zeros. placed in Similarly, when the jump is smaller, the detection of support is more greedy. In such a case, may contain more falsely detected nonzero entries. Numerical results indicate that too many falsely detected zeros for ISD strongly deteriorates its performance, although the self-correction capability of false detected nonzeros for ISD is good. Contrary to ISD, RISD has a good self-correction capability of falsely detected zeros.

422

IEEE SIGNAL PROCESSING LETTERS, VOL. 19, NO. 7, JULY 2012

of 20 independent runs. For in the range 18%–20% of , RISD is more accurate than its competitors discussed in this letter, and has a smaller variance. On the efficiency side, RISD and ISD behave similarly and compare favorably to YALL1. IV. SUMMARY In this letter, we proposed a new iterative reweighted algorithm, RISD, for compressive sensing. RISD is based on a recent support detection technique called ISD, with the following differences: 1) the object function contains all entries in with different reweighting strategies while ISD treats a truncated problem in each iteration, where the entries of detected support are not penalized; 2) compared to ISD, RISD has improved handling of falsely detected zeros. Both algorithms have similar rates of convergence. On average, RISD marginally improved on ISD in all the tests presented. As a reweighted algorithm, RISD is both faster than the previous iterative reweighted algorithms such as IRL1 and offers improved sparsity-measurement tradeoff. Fig. 3. Comparison of reconstructing a 128 128 phantom by (a) IRL1, (19.8%). The (b) ISD, (c) RISD, and (d) . Number of sampling, . noise level for the measurements

ACKNOWLEDGMENT The authors thank W. Yin for providing the ISD code. REFERENCES

Fig. 4. (Left) Reconstruction error versus for reconstructing the phantom.

and (right) CPU time versus

In summary, ISD and RISD use an identical support detection technique but they are influenced by the threshold in opposite ways. C. Test 3 We recover a standard 128 128 image (named SheppLogan phantom) from the measurements generated by several partial discrete cosine transforms and contaminated by white noise. This image is not sparse but its wavelet coefficients are, so that the signal is transform sparse. In this case, simply replace by in our method, where is the wavelet transform. We . When also add a noise component to the image with is large enough, all algorithms recover the image very well. , ISD may fail badly. Overall, We notice that when RISD generated much better results than ISD. We present the results in Fig. 3. In this case, RISD recovers the image with the highest signal-to-noise ratio (snr). Although the recovery is not very accurate, all important features of the Phantom are well reconstructed in the sense that one can easily detect the three ellipses and the five small structures. The computing time is 40% higher than that of YALL1 and slightly less than ISD. range from 2359 to 5392, we show the results in Letting Fig. 4. Each point on the curve is the average of the results

[1] A. Bruckstein, D. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Rev., vol. 51, no. 1, pp. 34–81, 2009. [2] E. Candes, Romberg, and T. Tao, “Rbust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006. [3] E. Candes, M. Wakin, and S. Boyd, “Enhancing sparsity by reweighted minimization,” J. Fourier Anal. Appl., vol. 14, no. 5, pp. 877–905, 2008. [4] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006. [5] W. Guo and W. Yin, “EdgeCS: An edge guided compressive sensing reconstruction,” in Proc. VCIP, 2010, vol. 7744. [6] M. Khajehnejad, W. Xu, A. Avestimehr, and B. Hassibe, “Analyzing weighted minimization for sparse recovery with nonuniform sparse models,” IEEE Trans. Signal Process., vol. 59, no. 5, pp. 1985–2001, 2011. [7] C. Miosso, R. von Borries, M. Argaez, L. Velazquez, C. Quintero, and C. Potes, “Compressive sensing reconstruction with prior information by iteratively reweighted least-squares,” IEEE Trans. Signal Process., vol. 57, no. 6, pp. 2424–2431, 2009. [8] N. Vaswani and W. Lu, “Modified-CS: Modifying compressive sensing for problems with partially known support,” IEEE Trans. Signal Process., vol. 58, no. 9, pp. 4595–4607, 2010. [9] R. von Borries, C. Miosso, and C. Potes, “Compressed sensing using prior information,” in Proc. 2nd IEEE Int. Workshop Comput. Adv. Multi-Sensor Adaptive Process, 2007, pp. 121–124. [10] Y. Wang and W. Yin, “Sparse signal reconstruction via iterative support detection,” SIAM J. Imag. Sci., vol. 3, no. 3, pp. 462–491, 2010. [11] D. Wipf and S. Nagarajan, “Iterative reweighted and methods for finding sparse solutions,” IEEE J. Sel. Topics in Signal Process., vol. 4, no. 2, pp. 317–329, 2010. prob[12] J. Yang and Y. Zhang, “Alternating direction algorithms for lems in compressive sensing,” SIAM J. Sci. Comput., vol. 33, no. 1, pp. 250–278, 2011. [13] R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in 33rd ICASSP, 2008. [14] A. Blake and A. Zisserman, Visual Reconstruction. Cambridge, MA: MIT Press, 1987. [15] M. Friedlander, H. Mansour, R. Saab, and O. Yilmaz, “Recovering compressively sampled signals using partial support information,” IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 1122–1134, 2012.

419

A New Reweighted Algorithm With Support Detection for Compressed Sensing Qin Li, Jianwei Ma, and Gordon Erlebacher

Abstract—We propose a new iterative reweighted algorithm with iterative support detection (referred to as RISD) to improve the decoding performance for compressed sensing (CS). The support detection from previous iterations can be interpreted as extracting “prior information” that allows for different reweighting strategies within and without the detected support. The proposed RISD method achieves better sparsity-measurement tradeoff than both classical algorithms and iteratively reweighted algorithms. Index Terms—Compressed sensing, imaging, reweighted algorithm, support detection.

I. INTRODUCTION

C

S theory [2], [4] states that a signal can be reconstructed from a small number of random linear measurements using sparsity-promoting (e.g., -norm regularization, ) optimal algorithms, if the two following fundamental conditions are satisfied: 1) the signals are compressible, i.e., they can be sparsely represented by a dictionary; 2) the linear measurement matrix is incoherent to the dictionary. The number of measurements is much smaller than that suggested by the Shannon-Nyquist sampling theorem. CS has led to many impact applications that revolved around cost reduction of the signal acquisition. The price to pay, of course, is more expensive signal reconstruction. This letter concentrates on the development of a decoding algorithm within the context of CS. Without loss of generality, consider a sparse unknown signal , which has a limited number of nonzero entires (i.e., ). To reconstruct from linear measurements, one solves the NP-hard optimization problem [4]: (1)

Manuscript received March 10, 2012; revised April 30, 2012; accepted May 01, 2012. Date of publication May 09, 2012; date of current version May 25, 2012. The work of J. Ma was supported by the Program for New Century Excellent Talents in University (NCET-11-0804). The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Petros T. Boufounos. Q. Li and G. Erlebacher are with the Department of Scientific Computing, Florida State University, Tallahassee, FL 32306-4120 USA (e-mail: [email protected]; [email protected]). J. Ma is with the Institute of Applied Mathematics, Harbin Institute of Technology, 150001 Harbin, China (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/LSP.2012.2198641

where , , . An alternative, and widely used, strategy is to relax the norm to a convex norm, which leads to the more tractable problem: (2) regularMany approaches have been proposed to solve the ized convex optimal problem in (2), e.g., interior-point and iterative shrinkage/thresholding (IST) algorithms [1]. Recently, (1) has also been approximated by replacing the norm with the nonconvex norm ( ). Approximate solutions to the resulting nonconvex optimization problem are obtained through a transformation of the original problem into a sequence of convex problems using iterative reweighting [13]. For certain types of signals, the reweighted algorithms achieve better sparsity-measurement tradeoff than traditional algorithms without reweighting (see e.g., [3], [11], [6]). An algorithm with better sparsity-measurement tradeoff achieves higher decoding quality for a given number of measurements, or equivalently, a lower number of measurements or sometimes fewer iterations for a given decoding quality. However, reweighted algorithms usually execute slower. The motivation of our letter is to speed up the reweighted algorithms by incorporating it with a technique of support detection. Whether reweighted algorithms achieve better sparsity-measurement tradeoff still depends on the underlying sparse signal. For instance, for recovering sparse Bernoulli signals from noiseless measurements, reweighted algorithms only gain slight improvements over convex minimization [3]. It is well known that the sparsity-measurements tradeoff of the existing CS recovery methods can be further improved if the algorithms incorporate prior information derived from the signal, such as support (i.e., the locations of the nonzero entries) or statistical information. For example, given the support of the signal, it is possible to reduce the number of measurements necessary to reconstruct the signal [9], [7], [15]. Let be the support of the signal with the subset of , which is the prior information that satisfies the condition (3) With such information, one may recover the signal from [9] (4) Model (4) does not penalize the nonzero terms whose locations are known, which differs from model (2) where all terms are

1070-9908/$31.00 © 2012 IEEE

420

IEEE SIGNAL PROCESSING LETTERS, VOL. 19, NO. 7, JULY 2012

treated equally. Using a structure similar to that of (4), one can formulate the norm-based problem [7] as

, Input: , . Initialize a set and the stopping criterion is not met, do 1) Update according to

. While

(5) in order to reduce the number of necessary measurements. In [6], the authors consider a nonuniform sparse model using prior information based on a partitioning of the signal entries according to its probability of being zero. Even in the absence of available prior information, one can still improve on the sparsity-measurement tradeoff by properly extracting some useful information implied by the current solution and incorporate it with the next iteration. One such approach is iterative support detection (ISD) proposed by Wang and Yin [10]. ISD iteratively solves problems in the same class as (4), where is detected automatically and varies from iteration to iteration. At iteration , the entries of are separated into and where the magtwo subsets, in the former set are greater than a given nitude of the entries in the latter set are smaller than . The ISD threshold and algorithms is more efficient than most iterative reweighted algorithms in terms of sparsity-measurement tradeoff and computational time. The idea of support detection was further generalized to edge-detection reconstruction [5] and partially known support [8] applied to image recovery. The novel contribution of this letter is a new and fast reweighted algorithm with iterative support detection (RISD) for CS decoding. Numerical results show good performance of the proposed method on synthetic 1-D and 2-D data, in comparison to several existing algorithms. II. PROPOSED ALGORITHM Before describing our method, we first review the iterative algorithm (IRL1) [3] and the ISD algorithm [10]. reweighted A. Review of IRL1 and ISD 1) IRL1: IRL1 [3] iteratively solves a sequence of weighted subproblems

(8) 2) Update the threshold according to the first jump rule. in ascending order and let be the magnia) Sort tude of the th largest entry of . b) Locate the smallest such that (9) with (10) with some constant , which will be discussed in the numerical experiment section. . c) Set , 3) Update the detected support . , the support If one assumes that at a certain iteration, of the underlying true signal , then the solution to (8) is simply . However, contains only a subset of the full signal sup. In this case, the iteration will produce an import, i.e., proved approximation to the underlying sparse signal and thus help to determine additional entries in . It is often the case that includes some falsely detected entries. However, numerical results strongly suggest that the algorithm has a self-correction capacity. B. RISD Algorithm Our RISD algorithm is outlined as follows. Input: , , ( ) , , , set . 1) Initialize 2) While the stopping criterion is not met, do a) Compute by solving a constrained optimization: (11)

(6) where the weights

are updated according to (7)

It has been shown that converges to a local [3]. The log-sum function is a minimum of good approximation to for small . The regularization parameter is introduced to prevent undefined weights. In practice, to avoid the convergence to local minima, we initialize the algorithm with a large and then gradually decrease its value [13], as was done in the Graduated Non-Convexity method [14]. A few other iterative reweighted algorithms (e.g., [13], [11], [6]) have also been proposed for CS decoding. Generally, these algorithms are slower than most classical -based algorithms because they require iterative computation for the “reweightings.” However, they yield a much better sparsity-measurement tradeoff. 2) ISD: The algorithmic framework for ISD [10] is described as follows.

or an unconstrained minimization problem (if the measurements contain noise): (12) b) Update according to the first jump rule. c) Update the detected support: (13) d) Update weights according to (14) . e) Another interpretation of the above algorithm is as an iterative reweighted algorithm with a nonuniform weighting scheme. The formula for updating weights adapts to each entry. Any

LI et al.: NEW REWEIGHTED ALGORITHM

has a higher probability of being included in the support. The weights of are chosen according to (7) with no regare positive, their ularization parameter. Since the entries in have a reciprocals are well defined without . The higher probability of being zero, so should have larger weights. Ultimately, as they tend to zero, the weight should go to . is imperfect, and the parameter Naturally, the detection of measures the reliability of . The smaller , the more reliand hence the larger the associated weights. able With the above trends, we choose for the weights the simple for the entries in . This choice satisfies the relationship heuristic analysis and also guarantees that the weights for those are smaller than the weights for entries in due entries in to (13). The advantages of our choice over strategies such as (7) are 1) no regularization parameter is needed and 2) the weights for all entries are adaptively changed. Also note that when is sufficiently small, the solution to (11) is equivalent to the solution of (8). Thus the proposed algorithm can also be viewed as a general version of ISD. III. NUMERICAL RESULTS We have performed several numerical tests to examine the performance of the proposed algorithm and compared our apalgorithm (without proach with ISD, IRL1, and a traditional support detection and reweight). We present three sets of experiments. In Test1, we compare the average successful recovery rate and computing time of the algorithms for Gaussian random sparse signals (i.e., the randomly located nonzero entries are i.i.d, sampled from a Gaussian distribution). In Test2, we measure the sensitivity of RISD and ISD to the parameter in (10) when applied to Gaussian sparse and uniform-distributed sparse signals. In Test3, we present an example of image reconstruction. A. Test 1 In this test, the signal length , the sparsity and the number of measurements varies from 230 to 400 orwith a step size of 5. The measurement matrix is a thogonalized Gaussian random matrix. Each signal is Gaussian random sparse. White noise with zero mean and variance is superimposed on the measurement vector . The maxis used for all algorithms. imum number of iterations alWe choose the YALL1 algorithm [12] as the traditional subproblem involved gorithm, and also use it to solve the in ISD, IRL1 and RISD, to provide a fair comparison of computing time. Each problem has its own stopping criterion. For , the relative ISD and RISD, iterations stop once ( ) and change in ( ). For IRL1, each iteration stops when . is the relative In the above, change in . When we apply IRL1, the weights are updated ac, cording to (7). The parameter which determines the first significant jump, is based that found in [10]. Some discussion of this setting is presented in test 2. In the presence of noise, an exact reconstruction of the signal is not possible. We view a recovery as a success if ( ), and as a failure otherwise. For each , we consider 50 runs. Fig. 1 presents the success rate (left plot) and CPU time (right plot) for the algorithms. Based on the rate of successful recoveries, RISD marginally outperforms ISD. They

421

Fig. 1. (Left) Comparison in successful recovery rate and (right) computing time of , ISD, RISD, and IRL1. Each point on the curve is the average over 50 runs; the error bars indicate the standard deviation.

Fig. 2. Illustration of average performance by ISD/RISD with different ; -axes: successful recovery rate. threshold settings, and IRL1. -axes: (Left) Gaussian random signals. (Right) uniform-distributed signals.

are both better than IRL1 and much better than the traditional algorithm. From an efficiency standpoint, RISD, ISD and have run in similar wall clock times and they are both are faster than IRL1. B. Test 2 , , and varies from 85 to 140 In this test, with a step size of 5. We set up three ways to determine (with the methods they are applied to): 1) the formula from test1 (ISD, (ISD-1, RISD-1), RISD), 2) a smaller jump (ISD-3, RISD-3). and 3) a larger jump We test these six algorithms and IRL1 for signals characterized by noise-free measurements. In additional to the Gaussian random sparse signals (GRSS), we also considered sparse Uniform Random Sparse Signals (URSS) with nonzeros entries uniat random locations. The left and formly distributed in right plots of Fig. 2 present the results for GRSS and URSS respectively. RISD and RISD-1 perform similarly for GRSS, and outperform the other algorithms. RISD-3 yields the worst results among all RISD-based algorithms. We also observe that ISD-based algorithms are more sensitive to the choice of first jump setting influences than the others. ISD is comparable to RISD and RISD-1, while ISD-1 is the worst among them. For URSS, all RISD-based algorithms produce similar results and are better or comparable to ISD and ISD-3. With the larger jump the detection of support becomes more conservative. Less entries are filtered to the support. In other words, it is more likely that a higher number of true nonzeros are so that contains more falsely detected zeros. placed in Similarly, when the jump is smaller, the detection of support is more greedy. In such a case, may contain more falsely detected nonzero entries. Numerical results indicate that too many falsely detected zeros for ISD strongly deteriorates its performance, although the self-correction capability of false detected nonzeros for ISD is good. Contrary to ISD, RISD has a good self-correction capability of falsely detected zeros.

422

IEEE SIGNAL PROCESSING LETTERS, VOL. 19, NO. 7, JULY 2012

of 20 independent runs. For in the range 18%–20% of , RISD is more accurate than its competitors discussed in this letter, and has a smaller variance. On the efficiency side, RISD and ISD behave similarly and compare favorably to YALL1. IV. SUMMARY In this letter, we proposed a new iterative reweighted algorithm, RISD, for compressive sensing. RISD is based on a recent support detection technique called ISD, with the following differences: 1) the object function contains all entries in with different reweighting strategies while ISD treats a truncated problem in each iteration, where the entries of detected support are not penalized; 2) compared to ISD, RISD has improved handling of falsely detected zeros. Both algorithms have similar rates of convergence. On average, RISD marginally improved on ISD in all the tests presented. As a reweighted algorithm, RISD is both faster than the previous iterative reweighted algorithms such as IRL1 and offers improved sparsity-measurement tradeoff. Fig. 3. Comparison of reconstructing a 128 128 phantom by (a) IRL1, (19.8%). The (b) ISD, (c) RISD, and (d) . Number of sampling, . noise level for the measurements

ACKNOWLEDGMENT The authors thank W. Yin for providing the ISD code. REFERENCES

Fig. 4. (Left) Reconstruction error versus for reconstructing the phantom.

and (right) CPU time versus

In summary, ISD and RISD use an identical support detection technique but they are influenced by the threshold in opposite ways. C. Test 3 We recover a standard 128 128 image (named SheppLogan phantom) from the measurements generated by several partial discrete cosine transforms and contaminated by white noise. This image is not sparse but its wavelet coefficients are, so that the signal is transform sparse. In this case, simply replace by in our method, where is the wavelet transform. We . When also add a noise component to the image with is large enough, all algorithms recover the image very well. , ISD may fail badly. Overall, We notice that when RISD generated much better results than ISD. We present the results in Fig. 3. In this case, RISD recovers the image with the highest signal-to-noise ratio (snr). Although the recovery is not very accurate, all important features of the Phantom are well reconstructed in the sense that one can easily detect the three ellipses and the five small structures. The computing time is 40% higher than that of YALL1 and slightly less than ISD. range from 2359 to 5392, we show the results in Letting Fig. 4. Each point on the curve is the average of the results

[1] A. Bruckstein, D. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Rev., vol. 51, no. 1, pp. 34–81, 2009. [2] E. Candes, Romberg, and T. Tao, “Rbust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006. [3] E. Candes, M. Wakin, and S. Boyd, “Enhancing sparsity by reweighted minimization,” J. Fourier Anal. Appl., vol. 14, no. 5, pp. 877–905, 2008. [4] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006. [5] W. Guo and W. Yin, “EdgeCS: An edge guided compressive sensing reconstruction,” in Proc. VCIP, 2010, vol. 7744. [6] M. Khajehnejad, W. Xu, A. Avestimehr, and B. Hassibe, “Analyzing weighted minimization for sparse recovery with nonuniform sparse models,” IEEE Trans. Signal Process., vol. 59, no. 5, pp. 1985–2001, 2011. [7] C. Miosso, R. von Borries, M. Argaez, L. Velazquez, C. Quintero, and C. Potes, “Compressive sensing reconstruction with prior information by iteratively reweighted least-squares,” IEEE Trans. Signal Process., vol. 57, no. 6, pp. 2424–2431, 2009. [8] N. Vaswani and W. Lu, “Modified-CS: Modifying compressive sensing for problems with partially known support,” IEEE Trans. Signal Process., vol. 58, no. 9, pp. 4595–4607, 2010. [9] R. von Borries, C. Miosso, and C. Potes, “Compressed sensing using prior information,” in Proc. 2nd IEEE Int. Workshop Comput. Adv. Multi-Sensor Adaptive Process, 2007, pp. 121–124. [10] Y. Wang and W. Yin, “Sparse signal reconstruction via iterative support detection,” SIAM J. Imag. Sci., vol. 3, no. 3, pp. 462–491, 2010. [11] D. Wipf and S. Nagarajan, “Iterative reweighted and methods for finding sparse solutions,” IEEE J. Sel. Topics in Signal Process., vol. 4, no. 2, pp. 317–329, 2010. prob[12] J. Yang and Y. Zhang, “Alternating direction algorithms for lems in compressive sensing,” SIAM J. Sci. Comput., vol. 33, no. 1, pp. 250–278, 2011. [13] R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” in 33rd ICASSP, 2008. [14] A. Blake and A. Zisserman, Visual Reconstruction. Cambridge, MA: MIT Press, 1987. [15] M. Friedlander, H. Mansour, R. Saab, and O. Yilmaz, “Recovering compressively sampled signals using partial support information,” IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 1122–1134, 2012.