2014 International Conference on Wireless Communication and Sensor Network
A Low Complexity Signal Recovery Algorithm Based on Compressed Sensing
Hao Wang, Shi-lian Wang, Er-yang Zhang
Wireless Communication Laboratory, College of Electronic Science and Engineering
National Univ. of Defense Technology, Changsha, China
Email: [email protected], [email protected]

Abstract—Compressed sensing (CS) can give a sparse representation of compressible signals. We consider the problem of blind signal recovery based on CS and propose a novel algorithm for sparse signal reconstruction with low complexity. From the CS sampled measurements, we first estimate the signal parameters, such as the carrier frequency, and then reconstruct the narrow-band signal using the estimated result. In particular, we focus on the noise performance of the algorithm and derive approximate analytical expressions for the output signal-to-noise ratio. Simulation results show that the proposed algorithm has good noise performance as well as low computational complexity.

Keywords—Compressed Sensing (CS); parameter estimation; signal reconstruction; sparsity

I. INTRODUCTION

In recent years, digital signal processing has become increasingly widespread and has driven the development of communication technology. Most conventional approaches to sampling signals follow the Whittaker-Shannon sampling theorem [1]: the sampling rate for perfect reconstruction of a signal must be at least twice the highest frequency of the signal. However, most man-made signals are sparse in the frequency domain. The usual way of dealing with a sparse signal is to sample it at the Nyquist rate or higher and transform the samples to the sparse domain; only the few large coefficients are then kept for processing, transmission and storage. Future wide-band sensors will be capable of scanning a wide band of frequencies, but current analog-to-digital conversion (ADC) systems cannot satisfy the need for wide-band signal acquisition. Landau [2] derived a minimal rate requirement for an arbitrary sampling method that allows perfect reconstruction when the band locations and their widths are known prior to sampling. The Landau rate is lower than the corresponding Nyquist rate and equals the sum of the band widths. For communication reconnaissance, however, the characteristics of the signals are hard and costly to obtain prior to sampling, and in most cases it is wasteful to acquire all of the spectrum information in the detection region. In the last five years, an alternative sampling theory of "compressed sensing" [3-5] has emerged which, in certain cases, needs far fewer measurements than Nyquist sampling.

Compressed sensing (CS) is a method of sampling sparse signals with fewer points than the Nyquist rate requires. Signal reconstruction in CS becomes an optimization problem. Several methods have been developed to compute sparse signal representations, including greedy algorithms, gradient descent, linear programming and global optimization. Two of these methods are the most popular: matching pursuit (MP) [6] and basis pursuit (BP) [7-8].

MP is a greedy algorithm. It finds the support of the sparse signal by iteratively selecting the atom that most improves the representation. MP has the advantage of being very fast and easy to implement. Several improved variants of MP, such as orthogonal matching pursuit (OMP) [9], stage-wise OMP (StOMP) [10] and compressive sampling MP (CoSaMP) [11], show better performance in signal reconstruction. A disadvantage of these greedy algorithms is their weak guarantee of exact recovery, which leads to poor performance when recovering signals mixed with noise.
Unlike greedy algorithms, BP is based on linear programming and is the typical representative of convex relaxation algorithms. It seeks representations that minimize the l1-norm of the coefficients. The advantage of relaxation algorithms is accurate reconstruction from very few measurements; however, BP is not suitable for practical implementation because of its high computational complexity. In this paper, we develop a novel algorithm for computing sparse signal representations that includes signal parameter estimation and signal recovery from noise. We consider narrow-band signals with only prior information on the minimum and maximum bandwidth. The carrier frequency location, which is very important for usual sparse signal sampling, can be estimated from the CS measurements in our algorithm. The remainder of the paper is organized as follows. The CS sampling model is defined in Section II-A and a brief review of CS reconstruction theory is given in Section II-B. Our proposed method of parameter estimation is analyzed in Section III-A, a reduced-complexity signal reconstruction is proposed in Section III-B, and a theoretical bound on CS recovery performance is derived in Section III-C. Simulation results are presented and discussed in Section IV, and conclusions and future prospects are given in Section V.
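As background for the comparisons reported later against the classical greedy baseline, the following is a minimal sketch of orthogonal matching pursuit in Python/NumPy. It is an illustrative toy, not the exact routine of [9]; the function name, stopping rule and variable names are our own choices.

```python
import numpy as np

def omp(A, y, K):
    """Minimal OMP sketch: recover an (approximately) K-sparse x from y = A x.

    A : (M, N) measurement matrix, y : (M,) measurement vector, K : assumed sparsity.
    """
    M, N = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(N, dtype=complex)
    for _ in range(K):
        # Greedy step: pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-estimation of the coefficients on the current support.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat
```

Each iteration is dominated by the correlation step over all N columns plus a small least-squares solve, which is the O(MNK) cost quoted in Section II-B.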
II. SIGNAL MODEL AND PROBLEM FORMULATION

A. CS Sampling Model

First, we review sampling a narrow-band signal from a wide-band receiver at rates higher than the Nyquist rate. The discrete digital signal on an observation interval can be written as

x(n) = (1/√N) Σ_{k=1}^{N} X(k) e^{j2πkn/N}    (1)

where X(k) is the Discrete Fourier Transform (DFT) of the sampled signal x(n) and N is the length of the signal on an observation interval. In this paper, we consider x(n) to be a narrow-band signal, which means X(k) has only K (or fewer than K) large coefficients (K ≪ N) and these large coefficients are located close together. Most linear modulation schemes (e.g., MPSK, MQAM) fit into this model. We rewrite (1) in vector-matrix form

x = (1/√N) F^H X    (2)

where F is the DFT matrix and F^H is the conjugate transpose of F, which means x is the Inverse Discrete Fourier Transform (IDFT) of X.

CS theory asserts that a sparse vector x ∈ C^N can be recovered from a vector y ∈ C^M. Here y is a linear measurement of x, i.e.

y = Ax + Az    (3)

where A denotes the M×N measurement matrix and z ∈ C^N is a zero-mean Gaussian white noise vector. From (2) and (3), we arrive at

y = (1/√N) A F^H (X + Z) = ΦX + ΦZ    (4)

Here Z is the DFT of the noise z and Φ = (1/√N) A F^H denotes the new measurement matrix, which has been proved to satisfy the Restricted Isometry Property (RIP) [12].

B. CS Signal Reconstruction Algorithms

CS signal reconstruction is a problem of computing sparse representations using an overcomplete dictionary. However, computing sparse representations is NP-hard. E. J. Candes has shown that if the measurement process satisfies the Restricted Isometry Property (RIP), the sparse signal can be reconstructed from the measurements y.

Definition 2.1 (Restricted Isometry Property). Let A denote an M×N matrix. For all K-sparse signals x, if there exists a δ_K ∈ (0,1) such that

(1 − δ_K) ‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ_K) ‖x‖_2^2

then we say A has the Restricted Isometry Property (RIP).

As stated in Section I, linear programming and greedy pursuit are the two popular classes of reconstruction algorithms. Linear programming focuses on solving a linear optimization problem; the sparse signal is given by solving the program

x̂ = arg min_x ‖x‖_{l1}  s.t.  ‖y − Φx‖_{l2} ≤ ε    (5)

for some small ε > 0. Here ‖x‖_{ln} means the l_n-norm of x. Due to the high complexity of linear programming (i.e., O(N^3)), greedy algorithms have received great attention; a greedy algorithm like OMP can reduce the complexity to O(MNK) [13]. Meanwhile, the approximate optimization brings inaccurate recovery, especially for signals mixed with noise. All of these algorithms regard the locations of the sparse coefficients as irrelevant, and selecting optimal atoms from an overcomplete dictionary requires many complicated computations. For most man-made signals, however, the occupied spectrum positions are close together in order to decrease the spectrum occupancy, and most linear modulation schemes satisfy this condition. By making full use of prior knowledge of the spectrum position, signal reconstruction can be made more accurate and less complicated.

III. SIGNAL RECONSTRUCTION BASED ON PARAMETER ESTIMATION

In this section, we present a new signal reconstruction algorithm based on parameter estimation (hereafter referred to as BOPE). The main process of BOPE consists of two steps: first, estimating the carrier frequency and bandwidth of the original modulated signal from the measured signal vector; second, recovering the signal based on the estimated frequency parameters. We also give a brief explanation of our algorithm.

A. Parameter Estimation

Estimating the frequency parameter from the compressed signal y in (4) is a challenging problem. Each row of the measurement matrix is random in order to satisfy the RIP, so the frequency-domain information of the original signal is hidden after the linear measurement. But with prior knowledge of the bandwidth range, we can obtain an approximate spectral envelope from a projection calculation. We design the estimation algorithm of the spectral envelope as follows:

1.
Obtain the measurements y with the M×N measurement matrix Φ = [φ_1, φ_2, …, φ_N], where φ_k denotes the k-th column of Φ.
2.
Select a block of physically contiguous column vectors Φ_k = [φ_k, φ_{k+1}, …, φ_{k+L−1}], where L is the minimum bandwidth of the original signal and k ∈ [1, N − L + 1].
3.
Calculate the projection of the measurements y onto each Φ_k space:

W(Φ_k, y) = R y = (Φ_k^H Φ_k)^{-1} Φ_k^H y    (6)

where W(Φ_k, y) is an L×1 vector and R is the pseudo-inverse of Φ_k.

4.

Calculate the power of each projection:

P_k = Σ_{i=1}^{L} |W_i(Φ_k, y)|^2    (7)

Then the power P_k comprises the spectral envelope.

Proof: We present a brief proof that P_k can be taken as the estimated spectral envelope. Before the proof, we make some relatively intuitive assumptions concerning Φ: (i) the columns of Φ have unit norm and are approximately orthogonal, and (ii) the rows of Φ have equal norm. The first assumption means that each column of Φ is weighted equally and that the noise signal Z has roughly an equal projection on each column of Φ. The two assumptions guarantee the RIP of the matrix Φ [14]. For a given k, we begin the proof by rewriting (4) as

y = ΦX + ΦZ = Φα + Φβ + ΦZ    (8)

Here ΦX = Φα + Φβ, where α denotes the elements of X located from k to k + L − 1 and β denotes the other elements. Thus, from (6) and (8), we have

W(Φ_k, y) = (Φ_k^H Φ_k)^{-1} Φ_k^H (Φα + Φβ + ΦZ) = RΦα + RΦβ + RΦZ    (9)

From assumptions (i) and (ii), we know that the elements in each column of the matrix RΦ have a peak (equal to 1) at the k-th place, while the other elements are zero-mean with low variance. So W(Φ_k, y) approximately equals the vector α when the values of the elements in α are much larger than Z (α is exactly the sparse signal), and it is close to the zero vector when the elements in β are totally zero. The fourth step of our algorithm accumulates the signal energy and enlarges the variation of W(Φ_k, y). A typical spectral envelope comparison is shown in Fig. 1, in which N = 2048, M = 768 and K = 80, and the modulation type is binary phase shift keying (BPSK).

Figure 1. Spectral envelope of a sparse signal on an observation interval (SNR = 10 dB). (a) Spectral envelope by DFT. (b) Spectral envelope recovered using the CS method.

B. Sparse Signal Reconstruction

Obtaining the envelope of the signal spectrum is a proper way to estimate the matching atoms from an overcomplete dictionary. Acquiring an accurate position estimate of a sparse vector is extremely complex for most CS recovery methods, but it is easy to locate the position of the sparse signal by extracting information from the estimated spectral envelope. For most linear modulation schemes, the signal spectrum has highly concentrated energy. With the estimates of the carrier frequency and main-lobe width, the compressed signal can be recovered through a few linear computations. The estimate of the carrier frequency can simply be obtained by searching for the peak of the spectral envelope. Such an estimate will, however, contain some error, and more accurate estimates can be obtained by smoothing the envelope and selecting the middle position among a few large values. As for the bandwidth estimation, or the value of K, we can simply measure the 3 dB bandwidth and match it against the prior bit-rate set. Then the estimation of the original signal can be done through the least-squares method, analogous to (6), followed by an IDFT. That is

x̂ = IFFT( (Θ^H Θ)^{-1} Θ^H y )    (10)

where Θ = [φ_{i1}, φ_{i2}, …, φ_{iK}] is the column set of Φ and [i1, i2, …, iK] denotes the continuous range of estimated locations of the sparse signal spectrum. Compared with the classical OMP algorithm, our algorithm avoids the complicated iteration by locating the sparse signal with a linear operation, and it is easier to implement.
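The envelope-projection and least-squares recovery steps above translate almost directly into code. The following is a minimal NumPy sketch of the BOPE chain under the assumptions made in the proof (unit-norm, near-orthogonal columns of Φ); the function names and the simple peak-centering rule for choosing the support are illustrative choices, not a definitive implementation of the paper's method.

```python
import numpy as np

def spectral_envelope(Phi, y, L):
    """Steps 2-4: power of the least-squares projection of y onto each
    block of L contiguous columns of Phi, cf. eqs. (6)-(7)."""
    M, N = Phi.shape
    P = np.zeros(N - L + 1)
    for k in range(N - L + 1):
        Phi_k = Phi[:, k:k + L]                         # contiguous column block
        W, *_ = np.linalg.lstsq(Phi_k, y, rcond=None)   # least-squares projection W(Phi_k, y)
        P[k] = np.sum(np.abs(W) ** 2)                   # accumulated power P_k
    return P

def bope_recover(Phi, y, L, K):
    """Locate the spectrum from the envelope peak, then recover by
    least squares on the estimated support and an IDFT, cf. eq. (10)."""
    N = Phi.shape[1]
    P = spectral_envelope(Phi, y, L)
    k0 = int(np.argmax(P))                              # peak of the spectral envelope
    start = max(0, min(N - K, k0 + L // 2 - K // 2))    # center a K-wide support on the peak
    support = np.arange(start, start + K)
    Theta = Phi[:, support]                             # Theta: columns of Phi on the support
    X_hat = np.zeros(N, dtype=complex)
    X_hat[support], *_ = np.linalg.lstsq(Theta, y, rcond=None)
    return np.fft.ifft(X_hat) * np.sqrt(N)              # x_hat, using the (1/sqrt(N)) DFT convention
```

The design point is that the iterative atom selection of OMP is replaced by a fixed sweep of small least-squares projections followed by a single solve on the estimated support.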
C. Performance Analysis

Although CS can detect narrow-band signals within a wide band, it suffers from the noise in the whole detection bandwidth. Due to the sub-sampling feature of compressed sampling, the columns of the measurement matrix can no longer be orthogonal, so the noise in the whole detection bandwidth inevitably has some impact on the measurements y. Intuitively, for a fixed value of N, the interference between the columns of Φ becomes larger for smaller M, and so the noise has a larger variance in the recovered signal. We present a detailed analysis of the influence of different M and N in this section. From Section III-B, we know that the estimated spectrum of the sparse signal can be written as

X̂ = (Θ^H Θ)^{-1} Θ^H y    (11)

Considering no estimation deviation of the narrow-band signal location, (11) can be written as

X̂ = (Θ^H Θ)^{-1} Θ^H (ΦX + ΦZ) = RΦX + RΦZ = X + RΦZ    (12)

where R = (Θ^H Θ)^{-1} Θ^H. We assume that the noise Z is zero-mean and spectrally white with covariance matrix σ_n^2 I_N. According to least-squares theory and the RIP, the inner products between the rows of R and the columns of Φ have the following features: (i) the value of each inner product is quite small except at the positions of the recovered signal, where it equals 1, and (ii) each inner product is zero-mean with the same variance. From these two features, it is easy to obtain the covariance of the noise part RΦZ:

Cov(RΦZ) = RΦ Cov(Z) (RΦ)^H = σ_n^2 RΦ Φ^H R^H    (13)

We divide each row of RΦ into two parts: the part located exactly at the sparse signal positions, and the remaining part. For the first part, the value at the corresponding location equals 1 and the noise in this part is preserved completely, so each point of the recovered signal has a noise component with variance σ_n^2/N. For the other part, the remaining noise components (whose variance is ((N−1)/N)σ_n^2) are distributed relatively evenly over the measurement vector y of length M. According to least-squares theory, each point of the projection then has a variance of ((N−1)/(NM))σ_n^2. The variance of the new noise part is therefore ((N−1)/(MN) + 1/N)σ_n^2 at each point of the recovered signal, so the signal-to-noise ratio (SNR) of the recovered signal becomes

SNR_new = SNR_o + 10·log10( (1/K) / ((N−1)/(MN) + 1/N) )    (14)

Here SNR_o denotes the SNR of the signal sampled at or above the Nyquist rate and SNR_new denotes the SNR of the signal after CS recovery.

For N ≫ M ≫ K, 1/N → 0 and (N−1)/N → 1, so (14) can be written simply as

SNR_new = SNR_o + 10·log10(M/K)    (15)

From (15), we know that the performance of CS recovery mainly depends on the ratio M/K. Reconstruction performs well for a high ratio M/K, but this increases the number of measurements needed for recovery. A numerical comparison between our algorithm and the bound in (14) is shown in Section IV.

IV. NUMERICAL EXPERIMENTS

In this section we investigate the practical ability of our algorithm to recover a sparse signal from the corrupted data y = Ax + Az. Because of the locating error of the carrier frequency estimate, we recommend that the length of the recovered signal (namely, the value of K) be slightly wider than the actual useful length of the sparse signal; we therefore use 10% redundancy of the useful sparse signal length here. The simulation parameters are shown in TABLE I.

TABLE I. SIMULATION PARAMETERS
Detection Bandwidth Range: 0~102.4 MHz
Modulation Type: QPSK
Filter: Raised Root Cosine Filter
Symbol Rate: 1.6 Mbps
N: 4096
M: 1400
K: 100
Recovery Redundancy: 10%

In the simulation, we use a frequency-shaping filter to reduce side-lobe effects. The data are sampled at 204.8 MHz before the CS measurements, which is the Nyquist rate, and the baseband signal is modulated onto a fixed carrier frequency. The algorithm performance is evaluated by Monte Carlo simulations. Fig. 2 reports the relation between SNR_o, the SNR bound of the recovered signal in (14), and the SNR of the signal recovered by BOPE and by the classical OMP algorithm, respectively. The results show that our new algorithm BOPE has an advantage of about 5.5 dB at high SNR compared with the classical OMP algorithm and suffers only a tiny performance loss compared with the bound in (14); this loss comes from the redundancy of the recovered signal. It is also shown that our algorithm becomes invalid at very low SNR of the original signal, because the carrier frequency estimate becomes erroneous and the accumulated power in (7) is unable to eliminate the noise effect on the spectral envelope. However, our algorithm can meet the needs of most communication systems. In Fig. 3 we analyze the effect of different decimation rates on the SNR of the recovered signal. Our algorithm provides about 5 dB SNR improvement compared with the classical OMP algorithm when M = N/2, and the SNR gap increases with smaller M. With respect to the theoretical SNR in (14), our algorithm closely approximates the SNR bound, with a relatively small loss (about 1~2 dB) at high decimation rates.
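As a rough numerical orientation, the bound (14) and its simplification (15) can be evaluated directly for the parameters of TABLE I. The short sketch below is for illustration only and is not part of the reported simulation; the set of M values swept is our own choice.

```python
import numpy as np

# TABLE I parameters and a sweep over decimation rates Q = N / M.
N, K = 4096, 100

def snr_gain_db(N, M, K):
    """SNR improvement of CS recovery over the Nyquist-rate input, eq. (14)."""
    noise_per_point = (N - 1) / (M * N) + 1.0 / N
    return 10.0 * np.log10((1.0 / K) / noise_per_point)

for M in (2048, 1400, 1024, 512):
    approx = 10.0 * np.log10(M / K)          # simplified bound, eq. (15)
    print(f"M = {M:4d} (Q = {N / M:.1f}): gain (14) = {snr_gain_db(N, M, K):5.2f} dB, "
          f"gain (15) ~ {approx:5.2f} dB")
```

The gap between (14) and (15) grows as M approaches N, since the N ≫ M assumption behind (15) weakens, while at high decimation rates (small M) the two expressions nearly coincide.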
Figure 2. Relation between the SNR of the whole detection bandwidth and the SNR of the recovered signal.

Figure 3. SNR of the recovered signal versus decimation rate Q (Q = N/M). The detection SNR is fixed at 9 dB. Other parameter settings are shown in TABLE I.

V. CONCLUSION

We present a different way to recover a sparse signal using knowledge of the spectrum location. The algorithm reduces the computational complexity by replacing the complicated iteration with a linear operation, and it makes significant improvements over the classical recovery method. It can also be seen from the simulation results that the algorithm is not perfect compared with the SNR bound. Further research should be conducted to obtain a more accurate estimate of the carrier frequency. It is also meaningful to find ways to obtain a more accurate bandwidth estimate without prior knowledge of the symbol rate.

ACKNOWLEDGMENT

This research was supported by the National Natural Science Foundation of China (No. 61101097).

REFERENCES

[1] C. E. Shannon, "Communication in the presence of noise," Proc. IRE, vol. 37, pp. 10–21, 1949.
[2] H. J. Landau, "Necessary density conditions for sampling and interpolation of certain entire functions," Acta Math., vol. 117, pp. 37–52, Feb. 1967.
[3] E. Candes, "Compressive sampling," in Proc. Int. Congress Math., Madrid, Spain, 2006, vol. 3, pp. 1433–1452.
[4] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[5] E. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[6] F. Bergeaud and S. Mallat, "Matching pursuit of images," in Proc. Int. Conf. Image Process., 1995, p. 53.
[7] S. Chen and D. Donoho, "Basis pursuit," in Proc. 28th Asilomar Conf. Signals, Syst. Comput., 1994, pp. 41–44.
[8] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Sci. Comput., vol. 20, pp. 33–61, 1998.
[9] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[10] D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit (StOMP)," submitted for publication, 2007.
[11] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301–321, Mar. 2009.
[12] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constr. Approx., vol. 28, no. 3, pp. 253–263, 2008.
[13] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[14] J. Treichler, M. Davenport, and R. Baraniuk, "Application of compressive sensing to the design of wideband signal acquisition receivers," in Proc. 6th U.S./Australia Joint Workshop on Defense Applications of Signal Processing (DASP), 2009.