A Reed-Solomon Code Based Measurement Matrix with Small Coherence

M. M. Mohades, A. Mohades, and Aliakbar Tadaion, Senior Member, IEEE

IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 7, JULY 2014

Abstract—In this letter, we construct a class of deterministic measurement matrices which are asymptotically optimal. For this purpose, we first apply the tensor product to the Reed-Solomon (R-S) generator matrix to produce a new one; then, employing this generator matrix, we construct a measurement matrix. If the R-S code is defined over GF(q), then the resulting measurement matrices are of dimensions q² × q³, where q is an arbitrary prime power, and their coherence is 1/q, which is desirable for compressive sampling. We also illustrate the effectiveness of our proposed matrices in compressed sensing with some simulation examples.

Index Terms—Coherence, compressed sensing, measurement matrix, Reed-Solomon code, tensor product.

I. INTRODUCTION

COMPRESSED SENSING (CS) is a data compression and sampling method which has created a fundamental shift in the processing of sparse signals. This technique allows acquiring a sparse signal at sampling rates substantially lower than the Nyquist-Shannon rate. A signal x is said to be s-sparse if it has at most s nonzero entries. CS is concerned with the unique and exact reconstruction of a sparse vector x from the measurement vector y = Φx, which is obtained by projecting x onto an m-dimensional space via a sensing matrix Φ (m ≪ n). Through proper sampling, the information of the original signal should be preserved with a very small number of measurements. Reconstructing the original signal from the measurement vector was investigated by Candes [1] and Donoho [2]. If x is a sparse signal, the reconstruction problem in the general case is as follows:

minimize ‖x‖₀ subject to y = Φx,   (1)

where ‖x‖₀ denotes the number of nonzero elements of the vector x. We must note that this minimization problem is NP-hard [3]; thus some alternative methods have been proposed to find a proper solution of (1). We can replace (1) with the following ℓ₁-minimization problem [4]:

minimize ‖x‖₁ subject to y = Φx,   (2)

which is shown to lead to the exact recovery of the original signal. In order to find a solution of (1), we can also use greedy algorithms such as orthogonal matching pursuit (OMP) or its modifications [5]. Meanwhile, we must note that choosing a proper measurement matrix is a key point in ensuring exact recovery of the original signal, whether via the ℓ₁-minimization problem or via the OMP algorithm; thus we need the measurement matrix to satisfy some conditions. Candes and Tao introduced a criterion named the Restricted Isometry Property (RIP) [4] in order to specify which matrices are proper to be used as the measurement matrix. A matrix Φ satisfies the RIP of order s if there exists a constant δ_s ∈ (0, 1) such that for every s-sparse signal x [4]

(1 − δ_s)‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ_s)‖x‖₂²,   (3)

Manuscript received December 23, 2013; revised March 04, 2014; accepted March 14, 2014. Date of publication March 28, 2014; date of current version April 25, 2014. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Xiao-Ping Zhang. The authors are with the Signal Processing Lab, ECE Department, Yazd University, Yazd, Iran. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/LSP.2014.2314281

where ‖x‖₂ denotes the ℓ₂-norm of the vector x. If the measurement matrix satisfies the RIP with a proper constant δ_s, we can guarantee the exact reconstruction of the original signal from the measurement vector through ℓ₁-minimization [6] or by employing algorithms such as OMP [5]. It is shown that some random matrices satisfy the RIP of order s = O(m / log(n/m)) with high probability [7], which is an upper bound on the achievable order among all sensing matrices of size m × n. Therefore probabilistic constructions can produce RIP-fulfilling matrices which are the best possible ones. However, since the sampling procedure is performed deterministically, we prefer to employ deterministic measurement matrices; furthermore, deterministic matrices need less memory than random matrices. Verifying whether a sampling matrix satisfies the RIP is not straightforward, as (3) would have to be evaluated for all s-sparse signals. Instead, we refer to the coherence of a matrix, which fortunately is related to the RIP. Consider a deterministic matrix Φ with columns φ₁, …, φ_n; the coherence of this matrix is defined as

μ = max_{i ≠ j} |⟨φ_i, φ_j⟩| / (‖φ_i‖₂ ‖φ_j‖₂).   (4)

It is shown that a matrix with coherence coefficient μ satisfies the RIP of order s with δ_s = (s − 1)μ whenever s < 1/μ + 1 [8]. The latter inequality states that a small μ permits a large value of s; hence, if the coherence coefficient is reduced, we can reconstruct a sparse signal of larger sparsity order s for a fixed constant δ_s, which is desirable. Therefore we should look for matrices with low coherence. The minimum coherence of an arbitrary m × n matrix is stated by the Welch bound, which can be expressed as follows [9]:

μ ≥ √((n − m) / (m(n − 1))).   (5)

We observe that this bound asymptotically tends to 1/√m whenever n ≫ m. If we take the coherence condition as the criterion to design a deterministic measurement matrix, matrices with RIP of order O(√m) can be generated.
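The two quantities above are straightforward to evaluate numerically. The following sketch (helper names `coherence` and `welch_bound` are ours, not from this letter) computes the coherence of (4) and the Welch bound of (5):

```python
import numpy as np

def coherence(Phi):
    """Maximum absolute inner product between distinct normalized columns, Eq. (4)."""
    A = Phi / np.linalg.norm(Phi, axis=0)   # unit-norm columns
    G = np.abs(A.conj().T @ A)              # Gram-matrix magnitudes
    np.fill_diagonal(G, 0.0)                # ignore self inner products
    return float(G.max())

def welch_bound(m, n):
    """Welch lower bound on the coherence of any m x n matrix, Eq. (5)."""
    return float(np.sqrt((n - m) / (m * (n - 1))))
```

For any matrix the computed coherence stays at or above the Welch bound; for a random Gaussian matrix it is typically far above it, which is one motivation for the deterministic designs discussed next.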

1070-9908 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.



In recent years, several deterministic measurement matrices have been presented, most of them based on finite fields. Employing polynomials over finite fields, DeVore [10] generated binary matrices of dimension p² × p^(r+1) with coherence r/p, which satisfy the RIP of order s < p/r + 1, where r is the degree of the polynomials with coefficients in GF(p). In [11], inspired by DeVore's method and employing Algebraic Geometry (AG) codes, the authors proposed some binary matrices; if the underlying algebraic curve is chosen properly, these matrices perform better than DeVore's. In [12], binary, bipolar and ternary sampling matrices are constructed from BCH codes; this construction is generalized to p-ary BCH codes in [13]. The dimensions of these measurement matrices are increased by applying the tensor product to existing low-coherence deterministic measurement matrices. Employing optical orthogonal codes (OOC), the authors in [14] proposed another set of low-coherence ternary deterministic matrices. In [15], deterministic measurement matrices are constructed via additive character sequences, and the Weil bound is used to show that these matrices have asymptotically optimal coherence. Some non-RIP deterministic measurement matrices have also been proposed, such as those based on extractor graphs [16], chirp sequences [17], and adjacency matrices of bipartite graphs with large girth [18]. We must note that in coherence-based RIP measurement matrices, the number of measurements scales quadratically with the sparsity order.
In this letter, applying the tensor product to the Reed-Solomon generator matrix, we produce a new generator matrix, and employing this matrix we introduce a new group of q² × q³ deterministic measurement matrices with coherence 1/q, where q is a prime power, i.e., q = p^m with p prime. As q increases, the coherence of the proposed matrix approaches the Welch bound, so the construction is asymptotically optimal. The rest of this letter is organized as follows.
In Section II, we discuss how to construct the measurement matrix and how to calculate its coherence. Simulation results are given in Section III and Section IV concludes the letter.
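Since OMP is the recovery algorithm used in the simulations of Section III, a minimal NumPy sketch may help fix ideas. This is an illustrative textbook version under our own naming, not the implementation of [5]:

```python
import numpy as np

def omp(Phi, y, s, tol=1e-10):
    """Greedy recovery of an s-sparse x from y = Phi @ x (orthogonal matching pursuit).

    At each step, pick the column most correlated with the residual,
    then re-fit y on all selected columns by least squares.
    """
    m, n = Phi.shape
    support = []
    coef = np.zeros(0)
    residual = y.copy()
    for _ in range(s):
        if np.linalg.norm(residual) < tol:
            break
        j = int(np.argmax(np.abs(Phi.conj().T @ residual)))  # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)  # LS re-fit
        residual = y - Phi[:, support] @ coef
    x = np.zeros(n, dtype=complex)
    x[support] = coef
    return x
```

With a well-conditioned sensing matrix and small enough s, this loop returns the sparse vector exactly in s iterations.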

II. MAIN RESULT

In this section we propose a low-coherence measurement matrix which can be easily implemented. We construct this measurement matrix by applying a tensor product to the generator matrix of an R-S code.¹ We will show that the result is the generator matrix of a new code; our proposed measurement matrix is then designed using the matrix composed of the codewords of this new code.

Consider the generator matrix of an R-S code over GF(q), built on the two bases 1 and α, where α is a primitive element:

G = [ 1  1  1  ⋯  1
      0  1  α  ⋯  α^(q−2) ],   (6)

whose second row, which we denote by (β_1, β_2, …, β_q), enumerates all the elements of GF(q). We use the generator matrix G in order to construct G_2 as follows:

G_2 = G ⊗ G,   (7)

where ⊗ denotes the tensor product of the two matrices, which is defined as follows.

Definition 1: If A = [a_ij] is an m × n matrix and B is a u × v matrix, then the tensor product of A and B is the mu × nv matrix

A ⊗ B = [ a_11 B  ⋯  a_1n B
             ⋮            ⋮
          a_m1 B  ⋯  a_mn B ].

We have the following lemma for the matrix G_2.

Lemma 1: The rows of the matrix G_2 = G ⊗ G are linearly independent, and thus G_2 is the generator matrix of a code.

Proof 1: We must note that the produced matrix G_2 could be written as

G_2 = [   G      G    ⋯    G
        β_1 G  β_2 G  ⋯  β_q G ].   (8)

In order to show that the rows of G_2 are linearly independent, we should prove that u^T G_2 = 0 results in u = (u_1, u_2, u_3, u_4)^T = 0. Restricting u^T G_2 = 0 to the jth block of q columns gives (u_1 + u_3 β_j) g_1 + (u_2 + u_4 β_j) g_2 = 0, where g_1 and g_2 denote the rows of G. Since g_1 and g_2 are linearly independent, for every j we obtain

u_1 + u_3 β_j = 0,   (9)
u_2 + u_4 β_j = 0.   (10)

As β_j takes more than one value over GF(q), (9) and (10) force u_1 = u_2 = u_3 = u_4 = 0.

Now we describe how to construct our proposed measurement matrix. We construct the matrix C of all codewords,

C = { u^T G_2 : u ∈ GF(q)^4 },   (11)

and apply the trace function to its entries, where the trace function is defined as follows.

Definition 2: For x ∈ GF(p^m), the trace function of x over GF(p) is defined by

Tr(x) = x + x^p + x^(p²) + ⋯ + x^(p^(m−1)),   (12)

where p is a prime number. We define the trace of a vector componentwise,

Tr((x_1, …, x_N)) = (Tr(x_1), …, Tr(x_N)).   (13)

Then we make the matrix Ψ, whose (k, w)th element would be

ψ_(k,w) = e^((2πi/p) Tr(c_w(k))),   (14)

where c_w(k) denotes the kth element of the wth codeword of C. We must note that the coherence of the matrix Ψ would be 1, which is not desirable: the codeword set contains pairs whose difference is a nonzero multiple of the all-ones codeword generated by the first row of G_2, and the corresponding columns of Ψ differ only by a global phase. To remedy the situation, we take away the codewords generated by the first row of G_2, keeping only the q³ codewords generated by its three last rows, and construct Φ employing this reduced codeword set, i.e.,

Φ = (1/q) [ e^((2πi/p) Tr(c_w(k))) ],   k = 1, …, q²,  w = 1, …, q³,   (15)

where the factor 1/q makes the columns of Φ unit-norm.

¹Traditionally, this name is reserved for codes of length q − 1 only, and others are called extended or shortened Reed-Solomon codes; however, for us it is natural to use the same name for all of them [19].
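To make the construction concrete, the following sketch (our own code, not the authors') instantiates it for a prime q, where GF(q) = Z_q and the trace in Definition 2 reduces to the identity, so the matrix entries are simply e^(2πic/q):

```python
import numpy as np

def rs_measurement_matrix(q):
    """Build the proposed q^2 x q^3 measurement matrix for a prime q.

    G has rows (1,...,1) and (0,1,...,q-1); G2 = G (x) G; the columns are
    the exponentials of the q^3 codewords spanned by the last three rows
    of G2 (the all-ones first row is discarded).
    """
    beta = np.arange(q)                         # elements of GF(q), q prime
    G = np.vstack([np.ones(q, dtype=int), beta])
    G2 = np.kron(G, G)                          # 4 x q^2 tensor (Kronecker) product
    R = G2[1:, :]                               # three last rows of G2
    # all GF(q)-linear combinations of the three rows -> q^3 codewords
    coeffs = np.array(np.meshgrid(beta, beta, beta, indexing="ij")).reshape(3, -1)
    C = (coeffs.T @ R) % q                      # q^3 x q^2, codewords as rows
    # entrywise additive character; 1/q makes the columns unit-norm
    return np.exp(2j * np.pi * C.T / q) / q
```

For q = 3 this yields a 9 × 27 matrix with unit-norm columns whose off-diagonal Gram entries are either 0 or exactly 1/3, so its coherence matches the value 1/q claimed in Lemma 2.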


Therefore, we can write the columns of Φ as φ_w, whose elements are as follows:

φ_w(k) = (1/q) e^((2πi/p) Tr(c_w(k))),   k = 1, …, q²,   (16)

where the c_w denote the codewords which are obtained by linear combinations of the three last rows of G_2. We have the following lemma on the coherence of this new matrix.

Lemma 2: The sensing matrix Φ designed in the above is of dimensions q² × q³, and its coherence is 1/q.

Proof 2: Since the columns of Φ are unit-norm, the coherence of the matrix is

μ = max_(w ≠ w') |⟨φ_w, φ_(w')⟩|.   (17)

For two distinct columns we have

⟨φ_w, φ_(w')⟩ = (1/q²) Σ_k χ(c_w(k) − c_(w')(k)) = (1/q²) Σ_k χ(c(k)),   (18)

where c = c_w − c_(w') is a nonzero element of the set of codewords generated by the three last rows of G_2. In simplifying the above equation, we used the fact that this set of codewords is a group under addition, and also the following statement [20]:

χ(x) χ(y)* = χ(x − y).   (19)

Here χ denotes the canonical additive character of GF(q), defined in the following.

Definition 3 (Ch. 5, [20]): The canonical additive character of GF(q), q = p^m, is defined as

χ(x) = e^((2πi/p) Tr(x)),   (24)

and we have the following statement [20]:

Σ_(x ∈ GF(q)) χ(ax) = q if a = 0, and 0 otherwise.   (25)

It must be noted that if we consider the matrix Ψ in (14), the codeword set would contain difference vectors all of whose elements equal a nonzero constant λ; in this case χ(c(k)) = χ(λ) for every k, and (18) shows that the corresponding coherence is 1. Indexing the q² coordinates of a codeword c in our reduced set by pairs (β_i, β_j) of elements of GF(q), we can write

c(β_i, β_j) = a β_j + b β_i + d β_i β_j,   (a, b, d) ∈ GF(q)³,   (20)

and hence

Σ_k χ(c(k)) = Σ_(β_i) χ(b β_i) Σ_(β_j) χ((a + d β_i) β_j).   (21)

If d = 0 and (a, b) ≠ (0, 0), then (25) makes the sum in (21) vanish, so the corresponding columns are orthogonal; this does not degrade the measurement matrix. If d ≠ 0, then there exists a unique β_0 = −a/d such that a + d β_0 = 0, while a + d β_i is nonzero for every other β_i ∈ GF(q); therefore, by (25), the inner sum in (21) vanishes for every β_i except β_0, and we have

Σ_k χ(c(k)) = q χ(b β_0),   (26)

whose magnitude is q, where we have used Equation (25) to simplify the above. So we obtain that the coherence is

μ = q / q² = 1/q.   (27)

We thus proposed a matrix which is asymptotically optimal: by (5), the Welch bound of a q² × q³ matrix tends to 1/q when q tends to infinity, so the coherence of the proposed matrix meets the Welch bound asymptotically.
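The asymptotic-optimality claim can also be checked numerically. Assuming the dimensions q² × q³ and coherence 1/q stated for the construction, the ratio of the coherence to the Welch bound works out to √(1 + 1/q + 1/q²), which decreases to 1 as q grows; the sketch below (our own helper `welch`) evaluates it:

```python
import numpy as np

def welch(m, n):
    """Welch lower bound on the coherence of an m x n matrix, Eq. (5)."""
    return np.sqrt((n - m) / (m * (n - 1)))

# ratio of the proposed coherence (1/q) to the Welch bound for q^2 x q^3 matrices
ratios = {q: (1.0 / q) / welch(q**2, q**3) for q in (5, 11, 101, 1009)}
for q, r in ratios.items():
    print(q, round(r, 6))
```

The printed ratios stay above 1 (the Welch bound is a true lower bound) and shrink toward 1, illustrating asymptotic optimality.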



Fig. 1. The recovery percentage versus sparsity order, where the compressed samples are noiseless; the Gaussian matrix and our proposed matrix have the same dimensions, and the BCH matrix of [13] has comparable dimensions.

Fig. 2. The reconstruction SNR versus sparsity order for noisy compressed samples with an input SNR of 15 dB; matrix dimensions are as in Fig. 1.

We must also note that the minimum distance d_min of the proposed code, i.e., the minimum number of nonzero elements over all nonzero codewords, is large. We calculate the minimum distance for the code generated by the three last rows of G_2. Writing a nonzero codeword as c(β_i, β_j) = a β_j + b β_i + d β_i β_j, for each β_i with a + d β_i ≠ 0 there is exactly one β_j at which c vanishes, and the single β_i with a + d β_i = 0 (possible only when d ≠ 0) contributes at most q further zeros; moreover, the matrix of codewords is symmetric in the two coordinates, as in [12]. It is easy to verify that each nonzero codeword therefore has at most 2q − 1 zeros, so d_min = (q − 1)², which is large enough. Therefore, we expect the proposed measurement matrix to have appropriate properties [22].
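For a small prime q, the minimum-distance claim can be verified by brute force: enumerate the q³ codewords spanned by the three last rows of G₂ and count zeros. This is our own sketch for the prime-field case (GF(q) = Z_q, so no extension-field arithmetic is needed):

```python
import numpy as np

q = 3
beta = np.arange(q)
G = np.vstack([np.ones(q, dtype=int), beta])
R = np.kron(G, G)[1:, :]                         # three last rows of G2
coeffs = np.array(np.meshgrid(beta, beta, beta, indexing="ij")).reshape(3, -1)
C = (coeffs.T @ R) % q                           # all q^3 codewords as rows
weights = (C != 0).sum(axis=1)                   # Hamming weights
d_min = int(weights[1:].min())                   # skip the all-zero codeword
print(d_min)                                     # → 4 = (q - 1)^2 for q = 3
```

The maximum number of zeros, 2q − 1, is attained by the codeword with a = b = 0 and d ≠ 0, whose entries d·β_i·β_j vanish exactly when β_i = 0 or β_j = 0.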

III. SIMULATION RESULTS

In this section, we evaluate the performance of our proposed measurement matrix by comparing it with a complex-valued Gaussian sampling matrix and the deterministic BCH matrix proposed in [13]. We construct our deterministic matrix over GF(q); its size is q² × q³ with coherence 1/q, so it satisfies the RIP of order s < q + 1. In addition, we consider a Gaussian matrix with i.i.d. entries of the same dimensions and a BCH matrix of [13] with comparable dimensions and coherence.

In the first simulation, we examine the percentage of perfect recovery for different sparsity orders. We establish the illustrated percentage curves from the results for 5000 different random complex-valued input signals (for each s), where the positions of the nonzero elements change uniformly at random for each order of sparsity. For the reconstruction of the s-sparse input signals from the compressed measurements, we use orthogonal matching pursuit. Fig. 1 shows that both deterministic matrices perform substantially better than the Gaussian matrix. Furthermore, the perfect-reconstruction percentage of our proposed matrix is higher than that of the BCH matrix, which is due to its lower coherence.

In order to study the effect of noise on our results, we illustrate the SNRs for the reconstruction of s-sparse input signals, where s varies from 1 to 30. In this case, we consider compressed samples corrupted by AWGN with an SNR of 15 dB. We apply the OMP algorithm to reconstruct the original signal, and 5000 independent runs are averaged to obtain smooth curves. Simulation results in Fig. 2 show that for all values of the sparsity order, both deterministic matrices achieve a higher SNR than the Gaussian matrix, which motivates the use of deterministic matrices. In addition, for low sparsity orders, our proposed matrix surpasses the BCH matrix.

Fig. 3. The reconstruction SNR of a 25-sparse signal recovered from its noisy compressed samples for various input SNRs; matrix dimensions are as in Fig. 1.

Finally, we fix the order of sparsity at 25 and examine the output signal-to-noise ratio (SNR) when different noise levels are added to the measurements. As in the previous cases, we use OMP to reconstruct the original signal, and the results are obtained by averaging over 5000 runs. Fig. 3 shows that, for all curves, increasing the input SNR improves the reconstruction SNR of the original signal. Furthermore, these curves confirm that the deterministic matrices are better than the Gaussian matrix and that our proposed matrix performs slightly better than the BCH deterministic matrix.

IV. CONCLUSION

In this letter, we proposed a new deterministic matrix for compressive sensing which is asymptotically optimal in coherence. We combined a Reed-Solomon generator matrix with itself by the tensor product and employed the resulting generator matrix to construct a complex measurement matrix. We proved that the coherence coefficient of this matrix is 1/q, which according to the Welch bound is asymptotically optimal. We observed that our proposed matrix performs slightly better than the BCH matrix proposed in [13]. As future work, it would be useful to apply the proposed method to different kinds of generator matrices to obtain a sensing matrix with more columns.


REFERENCES

[1] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[2] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
[3] D. Ge, X. Jiang, and Y. Ye, "A note on the complexity of ℓp minimization," Math. Program., vol. 129, no. 2, pp. 285–299, 2011.
[4] E. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, Dec. 2005.
[5] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[6] E. Candès, "The restricted isometry property and its implications for compressed sensing," C. R. Math. Acad. Sci. Paris, vol. 346, no. 9–10, pp. 589–592, 2008.
[7] E. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies," IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, Dec. 2006.
[8] J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, and D. Kutzarova, "Explicit constructions of RIP matrices and related problems," Duke Math. J., vol. 159, no. 1, pp. 145–185, 2011.
[9] L. Welch, "Lower bounds on the maximum cross correlation of signals," IEEE Trans. Inf. Theory, vol. 20, no. 3, pp. 397–399, May 1974.
[10] R. DeVore, "Deterministic constructions of compressed sensing matrices," J. Complexity, vol. 23, no. 4–6, pp. 918–925, 2007.
[11] S. Li, F. Gao, G. Ge, and S. Zhang, "Deterministic construction of compressed sensing matrices via algebraic curves," IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5035–5041, 2012.
[12] A. Amini and F. Marvasti, "Deterministic construction of binary, bipolar and ternary compressed sensing matrices," IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2360–2370, Apr. 2011.
[13] A. Amini, V. Montazerhodjat, and F. Marvasti, "Matrices with small coherence using p-ary block codes," IEEE Trans. Signal Process., vol. 60, no. 1, pp. 172–181, Jan. 2012.
[14] N. Y. Yu and N. Zhao, "Deterministic construction of real-valued ternary sensing matrices using optical orthogonal codes," IEEE Signal Process. Lett., vol. 20, no. 11, pp. 1106–1109, Nov. 2013.
[15] N. Y. Yu, "Additive character sequences with small alphabets for compressed sensing matrices," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2011.
[16] P. Indyk, "Explicit constructions for compressed sensing of sparse signals," in Proc. ACM-SIAM Symp. Discrete Algorithms, 2008, pp. 30–33.
[17] L. Applebaum, S. Howard, S. Searle, and R. Calderbank, "Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery," Appl. Comput. Harmon. Anal., vol. 26, no. 2, pp. 283–290, 2009.
[18] A. S. Tehrani, A. G. Dimakis, and G. Caire, "Optimal deterministic compressed sensing matrices," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2013.
[19] M. Tsfasman, S. Vladut, and D. Nogin, Algebraic Geometry Codes: Basic Notions. Providence, RI, USA: AMS, 2007.
[20] R. Lidl and H. Niederreiter, Finite Fields. Cambridge, U.K.: Cambridge Univ. Press, 1996.
[21] G. L. Mullen and C. Mummert, Finite Fields and Applications. Providence, RI, USA: AMS, 2007.
[22] M. Cheraghchi, "Coding-theoretic methods for sparse recovery," in Proc. Allerton Conf. Communication, Control, and Computing, 2011.
