SPARK-LEVEL SPARSITY AND THE ℓ1 TAIL MINIMIZATION
CHUN-KIT LAI, SHIDONG LI, AND DANIEL MONDO

Abstract. Solving compressed sensing problems relies on the properties of sparse signals. It is commonly assumed that the sparsity s needs to be less than one half of the spark of the sensing matrix A; then the unique sparsest solution exists and is recoverable by ℓ1-minimization or related procedures. We discover, however, that a measure-theoretic uniqueness holds for nearly spark-level sparsity from compressed measurements Ax = b. Specifically, suppose A has full spark with m rows, and suppose m/2 < s < m. Then the solution to Ax = b is unique for x with ∥x∥₀ ≤ s up to a set of measure 0 in every s-sparse plane. This phenomenon is observed and confirmed by an ℓ1-tail minimization procedure, which recovers sparse signals uniquely with s > m/2 in thousands of random tests. We further show that mere ℓ1-minimization actually fails if s > m/2, even from the same measure-theoretic point of view.
1. Introduction

Compressed sensing is a problem that arises very naturally in signal processing applications. A sparse signal x ∈ C^N (a vector consisting of only s m
We first claim that X₁ = ∅. Indeed, if x ∈ H_{T_0} and x = v + z with v ∈ H_T and z ∈ ker A \ {0}, then A(x − v) = A_{T_0∪T}(x − v)_{T_0∪T} = 0, where x − v is supported on T_0 ∪ T and #(T_0 ∪ T) ≤ m. But A has full spark, so the columns of A indexed by T_0 ∪ T are linearly independent, forcing x = v and hence z = 0, a contradiction. This forces X₁ to be empty. Hence, the union in (2.3) is just X₂. Suppose that this union has positive Lebesgue measure in H_{T_0}. We enlarge the set by including the zero vector:

H_{T_0} ∩ (H_T + ker A \ {0}) ⊆ H_{T_0} ∩ (H_T + ker A).

Consider the set

F = ⋃_{|T| ≤ s, #(T∪T_0) > m} H_{T_0} ∩ (H_T + ker A).    (2.5)
Then F has positive measure in H_{T_0}. There exists some T with |T| ≤ s and #(T ∪ T_0) > m such that |H_{T_0} ∩ (H_T + ker A)| > 0 in H_{T_0}, where |·| denotes the Lebesgue measure. Since H_{T_0} ∩ (H_T + ker A) is a subspace of H_{T_0}, positive measure implies that H_{T_0} ∩ (H_T + ker A) = H_{T_0}. Thus H_{T_0} ⊆ H_T + ker A. By Lemma 2.4, P : H_{T_0\T} → ker A ∩ H_{T_0∪T} is one-to-one. We now compute the dimensions of the following subspaces. Since H_{T_0\T} is a hyperplane, we have

dim(P(H_{T_0\T})) = dim(H_{T_0\T}) = #(T_0 \ T).    (2.6)
Note that ker A ∩ H_{T_0∪T} = {x : A_{T_0∪T} x_{T_0∪T} = 0}. Because A has full spark, we have

dim(ker A ∩ H_{T_0∪T}) = #(T_0 ∪ T) − m.    (2.7)
Since P(H_{T_0\T}) ⊆ ker A ∩ H_{T_0∪T}, we have dim(P(H_{T_0\T})) ≤ dim(ker A ∩ H_{T_0∪T}). Then

#(T_0 \ T) ≤ #(T_0 ∪ T) − m.

Noting that #(T_0 \ T) = #(T_0 ∪ T) − #T and #T ≤ s, we obtain

#(T_0 ∪ T) − s ≤ #(T_0 ∪ T) − #T = #(T_0 \ T) ≤ #(T_0 ∪ T) − m.

This forces s ≥ m, contradicting our assumption on s. Hence we must have |F| = 0, completing the proof. □
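To make the dimension count concrete, here is a small worked instance with illustrative numbers (chosen by us, not taken from the paper): let m = 4, s = 3, and suppose #T_0 = #T = 3 with #(T_0 ∪ T) = 5 > m. The chain above would then demand

#(T_0 ∪ T) − s ≤ #(T_0 \ T) ≤ #(T_0 ∪ T) − m,  i.e.,  5 − 3 = 2 ≤ #(T_0 \ T) ≤ 5 − 4 = 1,

which is impossible; the two bounds are compatible only when s ≥ m, which is exactly the contradiction exploited in the proof.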
Indeed, we can replace m by spark(A) − 1 and obtain the same conclusion as in Theorem 2.1.

Theorem 2.5. Let A ∈ C^{m×N} and let s be such that

(spark(A) − 1)/2 < s < spark(A) − 1.
Let H_{T_0} be any hyperplane in H_s. Then for almost every x ∈ H_{T_0} (with respect to the s-dimensional Lebesgue measure on H_{T_0}), x is the unique solution of (ℓ0).

Proof. Follow the same proof as in Theorem 2.1 until (2.7), and note that rank(A_{T_0∪T}) ≥ spark(A) − 1. Otherwise, if rank(A_{T_0∪T}) + 1 < spark(A), then A_{T_0∪T} would contain rank(A_{T_0∪T}) + 1 linearly independent columns, contradicting the definition of rank. Hence,

dim(ker A ∩ H_{T_0∪T}) = #(T_0 ∪ T) − rank(A_{T_0∪T}) ≤ #(T_0 ∪ T) − (spark(A) − 1).

Continuing the same proof, we finally arrive at s ≥ spark(A) − 1, contradicting our initial assumption. □

Remark. As indicated in the introduction, a unique solution is possible beyond the traditional regime of compressed sensing. For a random choice of an s-sparse x, the measure-theoretic conclusion shows that the probability of choosing an x that is a non-unique solution of (ℓ0) is 0. Therefore, recovery of x in the regime m/2 < s < m is possible. Compared with basis pursuit, which fails well before s = m/2, the ℓ1-tail minimization approach is not only capable of recovering vectors of near spark-level sparsity, but also of recovering signals in the s > m/2 regime, as the sketch below illustrates.
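The ℓ1-tail minimization procedure itself is not specified in this section. For readers who wish to reproduce the random tests, the following Python sketch implements one natural reading of it, in which the ℓ1 norm is minimized only over the complement of a detected support T that is re-estimated each round; the function name tail_l1, the number of rounds, and the support-update rule are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def tail_l1(A, b, s, iters=10):
    """Sketch of an l1-tail minimization loop (assumed variant)."""
    m, N = A.shape
    T = np.array([], dtype=int)            # detected support; empty at start
    x = np.zeros(N)
    for _ in range(iters):
        w = np.ones(N)
        w[T] = 0.0                         # no l1 penalty on the detected support
        # LP in variables (x, t): minimize sum_i w_i * t_i
        # subject to A x = b and -t_i <= x_i <= t_i (so t_i >= |x_i|).
        c = np.concatenate([np.zeros(N), w])
        A_eq = np.hstack([A, np.zeros((m, N))])
        A_ub = np.vstack([np.hstack([np.eye(N), -np.eye(N)]),
                          np.hstack([-np.eye(N), -np.eye(N)])])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=b,
                      bounds=[(None, None)] * N + [(0, None)] * N)
        x = res.x[:N]
        T = np.argsort(-np.abs(x))[:s]     # re-estimate support: s largest entries
    return x
```

In this reading, each round solves a weighted basis pursuit as a linear program; once T locks onto the true support, the true x is feasible with tail objective 0, consistent with the measure-theoretic uniqueness established above.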
3. Failure of Basis Pursuit

In this section, we provide a justification that the basis pursuit problem (1.2) has no unique solution on a significantly large set when m/2 < s < m and A is a sensing matrix with real-valued entries. We first recall that the null space property is a necessary and sufficient condition for recovery of signals via basis pursuit.

Definition 3.1. A matrix A ∈ C^{m×N} satisfies the null space property (NSP) with respect to T if ∥v_T∥₁ < ∥v_{T^c}∥₁ for every v ∈ ker A \ {0}.

Theorem 3.2. Every s-sparse vector is the unique solution to (1.2) if and only if A has the NSP for all T with |T| ≤ s.

When the NSP of order s holds, every s-sparse vector is also the unique solution to (ℓ0):

min_x ∥x∥₀ subject to b = Ax.

Indeed, if z is a solution to (ℓ0) and x is the solution to (ℓ1), then ∥z∥₀ ≤ ∥x∥₀ ≤ s. Since every s-sparse vector is the unique solution to (ℓ1), we get x = z.
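Since the NSP quantifies over all of ker A \ {0}, it cannot be verified by sampling, but it can be refuted: exhibiting one null-space vector v with ∥v_T∥₁ ≥ ∥v_{T^c}∥₁ certifies failure for that T. A minimal falsification sketch (the function name and trial count are our choices):

```python
import numpy as np
from scipy.linalg import null_space

def nsp_violated(A, T, trials=10000, seed=0):
    """Search for v in ker(A) with ||v_T||_1 >= ||v_{T^c}||_1.

    Returning True certifies that the NSP fails for T; returning
    False is inconclusive, since random sampling cannot prove the NSP.
    """
    rng = np.random.default_rng(seed)
    V = null_space(A)                            # orthonormal basis of ker(A)
    mask = np.zeros(A.shape[1], dtype=bool)
    mask[list(T)] = True
    for _ in range(trials):
        v = V @ rng.standard_normal(V.shape[1])  # random null-space vector
        if np.abs(v[mask]).sum() >= np.abs(v[~mask]).sum():
            return True
    return False
```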
Proposition 3.3. If m/2 < s < m, then there exists some T with |T| ≤ s such that the NSP fails.
Proof. If the NSP holds for all T with |T| ≤ s, then every s-sparse vector supported on T is the unique solution of (ℓ1), and hence of (ℓ0). This would require m ≥ 2s, a contradiction. □

Proposition 3.4 ([25, Theorem 4.30]). Given a matrix A ∈ R^{m×N}, a vector x ∈ R^N with support S is the unique minimizer of (ℓ1) if and only if

| ∑_{j∈S} sgn(x_j) v_j | < ∥v_{S^c}∥₁,  for all v ∈ ker(A) \ {0},

where

sgn(x) = 1 if x > 0,  −1 if x < 0,  0 if x = 0.
It turns out that when m/2 < s < m, we can find some T_0 with |T_0| ≤ s such that the NSP with respect to T_0 fails.

Theorem 3.5. Let A ∈ R^{m×N}. If m/2 < s < m, then there is a set of infinite measure on H_{T_0} on which (ℓ1) cannot succeed at recovery.

Proof. Since m/2 < s < m, by Proposition 3.3 the NSP fails for some T_0. Hence there exists some v ∈ ker A \ {0} such that ∥v_{T_0}∥₁ ≥ ∥v_{T_0^c}∥₁. By Proposition 3.4, x ∈ H_{T_0} is recovered as the unique solution of (ℓ1) if and only if

| ∑_{j∈T_0} sgn(x_j) v_j | < ∥v_{T_0^c}∥₁.    (3.1)
Now, ∥v_{T_0}∥₁ ≥ ∥v_{T_0^c}∥₁ for some v ∈ ker A \ {0}. If we consider x ∈ H_{T_0} such that sgn(x_j)v_j = |v_j| for every j ∈ T_0, then the left-hand side of (3.1) becomes ∑_{j∈T_0} |v_j| ≥ ∥v_{T_0^c}∥₁. This shows that all such x are not recovered by (ℓ1). Let (ε_j)_{j∈T_0} be the signs such that ε_j v_j = |v_j|. Then the failure set of (ℓ1) contains {x ∈ H_{T_0} : sgn(x_j) = ε_j for all j ∈ T_0}. This set contains at least a quadrant of H_{T_0}, which has infinite measure. Thus the statement holds. □
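The failure predicted by Theorem 3.5 is easy to observe numerically. The sketch below (the sizes m = 20, N = 60, s = 15 are our illustrative choices) draws a Gaussian A, which has full spark with probability 1, a random s-sparse x with m/2 < s < m, and solves basis pursuit as a linear program; the recovery error is typically far from zero.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, N, s = 20, 60, 15                     # m/2 < s < m
A = rng.standard_normal((m, N))          # full spark with probability 1
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)      # random s-sparse signal
b = A @ x

# Basis pursuit as an LP in (x, t): minimize sum(t) s.t. Ax = b, -t <= x <= t.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([A, np.zeros((m, N))])
A_ub = np.vstack([np.hstack([np.eye(N), -np.eye(N)]),
                  np.hstack([-np.eye(N), -np.eye(N)])])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_bp = res.x[:N]
print("recovery error:", np.linalg.norm(x_bp - x))   # typically far from 0
```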
Remark. The necessity part of Proposition 3.4 does not hold over the complex field [25, Remark 4.29]. It is not known at present whether Theorem 3.5 holds for C^N and complex matrices A. Nonetheless, the theorem shows that in the real case, (ℓ0) and (ℓ1) are not equivalent on a significant set of vectors when s > m/2. We conclude the article by stating again that ℓ1-tail minimization can still recover sparse signals for m/2 < s < m.

References

[1] A. Aldroubi, X. Chen, and A. Powell. Stability and robustness of ℓq minimization using null space property. Proceedings of SampTA 2011, 2011.
[2] B. Alexeev, J. Cahill, and D. Mixon. Full spark frames. J. Fourier Anal. Appl., 18:1167–1194, 2012.
[3] R. G. Baraniuk. Compressive sensing. IEEE Signal Processing Magazine, 24(4), 2007.
[4] T. Blumensath and M. E. Davies. Iterative thresholding for sparse approximations. Journal of Fourier Analysis and Applications, 14(5-6):629–654, 2008.
[5] A. M. Bruckstein, D. L. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51(1):34–81, 2009.
[6] E. J. Candès and M. B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21–30, 2008.
[7] E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall. Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis, 31(1):59–73, 2011.
[8] E. J. Candès et al. Compressive sampling. In Proceedings of the International Congress of Mathematicians, volume 3, pages 1433–1452, Madrid, Spain, 2006.
[9] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
[10] E. J. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406–5425, 2006.
[11] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001.
[12] X. Chen, H. Wang, and R. Wang. A null space analysis of the ℓ1-synthesis method in dictionary-based compressed sensing. Applied and Computational Harmonic Analysis, 37(3):492–515, 2014.
[13] O. Christensen. An Introduction to Frames and Riesz Bases. Springer Science & Business Media, 2013.
[14] W. Dai and O. Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Transactions on Information Theory, 55(5):2230–2249, 2009.
[15] M. A. Davenport, D. Needell, and M. B. Wakin. Signal space CoSaMP for sparse recovery with redundant dictionaries. IEEE Transactions on Information Theory, 59(10):6820–6829, 2013.
[16] D. L. Donoho. High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Discrete & Computational Geometry, 35(4):617–652, 2006.
[17] D. L. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proceedings of the National Academy of Sciences, 100(5):2197–2202, 2003.
[18] D. L. Donoho, M. Elad, and V. N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory, 52(1):6–18, 2006.
[19] D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Transactions on Information Theory, 58(2):1094–1121, 2012.
[20] M. Elad, P. Milanfar, and R. Rubinstein. Analysis versus synthesis in signal priors. Inverse Problems, 23(3):947, 2007.
[21] J. H. Ender. On compressive sensing applied to radar. Signal Processing, 90(5):1402–1414, 2010.
[22] A. C. Fannjiang, T. Strohmer, and P. Yan. Compressed remote sensing of sparse objects. SIAM Journal on Imaging Sciences, 3(3):595–618, 2010.
[23] M. Fornasier and H. Rauhut. Compressive sensing. In Handbook of Mathematical Methods in Imaging, pages 187–228. Springer, 2011.
[24] S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing, volume 1. Springer, 2013.
[25] S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. Birkhäuser Boston Inc., Boston, MA, 2013.
[26] D. Ge, X. Jiang, and Y. Ye. A note on the complexity of ℓp minimization. Mathematical Programming, 129(2):285–299, 2011.
[27] R. Giryes and M. Elad. Can we allow linear dependencies in the dictionary in the sparse synthesis framework? In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5459–5463. IEEE, 2013.
[28] R. Gribonval and M. Nielsen. Highly sparse representations from dictionaries are unique and independent of the sparseness measure. Applied and Computational Harmonic Analysis, 22(3):335–355, 2007.
[29] J. P. Haldar, D. Hernando, and Z.-P. Liang. Compressed-sensing MRI with random encoding. IEEE Transactions on Medical Imaging, 30(4):893–903, 2011.
[30] M. A. Herman and T. Strohmer. High-resolution radar via compressed sensing. IEEE Transactions on Signal Processing, 57(6):2275–2284, 2009.
[31] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. An interior-point method for large-scale ℓ1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing, 1(4):606–617, 2007.
[32] Y. Liu, T. Mi, and S. Li. Compressed sensing with general frames via optimal-dual-based ℓ1-analysis. IEEE Transactions on Information Theory, 58(7):4201–4214, 2012.
[33] M. Lustig, D. Donoho, and J. M. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, 2007.
[34] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301–321, 2009.
[35] L. C. Potter, E. Ertin, J. T. Parker, and M. Cetin. Sparsity and compressed sensing in radar imaging. Proceedings of the IEEE, 98(6):1006–1020, 2010.
[36] H. Rauhut. Random sampling of sparse trigonometric polynomials. Applied and Computational Harmonic Analysis, 22(1):16–42, 2007.
[37] H. Rauhut. Stability results for random sampling of sparse trigonometric polynomials. IEEE Transactions on Information Theory, 54(12):5661–5670, 2008.
[38] H. Rauhut. Compressive sensing and structured random matrices. Theoretical Foundations and Numerical Methods for Sparse Recovery, 9:1–92, 2010.
[39] H. Rauhut, K. Schnass, and P. Vandergheynst. Compressed sensing and redundant dictionaries. IEEE Transactions on Information Theory, 54(5):2210–2219, 2008.
[40] J. Romberg. Imaging via compressive sampling [introduction to compressive sampling and recovery via convex programming]. IEEE Signal Processing Magazine, 25(2):14–20, 2008.
[41] M. Rudelson and R. Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. Communications on Pure and Applied Mathematics, 61(8):1025–1045, 2008.
[42] Y. Wang and W. Yin. Sparse signal reconstruction via iterative support detection. SIAM Journal on Imaging Sciences, 3(3):462–491, 2010.
[43] C. Zeng, S. Zhu, S. Li, Q. Liao, and L. Wang. Sparse frame DOA estimations via a rank-one correlation model for low SNR and limited snapshots. Applied and Computational Harmonic Analysis, 2016.

Department of Mathematics, San Francisco State University, San Francisco, CA 94132
E-mail address: [email protected]

Department of Mathematics, San Francisco State University, San Francisco, CA 94132
E-mail address: [email protected]

Department of Mathematics, San Francisco State University, San Francisco, CA 94132
E-mail address: [email protected]