Phaseless compressive sensing using partial support information

Zhiyong Zhou and Jun Yu

Department of Mathematics and Mathematical Statistics, Umeå University,

arXiv:1705.04048v1 [cs.IT] 11 May 2017

Umeå, 901 87, Sweden

May 12, 2017

Abstract

We study the recovery conditions of weighted ℓ1 minimization for real signal reconstruction from phaseless compressed sensing measurements when partial support information is available. A Strong Restricted Isometry Property (SRIP) condition is provided to ensure stable recovery. Moreover, we present the weighted null space property as the sufficient and necessary condition for the success of k-sparse phase retrieval via weighted ℓ1 minimization.

1 Introduction

Compressive Sensing (CS) aims to recover an unknown signal from underdetermined linear measurements (see [4, 5] for a comprehensive view). It is known as phase retrieval or phaseless compressive sensing when no phase information is available. Specifically, the goal of phaseless compressive sensing is to recover x ∈ C^N, up to a unimodular scaling constant, from noisy magnitude measurements y = |Ax| + e ∈ R^m, where A = (a_1, ..., a_m)^T ∈ C^{m×N} is the measurement matrix, |Ax| = (|⟨a_1, x⟩|, ..., |⟨a_m, x⟩|)^T, and e is the noise term. In the real case, i.e., x ∈ R^N, when x is sparse or compressible, stable recovery can be guaranteed by the ℓ1 minimization problem

$$\min_{z} \|z\|_1 \quad \text{subject to} \quad \|\,|Az| - y\,\|_2 \le \varepsilon, \tag{1}$$

provided that the measurement matrix A satisfies the SRIP [7, 9]. In the noiseless case, the first sufficient and necessary condition was presented in [10], via a new version of the null space property for the phase retrieval problem.

In this paper, we generalize the existing theoretical framework for phaseless compressive sensing to incorporate partial support information; that is, we consider the case where an estimate of the support of the signal is available. We follow notations and arguments similar to those in [6]. For an arbitrary signal x ∈ R^N, let x_k be its best k-term approximation, so that x_k minimizes ‖x − f‖_1 over all k-sparse vectors f. Let T_0 be the support of x_k, where T_0 ⊂ {1, ..., N} and |T_0| ≤ k. Let T̃, the support estimate, be a subset of {1, 2, ..., N} with cardinality |T̃| = ρk, where ρ ≥ 0, and |T̃ ∩ T_0| = αρk with 0 ≤ α ≤ 1. To incorporate the prior support information T̃, we adopt the weighted ℓ1 minimization

$$\min_{z} \sum_{i=1}^{N} w_i |z_i| \quad \text{subject to} \quad \|\,|Az| - y\,\|_2 \le \varepsilon, \quad \text{where } w_i = \begin{cases} \omega \in [0,1], & i \in \tilde{T}, \\ 1, & i \in \tilde{T}^c. \end{cases} \tag{2}$$
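To make the weighting in (2) concrete, the following sketch builds the weight vector w and the parameters ρ and α for an illustrative signal and support estimate of our own choosing (the variable names are ours, not from the paper):

```python
import numpy as np

N, k = 10, 3
x = np.zeros(N)
x[[0, 3, 7]] = [2.0, -1.5, 0.5]          # k-sparse signal, so T0 = {0, 3, 7}

T0 = {0, 3, 7}                            # support of the best k-term approximation
T_est = {0, 3, 5}                         # support estimate T~, partly wrong

rho = len(T_est) / k                      # |T~| = rho * k       -> rho = 1
alpha = len(T_est & T0) / len(T_est)      # |T~ ∩ T0| = alpha * rho * k -> alpha = 2/3

omega = 0.5
w = np.ones(N)
w[list(T_est)] = omega                    # w_i = omega on T~, 1 on its complement

weighted_l1 = np.sum(w * np.abs(x))       # ||x||_{1,w}
print(rho, alpha, weighted_l1)
```

Here two of the three estimated support indices are correct, giving α = 2/3; the weighted norm down-weights the coordinates we believe belong to the support.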

We present the SRIP condition and the weighted null space property condition that guarantee the success of recovery via the weighted ℓ1 minimization problem above. The paper is organized as follows. In Section 2, we introduce the definition of SRIP and present the stable recovery condition with this tool. In Section 3, the sufficient and necessary weighted null space property condition for real sparse phase retrieval is given. Finally, Section 4 is devoted to the conclusion. Throughout the paper, for any vector x ∈ R^N, we denote the ℓp norm by $\|x\|_p = (\sum_{i=1}^{N} |x_i|^p)^{1/p}$ for p > 0 and the weighted ℓ1 norm by $\|x\|_{1,w} = \sum_{i=1}^{N} w_i |x_i|$. For any set T, we denote its cardinality by |T|. The vector x ∈ R^N is called k-sparse if at most k of its entries are nonzero, i.e., if ‖x‖_0 = |supp(x)| ≤ k, where supp(x) denotes the index set of the nonzero entries. We denote the index set [N] := {1, 2, ..., N}. For a matrix A = (a_1, ..., a_m)^T ∈ R^{m×N} and an index set I ⊂ [m], we denote by A_I the submatrix of A in which only the rows with indices in I are kept, i.e., A_I = (a_j, j ∈ I)^T.

2 SRIP

To recover sparse signals via ℓ1 minimization in the classical compressive sensing setting, [2] introduced the notion of the Restricted Isometry Property (RIP) and established a sufficient condition. We say a matrix A satisfies the RIP of order k if there exists a constant δ_k ∈ [0, 1) such that for all k-sparse vectors x we have

$$(1 - \delta_k)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_k)\|x\|_2^2. \tag{3}$$

[1] proved that the RIP condition $\delta_{tk} < \sqrt{(t-1)/t}$ with t > 1 guarantees exact recovery in the noiseless case and stable recovery in the noisy case via ℓ1 minimization, and that this condition is sharp when t ≥ 4/3. Very recently, [3] generalized this sharp RIP condition to the weighted ℓ1 minimization problem when partial support information is incorporated. We first present the following useful lemma, which is a simple extension of the result in [3].

Lemma 1 Let x ∈ R^N, y = Ax + e ∈ R^m with ‖e‖_2 ≤ ζ, and η ≥ 0. Suppose that A satisfies the RIP with

$$\delta_{tk} < \sqrt{\frac{t-d}{t-d+\gamma^2}} \tag{4}$$

for some t > d, where $\gamma = \omega + (1-\omega)\sqrt{1+\rho-2\alpha\rho}$ and

$$d = \begin{cases} 1, & \omega = 1, \\ \max\{1,\ 1+\rho-2\alpha\rho\}, & 0 \le \omega < 1. \end{cases} \tag{5}$$

Then any vector $x^\sharp \in \{z \in \mathbb{R}^N : \|z\|_{1,w} \le \|x\|_{1,w} + \eta,\ \|Az - y\|_2 \le \varepsilon\}$ satisfies

$$\|x^\sharp - x\|_2 \le C_1(\zeta + \varepsilon) + C_2\,\frac{2\big(\omega\|x_{T_0^c}\|_1 + (1-\omega)\|x_{\tilde{T}^c \cap T_0^c}\|_1\big) + \eta}{\sqrt{k}}, \tag{6}$$

where C_1 and C_2 are positive constants depending only on t, d, γ and δ_{tk}.

To handle the magnitude-only measurements, [9] introduced the following strengthening of the RIP.

Definition 1 ([9]) The matrix A ∈ R^{m×N} satisfies the Strong Restricted Isometry Property (SRIP) of order k with bounds θ_−, θ_+ ∈ (0, 2) if

$$\theta_- \|x\|_2^2 \le \min_{I \subseteq [m],\, |I| \ge m/2} \|A_I x\|_2^2 \le \max_{I \subseteq [m],\, |I| \ge m/2} \|A_I x\|_2^2 \le \theta_+ \|x\|_2^2$$

holds for all k-sparse vectors x ∈ R^N.

Theorem 1 Let x ∈ R^N and y = |Ax| + e ∈ R^m with ‖e‖_2 ≤ ζ. Suppose that A satisfies the SRIP of order tk with bounds θ_−, θ_+ and

$$t \ge \max\left\{ d + \frac{\gamma^2(1-\theta_-)^2}{2\theta_- - \theta_-^2},\; d + \frac{\gamma^2(1-\theta_+)^2}{2\theta_+ - \theta_+^2} \right\}, \tag{7}$$

where γ and d are given in Lemma 1. Then the solution x♯ of the weighted ℓ1 minimization problem (2) satisfies

$$\min\{\|x^\sharp - x\|_2,\ \|x^\sharp + x\|_2\} \le C_1(\zeta + \varepsilon) + C_2\,\frac{2\big(\omega\|x_{T_0^c}\|_1 + (1-\omega)\|x_{\tilde{T}^c \cap T_0^c}\|_1\big)}{\sqrt{k}}. \tag{8}$$

Remark 4 If α > 1/2, then d = 1 and γ < 1. In this case, the sufficient condition (7) in Theorem 1 is weaker than that of Theorem 2.2 in [9] and that of Theorem 3.1 in [7], and the constants satisfy C_1 < c_1 and C_2 < c_2, where c_1 and c_2 denote the corresponding constants for standard ℓ1 minimization (ω = 1).

Set $t_\omega = \max\left\{ d + \frac{\gamma^2(1-\theta_-)^2}{2\theta_- - \theta_-^2},\; d + \frac{\gamma^2(1-\theta_+)^2}{2\theta_+ - \theta_+^2} \right\}$. We illustrate how the constants t_ω, C_1 and C_2 change with ω for different values of α in Figure 1. In all the plots, we set ρ = 1. In the plot of t_ω, we set θ_− = 1/2 and θ_+ = 3/2, so that t_ω = d + γ²/3. In the plots of C_1 and C_2, we fix t = 4 and δ_{tk} = 0.3. Note that if ω = 1 or α = 0.5, then t_ω ≡ 1 + 1/3 = 4/3, C_1 ≡ c_1 and C_2 ≡ c_2. The figure shows that t_ω decreases as α increases, which means that the sufficient condition (7) becomes weaker as α increases. For instance, if 90% of the support estimate is accurate (α = 0.9) and ω = 0.6, we have t_ω = 1.2022, while t_ω = 1.3333 for standard ℓ1 minimization (ω = 1). As α increases, the constant C_1 decreases with t = 4 and δ_{tk} = 0.3. Meanwhile, the constant C_2 with α ≠ 0.5 is smaller than that with α = 0.5.
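The numbers above are easy to reproduce. The following sketch (helper names are ours; the formula for d is the one stated in Lemma 1) evaluates γ, d and t_ω, recovering t_ω ≈ 1.2022 for α = 0.9, ω = 0.6, ρ = 1 and t_ω = 4/3 for ω = 1:

```python
import math

def gamma(omega, rho, alpha):
    # gamma = omega + (1 - omega) * sqrt(1 + rho - 2*alpha*rho)
    return omega + (1 - omega) * math.sqrt(1 + rho - 2 * alpha * rho)

def d_const(omega, rho, alpha):
    # d = 1 if omega = 1, else max{1, 1 + rho - 2*alpha*rho} (Lemma 1)
    return 1.0 if omega == 1 else max(1.0, 1 + rho - 2 * alpha * rho)

def t_omega(omega, rho, alpha, theta_minus=0.5, theta_plus=1.5):
    g, d = gamma(omega, rho, alpha), d_const(omega, rho, alpha)
    term = lambda th: d + g**2 * (1 - th)**2 / (2 * th - th**2)
    return max(term(theta_minus), term(theta_plus))

# with theta_- = 1/2 and theta_+ = 3/2 both terms reduce to d + gamma^2 / 3
print(round(t_omega(0.6, 1.0, 0.9), 4))   # ≈ 1.2022  (alpha = 0.9, omega = 0.6)
print(round(t_omega(1.0, 1.0, 0.9), 4))   # 4/3 ≈ 1.3333 for standard l1 (omega = 1)
```

Note that for α > 1/2 both γ and d shrink toward their ω = 1 values, which is exactly why the sufficient condition weakens when the support estimate is mostly accurate.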

Figure 1: Comparison of the constants t_ω, C_1 and C_2 for various values of α. In all the plots, we set ρ = 1. In the plot of t_ω, we set θ_− = 1/2 and θ_+ = 3/2. In the plots of C_1 and C_2, we fix t = 4 and δ_{tk} = 0.3.

Proof of Theorem 1. For any solution x♯ of (2), we have ‖x♯‖_{1,w} ≤ ‖x‖_{1,w} and ‖|Ax♯| − |Ax| − e‖_2 ≤ ε. Divide the index set {1, 2, ..., m} into the two subsets T = {j : sign(⟨a_j, x♯⟩) = sign(⟨a_j, x⟩)} and T^c = {j : sign(⟨a_j, x♯⟩) = −sign(⟨a_j, x⟩)}. Then it follows that

$$\|A_T x^\sharp - A_T x - e_T\|_2^2 + \|A_{T^c} x^\sharp + A_{T^c} x - e_{T^c}\|_2^2 \le \varepsilon^2. \tag{9}$$

Here either |T| ≥ m/2 or |T^c| ≥ m/2. If |T| ≥ m/2, we use the fact that ‖A_T x♯ − A_T x − e_T‖_2 ≤ ε. Then we obtain

$$x^\sharp \in \{z \in \mathbb{R}^N : \|z\|_{1,w} \le \|x\|_{1,w},\ \|A_T z - A_T x - e_T\|_2 \le \varepsilon\}.$$

Since A satisfies the SRIP of order tk with bounds θ_−, θ_+ and

$$t \ge \max\left\{ d + \frac{\gamma^2(1-\theta_-)^2}{2\theta_- - \theta_-^2},\; d + \frac{\gamma^2(1-\theta_+)^2}{2\theta_+ - \theta_+^2} \right\} > d, \tag{10}$$

the definition of SRIP implies that A_T satisfies the RIP of order tk with

$$\delta_{tk} \le \max\{1 - \theta_-,\ \theta_+ - 1\} \le \sqrt{\frac{t-d}{t-d+\gamma^2}}. \tag{11}$$

Thus, by using Lemma 1 with η = 0, we have

$$\|x^\sharp - x\|_2 \le C_1(\zeta + \varepsilon) + C_2\,\frac{2\big(\omega\|x_{T_0^c}\|_1 + (1-\omega)\|x_{\tilde{T}^c \cap T_0^c}\|_1\big)}{\sqrt{k}}.$$

Similarly, if |T^c| ≥ m/2, we obtain the corresponding result

$$\|x^\sharp + x\|_2 \le C_1(\zeta + \varepsilon) + C_2\,\frac{2\big(\omega\|x_{T_0^c}\|_1 + (1-\omega)\|x_{\tilde{T}^c \cap T_0^c}\|_1\big)}{\sqrt{k}}.$$

The proof of Theorem 1 is now completed.
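As a numerical sanity check of the step from (10) to (11): when t equals the lower bound in (10), the SRIP-induced RIP constant max{1 − θ_−, θ_+ − 1} meets the right-hand side of (11) exactly. A short sketch with illustrative values (helper names are ours):

```python
import math

def rip_bound(t, d, g):
    # right-hand side of (11): sqrt((t - d) / (t - d + gamma^2))
    return math.sqrt((t - d) / (t - d + g**2))

# illustrative parameters: rho = 1, alpha = 0.9, omega = 0.6 -> gamma ~ 0.7789, d = 1
g = 0.6 + 0.4 * math.sqrt(0.2)
d = 1.0
for theta_minus, theta_plus in [(0.5, 1.5), (0.7, 1.3), (0.9, 1.1)]:
    # t set to the lower bound in (10)
    t = max(d + g**2 * (1 - theta_minus)**2 / (2*theta_minus - theta_minus**2),
            d + g**2 * (1 - theta_plus)**2 / (2*theta_plus - theta_plus**2))
    srip_delta = max(1 - theta_minus, theta_plus - 1)
    # (10) guarantees the SRIP-induced RIP constant satisfies (11)
    assert srip_delta <= rip_bound(t, d, g) + 1e-12
```

For symmetric bounds θ_± = 1 ± u the two terms in (10) coincide and the inequality in (11) is tight, which is why the threshold t_ω is the natural quantity to plot.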

3 Weighted Null Space Property

In this section, we consider the noiseless weighted ℓ1 minimization problem, i.e.,

$$\min_{z} \|z\|_{1,w} \quad \text{subject to} \quad |Az| = |Ax|, \quad \text{where } w_i = \begin{cases} \omega \in [0,1], & i \in \tilde{T}, \\ 1, & i \in \tilde{T}^c. \end{cases} \tag{12}$$

We denote the kernel space of A by N(A) := {h ∈ R^N : Ah = 0} and the k-sparse vector space by Σ_k^N := {x ∈ R^N : ‖x‖_0 ≤ k}.

Definition 2 The matrix A satisfies the w-weighted null space property of order k if for any nonzero h ∈ N(A) and any T ⊂ [N] with |T| ≤ k it holds that

$$\|h_T\|_{1,w} < \|h_{T^c}\|_{1,w}, \tag{13}$$

where T^c is the complementary index set of T and h_T is the restriction of h to T.

Remark 5 Obviously, when the weight ω = 1, the weighted null space property reduces to the classical null space property. Moreover, by the specific setting of w_i, the expression (13) is equivalent to

$$\omega\|h_{T\cap\tilde{T}}\|_1 + \|h_{T\cap\tilde{T}^c}\|_1 < \omega\|h_{T^c\cap\tilde{T}}\|_1 + \|h_{T^c\cap\tilde{T}^c}\|_1 \iff \omega\|h_T\|_1 + (1-\omega)\|h_G\|_1 < \|h_{T^c}\|_1,$$

where G = (T ∩ T̃^c) ∪ (T^c ∩ T̃) (see [8] for more arguments).
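The equivalence in Remark 5 in fact holds as an exact identity between the differences ‖h_T‖_{1,w} − ‖h_{T^c}‖_{1,w} and ω‖h_T‖_1 + (1−ω)‖h_G‖_1 − ‖h_{T^c}‖_1, which the following sketch checks on random instances (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, omega = 12, 4, 0.3
for _ in range(100):
    h = rng.standard_normal(N)
    T = rng.choice(N, size=k, replace=False)          # candidate support T
    T_est = rng.choice(N, size=5, replace=False)      # support estimate T~
    Tc = np.setdiff1d(np.arange(N), T)
    w = np.ones(N); w[T_est] = omega
    lhs = np.sum(w[T] * np.abs(h[T]))                 # ||h_T||_{1,w}
    rhs = np.sum(w[Tc] * np.abs(h[Tc]))               # ||h_{T^c}||_{1,w}
    # right-hand form of the equivalence, with G = (T \ T~) ∪ (T^c ∩ T~)
    G = np.union1d(np.setdiff1d(T, T_est), np.intersect1d(Tc, T_est))
    lhs2 = omega * np.sum(np.abs(h[T])) + (1 - omega) * np.sum(np.abs(h[G]))
    rhs2 = np.sum(np.abs(h[Tc]))
    assert abs((lhs - rhs) - (lhs2 - rhs2)) < 1e-10
```

Since the two differences agree for every h, the two inequalities hold for exactly the same vectors, which is the claimed equivalence.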

It is known that a signal x ∈ Σ_k^N can be recovered from linear measurements via the weighted ℓ1 minimization problem if and only if the measurement matrix A satisfies the weighted null space property of order k. We state this as follows (see [11]):


Lemma 2 Given A ∈ R^{m×N}, for every k-sparse vector x ∈ R^N it holds that

$$\arg\min_{z\in\mathbb{R}^N} \{\|z\|_{1,w} : Az = Ax\} = x$$

if and only if A satisfies the w-weighted null space property of order k.
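The minimization in Lemma 2 is a linear program after the standard lift |z_i| ≤ u_i. The following sketch uses a toy matrix and weights chosen by hand (one can verify directly that the w-weighted null space property of order 1 holds, since N(A) = span{(1,1,1)}) and recovers a 1-sparse signal with scipy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy matrix with null space span{(1, 1, 1)}; the w-weighted null space
# property of order 1 holds for the weights below (check all |T| = 1 by hand).
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
x = np.array([0.0, 2.0, 0.0])            # 1-sparse target
w = np.array([0.5, 1.0, 1.0])            # omega = 0.5 on the first coordinate

# min sum_i w_i u_i  s.t.  Az = Ax,  -u <= z <= u   (weighted l1 as an LP)
N = A.shape[1]
c = np.concatenate([np.zeros(N), w])
A_eq = np.hstack([A, np.zeros_like(A)])
b_eq = A @ x
I = np.eye(N)
A_ub = np.vstack([np.hstack([I, -I]),    # z - u <= 0
                  np.hstack([-I, -I])])  # -z - u <= 0
b_ub = np.zeros(2 * N)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * N + [(0, None)] * N)
z = res.x[:N]
print(np.round(z, 6))                     # recovers x = (0, 2, 0)
```

The feasible set here is the line x + t(1, 1, 1), and the weighted objective is uniquely minimized at t = 0, matching the lemma's prediction.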

Next, we extend Lemma 2 to the following theorem on phaseless compressive sensing for real signal reconstruction.

Theorem 2 The following statements are equivalent:

(a) For any k-sparse x ∈ R^N, we have

$$\arg\min_{z\in\mathbb{R}^N} \{\|z\|_{1,w} : |Az| = |Ax|\} = \{\pm x\}. \tag{14}$$

(b) For every S ⊆ [m], it holds that

$$\|u + v\|_{1,w} < \|u - v\|_{1,w} \tag{15}$$

for all nonzero u ∈ N(A_S) and v ∈ N(A_{S^c}) satisfying ‖u + v‖_0 ≤ k.

Remark 6 If ω = 1, then Theorem 2 reduces to Theorem 3.2 in [10]. Since w_i = ω when i ∈ T̃ and w_i = 1 otherwise, the expression (15) is equivalent to

$$\omega\|u+v\|_1 + (1-\omega)\|(u+v)_{\tilde{T}^c}\|_1 < \omega\|u-v\|_1 + (1-\omega)\|(u-v)_{\tilde{T}^c}\|_1.$$
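The rewriting in Remark 6 rests on the identity ‖s‖_{1,w} = ω‖s‖_1 + (1−ω)‖s_{T̃^c}‖_1 for any vector s, which a quick numerical check confirms (names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, omega = 8, 0.4
T_est = np.array([0, 2, 5])               # support estimate T~
w = np.ones(N); w[T_est] = omega
Tc_est = np.setdiff1d(np.arange(N), T_est)

def weighted_l1(v):
    return np.sum(w * np.abs(v))          # ||v||_{1,w}

for _ in range(100):
    u, v = rng.standard_normal(N), rng.standard_normal(N)
    for s in (u + v, u - v):
        direct = weighted_l1(s)
        # identity behind Remark 6: omega*||s||_1 + (1-omega)*||s_{T~^c}||_1
        rewritten = omega * np.sum(np.abs(s)) + (1 - omega) * np.sum(np.abs(s[Tc_est]))
        assert abs(direct - rewritten) < 1e-10
```

Applying the identity to both s = u + v and s = u − v turns (15) into the displayed inequality.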

Proof of Theorem 2. The proof follows that of Theorem 3.2 in [10] with minor modifications. First we show (a) ⇒ (b). Assume (b) is false, that is, there exist nonzero u ∈ N(A_S) and v ∈ N(A_{S^c}) such that

$$\|u + v\|_{1,w} \ge \|u - v\|_{1,w}$$

and u + v ∈ Σ_k^N. Now set x = u + v ∈ Σ_k^N. Obviously, for i = 1, ..., m, we have

$$|\langle a_i, x\rangle| = |\langle a_i, u + v\rangle| = |\langle a_i, u - v\rangle|,$$

since either ⟨a_i, u⟩ = 0 or ⟨a_i, v⟩ = 0. In other words, |Ax| = |A(u − v)|. Note that u − v ≠ −x, for otherwise we would have u = 0, and u − v ≠ x, for otherwise we would have v = 0; either case is a contradiction. Then it follows from (a) that

$$\|x\|_{1,w} = \|u + v\|_{1,w} < \|u - v\|_{1,w},$$

which contradicts our assumption. Thus, (b) holds.

Next we prove (b) ⇒ (a). Let b = (b_1, ..., b_m)^T = |Ax|, where x ∈ Σ_k^N. For a fixed σ = (σ_1, ..., σ_m)^T ∈ {−1, 1}^m, we set b_σ = (σ_1 b_1, ..., σ_m b_m)^T. We now consider the following weighted ℓ1 minimization problem:

$$\min_{z} \|z\|_{1,w} \quad \text{subject to} \quad Az = b_\sigma. \tag{16}$$

Its solution is denoted by x_σ. We claim that for any σ ∈ {−1, 1}^m, if x_σ exists (it may not), we have ‖x_σ‖_{1,w} ≥ ‖x‖_{1,w}, with equality if and only if x_σ = ±x.

To prove the claim, let σ* ∈ {−1, 1}^m be such that b_{σ*} = Ax. First note that statement (b) implies the classical weighted null space property of order k. To see this, for any nonzero h ∈ N(A) and T ⊆ [N] with |T| ≤ k, we set u = h, v = h_T − h_{T^c} and S = [m]. Then u ∈ N(A_S) and v ∈ N(A_{S^c}), and u + v = 2h_T is k-sparse. Therefore, statement (b) implies

$$2\|h_T\|_{1,w} = \|u + v\|_{1,w} < \|u - v\|_{1,w} = 2\|h_{T^c}\|_{1,w}.$$

As a consequence, we have x_{σ*} = x by Lemma 2, and similarly x_{−σ*} = −x. Next, for any σ ∈ {−1, 1}^m with σ ≠ ±σ*, if x_σ does not exist then there is nothing to prove. Assume it does exist, and set S_* = {i : σ_i = σ_i^*}. Then

$$\langle a_i, x_\sigma\rangle = \begin{cases} \langle a_i, x\rangle, & i \in S_*, \\ -\langle a_i, x\rangle, & i \in S_*^c. \end{cases}$$

Set u = x − x_σ and v = x + x_σ. Obviously, u ∈ N(A_{S_*}) and v ∈ N(A_{S_*^c}). Furthermore, u + v = 2x ∈ Σ_k^N. Then, by statement (b), we have

$$2\|x\|_{1,w} = \|u + v\|_{1,w} < \|u - v\|_{1,w} = 2\|x_\sigma\|_{1,w}.$$

This proves (a), and the proof is completed.

4 Conclusion

In this paper, we establish a sufficient SRIP condition and a sufficient and necessary weighted null space property condition for phaseless compressive sensing with partial support information via weighted ℓ1 minimization. We only consider the real-valued signal reconstruction case; it is challenging to generalize the present results to the complex case. It would also be very interesting to construct measurement matrices A ∈ R^{m×N} satisfying the weighted null space property given in (15).

References

[1] Cai, T. T. and Zhang, A. (2014). Sparse representation of a polytope and recovery of sparse signals and low-rank matrices. IEEE Transactions on Information Theory 60(1) 122–132.

[2] Candès, E. J. and Tao, T. (2005). Decoding by linear programming. IEEE Transactions on Information Theory 51(12) 4203–4215.

[3] Chen, W. and Li, Y. (2016). Recovery of signals under the high order RIP condition via prior support information. arXiv preprint arXiv:1603.03464.

[4] Eldar, Y. C. and Kutyniok, G. (2012). Compressed Sensing: Theory and Applications. Cambridge University Press.

[5] Foucart, S. and Rauhut, H. (2013). A Mathematical Introduction to Compressive Sensing. New York, NY, USA: Springer-Verlag.

[6] Friedlander, M. P., Mansour, H., Saab, R. and Yilmaz, O. (2012). Recovering compressively sampled signals using partial support information. IEEE Transactions on Information Theory 58(2) 1122–1134.

[7] Gao, B., Wang, Y. and Xu, Z. (2016). Stable signal recovery from phaseless measurements. Journal of Fourier Analysis and Applications 22(4) 787–808.

[8] Mansour, H. and Saab, R. (2015). Recovery analysis for weighted ℓ1-minimization using the null space property. Applied and Computational Harmonic Analysis.

[9] Voroninski, V. and Xu, Z. (2016). A strong restricted isometry property, with an application to phaseless compressed sensing. Applied and Computational Harmonic Analysis 40(2) 386–395.

[10] Wang, Y. and Xu, Z. (2014). Phase retrieval for sparse signals. Applied and Computational Harmonic Analysis 37(3) 531–544.

[11] Zhou, S., Xiu, N., Wang, Y. and Kong, L. (2013). Exact recovery for sparse signal via weighted ℓ1 minimization. arXiv preprint arXiv:1312.2358.
