
Phase Retrieval From Binary Measurements

arXiv:1708.00602v1 [cs.IT] 2 Aug 2017

Subhadip Mukherjee and Chandra Sekhar Seelamantula, Senior Member, IEEE

The authors are with the Department of Electrical Engineering, Indian Institute of Science, Bangalore–560012, India. Phone: +918022932695; Fax: +918023600444; Emails: [email protected], [email protected].

Abstract—We consider the problem of signal reconstruction from quadratic measurements that are encoded as +1 or −1 depending on whether they exceed a predetermined positive threshold or not. Binary measurements are fast to acquire and less expensive from the hardware perspective. We formulate the problem of signal reconstruction using a consistency criterion, wherein one seeks a signal that is consistent with the measurements. We construct a convex cost using a one-sided quadratic penalty and minimize it using an iterative, accelerated projected gradient-descent technique. We refer to the resulting algorithm as binary phase retrieval (BPR). Considering additive white noise contamination prior to quantization, we also derive the Cramér-Rao bound (CRB) for the binary encoding model. Experimental results demonstrate that the BPR algorithm yields a reconstruction signal-to-noise ratio (SNR) of approximately 25 dB in the absence of noise. In the presence of noise, the mean-squared error of reconstruction is within 2 to 3 dB of the CRB.

Index Terms—Phase retrieval, binary encoding, consistency, Cramér-Rao bound, projected gradient descent.

I. INTRODUCTION

PHASE retrieval (PR) is encountered in a variety of imaging applications such as X-ray crystallography [1], holography [2], microscopy [3], coherent modulation imaging [4], etc. Since the sensors can only record the intensity of the complex field and not the phase directly, it becomes imperative to recover the phase from the magnitude measurements in order to reconstruct the underlying object. The objective is to solve this otherwise ill-posed inverse problem by acquiring oversampled magnitude measurements and incorporating signal priors such as non-negativity, compact support, sparsity, etc. The initial contributions in PR were due to Fienup [5], [6], and Gerchberg and Saxton [7], who proposed iterative error-reduction algorithms. A comprehensive overview of the Fienup algorithm and its variants can be found in [8]. There also exist non-iterative techniques that rely on a Hilbert transform relationship between the log-magnitude and the phase of the Fourier transform for certain classes of signals [9], [10]. Recently, we developed PR algorithms for a class of two-dimensional (2-D) parametric signals [11], [12] and for signals belonging to principal shift-invariant spaces [13].

Recently, the problem of PR has been addressed within the realm of sparsity and magnitude-only compressive sensing (CS). Yu and Vetterli proposed a sparse spectral factorization technique [14] and established uniqueness guarantees. Moravec et al. proposed compressive PR [15], where the PR problem is cast with the constraint of compressibility on the signal to be estimated. A greedy local-search-based algorithm for sparse PR (GESPAR) was proposed by Shechtman et al. [16]; its scalability and accuracy have been established in [16]. Other notable contributions for sparse PR include dictionary learning for PR (DOLPHIn) [17], compressive PR via generalized message passing [18], phase retrieval of sparse Boolean signals via simulated annealing [19], etc. We developed the sparse Fienup algorithm [20], [21], where sparsity is enforced via a hard-thresholding operation in the signal domain. Vaswani et al. [22] recently proposed an alternating minimization technique for recovering a low-rank matrix from quadratic measurements corresponding to projections with each of its columns. Fogel et al. [23] showed that incorporating signal priors such as sparsity and positivity leads to a significant speed-up of iterative reconstruction techniques. A seminal contribution in PR is the PhaseLift framework of Candès et al. [24], [25], which relies on lifting the ground-truth vector to a matrix such that the quadratic measurements get converted to an equivalent set of linear measurements. Reconstruction is achieved by solving a tractable semidefinite program (SDP). One could impose sparsity within the PhaseLift framework using the $\ell_1$ penalty [26] or a log-det relaxation [27]. Gradient-descent approaches for PR that do not rely on lifting include the Wirtinger flow (WF) method [28], its truncated version (TWF) [29], etc. These algorithms are scalable and have convergence guarantees under the spectral initialization [30]. Waldspurger et al. developed PhaseCut [31], where PR is formulated as a non-convex quadratic program and solved using a block-coordinate-descent approach, with a per-iteration complexity comparable to that of Gerchberg-Saxton-type algorithms.

The problem of measurement quantization was considered in the context of CS, but not PR. Zymnis et al. considered the problem of reconstruction from quantized CS measurements [32]. The case of binary CS measurements was addressed by Boufounos and Baraniuk [33], who proposed a fixed-point continuation algorithm for signal recovery. Other notable works in the context of binary CS include [34]–[40].

A. This Paper

We consider a scenario where quadratic measurements of a signal are compared with a threshold $\tau > 0$ and are encoded using the binary alphabet $\{-1, +1\}$. The motivation for binary encoding lies in the fact that, from the perspective of analog-to-digital (A/D) conversion, it is more efficient to encode coarsely while sampling at a high rate than to encode finely at a low sampling rate. All practical A/D converters exhibit a trade-off between encoding precision and sampling rate [41], [42]. This trade-off also lies at the heart of the development of delta-sigma A/D converters [43], [44]. For the reconstruction part, we combine the principles of lifting and consistent reconstruction in formulating an optimization cost, and employ an accelerated projected gradient-descent strategy for reconstruction. Considering additive noise corruption before encoding, we derive the Cramér-Rao bound (CRB), which acts as the benchmark. Experimental results demonstrate that the proposed algorithm yields accurate reconstruction under oversampling. Moreover, the reconstruction has a mean-squared error (MSE) within 2 to 3 dB of the CRB.
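As a rough illustration of this encoding model, the following sketch generates binary-encoded quadratic measurements of a unit-norm signal. The dimensions, the threshold value, and the variable names are purely illustrative and are not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 1280          # signal dimension and number of measurements (illustrative values)
tau = 0.4550             # positive threshold; the equiprobable chi-squared(1) median is used later
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)          # unit-norm ground truth

A = rng.standard_normal((m, n))           # rows a_i drawn from N(0, I_n)
quadratic = (A @ x_true) ** 2             # |a_i^T x|^2
y = np.where(quadratic > tau, 1.0, -1.0)  # binary encoding: +1 above the threshold, -1 otherwise
```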


II. THE BINARY PHASE RETRIEVAL (BPR) ALGORITHM

Algorithm 1 The Binary Phase Retrieval (BPR) algorithm.
1. Initialization: Set $t \leftarrow 0$, $X^t = Y^t \leftarrow 0_{n \times n}$, $\theta^t \leftarrow 1$, and $N_{\text{iter}} =$ maximum iteration count.
2. Iterate until $t$ exceeds $N_{\text{iter}}$:
   • $X^{t+1} = P_{\text{rank-1}}\left(Y^t - \eta^t \, \nabla F(Y)\big|_{Y = Y^t}\right)$,
   • $\theta^{t+1} = 2\left(1 + \sqrt{1 + \frac{4}{(\theta^t)^2}}\right)^{-1}$, $\beta^{t+1} = \theta^{t+1}\left(\frac{1}{\theta^t} - 1\right)$,
   • $Y^{t+1} = X^{t+1} + \beta^{t+1}\left(X^{t+1} - X^t\right)$, and $t \leftarrow t + 1$.
3. Output: $x^t$, where $X^t = x^t (x^t)^\top$.

The objective in standard PR for real signals is to reconstruct $\mathbf{x}^* \in \mathbb{R}^n$ from quadratic measurements $b_i = \left(\mathbf{a}_i^\top \mathbf{x}^*\right)^2$, $i = 1:m$. The vectors $\{\mathbf{a}_i\}$ are Gaussian sampling vectors, drawn independently from $\mathcal{N}(0, I_n)$, where $I_n$ is the $n \times n$ identity matrix. The notation $i = 1:m$ is used to indicate that $i = 1, 2, \cdots, m$. In BPR, the squared-magnitude measurements are encoded using $-1$ or $+1$ by comparing them against a predetermined threshold $\tau > 0$, resulting in the sign measurements $y_i = \mathrm{sgn}\left(\left(\mathbf{a}_i^\top \mathbf{x}^*\right)^2 - \tau\right)$, $i = 1:m$, where $\mathrm{sgn}(\cdot)$ denotes the signum function. The key idea behind consistent reconstruction is to seek a vector $\mathbf{x}$ that is consistent with the measurements, so that the reconstructed vector, when passed through the same acquisition process, matches the acquired measurements. The consistency condition can be expressed succinctly as $y_i\left(\left(\mathbf{a}_i^\top \mathbf{x}\right)^2 - \tau\right) > 0, \;\forall i$. Stated formally, the problem is

$$\text{BPR: Find } \mathbf{x} \text{ such that } y_i\left(\left(\mathbf{a}_i^\top \mathbf{x}\right)^2 - \tau\right) > 0, \text{ for all } i. \qquad (1)$$

We combine the requirement of consistent recovery with the principle of lifting [24], [25], and formulate a suitable cost function for minimization. The key idea behind lifting is to express the quadratic term $\left(\mathbf{a}_i^\top \mathbf{x}\right)^2$ as $\mathrm{Tr}(A_i X)$, where $X = \mathbf{x}\mathbf{x}^\top$, $A_i = \mathbf{a}_i\mathbf{a}_i^\top$, and $\mathrm{Tr}(\cdot)$ denotes the trace operator. Since $X$ is positive semidefinite and has rank one, consistent recovery in the lifted domain takes the following form: Find $X \succeq 0$ such that $y_i\left(\mathrm{Tr}(A_i X) - \tau\right) > 0$ and $\mathrm{rank}(X) = 1$. In order to solve this problem, we formulate an optimization cost using the one-sided quadratic loss $f: \mathbb{R} \rightarrow \mathbb{R}$, defined as

$$f(u) = \begin{cases} \frac{1}{2}u^2, & \text{if } u \leq 0, \\ 0, & \text{otherwise.} \end{cases}$$

The BPR problem is cast as

$$\hat{X} = \arg\min_{X \succeq 0} F(X) \;\text{ subject to }\; \mathrm{rank}(X) = 1, \qquad (2)$$

where $F(X) = \sum_{i=1}^{m} f\left(y_i\left(\mathrm{Tr}(A_i X) - \tau\right)\right)$. The one-sided quadratic loss penalizes lack of consistency. We solve (2) by employing an accelerated projected gradient-descent (APGD) algorithm. The resulting BPR algorithm is listed in Algorithm 1. The cost function in (2) is convex, but the rank-one projection operator $P_{\text{rank-1}}$ in Algorithm 1 is not; therefore, employing Nesterov's scheme [45] does not guarantee acceleration. A similar situation was encountered earlier in the context of low-rank matrix completion [46], where it turned out that incorporating a momentum factor led to a speed-up. Following the same strategy, we also incorporated a momentum factor and found that it indeed resulted in acceleration. The step-size $\eta^t$ is chosen following the exact line-search strategy

$$\eta^t = \arg\min_{\eta > 0} F\left(X^t - \eta G^t\right), \;\text{ where }\; G^t = \nabla F\left(X^t\right). \qquad (3)$$
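To make the iteration concrete, note that the gradient of the cost in (2) is $\nabla F(X) = \sum_{i} \min\left(y_i\left(\mathrm{Tr}(A_i X) - \tau\right), 0\right) y_i A_i$, which vanishes on measurements that are already consistent. The following is a minimal NumPy sketch of Algorithm 1 under two assumptions that are ours and not spelled out in the text: the rank-one projection is computed from the leading eigenpair of the symmetrized iterate, and the exact line search in (3) is approximated by a grid search over candidate step-sizes. Function names and grid values are illustrative.

```python
import numpy as np

def cost_F(Y, A, y, tau):
    """One-sided quadratic consistency cost F(Y) in the lifted domain."""
    u = y * (np.einsum('ij,jk,ik->i', A, Y, A) - tau)   # y_i (Tr(A_i Y) - tau)
    return 0.5 * np.sum(np.minimum(u, 0.0) ** 2)

def grad_F(Y, A, y, tau):
    """Gradient of F; zero for measurements that are already consistent."""
    u = y * (np.einsum('ij,jk,ik->i', A, Y, A) - tau)
    w = np.minimum(u, 0.0) * y                          # f'(u_i) * y_i
    return (A * w[:, None]).T @ A                       # sum_i w_i a_i a_i^T

def proj_rank1(Z):
    """Project a symmetric matrix onto the rank-one PSD matrices (leading eigenpair)."""
    Z = 0.5 * (Z + Z.T)
    vals, vecs = np.linalg.eigh(Z)
    return max(vals[-1], 0.0) * np.outer(vecs[:, -1], vecs[:, -1])

def bpr(A, y, tau, n_iter=150, eta_grid=np.arange(1e-5, 2.5e-3, 1e-5)):
    """Sketch of the BPR iteration (APGD with momentum and grid-based line search)."""
    n = A.shape[1]
    X = np.zeros((n, n))
    Y = np.zeros((n, n))
    theta = 1.0
    for _ in range(n_iter):
        G = grad_F(Y, A, y, tau)
        eta = min(eta_grid, key=lambda e: cost_F(Y - e * G, A, y, tau))  # stand-in for Eq. (3)
        X_new = proj_rank1(Y - eta * G)
        theta_new = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 / theta ** 2))
        beta = theta_new * (1.0 / theta - 1.0)
        Y = X_new + beta * (X_new - X)                  # momentum step
        X, theta = X_new, theta_new
    vals, vecs = np.linalg.eigh(X)
    return np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]    # rank-one factor as the estimate
```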

III. SIMULATION RESULTS

A. Performance Measures

If $\hat{\mathbf{x}}$ is a consistent solution to the BPR problem, so is $e^{j\phi}\hat{\mathbf{x}}$, for any $\phi \in [0, 2\pi]$. Since we are dealing with real signals, in order to factor out the effect of the global sign, an appropriate metric to quantify the accuracy of reconstruction vis-à-vis the ground truth $\mathbf{x}^*$ is the global-sign-invariant MSE [25], defined as

$$\mathrm{MSE} = \min_{\alpha \in \{-1, +1\}} \frac{\|\alpha\hat{\mathbf{x}} - \mathbf{x}^*\|_2^2}{\|\mathbf{x}^*\|_2^2}.$$

The reconstruction signal-to-noise ratio (SNR) is defined as the reciprocal of the MSE. The second metric that is relevant in the context of BPR is the consistency of the reconstruction $\hat{\mathbf{x}}$ with the measurements, defined as

$$\Upsilon_m = \frac{1}{m}\sum_{i=1}^{m} \mathbb{I}\left(y_i\left(\left(\mathbf{a}_i^\top \hat{\mathbf{x}}\right)^2 - \tau\right) > 0\right),$$

where $\mathbb{I}(E)$ is the indicator of the event $E$. The consistency metric $\Upsilon_m$ quantifies the fraction of measurements correctly explained by the reconstruction $\hat{\mathbf{x}}$. Naturally, $0 \leq \Upsilon_m \leq 1$, and $\Upsilon_m = 1$ is the best one could hope to achieve.

B. Signal Reconstruction in the Absence of Noise

The first illustration is on synthetic signals. In each trial, an instance of $\mathbf{x}^*$ is drawn uniformly at random from the unit sphere in $\mathbb{R}^n$, and the measurement vectors $\mathbf{a}_i \sim \mathcal{N}(0, I_n)$, where $n = 64$. The threshold $\tau$ is chosen such that the measurements are encoded as $+1$ or $-1$ with equal probability. In this case, $|\mathbf{a}_i^\top \mathbf{x}^*|^2$ follows a $\chi_1^2$ distribution, corresponding to which the threshold value is $\tau = 0.4550$. The equiprobable encoding strategy is reasonable and was also adopted for quantized CS [32] and binary CS [33]. The reconstruction is performed using the BPR algorithm with the step-size parameter $\eta^t$ determined according to (3), where the search range is fixed as $[0, 2.5 \times 10^{-3}]$, with a precision of $10^{-5}$. We examine the reconstruction SNR and consistency as the iterations progress, corresponding to different oversampling factors $\frac{m}{n}$ (cf. Figures 1(a) and (b)). The results have been averaged over 20 independent trials. As one would expect, the reconstruction SNR increases with increasing $\frac{m}{n}$. For instance, when $m = 20n$, the maximum SNR is about 25 dB, indicating a reasonably accurate reconstruction. Higher oversampling factors also lead to faster convergence of $\Upsilon_m$.
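For completeness, the equiprobable threshold and the two performance measures defined above translate directly into code; the sketch below is one possible realization, and the function names are ours.

```python
import numpy as np
from scipy.stats import chi2

# Equiprobable threshold: the median of the chi-squared distribution with one
# degree of freedom (approximately 0.455, matching the value quoted above).
tau = chi2.ppf(0.5, df=1)

def sign_invariant_mse(x_hat, x_star):
    """Global-sign-invariant MSE between the estimate and the ground truth."""
    num = min(np.linalg.norm(alpha * x_hat - x_star) ** 2 for alpha in (-1.0, 1.0))
    return num / np.linalg.norm(x_star) ** 2

def reconstruction_snr_db(x_hat, x_star):
    """Reconstruction SNR in dB, i.e., the reciprocal of the MSE on a log scale."""
    return -10.0 * np.log10(sign_invariant_mse(x_hat, x_star))

def consistency(x_hat, A, y, tau):
    """Fraction of binary measurements explained by the reconstruction."""
    return np.mean(y * ((A @ x_hat) ** 2 - tau) > 0)
```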

Fig. 1. (Color online) Performance assessment of the BPR algorithm on noise-free measurements: (a) reconstruction SNR and (b) consistency, for different values of m; a comparison of (c) SNR and (d) consistency vis-à-vis the state-of-the-art algorithms (BPR, PhaseLift, TWF, AltMinPR) for m = 20n. [Plots omitted; axes show SNR (dB) and consistency versus iterations.]

Fig. 2. (Color online) Performance of BPR in noise: (a) reconstruction SNR, (b) consistency, and (c) MSE versus the CRB. [Plots omitted; curves correspond to input SNRs of 20, 25, and 30 dB.]

To the best of our knowledge, this paper introduces the binary phase retrieval problem for the first time; hence, there is no prior art for making comparisons. One way to compare with techniques such as PhaseLift [24], [25], AltMinPR [30], and TWF [29] is to model the quantization noise as an additive perturbation on the measurements. This is not entirely reasonable, but under the given circumstances, it is probably the only way to make a comparison. Further, such a comparison calls for an appropriate encoding of $y_i$ for the competing techniques: the $\pm 1$ encoding would be meaningless for them, because their cost functions involve a quadratic that measures the distance between $|\mathbf{a}_i^\top \hat{\mathbf{x}}|^2$ and $y_i$, and no $\hat{\mathbf{x}}$, not even $\mathbf{x}^*$, would be a suitable candidate. This is not an issue with BPR, since its cost relies on consistency. Hence, in order to be fair to the competing techniques, we replace the $-1$ and $+1$ symbols with the centroids of the intervals $[0, \tau]$ and $[\tau, \infty)$, respectively, computed with respect to the $\chi_1^2$ density. These turn out to be 0.1427 and 1.8573, respectively. A comparison is shown in Figures 1(c) and (d) for the same experimental setup considered in Figures 1(a) and (b). The competing techniques converge relatively fast and actually do a reasonable job given the very coarse quantization. The BPR algorithm, on the other hand, takes more iterations to ensure consistency, but ultimately results in an estimate that has a much higher accuracy (by about 5 dB in this case) and superior consistency with the acquired measurements.
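These centroid values are easy to verify numerically, for instance via the standard-normal representation of a $\chi_1^2$ variable; the identity used below, $\int_{-c}^{c} u^2\varphi(u)\,du = (2\Phi(c) - 1) - 2c\varphi(c)$, follows from integration by parts, and the variable names are ours.

```python
import numpy as np
from scipy import stats

c = stats.norm.ppf(0.75)                      # sqrt(tau), approximately 0.6745
tau = c ** 2                                  # equiprobable threshold, approximately 0.455

# For Z ~ chi-squared(1), Z = U^2 with U ~ N(0, 1).
mass_low = 2.0 * stats.norm.cdf(c) - 1.0      # P(Z <= tau) = 0.5 by construction
e_low = mass_low - 2.0 * c * stats.norm.pdf(c)       # E[Z; Z <= tau]
lower_centroid = e_low / mass_low                    # approximately 0.143
upper_centroid = (1.0 - e_low) / (1.0 - mass_low)    # approximately 1.857, since E[Z] = 1
```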

C. Signal Reconstruction in the Presence of Noise

In the presence of additive white noise $\xi_i$ prior to quantization, the binary measurements are given by

$$y_i = \mathrm{sgn}\left(\left(\mathbf{a}_i^\top \mathbf{x}^*\right)^2 + \xi_i - \tau\right), \quad i = 1:m, \qquad (4)$$

where the noise samples $\{\xi_i\}_{i=1}^{m}$ are independent and follow the $\mathcal{N}(0, \sigma_\xi^2)$ distribution. The input SNR is defined as $\mathrm{SNR}_{\text{in}} = \frac{1}{m\sigma_\xi^2}\sum_{i=1}^{m}\left(\mathbf{a}_i^\top \mathbf{x}^*\right)^4$. The experimental parameters are kept the same as in Section III-B, with $m = 20n$. The results are shown in Figure 2. From Figure 2(a), we observe that the reconstruction SNR steadily improves with increasing input SNR. A comparison with Figure 1(a) reveals that the reconstruction SNR corresponding to $\mathrm{SNR}_{\text{in}} = 30$ dB is nearly the same as that obtained with clean measurements. The consistency is also high (cf. Figure 2(b)). This result is indicative of the inherent noise robustness conferred by binary quantization. The consistency drops when the input SNR reduces; for example, when the input SNR dropped to 20 dB, the consistency measure decreased by about 10%.
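A sketch of this noisy acquisition model, with the noise variance set from a prescribed input SNR according to the definition above, is given below; the helper name is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_binary_measurements(A, x_true, tau, snr_in_db):
    """Binary encoding of quadratic measurements corrupted by additive white
    Gaussian noise before quantization (Eq. (4)); illustrative helper."""
    clean = (A @ x_true) ** 2                              # |a_i^T x*|^2
    # SNR_in = (1 / (m * sigma^2)) * sum_i clean_i^2, solved for the noise variance
    sigma2 = np.mean(clean ** 2) / (10.0 ** (snr_in_db / 10.0))
    xi = np.sqrt(sigma2) * rng.standard_normal(A.shape[0])
    return np.where(clean + xi - tau > 0, 1.0, -1.0)
```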

D. Noise Robustness: MSE vis-à-vis the CRB

The theoretical benchmark against which the performance of the BPR algorithm can be compared is the CRB, which is derived in Appendix A for the problem at hand. For illustration, we consider the ground-truth signal $\mathbf{x}^*$ to be a sum of two sinusoids, with the $\ell$-th entry given by

$$x_\ell^* = \kappa\left(1.5\sin\left(\frac{4\pi\ell}{n}\right) + 2.5\cos\left(\frac{14\pi\ell}{n}\right)\right), \quad \ell = 0:n-1,$$

where $n = 64$ and the normalizing constant $\kappa$ ensures that $\|\mathbf{x}^*\|_2 = 1$. The entries of the measurement matrix $A$, which contains $\{\mathbf{a}_i\}_{i=1}^{m}$ on its rows, follow the $\mathcal{N}(0, 1)$ distribution. The reconstruction is performed using the BPR algorithm, and the global-sign-invariant MSE (defined in Section III-A) corresponding to each SNR is averaged over 20 independent noise realizations for a fixed $A$. Since $A$ is random, we perform one more level of averaging of the MSEs: we generate 20 different measurement matrices and average the MSEs over them. The results are shown in Figure 2(c) as a function of the input SNR. We observe that BPR attains reconstruction MSEs within 2 to 3 dB of the CRB at all input SNRs. The error bars reflect the standard deviation over $A$. In this case, the error bars are limited to within 1 dB of the average, which shows that the variability in performance with respect to $A$ is small.
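For reference, this test signal can be constructed and normalized as follows; the variable names are ours.

```python
import numpy as np

n = 64
ell = np.arange(n)
x_star = 1.5 * np.sin(4 * np.pi * ell / n) + 2.5 * np.cos(14 * np.pi * ell / n)
x_star /= np.linalg.norm(x_star)   # the constant kappa normalizes the signal to unit l2 norm
```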

E. An Example of Image Reconstruction

Consider the Peppers image of size 256 × 256 (cf. Figure 3(a)). The image is divided into non-overlapping patches of size 8 × 8, leading to a total of 1024 patches. The effective dimension of the image is $n = 256^2$, and the total number of measurements acquired is $m$.


The measurement matrix $A$ has entries following the $\mathcal{N}(0, 1)$ distribution and is of size $\frac{m}{1024} \times 64$ per patch. We analyze the reconstruction performance ($N_{\text{iter}} = 75$) as a function of the oversampling factor $\frac{m}{n}$. The reconstruction is performed patch-wise. The optimal step-size is determined according to (3), where the search is carried out over the range $[0, 5.5 \times 10^{-3}]$ with a precision of $10^{-5}$. The image reconstruction quality is quantified using the structural similarity index (SSIM) [47] and the peak SNR, defined as $\mathrm{PSNR} = 20\log_{10}\frac{255\sqrt{n}}{\|I - \hat{I}\|_F}$ dB, where $I$ is the image, $\hat{I}$ is the reconstruction, and $\|\cdot\|_F$ denotes the Frobenius norm. The PSNR and SSIM measures shown in Figure 3(d) increase with $\frac{m}{n}$ and indicate a good quality of reconstruction. An example reconstruction for $\frac{m}{n} = 20$ is shown in Figure 3(b), and the reconstruction error is shown in Figure 3(c). The small error magnitude shows that the BPR algorithm can retrieve the phase accurately even from coarse magnitude measurements.
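The PSNR definition above translates directly into code; the function name is ours.

```python
import numpy as np

def psnr_db(img, img_hat):
    """Peak SNR (in dB) for an image with peak value 255, using the Frobenius norm."""
    n = img.size
    return 20.0 * np.log10(255.0 * np.sqrt(n) / np.linalg.norm(img - img_hat))
```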

Fig. 3. (Color online) An example of image phase retrieval: (a) ground truth, (b) BPR reconstruction, (c) difference image, and (d) PSNR and SSIM versus m/n. The reconstruction shown in (b) has PSNR = 29.60 dB and SSIM = 0.64. [Images and plots omitted.]

IV. CONCLUSIONS

We have developed the BPR algorithm for phase retrieval from oversampled binary measurements and demonstrated successful reconstruction both in the presence and absence of noise. The optimization problem is formulated by amalgamating the principle of lifting with that of consistent recovery, enforced by means of a one-sided quadratic loss function. The BPR algorithm is iterative and based on APGD. The proposed algorithm achieves a reconstruction SNR of nearly 25 dB for 20× oversampling. For images, the PSNR is as high as 30 dB and the SSIM is about 0.75. We have also considered the effect of noise and derived the CRB. The BPR algorithm is robust to noise, and its MSE lies within a 2 to 3 dB margin of the CRB, although it was not particularly designed to handle noise. In the presence of noise, enforcing consistency as in (1) is not entirely reasonable, because there may not exist an $\mathbf{x}$ that satisfies the requirement. Mildly relaxing the criterion depending on the noise level might enhance noise robustness; this aspect requires further investigation.

APPENDIX A: CRAMÉR-RAO BOUND

We derive the CRB for the binary measurements in (4), corresponding to a fixed sensing matrix $A$. Other works in which CRBs were derived for PR are in the context of Gaussian noise corrupting the quadratic measurements [48], non-additive Gaussian noise prior to computing the quadratic measurement [49], uniformly distributed additive noise arising out of high-rate quantization [50], frame-based measurements [51], and Fourier measurements [52]. In contrast to these works, our focus is on the extreme case of binary quantization, where none of the previously derived bounds hold. The measurement in (4), expressed equivalently as

$$y_i = \begin{cases} +1, & \text{if } \xi_i \geq \tau - \left(\mathbf{a}_i^\top \mathbf{x}^*\right)^2, \\ -1, & \text{otherwise,} \end{cases}$$

has the probability mass function

$$p(y_i) = \left(1 - \Phi\left(\tau - \left(\mathbf{a}_i^\top \mathbf{x}^*\right)^2\right)\right)^{\bar{y}_i} \left(\Phi\left(\tau - \left(\mathbf{a}_i^\top \mathbf{x}^*\right)^2\right)\right)^{1 - \bar{y}_i},$$

where $\bar{y}_i = \frac{1 + y_i}{2}$, $y_i \in \{-1, +1\}$, and $\Phi$ is the c.d.f. of the noise. The log-likelihood function corresponding to the measurements $\mathbf{y} = [y_1, y_2, \cdots, y_m]$ is given by

$$p_{\log}(\mathbf{x}^*) = \sum_{i=1}^{m} \left[\bar{y}_i \log\left(1 - \Phi\left(\tau - \left(\mathbf{a}_i^\top \mathbf{x}^*\right)^2\right)\right) + (1 - \bar{y}_i)\log\Phi\left(\tau - \left(\mathbf{a}_i^\top \mathbf{x}^*\right)^2\right)\right]. \qquad (5)$$

For notational brevity, let $u_i = \mathbf{a}_i^\top \mathbf{x}^*$. Differentiating both sides of (5) with respect to $\mathbf{x}^*$ gives

$$\nabla p_{\log}(\mathbf{x}^*) = \sum_{i=1}^{m} \left[\bar{y}_i \frac{2u_i\Phi'\left(\tau - u_i^2\right)}{1 - \Phi\left(\tau - u_i^2\right)}\mathbf{a}_i - (1 - \bar{y}_i)\frac{2u_i\Phi'\left(\tau - u_i^2\right)}{\Phi\left(\tau - u_i^2\right)}\mathbf{a}_i\right].$$

One can verify that the regularity condition $\mathbb{E}_{\mathbf{y}}\left[\nabla p_{\log}(\mathbf{x}^*)\right] = 0$, where $\mathbb{E}$ denotes the expectation operator, is satisfied; this guarantees the existence of the CRB. Differentiating once more with respect to $\mathbf{x}^*$, we get

$$\nabla^2 p_{\log}(\mathbf{x}^*) = \sum_{i=1}^{m} \left[\bar{y}_i \frac{(1 - \varphi_i)\left(2\varphi_i' - 4u_i^2\varphi_i''\right) - 4u_i^2\varphi_i'^2}{(1 - \varphi_i)^2}A_i - (1 - \bar{y}_i)\frac{\varphi_i\left(2\varphi_i' - 4u_i^2\varphi_i''\right) + 4u_i^2\varphi_i'^2}{\varphi_i^2}A_i\right],$$

where $A_i = \mathbf{a}_i\mathbf{a}_i^\top$, $\varphi_i = \Phi\left(\tau - u_i^2\right)$, $\varphi_i' = \Phi'\left(\tau - u_i^2\right)$, and $\varphi_i'' = \Phi''\left(\tau - u_i^2\right)$. The Fisher information matrix is given by

$$I\left(\mathbf{x}^*\right) = -\mathbb{E}_{\mathbf{y}}\left[\nabla^2 p_{\log}(\mathbf{x}^*)\right] = \sum_{i=1}^{m} \frac{4u_i^2\varphi_i'^2}{\varphi_i(1 - \varphi_i)}A_i. \qquad (6)$$

The CRB for an unbiased estimate $\hat{\mathbf{x}}$ is given by $\mathrm{Cov}(\hat{\mathbf{x}}) \succeq I^{-1}\left(\mathbf{x}^*\right)$. In the specific instance where the noise samples are i.i.d. Gaussian, as considered in Sections III-C and III-D, $\Phi$ and $\Phi'$ are the Gaussian c.d.f. and p.d.f., respectively.
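For the i.i.d. Gaussian case, the bound in (6) can be evaluated numerically, for example as follows. This is a sketch: the function name and the use of the trace of the inverse Fisher matrix as a scalar MSE benchmark are ours, and the Fisher matrix is assumed to be invertible.

```python
import numpy as np
from scipy import stats

def crb_trace(A, x_star, tau, sigma):
    """Trace of the inverse Fisher information in Eq. (6) for i.i.d. N(0, sigma^2) noise."""
    u = A @ x_star                                     # u_i = a_i^T x*
    arg = (tau - u ** 2) / sigma
    Phi = stats.norm.cdf(arg)                          # noise c.d.f. evaluated at tau - u_i^2
    phi = stats.norm.pdf(arg) / sigma                  # corresponding p.d.f. (chain rule in sigma)
    w = 4.0 * u ** 2 * phi ** 2 / (Phi * (1.0 - Phi))  # per-measurement weight in Eq. (6)
    fisher = (A * w[:, None]).T @ A                    # sum_i w_i a_i a_i^T
    return np.trace(np.linalg.inv(fisher))             # lower bound on the total MSE
```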


REFERENCES

[1] R. P. Millane, "Phase retrieval in crystallography and optics," J. Opt. Soc. Amer. A, vol. 7, no. 3, pp. 394–411, Mar. 1990.
[2] A. Szoke, "Holographic microscopy with a complicated reference," J. Imag. Sci. Technol., vol. 41, pp. 332–341, 1997.
[3] A. J. J. Drenth, A. Huiser, and H. Ferwerda, "The problem of phase retrieval in light and electron microscopy of strong objects," Optica Acta, vol. 22, pp. 615–628, 1975.
[4] F. Zhang, B. Chen, G. R. Morrison, J. Vila-Comamala, M. Guizar-Sicairos, and I. K. Robinson, "Phase retrieval by coherent modulation imaging," Nature Communications, Article no. 13367, Nov. 2016.
[5] J. R. Fienup, "Phase retrieval algorithms: A comparison," Appl. Opt., vol. 21, pp. 2758–2769, 1982.
[6] J. R. Fienup, "Phase retrieval algorithms: A personal tour [invited]," Appl. Opt., vol. 52, pp. 45–56, 2013.
[7] R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik, vol. 35, pp. 237–246, 1972.
[8] H. H. Bauschke, P. L. Combettes, and D. Luke, "Phase retrieval, error reduction algorithm, and Fienup variants: A view from convex optimization," J. Opt. Soc. Amer. A, vol. 19, pp. 1334–1345, 2002.
[9] B. Yegnanarayana and A. Dhayalan, "Noniterative techniques for minimum phase signal reconstruction from phase or magnitude," in Proc. IEEE Intl. Conf. Acoust., Speech, and Signal Process., vol. 8, pp. 639–642, Apr. 1983.
[10] C. S. Seelamantula, N. Pavillon, C. Depeursinge, and M. Unser, "Exact complex-wave reconstruction in digital holography," J. Opt. Soc. Amer. A, vol. 28, no. 6, pp. 983–992, Jun. 2011.
[11] B. A. Shenoy, S. Mukherjee, and C. S. Seelamantula, "Phase retrieval for a class of 2-D signals characterized by first-order difference equations," in Proc. IEEE Intl. Conf. on Image Process., pp. 325–329, 2013.
[12] B. A. Shenoy and C. S. Seelamantula, "Exact phase retrieval for a class of 2-D parametric signals," IEEE Trans. Signal Process., vol. 63, no. 1, pp. 90–103, 2015.
[13] B. A. Shenoy, S. Mulleti, and C. S. Seelamantula, "Exact phase retrieval in principal shift-invariant spaces," IEEE Trans. Signal Process., vol. 64, no. 2, pp. 406–416, 2016.
[14] Y. M. Yu and M. Vetterli, "Sparse spectral factorization: Unicity and reconstruction algorithms," in Proc. IEEE Intl. Conf. Acoust., Speech, Signal Process., pp. 5976–5979, 2011.
[15] M. L. Moravec, J. K. Romberg, and R. G. Baraniuk, "Compressive phase retrieval," in Proc. Wavelets XII SPIE Int. Symp. Opt. Sci. Tech., Aug. 2007.
[16] Y. Shechtman, A. Beck, and Y. C. Eldar, "GESPAR: Efficient phase retrieval of sparse signals," IEEE Trans. Signal Process., vol. 62, no. 4, pp. 928–938, Feb. 2014.
[17] A. M. Tillmann, Y. C. Eldar, and J. Mairal, "DOLPHIn–Dictionary learning for phase retrieval," IEEE Trans. Signal Process., vol. 64, no. 24, Dec. 2016.
[18] P. Schniter and S. Rangan, "Compressive phase retrieval via generalized approximate message passing," IEEE Trans. Signal Process., vol. 63, no. 4, pp. 1043–1055, Feb. 2015.
[19] W. Peng and H. Wang, "Binary sparse phase retrieval via simulated annealing," Math. Probl. in Engg., Article ID 8257612, May 2016.
[20] S. Mukherjee and C. S. Seelamantula, "An iterative algorithm for phase retrieval with sparsity constraints: Application to frequency domain optical coherence tomography," in Proc. IEEE Intl. Conf. Acoust., Speech and Signal Process., pp. 553–556, Mar. 2012.
[21] S. Mukherjee and C. S. Seelamantula, "Fienup algorithm with sparsity constraints: Application to frequency-domain optical-coherence tomography," IEEE Trans. Signal Process., vol. 62, no. 18, pp. 4659–4672, Sep. 2014.
[22] N. Vaswani, S. Nayer, and Y. C. Eldar, "Low rank phase retrieval," IEEE Trans. Signal Process., DOI: 10.1109/TSP.2017.2684758, 2017.
[23] F. Fogel, I. Waldspurger, and A. d'Aspremont, "Phase retrieval for imaging problems," Mathematical Programming Computation, vol. 8, issue 3, pp. 311–335, Sep. 2016.
[24] E. J. Candès, T. Strohmer, and V. Voroninski, "PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming," Communications on Pure and Applied Math., vol. 66, issue 8, pp. 1241–1274, Aug. 2013.
[25] E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski, "Phase retrieval via matrix completion," SIAM Journal on Imaging Sciences, vol. 6, issue 1, pp. 199–224, Feb. 2013.
[26] H. Ohlsson, A. Y. Yang, R. Dong, and S. S. Sastry, "Compressive phase retrieval from squared output measurements via semidefinite programming," in Proc. 16th IFAC Symposium on System Identification, vol. 45, issue 16, pp. 89–94, Jul. 2012.
[27] Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev, "Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing," Opt. Exp., vol. 19, no. 16, pp. 14807–14822, 2011.
[28] E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval via Wirtinger flow: Theory and algorithms," IEEE Trans. Info. Theory, vol. 61, no. 4, pp. 1985–2007, Apr. 2015.
[29] Y. Chen and E. J. Candès, "Solving random quadratic systems of equations is nearly as easy as solving linear systems," in Proc. Advances in Neural Info. Process. Systems 28, 2015.
[30] P. Netrapalli, P. Jain, and S. Sanghavi, "Phase retrieval using alternating minimization," IEEE Trans. Signal Process., vol. 63, no. 18, pp. 4814–4826, Sep. 2015.
[31] I. Waldspurger, A. d'Aspremont, and S. Mallat, "Phase recovery, MaxCut and complex semidefinite programming," Math. Program., vol. 149, no. 1, pp. 47–81, 2015.
[32] A. Zymnis, S. Boyd, and E. J. Candès, "Compressed sensing with quantized measurements," IEEE Signal Process. Lett., vol. 17, no. 2, pp. 149–152, Feb. 2010.
[33] P. T. Boufounos and R. G. Baraniuk, "1-bit compressive sensing," in Proc. Conf. on Info. Science and Systems, Princeton, NJ, Mar. 2008.
[34] A. Gupta, R. Nowak, and B. Recht, "Sample complexity for 1-bit compressed sensing and sparse classification," in Proc. Int. Symposium on Info. Theory (ISIT), 2010.
[35] Y. Plan and R. Vershynin, "One-bit compressed sensing by linear programming," Communications on Pure and Applied Math., vol. 66, issue 8, pp. 1275–1297, Aug. 2013.
[36] P. T. Boufounos, "Greedy sparse signal reconstruction from sign measurements," in Proc. Asilomar Conf. on Signals, Systems, and Computation (SSC), Asilomar, CA, Nov. 2009.
[37] M. Yan, Y. Yang, and S. Osher, "Robust 1-bit compressive sensing using adaptive outlier pursuit," IEEE Trans. Signal Process., vol. 60, no. 7, pp. 3868–3875, Jul. 2012.
[38] L. Jacques, J. N. Laska, P. T. Boufounos, and R. G. Baraniuk, "Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors," IEEE Trans. Info. Theory, vol. 59, no. 4, pp. 2082–2102, Apr. 2013.
[39] Y. Plan and R. Vershynin, "Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach," IEEE Trans. Info. Theory, vol. 59, no. 1, pp. 482–494, Jan. 2013.
[40] A. Bourquard and M. Unser, "Binary compressed imaging," IEEE Trans. Image Process., vol. 22, no. 3, pp. 1042–1055, Mar. 2013.
[41] R. H. Walden, "Analog-to-digital converter survey and analysis," IEEE J. of Sel. Areas Comm., vol. 17, no. 4, pp. 539–550, Apr. 1999.
[42] B. Le, T. W. Rondeau, J. H. Reed, and C. W. Bostian, "Analog-to-digital converters," IEEE Signal Process. Mag., vol. 22, no. 6, pp. 69–77, 2005.
[43] B. Baker, "How delta-sigma ADCs work, Part 1," Texas Instruments Inc., url: http://www.ti.com/lit/an/slyt423a/slyt423a.pdf, 2011.
[44] S. Park, "Motorola digital signal processors: Principles of sigma-delta modulation for analog-to-digital converters," url: http://www.numerix-dsp.com/appsnotes/APR8-sigma-delta.pdf, Mar. 1997.
[45] Y. E. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer Academic Publishers, London, 2004.
[46] J. Geng, X. Yang, X. Wang, and L. Wang, "An accelerated iterative hard-thresholding method for matrix completion," Intl. J. Signal Process., Image Process., and Patt. Recog., vol. 8, no. 7, pp. 141–150, 2015.
[47] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[48] R. Balan, "Reconstruction of signals from magnitudes of redundant representations: The complex case," Foundations of Computational Math., pp. 1–45, 2013.
[49] R. Balan, "The Fisher information matrix and the CRLB in a non-AWGN model for the phase retrieval problem," in Proc. Intl. Conf. Sampl. Theory and Applications (SampTA), pp. 178–182, 2015.
[50] C. Qian, N. D. Sidiropoulos, K. Huang, L. Huang, and H. C. So, "Phase retrieval using feasible point pursuit: Algorithms and Cramér-Rao bound," IEEE Trans. Signal Process., vol. 64, no. 20, pp. 5282–5296, Oct. 2016.
[51] A. S. Bandeira, J. Cahill, D. G. Mixon, and A. A. Nelson, "Saving phase: Injectivity and stability for phase retrieval," Appl. and Comp. Harmonic Anal., vol. 37, no. 1, pp. 106–125, 2014.
[52] J. N. Cederquist and C. C. Wackerman, "Phase-retrieval error: A lower bound," J. Opt. Soc. Amer. A, vol. 4, no. 9, pp. 1788–1792, 1987.
