Limited-Memory Trust-Region Methods for Sparse Relaxation

Lasith Adhikari^a, Omar DeGuchy^b, Jennifer B. Erway^c, Shelby Lockhart^c, and Roummel F. Marcia^b

^a Department of Medicine, University of Florida, Gainesville, FL 32610 USA
^b Applied Mathematics, University of California, Merced, Merced, CA 95343 USA
^c Department of Mathematics, Wake Forest University, Winston-Salem, NC 27109 USA

ABSTRACT

In this paper, we solve the $\ell_2$-$\ell_1$ sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while improving computational time.

Keywords: Large-scale optimization, trust-region methods, limited-memory quasi-Newton methods, Broyden-Fletcher-Goldfarb-Shanno update

1. INTRODUCTION

This paper concerns solving the sparse recovery problem

$$
\underset{f \in \mathbb{R}^{\tilde{n}}}{\text{minimize}} \;\; \frac{1}{2}\|Af - b\|_2^2 + \tau \|f\|_1, \tag{1}
$$

where $A \in \mathbb{R}^{\tilde{m} \times \tilde{n}}$, $f \in \mathbb{R}^{\tilde{n}}$, $b \in \mathbb{R}^{\tilde{m}}$, $\tilde{m} \ll \tilde{n}$, and $\tau > 0$ is a constant regularization parameter. By letting $f = u - v$, where $u, v \ge 0$, we can write (1) as the constrained but differentiable optimization problem

$$
\begin{array}{ll}
\underset{u,\, v \,\in\, \mathbb{R}^{\tilde{n}}}{\text{minimize}} & \dfrac{1}{2}\|A(u - v) - b\|_2^2 + \tau\, \mathbf{1}_{\tilde{n}}^T (u + v) \\[4pt]
\text{subject to} & u, v \ge 0,
\end{array}
$$

where $\mathbf{1}_{\tilde{n}}$ is the $\tilde{n}$-vector of ones [1]. We transform the constrained differentiable problem into an unconstrained optimization problem using the change of variables $u_i = \log(1 + e^{\tilde{u}_i})$ and $v_i = \log(1 + e^{\tilde{v}_i})$, where $\tilde{u}_i, \tilde{v}_i \in \mathbb{R}$ for $1 \le i \le \tilde{n}$. With these definitions, $u$ and $v$ are guaranteed to be nonnegative. Thus, (1) is equivalent to the following minimization problem:

$$
\min_{\tilde{u}, \tilde{v} \in \mathbb{R}^{\tilde{n}}} \Phi(\tilde{u}, \tilde{v}) \triangleq \frac{1}{2} \sum_{i=1}^{\tilde{m}} \left( \sum_{j=1}^{\tilde{n}} A_{i,j} \log\!\left( \frac{1 + e^{\tilde{u}_j}}{1 + e^{\tilde{v}_j}} \right) - b_i \right)^{\!2} + \tau \sum_{j=1}^{\tilde{n}} \log\!\left( (1 + e^{\tilde{u}_j})(1 + e^{\tilde{v}_j}) \right). \tag{2}
$$

Notice that the gradient of $\Phi(\tilde{u}, \tilde{v})$ can be written as follows. Letting $\tilde{w} \in \mathbb{R}^{\tilde{n}}$ with $\tilde{w}_i = \log(1 + e^{\tilde{u}_i}) - \log(1 + e^{\tilde{v}_i})$, then

$$
\nabla_{\tilde{u}_i} \Phi(\tilde{u}, \tilde{v}) = \left[ \big( A^T (A\tilde{w} - b) \big)_i + \tau \right] \frac{e^{\tilde{u}_i}}{1 + e^{\tilde{u}_i}}, \qquad
\nabla_{\tilde{v}_i} \Phi(\tilde{u}, \tilde{v}) = \left[ \big( A^T (-A\tilde{w} + b) \big)_i + \tau \right] \frac{e^{\tilde{v}_i}}{1 + e^{\tilde{v}_i}},
$$

and

$$
\nabla \Phi(x) = \begin{bmatrix} \nabla_{\tilde{u}} \Phi(\tilde{u}, \tilde{v}) \\ \nabla_{\tilde{v}} \Phi(\tilde{u}, \tilde{v}) \end{bmatrix}.
$$
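To make the transformation concrete, the following sketch evaluates $\Phi$ and $\nabla\Phi$ from (2) in NumPy. This is our illustration rather than the authors' code; the function name and interface are our own choices.

```python
import numpy as np

def phi_and_grad(x, A, b, tau):
    """Evaluate the objective Phi in (2) and its gradient at x = [u_tilde; v_tilde]."""
    n = x.size // 2
    u_t, v_t = x[:n], x[n:]
    # softplus change of variables: u = log(1 + e^{u_tilde}) >= 0 (logaddexp is stable)
    u = np.logaddexp(0.0, u_t)
    v = np.logaddexp(0.0, v_t)
    w = u - v                          # recovered signal f = u - v
    r = A @ w - b                      # residual A*w - b
    val = 0.5 * (r @ r) + tau * (u.sum() + v.sum())
    # derivative of softplus is the sigmoid e^t/(1 + e^t) = exp(t - softplus(t))
    su = np.exp(u_t - u)
    sv = np.exp(v_t - v)
    g = np.concatenate([(A.T @ r + tau) * su, (-(A.T @ r) + tau) * sv])
    return val, g
```

Because the sigmoid factors are strictly positive, the gradient vanishes only where the bracketed terms vanish, which mirrors the optimality conditions of the constrained formulation.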

We propose solving (2) using a limited-memory quasi-Newton trust-region optimization approach, which we describe in the next section.

Related work. There are various methods for solving (1) (see, e.g., Eldar and Kutyniok [2] and the references therein), many of which use a gradient descent-type approach. Our proposed approach is based on quasi-Newton methods, which have previously been shown to be effective for sparse recovery problems [3-5]. For example, Becker and Fadili [6] use a zero-memory rank-one quasi-Newton approach for proximal splitting. Trust-region methods have also been implemented for sparse reconstruction [7, 8]. Our approach is novel in the transformation of the sparse recovery problem to a differentiable unconstrained minimization problem and in the use of eigenvalues for efficiently solving the trust-region subproblem.

Notation. Throughout this paper, we denote the identity matrix by $I$, with its dimension dependent on the context.

2. TRUST-REGION METHODS

In this section, we outline the use of a trust-region method to solve (2). We begin by combining the unknowns $\tilde{u}$ and $\tilde{v}$ into one vector of unknowns $x = [\tilde{u}^T \; \tilde{v}^T]^T \in \mathbb{R}^n$, where $n = 2\tilde{n}$. (With this substitution, $\Phi$ can be considered as a function of $x$.) Trust-region methods to minimize $\Phi(x)$ define a sequence of iterates $\{x_k\}$ that are updated as follows: $x_{k+1} = x_k + p_k$, where $p_k$ is the search direction. At each iteration, a new search direction $p_k$ is computed by solving the following quadratic subproblem with a two-norm constraint:

$$
p_k = \arg\min_{p \in \mathbb{R}^n} q_k(p) \triangleq g_k^T p + \frac{1}{2} p^T B_k p \quad \text{subject to} \quad \|p\|_2 \le \delta_k, \tag{3}
$$

where $g_k \triangleq \nabla \Phi(x_k)$, $B_k$ is an approximation to $\nabla^2 \Phi(x_k)$, and $\delta_k$ is a given positive constant. In large-scale optimization, solving (3) represents the bulk of the computational effort in trust-region methods.
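For orientation, a minimal sketch of the surrounding trust-region iteration follows; the paper's concrete instance is Algorithm 2 in Section 4. The acceptance threshold, the radius-update constants, and the `solve_subproblem` interface here are illustrative assumptions, not the authors' exact rule.

```python
import numpy as np

def trust_region(phi, grad, x0, solve_subproblem, delta=1.0,
                 tau1=1e-4, eps=1e-6, max_iter=1000):
    """Generic trust-region loop: solve (3), test the actual-vs-predicted
    reduction ratio, and update the iterate and the radius."""
    x = x0.copy()
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break
        p, q_p = solve_subproblem(g, delta)      # step p_k and model value q_k(p_k)
        rho = (phi(x + p) - phi(x)) / q_p        # both are reductions, so rho > 0 is good
        if rho >= tau1:                          # accept the step
            x = x + p
            g = grad(x)
        # illustrative radius update: expand on very successful steps, else shrink
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x
```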

Methods that solve the trust-region subproblem to high accuracy are often based on the optimality conditions for a global solution to the trust-region subproblem given in the following theorem [9-11]:

Theorem 2.1. Let $\delta$ be a positive constant. A vector $p^*$ is a global solution of the trust-region subproblem (3) if and only if $\|p^*\|_2 \le \delta$ and there exists a unique $\sigma^* \ge 0$ such that $B + \sigma^* I$ is positive semidefinite and

$$
(B + \sigma^* I)\, p^* = -g \quad \text{and} \quad \sigma^* \left( \delta - \|p^*\|_2 \right) = 0. \tag{4}
$$

Moreover, if $B + \sigma^* I$ is positive definite, then the global minimizer is unique.

3. QUASI-NEWTON METHODS

In this section we show how to build an approximation $B_k$ of $\nabla^2 \Phi(x)$ using limited-memory quasi-Newton matrices. Given the continuously differentiable function $\Phi$ and a sequence of iterates $\{x_k\}$, traditional quasi-Newton matrices are generated from a sequence of update pairs $\{(s_k, y_k)\}$, where $s_k \triangleq x_{k+1} - x_k$ and $y_k \triangleq \nabla \Phi(x_{k+1}) - \nabla \Phi(x_k)$. In particular, given an initial matrix $B_0$, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update [12, 13] generates a sequence of matrices using the following recursion:

$$
B_{k+1} \triangleq B_k - \frac{1}{s_k^T B_k s_k} B_k s_k s_k^T B_k + \frac{1}{y_k^T s_k} y_k y_k^T, \tag{5}
$$

provided $y_k^T s_k \ne 0$. In practice, $B_0$ is often taken to be a nonzero constant multiple of the identity matrix, i.e., $B_0 = \gamma I$ for some $\gamma > 0$. Limited-memory BFGS (L-BFGS) methods store and use only the $m$ most-recently computed pairs $\{(s_k, y_k)\}$, where $m \ll n$. Often $m$ may be very small (for example, Byrd et al. [14] suggest $m \in [3, 7]$).

The BFGS update is the most widely-used rank-two update formula that (i) satisfies the secant condition $B_{k+1} s_k = y_k$, (ii) has hereditary symmetry, and (iii) generates a sequence of positive-definite matrices $\{B_k\}$, provided that $y_i^T s_i > 0$ for $i = 0, \ldots, k$. The L-BFGS matrix $B_{k+1}$ in (5) can be defined recursively as follows:

$$
B_{k+1} = B_0 + \sum_{i=0}^{k} \left( -\frac{1}{s_i^T B_i s_i} B_i s_i s_i^T B_i + \frac{1}{y_i^T s_i} y_i y_i^T \right).
$$

Then $B_{k+1}$ is at most a rank-$2(k+1)$ perturbation to $B_0$, and thus $B_{k+1}$ can be written as

$$
B_{k+1} = B_0 + \Psi_k M_k \Psi_k^T
$$

for some $\Psi_k \in \mathbb{R}^{n \times 2(k+1)}$ and $M_k \in \mathbb{R}^{2(k+1) \times 2(k+1)}$. Byrd et al. [14] showed that $\Psi_k$ and $M_k$ are given by

$$
\Psi_k = \begin{bmatrix} B_0 S_k & Y_k \end{bmatrix}
\quad \text{and} \quad
M_k = -\begin{bmatrix} S_k^T B_0 S_k & L_k \\ L_k^T & -D_k \end{bmatrix}^{-1},
$$

where $S_k \triangleq [\, s_0 \; s_1 \; s_2 \; \cdots \; s_k \,] \in \mathbb{R}^{n \times (k+1)}$, $Y_k \triangleq [\, y_0 \; y_1 \; y_2 \; \cdots \; y_k \,] \in \mathbb{R}^{n \times (k+1)}$, and $L_k$ is the strictly lower triangular part and $D_k$ is the diagonal part of the matrix $S_k^T Y_k \in \mathbb{R}^{(k+1) \times (k+1)}$, i.e., $S_k^T Y_k = L_k + D_k + U_k$, where $U_k$ is a strictly upper triangular matrix.
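As a concrete illustration, a sketch of this compact representation follows, assuming $B_0 = \gamma I$ and the stored pairs as columns of `S` and `Y`; the helper name is ours.

```python
import numpy as np

def compact_lbfgs(S, Y, gamma):
    """Build Psi and M with B = gamma*I + Psi @ M @ Psi.T (Byrd et al. form)."""
    SY = S.T @ Y                      # (k+1) x (k+1)
    D = np.diag(np.diag(SY))          # diagonal part of S^T Y
    L = np.tril(SY, k=-1)             # strictly lower triangular part of S^T Y
    Psi = np.hstack([gamma * S, Y])   # [B_0 S, Y] with B_0 = gamma*I
    top = np.hstack([gamma * (S.T @ S), L])
    bot = np.hstack([L.T, -D])
    M = -np.linalg.inv(np.vstack([top, bot]))
    return Psi, M
```

For small $n$ one can sanity-check the pair against the recursion (5) by verifying that `gamma*np.eye(n) + Psi @ M @ Psi.T` reproduces $B_{k+1}$.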

4. SOLVING THE TRUST-REGION SUBPROBLEM

In this section, we show how to solve (3) efficiently. First, we transform (3) into an equivalent expression. For simplicity, we drop the subscript $k$. Let $\Psi = QR$ be the "thin" QR factorization of $\Psi$, where $Q \in \mathbb{R}^{n \times 2(k+1)}$ has orthonormal columns and $R \in \mathbb{R}^{2(k+1) \times 2(k+1)}$ is upper triangular. Then

$$
B = B_0 + \Psi M \Psi^T = \gamma I + Q R M R^T Q^T.
$$

Now let $V \hat{\Lambda} V^T = R M R^T$ be the eigendecomposition of $R M R^T \in \mathbb{R}^{2(k+1) \times 2(k+1)}$, where $V \in \mathbb{R}^{2(k+1) \times 2(k+1)}$ is orthogonal and $\hat{\Lambda} = \mathrm{diag}(\hat{\lambda}_1, \ldots, \hat{\lambda}_{2(k+1)})$ is diagonal. We assume that the eigenvalues $\hat{\lambda}_i$ are ordered in increasing value, i.e., $\hat{\lambda}_1 \le \hat{\lambda}_2 \le \cdots \le \hat{\lambda}_{2(k+1)}$. Since $Q$ has orthonormal columns and $V$ is orthogonal, $P_\parallel \triangleq QV \in \mathbb{R}^{n \times 2(k+1)}$ also has orthonormal columns. Let $P_\perp$ be a matrix whose columns form an orthonormal basis for the orthogonal complement of the column space of $P_\parallel$. Then $P \triangleq [\, P_\parallel \; P_\perp \,] \in \mathbb{R}^{n \times n}$ satisfies $P^T P = P P^T = I$. Thus, the spectral decomposition of $B$ is given by

$$
B = P \Lambda P^T, \quad \text{where} \quad
\Lambda = \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix}
= \begin{bmatrix} \hat{\Lambda} + \gamma I & 0 \\ 0 & \gamma I \end{bmatrix}, \tag{6}
$$

where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$, $\Lambda_1 = \mathrm{diag}(\lambda_1, \ldots, \lambda_{2(k+1)}) \in \mathbb{R}^{2(k+1) \times 2(k+1)}$, and $\Lambda_2 = \gamma I_{n-2(k+1)}$. Since the $\hat{\lambda}_i$'s are ordered, the eigenvalues in $\Lambda_1$ are also ordered, i.e., $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_{2(k+1)}$. The remaining eigenvalues, found on the diagonal of $\Lambda_2$, are equal to $\gamma$. Finally, since $B$ is positive definite, $\lambda_i > 0$ for all $i$.

Defining $v = P^T p$, the trust-region subproblem (3) can be written as

$$
v^* = \arg\min_{v \in \mathbb{R}^n} q_k(v) \triangleq \tilde{g}^T v + \frac{1}{2} v^T \Lambda v \quad \text{subject to} \quad \|v\|_2 \le \delta, \tag{7}
$$

where $\tilde{g} = P^T g$. From the optimality conditions in Theorem 2.1, the solution $v^*$ to (7) must satisfy the following equations:

$$
(\Lambda + \sigma^* I)\, v^* = -\tilde{g}, \quad (8) \qquad
\sigma^* \left( \|v^*\|_2 - \delta \right) = 0, \quad (9) \qquad
\sigma^* \ge 0, \quad (10) \qquad
\|v^*\|_2 \le \delta, \quad (11)
$$

for some scalar $\sigma^*$. Note that the usual requirement that $\sigma^* + \lambda_i \ge 0$ for all $i$ is not necessary here since $\lambda_i > 0$ for all $i$ (i.e., $B$ is positive definite). Note further that (9) implies that if $\sigma^* > 0$, the solution must lie on the boundary, i.e., $\|v^*\|_2 = \delta$. In this case, the optimal $\sigma^*$ can be obtained by solving the so-called secular equation

$$
\phi(\sigma) = \frac{1}{\|v(\sigma)\|_2} - \frac{1}{\delta} = 0, \tag{12}
$$

where $\|v(\sigma)\|_2 = \| -(\Lambda + \sigma I)^{-1} \tilde{g} \|_2$. Since $\lambda_i + \sigma > 0$ for any $\sigma \ge 0$, $v(\sigma)$ is well-defined. In particular, if we let

$$
\tilde{g} = P^T g = \begin{bmatrix} P_\parallel^T g \\ P_\perp^T g \end{bmatrix} = \begin{bmatrix} g_\parallel \\ g_\perp \end{bmatrix},
$$

then

$$
\|v(\sigma)\|_2^2 = \sum_{i=1}^{2(k+1)} \frac{(g_\parallel)_i^2}{(\lambda_i + \sigma)^2} + \frac{\|g_\perp\|_2^2}{(\gamma + \sigma)^2}. \tag{13}
$$

We note that $\phi(\sigma) \ge 0$ means $v(\sigma)$ is feasible, i.e., $\|v(\sigma)\|_2 \le \delta$. Specifically, the unconstrained minimizer $v(0) = -\Lambda^{-1} \tilde{g}$ is feasible if and only if $\phi(0) \ge 0$ (see Fig. 1(a)). If $v(0)$ is not feasible, then $\phi(0) < 0$ and there exists $\sigma^* > 0$ such that $v(\sigma^*) = -(\Lambda + \sigma^* I)^{-1} \tilde{g}$ with $\phi(\sigma^*) = 0$ (see Fig. 1(b)). Since $B$ is positive definite, the function $\phi(\sigma)$ is strictly increasing and concave for $\sigma \ge 0$, making it a good candidate for Newton's method. In fact, it can be shown that Newton's method will converge monotonically and quadratically to $\sigma^*$ with initial guess $\sigma^{(0)} = 0$ [11].


Figure 1. Plot of the secular function $\phi(\sigma)$ given in (12). (a) The case when $\phi(0) \ge 0$, which implies that the unconstrained minimizer of (7) is feasible. (b) When $\phi(0) < 0$, there exists $\sigma^* > 0$ such that $\phi(\sigma^*) = 0$, i.e., $v^* = -(\Lambda + \sigma^* I)^{-1} \tilde{g}$ is well-defined and is feasible.
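A sketch of this Newton iteration on (12), written only in terms of $g_\parallel$, $\|g_\perp\|_2$, the eigenvalues $\lambda_i$, and $\gamma$, might look as follows; the function name and tolerances are our own choices.

```python
import numpy as np

def newton_secular(g_par, g_perp_norm, lam, gamma, delta, tol=1e-12, max_iter=100):
    """Solve phi(sigma) = 1/||v(sigma)||_2 - 1/delta = 0 by Newton's method,
    using ||v(sigma)||^2 from (13); assumes phi(0) < 0 (boundary solution)."""
    sigma = 0.0                                        # guaranteed starting point
    for _ in range(max_iter):
        n2 = (np.sum(g_par**2 / (lam + sigma)**2)
              + g_perp_norm**2 / (gamma + sigma)**2)   # ||v(sigma)||^2 as in (13)
        phi = 1.0 / np.sqrt(n2) - 1.0 / delta
        if abs(phi) < tol:
            break
        # d/dsigma of ||v||^2, then phi'(sigma) = -0.5 * n2^{-3/2} * dn2 by the chain rule
        dn2 = -2.0 * (np.sum(g_par**2 / (lam + sigma)**3)
                      + g_perp_norm**2 / (gamma + sigma)**3)
        sigma -= phi / (-0.5 * dn2 * n2**-1.5)
    return sigma
```

Each iteration costs only $O(k)$ arithmetic, since the sums run over the $2(k+1)$ retained eigenvalues.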

The method to obtain $\sigma^*$ is significantly different from the one used by Burke et al. [15] in that we explicitly use the eigendecomposition within Newton's method to compute the optimal $\sigma^*$. That is, we differentiate the reciprocal of $\|v(\sigma)\|_2$ in (13) to compute the derivative of $\phi(\sigma)$ in (12), obtaining a Newton update that is expressed only in terms of $g_\parallel$, $g_\perp$, and the eigenvalues of $B$. In contrast to the method by Burke et al. [15] (specifically, Alg. 2 in their paper), this approach eliminates the need for matrix solves at each Newton iteration.

Given $\sigma^*$ and $v^*$, the optimal $p^*$ is obtained as follows. Letting $\tau^* = \gamma + \sigma^*$, the solution to the first optimality condition, $(B + \sigma^* I) p^* = -g$, is given by

$$
p^* = -(B + \sigma^* I)^{-1} g
    = -(\gamma I + \Psi M \Psi^T + \sigma^* I)^{-1} g
    = -\frac{1}{\tau^*} \left[ I - \Psi \left( \tau^* M^{-1} + \Psi^T \Psi \right)^{-1} \Psi^T \right] g, \tag{14}
$$

using the Sherman-Morrison-Woodbury formula. Algorithm 1 details the proposed approach for solving the trust-region subproblem.

ALGORITHM 1: L-BFGS Trust-Region Subproblem Solver
  Compute R from the "thin" QR factorization of Ψ;
  Compute the spectral decomposition RMR^T = V Λ̂ V^T with λ̂_1 ≤ λ̂_2 ≤ ... ≤ λ̂_{2(k+1)};
  Let Λ_1 = Λ̂ + γI;
  Define P_∥ = ΨR^{-1}V and g_∥ = P_∥^T g;
  Compute ||P_⊥^T g||_2 = sqrt(||g||_2^2 − ||g_∥||_2^2);
  if φ(0) ≥ 0 then
    σ* = 0 and compute p* from (14) with τ* = γ;
  else
    Use Newton's method to find σ*;
    Compute p* from (14) with τ* = γ + σ*;
  end

The method described here guarantees that the trust-region subproblem is solved to high accuracy. Other L-BFGS trust-region methods that solve to high accuracy include the Moré-Sorensen Sequential (MSS) method [16], which uses a shifted L-BFGS approach, and the limited-memory trust-region method of Burdakov et al. [17], which uses a "shape-changing" norm in (3).

Convergence. Global convergence of Algorithm 2 can be proven by modifying the techniques found in Burke et al. [15] and Powell [18], which require that the following assumptions are satisfied:

[A.1] There are constants l and u such that l ≤ ||B_k|| ≤ u for all k.
[A.2] ∇Φ is Lipschitz continuous.

For Assumption A.1, since $B_k$ is symmetric and positive definite, $\|B_k\|_2 = \lambda_{\max}$. Because we are able to explicitly compute the eigenvalues of $B_k$ in (6), we can satisfy Assumption A.1 by accepting an update pair $(s_k, y_k)$ only if $l \le \lambda_{\max} \le u$. For Assumption A.2, the gradient $\nabla \Phi$ is itself continuously differentiable, and therefore $\nabla \Phi$ is Lipschitz continuous on bounded sets. With these assumptions satisfied, and noting that $\Phi(x_k) \ge 0$ for all $x_k$ (since each term in (2) is nonnegative), by [15, Theorem 5.4] the sequence of iterates generated by Algorithm 2 converges to a critical point of $\Phi$.

ALGORITHM 2: Trust-Spa: Limited-Memory BFGS Trust-Region Method for Sparse Relaxation
  Define parameters: m, 0 < τ_1 < 0.5, 0 < ε;
  Initialize x_0 ∈ R^n and compute g_0 = ∇Φ(x_0);
  Let k = 0;
  while not converged do
    if ||g_k||_2 ≤ ε then done;
    Use Algorithm 1 to find p_k that solves (3);
    Compute ρ_k = (Φ(x_k + p_k) − Φ(x_k))/q_k(p_k);
    Compute g_{k+1} and update B_{k+1};
    if ρ_k ≥ τ_1 then x_{k+1} = x_k + p_k; else x_{k+1} = x_k; end if
    Compute trust-region radius δ_{k+1};
    k ← k + 1;
  end while
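The final step recovery via (14) requires only small $2(k+1)$-dimensional solves rather than anything of size $n$. A sketch under the same naming assumptions as the earlier snippets:

```python
import numpy as np

def step_from_smw(g, Psi, M, gamma, sigma_star):
    """Compute p* in (14) via Sherman-Morrison-Woodbury without ever
    forming the n x n matrix (B + sigma* I)."""
    tau = gamma + sigma_star
    small = tau * np.linalg.inv(M) + Psi.T @ Psi       # 2(k+1) x 2(k+1) system
    t = np.linalg.solve(small, Psi.T @ g)
    return -(g - Psi @ t) / tau
```

Chained together, `compact_lbfgs`, `newton_secular`, and `step_from_smw` trace the same steps as Algorithm 1, with the QR factorization and eigendecomposition supplying `lam`, `g_par`, and `g_perp_norm`.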

5. NUMERICAL EXPERIMENTS

We evaluate the performance of the proposed method (Trust-Spa) by solving 1D and 2D signal reconstruction problems. In particular, we compare the results with the widely-used GPSR method [1] and the more recent method YALL1 [19]. All three methods were initialized using the same starting point (zero) and terminate when the relative objective values no longer change significantly, i.e., $|\Phi(x_{k+1}) - \Phi(x_k)| / |\Phi(x_k)| \le 10^{-8}$. The regularization parameter $\tau$ in (1) is optimized independently for each algorithm to minimize the mean-squared error (MSE $= \frac{1}{\tilde{n}} \|\hat{f} - f\|_2^2$, where $\hat{f}$ is an estimate of $f$).

1D signal recovery. In this experiment, the true signal $f$ is of size 4,096 with 160 randomly assigned nonzeros of amplitude ±1 (see Fig. 2(a)). We obtain compressive measurements $b$ of size 1,024 (see Fig. 2(b)) by projecting the true signal using a randomly generated system matrix $A$ with entries drawn from the standard normal distribution and rows orthonormalized. The measurements are corrupted with 5% Gaussian noise.

Figure 2. Experimental setup: (a) true signal $f$ ($\tilde{n} = 4096$, number of nonzeros = 160); (b) compressive measurements $b$ ($\tilde{m} = 1024$) corrupted with 5% Gaussian noise.

Averaged over 10 trials, the proposed Trust-Spa method (average MSE = 9.827e-5) outperforms GPSR (average MSE = 1.758e-4) and YALL1 (average MSE = 1.753e-4) in comparable computation time. Trust-Spa also produces fewer reconstruction artifacts (see Fig. 3). Because of the variable transformation used by Trust-Spa, the algorithm terminates with no zero components in its solution (even though the true signal has only 160 nonzero components); however, only relatively few components of the Trust-Spa reconstruction have significant amplitudes. For example, in one particular trial, only 579 components are greater than $10^{-6}$ in absolute value. This has the effect of rendering most spurious solutions less visible.

2D signal recovery. Here, we wish to deblur a Quick Response (QR) code image of size 512 × 512 (see Fig. 4(a)) from a blurry image corrupted with 3% zero-mean Gaussian noise. GPSR obtained an MSE of 5.3e-1 (20 sec) and YALL1 obtained an MSE of 4.03e-1 (40 sec). In contrast, the proposed Trust-Spa method took only 16 seconds to converge with an MSE of 3.9e-1. In the case of GPSR, there are very high-amplitude artifacts around the edges of the reconstruction (compare the orange areas in the zoomed-in log-error plots in Figs. 4(c) and 4(d)). Even though YALL1 gives competitive results in terms of MSE, it does not recover edges as well as Trust-Spa (compare the circled areas in Fig. 4(e) with Figs. 4(c) and 4(d)).

6. CONCLUSIONS

In this paper, we proposed an approach for solving the $\ell_2$-$\ell_1$ minimization problem that arises in compressed sensing and sparse recovery problems. Unlike gradient projection-type methods, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach mitigates spurious solutions more effectively, achieving a lower average MSE in less time.

Acknowledgments: This research is supported by NSF Grants CMMI-1334042 and CMMI-1333326.

Figure 3. A zoomed region of all reconstructions: (a) Trust-Spa reconstruction, (b) GPSR reconstruction, (c) YALL1 reconstruction. Note the presence of more artifacts in the GPSR reconstruction (b) and the YALL1 reconstruction (c) in comparison to the reconstruction from our proposed method Trust-Spa (a).

Figure 4. (a) True QR code image, (b) Trust-Spa reconstruction, (c) log-error plot of the Trust-Spa reconstruction, (d) log-error plot of the GPSR reconstruction, and (e) log-error plot of the YALL1 reconstruction. Note that the log-error of GPSR has higher amplitude, and the YALL1 reconstruction has more edge artifacts than the Trust-Spa reconstruction.

REFERENCES

[1] Figueiredo, M. A. T., Nowak, R. D., and Wright, S. J., "Gradient projection for sparse reconstruction," IEEE Journal of Selected Topics in Signal Processing 1(4), 586-597 (2007).
[2] Eldar, Y. C. and Kutyniok, G., [Compressed Sensing: Theory and Applications], Cambridge University Press (2012).
[3] Yu, J., Vishwanathan, S., Günter, S., and Schraudolph, N. N., "A quasi-Newton approach to nonsmooth convex optimization problems in machine learning," The Journal of Machine Learning Research 11, 1145-1200 (2010).
[4] Lee, J., Sun, Y., and Saunders, M., "Proximal Newton-type methods for convex optimization," in [Advances in Neural Information Processing Systems], 836-844 (2012).
[5] Zhou, G., Zhao, X., and Dai, W., "Low rank matrix completion: A smoothed l0-search," in [2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)], 1010-1017, IEEE (2012).
[6] Becker, S. and Fadili, J., "A quasi-Newton proximal splitting method," in [Advances in Neural Information Processing Systems], 2618-2626 (2012).
[7] Wang, Y., Cao, J., and Yang, C., "Recovery of seismic wavefields based on compressive sensing by an l1-norm constrained trust region method and the piecewise random subsampling," Geophysical Journal International 187(1), 199-213 (2011).
[8] Hintermüller, M. and Wu, T., "Nonconvex TVq-models in image restoration: Analysis and a trust-region regularization-based superlinearly convergent solver," SIAM Journal on Imaging Sciences 6(3), 1385-1415 (2013).
[9] Gay, D. M., "Computing optimal locally constrained steps," SIAM Journal on Scientific and Statistical Computing 2(2) (1981).
[10] Moré, J. J. and Sorensen, D. C., "Computing a trust region step," SIAM Journal on Scientific and Statistical Computing 4 (1983).
[11] Conn, A. R., Gould, N. I. M., and Toint, P. L., [Trust-Region Methods], Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (2000).
[12] Liu, D. C. and Nocedal, J., "On the limited memory BFGS method for large scale optimization," Mathematical Programming 45, 503-528 (1989).
[13] Nocedal, J. and Wright, S., [Numerical Optimization], Springer Science & Business Media (2006).
[14] Byrd, R. H., Nocedal, J., and Schnabel, R. B., "Representations of quasi-Newton matrices and their use in limited-memory methods," Mathematical Programming 63, 129-156 (1994).
[15] Burke, J. V., Wiegmann, A., and Xu, L., "Limited memory BFGS updating in a trust-region framework," technical report, University of Washington (1996).
[16] Erway, J. B. and Marcia, R. F., "Algorithm 943: MSS: MATLAB software for L-BFGS trust-region subproblems for large-scale optimization," ACM Transactions on Mathematical Software 40, 28:1-28:12 (June 2014).
[17] Burdakov, O., Gong, L., Yuan, Y.-X., and Zikrin, S., "On efficiently combining limited memory and trust-region techniques," Tech. Rep. 2013:13, Linköping University, Optimization (2015).
[18] Powell, M. J. D., "Convergence properties of a class of minimization algorithms," Nonlinear Programming 2, 1-27 (1975).
[19] Yang, J. F. and Zhang, Y., "Alternating direction algorithms for L1 problems in compressive sensing," SIAM Journal on Scientific Computing 33(1), 250-278 (2011).

