MULTIPLICATIVE UPDATES ALGORITHM TO MINIMIZE THE GENERALIZED TOTAL VARIATION FUNCTIONAL WITH A NON-NEGATIVITY CONSTRAINT

Paul Rodríguez
Digital Signal Processing Group
Pontificia Universidad Católica del Perú
Lima, Peru

ABSTRACT

We propose an efficient algorithm to solve the generalized Total Variation (TV) functional with a non-negativity constraint. This algorithm, which does not involve the solution of a linear system, but rather multiplicative updates only, can be used to solve the denoising and deconvolution problems. The derivation of our method is straightforward once the generalized TV functional is cast as a Non-negative Quadratic Programming (NQP) problem. The proposed algorithm offers fair computational performance when solving the ℓ2-TV and ℓ1-TV denoising and deconvolution problems, and it is the fastest algorithm of which we are aware for general inverse problems involving a nontrivial forward linear operator and a non-negativity constraint.

Index Terms— Total Variation, Non-negative Quadratic Programming

1. INTRODUCTION

The minimum of the generalized Total Variation (TV) functional [1]

T(u) = (1/p) || Au − b ||_p^p + (λ/q) || sqrt( (D_x u)^2 + (D_y u)^2 ) ||_q^q        (1)

is the ℓp-regularized solution of the inverse problem involving grayscale image data b and forward linear operator A. Note that deconvolving (A = I for denoising) images corrupted with Gaussian noise (ℓ2-TV case) and salt-and-pepper noise (ℓ1-TV case) can be performed with p = 2, q = 1 and p = 1, q = 1 in (1), respectively. We use the following notation:

• the p-norm of vector u is denoted by ||u||_p,
• scalar operations applied to a vector are considered to be applied element-wise, so that, for example, u = v^2 ⇒ u_k = v_k^2 and u = sqrt(v) ⇒ u_k = sqrt(v_k),
• sqrt( (D_x u)^2 + (D_y u)^2 ) is the discretization of |∇u|, and
• the horizontal and vertical discrete derivative operators are denoted by D_x and D_y respectively.
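As a concrete reading of (1), the functional can be evaluated with forward finite differences. The sketch below is an illustration only, not the paper's code: the difference scheme, the choice A = I (denoising), and all parameter values are assumptions.

```python
import numpy as np

def tv_functional(u, b, lam=0.5, p=2, q=1):
    """Evaluate the generalized TV functional (1) with A = I (denoising):

    T(u) = (1/p)||u - b||_p^p + (lam/q)|| sqrt((Dx u)^2 + (Dy u)^2) ||_q^q
    """
    # Forward differences with replicated last row/column (one common scheme).
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal derivative Dx u
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical derivative Dy u
    fidelity = np.sum(np.abs(u - b) ** p) / p
    regularizer = lam / q * np.sum(np.sqrt(dx**2 + dy**2) ** q)
    return fidelity + regularizer

rng = np.random.default_rng(0)
b = np.clip(rng.normal(0.5, 0.2, (8, 8)), 0.0, 1.0)  # toy noisy "image"
print(tv_functional(b.copy(), b))  # fidelity term vanishes at u = b
```

With p = 2, q = 1 this is the ℓ2-TV functional; p = 1, q = 1 gives ℓ1-TV.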

There are several types of numerical algorithms (a detailed review can be found in [1]) to solve the TV problem described in (1). Succinctly, we mention that algorithms such as [2, 3, 4] (and more) do not need to solve a linear system of equations and are (in general) computationally efficient, but lack the ability to handle a nontrivial forward operator A in (1). On the other hand, algorithms that can handle a nontrivial forward operator do need to solve a linear system of equations (see [1, 5, 6, 7] and more), but either their computational performance suffers (especially when the operator A in (1) is a large non-separable kernel [1, 5, 6]) or their reconstruction performance suffers (for medium to strong levels of noise [7]). None of the above mentioned algorithms includes (or enforces) a non-negativity constraint. To the best of our knowledge, only [8, Ch. 9] and, more recently, [9, 10] include a non-negativity constraint for TV deblurring, but they do need to solve a linear system of equations, and consequently their computational performance suffers.

In this paper we present an efficient algorithm to solve the generalized TV functional described in (1) which includes the non-negativity constraint 0 ≤ u ≤ vmax. This algorithm does not involve the solution of a linear system, but rather multiplicative updates only, and at the same time is able to handle a nontrivial forward operator A. The algorithm is called IRN-NQP (Iteratively Reweighted Norm or IRN, Non-negative Quadratic Programming or NQP) and owes its name to the derivation of our method: it starts by representing the ℓp and ℓq norms in (1) by equivalent weighted ℓ2 norms, in the same fashion as the Iteratively Reweighted Norm (IRN) algorithm (see [1]), and then casts the resulting weighted ℓ2 functional as a Non-negative Quadratic Programming problem (NQP, see [11]). Finally, we stress that our algorithm can handle any norm with 0 < p, q ≤ 2, including ℓ2-TV and ℓ1-TV as special cases.

2. THE IRN-NQP ALGORITHM

In this section we summarize the derivation of the IRN (Iteratively Reweighted Norm) [1] algorithm as well as the description of the NQP (Non-negative Quadratic Programming) [11] problem, and finally describe the IRN-NQP algorithm.

2.1. The Iteratively Reweighted Norm (IRN) Algorithm

The IRN algorithm [1] is a computationally efficient method for solving the generalized TV problem; it represents the ℓp and ℓq norms in (1) by equivalent weighted ℓ2 norms, resulting in (see [1] for details):

T^(k)(u) = (1/2) || (W_F^(k))^{1/2} (Au − b) ||_2^2 + (λ/2) || (W_R^(k))^{1/2} D u ||_2^2 + C        (2)

where u^(k) is a constant representing the solution of the previous iteration, C is a constant value, and

W_F^(k) = diag( τ_{F,ε_F}( Au^(k) − b ) ),        (3)

W_R^(k) = [ Ω_R^(k)  0 ; 0  Ω_R^(k) ],   D = [ D_x ; D_y ],        (4)

Ω_R^(k) = diag( τ_{R,ε_R}( (D_x u^(k))^2 + (D_y u^(k))^2 ) ).        (5)

Following a common strategy in IRLS (iteratively reweighted least squares) type algorithms [12], the functions

τ_{F,ε_F}(x) = |x|^{p−2} if |x| > ε_F,   ε_F^{p−2} if |x| ≤ ε_F,        (6)

τ_{R,ε_R}(x) = |x|^{(q−2)/2} if |x| > ε_R,   ε_R^{(q−2)/2} if |x| ≤ ε_R,        (7)

are defined to avoid numerical problems when p, q < 2 and Au^(k) − b or (D_x u)^2 + (D_y u)^2 has zero-valued components. In [1] it is proven that the iterative solution of the quadratic functional T^(k)(u) converges to the solution of T(u) in (1).

2.2. IRN as Iteratively Reweighted Least Squares

We observe that (2) can be cast as the standard IRLS problem:

T^(k)(u) = (1/2) || (W^(k))^{1/2} ( Ã u − b̃ ) ||_2^2,        (8)

where

W^(k) = [ W_F^(k)  0 ; 0  W_R^(k) ],   Ã = [ A ; √λ D ],   and   b̃ = [ b ; 0 ].

Note that we are neglecting the constant term, since it has no impact on the solution of the optimization problem at hand. Moreover, after algebraic manipulation, the minimization problem in (8) can be expressed as

min_u T^(k)(u) = (1/2) u^T Ã^T W^(k) Ã u − (Ã^T W^(k) b̃)^T u.        (9)

It is straightforward to check that the matrix Ã^T W^(k) Ã is symmetric and positive definite, and therefore solving

( Ã^T W^(k) Ã ) u^(k+1) = Ã^T W^(k) b̃        (10)

gives the minimum of (9), which converges (see [1] for details) to the minimum of (1) as the iterations proceed.

2.3. Non-negative Quadratic Programming (NQP)

Recently [11] an interesting and quite simple algorithm has been proposed to solve the Non-negative Quadratic Programming (NQP) problem:

min_v (1/2) v^T Φ v + c^T v   s.t.   0 ≤ v ≤ vmax,        (11)

where the matrix Φ is assumed to be symmetric and positive definite, and vmax is some positive constant. Splitting Φ into its positive and negative parts,

Φ+_{nl} = Φ_{nl} if Φ_{nl} > 0, 0 otherwise,   Φ−_{nl} = |Φ_{nl}| if Φ_{nl} < 0, 0 otherwise,

the multiplicative updates for the NQP are summarized as follows (see [11] for details on derivation and convergence):

v^(k+1) = min{ v^(k) [ ( −c + sqrt( c^2 + 4 υ^(k) ν^(k) ) ) / ( 2 υ^(k) ) ],  vmax },        (12)

where υ^(k) = Φ+ v^(k), ν^(k) = Φ− v^(k), and all algebraic operations in (12) are to be carried out element-wise. The NQP algorithm is quite efficient and has been used to solve interesting problems such as statistical learning [11], compressive sensing [13], etc.

2.4. IRN-NQP Algorithm

If we compare the optimization problem described in (9) with the NQP problem (11), we notice that by setting Φ^(k) = Ã^T W^(k) Ã and c = −Ã^T W^(k) b̃, the minimum of (9) can be computed using (12) instead of solving the linear system described in (10).

It is important to highlight that the constraint 0 ≤ u ≤ vmax is enforced when (12) is used to solve (9). Since the main target in TV problems is the denoising/deconvolution of digital images, the u ≥ 0 constraint is physically meaningful in most cases (images acquired by digital cameras, MRI, CT, etc.), and its enforcement has recently attracted some attention [9, 10] because the non-negativity constraint improves the quality of the reconstructions [9]. The upper bound constraint may or may not be enforced (see Sections 2.1 and 2.3 in [11]) and could be useful when a priori information about the upper bound is known.

The IRN-NQP algorithm is summarized in Algorithm 1. The threshold values for the weighting matrices W_F and W_R have a great impact on the quality of the results and on the time performance, and while not done so here, this algorithm can auto-adapt (as for the IRN algorithm, see [1, Sec. IV.G]) the values of ε_F and ε_R based on the particular characteristics of the input data to be denoised/deconvolved. Another key aspect of the IRN-NQP algorithm is the inclusion of the NQP tolerance (ε_NQP^(k)) used to terminate the inner loop in Algorithm 1. The NQP tolerance, which is inspired by the idea of forcing terms for the Inexact Newton method, adapts the tolerance used to decide when to stop the multiplicative updates (break the inner loop). Experimentally, α ∈ [0.5, 1] and γ ∈ [1e-3, 5e-1] give a good compromise between computational and reconstruction performance.

Algorithm 1: IRN-NQP
  Initialize u^(0) = b
  for k = 0, 1, ...
    W_F^(k) = diag( τ_{F,ε_F}( Au^(k) − b ) )
    Ω_R^(k) = diag( τ_{R,ε_R}( (D_x u^(k))^2 + (D_y u^(k))^2 ) )
    W_R^(k) = [ Ω_R^(k)  0 ; 0  Ω_R^(k) ]
    Φ^(k) = A^T W_F^(k) A + λ D^T W_R^(k) D
    c^(k) = −A^T W_F^(k) b
    u^(k,0) = u^(k)
    ε_NQP^(k) = γ · ( ||Φ^(k) u^(k,0) + c^(k)||_2 / ||c^(k)||_2 )^α        (NQP tolerance)
    for m = 0, 1, ..., M
      υ^(k,m) = Φ+^(k) u^(k,m),   ν^(k,m) = Φ−^(k) u^(k,m)
      u^(k,m+1) = min{ u^(k,m) [ ( −c^(k) + sqrt( (c^(k))^2 + 4 υ^(k,m) ν^(k,m) ) ) / ( 2 υ^(k,m) ) ],  vmax }
      if ||Φ^(k) u^(k,m+1) + c^(k)||_2 / ||c^(k)||_2 < ε_NQP^(k), break
    u^(k+1) = u^(k,m+1)

3. EXPERIMENTAL RESULTS

The “Satellite” image was used for the ℓ2-TV deconvolution case; it was blurred by a 9 × 9 Gaussian kernel to match one of the experiments described in [9] and [10], and then corrupted with Gaussian noise, giving a resulting image with an SNR of 7.62 dB. The “Lena” image was used for the ℓ1-TV deconvolution case; it was blurred by a 7 × 7 out-of-focus kernel (2D pill-box filter) and then corrupted with salt-and-pepper noise.

Fig. 1. (a) “Lena” and (b) “Satellite” test images.

All experiments were carried out on the same hardware (… 2048K, RAM: 4G). Results corresponding to the IRN-NQP algorithm presented here may be reproduced using the NUMIPAD (v. 0.30) distribution [14], an implementation of IRN and related algorithms.
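As a concrete illustration of the NQP multiplicative updates in (11)–(12), the following sketch solves a small NQP instance. This is not the paper's or NUMIPAD's code; the test matrix, starting point, and fixed iteration count are arbitrary choices made for the example.

```python
import numpy as np

def nqp_solve(Phi, c, vmax=1.0, iters=200):
    """Multiplicative updates (12) for min (1/2) v^T Phi v + c^T v, 0 <= v <= vmax.

    Phi must be symmetric positive definite; Phi+ / Phi- are its
    element-wise positive and (magnitudes of) negative parts.
    """
    Phi_pos = np.maximum(Phi, 0.0)   # Phi+
    Phi_neg = np.maximum(-Phi, 0.0)  # Phi-
    v = np.full(len(c), 0.5 * vmax)  # strictly positive starting point
    for _ in range(iters):
        ups = Phi_pos @ v            # upsilon = Phi+ v
        nu = Phi_neg @ v             # nu      = Phi- v
        # Element-wise update, clipped at the upper bound vmax.
        v = np.minimum(v * (-c + np.sqrt(c**2 + 4 * ups * nu)) / (2 * ups), vmax)
    return v

# Small symmetric positive definite example.
Phi = np.array([[2.0, -0.5], [-0.5, 1.0]])
c = np.array([-1.0, 0.2])
v = nqp_solve(Phi, c)
print(v)
```

For this instance the unconstrained minimizer already lies inside [0, vmax], so the updates should approach the solution of Φv = −c while keeping every iterate non-negative.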

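To show how the pieces of Algorithm 1 fit together, here is a deliberately simplified sketch, not the NUMIPAD implementation: it assumes A = I (denoising), a 1-D signal with a single forward-difference operator in place of D_x and D_y, fixed inner/outer iteration counts instead of the adaptive NQP tolerance, and arbitrary λ and ε_R. With p = 2 the fidelity weights W_F reduce to the identity, so Φ^(k) = I + λ Dᵀ W_R^(k) D and c = −b.

```python
import numpy as np

def irn_nqp_denoise_1d(b, lam=0.5, eps_r=1e-2, vmax=1.0, outer=10, inner=100):
    """Simplified IRN-NQP sketch: 1-D l2-TV denoising (A = I, p = 2, q = 1)."""
    n = len(b)
    D = np.eye(n, k=1) - np.eye(n)   # forward differences
    D[-1, :] = 0.0                   # zero boundary row
    c = -b                           # c = -A^T W_F b with A = W_F = I
    u = np.clip(b, 1e-6, vmax)       # strictly positive starting point
    for _ in range(outer):
        # Regularization weights (5)/(7): tau_R(x) = x^{(q-2)/2}, floored at eps_r.
        g2 = (D @ u) ** 2
        w = np.maximum(g2, eps_r) ** -0.5
        Phi = np.eye(n) + lam * D.T @ (w[:, None] * D)
        Phi_pos, Phi_neg = np.maximum(Phi, 0.0), np.maximum(-Phi, 0.0)
        for _ in range(inner):       # multiplicative NQP updates (12)
            ups, nu = Phi_pos @ u, Phi_neg @ u
            u = np.minimum(u * (-c + np.sqrt(c**2 + 4 * ups * nu)) / (2 * ups),
                           vmax)
    return u

# Noisy piecewise-constant signal in [0, 1].
rng = np.random.default_rng(1)
clean = np.repeat([0.2, 0.8], 16)
b = np.clip(clean + 0.05 * rng.standard_normal(32), 0.0, 1.0)
u = irn_nqp_denoise_1d(b)
```

The inner loop approximates the solve in (10) with multiplicative updates only, so every iterate automatically satisfies 0 ≤ u ≤ vmax; the outer loop refreshes the IRN weights around the current iterate.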