Appeared in the Proceedings of the 1997 IEEE International Conference on Image Processing, Vol III, pp 384-387

Total Variation Image Restoration: Numerical Methods and Extensions

Peter Blomgren    Tony F. Chan    Pep Mulet    C. K. Wong

UCLA Department of Mathematics, 405 Hilgard Ave., Los Angeles, CA 90095-1555

Abstract

We describe some numerical techniques for the Total Variation image restoration method, namely a primal-dual linearization for the Euler-Lagrange equations and some preconditioning issues. We also highlight extensions of this technique to color images and blind deconvolution, and discuss the staircasing effect.

1 Introduction

The main purpose of this paper is to review some ongoing work in our research group on using total variation techniques in image restoration. The classical algorithms, mainly based on least squares, are usually not appropriate for edge recovery. Instead, we follow a variational formulation based on the minimization of the Total Variation norm subject to a noise constraint. We present efficient numerical methods as well as extensions of TV beyond gray scale images. In Section 2 we first describe a primal-dual Newton linearization technique for handling the highly singular and nonlinear nature of the TV model. We then present a class of fast transform based preconditioners for the iterative solution of the linearized Euler-Lagrange equations. In Section 3, we extend TV to vector valued images (including color) and to blind deblurring (where the blur is unknown), and discuss techniques to lessen the tendency of TV to over-sharpen smooth images.

2 Numerical Methods

2.1 Primal-Dual Linearization

Let us denote by $u$ and $z$ the real and observed images, respectively, both defined in a region $\Omega$. The model of degradation we assume is $Ku + n = z$, where $n$ is Gaussian white noise and $K$ is a (known) linear blur operator. The problem $Ku = z$, for a compact operator $K$, is ill-posed, so we consider its Tikhonov regularization, which consists in the solution of the variational problem $\min_u \alpha R(u) + \frac{1}{2}\|Ku - z\|_{L^2}^2$, for some regularization functional $R$ which measures the irregularity of $u$ and a coefficient $\alpha$ suitably chosen to balance the trade-off between a good fit to the data and a regular solution. Examples of regularization functionals are $R(u) = \|u\|_2^2$, $\|\nabla u\|_2^2$, $\|\nabla\cdot\nabla u\|_2^2$. The associated Euler-Lagrange equations are linear, but they are usually not suitable for edge recovery.

In [8] the Total Variation norm

$$TV(u) = \int_\Omega |\nabla u|\, dx\, dy$$

is proposed as the regularization functional. It does not penalize discontinuities in $u$ and thus allows a better edge recovery. The restoration problem can thus be written as

$$\min_u \; \alpha\, TV(u) + \tfrac{1}{2}\|Ku - z\|_{L^2}^2, \qquad (1)$$

and its Euler-Lagrange equation, assuming homogeneous Neumann boundary conditions, is



$$0 = -\alpha\, \nabla\cdot\left(\frac{\nabla u}{|\nabla u|}\right) + K^*(Ku - z) \equiv g(u), \qquad (2)$$

where $K^*$ is the adjoint operator of $K$.


Since equation (2) is not well defined at points where $\nabla u = 0$, a commonly used technique is to replace $|\nabla u|$ by $\sqrt{|\nabla u|^2 + \beta}$, for a small positive parameter $\beta$. The main difficulty in solving equation (2) is the linearization of the highly nonlinear divergence term. A number of methods have been proposed to solve (2). L. Rudin, S. Osher and E. Fatemi [8] used a time marching scheme to compute the steady state of the parabolic equation $u_t = -g(u)$ with initial condition $u = z$. This method can be slowly convergent due to stability constraints. C. Vogel and M. Oman [10] proposed the following fixed point iteration to solve the Euler-Lagrange equation: set $u^0 = z$ and solve for $u^{k+1}$:

$$-\alpha\, \nabla\cdot\left(\frac{\nabla u^{k+1}}{|\nabla u^k|}\right) + K^*(Ku^{k+1} - z) = 0. \qquad (3)$$

This method is robust but only linearly convergent. Due to the high nonlinearity of (2), Newton's method has an extremely small domain of convergence for small $\beta$. So it is natural to use a continuation procedure, starting with a large value of $\beta$ and gradually reducing it to the desired value, see [3]. Although this method is locally quadratically convergent, the choice of the sequence of subproblems to solve is crucial for its efficiency, and the authors have not succeeded in finding a fully satisfactory selection procedure, although some heuristics can be used.

We will now briefly describe a better linearization technique that we introduced in [6]. The idea is to introduce a new variable $w = \frac{\nabla u}{|\nabla u|}$ and replace (2) by the following equivalent system:

$$-\alpha\, \nabla\cdot w + K^*(Ku - z) = 0, \qquad w\,|\nabla u| - \nabla u = 0, \qquad (4)$$

and then linearize it by Newton's method. In practice, this method is globally convergent and the local convergence rate is quadratic. Although we do not have a complete theory yet, we believe that the key to its success is that (4) is more globally linear than (2). There is an alternative motivation for (4). It is known that the TV norm admits the weak formulation $TV(u) = \sup_{\|w\|_\infty \le 1} \int_\Omega -u\, \nabla\cdot w \; dx\, dy$. With this, problem (1) can be written as

$$\min_u \; \sup_{\|w\|_\infty \le 1} \Phi(u, w), \qquad (5)$$

where

$$\Phi(u, w) = \int_\Omega \left(-\alpha\, u\, \nabla\cdot w + \tfrac{1}{2}(Ku - z)^2\right) dx\, dy. \qquad (6)$$

By using arguments from duality theory for convex programming, it can be shown that (4) is the necessary and sufficient condition for the saddle point solution of (5) [6].
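For illustration, the following is a minimal sketch of the fixed point iteration (3) in the pure denoising case ($K = I$), with the $\beta$-smoothed gradient magnitude. The discretization (forward-difference gradient with its negative adjoint as divergence), the inner conjugate gradient solver, and all parameter values are our own illustrative choices, not specifications from this paper.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def grad(u):
    # Forward differences with a homogeneous Neumann boundary (last row/column repeated).
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return ux, uy

def div(px, py):
    # Negative adjoint of grad, so that -div(w * grad(u)) is symmetric positive semidefinite.
    dx = np.empty_like(px); dy = np.empty_like(py)
    dx[:, 0] = px[:, 0];  dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2];  dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :];  dy[1:-1, :] = py[1:-1, :] - py[:-2, :];  dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_denoise_fixed_point(z, alpha=0.1, beta=1e-4, outer=20, inner=50):
    # Lagged-diffusivity iteration (3) for K = I: at each outer step solve
    #   u^{k+1} - alpha * div( grad(u^{k+1}) / |grad(u^k)|_beta ) = z
    # for u^{k+1} by conjugate gradients.
    n, m = z.shape
    u = z.copy()
    for _ in range(outer):
        ux, uy = grad(u)
        w = 1.0 / np.sqrt(ux**2 + uy**2 + beta)      # frozen diffusion coefficient

        def apply_A(v):
            v = v.reshape(n, m)
            vx, vy = grad(v)
            return (v - alpha * div(w * vx, w * vy)).ravel()

        A = LinearOperator((n * m, n * m), matvec=apply_A)
        u, _ = cg(A, z.ravel(), x0=u.ravel(), maxiter=inner)
        u = u.reshape(n, m)
    return u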

2.2 Preconditioning for Deblurring

In a typical iterative solution of (2), one has to invert linear operators of the form $A = K^*K + \alpha L$, where $K$ is a Toeplitz type matrix corresponding to the blur and $L$ corresponds to the TV regularization. We note that $A$ is typically very ill-conditioned, and for efficiency it is crucial to use a good preconditioner. Vogel and Oman [9] recently proposed a "product" preconditioner which allows $K^*K$ and $L$ to be preconditioned separately. An alternative is our cosine transform based preconditioner [2]. The motivation comes from the fact that the cosine transform preconditioner is "good" for solving Toeplitz systems [7], as well as giving an exact factorization of the discrete Laplacian with Neumann boundary conditions. Let $C_n$ be the $n$-by-$n$ discrete cosine transform matrix. If $\delta_{ij}$ is the Kronecker delta, then the $(i,j)$th entry of $C_n$ is given by

$$\sqrt{\frac{2 - \delta_{i1}}{n}}\; \cos\left(\frac{(i-1)(2j-1)\pi}{2n}\right), \qquad 1 \le i, j \le n. \qquad (7)$$

For any $n^2 \times n^2$ matrix $A$, we choose our preconditioner $c(A)$ to be the minimizer of the Frobenius norm $\|B - A\|_F$ over all matrices $B$ that can be diagonalized by the 2D discrete cosine transform matrix $C_n \otimes C_n$. More precisely, a cosine transform preconditioner for $A = K^*K + \alpha L$ can be defined as

$$M = c(K)^* c(K) + \alpha\, c(L).$$

Note that $M$ has the eigendecomposition

$$M = (C_n \otimes C_n)\left(\Lambda_K^* \Lambda_K + \alpha\, \Lambda_L\right)(C_n \otimes C_n)^t,$$

where $\Lambda_K$ and $\Lambda_L$ are the eigen-matrices of $c(K)$ and $c(L)$ respectively. Hence, in each PCG iteration, $M^{-1}v$ can be computed in $O(n^2 \log n)$ operations by the 2D fast cosine transform algorithm. We remark that $K$ is a block Toeplitz matrix with Toeplitz blocks and $L$ is a banded matrix with five nonzero bands. By exploiting these matrix structures, $c(K)$ and $c(L)$ can both be constructed in $O(n^2 \log n)$ operations. Numerical results in [2] indicate that the condition number of the preconditioned system, $\kappa(M^{-1}A)$, is roughly $O(n^{0.22})$, significantly smaller than $\kappa(A) \approx O(n^{1.8})$ without preconditioning.
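The sketch below illustrates the two ingredients just described: the orthonormal DCT matrix of (7), and an $O(n^2 \log n)$ application of $M^{-1}$ once the eigenvalues of a DCT-diagonalizable $M$ are known. The eigenvalue array used here (a scaled Neumann Laplacian spectrum plus the identity) is only an illustrative stand-in for $\Lambda_K^*\Lambda_K + \alpha\Lambda_L$; scipy's fast cosine transforms apply the matrix in (7) without forming it.

import numpy as np
from scipy.fft import dct, dctn, idctn

# The orthonormal DCT matrix of (7) (written 0-indexed), and a check that
# scipy's fast transform applies the same matrix in O(n log n) per vector.
n = 64
i = np.arange(n)
C = np.sqrt((2.0 - (i[:, None] == 0)) / n) * \
    np.cos(np.pi * i[:, None] * (2 * i[None, :] + 1) / (2 * n))
x = np.random.rand(n)
assert np.allclose(C @ x, dct(x, type=2, norm='ortho'))

def dct2_precond_solve(v, lam):
    # Apply M^{-1} to an n-by-n array v, where M is diagonalized by the 2D DCT
    # (i.e. M = C^T diag(lam) C applied separably along rows and columns).
    # Cost: O(n^2 log n) via fast cosine transforms.
    w = dctn(v, norm='ortho')        # transform to the cosine basis
    w = w / lam                      # invert the diagonal part
    return idctn(w, norm='ortho')    # transform back

# Illustrative eigenvalues: alpha * (summed 1D Neumann Laplacian spectra) + 1,
# standing in for the Lambda_K^* Lambda_K + alpha * Lambda_L of the paper.
alpha = 0.1
lam1d = 2.0 - 2.0 * np.cos(np.pi * i / n)
lam = alpha * (lam1d[:, None] + lam1d[None, :]) + 1.0

v = np.random.rand(n, n)
u = dct2_precond_solve(v, lam)       # one preconditioner application inside PCG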



Figure 1: The initial, noisy, and denoised color images.

3 Extensions

3.1 Vector Valued Images: Color TV

In [1] we propose the following extension of the TV norm to vector valued functions.

Definition 1 (The Multi Dimensional $TV_{n,m}$ Norm) For any function $u : \mathbb{R}^n \to \mathbb{R}^m$, we define

$$TV_{n,m}(u) \;\stackrel{\mathrm{def}}{=}\; \sqrt{\sum_{i=1}^{m} \left[TV(u_i)\right]^2}. \qquad (8)$$

This can be applied to restore color and other vector valued images. We showed in [1] that our Color TV norm has the following desirable properties: rotational invariance in the image space, unbiasedness with respect to discontinuities, and invariance for the class of functions with monotone components. Given a noisy vector-valued image $u^0$, we are interested in minimizing

$$\min_{u \in BV(\Omega)} TV_{n,m}(u) \quad \text{subject to} \quad \|Ku - u^0\|_2^2 = \sigma^2, \qquad (9)$$

where $\sigma^2$ is the variance of the noise. The corresponding Euler-Lagrange equations are

$$\nabla\cdot\left(\frac{TV(u_i)}{TV_{n,m}(u)}\,\frac{\nabla u_i}{\|\nabla u_i\|}\right) - \lambda\, K^*\!\left(K u_i - u_i^0\right) = 0, \qquad (10)$$

where $\lambda$ is the associated Lagrange multiplier. Solutions of (10) can be computed by methods similar to those used for the gray scale images presented earlier. We show two examples of denoising using the $TV_{n,m}$ norm.

Example 1. (Fig. 1) We created a 2D image in RGB space, i.e. $u : \mathbb{R}^2 \to \mathbb{R}^3$; Gaussian noise was added (SNR = 4.0 dB), and we ran an explicit time marching scheme, similar to the one introduced in [8]. The reconstruction is very good, with sharp edges retained in the correct locations.

Example 2. (Fig. 2) We denoise (SNR = 3.0 dB) a 128 x 128 color "Lena" image. We observe that the reconstruction does not smear edges.

Figure 2: The noiseless image, the noisy image (SNR = 3.0 dB), and the recovered image.
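A minimal numerical sketch of definition (8) follows, using a forward-difference discretization of each channel's TV with a small smoothing parameter; the discretization and the parameter value are our own illustrative choices.

import numpy as np

def tv_scalar(u, beta=1e-8):
    # Discrete TV of a single channel: forward differences with a Neumann
    # boundary (last row/column repeated), beta-smoothed as in Section 2.
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sum(np.sqrt(ux**2 + uy**2 + beta))

def tv_color(u):
    # Channel-coupled Color TV norm of (8) for an array of shape (rows, cols, m).
    return np.sqrt(sum(tv_scalar(u[..., i])**2 for i in range(u.shape[-1])))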

3.2 Blind Deconvolution

The blind deconvolution problem is ill-posed with respect to both the image $u$ and the blurring function $K$. Here $K = (k_{11}, \ldots, k_{1m}, \ldots, k_{m1}, \ldots, k_{mm})$, where the $k_{ii}$ denote the intra-channel blurs and the $k_{ij}$ ($i \ne j$) denote the cross-channel blurs. Following the earlier work on gray scale blind deconvolution [5], we regularize $u$ and $K$ by considering the joint minimization problem [4]:

$$\min_{u,K} f(u, K) \;\equiv\; \min_{u,K}\; \tfrac{1}{2}\|Ku - z\|_{L^2}^2 + \alpha_1\, TV_m(u) + \alpha_2\, TV_{m^2}(K). \qquad (11)$$

Here $\alpha_1$ and $\alpha_2$ are positive parameters which measure the trade-off between a good fit and the regularity of the solutions $u$ and $K$. To solve (11), we alternately minimize over $u$ (resp. $K$) with $K$ (resp. $u$) fixed [5]. In Figure 3, we show that our blind deconvolution model (11) restores a very good image. The original image was provided by Calvin J. Hamilton of the Argonne National Laboratory. To give a simple illustration of the method, a purely intra-channel degradation model (i.e. $k_{ij} = 0$ for $i \ne j$) is used. We can see in Figure 3 that, even for blurring functions with edges (out-of-focus blur), our method can successfully remove almost all of the noise and blur and produces very good images (almost the same as those restored with known PSFs).

Figure 3: Comparison of images restored by (11) (middle) with those from non-blind TV restoration (right) when SNR = 40 dB. The first column shows the out-of-focus blurred noisy image (top) and the Gaussian blurred image (bottom).


For a full color copy of this paper, visit: www.math.ucla.edu/~blomgren/SHTML/camreports.shtml

Figure 4: The two leftmost figures show TV restorations of a "box" and a "parabola"; the two rightmost figures show $H^1$-seminorm reconstructions of the same signals.

3.3 Reducing the Staircasing Effect

TV restorations typically exhibit "blockiness," or a "staircasing" effect (see figure 4b), where the restored image consists of piecewise flat regions. While this does not constitute a problem for computer vision applications, the restoration is not pleasing to the human eye. In this section, we present our on-going research aimed at reducing this staircasing effect. One of our ideas is to interpolate between the TV function space, $W^{1,1}$, and the $H^1$-seminorm space, $W^{1,2}$, in order to get the best of both worlds in terms of sharp edge capturing and smooth transitions in regions without edges, see figure 4. Near an edge we want to use the $W^{1,1}$ space to capture edges in a TV-like fashion, and in flat regions we want to use the $W^{1,2}$ space to get an $H^1$-like reconstruction. In between these regions we want to use a fractional norm $W^{1,p}$, $p \in (1, 2)$, where $p$ is determined by the image gradient, i.e.

$$R_{p(x)}(u) = \int_\Omega |\nabla u|^{\,p(|\nabla u|)}\, dx, \qquad (12)$$

where $p$ monotonically decreases from 2, when $|\nabla u| = 0$, to 1, as $|\nabla u| \to \infty$. In figure 5 we show a sample restoration where the interpolating $p$ is a third order polynomial interpolating between 0 and $|\nabla u|_{\max}$ (the maximum gradient on the discrete grid).
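As an illustration of (12), the sketch below evaluates the adaptive-exponent functional with one plausible interpolating cubic, decreasing smoothly from 2 at zero gradient to 1 at the maximum gradient; the paper does not specify the polynomial's coefficients, so this particular cubic and the discretization are assumptions.

import numpy as np

def p_of_grad(g, gmax):
    # A cubic that decreases monotonically from 2 (at g = 0) to 1 (at g = gmax),
    # with zero slope at both ends; one possible interpolant, assumed here.
    s = np.clip(g / gmax, 0.0, 1.0)
    return 2.0 - s**2 * (3.0 - 2.0 * s)

def adaptive_tv_functional(u):
    # Evaluate R_{p(x)}(u) = sum |grad u|^{p(|grad u|)} on the discrete grid,
    # using forward differences with a Neumann boundary.
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    g = np.sqrt(ux**2 + uy**2)
    gmax = g.max() if g.max() > 0 else 1.0
    return np.sum(g ** p_of_grad(g, gmax))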


Figure 5: Sample restoration using the interpolation norm. Left to right: initial and noisy signal, TV, $H^1$, and polynomial interpolation (PIP) norm restoration. Notice how the PIP norm captures the edges and gives smooth ramps.

4 Remarks and Acknowledgments

This research is supported by the ONR under contract ONR N00017-96-1-0277, NSF grant MS-9626755, and by the NSF International Program, U.S.-Spain Research Project: Total Variation Methods in Image Processing.

References

[1] Peter Blomgren and Tony F. Chan. Color TV: Total Variation Methods for Restoration of Vector Valued Images. IEEE Transactions on Image Processing, to appear.

[2] R. Chan, T. F. Chan, and C. K. Wong. Cosine Transform Based Preconditioners for Total Variation Minimization Problems in Image Processing. In S. Margenov and P. Vassilevski, editors, Iterative Methods in Linear Algebra, volume 3 of IMACS Series in Computational and Applied Math., pages 311-329. IMACS, 1996.

[3] R. H. Chan, T. F. Chan, and H. M. Zhou. Continuation Methods for Total Variation Denoising Problems. Technical Report CAM 95-18, Department of Mathematics, University of California, Los Angeles, 1995.

[4] T. F. Chan and C. K. Wong. Multichannel Image Deconvolution by Total Variation Regularization. In Proceedings of SPIE, volume 3162, 1997.

[5] T. F. Chan and C. K. Wong. Total Variation Blind Deconvolution. IEEE Transactions on Image Processing, to appear.

[6] Tony F. Chan, Gene Golub, and Pep Mulet. A Nonlinear Primal-Dual Method for TV-Based Image Restoration. Technical Report CAM 95-43, UCLA Department of Mathematics, September 1995.

[7] T. Kailath and V. Olshevsky. Displacement Structure Approach to Discrete-Trigonometric-Transform Based Preconditioners of G. Strang and T. Chan Types. Calcolo, 1996. Issue devoted to the Proceedings of the International Workshop on "Toeplitz Matrices, Algorithms and Applications", Cortona, Italy, September 1996.

[8] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear Total Variation Based Noise Removal Algorithms. Physica D, 60:259-268, 1992.

[9] C. R. Vogel and M. E. Oman. Fast Total Variation-Based Image Reconstruction. In Proceedings of the 1995 ASME Design Engineering Conferences, volume 3, pages 1009-1015, 1995.

[10] C. R. Vogel and M. E. Oman. Iterative Methods for Total Variation Denoising. SIAM J. Sci. Statist. Comput., 17:227-238, 1996.