Optik 125 (2014) 1507–1516


Fast Poissonian image segmentation with a spatially adaptive kernel

Dai-Qiang Chen a,∗, Li-Zhi Cheng b, Xin-Peng Du b

a Department of Mathematics, School of Biomedical Engineering, Third Military Medical University, Chongqing 400038, People's Republic of China
b Department of Mathematics and System, School of Science, National University of Defense Technology, Changsha, Hunan 410073, People's Republic of China

Article history: Received 16 December 2012; Accepted 13 May 2013

Keywords: Image segmentation Poisson noise Generalized Kullback–Leibler divergence Spatially adaptive kernel Split-Bregman method

Abstract: Variational models that minimize a local variation measure have recently become widely used for the segmentation of intensity-inhomogeneous images. Local variation is a good measure for images corrupted by Gaussian noise. However, in many applications such as astronomical imaging, electron microscopy and positron emission tomography, Poisson noise often occurs in the observed images. To deal with this kind of image, we develop a novel segmentation model based on minimizing a local generalized Kullback–Leibler (KL) divergence with a spatially adaptive kernel. A fast algorithm based on the split-Bregman method is proposed to solve the corresponding optimization problem. Numerical experiments on synthetic and real images demonstrate that the proposed model outperforms most current state-of-the-art methods in the presence of Poisson noise. © 2013 Elsevier GmbH. All rights reserved.

1. Introduction

Image segmentation is one of the first and most critical pre-processing tasks for many high-level image applications such as object recognition and tracking. The goal is to partition a digital image into a collection of non-overlapping, consistent regions that share similar characteristics such as color, intensity, or texture. During the last two decades, various variational models and optimization techniques [1–5] have been developed for this problem. Traditionally, variational segmentation methods are classified into two groups: edge-based and region-based approaches. The idea of the edge-based models is to use edge information, such as the gradient and the smoothness, to attract active contours towards the object boundary. Classical edge-based methods include the snakes model [6] and the geodesic model [7]. This class of methods requires that the objects in the image have clear boundary curves and that the background be noise-free; in other words, it cannot handle noisy images or objects with complex boundaries. In contrast, the region-based approaches group similar pixels into subregions directly, e.g. the well-known Mumford–Shah (M–S) model [1]. The M–S model has been successfully applied to image segmentation under the assumption that the intensities in each region are well approximated by smooth functions. However, this

∗ Corresponding author. E-mail address: [email protected] (D.-Q. Chen). © 2013 Elsevier GmbH. All rights reserved. http://dx.doi.org/10.1016/j.ijleo.2013.05.195

model is nonconvex, and finding its minimizer is very challenging. In order to improve segmentation efficiency, the piecewise constant (PC) Mumford–Shah method [2] (whose two-phase version is just the Chan–Vese model [3]) was proposed. Its essential idea is the assumption that the image intensities are statistically homogeneous (roughly constant) rather than merely smooth in each region. Compared with methods that minimize the full Mumford–Shah energy functional, the popular piecewise constant (PC) models avoid solving for a piecewise smooth image, and hence greatly reduce the computational cost. However, the PC models are unable to segment images with intensity inhomogeneities, which often occur in medical images such as X-ray radiography/tomography and magnetic resonance (MR) images. In order to handle this class of images directly, Li et al. [8] proposed the region-scalable fitting (RSF) model (originally termed the local binary fitting (LBF) model [9]), which uses local region information (also regarded as a local variance estimate) to deal with intensity inhomogeneities. Recently, several related methods [10,11] were proposed with similar abilities to segment intensity-inhomogeneous images. Generally speaking, these models overcome the limitation of the PC models while simplifying the M–S model by avoiding the computation of piecewise smooth images. Therefore, the RSF model and related methods have been widely used for intensity-inhomogeneous image segmentation. The level set method (LSM) [4,12] is one of the most famous segmentation techniques and has been successfully applied to


the aforementioned segmentation models. In this conventional method, the boundary curve is represented by the zero level set of a Lipschitz level set function (LSF) defined on the image domain. In order to prevent the LSF from becoming too flat or too steep, a standard procedure of re-initializing the LSF to a signed distance function to its boundary is carried out during the level set evolution. However, this is an expensive procedure. Several improved methods, such as distance regularized level set evolution [8,13,14], were proposed recently to avoid this re-initialization. Although the LSM makes splitting and merging of boundary curves a simple matter, the energy functionals in the above models are still nonconvex and hence may possess many local minima, and the segmentation results are very sensitive to the initial values and to noise. In order to overcome these shortcomings, Chan et al. [15] introduced fuzzy membership/label functions to reformulate the PC models as convex optimization problems, so that first-order algorithms such as operator splitting techniques [5,16] and primal–dual methods [17] can be used to solve the corresponding problems efficiently. The methods discussed above are all based on the assumption that the observed image is contaminated by additive Gaussian noise; however, in many applications such as astronomical imaging, electron microscopy and positron emission tomography (PET), the images are often corrupted by non-Gaussian noise such as Poisson noise. Most recently, the PC model and the M–S model have both been extended to handle images biased by Poisson noise [18–20]. However, in these methods, constants or smooth functions are used to approximate the image intensity over the whole subregions to which they belong, and therefore they lack certain local properties.
Because local fitting functions are more suitable for approximating images with intensity inhomogeneity, these Poissonian image segmentation methods may produce inexact segmentation results for intensity-inhomogeneous images. In order to overcome this drawback, we define a local generalized Kullback–Leibler (KL) divergence with an adaptive kernel in local fitting form, and then propose a novel image segmentation model based on minimizing this new measure. Compared with previous works, our main contributions are twofold:

• In order to handle images contaminated by Poisson noise, the local generalized KL divergence is proposed to replace the local variance; we find that the form of the generalized KL divergence is more suitable from the viewpoint of the Poisson distribution.

• In previous methods based on minimizing the local variance, the kernel function used for estimating the local variance is spatially fixed, e.g. a Gaussian blurring kernel. Since the intensity variation of an image varies across the image domain, we use a spatially adaptive kernel when estimating the local KL divergence.

The remainder of this paper is organized as follows. In Section 2 we briefly review several related image segmentation models. In Section 3 the proposed model, based on minimizing the local generalized KL divergence with an adaptive kernel, is introduced, and basic properties such as the existence of solutions of the corresponding minimization problem are investigated. In Section 4 we describe a fast alternating minimization algorithm for solving the proposed model. In Section 5 the proposed algorithm is applied to real and synthetic images and compared with current state-of-the-art methods; the numerical examples demonstrate that our method outperforms the other approaches for the segmentation of Poissonian images. Conclusions are given in Section 6.

2. Preexisting work

2.1. The Mumford–Shah and Chan–Vese models

Let I(x) be an observed image defined on the image domain Ω. The essential idea of the Mumford–Shah model [1] is to seek an optimal curve C that segments the image into pairwise disjoint subregions, together with an optimal piecewise smooth function f(x) such that the intensities of I(x) are well approximated by f(x) within each subregion. This can be formulated as minimizing the following energy functional





$$E_{MS}(f, C) = \int_\Omega (I - f)^2\,dx + \mu\int_{\Omega\setminus C} |\nabla f|^2\,dx + \nu\cdot\mathrm{length}(C) \quad (2.1)$$

where C is a smooth, closed curve, and $\mu, \nu > 0$ are fixed parameters. The first term in (2.1) is the data fidelity term, which measures the distance between I(x) and f(x); the second term is the smoothness term, the prior model of f(x) given C; and the third term, the prior model of C, penalizes excessive length of the boundary curves. Because both the curve C and the function f(x) are unknown, and the above energy functional is nonconvex, it is difficult to find the optimal solution. Therefore, the PC models [2], which assume that the intensities of I(x) are well fitted by constants in each subregion, were proposed. The two-phase version is just the Chan–Vese (CV) model [3], whose energy functional is





$$E_{CV}(c_1, c_2, C) = \lambda_1\int_{\Omega_1} (I - c_1)^2\,dx + \lambda_2\int_{\Omega_2} (I - c_2)^2\,dx + \nu\cdot\mathrm{length}(C) \quad (2.2)$$

where $\Omega_i$ (i = 1, 2) denote the subregions inside and outside the curve C respectively, $c_i$ are the corresponding fitting constants, and $\lambda_i > 0$ are fixed parameters. Note that the smooth function f(x) of the M–S model is replaced by two constants in the CV model.

2.2. The RSF model

To overcome the difficulty caused by intensity inhomogeneity, Li et al. [8] proposed the region-scalable fitting (RSF) model, which uses local intensity fitting instead of the smooth-function or constant approximation of the M–S and CV models. In the RSF model, two spatially varying fitting functions $f_1(x)$ and $f_2(x)$ are introduced to approximate the local image intensities on both sides of the curve C. For a given pixel x, the local intensity fitting energy (also called the region-scalable fitting energy) is defined by

$$E_x(C, f_1(x), f_2(x)) = \sum_{i=1}^{2} \lambda_i \int_{\Omega_i} H(x, y)\,|I(y) - f_i(x)|^2\,dy \quad (2.3)$$

where $\lambda_i > 0$ (i = 1, 2) are fixed parameters, and H(x, y) is a nonnegative kernel function satisfying symmetry and localization properties. In [8] it was chosen as the two-dimensional Gaussian

$$H(x, y) = \frac{1}{2\pi\sigma^2}\,e^{-|x-y|^2/2\sigma^2} \quad (2.4)$$

with a scale parameter $\sigma > 0$. The local fitting energy above is defined only for a center pixel x. Integrating $E_x$ over the image domain Ω while penalizing the length of the boundary curve C, the energy functional of the RSF model is defined by

$$E_{RSF} = \int_\Omega E_x\,dx + \nu\cdot\mathrm{length}(C) \quad (2.5)$$

where $\nu > 0$ is a regularization parameter. In [8], a level set method was proposed to minimize the RSF energy functional (2.5). Specifically, representing the curve C by the zero level set of a Lipschitz function $\phi$, and further regularizing the level set function by penalizing its deviation from a signed distance function, we obtain

$$E_{RSF} = \int_\Omega E_x\,dx + \nu\int_\Omega \delta(\phi(x))\,|\nabla\phi(x)|\,dx + \frac{\mu}{2}\int_\Omega (|\nabla\phi(x)| - 1)^2\,dx \quad (2.6)$$

where $\mu > 0$ is a penalty parameter and $\delta(\cdot)$ is the Dirac function. The last two terms in (2.6) are the regularization terms with respect to $\phi$. For more details refer to [8].

2.3. The LIF model

In [11], the authors proposed a modified version of the RSF model, which utilizes the local image information to construct a local fitted image (LFI), and then uses the difference between the LFI and the observed image to define the local fitting energy. Specifically, the LFI is the combination of the functions $f_1(x), f_2(x)$ used in the RSF model, i.e.

$$I^{LIF}(\phi(x)) = f_1(x)\,H_\varepsilon(\phi(x)) + f_2(x)\,(1 - H_\varepsilon(\phi(x))) \quad (2.7)$$

where $H_\varepsilon(z) = \frac{1}{2}\left(1 + \frac{2}{\pi}\arctan\frac{z}{\varepsilon}\right)$ is a smooth approximation of the Heaviside function. A local image fitting (LIF) energy measuring the difference between the LFI and the observed image is then defined as

$$E_{LIF}(\phi(x)) = \int_\Omega |I(x) - I^{LIF}(\phi(x))|^2\,dx. \quad (2.8)$$

Minimizing $E_{LIF}$ with respect to $\phi(x)$ yields the gradient flow

$$\frac{\partial\phi}{\partial t} = (I - I^{LIF})(f_1 - f_2)\,\delta_\varepsilon(\phi) \quad (2.9)$$

where $\delta_\varepsilon(\cdot) = H_\varepsilon'(\cdot)$ is the regularized Dirac function. After each iteration of (2.9), the obtained level set function is regularized by a Gaussian kernel, i.e. $\phi = G_\varsigma * \phi$, where $G_\varsigma$ denotes Gaussian convolution with standard deviation $\varsigma$.

2.4. The PC and PS models for Poisson noise

The CV model for additive Gaussian noise was recently extended to segment images contaminated by Poisson noise. In [19], the intensity of the original image is assumed to be well approximated by a piecewise constant function. Inspired by the variational model for Poisson denoising, the energy functional for two-phase segmentation is defined as follows:

$$E_{PC}(c_1, c_2, C) = \lambda_1\int_{\Omega_1} (c_1 - I\log(c_1))\,dx + \lambda_2\int_{\Omega_2} (c_2 - I\log(c_2))\,dx + \nu\cdot\mathrm{length}(C) \quad (2.10)$$

where I(x) denotes the Poissonian image, $c_i$ (i = 1, 2) are the fitting constants of the observed image for the two subregions, and $\lambda_i > 0$ are fixed parameters. The first two terms in (2.10) form the data fidelity energy under Poisson noise.

Very recently, the M–S model was also extended to segment images under non-Gaussian noise [20]. By introducing the label function $u(x): \Omega \to [0, 1]$, the corresponding piecewise smooth (PS) model for Poisson noise can be reformulated as

$$E_{PS}(s_1, s_2, u) = \lambda_1\int_\Omega (s_1 - I\log(s_1))\,u(x)\,dx + \lambda_2\int_\Omega (s_2 - I\log(s_2))\,(1 - u(x))\,dx + \alpha_1 J_R(s_1) + \alpha_2 J_R(s_2) + \nu\,\|u\|_{BV(\Omega)} \quad (2.11)$$

where $s_1, s_2$ are two smooth functions, $J_R$ is a regularization term such as $J_R(s) = \gamma\int_\Omega \frac{|\nabla s|^2}{2s}\,dx$, and $\|\cdot\|_{BV(\Omega)}$ denotes the total variation of a function in Ω. For fixed u, the smooth functions $s_i$ (i = 1, 2) can be updated by the iteration scheme

$$(cI - \alpha_i\Delta)\,s_i^{k+1} = c\,s_i^k - \lambda_i u_i\,(s_i^k - I) + \frac{\alpha_i}{2}\,\frac{|\nabla s_i^k|^2}{(s_i^k)^2} \quad (2.12)$$

where c > 0 is a fixed constant and $u_1 = u$, $u_2 = 1 - u$. For more details refer to [20].

In the above methods, the PC and PS models for Poisson noise are direct extensions of the CV and M–S models to non-Gaussian noise, and the main differences lie in the data fidelity terms. The constant or smooth-function approximation lacks local properties, and the experiments in [8] demonstrate that the RSF model, which uses local fitting functions, is superior to the CV and M–S models for the segmentation of intensity-inhomogeneous images. Therefore, we further investigate Poissonian image segmentation using local fitting functions and propose a variational model based on the local generalized KL divergence in the next section.

3. Segmentation model based on the local generalized KL divergence

3.1. The proposed variational segmentation model

The local intensity fitting energy in the RSF model can be seen as a local variance estimate of the image, and it is therefore based on the assumption of additive Gaussian noise. In this section, we discuss the segmentation of Poissonian images with intensity inhomogeneities. Assume that $I(x): \Omega \to \mathbb{N}$ is the observed image contaminated by Poisson noise. Similarly to the RSF model, we use two spatially varying functions $f_1(x), f_2(x)$ to approximate the local image intensities on both sides of the boundary curve C; i.e., we assume that $f_1(x)$ and $f_2(x)$ approximate the intensities of the original image $I_0(x)$ around the pixel x in the subregions $\Omega_1$ and $\Omega_2$ respectively, where $\Omega_i$ (i = 1, 2) represent the subregions inside and outside the curve C. Under these assumptions, for any pixel y in the neighborhood of the pixel x, I(y) can be regarded as a Poisson random variable with expected value $f_i(x)$ (i = 1, 2), i.e.

$$p(I(y)\mid f_i(x)) = \frac{f_i(x)^{I(y)}}{I(y)!}\,e^{-f_i(x)} \quad (3.1)$$

where the index i is chosen according to the subregion to which the pixel x belongs. The objective of segmentation is to seek an optimal curve C that makes $f_1(x), f_2(x)$ as close as possible to the local image intensities centered at x; in other words, we aim at maximizing $p(I(y)\mid f_i(x))$, which amounts to minimizing the negative log-likelihood $-\log p(I(y)\mid f_i(x))$ and (dropping terms independent of $f_i$) leads to the term

$$KL(I(y), f_i(x)) = I(y)\ln\frac{I(y)}{f_i(x)} + f_i(x) - I(y) \quad (3.2)$$


which is known as the generalized Kullback–Leibler (KL) divergence or Csiszár's I-divergence [21]. Therefore, for a given pixel x, the local intensity fitting energy can be defined as follows

$$E_x^P(C, f_1(x), f_2(x)) = \sum_{i=1}^{2} \lambda_i \int_{\Omega_i} H(x, y)\,KL(I(y), f_i(x))\,dy \quad (3.3)$$

where H(x, y) is a two-dimensional Gaussian kernel. The formula (3.3) can be regarded as a local KL divergence estimate for the image.

In the energy functional (3.3), the Gaussian kernel H(x, y) is chosen as the weight function, the same choice as in [8]. It satisfies the symmetry and localization properties; however, it depends only on the distance between the two pixels x and y, and ignores the intensity variation in a neighborhood. In fact, the intensity variations are dissimilar in different regions: they are large in heterogeneous regions and small in homogeneous regions. Therefore, we should take the local intensity variations into account when defining the kernel function, i.e., the new kernel should reflect the intensity variation in a neighborhood.

To this end, we begin by defining a generalized KL divergence which distinguishes heterogeneous regions from homogeneous regions fairly well. Let $I_\sigma(x) = (G_\sigma * I)(x)$, where $G_\sigma$ denotes Gaussian convolution with standard deviation $\sigma$, and define

$$d_{KL}(x) = 2\int_\Omega H(x, y)\,KL(I(y), I_\sigma(x))\,dy.$$

For random variables in the form of the generalized KL divergence we have the following conclusion.

Proposition 3.1. Let $I_f$ be a Poisson random variable (r.v.) with expected value f, and consider the following function of $I_f$:

$$J(I_f) = 2\left(I_f \ln\frac{I_f}{f} + f - I_f\right). \quad (3.4)$$

Then the following estimate of the expected value of $J(I_f)$ holds true for large f:

$$E\{J(I_f)\} = 1 + O\left(\frac{1}{f}\right). \quad (3.5)$$

This result has appeared in several papers such as [22,23], and therefore we omit the proof here.

Next, consider the function

$$l_z(s) = 2\left(z\ln\frac{z}{s} + s - z\right),$$

which is monotone increasing for s > z and decreasing for s < z; besides, $l_z(z) = 0$. Therefore, the value of $l_z(s)$ becomes very large when s is far from z. We then infer the following: in heterogeneous regions, where the intensities change sharply, the smoothed image $I_\sigma(x)$ is far from I(y), and therefore the distance measure $d_{KL}(x)$ will be much larger than 1; in homogeneous regions, $I_\sigma(x)$ is close to the original intensity $I_0(y)$, and hence $d_{KL}(x)$ is around 1 according to Proposition 3.1. From the above discussion we conclude that the value of $d_{KL}(x)$ reflects the local region properties of the image: $d_{KL}(x)$ is large in a heterogeneous region and small in a homogeneous region.

In this study, we employ the generalized KL divergence $d_{KL}(x)$ to measure the local intensity variations quantitatively. The basic criterion is as follows: the value of the fitting energy defined by (3.3) should be decreased in heterogeneous regions and increased in homogeneous regions. Therefore, we define the local intensity fitting energy with an adaptive kernel as

$$\tilde{E}_x^P(C, f_1(x), f_2(x)) = \sum_{i=1}^{2} \lambda_i \int_{\Omega_i} \tilde{H}(x, y)\,KL(I(y), f_i(x))\,dy \quad (3.6)$$

where

$$\tilde{H}(x, y) = H(x, y)\cdot\frac{1}{g(d_{KL}(x))}. \quad (3.7)$$

In the above formula, the function g(s) should be increasing with respect to $s \in \mathbb{R}^+$. The new kernel implies that a tighter fit of $f_i(x)$ to I(y) is required in homogeneous regions, whereas a looser approximation is more suitable in heterogeneous regions. In this study we choose g(s) as follows: let $d_{\min} = \min_{x\in\Omega} d_{KL}(x)$ and $d_{\max} = \max_{x\in\Omega} d_{KL}(x)$; then

$$g(s) = \frac{s - d_{\min}}{d_{\max} - d_{\min}}\times\omega + 1 \quad (3.8)$$

where $\omega > 0$ is a scale parameter. By minimizing the integral of the new energy (3.6) over Ω while penalizing the length of the boundary curve C, we define the following minimization problem for Poissonian image segmentation:

$$\min_{C, f_1, f_2} \tilde{E}_0^P(C, f_1, f_2) = \int_\Omega \tilde{E}_x^P\,dx + \nu\cdot\mathrm{length}(C). \quad (3.9)$$

In this study, we use the convex relaxation technique proposed in [25] to transform the energy functional of (3.9) into a convex function of a fuzzy membership function. Concretely, by introducing the fuzzy membership function $u(x): \Omega \to [0, 1]$, the energy functional in (3.9) can be reformulated as

$$\tilde{E}^P(u, f_1, f_2) = \lambda_1\int_\Omega\left(\int_\Omega \tilde{H}(y, x)\,KL(I(x), f_1(y))\,dy\right)u(x)\,dx + \lambda_2\int_\Omega\left(\int_\Omega \tilde{H}(y, x)\,KL(I(x), f_2(y))\,dy\right)(1 - u(x))\,dx + \nu\int_\Omega |Du|\,dx \quad (3.10)$$

where

$$\int_\Omega |Du|\,dx = \sup\left\{\int_\Omega u\,\mathrm{div}\,\varphi\,dx : \varphi \in (C_0^\infty(\Omega))^2,\ \|\varphi\|_\infty \le 1\right\}$$

is the BV-seminorm. Let $BV_{[\epsilon, 1-\epsilon]}(\Omega) = \{u \mid u \in BV(\Omega),\ \epsilon \le u \le 1-\epsilon\}$, where $\epsilon \approx 0$ is a positive constant and $BV(\Omega)$ denotes the space of functions of bounded variation, i.e. $u \in BV(\Omega)$ iff $u \in L^1(\Omega)$ and $\int_\Omega |Du|\,dx$ is finite. Define the set $X = \{(u, f_1, f_2) \mid u \in BV_{[\epsilon, 1-\epsilon]},\ f_1, f_2 \in L^1(\Omega)\}$; then the variational segmentation model (3.9) can be reformulated as

$$\min_{(u, f_1, f_2)\in X} \tilde{E}^P(u, f_1, f_2). \quad (3.11)$$
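The paper's experiments are implemented in MATLAB; purely as an illustrative check (not the authors' code), the following NumPy sketch evaluates the generalized KL divergence (3.2) and verifies Proposition 3.1 empirically: for a large mean f, the sample mean of $J(I_f)$ in (3.4) should be close to 1. The sample size, seed and tolerance are arbitrary choices.

```python
import numpy as np

def gen_kl(a, f):
    """Generalized KL divergence KL(a, f) = a*ln(a/f) + f - a (eq. (3.2)),
    with the convention 0*ln(0) = 0; nonnegative, and zero iff a == f."""
    a = np.asarray(a, dtype=float)
    logterm = np.where(a > 0, a * np.log(np.maximum(a, 1e-300) / f), 0.0)
    return logterm + f - a

rng = np.random.default_rng(1)
f = 200.0
samples = rng.poisson(f, size=200_000)   # Poisson r.v. I_f with expected value f
J = 2.0 * gen_kl(samples, f)             # J(I_f) as in (3.4)
mean_J = J.mean()                        # Proposition 3.1: 1 + O(1/f)
```

With f = 200 the O(1/f) correction is well below one percent, so `mean_J` lands near 1; this is the sense in which $d_{KL}(x) \approx 1$ in homogeneous regions.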

3.2. The analysis of the proposed model

In this subsection, we first address the existence of solutions of problem (3.11). For convenience of the following discussion, we use the modified kernel function

$$\tilde{H}_1(y, x) = \begin{cases} \tilde{H}(y, x), & |x - y| \le L, \\ \epsilon_0, & \text{else,} \end{cases}$$

instead of the kernel function (3.7). Here L and $\epsilon_0$ are positive constants with $0 < \epsilon_0 \le \min_{|x-y|\le L}\tilde{H}(y, x)$. Based on the modified kernel function we have

$$\int_\Omega \tilde{H}_1(y, x)\,KL(I(x), f_i(y))\,dy \ge \epsilon_0\int_\Omega KL(I(x), f_i(y))\,dy, \quad \text{for } i = 1, 2. \quad (3.12)$$

Then the existence of the minimizer can be stated as follows.

Theorem 3.2. Assume that $1 \le I(x) \le I_{\max}$ for any $x \in \Omega$, where $I_{\max} > 0$ is a constant, and that Ω is bounded. Then the minimization problem (3.11) has at least one solution.

Proof. Assume that $\{u^k, f_1^k, f_2^k\}$ is a minimizing sequence of the energy functional $\tilde{E}^P$ in (3.11), and let $\{u^0, f_1^0, f_2^0\} = (\epsilon, 1, 1)$. It is obvious that $\tilde{E}^P(u^0, f_1^0, f_2^0) < +\infty$, so the sequence $\{\tilde{E}^k = \tilde{E}^P(u^k, f_1^k, f_2^k)\}$ is bounded, and hence there exists a constant M > 0 satisfying $\tilde{E}^P(u^k, f_1^k, f_2^k) \le M$.

Since each term of $\tilde{E}^P$ is positive, each term is also bounded by M. Due to $0 \le u \le 1$ we have $\|u\|_{L^1(\Omega)} = \int_\Omega u(x)\,dx \le |\Omega|$ and $\int_\Omega |Du|\,dx \le M/\nu$. As a result, $\{u^k\}$ is bounded in $BV(\Omega)$, and hence there exists a subsequence $\{u^{k_l}\}$ (still denoted by $\{u^k\}$ in the following) which converges weakly in $L^2(\Omega)$ to some $\tilde{u} \in L^2(\Omega)$ [28, Theorem 2.6], and $\{Du^{k_l}\}$ converges weakly as a measure to $D\tilde{u}$ [28, Lemma 2.5]. Since the functional $\int_\Omega |Du|\,dx$ is weakly lower semicontinuous with respect to the $L^2(\Omega)$ topology [28, Theorem 2.3], we have

$$\int_\Omega |D\tilde{u}|\,dx \le \liminf_{k\to\infty}\int_\Omega |Du^k|\,dx, \quad (3.13)$$

and hence $\tilde{u} \in BV_{[\epsilon, 1-\epsilon]}$.

Next consider $\{f_1^k\}$. Let $\varphi_k(x) = \int_\Omega \tilde{H}_1(y, x)\,KL(I(x), f_1^k(y))\,dy$. Since $u^k \ge \epsilon$, from (3.12) we obtain

$$\int_\Omega\left(\int_\Omega KL(I(x), f_1^k(y))\,dy\right)dx \le \frac{1}{\epsilon_0}\int_\Omega \varphi_k(x)\,dx \le \frac{1}{\epsilon_0\epsilon}\int_\Omega \varphi_k(x)\,u^k(x)\,dx \le \frac{M}{\lambda_1\epsilon_0\epsilon}. \quad (3.14)$$

Let $Q(s) = s - I_{\max}\ln s$. There exist constants $C_0 > 1$ and $0 < \alpha < 1$ such that

$$Q(s) \ge \alpha s \quad \text{for any } s \ge C_0, \quad (3.15)$$

which implies that $KL(I(x), s) \ge \alpha s + (I(x)\ln I(x) - I(x))$ for any $s \ge C_0$ and $x \in \Omega$. Let $\Omega_{C_0} = \{y : f_1^k(y) \ge C_0\}$. Then we have

$$\int_\Omega\left(\int_{\Omega_{C_0}} KL(I(x), f_1^k(y))\,dy\right)dx \ge \int_\Omega\left(\int_{\Omega_{C_0}} \alpha f_1^k(y)\,dy\right)dx + C_1 \quad (3.16)$$

where $C_1 = |\Omega_{C_0}|\cdot\int_\Omega (I(x)\ln I(x) - I(x))\,dx$. From (3.14) and (3.16) we conclude that $\int_{\Omega_{C_0}} \alpha f_1^k(y)\,dy$ is bounded. Since $f_1^k(y) \ge 0$ for any $y \in \Omega$ ($f_1^k$ is just a smooth approximation of I(x)), we infer that $f_1^k$ is bounded in $L^1(\Omega)$, and hence there exists a subsequence $\{f_1^{k_l}\}$ (which we still denote by $\{f_1^k\}$) which converges weakly to some $\tilde{f}_1 \in L^1(\Omega)$. By weak lower semicontinuity and Fatou's lemma we have

$$\int_\Omega\left(\int_\Omega \tilde{H}_1(y, x)\,KL(I(x), \tilde{f}_1(y))\,dy\right)\tilde{u}(x)\,dx \le \liminf_{k\to\infty}\int_\Omega\left(\int_\Omega \tilde{H}_1(y, x)\,KL(I(x), f_1^k(y))\,dy\right)u^k(x)\,dx. \quad (3.17)$$

With respect to $\{f_2^k\}$ we obtain a similar conclusion. Combining (3.13) and (3.17) we have

$$\tilde{E}^P(\tilde{u}, \tilde{f}_1, \tilde{f}_2) \le \liminf_{k\to\infty}\tilde{E}^P(u^k, f_1^k, f_2^k) = \inf_{(u, f_1, f_2)\in X}\tilde{E}^P(u, f_1, f_2).$$

Therefore, $(\tilde{u}, \tilde{f}_1, \tilde{f}_2)$ is one solution of (3.11). □

Next, we illustrate the relationship between (3.9) and (3.10). Since a similar conclusion has appeared in the literature [15,25], we omit the proof and state the result directly.

Theorem 3.3. Given $f_1, f_2$, assume that $\tilde{u}$ is one solution of $\min_u \tilde{E}^P(u, f_1, f_2)$, and let $\Sigma(\rho) = \{x \in \Omega : \tilde{u}(x) > \rho\}$. Then for almost every $\rho \in [\epsilon, 1-\epsilon]$, $\partial\Sigma(\rho)$ is one solution of $\min_C \tilde{E}_0^P(C, f_1, f_2)$, where $\partial\Sigma(\rho)$ denotes the boundary curve of $\Sigma(\rho)$.

The above conclusion demonstrates that finding the boundary curve C by minimizing the energy functional $\tilde{E}_0^P$ can be transformed into computing a minimizer of the energy functional $\tilde{E}^P$. Therefore, we focus on solving the segmentation model (3.11) in the next section.

4. Fast alternating minimization algorithm for the proposed model

The energy functional in (3.11) is convex with respect to each of u, $f_1$ and $f_2$ separately. Therefore, we adopt the strategy of alternately minimizing over u, $f_1$, $f_2$ to solve (3.11). First, for fixed u, (3.11) reduces to a minimization problem with respect to $f_1, f_2$, whose minimizer is characterized by

$$\begin{cases} 0 \in \displaystyle\int_\Omega \tilde{H}(y, x)\left(1 - \frac{I(x)}{f_1(y)}\right)u(x)\,dx, \\[2mm] 0 \in \displaystyle\int_\Omega \tilde{H}(y, x)\left(1 - \frac{I(x)}{f_2(y)}\right)(1 - u(x))\,dx. \end{cases} \quad (4.1)$$

According to (4.1) we have

$$\begin{cases} f_1(y) = \dfrac{\int_\Omega \tilde{H}(y, x)\,u(x)\,I(x)\,dx}{\int_\Omega \tilde{H}(y, x)\,u(x)\,dx}, \\[3mm] f_2(y) = \dfrac{\int_\Omega \tilde{H}(y, x)\,(1 - u(x))\,I(x)\,dx}{\int_\Omega \tilde{H}(y, x)\,(1 - u(x))\,dx}. \end{cases} \quad (4.2)$$

Note that $\int_\Omega \tilde{H}(y, x)\,u(x)\,dx = \frac{1}{g(d_{KL}(y))}(h * u)(y)$, where h is the convolution operator corresponding to the Gaussian kernel H(x, y). Therefore, $f_1$ and $f_2$ can be updated by fast convolution.

Second, for fixed $f_1, f_2$, (3.11) reduces to the following minimization problem with respect to u:

$$\min_u \int_\Omega u(x)\,r(x)\,dx + \nu\int_\Omega |Du|\,dx \quad (4.3)$$


where

$$r(x) = \lambda_1\int_\Omega \tilde{H}(y, x)\,KL(I(x), f_1(y))\,dy - \lambda_2\int_\Omega \tilde{H}(y, x)\,KL(I(x), f_2(y))\,dy.$$
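Discretely, the updates (4.2) are ratios of Gaussian-weighted local averages, so they reduce to a handful of convolutions. The following NumPy sketch (not the authors' MATLAB/C implementation) illustrates this with the plain Gaussian kernel H, i.e. taking $g(d_{KL}) \equiv 1$; the test image, σ and the small constant `eps` are illustrative choices. The function r(x) expands into similar convolutions of $f_i$ and $\ln f_i$.

```python
import numpy as np

def gauss_blur(img, sigma):
    """Separable Gaussian convolution with reflecting boundaries
    (plays the role of the convolution operator h in the text)."""
    r = int(np.ceil(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode="reflect")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, tmp)

def update_f(I, u, sigma, eps=1e-8):
    """f1, f2 from (4.2): blurred weighted image over blurred weights."""
    f1 = gauss_blur(u * I, sigma) / (gauss_blur(u, sigma) + eps)
    f2 = gauss_blur((1.0 - u) * I, sigma) / (gauss_blur(1.0 - u, sigma) + eps)
    return f1, f2

# Two-region test image: left half ~50, right half ~200.
I = np.full((32, 32), 50.0)
I[:, 16:] = 200.0
u = (I > 125).astype(float)          # current membership function
f1, f2 = update_f(I, u, sigma=2.0)   # f1 ~ 200 in region 1, f2 ~ 50 in region 2
```

Away from the region boundary each fitting function simply reproduces the local region intensity, which is the behavior the alternating minimization relies on.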

This minimization problem can be solved efficiently by the split Bregman method [5]. Assume that $u \in W^{1,1}(\Omega)$; then $\int_\Omega |Du|\,dx = \int_\Omega |\nabla u|\,dx$ [26]. By introducing an auxiliary variable l, (4.3) can be rewritten as the constrained optimization problem

$$\min_{u, l}\left\{\int_\Omega u(x)\,r(x)\,dx + \nu\int_\Omega |l|\,dx \;:\; \nabla u = l\right\}. \quad (4.4)$$

The alternating split Bregman algorithm for (4.4) is defined as follows:

$$\begin{cases} u^{k+1} = \arg\min_u \displaystyle\int_\Omega u(x)\,r(x)\,dx + \frac{\theta}{2}\,\|b^k + \nabla u - l^k\|_2^2, \\[2mm] l^{k+1} = \arg\min_l\; \nu\displaystyle\int_\Omega |l|\,dx + \frac{\theta}{2}\,\|b^k + \nabla u^{k+1} - l\|_2^2, \\[2mm] b^{k+1} = b^k + \nabla u^{k+1} - l^{k+1}. \end{cases} \quad (4.5)$$

In the first sub-problem of (4.5), the minimizer $u^{k+1}$ satisfies

$$\Delta u^{k+1} = \mathrm{rhs}^k, \quad (4.6)$$

where $\mathrm{rhs}^k = \frac{1}{\theta}\,r(x) + \mathrm{div}(l^k - b^k)$. The formula (4.6) can be solved by Gauss–Seidel iteration. The solution of the second sub-problem of (4.5) is given by the shrinkage formula

$$l^{k+1} = \max\left(|\nabla u^{k+1} + b^k| - \frac{\nu}{\theta},\, 0\right)\frac{\nabla u^{k+1} + b^k}{|\nabla u^{k+1} + b^k|}. \quad (4.7)$$
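The l-update (4.7) is the standard vector shrinkage (soft-thresholding) operator, applied pixel-wise to the field $\nabla u^{k+1} + b^k$ with threshold $\nu/\theta$. A small NumPy sketch (illustrative only, with the field stored as two components):

```python
import numpy as np

def shrink(px, py, t):
    """Vector soft-thresholding (4.7): max(|v| - t, 0) * v/|v|, with 0 -> 0."""
    mag = np.sqrt(px**2 + py**2)
    scale = np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)
    return scale * px, scale * py

# |(3, 4)| = 5 is shrunk to length 4; |(0.1, 0)| < t is set to zero.
px = np.array([3.0, 0.1])
py = np.array([4.0, 0.0])
sx, sy = shrink(px, py, t=1.0)
```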

For more details of the algorithm refer to [5]. Based on the above discussion, we obtain the iterative algorithm for Poissonian image segmentation shown as Algorithm 1.

Algorithm 1. Poissonian image segmentation based on the split Bregman method.

Choose: image I, initial solution $u^0$; parameters $\lambda_1, \lambda_2 > 0$, $\sigma > 0$, $\omega > 0$, $\nu > 0$, $\theta > 0$; number of inner iterations $N_{in}^k$.
Initialization: k = 0, $u^k = u^0$.
Iteration:
  update $f_1, f_2$:
    $f_1^k(y) = \dfrac{\int_\Omega \tilde{H}(y,x)\,u^k(x)\,I(x)\,dx}{\int_\Omega \tilde{H}(y,x)\,u^k(x)\,dx}$;  $f_2^k(y) = \dfrac{\int_\Omega \tilde{H}(y,x)\,(1-u^k(x))\,I(x)\,dx}{\int_\Omega \tilde{H}(y,x)\,(1-u^k(x))\,dx}$;
  $r^k(x) = \lambda_1\int_\Omega \tilde{H}(y,x)\,KL(I(x), f_1^k(y))\,dy - \lambda_2\int_\Omega \tilde{H}(y,x)\,KL(I(x), f_2^k(y))\,dy$;
  $u^{k,0} = u^k$;
  for $i = 1 : N_{in}^k$, update $u^{k,i}$ according to (4.5); end for
  $u^{k+1} = u^{k, N_{in}^k}$; k = k + 1;
until the result becomes stable.
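To make the inner u-solve concrete, here is a toy NumPy version of the split Bregman loop (4.5)-(4.7) for problem (4.3) on a fixed, hand-built r(x). This is an illustrative sketch, not the paper's C/mex solver: Jacobi sweeps with periodic boundaries stand in for Gauss-Seidel, u is projected onto [0, 1] after each sweep, and all parameter values are arbitrary. With r negative inside a square and positive outside, minimizing (4.3) should label the square as foreground.

```python
import numpy as np

def grad(u):
    # forward differences (periodic boundaries via np.roll)
    return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

def div(px, py):
    # backward-difference divergence, the negative adjoint of grad
    return (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))

def solve_u(r, nu=0.1, theta=1.0, n_outer=100, n_sweeps=2):
    """Split Bregman sketch for min_u <u, r> + nu * TV(u), u in [0, 1]."""
    u = np.zeros_like(r)
    lx, ly = np.zeros_like(r), np.zeros_like(r)
    bx, by = np.zeros_like(r), np.zeros_like(r)
    for _ in range(n_outer):
        # u-subproblem (4.6): Laplacian(u) = r/theta + div(l - b), Jacobi sweeps
        rhs = r / theta + div(lx - bx, ly - by)
        for _ in range(n_sweeps):
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u = np.clip((nb - rhs) / 4.0, 0.0, 1.0)   # project onto [0, 1]
        # l-subproblem (4.7): vector shrinkage with threshold nu/theta
        gx, gy = grad(u)
        px, py = gx + bx, gy + by
        mag = np.maximum(np.sqrt(px**2 + py**2), 1e-12)
        scale = np.maximum(mag - nu / theta, 0.0) / mag
        lx, ly = scale * px, scale * py
        # Bregman variable update
        bx, by = px - lx, py - ly
    return u

r = np.ones((32, 32))
r[8:24, 8:24] = -1.0        # r < 0 favors u = 1 (foreground)
u = solve_u(r)
seg = u > 0.5               # threshold the membership function
acc = float((seg == (r < 0)).mean())
```

Thresholding the relaxed solution at 0.5 recovers the square, which mirrors the role of Theorem 3.3 in the full algorithm.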

5. Numerical experiments

In this section, various experiments are reported to assess the performance of the proposed algorithm. We adopt standard test images, most of which are available at http://www.engr.uconn.edu/~cmli/. In Section 5.1 the criteria for parameter selection in Algorithm 1 are investigated. In Section 5.2 we verify the advantages of choosing the adaptive kernel and of minimizing the local KL divergence (rather than the local variance) through several experiments. In Section 5.3 the proposed model is compared with the RSF model [8] and the LIF model [11]. Finally, we compare our method with the CV model [5], the PC model [19] and the PS model [20] for Poisson noise.

Our main programs are written in MATLAB, but the alternating split Bregman algorithm for solving (4.3) is implemented in C and called from MATLAB through a 'mex' interface. All the experiments are run under Windows XP and MATLAB 7.0 on a Lenovo laptop with a dual-core Intel Pentium 1.8 GHz CPU and 1 GB of memory.

5.1. The selection of the parameters

Several parameters need to be tuned in Algorithm 1: $\lambda_1, \lambda_2$ are the regularization parameters that balance the local generalized KL divergence energy and the edge length; $\sigma$ is the standard deviation of the Gaussian kernel H; $\omega$ is the scale parameter for computing the adaptive function g(·) in (3.8); and $\theta$ is the penalty parameter for solving (4.3) with the split Bregman method. The segmentation results are affected by the choices of these parameters; i.e., parameters such as $\lambda_1, \lambda_2$ and $\sigma$ determine the segmentation accuracy of the proposed model to some extent. Therefore, we need to select these parameters carefully in order to obtain satisfactory segmentation results for a given noisy image. Some principles for determining them are discussed below.

(1) The regularization parameters $\lambda_1, \lambda_2$: Their values control the approximation level between the given image and the local fitting functions $f_1, f_2$; thus we choose them according to the noise level of the given image. In the following experiments, Poisson noise is added to the original image by applying the MATLAB routine 'poissrnd', i.e.

$$I(x) = \mathrm{poissrnd}(I_0(x)/\max(I_0) * Q). \quad (5.1)$$
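As a rough NumPy equivalent of (5.1) — `np.random.poisson` playing the role of MATLAB's `poissrnd`; the ramp image and the two Q values are illustrative:

```python
import numpy as np

def add_poisson_noise(I0, Q, seed=0):
    """Simulate (5.1): scale the clean image I0 so its peak is Q, then
    draw each pixel from a Poisson distribution with that mean."""
    rng = np.random.default_rng(seed)
    lam = I0 / I0.max() * Q       # the peak value Q sets the noise level
    return rng.poisson(lam).astype(float), lam

I0 = np.tile(np.linspace(50.0, 255.0, 64), (64, 1))   # synthetic ramp image
noisy25, lam25 = add_poisson_noise(I0, Q=25)
noisy250, lam250 = add_poisson_noise(I0, Q=250)
# relative deviation from the clean scaled image shrinks as Q grows (~1/sqrt(Q))
rel25 = np.abs(noisy25 - lam25).mean() / 25.0
rel250 = np.abs(noisy250 - lam250).mean() / 250.0
```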

In the formula (5.1), the scale parameter Q determines the noise level: a large Q means low noise, and therefore $\lambda_1, \lambda_2$ should be chosen large, whereas a small Q means high noise, and hence small values are more suitable for the regularization parameters. Moreover, we set $\lambda_1 = \lambda_2 = \lambda$ for convenience in the following experiments.

(2) The parameter setting of the adaptive kernel: In the formula (3.7), the Gaussian kernel H can be truncated to an r × r mask, where r is the smallest odd number no less than $4\sigma$. The values of the parameters $\sigma$ and $\omega$ have some influence on the segmentation result: a smaller $\sigma$ or $\omega$ may produce a more accurate location of the boundary curve, whereas the method is more robust to noise and initialization when a larger $\sigma$ or $\omega$ is chosen. Therefore, we adjust these parameters to obtain better segmentation results in the following experiments. Besides, the Gaussian kernel H, which corresponds to $G_\sigma$, is also used for the computation of $d_{KL}$ in (3.7).

(3) The implementation details of the split Bregman method for solving (4.3): In Algorithm 1, we set $N_{in}^k \equiv 4$. Let $e^k = \|u^{k+1} - u^k\|_2^2 / \|u^k\|_2^2$. If $|e^{k+1} - e^k| < 10^{-6}$, we also stop the inner iteration. Besides, we choose the penalty parameter $\theta = 100$ and $\epsilon = 10^{-9}$ for the following tests.

(4) Initial solution $u^0$: The initial setting determines the number of iterations of the segmentation algorithm, and hence affects the efficiency and sensitivity of the proposed algorithm. In the following, we adopt two simple schemes for generating the initial segmentation. The first is the threshold method, i.e.

u0(x) = 1 if I(x) > T, and u0(x) = 0 otherwise.
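A minimal NumPy sketch of this threshold initialization, with the threshold T taken as a multiple of the image median as described in the text (the function name is ours):

```python
import numpy as np

def threshold_init(I, L=1.2):
    """Initial label function: u0(x) = 1 if I(x) > T, else 0,
    with T = L * median(I) as suggested in Section 5.1."""
    T = L * np.median(I)
    return (I > T).astype(float)
```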

Here T is a threshold value. In our experiments, we set T = L · Med(I(x)), where Med(I(x)) denotes the median value of the

D.-Q. Chen et al. / Optik 125 (2014) 1507–1516
Fig. 1. The segmentation of an MR image of the bladder: (a) the result generated by the Gaussian kernel; (b) the result generated by the adaptive kernel with γ = 1.0; and (c) the values of g(dKL) defined by (3.8).

given image, and the scale parameter L is set to a constant such as 1.2. The second is to use the scale transform, i.e. u0(x) = I(x)/max(I(x)). As discussed above, three parameters σ, γ and μ need to be well chosen in our method. Among them, the experimental values σ = 3.0, γ = 1.0 are applied to almost all examples in our experiments except where specially stated.

5.2. The performance of the proposed model

In this subsection, we illustrate the advantages of the adaptive kernel and the local generalized KL divergence used in the proposed model through several numerical examples. The threshold method (see Section 5.1) is used for the initial configuration. Several real and synthetic images are tested in the following experiments.

Fig. 1 shows the segmentation results of an MR image of the bladder with intensity inhomogeneity. Poisson noise with Q = 250 is added to the original image by formula (5.1). We choose σ = 5.0 and μ = 10.0 for this example. The results generated by the proposed algorithm without and with the adaptive kernel are shown in Fig. 1(a) and (b), respectively. The values of g(dKL) defined by (3.8) are shown as grayscale in Fig. 1(c). The light gray regions refer to large values of g(dKL), whereas dark gray belongs to zones where g(dKL) is small. Note that g(dKL) is larger in the heterogeneous regions, which means that a stronger regularization of the active contour determined by the label function uk is required in the heterogeneous portion of the image. As a result, the active contour generated by the model with an adaptive kernel is more robust to the heterogeneity and noise. This improves the segmentation results, and we observe that the result in Fig. 1(b) is better than that in Fig. 1(a). For a clearer comparison, we give zoomed versions of certain parts of the segmentation results in Fig. 2. It is observed that some noise spots appear in Fig. 2(a) but not in Fig. 2(b), which verifies the above analysis of the adaptive kernel.

In Fig. 3, we present the segmentation results of a synthetic image. The original image is contaminated by Poisson noise with Q = 150; σ = 5.0 and μ = 7.0 are used for this example. The first row of Fig. 3 displays the iterative results generated by the proposed algorithm with γ = 0 in (3.8), i.e. with just the Gaussian kernel H. The second row shows the results generated by the proposed algorithm with γ = 2.5. We observe that the proposed method with an

Fig. 2. Zoomed versions of certain parts of the segmentation results: (a) the detail in the ramp region corresponding to Fig. 1(a); and (b) the detail in the ramp region corresponding to Fig. 1(b).

Fig. 3. The contours evolution along the iterations: (a) iter = 2; (b) iter = 4; (c) iter = 7; (d) iter = 2; (e) iter = 3; and (f) iter = 4.

adaptive kernel can achieve a reasonable result more quickly than that without the adaptive kernel. This can be explained as follows: smaller values of g(dKL) (which mean a better approximation) in the homogeneous regions promote the disappearance of the active contour determined by the label function uk there, and hence accelerate the convergence of the active contour to the object boundary.

Next we discuss the impact of the local generalized KL divergence. In Fig. 4, the segmentation results of blood vessel images with different noise levels are reported. The first column displays the noisy images polluted by Poisson noise with Q = 150, 250, 400 in sequence. We use the second scheme (see Section 5.1) to obtain the initial u0 and choose σ = 7.0 in this example. The second column displays the segmentation results of the noisy images in the first column, generated by the proposed method using the local variance instead of the local generalized KL divergence. The regularization parameter μ is set to 2.5, 3.0, 4.0 in sequence. The results of the proposed algorithm are presented in the third column; the corresponding parameter μ is chosen as 2.0, 2.5, 3.5 in sequence. We observe that the results generated with the local generalized KL divergence outperform those produced with the local variance. This is especially obvious in the left branches of the blood vessels in Fig. 4.

5.3. Comparison with the RSF and LIF models

In this subsection, we compare the proposed algorithm with the RSF model¹ [8] and the LIF model² [11], which use a Gaussian prior. The RSF and LIF models are sensitive to the initial contours and to noise. The initial contours of the noisy images used for the RSF and LIF models are shown in the first columns of Figs. 5 and 6. These initializations are suitable, and most of them are the same as those provided by the authors in [8]. In Fig. 5, three test images are contaminated by Poisson noise with Q = 150, 250, 150 respectively. The results of the RSF and LIF models are displayed in the second and third columns respectively. For the RSF model, we set the regularization parameter ν = 130.1, 390.2, 65.0 and the time step Δt = 0.05, 0.01, 0.1 for the three Poissonian images respectively; besides, the penalty parameter μ,

¹ http://www.engr.uconn.edu/~cmli/code/RSF_v0_v0.1.rar
² http://www.comp.polyu.edu.hk/~cslzhang/code/LIF.zip

Fig. 5. The segmentation results obtained by the RSF model, the LIF model and the proposed algorithm. Column 1: the noisy images and initial contours; columns 2 and 3 show the results generated by the RSF model and LIF model respectively; column 4 shows the results of the proposed algorithm.

Fig. 4. The segmentation results of blood vessel images. Each row displays the segmentation results of the images with different noise levels. The images with Q = 150, 250, 400 are shown in column 1; the results produced by the local variance are shown in column 2; the results produced by the local generalized KL divergence are shown in column 3.

the fidelity parameter λ1 = λ2 and the standard deviation σ of the Gaussian kernel function H(x, y) are chosen to be 1.0, 1.0 and 3.0 respectively. For the LIF model, the time step Δt is set to 0.025, and the standard deviation ς of the Gaussian kernel used to regularize φ is chosen to be 0.9, 0.9, 1.2 respectively; moreover, the standard deviations σ = 3.0, 3.0, 5.0 are used for the three tests. The fourth column exhibits the segmentation results generated by the proposed algorithm. We set the regularization parameter μ = 5.0, 25.0, 8.0 for the three test images respectively, and choose σ = 7.0 for the third image. From Fig. 5 we observe that our algorithm is more robust to the noise and is able to obtain a more exact object boundary than the other two models. This is because the generalized KL divergence follows the statistics of the Poisson variable.

Fig. 6 shows the segmentation results of a brain MR image under Poisson noise. The standard deviation σ = 5.0 is chosen for the three methods. For the RSF model, we set Δt = 0.1, ν = 780.3, λ1 = 1.0, λ2 = 2.0 and μ = 1.0. For the LIF model, Δt = 0.025 and ς = 1.1 are used. For the proposed algorithm we choose μ = 10.0. The noisy image is shown in Fig. 6(a), and the segmentation results of the different methods are presented in Fig. 6(b)–(d). In order to make the comparison clearer, we zoom into certain regions of the results in Fig. 6(f)–(h). Besides, the original image region and the boundary

curve obtained by the RSF model are shown in Fig. 6(i) and (j). Compared with the RSF and LIF models, we observe that the shape of the white matter boundary extracted by the proposed method is closer to that extracted from the noise-free image. Concretely, parts of the white matter are excluded from the final segmented white matter region and some mistaken segmentation points exist in Fig. 6(f), whereas parts of the gray matter are included in the final segmented object in Fig. 6(g). The boundary curve generated by the proposed method falls between these two extremes. Therefore, our method produces the boundary curve more accurately than the other two models.

The iteration counts (outer iterations for our method) and the CPU times for Figs. 5 and 6 are compared in Table 1, which also lists the sizes of the test images. From the table we observe that our

Fig. 6. Comparison of the three methods on the segmentation of a brain MR image. Row 2 shows zoomed versions of certain parts of the corresponding images in Row 1: (a) the noisy image and the initial contour; (b) and (c) the results generated by the RSF model and LIF model respectively; (d) the result of the proposed algorithm; (i) certain parts of the noise-free image; and (j) the segmentation result of the original image generated by the RSF model.

Table 1
Number of iterations and CPU times (in seconds) of different methods.

Image   Size (pixels)   RSF                  LIF                  Algorithm 1
                        Iter.    Time (s)    Iter.    Time (s)    Iter.    Time (s)
1       127 × 96        550      20.71       450      8.63        3        0.44
2       103 × 131       800      28.97       300      6.42        5        0.58
3       271 × 282       360      100.90      680      172.10      6        3.62
4       119 × 78        950      36.08       350      10.75       5        0.54
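The iteration counts above are controlled by relative-change stopping rules of the kind described in Section 5.1; a NumPy sketch of such a test (the function names are ours):

```python
import numpy as np

def relative_change(u_new, u_old):
    """e^k = ||u^{k+1} - u^k||_2^2 / ||u^k||_2^2 (see Section 5.1, item (3))."""
    return np.sum((u_new - u_old) ** 2) / max(np.sum(u_old ** 2), 1e-12)

def inner_converged(e_prev, e_curr, tol=1e-6):
    """Stop the inner split Bregman loop when |e^{k+1} - e^k| < tol."""
    return abs(e_curr - e_prev) < tol
```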

Table 2
Number of iterations and CPU times (in seconds) of different methods.

Image   Size (pixels)   PS model             Algorithm 1
                        Iter.    Time (s)    Iter.    Time (s)
1       256 × 256       5        3.73        5        3.45
2       206 × 206       6        7.56        8        2.76
3       79 × 75         12       1.36        11       1.06
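The efficiency of the proposed model rests on evaluating the local fitting functions by convolution with the truncated Gaussian mask (Section 5.1), rather than by an inner iteration. A pure-NumPy sketch, assuming the common local-fitting form fi = H ∗ (mi·I) / H ∗ mi with memberships m1 = u, m2 = 1 − u (the function names and this exact form are our assumptions; the paper's update may differ in detail):

```python
import numpy as np

def gaussian_mask(sigma):
    """Truncate the Gaussian kernel H to an r x r mask, with r the
    smallest odd number no less than 4*sigma (Section 5.1)."""
    r = int(np.ceil(4 * sigma))
    if r % 2 == 0:
        r += 1
    ax = np.arange(r) - r // 2
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    H = np.outer(g, g)
    return H / H.sum()

def conv2_same(A, H):
    """Zero-padded 'same'-size 2-D convolution via the FFT."""
    s0, s1 = A.shape[0] + H.shape[0] - 1, A.shape[1] + H.shape[1] - 1
    F = np.fft.rfft2(A, (s0, s1)) * np.fft.rfft2(H, (s0, s1))
    full = np.fft.irfft2(F, (s0, s1))
    r0, r1 = H.shape[0] // 2, H.shape[1] // 2
    return full[r0:r0 + A.shape[0], r1:r1 + A.shape[1]]

def local_fits(I, u, sigma, eps=1e-9):
    """Local fitting functions f1, f2: Gaussian-weighted local means of I
    inside (u) and outside (1 - u) the current region. Each needs only a
    pair of convolutions, hence the low cost relative to iterative
    schemes such as (2.12)."""
    H = gaussian_mask(sigma)
    f1 = conv2_same(u * I, H) / (conv2_same(u, H) + eps)
    f2 = conv2_same((1.0 - u) * I, H) / (conv2_same(1.0 - u, H) + eps)
    return f1, f2
```

For a two-region image whose label u matches the bright region, the fitted values deep inside each region recover the region means.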

Fig. 7. The noisy images used for the segmentation tests.

algorithm is much faster than the RSF and LIF models. Therefore, the proposed algorithm outperforms the other two methods in both segmentation quality and efficiency.

5.4. Comparison with the PC and PS models

In this subsection, we compare the proposed model with the CV model³ [5] and with the PC [19] and PS [20] models for Poisson noise. For all these models, the label function u is solved by the alternating split Bregman algorithm, with the same parameter settings as in [5]. We use the scale transform (see Section 5.1) to initialize the label function u and set the penalty parameter λ = 100 for the alternating split Bregman algorithm. Besides, we set μ1 = μ2 = μ in these methods, and α1 = α2 = α for the PS model (see Section 2.4). The stopping criterion for the iteration scheme (2.12) is

||s_i^{k+1} − s_i^k||_2^2 / ||s_i^{k+1}||_2^2 < 0.001.

Three images are used for our experiments. Poisson noise with Q = 150, 100, 450 is added to the original images, and the noisy versions are shown in Fig. 7. The first is the rice image: the lower part of the image is darker, and the intensities of several rice grains there are even lower than those of the background in the upper part. The second is the tomato image, whose intensity distribution is similar to that of the rice image. The third is a synthetic image in which intensity inhomogeneity also appears.

In Fig. 8, the segmentation results of the three test images are presented. For the CV model we set the fidelity parameter μ = 16, 15, 4 respectively; for the PC model under Poisson noise we choose μ = 12, 15, 4 correspondingly; for the PS model, we set μ = 8, 10, 7, c = μ and α/μ = 30, 30, 10 respectively; for the proposed algorithm, μ = 8, 10, 10 are chosen, and σ = 7.0 is used for the first two images. From the segmentation results we observe that the previous PC models fail to extract the object boundary correctly due to the intensity inhomogeneities of the images. Our method, by contrast, utilizes the local information of the images and therefore overcomes this drawback. It is also noted that our method outperforms the PS model in terms of segmentation accuracy, because the spatially varying functions fi(x) used in our model have better local properties than the piecewise smooth functions si(x) in the PS model, which are only restricted by a global regularization term.

Table 2 compares the iteration counts and the CPU times of the proposed algorithm and the PS model for Poisson noise. These values demonstrate that our model is also superior to the PS model in terms of computational efficiency. This is because fi(x) in our model can be computed by fast convolution, whereas si(x) in the PS model must be solved by the iteration formula (2.12).

³ http://www.cs.cityu.edu.hk/~xbresson/ucla/codes/Fast_Min_Active_Contour_Split_Bregman.zip

Fig. 8. The segmentation results obtained by two PC models, PS model and the proposed algorithm. Columns 1 and 2 show the results generated by the CV model and PC model for Poisson noise; column 3 shows the results generated by the PS model for Poisson noise; column 4 shows the results of the proposed algorithm.
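All of the Poisson-aware data terms compared here derive from the generalized KL divergence, which for nonnegative a and b reads a·log(a/b) − a + b [21]. A pixelwise NumPy sketch (the clamping constant eps is our own guard against log(0)):

```python
import numpy as np

def gen_kl(a, b, eps=1e-12):
    """Pixelwise generalized KL divergence a*log(a/b) - a + b:
    nonnegative, and zero exactly when a == b (Csiszar [21])."""
    a = np.maximum(a, eps)
    b = np.maximum(b, eps)
    return a * np.log(a / b) - a + b
```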

6. Conclusions

In this article, a variational segmentation model that minimizes the local generalized KL divergence is proposed to segment images contaminated by Poisson noise. The proposed method is derived from the probability distribution of the Poisson random variable. Further, inspired by the statistical characteristics of the generalized KL divergence, we propose a new spatially adaptive kernel function for the estimation of the local generalized KL divergence. Because the local region property of the image is considered in the newly defined kernel, it improves the segmentation model compared with the widely used Gaussian kernel. Numerical experiments indicate that the proposed algorithm outperforms most of the current state-of-the-art methods in the segmentation of Poissonian images with intensity inhomogeneities.

Acknowledgements

We thank the anonymous reviewers for their constructive comments, which have greatly improved this manuscript. The research was supported in part by the National Natural Science Foundation of China under Grant 61072118 and Grant 11101430, and by the National University of Defense Technology under Science Research Project JC11-02-06.

References

[1] D. Mumford, J. Shah, Optimal approximation by piecewise smooth functions and associated variational problems, Commun. Pure Appl. Math. 42 (1989) 577–685.
[2] L.A. Vese, T.F. Chan, A multiphase level set framework for image segmentation using the Mumford and Shah model, Int. J. Comput. Vis. 50 (2002) 271–293.
[3] T.F. Chan, L.A. Vese, Active contours without edges, IEEE Trans. Image Process. 10 (2001) 266–277.
[4] S. Osher, R.P. Fedkiw, Level set methods: an overview and some recent results, J. Comput. Phys. 169 (2001) 463–502.
[5] T. Goldstein, X. Bresson, S. Osher, Geometric applications of the split Bregman method: segmentation and surface reconstruction, J. Sci. Comput. 45 (2010) 272–293.
[6] M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, Int. J. Comput. Vis. 1 (1988) 321–331.
[7] V. Caselles, R. Kimmel, G. Sapiro, On geodesic active contours, Int. J. Comput. Vis. 22 (1997) 61–79.
[8] C.M. Li, C. Kao, J. Gore, Z. Ding, Minimization of region-scalable fitting energy for image segmentation, IEEE Trans. Image Process. 17 (2008) 1940–1949.
[9] C.M. Li, C. Kao, J. Gore, Z. Ding, Implicit active contours driven by local binary fitting energy, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, 2007, pp. 1–7.
[10] L. Wang, C.M. Li, Q. Sun, D. Xia, C. Kao, Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation, Comput. Med. Imaging Graphics 33 (2009) 520–531.
[11] K.H. Zhang, H.H. Song, L. Zhang, Active contours driven by local image fitting energy, Pattern Recognit. 43 (2010) 1199–1206.
[12] H.K. Zhao, T. Chan, B. Merriman, S. Osher, A variational level set approach to multiphase motion, J. Comput. Phys. 127 (1996) 179–195.
[13] C.M. Li, C. Xu, C. Gui, M.D. Fox, Distance regularized level set evolution and its application to image segmentation, IEEE Trans. Image Process. 19 (2010) 3243–3254.
[14] C.M. Li, C. Xu, C. Gui, M.D. Fox, Level set evolution without re-initialization: a new variational formulation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, 2005, pp. 430–436.
[15] X. Bresson, S. Esedoglu, P. Vandergheynst, J. Thiran, S. Osher, Fast global minimization of the active contour/snake model, J. Math. Imaging Vis. 28 (2007) 151–167.
[16] D.Q. Chen, H. Zhang, L.Z. Cheng, A fast fixed point algorithm for total variation deblurring and segmentation, J. Math. Imaging Vis. 43 (2012) 167–179.
[17] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vis. 40 (2011) 120–145.
[18] T.M. Le, L.A. Vese, Additive and multiplicative piecewise-smooth segmentation models in a functional minimization approach, Interpolation Theory Appl. Contemp. Math. 445 (2007) 207–224.
[19] Y.T. Lee, T.M. Le, Active contour without edges for multiphase segmentations with the presence of Poisson noise, UCLA CAM Report 11-46, 2011.
[20] A. Sawatzky, D. Tenbrinck, X. Jiang, M. Burger, A variational framework for region-based segmentation incorporating physical noise models, UCLA CAM Report 11-81, 2011.
[21] I. Csiszár, Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems, Ann. Stat. 19 (1991) 2032–2066.
[22] D.Q. Chen, L.Z. Cheng, Spatially adapted regularization parameter selection based on the local discrepancy function for Poissonian image deblurring, Inverse Probl. 28 (2012) 015004.
[23] R. Zanella, P. Boccacci, L. Zanni, M. Bertero, Efficient gradient projection methods for edge-preserving removal of Poisson noise, Inverse Probl. 25 (2009) 045010.
[25] T.F. Chan, S. Esedoglu, M. Nikolova, Algorithms for finding global minimizers of image segmentation and denoising models, SIAM J. Appl. Math. 66 (2006) 1632–1648.
[26] T.F. Chan, J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods, SIAM, 2005.

Dai-Qiang Chen received the B.S. degree in mathematics from Wuhan University, Wuhan, China, in 2005, the M.S. degree from the National University of Defense Technology, Changsha, China, in 2008, and the Ph.D. degree from the National University of Defense Technology in 2012. He is currently a lecturer at the Third Military Medical University. His research interests include statistical approaches in signal and image processing, inverse problems in image processing, wavelet analysis and nonparametric statistical techniques.

Li-Zhi Cheng received the Ph.D. degree from the National University of Defense Technology, Changsha, China, in 2002. He is now a professor in the College of Science. His interests are the mathematical foundations of signal analysis and wavelet analysis with applications to image compression.

Xin-Peng Du received the B.S. and M.S. degrees from the National University of Defense Technology, Changsha, China, in 2007 and 2010. He is currently working toward the Ph.D. degree in the College of Science. His interests are sparse representations in signal and image processing, convex analysis and optimization theory.
