Some Variational Problems from Image Processing

John B. Garnett∗†‡    Triet M. Le§    Luminita A. Vese¶

UCLA CAM Report 09-85, November 9, 2009

Abstract. We consider in this paper a class of variational models, introduced in [17] (see also [10]) for image decomposition into cartoon and texture, of the form
$$\inf_u \Big\{ \|u\|_{BV} + \lambda \|K*(f-u)\|_{L^p}^q \Big\},$$
where $K$ is a real analytic integration kernel. We analyze and characterize the extremals of these functionals and list some of their properties.

1 Introduction and Motivations

A variational model for decomposing a given image-function $f$ into $u + v$ can be given by
$$\inf_{(u,v)\in X_1\times X_2} \Big\{ F_1(u) + \lambda F_2(v) : f = u + v \Big\},$$
where $F_1, F_2 \ge 0$ are functionals and $X_1, X_2$ are function spaces such that $F_1(u) < \infty$ and $F_2(v) < \infty$ if and only if $(u,v) \in X_1 \times X_2$. The constant $\lambda > 0$ is a tuning (scale) parameter. A good model is given by a choice of $X_1$ and $X_2$ so that, with the given desired properties of $u$ and $v$, we have $F_1(u) > F_2(v)$. The decomposition model is equivalent to
$$\inf_{u\in X_1} \Big\{ F_1(u) + \lambda F_2(f-u) \Big\}.$$

In this work we are interested in the analysis of a class of variational BV models arising in the decomposition of an image function $f$ into a cartoon or BV component and a texture or oscillatory component. This topic has been of much interest in recent years. We first recall the definition of BV functions.

∗ The first named author thanks the CRM Barcelona for support during its Research Program on Harmonic Analysis, Geometric Measure Theory and Quasiconformal Mappings, where this paper was presented and where part of its research was undertaken.
† This work was supported in part by the National Science Foundation (Grant NSF DMS 071495).
‡ Department of Mathematics, University of California, Los Angeles, 405 Hilgard Ave., Los Angeles, CA 90095-1555, USA ([email protected]).
§ Department of Mathematics, Yale University, New Haven, CT 06500, USA ([email protected]).
¶ Department of Mathematics, University of California, Los Angeles, 405 Hilgard Ave., Los Angeles, CA 90095-1555, USA ([email protected]).


Definition 1. Let $u \in L^1_{loc}(\mathbb{R}^d)$ be real. We say $u \in BV$ if
$$\sup\Big\{\int_{\mathbb{R}^d} u\,\mathrm{div}\,\varphi\,dx \;:\; \varphi \in C_0^1(\mathbb{R}^d,\mathbb{R}^d),\ \sup_x |\varphi(x)| \le 1\Big\} = \|u\|_{BV} < \infty.$$
If $u \in BV$ there is an $\mathbb{R}^d$-valued measure $\vec{\mu}$ such that $\frac{\partial u}{\partial x_j} = (\vec{\mu})_j$ as distributions, a positive measure $\mu$, and a Borel function $\vec{\rho} : \mathbb{R}^d \to S^{d-1}$ such that
$$Du = \vec{\mu} = \vec{\rho}\,\mu \quad\text{and}\quad \|u\|_{BV} = \int d\mu$$
(see Evans-Gariepy [16], for example).
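In the discrete setting the BV seminorm is commonly approximated by summed finite differences; for an axis-aligned indicator set the anisotropic version recovers the perimeter exactly. A small numerical illustration (ours, not from the paper):

```python
import numpy as np

def tv_aniso(u):
    # anisotropic discrete total variation: sum of absolute finite differences
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

# indicator of a 10x10 square: its BV seminorm is the perimeter, 4 * 10 = 40
u = np.zeros((64, 64))
u[10:20, 10:20] = 1.0
print(tv_aniso(u))  # → 40.0
```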

1.1 History

Assume $f \in L^2(\mathbb{R}^d)$, $f$ real. We list here several variational BV models that have been proposed as image decomposition models. Rudin-Osher-Fatemi [23] (1992) proposed the minimization
$$\inf_{u\in BV}\Big\{\|u\|_{BV} + \lambda \int |f-u|^2\,dx\Big\}.$$

In this model we call $u$ a "cartoon" component and $f - u$ a "noise + texture" component of $f$, with $f = u + v$. Note that there exists a unique minimizer $u$ by the strict convexity of the functional. A limitation of this model is illustrated by the following example [21, 13]: let $f = \alpha\chi_D$, $d = 2$, with $D$ a disk centered at the origin and of radius $R$. If $\lambda R \ge 1/\alpha$, then $u = (\alpha - (\lambda R)^{-1})\chi_D$ and $v = f - u = (\lambda R)^{-1}\chi_D$; if $\lambda R \le 1/\alpha$, then $u = 0$. Thus, although $f \in BV$ is without texture or noise, we do not have $u = f$.

Chan-Esedoglu [12] (2005) considered and analyzed the minimization (see also Alliney [5] for the discrete case)
$$\inf_{u\in BV}\Big\{\|u\|_{BV} + \lambda \int |f-u|\,dx\Big\}.$$
Minimizers of this problem exist, but they may not be unique. If $d = 2$ and $f = \chi_{B(0,R)}$, then $u = f$ if $R > 2/\lambda$ and $u = 0$ if $R < 2/\lambda$.

W. Allard [2, 3, 4] (2007) analyzed extremals of
$$\inf_{u\in BV}\Big\{\|u\|_{BV} + \lambda \int \gamma(u-f)\,dx\Big\},$$
where $\gamma(0) = 0$, $\gamma \ge 0$, $\gamma$ locally Lipschitz. Then there exist minimizers $u$, perhaps not unique, and
$$\partial^*(\{u > t\}) \in C^{1+\alpha}, \quad \alpha \in (0,1),$$
where $\partial^*$ denotes the measure theoretic boundary. Also, Allard gave mean curvature estimates on $\partial^*(\{u > t\})$.
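The explicit disk solutions quoted above for the ROF and TV-$L^1$ models can be collected in a small helper (a direct transcription of the stated formulas; the function names are ours):

```python
import numpy as np

def rof_disk(alpha, R, lam):
    # ROF minimizer for f = alpha * chi_D, D a disk of radius R (d = 2):
    # u = (alpha - 1/(lam*R)) * chi_D when lam*R >= 1/alpha, else u = 0
    return max(alpha - 1.0 / (lam * R), 0.0)

def tvl1_disk(R, lam):
    # TV-L1 minimizer for f = chi_{B(0,R)}: u = f if R > 2/lam, u = 0 if R < 2/lam
    return 1.0 if R > 2.0 / lam else 0.0

print(rof_disk(alpha=2.0, R=1.0, lam=1.0))  # → 1.0  (loss of contrast: u != f)
print(tvl1_disk(R=3.0, lam=1.0))            # → 1.0  (contrast preserved)
```

The comparison makes the qualitative difference visible: ROF always shaves the amplitude of the disk, while TV-$L^1$ keeps it intact once the radius passes the threshold $2/\lambda$.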


Y. Meyer [21] (2001), in his book Oscillating Patterns in Image Processing, analysed further the R-O-F minimization and refined these models, proposing
$$\inf_{u\in BV}\Big\{\|u\|_{BV} + \lambda\|u-f\|_X\Big\},$$
where
$$X = (W^{1,1})^* = \{\mathrm{div}\,\vec{g} : \vec{g} \in L^\infty\} = G,$$
or
$$X = \{\mathrm{div}\,\vec{g} : \vec{g} \in BMO\} = F, \quad\text{or}\quad X = \{\Delta g : g \text{ Zygmund}\} = E.$$

Inspired by the proposals of Y. Meyer, a rich literature of models has recently been developed and analyzed theoretically and computationally. We list the more relevant ones. Osher-Vese [27] (2002) proposed
$$\inf_{u,\vec{g}}\Big\{\|u\|_{BV} + \mu\|f-(u+\mathrm{div}\,\vec{g})\|_2^2 + \lambda\|\vec{g}\|_p\Big\}, \quad p \to \infty,$$
to approximate Meyer's $(BV, G)$ model and make it computationally amenable. Osher-Solé-Vese [22] proposed the minimization
$$\inf_u\Big\{\|u\|_{BV} + \lambda\|f-u\|_{H^{-1}}\Big\},$$
and later Lieu-Vese [20] generalized it to
$$\inf_u\Big\{\|u\|_{BV} + \lambda\|f-u\|_{H^{-s}}\Big\}, \quad s > 0.$$
Similarly, Le-Vese [19] (2005) approximated Meyer's $(BV, F)$ model by
$$\inf_{u,\vec{g}}\Big\{\|u\|_{BV} + \mu\|f-(u+\mathrm{div}\,\vec{g})\|_2^2 + \lambda\|\vec{g}\|_{BMO}\Big\}.$$
Aujol et al. [7, 8] addressed the original $(BV, G)$ Meyer problem and proposed an alternate method: minimize
$$\inf_u\Big\{\|u\|_{BV} + \lambda\|f-u-v\|_2^2\Big\},$$

subject to the constraint $\|v\|_G \le \mu$. Garnett-Le-Meyer-Vese [17] (2007) proposed reformulations and generalizations of Meyer's $(BV, E)$ model (see also Aujol-Chambolle [10]), given by
$$\inf_{u,g}\Big\{\|u\|_{BV} + \mu\|f-(u+\Delta g)\|_2^2 + \lambda\|g\|_{\dot{B}^{\alpha}_{p,q}}\Big\},$$
where $1 \le p, q \le \infty$, $0 < \alpha < 2$, and the exact decompositions are given by
$$\inf_u\Big\{\|u\|_{BV} + \lambda\|f-u\|_{\dot{B}^{\alpha-2}_{p,q}}\Big\}.$$
In a subsequent work, Garnett-Jones-Le-Meyer [18] proposed different formulations,
$$\inf_{u,g}\Big\{\|u\|_{BV} + \mu\|f-(u+\Delta g)\|_2^2 + \lambda\|g\|_{\dot{BMO}^{\alpha}}\Big\},$$
with $\dot{BMO}^{\alpha} = I_\alpha(BMO)$, $\|v\|_{\dot{BMO}^{\alpha}} = \|I_\alpha v\|_{BMO}$, and
$$\inf_{u,g}\Big\{\|u\|_{BV} + \mu\|f-(u+\Delta g)\|_2^2 + \lambda\|g\|_{\dot{W}^{\alpha,p}}\Big\},$$
with $\|v\|_{\dot{W}^{\alpha,p}} = \|I_\alpha v\|_p$, $0 < \alpha < 2$.

Generalizing $(BV, H^{-s})$, $(BV, \dot{B}^{\alpha}_{p,q})$, and the $TV$-Hilbert model [9], an easier cartoon + texture decomposition model can be defined using a smoothing convolution kernel $K$ (previously introduced in [17]):
$$\inf_{u\in BV}\Big\{\|u\|_{BV} + \lambda\|K*(f-u)\|_{L^p}^q\Big\}. \tag{1}$$
This can be seen as a simplified version of all the previous models.
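As a rough illustration of how (1) can be attacked numerically, here is a toy one-dimensional discretization with a Gaussian kernel, a smoothed TV term, and $p = q = 2$, minimized by plain gradient descent. This is only a sketch under those simplifying assumptions, not the authors' algorithm; the function names and parameter values are made up for the example:

```python
import numpy as np

def gaussian_kernel(t, n):
    # discrete Gaussian of width t on n points, normalized so that sum(K) = 1
    x = np.arange(n) - n // 2
    k = np.exp(-x**2 / (2.0 * t**2))
    return k / k.sum()

def energy(u, f, k, lam, eps=0.1):
    tv = np.sqrt(eps**2 + np.diff(u)**2).sum()    # smoothed TV (eps regularizes)
    r = np.convolve(f - u, k, mode='same')        # K * (f - u)
    return tv + lam * (r**2).sum()                # p = q = 2 fidelity

def minimize(f, k, lam, steps=500, lr=1e-2, eps=0.1):
    u = f.copy()
    for _ in range(steps):
        d = np.diff(u)
        w = d / np.sqrt(eps**2 + d**2)
        g_tv = np.concatenate([[-w[0]], -np.diff(w), [w[-1]]])  # grad of smoothed TV
        r = np.convolve(f - u, k, mode='same')
        g_fid = -2.0 * lam * np.convolve(r, k, mode='same')     # K is even: K^T = K
        u = u - lr * (g_tv + g_fid)
    return u
```

On a noisy step signal the descent lowers the energy and suppresses the oscillatory part, because the fidelity term only penalizes what survives the smoothing by $K$.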

2 The Variational Problems

In this paper we assume $K$ is a positive, even, bounded and real analytic kernel on $\mathbb{R}^d$ such that $\int K\,dx = 1$ and such that the map $L^p \ni u \mapsto K*u$ is injective. For example we may take $K$ to be a Gaussian or a Poisson kernel. We fix $\lambda > 0$, $1 \le p < \infty$ and $1 \le q < \infty$. For real $f(x) \in L^1$ we consider the extremal problems
$$m_{p,q,\lambda} = \inf\{\|u\|_{BV} + F_{p,q,\lambda}(f-u) : u \in BV\}, \tag{2}$$
where
$$F_{p,q,\lambda}(h) = \lambda\|K*h\|_{L^p}^q. \tag{3}$$
Since $BV \subset L^{d/(d-1)}$ and $K \in L^\infty$, a weak-star compactness argument shows that (2) has at least one minimizer $u$ (see Section 3 below for a more detailed argument). Our objective is to describe, given $f$, the set $M_{p,q,\lambda}(f)$ of minimizers $u$ of (2). The papers of Chan-Esedoglu [12] and Allard [2, 3, 4] give very precise results about the minimizers for variants of (2) without the real analytic kernel $K$, and this paper is intended to complement those works.

Remark 1. According to the definition of admissibility given in [2], the functional $F_{p,q,\lambda}$ is admissible for an appropriate choice of $K$, for instance $K$ bounded (e.g. the heat kernel $K_t$ or the Poisson kernel $P_t$ for some $t > 0$). Thus the regularity results from Section 1.5 in [2] hold for minimizers in $M_{p,q,\lambda}(f)$. On the other hand, if $K$ is not a Dirac delta function, then $F_{p,q,\lambda}$ is not local as defined in [2].

2.1 Convexity

Since the functional in (2) is convex, the set of minimizers $M_{p,q,\lambda}(f)$ is a convex subset of $BV$. If $p > 1$ or if $q > 1$, then the functional (3) is strictly convex and the problem (2) has a unique minimizer because $K*u$ determines $u$.

Lemma 1. If $p = q = 1$ and if $u_1 \in M_{p,q,\lambda}$ and $u_2 \in M_{p,q,\lambda}$, then
$$\frac{K*(f-u_1)}{|K*(f-u_1)|} = \frac{K*(f-u_2)}{|K*(f-u_2)|} \quad\text{almost everywhere}, \tag{4}$$
and
$$\vec{\rho}_k \cdot \frac{d\vec{\mu}_j}{d\mu_k} = \Big|\frac{d\vec{\mu}_j}{d\mu_k}\Big|, \quad j \ne k, \tag{5}$$
where for $j = 1, 2$, $Du_j = \vec{\mu}_j = \vec{\rho}_j\,\mu_j$ with $|\vec{\rho}_j| = 1$ and $\mu_j \ge 0$.

Proof: Since $M_{p,q,\lambda}(f)$ is a convex subset of $BV$, $\frac{u_1+u_2}{2}$ is also a minimizer. This implies

 

$$\Big\|\frac{u_1+u_2}{2}\Big\|_{BV} + \lambda\Big\|K*\Big(f-\frac{u_1+u_2}{2}\Big)\Big\|_1 = \frac{1}{2}\big[\|u_1\|_{BV} + \|u_2\|_{BV}\big] + \frac{\lambda}{2}\big[\|K*(f-u_1)\|_1 + \|K*(f-u_2)\|_1\big].$$
On the other hand, using the convexity of $\|\cdot\|_{BV}$ and $\|\cdot\|_{L^1}$, we have
$$\Big\|\frac{u_1+u_2}{2}\Big\|_{BV} \le \frac{1}{2}\big[\|u_1\|_{BV} + \|u_2\|_{BV}\big], \tag{6}$$
and
$$\Big\|K*\Big(f-\frac{u_1+u_2}{2}\Big)\Big\|_1 \le \frac{1}{2}\big[\|K*(f-u_1)\|_1 + \|K*(f-u_2)\|_1\big]. \tag{7}$$
Combining (6) and (7), we obtain
$$\Big\|K*\Big(f-\frac{u_1+u_2}{2}\Big)\Big\|_1 = \frac{1}{2}\big[\|K*(f-u_1)\|_1 + \|K*(f-u_2)\|_1\big],$$
which implies (4). Moreover,
$$\|u_1+u_2\|_{BV} = \|u_1\|_{BV} + \|u_2\|_{BV}. \tag{8}$$
For $j = 1, 2$, let $Du_j = \vec{\mu}_j = \vec{\rho}_j\,\mu_j$, with $|\vec{\rho}_j| = 1$ and $\mu_j \ge 0$. Then for $k = 1, 2$, $k \ne j$, equation (8) implies
$$\int \Big|\vec{\rho}_k + \frac{d\vec{\mu}_j}{d\mu_k}\Big|\,d\mu_k = \int d\mu_k + \int \Big|\frac{d\vec{\mu}_j}{d\mu_k}\Big|\,d\mu_k,$$
which implies (5). □
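The equality $\|u_1+u_2\|_{BV} = \|u_1\|_{BV} + \|u_2\|_{BV}$ exploited in the proof is the equality case of the triangle inequality, which forces the jump directions to align. A one-dimensional toy check (purely illustrative, with discrete TV standing in for the BV seminorm):

```python
import numpy as np

def tv(u):
    # discrete 1D total variation: sum of absolute jumps
    return np.abs(np.diff(u)).sum()

x = np.linspace(0.0, 1.0, 11)
u1 = (x >= 0.5).astype(float)        # unit jump up at x = 0.5
u2 = 2.0 * (x >= 0.5).astype(float)  # aligned jump (same location, same sign)
u3 = -(x >= 0.5).astype(float)       # anti-aligned jump (opposite sign)

print(tv(u1 + u2), tv(u1) + tv(u2))  # → 3.0 3.0  (equality: jumps aligned)
print(tv(u1 + u3), tv(u1) + tv(u3))  # → 0.0 2.0  (strict inequality)
```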

2.2 Properties of $u \in M_{p,q,\lambda}(f)$

Lemma 2. Let $f \in L^1$ and suppose $u$ is a minimizer of (2) such that $u \ne f$. Let $Du = \vec{\mu} = \vec{\rho}\,\mu$. For each real-valued $h \in BV$, write $Dh = \vec{\nu}$ and $\vec{\nu} = \frac{d\vec{\nu}}{d\mu}\mu + \vec{\nu}_s$, the Lebesgue decomposition of $\vec{\nu}$ with respect to $\mu$. Then
$$\Big|\int \vec{\rho}\cdot\frac{d\vec{\nu}}{d\mu}\,d\mu - \lambda\int h\,(K*J_{p,q})\,dx\Big| \le \|\vec{\nu}_s\|, \tag{9}$$
where
$$J_{p,q} = q\,\frac{F|F|^{p-2}}{\|F\|_p^{p-q}} \quad\text{with}\quad F = K*(f-u), \tag{10}$$
and $\|\vec{\nu}_s\|$ denotes the norm of the vector measure $\vec{\nu}_s$. Conversely, if $u \in BV$, $u \ne f$ and (9) and (10) hold, then $u \in M_{p,q,\lambda}(f)$. Note that since $u \ne f$ and $K*(f-u)$ is real analytic, $J_{p,q}$ is defined almost everywhere.

Proof: Let $|\epsilon|$ be sufficiently small. Since $u$ is extremal, we have
$$\|u+\epsilon h\|_{BV} - \|u\|_{BV} + F_{p,q,\lambda}(f-u-\epsilon h) - F_{p,q,\lambda}(f-u) \ge 0. \tag{11}$$

On the other hand, we have
$$\Big|\vec{\rho} + \epsilon\frac{d\vec{\nu}}{d\mu}\Big| = \Big(1 + 2\epsilon\,\vec{\rho}\cdot\frac{d\vec{\nu}}{d\mu} + \epsilon^2\Big|\frac{d\vec{\nu}}{d\mu}\Big|^2\Big)^{1/2} = 1 + \epsilon\,\vec{\rho}\cdot\frac{d\vec{\nu}}{d\mu} + o(|\epsilon|),$$
where in the last equality we use the estimate $(1+\alpha)^{1/2} = 1 + \frac{\alpha}{2} + o(|\alpha|)$. This implies
$$\|u+\epsilon h\|_{BV} - \|u\|_{BV} = |\epsilon|\,\|\vec{\nu}_s\| + \int\Big(\Big|\vec{\rho}+\epsilon\frac{d\vec{\nu}}{d\mu}\Big| - 1\Big)\,d\mu = |\epsilon|\,\|\vec{\nu}_s\| + \epsilon\int\vec{\rho}\cdot\frac{d\vec{\nu}}{d\mu}\,d\mu + o(|\epsilon|).$$
Moreover,
$$F_{p,q,\lambda}(f-u-\epsilon h) - F_{p,q,\lambda}(f-u) = -\epsilon\lambda\int(K*h)\,J_{p,q}\,dx + o(|\epsilon|) = -\epsilon\lambda\int h\,(K*J_{p,q})\,dx + o(|\epsilon|),$$
since $K$ is even (symmetric). By (11), we have
$$-\epsilon\Big(\int\vec{\rho}\cdot\frac{d\vec{\nu}}{d\mu}\,d\mu - \lambda\int h\,(K*J_{p,q})\,dx\Big) \le |\epsilon|\,\|\vec{\nu}_s\| + o(|\epsilon|).$$
Taking $\epsilon = \pm|\epsilon|$, and since the right hand side does not depend on the sign of $\epsilon$, we see that (9) holds. The converse holds because the functional (3) is convex. □

Following Meyer [21], we define
$$\|v\|_* = \inf\Big\{\big\||\vec{u}|\big\|_\infty : v = \sum_{j=1}^d \frac{\partial u_j}{\partial x_j}\Big\}, \quad |\vec{u}|^2 = \sum_{j=1}^d |u_j|^2,$$
and note that $\|v\|_*$ is the norm of the dual of $W^{1,1} \subset BV$, when $W^{1,1}$ is given the norm of $BV$. By the weak-star density of $W^{1,1}$ in $BV$,
$$\Big|\int h\,v\,dx\Big| \le \|h\|_{BV}\,\|v\|_* \tag{12}$$
whenever $v \in L^2$.

Remark 2. Taking $h \in BV$ in Lemma 2 such that $\vec{\nu}_s = 0$, i.e. $Dh$ absolutely continuous with respect to $Du$, (9) implies
$$\int\vec{\rho}\cdot\frac{d\vec{\nu}}{d\mu}\,d\mu - \lambda\int h\,(K*J_{p,q})\,dx = 0. \tag{13}$$
In particular, the above equation holds for any $h \in W^{1,1}$; that is,
$$\int h\,(K*J_{p,q})\,dx = \frac{1}{\lambda}\int\vec{\rho}\cdot\frac{d\vec{\nu}}{d\mu}\,d\mu. \tag{14}$$

We have the following characterization of minimizers in terms of $\|\cdot\|_*$ (following Meyer [21]).

Lemma 3. Let $u \in BV$ with $u \ne f$, and let $J_{p,q}$ be defined as in Lemma 2. Then $u$ is a minimizer for the problem (2) if and only if
$$\|K*J_{p,q}\|_* = \frac{1}{\lambda} \tag{15}$$
and
$$\int u\,(K*J_{p,q})\,dx = \frac{1}{\lambda}\|u\|_{BV}. \tag{16}$$

Proof: If $u$ is a minimizer, we use Lemma 2. For any $h \in W^{1,1}$, (14) yields
$$\|K*J_{p,q}\|_* \le \frac{1}{\lambda}.$$
By (12),
$$\int u\,(K*J_{p,q})\,dx \le \|u\|_{BV}\,\|K*J_{p,q}\|_*,$$
and by setting $h = u$ in (13) we obtain
$$\lambda\int u\,(K*J_{p,q})\,dx = \|u\|_{BV}.$$
Therefore (15) and (16) hold.

Conversely, assume $u \in BV$ satisfies (15) and (16), and note that $u$ determines $J_{p,q}$. Still following Meyer [21], we let $h \in BV$ be real. Then for small $\epsilon > 0$, (12), (15) and (16) give
$$\|u+\epsilon h\|_{BV} + \lambda\|K*(f-u-\epsilon h)\|_1 \ge \lambda\int(u+\epsilon h)(K*J_{p,q})\,dx + \lambda\|K*(f-u)\|_1 - \epsilon\lambda\int h\,(K*J_{p,q})\,dx + o(\epsilon)$$
$$= \|u\|_{BV} + \lambda\|K*(f-u)\|_1 + \epsilon\lambda\int h\,(K*J_{p,q})\,dx - \epsilon\lambda\int h\,(K*J_{p,q})\,dx + o(\epsilon).$$
Therefore $u$ is a local minimizer for the functional (2), and by convexity that means $u$ is a global minimizer. □

2.3 Radial Functions

Assume $K$ is radial, $K(x) = K(|x|)$. Also assume $f$ is radial and $f \notin M_{p,q,\lambda}(f)$. Then averaging over rotations shows that each $u \in M_{p,q,\lambda}(f)$ is radial, so that
$$Du = \rho(|x|)\frac{x}{|x|}\,\mu,$$
where $\mu$ is invariant under rotations and $\rho(|x|) = \pm 1$ a.e. $d\mu$. Let $H \in L^1(\mu)$ be radial, satisfy $\int H\,d\mu = 0$ and $H = 0$ on $|x| < \epsilon$, and define
$$h(x) = \int_{B(0,|x|)} \frac{H(|y|)}{|y|^{d-1}}\,d\mu(y).$$
Then $h \in BV$ is radial and
$$Dh = \vec{\nu} = H(|x|)\frac{x}{|x|}\,\mu.$$
Consequently $\vec{\nu}_s = 0$ and (9) gives
$$\int \rho H\,d\mu = \lambda\int K*J_{p,q}(x)\Big(\int_{B(0,|x|)}\frac{H(|y|)}{|y|^{d-1}}\,d\mu(y)\Big)dx = \lambda\int\Big(\int_{|x|>|y|} K*J_{p,q}(x)\,dx\Big)\frac{H(|y|)}{|y|^{d-1}}\,d\mu(y),$$
so that a.e. $d\mu$,
$$\rho(|y|) = \frac{\lambda}{|y|^{d-1}}\int_{|x|>|y|} K*J_{p,q}(x)\,dx. \tag{17}$$
But the right side of (17) is real analytic in $|y|$, with a possible pole at $|y| = 0$, and $\rho(|y|) = \pm 1$ almost everywhere $\mu$. Therefore there is a finite set
$$\{r_1 < r_2 < \cdots < r_n\} \tag{18}$$
of radii such that
$$Du = \sum_{j=1}^n c_j\,\frac{x}{|x|}\,\Lambda^{d-1}\big|_{\{|x|=r_j\}}$$
for real constants $c_1, \ldots, c_n$, where $\Lambda^{d-1}$ denotes $(d-1)$-dimensional Hausdorff measure. By Lemma 1, $J_{p,q}$ is uniquely determined by $f$, and hence the set (18) is also unique. Moreover, it follows from Lemma 1 that for each $j$, either $c_j \ge 0$ for all $u \in M_{p,1,\lambda}(f)$ or $c_j \le 0$ for all $u \in M_{p,1,\lambda}(f)$. We have proved:

Theorem 1. Suppose $K$ and $f$ are both radial. If $f \notin M_{p,q,\lambda}(f)$, then there is a finite set (18) such that all $u \in M_{p,q,\lambda}(f)$ have the form
$$u = \sum_{j=1}^n c_j\,\chi_{B(0,r_j)}. \tag{19}$$
Moreover, there is $X^+ \subset \{1, 2, \ldots, n\}$ such that $c_j \ge 0$ if $j \in X^+$ while $c_j \le 0$ if $j \notin X^+$.

Note that by convexity $M_{p,q,\lambda}(f)$ consists of a single function unless $p = q = 1$. In Section 2.6 we will say more about the solutions of the form (19).
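For $d = 2$ the BV seminorm of a radial step profile of the form (19) can be written down directly: each circle $|x| = r_j$ carries a jump of size $|c_j|$ along a curve of length $2\pi r_j$. A small helper (our illustration, not code from the paper):

```python
import numpy as np

def radial_tv(c, r):
    # total variation of u = sum_j c_j * chi_{B(0, r_j)} in the plane (d = 2):
    # the jump across |x| = r_j has size |c_j| and lives on a circle of
    # length 2*pi*r_j, so the contributions simply add up
    return sum(2.0 * np.pi * rj * abs(cj) for cj, rj in zip(c, r))

# two nested disks: ||u||_BV = 2*pi*(1*1 + 0.5*2) = 4*pi
print(radial_tv([1.0, -0.5], [1.0, 2.0]))  # → 12.566... (= 4*pi)
```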

2.4 Example

Unfortunately, Theorem 1 does not hold more generally. The reason is that when $u$ is not radial it is difficult to produce $BV$ functions satisfying $\vec{\nu}_s = 0$.

Let $\epsilon > 0$ and $E_\epsilon = E \cup \{0 \le x_d \le \epsilon u(y),\ y \in V\}$. Then $E \subset E_\epsilon$, and writing $u_\epsilon = \chi_{E_\epsilon}$, we have
$$\|u_\epsilon\|_{BV} - \|u_0\|_{BV} = \int_V \sqrt{1+|\nabla(f_j+\epsilon g)|^2} - \sqrt{1+|\nabla f_j|^2}\,dy + o(\epsilon) \tag{39}$$
because by [16]
$$\Lambda^{d-1}\big((\partial_* E_\epsilon) \setminus (E_\epsilon \setminus E)\big) = o(\epsilon) \quad \Lambda^{d-1}\text{-a.e. on } K_j.$$
Also, for a similar reason,
$$\lambda\|K*(u_\epsilon - u_0)\|_p = \epsilon\lambda\int_V u\,dy + o(\epsilon). \tag{40}$$
Together (39) and (40) show
$$\int_V \frac{\nabla u\cdot\nabla f_j}{\sqrt{1+|\nabla f_j|^2}}\,dy + \lambda\int_V u\,dy \ge 0.$$
Repeating this argument with $\epsilon < 0$, we obtain:

Theorem 4. At $\Lambda^{d-1}$ almost every $x \in \partial_* E$,
$$\Big|\mathrm{div}\Big(\frac{\nabla f_j}{\sqrt{1+|\nabla f_j|^2}}\Big)\Big| \le \lambda \tag{41}$$
as a distribution on $\mathbb{R}^{d-1}$.

2.8 Smooth Extremals

For convenience we assume $d = 2$ and we take $p = q = 1$.

Theorem 5. Let $u \in C^2 \cap M_{1,1,\lambda}(f)$ and assume $u \ne f$. Set $E_t = \{u > t\}$ and $J = \frac{K*(f-u)}{|K*(f-u)|}$. Then
(i) $\Lambda^1(\partial_* E_t) = \lambda\iint_{E_t} K*J\,dx\,dy$;
(ii) the level curve $\{u(z) = c\}$ has curvature $\lambda(K*J)(z)$; and
(iii) if $|\nabla u| \ne 0$, then
$$\frac{d}{dt}\Lambda^1(\partial_* E_t) = -\int_{\partial E_t}\frac{\lambda(K*J)(z)}{|\nabla u(z)|}\,ds.$$

Theorem 5 is proved using the variation $u \to u + \epsilon h$, $h \in C_0^2$. It should be true in greater generality, but we have no proof at this time.

3 Existence of minimizers

Although the proof of the existence of minimizers of our problem can be seen as a generalization and application of more classical techniques [1], [11], [26], we include it here for completeness in several cases. We consider the cases of a bounded domain $Q$ and of the whole domain $\mathbb{R}^d$, with various kernel operators $Ku = K*u$. We recall that here, for $u \in BV(Q)$, $\|u\|_{BV(Q)}$ denotes the semi-norm
$$\|u\|_{BV(Q)} = \sup\Big\{\int_Q u\,\mathrm{div}\,\varphi\,dx : \varphi \in C_0^1(Q,\mathbb{R}^d),\ \sup_{x\in Q}|\varphi(x)| \le 1\Big\}.$$

3.1 Bounded domain, general operator $K$, and the general case $p \ge 1$, $1 \le q < \infty$

We recall that $K(x)$ is non-negative and even on $\mathbb{R}^d$ with $\int K(x)\,dx = 1$; thus $K \in L^1(\mathbb{R}^d)$, $\|K\|_{L^1} = 1$, and $K1 = 1 \ne 0$. The linear and continuous operator $u \mapsto Ku = K*u$ is well defined on $L^1(\mathbb{R}^d)$. There are several ways to adapt linear and continuous convolution operators $Ku$ to the case of bounded domains $Q$, e.g. as shown in [17].


Theorem 6. Assume $p \ge 1$, $1 \le q < \infty$, $\lambda > 0$, and let $Q$ be an open, bounded and connected subset of $\mathbb{R}^d$ with Lipschitz boundary $\partial Q$. If $f \in L^p(Q)$ and $K : L^1(Q) \to L^p(Q)$ is linear and continuous with $\|K\chi_Q\|_{L^1(Q)} > 0$, then the minimization problem
$$\inf_{u\in BV(Q)} \|u\|_{BV(Q)} + \lambda\|K(f-u)\|_{L^p(Q)}^q \tag{42}$$
has an extremal $u \in BV(Q)$.

Proof: Let $E(u) = \|u\|_{BV} + \lambda\|K(f-u)\|_p^q$. The infimum of $E$ is finite since $E(u) \ge 0$ and $E(0) = \lambda\|Kf\|_p^q < \infty$. Let $u_n$ be a minimizing sequence, thus $\inf_v E(v) = \lim_{n\to\infty} E(u_n)$. Then $E(u_n) \le C < \infty$ for all $n \ge 1$. The Poincaré-Wirtinger inequality implies that there is a constant $C' = C'(d,Q) > 0$ such that for all $n \ge 1$ we have $\|u_n - u_{n,Q}\|_1 \le C'\|u_n\|_{BV}$, where $u_{n,Q}$ is the mean of $u_n$ over $Q$. Let $v_n = u_n - u_{n,Q}$, thus $v_{n,Q} = 0$ and $Dv_n = Du_n$. Similarly, we have $\|v_n\|_1 \le C'\|v_n\|_{BV}$. Since $Q$ is bounded, we have for some constant $C_1 > 0$,
$$(C/\lambda)^{2/q} \ge \|K(f-u_n)\|_p^2 \ge C_1\|K(f-u_n)\|_1^2 = C_1\|Ku_n - Kf\|_1^2 = C_1\|(Kv_n - Kf) + Ku_{n,Q}\|_1^2$$
$$\ge C_1\big(\|Ku_{n,Q}\|_1 - \|Kv_n - Kf\|_1\big)^2 \ge C_1\|Ku_{n,Q}\|_1\big(\|Ku_{n,Q}\|_1 - 2\|Kv_n - Kf\|_1\big)$$
$$\ge C_1\|Ku_{n,Q}\|_1\big(\|Ku_{n,Q}\|_1 - 2\|K\|(\|v_n\|_1 + \|f\|_1)\big).$$
Let $x_n = \|Ku_{n,Q}\|_1$ and $a_n = \|K\|(\|v_n\|_1 + \|f\|_1)$. Then $x_n(x_n - 2a_n) \le \frac{(C/\lambda)^{2/q}}{C_1} = c$, with $0 \le a_n \le \|K\|(CC' + \|f\|_1)$, thus we obtain $0 \le x_n \le a_n + \sqrt{a_n^2 + c} \le C_2$ for some constant $C_2 > 0$, which implies
$$\|Ku_{n,Q}\|_1 = \frac{\big|\int_Q u_n\,dx\big|}{|Q|}\,\|K\chi_Q\|_1 \le C_2.$$
Thanks to the assumptions on $K$, we deduce that the sequence $|u_{n,Q}|$ is uniformly bounded. By the Poincaré-Wirtinger inequality we obtain $\|u_n\|_1 \le$ constant. Thus $\|u_n\|_{BV(Q)} + \|u_n\|_{L^1(Q)}$ is uniformly bounded. Following e.g. [16], we deduce that there is a subsequence $\{u_{n_j}\}$ of $\{u_n\}$ and $u \in BV(Q)$ such that $u_{n_j}$ converges to $u$ strongly in $L^1(Q)$. Then we also have $\|u\|_{BV(Q)} \le \liminf_{n_j\to\infty} \|u_{n_j}\|_{BV(Q)}$. Since $(u_{n_j} - f) \to (u - f)$ in $L^1(Q)$ and $K$ is continuous from $L^1(Q)$ to $L^p(Q)$, we deduce that $\|K(u_{n_j}-f)\|_p \to \|K(u-f)\|_p$ as $n_j \to \infty$. We conclude that
$$E(u) \le \liminf_{n_j\to\infty} E(u_{n_j}) = \inf_v E(v),$$
thus $u$ is extremal. □

3.2 Convolution operator $K$ and the particular case $p = q = 1$

In this section we study the existence of minimizers for different choices of convolution kernels $K$, in the particular case $p = q = 1$.

3.2.1 Smooth Kernels

Suppose $Kv = K_t * v$, where for example $K_t$ is the Poisson kernel at scale $t > 0$. We have $\hat{K}_t(\xi) = e^{-2\pi t|\xi|}$. Let $f$ be a distribution such that $\|K_t*f\|_{L^1} < \infty$. We recall our minimization problem,
$$\inf_{u\in BV}\big\{\mathcal{J}(u) = \|u\|_{BV} + \lambda\|K_t*(f-u)\|_{L^1}\big\}. \tag{43}$$
To motivate the proposed minimization model (43) with $t > 0$ for the decomposition of an image $f$ into a $BV$ component $u$ and an oscillatory component $f - u$ (rather than taking $t = 0$), we consider the following two examples of functions or distributions $f$ with $\|K_t*f\|_{L^1}$ small while $\|f\|_{L^1}$ is large.

Example 1. Suppose $f(x) = \sin(2\pi nx)$, $x \in \mathbb{R}$, is an oscillatory function. Then $K_t*f = \sin(2\pi nx)\,e^{-2\pi tn}$. For $Q = [-m/n, m/n]$, we have $\|K_t*f\|_{L^1(Q)} = \frac{4m}{\pi n}e^{-2\pi tn}$, while $\|f\|_{L^1(Q)} = \frac{4m}{\pi n}$. Clearly, $\|K_t*f\|_{L^1} \ll \|f\|_{L^1}$ when $n$ is large.

Example 2. Suppose we are in $\mathbb{R}$ and $f = \sum_{i=0}^\infty a_i\delta_{x_i}$ with $\sum_{i=0}^\infty |a_i| < \infty$; this can also be seen as a (generalized) oscillatory distribution. Note that $f \notin L^1(\mathbb{R})$. However,
$$\|K_t*f\|_{L^1} \le \sum_{i=0}^\infty |a_i| < \infty.$$

Recall that by the standard property of convolution (Young's inequality), we have for all $v \in L^p$, $1 \le p \le \infty$,
$$\|K_t*v\|_{L^p} \le \|K_t\|_{L^1}\|v\|_{L^p} = \|v\|_{L^p}.$$
Also, using the same arguments as in Lemma 3.24 in Chapter 3 of [6], one obtains the following result.

Lemma 7. Let $u \in BV(\mathbb{R}^d)$. Then
$$\|K_t*u - u\|_{L^1} \le t\|u\|_{BV}. \tag{44}$$
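The mechanism of Example 1 is easy to check numerically by implementing $K_t$ as the Fourier multiplier $e^{-2\pi t|\xi|}$ on a periodic grid (an illustrative stand-in for the Poisson kernel on $\mathbb{R}$; the names and parameters below are ours):

```python
import numpy as np

def poisson_smooth(f, t, L=1.0):
    # apply the multiplier exp(-2*pi*t*|xi|) on [0, L) with periodic boundary
    n = len(f)
    xi = np.fft.fftfreq(n, d=L / n)   # frequencies in cycles per unit length
    return np.real(np.fft.ifft(np.fft.fft(f) * np.exp(-2.0 * np.pi * t * np.abs(xi))))

n, t, freq = 1024, 0.05, 40
x = np.arange(n) / n
f = np.sin(2.0 * np.pi * freq * x)    # highly oscillatory component
g = poisson_smooth(f, t)
# frequency `freq` is damped by exp(-2*pi*t*freq) = exp(-4*pi) ~ 3.5e-6,
# so the L^1 norm of K_t * f is tiny compared with that of f
```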

Theorem 7. Let $Q = (0,1)^d$ or $Q = \mathbb{R}^d$, and $\lambda > 0$. For each distribution $f$ such that $\|K_t*f\|_{L^1} < \infty$, the variational problem (43) has a minimizer.

Proof. Let $\{u_n\}$ be a minimizing sequence for (43). This minimizing sequence exists because $\mathcal{J}(u) \ge 0$ for all $u \in BV(Q)$ and $\mathcal{J}(0) = \lambda\|K_t*f\|_{L^1(Q)} < \infty$. We have the uniform bounds
$$\|u_n\|_{BV(Q)} \le C, \tag{45}$$
$$\|K_t*(f-u_n)\|_{L^1(Q)} \le C. \tag{46}$$
Suppose $Q = \mathbb{R}^d$; then, by (44),
$$\|u_n\|_{L^1(Q)} \le \|u_n - K_t*u_n\|_{L^1(Q)} + \|K_t*u_n\|_{L^1(Q)} \le t\|u_n\|_{BV(Q)} + \|K_t*u_n\|_{L^1(Q)}. \tag{47}$$
This shows that $\|u_n\|_{L^1(Q)}$ is uniformly bounded. On the other hand, if $Q = (0,1)^d$, then (45) and (46) imply that $\|u_n\|_{L^1(Q)}$ is uniformly bounded. Indeed, suppose (45) and (46) hold. Let $w_n = u_n - u_{n,Q}$ as before; then $\|w_n\|_{BV(Q)} \le C$. By Poincaré's inequality, we have
$$\|w_n\|_{L^1(Q)} = \|w_n - w_{n,Q}\|_{L^1(Q)} \le C_Q\|w_n\|_{BV} \le C.$$
But
$$|u_{n,Q}| = \|K_t*u_{n,Q}\|_{L^1} \le \|K_t*u_n\|_{L^1(Q)} + \|K_t*w_n\|_{L^1(Q)} \le \|K_t*u_n\|_{L^1(Q)} + \|w_n\|_{L^1(Q)} \le C,$$
thus $u_{n,Q}$ is uniformly bounded. Moreover, by applying Poincaré's inequality to $u_n$, we have
$$\|u_n\|_{L^1} \le |Q||u_{n,Q}| + \|u_n - u_{n,Q}\|_{L^1(Q)} \le |Q||u_{n,Q}| + C_Q\|u_n\|_{BV(Q)} \le C.$$
Therefore $\|u_n\|_{L^1(Q)}$ is uniformly bounded. Now, using the compactness property in $BV$ and the lower semicontinuity of the map $u \mapsto \|u\|_{BV(Q)}$ [16, 6], there exists $u \in BV(Q)$ such that, up to a subsequence (which we still denote by $u_n$), $u_n \to u$ in $L^1(Q)$ and
$$\|u\|_{BV(Q)} \le \liminf_{n\to\infty}\|u_n\|_{BV(Q)}. \tag{48}$$
Moreover,
$$\|K_t*(u_n-u)\|_{L^1(Q)} \le \|u_n-u\|_{L^1(Q)} \to 0 \quad\text{as } n \to \infty. \tag{49}$$
Together with the assumption that $K_t*f \in L^1(Q)$, this gives
$$\|K_t*(f-u)\|_{L^1(Q)} \le \lim_{n\to\infty}\|K_t*(f-u_n)\|_{L^1(Q)}. \tag{50}$$
Combining (48) and (50), one obtains
$$\mathcal{J}(u) \le \liminf_{n\to\infty}\mathcal{J}(u_n),$$
which shows that $u$ is a minimizer. □

3.2.2 Riesz Potential

Recall the Riesz potential $I_\alpha$, $0 < \alpha < d$, defined as [24]
$$I_\alpha f = (-\Delta)^{-\alpha/2} f = K_\alpha * f,$$
where $\hat{K}_\alpha(\xi) = (2\pi|\xi|)^{-\alpha}$. For each $\alpha \in (0,d)$, the homogeneous Sobolev potential space $\dot{W}^{-\alpha,1}$ is defined as
$$\dot{W}^{-\alpha,1} = \{f : \|I_\alpha f\|_{L^1} < \infty\}.$$
Equipped with the norm $\|f\|_{\dot{W}^{-\alpha,1}} = \|I_\alpha f\|_{L^1}$, $\dot{W}^{-\alpha,1}$ becomes a Banach space. From Stein [24] (Chapter V, Section 1.2), if $1 < p < \infty$ and $1/q = 1/p - \alpha/d$, then
$$\|I_\alpha f\|_{L^q(\mathbb{R}^d)} \le A_{p,q}\|f\|_{L^p(\mathbb{R}^d)}. \tag{51}$$
Here we would like to model the oscillatory component using $I_\alpha$, $0 < \alpha < d$. Thus the variational problem (43) can be rewritten as
$$\inf_{u\in BV}\big\{\mathcal{J}(u) = \|u\|_{BV} + \lambda\|K_\alpha*(f-u)\|_{L^1}\big\}. \tag{52}$$

Theorem 8. Let $Q = (0,1)^d$. For each $0 < \alpha < d$ and each distribution $f$ such that $\|K_\alpha*f\|_{L^1(Q)} < \infty$, the variational problem (52) has a minimizer.

Proof. Again, as before, let $\{u_n\}$ be a minimizing sequence for (52). We have
$$\|u_n\|_{BV(Q)} \le C, \tag{53}$$
$$\|K_\alpha*(f-u_n)\|_{L^1(Q)} \le C. \tag{54}$$
As in the proof of Thm. 7, condition (54) implies that $u_{n,Q}$ is uniformly bounded, and so by Poincaré's inequality, $\|u_n\|_{L^1} \le C$ for all $n$. This implies that the $BV$-norm of $u_n$ is uniformly bounded. Thus, there exists $u \in BV$ such that, up to a subsequence, $u_n \to u$ in $L^1$ and
$$\|u\|_{BV} \le \liminf_{n\to\infty}\|u_n\|_{BV}.$$
By the compactness of $BV$ in $L^p$, $1 \le p < d/(d-1)$, we have, up to a subsequence, $u_n \to u$ in $L^p$, $1 \le p < d/(d-1)$. Now for a fixed $p \in (1, d/(d-1))$, we have
$$\|K_\alpha*(u_n-u)\|_{L^1} \le C_q\|K_\alpha*(u_n-u)\|_{L^q} \le C_{p,q}\|u_n-u\|_{L^p} \to 0 \quad\text{as } n \to \infty.$$
This implies, up to a subsequence,
$$\|K_\alpha*(f-u)\|_{L^1} = \lim_{n\to\infty}\|K_\alpha*(f-u_n)\|_{L^1}.$$
Therefore, $u$ is a minimizer. □
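On a periodic grid the Riesz potential $I_\alpha$ can be sketched directly through its Fourier multiplier $(2\pi|\xi|)^{-\alpha}$; the $\xi = 0$ mode is dropped, so the operator is only defined up to constants (mean-zero data assumed). This is our illustration, not the paper's setting on $\mathbb{R}^d$:

```python
import numpy as np

def riesz_potential(f, alpha, L=1.0):
    # I_alpha f = K_alpha * f with multiplier (2*pi*|xi|)^(-alpha); the
    # zero frequency is set to 0, i.e. we assume mean-zero input
    n = len(f)
    xi = np.fft.fftfreq(n, d=L / n)
    m = np.zeros(n)
    m[1:] = (2.0 * np.pi * np.abs(xi[1:]))**(-alpha)
    return np.real(np.fft.ifft(np.fft.fft(f) * m))
```

For a pure oscillation $f = \sin(2\pi kx)$ this returns $(2\pi k)^{-\alpha}\sin(2\pi kx)$: the faster the oscillation, the smaller $\|I_\alpha f\|_{L^1}$, which is exactly why $\|I_\alpha f\|_{L^1}$ is a reasonable texture norm.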

4 Characterization of Minimizers

In this section we apply the general duality techniques of Ekeland-Temam [15] and in particular of Demengel-Temam [14] to our minimization problem. We note that these results may be seen as related to the other characterizations of minimizers from Lemma 2 and Lemma 3, but they are expressed and proven here in a different language.

4.1 Dual problem and optimality conditions, $p = q = 1$

Let $f \in L^1(Q)$ be the given data, with $Q \subset \mathbb{R}^d$ open, bounded and connected, and let $K$ be a smoothing (analytic) convolution kernel, such as the Gaussian kernel or the Poisson kernel. The minimization problem for $p = 1$, $q = 1$ is
$$(P_1) \quad \inf_{u\in BV(Q)} E(u) = \|u\|_{BV(Q)} + \lambda\|K*(u-f)\|_{L^1(Q)},$$
using the notation $\|u\|_{BV(Q)} = \int_Q |Du|$ for the semi-norm of $u$ in $BV(Q)$. As we have seen, this problem has a solution $u \in BV(Q) \subset L^2(Q)$. For $u \in L^1(Q)$, we will also use the operator notation $Ku = K*u$ for the corresponding linear and continuous operator from $L^1$ to $L^1$, with adjoint $K^*$ (with a radially symmetric kernel $K$, the operator $K$ is self-adjoint). We wish to characterize the solution $u$ of $(P_1)$ using duality techniques. We have
$$\inf_{u\in BV(Q)} E(u) = \inf_{u\in W^{1,1}(Q)} F(u),$$
since for any $u \in BV(Q)$ we can find $u_n \in W^{1,1}(Q)$ such that $u_n \to u$ strongly in $L^1(Q)$ and $\|u_n\|_{BV(Q)} \to \|u\|_{BV(Q)}$. Thus let us first consider the simpler problem
$$(P_2) \quad \inf_{u\in W^{1,1}(Q)} F(u) = \int_Q |\nabla u|\,dx + \lambda\|K*(u-f)\|_{L^1(Q)}.$$

We now write $(P_2^*)$, the dual of $(P_2)$, in the sense of Ekeland-Temam [15]. We first recall the definition of the Legendre transform (or polar) of a function: let $V$ and $V^*$ be two normed vector spaces in duality by a bilinear pairing denoted $\langle\cdot,\cdot\rangle$, and let $\phi : V \to \bar{\mathbb{R}}$ be a function. Then the Legendre transform $\phi^* : V^* \to \bar{\mathbb{R}}$ is defined by
$$\phi^*(u^*) = \sup_{u\in V}\big\{\langle u^*, u\rangle - \phi(u)\big\}.$$
We let $G_1(w_0) = \lambda\int_Q |w_0 - K*f|\,dx$ and $G_2(\bar{w}) = \int_Q |\bar{w}|\,dx$, with $G_1 : L^1(Q) \to \mathbb{R}$ and $G_2 : L^1(Q)^d \to \mathbb{R}$, and using $w = (w_0, w_1, \ldots, w_d) = (w_0, \bar{w}) \in L^1(Q)^{d+1}$, we define $G(w) = G_1(w_0) + G_2(\bar{w})$. Let $\Lambda u = (Ku, \nabla u) : W^{1,1}(Q) \to L^1(Q)^{d+1}$, and let $\Lambda^*$ be its adjoint. Then $E(u) = F(u) + G(\Lambda u)$ with $F(u) \equiv 0$. Then $(P_2^*)$ is ([15], Chapter III, Section 4):
$$(P_2^*) \quad \sup_{p^*\in L^\infty(Q)^{d+1}} -F^*(\Lambda^* p^*) - G^*(-p^*).$$
We have $F^*(\Lambda^* p^*) = 0$ if $\Lambda^* p^* = 0$, and $F^*(\Lambda^* p^*) = +\infty$ otherwise. It is easy to see that $G^*(p^*) = G_1^*(p_0^*) + G_2^*(\bar{p}^*)$ for $p^* = (p_0^*, \bar{p}^*)$. We have that
$$G_1^*(p_0^*) = \int_Q p_0^*(K*f)\,dx \quad\text{if } |p_0^*| \le \lambda \text{ a.e.}, \qquad G_1^*(p_0^*) = +\infty \text{ otherwise},$$
and
$$G_2^*(\bar{p}^*) = 0 \quad\text{if } |\bar{p}^*| \le 1 \text{ a.e.}, \qquad G_2^*(\bar{p}^*) = +\infty \text{ otherwise}.$$
Thus we have
$$(P_2^*) \quad \sup_{p^*\in X} -\int_Q (-p_0^*)(K*f)\,dx,$$
where
$$X = \big\{(p_0^*, p_1^*, \ldots, p_d^*) = (p_0^*, \bar{p}^*) \in L^\infty(Q)^{d+1} :\ |p_0^*| \le \lambda,\ |\bar{p}^*| \le 1,\ \Lambda^* p^* = 0\big\}.$$

Under these assumptions we have $\inf(P_1) = \inf(P_2) = \sup(P_2^*)$, and $(P_2^*)$ has at least one solution $p^*$. Using the definition of $\Lambda$, we can show that [26]
$$X = \big\{(p_0^*, \bar{p}^*) \in L^\infty(Q)^{d+1} :\ |p_0^*| \le \lambda,\ |\bar{p}^*| \le 1,\ K^* p_0^* - \mathrm{div}\,\bar{p}^* = 0,\ \bar{p}^*\cdot\nu = 0 \text{ on } \partial Q\big\}.$$
Now let $u \in BV(Q)$ be the solution of $(P_1)$ and $p = (p_0, \bar{p}) \in X$ be the solution of $(P_2^*)$. We must have the extremality relation
$$\|u\|_{BV(Q)} + \lambda\|K*u - K*f\|_{L^1(Q)} = \int_Q p_0(K*f)\,dx.$$
We have that $Du\cdot\bar{p}$ is a well-defined bounded measure, satisfying a generalized Green's formula
$$\int_Q Du\cdot\bar{p} = -\int_Q u\,\mathrm{div}\,\bar{p}\,dx + \int_{\partial Q} u\,(\bar{p}\cdot\nu)\,ds.$$
Since $\bar{p}\cdot\nu = 0$ a.e. on $\partial Q$, we have
$$\int_Q |Du| + \lambda\int_Q |K*u - K*f|\,dx + \int_Q p_0\,Ku\,dx + \int_Q Du\cdot\bar{p} - \int_Q p_0(K*f)\,dx = 0,$$
or, using the decomposition $Du = \nabla u\,dx + D^s u = \nabla u\,dx + C_u + J_u = \nabla u\,dx + C_u + (u^+ - u^-)\nu\,\mathcal{H}^{d-1}\lfloor S_u$ [16], with $S_u$ the support of the jump measure $J_u$, we get
$$\int_Q |\nabla u|\,dx + \int_{Q\setminus S_u} |C_u| + \int_{S_u}(u^+-u^-)\,d\mathcal{H}^{d-1} + \int_Q \nabla u\cdot\bar{p}\,dx + \int_{Q\setminus S_u}\bar{p}\cdot C_u$$
$$+ \int_{S_u}(u^+-u^-)\,\bar{p}\cdot\nu\,d\mathcal{H}^{d-1} + \lambda\int_Q |K*u - K*f|\,dx + \int_Q p_0\,Ku\,dx - \int_Q p_0(K*f)\,dx = 0.$$
Since for any function $\phi$ and its polar $\phi^*$ we must have $\phi^*(u^*) - \langle u^*, u\rangle + \phi(u) \ge 0$ for any $u \in V$ and $u^* \in V^*$, we obtain:

1. $\lambda|K*u - K*f| - (-p_0)(K*u) + (-p_0)(K*f) \ge 0$ for $dx$ a.e. in $Q$;
2. $|\nabla u| - \nabla u\cdot(-\bar{p}) + 0 \ge 0$ for $dx$ a.e. in $Q$ where $\nabla u(x)$ is defined;
3. $0 - (-\bar{p})\cdot C_u + |C_u| = (1 + \bar{p}\cdot h)|C_u| \ge 0$, since $|\bar{p}| \le 1$ (letting $C_u = h\,|C_u|$, $h \in L^1(|C_u|)^d$, $|h| = 1$);
4. $0 - (-\bar{p}\cdot\nu)(u^+-u^-) + (u^+-u^-) = (u^+-u^-)(1 + \bar{p}\cdot\nu) \ge 0$ for $d\mathcal{H}^{d-1}$ a.e. in $S_u$ (again since $|\bar{p}| \le 1$).

Therefore each expression in 1-4 must be exactly 0, and we obtain another characterization of extremals $u$:

Theorem 9. $u$ is a minimizer of $(P_1)$ if and only if there is $(p_0, p_1, \ldots, p_d) = (p_0, \bar{p}) \in (L^\infty)^{d+1}$ such that $|p_0| \le \lambda$, $|\bar{p}| \le 1$, $\bar{p}\cdot\nu = 0$ on $\partial Q$, $K^* p_0 - \mathrm{div}\,\bar{p} = 0$,
$$\lambda|K*(u-f)| + p_0(K*u - K*f) = 0, \tag{55}$$
$$|\nabla u| + \nabla u\cdot\bar{p} = 0,$$
$$1 + \bar{p}\cdot\nu = 0 \text{ and } |\bar{p}| = 1 \text{ on } S_u,$$
and
$$\mathrm{supp}\,|C_u| \subset \{x \in Q\setminus S_u : 1 + \bar{p}(x)\cdot h(x) = 0\}, \quad h \in L^1(|C_u|)^d,\ |h| = 1,\ C_u = h\,|C_u|.$$

4.2 Case $1 \le p, q < \infty$

A statement similar to Thm. 9 can be shown for the general case $1 \le p, q < \infty$. The main change is in the definition of $G_1$, which becomes
$$G_1(w_0) = \lambda\|w_0 - K*f\|_p^q = \lambda\Big(\int_Q |w_0 - K*f|^p\,dx\Big)^{q/p}$$
for $w_0 \in L^p(Q)$. For example, if $1 < q < \infty$, then
$$G_1^*(p_0^*) = \lambda q\Big[\frac{1}{q'}\Big\|\frac{p_0^*}{\lambda q}\Big\|_{p'}^{q'} + \int_Q (K*f)\,\frac{p_0^*}{\lambda q}\,dx\Big],$$
and (55) changes accordingly, where $\frac{1}{p} + \frac{1}{p'} = 1$ and $\frac{1}{q} + \frac{1}{q'} = 1$ (similarly in the case $1 \le p < \infty$, $q = 1$).
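The conjugate formula for $G_1$ can be sanity-checked in the scalar analogue $\phi(t) = \lambda|t|^q$ (the $K*f$ shift is omitted), whose polar the formula predicts to be $\lambda q\,\frac{1}{q'}\,|s/(\lambda q)|^{q'}$. A brute-force numerical check (our illustration):

```python
import numpy as np

def legendre_numeric(phi, s, t_grid):
    # phi*(s) = sup_t ( s*t - phi(t) ), evaluated by brute force on a grid
    return (s * t_grid - phi(t_grid)).max()

lam, q = 1.0, 2.0
qp = q / (q - 1.0)                                # conjugate exponent q'
phi = lambda t: lam * np.abs(t)**q
closed = lambda s: lam * q * (1.0 / qp) * (np.abs(s) / (lam * q))**qp

t = np.linspace(-10.0, 10.0, 200001)              # grid step 1e-4
print(legendre_numeric(phi, 3.0, t), closed(3.0)) # both ≈ 2.25
```

For $\lambda = 1$, $q = 2$ the closed form reduces to $s^2/4$, which is the familiar conjugate of $t^2$.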

References

[1] R. Acar and C.R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Problems 10(6): 1217-1229, 1994.

[2] W.K. Allard, Total variation regularization for image denoising. I: Geometric theory, SIAM J. Mathematical Analysis 39(4): 1150-1190, 2007.

[3] W.K. Allard, Total variation regularization for image denoising. II: Examples.

[4] W.K. Allard, Total variation regularization for image denoising. III: Examples.

[5] S. Alliney, Digital filters as absolute norm regularizers, IEEE Transactions on Signal Processing 40(6): 1548-1562, 1992.

[6] L. Ambrosio, N. Fusco, D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford University Press, 2000.

[7] G. Aubert and J.-F. Aujol, Modeling very oscillating signals. Application to image processing, Applied Mathematics and Optimization 51(2): 163-182, 2005.

[8] J.-F. Aujol, G. Aubert, L. Blanc-Féraud, and A. Chambolle, Image decomposition into a bounded variation component and an oscillating component, JMIV 22(1): 71-88, 2005.

[9] J.-F. Aujol, G. Gilboa, T. Chan and S. Osher, Structure-texture image decomposition - modeling, algorithms and parameter selection, International Journal of Computer Vision 67(1): 111-136, 2006.

[10] J.-F. Aujol and A. Chambolle, Dual norms and image decomposition models, IJCV 63: 85-104, 2005.

[11] A. Chambolle and P.L. Lions, Image recovery via total variation minimization and related problems, Numerische Mathematik 76(2): 167-188, 1997.

[12] T.F. Chan and S. Esedoglu, Aspects of total variation regularized L1 function approximation, SIAM J. Appl. Math. 65(5): 1817-1837, 2005.

[13] T. Chan and D. Strong, Edge-preserving and scale-dependent properties of total variation regularization, Inverse Problems 19: S165-S187, 2003.

[14] F. Demengel and R. Temam, Convex functions of a measure and applications, Indiana Univ. Math. J. 33: 673-709, 1984.

[15] I. Ekeland and R. Temam, Convex Analysis and Variational Problems, SIAM 28, 1999.

[16] L.C. Evans and R.F. Gariepy, Measure Theory and Fine Properties of Functions, CRC Press, 1991.

[17] J.B. Garnett, T.M. Le, Y. Meyer, and L.A. Vese, Image decompositions using bounded variation and generalized homogeneous Besov spaces, Appl. Comput. Harmon. Anal. 23: 25-56, 2007.

[18] J.B. Garnett, P.W. Jones, T.M. Le, and L.A. Vese, Modeling oscillatory components with the homogeneous spaces, UCLA CAM Report 07-21, to appear in PAMQ.

[19] T.M. Le and L.A. Vese, Image decomposition using total variation and div(BMO), Multiscale Model. Simul. 4(2): 390-423, 2005.

[20] L.H. Lieu and L.A. Vese, Image restoration and decomposition via bounded total variation and negative Hilbert-Sobolev spaces, Applied Mathematics & Optimization 58: 167-193, 2008.

[21] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, Vol. 22, Amer. Math. Soc., 2001.

[22] S. Osher, A. Solé, L. Vese, Image decomposition and restoration using total variation minimization and the H^{-1} norm, SIAM Multiscale Modeling and Simulation 1(3): 349-370, 2003.

[23] L. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60: 259-268, 1992.

[24] E. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, 1970.

[25] G. Strang, L1 and L-infinity approximation of vector fields in the plane, Lecture Notes in Num. Appl. Anal. 5: 273-288, 1982 (Nonlinear PDE in Applied Science, U.S.-Japan Seminar, Tokyo, 1982).

[26] L. Vese, A study in the BV space of a denoising-deblurring variational problem, Applied Mathematics and Optimization 44(2): 131-161, 2001.

[27] L. Vese, S. Osher, Modeling textures with total variation minimization and oscillating patterns in image processing, Journal of Scientific Computing 19(1-3): 553-572, 2003.
