An Iterative Approach to Quadratic Optimization

JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 116, No. 3, pp. 659–678, March 2003 (© 2003)

H. K. Xu

Communicated by P. Tseng

Abstract. Assume that C_1, ..., C_N are N closed convex subsets of a real Hilbert space H having a nonempty intersection C. Assume also that each C_i is the fixed point set of a nonexpansive mapping T_i of H. We devise an iterative algorithm which generates a sequence (x_n) from an arbitrary initial x_0 ∈ H. The sequence (x_n) is shown to converge in norm to the unique solution of the quadratic minimization problem min_{x∈C} (1/2)⟨Ax, x⟩ − ⟨x, u⟩, where A is a bounded linear strongly positive operator on H and u is a given point in H. Quadratic–quadratic minimization problems are also discussed.

Key Words. Iterative algorithms, quadratic optimization, nonexpansive mappings, convex feasibility problems, Hilbert spaces.

1. Introduction

We are concerned with the following quadratic optimization problem:

(P1)  min_{x ∈ ∩_{i=1}^N C_i} (1/2)⟨Ax, x⟩ − ⟨x, u⟩,

where N ≥ 1 is an integer, C_1, ..., C_N are N closed convex subsets of a real Hilbert space H such that the intersection ∩_{i=1}^N C_i ≠ ∅, ⟨·,·⟩ is the inner product of H, u is an element of H, and A is a self-adjoint bounded linear operator such that A is strongly positive (Ref. 1); i.e.,

⟨Ax, x⟩ ≥ γ‖x‖²,  for all x ∈ H and for some constant γ > 0.

It is known that Problem (P1) has a unique solution, which is denoted x*.

1 This research was supported in part by the National Research Foundation of South Africa.
2 The author thanks Professor Kung Fu Ng for helpful discussions and hospitality while he was visiting the Chinese University of Hong Kong. The author is also grateful to the referees for their comments and suggestions, which improved the presentation of this paper.
3 Professor, Department of Mathematics, University of Durban-Westville, Durban, South Africa.

0022-3239/03/0100-0659/0 © 2003 Plenum Publishing Corporation


The objective of the present article is to design an iterative algorithm which generates a sequence converging in norm to the solution x*.

Recall that a self-mapping T of a closed convex subset C of H is nonexpansive (Ref. 2) if

‖Tx − Ty‖ ≤ ‖x − y‖,  x, y ∈ C.

We shall use F(T) to denote the fixed point set of T; i.e., F(T) := {x ∈ C: Tx = x}. It is known (Ref. 2) that F(T) is closed and convex, but possibly empty. Let P_C denote the (nearest point) projection from H onto C; that is,

P_C x = argmin{‖x − v‖: v ∈ C},  x ∈ H.

It is well known (cf. Refs. 2–3) that P_C is nonexpansive.

Our iterative algorithm is motivated by the works of Halpern (Ref. 4), Lions (Ref. 5), Wittmann (Ref. 6), and Bauschke (Ref. 7). Let T and C be as above. Suppose that T has a fixed point. Given u ∈ C and a real sequence (α_n) in the interval [0, 1], starting with an arbitrary initial x_0 ∈ C, we can define recursively a sequence (x_n) in C by

x_{n+1} := α_{n+1} u + (1 − α_{n+1}) T x_n,  n ≥ 0.  (1)

Under certain assumptions on the parameters (α_n), it has been shown (Refs. 4–6 and 8) that (x_n) converges in norm to P_{F(T)}(u), the projection of u onto the fixed point set F(T) of T. This means that the limit of (x_n) solves the following minimization problem:

min_{x ∈ F(T)} (1/2)‖x − u‖².  (2)
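As a concrete illustration, iteration (1) can be sketched numerically. The data below are a hypothetical toy instance, not from the paper: we take C = H = ℝ², T the metric projection onto the closed unit ball (a nonexpansive mapping whose fixed point set F(T) is the ball), u outside the ball, and the natural choice α_n = 1/n. The limit should be P_{F(T)}(u), the radial projection of u onto the ball.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Projection onto the closed Euclidean ball of the given radius;
    a nonexpansive mapping whose fixed point set is the ball itself."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

def halpern(T, u, x0, n_iter=20000):
    """Halpern iteration (1): x_{n+1} = a_{n+1} u + (1 - a_{n+1}) T x_n,
    with the natural control choice a_n = 1/n."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        a = 1.0 / n
        x = a * u + (1.0 - a) * T(x)
    return x

u = np.array([3.0, 4.0])                  # point lying outside the unit ball
x = halpern(proj_ball, u, np.zeros(2))
print(x)                                  # approaches u/||u|| = (0.6, 0.8)
```

For this instance the iterates stay on the ray through u, and their norms satisfy ‖x_n‖ = 1 + 4/n, so the error decays at the (slow) rate typical of Halpern-type schemes.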

Clearly, (2) is a special case of (P1).

Our work is motivated also by the following problem of convex inequalities: Given a finite family of convex functions f_1, ..., f_N, find an x such that

f_i(x) ≤ 0,  i = 1, 2, ..., N.  (3)

More generally, our work is an extension of the following convex feasibility problem: Given a finite family of closed convex subsets {C_i}_{i=1}^N with a nonempty intersection,

find a point x ∈ ∩_{i=1}^N C_i.  (4)

Solving Problem (P1) is equivalent to solving the convex feasibility problem, and the solution also minimizes a quadratic functional. So, the solution is


of some kind of viscosity nature (for details, see Ref. 9). For a recent survey on projection methods for convex feasibility problems, see Bauschke and Borwein (Ref. 10) and the references cited there; see also Combettes (Ref. 11).

2. Preliminaries

We begin with the following elementary lemma.

Lemma 2.1. Let (s_n) be a sequence of nonnegative numbers satisfying the condition

s_{n+1} ≤ (1 − α_n)s_n + α_n β_n,  n ≥ 0,  (5)

where (α_n), (β_n) are sequences of real numbers such that:

(i) (α_n) ⊂ [0, 1] and ∑_{n=0}^∞ α_n = ∞, or equivalently, ∏_{n=0}^∞ (1 − α_n) := lim_{n→∞} ∏_{k=0}^n (1 − α_k) = 0;
(ii) lim sup_{n→∞} β_n ≤ 0; or
(iii) ∑_n α_n β_n is convergent.

Then, lim_{n→∞} s_n = 0.

Proof. First assume that (i) and (ii) hold. For any ε > 0, let N ≥ 1 be an integer big enough so that

β_n < ε,  for n > N.

It follows from (5) that, for n > N,

s_{n+1} ≤ (1 − α_n)s_n + εα_n ≤ (1 − α_n)(1 − α_{n−1})s_{n−1} + ε(1 − (1 − α_n)(1 − α_{n−1})).

Hence, by induction, we obtain

s_{n+1} ≤ ∏_{j=N}^n (1 − α_j) s_N + ε(1 − ∏_{j=N}^n (1 − α_j)),  n > N.

By condition (i), after taking the lim sup as n → ∞ in the last inequality, we deduce that

lim sup_{n→∞} s_{n+1} ≤ ε.

Next, assume that (i) and (iii) hold. Then, repeatedly using (5), we get that, for all n > m,

s_{n+1} ≤ ∏_{j=m}^n (1 − α_j) s_m + ∑_{j=m}^n α_j β_j.  (6)

Letting in (6) first n → ∞ and then m → ∞, we obtain

lim sup_n s_n ≤ 0.  □
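A quick numerical sanity check of Lemma 2.1 under conditions (i) and (ii) is easy to run. The parameter choice below is hypothetical (not from the paper): take equality in (5) with α_n = β_n = 1/(n+1), for which (i) and (ii) hold; one can check by telescoping that then s_n = H_n/n (H_n the n-th harmonic number), which indeed tends to 0.

```python
# Iterate s_{n+1} = (1 - a_n) s_n + a_n * b_n with a_n = b_n = 1/(n+1),
# taking equality in (5); by Lemma 2.1, s_n must tend to 0.
s = 1.0
for n in range(100_000):
    alpha = beta = 1.0 / (n + 1)
    s = (1 - alpha) * s + alpha * beta
print(s)   # roughly log(n)/n, i.e. about 1.2e-4 after 1e5 steps
```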

Lemma 2.2. See Ref. 2, Demiclosedness Principle. Assume that T is a nonexpansive self-mapping of a closed convex subset C of a Hilbert space H. If T has a fixed point, then I − T is demiclosed. That is, whenever (x_n) is a sequence in C weakly converging to some x ∈ C and the sequence {(I − T)x_n} strongly converges to some y, it follows that (I − T)x = y. Here, I is the identity operator of H.

Lemma 2.3. See Ref. 12, Optimality Condition. Let C be a closed convex subset of a Hilbert space H, and let f: C → ℝ ∪ {∞} be a proper lower-semicontinuous differentiable convex function. If x* is a solution to the minimization problem

f(x*) = inf_{x∈C} f(x),

then

⟨f′(x*), x − x*⟩ ≥ 0,  x ∈ C.  (7)

In particular, if x* solves Problem (P1), then

⟨u − Ax*, x − x*⟩ ≤ 0,  x ∈ C,  (8)

where C = ∩_{i=1}^N C_i.

Proof. Since C is convex,

x* + t(x − x*) ∈ C,  for all x ∈ C and 0 < t < 1.

Hence,

lim_{t→0+} [f(x* + t(x − x*)) − f(x*)]/t = ⟨f′(x*), x − x*⟩ ≥ 0.

In particular, if f(x) = (1/2)⟨Ax, x⟩ − ⟨x, u⟩, where A is a self-adjoint bounded linear operator on a real Hilbert space H, then (8) follows from (7) and the fact that f′(x) = Ax − u.  □

The following lemma is an immediate consequence of the inner product.

Lemma 2.4. In a real Hilbert space H, there holds the inequality

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩,  x, y ∈ H.
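Lemma 2.4 follows from expanding ‖x + y‖² = ‖x‖² + 2⟨y, x⟩ + ‖y‖² and noting 2⟨y, x + y⟩ = 2⟨y, x⟩ + 2‖y‖², so the slack in the inequality is exactly ‖y‖². A small numerical check on random vectors (a hypothetical sample, just to illustrate):

```python
import numpy as np

# Check ||x + y||^2 <= ||x||^2 + 2<y, x + y> on random 5-dimensional vectors;
# the difference lhs - rhs equals -||y||^2 and so is never positive.
rng = np.random.default_rng(0)
worst = float("-inf")
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    lhs = np.linalg.norm(x + y) ** 2
    rhs = np.linalg.norm(x) ** 2 + 2 * np.dot(y, x + y)
    worst = max(worst, lhs - rhs)
print(worst)   # strictly negative
```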

3. Iterative Algorithms

Let N ≥ 1 be a positive integer, and let {T_i}_{i=1}^N be N nonexpansive mappings in a real Hilbert space H. Let C_i denote the fixed point set of T_i, i.e., C_i := {x ∈ H: T_i x = x}, and let

C := ∩_{i=1}^N C_i.

We will assume throughout this section that C ≠ ∅. The purpose of this section is to devise an iterative algorithm to solve the following quadratic minimization problem:

(P2)  min_{x∈C} (1/2)⟨Ax, x⟩ − ⟨x, u⟩,

where A is a self-adjoint bounded linear operator on H such that

⟨Ax, x⟩ ≥ γ‖x‖²,  for all x ∈ H and some γ > 0,

and where we always assume 0 < γ < 1 and u ∈ H.

Note that the formulation of Problem (P2) above is slightly more general than the formulation of Problem (P1) in the introduction, because each closed convex subset D of a Hilbert space is the fixed point set of the associated projection P_D, which is nonexpansive; but not every nonexpansive mapping is a projection (see Ref. 3).

Now, we introduce the iterative algorithm. Let a sequence (α_n) of real numbers in the interval [0, 1] be given. Starting with an arbitrary initial x_0 ∈ H, we define the sequence (x_n) recursively by the following algorithm:

x_1 = (I − α_1 A) T_1 x_0 + α_1 u,
x_2 = (I − α_2 A) T_2 x_1 + α_2 u,
...
x_N = (I − α_N A) T_N x_{N−1} + α_N u,
x_{N+1} = (I − α_{N+1} A) T_1 x_N + α_{N+1} u,
...


We can rewrite the sequence (x_n) generated by the algorithm above in the more compact form

x_{n+1} = (I − α_{n+1} A) T_{n+1} x_n + α_{n+1} u,  n ≥ 0,  (9)

where T_n := T_{n mod N} and the mod function takes values in {1, 2, ..., N}.

Remark 3.1. If N = 1 and A = I, algorithm (9) [i.e., algorithm (1)] was proposed in Ref. 4.

The convergence of algorithms (1) and (9) depends upon the control sequence (α_n). Now, we list the conditions that will be imposed on (α_n):

(C1)  lim_{n→∞} α_n = 0,

(C2)  ∑_{n=1}^∞ α_n = ∞.
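Algorithm (9) is straightforward to sketch numerically. The instance below is hypothetical toy data, not from the paper: N = 2, with T_1, T_2 the projections onto the half-planes {x: x_1 ≥ 1} and {x: x_2 ≥ 1}, A = (1/2)I (so γ = 1/2), u = (2, 0), and α_n = 1/n (which satisfies (C1) and (C2)). For this separable problem one finds by hand that the unique solution of (P2) is x* = (4, 1).

```python
import numpy as np

A = 0.5 * np.eye(2)                  # self-adjoint, strongly positive, gamma = 0.5
u = np.array([2.0, 0.0])

def T1(x):                           # projection onto C1 = {x: x[0] >= 1}
    return np.array([max(x[0], 1.0), x[1]])

def T2(x):                           # projection onto C2 = {x: x[1] >= 1}
    return np.array([x[0], max(x[1], 1.0)])

mappings = [T1, T2]
x = np.zeros(2)
for n in range(1, 100_001):
    alpha = 1.0 / n                  # control sequence alpha_n = 1/n
    T = mappings[(n - 1) % 2]        # cyclic control: T_n = T_{n mod N}
    x = (np.eye(2) - alpha * A) @ T(x) + alpha * u

print(x)                             # approaches x* = (4, 1)
```

Note that the projections here are firmly nonexpansive, so the fixed-point-set assumption (10) of Theorem 3.1 below is automatically satisfied.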

Note that Halpern (Ref. 4) showed that Conditions (C1) and (C2) are necessary in the sense that, if algorithm (1) converges strongly for every nonexpansive mapping T: C → C such that F(T) ≠ ∅, then (C1) and (C2) must be satisfied. It is indeed an open question whether (C1) and (C2) are also sufficient. Lions (Ref. 5) proved the strong convergence of (1) under Conditions (C1), (C2), and

(C3)  lim_{n→∞} (α_n − α_{n+1})/α²_{n+1} = 0.

Wittmann (Ref. 6) replaced Condition (C3) by

(C4)  ∑_{n=1}^∞ |α_n − α_{n+1}| < ∞.

This includes the particular case where (α_n) is decreasing, as discussed in Ref. 8.

If N > 1 and A = I, algorithm (9) was studied by many authors; these include Lions (Ref. 5) under Conditions (C1), (C2), and

(C5)  lim_{n→∞} (α_n − α_{n+N})/α²_{n+N} = 0,

and Bauschke (Ref. 7) under Conditions (C1), (C2), and

(C6)  ∑_{n=1}^∞ |α_n − α_{n+N}| < ∞.

Deutsch and Yamada (Ref. 1) applied algorithm (9) to a more general optimization problem than Problem (P2); see Problem (P3) below. Their convergence results were proved under Conditions (C1), (C2), and (C6).


The feature of this paper is to replace the Lions Condition (C5) and Condition (C6) by the following new condition:

(C7)  lim_{n→∞} (α_n − α_{n+N})/α_{n+N} = 0,  or equivalently,  lim_{n→∞} α_n/α_{n+N} = 1.

Remark 3.2.

(i) Condition (C7) removes the square in the denominator of Condition (C5) and is thus more general than (C5). Indeed, (C7) [see also (C6)] includes the natural choice α_n = 1/n, while (C5) does not.

(ii) Condition (C6) can be rewritten as

∑_{n=1}^∞ α_{n+N} (|α_{n+N} − α_n|/α_{n+N}) < ∞.

By Condition (C2), we get

lim inf_{n→∞} |α_{n+N} − α_n|/α_{n+N} = 0.

Hence, Condition (C6) implies Condition (C7) provided lim_{n→∞} α_n/α_{n+N} exists. However, Conditions (C6) and (C7) are not comparable in general [coupled with Conditions (C1) and (C2)]: Neither of them implies the other. Example 3.1 below shows a control sequence (α_n) which satisfies Conditions (C1), (C2), and (C7), but fails to satisfy Condition (C6); Example 3.2 shows a control sequence (α_n) which satisfies Conditions (C1), (C2), and (C6), but fails to satisfy Condition (C7).

Example 3.1. Let the sequence (α_n) be given by

α_n = 1/√n, if n is odd;  α_n = 1/(√n − 1), if n is even.

We may assume that N is odd, since the case of N even is similar. It is not hard to find that

α_n/α_{n+N} = (√(n+N) − 1)/√n, if n is odd;  α_n/α_{n+N} = √(n+N)/(√n − 1), if n is even.

Hence,

lim_{n→∞} α_n/α_{n+N} = 1,

and Condition (C7) is satisfied. To see that (α_n) fails to satisfy Condition (C6), we calculate

|α_n − α_{n+N}| = |1/√n − 1/(√(n+N) − 1)| = [1/(√n(√(n+N) − 1))] |1 − N/(√(n+N) + √n)|, if n is odd;

|α_n − α_{n+N}| = |1/(√n − 1) − 1/√(n+N)| = [1/((√n − 1)√(n+N))] (1 + N/(√(n+N) + √n)), if n is even.

It then follows that |α_n − α_{n+N}| ≥ 1/(2(n+N)) for all n such that N/(√(n+N) + √n) ≤ 1/2. Hence,

∑ |α_n − α_{n+N}| = ∞, and Condition (C6) is not satisfied.

Example 3.2. Take sequences (m_k) and (n_k) of positive integers such that:

(i) m_1 = 1, m_k < n_k, and max{2n_k, n_k + N} < m_{k+1}, for k ≥ 1;
(ii) ∑_{i=m_k}^{n_k} i^{−1} > 1, for k ≥ 1.

Define (α_n) by

α_i = 1/i, if m_k ≤ i ≤ n_k, k ≥ 1;
α_i = 1/(2n_k), if n_k < i < m_{k+1}, k ≥ 1.

Then, (α_n) is decreasing and

lim_{n→∞} α_n = 0.

Hence Conditions (C6) and (C1) are satisfied. Condition (C2) is also satisfied since, by (ii),

∑_{n=1}^∞ α_n ≥ ∑_{k=1}^∞ ∑_{i=m_k}^{n_k} α_i = ∞.

On the other hand, we have

α_{n_k}/α_{n_k+N} = 2,  for each k ≥ 1.

This implies that Condition (C7) is not satisfied.

Now, we prove the main result of the paper.

Theorem 3.1. Let Conditions (C1), (C2), and (C7) or (C6) be satisfied. Assume in addition that

C = ∩_{i=1}^N F(T_i) = F(T_1 T_2 ··· T_N) = F(T_N T_1 ··· T_{N−1}) = ··· = F(T_2 T_3 ··· T_N T_1).  (10)

Then, the sequence (x_n) generated by algorithm (9) converges in norm to the unique solution x* of Problem (P2).

Proof. We shall divide the proof into several steps.

Step 1. (x_n) is bounded. Indeed, since A is self-adjoint, we have that, for any 0 < α < ‖A‖^{−1}, I − αA is positive. Hence,

‖I − αA‖ = sup_{‖x‖=1} ⟨(I − αA)x, x⟩ ≤ 1 − αγ.

Without loss of generality, we may assume throughout the proof below that

α_n < ‖A‖^{−1},  for all n.

It follows that, for p ∈ C,

‖x_{n+1} − p‖ = ‖(I − α_{n+1}A)(T_{n+1}x_n − p) + α_{n+1}(u − Ap)‖ ≤ (1 − γα_{n+1})‖x_n − p‖ + α_{n+1}‖u − Ap‖.

Hence, by induction, we obtain

‖x_n − p‖ ≤ max{‖x_0 − p‖, ‖u − Ap‖/γ},  n ≥ 0.

Step 2. ‖x_{n+1} − T_{n+1}x_n‖ → 0. This is because Step 1 and the definition of the algorithm imply that

‖x_{n+1} − T_{n+1}x_n‖ = α_{n+1}‖u − A T_{n+1}x_n‖ → 0,  since α_n → 0.

Step 3. ‖x_{n+N} − x_n‖ → 0. Indeed, we have (note that T_{n+N} = T_n)

‖x_{n+N} − x_n‖ = ‖(I − α_{n+N}A)(T_{n+N}x_{n+N−1} − T_n x_{n−1}) + (α_{n+N} − α_n)(u − A T_n x_{n−1})‖
≤ (1 − γα_{n+N})‖x_{n+N−1} − x_{n−1}‖ + M|α_{n+N} − α_n|
= (1 − γα_{n+N})‖x_{n+N−1} − x_{n−1}‖ + γα_{n+N}β_n,

where M is a constant such that

‖u − A T_n x_{n−1}‖ ≤ M,  for all n,

and

β_n := Mγ^{−1}α_{n+N}^{−1}|α_{n+N} − α_n|.

Applying Lemma 2.1 together with Conditions (C2) and (C7), we obtain that ‖x_{n+N} − x_n‖ → 0.

Step 4. x_n − T_{n+N} ··· T_{n+1}x_n → 0 strongly. Indeed, noting that each T_n is nonexpansive and using Step 2, we have the (finite) table below:

x_{n+N} − T_{n+N}x_{n+N−1} → 0 strongly,
T_{n+N}x_{n+N−1} − T_{n+N}T_{n+N−1}x_{n+N−2} → 0 strongly,
...
T_{n+N} ··· T_{n+2}x_{n+1} − T_{n+N} ··· T_{n+1}x_n → 0 strongly.

Adding up the table above yields x_{n+N} − T_{n+N} ··· T_{n+1}x_n → 0 strongly, which, combined with Step 3, gives

x_n − T_{n+N} ··· T_{n+1}x_n → 0 strongly.

Step 5. lim sup_{n→∞} ⟨u − Ax*, x_n − x*⟩ ≤ 0, where x* is the unique solution of Problem (P2). Take a subsequence (x_{n_j}) of (x_n) such that

lim sup_{n→∞} ⟨u − Ax*, x_n − x*⟩ = lim_{j→∞} ⟨u − Ax*, x_{n_j} − x*⟩.  (11)

Since (x_n) is bounded, we may also assume that there exists some x̃ ∈ H such that x_{n_j} → x̃ weakly. Since the pool of mappings is finite, passing to a further subsequence if necessary, we may further assume that, for some i ∈ {1, 2, ..., N},

T_{n_j} ≡ T_i,  for all j ≥ 1.

It follows from Step 4 that

x_{n_j} − T_{i+N} ··· T_{i+1}x_{n_j} → 0 strongly.

Lemma 2.2 then ensures that the weak limit x̃ of (x_{n_j}) is a fixed point of the mapping T_{i+N} ··· T_{i+1}. Together with assumption (10), this implies that x̃ ∈ F(T_{i+N} ··· T_{i+1}) = C. Therefore, we have by Lemma 2.3 and (11) that

lim sup_{n→∞} ⟨u − Ax*, x_n − x*⟩ = ⟨u − Ax*, x̃ − x*⟩ ≤ 0.

Step 6. x_n → x* strongly. We can write

x_{n+1} − x* = (I − α_{n+1}A)(T_{n+1}x_n − x*) + α_{n+1}(u − Ax*).

Apply Lemma 2.4 to get

‖x_{n+1} − x*‖² ≤ ‖(I − α_{n+1}A)(T_{n+1}x_n − x*)‖² + 2α_{n+1}⟨u − Ax*, x_{n+1} − x*⟩
≤ (1 − γα_{n+1})‖x_n − x*‖² + 2α_{n+1}⟨u − Ax*, x_{n+1} − x*⟩.

Using Lemma 2.1 and Step 5, we conclude that ‖x_n − x*‖² → 0.  □

Remark 3.3. Conditions (C1), (C2), and (C6) were used by Bauschke (Ref. 7) to find a solution to the following convex feasibility problem:

x ∈ ∩_{i=1}^N F(T_i).

In Ref. 13, it is shown that (C6) can be replaced by (C7).

Remark 3.4. Assumption (10) in Theorem 3.1 is automatically satisfied (Ref. 7) if each T_i is attracting nonexpansive; that is, T_i is nonexpansive and satisfies

‖T_i x − p‖ < ‖x − p‖,  for all x ∉ F(T_i) and p ∈ F(T_i).

In particular, assumption (10) is satisfied if each T_i is firmly nonexpansive (Ref. 3), i.e., T_i satisfies the following property:

⟨x − y, T_i x − T_i y⟩ ≥ ‖T_i x − T_i y‖²,  for x, y ∈ H.

Since a projection is firmly nonexpansive, we have the following consequence of Theorem 3.1.


Corollary 3.1. Let Conditions (C1), (C2), and (C7) or (C6) be satisfied. Let x_0 ∈ H be chosen arbitrarily, and let (x_n) be generated by the following iterative algorithm:

x_{n+1} := (I − α_{n+1}A)P_{n+1}x_n + α_{n+1}u,  n ≥ 0,

where P_k = P_{C_k}, 1 ≤ k ≤ N. Then, (x_n) converges in norm to the unique solution x* of Problem (P2). In particular, the sequence (x_n) defined by

x_{n+1} := (I − [1/(n+1)]A)P_{n+1}x_n + [1/(n+1)]u,  n ≥ 0,

converges strongly to the unique solution x* of (P2).

Next, we consider a more general minimization problem than (P2), namely,

(P3)  min_{x∈C} θ(x),

where C is the intersection of the fixed point sets of nonexpansive mappings T_i: H → H, 1 ≤ i ≤ N, and θ: H → ℝ ∪ {∞} is a convex function twice differentiable on some open subset U ⊃ ∆ := ∪_{i=1}^N T_i(H) such that θ″: U → B(H) is uniformly strongly positive and uniformly bounded (USPUB) over ∆ (Ref. 1); that is, θ″(x) is self-adjoint for x ∈ ∆, and there are constants 0 < m ≤ M with the property that

m‖v‖² ≤ ⟨θ″(x)v, v⟩ ≤ M‖v‖²,  for all x ∈ ∆ and v ∈ H.  (12)

It is known that Problem (P3) has a unique solution, which we denote by x*. In Ref. 1, Deutsch and Yamada proposed an iterative scheme similar to (9). Their control conditions on the parameters (α_n) are again (C1), (C2), and (C6). We show again that (C6) can be replaced by (C7).

We shall adopt some notations used in Ref. 1:

µ is a scalar such that 0 < µ < 2/M, where M satisfies (12).
ψ(x) := µθ(x) − (1/2)‖x‖².
L := max{|µm − 1|, |µM − 1|} < 1, where m, M satisfy (12).
For a nonexpansive mapping T: H → H and a real number α ∈ (0, 1), let T^α: H → H be defined by

T^α(x) := Tx − αµθ′(Tx) = (1 − α)Tx − αψ′(Tx),  x ∈ H.


We collect the following facts (Ref. 1) for later use:

|⟨ψ″(x)v, v⟩| ≤ L‖v‖²,  for x ∈ ∆ and v ∈ H;
‖ψ′(x) − ψ′(y)‖ ≤ L‖x − y‖,  for x, y ∈ ∆;
T^α is a contraction (Ref. 1, Lemma 3.6): ‖T^α x − T^α y‖ ≤ [1 − α(1 − L)]‖x − y‖,  x, y ∈ H.

The algorithm introduced by Deutsch and Yamada (Ref. 1) is the following:

x_{n+1} := T_{n+1}^{α_{n+1}}(x_n) = T_{n+1}x_n − α_{n+1}µθ′(T_{n+1}x_n),  n ≥ 0,  (13)

where x_0 ∈ H is chosen arbitrarily.

The following result parallels the main result (Theorem 3.7) of Deutsch and Yamada (Ref. 1). Here, we replace the Deutsch–Yamada condition (C6) [i.e., (B3) in Ref. 1] by (C7). Our proof below is shorter and simpler than the proof of Deutsch and Yamada (Ref. 1).

Theorem 3.2. Let Conditions (C1), (C2), and (C7) be satisfied. Assume in addition that

C = ∩_{i=1}^N F(T_i) = F(T_1 T_2 ··· T_N) = F(T_N T_1 ··· T_{N−1}) = ··· = F(T_2 T_3 ··· T_N T_1).  (14)

Then, the sequence (x_n) generated by algorithm (13) converges in norm to the unique solution x* of Problem (P3).

Proof. We shall again divide the proof into several steps.

Step 1. (x_n) is bounded. Indeed, we have

‖x_{n+1} − x*‖ = ‖T_{n+1}^{α_{n+1}}x_n − x*‖ ≤ ‖T_{n+1}^{α_{n+1}}x_n − T_{n+1}^{α_{n+1}}x*‖ + ‖T_{n+1}^{α_{n+1}}x* − x*‖
≤ [1 − (1 − L)α_{n+1}]‖x_n − x*‖ + α_{n+1}µ‖θ′(x*)‖.

By induction, we get (note that 0 ≤ L < 1)

‖x_{n+1} − x*‖ ≤ max{‖x_0 − x*‖, [µ/(1 − L)]‖θ′(x*)‖},  n ≥ 0.


Step 2. ‖x_{n+1} − T_{n+1}x_n‖ → 0. Indeed, by Step 1, {T_{n+1}x_n} is bounded, and so is {θ′(T_{n+1}x_n)}. Thus,

‖x_{n+1} − T_{n+1}x_n‖ = α_{n+1}µ‖θ′(T_{n+1}x_n)‖ → 0.

Step 3. ‖x_{n+N} − x_n‖ → 0, as n → ∞. Indeed, noting that T_{n+N} = T_n, we have

‖x_{n+N} − x_n‖ = ‖T_{n+N}^{α_{n+N}}x_{n+N−1} − T_n^{α_n}x_{n−1}‖
≤ ‖T_{n+N}^{α_{n+N}}x_{n+N−1} − T_{n+N}^{α_{n+N}}x_{n−1}‖ + ‖T_{n+N}^{α_{n+N}}x_{n−1} − T_n^{α_n}x_{n−1}‖
≤ [1 − (1 − L)α_{n+N}]‖x_{n+N−1} − x_{n−1}‖ + |α_{n+N} − α_n| µ‖θ′(T_n x_{n−1})‖.

Since {x_n} is bounded, there exists a constant c > 0 such that

‖θ′(T_n x_{n−1})‖ ≤ c,  for all n.

Hence, it follows from the last inequality that

‖x_{n+N} − x_n‖ ≤ [1 − (1 − L)α_{n+N}]‖x_{n+N−1} − x_{n−1}‖ + µcα_{n+N}|α_{n+N} − α_n|/α_{n+N}.

By Lemma 2.1 and Condition (C7), we derive that ‖x_{n+N} − x_n‖ → 0.

Step 4. x_n − T_{n+N} ··· T_{n+1}x_n → 0 strongly. This follows from the same argument as in Step 4 of the proof of Theorem 3.1.

Step 5. lim sup_{n→∞} ⟨−θ′(x*), x_n − x*⟩ ≤ 0, where x* is the unique solution of Problem (P3). Take a subsequence (x_{n_j}) of (x_n) such that

lim sup_{n→∞} ⟨−θ′(x*), x_n − x*⟩ = lim_{j→∞} ⟨−θ′(x*), x_{n_j} − x*⟩.  (15)

Since (x_n) is bounded, we may also assume that there exists some x̃ ∈ H such that x_{n_j} → x̃ weakly. Since the pool of mappings is finite, passing to a further subsequence if necessary, we may further assume that, for some i ∈ {1, 2, ..., N},

T_{n_j} ≡ T_i,  for all j ≥ 1.

It follows from Step 4 that

x_{n_j} − T_{i+N} ··· T_{i+1}x_{n_j} → 0 strongly.

Then, Lemma 2.2 ensures that the weak limit x̃ of (x_{n_j}) is a fixed point of the mapping T_{i+N} ··· T_{i+1}. Together with assumption (14), this implies that x̃ ∈ F(T_{i+N} ··· T_{i+1}) = C.


Therefore, we have by Lemma 2.3 and (15) that

lim sup_{n→∞} ⟨−θ′(x*), x_n − x*⟩ = ⟨−θ′(x*), x̃ − x*⟩ ≤ 0.

Step 6. x_n → x* strongly. Indeed, noting that

T_{n+1}^{α_{n+1}}x* = x* − α_{n+1}µθ′(x*),

we have, by Lemma 2.4,

‖x_{n+1} − x*‖² = ‖T_{n+1}^{α_{n+1}}x_n − T_{n+1}^{α_{n+1}}x* − α_{n+1}µθ′(x*)‖²
≤ ‖T_{n+1}^{α_{n+1}}x_n − T_{n+1}^{α_{n+1}}x*‖² + 2µα_{n+1}⟨−θ′(x*), x_{n+1} − x*⟩
≤ [1 − (1 − L)α_{n+1}]‖x_n − x*‖² + 2µα_{n+1}⟨−θ′(x*), x_{n+1} − x*⟩.

An application of Lemma 2.1 together with Step 5 yields that x_n → x* strongly.  □
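Algorithm (13) can also be sketched on hypothetical toy data (not from the paper): take θ(x) = (1/4)‖x‖² − ⟨x, u_0⟩, so θ′(x) = (1/2)x − u_0 and θ″ = (1/2)I, giving m = M = 1/2; choose µ = 2 ∈ (0, 2/M), for which L = max{|µm − 1|, |µM − 1|} = 0. With T_1, T_2 the projections onto the half-planes {x_1 ≥ 1} and {x_2 ≥ 1}, the minimizer of θ over C is x* = (4, 1), found by hand.

```python
import numpy as np

u0 = np.array([2.0, 0.0])
mu = 2.0

def grad_theta(x):                  # theta'(x) = 0.5 x - u0
    return 0.5 * x - u0

def T1(x):                          # projection onto {x: x[0] >= 1}
    return np.array([max(x[0], 1.0), x[1]])

def T2(x):                          # projection onto {x: x[1] >= 1}
    return np.array([x[0], max(x[1], 1.0)])

mappings = [T1, T2]
x = np.zeros(2)
for n in range(1, 50_001):
    alpha = 1.0 / n                 # alpha_n = 1/n satisfies (C1), (C2), (C7)
    Tx = mappings[(n - 1) % 2](x)
    x = Tx - alpha * mu * grad_theta(Tx)   # algorithm (13)

print(x)                            # approaches x* = (4, 1)
```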

Remark 3.5. Due to the presence of the restriction upon the parameter µ [i.e., µ ∈ (0, 2/M), which makes the mapping T^α a contraction for each 0 < α < 1], algorithm (13) does not in fact cover algorithm (9), though Problem (P3) is more general than Problem (P2).

Remark 3.6. Recently, Yamada (Ref. 14, Theorem 3.3) adapted algorithm (13) for variational inequalities. He showed that, if for some κ > 0 and η > 0, 𝓕: H → H is κ-Lipschitzian and η-strongly monotone on ∆ := ∪_{i=1}^N T_i(H), i.e., if

‖𝓕x − 𝓕y‖ ≤ κ‖x − y‖,  ⟨𝓕x − 𝓕y, x − y⟩ ≥ η‖x − y‖²,  for x, y ∈ ∆,

then for any µ ∈ (0, 2η/κ²) and any sequence (α_n) satisfying Conditions (C1), (C2), and (C6), the sequence (x_n)_{n=0}^∞ generated, with any x_0 ∈ H, by

x_{n+1} := T_{n+1}x_n − α_{n+1}µ𝓕(T_{n+1}x_n)  (16)

converges strongly to the unique solution x* of the variational inequality

⟨v − x*, 𝓕x*⟩ ≥ 0,  v ∈ C.  (17)

Using ideas similar to those used in the proof of Theorem 3.2, very recently we showed in Ref. 15 that, in this result of Yamada, Condition (C6) can again be replaced by Condition (C7). That is, if 𝓕: H → H is κ-Lipschitzian and η-strongly monotone on ∆ := ∪_{i=1}^N T_i(H), if (x_n)_{n=0}^∞, with x_0 ∈ H, is generated by algorithm (16), and if the control sequence (α_n) satisfies Conditions (C1), (C2), and (C7), then (x_n) converges in norm to the unique solution x* of the variational inequality (17).


4. Quadratic–Quadratic Optimization

Quadratic–quadratic optimization refers to those optimization problems in which the objective functions and constraints are all quadratic. For optimality conditions in finite-dimensional real Hilbert spaces, see Hiriart-Urruty (Ref. 16) and references therein. We shall concentrate on iterative algorithms for quadratic–quadratic optimization problems of the following form:

(P4)  min  f(x) := (1/2)⟨A_0 x, x⟩ − ⟨x, u_0⟩ + c_0,
      s.t.  g_j(x) := (1/2)⟨A_j x, x⟩ − ⟨x, u_j⟩ + c_j ≤ 0,  j = 1, ..., N,

where A_j, j = 0, 1, ..., N, are bounded symmetric linear operators on a real Hilbert space H, {u_j}_{j=0}^N are points in H, and {c_j}_{j=0}^N are real numbers. We assume that all the operators A_j are positive, i.e.,

⟨A_j x, x⟩ ≥ 0,  for x ∈ H, 0 ≤ j ≤ N,

so that Problem (P4) is a convex optimization problem. Let S denote the solution set of Problem (P4).

If A_0 is strongly positive, then we can apply Corollary 3.1 to find the unique solution of Problem (P4). But, if A_0 is not assumed to be strongly positive, the solution set S may be empty or contain more than one point. As a result, the iterative algorithm developed in Section 3 does not apply. In order to find an iterative solution, we assume that there is a j, 1 ≤ j ≤ N, such that A_j is strongly positive; then, the feasible set

C := {x ∈ H: g_i(x) ≤ 0, 1 ≤ i ≤ N}

is bounded. Indeed, if λ_j > 0 satisfies

⟨A_j x, x⟩ ≥ λ_j‖x‖²,  for all x ∈ H,

we have, for x ∈ C,

(1/2)λ_j‖x‖² ≤ ⟨x, u_j⟩ − c_j ≤ ‖u_j‖‖x‖ + |c_j|.

This implies that

‖x‖ ≤ (‖u_j‖ + (‖u_j‖² + 2λ_j|c_j|)^{1/2})/λ_j.

Hence, C is bounded. We shall also assume that Problem (P4) is consistent (i.e., C is nonempty).

For ε > 0, consider the following perturbed problem:

(P4_ε)  min  f_ε(x) := (1/2)⟨A_ε x, x⟩ − ⟨x, u_0⟩ + c_0,
        s.t.  g_j(x) := (1/2)⟨A_j x, x⟩ − ⟨x, u_j⟩ + c_j ≤ 0,  j = 1, 2, ..., N,


where ε > 0 and A_ε := εI + A_0 is strongly positive. Let P_i be the projection from H onto the closed convex set

C_i := {x ∈ H: g_i(x) ≤ 0},  1 ≤ i ≤ N.

Starting with an arbitrary initial x_0 ∈ H, we define a sequence (x_n^ε) by

x_{n+1}^ε = (I − α_{n+1}A_ε)P_{n+1}x_n^ε + α_{n+1}u_0,  n ≥ 0.

Theorem 4.1. Assume that {A_j}_{j=0}^N are self-adjoint bounded linear operators on a real Hilbert space H such that A_j is strongly positive for at least one j, 1 ≤ j ≤ N. Assume that the sequence (α_n) satisfies Conditions (C1), (C2), and (C7) or (C6). Then, (x_n^ε) converges in norm to the unique solution x_ε ∈ C of Problem (P4_ε). Moreover, (x_ε) converges in norm, as ε → 0, to the solution v̂_0 of Problem (P4) with minimal norm,

‖v̂_0‖ = min{‖v̂‖: v̂ ∈ S}.

Proof. By Corollary 3.1, we know that (x_n^ε) converges in norm to the unique solution x_ε of Problem (P4_ε).

Let v̂ ∈ S. Since f_ε(x_ε) ≤ f_ε(v̂), we derive that

2^{−1}ε‖x_ε‖² + 2^{−1}⟨A_0x_ε, x_ε⟩ − ⟨x_ε, u_0⟩ ≤ 2^{−1}ε‖v̂‖² + 2^{−1}⟨A_0v̂, v̂⟩ − ⟨v̂, u_0⟩.  (18)

Since v̂ solves Problem (P4) and x_ε ∈ C, we have

2^{−1}⟨A_0x_ε, x_ε⟩ − ⟨x_ε, u_0⟩ ≥ 2^{−1}⟨A_0v̂, v̂⟩ − ⟨v̂, u_0⟩.  (19)

It follows from (18) and (19) that

‖x_ε‖ ≤ ‖v̂‖,  v̂ ∈ S.  (20)

Now let ε_n → 0 as n → ∞ and x_{ε_n} → x_∞ weakly.

Since the objective function f is convex and continuous, it is weakly lower semicontinuous. Then, we get

f(x_∞) ≤ lim inf_{n→∞} f(x_{ε_n}) ≤ lim inf_{n→∞} (2^{−1}ε_n‖x_{ε_n}‖² + f(x_{ε_n})) = lim inf_{n→∞} f_{ε_n}(x_{ε_n}) ≤ lim inf_{n→∞} f_{ε_n}(v̂) = f(v̂).

Hence, x_∞ ∈ S. By (20) and the weak lower semicontinuity of the norm, we also have

‖x_∞‖ ≤ min{‖v̂‖: v̂ ∈ S}.

Since the point in S with minimum norm is unique, denoting this point by v̂_0, we have x_∞ = v̂_0. Thus, we have shown that v̂_0 is the only weak limit point of (x_ε); therefore,

x_ε → v̂_0 weakly, as ε → 0.

On the other hand, since the norm is weakly lower semicontinuous, we have [noting (20)]

‖v̂_0‖ ≤ lim inf_{ε→0} ‖x_ε‖ ≤ lim sup_{ε→0} ‖x_ε‖ ≤ ‖v̂_0‖.

That is,

lim_{ε→0} ‖x_ε‖ = ‖v̂_0‖.

Hence, x_ε → v̂_0 strongly.  □

We conclude this paper with a result on the trust region optimization problem, which is formulated as follows (see Ref. 16 for the optimality condition in a finite-dimensional Hilbert space):

(P5)  min  f(x) := (1/2)⟨Ax, x⟩ − ⟨x, u⟩ + c  over C := {x ∈ H: ‖x‖ ≤ δ},

where H is a real Hilbert space, A is a self-adjoint bounded linear operator on H, c is a real number, and δ is a positive number. Recall that the projection P from H onto the ball C is given by

Px = x, if ‖x‖ ≤ δ;  Px = δx/‖x‖, if ‖x‖ > δ.


Theorem 4.2. Let Conditions (C1), (C2), and (C7) or (C6) be satisfied. Let x_0 ∈ H be chosen arbitrarily.

(i) If A is strongly positive, then the sequence (x_n) generated by the iterative algorithm

x_{n+1} = δ(I − α_{n+1}A)x_n/‖x_n‖ + α_{n+1}u, if ‖x_n‖ > δ;
x_{n+1} = (I − α_{n+1}A)x_n + α_{n+1}u, if ‖x_n‖ ≤ δ,

converges in norm to the unique solution of Problem (P5).

(ii) If A is not assumed to be strongly positive, then for each ε > 0, the sequence (x_n^ε) generated by the following iterative algorithm:

x_{n+1}^ε = δ(I − α_{n+1}A_ε)x_n^ε/‖x_n^ε‖ + α_{n+1}u, if ‖x_n^ε‖ > δ;
x_{n+1}^ε = (I − α_{n+1}A_ε)x_n^ε + α_{n+1}u, if ‖x_n^ε‖ ≤ δ,

where A_ε = εI + A, converges in norm to some x_ε. Moreover, as ε → 0, x_ε converges strongly to the solution of Problem (P5) with minimal norm.
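The scheme of Theorem 4.2(i) can be sketched on a hypothetical toy trust-region instance (not from the paper): A = diag(1/2, 1) is strongly positive, u = (2, 0), δ = 1. The KKT conditions give the solution x* = (1, 0) on the boundary of the ball.

```python
import numpy as np

A = np.diag([0.5, 1.0])             # self-adjoint, strongly positive
u = np.array([2.0, 0.0])
delta = 1.0

x = np.zeros(2)
for n in range(1, 100_001):
    alpha = 1.0 / (n + 1)           # keeps alpha_n < ||A||^{-1}
    nrm = np.linalg.norm(x)
    if nrm > delta:                 # outside the ball: project first
        x = delta * (np.eye(2) - alpha * A) @ x / nrm + alpha * u
    else:                           # inside the ball: plain step
        x = (np.eye(2) - alpha * A) @ x + alpha * u

print(x)                            # approaches the trust-region solution (1, 0)
```

For this instance the iterates hover just outside the boundary, with first component 1 + (3/2)α_n, so the error decays like α_n.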

References

1. DEUTSCH, F., and YAMADA, I., Minimizing Certain Convex Functions over the Intersection of the Fixed-Point Sets of Nonexpansive Mappings, Numerical Functional Analysis and Optimization, Vol. 19, pp. 33–56, 1998.
2. GOEBEL, K., and KIRK, W. A., Topics in Metric Fixed-Point Theory, Cambridge University Press, Cambridge, England, 1990.
3. GOEBEL, K., and REICH, S., Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, Marcel Dekker, New York, NY, 1984.
4. HALPERN, B., Fixed Points of Nonexpanding Maps, Bulletin of the American Mathematical Society, Vol. 73, pp. 957–961, 1967.
5. LIONS, P. L., Approximation de Points Fixes de Contractions, Comptes Rendus de l'Académie des Sciences de Paris, Vol. 284, pp. 1357–1359, 1977.
6. WITTMANN, R., Approximation of Fixed Points of Nonexpansive Mappings, Archiv der Mathematik, Vol. 58, pp. 486–491, 1992.
7. BAUSCHKE, H. H., The Approximation of Fixed Points of Compositions of Nonexpansive Mappings in Hilbert Spaces, Journal of Mathematical Analysis and Applications, Vol. 202, pp. 150–159, 1996.
8. REICH, S., Approximating Fixed Points of Nonexpansive Mappings, Panamerican Mathematical Journal, Vol. 4, pp. 23–28, 1994.
9. ATTOUCH, H., Viscosity Solutions of Minimization Problems, SIAM Journal on Optimization, Vol. 6, pp. 769–806, 1996.
10. BAUSCHKE, H. H., and BORWEIN, J. M., On Projection Algorithms for Solving Convex Feasibility Problems, SIAM Review, Vol. 38, pp. 367–426, 1996.
11. COMBETTES, P. L., Hilbertian Convex Feasibility Problem: Convergence of Projection Methods, Applied Mathematics and Optimization, Vol. 35, pp. 311–330, 1997.
12. ODEN, J. T., Qualitative Methods in Nonlinear Mechanics, Prentice-Hall, Englewood Cliffs, New Jersey, 1986.
13. O'HARA, J. G., PILLAY, P., and XU, H. K., Iterative Approaches to Finding Nearest Common Fixed Points in Hilbert Spaces, Nonlinear Analysis (to appear).
14. YAMADA, I., The Hybrid Steepest Descent Method for Variational Inequality Problems over the Intersection of the Fixed-Point Sets of Nonexpansive Mappings, Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Edited by D. Butnariu, Y. Censor, and S. Reich, North-Holland, Amsterdam, Holland, pp. 473–504, 2001.
15. XU, H. K., and KIM, T. W., Convergence of Hybrid Steepest Descent Methods for Variational Inequalities, Preprint, 2003.
16. HIRIART-URRUTY, J. B., Conditions for Global Optimality 2, Journal of Global Optimization, Vol. 13, pp. 349–367, 1998.
