Math. Program., Ser. A 96: 139–160 (2003) Digital Object Identifier (DOI) 10.1007/s10107-002-0365-3
J.J. Ye · Qiji J. Zhu
Multiobjective optimization problem with variational inequality constraints

Received: November 2000 / Accepted: October 2001
Published online: December 19, 2002 – © Springer-Verlag 2002

Abstract. We study a general multiobjective optimization problem with variational inequality, equality, inequality and abstract constraints. Fritz John type necessary optimality conditions involving Mordukhovich coderivatives are derived. They lead to Kuhn-Tucker type necessary optimality conditions under additional constraint qualifications including the calmness condition, the error bound constraint qualification, the no nonzero abnormal multiplier constraint qualification, the generalized Mangasarian-Fromovitz constraint qualification, the strong regularity constraint qualification and the linear constraint qualification. We then apply these results to the multiobjective optimization problem with complementarity constraints and the multiobjective bilevel programming problem.

Key words. Multiobjective optimization – Variational inequality – Complementarity constraint – Constraint qualification – Bilevel programming problem – Preference – Utility function – Subdifferential calculus – Variational principle
1. Introduction

Let X, Y and Z be finite dimensional Banach spaces and let ≺ be a (nonreflexive) preference for vectors in Z. We consider the following multiobjective optimization problem with variational inequality constraints:

P   minimize   φ(x, y)
    subject to f_i(x, y) ≤ 0,  i = 1, 2, ..., M,
               f_i(x, y) = 0,  i = M+1, ..., N,
               (x, y) ∈ C,
               y ∈ Ω,  ⟨F(x, y), y − z⟩ ≤ 0  ∀z ∈ Ω,   (1)
where φ : X × Y → Z, f_i : X × Y → R ∪ {+∞}, i = 1, ..., N, F : X × Y → Y, C is a nonempty closed subset of X × Y and Ω is a closed convex subset of Y.

J.J. Ye: Department of Mathematics and Statistics, University of Victoria, Victoria, B.C., Canada V8W 3P4; e-mail: [email protected]. Research of this paper was supported by NSERC and a University of Victoria Internal Research Grant.

Q.J. Zhu: Department of Mathematics and Statistics, Western Michigan University, Kalamazoo, MI 49008, USA; e-mail: [email protected]. Research was supported by the National Science Foundation under grants DMS-9704203 and DMS-0102496.

Mathematics Subject Classification (2000): 49K24, 90C29

In practice, many problems can be formulated as the above multiobjective optimization problem with variational inequality constraints. For example, consider a firm
which produces several products labeled i = 1, ..., m using a number of different resources labeled j = 1, ..., n. The firm wishes to maximize profit and minimize employees' excess overtime, subject to a constraint on maintaining market share in the face of collusive action by competitors and subject to resource constraints. Let (x, y, u, v) ∈ R^{m+m+m+m} denote the firm's production and marketing levels and the competitors' production and marketing levels, respectively. Let the objective be the vector-valued function φ(x, y) = (−φ_1(x, y), φ_2(x, y)), where φ_1(x, y) is the profit and φ_2(x, y) is the excess overtime associated with production level x and marketing level y. The resource utilization function h_j(x, y) specifies the amount of resource j that is required for production level x and marketing level y. The market share function g_i(x, y, u, v) expresses the firm's market share of product i resulting from the firm's production x and marketing y and the competitors' production u and marketing v. For a given level (x, y) of production and marketing, the firm's minimum market share function for the i-th product in the face of competition is given by

σ_i(x, y) = min{g_i(x, y, u, v) : (u, v) ∈ W},

where W is the set of feasible production and marketing levels for the competitors. The firm's optimal production and marketing strategy can be obtained by solving the following optimization problem in the variables (x, y):

minimize   φ(x, y)
subject to h_j(x, y) ≤ b_j,  j = 1, ..., n,
           σ_i(x, y) ≥ a_i,  i = 1, ..., m,

where a_i and b_j denote the minimum fraction of market share of product i required by the firm and the total amount of resource j available to the firm, respectively, and the minimization is taken with respect to a preference that compromises between maximizing the profit and restricting the employees' excess overtime. This problem is the multiobjective version of the bilevel programs introduced by Bracken and McGill [4]. Under the condition that g_i is convex and differentiable in (u, v) and W is a convex set, the above problem is equivalent to the following optimization problem with variational inequality constraints:

minimize   φ(x, y)
subject to h_j(x, y) ≤ b_j,  j = 1, 2, ..., n,
           g_i(x, y, u, v) ≥ a_i,  i = 1, ..., m,
           (x, y, u, v) ∈ R^{m+m} × W,
           (u, v) ∈ W,  ⟨∇_{u,v} g_i(x, y, u, v), (u, v) − (u', v')⟩ ≤ 0  ∀(u', v') ∈ W,  i = 1, 2, ..., m,

where ∇ denotes the gradient.
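The passage from the inner minimization to the variational inequality is the standard first order characterization of convex programs; in the notation above, a worked restatement (ours, not verbatim from the paper):

```latex
% For fixed (x,y): since g_i(x,y,\cdot,\cdot) is convex and differentiable on the
% convex set W, a point (u,v) \in W minimizes g_i(x,y,\cdot,\cdot) over W if and only if
\langle \nabla_{u,v} g_i(x,y,u,v),\ (u',v') - (u,v) \rangle \;\ge\; 0
  \qquad \forall\, (u',v') \in W,
% which is exactly the variational inequality constraint above (written with the
% opposite sign), and then \sigma_i(x,y) = g_i(x,y,u,v), so the market share
% constraint \sigma_i(x,y) \ge a_i becomes the explicit inequality g_i(x,y,u,v) \ge a_i.
```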
We say that (x, y) is a feasible point for problem P if it satisfies all constraints of P. We say that (x̄, ȳ) is a local solution to problem P provided that it is a feasible point for P and there exists no other feasible point (x, y) in some neighborhood of (x̄, ȳ) such that φ(x, y) ≺ φ(x̄, ȳ).
Note that in the case where Ω = Y, the variational inequality constraint in (1) reduces to the equality constraint F(x, y) = 0 and problem P becomes a usual multiobjective optimization problem. In the area of multiobjective optimization, much research has been devoted to the weak Pareto solution and its generalizations. Let K be a closed cone in Z. The preference relation for two vectors x, y ∈ Z in a generalized weak Pareto sense is defined by x ≺ y if and only if x − y ∈ K and x ≠ y. In particular, if Z = R^n and K = {z ∈ R^n : z has nonpositive components}, then we have a preference in the weak Pareto sense. Necessary optimality conditions for (generalized) weak Pareto solutions were derived for optimization problems in [1, 5, 6, 11, 21–25] (see also the survey paper [7] for more information). In this paper, we work with a general preference which not only includes all preferences in the sense of the generalized weak Pareto but also includes other preferences that are not related to the weak Pareto solution and may not be representable by any utility function, such as the preference determined by the lexicographical order of the vectors.
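As a concrete aid, here is a minimal sketch (ours; the function names and the NumPy dependency are illustrative assumptions, not from the paper) of the two preferences just mentioned:

```python
import numpy as np

def pareto_prec(x, y):
    """Generalized weak Pareto preference with K the nonpositive orthant:
    x ≺ y iff x - y ∈ K and x != y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return bool(np.all(d <= 0) and np.any(d < 0))

def lex_prec(r, s):
    """Lexicographical preference: r ≺ s iff r_i = s_i for i <= p and r_{p+1} < s_{p+1}."""
    for ri, si in zip(r, s):
        if ri < si:
            return True
        if ri > si:
            return False
    return False  # r == s, and the preference is nonreflexive

# quick checks
assert pareto_prec([1, 2], [1, 3]) and not pareto_prec([1, 2], [2, 1])
assert lex_prec([1, 5], [2, 0]) and not lex_prec([1, 5], [1, 5])
```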
When Y = R^m and φ is a scalar function, problem P reduces to the single objective optimization problem with variational inequality constraints studied in [26, 27]. The central result of this paper is the following Fritz John type necessary optimality condition, which is proved in §3.

Theorem 1.2. Let (x̄, ȳ) be a local solution to the multiobjective optimization problem with variational inequality constraints P. Suppose that φ and F are Lipschitz near (x̄, ȳ), that f_i is lower semicontinuous for i = 1, ..., M and continuous for i = M+1, ..., N, that the preference ≺ is regular at φ(x̄, ȳ) and that the basic constraint qualification is satisfied at (x̄, ȳ). Then there exist µ_0 ∈ {0, 1}, µ_i ≥ 0, i = 1, ..., N, v^* ∈ Y^*, not all zero, τ_i = 1 ∀i = 1, 2, ..., M, τ_i ∈ {1, −1} ∀i = M+1, ..., N and a unit vector λ ∈ N(l(φ(x̄, ȳ)), φ(x̄, ȳ)) such that

0 ∈ µ_0 ∂⟨λ, φ⟩(x̄, ȳ) + Σ_{i∈{1,...,N}: µ_i>0} µ_i ∂(τ_i f_i)(x̄, ȳ) + Σ_{i∈{1,...,N}: µ_i=0} ∂^∞(τ_i f_i)(x̄, ȳ)
    + N(C, (x̄, ȳ)) + ∂⟨F, v^*⟩(x̄, ȳ) + {0} × D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*),   (2)
where ∂g denotes the limiting subdifferential of a function g (see Definition 2.2), N_Ω denotes the set-valued map y ⇒ N(Ω, y) and D^* denotes the coderivative of a set-valued map (see Definition 2.3).

Remark 1. Note that for an optimization problem with discontinuous inequality constraints, the nonbinding constraints may not be removed. For example, x̄ = 1 is the optimal solution for

minimize   −x
subject to δ_{[0,1]}(x) − x² ≤ 0,

where δ_A is the indicator function of the set A defined by δ_A(x) := 0 for x ∈ A and +∞ otherwise. However, x̄ = 1 is not an optimal solution of the problem with the inequality constraint removed. Therefore in Theorem 1.2 there is no complementary slackness condition. But if all the constraint functions f_i, i = 1, ..., N, are assumed to be Lipschitz continuous near the optimal solution (x̄, ȳ), then by virtue of the continuity of the constraint functions, (x̄, ȳ) is still a solution to the new problem with all nonbinding constraints removed. Since f_i, i = 1, ..., N, are Lipschitz continuous near (x̄, ȳ), we have ∂^∞(τ_i f_i)(x̄, ȳ) = {0}, i = 1, ..., N, and hence the basic constraint qualification is satisfied automatically. Applying Theorem 1.2 to the new problem and then setting µ_i = 0 for all i ∈ {1, ..., M} corresponding to the nonbinding constraints, we have the following version of the necessary optimality condition.

Theorem 1.3. Let (x̄, ȳ) be a local solution to the multiobjective optimization problem with variational inequality constraints P. Suppose that f_i, i = 1, ..., N, φ and F are Lipschitz near (x̄, ȳ) and the preference ≺ is regular at φ(x̄, ȳ). Then there exist µ_0 ∈ {0, 1}, µ_i ∈ R, i = 1, ..., N, v^* ∈ Y^*, not all zero, τ_i = 1 ∀i = 1, 2, ..., M, τ_i ∈ {1, −1} ∀i = M+1, ..., N and a unit vector λ ∈ N(l(φ(x̄, ȳ)), φ(x̄, ȳ)) such that

0 ∈ µ_0 ∂⟨λ, φ⟩(x̄, ȳ) + Σ_{i=1}^N µ_i ∂(τ_i f_i)(x̄, ȳ)
    + N(C, (x̄, ȳ)) + ∂⟨F, v^*⟩(x̄, ȳ) + {0} × D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*)   (3)

and

µ_i ≥ 0 ∀i = 1, 2, ..., M,   Σ_{i=1}^M µ_i f_i(x̄, ȳ) = 0.
Remark 2. In [28, Examples 3.4, 3.7 and 3.12], it was shown that the preferences determined by either the generalized weak Pareto or the lexicographical order are regular. In the case where the preference is determined by the weak Pareto concept and Z = R^q,

N(l(φ(x̄, ȳ)), φ(x̄, ȳ)) = R^q_+,

and in the case where the preference is determined by the generalized weak Pareto with a closed cone K,

N(l(φ(x̄, ȳ)), φ(x̄, ȳ)) = K^− := {s ∈ Z : ⟨s, t⟩ ≤ 0 ∀t ∈ K}.

In the case of the lexicographical order, i.e., r ≺ s if there exists an integer p ∈ {0, 1, ..., q−1} such that r_i = s_i, i = 1, ..., p, and r_{p+1} < s_{p+1},

N(l(φ(x̄, ȳ)), φ(x̄, ȳ)) = {a e_1 : a ≥ 0},

where e_1 = (1, 0, ..., 0) ∈ R^q.

Remark 3. Note that Theorems 1.2 and 1.3 involve the coderivative D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*). By the definition of coderivatives (see Definition 2.3),

ξ ∈ D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*) ⟺ (ξ, −v^*) ∈ N(gph N_Ω, (ȳ, −F(x̄, ȳ))),

where gph Φ denotes the graph of a set-valued map Φ, i.e., gph Φ = {(y, v) : v ∈ Φ(y)}. Hence the calculation of the coderivative D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*) depends on the calculation of the limiting normal cone N(gph N_Ω, (ȳ, −F(x̄, ȳ))). In the case when Y = R^m and Ω = R^m_+, the limiting normal cone N(gph N_Ω, (ȳ, −F(x̄, ȳ))) can be calculated explicitly by using the following proposition.

Proposition 1.4. For any (ȳ, −v̄) ∈ gph N_{R^m_+},

N(gph N_{R^m_+}, (ȳ, −v̄)) = {(ξ, −η) ∈ R^{2m} : ξ_i = 0 if i ∈ L, η_i = 0 if i ∈ I_+,
    either ξ_i η_i = 0 or (ξ_i < 0 and η_i < 0) if i ∈ I_0},

where

L := L(ȳ, v̄) := {i ∈ {1, 2, ..., m} : ȳ_i > 0, v̄_i = 0},
I_+ := I_+(ȳ, v̄) := {i ∈ {1, 2, ..., m} : ȳ_i = 0, v̄_i > 0},
I_0 := I_0(ȳ, v̄) := {i ∈ {1, 2, ..., m} : ȳ_i = 0, v̄_i = 0}.
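Proposition 1.4 is easy to apply numerically; the following sketch (ours; names and tolerances are illustrative assumptions) tests membership in the limiting normal cone componentwise:

```python
import numpy as np

def index_sets(y, v, tol=1e-12):
    """Index sets of Proposition 1.4 at a point (y, -v) in gph N_{R^m_+},
    i.e. y >= 0, v >= 0 and y_i * v_i = 0 for every component i."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    L  = np.where((y > tol) & (np.abs(v) <= tol))[0]           # y_i > 0, v_i = 0
    Ip = np.where((np.abs(y) <= tol) & (v > tol))[0]           # y_i = 0, v_i > 0
    I0 = np.where((np.abs(y) <= tol) & (np.abs(v) <= tol))[0]  # both zero (degenerate)
    return L, Ip, I0

def in_limiting_normal_cone(xi, eta, y, v, tol=1e-12):
    """Test (xi, -eta) ∈ N(gph N_{R^m_+}, (y, -v)) via the formula of Prop. 1.4."""
    L, Ip, I0 = index_sets(y, v, tol)
    ok_L  = np.all(np.abs(np.asarray(xi, float)[L]) <= tol)    # xi_i = 0 on L
    ok_Ip = np.all(np.abs(np.asarray(eta, float)[Ip]) <= tol)  # eta_i = 0 on I+
    ok_I0 = all(abs(xi[i] * eta[i]) <= tol or (xi[i] < 0 and eta[i] < 0) for i in I0)
    return bool(ok_L and ok_Ip and ok_I0)

# Example: m = 3, y = (2, 0, 0), v = (0, 1, 0), so L = {0}, I+ = {1}, I0 = {2}.
print(in_limiting_normal_cone([0.0, 3.0, -1.0], [5.0, 0.0, -2.0], [2, 0, 0], [0, 1, 0]))  # True
```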
For the case where Ω is a polyhedral convex set in R^m, a calculation of the limiting normal cone to the graph of the normal cone map of the set Ω was first given in the proof of [8, Theorem 2] and stated in [19, Proposition 4.4].

We organize the paper as follows. §2 contains background material on nonsmooth analysis. §3 is devoted to the proof of Theorem 1.2. In §4, we discuss constraint qualifications under which µ_0 in the Fritz John type optimality conditions can be taken as 1. They are the calmness condition, the no nonzero abnormal multiplier constraint qualification, the error bound constraint qualification, the strong regularity constraint qualification and the linear constraint qualification. Constraint qualifications and the necessary optimality conditions for the multiobjective optimization problem with complementarity constraints and the multiobjective bilevel programming problem are given in §5.

The following notations are used throughout the paper. For an m-by-n matrix A and index sets I ⊆ {1, 2, ..., m}, J ⊆ {1, 2, ..., n}, A_I and A_{I,J} denote the submatrix of A with rows specified by I and the submatrix of A with rows and columns specified by I and J, respectively. For a vector d ∈ R^n, d_i is the i-th component of d and d_I is the subvector composed of the components d_i, i ∈ I. gph Φ := {(y, v) : v ∈ Φ(y)} is the graph of a set-valued map Φ. int Ω, cl Ω and co Ω denote the interior, the closure and the convex hull of a set Ω. We denote by B_X, or simply B, the open unit ball in a Banach space X.

2. Preliminaries

Let X be a real finite dimensional Banach space with topological dual X^*. Note that X has a Fréchet smooth norm and we will use this norm as the norm of X. Let f : X → R̄ := R ∪ {+∞} be an extended-valued function. We denote by dom f := {x ∈ X : f(x) ∈ R} the effective domain of f. We assume all our functions are proper in that they take some finite values: dom f ≠ ∅. Let us now recall the definitions of subdifferentials and normal cones (see [3] for more details and historical comments).

Definition 2.1. Let f : X → R̄ be a lower semicontinuous function and C a closed subset of X. We say f is Fréchet-subdifferentiable and x^* is a Fréchet-subderivative of f at x if there exists a C^1 function g such that ∇g(x) = x^* and f − g attains a local minimum at x. We denote the set of all Fréchet-subderivatives of f at x by D_F f(x). We define the Fréchet normal cone of C at x to be N_F(C, x) := D_F δ_C(x).

Corresponding limiting objects are defined below.

Definition 2.2. Let f : X → R̄ be a lower semicontinuous function. Define

∂f(x) := {lim_{i→∞} v_i : v_i ∈ D_F f(x_i), (x_i, f(x_i)) → (x, f(x))}
and

∂^∞ f(x) := {lim_{i→∞} t_i v_i : v_i ∈ D_F f(x_i), t_i → 0^+, (x_i, f(x_i)) → (x, f(x))},

and call ∂f(x) and ∂^∞ f(x) the limiting subdifferential and the singular subdifferential of f at x, respectively.
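For orientation, three standard one-dimensional computations (ours, not from the original text):

```latex
% Standard examples on X = R, illustrating Definition 2.2.
\begin{align*}
 f(x) &= |x|:                 & D_F f(0) &= [-1,1],    & \partial f(0) &= [-1,1],    & \partial^\infty f(0) &= \{0\},\\
 f(x) &= -|x|:                & D_F f(0) &= \emptyset, & \partial f(0) &= \{-1,1\},  & \partial^\infty f(0) &= \{0\},\\
 f    &= \delta_{(-\infty,0]}:& D_F f(0) &= [0,\infty),& \partial f(0) &= [0,\infty),& \partial^\infty f(0) &= [0,\infty).
\end{align*}
% In the second line, \partial f(0) collects the limits of gradients \mp 1 at nearby
% nonzero points, even though no Fréchet subderivative exists at 0 itself.
```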
Now let C be a subset of X and x ∈ C. Define

N(C, x) := {lim_{i→∞} v_i : v_i ∈ N_F(C, x_i), x_i →_C x},

and call N(C, x) the limiting normal cone of C at x.

For set-valued maps, the definition of the limiting normal cone leads to the definition of the coderivative of a set-valued map introduced by Mordukhovich (see e.g. [16]).

Definition 2.3. Let Φ : X ⇒ X be an arbitrary set-valued map (assigning to each x ∈ X a set Φ(x) ⊂ X which may be empty) and (x̄, v̄) ∈ gph Φ. The set-valued map D^*Φ(x̄, v̄) from X^* into X^* defined by

D^*Φ(x̄, v̄)(v^*) = {x^* ∈ X^* : (x^*, −v^*) ∈ N(gph Φ, (x̄, v̄))}

is called the coderivative of Φ at the point (x̄, v̄). We use the convention that, for (x̄, v̄) ∉ gph Φ, D^*Φ(x̄, v̄)(v^*) = ∅. The symbol D^*Φ(x̄) is used when Φ is single-valued at x̄ and v̄ = Φ(x̄).

In the special case when a set-valued map is single-valued, the coderivative is related to the limiting subdifferential in the following way.

Theorem 2.4. [16, Theorem 5.2] Let Φ : X → X be single-valued and Lipschitz near x̄. Then

D^*Φ(x̄)(v^*) = ∂⟨v^*, Φ⟩(x̄)  ∀v^* ∈ X^*.

The following are some of the calculus rules that we need in this paper.

Theorem 2.5 (Fuzzy Sum Rule). [3, Theorem 2.6] Let f_1, ..., f_N : X → R̄ be lower semicontinuous functions. Suppose that Σ_{n=1}^N f_n attains a local minimum at x̄. Then, for any ε > 0, there exist x_n ∈ x̄ + εB and x_n^* ∈ D_F f_n(x_n), n = 1, ..., N, such that |f_n(x_n) − f_n(x̄)| < ε, n = 1, 2, ..., N,

diam(x_1, ..., x_N) · max(‖x_1^*‖, ..., ‖x_N^*‖) < ε   and   ‖Σ_{n=1}^N x_n^*‖ < ε.
Here diam(x_1, ..., x_N) := max{‖x_n − x_m‖ : n, m = 1, 2, ..., N}.

Theorem 2.6 (Limiting Sum Rule). [16, Proposition 2.5 and Theorem 4.1] Let f_1, ..., f_N : X → R̄ be lower semicontinuous functions. Suppose that all but one of these functions are Lipschitz near x̄. Then

∂(f_1 + f_2 + ... + f_N)(x̄) ⊆ ∂f_1(x̄) + ... + ∂f_N(x̄).

Theorem 2.7 (Limiting Chain Rule). [16, Proposition 2.5 and Corollary 6.3] Let Φ : X → R^m be Lipschitz near x̄ and f : R^m → R be Lipschitz near Φ(x̄). Then

∂(f ∘ Φ)(x̄) ⊆ ∪_{v^* ∈ ∂f(Φ(x̄))} ∂⟨v^*, Φ⟩(x̄).
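The inclusion in Theorem 2.6 can be strict; a standard example (ours):

```latex
% Take f_1(x) = |x| and f_2(x) = -|x| on X = R (both Lipschitz). Then f_1 + f_2 = 0, so
\partial(f_1+f_2)(0) = \{0\}
  \;\subseteq\; \partial f_1(0) + \partial f_2(0) = [-1,1] + \{-1,1\} = [-2,2],
% and the inclusion is strict.
```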
We will also need necessary optimality conditions for a single objective optimization problem. Consider the following optimization problem:

S   minimize   f_0(x)
    subject to f_i(x) ≤ 0,  i = 1, 2, ..., M,
               f_i(x) = 0,  i = M+1, ..., N,
               x ∈ C.

The following multiplier rule is a special case of [2, Corollary 2.6].

Theorem 2.8 (Fuzzy Multiplier Rule). Let C be a closed subset of X, let f_0 be C^1 at x̄, let f_i be lower semicontinuous for i = 1, ..., M and continuous for i = M+1, ..., N, and let x̄ be a local solution of S. Assume that

lim inf_{x→x̄} d(D_F f_i(x), 0) > 0  for i = 1, ..., M,

and

lim inf_{x→x̄} d(D_F f_i(x) ∪ D_F(−f_i)(x), 0) > 0  for i = M+1, ..., N.

Then, for any positive number ε > 0, there exist (x_i, f_i(x_i)) ∈ (x̄, f_i(x̄)) + εB_{X×R}, i = 1, ..., N, x_{N+1} ∈ (x̄ + εB_X) ∩ C, µ_i > 0 and τ_i = 1 ∀i = 1, 2, ..., M, τ_i ∈ {1, −1} ∀i = M+1, ..., N such that

0 ∈ ∇f_0(x̄) + Σ_{i=1}^N µ_i D_F(τ_i f_i)(x_i) + N_F(C, x_{N+1}) + εB_{X^*}.
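A toy instance (ours) showing how Theorem 2.8 recovers an exact multiplier in the limit:

```latex
% Minimize f_0(x) = x subject to f_1(x) = -x \le 0 on C = X = R, with solution \bar{x} = 0.
% Here \nabla f_0 \equiv 1, D_F f_1 \equiv \{-1\} and N_F(C,\cdot) = \{0\}, and the
% assumption \liminf_{x\to\bar{x}} d(D_F f_1(x), 0) = 1 > 0 holds. The rule
0 \in \nabla f_0(\bar{x}) + \mu_1 D_F f_1(x_1) + N_F(C, x_2) + \varepsilon B_{X^*}
% forces |1 - \mu_1| < \varepsilon, recovering the exact multiplier \mu_1 = 1 as \varepsilon \to 0.
```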
We conclude this section with a sum rule for coderivatives and its corollary.

Theorem 2.9. [14, Corollary 4.2] Let Φ_1 and Φ_2 be closed-graph set-valued maps from X into X and let v̄ ∈ Φ_1(x̄) + Φ_2(x̄). Assume that the multifunction S : X × X ⇒ X × X defined by

S(x, v) := {(v_1, v_2) ∈ X × X : v_1 ∈ Φ_1(x), v_2 ∈ Φ_2(x), v_1 + v_2 = v}

is locally bounded around (x̄, v̄) and that either Φ_1 is pseudo-Lipschitz around (x̄, v_1) or Φ_2 is pseudo-Lipschitz around (x̄, v_2) for each (v_1, v_2) ∈ S(x̄, v̄). Then for any v^* ∈ X^*,

D^*(Φ_1 + Φ_2)(x̄, v̄)(v^*) ⊆ ∪_{(v_1,v_2)∈S(x̄,v̄)} [D^*Φ_1(x̄, v_1)(v^*) + D^*Φ_2(x̄, v_2)(v^*)].

The following sum rule for the case where one of the set-valued maps is single-valued follows from Theorems 2.4 and 2.9.

Corollary 2.10. Let Φ_1 : X → X be single-valued and Lipschitz near x̄ and let Φ_2 : X ⇒ X be an arbitrary closed set-valued map. Then for any v̄ ∈ Φ_1(x̄) + Φ_2(x̄) and v^* ∈ X^*,

D^*(Φ_1 + Φ_2)(x̄, v̄)(v^*) ⊆ ∂⟨v^*, Φ_1⟩(x̄) + D^*Φ_2(x̄, v̄ − Φ_1(x̄))(v^*).
3. Proof of Theorem 1.2

Without loss of generality, we may assume that

lim inf_{(x,y)→(x̄,ȳ)} d(D_F f_i(x, y), 0) > 0   ∀i = 1, ..., M,   (4)

lim inf_{(x,y)→(x̄,ȳ)} d(D_F f_i(x, y) ∪ D_F(−f_i)(x, y), 0) > 0   ∀i = M+1, ..., N.   (5)
The reason is that if one of the conditions, say the one corresponding to the index i = j, fails, then 0 ∈ ∂(τ_j f_j)(x̄, ȳ). Thus, the conclusion of the theorem is satisfied with µ_j = 1 and all the other multipliers equal to zero. In what follows we prove the theorem under these two assumptions. We divide the proof into several steps.

Step 1. Converting the multiobjective optimization problem into a family of single objective optimization problems.

Let A be the set of feasible points of problem P and let Λ := {φ(x, y) : (x, y) ∈ A} be the set of attainable values of P. Let ε be an arbitrary positive number. Choose θ_ε ≺ φ(x̄, ȳ) such that ‖θ_ε − φ(x̄, ȳ)‖ < ε². The set l(θ_ε) is an approximation of l(φ(x̄, ȳ)), the level set of ≺ at the optimal value φ(x̄, ȳ). We need this approximation because the intersection of Λ and l(φ(x̄, ȳ)) is nonempty (it contains at least φ(x̄, ȳ)), yet

Λ ∩ l(θ_ε) = ∅.   (6)

Indeed, it follows from condition (H2) on ≺ that Λ ∩ l(θ_ε) ≠ ∅ would imply the existence of a feasible point (x, y) ∈ A such that φ(x, y) ≺ φ(x̄, ȳ), a contradiction.

Next we use a method similar to that in [10, 12, 14] for proving the extremal principle to derive a necessary condition for a series of abstract minimization problems that approximate our original multiobjective optimization problem. Note that the extremal principle in the above references cannot be applied directly here, for two reasons: (a) the separation in (6) is derived by moving (the closure of) the level sets of ≺ at φ(x̄, ȳ). In order to apply the extremal principle, the move of the level sets of ≺ at φ(x̄, ȳ) must be a translate, which occurs only in some special cases, such as in a single objective problem or in a weak Pareto optimization problem. (b) Even in the cases when the extremal principle is applicable, applying it to the sets Λ and l(φ(x̄, ȳ)) will not give us the necessary control on the locations of the 'approximate' optimal solutions.

We now define an auxiliary function p, similar to that in the proof of an extremal principle, by

p(θ, x, y, v) := ‖φ(x, y) − θ‖ + δ_{l(θ_ε)}(θ) + δ_D(x, y) + δ_E(x, y, v) + δ_{{v=0}}(v),

where

E := {(x, y, v) : v ∈ F(x, y) + N(Ω, y)}

and

D := C ∩ (∩_{i=1}^N D_i)

with

D_i := {(x, y) : f_i(x, y) ≤ 0},  i = 1, ..., M,
D_i := {(x, y) : f_i(x, y) = 0},  i = M+1, ..., N.

It is easy to check that p ≥ 0 and p(θ_ε, x̄, ȳ, 0) = ‖φ(x̄, ȳ) − θ_ε‖ < ε². That is,

p(θ_ε, x̄, ȳ, 0) ≤ inf p + ε².
Moreover, since all the functions in p are either Lipschitz functions or indicator functions of closed sets, p is lower semicontinuous. By virtue of the Ekeland variational principle [9], there exists (θ̃, x̃, ỹ) satisfying

θ̃ ∈ (θ_ε + εB_Z) ∩ l(θ_ε) ⊂ (φ(x̄, ȳ) + 2εB_Z) ∩ l(θ_ε),
(x̃, ỹ) ∈ ((x̄, ȳ) + εB_{X×Y}) ∩ A,

such that

p(θ, x, y, v) + ε‖(θ, x, y, v) − (θ̃, x̃, ỹ, 0)‖

attains a minimum at (θ, x, y, v) = (θ̃, x̃, ỹ, 0).

We turn to the task of decoupling information. To simplify notation we write w := (θ, x, y, v) and W := Z × X × Y × Y. Define functions

p_1(w) := ‖φ(x, y) − θ‖ + δ_{l(θ_ε)}(θ),
p_2(w) := δ_D(x, y),
p_3(w) := δ_E(x, y, v),
p_4(w) := δ_{{v=0}}(v),
p_5(w) := ε‖w − w̃‖.

Then p_1, p_2, p_3, p_4, p_5 are lower semicontinuous and p_1 + p_2 + p_3 + p_4 + p_5 attains a minimum at w̃ := (θ̃, x̃, ỹ, 0) in W.

Step 2. Applying the sum rule.
Apply the fuzzy sum rule of Theorem 2.5 to Σ_{i=1}^5 p_i at w̃ with ε̃ := ‖φ(x̃, ỹ) − θ̃‖, which is a positive number since Λ ∩ l(θ_ε) = ∅. Noticing that p_5 is Lipschitz with rank ε, we conclude that there exist w_1, w_2, w_3, w_4 ∈ w̃ + ε̃B_W and w_i^* ∈ D_F p_i(w_i), i = 1, 2, 3, 4, such that |p_i(w_i) − p_i(w̃)| < ε̃, i = 1, 2, 3, 4, and

‖w_1^* + w_2^* + w_3^* + w_4^*‖ < ε + ε̃.   (7)

Since p_1 does not depend on v, we have w_1^* = (θ^*, x_1^*, y_1^*, 0). Similarly, we may write w_2^* = (0, x_2^*, y_2^*, 0), w_3^* = (0, x_3^*, y_3^*, v^*) and w_4^* = (0, 0, 0, t^*). Hence (7) implies that

‖(x_1^*, y_1^*) + (x_2^*, y_2^*) + (x_3^*, y_3^*)‖ < ε + ε̃   and   ‖θ^*‖ < ε + ε̃.   (8)
Our next task is to calculate (x_i^*, y_i^*) for i = 1, 2, 3.

Step 3. Calculating (x_1^*, y_1^*). Let w_1 = (θ_1, x_1, y_1, v_1) and let g be a C^1 function on Z × X × Y such that ∇g(θ_1, x_1, y_1) = (θ^*, x_1^*, y_1^*) and

‖φ(x, y) − θ‖ + δ_{l(θ_ε)}(θ) − g(θ, x, y)   (9)

attains a minimum at (θ_1, x_1, y_1). Let r(γ, θ) := ‖γ − θ‖ and q(θ, x, y) := (φ(x, y), θ). Applying the limiting sum rule of Theorem 2.6 we have

0 ∈ ∂(r ∘ q)(θ_1, x_1, y_1) − (θ^*, x_1^*, y_1^*) + N(l(θ_ε), θ_1) × {0} × {0}.   (10)
Noticing that φ(x_1, y_1) − θ_1 ≠ 0, the norm ‖·‖ is C^1 at φ(x_1, y_1) − θ_1. Thus, by the chain rule of Theorem 2.7 we have

∂(r ∘ q)(θ_1, x_1, y_1) = ∂(−⟨λ, θ⟩ + ⟨λ, φ⟩)(θ_1, x_1, y_1) = {−λ} × ∂⟨λ, φ⟩(x_1, y_1),   (11)

where λ = ∇‖·‖(φ(x_1, y_1) − θ_1) is a unit vector. Combining (8), (10) and (11), we conclude that there exists a unit vector λ ∈ N(l(θ_ε), θ_1) + (ε + ε̃)B such that

(x_1^*, y_1^*) ∈ ∂⟨λ, φ⟩(x_1, y_1).   (12)
Step 4. Calculating (x_2^*, y_2^*). Let w_2 = (θ_2, x_2, y_2, v_2) and let g : X × Y → R be a C^1 function such that ∇g(x_2, y_2) = (x_2^*, y_2^*) and p_2 − g attains a minimum at (x_2, y_2). Observing that p_2 is an indicator function and p_2(x_2, y_2) = 0, we conclude that (x, y) = (x_2, y_2) is a solution to the following optimization problem:

minimize   −g(x, y)
subject to f_i(x, y) ≤ 0,  i = 1, 2, ..., M,
           f_i(x, y) = 0,  i = M+1, ..., N,
           (x, y) ∈ C.

Applying the necessary optimality conditions of Theorem 2.8, by virtue of (4)–(5) we conclude that there exist µ_i > 0, i = 1, 2, ..., M, µ_i ∈ R, i = M+1, ..., N, τ_i = 1, i = 1, ..., M, τ_i ∈ {−1, 1}, i = M+1, ..., N, (x_{2,i}, y_{2,i}), i = 1, ..., N+1, and (x_{2,i}^*, y_{2,i}^*), i = 1, ..., N+1, satisfying

((x_{2,i}, y_{2,i}), f_i(x_{2,i}, y_{2,i})) ∈ ((x_2, y_2), f_i(x_2, y_2)) + εB_{X×Y×R},  i = 1, 2, ..., N,
(x_{2,N+1}, y_{2,N+1}) ∈ ((x_2, y_2) + εB_{X×Y}) ∩ C,
(x_{2,N+1}^*, y_{2,N+1}^*) ∈ N_F(C, (x_{2,N+1}, y_{2,N+1})),
(x_{2,i}^*, y_{2,i}^*) ∈ D_F(τ_i f_i)(x_{2,i}, y_{2,i}),  i = 1, ..., N,

such that

(x_2^*, y_2^*) ∈ Σ_{i=1}^N µ_i (x_{2,i}^*, y_{2,i}^*) + (x_{2,N+1}^*, y_{2,N+1}^*) + εB_{X^*×Y^*}.   (13)
Step 5. Calculating (x_3^*, y_3^*). This is straightforward. Let w_3 = (θ_3, x_3, y_3, v_3). Then

(x_3^*, y_3^*, v^*) ∈ N_F(gph χ, (x_3, y_3, v_3)),   (14)

where χ(x, y) := F(x, y) + N(Ω, y).

Step 6. Taking limits.
Let ε = 1/k for k = 1, 2, .... By Steps 3–5 there exist µ_i^k > 0, i = 1, ..., M, µ_i^k ∈ R, i = M+1, ..., N, τ_i^k = 1, i = 1, ..., M, τ_i^k ∈ {−1, 1}, i = M+1, ..., N,

θ^k, θ_1^k → φ(x̄, ȳ),
λ^k ∈ N(l(θ^k), θ_1^k) + (3/k)B,  ‖λ^k‖ = 1,
(x_i^k, y_i^k) → (x̄, ȳ), i = 1, 2, 3, and v_3^k → 0,
(x_{2,i}^k, y_{2,i}^k) → (x̄, ȳ),  i = 1, ..., N+1,
(x_{2,i}^{*k}, y_{2,i}^{*k}) ∈ D_F(τ_i^k f_i)(x_{2,i}^k, y_{2,i}^k),  i = 1, ..., N,
(x_{2,N+1}^{*k}, y_{2,N+1}^{*k}) ∈ N_F(C, (x_{2,N+1}^k, y_{2,N+1}^k)),

such that

(x_1^{*k}, y_1^{*k}) ∈ ∂⟨λ^k, φ⟩(x_1^k, y_1^k),   (15)

(x_2^{*k}, y_2^{*k}) ∈ Σ_{i=1}^N µ_i^k (x_{2,i}^{*k}, y_{2,i}^{*k}) + (x_{2,N+1}^{*k}, y_{2,N+1}^{*k}) + (1/k)B_{X^*×Y^*},   (16)

(x_3^{*k}, y_3^{*k}, v^{*k}) ∈ N_F(gph χ, (x_3^k, y_3^k, v_3^k)),   (17)

‖Σ_{i=1}^3 (x_i^{*k}, y_i^{*k})‖ → 0.   (18)

We consider the limiting process as k → ∞ in the following two cases.

The Regular Case: The sequence

t^k := ‖(x_1^{*k}, y_1^{*k})‖ + Σ_{i=1}^N ‖µ_i^k (x_{2,i}^{*k}, y_{2,i}^{*k})‖ + ‖(x_{2,N+1}^{*k}, y_{2,N+1}^{*k})‖ + ‖(x_3^{*k}, y_3^{*k}, v^{*k})‖

is bounded. Then (x_1^{*k}, y_1^{*k}), (x_2^{*k}, y_2^{*k}) and (x_3^{*k}, y_3^{*k}, v^{*k}) are all bounded. Passing to subsequences we may assume that τ_i^k = τ_i, i = 1, ..., N, do not depend on k, that (x_i^{*k}, y_i^{*k}) converges to (x_i^*, y_i^*), i = 1, 2, 3, v^{*k} → v^*, µ_i^k (x_{2,i}^{*k}, y_{2,i}^{*k}) converges to (x_{2,i}^*, y_{2,i}^*), i = 1, ..., N, (x_{2,N+1}^{*k}, y_{2,N+1}^{*k}) → (x_{2,N+1}^*, y_{2,N+1}^*), and µ_i^k → µ_i, i = 1, 2, ..., N (the µ_i, i = 1, ..., N, must be bounded because the ‖(x_{2,i}^{*k}, y_{2,i}^{*k})‖, i = 1, ..., N, are bounded away from zero due to the assumptions (4)–(5)). Since φ is Lipschitz and ‖λ^k‖ = 1, taking subsequences if necessary we can assume that λ^k converges to a unit vector λ ∈ N(l(φ(x̄, ȳ)), φ(x̄, ȳ)). Taking limits in (15) yields

(x_1^*, y_1^*) ∈ ∂⟨λ, φ⟩(x̄, ȳ).   (19)

Next we take limits in (16). By Definition 2.2, if µ_i = 0 then (x_{2,i}^*, y_{2,i}^*) ∈ ∂^∞(τ_i f_i)(x̄, ȳ); otherwise, if µ_i > 0, then (x_{2,i}^*, y_{2,i}^*)/µ_i ∈ ∂(τ_i f_i)(x̄, ȳ). Thus, we have

(x_2^*, y_2^*) ∈ Σ_{i∈{1,...,N}: µ_i>0} µ_i ∂(τ_i f_i)(x̄, ȳ) + Σ_{i∈{1,...,N}: µ_i=0} ∂^∞(τ_i f_i)(x̄, ȳ) + N(C, (x̄, ȳ)).   (20)
Finally, taking limits in (17) and (18) yields

(x_3^*, y_3^*, v^*) ∈ N(gph χ, (x̄, ȳ, 0))   (21)

and

Σ_{i=1}^3 (x_i^*, y_i^*) = 0.   (22)

By the definition of coderivatives, (21) implies that (x_3^*, y_3^*) ∈ D^*χ(x̄, ȳ, 0)(v^*). By Corollary 2.10, we have

(x_3^*, y_3^*) ∈ D^*χ(x̄, ȳ, 0)(v^*) ⊆ ∂⟨F, v^*⟩(x̄, ȳ) + {0} × D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*).   (23)
Consequently, inclusions (19), (20), (22) and (23) imply that Theorem 1.2 holds with µ_0 = 1.

The Singular Case: The sequence t^k is unbounded. Without loss of generality we may assume that t^k → ∞. Dividing all the sequences considered in the regular case by t^k, they all become bounded. Passing to subsequences if necessary, we may assume that τ_i^k = τ_i, i = 1, ..., N, do not depend on k, that (x_i^{*k}, y_i^{*k})/t^k converges to (x_i^*, y_i^*), i = 1, 2, 3, µ_i^k (x_{2,i}^{*k}, y_{2,i}^{*k})/t^k converges to (x_{2,i}^*, y_{2,i}^*), (x_{2,N+1}^{*k}, y_{2,N+1}^{*k})/t^k → (x_{2,N+1}^*, y_{2,N+1}^*), v^{*k}/t^k → v^* and µ_i^k/t^k → µ_i, i = 1, 2, ..., N. Since (x_1^{*k}, y_1^{*k}) is bounded, we have

(x_1^*, y_1^*) = 0.   (24)

Next we take limits in (16). By Definition 2.2, if µ_i = 0 then (x_{2,i}^*, y_{2,i}^*) ∈ ∂^∞(τ_i f_i)(x̄, ȳ); otherwise, if µ_i > 0, then (x_{2,i}^*, y_{2,i}^*)/µ_i ∈ ∂(τ_i f_i)(x̄, ȳ). Thus, we have

(x_2^*, y_2^*) ∈ Σ_{i∈{1,...,N}: µ_i>0} µ_i ∂(τ_i f_i)(x̄, ȳ) + Σ_{i∈{1,...,N}: µ_i=0} ∂^∞(τ_i f_i)(x̄, ȳ) + N(C, (x̄, ȳ)).   (25)

Finally, taking limits in (17) and (18) yields

(x_3^*, y_3^*, v^*) ∈ N(gph χ, (x̄, ȳ, 0)),

that is to say,

(x_3^*, y_3^*) ∈ D^*χ(x̄, ȳ, 0)(v^*) ⊆ ∂⟨F, v^*⟩(x̄, ȳ) + {0} × D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*),   (26)

and

Σ_{i=2}^3 (x_i^*, y_i^*) = 0.   (27)
Note that we also have

1 = Σ_{i=1}^N ‖(x_{2,i}^*, y_{2,i}^*)‖ + ‖(x_{2,N+1}^*, y_{2,N+1}^*)‖ + ‖(x_3^*, y_3^*, v^*)‖,

from which (along with the basic constraint qualification) we conclude that µ_i, i = 1, ..., N, and v^* are not all zero. Therefore Theorem 1.2 with µ_0 = 0 follows from relations (24), (25), (26) and (27).
4. Constraint qualifications

In this section, we extend some of the constraint qualifications studied in [26] for the single objective optimization problem with variational inequality constraints to our setting and prove that the Kuhn-Tucker type optimality conditions hold under these constraint qualifications.

Definition 4.1. Let (x̄, ȳ) be a local solution to P. P satisfies the No Nonzero Abnormal Multiplier Constraint Qualification (NNAMCQ) at (x̄, ȳ) if

0 ∈ Σ_{i∈{1,...,N}: µ_i>0} µ_i ∂(τ_i f_i)(x̄, ȳ) + Σ_{i∈{1,...,N}: µ_i=0} ∂^∞(τ_i f_i)(x̄, ȳ)
    + N(C, (x̄, ȳ)) + ∂⟨F, v^*⟩(x̄, ȳ) + {0} × D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*),   (28)

τ_i = 1, i = 1, ..., M,  τ_i ∈ {1, −1}, i = M+1, ..., N,

implies that µ_i = 0 ∀i ∈ {1, ..., N} and v^* = 0. In particular, when all functions f_i, i = 1, ..., N, are Lipschitz continuous near (x̄, ȳ), we say that P satisfies NNAMCQ at (x̄, ȳ) if

0 ∈ Σ_{i=1}^N µ_i ∂(τ_i f_i)(x̄, ȳ) + N(C, (x̄, ȳ))
    + ∂⟨F, v^*⟩(x̄, ȳ) + {0} × D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*),   (29)

τ_i = 1, i = 1, ..., M,  τ_i ∈ {1, −1}, i = M+1, ..., N,
µ_i ≥ 0 ∀i = 1, 2, ..., N,   Σ_{i=1}^M µ_i f_i(x̄, ȳ) = 0,

implies that µ_i = 0 ∀i ∈ {1, ..., N} and v^* = 0.

Noticing that the inclusions (28) and (29) are the inclusions (2) and (3), respectively, with µ_0 = 0, it follows easily from the Fritz John type optimality conditions of Theorems 1.2 and 1.3 that NNAMCQ is a constraint qualification.

Theorem 4.2. Let (x̄, ȳ) be a local solution of P. If NNAMCQ is satisfied at (x̄, ȳ), then µ_0 in the conclusions of Theorems 1.2 and 1.3 can be taken as 1.
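To place NNAMCQ among classical conditions, consider the following specialization (ours, offered as orientation rather than as a statement from the paper):

```latex
% Restrict P to smooth inequality and equality constraints only, with C = X \times Y
% and no variational inequality constraint. The multiplier condition inside (29)
% then reads: the only \mu satisfying
0 = \sum_{i=1}^{M} \mu_i \nabla f_i(\bar{x},\bar{y})
    + \sum_{i=M+1}^{N} \mu_i \tau_i \nabla f_i(\bar{x},\bar{y}),
\qquad \mu_i \ge 0,\quad \mu_i f_i(\bar{x},\bar{y}) = 0,\ i = 1,\dots,M,
% is \mu = 0 --- the familiar dual (positive linear independence) form of the
% Mangasarian-Fromovitz constraint qualification.
```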
Definition 4.3. Consider P in the case where there are no equality and inequality constraints, i.e., M = N = 0. The Strong Regularity Constraint Qualification (SRCQ) is said to be satisfied at (x̄, ȳ) if F is C^1 around (x̄, ȳ), C = D × Y where D is a closed subset of X, and the generalized equation

0 ∈ F(x̄, y) + N(Ω, y)

is strongly regular in the sense of Robinson [20].

Theorem 4.4. Let (x̄, ȳ) be a local solution of P in the case where there are no equality and inequality constraints. If SRCQ is satisfied at (x̄, ȳ), then µ_0 in the conclusion of Theorem 1.3 can be taken as 1.

Proof. Notice that Y is a finite dimensional space. One only needs to replace the reference [15, Theorem 5.8] in the proof of [26, Theorem 4.7] by [15, Corollary 6.7].

The two constraint qualifications discussed above provide Kuhn-Tucker type necessary optimality conditions by ruling out the case µ_0 = 0. Our next constraint qualification, which generalizes the calmness condition in [5], is different in nature. It guarantees that we can always choose µ_0 = 1 even if µ_0 = 0 is possible.

Definition 4.5. Let (x̄, ȳ) be a local solution to P. We say that P satisfies the calmness condition at (x̄, ȳ) provided that there exist ε > 0 and a Lipschitz function ψ : R^N × Y → Z satisfying ψ(0, 0) = 0 such that there exists no (x, y, p, q) ∈ [(x̄, ȳ, 0, 0) + εB] \ {(x̄, ȳ, 0, 0)} satisfying

f_i(x, y) + p_i ≤ 0,  i = 1, 2, ..., M,
f_i(x, y) + p_i = 0,  i = M+1, ..., N,
q ∈ F(x, y) + N(Ω, y),
(x, y) ∈ C,
φ(x, y) + ψ(p, q) ≺ φ(x̄, ȳ).

As in the case of the single objective optimization problem with variational inequality constraints [26], we can prove that the calmness condition is a constraint qualification.

Theorem 4.6. Let (x̄, ȳ) be a local solution of P. If P satisfies the calmness condition at (x̄, ȳ), then µ_0 in the conclusions of Theorems 1.2 and 1.3 can be taken as 1.

Proof. We only prove the assertion for Theorem 1.2 since the proof for Theorem 1.3 is similar. By the definition of the calmness condition, (x̄, ȳ, 0, 0) is a local solution to the new problem:

minimize   φ(x, y) + ψ(p, q)
subject to f_i(x, y) + p_i ≤ 0,  i = 1, 2, ..., M,
           f_i(x, y) + p_i = 0,  i = M+1, ..., N,
           (x, y) ∈ C,
           0 ∈ −q + F(x, y) + N(Ω, y).
Note that the inclusion (28) for the above problem is

0 ∈ Σ_{i∈{1,...,N}: µ_i>0} µ_i ∂(τ_i f_i)(x̄, ȳ) × {(0, ..., µ_i, ..., 0)} × {0}
    + Σ_{i∈{1,...,N}: µ_i=0} ∂^∞(τ_i f_i)(x̄, ȳ) × {0} × {0}
    + N(C, (x̄, ȳ)) × {0} × {0} + ∂⟨F, v^*⟩(x̄, ȳ) × {0} × {−v^*}
    + {0} × D^*N_Ω(ȳ, −F(x̄, ȳ))(v^*) × {0} × {0},

which implies that µ_i = 0, i ∈ {1, ..., N}, and v^* = 0. Hence NNAMCQ for the above problem is satisfied and the inclusion (2) for the above problem holds with µ_0 = 1. The conclusion then follows from the inclusion (2) for the new problem restricted to the (x, y) variables.

Definition 4.7. We say that P satisfies the error bound constraint qualification at a feasible point (x̄, ȳ) if there exist positive constants λ, δ and ε such that

d((x, y), Σ(0, 0)) ≤ λ‖(p, q)‖   ∀(p, q) ∈ εB, (x, y) ∈ Σ(p, q) ∩ B_δ(x̄, ȳ),   (30)

where

Σ(p, q) := {(x, y) ∈ C : f_i(x, y) + p_i ≤ 0, i = 1, ..., M,
            f_i(x, y) + p_i = 0, i = M+1, ..., N, q ∈ F(x, y) + N(Ω, y)}   (31)

is the set of solutions to the perturbed generalized equation.

Note that the error bound constraint qualification is satisfied at a point (x̄, ȳ) if and only if Σ(p, q) is pseudo-upper-Lipschitz continuous around (0, 0, x̄, ȳ) in the terminology of [27, Definition 2.8]. In particular, if Σ(p, q) is either pseudo-Lipschitz continuous around (0, 0, x̄, ȳ) or upper-Lipschitz continuous at (x̄, ȳ), then the error bound constraint qualification is satisfied at (x̄, ȳ); a one-dimensional illustration is sketched at the end of this section. We now prove that the error bound constraint qualification is stronger than the calmness condition in the case where the preference ≺ is defined in the sense of weak Pareto.

Theorem 4.8. Assume that (x̄, ȳ) is a local solution to P. If the preference ≺ is defined by the weak Pareto, then satisfaction of the error bound constraint qualification at (x̄, ȳ) implies satisfaction of the calmness condition at (x̄, ȳ). Hence µ_0 in the conclusions of Theorems 1.2 and 1.3 can be taken as 1 under the error bound constraint qualification.

Proof. We prove the assertion by supposing the contrary. Then for every ε > 0 and µ > 0 there is a point
(x, y, p, q) ∈ (x̄, ȳ, 0, 0) + εB,   (x, y, p, q) ≠ (x̄, ȳ, 0, 0),

such that

φ(x, y) + µ‖(p, q)‖ ≺ φ(x̄, ȳ).   (32)

For any η > 0, there exists (x̃, ỹ) ∈ Σ(0, 0) such that

‖(x̃, ỹ) − (x, y)‖ ≤ d_{Σ(0,0)}(x, y) + η.

Hence, since φ is Lipschitz with constant L_φ,

φ(x̃, ỹ) ≤ φ(x, y) + L_φ ‖(x̃, ỹ) − (x, y)‖
         ≤ φ(x, y) + L_φ d_{Σ(0,0)}(x, y) + L_φ η
         ≤ φ(x, y) + L_φ λ‖(p, q)‖ + L_φ η   (by the existence of a local error bound)
         ≺ φ(x̄, ȳ) + L_φ η,

which contradicts the fact that (x̄, ȳ) is a local solution of P. The proof is complete.

As in [26, Theorem 4.3], in the case where X = R^n and Y = R^m, one can prove that polyhedrality of the constraint region of P implies the error bound constraint qualification at every feasible point of P.

Theorem 4.9. Suppose that X = R^n, Y = R^m, the mappings f_i, i = 1, ..., N, and F are affine, C is polyhedral and Ω is a polyhedral convex set. Then the solution map (31) of the perturbed generalized equation is upper-Lipschitz at any feasible solution of P, and hence the error bound constraint qualification is satisfied at any feasible solution of P. Consequently, µ_0 in Theorem 1.3 can be taken as 1 in the case where the preference ≺ is defined by the weak Pareto.
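The following one-dimensional illustration (ours) separates Definition 4.7 from its failure:

```latex
% Take a single constraint, no equality or variational inequality part, C = R.
% (i) f_1(x) = x \le 0 at \bar{x} = 0: here \Sigma(p) = (-\infty, -p], and for every
%     x \in \Sigma(p) near 0 one checks d(x, \Sigma(0)) = \max(x, 0) \le |p|,
%     so the error bound CQ holds with \lambda = 1.
% (ii) f_1(x) = x^2 \le 0 at \bar{x} = 0: here \Sigma(0) = \{0\} while x \in \Sigma(-x^2),
%      with d(x, \Sigma(0)) = |x| and \|p\| = x^2; no constant \lambda can satisfy
%      |x| \le \lambda x^2 as x \to 0, so the error bound CQ fails.
```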
5. Applications

In this section, we apply the results obtained in §3 and §4 to two important cases of P: the multiobjective optimization problem with complementarity constraints and the multiobjective bilevel programming problem.

5.1. Problem with complementarity constraints

In the case where Y = R^m and Ω = R^m_+, P reduces to the following multiobjective optimization problem with complementarity constraints:

(MOPCC)   minimize   φ(x, y)
          subject to f_i(x, y) ≤ 0,  i = 1, 2, ..., M,
                     f_i(x, y) = 0,  i = M+1, ..., N,
                     (x, y) ∈ C,
                     y ≥ 0,  F(x, y) ≥ 0,  ⟨y, F(x, y)⟩ = 0.
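A minimal feasibility check for the complementarity system (ours; the function name and numerical tolerance are illustrative assumptions):

```python
import numpy as np

def is_complementary(y, Fxy, tol=1e-10):
    """Check the complementarity part of (MOPCC):
    y >= 0, F(x, y) >= 0 and <y, F(x, y)> = 0."""
    y, Fxy = np.asarray(y, float), np.asarray(Fxy, float)
    return bool(np.all(y >= -tol) and np.all(Fxy >= -tol)
                and abs(y @ Fxy) <= tol)

# For y = (2, 0, 0) and F(x, y) = (0, 1, 0) the pair is complementary:
print(is_complementary([2, 0, 0], [0, 1, 0]))  # True
```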
For ease of exposition, in this section we assume that X is a finite dimensional Banach space, φ = (φ_1, ..., φ_q) is a vector-valued function on X × R^m, f_i : X × R^m → R, i = 1, ..., N, F : X × R^m → R^m, φ, f_i, i = 1, ..., N, and F are all C^1 near (x̄, ȳ), and C is a nonempty closed subset of X × R^m. Let (x̄, ȳ) be a feasible point of (MOPCC). Define

I := I(x̄, ȳ) := {i ∈ {1, ..., M} : f_i(x̄, ȳ) = 0},
L := L(x̄, ȳ) := {i ∈ {1, ..., m} : ȳ_i > 0, F_i(x̄, ȳ) = 0},
I_+ := I_+(x̄, ȳ) := {i ∈ {1, ..., m} : ȳ_i = 0, F_i(x̄, ȳ) > 0},
I_0 := I_0(x̄, ȳ) := {i ∈ {1, ..., m} : ȳ_i = 0, F_i(x̄, ȳ) = 0}.

Definition 5.1. We say that the generalized Mangasarian-Fromovitz constraint qualification (GMFCQ) is satisfied at (x̄, ȳ) if X = R^n, C = D × R^m, D is a subset of R^n and:

(i) for every partition of I_0 into sets P, Q, R with R ≠ ∅, there exist vectors k ∈ int T_C(x̄, D), h ∈ R^m such that h_{I_+} = 0, h_Q = 0, h_R ≥ 0,

⟨∇f_i(x̄, ȳ), (k, h)⟩ ≤ 0,  i ∈ I,
⟨∇f_i(x̄, ȳ), (k, h)⟩ = 0,  i = M+1, ..., N,
⟨∇F_i(x̄, ȳ), (k, h)⟩ = 0,  i ∈ L ∪ P,
⟨∇F_i(x̄, ȳ), (k, h)⟩ ≥ 0,  i ∈ R,

and either h_i > 0 or ⟨∇F_i(x̄, ȳ), (k, h)⟩ > 0 for some i ∈ R;

(ii) for every partition of I_0 into sets P, Q, the matrix

( ∇_x f_J(x̄, ȳ)       ∇_y f_{J,L∪P}(x̄, ȳ)
  ∇_x F_{L∪P}(x̄, ȳ)   ∇_y F_{L∪P,L∪P}(x̄, ȳ) )

has full row rank and there exist vectors k ∈ int T_C(x̄, D), h ∈ R^m such that h_{I_+} = 0, h_Q = 0,

⟨∇f_i(x̄, ȳ), (k, h)⟩ < 0,  i ∈ I,
⟨∇f_i(x̄, ȳ), (k, h)⟩ = 0,  i = M+1, ..., N,
⟨∇F_i(x̄, ȳ), (k, h)⟩ = 0,  i ∈ L ∪ P,

where T_C(x̄, D) denotes the Clarke tangent cone of D at x̄ and J := {M+1, ..., N}.

We have the following necessary optimality condition for (MOPCC).
Theorem 5.2. Let (x̄, ȳ) be a local solution to (MOPCC). Suppose that the preference ≺ is regular at φ(x̄, ȳ). Then there exist µ_0 ∈ {0, 1}, µ_i ∈ R, i = 1, ..., N, η ∈ R^m, ξ ∈ R^m, not all zero, and a unit vector λ ∈ N(l(φ(x̄, ȳ)), φ(x̄, ȳ)) such that

0 ∈ µ_0 Σ_{i=1}^q λ_i ∇φ_i(x̄, ȳ) + Σ_{i=1}^N µ_i ∇f_i(x̄, ȳ) + Σ_{i=1}^m η_i ∇F_i(x̄, ȳ) + N(C, (x̄, ȳ)) + (0, ξ),

µ_i ≥ 0 ∀i = 1, ..., M,   Σ_{i=1}^M µ_i f_i(x̄, ȳ) = 0,
ξ_i = 0 if ȳ_i > 0 and F_i(x̄, ȳ) = 0,
η_i = 0 if ȳ_i = 0 and F_i(x̄, ȳ) > 0,
either ξ_i < 0, η_i < 0 or ξ_i η_i = 0 if ȳ_i = 0 and F_i(x̄, ȳ) = 0.

Moreover, if one of the following constraint qualifications is satisfied, then µ_0 can be taken as 1.

(a) NNAMCQ is satisfied, i.e.,

0 ∈ Σ_{i=1}^N µ_i ∇f_i(x̄, ȳ) + Σ_{i=1}^m η_i ∇F_i(x̄, ȳ) + N(C, (x̄, ȳ)) + (0, ξ),
µ_i ≥ 0 ∀i = 1, ..., M,   Σ_{i=1}^M µ_i f_i(x̄, ȳ) = 0,
ξ_i = 0 if ȳ_i > 0 and F_i(x̄, ȳ) = 0,
η_i = 0 if ȳ_i = 0 and F_i(x̄, ȳ) > 0,
either ξ_i < 0, η_i < 0 or ξ_i η_i = 0 if ȳ_i = 0 and F_i(x̄, ȳ) = 0

implies that µ_i = 0, i = 1, ..., N, and η = 0.

(b) X = R^n, the preference ≺ is defined by the weak Pareto and the linear CQ is satisfied, i.e., f_i and F are affine and C is polyhedral.

(c) The generalized Mangasarian-Fromovitz CQ is satisfied at (x̄, ȳ).

(d) SRCQ is satisfied at (x̄, ȳ), i.e., C = D × R^m for some D ⊆ R^n, there are no inequality and equality constraints, F is C^1 around (x̄, ȳ) and the following conditions are satisfied:
(i) the matrix ∇_y F_{L,L}(x̄, ȳ) is nonsingular;
(ii) the Schur complement of the above matrix in the matrix

( ∇_y F_{L,L}(x̄, ȳ)     ∇_y F_{L,I_0}(x̄, ȳ)
  ∇_y F_{I_0,L}(x̄, ȳ)   ∇_y F_{I_0,I_0}(x̄, ȳ) )

has positive principal minors.

Proof. The Fritz John type necessary optimality condition follows easily from Theorem 1.3 and Proposition 1.4. The constraint qualifications in (a) and (b) follow from Theorems 4.2 and 4.9, respectively. For the proof of (c), see [26, Proposition 4.5] and [18, Proposition 3.3]. Condition (d) is equivalent to the strong regularity condition due to Robinson [20].
5.2. Multiobjective bilevel programming problems

Consider the multiobjective bilevel programming problem defined as follows:

(BP)   minimize   φ(x, z)
       subject to ψ(x, z) ≤ 0,  (x, z) ∈ D  and  z ∈ S(x),

where S(x) is the set of solutions of the problem (P_x):

(P_x)  minimize   g(x, z)
       subject to ϕ(x, z) ≤ 0,

and φ : R^{n+a} → R^q, ψ : R^{n+a} → R^d, ϕ : R^{n+a} → R^b. For simplicity, we assume that all functions φ, g, ψ, ϕ are smooth enough. Let z ∈ S(x). If a certain constraint qualification holds for the lower level problem (P_x) at z, then there exists u ∈ R^b such that

∇_z g(x, z) + u∇_z ϕ(x, z) = 0,
ϕ(x, z) ≤ 0,  u ≥ 0,  ⟨u, ϕ(x, z)⟩ = 0,

where u∇_z ϕ(x, z) := Σ_k u_k ∇_z ϕ_k(x, z). We now derive necessary optimality conditions for (BP).
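To make the replacement of the lower level problem by its Kuhn-Tucker system concrete, here is a toy sketch (ours), assuming SciPy is available and using a scalar upper level objective for simplicity:

```python
import numpy as np
from scipy.optimize import minimize

# Lower level: min_z g(x, z) = 0.5*(z - x)**2  s.t.  phi(x, z) = -z <= 0,
# replaced by its KKT system in (z, u):
#   (z - x) - u = 0,   u >= 0,   z >= 0,   u*z = 0   (solution: z = max(x, 0)).
# Upper level objective (scalarized for this sketch): f(x, z) = (x-1)**2 + (z-2)**2.

def kkt_residual(x, z, u):
    """Stationarity, primal/dual feasibility and complementarity of the lower level."""
    return (abs((z - x) - u),            # stationarity
            max(0.0, -z), max(0.0, -u),  # z >= 0, u >= 0
            abs(u * z))                  # complementarity

res = minimize(
    lambda w: (w[0] - 1.0) ** 2 + (w[1] - 2.0) ** 2,   # upper level objective
    x0=np.array([0.5, 0.5, 0.0]),                      # w = (x, z, u), feasible start
    constraints=[
        {"type": "eq",   "fun": lambda w: (w[1] - w[0]) - w[2]},  # stationarity
        {"type": "ineq", "fun": lambda w: w[1]},                  # z >= 0
        {"type": "ineq", "fun": lambda w: w[2]},                  # u >= 0
        {"type": "eq",   "fun": lambda w: w[1] * w[2]},           # u*z = 0
    ],
    method="SLSQP",
)
print(res.x, kkt_residual(*res.x))  # expect x = z ≈ 1.5, u ≈ 0
```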
Theorem 5.3. Assume that φ and ψ are C^1 and that g, ϕ are twice continuously differentiable around (x̄, z̄), an optimal solution of (BP). Further assume that for each (x, z) the Kuhn-Tucker condition is necessary and sufficient for z ∈ S(x), and that ū is a corresponding multiplier associated with (x̄, z̄), i.e.,

0 = ∇_z g(x̄, z̄) + ū∇_z ϕ(x̄, z̄),   ū ≥ 0,   ⟨ϕ(x̄, z̄), ū⟩ = 0.

Then there exist µ_0 ∈ {0, 1}, γ ∈ R^d_+, α ∈ R^a, β ∈ R^b, not all zero, and a unit vector λ ∈ N(l(φ(x̄, z̄)), φ(x̄, z̄)) such that

0 ∈ µ_0 λ∇φ(x̄, z̄) + γ∇ψ(x̄, z̄) + α∇(∇_z g + ū∇_z ϕ)^t(x̄, z̄) − β∇ϕ(x̄, z̄) + N(D, (x̄, z̄)),
⟨ψ(x̄, z̄), γ⟩ = 0,
(−∇_z ϕ(x̄, z̄)α, −β) ∈ N(gph N_{R^b_+}, (ū, ϕ(x̄, z̄))).

Moreover, µ_0 can be taken as 1 if one of the following constraint qualifications holds:

(a) The preference ≺ is defined by the weak Pareto, ∇_z g, ψ, ϕ are affine mappings and D is polyhedral.

(b) There is no nonzero vector (γ, α, β) ∈ R^d_+ × R^a × R^b such that

0 ∈ γ∇ψ(x̄, z̄) + α∇(∇_z g + ū∇_z ϕ)^t(x̄, z̄) − β∇ϕ(x̄, z̄) + N(D, (x̄, z̄)),
⟨ψ(x̄, z̄), γ⟩ = 0,
(−∇_z ϕ(x̄, z̄)α, −β) ∈ N(gph N_{R^b_+}, (ū, ϕ(x̄, z̄))).

(c) D = E × R^a, where E is a closed subset of R^n, and there is no inequality constraint ψ(x, z) ≤ 0. Furthermore, the strong second order sufficient condition and the linear independence of the binding constraints hold for the lower level problem P_x̄ at z̄, i.e., for any nonzero v such that ∇_z ϕ_i(x̄, z̄)^t v = 0, i ∈ L,

⟨v, (∇_z² g(x̄, z̄) + ū∇_z² ϕ(x̄, z̄))v⟩ > 0,

and the gradients of the binding constraints {∇_z ϕ_i(x̄, z̄), i ∈ L ∪ I_0} are linearly independent, where

ū∇_z² ϕ(x̄, z̄) := Σ_i ū_i ∇_z² ϕ_i(x̄, z̄)

and

L := L(x̄, z̄, ū) := {i : ū_i > 0, ϕ_i(x̄, z̄) = 0},
I_0 := I_0(x̄, z̄, ū) := {i : ū_i = 0, ϕ_i(x̄, z̄) = 0},
I_+ := I_+(x̄, z̄, ū) := {i : ū_i = 0, ϕ_i(x̄, z̄) < 0}.

Proof. Under the assumption that the Kuhn-Tucker condition for the lower level problem is necessary and sufficient for optimality, it is obvious that (x̄, z̄, ū) is a solution of the following problem:

minimize   φ(x, z)
subject to ψ(x, z) ≤ 0,  (x, z) ∈ D,
           ∇_z g(x, z) + u∇_z ϕ(x, z) = 0,
           u ≥ 0,  −ϕ(x, z) ≥ 0,  ⟨u, ϕ(x, z)⟩ = 0,

which is a multiobjective optimization problem with complementarity constraints. The Fritz John type necessary optimality condition and the constraint qualifications (a) and (b) follow from Theorem 5.2. Condition (c) is a sufficient condition for the strong regularity condition, as indicated by Robinson [20].

Acknowledgements. The authors would like to thank the anonymous referee for suggestions which led to a better presentation of the results.
References

1. Borwein, J.M.: Proper efficient points for maximization with respect to cones. SIAM J. Contr. Optim. 15, 57–63 (1977)
2. Borwein, J.M., Treiman, J.S., Zhu, Q.J.: Necessary conditions for constrained optimization problems with semicontinuous and continuous data. Trans. Amer. Math. Soc. 350, 2409–2429 (1998)
3. Borwein, J.M., Zhu, Q.J.: A survey of subdifferential calculus with applications. Nonlinear Analysis, TMA 38, 687–773 (1999)
4. Bracken, J., McGill, J.T.: Production and marketing decisions with multiple objectives in a competitive environment. J. Optim. Theo. Appli. 24, 449–458 (1978)
5. Clarke, F.H.: Optimization and Nonsmooth Analysis. John Wiley & Sons, New York, 1983; Russian edition MIR, Moscow, 1988. Reprinted as Vol. 5 of the series Classics in Applied Mathematics, SIAM, Philadelphia, 1990
6. Craven, B.D.: Nonsmooth multiobjective programming. Numer. Funct. Anal. Optim. 10, 49–64 (1989)
7. Dong, J.: Nondifferentiable multiobjective optimization (Chinese). Adv. in Math. 23, 517–528 (1994)
8. Dontchev, A., Rockafellar, R.T.: Characterizations of strong regularity for variational inequalities over polyhedral convex sets. SIAM J. Optim. 6, 1087–1105 (1996)
9. Ekeland, I.: On the variational principle. J. Math. Anal. Appl. 47, 324–353 (1974)
10. Kruger, A.Y., Mordukhovich, B.S.: Extremal points and Euler equations in nonsmooth optimization (Russian). Dokl. Akad. Nauk BSSR 24, 684–687 (1980)
11. Minami, M.: Weak Pareto-optimal necessary conditions in a nondifferentiable multiobjective program on a Banach space. J. Optim. Theo. Appli. 41, 451–461 (1983)
12. Mordukhovich, B.S.: Maximum principle in problems of time optimal control with nonsmooth constraints. J. Appl. Math. Mech. 40, 960–969 (1976)
13. Mordukhovich, B.S.: Approximation Methods in Problems of Optimization and Control (Russian). Nauka, Moscow, 1988. English translation to appear in Wiley-Interscience
14. Mordukhovich, B.S.: Generalized differential calculus for nonsmooth and set-valued mappings. J. Math. Anal. Appl. 183, 250–288 (1994)
15. Mordukhovich, B.S., Shao, Y.: Stability of set-valued mappings in infinite dimensions: point criteria and applications. SIAM J. Contr. Optim. 35, 285–314 (1997)
16. Mordukhovich, B.S., Shao, Y.: Nonsmooth sequential analysis in Asplund spaces. Trans. Amer. Math. Soc. 348, 1235–1280 (1996)
17. Mordukhovich, B.S., Shao, Y.: Extremal characterization of Asplund spaces. Proc. Amer. Math. Soc. 124, 197–205 (1996)
18. Outrata, J.V.: Optimality conditions for a class of mathematical programs with equilibrium constraints. Math. Oper. Res. 24, 627–644 (1999)
19. Poliquin, R.A., Rockafellar, R.T.: Tilt stability of a local minimum. SIAM J. Optim. 8, 287–299 (1998)
20. Robinson, S.M.: Strongly regular generalized equations. Math. Oper. Res. 5, 43–62 (1980)
21. Singh, C.: Optimality conditions in multiobjective differentiable programming. J. Optim. Theo. Appli. 53, 115–123 (1987)
22. Wang, L., Dong, J., Liu, Q.: Optimality conditions in nonsmooth multiobjective programming. System Sci. Math. Sci. 7, 250–255 (1994)
23. Wang, L., Dong, J., Liu, Q.: Nonsmooth multiobjective programming. System Sci. Math. Sci. 7, 362–366 (1994)
24. Yang, X.Q., Jeyakumar, V.: First and second-order optimality conditions for convex composite multiobjective optimization. J. Optim. Theo. Appli. 95, 209–224 (1997)
25. Ying, M.: The nondominated solution and the proper efficiency of nonsmooth multiobjective programming. J. Sys. Sci. Math. Sci. 5, 269–278 (1985)
26. Ye, J.J.: Constraint qualifications and necessary optimality conditions for optimization problems with variational inequality constraints. SIAM J. Optim. 10, 943–962 (2000)
27. Ye, J.J., Ye, X.Y.: Necessary optimality conditions for optimization problems with variational inequality constraints. Math. Oper. Res. 22, 977–997 (1997)
28. Zhu, Q.J.: Hamiltonian necessary conditions for a multiobjective optimal control problem with endpoint constraints. SIAM J. Contr. Optim. 39, 97–112 (2000)