Hindawi Publishing Corporation, Abstract and Applied Analysis, Volume 2013, Article ID 573408, 6 pages
http://dx.doi.org/10.1155/2013/573408

Research Article
Lagrangian Duality for Multiobjective Programming Problems in Lexicographic Order

X. F. Hu (1) and L. N. Wang (2)
(1) School of Information Engineering, Chongqing City Management Vocational College, Chongqing 401331, China
(2) Chongqing Water Resources and Electric Engineering College, Chongqing 402160, China

Correspondence should be addressed to L. N. Wang; [email protected]

Received 27 April 2013; Revised 19 September 2013; Accepted 22 September 2013

Academic Editor: Shawn X. Wang

Copyright © 2013 X. F. Hu and L. N. Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper deals with a constrained multiobjective programming problem and its dual problem in the lexicographic order. We establish some duality theorems and present several existence results for a Lagrange multiplier and a lexicographic saddle point theorem. Meanwhile, we study the relations between the lexicographic saddle point and the lexicographic solution to a multiobjective programming problem.
1. Introduction

Duality assertions are very important in optimization research, from the theoretical as well as the numerical point of view. Duality theorems in mathematical programming establish typical connections between a constrained minimization problem and a constrained maximization problem. The relationship is such that the existence of a solution to one of these problems ensures the existence of a solution to the other, both having the same optimal value. In the past decades, many authors have studied duality for vector optimization problems; see, for example, [1-11] and the references therein.

On the other hand, it is well known that partial orders play an important role in multiobjective optimization theory. However, the set of efficient points with respect to a partial order is usually large, so one needs additional rules to reduce it. One possible approach is to use the lexicographic order, which is induced by the lexicographic cone. The main reason is that the lexicographic order is a total order, so it overcomes the shortcoming that not all points are comparable under a partial order. The lexicographic order has been investigated in connection with its applications in optimization and decision making theory; see, for example, [12-19] and the references therein. However, the lexicographic cone is neither open nor closed, whereas many results for vector optimization problems are obtained under the assumption that the ordering cone is open or closed. Therefore, it is worthwhile to investigate multiobjective optimization problems in the lexicographic order. Konnov [12] first discussed vector equilibrium problems in the lexicographic order. Recently, Li et al. [13] studied the minimax inequality problem and obtained minimax theorems in the lexicographic order.

The rest of the paper is organized as follows. In Section 2, we recall some definitions and their properties. In Section 3, with respect to the lexicographic order, we establish a weak duality theorem and Lagrangian multiplier rules for a multiobjective programming problem. In Section 4, we investigate a lexicographic saddle point of a vector-valued Lagrangian function and discuss the connection between the lexicographic saddle point and the lexicographic solution to a multiobjective programming problem.
2. Preliminaries and Notations

Throughout this paper, unless otherwise specified, let $X$ be an arbitrary nonempty subset of a topological space and let $\mathbb{R}^{\ell}$ denote the $\ell$-dimensional Euclidean space. Let $\mathbb{R}^{\ell}_{+} := \{x \in \mathbb{R}^{\ell} : x_i \ge 0,\ \forall i\}$. Let $\mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)$ be the space of continuous linear operators from $\mathbb{R}^n$ to $\mathbb{R}^m$ and $\mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m) := \{\Lambda \in \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m) : \Lambda(\mathbb{R}^n_{+}) \subset \mathbb{R}^m_{+}\}$. By $0_{\ell}$ we denote the zero vector of $\mathbb{R}^{\ell}$.

For convenience, we set $I_m = \{1, 2, \ldots, m\}$. As usual, for any $v \in \mathbb{R}^m$ and $i \in I_m$, $v_i$ will denote the $i$th coordinate of $v$ with respect to the canonical basis $\{e_1, e_2, \ldots, e_m\}$. The lexicographic cone of $\mathbb{R}^m$ is defined as the set of all vectors whose first nonzero coordinate (if any) is positive:

$$ C_{\mathrm{lex}} = \{0\} \cup \{ v \in \mathbb{R}^m : \exists\, i \in I_m \text{ such that } v_i > 0 \text{ and } \nexists\, j \in I_m,\ j < i, \text{ such that } v_j \neq 0 \}. \qquad (1) $$

Note that the lexicographic cone $C_{\mathrm{lex}}$ is neither closed nor open. However, it is convex, pointed, and $C_{\mathrm{lex}} \cup (-C_{\mathrm{lex}}) = \mathbb{R}^m$. Thus the binary relation defined for any $u, v \in \mathbb{R}^m$ by

$$ u \le_{\mathrm{lex}} v \iff v - u \in C_{\mathrm{lex}} \qquad (2) $$

is a total order on $\mathbb{R}^m$ (i.e., it is reflexive, transitive, and antisymmetric, and any two vectors are comparable). The binary relation induced by $C_{\mathrm{lex}}$ is called the lexicographic order. Now we recall the definitions of efficient points of a nonempty subset in the lexicographic order.

Definition 1 (see [13, 14]). Let $S \subset \mathbb{R}^m$ be a nonempty subset.
(i) A point $z_0 \in S$ is called a lexicographic maximal point of $S$ if $S \subset z_0 - C_{\mathrm{lex}}$; that is, for any $z \in S$, $z \le_{\mathrm{lex}} z_0$. By $\max_{\mathrm{lex}} S$ we denote the set of all lexicographic maximal points of $S$.
(ii) A point $z_0 \in S$ is called a lexicographic minimal point of $S$ if $S \subset z_0 + C_{\mathrm{lex}}$; that is, for any $z \in S$, $z_0 \le_{\mathrm{lex}} z$. By $\min_{\mathrm{lex}} S$ we denote the set of all lexicographic minimal points of $S$.

Obviously, if $\max_{\mathrm{lex}} S \neq \emptyset$, then $\max_{\mathrm{lex}} S$ is a singleton, and so is $\min_{\mathrm{lex}} S$.

Lemma 2 (see [13]). If $S \subset \mathbb{R}^m$ is a nonempty compact set, then $\min_{\mathrm{lex}} S \neq \emptyset$ and $\max_{\mathrm{lex}} S \neq \emptyset$.

Proposition 3. If $C, D \subset \mathbb{R}^m$ are two nonempty compact subsets, then
(i) $\min_{\mathrm{lex}}(C + D) = \min_{\mathrm{lex}}(C) + \min_{\mathrm{lex}}(D)$;
(ii) $\max_{\mathrm{lex}}(C + D) = \max_{\mathrm{lex}}(C) + \max_{\mathrm{lex}}(D)$.

Proof. It suffices to verify (i), since (ii) follows from $\max_{\mathrm{lex}}(C) = -\min_{\mathrm{lex}}(-C)$. Indeed, let $\bar{x} = \min_{\mathrm{lex}}(C)$ and $\bar{y} = \min_{\mathrm{lex}}(D)$. Then $\bar{x} \in C$ and $\bar{x} \le_{\mathrm{lex}} x$ for all $x \in C$; $\bar{y} \in D$ and $\bar{y} \le_{\mathrm{lex}} y$ for all $y \in D$. Hence, $\bar{x} + \bar{y} \in C + D$ and $\bar{x} + \bar{y} \le_{\mathrm{lex}} x + y$ for all $x \in C$ and $y \in D$. Namely, $\{\bar{x} + \bar{y}\} = \min_{\mathrm{lex}}(C + D)$, and the proof is complete.

Definition 4. A vector-valued mapping $f : X \to \mathbb{R}^{\ell}$ is called lower semicontinuous if, for any $z \in \mathbb{R}^{\ell}$, the set $\{x \in X : f(x) \in z - \mathbb{R}^{\ell}_{+}\}$ is closed in $X$.

Definition 5. A vector-valued mapping $f : X \to \mathbb{R}^{\ell}$ is called convex on $X$ if and only if, for any $x_1, x_2 \in X$, $t \in [0, 1]$, and $i = 1, 2, \ldots, \ell$, $f_i(t x_1 + (1-t) x_2) \le t f_i(x_1) + (1-t) f_i(x_2)$.
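As a small computational illustration (added here for the reader; it is not part of the original analysis), the lexicographic order of Definition 1 and the lexicographic minimum of a finite set can be evaluated directly. The Python sketch below is an editorial example: the helper names `lex_leq` and `lex_min` are ours, and Python's built-in tuple comparison already implements the lexicographic order.

```python
# Editorial illustration: the lexicographic order <=_lex on R^m and the
# lexicographic minimum of a finite set of vectors (cf. Definition 1 and
# Proposition 3). Helper names are illustrative, not from the paper.

def lex_leq(u, v):
    """Return True iff u <=_lex v, i.e., v - u lies in the lexicographic cone."""
    for ui, vi in zip(u, v):
        if vi != ui:
            return vi > ui          # first nonzero coordinate of v - u is positive
    return True                     # u == v, and <=_lex is reflexive

def lex_min(points):
    """Lexicographic minimal point of a finite nonempty subset of R^m."""
    # Python compares tuples lexicographically, so min() returns min_lex directly.
    return min(map(tuple, points))

# A quick check of Proposition 3(i): min_lex(C + D) = min_lex(C) + min_lex(D).
C = [(1.0, 5.0), (1.0, 2.0), (3.0, 0.0)]
D = [(0.0, 1.0), (0.0, 0.0)]
C_plus_D = [tuple(c_i + d_i for c_i, d_i in zip(c, d)) for c in C for d in D]
assert lex_min(C_plus_D) == tuple(a + b for a, b in zip(lex_min(C), lex_min(D)))
assert lex_leq(lex_min(C), (1.0, 5.0)) and not lex_leq((1.0, 5.0), lex_min(C))
```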
Consider the following multiobjective programming problem:

$$ \mathrm{(MP)} \quad \min_{\mathrm{lex}} \ f(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T \quad \text{s.t.} \quad g(x) \in -\mathbb{R}^n_{+}, \ x \in X, \qquad (3) $$

where $f_i : X \to \mathbb{R}$ ($i = 1, 2, \ldots, m$), $g(x) = (g_1(x), g_2(x), \ldots, g_n(x))^T$, and $g_j : X \to \mathbb{R}$ ($j = 1, 2, \ldots, n$). Let $E$ denote the set of all feasible points of the multiobjective optimization problem (MP); that is,

$$ E = \{ x \in X : g(x) \in -\mathbb{R}^n_{+} \}. \qquad (4) $$

And let $E_i$ denote the set of all minimal points of $f_i$ on $E$; that is,

$$ E_i = \{ x \in E : f_i(x) = \min_{y \in E} f_i(y) \}, \quad i = 1, 2, \ldots, m. \qquad (5) $$

Then, we write $f(E) = \bigcup_{x \in E} f(x)$. Throughout this paper, we always assume that $E \neq \emptyset$ and $E_i \neq \emptyset$ ($i = 1, 2, \ldots, m$). A vector-valued Lagrangian function for (MP) is defined from $E \times \mathbb{R}^{m \times n}$ to $\mathbb{R}^m$ by

$$ L(x, \Lambda) = f(x) + \Lambda g(x), \qquad (6) $$

where $\Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m)$; that is, $\Lambda_{m \times n} = (\lambda_1, \lambda_2, \ldots, \lambda_m)^T$ with $\lambda_i = (\lambda_{i1}, \lambda_{i2}, \ldots, \lambda_{in}) \in \mathbb{R}^n_{+}$, $i = 1, 2, \ldots, m$. Then the Lagrangian dual problem is defined as the following multiobjective programming problem:

$$ \mathrm{(DMP)} \quad \max_{\mathrm{lex}} \ \varphi(\Lambda) \quad \text{s.t.} \quad \Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m), \qquad (7) $$

where $\varphi(\Lambda) = \min_{\mathrm{lex}} \{ L(x, \Lambda) : x \in X \}$.

Definition 6. (i) A point $\bar{x} \in E$ is said to be a lexicographic minimal solution to (MP) if $\bar{x} \in E$ and $f(\bar{x}) \le_{\mathrm{lex}} f(x)$ for all $x \in E$. By $\min_{\mathrm{lex}} E$ and $\min_{\mathrm{lex}} f(E)$ we denote the set of all lexicographic minimal solutions and the lexicographic minimal value of (MP), respectively.
(ii) A point $\bar{x} \in E$ is said to be a strong minimal solution to (MP) if $\bar{x} \in E$ and $f(x) - f(\bar{x}) \in \mathbb{R}^m_{+}$ for all $x \in E$; that is, $f_i(\bar{x}) \le f_i(x)$ for all $x \in E$ and $i = 1, 2, \ldots, m$. The set of all strong minimal solutions to (MP) is denoted by $\min_{\mathbb{R}^m_{+}} f(E)$.

Remark 7. (1) From [14], it is easy to verify that (i) of Definition 6 is equivalent to the following concept: a point $\bar{x} \in E$ is a lexicographic minimal solution to (MP) if $\bar{x} \in E$ and $(f(\bar{x}) - C_{\mathrm{lex}} \setminus \{0\}) \cap f(E) = \emptyset$.
(2) If $\bar{x} \in E$ is a strong solution to (MP), then $\bar{x}$ is a lexicographic solution to (MP). However, the converse is not valid in general.
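For intuition, the vector Lagrangian (6) and the dual value $\varphi(\Lambda)$ can be approximated numerically when $X$ is replaced by a finite grid. The following Python sketch is an editorial illustration only; the discretization, the toy data (taken in the spirit of Example 12 below), and the function names are assumptions, not part of the paper.

```python
import numpy as np

# Editorial sketch: evaluate L(x, Lam) = f(x) + Lam @ g(x) and approximate
# phi(Lam) = min_lex { L(x, Lam) : x in X } by replacing X with a finite grid.
# The toy data mirror Example 12 below (m = 2 objectives, n = 1 constraint).

def f(x):
    f1 = 3.0 if -1.0 <= x <= 1.0 else 2.0 * abs(x) + 1.0   # piecewise objective
    f2 = (x - 2.0) ** 2 + 1.0
    return np.array([f1, f2])

def g(x):
    return np.array([x ** 2 - 1.0])                         # feasible iff g(x) <= 0

def phi(Lam, grid):
    """Grid approximation of phi(Lam) = min_lex L(x, Lam) over X."""
    values = [tuple(f(x) + Lam @ g(x)) for x in grid]
    return min(values)                                       # tuple order = lexicographic

grid = np.linspace(-2.0, 2.0, 4001)                          # X = [-2, 2], discretized
Lam = np.array([[0.0], [1.0]])                               # a 2x1 element of L_+(R, R^2)
print(phi(Lam, grid))                                        # approximately (3.0, 2.0)
```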
3. Duality Results

The following result shows that, with respect to the lexicographic order, the objective value of any feasible point of the dual problem (DMP) is less than or equal to the objective value of any feasible point of the primal problem (MP).

Theorem 8 (weak duality theorem). Consider the primal problem (MP) given by (3) and its Lagrangian dual problem (DMP) given by (7). Let $x$ be a feasible point of (MP); that is, $x \in X$ and $g(x) \in -\mathbb{R}^n_{+}$. Also, let $\Lambda$ be a feasible point of (DMP); that is, $\Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m)$. Then

$$ f(x) \ge_{\mathrm{lex}} \varphi(\Lambda). \qquad (8) $$

Proof. Since $x \in X$, $g(x) \in -\mathbb{R}^n_{+}$, and $\Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m)$, we obtain that $\Lambda g(x) \in -\mathbb{R}^m_{+} \subset -C_{\mathrm{lex}}$; that is, $\Lambda g(x) \le_{\mathrm{lex}} 0_m$. Then it follows from the definition of $\varphi(\Lambda)$ that

$$ \varphi(\Lambda) = \min_{\mathrm{lex}} \{ f(\tilde{x}) + \Lambda g(\tilde{x}) : \tilde{x} \in X \} \le_{\mathrm{lex}} f(x) + \Lambda g(x) \le_{\mathrm{lex}} f(x), \qquad (9) $$

and the proof is complete.

As a corollary of the previous theorem, we have the following result.

Corollary 9. Assume that $\max_{\mathrm{lex}} \{ \varphi(\Lambda) : \Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m) \} \neq \emptyset$. Then

$$ \min_{\mathrm{lex}} \{ f(x) : g(x) \in -\mathbb{R}^n_{+},\ x \in X \} \ \ge_{\mathrm{lex}} \ \max_{\mathrm{lex}} \{ \varphi(\Lambda) : \Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m) \}. \qquad (10) $$
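Weak duality can also be checked numerically on the toy instance used in the sketch of Section 2: for every sampled feasible $x$ and every sampled $\Lambda \in \mathcal{L}_{+}(\mathbb{R}, \mathbb{R}^2)$, inequality (8) should hold. The Python sketch below is again an editorial illustration; the sampling ranges are arbitrary assumptions.

```python
import itertools
import numpy as np

# Editorial check of weak duality (Theorem 8) on a discretized toy instance:
# f(x) >=_lex phi(Lam) for all feasible x and all sampled Lam in L_+(R, R^2).

f = lambda x: np.array([3.0 if -1.0 <= x <= 1.0 else 2.0 * abs(x) + 1.0,
                        (x - 2.0) ** 2 + 1.0])
g = lambda x: np.array([x ** 2 - 1.0])

grid = np.linspace(-2.0, 2.0, 2001)                 # discretized X = [-2, 2]
feasible = [x for x in grid if g(x)[0] <= 0.0]      # E = {x : g(x) <= 0} = [-1, 1]

def phi(Lam):
    return min(tuple(f(x) + Lam @ g(x)) for x in grid)

for l1, l2 in itertools.product(np.linspace(0.0, 2.0, 5), repeat=2):
    Lam = np.array([[l1], [l2]])                    # nonnegative entries => Lam in L_+
    dual_value = phi(Lam)
    assert all(tuple(f(x)) >= dual_value for x in feasible)   # f(x) >=_lex phi(Lam)
```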
Before stating the Lagrangian multiplier rule for (MP) in the lexicographic sense, we need the following result.

Lemma 10. Assume that $\bigcap_{i=1}^{m} E_i \neq \emptyset$. A point $\bar{x} \in E$ is a lexicographic solution of (MP) if and only if $\bar{x} \in E$ is a strong solution of (MP).

Proof. If $\bar{x}$ is a strong solution of (MP), then, by Remark 7(2), $\bar{x}$ is a lexicographic solution of (MP).

Conversely, suppose that $\bar{x}$ is a lexicographic solution of (MP); then $f(\bar{x}) \in \min_{\mathrm{lex}} f(E)$. Since $\bigcap_{i=1}^{m} E_i \neq \emptyset$, each $\tilde{x} \in \bigcap_{i=1}^{m} E_i$ is a strong solution of (MP), which implies that $f(\tilde{x}) \in \min_{\mathrm{lex}} f(E)$. Since $\min_{\mathrm{lex}} f(E)$ is a singleton, we have $f(\tilde{x}) = f(\bar{x})$; that is, $\bar{x}$ is a strong solution of (MP).

The equality $f(\bar{x}) = f(\tilde{x})$ can also be verified by induction, as follows. Let $D_0 = E$ and $D_k = \{ x \in D_{k-1} : f_k(x) \le f_k(y),\ \forall y \in D_{k-1} \}$ for $k = 1, 2, \ldots, m$. Obviously, $D_m \subset D_{m-1} \subset \cdots \subset D_0$. Noting that $\bar{x}$ is a lexicographic solution of (MP), we have $\bar{x} \in D_m$. In particular, $\bar{x} \in D_1$; that is,

$$ f_1(x) \ge f_1(\bar{x}), \quad \forall x \in D_0. \qquad (11) $$

Taking $x = \tilde{x}$ in the above inequality, we get

$$ f_1(\tilde{x}) \ge f_1(\bar{x}). \qquad (12) $$

On the other hand, $f_1(\bar{x}) \ge f_1(\tilde{x})$, since $\tilde{x} \in D_0$ is a strong solution and $\bar{x} \in E = D_0$. Therefore, $f_1(\bar{x}) = f_1(\tilde{x})$ and $\tilde{x} \in D_1$. Suppose that $f_k(\bar{x}) = f_k(\tilde{x})$ for $k = 1, 2, \ldots, j-1$; then, obviously, $\tilde{x} \in D_{j-1}$. Similarly, we can verify that $f_j(\bar{x}) = f_j(\tilde{x})$ and $\tilde{x} \in D_j$. The proof is complete.
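The sets $D_k$ in the proof describe the usual sequential (preemptive) minimization scheme: minimize $f_1$ over $E$, then $f_2$ over the resulting set, and so on. The following Python sketch illustrates this construction on a finite feasible set; it is an editorial illustration, and the tolerance parameter is an assumption needed for floating-point data.

```python
# Editorial illustration of the sets D_k from the proof of Lemma 10:
# D_0 = E and D_k = argmin of f_k over D_{k-1}; D_m collects the
# lexicographic solutions of (MP) when E is finite.

def sequential_argmin(E, objectives, tol=1e-12):
    """Return D_m obtained by minimizing f_1, ..., f_m successively over E."""
    D = list(E)                                   # D_0 = E
    for f_k in objectives:                        # build D_1, ..., D_m
        best = min(f_k(x) for x in D)
        D = [x for x in D if f_k(x) <= best + tol]
    return D

# Toy finite feasible set and two objectives (m = 2).
E = [-1.0, -0.5, 0.0, 0.5, 1.0]
f1 = lambda x: 3.0                                # constant on E, so D_1 = E
f2 = lambda x: (x - 2.0) ** 2 + 1.0               # minimized on E at x = 1
print(sequential_argmin(E, [f1, f2]))             # [1.0], the lexicographic solution
```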
Theorem 11 (Lagrangian multiplier rule). Suppose that the following conditions are satisfied:
(i) $X$ is a nonempty convex subset of $\mathbb{R}^{\ell}$;
(ii) $f : X \to \mathbb{R}^m$ is an $\mathbb{R}^m_{+}$-convex vector function; that is, $f_i : X \to \mathbb{R}$, $i = 1, 2, \ldots, m$, are convex functions;
(iii) $g : X \to \mathbb{R}^n$ is an $\mathbb{R}^n_{+}$-convex vector function; that is, $g_j : X \to \mathbb{R}$, $j = 1, 2, \ldots, n$, are convex functions;
(iv) the Slater constraint qualification is satisfied; that is, there exists $x_0 \in X$ such that $g(x_0) \in -\operatorname{int} \mathbb{R}^n_{+}$, where $\operatorname{int} \mathbb{R}^n_{+}$ is the topological interior of $\mathbb{R}^n_{+}$;
(v) $\bigcap_{i=1}^{m} E_i \neq \emptyset$.

If $\bar{x}$ is a lexicographic solution to (MP), then there exists $\bar{\Lambda} \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m)$ such that

$$ f(\bar{x}) = \min_{\mathrm{lex}} \{ f(x) + \bar{\Lambda} g(x) : x \in E \}, \qquad \bar{\Lambda} g(\bar{x}) = 0_m. \qquad (13) $$

Proof. Let $\bar{x}$ be a lexicographic solution of the problem (MP). By Lemma 10, $\bar{x}$ is a strong solution of (MP). Then, for any $x \in E := \{ x \in X : g(x) \in -\mathbb{R}^n_{+} \}$,

$$ f_1(x) \ge f_1(\bar{x}), \quad f_2(x) \ge f_2(\bar{x}), \quad \ldots, \quad f_m(x) \ge f_m(\bar{x}). \qquad (14) $$

Then the inequality system $f_1(x) < f_1(\bar{x})$, $g(x) \in -\mathbb{R}^n_{+}$, for some $x \in X$, has no solution. Define the following set:

$$ W_1 := \{ (\alpha, b) \in \mathbb{R} \times \mathbb{R}^n : f_1(x) - f_1(\bar{x}) < \alpha,\ g(x) - b \in -\mathbb{R}^n_{+}, \text{ for some } x \in X \}. \qquad (15) $$

The set $W_1$ is a convex subset since $X$, $f_1$, and $g$ are convex. Since the above inequality system has no solution, we have $(0, 0_n) \notin W_1$. By the standard separation theorem, there exists a nonzero vector $(\hat{\mu}_1, \hat{\lambda}_1)$ such that $\hat{\mu}_1 \alpha + \hat{\lambda}_1^T b \ge 0$ for all $(\alpha, b) \in W_1$; in particular,

$$ \hat{\mu}_1 (f_1(x) - f_1(\bar{x})) + \hat{\lambda}_1^T g(x) \ge 0, \quad \forall x \in X. \qquad (16) $$

That is,

$$ \hat{\mu}_1 f_1(x) + \hat{\lambda}_1^T g(x) \ge \hat{\mu}_1 f_1(\bar{x}), \quad \forall x \in X. \qquad (17) $$

From the definition of $W_1$, the components of $(\alpha, b) \in W_1$ can be arbitrarily large; hence we must have $(\hat{\mu}_1, \hat{\lambda}_1) \in \mathbb{R}_{+} \times \mathbb{R}^n_{+}$. We will next show that $\hat{\mu}_1 > 0$.
On the contrary, suppose that $\hat{\mu}_1 = 0$. By the Slater constraint qualification, there exists $x_0 \in X$ such that $g(x_0) \in -\operatorname{int} \mathbb{R}^n_{+}$. Then, letting $x = x_0$ in (17), we have $\hat{\lambda}_1^T g(x_0) \ge 0$. But, since $g(x_0) \in -\operatorname{int} \mathbb{R}^n_{+}$ and $\hat{\lambda}_1 \in \mathbb{R}^n_{+}$, the inequality $\hat{\lambda}_1^T g(x_0) \ge 0$ is only possible if $\hat{\lambda}_1 = 0_n$. Thus, it has been shown that $(\hat{\mu}_1, \hat{\lambda}_1) = (0, 0_n)$, which is a contradiction.

After dividing (17) by $\hat{\mu}_1$ and denoting $\lambda_1 = \hat{\lambda}_1 / \hat{\mu}_1$, we obtain

$$ f_1(x) + \lambda_1^T g(x) \ge f_1(\bar{x}), \quad \forall x \in X. \qquad (18) $$

From (18), letting $x = \bar{x}$, we get $\lambda_1^T g(\bar{x}) \ge 0$. Since $g(\bar{x}) \in -\mathbb{R}^n_{+}$ and $\lambda_1 \in \mathbb{R}^n_{+}$, we obtain $\lambda_1^T g(\bar{x}) = 0$. Similarly, we can show that there exist $\lambda_2, \lambda_3, \ldots, \lambda_m \in \mathbb{R}^n_{+}$ such that, for any $x \in X$,

$$ f_i(x) + \lambda_i^T g(x) \ge f_i(\bar{x}), \qquad \lambda_i^T g(\bar{x}) = 0, \qquad i = 2, 3, \ldots, m. \qquad (19) $$

Set $\bar{\Lambda} = (\lambda_1, \lambda_2, \ldots, \lambda_m)^T$. Then it follows from (18) and (19) that

$$ f(x) + \bar{\Lambda} g(x) \in f(\bar{x}) + \mathbb{R}^m_{+} \subset f(\bar{x}) + C_{\mathrm{lex}}, \quad \forall x \in X, \qquad \bar{\Lambda} g(\bar{x}) = 0_m. \qquad (20) $$

Namely,

$$ f(x) + \bar{\Lambda} g(x) \ge_{\mathrm{lex}} f(\bar{x}), \quad \forall x \in X, \qquad \bar{\Lambda} g(\bar{x}) = 0_m. \qquad (21) $$

Thus, (21) and the weak duality theorem (Theorem 8) together yield that

$$ f(\bar{x}) = \min_{\mathrm{lex}} \{ f(x) + \bar{\Lambda} g(x) : x \in E \}, \qquad \bar{\Lambda} g(\bar{x}) = 0_m, \qquad (22) $$

and the proof is complete.

Now, we give an example to illustrate that Theorem 11 is applicable.

Example 12. Let $X = [-2, 2] \subset \mathbb{R}$, let $g : X \to \mathbb{R}$ be defined by $g(x) = x^2 - 1$, and let $f = (f_1, f_2)^T : X \to \mathbb{R}^2$ be defined by

$$ f_1(x) = \begin{cases} -2x + 1, & \text{if } -2 \le x < -1, \\ 3, & \text{if } -1 \le x \le 1, \\ 2x + 1, & \text{if } 1 < x \le 2, \end{cases} \qquad f_2(x) = (x - 2)^2 + 1. \qquad (23) $$

Consider the following convex multiobjective programming problem: $\min_{\mathrm{lex}} \{ f(x) : g(x) \le 0,\ x \in X \}$. Direct computation shows that $E = \{ x \in X : g(x) \le 0 \} = [-1, 1]$, $E_1 = [-1, 1]$, and $E_2 = \{1\}$. Obviously, $\bar{x} = 1 \in E_1 \cap E_2$ is a lexicographic solution of the multiobjective programming problem. Therefore, Theorem 11 is applicable. Indeed, by direct computation, there exists $\bar{\Lambda} = (0, 1)^T$ such that

$$ \min_{\mathrm{lex}} \{ f(x) + \bar{\Lambda} g(x) : x \in X \} = f(\bar{x}) = (3, 2)^T, \qquad (24) $$

$$ \bar{\Lambda} g(\bar{x}) = 0_2. \qquad (25) $$
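The quantities reported in Example 12 can be checked with a short grid computation. The sketch below is an editorial verification aid, not part of the paper; the grid resolution is an arbitrary assumption.

```python
import numpy as np

# Editorial check of Example 12: with Lam = (0, 1)^T,
# min_lex { f(x) + Lam g(x) : x in X } should equal f(1) = (3, 2)^T and Lam g(1) = 0.

f = lambda x: np.array([3.0 if -1.0 <= x <= 1.0 else 2.0 * abs(x) + 1.0,
                        (x - 2.0) ** 2 + 1.0])
g = lambda x: np.array([x ** 2 - 1.0])
Lam = np.array([[0.0], [1.0]])

grid = np.linspace(-2.0, 2.0, 4001)                       # discretized X = [-2, 2]
lagr_min = min(tuple(f(x) + Lam @ g(x)) for x in grid)    # lexicographic minimum
print(lagr_min, tuple(f(1.0)), Lam @ g(1.0))              # ~(3.0, 2.0) twice; zero vector
```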
From Theorem 11, we obtain a sufficient condition for the existence of the Lagrangian multiplier rule for the problem (MP). However, it is not a necessary condition, as the following example shows.

Example 13. Let $X$ and $g(x)$ be given as in Example 12, and let $f = (f_1, f_2)^T : X \to \mathbb{R}^2$ be defined by

$$ f_1(x) = (x - 2)^2 + 1, \qquad f_2(x) = x. \qquad (26) $$

Clearly, the following problem

$$ \min_{\mathrm{lex}} \{ f(x) : g(x) \le 0,\ x \in X \} \qquad (27) $$

is a convex multiobjective programming problem. It follows from direct computation that $E = [-1, 1]$, $E_1 = \{1\}$, and $E_2 = \{-1\}$, and $\bar{x} = 1$ is the unique lexicographic solution of (MP). Thus, since $E_1 \cap E_2 = \emptyset$, assumption (v) of Theorem 11 is not satisfied. However, we can verify that there exists $\bar{\Lambda} = (1, 0)^T$ such that, for any $x \in E$,

$$ f(x) + \bar{\Lambda} g(x) \ge_{\mathrm{lex}} f(\bar{x}), \qquad \bar{\Lambda} g(\bar{x}) = 0_2; \qquad (28) $$

indeed, $\min_{\mathrm{lex}} \{ f(x) + \bar{\Lambda} g(x) : x \in X \} = f(\bar{x}) = (2, 1)^T$.
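As with Example 12, the claim in Example 13 can be checked numerically. The sketch below is an editorial aid with an arbitrary grid; it verifies that $E_1$ and $E_2$ are disjoint and that $\bar{\Lambda} = (1, 0)^T$ still satisfies (28).

```python
import numpy as np

# Editorial check of Example 13: E_1 and E_2 are disjoint, yet Lam = (1, 0)^T
# satisfies f(x) + Lam g(x) >=_lex f(1) on E and Lam g(1) = 0.

f = lambda x: np.array([(x - 2.0) ** 2 + 1.0, x])
g = lambda x: np.array([x ** 2 - 1.0])
Lam = np.array([[1.0], [0.0]])

E = [x for x in np.linspace(-2.0, 2.0, 4001) if x ** 2 - 1.0 <= 0.0]   # E = [-1, 1]
E1 = [x for x in E if f(x)[0] <= min(f(y)[0] for y in E) + 1e-9]       # argmin of f_1 on E
E2 = [x for x in E if f(x)[1] <= min(f(y)[1] for y in E) + 1e-9]       # argmin of f_2 on E
assert not set(E1) & set(E2)                       # condition (v) of Theorem 11 fails

target = tuple(f(1.0))                             # f(xbar) = (2, 1)
assert all(tuple(f(x) + Lam @ g(x)) >= target for x in E)   # inequality (28)
assert np.allclose(Lam @ g(1.0), 0.0)
```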
In order to obtain another sufficient condition for the existence of the Lagrangian multiplier rule, we consider the following assumption.

Theorem 14 (Lagrangian multiplier rule). Assume that all conditions of Theorem 11 are satisfied except for hypothesis (v) of Theorem 11, which is replaced by the following hypothesis:
(v') $f_1 : X \to \mathbb{R}$ is a strictly convex function; that is, for any $x_1, x_2 \in X$ with $x_1 \neq x_2$ and $t \in (0, 1)$, $f_1(t x_1 + (1-t) x_2) < t f_1(x_1) + (1-t) f_1(x_2)$.

If $\bar{x}$ is a lexicographic solution to (MP), then there exists $\bar{\Lambda} \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m)$ such that

$$ f(\bar{x}) = \min_{\mathrm{lex}} \{ f(x) + \bar{\Lambda} g(x) : x \in E \}, \qquad \bar{\Lambda} g(\bar{x}) = 0_m. \qquad (29) $$
Proof. Since $X$ is convex and $g(x)$ is $\mathbb{R}^n_{+}$-convex, the set $E = \{ x \in X : g(x) \in -\mathbb{R}^n_{+} \}$ is convex. Noting that $f_1(x)$ is strictly convex, we obtain that $E_1 = \{ x \in E : f_1(x) = \min f_1(E) \}$ is a singleton. Obviously, $\bar{x} \in E_1 = \{\bar{x}\}$ is a lexicographic solution to (MP). Following the same arguments as in Theorem 11, there exists $\lambda \in \mathbb{R}^n_{+}$ such that $f_1(\bar{x}) = \min \{ f_1(x) + \lambda^T g(x) : x \in X \}$ and $\lambda^T g(\bar{x}) = 0$. Since $g(x)$ is $\mathbb{R}^n_{+}$-convex and $\lambda \in \mathbb{R}^n_{+}$, we obtain that $\lambda^T g(x)$ is a convex function ($\mathbb{R}_{+}$-convex function). By hypothesis (v'), $f_1(x) + \lambda^T g(x)$ is a strictly convex function. Thus, the solution to $\min \{ f_1(x) + \lambda^T g(x) : x \in X \}$ is also a singleton. Let $\lambda_1 = \lambda$ and $\lambda_2 = \cdots = \lambda_m = 0_n$ in Theorem 11. Therefore, $\bar{x}$ is also a lexicographic solution to $\min_{\mathrm{lex}} \{ f(x) + \bar{\Lambda} g(x) : x \in X \}$ and $\bar{\Lambda} g(\bar{x}) = 0_m$. This completes the proof.

Remark 15. The hypothesis (v') in Theorem 14 is redundant if $f$ is a real-valued function; in that case, hypothesis (v) in Theorem 11 always holds.

From Theorems 8 and 11, we have the following strong duality theorem. It shows that, under suitable assumptions, there is no duality gap between the primal and dual lexicographic optimal objective function values.

Theorem 16 (strong duality theorem). Assume that $\max_{\mathrm{lex}} \{ \varphi(\Lambda) : \Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m) \} \neq \emptyset$. If all conditions of Theorem 11 or Theorem 14 are satisfied, then

$$ \min_{\mathrm{lex}} \{ f(x) : g(x) \in -\mathbb{R}^n_{+},\ x \in X \} = \max_{\mathrm{lex}} \{ \varphi(\Lambda) : \Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m) \}, \qquad (30) $$

and there exists $\bar{\Lambda}$ such that $\bar{\Lambda} g(\bar{x}) = 0_m$, where $\bar{x} \in \{ x \in E : f(x) = \min_{\mathrm{lex}} f(E) \}$.
The following example illustrates that there is no duality gap when all conditions of Theorem 16 are satisfied.

Example 17. Let $X$ and $g(x)$ be given as in Example 12, and let $f(x)$ be defined by

$$ f_1(x) = (x - 2)^2 + 1, \qquad f_2(x) = -x. \qquad (31) $$

Direct computation shows that

$$ \min_{\mathrm{lex}} \{ f(x) : g(x) \in -\mathbb{R}^n_{+},\ x \in X \} = \max_{\mathrm{lex}} \{ \varphi(\Lambda) : \Lambda \in \mathcal{L}_{+}(\mathbb{R}^n, \mathbb{R}^m) \} = (2, -1)^T = f(1), \qquad (32) $$

and there exists $\bar{\Lambda} = (1, 1/2)^T$ or $(1, 0)^T$ such that $\bar{\Lambda} g(1) = 0_2$.
(33)
By Theorem 11 or Theorem 14, we directly have the following result. Theorem 19. Suppose that all conditions of Theorem 11 or Theorem 14 are satisfied. If π₯ is a lexicographic solution to (MP), then there exists Ξ β L+ (π
π , π
π ) such that (π₯, Ξ) is a lexicographic saddle point of the vector Lagrangian function πΏ(π₯, Ξ). Thus, we have verified the existence of a lexicographic saddle point for the vector-valued Lagrangian function πΏ(π₯, Ξ) under the appropriate conditions. We conclude this result by showing that the saddle point condition is sufficient for optimality for problem (MP). Theorem 20. If (π₯, Ξ) β πΈ Γ L(π
π , π
π ) is a lexicographic saddle point for the vector-valued Lagrangian function πΏ(π₯, Ξ), then π₯ is a lexicographic solution for (MP), and Ξπ(π₯) = 0π . Proof. Assume that (π₯, Ξ) is a lexicographic saddle point of πΏ(π₯, Ξ). Then, from the first inequality of (33), we get π (π₯) + Ξπ (π₯) β₯lex π (π₯) + Ξπ (π₯) ,
βΞ β L+ (π
π , π
π ) . (34)
This implies that Ξπ (π₯) β₯lex Ξπ (π₯) ,
βΞ β L+ (π
π , π
π ) .
(35)
We claim that π(π₯) β βπ
+π . Otherwise, we suppose that π(π₯) β βπ
+π . Since Ξ β L+ (π
π , π
π ) can be taken arbitrarily to be large, Ξπ(π₯) can be large enough, which contradicts (35). Therefore, we also get (36)
since π(π₯) β βπ
+π and Ξ β L+ (π
π , π
π ). This implies that
π₯ β π}
= max {π (Ξ) : Ξ β πΏ+ (π
π , π
π )} lex
πΏ (π₯, Ξ) β€lex πΏ (π₯, Ξ) β€lex πΏ (π₯, Ξ) .
Ξπ (π₯) β βπ
+π β βπΆlex
Direct computation shows that βπ
+π ,
Now, we introduce the notion of lexicographic saddle point for the vector-valued Lagrangian map πΏ(β
, β
) in terms of lexicographic order and give some optimality conditions.
Ξπ (π₯) β€lex 0π . (32)
π
(37)
On the other hand, by taking Ξ = 0 in (35), we can get
= (2, β1)
Ξπ (π₯) β₯lex 0π .
= π (1) .
Noting the reflexivity of the lexicographic order β€lex , (37) and (38) together yields
And there exists Ξ = (1, 1/2)π or (1, 0)π such that Ξπ(1) = 02 .
Ξπ (π₯) = 0π .
(38)
(39)
6
Abstract and Applied Analysis
Since π(π₯) β βπ
+π , π₯ is a feasible point of (MP). Let π₯ be any feasible point of (MP); that is, π(π₯) β βπ
+π . Then, it follows from the second inequality of (33) and (39) that π (π₯) = π (π₯) + Ξπ (π₯) β€lex π (π₯) + Ξπ (π₯) β€lex π (π₯) ,
[14]
(40) [15]
which implies that π₯ is a lexicographic solution to (MP).
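For the data of Example 17, the pair $(\bar{x}, \bar{\Lambda}) = (1, (1, 0)^T)$ satisfies the saddle point inequalities (33), which can be confirmed by sampling $x \in E$ and $\Lambda \in \mathcal{L}_{+}(\mathbb{R}, \mathbb{R}^2)$. The sketch below is an editorial illustration with an arbitrary sampling grid.

```python
import itertools
import numpy as np

# Editorial check of Definition 18 / Theorem 20 on the data of Example 17:
# (xbar, Lam_bar) = (1, (1, 0)^T) should satisfy
# L(xbar, Lam) <=_lex L(xbar, Lam_bar) <=_lex L(x, Lam_bar) for sampled x in E, Lam in L_+.

f = lambda x: np.array([(x - 2.0) ** 2 + 1.0, -x])
g = lambda x: np.array([x ** 2 - 1.0])
L = lambda x, Lam: tuple(f(x) + Lam @ g(x))        # vector Lagrangian as a tuple

xbar, Lam_bar = 1.0, np.array([[1.0], [0.0]])
middle = L(xbar, Lam_bar)                          # equals (2.0, -1.0)

E = np.linspace(-1.0, 1.0, 2001)                   # discretized feasible set
for x in E:
    assert middle <= L(x, Lam_bar)                 # right-hand inequality of (33)
for l1, l2 in itertools.product(np.linspace(0.0, 5.0, 11), repeat=2):
    assert L(xbar, np.array([[l1], [l2]])) <= middle   # left-hand inequality of (33)
```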
Acknowledgments

The authors would like to express their deep gratitude to the anonymous referee and the associate editor for their valuable comments and suggestions, which helped improve the paper. This research was partially supported by the National Natural Science Foundation of China (Grant no. 11201509).

References

[1] H. W. Corley, "Duality theory for maximizations with respect to cones," Journal of Mathematical Analysis and Applications, vol. 84, no. 2, pp. 560-568, 1981.
[2] H. W. Corley, "Existence and Lagrangian duality for maximizations of set-valued functions," Journal of Optimization Theory and Applications, vol. 54, no. 3, pp. 489-501, 1987.
[3] P. C. Fishburn, "Lexicographic orders, utilities and decision rules: a survey," Management Science, vol. 20, pp. 1442-1471, 1974.
[4] W.-S. Hsia and T. Y. Lee, "Lagrangian function and duality theory in multiobjective programming with set functions," Journal of Optimization Theory and Applications, vol. 57, no. 2, pp. 239-251, 1988.
[5] E. Hernández and L. Rodríguez-Marín, "Lagrangian duality in set-valued optimization," Journal of Optimization Theory and Applications, vol. 134, no. 1, pp. 119-134, 2007.
[6] J. Jahn, Mathematical Vector Optimization in Partially Ordered Linear Spaces, vol. 31 of Methoden und Verfahren der Mathematischen Physik, Peter Lang, Frankfurt, Germany, 1986.
[7] Z.-F. Li and G.-Y. Chen, "Lagrangian multipliers, saddle points, and duality in vector optimization of set-valued maps," Journal of Mathematical Analysis and Applications, vol. 215, no. 2, pp. 297-316, 1997.
[8] Z. F. Li and S. Y. Wang, "Lagrange multipliers and saddle points in multiobjective programming," Journal of Optimization Theory and Applications, vol. 83, no. 1, pp. 63-81, 1994.
[9] W. Song, "Lagrangian duality for minimization of nonconvex multifunctions," Journal of Optimization Theory and Applications, vol. 93, no. 1, pp. 167-182, 1997.
[10] Y. Sawaragi, H. Nakayama, and T. Tanino, Theory of Multiobjective Optimization, vol. 176 of Mathematics in Science and Engineering, Academic Press, Orlando, Fla, USA, 1985.
[11] T. Tanino and Y. Sawaragi, "Duality theory in multiobjective programming," Journal of Optimization Theory and Applications, vol. 27, no. 4, pp. 509-529, 1979.
[12] I. V. Konnov, "On lexicographic vector equilibrium problems," Journal of Optimization Theory and Applications, vol. 118, no. 3, pp. 681-688, 2003.
[13] X. B. Li, S. J. Li, and Z. M. Fang, "A minimax theorem for vector-valued functions in lexicographic order," Nonlinear Analysis: Theory, Methods & Applications, vol. 73, no. 4, pp. 1101-1108, 2010.
[14] M. M. Mäkelä and Y. Nikulin, "On cone characterizations of strong and lexicographic optimality in convex multiobjective optimization," Journal of Optimization Theory and Applications, vol. 143, no. 3, pp. 519-538, 2009.
[15] J.-E. Martínez-Legaz, "Lexicographical order, inequality systems and optimization," in System Modelling and Optimization, P. Thoft-Christensen, Ed., vol. 59 of Lecture Notes in Control and Information Sciences, pp. 203-212, Springer, Berlin, Germany, 1984.
[16] J.-E. Martínez-Legaz, "Lexicographical order and duality in multiobjective programming," European Journal of Operational Research, vol. 33, no. 3, pp. 342-348, 1988.
[17] V. V. Podinovskii and V. M. Gavrilov, Optimization with Respect to Sequentially Applied Criteria, Sovetskoe Radio, Moscow, Russia, 1976 (Russian).
[18] L. Pourkarimi and M. Zarepisheh, "A dual-based algorithm for solving lexicographic multiple objective programs," European Journal of Operational Research, vol. 176, no. 3, pp. 1348-1356, 2007.
[19] N. Popovici, "Structure of efficient sets in lexicographic quasiconvex multicriteria optimization," Operations Research Letters, vol. 34, no. 2, pp. 142-148, 2006.