Hindawi Publishing Corporation
Advances in Operations Research
Volume 2013, Article ID 708979, 9 pages
http://dx.doi.org/10.1155/2013/708979

Research Article
Optimality Conditions and Duality of Three Kinds of Nonlinear Fractional Programming Problems

Xiaomin Zhang and Zezhong Wu
Department of Mathematics, Chengdu University of Information Technology, Sichuan 610225, China
Correspondence should be addressed to Xiaomin Zhang; [email protected]

Received 5 April 2013; Accepted 24 October 2013
Academic Editor: Ching-Jong Liao

Copyright © 2013 X. Zhang and Z. Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Some assumptions on the objective functions and constraint functions are given under convexity and generalized convexity conditions, based on F-convexity, ρ-convexity, and (F, ρ)-convexity. The sufficiency of the Kuhn-Tucker optimality conditions and appropriate duality results are proved for (F, ρ)-convex, (F, α, ρ, d)-convex, and generalized (F, α, ρ, d)-convex functions.
1. Introduction

Multiobjective optimization theory is a development of numerical optimization and is related to many subjects, such as nonsmooth analysis, convex analysis, nonlinear analysis, and set-valued analysis. It has a wide range of applications in industrial design, economics, engineering, military affairs, management science, financial investment, transport, and so forth, and it is now an interdisciplinary branch between applied mathematics and the decision sciences. Convexity plays an important role in optimization theory and has become an important theoretical basis and useful tool for mathematical programming. Convex function theory can be traced back to the work of Hölder, Jensen, and Minkowski at the beginning of the twentieth century, but the work that really drew researchers' attention was the research on game theory and mathematical programming by von Neumann and Morgenstern [1], Dantzig, and Kuhn and Tucker in the forties and fifties; intensive study of convex functions followed in the fifties and sixties. In the mid-sixties convex analysis emerged, the concept of a convex function was extended in a variety of ways, and notions of generalized convexity were introduced. Fractional programming is significant in optimization problems; for instance, in order to measure the production or the efficiency of a system in engineering and economics, one minimizes a ratio of functions between a given period of time and a utilized resource.

Preda [2] established the concept of (F, ρ)-convexity based on F-convexity [3] and ρ-convexity [4] and obtained results that extend both. Motivated by various concepts of convexity, Liang et al. [5] put forward a generalized convexity, called (F, α, ρ, d)-convexity, which extends (F, ρ)-convexity; Liang et al. [6], Weir and Mond [7], Weir [8], Jeyakumar and Mond [9], Egudo [10], Preda [2], and Gulati and Islam [3] obtained corresponding optimality conditions, applied them to define dual problems, and derived duality theorems for single-objective fractional problems and multiobjective problems. The definition of generalized (F, α, ρ, d)-convexity is then given within the (F, α, ρ, d)-convexity framework. In general, however, fractional programming problems are nonconvex, and the Kuhn-Tucker optimality conditions are only necessary. Under what conditions are the Kuhn-Tucker conditions sufficient for optimality? This question has attracted the interest of many researchers and is what we probe here. Based on the former conclusions, by adding conditions on the objective and constraint functions and by modifying the K-T conditions [11], optimality conditions and duality results are given under weaker convexity assumptions. The main results of this paper rest on convex and generalized convex functions and on the properties of sublinear functionals.
In this paper, we discuss sufficient optimality conditions and dual problems for three kinds of nonlinear fractional programming problems. The paper is organized as follows. In Sections 3.1 and 3.2, we present Kuhn-Tucker sufficient optimality conditions and duality for a nonlinear fractional programming problem and a multiobjective fractional programming problem based on generalized (F, α, ρ, d)-convexity. Section 3.3 contains optimality conditions and duality for a multiobjective fractional programming problem under (F, ρ)-convexity. In these sections, we give assumptions on the objective and constraint functions under which the Kuhn-Tucker optimality conditions are sufficient, and we obtain the corresponding duality theorems.
2. Preliminaries

Let E^n be the n-dimensional real vector space, that is, n-dimensional Euclidean space. For y = (y_1, y_2, …, y_n)ᵀ, z = (z_1, z_2, …, z_n)ᵀ ∈ E^n, define (see [12]):

y = z ⟺ y_i = z_i, i = 1, 2, …, n;
y > z ⟺ y_i > z_i, i = 1, 2, …, n;
y ≧ z ⟺ y_i ≧ z_i, i = 1, 2, …, n.  (1)

Definition 1 (see [12]). A point x̄ ∈ X₀ is an efficient solution of a multiobjective programming problem if there exists no x ∈ X₀ such that f(x) ≦ f(x̄) and f(x) ≠ f(x̄).

Definition 2 (see [12]). A point x̄ ∈ X₀ is a weakly efficient solution of a multiobjective programming problem if there exists no x ∈ X₀ such that f(x) < f(x̄).

Definition 3 (see [5]). Given an open set X₀ ⊂ R^n, a functional F : X₀ × X₀ × R^n → R is called sublinear if, for any x, x₀ ∈ X₀,

F(x, x₀; a_1 + a_2) ≦ F(x, x₀; a_1) + F(x, x₀; a_2), ∀a_1, a_2 ∈ R^n,
F(x, x₀; αa) = αF(x, x₀; a), ∀α ∈ R, α ≧ 0, ∀a ∈ R^n.  (2)

It follows from the second equality that

F(x, x₀; 0) = F(x, x₀; 0 · a) = 0 · F(x, x₀; a) = 0, for any a ∈ R^n.  (3)

Throughout, let F : X₀ × X₀ × R^n → R be a sublinear functional, let α : X₀ × X₀ → R₊ \ {0}, ρ = (ρ_1, ρ_2, …, ρ_p)ᵀ with ρ_i ∈ R, d : X₀ × X₀ → R, and let the function f = (f_1, f_2, …, f_p) : X₀ → R^p be differentiable at x₀ ∈ X₀.

Definition 4 (see [3]). Let f(x) be a differentiable function defined on X₀ ⊂ E^n. The function f(x) is said to be F-convex on X₀ with respect to the sublinear functional F if f(x) − f(y) ≧ F_{x,y}[∇f(y)].

Definition 5 (see [4, 12]). Let f(x) be a real-valued function defined on the convex set X₀ ⊂ E^n. If there exists a real number ρ ∈ R such that

f(λx_1 + (1 − λ)x_2) ≦ λf(x_1) + (1 − λ)f(x_2) − ρλ(1 − λ)‖x_1 − x_2‖²  (4)

for any x_1, x_2 ∈ X₀ and any λ ∈ [0, 1], then f(x) is said to be ρ-convex on X₀. In particular, if ρ = 0 we recover the definition of convexity; ρ > 0 (or ρ < 0) in the above definition gives strong (or weak) convexity.

Definition 6 (see [2]). The function f_i : X₀ → R is said to be (F, ρ)-convex at x₀ ∈ X₀ if, for any x ∈ X₀, f_i(x) satisfies

f_i(x) − f_i(x₀) ≧ F(x, x₀; ∇f_i(x₀)) + ρ_i d²(x, x₀).  (5)

Definition 7 (see [5]). The function f_i is said to be (F, α, ρ_i, d)-convex at x₀ ∈ X₀ if

f_i(x) − f_i(x₀) ≧ F(x, x₀; α(x, x₀)∇f_i(x₀)) + ρ_i d²(x, x₀), ∀x ∈ X₀.  (6)

The function f is said to be (F, α, ρ, d)-convex at x₀ if each component f_i of f is (F, α, ρ_i, d)-convex at x₀, and (F, α, ρ, d)-convex on X₀ if it is (F, α, ρ, d)-convex at every point of X₀.

Definition 8. The function f_i is said to be (F, α, ρ_i, d)-quasiconvex at x₀ if f_i(x) ≦ f_i(x₀) ⟹ F(x, x₀; α(x, x₀)∇f_i(x₀)) ≦ −ρ_i d²(x, x₀). The function f is said to be (F, α, ρ, d)-quasiconvex at x₀ if each component f_i of f is (F, α, ρ_i, d)-quasiconvex at x₀.

Definition 9. The function f_i is said to be (F, α, ρ_i, d)-pseudoconvex at x₀ if, for all x ∈ X₀, f_i(x) < f_i(x₀) ⟹ F(x, x₀; α(x, x₀)∇f_i(x₀)) < −ρ_i d²(x, x₀). The function f is said to be (F, α, ρ, d)-pseudoconvex at x₀ if each component f_i of f is (F, α, ρ_i, d)-pseudoconvex at x₀.

Definition 10. The function f is said to be strictly (F, α, ρ, d)-pseudoconvex at x₀ ∈ X₀ if f(x) ≦ f(x₀) ⟹ F(x, x₀; α(x, x₀)∇f(x₀)) < −ρ d²(x, x₀), where F(x, x₀; α(x, x₀)∇f(x₀)) = (F(x, x₀; α(x, x₀)∇f_1(x₀)), …, F(x, x₀; α(x, x₀)∇f_p(x₀))). Further, f is said to be weakly strictly (F, α, ρ, d)-pseudoconvex at x₀ ∈ X₀ if f(x) ≦ f(x₀) ⟹ F(x, x₀; α(x, x₀)∇f(x₀)) < −ρ d²(x, x₀).

In order to prove our main results, we need a lemma, which we present in this section.
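Before turning to the lemma, the preliminary definitions can be sanity-checked numerically. The sketch below verifies Definitions 5 and 6 for f(x) = x² with ρ = 1, using the illustrative choices F(x, x₀; a) = a(x − x₀) (linear, hence sublinear, in its third argument) and d(x, x₀) = |x − x₀|; this particular f, F, and d are assumptions made for the example, not taken from the paper.

```python
import random

random.seed(0)

# Illustrative instances (assumptions for this sketch, not from the paper):
# f(x) = x^2 on R, F(x, x0; a) = a*(x - x0), d(x, x0) = |x - x0|, rho = 1.

def f(x):
    return x * x

def grad_f(x):
    return 2.0 * x

def F(x, x0, a):
    # Linear in a, hence sublinear in the sense of Definition 3.
    return a * (x - x0)

rho = 1.0
for _ in range(1000):
    x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
    lam = random.random()
    # Definition 5 (rho-convexity):
    # f(lam*x1 + (1-lam)*x2) <= lam*f(x1) + (1-lam)*f(x2) - rho*lam*(1-lam)*|x1 - x2|^2
    lhs = f(lam * x1 + (1 - lam) * x2)
    rhs = lam * f(x1) + (1 - lam) * f(x2) - rho * lam * (1 - lam) * (x1 - x2) ** 2
    assert lhs <= rhs + 1e-9
    # Definition 6 ((F, rho)-convexity):
    # f(x1) - f(x2) >= F(x1, x2; f'(x2)) + rho * d^2(x1, x2)
    assert f(x1) - f(x2) >= F(x1, x2, grad_f(x2)) + rho * (x1 - x2) ** 2 - 1e-9

print("Definitions 5 and 6 hold for f(x) = x^2 with rho = 1")
```

For this f both inequalities in fact hold with equality, so ρ = 1 is the largest modulus that works in this instance.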
Lemma 11 (see [13]). Suppose that the differentiable real-valued functions h_i(x) (i = 1, 2, …, m) are (F, α, ρ_i, d)-quasiconvex at x̄ ∈ S; then vᵀh(x) is (F, α, ∑_{i=1}^m v_iρ_i, d)-quasiconvex at x̄ ∈ S, where ρ_i ∈ R, v ≧ 0, and vᵀ denotes the transpose of the m-dimensional column vector v; that is, vᵀ = (v_1, v_2, …, v_m).

3. Optimality Conditions and Duality

3.1. Nonlinear Fractional Programming Problem Involving Inequality and Equality Constraints Based on Generalized (F, α, ρ, d)-Convexity. Consider the nonlinear fractional programming problem

(FP)  min f(x)/g(x)
      s.t. h(x) ≦ 0, l(x) = 0, x ∈ X₀,  (7)

where X₀ is an open set of R^n, f(x) and g(x) are real-valued functions defined on X₀, h(x) is an m-dimensional vector-valued function defined also on X₀, and l(x) is a q-dimensional vector-valued function. Let

S = {x ∈ X₀ | h(x) ≦ 0, l(x) = 0}  (8)

denote the set of all feasible solutions of (FP). Assume that f(x), g(x), h_i(x) (i = 1, 2, …, m), and l_j(x) (j = 1, 2, …, q) are continuously differentiable on X₀ and that f(x) ≧ 0 and g(x) > 0 for all x ∈ X₀. If x̄ ∈ X₀ is a solution of problem (FP) and a constraint qualification [14] holds, then the Kuhn-Tucker necessary conditions hold at x̄: there exist u₀ ∈ R^m and v₀ ∈ R^q such that

∇(f(x̄)/g(x̄)) + ∇h(x̄)u₀ + ∇l(x̄)v₀ = 0,
u₀ᵀh(x̄) = 0, u₀ ≧ 0,
h(x̄) ≦ 0, l(x̄) = 0.  (9)

Theorem 12. Suppose that x̄ is a feasible solution of (FP) at which the Kuhn-Tucker conditions (9) hold, that f(x)/g(x) in problem (FP) is (F, α, ρ, d)-pseudoconvex on S, that h_i(x) (i = 1, 2, …, m) are (F, α, ρ_i′, d)-quasiconvex on S, and that l_j(x) (j = 1, 2, …, q) are (F, α, ρ_j″, d)-quasiconvex on S, where ρ, ρ_i′, ρ_j″ ∈ R and ρ + u₀ᵀρ′ + v₀ᵀρ″ ≧ 0, u₀ᵀρ′ being the inner product of u₀ and ρ′ = (ρ_1′, …, ρ_m′)ᵀ and v₀ᵀρ″ the inner product of v₀ and ρ″ = (ρ_1″, …, ρ_q″)ᵀ. Then x̄ is an optimal solution of problem (FP).

Proof. Suppose that x̄ is not an optimal solution of (FP). Then there exists a feasible solution x ∈ S such that f(x)/g(x) < f(x̄)/g(x̄). By the (F, α, ρ, d)-pseudoconvexity of f(x)/g(x), we have

F(x, x̄; α(x, x̄)∇(f(x̄)/g(x̄))) < −ρ d²(x, x̄).  (10)

Since x ∈ S and u₀ ≧ 0, we have u₀ᵀh(x) ≦ 0 = u₀ᵀh(x̄). By the (F, α, ρ_i′, d)-quasiconvexity of the h_i(x) (i = 1, 2, …, m) and Lemma 11, u₀ᵀh(x) is (F, α, ∑_{i=1}^m u₀ᵢρ_i′, d)-quasiconvex on S. Therefore

F(x, x̄; α(x, x̄)∇h(x̄)u₀) ≦ −∑_{i=1}^m u₀ᵢρ_i′ d²(x, x̄) = −u₀ᵀρ′ d²(x, x̄).  (11)

Similarly, since l(x) = 0 = l(x̄), the (F, α, ρ_j″, d)-quasiconvexity of the l_j(x) (j = 1, 2, …, q) and Lemma 11 give that v₀ᵀl(x) is (F, α, ∑_{j=1}^q v₀ⱼρ_j″, d)-quasiconvex on S, so that

F(x, x̄; α(x, x̄)∇l(x̄)v₀) ≦ −v₀ᵀρ″ d²(x, x̄).  (12)

By (10), (11), and (12), and based on the sublinearity of F, we have

F(x, x̄; α(x, x̄)(∇(f(x̄)/g(x̄)) + ∇h(x̄)u₀ + ∇l(x̄)v₀)) < −(ρ + u₀ᵀρ′ + v₀ᵀρ″) d²(x, x̄).  (13)

Considering that ρ + u₀ᵀρ′ + v₀ᵀρ″ ≧ 0, we get

F(x, x̄; α(x, x̄)(∇(f(x̄)/g(x̄)) + ∇h(x̄)u₀ + ∇l(x̄)v₀)) < 0.  (14)

On the other hand, by the K-T conditions we have ∇(f(x̄)/g(x̄)) + ∇h(x̄)u₀ + ∇l(x̄)v₀ = 0. Hence, based on the sublinearity of F, we obtain

F(x, x̄; α(x, x̄)(∇(f(x̄)/g(x̄)) + ∇h(x̄)u₀ + ∇l(x̄)v₀)) = 0,  (15)

which contradicts (14). The proof is complete.

Consider the dual problem of (FP), with dual variables (y, λ, u, v), λ ∈ R:

(FD)  max (f(y) + uᵀh(y) + vᵀl(y))/g(y)
      s.t. λ∇(f(y)/g(y)) + uᵀ∇h(y) + vᵀ∇l(y) = 0,
           uᵀh(y) ≧ 0, vᵀl(y) = 0, u ≧ 0, v ≧ 0.

Theorem 13. Suppose that λ(f(y)/g(y)) is (F, α, ρ_1, d)-pseudoconvex at y, that uᵀh(y) + vᵀl(y) is (F, α, ρ_2, d)-quasiconvex at y in problems (FP) and (FD), and that ρ_1 + ρ_2 ≧ 0. Then λ(f(x)/g(x)) ≧ λ(f(y)/g(y)) for any feasible solution x of (FP) and any feasible solution (y, λ, u, v) of (FD).
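Before the proof, the sufficiency of Theorem 12 and the weak duality claim of Theorem 13 can be illustrated on a small instance. The instance below is a hypothetical choice for illustration only: f(x) = x² + 1, g(x) = x, one inequality constraint h(x) = 1 − x ≦ 0, and no equality constraints. On the feasible set the objective x + 1/x is convex, hence in particular pseudoconvex for the standard choices F(x, x̄; a) = a(x − x̄) and d(x, x̄) = |x − x̄|, and the K-T point is x̄ = 1 with multiplier u₀ = 0.

```python
# Toy (FP) instance (a hypothetical example, not from the paper):
#   min (x^2 + 1)/x  s.t.  h(x) = 1 - x <= 0   (so g(x) = x > 0 on the feasible set)

def obj(x):
    return (x * x + 1.0) / x

def d_obj(x):
    # derivative of (x^2 + 1)/x = x + 1/x  is  1 - 1/x^2
    return 1.0 - 1.0 / (x * x)

x_bar, u0 = 1.0, 0.0

# Kuhn-Tucker conditions (9): stationarity, complementary slackness, feasibility.
assert abs(d_obj(x_bar) + u0 * (-1.0)) < 1e-12   # grad(f/g) + u0 * grad(h) = 0
assert u0 * (1.0 - x_bar) == 0.0                 # u0 * h(x_bar) = 0
assert 1.0 - x_bar <= 0.0 and u0 >= 0.0          # h(x_bar) <= 0, u0 >= 0

# Sufficiency (Theorem 12 flavor): x_bar minimizes the objective over a feasible grid.
grid = [1.0 + 0.01 * k for k in range(1000)]
assert all(obj(x) >= obj(x_bar) - 1e-12 for x in grid)

# Weak duality (Theorem 13 flavor): (y, u) = (1, 0) is dual feasible, with dual
# objective (f(y) + u*h(y))/g(y) = 2, and every feasible primal value is >= 2.
dual_value = (x_bar * x_bar + 1.0 + u0 * (1.0 - x_bar)) / x_bar
assert all(obj(x) >= dual_value - 1e-12 for x in grid)
print("primal optimum", obj(x_bar), "= dual value", dual_value)
```

Here the primal and dual optimal values coincide at 2, consistent with the inequality x + 1/x ≧ 2 for x ≧ 1.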
Proof. Assume that the conclusion is not true; that is, λ(f(x)/g(x)) < λ(f(y)/g(y)). By the (F, α, ρ_1, d)-pseudoconvexity of λ(f(y)/g(y)) at y, we get

F(x, y; α(x, y)λ∇(f(y)/g(y))) < −ρ_1 d²(x, y).  (16)

Using uᵀh(x) ≦ 0, vᵀl(x) = 0, uᵀh(y) ≧ 0, and vᵀl(y) = 0, we have

uᵀh(x) + vᵀl(x) ≦ uᵀh(y) + vᵀl(y).  (17)

By the (F, α, ρ_2, d)-quasiconvexity of uᵀh(y) + vᵀl(y), we get

F(x, y; α(x, y)(uᵀ∇h(y) + vᵀ∇l(y))) ≦ −ρ_2 d²(x, y).  (18)

By (16) and (18) and based on the sublinearity of F, we have

F(x, y; α(x, y)(λ∇(f(y)/g(y)) + uᵀ∇h(y) + vᵀ∇l(y))) < −(ρ_1 + ρ_2) d²(x, y).  (19)

Since (y, λ, u, v) is a feasible solution of (FD),

λ∇(f(y)/g(y)) + uᵀ∇h(y) + vᵀ∇l(y) = 0.  (20)

Hence

0 = F(x, y; α(x, y)(λ∇(f(y)/g(y)) + uᵀ∇h(y) + vᵀ∇l(y))) < −(ρ_1 + ρ_2) d²(x, y),  (21)

which contradicts the known condition ρ_1 + ρ_2 ≧ 0. The proof is complete.

3.2. Nonlinear Multiobjective Fractional Programming Problem Involving Inequality and Equality Constraints Based on (F, α, ρ, d)-Convexity and Generalized (F, α, ρ, d)-Convexity. Consider the nonlinear multiobjective fractional programming problem

(VFP)  min φ(x) = f(x)/g(x) = [f_1(x)/g(x), f_2(x)/g(x), …, f_p(x)/g(x)]
       s.t. h(x) ≦ 0, l(x) = 0, x ∈ X₀,  (22)

where X₀ is an open set of R^n; f_i(x) (i = 1, 2, …, p), g(x), and h_j(x) : X₀ → R (j = 1, 2, …, m) are real-valued functions defined on X₀; l_k(x) : X₀ → R (k = 1, 2, …, q) are real-valued functions defined also on X₀; f(x) = (f_1(x), f_2(x), …, f_p(x))ᵀ; h(x) = (h_1(x), h_2(x), …, h_m(x))ᵀ; and l(x) = (l_1(x), l_2(x), …, l_q(x))ᵀ. Let

S = {x | x ∈ X₀, h(x) ≦ 0, l(x) = 0}  (23)

denote the set of all feasible solutions of (VFP), and assume that f_i(x) (i = 1, 2, …, p), g(x), h_j(x) (j = 1, 2, …, m), and l_k(x) (k = 1, 2, …, q) are continuously differentiable on X₀ and that g(x) > 0 for all x ∈ X₀.

Theorem 14. Suppose that f(x)/g(x) is weakly strictly (F, α, ρ_1, d)-pseudoconvex at x̄ ∈ X₀, that h_j(x) (j = 1, 2, …, m) are (F, α, ρ_2, d)-quasiconvex with respect to x̄ ∈ X₀, that l_k(x) (k = 1, 2, …, q) are (F, α, ρ_3, d)-convex with respect to x̄ ∈ X₀, and that there exist λ ∈ Λ⁺⁺ (or Λ⁺), u ∈ R^m_+, and v ∈ R^q_+ satisfying

∑_{i=1}^p λ_i∇(f_i(x̄)/g(x̄)) + ∑_{j=1}^m u_j∇h_j(x̄) + ∑_{k=1}^q v_k∇l_k(x̄) = 0,
u_jh_j(x̄) = 0, j = 1, 2, …, m,
h_j(x̄) ≦ 0, j = 1, 2, …, m, l_k(x̄) = 0, k = 1, 2, …, q,  (24)

and ρ = ∑_{i=1}^p λ_iρ_1 + ∑_{j=1}^m u_jρ_2 + ∑_{k=1}^q v_kρ_3 ≧ 0. Then x̄ is an efficient solution of (VFP), where

Λ⁺ = {λ = (λ_1, λ_2, …, λ_p) | λ_i ≧ 0, i = 1, 2, …, p, ∑_{i=1}^p λ_i = 1},
Λ⁺⁺ = {λ = (λ_1, λ_2, …, λ_p) | λ_i > 0, i = 1, 2, …, p, ∑_{i=1}^p λ_i = 1}.  (25)

Proof. Suppose that x̄ is not an efficient solution of (VFP); then there exists a feasible solution x ∈ X₀ such that φ(x) ≤ φ(x̄), that is, f(x)/g(x) ≤ f(x̄)/g(x̄). By the weakly strict (F, α, ρ_1, d)-pseudoconvexity of f(x)/g(x) at x̄ ∈ X₀, we get (componentwise)

F(x, x̄; α(x, x̄)∇(f(x̄)/g(x̄))) < −ρ_1 d²(x, x̄).  (26)

Using λ ∈ Λ⁺⁺ (or Λ⁺), we then have

∑_{i=1}^p λ_i F(x, x̄; α(x, x̄)∇(f_i(x̄)/g(x̄))) < −∑_{i=1}^p λ_iρ_1 d²(x, x̄).  (27)
Based on the sublinearity of F, we obtain

F(x, x̄; α(x, x̄)∑_{i=1}^p λ_i∇(f_i(x̄)/g(x̄))) < −∑_{i=1}^p λ_iρ_1 d²(x, x̄).  (28)

Since uᵀh(x̄) = 0, u ≧ 0, and h(x) ≦ 0, we have uᵀh(x) − uᵀh(x̄) ≦ 0; that is, uᵀh(x) ≦ uᵀh(x̄). Since the h_j(x) (j = 1, 2, …, m) are (F, α, ρ_2, d)-quasiconvex at x̄ ∈ X₀, by Lemma 11, uᵀh is (F, α, ρ_2∑_{j=1}^m u_j, d)-quasiconvex at x̄ ∈ X₀. Hence we obtain

F(x, x̄; α(x, x̄)∑_{j=1}^m u_j∇h_j(x̄)) ≦ −∑_{j=1}^m u_jρ_2 d²(x, x̄).  (29)

By the (F, α, ρ_3, d)-convexity of the l_k(x) at x̄ ∈ X₀, we have

l_k(x) − l_k(x̄) ≧ F(x, x̄; α(x, x̄)∇l_k(x̄)) + ρ_3 d²(x, x̄).  (30)

Since v ≧ 0, we have

0 = vᵀl(x) − vᵀl(x̄) ≧ ∑_{k=1}^q v_k F(x, x̄; α(x, x̄)∇l_k(x̄)) + ∑_{k=1}^q v_kρ_3 d²(x, x̄).  (31)

By the sublinearity of F, we obtain

F(x, x̄; α(x, x̄)∑_{k=1}^q v_k∇l_k(x̄)) ≦ −∑_{k=1}^q v_kρ_3 d²(x, x̄).  (32)

By the known conditions (24), we have

F(x, x̄; α(x, x̄)(∑_{i=1}^p λ_i∇(f_i(x̄)/g(x̄)) + ∑_{j=1}^m u_j∇h_j(x̄) + ∑_{k=1}^q v_k∇l_k(x̄))) = 0.  (33)

By (28), (29), and (32) and by the sublinearity of F, we obtain

F(x, x̄; α(x, x̄)(∑_{i=1}^p λ_i∇(f_i(x̄)/g(x̄)) + ∑_{j=1}^m u_j∇h_j(x̄) + ∑_{k=1}^q v_k∇l_k(x̄)))
< −(∑_{i=1}^p λ_iρ_1 + ∑_{j=1}^m u_jρ_2 + ∑_{k=1}^q v_kρ_3) d²(x, x̄) = −ρ d²(x, x̄) ≦ 0,  (34)

which contradicts (33). Therefore, x̄ is an efficient solution of (VFP). The proof is complete.

Consider the dual problem of (VFP):

(VFD)  max (f_1(y)/g(y) + uᵀh(y) + vᵀl(y), …, f_p(y)/g(y) + uᵀh(y) + vᵀl(y))
       s.t. ∑_{i=1}^p λ_i∇(f_i(y)/g(y)) + ∑_{j=1}^m u_j∇h_j(y) + ∑_{k=1}^q v_k∇l_k(y) = 0,  (35)
            λ ∈ Λ⁺⁺, u ∈ R^m_+, v ∈ R^q_+.

Theorem 15. Suppose that f_i/g (i = 1, 2, …, p) is (F, α, ρ_1, d)-convex at y, that h_j (j = 1, 2, …, m) is (F, α, ρ_2, d)-convex at y, that l_k (k = 1, 2, …, q) is (F, α, ρ_3, d)-convex at y, and that ∑_{i=1}^p λ_iρ_1 + ∑_{j=1}^m u_jρ_2 + ∑_{k=1}^q v_kρ_3 ≧ 0. Then

∑_{i=1}^p λ_i(f_i(x)/g(x)) ≧ ∑_{i=1}^p λ_i(f_i(y)/g(y)) + uᵀh(y) + vᵀl(y).  (36)

Proof. By the (F, α, ρ_1, d)-convexity of f_i/g at y, we get

f_i(x)/g(x) − f_i(y)/g(y) ≧ F(x, y; α(x, y)∇(f_i(y)/g(y))) + ρ_1 d²(x, y), i = 1, 2, …, p.  (37)

By the (F, α, ρ_2, d)-convexity of h_j at y, we get

h_j(x) − h_j(y) ≧ F(x, y; α(x, y)∇h_j(y)) + ρ_2 d²(x, y), j = 1, 2, …, m.  (38)

By the (F, α, ρ_3, d)-convexity of l_k at y, we get

l_k(x) − l_k(y) ≧ F(x, y; α(x, y)∇l_k(y)) + ρ_3 d²(x, y), k = 1, 2, …, q.  (39)
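The combination step that follows, summing inequalities like (37)–(39) with nonnegative weights and then invoking the sublinearity of F, can itself be sanity-checked numerically. The functions, weights, F, and d below are illustrative assumptions, not from the paper: each f_i satisfies an (F, ρ_i)-convex inequality, and the weighted sum then satisfies it with the combined constant ∑ u_i ρ_i, because this F is linear in its third argument.

```python
import random

random.seed(1)

# Illustrative choices (assumptions for this sketch, not from the paper):
# f1(x) = x^2 with rho1 = 1, f2(x) = 2x^2 with rho2 = 2,
# F(x, y; a) = a*(x - y), d(x, y) = |x - y|.

def F(x, y, a):
    return a * (x - y)

funcs = [
    (lambda t: t * t,     lambda t: 2.0 * t, 1.0),  # (f1, f1', rho1)
    (lambda t: 2 * t * t, lambda t: 4.0 * t, 2.0),  # (f2, f2', rho2)
]
u = [0.5, 1.5]  # nonnegative weights

for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    d2 = (x - y) ** 2
    # Weighted sum s = u1*f1 + u2*f2, its gradient at y, and rho = u1*rho1 + u2*rho2.
    s_x = sum(ui * fi(x) for (fi, _, _), ui in zip(funcs, u))
    s_y = sum(ui * fi(y) for (fi, _, _), ui in zip(funcs, u))
    grad_s = sum(ui * gi(y) for (_, gi, _), ui in zip(funcs, u))
    rho = sum(ui * ri for (_, _, ri), ui in zip(funcs, u))
    # Combined inequality: s(x) - s(y) >= F(x, y; s'(y)) + (sum u_i*rho_i) d^2(x, y)
    assert s_x - s_y >= F(x, y, grad_s) + rho * d2 - 1e-9

print("weighted sum satisfies the combined (F, sum(u_i*rho_i)) inequality")
```

This is the same mechanism Lemma 11 formalizes for quasiconvex functions.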
Since λ ∈ Λ⁺⁺, u ∈ R^m_+, and v ∈ R^q_+, the previous three inequalities give

(∑_{i=1}^p λ_i(f_i(x)/g(x)) + uᵀh(x) + vᵀl(x)) − (∑_{i=1}^p λ_i(f_i(y)/g(y)) + uᵀh(y) + vᵀl(y))
≧ ∑_{i=1}^p λ_i F(x, y; α(x, y)∇(f_i(y)/g(y))) + ∑_{j=1}^m u_j F(x, y; α(x, y)∇h_j(y)) + ∑_{k=1}^q v_k F(x, y; α(x, y)∇l_k(y))
+ (∑_{i=1}^p λ_iρ_1 + ∑_{j=1}^m u_jρ_2 + ∑_{k=1}^q v_kρ_3) d²(x, y).  (40)

By the sublinearity of F, we obtain

(∑_{i=1}^p λ_i(f_i(x)/g(x)) + uᵀh(x) + vᵀl(x)) − (∑_{i=1}^p λ_i(f_i(y)/g(y)) + uᵀh(y) + vᵀl(y))
≧ F(x, y; α(x, y)(∑_{i=1}^p λ_i∇(f_i(y)/g(y)) + ∑_{j=1}^m u_j∇h_j(y) + ∑_{k=1}^q v_k∇l_k(y)))
+ (∑_{i=1}^p λ_iρ_1 + ∑_{j=1}^m u_jρ_2 + ∑_{k=1}^q v_kρ_3) d²(x, y).  (41)

By the feasibility of (y, λ, u, v), we have

∑_{i=1}^p λ_i∇(f_i(y)/g(y)) + ∑_{j=1}^m u_j∇h_j(y) + ∑_{k=1}^q v_k∇l_k(y) = 0.  (42)

Since ∑_{i=1}^p λ_iρ_1 + ∑_{j=1}^m u_jρ_2 + ∑_{k=1}^q v_kρ_3 ≧ 0 and by (41), we get

(∑_{i=1}^p λ_i(f_i(x)/g(x)) + uᵀh(x) + vᵀl(x)) − (∑_{i=1}^p λ_i(f_i(y)/g(y)) + uᵀh(y) + vᵀl(y)) ≧ 0.  (43)

Since uᵀh(x) ≦ 0 and vᵀl(x) = 0, we obtain

∑_{i=1}^p λ_i(f_i(x)/g(x)) ≧ ∑_{i=1}^p λ_i(f_i(y)/g(y)) + uᵀh(y) + vᵀl(y).  (44)

The proof is complete.

3.3. Nonlinear Multiobjective Fractional Programming Problem Involving Inequality and Equality Constraints under (F, ρ)-Convexity. Consider the multiobjective fractional programming problem

(MFP)  min (f_1(x)/g_1(x), f_2(x)/g_2(x), …, f_p(x)/g_p(x))
       s.t. h_j(x) ≦ 0, j = 1, 2, …, m, l_k(x) = 0, k = 1, 2, …, q, x ∈ X₀,  (45)

where X₀ is an open set of R^n; f_i(x) : X₀ → R (i = 1, 2, …, p) with f_i(x) ≧ 0; g_i(x) : X₀ → R (i = 1, 2, …, p) with g_i(x) > 0; and h_j(x) : X₀ → R (j = 1, 2, …, m) and l_k(x) : X₀ → R (k = 1, 2, …, q) are continuously differentiable on X₀. Denote by G the set of all feasible solutions of (MFP); that is,

G = {x ∈ X₀ | h_j(x) ≦ 0, j = 1, 2, …, m; l_k(x) = 0, k = 1, 2, …, q},  (46)

and let φ_i(x) = f_i(x)/g_i(x), φ(x) = (φ_1(x), φ_2(x), …, φ_p(x)).

Theorem 16. Assume that there exist (x̄, α, u, v), with α = (α_1, α_2, …, α_p) ∈ R^p_+, u = (u_1, u_2, …, u_m) ∈ R^m_+, and v = (v_1, v_2, …, v_q) ∈ R^q_+, such that

(i) ∑_{i=1}^p α_iψ_i(x̄) + ∑_{j=1}^m u_j∇h_j(x̄) + ∑_{k=1}^q v_k∇l_k(x̄) = 0 and u_jh_j(x̄) = 0, j = 1, 2, …, m, where ψ_i(x̄) = (1/g_i(x̄))[∇f_i(x̄) − φ_i(x̄)∇g_i(x̄)];
(ii) f_i and −g_i (i = 1, 2, …, p) are (F, ρ)-convex at x̄, with ρ > 0;
(iii) the h_j are (F, ρ)-convex at x̄ for all j = 1, 2, …, m, with ρ > 0;
(iv) the l_k are (F, ρ)-convex at x̄ for all k = 1, 2, …, q, with ρ > 0.

Then x̄ is a Pareto optimal solution of (MFP).

Proof. Suppose that x̄ is not a Pareto optimal solution of (MFP); then there exists a feasible solution x ∈ G such that f_i(x)/g_i(x) ≦ f_i(x̄)/g_i(x̄), i = 1, 2, …, p, that is, f_i(x) − φ_i(x̄)g_i(x) ≦ 0, that is, f_i(x) − φ_i(x̄)g_i(x) ≦ f_i(x̄) − φ_i(x̄)g_i(x̄) (the right-hand side being zero); it follows that

f_i(x) − φ_i(x̄)g_i(x) ≦ f_i(x̄) − φ_i(x̄)g_i(x̄).  (47)
By the (F, ρ)-convexity of f_i and −g_i, i = 1, 2, …, p, we have

f_i(x) − f_i(x̄) ≧ F(x, x̄; ∇f_i(x̄)) + ρ d²(x, x̄), ∀x ∈ X₀,
−g_i(x) + g_i(x̄) ≧ F(x, x̄; −∇g_i(x̄)) + ρ d²(x, x̄), ∀x ∈ X₀.  (48)

Using the conditions f_i(x̄) ≧ 0 and g_i(x̄) > 0, we see that φ_i(x̄) = f_i(x̄)/g_i(x̄) ≧ 0. By the properties of the sublinear functional F, we obtain

−φ_i(x̄)g_i(x) + φ_i(x̄)g_i(x̄) ≧ φ_i(x̄)F(x, x̄; −∇g_i(x̄)) + φ_i(x̄)ρ d²(x, x̄)
= F(x, x̄; −φ_i(x̄)∇g_i(x̄)) + φ_i(x̄)ρ d²(x, x̄).  (49)

By (48) and (49) and based on the sublinearity of F, we have

f_i(x) − φ_i(x̄)g_i(x) − [f_i(x̄) − φ_i(x̄)g_i(x̄)]
≧ F(x, x̄; ∇f_i(x̄) − φ_i(x̄)∇g_i(x̄)) + [1 + φ_i(x̄)]ρ d²(x, x̄).  (50)

By (47), we have

F(x, x̄; ∇f_i(x̄) − φ_i(x̄)∇g_i(x̄)) + [1 + φ_i(x̄)]ρ d²(x, x̄) ≦ 0.  (51)

Since ψ_i(x̄) = (1/g_i(x̄))[∇f_i(x̄) − φ_i(x̄)∇g_i(x̄)], multiplying (51) by α_i(1/g_i(x̄)) ≧ 0 (i = 1, 2, …, p), summing, and using the sublinearity of F, we have

F(x, x̄; ∑_{i=1}^p α_i(1/g_i(x̄))[∇f_i(x̄) − φ_i(x̄)∇g_i(x̄)]) + ∑_{i=1}^p α_i(1/g_i(x̄))[1 + φ_i(x̄)]ρ d²(x, x̄) ≦ 0,  (52)

that is,

F(x, x̄; ∑_{i=1}^p α_iψ_i(x̄)) + ∑_{i=1}^p α_i(1/g_i(x̄))[1 + φ_i(x̄)]ρ d²(x, x̄) ≦ 0.  (53)

On the other hand, for j = 1, 2, …, m, by the (F, ρ)-convexity of h_j at x̄, we have

h_j(x) − h_j(x̄) ≧ F(x, x̄; ∇h_j(x̄)) + ρ d²(x, x̄), ∀x ∈ X₀.  (54)

Multiplying (54) by u_j ≧ 0 and using the sublinearity of F, we have

u_jh_j(x) − u_jh_j(x̄) ≧ F(x, x̄; u_j∇h_j(x̄)) + u_jρ d²(x, x̄),  (55)

which together with u_jh_j(x) ≦ 0 and u_jh_j(x̄) = 0 yields

F(x, x̄; u_j∇h_j(x̄)) + u_jρ d²(x, x̄) ≦ 0.  (56)

Summing the inequality (56) over j, we have

∑_{j=1}^m F(x, x̄; u_j∇h_j(x̄)) + ∑_{j=1}^m u_jρ d²(x, x̄) ≦ 0,  (57)

and the sublinearity of F implies

F(x, x̄; ∑_{j=1}^m u_j∇h_j(x̄)) + ∑_{j=1}^m u_jρ d²(x, x̄) ≦ 0.  (58)

For k = 1, 2, …, q, by the (F, ρ)-convexity of l_k at x̄, we have

l_k(x) − l_k(x̄) ≧ F(x, x̄; ∇l_k(x̄)) + ρ d²(x, x̄), ∀x ∈ X₀.  (59)

Multiplying the inequality (59) by v_k ≧ 0, we get

v_kl_k(x) − v_kl_k(x̄) ≧ F(x, x̄; v_k∇l_k(x̄)) + v_kρ d²(x, x̄),  (60)

which together with v_kl_k(x) = 0 and v_kl_k(x̄) = 0 yields

F(x, x̄; v_k∇l_k(x̄)) + v_kρ d²(x, x̄) ≦ 0.  (61)

Summing the inequality (61) over k, we have

∑_{k=1}^q F(x, x̄; v_k∇l_k(x̄)) + ∑_{k=1}^q v_kρ d²(x, x̄) ≦ 0.  (62)

The inequality (62) along with the sublinearity of F implies

F(x, x̄; ∑_{k=1}^q v_k∇l_k(x̄)) + ∑_{k=1}^q v_kρ d²(x, x̄) ≦ 0.  (63)

The sublinearity of F together with (53), (58), and (63) yields

F(x, x̄; ∑_{i=1}^p α_iψ_i(x̄) + ∑_{j=1}^m u_j∇h_j(x̄) + ∑_{k=1}^q v_k∇l_k(x̄))
+ ∑_{i=1}^p α_i(1/g_i(x̄))[1 + φ_i(x̄)]ρ d²(x, x̄) + ∑_{j=1}^m u_jρ d²(x, x̄) + ∑_{k=1}^q v_kρ d²(x, x̄) ≦ 0.  (64)

According to assumption (i) and the sublinearity of F (so that F(x, x̄; 0) = 0), and since ρ > 0 and d²(x, x̄) > 0 for x ≠ x̄, we obtain

F(x, x̄; ∑_{i=1}^p α_iψ_i(x̄) + ∑_{j=1}^m u_j∇h_j(x̄) + ∑_{k=1}^q v_k∇l_k(x̄))
+ ∑_{i=1}^p α_i(1/g_i(x̄))[1 + φ_i(x̄)]ρ d²(x, x̄) + ∑_{j=1}^m u_jρ d²(x, x̄) + ∑_{k=1}^q v_kρ d²(x, x̄) > 0,  (65)

which contradicts (64) obviously.
Therefore, x̄ is a Pareto optimal solution of (MFP). The proof is complete.

Consider the dual problem of (MFP):

(MFD)  max (f_1(y)/g_1(y) + uᵀh(y) + vᵀl(y), …, f_p(y)/g_p(y) + uᵀh(y) + vᵀl(y))
       s.t. ∑_{i=1}^p λ_i∇(f_i(y)/g_i(y)) + ∑_{j=1}^m u_j∇h_j(y) + ∑_{k=1}^q v_k∇l_k(y) = 0,  (66)
            uᵀh(y) ≧ 0, vᵀl(y) = 0, u ≧ 0, v ≧ 0.

Theorem 17. Suppose that f_i(y)/g_i(y) (i = 1, 2, …, p) is (F, ρ_i)-convex at y, that h_j(y) (j = 1, 2, …, m) is (F, ρ_j′)-convex at y, that l_k(y) (k = 1, 2, …, q) is (F, ρ_k″)-convex at y, that λ ∈ Λ⁺⁺, u ∈ R^m_+, v ∈ R^q_+, and that ∑_{i=1}^p λ_iρ_i + ∑_{j=1}^m u_jρ_j′ + ∑_{k=1}^q v_kρ_k″ ≧ 0. Then λᵀ(f(x)/g(x)) ≧ λᵀ(f(y)/g(y)), where f(x)/g(x) = (f_1(x)/g_1(x), …, f_p(x)/g_p(x)).

Proof. By the (F, ρ_i)-convexity of f_i(y)/g_i(y), the (F, ρ_j′)-convexity of h_j(y), and the (F, ρ_k″)-convexity of l_k(y), the sublinearity of F, and since λ ∈ Λ⁺⁺, u ∈ R^m_+, v ∈ R^q_+, we have

∑_{i=1}^p λ_i(f_i(x)/g_i(x)) − ∑_{i=1}^p λ_i(f_i(y)/g_i(y)) ≧ F(x, y; ∑_{i=1}^p λ_i∇(f_i(y)/g_i(y))) + ∑_{i=1}^p λ_iρ_i d²(x, y),
∑_{j=1}^m u_jh_j(x) − ∑_{j=1}^m u_jh_j(y) ≧ F(x, y; ∑_{j=1}^m u_j∇h_j(y)) + ∑_{j=1}^m u_jρ_j′ d²(x, y),
∑_{k=1}^q v_kl_k(x) − ∑_{k=1}^q v_kl_k(y) ≧ F(x, y; ∑_{k=1}^q v_k∇l_k(y)) + ∑_{k=1}^q v_kρ_k″ d²(x, y).  (67)

By (67) and based on the sublinearity of F, we have

∑_{i=1}^p λ_i(f_i(x)/g_i(x)) − ∑_{i=1}^p λ_i(f_i(y)/g_i(y)) + uᵀh(x) − uᵀh(y) + vᵀl(x) − vᵀl(y)
≧ F(x, y; ∑_{i=1}^p λ_i∇(f_i(y)/g_i(y)) + ∑_{j=1}^m u_j∇h_j(y) + ∑_{k=1}^q v_k∇l_k(y))
+ (∑_{i=1}^p λ_iρ_i + ∑_{j=1}^m u_jρ_j′ + ∑_{k=1}^q v_kρ_k″) d²(x, y).  (68)

Since ∑_{i=1}^p λ_i∇(f_i(y)/g_i(y)) + ∑_{j=1}^m u_j∇h_j(y) + ∑_{k=1}^q v_k∇l_k(y) = 0, we have

∑_{i=1}^p λ_i(f_i(x)/g_i(x)) − ∑_{i=1}^p λ_i(f_i(y)/g_i(y))
≧ uᵀh(y) − uᵀh(x) + vᵀl(y) − vᵀl(x) + (∑_{i=1}^p λ_iρ_i + ∑_{j=1}^m u_jρ_j′ + ∑_{k=1}^q v_kρ_k″) d²(x, y).  (69)

Since h_j(x) ≦ 0 (j = 1, 2, …, m), l_k(x) = 0 (k = 1, 2, …, q), uᵀh(y) ≧ 0, vᵀl(y) = 0, and ∑_{i=1}^p λ_iρ_i + ∑_{j=1}^m u_jρ_j′ + ∑_{k=1}^q v_kρ_k″ ≧ 0, we have ∑_{i=1}^p λ_i(f_i(x)/g_i(x)) ≧ ∑_{i=1}^p λ_i(f_i(y)/g_i(y)); that is, λᵀ(f(x)/g(x)) ≧ λᵀ(f(y)/g(y)). The proof is complete.

Acknowledgments

This work has been supported by the young and middle-aged leader scientific research foundation of Chengdu University of Information Technology (no. J201218) and the talent introduction foundation of Chengdu University of Information Technology (no. KYTZ201203).
References

[1] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, Princeton, NJ, USA, 1944.
[2] V. Preda, "On efficiency and duality for multiobjective programs," Journal of Mathematical Analysis and Applications, vol. 166, no. 2, pp. 365–377, 1992.
[3] T. R. Gulati and M. A. Islam, "Sufficiency and duality in multiobjective programming involving generalized F-convex functions," Journal of Mathematical Analysis and Applications, vol. 183, no. 1, pp. 181–195, 1994.
[4] J. P. Vial, "Strong and weak convexity of sets and functions," Mathematics of Operations Research, vol. 8, no. 2, pp. 231–259, 1983.
[5] Z. A. Liang, H. X. Huang, and P. M. Pardalos, "Optimality conditions and duality for a class of nonlinear fractional programming problems," Journal of Optimization Theory and Applications, vol. 110, no. 3, pp. 611–619, 2001.
[6] Z. A. Liang, H. X. Huang, and P. M. Pardalos, "Efficiency conditions and duality for a class of multiobjective fractional programming problems," Journal of Global Optimization, vol. 27, no. 4, pp. 447–471, 2003.
[7] T. Weir and B. Mond, "Generalised convexity and duality in multiple objective programming," Bulletin of the Australian Mathematical Society, vol. 39, no. 2, pp. 287–299, 1989.
[8] T. Weir, "A duality theorem for a multiple objective fractional optimization problem," Bulletin of the Australian Mathematical Society, vol. 34, no. 3, pp. 415–425, 1986.
[9] V. Jeyakumar and B. Mond, "On generalised convex mathematical programming," Journal of the Australian Mathematical Society, Series B, vol. 34, no. 1, pp. 43–53, 1992.
[10] R. R. Egudo, "Efficiency and generalized convex duality for multiobjective programs," Journal of Mathematical Analysis and Applications, vol. 138, no. 1, pp. 84–94, 1989.
[11] A. Cambini and L. Martein, Generalized Convexity and Optimization, Springer-Verlag, Berlin, Germany, 2008.
[12] L. Cuoyun and D. Jiali, Methods and Theories of Multi-Objective Optimization, Jilin Education Press, Changchun, China, 1992.
[13] W. Zezheng and Z. Fenghua, "Optimality and duality for a class of nonlinear fractional programming problems," Journal of Sichuan Normal University, vol. 30, no. 5, pp. 594–597, 2007.
[14] O. L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, NY, USA, 1969.