J Optim Theory Appl (2008) 139: 295–313 DOI 10.1007/s10957-008-9410-6

Minimax Fractional Programming for n-Set Functions and Mixed-Type Duality under Generalized Invexity

H.C. Lai · T.Y. Huang

Published online: 1 May 2008 © Springer Science+Business Media, LLC 2008

Abstract We establish sufficient optimality conditions for a minimax programming problem involving p fractional n-set functions under generalized invexity. Using incomplete Lagrange duality, we formulate a mixed-type dual problem which unifies the Wolfe type dual and the Mond-Weir type dual for fractional n-set functions under generalized invexity. Furthermore, we establish three duality theorems (weak, strong, and strict converse duality) and prove that the optimal values of the primal problem and the mixed-type dual problem have no duality gap under the extra assumptions in this framework.

Keywords Minimax fractional programming · Partially differentiable n-set function · Mixed-type dual · Duality theorems · Quasi/Pseudo-invex set function

Communicated by P.L. Yu.
This research was partly supported by the National Science Council, NSC 94-2115-M-033-003, Taiwan.
H.C. Lai (✉) · T.Y. Huang
Department of Applied Mathematics, Chung-Yuan Christian University, Chung Li 320, Taiwan
e-mail: [email protected]

1 Introduction

To verify that a feasible solution of a programming problem is optimal, many efforts have been made in the literature. In particular, various types of sufficient optimality conditions are obtained from the converse of necessary optimality conditions by applying extra assumptions, such as convexity, generalized convexity, invexity, or generalized invexity [1-9]. They include programming problems for functions of real variables in R^n [2, 4, 6-9] and functions of set variables Ω in a σ-algebra A [1, 3, 5]. Furthermore, by using the sufficient optimality conditions, various types of dual models have been formulated, and three types of duality theorems, including the


weak, strong, and strict converse duality, have been established. In particular, many researchers deal with duality for the Wolfe type dual [10] and the Mond-Weir type dual [11] for minimax fractional programming problems [2-6, 9]. Since there are various types of duality, efforts have been made to integrate some known duality models [9, 12-15]; however, a general dual formulation is not available. To combine the Wolfe type dual and the Mond-Weir type dual, a mixed-type dual was investigated by Lai [9] for the minimax fractional variational problem in R^n under generalized invexity. The generalized invexity of Lai and Liu [6] and Chen and Lai [7] is constructed for real functions of real variables. In this paper, we focus on programming problems involving n-set functions. We consider a minimax fractional programming problem of the form

  (P)  min_{S∈P} max_{1≤i≤r} f_i(S)/g_i(S)  (≡ φ(S)),
       s.t.  S ∈ P ≡ { S ∈ Γ^n | h_j(S) ≤ 0, j ∈ m = {1, …, m} },

where Γ^n is the n-fold product of a σ-algebra Γ of subsets of a given measure space (X, Γ, μ); f_i, g_i, i ∈ r = {1, …, r}, and h_j, j ∈ m, are differentiable n-set functions defined on Γ^n. For each i ∈ r, we may assume throughout, without loss of generality, that g_i(S) > 0 and f_i(S) ≥ 0 for all S ∈ P, the set of all feasible solutions of (P).

This minimax fractional programming problem (P) is of the same type as that given in [2, 5]. In [2], Lai, Liu, and Tanaka treated minimax fractional programming for real-valued convex continuous functions in R^n; this was then extended to set functions in [5], where Lai and Liu investigated minimax fractional programming under generalized (F, ρ, θ)-convexity.

This paper is organized as follows. In Sect. 2, we recall a number of basic definitions for set functions [1, 3, 5] and [14-23]. The definition of generalized (ρ, θ)-invexity is given in Sect. 3; it differs from the generalized (F, ρ, θ)-convexity in [5]. In Sect. 4, we describe the necessary optimality conditions. The duality models, including the Wolfe type, the Mond-Weir type [10, 11], and the mixed type for n-set functions, are described in Sect. 5. Sections 6 and 7 are the main parts of this paper. The sufficient optimality conditions are given in Sect. 6; they are based on the necessary optimality conditions together with the generalized (ρ, θ)-invexity assumptions. Finally, we establish three duality theorems (weak, strong, and strict converse duality) and prove that the optimal values of the primal problem and the mixed-type dual problem have no duality gap under the extra assumptions in this framework.

2 Preliminaries

Recall that (X, Γ, μ) is a finite atomless measure space with L_1(X, Γ, μ) separable. Thus, for h ∈ L_1(X, Γ, μ) and S ∈ Γ with characteristic function χ_S ∈ L_∞(X, Γ, μ), the dual pairing ⟨h, χ_S⟩ stands for the integral ∫_S h dμ. The notions of differentiability and convexity for set functions were originally introduced by Morris [23]. For various properties and further developments of set functions, one can consult the papers [1, 3, 14, 18-21, 23].


Let Γ^n be the n-fold product of a σ-algebra Γ of subsets of the measure space (X, Γ, μ). Define the function d on Γ^n × Γ^n by

  d(R, S) = ( Σ_{k=1}^{n} [μ(R_k Δ S_k)]^2 )^{1/2},

for R = (R_1, …, R_n) and S = (S_1, …, S_n) in Γ^n, where Δ denotes the symmetric difference of sets. This function d is a pseudometric on Γ^n.

A set function F : Γ → R is said to be differentiable at U_0 ∈ Γ if there exist an element DF_{U_0} ∈ L_1(X, Γ, μ) and a set function φ : Γ × Γ → R such that

  F(U) = F(U_0) + ⟨DF_{U_0}, χ_U − χ_{U_0}⟩ + φ(U, U_0),  for any U ∈ Γ,

where φ(U, U_0) is o(ρ(U, U_0)); that is,

  lim_{ρ(U,U_0) → 0} φ(U, U_0)/ρ(U, U_0) = 0,

with ρ(U, U_0) = μ(U Δ U_0), which is a pseudometric on Γ.

An n-set function Q : Γ^n → R is said to have a partial derivative at S^0 = (S_1^0, …, S_n^0) ∈ Γ^n with respect to (w.r.t., for short) its kth argument if the function

  F(S_k) = Q(S_1^0, …, S_{k−1}^0, S_k, S_{k+1}^0, …, S_n^0)

has the derivative DF_{S_k^0}, k ∈ n = {1, 2, …, n}. In this case, the kth partial derivative of Q at S^0 is defined by

  Q_k(S^0) = D_k Q_{S^0} = DF_{S_k^0}  (∈ L(L_∞) = L_1),  k ∈ n,

which is regarded as a continuous linear operator on L_∞. An n-set function Q : Γ^n → R is said to be differentiable at S^0 ∈ Γ^n if each partial derivative D_k Q_{S^0}, k ∈ n, exists and there is φ : Γ^n × Γ^n → R with φ(S, S^0) = o(d(S, S^0)) such that

  Q(S) = Q(S^0) + Σ_{k=1}^{n} ⟨Q_k(S^0), χ_{S_k} − χ_{S_k^0}⟩ + φ(S, S^0),

for any S = (S_1, …, S_n) ∈ Γ^n. The differential of the n-set function Q at S ∈ Γ^n is often denoted by

  DQ_S = (D_1 Q_S, …, D_n Q_S) ∈ L_1^n(X, Γ, μ),

or equivalently by

  Q′(S) = (Q_1(S), …, Q_n(S)) ∈ L_1^n(X, Γ, μ).
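As a concrete illustration (our own toy construction, not from the paper), the pseudometric d can be computed on a small finite measure space, with each coordinate set represented as a Python set; the weights mu and the sets R, S below are hypothetical:

```python
import math

mu = {0: 0.5, 1: 1.0, 2: 0.25, 3: 1.5, 4: 0.75, 5: 1.0}   # a finite measure on X = {0,...,5}

def measure(A):
    """mu(A) for a subset A of X."""
    return sum(mu[x] for x in A)

def d(R, S):
    """d(R, S) = ( sum_k [mu(R_k symdiff S_k)]^2 )^(1/2) for n-tuples R, S of sets."""
    return math.sqrt(sum(measure(Rk ^ Sk) ** 2 for Rk, Sk in zip(R, S)))

R = ({0, 1}, {2, 3})
S = ({1}, {3, 4})
# R_1 symdiff S_1 = {0} (measure 0.5); R_2 symdiff S_2 = {2, 4} (measure 1.0)
print(d(R, S))   # sqrt(0.5**2 + 1.0**2) = 1.118...
print(d(R, R))   # 0.0, consistent with d being a pseudometric
```

Note that d(R, S) = 0 does not force R = S (sets may differ by a μ-null set), which is exactly why d is only a pseudometric.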


3 Generalized Invexity

In order to establish the sufficient optimality conditions for a mixed-type dual involving n-set functions, we introduce (ρ, θ)-invexity for n-set functions as follows. Let θ : Γ^n × Γ^n → R_+ be a positive function such that

  θ(S, S^0) = 0 only if S = S^0, and S → S^0 in the weak* topology of Γ^n implies θ(S, S^0) → 0,

where w* stands for the weak* topology in L_∞^n (≈ Γ^n) = (L_1^n)* and S = (S_1, …, S_n), S^0 = (S_1^0, …, S_n^0) ∈ Γ^n. In particular, θ defined by

  θ(S, S^0) = ( Σ_{k=1}^{n} |μ(S_k Δ S_k^0)|^2 )^{1/2}

is a pseudometric on Γ^n.

Let ρ ∈ R. We define (ρ, θ)-invexity for a differentiable n-set function Q : Γ^n → R, mutatis mutandis, as for the integral functional J defined in [6] (or [7]) for generalized invexity. Assume that there is a vector function η : Γ^n × Γ^n → L_∞^n such that η(R, S) ∈ L_∞^n for R, S in Γ^n. A particular case of η is η(R, S) = χ_R − χ_S for χ_R, χ_S in L_∞^n. Note that the vector-valued function η differs from those in many previous papers. We will employ this generalized invexity to prove the existence of optimal solutions of (P), and then establish the duality theorems for (MD), a mixed-type dual problem for n-set functions, relative to the primal problem (P). With these preparations, we can define (ρ, θ)-invexity for n-set functions as follows.

Definition 3.1 Let Q : Γ^n → R be a differentiable n-set function, and let the positive function θ and the vector-valued function η be given as above. Then, for ρ ∈ R, we say that:

(i) Q is (ρ, θ)-invex (strictly) at S^0 ∈ Γ^n with respect to (w.r.t., for short) η if, for any S ∈ Γ^n,

  Q(S) − Q(S^0) ≥ (>)  Q′(S^0) · η(S, S^0) + ρθ(S, S^0).

(ii) Q is (ρ, θ)-pseudoinvex (strictly) at S^0 ∈ Γ^n w.r.t. η if, for any S ∈ Γ^n,

  Q′(S^0) · η(S, S^0) + ρθ(S, S^0) ≥ 0  ⟹  Q(S) ≥ Q(S^0)  (Q(S) > Q(S^0)).

(iii) Q is (ρ, θ)-quasiinvex (prestrictly) at S^0 ∈ Γ^n w.r.t. η if, for any S ∈ Γ^n,

  Q(S) ≤ Q(S^0)  (Q(S) < Q(S^0))  ⟹  Q′(S^0) · η(S, S^0) + ρθ(S, S^0) ≤ 0.
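To make Definition 3.1 concrete, the following sketch (our own toy example on a hypothetical finite measure space, not from the paper) verifies that a linear set function Q(S) = ⟨h, χ_S⟩ is (0, θ)-invex w.r.t. η(S, S^0) = χ_S − χ_{S^0}: for a linear Q the defining inequality holds with equality for every pair of sets.

```python
from itertools import chain, combinations

mu = {0: 1.0, 1: 0.5, 2: 2.0}     # a finite measure on X = {0, 1, 2} (our choice)
h = {0: 3.0, 1: -1.0, 2: 0.5}     # the derivative DQ_{S0} = h in L1, independent of S0

def Q(S):
    """Q(S) = <h, chi_S> = sum_{x in S} h(x) mu(x), a linear set function."""
    return sum(h[x] * mu[x] for x in S)

def pairing(S, S0):
    """<h, chi_S - chi_{S0}>, i.e. Q'(S0) . eta(S, S0) with eta = chi_S - chi_{S0}."""
    return sum(h[x] * mu[x] for x in S) - sum(h[x] * mu[x] for x in S0)

subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(mu, k) for k in range(len(mu) + 1))]

# (rho, theta)-invexity with rho = 0: Q(S) - Q(S0) >= Q'(S0) . eta(S, S0),
# which holds with equality for every pair, since Q is linear in chi_S.
assert all(Q(S) - Q(S0) == pairing(S, S0) for S in subsets for S0 in subsets)
print("linear set functions are (0, theta)-invex w.r.t. eta = chi_S - chi_{S0}")
```

Nonlinear n-set functions are where the ρθ(S, S^0) slack and the pseudo/quasi variants of the definition become non-trivial.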


It is remarkable that, in the above definitions, Q′(S) ∈ L(L_∞^n) ⊂ L_1^n is a continuous linear functional on L_∞^n. The vector function η : Γ^n × Γ^n → L_∞^n can be written as

  η(S, S^0) = (η_1, …, η_n)(S, S^0),  for S, S^0 ∈ Γ^n,

where η_k(S, S^0) = η_k(S_k, S_k^0) = Proj_k η(S, S^0) ∈ L_∞, k ∈ n = {1, …, n}. A particular case of the vector function η is η(S, S^0) = χ_S − χ_{S^0}, that is,

  η(S, S^0) = (η_1, …, η_n)(S, S^0) = (χ_{S_1} − χ_{S_1^0}, …, χ_{S_n} − χ_{S_n^0}) ∈ L_∞^n,

where η_k(S, S^0) = χ_{S_k} − χ_{S_k^0} ∈ L_∞. Thus, if Q : Γ^n → R is differentiable at S^0, there exists a vector function η on Γ^n × Γ^n into L_∞^n such that

  Q(S) − Q(S^0) = Σ_{k=1}^{n} ⟨D_k Q(S^0), η_k(S, S^0)⟩ + ρθ(S, S^0)
               = ⟨Q′(S^0), η(S, S^0)⟩ + ρθ(S, S^0),

where ρ ∈ R and θ : Γ^n × Γ^n → R_+ has the property that θ(S, S^0) = 0 only if S = S^0. In the following, we often use the notation

  Q′(S^0) · η(S, S^0) = ⟨Q′(S^0), η(S, S^0)⟩ = Σ_{k=1}^{n} ⟨D_k Q(S^0), η_k(S, S^0)⟩.

4 Necessary Optimality Conditions

For simplicity, we use the following notation. Let

  I = { u ∈ R_+^r | Σ_{i=1}^{r} u_i = 1 } ⊂ R_+^r.

Denote

  f(Ω, u) = Σ_{i=1}^{r} u_i f_i(Ω) = ⟨f(Ω), u⟩_r,
  g(Ω, u) = Σ_{i=1}^{r} u_i g_i(Ω) = ⟨g(Ω), u⟩_r,          (1)
  h(Ω, v) = Σ_{j=1}^{m} v_j h_j(Ω) = ⟨h(Ω), v⟩_m.


Then,

  Df(Ω, u) = (D_1 f(Ω, u), …, D_n f(Ω, u)) = ( ⟨D_k f(Ω), u⟩_r )_{k=1}^{n}
           = Σ_{i=1}^{r} u_i Df_i(Ω) = ( Σ_{i=1}^{r} u_i D_k f_i(Ω) )_{k=1}^{n}.

It is easy to see that problem (P) is equivalent to the following parametric problem:

  (P̃)  min λ,
       s.t.  f_i(S) − λ g_i(S) ≤ 0,  i ∈ r,
             h_j(S) ≤ 0,  j ∈ m,
             S ∈ Γ^n.

Hence, if f_i, g_i, i ∈ r, and h_j, j ∈ m, are differentiable at S*, then one obtains the Kuhn-Tucker type necessary conditions for problem (P) (see Corley [17]), as in the following theorem.

Theorem 4.1 (Necessary Optimality Conditions) Let S* ∈ Γ^n be a regular solution (see Corley [17]) of (P) with optimal value λ* ∈ R. Suppose that the set functions f_i, g_i, i ∈ r, and h_j, j ∈ m, are differentiable at S*. Then, there exist multipliers u* ∈ I = {u ∈ R_+^r | Σ_{i=1}^{r} u_i = 1} and v* ∈ R_+^m such that, for S = (S_1, …, S_n) ∈ Γ^n and for each k ∈ n, we have

  ⟨ Σ_{i=1}^{r} u_i*[D_k f_i(S*) − λ* D_k g_i(S*)] + Σ_{j=1}^{m} v_j* D_k h_j(S*), η_k(S, S*) ⟩ ≥ 0,   (2)

  u_i*[f_i(S*) − λ* g_i(S*)] = 0,  for all i ∈ r,   (3)

  v_j* h_j(S*) = 0,  for all j ∈ m.   (4)
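The equivalence between (P) and the parametric problem (P̃) can be checked numerically on a toy instance. In the sketch below (our own hypothetical data: a finite feasible set standing in for P, with f_i, g_i > 0 tabulated per solution), the optimal value λ* of (P) is exactly the smallest λ for which the constraints f_i(S) − λ g_i(S) ≤ 0 of (P̃) are consistent for some feasible S:

```python
# Stand-ins for feasible n-tuples S in P, with r = 2 ratios each (hypothetical values).
feasible = ["A", "B", "C"]
f = {"A": (3.0, 1.0), "B": (2.0, 2.0), "C": (4.0, 0.5)}   # f_i(S), i = 1, 2
g = {"A": (2.0, 1.0), "B": (4.0, 5.0), "C": (2.0, 1.0)}   # g_i(S) > 0

def phi(S):
    """phi(S) = max_i f_i(S)/g_i(S), the objective of (P)."""
    return max(fi / gi for fi, gi in zip(f[S], g[S]))

lam_star = min(phi(S) for S in feasible)   # optimal value of (P)

def feasible_for(lam):
    """Is (P~) feasible at level lam: f_i(S) - lam*g_i(S) <= 0 for all i, some S?"""
    return any(all(fi - lam * gi <= 0 for fi, gi in zip(f[S], g[S]))
               for S in feasible)

# lam_star is feasible for (P~), but no strictly smaller lambda is.
assert feasible_for(lam_star) and not feasible_for(lam_star - 1e-9)
print(lam_star)   # 0.5, attained at S = "B"
```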

For convenience, we prove the following proposition.

Proposition 4.1 The objective function of problem (P) satisfies

  φ(S) ≡ max_{1≤i≤r} f_i(S)/g_i(S) = max_{u∈I} [ Σ_{i=1}^{r} u_i f_i(S) / Σ_{i=1}^{r} u_i g_i(S) ] = max_{u∈I} f(S, u)/g(S, u).

Proof For each S ∈ Γ^n and i ∈ r, we have

  f_i(S)/g_i(S)
    = [ 0·f_1(S) + ⋯ + 0·f_{i−1}(S) + 1·f_i(S) + 0·f_{i+1}(S) + ⋯ + 0·f_r(S) ]
      / [ 0·g_1(S) + ⋯ + 0·g_{i−1}(S) + 1·g_i(S) + 0·g_{i+1}(S) + ⋯ + 0·g_r(S) ]
    ≤ max_{u∈I} [ Σ_{i=1}^{r} u_i f_i(S) / Σ_{i=1}^{r} u_i g_i(S) ].


Hence,

  φ(S) = max_{1≤i≤r} f_i(S)/g_i(S) ≤ max_{u∈I} [ Σ_{i=1}^{r} u_i f_i(S) / Σ_{i=1}^{r} u_i g_i(S) ].   (5)

On the other hand,

  (u_1 f_1(S) + ⋯ + u_r f_r(S)) / (u_1 g_1(S) + ⋯ + u_r g_r(S))
    = { u_1 g_1(S)[f_1(S)/g_1(S)] + u_2 g_2(S)[f_2(S)/g_2(S)] + ⋯ + u_r g_r(S)[f_r(S)/g_r(S)] }
      / { u_1 g_1(S) + u_2 g_2(S) + ⋯ + u_r g_r(S) }
    ≤ { (u_1 g_1(S) + u_2 g_2(S) + ⋯ + u_r g_r(S)) / (u_1 g_1(S) + u_2 g_2(S) + ⋯ + u_r g_r(S)) } · max_{1≤i≤r} f_i(S)/g_i(S)
    = max_{1≤i≤r} f_i(S)/g_i(S).

It follows that

  max_{u∈I} [ Σ_{i=1}^{r} u_i f_i(S) / Σ_{i=1}^{r} u_i g_i(S) ] ≤ max_{1≤i≤r} f_i(S)/g_i(S) = φ(S).   (6)

By (5) and (6), we get the desired identity. □

From Proposition 4.1, if S* is (P)-optimal, then

  φ(S*) = Σ_{i=1}^{r} u_i* f_i(S*) / Σ_{i=1}^{r} u_i* g_i(S*) = λ*
        = ⟨f(S*), u*⟩_r / ⟨g(S*), u*⟩_r = f(S*, u*)/g(S*, u*),

where

  f(S*, u*) = ⟨f(S*), u*⟩_r = Σ_{i=1}^{r} u_i* f_i(S*)  and  g(S*, u*) = ⟨g(S*), u*⟩_r = Σ_{i=1}^{r} u_i* g_i(S*)

stand for the inner products in R^r. Substituting f(S*, u*)/g(S*, u*) for λ* in (2) gives, for each k ∈ n,

  ⟨ g(S*, u*)[D_k f(S*, u*) + D_k h(S*, v*)] − f(S*, u*) D_k g(S*, u*), η_k(S, S*) ⟩ ≥ 0.

It follows that Theorem 4.1 can be restated as the following theorem.

Theorem 4.2 (Necessary Optimality Conditions) Let S* be a regular solution of (P). Then, there exist multipliers u* ∈ I and v* ∈ R_+^m such that, for all S =


(S_1, …, S_n) ∈ Γ^n, we have

  { g(S*, u*)[Df(S*, u*) + Dh(S*, v*)] − f(S*, u*) Dg(S*, u*) } · η(S, S*) ≥ 0,   (7)

  φ(S*) ≡ f(S*, u*)/g(S*, u*) = max_{1≤i≤r} f_i(S*)/g_i(S*),   (8)

  h(S*, v*) = 0.   (9)
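The identity of Proposition 4.1 can also be sampled numerically. In this sketch (hypothetical values f_i(S), g_i(S) of our own choosing for a fixed S, with r = 3), weighted ratios over random points of the simplex I never exceed the maximum individual ratio, which is attained at the vertex u = e_i with i = argmax f_i/g_i:

```python
import random

fS = [3.0, 1.0, 4.0]   # f_i(S), i = 1..3 (hypothetical)
gS = [2.0, 1.0, 8.0]   # g_i(S) > 0

lhs = max(fi / gi for fi, gi in zip(fS, gS))   # max_i f_i/g_i = 1.5 at i = 1

random.seed(0)
best = 0.0
for _ in range(20000):
    # A random point of the simplex I = {u >= 0 : u_1 + u_2 + u_3 = 1}.
    w = sorted([random.random(), random.random()])
    u = (w[0], w[1] - w[0], 1.0 - w[1])
    num = sum(ui * fi for ui, fi in zip(u, fS))
    den = sum(ui * gi for ui, gi in zip(u, gS))
    best = max(best, num / den)

assert best <= lhs + 1e-12   # the weighted ratio never exceeds the max ratio
print(lhs)                   # 1.5, the common value of both sides
```

This is precisely the mediant inequality used in the proof: a weighted average of the ratios f_i/g_i (with weights u_i g_i) cannot exceed the largest of them.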

5 Duality Models

Employing the necessary optimality conditions for differentiable n-set functions, we construct two parameter-free duality models, the Wolfe type dual (cf. [10]) and the Mond-Weir type dual (cf. [11]), in the following forms:

  (WD)  max [ Σ_{i=1}^{r} u_i f_i(Ω) + Σ_{j=1}^{m} v_j h_j(Ω) ] / Σ_{i=1}^{r} u_i g_i(Ω)
            ( ≡ [f(Ω, u) + h(Ω, v)]/g(Ω, u) = [⟨f(Ω), u⟩_r + ⟨h(Ω), v⟩_m]/⟨g(Ω), u⟩_r ),
        s.t.  Ω = (Ω_1, …, Ω_n) ∈ Γ^n, u ∈ I, v ∈ R_+^m, and

          { g(Ω, u)[Df(Ω, u) + Dh(Ω, v)] − [f(Ω, u) + h(Ω, v)] Dg(Ω, u) } · η(S, Ω) ≥ 0,
            for S = (S_1, …, S_n) ∈ Γ^n;   (10)

  (MWD)  max Σ_{i=1}^{r} u_i f_i(Ω) / Σ_{i=1}^{r} u_i g_i(Ω)
             ( ≡ f(Ω, u)/g(Ω, u) = ⟨f(Ω), u⟩_r/⟨g(Ω), u⟩_r ),
         s.t.  Ω = (Ω_1, …, Ω_n) ∈ Γ^n, u ∈ I, v ∈ R_+^m,

           { g(Ω, u)[Df(Ω, u) + Dh(Ω, v)] − f(Ω, u) Dg(Ω, u) } · η(S, Ω) ≥ 0,
             for S = (S_1, …, S_n) ∈ Γ^n,   (11)

         and

           h(Ω, v) = ⟨h(Ω), v⟩_m ≥ 0.   (12)

In the Wolfe type dual, the numerator of the objective of (WD) contains all the constraint functions of problem (P), while the objective of the Mond-Weir type dual (MWD) contains no constraint function of (P). Thus, a question arises: can we formulate a mixed-type dual (MD), with an incomplete Lagrangian in the numerator, which includes the Wolfe type dual and the Mond-Weir type dual as special cases? For this purpose, we constitute a mixed-type dual model relative to problem (P) as follows.


Let η : Γ^n × Γ^n → L_∞^n be a vector function such that, for S = (S_1, …, S_n) and Ω = (Ω_1, …, Ω_n) ∈ Γ^n,

  η(S, Ω) = (η_1, …, η_n)(S, Ω) = (η_1(S_1, Ω_1), …, η_n(S_n, Ω_n)) ∈ L_∞^n.

For k ∈ n, η_k : Γ × Γ → L_∞ is given by η_k(S, Ω) = η_k(S_k, Ω_k) ∈ L_∞. In particular, η can be chosen as η_k(S_k, Ω_k) = χ_{S_k} − χ_{Ω_k} ∈ L_∞, k ∈ n.

We partition the index set M = {1, 2, …, m} of the constraint functions h_j, j ∈ M, of (P) into M_0, M_1, …, M_k with ∪_{α=0}^{k} M_α = M and M_α ∩ M_β = ∅ for α ≠ β. Then, we consider

  (MD)  max [ Σ_{i=1}^{r} u_i f_i(Ω) + Σ_{j∈M_0} v_j h_j(Ω) ] / Σ_{i=1}^{r} u_i g_i(Ω)
            ( ≡ [f(Ω, u) + h_{M_0}(Ω, v)]/g(Ω, u) ),
        s.t.  Ω = (Ω_1, …, Ω_n) ∈ Γ^n, u ∈ I, v ∈ R_+^m,

          ⟨ g(Ω, u)[Df(Ω, u) + Dh(Ω, v)] − [f(Ω, u) + h_{M_0}(Ω, v)] Dg(Ω, u), η(S, Ω) ⟩ ≥ 0,
            for S = (S_1, …, S_n) ∈ Γ^n,   (13)

        and

          h_{M_α}(Ω, v) = Σ_{j∈M_α} v_j h_j(Ω) ≥ 0,  for α = 0, 1, …, k.   (14)

In problem (MD), let the index set M of the constraints of (P) be separated into two parts M_0 and M_1, that is, M = M_0 ∪ M_1 (with M_α = ∅ for α = 2, …, k); then, the mixed-type dual problem (MD) reduces to

  (MD) = (WD),  when M_0 = M and M_1 = ∅,

and

  (MD) = (MWD),  when M_0 = ∅ and M_1 = M.

This shows that the Wolfe type dual (WD) and the Mond-Weir type dual (MWD) are special cases of the mixed-type dual (MD).
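The reduction above is pure bookkeeping on the partition of M, which the following sketch makes explicit (our own notation: scalar stand-ins for f(Ω, u), g(Ω, u), and h_j(Ω); it illustrates only the objective values, not the feasibility constraints (13)-(14)):

```python
def md_objective(f_val, g_val, h_vals, v, M0):
    """(MD) objective [f + h_{M0}]/g, keeping only j in M0 in the numerator."""
    h_M0 = sum(v[j] * h_vals[j] for j in M0)
    return (f_val + h_M0) / g_val

f_val, g_val = 3.0, 2.0
h_vals = [-1.0, 0.5, -0.25]   # h_j(Omega), j in M = {0, 1, 2} (hypothetical)
v = [1.0, 2.0, 4.0]           # multipliers v in R^m_+

M = range(3)
wolfe = (f_val + sum(v[j] * h_vals[j] for j in M)) / g_val   # (WD) objective
mond_weir = f_val / g_val                                     # (MWD) objective

assert md_objective(f_val, g_val, h_vals, v, set(M)) == wolfe       # M0 = M
assert md_objective(f_val, g_val, h_vals, v, set()) == mond_weir    # M0 = empty
print(wolfe, mond_weir)
```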

6 Sufficient Optimality Conditions

Let S* ∈ Γ^n be (P)-optimal with optimal value λ* as in Theorems 4.1 and 4.2:

  λ* = φ(S*) = f(S*, u*)/g(S*, u*) = max_{1≤i≤r} f_i(S*)/g_i(S*).

The purpose of this paper is to establish three duality theorems (weak, strong, and strict converse duality) between the mixed-type dual (MD) and the primal problem (P), based on the sufficient optimality conditions. Since the existence


theorem of an optimal solution is regarded as the converse of the necessary optimality conditions under extra assumptions, there are various sufficient optimality theorems for (P) under generalized (ρ, θ)-invexities. Here, we establish four sufficient optimality conditions.

Theorem 6.1 (Sufficient Optimality Conditions) Let S* ∈ Γ^n be a feasible solution of (P). Suppose that there exist u* ∈ I ⊂ R_+^r and v* ∈ R_+^m satisfying the conditions of Theorem 4.2, and that there is a vector function η : Γ^n × Γ^n → L_∞^n such that, for any S ∈ Γ^n, expression (7) holds in the form

  ⟨ g(S*, u*)[Df(S*, u*) + Dh(S*, v*)] − f(S*, u*) Dg(S*, u*), η(S, S*) ⟩ ≥ 0.   (15)

Denote A(•) = g(S*, u*) f(•, u*) − f(S*, u*) g(•, u*) and H(•) = h(•, v*). Assume further that any one of the following conditions (i)-(iv) holds:

(i) A is (ρ_1, θ)-pseudoinvex and H is (ρ_2, θ)-quasiinvex at S* w.r.t. (the same) η, with ρ_1 + g(S*, u*) ρ_2 ≥ 0.
(ii) A is strictly (ρ_1, θ)-pseudoinvex and H is (ρ_2, θ)-quasiinvex at S* w.r.t. η, with ρ_1 + g(S*, u*) ρ_2 > 0.
(iii) A + g(S*, u*) H is (ρ, θ)-pseudoinvex at S* w.r.t. η and ρ ≥ 0.
(iv) A + g(S*, u*) H is prestrictly (ρ, θ)-quasiinvex at S* w.r.t. η and ρ > 0.

Then, S* is an optimal solution of (P).

Proof Suppose, on the contrary, that S* is not an optimal solution of (P). Since φ(S) ≡ max_{1≤i≤r} f_i(S)/g_i(S), there exists a feasible solution S̃ ∈ Γ^n such that φ(S̃) < φ(S*). By Proposition 4.1 and (8) of Theorem 4.2, we have

  f(S̃, u*)/g(S̃, u*) ≤ max_{u∈I} f(S̃, u)/g(S̃, u) ≡ φ(S̃) < φ(S*)
    = max_{1≤i≤r} f_i(S*)/g_i(S*) = f(S*, u*)/g(S*, u*).

Thus,

  f(S̃, u*)/g(S̃, u*) < f(S*, u*)/g(S*, u*),
  f(S̃, u*) g(S*, u*) − f(S*, u*) g(S̃, u*) < 0.

It follows that A(S̃) ≡ f(S̃, u*) g(S*, u*) − f(S*, u*) g(S̃, u*) < 0, and then

  A(S̃) < 0 = f(S*, u*) g(S*, u*) − f(S*, u*) g(S*, u*) = A(S*).   (16)

By (9) of Theorem 4.2,

  h(S*, v*) = 0 = ⟨h(S*), v*⟩_M = Σ_{j∈M} v_j* h_j(S*),


and h_j(S) ≤ 0, j ∈ M, for any S ∈ P; it follows that

  h(S̃, v*) = ⟨h(S̃), v*⟩_M = Σ_{j∈M} v_j* h_j(S̃) ≤ 0 = h(S*, v*).

Hence, we obtain

  H(S̃) ≡ h(S̃, v*) ≤ 0 = h(S*, v*) ≡ H(S*).   (17)

(i) If condition (i) holds, then, as A(•) is (ρ_1, θ)-pseudoinvex, (16) yields

  A′(S*) · η(S̃, S*) < −ρ_1 θ(S̃, S*);

that is,

  { g(S*, u*) Df(S*, u*) − f(S*, u*) Dg(S*, u*) } · η(S̃, S*) < −ρ_1 θ(S̃, S*).

We then obtain

  ( Σ_{i=1}^{r} u_i* [ g(S*, u*) Df_i(S*) − f(S*, u*) Dg_i(S*) ] ) · η(S̃, S*) < −ρ_1 θ(S̃, S*).   (18)

Similarly, as H(•) is (ρ_2, θ)-quasiinvex at S* w.r.t. η, (17) gives

  H(S̃) ≤ H(S*)  ⟹  H′(S*) · η(S̃, S*) ≤ −ρ_2 θ(S̃, S*);

that is,

  [Dh(S*, v*)] · η(S̃, S*) ≤ −ρ_2 θ(S̃, S*).   (19)

Then, (18) + g(S*, u*) × (19) implies

  { g(S*, u*)[Df(S*, u*) + Dh(S*, v*)] − f(S*, u*) Dg(S*, u*) } · η(S̃, S*)
    < −( ρ_1 + g(S*, u*) ρ_2 ) θ(S̃, S*).   (20)

By hypothesis (15), for any S ∈ Γ^n,

  ⟨ g(S*, u*)[Df(S*, u*) + Dh(S*, v*)] − f(S*, u*) Dg(S*, u*), η(S, S*) ⟩ ≥ 0;

we then deduce from (20) that 0 < −[ρ_1 + g(S*, u*) ρ_2] θ(S̃, S*), which contradicts ρ_1 + g(S*, u*) ρ_2 ≥ 0 in condition (i) (since θ(S̃, S*) ∈ R_+). Hence, if (i) holds, S* is an optimal solution of (P).

(ii) If (ii) holds, the proof is the same as that of (i).

(iii) If (iii) holds, denote K(•) = A(•) + g(S*, u*) H(•), that is,

  K(•) = g(S*, u*)[f(•, u*) + h(•, v*)] − f(S*, u*) g(•, u*).

Since K(•) is (ρ, θ)-pseudoinvex at S* w.r.t. η, (16) and (17) give

  K(S̃) = A(S̃) + g(S*, u*) H(S̃) < A(S*) + g(S*, u*) H(S*) = K(S*)
    ⟹  K′(S*) · η(S̃, S*) < −ρθ(S̃, S*).


It follows that

  { g(S*, u*)[Df(S*, u*) + Dh(S*, v*)] − f(S*, u*) Dg(S*, u*) } · η(S̃, S*) < −ρθ(S̃, S*).

By (15), the above expression yields 0 < −ρθ(S̃, S*), which contradicts ρ ≥ 0 in condition (iii). Hence, if (iii) holds, S* is an optimal solution of (P).

(iv) If (iv) holds, the proof is the same as that of (iii). □

Remark 6.1 The sufficiency results for optimal solutions of (P) under generalized (ρ, θ)-invexity guarantee that the mixed-type dual problem (MD) has solutions corresponding to the optimal solutions of (P). The duality theorems for (MD) relative to problem (P) are discussed in the next section.

7 Duality Theorems

In this section, we establish the weak, strong, and strict converse duality theorems for (MD) with respect to the primal problem (P).

Theorem 7.1 (Weak Duality) Let S ∈ P and (Ω, u, v) ∈ MD be feasible solutions of (P) and (MD), respectively. Let

  B(•) = g(Ω, u)[f(•, u) + h_{M_0}(•, v)] − g(•, u)[f(Ω, u) + h_{M_0}(Ω, v)].   (21)

Suppose that any one of the following conditions holds:

(i) f(•, u) is (ρ_1, θ)-invex, −g(•, u) is (ρ_2, θ)-invex, and h_{M_α}(•, v), α = 0, 1, …, k, are (ρ_3^α, θ)-invex at Ω ∈ Γ^n w.r.t. the same η, with

  g(Ω, u)( ρ_1 + Σ_{α=0}^{k} ρ_3^α ) + [f(Ω, u) + h_{M_0}(Ω, v)] ρ_2 ≥ 0.

(ii) B(•) is (ρ_1, θ)-pseudoinvex and h_{M_α}(•, v), α = 0, 1, …, k, are (ρ_2^α, θ)-quasiinvex at Ω ∈ Γ^n w.r.t. η, with ρ_1 + g(Ω, u) Σ_{α=1}^{k} ρ_2^α ≥ 0.

(iii) B(•) is strictly (ρ_1, θ)-pseudoinvex and h_{M_α}(•, v), α = 0, 1, …, k, are (ρ_2^α, θ)-quasiinvex at Ω ∈ Γ^n w.r.t. η, with ρ_1 + g(Ω, u) Σ_{α=1}^{k} ρ_2^α > 0.

(iv) B(•) + g(Ω, u) Σ_{α=1}^{k} h_{M_α}(•, v) is (ρ, θ)-pseudoinvex at Ω ∈ Γ^n w.r.t. η, with ρ ≥ 0.

(v) B(•) + g(Ω, u) Σ_{α=1}^{k} h_{M_α}(•, v) is strictly (ρ, θ)-pseudoinvex at Ω ∈ Γ^n w.r.t. η, with ρ > 0.

Then,

  φ(S) ≥ [f(Ω, u) + h_{M_0}(Ω, v)]/g(Ω, u),  for each S ∈ P.


Proof Suppose, on the contrary, that there is a feasible solution S ∈ P such that

  φ(S) < [f(Ω, u) + h_{M_0}(Ω, v)]/g(Ω, u).

Since, by Proposition 4.1,

  φ(S) = max_{w∈I} f(S, w)/g(S, w) ≥ f(S, u)/g(S, u),

we obtain

  g(Ω, u) f(S, u) − g(S, u)[f(Ω, u) + h_{M_0}(Ω, v)] < 0.   (22)

By (14), h_{M_α}(Ω, v) ≥ 0 for α = 0, 1, …, k, and, as S is (P)-feasible, h_j(S) ≤ 0 for j ∈ m; it follows that

  h_{M_α}(S, v) ≤ 0 ≤ h_{M_α}(Ω, v),  α = 0, 1, …, k.   (23)

(i) If (i) holds, then, by the invexities, we have

  f(S, u) − f(Ω, u) ≥ Df(Ω, u) · η(S, Ω) + ρ_1 θ(S, Ω),   (24)

  −[g(S, u) − g(Ω, u)] ≥ −Dg(Ω, u) · η(S, Ω) + ρ_2 θ(S, Ω),   (25)

  h_{M_α}(S, v) − h_{M_α}(Ω, v) ≥ Dh_{M_α}(Ω, v) · η(S, Ω) + ρ_3^α θ(S, Ω),  α = 0, 1, …, k.   (26)

Since g(Ω, u) > 0 and f(Ω, u) + h_{M_0}(Ω, v) ≥ 0, we evaluate g(Ω, u) × (24) + [f(Ω, u) + h_{M_0}(Ω, v)] × (25) + g(Ω, u) × (26) (summed over α) to get

  g(Ω, u)[f(S, u) − f(Ω, u)] − [f(Ω, u) + h_{M_0}(Ω, v)][g(S, u) − g(Ω, u)]
    + g(Ω, u)[ Σ_{α=0}^{k} h_{M_α}(S, v) − Σ_{α=0}^{k} h_{M_α}(Ω, v) ]
  ≥ { g(Ω, u) Df(Ω, u) · η(S, Ω) + [f(Ω, u) + h_{M_0}(Ω, v)](−1) Dg(Ω, u) · η(S, Ω)
      + g(Ω, u) Dh(Ω, v) · η(S, Ω) }
    + { g(Ω, u) ρ_1 + [f(Ω, u) + h_{M_0}(Ω, v)] ρ_2 + g(Ω, u) Σ_{α=0}^{k} ρ_3^α } θ(S, Ω).   (27)
308

J Optim Theory Appl (2008) 139: 295–313

Claim 7.1 0 > g(Ω, u) × [f (S, u) − f (Ω, u)] − [f (Ω, u) + hM0 (Ω, v)] × [g(S, u) − g(Ω, u)]   k k   hMα (S, v) − hMα (Ω, v) . + g(Ω, u) × α=0

α=0

Indeed, g(Ω, u)[f (S, u) − f (Ω, u)] − [f (Ω, u) + hM0 (Ω, v)][g(S, u) − g(Ω, u)]  k  k   + g(Ω, u) hMα (S, v) − hMα (Ω, v) α=0

α=0

= g(Ω, u)f (S, u) − g(Ω, u)f (Ω, u) − g(S, u)[f (Ω, u) + hM0 (Ω, v)] + g(Ω, u)f (Ω, u) + g(Ω, u)hM0 (Ω, v) + g(Ω, u)   k k   hMα (S, v) − hMα (Ω, v) × α=0

α=0

= g(Ω, u)f (S, u) − g(S, u)[f (Ω, u) + hM0 (Ω, v)] + g(Ω, u)hM0 (Ω, v)   k k   hMα (S, v) − hMα (Ω, v) + g(Ω, u) α=0

α=0



< g(Ω, u)hM0 (Ω, v) + g(Ω, u)  = −g(Ω, u)  = −g(Ω, u)

k 

hMα (S, v) −

α=0 k  α=0 k 



k  α=0

 hMα (Ω, v) 

hMα (Ω, v) − hM0 (Ω, v) + g(Ω, u) 

k 

 hMα (S, v)

α=0

hMα (Ω, v) + g(Ω, u)h(S, v)

α=1

(since hj (S) ≤ 0, h(S, v) ≤ 0)  k   ≤ −g(Ω, u) hMα (Ω, v) ≤ 0. α=1

Claim 7.2 {g(Ω, u) × Df (Ω, u) · η(S, Ω) + [f (Ω, u) + hM0 (Ω, v)] × (−1)Dg(Ω, u) · η(S, Ω) + g(Ω, u) × Dh(Ω, v) · η(S, Ω)} ≥ 0. Indeed, by (13) and (14) in (MD), we get the inequality of Claim 7.2.

J Optim Theory Appl (2008) 139: 295–313

309

By Claim 7.1, Claim 7.2 and θ (S, Ω) ≥ 0, then (27) yields  0 > g(Ω, u) × ρ1 + [f (Ω, u) + hM0 (Ω, v)] × ρ2 + g(Ω, u) ×

k 

 ρ3α ,

α=0

which contradicts the inequality (i). So, if (i) holds, then   ϕ(S) ≥ f (Ω, u) + hM0 (Ω, v) /g(Ω, u) for any S ∈ P , and (Ω, u, v) ∈ MD . (ii). If (ii) holds, by (22),we have g(Ω, u)f (S, u) − g(S, u)[f (Ω, u) + hM0 (Ω, v)] < 0. Thus, by adding g(Ω, u)hM0 (S, v) to both sides of the above inequality, we have g(Ω, u)f (S, u) + g(Ω, u)hM0 (S, v) − g(S, u)[f (Ω, u) + hM0 (Ω, v)] < g(Ω, u)hM0 (S, v) ≤ 0,

if S ∈ P .

It follows that B(S) = g(Ω, u)[f (S, u) + hM0 (Ω, v)] − g(S, u)[f (Ω, u) + hM0 (Ω, v)] < 0 = B(Ω) and then (28)

B(S) < B(Ω). Now, if condition (ii) holds, B(•) is (ρ1 , θ )-pseudoinvex. Thus (28) implies that B (Ω) · η(S, Ω) < −ρ1 θ (S, Ω). That is, {g(Ω, u)[Df (Ω, u) + DhM0 (Ω, v)] − Dg(Ω, u)[f (Ω, u) + hM0 (Ω, v)]} · η(S, Ω) < −ρ1 θ (S, Ω).

(29)

Since hMα (•, u) for α = 0, 1, 2, . . . , k are (ρ2α , θ )-quasiinvex, the inequality (23) implies (by summing up from α = 1, . . . , k) that   k k   DhMα (Ω, v) · η(S, Ω) ≤ − ρ2α θ (S, Ω), (30) α=1

α=1

where  j ∈M0

vj Dhj (Ω) +

k  α=1

DhMα (Ω, v) =

k  α=0

DhMα (Ω, v) =

m  j =1

vj Dhj (Ω).

310

J Optim Theory Appl (2008) 139: 295–313

Then (29) + g(Ω, u) × (30) will imply that {g(Ω, u)[Df (Ω, u) + Dh(Ω, v)] − [f (Ω, u) + hM0 (Ω, v)]Dg(Ω, u)} · η(S, Ω)   k   < − ρ1 + g(Ω, u) ρ2α θ (S, Ω). α=1

By (13), we get 



0 < − ρ1 + g(Ω, u)

k 

 ρ2α

θ (S, Ω).

α=1

This contradicts assumption (ii). Therefore, ϕ(S) ≥ {f (Ω, u) + hM0 (Ω, v)}/g(Ω, u),

for any S ∈ P .

(iii). If (iii) holds, we can prove it by the same way as the proof in case (ii). (iv). Now, we evaluate (28) + g(Ω, u) × (23) by summing up for α = 1, 2, . . . , k; then,   k   k   hMα (S, v) < B(Ω) + g(Ω, u) hMα (Ω, v) . B(S) + g(Ω, u) α=1

If (iv) holds, B(•) + g(Ω, u) above inequality, we have

α=1

k

α=1 hMα (•, v)

is (ρ, θ )-pseudoinvex at Ω and, by the

{g(Ω, u)[Df (Ω, u) + Dh(Ω, v)] − [f (Ω, u) + hM0 (Ω, v)] · Dg(Ω, u)} · η(S, Ω) < −ρθ (S, Ω).

(31)

It follows from (13) of (MD) and (31) that 0 < −ρθ (S, Ω), and so ρ < 0, which contradicts (iv). Therefore, ϕ(S) ≥ {f (Ω, u) + hM0 (Ω, v)}/g(Ω, u),

for each S ∈ P .

(v). If (v) holds, we can prove it by the same way as the proof in case (iv).



Theorem 7.2 (Strong Duality) Let S* be a regular optimal solution of (P), and assume that the conditions of Theorem 7.1 hold for all feasible solutions of (MD). Then, there exist u* ∈ I and v* ∈ R_+^m such that (S*, u*, v*) is an optimal solution of (MD), and the optimal values of (P) and (MD) are equal; that is, min (P) = max (MD).

Proof Let S* be a regular optimal solution of (P); by Theorem 4.2, there exist u* ∈ I and v* ∈ R_+^m such that, for any S ∈ Γ^n,

  ⟨ g(S*, u*) Df(S*, u*) − f(S*, u*) Dg(S*, u*) + g(S*, u*) Dh(S*, v*), η(S, S*) ⟩ ≥ 0;   (32)
J Optim Theory Appl (2008) 139: 295–313

311

by (4) of Theorem 4.1, vj∗ hj (S ∗ ) = 0, for j ∈ m, we obtain hMα (S ∗ , v ∗ ) = 0,

α = 0, 1, . . . , k.

(33)

By (32) and (33), for any S ∈ Γ n , we have g(S ∗ , u∗ )[Df (S ∗ , u∗ ) + Dh(S ∗ , v ∗ )] − [f (S ∗ , u∗ ) + hM0 (S ∗ , v ∗ )]Dg(S ∗ , u∗ ), η(S, S ∗ ) =  g(S ∗ , u∗ )[Df (S ∗ , u∗ ) + Dh(S ∗ , v ∗ )] − [f (S ∗ , u∗ ) + 0]Dg(S ∗ , u∗ ), η(S, S ∗ ) = g(S ∗ , u∗ )[Df (S ∗ , u∗ ) + Dh(S ∗ , v ∗ )] − f (S ∗ , u∗ )Dg(S ∗ , u∗ ), η(S, S ∗ ) ≥ 0. By the above inequality, (S ∗ , u∗ , v ∗ ) is a feasible solution of (MD) and {f (S ∗ , u∗ ) + hM0 (S ∗ , v ∗ )}/g(S ∗ , u∗ ) = {f (S ∗ , u∗ ) + 0}/g(S ∗ , u∗ ) = f (S ∗ , u∗ )/g(S ∗ , u∗ ) = ϕ(S ∗ ). From Theorem 7.1, we obtain that (S ∗ , u∗ , v ∗ ) is an optimal solution of (MD) and min (P) = max (MD).  Theorem 7.3 (Strict Converse Duality) Let S ∗ and ( S, u, ˆ v) ˆ be optimal solutions of (P) and (MD) respectively. Suppose that the conditions of Theorem 7.2 are fulfilled; as in (21), let B(•) = g( S, u)[f ˆ (•, u) ˆ + hM0 (•, v)] ˆ − g(•, u)[f ˆ ( S, u) ˆ + hM0 ( S, v)]. ˆ Furthermore, let either one of the following conditions (i) and (ii) hold: (i) B(•) is strictly (ρ1 , θ )-pseudoinvex, hMα (•, u) are (ρ2α , θ )-quasiinvex, α =

0, 1, 2, . . . , k, w.r.t. η and ρ1 + g(Ω, u) kα=1 ρ2α > 0.

k (ii) B(•) + g(Ω, u) α=1 hMα (•, v) is strictly (ρ, θ )-pseudoinvex w.r.t. η and ρ > 0. S, that is,  S is also the optimal solution of (P), and min (P) = max (MD). Then, S ∗ =  Proof Suppose on the contrary that  S = S ∗ . Then, from Theorem 7.2, there exist m ∗ ∗ ∗ ∗ u ∈ I and v ∈ R+ such that (S , u , v ∗ ) is an optimal solution of (MD) and ϕ(S ∗ ) = {f (S ∗ , u∗ ) + hM0 (S ∗ , v ∗ )}/g(S ∗ , u∗ ). Suppose that  S, u) ˆ + hM0 ( S, v)}/g( ˆ S, u). ˆ ϕ(S ∗ ) ≤ {f (

(34)

312

J Optim Theory Appl (2008) 139: 295–313

By Proposition 4.1,  {f ( S, u) ˆ + hM0 ( S, v)}/g( ˆ S, u) ˆ ≥ ϕ(S ∗ ) = max{f (S ∗ , u)/g(S ∗ , u)} u∈I



∗ ˆ , u). ˆ ≥ f (S , u)/g(S

Thus ∗ g( S, u)f ˆ (S ∗ , u) ˆ − [f ( S, u) ˆ + hM0 ( S, v)]g(S ˆ , u) ˆ ≤ 0.

It follows that ∗ g( S, u)[f ˆ (S ∗ , u) ˆ + hM0 (S ∗ , v)] ˆ − [f ( S, u) ˆ + hM0 ( S, v)]g(S ˆ , u) ˆ

≤ g( S, u)h ˆ M0 (S ∗ , v) ˆ ≤ 0. That is, B(S ∗ ) ≤ 0 = B( S).

(35)

is (P)-optimal and ( S, u, ˆ v) ˆ is (MD)-optimal, it follows from their constraint inequalities that Since S ∗

hMα (S ∗ , v) ˆ ≤ 0 ≤ hMα ( S, v), ˆ

α = 0, 1, 2, . . . , k.

(36)

Thus, if (i) holds, B(•) is strictly (ρ1 , θ )-pseudoinvex and then (35) implies that {g( S, u)[Df ˆ ( S, u) ˆ + DhM0 ( S, v)] ˆ − Dg( S, u)[f ˆ ( S, u) ˆ + hM0 ( S, v)]} ˆ · η(S ∗ ,  S) < −ρ1 θ (S ∗ ,  S).

(37)

Since hMα (•, u) for α = 1, 2, . . . , k are (ρ2α , θ )-quasiinvex, (36) implies that Dh( S, v) ˆ · η(S ∗ ,  S) ≤ −ρ2α θ (S ∗ ,  S),

α = 1, 2, . . . , k.

(38)

Next, we evaluate (37) + g( S, u) ˆ × (38). The result follows from g( S, u) ˆ > 0 and condition (13) in (MD) that  0 ≤ {g( S, u)D[f ˆ ( S, u) ˆ + h( S, v)] ˆ − [f ( S, u) ˆ + hM0 ( S, v)]Dg( ˆ S, u)} ˆ · η(S ∗ ,  S)   k   < − ρ1 + g( S, u) ˆ ρ2α θ (S ∗ ,  S). α=1

Hence,



 k    ρ1 + g(S, u) ˆ ρ2α < 0. α=1

This contradicts assumption (i). Therefore, we conclude that S ∗ =  S and  ϕ(S ∗ ) = {f ( S, u) ˆ + hM0 ( S, v)}/g( ˆ S, u). ˆ That is, min (P) = max (MD). If (ii) holds, it can be shown by the same way as in the case (i).



J Optim Theory Appl (2008) 139: 295–313

313

Acknowledgements The authors acknowledge Professor P.L. Yu and the referees for useful advice and valuable suggestions in an earlier version of the paper.

References

1. Lai, H.C., Liu, J.C.: Optimality conditions for multiobjective programming with generalized (F, ρ, θ)-convex set functions. J. Math. Anal. Appl. 215, 443–460 (1997)
2. Lai, H.C., Liu, J.C., Tanaka, K.: Duality without a constraint qualification for minimax fractional programming. J. Optim. Theory Appl. 101(1), 109–125 (1999)
3. Lai, H.C., Liu, J.C.: Duality for a minimax programming problem containing n-set functions. J. Math. Anal. Appl. 229, 587–604 (1999)
4. Lai, H.C., Liu, J.C., Tanaka, K.: Necessary and sufficient conditions for minimax fractional programming. J. Math. Anal. Appl. 230, 311–328 (1999)
5. Lai, H.C., Liu, J.C.: On minimax fractional programming of generalized convex set functions. J. Math. Anal. Appl. 244, 442–462 (2000)
6. Lai, H.C., Liu, J.C.: Minimax fractional programming involving generalized invex functions. ANZIAM J. 44, 339–354 (2003)
7. Chen, J.C., Lai, H.C.: Fractional programming for variational problem with (F, ρ, θ)-invexity. J. Nonlinear Convex Anal. 4(1), 25–41 (2003)
8. Lee, J.C., Lai, H.C.: Parametric-free dual models for fractional programming with generalized invexity. Ann. Oper. Res. 133, 47–61 (2005)
9. Lai, H.C.: Sufficiency and duality of fractional integral programming with generalized invexity. Taiwan. J. Math. 10(6), 1685–1695 (2006)
10. Wolfe, P.: A duality theorem for nonlinear programming. Q. Appl. Math. 19, 239–244 (1961)
11. Mond, B., Weir, T.: Generalized concavity and duality. In: Schaible, S., Ziemba, W.T. (eds.) Generalized Concavity in Optimization and Economics, pp. 263–279. Academic Press, New York (1981)
12. Ahmad, I.: Optimality conditions and mixed duality in nondifferentiable programming. J. Nonlinear Convex Anal. 5(1), 71–83 (2004)
13. Bector, C.R., Abha, S.C.: On mixed duality in mathematical programming. J. Math. Anal. Appl. 259, 346–356 (2001)
14. Huang, T.Y., Lai, H.C., Schaible, S.: Optimization theory for set functions on nondifferentiable fractional programming with mixed type duality. Taiwan. J. Math. (2008, to appear)
15. Zalmai, G.J.: Duality for generalized fractional programs involving n-set functions. J. Math. Anal. Appl. 149, 339–350 (1990)
16. Bector, C.R., Singh, M.: Duality for minimax B-vex programming involving n-set functions. J. Math. Anal. Appl. 215, 112–131 (1997)
17. Corley, H.W.: Optimization theory for n-set functions. J. Math. Anal. Appl. 127, 193–205 (1987)
18. Lai, H.C., Yang, S.S.: Saddle point and duality in the optimization theory of convex set functions. J. Aust. Math. Soc. Ser. B 24, 130–137 (1982)
19. Lai, H.C., Yang, S.S., Hwang, G.R.: Duality in mathematical programming of set functions: on Fenchel duality theorem. J. Math. Anal. Appl. 95, 223–234 (1983)
20. Lai, H.C., Lin, L.J.: Moreau-Rockafellar type theorem for convex set functions. J. Math. Anal. Appl. 132, 558–571 (1988)
21. Lai, H.C., Lin, L.J.: The Fenchel-Moreau theorem for set functions. Proc. Am. Math. Soc. 103, 85–90 (1988)
22. Preda, V.: On minimax programming problems containing n-set functions. Optimization 22, 527–537 (1991)
23. Morris, R.J.T.: Optimal constrained selection of a measurable subset. J. Math. Anal. Appl. 70, 546–562 (1979)