Optim Lett, DOI 10.1007/s11590-012-0512-6

Duality in Nonlinear Programming

Vsevolod I. Ivanov


The final publication is available at http://link.springer.com

Abstract In this paper, we define new first- and second-order duals of the nonlinear programming problem with inequality constraints. We introduce the notion of a WD-invex problem. We prove weak, strong, converse, and strict converse duality theorems, among other results, under the hypothesis that the problem is WD-invex. We show that a problem with inequality constraints is WD-invex if and only if weak duality holds between the primal and dual problems. We also introduce the notion of a second-order WD-invex problem with inequality constraints. The class of WD-invex problems is strictly included in the class of second-order ones. We derive that the first-order duality results carry over to the second-order case.

Keywords Duality · Invex functions · Second-order duality · WD-invex problem with inequality constraints · Second-order WD-invex problem

V.I. Ivanov
Department of Mathematics, Technical University of Varna, 1 Studentska Str., 9010 Varna, Bulgaria
Tel.: +359-52-383436
E-mail: [email protected]

1 Introduction

Consider the minimization problem with inequality constraints

    minimize f(x)   subject to   x ∈ X,   g(x) ≤ 0,                          (P)

where X ⊆ R^n is an open set and f : X → R, g : X → R^m are given Fréchet differentiable functions defined on X. Here g(x) ≤ 0 means that all components of g(x) are non-positive.

There are several dual problems of (P). The most popular of them are the Lagrangian dual [2], the Wolfe dual [12,17], and the Mond–Weir dual [15]. A lot of results have been obtained on the basis of these duals. Recent developments in Lagrangian duality can be found in the paper of Giannessi [4]. It was shown by Giannessi and Mastroeni [5] that, in the case of convex problems and some generalized convex ones, the Wolfe and Mond–Weir duals are particular cases of a generalized Lagrangian dual problem. On the other hand, under certain conditions the duality gap of the Lagrangian dual problem depends on the non-convexity of the primal problem (see Aubin and Ekeland [1]). Therefore, it is sensible to study duality theory especially when the primal problem is not convex. Martin [14] proved that weak duality invex problems form the largest class such that weak duality holds between (P) and its Wolfe dual. For another similar result concerning strong duality between (P) and the Wolfe dual of (P), see [8]. It was shown by Ivanov [8] that weak duality is satisfied between (P) and its Mond–Weir dual if and only if (P) is Mond–Weir weak duality invex. Another approach, which originated from Dantzig, Eisenberg, and Cottle [3], is symmetric duality. Duals that differ from the mentioned ones are rather limited; such problems are studied by Tind and Wolsey [16] and Johri [11]. The investigation of second- and higher-order duality was started by Mangasarian [13].

In the present paper, we introduce new first- and second-order duals of the problem (P). We prove weak, strong, and (strict) converse duality theorems, and several further results, under the hypothesis that the problem (P) satisfies certain assumptions of invexity type. We call a problem with inequality constraints that satisfies these assumptions WD-invex. Additional properties of the dual problem are derived. We prove that WD-invex problems form the largest class with the property that weak duality holds between (P) and its new dual. In our opinion, the dual problem presented in this paper is comparable with the duals mentioned above: it is just as simple. In contrast to the approach started by Mangasarian, our second-order duality extends the first-order one, which is preferable. In other words, each first-order dual of the problem (P) is a second-order dual.

2 Duality with WD-invex problems

We denote by g the vector function (g1, g2, ..., gm); by ⟨λ, g⟩ the sum Σ_{i=1}^m λi gi, where λ = (λ1, λ2, ..., λm) ∈ R^m; by L = f + ⟨λ, g⟩ the Lagrange function with a Lagrange multiplier λ; by R^m_+ the non-negative orthant in the Euclidean space R^m; and by S the set of feasible points of the problem (P), that is, S := {x ∈ X | g(x) ≤ 0}. We suppose that the functions f and g are Fréchet differentiable.

Consider the following problem, which we call the dual of (P):

    maximize θ(u)   subject to   u ∈ X,                                      (D)

where

    θ(u) = inf_{v ∈ R^n} { f(u) + ∇f(u)v | ∇g(u)v ≤ −g(u) }.                (1)

For every u ∈ X, the problem (1) is the linear programming dual of the following problem with variable λ:

    maximize   f(u) + ⟨λ, g(u)⟩
    subject to ∇f(u) + ⟨λ, ∇g(u)⟩ = 0,   λ ≥ 0.
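As a concrete illustration of this linear programming duality, here is a sketch with a hypothetical one-dimensional instance of our own choosing (not from the paper): f(x) = x², g(x) = x − 1. Both linear programs then have closed-form values, which coincide wherever the dual is feasible:

```python
import math

# Hypothetical instance: f(x) = x^2, g(x) = x - 1, so (1) reads
# theta(u) = inf_v { u^2 + 2*u*v : v <= 1 - u }.
def theta(u):
    if 2 * u > 0:                        # objective decreases as v -> -infinity
        return -math.inf
    return u * u + 2 * u * (1 - u)       # minimum attained at v = 1 - u

def dual_lp_value(u):
    # maximize u^2 + lam*(u - 1) subject to 2*u + lam = 0, lam >= 0
    lam = -2 * u
    if lam < 0:
        return -math.inf                 # dual infeasible
    return u * u + lam * (u - 1)

for u in [-2.0, -0.5, -0.1, 0.0]:
    assert abs(theta(u) - dual_lp_value(u)) < 1e-12
assert theta(1.0) == dual_lp_value(1.0) == -math.inf
```

For u > 0 both problems have value −∞, consistent with LP duality (an unbounded primal corresponds to an infeasible dual).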

Definition 1 We say that weak duality holds between (P) and (D) iff f(x) ≥ θ(u) for all feasible points x for (P) and u for (D).

Definition 2 We say that strong duality holds between (P) and (D) iff there exist feasible points x̄ for (P) and ū for (D) such that f(x̄) = vP = vD = θ(ū), where by definition

    vP = inf { f(x) | x ∈ S },   vD = sup { θ(u) | u ∈ X }.

Consider the problems (P) and (D). Let f(x) = ⟨c, x⟩ and g(x) = Ax − b, where c is a given vector and A a given matrix. If (P) is solvable, then θ does not depend on u and strong duality trivially holds. If (P) is infeasible, then vP = vD = +∞. If (P) is unbounded, then vP = vD = −∞.

The next theorem is well known under various constraint qualifications:

Karush–Kuhn–Tucker Theorem Let f : X → R and g : X → R^m be Fréchet differentiable functions defined on an open set X ⊆ R^n. If x̄ is a local minimizer of (P) and one of the constraint qualifications holds, then there exists λ̄ ∈ R^m_+ such that

    ∇f(x̄) + Σ_{i=1}^m λ̄i ∇gi(x̄) = 0,   λ̄i gi(x̄) = 0,  i = 1, 2, ..., m.   (2)

Definition 3 A point x̄ ∈ S for which there exists a vector λ̄ ∈ R^m_+ satisfying conditions (2) is called a Karush–Kuhn–Tucker point (for short, a KKT point).

The following definition is slightly different from the notion of a weak duality invex problem in the paper of Martin [14]:

Definition 4 We call the problem (P) WD-invex iff f : X → R and g : X → R^m are Fréchet differentiable and there exists a map η : X × X → R^n such that the following implication holds:

    x ∈ X,  u ∈ X,  g(x) ≤ 0   imply   f(x) − f(u) ≥ ∇f(u)η(x, u)  and  −g(u) ≥ ∇g(u)η(x, u).   (WDI)

Definition 5 The problem

    minimize f1(x)   subject to   x ∈ X,   g1(x) ≤ 0                         (P1)

is called a relaxation of (P) iff f(x) ≥ f1(x) and g1(x) ≤ 0 for all x ∈ S. If (P1) is a relaxation of (P), then

    inf { f(x) | x ∈ X, g(x) ≤ 0 } ≥ inf { f1(x) | x ∈ X, g1(x) ≤ 0 }.

Theorem 1 (Weak Duality) Consider a Fréchet differentiable problem (P) with inequality constraints. Then f(x) ≥ θ(u) > −∞ or θ(u) = −∞ for all points x ∈ S and u ∈ X if and only if (P) is WD-invex.
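The implication (WDI) can be checked numerically for a simple convex instance (again a hypothetical one of our own choosing), using η(x, u) = x − u; this choice works because convex differentiable functions satisfy f(x) − f(u) ≥ ∇f(u)(x − u):

```python
# Hypothetical convex instance: f(x) = x^2, g(x) = x - 1, eta(x, u) = x - u.
def f(x): return x * x
def df(u): return 2 * u      # f'(u)
def g(x): return x - 1
def dg(u): return 1.0        # g'(u)

for x in [-1.0, 0.0, 0.5, 1.0]:          # points with g(x) <= 0
    for u in [-2.0, -0.3, 0.7, 2.0]:
        eta = x - u
        # the two inequalities of (WDI)
        assert f(x) - f(u) >= df(u) * eta
        assert -g(u) >= dg(u) * eta
```

The second inequality reduces here to 1 − u ≥ x − u, i.e. x ≤ 1, which is exactly feasibility of x.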


Proof Let (P) be WD-invex, x ∈ S, u ∈ X. We prove that f(x) ≥ θ(u). It follows from the WD-invexity of (P) that there exists a vector η(x, u) ∈ R^n such that

    f(x) ≥ f(u) + ∇f(u)η(x, u),   −g(u) ≥ ∇g(u)η(x, u).                     (3)

Therefore, for every u ∈ X, the problem with variable x

    minimize f(u) + ∇f(u)η(x, u)   subject to   ∇g(u)η(x, u) ≤ −g(u),   x ∈ X

is a relaxation of (P). Hence, the theorem follows from the relations

    inf_x { f(u) + ∇f(u)η(x, u) | ∇g(u)η(x, u) ≤ −g(u) } ≥ inf_v { f(u) + ∇f(u)v | ∇g(u)v ≤ −g(u) } = θ(u).

Conversely, take arbitrary x ∈ S and u ∈ X. Suppose that f(x) ≥ θ(u) > −∞ or θ(u) = −∞. We prove that (P) is WD-invex. According to the hypothesis, the linear programming problem with variable v

    minimize f(u) + ∇f(u)v   subject to   ∇g(u)v ≤ −g(u)

is solvable with minimal value at most f(x), or it is unbounded from below. Therefore, there exists η(x, u) ∈ R^n such that Inequalities (3) hold. Thus (P) is WD-invex. □

Corollary 1 Let x ∈ S, u ∈ X, and let (P) be WD-invex. Then vP ≥ vD.

Theorem 2 (Strong Duality) Let (P) be WD-invex. If x̄ ∈ S is a KKT point, then x̄ is a solution of both (P) and (D), and

    vP = f(x̄) = vD = θ(x̄).                                                  (4)

Proof There exists λ̄ ∈ R^m_+ such that Conditions (2) are satisfied. Consider the problem

    maximize   f(x̄) + ⟨λ, g(x̄)⟩
    subject to ∇f(x̄) + ⟨λ, ∇g(x̄)⟩ = 0,   λ ≥ 0.

It is feasible by the Karush–Kuhn–Tucker conditions (2). Since g(x̄) ≤ 0 and λ ≥ 0, the objective function of this problem is bounded from above by f(x̄). Therefore, it is solvable. According to Conditions (2), the maximum is attained at λ = λ̄ and L(x̄, λ̄) = f(x̄). The dual linear problem of this one is the following:

    minimize f(x̄) + ∇f(x̄)v   subject to   ∇g(x̄)v ≤ −g(x̄).

Applying the Strong Duality Theorem of Linear Programming, we obtain that the dual problem is also solvable and the optimal values of both problems coincide, that is, θ(x̄) = f(x̄). Hence,

    vD = sup_{u ∈ X} θ(u) ≥ θ(x̄) = f(x̄).


According to Theorem 1, we have f(x̄) ≥ vD, which implies that Equation (4) is satisfied. We now prove that x̄ solves both (P) and (D). Let x be an arbitrary point of S. By Theorem 1, f(x) ≥ θ(x̄) = f(x̄). Therefore, x̄ is a global solution of (P). The proof that x̄ is a global solution of (D) follows from similar arguments. □

Example 1 Consider the problem

    minimize f(x1, x2) = exp(x1² + x2²) + x1   subject to   g(x1, x2) = x1² + x2² ≤ 0,

where f : R² → R and g : R² → R. The functions f and g are convex. The only feasible point is (x1, x2) = (0, 0), and it is a global minimizer. This point does not satisfy the KKT conditions; hence no point fulfills them. On the other hand,

    θ(u1, u2) = u1/2 + (1 − u1²) exp(u1²)   if a < u1 < 0, u2 = 0,

where a ≈ −0.4193648239 is the only real root of the equation 2x exp(x²) + 1 = 0, and θ(u1, u2) = −∞ in all other cases. Strong duality is satisfied between this problem and its dual in the sense that

    sup { θ(u1, u2) | (u1, u2) ∈ R² } = 1 = f(0, 0),

but not in the sense of Definition 2: the function θ does not attain its supremum.

Consider the Lagrange dual of the problem from this example: sup { φ(λ) | λ ≥ 0 }, where

    φ(λ) = inf_{u ∈ R²} [ f(u) + λ g(u) ] = inf_{(u1,u2) ∈ R²} [ exp(u1² + u2²) + u1 + λ(u1² + u2²) ].   (5)

We solve the Lagrange dual problem. The infimum in (5) is attained when u2 = 0. Let λ be fixed. Consider the function h(u1) = exp(u1²) + u1 + λu1². Since lim_{u1→±∞} h(u1) = +∞, by the Weierstrass Theorem h attains its minimal value at some point v with h′(v) = 0, that is,

    2v exp(v²) + 2λv + 1 = 0.

It follows that v < 0 for every λ ≥ 0. On the other hand, we have

    sup_{λ ≥ 0} inf_{u ∈ R²} [ f(u) + λ g(u) ] ≤ f(0, 0) = 1.
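The closed form of θ in Example 1 and the constant a can be verified numerically; the following sketch recomputes a by bisection and evaluates θ(u1, 0) directly from the linear program (1) (the derivation of the objective coefficient c below is ours):

```python
import math

# a is the root of 2x*exp(x^2) + 1 = 0 on (-1, 0); the left-hand side is
# increasing there, so bisection applies.
def root_a():
    lo, hi = -1.0, 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 2 * mid * math.exp(mid * mid) + 1 < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = root_a()
assert abs(2 * a * math.exp(a * a) + 1) < 1e-10
assert abs(a + 0.41936) < 1e-4                  # matches the value in Example 1

def theta(u1):
    # For u1 < 0 the constraint of (1) reads v1 >= -u1/2, and the objective
    # coefficient of v1 is c = 2*u1*exp(u1^2) + 1; the infimum is finite iff
    # c >= 0, i.e. u1 >= a.
    c = 2 * u1 * math.exp(u1 * u1) + 1
    if u1 < 0 and c >= 0:
        return math.exp(u1 * u1) + u1 + c * (-u1 / 2)
    return -math.inf

u1 = -0.2
closed_form = u1 / 2 + (1 - u1 * u1) * math.exp(u1 * u1)
assert abs(theta(u1) - closed_form) < 1e-12
```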


Therefore, h(v) ≤ 1 for every λ ≥ 0. Consider the system of equations

    exp(v²) + v + λv² = 1 − ε,   2v exp(v²) + 2λv = −1,                      (6)

where ε is a sufficiently small positive number. Eliminating the unknown λ from the system, we obtain

    (1 − v²) exp(v²) + v/2 = 1 − ε.                                          (7)

The function ψ(t) = (1 − t²) exp(t²) + t/2 is increasing when t < 0. Hence Equation (7) has exactly one solution with v < 0. Therefore, the system (6) has a solution (λ, v) with λ ≥ 0 and v < 0 if ε > 0, but it has no solution when ε = 0. Hence, sup { φ(λ) | λ ≥ 0 } = 1, but the supremum is not attained. As λ tends to +∞, φ(λ) approaches 1. We see from this example that the Lagrange dual of (P) is quite different from the dual problem (D).

Theorem 3 (Converse Duality) Let (P) be WD-invex. Suppose that x̄ ∈ S, ū ∈ X, and f(x̄) = θ(ū). Then x̄ is a solution of (P), and ū is a solution of (D).

Proof The proof uses the arguments of the respective part of the previous theorem. □

We introduce the following definition:

Definition 6 Let the problem (P) be WD-invex. We call (P) strictly WD-invex at u ∈ X iff the map η from Definition 4 satisfies the strict inequality f(x) − f(u) > ∇f(u)η(x, u) for all x ∈ S, x ≠ u.

Theorem 4 (Strict Converse Duality) Let (P) be a WD-invex problem which is strictly WD-invex at ū ∈ X. If x̄ is a feasible point for (P) and f(x̄) = θ(ū), then x̄ ≡ ū.

Proof Suppose the contrary, that x̄ ≠ ū. By strict WD-invexity, there exists a map η such that η̄ = η(x̄, ū) satisfies the inequalities

    f(x̄) > f(ū) + ∇f(ū)η̄,   0 ≥ g(ū) + ∇g(ū)η̄.                             (8)

Consider the linear programming problem with variable v

    minimize f(ū) + ∇f(ū)v   subject to   ∇g(ū)v ≤ −g(ū).

According to (8), the point v = η̄ is feasible; therefore, this problem is feasible. We have θ(ū) = f(x̄) > −∞. It follows that the problem is bounded from below. Hence, it is solvable with optimal value θ(ū). Its dual linear problem is the following one:

    maximize   f(ū) + ⟨λ, g(ū)⟩
    subject to ∇f(ū) + ⟨λ, ∇g(ū)⟩ = 0,   λ ≥ 0.
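The elimination step leading from system (6) to Equation (7) can be checked numerically: solve (7) for v by bisection, recover λ from the second equation of (6), and confirm that the first equation of (6) holds with λ ≥ 0:

```python
import math

eps = 0.01

def psi(t):
    # left-hand side of (7); increasing for t < 0
    return (1 - t * t) * math.exp(t * t) + t / 2

# solve psi(v) = 1 - eps by bisection on [-1, 0)
lo, hi = -1.0, -1e-12
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if psi(mid) < 1 - eps:
        lo = mid
    else:
        hi = mid
v = 0.5 * (lo + hi)

# lambda from the second equation of (6)
lam = (-1 - 2 * v * math.exp(v * v)) / (2 * v)
assert lam >= 0
# the first equation of (6) then holds for this (lambda, v)
assert abs(math.exp(v * v) + v + lam * v * v - (1 - eps)) < 1e-9
```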


By the Duality Theorem in Linear Programming, the dual linear problem is also solvable with the same optimal value. Therefore, there exists λ̄ ∈ R^m_+ such that

    f(ū) + ⟨λ̄, g(ū)⟩ = θ(ū) = f(x̄),                                         (9)
    ∇f(ū) + ⟨λ̄, ∇g(ū)⟩ = 0.                                                 (10)

Taking into account that λ̄ ≥ 0, we conclude from (8) that

    f(x̄) > L(ū, λ̄) + (∇f(ū) + ⟨λ̄, ∇g(ū)⟩) η(x̄, ū),

where L = f + ⟨λ̄, g⟩ is the Lagrange function. Then it follows from Equations (9) and (10) that

    f(x̄) > L(ū, λ̄) = f(x̄),

which is impossible. □

Theorem 5 (Absence of a primal solution) Let (P) be WD-invex. Suppose that there exists ū ∈ X with vP = θ(ū) > −∞. Then ū is a global solution of the dual problem (D).

Proof There exists a sequence of feasible points {xk} such that f(xk) approaches vP. Let u ∈ X be an arbitrary point. It follows from Theorem 1 that f(xk) ≥ θ(u). Taking the limit as k → ∞, we obtain θ(ū) ≥ θ(u). Therefore, ū is a global solution of (D), because u is an arbitrary point. □

Theorem 6 (Empty primal feasible set) Let (P) be WD-invex. Suppose that vD = +∞. Then S = ∅.

Proof Assume the contrary, that there exists a point x ∈ S. It follows from the hypothesis that there exists a sequence {uk} such that uk ∈ X and θ(uk) > k. By Theorem 1 we have f(x) ≥ θ(uk) > k, which is impossible, because f(x) is finite. □

Theorem 7 (Unbounded primal feasible set) Let (P) be WD-invex. Suppose that vP = −∞. Then sup { θ(u) | u ∈ X } = −∞.

Proof According to the hypothesis, there exists a sequence {xk} such that xk ∈ S and f(xk) < −k. Let u ∈ X be an arbitrary point. By WD-invexity, there exists a map η such that 0 ≥ g(u) + ∇g(u)η(xk, u). It follows that the vector vk = η(xk, u) satisfies the inequalities g(u) + ∇g(u)vk ≤ 0. By WD-invexity we have

    −k > f(xk) ≥ f(u) + ∇f(u)η(xk, u) ≥ inf_{v ∈ R^n} { f(u) + ∇f(u)v | g(u) + ∇g(u)v ≤ 0 } = θ(u).

The inequality θ(u) < −k for every positive integer k implies that the theorem holds. □
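Theorem 6 can be illustrated with a hypothetical infeasible instance of our own choosing: f(x) = x, g(x) = x² + 1. Here S is empty, so the problem is vacuously WD-invex, and a short computation (ours, under these assumptions) gives θ(u) = (u² − 1)/(2u) for u < 0, which tends to +∞ as u → 0⁻, so vD = +∞:

```python
# Hypothetical infeasible instance: f(x) = x, g(x) = x^2 + 1.
# For u < 0 the LP (1) reads inf_v { u + v : 2*u*v <= -(u^2 + 1) }, whose
# minimum is attained at v = -(u^2 + 1)/(2u), giving theta(u) = (u^2 - 1)/(2u).
def theta(u):
    assert u < 0, "closed form derived for u < 0 only"
    return (u * u - 1) / (2 * u)

assert theta(-0.5) == 0.75
assert theta(-1e-6) > 1e5      # theta is unbounded above as u -> 0-, so vD = +inf
```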


The following questions often arise when invex functions are objects of investigation: How can we recognize invex functions? Is it necessary to find the kernel η in the definition of invex functions? Differentiable invex functions f : X → R, where X ⊆ R^n is an open set, form the largest class such that every point x ∈ X with ∇f(x) = 0 is a global minimizer of f. This condition is equivalent to the implication

    x ∈ X, x is not a global minimizer of f over X   implies   ∇f(x) ≠ 0.

The kernel η does not appear in the last implication. Therefore, it is not necessary to find η when one wants to prove that a given function is invex. This question was studied in a separate section of the paper of Hanson [7], where invex functions were introduced. Similar questions concern the class of invex problems, which we introduce in Definition 4. The next proposition shows how WD-invex problems can be recognized.

Proposition 1 A Fréchet differentiable problem (P) is WD-invex if and only if, for all x ∈ S, u ∈ X, the following linear problem is solvable and its optimal value is 0:

    minimize   λ [ f(x) − f(u) ] − ⟨µ, g(u)⟩                                 (11)
    subject to λ ∇f(u) + ⟨µ, ∇g(u)⟩ = 0,                                     (12)
               λ ≥ 0,  µ ≥ 0.                                                (13)

Proof It follows from Definition 4 that (P) is WD-invex if and only if, for all x ∈ S and u ∈ X, the following problem with variable η is solvable:

    maximize 0   subject to   ∇f(u)η ≤ f(x) − f(u),   ∇g(u)η ≤ −g(u).

The linear programming dual of this problem is the problem (11)–(13). Then the claim follows directly from the Duality Theorem in Linear Programming. □

Example 2 Consider the problem

    minimize f(x1, x2) = arctan(2x1 + x2)   subject to   g(x1, x2) = −(x1 + x2)/(x1 x2) ≤ 0,

where f : R² → R, g : X → R, X = {(x1, x2) ∈ R² | x1 ≠ 0, x2 ≠ 0}. It follows from Proposition 1 that this problem is WD-invex. Indeed, for all x ∈ S, u ∈ X, the conditions λ∇f(u) + ⟨µ, ∇g(u)⟩ = 0, λ ≥ 0, µ ≥ 0 are satisfied only if (λ, µ) = (0, 0). Therefore, the problem (11)–(13) is solvable and its optimal value is 0. The problem itself is not convex.

The following problem with variables u and λ is called the Wolfe dual of (P):

    maximize   L(u, λ) = f(u) + ⟨λ, g(u)⟩
    subject to ∇f(u) + ⟨λ, ∇g(u)⟩ = 0,   λ ≥ 0,   u ∈ X.                     (WD)

Denote

    vWD = sup_{u,λ} { L(u, λ) | ∇f(u) + ⟨λ, ∇g(u)⟩ = 0,  λ ≥ 0,  u ∈ X }.
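The gradient computation underlying Example 2 can be spot-checked numerically (a sketch; the sample points in X are chosen arbitrarily by us). All partial derivatives of f and of g are strictly positive on X, which is why λ∇f(u) + µ∇g(u) = 0 with λ, µ ≥ 0 forces (λ, µ) = (0, 0):

```python
# Example 2: f(x1,x2) = arctan(2*x1 + x2), g(x1,x2) = -(x1+x2)/(x1*x2).
def grad_f(u1, u2):
    s = 1 + (2 * u1 + u2) ** 2
    return (2 / s, 1 / s)

def grad_g(u1, u2):
    # g = -1/x2 - 1/x1, so dg/dx1 = 1/x1^2, dg/dx2 = 1/x2^2
    return (1 / u1 ** 2, 1 / u2 ** 2)

for u in [(1.0, 2.0), (-3.0, 0.5), (0.2, -0.7)]:
    gf, gg = grad_f(*u), grad_g(*u)
    assert gf[0] > 0 and gf[1] > 0 and gg[0] > 0 and gg[1] > 0
```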


Proposition 2 Let the problem (P) be WD-invex. Then vP ≥ vD ≥ vWD. In particular, if strong duality is satisfied for the Wolfe dual, then strong duality holds for (D).

Proof It follows from the Weak Duality Theorem in Linear Programming that

    θ(u) ≥ sup_λ { L(u, λ) | ∇f(u) + ⟨λ, ∇g(u)⟩ = 0,  λ ≥ 0 }

for all u ∈ X. We conclude that vD ≥ vWD for an arbitrary differentiable problem (P). Thus, the claim is a consequence of Theorem 1. □

We compare WD-invex problems with other classes of invex problems. The following notion was introduced by Hanson [7]. Later, problems satisfying this condition were called Hanson–Craven invex.

Definition 7 The problem (P) is called Hanson–Craven invex (for short, HC-invex) iff f : X → R and g : X → R^m are Fréchet differentiable and there exists a map η : X × X → R^n such that the following conditions are satisfied for all x ∈ X, u ∈ X:

    f(x) − f(u) ≥ ∇f(u)η(x, u),   g(x) − g(u) ≥ ∇g(u)η(x, u).

It follows from this definition that every HC-invex problem with inequality constraints is WD-invex.

Proposition 3 (Sufficient optimality conditions) Let the problem (P) be WD-invex. Suppose that the Karush–Kuhn–Tucker conditions (2) are satisfied at the point x̄ ∈ S. Then x̄ is a global minimizer.

Proof Let x ∈ S be an arbitrary feasible point. It follows from (WDI) that

    f(x) − f(x̄) ≥ ∇f(x̄)η(x, x̄),   −g(x̄) ≥ ∇g(x̄)η(x, x̄).

We conclude from these inequalities and from (2) that f(x) ≥ f(x̄). Therefore, the proposition holds. □

KT-invex problems with inequality constraints were introduced by Martin [14]. They form the largest class of problems such that every point satisfying conditions (2) is a global minimizer. In other words, the problem (P) is KT-invex if and only if every Kuhn–Tucker point is a global minimizer (see [14]). Then, by Proposition 3, every WD-invex problem is KT-invex. For every differentiable problem (P) we have vD ≥ vWD. Therefore, for each problem (P) such that weak duality holds between (P) and (D), weak duality is satisfied between (P) and its Wolfe dual. On the other hand, it was proved by Martin [14] that (P) is weak duality invex (see [14, Definition 3.1]) if and only if weak duality is satisfied between (P) and its Wolfe dual. It then follows from Theorem 1 that every WD-invex problem is weak duality invex according to Definition 3.1 in [14]. The same fact also follows directly from Definition 3.1 in [14].
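The chain vP ≥ vD ≥ vWD of Proposition 2 can be traced on the hypothetical convex instance f(x) = x², g(x) = x − 1 (our own choice; convex, hence WD-invex), for which both dual values admit closed forms derived by us:

```python
# theta(u) = 2u - u^2 for u <= 0 and -inf otherwise (closed form of (1) here);
# the Wolfe constraint 2u + lam = 0 forces lam = -2u, feasible only for u <= 0,
# with value L(u, -2u) = 2u - u^2 as well.
def theta(u):
    return 2 * u - u * u if u <= 0 else -float("inf")

def wolfe_value(u):
    lam = -2 * u
    assert lam >= 0
    return u * u + lam * (u - 1)

vP = 0.0                                   # minimum of x^2 over x <= 1
grid = [-k / 100 for k in range(0, 301)]   # u in [-3, 0]
vD = max(theta(u) for u in grid)
vWD = max(wolfe_value(u) for u in grid)
assert vP >= vD >= vWD
assert vD == 0.0 and vWD == 0.0
```

For this convex instance all three values coincide; the proposition asserts only the inequalities in general.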


3 Second-order duality

In this section, we study a new second-order dual of the problem (P). We suppose that the functions f : X → R and g : X → R^m are Fréchet differentiable.

Definition 8 Let the function f : X → R be Fréchet differentiable at the point u ∈ R^n. The second-order directional derivative of f at u in direction d ∈ R^n is defined by

    f″(u, d) = lim_{t → +0} 2 t⁻² ( f(u + td) − f(u) − t ∇f(u)d ).

Let g : X → R^m be a vector-valued function. We denote by g″(u, d) the vector

    g″(u, d) := (g″1(u, d), g″2(u, d), ..., g″m(u, d)).

Definition 9 Let the problem (P) be Fréchet differentiable. A direction d ∈ R^n is called critical at the point x ∈ S if the following inequalities are satisfied:

    ∇f(x)d ≤ 0,   ∇gi(x)d ≤ 0,  ∀i ∈ I(x),

where I(x) := { i ∈ {1, 2, ..., m} | gi(x) = 0 } denotes the set of active constraints.

We introduce the following notion:

Definition 10 We call the problem (P) second-order WD-invex iff f : X → R and g : X → R^m are Fréchet differentiable and there exist maps η : X × X → R^n, d : X × X → R^n and a function ω : X × X → [0, +∞) such that, for all x ∈ S, u ∈ X, the second-order directional derivatives f″(u, d), g″i(u, d), i = 1, 2, ..., m, exist and are finite, the direction d(x, u) is critical at u, and the following inequalities hold:

    f(x) ≥ f(u) + ∇f(u)η(x, u) + ω(x, u) f″(u, d(x, u)),                     (14)
    0 ≥ g(u) + ∇g(u)η(x, u) + ω(x, u) g″(u, d(x, u)).

If Inequality (14) is strict for all x ≠ u, then we additionally say that (P) is second-order strictly WD-invex at u. It is obvious that the objective function of a second-order WD-invex problem is second-order invex (see Ivanov [9]).

Theorem 8 Let the problem (P) be WD-invex. Then (P) is second-order WD-invex.

Proof Let (P) be WD-invex with a map η. We take ω ≡ 0 and keep the same map η. For every u ∈ X we choose d = 0, because this direction is critical and the second-order derivatives f″(u, 0), g″(u, 0) always exist and are all equal to zero. □

Consider the following problem, which we call the second-order dual of (P):

    maximize ψ(u)   subject to   u ∈ X,                                      (SOD)

where

    ψ(u) = inf_d { θ(u, d) | d is a critical direction at u },
    θ(u, d) = inf_{v ∈ R^n, w ∈ [0,+∞)} { f(u) + ∇f(u)v + w f″(u, d) | ∇g(u)v + w g″(u, d) ≤ −g(u) }.
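For twice continuously differentiable functions, the limit in Definition 8 equals the quadratic form dᵀ∇²f(u)d; the following sketch checks this numerically for a smooth function of our own choosing:

```python
import math

# Hypothetical C^2 function: f(x1,x2) = exp(x1) + x1*x2^2.
def f(x1, x2): return math.exp(x1) + x1 * x2 * x2
def grad_f(x1, x2): return (math.exp(x1) + x2 * x2, 2 * x1 * x2)

def second_dir_deriv(u, d, t=1e-4):
    # finite-t approximation of the limit in Definition 8
    gu = grad_f(*u)
    fu = f(*u)
    ft = f(u[0] + t * d[0], u[1] + t * d[1])
    return 2 / (t * t) * (ft - fu - t * (gu[0] * d[0] + gu[1] * d[1]))

u, d = (0.3, -1.2), (1.0, 2.0)
# Hessian of f: [[exp(x1), 2*x2], [2*x2, 2*x1]]
h11, h12, h22 = math.exp(u[0]), 2 * u[1], 2 * u[0]
exact = h11 * d[0] ** 2 + 2 * h12 * d[0] * d[1] + h22 * d[1] ** 2
assert abs(second_dir_deriv(u, d) - exact) < 1e-2
```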


Theorem 9 (Weak Duality) Let (P) be a second-order WD-invex problem. Then f(x) ≥ ψ(u) > −∞ or ψ(u) = −∞ for all points x ∈ S, u ∈ X.

Proof Let x and u be arbitrary feasible points for (P) and (SOD), respectively. There exist a vector η, a critical direction d at u, and a number ω ≥ 0 such that both inequalities of Definition 10 are satisfied. It follows from the second-order WD-invexity of (P) that the problem with variable x

    minimize   f(u) + ∇f(u)η(x, u) + ω(x, u) f″(u, d(x, u))
    subject to ∇g(u)η(x, u) + ω(x, u) g″(u, d(x, u)) ≤ −g(u)

is a relaxation of (P). Hence, the theorem follows from the relations

    inf_x { f(u) + ∇f(u)η(x, u) + ω(x, u) f″(u, d) | ∇g(u)η(x, u) + ω(x, u) g″(u, d) ≤ −g(u) }
    ≥ inf_{v,w} { f(u) + ∇f(u)v + w f″(u, d) | ∇g(u)v + w g″(u, d) ≤ −g(u), w ≥ 0 }
    = θ(u, d) ≥ ψ(u). □

We apply the following theorem due to Ginchev and Ivanov [6]:

Theorem 10 (Second-order necessary optimality conditions) Let X be an open set in R^n and the functions f, gi (i = 1, ..., m) be defined on X. Suppose that x is a local minimizer of the problem (P), the functions gi (i ∉ I(x)) are continuous at x, the functions f, gi (i ∈ I(x)) are continuously differentiable, and f, gi (i ∈ I(x)) are second-order directionally differentiable at x in every critical direction d ∈ R^n such that ∇f(x)d = 0 and ∇gi(x)d = 0 (i ∈ I(x)). Assume that the Mangasarian–Fromovitz constraint qualification holds. Then, corresponding to any such critical direction d, there exists a Lagrange multiplier λ = (λ1, λ2, ..., λm), λi ≥ 0, i = 1, 2, ..., m, with

    λi gi(x) = 0,  i = 1, 2, ..., m,                                         (15)
    ∇f(x) + Σ_{i ∈ I(x)} λi ∇gi(x) = 0,                                      (16)
    ∇f(x)d = 0,  λi ∇gi(x)d = 0, ∀i ∈ I(x),  f″(x, d) + Σ_{i ∈ I(x)} λi g″i(x, d) ≥ 0.   (17)

Definition 11 ([10]) A point x ∈ S is called second-order Karush–Kuhn–Tucker stationary (for short, a second-order KKT point) iff, for every direction d ∈ R^n that is critical at x and such that the second-order derivatives f″(x, d), g″i(x, d), i ∈ I(x), exist, there is a Lagrange multiplier λ ≥ 0 satisfying Equations (15), (16), and (17).

Theorem 11 (Strong Duality) Let the problem (P) be second-order WD-invex. If x̄ is a second-order KKT point, then f(x̄) = ψ(x̄), and x̄ is a global solution of both (P) and (SOD).


Proof Let d be an arbitrary direction critical at x̄. Then there exists λ̄ ∈ R^m_+ which satisfies conditions (15), (16), and (17) with λ = λ̄ and x = x̄. Consider the linear programming problem with variable λ:

    maximize   f(x̄) + ⟨λ, g(x̄)⟩
    subject to ∇f(x̄) + ⟨λ, ∇g(x̄)⟩ = 0,   f″(x̄, d) + ⟨λ, g″(x̄, d)⟩ ≥ 0,   λ ≥ 0.

It is feasible, because x̄ is a second-order KKT point for (P). Using that g(x̄) ≤ 0, we obtain that the objective function of this problem is bounded from above by f(x̄) for every feasible λ. Therefore, the problem is solvable. The maximal value f(x̄) of the objective function is attained at λ = λ̄. The dual linear problem is the following one with variables v and w:

    minimize   f(x̄) + ∇f(x̄)v + w f″(x̄, d)
    subject to ∇g(x̄)v + w g″(x̄, d) ≤ −g(x̄),   w ≥ 0.

Applying the Duality Theorem in Linear Programming, we obtain that the last problem is also solvable and θ(x̄, d) = f(x̄). Since d is an arbitrary direction critical at x̄, we get ψ(x̄) = f(x̄). □

Theorem 12 (Strict Converse Duality) Let x̄ be a feasible point for (P), ū ∈ X, and f(x̄) = ψ(ū). If the problem (P) is second-order WD-invex and second-order strictly WD-invex at ū, then x̄ ≡ ū.

Proof Suppose the contrary, that x̄ ≠ ū. Then, by the second-order strict WD-invexity of (P), there exist a direction d̄ critical at ū, a vector η̄ ∈ R^n, and ω̄ ∈ [0, +∞) such that

    f(x̄) > f(ū) + ∇f(ū)η̄ + ω̄ f″(ū, d̄),   0 ≥ g(ū) + ∇g(ū)η̄ + ω̄ g″(ū, d̄).   (18)

Consider the linear programming problem:

    minimize   f(ū) + ∇f(ū)v + w f″(ū, d̄)
    subject to ∇g(ū)v + w g″(ū, d̄) ≤ −g(ū),   w ≥ 0.

It is feasible, because the point (v, w) with v = η̄, w = ω̄ fulfills the constraints. We have θ(ū, d̄) ≥ ψ(ū) = f(x̄) > −∞. Therefore, this problem is solvable, because its objective function is bounded from below over the constraint set. The dual linear problem is the following one:

    maximize   f(ū) + ⟨λ, g(ū)⟩
    subject to ∇f(ū) + ⟨λ, ∇g(ū)⟩ = 0,   f″(ū, d̄) + ⟨λ, g″(ū, d̄)⟩ ≥ 0,   λ ≥ 0.

By the Duality Theorem in Linear Programming, the dual linear problem is also solvable with the same optimal value. Therefore, there exists λ̄ ∈ R^m_+ such that


    f(ū) + ⟨λ̄, g(ū)⟩ = θ(ū, d̄) ≥ ψ(ū) = f(x̄),                              (19)
    ∇f(ū) + ⟨λ̄, ∇g(ū)⟩ = 0,                                                 (20)
    f″(ū, d̄) + ⟨λ̄, g″(ū, d̄)⟩ ≥ 0.                                           (21)

It follows from (18) that

    f(x̄) > L(ū, λ̄) + (∇f(ū) + ⟨λ̄, ∇g(ū)⟩) η̄ + ω̄ ( f″(ū, d̄) + ⟨λ̄, g″(ū, d̄)⟩ ).

Then we obtain from (19), (20), and (21) that f(x̄) > L(ū, λ̄) ≥ f(x̄), which is impossible. □

Theorem 13 (Unbounded primal feasible set) Let the problem (P) be second-order WD-invex and vP = −∞. Then ψ(u) = −∞ for every u ∈ X.

Proof According to the hypothesis, there exists a sequence {xk} such that xk ∈ S and f(xk) < −k. Suppose the contrary, that there is u ∈ X with ψ(u) > −∞. Let dk be the direction which appears in Definition 10 with x = xk; it is critical at u. Therefore,

    0 ≥ g(u) + ∇g(u)η(xk, u) + ω(xk, u) g″(u, dk).

It follows that

    −k > f(xk) ≥ f(u) + ∇f(u)η(xk, u) + ω(xk, u) f″(u, dk)
    ≥ inf_{v ∈ R^n, w ∈ [0,+∞)} { f(u) + ∇f(u)v + w f″(u, dk) | g(u) + ∇g(u)v + w g″(u, dk) ≤ 0 }
    = θ(u, dk) ≥ ψ(u).

The inequality ψ(u) < −k for every positive integer k gives a contradiction. □

The proofs of the following theorems use the arguments of the respective first-order results:

Theorem 14 (Converse Duality) Let (P) be second-order WD-invex. Suppose that x̄ ∈ S, ū ∈ X, and f(x̄) = ψ(ū). Then x̄ is a solution of (P), and ū is a solution of (SOD).

Theorem 15 (Absence of a primal solution) Let (P) be second-order WD-invex. Suppose that there exists ū ∈ X with vP = ψ(ū) < +∞. Then ū is a global solution of the dual problem (SOD).

Theorem 16 (Empty primal feasible set) Let (P) be second-order WD-invex. Suppose that vSOD = +∞, where vSOD = sup { ψ(u) | u ∈ X }. Then S = ∅.

It is interesting to ask how we can check whether a given problem with inequality constraints is WD-invex. The next example contains a problem which is second-order WD-invex but not first-order WD-invex. We verify this fact using the definition: we exhibit the functions η, d, and ω.


Example 3 Consider the problem (P) with X ≡ R², where f : R² → R and g : R² → R are defined by

    f(x1, x2) = −x1² − x2² + 4x1x2 + 2x1 − 4x2,   g(x1, x2) = x1² + 2x2² − 4x1x2 − 2x1.

This problem is second-order WD-invex with η(x, u) = x − u, ω(x, u) = 1, d(x, u) = (x − u)/√2. On the other hand, the problem is not WD-invex. Indeed, take u = (1, 0) and x = (1/2, 0); then x ∈ S. There is no η ∈ R² with f(x) − f(u) ≥ ∇f(u)η, because ∇f(u) = (0, 0) and f(x) − f(u) = −1/4.

Sometimes it is difficult to determine the functions η, ω, and d. In fact, if we want to prove that (P) is second-order WD-invex, it is enough to show that η, ω, and d exist; we do not need to find them. The following proposition is useful for verifying second-order WD-invexity:

Proposition 4 A Fréchet differentiable problem (P) is second-order WD-invex if and only if, for all x ∈ S, u ∈ X, there exists a direction d critical at u such that the following linear problem is solvable with optimal value 0:

    minimize   λ [ f(x) − f(u) ] − ⟨µ, g(u)⟩                                 (22)
    subject to λ ∇f(u) + ⟨µ, ∇g(u)⟩ = 0,                                     (23)
               λ f″(u, d) + ⟨µ, g″(u, d)⟩ ≥ 0,                               (24)
               λ ≥ 0,  µ ≥ 0.                                                (25)

Proof It follows from Definition 10 that (P) is second-order WD-invex if and only if, for all x ∈ S and u ∈ X, there exists a direction d critical at u such that the following problem with variables η and ω is solvable:

    maximize 0
    subject to ∇f(u)η + ω f″(u, d) ≤ f(x) − f(u),
               ∇g(u)η + ω g″(u, d) ≤ −g(u),
               ω ≥ 0.

The linear programming dual of this problem is the problem (22)–(25). Then the theorem follows directly from the Duality Theorem in Linear Programming. □

Example 4 Consider the problem (P) with X ≡ R², where f : R² → R and g : R² → R are defined by

    f(x1, x2) = x1⁴ + x2⁴ + x1²x2² − x1³ + x1² − 2x2²,   g(x1, x2) = x1² + x2² − 1.

This problem is second-order WD-invex according to Proposition 4. Consider the following cases. If u1 ≠ 0, or u1 = 0 and u2² − 1 = 0, then the only feasible point for the problem (22)–(25) is (λ, µ) = (0, 0), so the optimal value of the objective function is 0. If u1 = 0 and u2 ∈ (−1, 0) ∪ (0, 1), then we conclude from (23) that µ = 2λ(1 − u2²). It follows that the objective function (22) is non-negative and its optimal value is 0. In the last case, u1 = 0 and u2 = 0, we choose a critical direction d = (d1, d2) with d1 = 0, d2 ≠ 0. By (24) we have µ ≥ 2λ. This implies that the objective function is non-negative and its minimal value is 0.
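The case u = (0, 0) of Example 4 can be spot-checked numerically (a sketch; the grid and tolerances are ours): with the critical direction d = (0, 1) we get f″(u, d) = −4 and g″(u, d) = 2, so constraint (24) reads µ ≥ 2λ, and since f ≥ −1 on the feasible disk while g(0, 0) = −1, the objective (22) equals λ f(x) + µ ≥ λ (f(x) + 2) ≥ 0:

```python
# Example 4: f(x1,x2) = x1^4 + x2^4 + x1^2*x2^2 - x1^3 + x1^2 - 2*x2^2.
def f(x1, x2):
    return x1**4 + x2**4 + x1**2 * x2**2 - x1**3 + x1**2 - 2 * x2**2

# d^T Hess f(0,0) d with d = (0,1): f_x2x2(0,0) = -4; similarly g_x2x2 = 2
fdd, gdd = -4.0, 2.0
assert fdd + 2.0 * gdd >= 0            # (24) holds, e.g. with lambda=1, mu=2

# f >= -1 on a grid over the unit disk S = {x1^2 + x2^2 <= 1}
pts = [(i / 20, j / 20) for i in range(-20, 21) for j in range(-20, 21)
       if (i / 20) ** 2 + (j / 20) ** 2 <= 1]
assert min(f(*p) for p in pts) >= -1 - 1e-12
assert f(0.0, 1.0) == -1.0             # the minimum -1 is attained at (0, 1)
```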


On the other hand, in the case x = (0, 1), u = (0, 0), the problem (11)–(13) is unbounded from below. According to Proposition 1, therefore, the problem is not WD-invex.

Acknowledgements The author thanks the Publishing House Springer and the editors of the Journal of Global Optimization for publishing this paper.

References

1. Aubin, J.P., Ekeland, I.: Estimates of the duality gap in nonconvex optimization. Math. Oper. Res. 1, 225–245 (1976)
2. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms, 3rd edn. Wiley, New York (2006)
3. Dantzig, G.B., Eisenberg, E., Cottle, R.W.: Symmetric dual nonlinear programs. Pacific J. Math. 15, 809–812 (1965)
4. Giannessi, F.: On the theory of Lagrangian duality. Optim. Lett. 1, 9–20 (2007)
5. Giannessi, F., Mastroeni, G.: Separation of sets and Wolfe duality. J. Global Optim. 42, 401–412 (2008)
6. Ginchev, I., Ivanov, V.I.: Second-order optimality conditions for problems with C1 data. J. Math. Anal. Appl. 340, 646–657 (2008)
7. Hanson, M.A.: On sufficiency of the Kuhn-Tucker conditions. J. Math. Anal. Appl. 80, 545–550 (1981)
8. Ivanov, V.I.: On the optimality of some classes of invex problems. Optim. Lett. 6, 43–54 (2012)
9. Ivanov, V.I.: Second-order invex functions in nonlinear programming. Optimization (2012). doi:10.1080/02331934.2010.522711 (in press)
10. Ivanov, V.I.: Second-order Kuhn-Tucker invex constrained problems. J. Global Optim. 50, 519–529 (2011)
11. Johri, P.K.: Implied constraints and a unified theory of duality in linear and nonlinear programming. European J. Oper. Res. 71, 61–69 (1993)
12. Mangasarian, O.L.: Nonlinear Programming. McGraw-Hill, New York (1969)
13. Mangasarian, O.L.: Second- and higher-order duality in nonlinear programming. J. Math. Anal. Appl. 51, 607–620 (1975)
14. Martin, D.H.: The essence of invexity. J. Optim. Theory Appl. 47, 65–76 (1985)
15. Mond, B., Weir, T.: Generalized concavity and duality. In: Schaible, S., Ziemba, W.T. (eds.) Generalized Concavity in Optimization and Economics, pp. 263–279. Academic Press, New York (1981)
16. Tind, J., Wolsey, L.: An elementary survey of general duality theory in mathematical programming. Math. Programming 21, 241–261 (1981)
17. Wolfe, P.: A duality theorem for nonlinear programming. Quart. Appl. Math. 19, 239–244 (1961)
