Mathematical Programming 26 (1983) 144-152 North-Holland Publishing Company

REVISIONS OF CONSTRAINT APPROXIMATIONS IN THE SUCCESSIVE QP METHOD FOR NONLINEAR PROGRAMMING PROBLEMS

Kaoru TONE
Graduate School for Policy Science, Saitama University, Urawa, Saitama 338, Japan

Received 23 February 1982
Revised manuscript received 20 July 1982

In the last few years the successive quadratic programming methods proposed by Han and Powell have been widely recognized as excellent means for solving nonlinear programming problems. However, there remain some questions about their linear approximations to the constraints, from both theoretical and empirical points of view. In this paper, we propose two revisions of the linear approximation to the constraints and show that the directions generated by the revisions are also descent directions of exact penalty functions of nonlinear programming problems. The new technique copes better with bad starting points than the usual one.

Key words: Nonlinear Programming, Successive Quadratic Programming Method, Linear Approximations, Start Procedure, Exact Penalty Function.

1. Introduction

The nonlinear programming problem to be considered in this paper is defined as

(NLP)    min_x  f(x),                                    (1.1)

subject to

    cᵢ(x) = 0    (i ∈ I₁),                               (1.2)
    cᵢ(x) ≥ 0    (i ∈ I₂),                               (1.3)

where f, cᵢ : Rⁿ → R and I₁ and I₂ are the index sets of equality and inequality constraints, respectively. Powell [5] defined an associated quadratic programming problem with an approximation x to the solution as follows:

(QP(x, H))    min_p  ∇f(x)ᵀp + ½pᵀHp,                    (1.4)

subject to

    cᵢ(x) + ∇cᵢ(x)ᵀp = 0    (i ∈ I₁),                    (1.5)
    cᵢ(x) + ∇cᵢ(x)ᵀp ≥ 0    (i ∈ I₂),                    (1.6)

where the n × n symmetric matrix H is a positive definite approximation to the Hessian of the Lagrangian

    L(x, λ) = f(x) + λᵀc(x).                              (1.7)
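With equality constraints only, QP(x, H) reduces to a linear KKT system in (p, λ). The following numpy sketch is not from the paper; the data g = ∇f(x), A (rows ∇cᵢ(x)ᵀ) and c = c(x) are invented for illustration:

```python
import numpy as np

def qp_step(H, g, A, c):
    """Solve min gᵀp + ½pᵀHp  s.t.  c + A p = 0  via the KKT system
       [H  Aᵀ][p]   [-g]
       [A  0 ][λ] = [-c],   λ being the Lagrange multipliers."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Illustrative data: H positive definite, one equality constraint.
H = np.eye(2)
g = np.array([1.0, 0.0])          # ∇f(x)
A = np.array([[1.0, 1.0]])        # ∇c₁(x)ᵀ
c = np.array([-2.0])              # c₁(x)

p, lam = qp_step(H, g, A, c)
assert np.allclose(A @ p + c, 0.0)              # linearized constraint (1.5)
assert np.allclose(H @ p + g + A.T @ lam, 0.0)  # stationarity
assert np.allclose(p, [0.5, 1.5])
```

Successive QP methods repeat this step, updating x ← x + αp and revising H.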

(1.5) and (1.6) are linear approximations to the constraints (1.2) and (1.3). If they are inconsistent, Powell [5] introduces an extra variable s into the quadratic program and replaces the constraints (1.5) and (1.6) by the conditions

    cᵢ(x)s + ∇cᵢ(x)ᵀp = 0      (i ∈ I₁),                 (1.8)
    cᵢ(x)sᵢ + ∇cᵢ(x)ᵀp ≥ 0     (i ∈ I₂),                 (1.9)

where sᵢ has the value

    sᵢ = 1  if cᵢ(x) > 0,    sᵢ = s  if cᵢ(x) ≤ 0,       (1.10)

the extra variable s being restricted to 0 ≤ s ≤ 1 and driven toward 1 through the objective function ((1.11), (1.12)).

2. An example

Consider a point x at which an equality constraint takes the value c₁(x) = 19 with ∇c₁(x) = (−20, −20)ᵀ, and two inequality constraints take the values c₂(x) = c₃(x) = −11 with ∇c₂(x) = (1, 0)ᵀ and ∇c₃(x) = (0, 1)ᵀ, so that c₁(x) > 0. It is easy to see that the system (1.5)–(1.6) is inconsistent: the equality requires p₁ + p₂ = 19/20, while the inequalities require p₁ ≥ 11 and p₂ ≥ 11. The modified system corresponding to (1.8) and (1.9) is:

    19s − 20p₁ − 20p₂ = 0,
    −11s + p₁ ≥ 0,
    −11s + p₂ ≥ 0.

The system has the only solution s = 0, p₁ = 0, p₂ = 0, and the original problem will thus be assumed inconsistent even though it is feasible. This simple example shows that the modified approximation does not always work well and suggests the necessity for other ones.

The main cause of such troubles is that the freedom of p becomes very restricted when we approximate the violated equality or inequality constraints by (1.8) or (1.9). A natural way to avoid this problem is to relax the constraints, in our case by introducing extra slack variables as in linear programming. For example, we relax (1.5) by introducing slack variables tᵢ¹ and tᵢ² as follows:

    cᵢ(x) + ∇cᵢ(x)ᵀp − tᵢ¹ + tᵢ² = 0    (tᵢ¹, tᵢ² ≥ 0),

where tᵢ¹ + tᵢ² should be made as small as possible.

Above all, the following are important factors to be considered in designing the modifications:
(1) The new direction p generated by the modified QP subproblem should be a descent direction of an exact penalty function of the nonlinear programming problem.
(2) Since bad starting values of x often cause the inconsistency, we should design the modifications in such a way that it is easy to go back to the original successive QP method as soon as (1.5) and (1.6) recover consistency at a suitable stage.

In the following sections, we propose two modified QP subproblems designed according to the above-mentioned plan. The first lays emphasis on the relaxation of the violated nonlinear equality constraints, while the second is more general in relaxing the constraints but needs the introduction of more slack variables than the first.


3. The first modified QP subproblem

As was mentioned above, the first modification puts emphasis on the resolution of the inconsistency caused by the nonlinear equality constraints. In the case that the linear approximations (1.5) and (1.6) to the constraints (1.2) and (1.3) are found to be inconsistent, we revise them in the following way:

(MQP1(x, H))    min_{p,t¹,t²,s}  ∇f(x)ᵀp + ½pᵀHp + M₁ Σᵢ (tᵢ¹ + tᵢ²) − M₂s,    (3.1)

subject to

    cᵢ(x) + ∇cᵢ(x)ᵀp − tᵢ¹ + tᵢ² = 0    (i ∈ I₁₁ ∪ I₁₃),    (3.2)
    ∇cᵢ(x)ᵀp = 0                         (i ∈ I₁₂),           (3.3)
    cᵢ(x) + ∇cᵢ(x)ᵀp ≥ 0                 (i ∈ I₂₁ ∪ I₂₂),     (3.4)
    cᵢ(x)s + ∇cᵢ(x)ᵀp ≥ 0                (i ∈ I₂₃),           (3.5)
    0 ≤ s ≤ 1,    tᵢ¹, tᵢ² ≥ 0           (i ∈ I₁₁ ∪ I₁₃),     (3.6)

where M₁ and M₂ are sufficiently large positive numbers, and

    Iₖ₁ = {i | i ∈ Iₖ, cᵢ(x) > 0},
    Iₖ₂ = {i | i ∈ Iₖ, cᵢ(x) = 0},                             (3.7)
    Iₖ₃ = {i | i ∈ Iₖ, cᵢ(x) < 0}    (k = 1, 2).

It is easy to see that MQP1(x, H) is feasible if the system (1.8)–(1.12) is consistent. In this sense, MQP1(x, H) has a wider range of applications to solve nonlinear programming problems. Also, since it has an obvious solution p = 0, s = 0, tᵢ¹ = max{0, cᵢ(x)} and tᵢ² = max{0, −cᵢ(x)}, it is always consistent.

An algorithm based on this modification goes as follows. Let a Kuhn–Tucker solution to MQP1(x, H) be p, t¹, t² and s. Then, if p = 0, the constraints (1.2) and (1.3) are assumed to be inconsistent, or the approximation x is not well suited for the iteration. Otherwise, the direction p will be used to find the next approximation x̄ = x + αp with a step length 0 < α ≤ 1.
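To see how a relaxed subproblem of the MQP1 type behaves on the earlier example, one can solve it directly as a smooth QP. The sketch below uses the numbers reconstructed above (c₁ = 19, ∇c₁ = (−20, −20)ᵀ; c₂ = c₃ = −11 with gradients e₁, e₂) and, purely for illustration, takes H = I, ∇f(x) = 0 and M₁ = M₂ = 100; unlike Powell's system, the subproblem now returns a nonzero direction p:

```python
import numpy as np
from scipy.optimize import minimize

M1, M2 = 100.0, 100.0   # "sufficiently large" penalty weights (illustrative)

# Variables: x = (p1, p2, s, t1, t2)
def objective(x):
    p1, p2, s, t1, t2 = x
    # (3.1) with H = I and ∇f(x) = 0 for simplicity
    return 0.5 * (p1**2 + p2**2) + M1 * (t1 + t2) - M2 * s

constraints = [
    # (3.2): 19 - 20 p1 - 20 p2 - t1 + t2 = 0   (equality, c1(x) = 19 > 0)
    {"type": "eq", "fun": lambda x: 19 - 20*x[0] - 20*x[1] - x[3] + x[4]},
    # (3.5): -11 s + p_i >= 0   (violated inequalities, i in I23)
    {"type": "ineq", "fun": lambda x: x[0] - 11*x[2]},
    {"type": "ineq", "fun": lambda x: x[1] - 11*x[2]},
]
bounds = [(None, None), (None, None), (0, 1), (0, None), (0, None)]  # (3.6)

res = minimize(objective, np.zeros(5), method="SLSQP",
               bounds=bounds, constraints=constraints)
p1, p2, s, t1, t2 = res.x
assert res.success
assert p1 > 0.4 and p2 > 0.4   # nonzero direction, p ≈ (0.475, 0.475)
assert abs(19 - 20*p1 - 20*p2 - t1 + t2) < 1e-5
```

The optimizer settles at p₁ = p₂ = 19/40 with s = 19/440 and t = 0, i.e. the relaxation yields a useful step toward feasibility where the unrelaxed system forced p = 0.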

For a Kuhn–Tucker solution of MQP2(x, H), with Lagrange multipliers uᵢ (i ∈ I₁) and wᵢ (i ∈ I₂) of the constraints, the following relations hold:

(i) if i ∈ I₁₁ ∪ I₁₂ and ∇cᵢ(x)ᵀp > 0, then

    tᵢ¹ = cᵢ(x) + ∇cᵢ(x)ᵀp > 0,    tᵢ² = 0    and    uᵢ = −M₁,

(ii) if i ∈ I₁₃ ∪ I₁₂ and ∇cᵢ(x)ᵀp < 0, then

    tᵢ² = −cᵢ(x) − ∇cᵢ(x)ᵀp > 0,    tᵢ¹ = 0    and    uᵢ = M₁,

(iii) if i ∈ I₂₁ ∪ I₂₂ and ∇cᵢ(x)ᵀp > 0, then

    sᵢ = 0    and    wᵢ = 0,

and (iv) if i ∈ I₂₃ ∪ I₂₂ and ∇cᵢ(x)ᵀp < 0, then

    sᵢ = −cᵢ(x) − ∇cᵢ(x)ᵀp > 0    and    wᵢ = M₂.

From the above relations, (4.6) and (4.7), we obtain Dₚφ(x) < 0, provided that M₁ ≥ |uᵢ| (i ∈ I₁) and M₂ ≥ wᵢ (i ∈ I₂).

Remark 4.1. Biggs [1] proposes a recursive QP subproblem for which the constraints are of the form:

    cᵢ(x) + ∇cᵢ(x)ᵀp + r̄ûᵢ = 0    (i ∈ I₁),                (4.10)
    cᵢ(x) + ∇cᵢ(x)ᵀp + r̄ŵᵢ ≥ 0    (i ∈ I₂),                (4.11)

where the ûᵢ and ŵᵢ are estimated Lagrange multipliers and r̄ is a penalty parameter. It is shown that ûᵢ and ŵᵢ can be defined in such a way that these constraints are always consistent for positive r̄. (4.10) and (4.11) are linearized and perturbed forms of the original constraints which assure consistency, while MQP2(x, H) introduces the perturbations tᵢ¹, tᵢ² and sᵢ into the solution only when they are necessary. Let a feasible solution of (4.10) and (4.11) be p_B. Then p = p_B, tᵢ¹ = max{0, −r̄ûᵢ}, tᵢ² = max{0, r̄ûᵢ} and sᵢ = r̄ŵᵢ constitute a feasible solution of MQP2(x, H). Thus, MQP2(x, H) contains the optimal direction of Biggs's subproblem. It is very interesting that both formulations resemble each other, although they come from different origins. Comparison of the two methods with respect to line search strategies, updating of the matrix H, computational efficiency and robustness against bad starting points is a subject for future research and experimentation.
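The mapping from a Biggs-feasible point to an MQP2-feasible point stated in the remark can be checked mechanically. The sketch below assumes MQP2 constraints of the form cᵢ(x) + ∇cᵢ(x)ᵀp − tᵢ¹ + tᵢ² = 0 for equalities and cᵢ(x) + ∇cᵢ(x)ᵀp + sᵢ ≥ 0 for inequalities, with all data (c, gradients, r̄, û, ŵ) invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: 2 equalities, 2 inequalities, 3 variables.
c_eq, A_eq = rng.normal(size=2), rng.normal(size=(2, 3))
c_in, A_in = rng.normal(size=2), rng.normal(size=(2, 3))
r = 0.5                               # penalty parameter r̄ > 0
w_hat = np.abs(rng.normal(size=2))    # estimated multipliers ŵ_i ≥ 0

# Choose p_B and û so that Biggs's constraints (4.10)-(4.11) hold:
p_B = rng.normal(size=3)
u_hat = -(c_eq + A_eq @ p_B) / r          # forces (4.10): c + Ap + r̄û = 0
resid = c_in + A_in @ p_B + r * w_hat     # (4.11) residual, must be >= 0
c_in = c_in + np.maximum(0.0, -resid)     # shift data so (4.11) holds

# Tone's mapping into MQP2:
t1 = np.maximum(0.0, -r * u_hat)
t2 = np.maximum(0.0, r * u_hat)
s = r * w_hat

# MQP2 feasibility under the assumed constraint forms:
assert np.allclose(c_eq + A_eq @ p_B - t1 + t2, 0.0)
assert np.all(c_in + A_in @ p_B + s >= -1e-12)
assert np.all(t1 >= 0) and np.all(t2 >= 0) and np.all(s >= 0)
```

Since t₁ − t₂ = −r̄û whenever t₁t₂ = 0, the equality residual cancels exactly, which is why the mapping works for any sign pattern of û.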

Acknowledgment

The author wishes to thank two referees for suggesting several significant improvements in the paper.

References

[1] M.C. Biggs, "Recursive quadratic programming methods for nonlinear constraints", in: M.J.D. Powell, ed., Nonlinear optimization 1981 (Academic Press, New York, 1982) pp. 213-221.
[2] R.M. Chamberlain, C. Lemaréchal, H.C. Pedersen and M.J.D. Powell, "The watchdog technique for forcing convergence in algorithms for constrained optimization", Report DAMTP 80/NA9, Department of Applied Mathematics and Theoretical Physics, University of Cambridge (Cambridge, 1980).
[3] V.F. Dem'yanov and V.N. Malozemov, Introduction to minimax (John Wiley and Sons, New York, 1974).
[4] S.P. Han, "A globally convergent method for nonlinear programming", Journal of Optimization Theory and Applications 22 (1977) 297-309.
[5] M.J.D. Powell, "A fast algorithm for nonlinearly constrained optimization calculations", Proceedings of the 1977 Dundee Biennial Conference on Numerical Analysis (Springer-Verlag, Berlin, 1978).
