This article was downloaded by: [UZH Hauptbibliothek / Zentralbibliothek Zürich] On: 10 July 2014, At: 10:31 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
Optimization: A Journal of Mathematical Programming and Operations Research Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gopt20
b-Subgradients of the optimal value function in nonlinear programming

A. Jofre^a & L. Thibault^b

^a Departamento de Matemáticas Aplicadas, Universidad de Chile, Santiago, Chile
^b Laboratoire de Mathématiques Appliquées, Université de Pau, URA 1204 – CNRS, Avenue de l'Université, 64000 Pau, France

Published online: 28 Nov 2010.
To cite this article: A. Jofre & L. Thibault (1992) b-Subgradients of the optimal value function in nonlinear programming, Optimization: A Journal of Mathematical Programming and Operations Research, 26:3-4, 153-163 To link to this article: http://dx.doi.org/10.1080/02331939208843850
Optimization, 1992, Vol. 26, pp. 153-163
Reprints available directly from the publisher
Photocopying permitted by license only
© 1992 Gordon and Breach Science Publishers S.A.
Printed in the United Kingdom
b-SUBGRADIENTS OF THE OPTIMAL VALUE FUNCTION IN NONLINEAR PROGRAMMING
A. JOFRE¹ and L. THIBAULT²

¹Departamento de Matemáticas Aplicadas, Universidad de Chile, Santiago, Chile
²Université de Pau, Laboratoire de Mathématiques Appliquées, URA 1204 – CNRS, Avenue de l'Université, 64000 Pau, France

(Received 16 October 1991; in final form 13 July 1992)
KEY WORDS: Nonsmooth optimization, parametric optimization, b-multiplier sets, normal cone, calculus rules, proximal normal formulae, b-subgradient formulae, optimality conditions, stability property.

Mathematics Subject Classification 1991: Primary: 90C30; Secondary: 90C31
1. INTRODUCTION

In this paper we study the following parametric nonsmooth optimization problem with parameter vector u ∈ R^p:

(P_u)    minimize f(x) subject to g(x) + u ∈ −K, x ∈ C,

where f : E → R and g : E → R^p are locally Lipschitzian mappings, C is a nonempty closed subset of a Banach space E, and K is a convex cone in R^p. Optimization problems with a finite number of inequality and equality constraints and with right-hand perturbations belong to the class of problems (P_u). With each parameter vector u ∈ R^p we can associate the global optimal value p(u) ∈ R ∪ {−∞, +∞} of (P_u), defined as

p(u) := inf{f(x) : g(x) + u ∈ −K and x ∈ C},

with the convention p(u) = +∞ when (P_u) is infeasible. The set of optimal solutions will be denoted by S(u).

It is well known that the derivative of the function p (whenever it exists), or its generalized derivatives, play a central role in the basic questions of the existence and the interpretation of Lagrange multipliers for the problem (P_u). In the last two decades many papers (see the references in [6] and [20]) have been devoted to estimates of generalized derivatives of p. This study is now known as the theory of sensitivity analysis (see [6]) and has many other applications in mathematical programming (optimality conditions, duality results, algorithms, ...) and in mathematical economics.

Using Ekeland's variational principle, Clarke [4] proved the following necessary condition for optimality when x̄ ∈ S(0) = S and K = R^q_+ × {0}: for L > 0 sufficiently large there exist β ≥ 0 and u* ∈ K⁺ = {k* : ⟨k*, k⟩ ≥ 0 for all k ∈ K},
not both zero, such that ⟨u*, g(x̄)⟩ = 0 and

0 ∈ ∂_c(βf + ⟨u*, g(·)⟩ + ‖(u*, β)‖ L d_C)(x̄).    (1.1)

This relation ensures that

0 ∈ ∂_c(βf + ⟨u*, g(·)⟩ + ψ_C)(x̄).    (1.2)

Here ∂_c(F(·))(x̄) is the Clarke generalized gradient of a real-valued function F at x̄ [3], [4], d_C(y) := inf{‖y − x‖ : x ∈ C} is the distance function to C, and ψ_C is the indicator function of C: ψ_C(x) = 0 if x ∈ C and ψ_C(x) = +∞ otherwise. Another proof of condition (1.1) was given earlier by Hiriart-Urruty, see [9]. The necessary condition (1.2) was used by Rockafellar [19], [20], [21] to prove that the "normal" and "abnormal" multiplier sets, defined as

M¹_L(S) = ⋃_{x̄∈S} {u* : u* satisfies condition (1.1) with β = 1 at x̄}    (1.3)

and

M⁰_L(S) = ⋃_{x̄∈S} {u* : u* satisfies condition (1.1) with β = 0 at x̄}    (1.4)

respectively, allow one to give important inner and outer estimates of the set of Clarke subgradients ∂_c p(0) and singular subgradients ∂_c^∞ p(0) of the global optimal value function p at 0:

∂_c p(0) = cl co{M¹_L(S) ∩ ∂_c p(0) + M⁰_L(S) ∩ ∂_c^∞ p(0)}
∂_c^∞ p(0) = cl co{M⁰_L(S) ∩ ∂_c^∞ p(0)}    (1.5)

(under some conditions). A proof of this result is based on the proximal subgradient formula of Rockafellar [19]. Another proof of the estimates above has been given by Clarke [4], in which the proximal normal formula (Clarke [3]) is the key ingredient.

It is known that, in some cases, the Clarke generalized gradient is too big (for example, there is a locally Lipschitzian function h : R → R such that ∂_c h(x) = [−1, 1] for all x ∈ R). The b-subgradient introduced in [22] has the advantage of being a singleton at any point where the function is Fréchet differentiable. Our aim in this paper is to give estimates similar to (1.5) for the set of b-subgradients ∂_b p(0) and singular b-subgradients ∂_b^∞ p(0) in terms of "b-multiplier sets" M¹_{b,L}(S) and M⁰_{b,L}(S). These subgradient sets and the related normal cone N_b are smaller than ∂_c p(0), ∂_c^∞ p(0) and the Clarke normal cone N_c respectively. Moreover, a substantial number of calculus rules [15], [22], proximal normal and b-subgradient formulae [10], [22], [23], a Fréchet (ε, r)-normal formula [10], and necessary optimality conditions [22] have been proved using these concepts.

Our proof follows the ideas of the works of Rockafellar and Clarke described above, with one major difference: the set-valued map ∂_b f(·), even if f is locally Lipschitzian, is not upper semicontinuous, which rules out some key arguments used in Clarke's [4] and Rockafellar's [18], [19], [21] works. It is nevertheless possible to overcome this difficulty, using simultaneously the proximal normal and (ε, r)-Fréchet normal formulae, when the solution set satisfies a certain stability property.
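Before turning to the b-subgradient machinery, the multiplier interpretation of the optimal value function can be made concrete with a brute-force sketch on a smooth toy instance of (P_u) (our choice, not from the paper): f(x) = (x − 1)², g(x) = x, K = R₊, C = R. The constraint reads x ≤ −u, so p(u) = (1 + u)² near u = 0, and the forward difference quotient of p at 0 recovers the KKT multiplier u* = 2.

```python
import numpy as np

# Illustrative instance (our choice, not from the paper):
#   minimize f(x) = (x - 1)^2  subject to  g(x) + u <= 0  with g(x) = x, C = R.
# The feasible set is x <= -u, so for u near 0 the optimum sits on the
# boundary x = -u and p(u) = (1 + u)^2.  The KKT multiplier at u = 0 is
# u* = 2, which should match the derivative p'(0).

def p(u, grid=np.linspace(-3.0, 3.0, 600001)):
    """Brute-force optimal value p(u) = inf{ f(x) : g(x) + u <= 0 }."""
    feasible = grid[grid + u <= 0.0]
    return np.min((feasible - 1.0) ** 2)

t = 1e-3
slope = (p(t) - p(0.0)) / t           # forward difference quotient at u = 0
print(p(0.0), slope)                  # expect p(0) ~ 1 and slope ~ 2 = multiplier
```

The grid resolution (1e-5) bounds the error of both the optimal value and the difference quotient; any off-the-shelf minimizer would do equally well for this smooth instance.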
2. CONNECTIONS BETWEEN LAGRANGE MULTIPLIERS AND THE SUBGRADIENTS ∂_b p AND ∂_b^∞ p

In this section E will be a real Banach space, if nothing more specific is said. As we mentioned in the introduction, our arguments for proving the estimates are based on the proximal b-normal formula proved by Treiman [22] in the finite-dimensional case (and extended to Hilbert spaces in [10]) and on the (ε, r)-Fréchet normal formula proved by Jofré and Thibault [10] in Hilbert spaces.

Let N_b(A; x) be the b-normal cone, that is, the negative polar of the b-tangent cone T_b(A; x) to a subset A of E at a point x ∈ A, where T_b(A; x) is the set of all y ∈ E such that for all x_n →_A x and t_n ↓ 0 with t_n^{-1}(x_n − x) bounded, there exists y_n → y with x_n + t_n y_n ∈ A for all n ∈ N. (Here x_n →_A x means x_n → x and x_n ∈ A.) T_b(A; x) is a closed convex cone which always contains the Clarke tangent cone.

Let Prox_r(A; x) be the set of r-proximal normal vectors to A at x ∈ A and let F_{ε,r}(A; x) be the set of (ε, r)-Fréchet normal vectors to A at x ∈ A, which are defined as

Prox_r(A; x) = {v* ∈ E* : ⟨v*, x′ − x⟩ ≤ (1/2r)‖x′ − x‖² for all x′ ∈ A}    (2.1)

(if E is a Hilbert space) and

F_{ε,r}(A; x) = {v* ∈ E* : ⟨v*, x′ − x⟩ ≤ ε‖x′ − x‖ for all x′ ∈ (x + εrB) ∩ A}    (2.2)

respectively, where B is the closed unit ball of E and E* is the topological dual of E. Then the following formulae have been proved in [10] when E is a Hilbert space:

N_b(A; x) = cl co LP_b(A; x)    (2.3)

where

LP_b(A; x) = {v* ∈ E* : there are v_n* →^w v*, x_n →_A x and r_n ↓ 0 such that v_n* ∈ Prox_{r_n}(A; x_n) and r_n^{-1}(x_n − x) is bounded}

(called the proximal normal formula for N_b) and

N_b(A; x) = cl co ⋂_{ε>0} {v* ∈ E* : there are v_n* →^w v*, x_n →_A x and r_n ↓ 0 such that v_n* ∈ F_{ε,r_n}(A; x_n) and r_n^{-1}(x_n − x) is bounded}    (2.4)

(called the (ε, r)-Fréchet normal formula for N_b), where cl co denotes the closed convex hull and v_n* →^w v* means that (v_n*) weakly converges to v*. In the proof of our main result we use the following type of Fréchet subgradients, which are analogous to the Fréchet (ε, r)-normals.
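The defining inequality (2.1) is easy to probe numerically in R². A hedged sketch (our construction, not from the paper): for A = epi|·| at the origin, a vector (v, −β) is a proximal normal precisely when |v| ≤ β, and a grid check distinguishes a normal from a non-normal candidate.

```python
import numpy as np

# Sketch (our construction): test the r-proximal-normal inequality (2.1)
#   <v*, a - x0> <= (1/2r) ||a - x0||^2   for all a in A
# on a sample of A = epi|.| = {(x, y) : y >= |x|}, at x0 = (0, 0).
# For this set, (v, -beta) is a proximal normal at the origin iff |v| <= beta.

def is_r_proximal_normal(vstar, r, samples):
    lhs = samples @ vstar                       # <v*, a - x0> with x0 = 0
    rhs = (samples ** 2).sum(axis=1) / (2 * r)
    return bool(np.all(lhs <= rhs + 1e-12))

xs = np.linspace(-1.0, 1.0, 2001)
ys = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(xs, ys)
mask = Y >= np.abs(X)                           # keep only points of the epigraph
samples = np.column_stack([X[mask], Y[mask]])

ok  = is_r_proximal_normal(np.array([1.0, -1.0]), r=0.5, samples=samples)
bad = is_r_proximal_normal(np.array([2.0, -1.0]), r=0.5, samples=samples)
print(ok, bad)   # (1,-1) passes; (2,-1) violates the inequality near the origin
```

The failure of (2, −1) shows up only at points of A close to the origin, which is exactly the local character the quadratic right-hand side of (2.1) encodes.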
Definition 2.1: Let ε and r be two positive real numbers. An element x* ∈ E* is said to be a Fréchet (ε, r)-subgradient of a function f : E → R ∪ {−∞, +∞} at a point x ∈ E where |f(x)| < ∞ if

f(x′) − f(x) ≥ ⟨x*, x′ − x⟩ − ε‖x′ − x‖ for all x′ ∈ x + εrB.

We denote the set of all Fréchet (ε, r)-subgradients of f at x by δ_{ε,r} f(x).

Remark 2.2: For any ε′ ≥ ε and r′ ≤ r we have δ_{ε,r} f(x) ⊂ δ_{ε′,r′} f(x), and when f is the indicator function of C we get δ_{ε,r} f(x) = F_{ε,r}(C; x).

We shall use the following type of limit relative to the Fréchet (ε, r)-subgradients:

δ_b f(x̄) := ⋂_{ε>0} w-lim sup_{x →_f x̄, r↓0} δ_{ε,r} f(x),

where the sequential w-lim sup is the set of all x* ∈ E* for which there exist x_n →_f x̄ and r_n ↓ 0 with r_n^{-1}‖x_n − x̄‖ bounded and x_n* ∈ δ_{ε,r_n} f(x_n) such that x_n* →^w x* (here x_n →_f x̄ means x_n → x̄ and f(x_n) → f(x̄)).

The relationship between δ_b f(x̄) and ∂_b f(x̄) is given in the next proposition. We recall that the b-subgradient set ∂_b f(x) of a function f : E → R ∪ {−∞, +∞} at x, |f(x)| < ∞, is defined by

∂_b f(x) := {x* ∈ E* : ⟨x*, h⟩ ≤ f^b(x; h) for all h ∈ E},

where

f^b(x; h) := inf{β : (h, β) ∈ T_b(epi f; (x, f(x)))}

and epi f is the epigraph of the function f. Furthermore, when f is Lipschitzian in a neighbourhood of x we observe that

f^b(x; h) = sup_{s>0} lim sup_{x′→x, t↓0, ‖x′−x‖≤st} t^{-1}[f(x′ + th) − f(x′)].    (2.7)

It is also possible to define ∂_b f(x) in terms of the normal cone N_b; in this case

∂_b f(x) = {x* ∈ E* : (x*, −1) ∈ N_b(epi f; (x, f(x)))}.
Starting from N_b we can also introduce, following Rockafellar [18], [20], the singular subgradient set

∂_b^∞ f(x) := {x* ∈ E* : (x*, 0) ∈ N_b(epi f; (x, f(x)))},

which is a cone that always contains the point zero.
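The inequality in Definition 2.1 can likewise be tested on a grid. A hedged sketch (our construction, not from the paper): for f = |·| at x = 0 the exact subdifferential is [−1, 1], and the (ε, r)-relaxation enlarges it to [−1 − ε, 1 + ε].

```python
import numpy as np

# Sketch (our construction): check the Frechet (eps, r)-subgradient inequality
#   f(x') - f(x) >= <x*, x' - x> - eps ||x' - x||   for all x' in x + eps*r*B
# for f(x) = |x| at x = 0, where the exact subdifferential is [-1, 1] and the
# (eps, r)-relaxation enlarges it to [-1 - eps, 1 + eps].

def is_eps_r_subgradient(f, x, xstar, eps, r, n=100001):
    xp = x + eps * r * np.linspace(-1.0, 1.0, n)   # sample of the ball x + eps*r*B
    slack = f(xp) - f(x) - xstar * (xp - x) + eps * np.abs(xp - x)
    return bool(np.all(slack >= -1e-12))

print(is_eps_r_subgradient(np.abs, 0.0, 1.05, eps=0.1, r=1.0))  # inside [-1.1, 1.1] -> True
print(is_eps_r_subgradient(np.abs, 0.0, 1.50, eps=0.1, r=1.0))  # outside -> False
```

This also illustrates Remark 2.2: enlarging ε or shrinking r only weakens the inequality, so the candidate sets grow accordingly.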
Proposition 2.3: If f : E → R is Lipschitzian in a neighbourhood of x, then δ_b f(x) ⊂ ∂_b f(x).

Proof: It suffices to prove that if x* ∈ δ_b f(x) and h ∈ E with ‖h‖ = 1, then ⟨x*, h⟩ ≤ f^b(x; h). Let ε > 0 be given. Then there exist x_n → x and r_n ↓ 0 with r_n^{-1}‖x_n − x‖ bounded and x_n* ∈ δ_{ε,r_n} f(x_n), that is,

f(x′) − f(x_n) ≥ ⟨x_n*, x′ − x_n⟩ − ε‖x′ − x_n‖ for all x′ ∈ x_n + εr_n B;

in particular, for x′ = x_n + εr_n h one has

⟨x*, h⟩ ≤ lim sup_{n→∞} (εr_n)^{-1}[f(x_n + εr_n h) − f(x_n)] + ε ≤ f^b(x; h) + ε,

the last inequality following from (2.7) since r_n^{-1}‖x_n − x‖ is bounded; and since ε > 0 is arbitrary we conclude the result. ∎
Remark 2.4: A more general result describing ∂_b f(x) in terms of b-Fréchet and singular b-Fréchet subgradients will appear elsewhere.

An important ingredient in our theorem is the following basic hypothesis:

(H) there exist a selection s of the solution set-valued map S on some closed ball αB of R^p and a real number λ > 0 such that ‖s(u) − s(0)‖ ≤ λ‖u‖ for all u ∈ αB.

Remark 2.5: a) It is easy to see that under hypothesis (H) the optimal value function p satisfies a relation similar to (H) at 0. Indeed, if l denotes a Lipschitz constant for f near x̄ = s(0), then for u in some neighbourhood of zero we have

|p(u) − p(0)| = |f(s(u)) − f(s(0))| ≤ lλ‖u‖.

b) Conditions similar to (H) have been used in the study of the directional derivative and of the Clarke generalized gradients of the optimal value function (see for example Hiriart-Urruty [8], Pomerol [17], Gauvin and Janin [7]). Condition (H) implies the tameness (see Rockafellar [20], Proposition 10) of (P_0) with compact set A = {x̄}. Sufficient conditions for (H) have been given by Aubin [1], Malanowski [13], Cornet and Vial [5], and Mangasarian and Shiau [14].
c) Obviously (H) is satisfied if there exist a neighbourhood U of 0 and a real number β > 0 such that
i) S(u) ≠ ∅ and p(u) − p(0) ≤ β‖u‖ for all u ∈ U, and
ii) the "positive calm condition" is satisfied for some selection s of S.

As we pointed out in the introduction, our aim in this paper is to give inner and outer estimates for ∂_b p(0) in terms of Lagrange multiplier vectors. For a given solution x̄ ∈ S of (P_0), a positive number L > 0 and β ≥ 0, we say that u* is a β-Lagrange multiplier vector at x̄ if
i) u* ∈ K⁺ and ⟨u*, g(x̄)⟩ = 0;
ii) 0 ∈ ∂_b(βf + ⟨u*, g(·)⟩ + ‖(u*, β)‖ L d_C)(x̄).

We denote by M¹_{b,L}(x̄) = {u* : u* is a 1-Lagrange multiplier at x̄} and M⁰_{b,L}(x̄) = {u* : u* is a 0-Lagrange multiplier at x̄} the normal and abnormal multiplier sets with penalization coefficient L associated with x̄ ∈ S, respectively. We put

M¹_{b,L}(S) = ⋃_{x̄∈S} M¹_{b,L}(x̄) and M⁰_{b,L}(S) = ⋃_{x̄∈S} M⁰_{b,L}(x̄).

Remark 2.6: We observe that the following inclusions hold: M¹_{b,L}(x̄) ⊂ M¹_L(x̄) and M⁰_{b,L}(x̄) ⊂ M⁰_L(x̄), where M¹_L(x̄) and M⁰_L(x̄) are the normal and abnormal Clarke multiplier sets recalled in the introduction.

Now we establish the main theorem of the paper. We recall that, by assumption, the mappings f and g are γ̄-Lipschitzian in an open neighbourhood of a point x̄ = s(0) ∈ S for some γ̄ > 0. We put L̄ := max(γ̄, λ).
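Both hypothesis (H) and the β-Lagrange multiplier conditions can be checked on a small smooth toy instance (our choice, not from the paper; C = R so d_C ≡ 0, and the b-subgradient in ii) reduces to an ordinary derivative): minimize f(x) = (x − 1)² subject to g(x) + u ≤ 0 with g(x) = x. The solution map is s(u) = min(−u, 1), so (H) holds with λ = 1, and at u = 0 the vector u* = 2 is a 1-Lagrange multiplier at x̄ = 0.

```python
import numpy as np

# Toy instance (our choice, not the paper's): minimize f(x) = (x-1)^2
# subject to g(x) + u <= 0 with g(x) = x, K = R_+, C = R (so d_C = 0).
# Solution map: s(u) = min(-u, 1); at u = 0 we have xbar = 0 and u* = 2.

def f(x): return (x - 1.0) ** 2
def g(x): return x
def s(u): return min(-u, 1.0)          # closed-form argmin of f over {x <= -u}

# Hypothesis (H): |s(u) - s(0)| <= lambda * |u| on a ball around 0.
us = np.concatenate([np.linspace(-1.0, -0.01, 50), np.linspace(0.01, 1.0, 50)])
lam = max(abs(s(u) - s(0.0)) / abs(u) for u in us)

# 1-Lagrange multiplier conditions at xbar = 0 with u* = 2:
# i)  u* in K+ and <u*, g(xbar)> = 0 (complementary slackness);
# ii) stationarity of f + u*.g at xbar, checked by a central difference.
xbar, ustar, h = 0.0, 2.0, 1e-6
lagr = lambda x: f(x) + ustar * g(x)
feasibility  = ustar >= 0.0 and abs(ustar * g(xbar)) < 1e-12
stationarity = abs((lagr(xbar + h) - lagr(xbar - h)) / (2 * h)) < 1e-6

print(lam, feasibility, stationarity)
```

Here λ = 1 works exactly because the active constraint pins the solution to the boundary x = −u; in degenerate instances the selection s may fail to be calm and (H) genuinely restricts the class of problems.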
Theorem 2.7: If hypothesis (H) is satisfied and if p is lower semicontinuous in a neighbourhood of 0, then for all L > L̄

∂_b p(0) ⊂ cl co{M¹_{b,L}(x̄) ∩ ∂_b p(0) + M⁰_{b,L}(x̄) ∩ ∂_b^∞ p(0)}.

Furthermore, ∂_b^∞ p(0) ⊂ cl co{M⁰_{b,L}(x̄) ∩ ∂_b^∞ p(0)} whenever M⁰_{b,L}(x̄) is pointed.

Proof: The proof is based on two lemmas, which are stated and proved below. From a result of Rockafellar [19] (see also Loewen [11]), it is sufficient to prove that

N_b(epi p; (0, p(0))) = cl co(D + D^∞),
where

D = {α(w*, −1) : α > 0, w* ∈ M¹_{b,L}(x̄) ∩ ∂_b p(0)}

and

D^∞ = {(w*, 0) : w* ∈ M⁰_{b,L}(x̄) ∩ ∂_b^∞ p(0)}.

Since the inclusion cl co(D + D^∞) ⊂ N_b(epi p; (0, p(0))) is obviously true, we only prove the converse inclusion. From formula (2.3) we see that it is enough to show that

LP_b(epi p; (0, p(0))) ⊂ D + D^∞.

Let (u*, −β) ∈ LP_b(epi p; (0, p(0))). If β > 0, then by Lemma 2.9 below and the definition of ∂_b p(0) we obtain β^{-1}u* ∈ M¹_{b,L}(x̄) ∩ ∂_b p(0). Analogously we deduce that u* ∈ M⁰_{b,L}(x̄) ∩ ∂_b^∞ p(0) if β = 0. Hence in both cases (u*, −β) ∈ D + D^∞, which allows us to conclude the result. ∎

The following lemma is inspired by Clarke [4, p. 246].
Lemma 2.8: Let (u*, −β) be a point in Prox_r(epi p; (u, α)) with r < 1, let x be a solution of the problem (P_u) and let γ be a Lipschitz constant for f and g around x. Then
i) u* ∈ K⁺, β ≥ 0 and ⟨u*, g(x) + u⟩ = 0;
ii) for all γ′ > max{γ, γ̄} the point x is a local minimum of the function

βf(·) + ⟨u*, g(·)⟩ + ‖(u*, β)‖ γ′ d_C(·)

over a ball x + ηrB, where η is a real number in ]0, 1[ such that f and g are γ-Lipschitzian on x + ηB and (‖(u*, −β)‖ + γ′η)γ < ‖(u*, β)‖ γ′.

Proof: For all y ∈ C, t ∈ [0, +∞[ and k ∈ K we have (−g(y) − k, f(y) + t) ∈ epi p, so the proximal normal inequality (2.1) gives

⟨u*, u + g(y) + k⟩ + β(f(y) + t − α) + (1/2r)[‖u + g(y) + k‖² + |α − f(y) − t|²] ≥ 0.    (2.10)

Thus, for k = −g(x) − u and y = x we obtain that for all t ≥ 0

q₁(t) := β(f(x) + t − α) + (1/2r)|α − f(x) − t|² ≥ 0;

hence, since q₁(α − f(x)) = 0, the directional derivative q₁′(α − f(x); 1) of q₁ at α − f(x) in the direction 1 must satisfy

q₁′(α − f(x); 1) = β ≥ 0.

Analogously, for t = α − f(x) and y = x one has

q₂(k) := ⟨u*, g(x) + u⟩ + ⟨u*, k⟩ + (1/2r)‖u + g(x) + k‖² ≥ 0
for all k ∈ K, which implies that the directional derivative q₂′(−g(x) − u; h) of the function q₂ at −g(x) − u in the direction h ∈ K + g(x) + u satisfies

q₂′(−g(x) − u; h) = ⟨u*, h⟩ ≥ 0;

in particular ⟨u*, k⟩ ≥ 0 for any k ∈ K (since K is convex) and ⟨u*, g(x) + u⟩ ≥ 0. So u* ∈ K⁺ and ⟨u*, g(x) + u⟩ = 0, since −g(x) − u ∈ K.

Now we prove ii). If we fix t = α − f(x) and k = −g(x) − u in inequality (2.10), we deduce that for all y ∈ C

β(f(y) − f(x)) + ⟨u*, g(y) − g(x)⟩ + (1/2r)[‖g(y) − g(x)‖² + |f(y) − f(x)|²] ≥ 0.

Let η ∈ ]0, 1[ be such that f and g are γ-Lipschitzian on x + ηB and (‖(u*, −β)‖ + γ′η)γ < ‖(u*, β)‖ γ′. The local minimality of x for the function βf(·) + ⟨u*, g(·)⟩ + ‖(u*, β)‖ γ′ d_C(·) over x + ηrB then follows from the inequality above by the standard exact penalization argument. ∎
Lemma 2.9: Assume that hypothesis (H) holds, let (u*, −β) ∈ LP_b(epi p; (0, p(0))) and let L > L̄. Then u* ∈ K⁺, β ≥ 0, ⟨u*, g(x̄)⟩ = 0 and

0 ∈ ∂_b(βf + ⟨u*, g(·)⟩ + ‖(u*, β)‖ L d_C)(x̄).

Proof: Let ε ∈ ]0, 1[ be given. By the definition of LP_b we can choose sequences (u_n, α_n) →_{epi p} (0, p(0)) and r_n ↓ 0 with r_n^{-1}‖(u_n, α_n − p(0))‖ bounded, and (u_n*, −β_n) ∈ Prox_{r_n}(epi p; (u_n, α_n)) such that (u_n*, −β_n) → (u*, −β). Let x̄ = s(0) and x_n = s(u_n) be points in S(0) and S(u_n) respectively. Let ρ ∈ ]0, 1[ be such that x_n ∈ x̄ + 2ρB for all n and the functions f and g are L̄-Lipschitzian on x̄ + 2ρB. Then by Lemma 2.8 we obtain, for n sufficiently large, that

u_n* ∈ K⁺, β_n ≥ 0 and ⟨u_n*, g(x_n) + u_n⟩ = 0,    (2.11)

and that

0 ≤ β_n f(y) + ⟨u_n*, g(y)⟩ + ‖(u_n*, β_n)‖ L d_C(y) − [β_n f(x_n) + ⟨u_n*, g(x_n)⟩ + ‖(u_n*, β_n)‖ L d_C(x_n)]    (2.12)

for all y ∈ x_n + ρr_n B. But for n sufficiently large ‖(u_n*, β_n) − (u*, β)‖ is small enough (recall that (u_n*, β_n) converges to (u*, β) in the finite-dimensional space R^p × R) to ensure that

βf(y) + ⟨u*, g(y)⟩ + ‖(u*, β)‖ L d_C(y) ≥ βf(x_n) + ⟨u*, g(x_n)⟩ + ‖(u*, β)‖ L d_C(x_n) − ε‖y − x_n‖

for all y ∈ x_n + εq r_n B, where q = min{ρ, 1/L}. Thus we obtain from inequality (2.12) that, for s_n = q r_n,

0 ∈ δ_{ε,s_n}(βf + ⟨u*, g(·)⟩ + ‖(u*, β)‖ L d_C)(x_n).    (2.13)

Moreover, from hypothesis (H) we have

s_n^{-1}‖x_n − x̄‖ ≤ λ q^{-1} r_n^{-1} ‖u_n‖ ≤ λ q^{-1} r_n^{-1} ‖(u_n, α_n − p(0))‖;

therefore s_n^{-1}‖x_n − x̄‖ is bounded, which together with (2.13) implies that

0 ∈ δ_b(βf + ⟨u*, g(·)⟩ + ‖(u*, β)‖ L d_C)(x̄) ⊂ ∂_b(βf + ⟨u*, g(·)⟩ + ‖(u*, β)‖ L d_C)(x̄).

Furthermore, from (2.11) we obtain that u* ∈ K⁺, β ≥ 0 and ⟨u*, g(x̄)⟩ = 0. ∎
Example 2.10: We now give a concrete and simple example for which it is possible to compare the use of Clarke subgradients with that of b-subgradients. Let f(x) = (1/2)x² sin(1/x) + h(x), where h(x) = (1 + γ)x if x ≥ 0 and h(x) = (1 − γ)x if x ≤ 0. Then for

p(u) = inf_x {f(x) : u − x ≤ 0, x ∈ R}

one has, for any γ ∈ [0, 1/2[,

∂_b p(0) = [1 − γ, 1 + γ] = M_b(0), whereas ∂_c p(0) = [1/2 − γ, 3/2 + γ] = M_c(0).

Note that, for γ = 0, ∂_b p(0) = {1} while ∂_c p(0) = [1/2, 3/2].
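A hedged numerical companion to Example 2.10 with γ = 0 (our discretization, not from the paper): since f is Fréchet differentiable at 0 with f′(0) = 1, the difference quotients of p at 0 approach the single b-subgradient 1 rather than the endpoints of the Clarke interval [1/2, 3/2].

```python
import numpy as np

# Numerical look at Example 2.10 with gamma = 0 (our discretization):
#   f(x) = 0.5 x^2 sin(1/x) + x,   p(u) = inf{ f(x) : x >= u }.
# f is increasing on [0, 1] and Frechet differentiable at 0 with f'(0) = 1,
# so p(u) = f(u) for small u >= 0 and the b-subgradient of p at 0 is {1},
# while the Clarke subdifferential is the whole interval [1/2, 3/2].

def f(x):
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0.0, 1.0, x)        # avoid division by zero at x = 0
    return np.where(x == 0.0, 0.0, 0.5 * x**2 * np.sin(1.0 / safe)) + x

def p(u, hi=1.0, n=400001):
    xs = np.linspace(u, hi, n)               # brute-force inf over {x >= u}
    return float(np.min(f(xs)))

# Difference quotients of p at 0 along t_n -> 0: they settle on f'(0) = 1.
ts = np.array([10.0**-k for k in range(2, 6)])
quots = np.array([(p(t) - p(0.0)) / t for t in ts])
print(quots)                                  # values close to 1
```

The oscillating term (1/2)x² sin(1/x) is what inflates the Clarke interval: its derivative x sin(1/x) − (1/2)cos(1/x) keeps swinging by ±1/2 near 0, but it vanishes at 0 itself, which is exactly what the b-subgradient detects.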
In the following corollary we give upper and lower estimates of the Dini directional derivatives D̄p(0; d) and D̲p(0; d) of the optimal value function p at 0 in the direction d, defined by

D̄p(0; d) := lim sup_{t↓0} t^{-1}[p(td) − p(0)] and D̲p(0; d) := lim inf_{t↓0} t^{-1}[p(td) − p(0)].
These estimates are deduced, analogously to Rockafellar [20], from a general upper bound for the generalized directional derivative p^b(0; d). It is not difficult to verify that if f is lower semicontinuous in a neighbourhood of u, then

f^b(u; d) = sup_{s>0} lim_{ε↓0} lim sup_{u′→_f u, t↓0, t^{-1}‖u′−u‖≤s} inf_{‖d′−d‖≤ε} t^{-1}[f(u′ + td′) − f(u′)].

Corollary 2.11: If the assumptions of Theorem 2.7 are satisfied and if either p^b(0; d) < +∞ or M⁰_{b,L}(x̄) ≠ ∅ for x̄ ∈ S(0), then

p^b(0; d) ≤ sup_{u*∈M¹_{b,L}(x̄)} ⟨u*, d⟩ for d ∈ (M⁰_{b,L}(x̄))⁻,

where (M⁰_{b,L}(x̄))⁻ is the negative polar of M⁰_{b,L}(x̄). Furthermore, if p is Lipschitzian in a neighbourhood of 0, one has

D̲p(0; d) ≥ inf_{x̄∈S(0)} inf_{u*∈M¹_{b,L}(x̄)} ⟨u*, d⟩.

Proof: It follows the same lines as the proof of Theorem 3 in Rockafellar [20]. ∎

Remark 2.12: a) These estimates improve those in Rockafellar [20, Corollary 1 of Theorem 3] since M¹_{b,L}(x̄) ⊂ M¹_L(x̄).
b) For other related results see for example [4], [6], [7], [20] and the references therein.
REFERENCES
[1] Aubin, J. P. (1984) Lipschitz behavior of solutions to convex minimization problems, Math. Oper. Res., 9, 87-111
[2] Clarke, F. H. (1975) Generalized gradients and applications, Trans. Amer. Math. Soc., 205, 247-262
[3] Clarke, F. H. (1976) A new approach to Lagrange multipliers, Math. Oper. Res., 1, 165-174
[4] Clarke, F. H. (1983) Optimization and Nonsmooth Analysis, John Wiley and Sons, Canad. Math. Soc. Series
[5] Cornet, B. and Vial, J.-Ph. (1986) Lipschitzian solutions of perturbed nonlinear programming problems, SIAM J. Control Optim., 24, 1123-1137
[6] Fiacco, A. V. (1983) Introduction to Sensitivity and Stability Analysis in Nonlinear Programming, Mathematics in Science and Engineering, Academic Press
[7] Gauvin, J. and Janin, R. (1988) Directional behaviour of optimal solutions in nonlinear mathematical programming, Math. Oper. Res., 13, 629-649
[8] Hiriart-Urruty, J. B. (1978) Gradients généralisés de fonctions marginales, SIAM J. Control Optim., 16, 301-316
[9] Hiriart-Urruty, J. B. (1979) Refinements of necessary optimality conditions in nondifferentiable programming I, Appl. Math. Optim., 5, 63-82
[10] Jofre, A. and Thibault, L. Proximal and Fréchet normal formulae for some small normal cones in Hilbert space, Nonlinear Anal. Th. Meth. Appl., to appear
[11] Loewen, P. D. (1987) The proximal normal formula in Hilbert space, Nonlinear Anal. Th. Meth. Appl., 11, 979-995
[12] Maurer, H. (1979) Differential stability in optimal control problems, Appl. Math. Optim., 5, 283-295
[13] Malanowski, K. (1985) Differentiability with respect to parameters of solutions to convex programming problems, Math. Programming, 33, 352-361
[14] Mangasarian, O. L. and Shiau, T. H. (1987) Lipschitz continuity of solutions of linear inequalities, programs and complementarity problems, SIAM J. Control Optim., 25, 583-595
[15] Michel, Ph. and Penot, J. P. (1984) Calcul sous-différentiel pour des fonctions lipschitziennes, C.R. Acad. Sci. Paris, 298, 269-272
[16] Michel, Ph. and Penot, J. P. (1992) A generalized derivative for calm and stable functions, Differential and Integral Equations, 5, 433-454
[17] Pomerol, J. C. (1982) The Lagrange multiplier set and the generalized gradient set of the marginal of a differentiable program in Banach space, J. Optim. Theor. Appl., 38, 307-317
[18] Robinson, S. M. (1982) Generalized equations and their solutions. Part II. Applications to nonlinear programming, Math. Programming Study, 19, 200-221
[19] Rockafellar, R. T. (1981) Proximal subgradients, marginal values and augmented Lagrangians in nonconvex optimization, Math. Oper. Res., 6, 427-437
[20] Rockafellar, R. T. (1982) Lagrange multipliers and subderivatives of optimal value functions in nonlinear programming, Math. Programming Study, 17, 28-66
[21] Rockafellar, R. T. (1985) Extensions of subgradient calculus with applications to optimization, Nonlinear Anal. Th. Meth. Appl., 9, 665-698
[22] Treiman, J. S. (1988) Shrinking generalized gradients, Nonlinear Anal. Th. Meth. Appl., 12, 1429-1450
[23] Treiman, J. S. Finite dimensional optimality conditions: B-gradients, J. Optim. Theor. Appl., to appear
[24] Treiman, J. S. (1990) Optimal control with small generalized gradients, SIAM J. Control Optim., 28, 720-732
[25] Ward, D. (1987) Convex subcones of the contingent cone in nonsmooth calculus and optimization, Trans. Amer. Math. Soc., 302, 661-682