Proceedings of the World Congress on Engineering 2012 Vol I WCE 2012, July 4 - 6, 2012, London, U.K.

Nondifferentiable Convex Optimization: An Algorithm Using Moreau-Yosida Regularization

Nada DJURANOVIC-MILICIC 1, Milanka GARDASEVIC-FILIPOVIC 2

Manuscript received February 24, 2012. This work was supported by the Science Fund of Serbia, grant number III44007.
1 Nada DJURANOVIC-MILICIC is with the Faculty of Technology and Metallurgy, University of Belgrade, Serbia (corresponding author; phone: +381-11-33-03-865; e-mail: [email protected]).
2 Milanka GARDASEVIC-FILIPOVIC is with the Vocational College of Technology, Arandjelovac, Serbia (e-mail: [email protected]).

Abstract—In this paper we present an algorithm for minimization of a nondifferentiable proper closed convex function. Using the second order Dini upper directional derivative of the Moreau-Yosida regularization of the objective function we make a quadratic approximation. The purpose of the paper is to establish that the sequence of points generated by the algorithm has an accumulation point which satisfies the first order necessary and sufficient conditions. A convergence proof is given, as well as an estimate of the rate of convergence.


Index Terms— Moreau-Yosida regularization, non-smooth convex optimization, second order Dini upper directional derivative.


I. INTRODUCTION

The following minimization problem is considered:

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$

where $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a convex and not necessarily differentiable function with a nonempty set $X^*$ of minima.

For nonsmooth programs, many approaches have been presented so far, and they are often restricted to the convex unconstrained case. This is reasonable, because a constrained problem can easily be transformed into an unconstrained one using a distance function. In general, the various approaches are based on combinations of the following methods: subgradient methods, bundle techniques, and the Moreau-Yosida regularization. For a function $f$ it is very important that its Moreau-Yosida regularization is a new function which has the same set of minima as $f$ and is differentiable with Lipschitz continuous gradient, even when $f$ is not differentiable. In [12], [13], [17] the second order properties of the Moreau-Yosida regularization of a given function $f$ are considered. Having in mind that the Moreau-Yosida regularization of a proper closed convex function is an $LC^1$ function, we present an optimization algorithm using the second order Dini upper directional derivative (described in [1], [2]), based on the results from [3]. That is the main idea of this paper.

We shall present an iterative algorithm for finding an optimal solution of problem (1) by generating a sequence of points $\{x_k\}$ of the form

$$x_{k+1} = x_k + \alpha_k s_k + \alpha_k^2 d_k, \quad k = 0, 1, \ldots, \quad d_k \neq 0, \qquad (2)$$

where the step-size $\alpha_k$ and the directional vectors $s_k$ and $d_k$ are defined by the particular algorithms.

The paper is organized as follows: in the second section some basic theoretical preliminaries are given; in the third section the Moreau-Yosida regularization and its properties are described; in the fourth section the definition of the second order Dini upper directional derivative and its basic properties are given; in the fifth section semi-smooth functions and conditions for their minimization are described. Finally, in the sixth section a model algorithm is suggested, its convergence is proved, and an estimate of its rate of convergence is given.

II. THEORETICAL PRELIMINARIES

Throughout the paper we will use the following notation. A vector $s$ refers to a column vector, and $\nabla$ denotes the gradient operator $\left( \frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \ldots, \frac{\partial}{\partial x_n} \right)^T$. The Euclidean inner product is denoted by $\langle \cdot, \cdot \rangle$ and $\| \cdot \|$ is the associated norm. For a given symmetric positive definite linear operator $M$ we set $\langle \cdot, \cdot \rangle_M := \langle M \cdot, \cdot \rangle$; the associated norm is written $\|x\|_M^2 := \langle x, x \rangle_M$. The smallest and the largest eigenvalues of $M$ are denoted by $\lambda$ and $\Lambda$, respectively.

The domain of a given function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is the set $\mathrm{dom}(f) = \{ x \in \mathbb{R}^n : f(x) < +\infty \}$. We say $f$ is proper if its domain is nonempty. The point $x^* = \arg\min_{x \in \mathbb{R}^n} f(x)$ refers to the minimum point of a given function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$.

A vector $g \in \mathbb{R}^n$ is said to be a subgradient of a given proper convex function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ at a point $x \in \mathbb{R}^n$ if the inequality

$$f(z) \geq f(x) + g^T (z - x)$$

holds for all $z \in \mathbb{R}^n$. The set of all subgradients of $f(x)$ at the point $x$, called the subdifferential at the point $x$, is denoted by $\partial f(x)$. The subdifferential $\partial f(x)$ is a nonempty set if and only if $x \in \mathrm{dom}(f)$. The condition $0 \in \partial f(x)$ is a first order necessary and sufficient condition for a global minimizer of the convex function $f$ at the point $x \in \mathbb{R}^n$ (see [14], [15]).

For a convex function $f$ it follows that $f(x) = \max_{z \in \mathbb{R}^n} \{ f(z) + g^T (x - z) \}$ holds, where $g \in \partial f(z)$ (see [10]).

The concept of the subgradient is a simple generalization of the gradient for nondifferentiable convex functions. The directional derivative of a real function $f$ defined on $\mathbb{R}^n$ at the point $x' \in \mathbb{R}^n$ in the direction $s \in \mathbb{R}^n$, denoted by $f'(x', s)$, is

$$f'(x', s) = \lim_{t \downarrow 0} \frac{f(x' + t s) - f(x')}{t}$$

when this limit exists. For a real convex function the directional derivative at a point $x' \in \mathbb{R}^n$ exists in every direction $s \in \mathbb{R}^n$ (see [2]).

At the end of this section we recall the definition of an $LC^1$ function.

Definition 1. A real function $f$ defined on $\mathbb{R}^n$ is an $LC^1$ function on the open set $D \subseteq \mathbb{R}^n$ if it is continuously differentiable and its gradient $\nabla f$ is locally Lipschitzian, i.e. $\|\nabla f(x) - \nabla f(y)\| \leq L \|x - y\|$ holds for $x, y \in D$ and some $L > 0$.
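As a concrete illustration (ours, not part of the original text), the subgradient inequality can be checked numerically for $f(x) = \|x\|_1$, where $\mathrm{sign}(x)$ taken componentwise is always one valid subgradient (at a zero component, any value in $[-1, 1]$ would do):

```python
import numpy as np

def subgradient_l1(x):
    """One subgradient of f(x) = ||x||_1: sign(x) componentwise.
    At a kink (x_i = 0) any value in [-1, 1] is valid; sign gives 0."""
    return np.sign(x)

# Check the subgradient inequality f(z) >= f(x) + g^T (z - x).
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
g = subgradient_l1(x)
for _ in range(1000):
    z = rng.standard_normal(5)
    assert np.sum(np.abs(z)) >= np.sum(np.abs(x)) + g @ (z - x) - 1e-12
```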

III. THE MOREAU-YOSIDA REGULARIZATION

Definition 2. Let $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a proper closed convex function. The Moreau-Yosida regularization of a given function $f$, associated to the metric defined by $M$ and denoted by $F$, is defined as

$$F(x) := \min_{y \in \mathbb{R}^n} \left\{ f(y) + \frac{1}{2} \|y - x\|_M^2 \right\}. \qquad (3)$$

The above function is an infimal convolution. In [10] it is proved that the infimal convolution of a convex function is also a convex function. Hence the function defined by (3) is a convex function and has the same set of minima as the function $f$ (see [6]); the motivation for studying the Moreau-Yosida regularization is due to the fact that $\min_{x \in \mathbb{R}^n} f(x)$ is equal to $\min_{x \in \mathbb{R}^n} F(x)$.

The minimum point $p(x)$ of the function in (3), i.e.

$$p(x) := \arg\min_{y \in \mathbb{R}^n} \left\{ f(y) + \frac{1}{2} \|y - x\|_M^2 \right\}, \qquad (4)$$

is called the proximal point of $x$. In [6] it is proved that the function $F$ defined by (3) is always differentiable.

The first order regularity of $F$ is well known: without any further assumptions, $F$ has a Lipschitzian gradient on the whole space $\mathbb{R}^n$. More precisely,

$$\|\nabla F(x_1) - \nabla F(x_2)\|^2 \leq \Lambda \langle \nabla F(x_1) - \nabla F(x_2), x_1 - x_2 \rangle$$

holds for all $x_1, x_2 \in \mathbb{R}^n$ (see [12]), where $\nabla F(x) = M(x - p(x)) \in \partial f(p(x))$ and $p(x)$ is the unique minimizer in (4). So, according to the above considerations and Definition 1, we conclude that $F$ is an $LC^1$ function.

Lemma 1 [13]: The following statements are equivalent:
(i) $x$ minimizes $f$;
(ii) $p(x) = x$;
(iii) $\nabla F(x) = 0$;
(iv) $x$ minimizes $F$;
(v) $f(p(x)) = f(x)$;
(vi) $F(x) = f(x)$.
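A minimal numerical sketch of these objects (ours, assuming $M = I$ and the scalar function $f(x) = |x|$, whose proximal point has the well-known soft-thresholding closed form; here $F$ is the Huber function):

```python
import numpy as np

def prox_abs(x):
    """Proximal point p(x) of f(y) = |y| with M = I:
    the soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)

def moreau_yosida_abs(x):
    """F(x) = min_y { |y| + 0.5*(y - x)^2 }, evaluated at y = p(x).
    This is the Huber function: x^2/2 for |x| <= 1, |x| - 1/2 otherwise."""
    p = prox_abs(x)
    return np.abs(p) + 0.5 * (p - x) ** 2

x = np.linspace(-3.0, 3.0, 7)
F = moreau_yosida_abs(x)
grad_F = x - prox_abs(x)   # gradient formula: grad F(x) = M (x - p(x))
# F is differentiable, shares the minimizer x = 0 with f, and grad F is
# 1-Lipschitz, in line with Lemma 1 and the regularity bound above.
```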

IV. THE SECOND ORDER DINI UPPER DIRECTIONAL DERIVATIVE

We shall give some preliminaries that will be used in the remainder of the paper.

Definition 3. The second order Dini upper directional derivative of the function $f \in LC^1$ at the point $x \in \mathbb{R}^n$ in the direction $d \in \mathbb{R}^n$ is defined to be

$$f''_D(x, d) = \limsup_{\alpha \downarrow 0} \frac{[\nabla f(x + \alpha d) - \nabla f(x)]^T d}{\alpha}.$$

If $\nabla f$ is directionally differentiable at $x_k$, we have

$$f''_D(x_k, d) = f''(x_k, d) = \lim_{\alpha \downarrow 0} \frac{[\nabla f(x_k + \alpha d) - \nabla f(x_k)]^T d}{\alpha}$$

for all $d \in \mathbb{R}^n$.

Since the Moreau-Yosida regularization of a proper closed convex function $f$ is an $LC^1$ function, we can consider its second order Dini upper directional derivative at the point $x \in \mathbb{R}^n$ in the direction $d \in \mathbb{R}^n$, i.e.

$$F''_D(x, d) = \limsup_{\alpha \downarrow 0} \left\langle \frac{g_1 - g_2}{\alpha}, d \right\rangle,$$

where $g_1 \in \partial f(p(x + \alpha d))$, $g_2 \in \partial f(p(x))$, and $F(x)$ is defined by (3) for $M = I$.

Lemma 2 [2]: Let $f : \mathbb{R}^n \to \mathbb{R}$ be a closed convex proper function and let $F$ be its Moreau-Yosida regularization for $M = I$. Then the following statements are valid:
(i) $F''_D(x_k, k d) = k^2 F''_D(x_k, d)$;
(ii) $F''_D(x_k, d_1 + d_2) \leq 2 \left( F''_D(x_k, d_1) + F''_D(x_k, d_2) \right)$;
(iii) $F''_D(x_k, d) \leq K \|d\|^2$, where $K$ is some constant.
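Since the limsup in Definition 3 only involves gradient differences, it can be probed numerically. The following crude sketch (ours; it evaluates the quotient at a few shrinking $\alpha$ and takes the maximum as a stand-in for the limsup) uses the Huber gradient from the previous example:

```python
import numpy as np

def grad_F(x):
    """Gradient of the Moreau-Yosida regularization of |.| (Huber), M = I."""
    p = np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)
    return x - p

def dini_second_derivative(grad, x, d, alphas=(1e-2, 1e-3, 1e-4)):
    """Numerical stand-in for F''_D(x, d): the difference quotient
    [grad(x + a d) - grad(x)]^T d / a at a few shrinking a,
    taking the max as a proxy for the limsup."""
    return max((grad(x + a * d) - grad(x)).dot(d) / a for a in alphas)

x = np.array([0.0]); d = np.array([1.0])
print(dini_second_derivative(grad_F, x, d))   # ~ 1.0 = ||d||^2 here
```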

V. SEMI-SMOOTH FUNCTIONS AND OPTIMALITY CONDITIONS

Definition 4. A function $\nabla F : \mathbb{R}^n \to \mathbb{R}^n$ is said to be semi-smooth at the point $x \in \mathbb{R}^n$ if $\nabla F$ is locally Lipschitzian at $x$ and the limit

$$\lim_{\substack{h \to d \\ \lambda \downarrow 0}} \{ V h \}, \quad V \in \partial^2 F(x + \lambda h),$$

exists for any $d \in \mathbb{R}^n$.

Note that for a closed convex proper function, the gradient of its Moreau-Yosida regularization is a semismooth function.

Lemma 3 [16]: If $\nabla F : \mathbb{R}^n \to \mathbb{R}^n$ is semi-smooth at the point $x \in \mathbb{R}^n$, then $\nabla F$ is directionally differentiable at $x$, and for any $V \in \partial^2 F(x + h)$, $h \to 0$, we have

$$V h - (\nabla F)'(x, h) = o(\|h\|).$$

Similarly we have that

$$h^T V h - F''(x, h) = o(\|h\|^2).$$

Lemma 4 [4]: Let $f : \mathbb{R}^n \to \mathbb{R}$ be a proper closed convex function and let $F$ be its Moreau-Yosida regularization. So, if $x \in \mathbb{R}^n$ is a solution of problem (1), then $F'(x, d) = 0$ and $F''_D(x, d) \geq 0$ for all $d \in \mathbb{R}^n$.

Lemma 5 [4]: Let $f : \mathbb{R}^n \to \mathbb{R}$ be a proper closed convex function, $F$ its Moreau-Yosida regularization, and $x$ a point from $\mathbb{R}^n$. If $F'(x, d) = 0$ and $F''_D(x, d) > 0$ for all $d \in \mathbb{R}^n$, then $x \in \mathbb{R}^n$ is a strict local minimizer of problem (1).
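As a sanity check (our illustration, not from the original text), the conditions of Lemmas 4 and 5 can be verified by hand for $f(x) = |x|$ with $M = I$, whose regularization $F$ is the Huber function from Section III:

```latex
% Worked check of Lemmas 4 and 5 for f(x) = |x|, M = I (our example).
% Here F(x) = x^2/2 for |x| <= 1 and |x| - 1/2 otherwise, so nabla F(x) = x
% near the origin. At the candidate point x = 0:
\begin{align*}
F'(0, d) &= \nabla F(0)\, d = 0 \quad \text{for all } d \in \mathbb{R},\\
F''_D(0, d) &= \limsup_{\alpha \downarrow 0}
  \frac{[\nabla F(\alpha d) - \nabla F(0)]\, d}{\alpha}
  = \limsup_{\alpha \downarrow 0} \frac{(\alpha d)\, d}{\alpha}
  = d^2 > 0 \quad (d \neq 0),
\end{align*}
% so x = 0 satisfies the sufficient condition of Lemma 5 and is indeed
% the strict (global) minimizer of f.
```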

VI. A MODEL ALGORITHM

In this section an algorithm for solving problem (1) is introduced. We suppose that at each $x \in \mathbb{R}^n$ it is possible to compute $f(x)$, $F(x)$, $\nabla F(x)$ and $F''_D(x, d)$ for a given $d \in \mathbb{R}^n$.

At the $k$-th iteration we consider the following problem:

$$\min_{d \in \mathbb{R}^n} \Phi_k(d), \quad \Phi_k(d) = \nabla F(x_k)^T d + \frac{1}{2} F''_D(x_k, d), \qquad (5)$$

where $F''_D(x_k, d)$ stands for the second order Dini upper directional derivative at $x_k$ in the direction $d$. Note that if $\Lambda$ is a Lipschitzian constant for $F$, it is also a Lipschitzian constant for $\nabla F$. The function $\Phi_k(d)$ is called an iteration function. It is easy to see that $\Phi_k(0) = 0$ and that $\Phi_k(d)$ is Lipschitzian on $\mathbb{R}^n$.

We generate the sequence $\{x_k\}$ of the form

$$x_{k+1} = x_k + \alpha_k s_k + \alpha_k^2 d_k, \quad s_k \neq 0, \; d_k \neq 0,$$

where the step-size $\alpha_k$ and the direction vectors $s_k$ and $d_k$ are defined by particular algorithms. For a given $q \in (0, 1)$ the step-size $\alpha_k$ is a number satisfying $\alpha_k = q^{i(k)}$, where $i(k)$ is the smallest integer from $\{0, 1, 2, \ldots\}$ such that

$$F(x_{k+1}) - F(x_k) \leq \sigma \left( q^{i(k)} \nabla F(x_k)^T s_k - \frac{1}{2} q^{4 i(k)} F''_D(x_k, d_k) \right), \qquad (6)$$

where $\sigma \in (0, 1)$ is a preassigned constant and $x_0 \in \mathbb{R}^n$ is a given point.

We make the following assumptions.

A1. $c_1 \|d\|^2 \leq F''_D(x_k, d) \leq c_2 \|d\|^2$ holds for some $c_1$ and $c_2$ such that $0 < c_1 < c_2$ for every $d \in \mathbb{R}^n$.

A2. $\|d_k\| = 1$, $\|s_k\| = 1$, $k = 0, 1, 2, \ldots$

A3. There exists a value $\beta > 0$ such that $\nabla F(x_k)^T s_k \leq -\beta \|\nabla F(x_k)\| \cdot \|s_k\|$, $k = 0, 1, 2, \ldots$

Lemma 6 [4]: Under the assumption A1 the function $\Phi_k(\cdot)$ is coercive.

Remark. Coercivity of the function $\Phi_k$ assures that the optimal solution of problem (5) exists (see [16]). It also means that, under the assumption A1, the direction sequence $\{d_k\}$ is a bounded sequence on $\mathbb{R}^n$ (the proof is analogous to the proof in [16]).
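To make the scheme concrete, the following Python sketch (ours, not the authors' reference implementation) instantiates it under simplifying assumptions: $M = I$; $s_k = -\nabla F(x_k) / \|\nabla F(x_k)\|$, which satisfies A2 and A3 with $\beta = 1$; and $d_k = s_k$ as a crude stand-in for an exact minimizer of (5). The callables F, grad_F and FD2 (for $F''_D$) are assumed to be supplied, e.g. by the Huber sketches above.

```python
import numpy as np

def model_algorithm(F, grad_F, FD2, x0, q=0.5, sigma=0.1, tol=1e-8, max_iter=200):
    """Sketch of the model algorithm (simplified instantiation):
    x_{k+1} = x_k + a_k s_k + a_k^2 d_k with a_k = q^{i(k)}, where i(k)
    is the smallest integer in {0, 1, 2, ...} passing the descent test (6)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_F(x)
        if np.linalg.norm(g) <= tol:          # stationarity: grad F(x) ~ 0
            break
        s = -g / np.linalg.norm(g)            # satisfies A3 with beta = 1
        d = s                                 # crude stand-in for the minimizer of (5)
        i = 0
        while True:
            a = q ** i
            x_new = x + a * s + a ** 2 * d    # curve search step (2)
            rhs = sigma * (a * g.dot(s) - 0.5 * a ** 4 * FD2(x, d))
            if F(x_new) - F(x) <= rhs:        # Armijo-type acceptance test (6)
                break
            i += 1
        x = x_new
    return x
```

With the Huber F from Section III and its gradient (and FD2 given by the numerical difference quotient), starting from x0 = np.array([2.5]) the iterates move to the minimizer $x = 0$ of $f(x) = |x|$, in line with the convergence theorem below.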

Proposition 1 [3]: If the Moreau-Yosida regularization $F(\cdot)$ of the proper closed convex function $f(\cdot)$ satisfies assumption A1, then:
(i) the function $F(\cdot)$ is uniformly and, hence, strictly convex;
(ii) the level set $L(x_0) = \{ x \in \mathbb{R}^n : F(x) \leq F(x_0) \}$ is a compact convex set; and
(iii) there exists a unique point $x^*$ such that $F(x^*) = \min_{x \in L(x_0)} F(x)$.

Lemma 7 [4]: The following statements are equivalent:
(i) $d = 0$ is the globally optimal solution of problem (5);
(ii) $0$ is the optimum of the objective function in (5);
(iii) the corresponding $x_k$ is such that $0 \in \partial f(x_k)$.

Convergence theorem. Suppose that $f$ is a proper closed convex function and its Moreau-Yosida regularization $F$ satisfies assumptions A1, A2 and A3. Then for any initial point $x_0 \in \mathbb{R}^n$, $x_k \to x_\infty$ as $k \to +\infty$, where $x_\infty$ is the unique minimal point of the function $f$.

Proof. If $d_k \neq 0$ is a solution of (5), it follows that $\Phi_k(d_k) \leq 0 = \Phi_k(0)$. Consequently, we have by assumption A1 that

$$\nabla F(x_k)^T d_k \leq -\frac{1}{2} F''_D(x_k, d_k) \leq -\frac{1}{2} c_1 \|d_k\|^2 < 0. \qquad (7)$$

From the above inequality it follows that the vector $d_k$ is a descent direction at $x_k$. By (6) and assumption A1 we get

$$F(x_{k+1}) - F(x_k) \leq \sigma \left( q^{i(k)} \nabla F(x_k)^T s_k - \frac{1}{2} q^{4 i(k)} F''_D(x_k, d_k) \right) \leq \sigma \left( -\beta \|\nabla F(x_k)\| \cdot \|s_k\| - \frac{1}{2} q^{4 i(k)} c_1 \|d_k\|^2 \right) < 0 \qquad (8)$$

for every $d_k \neq 0$. Hence the sequence $\{F(x_k)\}$ has the descent property and, consequently, $\{x_k\} \subset L(x_0)$. Since $L(x_0)$ is, by Proposition 1, a compact convex set, it follows that the sequence $\{x_k\}$ is bounded. Therefore there exist accumulation points of the sequence $\{x_k\}$.

Since $\nabla F$ is continuous, if $\nabla F(x_k) \to 0$ as $k \to +\infty$, then every accumulation point $x_\infty$ of the sequence $\{x_k\}$ satisfies $\nabla F(x_\infty) = 0$. Since $F$ is (by Proposition 1) strictly convex, there exists a unique point $x_\infty \in L(x_0)$ such that $\nabla F(x_\infty) = 0$. Hence, the sequence $\{x_k\}$ has a unique limit point $x_\infty$, and it is a global minimizer of $F$ and, by Lemma 1, a global minimizer of the function $f$.

Therefore we have to prove that $\nabla F(x_k) \to 0$ as $k \to +\infty$. Let $K_1$ be a set of indices such that $\lim_{k \in K_1} x_k = x_\infty$. Then there are two cases to consider:

a) The set of indices $\{i(k)\}$, for $k \in K_1$, is uniformly bounded above by a number $I$. From A2, A3 and (6) it follows that:

$$F(x_{k+1}) - F(x_k) \leq \sigma \left( q^{i(k)} \nabla F(x_k)^T s_k - \frac{1}{2} q^{4 i(k)} F''_D(x_k, d_k) \right) \leq \sigma \left( q^{I} \nabla F(x_k)^T s_k - \frac{1}{2} q^{4I} F''_D(x_k, d_k) \right) \leq -\sigma q^{I} \beta \|\nabla F(x_k)\| \cdot \|s_k\| - \frac{\sigma}{2} q^{4I} F''_D(x_k, d_k) = -\beta \sigma q^{I} \|\nabla F(x_k)\| - \frac{1}{2} q^{4I} \sigma F''_D(x_k, d_k).$$

Hence, it follows that

$$F(x_k) - F(x_{k+1}) \geq \beta \sigma q^{I} \|\nabla F(x_k)\| + \frac{1}{2} q^{4I} \sigma F''_D(x_k, d_k). \qquad (9)$$

Since $\{F(x_k)\}$ is bounded below and $F(x_{k+1}) - F(x_k) \to 0$ as $k \to \infty$, $k \in K_1$, from (9) it follows that

$$\nabla F(x_k) \to 0 \quad \text{and} \quad F''_D(x_k, d_k) \to 0, \quad k \to \infty, \; k \in K_1,$$

i.e. $x_\infty$ is a stationary point of the objective function $F$, i.e. $\nabla F(x_\infty) = 0$. From Lemma 1 it follows that $x_\infty$ is the unique optimal point of the function $f$.

b) There is a subset $K_2 \subset K_1$ such that $\lim_{k \to \infty} i(k) = +\infty$. By the definition of $i(k)$, we have for $k \in K_2$ that

$$F(x_{k+1}) - F(x_k) > \sigma \left( q^{i(k)-1} \nabla F(x_k)^T s_k - \frac{1}{2} q^{4 i(k) - 4} F''_D(x_k, d_k) \right), \qquad (10)$$

where $x_{k+1} = x_k + q^{i(k)-1} s_k + q^{2 i(k) - 2} d_k$ here denotes the trial point at which the test (6) fails, since $i(k)$ is the smallest integer satisfying (6). By Definition 3, A1 and Lemma 2 we have

$$F(x_{k+1}) - F(x_k) = q^{i(k)-1} \nabla F(x_k)^T s_k + q^{2 i(k) - 2} \nabla F(x_k)^T d_k + \frac{1}{2} F''_D\!\left( x_k, q^{i(k)-1} s_k + q^{2 i(k) - 2} d_k \right) + o\!\left( q^{2 i(k) - 2} \right)$$

$$\leq q^{i(k)-1} \nabla F(x_k)^T s_k + q^{2 i(k) - 2} \nabla F(x_k)^T d_k + F''_D\!\left( x_k, q^{i(k)-1} s_k \right) + F''_D\!\left( x_k, q^{2 i(k) - 2} d_k \right) + o\!\left( q^{2 i(k) - 2} \right)$$

$$= q^{i(k)-1} \nabla F(x_k)^T s_k + q^{2 i(k) - 2} \nabla F(x_k)^T d_k + q^{2 i(k) - 2} F''_D(x_k, s_k) + q^{4 i(k) - 4} F''_D(x_k, d_k) + o\!\left( q^{2 i(k) - 2} \right)$$

$$\leq q^{i(k)-1} \nabla F(x_k)^T s_k + q^{2 i(k) - 2} \nabla F(x_k)^T d_k + c_2 q^{2 i(k) - 2} \|s_k\|^2 + c_2 q^{4 i(k) - 4} \|d_k\|^2 + o\!\left( q^{2 i(k) - 2} \right).$$

Hence, from (10) it follows that:

$$\sigma \left( q^{i(k)-1} \nabla F(x_k)^T s_k - \frac{1}{2} q^{4 i(k) - 4} F''_D(x_k, d_k) \right) < q^{i(k)-1} \nabla F(x_k)^T s_k + q^{2 i(k) - 2} \nabla F(x_k)^T d_k + c_2 q^{2 i(k) - 2} \|s_k\|^2 + c_2 q^{4 i(k) - 4} \|d_k\|^2 + o\!\left( q^{2 i(k) - 2} \right).$$

Accumulating all terms of order higher than $o(q^{2 i(k) - 2})$ into the $o(q^{2 i(k) - 2})$ term (by assumption A2), and using the fact that $\nabla F(x_k)^T d_k \leq 0$, from the last inequality it follows that:

$$\sigma q^{i(k)-1} \nabla F(x_k)^T s_k < q^{i(k)-1} \nabla F(x_k)^T s_k + c_2 q^{2 i(k) - 2} \|s_k\|^2 + o\!\left( q^{2 i(k) - 2} \right),$$

i.e.

$$(1 - \sigma) q^{i(k)-1} \nabla F(x_k)^T s_k + c_2 q^{2 i(k) - 2} \|s_k\|^2 + o\!\left( q^{2 i(k) - 2} \right) > 0.$$

Hence, dividing by $c_2 \cdot q^{i(k)-1}$, by A2 and A3 it follows that

$$q^{i(k)-1} > \frac{\sigma - 1}{c_2} \nabla F(x_k)^T s_k + \frac{o\!\left( q^{i(k)-1} \right)}{c_2} \geq \beta \frac{1 - \sigma}{c_2} \|\nabla F(x_k)\| + \frac{o\!\left( q^{i(k)-1} \right)}{c_2}.$$

Since $q^{i(k)-1} \to 0$ as $k \to \infty$, $k \in K_2$, it follows that $\nabla F(x_k) \to 0$ as $k \to \infty$, $k \in K_2$.

In order to have a finite value $i(k)$, it is sufficient that $s_k$ and $d_k$ have descent properties, i.e. $\nabla F(x_k)^T s_k < 0$ and $\nabla F(x_k)^T d_k < 0$ whenever $\nabla F(x_k) \neq 0$. The first relation follows from A3 and the second relation follows from (7).

At a saddle point the relation (6) becomes

$$F(x_{k+1}) - F(x_k) \leq -\frac{\sigma}{2} q^{4 i(k) - 4} F''_D(x_k, d_k). \qquad (11)$$

In the case $d_k \neq 0$, by Lemma 7 and hence by assumption A1, it follows that $F''_D(x_k, d_k) > 0$; so (11) can clearly be satisfied.

Convergence rate theorem. Under the assumptions of the previous theorem, the following estimate holds for the sequence $\{x_k\}$ generated by the algorithm:

$$F(x_n) - F(x_\infty) \leq \mu_0 \left[ 1 + \mu_0 \frac{1}{\eta^2} \sum_{k=0}^{n-1} \frac{F(x_k) - F(x_{k+1})}{\|\nabla F(x_k)\|^2} \right]^{-1}, \quad n = 1, 2, 3, \ldots,$$

where $\mu_0 = F(x_0) - F(x_\infty)$ and $\mathrm{diam}\, L(x_0) = \eta < +\infty$ (since by Proposition 1 it follows that $L(x_0)$ is bounded).

Proof. The proof directly follows from Theorem 9.2, page 167, in [11].
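To read the estimate (our illustrative remark, not part of the original proof): if one additionally assumes that the per-step decrease satisfies $F(x_k) - F(x_{k+1}) \geq c \|\nabla F(x_k)\|^2$ for some $c > 0$, a standard situation for gradient-type methods with uniformly bounded step-size counters, then the bound specializes to a sublinear $O(1/n)$ rate:

```latex
% Our illustrative specialization: assume
%   F(x_k) - F(x_{k+1}) >= c ||grad F(x_k)||^2,  c > 0, for all k.
% Then every summand in the estimate is at least c, so
\[
F(x_n) - F(x_\infty)
  \;\le\; \mu_0 \left[ 1 + \frac{\mu_0}{\eta^2}\, c\, n \right]^{-1}
  \;=\; O\!\left( \frac{1}{n} \right).
\]
```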

VII. CONCLUSION

The Moreau-Yosida regularization is a powerful tool for smoothing nondifferentiable functions. It allows us to transform a nondifferentiable optimization (NDO) problem into an $LC^1$ optimization problem using the properties of this regularization. To our knowledge this is a new approach to solving NDO problems, and in some sense it is close to the proximal quasi-Newton algorithm. The algorithm presented in this paper is based on the algorithm from [3]. The method uses minimization along a plane defined by the vectors $s_k$ and $d_k$ to generate a new iterate at each iteration. In contrast to the algorithm in [3], the presented algorithm is defined and converges for nonsmooth convex functions.

REFERENCES

[1] Borwein J., Lewis A.: "Convex Analysis and Nonlinear Optimization", Canada, 1999.
[2] Clarke F. H.: "Optimization and Nonsmooth Analysis", Wiley, New York, 1983.
[3] Djuranovic-Milicic N.: "An Algorithm for $LC^1$ Optimization", Yugoslav Journal of Operations Research, Vol. 15, No. 2, pp. 301-306, 2005.
[4] Djuranovic-Milicic N., Gardasevic-Filipovic M.: "An Algorithm for Minimization of a Nondifferentiable Convex Function", Lecture Notes in Engineering and Computer Science, WCE 2009, Vol. II, pp. 1241-1246, 2009.
[5] Djuranovic-Milicic N., Gardasevic-Filipovic M.: "A Multi-step Curve Search Algorithm in Nonlinear Optimization - Nondifferentiable Convex Case", Facta Universitatis, Series Mathematics and Informatics, Vol. 25, pp. 11-24, 2010.
[6] Fuduli A.: "Metodi Numerici per la Minimizzazione di Funzioni Convesse Nondifferenziabili" (Numerical Methods for the Minimization of Nondifferentiable Convex Functions), PhD Thesis, Dipartimento di Elettronica, Informatica e Sistemistica, Università della Calabria, 1998.
[7] Fletcher R.: "Practical Methods of Optimization, Volume 2: Constrained Optimization", Wiley-Interscience, 1981.
[8] Gardasevic-Filipovic M., Djuranovic-Milicic N.: "An Algorithm Using Minimization of a Moreau-Yosida Regularization for Nondifferentiable Convex Function", Filomat, in press.

[9] Gardasevic-Filipovic M., Djuranovic-Milicic N.: "An Algorithm Using Trust Region Strategy for Minimization of a Nondifferentiable Function", Numerical Functional Analysis and Optimization, Vol. 32, No. 12, pp. 1239-1251, 2011.
[10] Ioffe A. D., Tikhomirov V. M.: "Theory of Extremal Problems", Nauka, Moscow, 1974 (in Russian).
[11] Karmanov V. G.: "Mathematical Programming", Nauka, Moscow, 1975 (in Russian).
[12] Lemarechal C., Sagastizabal C.: "Practical Aspects of the Moreau-Yosida Regularization I: Theoretical Preliminaries", SIAM Journal on Optimization, Vol. 7, pp. 367-385, 1997.
[13] Meng F., Zhao G.: "On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs", Numerical Functional Analysis and Optimization, Vol. 25, No. 5/6, pp. 515-530, 2004.
[14] Pshenichny B. N.: "Convex Analysis and Extremal Problems", Nauka, Moscow, 1980 (in Russian).
[15] Rockafellar R. T.: "Convex Analysis", Princeton University Press, New Jersey, 1970 (Russian translation: Mir, Moscow, 1973).
[16] Sun W., Sampaio R. J. B., Yuan J.: "Two Algorithms for $LC^1$ Unconstrained Optimization", Journal of Computational Mathematics, Vol. 18, No. 6, pp. 621-632, 2000.
[17] Qi L.: "Second Order Analysis of the Moreau-Yosida Regularization", School of Mathematics, The University of New South Wales, Sydney, Australia, 1998.