Hindawi Discrete Dynamics in Nature and Society Volume 2017, Article ID 6406514, 9 pages https://doi.org/10.1155/2017/6406514
Research Article
A New Augmented Lagrangian Method for Equality Constrained Optimization with Simple Unconstrained Subproblem

Hao Zhang (1,2) and Qin Ni (1)

(1) College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
(2) College of Engineering, Nanjing Agricultural University, Nanjing 210031, China

Correspondence should be addressed to Hao Zhang; [email protected]

Received 3 May 2017; Accepted 16 August 2017; Published 19 October 2017

Academic Editor: Seenith Sivasundaram

Copyright © 2017 Hao Zhang and Qin Ni. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a new method for equality constrained optimization based on the augmented Lagrangian method. We construct an unconstrained subproblem by adding an adaptive quadratic term to the quadratic model of the augmented Lagrangian function. In each iteration, we solve this unconstrained subproblem to obtain the trial step. The main feature of this work is that the subproblem can be solved more easily. Numerical results show that this method is effective.
1. Introduction

In this paper, we consider the following equality constrained optimization problem:

  min f(x)
  s.t. c_i(x) = 0, i = 1, ..., m,   (1)

where x ∈ R^n, f(x): R^n → R, and c_i(x): R^n → R (i = 1, ..., m) are twice continuously differentiable. The method presented in this paper is a variant of the augmented Lagrangian method (denoted by AL). In the late 1960s, the AL method was proposed by Hestenes [1] and Powell [2]. Later, Conn et al. [3, 4] presented a practical AL method and proved its global convergence under the LICQ condition. Since then, the AL method has attracted the attention of many scholars, and many variants have been presented (see [5–11]). Up to now, there are many computer packages based on the AL method, such as LANCELOT [4] and ALGENCAN [5, 6]. In the past decades, the AL method was fully developed. Attracted by its good performance, many scholars have remained devoted to research on the AL method and its applications in recent years (see [7, 8, 11–15]).

For (1), we define the Lagrangian function

  l(x, λ) = f(x) − λ^T c(x)   (2)

and the augmented Lagrangian function

  L(x, λ, σ) = l(x, λ) + (σ/2) ‖c(x)‖²,   (3)

where λ is called the Lagrangian multiplier and σ is called the penalty parameter. In this paper, ‖·‖ refers to the Euclidean norm. In a typical AL method, at the kth step, for a given multiplier λ_k and penalty parameter σ_k, an unconstrained subproblem

  min_{x∈R^n} L(x, λ_k, σ_k)   (4)

is solved to find the next iteration point. Then, the multiplier and the penalty parameter are updated by some rules. For convenience, for given λ_k and σ_k, we define

  Φ_k(x) = L(x, λ_k, σ_k) = l(x, λ_k) + (σ_k/2) ‖c(x)‖².   (5)
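As a concrete illustration, the augmented Lagrangian (3) and the gradient of Φ_k that it induces, ∇Φ_k(x) = ∇f(x) − A(x)^T λ + σ A(x)^T c(x), can be evaluated as in the sketch below. The quadratic objective and the single linear constraint are illustrative assumptions of ours, not taken from the paper.

```python
import numpy as np

def aug_lagrangian(f, c, x, lam, sigma):
    # L(x, lam, sigma) = f(x) - lam^T c(x) + (sigma/2) * ||c(x)||^2, cf. (2)-(3)
    cx = c(x)
    return f(x) - lam @ cx + 0.5 * sigma * (cx @ cx)

def grad_phi(grad_f, c, jac_c, x, lam, sigma):
    # grad Phi(x) = grad f(x) - A(x)^T lam + sigma * A(x)^T c(x)
    A = jac_c(x)                  # m-by-n Jacobian, rows are grad c_i(x)^T
    return grad_f(x) - A.T @ lam + sigma * (A.T @ c(x))

# Illustrative problem (our assumption): min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0
f      = lambda x: float(x @ x)
grad_f = lambda x: 2.0 * x
c      = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c  = lambda x: np.array([[1.0, 1.0]])

x, lam, sigma = np.zeros(2), np.array([1.0]), 2.0
print(aug_lagrangian(f, c, x, lam, sigma))        # 2.0
print(grad_phi(grad_f, c, jac_c, x, lam, sigma))  # [-3. -3.]
```

Note that for σ = 0 the function reduces to the plain Lagrangian (2), and for λ = 0 it reduces to the quadratic penalty function.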
Motivated by the regularized Newton method for unconstrained optimization (see [16–19]), we construct a new
subproblem of (1). At the kth iteration point x_k, Φ_k(x_k + s) is approximated by the following quadratic model:

  q_k(s) = l(x_k, λ_k) + (∇_x l(x_k, λ_k))^T s + (1/2) s^T B_k s + (σ_k/2) ‖c(x_k) + A(x_k) s‖²
         = Φ_k(x_k) + [∇_x f(x_k) − A(x_k)^T λ_k + σ_k A(x_k)^T c(x_k)]^T s + (1/2) s^T [B_k + σ_k A(x_k)^T A(x_k)] s,   (6)

where A(x_k) = [∇_x c_1(x_k), ..., ∇_x c_m(x_k)]^T and B_k is a positive semidefinite approximation of ∇²_{xx} l(x_k, λ_k). Let

  g̃_k = ∇Φ_k(x_k) = ∇_x f(x_k) − A(x_k)^T λ_k + σ_k A(x_k)^T c(x_k),
  B̃_k = B_k + σ_k A(x_k)^T A(x_k);   (7)

thus we have

  q_k(s) = Φ_k(x_k) + g̃_k^T s + (1/2) s^T B̃_k s.   (8)

In [14, 15], q_k(s) is minimized within a trust region to find the next iteration point. Motivated by the regularized Newton method, we add a regularization term to the quadratic model q_k(s) and define

  p_k(s) = q_k(s) + (1/2) μ_k ‖s‖² = Φ_k(x_k) + g̃_k^T s + (1/2) s^T B̃_k s + (1/2) μ_k ‖s‖²,   (9)

where μ_k is called the regularized parameter. At the kth step of our algorithm, we solve the following convex unconstrained quadratic subproblem:

  min_{s∈R^n} p_k(s) = Φ_k(x_k) + g̃_k^T s + (1/2) s^T B̃_k s + (1/2) μ_k ‖s‖²   (10)

to find the trial step s_k. Then, we compute the ratio between the actual reduction and the predicted reduction:

  ρ_k = Ared_k / Pred_k = [Φ_k(x_k) − Φ_k(x_k + s_k)] / [Φ_k(x_k) − p_k(s_k)].   (11)

When ρ_k is close to 1, we accept x_k + s_k as the next iteration point; at the same time, we regard the quadratic model p_k(s) as a sufficiently "good" approximation of Φ_k(x_k + s) and reduce the value of μ_k. Conversely, when ρ_k is close to zero, we set x_{k+1} = x_k and increase the value of μ_k, by which we wish to reduce the length of the next trial step. This technique is similar to the update rule of the trust region radius. Actually, a sufficiently large μ_k indeed reduces the length of the trial step s_k. However, the regularized parameter is different from the trust region radius. In [14, 15], the authors construct a trust region subproblem

  min_{‖s‖≤Δ_k} q_k(s) = Φ_k(x_k) + g̃_k^T s + (1/2) s^T B̃_k s.   (12)

The exact solution s_k of (12) satisfies the first-order critical conditions if there exists some η_k ≥ 0 such that B̃_k + η_k I is positive semidefinite and

  g̃_k + (B̃_k + η_k I) s_k = 0,
  ‖s_k‖ ≤ Δ_k,
  η_k (‖s_k‖ − Δ_k) = 0,   (13)

while the first-order critical condition of (10) is

  g̃_k + (B̃_k + μ_k I) s_k = 0.   (14)

Equations (13) and (14) show the similarities and differences between the regularized subproblem (10) and the trust region subproblem (12). It seems that the parameter μ_k plays a role similar to the multiplier η_k in the trust region subproblem. But, actually, the update rule of μ_k (see (26)) shows that μ_k is not an approximation of η_k. The update of μ_k depends on the quality of the last trial step s_{k−1} and has no direct relation with system (13).

To establish the global convergence of an algorithm, some kind of constraint qualification is required. There are many well-known constraint qualifications, such as LICQ, MFCQ, CRCQ, RCR, CPLD, and RCPLD. In the case of only equality constraints, LICQ is equivalent to MFCQ, in which {∇c_i(x) | i = 1, ..., m} has full rank; CRCQ is equivalent to CPLD, in which any subset of {∇c_i(x) | i = 1, ..., m} maintains constant rank in a neighborhood of x; RCR is equivalent to RCPLD, in which {∇c_i(x) | i = 1, ..., m} maintains constant rank in a neighborhood of x. RCPLD is weaker than CRCQ, and CRCQ is weaker than LICQ. In this paper, we use RCPLD, which is defined in the following.

Definition 1. One says that RCPLD holds at a feasible point x* of (1) if there exists a neighborhood N(x*) of x* such that {∇c_i(x) | i = 1, ..., m} maintains constant rank for all x ∈ N(x*).

The rest of this paper is organized as follows. In Section 2, we give a detailed description of the presented algorithm. The global convergence is proved in Section 3. In Section 4, we present the numerical experiments. Some conclusions are given in Section 5.

Notations. For convenience, we abbreviate ∇_x f(x_k) to g_k, f(x_k) to f_k, c(x_k) to c_k, and A(x_k) to A_k. In this paper, v^(i) denotes the ith component of the vector v.
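Returning to the subproblem (10): the following sketch solves the first-order condition (14) for the trial step and illustrates that increasing μ_k shortens the step, much as shrinking a trust-region radius would. All matrix data here is randomly generated for illustration and is our assumption, not taken from the paper.

```python
import numpy as np

# Illustrative data (our assumptions): psd Hessian approximation B,
# constraint Jacobian A, gradient g, and constraint values ck.
rng = np.random.default_rng(0)
n, m = 5, 2
B = np.eye(n)
A = rng.standard_normal((m, n))
g = rng.standard_normal(n)
ck = rng.standard_normal(m)
lam = np.zeros(m)
sigma = 10.0

g_tilde = g - A.T @ lam + sigma * (A.T @ ck)   # cf. (7)
B_tilde = B + sigma * (A.T @ A)

def trial_step(mu):
    # First-order condition (14): (B~ + mu*I) s = -g~.
    return np.linalg.solve(B_tilde + mu * np.eye(n), -g_tilde)

# A larger regularization parameter produces a strictly shorter trial step,
# mimicking the effect of a smaller trust-region radius.
print(np.linalg.norm(trial_step(0.1)) > np.linalg.norm(trial_step(10.0)))  # True
```

Since B̃ + μI is positive definite for any μ > 0, the solve always succeeds, whereas the trust-region system (13) additionally couples η_k to the radius Δ_k.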
2. Algorithm

In this section, we give a detailed description of the proposed algorithm. As mentioned in Section 1, we solve the unconstrained subproblem (10) to obtain the trial step s_k. Since B_k is at least positive semidefinite and B̃_k = B_k + σ_k A_k^T A_k, the matrix B̃_k + μ_k I is positive definite whenever μ_k > 0. Therefore, (10) is a strictly convex quadratic unconstrained optimization problem, and s_k solves (10) if and only if

  g̃_k + (B̃_k + μ_k I) s_k = 0   (15)

holds. Although the linear system (15) is easy to solve, global convergence does not depend on its exact solution. Consider the minimizer of (10) along the direction −g̃_k; specifically, consider the following subproblem:

  min_{α≥0} p_k(−α g̃_k) = Φ_k(x_k) − α ‖g̃_k‖² + (1/2) α² g̃_k^T B̃_k g̃_k + (1/2) α² μ_k ‖g̃_k‖².   (16)

If ‖g̃_k‖ > 0, then the minimizer of (16) is α_k* = ‖g̃_k‖² / (g̃_k^T B̃_k g̃_k + μ_k ‖g̃_k‖²). Therefore, at the kth step, it follows that

  Pred_k ≥ Φ_k(x_k) − p_k(−α_k* g̃_k) = ‖g̃_k‖⁴ / [2 (g̃_k^T B̃_k g̃_k + μ_k ‖g̃_k‖²)].   (17)

By direct calculation, we have that

  Pred_k ≥ (‖g̃_k‖²/4) min {1/‖B̃_k‖, 1/μ_k}.   (18)

In Section 3, we always suppose that (18) holds.

In a typical AL algorithm, the update rule of σ_k depends on the improvement of the constraint violation. A commonly used rule is the following: if ‖c_{k+1}‖ < τ_k ‖c_k‖, where 0 < τ_k < 1, one may think that the constraint violation is reduced sufficiently, and thus σ_{k+1} = σ_k is a good choice; otherwise, if ‖c_{k+1}‖ ≥ τ_k ‖c_k‖, one thinks that the current penalty parameter cannot sufficiently reduce the constraint violation and increases it in the next iteration. In [20], Yuan proposed a different update rule of σ_k for a trust region algorithm. Specifically, if

  Pred_k < δ_k σ_k min {Δ_k ‖c_k‖, ‖c_k‖²},   (19)

σ_k is increased. In (19), δ_k is an auxiliary parameter such that δ_k σ_k tends to zero. We slightly modify (19) in our algorithm. Specifically, if

  Pred_k < δ_k σ_k min { ‖c_k‖ max {1, ‖g̃_k‖} / μ_k, ‖c_k‖² },   (20)

σ_k is increased.

In a typical AL method, the next iteration point x_{k+1} is obtained by minimizing L(x, λ_k, σ_k). In most AL methods, x_{k+1} satisfies ‖∇_x L(x_{k+1}, λ_k, σ_k)‖ < ε_k, where ε_k is a controlling parameter which tends to zero. As

  ∇_x L(x_{k+1}, λ_k, σ_k) = g_{k+1} − A_{k+1}^T (λ_k − σ_k c_{k+1}),   (21)

when ε_k is sufficiently small, λ_k − σ_k c_{k+1} is a good estimate of the next multiplier λ_{k+1}. As we obtain x_{k+1} by minimizing p_k(s), the critical point of p_k(s) has no direct relation to ‖∇_x L(x_{k+1}, λ_k, σ_k)‖. Therefore, the update rule λ_{k+1} = λ_k − σ_k c_{k+1} does not suit our algorithm. We obtain λ_{k+1} by approximately solving the following least squares problem:

  min_{λ∈R^m} (1/2) ‖g_{k+1} − A_{k+1}^T λ‖².   (22)

Most AL algorithms require that {λ_k} is bounded to ensure global convergence. Hence, all components of λ_k are restricted to a certain interval [l, u]. This technique is also used in our algorithm. Now, we give the detailed algorithm in the following.

Algorithm 2.

Step 0 (initialization). Choose the parameters 0 < γ1 < γ2 < 1, 0 < η1 < 1 ≤ η2 < η3, and u > l. Determine x_0 ∈ R^n, λ_0 ∈ R^m, μ_0 > 0, δ_0 > 0, and B_0 ∈ R^{n×n}. Let R_0 = max {‖c(x_0)‖, 1}. Set k := 0.

Step 1 (termination test). If ‖g̃_k‖ = 0 and ‖c_k‖ = 0, return x_k as a KKT point. If ‖g̃_k‖ = 0, ‖c_k‖ > 0, and ‖A_k^T c_k‖ = 0, return x_k as an infeasible KKT point.

Step 2 (determine the trial step). Evaluate the trial step s_k by solving

  min_{s∈R^n} p_k(s) = Φ_k(x_k) + g̃_k^T s + (1/2) s^T B̃_k s + (1/2) μ_k ‖s‖²   (23)

such that (18) holds. Compute the ratio between the actual reduction and the predicted reduction

  ρ_k = Ared_k / Pred_k,   (24)

where Ared_k = Φ_k(x_k) − Φ_k(x_k + s_k) and Pred_k = Φ_k(x_k) − p_k(s_k). Set

  x_{k+1} = x_k + s_k if ρ_k ≥ γ1; x_{k+1} = x_k if ρ_k < γ1.   (25)

  μ_{k+1} = η1 μ_k if ρ_k ≥ γ2; μ_{k+1} = η2 μ_k if γ1 ≤ ρ_k < γ2; μ_{k+1} = η3 μ_k if ρ_k < γ1.   (26)

Step 3 (update the penalty parameter). If

  Pred_k < δ_k σ_k min { ‖c_k‖ max {1, ‖g̃_k‖} / μ_k, ‖c_k‖² },   (27)
set

  σ_{k+1} = 2 σ_k, δ_{k+1} = (1/4) δ_k.   (28)

Otherwise, set

  σ_{k+1} = σ_k, δ_{k+1} = δ_k.   (29)
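The test (27) and the updates (28)-(29) can be sketched as the following function; the function name and scalar interface are our own illustrative choices. Note that the paired updates keep δ_k σ_k² constant, a fact used in the convergence analysis.

```python
def update_penalty(pred, sigma, delta, mu, g_tilde_norm, c_norm):
    # Step 3: enlarge sigma and shrink delta when the predicted reduction is
    # too small relative to the constraint violation, cf. (27)-(29).
    threshold = delta * sigma * min(c_norm * max(1.0, g_tilde_norm) / mu,
                                    c_norm ** 2)
    if pred < threshold:
        return 2.0 * sigma, delta / 4.0   # (28): note delta*sigma^2 is unchanged
    return sigma, delta                   # (29)

print(update_penalty(pred=1e-6, sigma=1.0, delta=1.0, mu=1.0,
                     g_tilde_norm=2.0, c_norm=1.0))  # (2.0, 0.25)
```

With these choices, δ_k σ_k = δ_0 σ_0² / σ_k tends to zero as σ_k grows, which is the property the modified rule (20) relies on.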
Step 4 (update the multiplier). If ‖c_{k+1}‖ ≤ R_k, set R_{k+1} = (1/2) R_k, evaluate λ̃_{k+1} by

  min_{λ∈R^m} ‖g_{k+1} − A_{k+1}^T λ‖,   (30)

and let

  λ_{k+1}^(i) = min {max {λ̃_{k+1}^(i), l}, u}, i = 1, ..., m.   (31)

If ‖c_{k+1}‖ > R_k, set R_{k+1} = R_k and λ_{k+1} = λ_k. Set k := k + 1 and go to Step 1.

Remark 3. In practical calculation, it is not required to solve (30) exactly to find λ̃_{k+1}. In our implementation of Algorithm 2, we use the Matlab subroutine minres to find an approximate solution of the linear system A_{k+1} A_{k+1}^T λ = A_{k+1} g_{k+1} and take it as an approximation of λ̃_{k+1}.

3. Global Convergence

In this section, we discuss the global convergence of Algorithm 2. We assume that Algorithm 2 generates an infinite sequence {x_k} and make the following assumptions.

Assumptions 1. (A1) f(x) and c(x) are twice continuously differentiable. (A2) {x_k} and {B_k} are bounded, where B_k is a positive semidefinite approximation of ∇²_{xx} l(x_k, λ_k).

Firstly, we give a result on the upper bound of the trial step.

Lemma 4. If s_k solves subproblem (23), then

  ‖s_k‖ ≤ 4 ‖g̃_k‖ / μ_k or μ_k ≤ 2 ‖B̃_k‖   (32)

hold for all k ≥ 0.

Proof. Any approximate solution s_k of (23) satisfies Pred_k = Φ_k(x_k) − p_k(s_k) ≥ 0. Clearly,

  0 ≤ Φ_k(x_k) − p_k(s_k) = −g̃_k^T s_k − (1/2) s_k^T B̃_k s_k − (1/2) μ_k ‖s_k‖²
    ≤ ‖g̃_k‖ ‖s_k‖ + (1/2) ‖B̃_k‖ ‖s_k‖² − (1/2) μ_k ‖s_k‖²
    = (‖g̃_k‖ − (1/4) μ_k ‖s_k‖) ‖s_k‖ + (1/2) (‖B̃_k‖ − (1/2) μ_k) ‖s_k‖².   (33)

If ‖s_k‖ = 0, then (32) holds. If ‖s_k‖ > 0, then (‖g̃_k‖ − (1/4) μ_k ‖s_k‖) ‖s_k‖ + (1/2) (‖B̃_k‖ − (1/2) μ_k) ‖s_k‖² ≥ 0 implies that ‖g̃_k‖ − (1/4) μ_k ‖s_k‖ ≥ 0 or ‖B̃_k‖ − (1/2) μ_k ≥ 0. Thus we can obtain (32).

Now, we discuss the convergence properties in two cases. One is that the penalty parameter σ_k tends to ∞, and the other is that {σ_k} is bounded.

3.1. The Case of σ_k → ∞

Lemma 5. Suppose that (A1)-(A2) hold and σ_k → ∞; then there exists a constant c* such that ‖c_k‖ → c*.

Proof. See Lemma 3.1 in Wang and Yuan [15].

In Lemma 5, if c* > 0, then any accumulation point of {x_k} is infeasible. Sometimes (1) is naturally infeasible; in other words, the feasible set {x | c(x) = 0} is empty. In this case, we wish to find a minimizer of the constraint violation. Specifically, we wish to solve

  min_{x∈R^n} ‖c(x)‖².   (34)

The solution of this problem is characterized by

  A(x*)^T c(x*) = 0.   (35)

In the next theorem, we show that if {‖c_k‖} is not convergent to zero, at least one of the accumulation points of {x_k} satisfies (35).

Theorem 6. Suppose that (A1)-(A2) hold and σ_k → ∞. If ‖c_k‖ → c* > 0, then

  lim inf_{k→∞} ‖A_k^T c_k‖ = 0.   (36)

Proof. We prove this result by contradiction. Suppose that there exists some ε > 0 such that

  ‖A_k^T c_k‖ > 2ε, ∀k ≥ 0.   (37)

By the definition of g̃_k in (7), we know that

  ‖g̃_k‖ ≥ σ_k ‖A_k^T c_k‖ − ‖g_k − A_k^T λ_k‖.   (38)

As {x_k} and {λ_k} are bounded, we can deduce the boundedness of ‖g_k − A_k^T λ_k‖ by (A2); that is, there exists some M > 0 such that

  ‖g_k − A_k^T λ_k‖ < M.   (39)

By (37), (38), and (39), we can conclude that

  ‖g̃_k‖ ≥ 2 ε σ_k − M > ε σ_k   (40)

holds for all sufficiently large k. By the boundedness of B_k and A_k, we can conclude that there exists N > 0 such that

  ‖B̃_k‖ < N σ_k   (41)
holds for all sufficiently large k, where B̃_k is defined by (7). By (18), (40), and (41),

  Pred_k ≥ (‖g̃_k‖²/4) min {1/‖B̃_k‖, 1/μ_k}
        = (‖g̃_k‖/4) min {‖g̃_k‖/‖B̃_k‖, ‖g̃_k‖/μ_k}
        ≥ (ε σ_k/4) min {ε/N, ‖g̃_k‖/μ_k}   (42)

holds for all sufficiently large k. By the update rule of σ_k and the fact that σ_k → ∞, we have that

  Pred_k < δ_k σ_k min { ‖c_k‖ max {1, ‖g̃_k‖} / μ_k, ‖c_k‖² }   (43)

holds for infinitely many k. As ‖g̃_k‖ > 1 holds for all sufficiently large k by (40), it is easy to see that (42) contradicts (43), since δ_k σ_k → 0 and {‖c_k‖} is convergent. Thus we can prove the desired result.

Lemma 7. Suppose that (A1)-(A2) hold, σ_k → ∞, and ‖c_k‖ → 0; then

  lim inf_{k→∞} ‖g̃_k‖ = 0.   (44)

Proof. Assume that there exists ε > 0 such that

  ‖g̃_k‖ > ε > 0, ∀k ≥ 0.   (45)

Then, by (18) and (41), we know that, for all sufficiently large k,

  Pred_k ≥ (ε/4) min {ε/(N σ_k), ‖g̃_k‖/μ_k}.   (46)

By the update rule of σ_k and σ_k → ∞,

  Pred_k < δ_k σ_k min { ‖c_k‖ max {1, ‖g̃_k‖} / μ_k, ‖c_k‖² }   (47)

holds for infinitely many k. We will prove that (47) contradicts (46). Let K be the index set containing all k such that (47) holds. Therefore, for all k ∈ K,

  Pred_k < δ_k σ_k ‖c_k‖ max {1, ‖g̃_k‖} / μ_k,   (48)
  Pred_k < δ_k σ_k ‖c_k‖².   (49)

If there exists an infinite subset K1 ⊂ K such that ‖g̃_k‖ ≤ 1 holds for all k ∈ K1, then, by (48), it holds that

  Pred_k < δ_k σ_k ‖c_k‖ / μ_k, ∀k ∈ K1.   (50)

As δ_k σ_k → 0, ‖c_k‖ → 0, and ‖g̃_k‖ > ε, (50) implies that

  Pred_k < (ε/4) ‖g̃_k‖ / μ_k   (51)

holds for all sufficiently large k ∈ K1. If there exists an infinite subset K2 ⊂ K such that ‖g̃_k‖ > 1 holds for all k ∈ K2, then by (48) we have that

  Pred_k < δ_k σ_k ‖c_k‖ ‖g̃_k‖ / μ_k   (52)

for all k ∈ K2. Equation (52) also implies (51), as δ_k σ_k → 0 and ‖c_k‖ → 0. From (28) and (29), it follows that δ_k σ_k² = δ_0 σ_0² holds for all k ≥ 0. Therefore, by (49), we know that, for all k ∈ K,

  Pred_k < δ_k σ_k ‖c_k‖² = (1/σ_k) δ_k σ_k² ‖c_k‖² = (1/σ_k) δ_0 σ_0² ‖c_k‖².   (53)

As ‖c_k‖ → 0, (53) implies that

  Pred_k < (ε/4) · ε/(N σ_k)   (54)

holds for all sufficiently large k ∈ K. Combining (51) and (54), we obtain a contradiction to (46), which proves the desired result.

3.2. The Case of Bounded {σ_k}

In this subsection, we consider the case in which {σ_k} is bounded. Since σ_k is doubled each time it is increased, this means that σ_k is increased only finitely many times; without loss of generality, we assume that σ_k = σ_0 (and hence δ_k = δ_0) for all k ≥ 0. Then the test (27) fails for every k; that is,

  Pred_k ≥ δ_0 σ_0 min { ‖c_k‖ max {1, ‖g̃_k‖} / μ_k, ‖c_k‖² }   (56)

holds for all k ≥ 0. We denote the index set of successful iterations by

  S = {k | ρ_k ≥ γ1}.   (57)

Lemma 9. Suppose that (A1)-(A2) hold, σ_k = σ_0 for all k ≥ 0, and 1/μ_k → 0 as k → ∞. If there exists ε > 0 such that

  Pred_k > ε ‖g̃_k‖ / μ_k   (58)

holds for all sufficiently large k, then Σ_{k∈S} (‖g̃_k‖/μ_k) is divergent.
Proof. We prove this lemma by contradiction. We will show that if Σ_{k∈S} (‖g̃_k‖/μ_k) is convergent, then μ_{k+1} < μ_k holds for all sufficiently large k, which contradicts the fact that 1/μ_k → 0 as k → ∞. Suppose that

  Σ_{k∈S} (‖g̃_k‖/μ_k) = M̄.   (59)

lim_{k→∞} (1/μ_k) = 0 and (32) imply that

  ‖s_k‖ ≤ 4 ‖g̃_k‖ / μ_k   (60)

holds for all sufficiently large k. By the definition of S,

  Σ_{k=0}^∞ ‖x_{k+1} − x_k‖ = Σ_{k∈S} ‖s_k‖.   (61)

Equations (59)–(61) imply that {x_k} is convergent. Let

  t_k = (ρ_k − γ2) Pred_k = (1 − γ2) Pred_k + (Ared_k − Pred_k).   (62)

It is clear that t_k > 0 ⟺ ρ_k > γ2. By Taylor's theorem, it holds that

  Ared_k − Pred_k = p_k(s_k) − Φ_k(x_k + s_k)
    = −[∇Φ_k(ξ_k) − g̃_k]^T s_k + (1/2) s_k^T B̃_k s_k + (1/2) μ_k ‖s_k‖²
    ≥ −‖∇Φ_k(ξ_k) − g̃_k‖ ‖s_k‖,   (63)

where ξ_k is a convex combination of x_k and x_k + s_k. According to (60), we have that

  Ared_k − Pred_k ≥ −4 ‖∇Φ_k(ξ_k) − g̃_k‖ ‖g̃_k‖ / μ_k   (64)

holds for all sufficiently large k, and thus, by (58),

  t_k ≥ [ε (1 − γ2) − 4 ‖∇Φ_k(ξ_k) − g̃_k‖] ‖g̃_k‖ / μ_k.   (65)

The convergence of {x_k} and the boundedness of {λ_k} imply that ‖∇Φ_k(ξ_k) − g̃_k‖ → 0. Therefore, for all sufficiently large k, t_k > 0. This implies that ρ_k > γ2 and μ_{k+1} < μ_k.

Lemma 10. Suppose that (A1)-(A2) hold and σ_k = σ_0 for all k ≥ 0; then we have that

  lim_{k→∞} ‖c_k‖ = 0.   (66)

Proof. Firstly, we prove that the sum of Ared_k is bounded. Define the index set

  K = {k | ‖c_{k+1}‖ ≤ R_k},   (67)

where R_k is defined by Steps 0 and 4 of Algorithm 2. From Step 4 of Algorithm 2, we know that if k ∉ K, then ‖c_{k+1}‖ > R_k and λ_{k+1} = λ_k. Hence we have

  Σ_{k=0}^∞ (−λ_k^T c_k + λ_k^T c_{k+1})
    = −λ_0^T c_0 + Σ_{k∈K} (λ_k − λ_{k+1})^T c_{k+1}
    ≤ −λ_0^T c_0 + 2 ‖λ_max‖ Σ_{k∈K} ‖c_{k+1}‖
    ≤ −λ_0^T c_0 + 2 ‖λ_max‖ Σ_{k∈K} R_k,   (68)

where ‖λ_max‖ is the upper bound of {‖λ_k‖}. From Step 4 and (67), we have

  Σ_{k∈K} R_k ≤ R_0 (1 + 1/2 + 1/4 + ···) = 2 R_0,   (69)

which implies

  Σ_{k=0}^∞ (−λ_k^T c_k + λ_k^T c_{k+1}) ≤ −λ_0^T c_0 + 4 R_0 ‖λ_max‖.   (70)

Then, we have

  Σ_{k=0}^∞ Ared_k = Σ_{k=0}^∞ [Φ_k(x_k) − Φ_k(x_{k+1})]
    = Σ_{k=0}^∞ (f_k − f_{k+1}) + Σ_{k=0}^∞ (−λ_k^T c_k + λ_k^T c_{k+1}) + (σ_0/2) Σ_{k=0}^∞ (‖c_k‖² − ‖c_{k+1}‖²)
    ≤ M_0 + (−λ_0^T c_0 + 4 R_0 ‖λ_max‖) + (σ_0/2) ‖c_0‖² ≜ M̂,   (71)

where Φ_k(x) is defined by (5) and M_0 is an upper bound of the telescoping sum Σ_{k=0}^∞ (f_k − f_{k+1}), which is finite by (A1)-(A2). Secondly, we prove

  lim inf_{k→∞} ‖c_k‖ = 0   (72)

by contradiction. Suppose that there exists some ε > 0 such that

  ‖c_k‖ > ε > 0, ∀k ≥ 0.   (73)

Equations (56) and (73) imply that

  Pred_k ≥ δ_0 σ_0 min { ε max {1, ‖g̃_k‖} / μ_k, ε² }.   (74)

Considering the sum of Pred_k on the index set S (see (57)), we have by (71) that

  Σ_{k∈S} Pred_k ≤ (1/γ1) Σ_{k∈S} Ared_k = (1/γ1) Σ_{k=0}^∞ Ared_k ≤ M̂/γ1.   (75)

It can be deduced from (74) and (75) that

  Σ_{k∈S} max {1, ‖g̃_k‖} / μ_k < +∞   (76)

and thus

  Σ_{k∈S} ‖g̃_k‖/μ_k < +∞, Σ_{k∈S} 1/μ_k < +∞.   (77)

If S is a finite set, then it follows from (57) and Step 2 that ρ_k < γ1 and μ_{k+1} = η3 μ_k (η3 > 1) hold for all sufficiently large k; therefore, 1/μ_k → 0 as k → ∞. If S is an infinite set, the second inequality in (77) implies that 1/μ_k → 0 as k → ∞ with k ∈ S. From Step 2, we know that if k ∉ S, then μ_{k+1} ≥ μ_k. Hence, we have 1/μ_k → 0 as k → ∞. The fact that 1/μ_k → 0 and (74) imply that

  Pred_k ≥ δ_0 σ_0 ε ‖g̃_k‖ / μ_k   (78)

holds for all sufficiently large k. Hence it can be deduced by Lemma 9 that Σ_{k∈S} (‖g̃_k‖/μ_k) is divergent, which contradicts the first part of (77).

Finally, we prove (66). If S is a finite set, then {x_k} is convergent, and thus (72) implies (66). From now on, we assume that S is an infinite set. Suppose that (66) does not hold; then there exist an infinite index set K̃ = {k_i} (K̃ ⊂ S) and a constant ε > 0 such that

  ‖c_{k_i}‖ > 2ε holds ∀k_i ∈ K̃.   (79)

By (72), there also exists an infinite index set K̂ = {t_i} (K̂ ⊂ S) with k_i < t_i such that

  ‖c_k‖ ≥ ε holds ∀k such that k_i ≤ k < t_i,   (80)
  ‖c_{t_i}‖ < ε holds ∀t_i ∈ K̂.   (81)

Let K̄ = {k | k ∈ S, ‖c_k‖ ≥ ε}; then K̃ ⊂ K̄, K̄ is an infinite index set, and

  Σ_{k∈K̄} Pred_k ≤ Σ_{k∈S} Pred_k.   (82)

Therefore, by (75), we have that

  Σ_{k∈K̄} Pred_k ≤ M̂/γ1.   (83)

With the help of (56), (80), and (83), we obtain that

  Σ_{k∈K̄} max {1, ‖g̃_k‖} / μ_k < +∞.   (84)

A direct conclusion which can be drawn from (84) is

  1/μ_k → 0, as k ∈ K̄, k → ∞.   (85)

Thus, by Lemma 4, we have that, for all sufficiently large k ∈ K̄,

  ‖s_k‖ ≤ 4 ‖g̃_k‖ / μ_k.   (86)

Therefore, for all sufficiently large i,

  ‖x_{t_i} − x_{k_i}‖ ≤ Σ_{k_i ≤ k < t_i, k∈S} ‖s_k‖ ≤ 4 Σ_{k_i ≤ k < t_i, k∈K̄} ‖g̃_k‖/μ_k   (87)

holds, since every successful index k with k_i ≤ k < t_i belongs to K̄ by (80). By (84), the right-hand side of (87) tends to zero as i → ∞, so ‖x_{t_i} − x_{k_i}‖ → 0. On the other hand, (79) and (81) give ‖c_{k_i}‖ − ‖c_{t_i}‖ > ε, which, by the uniform continuity of c(x) on a bounded set containing {x_k}, implies that ‖x_{t_i} − x_{k_i}‖ is bounded away from zero. This contradiction proves (66).

Lemma 11. Suppose that (A1)-(A2) hold and σ_k = σ_0 for all k ≥ 0; then

  lim inf_{k→∞} ‖g̃_k‖ = 0.   (88)

Proof. We prove this result by contradiction. Suppose that there exists some ε > 0 such that

  ‖g̃_k‖ > ε > 0, ∀k ≥ 0.   (89)

By (18), we have that

  Pred_k ≥ (ε²/4) min {1/‖B̃_k‖, 1/μ_k}   (90)

holds for all k ≥ 0. As {B̃_k} is bounded above, similar to the second part of the proof of Lemma 10, we can conclude that

  1/μ_k → 0, as k → ∞,   (91)

and thus

  Pred_k ≥ (ε²/4) (1/μ_k)   (92)

holds for all sufficiently large k. By (75) and (92), we have that Σ_{k∈S} (1/μ_k) is convergent, and thus Σ_{k∈S} (‖g̃_k‖/μ_k) is also convergent, as {g̃_k} is bounded. However, Lemma 9, (91), (92), and the boundedness of {g̃_k} imply the divergence of Σ_{k∈S} (‖g̃_k‖/μ_k). This contradiction completes the proof.

With the help of Lemmas 10 and 11, we can easily obtain the following result.

Theorem 12. Suppose that (A1)-(A2) hold and σ_k = σ_0 for all k ≥ 0; then there exists an accumulation point of {x_k} at which the KKT condition holds.

Note that, in Theorem 12, we do not suppose that RCPLD holds.
4. Numerical Experiment

In this section, we investigate the performance of Algorithm 2. We compare Algorithm 2 with the well-known Fortran package ALGENCAN. In our computer program, the parameters in Algorithm 2 are chosen as follows:

  η1 = 0.1, η2 = 1, η3 = 4, γ1 = 0.1, γ2 = 0.9, μ_0 = 1, δ_0 = 1, u = 10^20, l = 10^−20.   (93)

We set B_k to be the exact Hessian of the Lagrangian f(x) − λ^T c(x) at the point x_k. The Matlab subroutine minres is used to solve (15). All algorithms are terminated when one of the following conditions holds: (1) ‖g_k − A_k^T λ_k‖ ≤ 10^−8 and ‖c_k‖ ≤ 10^−8; (2) ‖g_k − A_k^T λ_k‖ ≤ 10^−8 and ‖A_k^T c_k‖ ≤ 10^−8; (3) ‖s_k‖ ≤ 10^−8.

All test problems are chosen from the CUTEst collection [22]. The numerical results are listed in Table 1, where the name of the problem is denoted by Name, the number of its variables by n, the number of constraints by m, the number of function evaluations by nf, and the number of gradient evaluations by ng.

Table 1: Results of Algorithm 2 and ALGENCAN.

  Name        n     m    Algorithm 2      ALGENCAN
                         nf      ng       nf      ng
  AIRCRFTA    8     5    5       5        20      21
  ARGTRIG     200   200  69      69       15      16
  BOOTH       2     2    6       6        5       6
  BROWNALE    200   200  8       8        21      22
  BROYDN3D    5000  5000 23      23       22      23
  BYRDSPHR    3     2    42      21       80      78
  BT1         2     1    17      17       67      45
  BT2         3     1    40      37       74      71
  BT4         3     2    17      17       34      34
  BT5         3     1    12      11       28      28
  BT7         5     3    5       5        97      98
  BT8         5     2    19      15       76      51
  BT9         4     2    18      15       106     100
  BT10        2     2    15      15       95      95
  BT11        5     3    15      15       75      75
  BT12        5     3    10      10       20      21
  CHNRSNBE    50    98   100     37       19      20
  CLUSTER     2     2    11      11       26      27
  CUBENE      2     2    8       6        17      18
  DECONVNE    63    40   6       6        3982    1195
  EIGENB      7     7    2550    2550     2451    608
  GOTTFR      2     2    6       6        24      24
  HATFLDF     3     3    5       5        55      45
  HEART6      6     6    122     61       387     216
  HEART8      8     8    29      10       87      43
  HS39        4     2    65      61       106     100
  HS48        5     2    46      19       6       7
  HYDCAR6     29    29   267     85       212     154
  INTEGREQ    502   500  6       6        14      15
  MARATOS     2     1    13      11       42      43
  MWRIGHT     5     3    14      14       29      29
  ORTHREGB    27    6    6       6        23      24
  RECIPE      3     3    6       6        39      40
  RSNBRNE     2     2    6       6        16      17
  SINVALNE    2     2    6       6        8       8
  TRIGGER     7     6    481     290      757     270
  YFITNE      3     17   11      11       805     520
  ZANGWIL3    3     3    6       6        5       6

In Table 1, we list the results of 38 test problems. Considering the numbers of function evaluations (nf), Algorithm 2 is better than ALGENCAN in 30 cases (78.9%). Considering the numbers of gradient evaluations (ng), Algorithm 2 is better than ALGENCAN in 31 cases (81.6%).
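The multiplier update used in the implementation, the least squares problem (30), amounts to solving the normal equations A_{k+1} A_{k+1}^T λ = A_{k+1} g_{k+1} (Remark 3 applies minres to this system). A dense sketch with the safeguard projection (31) might look as follows; the Jacobian and gradient data are illustrative assumptions.

```python
import numpy as np

def multiplier_update(A, g, lo=-1e20, hi=1e20):
    # Solve min_lambda ||g - A^T lambda|| via the normal equations
    # A A^T lambda = A g, then project componentwise onto [lo, hi], cf. (31).
    lam = np.linalg.solve(A @ A.T, A @ g)
    return np.clip(lam, lo, hi)

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # illustrative Jacobian (m = 2, n = 3)
g = np.array([1.0, 2.0, 3.0])
lam = multiplier_update(A, g)
# At the least squares solution, the residual g - A^T lam is
# orthogonal to the rows of A.
print(np.round(A @ (g - A.T @ lam), 12))  # [0. 0.]
```

For large sparse Jacobians a Krylov method such as minres, which only needs matrix-vector products with A and A^T, would replace the dense solve, in line with Remark 3.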
5. Conclusions

In this paper, we present a new algorithm for equality constrained optimization. We add an adaptive quadratic term to the quadratic model of the augmented Lagrangian function. In each iteration, we solve a simple unconstrained subproblem to obtain the trial step. The global convergence is established under reasonable assumptions. From the numerical results and the theoretical analysis, we believe that the new algorithm can efficiently solve equality constrained optimization problems.
Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by NSFC (11771210, 11471159, 11571169, and 61661136001) and the Natural Science Foundation of Jiangsu Province (BK20141409).

References

[1] M. R. Hestenes, "Multiplier and gradient methods," Journal of Optimization Theory and Applications, vol. 4, pp. 303–320, 1969.
[2] M. J. D. Powell, "A method for nonlinear constraints in minimization problems," in Optimization, R. Fletcher, Ed., pp. 283–298, Academic Press, New York, NY, USA, 1969.
[3] A. R. Conn, N. I. Gould, and P. L. Toint, "A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds," SIAM Journal on Numerical Analysis, vol. 28, no. 2, pp. 545–572, 1991.
[4] A. R. Conn, N. I. Gould, and P. L. Toint, LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization (Release A), Springer, New York, NY, USA, 1992.
[5] R. Andreani, E. G. Birgin, J. M. Martínez, and M. L. Schuverdt, "Augmented Lagrangian methods under the constant positive linear dependence constraint qualification," Mathematical Programming, vol. 111, no. 1-2, Ser. B, pp. 5–32, 2008.
[6] R. Andreani, E. G. Birgin, J. M. Martínez, and M. L. Schuverdt, "On augmented Lagrangian methods with general lower-level constraints," SIAM Journal on Optimization, vol. 18, no. 4, pp. 1286–1309, 2007.
[7] E. G. Birgin and J. M. Martínez, "Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization," Computational Optimization and Applications, vol. 51, no. 3, pp. 941–965, 2012.
[8] E. G. Birgin and J. M. Martínez, "On the application of an augmented Lagrangian algorithm to some portfolio problems," EURO Journal on Computational Optimization, vol. 4, no. 1, pp. 79–92, 2016.
[9] Z. Dostál, "Semi-monotonic inexact augmented Lagrangians for quadratic programming with equality constraints," Optimization Methods & Software, vol. 20, no. 6, pp. 715–727, 2005.
[10] Z. Dostál, A. Friedlander, and S. A. Santos, "Augmented Lagrangians with adaptive precision control for quadratic programming with simple bounds and equality constraints," SIAM Journal on Optimization, vol. 13, no. 4, pp. 1120–1140, 2003.
[11] F. E. Curtis, H. Jiang, and D. P. Robinson, "An adaptive augmented Lagrangian method for large-scale constrained optimization," Mathematical Programming, vol. 152, no. 1-2, Ser. A, pp. 201–245, 2015.
[12] F. E. Curtis, N. I. Gould, H. Jiang, and D. P. Robinson, "Adaptive augmented Lagrangian methods: algorithms and practical numerical experience," Optimization Methods & Software, vol. 31, no. 1, pp. 157–186, 2016.
[13] A. F. Izmailov, M. V. Solodov, and E. I. Uskov, "Global convergence of augmented Lagrangian methods applied to optimization problems with degenerate constraints, including problems with complementarity constraints," SIAM Journal on Optimization, vol. 22, no. 4, pp. 1579–1606, 2012.
[14] L. Niu and Y. Yuan, "A new trust-region algorithm for nonlinear constrained optimization," Journal of Computational Mathematics, vol. 28, no. 1, pp. 72–86, 2010.
[15] X. Wang and Y. Yuan, "An augmented Lagrangian trust region method for equality constrained optimization," Optimization Methods & Software, vol. 30, no. 3, pp. 559–582, 2015.
[16] C. Cartis, N. I. Gould, and P. L. Toint, "Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results," Mathematical Programming, vol. 127, no. 2, Ser. A, pp. 245–295, 2011.
[17] C. Cartis, N. I. Gould, and P. L. Toint, "Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity," Mathematical Programming, vol. 130, no. 2, Ser. A, pp. 295–319, 2011.
[18] K. Ueda and N. Yamashita, "A regularized Newton method without line search for unconstrained optimization," Computational Optimization and Applications, vol. 59, no. 1-2, pp. 321–351, 2014.
[19] H. Zhang and Q. Ni, "A new regularized quasi-Newton algorithm for unconstrained optimization," Applied Mathematics and Computation, vol. 259, pp. 460–469, 2015.
[20] Y. X. Yuan, "On the convergence of a new trust region algorithm," Numerische Mathematik, vol. 70, no. 4, pp. 515–539, 1995.
[21] R. Andreani, G. Haeser, M. L. Schuverdt, and P. J. S. Silva, "A relaxed constant positive linear dependence constraint qualification and applications," Mathematical Programming, vol. 135, no. 1-2, Ser. A, pp. 255–273, 2012.
[22] N. I. M. Gould, D. Orban, and P. L. Toint, "CUTEst: a Constrained and Unconstrained Testing Environment with safe threads for mathematical optimization," Computational Optimization and Applications, vol. 60, no. 3, pp. 545–557, 2015.