International Conference on Mathematical Applications in Engineering (ICMAE’10), 3-5 August 2010, Kuala Lumpur, Malaysia
Capability of the Function Optimization Algorithms for Solving Functionals

SEYEDALIREZA SEYEDI 1, ROHANIN AHMAD 2, AND MOHD ISMAIL ABD AZIZ 3
Department of Mathematics, Faculty of Science, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
1 [email protected]  2 [email protected]  3 [email protected]
Abstract—An optimization algorithm is a numerical method for finding the optimum values of functions. This paper investigates the existence of the Inheritance and Generalizability properties of optimization algorithms. In particular, this study demonstrates that, based on these properties, optimization algorithms which work for functions can be extended to solve functionals directly.

Keywords: Subset Principle, Axiom of Induction, Function, Functional, Optimization Algorithm.
I. INTRODUCTION

The calculus of variations is a branch of mathematics which involves a generalization of calculus. It seeks the path, curve, surface, etc., for which a given function has a stationary value (which, in physical problems, is usually a minimum or maximum). This involves finding stationary values of integrals of the form

I = ∫_a^b F(y, y′, x) dx,

which is called a functional. I has an extremum if the Euler-Lagrange differential equation is satisfied [1], i.e., if

∂F/∂y − (d/dx)(∂F/∂y′) = 0.
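As a concrete check of this variational principle, a functional can be discretized and evaluated numerically. The following sketch is our own illustration (not part of the paper): it takes F = (y′)² with fixed endpoints y(0) = 0, y(1) = 1, whose Euler-Lagrange equation d/dx(2y′) = 0 is solved by the straight line y = x, and confirms that this stationary curve yields a smaller value of I than a perturbed admissible curve.

```python
# Illustrative sketch: discretize I[y] = ∫_0^1 (y')^2 dx on a grid and
# compare the Euler-Lagrange solution y = x against a perturbed curve.
import numpy as np

def functional_I(y, x):
    """Trapezoidal approximation of ∫ (y')^2 dx on the grid x."""
    dy = np.gradient(y, x)                       # finite-difference y'
    integrand = dy ** 2
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(0.0, 1.0, 201)
y_line = x                                       # Euler-Lagrange solution y = x
y_pert = x + 0.1 * np.sin(np.pi * x)             # perturbation vanishing at the endpoints

I_line = functional_I(y_line, x)                 # exactly 1 for y = x
I_pert = functional_I(y_pert, x)
assert I_line < I_pert                           # the stationary curve is the minimizer here
```

The assertion holds because the perturbation increases ∫ (y′)² dx by roughly 0.01π²/2 while the linear term integrates to zero.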
In most branches of science, finding optimal values of functionals is an increasingly important field of study. For this reason, many scientists have proposed appropriate optimization methods to find local or global optima of functionals. This research demonstrates properties which exist in the nature of the optimization algorithms used in this field.
978-1-4244-6235-3/10/$26.00 ©2010 IEEE
II. GENERALIZABILITY AND INHERITANCE PROPERTIES OF OPTIMIZATION ALGORITHMS
Let 𝒰 and 𝒱 be sets. The ordered pair (𝒰, 𝒱) is defined as the set {{𝒰}, {𝒰, 𝒱}}. A set is called a relation if and only if all its members are ordered pairs. A set f is called a function if and only if f is a relation and for each x ∈ domain(f) there exists a unique set y such that (x, y) ∈ f [2]. The following definition and theorem from functional analysis are useful for showing the existence of the Generalizability and Inheritance properties of optimization algorithms.

Definition 1: A functional is a map from a vector space of functions, usually to the real numbers. In other words, it is a function that takes functions as its argument or input and returns a real number [3], [4].

Theorem 1 (Subset Principle): If Ξ(x) is a condition on sets, then, for each set 𝒜, there exists a unique set whose members are precisely those members ℬ of 𝒜 for which Ξ(ℬ) holds [2].

From Theorem 1, let Ξ(·) be the laws and properties of functions. Let 𝒜 be the set of all functions and ℬ the set of all functionals. It is clear from Definition 1 that a functional is a type of function, so the set ℬ is a subset of the set 𝒜. Theorem 1 then implies that the set ℬ inherits its properties from the set 𝒜. Let 𝒟 be the set of all optimization algorithms which are used for functions and 𝒬 the set of all optimization algorithms which are used for functionals. Based on this, the function optimization algorithms, which belong to 𝒟, can work on the elements (functions) of 𝒜.
Hence, ℬ being a subset of 𝒜 implies that the function optimization algorithms of 𝒟 can also be used on the elements of ℬ. That is, the function optimization algorithms can also be used on functionals (the Generalizability property). It follows that the elements of 𝒟 also belong to 𝒬, i.e., the set of all function optimization algorithms (𝒟) equals the set of all functional optimization algorithms (𝒬). Therefore, the elements of 𝒬 are the elements of 𝒟, which implies that the properties of 𝒟 are the properties of 𝒬 (the Inheritance property). The following sections investigate the existence of these properties in optimization algorithms.
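Definition 1 can be made concrete in code: a functional is a higher-order map that consumes a function and produces a real number. The sketch below is our own illustration (the functional J[u] = ∫₀¹ u(t)² dt is our choice, not the paper's), making the distinction between function and functional explicit.

```python
# Illustrative sketch: a functional takes a function as input and returns
# a real number. Here J[u] = ∫_0^1 u(t)^2 dt via the composite trapezoidal rule.
import math

def J(u, n=1000):
    """A functional: maps a function u to a real number."""
    h = 1.0 / n
    vals = [u(i * h) ** 2 for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * vals[0] - 0.5 * vals[-1])  # trapezoidal rule

print(J(lambda t: t))   # ∫_0^1 t^2 dt = 1/3
print(J(math.sin))      # ∫_0^1 sin^2(t) dt ≈ 0.2727
```

Note that the arguments to J are functions, not numbers: this is exactly the sense in which a functional is itself a function whose domain is a set of functions.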
III. EXAMPLES OF INHERITANCE AND GENERALIZABILITY PROPERTIES IN OPTIMIZATION ALGORITHMS

The previous section investigated the properties of functional optimization algorithms theoretically, based on the function framework. This section demonstrates these properties numerically in some well-known optimization algorithms. Each subsection applies an algorithm to both functions and functionals to accentuate the existence of these properties.

IV. SIMPLEX METHOD

The simplex algorithm is a popular method for the numerical solution of the linear programming problem. The journal Computing in Science and Engineering listed it as one of the top 10 algorithms of the century [5]. The following examples were solved by the method for both functions and functionals.

The following example from [6] is a successful application of the method to a function.

Example 1:
min f(x) = −6x₁ + 2x₁² − 2x₁x₂ + 2x₂²
subject to
x₁ + x₂ ≤ 2, x₁, x₂ ≥ 0.

The optimal solution of the problem was x₁ = 3/2, x₂ = 1/2, and the corresponding value of the objective function was f(x) = −11/2.

The method can also be extended for use on functionals. This is evident from the examples gathered in [7], where the method was used to solve functionals successfully. An excerpt of those examples is given in Example 2.

Example 2:
min (1/2) ∫₀¹ u²(t) dt
subject to
ẋ(t) = x + u, x(0) = 0.

This problem was solved by the simplex method with subroutine DLPRS from the IMSL library of Compaq Visual Fortran 6. The optimal value of the objective function was 0.145218.

V. NEWTON METHOD

Newton's method is a well-known algorithm for finding roots of equations in one or more dimensions. It can also be used to find optima of functions: if a real number x* is a stationary point of a function f(x), then x* is a root of the derivative f′(x), and therefore one can solve for x* by applying Newton's method to f′(x) [8].

The above discussion shows that Newton's method can be used as an optimization algorithm for finding the optimizer of functions. The following problem is an example solved by using the Newton optimization method on a function.

Example 3: Consider f(x) = x³ + 4x² − 10. The Newton optimization method was used with several initial solutions. After a number of iterations the optimizer of f(x) was obtained. The numerical results are listed in Table 1, with i being the iteration number [9].

Based on the same idea, the following is an example of the application of Newton's method to the optimization of functionals. In solving the following problem, the Chebyshev polynomial technique was used for discretization. Example 4 was carried out on a VAX/8530 and a CDC Cyber 170/750 by [10].

Example 4: The objective is to find the optimal control u(t) which optimizes the energy cost functional

J = (1/2) ∫₀¹ (x² + u²) dt     (1)

subject to

ẋ = −x + u, x(0) = 1.     (2)

The optimum solution is J = 0.19290922 for p = 6, where p is the order of the Chebyshev polynomial technique for solving (1)-(2) (see Table 2).
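The optimization use of Newton's method can be sketched in a few lines: iterate x ← x − f′(x)/f″(x) to find a root of f′. The code below is our own illustration applied to Example 3's f(x) = x³ + 4x² − 10 (the paper's implementation is not shown); it converges to the stationary point x* = 0, which is a local minimizer since f″(0) = 8 > 0.

```python
# Illustrative sketch: Newton's method applied to f'(x) to locate a
# stationary point of f(x) = x^3 + 4x^2 - 10 (Example 3).
def newton_optimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Find a root of df (a stationary point of f) via Newton iteration."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

df  = lambda x: 3 * x**2 + 8 * x   # f'(x)
d2f = lambda x: 6 * x + 8          # f''(x)

x_star = newton_optimize(df, d2f, x0=1.0)
print(x_star)                       # converges to the stationary point x* = 0
```

Starting from x₀ = 1 the iterates contract quadratically toward 0; starting near x = −8/3 the same iteration would instead find the local maximizer, which is why the sign of f″ must be checked at the result.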
Table 1. Value of the optimizer x* of f(x) with initial point x₀ in iteration i, for Example 3 [9].

Table 2. Performance index for different values of p, for Example 4 [10].

VI. GENETIC ALGORITHM

The genetic algorithm (GA) is a search technique used in computing to find approximate solutions to optimization and search problems. Genetic algorithms are categorized as global search heuristics.

The following examples show that GA was used successfully for both functions and functionals. The first example demonstrates the capability of GA for optimizing functions.

An example used in the literature for testing genetic algorithms in [11] is presented below as an illustration. All simulations were done on a Pentium-IV machine.

Example 5: Objective function
min (y₁ − 1)² + (y₂ − 2)² + (y₃ − 1)² − log(y₄ + 1) + (x₁ − 1)² + (x₂ − 2)² + (x₃ − 3)²
subject to
y₁ + y₂ + y₃ + x₁ + x₂ + x₃ ≤ 5
y₃² + x₁² + x₂² + x₃² ≤ 5.5
y₁ + x₁ ≤ 1.2
y₂ + x₂ ≤ 1.8
y₃ + x₃ ≤ 2.5
y₄ + x₁ ≤ 1.2
y₂² + x₂² ≤ 1.64
y₃² + x₃² ≤ 4.25
y₂² + x₃² ≤ 4.64
xᵢ ≥ 0, i = 1, …, 3
yᵢ ∈ {0, 1}, i = 1, …, 4.

This is a MINLP optimization problem taken from Floudas et al. (1989), with four binary variables, three continuous variables, and nine inequality constraints. The optimum solution obtained with the genetic algorithm in [11] was 4.971699587.

GA was also used successfully for minimizing functionals. The following example was minimized by GA after discretization by the Chebyshev polynomial technique. The authors of [12] used the Genetic Algorithm Toolbox of Matlab (Ver. 7.04) to solve it.

Example 6: The authors of [12] considered the functional problem
min I = ∫₀¹ u²(t) dt
subject to
ẋ(t) = x²(t) + u(t), x(0) = 0, x(1) = 0.5.

After solving with GA, the authors of [12] obtained the following results: the final value x(1) = 0.500054, the optimal value I* = 0.4447, and the error function e(1) = 5.43 × 10⁻⁵. The control and trajectory functions are shown in Fig. 1 and Fig. 2, respectively.

Figure 1. Control function for Example 6 [12].

Figure 2. Trajectory function for Example 6 [12].
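The selection-crossover-mutation loop underlying a GA can be sketched minimally as follows. This is our own illustration of the general technique on a toy objective f(x) = x², not the BASIC algorithm of [11] or the Matlab toolbox used in [12].

```python
# Illustrative sketch of a genetic algorithm minimizing f(x) = x^2 on [-5, 5]
# via truncation selection, arithmetic crossover, and Gaussian mutation.
import random

def ga_minimize(f, lo, hi, pop_size=40, generations=100, mut_rate=0.2):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                          # rank by fitness (lower is better)
        parents = pop[:pop_size // 2]            # truncation selection keeps the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                # arithmetic crossover
            if random.random() < mut_rate:       # Gaussian mutation
                child += random.gauss(0.0, 0.05 * (hi - lo))
            children.append(min(max(child, lo), hi))
        pop = parents + children
    return min(pop, key=f)

random.seed(0)
best = ga_minimize(lambda x: x * x, -5.0, 5.0)
print(best)   # close to the global minimizer x = 0
```

Because the top half of the population survives each generation, the best individual never worsens, and crossover contracts the population toward the minimizer.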
VII. CUTTING ANGLE METHOD

The Cutting Angle Method (CAM) is based on results in abstract convexity [13]. It was proposed in [14] and extended in [13] for optimizing increasing convex-along-rays functions. As a special case, CAM is a generalization of the cutting plane method (CPM) from convex optimization. It can be applied to a very broad class of non-convex global optimization problems in which the functions involved possess suitable generalized affine minorants [14].

The following example is an application of CAM to functions. The author of [15] considered the following example on a function.

Example 7:
min f₁(x) = max_{i=1,2,…,n} aᵢxᵢ + min_{i=1,2,…,n} cᵢxᵢ,
aᵢ = 2 + i/2; cᵢ = (i + 2)(n − i + 2), i = 1, 2, …, n.

The optimal value of the objective function after 18 iterations was 1.4449.

Based on the Inheritance and Generalizability properties of optimization algorithms, CAM has been extended for use on functionals. The author found the optimizer of the following example on a functional with CAM, using Simpson's rule for discretization.

Example 8:
min J = ∫₀^0.78 (x₁²(t) + x₂²(t) + 0.1u²(t)) dt
s.t.
ẋ₁ = −(x₁(t) + 0.25) + (x₂(t) + 0.5) exp(25x₁(t)/(x₁(t) + 2)) − (1 + u(t))(x₁(t) + 0.25),
ẋ₂ = 0.5 − x₂(t) − (x₂(t) + 0.5) exp(25x₁(t)/(x₁(t) + 2)),
x(0) = [0.05 0]ᵀ.

Figure 3. Optimum state vectors of Example 8.

Figure 4. Optimum control vector of Example 8.
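Simpson's rule, used above to discretize the cost functional of Example 8, can be sketched as follows. This is our own illustration (the paper's discretization code is not shown), checked on a known integral: composite Simpson's rule is exact for polynomials up to degree three.

```python
# Illustrative sketch: composite Simpson's rule for approximating ∫_a^b f(t) dt,
# the quadrature the paper applies when discretizing Example 8's functional.
def simpson(f, a, b, n=100):
    """Composite Simpson's rule with n subintervals (n is forced even)."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)  # 4-2-4-2... interior weights
    return s * h / 3.0

# Check on ∫_0^1 t^2 dt = 1/3, for which Simpson's rule is exact.
print(simpson(lambda t: t * t, 0.0, 1.0))   # 0.3333...
```

Applied to Example 8, the same routine would be evaluated on sampled values of x₁²(t) + x₂²(t) + 0.1u²(t) over [0, 0.78].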
The global solution of this example was found after three iterations of CAM and is 0.0290. Fig. 3 shows the optimal state vectors x₁ and x₂ of the problem, while Fig. 4 shows the trajectory of the optimal control vector.

VIII. CONCLUSION

The main result of this paper is the demonstration of the existence of the Generalizability and Inheritance properties in optimization algorithms. These properties lead to the claim that all optimization algorithms for functions can be extended for use on functionals.
REFERENCES

[1] E. W. Weisstein, "Calculus of Variations," MathWorld--A Wolfram Web Resource, 1999, http://mathworld.wolfram.com/CalculusofVariations.html.
[2] M. Ó Searcóid, Elements of Abstract Analysis, Springer Undergraduate Mathematics Series, Berlin Heidelberg New York: Springer-Verlag, 2002.
[3] R. Todd, "Functional," MathWorld--A Wolfram Web Resource, created by E. W. Weisstein, 1999, http://mathworld.wolfram.com/Functional.html.
[4] S. Lang, Algebra, Addison-Wesley, 1993.
[5] Sci-Tech Dictionary, McGraw-Hill Dictionary of Scientific and Technical Terms, McGraw-Hill Companies, Inc., 2003, http://www.answers.com/topic/lower-semicontinuous-function.
[6] K. Swarup and M. K. Bedi, "Convex Simplex Method and Nonlinear Programming Problems," Indian Journal of Pure and Applied Mathematics 6, No. 2, 1972.
[7] A. J. Fakharzadeh, S. A. Ghasemiyan, and A. J. Badiozzaman, "Survey for Abilities of Wavelets in Solving Optimal Control Problems by Embedding Methods," Int. J. Contemp. Math. Sciences, Vol. 3, No. 14, 2008.
[8] A. Mordecai, Nonlinear Programming: Analysis and Methods, Dover Publishing, 2003.
[9] S. Seyedi, R. Ahmad, and M. I. Abd Aziz, "Inheritance of Function Properties for Functionals," ICORAFSS 2009, Malaysia, 2009.
[10] M. A. Kazemi and M. Miri, "Numerical Solution of Optimal Control Problems," IEEE, 1993.
[11] E. G. Shopova and N. G. Vaklieva-Bancheva, "BASIC—A genetic algorithm for engineering problems solution," Computers and Chemical Engineering 30, Elsevier, 2006.
[12] O. S. Fard and A. H. Borzabadi, "Optimal Control Problem, Quasi-Assignment Problem and Genetic Algorithm," Proceedings of World Academy of Science, Engineering and Technology, Vol. 21, 2007.
[13] A. M. Rubinov, Abstract Convexity and Global Optimization, Nonconvex Optimization and its Applications, Kluwer Academic Publishers, 2000.
[14] M. Andramonov, A. Rubinov, and B. Glover, "Cutting Angle Methods in Global Optimization," Applied Mathematics Letters 12, Elsevier Science Ltd, 1999.
[15] L. M. Batten and G. Beliakov, "Fast Algorithm for the Cutting Angle Method of Global Optimization," Journal of Global Optimization, Netherlands: Kluwer Academic Publishers, 2002.