International Journal of Innovative Computing, Information and Control, Volume 7, Number 11, November 2011, pp. 6301-6318.
© 2011 ICIC International. ISSN 1349-4198.
SWARM INTELLIGENCE OPTIMIZED NEURAL NETWORKS FOR SOLVING FRACTIONAL DIFFERENTIAL EQUATIONS

Muhammad Asif Zahoor Raja¹, Ijaz Mansoor Qureshi² and Junaid Ali Khan¹

¹ Department of Electronic Engineering, International Islamic University, H-10, Islamabad, Pakistan
{ asif.phdee10; junaid.phdee17 }@iiu.edu.pk

² Department of Electrical Engineering, Air University, Air Headquarters, E-9, Islamabad, Pakistan
[email protected]
Received March 2010; revised August 2010

Abstract. In this paper, a swarm intelligence technique, better known as particle swarm optimization, has been used to solve fractional differential equations. The approximate mathematical modeling has been done by employing feed-forward artificial neural networks and defining an unsupervised error. The learning of the weights for such errors has been carried out using particle swarm optimization hybridized with the simulated annealing algorithm for efficient local search. The design scheme has been successfully applied to solve different problems associated with linear and nonlinear ordinary differential equations of fractional order. The results were compared with available exact solutions, analytic solutions and standard numerical techniques, including both deterministic and stochastic approaches. In the case of linear ordinary fractional differential equations, relatively more precise solutions were obtained than those of the deterministic numerical methods. Moreover, for complex nonlinear fractional differential equations, the technique is still applicable, but with reduced accuracy. The advantages of the proposed scheme are easy implementation, simplicity of concept and broad scope of applications.

Keywords: Computational intelligence, Fractional differential equations, Particle swarm optimization, Neural networks, Numerical computing, Simulated annealing
1. Introduction. In the last few decades, fractional differential equations (FDEs) have gained considerable importance due to their varied applications in the fields of applied sciences and engineering [1,2]. Historical surveys, theory and applications have been presented by various writers, including Miller and Ross [3], Oldham and Spanier [4] and A. K. Anatoly et al. [5]. The problem of developing numerical solvers for FDEs has attracted many researchers. In this regard, successful advancements have been made in extending existing classical, as well as modern, numerical solvers. Approximate analytic solutions have been derived and successfully applied to a variety of linear and nonlinear FDEs. Some of the important numerical solvers include the Adomian decomposition method [6,7], the variational iteration method [8,9], the homotopy analysis method [10,11], the Taylor collocation method [12], etc. Classical numerical solvers like the Grünwald-Letnikov method and Lubich's convolution quadrature method have also been applied to solve a number of FDEs [13], but with reduced accuracy. Besides this, Deng [14] combined the short memory principle with a predictor-corrector approach to solve such problems more precisely. Recently, Podlubny [15,16] has provided his famous method of successive approximation in matrix
form to solve FDEs and systems of FDEs. However, so far, no noticeable advancement has been seen in extending stochastic numerical methods to solve such problems. The stochastic methods have universal capability to solve a variety of differential equations. In such a scheme, an approximate mathematical model of the equation is built with the help of feed-forward artificial neural networks. The learning of the weights of the neural networks is carried out with initial populations based on biologically inspired methods, like evolutionary techniques and swarm intelligence [17,18]. A variety of practically significant applications associated with differential equations has been solved by these techniques [19-21]. However, these techniques are limited to differential equations involving integer order derivatives. There is a strong need to extend the existing stochastic solvers to differential equations of fractional order. In our previous works, such problems have been solved successfully by using artificial neural networks (ANNs) aided with genetic algorithms [22,23].

The particle swarm optimization (PSO) algorithm, presented first by Kennedy and Eberhart [24], is a global optimization technique inspired by the social behavior of bird flocking and fish schooling. Its discrete and continuous versions have been widely applied to different optimization problems in science and engineering. A few examples include the application of its discrete version in multiuser detection schemes in mobile communications [25-27] and sensor networks [28]. Similarly, its continuous versions have been used in diverse fields like inventory control [29], multiprocessor scheduling [30], controls [31,32], etc. Some improved versions of PSO can also be seen elsewhere [33-35].

In this article, the strength of neural networks has been exploited to represent the approximate mathematical model of fractional differential equations, but, this time, the learning of the unknown weights of the neural network is carried out using PSO hybridized with simulated annealing (SA). A large number of simulations are performed with this stochastic solver for sufficient statistical analysis of the results and to determine the effectiveness of the scheme. Comparison of this numerical solver has been made with neural networks aided with various population based stochastic algorithms, as well as with classical deterministic approaches including the Grünwald-Letnikov (GL) and Podlubny matrix approaches. The general form of the ordinary FDEs solved in this article is written as:

D^\nu y(t) = f\left(t, y(t), D^n y(t)\right), \quad 0 \le t \le T,    (1)

with initial conditions as follows:

D^k y(0) = c_k, \quad k = 0, 1, 2, \ldots, N-1,    (2)

and the boundary condition at t = t_b, for 0 ≤ t_b ≤ T, is written as:

D^k y(t_b) = b_k, \quad k = 0, 1, 2, \ldots, N-1,    (3)
where D is the operator giving the derivative of fractional order D^ν and integer order D^n, ν > 0, ν ∈ R, N = ⌈ν⌉, and c_k and b_k are the constants representing the initial and boundary conditions, respectively. Our investigation in this paper is limited to linear and nonlinear fractional differential equations which contain only one fractional derivative. The developed methodology can easily and efficiently be extended to a variety of FDEs in electromagnetics [36], fluid dynamics [37] and control problems [38]. Before introducing the proposed methodology, it is necessary to introduce some definitions and relations which will be used in the next sections. The fractional integral and fractional derivative have been expressed in the literature in a variety of ways, including Riemann-Liouville, Caputo, Erdélyi-Kober, Hadamard, Grünwald-Letnikov and Riesz type
fractional integrals and derivatives, etc. [3-5]. All these definitions have their own importance and advantages in different types of mathematical problems. Throughout this paper, the Riemann-Liouville definition of the fractional derivative, with lower terminal at zero, will be used. The definition of fractional integration of order ν > 0 is given as [39]:

(I^\nu f)(t) = \frac{1}{\Gamma(\nu)} \int_0^t (t - \tau)^{\nu-1} f(\tau)\, d\tau,    (4)

(I^0 f)(t) = f(t),    (5)

along with the fractional derivative of order ν > 0, which is normally given as:

D^\nu f(t) = \frac{d^n}{dt^n} \left( I^{n-\nu} f \right)(t), \quad n - 1 < \nu \le n,    (6)

where D^ν is the fractional derivative and n is an integer. By using (6), the fractional derivative of the exponential function f(t) = e^{at} is obtained by simple mathematical manipulation as:

D^\nu e^{at} = t^{-\nu} M_{1,1-\nu}(at),    (7)

where M_{1,1-ν}(at), the Mittag-Leffler function of two parameters α = 1 and β = 1 − ν, is defined by the series expansion

M_{\alpha,\beta}(t) = \sum_{k=0}^{\infty} \frac{t^k}{\Gamma(\alpha k + \beta)} \qquad (\alpha > 0,\ \beta > 0).    (8)
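Since the networks of Section 2.2 and the simulations below lean heavily on (7) and (8), a small computational sketch may help fix ideas. The following Python fragment is ours, not part of the original text (the authors used Podlubny's MATLAB routine; see the Acknowledgment); it evaluates M_{α,β} by direct truncation of the series (8):

    import math

    def mittag_leffler(alpha, beta, z, max_terms=200, tol=1e-16):
        """Two-parameter Mittag-Leffler function M_{alpha,beta}(z) of Eq. (8),
        by direct truncation of the series (adequate for the moderate |z|
        arising from bounded weights and t in [0, 1])."""
        total = 0.0
        for k in range(max_terms):
            g = alpha * k + beta
            if g > 170.0:                 # math.gamma overflows past ~171
                break
            term = z**k / math.gamma(g)
            total += term
            if abs(term) < tol:
                break
        return total

    def frac_deriv_exp(nu, a, t):
        """Eq. (7): D^nu e^{a t} = t^{-nu} * M_{1,1-nu}(a t)."""
        return t**(-nu) * mittag_leffler(1.0, 1.0 - nu, a * t)

    print(mittag_leffler(1.0, 1.0, 1.0))  # M_{1,1}(z) = e^z, so ~2.71828

Direct truncation suffices here because the arguments w_i t stay moderate for t ∈ [0, 1] and weights bounded in [−10, 10]; more robust evaluators would be needed for large arguments.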
2. Mathematical Model. In this section, a detailed description is provided of the development of the mathematical model for FDEs. The model is developed using feed-forward ANNs by defining an unsupervised error. The designed scheme is presented first for ordinary differential equations involving integer order derivatives, and then extended to fractional differential equations.

2.1. Integer order case. The general form of an ordinary differential equation of integer order n can be represented as a special case of (1) by taking n = ν:

D^n y(t) = f(t, y(t), Dy(t)), \quad 0 \le t \le T,    (9)

where the initial and boundary conditions are as given in (2) and (3). The solution y(t) of such equations, along with its nth order derivative, can be approximated by the following continuous mappings, as used in the ANN methodology [17-21]:

\hat{y}(t) = \sum_{i=1}^{m} \alpha_i A(w_i t + b_i),    (10)

D^n \hat{y}(t) = \sum_{i=1}^{m} \alpha_i D^n A(w_i t + b_i),    (11)

where α_i, w_i and b_i are bounded real-valued adaptive weights, m is the number of neurons, and A is the activation function, normally taken as the log-sigmoid function

A(x) = \frac{1}{1 + e^{-x}}.    (12)

An ANN architecture formed by linear combinations of the networks represented in (10) and (11) can approximately model integer order ordinary differential equations as given in (9).
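As a concrete illustration of (10)-(12), a minimal NumPy sketch of the trial solution and its first derivative follows; the function names and the three-neuron initialization are ours, for illustration only:

    import numpy as np

    def log_sigmoid(x):
        """Activation A(x) of Eq. (12)."""
        return 1.0 / (1.0 + np.exp(-x))

    def y_hat(t, alpha, w, b):
        """Trial solution of Eq. (10): a linear combination of m
        log-sigmoid neurons; alpha, w, b are length-m weight vectors."""
        return np.sum(alpha * log_sigmoid(w * t + b))

    def dy_hat(t, alpha, w, b):
        """First derivative of Eq. (10), using d/dt A(wt+b) = w*A*(1-A)."""
        a = log_sigmoid(w * t + b)
        return np.sum(alpha * w * a * (1.0 - a))

    # Example with m = 3 randomly initialized neurons
    rng = np.random.default_rng(0)
    alpha, w, b = (rng.uniform(-1, 1, 3) for _ in range(3))
    print(y_hat(0.5, alpha, w, b), dy_hat(0.5, alpha, w, b))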
2.2. Fractional order case. The networks given in (10) and (11) cannot directly be applied to ordinary differential equations of fractional order, due to the extreme difficulty of obtaining the fractional derivative of the log-sigmoid activation function. To resolve this issue, the exponential function is taken as a candidate to replace the log-sigmoid function in the neural network modeling. It has universal function approximation capability and a known fractional derivative as well. The approximate continuous mappings, in the form of linear combinations of exponential functions, can be taken to approximate the solution y(t) and its integer and fractional derivatives as

\hat{y}(t) = \sum_{i=1}^{m} \alpha_i e^{w_i t + b_i},    (13)

D^n \hat{y}(t) = \sum_{i=1}^{m} \alpha_i w_i^n e^{w_i t + b_i},    (14)

D^\nu \hat{y}(t) = \sum_{i=1}^{m} \alpha_i e^{b_i} t^{-\nu} M_{1,1-\nu}(w_i t),    (15)

respectively. The linear combination of the networks represented in (13)-(15) can approximately model the fractional differential equations as given in (1). The standard ANN architecture has thus been extended to be applicable to the solution of such problems. It is named the fractional differential equation neural network (FDE-NN). A generic form of the FDE-NN architecture is represented in Figure 1.
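A sketch of the corresponding FDE-NN forward pass, combining (13) and (15) and reusing the mittag_leffler() helper sketched in Section 1 (again ours, for illustration):

    import numpy as np

    def fde_nn(t, alpha, w, b, nu):
        """Evaluate the FDE-NN mappings at a scalar t > 0: the trial
        solution of Eq. (13) and its fractional derivative of Eq. (15).
        Reuses the mittag_leffler() sketch from Section 1."""
        y = float(np.sum(alpha * np.exp(w * t + b)))                # Eq. (13)
        ml = np.array([mittag_leffler(1.0, 1.0 - nu, wi * t) for wi in w])
        d_nu = float(np.sum(alpha * np.exp(b) * t**(-nu) * ml))    # Eq. (15)
        return y, d_nu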
Figure 1. Fractional differential equation neural network architecture for FDEs

3. Learning Procedure. In this section, the learning procedure is given for finding the unknown weights of the networks representing the FDEs with the help of efficient stochastic computational intelligence algorithms. These training procedures are based on the simulated annealing (SA) technique, PSO, and PSO hybridized with the SA algorithm.

Simulated annealing is a probabilistic computational method for local and global optimization problems of applied mathematics. It is a technique inspired by the heating and controlled cooling of materials. Its goal is to efficiently find the required
objective parameters in a fixed amount of time, rather than the best possible solution. This method was originally introduced in 1983 and is still widely used for optimization [40,41].

In the standard PSO algorithm, each single solution to an optimization problem is considered as a particle in the search space. The exploration of the problem space is made in PSO by a population of particles called a swarm. All particles in the swarm have fitness values which are evaluated by the fitness function related to the specific optimization problem. The PSO algorithm is initialized with a swarm of particles placed randomly in the search space and searches for the optimal solution iteratively. In each iteration, the position and the velocity of each particle are updated according to its previously known local best position P_L^{n-1} and the global best position P_G^{n-1} of all particles in the swarm so far. The updating formulas for each particle's velocity and position in continuous standard PSO are written as:

\mathbf{v}_i^{n} = \omega \mathbf{v}_i^{n-1} + c_1 \mathbf{r}_1 \left( \mathbf{P}_L^{n-1} - \mathbf{x}_i^{n-1} \right) + c_2 \mathbf{r}_2 \left( \mathbf{P}_G^{n-1} - \mathbf{x}_i^{n-1} \right),    (16)

\mathbf{x}_i^{n} = \mathbf{x}_i^{n-1} + \mathbf{v}_i^{n},    (17)

where x_i is the vector representing the ith particle of the swarm, i = 1, 2, ..., M, M is the total number of particles in the swarm, v_i is the velocity vector associated with the ith particle, c_1 and c_2 are the cognitive and social acceleration constants, ω is the inertia weight, linearly decreasing over the course of the search between 0 and 1, and r_1 and r_2 are random vectors with elements distributed between 0 and 1. The position and velocity are taken as restricted real numbers such that (x_i, v_i) ∈ R^d, where d is the dimension of the search space. A broader spread of the initial swarm results in better performance of the algorithm. The elements of the velocity are assigned as v_i ∈ [−v_max, v_max], where v_max is the maximum velocity, defined by the user according to the objective optimization function. If the velocity goes beyond this maximum value, it is set to v_max. This parameter controls the convergence rate and can prevent the method from diverging too fast. The termination criterion for the iterations is that the maximum number of flights/cycles is completed or a designated value of the fitness is achieved. The flowchart showing the process of the proposed algorithm is given in Figure 2. The objective function to be minimized is given as the sum of errors

e_j = e_1^j + e_2^j, \quad j = 1, 2, \ldots    (18)
where j is the flight number and e_1^j is given as:

e_1^j = \frac{1}{s} \sum_{i=0}^{s} \left[ D^\nu \hat{y}(t_i) - f\left(t_i, \hat{y}(t_i), D^n \hat{y}(t_i)\right) \right]^2 \Big|_j,    (19)

where s is the number of time steps, and ŷ, D^n ŷ and D^ν ŷ are the networks represented in (13)-(15), respectively. The value of s is adjusted as a tradeoff between the computational complexity and the accuracy of the algorithm. Similarly, e_2^j is linked with the initial and boundary conditions and can be written as:

e_2^j = \frac{1}{N} \sum_{k=0}^{N-1} \left[ D^k \hat{y}(0) - c_k \right]^2 + \frac{1}{N} \sum_{k=0}^{N-1} \left[ D^k \hat{y}(t_b) - b_k \right]^2 \Big|_j.    (20)

The iterative process for optimization continues until a user defined number of cycles is reached or a pre-defined level of the error e_j is obtained. The algorithm is given in the following steps:

Step 1: Initialize swarm: Randomly generate bounded real values to form the initial swarm of particles. Each particle represents the unknown parameters of the neural network. The initial swarm is scattered widely enough to give the algorithm a good search space.
Figure 2. Flowchart of the particle swarm optimization algorithm

Step 2: Initialization: The following parameter values are assigned for algorithm execution. Set the number of flights. Set the fitness limit and start the cycle count. Set the values of the individual best and global best acceleration factors. Set the value of the inertia weight ω and the maximum velocity v_max.
Step 3: Fitness Evaluation: Calculate the fitness by using the fitness function given in expressions (18)-(20).
Step 4: Ranking: Rank each particle of the swarm on the basis of minimum fitness value. Store the best fitness particle.
Step 5: Termination Criteria: Terminate the algorithm if either the predefined fitness value, i.e., MSE ≤ 10^{-8} for linear FDEs and 10^{-4} for nonlinear FDEs, is achieved, or the maximum number of flights/cycles is reached. If yes, go to Step 7, else continue.
Step 6: Renewal: Update the velocity using Equation (16). Update the position using Equation (17). Repeat the procedure from Step 3 to Step 6 until the total number of flights is reached.
Step 7: Storage: Store the best fitted particle so far and name it the global best particle for this run.
Step 8: Refinement: The MATLAB optimization toolbox is used for the simulated annealing algorithm for further fine-tuning, taking the best fitted particle as the start point of the algorithm. Store the value of the fitness along with the best particle for this run. Stop.

A compact sketch of this PSO-SA training loop is given below.
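The sketch below (ours, not the authors' code) condenses Steps 1-8 into Python, with the Table 1 schedules for ω, c1, c2 and v_max; SciPy's dual_annealing stands in for the MATLAB simulated annealing routine of Step 8:

    import numpy as np
    from scipy.optimize import dual_annealing  # stands in for the SA step

    def pso_sa(fitness, dim=30, swarm=80, flights=2000, vmax=2.0,
               tol=1e-8, seed=0):
        """PSO per Eqs. (16)-(17) with SA refinement (Step 8); `fitness`
        maps a weight vector (alpha, w, b flattened) to the error e_j.
        Defaults mirror the Problem I settings of Table 1."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-10, 10, (swarm, dim))        # Step 1: initial swarm
        v = np.zeros((swarm, dim))
        p_best = x.copy()                             # local bests P_L
        p_val = np.apply_along_axis(fitness, 1, x)
        g_best = p_best[p_val.argmin()].copy()        # global best P_G
        for n in range(flights):                      # Steps 3-6
            w_in = 0.9 - 0.5 * n / flights            # inertia: 0.9 -> 0.4
            c1 = 5.0 - 4.5 * n / flights              # cognitive: 5 -> 0.5
            c2 = 0.5 + 4.5 * n / flights              # social: 0.5 -> 5
            r1, r2 = rng.random((2, swarm, dim))
            v = (w_in * v + c1 * r1 * (p_best - x)
                 + c2 * r2 * (g_best - x))            # Eq. (16)
            v = np.clip(v, -vmax, vmax)
            x = x + v                                 # Eq. (17)
            val = np.apply_along_axis(fitness, 1, x)
            improved = val < p_val
            p_best[improved], p_val[improved] = x[improved], val[improved]
            g_best = p_best[p_val.argmin()].copy()
            if p_val.min() <= tol:                    # Step 5
                break
        # Step 8: SA fine-tuning starting from the global best particle
        res = dual_annealing(fitness, bounds=[(-10, 10)] * dim,
                             x0=np.clip(g_best, -10, 10))
        return res.x, res.fun

A run for Problem I below would then be pso_sa(fitness_problem1, dim=30), with fitness_problem1 as sketched in Section 4.1.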
4. Simulation and Results. The designed scheme was applied to three different problems of FDEs using the FDE-NN method, and comparisons were made with exact solutions and other numerical methods to validate the obtained results.

4.1. Problem I. In this problem, we take a linear ordinary fractional differential equation with known exact solution, and analyze the applicability of the proposed design scheme. The following equation is taken, which has also been solved by many authors in fractional calculus [13,22,23,42,43]:

D^\nu y(t) = t^2 + \frac{2}{\Gamma(3-\nu)}\, t^{2-\nu} - y(t), \quad 0 < \nu \le 1,    (21)

with conditions y(0) = 0 and y(1) = 1. The exact solution is given as:

y(t) = t^2.    (22)
To solve this problem using the FDE-NN methodology, the learning of the weights was made with PSO-SA, a hybrid intelligent algorithm. Results were also determined by training the weights with the PSO, GA, GA-SA and SA algorithms. A total of 10 neurons were taken, which results in 30 unknown parameters or weights (α_i, w_i and b_i). These weights were restricted to be real numbers between −10 and 10. The values of the fractional order derivative ν were taken as 0.5 and 0.75. The input t is taken between 0 and 1 with a step size of 0.1. Therefore, the fitness function is formulated as:

e_j = \frac{1}{11} \sum_{i=1}^{11} \left[ D^\nu \hat{y}(t_i) - t_i^2 - \frac{2}{\Gamma(3-\nu)}\, t_i^{2-\nu} + \hat{y}(t_i) \right]^2 + \left[\hat{y}(0)\right]^2 + \left[\hat{y}(1) - 1\right]^2 \Big|_j, \quad j = 1, 2, 3, \ldots    (23)

where j is the flight index, and ŷ and D^ν ŷ are the networks given in (13) and (15), respectively. The parameter settings used for the evaluation of results are provided in Table 1.

Table 1. Parameter settings of the algorithms

    PSO Parameters     Setting
    Swarm size         80
    Particle size      30
    Flights            2000
    c1                 Linearly decreasing (5 to 0.5)
    c2                 Linearly increasing (0.5 to 5)
    Inertia weight     Linearly decreasing (0.9 to 0.4)
    vmax               2

    GA Parameters      Setting
    Population size    200
    Chromosome size    30
    Generations        1000
    Selection          Stochastic uniform
    Crossover fraction 0.8
    Crossover (method) Scattered
    Mutation           Adaptive feasible
    Elite count        4

    SA Parameters              Setting
    Start point size           30
    Annealing function         Fast
    Iterations                 60000
    Max. function evaluations  90000
    Reannealing interval       100
    Temperature update         Exponential
    Initial temperature        100
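For illustration, the fitness (23) can be coded directly on top of the fde_nn() sketch of Section 2.2. Note that our grid starts at t = 0.1, since the Riemann-Liouville derivative (15) is singular at t = 0; this is an implementation choice the paper does not spell out:

    import math
    import numpy as np

    def fitness_problem1(weights, nu=0.5):
        """Unsupervised error e_j of Eq. (23); `weights` is the flattened
        (alpha, w, b) vector of the 10-neuron FDE-NN. Reuses fde_nn()."""
        alpha, w, b = np.split(np.asarray(weights), 3)
        ts = np.linspace(0.1, 1.0, 10)   # interior grid, step 0.1
        res = []
        for t in ts:
            y, d_nu = fde_nn(t, alpha, w, b, nu)
            res.append((d_nu - t**2
                        - 2.0 / math.gamma(3.0 - nu) * t**(2.0 - nu)
                        + y)**2)
        y0 = float(np.sum(alpha * np.exp(b)))   # y_hat(0) from Eq. (13)
        y1, _ = fde_nn(1.0, alpha, w, b, nu)
        return float(np.mean(res) + y0**2 + (y1 - 1.0)**2)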
Our scheme runs iteratively in order to find the minimum of the fitness function e_j, with stopping criterion of 2000 flights or a fitness value e_j ≤ 10^{-8}, whichever comes earlier. One of the sets of unknown weights learned by the PSO-SA algorithm is provided in Table 2. These weights can be used in Equation (13) to obtain the solution of the equation for any input time t between 0 and 1. The classical deterministic numerical technique in the form of the GL method is also used to obtain the solution of Equation (21). The relation used to solve the equation can be written as:

y_m = \frac{1}{1 + h^\nu} \left( h^\nu \left( t_{m-1}^2 + \frac{2}{\Gamma(3-\nu)}\, t_{m-1}^{2-\nu} \right) - \sum_{j=1}^{m} W_j^\nu\, y_{m-j} \right), \quad m = 1, 2, 3, \ldots, 100,    (24)

where y_m = y(mh), h = 0.01, t_m = mh and W_j^\nu = (-1)^j \binom{\nu}{j}, j = 0, 1, 2, \ldots
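A short sketch (ours) of the GL recursion (24); the ratio rule for the binomial weights is standard, and the forcing is evaluated at t_{m-1}, following (24) as reconstructed above:

    import math

    def gl_solve_problem1(nu=0.5, h=0.01, steps=100):
        """March the Grünwald-Letnikov recursion of Eq. (24) for Problem I.
        W_j^nu = (-1)^j * binom(nu, j), built by the ratio rule
        W_j = W_{j-1} * (1 - (nu+1)/j), W_0 = 1."""
        W = [1.0]
        for j in range(1, steps + 1):
            W.append(W[-1] * (1.0 - (nu + 1.0) / j))
        y = [0.0]                          # y_0 = y(0) = 0
        for m in range(1, steps + 1):
            t = (m - 1) * h
            rhs = h**nu * (t**2
                           + 2.0 / math.gamma(3.0 - nu) * t**(2.0 - nu))
            rhs -= sum(W[j] * y[m - j] for j in range(1, m + 1))
            y.append(rhs / (1.0 + h**nu))
        return y  # y[m] approximates y(m*h); the exact value is (m*h)**2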
Table 2. The sample set of weights trained by FDE-NN with the PSO-SA algorithm

              ν = 0.5                               ν = 0.75
  i      w_i        α_i        b_i          w_i        α_i        b_i
  1   –0.868488   2.267107   0.158642     0.818694   1.037723  –0.043346
  2    0.000202   0.508606  –1.322480     0.826450   0.690690  –0.162463
  3   –0.114243  –0.541645   1.406505     0.846779  –0.513357  –0.973743
  4    0.351035  –0.664693  –0.157457     0.247052  –0.093627  –0.677092
  5    0.194204  –0.656206  –3.438001     1.023052  –0.416330  –4.214132
  6   –0.365107   1.172230  –0.431901     0.235052  –1.818209  –0.428702
  7   –0.573958  –0.516307   0.319690     0.019579   0.597240   0.746637
  8    0.727216   0.951739   0.458089    –1.490539   0.292185   0.784214
  9    0.792515   0.138632  –0.129783     0.726425  –0.321066  –1.424499
 10   –0.557634  –2.763771  –0.504124    –0.070790  –1.453336   0.305023
The solutions obtained for the equation by FDE-NN with the PSO-SA algorithm and by the GL method are given in Table 3. It also includes the exact solution, as well as the reported result of the stochastic solver using a neural network supported by genetic algorithms [22]. It can be seen from the table that our proposed methodology gives better results than the GL method. It also gives better results compared to those of GA-SA, although the population size for the GA-SA case was 200, while that for PSO-SA was only 80. Hence, with less than half the computational cost, the results of PSO-SA are still comparable to those of GA-SA.
Table 3. Results for the solution of the FDE given in Problem I

                           ν = 0.5                                         ν = 0.75
       Exact        FDE-NN                   Absolute Error           FDE-NN          Absolute Error
  t    y(t)    Our     GA [22]   GL      Our     GA [22]  GL        Our     GL       Our     GL
 0.0   0.00   2.8e-5   0.0004  0.0000   2.8e-5   3.7e-4  0.0000    0.0001  0.0000   6.3e-5  0.0000
 0.1   0.01   0.0099     –     0.0104   1.1e-4     –     4.0e-4    0.0103  0.0107   3.1e-4  6.7e-4
 0.2   0.04   0.0404   0.0396  0.0407   3.6e-4   3.6e-4  7.4e-4    0.0414  0.0413   1.4e-3  1.2e-3
 0.3   0.09   0.0910     –     0.0910   9.6e-4     –     1.1e-3    0.0928  0.0918   2.8e-3  1.8e-3
 0.4   0.16   0.1614   0.1596  0.1613   1.4e-3   4.3e-4  1.3e-3    0.1636  0.1622   3.6e-3  2.2e-3
 0.5   0.25   0.2514     –     0.2516   1.4e-3     –     1.6e-3    0.2538  0.2527   3.8e-3  2.7e-3
 0.6   0.36   0.3611   0.3573  0.3619   1.1e-3   2.7e-3  1.9e-3    0.3631  0.3631   3.1e-3  3.1e-3
 0.7   0.49   0.4905     –     0.4921   4.9e-4     –     2.1e-3    0.4918  0.4934   1.8e-3  3.4e-3
 0.8   0.64   0.6399   0.6352  0.6423   6.5e-5   4.8e-3  2.3e-3    0.6402  0.6438   2.3e-4  3.8e-3
 0.9   0.81   0.8098     –     0.8126   1.7e-4     –     2.6e-3    0.8091  0.8141   9.3e-4  4.1e-3
 1.0   1.00   1.0007   1.0004  1.0028   7.5e-4   3.5e-4  2.8e-3    0.9991  1.0044   9.1e-4  4.4e-3
For further analysis of the scheme, results were also determined for inputs t ∈ (0, 10). The results are shown graphically in Figure 3. It can be seen from Figure 3(a) that the result obtained by our algorithm overlaps the exact solution. In order to view the difference clearly, a zoomed view is also shown in Figure 3(b). The training of the FDE-NN was made for bounded inputs between 0 and 1. Therefore, the error starts to accumulate for inputs greater than 1, as can be seen from Figure 3(c). It
may start to grow rapidly as the input deviates further from the training interval. Moreover, it can be seen from Figure 3(d) that the solution starts to diverge for inputs greater than 4.
Figure 3. Comparison of our result with the exact solution for different input intervals for ν = 0.75

Sufficient simulations have been performed to test the reliability of our designed scheme. In this regard, 100 independent runs were carried out for finding the weights of FDE-NN optimized with the SA, GA, GA-SA, PSO and PSO-SA algorithms. The MATLAB optimization toolbox is used for SA and GA, with the parameter settings given in Table 1. The terms best and worst correspond to the minimum and maximum errors, and statistical parameters like the mean and standard deviation (STD) are useful to determine the behavior of the results. The best, worst, mean and STD for inputs t ∈ (0, 1) with step 0.1 were calculated for FDE-NN optimized by PSO-SA. The results are summarized in Table 4. These results validate the applicability of our scheme for the solution of such equations. Moreover, the other stochastic solvers were applied to this optimization problem for comparison. In this regard, 100 independent runs for finding the weights by SA, GA, GA-SA and standard PSO were also executed. The comparison of the results for some inputs is given in Table 5. It can be seen that the best results are obtained using the PSO-SA hybrid technique. The same is verified by Figure 4, in which the value of the objective function e_j is plotted in descending order against the number of independent runs of the different solvers. It is further added that our approach used optimization mainly based on the particle swarm optimization technique, which has already been shown to have lower computational complexity than Genetic Algorithm based approaches [44,45].
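The best/worst/mean/STD entries of Tables 4, 5, 8 and 12 follow directly from the per-run absolute errors; a tiny sketch (the array layout and the use of the sample standard deviation are our choices):

    import numpy as np

    def run_statistics(abs_errors):
        """Summarize independent runs: `abs_errors` is a (runs, grid)
        array of absolute errors, one row per run, one column per time
        point."""
        return {"Best":  abs_errors.min(axis=0),
                "Worst": abs_errors.max(axis=0),
                "Mean":  abs_errors.mean(axis=0),
                "STD":   abs_errors.std(axis=0, ddof=1)}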
Table 4. Statistical parameters of the solution by FDE-NN optimized with the PSO-SA scheme

                     ν = 0.5                                      ν = 0.75
  t     Best      Worst     Mean      STD         Best      Worst     Mean      STD
 0.0   3.03e-06  0.03467   4.45e-03  5.36e-03    2.55e-05  0.05611   6.03e-03  8.59e-03
 0.1   1.31e-05  0.01877   5.22e-03  3.94e-03    1.31e-04  0.01429   5.33e-03  3.41e-03
 0.2   1.69e-04  0.03917   8.42e-03  7.46e-03    9.79e-05  0.04466   9.82e-03  8.60e-03
 0.3   2.34e-06  0.04928   8.94e-03  9.27e-03    1.88e-04  0.07427   1.22e-02  1.16e-02
 0.4   8.41e-04  0.05069   1.00e-02  8.93e-03    8.71e-04  0.09121   1.41e-02  1.33e-02
 0.5   1.91e-05  0.04241   9.37e-03  1.04e-02    2.77e-04  0.09974   1.39e-02  1.62e-02
 0.6   1.04e-04  0.04956   9.44e-03  1.15e-02    1.32e-05  0.10673   1.30e-02  1.81e-02
 0.7   2.46e-04  0.04828   1.09e-03  1.06e-02    3.69e-05  0.09606   1.41e-02  1.58e-02
 0.8   3.55e-05  0.03809   1.06e-02  8.22e-03    2.67e-05  0.06624   1.27e-02  1.10e-02
 0.9   4.89e-05  0.02684   5.59e-03  4.52e-03    1.20e-05  0.03013   6.10e-03  4.83e-03
 1.0   3.77e-06  0.02643   6.55e-03  5.52e-03    3.15e-05  0.05315   8.49e-03  8.57e-03
Table 5. Comparison of stochastic solvers for the solution of Problem I

                          ν = 0.5                                       ν = 0.75
  t   FDE-NN   Best      Worst     Mean      STD         Best      Worst     Mean      STD
 0.2  SA       2.47e-03  33.1611   1.25208   3.18773     3.58e-03  10.4269   1.04062   1.38306
      GA       4.52e-03  0.05746   1.89e-02  9.37e-03    4.44e-04  0.08311   2.83e-02  1.09e-02
      GA-SA    3.09e-04  0.04393   1.50e-02  8.57e-03    4.38e-04  0.05050   2.45e-02  9.55e-03
      PSO      7.25e-04  0.06474   1.12e-02  9.48e-03    3.60e-04  0.06313   1.29e-02  1.08e-02
      PSO-SA   1.69e-04  0.03917   8.42e-03  7.46e-03    9.79e-05  0.04466   9.82e-03  8.60e-03
 0.8  SA       3.38e-03  59.8564   1.20177   5.42192     6.48e-04  7.97759   0.68042   1.13319
      GA       4.86e-04  0.07343   1.61e-02  1.26e-02    2.28e-04  0.07917   3.14e-02  1.66e-02
      GA-SA    1.67e-04  0.06357   1.38e-02  1.12e-02    1.71e-05  0.06028   2.19e-02  1.39e-02
      PSO      6.17e-05  0.06138   1.44e-02  1.16e-02    4.95e-05  0.07404   1.41e-02  1.28e-02
      PSO-SA   3.55e-05  0.03809   1.06e-02  8.22e-03    2.67e-05  0.06624   1.27e-02  1.10e-02
4.2. Problem II. In this example, our intent is to analyze the proposed methodology further by applying it to a differential equation of fractional order for which the exact solution is not known. Therefore, comparative studies of the scheme are carried out against an approximate analytic solver and other standard numerical methods. Let us take the following ordinary fractional differential equation [23]:

Dy(t) - \frac{11}{15}\, D^{1/2} y(t) + \frac{2}{15}\, y(t) = 0, \quad y(0) = 0.    (25)

The approximate analytic solution can be determined by the direct approach method [3]. It is written in terms of a special function as:

y(t) = a_1\, \varepsilon_t(0, a_1^2) - a_2\, \varepsilon_t(0, a_2^2) + a_1^2\, \varepsilon_t(1/2, a_1^2) - a_2^2\, \varepsilon_t(1/2, a_2^2),    (26)

where a_1 and a_2 are the zeros of the indicial polynomial of (25), and ε_t is a special function of two arguments which can be represented in terms of the Mittag-Leffler function as:

\varepsilon_t(\nu, a) = \sum_{k=0}^{\infty} \frac{a^k t^{\nu+k}}{\Gamma(\nu + k + 1)} = t^{\nu} M_{1,1+\nu}(at),    (27)

where M_{1,1+ν}(at) is given in (8).
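The paper does not list the numerical values of a_1 and a_2; they follow from a short calculation (ours). The indicial polynomial of (25) in z = s^{1/2} is quadratic:

z^2 - \frac{11}{15}\, z + \frac{2}{15} = 0 \quad \Longrightarrow \quad z = \frac{\frac{11}{15} \pm \sqrt{\left(\frac{11}{15}\right)^2 - \frac{8}{15}}}{2} = \frac{11 \pm 1}{30},

so a_1 = \frac{2}{5} and a_2 = \frac{1}{3}, with a_1 + a_2 = \frac{11}{15} and a_1 a_2 = \frac{2}{15}. With these values, a_1 - a_2 = 1/15 ≈ 0.0667, which matches the DAM entry at t = 0 in Table 7.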
Figure 4. Comparison of FDE-NN networks optimized with stochastic solvers: (a) is for ν = 0.5 and (b) is for ν = 0.75

Similarly, the solution of Equation (25) can be written in terms of the error function as:

y(t) = a_1 e^{a_1^2 t}\, \mathrm{erfc}\left(-a_1 t^{1/2}\right) - a_2 e^{a_2^2 t}\, \mathrm{erfc}\left(-a_2 t^{1/2}\right).    (28)

Equation (25) is an important equation in fractional calculus. It can be interpreted as a simplified form of the composite fractional relaxation equation, and it also represents a special case of the unsteady motion of a particle accelerating in a viscous fluid under the action of gravity, which is referred to as the Basset problem [46,47].

This problem has been simulated by the FDE-NN networks (13)-(15) in a manner similar to the previous example. The input of the training set is chosen from time t ∈ (0, 1) with a step of 0.1. The fitness function can be given as:

e_j = \frac{1}{11} \sum_{i=1}^{11} \left[ D\hat{y}(t_i) - \frac{11}{15}\, D^{1/2}\hat{y}(t_i) + \frac{2}{15}\, \hat{y}(t_i) \right]^2 + \left[\hat{y}(0)\right]^2 \Big|_j, \quad j = 1, 2, 3, \ldots    (29)

The sets of weights learned stochastically using the GA-SA, PSO and PSO-SA algorithms are given in Table 6. Using these weights in Equation (13), the solution of the equation can be determined. The famous numerical technique developed by Podlubny [39], based on the successive approximation method, is also applied to solve the equation. The recursive relation used for the computations is given as:

y_m = \frac{y_{m-1} + \frac{11}{15}\, h^{1/2} \sum_{j=1}^{m} W_j^{1/2}\, y_{m-j}}{1 + \frac{2}{15}\, h - \frac{11}{15}\, h^{1/2}}, \quad m = 1, 2, 3, \ldots, 100, \quad h = 0.01.    (30)

A sketch of both baselines, (28) and (30), is given after this passage. In order to compare the results on the input interval t ∈ (0, 1) with step 0.1, the Podlubny numerical method (PNM) using Equation (30), the approximate analytical solution using the direct approach method (DAM) as given in (28), and the solutions due to the stochastic solvers GA-SA, PSO and PSO-SA are given in Table 7. It can be seen that our algorithm can also approximate the solution for such equations. Once again, 100 independent runs have been executed for this equation using the solvers SA, GA, GA-SA, PSO and PSO-SA. The minimum value of the fitness function e_j is set as the figure of merit for this equation. A summary of the statistical results is provided in Table 8 and plotted in Figure 5.
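For reference, a sketch (ours) of both baselines: the analytic solution (28) via SciPy's erfc, and the recursion (30). The seed y_0 of the recursion is our assumption, taken from the DAM value y(0) = a_1 − a_2 = 1/15 in Table 7, since the text does not state the initialization:

    import math
    from scipy.special import erfc  # for Eq. (28)

    A1, A2 = 2.0 / 5.0, 1.0 / 3.0   # zeros of the indicial polynomial

    def dam_solution(t):
        """Direct approach solution, Eq. (28)."""
        return (A1 * math.exp(A1**2 * t) * erfc(-A1 * math.sqrt(t))
                - A2 * math.exp(A2**2 * t) * erfc(-A2 * math.sqrt(t)))

    def pnm_solution(h=0.01, steps=100, y0=1.0 / 15.0):
        """Podlubny recursion, Eq. (30); y0 is an assumption (see above)."""
        W = [1.0]                    # W_j^{1/2} via the ratio rule, nu = 1/2
        for j in range(1, steps + 1):
            W.append(W[-1] * (1.0 - 1.5 / j))
        y = [y0]
        denom = 1.0 + (2.0 / 15.0) * h - (11.0 / 15.0) * math.sqrt(h)
        for m in range(1, steps + 1):
            s = sum(W[j] * y[m - j] for j in range(1, m + 1))
            y.append((y[m - 1] + (11.0 / 15.0) * math.sqrt(h) * s) / denom)
        return y

    print(dam_solution(0.0), dam_solution(1.0))  # ~0.0667 and ~0.1629, cf. Table 7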
Table 6. A set of weights trained for the FDE-NN networks

           GA-SA                        PSO                          PSO-SA
  i    w_i      α_i      b_i       w_i      α_i      b_i        w_i      α_i      b_i
  1   0.4094   0.9826  –0.2061   –2.3181  –0.1225   0.9630    –0.0892   0.3091   0.1164
  2  –0.6304  –0.7774  –1.1842    1.0130   0.1499  –0.6659     0.4324   0.1317  –1.7017
  3   0.2215   0.5764  –0.5567   –0.2743   0.1722  –0.3252    –0.0593   0.2479  –1.5050
  4  –0.7881   0.2389  –0.2642   –0.9922   0.8438   0.0738    –0.3343   0.3535  –0.5573
  5   0.5069   0.4321  –0.6154    0.0771  –2.5002  –0.1074    –6.0474  –0.7880  –1.2859
  6   0.5308  –0.7908   0.6351   –0.4106   0.2219   0.3278     0.2264  –0.0343   0.4295
  7   0.3133   1.6549  –0.5389   –0.0258   0.9376   0.9195    –2.0075   0.0881   0.9903
  8   0.2510  –1.3473  –1.0939   –1.0528   0.5050  –0.5135    –0.6530  –0.2343   0.0673
  9  –0.1792   0.8190  –0.3509   –0.0671   0.8402  –2.1567    –0.0425  –1.4225  –0.8739
 10  –0.6003  –1.6898  –0.6316   –0.8067  –1.4298   0.1100     0.3306   0.1853   0.2807
Table 7. Results for the solution of the equation in Problem II

  t     DAM       PNM       GA-SA     PSO       PSO-SA
 0.0   0.066666  0.066700  0.008640  0.005948  0.003596
 0.1   0.087156  0.080612  0.042230  0.048428  0.068514
 0.2   0.097760  0.089371  0.071560  0.080050  0.103152
 0.3   0.106931  0.096917  0.096658  0.103402  0.119309
 0.4   0.115451  0.103878  0.117534  0.120541  0.126997
 0.5   0.123620  0.110499  0.134181  0.133108  0.131393
 0.6   0.131591  0.116910  0.146577  0.142423  0.135090
 0.7   0.139453  0.123186  0.154685  0.149556  0.139316
 0.8   0.147266  0.129376  0.158449  0.155386  0.144588
 0.9   0.155071  0.135515  0.157798  0.160649  0.151071
 1.0   0.162898  0.141627  0.152642  0.165974  0.158757
The trend of the results is the same as in the case of the previous example, and the PSO-SA hybrid intelligent approach is invariably the best.

Table 8. Comparison of stochastic solvers using the error e_j for Problem II

  FDE-NN    Best      Worst     Mean      STD
  SA        1.47e-02  112.161   4.12208   7.1233
  GA        1.37e-05  0.05746   2.81e-03  4.57e-03
  GA-SA     9.01e-06  0.04393   1.75e-03  3.72e-03
  PSO       1.25e-05  0.06474   1.82e-03  9.48e-03
  PSO-SA    1.79e-06  0.03917   1.41e-03  2.45e-03

Figure 5. Comparison of results for FDE-NN networks optimized with stochastic solvers
4.3. Problem III. In this example, a complex fractional differential equation is taken to determine the strengths and weaknesses of our proposed scheme for such models. In this regard, let us take the nonlinear ordinary fractional differential equation given as [13,23,48]:

D^\nu y(t) = \frac{40320}{\Gamma(9-\nu)}\, t^{8-\nu} - 3\, \frac{\Gamma(5+\nu/2)}{\Gamma(5-\nu/2)}\, t^{4-\nu/2} + \frac{9}{4}\, \Gamma(\nu+1) + \left( \frac{3}{2}\, t^{\nu/2} - t^4 \right)^3 - \left[ y(t) \right]^{3/2},    (31)
with initial and boundary conditions given as y(0) = 0 and y(1) = 0.25, respectively. The exact solution of this equation is given as:

y(t) = t^8 - 3\, t^{4+\nu/2} + \frac{9}{4}\, t^{\nu}.    (32)

The classical numerical techniques used for the solution of FDEs in Problems I and II are unable to provide a solution to such problems. However, modern deterministic methods with higher computational cost can provide a solution for (31), such as the fractional Adams method [49] and the variational iteration method [50]. The simulation has been performed for this problem following a pattern similar to the previous examples. The order of the fractional derivative ν is taken as 0.25, 0.5 and 0.75. The sets of weights learned stochastically using the PSO-SA algorithm are given in Table 9.

Table 9. A set of weights trained for the FDE-NN networks by the PSO-SA algorithm

           ν = 0.25                     ν = 0.5                       ν = 0.75
  i    w_i      α_i      b_i       w_i      α_i      b_i        w_i      α_i      b_i
  1   0.1054   0.6575   0.0868   –1.6488  –0.0591   1.7376     0.5114   0.9316   0.1784
  2   0.2637   3.1429  –0.9555   –0.1061   0.5117  –1.1450    –0.0521   0.9119  –0.2006
  3   0.0883   0.4428   0.7153   –1.2606   0.4263  –0.4043    –4.6482   0.5299  –2.0818
  4  –5.3349  –1.9707   0.2794   –0.7293  –0.0394  –7.9611    –2.0321  –0.657    1.1406
  5   1.4004  –0.6343   0.2388   –0.5461   0.4472   1.6015    –0.4419   0.0081  –0.215
  6   0.4616  –0.7894  –0.3201    2.9929  –0.4345  –1.6370    –1.7249  –0.063    0.4909
  7  –0.1989   7.2787  –6.1817    0.5516   1.1411  –1.3916     1.2395   0.3243  –0.5894
  8   0.2006   1.2917   0.5472   –2.5526  –2.0805  –0.5869    –0.3688   1.23    –1.6236
  9  –0.8850  –0.8777  –1.5429   –3.6911  –0.8474   0.6270     2.0127   0.0272   1.375
 10   0.5557  –0.7218   0.2288   –0.5337   0.1364   0.4523     2.4863  –2.503   –2.1158
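As a cross-check on (31) and (32) (ours, using the Riemann-Liouville power rule D^ν t^p = Γ(p+1)/Γ(p+1−ν) t^{p−ν}), the exact solution can be verified to satisfy the equation termwise:

    import math

    def frac_deriv_power(p, nu, t):
        """Riemann-Liouville derivative of a power:
        D^nu t^p = Gamma(p+1)/Gamma(p+1-nu) * t^(p-nu)."""
        return math.gamma(p + 1.0) / math.gamma(p + 1.0 - nu) * t**(p - nu)

    def exact_problem3(t, nu):
        """Exact solution, Eq. (32)."""
        return t**8 - 3.0 * t**(4.0 + nu / 2.0) + 2.25 * t**nu

    def rhs_problem3(t, y, nu):
        """Right-hand side of Eq. (31)."""
        return (40320.0 / math.gamma(9.0 - nu) * t**(8.0 - nu)
                - 3.0 * math.gamma(5.0 + nu / 2.0)
                      / math.gamma(5.0 - nu / 2.0) * t**(4.0 - nu / 2.0)
                + 2.25 * math.gamma(nu + 1.0)
                + (1.5 * t**(nu / 2.0) - t**4)**3
                - y**1.5)

    # The termwise D^nu of Eq. (32) must match Eq. (31) with y exact:
    nu, t = 0.5, 0.7
    lhs = (frac_deriv_power(8.0, nu, t)
           - 3.0 * frac_deriv_power(4.0 + nu / 2.0, nu, t)
           + 2.25 * frac_deriv_power(nu, nu, t))
    print(abs(lhs - rhs_problem3(t, exact_problem3(t, nu), nu)))  # ~ 0

The check works because (3/2 t^{ν/2} − t^4)^3 − [y(t)]^{3/2} vanishes identically for the exact solution on [0, 1], since y(t) = (t^4 − (3/2)t^{ν/2})^2.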
Using these weights in Equation (13), the results can be obtained; they are given in Table 10. Moreover, a graphical comparison of our obtained solution with the exact solution is given in Figure 6. The numerical results of the Adams scheme [13], based on the predictor-corrector approach, are given separately in Table 11, as its errors are reported only for t = 1
Table 10. Summary of the results for the solution of the FDE in Problem III

                  ν = 0.5                               ν = 0.25             ν = 0.75
  t    Exact   PSO-SA   |Error|   GA [23]  |Error|    Exact   PSO-SA      Exact   PSO-SA
 0.0  0.0000  3.8e-05   3.8e-05     –        –        0.0000  4.1e-04     0.0000  1.5e-06
 0.1  0.7113  0.6235    0.0879    0.7066   0.0047     1.2650  0.9272      0.3999  0.3781
 0.2  1.0030  1.0308    0.0278    0.9994   0.0036     1.5007  1.4600      0.6702  0.6811
 0.3  1.2145  1.2787    0.0642    1.2198   0.0054     1.6443  1.7297      0.8966  0.9137
 0.4  1.3626  1.4051    0.0425    1.3656   0.0030     1.7215  1.8183      1.0778  1.0779
 0.5  1.4372  1.4339    0.0034    1.4317   0.0055     1.7240  1.7759      1.1971  1.1725
 0.6  1.4175  1.3780    0.0395    1.4095   0.0080     1.6323  1.6323      1.2296  1.1926
 0.7  1.2813  1.2410    0.0041    1.2857   0.0044     1.4268  1.4036      1.1494  1.1286
 0.8  1.0181  1.0181    8.6e-05   1.0420   0.0239     1.1007  1.0970      0.9408  0.9658
 0.9  0.6479  0.6955    0.0477      –        –        0.6794  0.7136      0.6174  0.6828
 1.0  0.2500  0.2499    4.7e-05     –        –        0.2500  0.2501      0.2500  0.2499
for different mesh sizes.

Figure 6. The exact and PSO-SA algorithm solutions for Problem III

Table 11. Numerical results of the Adams scheme at different mesh sizes

      Mesh size (h)   Error at t = 1
  1       1/10           0.25
  2       1/20           1.81e-02
  3       1/40           3.61e-03
  4       1/80           1.45e-03
  5       1/160          6.58e-04
  6       1/320          2.97e-04
  7       1/640          1.31e-04

As can be seen from Tables 10 and 11, at time t = 1, ν = 0.25 and mesh size h = 1/10, the value of the error by the Adams scheme is 0.25, while that by our proposed scheme is 5.60 × 10^{-4}. This error reduces to 1.31 × 10^{-4} in the Adams scheme by reducing the mesh size to 1/640, but at the cost of much greater computational complexity.
The statistical analysis, based on 100 independent runs of our scheme, is provided in Table 12. It can be seen that differences exist between the best and worst results.

Table 12. Statistical parameters of the solution by FDE-NN with the PSO-SA algorithm

                   ν = 0.5                              ν = 0.75
  t     Best      Worst    Mean     STD        Best      Worst    Mean     STD
 0.0   2.06e-05  0.0392   0.0106   0.0184     1.73e-05  0.0961   0.0342   0.0283
 0.1   2.34e-03  0.3549   0.1366   0.0866     8.30e-04  0.2191   0.0568   0.0487
 0.2   2.10e-03  0.3164   0.0933   0.0762     1.31e-03  0.3038   0.0898   0.0722
 0.3   2.58e-04  0.2946   0.0784   0.0599     4.35e-04  0.2476   0.0820   0.0646
 0.4   1.55e-04  0.1627   0.0477   0.0334     4.29e-04  0.1288   0.0477   0.0322
 0.5   8.82e-08  0.1835   0.0094   0.0259     4.19e-07  0.1837   0.0207   0.0349
 0.6   4.50e-04  0.2091   0.0550   0.0398     3.03e-03  0.2762   0.0679   0.0541
 0.7   3.82e-04  0.2690   0.0816   0.0630     1.48e-03  0.3071   0.0837   0.0720
 0.8   4.10e-04  0.3646   0.0924   0.0782     3.52e-04  0.2698   0.0757   0.0643
 0.9   1.80e-03  0.3331   0.0887   0.0767     1.77e-05  0.2619   0.0621   0.0419
 1.0   1.08e-09  0.6208   0.0120   0.0641     1.06e-06  0.1944   0.0105   0.0296
Moreover, the average accuracy obtained is in the range of 10^{-1} to 10^{-2}. A stochastic solver like the SA algorithm mostly provides results with an objective function value of more than 1. Similarly, the effect of SA in the hybridization approach with PSO and GA is also not significant: in most of the independent runs, no improvement is made by SA. Therefore, 125 independent runs of GA-SA and PSO-SA were executed. The results of the best 100 runs are plotted in Figure 7, in which the value of the fitness function e_j is drawn in descending order against the number of independent runs of the algorithms. On the basis of these results, it can be stated that our proposed method is applicable to such problems, but with reduced accuracy.
Figure 7. Comparison of FDE-NN networks optimized with stochastic solvers: (a) is for ν = 0.5 and (b) is for ν = 0.75

5. Conclusions. A new stochastic computational approach has been developed for solving FDEs by swarm intelligence optimized neural networks. The method has been tested successfully by applying it to different linear and nonlinear ordinary differential
equations of fractional order. A large number of Monte Carlo simulations with the stochastic solvers validated its reliability and effectiveness. The best results were achieved for FDE-NN networks optimized with the PSO-SA algorithm, rather than the SA, GA, GA-SA or PSO algorithms. We have shown that PSO-SA, with less than half the population of GA-SA, gives the same accuracy. It has also been shown that the proposed scheme can approximate the solution with accuracy comparable to standard state-of-the-art deterministic numerical solvers, or even better in some cases. The strength of the designed scheme over such solvers is that it can provide the result on a continuous finite time domain, instead of on a predefined discrete grid of points. In the future, we intend to look for basis functions, other than the exponential function, for which the fractional derivative is available. Moreover, we shall also look into the artificial bee colony algorithm as a good global optimizer, not only for the problems given in this paper, but also for fractional differential equations which still lie unsolved.

Acknowledgment. The authors gratefully acknowledge Professor I. Podlubny for providing the MATLAB routine for calculating Mittag-Leffler functions, which is used extensively in our simulations.

REFERENCES

[1] Y. K. Chang and J. J. Nieto, Some new existence results for fractional differential inclusions with boundary conditions, J. Math. Comput. Modelling, vol.49, pp.605-609, 2009.
[2] W. Deng, Numerical algorithm for the time fractional Fokker-Planck equation, J. Comput. Phys., vol.227, pp.1510-1522, 2007.
[3] K. B. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, Wiley, New York, 1993.
[4] K. B. Oldham and J. Spanier, The Fractional Calculus, Academic Press, New York, 1974.
[5] A. K. Anatoly, H. M. Srivastava and J. J. Trujillo, Theory and application of fractional differential equations, North-Holland Mathematics Studies, vol.204, 2006.
[6] Y. Hu, Y. Luo and Z. Lu, Analytic solution of linear fractional differential equation by Adomian decomposition method, Journal of Computational and Applied Mathematics, vol.215, no.1, pp.220-229, 2008.
[7] S. S. Ray and R. K. Bera, An approximate solution of a nonlinear fractional differential equation by Adomian decomposition method, Applied Mathematics and Computation, vol.167, no.1, pp.561-571, 2005.
[8] S. Abbasbandy, A new application of He's variational iteration method for quadratic Riccati differential equation by using Adomian's polynomials, Journal of Computational and Applied Mathematics, vol.207, no.1, pp.59-63, 2007.
[9] S. Das, Analytical solution of a diffusion equation by variational iteration method, Computers and Mathematics with Applications, vol.57, no.3, pp.483-487, 2009.
[10] M. Zurigat, S. Momani, Z. Odibat and A. Alawneh, The homotopy analysis method for handling systems of fractional differential equations, Applied Mathematical Modelling, vol.34, no.1, pp.24-35, 2010.
[11] M. Ganjiani, Solution of nonlinear fractional differential equations using homotopy analysis method, Applied Mathematical Modelling, vol.34, no.6, pp.1634-1641, 2010.
[12] Y. Cenesiz, Y. Keskin and A. Kurnaz, The solution of the Bagley-Torvik equation with generalized Taylor collocation method, Journal of the Franklin Institute, vol.347, no.2, pp.452-466, 2009.
[13] M. Weibeer, Efficient Numerical Methods for Fractional Differential Equations and Their Analytical Background, Ph.D. Thesis, 2005.
[14] W. Deng, Short memory principle and a predictor-corrector approach for fractional differential equations, Journal of Computational and Applied Mathematics, vol.206, no.1, pp.174-188, 2007.
[15] I. Podlubny, Numerical solution of ordinary fractional differential equations by the fractional difference method, Advances in Difference Equations, pp.507-516, 1997.
[16] I. Podlubny, A. V. Chechkin, T. Skovranek, Y. Chen and B. Vinagre, Matrix approach to discrete fractional calculus II: Partial fractional differential equations, Journal of Computational Physics, vol.228, no.8, pp.3137-3153, 2009.
[17] A. Junaid, M. A. Z. Raja and I. M. Qureshi, Evolutionary computing approach for the solution of initial value problems in ordinary differential equations, Proc. of WASET, vol.55, pp.578-581, 2009.
[18] A. K. Junaid, R. M. A. Zahoor and I. M. Qureshi, Swarm intelligence for the solution of problems in differential equations, The 2nd International Conference on Environmental and Computer Science, pp.141-147, 2009.
[19] L. P. Aarts and P. van der Veer, Neural network method for solving the partial differential equations, Neural Processing Letters, vol.14, pp.261-271, 2001.
[20] D. R. Rarisi et al., Solving differential equations with unsupervised neural networks, J. Chemical Engineering and Processing, vol.42, pp.715-721, 2003.
[21] L. G. Tsoulos, D. Gavrilis and E. Glavas, Solving differential equations with constructed neural networks, Journal of Neurocomputing, vol.72, no.10-12, pp.2385-2391, 2009.
[22] R. M. A. Zahoor, A. K. Junaid and I. M. Qureshi, Evolutionary computation technique for solving Riccati differential equation of arbitrary order, Proc. of WASET, vol.58, pp.303-309, 2009.
[23] M. A. Z. Raja, A. K. Junaid and I. M. Qureshi, Evolutionary computational intelligence in solving the fractional differential equations, Lecture Notes in Computer Science, vol.5990, pp.231-240, 2010.
[24] J. Kennedy and R. Eberhart, Particle swarm optimization, Proc. of IEEE International Conference on Neural Networks, Perth, Australia, vol.4, pp.1942-1948, 1995.
[25] Z. S. Lu and S. Yan, Multiuser detector based on particle swarm algorithm, Proc. of IEEE Symp. Emerging Technologies: Mobile and Wireless Communications, Shanghai, China, 2004.
[26] C. Liu and Y. Xiao, Multiuser detection using the particle swarm optimization algorithm, Proc. of IEEE ISCIT, 2005.
[27] M. A. S. Choudhry, M. Zubair, A. Naveed and I. M. Qureshi, Near optimum detector for DS-CDMA system using particle swarm optimization, IEICE Trans. on Commun., vol.E90-B, no.11, pp.3278-3282, 2007.
[28] J. Nagashima, A. Utani and H. Yamamoto, Efficient flooding method using discrete particle swarm optimization for long-term operation of sensor networks, ICIC Express Letters, vol.3, no.3(B), pp.833-840, 2009.
[29] C.-H. Hsu, C.-S. Tsou and F.-J. Yu, Multicriteria tradeoffs in inventory control using memetic particle swarm optimization, International Journal of Innovative Computing, Information and Control, vol.5, no.11(A), pp.3755-3768, 2009.
[30] S. N. Sivanandam and P. Visalakshi, Multiprocessor scheduling using hybrid particle swarm optimization with dynamically varying inertia, International Journal of Computer Science and Applications, vol.4, no.3, pp.95-106, 2007.
[31] C. Lin, Y. Liu and C. Lee, An efficient neural fuzzy network based on immune particle swarm optimization for prediction and control applications, International Journal of Innovative Computing, Information and Control, vol.4, no.7, pp.1711-1722, 2008.
[32] G.-D. Li, S. Masuda, D. Yamaguchi and M. Nagai, The optimal GNN-PID control system using particle swarm optimization algorithm, International Journal of Innovative Computing, Information and Control, vol.5, no.10(B), pp.3457-3470, 2009.
[33] C. Wang, P. Sui and W. Liu, Improved particle swarm optimization algorithm based on double mutation, ICIC Express Letters, vol.3, no.4(B), pp.1417-1422, 2009.
[34] Z. Cui, J. Zeng and G. Sun, A fast particle swarm optimization, International Journal of Innovative Computing, Information and Control, vol.2, no.6, pp.1365-1380, 2006.
[35] J. H. Seo, C. H. Im, C. G. Heo, J. K. Kim, H. K. Jung and C. G. Lee, Multimodal function optimization based on particle swarm optimization, IEEE Trans. on Magnetics, vol.42, no.4, pp.1095-1098, 2006.
[36] N. Engheta, On the role of fractional calculus in electromagnetic theory, IEEE Antennas Propagat. Mag., vol.39, pp.35-46, 1997.
[37] S. Zhou et al., Chaos control and synchronization in fractional neuron network system, Chaos, Solitons and Fractals, vol.36, no.4, pp.973-984, 2008.
[38] J. Y. Cao et al., Optimization of fractional order PID controllers based on genetic algorithm, International Conference on Machine Learning and Cybernetics, pp.5686-5689, 2005.
[39] I. Podlubny, Fractional Differential Equations, Academic Press, New York, 1999.
[40] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi, Optimization by simulated annealing, Science, New Series, vol.220, no.4598, pp.671-680, 1983.
[41] V. Kroumov, J. Yu and K. Shibayama, 3D path planning for mobile robots using simulated annealing neural network, International Journal of Innovative Computing, Information and Control, vol.6, no.7, pp.2885-2899, 2010.
[42] Z. Odibat, S. Momani and V. S. Erturk, Generalized differential transform method: Application to differential equations of fractional order, Applied Mathematics and Computation, vol.197, pp.467-477, 2008.
[43] M. A. Z. Raja, J. A. Khan and I. M. Qureshi, Heuristic computational approach using swarm intelligence in solving fractional differential equations, Proc. of the 12th GECCO, pp.2023-2026, 2010.
[44] P. Angeline, Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences, Evolutionary Programming, LNCS, vol.1447, pp.601-610, 1998.
[45] R. C. Eberhart and Y. Shi, Comparison between genetic algorithms and particle swarm optimization, Evolutionary Programming, LNCS, vol.1447, pp.611-616, 1998.
[46] R. Gorenflo and F. Mainardi, Fractional calculus: Integral and differential equations of fractional order, CISM Lecture Notes, 1996.
[47] F. Mainardi, Fractional relaxation and fractional diffusion equations, mathematical aspects, Proc. of the 14th IMACS World Congress, pp.329-332, 1994.
[48] A. Saadatmandi and M. Dehghan, A new operational matrix for solving fractional-order differential equations, Computers and Mathematics with Applications, vol.59, no.3, pp.1326-1336, 2010.
[49] C. Li and C. Tao, On the fractional Adams method, Computers and Mathematics with Applications, vol.58, no.8, pp.1573-1588, 2009.
[50] Z. M. Odibat and S. Momani, Application of variational iteration method to nonlinear differential equations of fractional order, International Journal of Nonlinear Sciences and Numerical Simulation, vol.7, pp.27-34, 2006.