Information Sciences 180 (2010) 1434–1457


Artificial neural network approach for solving fuzzy differential equations

Sohrab Effati a,*, Morteza Pakdaman b

a Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
b Sama Organization (affiliated with Islamic Azad University), Mashhad Branch, Mashhad, Iran

Article info

Article history: Received 23 May 2007; Received in revised form 19 December 2009; Accepted 21 December 2009

Keywords: Fuzzy differential equations; Fuzzy Cauchy problem; Artificial neural networks

Abstract

The current research offers a method for solving fuzzy differential equations with initial conditions, based on feed-forward neural networks. First, the fuzzy differential equation is replaced by a system of ordinary differential equations. A trial solution of this system is written as the sum of two parts. The first part satisfies the initial condition and contains no adjustable parameters. The second part involves a feed-forward neural network containing adjustable parameters (the weights). Hence, by construction, the initial condition is satisfied and the network is trained to satisfy the differential equations. A comparison with existing numerical methods shows that the use of neural networks provides solutions with good generalization and high accuracy. The proposed method is illustrated by several examples.

© 2009 Elsevier Inc. All rights reserved.

1. Introduction

Uncertainty is an attribute of information [28], and fuzzy differential equations (FDEs) are a natural way to model dynamic systems with embedded uncertainty. Many practical problems can be modeled as FDEs (e.g. [5,8] and Section 3.2). The method of fuzzy mapping was initially introduced by Chang and Zadeh [10]. Later, Dubois and Prade [11] presented a form of elementary fuzzy calculus based on the extension principle [27]. Puri and Ralescu [23] suggested two definitions for the fuzzy derivative of fuzzy functions. The first definition was based on the H-difference notation and was further investigated by Kaleva [16]. Several approaches were later proposed for FDEs and the existence of their solutions (e.g. [15,19,21,24,26]). The approach based on the H-derivative has the disadvantage that it leads to solutions whose support has increasing length. This shortcoming was resolved by interpreting the FDE as a family of differential inclusions. Later, the authors of [6,7] introduced the concept of generalized differentiability; under this definition, the solution of an FDE may have a support of decreasing length. Other researchers have proposed further approaches to the solution of FDEs (e.g. [9,19]). Another group of researchers extended numerical methods to FDEs (e.g. [1,12,13]), such as the Runge–Kutta method [2], the Adomian method [4], and predictor–corrector and multi-step methods [3]. These methods are extended versions of the equivalent methods for solving ordinary differential equations (ODEs). Lagaris et al. [17] used artificial neural networks to solve ODEs and partial differential equations (PDEs) for both boundary value problems and initial value problems. They used a multilayer perceptron to estimate the solution of the differential equation. Their neural network model was trained over the interval on which the differential equation was to be solved, so the inputs of the model were the training points. A comparison with existing numerical methods showed that their method was more accurate and that the solution generalized better.

The function-approximation ability of neural networks is our main tool. In this paper, we construct a new model based on neural networks to obtain a solution of an FDE. In this model, the inputs of the neural network are the training points together with a parameter r which represents the level of uncertainty.

* Corresponding author. E-mail addresses: [email protected] (S. Effati), [email protected] (M. Pakdaman).
0020-0255/$ - see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.ins.2009.12.016


In Section 2, the basic notions of fuzzy numbers, the fuzzy derivative and fuzzy functions are briefly presented. In Section 3, fuzzy differential equations and their applications are introduced, and a general Cauchy problem is defined. In Section 4, the proposed method, based on a feed-forward neural network, is described. In Section 5, the applicability of the method is illustrated by several examples in which the exact solutions are compared with the computed results; a nonlinear FDE and a nonlinear FDE containing a fuzzy parameter are solved, as well as a problem in electrical circuit analysis. Finally, Section 6 presents concluding remarks.

2. Preliminaries

Definition 2.1 (see [25]). A fuzzy number $u$ is completely determined by a pair $u = (\underline{u}, \overline{u})$ of functions $\underline{u}(r), \overline{u}(r) : [0,1] \to \mathbb{R}$ satisfying three conditions:

(i) $\underline{u}(r)$ is a bounded, monotonic, increasing (nondecreasing) left-continuous function for all $r \in (0,1]$ and right-continuous at $r = 0$.
(ii) $\overline{u}(r)$ is a bounded, monotonic, decreasing (nonincreasing) left-continuous function for all $r \in (0,1]$ and right-continuous at $r = 0$.
(iii) For all $r \in (0,1]$ we have $\underline{u}(r) \le \overline{u}(r)$.

For every $u = (\underline{u}, \overline{u})$, $v = (\underline{v}, \overline{v})$ and $k > 0$, addition and scalar multiplication are defined as follows:

$(\underline{u + v})(r) = \underline{u}(r) + \underline{v}(r)$,  (1)

$(\overline{u + v})(r) = \overline{u}(r) + \overline{v}(r)$,  (2)

$(\underline{ku})(r) = k\underline{u}(r), \quad (\overline{ku})(r) = k\overline{u}(r)$.  (3)
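As a concrete illustration of Eqs. (1)–(3), a fuzzy number can be represented directly by its pair of endpoint functions. The sketch below is our own illustration; the representation and all names are assumptions, not the paper's:

```python
# Illustrative sketch: a fuzzy number as the pair (u_lo(r), u_up(r)) of
# Definition 2.1; the representation and names are our assumptions.

def add(u, v):
    """Addition per Eqs. (1)-(2): endpoints add pointwise in r."""
    u_lo, u_up = u
    v_lo, v_up = v
    return (lambda r: u_lo(r) + v_lo(r), lambda r: u_up(r) + v_up(r))

def scale(k, u):
    """Scalar multiplication per Eq. (3), for k > 0."""
    u_lo, u_up = u
    return (lambda r: k * u_lo(r), lambda r: k * u_up(r))

# the triangular fuzzy number (0.75 + 0.25 r, 1.125 - 0.125 r) of Example 5.1
x0 = (lambda r: 0.75 + 0.25 * r, lambda r: 1.125 - 0.125 * r)
y = scale(2.0, x0)
print(y[0](1.0), y[1](1.0))  # both endpoints coincide at r = 1: 2.0 2.0
```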

The collection of all fuzzy numbers with addition and multiplication as defined by Eqs. (1)–(3) is denoted by $E^1$. For $0 < r \le 1$, the $r$-cut of a fuzzy number $u$ is $[u]^r = \{x \in \mathbb{R} \mid u(x) \ge r\}$, and for $r = 0$ the support of $u$ is $[u]^0 = \{x \in \mathbb{R} \mid u(x) > 0\}$.

Definition 2.2. The distance between two arbitrary fuzzy numbers $u = (\underline{u}, \overline{u})$ and $v = (\underline{v}, \overline{v})$ is defined as follows:

$d(u, v) = \sup_{r \in [0,1]} \max\{|\underline{u}(r) - \underline{v}(r)|,\; |\overline{u}(r) - \overline{v}(r)|\}$.  (4)

It is shown in [22] that $(E^1, d)$ is a complete metric space.

Definition 2.3. A function $f : \mathbb{R}^1 \to E^1$ is called a fuzzy function. If, for an arbitrary fixed $\hat{t} \in \mathbb{R}^1$ and $\varepsilon > 0$, there exists a $\delta > 0$ such that $|t - \hat{t}| < \delta \Rightarrow d[f(t), f(\hat{t})] < \varepsilon$, then $f$ is said to be continuous. Here $d$ is the metric of Definition 2.2. (In this article we simply replace $\mathbb{R}^1$ by $[t_0, T]$.)

Definition 2.4. Let $u, v \in E^1$. If there exists $w \in E^1$ such that $u = v + w$, then $w$ is called the H-difference of $u$ and $v$, denoted $u \ominus v$.

Definition 2.5. A function $f : (a, b) \to E^1$ is called H-differentiable at $\hat{t} \in (a, b)$ if, for $h > 0$ sufficiently small, the H-differences $f(\hat{t} + h) \ominus f(\hat{t})$ and $f(\hat{t}) \ominus f(\hat{t} - h)$ exist, together with an element $f'(\hat{t}) \in E^1$ such that:

$0 = \lim_{h \to 0^+} d\!\left(\frac{f(\hat{t} + h) \ominus f(\hat{t})}{h},\, f'(\hat{t})\right) = \lim_{h \to 0^+} d\!\left(\frac{f(\hat{t}) \ominus f(\hat{t} - h)}{h},\, f'(\hat{t})\right)$.  (5)
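The metric of Eq. (4) can be approximated numerically by sampling the supremum over a grid of $r$ values. This grid approximation is an assumption made for illustration (adequate for smooth endpoint functions):

```python
import numpy as np

# Sketch of the metric of Eq. (4): the supremum over r is approximated by
# sampling on a grid, an illustrative assumption rather than the exact sup.

def distance(u, v, n=1001):
    r = np.linspace(0.0, 1.0, n)
    return max(np.max(np.abs(u[0](r) - v[0](r))),
               np.max(np.abs(u[1](r) - v[1](r))))

u = (lambda r: 0.75 + 0.25 * r, lambda r: 1.125 - 0.125 * r)
v = (lambda r: 0.85 + 0.25 * r, lambda r: 1.225 - 0.125 * r)
print(distance(u, v))  # v is a uniform shift of u by 0.1, so d(u, v) is about 0.1
```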

Then $f'(\hat{t})$ is called the fuzzy derivative of $f$ at $\hat{t}$.

3. Fuzzy differential equations and applications

3.1. Fuzzy differential equations

In this section, a first-order fuzzy differential equation is defined. It is then replaced by its equivalent parametric form, and the resulting system of two ordinary differential equations is solved. A fuzzy differential equation of the first order has the form:

$x'(t) = f(t, x(t))$,  (6)


where $x$ is a fuzzy function of $t$, $f(t, x)$ is a fuzzy function of the crisp variable $t$ and the fuzzy variable $x$, and $x'$ is the fuzzy derivative of $x$ (in the sense of Definition 2.5). If an initial condition $x(t_0) = x_0$ (where $x_0$ is a fuzzy number) is given, a fuzzy Cauchy problem [20] of the first order is obtained:

$x'(t) = f(t, x(t))$, $t \in [t_0, T]$, $x(t_0) = x_0$.  (7)

Clearly, the fuzzy function $f(t, x)$ is a mapping $f : \mathbb{R}^1 \times E^1 \to E^1$. Sufficient conditions for the existence of a unique solution of Eq. (7), given by Kaleva [16], are:

- $f$ is continuous;
- a Lipschitz condition $d(f(t, x), f(t, y)) \le L\, d(x, y)$ holds for some $L > 0$ and all $x, y \in E^1$.

It is now possible to replace (7) by the following equivalent system:

$\underline{x}'(t) = \underline{f}(t, x) = F(t, \underline{x}, \overline{x})$, $\underline{x}(t_0) = \underline{x}_0$,
$\overline{x}'(t) = \overline{f}(t, x) = G(t, \underline{x}, \overline{x})$, $\overline{x}(t_0) = \overline{x}_0$,  (8)

where

$F(t, \underline{x}, \overline{x}) = \min\{f(t, u) \mid u \in [\underline{x}, \overline{x}]\}$,
$G(t, \underline{x}, \overline{x}) = \max\{f(t, u) \mid u \in [\underline{x}, \overline{x}]\}$.  (9)

The parametric form of system (8) is:

$\underline{x}'(t, r) = F[t, \underline{x}(t, r), \overline{x}(t, r)]$, $\underline{x}(t_0, r) = \underline{x}_0(r)$,
$\overline{x}'(t, r) = G[t, \underline{x}(t, r), \overline{x}(t, r)]$, $\overline{x}(t_0, r) = \overline{x}_0(r)$,  (10)

where $t \in [t_0, T]$ and $r \in [0, 1]$. Discretizing the interval $[t_0, T]$ yields a set of points $t_i$, $i = 1, 2, \ldots, m$. Thus, for an arbitrary $t_i \in [t_0, T]$, system (10) can be rewritten as:

$\underline{x}'(t_i, r) - F[t_i, \underline{x}(t_i, r), \overline{x}(t_i, r)] = 0$,
$\overline{x}'(t_i, r) - G[t_i, \underline{x}(t_i, r), \overline{x}(t_i, r)] = 0$,  (11)

with initial conditions:

$\underline{x}(t_0, r) = \underline{x}_0(r)$, $\overline{x}(t_0, r) = \overline{x}_0(r)$, $0 \le r \le 1$.
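For a fixed $r$, system (10) is an ordinary initial value problem, so a classical scheme such as RK4 applies directly. A minimal sketch for the right-hand side $f(t, x) = x$ of Example 5.1, where $f$ is increasing in $x$ and Eq. (9) therefore gives $F = \underline{x}$ and $G = \overline{x}$ (the implementation is ours, for illustration only):

```python
import math

# Hedged sketch: classical RK4 applied to the parametric system (10) at a fixed r,
# for the right-hand side f(t, x) = x of Example 5.1. Because f is increasing in x,
# Eq. (9) gives F = x_lo and G = x_up, so the two endpoint equations decouple here.

def rk4_step(rhs, t, y, h):
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = rhs(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = rhs(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def solve(r, t0=0.0, T=1.0, n=100):
    rhs = lambda t, y: [y[0], y[1]]            # (F, G) = (x_lo, x_up) for f(t, x) = x
    y = [0.75 + 0.25 * r, 1.125 - 0.125 * r]   # fuzzy initial value of Example 5.1
    h = (T - t0) / n
    t = t0
    for _ in range(n):
        y = rk4_step(rhs, t, y, h)
        t += h
    return y

x_lo, x_up = solve(0.0)
print(x_lo, 0.75 * math.e)  # RK4 endpoint vs the exact value (0.75 + 0.25 r) e at r = 0
```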

In some cases, the system given by Eq. (10) can be solved analytically. In most cases, however, an analytical solution cannot be found and a numerical approach must be applied. For each $r \in [0, 1]$, Eq. (10) is an ordinary Cauchy problem to which any convergent classical numerical procedure may be applied (e.g. [12]). In Section 4, instead of a classical numerical method, a method based on an artificial neural network is introduced.

3.2. Applications of fuzzy differential equations

Some practical applications of FDEs:

- Electrical engineering: Consider a simple RL circuit. The differential equation corresponding to this circuit is:

$\frac{di}{dt} = -\frac{R}{L} i(t) + v(t)$, $i(0) = i_0$,

where $R$ is the circuit resistance and $L$ is the coefficient of the solenoid. Environmental conditions, inaccuracy in element modelling, electrical noise, leakage and other effects introduce uncertainty into this differential equation. Treating it as a fuzzy differential equation yields more realistic results and helps to detect unknown conditions in circuit analysis. It can be modeled as the FDE in Eq. (7); Example 5.6 presents such a circuit with $v(t) = \sin(t)$.

- Population dynamics: The first models of population growth are the classical Malthus and Verhulst models. Suppose the population obeys the Malthusian equation:

$u'(t) = a u(t)$, $u(0) = u_0$,

where $a$ is a real number. Due to noise such as demographic and environmental stochasticity (see [5]), this differential equation becomes an FDE in which $u_0$ is a fuzzy initial condition.

- Other applications of FDEs include modeling life expectancy, HIV populations (see [14]) and other ecological models, logistics, control theory, and so on.


4. Neural networks

Neural networks provide solutions with very good generalization properties (such as differentiability). Moreover, an important feature of multilayer perceptrons is their ability to approximate functions, which makes them widely applicable. In this paper, the function-approximation capability of feed-forward neural networks is used by expressing the trial solutions of system (10) as the sum of two terms (see Eq. (13)). The first term satisfies the initial conditions and contains no adjustable parameters. The second term involves a feed-forward neural network that is trained to satisfy the differential equations. Since a multilayer perceptron with one hidden layer can approximate any continuous function to arbitrary accuracy, the multilayer perceptron is adopted as the network architecture.

If $\underline{x}_T(t, r, \underline{p})$ is a trial solution of the first equation in system (10) and $\overline{x}_T(t, r, \overline{p})$ is a trial solution of the second, where $\underline{p}$ and $\overline{p}$ are adjustable parameters (so that $\underline{x}_T(t, r, \underline{p})$ and $\overline{x}_T(t, r, \overline{p})$ approximate $\underline{x}(t, r)$ and $\overline{x}(t, r)$, respectively), then a discretized version of system (10) can be converted to the following optimization problem:

$\min_{\tilde{p}} \sum_{i=1}^{m} \left\{ \left[\underline{x}_T'(t_i, r, \underline{p}) - F[t_i, \underline{x}_T(t_i, r, \underline{p}), \overline{x}_T(t_i, r, \overline{p})]\right]^2 + \left[\overline{x}_T'(t_i, r, \overline{p}) - G[t_i, \underline{x}_T(t_i, r, \underline{p}), \overline{x}_T(t_i, r, \overline{p})]\right]^2 \right\}$  (12)

(here $\tilde{p} = (\underline{p}, \overline{p})$ contains all adjustable parameters), subject to the initial conditions:

$\underline{x}_T(t_0, r, \underline{p}) = \underline{x}_0(r)$, $\overline{x}_T(t_0, r, \overline{p}) = \overline{x}_0(r)$.

Each trial solution $\underline{x}_T$ and $\overline{x}_T$ employs one feed-forward neural network, denoted $N$ and $\overline{N}$, with adjustable parameters $\underline{p}$ and $\overline{p}$, respectively. The trial solutions $\underline{x}_T$ and $\overline{x}_T$ must satisfy the initial conditions, and the networks are trained to satisfy the differential equations. Thus $\underline{x}_T$ and $\overline{x}_T$ can be chosen as follows:

$\underline{x}_T(t, r, \underline{p}) = \underline{x}(t_0, r) + (t - t_0) N(t, r, \underline{p})$,
$\overline{x}_T(t, r, \overline{p}) = \overline{x}(t_0, r) + (t - t_0) \overline{N}(t, r, \overline{p})$,  (13)

where $N$ and $\overline{N}$ are single-output feed-forward neural networks with adjustable parameters $\underline{p}$ and $\overline{p}$, respectively, and $t$ and $r$ are the network inputs. It is easy to see that in (13) $\underline{x}_T$ and $\overline{x}_T$ satisfy the initial conditions. From (13) it is straightforward to show that:

$\underline{x}_T'(t, r, \underline{p}) = N(t, r, \underline{p}) + (t - t_0) \frac{\partial N}{\partial t}$,
$\overline{x}_T'(t, r, \overline{p}) = \overline{N}(t, r, \overline{p}) + (t - t_0) \frac{\partial \overline{N}}{\partial t}$.  (14)

Now consider a multilayer perceptron with one hidden layer of $H$ sigmoid units and a linear output unit (Fig. 1). Then:

$N = \sum_{i=1}^{H} v_i \sigma(z_i)$, $z_i = w_{i1} t + w_{i2} r + u_i$,
$\overline{N} = \sum_{i=1}^{H} \overline{v}_i \sigma(\overline{z}_i)$, $\overline{z}_i = \overline{w}_{i1} t + \overline{w}_{i2} r + \overline{u}_i$,  (15)

where $\sigma(z)$ is the sigmoid transfer function. The following is obtained:

$\frac{\partial N}{\partial t} = \sum_{i=1}^{H} v_i w_{i1} \sigma'(z_i)$, $\frac{\partial \overline{N}}{\partial t} = \sum_{i=1}^{H} \overline{v}_i \overline{w}_{i1} \sigma'(\overline{z}_i)$,  (16)

Fig. 1. Architecture of the perceptron.
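The forward pass of Eqs. (15)–(16) and the trial solution (13)–(14) can be sketched as follows. This is an illustrative NumPy implementation under our own naming conventions; the paper's computations used Matlab:

```python
import numpy as np

# Illustrative NumPy sketch of Eqs. (15)-(16) and the trial solution (13)-(14).
# Parameter names follow Fig. 1; the implementation itself is our assumption.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def N_and_dNdt(t, r, w, u, v):
    """N(t, r) and dN/dt for one network, per Eqs. (15)-(16)."""
    z = w[:, 0] * t + w[:, 1] * r + u          # z_i = w_i1 t + w_i2 r + u_i
    s = sigmoid(z)
    N = v @ s                                  # N = sum_i v_i sigma(z_i)
    dNdt = v @ (w[:, 0] * s * (1.0 - s))       # sigma'(z) = sigma(z)(1 - sigma(z))
    return N, dNdt

def trial(t, r, t0, x0_r, w, u, v):
    """Trial solution (13) and its t-derivative (14)."""
    N, dNdt = N_and_dNdt(t, r, w, u, v)
    return x0_r + (t - t0) * N, N + (t - t0) * dNdt

H = 10                                          # ten hidden units, as in Section 5
rng = np.random.default_rng(0)
w, u, v = rng.normal(size=(H, 2)), rng.normal(size=H), rng.normal(size=H)
xT, dxT = trial(0.0, 0.5, 0.0, 0.875, w, u, v)
print(xT)  # at t = t0 the trial solution equals the initial value 0.875 exactly
```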


where $\sigma'(z_i)$ is the first derivative of the sigmoid function. $N$ and $\overline{N}$ have the same architecture, shown in Fig. 1. In Fig. 1, $A = (t, r)^T$ is the input vector, $w$ and $\overline{w}$ ($H \times 2$ matrices) are the input-layer weights, $U$ and $\overline{U}$ ($H \times 1$ vectors) are the biases of the hidden units, $V$ and $\overline{V}$ ($H \times 1$ vectors) are the output-layer weights, and Lin is the linear output function. Sig is the sigmoid transfer function:

$\sigma(z) = \frac{1}{1 + e^{-z}}$.

In network $N$: $w_{i1}$ and $w_{i2}$ denote the weights from the inputs $t$ and $r$ to hidden unit $i$, $v_i$ denotes the weight from hidden unit $i$ to the output, and $u_i$ denotes the bias of hidden unit $i$; the barred quantities $\overline{w}_{i1}$, $\overline{w}_{i2}$, $\overline{v}_i$ and $\overline{u}_i$ play the same roles in network $\overline{N}$.

In each run, $r$ is fixed, and the problem can be solved for an arbitrary $r \in [0, 1]$. Substituting (14) into (12) converts the constrained optimization problem (12) into the following unconstrained problem:

$\min_{\tilde{p}} \sum_{i=1}^{m} \left\{ \left[N(t_i, r, \underline{p}) + (t_i - t_0)\frac{\partial N}{\partial t} - F[t_i, \underline{x}_T(t_i, r, \underline{p}), \overline{x}_T(t_i, r, \overline{p})]\right]^2 + \left[\overline{N}(t_i, r, \overline{p}) + (t_i - t_0)\frac{\partial \overline{N}}{\partial t} - G[t_i, \underline{x}_T(t_i, r, \underline{p}), \overline{x}_T(t_i, r, \overline{p})]\right]^2 \right\}$  (17)

Here $\tilde{p} = (\underline{p}, \overline{p})$ contains all adjustable parameters (input- and output-layer weights and biases) of the two networks $N$ and $\overline{N}$. To minimize this unconstrained problem, techniques such as steepest descent, conjugate gradient or quasi-Newton methods can be employed. Newton's method is one of the important algorithms of nonlinear optimization; its main disadvantage is that it requires evaluation of the second-derivative (Hessian) matrix. Quasi-Newton methods, originally proposed by Davidon in 1959 and later developed by Fletcher and Powell (1963), instead build an approximation of the Hessian matrix from gradient information. Here the quasi-Newton BFGS (Broyden–Fletcher–Goldfarb–Shanno) method is used; it converges superlinearly (for more details see [18]).

After the optimization step, the optimal values of the weights are obtained. Substituting the optimal parameters $\underline{p}$ and $\overline{p}$ into (13), the trial solution $x_T = (\underline{x}_T, \overline{x}_T)$ is the approximate solution of FDE (7).

Remark 4.1. The proposed method has two main advantages:

- First, the approximate solution is very close to the exact solution, because neural networks are good function approximators. Comparing the results of numerical methods (e.g. [3]) with the results obtained in the next section shows that the errors of the proposed method are small.
- Second, once a problem has been solved with this method, the solution of the FDE is available at every point of the training interval (even between the training points). Indeed, solving the FDE yields an approximating function, so the solution can be evaluated at any point.
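The minimization of (17) can be sketched end-to-end for Example 5.1, where $f(t, x) = x$ gives $F = \underline{x}$ and $G = \overline{x}$. This is an illustrative reconstruction, not the authors' code: the paper used the Matlab 7 optimization toolbox, while the sketch below substitutes SciPy's BFGS, a small network ($H = 3$) and a few training points:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative end-to-end sketch of minimizing (17) for Example 5.1
# (f(t, x) = x, so F = x_lo and G = x_up by Eq. (9)). scipy's BFGS is an
# assumed stand-in for the paper's Matlab toolbox; H = 3 keeps it small.

H, t0, r = 3, 0.0, 0.5
ts = np.linspace(0.0, 1.0, 5)                   # training points t_i

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def net(q, t):
    """N(t, r) and dN/dt for one network, per Eqs. (15)-(16)."""
    w1, w2, u, v = q[:H], q[H:2*H], q[2*H:3*H], q[3*H:]
    s = sigmoid(w1 * t + w2 * r + u)
    return v @ s, v @ (w1 * s * (1.0 - s))

def cost(p):
    pl, pu = p[:4*H], p[4*H:]                   # parameters of the two networks
    e = 0.0
    for t in ts:
        Nl, dNl = net(pl, t)
        Nu, dNu = net(pu, t)
        x_lo = (0.75 + 0.25 * r) + (t - t0) * Nl    # trial solutions (13)
        x_up = (1.125 - 0.125 * r) + (t - t0) * Nu
        e += (Nl + (t - t0) * dNl - x_lo) ** 2      # residual of x_lo' = F = x_lo
        e += (Nu + (t - t0) * dNu - x_up) ** 2      # residual of x_up' = G = x_up
    return e

p0 = np.random.default_rng(1).normal(scale=0.5, size=8 * H)
res = minimize(cost, p0, method="BFGS")
print(cost(p0), res.fun)  # the trained cost should fall far below the initial one
```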

5. Numerical examples

To show the behavior and properties of the method, six problems are solved in this section. The objective function in (17) was minimized with the Matlab 7 optimization toolbox, using the quasi-Newton BFGS method. For each example, the accuracy of the method is illustrated by computing the deviations $\underline{E}(t, r) = \underline{x}_T(t, r) - \underline{x}_a(t, r)$ and $\overline{E}(t, r) = \overline{x}_T(t, r) - \overline{x}_a(t, r)$ (for a fixed $t$ and various values of $r$), where $x_a(t, r) = (\underline{x}_a(t, r), \overline{x}_a(t, r))$ is the known exact solution and $x_T(t, r) = (\underline{x}_T(t, r), \overline{x}_T(t, r))$ is the approximate solution. In all examples, a multilayer perceptron with one hidden layer of ten hidden units and one linear output unit is used. To obtain better results (especially in the nonlinear cases), more hidden layers or training points can be used. The weights computed by this method converge; for each example, the computed values of the weights are plotted against the number of iterations.

Example 5.1. Consider the following fuzzy initial value problem:

$x'(t) = x(t)$, $t \in [0, 1]$,
$x(0) = (0.75 + 0.25r,\; 1.125 - 0.125r)$.  (18)

The exact solution at $t = 1$ is:

$x(1, r) = ((0.75 + 0.25r)e,\; (1.125 - 0.125r)e)$, $r \in [0, 1]$.
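The stated exact endpoints can be checked directly; at $r = 0$ they reproduce the first row of Table 1:

```python
import math

# Direct check of the stated exact endpoints at t = 1:
# x(1, r) = ((0.75 + 0.25 r) e, (1.125 - 0.125 r) e).
for r in (0.0, 0.5, 1.0):
    lo = (0.75 + 0.25 * r) * math.e
    up = (1.125 - 0.125 * r) * math.e
    print(r, round(lo, 6), round(up, 6))
# the r = 0 row reproduces the first row of Table 1: 2.038711 and 3.058067
```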

Here the trial solutions in the neural form are as follows:


$\underline{x}_T(t, r) = (0.75 + 0.25r) + t N(t, r, \underline{p})$,
$\overline{x}_T(t, r) = (1.125 - 0.125r) + t \overline{N}(t, r, \overline{p})$,  (19)

where $t \in [0, 1]$. It is easy to show that the trial solutions in (19) satisfy the initial conditions. Fig. 2 shows the exact and approximate solutions for $t = 1$, and the numerical results are given in Table 1. Figs. 3 and 4 show the accuracy of the solution, $\underline{E}(1, r)$ and $\overline{E}(1, r)$ (for $t = 1$), respectively. Figs. 5–8 show the convergence of the computed weights (for $r = 0.5$ and $t = 1$).

Example 5.2. Consider the following fuzzy initial value problem:

$x'(t) = 3t^2 x(t)$, $t \in [0, 1]$,
$x(0) = (0.5\sqrt{r},\; 0.2\sqrt{1 - r} + 0.5)$.  (20)

The exact solution at $t = 1$ is $x(1, r) = (0.5\sqrt{r}\, e,\; (0.2\sqrt{1 - r} + 0.5)e)$. Fig. 9 shows the exact and approximate solutions for $t = 1$; numerical results are given in Table 2. Figs. 10 and 11 show the accuracy of the solution $E(1, r) = (\underline{E}(1, r), \overline{E}(1, r))$. Figs. 12–15 show the convergence of the computed weights $w_{i1}$, $w_{i2}$, the biases $u$ and the output-layer weights $v$ over the iterations.

Example 5.3. Consider the following fuzzy initial value problem:



$x'(t) = -x(t) + t + 1$, $t \in [0, 1]$,
$x(0) = (0.96 + 0.04r,\; 1.01 - 0.01r)$,  (21)

where $r \in [0, 1]$. The parametric form of the problem is:

Fig. 2. The exact and approximated solution for Example 5.1.

Table 1. Comparison of the exact $\underline{x}_a(1, r)$ and approximated $\underline{x}_T(1, r)$ solutions for Example 5.1.

r     $\underline{x}_a(1,r)$  $\underline{x}_T(1,r)$  $|\underline{E}(1,r)|$  $\overline{x}_a(1,r)$  $\overline{x}_T(1,r)$  $|\overline{E}(1,r)|$
0     2.038711   2.038800   8.895270e-5   3.058067   3.058127   6.003329e-5
0.1   2.106668   2.106639   2.903725e-5   3.024089   3.024090   1.107843e-6
0.2   2.174625   2.174579   4.693243e-5   2.990110   2.990136   2.582699e-5
0.3   2.242583   2.242607   2.484654e-5   2.956131   2.956140   8.472236e-6
0.4   2.310540   2.310544   4.739291e-6   2.922153   2.922187   3.384699e-5
0.5   2.378497   2.378551   5.478406e-5   2.888174   2.888160   1.443122e-5
0.6   2.446454   2.446482   2.827934e-5   2.854196   2.854238   4.194040e-5
0.7   2.514411   2.514364   4.693161e-5   2.820217   2.820269   5.199824e-5
0.8   2.582368   2.582465   9.712388e-5   2.786239   2.786269   3.025500e-5
0.9   2.650325   2.650339   1.417971e-5   2.752260   2.752106   1.541195e-4
1     2.718282   2.718226   5.574679e-5   2.718282   2.718310   2.787937e-5

Fig. 3. $\underline{E}(1, r)$ for Example 5.1.

Fig. 4. $\overline{E}(1, r)$ for Example 5.1.

Fig. 5. Convergence of the weights $w_{i1}$ for Example 5.1.


Fig. 6. Convergence of the weights $w_{i2}$ for Example 5.1.

Fig. 7. Convergence of the bias $u$ for Example 5.1.

Fig. 8. Convergence of the weights $v$ for Example 5.1.


Fig. 9. The exact and approximated solution for Example 5.2.

Table 2. Comparison of the exact $\underline{x}_a(1, r)$ and approximated $\underline{x}_T(1, r)$ solutions for Example 5.2.

r     $\underline{x}_a(1,r)$  $\underline{x}_T(1,r)$  $|\underline{E}(1,r)|$  $\overline{x}_a(1,r)$  $\overline{x}_T(1,r)$  $|\overline{E}(1,r)|$
0     0           3.787978e-7  3.787978e-7   1.902797   1.902417   3.798493e-4
0.1   0.4297981   0.4296523    1.457549e-4   1.874899   1.872192   2.706702e-3
0.2   0.6078263   0.6077049    1.213900e-4   1.845402   1.843975   1.427181e-3
0.3   0.7444321   0.7443416    9.049358e-5   1.813996   1.814533   5.365383e-4
0.4   0.8595962   0.8603474    7.511922e-4   1.780255   1.780520   2.649743e-4
0.5   0.9610578   0.9605738    4.839134e-4   1.743564   1.743461   1.027169e-4
0.6   1.052786    1.052719     6.748295e-5   1.702979   1.703009   2.935789e-5
0.7   1.137139    1.137410     2.711069e-4   1.656914   1.657742   8.278025e-4
0.8   1.215653    1.215635     1.773549e-5   1.602271   1.602701   4.297191e-4
0.9   1.289394    1.289925     5.310672e-4   1.531060   1.532301   1.240992e-3
1     1.359141    1.359143     2.244555e-6   1.359141   1.359470   3.295712e-4

Fig. 10. $\underline{E}(1, r)$ for Example 5.2.



$\underline{x}'(t) = F[t, \underline{x}(t, r), \overline{x}(t, r)]$, $\underline{x}(0) = 0.96 + 0.04r$,
$\overline{x}'(t) = G[t, \underline{x}(t, r), \overline{x}(t, r)]$, $\overline{x}(0) = 1.01 - 0.01r$,  (22)

Fig. 11. $\overline{E}(1, r)$ for Example 5.2.

Fig. 12. Convergence of the weights $w_{i1}$ for Example 5.2.

in which $t \in [0, 1]$ and $F$, $G$ satisfy (9). The neural form of the trial solutions is:

$\underline{x}_T(t, r) = (0.96 + 0.04r) + t N(t, r, \underline{p})$,
$\overline{x}_T(t, r) = (1.01 - 0.01r) + t \overline{N}(t, r, \overline{p})$,  (23)

where $r \in [0, 1]$. The trial solutions in (23) satisfy the initial conditions. Substituting (22) and (23) into (17), the resulting unconstrained optimization problem is solved with the quasi-Newton BFGS method. Fig. 16 and Table 3 show the exact and approximate solutions for $t = 0.1$. Figs. 17 and 18 show the accuracy of the solution $E(0.1, r) = (\underline{E}(0.1, r), \overline{E}(0.1, r))$. Figs. 19–22 show the convergence of the computed weights $w_{i1}$, $w_{i2}$, the biases $u$ and the output-layer weights $v$ over the iterations. Comparing Table 3 with the numerical solution of [3] shows that the errors of the proposed method are small (and more training points or more weights can be used to obtain better results). To further illustrate the accuracy of the method, Table 4 reports the results at $t = 1$. In the next example, the method is applied to a nonlinear fuzzy differential equation.


Fig. 13. Convergence of the weights $w_{i2}$ for Example 5.2.

Fig. 14. Convergence of the bias $u$ for Example 5.2.

Example 5.4. Consider the following nonlinear fuzzy initial value problem:

$x'(t) = t\, x(t)^2$, $t \in [0, 1]$,
$x(0) = (1.1 + 0.1r,\; 1.3 - 0.1r)$,  (24)

where $r \in [0, 1]$. The parametric form of the problem is:

$\underline{x}'(t) = F[t, \underline{x}(t, r), \overline{x}(t, r)]$, $\underline{x}(0) = 1.1 + 0.1r$,
$\overline{x}'(t) = G[t, \underline{x}(t, r), \overline{x}(t, r)]$, $\overline{x}(0) = 1.3 - 0.1r$,  (25)

in which $t \in [0, 1]$ and $F$, $G$ satisfy (9). The neural form of the trial solutions is:

$\underline{x}_T(t, r) = (1.1 + 0.1r) + t N(t, r, \underline{p})$,
$\overline{x}_T(t, r) = (1.3 - 0.1r) + t \overline{N}(t, r, \overline{p})$.  (26)

Figs. 24 and 25 show the accuracy of the solution $E(0.2, r)$, and Fig. 23 and Table 5 show the exact and approximate solutions. Figs. 26–29 show the convergence of the computed weights $w_{i1}$, $w_{i2}$, the biases $u$ and the output-layer weights $v$ over the iterations.


Fig. 15. Convergence of the weights $v$ for Example 5.2.

Fig. 16. The exact and approximated solution for Example 5.3.

Table 3. Comparison of the exact $\underline{x}_a(0.1, r)$ and approximated $\underline{x}_T(0.1, r)$ solutions for Example 5.3.

r     $\underline{x}_a(0.1,r)$  $\underline{x}_T(0.1,r)$  $|\underline{E}(0.1,r)|$  $\overline{x}_a(0.1,r)$  $\overline{x}_T(0.1,r)$  $|\overline{E}(0.1,r)|$
0     0.9636356   0.9636863   5.067853e-5    1.018894   1.018932   3.753089e-5
0.1   0.9677558   0.9677945   3.873561e-5    1.017488   1.017530   4.153197e-5
0.2   0.9718760   0.9718966   2.067133e-5    1.016083   1.016075   7.614799e-6
0.3   0.9759961   0.9760564   6.021995e-5    1.014677   1.014716   3.922612e-5
0.4   0.9801163   0.9801237   7.410150e-6    1.013271   1.013312   4.092031e-5
0.5   0.9842365   0.9842988   0.6225859e-5   1.011866   1.011901   3.571873e-5
0.6   0.9883567   0.9883914   3.472099e-5    1.010460   1.010485   2.494235e-5
0.7   0.9924769   0.9925201   4.320305e-5    1.009054   1.009098   4.355865e-5
0.8   0.9965971   0.9966132   1.614909e-5    1.007649   1.007676   2.703282e-5
0.9   1.000717    1.000730    1.253534e-5    1.006243   1.006282   3.883441e-5
1     1.004837    1.004840    2.142460e-6    1.004837   1.004857   1.924814e-5

Example 5.5. Consider the following nonlinear FDE:

$x'(t) = 3A\, x(t)^2$, $t \in [0, 0.1]$,
$x(0) = (0.5\sqrt{r},\; 0.2\sqrt{1 - r} + 0.5)$,  (27)

where $A = (1 + r, 3 - r)$ is a fuzzy parameter and $0 \le r \le 1$. The parametric form of the problem is:

Fig. 17. $\underline{E}(0.1, r)$ for Example 5.3.

Fig. 18. $\overline{E}(0.1, r)$ for Example 5.3.

Fig. 19. Convergence of the weights $w_{i1}$ for Example 5.3.


Fig. 20. Convergence of the weights $w_{i2}$ for Example 5.3.

Fig. 21. Convergence of the bias $u$ for Example 5.3.

Fig. 22. Convergence of the weights $v$ for Example 5.3.


Table 4. Comparison of the exact $\underline{x}_a(1, r)$ and approximated $\underline{x}_T(1, r)$ solutions (Example 5.3, $t = 1$).

r     $\underline{x}_a(1,r)$  $\underline{x}_T(1,r)$  $|\underline{E}(1,r)|$  $\overline{x}_a(1,r)$  $\overline{x}_T(1,r)$  $|\overline{E}(1,r)|$
0     1.294404   1.294390   1.419715e-5   1.430318   1.430333   1.451833e-5
0.2   1.309099   1.309114   1.428876e-5   1.417831   1.417820   1.051050e-5
0.4   1.323794   1.323811   1.666511e-5   1.405343   1.405329   1.340752e-5
0.6   1.338489   1.338471   1.861692e-5   1.392855   1.392876   2.060321e-5
0.8   1.353184   1.353199   1.458449e-5   1.380367   1.380366   1.289424e-6
1     1.367879   1.367859   2.022175e-5   1.367879   1.367907   2.768503e-5

Fig. 23. The exact and approximated solution for Example 5.4.

Fig. 24. $\underline{E}(0.2, r)$ for Example 5.4.

$\underline{x}'(t) = F[t, \underline{x}(t, r), \overline{x}(t, r)]$, $\underline{x}(0) = 0.5\sqrt{r}$,
$\overline{x}'(t) = G[t, \underline{x}(t, r), \overline{x}(t, r)]$, $\overline{x}(0) = 0.2\sqrt{1 - r} + 0.5$,  (28)

where $F$, $G$ satisfy (9). The neural form of the trial solutions is:

$\underline{x}_T(t, r) = 0.5\sqrt{r} + t N(t, r, \underline{p})$,
$\overline{x}_T(t, r) = (0.2\sqrt{1 - r} + 0.5) + t \overline{N}(t, r, \overline{p})$.  (29)

Fig. 25. $\overline{E}(0.2, r)$ for Example 5.4.

Table 5. Comparison of the exact $\underline{x}_a(0.2, r)$ and approximated $\underline{x}_T(0.2, r)$ solutions for Example 5.4.

r     $\underline{x}_a(0.2,r)$  $\underline{x}_T(0.2,r)$  $|\underline{E}(0.2,r)|$  $\overline{x}_a(0.2,r)$  $\overline{x}_T(0.2,r)$  $|\overline{E}(0.2,r)|$
0     1.124744   1.124766   2.169210e-5   1.334702   1.334767   6.468769e-5
0.1   1.135201   1.135212   1.097921e-5   1.324163   1.324122   4.109317e-5
0.2   1.145663   1.145672   9.015843e-6   1.313629   1.313763   1.341154e-4
0.3   1.156129   1.156145   1.660173e-5   1.303099   1.303090   8.407460e-6
0.4   1.166598   1.166633   3.432918e-5   1.292573   1.291846   7.270456e-4
0.5   1.177073   1.177105   3.185205e-5   1.282051   1.282199   1.475989e-4
0.6   1.187551   1.187587   3.543493e-5   1.271534   1.271621   8.677992e-5
0.7   1.198034   1.198045   1.129344e-5   1.261021   1.261074   5.336968e-5
0.8   1.208521   1.208543   2.214695e-5   1.250513   1.250741   2.281000e-4
0.9   1.219012   1.219025   1.270053e-5   1.240008   1.239971   3.765595e-5
1     1.229508   1.229525   1.689782e-5   1.229508   1.229564   5.614828e-5

Fig. 26. Convergence of the weights $w_{i1}$ for Example 5.4.

Figs. 31 and 32 show the accuracy of the solution $E(0.1, r)$, and Fig. 30 and Table 6 show the exact and approximate solutions. Figs. 33–36 show the convergence of the computed weights $w_{i1}$, $w_{i2}$, the biases $u$ and the output-layer weights $v$ over the iterations.


Fig. 27. Convergence of the weights $w_{i2}$ for Example 5.4.

Fig. 28. Convergence of the bias $u$ for Example 5.4.

Fig. 29. Convergence of the weights $v$ for Example 5.4.


Fig. 30. The exact and approximated solution for Example 5.5.

Fig. 31. E(0.1, r) for Example 5.5.

Example 5.6. Consider an electrical circuit (an LR circuit) with an AC source. The current equation of this circuit can be written as:

i′(t) = −(R/L) i(t) + v(t),  t ∈ [0, 1],
i(0) = (0.96 + 0.04r, 1.01 − 0.01r),    (30)

where R is the circuit resistance, L is the inductance of the solenoid, and 0 ≤ r ≤ 1. Suppose that v(t) = sin(t), R = 1 Ω and L = 1 H, so (30) can be written as:

i′(t) = −i(t) + sin(t),  t ∈ [0, 1],
i(0) = (0.96 + 0.04r, 1.01 − 0.01r).    (31)
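Under the parametric (endpoint) formulation used in the paper, the H-derivative turns (31) into the coupled system i̲′(t, r) = −ī(t, r) + sin t, ī′(t, r) = −i̲(t, r) + sin t, which decouples in the sum u = i̲ + ī and the difference w = ī − i̲ and so admits a closed form. A sketch that reproduces the exact values tabulated later in Table 7 (the decoupling and the closed form are worked out here as a check, not quoted from the paper):

```python
import numpy as np

def exact_endpoints(t, r):
    """Exact solution of the coupled endpoint system for (31):
        lower' = -upper + sin(t),  upper' = -lower + sin(t),
        lower(0) = 0.96 + 0.04r,   upper(0) = 1.01 - 0.01r.
    Substituting u = lower + upper and w = upper - lower gives
        u' = -u + 2 sin(t)  =>  u(t) = (u0 + 1) e^{-t} + sin(t) - cos(t),
        w' =  w             =>  w(t) = w0 e^{t}  (the support grows, as is
    typical for H-derivative solutions)."""
    u0 = (0.96 + 0.04 * r) + (1.01 - 0.01 * r)
    w0 = (1.01 - 0.01 * r) - (0.96 + 0.04 * r)
    u = (u0 + 1.0) * np.exp(-t) + np.sin(t) - np.cos(t)
    w = w0 * np.exp(t)
    return (u - w) / 2.0, (u + w) / 2.0

low, up = exact_endpoints(0.98, 0.0)
print(f"{low:.7f} {up:.7f}")  # matches the r = 0 row of Table 7
```

At r = 1 the two endpoints coincide, as they must for a fuzzy initial value that collapses to a crisp number at membership level one.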

The parametric form of the problem is:

i̲′(t, r) = F[t, i̲(t, r), ī(t, r)],   i̲(0, r) = 0.96 + 0.04r,
ī′(t, r) = G[t, i̲(t, r), ī(t, r)],   ī(0, r) = 1.01 − 0.01r,    (32)

where F, G satisfy (9). Trial solutions are:


Fig. 32. E(0.1, r) for Example 5.5.

Table 6. Comparison of the exact x_a(0.1, r) and approximated x_T(0.1, r) solutions for Example 5.5.

r     x̲_a(0.1,r)   x̲_T(0.1,r)    |E(0.1,r)|     x̄_a(0.1,r)   x̄_T(0.1,r)   |E(0.1,r)|
0.0   0            1.521837e−8   1.521837e−8    1.891892     1.891879     1.325872e−5
0.1   0.1668180    0.1668180     9.006475e−8    1.724647     1.724595     5.245013e−5
0.2   0.2431826    0.2431828     1.333566e−7    1.579772     1.579803     3.013005e−5
0.3   0.3066089    0.3066094     4.564952e−7    1.452423     1.452412     1.173814e−5
0.4   0.3646604    0.3646614     1.037906e−6    1.338857     1.338843     1.477213e−5
0.5   0.4204459    0.4204467     7.377851e−7    1.236037     1.236061     2.468082e−5
0.6   0.4757399    0.4757396     2.985106e−7    1.141303     1.141312     9.430248e−6
0.7   0.5317856    0.5317851     5.020650e−7    1.052001     1.052007     5.657456e−6
0.8   0.5895990    0.5895982     8.304654e−7    0.9647689    0.9647744    5.578531e−6
0.9   0.6501168    0.6501146     2.172205e−6    0.8730387    0.8730405    1.815409e−6
1.0   0.7142857    0.7142825     8.217290e−7    0.7142857    0.7142830    2.720800e−6

Fig. 33. Convergence of the weights w_{i1} for Example 5.5.

Fig. 34. Convergence of the weights w_{i2} for Example 5.5.

Fig. 35. Convergence of the bias u for Example 5.5.

Fig. 36. Convergence of the weights v for Example 5.5.


Fig. 37. The exact and approximated solution for Example 5.6.

Fig. 38. E(t, 0.5) for Example 5.6.

Fig. 39. E(t, 0.5) for Example 5.6.

Table 7. Comparison of the exact i_a(0.98, r) and approximated i_T(0.98, r) solutions for Example 5.6.

r     i̲_a(0.98,r)  i̲_T(0.98,r)  |E(0.98,r)|    ī_a(0.98,r)  ī_T(0.98,r)  |E(0.98,r)|
0.0   0.6274630    0.6274234    3.955934e−5    0.7606858    0.7607083    2.245913e−5
0.2   0.6419112    0.6418955    1.565424e−5    0.7484895    0.7484859    3.553948e−6
0.4   0.6563594    0.6563819    2.251957e−5    0.7362931    0.7362615    3.157580e−5
0.6   0.6708076    0.6709217    1.140533e−4    0.7240968    0.7239564    1.403410e−4
0.8   0.6852558    0.6852821    2.626915e−5    0.7119004    0.7118550    4.541480e−5
1.0   0.6997041    0.6997584    5.437859e−5    0.6997041    0.6996472    5.684037e−5

i̲_T(t, r) = (0.96 + 0.04r) + t N(t, r, p),
ī_T(t, r) = (1.01 − 0.01r) + t N̄(t, r, p̄).    (33)
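The weights in (33) are found by minimizing the squared residual of the parametric system over a grid of training points; the paper does this with the Matlab optimization toolbox. A minimal Python sketch under stated assumptions (a fixed level r = 0.5, five hidden units per network, and scipy's BFGS standing in for the toolbox) trains both networks on the coupled endpoint equations i̲′ = −ī + sin t, ī′ = −i̲ + sin t:

```python
import numpy as np
from scipy.optimize import minimize

H, r = 5, 0.5
ts = np.linspace(0.0, 1.0, 11)                       # training points in [0, 1]
lo0, up0 = 0.96 + 0.04 * r, 1.01 - 0.01 * r          # fuzzy initial endpoints at this r

def unpack(p):
    # Two networks, each parameterized by hidden weights w, biases u, output weights v.
    return p[:H], p[H:2*H], p[2*H:3*H], p[3*H:4*H], p[4*H:5*H], p[5*H:]

def net(t, w, u, v):
    s = 1.0 / (1.0 + np.exp(-(np.outer(t, w) + u)))  # sigmoid activations, shape (T, H)
    N = s @ v                                        # network output N(t)
    dN = (s * (1.0 - s) * w) @ v                     # analytic dN/dt
    return N, dN

def objective(p):
    w1, u1, v1, w2, u2, v2 = unpack(p)
    N1, dN1 = net(ts, w1, u1, v1)
    N2, dN2 = net(ts, w2, u2, v2)
    lo, dlo = lo0 + ts * N1, N1 + ts * dN1           # trial solutions and derivatives
    up, dup = up0 + ts * N2, N2 + ts * dN2
    # Squared residuals of the coupled endpoint system for Example 5.6.
    return np.sum((dlo + up - np.sin(ts))**2 + (dup + lo - np.sin(ts))**2)

rng = np.random.default_rng(1)
p0 = 0.1 * rng.normal(size=6 * H)
res = minimize(objective, p0, method="BFGS", options={"maxiter": 500})
print("final residual:", objective(res.x))
```

The initial conditions are exact by construction, so the optimizer only has to drive the equation residual to zero; the convergence histories in Figs. 40–43 correspond to this kind of iteration.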

Figs. 38 and 39 show the error E(t, r = 0.5), and Fig. 37 and Table 7 show the exact and approximated solutions for Example 5.6. Figs. 40–43 show the convergence behavior of the computed values of the weights w_{i1} and w_{i2}, the bias u, and the output-layer weights v over the iterations.

Fig. 40. Convergence of the weights w_{i1} for Example 5.6.

Fig. 41. Convergence of the weights w_{i2} for Example 5.6.

Fig. 42. Convergence of the bias u for Example 5.6.

Fig. 43. Convergence of the weights v for Example 5.6.

6. Concluding remarks

The use of FDEs is a natural way to model dynamical systems under possibilistic uncertainty. In this paper, we presented a new method for solving fuzzy differential equations and demonstrated, for the first time, the ability of neural networks to approximate the solutions of FDEs. Comparing our results with those obtained by existing numerical methods (e.g. [3]) shows that the proposed method yields more accurate approximations. Even better results (especially in nonlinear cases) may be possible with more neurons or more training points. Moreover, once an FDE has been solved, the solution is obtainable at any arbitrary point of the training interval (even between training points). The main reason for using neural networks is their ability to approximate functions. Further research is in progress to extend this approach to n-th order FDEs as well as to systems of FDEs.

Acknowledgments

The authors wish to thank the referees and the Editor-in-Chief for their kind comments and valuable remarks.

References

[1] S. Abbasbandy, Numerical methods for fuzzy differential inclusions, Computers and Mathematics with Applications 48 (2004) 1633–1641.
[2] S. Abbasbandy, T. AllahViranloo, Numerical solution of fuzzy differential equation by Runge–Kutta method, Nonlinear Studies 11 (1) (2004) 117–129.


[3] T. Allahviranloo, N. Ahmady, E. Ahmady, Numerical solution of fuzzy differential equations by predictor–corrector method, Information Sciences 177 (2007) 1633–1647.
[4] E. Babolian, H. Sadeghi, Sh. Javadi, Numerically solution of fuzzy differential equations by Adomian method, Applied Mathematics and Computation 149 (2004) 547–557.
[5] L.C. Barros, R.C. Bassanezi, P.A. Tonelli, Fuzzy modeling in population dynamics, Ecological Modeling 128 (2000) 27–33.
[6] B. Bede, S.G. Gal, Generalizations of the differentiability of fuzzy-number-valued functions with applications to fuzzy differential equations, Fuzzy Sets and Systems 151 (2005) 581–599.
[7] B. Bede, I.J. Rudas, A.L. Bencsik, First order linear fuzzy differential equations under generalized differentiability, Information Sciences 177 (2007) 1648–1662.
[8] J.J. Buckley, T. Feuring, Fuzzy differential equations, Fuzzy Sets and Systems 110 (2000) 43–54.
[9] Y.C. Cano, H.R. Flores, On new solutions of fuzzy differential equations, Chaos, Solitons and Fractals 38 (1) (2008) 112–119.
[10] S.S.L. Chang, L. Zadeh, On fuzzy mapping and control, IEEE Transactions on Systems, Man and Cybernetics 2 (1972) 30–34.
[11] D. Dubois, H. Prade, Towards fuzzy differential calculus, Fuzzy Sets and Systems 8 (1982) 225–233.
[12] M. Friedman, M. Ma, A. Kandel, Numerical solutions of fuzzy differential and integral equations, Fuzzy Sets and Systems 106 (1999) 35–48.
[13] E. Hullermeier, Numerical methods for fuzzy initial value problems, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 7 (1999) 439–461.
[14] R.M. Jafelice, L.C. Barros, F. Gomide, Fuzzy modeling in symptomatic HIV virus infected population, Bulletin of Mathematical Biology 66 (2004) 1597–1620.
[15] L.J. Jowers, J.J. Buckley, K.D. Reilly, Simulating continuous fuzzy systems, Information Sciences 177 (2007) 436–448.
[16] O. Kaleva, Fuzzy differential equations, Fuzzy Sets and Systems 24 (1987) 301–317.
[17] I.E. Lagaris, A. Likas, D.I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9 (5) (1998) 987–1000.
[18] D.G. Luenberger, Linear and Nonlinear Programming, second ed., Addison-Wesley, 1984.
[19] M.T. Mizukoshi, L.C. Barros, Y. Chalco-Cano, H. Roman-Flores, R.C. Bassanezi, Fuzzy differential equations and the extension principle, Information Sciences 177 (2007) 3627–3635.
[20] J.J. Nieto, The Cauchy problem for continuous fuzzy differential equations, Fuzzy Sets and Systems 102 (1999) 259–262.
[21] P. Prakash, Existence of solutions of fuzzy neutral differential equations in Banach spaces, Dynamical Systems and Applications 14 (2005) 407–417.
[22] M.L. Puri, D. Ralescu, Fuzzy random variables, Journal of Mathematical Analysis and Applications 114 (1986) 409–422.
[23] M.L. Puri, D. Ralescu, Differentials of fuzzy functions, Journal of Mathematical Analysis and Applications 91 (1983) 552–558.
[24] S. Song, C. Wu, Existence and uniqueness of solutions to Cauchy problem of fuzzy differential equations, Fuzzy Sets and Systems 110 (2000) 55–67.
[25] L. Stefanini, L. Sorini, M.L. Guerra, Parametric representation of fuzzy numbers and application to fuzzy calculus, Fuzzy Sets and Systems 157 (2006) 2423–2455.
[26] C. Wu, S. Song, Existence theorem to the Cauchy problem of fuzzy differential equations under compactness-type conditions, Information Sciences 108 (1998) 123–134.
[27] L.A. Zadeh, Fuzzy sets, Information and Control 8 (1965) 338–353.
[28] L.A. Zadeh, Toward a generalized theory of uncertainty (GTU) – an outline, Information Sciences 172 (2005) 1–40.