Journal of Fuzzy Set Valued Analysis, Volume 2017, Issue 1 (2017), pp. 38-49
Available online at www.ispacs.com/jfsva
Article ID jfsva-00380, 12 pages
doi:10.5899/2017/jfsva-00380
Research Article
Numerical Solution of Fuzzy Linear Regression Using a Fuzzy Neural Network Based on a Probability Function

Somayeh Ezadi1*, Sahar Askari2
(1) Department of Applied Mathematics, Hamedan Branch, Islamic Azad University, Hamedan 65138, Iran
(2) Department of Mathematics, Farhangian University (Shahid Sadougi Pardise) of Kermanshah Branch, Kermanshah, Iran
Copyright © 2017 Somayeh Ezadi and Sahar Askari. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
In this work, we develop a fuzzy neural network based on a probability function for estimating the output of fuzzy regression models with crisp (real) inputs and fuzzy outputs. The proposed approach fuzzifies the outputs and weights of a conventional neural network based on a probability function. The total squared error of the proposed method is minimized by an optimization method in order to obtain the optimal weights of the neural network. The advantages of the proposed approach are its computational simplicity and its performance. To compare the performance of the proposed method with traditional methods from the literature, several numerical examples are presented.
Keywords: Fuzzy numbers, Probability function, Linear regression, Neural network, Error analysis, Weights.
1 Introduction

Regression analysis is one of the most popular methods of estimation. It is applied to evaluate the functional relationship between dependent and independent variables. Fuzzy regression (FR) is a fuzzy version of classical regression analysis in which some elements of the model are represented by fuzzy numbers. After Zadeh introduced the concept of fuzzy sets in 1965 [1, 2, 3], various researchers extended regression analysis to the fuzzy setting. Fuzzy linear regression (FLR) was first suggested by Tanaka [4]; it is an extension of classical regression analysis that has become a powerful tool for discovering ambiguous relationships [5]. Indeed, in fuzzy regression, some elements of the regression model are presented with ambiguous information. Different methods have been proposed for solving this type of problem [6-10]. They fall into two approaches: linear programming (LP) based methods and fuzzy least squares (FLS) methods [11]. Neural networks (NN) can be valuable when the functional
* Corresponding Author. Email address: [email protected], Tel: +989183565466
relationship between dependent and independent variables is not known. The probability basis function (PBF) neural network is one of the essential types of neural network. In this paper, we propose a simple but powerful method for fuzzy regression analysis using a PBF neural network, which has higher flexibility and a wider field of application than the existing LP-based and FLS fuzzy regression methods [12-16]. The effectiveness of our method is demonstrated by three examples and computational experience.

The present article consists of the following sections. In Section 2, the basic required concepts are stated. In Section 3, the proposed model for solving the FLR equations is introduced. In Section 4, the error of the recommended method is analyzed. In Section 5, the method of calculating the weights of the PBF neural network is described. In Section 6, numerical examples are presented. In Section 7, the conclusion is presented, followed by the references.

2 Preliminaries and notations

Definition 2.1. A fuzzy number in parametric form is an ordered pair of functions (u_1(α), u_2(α)), 0 ≤ α ≤ 1, satisfying the following conditions:
(i) u_1(α) is a bounded increasing function, right-continuous on the interval [0, 1];
(ii) u_2(α) is a bounded decreasing function, left-continuous on the interval [0, 1].

Definition 2.2. If A and B are fuzzy numbers with [A]^α = (a_1(α), a_2(α)), [B]^α = (b_1(α), b_2(α)) and α ∈ [0, 1], then the fuzzy operations between them are defined as follows [17]:

[A + B]^α = (a_1(α) + b_1(α), a_2(α) + b_2(α)),
[−A]^α = (−a_2(α), −a_1(α)),
[A − B]^α = (a_1(α) − b_2(α), a_2(α) − b_1(α)),
[λA]^α = (λ a_1(α), λ a_2(α)), λ ≥ 0,
[λA]^α = (λ a_2(α), λ a_1(α)), λ < 0.

Definition 2.3. The distance between two fuzzy numbers A and B with respect to a weight function g(α) is defined as:

d(A, B) = (∫_0^1 g(α) d^2(A_α, B_α) dα)^{1/2},
d^2(A_α, B_α) = (a_1(α) − b_1(α))^2 + (a_2(α) − b_2(α))^2,

where A_α = (a_1(α), a_2(α)) and B_α = (b_1(α), b_2(α)) are the α-cuts of A and B, respectively. g(α) is an increasing function on the interval [0, 1] with g(0) = 0 and ∫_0^1 g(α) dα = 1/2. The quantity d(A_α, B_α) measures the distance between the α-cuts of A and B, and g(α) can be interpreted as the weight of d^2(A_α, B_α).

Definition 2.4. Let A be a symmetric triangular fuzzy number in LR form (a, s_a)_T, where a is the center and s_a is the width. In general we write A = (a, s_a^L, s_a^R)_T, where a is the center of A and s_a^L and s_a^R are the left and right widths, respectively. In the special case s_a^L = s_a^R, A is called symmetric and we write A = (a, s_a)_T.

Theorem 2.1. Let A = (a, s_a)_T and B = (b, s_b)_T be two symmetric triangular fuzzy numbers. Then, for the weight function g(α) = α,

d^2(A, B) = (a − b)^2 + (1/6)(s_a − s_b)^2.

Proof. See [18].
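As a quick check, the weighted distance of Theorem 2.1 reduces to a simple closed form; a minimal sketch in Python (the function name is ours):

```python
def fuzzy_dist_sq(A, B):
    """Squared distance d^2(A, B) between symmetric triangular fuzzy
    numbers A = (a, s_a)_T and B = (b, s_b)_T under the weight
    function g(alpha) = alpha (Theorem 2.1)."""
    (a, sa), (b, sb) = A, B
    return (a - b) ** 2 + (sa - sb) ** 2 / 6.0

# Identical numbers are at distance zero; when only the widths of
# (0, 6)_T and (0, 0)_T differ, the widths contribute 6^2 / 6 = 6.
print(fuzzy_dist_sq((1.0, 0.5), (1.0, 0.5)))  # 0.0
print(fuzzy_dist_sq((0.0, 6.0), (0.0, 0.0)))  # 6.0
```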
Definition 2.5. (MLP network training) To train feed-forward neural networks, the error back-propagation (BP) rule is used, which is based on the error-correction learning rule. To calculate the sensitivities of the different layers of neurons in the MLP network, the derivatives of the neurons' transfer functions are required, so differentiable transfer functions are used. One such function is the linear function, whose characteristics were explained in the previous section. The error function is described in the following sections.

Definition 2.6. (BFGS technique) To solve an unconstrained minimization problem, techniques such as the steepest descent method, the conjugate gradient method, or quasi-Newton methods can be employed. The Newton method is one of the important algorithms in nonlinear optimization; its main disadvantage is that it requires evaluating the matrix of second derivatives (the Hessian matrix). Quasi-Newton methods were originally proposed by Davidon in 1959 and were later developed by Fletcher and Powell (1963). Their most fundamental idea is to replace the Hessian matrix by an approximation. Here the quasi-Newton BFGS (Broyden-Fletcher-Goldfarb-Shanno) method is used; it has fast (superlinear) convergence (see [19]).

3 Main section

Consider the general model of FLR as follows:

(Y_i, x_i1, x_i2, ..., x_in); x_ij ∈ R, i = 1, ..., m, j = 1, ..., n,
Y = A_0 + A_1 x_1 + ... + A_n x_n. (3.1)

The aim is to obtain an optimal model with fuzzy coefficients for describing and analyzing the data and making predictions, where the x_ij are real numbers and the Y_i are fuzzy. For estimating a regression under the above conditions, we define the proposed method as follows:

y_i = Net, (3.2)

where y_i is the proposed solution and Net is a feed-forward artificial neural network consisting of two layers.
The first layer is the input layer and the second layer is the output layer, with a linear transfer function, introduced in the following form:

Net = w_0 + w_1 o_i1 + ... + w_n o_in, i = 1, ..., m, j = 1, ..., n, (3.3)

where the w_j are the artificial neural network weights and the o_ij are the inputs of the neural network, corresponding to the x_ij. Now suppose the proposed method is based on the density function f(θ) = λ e^{−λθ}, α ≤ θ ≤ β. Clearly the neural network of relation (3.3) is fuzzy valued, which means we have fuzzy weights for real observations; thus equation (3.3) can be written as follows:

Net = w_0 + Σ_{j=1}^{n} w_j o_ij, with the w_j fuzzy. (3.4)

We suppose Net = (Net_A1, Net_A2), where Net_A1 is the center and Net_A2 is the fuzzy width of the network output Net, so relation (3.4) can be rewritten as follows:

Net = (Net_A1, Net_A2) = (w_0A1, w_0A2) + (w_1A1, w_1A2) o_i1 + ..., (3.5)

where Net_A1 and Net_A2 are of the following form:

Net_A1 = w_0A1 + w_1A1 o_i1 + ...,
Net_A2 = |w_0A2| + |w_1A2| o_i1 + .... (3.6)

Now we should find the four weights of relation (3.5) such that the following condition is met:
The value of Net, which is the estimated answer y_i, should be close to the main answer Y. To that end, we define the target function of the neural network as follows:

min (1/6) ((e^{−Net_A1 λ_1} − e^{−a λ_1})^2 + (e^{−Net_A2 λ_2} − e^{−b λ_2})^2), (3.7)

and, in general, for m observations we have

min (1/6) Σ_{i=1}^{m} ((e^{−Net_iA1 λ_1} − e^{−a_i λ_1})^2 + (e^{−Net_iA2 λ_2} − e^{−b_i λ_2})^2),

under the assumption that Y = (a, b), where a is the center and b the fuzzy width of Y. So we have

P(a ≤ θ ≤ b) = ∫_a^b f(θ) dθ = ∫_a^b λ e^{−λθ} dθ = −e^{−λθ} |_a^b = e^{−λa} − e^{−λb}. (3.8)

For λ = (λ_1, λ_2) = (α, β), (3.8) becomes:

P(a ≤ θ ≤ b) = { h = e^{−λ_1 a} − e^{−λ_1 b}, λ_1 = α;  k = e^{−λ_2 a} − e^{−λ_2 b}, λ_2 = β. } (3.9)

The degree of membership for the above relation, denoted G(θ), is as follows:

G(θ) = B(P(a ≤ θ ≤ b)) = { e^{−λa} − e^{−λb}, a ≤ θ ≤ b;  1, θ ≤ a;  0, otherwise. }

And so

P(y is Net) = ∫_R μ_Net(u) f_y(u) du,

where μ_Net is the membership function of the fuzzy set Net, u is a value of y, f_y(u) is the probability density function of y, and P(y = u) is the probability function of y. Since we do not know the underlying probability distribution, it is clear from this information that the probability distribution is itself a fuzzy number. By minimizing equation (3.7), the four weights of the neural network, namely w_0A1, w_1A1, w_0A2 and w_1A2, are obtained. Substituting these weights into equation (3.5) gives the values of Net_A1 and Net_A2, and finally the value of Net in equation (3.4).
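To make the quantities above concrete, the following Python sketch (function names are ours; the values λ = (0.2, 0.3) follow the examples of Section 6) evaluates the interval probability (3.8) and the single-observation target function (3.7):

```python
import math

def interval_prob(a, b, lam):
    """P(a <= theta <= b) for the density f(theta) = lam * exp(-lam * theta),
    as in (3.8): e^{-lam*a} - e^{-lam*b}."""
    return math.exp(-lam * a) - math.exp(-lam * b)

def objective(net_c, net_w, a, b, lam1=0.2, lam2=0.3):
    """Single-observation target function (3.7): squared differences of
    the exponential transforms of center and width, scaled by 1/6."""
    return ((math.exp(-net_c * lam1) - math.exp(-a * lam1)) ** 2
            + (math.exp(-net_w * lam2) - math.exp(-b * lam2)) ** 2) / 6.0

# The objective vanishes exactly when the network output (center, width)
# matches the observed fuzzy number (a, b).
print(objective(8.0, 1.8, 8.0, 1.8))  # 0.0
```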
Figure 1: Fuzzy Neural Network Based on Probability Function.
4 Error Analysis

In this section, we study the error. The error ε_i in equation (3.2) is stated in the following form:

ε_i = |P(Y_i) − P(Net_i)|, (4.10)

where ε_i is a symmetric triangular fuzzy number. Thus:
M = Σ_{i=1}^{m} ε_i^2.

Now, we minimize the total squared error, with respect to the distance d mentioned in the previous section, by setting w_i = u_i:

min M = min M(u_0, ..., u_n).

Given the above, for equation (3.4) we have:

M(u_0, ..., u_n) = Σ_{i=1}^{m} d^2(P(Net_i), P(Y_i)). (4.11)

The idea is to find the least-squares weights w_i by minimizing M, that is, the total squared error with respect to the distance d. This is done in MATLAB; its fminunc command is based on a quasi-Newton algorithm.

5 Weight Calculation Algorithm for the third state

In this section, the algorithm for calculating the neural network weights for the third state is studied; the first and second states are treated in the same way. To that end, we first explain the method of calculating the u_i. Suppose Y_i = (y_i, s_i)_T and N_i = (u_i, σ_i)_T, where y_i and u_i are the centers and s_i and σ_i the widths of the symmetric triangular fuzzy numbers Y_i and N_i, respectively. Given equation (3.6), we have:

N_i = (u_0 + u_1 o_i1 + ... + u_n o_in, |σ_0| + |σ_1| o_i1 + ... + |σ_n| o_in)_T, (5.12)

where

N_iOne = u_0 + u_1 o_i1 + ... + u_n o_in, (5.13)
N_iTwo = |σ_0| + |σ_1| o_i1 + ... + |σ_n| o_in. (5.14)

Given equations (5.13) and (5.14), relation (4.11) becomes:

M(u_0, ..., u_n) = (1/6) Σ_{i=1}^{m} ((e^{N_iOne} − e^{y_i})^2 + (e^{N_iTwo} − e^{s_i})^2). (5.15)
Now, minimizing equation (5.15) starts with the initial weights u_i = 0; the weights are updated in the direction in which the target (performance) function decreases, that is, opposite to its gradient. The neural network algorithm used for calculating the weights in this article is a quasi-Newton algorithm, namely BFGS [19]. The basic step of quasi-Newton methods is computed from the Newton formula:

u_ij(k+1) = u_ij(k) − A_1^{−1}(k) ∂F̃_1(k)/∂u_ij(k),
σ_ij(k+1) = σ_ij(k) − A_1^{−1}(k) ∂F̃_1(k)/∂σ_ij(k), (5.16)

where A_1(k) is the matrix of second derivatives of the performance function F̃_1(k) at the present values of the weights. Also

F̃_1(k) = (1/6) (Σ_{i=1}^{m} ε_i1^2 + Σ_{i=1}^{m} ε_i2^2), (5.17)

where ε_i1 = |e^{N_iOne} − e^{y_i}| and ε_i2 = |e^{N_iTwo} − e^{s_i}|.
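The whole weight-calculation step can be sketched in a few lines. The snippet below (Python with NumPy/SciPy, standing in for the MATLAB fminunc call used here) minimizes an exponential loss in the spirit of (5.15) by quasi-Newton BFGS, on the Example 6.1 data; the exact sign/scaling convention of the exponentials, with λ = (0.2, 0.3) as in the target function (3.7), is our assumption:

```python
import numpy as np
from scipy.optimize import minimize

def fit_pbf(x, yc, yw, lam=(0.2, 0.3)):
    """Fit center weights (u_0, u_1) and width weights (sigma_0, sigma_1)
    by quasi-Newton BFGS, minimizing an exponential loss modeled on (5.15).
    Signs and scaling of the exponentials are our assumption."""
    X = np.column_stack([np.ones_like(x), x])
    def loss(w):
        c = X @ w[:2]           # centers N_iOne, cf. (5.13)
        s = X @ np.abs(w[2:])   # widths  N_iTwo, cf. (5.14)
        return (((np.exp(-lam[0] * c) - np.exp(-lam[0] * yc)) ** 2
                 + (np.exp(-lam[1] * s) - np.exp(-lam[1] * yw)) ** 2).sum()
                / 6.0)
    res = minimize(loss, 0.1 * np.ones(4), method="BFGS")
    return res.x[:2], np.abs(res.x[2:])

# Crisp inputs and fuzzy outputs (centers, widths) from Table 1.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
yc = np.array([8.0, 6.4, 9.5, 13.5, 13.0])
yw = np.array([1.8, 2.2, 2.6, 2.6, 2.4])
wc, ww = fit_pbf(x, yc, yw)
print(wc, ww)  # fitted (intercept, slope) pairs for centers and widths
```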
In the above equations, k is the iteration number. The disadvantage of Newton's method is that it is complex and computationally expensive, and as a result inappropriate for neural networks. There is, however, a class of algorithms based on Newton's method that does not require calculating the second derivatives, so its computational cost is lower. These quasi-Newton methods update an approximate Hessian matrix at each iteration, with the update computed as a function of the gradient. The most successful quasi-Newton method is BFGS, which usually converges faster and in fewer iterations. Finally, after obtaining the weights u_i using the mentioned algorithm (that is, with the fminunc command), we substitute them into equation (5.12) and obtain relation (3.6). For the first and second states the same process is applied.

6 Numerical examples

Example 6.1. Consider the fuzzy dependent variable Y and the real independent variable x_i, with the values given in Table 1 (adopted from reference [20]).

Table 1: Crisp input, fuzzy output data set from (Tanaka and Watada, 1989).

i | x_i | Y_i = (a_i, s_i)_T | Interval Y_i
1 | 1   | (8, 1.8)           | (6.2, 9.8)
2 | 2   | (6.4, 2.2)         | (4.2, 8.6)
3 | 3   | (9.5, 2.6)         | (6.9, 12.1)
4 | 4   | (13.5, 2.6)        | (10.9, 16.1)
5 | 5   | (13, 2.4)          | (10.6, 15.4)
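The "Interval Y_i" column of Table 1 is simply the support [a_i − s_i, a_i + s_i] of each symmetric triangular fuzzy number; a one-line sketch (the function name is ours):

```python
def support(center, width):
    """Support interval [a - s, a + s] of a symmetric triangular
    fuzzy number (a, s)_T, e.g. the 'Interval Yi' column of Table 1."""
    return (center - width, center + width)

print(support(8.0, 1.8))   # ~ (6.2, 9.8), the first row of Table 1
print(support(13.0, 2.4))  # ~ (10.6, 15.4), the last row
```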
Using these data, we develop an estimated fuzzy regression equation for Y_i via y_i = Net. Stopping condition: k = 32 iterations of the learning algorithm. The training starts with w_0 = (0, 0), w_1 = (0, 0). The values of y_iT for the density function λe^{−λθ} with λ = (0.2, 0.3) are visible in Table 2. The optimal weights of the neural network are as follows:

Net = (5.60, 1.79) + (1.34, 0.17)x.

For y_i:
fval = 0.012674214593777
firstorderopt = 2.346583642065525e-06
TT = 0.343202200000000

Table 2: Estimated crisp input, fuzzy output data set.

x_i | y_iT = (y_i, s_i)_T | Interval y_iT
1   | (6.9, 1.9)          | (5, 8.8)
2   | (8.3, 2.1)          | (6.2, 10.4)
3   | (9.6, 2.3)          | (7.3, 11.9)
4   | (10.9, 2.4)         | (8.5, 13.3)
5   | (12.3, 2.6)         | (9.7, 14.9)
Table 3: Fuzzy estimates (Y^L, Y^U) and SSE values for different methods.

i   | Tan          | HBS          | Pet            | SP            | FLS            | FRBF [21]      | PBF
1   | (2.1, 9.8)   | (6.2, 9.8)   | (5.91, 8.00)   | (3.52, 9.80)  | (4.66, 8.66)   | (4.99, 9.25)   | (5, 8.8)
2   | (4.2, 11.9)  | (7.3, 11.2)  | (7.68, 9.77)   | (4.84, 11.90) | (6.21, 10.53)  | (5.60, 9.44)   | (6.2, 10.4)
3   | (6.3, 14.0)  | (8.4, 12.6)  | (9.45, 11.54)  | (6.16, 14.00) | (7.76, 12.40)  | (7.73, 12.34)  | (7.3, 11.9)
4   | (8.4, 16.1)  | (9.5, 14.0)  | (11.22, 13.31) | (7.48, 16.10) | (9.31, 14.27)  | (10.41, 15.42) | (8.5, 13.3)
5   | (10.5, 18.2) | (10.6, 15.4) | (13.00, 15.09) | (8.80, 18.20) | (10.86, 16.14) | (11.04, 16.14) | (9.7, 14.9)
SSE | 562.660      | 40.504       | 37.362         | 529.2607      | 18.758         | 6.874          | 3.818
The convergence of the neural network weights is displayed in Figure 2.

Figure 2: The convergence of neural network weights for Example 6.1.

Example 6.2. Consider the fuzzy dependent variable Y and the real independent variable x_i, with the values given in Table 4 (adopted from reference [16]).

Table 4: Crisp input, fuzzy output data set from (Diamond, 1988).

i | x_i | Y_i = (a_i, s_i)_T
1 | 21  | (4, 0.8)
2 | 15  | (3, 0.3)
3 | 15  | (3.5, 0.35)
4 | 9   | (2, 0.4)
5 | 12  | (3, 0.45)
6 | 18  | (3.5, 0.7)
7 | 6   | (2.5, 0.38)
8 | 12  | (2.5, 0.5)
Using these data, we develop an estimated fuzzy regression equation for Y_i via y_i = Net. Stopping condition: k = 30 iterations of the learning algorithm. The training starts with w_0 = (0, 0), w_1 = (0, 0). The
values of y_iT for the density function λe^{−λθ} with λ = (0.2, 0.3) are visible in Table 5. The optimal weights of the neural network are as follows:

Net = (1.42, 0.16) + (0.11, 0.02)x.

Table 5: Estimated crisp input, fuzzy output data set.

i | x_i | y_iT = (y_i, s_i)_T
1 | 21  | (3.73, 0.58)
2 | 15  | (3.07, 0.46)
3 | 15  | (3.07, 0.46)
4 | 9   | (2.41, 0.34)
5 | 12  | (2.74, 0.4)
6 | 18  | (3.40, 0.52)
7 | 6   | (2.08, 0.28)
8 | 12  | (2.74, 0.4)
Table 6: Fuzzy estimates (Y^L, Y^U) and SSE values for different methods.

i   | Tan          | HBS          | Pet          | PBF
1   | (3.13, 4.80) | (3.20, 4.65) | (3.52, 4.16) | (3.15, 4.31)
2   | (2.36, 4.03) | (2.70, 3.74) | (2.85, 3.49) | (2.61, 3.53)
3   | (2.36, 4.03) | (2.70, 3.74) | (2.85, 3.49) | (2.61, 3.53)
4   | (1.60, 3.26) | (2.20, 2.82) | (2.18, 2.83) | (2.07, 2.75)
5   | (1.98, 3.64) | (2.45, 3.28) | (2.52, 3.16) | (2.34, 3.14)
6   | (2.75, 4.41) | (2.95, 4.20) | (3.18, 3.83) | (2.88, 3.92)
7   | (1.21, 2.88) | (1.95, 2.36) | (1.85, 2.49) | (1.8, 2.36)
8   | (1.98, 3.64) | (2.45, 3.28) | (2.52, 3.16) | (2.34, 3.14)
SSE | 45.027       | 3.270        | 2.194        | 1.661
The convergence of the neural network weights is displayed in Figure 3.
Figure 3: The convergence of neural network weights for Example 6.2.
Example 6.3. Consider the fuzzy dependent variable Y and the real independent variable x_i, with the values given in Table 7 (adopted from reference [18]). Using these data, we develop an estimated fuzzy regression equation for Y_i via y_i = Net. Stopping criterion: k = 26 iterations of the learning algorithm. The training starts with w_0 = (0, 0), w_1 = (0, 0). The values of y_iT for the density function λe^{−λθ} with λ = (0.2, 0.3) are visible in Table 7. The optimal weights of the neural network are as follows:

Net = (4.41, 0.16) + (1.15, 0.54)x (proposed method),
y = (0.8265, 0.0827) + (6.5021, 0.6902)x (method of [18]).

For example, if x = 0.75, the method of [18] predicts y = (5.70, 0.6), whereas the proposed method gives Net = (5.28, 0.57).

For y_i:
fval = 0.816651116226149
firstorderopt = 4.649162292480469e-06
SSE = 11.454565534132762
TT = 0.156001000000003

The convergence of the neural network weights is displayed in Figure 4.
Figure 4: The convergence of neural network weights for Example 6.3.
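The point predictions at x = 0.75 quoted in Example 6.3 can be checked directly; a sketch (the function name is ours) evaluating both fitted models componentwise on centers and widths, as in (3.5)-(3.6):

```python
def predict(w0, w1, x):
    """Evaluate a fitted fuzzy-linear model at a crisp input x.
    w0 and w1 are (center, width) pairs; centers and widths are
    propagated componentwise as in (3.5)-(3.6)."""
    return (w0[0] + w1[0] * x, w0[1] + w1[1] * x)

# Proposed method, Net = (4.41, 0.16) + (1.15, 0.54)x:
print(predict((4.41, 0.16), (1.15, 0.54), 0.75))        # ~ (5.27, 0.57)
# Method of [18], y = (0.8265, 0.0827) + (6.5021, 0.6902)x:
print(predict((0.8265, 0.0827), (6.5021, 0.6902), 0.75))  # ~ (5.70, 0.60)
```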
Table 7: Crisp input, fuzzy output data set from [18].

i  | x_i  | Y_i = (a_i, s_i)_T
1  | 0.78 | (3.08, 0.31)
2  | 0.64 | (2.86, 0.29)
3  | 0.62 | (6.25, 0.63)
4  | 0.49 | (4.11, 0.41)
5  | 1.10 | (1.04, 0.10)
6  | 0.61 | (2.71, 0.27)
7  | 0.74 | (4.45, 0.45)
8  | 1.15 | (6.92, 0.69)
9  | 1.08 | (7.41, 0.74)
10 | 0.38 | (9.08, 0.91)
11 | 0.61 | (6.56, 0.66)
12 | 0.98 | (5.05, 0.51)
13 | 0.71 | (5.23, 0.52)
14 | 0.51 | (5.16, 0.52)
15 | 0.77 | (11.10, 1.11)
16 | 0.99 | (4.47, 0.45)
17 | 3.56 | (28.84, 2.88)
18 | 0.86 | (9.43, 0.94)
19 | 0.61 | (4.50, 0.45)
20 | 0.64 | (9.30, 0.94)
21 | 0.71 | (9.48, 0.95)
22 | 0.61 | (3.65, 0.37)
23 | 0.63 | (10.14, 1.01)
24 | 1.13 | (3, 0.3)
7 Conclusion

In this article, we introduced a mathematical model of regression with fuzzy coefficients and a generalized neural system based on a probability function. We then calculated the FLR regression coefficients using an artificial neural network, an optimization technique, and the least-squares error method based on the distance between two fuzzy numbers. The error of the method was studied and it was shown that this error is of fuzzy type. It was shown that the neural network weights and the regression coefficients (FLR) converge.

References
[1] L. A. Zadeh, Fuzzy sets, Information and Control, 8 (1965) 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X
[2] L. A. Zadeh, Fuzzy sets and information granularity, in: M. M. Gupta, R. K. Ragade, R. R. Yager (Eds.), Advances in Fuzzy Set Theory and Applications, North-Holland, Amsterdam, (1979) 3-18.
[3] L. A. Zadeh, Fuzzy logic = computing with words, IEEE Trans. Fuzzy Systems, 4 (2) (1996) 103-111. https://doi.org/10.1109/91.493904
[4] H. Tanaka, Fuzzy data analysis by possibilistic linear models, Fuzzy Sets and Systems, 24 (1987) 363-375. https://doi.org/10.1016/0165-0114(87)90033-9
[5] R. Coppi, Management of uncertainty in statistical reasoning: the case of regression analysis, International Journal of Approximate Reasoning, 47 (3) (2008) 284-305. https://doi.org/10.1016/j.ijar.2007.05.011
[6] C. Kao, C. L. Chyu, Least-squares estimates in fuzzy regression analysis, European Journal of Operational Research, 148 (2003) 426-435. https://doi.org/10.1016/S0377-2217(02)00423-X
[7] M. Modarres, E. Nasrabadi, M. M. Nasrabadi, Fuzzy linear regression models with least square errors, Applied Mathematics and Computation, 163 (2005) 977-989. https://doi.org/10.1016/j.amc.2004.05.004
[8] M. Mosleh, T. Allahviranloo, M. Otadi, Evaluation of fully fuzzy regression models by fuzzy neural network, Neural Computing and Applications, 21 (2012) 105-112. https://doi.org/10.1007/s00521-011-0698-z
[9] M. Mosleh, M. Otadi, S. Abbasbandy, Fuzzy polynomial regression with fuzzy neural networks, Applied Mathematical Modelling, 35 (2011) 5400-5412. https://doi.org/10.1016/j.apm.2011.04.039
[10] H. Tanaka, I. Hayashi, J. Watada, Possibilistic linear regression analysis for fuzzy data, European Journal of Operational Research, 40 (1989) 389-396. https://doi.org/10.1016/0377-2217(89)90431-1
[11] M. S. Yang, T. S. Lin, Fuzzy least-squares linear regression analysis for fuzzy input-output data, Fuzzy Sets and Systems, 126 (2002) 389-399. https://doi.org/10.1016/S0165-0114(01)00066-5
[12] H. Tanaka, J. Watada, Possibilistic linear systems and their application to the linear regression model, Fuzzy Sets and Systems, 27 (1988) 275-289. https://doi.org/10.1016/0165-0114(88)90054-1
[13] M. Hojati, C. R. Bector, K. Smimou, A simple method of fuzzy linear regression, European Journal of Operational Research, 166 (2005) 172-184. https://doi.org/10.1016/j.ejor.2004.01.039
[14] G. Peters, Fuzzy linear regression with fuzzy intervals, Fuzzy Sets and Systems, 63 (1994) 45-55. https://doi.org/10.1016/0165-0114(94)90144-9
[15] D. A. Savic, W. Pedrycz, Evaluation of fuzzy linear regression models, Fuzzy Sets and Systems, 39 (1991) 51-63. https://doi.org/10.1016/0165-0114(91)90065-X
[16] P. Diamond, Fuzzy least squares, Information Sciences, 46 (1988) 141-157. https://doi.org/10.1016/0020-0255(88)90047-3
[17] H. J. Zimmermann, Fuzzy Set Theory and Its Applications, Kluwer Academic, Boston, (1991). https://doi.org/10.1007/978-94-015-7949-0
[18] J. Mohammadi, S. M. Taheri, Pedomodels fitting with fuzzy least squares regression, Iranian Journal of Fuzzy Systems, 1 (2) (2004) 45-61. http://ijfs.usb.ac.ir/article_505_0.html
[19] D. C. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Mathematical Programming, 45 (3) (1989) 503-528. https://doi.org/10.1007/BF01589116
[20] H. Tanaka, I. Hayashi, J. Watada, Possibilistic linear regression analysis for fuzzy data, European Journal of Operational Research, 40 (1989) 389-396. https://doi.org/10.1016/0377-2217(89)90431-1
[21] N. Y. Pehlivan, T. Paks, Ch. T. Chang, An alternative method for fuzzy regression: fuzzy radial basis function network, International Journal of Lean Thinking, (2010).