Journal of Fuzzy Set Valued Analysis 2017 No. 1 (2017) 38-49
Available online at www.ispacs.com/jfsva
Volume 2017, Issue 1, Year 2017, Article ID jfsva-00380, 12 Pages
doi:10.5899/2017/jfsva-00380

Research Article

Numerical Solution of Fuzzy Linear Regression using Fuzzy Neural Network Based on Probability Function

Somayeh Ezadi1*, Sahar Askari2

(1) Department of Applied Mathematics, Hamedan Branch, Islamic Azad University, Hamedan 65138, Iran
(2) Department of Mathematics, Farhangian University (Shahid Sadougi Pardise) of Kermanshah Branch, Kermanshah, Iran.

Copyright © 2017 Somayeh Ezadi and Sahar Askari. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this work, we consider the development of a fuzzy neural network based on a probability function for estimating the output of fuzzy regression models with real (crisp) inputs and fuzzy outputs. The proposed approach is a fuzzification of the outputs and weights of a conventional neural network based on a probability function. The total squared error of the proposed method is minimized by an optimization method in order to obtain the optimal weights of the neural network. The advantages of the proposed approach are its computational simplicity as well as its performance. To compare the performance of the proposed method with other traditional methods given in the literature, several numerical examples are presented.

Keywords: Fuzzy Numbers, Probability Function, Linear Regression, Neural Network, Error Analysis, Weights.

1 Introduction

Regression analysis is one of the most popular methods of estimation. It is applied to evaluate the functional relationship between dependent and independent variables. Fuzzy regression (FR) is a fuzzy counterpart of classical regression analysis in which some elements of the model are represented by fuzzy numbers. After Zadeh introduced the concept of fuzzy sets in 1965 [1, 2, 3], different researchers developed regression analysis accordingly. Fuzzy linear regression (FLR) was first suggested by Tanaka [4]; it is an extension of classical regression analysis that has become a powerful tool for discovering ambiguous relationships [5]. Indeed, in fuzzy regression, some of the elements of the regression model are presented with ambiguous information. Different methods have been presented for solving these types of problems [6-10]. They are divided into two approaches: linear programming (LP) based methods and fuzzy least squares (FLS) methods [11]. Neural networks (NN) can be valuable when the functional

* Corresponding Author. Email address: [email protected], Tel: +989183565466


relationship between dependent and independent variables is not known. Probabilistic basis function (PBF) neural networks form one of the essential types of neural networks. In this paper, we propose a simple but powerful method for fuzzy regression analysis using a PBF neural network; the proposed method employs a PBF neural network, which has higher flexibility and a wider application field than the existing LP-based and FLS fuzzy regression methods [12-16]. The effectiveness of our method is demonstrated by three examples and computational experience. The present article consists of the following sections: In Section 2, the basic required concepts are stated. In Section 3, the proposed model for solving the FLR equations is introduced. In Section 4, the error of the recommended method is analyzed. In Section 5, the method of calculating the weights of the PBF neural network is described. In Section 6, numerical examples are presented. In Section 7, the conclusion is presented, followed by the references.

2 Preliminaries and notations

Definition 2.1. The parametric form of a fuzzy number is an ordered pair of functions (a_1(α), b_1(α)), 0 ≤ α ≤ 1, satisfying the following conditions: a_1(α) is a bounded increasing function, right-continuous on the interval [0, 1]; b_1(α) is a bounded decreasing function, left-continuous on the interval [0, 1].

Definition 2.2. If A and B are fuzzy numbers with [A]_α = (a_1(α), a_2(α)), [B]_α = (b_1(α), b_2(α)) and α ∈ [0, 1], then the fuzzy operations between them are defined as follows [17]:

[A + B]_α = (a_1(α) + b_1(α), a_2(α) + b_2(α)),
[-A]_α = (-a_2(α), -a_1(α)),
[A - B]_α = (a_1(α) - b_2(α), a_2(α) - b_1(α)),
[λA]_α = (λ a_1(α), λ a_2(α)),  λ ≥ 0,
[λA]_α = (λ a_2(α), λ a_1(α)),  λ < 0.

Definition 2.3.
The distance between two fuzzy numbers A and B, based on a weight function f(α), is defined as:

d(A, B) = ( ∫_0^1 f(α) d²(A_α, B_α) dα )^(1/2),

where

d²(A_α, B_α) = (a_1(α) - b_1(α))² + (a_2(α) - b_2(α))².

Here A_α = (a_1(α), a_2(α)) and B_α = (b_1(α), b_2(α)) are the α-cuts of A and B, respectively, and f(α) is an increasing function on the interval [0, 1] with f(0) = 0 and ∫_0^1 f(α) dα = 1/2. The quantity d(A_α, B_α) measures the distance between the α-cuts of A and B, and f(α) can be interpreted as the weight of d²(A_α, B_α).

Definition 2.4. Let A be a triangular fuzzy number in LR form (a, s_a), where a is the center and s_a is the width. We write A = (a, s_a^L, s_a^R)_T, where a is the center of A, and s_a^L and s_a^R are the left and right widths, respectively. In the special case s_a^L = s_a^R = s_a, A is called a symmetric fuzzy number and we write A = (a, s_a)_T.

Theorem 2.1. Suppose A = (a, s_a)_T and B = (b, s_b)_T are two symmetric triangular fuzzy numbers. Then, with respect to the weight function f(α) = α, we have

d²(A, B) = (a - b)² + (1/6)(s_a - s_b)².

Proof. See [18].
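As a quick check, the closed form of Theorem 2.1 can be verified numerically against Definition 2.3. The following is a minimal sketch (the function names are ours, assuming symmetric triangular α-cuts [a - (1 - α)s, a + (1 - α)s]):

```python
import numpy as np

# Squared distance between symmetric triangular fuzzy numbers A = (a, s_a)_T
# and B = (b, s_b)_T under the weight f(alpha) = alpha (Theorem 2.1).
def d2_closed(a, sa, b, sb):
    return (a - b) ** 2 + (sa - sb) ** 2 / 6.0

# The same quantity from Definition 2.3, integrating over the alpha-cuts
# of the two symmetric triangular fuzzy numbers (trapezoid rule).
def d2_numeric(a, sa, b, sb, n=20001):
    alpha = np.linspace(0.0, 1.0, n)
    a1, a2 = a - (1 - alpha) * sa, a + (1 - alpha) * sa
    b1, b2 = b - (1 - alpha) * sb, b + (1 - alpha) * sb
    integrand = alpha * ((a1 - b1) ** 2 + (a2 - b2) ** 2)
    h = alpha[1] - alpha[0]
    return h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
```

For instance, `d2_closed(8, 1.8, 6.9, 1.9)` and `d2_numeric(8, 1.8, 6.9, 1.9)` agree to several decimal places.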

International Scientific Publications and Consulting Services


Definition 2.5. (MLP network training) To train feed-forward neural networks, the error back-propagation (BP) rule is used, which is based on the error-correction learning rule. To compute the sensitivities of the different layers of neurons in an MLP network, the derivatives of the neuron transfer functions are required, so differentiable transfer functions are used. One of these is the linear function, whose characteristics were explained in the previous section. The error function is described in the following sections.

Definition 2.6. (BFGS technique) To minimize an unconstrained optimization problem, minimization techniques such as the steepest descent method, the conjugate gradient method, or quasi-Newton methods can be employed. Newton's method is one of the important algorithms in nonlinear optimization; its main disadvantage is that it requires evaluating the matrix of second derivatives (the Hessian matrix). Quasi-Newton methods were originally proposed by Davidon in 1959 and were later developed by Fletcher and Powell (1963). The most fundamental idea in quasi-Newton methods is the requirement to calculate only an approximation of the Hessian matrix. Here the quasi-Newton BFGS (Broyden-Fletcher-Goldfarb-Shanno) method is used; it is quadratically convergent (see [19]).

3 Main section

Consider the general model of FLR:

(y_i, x_i1, x_i2, …, x_in),  x_ij ∈ R,  i = 1, …, m,  j = 1, …, n,
y = A_0 + A_1 x_1 + ⋯ + A_n x_n. (3.1)

The aim is to obtain an optimal model with fuzzy coefficients for describing and analyzing the data and predicting from it, where the x_ij are real numbers and the y_i are fuzzy. To estimate the regression under these conditions, we define the proposed method as

y_T = Net, (3.2)

where y_T is the proposed solution and Net is a feed-forward artificial neural network consisting of two layers.
The first layer is the input layer and the second layer is the output layer with a linear transfer function, introduced in the following form:

Net = w_0 + w_1 o_i1 + ⋯ + w_n o_in,  i = 1, …, m,  j = 1, …, n, (3.3)

where the w_j are the artificial neural network weights and the o_ij are the inputs of the neural network, corresponding to x. Now suppose the proposed method uses the density function of the problem, f(X) = λ e^(-λX), α ≤ λ ≤ β. The neural network of relation (3.3) is fuzzy-valued; that is, we have fuzzy weights for real observations, so equation (3.3) can be written as:

Net = w_0 + Σ_{i=1}^{n} w_i o_i1,  s.t. w_i is fuzzy. (3.4)

We suppose Net = (Net_A1, Net_A2), where Net_A1 is the center and Net_A2 is the fuzzy width of the network Net, so relation (3.4) can be rewritten as:

Net = (Net_A1, Net_A2) = (w_0A1, w_0A2) + (w_1A1, w_1A2) o_i1 + ⋯ (3.5)

where Net_A1 and Net_A2 have the form:

Net_A1 = w_0A1 + w_1A1 o_i1 + ⋯,
Net_A2 = |w_0A2| + |w_1A2| o_i1 + ⋯ (3.6)
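Equations (3.5)-(3.6) amount to a linear layer applied separately to the centers and the widths. A minimal sketch, assuming a single crisp input per observation, non-negative inputs, and our own function name:

```python
import numpy as np

# Forward pass of the fuzzy network of Eqs. (3.5)-(3.6): each weight is a
# (center, width) pair; widths enter through absolute values, so the output
# width stays non-negative for non-negative inputs.
def net_forward(x, w0, w1):
    center = w0[0] + w1[0] * x             # Net_A1 of Eq. (3.6)
    width = abs(w0[1]) + abs(w1[1]) * x    # Net_A2 of Eq. (3.6)
    return center, width

# Illustration with the weights reported later for Example 6.1.
centers, widths = net_forward(np.array([1.0, 2.0, 3.0]), (5.60, 1.79), (1.34, 0.17))
```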

Now, we should find the four weights of relation (3.5) that satisfy the following condition:


The value of Net, which is approximately the estimated answer y_T, should be close to the main answer Y. To that end, we define the target function of the neural network as:

Min (1/6) [ (e^(-Net_A1 λ1) - e^(-a λ1))² + (e^(-Net_A2 λ2) - e^(-b λ2))² ] (3.7)

In general, for n observations we will have:

Min (1/6) Σ_{i=1}^{n} [ (e^(-Net_iA1 λ1) - e^(-a_i λ1))² + (e^(-Net_iA2 λ2) - e^(-b_i λ2))² ]

Here we assume Y = (a, b), where a is the center and b the fuzzy width of Y. So we have

p(a ≤ Y ≤ b) = ∫_a^b f(X) dX = ∫_a^b λ e^(-λX) dX = -e^(-λX) |_a^b = e^(-λa) - e^(-λb). (3.8)

For λ = (λ1, λ2) = (α, β), (3.8) becomes:

p(a ≤ Y ≤ b) = { c = e^(-λ1 a) - e^(-λ1 b),  λ1 = α;
                 d = e^(-λ2 a) - e^(-λ2 b),  λ2 = β. } (3.9)

The degree of membership for the above relation, denoted G(p), is:

G(p) = B(p(a ≤ Y ≤ b)) = { 1,  c ≤ e^(-λa) - e^(-λb) ≤ d;
                           0,  otherwise. }

And so

P(y is Net) = ∫_R μ_Net(u) P_y(u) du,

where μ_Net is the membership function of the fuzzy set Net, u is a value of y, P_y(u) is the probability density function of y, and P(y = u) is the probability function of y. Since we do not know the underlying probability distribution, it is clear from this information that the probability distribution is itself a fuzzy number. By minimizing equation (3.7), the four weights of the neural network, namely w_0A1, w_1A1, w_0A2 and w_1A2, are obtained. Substituting the obtained weights into equation (3.5) yields the values of Net_A1 and Net_A2, and eventually the value of Net in equation (3.4).

Figure 1: Fuzzy Neural Network Based on Probability Function.

4 Error Analysis

In this section, we study the error. The error ε_i in equation (3.2) is stated in the following form:

ε_i = |P(Y_i) - P(Net_i)|, (4.10)

where ε_i is a symmetric triangular fuzzy number. Thus:

M_i = Σ_{i=1}^{n} ε_i².

Now, we minimize the total squared error with respect to the distance d mentioned in the previous section, using w_i = u_i:

Min M_i = Min M(u_0, …, u_n).

Given the above, for both components of equation (3.4) we have:

M(u_0, …, u_n) = Σ_{i=1}^{m} d²(P(Net_i), P(Y_i)). (4.11)

The least-squares idea is to obtain the w_i by minimizing M_i, that is, the total squared error under the distance d; this is done using the MATLAB software, whose fminunc command is based on a quasi-Newton algorithm.

5 Weight Calculation Algorithm for the third state

In this section, the algorithm for calculating the neural network weights for the third state is studied; the first and second states resemble the third state. To that end, we first explain the method of calculating the u_i. Suppose Y_i and N_i are given as Y_i = (y_i, s_i)_T and N_i = (u_i, σ_i)_T, where y_i and u_i are the centers and s_i and σ_i the widths of the symmetric triangular fuzzy numbers Y_i and N_i, respectively. Given equation (3.6) we have:

N_i = (u_0 + u_1 o_i1 + ⋯ + u_n o_in, |σ_0| + |σ_1| o_i1 + ⋯ + |σ_n| o_in)_T, (5.12)

where

N_iOne = u_0 + u_1 o_i1 + ⋯ + u_n o_in, (5.13)
N_iTwo = |σ_0| + |σ_1| o_i1 + ⋯ + |σ_n| o_in. (5.14)

Given equations (5.13) and (5.14), relation (4.11) turns into the following form:

M(u_0, …, u_n) = (1/6) Σ_{i=1}^{m} [ (e^(Net_iOne) - e^(y_i))² + (e^(Net_iTwo) - e^(s_i))² ]. (5.15)

Now, equation (5.15) is minimized starting from the initial weights u_i = 0; the weights are updated in the direction in which the target (performance) function decreases, that is, opposite to its gradient. The algorithm used to calculate the weights in this article is a quasi-Newton (BFGS) algorithm [19]. The basic step in quasi-Newton methods is computed from the Newton formula:

u_ij(k+1) = u_ij(k) - A_1^(-1)(k) ∂F̂_1i(k)/∂u_ij(k),
σ_ij(k+1) = σ_ij(k) - A_1^(-1)(k) ∂F̂_1i(k)/∂σ_ij(k), (5.16)

where A_1(k) is the matrix of second derivatives of the performance function F̂_1(k) at the present values of the weights. Also

F̂_1(k) = (1/6) [ Σ_{i=1}^{m} ε_i1² + Σ_{i=1}^{m} ε_i2² ], (5.17)

where

ε_i1 = |e^(Net_iOne) - e^(y_i)|,  ε_i2 = |e^(Net_iTwo) - e^(s_i)|.
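The weight-fitting step can be sketched end-to-end on the data of Example 6.1 below. This is only a stand-in: a plain gradient-descent loop with a numerical gradient and backtracking replaces the quasi-Newton BFGS update of Eq. (5.16) (MATLAB's fminunc) used in the paper, the objective is the target function (3.7), and the variable names are ours:

```python
import math

# Example 6.1 data (Tanaka and Watada, 1989): crisp inputs, fuzzy outputs.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
YC = [8.0, 6.4, 9.5, 13.5, 13.0]   # centers y_i
YS = [1.8, 2.2, 2.6, 2.6, 2.4]     # widths s_i
L1, L2 = 0.2, 0.3                   # lambda = (0.2, 0.3), as in the examples

def loss(w):
    """Target function of Eq. (3.7), summed over the observations."""
    w0c, w1c, w0s, w1s = w
    total = 0.0
    for x, yc, ys in zip(X, YC, YS):
        c = w0c + w1c * x                  # Net_A1
        s = abs(w0s) + abs(w1s) * x        # Net_A2
        total += (math.exp(-L1 * c) - math.exp(-L1 * yc)) ** 2
        total += (math.exp(-L2 * s) - math.exp(-L2 * ys)) ** 2
    return total / 6.0

def grad(w, h=1e-6):
    """Forward-difference numerical gradient (stand-in for the BFGS step)."""
    f0 = loss(w)
    g = []
    for j in range(len(w)):
        wp = list(w)
        wp[j] += h
        g.append((loss(wp) - f0) / h)
    return g

w = [0.0, 0.0, 0.0, 0.0]   # training starts from zero weights
start = loss(w)
for _ in range(500):
    g = grad(w)
    step = 1.0
    # Backtracking line search keeps the loss monotonically non-increasing.
    while step > 1e-12 and loss([wi - step * gi for wi, gi in zip(w, g)]) > loss(w):
        step *= 0.5
    w = [wi - step * gi for wi, gi in zip(w, g)]
```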


In equations (5.16)-(5.17), k is the iteration number. The disadvantage of Newton's method is that it is too complex and computationally too expensive, and as a result it is not appropriate for neural networks. However, there is a class of algorithms based on Newton's method that does not require calculating the second derivatives, so its computational cost is lower. These are the quasi-Newton methods: they update an approximate Hessian matrix at each iteration, with the update computed as a function of the gradient. A quasi-Newton method that has been considerably successful is the BFGS method; this algorithm usually converges faster and in fewer iterations. Finally, after obtaining the weights u_i using the mentioned algorithm (that is, via the fminunc command), we substitute them into equation (5.12) and obtain relation (3.6). For the first and second states the same process is applied.

6 Numerical examples

Example 6.1. For fuzzy variables, consider the dependent variable Y and the independent real variable x_i, with the values given in Table 1 (the data in Table 1 are adopted from reference [20]).

Table 1: Crisp input - fuzzy output data set from (Tanaka and Watada, 1989)

i   X_i   Y_i = (Y_i, e_i, e_i)_T   Interval Y_i
1   1     (8, 1.8)                  (6.2, 9.8)
2   2     (6.4, 2.2)                (4.2, 8.6)
3   3     (9.5, 2.6)                (6.9, 12.1)
4   4     (13.5, 2.6)               (10.9, 16.1)
5   5     (13, 2.4)                 (10.6, 15.4)

Using these data, we develop an estimated fuzzy regression equation for Y_i via y_T = Net, with stopping condition k = 32 iterations of the learning algorithm. The training starts with w_0 = (0, 0), w_1 = (0, 0). The values of y_iT for the density function λe^(-λX) with λ = (0.2, 0.3) are shown in Table 2. The optimal weights of the neural network give:

Net = (5.60, 1.79) + (1.34, 0.17)x.

For y_T: fval = 0.012674214593777, first-order optimality = 2.346583642065525e-06, TT = 0.343202200000000.

Table 2: Crisp input - fuzzy output estimates

X_i   y_iT = (y_Ti, e_i, e_i)_T   Interval y_iT
1     (6.9, 1.9)                  (5, 8.8)
2     (8.3, 2.1)                  (6.2, 10.4)
3     (9.6, 2.3)                  (7.3, 11.9)
4     (10.9, 2.4)                 (8.5, 13.3)
5     (12.3, 2.6)                 (9.7, 14.9)
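Table 2 follows directly from the reported weights; a small check (the function name is ours, and small differences from the printed table come from rounding of the reported weights):

```python
# Fitted network of Example 6.1: Net = (5.60, 1.79) + (1.34, 0.17)x.
# The support interval of a symmetric triangular output is center +/- width.
def predict(x, w0=(5.60, 1.79), w1=(1.34, 0.17)):
    center = w0[0] + w1[0] * x
    width = w0[1] + w1[1] * x
    return center, width, (center - width, center + width)

c, w, interval = predict(3)   # approximately (9.62, 2.30, (7.32, 11.92))
```

At x = 3, Table 2 reports the rounded values (9.6, 2.3) with interval (7.3, 11.9).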


Table 3: Fuzzy estimates (Y^L, Y^U) and SSE values for different methods.

i    Tan           HBS           Pet             SP             FLS             FRBF[21]        PBF
1    (2.1, 9.8)    (6.2, 9.8)    (5.91, 8.00)    (3.52, 9.80)   (4.66, 8.66)    (4.99, 9.25)    (5, 8.8)
2    (4.2, 11.9)   (7.3, 11.2)   (7.68, 9.77)    (4.84, 11.90)  (6.21, 10.53)   (5.60, 9.44)    (6.2, 10.4)
3    (6.3, 14.0)   (8.4, 12.6)   (9.45, 11.54)   (6.16, 14.00)  (7.76, 12.40)   (7.73, 12.34)   (7.3, 11.9)
4    (8.4, 16.1)   (9.5, 14.0)   (11.22, 13.31)  (7.48, 16.10)  (9.31, 14.27)   (10.41, 15.42)  (8.5, 13.3)
5    (10.5, 18.2)  (10.6, 15.4)  (13.00, 15.09)  (8.80, 18.20)  (10.86, 16.14)  (11.04, 16.14)  (9.7, 14.9)
SSE  562.660       40.504        37.362          529.2607       18.758          6.874           3.818

The convergence of the neural network weights is displayed in Figure 2.

Figure 2: The convergence of neural network weights for Example 6.1.

Example 6.2. For fuzzy variables, consider the dependent variable Y and the independent real variable x_i, with the values given in Table 4 (the data in Table 4 are adopted from reference [16]).

Table 4: Crisp input - fuzzy output data set from (Diamond, 1988)

i   x_i   Y_i = (Y_i, e_i, e_i)_T
1   21    (4, 0.8)
2   15    (3, 0.3)
3   15    (3.5, 0.35)
4   4     (2, 0.4)
5   12    (3, 0.45)
6   18    (3.5, 0.7)
7   6     (2.5, 0.38)
8   12    (2.5, 0.5)

Using these data, we develop an estimated fuzzy regression equation for Y_i via y_T = Net, with stopping condition k = 30 iterations of the learning algorithm. The training starts with w_0 = (0, 0), w_1 = (0, 0). The values of y_iT for the density function λe^(-λX) with λ = (0.2, 0.3) are shown in Table 5. The optimal weights of the neural network give:

Net = (1.42, 0.16) + (0.11, 0.02)x.

Table 5: Crisp input - fuzzy output estimates

i   X_i   y_iT = (y_Ti, e_i, e_i)_T
1   21    (3.73, 0.58)
2   15    (3.07, 0.46)
3   15    (3.07, 0.46)
4   9     (2.41, 0.34)
5   12    (2.74, 0.4)
6   18    (3.40, 0.52)
7   6     (2.08, 0.28)
8   12    (2.74, 0.4)
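The Table 5 entries can be reproduced from the reported weights of Example 6.2 by rounding to two decimals (the function name is ours):

```python
# Fitted network of Example 6.2: Net = (1.42, 0.16) + (0.11, 0.02)x.
def predict(x):
    return 1.42 + 0.11 * x, 0.16 + 0.02 * x

# (center, width) pairs from Table 5, keyed by the crisp input x.
table5 = {21: (3.73, 0.58), 15: (3.07, 0.46), 9: (2.41, 0.34),
          12: (2.74, 0.40), 18: (3.40, 0.52), 6: (2.08, 0.28)}
checks = {x: tuple(round(v, 2) for v in predict(x)) for x in table5}
```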

Table 6: Fuzzy estimates (Y^L, Y^U) and SSE values for different methods.

i    Tan           HBS           Pet           PBF
1    (3.13, 4.80)  (3.20, 4.65)  (3.52, 4.16)  (3.15, 4.31)
2    (2.36, 4.03)  (2.70, 3.74)  (2.85, 3.49)  (2.61, 3.53)
3    (2.36, 4.03)  (2.70, 3.74)  (2.85, 3.49)  (2.61, 3.53)
4    (1.60, 3.26)  (2.20, 2.82)  (2.18, 2.83)  (2.07, 2.75)
5    (1.98, 3.64)  (2.45, 3.28)  (2.52, 3.16)  (2.34, 3.14)
6    (2.75, 4.41)  (2.95, 4.20)  (3.18, 3.83)  (2.88, 3.92)
7    (1.21, 2.88)  (1.95, 2.36)  (1.85, 2.49)  (1.8, 2.36)
8    (1.98, 3.64)  (2.45, 3.28)  (2.52, 3.16)  (2.34, 3.14)
SSE  45.027        3.270         2.194         1.661

The convergence of the neural network weights is displayed in Figure 3.


Figure 3: The convergence of neural network weights for Example 6.2.

Example 6.3. For fuzzy variables, consider the dependent variable Y and the independent real variable x_i, with the values given in Table 7 (the data in Table 7 are adopted from reference [18]). Using these data, we develop an estimated fuzzy regression equation for Y_i via y_T = Net, with stopping criterion k = 26 iterations of the learning algorithm. The training starts with w_0 = (0, 0), w_1 = (0, 0). The values of y_iT for the density function λe^(-λX) with λ = (0.2, 0.3) are shown in Table 7. The optimal weights of the neural network give:

Net = (4.41, 0.16) + (1.15, 0.54)x,

while the Mohammadi and Taheri method gives

y = (0.8265, 0.0827) + (6.5021, 0.6902)x.

For example, if x = 0.75, the predicted value of y according to [18] is Y = (5.70, 0.6), whereas the proposed method gives Net = (5.28, 0.57). For y_T: fval = 0.816651116226149, first-order optimality = 4.649162292480469e-06, SSE = 11.454565534132762, TT = 0.156001000000003. The convergence of the neural network weights is displayed in Figure 4.

Figure 4: The convergence of neural network weights for Example 6.3.
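The comparison at x = 0.75 in Example 6.3 can be reproduced from the two reported fits (the function names are ours; small differences from the printed values come from rounding of the reported weights):

```python
# Proposed fit of Example 6.3: Net = (4.41, 0.16) + (1.15, 0.54)x.
def pbf(x):
    return 4.41 + 1.15 * x, 0.16 + 0.54 * x

# Fuzzy least squares fit of Mohammadi and Taheri [18]:
# y = (0.8265, 0.0827) + (6.5021, 0.6902)x.
def mohammadi_taheri(x):
    return 0.8265 + 6.5021 * x, 0.0827 + 0.6902 * x

net_pred = pbf(0.75)              # close to the reported (5.28, 0.57)
ref_pred = mohammadi_taheri(0.75) # close to the reported (5.70, 0.6)
```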


Table 7: Crisp input - fuzzy output data set.

i    x_i    y_i = (Y_i, e_i, e_i)_T
1    0.78   (3.08, 0.31)
2    0.64   (2.86, 0.29)
3    0.62   (6.25, 0.63)
4    0.49   (4.11, 0.41)
5    1.10   (1.04, 0.10)
6    0.61   (2.71, 0.27)
7    0.74   (4.45, 0.45)
8    1.15   (6.92, 0.69)
9    1.08   (7.41, 0.74)
10   0.38   (9.08, 0.91)
11   0.61   (6.56, 0.66)
12   0.98   (5.05, 0.51)
13   0.71   (5.23, 0.52)
14   0.51   (5.16, 0.52)
15   0.77   (11.10, 1.11)
16   0.99   (4.47, 0.45)
17   3.56   (28.84, 2.88)
18   0.86   (9.43, 0.94)
19   0.61   (4.50, 0.45)
20   0.64   (9.30, 0.94)
21   0.71   (9.48, 0.95)
22   0.61   (3.65, 0.37)
23   0.63   (10.14, 1.01)
24   1.13   (3, 0.3)


7 Conclusion

In this article, we introduced a mathematical model for regression with fuzzy coefficients and a generalized neural system based on the probability function. We then calculated the FLR regression coefficients using an artificial neural network, an optimization technique, and the least squared error method based on the distance between two fuzzy numbers. The error of the method was studied and it was shown to be of fuzzy type. It was also shown that the neural network weights and the regression coefficients (FLR) converge.

References

[1] L. A. Zadeh, Fuzzy sets, Information and Control, 8 (1965) 338-353. https://doi.org/10.1016/S0019-9958(65)90241-X

[2] L. A. Zadeh, Fuzzy sets and information granularity, in: M. M. Gupta, R. K. Ragade, R. R. Yager (Eds.), Advances in Fuzzy Set Theory and Applications, North Holland, Amsterdam, (1979) 3-18.

[3] L. A. Zadeh, Fuzzy logic = computing with words, IEEE Trans. Fuzzy Systems, 4 (2) (1996) 103-111. https://doi.org/10.1109/91.493904

[4] H. Tanaka, Fuzzy data analysis by possibilistic linear models, Fuzzy Sets and Systems, 24 (1987) 363-375. https://doi.org/10.1016/0165-0114(87)90033-9

[5] R. Coppi, Management of uncertainty in statistical reasoning: the case of regression analysis, International Journal of Approximate Reasoning, 47 (3) (2008) 284-305. https://doi.org/10.1016/j.ijar.2007.05.011

[6] C. Kao, C. L. Chyu, Least-squares estimates in fuzzy regression analysis, European J. Oper. Res, 148 (2003) 426-435. https://doi.org/10.1016/S0377-2217(02)00423-X

[7] M. Modarres, E. Nasrabadi, M. M. Nasrabadi, Fuzzy linear regression models with least square errors, Appl. Math. Comput, 163 (2005) 977-989. https://doi.org/10.1016/j.amc.2004.05.004

[8] M. Mosleh, T. Allahviranloo, M. Otadi, Evaluation of fully fuzzy regression models by fuzzy neural network, Neural Computing and Applications, 21 (2012) 105-112. https://doi.org/10.1007/s00521-011-0698-z

[9] M. Mosleh, M. Otadi, S. Abbasbandy, Fuzzy polynomial regression with fuzzy neural networks, Applied Mathematical Modelling, 35 (2011) 5400-5412. https://doi.org/10.1016/j.apm.2011.04.039

[10] H. Tanaka, I. Hayashi, J. Watada, Possibilistic linear regression analysis for fuzzy data, European J. Oper. Res, 40 (1989) 389-396. https://doi.org/10.1016/0377-2217(89)90431-1


[11] M. S. Yang, T. S. Lin, Fuzzy least squares linear regression analysis for fuzzy input-output data, Fuzzy Sets and Systems, 126 (2002) 389-399. https://doi.org/10.1016/S0165-0114(01)00066-5

[12] H. Tanaka, J. Watada, Possibilistic linear systems and their application to the linear regression model, Fuzzy Sets and Systems, 27 (1989) 275-289. https://doi.org/10.1016/0165-0114(88)90054-1

[13] M. Hojati, C. R. Bector, K. Smimou, A simple method of fuzzy linear regression, European Journal of Operational Research, 166 (2005) 172-184. https://doi.org/10.1016/j.ejor.2004.01.039

[14] G. Peters, Fuzzy linear regression with fuzzy intervals, Fuzzy Sets and Systems, 63 (1994) 45-55. https://doi.org/10.1016/0165-0114(94)90144-9

[15] D. A. Savic, W. Pedrycz, Evaluation of fuzzy linear regression models, Fuzzy Sets and Systems, 39 (1991) 51-63. https://doi.org/10.1016/0165-0114(91)90065-X

[16] P. Diamond, Fuzzy least squares, Information Sciences, 46 (1988) 141-157. https://doi.org/10.1016/0020-0255(88)90047-3

[17] H. J. Zimmermann, Fuzzy Set Theory and Its Applications, Kluwer Academic, Boston, (1991). https://doi.org/10.1007/978-94-015-7949-0

[18] J. Mohammadi, S. M. Taheri, Pedomodels fitting with fuzzy least squares regression, Iranian J. Fuzzy Systems, 1 (2) (2004) 45-61. http://ijfs.usb.ac.ir/article_505_0.html

[19] D. C. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Mathematical Programming, 45 (3) (1989) 503-528. https://doi.org/10.1007/BF01589116

[20] H. Tanaka, I. Hayashi, J. Watada, Possibilistic linear regression analysis for fuzzy data, European J. Oper. Res, 40 (1989) 389-396. https://doi.org/10.1016/0377-2217(89)90431-1

[21] N. Y. Pehlivan, T. Paks, Ch. T. Chang, An alternative method for fuzzy regression: fuzzy radial basis function network, International Journal of Lean Thinking, (2010).
