Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2013, Article ID 745314, 10 pages, http://dx.doi.org/10.1155/2013/745314

Research Article

Design of Polynomial Fuzzy Radial Basis Function Neural Networks Based on Nonsymmetric Fuzzy Clustering and Parallel Optimization

Wei Huang and Jinsong Wang

School of Computer and Communication Engineering, Tianjin University of Technology, Tianjin 300384, China

Correspondence should be addressed to Jinsong Wang; [email protected]

Received 20 April 2013; Revised 24 September 2013; Accepted 8 October 2013

Academic Editor: Jianming Zhan

Copyright © 2013 W. Huang and J. Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We first propose a Parallel Space Search Algorithm (PSSA) and then introduce a design of Polynomial Fuzzy Radial Basis Function Neural Networks (PFRBFNN) based on the Nonsymmetric Fuzzy Clustering Method (NSFCM) and the PSSA. The PSSA is a parallel optimization algorithm realized by using a Hierarchical Fair Competition (HFC) strategy. NSFCM is essentially an improved fuzzy clustering method whose good performance in the design of "conventional" Radial Basis Function Neural Networks (RBFNN) has already been demonstrated. In the design of the PFRBFNN, NSFCM is used to build the premise part, while the consequence part is realized by means of the weighted least squares (WLS) method. Furthermore, the HFC-PSSA is exploited to optimize the proposed neural network. Experimental results demonstrate that the proposed neural network leads to better performance in comparison with some existing neurofuzzy models encountered in the literature.

1. Introduction

With their learning and generalization abilities, Fuzzy Radial Basis Function Neural Networks (FRBFNN) have been utilized in numerous fields of engineering, medical engineering, and social science [1, 2]. They are developed by integrating the principles of Radial Basis Function Neural Networks (RBFNN) and invoking the mechanisms of information granulation [3]. In the design of classical FRBFNN, information granulation realized by Fuzzy C-Means (FCM) clustering is applied to the premise part, while the consequence part (output) is treated as a linear combination of zero-order polynomials; with the use of FCM, the centers of the clusters are determined and the membership functions of the granules can be formed. A visible advantage of the FRBFNN is that it does not suffer from the curse of dimensionality that eminently appears in other networks based on grid partitioning [3]. When dealing with the FRBFNN, it is required to estimate the parameters based on a vast amount of data. As powerful optimization tools in many fields of science [4-6], various evolutionary algorithms have also been proposed to improve the accuracy of models.

Recently, Polynomial Fuzzy Radial Basis Function Neural Networks (PFRBFNN) [3] were proposed. The PFRBFNN adopts four polynomial types, which go beyond the zero-order polynomials of the FRBFNN: the output of the conventional FRBFNN is considered as a linear combination of the four types of polynomials. In spite of the successful construction of the PFRBFNN, the following two limitations remain: (1) the FCM selects the hidden node centers by partitioning the input space into an equal number of fuzzy sets for each input variable, and a modification of the original method that takes into account nonsymmetric fuzzy partitions of the input space has not been considered [7, 8]; (2) most of the optimization algorithms used in the design of FRBFNN are not parallel. To alleviate these limitations, in this study we propose a parallel space search algorithm (PSSA) and introduce a design of PFRBFNN with the aid of the HFC-PSSA and the Nonsymmetric Fuzzy Clustering Method (NSFCM). On the one hand, with the use of nonsymmetric fuzzy partitions of the input space, information granulation [24-27] realized by NSFCM may overcome the first limitation. On the other hand, with the use of the hierarchical fair competition strategy, the proposed PSSA may help overcome the second limitation. The overall methodology of the PFRBFNN is as follows: first, NSFCM is utilized to determine the premise part of the PFRBFNN, while the coefficients of the consequence polynomials are estimated by the weighted least squares (WLS) method; second, the HFC-PSSA is exploited to optimize the proposed PFRBFNN.

The structure of the paper is organized as follows. Section 2 presents the HFC-PSSA. Section 3 describes the architecture and learning methods of the PFRBFNN as well as its optimization. Section 4 reports on the experimental results. Finally, conclusions are drawn in Section 5.

2. Hierarchical Fair Competition-Based Parallel Space Search Algorithm

First, let us recall the space search algorithm (SSA), an adaptive heuristic optimization algorithm whose search method is based on an analysis of the solution space [24]. The SSA generates new solutions from old ones by using so-called space search operators, which involve two successive steps: first, a new subspace (local area) is generated, and then a search is carried out in this new subspace. The search in the new space is realized by randomly generating a new solution (individual) located in this space, while the generation of the new space covers two cases [24]: (a) space search based on M selected solutions (denoted here as Case I) and (b) space search based on the current best solution (Case II). For convenience, we consider the following optimization problem:

min (or max)  f(x_1, x_2, ..., x_n)
s.t.  x_i ∈ [l_i, u_i],  i = 1, 2, 3, ..., n.  (1)

Here a feasible solution is denoted as (x_1, x_2, ..., x_n). Suppose that X^k = (x_1^k, x_2^k, ..., x_n^k) is the kth solution, L_i = min_{j=1,...,M} x_i^j, and U_i = max_{j=1,...,M} x_i^j. Two scenarios are considered as follows [24].

(a) Space search based on M selected solutions: in this case, M solutions are randomly selected from the current population. The role of this operator is to update the current solutions with new solutions approaching the optimum. The adjacent space based on the M solutions is given in the form

V_1 = { X^new ∈ S | x_i^new = Σ_{k=1}^{M} a_k x_i^k,  where Σ_{k=1}^{M} a_k = 1,  −1 ≤ a_k ≤ 2 }.  (2)

(b) Space search based on the current best solution: in this case, the given solution is the best solution in the current population. The expression that generates a new solution is as follows:

V_2 = { (x_1^new, x_2^new, ..., x_n^new) | x_j^new = x_j for j ≠ i,  x_i^new ∈ [l_i, u_i] }.  (3)
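The two space search operators can be sketched in Python as follows. The helper names `case_one` and `case_two` are hypothetical, and drawing the coefficients a_k in [−1, 2] and rescaling them to sum to 1 is one simple way to realize the constraints in (2), not necessarily the authors' exact procedure.

```python
import random

def case_one(population, M, bounds):
    """Case I: search the subspace spanned by M randomly selected solutions.

    A new solution is an affine combination of the M picks: the coefficients
    a_k are drawn in [-1, 2] and rescaled to sum to 1, as required by (2).
    (A production version might re-draw until the rescaled coefficients also
    stay inside [-1, 2].)
    """
    picks = random.sample(population, M)
    a = [random.uniform(-1.0, 2.0) for _ in range(M)]
    s = sum(a) or 1.0          # guard against an exactly zero sum
    a = [ak / s for ak in a]   # now sum(a) == 1
    n = len(bounds)
    new = [sum(ak * x[i] for ak, x in zip(a, picks)) for i in range(n)]
    # Clip back into the feasible box [l_i, u_i].
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(new, bounds)]

def case_two(best, bounds):
    """Case II: re-sample one randomly chosen coordinate of the best solution,
    as in (3)."""
    i = random.randrange(len(best))
    new = list(best)
    new[i] = random.uniform(*bounds[i])
    return new
```

Both operators keep the solution inside the feasible box of (1); the clipping step is an extra safeguard that the affine combination cannot leave it.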

In the HFC-PSSA, a migration mechanism executed at regular generation intervals is included in the evolutionary process. To explain the details of the migration operation, let us consider the following sequence of steps [28-30].

Step 1. Normalize the fitness of the individuals. We generate several subpopulations according to the idea of hierarchical fair competition and then normalize the fitness of the individuals in each subpopulation using the expression

nf_{j,i} = (f_{j,i} − f_min) / (f_max − f_min),  (4)

where f_{j,i} is the fitness of the jth individual of the ith subpopulation and f_max and f_min are the maximum and minimum values of fitness, respectively.

Step 2. Calculate the admission threshold AL_i. The admission threshold for the ith subpopulation is determined by the average of the normalized fitness:

AL_i = (1/n_i) Σ_{j=1}^{n_i} nf_{j,i},  (5)

where n_i is the size of the ith subpopulation.

Step 3. Create an admission buffer located at each admission threshold level.

Step 4. Migrate the qualified individuals. Here the individuals are migrated from the admission buffer to the corresponding subpopulation.

Algorithm 1 summarizes the flow of computing of the HFC-PSSA. The termination condition is that all solutions in the current population have the same fitness, or that a fixed number of generations has been reached.

BEGIN
  Initialize the solution set (population)
  Evaluate each solution in the solution set
  Sort all current solutions in the solution set
  While {the termination conditions are not met}
    Implement the migration operation and generate a new solution set S'
    Select solutions from S'
    Search the solution space (Case I)
    Search the solution space (Case II)
    Sort all current solutions in S'
  End while
  Report the optimal solution
END

Algorithm 1: The flow of computing the HFC-PSSA.

3. A New Design of the PFRBFNN

In the design of PFRBFNN, fuzzy clustering may be used to determine the number of RBFs as well as the positions of their centers and the values of their widths [31]. The gradient method or the least squares algorithm is then used to realize parametric learning for the conclusions of the rules [32, 33]. In contrast, the proposed PFRBFNN is designed as shown in Figure 1. There are two main new points in this design. First, in the construction of the PFRBFNN, NSFCM instead of FCM is used to realize the radial basis functions. Second, in the optimization of the PFRBFNN, we use the HFC-PSSA as the optimization vehicle.

Figure 1: An overall design of PFRBFNN. (Construction of PFRBFNN: Step 1, realize radial basis functions by means of NSFM; Step 2, construct the consequence polynomial using information granulation; Step 3, learn the consequence part with the aid of WLS. Optimization of PFRBFNN: Step 4, optimize the PFRBFNN using HFC-PSSA.)

3.1. Realization of Radial Basis Functions Using Nonsymmetric Fuzzy Means. In the PFRBFNN, the receptive fields of the radial basis functions are formed by the nonsymmetric fuzzy means (NSFM) method [7, 8], whose main idea is as follows. Consider a system with N normalized input variables x_i, i = 1, 2, ..., N. The domain of each input variable is partitioned into c one-dimensional triangular fuzzy sets, each of which can be written as

A_{ij} = {a_{ij}, δa},  i = 1, ..., N,  j = 1, ..., c,  (6)

where a_{ij} is the center element of fuzzy set A_{ij} and δa is half of the respective width. This partitioning technique creates a total of c^N multidimensional fuzzy subspaces A^l, l = 1, 2, ..., c^N. One can then define the center vector a^l and the side vector of each fuzzy subspace:

A^l = {a^l, δa} = {[a^l_{1j_1}, a^l_{2j_2}, ..., a^l_{Nj_N}], [δa, δa, ..., δa]},  l = 1, ..., c^N,  (7)

where a^l_{ij_i} is the center element of the one-dimensional fuzzy set A_{ij_i} that has been assigned to input i. In this sense, the conventional fuzzy means method can be regarded as "symmetric" fuzzy means [7], and the Euclidean relative distance d^l(x(k)) between A^l and the input data vector x(k) can be represented as

d^l(x(k)) = sqrt( Σ_{i=1}^{N} (a^l_{ij_i} − x_i(k))^2 ) / ( sqrt(N) δa ).  (8)

In the nonsymmetric fuzzy means method, where each input variable is allowed its own half-width δa_i, the distance becomes

d^l(x(k)) = sqrt( Σ_{i=1}^{N} (a^l_{ij_i} − x_i(k))^2 / ( N (δa_i)^2 ) ).  (9)

For the details of NSFM, one can refer to [8].

3.2. Construction of Consequence Polynomials Using Information Granulation. The PFRBFNN [3] based on information granulation [25-27] can be represented in the form of "if-then" fuzzy rules:

R^i: IF x_k is included in cluster A_i THEN y_k^i − M_i = f_i(x_k, v_i),  (10)

where R^i is the ith fuzzy rule, i = 1, ..., n, n is the number of fuzzy rules (the number of clusters), and f_i(x_k, v_i) is the consequent polynomial of the ith fuzzy rule, that is,


a local model representing the input-output relationship of the ith subspace (local area); w_{ik} is the degree of membership (i.e., the activation level) of the ith local model, while v_i = [v_{i1} v_{i2} ... v_{il}]^T is the ith prototype. In the design of PFRBFNN, four types of polynomials are considered as the consequent part of the fuzzy rules. One of the four types is selected for each subspace as the result of the optimization, which will be described later in this study. It is noted that the PFRBFNN using IG does not suffer from the curse of dimensionality (as all variables are considered en bloc) [3]; more accurate and compact models with a small number of fuzzy rules may be constructed by using high-order polynomials. The four types of consequent polynomials are as follows [24-27].

Type 1: zero-order polynomial (constant type):

f_i(x_{k1}, ..., x_{kl}, v_i) = a_{i0}.  (11)

Type 2: first-order polynomial (linear type):

f_i(x_{k1}, ..., x_{kl}, v_i) = a_{i0} + a_{i1}(x_{k1} − v_{i1}) + a_{i2}(x_{k2} − v_{i2}) + ⋯ + a_{il}(x_{kl} − v_{il}).  (12)

Type 3: second-order polynomial (quadratic type):

f_i(x_{k1}, ..., x_{kl}, v_i) = a_{i0} + a_{i1}(x_{k1} − v_{i1}) + a_{i2}(x_{k2} − v_{i2}) + ⋯ + a_{il}(x_{kl} − v_{il}) + a_{i(l+1)}(x_{k1} − v_{i1})^2 + a_{i(l+2)}(x_{k2} − v_{i2})^2 + ⋯ + a_{i(2l)}(x_{kl} − v_{il})^2 + a_{i(2l+1)}(x_{k1} − v_{i1})(x_{k2} − v_{i2}) + ⋯ + a_{i((l+1)(l+2)/2)}(x_{k(l−1)} − v_{i(l−1)})(x_{kl} − v_{il}).  (13)

Type 4: modified second-order polynomial (modified quadratic type):

f_i(x_{k1}, ..., x_{kl}, v_i) = a_{i0} + a_{i1}(x_{k1} − v_{i1}) + a_{i2}(x_{k2} − v_{i2}) + ⋯ + a_{il}(x_{kl} − v_{il}) + a_{i(l+1)}(x_{k1} − v_{i1})(x_{k2} − v_{i2}) + ⋯ + a_{i(l(l+1)/2)}(x_{k(l−1)} − v_{i(l−1)})(x_{kl} − v_{il}).  (14)

The numeric output of the model, based on the activation levels of the rules, is determined in the form

ŷ_k = Σ_{i=1}^{n} w_{ik} f_i(x_{k1}, ..., x_{kl}, v_i).  (15)

3.3. Learning of the Consequent Part Using Weighted Least Squares. To determine the coefficients of the model, we use the weighted least squares (WLS) method. The objective function J_L to be minimized is

J_L = Σ_{i=1}^{n} Σ_{k=1}^{m} w_{ik} (y_k − f_i(x_k, v_i))^2.  (16)

For the local learning algorithm [34], the objective function is defined as a linear combination of the squared errors between the data and the corresponding output of each fuzzy rule, based on a weighting factor matrix. The weighting factor matrix W_i captures the activation levels of the input data with respect to the ith subspace; in this sense, it can be considered a discrete version of the fuzzy linguistic representation of the corresponding subspace. It is clear that J_L can be rearranged as [34]

J_L = Σ_{i=1}^{n} (Y − X_i a_i)^T W_i (Y − X_i a_i) = Σ_{i=1}^{n} (W_i^{1/2} Y − W_i^{1/2} X_i a_i)^T (W_i^{1/2} Y − W_i^{1/2} X_i a_i),  (17)

where a_i is the vector of coefficients of the ith consequent polynomial (local model), W_i is the diagonal matrix (weighting factor matrix) of activation levels, and X_i is a matrix that includes the input data shifted by the locations of the information granules (more specifically, the centers of the clusters). For example, if the consequent polynomial is of type 2 (linear, i.e., first-order), we have

W_i = diag(w_{i1}, w_{i2}, ..., w_{im}) ∈ R^{m×m},

X_i = [ 1  (x_{11} − v_{i1})  ⋯  (x_{l1} − v_{il});
        1  (x_{12} − v_{i1})  ⋯  (x_{l2} − v_{il});
        ⋮       ⋮         ⋱       ⋮
        1  (x_{1m} − v_{i1})  ⋯  (x_{lm} − v_{il}) ],

a_i = [a_{i0} a_{i1} ... a_{il}]^T.  (18)

For the ith fuzzy rule, the coefficients of the polynomial in the consequent part can then be written in the usual manner, namely,

a_i = (X_i^T W_i X_i)^{−1} X_i^T W_i Y.  (19)

Notice that the coefficients of the polynomial in the consequent part of each fuzzy rule are calculated independently, by means of a certain subset of the training data.

3.4. Optimization of the PFRBFNN Using HFC-PSSA. In the HFC-PSSA, a solution is represented as a vector comprising the fuzzification coefficient, the number of input variables, the input variables to be selected, the number of fuzzy rules,

and the polynomial type. The length of the solution vector corresponds to the maximal number of fuzzy rules to be considered in the optimization. Figure 2 offers an interpretation of the content of the particle in case the upper bound of the search space of the fuzzy rules is set to 6. As the number of rules and the polynomial orders (in the consequent part) have to be integers, these values are rounded off to the nearest integer. In the example, the fuzzification coefficient equals 1.03, four input variables are selected, and the number of rules is six, with the orders of the polynomials of the six rules being [2 3 1 4 2 3], that is, a mix of linear, quadratic, constant, and modified quadratic local models.

Figure 2: Solution composition of HFC-PSSA and its interpretation. (Interpretation of the chromosome: 1. fuzzification coefficient: 1.03; 2. number of input variables used in the fuzzy model: 4; 3. selected input variables: [1 6 2 4]; 4. number of fuzzy rules: 6; 5. order of the polynomials: [2 3 1 4 2 3].)

Here we consider two performance indexes, namely, the standard root mean squared error (RMSE) and the mean squared error (MSE) [24]:

PI (or E_PI) = sqrt( (1/m) Σ_{i=1}^{m} (y_i − y_i*)^2 )  (RMSE), or
PI (or E_PI) = (1/m) Σ_{i=1}^{m} (y_i − y_i*)^2  (MSE),  (20)

where y* is the output of the fuzzy model, m is the total number of data, and i is the data index. The accuracy criterion MPI [24] involves both the training data and the testing data and comes as a convex combination of the two components:

MPI = θ × PI + (1 − θ) × E_PI,  (21)

where PI and E_PI denote the performance index for the training data and the testing data, respectively, and θ is a weighting factor that allows us to strike a sound balance between the performance of the model on the training and the testing data.

4. Experimental Study

To demonstrate the effectiveness of the proposed approach, we carried out experiments on numerical examples. In all experiments, we set θ = 0.5. The proposed HFC-PSSA serves as the optimization vehicle in the design of the PFRBFNN. Table 1 summarizes the parameters and the boundaries of the search space of the HFC-PSSA.

Table 1: List of the parameters of the HFC-PSSA and boundaries of the search space.

HFC-PSSA parameters:
Number of generations: 200
Number of subpopulations: 6
Number of subsolutions: [40, 40, 40, 40, 40, 40]
Migration interval: 25 generations
Number of solutions for search space (Case I): 8

Boundaries of the search space of decision variables:
Fuzzification coefficient: 1.01-5
Number of input variables*: 2-10
Number of rules: 2-20
Order of polynomials of each fuzzy rule: 1-4 (type 1-type 4)

*If the number of input variables in the dataset is smaller than 10, all input variables are used.
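Decoding a real-coded solution vector into a model configuration (cf. Figure 2) can be illustrated as follows. The exact layout of the vector is an assumption, and `decode_solution` is a hypothetical helper; only the rounding of integer-valued entries is taken from the text.

```python
def decode_solution(vec):
    """Decode a real-coded HFC-PSSA solution vector (cf. Figure 2).

    Assumed layout: [fuzzification coefficient, number of selected inputs,
    the input indices, number of fuzzy rules, polynomial order per rule].
    Counts, indices, and orders are rounded to the nearest integer, as the
    text prescribes for integer-valued entries.
    """
    fuzziness = vec[0]
    n_inputs = int(round(vec[1]))
    inputs = [int(round(v)) for v in vec[2:2 + n_inputs]]
    n_rules = int(round(vec[2 + n_inputs]))
    start = 3 + n_inputs
    orders = [int(round(v)) for v in vec[start:start + n_rules]]
    return {
        "fuzzification_coefficient": fuzziness,
        "selected_inputs": inputs,
        "num_rules": n_rules,
        "polynomial_types": orders,  # values 1..4 select type 1..type 4
    }
```

Applied to the example of Figure 2, a vector beginning with 1.03 and 4 would yield the selected inputs [1 6 2 4], six rules, and the polynomial orders [2 3 1 4 2 3].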

4.1. Sewage Treatment Process (STP). The first well-known dataset comes from a sewage treatment plant in Seoul, Republic of Korea. The proposed PFRBFNN is applied to the sewage treatment process data [11], which consist of 52 input-output pairs with four input variables (MLSS, WSR, RRSP, and DOSP). This dataset has been intensively studied in the previous literature [9-14]. Here the dataset is partitioned into two parts: the first 60% is selected as training data used for the construction of the fuzzy model, and the remaining 40% is the testing dataset, used to quantify the predictive quality of the model. The performance index is specified as the RMSE given by (20). The optimal network, consisting of four fuzzy rules of type 2, is obtained after 200 generations of the PSSA. The polynomials in the consequence part of the four rules are as follows:

R1: y = 0.817017 + 77.17488(x1 − 1604.615) + 14.82316(x2 − 2215),
R2: y = −31.5631 + 0.007825(x1 − 1604.615) + 0.010179(x2 − 1.152941),

Table 2: Comparative analysis of selected models (STP).

Model | PI | E_PI | No. of rules | Index
Neural Network [9] | 1.801 | 14.11 | | MSE
Fuzzy model [10] | 39.294 | 16.56 | 6 | MSE
Fuzzy model (min-max) [11] | 12.49 | 32.65 | 6 | MSE
Fuzzy model (HCM) [12] | 10.584 | 12.108 | 6 | MSE
SOPNN [13] | 5.365 | 4.852 | 5 | MSE
FRFNN [14] | 6.837 | 8.871 | 3 | MSE
Our model (without the use of PSSA) | 2.14 | 4.52 | 6 | RMSE
Our model (with the use of PSSA) | 2.07 | 4.20 | 6 | RMSE

Table 3: Results of comparative analysis (MIS).

Model | PI | E_PI | Index
SONFN [15] (simplified) | 40.375 | 17.898 | MSE
SONFN [15] (linear) | 35.745 | 17.807 | MSE
FPNN [16] | 12.65 | 21.03 | MSE
GA-based FSONN [17] (SI = 2) | 32.195 | 18.642 | MSE
GA-based FSONN [17] (SI = 3) | 32.251 | 19.622 | MSE
PN-based FSONN [17] | 18.043 | 11.898 | MSE
FPN-based FSONN [17] | 23.739 | 9.090 | MSE
Incremental model [18] (linear regression) | 5.877 | 6.570 | RMSE
Incremental model [18] | 4.620 | 6.624 | RMSE
GO-FPNN model [19] (SI = 2) | 2.2141 | 3.4630 | RMSE
GO-FPNN model [19] (SI = 3) | 0.8852 | 3.4690 | RMSE
GO-FPNN model [19] (SI = 4) | 0.7818 | 3.4211 | RMSE
Our model (without the use of PSSA) | 0.908 | 1.68 | RMSE
Our model (with the use of PSSA) | 0.896 | 1.65 | RMSE

R3: y = 0.017146 + 0.075889(x1 − 0.567143) − 1.77867(x2 − 2215),
R4: y = −20.7821 + 12.91189(x1 − 0.567143) + 1.070568(x2 − 1.152941).  (22)

Figure 3 illustrates the resulting values of the performance index when running the PFRBFNN based on the HFC-PSSA. As could be expected, the accuracy of the PFRBFNN improves as the total number of generations increases. The performance of the proposed model is compared with some other models available in the literature; refer to Table 2. The local models of the other networks all use the same type of fuzzy rule, such as the constant or the linear form, whereas the proposed model can mix different types of local models. In this comparison, the proposed model, with a small number of rules, shows better accuracy, at the price of a larger number of local-model coefficients when the quadratic form is selected.

4.2. Medical Imaging System (MIS). The second dataset is a medical imaging system dataset that involves 390 software modules written in Pascal and FORTRAN; each module is described by 11 input variables [15-19]. Applying the proposed design methodology, the given dataset is randomly partitioned to produce two datasets [15-17]: the first 60% of the dataset is used for training the models, while the remaining 40%, the testing dataset, serves to quantify the predictive quality (generalization ability) of the fitted models. We consider the RMSE (20) as the performance index. After running 200 generations of PSSA, we obtain the optimal network consisting of four fuzzy rules of type 2. The polynomials standing in the consequence part of these four fuzzy rules are as follows:

R1: y = −0.01809 − 0.25241(x1 − 7.173333) + 0.005256(x2 − 22.03571),


R2: y = −0.13802 + 0.006942(x1 − 7.173333) − 0.01327(x2 − 12.12128),
R3: y = −0.04035 − 0.02267(x1 − 3.501149) + 0.239457(x2 − 22.03571),
R4: y = 0.060663 + 0.278776(x1 − 3.501149) + 0.099469(x2 − 12.12128).  (23)

Figure 4 illustrates the resulting values of the performance index when running the PFRBFNN based on the HFC-PSSA. The proposed model is also contrasted with some previously developed fuzzy models, as shown in Table 3. It is easy to see that the performance of the proposed model is better in the sense of its approximation and prediction abilities.

4.3. Abalone Data (ABA). Finally, we experiment with the abalone dataset [20-23], a larger dataset consisting of 4,177 input-output pairs that concerns the age of abalone predicted on the basis of seven input variables (including length, weight, diameter, etc.). The dataset is split into two separate parts [20, 21]: the construction of the fuzzy model is completed on 2,506 data points regarded as the training set, while the rest of the dataset (1,671 data points) is retained for testing purposes. The RMSE is considered as the performance index.
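The performance indexes (20) and (21) used throughout the experiments are straightforward to compute; a minimal sketch, with `perf_index` and `mpi` as hypothetical helper names:

```python
import math

def perf_index(y_true, y_pred, kind="RMSE"):
    """Performance index (20): RMSE or MSE between targets and model output."""
    m = len(y_true)
    sse = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    return math.sqrt(sse / m) if kind == "RMSE" else sse / m

def mpi(pi_train, pi_test, theta=0.5):
    """Accuracy criterion (21): convex combination of training PI and testing E_PI.

    theta = 0.5, as used in all the experiments, weights both equally.
    """
    return theta * pi_train + (1.0 - theta) * pi_test
```

PI is obtained by evaluating `perf_index` on the training split and E_PI on the testing split; `mpi` then combines the two as in (21).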

Figure 3: Trace curves of the performance indexes produced by HFC-PSSA (STP). (a) Training error (PI, RMSE) in successive optimization; (b) testing error (E_PI, RMSE) in successive optimization.

Figure 4: Trace curves of the performance indexes produced by HFC-PSSA (MIS). (a) Training error in successive optimization; (b) testing error in successive optimization.

After running 200 generations of PSSA, we obtain the optimal network with eight fuzzy rules of type 3, whose polynomials of the consequence part are as follows:

R1: y = 7.9872288 + 11.45947664(x1 − 0.124376) + 451.4966298(x2 − 0.630624) + ⋯ + 143.9173838(x3 − 0.282961) − 47.44803317(x1 − 0.124376)(x2 − 0.630624) + ⋯ − 217.5141906(x1 − 0.124376)(x3 − 0.282961) + 162.4868597(x2 − 0.630624)(x3 − 0.282961) + ⋯ − 1.667754581(x1 − 0.124376)^2 − 9.677924915(x2 − 0.630624)^2 − 134.1737445(x3 − 0.282961)^2,

Figure 5: Trace curves of the performance indexes produced by HFC-PSSA (ABA). (a) Training error in successive optimization; (b) testing error in successive optimization.

R2: y = −1188.04 − 855.99(x1 − 0.124376) + 464.2795(x2 − 0.630624) + ⋯ + 499.0509(x3 − 0.446352) − 1356.86(x1 − 0.124376)(x2 − 0.630624) + 21.30605(x1 − 0.124376)(x3 − 0.446352) + ⋯ + 21.12003(x2 − 0.630624)(x3 − 0.446352) + 26.0111(x1 − 0.124376)^2 − 778.713(x2 − 0.630624)^2 + ⋯ − 95.3784(x3 − 0.446352)^2,

R3: y = −143.405 − 42.2197(x1 − 0.124376) + 71.0844(x2 − 1.096818) + ⋯ + 13.19766(x3 − 0.282961) − 51.0632(x1 − 0.124376)(x2 − 1.096818) + ⋯ + 1.11368(x1 − 0.124376)(x3 − 0.282961) − 387.954(x2 − 1.096818)(x3 − 0.282961) + ⋯ − 107.995(x1 − 0.124376)^2 + 622.8024(x2 − 1.096818)^2 − 95.3784(x3 − 0.282961)^2,

R4: y = −532.195 − 28.6716(x1 − 0.124376) + 228.1629(x2 − 1.096818) + ⋯ + 904.6096(x3 − 0.446352) + 3659.62(x1 − 0.124376)(x2 − 1.096818) + ⋯ + 803.6259(x1 − 0.124376)(x3 − 0.446352) + 994.0846(x2 − 1.096818)(x3 − 0.446352) + ⋯ + 2050.08(x1 − 0.124376)^2 + 1659.251(x2 − 1.096818)^2 − 6.39226(x3 − 0.446352)^2,

R5: y = −658.152 + 406.3837(x1 − 0.162877) + 5785.213(x2 − 0.630624) + ⋯ − 340.025(x3 − 0.282961) − 3561.14(x1 − 0.162877)(x2 − 0.630624) + ⋯ − 7102.14(x1 − 0.162877)(x3 − 0.282961) + 2663.84(x2 − 0.630624)(x3 − 0.282961) + ⋯ − 38.9337(x1 − 0.162877)^2 − 291.782(x2 − 0.630624)^2 − 108.948(x3 − 0.282961)^2,

R6: y = 546.4625 + 55.4919(x1 − 0.162877) + 50.1344(x2 − 0.630624) + ⋯ − 744.188(x3 − 0.446352)

+ 66.87478(x1 − 0.162877)(x2 − 0.630624) + ⋯ − 28.5836(x1 − 0.162877)(x3 − 0.446352) + 29.87009(x2 − 0.630624)(x3 − 0.446352) + ⋯ − 1321.04(x1 − 0.162877)^2 − 13672(x2 − 0.630624)^2 + 744.3283(x3 − 0.446352)^2,

R7: y = −954.934 + 807.175(x1 − 0.162877) − 3140.58(x2 − 1.096818) + ⋯ + 16.91164(x3 − 0.282961) + 47.5888(x1 − 0.162877)(x2 − 1.096818) + ⋯ − 34.206(x1 − 0.162877)(x3 − 0.282961) + 119.1346(x2 − 1.096818)(x3 − 0.282961) + ⋯ − 14.2966(x1 − 0.162877)^2 + 9.594292(x2 − 1.096818)^2 − 1.58319(x3 − 0.282961)^2,

R8: y = −166.398 + 4.89712(x1 − 0.162877) + 383.2878(x2 − 1.096818) + ⋯ − 23.2903(x3 − 0.446352) − 1369.65(x1 − 0.162877)(x2 − 1.096818) + ⋯ + 69.9118(x1 − 0.162877)(x3 − 0.446352) − 330.398(x2 − 1.096818)(x3 − 0.446352) + ⋯ + 960.4564(x1 − 0.162877)^2 + 82.85281(x2 − 1.096818)^2 + 43.62129(x3 − 0.446352)^2.  (24)

Figure 5 shows the performance indexes generated by means of the HFC-PSSA on the training data and the testing data. Table 4 summarizes the results of the comparative analysis of the PFRBFNN using HFC-PSSA when contrasted with other models. It is clear that the proposed model outperforms several previous fuzzy models reported in the literature.

Table 4: Results of comparative analysis (ABA).

Model | PI | E_PI | Index
Linear regression [20] | 14.15 | 17.22 | MSE
RBFNN [20] | 10.36 | 10.48 | MSE
RBFNN (context-free clustering) [20] | 10.54 | 10.58 | MSE
Boosting of granular model [20] | 8.39 | 8.68 | MSE
RBFNN II [21] | 6.710 | 6.894 | MSE
GA-FIS [22] | 2.393 | 2.454 | MSE/2
SP-SVM [23] | 7.116 | 6.123 | MSE
Our model (without the use of PSSA) | 1.942 | 2.247 | RMSE
Our model (with the use of PSSA) | 1.910 | 2.243 | RMSE

5. Conclusions

In this study, we have proposed an HFC-PSSA and have introduced a design of PFRBFNN based on it. A suite of comparative studies demonstrated that the proposed HFC-PSSA-based PFRBFNN achieves better performance in comparison with some other models reported in the literature.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 61272450 and 61301140). The authors would like to thank the Tianjin Key Lab of Intelligent Computing and Novel Software Technology and the Key Laboratory of Computer Vision and System, Ministry of Education, for their support of this work.

References

[1] S. Mitra and J. Basak, "FRBF: a fuzzy radial basis function network," Neural Computing and Applications, vol. 10, no. 3, pp. 244-252, 2001.
[2] F. Behloul, B. P. F. Lelieveldt, A. Boudraa, and J. H. C. Reiber, "Optimal design of radial basis function neural networks for fuzzy-rule extraction in high dimensional data," Pattern Recognition, vol. 35, no. 3, pp. 659-675, 2002.
[3] S.-K. Oh, W.-D. Kim, W. Pedrycz, and B.-J. Park, "Polynomial-based radial basis function neural networks (P-RBF NNs) realized with the aid of particle swarm optimization," Fuzzy Sets and Systems, vol. 163, no. 1, pp. 54-77, 2011.
[4] W. Huang and L. Ding, "Project-scheduling problem with random time-dependent activity duration times," IEEE Transactions on Engineering Management, vol. 58, no. 2, pp. 377-387, 2011.
[5] W. Huang and L. Ding, "The shortest path problem on a fuzzy time-dependent network," IEEE Transactions on Communications, vol. 60, no. 11, pp. 3376-3385, 2012.
[6] F.-J. Lin, L.-T. Teng, J.-W. Lin, and S.-Y. Chen, "Recurrent functional-link-based fuzzy-neural-network-controlled induction-generator system using improved particle swarm optimization," IEEE Transactions on Industrial Electronics, vol. 56, no. 5, pp. 1557-1577, 2009.
[7] A. Alexandridis, H. Sarimveis, and K. Ninos, "A radial basis function network training algorithm using a non-symmetric partition of the input space—application to a model predictive control configuration," Advances in Engineering Software, vol. 42, no. 10, pp. 830-837, 2011.
[8] A. Alexandridis, E. Chondrodima, and H. Sarimveis, "Radial basis function network training using a nonsymmetric partition of the input space and particle swarm optimization," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 219-230, 2013.
[9] E. Kim, M. Park, S. Ji, and M. Park, "A new approach to fuzzy modeling," IEEE Transactions on Fuzzy Systems, vol. 5, no. 3, pp. 328-337, 1997.
[10] Y. Lin and G. A. Cunningham, "A new approach to fuzzy modeling," IEEE Transactions on Fuzzy Systems, vol. 5, no. 2, pp. 190-197, 1997.
[11] S. Oh and W. Pedrycz, "Identification of fuzzy systems by means of an auto-tuning algorithm and its application to nonlinear systems," Fuzzy Sets and Systems, vol. 115, no. 2, pp. 205-230, 2000.
[12] B. J. Park, S. K. Oh, T. C. Ahn, and H. K. Kim, "Optimization of fuzzy systems by means of GA and weighting factor," Transactions of the Korean Institute of Electrical Engineers, vol. 48, no. 6, pp. 789-799, 1999.
[13] H.-S. Park, B.-J. Park, H.-K. Kim, and S.-K. Oh, "Self-organizing polynomial neural networks based on genetically optimized multi-layer perceptron architecture," International Journal of Control, Automation and Systems, vol. 2, no. 4, pp. 423-434, 2004.
[14] H.-S. Park and S.-K. Oh, "Fuzzy relation-based fuzzy neural-networks using a hybrid identification algorithm," International Journal of Control, Automation and Systems, vol. 1, no. 3, pp. 289-300, 2003.
[15] S.-K. Oh, W. Pedrycz, and B.-J. Park, "Relation-based neurofuzzy networks with evolutionary data granulation," Mathematical and Computer Modelling, vol. 40, no. 7-8, pp. 891-921, 2004.
[16] S. K. Oh and W. Pedrycz, "Fuzzy polynomial neuron-based self-organizing neural networks," International Journal of General Systems, vol. 32, no. 3, pp. 237-250, 2003.
[17] S.-K. Oh, H.-S. Park, C.-W. Jeong, and S.-C. Joo, "GA-based feed-forward self-organizing neural network architecture and its applications for multi-variable nonlinear process systems," KSII Transactions on Internet and Information Systems, vol. 3, no. 3, pp. 309-330, 2009.
[18] W. Pedrycz and K.-C. Kwak, "The development of incremental models," IEEE Transactions on Fuzzy Systems, vol. 15, no. 3, pp. 507-518, 2007.
[19] S. K. Oh, W. D. Kim, B. J. Park, and W. Pedrycz, "A design of granular-oriented self-organizing hybrid fuzzy polynomial neural networks," Neurocomputing, vol. 119, pp. 292-307, 2013.
[20] W. Pedrycz and K.-C. Kwak, "Boosting of granular models," Fuzzy Sets and Systems, vol. 157, no. 22, pp. 2934-2953, 2006.
[21] W. Pedrycz, H. S. Park, and S. K. Oh, "A granular-oriented development of functional radial basis function neural networks," Neurocomputing, vol. 72, no. 1-3, pp. 420-435, 2008.
[22] R. Alcalá, M. J. Gacto, and F. Herrera, "A fast and scalable multiobjective genetic fuzzy system for linguistic fuzzy modeling in high-dimensional regression problems," IEEE Transactions on Fuzzy Systems, vol. 19, no. 4, pp. 666-681, 2011.
[23] F. Lin and J. Guo, "A novel support vector machine algorithm for solving nonlinear regression problems based on symmetrical points," in Proceedings of the 2nd International Conference on Computer Engineering and Technology (ICCET '10), pp. 176-180, Chengdu, China, April 2010.
[24] W. Huang, L. Ding, S.-K. Oh, C.-W. Jeong, and S.-C. Joo, "Identification of fuzzy inference system based on information granulation," KSII Transactions on Internet and Information Systems, vol. 4, no. 4, pp. 575-594, 2010.
[25] F. Hoffmann, "Combining boosting and evolutionary algorithms for learning of fuzzy classification rules," Fuzzy Sets and Systems, vol. 141, no. 1, pp. 47-58, 2004.
[26] W. Pedrycz and K.-C. Kwak, "Boosting of granular models," Fuzzy Sets and Systems, vol. 157, no. 22, pp. 2934-2953, 2006.
[27] W. Pedrycz and P. Rai, "Collaborative clustering with the use of Fuzzy C-Means and its quantification," Fuzzy Sets and Systems, vol. 159, no. 18, pp. 2399-2427, 2008.
[28] S.-C. Lin, W. F. Punch III, and E. D. Goodman, "Coarse-grain parallel genetic algorithms: categorization and new approach," in Proceedings of the 6th IEEE Symposium on Parallel and Distributed Processing, pp. 28-37, Phoenix, Ariz, USA, October 1994.
[29] J. J. Hu and E. D. Goodman, "The hierarchical fair competition (HFC) model for parallel evolutionary algorithms," in Proceedings of the Congress on Evolutionary Computation (CEC '02), pp. 45-94, Honolulu, Hawaii, USA, 2002.
[30] J. J. Hu, E. D. Goodman, K. S. Seo, and M. Pei, "Adaptive hierarchical fair competition (AHFC) model for parallel evolutionary algorithms," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '02), pp. 772-779, New York, NY, USA, 2002.
[31] A. Staiano, R. Tagliaferri, and W. Pedrycz, "Improving RBF networks performance in regression tasks by means of a supervised fuzzy clustering," Neurocomputing, vol. 69, no. 13-15, pp. 1570-1581, 2006.
[32] X. Hong and S. Chen, "A new RBF neural network with boundary value constraints," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 39, no. 1, pp. 298-303, 2009.
[33] C.-M. Huang and F.-L. Wang, "An RBF network with OLS and EPSO algorithms for real-time power dispatch," IEEE Transactions on Power Systems, vol. 22, no. 1, pp. 96-104, 2007.
[34] S. K. Oh and W. Pedrycz, "Fuzzy identification by means of an auto-tuning algorithm and a weighted performance index," Journal of Fuzzy Logic and Intelligent Systems, vol. 8, pp. 106-118, 1998.
[25] F. Hoffmann, “Combining boosting and evolutionary algorithms for learning of fuzzy classification rules,” Fuzzy Sets and Systems, vol. 141, no. 1, pp. 47–58, 2004.
[26] W. Pedrycz and K.-C. Kwak, “Boosting of granular models,” Fuzzy Sets and Systems, vol. 157, no. 22, pp. 2934–2953, 2006.
[27] W. Pedrycz and P. Rai, “Collaborative clustering with the use of Fuzzy C-Means and its quantification,” Fuzzy Sets and Systems, vol. 159, no. 18, pp. 2399–2427, 2008.
[28] S.-C. Lin, W. F. Punch III, and E. D. Goodman, “Coarse-grain parallel genetic algorithms: categorization and new approach,” in Proceedings of the 6th IEEE Symposium on Parallel and Distributed Processing, pp. 28–37, Phoenix, Ariz, USA, October 1994.
[29] J. J. Hu and E. D. Goodman, “The hierarchical fair competition (HFC) model for parallel evolutionary algorithms,” in Proceedings of the Congress on Evolutionary Computation (CEC ’02), pp. 45–94, Honolulu, Hawaii, USA, 2002.
[30] J. J. Hu, E. D. Goodman, K. S. Seo, and M. Pei, “Adaptive hierarchical fair competition (AHFC) model for parallel evolutionary algorithms,” in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’02), pp. 772–779, New York, NY, USA, 2002.
[31] A. Staiano, R. Tagliaferri, and W. Pedrycz, “Improving RBF networks performance in regression tasks by means of a supervised fuzzy clustering,” Neurocomputing, vol. 69, no. 13–15, pp. 1570–1581, 2006.
[32] X. Hong and S. Chen, “A new RBF neural network with boundary value constraints,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 39, no. 1, pp. 298–303, 2009.
[33] C.-M. Huang and F.-L. Wang, “An RBF network with OLS and EPSO algorithms for real-time power dispatch,” IEEE Transactions on Power Systems, vol. 22, no. 1, pp. 96–104, 2007.
[34] S. K. Oh and W. Pedrycz, “Fuzzy identification by means of an auto-tuning algorithm and a weighted performance index,” Journal of Fuzzy Logic and Intelligent Systems, vol. 8, pp. 106–118, 1998.
