Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2014, Article ID 835791, 11 pages
http://dx.doi.org/10.1155/2014/835791

Research Article

Support Vector Regression Based on Grid-Search Method for Short-Term Wind Power Forecasting

Hong Zhang,1,2 Lixing Chen,1,2 Yong Qu,3 Guo Zhao,1,2 and Zhenwei Guo4

1 School of Electrical Engineering, Southeast University, Nanjing, Jiangsu 210096, China
2 Jiangsu Key Laboratory of Smart Grid Technology and Equipment, Nanjing 210096, China
3 VLSI Lab, Nanyang Technological University, Singapore 639798
4 School of Information Science and Engineering, Hunan University, Changsha 410082, China

Correspondence should be addressed to Hong Zhang; [email protected]

Received 16 November 2013; Revised 18 April 2014; Accepted 23 April 2014; Published 17 June 2014

Academic Editor: Hongjie Jia

Copyright © 2014 Hong Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The purpose of this paper is to investigate short-term wind power forecasting (STWPF). STWPF is a typically complex problem because it is affected by many factors, such as wind speed, wind direction, and humidity. This paper attempts to provide a reference strategy for STWPF and to solve several of its open problems. The two main contributions of this paper are as follows. (1) In data preprocessing, each problem encountered in the employed real data, such as irrelevant, outlying, missing, and noisy values, is taken into account and a reasonable treatment is given; the input variable selection and order estimation are investigated by the Partial least squares (PLS) technique. (2) STWPF is investigated by the multiscale support vector regression (SVR) technique, and the parameters associated with SVR are optimized by the Grid-search method. In order to assess the performance of the proposed strategy, forecasting results are compared between two different forecasting models, multiscale SVR and a multilayer perceptron (MLP) neural network. In addition, the error evaluation demonstrates that multiscale SVR is a robust, precise, and effective approach.

1. Introduction

Compared with traditional thermal power [1], wind energy is a significant form of renewable energy, and it is receiving more and more attention because it is a renewable, inexhaustible, and free source. Accordingly, wind energy forecasting has become a critical issue for the dispatch and scheduling of power systems [2]. Precise wind energy forecasting helps balance and integrate multiple volatile power sources at all levels of the transmission and distribution grid [3]. Moreover, accurate short-term wind power forecasting can reduce the problems caused by grid integration and energy trading [4].

Short-term wind behaviour is essentially random, although in the long term it can be described by a continuous probability distribution, namely the Weibull distribution. It is hard to obtain the intrinsic regularity of wind speed in the short term; thus, soft computing can play a significant role in short-term wind power forecasting. The SVR technique has already been employed for short-term energy forecasting in the existing literature; however, many studies assume that the employed data are of good quality. In fact, it is impossible to acquire noise-free data because many disturbances are beyond the control of human operators [5]. The generalization capability of a traditional forecasting approach is relatively low because it depends heavily on the quality of the sample. In this paper, we optimize the parameters of two different kernel functions, the radial basis function (RBF) and the polynomial function (PF), by the Grid-search method for SVR so that the accuracy of the forecasting results is improved. The main algorithm flow is illustrated in Figure 1, which provides a brief overview of the main processing steps of the proposed approach.


Figure 1: Algorithm flow. The procedure proceeds from the real data through the following stages: fill missing values (nearest interpolation, etc.); MAD filter used for noise elimination; data smoothing; wavelet decomposition and denoising; filtered data synthesis (high-frequency and low-frequency data decomposition); input variable selection (hypothesis, statistic, and probability calculation); order estimation (data delay); short-term wind power forecasting via support vector regression (SVR) with parameter optimization via the Grid-search method; multilayer perceptron network forecasting; forecasting error assessment and analysis; and performance comparison results.

The rest of this paper is organized as follows. The formalization and illustration of the dataset are given in Section 2. A brief review of wind energy forecasting is given in Section 3. Section 4 presents detailed information on the proposed approach, including the underlying theory. In Section 5, firstly, the missing values of the employed data are filled by data interpolation techniques. Secondly, two different filtering methods (the median absolute deviation (MAD) filter and wavelet decomposition and denoising) are used to eliminate irrelevant, noisy, and outlying values. Thirdly, the input variable selection is investigated by the Partial least squares (PLS) technique. Fourthly, the data order estimation is implemented through cross-correlation methods. Fifthly, the multiscale SVR in combination with the Grid-search technique is applied to forecast the short-term wind power. Finally, the performance evaluation and error analysis are given. In Section 6, the results and prospective research questions are summarized and discussed.

2. Problem Description

Typically, short-term wind power prediction covers a horizon of up to 4 days, and ultra-short-term prediction covers up to 4 hours. This paper mainly discusses short-term wind power prediction by the SVR technique based on the Grid-search method.

2.1. Data Resources. The quality of the data samples plays an important role in wind power forecasting because it has a direct impact on forecasting performance. Data quality is the fundamental issue for data analysis; in particular, real data never exist without noise. The main objective of data analysis is to discover knowledge that will be used to solve real problems and make decisions [5]. The wind tower employed to collect the data has sensors at two different heights, 30 m and 60 m. The data have been measured every 3 minutes, and a total of about five days of measurement data are selected. The employed data contain a few irrelevant, corrupt, and noisy values, which must be removed or filled before further data analysis can proceed. In the real application, the data sampling equipment sometimes encounters a temporary fault and the associated data are recorded as zero; these bad-quality data are therefore eliminated.

2.2. Formalization. In this paper, the short-term wind power forecasting issue is formulated as a regression problem. The time series is denoted by

x_i(t_i) = \left(x_{i1}^{AWS}, x_{i2}^{AWD}, x_{i3}^{TEM}, x_{i4}^{HUM}, x_{i5}^{PRE}, x_{i6}^{AWS}, x_{i7}^{AWD}, x_{i8}^{VAR}, x_{i9}^{RWS}, x_{i10}^{RWD}\right)^{T}, \quad i = 1, \ldots, N, \qquad (1)

where the 10 variables indicate the 30 m average wind speed, direction, temperature, humidity, and pressure and the 60 m average wind speed, direction, variance, real wind speed, and real wind direction, respectively; N is a positive integer greater than one. In general, (1) collects all the factors available in the real application for short-term wind power forecasting. The primary task of this paper is to predict the output wind power y_i(t_i), i = 1, \ldots, N, at time t = t_i based on the wind measurements x_i(t_i) at the past observation times t_i, t_{i-1}, \ldots, t_{i-\beta}, with i > \beta and i, \beta \in N^{+}.

3. Related Work

Short-term wind power forecasting has attracted more and more attention in recent decades. Alessandrini et al. [4]

discussed the comparison between the ECMWF EPS (the Ensemble Prediction System in use at the European Centre for Medium-Range Weather Forecasts) and COSMO-LEPS (the Limited-area Ensemble Prediction System developed within the Consortium for Small-scale Modelling) based on two forecasting models. In a survey of short-term prediction covering the last 30 years, Costa et al. [6] investigated the performance of the two principal approaches (mathematical and physical). Kramer and Gieseke [2] applied SVR techniques to short-term wind energy forecasting based on real-world wind power data from the National Renewable Energy Laboratory (NREL) western wind resource dataset, and Osowski and Garanty [7] utilized SVR and wavelet decomposition to forecast short-term air pollution. Chen et al.

[8] proposed new learning techniques based on the support vector machine (SVM) model for power load forecasting and reported experimental results for short-term load forecasting. Chang [9] proposed a hybrid method combining the orthogonal least squares (OLS) algorithm and a genetic algorithm (GA) for short-term wind power forecasting based on a radial basis function (RBF) neural network. Che et al. [10] applied a two-stage topology with a boost converter and a proposed control strategy to improve the conversion efficiency of a commercial small wind grid inverter. Based on the combination of the wavelet transform and neural network arithmetic, Song et al. [11] dealt with energy management for a hybrid power generation system such that the stability of the generation system was greatly improved. Moreover, a bibliographical survey of research and development in the field of wind power forecasting was presented by Lei et al. [12]. Li et al. [13] presented subspace approximation techniques based on a chaotic time series and nonlinear Kalman filtering, and wind speed prediction experiments were used to demonstrate the high prediction accuracy. Hoai et al. [14] optimized an empirical-statistical downscaling technique for prediction based on a feed-forward multilayer perceptron (MLP) neural network and gave numerical simulations to demonstrate the robustness of the proposed technology. Sánchez [15] gave a statistical forecasting system for short-term wind power prediction up to 48 hours ahead based on a combination of recursive methods and adaptive schemes. An in-depth review of current wind power generation methods and advances in wind power forecasting was given by Foley et al. [16], and Botterud et al. [17] discussed the current development of wind power forecasting in US ISO/RTO markets and the application of state-of-the-art forecasts. Based on numerical and statistical models, Stathopoulos et al. [18] proposed strategies for accurate local wind forecasts in combination with statistical postprocessing.

4. Theory for Proposed Approach

Data analysis is the fundamental approach for knowledge investigation [19]. Proper data preprocessing can eliminate unreasonable trends in the data without loss of data characteristics. Therefore, efficient techniques for automatic data preprocessing are crucial [20]. In this section, the basic theory behind the data preprocessing and the multiscale SVR is introduced.

4.1. Missing Value. In data preprocessing, the issues that must be considered are irrelevant values, missing values, and noisy data. Data of poor quality result in poor performance of the finally employed approaches. In this case, the data are not complete; the records with missing values cannot simply be discarded, because of the requirement for time continuity and because they may contain useful information [5]. In this paper, the missing values are filled via data interpolation using the intrinsic relationships between variables; for instance, a missing value of wind power can be filled by

its associated influential factor, the wind speed. Nearest-neighbor interpolation (also named proximal interpolation) is a typical method of multivariate interpolation in one or more dimensions, which approximates the value at a non-given point by the value of the nearest point in its neighborhood. Under normal circumstances the filled data are reasonable, because in the real application the measured values do not change suddenly and the correlation between neighboring data is taken into account by the nearest-neighbor interpolation method.

4.2. MAD Filter. The median absolute deviation from the median x^* of a data sequence (x_1, x_2, \ldots, x_N) is robust and performs well in the presence of multiple outliers [21, 22]. The Hampel filter (also known as the MAD filter), which is used to eliminate outliers, flags the sample x_i as an outlier if

t \cdot s < |x_i - x_i^{*}|, \quad s = a \cdot M, \quad i = 1, \ldots, n, \qquad (2)

where a = 1.4826 and M = median{|x_i - x_i^{*}|} is the median of the absolute deviations |x_i - x_i^{*}|; t is a threshold employed to control the range of acceptance, and it can be estimated from the sample standard deviation of the distribution. The MAD filter thus replaces the outlier-sensitive mean and standard deviation estimates with the outlier-resistant median and MAD computed from the data [23].

4.3. Wavelet Decomposition and Denoising. Unlike the MAD filter, wavelet decomposition and denoising analysis is localized in both the time domain and the frequency domain, and it can be used to decompose the original data into a high-frequency component (HFC) and a low-frequency component (LFC). Typically, the HFC carries detailed information such as abrupt changes, while the LFC usually represents the general or stationary characteristics of the employed data. A more detailed discussion of wavelet decomposition and denoising can be found in [24-27]. Compared with the continuous wavelet transform, the discrete wavelet transform (DWT) is more commonly used in real applications and is defined by

C_f(j, k) = \int_{-\infty}^{\infty} f(t)\,\varphi_{jk}(t)\,dt, \quad \forall f(t) \in L^2(R), \; j, k \in Z, \qquad (3)

where \forall means "for any", \varphi_{jk}(t) is a basic wavelet, and \varphi_{jk}(t) = a_0^{j/2}\varphi(a_0^{j} t - k b_0), with a_0, b_0 \in R and a_0 \neq 0. The above DWT is also known as the Mallat algorithm [28]; in this paper, the Vaidyanathan filter [29-31] is applied for the implementation of the data decomposition and denoising.

4.4. Input Variable Selection. The Swedish statistician Herman Wold first introduced the Partial least squares (PLS) technique, which is used to find the fundamental relations between two variables (x and y); that is, a latent variable approach is employed to investigate the covariance structures between the variables. Partial correlation can be used to explore



the association between a pair of random variables in the presence of other variables [32, 33], and its coefficients can be calculated between two variables while excluding the influence of the other variables. The main construction of the PLS (partial correlation) coefficients and the associated test proceed in the following three steps.

(i) Hypothesis. Consider three variables, y_1, y_2, and x_3; the partial correlation coefficient r_{12(3)} between y_1 and y_2 given x_3 is defined by

r_{12(3)} = \frac{r_{12} - r_{13} r_{23}}{\sqrt{(1 - r_{13}^{2})(1 - r_{23}^{2})}}, \qquad (4)

where r_{ij} is the product-moment correlation coefficient between the variables with subscripts i and j. The range of the coefficient values is (-1, 1); in particular, 0 indicates no association between y_1 and y_2.

(ii) Statistic. A test associated with the full correlation coefficient is used to test the original hypothesis associated with (4) under the assumption that the data have an approximately normal distribution. If r_{12(3)} is the obtained partial correlation coefficient, then the appropriate t statistic is

t_{n-q-2} = \frac{r_{12(3)} \sqrt{n-q-2}}{\sqrt{1 - r_{12(3)}^{2}}}, \qquad (5)

where t_{n-q-2} has an approximate Student's t-distribution with n - q - 2 degrees of freedom; n is the number of observations from which the full correlation coefficients are computed.

(iii) Probability Calculation. Calculate the observed value of the test t-statistic as well as its corresponding probability P. If the probability P of the test t-statistic is less than the given significance level α, then the original hypothesis should be rejected; otherwise it is accepted.
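The short Python sketch below illustrates one way to evaluate (4) and (5) numerically for a single conditioning variable (q = 1); the function name and the synthetic test data are illustrative assumptions, not part of the original study.

```python
import numpy as np
from scipy import stats

def partial_corr_test(y1, y2, x3, alpha=0.01):
    """Partial correlation r_{12(3)} of y1 and y2 given x3 (equation (4))
    and the associated t-test of equation (5), with q = 1 conditioning variable."""
    r12 = np.corrcoef(y1, y2)[0, 1]
    r13 = np.corrcoef(y1, x3)[0, 1]
    r23 = np.corrcoef(y2, x3)[0, 1]
    r12_3 = (r12 - r13 * r23) / np.sqrt((1 - r13**2) * (1 - r23**2))
    n, q = len(y1), 1
    dof = n - q - 2
    t_stat = r12_3 * np.sqrt(dof) / np.sqrt(1 - r12_3**2)
    p_value = 2 * stats.t.sf(abs(t_stat), dof)     # two-sided p-value
    return r12_3, p_value, p_value < alpha         # True: reject "no association"

# Illustrative check on synthetic data sharing a common driver x3.
rng = np.random.default_rng(0)
x3 = rng.normal(size=500)
y1 = x3 + 0.5 * rng.normal(size=500)
y2 = x3 + 0.5 * rng.normal(size=500)
print(partial_corr_test(y1, y2, x3))
```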

4.5. Order Estimation. The data order reflects the intrinsic relationship between past data and current data, and it is derived through the autocorrelation function (ACF) for one data sequence and the cross-correlation function (CCF) for two different data sequences. Autocorrelation (also sometimes called "lagged correlation" or "serial correlation") refers to the correlation of a time series with its own past and future values, that is, the correlation between members of a series of numbers arranged in time. The CCF is a measure of the similarity of two given data sequences as a function of a time lag, and it is commonly used to replace a long record by a shorter one of suitable length. In the discrete domain, the ACF and the CCF for two real time series x_i, y_j, i = 0, 1, \ldots, M-1, j = 0, 1, \ldots, N-1, are defined by

r_{xx}(l) = \sum_{n=-\infty}^{\infty} x(n)\,x(n-l) = r_{xx}(-l), \quad l = 0, \pm 1, \pm 2, \ldots, \qquad (6)

r(x, y, l) = \sum_{j=\max(0,\,l)}^{\min(M-1+l,\,N-1)} x(j-l)\,y(j), \quad l = -(M+1), \ldots, 0, \ldots, (N-1). \qquad (7)

In MATLAB, the ACF and the CCF defined by (6) and (7) are computed with the function "xcorr", which evaluates them in the frequency domain.
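The authors compute (6) and (7) with MATLAB's xcorr; the hedged Python sketch below shows a plain normalized equivalent with numpy only (the function names and the normalization are illustrative assumptions).

```python
import numpy as np

def acf(x, max_lag=20):
    """Normalized sample autocorrelation of x for lags 0..max_lag (cf. (6))."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    full = np.correlate(x, x, mode="full")        # all lags -(N-1)..(N-1)
    mid = len(x) - 1                              # index of lag 0
    return full[mid:mid + max_lag + 1] / full[mid]

def ccf(x, y, max_lag=20):
    """Normalized cross-correlation between x and y for lags 0..max_lag (cf. (7))."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    full = np.correlate(y, x, mode="full")
    mid = len(x) - 1
    norm = np.sqrt(np.sum(x**2) * np.sum(y**2))
    return full[mid:mid + max_lag + 1] / norm

# A lag is retained as model order while the correlation stays high, e.g. an
# order of 2 when the first two lags stay above roughly 0.9 (see Section 5.3).
```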

4.6. Support Vector Regression. SVM for regression was proposed in 1996 by Drucker et al. [33]; the method is called support vector regression (SVR). Its basic idea follows support vector classification: the cost function ignores the training points that lie within the margin, so the SVR model depends only on a subset of the training data. Analogously, the least squares support vector machine (LS-SVM), another SVM variant, was presented by Suykens and Vandewalle [34]. Vapnik-Chervonenkis theory and structural risk minimization (SRM) are the theoretical foundations of the SVM [35, 36]. The aim of the SVM here is to investigate the intrinsic relationship between the prediction model of the wind time series and the learning capability and to derive the best generalization capability. Let the given sample dataset be X \times Y = \{(x_1, y_1), \ldots, (x_i, y_i), \ldots\}_{i=1}^{N} \subset R^{N} \times R. The basic task of regression is to establish a nonlinear function f : R^{N} \to R such that y = f(x); the estimation function and the corresponding optimization problem are, respectively, defined as

y = f(x) = [\omega \cdot \phi(x)] + b,

\min \; Q = \frac{1}{2}\|\omega\|^{2} + C \sum_{i=1}^{N} \left(\xi_i^{*} + \xi_i\right)

subject to

y_i - [\omega \cdot \phi(x_i)] - b \le \varepsilon + \xi_i^{*}, \quad [\omega \cdot \phi(x_i)] + b - y_i \le \varepsilon + \xi_i, \quad \xi_i^{*}, \xi_i \ge 0, \; i = 1, \ldots, N, \qquad (8)

where C is the penalty factor that balances the empirical risk and the confidence range, \xi_i^{*}, \xi_i are relaxation factors used to modify the convergence speed, and \varepsilon defines the loss function applied to estimate the prediction accuracy of (8), whose detailed form is

L_{\varepsilon}(y) = |f(x) - y| - \varepsilon, \quad \text{if } |f(x) - y| \ge \varepsilon. \qquad (9)

Moreover, L_{\varepsilon}(y) is equal to zero if |f(x) - y| < \varepsilon. Essentially, (8) and (9) lead to the equivalent quadratic convex optimization problem

\max_{\alpha_i, \alpha_i^{*}} \; P(\alpha_i, \alpha_i^{*}) = \sum_{i=1}^{N} y_i (\alpha_i - \alpha_i^{*}) - \varepsilon \sum_{i=1}^{N} (\alpha_i + \alpha_i^{*}) - \frac{1}{2} \sum_{i,j=1}^{N} (\alpha_i - \alpha_i^{*})(\alpha_j - \alpha_j^{*}) K(x_i, x_j)

subject to \sum_{i=1}^{N} \alpha_i = \sum_{i=1}^{N} \alpha_i^{*}, \quad 0 \le \alpha_i, \alpha_i^{*} \le C, \; i = 1, \ldots, N, \qquad (10)

π‘βˆ’π‘š

π‘₯𝑛+𝑙 = βˆ‘ (𝛼𝑖 βˆ’ π›Όπ‘–βˆ— ) 𝐾 (π‘₯𝑖 , π‘₯π‘βˆ’π‘š+𝑙 ) + 𝑏,

(11)

𝑖=1

π‘₯π‘βˆ’π‘š+𝑙 = {π‘₯π‘βˆ’π‘š+𝑙 , . . . , π‘₯𝑁+π‘™βˆ’1 } ,

𝑙 = 1, 2, . . . , 𝐿.

Original and filtered state trajectory of wind speed 2 1.8 1.6 1.4 1.2 1

0

500

RMSE =

βˆšπ‘›

,

MAE =

󡄨 󡄨 βˆ‘π‘›π‘‘=1 σ΅„¨σ΅„¨σ΅„¨σ΅„¨π‘ƒπ‘Ÿπ‘‘ βˆ’ 𝑃𝑓𝑑 󡄨󡄨󡄨󡄨 RMAE = 󡄨 󡄨 , βˆ‘π‘›π‘‘=1 σ΅„¨σ΅„¨σ΅„¨π‘ƒπ‘Ÿπ‘‘ 󡄨󡄨󡄨

2000

0

500

1000

1500

2000

Time

βˆ‘π‘›π‘‘=1

󡄨󡄨 󡄨 σ΅„¨σ΅„¨π‘ƒπ‘Ÿπ‘‘ βˆ’ 𝑃𝑓𝑑 󡄨󡄨󡄨 󡄨 󡄨, βˆšπ‘›

(12)

where MAE is the absolute average error between forecasting power 𝑃𝑓𝑑 and real power π‘ƒπ‘Ÿπ‘‘ over the time 𝑑. 𝑛 is the number of test sample; RMSE and RMAE are the root mean square error and relative average absolute error, respectively.

Original MAD filter Wavelet filter

Figure 2: State trajectories refer to the 1st variable and 2nd variable.

Normalized value

2

1500

Original and filtered state trajectory of wind direction 2 1.8 1.6 1.4 1.2 1

4.7. Error Evaluation Criteria. Consider √ βˆ‘π‘›π‘‘=1 (𝑃𝑓𝑑 βˆ’ π‘ƒπ‘Ÿπ‘‘ )

1000 Time

Original MAD filter Wavelet filter Normalized value

where if the value of 𝐢 is large, then the accuracy capability of data fitting will be more good in the complexity and approximation error of the control model; πœ€ is applied to control the generalization capability and regression approximate error, 𝛼𝑖 , π›Όπ‘–βˆ— are the Lagrange multiplier, when they are not equivalent to zero and then the SVR can be used for regression prediction, and 𝐾(π‘₯𝑖 , π‘₯𝑗 ) is the kernel function which is used to simulate the inner production, which can be given as radial basis function or polynomial function. Furthermore, the regression function is 𝑦𝑗 = βˆ‘π‘βˆ’π‘š 𝑖=1 (𝛼𝑖 βˆ’ π›Όπ‘–βˆ— )𝐾(π‘₯𝑖 , π‘₯𝑗 ) + 𝑏; 𝑗 = π‘š + 1, . . . , 𝑁, the regression step 𝑙th step prediction can be denoted by

Normalized value

Journal of Applied Mathematics

Original and filtered state trajectory of temperature 2 1.8 1.6 1.4 1.2 1 0
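A minimal sketch of the three criteria of (12), written with numpy; the function name is an illustrative assumption.

```python
import numpy as np

def error_metrics(p_real, p_forecast):
    """RMSE, MAE, and RMAE of equation (12) for a set of test power values."""
    p_real = np.asarray(p_real, dtype=float)
    p_forecast = np.asarray(p_forecast, dtype=float)
    err = p_forecast - p_real
    rmse = np.sqrt(np.mean(err**2))
    mae = np.mean(np.abs(err))
    rmae = np.sum(np.abs(err)) / np.sum(np.abs(p_real))
    return rmse, mae, rmae
```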

5. Experimental Analysis

In this section, numerical simulations are constructed for each part of Section 4 based on the wind resource dataset presented in Section 2.1.

5.1. Data Interpolation and Filter. In this subsection, the data interpolation, the MAD filter, and the wavelet decomposition and denoising technique, which correspond to the theory in Sections 4.1-4.3, are used to handle the missing, irrelevant, outlying, and noisy data, respectively. The variables that can be used as forecasting factors consist of two groups: the 30 m average wind speed (AWS), direction (AWD), temperature (Tem), humidity (Hum), and pressure (Pre), and the 60 m AWS, AWD, AWS variance, real wind speed (RWS), real wind direction (RWD), and power. These variables are labeled in this order as the first to the eleventh variable, respectively. The state trajectories of the original data, the filled data, and the filtered data, derived by nearest-neighbor interpolation (NNI), the MAD filter, and wavelet decomposition and denoising, respectively, are given in Figures 2, 3, and 4.


Figure 3: State trajectories of the 3rd variable and the 4th variable (normalized value versus time): original, MAD-filtered, and wavelet-filtered trajectories of the temperature and the humidity.

NNI mainly exploits the relationship between the 30 m AWS and the power, as discussed in Section 5.2. Because 10 samples are collected in half an hour according to the sampling frequency, the half-window size of the MAD filter is set to 5, and the Vaidyanathan wavelet filter is employed for the data preprocessing. Without loss of generality, other wavelet families such as the Daubechies wavelets are also acceptable.
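The experiments above were run in MATLAB with the Vaidyanathan filter bank; the hedged Python sketch below reproduces the same two preprocessing steps with a half-window of 5 and a Daubechies wavelet (which the text states is also acceptable). The threshold rule and function names are illustrative assumptions.

```python
import numpy as np
import pywt

def hampel(x, half_window=5, t=3.0):
    """MAD/Hampel filter of Section 4.2: replace points farther than
    t * 1.4826 * MAD from the local median with that local median."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        med = np.median(x[lo:hi])
        s = 1.4826 * np.median(np.abs(x[lo:hi] - med))
        if s > 0 and abs(x[i] - med) > t * s:
            y[i] = med
    return y

def wavelet_denoise(x, wavelet="db4", level=3):
    """Wavelet decomposition and soft-threshold denoising of Section 4.3."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))              # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

# Usage sketch: series = hampel(raw_series, half_window=5)
#               smooth = wavelet_denoise(series)
```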




Table 1: PLS (partial correlation) coefficients of the eleven variables (variables numbered as in Section 5.1).

Variables   11        1         2         3         4         5
11          1.0000    -0.7644   -0.7185   -0.6779   -0.7602   -0.7214
6           -0.7595   0.9848    0.5741    0.8837    0.9815    0.5689
7           -0.7191   0.5944    0.9437    0.6413    0.5906    0.9228
8           -0.9648   0.8005    0.7235    0.7718    0.7949    0.7259
9           -0.9732   0.6883    0.6697    0.5516    0.6854    0.6705
10          -0.9998   0.7621    0.7174    0.6748    0.7579    0.7205

Table 2: Lower and upper bounds related to Table 1 (UB: upper bound; LB: lower bound).

Variables        11        1         2         3         4         5
11        UB     1.0000    -0.7434   -0.6942   -0.6508   -0.7389   -0.6973
          LB     1.0000    -0.7838   -0.7412   -0.7033   -0.7799   -0.7439
6         UB     -0.7382   0.9862    0.6058    0.8939    0.9832    0.6009
          LB     -0.7793   0.9832    0.5406    0.8726    0.9796    0.5352
7         UB     -0.6948   0.6249    0.9488    0.6690    0.6213    0.9297
          LB     -0.7417   0.5621    0.9381    0.6118    0.5580    0.9152
8         UB     -0.9613   0.8173    0.7458    0.7907    0.8121    0.7481
          LB     -0.9680   0.7824    0.6995    0.7514    0.7763    0.7021
9         UB     -0.9705   0.7130    0.6956    0.5845    0.7103    0.6964
          LB     -0.9757   0.6619    0.6420    0.5169    0.6588    0.6429
10        UB     -0.9998   0.7818    0.7402    0.7004    0.7779    0.7430
          LB     -0.9998   0.7410    0.6930    0.6475    0.7365    0.6963

500

0

1000

1500

2500

2000

Time Original MAD filter Wavelet filter

Figure 4: State trajectories of the 11th variable (associated power value versus time): original, MAD-filtered, and wavelet-filtered power data.


Figure 5: ACF discrete trajectory: autocorrelation values versus lag (0-20) for the wind speed (WS), wind direction (WD), temperature (Tem), humidity (Hum), and power (Pow) series.


Figure 6: CCF discrete trajectory: cross-correlation values versus lag (0-20) between the power and the WS, WD, Tem, and Hum series.


Table 3: Simulation results for the multiscale SVR.

Item      Data   BCVM         Bc     Bg           Foin   RM           RS
RBF(3)    O      0.00121642   1024   0.00195313   3465   0.00383891   0.943034
RBF(3)    M      0.00118972   64     0.015625     1756   0.00194981   0.972598
RBF(3)    W      0.00217458   0.5    1            1628   0.0211323    0.897533
PF(3)     O      0.00121642   1024   0.00195313   4365   0.00383891   0.943034
PF(3)     M      0.00118972   64     0.015625     2756   0.00194981   0.972598
PF(3)     W      0.00217458   0.5    1            1628   0.0211323    0.897533
RBF(6)    O      0.00117004   1024   0.00390625   4562   0.00381612   0.94281
RBF(6)    M      0.0010876    128    0.00390625   2810   0.00189855   0.972905
RBF(6)    W      0.0021321    0.5    1            1534   0.0219367    0.895725
PF(6)     O      0.00117004   1024   0.00390625   4562   0.00381612   0.94281
PF(6)     M      0.0010876    128    0.00390625   2810   0.00189855   0.972905
PF(6)     W      0.0021321    0.5    1            1534   0.0219367    0.895725
RBF(10)   O      0.0011804    512    0.00195313   5366   0.00381804   0.94289
RBF(10)   M      0.00107203   128    0.00390625   2441   0.00191227   0.972918
RBF(10)   W      0.00217458   0.5    1            1534   0.0219367    0.895725
PF(10)    O      0.0011804    512    0.00195313   5366   0.00381804   0.94289
PF(10)    M      0.00107203   128    0.00390625   2441   0.00191227   0.972918
PF(10)    W      0.00214353   0.5    1            1534   0.0219367    0.895725

O: original data; M: MAD filter; W: wavelet filter; RBF(3): number of cross-validation folds for the testing set is 3 with the RBF kernel, similarly for RBF(6) and RBF(10); PF(3): likewise for the PF kernel; BCVM: best cross-validation mean squared error; Bc: best c; Bg: best g; Foin: finished optimization iteration number; RM: regression mean squared error; RS: regression squared correlation coefficient.

The data order estimation is implemented through (6) and (7) and is shown in Figures 5 and 6, respectively. From Figures 5 and 6, we can see that the two most recent past values have the most significant impact on the current value, because the ACF values are above about 0.9 and the CCF values are above 0.7. Thus, the data order can be set to 2 based on the behaviour of the ACF and the CCF.
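The following sketch shows one way to assemble the order-2 lagged inputs, the sine/cosine encoding of the wind direction, and the (1, 2) normalization described in Sections 5.3-5.4; the function and argument names are illustrative assumptions.

```python
import numpy as np

def build_features(ws, wd_deg, tem, hum, power, order=2):
    """Build lagged input vectors (order 2) and targets from the 30 m series;
    the wind direction enters through its sine and cosine."""
    wd = np.deg2rad(np.asarray(wd_deg, dtype=float))
    base = np.column_stack([ws, np.sin(wd), np.cos(wd), tem, hum])
    X, y = [], []
    for i in range(order, len(power)):
        X.append(base[i - order:i].ravel())   # the `order` most recent rows
        y.append(power[i])
    return np.asarray(X), np.asarray(y)

def scale_to_range(X, lo=1.0, hi=2.0):
    """Normalize each column to the range (1, 2), as in Section 5.4."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)
    return lo + (hi - lo) * (X - xmin) / span
```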

Table 4: RMSE, MAE, and RMAE for SVR and MLP.

Item      Data   RMSE      MAE      RMAE     Hn    Et (s)
RBF(3)    O      1.6044    1.0931   0.1094   NA    132.122945
RBF(3)    M      1.0887    0.6336   0.0638   NA    110.138815
RBF(3)    W      3.3950    2.3570   0.2375   NA    113.548014
PF(3)     O      1.6044    1.0931   0.1094   NA    137.698291
PF(3)     M      1.0887    0.6336   0.0638   NA    112.438881
PF(3)     W      3.3950    2.3570   0.2375   NA    113.008574
RBF(6)    O      1.5997    1.0888   0.1090   NA    374.338732
RBF(6)    M      1.0743    0.5767   0.0581   NA    309.984871
RBF(6)    W      3.4590    2.4161   0.2435   NA    288.569125
PF(6)     O      1.5997    1.0888   0.1090   NA    346.989140
PF(6)     M      1.0743    0.5767   0.0581   NA    280.287914
PF(6)     W      3.4590    2.4161   0.2435   NA    284.077991
RBF(10)   O      1.6001    1.0892   0.1090   NA    624.630533
RBF(10)   M      1.0782    0.5872   0.0591   NA    501.316555
RBF(10)   W      3.4590    2.4161   0.2435   NA    368.402477
PF(10)    O      1.6001    1.0892   0.1090   NA    623.420666
PF(10)    M      1.0782    0.5872   0.0591   NA    512.878352
PF(10)    W      3.4590    2.4161   0.2435   NA    501.985355
MLP neural network  11.6547   9.6265   0.9631   20    97.536770

O: original data; M: MAD filter; W: wavelet filter; RBF(3): number of cross-validation folds for the testing set is 3 with the RBF kernel, similarly for RBF(6) and RBF(10); PF(3): likewise for the PF kernel; Hn: number of hidden-layer neurons; Et: elapsed time in seconds; RMSE, MAE, RMAE: error measures of Section 4.7 evaluated on the testing sample; NA: not available.

5.4. Multiscale SVR Performance. The following simulation is based on libsvm [37, 38] (2013, version 3.17; platform: MATLAB 2012b, Microsoft Visual C++ 2010), and the kernel functions are the radial basis function (RBF) and the polynomial function (PF). Without loss of generality, 80% and 20% of the employed data (2297 samples) are selected as the training and testing samples, respectively. Because the changes in wind direction are not obvious, both the sine and the cosine of the AWD are selected as input variables. Therefore, the number of input variables is essentially about 10, because the data order is set to 2. In order to make the different variables comparable, all variable values are normalized to the range (1, 2). Moreover, k-fold cross-validation (K-CV) [39-42] and "Grid search" are utilized for the parameter selection. "Grid search" tries every candidate value of the parameter pair (c, g), and the most accurate (c, g) is selected based on the K-CV result, where c and g are the penalty factor and the kernel function parameter, respectively. The state trajectories of the original data and the forecast data are given in Figures 7, 8, 9, and 10. Moreover, detailed simulation results comparing the multiscale SVR and the MLP neural network are given in Tables 3 and 4.
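The experiments use libsvm's MATLAB interface; the hedged Python sketch below reproduces the same Grid-search-plus-K-CV procedure with scikit-learn. The power-of-two candidate grids, the epsilon value, and the function name are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

def grid_search_svr(X, y, kernel="rbf", cv=3):
    """Grid search over the penalty factor c (C) and kernel parameter g (gamma)
    with k-fold cross-validation, in the spirit of Section 5.4."""
    scaler = MinMaxScaler(feature_range=(1, 2))      # normalize inputs to (1, 2)
    Xs = scaler.fit_transform(X)
    param_grid = {
        "C": [2.0**k for k in range(-4, 11)],        # candidate penalty factors
        "gamma": [2.0**k for k in range(-10, 5)],    # candidate kernel parameters
    }
    search = GridSearchCV(SVR(kernel=kernel, epsilon=0.01), param_grid,
                          cv=cv, scoring="neg_mean_squared_error")
    search.fit(Xs, y)
    return search.best_estimator_, search.best_params_

# Usage sketch: X holds the lagged 30 m inputs (order 2), y the wind power.
# model, params = grid_search_svr(X_train, y_train, kernel="rbf", cv=3)
```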




Figure 7: 2D and 3D parameter selection results (Grid-search method and contours method, RBF): contour and surface plots of the cross-validation mean squared error over log2 c and log2 g.


Figure 8: Final forecasting results via RBF. State trajectories of the original, forecast, and error values over the test period: real data; SVR forecasts based on the original, MAD-filtered, and wavelet-filtered data; and the corresponding forecasting errors 1-3.

5.5. Error Analysis and Evaluation. In this subsection, the errors of the forecasting results are given to evaluate the performance of the multiscale SVR and the MLP network of Section 5.4. The number of cross-validation folds for the testing set is set to 3, 6, and 10 for the RBF and PF kernels, and the original data, the MAD-filtered data, and the wavelet-filtered data are each used in the SVR with the RBF and PF kernels. The MLP neural network is applied for comparison against the SVR in terms of the RMSE, MAE, and RMAE defined in Section 4.7. In addition, the best cross-validation mean squared error, the regression mean squared error, the regression squared correlation coefficient, and the best kernel function parameters are provided. The detailed information is given in Tables 3 and 4.

Based on Tables 3 and 4, we can see that the performance of the SVR is better than that of the traditional MLP neural network. Because the best SVR performance was obtained with the MAD filter, the data for the MLP neural network are also filtered by MAD. In fact, the SVR turns out to be a robust time series forecasting method even with different settings, such as a different number of cross-validation folds, a different kernel function, and different associated parameters. The simulation results indicate that the SVR is an effective approach for STWPF.
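The MLP column of Table 4 corresponds to a network with 20 hidden neurons (Hn = 20). A hedged scikit-learn sketch of such a baseline is given below; the remaining hyperparameters and the stand-in data are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Stand-in data; in practice these come from the feature construction of Section 5.3.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(400, 10)), rng.normal(size=400)
X_test, y_test = rng.normal(size=(100, 10)), rng.normal(size=100)

# One hidden layer of 20 neurons, as in the MLP row of Table 4.
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
pred = mlp.predict(X_test)

rmse = np.sqrt(mean_squared_error(y_test, pred))
mae = mean_absolute_error(y_test, pred)
rmae = np.sum(np.abs(y_test - pred)) / np.sum(np.abs(y_test))
```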

6. Conclusions

In this paper, the multiscale SVR technique is applied to short-term wind power forecasting. Firstly, we give a brief illustration of the main processing steps and the corresponding theory. Secondly, the data interpolation technique is used to fill the missing values of the employed data. Thirdly, the median absolute deviation filter and the wavelet decomposition and denoising technique are applied to eliminate the irrelevant, noisy, and outlying values. Fourthly, the Partial least squares technique, the autocorrelation function, and the cross-correlation function are employed for the input variable selection and the order estimation, respectively. Fifthly, the multiscale SVR in combination with the Grid-search technique is utilized to forecast the short-term wind power. Finally, the performance evaluation and error analysis are applied to assess the multiscale SVR. Compared with the multilayer perceptron (MLP) neural network, the results demonstrate that the SVR technique is a fast and robust time series forecasting approach.




Figure 9: 2D and 3D parameter selection results (Grid-search method and contours method, PF): contour and surface plots of the cross-validation mean squared error over log2 c and log2 g.

We believe that the proposed strategy has reference value for short-term wind power forecasting and for other energy consumption forecasting on the demand side.

Figure 10: Final forecasting results via PF. State trajectories of the original, forecast, and error values over the test period: real data; SVR forecasts based on the original, MAD-filtered, and wavelet-filtered data; and the corresponding forecasting errors 1-3.

Conflict of Interests


The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments


The authors would like to thank Professor Hongjie Jia and Editor Dean, whose valuable suggestions and comments greatly improved the quality of this paper; the authors also gratefully acknowledge Professor Xueliang Huang from the School of Electrical Engineering, Southeast University. This work is partly supported by the National High Technology Research and Development Program (863 Program) (2011AA05A107).

References variable selection and order estimation. Fifthly, the multiscale SVR in combination with Grid-search technique is utilized to forecast the short-term wind power. Finally, the performance evaluation and error analysis are applied to evaluate the performance of multiscale SVR. Comparing to the multilayer perceptron (MLP) neural network, the performance demonstrates that SVR technique is a fast and robust time series forecasting approach. We believe that the proposed strategy

[1] M. A. Mohandes, T. O. Halawani, S. Rehman, and A. A. Hussain, β€œSupport vector machines for wind speed prediction,” Renewable Energy, vol. 29, no. 6, pp. 939–947, 2004. [2] O. Kramer and F. Gieseke, β€œShort-term wind energy forecasting using support vector regression,” Advances in Intelligent and Soft Computing, vol. 87, pp. 271–280, 2011. [3] M. Milligan, K. Porter, E. DeMeo et al., β€œWind power myths debunked,” IEEE Power and Energy Magazine, vol. 7, no. 6, pp. 89–99, 2009.


[4] S. Alessandrini, S. Sperati, and P. Pinson, β€œA comparison between the ECMWF and COSMO Ensemble Prediction Systems applied to short-term wind power forecasting on real data,” Applied Energy, vol. 107, pp. 271–280, 2013.

[20] G. Noriega and S. Pasupathy, β€œApplication of Kalman filtering to real-time preprocessing of geophysical data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 30, no. 5, pp. 897–910, 1992.

[5] F. Famili, W. M. Shen, R. Weber, and E. Simoudis, β€œData pre-processing and intelligent data analysis,” Intelligent Data Analysis, vol. 1, no. 1–4, pp. 3–23, 1997.

[21] D. P. Allen, β€œA frequency domain Hampel filter for blind rejection of sinusoidal interference from electromyograms,” Journal of Neuroscience Methods, vol. 177, no. 2, pp. 303–310, 2009.

[6] A. Costa, A. Crespo, J. Navarro, G. Lizcano, H. Madsen, and E. Feitosa, β€œA review on the young history of the wind power shortterm prediction,” Renewable and Sustainable Energy Reviews, vol. 12, no. 6, pp. 1725–1744, 2008.

[22] L. Davies and U. Gather, β€œThe identification of multiple outliers,” Journal of the American Statistical Association, vol. 88, no. 423, pp. 782–792, 1993.

[7] S. Osowski and K. Garanty, β€œForecasting of the daily meteorological pollution using wavelets and support vector machine,” Engineering Applications of Artificial Intelligence, vol. 20, no. 6, pp. 745–755, 2007. [8] B.-J. Chen, M.-W. Chang, and C.-J. Lin, β€œLoad forecasting using support vector machines: a study on EUNITE competition 2001,” IEEE Transactions on Power Systems, vol. 19, no. 4, pp. 1821–1830, 2004. [9] W.-Y. Chang, β€œAn RBF neural network combined with OLS algorithm and genetic algorithm for short-term wind power forecasting,” Journal of Applied Mathematics, vol. 2013, Article ID 971389, 9 pages, 2013. [10] Y. B. Che, W. Zhang, L. J. Ge, and J. J. Zhang, β€œA two-stage wind grid inverter with boost converter,” Journal of Applied Mathematics, vol. 2014, Article ID 816564, 5 pages, 2014. [11] Y. D. Song, Q. Cao, X. Q. Du, and H. R. Karimi, β€œControl strategy based on wavelet transform and neural network for hybrid power system,” Journal of Applied Mathematics, vol. 2013, Article ID 375840, 8 pages, 2013. [12] M. Lei, L. Shiyan, J. Chuanwen, L. Hongling, and Z. Yan, β€œA review on the forecasting of wind speed and generated power,” Renewable and Sustainable Energy Reviews, vol. 13, no. 4, pp. 915–920, 2009. [13] T.-F. Li, W. Jia, W. Zhou, J.-K. Ge, Y.-C. Liu, and L.-Z. Yao, β€œIncomplete phase space reconstruction method based on subspace adaptive evolution approximation,” Journal of Applied Mathematics, vol. 2013, Article ID 983051, 9 pages, 2013.

[23] R. K. Pearson, β€œOutliers in process modeling and identification,” IEEE Transactions on Control Systems Technology, vol. 10, no. 1, pp. 55–63, 2002. [24] S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way, Academic Press, 3rd edition, 2009. [25] R. Shalom, Mixed Representation and Their Applications, Lecture Notes, Technion-Israel Institute of Technology, 2011. [26] M. Steinbuch and M. J. G. Molengraft, β€œWavelet theory and applications: a literature study,” Tech. Rep., Eindhoven University of Technology, 2005. [27] D. L. Donoho, β€œDe-noising by soft-thresholding,” IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613–627, 1995. [28] S. G. Mallat, β€œA theory for multiresolution signal decomposition: the wavelet representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674–693, 1989. [29] P. P. Vaidyanathan, β€œQuadrature mirror filter banks, M-band extensions and perfect-reconstruction techniques,” IEEE ASSP Magazine, vol. 4, no. 3, pp. 4–20, 1987. [30] S. G. Mallat, β€œA theory for multiresolution signal decomposition: the wavelet representation,” Tech. Rep. 668, CIS, 1987. [31] J. Buckheit, S. Chen, D. Donoho, I. Johnstone, and J. D. Scargle, Wavelab Reference Manual, Version 0. 700, 1995. [32] G. E. Dallal, β€œPartial Correlation Coefficients,” http://www .jerrydallal.com/LHSP/partial.htm. [33] H. Drucker, C. J. Burges, L. Kaufman, A. Smola, and V. Vapnik, β€œSupport vector regression machines,” Advances in Neural Information Processing Systems, vol. 9, pp. 155–161, 1997.

[14] N. Do Hoai, K. Udo, and A. Mano, β€œDownscaling global weather forecast outputs using ANN for flood prediction,” Journal of Applied Mathematics, vol. 2011, Article ID 246286, 14 pages, 2011.

[34] J. A. K. Suykens and J. Vandewalle, β€œLeast squares support vector machine classifiers,” Neural Processing Letters, vol. 9, no. 3, pp. 293–300, 1999.

[15] I. Sánchez, "Short-term prediction of wind energy production," International Journal of Forecasting, vol. 22, no. 1, pp. 43-56, 2006.

[35] C. J. C. Burges, β€œA tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.

[16] A. M. Foley, P. G. Leahy, A. Marvuglia, and E. J. McKeogh, β€œCurrent methods and advances in forecasting of wind power generation,” Renewable Energy, vol. 37, no. 1, pp. 1–8, 2012.

[36] N. Cristianini and J. Shawe-Taylor, An Introduction To Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.

[17] A. Botterud, J. Wang, V. Miranda, and R. J. Bessa, β€œWind power forecasting in U.S. electricity markets,” Electricity Journal, vol. 23, no. 3, pp. 71–82, 2010.

[37] F. Shi, X. Wang, L. Yu, and Y. Li, 30 Cases MATLAB Neural Network Analysis, Beijing University of Aeronautics and Astronautics Press, Beijing, China, 2010, (Chinese).

[18] C. Stathopoulos, A. Kaperoni, G. Galanis, and G. Kallos, β€œWind power prediction based on numerical and statistical models,” Journal of Wind Engineering and Industrial Aerodynamics, vol. 112, pp. 25–38, 2013.

[38] C. C. Chang and C. J. Lin, β€œLIBSVM: a library for support vector machines,” version 3.17, 2013, http://www.csie.ntu.edu.tw/∼ cjlin/libsvm/.

[19] F. Famili and J. Ouyang, β€œData mining: understanding data and disease modeling,” in Proceedings of the 21st IASTED International Multi-Conference on Applied Informatics, 2003.

[40] Y. Bengio and Y. Grandvalet, β€œNo unbiased estimator of the variance of K-fold cross-validation,” Journal of Machine Learning Research, vol. 5, pp. 1089–1105, 2004.

[39] http://statweb.stanford.edu/∼tibs/sta306b/cvwrong.pdf.

Journal of Applied Mathematics [41] S. Arlot and A. Celisse, β€œA survey of cross-validation procedures for model selection,” Statistics Surveys, vol. 4, pp. 40–79, 2010. [42] J. D. Rodriguez, A. Perez, and J. A. Lozano, β€œSensitivity analysis of k-fold cross validation in prediction error estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 3, pp. 569–575, 2010.

