Introduction of Seasonality Concept in PSF Algorithm to Improve Univariate Time Series Predictions

Neeraj Bokde, Aditya Gupta, Kishore Kulat
Department of Electronics and Communication, Visvesvaraya National Institute of Technology, Nagpur, India

Corresponding author: Neeraj Bokde, Department of Electronics and Communication, Visvesvaraya National Institute of Technology, Nagpur, India.
Email address: [email protected]
Abstract:
This paper proposes a novel modification of the Pattern Sequence based Forecasting (PSF) algorithm, named Seasonal PSF. The proposed modification introduces the concept of seasonality, so that a long univariate time series containing a combination of many sequence patterns and outliers can be converted into a dataset more relevant to the data under test. In this paper, seasonal PSF is examined on two years of electricity load data provided by the EUNITE network. The comparative analysis consists of two methodologies: multiple-step prediction and one-step-ahead forecasting. The analysis concludes that the seasonality-based decomposition of the dataset leads to much better performance of seasonal PSF over the original PSF and over benchmark methods for univariate prediction such as ARIMA and SARIMA. It is found that the maximum accuracy is achieved with the minimum computational delay by Seasonal PSF. These comparisons are performed with RMSE, MAE and MAPE as error performance metrics.
Introduction:
Time series forecasting is an important task in many areas such as economics, environmental science, engineering and marketing. Accurate forecasts lead to accurate policy-making decisions, which indirectly lead to more growth in the respective areas. The performance of different prediction methods depends on the characteristics of the data. In the data-driven approach, the methods used for prediction can be divided into two categories: parametric and nonparametric statistical techniques. Methods such as historic means, regression models (both linear and nonlinear), moving average, autoregressive, ARIMA and seasonal ARIMA belong to the parametric approach to prediction, whereas Neural Networks (NN) and special cases of regression such as nonparametric regression methods belong to the nonparametric approach. In time series forecasting, the data are analyzed with respect to two characteristics, trend and seasonality. Methods such as moving average, ARIMA and SARIMA process the trend and seasonality characteristics with differencing and moving-average operations. The ARIMA method with consideration of a seasonal component was able to perform much better than its former version (simple ARIMA) [3]. Along with this, the same model outperformed other methods such as linear regression and Support Vector Machines (SVM) [4, 5]. These articles are good evidence of the importance of considering the seasonality concept while analyzing time series data. The most widely used data with seasonality include daily traffic volume, electricity demand and supply, weather parameter variations and many others that show repetition of patterns at specific intervals of time. This repetition allows decomposing the original data in accordance with the seasonal period and improves the accuracy of a prediction method to a large extent. On the basis of the same philosophy, this article discusses a newly proposed seasonal PSF algorithm, which works on seasonal time series data with the concept of seasonality to improve the performance of the original PSF algorithm for time series prediction.
PSF Algorithm:
PSF stands for Pattern Sequence based Forecasting, an algorithm proposed by Martinez-Alvarez et al. [1]; a modified version is explained in [2]. The PSF algorithm is based on the patterns present in the data sequence of the time series. It consists of different processes, such as data normalization, clustering, forecasting based on the clustered data and de-normalization. The novelty of the PSF algorithm is the use of labels for the different patterns present in the time series data, instead of the original time series values. Normalization is applied to the raw input data to remove the redundancies present in the data. The process of normalization is expressed by the following equation, where X_j is the input time series and N is its size in units of time:
X_j = \frac{X_j}{\frac{1}{N}\sum_{i=1}^{N} X_i}    (1)
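As a minimal sketch in R (the language of the PSF software discussed later), Eq. (1) and its inverse can be written as follows; the function names are illustrative and not part of any package.

```r
# Normalization of Eq. (1): divide every observation by the series mean.
normalize_series <- function(x) {
  x / mean(x)
}

# De-normalization reverses the scaling using the stored series mean.
denormalize_series <- function(x_norm, series_mean) {
  x_norm * series_mean
}
```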
The labeling of the different patterns in the raw input data is done with a clustering method, in this case k-means clustering. The k-means technique is easy to use and requires very little computation time, but it requires the number of centers into which the data have to be clustered. Article [1] suggested the Silhouette index to decide a suitable number of cluster centers, whereas in the modified version [2] the authors suggested three indices: the Silhouette index [6], the Dunn index [7] and the Davies-Bouldin index [8]. The optimum number of clusters is then decided by a 'best two among three' policy; in other words, the cluster size is finalized as the number returned by more than one index. However, many articles [9-11] suggested and used a single index, which simplifies the computation of the clustering process. The result of this step is the conversion of the time series data into a series of labels, which is fed to the next step, prediction.
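As an illustration of this step, the sketch below labels a series with k-means and selects the number of clusters with the Silhouette index alone, following the single-index simplification of [9-11]. The row-wise embedding of the series into period-length segments is an assumption of the sketch.

```r
library(cluster)  # provides silhouette()

# Arrange the series into rows of length 'period' and label each row by
# its k-means cluster; the label sequence replaces the raw data in PSF.
label_series <- function(x, period, k) {
  n_seg <- length(x) %/% period
  mat <- matrix(x[1:(n_seg * period)], ncol = period, byrow = TRUE)
  kmeans(mat, centers = k)$cluster
}

# Choose k as the number of clusters maximizing the mean Silhouette width.
best_k <- function(x, period, k_max = 10) {
  n_seg <- length(x) %/% period
  mat <- matrix(x[1:(n_seg * period)], ncol = period, byrow = TRUE)
  scores <- sapply(2:k_max, function(k) {
    cl <- kmeans(mat, centers = k)$cluster
    mean(silhouette(cl, dist(mat))[, 3])  # mean Silhouette width
  })
  which.max(scores) + 1  # offset: candidates start at k = 2
}
```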
The prediction step in the PSF algorithm consists of several processes: window size selection, pattern sequence matching and estimation. In this step, the sequence of labels of length W at the end of the time series is selected, and this sequence is searched for in the whole series of labels obtained in the clustering process. This sequence of labels of size W is referred to as the 'window'. If the sequence in the window is not found in the series of labels, the sequence size is reduced by one unit. This process continues until the sequence repeats itself in the label series at least once, which ensures that at least some sequence will repeat in the complete label series when the value of W reaches 1. While searching for the window pattern within the complete sequence of labels, the label immediately following each matched sequence is noted down. The mean value of these obtained labels is taken as the label of the next predicted value.
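A minimal sketch of this matching loop follows, assuming the label series and an initial window size are given; predict_next_label() is an illustrative name, not a package function.

```r
# Search the label series for every occurrence of the last-W-labels window
# and average the labels that follow the matches; shrink W when no match
# is found, as described above.
predict_next_label <- function(labels, W) {
  n <- length(labels)
  while (W >= 1) {
    window <- labels[(n - W + 1):n]
    followers <- integer(0)
    for (i in 1:(n - W)) {
      if (all(labels[i:(i + W - 1)] == window)) {
        followers <- c(followers, labels[i + W])  # label after the match
      }
    }
    if (length(followers) > 0) {
      return(round(mean(followers)))  # mean label = next predicted label
    }
    W <- W - 1  # no match: reduce the window by one unit and retry
  }
  NA  # reached only if the last label never occurred earlier in the series
}
```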
Finally, de-normalization is applied to replace the labels with their corresponding values in the real dataset. If more than one future value is required, the last predicted value is appended to the original dataset and the whole PSF procedure is applied to the new dataset until the desired number of predictions has been performed.
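This multi-step scheme reduces to a simple append-and-repeat loop; psf_one_step() below is an assumed placeholder for the full normalize/cluster/match/de-normalize cycle described above, not an existing function.

```r
# Predict h future values by repeatedly appending the latest prediction
# to the data and re-running the whole PSF procedure.
psf_multi_step <- function(x, h) {
  for (step in 1:h) {
    x <- c(x, psf_one_step(x))  # psf_one_step(): hypothetical full PSF cycle
  }
  tail(x, h)  # the h appended predictions
}
```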
The main challenge in the prediction process is the selection of the optimum window size (W), which minimizes the prediction error; this selection depends on the dataset under study. Mathematically, W is selected by minimizing the following expression:
\sum_{t \in T_s} \left| \hat{X}(t) - X(t) \right|    (2)
where X̂(t) is the value predicted at time index t by the PSF method and X(t) is the original observed value at the same time index t. In practice, the window size W is calculated by means of cross-validation. The general scheme of the PSF algorithm proposed in article [2] is shown in Figure 1.

Figure 1: PSF algorithm proposed in article [2]
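A sketch of the cross-validated selection of W described above, reusing predict_next_label() from the earlier sketch; for brevity the error of Eq. (2) is accumulated on the label series rather than on the de-normalized values, and the hold-out length is an assumption.

```r
# Pick the window size W that minimizes the hold-out prediction error.
select_window <- function(labels, W_max, holdout = 10) {
  n <- length(labels)
  errors <- sapply(1:W_max, function(W) {
    err <- 0
    for (t in (n - holdout + 1):n) {
      pred <- predict_next_label(labels[1:(t - 1)], W)
      err <- err + abs(pred - labels[t])  # Eq. (2) accumulated over T_s
    }
    err
  })
  which.min(errors)  # optimum W
}
```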
The PSF method has been studied and compared with different benchmark univariate time series prediction methods in various articles. In articles [1, 2], electricity price time series prediction with PSF is compared with Artificial Neural Networks, ARIMA, mixed models and weighted Neural Networks; these studies conclude that the PSF algorithm performed much better than those methods. Afterward, article [9] discussed limitations of the PSF algorithm and proposed a few modifications to minimize the computational delay. In article [10], the PSF algorithm outperformed kNN and ARIMA in forecasting electric vehicle charging energy consumption; the modification proposed in that article was a repositioning of the window, such that the label sequence is searched at the centre of the window instead of at its end. Apart from this, a combination of PSF and Neural Network (NN) methods was attempted in [12], which was able to outperform PSF in an electricity demand forecasting study. In [13], the PSF algorithm was modified by replacing the k-means clustering technique with non-negative tensor factorization for energy demand forecasting using photovoltaic energy records. For the first time, articles [11] and [16] discuss a practical software package for time series prediction with the PSF algorithm. This R package, 'PSF', takes univariate time series data and predicts future values with the AUTO_PSF function. This function automates the other processes, such as the selection of the optimum cluster and window sizes, data normalization, de-normalization and many others.
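Based only on the description above, a call to the package would look roughly as follows; the exact argument list of AUTO_PSF in version 0.1.1 is an assumption here, since only the function name is given in this text.

```r
# install.packages("PSF")
library(PSF)

# nottem is a univariate monthly time series shipped with base R, used
# here purely as a stand-in input; AUTO_PSF is expected to handle cluster
# size selection, window size selection and (de-)normalization internally.
pred <- AUTO_PSF(nottem)
```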
Drawbacks of the PSF Algorithm:
Though the PSF algorithm outperforms various state-of-the-art and benchmark methods for time series prediction, it has not become popular among researchers and data scientists, for a few reasons. First, it is a series of functions that are interdependent on each other: the selection of the optimum cluster size depends on various indices, and the main challenge is the selection of the optimum window size for accurate predictions. The window size is calculated by means of cross-validation in a training phase; Martinez-Alvarez et al. [2] also mentioned that the application of standard mathematical programming methods is not possible for window size selection. With the advancement of computational tools, software packages such as PSF, an R package, have attempted to automate the various steps involved in the PSF algorithm. But it is still quite complex to search for the matched pattern sequence with the optimum window size, using the usual PSF prediction step, in a dataset consisting of multiple patterns and outliers. Apart from this, if the data size is very large, searching for the sequence in the whole dataset consumes more and more time, leading to a very large delay in the overall prediction method.
The Proposed Methodology:
The proposed method improves the conventional PSF algorithm [1, 2] by adding the concept of seasonality. Many time series datasets used in applications such as electricity demand forecasting, road traffic prediction and weather forecasting follow patterns that show similar variations over a repeating time period; this is the seasonality characteristic of time series data. For instance, the temperature of a particular region shows an almost similar pattern over many years: in the summer months the time series shows high peaks, whereas the peaks get shorter in the winter months. Suppose a dataset has followed a monthly variation in temperature for the last 4-5 years and it is desired to predict the temperature for next year's summer. In that case, instead of considering all the previous data, the pattern of temperature change in the summers of the previous years will be enough to forecast the same for the next summer.
Figure 2: Block diagram for Seasonal PSF

Figure 3: Block diagram explaining the process of 'Data Decomposition' in Seasonal PSF
The block diagram of the proposed method is shown in Figure 2. The main contribution of the Seasonal PSF algorithm is the introduction of seasonality into the original PSF algorithm, performed by the 'Data Decomposition' block shown in Figure 3. The ultimate aim of this block is to optimize the input dataset in accordance with the data reserved for testing. The seasonal PSF algorithm starts with the input of the time series data along with the approximate season period, which indicates the time period after which the seasonal pattern repeats. First of all, this season period is examined to check whether the seasonal pattern repeats at least once in the training dataset. If this necessary condition is not satisfied, the dataset is processed with the original PSF. If the condition is satisfied, the following steps of the seasonal PSF algorithm are applied to the dataset.

The procedure starts with the segmentation of the dataset into subsets: equal segments whose size is that of the seasonal period. In other words, each segment represents the pattern of one of the seasons present in the dataset. The last of these segments is considered the segment under test, referred to as the 'Test Segment'. The novel aspect of seasonal PSF is the generation of a new optimum dataset with respect to the test segment, performed with the procedure given as pseudo-code in Figure 4.
Figure 4: General scheme of the Seasonal PSF algorithm
Each segment representing a seasonal pattern is compared with the test segment and the corresponding Root Mean Square (RMS) error is calculated. These error values are stored in a vector 'error'. The next step is the classification of the entries of the vector 'error', done with the k-means clustering technique, such that each error value is labeled according to its cluster centre. All segments whose label matches that of the test segment are separated and composed into a new dataset 'newD'. This dataset is much more relevant to the test segment, since all of its segments follow patterns similar to those of the test segment. The final step is the application of the PSF algorithm to the new dataset 'newD'. If more than one value is to be predicted, each value predicted by the PSF algorithm on 'newD' is appended to 'newD' until the desired number of predictions has been made. The number of values predicted this way is limited to the length of the season in units of time: whenever the predicted values exceed the season length, the values predicted so far are appended to the original dataset 'D' and the whole procedure of seasonal segmentation and prediction is repeated until the desired number of predictions is obtained. In this way, the proposed method removes the drawbacks of the PSF algorithm with very little effort while minimizing the computational delay.
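A compact sketch of the 'Data Decomposition' block follows, under two assumptions not fixed by the text above: the error vector is split into two clusters ('similar' versus 'dissimilar' to the test segment), and segments are taken contiguously from the start of the series.

```r
# Split the series into season-length segments, score each against the
# test segment (the last one) by RMS error, cluster the scores with
# k-means and keep only segments sharing the test segment's label.
decompose_data <- function(x, season) {
  n_seg <- length(x) %/% season
  segs  <- matrix(x[1:(n_seg * season)], ncol = season, byrow = TRUE)
  test  <- segs[n_seg, ]                                  # 'Test Segment'
  error <- apply(segs, 1, function(s) sqrt(mean((s - test)^2)))
  cl    <- kmeans(error, centers = 2)$cluster             # assumed 2 clusters
  keep  <- cl == cl[n_seg]                                # same label as test
  as.vector(t(segs[keep, , drop = FALSE]))                # reduced dataset newD
}
```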
Experimental Setup:
Data: The performance of the proposed method is examined on the EUNITE dataset of daily electricity load from January 1997 to December 1998, provided by the EUNITE network and used in a well-known competition [14]. The aim of this competition was to predict the maximum daily electrical load values for the next month. The data were originally recorded at half-hour intervals; for the current study, one sample at a specific time per day is considered. Hence, the total number of samples used in the study is 365 x 2 = 730.
Performance Metrics: In this study, the performance of the proposed algorithm and of the methods used for comparison is evaluated with three error performance metrics: RMSE, MAE and MAPE. All these metrics are computed from the original and predicted data. Let X_i be the original data under test, X̂_i the corresponding predicted data and N the number of samples in X_i. The equations for the RMSE, MAE and MAPE metrics are shown in (3), (4) and (5), respectively.
RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{X_i - \hat{X}_i}{X_i}\right)^2}    (3)

MAE = \frac{1}{N}\sum_{i=1}^{N}\left|X_i - \hat{X}_i\right|    (4)

MAPE = \frac{1}{N}\sum_{i=1}^{N}\frac{\left|X_i - \hat{X}_i\right|}{X_i} \times 100\%    (5)
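A direct R transcription of Eqs. (3)-(5); note that Eq. (3) as printed normalizes each residual by X_i, so the function below follows that form rather than the plain RMSE.

```r
# x: observed values, xh: predicted values (same length).
rmse <- function(x, xh) sqrt(mean(((x - xh) / x)^2))   # Eq. (3)
mae  <- function(x, xh) mean(abs(x - xh))              # Eq. (4)
mape <- function(x, xh) 100 * mean(abs((x - xh) / x))  # Eq. (5), in percent
```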
Results: This study compares the benchmark predictive methods with the proposed seasonal PSF algorithm. The prediction step size is varied from 1 to higher values, and the corresponding prediction errors and the time consumed by each algorithm to perform the predictions are recorded. Suitable ARIMA and SARIMA models are fitted to the dataset, with parameters (1, 1, 3) and (1, 1, 3)(1, 0, 0), respectively. Considering 30 days as the seasonal period in both SARIMA and the seasonal PSF algorithm, the comparative analysis is performed with 'PredictTestbench', an R package [15] used as a test bench for the comparison of predictive methods. The analyses are performed on a hardware platform with an Intel® Core™ i3 processor and 8 GB of RAM. Table 1 shows the RMSE comparison among the selected predictive algorithms; Tables 2 and 3 show the same comparison with the MAE and MAPE metrics, respectively.
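For reference, the two fitted models stated above can be expressed with base R's arima() function; 'load_ts' is an assumed name for the daily load series.

```r
# ARIMA(1, 1, 3) and SARIMA(1, 1, 3)(1, 0, 0) with a 30-day seasonal period.
fit_arima  <- arima(load_ts, order = c(1, 1, 3))
fit_sarima <- arima(load_ts, order = c(1, 1, 3),
                    seasonal = list(order = c(1, 0, 0), period = 30))
```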
In all of these tables, the column labeled 'Step Size' gives the number of values predicted and compared with the test dataset. For each step size and predictive method there are two rows: the upper row gives the error performance metric and the lower row gives the computational time consumed by the method to predict the future values. For step sizes 1, 2, 5 and 10, the minimum prediction errors and minimum computational delays belong to the seasonal PSF method in all but a very few cases. These outcomes are enough to conclude that, for all error performance metrics, the seasonal PSF algorithm outperforms the ARIMA, SARIMA and PSF methods on a seasonal database, in terms of both accuracy and computational speed.
Table 1: RMSE and computational delay comparison

Step Size | Metric | ARIMA   | SARIMA  | PSF     | Seasonal PSF
1         | RMSE   | 13.5852 | 15.1999 | 23.6217 | 3.0752
1         | Delay  | 0.172   | 1.2560  | 4.9892  | 0.0980
2         | RMSE   | 8.8190  | 9.3691  | 17.3639 | 10.083
2         | Delay  | 0.1580  | 1.2420  | 9.1175  | 0.097
5         | RMSE   | 12.7221 | 10.2339 | 16.3180 | 10.1241
5         | Delay  | 0.1540  | 1.2380  | 19.3601 | 0.0659
10        | RMSE   | 12.8329 | 13.9035 | 15.8986 | 11.2151
10        | Delay  | 0.1990  | 1.2430  | 26.8674 | 0.0604
Table 2: MAE and computational delay comparison

Step Size | Metric | ARIMA   | SARIMA  | PSF     | Seasonal PSF
1         | MAE    | 13.5830 | 15.1421 | 23.6217 | 3.0752
1         | Delay  | 0.1680  | 1.2750  | 5.0862  | 0.1180
2         | MAE    | 8.3921  | 8.5670  | 15.1411 | 8.5
2         | Delay  | 0.1690  | 1.2620  | 9.3975  | 0.1160
5         | MAE    | 11.3636 | 9.3462  | 14.4433 | 9.0690
5         | Delay  | 0.1590  | 1.2400  | 19.2541 | 0.6690
10        | MAE    | 11.3187 | 12.4377 | 14.0602 | 10.1693
10        | Delay  | 0.1520  | 1.2740  | 26.9735 | 0.6070
Table 3: MAPE and computational delay comparison

Step Size | Metric | ARIMA  | SARIMA | PSF     | Seasonal PSF
1         | MAPE   | 7.3422 | 8.1849 | 12.7684 | 1.6623
1         | Delay  | 0.1640 | 1.2850 | 5.1352  | 0.1370
2         | MAPE   | 4.6923 | 4.7681 | 8.3660  | 4.9754
2         | Delay  | 0.1670 | 1.2850 | 9.4465  | 0.1150
5         | MAPE   | 6.4217 | 5.4537 | 8.0745  | 5.3337
5         | Delay  | 0.1580 | 1.2860 | 19.3741 | 0.6750
10        | MAPE   | 6.3628 | 6.9966 | 7.8734  | 5.7747
10        | Delay  | 0.1640 | 1.2700 | 27.0715 | 0.6220
In addition, a one-step-ahead forecasting analysis is performed to compare the ARIMA and PSF algorithms with seasonal PSF, using the 'step_ahead_forecast' function of the same R package, 'PredictTestbench' [15]. In this analysis, out of the 24 months of data, 18 months are considered as training data and the remaining months are predicted with the one-step-ahead forecast technique. The plots for this analysis are shown in Figures 5, 6 and 7 for the PSF, ARIMA and seasonal PSF algorithms, respectively.
Figure 5: One-step-ahead forecasting with the PSF method

Figure 6: One-step-ahead forecasting with the ARIMA method

Figure 7: One-step-ahead forecasting with the Seasonal PSF method
For this analysis, the RMSE values obtained for the PSF and ARIMA algorithms are 34.0578 and 16.7275, respectively, whereas that for seasonal PSF is 16.6829. These error values are strong evidence of the superiority of the seasonal PSF algorithm over its initial version (PSF) and over the other benchmark method for univariate prediction.
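The rolling scheme behind these plots can be sketched as below; forecast_fun stands for any one of the compared models and is an assumed placeholder, since the actual loop is handled by PredictTestbench's step_ahead_forecast function.

```r
# Train on the first n_train points, then predict each remaining point
# one step ahead from all data observed so far.
one_step_ahead <- function(x, n_train, forecast_fun) {
  n <- length(x)
  preds <- numeric(n - n_train)
  for (t in (n_train + 1):n) {
    preds[t - n_train] <- forecast_fun(x[1:(t - 1)])  # one-step forecast
  }
  preds
}
```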
Conclusion:
This paper discussed the proposed method Seasonal Pattern Sequence based Forecasting (seasonal PSF), a modification of the PSF algorithm that introduces the concept of seasonality. The algorithm targets applications predicting univariate time series data with seasonal variation. The superiority of the proposed algorithm is examined with a case study that uses two years of electricity load time series data and compares its prediction results with the ARIMA, SARIMA and PSF methods under the error performance metrics RMSE, MAE and MAPE. The comparison is made for both the multiple-step prediction and one-step-ahead forecasting methodologies. All of these comparison processes are performed with the test bench provided by the R package 'PredictTestbench'.

The seasonal PSF method decomposes the dataset into a smaller and more relevant dataset, in accordance with the seasonal period. This shorter time series leads to less time for the prediction calculations. Along with this, the effects of unwanted sequences and outliers in the time series data on the prediction process are reduced, so the maximum accuracy can be achieved with the minimum computational delay using Seasonal PSF. Future work is focused on the dynamic estimation of the seasonal period, so that more accurate segmentation of the dataset can be done.
References:
[1] Martínez-Álvarez F, Troncoso A, Riquelme JC, Ruiz JS. LBF: A labeled-based forecasting algorithm and its application to electricity price time series. In Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on, 2008 Dec 15 (pp. 453-461). IEEE.
[2] Martínez-Álvarez F, Troncoso A, Riquelme JC, Aguilar Ruiz JS. Energy time series forecasting based on pattern sequence similarity. IEEE Transactions on Knowledge and Data Engineering. 2011 Aug;23(8):1230-43.
[3] Williams BM, Hoel LA. Modeling and forecasting vehicular traffic flow as a seasonal ARIMA process: Theoretical basis and empirical results. Journal of Transportation Engineering. 2003 Nov;129(6):664-72.
[4] Lippi M, Bertini M, Frasconi P. Short-term traffic flow forecasting: An experimental comparison of time-series analysis and supervised learning. IEEE Transactions on Intelligent Transportation Systems. 2013 Jun;14(2):871-82.
[5] Chung E, Rosalion N. Short term traffic flow prediction. In Australasian Transport Research Forum (ATRF), 24th, 2001, Hobart, Tasmania, Australia, 2001.
[6] Kaufman L, Rousseeuw PJ. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons; 2009 Sep 25.
[7] Dunn JC. Well-separated clusters and optimal fuzzy partitions. Journal of Cybernetics. 1974 Jan 1;4(1):95-104.
[8] Davies DL, Bouldin DW. A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1979 Apr;(2):224-7.
[9] Jin CH, Pok G, Park HW, Ryu KH. Improved pattern sequence-based forecasting method for electricity load. IEEJ Transactions on Electrical and Electronic Engineering. 2014 Nov 1;9(6):670-4.
[10] Majidpour M, Qiu C, Chu P, Gadh R, Pota HR. Modified pattern sequence-based forecasting for electric vehicle charging stations. In Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, 2014 Nov 3 (pp. 710-715). IEEE.
[11] Neeraj Bokde (2016). PSF: Algorithm for Pattern Sequence Based Forecasting. R package version 0.1.1. URL: https://CRAN.R-project.org/package=PSF
[12] Koprinska I, Rana MM, Troncoso A, Martínez-Álvarez F. Combining pattern sequence similarity with neural networks for forecasting electricity demand time series. In Neural Networks (IJCNN), The 2013 International Joint Conference on, 2013 Aug 4 (pp. 1-8). IEEE.
[13] Fujimoto Y, Hayashi Y. Pattern sequence-based energy demand forecast using photovoltaic energy records. In Renewable Energy Research and Applications (ICRERA), 2012 International Conference on, 2012 Nov 11 (pp. 1-6). IEEE.
[14] EUNITE dataset. http://neuron.tuke.sk/competition/index.php
[15] Neeraj Bokde (2016). PredictTestbench: Test Bench for Comparison of Data Prediction Models. R package version 0.1.1. URL: https://CRAN.R-project.org/package=PredictTestbench
[16] Neeraj Bokde (2016). PSF: Introduction to R Package for Pattern Sequence Based Forecasting Algorithm. arXiv preprint arXiv:1606.05492.