Table 1: Software Ratings*** by Data Preparation Principles

Principles* rated:
- 1.4: Examining whether the series is forecastable
- 5.1, 5.3: Cleaning the data (errors, missing values, outliers)
- 5.4: Adjusting for seasonality and trading days
- 5.2: Transforming the data
- 5.7: Plotting the cleansed, transformed and deseasonalized data

| Software Category | Software Program | Methods Offered | Forecastability (1.4) | Cleaning (5.1, 5.3) | Seasonality (5.4) | Transforming (5.2) | Plotting (5.7) | Fraction of Maximum Possible Rating |
|---|---|---|---|---|---|---|---|---|
| Spreadsheet Add-Ins | CB Predictor | REG, XS | o | o | o | o | o | 0.00 |
| | Excel DAT | REG, XS | o | o | o | o | o | 0.00 |
| | Insight.xla | REG, XS | o | o | o | o | o | 0.00 |
| Forecasting Modules of Statistical Programs | Minitab | BJ, DC, REG, XS | + | + | + | ++ | + | 0.60 |
| | SAS/ETS | BJ, DC, ECM, REG, XS | + | + | ++ | ++ | ++ | 0.80 |
| | Soritec | BJ, DC, ECM, REG, XS | o | o | ++ | + | ++ | 0.50 |
| | SPSS Trends | BJ, DC, REG, XS | o | ++ | ++ | ++ | + | 0.70 |
| Neural Network Programs | NeuroShell Predictor | NN | o | + | ++ | + | ++ | 0.60 |
| | NeuroShell Professional Time Series | NN | + | + | ++ | + | ++ | 0.70 |
| | SPSS Neural Connection | NN | o | o | o | o | o | 0.00 |
| Dedicated Business-Forecasting Programs | Autobox | BJ, HIER, ID, XS | + | ++ | + | ++ | o | 0.60 |
| | Forecast Pro | BJ, DC, HIER, ID, REG, XS | + | ++ | + | ++ | + | 0.70 |
| | SmartForecasts | DC, HIER, ID, REG, XS | + | ++ | + | ++ | ++ | 0.80 |
| | Time Series Expert | BJ, DC, ECM, REG, XS | o | o | ++ | ++ | + | 0.50 |
| | tsMetrix | BJ, NN, REG, XS | o | + | + | ++ | ++ | 0.60 |
| Fraction of Maximum Possible Rating | | | 0.20 | 0.43 | 0.57 | 0.63 | 0.53 | |

Fraction of maximum possible rating by category: Spreadsheet Add-Ins 0.00; Forecasting Modules of Statistical Programs 0.65; Neural Network Programs 0.43; Dedicated Business-Forecasting Programs 0.64; all programs 0.47.

* The numerical designation corresponds to the Forecasting Standards Checklist (Armstrong 2001b).
*** Ratings legend: ++ = effectively implemented; + = partially implemented; o = ignored; - = undermined; na = not applicable. All ratings were made in 1999/2000; for updates, see the Web site: http://hops.wharton.upenn.edu/forecast
Methods legend: BJ = ARIMA (Box-Jenkins); DC = Decomposition; ECM = Econometric; HIER = Hierarchical; ID = Intermittent Data; NN = Neural Networks; REG = Regression; XS = Exponential Smoothing.
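The scoring rule behind the "Fraction of Maximum Possible Rating" figures is not spelled out in these tables. The sketch below is a minimal reconstruction, assuming the point scheme that reproduces the published fractions (++ = 2, + = 1, o = 0, - = -1, with na cells excluded from the maximum); the names POINTS and fraction_of_maximum are illustrative, not taken from the source.

```python
# Minimal sketch of the apparent scoring rule behind "Fraction of Maximum
# Possible Rating". The point values are an assumption inferred from the
# published fractions, not stated in the source:
#   ++ (effectively implemented) = 2,  + (partially implemented) = 1,
#   o  (ignored) = 0,  - (undermined) = -1,  na excluded from the maximum.
POINTS = {"++": 2, "+": 1, "o": 0, "-": -1}

def fraction_of_maximum(ratings):
    """Total points earned divided by the maximum attainable points."""
    scored = [r for r in ratings if r != "na"]      # drop not-applicable cells
    if not scored:
        return None
    total = sum(POINTS[r] for r in scored)
    return round(total / (2 * len(scored)), 2)      # each cell could score 2

# Example: Minitab's five data-preparation ratings from Table 1
print(fraction_of_maximum(["+", "+", "+", "++", "+"]))   # prints 0.6
```

Applied down a column (one principle across all fifteen programs), the same rule reproduces the column fractions, e.g. 0.20 for principle 1.4 in Table 1.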
Table 2: Software Ratings*** by Method Selection Principles

Principles* rated:
- 6.7: Matching the forecasting method to the data
- 6.8: Selecting methods based on comparison of track records
- 6.6: Discouraging needless complexity
- 9.3: Considering out-of-sample performance in method selection
- 12.3: Combining forecasts (formal procedure)
- swf**: Including dynamic terms in the causal model

| Software Category | Software Program | Methods Offered | Matching (6.7) | Track records (6.8) | Simplicity (6.6) | Out-of-sample (9.3) | Combining (12.3) | Dynamic terms (swf) | Fraction of Maximum Possible Rating |
|---|---|---|---|---|---|---|---|---|---|
| Spreadsheet Add-Ins | CB Predictor | REG, XS | + | ++ | o | o | o | o | 0.25 |
| | Excel DAT | REG, XS | o | o | o | o | o | o | 0.00 |
| | Insight.xla | REG, XS | + | o | o | o | o | o | 0.08 |
| Forecasting Modules of Statistical Programs | Minitab | BJ, DC, REG, XS | + | o | o | o | o | + | 0.17 |
| | SAS/ETS | BJ, DC, ECM, REG, XS | ++ | ++ | + | ++ | ++ | ++ | 0.92 |
| | Soritec | BJ, DC, ECM, REG, XS | o | o | o | o | o | + | 0.08 |
| | SPSS Trends | BJ, DC, REG, XS | o | o | o | o | o | o | 0.00 |
| Neural Network Programs | NeuroShell Predictor | NN | + | ++ | ++ | ++ | o | o | 0.58 |
| | NeuroShell Professional Time Series | NN | + | ++ | + | ++ | ++ | o | 0.67 |
| | SPSS Neural Connection | NN | o | o | ++ | ++ | ++ | o | 0.50 |
| Dedicated Business-Forecasting Programs | Autobox | BJ, HIER, ID, XS | ++ | ++ | ++ | ++ | + | ++ | 0.92 |
| | Forecast Pro | BJ, DC, HIER, ID, REG, XS | + | ++ | ++ | ++ | o | ++ | 0.75 |
| | SmartForecasts | DC, HIER, ID, REG, XS | + | ++ | + | ++ | o | + | 0.58 |
| | Time Series Expert | BJ, DC, ECM, REG, XS | ++ | ++ | + | ++ | o | + | 0.67 |
| | tsMetrix | BJ, NN, REG, XS | + | o | o | o | o | ++ | 0.25 |
| Fraction of Maximum Possible Rating | | | 0.47 | 0.53 | 0.40 | 0.53 | 0.23 | 0.40 | |

Fraction of maximum possible rating by category: Spreadsheet Add-Ins 0.11; Forecasting Modules of Statistical Programs 0.29; Neural Network Programs 0.58; Dedicated Business-Forecasting Programs 0.63; all programs 0.43.

* The numerical designation corresponds to the Forecasting Standards Checklist (Armstrong 2001b).
** swf represents software forecasting standards not included in the Forecasting Standards Checklist (Armstrong 2001b).
*** Ratings legend: ++ = effectively implemented; + = partially implemented; o = ignored; - = undermined; na = not applicable. All ratings were made in 1999/2000; for updates, see the Web site: http://hops.wharton.upenn.edu/forecast
Methods legend: BJ = ARIMA (Box-Jenkins); DC = Decomposition; ECM = Econometric; HIER = Hierarchical; ID = Intermittent Data; NN = Neural Networks; REG = Regression; XS = Exponential Smoothing.
Table 3: Software Ratings*** by Method Implementation Principles

Principles* rated:
- swf**: Selecting the fit vs. test period
- swf**: Choosing the best-fit criterion
- 7.5: Adjusting for expected events
- 9.4: Weighting the most relevant data more heavily
- 11.3: Allowing the user to integrate judgment
- swf**: Overriding statistical forecasts
- 10.8: Integrating forecasts of explanatory variables into the causal model

| Software Category | Software Program | Methods Offered | Fit vs. test period (swf) | Best-fit criterion (swf) | Expected events (7.5) | Weighting relevant data (9.4) | Integrating judgment (11.3) | Overriding forecasts (swf) | Explanatory-variable forecasts (10.8) | Fraction of Maximum Possible Rating |
|---|---|---|---|---|---|---|---|---|---|---|
| Spreadsheet Add-Ins | CB Predictor | REG, XS | + | o | o | + | + | o | ++ | 0.36 |
| | Excel DAT | REG, XS | o | o | o | o | o | o | o | 0.00 |
| | Insight.xla | REG, XS | ++ | o | o | + | o | o | o | 0.21 |
| Forecasting Modules of Statistical Programs | Minitab | BJ, DC, REG, XS | ++ | o | + | + | + | o | + | 0.43 |
| | SAS/ETS | BJ, DC, ECM, REG, XS | ++ | ++ | ++ | ++ | ++ | + | ++ | 0.93 |
| | Soritec | BJ, DC, ECM, REG, XS | + | o | o | ++ | o | o | o | 0.21 |
| | SPSS Trends | BJ, DC, REG, XS | + | o | o | ++ | o | o | o | 0.21 |
| Neural Network Programs | NeuroShell Predictor | NN | ++ | + | o | + | o | o | + | 0.36 |
| | NeuroShell Professional Time Series | NN | + | + | o | + | ++ | ++ | + | 0.57 |
| | SPSS Neural Connection | NN | ++ | ++ | o | + | + | o | ++ | 0.57 |
| Dedicated Business-Forecasting Programs | Autobox | BJ, HIER, ID, XS | ++ | o | ++ | ++ | o | o | ++ | 0.57 |
| | Forecast Pro | BJ, DC, HIER, ID, REG, XS | ++ | o | ++ | ++ | + | ++ | o | 0.64 |
| | SmartForecasts | DC, HIER, ID, REG, XS | + | o | ++ | + | ++ | ++ | ++ | 0.71 |
| | Time Series Expert | BJ, DC, ECM, REG, XS | + | o | ++ | ++ | o | o | o | 0.36 |
| | tsMetrix | BJ, NN, REG, XS | ++ | o | o | o | o | o | o | 0.14 |
| Fraction of Maximum Possible Rating | | | 0.73 | 0.20 | 0.37 | 0.63 | 0.33 | 0.23 | 0.43 | |

Fraction of maximum possible rating by category: Spreadsheet Add-Ins 0.19; Forecasting Modules of Statistical Programs 0.45; Neural Network Programs 0.50; Dedicated Business-Forecasting Programs 0.49; all programs 0.42.

* The numerical designation corresponds to the Forecasting Standards Checklist (Armstrong 2001b).
** swf represents software forecasting standards not included in the Forecasting Standards Checklist (Armstrong 2001b).
*** Ratings legend: ++ = effectively implemented; + = partially implemented; o = ignored; - = undermined; na = not applicable. All ratings were made in 1999/2000; for updates, see the Web site: http://hops.wharton.upenn.edu/forecast
Methods legend: BJ = ARIMA (Box-Jenkins); DC = Decomposition; ECM = Econometric; HIER = Hierarchical; ID = Intermittent Data; NN = Neural Networks; REG = Regression; XS = Exponential Smoothing.
Table 4: Software Ratings*** by Method Evaluation Principles

Principles* rated:
- 13.2: Testing the validity of model assumptions
- 13.26: Distinguishing within-sample from out-of-sample forecasting accuracy
- 13.25: Providing multiple measures of forecast accuracy
- 13.20, 13.24: Providing error measures that adjust for scale and outliers
- 13.4: Measuring errors by forecast horizon

| Software Category | Software Program | Methods Offered | Model assumptions (13.2) | Within- vs. out-of-sample (13.26) | Multiple accuracy measures (13.25) | Scale- and outlier-adjusted measures (13.20, 13.24) | Errors by horizon (13.4) | Fraction of Maximum Possible Rating |
|---|---|---|---|---|---|---|---|---|
| Spreadsheet Add-Ins | CB Predictor | REG, XS | + | o | + | + | o | 0.30 |
| | Excel DAT | REG, XS | + (REG), - (XS) | o | o | o | o | 0.00 |
| | Insight.xla | REG, XS | o | + | o | o | o | 0.10 |
| Forecasting Modules of Statistical Programs | Minitab | BJ, DC, REG, XS | + | o | ++ | + | o | 0.40 |
| | SAS/ETS | BJ, DC, ECM, REG, XS | ++ | ++ | ++ | + | o | 0.70 |
| | Soritec | BJ, DC, ECM, REG, XS | ++ | o | + | + | o | 0.40 |
| | SPSS Trends | BJ, DC, REG, XS | + | + | o | o | o | 0.20 |
| Neural Network Programs | NeuroShell Predictor | NN | - | ++ | + | o | o | 0.20 |
| | NeuroShell Professional Time Series | NN | + | ++ | ++ | + | + | 0.70 |
| | SPSS Neural Connection | NN | o | + | ++ | o | o | 0.30 |
| Dedicated Business-Forecasting Programs | Autobox | BJ, HIER, ID, XS | ++ | ++ | ++ | ++ | ++ | 1.00 |
| | Forecast Pro | BJ, DC, HIER, ID, REG, XS | ++ | ++ | ++ | ++ | ++ | 1.00 |
| | SmartForecasts | DC, HIER, ID, REG, XS | + | + | ++ | + | ++ | 0.70 |
| | Time Series Expert | BJ, DC, ECM, REG, XS | ++ | + | ++ | + | o | 0.60 |
| | tsMetrix | BJ, NN, REG, XS | ++ | ++ | ++ | ++ | ++ | 1.00 |
| Fraction of Maximum Possible Rating | | | 0.53 | 0.57 | 0.70 | 0.43 | 0.30 | |

Fraction of maximum possible rating by category: Spreadsheet Add-Ins 0.13; Forecasting Modules of Statistical Programs 0.43; Neural Network Programs 0.40; Dedicated Business-Forecasting Programs 0.86; all programs 0.51.

* The numerical designation corresponds to the Forecasting Standards Checklist (Armstrong 2001b).
*** Ratings legend: ++ = effectively implemented; + = partially implemented; o = ignored; - = undermined; na = not applicable. All ratings were made in 1999/2000; for updates, see the Web site: http://hops.wharton.upenn.edu/forecast
Methods legend: BJ = ARIMA (Box-Jenkins); DC = Decomposition; ECM = Econometric; HIER = Hierarchical; ID = Intermittent Data; NN = Neural Networks; REG = Regression; XS = Exponential Smoothing.
Table 5: Software Ratings*** by Assessment of Uncertainty Principles

Principles* rated:
- 14.1, 14.2: Providing objective prediction intervals
- 14.3: Developing prediction intervals from ex ante forecast errors
- 14.6, 14.13: Specifying sources of uncertainty
- 14.9: Combining prediction intervals from alternative methods

| Software Category | Software Program | Methods Offered | Objective prediction intervals (14.1, 14.2) | Intervals from ex ante errors (14.3) | Sources of uncertainty (14.6, 14.13) | Combining intervals (14.9) | Fraction of Maximum Possible Rating |
|---|---|---|---|---|---|---|---|
| Spreadsheet Add-Ins | CB Predictor | REG, XS | + | + | - | + | 0.25 |
| | Excel DAT | REG, XS | + | o | - | o | 0.00 |
| | Insight.xla | REG, XS | + | + | + | o | 0.38 |
| Forecasting Modules of Statistical Programs | Minitab | BJ, DC, REG, XS | ++ | + | o | o | 0.38 |
| | SAS/ETS | BJ, DC, ECM, REG, XS | ++ | + | + | o | 0.50 |
| | Soritec | BJ, DC, ECM, REG, XS | + | o | o | o | 0.13 |
| | SPSS Trends | BJ, DC, REG, XS | + | o | o | o | 0.13 |
| Neural Network Programs | NeuroShell Predictor | NN | o | o | o | o | 0.00 |
| | NeuroShell Professional Time Series | NN | o | o | o | o | 0.00 |
| | SPSS Neural Connection | NN | o | o | o | o | 0.00 |
| Dedicated Business-Forecasting Programs | Autobox | BJ, HIER, ID, XS | ++ | o | + | + | 0.50 |
| | Forecast Pro | BJ, DC, HIER, ID, REG, XS | ++ | + | o | o | 0.38 |
| | SmartForecasts | DC, HIER, ID, REG, XS | + | ++ | o | o | 0.38 |
| | Time Series Expert | BJ, DC, ECM, REG, XS | ++ | o | o | o | 0.25 |
| | tsMetrix | BJ, NN, REG, XS | + | + | - | o | 0.13 |
| Fraction of Maximum Possible Rating | | | 0.57 | 0.27 | 0.00 | 0.07 | |

Fraction of maximum possible rating by category: Spreadsheet Add-Ins 0.21; Forecasting Modules of Statistical Programs 0.28; Neural Network Programs 0.00; Dedicated Business-Forecasting Programs 0.33; all programs 0.23.

* The numerical designation corresponds to the Forecasting Standards Checklist (Armstrong 2001b).
*** Ratings legend: ++ = effectively implemented; + = partially implemented; o = ignored; - = undermined; na = not applicable. All ratings were made in 1999/2000; for updates, see the Web site: http://hops.wharton.upenn.edu/forecast
Methods legend: BJ = ARIMA (Box-Jenkins); DC = Decomposition; ECM = Econometric; HIER = Hierarchical; ID = Intermittent Data; NN = Neural Networks; REG = Regression; XS = Exponential Smoothing.
Table 6: Software Ratings*** by Forecast Presentation Principles

Principles* rated:
- 15.3: Transparency in the theoretical assumptions
- 15.2: Explaining the methodology
- 15.2: Illustrating how the forecasts were generated
- 15.4: Graphically presenting point and interval forecasts
- swf**: Providing forecasts in exportable formats
- swf**: Providing a forecast report

| Software Category | Software Program | Methods Offered | Transparency (15.3) | Explaining methodology (15.2) | Illustrating generation (15.2) | Graphical presentation (15.4) | Exportable formats (swf) | Forecast report (swf) | Fraction of Maximum Possible Rating |
|---|---|---|---|---|---|---|---|---|---|
| Spreadsheet Add-Ins | CB Predictor | REG, XS | o | o | o | ++ | ++ | + | 0.42 |
| | Excel DAT | REG, XS | o | o | o | o | ++ | o | 0.17 |
| | Insight.xla | REG, XS | o | o | + | + | ++ | o | 0.33 |
| Forecasting Modules of Statistical Programs | Minitab | BJ, DC, REG, XS | o | + | ++ | ++ | ++ | o | 0.58 |
| | SAS/ETS | BJ, DC, ECM, REG, XS | ++ | ++ | + | ++ | ++ | o | 0.75 |
| | Soritec | BJ, DC, ECM, REG, XS | o | o | o | + | + | o | 0.17 |
| | SPSS Trends | BJ, DC, REG, XS | o | o | + | + | + | o | 0.25 |
| Neural Network Programs | NeuroShell Predictor | NN | o | o | + | + | ++ | o | 0.33 |
| | NeuroShell Professional Time Series | NN | + | ++ | + | + | ++ | o | 0.58 |
| | SPSS Neural Connection | NN | o | o | + | + | + | o | 0.25 |
| Dedicated Business-Forecasting Programs | Autobox | BJ, HIER, ID, XS | + | ++ | + | ++ | ++ | ++ | 0.83 |
| | Forecast Pro | BJ, DC, HIER, ID, REG, XS | + | ++ | + | ++ | ++ | + | 0.75 |
| | SmartForecasts | DC, HIER, ID, REG, XS | + | ++ | + | ++ | ++ | ++ | 0.83 |
| | Time Series Expert | BJ, DC, ECM, REG, XS | + | ++ | o | ++ | + | o | 0.50 |
| | tsMetrix | BJ, NN, REG, XS | o | o | + | ++ | ++ | + | 0.50 |
| Fraction of Maximum Possible Rating | | | 0.23 | 0.43 | 0.40 | 0.73 | 0.87 | 0.23 | |

Fraction of maximum possible rating by category: Spreadsheet Add-Ins 0.31; Forecasting Modules of Statistical Programs 0.44; Neural Network Programs 0.39; Dedicated Business-Forecasting Programs 0.68; all programs 0.48.

* The numerical designation corresponds to the Forecasting Standards Checklist (Armstrong 2001b).
** swf represents software forecasting standards not included in the Forecasting Standards Checklist (Armstrong 2001b).
*** Ratings legend: ++ = effectively implemented; + = partially implemented; o = ignored; - = undermined; na = not applicable. All ratings were made in 1999/2000; for updates, see the Web site: http://hops.wharton.upenn.edu/forecast
Methods legend: BJ = ARIMA (Box-Jenkins); DC = Decomposition; ECM = Econometric; HIER = Hierarchical; ID = Intermittent Data; NN = Neural Networks; REG = Regression; XS = Exponential Smoothing.
Table 7: Software Ratings*** by Product Hierarchy Principles

Principles rated (all swf**):
- Automatic method selection
- Multiple procedures for reconciliation
- Adjustments for special events
- Procedures for intermittent demands
- Identifying problem forecasts for manual review
- Automatic reconciliation of judgmental overrides
- Facilitating comparison of forecasting and reconciliation approaches

| Software Program | Edition | Methods Offered | Automatic method selection | Multiple reconciliation procedures | Special events | Intermittent demands | Problem-forecast review | Reconciliation of overrides | Comparison of approaches | Fraction of Maximum Possible Rating |
|---|---|---|---|---|---|---|---|---|---|---|
| Autobox | Version 5 | BJ, ID | ++ | + | ++ | ++ | o | o | + | 0.57 |
| Forecast Pro | Unlimited | XS, ID | ++ | ++ | ++ | ++ | o | ++ | + | 0.79 |
| SmartForecasts | Unlimited Batch | XS, ID | ++ | + | ++ | ++ | ++ | ++ | + | 0.86 |
| Fraction of Maximum Possible Rating | | | 1.00 | 0.67 | 1.00 | 1.00 | 0.33 | 0.67 | 0.50 | |

Fraction of maximum possible rating for product-hierarchy software overall: 0.74.

** swf represents software forecasting standards not included in the Forecasting Standards Checklist (Armstrong 2001b).
*** Ratings legend: ++ = effectively implemented; + = partially implemented; o = ignored; - = undermined; na = not applicable. All ratings were made in 1999/2000; for updates, see the Web site: http://hops.wharton.upenn.edu/forecast
Methods legend: BJ = ARIMA (Box-Jenkins); DC = Decomposition; ECM = Econometric; HIER = Hierarchical; ID = Intermittent Data; NN = Neural Networks; REG = Regression; XS = Exponential Smoothing.
Table 8: Summary Ratings by Program and Category

| Software Category | Software Program | Fraction of Maximum Possible Rating in Tables 1-6 | Category Average |
|---|---|---|---|
| Spreadsheet Add-Ins | CB Predictor | 0.26 | 0.16 |
| | Excel DAT | 0.03 | |
| | Insight.xla | 0.18 | |
| Forecasting Modules of Statistical Programs | Minitab | 0.43 | 0.42 |
| | SAS/ETS | 0.77 | |
| | Soritec for W 95/NT | 0.25 | |
| | SPSS Trends | 0.25 | |
| Neural Network Programs | NeuroShell Predictor | 0.35 | 0.38 |
| | NeuroShell Professional Time Series | 0.54 | |
| | SPSS Neural Connection | 0.27 | |
| Dedicated Business-Forecasting Programs | Autobox | 0.74 | 0.60 |
| | Forecast Pro | 0.70 | |
| | SmartForecasts | 0.67 | |
| | Time Series Expert | 0.48 | |
| | tsMetrix | 0.44 | |
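The program scores in Table 8 appear to be simple, unweighted averages of each program's per-table fractions from Tables 1-6; the short check below reproduces the published values for two programs under that assumption. The dictionary and variable names are illustrative only, not part of any published code.

```python
# Rough consistency check for Table 8: a program's summary score matches the
# unweighted mean of its six "Fraction of Maximum Possible Rating" values
# from Tables 1-6. This averaging rule is inferred from the numbers shown,
# not stated explicitly in the source.
per_table_fractions = {
    "CB Predictor": [0.00, 0.25, 0.36, 0.30, 0.25, 0.42],   # Tables 1-6
    "SAS/ETS":      [0.80, 0.92, 0.93, 0.70, 0.50, 0.75],
}

for program, fractions in per_table_fractions.items():
    print(f"{program}: {sum(fractions) / len(fractions):.2f}")
# Prints CB Predictor: 0.26 and SAS/ETS: 0.77, matching Table 8.
```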
Table 9: Summary Ratings by Forecasting Principle
(Fraction of maximum possible rating)

Data Preparation: 0.47
- Examining whether the series is forecastable: 0.20
- Cleaning the data (errors, missing values, outliers): 0.43
- Adjusting for seasonality and trading days: 0.57
- Transforming the data: 0.63
- Plotting cleansed, transformed and deseasonalized data: 0.53

Method Selection: 0.43
- Matching the forecasting method to the data: 0.47
- Selecting methods based on comparison of track records: 0.53
- Discouraging needless complexity: 0.40
- Considering out-of-sample performance in method selection: 0.53
- Combining forecasts (formal procedure): 0.23
- Including dynamic terms in the causal model: 0.40

Method Implementation: 0.42
- Selecting the fit vs. test period: 0.73
- Choosing the best-fit criterion: 0.20
- Adjusting for expected events: 0.37
- Weighting the most relevant data more heavily: 0.63
- Allowing the user to integrate judgment: 0.33
- Overriding statistical forecasts: 0.23
- Integrating forecasts of explanatory variables into the causal model: 0.43

Method Evaluation: 0.51
- Testing the validity of model assumptions: 0.53
- Distinguishing in-sample from out-of-sample forecast accuracy: 0.57
- Providing multiple measures of accuracy: 0.70
- Providing error measures that adjust for scale and outliers: 0.43
- Measuring errors by forecast horizon: 0.30

Assessment of Uncertainty: 0.23
- Providing objective prediction intervals: 0.57
- Developing empirical prediction intervals from forecast errors: 0.27
- Specifying sources of uncertainty: 0.00
- Combining prediction intervals from alternative methods: 0.07

Forecast Presentation: 0.48
- Transparency in the theoretical assumptions: 0.23
- Explaining the methodology: 0.43
- Illustrating how the forecasts were generated: 0.40
- Graphically presenting point and interval forecasts: 0.73
- Providing forecasts in exportable formats: 0.87
- Providing a forecast report: 0.23

Forecasting a Product Hierarchy: 0.74
- Automatic method selection: 1.00
- Multiple procedures for reconciliation: 0.67
- Adjustments for special events: 1.00
- Procedures for intermittent demands: 1.00
- Identifying problem forecasts for manual review: 0.33
- Automatic reconciliation of judgmental overrides: 0.67
- Facilitating comparison of forecasting and reconciliation approaches: 0.50

Overall weighted average for all principles: 0.49