Forecasting Error Measure
A Forecasting Error Measure is an approximation error measure for a forecasting task.
- AKA: Time Series Approximation Performance Measure.
- …
- Example(s):
- a Scale-Dependent Forecasting Performance Measure, such as: ...
- a Scale-Independent Forecasting Performance Measure, such as: ...
- a Relative Errors-based Forecasting Performance Measure, such as: ...
- a Relative Measure-based Forecasting Performance Measure, such as: ...
- a Scaled Error-based Forecasting Performance Measure, such as: ...
- …
- Counter-Example(s):
- See: Naïve Random Walk Algorithm.
References
2013
- http://en.wikipedia.org/wiki/Forecast_error
- In statistics, a forecast error is the difference between the actual or real and the predicted or forecast value of a time series or any other phenomenon of interest.
In simple cases, a forecast is compared with an outcome at a single time-point and a summary of forecast errors is constructed over a collection of such time-points. Here the forecast may be assessed using the difference or using a proportional error. By convention, the error is defined using the value of the outcome minus the value of the forecast.
In other cases, a forecast may consist of predicted values over a number of lead-times; in this case an assessment of forecast error may need to consider more general ways of assessing the match between the time-profiles of the forecast and the outcome. If a main application of the forecast is to predict when certain thresholds will be crossed, one possible way of assessing the forecast is to use the timing-error—the difference in time between when the outcome crosses the threshold and when the forecast does so. When there is interest in the maximum value being reached, assessment of forecasts can be done using any of:
- the difference of times of the peaks;
- the difference in the peak values in the forecast and outcome;
- the difference between the peak value of the outcome and the value forecast for that time point.
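The three peak-based assessments listed above can be sketched in a few lines of Python. The series below are hypothetical, chosen only to illustrate the computations; following the convention stated earlier, each difference is taken as outcome minus forecast.

```python
# Hypothetical observed and forecast series over the same time index.
actual = [2.0, 5.0, 9.0, 4.0, 1.0]
forecast = [1.5, 6.0, 7.0, 8.0, 2.0]

t_peak_actual = actual.index(max(actual))        # time of the observed peak
t_peak_forecast = forecast.index(max(forecast))  # time of the forecast peak

# (1) difference of the times of the peaks
timing_error = t_peak_actual - t_peak_forecast

# (2) difference of the peak values in the outcome and the forecast
peak_value_error = max(actual) - max(forecast)

# (3) outcome's peak value minus the value forecast for that same time point
value_at_peak_error = max(actual) - forecast[t_peak_actual]
```

Here the outcome peaks one step earlier than the forecast (`timing_error = -1`), its peak is 1.0 higher than the forecast's peak, and it exceeds the value forecast for the peak's own time point by 2.0.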
- Forecast error can be a calendar forecast error or a cross-sectional forecast error, when we want to summarize the forecast error over a group of units. If we observe the average forecast error for a time-series of forecasts for the same product or phenomenon, then we call this a calendar forecast error or time-series forecast error. If we observe this for multiple products for the same period, then this is a cross-sectional performance error. Reference class forecasting has been developed to reduce forecast error. Combining forecasts has also been shown to reduce forecast error.[1][2]
- ↑ J. Scott Armstrong (2001). "Combining Forecasts". Kluwer Academic Publishers. http://marketing.wharton.upenn.edu/ideas/pdf/Armstrong/CombiningForecasts.pdf.
- ↑ Andreas Graefe, J. Scott Armstrong, Randall J. Jones, Jr., and Alfred G. Cuzán (2010). "Combining forecasts for predicting U.S. Presidential Election outcomes". https://dl.dropbox.com/u/3662406/Articles/Graefe_et_al_Combining.pdf.
- http://en.wikipedia.org/wiki/Forecasting_accuracy#Forecasting_accuracy
- The forecast error is the difference between the actual value and the forecast value for the corresponding period: [math]\displaystyle{ \ E_t = Y_t - F_t }[/math], where [math]E_t[/math] is the forecast error at period t, [math]Y_t[/math] is the actual value at period t, and [math]F_t[/math] is the forecast for period t.
Measures of aggregate error:
- Mean Absolute Error (MAE): [math]\displaystyle{ \ MAE = \frac{\sum_{t=1}^{N} |E_t|}{N} }[/math]
- Mean Absolute Percentage Error (MAPE): [math]\displaystyle{ \ MAPE = \frac{\sum_{t=1}^{N} \left|\frac{E_t}{Y_t}\right|}{N} }[/math]
- Mean Absolute Deviation (MAD): [math]\displaystyle{ \ MAD = \frac{\sum_{t=1}^{N} |E_t|}{N} }[/math]
- Percent Mean Absolute Deviation (PMAD): [math]\displaystyle{ \ PMAD = \frac{\sum_{t=1}^{N} |E_t|}{\sum_{t=1}^{N} |Y_t|} }[/math]
- Mean Squared Error (MSE), or Mean Squared Prediction Error (MSPE): [math]\displaystyle{ \ MSE = \frac{\sum_{t=1}^{N} E_t^2}{N} }[/math]
- Root Mean Squared Error (RMSE): [math]\displaystyle{ \ RMSE = \sqrt{\frac{\sum_{t=1}^{N} E_t^2}{N}} }[/math]
- Forecast skill (SS): [math]\displaystyle{ \ SS = 1 - \frac{MSE_{forecast}}{MSE_{ref}} }[/math]
- Average of Errors ([math]\bar{E}[/math]): [math]\displaystyle{ \ \bar{E} = \frac{\sum_{t=1}^{N} E_t}{N} }[/math]
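The aggregate measures above follow directly from the per-period errors [math]E_t = Y_t - F_t[/math]. A minimal sketch in plain Python (the function names and the reference series for the skill score are illustrative choices, not part of any standard API):

```python
import math

def forecast_errors(actual, forecast):
    """Per-period errors E_t = Y_t - F_t."""
    return [y - f for y, f in zip(actual, forecast)]

def mae(actual, forecast):
    # Mean Absolute Error (identical to MAD as defined above).
    e = forecast_errors(actual, forecast)
    return sum(abs(x) for x in e) / len(e)

def mape(actual, forecast):
    # Mean Absolute Percentage Error; undefined when any Y_t is zero.
    e = forecast_errors(actual, forecast)
    return sum(abs(x / y) for x, y in zip(e, actual)) / len(e)

def pmad(actual, forecast):
    # Percent Mean Absolute Deviation (the "volume-weighted MAPE").
    e = forecast_errors(actual, forecast)
    return sum(abs(x) for x in e) / sum(abs(y) for y in actual)

def mse(actual, forecast):
    # Mean Squared Error.
    e = forecast_errors(actual, forecast)
    return sum(x * x for x in e) / len(e)

def rmse(actual, forecast):
    # Root Mean Squared Error.
    return math.sqrt(mse(actual, forecast))

def skill_score(actual, forecast, reference):
    # SS = 1 - MSE_forecast / MSE_ref, versus a reference forecast
    # (e.g. a naïve random-walk forecast).
    return 1.0 - mse(actual, forecast) / mse(actual, reference)
```

For example, with `actual = [10.0, 12.0, 14.0]` and `forecast = [11.0, 11.0, 15.0]`, the errors are `[-1.0, 1.0, -1.0]`, so MAE and MSE are both 1.0 and PMAD is 3/36.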
Business forecasters and practitioners sometimes use different terminology. They refer to the PMAD as the MAPE, although they compute it as a volume-weighted MAPE. For more information see Calculating demand forecast accuracy.
2006
- (Hyndman & Koehler, 2006) ⇒ Rob J. Hyndman, and A. B. Koehler. (2006). “Another Look at Measures of Forecast Accuracy.” In: International Journal of Forecasting, 22(4). [doi>10.1016/j.ijforecast.2006.03.001]