Mean Absolute Scaled Error
See: MASE, Time Series Forecasting, Time Series Forecasting Performance Measure.
References
2013
- http://en.wikipedia.org/wiki/Mean_absolute_scaled_error
- In statistics, the mean absolute scaled error (MASE) is a measure of the accuracy of forecasts. It was proposed in 2006 by Australian statistician Rob J. Hyndman, who described it as a "generally applicable measurement of forecast accuracy without the problems seen in the other measurements."
The mean absolute scaled error is given by: [math]\displaystyle{ \mathrm{MASE} = \frac{1}{n}\sum_{t=1}^n\left( \frac{\left| e_t \right|}{\frac{1}{n-1}\sum_{i=2}^n \left| Y_i-Y_{i-1}\right|} \right) = \frac{\sum_{t=1}^{n} \left| e_t \right|}{\frac{n}{n-1}\sum_{i=2}^n \left| Y_i-Y_{i-1}\right|} }[/math] where the numerator e_t is the forecast error for a given period, defined as the actual value (Y_t) minus the forecast value (F_t) for that period: e_t = Y_t − F_t, and the denominator is the mean absolute error of the one-step "naive forecast method", which uses the actual value from the prior period as the forecast: F_t = Y_{t−1}.
This scale-free error metric "can be used to compare forecast methods on a single series and also to compare forecast accuracy between series." This metric is well suited to intermittent-demand series because it never gives infinite or undefined values except in the irrelevant case where all historical data are equal.
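Below is a minimal Python sketch of the non-seasonal MASE formula quoted above. The function name `mase` and the NumPy-based implementation are illustrative assumptions, not part of the cited sources; it scales the mean absolute forecast error by the mean absolute error of the one-step naive forecast computed on the same series.

```python
import numpy as np

def mase(y, f):
    """Mean Absolute Scaled Error for a non-seasonal series.

    y : actual values Y_1, ..., Y_n
    f : forecast values F_1, ..., F_n for the same periods
    """
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)

    # Numerator: mean absolute forecast error, e_t = Y_t - F_t
    mae = np.mean(np.abs(y - f))

    # Denominator: mean absolute error of the one-step naive forecast
    # (F_t = Y_{t-1}), i.e. (1/(n-1)) * sum_{i=2}^{n} |Y_i - Y_{i-1}|
    naive_mae = np.mean(np.abs(np.diff(y)))

    return mae / naive_mae

# Example: mase([3.0, 5.0, 4.0, 6.0], [2.5, 5.5, 4.5, 5.0]) ≈ 0.375
```

Values below 1 indicate that the forecast's mean absolute error is smaller than that of the one-step naive forecast on the same data.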
2006
- (Hyndman & Koehler, 2006) ⇒ Rob J. Hyndman, and A. B. Koehler. (2006). “Another Look at Measures of Forecast Accuracy.” In: International Journal of Forecasting, 22(4). [doi>10.1016/j.ijforecast.2006.03.001