Time-Series Numeric Prediction Task
A Time-Series Numeric Prediction Task is a temporal prediction task that requires point estimates of future numeric values in a sequentially ordered dataset.
- Context:
- Input: a Timeseries Dataset (typically an unlabeled timeseries dataset).
- Output: a List of Point Estimates.
- Performance Measure: Forecasting Performance Measure.
- It can be solved by a Forecasting System (that implements a forecasting algorithm), as sketched after this list.
- It can range from being a Univariate Forecasting Task to being a Multi-Predictor Forecasting Task.
- It can support a Sequential-Data Data Mining Task.
- It can be supported by an Autocorrelation Task.
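The input/output contract above can be made concrete with a minimal sketch, assuming a univariate series; the naive last-value forecaster below is a hypothetical illustration, not a prescribed algorithm.

```python
# A minimal sketch of the task's input/output contract: a timeseries
# dataset in, a list of point estimates out. "naive_forecast" is an
# illustrative assumption, not part of the task definition.

def naive_forecast(series: list[float], horizon: int) -> list[float]:
    """Map a timeseries dataset (input) to a list of point estimates (output)."""
    last_observation = series[-1]
    return [last_observation] * horizon  # repeat the most recent value

observed = [112.0, 118.0, 132.0, 129.0, 121.0]
print(naive_forecast(observed, horizon=3))  # -> [121.0, 121.0, 121.0]
```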
- Example(s):
- a Daily Rain Quantity Forecasting Task (within weather forecasting).
- an M-Competition such as the M3-Competition Benchmark Task.
- a Monthly Sea Surface Temperature Prediction Task (http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/ensoforecast.shtml).
- an Economic Forecasting Task, such as an EPC prediction task.
- a National Quarterly GDP Forecasting Task.
- …
- Counter-Example(s):
- See: What-If Analysis, Timeseries Analysis Task, IID Point Estimation Task.
References
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Forecasting Retrieved:2020-4-28.
- QUOTE: Forecasting is the process of making predictions of the future based on past and present data and most commonly by analysis of trends. A commonplace example might be estimation of some variable of interest at some specified future date. Prediction is a similar, but more general term. Both might refer to formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods. Usage can differ between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.
Risk and uncertainty are central to forecasting and prediction; it is generally considered good practice to indicate the degree of uncertainty attaching to forecasts. In any case, the data must be up to date in order for the forecast to be as accurate as possible. In some cases the data used to predict the variable of interest is itself forecast.
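As a sketch of attaching a degree of uncertainty to a point forecast, the following assumes a naive (random-walk) forecaster, for which the standard h-step-ahead prediction interval widens as sigma times the square root of h; the function name and the Gaussian-interval form are illustrative assumptions, not part of the quoted source.

```python
import math

def naive_forecast_with_interval(series, horizon, z=1.96):
    """Naive point forecast with an approximate 95% prediction interval.

    Assumes the standard random-walk result: the h-step-ahead forecast
    standard deviation is sigma * sqrt(h), with sigma estimated from the
    one-step differences of the observed series.
    """
    diffs = [b - a for a, b in zip(series, series[1:])]
    mean_diff = sum(diffs) / len(diffs)
    sigma = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (len(diffs) - 1))
    last = series[-1]
    return [(last, last - z * sigma * math.sqrt(h), last + z * sigma * math.sqrt(h))
            for h in range(1, horizon + 1)]
```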
2016
- (Talley, 2016) ⇒ Ian Talley. (2016). “The IMF Is Sounding the Alarm. Is Anyone Listening?.” In: Wall Street Journal, Mar 8, 2016.
2013a
- http://en.wikipedia.org/wiki/Time_series#Analysis
- QUOTE: Fully formed statistical models for stochastic simulation purposes, so as to generate alternative versions of the time series, representing what might happen over non-specific time-periods in the future.
- Simple or fully formed statistical models to describe the likely outcome of the time series in the immediate future, given knowledge of the most recent outcomes (forecasting).
- Forecasting on time series is usually done using automated statistical software packages and programming languages, such as R, S, SAS, SPSS, Minitab, and many others.
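A hedged Python analogue of the package-based workflow described above, assuming the statsmodels library is installed (R, SAS, SPSS, and the other packages named offer equivalent routines); the data values and the AR(1) specification are illustrative.

```python
# Fit a simple ARIMA model and produce point forecasts, assuming
# statsmodels is available. The series and model order are illustrative.
from statsmodels.tsa.arima.model import ARIMA

data = [266.0, 145.9, 183.1, 119.3, 180.3, 168.5, 231.8, 224.5, 192.8, 122.9]
fitted = ARIMA(data, order=(1, 0, 0)).fit()  # a simple AR(1) specification
print(fitted.forecast(steps=3))              # three-step-ahead point estimates
```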
2008a
- http://www.meted.ucar.edu/hydro/verification/intro/print_version/06-Accuracy.htm
- QUOTE: This section covers the verification of forecast accuracy. Accuracy is one of the seven important verification topics to consider when verifying hydrologic forecasts.
The accuracy is defined as how well observed and forecast values are matched. The accuracy statistics are really measures of forecast error and therefore we could call them error statistics. Except for the bias statistic, we prefer values close to 0.0 for these statistics, indicating that forecast error is minimized. Because bias is a proportion, a value close to 1.0 indicates minimal error between forecasts and observations.
Measures of Accuracy (Error Statistics):
- Deterministic: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Error (ME), and Volumetric Bias.
- Probabilistic: Continuous Ranked Probability Score (CRPS), an error statistic used for the verification of probabilistic forecasts.
- …
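The deterministic error statistics listed above are straightforward to compute; a minimal sketch follows, where the proportional form of the bias (forecast total over observed total) is an assumption about the volumetric bias the quote names, and the function name is illustrative.

```python
import math

def error_statistics(forecasts, observations):
    """MAE, RMSE, ME, and a proportional bias for paired forecast/observed values."""
    errors = [f - o for f, o in zip(forecasts, observations)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    me = sum(errors) / n                        # close to 0.0 when unbiased
    bias = sum(forecasts) / sum(observations)   # close to 1.0 when unbiased
    return mae, rmse, me, bias
```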
2008b
- (Upton & Cook, 2008) ⇒ Graham Upton, and Ian Cook. (2008). “A Dictionary of Statistics, 2nd edition revised." Oxford University Press. ISBN:0199541450
- Forecasting: The prediction of future values in a time series. Methods include exponential smoothing, the Holt-Winters procedure, and the Box-Jenkins procedure.
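Of the methods named, exponential smoothing is the simplest to sketch; the following is a minimal, illustrative implementation of simple exponential smoothing (not the Holt-Winters or Box-Jenkins procedures), with the smoothing weight alpha chosen arbitrarily.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing:
    level_t = alpha * y_t + (1 - alpha) * level_{t-1}, initialized at y_0.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # the point forecast for the next period
```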
2006a
- (Cesa-Bianchi & Lugosi, 2006) ⇒ Nicolo Cesa-Bianchi, and Gabor Lugosi. (2006). “Prediction, Learning, and Games.” Cambridge University Press. ISBN: 0521841089.
- QUOTE: Prediction, as we understand it in this book, is concerned with guessing the short-term evolution of certain phenomena. Examples of prediction problems are forecasting tomorrow’s temperature at a given location or guessing which asset will achieve the best performance over the next month. Despite their different nature, these tasks look similar at an abstract level: one must predict the next element of an unknown sequence given some knowledge about the past elements and possibly other available information.
2006b
- (Hyndman & Koehler, 2006) ⇒ Rob J. Hyndman, and A. B. Koehler. (2006). “Another Look at Measures of Forecast Accuracy.” In: International Journal of Forecasting, 22(4). doi:10.1016/j.ijforecast.2006.03.001
- QUOTE: We discuss and compare measures of accuracy of univariate time series forecasts. The methods used in the M-competition as well as the M3-competition, and many of the measures recommended by previous authors on this topic, are found to be degenerate in commonly occurring situations. Instead, we propose that the mean absolute scaled error become the standard measure for comparing forecast accuracy across multiple time series.
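The proposed measure can be sketched directly from its definition in the cited paper: the out-of-sample mean absolute error, scaled by the in-sample mean absolute error of the one-step naive (random-walk) forecast; the function name is illustrative. A value below 1.0 indicates the forecasts beat the in-sample naive method on average.

```python
def mase(forecasts, actuals, training_series):
    """Mean Absolute Scaled Error: out-of-sample MAE scaled by the
    in-sample MAE of the one-step naive (random-walk) forecast.
    """
    mae = sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)
    naive = [abs(b - a) for a, b in zip(training_series, training_series[1:])]
    return mae / (sum(naive) / len(naive))
```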
1993
- (Murphy, 1993) ⇒ Allan H. Murphy. (1993). “What is a Good Forecast? An Essay on the Nature of Goodness in Weather Forecasting.” In: Weather and Forecasting, 8(2). doi:10.1175/1520-0434(1993)008<0281:WIAGFA>2.0.CO;2