Timeseries Smoothing Algorithm
A Timeseries Smoothing Algorithm is a smoothing algorithm that can be applied by a timeseries smoothing system (to solve a timeseries smoothing task).
- Example(s):
  - a Moving-Average Smoothing Algorithm,
  - an Exponential Smoothing Algorithm, such as Holt's exponentially weighted moving averages (Holt, 1957),
  - a Kalman Smoothing Algorithm, such as the EM-based state-space smoother of Shumway & Stoffer (1982),
  - a Monte Carlo (Particle) Smoothing Algorithm, such as the forward-filtering backward-smoothing method of Godsill et al. (2012).
- Counter-Example(s):
  - a Timeseries Forecasting Algorithm, which extrapolates beyond the observed data rather than smoothing within it.
- See: Missing Data; Kalman Filter; EM Algorithm; Forecasting Algorithm.
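The following is a minimal sketch (not drawn from any of the cited works) of two elementary timeseries smoothing algorithms, a centered moving average and simple exponential smoothing; the window size and the smoothing factor `alpha` are illustrative assumptions.

```python
import numpy as np

def moving_average(y, window=5):
    """Smooth a 1-D series with a centered moving average."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

def exponential_smoothing(y, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*y_t + (1 - alpha)*s_{t-1}."""
    s = np.empty(len(y), dtype=float)
    s[0] = y[0]
    for t in range(1, len(y)):
        s[t] = alpha * y[t] + (1 - alpha) * s[t - 1]
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 200)
    y = np.sin(t) + 0.3 * rng.standard_normal(t.size)   # noisy signal
    print(moving_average(y)[:5].round(3))
    print(exponential_smoothing(y)[:5].round(3))
```

Both smoothers trade responsiveness for noise reduction: a larger window or a smaller `alpha` yields a smoother but more lagged estimate.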
References
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/time_series#Curve_fitting Retrieved:2017-8-23.
- Curve fitting [1] [2] is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, [3] possibly subject to constraints. [4] [5] Curve fitting can involve either interpolation, [6] [7] where an exact fit to the data is required, or smoothing, [8] [9] in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis, [10] [11] which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization, [12] [13] to infer values of a function where no data are available, [14] and to summarize the relationships among two or more variables. [15] Extrapolation refers to the use of a fitted curve beyond the range of the observed data, [16] and is subject to a degree of uncertainty [17] since it may reflect the method used to construct the curve as much as it reflects the observed data.

The construction of economic time series involves the estimation of some components for some dates by interpolation between values ("benchmarks") for earlier and later dates. Interpolation is estimation of an unknown quantity between two known quantities (historical data), or drawing conclusions about missing information from the available information ("reading between the lines"). [18] Interpolation is useful where the data surrounding the missing data is available and its trend, seasonality, and longer-term cycles are known. This is often done by using a related series known for all relevant dates. [19] Alternatively, polynomial interpolation or spline interpolation is used, where piecewise polynomial functions are fit into time intervals such that they fit smoothly together.

A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set. Spline interpolation, however, yields a piecewise continuous function composed of many polynomials to model the data set.
Extrapolation is the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results.
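To make the excerpt's distinction concrete, here is a hedged sketch (illustrative data and degrees, not from the quoted source) contrasting polynomial regression, which fits one global polynomial to the whole data set, with spline interpolation, which fits piecewise cubics that pass through every data point.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative data: a noisy sine sampled at 11 points.
x = np.linspace(0.0, 10.0, 11)
y = np.sin(x) + 0.1 * np.random.default_rng(1).standard_normal(x.size)

# Polynomial regression: a single global cubic fitted by least squares
# (it approximates, rather than passes through, the data points).
global_poly = np.poly1d(np.polyfit(x, y, deg=3))

# Spline interpolation: piecewise cubics that match each data point
# exactly and join with continuous first and second derivatives.
spline = CubicSpline(x, y)

x_new = np.linspace(0.0, 10.0, 5)
print("polynomial regression:", global_poly(x_new).round(3))
print("spline interpolation :", spline(x_new).round(3))
```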
2012
- (Godsill et al., 2012) ⇒ Simon J. Godsill, Arnaud Doucet, and Mike West. (2012). “Monte Carlo Smoothing for Nonlinear Time Series." Journal of the American Statistical Association.
- ABSTRACT: We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forward-filtering backward-smoothing procedure that can be viewed as the nonlinear, non-Gaussian counterpart of standard Kalman filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterized in terms of time-varying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based on the filtered trajectories.
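Below is a minimal sketch, not the authors' implementation, of the forward-filtering backward-simulation idea the abstract describes, applied to a toy linear-Gaussian AR(1) state-space model; the model, the parameters PHI, Q, R, and the particle count are illustrative assumptions (the actual method targets nonlinear, non-Gaussian models).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model (assumed for illustration only):
#   x_t = PHI * x_{t-1} + v_t,  v_t ~ N(0, Q)
#   y_t = x_t + e_t,            e_t ~ N(0, R)
PHI, Q, R = 0.9, 0.5, 1.0
T, N = 100, 500                                   # time steps, particles

x = np.zeros(T)
for t in range(1, T):
    x[t] = PHI * x[t - 1] + rng.normal(0, np.sqrt(Q))
y = x + rng.normal(0, np.sqrt(R), T)              # simulated observations

def log_gauss(z, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (z - mean) ** 2 / var)

# Forward pass: bootstrap particle filter, storing particles and weights.
particles = np.zeros((T, N))
weights = np.zeros((T, N))
particles[0] = rng.normal(0, 1, N)
w = np.exp(log_gauss(y[0], particles[0], R))
weights[0] = w / w.sum()
for t in range(1, T):
    idx = rng.choice(N, N, p=weights[t - 1])      # resample ancestors
    particles[t] = PHI * particles[t - 1, idx] + rng.normal(0, np.sqrt(Q), N)
    w = np.exp(log_gauss(y[t], particles[t], R))
    weights[t] = w / w.sum()

# Backward pass: sample one smoothed state trajectory (backward simulation).
traj = np.zeros(T)
j = rng.choice(N, p=weights[-1])
traj[-1] = particles[-1, j]
for t in range(T - 2, -1, -1):
    lw = np.log(weights[t]) + log_gauss(traj[t + 1], PHI * particles[t], Q)
    bw = np.exp(lw - lw.max())
    j = rng.choice(N, p=bw / bw.sum())
    traj[t] = particles[t, j]

print("RMSE of one smoothed draw vs. true state:",
      round(float(np.sqrt(np.mean((traj - x) ** 2))), 3))
```

Averaging many such backward-sampled trajectories approximates the smoothing distribution p(x_{1:T} | y_{1:T}).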
1982
- (Shumway & Stoffer, 1982) ⇒ Robert H. Shumway, and David S. Stoffer . (1982). “An Approach to Time Series Smoothing and Forecasting Using the EM Algorithm." Journal of time series analysis 3, no. 4
- ABSTRACT: We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forward-filtering backward-smoothing procedure that can be viewed as the nonlinear, non-Gaussian counterpart of standard Kalman filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterized in terms of time-varying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based on the filtered trajectories.
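As a rough illustration of the "conventional Kalman smoothed estimators" the abstract refers to, here is a hedged sketch of a Kalman filter followed by the Rauch-Tung-Striebel (fixed-interval) smoother for a toy local-level model with a block of missing observations; the model, noise variances, and data are illustrative assumptions, and the paper's EM step (not shown here) would re-estimate those variances from the smoothed moments.

```python
import numpy as np

def kalman_smoother(y, Q, R, x0=0.0, P0=10.0):
    """Kalman filter plus Rauch-Tung-Striebel fixed-interval smoother
    for a scalar local-level (random-walk) model."""
    T = len(y)
    xp = np.zeros(T); Pp = np.zeros(T)   # one-step predictions
    xf = np.zeros(T); Pf = np.zeros(T)   # filtered estimates
    x_prev, P_prev = x0, P0
    for t in range(T):
        xp[t], Pp[t] = x_prev, P_prev + Q           # predict
        if np.isnan(y[t]):                          # missing observation: skip update
            xf[t], Pf[t] = xp[t], Pp[t]
        else:
            K = Pp[t] / (Pp[t] + R)                 # Kalman gain
            xf[t] = xp[t] + K * (y[t] - xp[t])
            Pf[t] = (1 - K) * Pp[t]
        x_prev, P_prev = xf[t], Pf[t]
    xs = xf.copy(); Ps = Pf.copy()                  # backward RTS pass
    for t in range(T - 2, -1, -1):
        J = Pf[t] / Pp[t + 1]
        xs[t] = xf[t] + J * (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J ** 2 * (Ps[t + 1] - Pp[t + 1])
    return xs, Ps

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.3, 120))        # latent random walk
y = truth + rng.normal(0.0, 1.0, 120)               # noisy observations
y[40:50] = np.nan                                   # a block of missing data
xs, Ps = kalman_smoother(y, Q=0.3 ** 2, R=1.0 ** 2)
print("smoothed estimate over the missing block:", xs[40:50].round(2))
```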
1957
- (Holt, 1957) ⇒ Charles C. Holt. (1957). “Forecasting Seasonals and Trends by Exponentially Weighted Moving Averages." Carnegie Inst. of Technology.
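Holt's method smooths a level and a trend with two exponentially weighted recursions and extrapolates them for forecasting; the sketch below is an illustrative rendering of that idea (the smoothing constants `alpha` and `beta` and the data are assumptions, and the seasonal component from the title is omitted).

```python
import numpy as np

def holt_linear(y, alpha=0.3, beta=0.1, horizon=5):
    """Holt-style double exponential smoothing of level and trend,
    followed by a straight-line extrapolation for forecasting."""
    level = y[0]
    trend = y[1] - y[0]
    smoothed = [level]
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (prev_level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        smoothed.append(level)
    forecast = [level + h * trend for h in range(1, horizon + 1)]
    return np.array(smoothed), np.array(forecast)

y = np.array([10.0, 12.0, 13.5, 15.0, 17.2, 18.9, 21.0, 22.8])
smoothed, forecast = holt_linear(y)
print("smoothed :", smoothed.round(2))
print("forecast :", forecast.round(2))
```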
- ↑ Sandra Lach Arlinghaus, PHB Practical Handbook of Curve Fitting. CRC Press, 1994.
- ↑ William M. Kolb. Curve Fitting for Programmable Calculators. Syntec, Incorporated, 1984.
- ↑ S.S. Halli, K.V. Rao. 1992. Advanced Techniques of Population Analysis. Page 165 (cf. ... functions are fulfilled if we have a good to moderate fit for the observed data.)
- ↑ The Signal and the Noise: Why So Many Predictions Fail - but Some Don't. By Nate Silver. https://books.google.com/books?id=SI-VqAT4_hYC
- ↑ Data Preparation for Data Mining: Text. By Dorian Pyle.
- ↑ Numerical Methods in Engineering with MATLAB®. By Jaan Kiusalaas. Page 24.
- ↑ Numerical Methods in Engineering with Python 3. By Jaan Kiusalaas. Page 21.
- ↑ Numerical Methods of Curve Fitting. By P. G. Guest, Philip George Guest. Page 349.
- ↑ See also: Mollifier
- ↑ Fitting Models to Biological Data Using Linear and Nonlinear Regression. By Harvey Motulsky, Arthur Christopoulos.
- ↑ Regression Analysis By Rudolf J. Freund, William J. Wilson, Ping Sa. Page 269.
- ↑ Visual Informatics. Edited by Halimah Badioze Zaman, Peter Robinson, Maria Petrou, Patrick Olivier, Heiko Schröder. Page 689.
- ↑ Numerical Methods for Nonlinear Engineering Models. By John R. Hauser. Page 227.
- ↑ Methods of Experimental Physics: Spectroscopy, Volume 13, Part 1. By Claire Marton. Page 150.
- ↑ Encyclopedia of Research Design, Volume 1. Edited by Neil J. Salkind. Page 266.
- ↑ Community Analysis and Planning Techniques. By Richard E. Klosterman. Page 1.
- ↑ An Introduction to Risk and Uncertainty in the Evaluation of Environmental Investments. DIANE Publishing. Pg 69
- ↑ Hamming, Richard. Numerical methods for scientists and engineers. Courier Corporation, 2012.
- ↑ Friedman, Milton. “The interpolation of time series by related series." Journal of the American Statistical Association 57.300 (1962): 729-757.