Numerical Prediction Algorithm
A Numerical Prediction Algorithm is a prediction algorithm that can be implemented by a point estimation system (to solve a point estimation task).
- AKA: Parameter Estimation Algorithm.
- Context:
- It can first try to estimate the parameter of the probability distribution from which the data was generated.
- It can range from being an Iterative Point Estimation Algorithm to being a ...
- It can range from being a Heuristic Point Estimation Algorithm to being a Data-Driven Point Estimation Algorithm (such as a supervised numeric prediction algorithm); a minimal sketch of such a supervised numeric predictor appears after this list.
- Example(s):
- Counter-Example(s):
- See: Probability Function, Statistical Inference, Regression Algorithm, Regressed Function, Unbiased Estimator.
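As an illustration of the data-driven case mentioned in the Context above (a supervised numeric prediction algorithm), the following Python snippet fits an ordinary least-squares linear predictor and returns a single numeric point estimate for a new input. This is a minimal sketch only: the data, seed, and variable names are invented for illustration and are not part of this page's definition.

```python
# Minimal sketch: a data-driven numerical prediction algorithm
# (ordinary least-squares linear regression) that maps an input
# vector to a single numeric point estimate.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                             # training inputs (illustrative)
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.5 + rng.normal(scale=0.1, size=100)    # numeric targets

# Fit: solve the least-squares problem for weights and intercept.
X1 = np.hstack([X, np.ones((100, 1))])
w_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Predict: the learned rule returns a point estimate for a new input.
x_new = np.array([0.3, -0.7, 1.0])
print("point estimate:", float(x_new @ w_hat))
```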
References
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Estimation_theory#Estimators Retrieved:2014-9-23.
- Commonly used estimators and estimation methods, and topics related to them:
- Maximum likelihood estimators
- Bayes estimators
- Method of moments estimators
- Cramér–Rao bound
- Minimum mean squared error (MMSE), also known as Bayes least squared error (BLSE)
- Maximum a posteriori (MAP)
- Minimum variance unbiased estimator (MVUE)
- Nonlinear system identification
- Best linear unbiased estimator (BLUE)
- Unbiased estimators (see estimator bias)
- Particle filter
- Markov chain Monte Carlo (MCMC)
- Kalman filter, and its various derivatives
- Wiener filter
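The following Python sketch illustrates two entries from the list above, maximum likelihood estimation and unbiased estimation, on a simulated normal sample: the divide-by-n maximum-likelihood estimator of the variance is biased, while the divide-by-(n-1) estimator is unbiased. All parameter values and the sample itself are illustrative assumptions.

```python
# Minimal sketch: maximum-likelihood vs unbiased estimation of a
# normal variance, illustrating estimator bias from the list above.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=30)   # simulated sample (illustrative)

mu_mle = x.mean()                             # MLE of the mean
var_mle = ((x - mu_mle) ** 2).mean()          # MLE of the variance (divides by n, biased)
var_unbiased = ((x - mu_mle) ** 2).sum() / (len(x) - 1)   # unbiased (divides by n-1)

print(mu_mle, var_mle, var_unbiased)
```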
2009
- http://en.wikipedia.org/wiki/Point_estimation
- … Point estimation should be contrasted with general Bayesian methods of estimation, where the goal is usually to compute (perhaps to an approximation) the posterior distributions of parameters and other quantities of interest. The contrast here is between estimating a single point (point estimation), versus estimating a weighted set of points (a probability density function). However, where appropriate, Bayesian methodology can include the calculation of point estimates, either as the expectation or median of the posterior distribution or as the mode of this distribution.
In a purely frequentist context (as opposed to Bayesian), point estimation should be contrasted with the specific interval estimation calculation of confidence intervals.
- Routes to deriving point estimates directly: maximum likelihood (ML), method of moments, generalized method of moments, minimum mean squared error (MMSE), minimum variance unbiased estimator (MVUE), best linear unbiased estimator (BLUE)
- Routes to deriving point estimates via Bayesian analysis: maximum a posteriori (MAP), particle filter, Markov chain Monte Carlo (MCMC), Kalman filter, Wiener filter.
- Properties of Point estimates: bias of an estimator
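As a hedged illustration of the Bayesian route described above, the sketch below assumes a conjugate Beta-Binomial model (the prior hyperparameters and counts are invented) and reads two point estimates off the posterior: its expectation (posterior mean) and its mode (the MAP estimate).

```python
# Minimal sketch: Bayesian point estimates read off a posterior,
# contrasted with keeping the full posterior distribution.
# Beta(a, b) prior on a success probability, binomial data;
# all numbers below are illustrative assumptions.
a, b = 2.0, 2.0                 # Beta prior hyperparameters
successes, failures = 7, 3      # observed counts

a_post, b_post = a + successes, b + failures   # conjugate posterior Beta(a_post, b_post)

posterior_mean = a_post / (a_post + b_post)           # expectation of the posterior
map_estimate = (a_post - 1) / (a_post + b_post - 2)   # mode of the posterior (MAP)

print(posterior_mean, map_estimate)
```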
2006
- (Dubnicka, 2006l) ⇒ Suzanne R. Dubnicka. (2006). “Point Estimation - Handout 12." Kansas State University, Introduction to Probability and Statistics I, STAT 510 - Fall 2006.
- QUOTE: Ideally, one would like to construct a point estimator which is an unbiased estimator of the unknown parameter and has the smallest possible variance among all unbiased estimators of that parameter. Such an estimator is called a minimum variance unbiased estimator (MVUE). … Recall that a statistic is a summary measure of the sample, which is a set of random variables. Thus, a statistic is also a random variable. This is why it makes sense to compute the expectation and variance of a statistic. Also, since a statistic is a random variable, it has a distribution associated with it. The distribution of a statistic is called the sampling distribution of the statistic. … The standard error of an estimator θ̂, denoted SE(θ̂), is simply the standard deviation of θ̂. Thus, the standard error of θ̂ is SE(θ̂) = √Var(θ̂).
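The simulation sketch below illustrates the quoted ideas: the sample mean is a statistic with its own sampling distribution, and its standard error SE(θ̂) = √Var(θ̂) can be approximated empirically. The population parameters, sample size, and replication count are illustrative assumptions.

```python
# Minimal sketch: a statistic (the sample mean) is itself a random
# variable; simulating many samples traces out its sampling
# distribution, whose standard deviation is the standard error.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 25, 10_000
samples = rng.normal(loc=0.0, scale=3.0, size=(reps, n))

sample_means = samples.mean(axis=1)       # one statistic per simulated sample
se_simulated = sample_means.std(ddof=1)   # spread of the sampling distribution
se_formula = 3.0 / np.sqrt(n)             # SE of the mean = sigma / sqrt(n)

print(se_simulated, se_formula)
```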
2003
- (Myung, 2003) ⇒ In Jae Myung. (2003). “Tutorial on Maximum Likelihood Estimation.” In: Journal of Mathematical Psychology, 47. doi:10.1016/S0022-2496(02)00028-7
- QUOTE: There are two general methods of parameter estimation. They are least-squares estimation (LSE) and maximum likelihood estimation (MLE). The former has been a popular choice of model fitting in ...
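As a small illustration of the two methods named in the quote, the sketch below estimates a single location parameter by grid search, once by minimizing squared error (LSE) and once by maximizing a Gaussian log-likelihood (MLE); under Gaussian noise both pick, up to grid resolution, the sample mean. The data and grid are illustrative assumptions, not part of the cited tutorial.

```python
# Minimal sketch: least-squares estimation (LSE) vs maximum-likelihood
# estimation (MLE) of one location parameter, by grid search.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=1.5, scale=1.0, size=50)   # simulated observations

grid = np.linspace(0.0, 3.0, 3001)
sse = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)               # squared-error objective
loglik = -0.5 * ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)     # Gaussian log-likelihood (sigma = 1, constants dropped)

theta_lse = grid[sse.argmin()]
theta_mle = grid[loglik.argmax()]
print(theta_lse, theta_mle, x.mean())   # all three agree up to grid resolution
```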
2000
- (Valpola, 2000) ⇒ Harri Valpola. (2000). “Bayesian Ensemble Learning for Nonlinear Factor Analysis." PhD Dissertation, Helsinki University of Technology.
- QUOTE: The most efficient and least accurate approximation is, in general, a point estimate of the posterior probability. It means that only the model with highest probability or probability density is used for making the predictions and decisions. Whether the accuracy is good depends on how large a part of the probability mass is occupied by models which are similar to the most probable model. … The two point estimates in wide use are the maximum likelihood (ML) and the maximum a posteriori (MAP) estimator. The ML estimator neglects the prior probability of the model and maximizes only the probability which the model gives for the observation. The MAP estimator chooses the model which has the highest posterior probability mass or density.
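The sketch below illustrates the quoted contrast under an assumed conjugate Normal-Normal model (all parameter values are invented): the ML estimate ignores the prior, while the MAP estimate is pulled toward the prior mean.

```python
# Minimal sketch: ML vs MAP point estimates of a Gaussian mean.
# Model: x_i ~ Normal(theta, sigma^2) with a Normal(m0, tau^2) prior
# on theta; all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
sigma, m0, tau = 1.0, 0.0, 0.5
x = rng.normal(loc=2.0, scale=sigma, size=10)

theta_ml = x.mean()   # maximizes the likelihood alone, ignoring the prior

# Conjugate Normal-Normal case: the posterior is Gaussian, so its mode
# (the MAP estimate) has a closed form that shrinks toward the prior mean.
n = len(x)
precision = n / sigma**2 + 1 / tau**2
theta_map = (x.sum() / sigma**2 + m0 / tau**2) / precision

print(theta_ml, theta_map)   # the MAP estimate lies between theta_ml and m0
```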
1999
- (Hollander & Wolfe, 1999) ⇒ Myles Hollander, Douglas A. Wolfe. (1999). “Nonparametric Statistical Methods, 2nd Edition." Wiley. ISBN:0471190454
- QUOTE: An estimator is a decision rule (strategy, recipe) which, on the basis of the sample observations, estimates the value of a parameter. The specific value (on the basis of a particular set of data) which the estimator assigns is called the estimate.
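A brief illustration of the quoted distinction (the function name and data are invented): the estimator is the rule, and the estimate is the value that rule returns for one particular sample.

```python
# Minimal sketch: estimator = decision rule; estimate = its value on data.
def sample_mean(observations):      # the estimator: a rule applied to any sample
    return sum(observations) / len(observations)

data = [2.1, 1.9, 2.4, 2.0]         # one particular set of observations (illustrative)
estimate = sample_mean(data)        # the estimate: the specific value assigned
print(estimate)
```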