Estimation Task
An estimation task is a data-driven prediction task that requires finding a sample statistic that approximates a population parameter (i.e., mapping a data sample to an estimate of an estimand).
- AKA: Response Variable Prediction/Regression.
- Context:
- Input: a Data Sample.
- Output: an Estimated Value.
- Measures: RMSE, Minimum Error, ...
- It can be solved by an Estimation System (that implements an estimation algorithm).
- It can range from being a Point Estimation Task to being an Interval Estimation Task.
- It can range from being a Parametric Estimation Task to being a Non-Parametric Estimation Task.
- It can be instantiated in an Estimation Act.
- …
- Example(s): estimating a population mean from a data sample's sample mean (see the sketch after this outline).
- Counter-Example(s): a Classification Task.
- See: Approximation Task, Validation Task, Optimization Task, Estimation Function, Uncertainty, Instability, Statistic, Estimation Theory.
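The task structure above can be illustrated with a minimal point-estimation sketch in Python: a data sample is mapped to an estimated value, and the estimator is scored with RMSE against the (here simulated, hence known) population parameter. The normal population, its parameters, the sample size, and the function names are assumptions made for illustration only.

```python
import numpy as np

def estimate_mean(sample: np.ndarray) -> float:
    """A simple estimation system: maps a data sample to an estimated value
    (the sample mean as a point estimate of the population mean)."""
    return float(np.mean(sample))

def rmse(estimates: np.ndarray, true_value: float) -> float:
    """Root-mean-square error of repeated estimates against the estimand."""
    return float(np.sqrt(np.mean((estimates - true_value) ** 2)))

rng = np.random.default_rng(0)
true_mu, sigma, n = 5.0, 2.0, 50   # assumed population parameters and sample size

# Run the estimation task on many independent data samples and score it with RMSE.
estimates = np.array([estimate_mean(rng.normal(true_mu, sigma, n)) for _ in range(1000)])
print("RMSE of the sample-mean estimator:", rmse(estimates, true_mu))
```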
References
2016
- https://en.wiktionary.org/wiki/estimation#Noun
- (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/estimation Retrieved:2016-1-9.
- Estimation (or estimating) is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available.[1] Typically, estimation involves "using the value of a statistic derived from a sample to estimate the value of a corresponding population parameter".[2] The sample provides information that can be projected, through various formal or informal processes, to determine a range most likely to describe the missing information. An estimate that turns out to be incorrect will be an overestimate if the estimate exceeded the actual result, [3] and an underestimate if the estimate fell short of the actual result. [4]
- ↑ C. Lon Enloe, Elizabeth Garnett, Jonathan Miles, Physical Science: What the Technology Professional Needs to Know (2000), p. 47.
- ↑ Raymond A. Kent, "Estimation", Data Construction and Data Analysis for Survey Research (2001), p. 157.
- ↑ James Tate, John Schoonbeck, Reviewing Mathematics (2003), page 27: "An overestimate is an estimate you know is greater than the exact answer".
- ↑ James Tate, John Schoonbeck, Reviewing Mathematics (2003), page 27: "An underestimate is an estimate you know is less than the exact answer".
2013
- http://en.wikipedia.org/wiki/Estimation_theory
- Estimation theory is a branch of statistics and signal processing that deals with estimating the values of parameters based on measured/empirical data that has a random component. The parameters describe an underlying physical setting in such a way that the value of the parameters affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
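This setting can be sketched with a small simulation: a known signal is scaled by an unknown amplitude and observed in Gaussian noise, and a least-squares estimator recovers the amplitude from the measurements. The observation model, signal shape, and noise level below are illustrative assumptions, not part of the quoted article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed observation model: x[k] = A * s[k] + w[k], with a known signal s,
# an unknown amplitude A (the parameter), and Gaussian noise w (the random component).
A_true = 2.5
s = np.sin(2 * np.pi * 0.05 * np.arange(200))        # known signal shape
x = A_true * s + rng.normal(0.0, 0.4, size=s.size)   # measured/empirical data

# The estimator approximates the unknown parameter from the measurements;
# under Gaussian noise this least-squares estimate is also the maximum-likelihood estimate.
A_hat = np.dot(s, x) / np.dot(s, s)
print(f"estimated amplitude {A_hat:.3f} vs true parameter {A_true}")
```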
2012
- http://www.stats.ox.ac.uk/~tomas/html_links/0809/Lecture23.pdf
- QUOTE: Both estimation and testing are concerned with a parameter [math]\displaystyle{ \theta }[/math], which should (if possible) be a meaningful quantity. … A statistic [math]\displaystyle{ t = t(\mathbf{x}) }[/math] is any number calculated from the sample. Since the sample is a random observation of [math]\displaystyle{ X_1, X_2, ...,X_n }[/math], we can regard [math]\displaystyle{ t }[/math] as a sample of the random variable [math]\displaystyle{ T = t(X) }[/math]. The distribution of T is called the sampling distribution. … A statistic [math]\displaystyle{ T }[/math] is an estimator of (population parameter) [math]\displaystyle{ \theta }[/math] if its intention is to be close to the (unknown) value of [math]\displaystyle{ \theta }[/math]. To perform statistical inference for an estimator [math]\displaystyle{ T }[/math] of [math]\displaystyle{ \theta }[/math] we will often need to derive its distribution.
Suppose the population has mean [math]\displaystyle{ \mu }[/math] and variance [math]\displaystyle{ \sigma^2 }[/math]. Then we can often use : [math]\displaystyle{ \bar{X}_n \sim N(\mu,\frac{\sigma^2}{n}) }[/math].
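The sampling distribution claimed above can be checked with a short simulation; the normal population, its parameters, the sample size, and the number of repetitions are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 10.0, 3.0, 40, 5000   # assumed population and sample size

# Draw many samples X_1, ..., X_n and compute the statistic t(x) = sample mean for each.
sample_means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

# The sampling distribution of the sample mean should be approximately N(mu, sigma^2 / n).
print("empirical mean of Xbar_n:", sample_means.mean())        # close to mu
print("empirical sd of Xbar_n:  ", sample_means.std(ddof=1))   # close to sigma / sqrt(n)
print("theoretical sd:          ", sigma / np.sqrt(n))
```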
2011
- http://rimarcik.com/en/navigator/hypotezy.html
- QUOTE: The basic idea of statistics is simple: you want to extrapolate from the data you have collected to make general conclusions. Population can be e.g. all the voters and sample the voters you polled. Population is characterized by parameters and sample is characterized by statistics. For each parameter we can find appropriate statistics. This is called estimation. Parameters are always fixed, statistics vary from sample to sample.
Statistical hypothesis is a statement about population. …
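The voter-polling example in this quote can be made concrete with a tiny simulation: the population parameter (the true support proportion) stays fixed while the poll statistic varies from sample to sample. The population size, true support level, and poll size below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Population of voters: the parameter (true support proportion) is fixed.
population = rng.random(1_000_000) < 0.52    # assumed 52% support
true_p = population.mean()

# Each poll yields a statistic (the sample proportion), which varies from sample to sample.
polls = [population[rng.choice(population.size, size=1000, replace=False)].mean()
         for _ in range(5)]
print("fixed parameter p:", round(true_p, 4))
print("poll statistics:  ", [round(p, 4) for p in polls])
```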
2006
- (Dubnicka, 2006k) ⇒ Suzanne R. Dubnicka. (2006). “Introduction to Statistics - Handout 11.” Kansas State University, Introduction to Probability and Statistics I, STAT 510 - Fall 2006.
- QUOTE: … Estimation and hypothesis testing are the two common forms of statistical inference. … In estimation, we are trying to answer the question, “What is the value of the population parameter?” An estimate is our “best guess” of the value of the population parameter and is based on the sample. Therefore, an estimate is a statistic. Two types of estimates are considered: point estimates and interval estimates. A point estimate is a single value (point) which represents our best guess of a parameter value. …
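The distinction between point and interval estimates can be sketched as follows; the simulated sample, its parameters, and the 95% confidence level are assumptions for illustration, and SciPy is used only for convenience.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sample = rng.normal(loc=100.0, scale=15.0, size=30)   # assumed data sample

# Point estimate: a single "best guess" of the population mean, based on the sample.
point_estimate = sample.mean()

# Interval estimate: a 95% confidence interval based on the t distribution (df = n - 1).
sem = stats.sem(sample)
low, high = stats.t.interval(0.95, sample.size - 1, loc=point_estimate, scale=sem)
print(f"point estimate: {point_estimate:.2f}")
print(f"95% interval:   ({low:.2f}, {high:.2f})")
```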