Error Estimation Task
An Error Estimation Task is an estimation task that estimates uncertainty bounds.
- AKA: Uncertainty Quantification.
- Context:
- It can be solved by an Error Estimation System (that implements an error estimation algorithm); a minimal code sketch appears after this list.
- …
- Example(s):
- …
- Counter-Example(s):
- See: Aleatory Uncertainty, Epistemic Uncertainty, Prediction with Error Bounds, Error Analysis, Error Bound, Measurement Uncertainty.
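The following is a minimal, hypothetical sketch of an error estimation task: given repeated measurements of a quantity, it returns a point estimate together with normal-approximation 95% uncertainty bounds. The function name `estimate_with_error_bounds` and the synthetic data are illustrative only, not from any cited source.

```python
# Hedged sketch of an error estimation task: estimate a quantity and
# report uncertainty bounds (a normal-approximation 95% interval).
import math
import random

def estimate_with_error_bounds(samples, z=1.96):
    """Return (point_estimate, lower_bound, upper_bound)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    stderr = math.sqrt(var / n)
    return mean, mean - z * stderr, mean + z * stderr

if __name__ == "__main__":
    random.seed(0)
    data = [random.gauss(10.0, 2.0) for _ in range(500)]
    est, lo, hi = estimate_with_error_bounds(data)
    print(f"estimate = {est:.3f}, 95% bounds = [{lo:.3f}, {hi:.3f}]")
```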
References
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Uncertainty_quantification Retrieved:2020-4-12.
- Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if we exactly knew the speed, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.
Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification.[1][2][3]
- ↑ Jerome Sacks, William J. Welch, Toby J. Mitchell and Henry P. Wynn, "Design and Analysis of Computer Experiments", Statistical Science, Vol. 4, No. 4 (Nov., 1989), pp. 409–423.
- ↑ Ronald L. Iman, Jon C. Helton, "An Investigation of Uncertainty and Sensitivity Analysis Techniques for Computer Models", Risk Analysis, Volume 8, Issue 1, pages 71–90, March 1988.
- ↑ W.E. Walker, P. Harremoës, J. Rotmans, J.P. van der Sluijs, M.B.A. van Asselt, P. Janssen and M.P. Krayer von Krauss, "Defining Uncertainty: A Conceptual Basis for Uncertainty Management in Model-Based Decision Support", Integrated Assessment, Volume 4, Issue 1, 2003.
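As a hedged illustration of the computer-experiment approach described in the Wikipedia excerpt above, the sketch below propagates input uncertainty through a toy crash model by Monte Carlo sampling: the speed is fixed, while manufacturing-related inputs (here, a stiffness and an occupant mass) are drawn from assumed distributions, so the peak deceleration can only be summarized statistically. The `simulate` function and every number in it are invented for illustration; a real study would substitute an actual simulation code.

```python
# Hedged sketch: Monte Carlo uncertainty propagation through a toy
# "simulator". All parameter values below are invented.
import random
import statistics

def simulate(stiffness, mass, speed):
    # Toy stand-in for an expensive simulation: peak deceleration of
    # an idealized spring-like impact, a = speed * sqrt(k / m).
    return speed * (stiffness / mass) ** 0.5

random.seed(42)
runs = []
for _ in range(10_000):
    k = random.gauss(120_000.0, 8_000.0)     # uncertain stiffness [N/m]
    m = random.gauss(80.0, 5.0)              # uncertain occupant mass [kg]
    runs.append(simulate(k, m, speed=15.0))  # speed known exactly [m/s]

print(f"mean peak deceleration: {statistics.mean(runs):.1f} m/s^2")
print(f"std dev:                {statistics.stdev(runs):.1f} m/s^2")
qs = statistics.quantiles(runs, n=40)        # 2.5%..97.5% cut points
print(f"central 95% band: [{qs[0]:.1f}, {qs[-1]:.1f}] m/s^2")
```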
2015
- (Chamandy et al., 2015) ⇒ Nicholas Chamandy, Omkar Muralidharan, and Stefan Wager. (2015). “Teaching Statistics at Google-Scale.” In: The American Statistician, 69(4).
- QUOTE: ... Rather, we have chosen the problems to highlight a few distinct ways in which modern data and applications differ from traditional statistical applications, yet can benefit from careful statistical analysis.
In the first problem, uncertainty estimation was nontrivial because the types of second-order statistics that statisticians normally take for granted can be prohibitively expensive in large data streams. …
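One single-pass resampling scheme associated with the large-data-stream setting Chamandy et al. describe is the Poisson bootstrap, in which each streamed record contributes Poisson(1)-many copies to each bootstrap replicate, so uncertainty for a statistic can be estimated without a second pass over the data or prior knowledge of the stream length. The sketch below is a hedged, stdlib-only illustration; the stream, replicate count, and `poisson1` helper are all invented for this example, not taken from the cited paper.

```python
# Hedged sketch of a single-pass Poisson bootstrap over a data stream,
# here estimating uncertainty for the stream mean.
import math
import random

def poisson1(rng):
    # Draw from Poisson(1) via Knuth's method (fine for small means).
    L, k, p = math.exp(-1.0), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

R = 100                        # bootstrap replicates
rng = random.Random(7)
sums = [0.0] * R
counts = [0] * R

# Consume the stream once, updating every replicate as records arrive.
for x in (rng.gauss(5.0, 3.0) for _ in range(10_000)):
    for r in range(R):
        w = poisson1(rng)      # bootstrap multiplicity of this record
        sums[r] += w * x
        counts[r] += w

means = sorted(s / c for s, c in zip(sums, counts))
lo, hi = means[2], means[97]   # approximate 95% interval from R = 100
print(f"bootstrap 95% interval for the stream mean: [{lo:.3f}, {hi:.3f}]")
```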
2011
- (Roy & Oberkampf, 2011) ⇒ Christopher J. Roy, and William L. Oberkampf. (2011). “A Comprehensive Framework for Verification, Validation, and Uncertainty Quantification in Scientific Computing.” Computer methods in applied mechanics and engineering, 200 (25-28).
- QUOTE: ... An overview of a comprehensive framework is given for estimating the predictive uncertainty of scientific computing applications. The framework is comprehensive in the sense that it treats both types of uncertainty (aleatory and epistemic), incorporates uncertainty due to the mathematical form of the model, and it provides a procedure for including estimates of numerical error in the predictive uncertainty. Aleatory (random) uncertainties in model inputs are treated as random variables, while epistemic (lack of knowledge) uncertainties are treated as intervals with no assumed probability distributions. Approaches for propagating both types of uncertainties through the model to the system response quantities of interest are briefly discussed. Numerical approximation errors (due to discretization, iteration, and computer round off) are estimated using verification techniques, and the conversion of these errors into epistemic uncertainties is discussed. Model form uncertainty is quantified using (a) model validation procedures, i.e., statistical comparisons of model predictions to available experimental data, and (b) extrapolation of this uncertainty structure to points in the application domain where experimental data do not exist. Finally, methods for conveying the total predictive uncertainty to decision makers are presented. The different steps in the predictive uncertainty framework are illustrated using a simple example in computational fluid dynamics applied to a hypersonic wind tunnel. …
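The segregated treatment described in the abstract above (aleatory inputs as random variables, epistemic inputs as intervals with no assumed distribution) can be illustrated with a nested sketch: the outer loop scans the epistemic interval, the inner loop does Monte Carlo sampling of the aleatory input, and the result is an interval of possible values for each output statistic rather than a single distribution. The `model` function and all numbers below are invented for illustration and are not from Roy & Oberkampf.

```python
# Hedged sketch of segregated aleatory/epistemic uncertainty
# propagation. All names and values are illustrative.
import random
import statistics

def model(aleatory_load, epistemic_efficiency):
    # Toy response: delivered output under a random load and an
    # efficiency known only to lie within an interval.
    return epistemic_efficiency * aleatory_load

rng = random.Random(1)
eff_interval = (0.70, 0.90)    # epistemic: interval only, no distribution
mean_estimates = []

# Outer loop: sample points across the epistemic interval.
for i in range(21):
    eff = eff_interval[0] + (eff_interval[1] - eff_interval[0]) * i / 20
    # Inner loop: Monte Carlo over the aleatory (random) input.
    outputs = [model(rng.gauss(100.0, 10.0), eff) for _ in range(5_000)]
    mean_estimates.append(statistics.mean(outputs))

print(f"mean response lies in [{min(mean_estimates):.1f}, "
      f"{max(mean_estimates):.1f}] (bounds induced by the epistemic interval)")
```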