False Positive Classification
A False Positive Classification is a classification error in which an instance is predicted to be positive while its true class is negative.
- AKA: FP Outcome.
- Context:
- It can be a member of a False Positive Prediction Set (used to calculate a false positive error rate; see the sketch after this list).
- Example(s):
- If a model predicts that a person has cancer when in fact they do not have cancer, then the prediction is labeled as a False Positive Prediction.
- a Type I Error in hypothesis testing.
- …
- Counter-Example(s):
- a True Positive Classification.
- a True Negative Classification.
- a False Negative Classification.
- See: Probability Theory, Classification Performance.
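The following is a minimal sketch, in Python, of how a set of false positive predictions can be counted and turned into a false positive error rate; the 1/0 label encoding (1 = positive, 0 = negative) and the example lists are illustrative assumptions, not part of the source.

```python
# Minimal sketch: counting false positives and the false positive error rate.
# Assumes binary labels where 1 = positive and 0 = negative; the data below
# are purely illustrative.

y_true = [0, 0, 1, 0, 1, 0, 0, 1]   # actual classes
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]   # model predictions

# A false positive: predicted positive (1) while the true class is negative (0).
false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
true_negatives  = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

# False positive error rate = FP / (FP + TN), i.e. FP over all actual negatives.
fpr = false_positives / (false_positives + true_negatives)
print(false_positives, fpr)   # 2 false positives out of 5 negatives -> 0.4
```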
References
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/False_positives_and_false_negatives Retrieved:2020-10-5.
- A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result, a true positive and a true negative. They are also known in medicine as a false positive (respectively negative) diagnosis, and in statistical classification as a false positive (respectively negative) error.
In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis. The terms are often used interchangeably, but there are differences in detail and interpretation due to the differences between medical testing and statistical hypothesis testing.
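As a rough illustration of the hypothesis-testing analogue, the sketch below repeatedly tests a null hypothesis that is in fact true and counts how often it is rejected, each rejection being a Type I Error (a false positive); the significance level, sample size, and choice of a one-sample t-test are assumptions made for illustration only.

```python
# Sketch: a Type I Error is the hypothesis-testing analogue of a false
# positive -- rejecting the null hypothesis when it is actually true.
# Illustrative assumptions: alpha = 0.05, one-sample t-test, normal data.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 10_000

type_i_errors = 0
for _ in range(n_tests):
    # Samples are drawn from a distribution whose true mean IS 0, so the
    # null hypothesis (mean == 0) is true in every repetition.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    result = ttest_1samp(sample, popmean=0.0)
    if result.pvalue < alpha:      # rejecting a true null = Type I Error
        type_i_errors += 1

# The observed false positive rate should be close to alpha (about 5%).
print(type_i_errors / n_tests)
```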
2006
- (Fawcett, 2006) ⇒ Tom Fawcett. (2006). “An Introduction to ROC Analysis.” In: Pattern Recognition Letters, 27(8). doi:10.1016/j.patrec.2005.10.010
- QUOTE: Given a classifier and an instance, there are four possible outcomes. If the instance is positive and it is classified as positive, it is counted as a true positive; if it is classified as negative, it is counted as a false negative. If the instance is negative and it is classified as negative, it is counted as a true negative; if it is classified as positive, it is counted as a false positive. Given a classifier and a set of instances (the test set), a two-by-two confusion matrix (also called a contingency table) can be constructed representing the dispositions of the set of instances.
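A minimal sketch of the two-by-two confusion matrix Fawcett describes, built from plain Python lists; the example labels and the 1/0 encoding of the positive/negative classes are assumptions for illustration.

```python
# Sketch: building the 2x2 confusion matrix (contingency table) from true
# labels and classifier predictions. 1 = positive class, 0 = negative class;
# the data below are illustrative.

y_true = [1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]

tp = fp = tn = fn = 0
for actual, predicted in zip(y_true, y_pred):
    if actual == 1 and predicted == 1:
        tp += 1            # true positive
    elif actual == 1 and predicted == 0:
        fn += 1            # false negative
    elif actual == 0 and predicted == 0:
        tn += 1            # true negative
    else:                  # actual == 0 and predicted == 1
        fp += 1            # false positive

# Rows: actual class; columns: predicted class.
print("              pred=1  pred=0")
print(f"actual=1        {tp}       {fn}")
print(f"actual=0        {fp}       {tn}")
```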