Classification Accuracy Metric

From GM-RKB
** It can be reported as the rate at which a case will be labeled with the right category, if the [[Predictive Model]] is a [[Classifier]].
** It can be reported as the average distance between the predicted label and the correct value, if the [[Predictive Model]] is an [[Estimator]].
** It can (typically) be required that the [[Test Case]] be unseen during the [[Training Phase]].
** It can be the [[Inverse Function]] to the [[Error Rate Function]].
** …

Latest revision as of 06:24, 9 November 2024

A classification accuracy metric is a [[classifier performance metric]] that assesses the accuracy of a class prediction system, based on the proportion of the classifier's correct classifications to its total classifications (on labeled testing records).
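As a minimal sketch (the function names here are illustrative, not from any particular library), accuracy can be computed as the proportion of correct classifications over a set of labeled testing records, with the error rate as its complement:

```python
from typing import Sequence

def classification_accuracy(y_true: Sequence, y_pred: Sequence) -> float:
    """Proportion of test cases labeled with the correct category."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def error_rate(y_true: Sequence, y_pred: Sequence) -> float:
    """Complement of accuracy: accuracy + error rate == 1."""
    return 1.0 - classification_accuracy(y_true, y_pred)

# Labeled testing records: gold labels vs. classifier predictions.
y_true = ["spam", "ham", "spam", "ham", "spam"]
y_pred = ["spam", "ham", "ham",  "ham", "spam"]
print(classification_accuracy(y_true, y_pred))  # 0.8 (4 of 5 correct)
print(error_rate(y_true, y_pred))               # ~0.2
```

For an [[Estimator]], the analogous report would average the distance between predicted and correct values rather than count exact label matches.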


