Prequential Evaluation
A Prequential Evaluation is a data stream mining evaluation method in which each instance is used first to test the model and then to train it.
References
2016
- (SAMOA website, 2016) ⇒ https://samoa.incubator.apache.org/documentation/Prequential-Evaluation-Task.html
- QUOTE: In data stream mining, the most used evaluation scheme is the prequential or interleaved-test-then-train evaluation. The idea is very simple: we use each instance first to test the model, and then to train the model. The Prequential Evaluation task evaluates the performance of online classifiers doing this. It supports two classification performance evaluators: the basic one which measures the accuracy of the classifier model since the start of the evaluation, and a window based one which measures the accuracy on the current sliding window of recent instances.
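The test-then-train loop and the two accuracy evaluators described above can be sketched in a few lines of Python. This is only an illustrative sketch, not SAMOA's implementation: the `classifier.predict` and `classifier.learn_one` methods, and the `stream` iterable of `(x, y)` pairs, are assumed interfaces.

```python
from collections import deque

def prequential_evaluation(classifier, stream, window_size=1000):
    """Prequential (interleaved-test-then-train) evaluation sketch.

    Each instance is used first to test the model, then to train it.
    Returns the cumulative accuracy since the start and the accuracy
    over a sliding window of the most recent instances.
    """
    correct_total = 0
    seen_total = 0
    window = deque(maxlen=window_size)  # 1 = correct prediction, 0 = wrong

    for x, y in stream:
        # 1. Test: predict the label before revealing the true one.
        y_pred = classifier.predict(x)   # assumed method
        hit = int(y_pred == y)
        correct_total += hit
        seen_total += 1
        window.append(hit)

        # 2. Train: update the model with the true label.
        classifier.learn_one(x, y)       # assumed method

    cumulative_acc = correct_total / seen_total if seen_total else 0.0
    window_acc = sum(window) / len(window) if window else 0.0
    return cumulative_acc, window_acc
```

The window-based accuracy is the more informative of the two on evolving streams, since the cumulative estimate is dominated by old instances and reacts slowly to concept drift.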
2015
- (Bifet et al., 2015) ⇒ Albert Bifet, Gianmarco de Francisci Morales, Jesse Read, Geoff Holmes, and Bernhard Pfahringer. (2015). “Efficient Online Evaluation of Big Data Stream Classifiers.” In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2015). ISBN:978-1-4503-3664-2 doi:10.1145/2783258.2783372
- QUOTE: Prequential evaluation, the most commonly used evaluation in stream learning, uses examples first to test and then to train a single model. This method is unable to provide statistical significance when comparing classifiers. On the other hand, the most common error estimation technique for the batch setting, cross-validation, is not directly applicable in streaming.