Software-based System Performance Evaluation Task
A Software-based System Performance Evaluation Task is a software evaluation task (an empirical system evaluation task) that requires estimating the system performance of a computing system.
- AKA: Computing System Performance Testing.
- Context:
- Input:
- an Algorithm.
- a Task.
- a Test Record Set.
- Output: a Performance Analysis Report.
- It requires that the Algorithm be applied by a Computing System.
- It can be an Empirical Algorithm Comparison Study which tests several Algorithms against several Benchmark Tasks.
- It can involve testing against a Baseline Algorithm (see the sketch after this list).
- …
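The input/output structure above can be made concrete with a minimal sketch (the function names, the toy benchmark task, and the metric below are illustrative assumptions, not a standard API): an algorithm is applied by a computing system to a test record set, and the result is a small performance analysis report that also covers a baseline algorithm.

```python
import time
from statistics import mean

def evaluate_system(system_fn, test_records, metric_fn):
    """Apply one algorithm, as implemented by a computing system, to every
    test record and return a small performance analysis (quality + runtime)."""
    start = time.perf_counter()
    scores = [metric_fn(system_fn(x), y) for x, y in test_records]
    elapsed = time.perf_counter() - start
    return {"mean_score": round(mean(scores), 3), "runtime_sec": elapsed}

def exact_match(prediction, gold):
    return 1.0 if prediction == gold else 0.0

# Toy benchmark task: sum a list of integers given as a "+"-separated string.
test_records = [("2+2", "4"), ("10+5", "15"), ("1+2+3", "6")]
candidate = lambda x: str(sum(int(t) for t in x.split("+")))   # algorithm under test
baseline = lambda x: "0"                                       # trivial baseline algorithm

report = {
    "candidate": evaluate_system(candidate, test_records, exact_match),
    "baseline": evaluate_system(baseline, test_records, exact_match),
}
print(report)   # a minimal Performance Analysis Report
```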
- Example(s):
- Estimating a Predictive Function's Accuracy (an Accuracy Estimation Task).
- a Supervised Learning System Performance Evaluation Task.
- ...
- See: Algorithm Complexity Analysis Task, Model Assessment Task.
References
2009
- (Jin et al., 2009) ⇒ Wei Jin, Hung Hay Ho, and Rohini K. Srihari. (2009). “OpinionMiner: A Novel Machine Learning System for Web Opinion Mining and Extraction.” In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2009). doi:10.1145/1557019.1557148.
- In this paper, we describe the architecture and main components of the system. The evaluation of the proposed method is presented based on processing the online product reviews from Amazon and other publicly available datasets.
2007
- (Kakkonen, 2007) ⇒ Tuomo Kakkonen. (2007). “Framework and Resources for Natural Language Evaluation.” Academic Dissertation. University of Joensuu.
- Evaluation of the correctness of a parser’s output is generally done by comparing the system output to correct human-constructed structures. These gold standard parses are obtained from a linguistic resource.
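As a rough illustration of such a gold-standard comparison (a minimal sketch under the assumption of a constituent-span representation; this is not Kakkonen's actual framework), bracketing precision, recall, and F1 can be computed from the overlap between system and gold parse constituents:

```python
def bracket_prf(system_spans, gold_spans):
    """Compare a parser's constituent spans against gold-standard spans.
    Each span is a (label, start, end) triple; a toy setup, not the
    dissertation's evaluation framework."""
    system, gold = set(system_spans), set(gold_spans)
    correct = len(system & gold)
    precision = correct / len(system) if system else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

gold = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}      # human-constructed parse
system = {("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)}    # parser output
print(bracket_prf(system, gold))   # (0.667, 0.667, 0.667) approximately
```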
1999
- (Yang & Liu, 1999) ⇒ Yiming Yang, and Xin Liu. (1999). “A Re-examination of Text Categorization Methods.” In: Proceedings of the 22nd ACM SIGIR Conference Retrieval (SIGIR 1999).
- In this paper we presented a controlled study with significance analyses on five well-known text categorization methods.
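A minimal sketch of one such significance analysis, here a two-sided exact sign test over paired per-document outcomes (a simplification; the paper's micro/macro sign tests and proportion tests are not reproduced here):

```python
from math import comb

def sign_test(system_a_correct, system_b_correct):
    """Two-sided exact sign test on paired per-document outcomes.
    Inputs are parallel lists of booleans (correct / incorrect) on the
    same test documents; ties (both right or both wrong) are dropped."""
    a_wins = sum(1 for a, b in zip(system_a_correct, system_b_correct) if a and not b)
    b_wins = sum(1 for a, b in zip(system_a_correct, system_b_correct) if b and not a)
    n, k = a_wins + b_wins, min(a_wins, b_wins)
    if n == 0:
        return 1.0
    return min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) / (2 ** n))

a = [True, True, True, False, True, True, False, True]
b = [True, False, False, False, True, False, False, False]
print(sign_test(a, b))   # p = 0.125 on this tiny paired sample
```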
- (Goldberg, 1999) ⇒ Andrew V. Goldberg. (1999). “Selecting Problems for Algorithm Evaluation.” In: Algorithm Engineering. Springer. doi:10.1007/3-540-48318-7_1
- ABSTRACT: In this paper we address the issue of developing test sets for computational evaluation of algorithms. We discuss both test families for comparing several algorithms and selecting one to use in an application, and test families for predicting algorithm performance in practice.
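As a hedged illustration of evaluating algorithms over a parameterized test family, i.e., instances generated at increasing sizes so that runtime growth can be compared (the random-input generator and the two sorting routines are illustrative choices, not taken from the paper):

```python
import random
import time

def insertion_sort(xs):
    """Quadratic reference algorithm for the comparison."""
    xs = list(xs)
    for i in range(1, len(xs)):
        j, key = i, xs[i]
        while j > 0 and xs[j - 1] > key:
            xs[j] = xs[j - 1]
            j -= 1
        xs[j] = key
    return xs

def timed(algorithm, instance):
    start = time.perf_counter()
    algorithm(instance)
    return time.perf_counter() - start

def time_on_family(algorithm, sizes, make_instance):
    """Time one algorithm over a parameterized test family so its growth
    with instance size can be compared against alternatives."""
    return [(n, timed(algorithm, make_instance(n))) for n in sizes]

family = lambda n: [random.random() for _ in range(n)]   # one test family: random inputs
for name, algo in [("insertion_sort", insertion_sort), ("builtin_sort", sorted)]:
    print(name, time_on_family(algo, [1000, 2000, 4000], family))
```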
1995
- (Kohavi, 1995) ⇒ Ron Kohavi. (1995). “A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection.” In: Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI 1995).
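A minimal sketch of the k-fold cross-validation accuracy estimate studied there (the nearest-mean classifier and the toy dataset are illustrative assumptions; stratification and the bootstrap, also covered by Kohavi, are not shown):

```python
import random
from statistics import mean

def cross_val_accuracy(train_fn, predict_fn, data, k=5, seed=0):
    """Estimate a learner's accuracy with k-fold cross-validation:
    train on k-1 folds, test on the held-out fold, average the accuracies."""
    data = list(data)
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        model = train_fn(train)
        correct = sum(predict_fn(model, x) == y for x, y in test)
        accuracies.append(correct / len(test))
    return mean(accuracies)

# Illustrative learner: predict the label whose training-feature mean is closest.
def train_nearest_mean(rows):
    sums, counts = {}, {}
    for x, y in rows:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_nearest_mean(model, x):
    return min(model, key=lambda y: abs(model[y] - x))

data = [(v, "low") for v in (0.1, 0.3, 0.2, 0.4, 0.25)] + \
       [(v, "high") for v in (0.9, 0.8, 0.95, 0.7, 0.85)]
print(cross_val_accuracy(train_nearest_mean, predict_nearest_mean, data, k=5))
```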