Algorithm Analysis Task
An Algorithm Analysis Task is a system analysis task that analyzes an algorithm's performance (on some Algorithm Performance Measure).
- Context:
- It can range from being an Empirical Algorithm Evaluation Task to being a Theoretical Algorithm "Computational Complexity" Evaluation Task.
- …
- Example(s):
- Counter-Example(s):
- See: Computational Learning Theory; Model Evaluation; Algorithmic Efficiency; Computational Complexity.
References
2017
- (Webb, 2017) ⇒ Geoffrey I. Webb. (2017). "Algorithm Evaluation". In: Sammut C., Webb G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA
- QUOTE: Algorithm evaluation is the process of assessing a property or properties of an algorithm (...)
Many machine learning and data mining algorithms have been proposed. In order to understand the relative merits of these alternatives, it is necessary to evaluate them. The primary approaches to evaluation can be characterized as either theoretical or experimental. Theoretical evaluation uses formal methods to infer properties of the algorithm, such as its computational complexity (Papadimitriou 1994 [1]), and also employs the tools of computational learning theory to assess learning theoretic properties. Experimental evaluation applies the algorithm to learning tasks to study its performance in practice.
There are many different types of property that may be relevant to assess depending upon the intended application. These include algorithmic properties, such as time and space complexity. These algorithmic properties are often assessed separately with respect to performance when learning a model, that is, at training time, and performance when applying a learned model, that is, at test time.
Other types of property that are often studied are the properties of the models that are learned (see Model Evaluation). Strictly speaking, such properties should be assessed with respect to a specific application or class of applications. However, much machine learning research includes experimental studies in which algorithms are compared using a set of data sets with little or no consideration given to what class of applications those data sets might represent. It is dangerous to draw general conclusions about relative performance in general across any application from relative performance on this sample of some unknown class of applications. Such experimental evaluation has become known disparagingly as a bake-off.
An approach to experimental evaluation that may be less subject to the limitations of bake-offs is the use of experimental evaluation to assess a learning algorithm’s bias and variance profile. Bias and variance measure properties of an algorithm’s propensities in learning models rather than directly being properties of the models that are learned. Hence, they may provide more general insights into the relative characteristics of alternative algorithms than do assessments of the performance of learned models on a finite number of applications. One example of such use of bias-variance analysis is found in Webb (2000) [2].
Techniques for experimental algorithm evaluation include bootstrap sampling, cross-validation, holdout evaluation, out-of-sample evaluation and prospective evaluation.
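The techniques above are described only abstractly in the entry. As a concrete, purely illustrative sketch, the Python function below performs repeated holdout evaluation with any learner exposing scikit-learn-style fit/predict methods, and derives from the repeated runs both a mean 0-1 error and a simple bias/variance profile of the kind discussed above. The protocol and the particular bias/variance definitions (most frequent prediction taken as the "main" prediction) are assumptions made for illustration, not Webb's exact procedure.
```python
import numpy as np

def repeated_holdout_profile(model_factory, X, y, n_rounds=20, test_frac=0.3, seed=0):
    """Repeated holdout evaluation plus a simple 0-1-loss bias/variance profile.

    model_factory() must return a fresh learner exposing fit(X, y) and predict(X).
    Returns (mean_error, bias, variance), each averaged over the held-out points.
    """
    rng = np.random.default_rng(seed)
    n_test = int(len(X) * test_frac)
    # Fix one test set so the same points are predicted in every round.
    perm = rng.permutation(len(X))
    test_idx, pool = perm[:n_test], perm[n_test:]
    preds, errors = [], []
    for _ in range(n_rounds):
        # Each round trains on a fresh random half of the remaining pool.
        train_idx = rng.choice(pool, size=len(pool) // 2, replace=False)
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        p = model.predict(X[test_idx])
        preds.append(p)
        errors.append(np.mean(p != y[test_idx]))
    preds = np.array(preds)                      # shape: (n_rounds, n_test)
    bias_terms, var_terms = [], []
    for j in range(n_test):
        labels, counts = np.unique(preds[:, j], return_counts=True)
        main = labels[np.argmax(counts)]         # most frequent ("main") prediction
        bias_terms.append(main != y[test_idx][j])          # main prediction is wrong
        var_terms.append(np.mean(preds[:, j] != main))      # rounds disagreeing with it
    return float(np.mean(errors)), float(np.mean(bias_terms)), float(np.mean(var_terms))
```
Other decompositions of bias and variance for 0-1 loss exist; the point of the sketch is only the experimental protocol of training repeatedly on different samples and examining how the resulting predictions vary.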
2009
- http://en.wikipedia.org/wiki/Analysis_of_algorithms
- To analyze an algorithm is to determine the amount of resources (such as time and storage) necessary to execute it. Most algorithms are designed to work with inputs of arbitrary length. Usually the efficiency or complexity of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).
Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, omega notation and theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the length of the list being searched, or in O(log(n)), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called model of computation. A model of computation may be defined in terms of an abstract computer, e.g., Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer.
Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual execution time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.
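As a small, purely illustrative companion to the binary-search example in the quoted passage, the sketch below counts the middle-element comparisons binary search performs, so the O(log(n)) estimate and the floor(log2(n)) + 1 bound can be checked directly:
```python
import math

def binary_search_steps(sorted_list, target):
    """Binary search that also returns how many middle-element comparisons it made."""
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        steps += 1                       # one three-way comparison per iteration
        if sorted_list[mid] == target:
            return mid, steps
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps

for n in (10, 1_000, 1_000_000):
    _, steps = binary_search_steps(list(range(n)), n - 1)   # search for the last element
    print(n, steps, math.floor(math.log2(n)) + 1)           # steps never exceeds the bound
```
And, echoing the final caveat, a quick timing loop shows that adding two arbitrarily large Python integers is not a unit-time step, since the cost grows with the number of digits:
```python
import timeit

for bits in (1_000, 100_000, 10_000_000):
    a = (1 << bits) - 1                              # an integer with `bits` binary digits
    t = timeit.timeit(lambda: a + a, number=1_000)
    print(f"{bits:>10} bits: {t:.5f} s per 1000 additions")
```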
- [1] Papadimitriou CH (1994). Computational Complexity. Addison-Wesley.
- [2] Webb GI (2000). "MultiBoosting: A Technique for Combining Boosting and Wagging". Machine Learning 40(2):159–196.