No Free Lunch Theorem
A No Free Lunch Theorem is a theorem which states that any two optimization algorithms are equivalent when their performance is averaged across all possible problems.
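In the notation of Wolpert & Macready (1997), where $d^y_m$ denotes the sequence of $m$ cost values produced by running an algorithm $a$ on an objective function $f$, the first NFL theorem can be paraphrased as: for any two algorithms $a_1$ and $a_2$,
$$\sum_f P(d^y_m \mid f, m, a_1) = \sum_f P(d^y_m \mid f, m, a_2),$$
i.e., summed over all possible objective functions, the probability of observing any given sequence of cost values does not depend on the choice of algorithm.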
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/No_free_lunch_theorem Retrieved:2015-1-18.
- In mathematical folklore, the "no free lunch" theorem (sometimes pluralized) of David Wolpert and William Macready appears in the 1997 "No Free Lunch Theorems for Optimization".[1] Wolpert had previously derived no free lunch theorems for machine learning (statistical inference).[2] In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper "state[s] that any two optimization algorithms are equivalent when their performance is averaged across all possible problems".[3] The 1997 theorems of Wolpert and Macready are mathematically technical and some find them unintuitive. The folkloric "no free lunch" (NFL) theorem is an easily stated and easily understood consequence of theorems Wolpert and Macready actually prove. It is weaker than the proven theorems, and thus does not encapsulate them.
Various investigators have extended the work of Wolpert and Macready substantively. See No free lunch in search and optimization for treatment of the research area.
- ↑ Wolpert, D.H., and Macready, W.G. (1997). "No Free Lunch Theorems for Optimization." In: IEEE Transactions on Evolutionary Computation, 1(1): 67-82.
- ↑ Wolpert, David. (1996). "The Lack of A Priori Distinctions Between Learning Algorithms." In: Neural Computation, 8(7): 1341-1390.
- ↑ Wolpert, D.H., and Macready, W.G. (2005). "Coevolutionary Free Lunches." In: IEEE Transactions on Evolutionary Computation, 9(6): 721-735.
2011
- (Sammut & Webb, 2011) ⇒ Claude Sammut, and Geoffrey I. Webb. (2011). “No-Free-Lunch Theorem.” In: (Sammut & Webb, 2011) p.721
2009
- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization
- In computing, there are circumstances in which the outputs of all procedures solving a particular type of problem are statistically identical. A colorful way of describing such a circumstance, introduced by David H. Wolpert and William G. Macready in connection with the problems of search[1] and optimization,[2] is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning (statistical inference).[3] Before Wolpert's article was published, Cullen Schaffer had summarized a preprint version of this work of Wolpert's, but used different terminology.[4]
In the "no free lunch" metaphor, each "restaurant" (problem-solving procedure) has a "menu" associating each "lunch plate" (problem) with a "price" (the performance of the procedure in solving the problem). The menus of restaurants are identical except in one regard — the prices are shuffled from one restaurant to the next. For an omnivore who is as likely to order each plate as any other, the average cost of lunch does not depend on the choice of restaurant. But a vegan who goes to lunch regularly with a carnivore who seeks economy pays a high average cost for lunch. To methodically reduce the average cost, one must use advance knowledge of a) what one will order and b) what the order will cost at various restaurants. That is, improvement of performance in problem-solving hinges on using prior information to match procedures to problems.[2][4]
In formal terms, there is no free lunch when the probability distribution on problem instances is such that all problem solvers have identically distributed results. In the case of search, a problem instance is an objective function, and a result is a sequence of values obtained in evaluation of candidate solutions in the domain of the function. For typical interpretations of results, search is an optimization process. There is no free lunch in search if and only if the distribution on objective functions is invariant under permutation of the space of candidate solutions.[5][6][7] This condition does not hold precisely in practice,[6] but an "(almost) no free lunch" theorem suggests that it holds approximately.[8]
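The permutation-invariance condition can be verified exhaustively on a toy search space. The following Python sketch (an illustration of the condition, not code from the cited papers; all names are ours) enumerates every objective function f: X → Y on a three-point domain, runs two deterministic non-revisiting search orders, and shows that their best-so-far performance, averaged uniformly over all functions, is identical:

    from itertools import product

    # Tiny search space: 3 candidate solutions, objective values in {0, 1}.
    X = [0, 1, 2]   # candidate solutions
    Y = [0, 1]      # possible objective values

    def run(order, f):
        """Evaluate candidates in the given order (no revisits) and
        record the best objective value seen after each evaluation."""
        best, trace = None, []
        for x in order:
            v = f[x]
            best = v if best is None else max(best, v)
            trace.append(best)
        return trace

    # Two deterministic, non-revisiting "algorithms" are just visit orders.
    alg_a = [0, 1, 2]
    alg_b = [2, 0, 1]

    # Enumerate ALL objective functions f: X -> Y (|Y|^|X| = 8 of them).
    functions = [dict(zip(X, vals)) for vals in product(Y, repeat=len(X))]

    def average_trace(order):
        """Average best-so-far trace under the uniform distribution on functions."""
        traces = [run(order, f) for f in functions]
        return [sum(t[i] for t in traces) / len(traces) for i in range(len(X))]

    print("algorithm A:", average_trace(alg_a))  # [0.5, 0.75, 0.875]
    print("algorithm B:", average_trace(alg_b))  # [0.5, 0.75, 0.875]

Both orders print the same average trace: under the uniform (hence permutation-invariant) distribution over objective functions, no visit order outperforms any other, which is exactly the NFL condition stated above.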
1997
- (Wolpert & Macready, 1997) ⇒ David H. Wolpert, and William G. Macready. (1997). “No Free Lunch Theorems for Optimization.” In: IEEE Transactions on Evolutionary Computation, 1(1).
1995
- (Wolpert & Macready, 1995) ⇒ David H. Wolpert, and William G. Macready. (1995). “No Free Lunch Theorems for Search." Vol. 10. Technical Report SFI-TR-95-02-010, Santa Fe Institute, 1995.