Learning Cost Function
A Learning Cost Function is a cost function that is used by a learning task.
- Context:
- It can range from being a Classification Loss Function to being a Ranking Loss Function to being a Regression Loss Function.
- …
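The context above names three loss-function families. As an illustrative sketch (not from the source), here is one minimal example of each, written as plain Python functions; the function names are our own:

```python
def zero_one_loss(y_true, y_pred):
    """Classification loss: 1 on a misclassification, 0 otherwise."""
    return 0.0 if y_true == y_pred else 1.0

def pairwise_hinge_loss(score_pos, score_neg, margin=1.0):
    """Ranking loss: penalize a relevant item that is not scored
    at least `margin` above an irrelevant one."""
    return max(0.0, margin - (score_pos - score_neg))

def squared_loss(y_true, y_pred):
    """Regression loss: squared difference between target and prediction."""
    return (y_true - y_pred) ** 2

print(zero_one_loss(1, 0))            # 1.0
print(pairwise_hinge_loss(2.0, 1.5))  # 0.5
print(squared_loss(3.0, 2.5))         # 0.25
```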
- Counter-Example(s):
- See: Squared Loss, One-half Squared-Error Cost Function.
References
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/loss_function Retrieved:2014-4-3.
- In mathematical optimization, statistics, decision theory and machine learning, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative (sometimes called a reward function or a utility function), in which case it is to be maximized.
In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums. In optimal control the loss is the penalty for failing to achieve a desired value.
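As a hedged sketch of the parameter-estimation use described above: minimizing the average squared loss between an estimate and the observed values selects the sample mean as the estimate (a standard fact, not specific to this source). The names `theta` and `average_squared_loss` are our own:

```python
data = [2.0, 4.0, 6.0]

def average_squared_loss(theta, xs):
    """Average squared difference between the estimate and each datum."""
    return sum((x - theta) ** 2 for x in xs) / len(xs)

# Scan candidate estimates on a coarse grid; the minimizer is the mean.
candidates = [i / 10 for i in range(0, 81)]
best = min(candidates, key=lambda t: average_squared_loss(t, data))
print(best)  # 4.0, the sample mean
```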
- In mathematical optimization, statistics, decision theory and machine learning, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative (sometimes called a reward function or a utility function), in which case it is to be maximized.
- QUOTE:
- Suppose we have a fixed training set [math]\displaystyle{ \{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \} }[/math] of [math]\displaystyle{ m }[/math] training examples. We can train our neural network using batch gradient descent. In detail, for a single training example [math]\displaystyle{ (x,y) }[/math], we define the cost function with respect to that single example to be …
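The elided per-example cost in the quote above is typically the one-half squared-error cost [math]\displaystyle{ J(x,y) = \tfrac{1}{2} \lVert h(x) - y \rVert^2 }[/math]. A minimal sketch of that definition and of averaging it over a batch, using a toy linear predictor of our own choosing (`h`, `w`, `b` are assumed names, not from the source):

```python
def h(x, w, b):
    """Toy one-parameter 'network': a linear predictor."""
    return w * x + b

def cost_single(x, y, w, b):
    """One-half squared-error cost for a single training example."""
    return 0.5 * (h(x, w, b) - y) ** 2

# Batch cost: average the per-example costs over the training set.
train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

def batch_cost(w, b):
    return sum(cost_single(x, y, w, b) for x, y in train) / len(train)

print(batch_cost(2.0, 0.0))  # 0.0, a perfect fit
print(batch_cost(1.0, 0.0))  # positive cost for a worse fit
```

Batch gradient descent would repeatedly adjust `w` and `b` in the direction that decreases this averaged cost.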
2011
- (Sammut & Webb, 2011) ⇒ Claude Sammut, and Geoffrey I. Webb. (2011). “Loss Function.” In: "Encyclopedia of Machine Learning" p.632
2009
- en.wiktionary.org/wiki/loss_function
- In statistics, decision theory and economics, a loss function is a function that maps an event (technically an element of a sample space) onto a ...
- http://clopinet.com/isabelle/Projects/ETH/Exam_Questions.html
- loss function: A loss function is a function measuring the discrepancy between a predicted output f(x) and the desired outcome y: L(f(x), y). The risk is the average of L over many examples. Examples of loss functions include the square loss often used in regression, (y-f(x))^2, and the 0/1 loss used in classification, which is 1 in case of error and 0 otherwise.
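The definitions above can be sketched directly: the square loss, the 0/1 loss, and the risk as the average loss over examples. The predictor outputs here are stand-in values of our own, and `empirical_risk` is an assumed name:

```python
def square_loss(y, fx):
    """Square loss, often used in regression: (y - f(x))^2."""
    return (y - fx) ** 2

def zero_one_loss(y, fx):
    """0/1 loss, used in classification: 1 on error, 0 otherwise."""
    return 0.0 if y == fx else 1.0

def empirical_risk(loss, pairs):
    """Average of L(f(x), y) over many (f(x), y) examples."""
    return sum(loss(y, fx) for fx, y in pairs) / len(pairs)

# Two of four predictions are wrong, so the 0/1 risk is 0.5.
preds_and_targets = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(empirical_risk(zero_one_loss, preds_and_targets))  # 0.5
```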