Batch Gradient Descent Algorithm
A Batch Gradient Descent Algorithm is a gradient descent algorithm that accumulates the gradient contributions from all Data Points in the Training Set before performing a single update of the Function Weights.
- …
- Counter-Example(s): a Stochastic Gradient Descent Algorithm, a Mini-Batch Gradient Descent Algorithm.
- See: Iterative Gradient Descent Algorithm, Backpropagation, Batch Normalization.
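A minimal Python sketch of this accumulate-then-update pattern is shown below; the names (batch_gradient_descent, grad_fn, alpha, epochs) are illustrative assumptions, not taken from any referenced source:

```python
import numpy as np

def batch_gradient_descent(X, y, grad_fn, w, alpha=0.01, epochs=100):
    # Accumulate the gradient over every training point,
    # then apply a single weight update per full pass (epoch).
    m = X.shape[0]
    for _ in range(epochs):
        total_grad = np.zeros_like(w)
        for i in range(m):
            total_grad += grad_fn(w, X[i], y[i])  # per-data-point contribution
        w = w - alpha * (total_grad / m)          # one update per pass
    return w
```

The single update per pass over the whole training set is what distinguishes this from stochastic or mini-batch variants, which update after each example or each small subset.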
References
2014
- http://deeplearning.stanford.edu/wiki/index.php/Backpropagation_Algorithm
- QUOTE: Suppose we have a fixed training set [math]\displaystyle{ \{ (x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)}) \} }[/math] of [math]\displaystyle{ m }[/math] training examples. We can train our neural network using batch gradient descent. In detail, for a single training example [math]\displaystyle{ (x,y) }[/math], we define the cost function with respect to that single example to be:
  [math]\displaystyle{ \begin{align} J(W,b; x,y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2. \end{align} }[/math]
  This is a (one-half) squared-error cost function. Given a training set of …
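As a hedged illustration of how this per-example cost drives a batch update, the sketch below substitutes a linear hypothesis [math]\displaystyle{ h_{W,b}(x) = Wx + b }[/math] for the quoted neural-network hypothesis; that substitution, and the function names cost_single and batch_step, are assumptions made only for illustration:

```python
import numpy as np

def cost_single(W, b, x, y):
    # One-half squared-error cost for a single example:
    # J(W, b; x, y) = 1/2 * ||h_{W,b}(x) - y||^2,
    # with an assumed linear hypothesis h_{W,b}(x) = W @ x + b.
    return 0.5 * np.sum((W @ x + b - y) ** 2)

def batch_step(W, b, X, Y, alpha=0.1):
    # One batch gradient descent step on the mean of the per-example costs.
    m = X.shape[0]
    grad_W = np.zeros_like(W)
    grad_b = np.zeros_like(b)
    for x, y in zip(X, Y):
        err = W @ x + b - y          # residual h_{W,b}(x) - y
        grad_W += np.outer(err, x)   # d/dW of 1/2 * ||Wx + b - y||^2
        grad_b += err                # d/db of the same cost
    return W - alpha * grad_W / m, b - alpha * grad_b / m
```

Each call to batch_step performs exactly one weight update from the gradient averaged over the entire training set, which is the defining property of batch gradient descent described above.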