Batch Normalization-based NNet Algorithm

A Batch Normalization-based NNet Algorithm is a deep neural network training algorithm that normalizes layer inputs over each mini-batch during mini-batch gradient descent.
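
The per-mini-batch normalization step can be illustrated with the following minimal sketch (an illustrative example, not code from any of the referenced papers; the function and variable names are hypothetical): for each feature, the mini-batch mean and variance are used to standardize the layer's inputs, which are then rescaled and shifted by learnable parameters.

<syntaxhighlight lang="python">
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch x of shape (batch_size, num_features),
    then apply the learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero-mean, unit-variance activations
    return gamma * x_hat + beta            # scaled and shifted output

# Example: a random mini-batch of 32 examples with 4 features.
x = np.random.randn(32, 4) * 3.0 + 5.0
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0))  # approximately 0 for each feature
print(y.std(axis=0))   # approximately 1 for each feature
</syntaxhighlight>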

== References ==

=== 2018 ===

=== 2017a ===

=== 2017b ===
* ([[2017_DeepNeuroevolutionGeneticAlgori|Such et al., 2017]]) ⇒ [[Felipe Petroski Such]], [[Vashisht Madhavan]], [[Edoardo Conti]], [[Joel Lehman]], [[Kenneth O. Stanley]], and [[Jeff Clune]]. ([[2017]]). “[https://arxiv.org/pdf/1712.06567 Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning].” In: arXiv:1712.06567  
** QUOTE: ... The ES requires virtual [[Batch Normalization-based NNet Algorithm|batch normalization]] to generate diverse policies amongst the pseudo-offspring, which is necessary for accurate finite difference approximation ([[Salimans et al., 2016]]). Virtual [[Batch Normalization-based NNet Algorithm|batch normalization]] requires additional forward passes for a reference batch – a [[random set of observations]] chosen at the start of training – to compute layer normalization statistics that are then used in the same manner as [[Batch Normalization-based NNet Algorithm|batch normalization]] ([[Ioffe & Szegedy, 2015]]). We found that the random GA parameter perturbations generate sufficiently diverse policies without virtual [[Batch Normalization-based NNet Algorithm|batch normalization]] and thus avoid these additional forward passes through the network.
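
The virtual [[Batch Normalization-based NNet Algorithm|batch normalization]] procedure described in the quote above can be sketched as follows (a minimal illustrative example, not code from Such et al. or Salimans et al.; all names are hypothetical): normalization statistics are computed once from a fixed reference batch chosen at the start of training and are then reused to normalize later inputs, in the same way standard batch normalization uses per-mini-batch statistics.

<syntaxhighlight lang="python">
import numpy as np

def reference_batch_stats(ref_batch, eps=1e-5):
    """Compute per-feature statistics from a fixed reference batch
    chosen once at the start of training."""
    mu = ref_batch.mean(axis=0)
    sigma = np.sqrt(ref_batch.var(axis=0) + eps)
    return mu, sigma

def virtual_batch_norm(x, mu, sigma, gamma, beta):
    """Normalize new inputs x with the precomputed reference-batch
    statistics, then scale and shift as in standard batch normalization."""
    return gamma * (x - mu) / sigma + beta

# The reference batch costs an extra forward pass to obtain mu and sigma;
# those statistics are then applied to every subsequent mini-batch.
ref_batch = np.random.randn(128, 4)
mu, sigma = reference_batch_stats(ref_batch)
out = virtual_batch_norm(np.random.randn(32, 4), mu, sigma,
                         gamma=np.ones(4), beta=np.zeros(4))
</syntaxhighlight>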


=== 2016 ===

=== 2015 ===

* (Ioffe & Szegedy, 2015) ⇒ Sergey Ioffe, and Christian Szegedy. (2015). “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” In: Proceedings of the International Conference on Machine Learning, pp. 448-456.
** ABSTRACT: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
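
Following the paper's formulation, the per-mini-batch transform described in this abstract can be written, for a mini-batch <math>B = \{x_1, \ldots, x_m\}</math> of a given activation, with learnable parameters <math>\gamma</math> and <math>\beta</math> and a small constant <math>\epsilon</math> for numerical stability, as:

<math>\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta.</math>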