Shakir Mohamed
Shakir Mohamed is a machine learning researcher known for work on deep generative models and approximate Bayesian inference.
References
- Personal Webpage: http://blog.shakirm.com
- Google Scholar Author Page: https://scholar.google.com/citations?user=cIlDEugAAAAJ
2015
- (Mohamed, 2015) ⇒ Shakir Mohamed (2015). “A Statistical View of Deep Learning (V): Generalisation and Regularisation.” In: Personal Blog, 10 May 2015.
- QUOTE: The principal technique for addressing overfitting in deep learning is regularisation: adding additional penalties to our training objective that prevent the model parameters from becoming large and from fitting to the idiosyncrasies of the training data. This transforms our estimation framework from maximum likelihood into maximum penalised likelihood, or more commonly maximum a posteriori (MAP) estimation (a shrinkage estimator). For a deep model with loss function L(θ) and parameters θ, we instead use the modified loss that includes a regularisation function R:
$\mathcal{L}(\theta) = -\sum_{n} \log p(y_n \mid x_n, \theta) + \frac{1}{\lambda} R(\theta)$
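A minimal sketch of this penalised loss follows, assuming a logistic-regression likelihood for p(yₙ | xₙ, θ) and an L2 regulariser R(θ) = ‖θ‖²; the function name `penalised_loss` and the synthetic data are illustrative assumptions, not from the quoted post.

```python
import numpy as np

def penalised_loss(theta, X, y, lam):
    """Penalised negative log-likelihood, following the quoted formula:
    L(theta) = -sum_n log p(y_n | x_n, theta) + (1/lambda) R(theta).
    Assumes a Bernoulli (logistic-regression) likelihood and an L2
    regulariser R(theta) = ||theta||^2 (both illustrative choices)."""
    logits = X @ theta
    # Bernoulli NLL: sum_n [log(1 + exp(z_n)) - y_n * z_n], with y_n in {0, 1}.
    nll = np.sum(np.logaddexp(0.0, logits) - y * logits)
    # Quadratic penalty that shrinks parameters toward zero
    # (MAP estimation under a Gaussian prior on theta).
    reg = theta @ theta
    return nll + reg / lam  # the quote scales R(theta) by 1/lambda

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
print(penalised_loss(np.zeros(3), X, y, lam=10.0))
```

At θ = 0 the penalty vanishes and the loss reduces to the unregularised negative log-likelihood; under the 1/λ convention in the quoted formula, smaller λ means stronger shrinkage.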
2014
- (Rezende et al., 2014) ⇒ Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. (2014). “Stochastic Backpropagation and Approximate Inference in Deep Generative Models.” In: Proceedings of the 31st International Conference on Machine Learning (ICML-2014).
2007
- (Nelwamondo et al., 2007) ⇒ Fulufhelo V. Nelwamondo, Shakir Mohamed, and Tshilidzi Marwala. (2007). “Missing Data: A Comparison of Neural Network and Expectation Maximization Techniques.” In: Current Science.