Maximum Likelihood Estimation System
A Maximum Likelihood Estimation System is a continuous optimization system that implements an MLE algorithm to solve an MLE task.
- AKA: MLE-based System.
- Context:
- …
- Example(s):
- http://www.mathworks.com/help/stats/mle.html
phat = mle(data)
phat = mle(data,'distribution',dist)
phat = mle(data,'pdf',pdf,'start',start,'cdf',cdf)
phat = mle(data,'logpdf',logpdf,'start',start,'logsf',logsf)
phat = mle(data,'nloglf',nloglf,'start',start)
(A Python sketch of this call pattern appears after the list below.)
- …
- Counter-Example(s):
- See: Parameter Optimization System, Supervised Point Estimation, Mixture Models.
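The MATLAB mle call forms listed above illustrate the typical interface of an MLE system: the user supplies data together with either a named distribution or a custom (log-)density, and the system returns the fitted parameters phat. Below is a minimal Python sketch of the same pattern, assuming normally distributed data; the names neg_log_lik and phat are illustrative, not part of any library API.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data from a normal distribution with known parameters
data = np.random.normal(loc=10.0, scale=2.0, size=1000)

# Negative log-likelihood of a normal model; params = [mu, sigma]
def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0:  # keep the optimizer inside the valid parameter region
        return np.inf
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

# Numerically maximize the likelihood (i.e., minimize the NLL),
# analogous to what a call like phat = mle(data) performs internally
result = minimize(neg_log_lik, x0=[1.0, 1.0], method='Nelder-Mead')
phat = result.x  # estimated [mu, sigma]
print(phat)

Here the optimizer plays the role of the MLE algorithm: any routine that can maximize the log-likelihood over the continuous parameter space will do.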
References
2014
- https://github.com/ibab/python-mle
- QUOTE: mle is a Python framework for constructing probability models and estimating their parameters from data using the Maximum Likelihood approach. While being less flexible than a full Bayesian probabilistic modeling framework, it can handle larger datasets (> 10^6 entries) and more complex statistical models.
To achieve maximum performance, this package (like pymc) uses Theano to optimize and compile statistical models. This also means that models can automatically be evaluated using multiple CPU cores or GPUs. Derivatives used for the likelihood optimization are calculated using automatic differentiation.
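The key point of the quoted passage is that the likelihood is optimized using derivatives obtained via automatic differentiation. As a rough sketch of derivative-based likelihood optimization, the snippet below uses a hand-coded analytic gradient with scipy's BFGS, rather than python-mle's own Theano-based API (which is not reproduced here); the function names nll and grad_nll are illustrative.

import numpy as np
from scipy.optimize import minimize

# A large simulated dataset, in the spirit of the > 10^6 entries mentioned above
data = np.random.normal(loc=3.0, scale=1.5, size=10**6)

# NLL of a normal model in terms of (mu, log_sigma); parameterizing by
# log(sigma) keeps the scale positive without explicit constraints
def nll(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (data - mu) / sigma
    return data.size * (log_sigma + 0.5*np.log(2*np.pi)) + 0.5*np.sum(z**2)

# Analytic gradient of the NLL (what an autodiff system would derive for us)
def grad_nll(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (data - mu) / sigma
    d_mu = -np.sum(z) / sigma
    d_log_sigma = data.size - np.sum(z**2)
    return np.array([d_mu, d_log_sigma])

result = minimize(nll, x0=[0.0, 0.0], jac=grad_nll, method='BFGS')
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)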
2013
# import the packages
import numpy as np
from scipy.optimize import minimize
import scipy.stats as stats
#
# Set up your x values
x = np.linspace(0, 100, num=100)
#
# Set up your observed y values with a known slope (2.4), intercept (5), and sd (4)
yObs = 5 + 2.4*x + np.random.normal(0, 4, 100)
#
# Define the likelihood function where params is a list of initial parameter estimates
def regressLL(params):
    # Resave the initial parameter guesses
    b0 = params[0]
    b1 = params[1]
    sd = params[2]
    #
    # Calculate the predicted values from the initial parameter guesses
    yPred = b0 + b1*x
    #
    # Calculate the negative log-likelihood as the negative sum of the log of a normal
    # PDF where the observed values are normally distributed around the mean (yPred)
    # with a standard deviation of sd
    logLik = -np.sum(stats.norm.logpdf(yObs, loc=yPred, scale=sd))
    #
    # Tell the function to return the NLL (this is what will be minimized)
    return logLik
#
# Make a list of initial parameter guesses (b0, b1, sd)
initParams = [1, 1, 1]
#
# Run the minimizer on the NLL (Nelder-Mead does not require derivatives)
results = minimize(regressLL, initParams, method='Nelder-Mead')
print(results.x)  # estimated b0, b1, sd
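As a quick sanity check (not part of the original snippet), the MLE intercept and slope should agree closely with an ordinary least-squares fit, since for normally distributed errors OLS is the maximum likelihood solution:

# Hypothetical check: compare against OLS, which coincides with MLE under normal errors
b1_ols, b0_ols = np.polyfit(x, yObs, deg=1)
print(b0_ols, b1_ols)  # should be close to results.x[0] and results.x[1]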
2012
- http://en.wikibooks.org/wiki/R_Programming/Maximum_Likelihood#Example_with_a_logistic_distribution
- For instance, we draw from a logistic distribution and estimate its parameters using fitdistr.
# fitdistr is in the MASS package
library(MASS)
# draw from a gumbel distribution using the inverse cdf simulation method
e.1 <- -log(-log(runif(10000,0,1)))
e.2 <- -log(-log(runif(10000,0,1)))
u <- e.2 - e.1  # u follows a logistic distribution (difference between two gumbels)
fitdistr(u, densfun=dlogis, start=list(location=0, scale=1))
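A rough Python equivalent of this R example, using scipy.stats.logistic.fit (which performs maximum likelihood fitting) in place of R's fitdistr:

import numpy as np
from scipy import stats

# Draw from a Gumbel distribution via the inverse-CDF method, twice
u1 = -np.log(-np.log(np.random.uniform(0, 1, 10000)))
u2 = -np.log(-np.log(np.random.uniform(0, 1, 10000)))
u = u2 - u1  # the difference of two Gumbels follows a logistic distribution

# Maximum likelihood estimates of location and scale (true values: 0 and 1)
loc_hat, scale_hat = stats.logistic.fit(u)
print(loc_hat, scale_hat)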