sklearn.neural_network.MLPRegressor
A sklearn.neural_network.MLPRegressor is a Multi-layer Perceptron Regression System within sklearn.neural_network.
- Context
- ...
- Example(s):
- ...
- Counter-Example(s):
- ...
- See: Classification System, Regularization Task, Ridge Regression Task.
References
2017a
- (Scikit-Learn, 2017A) ⇒ http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html Retrieved:2017-10-22
- QUOTE:
class sklearn.neural_network.MLPRegressor(hidden_layer_sizes=(100, ), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
Multi-layer Perceptron regressor. This model optimizes the squared-loss using LBFGS or stochastic gradient descent.
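The constructor signature above can be exercised with a minimal fitting sketch. This is an illustrative example, not from the quoted documentation; the toy data and the raised max_iter (to let adam converge on this small problem) are assumptions, while the remaining parameter values mirror the defaults quoted above.

```python
# Minimal usage sketch of sklearn.neural_network.MLPRegressor,
# using the quoted default-style parameters (activation='relu', solver='adam').
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(200, 2))   # 200 samples, 2 features (toy data)
y = 2.0 * X[:, 0] + X[:, 1] ** 2        # continuous target

reg = MLPRegressor(hidden_layer_sizes=(100,), activation='relu',
                   solver='adam', alpha=0.0001,
                   max_iter=2000,       # raised from the default 200 so training converges
                   random_state=0)
reg.fit(X, y)
print(reg.predict(X[:3]))               # three continuous predictions
```

After fitting, `reg.score(X, y)` returns the coefficient of determination R² of the prediction, as with other scikit-learn regressors.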
2017b
- (sklearn, 2017) ⇒ http://scikit-learn.org/stable/modules/neural_networks_supervised.html#regression Retrieved:2017-12-3.
- QUOTE: Class MLPRegressor implements a multi-layer perceptron (MLP) that trains using backpropagation with no activation function in the output layer, which can also be seen as using the identity function as activation function. Therefore, it uses the square error as the loss function, and the output is a set of continuous values.

MLPRegressor also supports multi-output regression, in which a sample can have more than one target.
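The multi-output case described above can be sketched as follows: when `y` is a 2-D array, each column is a separate target, and `predict` returns an array of the same width. The data and hyperparameters here are illustrative assumptions, not part of the quoted documentation.

```python
# Sketch of multi-output regression with MLPRegressor:
# each sample has two targets, so Y has shape (n_samples, 2).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(1)
X = rng.uniform(-1, 1, size=(150, 3))
Y = np.column_stack([X.sum(axis=1),        # target 1
                     X[:, 0] - X[:, 2]])   # target 2

reg = MLPRegressor(hidden_layer_sizes=(50,), max_iter=3000, random_state=1)
reg.fit(X, Y)                              # Y.shape == (150, 2)
print(reg.predict(X[:2]).shape)            # (2, 2): two samples, two targets each
```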