The multi-layer perceptron (MLP) is the most basic form of a feed-forward neural network (FFNN). A feed-forward neural network is one in which computation flows in a single direction, from the input layer to the output layer.
MLPRegressor: Multi-layer Perceptron regressor.
This model optimizes the squared error using LBFGS or stochastic gradient descent.
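For orientation before the parameter list, here is a minimal end-to-end sketch (not from the original docs) of fitting MLPRegressor on synthetic data; the sample counts and hyperparameters are illustrative choices.

```python
# Minimal sketch: fit MLPRegressor on a synthetic regression problem.
# Dataset sizes and hyperparameters below are illustrative, not prescribed.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = MLPRegressor(hidden_layer_sizes=(100,), max_iter=1000, random_state=0)
reg.fit(X_train, y_train)
print(reg.predict(X_test[:3]))    # predictions for three held-out rows
print(reg.score(X_test, y_test))  # R^2 on the held-out split
```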
Parameters
hidden_layer_sizes : array-like of shape (n_layers - 2,), default=(100,)
The ith element represents the number of neurons in the ith hidden layer.
activation : {'identity', 'logistic', 'tanh', 'relu'}, default='relu'
Activation function for the hidden layer.
- 'identity', no-op activation, useful to implement linear bottleneck, returns f(x) = x
- 'logistic', the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)).
- 'tanh', the hyperbolic tan function, returns f(x) = tanh(x).
- 'relu', the rectified linear unit function, returns f(x) = max(0, x)
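To make the four options above concrete, a quick NumPy sketch applying each activation to the same arbitrary sample values:

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(x)                           # 'identity': f(x) = x
print(1.0 / (1.0 + np.exp(-x)))    # 'logistic': f(x) = 1 / (1 + exp(-x))
print(np.tanh(x))                  # 'tanh':     f(x) = tanh(x)
print(np.maximum(0.0, x))          # 'relu':     f(x) = max(0, x)
```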
solver : {'lbfgs', 'sgd', 'adam'}, default='adam'
The solver for weight optimization.
- 'lbfgs' is an optimizer in the family of quasi-Newton methods.
- 'sgd' refers to stochastic gradient descent.
- 'adam' refers to a stochastic gradient-based optimizer proposed by Diederik Kingma and Jimmy Ba.
Note: The default solver 'adam' works pretty well on relatively large datasets (with thousands of training samples or more) in terms of both training time and validation score. For small datasets, however, 'lbfgs' can converge faster and perform better.
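Following that note, a small sketch of switching to 'lbfgs' on a small dataset; the 100-sample size is just an illustration of "small":

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X_small, y_small = make_regression(n_samples=100, n_features=5, random_state=0)
# On few samples, the quasi-Newton 'lbfgs' solver can converge faster
# than the default 'adam'.
reg = MLPRegressor(solver='lbfgs', max_iter=1000, random_state=0)
reg.fit(X_small, y_small)
print(reg.score(X_small, y_small))
```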
alpha : float, default=0.0001
Strength of the L2 regularization term. The L2 regularization term is divided by the sample size when added to the loss.
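A hedged sketch of what alpha does in practice: larger values penalize the squared weights more strongly, so the fitted weight matrices shrink. The alpha values below are arbitrary illustrations.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
for alpha in (1e-4, 1e-1, 10.0):
    reg = MLPRegressor(alpha=alpha, max_iter=2000, random_state=0).fit(X, y)
    # Frobenius norm of each layer's weight matrix; expect smaller norms
    # as the L2 penalty grows.
    print(alpha, [round(float(np.linalg.norm(W)), 1) for W in reg.coefs_])
```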
batch_size : int, default='auto'
Size of minibatches for stochastic optimizers. If the solver is 'lbfgs', the regressor will not use minibatch. When set to "auto", batch_size=min(200, n_samples).
learning_rate : {'constant', 'invscaling', 'adaptive'}, default='constant'
Learning rate schedule for weight updates.
- 'constant' is a constant learning rate given by 'learning_rate_init'.
- 'invscaling' gradually decreases the learning rate at each time step 't' using an inverse scaling exponent of 'power_t': effective_learning_rate = learning_rate_init / pow(t, power_t) (written out in the sketch after this list).
- 'adaptive' keeps the learning rate constant to 'learning_rate_init' as long as training loss keeps decreasing. Each time two consecutive epochs fail to decrease training loss by at least tol, or fail to increase validation score by at least tol if 'early_stopping' is on, the current learning rate is divided by 5.
Only used when solver='sgd'.
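The 'invscaling' schedule written out as plain Python, to show how the effective rate decays with the step counter t:

```python
def effective_learning_rate(learning_rate_init, t, power_t=0.5):
    # effective_learning_rate = learning_rate_init / pow(t, power_t)
    return learning_rate_init / pow(t, power_t)

for t in (1, 10, 100, 1000):
    print(t, effective_learning_rate(0.001, t))  # 0.001, ~0.000316, 0.0001, ...
```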
learning_rate_init : float, default=0.001
The initial learning rate used. It controls the step-size in updating the weights. Only used when solver='sgd' or 'adam'.
power_t : float, default=0.5
The exponent for inverse scaling learning rate. It is used in updating effective learning rate when the learning_rate is set to 'invscaling'. Only used when solver='sgd'.
max_iter : int, default=200
Maximum number of iterations. The solver iterates until convergence (determined by 'tol') or this number of iterations. For stochastic solvers ('sgd', 'adam'), note that this determines the number of epochs (how many times each data point will be used), not the number of gradient steps.
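If max_iter is reached before the tol criterion, scikit-learn emits a ConvergenceWarning. A sketch of surfacing it, with max_iter=5 deliberately too small:

```python
import warnings
from sklearn.datasets import make_regression
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
with warnings.catch_warnings():
    # Escalate the warning to an exception so it is easy to observe here.
    warnings.simplefilter("error", ConvergenceWarning)
    try:
        MLPRegressor(max_iter=5, random_state=0).fit(X, y)
    except ConvergenceWarning:
        print("stopped at max_iter=5 before meeting tol; raise max_iter")
```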
shuffle : bool, default=True
Whether to shuffle samples in each iteration. Only used when solver='sgd' or 'adam'.
random_state : int, RandomState instance, default=None
Determines random number generation for weights and bias initialization, train-test split if early stopping is used, and batch sampling when solver='sgd' or 'adam'. Pass an int for reproducible results across multiple function calls. See the scikit-learn glossary entry for random_state.
tol : float, default=1e-4
Tolerance for the optimization. When the loss or score has not improved by at least tol for n_iter_no_change consecutive iterations, convergence is considered reached and training stops, unless learning_rate is set to 'adaptive'.
verbose : bool, default=False
Whether to print progress messages to stdout.
warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution. See the scikit-learn glossary entry for warm_start.
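A sketch of warm_start in use: the second fit call resumes from the weights of the first rather than re-initializing. The iteration counts are illustrative.

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
reg = MLPRegressor(warm_start=True, max_iter=50, random_state=0)
reg.fit(X, y)
print(reg.n_iter_)  # epochs run so far (at most 50)
reg.fit(X, y)       # resumes from the fitted weights instead of resetting
print(reg.n_iter_)  # accumulated count, unless tol stops training earlier
```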
momentum : float, default=0.9
Momentum for gradient descent update. Should be between 0 and 1. Only used when solver='sgd'.
nesterovs_momentum : bool, default=True
Whether to use Nesterov's momentum. Only used when solver='sgd' and momentum > 0.
early_stopping : bool, default=False
Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside validation_fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. Only effective when solver='sgd' or 'adam'.
validation_fraction : float, default=0.1
The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True.
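Combining the two parameters above in a sketch: a tenth of the training data is held out internally, and training stops once the held-out R^2 stalls.

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=1000, n_features=10, noise=0.5, random_state=0)
reg = MLPRegressor(early_stopping=True, validation_fraction=0.1,
                   max_iter=1000, random_state=0)
reg.fit(X, y)
print(reg.n_iter_)                 # epochs actually run before stopping
print(reg.best_validation_score_)  # best held-out R^2 observed
```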
beta_1 : float, default=0.9
Exponential decay rate for estimates of first moment vector in adam, should be in [0, 1). Only used when solver='adam'.
beta_2 : float, default=0.999
Exponential decay rate for estimates of second moment vector in adam, should be in [0, 1). Only used when solver='adam'.
epsilon : float, default=1e-8
Value for numerical stability in adam. Only used when solver='adam'.
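For intuition about beta_1, beta_2, and epsilon, a hedged sketch of the Adam moment updates as published by Kingma and Ba; this follows the paper's algorithm, not scikit-learn's internal code.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.001, beta_1=0.9, beta_2=0.999, eps=1e-8):
    """One Adam update for weights w given gradient g at time step t."""
    m = beta_1 * m + (1 - beta_1) * g        # first-moment (mean) estimate
    v = beta_2 * v + (1 - beta_2) * g ** 2   # second-moment estimate
    m_hat = m / (1 - beta_1 ** t)            # bias correction for early steps
    v_hat = v / (1 - beta_2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # eps avoids division by zero
    return w, m, v
```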
n_iter_no_change : int, default=10
Maximum number of epochs to not meet tol improvement. Only effective when solver='sgd' or 'adam'.
max_fun : int, default=15000
Only used when solver='lbfgs'. Maximum number of function calls. The solver iterates until convergence (determined by tol), number of iterations reaches max_iter, or this number of function calls. Note that number of function calls will be greater than or equal to the number of iterations for the MLPRegressor.
Attributes
loss_ : float
The current loss computed with the loss function.
best_loss_ : float
The minimum loss reached by the solver throughout fitting. If early_stopping=True, this attribute is set to None. Refer to the best_validation_score_ fitted attribute instead. Only accessible when solver='sgd' or 'adam'.
loss_curve_ : list of shape (n_iter_,)
Loss value evaluated at the end of each training step. The ith element in the list represents the loss at the ith iteration. Only accessible when solver='sgd' or 'adam'.
validation_scores_ : list of shape (n_iter_,) or None
The score at each iteration on a held-out validation set. The score reported is the R2 score. Only available if early_stopping=True, otherwise the attribute is set to None. Only accessible when solver='sgd' or 'adam'.
best_validation_score_ : float or None
The best validation score (i.e. R2 score) that triggered the early stopping. Only available if early_stopping=True, otherwise the attribute is set to None. Only accessible when solver='sgd' or 'adam'.
t_ : int
The number of training samples seen by the solver during fitting. Mathematically equal to n_iters * X.shape[0]; it serves as the time step 't' and is used by the optimizer's learning rate scheduler.
coefs_ : list of shape (n_layers - 1,)
The ith element in the list represents the weight matrix corresponding to layer i.
intercepts_ : list of shape (n_layers - 1,)
The ith element in the list represents the bias vector corresponding to layer i + 1.
n_features_in_ : int
Number of features seen during fit.
feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
n_iter_ : int
The number of iterations the solver has run.
n_layers_ : int
Number of layers.
n_outputs_ : int
Number of outputs.
out_activation_ : str
Name of the output activation function.
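To close the attribute list, a sketch that reads several of the fitted attributes after training; the printed shapes depend on the chosen architecture.

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=300, n_features=8, random_state=0)
reg = MLPRegressor(max_iter=500, random_state=0).fit(X, y)

print(reg.n_layers_)                  # input + hidden + output layer count
print([W.shape for W in reg.coefs_])  # one weight matrix per layer pair
print(reg.loss_, reg.n_iter_)         # final training loss and epochs run
print(reg.loss_curve_[:5])            # loss after each of the first 5 epochs
print(reg.out_activation_)            # 'identity' for regression
```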