class sklearn.linear_model.LarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize=True, precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True) [source]
Cross-validated Least Angle Regression model. See glossary entry for cross-validation estimator. Read more in the User Guide. Parameters
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e., data is expected to be centered).
verbosebool or int, default=False
Sets the verbosity amount.
max_iterint, default=500
Maximum number of iterations to perform.
normalizebool, default=True
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precomputebool, ‘auto’ or array-like, default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the estimator decides. The Gram matrix cannot be passed as an argument since we will use only subsets of X.
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- integer, to specify the number of folds,
- a CV splitter,
- an iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
max_n_alphasint, default=1000
The maximum number of points on the path used to compute the residuals in the cross-validation.
n_jobsint or None, default=None
Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten. Attributes
active_list of length n_alphas or list of such lists
Indices of active variables at the end of the path. If this is a list of lists, the outer list length is n_targets.
coef_array-like of shape (n_features,)
Parameter vector (w in the formulation formula).
intercept_float
Independent term in decision function.
coef_path_array-like of shape (n_features, n_alphas)
The varying values of the coefficients along the path.
alpha_float
The estimated regularization parameter alpha.
alphas_array-like of shape (n_alphas,)
The different values of alpha along the path.
cv_alphas_array-like of shape (n_cv_alphas,)
All the values of alpha along the path for the different folds.
mse_path_array-like of shape (n_folds, n_cv_alphas)
The mean squared error on the left-out data for each fold along the path (alpha values given by cv_alphas_).
n_iter_array-like or int
The number of iterations run by Lars with the optimal alpha. See also
lars_path, LassoLars, LassoLarsCV
Examples >>> from sklearn.linear_model import LarsCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_samples=200, noise=4.0, random_state=0)
>>> reg = LarsCV(cv=5).fit(X, y)
>>> reg.score(X, y)
0.9996...
>>> reg.alpha_
0.0254...
>>> reg.predict(X[:1,])
array([154.0842...])
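The cross-validation attributes make it possible to see how alpha_ was chosen. A minimal sketch, continuing the example above: the alpha minimizing the mean left-out error should coincide with reg.alpha_ (the exact tie-breaking is an implementation detail, so treat the check as indicative):
>>> import numpy as np
>>> # mse_path_ has shape (n_folds, n_cv_alphas); average over folds
>>> mean_mse = reg.mse_path_.mean(axis=0)
>>> best_alpha = reg.cv_alphas_[np.argmin(mean_mse)]
>>> # best_alpha should coincide with reg.alpha_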
Methods
fit(X, y) Fit the model using X, y as training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples, n_features)
Training data.
yarray-like of shape (n_samples,)
Target values. Returns
selfobject
Returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
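As a sanity check, the formula above can be evaluated by hand. A minimal sketch, assuming the reg, X, and y fitted in the example above, comparing it with score and with sklearn.metrics.r2_score:
>>> from sklearn.metrics import r2_score
>>> y_pred = reg.predict(X)
>>> u = ((y - y_pred) ** 2).sum()  # residual sum of squares
>>> v = ((y - y.mean()) ** 2).sum()  # total sum of squares
>>> manual_r2 = 1 - u / v
>>> # manual_r2, reg.score(X, y) and r2_score(y, y_pred) all agree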
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
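The <component>__<parameter> convention is easiest to see on a pipeline. A minimal sketch (the step names "scale" and "reg" are arbitrary choices for illustration):
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import LarsCV
>>> pipe = Pipeline([("scale", StandardScaler()), ("reg", LarsCV())])
>>> # Update the nested LarsCV step through the pipeline
>>> pipe = pipe.set_params(reg__max_iter=250)
>>> pipe.get_params()["reg__max_iter"]
250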
sklearn.linear_model.lars_path(X, y, Xy=None, *, Gram=None, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.220446049250313e-16, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False) [source]
Compute Least Angle Regression or Lasso path using the LARS algorithm [1]. The optimization objective for the case method=’lasso’ is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
In the case of method=’lars’, the objective function is only known in the form of an implicit equation (see discussion in [1]). Read more in the User Guide. Parameters
XNone or array-like of shape (n_samples, n_features)
Input data. Note that if X is None then the Gram matrix must be specified, i.e., cannot be None or False.
yNone or array-like of shape (n_samples,)
Input targets.
Xyarray-like of shape (n_samples,) or (n_samples, n_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
GramNone, ‘auto’, array-like of shape (n_features, n_features), default=None
Precomputed Gram matrix (X’ * X); if 'auto', the Gram matrix is precomputed from the given X if there are more samples than features.
max_iterint, default=500
Maximum number of iterations to perform, set to infinity for no limit.
alpha_minfloat, default=0
Minimum correlation along the path. It corresponds to the regularization parameter alpha in the Lasso.
method{‘lar’, ‘lasso’}, default=’lar’
Specifies the returned model. Select 'lar' for Least Angle Regression, 'lasso' for the Lasso.
copy_Xbool, default=True
If False, X is overwritten.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Grambool, default=True
If False, Gram is overwritten.
verboseint, default=0
Controls output verbosity.
return_pathbool, default=True
If return_path is True, return the entire path; otherwise return only the last point of the path.
return_n_iterbool, default=False
Whether to return the number of iterations.
positivebool, default=False
Restrict coefficients to be >= 0. This option is only allowed with method ‘lasso’. Note that the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent lasso_path function. Returns
alphasarray-like of shape (n_alphas + 1,)
Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features or the number of nodes in the path with alpha >= alpha_min, whichever is smaller.
activearray-like of shape (n_alphas,)
Indices of active variables at the end of the path.
coefsarray-like of shape (n_features, n_alphas + 1)
Coefficients along the path.
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. See also
lars_path_gram
lasso_path
lasso_path_gram
LassoLars
Lars
LassoLarsCV
LarsCV
sklearn.decomposition.sparse_encode
References
[1] “Least Angle Regression”, Efron et al., http://statweb.stanford.edu/~tibs/ftp/lars.pdf
[2] Wikipedia entry on the Least-angle regression
[3] Wikipedia entry on the Lasso
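A minimal sketch of calling lars_path on synthetic data (the dataset parameters are arbitrary); the returned coefs hold one column per knot of the piecewise-linear path:
>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import lars_path
>>> X, y = make_regression(n_samples=100, n_features=10, noise=4.0, random_state=0)
>>> alphas, active, coefs = lars_path(X, y, method='lasso')
>>> # alphas decreases along the path; coefs has shape (n_features, n_alphas + 1)
>>> coefs.shape[0] == X.shape[1]
True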
sklearn.linear_model.lars_path_gram(Xy, Gram, *, n_samples, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.220446049250313e-16, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False) [source]
lars_path in the sufficient-statistics mode [1]. The optimization objective for the case method=’lasso’ is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
In the case of method=’lars’, the objective function is only known in the form of an implicit equation (see discussion in [1]). Read more in the User Guide. Parameters
Xyarray-like of shape (n_samples,) or (n_samples, n_targets)
Xy = np.dot(X.T, y).
Gramarray-like of shape (n_features, n_features)
Gram = np.dot(X.T, X).
n_samplesint or float
Equivalent sample size.
max_iterint, default=500
Maximum number of iterations to perform, set to infinity for no limit.
alpha_minfloat, default=0
Minimum correlation along the path. It corresponds to the regularization parameter alpha in the Lasso.
method{‘lar’, ‘lasso’}, default=’lar’
Specifies the returned model. Select 'lar' for Least Angle Regression, 'lasso' for the Lasso.
copy_Xbool, default=True
If False, X is overwritten.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Grambool, default=True
If False, Gram is overwritten.
verboseint, default=0
Controls output verbosity.
return_pathbool, default=True
If return_path is True, return the entire path; otherwise return only the last point of the path.
return_n_iterbool, default=False
Whether to return the number of iterations.
positivebool, default=False
Restrict coefficients to be >= 0. This option is only allowed with method ‘lasso’. Note that the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent lasso_path function. Returns
alphasarray-like of shape (n_alphas + 1,)
Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features or the number of nodes in the path with alpha >= alpha_min, whichever is smaller.
activearray-like of shape (n_alphas,)
Indices of active variables at the end of the path.
coefsarray-like of shape (n_features, n_alphas + 1)
Coefficients along the path.
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. See also
lars_path
lasso_path
lasso_path_gram
LassoLars
Lars
LassoLarsCV
LarsCV
sklearn.decomposition.sparse_encode
References
[1] “Least Angle Regression”, Efron et al., http://statweb.stanford.edu/~tibs/ftp/lars.pdf
[2] Wikipedia entry on the Least-angle regression
[3] Wikipedia entry on the Lasso
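Since only the sufficient statistics are needed, the same path can be computed without keeping X around. A minimal sketch on synthetic data (the dataset parameters are arbitrary):
>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import lars_path_gram
>>> X, y = make_regression(n_samples=100, n_features=10, noise=4.0, random_state=0)
>>> Gram = np.dot(X.T, X)  # precomputed X' * X
>>> Xy = np.dot(X.T, y)    # precomputed X' * y
>>> alphas, active, coefs = lars_path_gram(Xy=Xy, Gram=Gram, n_samples=X.shape[0], method='lasso')
>>> # should match lars_path(X, y, method='lasso') on the same data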
class sklearn.linear_model.Lasso(alpha=1.0, *, fit_intercept=True, normalize=False, precompute=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic') [source]
Linear Model trained with L1 prior as regularizer (aka the Lasso). The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Technically the Lasso model is optimizing the same objective function as the Elastic Net with l1_ratio=1.0 (no L2 penalty). Read more in the User Guide. Parameters
alphafloat, default=1.0
Constant that multiplies the L1 term. Defaults to 1.0. alpha = 0 is equivalent to ordinary least squares, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised; use the LinearRegression object instead.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=False
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the estimator decides. The Gram matrix can also be passed as an argument. For sparse input this option is always True to preserve sparsity.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.
positivebool, default=False
When set to True, forces the coefficients to be positive.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes
coef_ndarray of shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the cost function formula).
dual_gap_float or ndarray of shape (n_targets,)
For the given alpha, the dual gaps at the end of the optimization, same shape as each observation of y.
sparse_coef_sparse matrix of shape (n_features, 1) or (n_targets, n_features)
Sparse representation of the fitted coef_.
intercept_float or ndarray of shape (n_targets,)
Independent term in decision function.
n_iter_int or list of int
Number of iterations run by the coordinate descent solver to reach the specified tolerance. See also
lars_path
lasso_path
LassoLars
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
Notes The algorithm used to fit the model is coordinate descent. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Examples >>> from sklearn import linear_model
>>> clf = linear_model.Lasso(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
Lasso(alpha=0.1)
>>> print(clf.coef_)
[0.85 0. ]
>>> print(clf.intercept_)
0.15...
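The equivalence with Elastic Net at l1_ratio=1.0 noted above can be checked directly. A minimal sketch (agreement is up to solver tolerance, hence np.allclose):
>>> import numpy as np
>>> from sklearn.linear_model import Lasso, ElasticNet
>>> X = np.array([[0., 0.], [1., 1.], [2., 2.]])
>>> y = np.array([0., 1., 2.])
>>> lasso = Lasso(alpha=0.1).fit(X, y)
>>> enet = ElasticNet(alpha=0.1, l1_ratio=1.0).fit(X, y)  # pure L1 penalty
>>> np.allclose(lasso.coef_, enet.coef_)
True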
Methods
fit(X, y[, sample_weight, check_input]) Fit model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute elastic net path with coordinate descent.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None, check_input=True) [source]
Fit model with coordinate descent. Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
Data.
y{ndarray, sparse matrix} of shape (n_samples,) or (n_samples, n_targets)
Target. Will be cast to X’s dtype if necessary.
sample_weightfloat or array-like of shape (n_samples,), default=None
Sample weight. New in version 0.23.
check_inputbool, default=True
Allows bypassing several input checks. Don’t use this parameter unless you know what you are doing. Notes Coordinate descent is an algorithm that considers each column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.
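Following the note above, a minimal sketch of allocating X in Fortran (column-major) order up front so the coordinate descent solver does not have to copy it (the array sizes are arbitrary):
>>> import numpy as np
>>> from sklearn.linear_model import Lasso
>>> rng = np.random.RandomState(0)
>>> X = np.asfortranarray(rng.randn(50, 5))  # column-major layout from the start
>>> y = rng.randn(50)
>>> clf = Lasso(alpha=0.1).fit(X, y)  # no Fortran-conversion copy needed
>>> X.flags['F_CONTIGUOUS']
True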
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e., the sum of the norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Returned only when return_n_iter is set to True.) See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
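A minimal sketch of calling the static path method directly; with l1_ratio=1.0 it traces the Lasso path described by the class objective (the data is synthetic and the n_alphas value arbitrary):
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import Lasso
>>> X, y = make_regression(n_samples=100, n_features=8, noise=4.0, random_state=0)
>>> # Pure L1 penalty, 20 points on the regularization path
>>> alphas, coefs, dual_gaps = Lasso.path(X, y, l1_ratio=1.0, n_alphas=20)
>>> coefs.shape
(8, 20)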
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
property sparse_coef_
Sparse representation of the fitted coef_.
Examples using sklearn.linear_model.Lasso
Release Highlights for scikit-learn 0.23
Compressive sensing: tomography reconstruction with L1 prior (Lasso)
Lasso on dense and sparse data
Joint feature selection with multi-task Lasso
Lasso and Elastic Net for Sparse Signals
Cross-validation on diabetes Dataset Exercise
class sklearn.linear_model.LassoCV(*, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=None, positive=False, random_state=None, selection='cyclic') [source]
Lasso linear model with iterative fitting along a regularization path. See glossary entry for cross-validation estimator. The best model is selected by cross-validation. The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the User Guide. Parameters
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e., data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the estimator decides. The Gram matrix can also be passed as an argument.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
cvint, cross-validation generator or iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- int, to specify the number of folds,
- a CV splitter,
- an iterable yielding (train, test) splits as arrays of indices.
For int/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
verbosebool or int, default=False
Amount of verbosity.
n_jobsint, default=None
Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
positivebool, default=False
If positive, restrict regression coefficients to be positive.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes
alpha_float
The amount of penalization chosen by cross validation.
coef_ndarray of shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the cost function formula).
intercept_float or ndarray of shape (n_targets,)
Independent term in decision function.
mse_path_ndarray of shape (n_alphas, n_folds)
Mean squared error for the test set on each fold, varying alpha.
alphas_ndarray of shape (n_alphas,)
The grid of alphas used for fitting.
dual_gap_float or ndarray of shape (n_targets,)
The dual gap at the end of the optimization for the optimal alpha (alpha_).
n_iter_int
Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha. See also
lars_path
lasso_path
LassoLars
Lasso
LassoLarsCV
Notes For an example, see examples/linear_model/plot_lasso_model_selection.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Examples >>> from sklearn.linear_model import LassoCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(noise=4, random_state=0)
>>> reg = LassoCV(cv=5, random_state=0).fit(X, y)
>>> reg.score(X, y)
0.9993...
>>> reg.predict(X[:1,])
array([-78.4951...])
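Continuing the example above, a minimal sketch that reads the selected alpha back out of the cross-validation results (note that mse_path_ here has shape (n_alphas, n_folds), so the fold average is over axis=1):
>>> import numpy as np
>>> mean_mse = reg.mse_path_.mean(axis=1)
>>> best_alpha = reg.alphas_[np.argmin(mean_mse)]
>>> # best_alpha should coincide with reg.alpha_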
Methods
fit(X, y) Fit linear model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute Lasso path with coordinate descent.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit linear model with coordinate descent. Fit is on a grid of alphas and the best alpha is estimated by cross-validation. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
static path(*args, **kwargs) [source]
Compute Lasso path with coordinate descent. The Lasso optimization function varies for mono and multi-outputs. For mono-output tasks it is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e., the sum of the norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None, alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y), which can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See also
lars_path
Lasso
LassoLars
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. To avoid unnecessary memory duplication, the X argument of the fit method should be passed directly as a Fortran-contiguous numpy array. Note that in certain cases the Lars solver may be significantly faster at computing this functionality. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path. Examples Comparing lasso_path and lars_path with interpolation: >>> import numpy as np
>>> from sklearn.linear_model import lasso_path
>>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0. 0. 0.46874778]
[0.2159048 0.4425765 0.23689075]]
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
... coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0. 0. 0.46915237]
[0.2159048 0.4425765 0.23668876]]
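To connect the path output back to individual estimators: each column of coef_path should agree, up to the solver tolerance, with a Lasso fit at the matching alpha, provided the same centering convention is used (lasso_path works on the data as given, so fit_intercept=False is the comparable setting). A sketch reusing the toy data above:
>>> from sklearn.linear_model import Lasso
>>> alphas_out, coef_path2, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> single = Lasso(alpha=alphas_out[0], fit_intercept=False).fit(X, y)
>>> np.allclose(single.coef_, coef_path2[:, 0], atol=1e-3)  # doctest: +SKIP
True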
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) with respect to y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to stay consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
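The definition above is what sklearn.metrics.r2_score computes, which can be checked directly (a sketch reusing reg, X and y from the class example):
>>> import numpy as np
>>> from sklearn.metrics import r2_score
>>> np.isclose(reg.score(X, y), r2_score(y, reg.predict(X)))  # doctest: +SKIP
True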
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.linear_model.lassocv#sklearn.linear_model.LassoCV |
Examples using sklearn.linear_model.LassoCV
Combine predictors using stacking
Model-based and sequential feature selection
Lasso model selection: Cross-Validation / AIC / BIC
Common pitfalls in interpretation of coefficients of linear models
Cross-validation on diabetes Dataset Exercise | sklearn.modules.generated.sklearn.linear_model.lassocv |
class sklearn.linear_model.LassoLars(alpha=1.0, *, fit_intercept=True, verbose=False, normalize=True, precompute='auto', max_iter=500, eps=2.220446049250313e-16, copy_X=True, fit_path=True, positive=False, jitter=None, random_state=None) [source]
Lasso model fit with Least Angle Regression a.k.a. Lars It is a Linear Model trained with an L1 prior as regularizer. The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the User Guide. Parameters
alphafloat, default=1.0
Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
verbosebool or int, default=False
Sets the verbosity amount.
normalizebool, default=True
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precomputebool, ‘auto’ or array-like, default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
max_iterint, default=500
Maximum number of iterations to perform.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
fit_pathbool, default=True
If True the full path is stored in the coef_path_ attribute. If you compute the solution for a large problem or many targets, setting fit_path to False will lead to a speedup, especially with a small alpha.
positivebool, default=False
Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.
jitterfloat, default=None
Upper bound on a uniform noise parameter to be added to the y values, to satisfy the model’s assumption of one-at-a-time computations. Might help with stability. New in version 0.23.
random_stateint, RandomState instance or None, default=None
Determines random number generation for jittering. Pass an int for reproducible output across multiple function calls. See Glossary. Ignored if jitter is None. New in version 0.23. Attributes
alphas_array-like of shape (n_alphas + 1,) or list of such arrays
Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features or the number of nodes in the path with alpha >= alpha_min, whichever is smaller. If this is a list of array-like, the length of the outer list is n_targets.
active_list of length n_alphas or list of such lists
Indices of active variables at the end of the path. If this is a list of lists, the length of the outer list is n_targets.
coef_path_array-like of shape (n_features, n_alphas + 1) or list of such arrays
If a list is passed it’s expected to be one of n_targets such arrays. The varying values of the coefficients along the path. It is not present if the fit_path parameter is False. If this is a list of array-like, the length of the outer list is n_targets.
coef_array-like of shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the formulation formula).
intercept_float or array-like of shape (n_targets,)
Independent term in decision function.
n_iter_array-like or int
The number of iterations taken by lars_path to find the grid of alphas for each target. See also
lars_path
lasso_path
Lasso
LassoCV
LassoLarsCV
LassoLarsIC
sklearn.decomposition.sparse_encode
Examples >>> from sklearn import linear_model
>>> reg = linear_model.LassoLars(alpha=0.01)
>>> reg.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
LassoLars(alpha=0.01)
>>> print(reg.coef_)
[ 0. -0.963257...]
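Because LassoLars optimizes the same objective as the coordinate-descent Lasso estimator, the two should agree on well-conditioned problems once preprocessing matches (a sketch; normalize=False aligns LassoLars with Lasso's default, and agreement holds only up to solver tolerance):
>>> import numpy as np
>>> cd = linear_model.Lasso(alpha=0.01).fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
>>> lars = linear_model.LassoLars(alpha=0.01, normalize=False).fit(
...     [[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
>>> np.allclose(lars.coef_, cd.coef_, atol=1e-3)  # doctest: +SKIP
True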
Methods
fit(X, y[, Xy]) Fit the model using X, y as training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, Xy=None) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples, n_features)
Training data.
yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
Xyarray-like of shape (n_samples,) or (n_samples, n_targets), default=None
Xy = np.dot(X.T, y), which can be precomputed. It is useful only when the Gram matrix is precomputed. Returns
selfobject
Returns an instance of self.
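The Xy shortcut pairs with a precomputed Gram matrix when many fits share the same X; a sketch with hypothetical data (fit_intercept=False and normalize=False keep the raw Gram consistent with what the solver actually uses):
>>> import numpy as np
>>> from sklearn.linear_model import LassoLars
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(100, 20)
>>> y = rng.randn(100)
>>> gram = np.dot(X.T, X)  # computed once, reusable across fits
>>> Xy = np.dot(X.T, y)
>>> reg = LassoLars(alpha=0.1, fit_intercept=False, normalize=False,
...                 precompute=gram).fit(X, y, Xy=Xy)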
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) with respect to y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to stay consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.linear_model.lassolars#sklearn.linear_model.LassoLars |
class sklearn.linear_model.LassoLarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize=True, precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True, positive=False) [source]
Cross-validated Lasso, using the LARS algorithm. See glossary entry for cross-validation estimator. The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the User Guide. Parameters
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
verbosebool or int, default=False
Sets the verbosity amount.
max_iterint, default=500
Maximum number of iterations to perform.
normalizebool, default=True
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precomputebool or ‘auto’, default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix cannot be passed as argument since we will use only subsets of X.
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, integer, to specify the number of folds.
CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, KFold is used. Refer User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
max_n_alphasint, default=1000
The maximum number of points on the path used to compute the residuals in the cross-validation.
n_jobsint or None, default=None
Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
positivebool, default=False
Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients do not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator. As a consequence, using LassoLarsCV only makes sense for problems where a sparse solution is expected and/or reached. Attributes
coef_array-like of shape (n_features,)
parameter vector (w in the formulation formula)
intercept_float
independent term in decision function.
coef_path_array-like of shape (n_features, n_alphas)
the varying values of the coefficients along the path
alpha_float
the estimated regularization parameter alpha
alphas_array-like of shape (n_alphas,)
the different values of alpha along the path
cv_alphas_array-like of shape (n_cv_alphas,)
all the values of alpha along the path for the different folds
mse_path_array-like of shape (n_folds, n_cv_alphas)
the mean square error on left-out for each fold along the path (alpha values given by cv_alphas)
n_iter_array-like or int
the number of iterations run by Lars with the optimal alpha.
active_list of int
Indices of active variables at the end of the path. See also
lars_path, LassoLars, LarsCV, LassoCV
Notes The object solves the same problem as the LassoCV object. However, unlike LassoCV, it finds the relevant alpha values by itself. In general, because of this property, it will be more stable. However, it is more fragile on heavily multicollinear datasets. It is more efficient than LassoCV if only a small number of features are selected compared to the total number, for instance if there are very few samples compared to the number of features. Examples >>> from sklearn.linear_model import LassoLarsCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(noise=4.0, random_state=0)
>>> reg = LassoLarsCV(cv=5).fit(X, y)
>>> reg.score(X, y)
0.9992...
>>> reg.alpha_
0.0484...
>>> reg.predict(X[:1,])
array([-77.8723...])
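Continuing the example, the per-fold error curves can be inspected directly (a sketch; the shapes follow the attribute list above, and the relation between cv_alphas_, mse_path_ and alpha_ is shown as illustrative rather than verified output):
>>> import numpy as np
>>> mean_mse = reg.mse_path_.mean(axis=0)  # average over folds, one value per cv alpha
>>> np.isclose(reg.cv_alphas_[np.argmin(mean_mse)], reg.alpha_)  # doctest: +SKIP
True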
Methods
fit(X, y) Fit the model using X, y as training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples, n_features)
Training data.
yarray-like of shape (n_samples,)
Target values. Returns
selfobject
Returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) with respect to y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to stay consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.linear_model.lassolarscv#sklearn.linear_model.LassoLarsCV |
sklearn.linear_model.LassoLarsCV
class sklearn.linear_model.LassoLarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize=True, precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True, positive=False) [source]
Cross-validated Lasso, using the LARS algorithm. See glossary entry for cross-validation estimator. The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the User Guide. Parameters
fit_interceptbool, default=True
whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
verbosebool or int, default=False
Sets the verbosity amount.
max_iterint, default=500
Maximum number of iterations to perform.
normalizebool, default=True
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precomputebool or ‘auto’ , default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix cannot be passed as argument since we will use only subsets of X.
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, integer, to specify the number of folds.
CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, KFold is used. Refer User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
max_n_alphasint, default=1000
The maximum number of points on the path used to compute the residuals in the cross-validation
n_jobsint or None, default=None
Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
positivebool, default=False
Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients do not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ >
0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator. As a consequence using LassoLarsCV only makes sense for problems where a sparse solution is expected and/or reached. Attributes
coef_array-like of shape (n_features,)
parameter vector (w in the formulation formula)
intercept_float
independent term in decision function.
coef_path_array-like of shape (n_features, n_alphas)
the varying values of the coefficients along the path
alpha_float
the estimated regularization parameter alpha
alphas_array-like of shape (n_alphas,)
the different values of alpha along the path
cv_alphas_array-like of shape (n_cv_alphas,)
all the values of alpha along the path for the different folds
mse_path_array-like of shape (n_folds, n_cv_alphas)
the mean square error on left-out for each fold along the path (alpha values given by cv_alphas)
n_iter_array-like or int
the number of iterations run by Lars with the optimal alpha.
active_list of int
Indices of active variables at the end of the path. See also
lars_path, LassoLars, LarsCV, LassoCV
Notes The object solves the same problem as the LassoCV object. However, unlike the LassoCV, it find the relevant alphas values by itself. In general, because of this property, it will be more stable. However, it is more fragile to heavily multicollinear datasets. It is more efficient than the LassoCV if only a small number of features are selected compared to the total number, for instance if there are very few samples compared to the number of features. Examples >>> from sklearn.linear_model import LassoLarsCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(noise=4.0, random_state=0)
>>> reg = LassoLarsCV(cv=5).fit(X, y)
>>> reg.score(X, y)
0.9992...
>>> reg.alpha_
0.0484...
>>> reg.predict(X[:1,])
array([-77.8723...])
Methods
fit(X, y) Fit the model using X, y as training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples, n_features)
Training data.
yarray-like of shape (n_samples,)
Target values. Returns
selfobject
returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred)
** 2).sum() and \(v\) is the total sum of squares ((y_true -
y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
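A brief sketch of the <component>__<parameter> syntax on a nested object; the Pipeline and its step names here are illustrative, not part of this class's API:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoLarsCV

pipe = Pipeline([("scale", StandardScaler()), ("model", LassoLarsCV())])
# Nested parameters are addressed as <step name>__<parameter name>.
pipe.set_params(model__max_iter=1000, model__cv=10)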
Examples using sklearn.linear_model.LassoLarsCV
Lasso model selection: Cross-Validation / AIC / BIC
class sklearn.linear_model.LassoLarsIC(criterion='aic', *, fit_intercept=True, verbose=False, normalize=True, precompute='auto', max_iter=500, eps=2.220446049250313e-16, copy_X=True, positive=False) [source]
Lasso model fit with Lars using BIC or AIC for model selection. The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
AIC is the Akaike information criterion and BIC is the Bayesian information criterion. Such criteria are useful to select the value of the regularization parameter by making a trade-off between the goodness of fit and the complexity of the model. A good model should explain the data well while being simple. Read more in the User Guide. Parameters
criterion{‘bic’ , ‘aic’}, default=’aic’
The type of criterion to use.
fit_interceptbool, default=True
whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
verbosebool or int, default=False
Sets the verbosity amount.
normalizebool, default=True
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precomputebool, ‘auto’ or array-like, default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
max_iterint, default=500
Maximum number of iterations to perform. Can be used for early stopping.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
positivebool, default=False
Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients do not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator. As a consequence using LassoLarsIC only makes sense for problems where a sparse solution is expected and/or reached. Attributes
coef_array-like of shape (n_features,)
parameter vector (w in the formulation formula)
intercept_float
independent term in decision function.
alpha_float
the alpha parameter chosen by the information criterion
alphas_array-like of shape (n_alphas + 1,) or list of such arrays
Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features or the number of nodes in the path with alpha >= alpha_min, whichever is smaller. If a list, it will be of length n_targets.
n_iter_int
number of iterations run by lars_path to find the grid of alphas.
criterion_array-like of shape (n_alphas,)
The value of the information criteria (‘aic’, ‘bic’) across all alphas. The alpha which has the smallest information criterion is chosen. This value is larger by a factor of n_samples compared to Eqns. 2.15 and 2.16 in (Zou et al., 2007). See also
lars_path, LassoLars, LassoLarsCV
Notes The estimation of the number of degrees of freedom is given by: “On the degrees of freedom of the lasso” Hui Zou, Trevor Hastie, and Robert Tibshirani Ann. Statist. Volume 35, Number 5 (2007), 2173-2192. https://en.wikipedia.org/wiki/Akaike_information_criterion https://en.wikipedia.org/wiki/Bayesian_information_criterion Examples >>> from sklearn import linear_model
>>> reg = linear_model.LassoLarsIC(criterion='bic')
>>> reg.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
LassoLarsIC(criterion='bic')
>>> print(reg.coef_)
[ 0. -1.11...]
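As a sketch of the AIC/BIC trade-off described above, the criterion_ curve and the selected alpha_ can be compared across the two criteria (synthetic data for illustration):
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoLarsIC

X, y = make_regression(n_features=20, noise=4.0, random_state=0)
for crit in ("aic", "bic"):
    reg = LassoLarsIC(criterion=crit).fit(X, y)
    # alpha_ is the grid point where criterion_ is smallest.
    print(crit, reg.alpha_, reg.criterion_.min())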
Methods
fit(X, y[, copy_X]) Fit the model using X, y as training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, copy_X=None) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples, n_features)
training data.
yarray-like of shape (n_samples,)
target values. Will be cast to X’s dtype if necessary
copy_Xbool, default=None
If provided, this parameter will override the choice of copy_X made at instance creation. If True, X will be copied; else, it may be overwritten. Returns
selfobject
returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.linear_model.LassoLarsIC
Lasso model selection: Cross-Validation / AIC / BIC
sklearn.linear_model.lasso_path(X, y, *, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, **params) [source]
Compute Lasso path with coordinate descent The Lasso optimization function varies for mono and multi-outputs. For mono-output tasks it is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
Where: \(||W||_{21} = \sum_i \sqrt{\sum_j w_{ij}^2}\)
i.e. the sum of the norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3
n_alphasint, default=100
Number of alphas along the regularization path
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
**paramskwargs
keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See also
lars_path
Lasso
LassoLars
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Note that in certain cases, the Lars solver may be significantly faster at computing this path. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path. Examples Comparing lasso_path and lars_path with interpolation: >>> import numpy as np
>>> from sklearn.linear_model import lasso_path
>>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0. 0. 0.46874778]
[0.2159048 0.4425765 0.23689075]]
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
... coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0. 0. 0.46915237]
[0.2159048 0.4425765 0.23668876]]
class sklearn.linear_model.LinearRegression(*, fit_intercept=True, normalize=False, copy_X=True, n_jobs=None, positive=False) [source]
Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = (w1, …, wp) to minimize the residual sum of squares between the observed targets in the dataset, and the targets predicted by the linear approximation. Parameters
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
n_jobsint, default=None
The number of jobs to use for the computation. This will only provide speedup for n_targets > 1 and sufficiently large problems. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
positivebool, default=False
When set to True, forces the coefficients to be positive. This option is only supported for dense arrays. New in version 0.24. Attributes
coef_array of shape (n_features, ) or (n_targets, n_features)
Estimated coefficients for the linear regression problem. If multiple targets are passed during the fit (y 2D), this is a 2D array of shape (n_targets, n_features), while if only one target is passed, this is a 1D array of length n_features.
rank_int
Rank of matrix X. Only available when X is dense.
singular_array of shape (min(n_samples, n_features),)
Singular values of X. Only available when X is dense.
intercept_float or array of shape (n_targets,)
Independent term in the linear model. Set to 0.0 if fit_intercept = False. See also
Ridge
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients with l2 regularization.
Lasso
The Lasso is a linear model that estimates sparse coefficients with l1 regularization.
ElasticNet
Elastic-Net is a linear regression model trained with both l1 and l2-norm regularization of the coefficients. Notes From the implementation point of view, this is just plain Ordinary Least Squares (scipy.linalg.lstsq) or Non Negative Least Squares (scipy.optimize.nnls) wrapped as a predictor object. Examples >>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
>>> # y = 1 * x_0 + 2 * x_1 + 3
>>> y = np.dot(X, np.array([1, 2])) + 3
>>> reg = LinearRegression().fit(X, y)
>>> reg.score(X, y)
1.0
>>> reg.coef_
array([1., 2.])
>>> reg.intercept_
3.0000...
>>> reg.predict(np.array([[3, 5]]))
array([16.])
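Consistent with the Notes above, the fit can be reproduced with a plain least-squares solve. A minimal sketch (manual centering stands in for fit_intercept=True; this mirrors, rather than reimplements, the estimator):
import numpy as np
from scipy import linalg
from sklearn.linear_model import LinearRegression

X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]], dtype=float)
y = X @ np.array([1.0, 2.0]) + 3.0

reg = LinearRegression().fit(X, y)

# Center X and y, solve ordinary least squares, then recover the intercept.
Xc, yc = X - X.mean(axis=0), y - y.mean()
w, *_ = linalg.lstsq(Xc, yc)
b = y.mean() - X.mean(axis=0) @ w
print(reg.coef_, w)       # both approx. [1., 2.]
print(reg.intercept_, b)  # both approx. 3.0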
Methods
fit(X, y[, sample_weight]) Fit linear model.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit linear model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data
yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target values. Will be cast to X’s dtype if necessary
sample_weightarray-like of shape (n_samples,), default=None
Individual weights for each sample New in version 0.17: parameter sample_weight support to LinearRegression. Returns
selfreturns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.linear_model.LinearRegression
Principal Component Regression vs Partial Least Squares Regression
Plot individual and voting regression predictions
Ordinary Least Squares and Ridge Regression Variance
Logistic function
Non-negative least squares
Linear Regression Example
Robust linear model estimation using RANSAC
Sparsity Example: Fitting only features 1 and 2
Theil-Sen Regression
Robust linear estimator fitting
Automatic Relevance Determination Regression (ARD)
Bayesian Ridge Regression
Isotonic Regression
Face completion with a multi-output estimators
Plotting Cross-Validated Predictions
Underfitting vs. Overfitting
Using KBinsDiscretizer to discretize continuous features
class sklearn.linear_model.LogisticRegression(penalty='l2', *, dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='lbfgs', max_iter=100, multi_class='auto', verbose=0, warm_start=False, n_jobs=None, l1_ratio=None) [source]
Logistic Regression (aka logit, MaxEnt) classifier. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the ‘multi_class’ option is set to ‘ovr’, and uses the cross-entropy loss if the ‘multi_class’ option is set to ‘multinomial’. (Currently the ‘multinomial’ option is supported only by the ‘lbfgs’, ‘sag’, ‘saga’ and ‘newton-cg’ solvers.) This class implements regularized logistic regression using the ‘liblinear’ library, ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers. Note that regularization is applied by default. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied). The ‘newton-cg’, ‘sag’, and ‘lbfgs’ solvers support only L2 regularization with primal formulation, or no regularization. The ‘liblinear’ solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. The Elastic-Net regularization is only supported by the ‘saga’ solver. Read more in the User Guide. Parameters
penalty{‘l1’, ‘l2’, ‘elasticnet’, ‘none’}, default=’l2’
Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver. If ‘none’ (not supported by the liblinear solver), no regularization is applied. New in version 0.19: l1 penalty with SAGA solver (allowing ‘multinomial’ + L1)
dualbool, default=False
Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features.
tolfloat, default=1e-4
Tolerance for stopping criteria.
Cfloat, default=1.0
Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
fit_interceptbool, default=True
Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.
intercept_scalingfloat, default=1
Useful only when the solver ‘liblinear’ is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight. Note: the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased.
class_weightdict or ‘balanced’, default=None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. New in version 0.17: class_weight=’balanced’
random_stateint, RandomState instance, default=None
Used when solver == ‘sag’, ‘saga’ or ‘liblinear’ to shuffle the data. See Glossary for details.
solver{‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default=’lbfgs’
Algorithm to use in the optimization problem. For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones. For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes. ‘newton-cg’, ‘lbfgs’, ‘sag’ and ‘saga’ handle L2 or no penalty; ‘liblinear’ and ‘saga’ also handle the L1 penalty; ‘saga’ also supports the ‘elasticnet’ penalty; ‘liblinear’ does not support setting penalty='none'.
Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver. Changed in version 0.22: The default solver changed from ‘liblinear’ to ‘lbfgs’ in 0.22.
max_iterint, default=100
Maximum number of iterations taken for the solvers to converge.
multi_class{‘auto’, ‘ovr’, ‘multinomial’}, default=’auto’
If the option chosen is ‘ovr’, then a binary problem is fit for each label. For ‘multinomial’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘multinomial’ is unavailable when solver=’liblinear’. ‘auto’ selects ‘ovr’ if the data is binary, or if solver=’liblinear’, and otherwise selects ‘multinomial’. New in version 0.18: Stochastic Average Gradient descent solver for ‘multinomial’ case. Changed in version 0.22: Default changed from ‘ovr’ to ‘auto’ in 0.22.
verboseint, default=0
For the liblinear and lbfgs solvers set verbose to any positive number for verbosity.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. Useless for liblinear solver. See the Glossary. New in version 0.17: warm_start to support lbfgs, newton-cg, sag, saga solvers.
n_jobsint, default=None
Number of CPU cores used when parallelizing over classes if multi_class='ovr'. This parameter is ignored when the solver is set to ‘liblinear’ regardless of whether ‘multi_class’ is specified or not. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
l1_ratiofloat, default=None
The Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1. Only used if penalty='elasticnet'. Setting l1_ratio=0 is equivalent to using penalty='l2', while setting l1_ratio=1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. Attributes
classes_ndarray of shape (n_classes, )
A list of class labels known to the classifier.
coef_ndarray of shape (1, n_features) or (n_classes, n_features)
Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary. In particular, when multi_class='multinomial', coef_ corresponds to outcome 1 (True) and -coef_ corresponds to outcome 0 (False).
intercept_ndarray of shape (1,) or (n_classes,)
Intercept (a.k.a. bias) added to the decision function. If fit_intercept is set to False, the intercept is set to zero. intercept_ is of shape (1,) when the given problem is binary. In particular, when multi_class='multinomial', intercept_ corresponds to outcome 1 (True) and -intercept_ corresponds to outcome 0 (False).
n_iter_ndarray of shape (n_classes,) or (1, )
Actual number of iterations for all classes. If binary or multinomial, it returns only 1 element. For liblinear solver, only the maximum number of iteration across all classes is given. Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed max_iter. n_iter_ will now report at most max_iter. See also
SGDClassifier
Incrementally trained logistic regression (when given the parameter loss="log").
LogisticRegressionCV
Logistic regression with built-in cross validation. Notes The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. Predict output may not match that of standalone liblinear in certain cases. See differences from liblinear in the narrative documentation. References L-BFGS-B – Software for Large-scale Bound-constrained Optimization
Ciyou Zhu, Richard Byrd, Jorge Nocedal and Jose Luis Morales. http://users.iems.northwestern.edu/~nocedal/lbfgsb.html LIBLINEAR – A Library for Large Linear Classification
https://www.csie.ntu.edu.tw/~cjlin/liblinear/ SAG – Mark Schmidt, Nicolas Le Roux, and Francis Bach
Minimizing Finite Sums with the Stochastic Average Gradient https://hal.inria.fr/hal-00860051/document SAGA – Defazio, A., Bach F. & Lacoste-Julien S. (2014).
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives https://arxiv.org/abs/1407.0202 Hsiang-Fu Yu, Fang-Lan Huang, Chih-Jen Lin (2011). Dual coordinate descent
methods for logistic regression and maximum entropy models. Machine Learning 85(1-2):41-75. https://www.csie.ntu.edu.tw/~cjlin/papers/maxent_dual.pdf Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(random_state=0).fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :])
array([[9.8...e-01, 1.8...e-02, 1.4...e-08],
[9.7...e-01, 2.8...e-02, ...e-08]])
>>> clf.score(X, y)
0.97...
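A sketch tying together several of the constraints listed above: penalty='elasticnet' requires solver='saga' plus an explicit l1_ratio, and saga converges faster on scaled features:
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
# Scaling helps saga converge; elasticnet mixes L1 and L2 via l1_ratio.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, max_iter=1000),
).fit(X, y)
print(clf.score(X, y))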
Methods
decision_function(X) Predict confidence scores for samples.
densify() Convert coefficient matrix to dense array format.
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict class labels for samples in X.
predict_log_proba(X) Predict logarithm of probability estimates.
predict_proba(X) Probability estimates.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
sparsify() Convert coefficient matrix to sparse format.
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
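In the binary case the confidence score is the model's log-odds, so (as a sketch, using scipy's expit for the logistic function) the probability of classes_[1] can be recovered from it:
from scipy.special import expit
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X, y)

scores = clf.decision_function(X[:3])   # signed distances to the hyperplane
print(expit(scores))                    # probability of classes_[1]
print(clf.predict_proba(X[:3])[:, 1])   # matches the line above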
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
Fitted estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,) default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.17: sample_weight support to LogisticRegression. Returns
self
Fitted estimator. Notes The SAGA solver supports both float64 and float32 bit arrays.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample.
predict_log_proba(X) [source]
Predict logarithm of probability estimates. The returned estimates for all classes are ordered by the label of classes. Parameters
Xarray-like of shape (n_samples, n_features)
Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns
Tarray-like of shape (n_samples, n_classes)
Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
predict_proba(X) [source]
Probability estimates. The returned estimates for all classes are ordered by the label of classes. For a multi_class problem, if multi_class is set to be “multinomial” the softmax function is used to find the predicted probability of each class. Otherwise use a one-vs-rest approach, i.e. calculate the probability of each class assuming it to be positive using the logistic function, and normalize these values across all the classes. Parameters
Xarray-like of shape (n_samples, n_features)
Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns
Tarray-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
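A sketch of the multinomial branch described above: predict_proba is the softmax of the per-class decision scores (scipy.special.softmax is used here only for the comparison):
from scipy.special import softmax
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)

# Softmax over the class axis reproduces the reported probabilities.
print(softmax(clf.decision_function(X[:2]), axis=1))
print(clf.predict_proba(X[:2]))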
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
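A sketch of the rule of thumb above: measure the zero fraction of coef_ before deciding to sparsify (an L1-penalized fit is used so that some coefficients are actually zero):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(penalty="l1", solver="saga", C=0.1,
                         max_iter=5000).fit(X, y)

zero_frac = (clf.coef_ == 0).mean()  # fraction of zero coefficients
if zero_frac > 0.5:                  # conversion only pays off when mostly zero
    clf.sparsify()                   # coef_ becomes a scipy.sparse matrix
print(zero_frac, type(clf.coef_))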
sklearn.linear_model.LogisticRegression
class sklearn.linear_model.LogisticRegression(penalty='l2', *, dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='lbfgs', max_iter=100, multi_class='auto', verbose=0, warm_start=False, n_jobs=None, l1_ratio=None) [source]
Logistic Regression (aka logit, MaxEnt) classifier. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the ‘multi_class’ option is set to ‘ovr’, and uses the cross-entropy loss if the ‘multi_class’ option is set to ‘multinomial’. (Currently the ‘multinomial’ option is supported only by the ‘lbfgs’, ‘sag’, ‘saga’ and ‘newton-cg’ solvers.) This class implements regularized logistic regression using the ‘liblinear’ library, ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers. Note that regularization is applied by default. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied). The ‘newton-cg’, ‘sag’, and ‘lbfgs’ solvers support only L2 regularization with primal formulation, or no regularization. The ‘liblinear’ solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. The Elastic-Net regularization is only supported by the ‘saga’ solver. Read more in the User Guide. Parameters
penalty{‘l1’, ‘l2’, ‘elasticnet’, ‘none’}, default=’l2’
Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver. If ‘none’ (not supported by the liblinear solver), no regularization is applied. New in version 0.19: l1 penalty with SAGA solver (allowing ‘multinomial’ + L1)
dualbool, default=False
Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features.
tolfloat, default=1e-4
Tolerance for stopping criteria.
Cfloat, default=1.0
Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
fit_interceptbool, default=True
Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.
intercept_scalingfloat, default=1
Useful only when the solver ‘liblinear’ is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight. Note that the synthetic feature weight is subject to L1/L2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.
class_weightdict or ‘balanced’, default=None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. New in version 0.17: class_weight=’balanced’
random_stateint, RandomState instance, default=None
Used when solver == ‘sag’, ‘saga’ or ‘liblinear’ to shuffle the data. See Glossary for details.
solver{‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default=’lbfgs’
Algorithm to use in the optimization problem. For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones. For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes. ‘newton-cg’, ‘lbfgs’, ‘sag’ and ‘saga’ handle L2 or no penalty; ‘liblinear’ and ‘saga’ also handle L1 penalty; ‘saga’ also supports ‘elasticnet’ penalty; ‘liblinear’ does not support setting penalty='none'. (A short elastic-net fit sketch follows the Examples below.)
Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver. Changed in version 0.22: The default solver changed from ‘liblinear’ to ‘lbfgs’ in 0.22.
max_iterint, default=100
Maximum number of iterations taken for the solvers to converge.
multi_class{‘auto’, ‘ovr’, ‘multinomial’}, default=’auto’
If the option chosen is ‘ovr’, then a binary problem is fit for each label. For ‘multinomial’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘multinomial’ is unavailable when solver=’liblinear’. ‘auto’ selects ‘ovr’ if the data is binary, or if solver=’liblinear’, and otherwise selects ‘multinomial’. New in version 0.18: Stochastic Average Gradient descent solver for ‘multinomial’ case. Changed in version 0.22: Default changed from ‘ovr’ to ‘auto’ in 0.22.
verboseint, default=0
For the liblinear and lbfgs solvers set verbose to any positive number for verbosity.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. Useless for liblinear solver. See the Glossary. New in version 0.17: warm_start to support lbfgs, newton-cg, sag, saga solvers.
n_jobsint, default=None
Number of CPU cores used when parallelizing over classes if multi_class='ovr'. This parameter is ignored when the solver is set to ‘liblinear’ regardless of whether ‘multi_class’ is specified or not. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
l1_ratiofloat, default=None
The Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1. Only used if penalty='elasticnet'. Setting l1_ratio=0 is equivalent to using penalty='l2', while setting l1_ratio=1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. Attributes
classes_ndarray of shape (n_classes, )
A list of class labels known to the classifier.
coef_ndarray of shape (1, n_features) or (n_classes, n_features)
Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary. In particular, when multi_class='multinomial', coef_ corresponds to outcome 1 (True) and -coef_ corresponds to outcome 0 (False).
intercept_ndarray of shape (1,) or (n_classes,)
Intercept (a.k.a. bias) added to the decision function. If fit_intercept is set to False, the intercept is set to zero. intercept_ is of shape (1,) when the given problem is binary. In particular, when multi_class='multinomial', intercept_ corresponds to outcome 1 (True) and -intercept_ corresponds to outcome 0 (False).
n_iter_ndarray of shape (n_classes,) or (1, )
Actual number of iterations for all classes. If binary or multinomial, it returns only 1 element. For liblinear solver, only the maximum number of iteration across all classes is given. Changed in version 0.20: In SciPy <= 1.0.0 the number of lbfgs iterations may exceed max_iter. n_iter_ will now report at most max_iter. See also
SGDClassifier
Incrementally trained logistic regression (when given the parameter loss="log").
LogisticRegressionCV
Logistic regression with built-in cross validation. Notes The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. Predict output may not match that of standalone liblinear in certain cases. See differences from liblinear in the narrative documentation. References L-BFGS-B – Software for Large-scale Bound-constrained Optimization
Ciyou Zhu, Richard Byrd, Jorge Nocedal and Jose Luis Morales. http://users.iems.northwestern.edu/~nocedal/lbfgsb.html LIBLINEAR – A Library for Large Linear Classification
https://www.csie.ntu.edu.tw/~cjlin/liblinear/ SAG – Mark Schmidt, Nicolas Le Roux, and Francis Bach
Minimizing Finite Sums with the Stochastic Average Gradient https://hal.inria.fr/hal-00860051/document SAGA – Defazio, A., Bach F. & Lacoste-Julien S. (2014).
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives https://arxiv.org/abs/1407.0202 Hsiang-Fu Yu, Fang-Lan Huang, Chih-Jen Lin (2011). Dual coordinate descent
methods for logistic regression and maximum entropy models. Machine Learning 85(1-2):41-75. https://www.csie.ntu.edu.tw/~cjlin/papers/maxent_dual.pdf Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(random_state=0).fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :])
array([[9.8...e-01, 1.8...e-02, 1.4...e-08],
[9.7...e-01, 2.8...e-02, ...e-08]])
>>> clf.score(X, y)
0.97...
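The penalty/solver restrictions listed under the solver parameter can be exercised directly. A minimal elastic-net sketch (standardizing first, since ‘sag’ and ‘saga’ converge fastest on features of comparable scale; the l1_ratio value is an arbitrary illustration):
>>> from sklearn.datasets import load_iris
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> X = StandardScaler().fit_transform(X)
>>> clf = LogisticRegression(penalty='elasticnet', solver='saga',
...                          l1_ratio=0.5, max_iter=1000).fit(X, y)
>>> clf.coef_.shape
(3, 4)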
Methods
decision_function(X) Predict confidence scores for samples.
densify() Convert coefficient matrix to dense array format.
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict class labels for samples in X.
predict_log_proba(X) Predict logarithm of probability estimates.
predict_proba(X) Probability estimates.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
sparsify() Convert coefficient matrix to sparse format.
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
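A short sketch of the multiclass output shape, and of the fact that predict selects the class with the highest confidence score (iris data is used only for illustration):
>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(max_iter=1000).fit(X, y)
>>> scores = clf.decision_function(X[:2])
>>> scores.shape
(2, 3)
>>> np.array_equal(clf.classes_[scores.argmax(axis=1)], clf.predict(X[:2]))
True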
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
Fitted estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.17: sample_weight support to LogisticRegression. Returns
self
Fitted estimator. Notes The SAGA solver supports both float64 and float32 arrays.
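A minimal sketch of passing per-sample weights (the particular weighting below is arbitrary, for illustration only):
>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> w = np.where(y == 0, 2.0, 1.0)  # up-weight samples of class 0
>>> clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)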
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample.
predict_log_proba(X) [source]
Predict logarithm of probability estimates. The returned estimates for all classes are ordered by the label of classes. Parameters
Xarray-like of shape (n_samples, n_features)
Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns
Tarray-like of shape (n_samples, n_classes)
Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
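By construction this is the element-wise logarithm of predict_proba; a quick sketch:
>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(max_iter=1000).fit(X, y)
>>> np.allclose(clf.predict_log_proba(X[:5]), np.log(clf.predict_proba(X[:5])))
True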
predict_proba(X) [source]
Probability estimates. The returned estimates for all classes are ordered by the label of classes. For a multi_class problem, if multi_class is set to “multinomial” the softmax function is used to find the predicted probability of each class. Otherwise a one-vs-rest approach is used, i.e. the probability of each class is calculated assuming it to be the positive class, using the logistic function, and these values are normalized across all the classes. Parameters
Xarray-like of shape (n_samples, n_features)
Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns
Tarray-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
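Each returned row is a probability distribution over self.classes_, so the rows sum to one; a quick sketch:
>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(max_iter=1000).fit(X, y)
>>> np.allclose(clf.predict_proba(X[:3]).sum(axis=1), 1.0)
True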
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
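A minimal sketch of the round trip (the dataset and the C value are arbitrary illustrations; an L1 penalty is used so that many coefficients are exactly zero):
>>> from scipy import sparse
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=200, n_features=50,
...                            n_informative=5, random_state=0)
>>> clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.1).fit(X, y)
>>> clf = clf.sparsify()
>>> sparse.issparse(clf.coef_)
True
>>> clf = clf.densify()  # back to a dense ndarray, e.g. before refitting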
Examples using sklearn.linear_model.LogisticRegression
Release Highlights for scikit-learn 0.23
Release Highlights for scikit-learn 0.24
Release Highlights for scikit-learn 0.22
Comparison of Calibration of Classifiers
Probability Calibration curves
Plot classification probability
Plot class probabilities calculated by the VotingClassifier
Feature transformations with ensembles of trees
Logistic function
Regularization path of L1- Logistic Regression
Logistic Regression 3-class Classifier
Comparing various online solvers
MNIST classification using multinomial logistic + L1
Plot multinomial and One-vs-Rest Logistic Regression
L1 Penalty and Sparsity in Logistic Regression
Multiclass sparse logistic regression on 20newgroups
Compact estimator representations
Visualizations with Display Objects
Classifier Chain
Restricted Boltzmann Machine features for digit classification
Pipelining: chaining a PCA and a logistic regression
Column Transformer with Mixed Types
Feature discretization
Digits Classification Exercise | sklearn.modules.generated.sklearn.linear_model.logisticregression |
class sklearn.linear_model.LogisticRegressionCV(*, Cs=10, fit_intercept=True, cv=None, dual=False, penalty='l2', scoring=None, solver='lbfgs', tol=0.0001, max_iter=100, class_weight=None, n_jobs=None, verbose=0, refit=True, intercept_scaling=1.0, multi_class='auto', random_state=None, l1_ratios=None) [source]
Logistic Regression CV (aka logit, MaxEnt) classifier. See glossary entry for cross-validation estimator. This class implements logistic regression using the liblinear, newton-cg, sag or lbfgs optimizer. The newton-cg, sag and lbfgs solvers support only L2 regularization with primal formulation. The liblinear solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. Elastic-Net penalty is only supported by the saga solver. For the grid of Cs values and l1_ratios values, the best hyperparameter is selected by the cross-validator StratifiedKFold, but it can be changed using the cv parameter. The ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers can warm-start the coefficients (see Glossary). Read more in the User Guide. Parameters
Csint or list of floats, default=10
Each of the values in Cs describes the inverse of regularization strength. If Cs is an int, then a grid of Cs values is chosen on a logarithmic scale between 1e-4 and 1e4. Like in support vector machines, smaller values specify stronger regularization.
fit_interceptbool, default=True
Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.
cvint or cross-validation generator, default=None
The default cross-validation generator used is Stratified K-Folds. If an integer is provided, then it is the number of folds used. See the sklearn.model_selection module for the list of possible cross-validation objects. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
dualbool, default=False
Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features.
penalty{‘l1’, ‘l2’, ‘elasticnet’}, default=’l2’
Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver.
scoringstr or callable, default=None
A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y). For a list of scoring functions that can be used, look at sklearn.metrics. The default scoring option used is ‘accuracy’.
solver{‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default=’lbfgs’
Algorithm to use in the optimization problem. For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones. For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes. ‘newton-cg’, ‘lbfgs’ and ‘sag’ only handle L2 penalty, whereas ‘liblinear’ and ‘saga’ handle L1 penalty. ‘liblinear’ might be slower in LogisticRegressionCV because it does not handle warm-starting. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver.
tolfloat, default=1e-4
Tolerance for stopping criteria.
max_iterint, default=100
Maximum number of iterations of the optimization algorithm.
class_weightdict or ‘balanced’, default=None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. New in version 0.17: class_weight == ‘balanced’
n_jobsint, default=None
Number of CPU cores used during the cross-validation loop. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verboseint, default=0
For the ‘liblinear’, ‘sag’ and ‘lbfgs’ solvers set verbose to any positive number for verbosity.
refitbool, default=True
If set to True, the scores are averaged across all folds, and the coefs and the C that correspond to the best score are taken, and a final refit is done using these parameters. Otherwise the coefs, intercepts and C that correspond to the best scores across folds are averaged.
intercept_scalingfloat, default=1
Useful only when the solver ‘liblinear’ is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight. Note that the synthetic feature weight is subject to L1/L2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.
multi_class{‘auto’, ‘ovr’, ‘multinomial’}, default=’auto’
If the option chosen is ‘ovr’, then a binary problem is fit for each label. For ‘multinomial’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘multinomial’ is unavailable when solver=’liblinear’. ‘auto’ selects ‘ovr’ if the data is binary, or if solver=’liblinear’, and otherwise selects ‘multinomial’. New in version 0.18: Stochastic Average Gradient descent solver for ‘multinomial’ case. Changed in version 0.22: Default changed from ‘ovr’ to ‘auto’ in 0.22.
random_stateint, RandomState instance, default=None
Used when solver='sag', ‘saga’ or ‘liblinear’ to shuffle the data. Note that this only applies to the solver and not the cross-validation generator. See Glossary for details.
l1_ratioslist of float, default=None
The list of Elastic-Net mixing parameters, with 0 <= l1_ratio <= 1. Only used if penalty='elasticnet'. A value of 0 is equivalent to using penalty='l2', while 1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. Attributes
classes_ndarray of shape (n_classes, )
A list of class labels known to the classifier.
coef_ndarray of shape (1, n_features) or (n_classes, n_features)
Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary.
intercept_ndarray of shape (1,) or (n_classes,)
Intercept (a.k.a. bias) added to the decision function. If fit_intercept is set to False, the intercept is set to zero. intercept_ is of shape (1,) when the problem is binary.
Cs_ndarray of shape (n_cs)
Array of C i.e. inverse of regularization parameter values used for cross-validation.
l1_ratios_ndarray of shape (n_l1_ratios)
Array of l1_ratios used for cross-validation. If no l1_ratio is used (i.e. penalty is not ‘elasticnet’), this is set to [None].
coefs_paths_ndarray of shape (n_folds, n_cs, n_features) or (n_folds, n_cs, n_features + 1)
dict with classes as the keys, and as values the path of coefficients obtained during cross-validation across each fold and then across each C, after doing an OvR for the corresponding class. If the ‘multi_class’ option is set to ‘multinomial’, then the coefs_paths are the coefficients corresponding to each class. Each dict value has shape (n_folds, n_cs, n_features) or (n_folds, n_cs, n_features + 1) depending on whether the intercept is fit or not. If penalty='elasticnet', the shape is (n_folds, n_cs, n_l1_ratios_, n_features) or (n_folds, n_cs, n_l1_ratios_, n_features + 1).
scores_dict
dict with classes as the keys, and the values as the grid of scores obtained during cross-validating each fold, after doing an OvR for the corresponding class. If the ‘multi_class’ option given is ‘multinomial’ then the same scores are repeated across all classes, since this is the multinomial class. Each dict value has shape (n_folds, n_cs) or (n_folds, n_cs, n_l1_ratios) if penalty='elasticnet'.
C_ndarray of shape (n_classes,) or (n_classes - 1,)
Array of C that maps to the best scores across every class. If refit is set to False, then for each class, the best C is the average of the C’s that correspond to the best scores for each fold. C_ is of shape (n_classes,) when the problem is binary.
l1_ratio_ndarray of shape (n_classes,) or (n_classes - 1,)
Array of l1_ratio that maps to the best scores across every class. If refit is set to False, then for each class, the best l1_ratio is the average of the l1_ratio’s that correspond to the best scores for each fold. l1_ratio_ is of shape (n_classes,) when the problem is binary.
n_iter_ndarray of shape (n_classes, n_folds, n_cs) or (1, n_folds, n_cs)
Actual number of iterations for all classes, folds and Cs. In the binary or multinomial cases, the first dimension is equal to 1. If penalty='elasticnet', the shape is (n_classes, n_folds, n_cs, n_l1_ratios) or (1, n_folds, n_cs, n_l1_ratios). See also
LogisticRegression
Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=5, random_state=0).fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :]).shape
(2, 3)
>>> clf.score(X, y)
0.98...
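Continuing in the same vein, the searched grid and the per-class selections are exposed as fitted attributes; a sketch (shapes follow the attribute descriptions above):
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=5, random_state=0).fit(X, y)
>>> clf.Cs_.shape  # the grid of 10 C values that was searched
(10,)
>>> clf.C_.shape   # one selected C per class
(3,)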
Methods
decision_function(X) Predict confidence scores for samples.
densify() Convert coefficient matrix to dense array format.
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict class labels for samples in X.
predict_log_proba(X) Predict logarithm of probability estimates.
predict_proba(X) Probability estimates.
score(X, y[, sample_weight]) Returns the score using the scoring option on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
sparsify() Convert coefficient matrix to sparse format.
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
Fitted estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample.
predict_log_proba(X) [source]
Predict logarithm of probability estimates. The returned estimates for all classes are ordered by the label of classes. Parameters
Xarray-like of shape (n_samples, n_features)
Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns
Tarray-like of shape (n_samples, n_classes)
Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
predict_proba(X) [source]
Probability estimates. The returned estimates for all classes are ordered by the label of classes. For a multi_class problem, if multi_class is set to “multinomial” the softmax function is used to find the predicted probability of each class. Otherwise a one-vs-rest approach is used, i.e. the probability of each class is calculated assuming it to be the positive class, using the logistic function, and these values are normalized across all the classes. Parameters
Xarray-like of shape (n_samples, n_features)
Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns
Tarray-like of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
score(X, y, sample_weight=None) [source]
Returns the score using the scoring option on the given test data and labels. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Score of self.predict(X) wrt. y.
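Note that, unlike LogisticRegression.score (plain accuracy), this honours the scoring parameter; a minimal sketch with 'balanced_accuracy' as an arbitrary choice of scorer:
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=3, scoring='balanced_accuracy',
...                            max_iter=1000).fit(X, y)
>>> s = clf.score(X, y)  # evaluated with the 'balanced_accuracy' scorer
>>> 0.0 <= s <= 1.0
True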
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify. | sklearn.modules.generated.sklearn.linear_model.logisticregressioncv#sklearn.linear_model.LogisticRegressionCV |
class sklearn.linear_model.MultiTaskElasticNet(alpha=1.0, *, l1_ratio=0.5, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic') [source]
Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer. The optimization objective for MultiTaskElasticNet is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = sum_i sqrt(sum_j W_ij ^ 2)
i.e. the sum of norms of each row. Read more in the User Guide. Parameters
alphafloat, default=1.0
Constant that multiplies the L1/L2 term. Defaults to 1.0.
l1_ratiofloat, default=0.5
The ElasticNet mixing parameter, with 0 < l1_ratio <= 1. For l1_ratio = 1 the penalty is an L1/L2 penalty. For l1_ratio = 0 it is an L2 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1/L2 and L2.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes
intercept_ndarray of shape (n_tasks,)
Independent term in decision function.
coef_ndarray of shape (n_tasks, n_features)
Parameter vector (W in the cost function formula). If a 1D y is passed in at fit (non multi-task usage), coef_ is then a 1D array. Note that coef_ stores the transpose of W, W.T.
n_iter_int
Number of iterations run by the coordinate descent solver to reach the specified tolerance.
dual_gap_float
The dual gaps at the end of the optimization.
eps_float
The tolerance scaled by the variance of the target y.
sparse_coef_sparse matrix of shape (n_features,) or (n_tasks, n_features)
Sparse representation of the fitted coef_. See also
MultiTaskElasticNetCV
Multi-task L1/L2 ElasticNet with built-in cross-validation.
ElasticNet
MultiTaskLasso
Notes The algorithm used to fit the model is coordinate descent. To avoid unnecessary memory duplication the X and y arguments of the fit method should be directly passed as Fortran-contiguous numpy arrays. Examples >>> from sklearn import linear_model
>>> clf = linear_model.MultiTaskElasticNet(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [[0, 0], [1, 1], [2, 2]])
MultiTaskElasticNet(alpha=0.1)
>>> print(clf.coef_)
[[0.45663524 0.45612256]
[0.45663524 0.45612256]]
>>> print(clf.intercept_)
[0.0872422 0.0872422]
Methods
fit(X, y) Fit MultiTaskElasticNet model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute elastic net path with coordinate descent.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit MultiTaskElasticNet model with coordinate descent. Parameters
Xndarray of shape (n_samples, n_features)
Data.
yndarray of shape (n_samples, n_tasks)
Target. Will be cast to X’s dtype if necessary. Notes Coordinate descent is an algorithm that considers one column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.
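A minimal sketch of the Fortran-contiguous advice above (toy data, values illustrative):
>>> import numpy as np
>>> from sklearn.linear_model import MultiTaskElasticNet
>>> X = np.asfortranarray([[0., 0.], [1., 1.], [2., 2.]])
>>> Y = np.asfortranarray([[0., 0.], [1., 1.], [2., 2.]])
>>> X.flags["F_CONTIGUOUS"]
True
>>> clf = MultiTaskElasticNet(alpha=0.1).fit(X, Y)
>>> clf.coef_.shape
(2, 2)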
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
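A minimal sketch of computing a path via the equivalent module-level helper enet_path (data and n_alphas are illustrative):
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import enet_path
>>> X, y = make_regression(n_samples=50, n_features=5, random_state=0)
>>> alphas, coefs, dual_gaps = enet_path(X, y, l1_ratio=0.5, n_alphas=10)
>>> alphas.shape, coefs.shape
((10,), (5, 10))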
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
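A minimal sketch checking the note above: score agrees with r2_score under its default multioutput='uniform_average' (toy data, values illustrative):
>>> import numpy as np
>>> from sklearn.linear_model import MultiTaskElasticNet
>>> from sklearn.metrics import r2_score
>>> X = [[0., 0.], [1., 1.], [2., 2.]]
>>> Y = [[0., 1.], [1., 2.], [2., 3.]]
>>> clf = MultiTaskElasticNet(alpha=0.1).fit(X, Y)
>>> bool(np.isclose(clf.score(X, Y), r2_score(Y, clf.predict(X))))
True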
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
property sparse_coef_
Sparse representation of the fitted coef_. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet |
fit(X, y) [source]
Fit MultiTaskElasticNet model with coordinate descent. Parameters
Xndarray of shape (n_samples, n_features)
Data.
yndarray of shape (n_samples, n_tasks)
Target. Will be cast to X’s dtype if necessary. Notes Coordinate descent is an algorithm that considers one column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet.get_params |
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet.path |
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet.set_params |
property sparse_coef_
Sparse representation of the fitted coef_. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnet#sklearn.linear_model.MultiTaskElasticNet.sparse_coef_ |
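A minimal sketch (reusing the toy data from the class example above): sparse_coef_ is a scipy.sparse representation of coef_:
>>> from scipy import sparse
>>> from sklearn.linear_model import MultiTaskElasticNet
>>> clf = MultiTaskElasticNet(alpha=0.1).fit([[0, 0], [1, 1], [2, 2]],
...                                          [[0, 0], [1, 1], [2, 2]])
>>> sparse.issparse(clf.sparse_coef_)
True
>>> clf.sparse_coef_.shape
(2, 2)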
class sklearn.linear_model.MultiTaskElasticNetCV(*, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=None, random_state=None, selection='cyclic') [source]
Multi-task L1/L2 ElasticNet with built-in cross-validation. See glossary entry for cross-validation estimator. The optimization objective for MultiTaskElasticNet is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norms of each row. Read more in the User Guide. New in version 0.15. Parameters
l1_ratiofloat or list of float, default=0.5
The ElasticNet mixing parameter, with 0 < l1_ratio <= 1. For l1_ratio = 1 the penalty is an L1/L2 penalty. For l1_ratio = 0 it is an L2 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1/L2 and L2. This parameter can be a list, in which case the different values are tested by cross-validation and the one giving the best prediction score is used. Note that a good choice of list of values for l1_ratio is often to put more values close to 1 (i.e. Lasso) and less close to 0 (i.e. Ridge), as in [.1, .5, .7, .9, .95, .99, 1]
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasarray-like, default=None
List of alphas where to compute the models. If not provided, set automatically.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
cvint, cross-validation generator or iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, int, to specify the number of folds.
CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, KFold is used. Refer User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
verbosebool or int, default=0
Amount of verbosity.
n_jobsint, default=None
Number of CPUs to use during the cross validation. Note that this is used only if multiple values for l1_ratio are given. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes
intercept_ndarray of shape (n_tasks,)
Independent term in decision function.
coef_ndarray of shape (n_tasks, n_features)
Parameter vector (W in the cost function formula). Note that coef_ stores the transpose of W, W.T.
alpha_float
The amount of penalization chosen by cross validation.
mse_path_ndarray of shape (n_alphas, n_folds) or (n_l1_ratio, n_alphas, n_folds)
Mean square error for the test set on each fold, varying alpha.
alphas_ndarray of shape (n_alphas,) or (n_l1_ratio, n_alphas)
The grid of alphas used for fitting, for each l1_ratio.
l1_ratio_float
Best l1_ratio obtained by cross-validation.
n_iter_int
Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha.
dual_gap_float
The dual gap at the end of the optimization for the optimal alpha. See also
MultiTaskElasticNet
ElasticNetCV
MultiTaskLassoCV
Notes The algorithm used to fit the model is coordinate descent. To avoid unnecessary memory duplication the X and y arguments of the fit method should be directly passed as Fortran-contiguous numpy arrays. Examples >>> from sklearn import linear_model
>>> clf = linear_model.MultiTaskElasticNetCV(cv=3)
>>> clf.fit([[0,0], [1, 1], [2, 2]],
... [[0, 0], [1, 1], [2, 2]])
MultiTaskElasticNetCV(cv=3)
>>> print(clf.coef_)
[[0.52875032 0.46958558]
[0.52875032 0.46958558]]
>>> print(clf.intercept_)
[0.00166409 0.00166409]
Methods
fit(X, y) Fit linear model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute elastic net path with coordinate descent.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit linear model with coordinate descent. The fit is done on a grid of alphas, and the best alpha is estimated by cross-validation. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV |
fit(X, y) [source]
Fit linear model with coordinate descent. The fit is done on a grid of alphas, and the best alpha is estimated by cross-validation. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target values. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV.get_params |
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV.path |
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.linear_model.multitaskelasticnetcv#sklearn.linear_model.MultiTaskElasticNetCV.set_params |
class sklearn.linear_model.MultiTaskLasso(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic') [source]
Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer. The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * ||W||_21
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norms of each row. Read more in the User Guide. Parameters
alphafloat, default=1.0
Constant that multiplies the L1/L2 term. Defaults to 1.0.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes
coef_ndarray of shape (n_tasks, n_features)
Parameter vector (W in the cost function formula). Note that coef_ stores the transpose of W, W.T.
intercept_ndarray of shape (n_tasks,)
Independent term in decision function.
n_iter_int
Number of iterations run by the coordinate descent solver to reach the specified tolerance.
dual_gap_ndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
eps_float
The tolerance scaled by the variance of the target y.
sparse_coef_sparse matrix of shape (n_features,) or (n_tasks, n_features)
Sparse representation of the fitted coef_. See also
MultiTaskLassoCV
Multi-task L1/L2 Lasso with built-in cross-validation.
Lasso
MultiTaskElasticNet
Notes The algorithm used to fit the model is coordinate descent. To avoid unnecessary memory duplication the X and y arguments of the fit method should be directly passed as Fortran-contiguous numpy arrays. Examples >>> from sklearn import linear_model
>>> clf = linear_model.MultiTaskLasso(alpha=0.1)
>>> clf.fit([[0, 1], [1, 2], [2, 4]], [[0, 0], [1, 1], [2, 3]])
MultiTaskLasso(alpha=0.1)
>>> print(clf.coef_)
[[0. 0.60809415]
[0. 0.94592424]]
>>> print(clf.intercept_)
[-0.41888636 -0.87382323]
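As a small sanity check of the mixed norm defined above, the ||W||_21 penalty term can be recomputed directly with NumPy from the fitted coefficients; a sketch reusing clf from the example (recall that coef_ stores W.T):
>>> import numpy as np
>>> W = clf.coef_.T  # W has shape (n_features, n_tasks)
>>> l21 = np.sum(np.sqrt(np.sum(W ** 2, axis=1)))  # sum of the l2 norms of the rows of W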
Methods
fit(X, y) Fit MultiTaskLasso model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute elastic net path with coordinate descent.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit MultiTaskLasso model with coordinate descent. Parameters
Xndarray of shape (n_samples, n_features)
Data.
yndarray of shape (n_samples, n_tasks)
Target. Will be cast to X’s dtype if necessary. Notes Coordinate descent is an algorithm that considers each column of the data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation, it is advised to allocate the initial data in memory directly using that format.
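A minimal sketch of the memory advice above: allocating the training data as Fortran-contiguous arrays up front, so the solver does not need to copy them (the array values are the toy data from the class example):
>>> import numpy as np
>>> from sklearn.linear_model import MultiTaskLasso
>>> X = np.asfortranarray([[0., 1.], [1., 2.], [2., 4.]])  # column-major layout, as the solver expects
>>> Y = np.asfortranarray([[0., 0.], [1., 1.], [2., 3.]])
>>> MultiTaskLasso(alpha=0.1).fit(X, Y)
MultiTaskLasso(alpha=0.1)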
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
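For instance, continuing with the fitted clf from the class example above, get_params exposes the constructor arguments as a dict (a sketch):
>>> clf.get_params()['alpha']
0.1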
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of the norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features,), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Returned only when return_n_iter is set to True.) See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
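A hedged sketch of calling the static path method directly on the toy data from the class example; l1_ratio=1.0 reduces the elastic net penalty to the pure Lasso case:
>>> import numpy as np
>>> from sklearn.linear_model import MultiTaskLasso
>>> X = np.asfortranarray([[0., 1.], [1., 2.], [2., 4.]])
>>> Y = np.asfortranarray([[0., 0.], [1., 1.], [2., 3.]])
>>> alphas, coefs, gaps = MultiTaskLasso.path(X, Y, l1_ratio=1.0, n_alphas=5)
>>> coefs.shape  # (n_outputs, n_features, n_alphas)
(2, 2, 5)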
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
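To make the definition concrete, a sketch that recomputes the uniform-average \(R^2\) by hand on the toy data from the class example, reusing the fitted clf:
>>> import numpy as np
>>> X = [[0, 1], [1, 2], [2, 4]]
>>> y_true = np.array([[0, 0], [1, 1], [2, 3]])
>>> y_pred = clf.predict(X)
>>> u = ((y_true - y_pred) ** 2).sum(axis=0)               # per-output residual sum of squares
>>> v = ((y_true - y_true.mean(axis=0)) ** 2).sum(axis=0)  # per-output total sum of squares
>>> bool(np.isclose((1 - u / v).mean(), clf.score(X, y_true)))
True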
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
property sparse_coef_
Sparse representation of the fitted coef_. | sklearn.modules.generated.sklearn.linear_model.multitasklasso#sklearn.linear_model.MultiTaskLasso |
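A short sketch of the sparse_coef_ property on the fitted clf from the example above; it returns a scipy.sparse representation of coef_, so only the nonzero coefficients are stored:
>>> sp = clf.sparse_coef_
>>> sp.shape
(2, 2)
>>> sp.nnz  # the two nonzero coefficients of clf.coef_
2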
sklearn.linear_model.MultiTaskLasso
class sklearn.linear_model.MultiTaskLasso(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, random_state=None, selection='cyclic') [source]
Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer. The optimization objective for the multi-task Lasso is: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of the norms of each row. Read more in the User Guide. Parameters
alphafloat, default=1.0
Constant that multiplies the L1/L2 term. Defaults to 1.0.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence, especially when tol is higher than 1e-4. Attributes
coef_ndarray of shape (n_tasks, n_features)
Parameter vector (W in the cost function formula). Note that coef_ stores the transpose of W, W.T.
intercept_ndarray of shape (n_tasks,)
Independent term in decision function.
n_iter_int
Number of iterations run by the coordinate descent solver to reach the specified tolerance.
dual_gap_float
The dual gap at the end of the optimization (a single value, since the estimator is fit for one alpha).
eps_float
The tolerance scaled by the variance of the target y.
sparse_coef_sparse matrix of shape (n_features,) or (n_tasks, n_features)
Sparse representation of the fitted coef_. See also
MultiTaskLassoCV
Multi-task L1/L2 Lasso with built-in cross-validation.
Lasso
MultiTaskElasticNet
Notes The algorithm used to fit the model is coordinate descent. To avoid unnecessary memory duplication the X and y arguments of the fit method should be directly passed as Fortran-contiguous numpy arrays. Examples >>> from sklearn import linear_model
>>> clf = linear_model.MultiTaskLasso(alpha=0.1)
>>> clf.fit([[0, 1], [1, 2], [2, 4]], [[0, 0], [1, 1], [2, 3]])
MultiTaskLasso(alpha=0.1)
>>> print(clf.coef_)
[[0. 0.60809415]
[0. 0.94592424]]
>>> print(clf.intercept_)
[-0.41888636 -0.87382323]
Methods
fit(X, y) Fit MultiTaskLasso model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute elastic net path with coordinate descent.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit MultiTaskLasso model with coordinate descent. Parameters
Xndarray of shape (n_samples, n_features)
Data.
yndarray of shape (n_samples, n_tasks)
Target. Will be cast to X’s dtype if necessary. Notes Coordinate descent is an algorithm that considers each column of the data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation, it is advised to allocate the initial data in memory directly using that format.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of the norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features,), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Returned only when return_n_iter is set to True.) See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
property sparse_coef_
Sparse representation of the fitted coef_.
Examples using sklearn.linear_model.MultiTaskLasso
Joint feature selection with multi-task Lasso | sklearn.modules.generated.sklearn.linear_model.multitasklasso |
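The referenced example contrasts independent per-task Lasso fits with a joint MultiTaskLasso fit; a minimal hedged sketch of the idea on purely synthetic toy data (all values here are illustrative assumptions, not the example's actual data):
>>> import numpy as np
>>> from sklearn.linear_model import MultiTaskLasso
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(40, 10)
>>> W = np.zeros((10, 2))
>>> W[:3, :] = rng.randn(3, 2)           # the first 3 features are shared by both tasks
>>> Y = X @ W + 0.01 * rng.randn(40, 2)
>>> mtl = MultiTaskLasso(alpha=0.1).fit(X, Y)
>>> shared = np.flatnonzero((mtl.coef_ != 0).all(axis=0))  # features selected jointly across tasks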
fit(X, y) [source]
Fit MultiTaskLasso model with coordinate descent. Parameters
Xndarray of shape (n_samples, n_features)
Data.
yndarray of shape (n_samples, n_tasks)
Target. Will be cast to X’s dtype if necessary. Notes Coordinate descent is an algorithm that considers each column of the data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation, it is advised to allocate the initial data in memory directly using that format. | sklearn.modules.generated.sklearn.linear_model.multitasklasso#sklearn.linear_model.MultiTaskLasso.fit
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.linear_model.multitasklasso#sklearn.linear_model.MultiTaskLasso.get_params |