static path(*args, **kwargs) [source]

Compute the elastic net path with coordinate descent.

The elastic net optimization function varies for mono- and multi-output. For mono-output tasks it is:

    (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2

For multi-output tasks it is:

    (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2

Where:

    ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the norms of each row.

Read more in the User Guide.

Parameters

X : {array-like, sparse matrix} of shape (n_samples, n_features)
    Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y : {array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
    Target values.
l1_ratio : float, default=0.5
    Number between 0 and 1 passed to elastic net (scaling between the l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
eps : float, default=1e-3
    Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphas : int, default=100
    Number of alphas along the regularization path.
alphas : ndarray, default=None
    List of alphas where to compute the models. If None, alphas are set automatically.
precompute : 'auto', bool or array-like of shape (n_features, n_features), default='auto'
    Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xy : array-like of shape (n_features,) or (n_features, n_outputs), default=None
    Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_X : bool, default=True
    If True, X will be copied; else, it may be overwritten.
coef_init : ndarray of shape (n_features,), default=None
    The initial values of the coefficients.
verbose : bool or int, default=False
    Amount of verbosity.
return_n_iter : bool, default=False
    Whether to return the number of iterations or not.
positive : bool, default=False
    If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1.)
check_input : bool, default=True
    If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**params : kwargs
    Keyword arguments passed to the coordinate descent solver.

Returns

alphas : ndarray of shape (n_alphas,)
    The alphas along the path where models are computed.
coefs : ndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
    Coefficients along the path.
dual_gaps : ndarray of shape (n_alphas,)
    The dual gaps at the end of the optimization for each alpha.
n_iters : list of int
    The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Returned only when return_n_iter is set to True.)

See also: MultiTaskElasticNet, MultiTaskElasticNetCV, ElasticNet, ElasticNetCV

Notes

For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
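Since this static method mirrors the module-level enet_path helper (assumed here to expose the same computation as a plain function), a minimal runnable sketch of a mono-output path follows; the toy data is invented for illustration and the shapes, not the values, are the point:

>>> import numpy as np
>>> from sklearn.linear_model import enet_path
>>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T   # (n_samples, n_features) = (3, 2)
>>> y = np.array([1, 2, 3.1])
>>> # one coefficient vector per alpha on the regularization path
>>> alphas, coefs, dual_gaps = enet_path(X, y, l1_ratio=0.5, n_alphas=5)
>>> coefs.shape   # (n_features, n_alphas) for mono-output
(2, 5)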
predict(X) [source]

Predict using the linear model.

Parameters

X : array-like or sparse matrix, shape (n_samples, n_features)
    Samples.

Returns

C : array, shape (n_samples,)
    Returns predicted values.
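The prediction itself is just the affine map X @ coef_.T + intercept_; a small sketch verifying this on a toy fit (data invented for illustration):

>>> import numpy as np
>>> from sklearn.linear_model import MultiTaskLasso
>>> X = np.array([[0., 0.], [1., 1.], [2., 2.]])
>>> Y = np.array([[0., 0.], [1., 1.], [2., 3.]])
>>> reg = MultiTaskLasso(alpha=0.1).fit(X, Y)
>>> # predict is the linear decision function applied row-wise
>>> bool(np.allclose(reg.predict(X), X @ reg.coef_.T + reg.intercept_))
True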
score(X, y, sample_weight=None) [source]

Return the coefficient of determination \(R^2\) of the prediction.

The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.

Parameters

X : array-like of shape (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True values for X.
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns

score : float
    \(R^2\) of self.predict(X) w.r.t. y.

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
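The definition can be checked by hand; a small sketch computing \(R^2\) directly from the residual and total sums of squares (toy arrays invented for illustration):

>>> import numpy as np
>>> from sklearn.metrics import r2_score
>>> y_true = np.array([3.0, -0.5, 2.0, 7.0])
>>> y_pred = np.array([2.5, 0.0, 2.0, 8.0])
>>> u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
>>> v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
>>> bool(np.isclose(1 - u / v, r2_score(y_true, y_pred)))
True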
set_params(**params) [source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters

**params : dict
    Estimator parameters.

Returns

self : estimator instance
    Estimator instance.
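A minimal sketch of the nested <component>__<parameter> syntax with a Pipeline (the step names "scale" and "model" are invented for illustration):

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import MultiTaskLasso
>>> pipe = Pipeline([("scale", StandardScaler()), ("model", MultiTaskLasso())])
>>> pipe = pipe.set_params(model__alpha=0.5)   # reaches into the nested estimator
>>> pipe.named_steps["model"].alpha
0.5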
property sparse_coef_ Sparse representation of the fitted coef_.
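Assuming the property simply wraps the dense coef_ in a scipy.sparse matrix, a one-line check (reusing np and the reg fitted in the predict sketch above):

>>> bool(np.allclose(reg.sparse_coef_.toarray(), reg.coef_))
True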
class sklearn.linear_model.MultiTaskLassoCV(*, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=None, random_state=None, selection='cyclic') [source]

Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer.

See glossary entry for cross-validation estimator.

The optimization objective for MultiTaskLasso is:

    (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * ||W||_21

Where:

    ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the norms of each row.

Read more in the User Guide.

New in version 0.15.

Parameters

eps : float, default=1e-3
    Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphas : int, default=100
    Number of alphas along the regularization path.
alphas : array-like, default=None
    List of alphas where to compute the models. If not provided, set automatically.
fit_intercept : bool, default=True
    Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
normalize : bool, default=False
    This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
max_iter : int, default=1000
    The maximum number of iterations.
tol : float, default=1e-4
    The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
copy_X : bool, default=True
    If True, X will be copied; else, it may be overwritten.
cv : int, cross-validation generator or iterable, default=None
    Determines the cross-validation splitting strategy. Possible inputs for cv are:
    - None, to use the default 5-fold cross-validation,
    - int, to specify the number of folds,
    - a CV splitter,
    - an iterable yielding (train, test) splits as arrays of indices.
    For int/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here.
    Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
verbose : bool or int, default=False
    Amount of verbosity.
n_jobs : int, default=None
    Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_state : int, RandomState instance, default=None
    The seed of the pseudo random number generator that selects a random feature to update. Used when selection == 'random'. Pass an int for reproducible output across multiple function calls. See Glossary.
selection : {'cyclic', 'random'}, default='cyclic'
    If set to 'random', a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to 'random') often leads to significantly faster convergence, especially when tol is higher than 1e-4.

Attributes

intercept_ : ndarray of shape (n_tasks,)
    Independent term in decision function.
coef_ : ndarray of shape (n_tasks, n_features)
    Parameter vector (W in the cost function formula). Note that coef_ stores the transpose of W, W.T.
alpha_ : float
    The amount of penalization chosen by cross validation.
mse_path_ : ndarray of shape (n_alphas, n_folds)
    Mean square error for the test set on each fold, varying alpha.
alphas_ : ndarray of shape (n_alphas,)
    The grid of alphas used for fitting.
n_iter_ : int
    Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha.
dual_gap_ : float
    The dual gap at the end of the optimization for the optimal alpha.

See also: MultiTaskElasticNet, ElasticNetCV, MultiTaskElasticNetCV

Notes

The algorithm used to fit the model is coordinate descent. To avoid unnecessary memory duplication, the X and y arguments of the fit method should be directly passed as Fortran-contiguous numpy arrays.

Examples

>>> from sklearn.linear_model import MultiTaskLassoCV
>>> from sklearn.datasets import make_regression
>>> from sklearn.metrics import r2_score
>>> X, y = make_regression(n_targets=2, noise=4, random_state=0)
>>> reg = MultiTaskLassoCV(cv=5, random_state=0).fit(X, y)
>>> r2_score(y, reg.predict(X))
0.9994...
>>> reg.alpha_
0.5713...
>>> reg.predict(X[:1,])
array([[153.7971..., 94.9015...]])

Methods

fit(X, y): Fit linear model with coordinate descent.
get_params([deep]): Get parameters for this estimator.
path(*args, **kwargs): Compute Lasso path with coordinate descent.
predict(X): Predict using the linear model.
score(X, y[, sample_weight]): Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params): Set the parameters of this estimator.

fit(X, y) [source]

Fit linear model with coordinate descent.

Fit is on a grid of alphas, and the best alpha is estimated by cross-validation.

Parameters

X : {array-like, sparse matrix} of shape (n_samples, n_features)
    Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
y : array-like of shape (n_samples,) or (n_samples, n_targets)
    Target values.

get_params(deep=True) [source]

Get parameters for this estimator.

Parameters

deep : bool, default=True
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params : dict
    Parameter names mapped to their values.

static path(*args, **kwargs) [source]

Compute the Lasso path with coordinate descent.

The Lasso optimization function varies for mono- and multi-output. For mono-output tasks it is:

    (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1

For multi-output tasks it is:

    (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * ||W||_21

Where:

    ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the norms of each row.

Read more in the User Guide.

Parameters

X : {array-like, sparse matrix} of shape (n_samples, n_features)
    Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y : {array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
    Target values.
eps : float, default=1e-3
    Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphas : int, default=100
    Number of alphas along the regularization path.
alphas : ndarray, default=None
    List of alphas where to compute the models. If None, alphas are set automatically.
precompute : 'auto', bool or array-like of shape (n_features, n_features), default='auto'
    Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xy : array-like of shape (n_features,) or (n_features, n_outputs), default=None
    Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_X : bool, default=True
    If True, X will be copied; else, it may be overwritten.
coef_init : ndarray of shape (n_features,), default=None
    The initial values of the coefficients.
verbose : bool or int, default=False
    Amount of verbosity.
return_n_iter : bool, default=False
    Whether to return the number of iterations or not.
positive : bool, default=False
    If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1.)
**params : kwargs
    Keyword arguments passed to the coordinate descent solver.

Returns

alphas : ndarray of shape (n_alphas,)
    The alphas along the path where models are computed.
coefs : ndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
    Coefficients along the path.
dual_gaps : ndarray of shape (n_alphas,)
    The dual gaps at the end of the optimization for each alpha.
n_iters : list of int
    The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha.

See also: lars_path, Lasso, LassoLars, LassoCV, LassoLarsCV, sklearn.decomposition.sparse_encode

Notes

For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.

To avoid unnecessary memory duplication, the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.

Note that in certain cases, the Lars solver may be significantly faster at implementing this functionality. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path.

Examples

Comparing lasso_path and lars_path with interpolation:

>>> import numpy as np
>>> from sklearn.linear_model import lasso_path
>>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0.         0.         0.46874778]
 [0.2159048  0.4425765  0.23689075]]
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
...                                             coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0.         0.         0.46915237]
 [0.2159048  0.4425765  0.23668876]]

predict(X) [source]

Predict using the linear model.

Parameters

X : array-like or sparse matrix, shape (n_samples, n_features)
    Samples.

Returns

C : array, shape (n_samples,)
    Returns predicted values.

score(X, y, sample_weight=None) [source]

Return the coefficient of determination \(R^2\) of the prediction.

The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.

Parameters

X : array-like of shape (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True values for X.
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns

score : float
    \(R^2\) of self.predict(X) w.r.t. y.
Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).

set_params(**params) [source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters

**params : dict
    Estimator parameters.

Returns

self : estimator instance
    Estimator instance.
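As a usage note on the CV attributes above: alpha_ should correspond to the grid point minimizing the fold-averaged mse_path_. A hedged sketch, continuing from the fitted reg in the Examples above and assuming that tie-breaking picks the first minimizer:

>>> import numpy as np
>>> mean_mse = reg.mse_path_.mean(axis=1)   # average over folds, shape (n_alphas,)
>>> best = int(np.argmin(mean_mse))
>>> bool(np.isclose(reg.alphas_[best], reg.alpha_))
True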
class sklearn.linear_model.OrthogonalMatchingPursuit(*, n_nonzero_coefs=None, tol=None, fit_intercept=True, normalize=True, precompute='auto') [source]

Orthogonal Matching Pursuit model (OMP).

Read more in the User Guide.

Parameters

n_nonzero_coefs : int, default=None
    Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features.
tol : float, default=None
    Maximum norm of the residual. If not None, overrides n_nonzero_coefs.
fit_intercept : bool, default=True
    Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
normalize : bool, default=True
    This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precompute : 'auto' or bool, default='auto'
    Whether to use a precomputed Gram and Xy matrix to speed up calculations. Improves performance when n_targets or n_samples is very large. Note that if you already have such matrices, you can pass them directly to the fit method.

Attributes

coef_ : ndarray of shape (n_features,) or (n_targets, n_features)
    Parameter vector (w in the formula).
intercept_ : float or ndarray of shape (n_targets,)
    Independent term in decision function.
n_iter_ : int or array-like
    Number of active features across every target.
n_nonzero_coefs_ : int
    The number of non-zero coefficients in the solution. If n_nonzero_coefs is None and tol is None, this value is either set to 10% of n_features or 1, whichever is greater.

See also: orthogonal_mp, orthogonal_mp_gram, lars_path, Lars, LassoLars, sklearn.decomposition.sparse_encode, OrthogonalMatchingPursuitCV

Notes

Orthogonal matching pursuit was introduced in S. G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12 (December 1993), pp. 3397-3415. (http://blanche.polytechnique.fr/~mallat/papiers/MallatPursuit93.pdf)

This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit, Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf

Examples

>>> from sklearn.linear_model import OrthogonalMatchingPursuit
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(noise=4, random_state=0)
>>> reg = OrthogonalMatchingPursuit().fit(X, y)
>>> reg.score(X, y)
0.9991...
>>> reg.predict(X[:1,])
array([-78.3854...])

Methods

fit(X, y): Fit the model using X, y as training data.
get_params([deep]): Get parameters for this estimator.
predict(X): Predict using the linear model.
score(X, y[, sample_weight]): Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params): Set the parameters of this estimator.

fit(X, y) [source]

Fit the model using X, y as training data.

Parameters

X : array-like of shape (n_samples, n_features)
    Training data.
y : array-like of shape (n_samples,) or (n_samples, n_targets)
    Target values. Will be cast to X's dtype if necessary.

Returns

self : object
    Returns an instance of self.

get_params(deep=True) [source]

Get parameters for this estimator.

Parameters

deep : bool, default=True
    If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns

params : dict
    Parameter names mapped to their values.

predict(X) [source]

Predict using the linear model.

Parameters

X : array-like or sparse matrix, shape (n_samples, n_features)
    Samples.

Returns

C : array, shape (n_samples,)
    Returns predicted values.

score(X, y, sample_weight=None) [source]

Return the coefficient of determination \(R^2\) of the prediction.

The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.

Parameters

X : array-like of shape (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True values for X.
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns

score : float
    \(R^2\) of self.predict(X) w.r.t. y.

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).

set_params(**params) [source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters

**params : dict
    Estimator parameters.

Returns

self : estimator instance
    Estimator instance.
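As a usage note: the sparsity constraint can be made concrete by fixing the support size. A hedged sketch (dataset parameters invented for illustration; with noisy data the greedy selection should activate exactly the requested number of features):

>>> from sklearn.linear_model import OrthogonalMatchingPursuit
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=50, n_informative=5, noise=4, random_state=0)
>>> omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(X, y)
>>> int((omp.coef_ != 0).sum())   # one non-zero per selected atom
5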
class sklearn.linear_model.OrthogonalMatchingPursuitCV(*, copy=True, fit_intercept=True, normalize=True, max_iter=None, cv=None, n_jobs=None, verbose=False) [source]

Cross-validated Orthogonal Matching Pursuit model (OMP).

See glossary entry for cross-validation estimator.

Read more in the User Guide.

Parameters

copy : bool, default=True
    Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway.
fit_intercept : bool, default=True
    Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
normalize : bool, default=True
    This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
max_iter : int, default=None
    Maximum number of iterations to perform, and therefore the maximum number of features to include. If None, 10% of n_features, but at least 5 if available.
cv : int, cross-validation generator or iterable, default=None
    Determines the cross-validation splitting strategy. Possible inputs for cv are:
    - None, to use the default 5-fold cross-validation,
    - integer, to specify the number of folds,
    - a CV splitter,
    - an iterable yielding (train, test) splits as arrays of indices.
    For integer/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here.
    Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
n_jobs : int, default=None
    Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verbose : bool or int, default=False
    Sets the verbosity amount.

Attributes

intercept_ : float or ndarray of shape (n_targets,)
    Independent term in decision function.
coef_ : ndarray of shape (n_features,) or (n_targets, n_features)
    Parameter vector (w in the problem formulation).
n_nonzero_coefs_ : int
    Estimated number of non-zero coefficients giving the best mean squared error over the cross-validation folds.
n_iter_ : int or array-like
    Number of active features across every target for the model refit with the best hyperparameters obtained by cross-validating across all folds.

See also: orthogonal_mp, orthogonal_mp_gram, lars_path, Lars, LassoLars, OrthogonalMatchingPursuit, LarsCV, LassoLarsCV, sklearn.decomposition.sparse_encode

Examples

>>> from sklearn.linear_model import OrthogonalMatchingPursuitCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=100, n_informative=10,
...                        noise=4, random_state=0)
>>> reg = OrthogonalMatchingPursuitCV(cv=5).fit(X, y)
>>> reg.score(X, y)
0.9991...
>>> reg.n_nonzero_coefs_
10
>>> reg.predict(X[:1,])
array([-78.3854...])

Methods

fit(X, y): Fit the model using X, y as training data.
get_params([deep]): Get parameters for this estimator.
predict(X): Predict using the linear model.
score(X, y[, sample_weight]): Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params): Set the parameters of this estimator.

fit(X, y) [source]

Fit the model using X, y as training data.

Parameters

X : array-like of shape (n_samples, n_features)
    Training data.
y : array-like of shape (n_samples,)
    Target values. Will be cast to X's dtype if necessary.

Returns

self : object
    Returns an instance of self.
get_params(deep=True) [source]

Get parameters for this estimator.

Parameters

deep : bool, default=True
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params : dict
    Parameter names mapped to their values.

predict(X) [source]

Predict using the linear model.

Parameters

X : array-like or sparse matrix, shape (n_samples, n_features)
    Samples.

Returns

C : array, shape (n_samples,)
    Returns predicted values.

score(X, y, sample_weight=None) [source]

Return the coefficient of determination \(R^2\) of the prediction.

The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.

Parameters

X : array-like of shape (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True values for X.
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns

score : float
    \(R^2\) of self.predict(X) w.r.t. y.

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).

set_params(**params) [source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters

**params : dict
    Estimator parameters.

Returns

self : estimator instance
    Estimator instance.
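A common pattern is to let the CV estimator pick the support size and then refit a plain OMP with it. A hedged sketch, continuing from reg, X and y in the Examples above (assuming the greedy fit activates exactly the requested number of features):

>>> from sklearn.linear_model import OrthogonalMatchingPursuit
>>> omp = OrthogonalMatchingPursuit(n_nonzero_coefs=reg.n_nonzero_coefs_).fit(X, y)
>>> int((omp.coef_ != 0).sum()) == reg.n_nonzero_coefs_
True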
sklearn.modules.generated.sklearn.linear_model.orthogonalmatchingpursuitcv#sklearn.linear_model.OrthogonalMatchingPursuitCV
sklearn.linear_model.OrthogonalMatchingPursuitCV class sklearn.linear_model.OrthogonalMatchingPursuitCV(*, copy=True, fit_intercept=True, normalize=True, max_iter=None, cv=None, n_jobs=None, verbose=False) [source] Cross-validated Orthogonal Matching Pursuit model (OMP). See glossary entry for cross-validation estimator. Read more in the User Guide. Parameters copybool, default=True Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway. fit_interceptbool, default=True whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=True This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. max_iterint, default=None Maximum numbers of iterations to perform, therefore maximum features to include. 10% of n_features but at least 5 if available. cvint, cross-validation generator or iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, integer, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, KFold is used. Refer User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. n_jobsint, default=None Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. verbosebool or int, default=False Sets the verbosity amount. Attributes intercept_float or ndarray of shape (n_targets,) Independent term in decision function. coef_ndarray of shape (n_features,) or (n_targets, n_features) Parameter vector (w in the problem formulation). n_nonzero_coefs_int Estimated number of non-zero coefficients giving the best mean squared error over the cross-validation folds. n_iter_int or array-like Number of active features across every target for the model refit with the best hyperparameters got by cross-validating across all folds. See also orthogonal_mp orthogonal_mp_gram lars_path Lars LassoLars OrthogonalMatchingPursuit LarsCV LassoLarsCV sklearn.decomposition.sparse_encode Examples >>> from sklearn.linear_model import OrthogonalMatchingPursuitCV >>> from sklearn.datasets import make_regression >>> X, y = make_regression(n_features=100, n_informative=10, ... noise=4, random_state=0) >>> reg = OrthogonalMatchingPursuitCV(cv=5).fit(X, y) >>> reg.score(X, y) 0.9991... >>> reg.n_nonzero_coefs_ 10 >>> reg.predict(X[:1,]) array([-78.3854...]) Methods fit(X, y) Fit the model using X, y as training data. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit the model using X, y as training data. Parameters Xarray-like of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. Will be cast to X’s dtype if necessary. 
Returns selfobject returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.OrthogonalMatchingPursuitCV Orthogonal Matching Pursuit
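To make the role of max_iter concrete, here is a minimal sketch (not part of the original reference; the data and parameter values are illustrative) that caps how many atoms the cross-validated search may select:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import OrthogonalMatchingPursuitCV

X, y = make_regression(n_features=100, n_informative=10, noise=4, random_state=0)
# Allow the CV search to consider at most 5 non-zero coefficients.
reg = OrthogonalMatchingPursuitCV(cv=5, max_iter=5).fit(X, y)
print(reg.n_nonzero_coefs_)          # at most 5 by construction
print(np.count_nonzero(reg.coef_))   # sparsity of the refit model matches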
sklearn.modules.generated.sklearn.linear_model.orthogonalmatchingpursuitcv
fit(X, y) [source] Fit the model using X, y as training data. Parameters Xarray-like of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. Will be cast to X’s dtype if necessary. Returns selfobject returns an instance of self.
sklearn.modules.generated.sklearn.linear_model.orthogonalmatchingpursuitcv#sklearn.linear_model.OrthogonalMatchingPursuitCV.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.orthogonalmatchingpursuitcv#sklearn.linear_model.OrthogonalMatchingPursuitCV.get_params
predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.orthogonalmatchingpursuitcv#sklearn.linear_model.OrthogonalMatchingPursuitCV.predict
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
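As a quick sanity check on the definition above, the sketch below (illustrative data only, not from the original docs) computes u and v by hand and compares the result against score:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import OrthogonalMatchingPursuitCV

X, y = make_regression(n_features=100, n_informative=10, noise=4, random_state=0)
reg = OrthogonalMatchingPursuitCV(cv=5).fit(X, y)

y_pred = reg.predict(X)
u = ((y - y_pred) ** 2).sum()        # residual sum of squares
v = ((y - y.mean()) ** 2).sum()      # total sum of squares
print(np.isclose(1 - u / v, reg.score(X, y)))  # True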
sklearn.modules.generated.sklearn.linear_model.orthogonalmatchingpursuitcv#sklearn.linear_model.OrthogonalMatchingPursuitCV.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.orthogonalmatchingpursuitcv#sklearn.linear_model.OrthogonalMatchingPursuitCV.set_params
sklearn.linear_model.orthogonal_mp(X, y, *, n_nonzero_coefs=None, tol=None, precompute=False, copy_X=True, return_path=False, return_n_iter=False) [source] Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems. An instance of the problem has the form: When parametrized by the number of non-zero coefficients using n_nonzero_coefs: argmin ||y - X gamma||^2_2 subject to ||gamma||_0 <= n_nonzero_coefs When parametrized by error using the parameter tol: argmin ||gamma||_0 subject to ||y - X gamma||^2_2 <= tol Read more in the User Guide. Parameters Xndarray of shape (n_samples, n_features) Input data. Columns are assumed to have unit norm. yndarray of shape (n_samples,) or (n_samples, n_targets) Input targets. n_nonzero_coefsint, default=None Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features. tolfloat, default=None Maximum norm of the residual. If not None, overrides n_nonzero_coefs. precompute‘auto’ or bool, default=False Whether to perform precomputations. Improves performance when n_targets or n_samples is very large. copy_Xbool, default=True Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway. return_pathbool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. return_n_iterbool, default=False Whether or not to return the number of iterations. Returns coefndarray of shape (n_features,) or (n_features, n_targets) Coefficients of the OMP solution. If return_path=True, this contains the whole coefficient path. In this case its shape is (n_features, n_features) or (n_features, n_targets, n_features) and iterating over the last axis yields coefficients in increasing order of active features. n_itersarray-like or int Number of active features across every target. Returned only if return_n_iter is set to True. See also OrthogonalMatchingPursuit orthogonal_mp_gram lars_path sklearn.decomposition.sparse_encode Notes Orthogonal matching pursuit was introduced in S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (http://blanche.polytechnique.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf
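A minimal usage sketch, assuming a synthetic Gaussian dictionary with unit-norm columns (the data and indices below are illustrative, not from the original docs):

import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.RandomState(0)
X = rng.randn(50, 100)
X /= np.linalg.norm(X, axis=0)               # unit-norm columns, as expected

gamma_true = np.zeros(100)
gamma_true[[3, 27, 42]] = [1.5, -2.0, 0.7]   # 3 active atoms
y = X @ gamma_true

coef = orthogonal_mp(X, y, n_nonzero_coefs=3)
print(np.flatnonzero(coef))                  # typically recovers [3, 27, 42]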
sklearn.modules.generated.sklearn.linear_model.orthogonal_mp#sklearn.linear_model.orthogonal_mp
sklearn.linear_model.orthogonal_mp_gram(Gram, Xy, *, n_nonzero_coefs=None, tol=None, norms_squared=None, copy_Gram=True, copy_Xy=True, return_path=False, return_n_iter=False) [source] Gram Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems using only the Gram matrix X.T * X and the product X.T * y. Read more in the User Guide. Parameters Gramndarray of shape (n_features, n_features) Gram matrix of the input data: X.T * X. Xyndarray of shape (n_features,) or (n_features, n_targets) Input targets multiplied by X: X.T * y. n_nonzero_coefsint, default=None Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features. tolfloat, default=None Maximum norm of the residual. If not None, overrides n_nonzero_coefs. norms_squaredarray-like of shape (n_targets,), default=None Squared L2 norms of the rows of y. Required if tol is not None. copy_Grambool, default=True Whether the Gram matrix must be copied by the algorithm. A false value is only helpful if it is already Fortran-ordered, otherwise a copy is made anyway. copy_Xybool, default=True Whether the covariance vector Xy must be copied by the algorithm. If False, it may be overwritten. return_pathbool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. return_n_iterbool, default=False Whether or not to return the number of iterations. Returns coefndarray of shape (n_features,) or (n_features, n_targets) Coefficients of the OMP solution. If return_path=True, this contains the whole coefficient path. In this case its shape is (n_features, n_features) or (n_features, n_targets, n_features) and iterating over the last axis yields coefficients in increasing order of active features. n_itersarray-like or int Number of active features across every target. Returned only if return_n_iter is set to True. See also OrthogonalMatchingPursuit orthogonal_mp lars_path sklearn.decomposition.sparse_encode Notes Orthogonal matching pursuit was introduced in S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (http://blanche.polytechnique.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf
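A sketch of the Gram variant under the same synthetic setup as the orthogonal_mp example above (an assumption for illustration): precompute G = X.T @ X and Xy = X.T @ y once, which pays off when many problems share the same dictionary.

import numpy as np
from sklearn.linear_model import orthogonal_mp_gram

rng = np.random.RandomState(0)
X = rng.randn(50, 100)
X /= np.linalg.norm(X, axis=0)
gamma_true = np.zeros(100)
gamma_true[[3, 27, 42]] = [1.5, -2.0, 0.7]
y = X @ gamma_true

G = X.T @ X                                  # Gram matrix, shape (100, 100)
Xy = X.T @ y                                 # covariance vector, shape (100,)
coef = orthogonal_mp_gram(G, Xy, n_nonzero_coefs=3)
print(np.flatnonzero(coef))                  # same support as the direct solver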
sklearn.modules.generated.sklearn.linear_model.orthogonal_mp_gram#sklearn.linear_model.orthogonal_mp_gram
sklearn.linear_model.PassiveAggressiveClassifier class sklearn.linear_model.PassiveAggressiveClassifier(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='hinge', n_jobs=None, random_state=None, warm_start=False, class_weight=None, average=False) [source] Passive Aggressive Classifier. Read more in the User Guide. Parameters Cfloat, default=1.0 Maximum step size (regularization). Defaults to 1.0. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. max_iterint, default=1000 The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19. tolfloat or None, default=1e-3 The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol). New in version 0.19. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a stratified fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20. validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20. n_iter_no_changeint, default=5 Number of iterations with no improvement to wait before early stopping. New in version 0.20. shufflebool, default=True Whether or not the training data should be shuffled after each epoch. verboseinteger, default=0 The verbosity level. lossstring, default=”hinge” The loss function to be used: hinge: equivalent to PA-I in the reference paper. squared_hinge: equivalent to PA-II in the reference paper. n_jobsint or None, default=None The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. random_stateint, RandomState instance, default=None Used to shuffle the training data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary. warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. class_weightdict, {class_label: weight} or “balanced” or None, default=None Preset for the class_weight fit parameter. Weights associated with classes. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)) New in version 0.17: parameter class_weight to automatically weight samples. averagebool or int, default=False When set to True, computes the averaged SGD weights and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average.
So average=10 will begin averaging after seeing 10 samples. New in version 0.19: parameter average to use weights averaging in SGD. Attributes coef_array, shape = [1, n_features] if n_classes == 2 else [n_classes, n_features] Weights assigned to the features. intercept_array, shape = [1] if n_classes == 2 else [n_classes] Constants in decision function. n_iter_int The actual number of iterations to reach the stopping criterion. For multiclass fits, it is the maximum over every binary fit. classes_array of shape (n_classes,) The unique class labels. t_int Number of weight updates performed during training. Same as (n_iter_ * n_samples). loss_function_callable Loss function used by the algorithm. See also SGDClassifier Perceptron References Online Passive-Aggressive Algorithms <http://jmlr.csail.mit.edu/papers/volume7/crammer06a/crammer06a.pdf> K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, Y. Singer - JMLR (2006) Examples >>> from sklearn.linear_model import PassiveAggressiveClassifier >>> from sklearn.datasets import make_classification >>> X, y = make_classification(n_features=4, random_state=0) >>> clf = PassiveAggressiveClassifier(max_iter=1000, random_state=0, ... tol=1e-3) >>> clf.fit(X, y) PassiveAggressiveClassifier(random_state=0) >>> print(clf.coef_) [[0.26642044 0.45070924 0.67251877 0.64185414]] >>> print(clf.intercept_) [1.84127814] >>> print(clf.predict([[0, 0, 0, 0]])) [1] Methods decision_function(X) Predict confidence scores for samples. densify() Convert coefficient matrix to dense array format. fit(X, y[, coef_init, intercept_init]) Fit linear model with Passive Aggressive algorithm. get_params([deep]) Get parameters for this estimator. partial_fit(X, y[, classes]) Fit linear model with Passive Aggressive algorithm. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**kwargs) Set and validate the parameters of estimator. sparsify() Convert coefficient matrix to sparse format. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, coef_init=None, intercept_init=None) [source] Fit linear model with Passive Aggressive algorithm. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data ynumpy array of shape [n_samples] Target values coef_initarray, shape = [n_classes,n_features] The initial coefficients to warm-start the optimization. intercept_initarray, shape = [n_classes] The initial intercept to warm-start the optimization. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns paramsdict Parameter names mapped to their values. partial_fit(X, y, classes=None) [source] Fit linear model with Passive Aggressive algorithm. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Subset of the training data ynumpy array of shape [n_samples] Subset of the target values classesarray, shape = [n_classes] Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes. Returns selfreturns an instance of self. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance. sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify. Examples using sklearn.linear_model.PassiveAggressiveClassifier Out-of-core classification of text documents Comparing various online solvers Classification of text documents using sparse features
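For online use, here is a hedged sketch of mini-batch training with partial_fit (synthetic data; batch sizes are illustrative). The full label set must be passed via classes on the first call:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import PassiveAggressiveClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
clf = PassiveAggressiveClassifier(random_state=0)

classes = np.unique(y)                       # required on the first call only
for batch in np.array_split(np.arange(300), 10):
    clf.partial_fit(X[batch], y[batch], classes=classes)
print(clf.score(X, y))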
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier
decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
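A minimal illustration (synthetic data, an assumption) of the binary sign convention: scores greater than 0 correspond to self.classes_[1].

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import PassiveAggressiveClassifier

X, y = make_classification(n_features=4, random_state=0)
clf = PassiveAggressiveClassifier(random_state=0).fit(X, y)

scores = clf.decision_function(X[:5])
preds = clf.predict(X[:5])
# True (barring the measure-zero case of a score exactly at 0)
print(np.all((scores > 0) == (preds == clf.classes_[1])))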
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.decision_function
densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator.
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.densify
fit(X, y, coef_init=None, intercept_init=None) [source] Fit linear model with Passive Aggressive algorithm. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data ynumpy array of shape [n_samples] Target values coef_initarray, shape = [n_classes,n_features] The initial coefficients to warm-start the optimization. intercept_initarray, shape = [n_classes] The initial intercept to warm-start the optimization. Returns selfreturns an instance of self.
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.get_params
partial_fit(X, y, classes=None) [source] Fit linear model with Passive Aggressive algorithm. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Subset of the training data ynumpy array of shape [n_samples] Subset of the target values classesarray, shape = [n_classes] Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes. Returns selfreturns an instance of self.
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.partial_fit
predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample.
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.predict
score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y.
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.score
set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance.
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.set_params
sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
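A sketch of the sparsify/densify round trip (toy model, so the coefficients are dense; this only demonstrates the format change, not a memory win):

import scipy.sparse as sp
from sklearn.datasets import make_classification
from sklearn.linear_model import PassiveAggressiveClassifier

X, y = make_classification(n_features=4, random_state=0)
clf = PassiveAggressiveClassifier(random_state=0).fit(X, y)

clf.sparsify()
print(sp.issparse(clf.coef_))   # True
clf.densify()                   # required before any further partial_fit
print(sp.issparse(clf.coef_))   # False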
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveclassifier#sklearn.linear_model.PassiveAggressiveClassifier.sparsify
class sklearn.linear_model.PassiveAggressiveRegressor(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False) [source] Passive Aggressive Regressor. Read more in the User Guide. Parameters Cfloat, default=1.0 Maximum step size (regularization). Defaults to 1.0. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True. max_iterint, default=1000 The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19. tolfloat or None, default=1e-3 The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol). New in version 0.19. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20. validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20. n_iter_no_changeint, default=5 Number of iterations with no improvement to wait before early stopping. New in version 0.20. shufflebool, default=True Whether or not the training data should be shuffled after each epoch. verboseinteger, default=0 The verbosity level. lossstring, default=”epsilon_insensitive” The loss function to be used: epsilon_insensitive: equivalent to PA-I in the reference paper. squared_epsilon_insensitive: equivalent to PA-II in the reference paper. epsilonfloat, default=0.1 If the difference between the current prediction and the correct label is below this threshold, the model is not updated. random_stateint, RandomState instance, default=None Used to shuffle the training data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary. warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. averagebool or int, default=False When set to True, computes the averaged SGD weights and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. New in version 0.19: parameter average to use weights averaging in SGD. Attributes coef_array, shape = [n_features] Weights assigned to the features. intercept_array, shape = [1] Constants in decision function. n_iter_int The actual number of iterations to reach the stopping criterion. t_int Number of weight updates performed during training. Same as (n_iter_ * n_samples).
See also SGDRegressor References Online Passive-Aggressive Algorithms <http://jmlr.csail.mit.edu/papers/volume7/crammer06a/crammer06a.pdf> K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, Y. Singer - JMLR (2006) Examples >>> from sklearn.linear_model import PassiveAggressiveRegressor >>> from sklearn.datasets import make_regression >>> X, y = make_regression(n_features=4, random_state=0) >>> regr = PassiveAggressiveRegressor(max_iter=100, random_state=0, ... tol=1e-3) >>> regr.fit(X, y) PassiveAggressiveRegressor(max_iter=100, random_state=0) >>> print(regr.coef_) [20.48736655 34.18818427 67.59122734 87.94731329] >>> print(regr.intercept_) [-0.02306214] >>> print(regr.predict([[0, 0, 0, 0]])) [-0.02306214]
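To get a feel for the epsilon-insensitive band, a hedged sketch (illustrative values only): residuals smaller than epsilon trigger no update, so a very large epsilon leaves the model mostly untrained.

from sklearn.datasets import make_regression
from sklearn.linear_model import PassiveAggressiveRegressor

X, y = make_regression(n_features=4, random_state=0)
for eps in (0.1, 10.0):
    regr = PassiveAggressiveRegressor(max_iter=100, tol=1e-3,
                                      epsilon=eps, random_state=0)
    regr.fit(X, y)
    print(eps, regr.score(X, y))   # the tighter band typically fits better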
sklearn.modules.generated.sklearn.linear_model.passiveaggressiveregressor#sklearn.linear_model.PassiveAggressiveRegressor
sklearn.linear_model.Perceptron class sklearn.linear_model.Perceptron(*, penalty=None, alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, eta0=1.0, n_jobs=None, random_state=0, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False) [source] Read more in the User Guide. Parameters penalty{‘l2’,’l1’,’elasticnet’}, default=None The penalty (aka regularization term) to be used. alphafloat, default=0.0001 Constant that multiplies the regularization term if regularization is used. l1_ratiofloat, default=0.15 The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1. Only used if penalty='elasticnet'. New in version 0.24. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. max_iterint, default=1000 The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19. tolfloat, default=1e-3 The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol). New in version 0.19. shufflebool, default=True Whether or not the training data should be shuffled after each epoch. verboseint, default=0 The verbosity level. eta0double, default=1 Constant by which the updates are multiplied. n_jobsint, default=None The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. random_stateint, RandomState instance, default=0 Used to shuffle the training data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a stratified fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20. validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20. n_iter_no_changeint, default=5 Number of iterations with no improvement to wait before early stopping. New in version 0.20. class_weightdict, {class_label: weight} or “balanced”, default=None Preset for the class_weight fit parameter. Weights associated with classes. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)) warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Attributes classes_ndarray of shape (n_classes,) The unique class labels. coef_ndarray of shape (1, n_features) if n_classes == 2 else (n_classes, n_features) Weights assigned to the features. intercept_ndarray of shape (1,) if n_classes == 2 else (n_classes,) Constants in decision function.
loss_function_concrete LossFunction The function that determines the loss, or difference between the output of the algorithm and the target values. n_iter_int The actual number of iterations to reach the stopping criterion. For multiclass fits, it is the maximum over every binary fit. t_int Number of weight updates performed during training. Same as (n_iter_ * n_samples). See also SGDClassifier Notes Perceptron is a classification algorithm which shares the same underlying implementation with SGDClassifier. In fact, Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None). References https://en.wikipedia.org/wiki/Perceptron and references therein. Examples >>> from sklearn.datasets import load_digits >>> from sklearn.linear_model import Perceptron >>> X, y = load_digits(return_X_y=True) >>> clf = Perceptron(tol=1e-3, random_state=0) >>> clf.fit(X, y) Perceptron() >>> clf.score(X, y) 0.939... Methods decision_function(X) Predict confidence scores for samples. densify() Convert coefficient matrix to dense array format. fit(X, y[, coef_init, intercept_init, …]) Fit linear model with Stochastic Gradient Descent. get_params([deep]) Get parameters for this estimator. partial_fit(X, y[, classes, sample_weight]) Perform one epoch of stochastic gradient descent on given samples. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**kwargs) Set and validate the parameters of estimator. sparsify() Convert coefficient matrix to sparse format. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. coef_initndarray of shape (n_classes, n_features), default=None The initial coefficients to warm-start the optimization. intercept_initndarray of shape (n_classes,), default=None The initial intercept to warm-start the optimization. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class_weight (passed through the constructor) if class_weight is specified. Returns self : Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. 
partial_fit(X, y, classes=None, sample_weight=None) [source] Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Subset of the training data. yndarray of shape (n_samples,) Subset of the target values. classesndarray of shape (n_classes,), default=None Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. Returns self : Returns an instance of self. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance. sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify. Examples using sklearn.linear_model.Perceptron Out-of-core classification of text documents Comparing various online solvers Classification of text documents using sparse features
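The equivalence stated in the Notes can be checked empirically; a hedged sketch (matching random_state so the shuffling agrees, with the assumption that all remaining defaults coincide):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import Perceptron, SGDClassifier

X, y = load_digits(return_X_y=True)
p = Perceptron(tol=1e-3, random_state=0).fit(X, y)
s = SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant",
                  penalty=None, tol=1e-3, random_state=0).fit(X, y)
print(np.allclose(p.coef_, s.coef_))   # expected True under this equivalence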
sklearn.modules.generated.sklearn.linear_model.perceptron
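To make the equivalence stated in the Notes concrete, here is a minimal sketch (not part of the original reference) that fits both estimators with the same seed; since they share the same underlying implementation, the learned coefficients should coincide up to floating-point noise:

>>> import numpy as np
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron, SGDClassifier
>>> X, y = load_digits(return_X_y=True)
>>> clf = Perceptron(tol=1e-3, random_state=0).fit(X, y)
>>> sgd = SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant",
...                     penalty=None, tol=1e-3, random_state=0).fit(X, y)
>>> np.allclose(clf.coef_, sgd.coef_)  # same update rule, same shuffling seed
True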
decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.decision_function
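As an illustrative sketch (not from the original page), taking the argmax of the confidence scores over classes recovers predict in the multiclass case:

>>> import numpy as np
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron
>>> X, y = load_digits(return_X_y=True)
>>> clf = Perceptron(random_state=0).fit(X, y)
>>> scores = clf.decision_function(X)
>>> scores.shape  # one column of scores per class
(1797, 10)
>>> np.array_equal(clf.classes_[scores.argmax(axis=1)], clf.predict(X))
True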
densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.densify
fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. coef_initndarray of shape (n_classes, n_features), default=None The initial coefficients to warm-start the optimization. intercept_initndarray of shape (n_classes,), default=None The initial intercept to warm-start the optimization. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class_weight (passed through the constructor) if class_weight is specified. Returns self : Returns an instance of self.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.get_params
partial_fit(X, y, classes=None, sample_weight=None) [source] Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Subset of the training data. yndarray of shape (n_samples,) Subset of the target values. classesndarray of shape (n_classes,), default=None Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. Returns self : Returns an instance of self.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.partial_fit
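A minimal out-of-core sketch (the batch split and dataset are illustrative): classes is supplied on the first call, and each call performs a single epoch over its batch:

>>> import numpy as np
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron
>>> X, y = load_digits(return_X_y=True)
>>> classes = np.unique(y)  # all labels, known up front
>>> clf = Perceptron(random_state=0)
>>> for X_batch, y_batch in zip(np.array_split(X, 10), np.array_split(y, 10)):
...     _ = clf.partial_fit(X_batch, y_batch, classes=classes)
>>> acc = clf.score(X, y)  # accuracy after one pass over all batches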
predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.predict
score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.score
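As a sketch of what score computes, it should agree exactly with accuracy_score applied to the same predictions:

>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron
>>> from sklearn.metrics import accuracy_score
>>> X, y = load_digits(return_X_y=True)
>>> clf = Perceptron(random_state=0).fit(X, y)
>>> clf.score(X, y) == accuracy_score(y, clf.predict(X))
True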
set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.set_params
sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
sklearn.modules.generated.sklearn.linear_model.perceptron#sklearn.linear_model.Perceptron.sparsify
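A short sketch of the sparsify/densify round trip (the penalty and alpha below are illustrative choices meant to produce some zero coefficients):

>>> from scipy import sparse
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron
>>> X, y = load_digits(return_X_y=True)
>>> clf = Perceptron(penalty="l1", alpha=0.001, random_state=0).fit(X, y)
>>> frac_zero = (clf.coef_ == 0).mean()  # rule of thumb: worthwhile above ~50%
>>> clf = clf.sparsify()
>>> sparse.issparse(clf.coef_)
True
>>> clf = clf.densify()  # required again before any further partial_fit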
sklearn.linear_model.PoissonRegressor class sklearn.linear_model.PoissonRegressor(*, alpha=1.0, fit_intercept=True, max_iter=100, tol=0.0001, warm_start=False, verbose=0) [source] Generalized Linear Model with a Poisson distribution. Read more in the User Guide. New in version 0.23. Parameters alphafloat, default=1 Constant that multiplies the penalty term and thus determines the regularization strength. alpha = 0 is equivalent to unpenalized GLMs. In this case, the design matrix X must have full column rank (no collinearities). fit_interceptbool, default=True Specifies if a constant (a.k.a. bias or intercept) should be added to the linear predictor (X @ coef + intercept). max_iterint, default=100 The maximal number of iterations for the solver. tolfloat, default=1e-4 Stopping criterion. For the lbfgs solver, the iteration will stop when max{|g_j|, j = 1, ..., d} <= tol where g_j is the j-th component of the gradient (derivative) of the objective function. warm_startbool, default=False If set to True, reuse the solution of the previous call to fit as initialization for coef_ and intercept_. verboseint, default=0 For the lbfgs solver set verbose to any positive number for verbosity. Attributes coef_array of shape (n_features,) Estimated coefficients for the linear predictor (X @ coef_ + intercept_) in the GLM. intercept_float Intercept (a.k.a. bias) added to linear predictor. n_iter_int Actual number of iterations used in the solver. Examples >>> from sklearn import linear_model >>> clf = linear_model.PoissonRegressor() >>> X = [[1, 2], [2, 3], [3, 4], [4, 3]] >>> y = [12, 17, 22, 21] >>> clf.fit(X, y) PoissonRegressor() >>> clf.score(X, y) 0.990... >>> clf.coef_ array([0.121..., 0.158...]) >>> clf.intercept_ 2.088... >>> clf.predict([[1, 1], [3, 4]]) array([10.676..., 21.875...]) Methods fit(X, y[, sample_weight]) Fit a Generalized Linear Model. get_params([deep]) Get parameters for this estimator. predict(X) Predict using GLM with feature matrix X. score(X, y[, sample_weight]) Compute D^2, the percentage of deviance explained. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit a Generalized Linear Model. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using GLM with feature matrix X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. Returns y_predarray of shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Compute D^2, the percentage of deviance explained. D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 uses deviance. Note that those two are equal for family='normal'. D^2 is defined as \(D^2 = 1-\frac{D(y_{true},y_{pred})}{D_{null}}\), where \(D_{null}\) is the null deviance, i.e. the deviance of a model with intercept alone, which corresponds to \(y_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by sample_weight. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). 
Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) True values of target. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat D^2 of self.predict(X) w.r.t. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.PoissonRegressor Release Highlights for scikit-learn 0.23 Poisson regression and non-normal loss Tweedie regression on insurance claims
sklearn.modules.generated.sklearn.linear_model.poissonregressor
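An illustrative sketch on synthetic count data (it assumes the default log link, so that predictions equal exp(X @ coef_ + intercept_); the coefficients used to generate y are arbitrary):

>>> import numpy as np
>>> from sklearn.linear_model import PoissonRegressor
>>> rng = np.random.RandomState(0)
>>> X = rng.uniform(size=(200, 2))
>>> y = rng.poisson(lam=np.exp(1.0 + X @ np.array([0.5, -0.25])))  # synthetic counts
>>> reg = PoissonRegressor(alpha=1e-3).fit(X, y)
>>> np.allclose(reg.predict(X), np.exp(X @ reg.coef_ + reg.intercept_))
True
>>> reg.predict(X).min() > 0  # the log link keeps predicted rates positive
True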
fit(X, y, sample_weight=None) [source] Fit a Generalized Linear Model. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns selfreturns an instance of self.
sklearn.modules.generated.sklearn.linear_model.poissonregressor#sklearn.linear_model.PoissonRegressor.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.poissonregressor#sklearn.linear_model.PoissonRegressor.get_params
predict(X) [source] Predict using GLM with feature matrix X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Samples. Returns y_predarray of shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.poissonregressor#sklearn.linear_model.PoissonRegressor.predict
score(X, y, sample_weight=None) [source] Compute D^2, the percentage of deviance explained. D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 uses deviance. Note that those two are equal for family='normal'. D^2 is defined as \(D^2 = 1-\frac{D(y_{true},y_{pred})}{D_{null}}\), where \(D_{null}\) is the null deviance, i.e. the deviance of a model with intercept alone, which corresponds to \(y_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by sample_weight. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) True values of target. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat D^2 of self.predict(X) w.r.t. y.
sklearn.modules.generated.sklearn.linear_model.poissonregressor#sklearn.linear_model.PoissonRegressor.score
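The following sketch reproduces the D^2 score from mean_poisson_deviance on synthetic data, matching the definition above (the data-generating coefficients are arbitrary):

>>> import numpy as np
>>> from sklearn.linear_model import PoissonRegressor
>>> from sklearn.metrics import mean_poisson_deviance
>>> rng = np.random.RandomState(0)
>>> X = rng.uniform(size=(100, 2))
>>> y = rng.poisson(lam=np.exp(1.0 + X.sum(axis=1)))
>>> reg = PoissonRegressor().fit(X, y)
>>> dev = mean_poisson_deviance(y, reg.predict(X))
>>> dev_null = mean_poisson_deviance(y, np.full_like(y, y.mean(), dtype=float))
>>> np.isclose(reg.score(X, y), 1 - dev / dev_null)
True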
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.poissonregressor#sklearn.linear_model.PoissonRegressor.set_params
sklearn.linear_model.RANSACRegressor class sklearn.linear_model.RANSACRegressor(base_estimator=None, *, min_samples=None, residual_threshold=None, is_data_valid=None, is_model_valid=None, max_trials=100, max_skips=inf, stop_n_inliers=inf, stop_score=inf, stop_probability=0.99, loss='absolute_loss', random_state=None) [source] RANSAC (RANdom SAmple Consensus) algorithm. RANSAC is an iterative algorithm for the robust estimation of parameters from a subset of inliers from the complete data set. Read more in the User Guide. Parameters base_estimatorobject, default=None Base estimator object which implements the following methods: fit(X, y): Fit model to given training data and target values. score(X, y): Returns the mean accuracy on the given test data, which is used for the stop criterion defined by stop_score. Additionally, the score is used to decide which of two equally large consensus sets is chosen as the better one. predict(X): Returns predicted values using the linear model, which is used to compute residual error using loss function. If base_estimator is None, then LinearRegression is used for target values of dtype float. Note that the current implementation only supports regression estimators. min_samplesint (>= 1) or float ([0, 1]), default=None Minimum number of samples chosen randomly from original data. Treated as an absolute number of samples for min_samples >= 1, treated as a relative number ceil(min_samples * X.shape[0]) for min_samples < 1. This is typically chosen as the minimal number of samples necessary to estimate the given base_estimator. By default a sklearn.linear_model.LinearRegression() estimator is assumed and min_samples is chosen as X.shape[1] + 1. residual_thresholdfloat, default=None Maximum residual for a data sample to be classified as an inlier. By default the threshold is chosen as the MAD (median absolute deviation) of the target values y. is_data_validcallable, default=None This function is called with the randomly selected data before the model is fitted to it: is_data_valid(X, y). If its return value is False the current randomly chosen sub-sample is skipped. is_model_validcallable, default=None This function is called with the estimated model and the randomly selected data: is_model_valid(model, X, y). If its return value is False the current randomly chosen sub-sample is skipped. Rejecting samples with this function is computationally costlier than with is_data_valid. is_model_valid should therefore only be used if the estimated model is needed for making the rejection decision. max_trialsint, default=100 Maximum number of iterations for random sample selection. max_skipsint, default=np.inf Maximum number of iterations that can be skipped due to finding zero inliers or invalid data defined by is_data_valid or invalid models defined by is_model_valid. New in version 0.19. stop_n_inliersint, default=np.inf Stop iteration if at least this number of inliers are found. stop_scorefloat, default=np.inf Stop iteration if the score is greater than or equal to this threshold. stop_probabilityfloat in range [0, 1], default=0.99 RANSAC iteration stops if at least one outlier-free set of the training data is sampled in RANSAC. This requires generating at least N samples (iterations): N >= log(1 - probability) / log(1 - e**m) where the probability (confidence) is typically set to a high value such as 0.99 (the default) and e is the current fraction of inliers w.r.t. the total number of samples. 
lossstring, callable, default=’absolute_loss’ String inputs “absolute_loss” and “squared_loss” are supported, which compute the absolute loss and squared loss per sample respectively. If loss is a callable, then it should be a function that takes two arrays as inputs, the true and predicted values, and returns a 1-D array with the i-th value of the array corresponding to the loss on X[i]. If the loss on a sample is greater than the residual_threshold, then this sample is classified as an outlier. New in version 0.18. random_stateint, RandomState instance, default=None The generator used for random sample selection. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes estimator_object Best fitted model (copy of the base_estimator object). n_trials_int Number of random selection trials until one of the stop criteria is met. It is always <= max_trials. inlier_mask_bool array of shape [n_samples] Boolean mask of inliers classified as True. n_skips_no_inliers_int Number of iterations skipped due to finding zero inliers. New in version 0.19. n_skips_invalid_data_int Number of iterations skipped due to invalid data defined by is_data_valid. New in version 0.19. n_skips_invalid_model_int Number of iterations skipped due to an invalid model defined by is_model_valid. New in version 0.19. References 1 https://en.wikipedia.org/wiki/RANSAC 2 https://www.sri.com/sites/default/files/publications/ransac-publication.pdf 3 http://www.bmva.org/bmvc/2009/Papers/Paper355/Paper355.pdf Examples >>> from sklearn.linear_model import RANSACRegressor >>> from sklearn.datasets import make_regression >>> X, y = make_regression( ... n_samples=200, n_features=2, noise=4.0, random_state=0) >>> reg = RANSACRegressor(random_state=0).fit(X, y) >>> reg.score(X, y) 0.9885... >>> reg.predict(X[:1,]) array([-31.9417...]) Methods fit(X, y[, sample_weight]) Fit estimator using RANSAC algorithm. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the estimated model. score(X, y) Returns the score of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit estimator using RANSAC algorithm. Parameters Xarray-like or sparse matrix, shape [n_samples, n_features] Training data. yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values. sample_weightarray-like of shape (n_samples,), default=None Individual weights for each sample. An error is raised if sample_weight is passed and the base_estimator fit method does not support it. New in version 0.18. Raises ValueError If no valid consensus set could be found. This occurs if is_data_valid and is_model_valid return False for all max_trials randomly chosen sub-samples. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the estimated model. This is a wrapper for estimator_.predict(X). Parameters Xnumpy array of shape [n_samples, n_features] Returns yarray, shape = [n_samples] or [n_samples, n_targets] Returns predicted values. score(X, y) [source] Returns the score of the prediction. This is a wrapper for estimator_.score(X, y). Parameters Xnumpy array or sparse matrix of shape [n_samples, n_features] Training data. yarray, shape = [n_samples] or [n_samples, n_targets] Target values. Returns zfloat Score of the prediction. 
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.RANSACRegressor Robust linear model estimation using RANSAC Theil-Sen Regression Robust linear estimator fitting
sklearn.modules.generated.sklearn.linear_model.ransacregressor
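A sketch of a callable loss together with the fitted attributes (the injected outliers and the residual_threshold of 20 are illustrative choices):

>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import RANSACRegressor
>>> X, y = make_regression(n_samples=200, n_features=2, noise=4.0, random_state=0)
>>> y[::10] += 300.0  # corrupt every tenth target
>>> def absolute_loss(y_true, y_pred):  # same contract as loss='absolute_loss'
...     return np.abs(y_true - y_pred)  # 1-D array, one value per sample
>>> reg = RANSACRegressor(loss=absolute_loss, residual_threshold=20.0,
...                       random_state=0).fit(X, y)
>>> reg.inlier_mask_[::10].any()  # the corrupted samples should be rejected
False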
fit(X, y, sample_weight=None) [source] Fit estimator using RANSAC algorithm. Parameters Xarray-like or sparse matrix, shape [n_samples, n_features] Training data. yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values. sample_weightarray-like of shape (n_samples,), default=None Individual weights for each sample. An error is raised if sample_weight is passed and the base_estimator fit method does not support it. New in version 0.18. Raises ValueError If no valid consensus set could be found. This occurs if is_data_valid and is_model_valid return False for all max_trials randomly chosen sub-samples.
sklearn.modules.generated.sklearn.linear_model.ransacregressor#sklearn.linear_model.RANSACRegressor.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.ransacregressor#sklearn.linear_model.RANSACRegressor.get_params
predict(X) [source] Predict using the estimated model. This is a wrapper for estimator_.predict(X). Parameters Xnumpy array of shape [n_samples, n_features] Returns yarray, shape = [n_samples] or [n_samples, n_targets] Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.ransacregressor#sklearn.linear_model.RANSACRegressor.predict
score(X, y) [source] Returns the score of the prediction. This is a wrapper for estimator_.score(X, y). Parameters Xnumpy array or sparse matrix of shape [n_samples, n_features] Training data. yarray, shape = [n_samples] or [n_samples, n_targets] Target values. Returns zfloat Score of the prediction.
sklearn.modules.generated.sklearn.linear_model.ransacregressor#sklearn.linear_model.RANSACRegressor.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.ransacregressor#sklearn.linear_model.RANSACRegressor.set_params
sklearn.linear_model.Ridge class sklearn.linear_model.Ridge(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, solver='auto', random_state=None) [source] Linear least squares with l2 regularization. Minimizes the objective function: ||y - Xw||^2_2 + alpha * ||w||^2_2 This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)). Read more in the User Guide. Parameters alpha{float, ndarray of shape (n_targets,)}, default=1.0 Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number. fit_interceptbool, default=True Whether to fit the intercept for this model. If set to false, no intercept will be used in calculations (i.e. X and y are expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. max_iterint, default=None Maximum number of iterations for conjugate gradient solver. For ‘sparse_cg’ and ‘lsqr’ solvers, the default value is determined by scipy.sparse.linalg. For ‘sag’ solver, the default value is 1000. tolfloat, default=1e-3 Precision of the solution. solver{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}, default=’auto’ Solver to use in the computational routines: ‘auto’ chooses the solver automatically based on the type of data. ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for singular matrices than ‘cholesky’. ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution. ‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set tol and max_iter). ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure. ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure, and are often faster than other solvers when both n_samples and n_features are large. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. The last five solvers all support both dense and sparse data. However, only ‘sag’ and ‘sparse_cg’ support sparse input when fit_intercept is True. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver. random_stateint, RandomState instance, default=None Used when solver == ‘sag’ or ‘saga’ to shuffle the data. 
See Glossary for details. New in version 0.17: random_state to support Stochastic Average Gradient. Attributes coef_ndarray of shape (n_features,) or (n_targets, n_features) Weight vector(s). intercept_float or ndarray of shape (n_targets,) Independent term in decision function. Set to 0.0 if fit_intercept = False. n_iter_None or ndarray of shape (n_targets,) Actual number of iterations for each target. Available only for sag and lsqr solvers. Other solvers will return None. New in version 0.17. See also RidgeClassifier Ridge classifier. RidgeCV Ridge regression with built-in cross validation. KernelRidge Kernel ridge regression combines ridge regression with the kernel trick. Examples >>> from sklearn.linear_model import Ridge >>> import numpy as np >>> n_samples, n_features = 10, 5 >>> rng = np.random.RandomState(0) >>> y = rng.randn(n_samples) >>> X = rng.randn(n_samples, n_features) >>> clf = Ridge(alpha=1.0) >>> clf.fit(X, y) Ridge() Methods fit(X, y[, sample_weight]) Fit Ridge regression model. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit Ridge regression model. Parameters X{ndarray, sparse matrix} of shape (n_samples, n_features) Training data yndarray of shape (n_samples,) or (n_samples, n_targets) Target values sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. Returns selfreturns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. 
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.Ridge Compressive sensing: tomography reconstruction with L1 prior (Lasso) Prediction Latency Plot Ridge coefficients as a function of the regularization Ordinary Least Squares and Ridge Regression Variance Plot Ridge coefficients as a function of the L2 regularization Polynomial interpolation HuberRegressor vs Ridge on dataset with strong outliers Poisson regression and non-normal loss Common pitfalls in interpretation of coefficients of linear models
sklearn.modules.generated.sklearn.linear_model.ridge
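To make the objective concrete, a small sketch (assuming fit_intercept=False, so the closed-form normal equations (X^T X + alpha * I) w = X^T y apply directly):

>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> rng = np.random.RandomState(0)
>>> X, y = rng.randn(50, 5), rng.randn(50)
>>> alpha = 1.0
>>> reg = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
>>> w = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)
>>> np.allclose(reg.coef_, w)  # solver result matches the closed form
True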
fit(X, y, sample_weight=None) [source] Fit Ridge regression model. Parameters X{ndarray, sparse matrix} of shape (n_samples, n_features) Training data yndarray of shape (n_samples,) or (n_samples, n_targets) Target values sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. Returns selfreturns an instance of self.
sklearn.modules.generated.sklearn.linear_model.ridge#sklearn.linear_model.Ridge.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.ridge#sklearn.linear_model.Ridge.get_params
predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.ridge#sklearn.linear_model.Ridge.predict
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
sklearn.modules.generated.sklearn.linear_model.ridge#sklearn.linear_model.Ridge.score
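A short sketch reproducing the R^2 definition used by score from its residual and total sums of squares:

>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> rng = np.random.RandomState(0)
>>> X, y = rng.randn(30, 4), rng.randn(30)
>>> reg = Ridge().fit(X, y)
>>> u = ((y - reg.predict(X)) ** 2).sum()  # residual sum of squares
>>> v = ((y - y.mean()) ** 2).sum()        # total sum of squares
>>> np.isclose(reg.score(X, y), 1 - u / v)
True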
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.ridge#sklearn.linear_model.Ridge.set_params
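A sketch of the nested <component>__<parameter> form (the pipeline below is an illustrative wrapper, not part of Ridge itself):

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import Ridge
>>> pipe = Pipeline([("scale", StandardScaler()), ("ridge", Ridge())])
>>> pipe = pipe.set_params(ridge__alpha=0.5)
>>> pipe.get_params()["ridge__alpha"]
0.5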
class sklearn.linear_model.RidgeClassifier(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, class_weight=None, solver='auto', random_state=None) [source] Classifier using Ridge regression. This classifier first converts the target values into {-1, 1} and then treats the problem as a regression task (multi-output regression in the multiclass case). Read more in the User Guide. Parameters alphafloat, default=1.0 Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (e.g. data is expected to be already centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. max_iterint, default=None Maximum number of iterations for conjugate gradient solver. The default value is determined by scipy.sparse.linalg. tolfloat, default=1e-3 Precision of the solution. class_weightdict or ‘balanced’, default=None Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). solver{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}, default=’auto’ Solver to use in the computational routines: ‘auto’ chooses the solver automatically based on the type of data. ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for singular matrices than ‘cholesky’. ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution. ‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set tol and max_iter). ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure. ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its unbiased and more flexible version named SAGA. Both methods use an iterative procedure, and are often faster than other solvers when both n_samples and n_features are large. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver. random_stateint, RandomState instance, default=None Used when solver == ‘sag’ or ‘saga’ to shuffle the data. See Glossary for details. Attributes coef_ndarray of shape (1, n_features) or (n_classes, n_features) Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary. 
intercept_float or ndarray of shape (n_targets,) Independent term in decision function. Set to 0.0 if fit_intercept = False. n_iter_None or ndarray of shape (n_targets,) Actual number of iterations for each target. Available only for sag and lsqr solvers. Other solvers will return None. classes_ndarray of shape (n_classes,) The class labels. See also Ridge Ridge regression. RidgeClassifierCV Ridge classifier with built-in cross validation. Notes For multi-class classification, n_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multi-variate response support in Ridge. Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.linear_model import RidgeClassifier >>> X, y = load_breast_cancer(return_X_y=True) >>> clf = RidgeClassifier().fit(X, y) >>> clf.score(X, y) 0.9595... Methods decision_function(X) Predict confidence scores for samples. fit(X, y[, sample_weight]) Fit Ridge classifier model. get_params([deep]) Get parameters for this estimator. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. fit(X, y, sample_weight=None) [source] Fit Ridge classifier model. Parameters X{ndarray, sparse matrix} of shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. New in version 0.17: sample_weight support for the classifier. Returns selfobject Instance of the estimator. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. 
Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier
sklearn.linear_model.RidgeClassifier class sklearn.linear_model.RidgeClassifier(alpha=1.0, *, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, class_weight=None, solver='auto', random_state=None) [source] Classifier using Ridge regression. This classifier first converts the target values into {-1, 1} and then treats the problem as a regression task (multi-output regression in the multiclass case). Read more in the User Guide. Parameters alphafloat, default=1.0 Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (e.g. data is expected to be already centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. max_iterint, default=None Maximum number of iterations for conjugate gradient solver. The default value is determined by scipy.sparse.linalg. tolfloat, default=1e-3 Precision of the solution. class_weightdict or ‘balanced’, default=None Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). solver{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}, default=’auto’ Solver to use in the computational routines: ‘auto’ chooses the solver automatically based on the type of data. ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for singular matrices than ‘cholesky’. ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution. ‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set tol and max_iter). ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure. ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its unbiased and more flexible version named SAGA. Both methods use an iterative procedure, and are often faster than other solvers when both n_samples and n_features are large. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver. random_stateint, RandomState instance, default=None Used when solver == ‘sag’ or ‘saga’ to shuffle the data. See Glossary for details. Attributes coef_ndarray of shape (1, n_features) or (n_classes, n_features) Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary. 
intercept_float or ndarray of shape (n_targets,) Independent term in decision function. Set to 0.0 if fit_intercept = False. n_iter_None or ndarray of shape (n_targets,) Actual number of iterations for each target. Available only for sag and lsqr solvers. Other solvers will return None. classes_ndarray of shape (n_classes,) The class labels. See also Ridge Ridge regression. RidgeClassifierCV Ridge classifier with built-in cross validation. Notes For multi-class classification, n_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multi-variate response support in Ridge. Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.linear_model import RidgeClassifier >>> X, y = load_breast_cancer(return_X_y=True) >>> clf = RidgeClassifier().fit(X, y) >>> clf.score(X, y) 0.9595... Methods decision_function(X) Predict confidence scores for samples. fit(X, y[, sample_weight]) Fit Ridge classifier model. get_params([deep]) Get parameters for this estimator. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. fit(X, y, sample_weight=None) [source] Fit Ridge classifier model. Parameters X{ndarray, sparse matrix} of shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. New in version 0.17: sample_weight support to Classifier. Returns selfobject Instance of the estimator. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. 
Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.RidgeClassifier Classification of text documents using sparse features
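To make the binary decision_function convention above concrete, here is a minimal illustrative sketch on synthetic data (the data are an assumption, not taken from the scikit-learn documentation); scores above 0 correspond to predictions of classes_[1]:
>>> import numpy as np
>>> from sklearn.linear_model import RidgeClassifier
>>> X = np.array([[0.0], [1.0], [2.0], [3.0]])
>>> y = np.array([0, 0, 1, 1])
>>> clf = RidgeClassifier(alpha=1.0).fit(X, y)
>>> scores = clf.decision_function(X)  # shape (n_samples,) in the binary case
>>> np.array_equal(clf.predict(X), clf.classes_[(scores > 0).astype(int)])
True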
sklearn.modules.generated.sklearn.linear_model.ridgeclassifier
decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
sklearn.modules.generated.sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier.decision_function
fit(X, y, sample_weight=None) [source] Fit Ridge classifier model. Parameters X{ndarray, sparse matrix} of shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. New in version 0.17: sample_weight support to Classifier. Returns selfobject Instance of the estimator.
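An illustrative sketch of fit with per-sample weights (synthetic data; the weights are arbitrary assumptions). Passing a float such as sample_weight=2.0 would instead weight every sample identically:
>>> import numpy as np
>>> from sklearn.linear_model import RidgeClassifier
>>> X = np.array([[0.0], [1.0], [2.0], [3.0]])
>>> y = np.array([0, 0, 1, 1])
>>> clf = RidgeClassifier().fit(X, y, sample_weight=np.array([1.0, 1.0, 2.0, 2.0]))
>>> clf.coef_.shape  # (1, n_features) for a binary problem
(1, 1)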
sklearn.modules.generated.sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
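For instance, on an unfitted estimator get_params simply echoes the constructor defaults shown in the class signature above:
>>> from sklearn.linear_model import RidgeClassifier
>>> params = RidgeClassifier().get_params()
>>> params['alpha'], params['solver']
(1.0, 'auto')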
sklearn.modules.generated.sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier.get_params
predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample.
sklearn.modules.generated.sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier.predict
score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y.
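Since this score is plain accuracy, it agrees with sklearn.metrics.accuracy_score; a quick illustrative check on the breast-cancer data used in the class example above:
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import RidgeClassifier
>>> from sklearn.metrics import accuracy_score
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = RidgeClassifier().fit(X, y)
>>> clf.score(X, y) == accuracy_score(y, clf.predict(X))
True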
sklearn.modules.generated.sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
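The <component>__<parameter> convention is easiest to see on a small pipeline (an illustrative sketch; the step names here are arbitrary):
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import RidgeClassifier
>>> pipe = Pipeline([('scaler', StandardScaler()), ('clf', RidgeClassifier())])
>>> pipe = pipe.set_params(clf__alpha=10.0)  # reaches into the 'clf' step
>>> pipe.get_params()['clf__alpha']
10.0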
sklearn.modules.generated.sklearn.linear_model.ridgeclassifier#sklearn.linear_model.RidgeClassifier.set_params
sklearn.linear_model.RidgeClassifierCV class sklearn.linear_model.RidgeClassifierCV(alphas=(0.1, 1.0, 10.0), *, fit_intercept=True, normalize=False, scoring=None, cv=None, class_weight=None, store_cv_values=False) [source] Ridge classifier with built-in cross-validation. See glossary entry for cross-validation estimator. By default, it performs Leave-One-Out Cross-Validation. Currently, only the n_features > n_samples case is handled efficiently. Read more in the User Guide. Parameters alphasndarray of shape (n_alphas,), default=(0.1, 1.0, 10.0) Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. scoringstring, callable, default=None A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y). cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the efficient Leave-One-Out cross-validation integer, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. Refer to the User Guide for the various cross-validation strategies that can be used here. class_weightdict or ‘balanced’, default=None Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). store_cv_valuesbool, default=False Flag indicating if the cross-validation values corresponding to each alpha should be stored in the cv_values_ attribute (see below). This flag is only compatible with cv=None (i.e. using Leave-One-Out Cross-Validation). Attributes cv_values_ndarray of shape (n_samples, n_targets, n_alphas), optional Cross-validation values for each alpha (if store_cv_values=True and cv=None). After fit() has been called, this attribute will contain the mean squared errors (by default) or the values of the {loss,score}_func function (if provided in the constructor). This attribute exists only when store_cv_values is True. coef_ndarray of shape (1, n_features) or (n_targets, n_features) Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary. intercept_float or ndarray of shape (n_targets,) Independent term in decision function. Set to 0.0 if fit_intercept = False. alpha_float Estimated regularization parameter. best_score_float Score of base estimator with best alpha. New in version 0.23. classes_ndarray of shape (n_classes,) The class labels. See also Ridge Ridge regression. RidgeClassifier Ridge classifier. RidgeCV Ridge regression with built-in cross validation. 
Notes For multi-class classification, n_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multi-variate response support in Ridge. Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.linear_model import RidgeClassifierCV >>> X, y = load_breast_cancer(return_X_y=True) >>> clf = RidgeClassifierCV(alphas=[1e-3, 1e-2, 1e-1, 1]).fit(X, y) >>> clf.score(X, y) 0.9630... Methods decision_function(X) Predict confidence scores for samples. fit(X, y[, sample_weight]) Fit Ridge classifier with cv. get_params([deep]) Get parameters for this estimator. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. fit(X, y, sample_weight=None) [source] Fit Ridge classifier with cv. Parameters Xndarray of shape (n_samples, n_features) Training vectors, where n_samples is the number of samples and n_features is the number of features. When using GCV, will be cast to float64 if necessary. yndarray of shape (n_samples,) Target values. Will be cast to X’s dtype if necessary. sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. Returns selfobject get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
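An illustrative sketch of the store_cv_values flag and the selected alpha_ (assuming the documented (n_samples, n_targets, n_alphas) layout, with n_targets equal to 1 for a binary problem; the alpha grid is arbitrary):
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import RidgeClassifierCV
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0], store_cv_values=True).fit(X, y)
>>> clf.cv_values_.shape == (X.shape[0], 1, 3)
True
>>> clf.alpha_ in (0.1, 1.0, 10.0)  # the chosen alpha comes from the candidate list
True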
sklearn.modules.generated.sklearn.linear_model.ridgeclassifiercv
decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
sklearn.modules.generated.sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV.decision_function
fit(X, y, sample_weight=None) [source] Fit Ridge classifier with cv. Parameters Xndarray of shape (n_samples, n_features) Training vectors, where n_samples is the number of samples and n_features is the number of features. When using GCV, will be cast to float64 if necessary. yndarray of shape (n_samples,) Target values. Will be cast to X’s dtype if necessary. sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. Returns selfobject
sklearn.modules.generated.sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV.get_params
predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample.
sklearn.modules.generated.sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV.predict
score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y.
sklearn.modules.generated.sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.ridgeclassifiercv#sklearn.linear_model.RidgeClassifierCV.set_params
sklearn.linear_model.RidgeCV class sklearn.linear_model.RidgeCV(alphas=(0.1, 1.0, 10.0), *, fit_intercept=True, normalize=False, scoring=None, cv=None, gcv_mode=None, store_cv_values=False, alpha_per_target=False) [source] Ridge regression with built-in cross-validation. See glossary entry for cross-validation estimator. By default, it performs efficient Leave-One-Out Cross-Validation. Read more in the User Guide. Parameters alphasndarray of shape (n_alphas,), default=(0.1, 1.0, 10.0) Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If using Leave-One-Out cross-validation, alphas must be positive. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. scoringstring, callable, default=None A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y). If None, the negative mean squared error is used if cv is ‘auto’ or None (i.e. when using leave-one-out cross-validation), and r2 score otherwise. cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the efficient Leave-One-Out cross-validation integer, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, if y is binary or multiclass, StratifiedKFold is used, else, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. gcv_mode{‘auto’, ‘svd’, ‘eigen’}, default=’auto’ Flag indicating which strategy to use when performing Leave-One-Out Cross-Validation. Options are: 'auto' : use 'svd' if n_samples > n_features, otherwise use 'eigen' 'svd' : force use of singular value decomposition of X when X is dense, eigenvalue decomposition of X^T.X when X is sparse. 'eigen' : force computation via eigendecomposition of X.X^T The ‘auto’ mode is the default and is intended to pick the cheaper option of the two depending on the shape of the training data. store_cv_valuesbool, default=False Flag indicating if the cross-validation values corresponding to each alpha should be stored in the cv_values_ attribute (see below). This flag is only compatible with cv=None (i.e. using Leave-One-Out Cross-Validation). alpha_per_targetbool, default=False Flag indicating whether to optimize the alpha value (picked from the alphas parameter list) for each target separately (for multi-output settings: multiple prediction targets). When set to True, after fitting, the alpha_ attribute will contain a value for each target. When set to False, a single alpha is used for all targets. New in version 0.24. Attributes cv_values_ndarray of shape (n_samples, n_alphas) or shape (n_samples, n_targets, n_alphas), optional Cross-validation values for each alpha (only available if store_cv_values=True and cv=None). 
After fit() has been called, this attribute will contain the mean squared errors (by default) or the values of the {loss,score}_func function (if provided in the constructor). coef_ndarray of shape (n_features) or (n_targets, n_features) Weight vector(s). intercept_float or ndarray of shape (n_targets,) Independent term in decision function. Set to 0.0 if fit_intercept = False. alpha_float or ndarray of shape (n_targets,) Estimated regularization parameter, or, if alpha_per_target=True, the estimated regularization parameter for each target. best_score_float or ndarray of shape (n_targets,) Score of base estimator with best alpha, or, if alpha_per_target=True, a score for each target. New in version 0.23. See also Ridge Ridge regression. RidgeClassifier Ridge classifier. RidgeClassifierCV Ridge classifier with built-in cross validation. Examples >>> from sklearn.datasets import load_diabetes >>> from sklearn.linear_model import RidgeCV >>> X, y = load_diabetes(return_X_y=True) >>> clf = RidgeCV(alphas=[1e-3, 1e-2, 1e-1, 1]).fit(X, y) >>> clf.score(X, y) 0.5166... Methods fit(X, y[, sample_weight]) Fit Ridge regression model with cv. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y, sample_weight=None) [source] Fit Ridge regression model with cv. Parameters Xndarray of shape (n_samples, n_features) Training data. If using GCV, will be cast to float64 if necessary. yndarray of shape (n_samples,) or (n_samples, n_targets) Target values. Will be cast to X’s dtype if necessary. sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. Returns selfobject Notes When sample_weight is provided, the selected hyperparameter may depend on whether we use leave-one-out cross-validation (cv=None or cv=’auto’) or another form of cross-validation, because only leave-one-out cross-validation takes the sample weights into account when computing the validation score. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. 
yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.RidgeCV Combine predictors using stacking Common pitfalls in interpretation of coefficients of linear models Face completion with a multi-output estimators Effect of transforming the targets in regression model
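To illustrate the alpha_per_target flag described above, a sketch on synthetic multi-output data (assumes scikit-learn >= 0.24, where the flag was added; the data are arbitrary):
>>> import numpy as np
>>> from sklearn.linear_model import RidgeCV
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(50, 4)
>>> Y = rng.randn(50, 2)  # two prediction targets
>>> reg = RidgeCV(alphas=[0.1, 1.0, 10.0], alpha_per_target=True).fit(X, Y)
>>> reg.alpha_.shape  # one selected alpha per target
(2,)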
sklearn.modules.generated.sklearn.linear_model.ridgecv
fit(X, y, sample_weight=None) [source] Fit Ridge regression model with cv. Parameters Xndarray of shape (n_samples, n_features) Training data. If using GCV, will be cast to float64 if necessary. yndarray of shape (n_samples,) or (n_samples, n_targets) Target values. Will be cast to X’s dtype if necessary. sample_weightfloat or ndarray of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. Returns selfobject Notes When sample_weight is provided, the selected hyperparameter may depend on whether we use leave-one-out cross-validation (cv=None or cv=’auto’) or another form of cross-validation, because only leave-one-out cross-validation takes the sample weights into account when computing the validation score.
sklearn.modules.generated.sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV.fit
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV.get_params
predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values.
sklearn.modules.generated.sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV.predict
score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
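Because this is exactly the r2_score of the predictions, the following illustrative check (reusing the diabetes data from the class example) holds:
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import RidgeCV
>>> from sklearn.metrics import r2_score
>>> X, y = load_diabetes(return_X_y=True)
>>> reg = RidgeCV(alphas=[1e-3, 1e-2, 1e-1, 1]).fit(X, y)
>>> reg.score(X, y) == r2_score(y, reg.predict(X))
True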
sklearn.modules.generated.sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV.score
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.linear_model.ridgecv#sklearn.linear_model.RidgeCV.set_params
sklearn.linear_model.ridge_regression(X, y, alpha, *, sample_weight=None, solver='auto', max_iter=None, tol=0.001, verbose=0, random_state=None, return_n_iter=False, return_intercept=False, check_input=True) [source] Solve the ridge equation by the method of normal equations. Read more in the User Guide. Parameters X{ndarray, sparse matrix, LinearOperator} of shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) or (n_samples, n_targets) Target values. alphafloat or array-like of shape (n_targets,) Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number. sample_weightfloat or array-like of shape (n_samples,), default=None Individual weights for each sample. If given a float, every sample will have the same weight. If sample_weight is not None and solver=’auto’, the solver will be set to ‘cholesky’. New in version 0.17. solver{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}, default=’auto’ Solver to use in the computational routines: ‘auto’ chooses the solver automatically based on the type of data. ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for singular matrices than ‘cholesky’. ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution via a Cholesky decomposition of dot(X.T, X). ‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set tol and max_iter). ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure. ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure, and are often faster than other solvers when both n_samples and n_features are large. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. The last five solvers all support both dense and sparse data. However, only ‘sag’ and ‘sparse_cg’ support sparse input when fit_intercept is True. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver. max_iterint, default=None Maximum number of iterations for conjugate gradient solver. For the ‘sparse_cg’ and ‘lsqr’ solvers, the default value is determined by scipy.sparse.linalg. For the ‘sag’ and ‘saga’ solvers, the default value is 1000. tolfloat, default=1e-3 Precision of the solution. verboseint, default=0 Verbosity level. Setting verbose > 0 will display additional information depending on the solver used. random_stateint, RandomState instance, default=None Used when solver == ‘sag’ or ‘saga’ to shuffle the data. See Glossary for details. return_n_iterbool, default=False If True, the method also returns n_iter, the actual number of iterations performed by the solver. New in version 0.17. return_interceptbool, default=False If True and if X is sparse, the method also returns the intercept, and the solver is automatically changed to ‘sag’. 
This is only a temporary fix for fitting the intercept with sparse data. For dense data, use sklearn.linear_model._preprocess_data before your regression. New in version 0.17. check_inputbool, default=True If False, the input arrays X and y will not be checked. New in version 0.21. Returns coefndarray of shape (n_features,) or (n_targets, n_features) Weight vector(s). n_iterint, optional The actual number of iterations performed by the solver. Only returned if return_n_iter is True. interceptfloat or ndarray of shape (n_targets,) The intercept of the model. Only returned if return_intercept is True and if X is a scipy sparse array. Notes This function won’t compute the intercept.
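A minimal sketch of calling the function directly (roughly centered synthetic data; per the Notes above, no intercept is computed, so the data should already be centered):
>>> import numpy as np
>>> from sklearn.linear_model import ridge_regression
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(20, 3)
>>> y = X @ np.array([1.0, -2.0, 0.5])
>>> coef = ridge_regression(X, y, alpha=1.0)
>>> coef.shape  # (n_features,) for a single target
(3,)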
sklearn.modules.generated.sklearn.linear_model.ridge_regression#sklearn.linear_model.ridge_regression
class sklearn.linear_model.SGDClassifier(loss='hinge', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, n_jobs=None, random_state=None, learning_rate='optimal', eta0=0.0, power_t=0.5, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False, average=False) [source] Linear classifiers (SVM, logistic regression, etc.) with SGD training. This estimator implements regularized linear models with stochastic gradient descent (SGD) learning: the gradient of the loss is estimated one sample at a time and the model is updated along the way with a decreasing strength schedule (aka learning rate). SGD allows minibatch (online/out-of-core) learning via the partial_fit method. For best results using the default learning rate schedule, the data should have zero mean and unit variance. This implementation works with data represented as dense or sparse arrays of floating point values for the features. The model it fits can be controlled with the loss parameter; by default, it fits a linear support vector machine (SVM). The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared Euclidean norm L2 or the absolute norm L1 or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and achieve online feature selection. Read more in the User Guide. Parameters lossstr, default=’hinge’ The loss function to be used. Defaults to ‘hinge’, which gives a linear SVM. The possible options are ‘hinge’, ‘log’, ‘modified_huber’, ‘squared_hinge’, ‘perceptron’, or a regression loss: ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. The ‘log’ loss gives logistic regression, a probabilistic classifier. ‘modified_huber’ is another smooth loss that brings tolerance to outliers as well as probability estimates. ‘squared_hinge’ is like hinge but is quadratically penalized. ‘perceptron’ is the linear loss used by the perceptron algorithm. The other losses are designed for regression but can be useful in classification as well; see SGDRegressor for a description. More details about the loss formulas can be found in the User Guide. penalty{‘l2’, ‘l1’, ‘elasticnet’}, default=’l2’ The penalty (aka regularization term) to be used. Defaults to ‘l2’ which is the standard regularizer for linear SVM models. ‘l1’ and ‘elasticnet’ might bring sparsity to the model (feature selection) not achievable with ‘l2’. alphafloat, default=0.0001 Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to ‘optimal’. l1_ratiofloat, default=0.15 The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1. Only used if penalty is ‘elasticnet’. fit_interceptbool, default=True Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. max_iterint, default=1000 The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19. tolfloat, default=1e-3 The stopping criterion. If it is not None, training will stop when (loss > best_loss - tol) for n_iter_no_change consecutive epochs. 
New in version 0.19. shufflebool, default=True Whether or not the training data should be shuffled after each epoch. verboseint, default=0 The verbosity level. epsilonfloat, default=0.1 Epsilon in the epsilon-insensitive loss functions; only if loss is ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. For ‘huber’, determines the threshold at which it becomes less important to get the prediction exactly right. For epsilon-insensitive, any differences between the current prediction and the correct label are ignored if they are less than this threshold. n_jobsint, default=None The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. random_stateint, RandomState instance, default=None Used for shuffling the data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary. learning_ratestr, default=’optimal’ The learning rate schedule: ‘constant’: eta = eta0 ‘optimal’: eta = 1.0 / (alpha * (t + t0)) where t0 is chosen by a heuristic proposed by Leon Bottou. ‘invscaling’: eta = eta0 / pow(t, power_t) ‘adaptive’: eta = eta0, as long as the training loss keeps decreasing. Each time n_iter_no_change consecutive epochs fail to decrease the training loss by tol or fail to increase validation score by tol if early_stopping is True, the current learning rate is divided by 5. New in version 0.20: Added ‘adaptive’ option eta0double, default=0.0 The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.0 as eta0 is not used by the default schedule ‘optimal’. power_tdouble, default=0.5 The exponent for inverse scaling learning rate [default 0.5]. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a stratified fraction of training data as validation and terminate training when validation score returned by the score method is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20: Added ‘early_stopping’ option validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20: Added ‘validation_fraction’ option n_iter_no_changeint, default=5 Number of iterations with no improvement to wait before early stopping. New in version 0.20: Added ‘n_iter_no_change’ option class_weightdict, {class_label: weight} or “balanced”, default=None Preset for the class_weight fit parameter. Weights associated with classes. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. 
Calling fit resets this counter, while partial_fit will result in increasing the existing counter. averagebool or int, default=False When set to True, computes the averaged SGD weights accross all updates and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. Attributes coef_ndarray of shape (1, n_features) if n_classes == 2 else (n_classes, n_features) Weights assigned to the features. intercept_ndarray of shape (1,) if n_classes == 2 else (n_classes,) Constants in decision function. n_iter_int The actual number of iterations before reaching the stopping criterion. For multiclass fits, it is the maximum over every binary fit. loss_function_concrete LossFunction classes_array of shape (n_classes,) t_int Number of weight updates performed during training. Same as (n_iter_ * n_samples). See also sklearn.svm.LinearSVC Linear support vector classification. LogisticRegression Logistic regression. Perceptron Inherits from SGDClassifier. Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None). Examples >>> import numpy as np >>> from sklearn.linear_model import SGDClassifier >>> from sklearn.preprocessing import StandardScaler >>> from sklearn.pipeline import make_pipeline >>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]]) >>> Y = np.array([1, 1, 2, 2]) >>> # Always scale the input. The most convenient way is to use a pipeline. >>> clf = make_pipeline(StandardScaler(), ... SGDClassifier(max_iter=1000, tol=1e-3)) >>> clf.fit(X, Y) Pipeline(steps=[('standardscaler', StandardScaler()), ('sgdclassifier', SGDClassifier())]) >>> print(clf.predict([[-0.8, -1]])) [1] Methods decision_function(X) Predict confidence scores for samples. densify() Convert coefficient matrix to dense array format. fit(X, y[, coef_init, intercept_init, …]) Fit linear model with Stochastic Gradient Descent. get_params([deep]) Get parameters for this estimator. partial_fit(X, y[, classes, sample_weight]) Perform one epoch of stochastic gradient descent on given samples. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**kwargs) Set and validate the parameters of estimator. sparsify() Convert coefficient matrix to sparse format. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. 
coef_initndarray of shape (n_classes, n_features), default=None The initial coefficients to warm-start the optimization. intercept_initndarray of shape (n_classes,), default=None The initial intercept to warm-start the optimization. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class_weight (passed through the constructor) if class_weight is specified. Returns self : Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. partial_fit(X, y, classes=None, sample_weight=None) [source] Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Subset of the training data. yndarray of shape (n_samples,) Subset of the target values. classesndarray of shape (n_classes,), default=None Classes across all calls to partial_fit. Can be obtained by via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. Returns self : Returns an instance of self. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. property predict_log_proba Log of probability estimates. This method is only available for log loss and modified Huber loss. When loss=”modified_huber”, probability estimates may be hard zeros and ones, so taking the logarithm is not possible. See predict_proba for details. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Input data for prediction. Returns Tarray-like, shape (n_samples, n_classes) Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. property predict_proba Probability estimates. This method is only available for log loss and modified Huber loss. Multiclass probability estimates are derived from binary (one-vs.-rest) estimates by simple normalization, as recommended by Zadrozny and Elkan. Binary probability estimates for loss=”modified_huber” are given by (clip(decision_function(X), -1, 1) + 1) / 2. For other loss functions it is necessary to perform proper probability calibration by wrapping the classifier with CalibratedClassifierCV instead. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Input data for prediction. Returns ndarray of shape (n_samples, n_classes) Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. 
References Zadrozny and Elkan, “Transforming classifier scores into multiclass probability estimates”, SIGKDD’02, http://www.research.ibm.com/people/z/zadrozny/kdd2002-Transf.pdf The justification for the formula in the loss=”modified_huber” case is in the appendix B in: http://jmlr.csail.mit.edu/papers/volume2/zhang02c/zhang02c.pdf score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance. sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier
sklearn.linear_model.SGDClassifier
class sklearn.linear_model.SGDClassifier(loss='hinge', *, penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=1000, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, n_jobs=None, random_state=None, learning_rate='optimal', eta0=0.0, power_t=0.5, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False, average=False) [source]
Linear classifiers (SVM, logistic regression, etc.) with SGD training. This estimator implements regularized linear models with stochastic gradient descent (SGD) learning: the gradient of the loss is estimated one sample at a time and the model is updated along the way with a decreasing strength schedule (aka learning rate). SGD allows minibatch (online/out-of-core) learning via the partial_fit method. For best results using the default learning rate schedule, the data should have zero mean and unit variance. This implementation works with data represented as dense or sparse arrays of floating point values for the features. The model it fits can be controlled with the loss parameter; by default, it fits a linear support vector machine (SVM). The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared Euclidean norm L2, the absolute norm L1, or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and achieve online feature selection. Read more in the User Guide.
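As a hedged illustration of the truncation behavior just described, together with the penalty, alpha, and l1_ratio parameters documented below (synthetic data and arbitrary values, not recommendations), an elastic-net penalty can drive some coefficients exactly to zero:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic data: only 5 of the 20 features are informative.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
X = StandardScaler().fit_transform(X)  # zero mean / unit variance, as advised

clf = SGDClassifier(loss="hinge", penalty="elasticnet", alpha=1e-3,
                    l1_ratio=0.5, random_state=0).fit(X, y)

# The L1 component can truncate updates that cross 0.0, so some
# coefficients may end up exactly zero (online feature selection).
print("zero coefficients:", int((clf.coef_ == 0).sum()), "of", clf.coef_.size)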
If it is not None, training will stop when (loss > best_loss - tol) for n_iter_no_change consecutive epochs. New in version 0.19. shufflebool, default=True Whether or not the training data should be shuffled after each epoch. verboseint, default=0 The verbosity level. epsilonfloat, default=0.1 Epsilon in the epsilon-insensitive loss functions; only if loss is ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. For ‘huber’, determines the threshold at which it becomes less important to get the prediction exactly right. For epsilon-insensitive, any differences between the current prediction and the correct label are ignored if they are less than this threshold. n_jobsint, default=None The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. random_stateint, RandomState instance, default=None Used for shuffling the data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary. learning_ratestr, default=’optimal’ The learning rate schedule: ‘constant’: eta = eta0 ‘optimal’: eta = 1.0 / (alpha * (t + t0)) where t0 is chosen by a heuristic proposed by Leon Bottou. ‘invscaling’: eta = eta0 / pow(t, power_t) ‘adaptive’: eta = eta0, as long as the training keeps decreasing. Each time n_iter_no_change consecutive epochs fail to decrease the training loss by tol or fail to increase validation score by tol if early_stopping is True, the current learning rate is divided by 5. New in version 0.20: Added ‘adaptive’ option eta0double, default=0.0 The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.0 as eta0 is not used by the default schedule ‘optimal’. power_tdouble, default=0.5 The exponent for inverse scaling learning rate [default 0.5]. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a stratified fraction of training data as validation and terminate training when validation score returned by the score method is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20: Added ‘early_stopping’ option validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20: Added ‘validation_fraction’ option n_iter_no_changeint, default=5 Number of iterations with no improvement to wait before early stopping. New in version 0.20: Added ‘n_iter_no_change’ option class_weightdict, {class_label: weight} or “balanced”, default=None Preset for the class_weight fit parameter. Weights associated with classes. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. 
If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. Calling fit resets this counter, while partial_fit will result in increasing the existing counter. averagebool or int, default=False When set to True, computes the averaged SGD weights accross all updates and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. Attributes coef_ndarray of shape (1, n_features) if n_classes == 2 else (n_classes, n_features) Weights assigned to the features. intercept_ndarray of shape (1,) if n_classes == 2 else (n_classes,) Constants in decision function. n_iter_int The actual number of iterations before reaching the stopping criterion. For multiclass fits, it is the maximum over every binary fit. loss_function_concrete LossFunction classes_array of shape (n_classes,) t_int Number of weight updates performed during training. Same as (n_iter_ * n_samples). See also sklearn.svm.LinearSVC Linear support vector classification. LogisticRegression Logistic regression. Perceptron Inherits from SGDClassifier. Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None). Examples >>> import numpy as np >>> from sklearn.linear_model import SGDClassifier >>> from sklearn.preprocessing import StandardScaler >>> from sklearn.pipeline import make_pipeline >>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]]) >>> Y = np.array([1, 1, 2, 2]) >>> # Always scale the input. The most convenient way is to use a pipeline. >>> clf = make_pipeline(StandardScaler(), ... SGDClassifier(max_iter=1000, tol=1e-3)) >>> clf.fit(X, Y) Pipeline(steps=[('standardscaler', StandardScaler()), ('sgdclassifier', SGDClassifier())]) >>> print(clf.predict([[-0.8, -1]])) [1] Methods decision_function(X) Predict confidence scores for samples. densify() Convert coefficient matrix to dense array format. fit(X, y[, coef_init, intercept_init, …]) Fit linear model with Stochastic Gradient Descent. get_params([deep]) Get parameters for this estimator. partial_fit(X, y[, classes, sample_weight]) Perform one epoch of stochastic gradient descent on given samples. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**kwargs) Set and validate the parameters of estimator. sparsify() Convert coefficient matrix to sparse format. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. 
densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source] Fit linear model with Stochastic Gradient Descent. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Training data. yndarray of shape (n_samples,) Target values. coef_initndarray of shape (n_classes, n_features), default=None The initial coefficients to warm-start the optimization. intercept_initndarray of shape (n_classes,), default=None The initial intercept to warm-start the optimization. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class_weight (passed through the constructor) if class_weight is specified. Returns self Returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. partial_fit(X, y, classes=None, sample_weight=None) [source] Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Subset of the training data. yndarray of shape (n_samples,) Subset of the target values. classesndarray of shape (n_classes,), default=None Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in subsequent calls. Note that y doesn’t need to contain all labels in classes. sample_weightarray-like, shape (n_samples,), default=None Weights applied to individual samples. If not provided, uniform weights are assumed. Returns self Returns an instance of self. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. property predict_log_proba Log of probability estimates. This method is only available for log loss and modified Huber loss. When loss=”modified_huber”, probability estimates may be hard zeros and ones, so taking the logarithm is not possible. See predict_proba for details. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Input data for prediction. Returns Tarray-like, shape (n_samples, n_classes) Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. property predict_proba Probability estimates. This method is only available for log loss and modified Huber loss. Multiclass probability estimates are derived from binary (one-vs.-rest) estimates by simple normalization, as recommended by Zadrozny and Elkan. Binary probability estimates for loss=”modified_huber” are given by (clip(decision_function(X), -1, 1) + 1) / 2. For other loss functions it is necessary to perform proper probability calibration by wrapping the classifier with CalibratedClassifierCV instead. Parameters X{array-like, sparse matrix}, shape (n_samples, n_features) Input data for prediction. Returns ndarray of shape (n_samples, n_classes) Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. References Zadrozny and Elkan, “Transforming classifier scores into multiclass probability estimates”, SIGKDD’02, http://www.research.ibm.com/people/z/zadrozny/kdd2002-Transf.pdf The justification for the formula in the loss=”modified_huber” case is in Appendix B of: http://jmlr.csail.mit.edu/papers/volume2/zhang02c/zhang02c.pdf
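Since probability estimates are only defined for loss=”log” and loss=”modified_huber”, here is a minimal sketch of both cases (tiny illustrative dataset), including the clipping formula above:

import numpy as np
from sklearn.linear_model import SGDClassifier

X = np.array([[-1.0, -1.0], [-2.0, -1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([1, 1, 2, 2])

# 'log' loss yields a probabilistic classifier; rows sum to 1.
clf = SGDClassifier(loss="log", random_state=0).fit(X, y)
proba = clf.predict_proba(X)  # shape (n_samples, n_classes)
assert np.allclose(proba.sum(axis=1), 1.0)

# For binary 'modified_huber', column 1 follows the clipping formula,
# so hard 0/1 probabilities (and hence undefined logs) are possible.
clf_mh = SGDClassifier(loss="modified_huber", random_state=0).fit(X, y)
expected = (np.clip(clf_mh.decision_function(X), -1, 1) + 1) / 2
assert np.allclose(clf_mh.predict_proba(X)[:, 1], expected)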
score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**kwargs) [source] Set and validate the parameters of estimator. Parameters **kwargsdict Estimator parameters. Returns selfobject Estimator instance. sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify (see the sketch after the examples list below).
Examples using sklearn.linear_model.SGDClassifier
Model Complexity Influence
Out-of-core classification of text documents
SGD: Maximum margin separating hyperplane
SGD: convex loss functions
SGD: Penalties
SGD: Weighted samples
Comparing various online solvers
Plot multi-class SGD on the iris dataset
Early stopping of Stochastic Gradient Descent
Explicit feature map approximation for RBF kernels
Comparing randomized search and grid search for hyperparameter estimation
Sample pipeline for text feature extraction and evaluation
Semi-supervised Classification on a Text Dataset
Classification of text documents using sparse features
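To tie the sparsify/densify notes together, here is a hedged sketch of the round trip (arbitrary values, not a benchmark), including the partial_fit constraint noted above:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=50, random_state=0)
clf = SGDClassifier(penalty="l1", alpha=1e-2, random_state=0).fit(X, y)

# Rule of thumb: sparsify only pays off when more than 50% of coef_ is zero.
if (clf.coef_ == 0).mean() > 0.5:
    clf.sparsify()  # coef_ becomes a scipy.sparse matrix

# Further training requires the dense format again.
clf.densify()
clf.partial_fit(X, y, classes=np.unique(y))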
sklearn.modules.generated.sklearn.linear_model.sgdclassifier
decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, the confidence score for self.classes_[1], where >0 means this class would be predicted.
sklearn.modules.generated.sklearn.linear_model.sgdclassifier#sklearn.linear_model.SGDClassifier.decision_function