set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.oas#sklearn.covariance.OAS.set_params |
class sklearn.covariance.ShrunkCovariance(*, store_precision=True, assume_centered=False, shrinkage=0.1) [source]
Covariance estimator with shrinkage. Read more in the User Guide. Parameters
store_precision : bool, default=True
Specify whether the estimated precision is stored.
assume_centered : bool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False, data will be centered before computation.
shrinkage : float, default=0.1
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1]. Attributes
covariance_ : ndarray of shape (n_features, n_features)
Estimated covariance matrix.
location_ : ndarray of shape (n_features,)
Estimated location, i.e. the estimated mean.
precision_ : ndarray of shape (n_features, n_features)
Estimated pseudo-inverse matrix (stored only if store_precision is True). Notes The regularized covariance is given by (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features), where mu = trace(cov) / n_features. Examples >>> import numpy as np
>>> from sklearn.covariance import ShrunkCovariance
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> cov = ShrunkCovariance().fit(X)
>>> cov.covariance_
array([[0.7387..., 0.2536...],
[0.2536..., 0.4110...]])
>>> cov.location_
array([0.0622..., 0.0193...])
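The regularized-covariance formula from the Notes can be checked numerically. This sketch recomputes covariance_ by hand from the empirical covariance, using scikit-learn's empirical_covariance and shrunk_covariance helpers:

```python
import numpy as np
from sklearn.covariance import (ShrunkCovariance, empirical_covariance,
                                shrunk_covariance)

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[.8, .3], [.3, .4]], size=500)

# Maximum-likelihood covariance of the (centered) data.
emp = empirical_covariance(X)

# Regularized estimate: (1 - shrinkage) * cov + shrinkage * mu * I,
# with mu = trace(cov) / n_features.
s = 0.1
mu = np.trace(emp) / emp.shape[0]
manual = (1 - s) * emp + s * mu * np.eye(emp.shape[0])

fitted = ShrunkCovariance(shrinkage=s).fit(X)
assert np.allclose(fitted.covariance_, manual)
assert np.allclose(shrunk_covariance(emp, shrinkage=s), manual)
```

The same convex combination is applied whatever the value of shrinkage; shrinkage=0 recovers the empirical covariance and shrinkage=1 the scaled identity.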
Methods
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fit the shrunk covariance model according to the given training data and parameters.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators (in the sense of the Frobenius norm). Parameters
comp_cov : array-like of shape (n_features, n_features)
The covariance to compare with.
norm : {“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))), where A is the error (comp_cov - self.covariance_).
scaling : bool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squared : bool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
result : float
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
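As a usage sketch, error_norm with its default settings (squared Frobenius norm, scaled by n_features) can be reproduced by hand:

```python
import numpy as np
from sklearn.covariance import ShrunkCovariance

rng = np.random.RandomState(0)
real_cov = np.array([[.8, .3], [.3, .4]])
X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)
cov = ShrunkCovariance().fit(X)

# Defaults: norm='frobenius', scaling=True, squared=True.
err = cov.error_norm(real_cov)

# Manual equivalent: A = comp_cov - covariance_; the squared Frobenius
# norm tr(A^t.A), divided by n_features because scaling=True.
A = real_cov - cov.covariance_
manual = np.trace(A.T @ A) / A.shape[0]
assert np.isclose(err, manual)
```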
fit(X, y=None) [source]
Fit the shrunk covariance model according to the given training data and parameters. Parameters
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
y : Ignored
Not used, present for API consistency by convention. Returns
self : object
The fitted estimator.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_ : array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
X : array-like of shape (n_samples, n_features)
The observations, the Mahalanobis distances of which we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
dist : ndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
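A short sketch of how the returned distances relate to the fitted location_ and precision matrix: each distance is the quadratic form (x - location_) P (x - location_)^T, with P the precision matrix.

```python
import numpy as np
from sklearn.covariance import ShrunkCovariance

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[.8, .3], [.3, .4]], size=500)
cov = ShrunkCovariance().fit(X)

d = cov.mahalanobis(X[:5])

# Manual check: quadratic form of each centered observation with the
# fitted precision matrix.
centered = X[:5] - cov.location_
manual = np.einsum('ij,jk,ik->i', centered, cov.get_precision(), centered)
assert np.allclose(d, manual)
```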
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_test : array-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
y : Ignored
Not used, present for API consistency by convention. Returns
res : float
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
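In scikit-learn this score is the mean per-sample Gaussian log-likelihood of the test set, which can be cross-checked against scipy's density function (a sketch, assuming scipy is available):

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.covariance import ShrunkCovariance

rng = np.random.RandomState(0)
real_cov = [[.8, .3], [.3, .4]]
X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)
X_test = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=100)

cov = ShrunkCovariance().fit(X)
ll = cov.score(X_test)

# Mean log-density of the test samples under a Gaussian with the
# fitted location_ and covariance_.
manual = multivariate_normal.logpdf(
    X_test, mean=cov.location_, cov=cov.covariance_).mean()
assert np.isclose(ll, manual)
```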
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.shrunkcovariance#sklearn.covariance.ShrunkCovariance.set_params |
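A minimal sketch of the nested <component>__<parameter> syntax, here wrapping ShrunkCovariance as the final step of a hypothetical Pipeline (the step names 'scale' and 'cov' are illustrative):

```python
from sklearn.covariance import ShrunkCovariance
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([('scale', StandardScaler()), ('cov', ShrunkCovariance())])

# Nested parameters are addressed as <component>__<parameter>.
pipe.set_params(cov__shrinkage=0.5)
assert pipe.get_params()['cov__shrinkage'] == 0.5

# Simple estimators take their parameters directly.
est = ShrunkCovariance().set_params(shrinkage=0.2)
assert est.shrinkage == 0.2
```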
Examples using sklearn.covariance.ShrunkCovariance
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood
Model selection with Probabilistic PCA and Factor Analysis (FA) | sklearn.modules.generated.sklearn.covariance.shrunkcovariance |
sklearn.covariance.shrunk_covariance(emp_cov, shrinkage=0.1) [source]
Calculates a covariance matrix shrunk on the diagonal. Read more in the User Guide. Parameters
emp_cov : array-like of shape (n_features, n_features)
Covariance matrix to be shrunk.
shrinkage : float, default=0.1
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1]. Returns
shrunk_cov : ndarray of shape (n_features, n_features)
Shrunk covariance. Notes The regularized (shrunk) covariance is given by (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features), where mu = trace(cov) / n_features. | sklearn.modules.generated.sklearn.covariance.shrunk_covariance#sklearn.covariance.shrunk_covariance |
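Unlike the ShrunkCovariance estimator, this function applies the Notes formula directly to an already-computed covariance matrix, as this small check illustrates:

```python
import numpy as np
from sklearn.covariance import shrunk_covariance

cov = np.array([[.8, .3], [.3, .4]])
s = 0.1
mu = np.trace(cov) / cov.shape[0]

# (1 - shrinkage) * cov + shrinkage * mu * I
expected = (1 - s) * cov + s * mu * np.eye(cov.shape[0])
assert np.allclose(shrunk_covariance(cov, shrinkage=s), expected)
```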
class sklearn.cross_decomposition.CCA(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True) [source]
Canonical Correlation Analysis, also known as “Mode B” PLS. Read more in the User Guide. Parameters
n_components : int, default=2
Number of components to keep. Should be in [1, min(n_samples, n_features, n_targets)].
scale : bool, default=True
Whether to scale X and Y.
max_iter : int, default=500
The maximum number of iterations of the power method.
tol : float, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of u_i - u_{i-1} is less than tol, where u corresponds to the left singular vector.
copy : bool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done in place, modifying both arrays. Attributes
x_weights_ : ndarray of shape (n_features, n_components)
The left singular vectors of the cross-covariance matrices of each iteration.
y_weights_ : ndarray of shape (n_targets, n_components)
The right singular vectors of the cross-covariance matrices of each iteration.
x_loadings_ : ndarray of shape (n_features, n_components)
The loadings of X.
y_loadings_ : ndarray of shape (n_targets, n_components)
The loadings of Y.
x_scores_ : ndarray of shape (n_samples, n_components)
The transformed training samples. Deprecated since version 0.24: x_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
y_scores_ : ndarray of shape (n_samples, n_components)
The transformed training targets. Deprecated since version 0.24: y_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
x_rotations_ : ndarray of shape (n_features, n_components)
The projection matrix used to transform X.
y_rotations_ : ndarray of shape (n_targets, n_components)
The projection matrix used to transform Y.
coef_ : ndarray of shape (n_features, n_targets)
The coefficients of the linear model such that Y is approximated as Y = X @ coef_.
n_iter_ : list of shape (n_components,)
Number of iterations of the power method, for each component. See also
PLSCanonical
PLSSVD
Examples >>> from sklearn.cross_decomposition import CCA
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [3.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> cca = CCA(n_components=1)
>>> cca.fit(X, Y)
CCA(n_components=1)
>>> X_c, Y_c = cca.transform(X, Y)
Methods
fit(X, Y) Fit model to data.
fit_transform(X[, y]) Learn and apply the dimension reduction on the train data.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Transform data back to its original space.
predict(X[, copy]) Predict targets of given samples.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
transform(X[, Y, copy]) Apply the dimension reduction.
fit(X, Y) [source]
Fit model to data. Parameters
X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
X : array-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
y : array-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
inverse_transform(X) [source]
Transform data back to its original space. Parameters
X : array-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of PLS components. Returns
x_reconstructed : array-like of shape (n_samples, n_features)
Reconstructed data. Notes This transformation will only be exact if n_components=n_features.
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
X : array-like of shape (n_samples, n_features)
Samples.
copy : bool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
X : array-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. Returns
score : float
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance.
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
X : array-like of shape (n_samples, n_features)
Samples to transform.
Y : array-like of shape (n_samples, n_targets), default=None
Target vectors.
copy : bool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA |
Examples using sklearn.cross_decomposition.CCA
Compare cross decomposition methods
Multilabel classification | sklearn.modules.generated.sklearn.cross_decomposition.cca |
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.fit |
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
yarray-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.get_params |
inverse_transform(X) [source]
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.inverse_transform |
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
Xarray-like of shape (n_samples, n_features)
Samples.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.set_params |
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to transform.
Yarray-like of shape (n_samples, n_targets), default=None
Target vectors.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.cca#sklearn.cross_decomposition.CCA.transform |
class sklearn.cross_decomposition.PLSCanonical(n_components=2, *, scale=True, algorithm='nipals', max_iter=500, tol=1e-06, copy=True) [source]
Partial Least Squares transformer and regressor. Read more in the User Guide. New in version 0.8. Parameters
n_componentsint, default=2
Number of components to keep. Should be in [1, min(n_samples, n_features, n_targets)].
scalebool, default=True
Whether to scale X and Y.
algorithm{‘nipals’, ‘svd’}, default=’nipals’
The algorithm used to estimate the first singular vectors of the cross-covariance matrix. ‘nipals’ uses the power method while ‘svd’ will compute the whole SVD.
max_iterint, default=500
The maximum number of iterations of the power method when algorithm='nipals'. Ignored otherwise.
tolfloat, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of u_i - u_{i-1} is less than tol, where u corresponds to the left singular vector.
copybool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays. Attributes
x_weights_ndarray of shape (n_features, n_components)
The left singular vectors of the cross-covariance matrices of each iteration.
y_weights_ndarray of shape (n_targets, n_components)
The right singular vectors of the cross-covariance matrices of each iteration.
x_loadings_ndarray of shape (n_features, n_components)
The loadings of X.
y_loadings_ndarray of shape (n_targets, n_components)
The loadings of Y.
x_scores_ndarray of shape (n_samples, n_components)
The transformed training samples. Deprecated since version 0.24: x_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
y_scores_ndarray of shape (n_samples, n_components)
The transformed training targets. Deprecated since version 0.24: y_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
x_rotations_ndarray of shape (n_features, n_components)
The projection matrix used to transform X.
y_rotations_ndarray of shape (n_targets, n_components)
The projection matrix used to transform Y.
coef_ndarray of shape (n_features, n_targets)
The coefficients of the linear model such that Y is approximated as Y = X @ coef_.
n_iter_list of shape (n_components,)
Number of iterations of the power method, for each component. Empty if algorithm='svd'. See also
CCA
PLSSVD
Examples >>> from sklearn.cross_decomposition import PLSCanonical
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> plsca = PLSCanonical(n_components=2)
>>> plsca.fit(X, Y)
PLSCanonical()
>>> X_c, Y_c = plsca.transform(X, Y)
Methods
fit(X, Y) Fit model to data.
fit_transform(X[, y]) Learn and apply the dimension reduction on the train data.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Transform data back to its original space.
predict(X[, copy]) Predict targets of given samples.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
transform(X[, Y, copy]) Apply the dimension reduction.
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
yarray-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features.
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
Xarray-like of shape (n_samples, n_features)
Samples.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to transform.
Yarray-like of shape (n_samples, n_targets), default=None
Target vectors.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical |
sklearn.cross_decomposition.PLSCanonical
class sklearn.cross_decomposition.PLSCanonical(n_components=2, *, scale=True, algorithm='nipals', max_iter=500, tol=1e-06, copy=True) [source]
Partial Least Squares transformer and regressor. Read more in the User Guide. New in version 0.8. Parameters
n_componentsint, default=2
Number of components to keep. Should be in [1, min(n_samples, n_features, n_targets)].
scalebool, default=True
Whether to scale X and Y.
algorithm{‘nipals’, ‘svd’}, default=’nipals’
The algorithm used to estimate the first singular vectors of the cross-covariance matrix. ‘nipals’ uses the power method while ‘svd’ will compute the whole SVD.
max_iterint, default=500
The maximum number of iterations of the power method when algorithm='nipals'. Ignored otherwise.
tolfloat, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of u_i - u_{i-1} is less than tol, where u corresponds to the left singular vector.
copybool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays. Attributes
x_weights_ndarray of shape (n_features, n_components)
The left singular vectors of the cross-covariance matrices of each iteration.
y_weights_ndarray of shape (n_targets, n_components)
The right singular vectors of the cross-covariance matrices of each iteration.
x_loadings_ndarray of shape (n_features, n_components)
The loadings of X.
y_loadings_ndarray of shape (n_targets, n_components)
The loadings of Y.
x_scores_ndarray of shape (n_samples, n_components)
The transformed training samples. Deprecated since version 0.24: x_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
y_scores_ndarray of shape (n_samples, n_components)
The transformed training targets. Deprecated since version 0.24: y_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
x_rotations_ndarray of shape (n_features, n_components)
The projection matrix used to transform X.
y_rotations_ndarray of shape (n_targets, n_components)
The projection matrix used to transform Y.
coef_ndarray of shape (n_features, n_targets)
The coefficients of the linear model such that Y is approximated as Y = X @ coef_.
n_iter_list of shape (n_components,)
Number of iterations of the power method, for each component. Empty if algorithm='svd'. See also
CCA
PLSSVD
Examples >>> from sklearn.cross_decomposition import PLSCanonical
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> plsca = PLSCanonical(n_components=2)
>>> plsca.fit(X, Y)
PLSCanonical()
>>> X_c, Y_c = plsca.transform(X, Y)
Methods
fit(X, Y) Fit model to data.
fit_transform(X[, y]) Learn and apply the dimension reduction on the train data.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Transform data back to its original space.
predict(X[, copy]) Predict targets of given samples.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
transform(X[, Y, copy]) Apply the dimension reduction.
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
yarray-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features.
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
Xarray-like of shape (n_samples, n_features)
Samples.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to transform.
Yarray-like of shape (n_samples, n_targets), default=None
Target vectors.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
Examples using sklearn.cross_decomposition.PLSCanonical
Compare cross decomposition methods | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical |
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.fit |
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
yarray-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.get_params |
inverse_transform(X) [source]
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.inverse_transform |
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
Xarray-like of shape (n_samples, n_features)
Samples.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.set_params |
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to transform.
Yarray-like of shape (n_samples, n_targets), default=None
Target vectors.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plscanonical#sklearn.cross_decomposition.PLSCanonical.transform |
class sklearn.cross_decomposition.PLSRegression(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True) [source]
PLS regression. PLSRegression is also known as PLS2 or PLS1, depending on the number of targets. Read more in the User Guide. New in version 0.8. Parameters
n_componentsint, default=2
Number of components to keep. Should be in [1, min(n_samples, n_features, n_targets)].
scalebool, default=True
Whether to scale X and Y.
max_iterint, default=500
The maximum number of iterations of the power method (PLSRegression always uses the NIPALS algorithm; there is no algorithm parameter in its signature).
tolfloat, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of u_i - u_{i-1} is less than tol, where u corresponds to the left singular vector.
copybool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays. Attributes
x_weights_ndarray of shape (n_features, n_components)
The left singular vectors of the cross-covariance matrices of each iteration.
y_weights_ndarray of shape (n_targets, n_components)
The right singular vectors of the cross-covariance matrices of each iteration.
x_loadings_ndarray of shape (n_features, n_components)
The loadings of X.
y_loadings_ndarray of shape (n_targets, n_components)
The loadings of Y.
x_scores_ndarray of shape (n_samples, n_components)
The transformed training samples.
y_scores_ndarray of shape (n_samples, n_components)
The transformed training targets.
x_rotations_ndarray of shape (n_features, n_components)
The projection matrix used to transform X.
y_rotations_ndarray of shape (n_targets, n_components)
The projection matrix used to transform Y.
coef_ndarray of shape (n_features, n_targets)
The coefficients of the linear model such that Y is approximated as Y = X @ coef_.
n_iter_list of shape (n_components,)
Number of iterations of the power method, for each component. Examples >>> from sklearn.cross_decomposition import PLSRegression
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> pls2 = PLSRegression(n_components=2)
>>> pls2.fit(X, Y)
PLSRegression()
>>> Y_pred = pls2.predict(X)
Methods
fit(X, Y) Fit model to data.
fit_transform(X[, y]) Learn and apply the dimension reduction on the train data.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Transform data back to its original space.
predict(X[, copy]) Predict targets of given samples.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
transform(X[, Y, copy]) Apply the dimension reduction.
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
yarray-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features.
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
Xarray-like of shape (n_samples, n_features)
Samples.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to transform.
Yarray-like of shape (n_samples, n_targets), default=None
Target vectors.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression |
sklearn.cross_decomposition.PLSRegression
class sklearn.cross_decomposition.PLSRegression(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True) [source]
PLS regression. PLSRegression is also known as PLS2 or PLS1, depending on the number of targets. Read more in the User Guide. New in version 0.8. Parameters
n_componentsint, default=2
Number of components to keep. Should be in [1, min(n_samples, n_features, n_targets)].
scalebool, default=True
Whether to scale X and Y.
max_iterint, default=500
The maximum number of iterations of the power method (PLSRegression always uses the NIPALS algorithm; there is no algorithm parameter in its signature).
tolfloat, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of u_i - u_{i-1} is less than tol, where u corresponds to the left singular vector.
copybool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays. Attributes
x_weights_ndarray of shape (n_features, n_components)
The left singular vectors of the cross-covariance matrices of each iteration.
y_weights_ndarray of shape (n_targets, n_components)
The right singular vectors of the cross-covariance matrices of each iteration.
x_loadings_ndarray of shape (n_features, n_components)
The loadings of X.
y_loadings_ndarray of shape (n_targets, n_components)
The loadings of Y.
x_scores_ndarray of shape (n_samples, n_components)
The transformed training samples.
y_scores_ndarray of shape (n_samples, n_components)
The transformed training targets.
x_rotations_ndarray of shape (n_features, n_components)
The projection matrix used to transform X.
y_rotations_ndarray of shape (n_targets, n_components)
The projection matrix used to transform Y.
coef_ndarray of shape (n_features, n_targets)
The coefficients of the linear model such that Y is approximated as Y = X @ coef_.
n_iter_list of shape (n_components,)
Number of iterations of the power method, for each component. Examples >>> from sklearn.cross_decomposition import PLSRegression
>>> X = [[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [2., 5., 4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> pls2 = PLSRegression(n_components=2)
>>> pls2.fit(X, Y)
PLSRegression()
>>> Y_pred = pls2.predict(X)
Methods
fit(X, Y) Fit model to data.
fit_transform(X[, y]) Learn and apply the dimension reduction on the train data.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Transform data back to its original space.
predict(X[, copy]) Predict targets of given samples.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
transform(X[, Y, copy]) Apply the dimension reduction.
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
yarray-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features.
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
Xarray-like of shape (n_samples, n_features)
Samples.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to transform.
Yarray-like of shape (n_samples, n_targets), default=None
Target vectors.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
Examples using sklearn.cross_decomposition.PLSRegression
Principal Component Regression vs Partial Least Squares Regression
Compare cross decomposition methods | sklearn.modules.generated.sklearn.cross_decomposition.plsregression |
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.fit |
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
yarray-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.get_params |
inverse_transform(X) [source]
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.inverse_transform |
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
Xarray-like of shape (n_samples, n_features)
Samples.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.set_params |
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to transform.
Yarray-like of shape (n_samples, n_targets), default=None
Target vectors.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plsregression#sklearn.cross_decomposition.PLSRegression.transform |
class sklearn.cross_decomposition.PLSSVD(n_components=2, *, scale=True, copy=True) [source]
Partial Least Square SVD. This transformer simply performs an SVD on the cross-covariance matrix X’Y. It is able to project both the training data X and the targets Y. The training data X is projected on the left singular vectors, while the targets are projected on the right singular vectors. Read more in the User Guide. New in version 0.8. Parameters
n_componentsint, default=2
The number of components to keep. Should be in [1,
min(n_samples, n_features, n_targets)].
scalebool, default=True
Whether to scale X and Y.
copybool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays. Attributes
x_weights_ndarray of shape (n_features, n_components)
The left singular vectors of the SVD of the cross-covariance matrix. Used to project X in transform.
y_weights_ndarray of (n_targets, n_components)
The right singular vectors of the SVD of the cross-covariance matrix. Used to project Y in transform.
x_scores_ndarray of shape (n_samples, n_components)
The transformed training samples. Deprecated since version 0.24: x_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
y_scores_ndarray of shape (n_samples, n_components)
The transformed training targets. Deprecated since version 0.24: y_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead. See also
PLSCanonical
CCA
Examples >>> import numpy as np
>>> from sklearn.cross_decomposition import PLSSVD
>>> X = np.array([[0., 0., 1.],
... [1., 0., 0.],
... [2., 2., 2.],
... [2., 5., 4.]])
>>> Y = np.array([[0.1, -0.2],
... [0.9, 1.1],
... [6.2, 5.9],
... [11.9, 12.3]])
>>> pls = PLSSVD(n_components=2).fit(X, Y)
>>> X_c, Y_c = pls.transform(X, Y)
>>> X_c.shape, Y_c.shape
((4, 2), (4, 2))
Methods
fit(X, Y) Fit model to data.
fit_transform(X[, y]) Learn and apply the dimensionality reduction.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X[, Y]) Apply the dimensionality reduction.
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training samples.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Targets.
fit_transform(X, y=None) [source]
Learn and apply the dimensionality reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Training samples.
yarray-like of shape (n_samples,) or (n_samples, n_targets), default=None
Targets. Returns
outarray-like or tuple of array-like
The transformed data X_transformed if Y is None, (X_transformed, Y_transformed) otherwise.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, Y=None) [source]
Apply the dimensionality reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to be transformed.
Yarray-like of shape (n_samples,) or (n_samples, n_targets), default=None
Targets. Returns
outarray-like or tuple of array-like
The transformed data X_transformed if Y is None, (X_transformed, Y_transformed) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD
sklearn.cross_decomposition.PLSSVD
class sklearn.cross_decomposition.PLSSVD(n_components=2, *, scale=True, copy=True) [source]
Partial Least Square SVD. This transformer simply performs an SVD on the cross-covariance matrix X’Y. It is able to project both the training data X and the targets Y. The training data X is projected on the left singular vectors, while the targets are projected on the right singular vectors. Read more in the User Guide. New in version 0.8. Parameters
n_componentsint, default=2
The number of components to keep. Should be in [1,
min(n_samples, n_features, n_targets)].
scalebool, default=True
Whether to scale X and Y.
copybool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays. Attributes
x_weights_ndarray of shape (n_features, n_components)
The left singular vectors of the SVD of the cross-covariance matrix. Used to project X in transform.
y_weights_ndarray of (n_targets, n_components)
The right singular vectors of the SVD of the cross-covariance matrix. Used to project Y in transform.
x_scores_ndarray of shape (n_samples, n_components)
The transformed training samples. Deprecated since version 0.24: x_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
y_scores_ndarray of shape (n_samples, n_components)
The transformed training targets. Deprecated since version 0.24: y_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead. See also
PLSCanonical
CCA
Examples >>> import numpy as np
>>> from sklearn.cross_decomposition import PLSSVD
>>> X = np.array([[0., 0., 1.],
... [1., 0., 0.],
... [2., 2., 2.],
... [2., 5., 4.]])
>>> Y = np.array([[0.1, -0.2],
... [0.9, 1.1],
... [6.2, 5.9],
... [11.9, 12.3]])
>>> pls = PLSSVD(n_components=2).fit(X, Y)
>>> X_c, Y_c = pls.transform(X, Y)
>>> X_c.shape, Y_c.shape
((4, 2), (4, 2))
Methods
fit(X, Y) Fit model to data.
fit_transform(X[, y]) Learn and apply the dimensionality reduction.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X[, Y]) Apply the dimensionality reduction.
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training samples.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Targets.
fit_transform(X, y=None) [source]
Learn and apply the dimensionality reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Training samples.
yarray-like of shape (n_samples,) or (n_samples, n_targets), default=None
Targets. Returns
outarray-like or tuple of array-like
The transformed data X_transformed if Y is None, (X_transformed, Y_transformed) otherwise.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, Y=None) [source]
Apply the dimensionality reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to be transformed.
Yarray-like of shape (n_samples,) or (n_samples, n_targets), default=None
Targets. Returns
outarray-like or tuple of array-like
The transformed data X_transformed if Y is None, (X_transformed, Y_transformed) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training samples.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Targets. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.fit |
fit_transform(X, y=None) [source]
Learn and apply the dimensionality reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Training samples.
yarray-like of shape (n_samples,) or (n_samples, n_targets), default=None
Targets. Returns
outarray-like or tuple of array-like
The transformed data X_transformed if Y is None, (X_transformed, Y_transformed) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.fit_transform
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.set_params |
transform(X, Y=None) [source]
Apply the dimensionality reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to be transformed.
Yarray-like of shape (n_samples,) or (n_samples, n_targets), default=None
Targets. Returns
outarray-like or tuple of array-like
The transformed data X_transformed if Y is None, (X_transformed, Y_transformed) otherwise. | sklearn.modules.generated.sklearn.cross_decomposition.plssvd#sklearn.cross_decomposition.PLSSVD.transform
sklearn.datasets.clear_data_home(data_home=None) [source]
Delete all the content of the data home cache. Parameters
data_homestr, default=None
The path to scikit-learn data directory. If None, the default path is ~/scikit_learn_data. | sklearn.modules.generated.sklearn.datasets.clear_data_home#sklearn.datasets.clear_data_home
sklearn.datasets.dump_svmlight_file(X, y, f, *, zero_based=True, comment=None, query_id=None, multilabel=False) [source]
Dump the dataset in svmlight / libsvm file format. This format is a text-based format, with one sample per line. It does not store zero valued features hence is suitable for sparse dataset. The first element of each line can be used to store a target variable to predict. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y{array-like, sparse matrix}, shape = [n_samples (, n_labels)]
Target values. Class labels must be an integer or float, or array-like objects of integer or float for multilabel classifications.
fstring or file-like in binary mode
If string, specifies the path that will contain the data. If file-like, data will be written to f. f should be opened in binary mode.
zero_basedboolean, default=True
Whether column indices should be written zero-based (True) or one-based (False).
commentstring, default=None
Comment to insert at the top of the file. This should be either a Unicode string, which will be encoded as UTF-8, or an ASCII byte string. If a comment is given, then it will be preceded by one that identifies the file as having been dumped by scikit-learn. Note that not all tools grok comments in SVMlight files.
query_idarray-like of shape (n_samples,), default=None
Array containing pairwise preference constraints (qid in svmlight format).
multilabelboolean, default=False
Samples may have several labels each (see https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html) New in version 0.17: parameter multilabel to support multilabel datasets. | sklearn.modules.generated.sklearn.datasets.dump_svmlight_file#sklearn.datasets.dump_svmlight_file |
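A small round-trip sketch of the format described above, writing a toy dense dataset to an in-memory binary buffer (the data and comment text are illustrative). Note that zero-valued features are simply not written:

```python
import io
import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

X = np.array([[0.0, 1.5], [2.0, 0.0]])
y = np.array([0, 1])

buf = io.BytesIO()  # f must be a path or a file-like opened in binary mode
dump_svmlight_file(X, y, buf, zero_based=True, comment="toy example")

buf.seek(0)
X_loaded, y_loaded = load_svmlight_file(buf, zero_based=True)
# X_loaded is sparse; the round trip preserves the stored values
```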
sklearn.datasets.fetch_20newsgroups(*, data_home=None, subset='train', categories=None, shuffle=True, random_state=42, remove=(), download_if_missing=True, return_X_y=False) [source]
Load the filenames and data from the 20 newsgroups dataset (classification). Download it if necessary.
Classes 20
Samples total 18846
Dimensionality 1
Features text Read more in the User Guide. Parameters
data_homestr, default=None
Specify a download and cache folder for the datasets. If None, all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
subset{‘train’, ‘test’, ‘all’}, default=’train’
Select the dataset to load: ‘train’ for the training set, ‘test’ for the test set, ‘all’ for both, with shuffled ordering.
categoriesarray-like, dtype=str or unicode, default=None
If None (default), load all the categories. If not None, list of category names to load (other categories ignored).
shufflebool, default=True
Whether or not to shuffle the data: might be important for models that make the assumption that the samples are independent and identically distributed (i.i.d.), such as stochastic gradient descent.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
removetuple, default=()
May contain any subset of (‘headers’, ‘footers’, ‘quotes’). Each of these are kinds of text that will be detected and removed from the newsgroup posts, preventing classifiers from overfitting on metadata. ‘headers’ removes newsgroup headers, ‘footers’ removes blocks at the ends of posts that look like signatures, and ‘quotes’ removes lines that appear to be quoting another post. ‘headers’ follows an exact standard; the other filters are not always correct.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.22. Returns
bunchBunch
Dictionary-like object, with the following attributes.
datalist of shape (n_samples,)
The data list to learn. target: ndarray of shape (n_samples,)
The target labels. filenames: list of shape (n_samples,)
The path to the location of the data. DESCR: str
The full description of the dataset. target_names: list of shape (n_classes,)
The names of target classes.
(data, target)tuple if return_X_y=True
New in version 0.22. | sklearn.modules.generated.sklearn.datasets.fetch_20newsgroups#sklearn.datasets.fetch_20newsgroups |
sklearn.datasets.fetch_20newsgroups_vectorized(*, subset='train', remove=(), data_home=None, download_if_missing=True, return_X_y=False, normalize=True, as_frame=False) [source]
Load and vectorize the 20 newsgroups dataset (classification). Download it if necessary. This is a convenience function; the transformation is done using the default settings for CountVectorizer. For more advanced usage (stopword filtering, n-gram extraction, etc.), combine fetch_20newsgroups with a custom CountVectorizer, HashingVectorizer, TfidfTransformer or TfidfVectorizer. The resulting counts are normalized using sklearn.preprocessing.normalize unless normalize is set to False.
Classes 20
Samples total 18846
Dimensionality 130107
Features real Read more in the User Guide. Parameters
subset{‘train’, ‘test’, ‘all’}, default=’train’
Select the dataset to load: ‘train’ for the training set, ‘test’ for the test set, ‘all’ for both, with shuffled ordering.
removetuple, default=()
May contain any subset of (‘headers’, ‘footers’, ‘quotes’). Each of these are kinds of text that will be detected and removed from the newsgroup posts, preventing classifiers from overfitting on metadata. ‘headers’ removes newsgroup headers, ‘footers’ removes blocks at the ends of posts that look like signatures, and ‘quotes’ removes lines that appear to be quoting another post.
data_homestr, default=None
Specify a download and cache folder for the datasets. If None, all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.20.
normalizebool, default=True
If True, normalizes each document’s feature vector to unit norm using sklearn.preprocessing.normalize. New in version 0.22.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string, or categorical). The target is a pandas DataFrame or Series depending on the number of target_columns. New in version 0.24. Returns
bunchBunch
Dictionary-like object, with the following attributes. data: {sparse matrix, dataframe} of shape (n_samples, n_features)
The input data matrix. If as_frame is True, data is a pandas DataFrame with sparse columns. target: {ndarray, series} of shape (n_samples,)
The target labels. If as_frame is True, target is a pandas Series. target_names: list of shape (n_classes,)
The names of target classes. DESCR: str
The full description of the dataset. frame: dataframe of shape (n_samples, n_features + 1)
Only present when as_frame=True. Pandas DataFrame with data and target. New in version 0.24.
(data, target)tuple if return_X_y is True
data and target would be of the format defined in the Bunch description above. New in version 0.20. | sklearn.modules.generated.sklearn.datasets.fetch_20newsgroups_vectorized#sklearn.datasets.fetch_20newsgroups_vectorized |
sklearn.datasets.fetch_california_housing(*, data_home=None, download_if_missing=True, return_X_y=False, as_frame=False) [source]
Load the California housing dataset (regression).
Samples total 20640
Dimensionality 8
Features real
Target real 0.15 - 5. Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False.
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.20.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target_columns. New in version 0.23. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datandarray, shape (20640, 8)
Each row corresponding to the 8 feature values in order. If as_frame is True, data is a pandas object.
targetnumpy array of shape (20640,)
Each value corresponds to the average house value in units of 100,000. If as_frame is True, target is a pandas object.
feature_nameslist of length 8
Array of ordered feature names used in the dataset.
DESCRstring
Description of the California housing dataset.
framepandas DataFrame
Only present when as_frame=True. DataFrame with data and target. New in version 0.23.
(data, target)tuple if return_X_y is True
New in version 0.20. Notes This dataset consists of 20,640 samples and 9 features. | sklearn.modules.generated.sklearn.datasets.fetch_california_housing#sklearn.datasets.fetch_california_housing |
sklearn.datasets.fetch_covtype(*, data_home=None, download_if_missing=True, random_state=None, shuffle=False, return_X_y=False, as_frame=False) [source]
Load the covertype dataset (classification). Download it if necessary.
Classes 7
Samples total 581012
Dimensionality 54
Features int Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
shufflebool, default=False
Whether to shuffle dataset.
return_X_ybool, default=False
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.20.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.24. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datandarray of shape (581012, 54)
Each row corresponds to the 54 features in the dataset.
targetndarray of shape (581012,)
Each value corresponds to one of the 7 forest covertypes, with values ranging from 1 to 7.
framedataframe of shape (581012, 53)
Only present when as_frame=True. Contains data and target.
DESCRstr
Description of the forest covertype dataset.
feature_nameslist
The names of the dataset columns. target_names: list
The names of the target columns.
(data, target)tuple if return_X_y is True
New in version 0.20. | sklearn.modules.generated.sklearn.datasets.fetch_covtype#sklearn.datasets.fetch_covtype |
sklearn.datasets.fetch_kddcup99(*, subset=None, data_home=None, shuffle=False, random_state=None, percent10=True, download_if_missing=True, return_X_y=False, as_frame=False) [source]
Load the kddcup99 dataset (classification). Download it if necessary.
Classes 23
Samples total 4898431
Dimensionality 41
Features discrete (int) or continuous (float) Read more in the User Guide. New in version 0.18. Parameters
subset{‘SA’, ‘SF’, ‘http’, ‘smtp’}, default=None
To return the corresponding classical subsets of kddcup 99. If None, return the entire kddcup 99 dataset.
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders. New in version 0.19.
shufflebool, default=False
Whether to shuffle dataset.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling and for selection of abnormal samples if subset='SA'. Pass an int for reproducible output across multiple function calls. See Glossary.
percent10bool, default=True
Whether to load only 10 percent of the data.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.20.
as_framebool, default=False
If True, returns a pandas DataFrame for the data and target objects in the returned Bunch; the returned Bunch will also have a frame member. New in version 0.24. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (494021, 41)
The data matrix to learn. If as_frame=True, data will be a pandas DataFrame.
target{ndarray, series} of shape (494021,)
The regression target for each sample. If as_frame=True, target will be a pandas Series.
framedataframe of shape (494021, 42)
Only present when as_frame=True. Contains data and target.
DESCRstr
The full description of the dataset.
feature_nameslist
The names of the dataset columns target_names: list
The names of the target columns
(data, target)tuple if return_X_y is True
New in version 0.20. | sklearn.modules.generated.sklearn.datasets.fetch_kddcup99#sklearn.datasets.fetch_kddcup99 |
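A hedged sketch of typical usage (the helper name is illustrative; network access is needed on the first call):

```python
from sklearn.datasets import fetch_kddcup99

def load_http_subset(seed=0):
    """Fetch the small 'http' subset of kddcup99."""
    # percent10=True (the default) keeps the download and
    # memory footprint manageable.
    data = fetch_kddcup99(subset="http", percent10=True,
                          shuffle=True, random_state=seed)
    return data.data, data.target

# Requires network on first use; cached under ~/scikit_learn_data after:
# X, y = load_http_subset()
```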
sklearn.datasets.fetch_lfw_pairs(*, subset='train', data_home=None, funneled=True, resize=0.5, color=False, slice_=(slice(70, 195, None), slice(78, 172, None)), download_if_missing=True) [source]
Load the Labeled Faces in the Wild (LFW) pairs dataset (classification). Download it if necessary.
Classes 2
Samples total 13233
Dimensionality 5828
Features real, between 0 and 255 In the official README.txt this task is described as the “Restricted” task. Because it is unclear how to implement the “Unrestricted” variant correctly, it is left unsupported for now. The original images are 250 x 250 pixels, but the default slice and resize arguments reduce them to 62 x 47. Read more in the User Guide. Parameters
subset{‘train’, ‘test’, ‘10_folds’}, default=’train’
Select the dataset to load: ‘train’ for the development training set, ‘test’ for the development test set, and ‘10_folds’ for the official evaluation set that is meant to be used with a 10-folds cross validation.
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
funneledbool, default=True
Download and use the funneled variant of the dataset.
resizefloat, default=0.5
Ratio used to resize each face picture.
colorbool, default=False
Keep the 3 RGB channels instead of averaging them to a single gray level channel. If color is True the shape of the data has one more dimension than the shape with color = False.
slice_tuple of slice, default=(slice(70, 195), slice(78, 172))
Provide a custom 2D slice (height, width) to extract the ‘interesting’ part of the jpeg files and avoid picking up statistical correlations from the background.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site. Returns
dataBunch
Dictionary-like object, with the following attributes.
datandarray of shape (2200, 5828). Shape depends on subset.
Each row corresponds to 2 ravel’d face images of original size 62 x 47 pixels. Changing the slice_, resize or subset parameters will change the shape of the output.
pairsndarray of shape (2200, 2, 62, 47). Shape depends on subset
Each row has 2 face images corresponding to same or different person from the dataset containing 5749 people. Changing the slice_, resize or subset parameters will change the shape of the output.
targetnumpy array of shape (2200,). Shape depends on subset.
Labels associated with each pair of images: the two label values indicate whether the images show different persons or the same person.
DESCRstring
Description of the Labeled Faces in the Wild (LFW) dataset. | sklearn.modules.generated.sklearn.datasets.fetch_lfw_pairs#sklearn.datasets.fetch_lfw_pairs |
sklearn.datasets.fetch_lfw_people(*, data_home=None, funneled=True, resize=0.5, min_faces_per_person=0, color=False, slice_=(slice(70, 195, None), slice(78, 172, None)), download_if_missing=True, return_X_y=False) [source]
Load the Labeled Faces in the Wild (LFW) people dataset (classification). Download it if necessary.
Classes 5749
Samples total 13233
Dimensionality 5828
Features real, between 0 and 255 Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
funneledbool, default=True
Download and use the funneled variant of the dataset.
resizefloat, default=0.5
Ratio used to resize each face picture.
min_faces_per_personint, default=None
The extracted dataset will only retain pictures of people that have at least min_faces_per_person different pictures.
colorbool, default=False
Keep the 3 RGB channels instead of averaging them to a single gray level channel. If color is True the shape of the data has one more dimension than the shape with color = False.
slice_tuple of slice, default=(slice(70, 195), slice(78, 172))
Provide a custom 2D slice (height, width) to extract the ‘interesting’ part of the jpeg files and avoid picking up statistical correlations from the background.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (dataset.data, dataset.target) instead of a Bunch object. See below for more information about the dataset.data and dataset.target object. New in version 0.20. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datanumpy array of shape (13233, 2914)
Each row corresponds to a ravelled face image of original size 62 x 47 pixels. Changing the slice_ or resize parameters will change the shape of the output.
imagesnumpy array of shape (13233, 62, 47)
Each row is a face image corresponding to one of the 5749 people in the dataset. Changing the slice_ or resize parameters will change the shape of the output.
targetnumpy array of shape (13233,)
Labels associated with each face image. Those labels range from 0 to 5748 and correspond to the person IDs.
DESCRstring
Description of the Labeled Faces in the Wild (LFW) dataset.
(data, target)tuple if return_X_y is True
New in version 0.20. | sklearn.modules.generated.sklearn.datasets.fetch_lfw_people#sklearn.datasets.fetch_lfw_people |
sklearn.datasets.fetch_olivetti_faces(*, data_home=None, shuffle=False, random_state=0, download_if_missing=True, return_X_y=False) [source]
Load the Olivetti faces data-set from AT&T (classification). Download it if necessary.
Classes 40
Samples total 400
Dimensionality 4096
Features real, between 0 and 1 Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
shufflebool, default=False
If True the order of the dataset is shuffled to avoid having images of the same person grouped.
random_stateint, RandomState instance or None, default=0
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.22. Returns
dataBunch
Dictionary-like object, with the following attributes.
datandarray, shape (400, 4096)
Each row corresponds to a ravelled face image of original size 64 x 64 pixels.
imagesndarray, shape (400, 64, 64)
Each row is a face image corresponding to one of the 40 subjects of the dataset.
targetndarray, shape (400,)
Labels associated with each face image. Those labels range from 0 to 39 and correspond to the subject IDs.
DESCRstr
Description of the modified Olivetti Faces Dataset.
(data, target)tuple if return_X_y=True
New in version 0.22. | sklearn.modules.generated.sklearn.datasets.fetch_olivetti_faces#sklearn.datasets.fetch_olivetti_faces |
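A minimal sketch of typical usage (the helper name is illustrative; the download is required only on the first call):

```python
from sklearn.datasets import fetch_olivetti_faces

def load_faces(seed=0):
    """Fetch the 400 Olivetti face images, shuffled across subjects."""
    faces = fetch_olivetti_faces(shuffle=True, random_state=seed)
    # faces.images has shape (400, 64, 64); faces.data is the
    # flattened (400, 4096) view of the same pixel values.
    return faces.images, faces.target

# Requires network on first use; cached locally afterwards:
# images, labels = load_faces()
```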
sklearn.datasets.fetch_openml(name: Optional[str] = None, *, version: Union[str, int] = 'active', data_id: Optional[int] = None, data_home: Optional[str] = None, target_column: Optional[Union[str, List]] = 'default-target', cache: bool = True, return_X_y: bool = False, as_frame: Union[str, bool] = 'auto') [source]
Fetch dataset from openml by name or dataset id. Datasets are uniquely identified by either an integer ID or by a combination of name and version (i.e. there might be multiple versions of the ‘iris’ dataset). Please give either name or data_id (not both). In case a name is given, a version can also be provided. Read more in the User Guide. New in version 0.20. Note EXPERIMENTAL The API is experimental (particularly the return value structure), and might have small backward-incompatible changes without notice or warning in future releases. Parameters
namestr, default=None
String identifier of the dataset. Note that OpenML can have multiple datasets with the same name.
versionint or ‘active’, default=’active’
Version of the dataset. Can only be provided if also name is given. If ‘active’ the oldest version that’s still active is used. Since there may be more than one active version of a dataset, and those versions may fundamentally be different from one another, setting an exact version is highly recommended.
data_idint, default=None
OpenML ID of the dataset. The most specific way of retrieving a dataset. If data_id is not given, name (and potential version) are used to obtain a dataset.
data_homestr, default=None
Specify another download and cache folder for the data sets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
target_columnstr, list or None, default=’default-target’
Specify the column name in the data to use as target. If ‘default-target’, the standard target column as stored on the server is used. If None, all columns are returned as data and the target is None. If a list (of strings), all columns with these names are returned as multi-target (note: not all scikit-learn classifiers can handle all types of multi-output combinations).
cachebool, default=True
Whether to cache downloaded datasets using joblib.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target objects.
as_framebool or ‘auto’, default=’auto’
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target_columns. The Bunch will contain a frame attribute with the target and the data. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described above. If as_frame is ‘auto’, the data and target will be converted to DataFrame or Series as if as_frame is set to True, unless the dataset is stored in sparse format. Changed in version 0.24: The default value of as_frame changed from False to 'auto'. Returns
dataBunch
Dictionary-like object, with the following attributes.
datanp.array, scipy.sparse.csr_matrix of floats, or pandas DataFrame
The feature matrix. Categorical features are encoded as ordinals.
targetnp.array, pandas Series or DataFrame
The regression target or classification labels, if applicable. Dtype is float if numeric, and object if categorical. If as_frame is True, target is a pandas object.
DESCRstr
The full description of the dataset
feature_nameslist
The names of the dataset columns target_names: list
The names of the target columns New in version 0.22.
categoriesdict or None
Maps each categorical feature name to a list of values, such that the value encoded as i is ith in the list. If as_frame is True, this is None.
detailsdict
More metadata from OpenML
framepandas DataFrame
Only present when as_frame=True. DataFrame with data and target.
(data, target)tuple if return_X_y is True
Note EXPERIMENTAL This interface is experimental and subsequent releases may change attributes without notice (although there should only be minor changes to data and target). Missing values in the ‘data’ are represented as NaN’s. Missing values in ‘target’ are represented as NaN’s (numerical target) or None (categorical target) | sklearn.modules.generated.sklearn.datasets.fetch_openml#sklearn.datasets.fetch_openml |
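A hedged sketch (helper name and the specific dataset name/id are illustrative; network access is required to actually fetch):

```python
from sklearn.datasets import fetch_openml

def load_openml_dataset(name="iris", version=1):
    """Fetch an OpenML dataset by name, pinning an explicit version.

    Pinning a version is safer than version='active', since several
    active versions of the same name may differ fundamentally.
    """
    return fetch_openml(name=name, version=version,
                        as_frame=True, return_X_y=True)

# Alternatively, the unique OpenML data_id avoids name ambiguity:
# X, y = fetch_openml(data_id=61, as_frame=True, return_X_y=True)
```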
sklearn.datasets.fetch_rcv1(*, data_home=None, subset='all', download_if_missing=True, random_state=None, shuffle=False, return_X_y=False) [source]
Load the RCV1 multilabel dataset (classification). Download it if necessary. Version: RCV1-v2, vectors, full sets, topics multilabels.
Classes 103
Samples total 804414
Dimensionality 47236
Features real, between 0 and 1 Read more in the User Guide. New in version 0.17. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
subset{‘train’, ‘test’, ‘all’}, default=’all’
Select the dataset to load: ‘train’ for the training set (23149 samples), ‘test’ for the test set (781265 samples), ‘all’ for both, with the training samples first if shuffle is False. This follows the official LYRL2004 chronological split.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
shufflebool, default=False
Whether to shuffle dataset.
return_X_ybool, default=False
If True, returns (dataset.data, dataset.target) instead of a Bunch object. See below for more information about the dataset.data and dataset.target object. New in version 0.20. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datasparse matrix of shape (804414, 47236), dtype=np.float64
The array has 0.16% of non zero values. Will be of CSR format.
targetsparse matrix of shape (804414, 103), dtype=np.uint8
Each sample has a value of 1 in its categories, and 0 in others. The array has 3.15% of non zero values. Will be of CSR format.
sample_idndarray of shape (804414,), dtype=np.uint32,
Identification number of each sample, as ordered in dataset.data.
target_namesndarray of shape (103,), dtype=object
Names of each target (RCV1 topics), as ordered in dataset.target.
DESCRstr
Description of the RCV1 dataset.
(data, target)tuple if return_X_y is True
New in version 0.20. | sklearn.modules.generated.sklearn.datasets.fetch_rcv1#sklearn.datasets.fetch_rcv1 |
sklearn.datasets.fetch_species_distributions(*, data_home=None, download_if_missing=True) [source]
Load the species distribution dataset from Phillips et al. (2006). Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site. Returns
dataBunch
Dictionary-like object, with the following attributes.
coveragesarray, shape = [14, 1592, 1212]
These represent the 14 features measured at each point of the map grid. The latitude/longitude values for the grid are discussed below. Missing data is represented by the value -9999.
trainrecord array, shape = (1624,)
The training points for the data. Each point has three fields: train[‘species’] is the species name; train[‘dd long’] is the longitude, in degrees; train[‘dd lat’] is the latitude, in degrees.
testrecord array, shape = (620,)
The test points for the data. Same format as the training data.
Nx, Nyintegers
The number of longitudes (x) and latitudes (y) in the grid
x_left_lower_corner, y_left_lower_cornerfloats
The (x,y) position of the lower-left corner, in degrees
grid_sizefloat
The spacing between points of the grid, in degrees Notes This dataset represents the geographic distribution of species. The dataset is provided by Phillips et al. (2006). The two species are:
“Bradypus variegatus” , the Brown-throated Sloth.
“Microryzomys minutus” , also known as the Forest Small Rice Rat, a rodent that lives in Colombia, Ecuador, Peru, and Venezuela. For an example of using this dataset with scikit-learn, see examples/applications/plot_species_distribution_modeling.py. References
“Maximum entropy modeling of species geographic distributions” S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling, 190:231-259, 2006. | sklearn.modules.generated.sklearn.datasets.fetch_species_distributions#sklearn.datasets.fetch_species_distributions |
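A small sketch showing how the grid axes described above can be reconstructed from the Bunch attributes (the helper name is illustrative; the actual fetch needs network access and is commented out):

```python
import numpy as np
from sklearn.datasets import fetch_species_distributions

def coverage_grid_axes(bunch):
    """Rebuild the longitude/latitude axes of the coverage grid from
    the lower-left corner position and spacing stored in the Bunch."""
    xgrid = bunch.x_left_lower_corner + bunch.grid_size * np.arange(bunch.Nx)
    ygrid = bunch.y_left_lower_corner + bunch.grid_size * np.arange(bunch.Ny)
    return xgrid, ygrid

# Requires network on first use:
# data = fetch_species_distributions()
# xgrid, ygrid = coverage_grid_axes(data)  # axes for data.coverages
```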
sklearn.datasets.get_data_home(data_home=None) → str[source]
Return the path of the scikit-learn data dir. This folder is used by some large dataset loaders to avoid downloading the data several times. By default the data dir is set to a folder named ‘scikit_learn_data’ in the user home folder. Alternatively, it can be set by the ‘SCIKIT_LEARN_DATA’ environment variable or programmatically by giving an explicit folder path. The ‘~’ symbol is expanded to the user home folder. If the folder does not already exist, it is automatically created. Parameters
data_homestr, default=None
The path to scikit-learn data directory. If None, the default path is ~/scikit_learn_data.
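A short usage sketch (the custom folder name below is hypothetical):

```python
import os
from sklearn.datasets import get_data_home

# With no argument this returns (and creates, if needed) the default
# cache folder, honouring the SCIKIT_LEARN_DATA environment variable.
default_dir = get_data_home()
print(os.path.isdir(default_dir))  # True

# An explicit path overrides the default; '~' is expanded to the
# user's home folder and the directory is created if missing.
custom_dir = get_data_home(data_home="~/my_sklearn_cache")
```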
sklearn.datasets.load_boston(*, return_X_y=False) [source]
Load and return the boston house-prices dataset (regression).
Samples total 506
Dimensionality 13
Features real, positive
Targets real 5. - 50. Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18. Returns
dataBunch
Dictionary-like object, with the following attributes.
datandarray of shape (506, 13)
The data matrix.
targetndarray of shape (506, )
The regression target.
filenamestr
The physical location of boston csv dataset. New in version 0.20.
DESCRstr
The full description of the dataset.
feature_namesndarray
The names of features
(data, target)tuple if return_X_y is True
New in version 0.18. Notes Changed in version 0.20: Fixed a wrong data point at [445, 0]. Examples >>> from sklearn.datasets import load_boston
>>> X, y = load_boston(return_X_y=True)
>>> print(X.shape)
(506, 13) | sklearn.modules.generated.sklearn.datasets.load_boston#sklearn.datasets.load_boston |
sklearn.datasets.load_breast_cancer(*, return_X_y=False, as_frame=False) [source]
Load and return the breast cancer wisconsin dataset (classification). The breast cancer dataset is a classic and very easy binary classification dataset.
Classes 2
Samples per class 212(M),357(B)
Samples total 569
Dimensionality 30
Features real, positive Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (569, 30)
The data matrix. If as_frame=True, data will be a pandas DataFrame.
target{ndarray, Series} of shape (569,)
The classification target. If as_frame=True, target will be a pandas Series.
feature_nameslist
The names of the dataset columns.
target_nameslist
The names of target classes.
frameDataFrame of shape (569, 31)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23.
DESCRstr
The full description of the dataset.
filenamestr
The path to the location of the data. New in version 0.20.
(data, target)tuple if return_X_y is True
New in version 0.18. The copy of the UCI ML Breast Cancer Wisconsin (Diagnostic) dataset is downloaded from: https://goo.gl/U2Uwz2
Examples Let’s say you are interested in the samples 10, 50, and 85, and want to know their class name. >>> from sklearn.datasets import load_breast_cancer
>>> data = load_breast_cancer()
>>> data.target[[10, 50, 85]]
array([0, 1, 0])
>>> list(data.target_names)
['malignant', 'benign'] | sklearn.modules.generated.sklearn.datasets.load_breast_cancer#sklearn.datasets.load_breast_cancer |
sklearn.datasets.load_diabetes(*, return_X_y=False, as_frame=False) [source]
Load and return the diabetes dataset (regression).
Samples total 442
Dimensionality 10
Features real, -.2 < x < .2
Targets integer 25 - 346 Read more in the User Guide. Parameters
return_X_ybool, default=False.
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (442, 10)
The data matrix. If as_frame=True, data will be a pandas DataFrame.
target{ndarray, Series} of shape (442,)
The regression target. If as_frame=True, target will be a pandas Series.
feature_nameslist
The names of the dataset columns.
frameDataFrame of shape (442, 11)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23.
DESCRstr
The full description of the dataset.
data_filenamestr
The path to the location of the data.
target_filenamestr
The path to the location of the target.
(data, target)tuple if return_X_y is True
New in version 0.18. | sklearn.modules.generated.sklearn.datasets.load_diabetes#sklearn.datasets.load_diabetes |
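A short usage example, consistent with the shapes documented above:

```python
from sklearn.datasets import load_diabetes

# The compact form returns just the feature matrix and target vector.
X, y = load_diabetes(return_X_y=True)
print(X.shape, y.shape)  # (442, 10) (442,)

# The Bunch form also exposes the column names of the 10 features.
diabetes = load_diabetes()
print(diabetes.feature_names)
```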
sklearn.datasets.load_digits(*, n_class=10, return_X_y=False, as_frame=False) [source]
Load and return the digits dataset (classification). Each datapoint is a 8x8 image of a digit.
Classes 10
Samples per class ~180
Samples total 1797
Dimensionality 64
Features integers 0-16 Read more in the User Guide. Parameters
n_classint, default=10
The number of classes to return. Between 0 and 10.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (1797, 64)
The flattened data matrix. If as_frame=True, data will be a pandas DataFrame.
target{ndarray, Series} of shape (1797,)
The classification target. If as_frame=True, target will be a pandas Series.
feature_nameslist
The names of the dataset columns.
target_nameslist
The names of target classes. New in version 0.20.
frameDataFrame of shape (1797, 65)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23.
imagesndarray of shape (1797, 8, 8)
The raw image data.
DESCRstr
The full description of the dataset.
(data, target)tuple if return_X_y is True
New in version 0.18. This is a copy of the test set of the UCI ML hand-written digits datasets
https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
Examples To load the data and visualize the images: >>> from sklearn.datasets import load_digits
>>> digits = load_digits()
>>> print(digits.data.shape)
(1797, 64)
>>> import matplotlib.pyplot as plt
>>> plt.gray()
>>> plt.matshow(digits.images[0])
>>> plt.show() | sklearn.modules.generated.sklearn.datasets.load_digits#sklearn.datasets.load_digits |
sklearn.datasets.load_files(container_path, *, description=None, categories=None, load_content=True, shuffle=True, encoding=None, decode_error='strict', random_state=0) [source]
Load text files with categories as subfolder names. Individual samples are assumed to be files stored in a two-level folder structure such as the following: container_folder/
category_1_folder/
file_1.txt file_2.txt … file_42.txt category_2_folder/
file_43.txt file_44.txt … The folder names are used as supervised signal label names. The individual file names are not important. This function does not try to extract features into a numpy array or scipy sparse matrix. In addition, if load_content is False it does not try to load the files in memory. To use text files in a scikit-learn classification or clustering algorithm, you will need to use the sklearn.feature_extraction.text module to build a feature extraction transformer that suits your problem. If you set load_content=True, you should also specify the encoding of the text using the encoding parameter. For many modern text files, ‘utf-8’ will be the correct encoding. If you leave encoding equal to None, then the content will be made of bytes instead of Unicode, and you will not be able to use most functions in the text module. Similar feature extractors should be built for other kinds of unstructured data input such as images, audio, video, … Read more in the User Guide. Parameters
container_pathstr or unicode
Path to the main folder holding one subfolder per category
descriptionstr or unicode, default=None
A paragraph describing the characteristic of the dataset: its source, reference, etc.
categorieslist of str, default=None
If None (default), load all the categories. If not None, list of category names to load (other categories ignored).
load_contentbool, default=True
Whether or not to load the content of the different files. If True, a ‘data’ attribute containing the text information is present in the data structure returned. If not, a filenames attribute gives the path to the files.
shufflebool, default=True
Whether or not to shuffle the data: might be important for models that make the assumption that the samples are independent and identically distributed (i.i.d.), such as stochastic gradient descent.
encodingstr, default=None
If None, do not try to decode the content of the files (e.g. for images or other non-text content). If not None, encoding to use to decode text files to Unicode if load_content is True.
decode_error{‘strict’, ‘ignore’, ‘replace’}, default=’strict’
Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. Passed as keyword argument ‘errors’ to bytes.decode.
random_stateint, RandomState instance or None, default=0
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
dataBunch
Dictionary-like object, with the following attributes.
datalist of str
Only present when load_content=True. The raw text data to learn.
targetndarray
The target labels (integer index).
target_nameslist
The names of target classes.
DESCRstr
The full description of the dataset.
filenamesndarray
The filenames holding the dataset.
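A self-contained sketch of the folder layout described above (the category and file names are hypothetical):

```python
import os
import tempfile
from sklearn.datasets import load_files

# Build a tiny two-category corpus on disk.
root = tempfile.mkdtemp()
for category, text in [("spam", "buy now"), ("ham", "hello friend")]:
    folder = os.path.join(root, category)
    os.makedirs(folder)
    with open(os.path.join(folder, "doc1.txt"), "w") as f:
        f.write(text)

# Folder names become the supervised labels; encoding='utf-8'
# decodes the file contents to str instead of bytes.
corpus = load_files(root, encoding="utf-8", shuffle=False)
print(list(corpus.target_names))  # ['ham', 'spam']
```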
sklearn.datasets.load_iris(*, return_X_y=False, as_frame=False) [source]
Load and return the iris dataset (classification). The iris dataset is a classic and very easy multi-class classification dataset.
Classes 3
Samples per class 50
Samples total 150
Dimensionality 4
Features real, positive Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (150, 4)
The data matrix. If as_frame=True, data will be a pandas DataFrame.
target{ndarray, Series} of shape (150,)
The classification target. If as_frame=True, target will be a pandas Series.
feature_nameslist
The names of the dataset columns.
target_nameslist
The names of target classes.
frameDataFrame of shape (150, 5)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23.
DESCRstr
The full description of the dataset.
filenamestr
The path to the location of the data. New in version 0.20.
(data, target)tuple if return_X_y is True
New in version 0.18. Notes Changed in version 0.20: Fixed two wrong data points according to Fisher’s paper. The new version is the same as in R, but not as in the UCI Machine Learning Repository. Examples Let’s say you are interested in the samples 10, 25, and 50, and want to know their class name. >>> from sklearn.datasets import load_iris
>>> data = load_iris()
>>> data.target[[10, 25, 50]]
array([0, 0, 1])
>>> list(data.target_names)
['setosa', 'versicolor', 'virginica'] | sklearn.modules.generated.sklearn.datasets.load_iris#sklearn.datasets.load_iris |
sklearn.datasets.load_linnerud(*, return_X_y=False, as_frame=False) [source]
Load and return the physical exercise Linnerud dataset. This dataset is suitable for multi-output regression tasks.
Samples total 20
Dimensionality 3 (for both data and target)
Features integer
Targets integer Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (20, 3)
The data matrix. If as_frame=True, data will be a pandas DataFrame.
target{ndarray, dataframe} of shape (20, 3)
The regression targets. If as_frame=True, target will be a pandas DataFrame.
feature_nameslist
The names of the dataset columns.
target_nameslist
The names of the target columns.
frameDataFrame of shape (20, 6)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23.
DESCRstr
The full description of the dataset.
data_filenamestr
The path to the location of the data.
target_filenamestr
The path to the location of the target. New in version 0.20.
(data, target)tuple if return_X_y is True
New in version 0.18. | sklearn.modules.generated.sklearn.datasets.load_linnerud#sklearn.datasets.load_linnerud |
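The docstring above carries no usage example; a minimal sketch (not part of the original documentation) loading the dataset and checking its multi-output shape:

```python
# Minimal sketch: load the Linnerud dataset and inspect its
# multi-output regression targets (20 samples, 3 target columns).
from sklearn.datasets import load_linnerud

linnerud = load_linnerud()
print(linnerud.data.shape)    # (20, 3)
print(linnerud.target.shape)  # (20, 3)
print(linnerud.feature_names, linnerud.target_names)
```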
sklearn.datasets.load_sample_image(image_name) [source]
Load the numpy array of a single sample image. Read more in the User Guide. Parameters
image_name{china.jpg, flower.jpg}
The name of the sample image loaded. Returns
img3D array
The image as a numpy array: height x width x color. Examples >>> from sklearn.datasets import load_sample_image
>>> china = load_sample_image('china.jpg')
>>> china.dtype
dtype('uint8')
>>> china.shape
(427, 640, 3)
>>> flower = load_sample_image('flower.jpg')
>>> flower.dtype
dtype('uint8')
>>> flower.shape
(427, 640, 3) | sklearn.modules.generated.sklearn.datasets.load_sample_image#sklearn.datasets.load_sample_image |
sklearn.datasets.load_sample_images() [source]
Load sample images for image manipulation. Loads both china and flower. Read more in the User Guide. Returns
dataBunch
Dictionary-like object, with the following attributes.
imageslist of ndarray of shape (427, 640, 3)
The two sample images.
filenameslist
The filenames for the images.
DESCRstr
The full description of the dataset. Examples To load the data and visualize the images: >>> from sklearn.datasets import load_sample_images
>>> dataset = load_sample_images()
>>> len(dataset.images)
2
>>> first_img_data = dataset.images[0]
>>> first_img_data.shape
(427, 640, 3)
>>> first_img_data.dtype
dtype('uint8') | sklearn.modules.generated.sklearn.datasets.load_sample_images#sklearn.datasets.load_sample_images |
sklearn.datasets.load_svmlight_file(f, *, n_features=None, dtype=<class 'numpy.float64'>, multilabel=False, zero_based='auto', query_id=False, offset=0, length=-1) [source]
Load datasets in the svmlight / libsvm format into a sparse CSR matrix. This is a text-based format, with one sample per line. It does not store zero-valued features and hence is suitable for sparse datasets. The first element of each line can be used to store a target variable to predict. This format is used as the default format for both the svmlight and the libsvm command line programs. Parsing a text-based source can be expensive. When working repeatedly on the same dataset, it is recommended to wrap this loader with joblib.Memory.cache to store a memmapped backup of the CSR results of the first call and benefit from the near instantaneous loading of memmapped structures for the subsequent calls. In case the file contains a pairwise preference constraint (known as “qid” in the svmlight format) these are ignored unless the query_id parameter is set to True. These pairwise preference constraints can be used to constrain the combination of samples when using pairwise loss functions (as is the case in some learning to rank problems) so that only pairs with the same query_id value are considered. This implementation is written in Cython and is reasonably fast. However, a faster API-compatible loader is also available at: https://github.com/mblondel/svmlight-loader Parameters
fstr, file-like or int
(Path to) a file to load. If a path ends in “.gz” or “.bz2”, it will be uncompressed on the fly. If an integer is passed, it is assumed to be a file descriptor. A file-like or file descriptor will not be closed by this function. A file-like object must be opened in binary mode.
n_featuresint, default=None
The number of features to use. If None, it will be inferred. This argument is useful to load several files that are subsets of a bigger sliced dataset: each subset might not have examples of every feature, hence the inferred shape might vary from one slice to another. n_features is only required if offset or length are passed a non-default value.
dtypenumpy data type, default=np.float64
Data type of dataset to be loaded. This will be the data type of the output numpy arrays X and y.
multilabelbool, default=False
Samples may have several labels each (see https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html)
zero_basedbool or “auto”, default=”auto”
Whether column indices in f are zero-based (True) or one-based (False). If column indices are one-based, they are transformed to zero-based to match Python/NumPy conventions. If set to “auto”, a heuristic check is applied to determine this from the file contents. Both kinds of files occur “in the wild”, but they are unfortunately not self-identifying. Using “auto” or True should always be safe when no offset or length is passed. If offset or length are passed, the “auto” mode falls back to zero_based=True to avoid having the heuristic check yield inconsistent results on different segments of the file.
query_idbool, default=False
If True, will return the query_id array for each file.
offsetint, default=0
Ignore the offset first bytes by seeking forward, then discarding the following bytes up until the next new line character.
lengthint, default=-1
If strictly positive, stop reading any new line of data once the position in the file has reached the (offset + length) bytes threshold. Returns
Xscipy.sparse matrix of shape (n_samples, n_features)
yndarray of shape (n_samples,), or, in the multilabel case, a list of
tuples of length n_samples.
query_idarray of shape (n_samples,)
query_id for each sample. Only returned when query_id is set to True. See also
load_svmlight_files
Similar function for loading multiple files in this format, enforcing the same number of features/columns on all of them. Examples To use joblib.Memory to cache the svmlight file: from joblib import Memory
from sklearn.datasets import load_svmlight_file
mem = Memory("./mycache")
@mem.cache
def get_data():
data = load_svmlight_file("mysvmlightfile")
return data[0], data[1]
X, y = get_data() | sklearn.modules.generated.sklearn.datasets.load_svmlight_file#sklearn.datasets.load_svmlight_file |
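As a complement to the caching recipe, a small round-trip sketch (not from the original docstring) that uses dump_svmlight_file to create a file which load_svmlight_file can read back:

```python
# Round-trip sketch: write a tiny dense matrix in svmlight format,
# then load it back as a sparse CSR matrix.
import os
import tempfile

import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

X = np.array([[0.0, 1.5], [2.0, 0.0]])
y = np.array([0, 1])

path = os.path.join(tempfile.mkdtemp(), "tiny.svmlight")
dump_svmlight_file(X, y, path, zero_based=True)

X_loaded, y_loaded = load_svmlight_file(path, zero_based=True)
print(X_loaded.toarray())  # recovers X; zero entries are simply not stored
```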
sklearn.datasets.load_svmlight_files(files, *, n_features=None, dtype=<class 'numpy.float64'>, multilabel=False, zero_based='auto', query_id=False, offset=0, length=-1) [source]
Load a dataset from multiple files in SVMlight format. This function is equivalent to mapping load_svmlight_file over a list of files, except that the results are concatenated into a single, flat list and the sample vectors are constrained to all have the same number of features. In case the file contains a pairwise preference constraint (known as “qid” in the svmlight format) these are ignored unless the query_id parameter is set to True. These pairwise preference constraints can be used to constrain the combination of samples when using pairwise loss functions (as is the case in some learning to rank problems) so that only pairs with the same query_id value are considered. Parameters
filesarray-like, dtype=str, file-like or int
(Paths of) files to load. If a path ends in “.gz” or “.bz2”, it will be uncompressed on the fly. If an integer is passed, it is assumed to be a file descriptor. File-likes and file descriptors will not be closed by this function. File-like objects must be opened in binary mode.
n_featuresint, default=None
The number of features to use. If None, it will be inferred from the maximum column index occurring in any of the files. This can be set to a higher value than the actual number of features in any of the input files, but setting it to a lower value will cause an exception to be raised.
dtypenumpy data type, default=np.float64
Data type of dataset to be loaded. This will be the data type of the output numpy arrays X and y.
multilabelbool, default=False
Samples may have several labels each (see https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html)
zero_basedbool or “auto”, default=”auto”
Whether column indices in f are zero-based (True) or one-based (False). If column indices are one-based, they are transformed to zero-based to match Python/NumPy conventions. If set to “auto”, a heuristic check is applied to determine this from the file contents. Both kinds of files occur “in the wild”, but they are unfortunately not self-identifying. Using “auto” or True should always be safe when no offset or length is passed. If offset or length are passed, the “auto” mode falls back to zero_based=True to avoid having the heuristic check yield inconsistent results on different segments of the file.
query_idbool, default=False
If True, will return the query_id array for each file.
offsetint, default=0
Ignore the offset first bytes by seeking forward, then discarding the following bytes up until the next new line character.
lengthint, default=-1
If strictly positive, stop reading any new line of data once the position in the file has reached the (offset + length) bytes threshold. Returns
[X1, y1, …, Xn, yn]
where each (Xi, yi) pair is the result from load_svmlight_file(files[i]).
If query_id is set to True, this will return instead [X1, y1, q1,
…, Xn, yn, qn] where (Xi, yi, qi) is the result from
load_svmlight_file(files[i])
See also
load_svmlight_file
Notes When fitting a model to a matrix X_train and evaluating it against a matrix X_test, it is essential that X_train and X_test have the same number of features (X_train.shape[1] == X_test.shape[1]). This may not be the case if you load the files individually with load_svmlight_file. | sklearn.modules.generated.sklearn.datasets.load_svmlight_files#sklearn.datasets.load_svmlight_files |
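The note above can be illustrated with a short sketch (not from the original docstring): loading a train and a test file together guarantees matching column counts, even when the test file never uses the highest-index feature:

```python
# Sketch: write two svmlight files whose maximum column indices differ,
# then load them together so both get the same number of features.
import os
import tempfile

import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_files

tmp = tempfile.mkdtemp()
train_path = os.path.join(tmp, "train.svmlight")
test_path = os.path.join(tmp, "test.svmlight")
dump_svmlight_file(np.eye(3), np.arange(3), train_path, zero_based=True)
# The test matrix only has nonzeros in the first two feature columns.
dump_svmlight_file(np.eye(2, 3), np.arange(2), test_path, zero_based=True)

X_train, y_train, X_test, y_test = load_svmlight_files(
    [train_path, test_path], zero_based=True)
print(X_train.shape[1] == X_test.shape[1])  # columns are aligned
```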
sklearn.datasets.load_wine(*, return_X_y=False, as_frame=False) [source]
Load and return the wine dataset (classification). New in version 0.18. The wine dataset is a classic and very easy multi-class classification dataset.
Classes 3
Samples per class [59,71,48]
Samples total 178
Dimensionality 13
Features real, positive Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (178, 13)
The data matrix. If as_frame=True, data will be a pandas DataFrame. target: {ndarray, Series} of shape (178,)
The classification target. If as_frame=True, target will be a pandas Series. feature_names: list
The names of the dataset columns. target_names: list
The names of target classes. frame: DataFrame of shape (178, 14)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23. DESCR: str
The full description of the dataset.
(data, target)tuple if return_X_y is True
A copy of the UCI ML Wine Data Set is downloaded and modified to fit the
standard format from:
https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data
Examples Let’s say you are interested in the samples 10, 80, and 140, and want to know their class name. >>> from sklearn.datasets import load_wine
>>> data = load_wine()
>>> data.target[[10, 80, 140]]
array([0, 1, 2])
>>> list(data.target_names)
['class_0', 'class_1', 'class_2'] | sklearn.modules.generated.sklearn.datasets.load_wine#sklearn.datasets.load_wine |
sklearn.datasets.make_biclusters(shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None) [source]
Generate an array with constant block diagonal structure for biclustering. Read more in the User Guide. Parameters
shapeiterable of shape (n_rows, n_cols)
The shape of the result.
n_clustersint
The number of biclusters.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
minvalint, default=10
Minimum value of a bicluster.
maxvalint, default=100
Maximum value of a bicluster.
shufflebool, default=True
Shuffle the samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape shape
The generated array.
rowsndarray of shape (n_clusters, X.shape[0])
The indicators for cluster membership of each row.
colsndarray of shape (n_clusters, X.shape[1])
The indicators for cluster membership of each column. See also
make_checkerboard
References
1
Dhillon, I. S. (2001, August). Co-clustering documents and words using bipartite spectral graph partitioning. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 269-274). ACM. | sklearn.modules.generated.sklearn.datasets.make_biclusters#sklearn.datasets.make_biclusters |
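No example appears in the docstring; a minimal sketch (not from the original documentation) of the output shapes:

```python
# Sketch: generate a block-diagonal biclustering array and check the
# shapes of the row/column membership indicators.
from sklearn.datasets import make_biclusters

X, rows, cols = make_biclusters(shape=(300, 300), n_clusters=5,
                                noise=5, shuffle=False, random_state=0)
print(X.shape)     # (300, 300)
print(rows.shape)  # (5, 300): one row-membership indicator per bicluster
print(cols.shape)  # (5, 300)
```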
sklearn.datasets.make_blobs(n_samples=100, n_features=2, *, centers=None, cluster_std=1.0, center_box=(-10.0, 10.0), shuffle=True, random_state=None, return_centers=False) [source]
Generate isotropic Gaussian blobs for clustering. Read more in the User Guide. Parameters
n_samplesint or array-like, default=100
If int, it is the total number of points equally divided among clusters. If array-like, each element of the sequence indicates the number of samples per cluster. Changed in version v0.20: one can now pass an array-like to the n_samples parameter
n_featuresint, default=2
The number of features for each sample.
centersint or ndarray of shape (n_centers, n_features), default=None
The number of centers to generate, or the fixed center locations. If n_samples is an int and centers is None, 3 centers are generated. If n_samples is array-like, centers must be either None or an array of length equal to the length of n_samples.
cluster_stdfloat or array-like of float, default=1.0
The standard deviation of the clusters.
center_boxtuple of float (min, max), default=(-10.0, 10.0)
The bounding box for each cluster center when centers are generated at random.
shufflebool, default=True
Shuffle the samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary.
return_centersbool, default=False
If True, then return the centers of each cluster. New in version 0.23. Returns
Xndarray of shape (n_samples, n_features)
The generated samples.
yndarray of shape (n_samples,)
The integer labels for cluster membership of each sample.
centersndarray of shape (n_centers, n_features)
The centers of each cluster. Only returned if return_centers=True. See also
make_classification
A more intricate variant. Examples >>> from sklearn.datasets import make_blobs
>>> X, y = make_blobs(n_samples=10, centers=3, n_features=2,
... random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 0, 1, 0, 2, 2, 2, 1, 1, 0])
>>> X, y = make_blobs(n_samples=[3, 3, 4], centers=None, n_features=2,
... random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 1, 2, 0, 2, 2, 2, 1, 1, 0]) | sklearn.modules.generated.sklearn.datasets.make_blobs#sklearn.datasets.make_blobs |
sklearn.datasets.make_checkerboard(shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None) [source]
Generate an array with block checkerboard structure for biclustering. Read more in the User Guide. Parameters
shapetuple of shape (n_rows, n_cols)
The shape of the result.
n_clustersint or array-like of shape (n_row_clusters, n_column_clusters)
The number of row and column clusters.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
minvalint, default=10
Minimum value of a bicluster.
maxvalint, default=100
Maximum value of a bicluster.
shufflebool, default=True
Shuffle the samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape shape
The generated array.
rowsndarray of shape (n_clusters, X.shape[0])
The indicators for cluster membership of each row.
colsndarray of shape (n_clusters, X.shape[1])
The indicators for cluster membership of each column. See also
make_biclusters
References
1
Kluger, Y., Basri, R., Chang, J. T., & Gerstein, M. (2003). Spectral biclustering of microarray data: coclustering genes and conditions. Genome research, 13(4), 703-716. | sklearn.modules.generated.sklearn.datasets.make_checkerboard#sklearn.datasets.make_checkerboard |
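No example appears in the docstring; a minimal sketch (not from the original documentation). Note that the indicator arrays have one entry per (row cluster, column cluster) pair:

```python
# Sketch: generate a checkerboard-structured array; with n_clusters=(4, 3)
# there are 4 * 3 = 12 biclusters.
from sklearn.datasets import make_checkerboard

X, rows, cols = make_checkerboard(shape=(300, 300), n_clusters=(4, 3),
                                  noise=10, shuffle=False, random_state=0)
print(X.shape)     # (300, 300)
print(rows.shape)  # (12, 300)
print(cols.shape)  # (12, 300)
```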
sklearn.datasets.make_circles(n_samples=100, *, shuffle=True, noise=None, random_state=None, factor=0.8) [source]
Make a large circle containing a smaller circle in 2d. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters
n_samplesint or tuple of shape (2,), dtype=int, default=100
If int, it is the total number of points generated. For odd numbers, the inner circle will have one point more than the outer circle. If two-element tuple, number of points in outer circle and inner circle. Changed in version 0.23: Added two-element tuple.
shufflebool, default=True
Whether to shuffle the samples.
noisefloat, default=None
Standard deviation of Gaussian noise added to the data.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling and noise. Pass an int for reproducible output across multiple function calls. See Glossary.
factorfloat, default=.8
Scale factor between inner and outer circle in the range (0, 1). Returns
Xndarray of shape (n_samples, 2)
The generated samples.
yndarray of shape (n_samples,)
The integer labels (0 or 1) for class membership of each sample. | sklearn.modules.generated.sklearn.datasets.make_circles#sklearn.datasets.make_circles |
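No example appears in the docstring; a minimal sketch (not from the original documentation):

```python
# Sketch: two concentric noisy circles, inner circle at half the radius.
from sklearn.datasets import make_circles

X, y = make_circles(n_samples=100, noise=0.05, factor=0.5, random_state=0)
print(X.shape)         # (100, 2)
print(sorted(set(y)))  # [0, 1]
```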
sklearn.datasets.make_classification(n_samples=100, n_features=20, *, n_informative=2, n_redundant=2, n_repeated=0, n_classes=2, n_clusters_per_class=2, weights=None, flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=None) [source]
Generate a random n-class classification problem. This initially creates clusters of points normally distributed (std=1) about vertices of an n_informative-dimensional hypercube with sides of length 2*class_sep and assigns an equal number of clusters to each class. It introduces interdependence between these features and adds various types of further noise to the data. Without shuffling, X horizontally stacks features in the following order: the primary n_informative features, followed by n_redundant linear combinations of the informative features, followed by n_repeated duplicates, drawn randomly with replacement from the informative and redundant features. The remaining features are filled with random noise. Thus, without shuffling, all useful features are contained in the columns X[:, :n_informative + n_redundant + n_repeated]. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=20
The total number of features. These comprise n_informative informative features, n_redundant redundant features, n_repeated duplicated features and n_features-n_informative-n_redundant-n_repeated useless features drawn at random.
n_informativeint, default=2
The number of informative features. Each class is composed of a number of gaussian clusters each located around the vertices of a hypercube in a subspace of dimension n_informative. For each cluster, informative features are drawn independently from N(0, 1) and then randomly linearly combined within each cluster in order to add covariance. The clusters are then placed on the vertices of the hypercube.
n_redundantint, default=2
The number of redundant features. These features are generated as random linear combinations of the informative features.
n_repeatedint, default=0
The number of duplicated features, drawn randomly from the informative and the redundant features.
n_classesint, default=2
The number of classes (or labels) of the classification problem.
n_clusters_per_classint, default=2
The number of clusters per class.
weightsarray-like of shape (n_classes,) or (n_classes - 1,), default=None
The proportions of samples assigned to each class. If None, then classes are balanced. Note that if len(weights) == n_classes - 1, then the last class weight is automatically inferred. More than n_samples samples may be returned if the sum of weights exceeds 1. Note that the actual class proportions will not exactly match weights when flip_y isn’t 0.
flip_yfloat, default=0.01
The fraction of samples whose class is assigned randomly. Larger values introduce noise in the labels and make the classification task harder. Note that the default setting flip_y > 0 might lead to less than n_classes in y in some cases.
class_sepfloat, default=1.0
The factor multiplying the hypercube size. Larger values spread out the clusters/classes and make the classification task easier.
hypercubebool, default=True
If True, the clusters are put on the vertices of a hypercube. If False, the clusters are put on the vertices of a random polytope.
shiftfloat, ndarray of shape (n_features,) or None, default=0.0
Shift features by the specified value. If None, then features are shifted by a random value drawn in [-class_sep, class_sep].
scalefloat, ndarray of shape (n_features,) or None, default=1.0
Multiply features by the specified value. If None, then features are scaled by a random value drawn in [1, 100]. Note that scaling happens after shifting.
shufflebool, default=True
Shuffle the samples and the features.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The generated samples.
yndarray of shape (n_samples,)
The integer labels for class membership of each sample. See also
make_blobs
Simplified variant.
make_multilabel_classification
Unrelated generator for multilabel tasks. Notes The algorithm is adapted from Guyon [1] and was designed to generate the “Madelon” dataset. References
1
I. Guyon, “Design of experiments for the NIPS 2003 variable selection benchmark”, 2003. | sklearn.modules.generated.sklearn.datasets.make_classification#sklearn.datasets.make_classification |
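The parameter interplay described above can be exercised with a short sketch (not from the original docstring); the chosen values are illustrative:

```python
# Sketch: a 3-class problem with 4 informative, 2 redundant and
# 10 - 4 - 2 = 4 useless features.
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           n_redundant=2, n_classes=3, random_state=0)
print(X.shape)  # (200, 10)
print(y.shape)  # (200,)
```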
sklearn.datasets.make_friedman1(n_samples=100, n_features=10, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #1” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are independent features uniformly distributed on the interval [0, 1]. The output y is created according to the formula: y(X) = 10 * sin(pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4] + noise * N(0, 1).
Out of the n_features features, only 5 are actually used to compute y. The remaining features are independent of y. The number of features has to be >= 5. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=10
The number of features. Should be at least 5.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991.
2
L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996. | sklearn.modules.generated.sklearn.datasets.make_friedman1#sklearn.datasets.make_friedman1 |
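With noise=0.0 the documented formula can be verified directly; a short sketch (not from the original docstring):

```python
# Sketch: regenerate y from the documented Friedman #1 formula, which
# uses only the first five of the ten features.
import numpy as np
from sklearn.datasets import make_friedman1

X, y = make_friedman1(n_samples=100, n_features=10, noise=0.0,
                      random_state=0)
y_check = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
           + 20 * (X[:, 2] - 0.5) ** 2
           + 10 * X[:, 3] + 5 * X[:, 4])
print(np.allclose(y, y_check))  # True
```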
sklearn.datasets.make_friedman2(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #2” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are 4 independent features uniformly distributed on the intervals: 0 <= X[:, 0] <= 100,
40 * pi <= X[:, 1] <= 560 * pi,
0 <= X[:, 2] <= 1,
1 <= X[:, 3] <= 11.
The output y is created according to the formula: y(X) = (X[:, 0] ** 2 + (X[:, 1] * X[:, 2] - 1 / (X[:, 1] * X[:, 3])) ** 2) ** 0.5 + noise * N(0, 1).
Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 4)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991.
2
L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996. | sklearn.modules.generated.sklearn.datasets.make_friedman2#sklearn.datasets.make_friedman2 |
sklearn.datasets.make_friedman3(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #3” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are 4 independent features uniformly distributed on the intervals: 0 <= X[:, 0] <= 100,
40 * pi <= X[:, 1] <= 560 * pi,
0 <= X[:, 2] <= 1,
1 <= X[:, 3] <= 11.
The output y is created according to the formula: y(X) = arctan((X[:, 1] * X[:, 2] - 1 / (X[:, 1] * X[:, 3])) / X[:, 0]) + noise * N(0, 1).
Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 4)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991.
2
L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996. | sklearn.modules.generated.sklearn.datasets.make_friedman3#sklearn.datasets.make_friedman3 |
sklearn.datasets.make_gaussian_quantiles(*, mean=None, cov=1.0, n_samples=100, n_features=2, n_classes=3, shuffle=True, random_state=None) [source]
Generate isotropic Gaussian and label samples by quantile. This classification dataset is constructed by taking a multi-dimensional standard normal distribution and defining classes separated by nested concentric multi-dimensional spheres such that roughly equal numbers of samples are in each class (quantiles of the \(\chi^2\) distribution). Read more in the User Guide. Parameters
meanndarray of shape (n_features,), default=None
The mean of the multi-dimensional normal distribution. If None then use the origin (0, 0, …).
covfloat, default=1.0
The covariance matrix will be this value times the unit matrix. This dataset only produces symmetric normal distributions.
n_samplesint, default=100
The total number of points equally divided among classes.
n_featuresint, default=2
The number of features for each sample.
n_classesint, default=3
The number of classes.
shufflebool, default=True
Shuffle the samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The generated samples.
yndarray of shape (n_samples,)
The integer labels for quantile membership of each sample. Notes The dataset is from Zhu et al [1]. References
1
Zhu, H. Zou, S. Rosset, T. Hastie, “Multi-class AdaBoost”, 2009. | sklearn.modules.generated.sklearn.datasets.make_gaussian_quantiles#sklearn.datasets.make_gaussian_quantiles |
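No example appears in the docstring; a minimal sketch (not from the original documentation) illustrating the roughly equal class sizes:

```python
# Sketch: 99 samples split into 3 quantile classes should yield
# (close to) equal class counts.
import numpy as np
from sklearn.datasets import make_gaussian_quantiles

X, y = make_gaussian_quantiles(n_samples=99, n_features=2, n_classes=3,
                               random_state=0)
counts = np.bincount(y)
print(counts)  # roughly 33 samples per quantile class
```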
sklearn.datasets.make_hastie_10_2(n_samples=12000, *, random_state=None) [source]
Generates data for binary classification used in Hastie et al. 2009, Example 10.2. The ten features are standard independent Gaussian and the target y is defined by: y[i] = 1 if np.sum(X[i] ** 2) > 9.34 else -1
Read more in the User Guide. Parameters
n_samplesint, default=12000
The number of samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 10)
The input samples.
yndarray of shape (n_samples,)
The output values. See also
make_gaussian_quantiles
A generalization of this dataset approach. References
1
T. Hastie, R. Tibshirani and J. Friedman, “Elements of Statistical Learning Ed. 2”, Springer, 2009. | sklearn.modules.generated.sklearn.datasets.make_hastie_10_2#sklearn.datasets.make_hastie_10_2 |
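The labeling rule stated above can be checked directly; a short sketch (not from the original docstring):

```python
# Sketch: verify the Hastie 10.2 threshold rule on the squared norm.
import numpy as np
from sklearn.datasets import make_hastie_10_2

X, y = make_hastie_10_2(n_samples=1000, random_state=0)
print(X.shape)         # (1000, 10)
print(sorted(set(y)))  # [-1.0, 1.0]
print(np.array_equal(y, np.where((X ** 2).sum(axis=1) > 9.34, 1.0, -1.0)))
```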
sklearn.datasets.make_low_rank_matrix(n_samples=100, n_features=100, *, effective_rank=10, tail_strength=0.5, random_state=None) [source]
Generate a mostly low rank matrix with bell-shaped singular values. Most of the variance can be explained by a bell-shaped curve of width effective_rank: the low rank part of the singular values profile is: (1 - tail_strength) * exp(-1.0 * (i / effective_rank) ** 2)
The remaining singular values’ tail is fat, decreasing as: tail_strength * exp(-0.1 * i / effective_rank).
The low rank part of the profile can be considered the structured signal part of the data while the tail can be considered the noisy part of the data that cannot be summarized by a low number of linear components (singular vectors). This kind of singular profile is often seen in practice, for instance:
gray level pictures of faces
TF-IDF vectors of text documents crawled from the web
Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=100
The number of features.
effective_rankint, default=10
The approximate number of singular vectors required to explain most of the data by linear combinations.
tail_strengthfloat, default=0.5
The relative importance of the fat noisy tail of the singular values profile. The value should be between 0 and 1.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The matrix. | sklearn.modules.generated.sklearn.datasets.make_low_rank_matrix#sklearn.datasets.make_low_rank_matrix |
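No example appears in the docstring; a minimal sketch (not from the original documentation) confirming that a weak tail concentrates the spectrum in the leading singular values:

```python
# Sketch: with tail_strength=0.01 and effective_rank=5, the first few
# singular values should carry most of the spectral mass.
import numpy as np
from sklearn.datasets import make_low_rank_matrix

X = make_low_rank_matrix(n_samples=50, n_features=25, effective_rank=5,
                         tail_strength=0.01, random_state=0)
s = np.linalg.svd(X, compute_uv=False)
print(s[:10].sum() / s.sum())  # close to 1 for a weak tail
```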
sklearn.datasets.make_moons(n_samples=100, *, shuffle=True, noise=None, random_state=None) [source]
Make two interleaving half circles. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters
n_samplesint or tuple of shape (2,), dtype=int, default=100
If int, the total number of points generated. If two-element tuple, number of points in each of two moons. Changed in version 0.23: Added two-element tuple.
shufflebool, default=True
Whether to shuffle the samples.
noisefloat, default=None
Standard deviation of Gaussian noise added to the data.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling and noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 2)
The generated samples.
yndarray of shape (n_samples,)
The integer labels (0 or 1) for class membership of each sample. | sklearn.modules.generated.sklearn.datasets.make_moons#sklearn.datasets.make_moons |
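A short sketch (sample counts chosen arbitrarily) of both calling conventions for n_samples:

```python
import numpy as np
from sklearn.datasets import make_moons

# Total of 100 points, jittered with a little Gaussian noise.
X, y = make_moons(n_samples=100, noise=0.05, random_state=0)
print(X.shape, y.shape)     # (100, 2) (100,)
print(np.unique(y))         # [0 1]

# A two-element tuple fixes the size of each moon (scikit-learn >= 0.23).
X2, y2 = make_moons(n_samples=(60, 40), random_state=0)
print(np.bincount(y2))      # [60 40]
```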
sklearn.datasets.make_multilabel_classification(n_samples=100, n_features=20, *, n_classes=5, n_labels=2, length=50, allow_unlabeled=True, sparse=False, return_indicator='dense', return_distributions=False, random_state=None) [source]
Generate a random multilabel classification problem. For each sample, the generative process is:
pick the number of labels: n ~ Poisson(n_labels); n times, choose a class c: c ~ Multinomial(theta); pick the document length: k ~ Poisson(length); k times, choose a word: w ~ Multinomial(theta_c). In the above process, rejection sampling is used to make sure that n is never zero or more than n_classes, and that the document length is never zero. Likewise, we reject classes which have already been chosen. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=20
The total number of features.
n_classesint, default=5
The number of classes of the classification problem.
n_labelsint, default=2
The average number of labels per instance. More precisely, the number of labels per sample is drawn from a Poisson distribution with n_labels as its expected value, but samples are bounded (using rejection sampling) by n_classes, and must be nonzero if allow_unlabeled is False.
lengthint, default=50
The sum of the features (number of words if documents) is drawn from a Poisson distribution with this expected value.
allow_unlabeledbool, default=True
If True, some instances might not belong to any class.
sparsebool, default=False
If True, return a sparse feature matrix. New in version 0.17: parameter to allow sparse output.
return_indicator{‘dense’, ‘sparse’} or False, default=’dense’
If 'dense' return Y in the dense binary indicator format. If 'sparse' return Y in the sparse binary indicator format. False returns a list of lists of labels.
return_distributionsbool, default=False
If True, return the prior class probability and conditional probabilities of features given classes, from which the data was drawn.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The generated samples.
Y{ndarray, sparse matrix} of shape (n_samples, n_classes)
The label sets. Sparse matrix should be of CSR format.
p_cndarray of shape (n_classes,)
The probability of each class being drawn. Only returned if return_distributions=True.
p_w_cndarray of shape (n_features, n_classes)
The probability of each feature being drawn given each class. Only returned if return_distributions=True. | sklearn.modules.generated.sklearn.datasets.make_multilabel_classification#sklearn.datasets.make_multilabel_classification |
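As a quick illustration of the return values (parameter values are arbitrary), the indicator matrix is binary and the returned distributions are properly normalized:

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification

X, Y, p_c, p_w_c = make_multilabel_classification(
    n_samples=50, n_features=20, n_classes=5, n_labels=2,
    return_distributions=True, random_state=0)

print(X.shape, Y.shape)       # (50, 20) (50, 5)
print(np.unique(Y))           # dense binary indicator: [0 1]
# The class prior and each per-class feature distribution sum to 1.
print(np.isclose(p_c.sum(), 1.0))            # True
print(np.allclose(p_w_c.sum(axis=0), 1.0))   # True
```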
sklearn.datasets.make_regression(n_samples=100, n_features=100, *, n_informative=10, n_targets=1, bias=0.0, effective_rank=None, tail_strength=0.5, noise=0.0, shuffle=True, coef=False, random_state=None) [source]
Generate a random regression problem. The input set can either be well conditioned (by default) or have a low-rank, fat-tail singular profile. See make_low_rank_matrix for more details. The output is generated by applying a (potentially biased) random linear regression model with n_informative nonzero regressors to the previously generated input, plus centered Gaussian noise with adjustable scale. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=100
The number of features.
n_informativeint, default=10
The number of informative features, i.e., the number of features used to build the linear model used to generate the output.
n_targetsint, default=1
The number of regression targets, i.e., the dimension of the y output vector associated with a sample. By default, the output is a scalar.
biasfloat, default=0.0
The bias term in the underlying linear model.
effective_rankint, default=None
If not None, the approximate number of singular vectors required to explain most of the input data by linear combinations; using this kind of singular spectrum in the input allows the generator to reproduce the correlations often observed in practice. If None, the input set is well conditioned, centered and Gaussian with unit variance.
tail_strengthfloat, default=0.5
The relative importance of the fat noisy tail of the singular values profile if effective_rank is not None. The value should be between 0 and 1.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
shufflebool, default=True
Shuffle the samples and the features.
coefbool, default=False
If True, the coefficients of the underlying linear model are returned.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The input samples.
yndarray of shape (n_samples,) or (n_samples, n_targets)
The output values.
coefndarray of shape (n_features,) or (n_features, n_targets)
The coefficient of the underlying linear model. It is returned only if coef is True. | sklearn.modules.generated.sklearn.datasets.make_regression#sklearn.datasets.make_regression |
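A sketch (arbitrary sizes) of the noise-free case, where the targets are an exact linear function of the inputs and only n_informative coefficients are nonzero:

```python
import numpy as np
from sklearn.datasets import make_regression

# With noise=0.0 the targets satisfy y = X @ coef + bias exactly.
X, y, coef = make_regression(n_samples=100, n_features=10, n_informative=3,
                             bias=2.0, noise=0.0, coef=True, random_state=0)

print(X.shape, y.shape, coef.shape)    # (100, 10) (100,) (10,)
print(int((coef != 0).sum()))          # 3 informative coefficients
print(np.allclose(y, X @ coef + 2.0))  # True
```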
sklearn.datasets.make_sparse_coded_signal(n_samples, *, n_components, n_features, n_nonzero_coefs, random_state=None) [source]
Generate a signal as a sparse combination of dictionary elements. Returns a matrix Y = DX, such that D is (n_features, n_components), X is (n_components, n_samples) and each column of X has exactly n_nonzero_coefs non-zero elements. Read more in the User Guide. Parameters
n_samplesint
Number of samples to generate
n_componentsint
Number of components in the dictionary
n_featuresint
Number of features of the dataset to generate
n_nonzero_coefsint
Number of active (non-zero) coefficients in each sample
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
datandarray of shape (n_features, n_samples)
The encoded signal (Y).
dictionaryndarray of shape (n_features, n_components)
The dictionary with normalized components (D).
codendarray of shape (n_components, n_samples)
The sparse code such that each column of this matrix has exactly n_nonzero_coefs non-zero items (X). | sklearn.modules.generated.sklearn.datasets.make_sparse_coded_signal#sklearn.datasets.make_sparse_coded_signal |
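A sketch checking the documented factorization. The array orientation follows this page; recent scikit-learn releases return the same arrays transposed, so the code below detects the layout from the shapes (the sizes themselves are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_sparse_coded_signal

Y, D, X = make_sparse_coded_signal(n_samples=30, n_components=15,
                                   n_features=20, n_nonzero_coefs=5,
                                   random_state=0)

if D.shape[0] == Y.shape[0]:   # documented layout: Y = D @ X
    recon = D @ X
    nnz = (X != 0).sum(axis=0)  # non-zeros per column of the code
else:                           # newer releases: arrays transposed, Y = X @ D
    recon = X @ D
    nnz = (X != 0).sum(axis=1)  # non-zeros per row of the code

print(np.allclose(Y, recon))    # True: Y is exactly the product
print(np.all(nnz == 5))         # True: exactly n_nonzero_coefs per sample
```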
sklearn.datasets.make_sparse_spd_matrix(dim=1, *, alpha=0.95, norm_diag=False, smallest_coef=0.1, largest_coef=0.9, random_state=None) [source]
Generate a sparse symmetric positive definite matrix. Read more in the User Guide. Parameters
dimint, default=1
The size of the random matrix to generate.
alphafloat, default=0.95
The probability that a coefficient is zero (see notes). Larger values enforce more sparsity. The value should be in the range [0, 1].
norm_diagbool, default=False
Whether to normalize the output matrix to make the leading diagonal elements all 1.
smallest_coeffloat, default=0.1
The value of the smallest coefficient between 0 and 1.
largest_coeffloat, default=0.9
The value of the largest coefficient between 0 and 1.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
precsparse matrix of shape (dim, dim)
The generated matrix. See also
make_spd_matrix
Notes The sparsity is actually imposed on the Cholesky factor of the matrix. Thus alpha does not translate directly into the filling fraction of the matrix itself. | sklearn.modules.generated.sklearn.datasets.make_sparse_spd_matrix#sklearn.datasets.make_sparse_spd_matrix
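A small sketch (size and alpha chosen arbitrarily) verifying the symmetric positive definite property; the size is passed positionally because the keyword on this page, dim, was renamed in later releases:

```python
import numpy as np
from sklearn.datasets import make_sparse_spd_matrix

# 10 x 10 sparse SPD matrix; alpha=0.95 makes most coefficients zero.
A = np.asarray(make_sparse_spd_matrix(10, alpha=0.95, random_state=0))

print(A.shape)                            # (10, 10)
print(np.allclose(A, A.T))                # True: symmetric
print(np.all(np.linalg.eigvalsh(A) > 0))  # True: positive definite
```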
sklearn.datasets.make_sparse_uncorrelated(n_samples=100, n_features=10, *, random_state=None) [source]
Generate a random regression problem with sparse uncorrelated design. This dataset is described in Celeux et al. [1] as: X ~ N(0, 1)
y(X) = X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] - 1.5 * X[:, 3]
Only the first 4 features are informative. The remaining features are useless. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=10
The number of features.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
G. Celeux, M. El Anbari, J.-M. Marin, C. P. Robert, “Regularization in regression: comparing Bayesian and frequentist methods in a poorly informative situation”, 2009. | sklearn.modules.generated.sklearn.datasets.make_sparse_uncorrelated#sklearn.datasets.make_sparse_uncorrelated |
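A sketch (arbitrary sizes) showing that an ordinary least-squares fit approximately recovers the stated coefficients [1, 2, -2, -1.5] on the first four features; y also carries unit Gaussian noise, so the recovery is only approximate:

```python
import numpy as np
from sklearn.datasets import make_sparse_uncorrelated

X, y = make_sparse_uncorrelated(n_samples=200, n_features=10, random_state=0)
print(X.shape, y.shape)   # (200, 10) (200,)

# y is centered on X[:, 0] + 2*X[:, 1] - 2*X[:, 2] - 1.5*X[:, 3], so least
# squares recovers roughly [1, 2, -2, -1.5] followed by near-zero values.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef[:4], 1))
```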
sklearn.datasets.make_spd_matrix(n_dim, *, random_state=None) [source]
Generate a random symmetric, positive-definite matrix. Read more in the User Guide. Parameters
n_dimint
The matrix dimension.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_dim, n_dim)
The random symmetric, positive-definite matrix. See also
make_sparse_spd_matrix | sklearn.modules.generated.sklearn.datasets.make_spd_matrix#sklearn.datasets.make_spd_matrix |
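A minimal sketch (dimension chosen arbitrarily) checking the two defining properties of the output:

```python
import numpy as np
from sklearn.datasets import make_spd_matrix

A = make_spd_matrix(5, random_state=0)

print(A.shape)                            # (5, 5)
print(np.allclose(A, A.T))                # True: symmetric
print(np.all(np.linalg.eigvalsh(A) > 0))  # True: positive definite
```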
sklearn.datasets.make_swiss_roll(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate a swiss roll dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the Swiss Roll.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 3)
The points.
tndarray of shape (n_samples,)
The univariate position of the sample according to the main dimension of the points in the manifold. Notes The algorithm is from Marsland [1]. References
1
S. Marsland, “Machine Learning: An Algorithmic Perspective”, Chapter 10, 2009. http://seat.massey.ac.nz/personal/s.r.marsland/Code/10/lle.py | sklearn.modules.generated.sklearn.datasets.make_swiss_roll#sklearn.datasets.make_swiss_roll |
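A sketch (arbitrary sample count) of the returned arrays. The spiral check relies on the generator's internal parameterization (an implementation detail of this version, not a documented guarantee):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll

X, t = make_swiss_roll(n_samples=500, noise=0.0, random_state=0)
print(X.shape, t.shape)   # (500, 3) (500,)

# With noise=0.0 each point sits exactly on the roll: the first and third
# coordinates trace t*cos(t) and t*sin(t); the second spans the roll's width.
print(np.allclose(X[:, 0], t * np.cos(t)))  # True
print(np.allclose(X[:, 2], t * np.sin(t)))  # True
```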
sklearn.datasets.make_s_curve(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate an S curve dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the S curve.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 3)
The points.
tndarray of shape (n_samples,)
The univariate position of the sample according to the main dimension of the points in the manifold. | sklearn.modules.generated.sklearn.datasets.make_s_curve#sklearn.datasets.make_s_curve |
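A sketch (arbitrary sample count) analogous to the Swiss Roll example; the closed-form check relies on the generator's internal S-curve parameterization (an implementation detail of this version):

```python
import numpy as np
from sklearn.datasets import make_s_curve

X, t = make_s_curve(n_samples=500, noise=0.0, random_state=0)
print(X.shape, t.shape)   # (500, 3) (500,)

# With no noise, x = sin(t) and z = sign(t) * (cos(t) - 1), with y the width.
print(np.allclose(X[:, 0], np.sin(t)))                     # True
print(np.allclose(X[:, 2], np.sign(t) * (np.cos(t) - 1)))  # True
```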
class sklearn.decomposition.DictionaryLearning(n_components=None, *, alpha=1, max_iter=1000, tol=1e-08, fit_algorithm='lars', transform_algorithm='omp', transform_n_nonzero_coefs=None, transform_alpha=None, n_jobs=None, code_init=None, dict_init=None, verbose=False, split_sign=False, random_state=None, positive_code=False, positive_dict=False, transform_max_iter=1000) [source]
Dictionary learning. Finds a dictionary (a set of atoms) that can best be used to represent data using a sparse code. Solves the optimization problem:
(U^*, V^*) = argmin_{(U,V)} 0.5 * || X - U V ||_2^2 + alpha * || U ||_1
subject to || V_k ||_2 = 1 for all 0 <= k < n_components
Read more in the User Guide. Parameters
n_componentsint, default=n_features
Number of dictionary elements to extract.
alphafloat, default=1.0
Sparsity controlling parameter.
max_iterint, default=1000
Maximum number of iterations to perform.
tolfloat, default=1e-8
Tolerance for numerical error.
fit_algorithm{‘lars’, ‘cd’}, default=’lars’
'lars': uses the least angle regression method to solve the lasso problem (lars_path);
'cd': uses the coordinate descent method to compute the Lasso solution (Lasso). Lars will be faster if the estimated components are sparse. New in version 0.17: cd coordinate descent method to improve speed.
transform_algorithm{‘lasso_lars’, ‘lasso_cd’, ‘lars’, ‘omp’, ‘threshold’}, default=’omp’
Algorithm used to transform the data:
'lars': uses the least angle regression method (lars_path);
'lasso_lars': uses Lars to compute the Lasso solution.
'lasso_cd': uses the coordinate descent method to compute the Lasso solution (Lasso). 'lasso_lars' will be faster if the estimated components are sparse.
'omp': uses orthogonal matching pursuit to estimate the sparse solution.
'threshold': squashes to zero all coefficients less than alpha from the projection dictionary * X'. New in version 0.17: lasso_cd coordinate descent method to improve speed.
transform_n_nonzero_coefsint, default=None
Number of nonzero coefficients to target in each column of the solution. This is only used by algorithm='lars' and algorithm='omp' and is overridden by alpha in the omp case. If None, then transform_n_nonzero_coefs=int(n_features / 10).
transform_alphafloat, default=None
If algorithm='lasso_lars' or algorithm='lasso_cd', alpha is the penalty applied to the L1 norm. If algorithm='threshold', alpha is the absolute value of the threshold below which coefficients will be squashed to zero. If algorithm='omp', alpha is the tolerance parameter: the value of the reconstruction error targeted. In this case, it overrides n_nonzero_coefs. If None, defaults to 1.0.
n_jobsint or None, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
code_initndarray of shape (n_samples, n_components), default=None
Initial value for the code, for warm restart.
dict_initndarray of shape (n_components, n_features), default=None
Initial values for the dictionary, for warm restart.
verbosebool, default=False
To control the verbosity of the procedure.
split_signbool, default=False
Whether to split the sparse feature vector into the concatenation of its negative part and its positive part. This can improve the performance of downstream classifiers.
random_stateint, RandomState instance or None, default=None
Used for initializing the dictionary when dict_init is not specified, randomly shuffling the data when shuffle is set to True, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
positive_codebool, default=False
Whether to enforce positivity when finding the code. New in version 0.20.
positive_dictbool, default=False
Whether to enforce positivity when finding the dictionary. New in version 0.20.
transform_max_iterint, default=1000
Maximum number of iterations to perform if algorithm='lasso_cd' or 'lasso_lars'. New in version 0.22. Attributes
components_ndarray of shape (n_components, n_features)
dictionary atoms extracted from the data
error_array
vector of errors at each iteration
n_iter_int
Number of iterations run. See also
SparseCoder
MiniBatchDictionaryLearning
SparsePCA
MiniBatchSparsePCA
Notes References: J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009: Online dictionary learning for sparse coding (https://www.di.ens.fr/sierra/pdfs/icml09.pdf) Examples >>> import numpy as np
>>> from sklearn.datasets import make_sparse_coded_signal
>>> from sklearn.decomposition import DictionaryLearning
>>> X, dictionary, code = make_sparse_coded_signal(
... n_samples=100, n_components=15, n_features=20, n_nonzero_coefs=10,
... random_state=42,
... )
>>> dict_learner = DictionaryLearning(
... n_components=15, transform_algorithm='lasso_lars', random_state=42,
... )
>>> X_transformed = dict_learner.fit_transform(X)
We can check the level of sparsity of X_transformed: >>> np.mean(X_transformed == 0)
0.88...
We can compare the average squared euclidean norm of the reconstruction error of the sparse coded signal relative to the squared euclidean norm of the original signal: >>> X_hat = X_transformed @ dict_learner.components_
>>> np.mean(np.sum((X_hat - X) ** 2, axis=1) / np.sum(X ** 2, axis=1))
0.07...
Methods
fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Encode the data as a sparse combination of the dictionary atoms.
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the object itself.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Encode the data as a sparse combination of the dictionary atoms. Coding method is determined by the object parameter transform_algorithm. Parameters
Xndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of features as the data used to train the model. Returns
X_newndarray of shape (n_samples, n_components)
Transformed data. | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning |
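To complement the docstring example above, a small sketch (random data, arbitrary sizes) of the unit-norm atom constraint and of capping per-sample sparsity through transform_n_nonzero_coefs with the 'omp' transform algorithm:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(50, 8)

learner = DictionaryLearning(n_components=5, transform_algorithm='omp',
                             transform_n_nonzero_coefs=2, max_iter=20,
                             random_state=0)
code = learner.fit_transform(X)

print(code.shape)                            # (50, 5)
# OMP keeps at most transform_n_nonzero_coefs atoms per sample.
print(np.all((code != 0).sum(axis=1) <= 2))  # True
# The learned atoms satisfy the || V_k ||_2 = 1 constraint.
print(np.allclose(np.linalg.norm(learner.components_, axis=1), 1.0))  # True
```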
fit(X, y=None) [source]
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the object itself. | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning.fit |
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.decomposition.dictionarylearning#sklearn.decomposition.DictionaryLearning.set_params |